Horseshoes and spiral waves: capturing the 3D flow induced by a low-mass planet analytically
Joshua J. Brown, Gordon I. Ogilvie
arXiv: http://arxiv.org/abs/2409.02687v1 [astro-ph.EP], 4 September 2024
§ ABSTRACT
The key difficulty faced by 2D models for planet-disc interaction is in appropriately accounting for the impact of the disc's vertical structure on the dynamics. 3D effects are often mimicked via softening of the planet's potential; however, the planet-induced flow and torques often depend strongly on the choice of softening length. We show that for a linear adiabatic flow perturbing a vertically isothermal disc, there is a particular vertical average of the 3D equations of motion which exactly reproduces 2D fluid equations for arbitrary adiabatic index. There is a strong connection here with the Lubow-Pringle 2D mode of the disc. Correspondingly, we find a simple, general prescription for the consistent treatment of planetary potentials embedded within ‘2D’ discs. The flow induced by a low-mass planet involves large-scale excited spiral density waves which transport angular momentum radially away from the planet, and ‘horseshoe streamlines’ within the co-orbital region. We derive simple linear equations governing the flow which locally capture both effects faithfully simultaneously. We present an accurate co-orbital flow solution allowing for inexpensive future study of corotation torques, and predict the vertical structure of the co-orbital flow and horseshoe region width for different values of adiabatic index, as well as the vertical dependence of the initial shock location. We find strong agreement with the flow computed in 3D numerical simulations, and with 3D one-sided Lindblad torque estimates, which are a factor of 2 to 3 times lower than values from previous 2D simulations.
planet–disc interactions – protoplanetary discs – accretion, accretion discs – methods: analytical – hydrodynamics – waves
§ INTRODUCTION
Discs of gas in orbital motion around a massive central body are found in many astronomical contexts. Objects embedded within these discs, for example planets embedded in circumstellar discs, or stellar-mass black holes within AGN discs, are in general subject to strong gravitational interactions with the gas and dust which comprise the discs. Indeed, signatures such as gaps, rings and spirals observed in discs' emission are highly suggestive of large embedded planets <cit.>. These interactions lead to angular momentum exchange between the object and the disc, causing the object to migrate.
Type I migration concerns low-mass embedded objects which excite predominantly linear perturbations to the flow in the disc. Not only is it a crucial ingredient in the theory of planet formation and population synthesis <cit.>, but it may also play a key role in understanding the rapid merger rate of stellar-mass black hole (sBH) binaries inferred from LIGO detections <cit.>. Whilst the migration of sBHs within very thin AGN discs is arguably better suited to the type I parameter regime, the development of type I migration theory has been driven by the need to understand and predict the dynamics of young planets.
Linear two-dimensional analyses from early work <cit.> presented a picture in which the planet exchanged angular momentum with the disc via two mechanisms: some angular momentum was transported away from the planet by density waves excited at Lindblad resonances, and some separately `absorbed' into corotation resonances. The torque depends only on the properties of the disc and their gradients near the planet, since it is exerted predominantly locally. Type I torque formulae therefore take very simple forms. <cit.>, and more recently <cit.>, extended the 2D linear calculation to 3D locally isothermal discs, arriving at a torque formula in good agreement with the numerical simulations of <cit.>, which drives rapid inward migration on a time-scale of ∼ 10^5 years for an Earth-mass planet.
This unopposed rapid inward migration is worrying; it acts as a potential obstacle to the feasibility of the core accretion model, in which giant gas planets form via the accretion of gas onto a solid core on a time-scale of ≳ 10^6 years <cit.>. Furthermore, since the lifetime of the disc is ≲ 10^7 years, it suggests at surface level an additional paradoxical existential threat for extra-solar Earth-like planets. Indeed, planetary population synthesis models have struggled to reproduce the observed distribution of semi-major axes among detected low-mass exoplanets from the linear type I torque estimates <cit.>.
However, as pointed out by <cit.>, the linear treatment of the corotation resonance is inappropriate, unless the disc is sufficiently viscous. Close to corotation, the radial displacement of fluid elements becomes non-linear, as they execute ‘horseshoe turns’ (akin to the behaviour of test particles seen in <cit.> and <cit.>), exchanging angular momentum with the planet as they are excited onto inner or outer orbits. An asymmetry between the potential vorticity (PV), or vortensity, on incident inner and outer orbits leads to a net torque comparable to that of the Lindblad torque <cit.>, not captured in the linear theory.
In addition, the longer time-scale associated with this horseshoe motion means that additional physical effects become relevant, each with an associated strong torque with potential to resolve the aforementioned `paradoxes'. This has motivated a wealth of research, both semi-analytical and computational, to quantify the dependence of the corotation torque on diffusive, radiative and migration-feedback processes, among others <cit.>.
The torques which arise often depend strongly on the precise geometry of the coorbital flow, and correspondingly the value of the softening length, b (often used to modify the planetary potential to mimic 3D effects in 2D disc models). The variation of torque components can be by more than a factor of 3 for different reasonable choices of b (e.g. <cit.>, figure 5); such varied behaviour for different softening lengths is an issue not limited to the study of planetary torques <cit.>. In this paper we aim to accurately describe the flow induced by the planet, on top of which torque-inducing physics may be studied. Though there is much to be achieved and understood here, the coorbital flow has received little analytical attention since <cit.>, despite their analysis suffering some limitations (which we point out in section <ref>).
A key objective of this work is to address the problem of capturing 3D disc physics with 2D equations. In particular, for low-mass objects embedded within vertically isothermal discs which permit adiabatic perturbations, we show that there exists a particular vertical average of the equations of motion which reproduces the commonly adopted 2D fluid equations. This averaging procedure is closely related to the operator which projects onto the Lubow-Pringle 2D mode <cit.>. This paper is concerned with the non-axisymmetric generalisation of this mode. The 2D mode is so-called as it has the property v_z = 0, but in general it is z-dependent. It is a member of the wider family of 3D disc modes and describes the spiral wake as well as the horseshoe motion within the coorbital region. Most importantly, our averaging process yields a simple, general prescription for the consistent treatment of planetary potentials embedded within `2D' discs, as well as an interpretation for 2D models.
A quantitative description and understanding of the 2D mode of the planet-induced flow, which captures the impact of the disc's vertical extent on the spiral wake, also has important observational applications. Detection of disc-embedded exoplanets via their kinematic signatures is a promising method for finding very young protoplanets during their formation stages <cit.>. As <cit.> point out, 3D effects impact the precise structure of the planet's spiral wake, with important consequences for the observational analysis of this signature, and consequently planet mass estimates.
In section <ref>, we project the 3D equations of motion onto this 2D flow. We derive (without further approximation) independent, second order, linear equations governing the 2D flow in section <ref>. In section <ref>, we present the flow solution. We discuss our findings in section <ref>, and draw our conclusions in section <ref>.
§ GOVERNING EQUATIONS
For the remainder of the paper, we will assume two quantities to be small. Firstly, we assume the disc's aspect ratio h = H/r to be much smaller than 1. Secondly, we assume the planet's mass M_p to be much smaller than the `thermal mass' <cit.>. That is,
q ≡M_p/M_⋆≪M_th/M_⋆ = h^3,
where M_⋆ is the mass of the central star. This ensures that the equations governing the flow are almost everywhere well approximated as linear (this may be verified post hoc), and that the disc's vertical structure is almost everywhere determined by the star's gravity. We therefore have 2 small parameters, namely h and q/h^3; we'll exploit the fact that both are small to make analytical progress.
§.§ Unperturbed state
Before continuing, it's important to define precisely the model for the disc with which our planet will interact. We will consider the planet to introduce a perturbation to the background disc structure outlined below.
Our background disc is steady, axisymmetric and comprised entirely of an ideal gas. It is in orbit about a star of mass M_⋆, fixed at the centre of our frame, and is inviscid and non-self-gravitating to a first approximation. The gas which comprises it solves the steady Euler equations and ideal gas equation,
u_0·∇u_0 = - 1/ρ_0∇ p_0 - ∇Φ_0,
∇·(ρ_0 u_0) = 0,
p_0/ρ_0 = k_B/μ̅T_0,
where u_0, ρ_0, p_0 and T_0 are the velocity, density, pressure and temperature of the background disc. In addition, k_B is Boltzmann's constant, and μ̅ the mean molecular mass. We introduce the cylindrical coordinates (r, θ, z), with the z-axis normal to the plane of the disc, so that
Φ_0 = -G M_⋆/√(r^2+z^2)
is the gravitational potential of the central star, and the velocity of the background state may be written as u_0 = rΩ(r,z) e_θ.
We assume the background disc's temperature, T_0, to be a prescribed function of r only, the result of a relatively fast thermal relaxation in the vertical direction compared to the long time-scale of the evolution of the disc, and ignoring the hotter irradiated outer layers. We remark that the background disc need only be `locally isothermal' to a first approximation for the analysis performed in this paper to hold. Though this background state is idealised, the calculation we perform is robust: we define perturbations relative to the exact background state (though we are perhaps ignorant of its precise structure), but only need knowledge of its leading order behaviour in order to evaluate these perturbations.
Furthermore, the isothermal prescription applies only to the background state: we'll consider the planet to perturb this background state adiabatically. This represents a good approximation provided the time-scale for thermal relaxation is longer than the planet's orbital period. We discuss the validity of this assumption in more detail in section <ref>. We define the isothermal sound speed, c_s(r), via
c_s^2 = k_B/μ̅T_0.
We define the scale height, H, and aspect ratio, h via
h ≡H/r≡c_s/rΩ_K,
where Ω_K is the Keplerian angular frequency, satisfying
G M_⋆/r^3 = Ω_K^2.
We assume h ≪ 1, so that the disc is thin. The angular velocity of the gas in the unperturbed disc satisfies
Ω = Ω_K(1 + 𝒪(h^2)).
The density and pressure then satisfy
ρ_0 = p_0/c_s^2 = Σ_0(r)/(√(2π) H) exp(-z^2/(2H^2)),
where Σ_0(r) is the surface density.
§.§ Perturbation equations
We now introduce a planet of mass M_p on a Keplerian circular orbit of radius r_p and angular frequency Ω_p = Ω_K(r_p)√(1+q) to the background disc. We consider the perturbation problem in the frame corotating with the planet, and assume further that the flow in this frame is steady. The fluid velocity in this frame is v = u - rΩ_p e_θ, and the relevant Euler equations become
v·∇v + 2Ω_p ×v = - 1/ρ∇ p - ∇Φ_t - ∇Ψ_p,
∇·(ρv) = 0,
v·∇(p ρ^-γ) = 0,
where Φ_t = Φ_0(r,z) - 1/2(r^2- 3r_p^2)Ω_p^2 is the tidal potential, and the planet's potential, including the indirect term arising from the acceleration of the frame centred on the star, is given by
Ψ_p = - G M_p/√(r_p^2 + r^2 - 2 r r_p cosθ + z^2) + q r_p Ω_p^2 r cosθ.
We now define local quasi-Cartesian coordinates x = r - r_p, y = r_pθ, and perturbed variables
ρ' = ρ - ρ_0,
p' = p - p_0,
v'_x = v_r,
v'_y = v_θ - r(Ω - Ω_p) = u_θ - r Ω,
v'_z = v_z.
It's useful to define further the time-derivative following the orbital shear flow
D = -(3/2) x Ω_p ∂_y,
and upon subtraction of the background state solution from the Euler equations (<ref>), we find the balance at the next order in h (assuming x = 𝒪(H) and y = 𝒪(H)) is given by
D v'_x - 2Ω_p v'_y + ∂_x p'/ρ_p = -∂_x ϕ_p,
D v'_y + (1/2)Ω_p v'_x + ∂_y p'/ρ_p = -∂_y ϕ_p,
D v'_z + Ω_p^2 z ρ'/ρ_p + ∂_z p'/ρ_p = -∂_z ϕ_p,
D ρ' + ρ_p ∂_x v'_x + ρ_p ∂_y v'_y + ∂_z(ρ_p v'_z) = 0,
D(p' - γ c_p^2 ρ') + (γ - 1)Ω_p^2 z ρ_p v'_z = 0,
where c_p = c_s(r_p),
ϕ_p = -G M_p/√(x^2+y^2+z^2),
and we have taken the leading order approximation to the background disc's density,
ρ_p(z) ≡ Σ_p/(√(2π) H_p) exp(-z^2/(2H_p^2)),
for Σ_p = Σ_0(r_p) and H_p = H(r_p). The system of equations (<ref>) describes locally the flow excited by a planet embedded in a stratified 3D disc. We could in theory relax the assumption y ≲ H by reintroducing the azimuthally global expression for the planet's potential, Ψ_p, in place of ϕ_p, and applying periodic boundary conditions at y = ±π r_p. The corresponding correction however is only of relative size 𝒪(h), so that the local system involving ϕ_p (which we adopt for the remainder of the paper) becomes exact in the limit h → 0.
The system (<ref>) governs the excitation of the spiral density waves by the planet, permits downstream gravity waves (e.g. <cit.>), and also describes the horseshoe trajectories followed by fluid elements close to the planet's orbital radius. As noted by several authors, for example by <cit.>, the density wave excitation is confined to the region extending only a few scale heights from the planet, though (<ref>) does not capture the asymmetry between the inner and outer spiral wakes. Importantly, as we'll demonstrate in the next section, (<ref>) also has the elegant property that it may be vertically integrated to derive exact 2D flow equations.
§.§ Projection onto a 2D flow
Remarkably, under a particular choice of vertical averaging, the system of equations (<ref>) may be transformed exactly into the familiar linearized 2D flow equations. Associated with this averaging procedure is a definite choice for the `softening' of the planetary potential (specified in equation (<ref>)).
The averaging procedure defined below is closely related to the operator which projects onto the 2D mode of the disc found by <cit.>. Indeed, the 2D equations we find govern the radial and azimuthal evolution of the amplitude of this mode. The 2D mode is a member of a larger family of 3D modes, it exists for γ < 2, and is often referred to as `2D', since it has the property that v'_z = 0, despite possessing a vertical dependence.
Before we proceed, it's helpful to first introduce the adiabatic sound speed, scale height and aspect ratio as
c_γ = √(γ) c_p, H_γ = c_γ/Ω_p, h_γ = H_γ/r_p.
We combine (<ref>) and (<ref>) to eliminate ρ':
c_γ^2 × (<ref>) + (<ref>):
D p' + c_γ^2 ρ_p(∂_x v'_x + ∂_y v'_y)
+ (γ - 1)Ω_p^2 z ρ_p v'_z + c_γ^2 ∂_z(ρ_p v'_z) = 0,
D p'/ρ_p + c_γ^2 (∂_x v'_x + ∂_y v'_y) +
c_γ^2 exp(z^2/(2H_γ^2)) ∂_z[v'_z exp(-z^2/(2H_γ^2))] = 0.
We now multiply (<ref>) by the factor exp(-z^2/(2H_γ^2)) ∝ ρ_p^1/γ and integrate. (Incidentally, the function ρ_p^1/γ is the product of the background density distribution and the vertical profile of the 2D mode. In this way, the vertical integration may formally be seen to be an inner product with the 2D mode.) The result of the integration is the 2D mass conservation analogue
D P' + Σ_p c_γ^2(∂_x v̅_x + ∂_y v̅_y) = 0,
where we have defined
v̅_x ≡⟨ v'_x⟩, v̅_y ≡⟨ v'_y⟩, P' ≡Σ_p ⟨ p'/ρ_p(z)⟩,
for vertical average ⟨ ⋯⟩ defined via:
⟨ X ⟩ ≡ 1/(√(2π) H_γ) ∫_-∞^∞ X exp(-z^2/(2H_γ^2)) dz.
We may similarly apply the averaging procedure ⟨ ⋯⟩ defined in (<ref>) to equations (<ref>) and (<ref>) to obtain the 2D momentum equations
D v̅_x - 2Ω_p v̅_y + (1/Σ_p) ∂_x P' = -∂_x Φ_p,
D v̅_y + (1/2)Ω_p v̅_x + (1/Σ_p) ∂_y P' = -∂_y Φ_p,
where now Φ_p is precisely defined as ⟨ϕ_p ⟩, that is,
Φ_p ≡ ⟨ -G M_p/√(x^2 + y^2 + z^2) ⟩ = -(G M_p/H_γ) exp(s^2/4) K_0(s^2/4)/√(2π),
with s = √(x^2 + y^2)/H_γ, and K_0(z) the modified Bessel function. Note that in a global 2D disc model, the above expression remains valid upon redefining s = |r - r_p|/H_γ and reintroducing the indirect term (this matter is discussed in greater detail in section <ref>). This prescription for the potential is therefore generally applicable to 2D disc models. Reassuringly, at large distances, the above expression becomes
Φ_p = -G M_p/√(x^2 + y^2) (1 + 𝒪(H_γ^2/(x^2+y^2))),
and close to the planet, our 2D potential has a logarithmic singularity. We therefore have exact 2D equations forced by a 2D potential, valid for arbitrary adiabatic index γ.
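As a concrete check of this prescription, the closed-form average above can be compared against a direct numerical evaluation of the Gaussian-weighted vertical integral. The sketch below is ours (the function names and code units G M_p = H_γ = 1 are illustrative assumptions, not part of any existing code); it uses the exponentially scaled Bessel function to avoid overflow at large s, and also prints the point-mass value -G M_p/√(x^2+y^2) to illustrate the far-field limit.
```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0e   # exp(x) * K_0(x), avoids overflow for large arguments

# Illustrative code units (assumptions for this sketch): G M_p = H_gamma = 1.
GM_p, H_gamma = 1.0, 1.0

def phi_2d_closed(s):
    """Closed-form vertical average: -(G M_p / H_gamma) exp(s^2/4) K_0(s^2/4) / sqrt(2 pi)."""
    return -GM_p / H_gamma * k0e(0.25 * s**2) / np.sqrt(2.0 * np.pi)

def phi_2d_quadrature(s):
    """Direct Gaussian-weighted vertical average of the 3D point-mass potential."""
    a = s * H_gamma                      # cylindrical distance from the planet
    integrand = lambda z: (-GM_p / np.sqrt(a**2 + z**2)
                           * np.exp(-z**2 / (2.0 * H_gamma**2)))
    val, _ = quad(integrand, -np.inf, np.inf)
    return val / (np.sqrt(2.0 * np.pi) * H_gamma)

for s in (0.3, 1.0, 3.0, 10.0):
    print(s, phi_2d_closed(s), phi_2d_quadrature(s), -GM_p / (s * H_gamma))
```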
We remark that we did not make use of equation (<ref>) for the vertical velocity; the 2D mode is orthogonal to and ignorant of the permitted vertical motions of the disc (including downstream gravity waves and inertial waves). Studies of these excited gravity waves indicate they may have an important observational signature and impact on the torque on the planet, which is difficult to predict due to the intricacy of the problem <cit.>. That being said, much of the important dynamics is captured by the 2D mode, and it is on this mode that we focus the majority of our attention for the remainder of the paper.
§.§ Relation to 2D Euler equations
2D disc models enjoy a valuable simplicity compared to 3D models. However, the non-linear 2D Navier-Stokes equations comprise only an approximate model for the flow in the disc. The analysis performed above (as well as that in section <ref>) provides an interpretation for the 2D flow variables in terms of their counterparts in a 3D disc with vertical extent. This relationship holds when the disc's response (for example to forcing by a perturbing embedded planet) constitutes a linear perturbation to its background state. The 2D variables are given by weighted vertical averages of the (adiabatic) pressure and velocity perturbations to the locally isothermal background state of a 3D disc. Specifically, if we define
Σ̅≡Σ_p + P'/c_γ^2,
v̅ ≡ v̅_x e_x + (-(3/2)Ω_p x + v̅_y) e_y,
then we see that our vertically averaged flow equations (<ref>), (<ref>) and (<ref>) are linearisations of the familiar 2D barotropic model,
v̅·∇v̅ + 2Ω_p ×v̅ = - 1/Σ̅∇P̅ - ∇Φ̅_t - ∇Φ_p,
∇·(Σ̅v̅) = 0,
P̅ = K Σ̅^γ,
where Φ̅_t = -(3/2)Ω_p^2 x^2, and K = c_p^2 Σ_p^(1-γ) uniformly, so that P̅ = c_γ^2 Σ̅. We remind the reader of the formal definitions of P', v̅_x and v̅_y, given in equation (<ref>), and our convention c_γ^2 = γ c_p^2. (Note we may treat (<ref>) as an exact definition of P̅.)
We note that the error in the system (<ref>) scales as the size of the quadratic non-linear terms, that is, (q/h_γ^3)^2. This is notably far smaller than the error introduced by any alternative averaging process, which would be of order q/h_γ^3. Recall we assumed q ≪ h_γ^3 so that our 3D flow was linear, and that the errors introduced in the linearisation of the 3D system also scaled as (q/h_γ^3)^2. Furthermore, whilst we've addressed here only a local model of disc dynamics, this approach may be generalised to a global disc. It's sufficient for the background disc to satisfy a locally isothermal equation of state.
It's worth mentioning that the `surface density' Σ̅ in the 2D model must really be thought of in terms of the vertically averaged pressure perturbation. Moreover, equation (<ref>) doesn't technically enforce physical mass conservation; mass conservation would be derived from a direct (unweighted) vertical integral of the 3D mass conservation equation (<ref>). Instead, (<ref>) describes the evolution of the pressure due to a combination of compressive and buoyant motions; however, it takes precisely the same form as a 2D mass conservation equation (and for the remainder of this paper this is how we shall think of it).
In contrast, the 2D model's flow velocities (v̅_x and v̅_y) can be derived from simple weighted vertical averages of their 3D counterparts. Finally, we remind the reader that the `planetary potential', Φ_p, which forces this 2D system should be taken to be the vertical average of the 3D potential specified in (<ref>). This prescription is generally applicable to 2D disc models, and is discussed in more detail in section <ref>.
§.§ Stream function and Bernoulli invariant
The stream function and streamlines of our 2D vertically averaged flow are of particular importance near to corotation. This 2D behaviour is expected to contribute dominantly to the torque exerted on the planet, and for low-mass planets, the corotation torque is expected to scale with the fourth power of the horseshoe region width. To find and compute these streamlines, it's instructive to consider the exact 2D flow which solves (<ref>) (which is well approximated by our averaged flow), and exactly conserves the Bernoulli invariant (which we demonstrate below). Equation (<ref>) implies the existence of a stream function ψ. We take the definition
Σ̅v̅ = -e_z ×∇ψ.
We may rewrite (<ref>) as
(2 Ω_p + ∇×v̅) ×v̅ + ∇(1/2|v̅|^2 + W̅ + Φ̅_t + Φ_p) = 0,
where W̅ = P̅/Σ̅. We define the Bernoulli invariant, B, as
B = 1/2|v̅|^2 + W̅ + Φ̅_t + Φ_p,
and introduce the potential vorticity (PV) (or vortensity)
ζ = (2 Ω_p + e_z·(∇×v̅))/Σ̅.
We may then rewrite equation (<ref>) as
Σ̅ζe_z ×v̅ + ∇ B = 0
- ζe_z ×(e_z ×∇ψ) + ∇ B = 0
∇ B = -ζ∇ψ.
That is to say, the Bernoulli function is constant on streamlines, B = B(ψ), and further the PV,
-dB/dψ = ζ(ψ)
is also conserved. We now further impose that ζ is uniformly constant upstream (which holds to leading order within the local approximation), so that
ζ ≡ Ω_p/(2 Σ_p).
Combining this with the definition (<ref>) implies that linearly
∂_x v̅_y - ∂_y v̅_x - (Ω_p/(2 c_γ^2)) P'/Σ_p = 0.
In other words, the PV is uniform. Integrating equation (<ref>) (and setting the arbitrary constant of integration to zero) then gives an expression for the stream function for the 2D flow
-(Ω_p/(2 Σ_p)) ψ = (1/2)|v̅|^2 + W̅ + Φ̅_t + Φ_p.
That is to say, our 2D vertically averaged flow has streamlines which are the contours of the stream function
ψ/Σ_p = (3/4)Ω_p x^2 - (2/Ω_p)(P'/Σ_p + Φ_p) + 3 x v̅_y + 𝒪((q^2/h_γ^6) c_γ^2/Ω_p).
§.§ On the corotation singularity
Importantly, (<ref>) allows us to write down the equation for the radial displacement of fluid elements, ξ̅_x, which we take to satisfy
v̅·∇ξ̅_x = v̅_x,
with ξ̅_x = 0 far upstream. If a given streamline (or contour of ψ) has radial location x_0 far upstream, we have that
ξ̅_x = x - x_0.
Comparing values of ψ far upstream to the point (x,y), we see that at leading order
ψ = (3/4)Σ_p Ω_p x_0^2 = (3/4)Σ_p Ω_p (ξ̅_x - x)^2
= (3/4)Σ_p Ω_p x^2 - (2Σ_p/Ω_p)(P'/Σ_p + Φ_p) + 3 x Σ_p v̅_y,
ξ̅_x^2 - 2 x ξ̅_x + (8/(3Ω_p^2))(P'/Σ_p + Φ_p) - 4 x v̅_y/Ω_p = 0,
ξ̅_x = x ± √(x^2 - (8/(3Ω_p^2))(P'/Σ_p + Φ_p) + 4 x v̅_y/Ω_p),
where the choice of + or - is determined by whether the fluid element has just undertaken a horseshoe turn or not. Here, P' and v̅_y may be taken to be the linear solutions to the 2D equations of motion, which we reduce to independent second order equations in section <ref>.
Equation (<ref>) captures the essence of the corotation singularity, and what is meant by the `non-linear' corotation torque. The linear theory such as that applied in <cit.> and <cit.> may be thought to implicitly discard the quadratic term ξ̅_x^2, which is non-negligible for x = 𝒪(√(q/h^3)H). In this way, the linear equation for the particle displacement experiences a singularity at x = 0, which must be resolved non-linearly (though there exist linear equations which remain valid at corotation for v̅_x, v̅_y and P'). The singularity is introduced in linear analyses when the linearized azimuthal momentum equation, in our case equation (<ref>), is used to directly re-arrange for v̅_y, namely by integrating with respect to azimuthal coordinate y or θ (which may look like dividing by im(Ω - Ω_p) in Fourier space). More specifically, this `first integral' equation for v̅_y must become non-linear to remain valid near to corotation, just as the azimuthal integral of v̅_x becomes large enough that inclusion of the quadratic term ξ̅_x^2 in equation (<ref>) is necessary.
Importantly, this means that any materially conserved quantity Q(ψ) = Q(ξ̅_x - x), will experience the same singularity as the radial displacement, which appears when the quadratic term in (<ref>) is neglected. A gradient of (the materially conserved) potential vorticity over the coorbital region will therefore induce a singularity in the linear equations of motion. In order to resolve this singularity, the steady distribution of PV, which is advected by the flow induced by the planet within the horseshoe region, must be ascertained. In this sense, the corotation torque is `non-linear'. The corotation torque that arises depends very strongly on the geometry of the base flow, demanding an accurate model for the flow induced by a planet. We find this flow in the case of a low-mass planet in this paper.
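To make the role of the quadratic term concrete, the following minimal sketch evaluates the displacement relation above with the bracketed combination treated as a single fixed number (a purely hypothetical value, not taken from our solution; in reality it is a field depending on x and y): the full quadratic root remains finite as x → 0, whereas the linearised expression, obtained by dropping ξ̅_x^2, diverges there.
```python
import numpy as np

# Hypothetical, fixed value of the combination
#   Delta = 4 x vbar_y / Omega_p - (8 / (3 Omega_p^2)) (P'/Sigma_p + Phi_p),
# used purely for illustration.
Delta = 0.03

def xi_quadratic(x, turned=False):
    """Full displacement xi = x -+ sqrt(x^2 + Delta); '+' branch after a horseshoe turn."""
    root = np.sqrt(x**2 + Delta)
    return x + root if turned else x - root

def xi_linear(x):
    """Linearised displacement (xi^2 dropped): xi ~ -Delta / (2 x), singular at x = 0."""
    return -Delta / (2.0 * x)

for x in (1.0, 0.3, 0.1, 0.01):
    print(f"x = {x:5.2f}  quadratic: {xi_quadratic(x):+8.4f}  linear: {xi_linear(x):+9.4f}")
```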
§.§ 2D mode orthogonality and torque decomposition
For γ < 2, the unforced linearized equations of motion in a disc admit a `2D mode' solution with v'_z = 0. This 2D mode is a member of a wider family of modes, whose axisymmetric members are discussed in <cit.>. Here we demonstrate that the 2D mode is orthogonal to the remaining set of (non-axisymmetric) 3D disc modes, and show how the torque may be decomposed into 2D and 3D components. For γ > 2, a different family of modes exists which does not include such a mode with v'_z = 0. Indeed, imposing v'_z = 0 in this case yields an unnormalisable mode of infinite energy. Appropriately normalised, the 2D mode may be written as
v'_x,0 = v̅_x(x,y) √(2-γ) exp((γ-1)z^2/(2H_γ^2)),
v'_y,0 = v̅_y(x,y) √(2-γ) exp((γ-1)z^2/(2H_γ^2)),
p'_0 = P'(x,y) √(2-γ) (ρ_p(z)/Σ_p) exp((γ-1)z^2/(2H_γ^2)),
v'_z,0 = 0, ρ'_0 = p'_0/c_γ^2,
where again
v̅_x ≡⟨ v'_x⟩, v̅_y ≡⟨ v'_y⟩, P' ≡Σ_p ⟨ p'/ρ_p(z)⟩.
We may then decompose, for example, the radial velocity into its 2D mode component and an orthogonal complement, which we call v'_x,c, describing the rest of the disc's 3D motions, which include for example inertial and gravity waves.
v'_x = v'_x,0 + v'_x,c.
Now, v'_x,0 and v'_x,c are everywhere orthogonal with respect to the inner product involving the density
⟨ f,g ⟩ = (1/Σ_p) ∫_-∞^∞ ρ_p(z) f(z) g(z) dz.
This may be seen from the relation for the vertical average ⟨ ⋯⟩ in terms of the density-weighted inner product
⟨ ⋯ ⟩ ≡ ⟨ ⋯, (1/√(γ)) exp((γ-1)z^2/(2H_γ^2)) ⟩.
Specifically, the orthogonality follows via
⟨ v'_x,0, v'_x,c ⟩ = ⟨ v'_x,0, v'_x - v'_x,0 ⟩
= √(2-γ) v̅_x ⟨ exp((γ-1)z^2/(2H_γ^2)), v'_x ⟩
- (2-γ) v̅_x^2 ⟨ exp((γ-1)z^2/(2H_γ^2)), exp((γ-1)z^2/(2H_γ^2)) ⟩
= √(γ(2-γ)) v̅_x ⟨ v'_x ⟩ - (2-γ) v̅_x^2 √(γ/(2-γ))
= 0.
As a result, we obtain the Parseval identity,
⟨ v'_x, v'_y ⟩ = ⟨ v'_x,0, v'_y,0⟩ + ⟨ v'_x,c, v'_y,c⟩,
as well as equivalent identities for any two variables drawn from the set {v'_x, v'_y, p'/ρ_p(z)}. In particular, this allows us to decompose the radial angular momentum flux, F_A, into independent components associated with the 2D mode and 3D remainder, since
F_A ≈ r_p ∬ ρ_p(z) v'_x v'_y dz dy
= r_p ∬ ρ_p(z) v'_x,0 v'_y,0 dz dy + r_p ∬ ρ_p(z) v'_x,c v'_y,c dz dy
= √(γ(2 - γ)) r_p Σ_p ∫_-∞^∞ v̅_x v̅_y dy + F_A^3D ≡ F_A^2D + F_A^3D.
It's in this sense that the torque on the planet for a general adiabatic index γ may be separated into 2D and 3D contributions. The factor of √(γ(2-γ)) (approximately 92% for γ = 1.4) may be shown to be the fraction of torque imparted into the 2D mode by an external potential (assumed to be z-independent) at a Lindblad resonance of order m ≪ 1/h_γ (see for example equation (45) in <cit.> and appendix B1 of <cit.>). In this way, naïvely approximating the 3D flow velocities by their averaged values inadvertently recovers the total flux in this simplified scenario, as this approximation removes the factor of √(γ(2-γ)) from the flux expression. This follows from the applicability of resonant torque excitation theory to this regime.
This principle is at play in figure 2 of <cit.>. The top left panel depicts the simulated torque density excited in 3D discs which admit adiabatic perturbations to an isothermal background state. Notably, for x≳2H_γ, and for each value of γ considered, <cit.> recover torque densities matching the case γ = 1 (in which the 2D mode is z-independent and contributes almost all of the torque). The torque density in this `outer' region scales as x^-4 <cit.>, so that their γ-dependent scaling of the x-axis with H_γ and y-axis with c_γ^4 does not meaningfully affect the outer torque density curves. This torque density agreement is achieved despite a considerable fraction of the flux being carried by gravity waves and inertial waves when γ > 1. That is, the torque density for x≳2H_γ matches that obtained via a naïve approximation of the 3D flow by its vertical average, as defined in (<ref>).
There is however in the planet-disc interaction problem an important (indeed, a dominant) contribution from azimuthal modes with m ∼ 1/h_γ. The vertical profile of the planetary potential plays a key role here too. Indeed, inertial waves and gravity waves excited near the planet also impact the torque exerted on the planet significantly, contributing to F_A^3D. The problem of resolving the spectrum of gravity waves excited by an embedded planet and the flux they carry is very challenging, and has received only limited attention <cit.>. Numerical approaches face difficulties resolving the complex and fine structure of the waves, and quantitative analytical approaches struggle since the gravity waves are not separable in the vertical direction.
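The orthogonality and normalisation statements above are straightforward to check numerically. The sketch below is our own illustrative check (not code from this work): it builds the 2D-mode vertical profile, verifies that its vertical average is unity and that its norm under the density-weighted inner product is √(γ(2-γ)), consistent with the definitions above, and confirms that an arbitrary test function minus its 2D-mode projection is orthogonal to the mode.
```python
import numpy as np
from scipy.integrate import quad

gamma, H_p = 1.4, 1.0                  # illustrative values (assumptions)
H_gam = np.sqrt(gamma) * H_p

def ip(f, g):
    """Density-weighted inner product <f, g> with rho_p ~ exp(-z^2 / (2 H_p^2))."""
    w = lambda z: np.exp(-z**2 / (2.0 * H_p**2)) / (np.sqrt(2.0 * np.pi) * H_p)
    val, _ = quad(lambda z: w(z) * f(z) * g(z), -np.inf, np.inf)
    return val

def vavg(f):
    """Vertical average <f> with the H_gamma Gaussian weight."""
    w = lambda z: np.exp(-z**2 / (2.0 * H_gam**2)) / (np.sqrt(2.0 * np.pi) * H_gam)
    val, _ = quad(lambda z: w(z) * f(z), -np.inf, np.inf)
    return val

# Vertical profile of the (normalised) 2D mode
f0 = lambda z: np.sqrt(2.0 - gamma) * np.exp((gamma - 1.0) * z**2 / (2.0 * H_gam**2))

print(vavg(f0), "should be 1")
print(ip(f0, f0), "should be", np.sqrt(gamma * (2.0 - gamma)))

# Orthogonal complement of an arbitrary test function g(z)
g = lambda z: np.cos(z / H_p)
g_c = lambda z: g(z) - vavg(g) * f0(z)     # remove the 2D-mode component
print(ip(f0, g_c), "should be ~0")
```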
§ REDUCTION TO INDEPENDENT SECOND ORDER LINEAR EQUATIONS
In section <ref>, we showed how the 3D equations governing the flow near a low-mass planet may be manipulated into the same form as the familiar 2D local equations for mass, PV and momentum conservation
D P' + Σ_p c_γ^2(∂_x v̅_x + ∂_y v̅_y) = 0,
∂_x v̅_y - ∂_y v̅_x - (Ω_p/(2 c_γ^2)) P'/Σ_p = 0,
D v̅_x - 2Ω_p v̅_y + (1/Σ_p) ∂_x P' = -∂_x Φ_p,
D v̅_y + (1/2)Ω_p v̅_x + (1/Σ_p) ∂_y P' = -∂_y Φ_p,
where D ≡ -(3/2) x Ω_p ∂_y is the leading order advective operator arising from the background shear flow. We now derive, with no further approximations, simple second order linear equations from these which describe both the spiral wave excitation and the horseshoe streamlines. It's helpful to non-dimensionalise the equations. We first replace
x → H_γ x, y → H_γ y,
so that x and y now measure distances in units of `adiabatic scale heights'. We also let
𝒟 = -(3/2) x ∂_y,
and introduce the non-dimensional potential ϕ̂_p
Φ_p(x,y) = (q/h_γ^3) c_γ^2 ϕ̂_p(s), s = √(x^2+y^2),
ϕ̂_p(s) = -exp(s^2/4) K_0(s^2/4)/√(2π) ∼ -1/s + 𝒪(1/s^3).
We define non-dimensional, scaled x- and y- velocity and `enthalpy' perturbations u(x,y), v(x,y) and W(x,y) via
v̅_x ≡ (q/h_γ^3) c_γ u, v̅_y ≡ (q/h_γ^3) c_γ v, P' ≡ (q/h_γ^3) c_γ^2 Σ_p W.
Equations (<ref>), (<ref>) and (<ref>) become
∂_x v - ∂_y u - (1/2)χ = -(1/2)ϕ̂_p,
𝒟 u - 2v + ∂_x χ = 0,
𝒟 v + (1/2)u + ∂_y χ = 0,
where χ ≡ W + ϕ̂_p. Additionally, `mass conservation', (<ref>) becomes
𝒟χ + ∂_x u + ∂_y v = 𝒟ϕ̂_p,
which may also be derived from (<ref>), (<ref>) and (<ref>). It's clear at this stage that the flow is indeed linear so long as q ≪ h_γ^3.
Remarkably, these equations are independent of γ! They are analogous to the equations considered by <cit.>; however, it's worth noting that the approximations adopted in their subsequent analysis led to three sources of inaccuracy. Firstly, the use of decaying boundary conditions instead of a radiation condition, as well as the use of a softened planetary potential, both introduce discrepancies. Additionally, motivated by its validity in the case of test particle dynamics, they further approximated the above equations near to corotation by setting 𝒟 = 0 (before then differentiating with respect to x). The resulting system omits large factors at corotation in the fluid case. Note that ∂_x𝒟 → -(3/2)∂_y as x → 0, which does not vanish at corotation.
Before continuing the derivation, it's helpful also to note the equation for PV conservation in differential form,
𝒟(∂_x v - ∂_y u) + (1/2)(∂_x u + ∂_y v) = 0.
To proceed, we take 𝒟(<ref>) and combine with (<ref>), and then take 𝒟(<ref>) and combine with (<ref>). This yields
(𝒟^2 + 1)u = -𝒟∂_x χ - 2∂_y χ,
(𝒟^2 + 1)v = -𝒟∂_y χ + (1/2)∂_x χ.
Now, it may be shown from equations (<ref>) and (<ref>) that
𝒟∂_x χ + 2∂_y χ = 𝒟∂_x ϕ̂_p + 2∂_y ϕ̂_p - ∇^2 u + 3∂_y(χ - ϕ̂_p),
and
𝒟∂_y χ - (1/2)∂_x χ = 𝒟∂_y ϕ̂_p - (1/2)∂_x ϕ̂_p - ∇^2 v.
Equations (<ref>) and (<ref>) become
(𝒟^2 + 1 - ∇^2)u + 3∂_y χ = ∂_y ϕ̂_p - 𝒟∂_x ϕ̂_p,
(𝒟^2 + 1 - ∇^2)v = -𝒟∂_y ϕ̂_p + (1/2)∂_x ϕ̂_p.
Now, to find an equivalent equation for χ, we take 𝒟^2(<ref>). We're then able to use equation (<ref>), the momentum equations (<ref>) & (<ref>) and the PV equation (<ref>) again to deduce
(𝒟^2 + 1 - ∇^2)χ + 3∂_y u = (𝒟^2 + 1)ϕ̂_p.
We now define the linearized Riemann invariants[The interpretation of J_± as linearized Riemann invariants is clarified in section <ref>.] J_±:
J_±≡ u ±χ.
Combining equations (<ref>) and (<ref>), it follows that
[𝒟^2 + 1 ± 3∂_y - ∇^2]J_± = [∂_y - 𝒟∂_x ± (𝒟^2 + 1)]ϕ̂_p,
[𝒟^2 + 1 - ∇^2]v = -𝒟∂_y ϕ̂_p + (1/2)∂_x ϕ̂_p.
Now, equation (<ref>) is the parabolic cylinder equation[More specifically, it becomes the parabolic cylinder equation following an azimuthal mode decomposition or Fourier transform.] derived by <cit.> for the azimuthal velocity perturbation, but equation (<ref>) is novel. Importantly, it provides a simple and non-singular description of the behaviour of the radial velocity and enthalpy at corotation. It describes the linear excitation of the profiles of the Riemann invariants of <cit.>, which are conserved non-linearly further from the planet. These equations also accurately capture the behaviour of the coorbital flow, including the horseshoe dynamics.
It is worth noting and crediting the similar equations derived by <cit.>, who studied the excitation of density waves via turbulence. In the [appx]appendix, we discuss the numerical solution of equations (<ref>) and (<ref>) to high accuracy, and we present these solutions in the next section.
The inner limit of equations (<ref>) and (<ref>) (that is, the simplified equations valid close to corotation taking x ≪ H) may be written as
[1 ± 3∂_y - ∇^2]J_± = [∂_y - 𝒟∂_x ± (𝒟^2 + 1)]ϕ̂_p,
[1 - ∇^2]v = -𝒟∂_y ϕ̂_p + (1/2)∂_x ϕ̂_p.
Note that terms including 𝒟 acting on ϕ̂_p may not be neglected, as in this case ϕ̂_p has a (logarithmic) singularity at the origin, with the effect that such terms are non-negligible. For comparison, the equivalent equation from <cit.> expressed in our notation reads
[1 - ∂_x^2 - 4∂_y^2]χ = ϕ_p.
§.§ Connection with resonant wave excitation theory
One may deduce directly from (<ref>) the locations of the `effective' Lindblad resonances. In a pressureless disc composed of test-particles, the Lindblad resonances are located at orbital radii such that the epicyclic frequency, κ(r), is an integer multiple of the interaction rate, Ω - Ω_p, of the test particles with the planet. In this way, successive interactions of a test particle with the planet will administer in-phase `kicks' to the test particle. In the 2D gas-dynamic case, inertio-acoustic wave disturbances instead oscillate with squared frequency ω^2 = κ^2 + c^2 |k|^2 for wavevector k and sound speed c. Consequently, disturbances of large wavenumber have a larger frequency, which has the effect that the Lindblad resonances of large order need not have a very small interaction rate. As a result, the Lindblad resonances all stand off and pile up a finite distance from the planet's orbital radius, namely at x = ±(2/3)H_γ.
Furthermore, in the gas-dynamic case the contribution of the radial wavenumber k_x to the wave frequency ω leads to the spreading out of each Lindblad resonance radially.
The resonance locations may equivalently be defined in the gas-dynamic context as where the solution for a given azimuthal mode changes from evanescent to wavelike. It's near this turning point, where the wave has zero frequency (and correspondingly the most time to be excited), that the forcing has most influence on the wave excitation. We note that the operator 𝒟^2 + 1 - ∇^2 is hyperbolic in the wave-permitting region |x| > (2/3)H_γ, and elliptic in the region where all modes are evanescent, namely |x| < (2/3)H_γ. This signposts that the Lindblad resonances must all stand off a distance (2/3)H_γ from the planet's orbital radius. More specifically, for the azimuthal disc mode proportional to exp(i m θ), recalling the original definition y = r_pθ and reintroducing dimensions, (<ref>) becomes
(-(9/4)Ω_p^2 x^2 m^2/r_p^2 + Ω_p^2 + c_γ^2 m^2/r_p^2 - c_γ^2 ∂_x^2)v = ⋯.
We see that were it not for the acoustic terms in the above operator, namely c_γ^2 m^2/r_p^2 and c_γ^2 ∂_x^2, we would recover the positions of the close-in Lindblad resonances for test-particles by setting this operator to 0, that is, r_L,± m = r_p(1 ± 2/(3m)). Appropriately, at these resonances, solving for v would involve division by 0. Including the acoustic terms yields a parabolic cylinder equation for v_y whose solution is evanescent for |x| < (2/3)H_γ√(1 + (h_γ m)^-2), and wavelike for |x| > (2/3)H_γ√(1 + (h_γ m)^-2). The effective Lindblad resonances close to the planet (with h_γ m ≳ 1) therefore have locations
r_L,± m = r_p ± (2/3)H_γ √(1 + (h_γ m)^-2).
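A minimal sketch evaluating the effective resonance positions above for a few azimuthal mode numbers follows (the aspect ratio h_γ = 0.05 and r_p = 1 in code units are illustrative assumptions); it shows the resonances approaching the stand-off distance (2/3)H_γ as m increases.
```python
import numpy as np

h_gamma, r_p = 0.05, 1.0          # illustrative values
H_gamma = h_gamma * r_p

def r_lindblad(m):
    """Effective Lindblad radii r_L,+-m = r_p +- (2/3) H_gamma sqrt(1 + (h_gamma m)^-2)."""
    offset = (2.0 / 3.0) * H_gamma * np.sqrt(1.0 + 1.0 / (h_gamma * m)**2)
    return r_p - offset, r_p + offset

for m in (1, 5, 10, 20, 50):
    r_in, r_out = r_lindblad(m)
    print(f"m = {m:3d}:  r_L,-m = {r_in:.4f},  r_L,+m = {r_out:.4f}")
```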
§ RESULTS
In this section, numerical solutions to equations (<ref>) and (<ref>) are presented and discussed. We compare our flow solution near to corotation with flows computed in 3D simulations, for example by <cit.>, <cit.> and <cit.>. We further compare the excited wave profiles with those found in 2D studies. We find good agreement with 3D one-sided Lindblad torque estimates, which are typically a factor of 2-3 lower than 2D values.
§.§ Flow streamlines and the horseshoe width
Real-space plots for the solutions of equations (<ref>) and (<ref>) for each flow variable (u, v, χ, J_+, J_-, as well as the stream function ψ), are shown in figure <ref>. The planet is located at the origin of each figure. We imposed radiation boundary conditions in the x-direction, specifying that our numerical solution includes no incoming waves, and used a Fourier transform method in y. Our numerical method is accurate and tailored specifically to this problem, and the plotted solutions have an uncertainty of 1 × 10^-5. Details on this numerical procedure are discussed in the [appx]appendix.
Qualitatively, the vertically averaged flow field streamlines depicted in the bottom right panel of figure <ref> are comparable to those obtained with a `softened' potential with smoothing length b = 0.4H_γ. Namely, they agree in their prediction for the horseshoe width, and the streamlines are only weakly affected by the presence of the density waves. This is to be expected, as this particular choice for b is known to match well the horseshoe width measured in 3D simulations <cit.>.
In this case there is however a noticeable difference in the excited density wave's amplitude. We find via a second analogous calculation that the `softened' potential excites a wave with peak amplitude 30% greater than that of the 2D mode in both the profiles of J_+ and v. This wave correspondingly transports an angular momentum flux inflated by 55%, depicted in figure <ref>. This numerical discrepancy highlights the well-known result that softening prescriptions are unable to simultaneously capture both the corotation torque as well as the Lindblad torque accurately <cit.>. Indeed, in figure 5 of <cit.> one observes that torque components vary by up to a factor of 3 as the smoothing length is varied between 0.3 and 0.7.
The horseshoe region semi-thickness x_s may be found from our solution by first noting that χ, u and v decay rapidly to 0 as y →∞ in the region |x| < 2/3H_γ. The leading order far-field horseshoe streamlines therefore have reflectional symmetry about x = 0. We showed in section <ref> that the stream function in our vertically averaged flow may be expressed as
ψ/Σ_p = (3/4)Ω_p x^2 - (2/Ω_p)(P'/Σ_p + Φ_p) + 3 x v̅_y + 𝒪((q^2/h_γ^6) c_γ^2/Ω_p).
By evaluating the stream function at a stagnation point (located on the line x = 0 in the limit q/h_γ^3 → 0), and then far up- or downstream on the same streamline, we see that
(3/4)Ω_p x_s^2 = -(2/Ω_p)(q/h_γ^3) c_γ^2 χ_s,
where χ_s is the value of the (non-dimensional) pseudo-enthalpy χ at a stagnation point on the separatrix streamline. From our numerical solution, we have
χ_s = -0.47115,
and in the same limit q/h_γ^3 → 0, the flow has three stagnation points, with coordinates x = 0, y = ± 0.439 H_γ and y = 0. We therefore predict a horseshoe region half-width for a low-mass planet of
x_s = 1.12089 √(q/h_γ^3) H_γ = 1.12089 √(q/h^3) H γ^-1/4,
in good agreement with 2D simulations using a smoothing length b = 0.4H_γ <cit.>, which match well the horseshoe width measured in 3D simulations for this choice of b <cit.>.
The exact γ^-1/4 dependence is a non-trivial result, arising from the coincidence that the 2D mode of the potential, specified in equation (<ref>), depends only on the length-scale H_γ and not H, even though the vertical structure of the background disc varies vertically on the length-scale H.
The phenomenon of three stagnation points appearing on the axis x = 0 seems to be physical, indeed it is observed in 3D simulations (see for example figures 3 and 4 in <cit.>, where in their isothermal fiducial run, the stagnation points have approximate positions y = - 0.36 H, y = 0.53 H and y = 0, having been displaced by the far-field radial pressure gradient).
In a 3D disc, in general the horseshoe width will depend on height. In the isothermal case however (when γ = 1), there is no entropy stratification, and in fact close to the orbital radius of the planet, the flow is columnar (akin to Taylor-Proudman columns) since the advection is dominated by Coriolis and body forces. In this way, the horseshoe width becomes approximately height-independent, as is observed in 3D isothermal simulations, for example those performed by <cit.> and <cit.>.
In the adiabatic case, the Taylor-Proudman theorem no longer applies. Indeed, we expect a buoyancy wake within the downstream coorbital region, as observed by <cit.>. Instead, to gain traction here we appeal to the vertical structure of the 2D mode. As discussed in section <ref>, this mode behaves as
v'_x ∝ exp((γ-1)z^2/(2H_γ^2)), v'_y ∝ exp((γ-1)z^2/(2H_γ^2)), v'_z = 0,
p' ∝ exp(-z^2/(2H_γ^2)), ρ' ∝ exp(-z^2/(2H_γ^2)).
Consequently (if we neglect the contributions from the other vertical modes), we expect the horseshoe width to increase with height above the mid-plane as
x_s(z) ≈ 1.12 √(q/h^3) H γ^-1/4 √(2-γ) exp((γ-1)z^2/(2H_γ^2)).
The neglect of further 3D modes in this approximation is well-motivated. For example, inertial waves behave very differently to the 2D mode, with enthalpy perturbations typically modulated by √(x) near to corotation <cit.>. Gravity waves are also confined near to corotation to the region above diagonal buoyancy resonances <cit.>.
Whilst the exponential increase in horseshoe width (as well as perturbed flow velocities) with height is striking, we note that the factor (γ - 1)/(2γ) is small for reasonable values of γ. Further, we shouldn't be too concerned with the dynamics beyond 2 to 3 scale heights above the mid-plane, as over 95% of the disc's mass is contained within the first 2 scale heights. Moreover, our model's assumptions, including linearity, and that the background state is vertically isothermal (discussed in section <ref>) may begin to break down at such heights.
Figure <ref> depicts this vertical dependence for a few values of γ. Curiously, the expression in (<ref>) is poorly defined for γ ⩾ 2; however, we don't expect γ to take such values in astrophysical discs. This oddity is due to the change in the character of the disc's linear modes as γ increases through 2. For γ ⩾ 2, for example, there is no mode with v'_z = 0 (such a mode would contain infinite energy).
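The horseshoe half-width and its predicted vertical profile are easily evaluated; a brief sketch follows, using an illustrative planet-to-star mass ratio and aspect ratio (these parameter values are assumptions for demonstration, not values used in this paper).
```python
import numpy as np

# Illustrative disc/planet parameters (assumptions, not the paper's setup).
q, h, gamma = 1e-5, 0.05, 1.4        # mass ratio, aspect ratio, adiabatic index
H = 1.0                              # scale height in code units
H_gamma = np.sqrt(gamma) * H

# Mid-plane horseshoe half-width, x_s = 1.12089 sqrt(q/h^3) H gamma^(-1/4)
x_s0 = 1.12089 * np.sqrt(q / h**3) * H * gamma**-0.25
print("x_s(0) =", x_s0)

# Predicted vertical profile from the 2D-mode structure, x_s(z)
z = np.linspace(0.0, 2.0 * H, 5)
x_s_z = (1.12 * np.sqrt(q / h**3) * H * gamma**-0.25
         * np.sqrt(2.0 - gamma) * np.exp((gamma - 1.0) * z**2 / (2.0 * H_gamma**2)))
print(x_s_z)
```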
In the isothermal case, our numerical value for the horseshoe width x_s = 1.12√(q/h^3)H is in reasonable agreement with 3D simulations (though inflated by a few percent). <cit.> and <cit.> both estimate a width via 3D simulation of x_s = 1.05√(q/h^3)H. The reasons for the discrepancy between our result and theirs are perhaps two-fold. Most significantly, we use a fully local expression for the potential (<ref>), rather than performing a discrete azimuthal mode decomposition that takes into account the finite circumference of the disc. In this way, our calculation is most applicable to ultra-thin discs (for example in the problem of black hole migration within an AGN disc). We expect a relative discrepancy between our value and those obtained in 3D simulations of order h_γ∼ 0.05. Furthermore, we exclude any non-linearity in our calculation. From equation (<ref>) we expect a next order correction to x_s of relative size 𝒪(q/h^3), which may also be on the order of a few percent.
§.§ Velocity profiles at corotation
Of particular importance, especially when studying the dynamics of PV and entropy within the horseshoe region, is the dominant flow structure of the horseshoe streamlines. The radial extent of the horseshoe region is 𝒪(√(q/h^3)H), whereas the flow velocity varies on the length-scale H. As a result, the flow perturbations induced by the planet within the horseshoe region (which inform the dominant flow structure) are well approximated simply by their values on the line x = 0, that is, taking
v_y,hs ≈ -(3/2)Ω_p x + v'_y(0,y), v_x,hs ≈ v'_x(0,y).
This was noted by <cit.>, who studied the saturation of the corotation torque, and used matched asymptotics to compute Fourier coefficients for the flow velocity perturbations near to corotation. Figure <ref> depicts the non-dimensional radial and azimuthal velocity distributions at corotation, which satisfy (<ref>) and (<ref>). These correspond to taking cross-sections of the flow depicted in figure <ref> along the line x=0.
§.§ Wave evolution, profiles and one-sided torque
In this section we clarify in what sense J_+ and J_- are linearized Riemann invariants, how they correspond to their approximately conserved non-linear counterparts, and discuss the excited wave profiles shown in figure <ref> and corresponding one-sided Lindblad torque.
The non-linear 2D theory developed by <cit.> neglects the azimuthal velocity perturbation, as its linear solution decays as |x|^-1/2 away from the planet. The resulting 2D system conserves on characteristic curves the Riemann invariants
R_± = u ± (2/(γ - 1))(c - c_γ), c = c_γ (Σ/Σ_p)^((γ-1)/2).
For small departures from the background state, the enthalpy perturbation obeys W'/c_γ ≈ (2/(γ - 1))(c - c_γ). We may write therefore
R_±≈ u ±W'/c_γ,
equal to J_± up to an unimportant additive contribution of Φ_p/c_γ. It therefore makes sense to interpret J_± as linearized Riemann invariants.
A WKB analysis, appropriately imposing outgoing wave boundary conditions, indicates that for each azimuthal Fourier mode (with wavenumber k_y > 0),
J̃_+ ∼
|x|^+1/2 exp(+(3i/4)(Ω_p k_y/c_γ) x^2), if x > 0
|x|^-3/2 exp(-(3i/4)(Ω_p k_y/c_γ) x^2), if x < 0.
The behaviour of J̃_- can be determined from the symmetry J_+(x,y) = -J_-(-x,-y), or in Fourier space, J̃_-(x,k_y) = -J̃_+^*(-x,k_y). Note the strong |x|^-3/2 decay of J̃_+ in x < 0. This may be thought of as a consequence of J_+ being approximately conserved on characteristics emanating from x = -∞, where it is 0 (so that we have a simple wave). The consequence of this can be seen in figure <ref>, where we see each Riemann invariant nearly vanish on one side of the line x = 0. The decay is not quite as strong as |x|^-3/2 in real space close to the planet because of the influence of wave excitation there.
The spiral arm locations are well approximated by the characteristics of the second order operators in equations (<ref>) and (<ref>). That is, the density waves in x > 0 follow the curve η(x,y) = 0, for characteristic coordinate
η = y + (x/2)√((9/4)x^2 - 1) - (1/3)cosh^-1(3x/2).
The accurate computation of these wave profiles is an important step in the determination of the total torque on the planet, contributing to the Lindblad torque. They're also needed to compute the locations of eventual shocking due to wave steepening. Whilst the profiles obtained within a strictly local approximation yield cancelling inner and outer torques, the modification to the profiles from any asymmetry will lead to a net torque on the planet. The profiles excited by the vertically averaged Φ_p, shown in figure <ref>, are qualitatively very similar to the profiles obtained in planet-driven wave evolution studies <cit.>.
Our one-sided torque estimate is however more than a factor of 2 smaller than that obtained in the 2D studies conducted by <cit.> and <cit.>. We attribute this to the point-mass and softened potentials used to force the 2D system in previous studies, which artificially inflate the excited wave amplitudes. This is the case even for `higher order' expressions for the planet potential such as those used by <cit.>, and especially for smaller softening parameters. Indeed, we find that the angular momentum flux carried by the 2D mode, F_A^2D = √(γ(2-γ)) r_p Σ_p ∫_-∞^∞ v̅_x v̅_y dy (as defined in section <ref>), far from the planet is
F_A^2D(∞) = 0.37 √(γ(2-γ))(G M_p)^2 Σ_p r_p Ω_p/c_γ^3,
corresponding to a one-sided torque
T^2D≡ F_A^2D|_0^∞ = 0.34 √(γ(2-γ))(G M_p)^2 Σ_p r_p Ω_p/c_γ^3.
For comparison, in the case of a point-mass potential in a 2D disc, <cit.> calculate a dimensionless numerical torque prefactor of 0.93. We've used here that F_A^2D(0) = 0.03 √(γ(2-γ))(G M_p)^2 Σ_p r_p Ω_p/c_γ^3, which is apparent from figure <ref>. The fact that F_A^2D(0) ≠ 0 is due to a small fraction (4%) of the flux carried by the outward-propagating wave being excited in x < 0.
In the case γ = 1, the torque estimate (<ref>) is in agreement with the 3D (isothermal) simulations of <cit.> to within a few percent (one can find a one-sided Lindblad torque estimate from the symmetric part of the cumulative torque profiles in their figures 7 and 8); it further matches precisely the torque found by <cit.>. This is because in the case γ = 1, all gravity waves are suppressed, and inertial waves carry very little flux <cit.>.
<cit.> and <cit.> also note the discrepancy of a factor of 2 or 3 between previous 2D and 3D estimates of the one-sided torque. Judging from our result and figure 2 of <cit.>, it appears that in the case γ = 1.4, the 2D mode contributes only 75% of the one-sided Lindblad torque, with inertial waves and gravity waves carrying the majority of the remaining flux.
The angular momentum flux computed from our numerical solution, F_A^2D(x), is plotted in figure <ref>. Of note is that the torque is almost entirely exerted within a few scale heights of the planet, which justifies a local approach to the problem. Further, there's a very small decrease in the flux beyond x ≈ 3.5 H_γ, which may be attributed to the `negative torque phenomenon' <cit.>.
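For reference, the one-sided flux and torque scalings quoted above can be evaluated directly; the short sketch below does so for illustrative disc parameters in code units (all numbers are placeholders assumed for demonstration, not the values used in our calculations).
```python
import numpy as np

# Illustrative parameters in code units (assumptions for demonstration only)
gamma   = 1.4
GM_p    = 3.0e-6        # roughly an Earth-mass planet if G M_star = 1
Sigma_p = 1.0e-3
r_p     = 1.0
Omega_p = 1.0
c_gamma = np.sqrt(gamma) * 0.05 * r_p * Omega_p   # c_gamma = sqrt(gamma) c_p with h = 0.05

prefac = np.sqrt(gamma * (2.0 - gamma)) * GM_p**2 * Sigma_p * r_p * Omega_p / c_gamma**3
print("F_A^2D(inf) =", 0.37 * prefac)
print("T^2D        =", 0.34 * prefac)
```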
Finally, we're able to comment on the likely vertical profile of the shock front, and offer insight into the modification to the shocking length incorporating 3D effects. We note that for a freely propagating 2D mode, v'_z = 0, so that wave steepening may be considered as a purely horizontal effect. Further, the density wave wake and the gravity wave wake are spatially separated far from the planet <cit.>, which suggests considering the wave steepening of an isolated 2D mode would provide a good model for 3D shock formation.
Using equation (<ref>) and comparing the maximum steepness of our 2D mode profile with that of <cit.> and following their wave steepening calculation, we estimate the location of the initial shock in a 3D disc relative to their 2D estimate as
l_sh^3D ≈ 1.2 exp(-(γ-1)z^2/(5H_γ^2)) (2-γ)^-1/5 · l_sh^2D.
Here, we've simply exploited the A^-2/5 dependence of the shocking length on the amplitude of the wave, and noted the reduced maximum steepness of our linear 2D mode surface density profile (which is around 65% of that found by <cit.> at x = 1.33 H_γ). The vertical dependence is weak for physical values of γ; for γ = 1.4 it varies by only 5% within the first scale height, H_p, above the disc mid-plane, though higher up a shock may be expected to form slightly closer to the planet. At much greater height however, as discussed in section <ref>, our model's basic assumptions may begin to break down.
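A short sketch of the shock-length ratio implied by the expression above follows, assuming that reconstruction of the formula and an illustrative value of γ (the numbers are for demonstration only).
```python
import numpy as np

gamma, H_gamma = 1.4, 1.0                      # illustrative values
z = np.linspace(0.0, 2.0, 5) * H_gamma         # heights above the mid-plane

# l_sh^3D / l_sh^2D from the A^(-2/5) amplitude scaling of the shocking length
ratio = 1.2 * np.exp(-(gamma - 1.0) * z**2 / (5.0 * H_gamma**2)) / (2.0 - gamma)**0.2
for zi, ri in zip(z, ratio):
    print(f"z = {zi:.1f} H_gamma:  l_sh^3D / l_sh^2D = {ri:.3f}")
```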
§ DISCUSSION
We've already discussed many key items as and when they arose earlier in this paper, including
* the nature and resolution of the corotation singularity in section <ref>, including the manner in which it has a `non-linear' resolution, and how and why it appears in previous linear analyses;
* the relation between the commonly adopted 2D Euler equations and variables and their 3D counterparts (section <ref>);
* the appearance of and displacements in the locations of the Lindblad resonances within the framework of this paper in section <ref>.
However, it's worth discussing caveats to this work as well as two further topics which warrant additional attention: the `rigorous' softening of planetary potentials to reproduce 3D effects within a 2D disc model, and the role that this 2D mode plays in determining the torque on a low-mass planet.
§.§ Treating planetary potentials in 2D discs
§.§.§ Context: softening prescriptions
When studying planet-disc interactions in 2D, the planet's potential should be modified to account for the influence of the vertical extent of the disc on the dynamics. Prescriptions such as
Φ_p = -G M_p/√(|r - r_p|^2 + b^2), b ∼ H/2,
are commonly adopted, as well as other similar prescriptions including the `fourth order' softened potential described in equation (16) of <cit.>. Each of these prescriptions however struggles to uniformly capture the impact of the disc's vertical extent on the dynamics within a few scale heights of the planet.
<cit.> compared (<ref>) to the density-weighted average of a point-mass potential and concluded that the best choice for b varies depending on the distance from the planet, especially for separations |r - r_p| ∼ H. They note, worryingly, that the choice of b can inform the direction of planetary migration, or in fragmentation simulations whether the disc fragments or not. For our purposes, adopting such a prescription can significantly inflate the excited wave flux (by as much as or more than a factor of 2, for both (<ref>) and the `fourth order' potential), and can impact strongly the horseshoe region width. As mentioned previously, the variation of net torque components can be by more than a factor of 3 as the smoothing length is varied between 0.3 and 0.7 (<cit.>, figure 5).
§.§.§ A rigorous choice
In light of this, in this section we emphasise that (<ref>) is a far better choice for the 2D potential. In particular, taking inspiration from <cit.>, we proved in section <ref> that in the case of a low-mass planet, it is possible to directly derive 2D fluid equations governing averaged velocities and an effective `surface density' which are forced precisely by (<ref>). In this way, we argue that (<ref>) (which for simplicity excludes the height-independent indirect term) is the optimal choice for the planetary potential in a 2D disc, especially when considering sub-thermal mass planets.
Φ_p ≡ -(G M_p/H_γ) exp(s^2/4) K_0(s^2/4)/√(2π), s = |r-r_p|/H_γ.
(We discuss below how this may be implemented practically in simulations.)
Indeed, (<ref>) is uniformly a very good approximation to the potential which forces the global 2D mode of the disc (which would simply involve s = |r-r_p|/H_γ(r)), since far away from the planet
Φ_p ∼ -G M_p/|r-r_p| [1 - 1/(2s^2) + 9/(8s^4) + 𝒪(s^-6)].
That is, far away the potential returns to the familiar point-mass potential with small correction terms. In the same way, we reassuringly recover the point mass potential in the case of a `razor-thin' disc. The higher order corrections may be thought to arise physically from the quadrupolar and higher order multipolar interactions between the vertically extended disc and the planet monopole. In seeking to converge more rapidly to a point-mass potential, the `fourth order' softened potential of <cit.> neglects the quadrupolar interaction term.
Near the planet, if we assume the background temperature T_0(r) (and therefore the scale height H(r)) varies slowly with radius, then the departure of the potential which forces the global 2D mode from the above expression for Φ_p is also small.
In figure <ref> we compare graphically the appearance of the vertically averaged potential (<ref>) with a point-mass potential and `softened' potentials of the form (<ref>), as well as their corresponding gradients. Note that Φ_p has a logarithmic singularity at the origin. More precisely,
Φ_p ∼ (G M_p/H_γ) (2 ln s)/√(2π) as s → 0.
A logarithmic singularity is a general feature of any weighted vertical average of the planetary potential.
Despite the singularity experienced by the averaged potential (and correspondingly the enthalpy) at the location of the planet, the linearity of the system is hardly compromised. To demonstrate this, we note that in reality, the singularity of the 3D potential will be truncated at the planet's radius, R_p. We define the small parameter
ε = R_p/H_γ.
For Earth-like planets located at around 1 to 10 au, we expect 10^-5≲ε≲ 10^-3. Suppose we truncate the singularity of the 3D planetary potential (excluding the indirect term) at the planet's radius prior to the vertical averaging as
ϕ_p =
-G M_p/|r-r_p|, |r-r_p| > R_p,
-G M_p/R_p, |r-r_p| ⩽ R_p.
In this case,
Φ_p(0) ∼ (G M_p/H_γ) (2 ln ε)/√(2π).
However, |2 ln ε/√(2π)| ∼ 7 < h_γ^3/q by assumption, so that the 2D system (including the enthalpy W) everywhere constitutes an approximately linear perturbation to the background state. We note further that the solutions for u, v and χ shown in section <ref> are very weakly affected by the (integrable) logarithmic singularity in the potential.
This then suggests that for numerical applications it would be very reasonable (namely, to avoid numerical divergences and time-step issues) to `soften' the 2D potential (<ref>) by taking s^2 = ε^2 + |r-r_p|^2/H_γ^2; a practical choice for ε in grid-based simulations in which the planetary radius is unresolved would be of the order of the grid spacing divided by H_γ.
Regarding numerical implementation, we ought to address the subtlety that when directly evaluating the potential for large arguments s, we multiply both exponentially large and small numbers. However, efficient C functions for the exponentially scaled Bessel functions e^x K_0(x) and e^x K_1(x) (which both appear in the expression for the gradient of Φ_p) already exist within the Cephes library[<https://www.netlib.org/cephes/>] <cit.>. Alternatively, the vertically averaged potential in (<ref>) and its derivative are functions of one parameter only, so that a lookup table would provide an efficient method for evaluation.
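As a concrete illustration, the following minimal Python sketch evaluates the averaged potential above and its gradient with SciPy's wrappers of the Cephes routines, working in units of G M_p/H_γ and assuming the normalisation and optional ε-softening discussed above; the function names and comparison values are choices of this sketch rather than part of any existing code.

import numpy as np
from scipy.special import k0e, k1e  # k0e(x) = e^x K_0(x), k1e(x) = e^x K_1(x)

def phi_2d(s, eps=0.0):
    # Vertically averaged planet potential, in units of G M_p / H_gamma.
    s2 = s * s + eps * eps              # optional softening near the planet radius
    return -k0e(0.25 * s2) / np.sqrt(2.0 * np.pi)

def dphi_2d_ds(s, eps=0.0):
    # Gradient dPhi_p/ds, using K_0'(x) = -K_1(x); same units as phi_2d.
    s2 = s * s + eps * eps
    x = 0.25 * s2
    return 0.5 * s * (k1e(x) - k0e(x)) / np.sqrt(2.0 * np.pi)

# Far from the planet the averaged potential approaches the point-mass value,
# whereas a Plummer-softened potential with b = H/2 differs appreciably for s of order unity.
s = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
print(-s * phi_2d(s))                 # ratio to the point-mass value; tends to 1 as s grows
print(s / np.sqrt(s * s + 0.25))      # same ratio for the Plummer-softened potential

Because the exponential factors cancel inside k0e and k1e, no overflow occurs even for large s.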
§.§ Torques on low-mass planets
The torque exerted on the planet by the disc is of particular importance, as it informs the rate and direction of planet migration. The solution presented in this paper constitutes the leading order flow induced by the planet, which is rotationally symmetric and so exerts no net torque on the planet. The next-order correction to this flow is a factor of h_γ smaller, includes geometric asymmetries as well as radial variations in the properties of the background disc, and in general leads to a net torque on the planet. The majority of the torque exerted on the planet is done so locally. In this way, a radially local model of the planet-disc interaction is sufficient to determine the torque on the planet, which must depend linearly on the local gradients of the disc's properties.
§.§.§ Corotation torque and critical layer at corotation
One key complicating issue is that of the horseshoe streamlines. They form closed loops when the full azimuthal extent of the flow is considered (and when the planet and disc are not migrating or drifting radially relative to each other). In particular, materially conserved quantities (such as Ertel's PV and the entropy) are advected from inner to outer disc radii, and vice versa. In the absence of strong viscosity or dissipation therefore, the properties of the background disc are not restored far (azimuthally) from the planet.
This effect gives rise to an additional torque known as the corotation torque. To determine this torque, it's necessary first to solve for the distributions of these conserved quantities within the critical layer which is called the horseshoe region, with appropriate prescriptions for viscosity, heating etc. A key ingredient in this solution is an accurate description of the leading order flow within the corotation region. As mentioned in the [s:intro]introduction, there has already been a wealth of research devoted to the study of corotation torques. However, there is still more to be explored here, including in particular the impact that the vertical extent of the disc has on the torque. Part of this impact is captured in the behaviour of the 2D mode of the leading order flow, whose velocity profiles at corotation are shown in figure <ref>. These profiles vary only slightly over the corotation region, and may be used to enable cheap numerical studies of the influence of a range of physics on the corotation torque.
§.§ Caveats
We point out three main caveats to this work, which are detailed in the subsections below.
§.§.§ Omitted 3D wave modes
The solution presented in this paper is not fully 3D. We solved for the behaviour of the z-dependent 2D mode of the disc, which ought to be superposed with a spectrum of inertial and internal gravity waves. These contribute a significant fraction (upwards of 20% for γ = 1.4) of the one-sided torque on the planet, as discussed in section <ref>.
§.§.§ Impact of turbulence and magnetic fields
Many physical phenomena may act to disrupt the solution for the disc's 2D mode. The vertical shear in the 2D mode might be affected by processes that provide greater coupling between different layers in the disc, for example turbulence, viscosity, or a vertical magnetic field. Favourable conditions include a large plasma β≫ 1 and low level of ionisation, as well as a low Reynolds stress, captured by the condition for the viscosity parameter α≪ 1 <cit.>. It's widely believed that except for the innermost regions, protoplanetary discs do predominantly exhibit β≫ 1 <cit.>, and recent numerical and observational studies point towards a smaller turbulent viscosity than previously thought. Non-ideal MHD effects such as ambipolar diffusion and Ohmic resistivity act to suppress turbulence even in the disc's surface layers <cit.>, and to decouple the fluid from the magnetic field. Indeed, <cit.> and later <cit.> find that observations of the discs around HL Tau and Oph163131 are consistent with small viscosity parameters α∼ 10^-4 and α∼ 10^-5 respectively.
§.§.§ Thermodynamic assumptions
The validity of the thermodynamic model adopted in this paper (which simply constitutes a vertically isothermal background state permitting adiabatic perturbations) depends critically on the thermal relaxation time-scale for the disc's gas. If the cooling is too rapid, then the adiabatic assumption breaks down; too slow, and the disc's background state won't have reached a thermal equilibrium as the disc evolves. The disc's outer layers are typically hotter than its interior due to the stellar irradiation, which is not able to penetrate the optically thick disc interior. The molecular gas which comprises the disc interior is optically thick to its own radiative emission; its primary means of cooling is via the surrounding dust grains. The gas must first transfer its thermal energy to the dust, which is then able to radiate away the excess energy more efficiently (though the disc is not necessarily optically thin to this emission). As pointed out by <cit.>, <cit.>, <cit.> and <cit.>, infrequent gas-dust collisions often act as a bottleneck for the thermal relaxation of the gas, especially in its surface layers. <cit.> predict that typical gas thermal relaxation time-scales are tens of orbits even at 10 au, corresponding to a cooling parameter β≡ 2π t_relax/t_orb≳ 10^1.5.
This slow cooling offers favourable conditions for the adiabatic model governing planet-induced perturbations to represent a good approximation, though the cooling is still more rapid than the disc's evolution time-scale. Indeed, the downstream buoyancy wake excited by a giant planet orbiting at 90 au is believed to be the source of the tightly wound spirals observed in TW Hya <cit.>. The existence of such a signature adds to the credibility of the adiabatic model, as it implies t_relax≫ N_z^-1 (for N_z the Brunt-Väisälä frequency), necessary for the buoyancy wake to develop. It's likely however that cooling and thermal diffusion play important roles on the much longer time-scale associated with the librating horseshoe motion. Whilst this has only a small effect on the dominant flow behaviour found in this paper, it has important consequences for the entropy and potential vorticity distributions within the horseshoe region, and correspondingly implications for the corotation torque.
Insight into the vertical temperature structure of the outer regions of protoplanetary discs is offered by the emission from different CO isotopes which trace different disc altitudes. Observational studies typically find a flat temperature plateau near the disc mid-plane, with an increase in temperature in the disc's upper layers, a few pressure scale heights above the mid-plane <cit.>. This is consistent with the physical picture discussed above. The disc's surface temperature is typically a factor of 2 larger than its mid-plane temperature. Whilst there's no exact analogue for the 2D projection procedure described in section <ref> when the temperature of the background state varies with height, the majority of the disc's mass is contained within the region of temperature plateau. Whilst an oversimplification, this suggests therefore that the vertically isothermal model for the background disc may be expected to give quantitatively reasonable results.
§ CONCLUSIONS
For linear adiabatic perturbations to a vertically isothermal protoplanetary disc, there is a particular vertical average of the 3D gas dynamic equations which exactly yields the familiar 2D linear system, comprised of averaged flow velocities, but an effective surface density, pressure, and planetary potential. This averaging process is closely related to the projection operator onto the Lubow-Pringle 2D mode of the disc <cit.>. Importantly, when compared with more traditional softening prescriptions for planetary potentials, adopting the averaged potential (<ref>) provides a more rigorous and accurate, parameter-free method to modify the potential of an embedded planet to account for 3D effects.
In this paper we presented the solution for the 2D mode of the flow excited by a low-mass planet on a circular orbit, whose features include a spiral wake as well as horseshoe streamlines within the coorbital region. We derived non-singular, independent second order equations for each flow variable, (<ref>) and (<ref>). These include novel parabolic cylinder equations for linear combinations of the radial velocity and enthalpy, and offer a correction to the model of the coorbital flow proposed by <cit.>.
The 2D mode (so-called as it has the property v_z = 0, though in general it is z-dependent) is a member of the wider family of 3D disc modes. It provides an interpretation for 2D disc models, and plays a dominant role in the response of the disc to the planet, particularly at corotation. We find that in the limit of an ultra-thin disc, the vertically averaged horseshoe width is x_s = 1.12 √(q/h^3) H γ^(-1/4). Taking only the 2D mode contribution to the flow predicts a horseshoe width which grows with height above the disc mid-plane as described in equation (<ref>) and figure <ref>.
The flow in the corotation region is well approximated by superposing the background shear flow with the perturbed flow fields evaluated at corotation, x = 0. This coorbital flow solution, depicted in figure <ref>, may be used to inexpensively simulate the impact of various physics on the corotation torque, including diffusive and migration-feedback effects.
Our approach also allowed us to capture accurately the wave angular momentum flux transported by the spiral density wave in a 3D disc, which is reduced by a factor of 2 or 3 compared with previous 2D estimates. We used the profiles of this wave to estimate the location at which it first shocks in a 3D disc, as a function of height above the disc mid-plane.
We demonstrated that the 2D mode is orthogonal to the wider family of permitted 3D motions, which include for example gravity waves and inertial waves. As such, the torque on the planet decomposes into the sum of separate 2D and 3D components. We omit from this paper any discussion of the remaining excited spectrum of 3D wave modes. Resolving and understanding the gravity wave spectrum is a challenging and under-studied, yet important problem which requires future attention, not only because of the angular momentum which they transport, but also for their observational signature.
§ ACKNOWLEDGEMENTS
This research was supported by an STFC PhD studentship (grant number 2750631). We are very grateful to the referee, Clément Baruteau, for a thorough report and very helpful suggestions which have improved the paper.
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
§ APPENDIX A: NUMERICAL PROCEDURE
In this section we outline how the numerical solutions plotted in figure <ref> were obtained. We sought solutions with very high resolution and accuracy so that in future work we may use them to predict accurately the intricate dynamics which inform the corotation (and indeed wave) torques. We consider the problem in the (x,k_y) plane, having Fourier transformed with respect to y. For brevity we denote k_y = k. It is helpful to define
x' = √(3|k|) x, a = (1 + k^2)/(3|k|).
Equations (<ref>) and (<ref>) become the canonical parabolic cylinder equations,
[∂_x'^2 + (1/4)x'^2 - (a ± sgn(k))]J̃_±
= - sgn(k)[1/3 + (1/2) x'∂_x' ∓ ((1/4)x'^2 - (1/3) k)]ϕ̃_p.
[∂_x'^2 + (1/4)x'^2 - a]ṽ = 1/(2√(3))[x'√(|k|) - (1/√(|k|))∂_x']ϕ̃_p,
forced by the Fourier transform of the potential ϕ̂_p, which we denote ϕ̃_p. It may be shown (namely by evaluating the Fourier transform prior to the vertical averaging) that
ϕ̃_p(x,k) = -√(2/π)∫_-∞^∞ K_0(|k|√(x^2+z^2)) e^(-z^2/2) dz,
which is far cheaper to evaluate accurately than the direct expression for the transform of ϕ̂_p.
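For reference, this integral is cheap to evaluate by direct quadrature; the short Python sketch below (the function name and the prefactor, as reconstructed above, are our own) does so with SciPy.

import numpy as np
from scipy.integrate import quad
from scipy.special import k0

def phi_tilde(x, k):
    # Fourier-transformed averaged potential; the integrand is even in z and its
    # logarithmic singularity at z = 0 (when x = 0) is integrable.
    integrand = lambda z: k0(abs(k) * np.hypot(x, z)) * np.exp(-0.5 * z * z)
    val, _ = quad(integrand, 0.0, np.inf)
    return -np.sqrt(2.0 / np.pi) * 2.0 * val

print(phi_tilde(1.0, 0.5))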
Now, as x' → +∞, the solution will consist of the homogeneous waves U(i a', x'e^(-iπ/4)) and U(-i a', x'e^(iπ/4)). Here a' is understood to denote the order of the relevant parabolic cylinder equation, that is, either a_± or a, and we adopt U(a,x) as defined in chapter 19 of <cit.>. We use the parabolic cylinder function (PCF) U, as E(a',x) and W(a',x) are not defined in general for a' ∈ℂ. From now on we take k ⩾ 0, noting our real-valued real-space variables will have conjugate-symmetric Fourier transforms. With this in mind, and noting that ∀δ > 0, the PCF U(a,z) ∼ z^(-a-1/2) e^(-z^2/4) as z →∞ in |arg(z)| ⩽ 3π/4 - δ, we identify the following solutions as in- or outgoing waves:
U(i a', x'e^(-iπ/4)) is an outgoing wave as x' →∞
U(-i a', x'e^(iπ/4)) is an ingoing wave as x' →∞
U(i a', -x'e^(-iπ/4)) is an ingoing wave as x' → -∞
U(-i a', -x'e^(iπ/4)) is an outgoing wave as x' → -∞
We want no incoming waves present in the solution. That is, our desired particular integral solution for J̃_± or ṽ, (which we denote η̃_p) behaves as
η̃_p ∼ b_1 U(i a', x'e^(-iπ/4)) as x' →∞
η̃_p ∼ b_2 U(-i a', -x'e^(iπ/4)) as x' → -∞.
We eliminate incoming waves in our numerical solution with the following algorithm. Suppose we compute a numerical solution η̃, imposing arbitrary initial data at x' = 0. It follows that, without loss of generality
η̃∼ (b_1 + c_1) U(i a', x'e^(-iπ/4))
+ c_2 U(-i a', x'e^(iπ/4)) as x' →∞,
η̃∼ (b_2 + d_1) U(-i a', -x'e^(iπ/4))
+ d_2 U(i a', -x'e^(-iπ/4)) as x' → -∞.
We may then fit the numerical solutions via linear regression to the homogeneous PCF wave solutions far from x' = 0 (where the forcing has decayed sufficiently) to extract values for c_2 and d_2. We note that c_2 and d_2 both depend linearly on the (arbitrary) initial data imposed at x' = 0. Integrating the system twice more with two different choices for initial data allows us to solve this linear system for the correct initial data (which yields c_2 = d_2 = 0). We then perform one final integration with this choice of initial data, giving a numerical solution for η̃_p.
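The elimination step can be written compactly by exploiting the affine dependence of (c_2, d_2) on the initial data imposed at x' = 0. The Python sketch below is our own condensation of that logic; incoming_amplitudes stands for a user-supplied routine (not given here) that integrates the forced equation away from x' = 0 for given initial data and returns the fitted ingoing-wave amplitudes.

import numpy as np

def radiating_initial_data(incoming_amplitudes):
    # Return initial data (eta(0), d eta/dx'(0)) for which the fitted ingoing
    # amplitudes vanish, i.e. c_2 = d_2 = 0. The supplied callable maps initial
    # data to (c_2, d_2), which depend affinely on that data.
    f0 = np.array(incoming_amplitudes((0.0, 0.0)), dtype=complex)         # affine offset
    col1 = np.array(incoming_amplitudes((1.0, 0.0)), dtype=complex) - f0  # response to eta(0)
    col2 = np.array(incoming_amplitudes((0.0, 1.0)), dtype=complex) - f0  # response to eta'(0)
    M = np.column_stack([col1, col2])
    y0 = np.linalg.solve(M, -f0)     # initial data cancelling the ingoing waves
    return tuple(y0)

A final integration with this initial data then yields the radiating particular solution, mirroring the four integrations per (k, a') described above.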
We carried out this algorithm with a 4th order Runge-Kutta ODE solver to compute the solutions to equations (<ref>). This method provides a very high numerical accuracy, and we were able to achieve a relative error of no more than 1× 10^-5 for all 0.01 ⩽ k ⩽ 500.
It's worth noting some explicit analysis concerning the k=0 case, which cannot be handled by the previous algorithm, and further how we can treat the singularities in the system in both real and Fourier space without adopting softening prescriptions or cutoffs. In the case k=0, ϕ̃_p doesn't exist, and indeed J̃_± diverge too. However, since
[1-∂_x^2]W̃|_(k=0) = ∂_x^2 ϕ̃_p, and [1-∂_x^2]ṽ|_(k = 0) = (1/2)∂_x ϕ̃_p,
for W̃ the transformed enthalpy perturbation, we may write
W̃(x,0) = ∫_-∞^∞ e^(-|x'-x|)[√(π/2)|x'| erfcx(|x'|/√(2)) - 1] dx',
ṽ(x,0) = (1/2)∫_-∞^∞ e^(-|x'-x|)√(π/2) erfcx(|x'|/√(2)) sgn(x') dx',
ũ(x,0) = 0,
where erfcx(x) is the scaled complementary error function. Now, as well as the divergence of the potential as k → 0, we have that for large k,
ϕ̃_p(x,k) ∼ - √(2π) e^(-|k x|)/|k|,
that is, as we should expect, the inverse Fourier transform of ϕ̃_p diverges at x = y = 0 (as the 2D real-space potential has a logarithmic singularity there).
We may however eliminate the need to evaluate the solutions at very many large values of k (in order to obtain high accuracy), and overcome the singular behaviour at k = 0, by defining variables which are well-behaved everywhere in Fourier and real space. The inverse Fourier transform of these variables will then quickly give accurate results. First note that
∫_-∞^∞ ln(x^2+y^2) e^(-i k y) dy = -(2π/|k|) e^(-|k x|).
It therefore makes sense to define regularised J_± variables
J_±,reg = J_±∓[ϕ̂_p - (1/√(2π)) ln((x^2+y^2)/(1+y^2))]
so that
J̃_±,reg = J̃_±∓[ϕ̃_p + √(2π)(e^(-|k x|) - e^(-|k|))/|k|].
These variables are then indeed well-behaved (with no singularities) everywhere in Fourier and real space. In particular, for k = 0,
J̃_±,reg(x,0) = ∓[W̃(x,0) + √(2π)(1 - |x|)],
where W̃(x,0) may be found by evaluating (<ref>). With these regularised variables, we are able to quickly achieve the earlier quoted absolute uncertainty in the flow solution of 1 × 10^-5.
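As a simple spot check of the k = 0 expressions above, the profile W̃(x,0) can be evaluated directly with SciPy's scaled complementary error function; the sketch below assumes the √(π/2) prefactor as reconstructed above and splits the integral at the kink of e^(-|x'-x|).

import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

def W_k0(x):
    # Transformed enthalpy perturbation at k = 0, by direct quadrature.
    bracket = lambda xp: np.sqrt(np.pi / 2.0) * abs(xp) * erfcx(abs(xp) / np.sqrt(2.0)) - 1.0
    integrand = lambda xp: np.exp(-abs(xp - x)) * bracket(xp)
    left, _ = quad(integrand, -np.inf, x)
    right, _ = quad(integrand, x, np.inf)
    return left + right

print([W_k0(x) for x in (0.0, 1.0, 2.0)])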
|
http://arxiv.org/abs/2409.03467v1 | 20240905122232 | Cubic power functions with optimal second-order differential uniformity | [
"Connor O'Reilly",
"Ana Sălăgean"
] | cs.IT | [
"cs.IT",
"cs.CR",
"math.IT",
"math.NT",
"94D10 (Primary) 11T06, 94A60 (Secondary)"
] |
§ ABSTRACT
We discuss the second-order differential uniformity of vectorial Boolean functions, a relevant cryptographic property due to indication of resistance to the boomerang attack. First, we discuss connections with the second-order zero differential uniformity and its recent literature. We then prove the optimality of monomial functions with univariate form x^d where d=2^2k+2^k+1 and (k,n)=1, and begin work towards generalising such conditions to all monomial functions of algebraic degree 3. Finally, we discuss further questions arising from computational results.
94D10 (Primary) 11T06, 94A60 (Secondary)
§ INTRODUCTION AND BACKGROUND
Boolean functions can be understood as mathematical functions that model transformations on bit-strings, and have wide usage across theoretical computer science, primarily in the subjects of coding theory and cryptography. Extensive coverage of the application of Boolean functions in cryptography and coding theory is provided in <cit.>. Boolean functions have historically been used in the construction of stream ciphers, pseudorandom generators, and block ciphers. In particular, the S-boxes in the Data Encryption Standard (DES) and Advanced Encryption Standard (AES) ciphers can be described using Boolean functions <cit.>. The difficulty lies in choosing a “secure” Boolean function (or several) in the construction of these ciphers in order to provide resistance against attacks.
One well-studied attack on cryptosystems is the differential attack, introduced by <cit.>. For a Boolean function to be resistant to a differential attack, it must satisfy a criterion equivalent to the outputs of the derivative D_af being as uniformly distributed as possible <cit.>*Ch. 3.4.1, or equivalently, having minimal differential uniformity <cit.>. This motivates the extensive study of perfect nonlinear and APN functions, as explained in Section <ref>.
The Boomerang attack is a variation of a differential attack, initially introduced in <cit.>. This method has an advantage over previously-known differential attacks, in that it beats a bound on the minimum number of texts required to break a cipher in some cases. This disproved a common belief, known in <cit.> as a “folk theorem”, leading to some aspects in the design of previous ciphers. Since then, many variations of the attack have been introduced, notably <cit.>.
The main direction to date in studying resistance to this attack is to generalise the differential uniformity to the second-order derivative of the chosen Boolean function. In <cit.>, the Boomerang Connectivity Table (BCT) was introduced to measure the resistance of an S-box to the boomerang attack. However, this table does not address the case of Feistel ciphers, and thus later, in <cit.>, the Feistel Boomerang Connectivity Table (FBCT) was introduced for this purpose. More recently, there has been a focus on the second-order zero differential spectra and the second-order zero differential uniformity, which is defined as follows. We note that the authors of <cit.> originally call this the second-order differential spectra, but we include the term "zero" to avoid confusion with the second-order differential uniformity we work with in later sections.
Given f: _p^n→_p^n, and a,b∈_p^n, we define the second-order zero differential spectra of f with respect to a,b as:
∇_f(a,b)=|{x∈_p^n : f(x+a+b)-f(x+a)-f(x+b)+f(x)=0}|.
If p=2, we define the second-order zero differential uniformity of f as:
∇_f = max{∇_f(a,b) : a ≠ b, a,b ≠ 0}.
If p>2, then we define the second-order zero differential uniformity of f as:
∇_f = max{∇_f(a,b) : a,b≠ 0}.
This notion extends the Feistel Boomerang Connectivity Table to Boolean functions over odd characteristic fields; indeed the definitions of the Feistel Boomerang uniformity and the second-order zero differential uniformity are identical for p=2. There have been a number of results on the second-order zero differential spectra of various classes of functions. We provide a full list of studied power functions in Table <ref>, along with their second-order zero differential uniformity, denoted by ∇_f. We note that, while we do not list them, full descriptions of the second-order zero differential spectra for each table entry can be found in the respective citations provided.
Our approach follows the definition of the second-order differential uniformity provided in <cit.>, which we feel most naturally extends the definition of the differential uniformity. Our key result provides a new class of power functions with optimal second-order differential uniformity, namely the functions f:_2^n→_2^n given by f(x)=x^d, where d=2^2k+2^k+1, and (k,n)=1. We note that the second-order zero differential uniformity and second-order differential uniformity are equivalent for cubic Boolean functions, as the second derivatives appearing in both cases are affine. As such, many of the studied functions listed in Table <ref> are relevant here, and our results are directly relevant to the study of the second-order zero differential uniformity.
This paper is structured as follows. In Section <ref>, we discuss preliminaries of polynomials in finite fields, Boolean functions, and differential uniformity. In Section <ref>, we discuss a class of power functions of algebraic degree 3 and provide explicit values of the second-order differential uniformity in all cases. In Section <ref>, we discuss algebraic degree 3 power functions in the general case, and provide conditions for optimal second-order differential uniformity. Finally, we conclude in Section <ref> by discussing further questions arising from computational results.
§ PRELIMINARIES
§.§ Notation
We use p to denote a prime, and q to denote a prime power. We then denote by _q the finite field containing q elements, and by _q^* its non-zero elements. Most commonly, we choose p=2, q=2^n for some n∈ℕ, and write _2^n. Similarly, we denote by _p^n the vector space of dimension n over the field _p and, most commonly choosing p=2, write _2^n. We use the convention ℕ={1,2,3,…}, i.e. not including 0.
§.§ Boolean functions
Given n,m∈, a Boolean function f is a function f: _2^n→_2, and a vectorial Boolean function is a function g: _2^n→_2^m <cit.>. Note that we can always write g(x)=(f_1(x), …, f_m(x)) for some Boolean functions f_1,…, f_m, which we refer to as the coordinate functions of f. We commonly refer to vectorial Boolean functions as, simply, Boolean functions, and consider a Boolean function f: _2^n→_2 as a specific case with m=1. In this work, we will always consider Boolean functions with respect to affine equivalence. For the following definitions and further information, we refer to <cit.>.
Given two Boolean functions f,g: _2^n →_2^m, we say that f,g are affine equivalent if there exist affine automorphisms L_1, L_2 of _2^n, _2^m respectively, such that f=L_2∘ g∘ L_1.
We will often refer to the algebraic degree of a Boolean function.
The algebraic normal form, or ANF, of a given Boolean function f: _2^n→_2 is the unique representation given by:
f(x)=∑_I⊆{1,…,n} a_I ( ∏_i∈ I x_i) = ∑_I⊆supp(x)a_I
where a_I∈_2, and x=(x_1,…, x_n).
The degree of the ANF of a Boolean function f is denoted by d_algf, and is called the algebraic degree of f.
Note that a Boolean function is known as cubic when it has algebraic degree less than or equal to 3. We now extend these definitions to vectorial Boolean functions.
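For concreteness, the ANF coefficients a_I can be recovered from a truth table with the binary Möbius transform; the short Python sketch below is purely illustrative (the helper names are ours) and also reads off the algebraic degree.

def anf_coefficients(truth_table, n):
    # In-place binary Moebius transform: after the loops, a[x] equals the ANF
    # coefficient a_I for the monomial with support I = supp(x).
    a = list(truth_table)
    for i in range(n):
        for x in range(1 << n):
            if x >> i & 1:
                a[x] ^= a[x ^ (1 << i)]
    return a

def algebraic_degree(truth_table, n):
    a = anf_coefficients(truth_table, n)
    return max((bin(x).count("1") for x in range(1 << n) if a[x]), default=0)

# Example: f(x_1, x_2, x_3) = x_1 x_2 + x_3 has algebraic degree 2.
n = 3
tt = [((x & 1) & (x >> 1 & 1)) ^ (x >> 2 & 1) for x in range(1 << n)]
print(algebraic_degree(tt, n))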
Given a vectorial Boolean function f: _2^n →_2^m, the ANF of f is then:
f(x) = ∑_I⊆{1,…,n} a_I ( ∏_i∈ I x_i) = ∑_I⊆supp(x)a_I
where a_I∈_2^m, and x=(x_1,…, x_n). The algebraic degree of f is then given by:
d_algf=max{| I | : I ⊆{1,…, n}, a_I ≠ 0}
i.e., the maximal algebraic degree of the coordinate functions of f.
We can equivalently write any (vectorial) Boolean function with n=m uniquely as a map f: _2^n→_2^n, in the form:
f(x)=∑_i=0^2^n-1δ_i x^i
where δ_i ∈_2^n, via the isomorphism between the vector space _2^n and the finite field with 2^n elements.
When we write a Boolean function in the form of <ref>, we say it is in multivariate form, and when we write it in the form of <ref>, we say it is in univariate form. We will primarily work with Boolean functions in univariate form from here.
§.§ Linearised polynomials
Due to <cit.>*Thm. 4.3, linearised polynomials form a key element in the study of the second-order differential uniformity of Boolean functions.
A polynomial of the form:
L(x)=∑_i=0^mα_ix^p^i,
where α_i∈_p^n and p a prime, is called a p-polynomial over _p^n. If the value of p is fixed or clear from context, L may also be called a linearised polynomial.
Note that the term linearised comes from the fact that the polynomial L can be considered as a linear operator over _p^n.
Let R be a commutative ring of characteristic p. Then for all a,b∈ R:
(a+b)^p^n=a^p^n+b^p^n, and (a-b)^p^n=a^p^n-b^p^n
§.§ Differential uniformity
We begin with the following key definitions.
Given finite Abelian groups (G_1, +) and (G_2, +), some function f: G_1 → G_2, and a∈ G_1^*, we define the discrete derivative of f in direction a by:
D_af(x)= f(x+a)-f(x).
Moreover, if f: G → G for a group (G,+), given a_i ∈ G^* for i=1,…, k, we denote repeated differentiation in directions a_i by:
D_a_1, a_2, …, a_k^(k) f(x) = D_a_1D_a_2⋯ D_a_kf(x).
Note that applying the discrete derivative to a Boolean function always reduces the algebraic degree by at least 1 <cit.>.
We can then define the differential uniformity of a given Boolean function.
Let G_1 and G_2 be finite Abelian groups. Then f: G_1 → G_2 is called differentially δ-uniform if, for every non-zero a∈ G_1 and b∈ G_2, the equation D_af(x)=b has at most δ solutions in G_1. The minimum such δ is denoted by δ_f and is called the differential uniformity of f.
We often consider differentiation in the group (_p^n, +). Functions of optimal differential uniformity are known as perfect nonlinear functions.
We say that a function f: _p^n→_p^n is perfect nonlinear if it is differentially 1-uniform.
Note that, when p=2, for any a we have D_af(x) = D_af(x+a). As such in this case, the function D_af can be at best 2-to-1, so f can be at best differentially 2-uniform.
We say that a function f: _2^n→_2^n is almost perfect nonlinear, or APN, if the function D_af is 2-to-1 for every a∈_2^n^*, i.e., |{ D_af(x) : x∈_2^n}| =2^n-1 for all a≠ 0.
We briefly mention the paper <cit.>, which studies the distribution of the second-order derivative of polynomials over _2^n. They first define the second-order differential uniformity as:
δ^2(f)=max_a≠ a'∈_2^n^*, b∈ F_2^n|{x∈_2^n : D^(2)_a,a'f(x)=b}|.
Note that an equivalent definition is given recently in <cit.>, referred to as the double differential uniformity. The definition we use in the following work matches these definitions, with altered notation.
Given n∈, let f: _2^n→_2^n be a Boolean function. Then we say that f has second-order differential uniformity ∇_f if ∇_f is the least integer such that, for every a,b∈_2^n^*, a≠ b, and every c∈_2^n, the equation D_a,b^(2)f(x)=c has at most ∇_f solutions, i.e.,
∇_f=max_a≠ b∈_2^n^*, c∈ F_2^n|{x∈_2^n : D^(2)_a,bf(x)=c}|.
Note that we always have:
D_a,b^(2)f(x)=D_a,b^(2)f(x+a)=D_a,b^(2)f(x+b)=D_a,b^(2)f(x+a+b),
and thus for any Boolean function f, we have ∇_f≥ 4. Moreover, ∇_f is always a multiple of 4.
Given n∈, let f: _2^n→_2^n be a Boolean function. We say that f has optimal second-order differential uniformity if ∇_f=4.
Note that the second-order differential uniformity is an affine invariant.
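The definition above can be checked directly by brute force for small fields. The Python sketch below is our own illustration: it builds _2^5 by hand using the irreducible polynomial x^5+x^2+1 (an assumption of the sketch; any irreducible degree-5 polynomial gives an isomorphic field) and computes ∇_f for a power function.

N, MOD = 5, 0b100101  # bit mask of the assumed field polynomial x^5 + x^2 + 1

def gf_mul(a, b):
    # Carry-less multiplication in GF(2^N), reduced modulo the field polynomial.
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> N) & 1:
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def second_order_uniformity(d):
    # max over a != b, both non-zero, and c of #{x : D^(2)_{a,b} f(x) = c} for f(x) = x^d.
    f = [gf_pow(x, d) for x in range(1 << N)]
    best = 0
    for a in range(1, 1 << N):
        for b in range(1, 1 << N):
            if a == b:
                continue
            counts = {}
            for x in range(1 << N):
                c = f[x] ^ f[x ^ a] ^ f[x ^ b] ^ f[x ^ a ^ b]
                counts[c] = counts.get(c, 0) + 1
            best = max(best, max(counts.values()))
    return best

# d = 7 = 2^2 + 2 + 1 with (k, n) = (1, 5): the results of the next section give the
# optimal value 4, which this brute-force check reproduces.
print(second_order_uniformity(7))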
§ POWER FUNCTIONS WITH EXPONENT 2^2K+2^K+1
We first consider a particular class of power functions, and give results on its second-order differential uniformity in all cases. We note similarity to the well-studied Bracken-Leander functions <cit.>, but without the restriction n=4k.
Let k,n∈ℕ satisfy (k,n)=1. Then the Boolean function f: _2^n→_2^n given by f(x)= x^2^2k+2^k+1 has optimal second-order differential uniformity.
By <cit.>*Prop. 7, it is equivalent to consider only the derivatives D_1,c^(2)f, where c∈_2^n^*. Let q=2^k. We then have:
D_1,c^(2)f(x) =x^q^2+q+1 + (x+c)^q^2+q+1 + (x+1)^q^2+q+1 + (x+c+1)^q^2+q+1
= x^q^2+q+1 + (x+c)(x^q^2+c^q^2)(x^q+c^q)+(x+1)(x^q^2+1)(x^q+1)
+(x+c+1)(x^q^2+c^q^2+1)(x^q+c^q+1)
⋮
= x^q^2(c^q+c) + x^q(c^q^2+c)+x(c^q^2+c^q)
+ c^q^2+q + c^q^2+1 + c^q^2 + c^q+1 + c^q + c
Suppose there exists x,y such that D_1,c^(2)f(x)= D_1,c^(2)f(y). Then defining z=x+y we have:
0 = z^q^2(c^q+c) + z^q(c^q^2+c)+z(c^q^2+c^q).
Clearly, we have solutions for z∈{0,1}, so assume z∉{0,1}. As c∉{ 0,1} and (k,n)=1, we have c^q^2≠ c^q, c^q^2≠ c, and c^q ≠ c. We then have:
c^q^2+c^q/c^q+c = (c^q+c)^q/c^q+c = (c^q+c)^q-1,
and thus:
c^q^2+c/c^q+c = c^q^2+c^q+c^q+c/c^q+c=c^q^2+c^q/c^q+c + 1= (c^q+c)^q-1 + 1.
Dividing <ref> by (c^q+c), from <ref> and <ref> we have:
0 = z^q^2 + z^q((c^q+c)^q-1 + 1)+z(c^q+c)^q-1
0 = z^q^2 + z^q + z^q(c^q+c)^q-1 + z(c^q+c)^q-1
0 = (z^q + z)^q + (z^q + z) (c^q+c)^q-1
0 = (z^q + z)^q-1 + (c^q+c)^q-1
1 = (z^q + z/c^q + c)^q-1.
Since gcd(q-1, 2^n-1) = 2^(k,n)-1 = 1, the only (q-1)-th root of unity in _2^n^* is 1, and thus:
z^q + z = c^q + c
z^q + c^q = z + c
(z+c)^q = z + c
which implies z+c ∈{0,1}. Thus the only solutions are z∈{0,1,c,1+c}, so D_1,c^(2)f is 4-to-1. As this holds for all c∈_2^n^*, we have ∇_f=4.
Let k,n∈ℕ satisfy (k,n)>1. Then the Boolean function f: _2^n→_2^n given by f(x)= x^2^2k+2^k+1 has maximal second-order differential uniformity, i.e., second-order differential uniformity 2^n.
From <ref>, we have:
0 = z^q^2(c^q+c) + z^q(c^q^2+c)+z(c^q^2+c^q).
where q=2^k. As (k,n)=u>1, we have that _2^u is a subfield of _2^n. Thus whenever c∈_2^u∖{0,1} we have c^q^2=c^q=c, so choosing any such c we have that <ref> holds for all z. As such, D_1,c^(2)f(0)= D_1,c^(2)f(0+z) for all z ∈_2^n, so f has second-order differential uniformity 2^n.
Combining the above, we find the following result.
Given k,n∈ℕ, define the Boolean function f: _2^n→_2^n by f(x)= x^2^2k+2^k+1. Then f has optimal second-order uniformity if and only if (k,n)=1. Otherwise, f has second-order uniformity 2^n.
By Proposition <ref>, we have that (k,n)>1 implies that f is not optimal, and so by the contrapositive, we have that f being optimal implies (k,n)=1. Along with Theorem <ref>, this completes the if and only if statement. The second statement is exactly Proposition <ref>.
Note that f having second-order uniformity 2^n does not provide information on the exact value of (k,n).
§ GENERAL CUBIC MONOMIALS
We now discuss power functions of algebraic degree 3 in the general case. The following is a necessary condition for optimality resembling the contrapositive of Proposition <ref> used in the proof of Theorem <ref>.
Let i,j,n∈ℕ, 0<i<j<n, j≠ 2i, and define the Boolean function f: _2^n→_2^n given by f(x)= x^2^j+2^i+1. Suppose f has optimal second-order differential uniformity. Then if n is odd, we have (i,n)= (j,n)=(j-i,n)=1. If n is even, then one of these is equal to 2, and the others are equal to 1.
Let c∈_2^n^* such that c≠ 1. Then we have:
D_1,c^(2)f(x) = x^2^j+2^i+1+(x+1)^2^j+2^i+1+(x+c)^2^j+2^i+1+(x+1+c)^2^j+2^i+1
= x^2^j+2^i+1 + (x+1)(x^2^i+1)(x^2^j+1)+(x+c)(x^2^i+c^2^i)(x^2^j+c^2^j)
+(x+1+c)(x^2^i+1+c^2^i+1)(x^2^j+1+c^2^j+1)
⋮
= x(c^2^j+c^2^i)+x^2^i(c^2^j+c)+x^2^j(c^2^i+c)
+ (c^2^j+c^2^i+1)+(c^2^i+c^2^j+1)+(c^2^i+2^j+c).
Then as f is optimal, for each fixed x there exist exactly four elements y such that D_1,c^(2)f(x)= D_1,c^(2)f(y). Defining z=x+y, we have:
0 = z(c^2^j+c^2^i)+z^2^i(c^2^j+c)+z^2^j(c^2^i+c)
= z(c^2^j+c^2^i)+z^2^i(c^2^j+c^2^i+c^2^i+c)+z^2^j(c^2^i+c)
= (z+z^2^i)(c^2^j+c^2^i)+(z^2^j+z^2^i)(c^2^i+c).
and similarly:
0 = (z+z^2^j)(c^2^j+c^2^i)+(z^2^j+z^2^i)(c^2^j+c), and
0 = (z+z^2^j)(c^2^i+c)+(z+z^2^i)(c^2^j+c).
Firstly, suppose (i,n)≥ 3. Then there would exist c∈_2^(i,n)⊆_2^n, c≠ 0,1, such that c^2^i+c= 0. Then, from Equation <ref>, we have:
0 = (z+z^2^i)(c^2^j+c^2^i)
for at most 4 solutions. But there exist at least 8 elements z∈_2^(i,n), contradicting f being optimal. As such, we must have (i,n)≤ 2. Similarly, by Equation <ref> we must have (j,n)≤ 2, and by Equation <ref> we must have (j-i,n)≤ 2. Note then that if n is odd, we have (i,n)=(j,n)=(j-i,n)=1. This proves the first statement.
Assume now that n is even, and that (i,n)=2, i.e. i is even also. Again, there would exist c∈_2^(i,n)⊆_2^n, c≠ 0,1, such that c^2^i+c= 0, and from Equations <ref> and <ref>, we have:
0 = (z+z^2^i)(c^2^j+c^2^i) , and
0 = (z+z^2^i)(c^2^j+c).
As f is optimal, we must have c^2^j-i≠ c and c^2^j≠ c for all c∈_2^(i,n) (otherwise these equations would hold for all z for some c). As such, we must have c∉_2^j∩_2^n and c∉_2^j-i∩_2^n. The case c∉_2^j∩_2^n requires either _2^j⊈_2^n or _2^(i,n)⊈_2^j, i.e. either (j,n)=1 or ((i,n),j)=(i,j,n)=1. If (i,j,n)=1, we must have that j is odd, and so (j,n)≠ 2, so (j,n)=1. Similarly, from c∉_2^j-i∩_2^n we see that (j-i,n)=1.
Similarly, beginning with the assumption that (j,n)=2 implies that (i,n)=1=(j-i,n), and assuming (j-i,n)=2 implies (i,n)=1=(j,n).
Conversely, assume (i,n)=1=(j,n). Then i,j are odd, so j-i is even, so (j-i,n)=2. Similarly, (i,n)=1=(j-i,n) implies (j,n)=2, and (j,n)=1=(j-i,n) implies (i,n)=2. This concludes the proof of the second statement.
We note that the converse is generally false. For example, d=11 for n=7,8 satisfies the result, but was computed to have second-order differential uniformity 8 in both cases.
The next result leads to a sufficient condition for a general power function of algebraic degree 3 to be affine equivalent to a Boolean function of the form discussed in Theorem <ref>.
Let i,j,n∈ℕ, 0<i<j<n, and define the function f: _2^n→_2^n given by f(x)= x^2^j+2^i+1. Then if j≡ 2i (mod n), or i≡ 2j (mod n), or i+j≡ 0 (mod n), there exists k such that f is affine equivalent to g(x)= x^2^2k+2^k+1.
If j≡ 2i (mod n), we have j=2i as i,j<n, and so f=g for k=i, and we are done. If i ≡ 2j (mod n), define k=j-i, and define A(x)=x^2^2k. Then:
A(f(x)) = (x^2^j+2^i+1)^2^2k
= x^2^3j-2i+2^2j-i+2^2k
=x^2^k2^2j-i+2^2j-i+2^2k
=x^2^2k+2^k+1
=g(x).
Note that the fourth equality holds as 2^2j-i≡ 1 (mod 2^n-1). Finally, if i+j≡ 0 (mod n), define k=i, and define A(x)=x^2^k. Then:
A(f(x)) = (x^2^j+2^i+1)^2^k
=x^2^i+j+2^2i+2^i
=x^2^2k+2^k+1
=g(x).
Together with Theorem <ref>, this provides the following classes of Boolean functions with optimal second-order differential uniformity.
Let i,j,n∈ℕ, 0<i<j<n, and define the function f: _2^n→_2^n given by f(x)= x^2^j+2^i+1. Then if either:
* j≡ 2i (mod n), with (i,n)=1, or
* 2j≡ i (mod n), with (j-i,n)=1, or
* j≡ -i (mod n), with (i,n)=1,
the function f has optimal second-order differential uniformity.
§ EXPERIMENTAL RESULTS
To conclude, we provide topics for further investigation arising from computational data. We computed all optimal power functions of the form f:_2^n→_2^n, f(x)=x^d, for 4≤ n≤ 20. Noting that x^d is affine equivalent to x^2^id for all i, we consider exponents up to this transformation. Discussing the data, we make the following key observations.
We found exactly two exponents of algebraic degree 4, d=15,85 at n=5,10 respectively. Observe that these exponents are of the form d=2^3k+2^2k+2^k+1 for k=1,2. Notably, d=585, corresponding to k=3 was computed to have second-order differential uniformity 8 for n=15.
All other computed optimal exponents are of algebraic degree 3. Every computed optimal cubic exponent was of the form d=2^2k+2^k+1 for some k satisfying (k,n)=1 (up to the aforementioned transformation). In other words, every computed optimal cubic exponent is of the form described in Theorem <ref>.
|
http://arxiv.org/abs/2409.02746v1 | 20240904142656 | A Systematic Survey of Moon-Forming Giant Impacts. II. Rotating bodies | [
"Thomas Meier",
"Christian Reinhardt",
"Miles Timpe",
"Joachim Stadel",
"Ben Moore"
] | astro-ph.EP | [
"astro-ph.EP"
] |
Thomas Meier ([email protected]; ORCID 0000-0001-9682-8563), Department of Astrophysics, University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland
Christian Reinhardt (ORCID 0000-0002-4535-3956), Department of Astrophysics, University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland; Physics Institute, Space Research and Planetary Sciences, University of Bern, Sidlerstrasse 5, CH-3012 Bern, Switzerland
Miles Timpe (ORCID 0000-0003-1938-7877), Department of Astrophysics, University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland
Joachim Stadel (ORCID 0000-0001-7565-8622), Department of Astrophysics, University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland
Ben Moore (ORCID 0000-0001-5996-171X), Department of Astrophysics, University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland
§ ABSTRACT
In the leading theory of lunar formation, known as the giant impact hypothesis, a collision between two planet-size objects resulted in a young Earth surrounded by a circumplanetary debris disk from which the Moon later accreted. The range of giant impacts that could conceivably explain the Earth-Moon system is limited by the set of known physical and geochemical constraints. However, while several distinct Moon-forming impact scenarios have been proposed—from small, high-velocity impactors to low-velocity mergers between equal-mass objects—none of these scenarios have been successful at explaining the full set of known constraints, especially without invoking one or more controversial post-impact processes. Allowing for pre-impact rotation of the colliding bodies has been suggested as an avenue which may produce more promising collision outcomes. However, to date, only limited studies of pre-impact rotation have been conducted. Therefore, in the second paper of this series, we focus on pairwise impacts between rotating bodies. Using non-rotating collisions as a baseline, we systematically study the effects of rotation on collision outcomes. We consider nine distinct rotation configurations and a range of rotation rates up to the rotational stability limit. Notably, we identify a population of collisions that can produce low post-impact angular momentum budgets and massive, iron-poor protolunar disks.
§ INTRODUCTION
The prevailing theory on the formation of Earth's Moon is the Giant Impact (GI) hypothesis. It proposes that a collision between the young Earth and a planetary-sized body ejected material into a circumplanetary disk from which the Moon then formed <cit.>. In the leading version of this hypothesis, called the "canonical" Moon-forming impact, the impactor is roughly Mars-sized and the collision is oblique and occurs at a low impact velocity, v_imp≃ v_esc, where v_esc is the mutual escape velocity of the two bodies.
A successful Moon-forming collision must satisfy a number of known constraints. As these constraints are already discussed at length in <cit.> (hereafter Paper I), here we only reiterate those constraints which are directly relevant to our simulations. First, such a collision has to eject at least one lunar mass of material into orbit to allow the formation of the Moon. The proto-lunar disk also has to be strongly depleted in iron to explain the small iron core of the Moon, which is ≤1.5 per cent of the lunar mass <cit.>. Then the total angular momentum of the Earth-Moon system has to be consistent with the observed value. Finally, the Earth and the Moon have an indistinguishable isotopic composition in several elements including ^18O/^17O <cit.>, ^50Ti/^47Ti <cit.>, and ^182W/^184W <cit.>.
Early simulations <cit.> suggest that the canonical scenario could eject approximately one lunar mass of material into orbit while simultaneously reproducing the observed angular momentum (AM) of the Earth-Moon system and the low iron content of the Moon. However, most of the material that forms the protolunar disk is derived from the impactor and it therefore exhibits poor mixing. As a consequence, new scenarios were proposed to reconcile the GI hypothesis with isotopic constraints. One of these scenarios, first proposed by <cit.>, is a merger of near-equal mass bodies. This scenario can produce near-perfect mixing due to the symmetry of the impact. Another scenario that can result in a well-mixed protolunar disk is a high-velocity impact by a small impactor onto a rapidly spinning proto-Earth <cit.>. This latter scenario can recover the isotopic similarity by ejecting material primarily from the proto-Earth into orbit. However, both of these scenarios result in an excess of angular momentum of 1–2 times the observed value. It is still being debated by how much the two proposed post-impact processes can reduce the angular momentum of the Earth-Moon system. Estimates for the solar evection resonance range from a few percent <cit.> to several <cit.> depending on the underlying tidal model, while the Laplace plane transition can, depending on the Earth's initial obliquity, reduce the initial angular momentum by a factor of two to three <cit.>.
During the planet formation process, terrestrial planets are expected to rotate rapidly due to accretion of small bodies and giant impacts (e.g., ). Accounting for different pre-impact spins of the colliding bodies when investigating the Moon-forming GI will broaden the parameter space and expand the range of collision outcomes. However, most prior work on the GI hypothesis has been limited to initially non-rotating bodies. So far, only a few studies, e.g., <cit.>, <cit.>, <cit.>, and <cit.>, have investigated impacts between rotating bodies but those were limited to (relatively) narrow regions of the giant impact parameter space, such as the canonical impact or high-velocity impacts on a rapidly spinning proto-Earth. In this paper, we investigate collisions with pre-impact rotation of both the target and the impactor.
As it stands, no giant impact scenario has been shown to simultaneously reproduce all known constraints of the Earth-Moon system without requiring very specific assumptions regarding post-impact processes or the initial composition of the colliding bodies. Moreover, prior work has largely focused on a limited range of impact parameters in order to explain specific observational constraints. A systematic investigation of the parameter space of potential Moon-forming impacts has so far not been performed.
Therefore, in this work, we present a systematic survey of Moon-forming giant impacts. The aim of this study is to provide the community with a comprehensive survey of the parameter space and a systematic analysis of the collision outcomes. The simulations in this study assume a single giant impact event and the subsequent post-impact analysis determines whether any such event can simultaneously explain the observed physical, compositional, and geochemical constraints of the Earth-Moon system.
We have chosen to split the results into two papers in order to keep the results tractable. Paper I focused on the subset of collisions without pre-impact rotation and provides a baseline against which the effects of pre-impact rotation can be compared. In Paper I, we found that, in order to obtain a sufficiently massive protolunar disk (e.g., M_d ≥) without pre-impact rotation, an initial angular momentum budget of at least two times the current value of the Earth-Moon system (J_0 ≳ 2 in these units) is required. Therefore, without pre-impact rotation, a post-impact process capable of removing at least one unit of the current Earth-Moon angular momentum is needed to reconcile such collisions with the observational constraint. This also clearly refutes the canonical scenario, as it does not produce a disk massive enough to form the Moon. Furthermore, Paper I also demonstrated that good mixing between proto-Earth and impactor material can only be consistently realized in low-velocity collisions between near-equal mass bodies, as proposed in <cit.>. This type of equal-mass merger together with a post-impact process that is able to remove at least one such unit would thus be able to explain the formation of the Moon.
In the present paper (hereafter Paper II), we broaden the parameter space and consider collisions with pre-impact rotation of the proto-Earth and impactor for a wide range of rotational configurations. The expanded parameter space adds 7152 collisions to the data set presented in Paper I. Adding pre-impact rotation to the colliding bodies introduces six new degrees of freedom (i.e., two angles for the orientation of the spin axis and a rotation rate for each body). We show that these new impacts fill some of the regions of the post-impact parameter space that were left empty by the non-rotating collisions studied in Paper I, e.g., by producing sufficient disk masses at lower post-impact angular momentum budgets. But considering these additional impacts also produces degeneracy in the disk parameters, meaning that vastly different initial conditions can produce very similar post-impact disks.
This paper is structured as follows: in Section <ref>, we will first reiterate the methods used to perform the impact simulations and the subsequent analysis, while emphasizing the differences to Paper I. In Section <ref>, the results of all impact simulations, including those already studied in Paper I, are presented and discussed. Finally, in Section <ref> we provide a summary of our findings and present their implications for the giant impact hypothesis. In Appendix <ref>, the pre-impact parameter space is described in detail. In Appendix <ref>, we investigate the correlations between selected pre- and post-impact parameters. In Appendix <ref>, we explore the concept of immediately formed satellites.
§ METHODS
The methods used to perform the impact simulations and their analysis follow the procedures described in Section 3 of Paper I, thus in this section we highlight the differences to Paper I. We use the Smoothed Particle Hydrodynamics (SPH) code <cit.> with modifications for giant impact simulations <cit.>. After the impact, the post-impact state of the system is analyzed with <cit.> to identify gravitationally bound fragments and classify the outcome of the collision. For collisions that result in a merger, the planet, circumplanetary disk, and ejecta are differentiated using the novel disk finder presented in Paper I. A detailed description of the disk finding algorithm can be found in Appendix C of Paper I and the code implementation is freely available on GitHub at <cit.>.
§.§ Rotating pre-impact models
In the present paper, we consider pre-impact rotation of the target (i.e., the proto-Earth) and impactor. This introduces additional steps when generating the initial planet models. In order to achieve rotation in the targets and impactors, we follow the approach introduced in <cit.>. First, following the same procedure explained in Section 3.2 of Paper I, we create a non-rotating model with the desired mass, Earth-like composition (an iron core of 33 per cent and a rocky mantle of 67 per cent by mass) and surface temperature (T_s = 1000 K). We then evolve the particle representation of the model in a co-rotating coordinate frame and gradually increase the centrifugal force until the desired (uniform) angular velocity is achieved.
The pre-impact rotation rate is parameterized using the critical angular velocity, which is the angular velocity at which the body is expected to become rotationally unstable,
Ω_crit = √(π G ρ̅ h_crit) ,
where G is Newton's gravitational constant, ρ̅ is the bulk density of the non-rotating model, and h_crit = 0.44931 is derived from MacLaurin's formula <cit.>. The initial angular velocity of each model is therefore parameterized as,
Ω=f_ΩΩ_crit ,
where f_Ω is a scalar value called the angular velocity factor.
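For orientation, with an Earth-like bulk density of about 5514 kg m^-3 (our assumed value, not one quoted in this paper) the stability limit above corresponds to a spin period of roughly 2.4 hours; the arithmetic is shown in the short Python sketch below.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho_bulk = 5514.0    # assumed bulk density of the non-rotating model, kg m^-3
h_crit = 0.44931

omega_crit = math.sqrt(math.pi * G * rho_bulk * h_crit)   # critical angular velocity
print(omega_crit, 2.0 * math.pi / omega_crit / 3600.0)    # rad s^-1 and spin period in hours

f_omega = 0.5                                             # example angular velocity factor
print(f_omega * omega_crit)                               # Omega = f_Omega * Omega_crit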
The model is then transferred from the co-rotating frame into the stationary frame by adding the velocity components corresponding to the solid body rotation to each particle and then rotating the body to the desired rotation orientation.
§.§ Initial conditions
The initial conditions for each collision are generated as described in Paper I, with the addition of six free parameters, i.e. the rotation rate and the orientation of the rotation axis of the two bodies. As in Paper I, the initial total mass (M_tot) in every collision is 1.05 M_⊕. The masses of the target and impactor are determined by M_tot and γ,
M_targ = M_tot/(1 + γ) ,
M_imp = M_tot γ/(1 + γ) .
The total number of particles in each collision is set to 100000 and the particles are distributed amongst the target and projectile in proportion to their mass.
Sampling the six additional free parameters introduced by pre-impact rotation (i.e., two angles for the orientation of the spin axis and a rotation rate for each body) with any reasonable resolution would lead to an infeasible number of simulations. Thus, in order to enable a tractable systematic study, we reduce these six parameter to two by constraining the mutual orientation of the colliding bodies' angular momentum vectors to three distinct configurations. The rotational state of each body is specified relative to the angular momentum vector of the collision's orbit, which always points in the positive z-direction. The possible states are: the body is not rotating (N; for “non-rotating”), the body's angular momentum vector is oriented parallel (U; for “up”) or anti-parallel (D; for “down”) to the collision's orbital angular momentum vector. Thus, all angular momentum vectors (J⃗_0, J⃗_orb, J⃗_targ, J⃗_imp) point in either the positive or negative z-direction and can be reduced to scalars (J_0, J_orb, J_targ, J_imp).
From the three possible spin orientations of the bodies (N/U/D), nine different configurations are possible (because the angular velocity factor f_Ω is always the same for both bodies except when one is zero because it is non-rotating): the target and impactor are both non-rotating (NN), the target and impactor are both rotating with their angular momentum vectors pointing upwards (UU), the target is rotating with its angular momentum vector pointing upwards and the impactor is non-rotating (UN), the target is non-rotating and impactor is rotating with its angular momentum vector pointing upwards (NU), and so on. Note that the first letter in this notation always indicates the rotational state of the target and the second letter the rotational state of the impactor.
Of the 7649 simulations performed for this study, 497 are non-rotating (NN) collisions. These collisions were extensively discussed in Paper I. For the remaining rotational configurations, simulations are performed with the pre-impact bodies rotating at different rotation rates set by Equation (<ref>).
In contrast to Paper I, wherein J_0 is simply the orbital angular momentum (of the impactor's orbit around the target), J_0 in this paper is the sum of the collision's orbital angular momentum (motion of the impactor relative to the target) and the rotational angular momentum of the target and impactor (spin). Thus, for a given rotational configuration and value of J_0, the orbital angular momentum (J_orb) can be obtained from
J_0 = J_orb + J_targ + J_imp ,
where the angular momenta are measured in the collision's center of mass frame. The pre-impact angular momenta J_0, J_targ, and J_imp are therefore independent parameters in the initial conditions while J_orb depends on these other parameters. With this, we can calculate the asymptotic impact parameter (b_∞) similar to Paper I (note the change from J_0 to J_orb):
b_∞ = (J_orb/(M_tot v_∞))(γ + 1)^2/γ .
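The bookkeeping from the free parameters to the asymptotic impact parameter is summarised in the short Python sketch below; the numerical values (including the reference Earth-Moon angular momentum used to scale J_0) are illustrative assumptions rather than values taken from this study.

import math

M_EARTH = 5.972e24   # kg
J_EM = 3.5e34        # kg m^2 s^-1, approximate present Earth-Moon angular momentum (assumed)

def impact_geometry(J0, gamma, v_inf, J_targ=0.0, J_imp=0.0, M_tot=1.05 * M_EARTH):
    # Target and impactor masses, orbital angular momentum, and b_infinity (SI units).
    M_targ = M_tot / (1.0 + gamma)
    M_imp = M_tot * gamma / (1.0 + gamma)
    J_orb = J0 - J_targ - J_imp                              # J_0 = J_orb + J_targ + J_imp
    b_inf = J_orb / (M_tot * v_inf) * (gamma + 1.0) ** 2 / gamma
    return M_targ, M_imp, J_orb, b_inf

# Example: J_0 = 2 J_EM, gamma = 0.5, v_inf = 1 km/s, non-rotating bodies.
print(impact_geometry(2.0 * J_EM, 0.5, 1.0e3))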
From this point onward, the procedure is exactly the same as in Paper I, with the only exception being that for the radii of the bodies we use the equatorial radii (i.e. the largest distance between the center of the body and any particle). This means that the definition of R_crit used in Paper I is replaced with R_crit = R_targ,eq + R_imp,eq, using the equatorial radii of rotating bodies and normal radii for non-rotating bodies.
The total length of each simulation (τ) is the sum of the pre-impact phase (τ_pre)—which depends on the initial pre-impact state (e.g., v_∞) and is determined analytically—and the post-impact phase (τ_post) which is fixed. In this study, the post-impact phase is equivalent to τ_post = 7 days. In some cases, graze-and-merge encounters have not resolved within this time limit. In these cases, which are rare, the simulation is continued in blocks of 7 days, until the encounter has either resolved or a maximum in-simulation time of 42 days is reached. Those simulations that do not resolve within 42 days are excluded from the analysis.
In summary, the free parameters in this study are the rotational configuration, total initial angular momentum J_0, asymptotic relative velocity v_∞, impactor-to-target mass ratio γ and angular velocity factor f_Ω. The experimental design of the resulting parameter space is described in detail in Appendix <ref>.
§ RESULTS & DISCUSSION
In Table <ref>, we provide an overview of the collision outcomes, including the results of the non-rotating simulations presented in Paper I. Out of the grand total of 7649 simulations that were performed, 6247 are considered in our analysis as potential Moon-forming impacts, while 1402 are rejected for various reasons. Of the 1402 simulations that are rejected, 226 are rejected due to the post-impact bound mass being too small (M_b < 1 M_⊕), 1170 are hit-and-run collisions which are excluded from the analysis (see Paper I for a detailed discussion), and a further six collisions are still classified as unresolved graze-and-merge encounters (i.e., the merger has yet to occur by the maximum simulation time of 42 days after the initial impact).
In our analysis, we focus on four post-impact properties which are either directly related to known constraints (J_b) or are necessary proxies to such constraints (M_d, F^Fe_d, δ_pd). J_b is the angular momentum budget of the bound material remaining after the impact and should match the currently observed angular momentum budget of the Earth-Moon system or be within a range set by post-impact angular momentum removal processes. M_d is the mass of the post-impact circumplanetary disk (i.e., the protolunar disk) and must be at least one lunar mass (= 0.0123 M_⊕) to provide enough material for lunar accretion; however, previous studies suggest that 2–4 lunar masses are required under realistic accretion efficiencies. F^Fe_d is the iron mass fraction of the protolunar disk, a property which will affect the iron mass fraction of the Moon which subsequently forms out of the disk. As the iron mass fraction of the Moon is constrained to less than 2 per cent, F^Fe_d should not exceed 0.02 unless differential accretion rates are invoked. δ_pd is the compositional difference between the post-impact planet and disk (see Equation <ref>), where the impactor mass fraction is used as a proxy for isotopic composition.
To be considered a successful Moon-forming impact, a simulation must reproduce all four of these constraints simultaneously; if post-impact angular momentum removal (e.g., a Solar evection resonance) or compositional mixing processes (e.g., a synestia) are invoked, then certain constraints can be relaxed but the resulting parameter ranges must still be satisfied simultaneously. As post-impact compositional equilibration can only occur when a synestia is present, the proximity of the post-impact planet to the hot-spin stability limit (R_p/R_HSSL) must also be considered when invoking such a process. Therefore, the correlation between these post-impact properties is crucial to identifying Moon-forming impacts and we dedicate a considerable part of the subsequent analysis to these correlations.
In Figure <ref>, the four main post-impact properties are shown for all 6247 simulations considered in this analysis. The distribution of each property can be discerned, as well as several correlations between properties. Several trends are worth remarking on. First, for the range of pre-impact conditions considered in this study (see Appendix <ref>), the collision outcomes are very diverse. Indeed, J_b ranges from 0.08 to 3.27, with pronounced peaks at the values of J_0 that are more frequently sampled by the initial conditions (i.e., 1.0, 1.5, 2.0, 2.25, 2.5 and 3.0). M_d ranges from 0 to 6.71 lunar masses, and a significant number of simulations (N=1166) produce disks with masses in the favorable range of 2 to 4 lunar masses. The impacts that do not produce any disk (N=1022; M_d=0) result in undefined values of F_d^Fe and δ_pd, and are excluded from panels involving either value. F^Fe_d spans the full range between pure rock (F_d^Fe=0) and pure iron (F_d^Fe=1), with a peak in the distribution at F_d^Fe≃0.03; a significant number of cases (N=1478) produce disks with no iron (F^Fe_d=0). δ_pd ranges from δ_pd=-0.51 (where the disk has a lower impactor mass fraction than the planet) to δ_pd=1.00 (where the disk is composed entirely of impactor material), with a pronounced peak at zero (perfect mixing) and at δ_pd=0.5 (where the disk has a higher impactor fraction than the planet).
§.§ Relation between pre- and post-impact angular momentum budgets
We observe a strong correlation (r=0.97) between the pre-impact angular momentum budget (J_0; measured in the center-of-mass frame of the colliding bodies) and the post-impact angular momentum budget of the bound material (J_b; measured in the frame of the post-impact planet) as shown in Figure <ref> in Appendix <ref>. In Paper I, we assumed that this correlation holds because little mass is lost in the collision and the associated ejecta does not carry away a significant amount of angular momentum. This is confirmed by the top panel of Figure <ref>, which shows the relationship between J_0 and J_b for all non-rotating collisions. With the exception of the low-γ simulations (i.e., small impactors) that do not produce massive disks for non-rotating collisions, all results scatter around the J_b=J_0 line with differences between J_0 and J_b of less than 20 per cent. Thus, J_0 is an effective proxy for J_b in non-rotating collisions and, in Paper I, we used J_0 in the analysis of the collision outcomes.
When pre-impact rotation is introduced (center and bottom panels of Figure <ref>), J_b and J_0 remain strongly correlated (r=0.97). Indeed, in the majority of cases (N=5896), the difference between J_0 and J_b is less than 20 per cent of J_0. However, in contrast to the non-rotating dataset, the rotating dataset is host to a large number of outliers, with deviations of up to 128.6 per cent of J_0. Thus, for the purposes of our analysis, J_0 is no longer a suitable proxy for J_b despite its strong correlation. We directly use J_b in the analysis that follows.
The bottom panel of Figure <ref>, which shows low-γ collisions (i.e., small impactors), suggests that there are additional regions of the pre-impact parameter space that may be of interest to the Moon-formation community. Notably, significantly larger values of J_b can be achieved for a given value of J_0, implying that one could investigate the region J_0 < 1 to obtain results with J_b ≃ 1. However, our dataset shows that the cases populating the region with J_0 = 1 and 1 < J_b < 2.0 do not produce significantly massive disks (M_d<0.1). Thus, this additional part of the pre-impact parameter space is not considered in this study but may still be explored in future work.
§.§ Post-impact parameters
Figure <ref> shows the results of all 6247 simulations in six panels, with each panel showing a pair of the four main post-impact variables (J_b, M_d, F_d^Fe, and δ_pd). Notably, it illustrates the post-impact parameter spaces sampled by prior studies, which represent different impact scenarios. These scenarios include the canonical giant impact scenario <cit.>, equal-mass mergers <cit.>, and the fast spinning proto-Earth scenario <cit.>. We note that our simulations produce similar outcomes to each of these studies, with the exception of the region 2.5≤ J_b≤2.75, where <cit.> reports slightly larger disk masses. However, since our study covers a much larger region of the pre-impact parameter space and does not focus on a specific impact scenario, our collision outcomes are more diverse.
§.§.§ Disk mass
For non-rotating collisions, the mass of the generated circumplanetary disk (M_d) is mostly determined by the pre-impact angular momentum budget (J_0), evincing a correlation of r=0.89. Moreover, collisions between non-rotating bodies require a pre-impact angular momentum budget of at least twice the current Earth-Moon angular momentum budget (J_0 ≥2) to produce a disk of at least one lunar mass (M_d ≥ 1). Notably, this implies that the canonical Moon-forming impact model cannot form sufficiently massive disks.
In Figure <ref>, we show that the strong correlation between J_0 and M_d persists for collisions between rotating bodies. Indeed, J_0 and M_d are strongly correlated at r=0.80, while J_b and M_d are also strongly correlated at r=0.78. Despite this persisting correlation, we demonstrate that impacts with a counter-rotating target (DD, DN, DU) can produce significantly more massive disks at lower values of J_b. This population of "low-J_b, high-M_d" outcomes can be seen in Figure <ref> (top panel), while Figure <ref> clearly demonstrates that these outcomes are unique to the DX configurations. Furthermore, Figure <ref> shows that this population of collisions is limited to moderate mass ratios (0.3 ≤γ≤ 0.5). The NU configuration also shows hints of a similar population, but does not achieve similarly low values of J_b as seen in the DX cases.
The population of low-J_b, high-M_d outcomes is produced only by grazing (θ_imp≳45°), low velocity (v_∞≲ 0.5 v_esc) impacts onto counter-rotating targets (DX) at moderate mass ratios (0.3 ≤γ≤ 0.5). We note that several collisions in this population satisfy the disk iron mass fraction constraint; however, none of the collisions produce planets and disks with similar compositions. Because the post-impact planets in this population are rotating well below the HSSL, they are not candidates for post-impact compositional equilibration of the planet and disk (e.g., via a synestia). This implies that, for the collisions in this population to be viable Moon-forming impacts, their targets and impactors would have to evince very similar isotopic compositions prior to impact. This population represents a novel class of Moon-forming impacts, wherein a counter-rotating target roughly the mass of Venus suffers a grazing, low-velocity impact by an impactor roughly 2-3 times the mass of Mars.
Generally, for a given value of J_b, counter-rotating targets (DD, DN, DU) are capable of generating larger disk masses than non-rotating targets (ND, NN, NU), while co-rotating targets (UD, UN, UU) tend to result in even smaller disk masses. This is because, for co-rotating targets, the material near the contact zone with the impactor is moving in the same direction as the impactor itself, effectively swallowing it. In the case of counter-rotating targets, this effect is reversed, meaning that the local material is moving towards the impactor, thereby producing larger local collision velocities. The higher local relative velocity at the impact site could result in a high vapor mass fraction of the disk. Future work should investigate this using a higher number of particles and a more sophisticated EOS to connect to studies investigating lunar accretion <cit.>.
For large impactors (γ≥ 0.1), the rotation rate (f_Ω) of the colliding bodies affects the disk mass. For counter-rotating (DX) and non-rotating targets (ND and NU), faster rotation rates result in lower disk masses. For co-rotating targets (UX), the relationship is reversed; indeed, while the UD configuration does not show a dependence on f_Ω, faster rotation rates result in more massive disks for both the UN and UU configurations.
In Paper I, we found that collisions with small impactors (γ < 0.1) between non-rotating bodies (NN) result in very small disk masses. However, for small impactors with rotating targets (DN and UN), such low-γ collisions are able to produce disks with significant mass for f_Ω=1.01. The f_Ω=1.01 cases result in significantly more massive disks than f_Ω=0.9 (for which there are no cases with M_d≥2.0) and f_Ω=0.5 (which does not produce significant disk mass at all). This implies that, for rotating bodies, it is much easier to eject mass because the material at the equator is more weakly bound than in the non-rotating case. Furthermore, we confirm the results of <cit.> that collisions with low γ and high v_∞ are able to generate massive disks but result in excess angular momentum of the Earth-Moon system (J_b≥2).
In summary, to satisfy the disk mass constraint (M_d ≥ 1), two results are useful to take note of. First, with pre-impact rotation, the disk mass remains strongly correlated with the post-impact angular momentum budget (J_b), with J_b ≥2 generally required to produce sufficiently massive disks. Second, the tyranny of this relationship can be broken by a population of grazing, low-velocity collisions in which a medium-size impactor (0.3 ≤γ≤ 0.5) strikes a counter-rotating target (DX). This population can produce massive disks (M_d > 2) with low iron mass fractions (F^Fe_d ≤ 0.02), but cannot meet the composition constraint.
§.§.§ Disk iron mass fraction
For non-rotating collisions, we showed that F^Fe_d is most strongly correlated with v_∞ (r=0.48), with high-velocity impacts tending to produce disks more enriched in iron. However, for collisions with pre-impact rotation this is not the case (F^Fe_d shows only a weak correlation of r=0.17 with b_∞).
In the collisions presented here, F_d^Fe spans the entire range of possible compositions, from pure rock (F_d^Fe=0) to pure iron (F_d^Fe = 1). While there is no clear relation between F_d^Fe and J_b or M_d, there is a general trend that the maximum iron mass fraction decreases with higher M_d and J_b. Extremely iron-rich disks can be obtained up to J_b ∼2 and tend to have a very low disk mass. A similar trend is observed for the lowest possible iron mass fraction, which decreases with higher M_d and J_b. Pure rock disks are found up to J_b ∼3. The minimum iron mass fraction decreases for higher mass disks because the contribution from a single iron particle is lower. Increasing the resolution of the simulation would make it possible to resolve lower particle masses and therefore lower iron mass fractions.
For collisions between non-rotating bodies (NN), F^Fe_d is positively correlated with v_∞. In Figure <ref>, for collisions between rotating bodies, we show that this dependence on v_∞ persists for large impactors (γ≥ 0.1). In addition, F^Fe_d is positively correlated with the pre-impact rotation rate of the rotating bodies (f_Ω). For small impactors (γ < 0.1) on rotating targets, F^Fe_d does not show the same dependence on v_∞ or f_Ω. Instead, F^Fe_d is strongly affected by the direction of the target's rotation, with impacts on co-rotating targets (UN) producing much higher disk iron mass fractions.
To satisfy the disk iron mass fraction constraint (F^Fe_d ≤ 0.02), the following systematic trends can be ascertained from Figure <ref>. For large impactors (γ≥ 0.1), relatively low-velocity impacts between non-rotating or slowly rotating bodies are preferred. For small impactors (γ < 0.1), impacts with a counter-rotating target (DN) produced significantly smaller disk iron mass fractions and are therefore preferred over impacts with a co-rotating target (UN).
§.§.§ Planet-disk compositional difference
The compositional difference we use as the proxy for the isotopic similarity is defined as
δ_pd = F_d^imp - F_p^imp = (N_imp/N_tot)_d - (N_imp/N_tot)_p ,
where N_imp is the number of particles originating from the impactor and N_tot is the total number of particles in either the disk (d) or the planet (p). It exhibits a degeneracy, where different disk compositions F_d^imp can result in the same value for δ_pd depending on the corresponding planet composition. This is shown in Figure <ref> together with all data points.
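As a concrete illustration, δ_pd follows directly from the particle provenance flags returned by the disk finder. The sketch below is illustrative only; the array names are hypothetical placeholders for that output, and equal-mass particles are assumed so that particle counts are equivalent to mass fractions.

```python
import numpy as np

def delta_pd(disk_from_impactor, planet_from_impactor):
    """Compositional difference delta_pd = F_d^imp - F_p^imp.

    Each argument is a boolean array flagging, for the particles assigned to
    the disk (d) or to the planet (p), whether they originate from the impactor.
    """
    F_d_imp = np.mean(disk_from_impactor)    # (N_imp / N_tot)_d
    F_p_imp = np.mean(planet_from_impactor)  # (N_imp / N_tot)_p
    return F_d_imp - F_p_imp
```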
The composition δ_pd varies from δ_pd∼ -0.5, where the disk material has a lower impactor fraction than the planet, to being formed purely of impactor material (δ_pd∼ 1). None of the disks, no matter how low the mass, are composed entirely of target material. Disks entirely formed from the impactor are possible but tend to be low in mass. Such disks can also have a high iron mass fraction. As the bound angular momentum and disk mass increase, the composition tends to be more well mixed and the most massive disks have impactor mass fractions very similar to the planet. Those disks are generated in near-equal mass collisions which is consistent with <cit.>.
In Paper I, we found that only near equal mass mergers (i.e., γ∼ 1) can achieve near perfect mixing (δ_pd∼ 0) in the absence of pre-impact rotation. This appears to be a result of the symmetry of the impact. Once pre-impact rotation is introduced, many cases with perfect mixing are still from γ = 1 impacts (435 of 1315 with |δ_pd|≤ 0.05). However, including pre-impact rotation can break the symmetry that would otherwise exist. In these cases, the body that is co-rotating with the collision (i.e., in the “up” configuration) tends to have a higher mass fraction in the disk than in the planet. We also obtain cases with | δ_pd| ≤ 0.01 for lower impactor-to-target mass ratios. If we additionally require the disk mass to be at least one lunar mass, then γ≥ 0.5 impacts can still result in perfectly mixed disks. The impacts with γ = 0.5 all involve a co-rotating target, which enhances the ejection of material originating from the proto-Earth. The collisions with γ < 0.1 proposed by <cit.> can also produce near perfect mixing (| δ_pd| ≤ 0.01), but in these cases the disk mass is too small to allow the formation of the Moon.
From Figures <ref> and <ref>, several useful trends can be extracted. First, for large impactors, the impactor mass fraction of the proto-Earth (F^imp_p) is determined almost exactly by the pre-impact mass ratio (γ) of the colliding bodies. This is because, for the impacts in the large impactor set, the impactors tend to merge almost entirely with their targets. In contrast, the impactor mass fraction of the disk (F^imp_d) does not show any clear dependencies, but we note that planets and disks with similar compositions tend to result from more head-on impacts. Furthermore, for disks of at least one lunar mass (M_d ≥ 1), only high-γ collisions can produce sufficiently low values of F^imp_d to achieve favorable values of δ_pd. For small impactors, the dependence of F^imp_p on γ only holds weakly for low-velocity impacts (v_∞≤ v_esc). For high-velocity impacts (v_∞ > v_esc) by small impactors, γ can no longer be used to predict F^imp_p; however, F^imp_d shows a dependence on γ in this region. Because only the high-velocity impacts are capable of producing disks of at least one lunar mass, for these impacts only the lowest values of γ can produce favorable values of δ_pd.
To satisfy the planet-disk compositional similarity constraint for massive disks (M_d ≥ 1), several systematic trends can be leveraged. For large impactors, only high-γ impacts (γ≥ 0.5) are capable of producing favorable compositions (|δ_pd|≤ 0.05). Within this region, impacts by non-rotating (XN) or counter-rotating impactors (XD) are likely to produce smaller compositional differences. For small impactors, only very small mass ratios of γ≤ 0.025 and velocities in excess of 2 v_esc are able to produce disks of at least one lunar mass with favorable compositions; additionally, impacts onto a counter-rotating target (DN) tend to produce lower compositional differences than impacts onto co-rotating targets (UN).
§.§.§ Hot-spin stability limit (HSSL)
If the atmosphere of the proto-Earth and the inner edge of the protolunar disk remain in contact following the impact, then it is possible for these reservoirs to continue exchanging material. The resulting post-impact structure is known as a synestia <cit.> and may allow the Earth and protolunar disk to achieve near or total compositional equilibrium, potentially relaxing the isotopic constraints. However, for a synestia to exist, the post-impact Earth must be rotating at a rate sufficient to push its equatorial radius to the hot-spin stability limit (R_p/R_HSSL = 1). R_HSSL is determined by the disk finder; a detailed explanation of how R_HSSL is calculated can be found in Appendix B of Paper I.
For collisions without pre-impact rotation, we demonstrated that the post-impact Earth's proximity to R_HSSL is largely determined by the pre-impact angular momentum budget (J_0). For γ≳ 0.6, J_0 appears to be the sole determining variable. For γ≲ 0.6, the impact velocity (v_∞) also plays a small role, with higher impact velocities resulting in decreased proximity to the HSSL (i.e., lower values of R_p/R_HSSL) for a given pre-impact angular momentum budget. For all non-rotating collisions, a pre-impact angular momentum budget of J_0 ≥2 is required to reach the HSSL. For non-rotating collisions, this implies that any collision with a pre-impact angular momentum budget of J_0 ≤2 cannot invoke post-impact compositional mixing.
For collisions between rotating bodies, the strong dependence on the pre-impact angular momentum budget (J_0) persists. Figure <ref> demonstrates this relationship for both large (γ≥ 0.1) and small impactors (γ < 0.1). Whereas large impactors appear to reach the HSSL only at 1.75, small impactors can reach the HSSL as soon as 1.5. For large impactors, a significant number of grazing, low-velocity, low-γ impacts populate an area below the otherwise well-behaved relation. We note that this phenomenon was also present in Paper I for collisions between non-rotating bodies, whereby low-velocity, low-γ impacts can be seen receding from the HSSL as the impact angle becomes large due to the increasing angular momentum budget. For small impactors, there also exists a population of points below the otherwise well-behaved relation. However, unlike for large impactors, this population does not evince an obvious dependence on γ or the impact angle. In Figure <ref> (Panel S), a dependence on v_∞ can be discerned, with lower-velocity impacts being furthest from the HSSL.
§.§ Promising cases
A successful Moon-forming impact must satisfy a set of constraints from observations, measurements, and theory. Most of these constraints are drawn from measurements of the Earth-Moon system as it exists today, while others are provided by theoretical studies of the system's past evolution. In the context of giant impact simulations, these observations translate into constraints on the simulated post-impact properties of the proto-Earth and the protolunar disk (for a detailed discussion of these properties see Section 2 of Paper I). These constraints are the mass of the Earth, for which we use the total mass of the gravitationally bound material (M_b), the total angular momentum of the bound material (J_b), the circumplanetary disk mass (M_d), the iron mass fraction of the disk (F_d^Fe), and the relative fraction of impactor material in the disk relative to the proto-Earth (δ_pd). The Earth also has a well-known iron mass fraction, but we ignore this constraint because we set the initial core fractions of both the target and the impactor to 0.33 and the total mass to 1.05 Earth masses, such that for results that satisfy the constraint on M_b, the resulting iron mass fraction of the planet satisfies the constraint as well.
Despite the addition of several thousand new simulations that include pre-impact rotation to the data set presented in Paper I, we are still unable to identify a single impact scenario that can simultaneously satisfy all known constraints. Specifically, a collision that generates a sufficiently massive protolunar disk (M_d ≥2) and recovers the current angular momentum of the Earth-Moon system (J_b =) along with good compositional mixing remains elusive. But we can apply a subset of constraints on the data to find promising cases. In Paper I, we considered two different sets of constraints: one permissive (M_b≥1, M_d≥1 and F_d^Fe≤ 0.04) and one strict (M_b≥1, 2≤ M_d ≤4, F_d^Fe≤0.02 and |δ_pd|≤0.05). In both cases there is no constraint on the bound angular momentum because for the non-rotating bodies in Paper I we do not obtain massive disks (M_d≥1) below J_b ∼2.
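In practice, the permissive and strict cuts amount to simple boolean masks over the table of post-impact properties. A minimal sketch (assuming numpy arrays in the units used throughout, i.e. M_b in Earth masses and M_d in lunar masses):

```python
import numpy as np

def permissive_mask(M_b, M_d, F_Fe_d):
    """M_b >= 1, M_d >= 1 and F_d^Fe <= 0.04."""
    return (M_b >= 1.0) & (M_d >= 1.0) & (F_Fe_d <= 0.04)

def strict_mask(M_b, M_d, F_Fe_d, delta_pd):
    """M_b >= 1, 2 <= M_d <= 4, F_d^Fe <= 0.02 and |delta_pd| <= 0.05."""
    return ((M_b >= 1.0) & (M_d >= 2.0) & (M_d <= 4.0)
            & (F_Fe_d <= 0.02) & (np.abs(delta_pd) <= 0.05))
```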
In Figure <ref>, the 1570 promising cases under the assumption of the permissive constraints are shown. This subset of results contains cases from all rotation configurations, all impactor-to-target mass ratios γ and all sampled velocities v_∞. From the cases with counter-rotating target (i.e., DX configurations), we get some results around J_b≃1, but most of the results are beyond J_b≥1.5. Cases with good mixing start to appear at J_b≥1.75. Of the cases populating the green region of Figure <ref>, 28 satisfy these permissive constraints.
Figure <ref> shows the 135 promising cases under the strict constraints. Under these constraints, we can no longer reconcile the bound angular momentum and disk mass, which confirms the findings of Paper I, even though pre-impact rotation is considered. Like the subset with permissive constraints, this subset also contains promising cases of all rotation configurations. In general, either high (γ≥0.5) or very low (γ=0.025, as proposed by <cit.>) impactor-to-target mass ratios are able to generate promising cases. For γ≥0.5, all promising cases have asymptotic relative velocities of v_∞≤0.7 v_esc, while for γ=0.025, all promising cases are for 2.2 v_esc≤ v_∞≤ 3.0 v_esc. The minimum bound angular momentum in this subset is J_b=2.09. Of the cases populating the green region of Figure <ref>, none satisfy these strict constraints because they all have disk compositions that are more enriched in impactor material than the planet and thus do not satisfy the constraint on |δ_pd|.
Of all the known constraints, F_d^Fe is the easiest to satisfy. This is due to the fact that, in general, it is difficult to inject a significant amount of iron into the disk. Therefore, we focus on the relationship between the remaining three constraints (M_d, J_b and |δ_pd|). If we apply a constraint on the bound angular momentum (0.75≤ J_b≤1.25) together with the strict constraints, we find no results, as discussed above. However, any combination of two of these three constraints can be satisfied simultaneously.
§.§ Immediate satellite formation
In several cases, we find gravitationally bound fragments that remain on stable orbits until the end of the simulation. Some of these fragments have masses around one lunar mass and may be considered immediately formed Moons as proposed in <cit.> (and earlier by <cit.> in the context of Uranus and Neptune). Such (potential) satellites are either formed directly from the tidal disruption of the impactor (and therefore have a higher impactor fraction than the planet) or from the fragmentation of spiral arms (which results in a well-mixed composition or even disks with a lower impactor fraction than the planet). Additionally, <cit.> show that such objects can accrete a substantial amount of material from the proto-Earth during close encounters within the Roche limit and could, therefore, acquire a very similar composition to the Earth.
While <cit.> finds this immediate formation scenario to be very rare, restricted to a very narrow parameter space and only occurring in high resolution simulations, we observe it in roughly 3 per cent of the cases in our low resolution simulations over a wide range of the pre-impact parameter space (1.0≤ J_0≤3.5, 0.04≤γ≤1.0, 0.1 v_esc≤ v_∞≤ 1.0 v_esc and 0.34≤ b_∞≤ 1.09) for all different rotation configurations and all angular velocity factors f_Ω.
In Figure <ref>, the 191 cases with M_b≥1 and a bound second largest remnant with a mass of 0.5≤ M_SLR≤1.5 are shown, with the marker color indicating the mass of the SLR. Some of these fragments are embedded in relatively massive disks, while others comprise nearly the full disk mass. In some cases, the disk finder does not assign the fragment to the disk at all, because its orbit is such that it will eventually merge with the planet. These cases are excluded from Figure <ref>.
Of the 191 results shown in Figure <ref>, 137 satisfy the permissive constraints (M_b≥1, M_d≥1 and F_d^Fe≤0.04) used in Figure <ref>, while none satisfy the strict constraints. The disk iron mass fraction ranges from 0.05 per cent (mostly rock) to 27 per cent (iron rich), but there are no cases with zero disk iron mass fraction. The composition of the total disk (disk + fragment) ranges from the disk having a much lower impactor fraction than the planet (13 cases with δ_pd < 0.0) to the disk having a much higher impactor fraction than the planet (178 cases with δ_pd>0.0), with only 5 cases resulting in what we consider good mixing (|δ_pd|≤ 0.05), all of which have γ≥ 0.5. All cases with low J_b have a higher impactor fraction in the disk than in the planet. Disks that have a lower impactor fraction than the planet only occur for J_b> 2.0, and fragments with a much lower impactor fraction than the planet are only created in collisions with large impactor-to-target ratios (γ≥ 0.9) and either NX or DX configuration.
An interesting result is case 3567, which has pre-impact parameters close to the canonical model (NN, J_0=1, γ=0.1, v_∞=0.2 v_esc, b_∞= 0.8153) and produces a bound fragment of M_SLR=0.84 with a total disk mass of M_d=0.99 at a bound angular momentum of J_b=0.99, mixing parameter δ_pd=0.67 and F_d^Fe=0.0016. A plot of the end state is shown in Figure <ref>. While this case does not show adequate mixing between the proto-Earth and disk, the angular momentum of the system as well as the fragment mass and iron content are in excellent agreement with observational data. If the mixing constraint can be dropped (as suggested by <cit.>), this simulation would be an excellent match for the one that produced our Moon.
If a massive bound fragment is embedded in the disk, it may act as a seed for lunar formation and increase the fraction of the disk mass that will be accreted onto the Moon. In turn, if the disk is very massive and vapor dominated, the fragment could experience a drag force and spiral onto the proto-Earth (e.g., <cit.>). Clearly, the interplay between directly formed proto-satellites and a circumplanetary disk should be investigated in future work. Direct formation via giant impacts provides an interesting pathway for the formation of the Moon and other satellites. However, future work should investigate to what extent the numerical method could enhance direct formation, and we suggest a follow-up study using higher resolution simulations (see also Appendix <ref>) to investigate whether these fragments can persist.
§.§ Collision outcome degeneracy
The simulation results show a large degeneracy with respect to the initial conditions, i.e., very different IC can result in almost identical collision outcomes. If we require the initial conditions for two collisions to be different in all parameters (orientation, angular momentum J_0, velocity v_∞, impact parameter b_∞ and angular velocity factor f_Ω), we find, for example, the runs 3822 and 7334 (for pre-impact and post-impact parameters see Table <ref>) that differ by only 0.22 per cent in J_b, 3.60 per cent in M_d, 0.76 per cent in F_d^Fe and 0.82 per cent in δ_pd. Allowing more similar IC, we find cases with even better agreement in the results. One example is the pair of runs 2922 and 7605, where J_0 and f_Ω are identical. The post-impact variables differ by only 0.99 per cent in J_b, 0.29 per cent in M_d, 0.51 per cent in F_d^Fe and 1.11 per cent in δ_pd. The outcome of these simulations can be considered identical within the typical accuracy of such simulations (e.g., <cit.>).
Furthermore, even if the post-impact parameters of two collisions are very similar, the morphology of the resulting disks can be visibly different. As an example, we show the result of two collisions with very different IC in Figure <ref>. The post-impact variables (i.e., bound mass, disk mass and composition) differ by less than 5.1 per cent in this case. However, the first (top panel) results in a massive bound second largest remnant (SLR) at the end of the simulation, while the second (bottom panel) does not contain such a remnant.
We can conclude that the same result can in principle be generated with very different initial conditions. This means that even if one were to find a collision that satisfies all constraints, it may still not be the impact that actually resulted in the formation of the Moon. In turn, one generally cannot determine (unique) pre-impact orbits of the proto-Earth and impactor from successful collisions, e.g., in order to constrain the pre-impact isotopic composition.
§.§ Summary
Collisions between non-rotating bodies can only produce sufficiently massive disks (M_d ≥ 1) for post-impact angular momentum budgets of J_b ≳2, corresponding to an equivalent pre-impact angular momentum budget requirement (J_0 ≳2). Notably, small impactors (γ < 0.1) and γ=0.1 from the large impactor subset are unable to produce disks of at least one lunar mass, implying that the canonical Moon-forming impact is not viable in the absence of rotation. We also find that achieving compositional similarity between the proto-Earth and protolunar disk requires near-equal-mass mergers; near-perfect mixing can only be consistently achieved in equal-mass collisions.
In the present paper, we introduce pre-impact rotation to the colliding bodies. This increases the number of free parameters in our study by two (rotation configuration and angular velocity factor) and, therefore, requires significantly more simulations to fill the parameter space at the same resolution. Yet, despite this expansion of the pre-impact parameter space, we are still unable to identify a collision that is simultaneously consistent with the set of known constraints. Nonetheless, we obtained a number of useful insights into the parameter space that will inform future work on Moon-forming impacts.
We find that even for pre-rotating bodies the disk mass and the post-impact angular momentum budget are strongly correlated. Generally, J_b ≳2 is required to produce disks of at least one lunar mass. However, if the proto-Earth is counter-rotating (DX configurations) with respect to the orbital angular momentum of the collision, disks with M_d ≥1 and a bound angular momentum of J_b ∼1 can be obtained. These cases only occur for collisions with γ = 0.3 - 0.5 and result in disks that have substantially higher impactor fraction than the planet. In order to explain the isotopic similarity between the proto-Earth and the Moon, such a scenario requires that either the impactor has a very similar isotopic composition or strong mixing between the proto-Earth and the disk occurred after the impact.
Good mixing requires near-equal mass (γ≥ 0.5) mergers or high velocity impacts with γ = 0.025 onto a rapidly spinning proto-Earth, as proposed in <cit.> and <cit.>, respectively. Such collisions result in an excess angular momentum of 1-2 times the current Earth-Moon value. Generally, if the disk must contain at least one lunar mass, one cannot reconcile the angular momentum of the Earth-Moon system with good mixing. Possible post-impact processes that could either result in mixing between the proto-Earth and the disk <cit.> or efficiently remove the excess angular momentum <cit.> were proposed but require very specific conditions.
Significantly iron-depleted disks can be obtained over a very wide range of impact conditions and achieving a specific level of iron depletion is just a matter of fine-tuning. Thus, setting the constraint on the iron mass fraction aside, one can get two out of the three remaining constraints: either massive disks with the correct AM or with good mixing (but not both), or very low mass disks with both the correct AM and good mixing.
We also find that, over a wide range of impact parameters, relatively massive bound fragments can form as a result of the collision. Such fragments form in about 3 per cent of all simulations and can have very diverse properties. Among those, 191 have masses between 0.5 and 1.5 and 131 satisfy the permissive constraints (see Section <ref> for details). Furthermore, if a fragment is embedded in a disk, it could interact with the disk, act as a seed and enhance the accretion efficiency. While none of the simulations results in a (potential) satellite that matches the Moon, such a scenario could be an interesting pathway for lunar formation.
§ CONCLUSIONS
We performed a systematic investigation of potential Moon-forming giant impacts. Our study consists of 7649 pairwise collisions between differentiated bodies with impactor-to-target mass ratios between 0.02 and 1 and nine distinct rotational configurations. This data set includes the 497 collisions between non-rotating bodies introduced in Paper I <cit.>.
General observations:
* Despite the introduction of eight distinct rotation configurations and variable rotation rates, we can identify no single collision capable of simultaneously satisfying all known constraints. In all cases, one or more post-impact processes must be invoked to reconcile the constraints or it must be assumed that the target and impactor have the same isotopic composition prior to the impact.
* If the disk iron fraction constraint is ignored, we find that out of the remaining constraints (M_d, J_b and δ_pd), a maximum of two can be satisfied simultaneously, but never all three.
* Massive bound fragments (sometimes embedded in the disk) are a common outcome for a wide range of impact conditions and could be proto-satellites or act as seeds for accretion.
Systematic trends:
* The post-impact disk mass (M_d) remains strongly correlated to the post-impact angular momentum budget (J_b), with J_b ≳2 generally required to produce disks of at least one lunar mass. Thus, even with pre-impact rotation, the canonical Moon-forming impact is still incapable of producing favorably massive disks. However, a unique population of grazing, low-velocity impacts on counter-rotating targets at 0.3 ≤γ≤ 0.5 breaks this relationship and produces low post-impact angular momentum budgets with massive disks.
* The disk iron mass fraction (F^Fe_d) is correlated with v_∞ for large impactors (γ≥ 0.1), with higher impact velocities producing disks more enriched in iron. With pre-impact rotation, faster rotation rates are correlated with higher disk iron fractions. For small impactors, impacts on a co-rotating target (UN) produce significantly higher disk iron fractions than counter-rotating targets (DN).
* For large impactors that produce massive disks (M_d ≥), compositional similarity between the post-impact planet and disk can only be achieved for high mass ratios (γ≥ 0.5) and becomes increasingly probable as γ→ 1. Notably, symmetry appears to play an important role in determining δ_pd, with the symmetric rotation configurations (UU, NN, DD) consistently producing near-zero δ_pd for γ=1. For small impactors that produce massive disks, only collisions at γ=0.025 can satisfy the compositional constraint, however these cases result in excess angular momentum.
Effect of rotation on leading theories:
* The canonical Moon-forming impact is only capable of producing lunar-mass disks (M_d ≥ 1) when the target is rapidly co-rotating (UX; f_Ω = 0.9). This requires an excessive post-impact angular momentum budget (J_b ≳2) and the resulting disks evince substantial compositional differences with the proto-Earth (δ^min_pd > 0.2).
* Massive disks and very small compositional differences between the proto-Earth and the orbiting material can either be obtained by high velocity impacts of very small impactors onto rapidly co-rotating targets or by near equal-mass mergers, as proposed in <cit.> and <cit.>, respectively. However, those collisions result in an excess angular momentum of 1-2 times the current Earth-Moon value.
* We identify a population of collisions that are uniquely capable of producing low post-impact angular momentum budgets and massive, iron-poor disks. This population represents a promising new class of Moon-forming impacts, but requires the target and impactor to have very similar compositions prior to the impact. In this scenario, a counter-rotating target roughly the mass of Venus suffers a grazing, low-velocity impact by an impactor roughly 2-3 times the mass of Mars.
While this study makes a first step towards understanding the systematics of Moon-forming impacts, there is still much work to be done. Future investigations should consider arbitrary mutual orientations, variable pre-impact core fractions, and further investigate the regions of interest identified in this study. Clearly, tighter constraints on the possible range of the post-impact state of the Earth-Moon system are required. Significant questions remain about accretion processes in the post-impact disk (e.g., accretion efficiency and possible enrichment in iron) and the efficacy of the proposed post-impact processes. To the latter point, simulations need to be done to constrain which post-impact states are suitable for various post-impact processes. Finally, connecting to formation models would make it possible to study lunar formation in a broader context and assess the frequency of satellites orbiting Earth-like planets.
We thank the anonymous reviewer for valuable suggestions and comments that helped to substantially improve the paper. We would also like to thank Martin Jutzi, Paolo Sossi and Maria Schönbächler for helpful discussions. This work has been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. The authors acknowledge the financial support of the SNSF. We acknowledge access to Piz Daint and Eiger@Alps at the Swiss National Supercomputing Centre, Switzerland under the University of Zurich's share with the project ID UZH4.
§ DATA AVAILABILITY
The data underlying this article is available in the Dryad Digital Repository, at <https://doi.org/XXXXXXXXX/dryad.XXXXXX>[This will be updated after the review process]. The analysis results and an example Jupyter notebook to create figures from it can be found in the GitHub repository [CITE][This will be updated after the review process].
Swiss National Supercomputing Centre (Piz Daint, Eiger@Alps)
Gasoline <cit.>,
ballic <cit.>,
eoslib <cit.>,
skid <cit.>,
numpy <cit.>,
scipy <cit.>,
matplotlib <cit.>,
pynbody <cit.>,
GNU parallel <cit.>
§ SPECIFIC PRE-IMPACT PARAMETERS
As described in Section <ref>, we restrict the orientation of the rotation axis of both the target and the impactor to three possible states: co-rotating with the orbital angular momentum (U), counter-rotating with the orbital angular momentum (D) and non-rotating (N). From these 3 states, we construct the rotation configuration of the collision (XX) where the first letter gives the orientation of the target and the second letter the orientation of the impactor. The possible rotation configurations are:
XX∈{DD, DN, DU, ND, NN, NU, UD, UN, UU}.
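The nine configurations are simply the Cartesian product of the three orientations with themselves and can be enumerated programmatically, for example:

```python
from itertools import product

orientations = ("D", "N", "U")  # counter-rotating, non-rotating, co-rotating
configurations = ["".join(pair) for pair in product(orientations, repeat=2)]
# ['DD', 'DN', 'DU', 'ND', 'NN', 'NU', 'UD', 'UN', 'UU']
```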
The parameter space presented in this paper is split into three regions:
* 435 non-rotating (NN) collisions with large impactors (γ≥ 0.1) from Paper I,
* 4984 rotating collisions with large impactors (γ≥ 0.1),
* 2230 collisions with small impactors (γ<0.1),
where the first two regions together form the subset of large impactors (L) while the third region forms the subset of small impactors (S).
Given a rotation configuration, there are four free parameters that describe the initial conditions: the initial total angular momentum (J_0), the impactor-to-target mass ratio (γ), the asymptotic relative velocity (v_∞) and the angular velocity factor (f_Ω, if both bodies are rotating, they both have the same f_Ω value).
§.§ Non-rotating collisions with large impactors
The non-rotating (NN) simulations were the first simulations we ran and we used a high sampling resolution for J_0 to better understand the pre-impact parameter space. This sub-set contains all combinations of the following parameter choices that result in a collision (configurations that result in a fly-by were not run):
XX = NN
J_0∈ {1.00, 1.50, 2.00, 2.25, 2.50, 2.75, 3.00, 3.25, 3.50, 4.00, 4.50, 5.00 }
γ∈ {0.1, 0.3, 0.5, 0.7, 0.9, 1.0 }
v_∞∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 } v_esc
f_Ω = 0.0
as well as additional simulations with J_0=1.25 for γ=0.1, close to the canonical scenario. The results of these 435 simulations were thoroughly discussed in Paper I.
§.§ Rotating collisions with large impactors
The first sub-set we add for Paper II consists of all combinations of these parameter choices that result in a collision:
XX ∈ {DD, DN, DU, ND, NU, UD, UN, UU}
J_0∈ {-1.75, -1.50, -1.25, -1.00, 1.00, 1.50, 2.00, 2.25, 2.50, 3.00 }
γ∈ {0.1, 0.3, 0.5, 0.7, 0.9, 1.0 }
v_∞∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 } v_esc
f_Ω∈ {0.5, 0.9}
This results in 4984 simulations. We use the same values for γ and v_∞ as for the non-rotating collisions. The values for f_Ω correspond to a body rotating at approximately 50 per cent of the critical rotation rate and a body rotating just shy of the critical rotation rate, respectively. The body marked N in the NX and XN configurations is non-rotating. For the values of J_0, we decided to not exceed 3.0, even though we found in Paper I that mergers can be observed up to J_0=3.5 for the NN configuration, because the bound AM is very similar to J_0 and removing such large amounts of AM is very difficult; thus these cases would not be relevant for the formation of the Moon. For the rotating configurations we also use a lower sampling resolution in J_0, because based on the NN results we are confident that the lower resolution is sufficient to identify systematic trends. The lower sampling resolution also prevents the study from becoming infeasible due to the additional computational resources required.
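The grid of initial conditions for this subset is the Cartesian product of the parameter values listed above. The sketch below is schematic: it produces the raw grid only, whereas the pipeline additionally discards combinations that result in a fly-by and, for configurations other than DX, the negative J_0 values that cannot be realised.

```python
from itertools import product

configurations = ["DD", "DN", "DU", "ND", "NU", "UD", "UN", "UU"]
J0_values      = [-1.75, -1.50, -1.25, -1.00, 1.00, 1.50, 2.00, 2.25, 2.50, 3.00]
gamma_values   = [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]
v_inf_values   = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]  # units of v_esc
f_omega_values = [0.5, 0.9]

raw_grid = list(product(configurations, J0_values, gamma_values,
                        v_inf_values, f_omega_values))
print(len(raw_grid))  # 9600 raw combinations before the cuts, leaving 4984 simulations
```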
Our pipeline sets up the collisions such that the asymptotic impact parameter b_∞ and, by extension, J_orb are always positive. However, impacts with a negative impact parameter are also part of the parameter space that could lead to the formation of the Moon. By mirroring the arrangement on the plane perpendicular to b⃗, such an impact with negative impact parameter can be transformed into one with positive impact parameter, while all angular momentum values change sign, such that the total initial angular momentum J_0 can be negative. This is shown in Figure <ref> with the initial setup on the left side and the transformed setup with positive impact parameter on the right side.
Thus, in order to mimic collisions with negative impact parameter b_∞, we add J_0∈{-1.75, -1.50, -1.25, -1.00} for the DX configuration (values below J_0=-1.87 are not possible, because this is the smallest possible value of the sum J_rot,targ + J_rot,imp). In the NN case, the configuration is symmetric, but this is no longer the case if pre-impact rotation is added. It is not possible to create a negative J_0 value with the NX and UX configurations because J_orb is always positive. The ND configuration naturally creates negative J_0, but the largest impactor (γ=1.0) rotating at f_Ω=0.9 only has J_imp=-0.71, which is not enough AM to reach a total angular momentum of -1.0. The same is true for the UD case, which could theoretically lead to a negative J_0, but this is not possible in our parametrization, as the absolute value of the angular momentum of a rotating target is always larger than or equal to that of a rotating impactor.
§.§ Collisions with small impactors
In order to sample the parameter space of very fast counter-rotating targets, small impactors and high impact velocities proposed by <cit.>, we add a second sub-set for all orientations with non-rotating impactors (XN) which consists of all combinations of these parameter choices that result in a collision:
XX ∈ {DN, NN, UN}
J_0∈ {-2.45, -2.25, -2.00, -1.75, -1.5, -1.25, -1.00, 1.00, 1.50, 2.00, 2.25, 2.50, 3.00, 3.50 }
γ∈ {0.02, 0.025, 0.03, 0.04, 0.05}
v_∞∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0 } v_esc
f_Ω∈ {0.0, 0.5, 0.9, 1.01}
This results in 2230 simulations. For this subset, we increase the maximum asymptotic impact velocity v_∞ from 1 v_esc to 3 v_esc and we add the angular velocity factor f_Ω=1.01. This creates bodies rotating above Ω_crit, which should not be stable, but because Ω_crit is an estimate assuming a homogeneous density, slightly larger values are still stable. f_Ω=1.01 is the largest value that is stable for all our bodies. The value of J_0 = -2.45 was chosen because J_0 = -2.5 does not produce collisions for γ∈{0.04,0.05}.
§.§ Summary
In total, we run 7649 impact simulations that are distributed as follows:
* Rotational configuration: 2927 simulations with counter-rotating targets (DX), 1713 with non-rotating targets (NX) and 3009 with co-rotating targets (UX). A more detailed list of the number of simulations performed for each rotational configuration is provided in Table <ref>.
* Initial total angular momentum J_0: 1053 simulations with J_0=1.00, 1000 with J_0=1.50, 1080 with J_0=2.00, 1027 with J_0=2.25, 1012 with J_0=2.50, 893 with J_0=3.00, 111 with J_0=3.50, 1365 with negative J_0, and 108 with J_0∈{1.25, 2.75, 3.25, 4.00, 4.50, 5.00}.
* Impactor-to-target mass ratio γ: 431 simulations with γ=0.1, 882 with γ=0.3, 1009 with γ=0.5, 1034 with γ=0.7, 1034 with γ=0.9, 1029 with γ=1.0, and 2230 with γ<0.1.
* Relative velocity at infinity v_∞: 526 simulations with v_∞=0.1 v_esc, 538 with 0.2 v_esc, 552 with 0.3 v_esc, 569 with 0.4 v_esc, 590 with 0.5 v_esc, 603 with 0.6 v_esc, 619 with 0.7 v_esc, 628 with 0.8 v_esc, 642 with 0.9 v_esc, 647 with 1.0 v_esc, and 1735 with v_∞>1.0 v_esc.
* Angular velocity factor f_Ω: 497 non-rotating simulations (f_Ω=0.0), 2755 simulations with f_Ω = 0.5, 3494 simulations with f_Ω = 0.9 and 903 simulations with f_Ω = 1.01. We reiterate that the latter set is a specific subset of low-γ collisions that was inspired by the parameter space proposed in <cit.>.
§ CORRELATIONS IN THE FULL DATASET
In this appendix, we investigate global correlations between a subset of the pre-impact and post-impact properties. Figure <ref> provides Pearson correlation coefficients for these properties. The coefficients are defined as
r_XY = cov(X,Y)/(σ(X) σ(Y)) ,
where cov(X,Y) is the covariance and σ(X) the standard deviation. The coefficients can range from -1 (perfect anti-correlation) to 1 (perfect correlation). Values of 0.25 ≤| r_XY|≤0.5 are considered weak correlations while | r_XY|>0.5 are considered strong correlations. Note that the correlations between the pre-impact parameters can be affected by the way our initial conditions are generated and how the parameter space is sampled by our simulations (see Appendix <ref>).
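The coefficient defined above is the standard Pearson r and can be evaluated with numpy, which is already part of our software stack; a minimal sketch:

```python
import numpy as np

def pearson_r(x, y):
    """r_XY = cov(X, Y) / (sigma(X) * sigma(Y))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.cov(x, y)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))

# equivalently: np.corrcoef(x, y)[0, 1]
```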
We first look at the correlations between the pre-impact parameters (J_0, γ, v_∞, b_∞ and f_Ω). They exhibit very low r values, with the exception of v_∞-γ, which evinces a coefficient of r=-0.53. This strong correlation is due to the fact that no collisions with γ≥0.1 and v_∞>1.0 v_esc are simulated. J_0 and v_∞ are not correlated because they are independent variables in our study. However, they are not perfectly uncorrelated because, for large values of J_0, high velocities can result in hit & runs and low velocities are not simulated because they result in misses. The correlation between J_0 and b_∞ is weak (r=0.42) because, for collisions with pre-impact rotation, J_0 contains both the orbital angular momentum and the spin angular momenta of the colliding bodies (J_0 = J_orb + J_targ + J_imp), which again are independent variables. Being an independent parameter, f_Ω should not show correlations with the other independent parameters, but nevertheless, it shows weak correlations with γ (r=-0.37) and v_∞ (r=0.27), because f_Ω=1.01 is only used for γ<0.1 cases, which mainly produce mergers for larger values of v_∞.
Several strong correlations exist between pairs of pre-impact and post-impact parameters. The mass of the bound post-impact material (M_b) has a strong negative correlation of r=-0.69 with v_∞. This is because high-velocity impacts tend to have small impact parameters (due to the initial angular momentum constraint) and the resulting head-on collisions tend to eject more material. M_b is also strongly correlated with γ (r=0.56), because collisions with high γ are usually at lower impact velocities (see the negative correlation between v_∞ and γ mentioned above). The angular momentum of the bound mass J_b correlates strongly (r=0.97) with J_0 because the majority of results (5896) exhibit a difference between J_b and J_0 of less than 20 per cent of J_0. This relation and its influence on the analysis are discussed in detail in Section <ref>. J_b also has a weak correlation with b_∞ (r=0.42) because of the correlation between b_∞ and J_0. J_0 also has a strong correlation (r=0.80) with the disk mass M_d, confirming a key finding of Paper I. From this correlation follows a strong correlation (r=0.78) between J_b and M_d, which can be seen in the top-left panel of Figure <ref> and is consistent with the results of Paper I. M_d also has a weak correlation (r=0.44) with b_∞ because of its correlation with J_0.
The iron mass fraction of the disk F_d^Fe does not show strong correlations with any parameter, the strongest value being r=0.17 with b_∞. The absolute value of the mixing parameter |δ_pd| has no strong correlations with any of the other parameters, but has weak negative correlations with the pre-impact parameter J_0 and the post-impact parameter J_b (because those are strongly correlated). Due to these correlations and the correlation between J_b and R_p/R_HSSL, |δ_pd| also has a correlation with R_p/R_HSSL of r=-0.44. It is interesting to note that, contrary to what one would expect if most of the impactor material is sheared into the disk, there is no correlation between F_d^Fe and |δ_pd|. This suggests that iron from the impactor's core tends to fall back onto the proto-Earth in such collisions.
The proximity of the post-impact planet to the hot-spin stability limit (R_p/R_HSSL) is strongly correlated with J_0 (r=0.82) and weakly correlated with b_∞ (r=0.29). This is an intuitive result, as the radius of the planet (R_p) is determined by its rotation rate which increases with the amount of angular momentum in the post-impact system (J_b). Indeed, J_b and R_p/R_HSSL are strongly correlated (r=0.88) due to the strong correlation between J_b and J_0. As for b_∞, larger impact parameters are expected for large pre-impact angular momentum budgets, so it is unsurprising that b_∞ and R_p/R_HSSL share a weak correlation.
§ ON THE IMMEDIATE FORMATION OF SATELLITES
As discussed in Section <ref>, we find that many collisions result in a (more or less) massive second largest fragment which remains bound until the end of the simulation. In Figure <ref>, we show the 191 cases with fragment masses 0.5≤ M_SLR≤1.5, but our data set also contains 50 simulations with M_SLR > 1.5, with a maximum mass of the second largest fragment of 3.2. Such fragments are potential satellites or can act as a seed for the accretion of more mass from the circumplanetary disk, thus accelerating the formation and increasing the accretion efficiency. However, it is still being debated to what extent the formation of such fragments can be enhanced by the numerical method. Known effects that can cause artificial fragmentation in SPH simulations are clumping due to the pairing instability <cit.>, regions with negative pressures in the EOS <cit.> and the inherent graininess of the gravitational potential caused by discretization in particle-based simulations <cit.>. Gasoline, the code used in the main part of this study, should not suffer from the first two problems, as it uses the Wendland C2 kernel, which is stable against the pairing instability, and it does not allow for negative pressures. <cit.> (K22) shows that in certain cases, such immediately formed bound fragments could be physical, because, even though the final mass of the fragments varies slightly, the material flow leading to the formation of these fragments is stable to changes in resolution (above a certain minimum particle number needed to actually resolve the flow feature). They argue that it is possible that the Moon formed in such a scenario rather than being accreted from a disk. But they also caution that "the region of parameter space for the immediate formation of stable satellites is not huge".
In this appendix we investigate this scenario and shed some light on the question of whether these stable fragments are physical or an artifact of the specific numerical method. For this, we try to reproduce the two most promising runs of <cit.>, which they depict in Figures 1 and 2 of their paper, using a newly developed SPH code based on pkdgrav3 (<cit.>, Meier et al., in prep) at a resolution of 10^7 particles. The SPH implementation in pkdgrav3 is based on <cit.> with corrections for ∇ h terms, has an interface/surface correction based on <cit.> and uses ISPH <cit.> to enforce an adiabatic evolution in the absence of shocks. To avoid the pairing instability, the high-resolution Wendland C6 kernel with a target of 400 neighboring particles is used. Regarding the equations of state, we follow K22 and use M-ANEOS Fe_85Si_15 <cit.> for the cores of the target and impactor and M-ANEOS Forsterite <cit.> for the mantles. To avoid artificial clumping, we suppress negative pressures that can occur at low densities and temperatures.
In Figure <ref>, snapshots of a simulation with the initial conditions from Figure 2 of K22 are shown. Similar to their results, a massive satellite on a stable orbit forms in our simulation. However, in our simulation that satellite orbits closer to the Earth and has a mass of 2 rather than the 1.41 found in K22. Figure <ref> shows snapshots of a simulation with the initial conditions of Figure 1 in K22. In this case, the fragment that forms from the arm in the top right panel falls back onto the proto-Earth, collides obliquely and is sheared into another arm structure, which then fragments into many very small bodies on varying orbits. These fragments are very low in mass, with the most massive being of the order of a few per cent of a lunar mass.
In general, we can say that the concept of the immediate formation of satellites is viable. However, the exact properties, such as mass and initial orbit, of the resulting satellite seem very sensitive to small changes in initial conditions and details of the implementation of the numerical method. Nevertheless, this concept looks very promising and certainly warrants further investigation, especially because we find such fragments all over the parameter space and at a resolution that is small compared to K22 (see Section <ref>).
|
http://arxiv.org/abs/2409.03656v1 | 20240905161054 | Quantum complexity and localization in random quantum circuits | [
"Himanshu Sahu",
"Aranya Bhattacharya",
"Pingal Pratyush Nath"
] | quant-ph | [
"quant-ph",
"cond-mat.dis-nn",
"cond-mat.stat-mech",
"cond-mat.str-el",
"hep-th"
] |
[email protected]
Department of Physics and Astronomy and Institute for Quantum Computing, University of Waterloo, Ontario N2L 3G1, Canada.
Perimeter Institute for Theoretical Physics, Waterloo, ON, N2L 2Y5, Canada.
Department of Physics and Department of Instrumentation & Applied Physics, Indian Institute of Sciences, C.V. Raman Avenue, Bangalore 560012, India.
[email protected]
Institute of Physics, Jagiellonian University, Lojasiewicza 11, 30-348 Kraków, Poland.
[email protected]
Centre for High Energy Physics, Indian Institute of Science, C.V. Raman Avenue, Bangalore 560012, India.
§ ABSTRACT
Quantum complexity has emerged as a central concept in diverse areas of physics, ranging from quantum computing to the theory of black holes. We perform a systematic study of complexity in random quantum circuits with and without measurements. We observe that complexity grows linearly before saturating to a constant value. For N qubits without measurements, the saturation value scales as 2^N-1, and the saturation time scales as 2^N. This behaviour remains identical in the presence of random measurements with different probabilities, indicating that this notion of complexity is insensitive to the rate of measurement. We also study the behaviour of complexity in two variants of the random unitary floquet circuit, where we observe that complexity acts as a novel probe of Anderson localization and many-body localization.
Quantum complexity and localization in random quantum circuits
Pingal Pratyush Nath^0000-0001-5311-7729
September 9, 2024
==============================================================
Introduction.— Understanding the complexity of quantum states and operators is relevant to a wide range of settings, from quantum many-body physics through quantum gravity to quantum computation <cit.>. In many-body physics, insights into the buildup of complexity in the time evolution of an initial local observable, known as operator growth, have inspired new ways of probing the dynamics of thermalization <cit.>.
Out-of-time-order correlators (OTOCs) <cit.>, a quantitative tool for measuring operator growth, obey a dynamical bound arising from unitarity and analyticity <cit.>. It has been shown that in a version of quantum gravity known as the anti-de Sitter space/conformal field theory (AdS/CFT) duality, black holes saturate this bound <cit.>. Similar to black holes, OTOCs saturate the bound in models such as the so-called Sachdev-Ye-Kitaev (SYK) model, which admits a holographic dual description involving black holes <cit.>.
In quantum computing, the complexity of pure states is defined as the size of the smallest circuit that produces the state from a product state, while the complexity of a unitary is defined as the size of the smallest circuit that approximates the unitary. This notion of quantum circuit complexity has recently gained interest due to connections between gate complexity and holography in the AdS/CFT correspondence <cit.>. It is conjectured that, in the bulk theory, the wormhole's volume is dual to the boundary state's quantum complexity, whose growth has been proved for random unitary circuits <cit.>.
Recently, the notion of state and operator complexity based on the Krylov basis (referred to as `quantum complexity' for convenience), defined using the generator of the evolution operator, has been extensively studied as a probe of information scrambling <cit.>. Operator complexity (known as Krylov complexity) is conjectured to grow at most exponentially in nonintegrable systems and can be used to extract the Lyapunov exponent, thereby establishing a connection with OTOCs <cit.>. On the other hand, state complexity (known as spread complexity), a generalization of Krylov complexity to quantum states, is used as a probe to study quantum chaos and topological phase transitions <cit.>. Furthermore, since by construction this notion of complexity measures the delocalization of a wave function in the Krylov basis with time, it also captures the localization of the wavefunction through a suppression of the complexity saturation value <cit.>.
Despite a number of investigations in various quantum systems, how quantum complexity behaves in systems with discrete-time evolution remains an open question. In this Letter, we study quantum complexity in various classes of random quantum circuits. Quantum circuits built from local unitary gates (and local measurements) are a new playground for quantum many-body physics and a tractable setting to explore universal collective phenomena far from equilibrium. These models have shed light on longstanding questions about thermalization and chaos, and on the underlying universal dynamics of quantum information and entanglement <cit.>.
In random unitary circuits (RUCs) that consist of local Haar-random unitaries, any quantum state evolves towards increasingly entangled states characterized by an extensive scaling of entanglement entropy with system volume. In the presence of measurements that occur repeatedly during the evolution at a fixed rate, the system undergoes a phase transition from volume-law to area-law entanglement entropy scaling for infrequent and frequent measurements, respectively <cit.>. Another class of random quantum circuits capturing important physical insights are the time-periodic ones, namely Floquet quantum circuits, where each time step is repeated with the same random instances. In previous studies, classes of Floquet unitary circuits have been shown to exhibit localized phases such as Anderson localization (AL) and many-body localization (MBL) <cit.>.
In this Letter, we study the quantum complexity in two classes of quantum circuits — completely random unitary circuits and Floquet random unitary circuits (where the random realization of the first time step is repeated in all further time steps). In random unitary circuits, the complexity undergoes a transition from linear growth at early times to sublinear growth, and saturates at times exponentially late in the system size. We find that the complexity profile remains invariant under local measurements. In Floquet unitary circuits, we find a suppression of the late-time saturation value in localized phases, which can therefore be used as a probe of the phase transition from the thermal phase to the MBL and Anderson-localized phases. To this end, our work provides the first explicit circuit realization of the Krylov spread complexity, which probes the onset of localizing phases through a suppression of the complexity saturation value.
Model.— We consider the discrete-time evolution of two classes of one-dimensional quantum circuit models with N qubits — random unitary circuits (RUCs) and Floquet unitary circuits (FUCs). For each time step, there is a unitary evolution operator U_t =U(t;t-1), under which a pure state evolves as |ψ(t)⟩ = U_t |ψ(t-1)⟩. The unitary evolution U_t is composed of local unitary gates arranged in a bricklayer pattern:
U_t = (∏_x∈odd𝒰_(x,x+1),2t+1)(∏_x∈even𝒰_(x,x+1),2t) = 𝒰^(o)_t ·𝒰^(e)_t
where 𝒰_(x,x+1),τ is the gate on link (x,x+1) at time step τ. In random unitary circuits, the local unitary gates 𝒰_x,t are Haar-random unitary gates. We note that even the minimal brickwork circuit above possesses one basic structure, namely the spatial locality of the interaction, which is natural in quantum information and in toy models for black holes <cit.>. The quantum circuit complexity of such a system is shown to grow linearly, before saturating when the number of applied gates reaches a threshold that grows exponentially with the number of qubits <cit.>. Furthermore, the scrambling dynamics exhibits similarities to the large-N or semiclassical case, but the wavefront broadens diffusively <cit.>. We further introduce nonunitary dynamics by puncturing the unitary circuit with local single-qubit measurements. Measurements are done on a fraction p of all sites. Under the measurement, the wave function transforms as
|ψ⟩→M_α |ψ⟩/‖ M_α|ψ⟩‖
where {M_α} are a set of linear generalized measurement operators satisfying ∑_α M^†_α M_α = 1. Under such a measurement, the process described by Eq. (<ref>) is probabilistic, with outcome α happening with probability p_α = ⟨ψ |M^†_α M_α |ψ⟩. In our study, we choose these generalized measurement operators to be mutually orthogonal projectors, that is M_α→ P_α, with P_± = (1± Z)/2 measuring the Z component of the spin of individual qubits. Such projectors satisfy P_α P_β = δ_αβ P_α and ∑_α P_α = 1. This model undergoes a phase transition at a finite value of p=p_C from volume-law to area-law entanglement <cit.>.
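To make this model concrete, the following is a minimal NumPy sketch (not the code used for our numerics) of one monitored brickwork time step: Haar-random gates are sampled via the standard QR construction, projective Z measurements are applied with probability p per site, and all function names are our own.

import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim):
    # Haar-random unitary from the QR decomposition of a complex Gaussian matrix
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_two_site(psi, gate, x, n):
    # apply a 4x4 gate to qubits (x, x+1 mod n) of a dense state vector
    psi = psi.reshape([2] * n)
    psi = np.moveaxis(psi, [x, (x + 1) % n], [0, 1])
    shape = psi.shape
    psi = (gate @ psi.reshape(4, -1)).reshape(shape)
    psi = np.moveaxis(psi, [0, 1], [x, (x + 1) % n])
    return psi.reshape(-1)

def measure_z(psi, site, n):
    # projective Z measurement on `site`, outcome drawn from the Born rule
    psi = psi.reshape([2] * n)
    p_up = np.sum(np.abs(np.moveaxis(psi, site, 0)[0]) ** 2)
    outcome = 0 if rng.random() < p_up else 1
    projected = np.zeros_like(psi)
    np.moveaxis(projected, site, 0)[outcome] = np.moveaxis(psi, site, 0)[outcome]
    projected /= np.linalg.norm(projected)
    return projected.reshape(-1)

def brickwork_step(psi, n, p_meas=0.0):
    # one time step U_t = U^(o) U^(e) of Haar gates (periodic boundaries),
    # followed by single-qubit Z measurements on a fraction p_meas of sites
    for parity in (0, 1):
        for x in range(parity, n, 2):
            psi = apply_two_site(psi, haar_unitary(4), x, n)
    for site in range(n):
        if rng.random() < p_meas:
            psi = measure_z(psi, site, n)
    return psi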
In Floquet unitary circuits, the evolution is made time-periodic, U_t = U_t+1, i.e., the two layers are repeated identically in the spirit of Floquet evolution. We consider several variations of Floquet evolution with a unitary circuit, which were previously shown to exhibit localized phases <cit.>. The scenarios we consider are the following:
(A) Gaussian circuits: We consider N fermionic pairs (a_i,a_i^†) satisfying {a_i,a^†_j} = δ_ij. We begin by defining a set of Hermitian fermionic operators given by
q_i = 1/√(2) (a^†_i + a_i ) and p_i = i/√(2) (a^†_i - a_i )
referred to as Majorana modes. Turning to the covariance matrix Ω, if we choose the Majorana basis ξ^a ≡ (q_1,p_1,q_2,p_2… q_N,p_N),
Ω^ab = -i Tr(ρ [ξ^a,ξ^b]) .
The covariance matrix provides a straightforward framework for discussing the corresponding group of unitary transformations for fermionic Gaussian states. In this context, we focus on Gaussian circuits that map Gaussian states to Gaussian states. The most general unitary operation acting on the covariance matrix is represented by an orthogonal transformation O∈ O(2N). We specifically consider a subclass of these unitary transformations, generated by quadratic Hamiltonians, which correspond to special orthogonal transformations O ∈SO(2N) <cit.>.
With this, the time evolution operator is given by an orthogonal transformation O∈O(2N) built of random two-site operations P_i, Q_i ∈O(4),
U_t = G(⊕_i=1^N/2 Q_i) G^T (⊕_i=1^N/2 P_i)
where
G = [ 0 1_2; 1_2 0 ; 0 ⋱ ⋱ ; 1_2 0 ]
takes care of circularly shifting ⊕ Q_i by one site and ensures periodic boundary conditions. Given the block-diagonal form of the evolution operator, the two-site operations P_i couple sites 2i-1 and 2i. The time average of the covariance matrix can be used to assess the long-time behavior of a typical state. It is shown that the inhomogeneous evolution exhibits Anderson localization; an initially localized impurity stays localized. On the other hand, the homogeneous evolution results in thermalization <cit.>.
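Because the Gaussian dynamics closes on the covariance matrix, these circuits are inexpensive to simulate. The following NumPy/SciPy sketch illustrates one Floquet Gaussian circuit of this type; the explicit construction of the shift matrix, the sign convention for the initial covariance, and all function names are our own illustrative choices rather than details fixed by the text above.

import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)

def random_o4():
    # random 4x4 orthogonal block from the QR decomposition of a real Gaussian matrix
    q, r = np.linalg.qr(rng.normal(size=(4, 4)))
    return q * np.sign(np.diag(r))

def shift(n_sites):
    # permutation matrix circularly shifting the chain by one site (two Majorana modes)
    return np.roll(np.eye(2 * n_sites), 2, axis=0)

def initial_covariance(n_sites, occupied=(0,)):
    # Fock product state: Omega_{q_i p_i} = 1 - 2<n_i> with the convention above
    omega = np.zeros((2 * n_sites, 2 * n_sites))
    for i in range(n_sites):
        s = -1.0 if i in occupied else 1.0
        omega[2 * i, 2 * i + 1], omega[2 * i + 1, 2 * i] = s, -s
    return omega

n_sites = 8
G = shift(n_sites)
# inhomogeneous drive: independent O(4) blocks on each pair of sites;
# for the homogeneous case, reuse a single block for every pair instead
P = block_diag(*[random_o4() for _ in range(n_sites // 2)])
Q = block_diag(*[random_o4() for _ in range(n_sites // 2)])
U = G @ Q @ G.T @ P                       # one Floquet period

omega = initial_covariance(n_sites, occupied=(0,))
for _ in range(200):                      # the same U every period
    omega = U @ omega @ U.T
occupations = [0.5 * (1.0 - omega[2 * i, 2 * i + 1]) for i in range(n_sites)]

Tracking the occupations over many periods gives a quick check of whether the initial impurity at site 0 stays localized (inhomogeneous drive) or spreads (homogeneous drive).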
(B) Spins: The spin system that we consider is a periodic version of the random unitary circuit, where the local unitary gates 𝒰 are drawn from different probability distributions with the common property of single-site Haar invariance. In other words, any transformation of 𝒰 of the form
𝒰↔ (w_1⊗ w_2) ·𝒰· (w_3⊗ w_4)
does not affect averages ⟨⋯⟩, for an arbitrary choice of the single-qubit operators w_i.
Method.— Consider the evolution of an initial state |ψ(0)⟩ under discrete-time unitary evolution such that |ψ(t)⟩ = U_t |ψ(t-1)⟩, where t = 1,2,… We define the Krylov basis by choosing |K_0⟩ = |ψ(0)⟩ and then recursively orthogonalizing each |ψ(t)⟩ against all the |K_i⟩ for i < t <cit.>. At any t, we can expand the state |ψ(t)⟩ in the Krylov basis as
|ψ(t) ⟩ = ∑_n=0^Dϕ_n(t) |K_n⟩ ,
where D is the dimension of the Hilbert space, and ϕ_n(t) = ⟨ K_n|ψ(t)⟩. We define the spread complexity of the state as the average position of the distribution on the ordered Krylov basis:
𝒞(t) = ∑_n=0^D n|ϕ_n(t)|^2 .
Analogously, we can define the Krylov complexity (K-complexity) of an operator, which evolves under the discrete-time evolution O_t = U^†_t O_t-1 U_t. As before, the Krylov basis is obtained by choosing |K_0) = O_0 and then recursively orthogonalizing each O_t against all the |K_i) for i<t. At any t, we can expand the operator O_t in the Krylov basis as
O_t = ∑_n=0^D^2φ_n(t) |K_n)
where φ_n(t) = (K_n|O_t). We define the K-complexity as
𝒦(t) = ∑_n=0^D^2 n |φ_n(t)|^2 .
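For concreteness, the discrete-time Krylov construction above can be written in a few lines of NumPy. The sketch below is illustrative only, with our own function names; the usage example reuses the brickwork_step sketch given earlier together with the |↑↓↑↓…⟩ product state used in our numerics.

import numpy as np

def spread_complexity(psi0, step, n_steps, tol=1e-10):
    # C(t) = sum_n n |<K_n|psi(t)>|^2 with the Krylov vectors built by Gram-Schmidt
    # orthogonalization of the sequence |psi(0)>, |psi(1)>, ...
    krylov = [psi0 / np.linalg.norm(psi0)]
    psi = psi0.astype(complex)
    complexity = [0.0]
    for _ in range(n_steps):
        psi = step(psi)
        new = psi.copy()
        for k in krylov:                      # project out the existing Krylov vectors
            new -= np.vdot(k, new) * k
        if np.linalg.norm(new) > tol:         # keep only genuinely new directions
            krylov.append(new / np.linalg.norm(new))
        amps = np.array([np.vdot(k, psi) for k in krylov])
        complexity.append(float(np.sum(np.arange(len(krylov)) * np.abs(amps) ** 2)))
    return np.array(complexity)

n = 8
psi0 = np.zeros(2 ** n, dtype=complex)
psi0[int("01" * (n // 2), 2)] = 1.0           # |up down up down ...> product state
C = spread_complexity(psi0, lambda s: brickwork_step(s, n, p_meas=0.0), 100)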
Results.— We start by considering the complexity of the random unitary circuits. In this case, the complexity profile consists of early-time growth followed by saturation at exponentially late times. At early times, the complexity growth is linear; it later transitions to sub-linear growth and reaches saturation at a time depending on the size of the chain. The initial linear growth is tied to the fast relaxation associated with a linear light-cone <cit.>.
At late times, the complexity saturates to half the Krylov dimension, i.e., D/2, which is generic in many-body chaotic systems and in quantum circuit complexity. Figure <ref> shows the spread complexity for the system sizes N=8,9,10, where the local unitaries are sampled from the Haar ensemble. Unless otherwise mentioned, we perform the average over a large number of sample RUCs so that the quantities are well converged. In the numerical calculations, the initial state is a product state |ψ(0)⟩ = |↑↓↑↓…⟩. However, taking any other state produces the same features in the complexity profile, suggesting that they are independent of the initial state.
In monitored RUCs, the Krylov basis is constructed by orthogonalizing the span of vectors
𝔎_1 = {|ψ(0)⟩, M𝒰^(o)_1 M 𝒰^(e)_1 |ψ(0)⟩,…}
where M represents the measurement operation performed at probability rate p. In Fig. <ref>, the averaged spread complexity (the averaging is done over both the randomized measurements and the choice of unitaries) behaves exactly like the case without measurements, independent of the probability rate p.
Interestingly, the spread complexity does not appear to be affected by the system undergoing a measurement-induced phase transition. In other words, the system's complexity grows in the same way in the two distinct phases, characterized respectively by volume-law and area-law entanglement entropy. This indicates that complexity defined in this way is completely independent of the notion of entanglement, since it is defined on all the degrees of freedom of the system, whereas entanglement measures how the correlation between two complementary parts of the system changes.
We now move to two classes of Floquet unitary circuits, namely Gaussian circuits and spins.
(A) Gaussian circuits.— In Fig. <ref>, we show the spread complexity associated with the inhomogeneous and homogeneous settings. In the homogeneous case, where the time evolution O is two-site-translation invariant, the randomness is the same for all sites, P_i = P_j and Q_i = Q_j. We find that the spread complexity exhibits a lower late-time saturation value in the inhomogeneous case compared to the homogeneous case, which is reminiscent of Anderson localization. The suppressed value 𝒞_∞ approaches a constant in the thermodynamic limit N→∞.
(B) Spins.— To begin with, we consider the local unitary gates 𝒰 to be drawn from the Haar measure. In Fig. <ref>, we present numerical results for the spread complexity. The complexity profile remains similar to the random unitary case, except that it features a peak before saturating.
The local unitaries 𝒰 drawn from the Haar measure are highly entangling operators and therefore move information that is initially localized on one site across the chain. On the other hand, MBL typically requires a strong random potential relative to the coupling <cit.>. To witness MBL <cit.>, we cast every unitary in U(4) as
(u_1⊗ u_2) exp[i(a σ_x⊗σ_x + b σ_y⊗σ_y + c σ_z⊗σ_z)] (u_3⊗ u_4)
where u_i∈U(2) and coefficients a,b,c∈ℝ. The probability distribution for all two-qubit operators 𝒰∈U(4) composing the time evolution operator is defined by Eq. <ref>, drawing each u_i from the Haar measure for U(2) and a,b,c uniformly from the interval [-h,h].
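The gates of Eq. (<ref>) are straightforward to sample; a short NumPy/SciPy sketch with our own helper names is:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def haar_u2():
    # Haar-random single-qubit unitary
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def coupled_gate(h):
    # (u1 x u2) exp[i(a XX + b YY + c ZZ)] (u3 x u4) with a, b, c ~ U[-h, h]
    a, b, c = rng.uniform(-h, h, size=3)
    core = expm(1j * (a * np.kron(X, X) + b * np.kron(Y, Y) + c * np.kron(Z, Z)))
    return np.kron(haar_u2(), haar_u2()) @ core @ np.kron(haar_u2(), haar_u2())

A Floquet circuit is then obtained by tiling one brickwork layer with such gates and repeating the same layer every period, with h controlling the crossover discussed below.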
In Fig. <ref> we show numerics for 𝒞(t) for distributions with various coupling strengths h. We find a crossover from thermalization at large coupling to localization at small coupling, as evident from the late-time saturation value 𝒞_∞. The MBL transition can be extracted from 𝒞_∞ as a function of h. In Fig. <ref>, the transition occurs at a value h_0 ≈ 0.3, independent of system size, which matches previous studies <cit.>.
Discussion.— In this Letter, we study the state complexity in various classes of random unitary circuits. We show that the complexity undergoes a transition from linear to sublinear growth in random Haar circuits, before saturating at times which scale exponentially with the number of qubits. Interestingly, the exponential scaling of the saturation time is also found in quantum circuit complexity, indicating a possible connection between the two complexities <cit.>. In the monitored case, the quantum complexity profile remains invariant under the measurement rate p. In the Floquet unitary classes, we probe thermal and localized phases using the late-time saturation of the complexity. More specifically, inhomogeneous Gaussian circuits exhibit Anderson localization while strongly coupled spins show many-body localization. The operator complexity can equally be studied and used to probe thermal and localized phases. Similar to the state complexity, the operator complexity is expected to undergo a linear-to-sublinear transition, before it saturates at exponentially large times t_sat∼ 4^N to a value which grows exponentially with system size, ∼ 4^N/2.
In summary, we discuss in this letter
* How Krylov spread complexity can be defined for discrete time-dependent random circuits.
* How the complexity profile, saturation values, and the saturation times scale with system size, indicating a connection with the circuit complexity.
* In Floquet setups, how this notion of complexity in the Krylov basis, generated by the unitaries in the discrete-time picture, acts as a novel probe of various kinds of localization (Anderson and MBL) through suppression of the saturation value. This notion of complexity therefore still measures the delocalization of the wave function in the state Hilbert space.
There are a number of questions that remain to be answered in future works. A pressing one is whether it is possible to probe the scrambling transition in monitored RUCs using quantum complexity. The definition of complexity allows it to grow even in the absence of any growth in entanglement. For example, a discrete evolution in which a product state evolves to another product state results in the growth of complexity but not of entanglement. A possible way could be to consider radiative random unitary circuits <cit.>, which were previously shown to exhibit a scrambling transition probed using OTOCs. Additionally, it should be noted that the measurement process as defined in Eq. (<ref>) renormalizes the state, thereby removing the non-unitary effects. To recover the non-unitarity, one can either remove the normalization or study the evolution with respect to the effective Hamiltonian derived from the non-unitary evolution operator.
Acknowledgement.— We would like to thank Sumilan Banerjee, Mario Flory, Shane Kelly, Subroto Mukherjee, and Zahra Raissi for useful discussions. The work of A.B. is supported by the Polish National Science Centre (NCN) grant 2021/42/E/ST2/00234. A.B. would like to thank International Centre for Theoretical Sciences, Bengaluru, and, the organizers of the program “Quantum Information, Quantum Field Theory, and Gravity" for the hospitality when this work was in the final stage.
|
http://arxiv.org/abs/2409.03045v1 | 20240904193012 | Coherent Thermal Emission from Large-Scale Suspended Nanomechanical Membranes | [
"Mitradeep Sarkar",
"Rajashree Haldankar",
"Julien Legendre",
"Gloria Davidova",
"Adrian Bachtold",
"Georgia T. Papadakis"
] | physics.optics | [
"physics.optics"
] |
Coherent Thermal Emission from Large-Scale Suspended Nanomechanical Membranes
Mitradeep Sarkar,^1 Rajashree Haldankar,^1 Julien Legendre,^1 Gloria Davidova,^1
Adrian Bachtold,^1 Georgia T. Papadakis,^1∗
^1ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology,
Avinguda Carl Friedrich Gauss, 3, 08860 Castelldefels, Barcelona, Spain
^∗To whom correspondence should be addressed; E-mail: [email protected].
===========================================================================================================================================================================================================================================================================================================================================================================================
Thermal radiation is an abundant form of incoherent light. Generating coherent infrared light through incandescence promises a cheap alternative to the costly and epitaxially complex quantum cascade laser, however it remains a fundamental challenge. Previous approaches leveraged the spatial coherence of polaritonic excitations that occur in the thermal near-field, by diffracting them into the far-field zone via patterned micro- or nano-scatterers. This approach requires high-resolution lithography, is difficult to scale-up, and yields limited outcoupled radiation due to the intrinsically polarized nature of polaritons. We overcome these limitations and report coherent thermal emission through simple wave interference. We show that unpatterned, millimeter-scale, suspended nanomechanical membranes of SiC operate for both linear polarizations and exhibit antenna-like directionality without relying on the excitation of near-field polaritons. The ability to generate polarization-insensitive, narrowband and spatially coherent incandescent light without lithography at large scales paves the way towards democratizing thermal infrared technologies.
Incandescence is the emission of thermal radiation by black or grey bodies, which peaks at mid-infrared (IR) frequencies for near-room temperatures, as predicted by Planck's law <cit.>. Detecting deviations from the black body spectrum in the thermal emissivity of materials is central to applications in sensing, spectroscopy, and materials' identification, while tailoring this emissivity by design enables contactless temperature regulation, thermal camouflage, IR imaging, and thermophotovoltaic energy conversion <cit.>. Many of these applications require a coherent IR light source <cit.>. Examples include thermophotovoltaic systems operating at maximal efficiency <cit.>, detection and identification of molecular fingerprints <cit.>, IR holography <cit.>, image encryption via physical tags <cit.> and multispectral imaging <cit.> that require a narrowband and directional response of individual pixels. Nonetheless, generating coherent light at mid-IR frequencies presents fundamental challenges; light emitting diodes (LEDs) suffer from strong non-radiative losses <cit.> and are thus alien to the spectral range above 5 μ m <cit.>, while quantum cascade lasers (QCLs) rely on expensive epitaxial techniques. Due to the ubiquitous nature of thermal radiation, IR light emission through incandescence promises a cheap alternative to the QCL; however, conventional incandescent sources and filaments, like globars, emit radiation with characteristics similar to those of a black body. They thereby generate incoherent light, and parasitic emission towards unwanted frequencies and directions significantly reduces their efficiency.
Generating coherent incandescent light requires confining the spectrally broad and spatially diffuse characteristics of black body radiation into a narrow spectral and angular range. This becomes possible by tailoring the thermal emissivity via nanophotonic design <cit.>. To reduce the spectral bandwidth of far-field thermal radiation at mid-IR frequencies, one can harness the long lifetimes of phonon-induced resonances in crystalline polar materials like silicon carbide (SiC) <cit.>, or utilize the quantum confinement in low-dimensional materials such as carbon nanotubes <cit.> and atomically thin semiconductors <cit.>. It is also possible to leverage wave interference to induce narrowband emission by employing the concept of a Salisbury screen <cit.>. In a Salisbury screen, absorption in a lossy thin-film is resonantly enhanced through interference; the absorbing layer is separated from a back-side reflector via a quarter-wavelength-thick dielectric spacer that enables constructive interference <cit.>. By Kirchhoff's law of thermal radiation <cit.>, a Salisbury absorber can also serve as an emitter, in which case the lossy thin-film serves as the emitting layer. As shown by Ergoktas recently <cit.>, precise tuning of the thickness of the emitting layer yields localized emission, which however remains spatially diffuse.
To narrow the spatially diffuse characteristics of black body radiation, various approaches have been developed since the instrumental work by Greffet et al. <cit.>. The coherent nature of thermally excited surface polaritons in the near-field, in other words at microscopic distances from a surface, can yield antenna-like directional emission in the far-field zone <cit.>. The concept was first demonstrated on a SiC surface that supports surface-phonon polaritons. Via a lithographically patterned grating, these polaritons were diffracted into far-field propagating electromagnetic modes towards specific angles. Similar demonstrations that rely on surface-plasmon polaritons have also been reported <cit.>. However, surface plasmon- or phonon-polaritons occur only in transverse magnetic (TM) polarization <cit.>; hence, concepts that rely on their excitation are generally polarization-specific. In addition, they require coupling between the thermal near-field and far-field zones. In the landscape of thermal emissivity engineering, enabling this coupling requires diffractive elements such as gratings and nanoantennas <cit.>, for which micro- or nano-patterning is required. This demonstrates the need for high-resolution lithography, which hinders large-scale adoption of coherent thermal sources. Additionally, relying on the excitation of TM-polarized polaritons limits the amount of outcoupled radiation and thereby reduces by half the luminosity of a potential thermal source.
To realize technologically relevant mid-IR thermal sources, it is critical that spatial and spectral coherence do not come at the cost of complicated lithographic steps. At the same time, to maximize the brightness or luminosity of a source, it is critical that the thermal emission is not polarization-specific. Recently, directional thermal emission with planar, pattern-free structures has been demonstrated in the context of the epsilon-near-zero response of materials <cit.>, where a Berreman mode occurs <cit.>. In both <cit.> and <cit.>, however, the reported emissivity remained spectrally incoherent, occurred only for TM polarization, and the reported directionality was considerably inferior to nano-structured surfaces <cit.>. Alternatively, directional emission is possible with periodic arrangements of alternating lossy bilayers in a photonic crystal <cit.>. However, for strong directionality, more than ten bilayers are required to mimic the response of a bulk photonic crystal, making their realization impractical and expensive.
Here, we introduce a radically different approach to confine thermal radiation into an ultra-narrow spectral and spatial range that does not rely on the near-field coherence of polaritons <cit.>, neither on collective excitations in subwavelength meta-architectures <cit.>, nor on photonic crystal effects <cit.>. We experimentally demonstrate large-scale (lateral dimensions ∼2 mm), lithography-free thermal emission arising from suspended nanomechanical membranes of SiC, with thickness 200 nm. In analogy to the work by Greffet et al. <cit.>, we utilize SiC as the thermally emitting material, however, without patterning it. The angular selectivity becomes possible by integrating the membranes into a modified Salisbury screen. Although both a single slab of SiC and a SiC-based conventional Salisbury screen yield diffuse emission <cit.>, by modifying the characteristic dimensions of a Salisbury screen we enable previously unreported antenna-like directionality. This directionality is the result of wave interference, thereby, the resulting thermal emission is polarization-insensitive. At a central wavelength of 13.2 μm, we measure a narrow spectral emission bandwidth of 0.7 μm and obtain emission lobes with an angular spread of 12.9 degrees. The angular spread is measured via direct thermal emission measurements, and is comparable to the values reported in the case of gratings <cit.> and 1D photonic crystals <cit.> when conducting similar thermal emissivity characterization above room temperature.
We note that, although Salisbury screens are trivial to realize at other spectral ranges, their experimental implementation at mid-IR frequencies remains a challenge. In particular, a key constituent of a Salisbury screen is a transparent dielectric spacer, however at mid-IR frequencies most materials resonantly absorb due to crystal lattice vibrations <cit.>, thereby there is a scarcity of lossless and dispersionless mid-IR transparent media. Here, we show an alternative and versatile approach to building Salisbury screens at mid-IR frequencies by considering an air gap as the dielectric spacer, for which the SiC nanomembranes are mechanically suspended.
§ DESIGN OF COHERENT THERMAL SOURCE
A Salisbury screen provides near-unity thermal emissivity on resonance from a planar three-layered heterostructure. It consists of an infinitesimally thin lossy emitting layer and a λ/4-thick spacer layer on a back-side reflector, where λ is the wavelength of light inside the spacer. The conventional Salisbury configuration yields narrowband emission, but the emitted light is spatially diffuse <cit.>. Without loss of generality, here we consider the dielectric spacer to be air (n_s=1).
We recently demonstrated that, in order to achieve maximal thermal emission in the three-layered configuration of a Salisbury screen, the following condition should be satisfied <cit.>:
(4π/λ) h_s cosθ + 2Ψ_e = (2l-1)π,
where h_s is the thickness of the dielectric spacer, and θ is the central angle of emission (see Fig. <ref> (A)). The parameter Ψ_e represents the phase accumulated upon a single pass within the lossy emitter, and depends on both the refractive index (n_e) and thickness (h_e) of this layer. In a conventional Salisbury screen, the phase accumulated inside the lossy emitter vanishes due to its small thickness (Ψ_e → 0), for which Eq. <ref> yields h_s=λ/4. By contrast, directional thermal emission is achieved when Ψ_e≈π/2, for which Eq. <ref> yields h_s=λ/(2cosθ). The large (π/2) phase accumulation inside the emitter is observed as strong light confinement in <cit.>, making the device operate like an optical cavity, thereby assuring spatially coherent emission (see Supplementary Information). To achieve Ψ_e≈π/2, the refractive index of the lossy emitting layer should be much larger than unity, and its thickness should no longer be small as in the original Salisbury configuration <cit.>. Considering h_e=λ/(4 | n_e|) warrants sufficient phase accumulation while Eq. <ref> remains satisfied.
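As a quick numerical check of these design rules, the short Python sketch below evaluates the two thicknesses quoted above; the value |n_e| ≈ 16 used for SiC near λ_TO is an assumed, approximate number for illustration only.

import numpy as np

def salisbury_geometry(lam_um, theta_deg, n_e_mag):
    # h_s = lambda / (2 cos(theta)) and h_e = lambda / (4 |n_e|), as quoted above
    theta = np.radians(theta_deg)
    return lam_um / (2.0 * np.cos(theta)), lam_um / (4.0 * n_e_mag)

h_s, h_e = salisbury_geometry(13.0, 40.0, 16.0)   # ~8.5 um air gap, ~0.2 um SiC layer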
We consider SiC for the lossy emitting layer, since it supports an ultra-high refractive index in its crystalline form at frequencies near its Reststrahlen band, corresponding to wavelengths in the range 12.6 μm - 13.5 μm. Importantly, this is the spectral range where the black body spectrum of thermal radiation is maximal for near-room temperatures <cit.>. For wavelengths in this range, the SiC layer should have a thickness of h_e=λ/(4 | n_e|)≈ 200 nm.
Central to realizing a Salisbury screen at mid-IR frequencies is identifying appropriate non-absorbing and dispersionless transparent materials that can serve as the dielectric spacer layer <cit.>, as well as appropriate methods to grow or deposit them. The targeted thickness for the spacer layer is in the range of a few microns, therefore conventional deposition methods, such as thermal and electron beam evaporation or reactive sputtering, cannot be used due to challenges in film uniformity, adhesion, buckling, peel-off, and related effects <cit.>. In addition, most dielectrics exhibit strong polar resonances in the mid-IR range. To circumvent these challenges, we utilize an air gap as a dielectric spacer (see schematic in Fig. <ref> (A)). Due to its low refractive index, air is an ideal spacer for directional emission <cit.>. Aiming to achieve directional thermal emission at an angle of approximately θ_0=40 degrees and at wavelengths near the Reststrahlen band of SiC, the height of the air gap should be h_s=λ/(2cosθ_0)≈8.5μ m.
To practically realize this air gap, we etch a trench of height h_t=8.5 μm and lateral dimensions L_t × L_t, where L_t=3 mm, into a silicon dioxide wafer (SiO_2) via reactive ion etching. A corner of the trench is shown via a scanning electron microscope (SEM) image in Fig. <ref> (C), and its height is measured via profilometry as shown in Fig. <ref> (D). A thin (150 nm) layer of gold is deposited at the bottom of the trench by thermal evaporation. This gold layer serves as the back-side reflector of the Salisbury screen. We place on top of the trench the 200 nm-thick layer of SiC, which is in the form of a commercially available nanomechanical suspended membrane with lateral dimensions L_e × L_e, where L_e=2 mm, purchased from Norcada Inc. The membrane is commercialized with a 400 μm-thick silicon frame that is shown with the green color in Fig. <ref> (A). A photograph of the complete device is shown in Fig. <ref> (B).
The lateral dimensions of the etched trench are intentionally selected to be slightly larger than those of the SiC nanomembrane to avoid direct contact of the trench edges with the nanomembrane, in order to reduce mechanical stress <cit.>. The flatness of the nanomembrane upon its integration onto the trench is evaluated by scanning the whole area of the membrane while conducting micro-Fourier Transform Infrared Spectroscopy (FTIR). In particular, we measured the thickness of the spacer layer, h_s, at each scanned position, by tracking the reflectance minimum at normal incidence, which occurs when Eq. <ref> is satisfied for θ=0. The measured h_s is shown in Fig. <ref> (E). As shown in Fig. <ref> (E), h_s varies from 8.5 μm to 9.2 μm, a variation that does not considerably affect the degree of directionality, as shown in the following results. A small tilt of the nanomembrane towards one of the corners of the wedge is seen from Fig. <ref> (E). Despite this tilt, the suspended nanomembrane exhibits no bending or buckling across a surface area of 2 mm × 2 mm, even at its extreme thickness-to-length ratio of h_e/L_e=10^-4, owing to the high stress homogeneity in the suspended nanomembrane <cit.>. We note that, in extracting the thickness of the spacer, we utilized experimentally measured values of the refractive index of SiC (see Supplementary Information Section 2), which agree with those reported in the literature for crystalline SiC <cit.>. For additional details on the experimental realization, device imaging and characterization, see Supplementary Information.
§ EXPERIMENTAL CHARACTERIZATION OF DIRECTIONAL ABSORPTIVITY AND THERMAL EMISSION
In the absence of magnetic effects, Kirchhoff's law of thermal radiation <cit.> imposes an equality between the thermal emissivity, ℰ, and the absorptivity, 𝒜, per frequency, polarization, and direction. In opaque substances with vanishing transmission, the absorptivity equals 𝒜=1-R, where R is the reflectivity. In the experimental characterization of the thermal source, we conducted both angle-dependent absorptivity and emissivity measurements. The absorptivity measurements are carried out by probing the reflectivity for both linear polarizations: transverse magnetic (TM) and transverse electric (TE), using a manual variable-angle reflection stage coupled to an FTIR. The measured 𝒜, shown in Figs. <ref> (B), (E) for TM and TE polarization, respectively, were normalized to a reference gold mirror. In Figs. <ref> (A), (D), we show the corresponding calculations that were carried out using the transfer matrix method <cit.>, averaged over the range of spacer layer thicknesses that were experimentally measured as shown in Fig. <ref> (E) (see Supplementary Information Section 5).
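For readers who wish to reproduce curves of this type, a minimal transfer-matrix sketch for the three-layer stack (SiC membrane / air gap / optically thick gold) is given below. It is a generic characteristic-matrix implementation rather than the code behind Fig. <ref>, and the refractive-index values in the example are rough placeholders, not the measured data of Supplementary Information Section 2.

import numpy as np

def absorptivity(lam, theta, pol, layers, n_sub, n_in=1.0):
    # A = 1 - |r|^2 for an opaque stack; `layers` is a list of
    # (complex refractive index, thickness) pairs from the incidence side,
    # and n_sub is the index of the semi-infinite (opaque) substrate
    k0 = 2 * np.pi / lam
    kx = k0 * n_in * np.sin(theta)

    def kz_q(n):
        kz = np.sqrt((k0 * n) ** 2 - kx ** 2 + 0j)
        return kz, (kz if pol == "TE" else kz / n ** 2)

    _, q_in = kz_q(n_in)
    _, q_sub = kz_q(n_sub)
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        kz, q = kz_q(n)
        delta = kz * d
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / q],
                          [1j * q * np.sin(delta), np.cos(delta)]])
    r = ((q_in * M[0, 0] + q_in * q_sub * M[0, 1] - M[1, 0] - q_sub * M[1, 1]) /
         (q_in * M[0, 0] + q_in * q_sub * M[0, 1] + M[1, 0] + q_sub * M[1, 1]))
    return 1.0 - abs(r) ** 2

# placeholder indices near the SiC Reststrahlen band (illustration only)
n_SiC, n_Au = 15 + 3j, 12 + 55j
stack = [(n_SiC, 200e-9), (1.0, 8.5e-6)]            # membrane, then air gap
A_TM = [absorptivity(12.9e-6, np.radians(t), "TM", stack, n_sub=n_Au)
        for t in range(0, 85)]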
As shown in Fig. <ref> (A), (B) and (D), (E), the calculated and measured absorptivities, respectively, are in good agreement. In these panels, the wavelengths corresponding to the transverse optical (TO) and longitudinal optical (LO) phonons of SiC are marked with λ_TO and λ_LO, respectively (see dashed horizontal lines). As expected from theory <cit.>, for both linear polarizations, absorptivity maxima occur near λ_TO. By contrast, the additional absorptivity maximum near λ_LO corresponds to the excitation of a surface phonon polariton that occurs only for TM polarization (panels (A) and (B)). The polarization-independent absorptivity maxima near λ_TO shift as a function of θ, similar to the case of a diffraction grating <cit.>. In Figs. <ref> (C), (F) we show the polar plots of the absorptivity at the wavelength of λ=13.2 μm, for TM and TE polarization, respectively (this wavelength is marked with a dashed blue line in panels (A), (B), (D), (E) of the same figure). As seen, the solid blue lines, representing the theoretical calculation, match the experimental measurements, shown with solid black dots. The angular width of the lobes for TM polarization is measured to be 7.4 degrees, whereas the theoretical calculation yields 7.7 degrees. For TE polarization, the measured as well as the theoretically calculated angular width is 6.5 degrees. From Fig. <ref>, the thermal source preferentially absorbs mid-IR light at specific wavelengths and incident angles with directionality comparable to nanostructured surfaces <cit.>.
Next, we carry out direct thermal emissivity measurements by heating the sample and probing its thermal emission. To probe thermal emissivity and absorptivity simultaneously, these measurements are first conducted at normal incidence, θ=0, from a spot of 100μ m× 100μ m, using FTIR micro-spectroscopy with a Cassegrain objective that has an aperture angle of 16 degrees. The schematic of the experimental setup is shown in Fig. <ref> (A). As shown, two separate MCT detectors are used for these measurements; in the beam path for the absorptivity measurements, the sample is placed between the FTIR and the detector. By contrast, in the emissivity measurements the blackbody source of the FTIR is replaced by the sample itself, thus changing the beam path.
To extract the thermal emissivity we adopt the method outlined by Xiao et al. in <cit.>, by first measuring the emission signal S(λ,T)=m(λ)[ℰ(λ)I_BB(λ,T)+B(λ)],
where m(λ) is the spectral response of the measuring system, I_BB(λ,T) is the black body radiation given by Planck's law, and B(λ) is the background emission at room temperature. We measure the emission signal from a carbon tape, which serves as the reference, S_ref, and that of the sample, S_sample, at two temperatures: T_1=423K and T_2=393K, using a thermal stage (Linkam Inc.). The sample's emissivity, ℰ(λ), is calculated as:
ℰ(λ)=ℰ_ref[S_sample(λ,T_1)-S_sample(λ,T_2) /S_ref(λ,T_1)-S_ref(λ,T_2)],
where ℰ_ref is the emissivity of the carbon tape, which was evaluated by measuring the reflectance from the tape (see Supplementary Information Section 6).
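The extraction of Eq. (<ref>) is a simple per-wavelength operation on the four measured spectra (arrays on a common wavelength grid); a short sketch with our own variable names is:

import numpy as np

def emissivity_from_two_temps(S_sam_T1, S_sam_T2, S_ref_T1, S_ref_T2, eps_ref):
    # the instrument response m(lambda) and background B(lambda) cancel
    # in the two-temperature differences, leaving Eq. (7)
    return eps_ref * (S_sam_T1 - S_sam_T2) / (S_ref_T1 - S_ref_T2)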
The simultaneous measurements of absorptivity and emissivity at normal incidence are shown in Fig. <ref> (B). Since thermally emitted light is intrinsically unpolarized, the incident light for the absorptivity measurements was also left unpolarized, allowing a direct comparison between ℰ and 𝒜. As shown, the two measurements agree, as expected, with small deviations. These deviations are expected since, at the measured temperatures, which are not considerably higher than room temperature, the emitted signal from the sample is comparable in magnitude to the background thermal emission arising from the instrument itself (see Supplementary Information). Thereby, there is a higher uncertainty associated with measuring ℰ than with measuring 𝒜. There exist three peaks in both the 𝒜 and ℰ spectra. The two peaks at the shorter wavelengths result from the intrinsic properties of SiC, in particular its vanishing permittivity at the wavelengths of λ_LO and λ_TO (see Fig. <ref> and Supplementary Information). These are independent of the angle of observation. The peak near λ=14.7μ m corresponds to a Fabry-Perot mode that also satisfies Eq. <ref> for θ=0. This peak, as shown in Fig. <ref>, depends strongly on the observation angle.
Next, we conduct angle-dependent measurements of the thermal emissivity using the same manual variable-angle stage coupled to the same FTIR that was used for the absorption measurements. The schematic of the experimental setup is shown in Fig. <ref> (A). Similar to the measurements at normal incidence (Fig. <ref>), for each angle of observation, we obtain the emission spectra for the sample and the carbon tape at T_1=423K and T_2=393K. From these spectra, the angle-dependent thermal emissivity is derived (see Eq. <ref> and Supplementary Information) and presented in Fig. <ref> (B). As expected, it closely follows the dispersion of the absorptivity shown in Figs. <ref> (B), (E), with a major difference being that the emissivity measurements are conducted for unpolarized light, whereas the absorptivity measurements in Fig. <ref> were conducted while polarizing the incident beam.
To suppress the influence of the background thermal emission from the FTIR instrument itself, which lowers the detected signal-to-noise ratio, we used a pinhole to limit the area of the sample from which emission is detected. Although such a pinhole was also used in the absorptivity measurements, the aperture size of the pinhole was different in the two measurements. The pinhole's diameter in the absorptivity measurements was 1 mm, whereas it was 5 mm in the emissivity measurements. A larger pinhole diameter is required in the emissivity measurements in order to detect a sufficient signal from the nanomembranes, which have lateral dimensions of only 2 mm × 2 mm. For this reason, the scales for the absorptivity and emissivity measurements in Figs. <ref> (B), (E), and <ref> (B), respectively, differ, and the emissivity lobes in Fig. <ref> (D) are broader than those of the absorptivity in Figs. <ref> (C), (F). In particular, this difference is attributed to the background thermal emission from the silicon frame of the membrane (see Fig. <ref> (A)) and the wafer, which are at the same temperature as the membrane and thereby also emit IR light. This signal is obviously not present in the absorptivity measurements, when the sample is held at room temperature. Due to the 5 mm pinhole used in the emissivity measurements, the signal from the silicon frame and wafer is considerable. In addition, we note that, in contrast to the simultaneous measurements of emissivity and absorptivity, which were conducted within the FTIR microscope (Fig. <ref> (A)), the angle-dependent measurements of ℰ and 𝒜 were conducted separately, altering the sample position and alignment for every probed angle of observation and incidence, respectively.
In Fig. <ref> (C), we show the emissivity spectra for two observation angles, θ=36 and 42 degrees, and in Fig. <ref> (D) we show the corresponding polar plots of the thermal emissivity for the same wavelength as in Figs. <ref> (C), (F), i.e. for λ=13.2 μm (blue), as well as for λ=13.7 μm (black). The spectral widths of the peaks shown in Fig. <ref> (C) are 1 μm and 0.7 μm for θ=36 and 42 degrees, respectively, while the angular spreads of the emission lobes shown in Fig. <ref> (D) are 12.9 and 14.4 degrees for λ=13.2 μm and λ=13.7 μm, respectively. These spectral widths, corresponding to unpolarized light emitted from the sample, are roughly equal to the sum of the spectral widths obtained when measuring the absorptivity for TM and TE polarization separately, as shown in Figs. <ref> (C), (F), respectively. The increased angular spread with respect to the absorptivity measurements is associated with the aforementioned background parasitic emission from surrounding bodies. Despite this angular broadening, the highly directive character of the sample is preserved and clearly shown in the thermal emissivity measurements. This angular spread is comparable with that reported via direct thermal emission measurements from diffraction gratings <cit.> as well as micro- and nano-structured surfaces <cit.>.
§ DISCUSSION
We introduced an approach to generating highly directional and ultra-narrow-band infrared light through incandescence. In order to directly compare the proposed concept with the seminal work by Greffet et al. <cit.> that serves as a hallmark in achieving directionality in thermal emission, in our experimental demonstration, the thermal emitter is also composed of SiC as in <cit.>. However, unlike <cit.> and relevant follow-up works <cit.>, the thermal source presented here does not rely on the excitation of a surface-plasmon or phonon polariton, neither on diffraction, nor on the collective excitation of meta-atoms as in metasurface-based architectures. By contrast, it relies on wave interference and builds upon the well-established concept of a Salisbury screen <cit.> that has been proposed in the 1950's. Via appropriate modifications to the characteristic dimensions of a Salisbury screen, thermal emission becomes highly spectrally coherent and directionally narrow for both linear polarizations.
With the etching of an IR-wavelength-deep air gap being the only synthesis step in the fabrication of the device, we propose a straightforward way to realize coherent incandescent sources at mid-IR wavelengths. The generated IR light is confined into a very narrow frequency and angular range without the need for any high-resolution lithography. This creates a platform for ultra-efficient and simple-to-fabricate mid-IR thermal sources. The lateral size of the thermal source is on the order of millimeters, thus making this approach amenable to large-scale heat transfer applications and integrated mid-IR photonics at the wafer scale. The top layer, composed of a SiC nanomembrane in the present demonstration, can be replaced by another polar material with high crystallinity, such as hexagonal boron nitride, α-MoO_3, or III-V materials like GaAs, InP and AlAs, among others <cit.>. Finally, the wavelength and angle of emission can be actively tuned by modifying the thickness of the air gap. This mechanism is available in micro-electro-mechanical systems (MEMS) technology, thus enabling actively tunable coherent thermal emission sources. We hope that these findings can be a key factor in democratizing thermal infrared technologies, enabling cheaper IR spectroscopy and sensing, and improving integrated IR photonics functionalities.
§ ACKNOWLEDGEMENTS
This work has been supported in part by la Caixa Foundation (ID 100010434),
the Spanish MICINN (PID2021-125441OA-I00, PID2020-112625GB-I00, and CEX2019-000910-S), the European Union (fellowship LCF/BQ/PI21/11830019 under the Marie Skłodowska-Curie Grant Agreement No. 847648), Generalitat de Catalunya (2021 SGR 01443), Fundació Cellex, and Fundació Mir-Puig.
|
http://arxiv.org/abs/2409.03354v1 | 20240905085556 | Few-Shot Continual Learning for Activity Recognition in Classroom Surveillance Images | [
"Yilei Qian",
"Kanglei Geng",
"Kailong Chen",
"Shaoxu Cheng",
"Linfeng Xu",
"Hongliang Li",
"Fanman Meng",
"Qingbo Wu"
] | cs.CV | [
"cs.CV"
] |
Few-Shot Continual Learning for Activity Recognition in Classroom Surveillance Images
^† Corresponding author ([email protected])
^* Equal Contribution
Yilei Qian^*, Kanglei Geng^*, Kailong Chen, Shaoxu Cheng, Linfeng Xu^†, Hongliang Li, Fanman Meng, Qingbo Wu
School of Information and Communication Engineering
University of Electronic Science and Technology of China, Chengdu, China
{yileiqian, kangleigeng, chenkailong, shaoxu.cheng}@std.uestc.edu.cn, {lfxu, hlli, fmmeng, qbwu}@uestc.edu.cn
September 9, 2024
===================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
The application of activity recognition in the “AI + Education” field is gaining increasing attention. However, current work mainly focuses on the recognition of activities in manually captured videos and a limited number of activity types, with little attention given to recognizing activities in surveillance images from real classrooms. In real classroom settings, normal teaching activities such as reading account for a large proportion of samples, while rare non-teaching activities such as eating continue to appear. This requires a model that can learn non-teaching activities from few samples without forgetting the normal teaching activities, which necessitates few-shot continual learning (FSCL) capability. To address this gap, we constructed a continual learning dataset focused on classroom surveillance image activity recognition called ARIC (Activity Recognition in Classroom). The dataset has advantages such as multiple perspectives, 32 activity categories, and real-world scenarios, but it also presents challenges like similar activities and imbalanced sample distribution. To overcome these challenges, we designed a few-shot continual learning method that combines supervised contrastive learning (SCL) and an adaptive covariance classifier (ACC). The SCL improves the generalization ability of the model, while the ACC module provides a more accurate description of the distribution of new classes. Experimental results show that our method outperforms other existing approaches on the ARIC dataset.
Few-Shot Continual Learning, Activity Recognition in Classroom Surveillance Images, Adaptive Covariance Classifier
§ INTRODUCTION
In recent years, activity recognition has gained increasing attention as a significant application of AI in classroom settings. However, existing studies <cit.> have primarily focused on the recognition of a limited number of activities, and the data collected are often manually captured videos rather than classroom surveillance images. Activity recognition in classroom surveillance images faces multiple challenges, including class imbalance, high activity similarity, and privacy protection. To fill this gap, we constructed the ARIC dataset, specifically designed for activity recognition in classroom surveillance images. This dataset offers a rich variety of activity types, provides multi-perspective surveillance images, and is sourced from real classroom surveillance videos. However, the ARIC dataset also presents several challenges: 1) an imbalanced distribution of activity categories with significant differences in sample sizes; 2) high similarity between samples of different categories, which can lead to confusion; 3) features extracted by a shallow network to protect privacy, increasing recognition difficulty; and 4) the continuous occurrence of non-instructional activities in real scenarios, requiring the model to have continual learning capabilities.
To address the challenges faced by the ARIC dataset, we can apply few-shot continual learning methods. Few-shot continual learning has garnered significant attention in recent years, with mainstream approaches involving training a feature extractor during the base phase and freezing it during the incremental phase, using class prototypes as classifiers. The FACT <cit.> method creates virtual classes to reserve space for future classes, SAVC <cit.> introduces contrastive learning during the base phase and achieves better model generalization through the fantasy space, and ALICE <cit.> uses an angular penalty loss to achieve more compact intra-class clustering.
Nevertheless, current methods remain inadequate in addressing the specific challenges posed by the ARIC dataset. To this end, we propose a specialized few-shot continual learning method for activity recognition in classroom surveillance images. During the base phase, we use a feature-augmented supervised contrastive learning approach to enhance the model's generalization ability and reserve space for future activity categories to better achieve future class predictions. In the incremental phase, the covariance matrix is used as a memory unit, combined with an adaptive mechanism to form the ACC module. By analyzing the variance of new classes, it dynamically adjusts the classifier's decision boundaries to match the feature distribution of the new classes, effectively addressing the issues of small sample size and similarity between new and old classes. Experimental results demonstrate that our method outperforms existing approaches on the ARIC dataset.
§ ARIC-DATASET
The ARIC is a brand-new and challenging dataset based on real classroom surveillance scenarios. We used surveillance videos from three different perspectives—front, middle, and rear—of real classroom scenarios as the raw data (as shown in Fig. <ref>). Images were then extracted from these videos, and the activities of students and teachers within the images were annotated, forming the image modality. We also extracted audio corresponding to 5 seconds before and after each image (a total of 10 seconds) as the audio modality. Additionally, we used the open-source large model InternVL <cit.> to generate captions for each image as the text modality.
The ARIC dataset is characterized by its real classroom scenarios, three modalities, and diverse perspectives. The complexity of human activities, the diversity of actions, and the uniqueness of crowded classroom scenes make this dataset highly challenging. The dataset consists of 36,453 surveillance images covering 32 classroom activities, such as listening to lecture, reading, and using mobile phone. The distribution of samples across different activities is shown in Fig. <ref>.
To protect the privacy of individuals appearing in the images and to avoid releasing the original images, we used the shallow layers of pre-trained models to convert the original images into feature data. Considering the need for backbone models in the field of continual learning, we selected three commonly used pre-trained models: ResNet50 <cit.>, ViT <cit.>, and CLIP-ViT <cit.>. For example, by using the conv1 layer and the 3×3 max-pooling layer of the pre-trained ResNet50 model, we converted the image data into feature data with dimensions of [1, 64, 56, 56].
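A minimal torchvision sketch of this feature-extraction step is shown below; whether BatchNorm and ReLU are applied between conv1 and the max pool is our assumption (the stated output shape [1, 64, 56, 56] holds for a 224×224 input either way).

import torch
import torchvision

resnet = torchvision.models.resnet50(weights="DEFAULT").eval()
stem = torch.nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool)

with torch.no_grad():
    image = torch.rand(1, 3, 224, 224)     # stand-in for a normalized RGB frame
    features = stem(image)                 # -> torch.Size([1, 64, 56, 56])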
We also pre-defined reasonable incremental learning task divisions within the dataset to standardize experiments across the dataset: A) In the base phase, provide a few categories with a large number of samples, then randomly and as evenly as possible distribute the remaining categories across different incremental phases. B) Arrange the categories in descending order by the number of samples and then allocate them to different incremental phases based on this order. The specific partitioning schemes will be represented using the formula B+S×N. Here, B represents the number of base classes, S represents the number of incremental phases, and N represents the number of categories in each incremental phase. For example, 8+6×4 means there are 8 base categories, 6 incremental phases, and 4 categories in each incremental phase.
The ARIC dataset can be downloaded, and more detailed information can be obtained by the link: https://ivipclab.github.io/publication_ARIC/ARIChttps://ivipclab.github.io/publication_ARIC/ARIC.
§ METHOD
In this section, we will first introduce the task setup for FSCL, followed by an explanation of our proposed method.
§.§ Few-Shot Continual Learning
Base Session: In FSCL, the dataset needs to provide a base class training set with sufficient samples, denoted as 𝒟^0 = {(𝐱_i, 𝐲_i)}_i=1^N_0, and a base class test set 𝒟_t^0 = {(𝐱_i, 𝐲_i)}_i=1^M_0, where N_0 and M_0 represent the number of samples in the training set and test set respectively. Here, 𝐱_i ∈ℝ^D is the training instance for 𝐲_i ∈ Y_0, and Y_0 is the label space of the base task.
Incremental Session: In this stage, the training set for new tasks {𝒟^1 ,…,𝒟^B} are introduced sequentially. Each set is denoted as 𝒟^b = {(𝐱_i, 𝐲_i)}_i=1^N_b, where 𝐲_i ∈ Y_b, and Y_b ∩ Y_b' = ∅ for b ≠ b'. The dataset 𝒟^b is only accessible during the training phase of task b. The limited instances in each dataset can be organized in an N-way, K-shot format, representing N classes with K sample instances per class at each incremental stage.
§.§ Feature-Augmented Supervised Contrastive Learning
To address the challenge of high similarity between different activities in the ARIC dataset, we introduce supervised contrastive learning during the base phase. SCL is particularly effective in handling fine-grained differences, enabling the model to better distinguish and amplify subtle variations<cit.> between easily confused categories, such as reading a book versus looking at a phone. Additionally, SCL contributes to achieving more compact clustering, which reserves space for future incremental categories and thus enhances the model’s ability for FSCL.
In contrastive learning, image augmentation techniques play a crucial role<cit.>. However, since the ARIC dataset is released as features rather than images, we designed a feature augmentation strategy that adapts traditional image augmentation methods, including cropping, flipping, and rotation, to the feature space. This strategy is integrated into the MoCo<cit.> framework to implement SCL, as shown in Fig. <ref>. This framework maintains a continuously updated feature repository, allowing the model to learn the most recent feature representations. In each training iteration, we first apply a series of random augmentations to the input feature 𝐱, generating two augmented views 𝐱_q and 𝐱_k. These are then processed by their respective encoders ϕ_q, ϕ_k and projection layers h_q, h_k, resulting in query feature 𝐪 and key feature 𝐤. A feature queue stores the most recently computed key features along with their label information. The key network is updated using a momentum mechanism to ensure smoother and more robust parameter updates. This setup enables the model to learn more discriminative feature representations from a large pool of samples.
The supervised contrastive loss for each feature sample 𝐱 is computed as follows:
ℒ_SCL(𝐱) = -1/|P(𝐱)| ∑_𝐤_+∈ P(𝐱) log [ exp(𝐪·𝐤_+ / τ) / ∑_𝐤^'∈𝐤∪𝐐 exp(𝐪·𝐤^' / τ) ]
Here, 𝐐 represents the feature queue, and P(𝐱) denotes the set of positive samples, which is the set of samples in 𝐤∪𝐐 that belong to the same class as 𝐱.
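A minimal batched implementation of this loss is sketched below; for efficiency the keys of the whole batch are pooled together with the queue 𝐐, which is a common batched variant of the per-sample formulation above, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def scl_loss(q, k, queue, y, queue_y, tau=0.07):
    """Supervised contrastive loss for one batch.
    q, k      : (B, D) query / key features of the current batch
    queue     : (Q, D) stored key features, queue_y: (Q,) their labels
    y         : (B,) labels of the current batch."""
    q = F.normalize(q, dim=1)
    keys = F.normalize(torch.cat([k, queue], dim=0), dim=1)     # keys in k ∪ Q (batch pooled)
    key_labels = torch.cat([y, queue_y], dim=0)
    logits = q @ keys.t() / tau                                 # (B, B+Q)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_mask = (y[:, None] == key_labels[None, :]).float()      # the positive set P(x)
    # average log-probability over positives, then negate and average over the batch
    loss = -(pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```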
During the base phase, in addition to the SCL loss, we also use a cross-entropy classification loss to simultaneously optimize the model's classification ability and the discriminability of feature representations. We use ϕ_q in the query network as a feature extractor to extract features for classification. The cross-entropy classification loss is defined as follows:
ℒ_cls(𝐱, 𝐲) = ℒ_ce(W^⊤ϕ_q(𝐱), 𝐲)
where ℒ_ce(·, ·) denotes the cross-entropy loss, W ∈ℝ^d × |Y_0|, and ϕ_q(𝐱) ∈ℝ^d × 1.
The final loss function is:
ℒ_total = ℒ_cls + ℒ_SCL
§.§ Adaptive Covariance Classifier
Traditional classifiers based on the Nearest Class Mean (NCM) rely on learning features from all classes together. However, in incremental learning, dynamic data streams can make NCM less effective. Mensink et al.<cit.> introduced the use of Mahalanobis distance to measure the distance between samples and classes, which is better suited for this scenario<cit.>. Additionally, a feature extractor trained only on base classes can result in high semantic similarity between new classes and some old classes<cit.>. As shown on the left side of Fig. <ref>, some new class samples have features that are too close to old classes, leading to classification errors. Our proposed ACC module leverages class variance characteristics to adjust the covariance matrix, making it more aligned with the class feature distribution. After adjustment, the decision boundaries for the new classes, as shown on the right side of the Fig. <ref>, allow a significant portion of the new classes to be correctly reclassified.
When predicting the label of a sample, the Mahalanobis distance 𝐃(𝐱) is used to calculate the distance between the sample and the class. Here, 𝐆 represents the Gaussian-transformed feature vector of the sample 𝐱, denoted as 𝐆(ϕ_q(𝐱)), and μ is the mean vector of the class, while Σ_𝐚 is the adaptive covariance matrix.
𝐃(𝐱) = √((𝐆 - μ)^⊤ Σ_𝐚^-1 (𝐆 - μ))
Using Gaussian-transformed data helps generate representative samples, but raw feature data often exhibits skewness<cit.>. To ensure that the input features approximate a Gaussian distribution, we applied the Box-Cox transformation, where λ is a hyperparameter:
𝐆(x) =
  (x^λ - 1)/λ,   if λ ≠ 0,
  log(x),        if λ = 0
In few-shot learning scenarios, the number of samples is much smaller than the feature dimensions, which can result in a rank-deficient covariance matrix, making it impossible to compute its inverse. To address this, we introduced covariance shrinkage<cit.>, incorporating the adaptive parameter α, and applied normalization to compute the adaptive covariance matrix Σ_𝐚.
Σ_𝐚 = Normal[ Σ + ασ_1 𝐈 + σ_2 (1 - 𝐈)]
α = k/N_b ∑_i=1^N_b (ϕ_q(𝐱_i) - μ)^2
Here, Σ is the class covariance matrix, 𝐈 is an identity matrix of the same shape as Σ, and 1 is an all-ones matrix of the same shape as Σ. The values σ_1 and σ_2 represent the mean of the diagonal and off-diagonal elements of Σ, respectively, with a scaling factor k > 1. The adaptive parameter α adjusts the covariance matrix through σ_1 and σ_2.
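The sketch below illustrates how the Box-Cox transform, the adaptive covariance shrinkage, and the Mahalanobis distance fit together; since the reduction of the per-dimension terms in α to a scalar and the exact meaning of Normal[·] are not spelled out here, the per-dimension averaging and the Frobenius normalization used below are assumptions.

```python
import numpy as np

def box_cox(x, lam=0.2, eps=1e-8):
    """Box-Cox transform G(.) applied element-wise (features assumed non-negative)."""
    x = np.maximum(x, eps)
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def adaptive_covariance(feats, mu, k=4.0):
    """Adaptive, shrunk and normalised covariance for one class.
    feats: (N_b, D) Gaussian-transformed class features, mu: (D,) class mean."""
    N_b, D = feats.shape
    sigma = np.cov(feats, rowvar=False)                    # class covariance Sigma
    eye = np.eye(D)
    s1 = np.mean(np.diag(sigma))                           # mean of diagonal entries
    s2 = np.mean(sigma[~eye.astype(bool)])                 # mean of off-diagonal entries
    # adaptive parameter alpha; averaging over dimensions is an assumption
    alpha = (k / N_b) * np.sum((feats - mu) ** 2) / D
    shrunk = sigma + alpha * s1 * eye + s2 * (1 - eye)
    return shrunk / np.linalg.norm(shrunk)                 # Normal[.] read as Frobenius normalisation

def mahalanobis(x, mu, sigma_a):
    """Mahalanobis distance D(x) between a transformed sample and a class."""
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(sigma_a) @ d))
```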
§ EXPERIMENTS
§.§ Implementation Details
We evaluate our proposed method on the ARIC dataset, using only the image modality for this experiment. The task is divided as follows: the base phase utilizes 20 classes, and in the subsequent 4 incremental phases, 3 new classes are introduced at each phase, following a 3-way 5-shot setting. In each incremental phase, only 5 samples per new class are provided for training. This setup simulates the scenario in classroom surveillance images where non-instructional activities continuously appear but with a limited number of samples, requiring the model to learn effectively under constrained sample conditions. We adopt ResNet18<cit.> as the backbone of our network. In the base phase, we use an SGD optimizer with a momentum of 0.9 and an initial learning rate of 0.1, adjusted with a cosine annealing scheduler, and train the model for 200 epochs with a batch size of 256. During the incremental phase, we set the parameter λ=0.2 for the Gaussian transformation and the adaptive scaling factor k=4 in the ACC module.
§.§ Result
The experimental results of our method on the ARIC dataset are shown in Table <ref> (the evaluation metric is the average TOP-1 accuracy tested on all known classes). To demonstrate the effectiveness of our method on the ARIC dataset, we compared it with several state-of-the-art few-shot continual learning methods. ALICE<cit.>, FACT<cit.>, and SAVC<cit.>, are all based on the feature space and use prototypes as classifiers, while Teen<cit.> only adjusts the prototype classifier during the incremental phase. Additionally, we present the results of Finetune without using any continual learning methods. The experimental results show that our proposed method significantly outperforms existing methods in each incremental task on the ARIC dataset.
§.§ Ablation Study
To evaluate the impact of each component in our proposed method, we conducted ablation experiments, as shown in Table <ref>. First, when we disabled the SCL loss (ℒ_SCL) during the base phase, the experimental results showed a significant drop in the model's performance on base class classification. This indicates that SCL allows the model to more accurately distinguish between easily confused categories. Additionally, the model’s performance after the last incremental phase also declined, suggesting that SCL achieved more compact clustering during the base phase, thereby enhancing the model’s few-shot continual learning ability. Second, when we removed the ACC module and used only the prototype classifier, the experimental results showed a decrease in performance at each incremental stage, indicating that the ACC classifier better defines the decision boundaries for each class in incremental tasks. We also added the ACC module to the FACT method for experimentation, and the results similarly demonstrated this point.
Finally, to verify the impact of the adaptive mechanism on the ACC module, we fixed the adaptive parameter α to 1 in our experiments. The results, shown in Table <ref>, indicate that disabling the adaptive mechanism led to a decline in performance across all incremental stages. This demonstrates the significant improvement provided by the adaptive mechanism to the covariance classifier, offering a quantitative assessment of its contribution to the model's performance.
§.§ Visualization
As illustrated in Fig. <ref>, we compared the feature distribution in the feature space after the base phase for three different methods. It is evident that our method achieves superior clustering for the base classes. Compared to Finetune and FACT, incorporating supervised contrastive learning effectively increases the inter-class distance while reducing the intra-class distance, resulting in more compact clusters for each class. This significantly contributes to accurate base class recognition and better integration of incremental classes into the feature space.
§ CONCLUSION
In this study, we tackled the unique challenges of activity recognition in classroom surveillance images, particularly those presented by the ARIC dataset, by developing an innovative few-shot continual learning method. Our approach effectively addresses issues such as class imbalance, high activity similarity, and the need for privacy-preserving features. This is achieved by integrating feature-augmented SCL in the base phase and the ACC module in the incremental phase. The experimental results on the ARIC dataset demonstrate that our method significantly enhances the model's generalization ability and improves classifier accuracy.
|
http://arxiv.org/abs/2409.03249v1 | 20240905045540 | Multiple weather images restoration using the task transformer and adaptive mixup strategy | [
"Yang Wen",
"Anyu Lai",
"Bo Qian",
"Hao Wang",
"Wuzhen Shi",
"Wenming Cao"
] | cs.CV | [
"cs.CV"
] |
Guangdong Provincial Key Laboratory of Intelligent Information Processing, School of Electronic and Information Engineering, Shenzhen University, Shenzhen, China; Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Multiple weather images restoration using the task transformer and adaptive mixup strategy
Yang Wen10000-0001-6303-8178 Anyu Lai1 Bo Qian2 Hao Wang1 Wuzhen Shi1 Wenming Cao1
September 9, 2024
==========================================================================================
§ ABSTRACT
The current state-of-the-art in severe weather removal predominantly focuses on single-task applications, such as rain removal, haze removal, and snow removal. However, real-world weather conditions often consist of a mixture of several weather types, and the degree of weather mixing in autonomous driving scenarios remains unknown. In the presence of complex and diverse weather conditions, a single weather removal model often encounters challenges in producing clear images from severe weather images. Therefore, there is a need for the development of multi-task severe weather removal models that can effectively handle mixed weather conditions and improve image quality in autonomous driving scenarios. In this paper, we introduce a novel multi-task severe weather removal model that can effectively handle complex weather conditions in an adaptive manner. Our model incorporates a weather task sequence generator, enabling the self-attention mechanism to selectively focus on features specific to different weather types. To tackle the challenge of repairing large areas of weather degradation, we introduce Fast Fourier Convolution (FFC) to increase the receptive field. Additionally, we propose an adaptive upsampling technique that effectively processes both the weather task information and underlying image features by selectively retaining relevant information. Our proposed model has achieved state-of-the-art performance on the publicly available dataset.
§ INTRODUCTION
The removal of adverse weather conditions, including rain, snow, and haze, from images is a critical challenge in numerous fields. Extreme weather events significantly impair the ability of computer vision algorithms to extract relevant information from images. Therefore, mitigating such weather-related effects is necessary to enhance the reliability of computer vision systems<cit.><cit.><cit.>.
In this paper, we propose a novel approach for recovering multi-weather degraded images by leveraging a task sequence generator and an adaptive module. During the feature extraction phase, we employ a Task Intra-patch Block (TIPB) to partition the image into smaller patches and extract degraded features from them. These features are utilized not only in subsequent feature extraction stages but also fed into a Task Query Generator to generate task sequences based on the input task characteristics at each stage. This enables us to selectively focus on different types of degradation information during the upsampling stage. To address the challenge of handling large-area degraded features, we utilize Fast Fourier Convolution (FFC) to expand the receptive field. Finally, to fuse degraded information with background information, we employ adaptive upsampling techniques in the image restoration process. Our main contributions are:
* We introduce a novel and highly efficient solution for tackling severe weather removal challenges, specifically focusing on image inpainting guided by the generation of weather degradation information and task feature sequences. Our proposed method surpasses the performance of existing state-of-the-art approaches in both real-world datasets and downstream object detection tasks.
* We propose the Task Intra-patch Block (TIPB), a novel feature extraction block that effectively captures detailed features of various degradation types at different scales. By utilizing TIPB at multiple stages, our approach can extract highly informative features that are tailored to each stage of the image restoration process. This enables us to effectively address different types of degradation and achieve superior performance in restoring degraded images.
* We present a novel task sequence generator that leverages multi-scale degradation details to generate task feature sequences. Our approach effectively captures the complex relationships between different degradation types at different scales and generates task sequences that are tailored to the specific. characteristics of the input image.
§ RELATED WORK
Deep learning-based solutions have become increasingly popular for various weather-related image restoration tasks, including rain removal<cit.>, hazy removal<cit.>, and snow removal<cit.>. These approaches have demonstrated significant performance improvements compared to traditional methods.
§.§ For Rain Removal
Jiang et al.<cit.> proposed a multi-scale progressive fusion network (MSPFN). Because rain streaks lie at different distances from the camera, they appear in the image with different degrees of blur and at different resolutions, so complementary information across resolutions and pixel positions can be used to represent them. The framework therefore explores a multi-geometry representation of rain streaks from the perspective of input geometry and depth, and performs deraining on this basis. For rain streaks at different pixel locations, gradient computation is used to obtain the global texture, exploiting complementarity in the spatial dimension to characterize the target rain streaks.
§.§ For Haze Removal
To address the dense and uneven distribution of haze, Jin et al.<cit.> propose a model that extracts feature representations from a pre-trained visual transformer (DINO-ViT) to recover background information. To guide the network to focus on non-uniform haze regions and then remove the haze accordingly, they introduce uncertainty feedback learning, which produces uncertainty maps with higher uncertainty in denser haze regions and can be viewed as the attention map representing the density of the haze and the non-uniformity of the distribution. Let the feedback network iteratively refine our dehazing output based on the uncertainty map.
§.§ For Snow Removal
Currently, handcrafted features are still the mainstream for snow removal, making it difficult to achieve large-scale generalization. Liu et al.<cit.> designed a multi-stage network called DesnowNet to sequentially handle the removal of translucent and opaque snow particles. We also differentiate snow translucency and color difference properties for accurate estimation. Furthermore, their method separately estimates the remaining complement of snow-free images to recover details occluded by opaque snow. In addition, the whole network adopts a multi-scale design to simulate the diversity of snow.
§.§ Multi-task Weather-related Image Restoration
Li et al.<cit.> first designed a generator with multiple task-specific encoders, each associated with a specific type of severe weather degradation. They utilize neural architecture search to optimally process image features extracted from all encoders. Subsequently, to transform degraded image features into clean background features, they introduce a series of tensor-based operations that encapsulate the fundamental physics behind the formation of rain, haze, snow, and adherent raindrops. These operations are the basic building blocks of schema search. Finally, the discriminator simultaneously evaluates the correctness of the restored image and classifies the degradation type. Valanarasu et al.<cit.> propose TransWeather, a Transformer-based end-to-end model that can recover images degraded by any weather condition with only one encoder and one decoder. Specifically, they exploit a novel Transformer encoder that uses intra-patch Transformer blocks to enhance intra-patch attention to effectively remove small weather degradations.
§ METHOD
We propose a novel framework to tackle different image degradation tasks, as shown in Figure<ref>. In this section, we provide a comprehensive overview of the network framework.
§.§ Network Architecture
The proposed network takes a three-channel weather-degraded image as input. The input image is processed by a multi-stage Transformer block to generate local information from different stages. The output of each stage is then input to the TIPB module, which extracts degradation-specific detail features according to the specific degradation task. The resulting multi-stage degradation features are fed into the task sequence generator to generate a task sequence that facilitates the identification of the specific degradation task affecting the input image. To capture more global information, the FFC module is introduced into the network. The down-sampled output of each stage is first input to the FFC module to extract global information, which is then used to assist in image restoration in the subsequent upsampling stage. In the upsampling stage, learnable parameters are employed to selectively fuse task features and image features, ultimately resulting in the restoration of a clear image.
§.§ Task Intra-patch Block
At each stage, the Task Intra-patch Block(TIPB) processes the image features, which are first cropped to half the size of the original image to facilitate the extraction of smaller degraded details. As shown in Figure<ref>, in order to adaptively query different degraded features, an external learnable sequence is introduced and optimized during the training of the network. The resulting sequence generates a feature map that contains a substantial amount of task-specific information, which is combined with the input image and input into the Transform Block of the next stage. The feature maps from each stage are jointly input into the task sequence generator to generate a task query vector that assists in identifying the specific degradation task affecting the input image. This approach facilitates the effective extraction of task-specific information at each stage, ultimately leading to improved performance in image restoration. The output of TIPB can be expressed as:
TIPB_i(I_i)=FFN(MSA(I_i)+I_i)
where FFN(·) represents the feed-forward network block, MSA(·) represents multi-head self-attention, I_i is the input, and i denotes the stage in the encoder. The multi-head attention of the TIPB module differs from the traditional form, and its self-attention is defined as follows:
Attn(Q,K,V) = softmax(Q_learnable K^T/√(d))V
The proposed network leverages a randomly generated task query sequence (Q) to represent a diverse range of weather conditions. The keys (K) and values (V) used in the attention mechanism are derived from the input feature map.
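A compact sketch of this attention with an external learnable query is given below; the number of query tokens, the placement of the normalization layer, and the residual connection used when the query length differs from the number of patches are assumptions.

```python
import torch
import torch.nn as nn

class TIPBAttention(nn.Module):
    """Sketch of the TIPB attention: the query is an external learnable sequence
    optimised with the network, while K and V come from the input patch features."""
    def __init__(self, dim, num_queries=64, heads=4):
        super().__init__()
        self.q_learnable = nn.Parameter(torch.randn(1, num_queries, dim))  # external task query
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                      # x: (B, N_patches, dim), patch features I_i
        q = self.q_learnable.expand(x.size(0), -1, -1)
        attn_out, _ = self.attn(q, x, x)       # Q is the learnable sequence, K/V come from x
        y = attn_out + q                       # residual; the block equation adds the input I_i,
                                               # which only matches when num_queries == N_patches
        return self.ffn(self.norm(y))          # FFN(MSA(.) + .) as in the block equation
```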
§.§ Task Sequence Generator
The TIPB module introduces a stochastic task vector into the transformer module as a query in the attention mechanism. This vector is trained concurrently with the network and facilitates the capture of degradation characteristics under varying weather conditions. TIPB operates on each level of the encoder to extract degradation information of diverse scales in the image. The TIPB's output is subsequently fed into the encoder of the subsequent stage, and the outputs of all stages are jointly utilized as input to the Task Sequence Generator for generating a task sequence pertaining to the image.
The Task Sequence Generator comprises several convolutional layers of varying scales and a self-attention module. These convolutions, operating at different stages, enable effective processing of the output of the Task Information Processing Block (TIPB). We then utilize a 3x3 convolutional layer to combine the task information from the four different scales, resulting in a task feature query vector map. This map is subsequently used as the query (Q) in conjunction with the image in the self-attention mechanism to generate a feature map that contains rich task information. The output of Task sequence generator can be expressed as:
Tsg(I, Q_Task)=FFN(MSA(I, Q_Task)+I)
Q_Task=Conv_3,3(Conv_7,7(T_1)+Conv_5,5(T_2)+Conv_3,3(T_3))
Where Tsg(·) represents the output of the Task-sequence Generator, FFN(·), and MSA(·) represent the feedforward network and the multi-head self-attention module, respectively. I represent the feature map input to the Task-sequence Generator, and Q_Task represents the generated task query sequence. T_i denotes the output of TIPB from the i-th stage. Conv_n,n represents the use of n×n convolution operations.
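The fusion of the multi-stage TIPB outputs into Q_Task can be sketched as follows; the assumption that all T_i share the same channel count and are resampled to a common spatial size before the addition is ours, since the exact alignment is not specified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskQueryFusion(nn.Module):
    """Sketch of Q_Task generation: multi-scale convolutions fuse the TIPB outputs
    T_1..T_3 of different stages into a single task query map."""
    def __init__(self, ch):
        super().__init__()
        self.c7 = nn.Conv2d(ch, ch, 7, padding=3)
        self.c5 = nn.Conv2d(ch, ch, 5, padding=2)
        self.c3a = nn.Conv2d(ch, ch, 3, padding=1)
        self.c3b = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, t1, t2, t3):
        size = t3.shape[-2:]                   # resample all stages to a common size (assumption)
        fused = self.c7(F.interpolate(t1, size)) \
              + self.c5(F.interpolate(t2, size)) \
              + self.c3a(F.interpolate(t3, size))
        return self.c3b(fused)                 # Q_Task
```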
The Task-sequence Generator module improves the ability to capture the degradation characteristics of different weather conditions. Figure <ref> presents a comparison of the intermediate results of three tasks using the Base Model and the Task-sequence Generator module. Specifically, outputs a and b correspond to the rain removal task, c and d correspond to the haze removal and rain removal task, and e corresponds to the snow removal task. The first three rows illustrate the input image, the output of the Base Model, and the output after incorporating the Task-sequence Generator module.
Our results demonstrate that the Task-sequence Generator module enhances the ability to capture degradation characteristics compared to the Base Model. Specifically, the degradation information in the output images is clearer, and the contrast between the degradation content and the background is stronger. These findings highlight the importance of incorporating advanced techniques, such as the Task-sequence Generator module, in image restoration tasks to improve performance and enhance the quality of the results.
§.§ Fast Fourier Convolution
The restoration of weather images with dense degradation has posed a significant challenge in the field of image restoration. Conventional methods have primarily relied on local background information to restore detailed features, but their efficacy in dealing with large-scale degradation has been limited. Recently, Roman Suvorov et al. proposed a novel technique that leverages global information to tackle this problem. Building upon their work, we have employed this approach in the context of weather image restoration, enabling us to incorporate a wider range of background information and achieve effective restoration of images with large-scale degradation.
The illustration of FFC is available in Figure <ref>. The input feature map is split into two branches for parallel processing. The local branch performs a conventional convolution operation, while the global branch employs channel-wise fast Fourier transform (FFT) to capture the global context. The information from these two branches is then fused to generate a feature map with a receptive field that covers the entire image. This feature map is subsequently applied to the upsampling process, resulting in the restoration of a more realistic and detailed image. The steps of FFC are defined as follows:
a) applies Real FFT2d to an input tensor
Real FFT2d: R^H × W × C→ C^H ×W/2× C
and concatenates real and imaginary parts
Complex To Real: C^H ×W/2× C→ R^H ×W/2× 2C
b) applies a convolution block in the frequency domain
ReLU ∘ BN ∘ Conv 1× 1 : R^H ×W/2× 2C→ R^H ×W/2× 2C
c) applies inverse transform to recover a spatial structure
Real To Complex : R^H ×W/2× 2C→ C^H ×W/2× C
Inverse Real FFT2d : C^H ×W/2× C→ R^H × W × C
Firstly, we apply the fast Fourier transform (FFT) to the input feature map. Next, we combine the real part and the imaginary part obtained from the FFT. Subsequently, we perform the convolution operation on the combined feature map, emphasizing the global background information. Finally, we separate the combined feature map into real and imaginary parts and restore the feature map to the spatial domain using the inverse Fourier transform.
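The global (spectral) branch described in steps a)–c) can be summarized by the following sketch; the channel counts and the FFT normalization mode are assumptions.

```python
import torch
import torch.nn as nn

class SpectralTransform(nn.Module):
    """Sketch of the global Fourier branch of FFC: real FFT over H,W -> concatenate
    real/imag as channels -> 1x1 Conv + BN + ReLU in the frequency domain ->
    split back to complex -> inverse real FFT."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(2 * channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):                              # x: (B, C, H, W)
        B, C, H, W = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")        # (B, C, H, W//2+1), complex
        f = torch.cat([spec.real, spec.imag], dim=1)   # (B, 2C, H, W//2+1)
        f = self.conv(f)                               # convolution in the frequency domain
        real, imag = torch.chunk(f, 2, dim=1)
        spec = torch.complex(real, imag)
        return torch.fft.irfft2(spec, s=(H, W), norm="ortho")
```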
§.§ Adaptive Mixup For Feature Preserving
The proposed network incorporates an encoder-decoder architecture that can effectively extract low-level features from the input image and task-specific features from the degraded image. Adaptive upsampling is utilized to enable the effective mixing of task information and image features. Addition-based skip connections, which are commonly used in encoder-decoder models, may lead to loss of shallow features or external task information. To address this, the Adaptive Mixup approach is introduced, which is able to retain more texture information of the image by adaptively mixing the features from different levels of the network. The output of Adaptive Mixup can be expressed as:
f_↑ i+1 = Mix(f_↓ m-i, f_↑ i) = σ(θ_i) * f_↓ m-i + (1 - σ(θ_i)) * f_↑ i
where f_↑ i and f_↓ m-i represent the upsampling and downsampling feature maps of the i-th stage (i ∈{1,2,…,m}), and σ(θ_i) represents the learnable factor of the i-th stage, which is used to fuse the low-level features from the downsampling path and the task features from the decoder; its value is determined by applying the sigmoid operator to the parameter θ_i.
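A minimal sketch of this adaptive skip connection is given below; initializing θ_i to zero (so that both branches start with equal weight) is an assumption.

```python
import torch
import torch.nn as nn

class AdaptiveMixup(nn.Module):
    """Adaptive skip connection: a learnable scalar theta_i per stage gates how much
    low-level (encoder) information is mixed into the decoder features, replacing a
    plain additive skip connection."""
    def __init__(self, num_stages):
        super().__init__()
        self.theta = nn.Parameter(torch.zeros(num_stages))   # sigma(0) = 0.5 at initialization

    def forward(self, f_down, f_up, i):
        w = torch.sigmoid(self.theta[i])
        return w * f_down + (1.0 - w) * f_up
```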
§ RESULT
In this section, we conducted an extensive experimental analysis to validate the effectiveness of our proposed approach. Specifically, we provide detailed information regarding the dataset used, experimental design, and comparative analysis with state-of-the-art techniques.
§.§ Comparison with state-of-the-art
We compare our method against state-of-the-art modalities specifically designed for each task. We compare with state-of-the-art methods such as Attention Gan<cit.> and Swin-IR<cit.>.
At the same time, we also compare with the state-of-the-art methods of multi-tasking such as All-in-one<cit.> and Transweather<cit.>. The detailed experimental results will be shown below.
§.§.§ Visual Quality Comparison
We performed a qualitative comparison with the All-in-One Network and TransWeather. The results are shown in Figure <ref>. Our proposed method exhibits superior performance in removing degradations, particularly in cases with large areas of degraded features. In the first scenario of Figure <ref>, our ability to remove large raindrops is superior to that of the other two models. Furthermore, our approach outperforms the other models in restoring texture and color information in the image. In the second scenario of Figure <ref>, our model significantly outperforms the other two models in restoring the wall color. Both All-in-One and TransWeather leave residual color changes caused by raindrops on the image.
§.§.§ Referenced Quality Metrics
We use PSNR and SSIM to quantitatively evaluate the performance of different models in RainDrop test sets. The experimental results are shown in Tables <ref>. Our results demonstrate that our proposed method outperforms the multi-task single-job approach in the combination of three distinct weather types. Moreover, when compared to other multi-task models, our approach exhibits superior performance.
§.§.§ Object Detection Comparison
Severe weather conditions significantly impact the field of autonomous driving, particularly in the context of object detection from acquired images. The ability to accurately detect objects in these images is crucial for analyzing the current driving situation and making appropriate judgments. In this chapter, we utilize the YOLOV5 algorithm to perform object detection on images repaired by each of the considered models.
Table<ref> presents the quantitative analysis outcomes of our object detection methodology on a dataset comprising 200 objects. In terms of detection accuracy, our approach surpasses the performance of the All-in-One<cit.> method. Furthermore, when compared to the TransWeather<cit.> technique, our detected objects exhibit a higher level of confidence.
§ CONCLUSION
In this paper, we present a novel model for addressing the challenges posed by multi-weather degraded images. Our proposed approach involves leveraging trainable sequences to extract multi-scale features, which are subsequently utilized to generate task sequences specific to degradation-related tasks. These task sequences serve as guidance for the network, enabling selective focus on degradation information from different tasks. To capture broader background information and facilitate the restoration of large degraded regions, we introduce a Fast Fourier Convolution (FFC) module. This module effectively captures global contextual information, aiding in the recovery process. Additionally, we employ adaptive mixing to fuse features obtained from different modules, enhancing the overall performance of our method. Our proposed approach overcomes the limitation of single-task-specific networks, which often struggle to be practically deployed. When compared to other multi-task processing networks, our method exhibits superior capabilities in extracting information on various weather degradations while effectively repairing extensive degraded content. To validate the effectiveness of our proposed method, we conducted extensive evaluations on diverse datasets. The experimental results demonstrate the superior performance of our approach, surpassing many state-of-the-art methods in the field.
|
http://arxiv.org/abs/2409.03393v1 | 20240905095353 | VQ-DeepVSC: A Dual-Stage Vector Quantization Framework for Video Semantic Communication | [
"Yongyi Miao",
"Zhongdang Li",
"Yang Wang",
"Die Hu",
"Jun Yan",
"Youfang Wang"
] | cs.NI | [
"cs.NI"
] |
VQ-DeepVSC: A Dual-Stage Vector Quantization Framework for Video Semantic Communication
Yongyi Miao+, Zhongdang Li+, Yang Wang, Die Hu*, Jun Yan, and Youfang Wang
Yongyi Miao, Zhongdang Li, Yang Wang, Die Hu, Jun Yan, and Youfang Wang are with the School of Information Science and Technology, Fudan University, Shanghai 200433, China.
======================================================================================================================================================================================================================================================================
§ ABSTRACT
In response to the rapid growth of global video traffic and the limitations of traditional wireless transmission systems, we propose a novel dual-stage vector quantization framework, VQ-DeepVSC, tailored to enhance video transmission over wireless channels.
In the first stage, we design the adaptive key-frame extractor and interpolator, deployed respectively at the transmitter and receiver, which intelligently select key frames to minimize inter-frame redundancy and mitigate the cliff-effect under challenging channel conditions.
In the second stage, we propose the semantic vector quantization encoder and decoder, placed respectively at the transmitter and receiver, which efficiently compress key frames using advanced indexing and spatial normalization modules to reduce redundancy.
Additionally, we propose adjustable index selection and recovery modules, enhancing compression efficiency and enabling flexible compression ratio adjustment.
Compared to the joint source-channel coding (JSCC) framework, the proposed framework exhibits superior compatibility with current digital communication systems.
Experimental results demonstrate that VQ-DeepVSC achieves substantial improvements in both Multi-Scale Structural Similarity (MS-SSIM) and Learned Perceptual Image Patch Similarity (LPIPS) metrics than the H.265 standard, particularly under low channel signal-to-noise ratio (SNR) or multi-path channels, highlighting the significantly enhanced transmission capabilities of our approach.
Semantic Communication, Video Transmission, Vector Quantization, Multipath Fading Channel, Deep Learning.
§ INTRODUCTION
With the rapid advancement of information technology, global data traffic, especially video traffic, has grown exponentially, now constituting the predominant component of internet data traffic <cit.>. Despite enhancements in traditional wireless video transmission systems that focus on optimizing bit error rates (BER) <cit.>, challenges persist in ensuring high-quality transmission. These systems primarily emphasize compression efficiency but often fall short in addressing the semantic understanding and adaptability required for dynamic network conditions. Even with the adoption of the latest H.265 technology, the impact of the cliff-effect remains unresolved. This effect describes a significant decline in video transmission quality when the channel signal-to-noise ratio (SNR) drops below a critical threshold.
§.§ Prior Work
The rapid development of deep learning (DL) technology has propelled semantic communication into a crucial role in the next generation of communication technologies, demonstrating significant potential and advantages <cit.>. Semantic communication, grounded in understanding before transmitting, involves extracting semantic information <cit.>. It enables profound compression by analyzing and distilling the essence of original content, proving highly adaptable and extensible across diverse domains, including text <cit.>, speech <cit.>, images <cit.>, and video <cit.>. Thus, it heralds a new era of intelligent and context-aware communication systems.
Despite considerable focus on the application of deep learning to video compression <cit.>, the advent of semantic-based deep learning to enhance wireless video transmission was marked by the introduction of DeepWiVe by Tung et al. in 2022 <cit.>. This pioneering approach presented an integrated solution for video compression, channel coding, and modulation through an end-to-end joint source-channel coding (JSCC) framework. Building upon this foundation, Zhang et al. further innovated by incorporating dual optical flow estimation for video transmission <cit.>. Dong et al. introduced Rosefinch, a semantic communication model for live media streaming, video conferencing, and low-rate image transmission <cit.>. With these developments, JSCC framework has become the main research paradigm for video semantic communication (VSC) systems.
Notably, Gong et al. <cit.> introduced an adaptive bit rate VSC system, modulating the bit rate based on network conditions. Niu et al. <cit.> added a channel and spatial attention mechanism to their VSC framework, enhancing adaptability. Bao et al. <cit.> proposed a model division VSC scheme to extract shared semantic features and counteract noisy channels. Liang et al. <cit.> developed the VISTA framework, using semantic location graphs to manage dynamic objects and adapt to changing channel conditions.
Specialized systems have also been developed for specific scenarios. Jiang et al. <cit.> created a scalable video coding network for video conferencing, minimizing transmission resources. Liu et al. <cit.> introduced a federated learning-enhanced vehicle semantic communication framework, optimizing semantic extraction and resource allocation while maintaining data privacy. Wang et al. <cit.> employed a nonlinear transformation and conditional coding architecture to optimize the balance between transmission rate and distortion, underpinned by perceptual quality metrics.
§.§ Motivation and Contributions
While JSCC-based VSC systems have theoretically shown promise in reducing distortion and enhancing transmission efficiency, these systems map source data directly to channel symbols, causing constellation points to appear anywhere on the constellation diagram, which deviates from the design of current digital communication systems.
In addition, most studies simulate channels based on additive white Gaussian noise (AWGN) or other simplistic models. Although the AWGN model is useful for controlled experiments, it fails to capture the complexities of real-world wireless communications, such as multipath effects, signal decay, and the ever-fluctuating nature of channel conditions.
Therefore, the robustness and suitability of JSCC methodologies in complex, dynamic wireless environments require thorough validation.
To make semantic communication more compatible with digital communication systems, semantic communication systems for image transmission based on discrete latent spaces have been designed <cit.> <cit.> so that semantic features can be quantized into feature indices through the latent embedding space, thereby converting them into bitstreams. Subsequently, these bitstreams can be directly mapped into symbols for transmission using existing constellation mapping schemes.
The vector quantization (VQ) semantic communication system designed by Hu et al. <cit.>, Masked VQ-VAE, reduces transmitted information by quantizing and transmitting only feature indices. However, due to the limitations of VQ-VAE in image reconstruction <cit.>, it addresses downstream tasks solely by transmitting task-relevant key feature indices within the latent embedding space. In addition, VQ-DeepSC, designed by Fu et al. <cit.>, utilizes a U-Net structure <cit.> to extract features at different scales and the input image is mapped to single-channel indices at each scale. However, this method necessitates multiple latent embedding spaces of different sizes, leading to excessive storage requirements at both the transmitter and receiver ends. Furthermore, the information bottleneck problem within the U-Net structure limits its ability to capture remote global contextual information, thereby reducing its effectiveness in handling high-resolution images. Additionally, the quantization operator is lossy, and similar patches are often embedded with the same index, leading to the creation of pseudo-shadows and discontinuities in generated images.
In addition to being limited by the image reconstruction capabilities of current semantic communication network structures, extending VQ-based semantic communication systems from image to video transmission is challenging. This is due to the unique demands of video transmission, such as the need to further reduce temporal redundancy.
In this paper, we propose, for the first time, a dual-stage vector quantization-based video semantic communication system named VQ-DeepVSC. To enhance the physical interpretability of the system, VQ-DeepVSC is divided into two distinct stages to improve transmission efficiency, each specifically aimed at reducing redundancy in the temporal and spatial dimensions, respectively.
In the first stage, we design an adaptive key-frame extraction and interpolation (AKEI) module to reduce temporal redundancy, transforming video transmission into a key-frame transmission task. Meanwhile, to mitigate the cliff-effect arising from the constrained channel coding capacity in separated source-channel coding systems, the proposed AKEI estimates frame importance and models the relationship between SNR and key-frame rate, prioritizing the retransmission of critical key frames under poor channel conditions (e.g., when SNR is low). At the receiving end, an optical flow estimation-based interpolation algorithm restores the video to its original frame rate, enhancing robustness and stability in challenging channel conditions.
In the second stage, we leverage a multi-channel spatially conditioned vector quantization (MSVQ) for the ultimate compression of key frames. Compared to existing methodologies such as Masked VQ-VAE <cit.> and VQ-DeepSC <cit.>, MSVQ utilizes a shared latent embedding space for multi-channel index mapping <cit.>, supporting high-resolution video transmission and promoting diversity without increasing latent space size. Spatial conditional normalization in the MSVQ decoder mitigates quantization artifacts, ensuring high-quality image reconstruction. Additionally, key-frame indices are further compressed using adjustable index selection and recovery algorithms, allowing flexible compression rates. The main contributions of this paper are summarized as follows:
* We are the first to use DL-based VQ methods for video transmission. Compared to JSCC-based VSC systems, the VQ-DeepVSC adapts better to the variable conditions of real wireless channels and is more compatible with existing communication systems. The proposed MSVQ is designed to minimize intra-frame redundancies using multi-channel indexing and spatial conditional normalization, achieving a high compression degree while maintaining frame quality.
* We design the AKEI module, a revolutionary framework that intelligently selects key frames based on content and wireless channel quality, thereby optimizing the semantic compression process, significantly reducing inter-frame redundancies, and effectively tackling the cliff-effect in video transmission.
* We propose adjustable index selection and recovery algorithms that reduce redundancy between key frames by transmitting only indices significantly different from their predecessors, as determined by a preset threshold. This approach enhances compression efficiency, minimizes compression loss, and allows for more flexible adjustment of compression rates.
* Extensive simulation experiments have validated that the proposed VQ-DeepVSC achieves superior reconstruction quality at equivalent compression rates compared to other methods. What's more, our system demonstrates exceptional robustness, maintaining high-performance transmission efficiency across various channel conditions.
§.§ Organization
The structure of this paper is as follows: Section <ref> proposes the framework of the VQ-DeepVSC system. Section <ref> elaborates on the proposed AKEI. Section <ref> designs the implementation of the MSVQ. Section <ref> explores the strategy for reducing redundancy between key frames. Section <ref> presents experimental results that validate the performance of the proposed VQ-DeepVSC system. Finally, Section <ref> concludes the paper.
Notations: Superscript ^T stands for transpose. 𝐈_𝑀×𝑁 represents the 𝑀×𝑁 identity matrix and 0_M× N denotes the M× N all-zero matrix. For a matrix 𝐀, [𝐀]_𝑖,: denotes the i-th row of 𝐀.
For a vector 𝐚, ||𝐚||_2 denotes the Euclidean norm, 𝐚(m) denotes the m-th elements of 𝐚, and 𝐚(𝑚:𝑛) represents taking the 𝑚-th to 𝑛-th elements from vector 𝐚.
For a scalar x, ⌊ x ⌋ denotes the nearest integer smaller than or equal to x.
For operations, ⟨·, ·⟩ represents the inner product, ⊗ indicates the batch-wise matrix multiplication operation, and ⊙ denotes the Hadamard product.
Finally, ℝ^m × n and ℤ^m × n denote the spaces of m × n real and integer matrices, respectively.
§ FRAMEWORK OF VQ-BASED SEMANTIC COMMUNICATIONS FOR VIDEO TRANSMISSION
In this section, we introduce a novel framework for semantic communication systems specifically tailored for video transmission. This framework, based on VQ,
is illustrated in Fig. <ref>. The transmitter consists of an adaptive key-frame extractor, a semantic vector quantization encoder, an adjustable index selector, channel coding and modulation, and orthogonal frequency division multiplexing (OFDM) modulation. The receiver comprises OFDM demodulation, channel estimation and channel equalization, a demodulation and channel decoder, an adjustable index restorer, a semantic vector quantization decoder, and an adaptive key-frame interpolator.
§.§ Transmitter
As shown in Fig. <ref>, the video transmitter processes a sequence of video data, which spans an arbitrary duration T and is composed of N individual frames. Each frame 𝐅_n ∈ℝ^W_F× H_F× C_F with n = 1, …, N is an RGB image with width W_F, height H_F, and C_F = 3 channels.
To avoid the cliff-effect, a feasible method is to retransmit video frames multiple times. However, this approach will undoubtedly increase the amount of transmitted data, greatly affecting the compression effect. To ensure that the compression ratio remains unchanged during retransmission, we propose an adaptive key-frame extractor that assesses the importance of each frame based on channel quality (such as SNR) and content changes, and extracts the key frames. Only the key frames are transmitted, and they are retransmitted when channel conditions are poor. We denote the m-th key frame as 𝐊_m ∈ℝ^W_F× H_F× C_F, where m = 1, …, M and M represents the number of key frames, and we use the vector 𝐯∈ℤ^N × 1 to record the positions of the key frames. Specifically, if the 𝑛-th frame is a key frame, we set 𝐯(n)=1; otherwise we set 𝐯(n)=0, where n = 1, …, N. The specific details of this process and its implications are elaborated in Section <ref>.
We define the ratio of key frames to all frames ρ as:
ρ = M/N,
which can be adaptively adjusted according to the channel conditions.
After extracting key frames, a semantic vector quantization encoder is utilized for feature extraction and indexing. Here, we utilize a latent embedding space which is denoted by 𝐄 = {𝐞_1, …, 𝐞_L}∈ℝ^L × d, where L denotes the size of 𝐄, d is the dimensionality of each latent embedding vector 𝐞_l in 𝐄, and l = 1, …, L. Space 𝐄 is trained during the training phase and is shared between the transmitter and receiver. Following feature extraction by the semantic vector quantization encoder, 𝑀 index sequences {𝐬_m}_m=1^M are generated, where 𝐬_m represents the index sequence of the m-th keyframe corresponding to the quantized vector in the 𝐄 space, m = 1, …, M. Details will be provided in Section <ref>.
To further reduce redundancy, we propose an adjustable index selector that performs the final screening of the index sequence. The outputs from the adjustable index selector are the resulting indices 𝐬_η and the position sequence 𝐩. The specific details will be provided in Section <ref>.
After the aforementioned modules, the video data {𝐅_n}_n=1^N can ultimately be encoded into a bitstream 𝐛, which is composed of the following three parts, i.e.:
𝐛 = [𝐛_𝐬, 𝐛_𝐩,𝐛_𝐯],
where 𝐛_𝐬, 𝐛_𝐩 and 𝐛_𝐯 correspond to the bitstreams of 𝐬_η, 𝐩 and 𝐯, respectively.
After obtaining 𝐛, operations identical to those used in traditional communication systems can be employed, namely encoding and modulating the bitstream for transmission. To combat multipath fading in the channel, OFDM technology is also utilized here. As can be seen from the figure, the semantic communication system we propose is fully compatible with existing communication systems.
§.§ Receiver
The received time-domain signal at the receiver can be expressed as:
y(t) = h(t,τ) * x(t) + n(t),
where h(t, τ) represents the multi-path time-varying channel impulse response, n(t) denotes the additive white Gaussian noise, and * signifies convolution.
After applying OFDM demodulation, channel equalization, demodulation, and channel decoding, we can
finally obtain the estimates of 𝐬_η, 𝐩 and 𝐯, i.e., 𝐬̂_η, 𝐩̂ and 𝐯̂, respectively.
Subsequently, the adjustable index restorer is employed based on 𝐬̂_η and 𝐩̂, enabling us to retrieve {𝐬̂_m}_m=1^M.
Given 𝐯̂, the semantic vector quantization decoder is then used to recover the key frames, i.e., to obtain {𝐊̂_m}_m=1^M. Finally, given {𝐊̂_m}_m=1^M and 𝐯̂, we use an adaptive key-frame interpolator to recover all video frames {𝐅̂_n}_n=1^N.
§ THE PROPOSED AKEI
Many frames within a video sequence are often similar, containing redundant information. Therefore, the transmission of data can be reduced by only transmitting a subset of frames and then performing frame interpolation at the receiver. These transmitted frames are known as key frames. Here, we propose AKEI to achieve the extraction of key frames and the subsequent recovery of all frames.
AKEI aims to reduce inter-frame redundancies in video transmission and address cliff-effect. AKEI consists of two primary components: the adaptive key-frame extractor, located on the transmitter, and the adaptive key-frame interpolator, situated on the receiver.
When the SNR decreases, the quality of video transmission may suddenly drop at a certain point, a phenomenon known as the cliff-effect. An effective method to address the cliff-effect is to perform multiple retransmissions of the video, but this can lead to a significant decrease in compression capability of the system. To solve the cliff-effect without increasing the compression rate, we employ an adaptive key-frame extractor to dynamically select key frames for transmission and perform multiple retransmissions under low SNRs. The adaptive key-frame extractor dynamically selects key frames and generates a vector to record the positions of these key frames within the video sequence. This selection process is adaptively adjusted based on content variation between video frames and the quality of the multi-path channel.
Conversely, the adaptive key-frame interpolator utilizes the received key frames to reconstruct the full video sequence. To facilitate practical deployment, AKEI is designed with an asymmetric structure, featuring a simpler structure at the receiver.
§.§ Adaptive Key-frame Extractor
As illustrated in Fig. <ref>, the input of the adaptive key-frame extractor is {𝐅_n}_n=1^N. And the output is the key frames {𝐊_m}_m=1^M. Initially, an incremental flow network (IFNet) is used to estimate the optical flow of frames and outputs {𝐎_i}_i=2^N-1. Each optical flow map 𝐎_i ∈ℝ^C_O× H_O× W_O denotes the optical flow corresponding to frame 𝐅_i, where C_O, H_O, and W_O represent the number of channels, height, and width of 𝐎_i, respectively. These dimensions are consistent across all 𝐎_i.
By default, we assume that 𝐅_1 and 𝐅_N are the key frames, i.e., we set 𝐯(1) = 1 and 𝐯(N) = 1. Therefore, the IFNet only computes the optical flow for the intermediate (N-2) frames. Unlike dual optical flow networks that provide two optical flow estimations for each frame to represent the content changes relative to the preceding and succeeding frames, our approach only outputs one optical flow estimation for each frame, indicating the content changes relative to the surrounding frames.
Suppose that the frame to be reconstructed is 𝐅_i, where i = 2, …, N-1, the FusionNet utilizes its preceding frame 𝐅_i-1 and succeeding frame 𝐅_i+1, as well as the optical flow estimation 𝐎_i of 𝐅_i, to reconstruct 𝐅_i. Here, we denote the reconstruction result by 𝐅̌_i.
Following FusionNet, the frame importance score (FIS) calculator outputs the score sequence β∈ℝ^N × 1. These scores, derived from comparisons between {𝐅̌_n}_n=2^N-1 and {𝐅_n}_n=2^N-1, are assigned to β(2:N-1). β(1) and β(N) are set to sufficiently large values.
In the end, the mask generator determines which frames among the N frames are key frames based on the β and channel conditions (e.g., SNR), and outputs 𝐯.
The details of the four modules in Fig. <ref> are as follows:
§.§.§ IFNet
IFNet diverges from traditional dual optical flow networks by adopting a coarse-to-fine strategy that directly estimates optical flow for intermediate frames. Simple linear interpolation of forward and backward flows may falter in the presence of rapid motion or complex backgrounds. In contrast, our approach mitigates artifacts and blurriness by iteratively refining optical flow estimation. The specific structure of the IFNet comprises multiple incremental flow blocks (IFBlocks). IFNet utilizes the preceding and succeeding frames of the current frame, namely 𝐅_pre and 𝐅_next, to calculate the optical flow 𝐎_cur of the current frame. IFNet employs a coarse-to-fine strategy through three IFBlocks to progressively enhance the accuracy of optical flow estimation. Each IFBlock utilizes a distinct downsampling factor α_i (set as 4, 2, 1 respectively) to reduce the spatial dimensions of the image and extracts features using convolutional 2D layers, denoted as Conv2d. As resolution is incrementally enhanced, finer details are captured. Finally, upsampling restores the image to its original resolution through deconvolutional 2D layers, coupled with Bilinear Resize operations, achieving high-precision optical flow estimation.
This coarse-to-fine approach iteratively corrects motion estimation inaccuracies. By refining optical flow across stages, our method effectively reduces artifacts and blurriness, resulting in more precise and higher-quality interpolation of intermediate frames.
§.§.§ FusionNet
FusionNet utilizes two ContextNets to extract background features from the video frames 𝐅_pre and 𝐅_next, and considers the results of optical flow estimation through WarpBlock operations, ultimately obtaining 𝐎^1,2,3,4_pre and 𝐎^1,2,3,4_next, respectively. ContextNet consists of four WarpBlocks. The first WarpBlock accepts the optical flow results generated by IFNet, along with either the previous frame 𝐅_pre or the next frame 𝐅_next as input. Subsequent WarpBlocks, specifically the i-th WarpBlock with i=2,3,4, take as inputs the intermediate results 𝐎^i-1 and 𝐅^i-1, which are produced by the (i-1)-th WarpBlock. The i-th WarpBlock utilizes Conv2d to extract features from 𝐎^𝑖-1, and employs Bilinear Resize to restore the size of 𝐎^𝑖 to obtain 𝐎^𝑖. Concurrently, 𝐎^𝑖 is warped with 𝐅^𝑖-1 to yield 𝐅^𝑖. Each WarpBlock allows for the learning of background information to varying degrees. Then a U-Net structured network is employed to continually perform Conv2d and supplement 𝐎^i to extract multi-scale features, and uses Deconv2d to perceive and reconstruct these features, ultimately yielding the frame reconstruction result.
§.§.§ Frame Importance Score Calculator
To assess the importance of frames based on video content changes and motion information, we develop a new evaluative metric called the frame importance score, denoted by β, which can reflect the degree of difference between the reconstructed frames {𝐅̌_n}_n=2^N-1 and the original frames {𝐅_n}_n=2^N-1. A higher value in β indicates a greater distinction between the two images, signifying that the frame is more challenging to reconstruct, and thus, this frame is deemed more significant. The score of the n-th frame can be given by:
β(n) = f_is(𝐅̌_n,𝐅_n),
where f_is(· , · ) represents the assessment function, which is contingent upon the specific application context.
For instance, one can calculate the Structural Similarity (SSIM) <cit.> or the Learned Perceptual Image Patch Similarity (LPIPS) <cit.> between 𝐅̌_n and 𝐅_n.
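The computation of β can be sketched as follows; the choice of assessment function f_is (e.g., 1 − SSIM or an LPIPS distance, so that larger values mean harder-to-reconstruct frames) and the use of infinity as the "sufficiently large" score for the first and last frames are assumptions.

```python
import torch

def frame_importance_scores(frames, recon_frames, f_is):
    """Compute the frame importance scores beta for a clip of N frames.
    frames       : list of the N original frames F_n
    recon_frames : list of the N-2 FusionNet reconstructions (frames 2..N-1)
    f_is         : assessment function, e.g. lambda a, b: 1 - ssim(a, b) or an LPIPS
                   distance, so that a higher value means a harder-to-reconstruct frame."""
    N = len(frames)
    beta = torch.zeros(N)
    beta[0] = beta[-1] = float("inf")          # first and last frames are always key frames
    for n in range(1, N - 1):
        beta[n] = f_is(recon_frames[n - 1], frames[n])
    return beta
```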
§.§.§ Adaptive Frame Selection
Before selecting key frames, it is essential to determine the key frames ratio ρ.
When the channel condition is poor, it is essential to reserve transmission resources for retransmitting the most critical key frames, specifically those with higher β values. This adjustment effectively mitigates the cliff-effect, which deteriorates video quality significantly under extremely poor channel conditions.
By conserving bandwidth through prioritizing critical frames, the strategy ensures reliable reception of essential information even in challenging channel conditions.
Here, we adopt SNR, denoted by γ, to represent channel conditions because of its simplicity and ease of acquisition in practice. We establish a quantitative relationship between γ and ρ as follows:
ρ = ∑_i=0^I a_i · (log(γ))^i,
where ρ represents the predicted key-frame ratio based on γ, a_i are the coefficients of the polynomial regression model, and I is the degree of the polynomial.
The polynomial coefficients 𝐚 = [a_0, a_1, …, a_I] can be determined by minimizing the objective function Err, which measures the discrepancy between observed and model-predicted key-frame ratios:
𝐚 = arg min_𝐚{ Err + υ· R(𝐚) }.
Here, υ is a custom parameter that adjusts the balance between the data fit and the regularization term, and R(𝐚) is the regularization function given by:
R(𝐚) =
  ||𝐚||_1 = ∑_i=0^I |a_i|,    for L1 regularization,
  ||𝐚||_2^2 = ∑_i=0^I a_i^2,   for L2 regularization.
Here, R(𝐚) applies either L1 or L2 regularization to the coefficients 𝐚, with L1 promoting sparsity and L2 encouraging smaller values across the coefficients. Err is defined as:
Err = ( ρ - ( ∑_i=0^I a_i · (log(γ))^i ) )^2,
which quantifies the squared difference between the actual key-frame ratio and that predicted by the model.
Given γ, we can obtain ρ and then M. Based on M and β, we can determine the key-frames by selecting the frames corresponding to the top M values in β. These frames are then marked as key-frames by setting their positions in the sequence of N frames to 1.
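The fitting of the SNR-to-key-frame-ratio model and the subsequent top-M frame selection can be sketched as follows; the closed-form ridge (L2) solution, the polynomial degree, and the lower bound of two key frames are assumptions (the L1 option would require an iterative solver).

```python
import numpy as np

def fit_rho_model(gamma_obs, rho_obs, degree=3, upsilon=1e-2):
    """Ridge (L2) fit of rho = sum_i a_i * (log gamma)^i over observed (gamma, rho) pairs."""
    x = np.log(np.asarray(gamma_obs, dtype=float))
    X = np.vander(x, degree + 1, increasing=True)      # columns: 1, log g, (log g)^2, ...
    A = X.T @ X + upsilon * np.eye(degree + 1)
    return np.linalg.solve(A, X.T @ np.asarray(rho_obs, dtype=float))

def select_key_frames(beta, gamma, a):
    """Choose the top-M frames by importance score, with M = floor(rho * N)."""
    beta = np.asarray(beta, dtype=float)
    N = len(beta)
    rho = float(np.clip(np.polyval(a[::-1], np.log(gamma)), 0.0, 1.0))
    M = max(2, int(np.floor(rho * N)))                 # first/last frames are always kept
    key_idx = np.argsort(beta)[::-1][:M]
    v = np.zeros(N, dtype=int)                         # the key-frame position vector
    v[key_idx] = 1
    return v
```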
§.§ Adaptive Key-frame Interpolator
The adaptive key-frame interpolator is designed with a simpler structure than the transmitter's adaptive key-frame extractor to accommodate the practical conditions of wireless communication. As illustrated in Fig. <ref>, its inputs are {𝐊̂_m}_m=1^M and 𝐯̂, which are the estimates of {𝐊_m}_m=1^M and 𝐯, repectively. Initially, the gap calculator generates a sequence 𝐠∈ℤ^(M-1) × 1, which records the number of non-key frames between each pair of consecutive key frames based on 𝐯̂. Subsequently, all key frames along with 𝐠 are input into the IFNet to calculate the optical flow {𝐎̂_i}_i=1^N-M of the (N-M) non-key frames interspersed between the key frames. The FusionNet then reconstructs the non-key frames {𝐊̃_i }_i=1^N-M, by utilizing the information from {𝐎̂_i }_i=1^N-M. Finally, we use the video seam module to stitch together {𝐊̃_i }_i=1^N-M and {𝐊̂_m }_m=1^M according to 𝐠, reconstructing the sequence of video frames {𝐅̂_n }_n=1^N.
§ THE PROPOSED MSVQ
To optimize compression efficiency for each key-frame, we propose MSVQ that can reduce intra-frame redundancies. MSVQ module consists of two key components: the semantic vector quantization encoder at the transmitter and the semantic vector quantization decoder at the receiver. The former possesses exceptional image compression capability, encoding the frames into multi-channel indices, while the latter accurately reconstructs the original frames from these indices.
§.§ Semantic Vector Quantization Encoder
§.§.§ Procedure
As shown in Fig. <ref>, the encoder first applies a CNN encoder to extract multi-channel features 𝐳_m= [𝐳_m^(1),…,𝐳_m^(c)] ∈ℝ^h× w× d× c from the m-th key-frame 𝐊_m where 𝐳_m^(j)∈ℝ^h× w× d denotes the j-th channel feature of the m-th frame, j=1,2,… ,c, and c represents the number of channels.
The feature 𝐳_m^(j) is reshaped to 𝐳̅_m^(j)∈ℝ^U × d, where U = hw and its u-th row, i.e., [𝐳̅_m^(j)]_u,:, is then quantized to the index of the nearest vector in the latent embedding space 𝐄, i.e.,
𝐬̅_m^(j)(u) = argmin_l ‖ [𝐳̅_m^(j)]_u,: - 𝐞_l ‖_2,
where l = 1, …, L and 𝐬̅_m^(j)∈ℤ^U × 1. Once L is given, each index value, i.e., 𝐬̅_m^(j)(u), can be represented by B = log_2 L bits.
Subsequently, we concatenate these index sequences to form 𝐬_m = [𝐬̅_m^(1); …; 𝐬̅_m^(c)] ∈ℤ^L_𝐬× 1, where L_𝐬=Uc. Thus, employing the semantic vector quantization encoder compresses the data of a video frame into a vector of length L_𝐬, equivalent to a bitstream of length L_𝐬B.
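A minimal PyTorch sketch of this per-row quantization step (shapes and names are placeholders) is:

import torch

def quantize(z_bar, codebook):
    """z_bar: (U, d) reshaped channel feature; codebook: (L, d) embedding space E.
    Returns the (U,) indices of the nearest codebook vectors."""
    dist = torch.cdist(z_bar, codebook)     # (U, L) pairwise Euclidean distances
    return torch.argmin(dist, dim=1)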
§.§.§ Details of CNN Encoder
The CNN encoder in MSVQ is capable of encoding the input into a multi-channel vector rather than a single channel. The advantages of multi-channel representation over single-channel representation include increased information capacity, enhanced feature combinations, and improved image generation quality <cit.>.
Our CNN encoder incorporates multi-level downsampling modules to extract multi-scale features and expand the receptive field of each feature point, thus improving the model's ability to capture global structure and semantic information from the input data. Each down-sample module consists of two residual blocks, which help mitigate the vanishing gradient problem, facilitate the training of deeper networks, and improve overall network performance. To assist the network in better understanding more abstract and complex patterns that may exist in the deep features, an attention mechanism module is added in the last down-sample module.
Finally, the features are mapped to the latent embedding space through an intermediate layer and various operations.
§.§ Semantic Vector Quantization Decoder
§.§.§ Procedure
The semantic vector quantization decoder depicted in Fig. <ref> takes the index sequence 𝐬̂_m as input. This sequence is first mapped back to the corresponding vector 𝐳̂_m in the latent embedding space. Subsequently, it is decoded using a CNN decoder to reconstruct the key-frame 𝐊_m. Finally, a patch-based discriminator 𝒟 is employed to assess the constructed frame 𝐊̂_m. It should be pointed out that during the training phase of this model, we directly assign 𝐬̂_m = 𝐬_m.
§.§.§ Details of CNN Decoder and Discriminator
Compared to the CNN encoder, the CNN decoder in MSVQ exhibits a mirrored architecture in certain aspects. The decoder includes an intermediate layer, multiple up-sample modules, and concludes with a final Norm, SiLU, and Conv2d. The structures of the residual block and the attention mechanism module are consistent with those in the CNN encoder.
However, the CNN decoder is not completely symmetrical to the CNN encoder. It is noteworthy that all Norms in the CNN decoder, including residual blocks and attention module blocks, are replaced by spatial conditional normalization (SCN). This ensures that the same quantized indices produce results that are not identical but more natural at different positions, thereby reducing artifacts and discontinuities in the generated frames compared to conventional DL-based VQ methods <cit.>.
The SCN module initially standardizes the intermediate feature map 𝐟^i∈ℝ^C_𝐟^i× H_𝐟^i× W_𝐟^i via group normalization to eliminate stylistic variations, where C_𝐟^i represents the number of channels, while H_𝐟^i and W_𝐟^i represent the height and width of the feature map respectively.
Then, it integrates the embedded vectors 𝐳̂_m as auxiliary information into the input feature maps. Specifically, this is achieved using (<ref>):
𝐟^i+1 = ( 𝐟^i - μ_GN(𝐟^i) ) / σ_GN(𝐟^i) ⊙Θ_y(Φ(𝐳̂_m)) + Θ_b(Φ(𝐳̂_m)),
where function Φ(·) interpolates and adjusts the auxiliary input 𝐳̂_m to match the size of the input feature map 𝐟^i, Φ(𝐳̂_m)∈ℝ^ c × H_𝐟^i× W_𝐟^i and c represents the number of channels described in Section <ref>. Θ_y and Θ_b are two learnable affine transformations that map the adjusted 𝐳̂_m to the same number of channels as 𝐟^i. Additionally, μ_GN and σ_GN respectively represent the mean and standard deviation of the group normalization applied to 𝐟^i.
In summary, the normalized feature map is multiplied and added with the convolutionally mapped auxiliary information to generate the final output feature map 𝐟^i+1.
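A hedged PyTorch sketch of the SCN block is given below; the 3×3 convolutions for Θ_y and Θ_b, the number of normalization groups, and nearest-neighbour interpolation for Φ are our assumptions rather than details specified in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SCN(nn.Module):
    """Spatial conditional normalization: group-normalize the feature map and
    modulate it with a scale (Theta_y) and shift (Theta_b) predicted from the
    resized quantized embedding z_hat (the role of Phi)."""

    def __init__(self, feat_channels, cond_channels, groups=32):
        super().__init__()
        # groups must divide feat_channels
        self.norm = nn.GroupNorm(groups, feat_channels, affine=False)
        self.to_scale = nn.Conv2d(cond_channels, feat_channels, 3, padding=1)
        self.to_shift = nn.Conv2d(cond_channels, feat_channels, 3, padding=1)

    def forward(self, f, z_hat):
        z = F.interpolate(z_hat, size=f.shape[-2:], mode="nearest")   # Phi(.)
        return self.norm(f) * self.to_scale(z) + self.to_shift(z)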
§.§ Loss Function
During the training phase of the encoder, the loss function we use consists of three components: reconstruction loss, vector quantization loss, and adversarial loss.
The reconstruction loss measures the difference between the frames generated by the model and the original ones, which is given by:
ℒ_rec(𝐊_m,𝐊̂_m) = ‖𝐊_m - 𝐊̂_m‖_2^2 + λℒ_perceptual(𝐊_m, 𝐊̂_m),
where λ is the weight coefficient.
ℒ_perceptual quantifies the perceptual distance between the generated frames and the original frames by computing the distance between the feature maps extracted from the generated and input frames through networks. In this paper, the feature outputs of five convolutional layers in the VGG-16 model <cit.> are selected to compute the perceptual loss as:
ℒ_perceptual(𝐊_m,𝐊̂_m)=∑_j=0^41/C_jH_jW_j∥φ_j(𝐊_m)-φ_j(𝐊̂_m)∥_2^2,
where φ_j is the output of the j-th layer of the feature extraction network, and C_j, H_j, W_j represent the number of channels, height, and width of the feature maps at the j-th layer, respectively.
The vector quantization loss is incurred during the quantization process due to the straight-through gradient estimator method, which can be calculated as:
ℒ_VQ(𝐳_m,𝐳̂_m) = ‖ sg[𝐳_m]-𝐳̂_m ‖_2^2 + ‖ sg[𝐳̂_m]-𝐳_m ‖_2^2.
Here sg[ · ] denotes the stop-gradient operation.
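For illustration, this loss and the accompanying straight-through trick might be implemented in PyTorch as follows; the pass-through assignment in the last line is the standard realization of the estimator and is our addition rather than a detail given in the paper.

import torch.nn.functional as F

def vq_loss(z_e, z_q):
    """z_e: encoder output z_m; z_q: quantized embedding z_hat_m (same shape).
    .detach() plays the role of the stop-gradient operator sg[.]."""
    loss = F.mse_loss(z_q, z_e.detach()) + F.mse_loss(z_e, z_q.detach())
    z_q_st = z_e + (z_q - z_e).detach()     # straight-through: gradients reach z_e
    return loss, z_q_st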
The adversarial loss is given by:
ℒ_GAN(𝐊_m,𝐊̂_m;θ)=log𝒟(𝐊_m;θ)+log(1-𝒟(𝐊̂_m;θ)),
where θ represents learnable parameters in 𝒟.
The whole objective function for identifying the optimal model can be expressed as:
𝒬^* = argmin_𝐰_𝖤, 𝐰_𝖦, 𝐰_𝒵 max_θ 𝔼_𝐊_m ∼ p(𝐊_m)[ ℒ_rec + μ_1 ℒ_VQ + μ_2 ℒ_GAN],
where 𝐰_𝖤, 𝐰_𝖦, and 𝐰_𝒵 denote the learnable parameters of the CNN encoder, the CNN decoder, and the latent embedding space, respectively. μ_1 and μ_2 are weighting coefficients that adjust the contributions of the vector quantization loss and the generative adversarial network loss to the total loss. 𝔼_𝐊_m ∼ p(𝐊_m) represents the expected value over the distribution p(𝐊_m) of the input frame 𝐊_m.
§ ADJUSTABLE INDEX SELECTOR AND RESTORER
Using the semantic vector quantization encoder, the m-th key frame is represented by the sequence of indices 𝐬_m.
Letting 𝐪_m ∈ℝ^L_𝐬× d represent the matrix that consists of L_𝐬 quantized feature vectors for the m-th key frame, we have
[𝐪_m]_i,: = 𝐞_𝐬_m(i)^T ,
where i = 1, 2, …, L_𝐬 and 𝐞_𝐬_m(i) refers to the 𝐬_m(i)-th basis vector in the latent embedding space 𝐄.
In practice, the quantized feature vectors between adjacent key frames may exhibit similarity. By computing the similarity between the feature vectors of the 𝐊_m-1 and 𝐊_m, we can assess the redundancy between consecutive frames.
Take cosine similarity as an example, which is given by:
𝑠𝑖𝑚 = ⟨[𝐪_m-1]_i,:, [𝐪_m]_i,:⟩ / ( ‖[𝐪_m-1]_i,:‖_2 ·‖[𝐪_m]_i,:‖_2 ).
If sim exceeds a predefined threshold η, it indicates substantial redundancy between the two frames. In such cases, transmission can be optimized by selectively transmitting only the indices that exhibit significant changes, thereby reducing the volume of data transmitted while minimizing the impact on video reconstruction quality.
Let 𝐬_η denote the final sequence of selected indices and define:
𝐩 = [𝐩_2; …; 𝐩_M] ,
where 𝐩_m is a length L_𝐬 vector whose element is 0 or 1, and m = 2, …, M. Specifically, if the cosine similarity between [𝐪_m-1]_i,: and [𝐪_m]_i,: is less than η, we set 𝐩_m(i)=1 and include 𝐬_m(i) in 𝐬_η. Otherwise, we set 𝐩_m(i)=0 and keep 𝐬_η unchanged.
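A compact sketch of this selection rule (the threshold value and array shapes are placeholders) is:

import numpy as np

def select_indices(q_prev, q_curr, s_curr, eta=0.99):
    """q_prev, q_curr: (L_s, d) quantized feature vectors of key frames m-1 and m;
    s_curr: (L_s,) code indices of frame m.  Returns the marker vector p_m and
    the indices that actually need to be transmitted."""
    num = np.einsum("ld,ld->l", q_prev, q_curr)
    den = np.linalg.norm(q_prev, axis=1) * np.linalg.norm(q_curr, axis=1) + 1e-12
    sim = num / den
    p_m = (sim < eta).astype(np.uint8)      # 1: the index changed, transmit s_m(i)
    return p_m, s_curr[p_m == 1]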
Upon receiving the estimates of 𝐬_η and 𝐩, i.e., 𝐬̂_η and 𝐩̂ at the receiver, the adjustable index restorer reconstructs the key frame index sequence {𝐬̂_m}_m=1^M. Initially, 𝐬̂_1 is initialized using 𝐬̂_η.
Then for each subsequent key frame m = 2,…,M, 𝐬̂_m is reconstructed based on 𝐬̂_m-1 and using 𝐬̂_η with consideration of 𝐩̂.
As shown in Fig. <ref>, after processing the original video frames {𝐅_n}_n=1^N through the adaptive key-frame extractor, semantic vector quantization encoder, and adjustable index selector modules, we can finally obtain 𝐯, 𝐬_η, and 𝐩. Define the bit compression ratio (BCR) as the ratio of the number of bits required for the final transmission sequence to the original video frames. Thus, the BCR of the proposed VQ-DeepVSC is given by:
BCR = ( N + L_η× B + L_𝐬× (M-1) ) / ( C_F× H_F× W_F× 8 × N ),
where L_η is the length of 𝐬_η, and N, L_ηB, and L_𝐬(M-1) are the number of bits required for 𝐯, 𝐬_η, and 𝐩, respectively.
§ EXPERIMENTS
In this section, we provide a detailed description of the training configurations for the AKEI and MSVQ stages, as well as the conditions for the transmission experiments. We focus on the significance of AKEI and the adjustable index selector and restorer and their impact on system performance. Finally, we compare VQ-DeepVSC with the traditional H.265 method through experiments.
§.§ Implementation Details
In simulations, we employ low-density parity-check code (LDPC) codes with a block length of 648 bits and a 3/4 rate to encode 𝐛. Both quadrature phase shift keying (QPSK) and 16-quadrature amplitude modulation (16QAM) signaling schemes are utilized.
The subsequent experimental evaluation assesses the performance of the proposed VQ-DeepVSC using the UCF101 <cit.> dataset. The UCF101 dataset, comprising 101 distinct video categories, allows for a thorough examination of the system's capabilities. From each category, we randomly select a subset of 10 videos, creating a test dataset of 1,010 videos. This diverse test set allows for a rigorous evaluation of the efficacy and reliability of VQ-DeepVSC in video transmission, ensuring a more robust assessment of the performance of the different modules within the system.
To evaluate the system performance, we employ MS-SSIM <cit.> to compare the multiscale structural similarity at the pixel level between each frame of the original video and its reconstructed counterpart, and utilize LPIPS <cit.> to assess the perceptual quality of the reconstructed video from the human perspective.
§.§ Comparative Analysis of Methods
We compare the performance of VQ-DeepVSC with typical and commonly used H.265 video compression codecs for video transmission. We utilize the FFMPEG <cit.> library to perform H.265 encoding and decoding. The BCR of H.265 is given by:
BCR = ( B_s× 8 ) / ( C_F× H_F× W_F× 8 × N ),
where B_s represents the file size in bytes after H.265 compression.
According to (<ref>), the BCR for H.265 on the test dataset is 0.024.
For VQ-DeepVSC, we set η to 1; based on (<ref>), the BCR of our system is 0.023, which is slightly lower than that of H.265.
Fig. <ref> and Fig. <ref> show the comparisons between the proposed VQ-DeepVSC and H.265 video codecs under AWGN and multipath fading channel, respectively.
From the figures, it can be observed that the proposed VQ-DeepVSC outperforms H.265 in terms of both MS-SSIM and LPIPS, especially at medium to low SNRs.
This is due to H.265 being sensitive to noise, thereby limiting its effectiveness at medium to low SNRs. In contrast, our method not only achieves reliable video transmission at low SNRs but also maintains high video quality.
It can also be observed that under multipath fading channel, H.265 fails to transmit videos effectively, whereas the proposed method maintains robust video transmission and ensures high quality.
§ CONCLUSIONS
In this paper, we present the VQ-DeepVSC system, which employs an innovative dual-stage vector quantization framework to enhance video transmission quality and efficiency over wireless channels. This system effectively mitigates the cliff-effect and maintains significant data compression even under low SNR conditions, ensuring robust video transmission.
The proposed VQ-DeepVSC system employs the AKEI module in the first stage to extract key frames, reducing inter-frame redundancy and enhancing transmission robustness under low SNR conditions.
In the second stage, the MSVQ module compresses these key frames using a shared latent embedding space, effectively reducing intra-frame redundancy and supporting high-resolution video transmission. Additionally, the adjustable index selector and restorer further reduce inter-frame redundancy by compressing key-frame indices, enabling more flexible compression rates.
Experimental results demonstrate that the proposed VQ-DeepVSC achieves higher compression degree and higher video quality than the H.265 standard, especially under low SNRs or multi-path channels. Simulation results also demonstrate that the proposed VQ-DeepVSC has excellent generalization capabilities, making it suitable for various types of video transmission.
|
http://arxiv.org/abs/2409.03397v1 | 20240905101459 | Dynamic String Generation and C++-style Output in Fortran | [
"Marcus Mohr"
] | cs.PL | [
"cs.PL",
"D.3.3"
] |
Marcus Mohr (ORCID: 0000-0003-2942-8484)
Geophysics, Department of Earth and Environmental Sciences, Ludwig-Maximilians-Universität München, Munich, Germany
[email protected]
§ ABSTRACT
Using standard components of modern Fortran we present a technique to
dynamically generate strings with as little coding overhead as possible
on the application side. Additionally we demonstrate how this can be extended
to allow for output generation with a C++ stream-like look and feel.
CCS Concepts: Software and its engineering → Polymorphism
Dynamic String Generation and C++-style Output in Fortran
Marcus Mohr
=========================================================
§ MOTIVATION
Scientific simulation codes do not only perform large-scale I/O for reading
input datasets and storing final simulation results. Especially in the case
of long-running simulations, they also typically generate log messages to
inform the user on various details of the simulation run, ranging from
echoing steering parameters, over current time-step values, up to the progress
of iterative solvers and many more. These get send to either the
screen/terminal or a dedicated logfile or both.
In larger projects one quickly reaches a point where aspects such as the
following become highly desirable
* employ a 'nice' and uniform formatting for log messages
* allow users/developpers to select different levels of verbosity
(e.g. debug, info, warning)
* have different levels of indentation or another way to signal which
program component logged a message
* allow switching the message destination (terminal, file or both)
* …
Enforcing formatting rules and checking e.g. the current verbosity level
becomes cumbersome, if the corresponding write statements are cluttered
throughout the code. Obviously the standard idea to reduce code duplication
also applies here, i.e. one delegates the actual I/O operations, including
aspects like indentation, verbosity checking, etc. to a designated part of
the code, let's say a log_manager module.
An additional difficulty arises in MPI-parallel applications. Typically only
one MPI process e.g. the one with rank 0, is intended to generate log messages.
This implies that any generation of a log message must be wrapped inside code
for checking the rank of the executing MPI process. Again this check could
conveniently be encapsulated in said log_manager module, thus,
uncluttering the rest of the code.
Naturally the delegation of these decisions and code parts to a separate module
comes at a price. However, performance-wise the extra costs resulting from
calls to a different subprogram in such an approach can be considered
uncritical as in a typical simulation code the time spent for generating log
messages is negligible compared to the actual computational work.
While our hypothetical log_manager module handles the actual I/O
operations and possibly also deals with formatting issues and the like,
the generation of the actual text of a log message necessarily must happen
in the respective code part of the caller. The resulting text string must then
be sent to the log_manager module, e.g. by invoking a
log_write() subroutine.
Generating a string literal to pass to a subroutine is of course
straightforward and not different in Fortran than in any other
programming language employed in scientific programming
[frame=single,numbers=none]
call log_write( "Starting assembly of FE matrix" )
However, often log messages will contain information that is only available at
run-time, such as the number of vertices in an input mesh, the norm of a
residual vector or the current iteration count of a loop. Thus, we need to be
able to dynamically generate messages. Of course, this is nothing one
could not accomplish using standard Fortran tools. We just need to perform an IO
operation on an “internal file”, i.e. an existing string. Assume that for
this purpose our log_manager module provides a string of constant
length
[frame=single,numbers=none]
character(len=max_message_length) :: msg
Then logging e.g. the norm of a residual vector stored in the variable
resnorm could be achieved by
[frame=single,numbers=none]
write( msg, "( 'resnorm = ', E12.4E3)" ) resnorm
call log_write( msg )
Or we could output the entry a_2,3 of a matrix via
[frame=single,numbers=none]
write( msg, "('(',I0,',',I0,') = ',F0.3)" ) 2, 3, a(2,3)
call log_write( msg )
giving us an output of the form (2,3) = v, where v is the value of
a(2,3) printed with three decimal places.
To avoid cluttering the application code with such write statements, we
collect the conversion of values into strings in a small stringify
module that provides a generic function v2s() with one specific
implementation per supported type. A log message is then built by
concatenating string fragments with calls to v2s(); three short examples
illustrate this for an integer, a real and a logical value. The specific
function used in the first example, io_int2str(), is shown in the
following listing.
[frame=single,numbers=left]
function io_int2str( val, spec ) result( str )
use, intrinsic :: iso_fortran_env, only: int32
integer(int32), intent(in) :: val
character(len=*), optional, intent(in) :: spec
character(len=:), allocatable :: str
character(len=1024) :: buffer, fmt
! if conversion specifier present: use it
if ( present(spec) ) then
write( fmt,"(3A)") "(", spec, ")"
else
fmt = "(I0)"
end if
write( buffer, fmt ) val
str = trim( buffer )
end function io_int2str
Example of stringification function for 32-bit integers.
The second example is different in two respects. This time the argument to
v2s() is assumed to be of type real, with kind real64,
and we provide our own conversion specification as a second argument. The
latter is not required, as the approach allows us to define, as part of the
stringify module, (project-wide) rules for the format conversion. An
example can be found in the demonstrator code, <cit.>.
On the other hand, providing a conversion specification also e.g. allows us
to change the way logicals get converted. Our example io_bool2str()
implementation provides the options default (classic Fortran-style,
i.e. T/F),
word (true/false), code (.true./.false.) and "switch" (on/off).
Thus, the third example passes one of these options, for instance "switch",
as the conversion specification.
So far v2s() must be invoked explicitly and combined with the standard
string concatenation. To obtain a C++ stream-like look and feel we
additionally provide a streamstyle module that overloads the concatenation
operator // for combinations of a string and a value; the variant for
real(real64) values reads
[frame=single,numbers=left]
function real2stream( str_in, val ) result( str_out )
character(len=*), intent(in) :: str_in
real(real64) , intent(in) :: val
character(len=:), allocatable :: str_out
str_out = str_in // v2s( val )
end function real2stream
§ EXTENSIONS
§.§ Multi-Line Messages
Our first extension will allow generation of multi-line messages. Classically
one would handle this in an output statement in Fortran by inserting an
end-of-record specifier '/' into the conversion format. So
[frame=single,numbers=none]
print "(A,/,A)", "1st line of message ...", "... and 2nd line"
will produce
1st line of message ...
... and 2nd line
A simple alternative is to directly insert the line-break into the string.
For this purpose the stringify module provides a string literal
newline that is a shorthand for the ASCII line-feed control
character[This works for Unix/Linux and Mac OS. In the case of the
Windows OS we need to replace this by a carriage return followed by a
line-feed.].
[frame=single,numbers=none]
character(len=*), parameter :: newline = char(10) ! LF only
Hence, the following codelet
[frame=single,numbers=none]
print "(A)", newline// " We can" // newline // " use multi" //
"-line" // newline // " strings, too!"
will output
We can
use multi-line
strings, too!
The same can also be done with tab-stops.
§.§ I/O for Arrays
The approach is not restricted to scalar types, but can be
extended to any kind of derived type. We start by giving an example for
rank-1 arrays of integers, before considering a user-defined type.
We implement conversion of a rank-1 integer array in the function
io_intvec2str() given in Fig. <ref>. Assume that
the variable int_vec has the following four entries (1,7,-3,5),
then the codelet
[frame=single,numbers=none]
print "(A)", "An integer vector:" // newline // v2s(int_vec, "I2")
will print the header line An integer vector: followed by the four entries of int_vec, each formatted with the I2 edit descriptor.
§.§ Example with a User-Defined Structure Type
We now demonstrate the extension to user-defined derived types. As an example
let us consider a simple type point3d_t for representing a point in
3D.
The first step is to add a type-bound procedure stringify_point
(lines 3-4)
[frame=single,numbers=left]
type :: point3d_t
real(real64) :: x, y, z
contains
procedure, pass (this) :: stringify_point
end type point3d_t
The procedure then might be implemented as follows
[frame=single,numbers=left]
function stringify_point( this ) result( str_out )
class(point3d_t), intent(in) :: this
character(len=:), allocatable :: str_out
! allocate deferred length string for using it in write()
integer :: str_out_len = 6 + 3*4
allocate( character(len=str_out_len) :: str_out )
! pretty-print point coordinates as triple
write( str_out, "(3(A,SP,F3.1),A)" ) "(", this%x, ",", this%y, ",", this%z, ")"
end function stringify_point
As second step we add a corresponding function to our stringify module
whose only task is to delegate string generation to the type-bound procedure.
[frame=single,numbers=left]
function io_point3d2str( val ) result(str)
class(point3D_t), intent(in) :: val
character(len=:), allocatable :: str
str = val%stringify_point()
end function io_point3d2str
This new function io_point3d2str() is then added as another possibility
to the overload list for v2s(). In order to simply place objects of
type point3d_t in the output stream, we extend as third and
final step the streamstyle module. For this we add a function
point2stream(), which, again, is syntactically identical to
real2stream() above, and add it to the list of overloads for the
concatenation operator.
[frame=single,numbers=left]
interface operator(//)
module procedure int2stream, real2stream, point2stream
end interface operator(//)
Now the following codelet
[frame=single,numbers=left]
type(point3d_t) :: point = point3d_t( 0.5, 1.0, -2.0 )
write(*,'(A)') "Point at coords = " // point // " in domain"
will output the point pretty-printed as a coordinate triple embedded in the
surrounding text.
The presented technique relies only on standard language features
(deferred-length allocatable strings, generic interfaces, …) and nested I/O
operations (first allowed in F2023). As these
are well-implemented, see e.g. <cit.>, there should be no
portability issues. The demonstrator at <cit.> has successfully
been tested with the following compilers:
Compiler   Version
AMD/AOCC flang 12.0.0
GNU gfortran 12.2.0
Intel/classic ifort 2021.11.1
Intel/OneAPI ifx 2024.0.2
Nvidia nvfortran 21.9-0
|
http://arxiv.org/abs/2409.03435v1 | 20240905113654 | Direct Measurement of Density Matrices via Dense Dual Bases | [
"Yu Wang",
"Hanru Jiang",
"Yongxiang Liu",
"Keren Li"
] | quant-ph | [
"quant-ph"
] |
Beijing Institute of Mathematical Sciences and Applications
Beijing Institute of Mathematical Sciences and Applications
[email protected]
Peng Cheng Laboratory, Shenzhen 518055, China
[email protected]
College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
Quantum Science Center of Guangdong-Hong Kong-Macao Greater Bay Area
(Guangdong), Shenzhen 518045. China
§ ABSTRACT
A pivotal step in learning a quantum system is selecting proper observables.
Pauli observables and mutually unbiased bases are mainstream choices, being either utilized in practice or deemed theoretically optimal for state tomography.
However, for Pauli observables, even partial knowledge of a density matrix cannot be obtained from a constant number of observables, and a substantial number of observables is required for full tomography.
For mutually unbiased bases, the existence of informationally complete sets in every dimension is uncertain.
Therefore, alternative observables are needed.
In this work, 2d observables are designed, with their implementations specified, enabling a complete description of any d-dimensional quantum state. To emphasize the advantages, we apply them in two scenarios.
Firstly, we show that direct measurement of density matrix elements is feasible without auxiliary systems, and any element is extracted by three selected observables.
Secondly, we show that state tomography of unknown rank-r density matrices, excluding only a negligible set, can be realized with O(r log d) observables.
The designed observables present a significant advancement in the field, enhancing both the efficiency and feasibility of learning the quantum system.
Efficient understanding of a quantum system fundamentally relies on the selection of observables. Pauli observables and mutually unbiased bases (MUBs) are widely used in practice and often regarded as theoretically optimal for quantum state tomography (QST). However, Pauli observables require a large number of measurements for complete tomography and do not permit direct measurement of density matrix elements with a constant number of observables. For MUBs, the existence of complete sets of d+1 bases in all dimensions remains unresolved, highlighting the need for alternative observables.
In this work, we introduce a novel set of 2d observables specifically designed to enable the complete characterization of any d-dimensional quantum state. To demonstrate the advantages of these observables, we explore two key applications. First, we show that direct measurement of density matrix elements is feasible without auxiliary systems, with any element extractable using only three selected observables. Second, we demonstrate that QST for unknown rank-r density matrices—excluding only a negligible subset—can be achieved with O(r log d) observables. This significantly reduces the number of unitary operations compared to compressed sensing with Pauli observables, which typically require O(r d log^2 d) operations. Each circuit is iteratively generated and can be efficiently decomposed into at most O(n^4) elementary gates for an n-qubit system.
The proposed observables represent a substantial advancement in the characterization of quantum systems, enhancing both the efficiency and practicality of quantum state learning and offering a promising alternative to traditional methods.
Direct Measurement of Density Matrices via Dense Dual Bases
Keren Li
September 9, 2024
===========================================================
§ INTRODUCTION
The density matrix (DM) is a fundamental representation of a quantum state <cit.>, crucial for understanding quantum systems. Determining the DM accurately is a central task in quantum science. Traditionally, this problem is tackled through Quantum State Tomography (QST) <cit.>, which involves performing informationally complete (IC) measurements <cit.> and post-processing the data to estimate the quantum state. In a d-dimensional Hilbert space, a general density matrix has d^2 - 1 independent parameters. An IC positive operator valued measurement should contain at least d^2 projectors <cit.>.
Projective measurements (PMs) onto d + 1 mutually unbiased bases (MUB) are considered optimal IC measurements <cit.>. However, the existence of such d+1 MUBs when d is not a prime power remains an open question in quantum information theory <cit.>. In an n-qubit system, where d = 2^n, estimating the 4^n Pauli expectation values requires 3^n unitary operations, each followed by projective measurement (PM) on the computational basis <cit.>. This leads to 6^n different rank-1 projectors and necessitates exponential storage for the data, making QST impractical for large dimensions.
The off-diagonal elements of the DM are crucial as they capture key quantum properties such as entanglement <cit.> and decoherence <cit.>. Determining the entire DM is inefficient when only a subset of elements is needed. To address this, direct measurement protocols (DMPs) have been developed <cit.>, which involve various weak couplings between the main system and ancillary pointers, along with PMs. Although weak measurements slightly disturb the system and are easier to implement, they are inherently biased procedures that introduce unavoidable errors in the reconstructed state. Consequently, DMPs are less precise than QST <cit.>. To improve accuracy and precision, methods similar to weak measurement protocols have been proposed, where the coupling strength can be strong <cit.>.
DMPs can reduce computational effort for specific DM elements but add complexity by coupling ancillary pointers to the main system. For instance, measuring DM element ρ_jk typically involves tailored coupling operations U_j, followed by post-selection of the state |k⟩, and measuring the ancillary pointers <cit.>. For pure states or rank-1 DMs, a single pointer is sufficient <cit.>, but more general DMs require additional pointers <cit.>. Methods like δ-quench measurements have been proposed to eliminate the need for pointers, although they are restricted to specific systems <cit.>. Recent advancements have reduced the number of required pointers to one for multi-qudit DMs <cit.>, and even eliminated all pointers using phase-shifting techniques, though this approach necessitates O(d^2) unitary operations <cit.>. Implementing d^2 independent projectors with O(d^2) unitary operations in QST can also directly measure all DM elements <cit.>, but this method remains more resource-intensive compared to the O(d) coupling operations in two-pointer DMPs. This underscores the ongoing need for more efficient methods that can directly measure DM elements without ancillary systems and with reduced operational complexity.
Thus, we investigate whether O(d) unitary operations and projective measurements (PM) on the computational basis can be used for QST in arbitrary dimension d, eliminating the need for ancillary pointers to directly measure all DM elements. We are particularly interested in whether each DM element can be measured using a constant number of unitary operations from these O(d) operations.
In this work, we achieve this by designing a set of 2d eigenbases for QST, corresponding to 2d unitary operations followed by PM on the computational basis. Unlike the minimal d+1 MUBs, the existence of these observables for any dimension d is ensured through a deterministic construction algorithm. This set of observables also functions as a DMP, enabling each DM element to be measured with just three observables, thereby ensuring accuracy without the need for ancillary systems. Furthermore, these observables are applied to perform QST on quantum states with prior knowledge of rank-r, connecting through DMPs via matrix completion techniques. As a result, we demonstrate that for a rank-r density matrix, only O(r log d) of the 2d observables are required for full-state characterization. This approach dramatically reduces the required unitary operations, showing an exponential decrease compared to the O(rd log^2 d) operations needed when using random Pauli observables from the d^2 set via compressed sensing <cit.>. Finally, we present a unified formula that expresses all these unitary operations, where each can be decomposed into a permutation gate followed by Pauli measurements. The permutation gate itself can be efficiently decomposed into O(n^4) gates on an n-qubit system. This not only enhances the efficiency of quantum state learning but also opens new pathways for practical implementations in high-dimensional quantum systems.
§ PRELIMINARIES AND DENSE DUAL BASES
Observables and Projective Measurements.– When measuring a DM ρ with an observable O = ∑_k=1^d λ_k |O_k⟩⟨ O_k|, the Born rule states that the measurement outcome λ_k occurs with probability tr(ρ |O_k⟩⟨ O_k|). This corresponds to PM onto the orthonormal eigenbasis {|O_k⟩}_k=1^d. To simulate this PM, we can use the unitary operation U = ∑_k=1^d |O_k⟩⟨ k|, transforming the measurement into the computational basis {|k⟩}_k=1^d: applying U^† followed by a PM on the computational basis yields the outcome k with probability tr(U^†ρ U |k⟩⟨ k|) = tr(ρ |O_k⟩⟨ O_k|). Our focus is on designing PMs on O(d) eigenbases, implemented with O(d) unitary operations and PM on the computational basis, to directly measure all DM elements.
Regardless of directness, 4^n Pauli observables are a common choice to reconstruct an n-qubit unknown state:
ρ=1/2^n∑_i_1,⋯,i_n=0^3 tr(ρσ_i_1⊗⋯⊗σ_i_n)σ_i_1⊗⋯⊗σ_i_n
A total of 4^n expectation values should be measured <cit.>. Experimentally, 3^n Pauli observables, excluding the identity I for each qubit, are sufficient to obtain the 4^n expectation values. However, this requires 3^n unitary operations and computational basis measurements, resulting in 6^n projectors, which is far more than the 4^n needed. For example, the observable Z⊗ Z on a 2-qubit system corresponds to four different eigenstates: |↑↑⟩, |↑↓⟩, |↓↑⟩, and |↓↓⟩. For arbitrary dimension d, the 4^n Pauli matrices can be generalized to d^2 Gell-Mann matrices, Heisenberg-Weyl matrices, and so on.
PMs onto d+1 MUBs are the minimal and optimal strategy for QST, which produces d(d+1) projectors.
Two orthonormal bases {|a_j⟩}_j=1^d and {|b_k⟩}_k=1^d are termed as mutually unbiased if |⟨ a_j | b_k ⟩|^2 = 1/d for all j,k.
For prime power dimension d, d+1 MUBs can be constructed <cit.>.
In N-qubit systems, all MUB measurements can be efficiently implemented with 2^N+1 decomposed circuits and PMs on the computational basis <cit.>. MUBs are widely used in quantum information theory <cit.>. However, for arbitrary dimension d, the existence of d+1 MUBs remains unknown <cit.>, and strong numerical evidence suggests that only 3 MUBs exist for d=6 <cit.>.
For any dimension d, Caves, Fuchs, and Schack constructed d^2 rank-1 projections to directly determine all DM elements <cit.>.
Denote |ϕ_jk^±⟩≐(|j⟩± |k⟩)/√(2), |ψ_jk^±⟩≐(|j⟩± i|k⟩)/√(2), where j,k∈[d]≐{0,⋯,d-1}.
The collection of d^2 projected states are
𝒜_d={|l⟩,|ϕ_jk^+⟩,|ψ_jk^+⟩: 0≤ j<k≤ d-1; l∈ [d]}.
Every ρ_ij is reconstructed via at most four projectors.
ρ_ll=tr(ρ|l⟩⟨ l|),
ρ_jk=tr(ρ(|ϕ_jk^+⟩⟨ϕ_jk^+|-i|ψ_jk^+⟩⟨ψ_jk^+|)) - (1-i)/2 · (ρ_kk+ρ_jj).
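These identities are easy to check numerically; the short numpy snippet below (our illustration, not part of the original text) verifies the off-diagonal formula for a random density matrix.

import numpy as np

rng = np.random.default_rng(0)
d, j, k = 4, 1, 3
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)                              # random density matrix

e = np.eye(d)
phi = (e[j] + e[k]) / np.sqrt(2)                  # |phi_jk^+>
psi = (e[j] + 1j * e[k]) / np.sqrt(2)             # |psi_jk^+>
val = phi.conj() @ rho @ phi - 1j * (psi.conj() @ rho @ psi)
rho_jk = val - (1 - 1j) / 2 * (rho[k, k] + rho[j, j])
assert np.isclose(rho_jk, rho[j, k])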
The challenges associated with implementing those O(d^2) projectors, as discussed in <cit.>, are highlighted in comparison to using O(d) unitary coupling operations and computational basis measurements in two-pointer DSP.
We use an efficient algorithm to construct at most 2d eigenbases to cover the states in Eq. (<ref>) for arbitrary dimension d.
These are referred to as dense dual bases (DDBs).
§ RESULTS AND ANALYSIS
For an arbitrary dimension, all elements of a density matrix can be directly measured using 2d-1 DDBs for even d and 2d DDBs for odd d.
Analysis. When designing each eigenbasis, orthogonality and completeness are the key constraints, precisely, each pair of elements in an eigenbasis should be orthogonal, and there should be d elements in each basis.
Thus we introduce elements {|ϕ_jk^-⟩} and {|ψ_jk^-⟩} into the set 𝒜_d in Eq. (<ref>).
Consequently, there are 2d^2-d elements. For the initial case d=2, the six elements {|0⟩, |1⟩, |ϕ_01^+⟩, |ϕ_01^-⟩, |ψ_01^+⟩, |ψ_01^-⟩} are the eigenstates of the Pauli observables Z, X, Y.
Here, the pair {(0, 1)} corresponds to the eigenbasis {|ϕ_01^±⟩} of Pauli X, and `dually', to the eigenbasis {|ψ_01^±⟩} of Pauli Y. For general d, we dually arrange |ϕ_jk^±⟩ and |ψ_jk^±⟩ into different eigenbases. Besides, elements |ϕ_j_1k_1^±⟩ and |ϕ_j_2k_2^±⟩ cannot be grouped into the same eigenbasis if the index pairs (j_1,k_1) and (j_2,k_2) share a common index.
It can be transformed into finding an optimal game strategy. Suppose we have many bands, each marked with d numbers {0, 1, ⋯, d-1}. The game requires us to cut each band to form combinations (pairs) of two numbers. The sequence is irrelevant. Each band can produce at most ⌊ d/2 ⌋ pairs. When d is odd, there are (d-1)/2 pairs and a single element. The target is to determine how we can use a minimal number of bands to ensure that each pair {(j,k):0 ≤ j < k ≤ d-1} can be formed.
Each partition of the cut band is dually associated with two eigenbases, similar to the case when d=2. When d is even, we need at least C_d^2/(d/2)=d-1 bands to construct the corresponding d-1 partitions, which corresponds to 2(d-1) eigenbases. Together with ℬ_0={|0⟩, ⋯, |d-1⟩}, there are 2d-1 DDBs.
When d is odd, we require d bands to construct the corresponding d partitions, which corresponds to 2d DDBs.
For n-qubit system, d=2^n, we illustrate the minimal d-1 partitions construction with n=log d iterations in Fig. (<ref>).
For example, the seven DDBs corresponding to the three partitions for d=4 are as follows:
ℬ_0^4 = {|0⟩,|1⟩,|2⟩,|3⟩},
ℬ_1^4 = { |ϕ_01^±⟩,|ϕ_23^±⟩}, 𝒞_1^4 = { |ψ_01^±⟩,|ψ_23^±⟩},
ℬ_2^4 = { |ϕ_02^±⟩,|ϕ_13^±⟩}, 𝒞_2^4 = { |ψ_02^±⟩,|ψ_13^±⟩},
ℬ_3^4 = { |ϕ_03^±⟩,|ϕ_12^±⟩}, 𝒞_3^4 = { |ψ_03^±⟩,|ψ_12^±⟩}.
For n=1, the initial partition is 𝕋^2 = T^2_1 = {(0,1)}.
For the general n, the partitions consist of two parts. The first part is obtained by merging the partitions of 𝕋^2^n-1 and 𝕋^2^n-1 + 2^n-1, specifically:
T^{2^n}_{m_1} = T^{2^{n-1}}_{m_1}∪ (T^{2^{n-1}}_{m_1}+2^{n-1}),
where 1≤ m_1≤ 2^{n-1}-1.
The second part is defined by the intersection of the set {0, …, 2^n-1 - 1} and the set {2^n-1, …, 2^n - 1}. Specifically:
T^{2^n}_{m_2}={(k, 2^{n-1}+[(k+m_2) mod 2^{n-1}]) : k∈ [2^{n-1}]},
where 2^{n-1}≤ m_2≤ 2^n-1.
It is straightforward to verify that each pair {(j,k) : 0 ≤ j < k ≤ 2^n - 1} is present in the constructed partitions. Using a similar iterative technique, the minimal d-1 partitions for even d, or d partitions for odd d can be constructed. The details are provided in Appendix A.
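The iteration is short enough to state as code; the following Python sketch (ours, for illustration) generates the d-1 partitions for d = 2^n and brute-force checks that every pair is covered.

import itertools

def partitions_pow2(n):
    """Partitions T^{2^n}_1 .. T^{2^n}_{2^n - 1} from the iteration above."""
    if n == 1:
        return [[(0, 1)]]
    prev, h = partitions_pow2(n - 1), 2 ** (n - 1)
    merged = [T + [(j + h, k + h) for (j, k) in T] for T in prev]
    crossing = [[(k, h + (k + m) % h) for k in range(h)] for m in range(h, 2 * h)]
    return merged + crossing

for n in (1, 2, 3, 4):
    d, P = 2 ** n, partitions_pow2(n)
    assert len(P) == d - 1
    assert {p for T in P for p in T} == set(itertools.combinations(range(d), 2))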
Without ancillas, the minimal eigenbases for QST are d+1 MUBs, if they exist, corresponding to d(d+1) projectors <cit.>. In contrast, DDBs can produce at most 2d^2 projectors and are applicable for arbitrary d. Earlier DMPs <cit.> employed four or three Pauli observables on two-pointers, resulting in a total of 16d^2 or 8d^2 projectors, respectively. To the best of our knowledge, 2d (or 2d-1) DDBs require a minimal number of projectors to directly measure all DM elements, illustrated in Fig. (<ref>). Additionally, three DDBs are sufficient to determine each DM element. These three DDBs can also determine the d diagonal elements and d/2 off-diagonal elements.
It should be noticed here that the construction of minimal partitions is not unique. We can find many solutions when we use depth-first or breadth-first search algorithms for brute-force search. The above-designed algorithm applies log d iterations, which significantly reduces the time complexity of the construction.
For d=6 (qubit-qutrit), three MUBs exist, but strong numerical evidence suggests that a fourth MUB does not exist <cit.>. Thus they are not enough for QST. As a comparison, a total of 11 DDBs are sufficient to directly reconstruct all 36 × 36 DM elements. Numerical simulations in Appendix B verify this result.
The 2d DDBs can determine all DM elements {ρ_jk : j,k ∈ [d]}. When only partial DM elements are known, properties such as entanglement or decoherence can be inferred. Furthermore, the entire DM can be recovered using matrix completion techniques <cit.> with the partial DM elements {ρ_jk : j,k ∈ C}, where C corresponds to the prior knowledge about ρ.
Typically, prior knowledge can greatly reduce the measurement resources for QST in different application scenarios. Examples include matrix product states <cit.>, permutation-invariant states <cit.>, subsets of a fiducial state <cit.>, as well as low-rank states <cit.>, among others. Compressed sensing shows that randomly choosing O(rd log^2 d) out of d^2 Pauli observables suffices to determine a rank-r DM with high probability <cit.>. Here, we consider the recovery of low-rank density matrices using partial DDBs.
To uniquely determine a rank-r DM of dimension d, O(r log(d/r)) DDBs suffice, except on a measure-zero set. In this context, the rank r is significantly smaller than the dimension d (r ≪ d).
Analysis. When r=1, the rank-1 DMs correspond to pure states. PMs onto 3d-2 states can uniquely determine all pure states, except for a measure-zero set <cit.>. These states can be {|l⟩, |ϕ_jk^+⟩, |ψ_jk^+⟩ : 0 ≤ j < k ≤ d-1, |j-k| ≤ 1; l ∈ [d]}. In another view, the five eigenbases for pure state tomography constructed by Goyeneche et al. <cit.> form the minimal cover of these 3d-2 states. Furthermore, these projectors onto 5d states are proved to be rank-1 strictly complete <cit.>.
Rank-r strictly complete measurements can uniquely determine any given rank-r state with a probability close to 1. No other physical states of any rank share the same measurement probability distributions, except for a possibly dense set of rank-r states on a measure-zero set.
A POVM is rank-r strictly complete if it can determine the DM elements with the following labels:
C = {(j,k) : |j-k| ≤ r}.
The whole DM can be recovered using the following convex optimization:
X̂ = argmin_X ∑_{(j,k)∈ C} | tr(X|k⟩⟨ j|) - ρ_jk |^2,
such that X ≽ 0, tr(X) = 1.
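A minimal sketch of this completion step using the cvxpy package (the function name and the dictionary of measured entries are our own conventions) is:

import cvxpy as cp

def complete_density_matrix(measured, d):
    """measured: dict mapping (j, k) with |j - k| <= r to the measured rho_jk."""
    X = cp.Variable((d, d), hermitian=True)
    residual = cp.sum_squares(cp.hstack(
        [X[j, k] - val for (j, k), val in measured.items()]))
    problem = cp.Problem(cp.Minimize(residual),
                         [X >> 0, cp.real(cp.trace(X)) == 1])
    problem.solve()
    return X.value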
Besides, it is proved that the strictly complete measurements are advantageous due to their compatibility with efficient convex optimization and their robustness to statistical noise and state preparation errors (see Corollary 1 and 2 in <cit.> for detailed discussions).
These correspond to the 0-th to r-th diagonal lines of the DM, as depicted in Fig. (<ref>). Thus, the task is to select DDBs containing the following states:
ℬ = {|l⟩, |ϕ_jk^+⟩, |ψ_jk^+⟩ : 0 ≤ j < k ≤ d-1, |j-k| ≤ r; l ∈ [d]}.
Equivalently, this involves finding the number of partitions containing pair (j,k), where |j-k| ≤ r, 0 ≤ j < k ≤ d-1.
For d = 2^n or for a general value of d, we can construct O(r log(d/r)) partitions for this task, as detailed in Appendix C.
An alternative approach to covering the states in Eq. (<ref>) is to use 4r + 1 eigenbases, which is specifically applicable to dimensions of the form d = 2^n <cit.>. In contrast, Result <ref> necessitates a larger number of eigenbases but extends its applicability to arbitrary dimensions d.
When using the compressed sensing method, random O(rd log^2 d) expectation values of Pauli observables can produce the estimation of the unknown state with high probability <cit.>. They can be implemented with O(rd log^2 d) different styles of unitary operation and PM on the computational basis. Thus PMs onto DDBs provide an exponential decrease in the styles of unitary operations.
Furthermore, we think the exponential decrease in unitary rotations also leads to a reduction in the required sampling times.
The strength lies in the different information-extraction ability of two-outcome measurements and d-outcome measurements.
Consider the direct measurement in QST discussed in <cit.>, which involves O(d^2) unitary rotations. They can correspond to PM on computational basis and d^2 - d two-outcome POVMs:
{ U_k|0⟩⟨ 0|U_k^†, I - U_k|0⟩⟨ 0|U_k^†}, k = 1, ⋯, d^2 - d.
Here, U_k|0⟩ represents the nontrivial DDB elements in Eq. (<ref>).
Let p_k = tr(ρ_0 U_k|0⟩⟨ 0|U_k^†), where ρ_0 is the unknown DM. To estimate p_k with a confidence level of 1-σ and an error less than ϵ, the required sampling time N_k is given by <cit.>:
N_k ≈ 2ln(1/σ) · p_k(1 - p_k)/ϵ^2 ≤ ln(1/σ)/(2ϵ^2).
However, the N_k(1-p_k) experiments corresponding to measurement I - U_k|0⟩⟨ 0|U_k^† are essentially wasted, as they do not provide additional information.
To determine the O(rd) DM elements labeled in Eq. (<ref>), the complexity of sampling times is O(rd ·ln(1/σ)/ϵ^2).
When extending the two-outcome measurement to the d-outcome DDBs measurements, the waste component I - U_k|0⟩⟨ 0|U_k^† is decomposed into d-1 rank-1 projections for other DDBs, which can provide up to d/2-1 pieces of new information. Given that a total of O(rlog d) unitary operations are used, the sample complexity is O(r log d ·ln(1/σ)/ϵ^2).
Pauli observables can be treated as two-outcome measurements {P_k^+, I - P_k^+}, where P_k^+ is the eigenspace with eigenvalue 1. Consequently, we observe a similar reduction in sampling complexity compared to compressed sensing with randomized Pauli measurements. However, even though the types of unitary operations for DDBs decrease to O(r log(d/r)), the total number of projectors remains O(rd log(d/r)). This indicates that the data required for convex optimization still scales with d, albeit with a slight reduction.
Besides, the parameter ϵ for each expectation value is intrinsically linked to the dimension d, as it is crucial to minimize the errors for each observable, ensuring that the overall accumulated error remains within a tolerably small threshold.
To test the results, we perform a numerical simulation, which explores the fidelity of reconstructed quantum density matrices under varying numbers of measurements and ranks.
The simulation begins with random quantum density matrices. For each matrix, multiple reconstructions are performed using different numbers of measurements via DDBs. The fidelity of the reconstructed matrices is then calculated, with the process repeated 20 times to obtain average fidelity values.
Figure <ref>(a) shows how the fidelity changes with the number of measurements for a fixed dimension (d=16) while varying the rank of the density matrix (r=3,4,8,16), demonstrating that increasing the number of measurements generally improves the fidelity of the reconstruction. In addition, we conducted a test on reconstructing density matrices using either DDBs or a Pauli-based compressed sensing method. The compressed sensing involved an initial measurement with varying numbers of random Pauli bases (CSPs), specifically rn, 10rn, drn^2, and 10drn^2, followed by convex optimization to minimize the nuclear norm of the density matrix. In contrast, Result <ref> employed 𝒪(rn) DDBs with semidefinite optimization to minimize the Frobenius distance. As shown in Figure <ref>(b)-(d), corresponding to d=4, 8, 16, our methods demonstrate clear advantages in requiring fewer measurements.
The failed set of determinations can be characterized. Consider r× r principal submatrices
A_k =
( ρ_{k,k}        ⋯  ρ_{k,k+r-1}
  ⋮             ⋱  ⋮
  ρ_{k+r-1,k}    ⋯  ρ_{k+r-1,k+r-1} ),
k=0,…,d-r. The total failure set is when A_i is singular for i = 0, …, d - r - 1 and A_j is singular for j = i ± 1 <cit.>.
An adaptive strategy can handle the failure cases in which some diagonal elements are zero. The DM is expressed with an ensemble {p_k,|ψ_k⟩}, ρ=∑ p_k|ψ_k⟩⟨ψ_k|.
For example, if ρ_00=0, the first component of all {ψ_k} is zero, implying ρ_0l=ρ_l0=0 for all l=0,…,d-1.
We can reconstruct the submatrix by erasing the first column and row of ρ. Then the DDBs can be designed for the projected subspace spanned by {|1⟩,…,|d-1⟩}.
Now we consider the circuits implementation for the DDBs in n-qubit case.
In n-qubit systems, each projective measurement onto 2 × 2^n - 1 DDBs can be implemented using a permutational operation followed by a Pauli measurement. Each permutation can be decomposed into O(n^4) elementary gates.
Precisely, the projective measurement onto the computational basis {|0⟩,⋯,|2^n-1⟩} is implemented by the Pauli measurement Z^⊗ n.
In total, 2^n - 1 permutational operations correspond to the whole partitions in Result <ref>.
For t = 1, the permutation P_1 = I^⊗ n. The subsequent Pauli observables are Z^⊗ (n-1)⊗ X and, dually, Z^⊗ (n-1)⊗ Y.
For t = 2, …, 2^n - 1, find j and k such that t = 2^k + j and j < 2^k. The permutational operation is
P_t = I^⊗ (n - k) ⊗( |0⟩⟨0| ⊗ I^⊗ (k-1) + |1⟩⟨1| ⊗ (𝒰_k-1)^j ),
where 𝒰_k-1 is a (k-1)-qubit operation and
𝒰_k=∑_{m=0}^{2^k-1}|(m-1) mod 2^k⟩⟨ m|.
The subsequent Pauli observables are Z^⊗ (n - k)⊗ X ⊗ Z^⊗ (k - 1) and, dually, Z^⊗ (n - k)⊗ Y ⊗ Z^⊗ (k - 1).
In total, the 2n Pauli observables involved can be implemented with a Hadamard gate H (or H̃^†) and a computational-basis measurement at each qubit.
Analysis.
It is straightforward that the PM onto the computational basis is implemented with the Pauli measurement Z^⊗ n.
The universal expression of the permutational operations follows from the iterative construction of the partitions.
When n = 1, the three DDBs are the eigenbases of the Pauli Z, X, and Y operators. The implementation of a Pauli X (or Y) measurement can be transformed into a sequence of applying the H (or H̃^†) operation followed by a Pauli Z measurement. Here H is the Hadamard gate and H̃=S· H, S=|0⟩⟨ 0|+i|1⟩⟨ 1|.
When n = 2, the merging partition is T_1^4 = T_1^2 ∪ (T_1^2 + 2) = {(0, 1), (2, 3)}. Expressed in binary form, one DDB is ℬ_1^4 = {(|00⟩± |01⟩) / √(2), (|10⟩± |11⟩) / √(2)}. We can obtain it by acting I ⊗ H on the computational basis.
The first intersection partition is T_2^4 = {(0, 2), (1, 3)}, corresponding to ℬ_2^4 = {(|00⟩± |10⟩) / √(2), (|01⟩± |11⟩) / √(2)}. The unitary operation U_2^4 = H ⊗ I transforms the computational basis into this form.
The second intersection partition is T_3^4 = {(0, 3), (1, 2)}, corresponding to the Bell basis ℬ_3^4 = {(|00⟩± |11⟩) / √(2), (|01⟩± |10⟩) / √(2)}. The unitary operation U_3^4 = (|0⟩⟨0| ⊗ I + |1⟩⟨1| ⊗ X) · (H ⊗ I) transforms the computational basis into it. By changing H into H̃, we obtain the circuit for the dual basis.
For general n, when t=1,⋯,2^n-1-1, the partition T_t^2^n is obtained by merging and intersection. Denote U_t as the operation to transform the computational basis into one nontrivial DDB. For the emerging partitions by Eq.(<ref>), U_t=I⊗ U_t^2^n-1, where U_t^2^n-1 represents the t-th circuits at iteration (n-1), starting from I^⊗ (n-2)⊗ H.
When t=2^{n-1}, T_{2^{n-1}}={(0,2^{n-1}),(1,2^{n-1}+1),⋯} by Eq.(<ref>), then U_{2^{n-1}}=H⊗ I^⊗ (n-1). When t=2^{n-1}+j, U_t= (|0⟩⟨0|⊗ I+|1⟩⟨1|⊗ (𝒰^†_{n-1})^j)· (H⊗ I^⊗ (n-1)).
Since the operation H (H̃^†) followed by the Pauli measurement Z is equivalent to the Pauli measurement X (Y), we only need to perform the permutation I^⊗ n and the ones in Eq.(<ref>) followed by a Pauli measurement to implement PMs onto the 2^{n+1}-2 nontrivial DDBs. Together with the Pauli measurement Z^⊗ n, all circuit implementations of PMs onto DDBs are depicted.
In Appendix. D <cit.>, we decompose each permutational operation P_j with at most O(n^4) elementary gates.
𝒰_k is a k-qubit global quantum gate. Though the classical counterpart is a basic operation, realizing it in the quantum device may be challenging. It can be implemented with O(k^3) elementary gates with the decomposition of incremental gates and Corollary 7.6 for generalized Toffoli gate <cit.>.
Recent work on high-dimensional quantum systems shows promising techniques to implement similar gates (<cit.>), potentially overcoming this technical barrier.
§ CONCLUSION AND DISCUSSION
Effective characterization of quantum states is a fundamental problem in quantum science. We propose an efficient method for constructing and decomposing minimal DDBs to address this issue. In the task of QST, our approach is compared with the commonly used Pauli observables and the optimal d+1 MUBs, as detailed in Table. <ref>. Additionally, this method functions as a direct measurement protocol, allowing for the reconstruction of each DM element using a constant number of unitary operations. The strong von Neumann measurements employed in our scheme ensure accuracy, unlike the commonly used weak measurement protocols, and they eliminate the need for ancillary pointers, resulting in minimal 2d^2 projectors. In response to the challenge presented in <cit.>, our method demonstrates that only O(d) unitary operations, rather than O(d^2), along with PM in the computational basis, are also sufficient to directly reconstruct all DM elements. The sampling of the 2d DDBs could provide an exponential decrease when reconstructing low-rank DM.
Furthermore, we validate our approach through numerical simulations and cloud-based quantum experiments, demonstrating its practical feasibility.
There are several promising directions for future research stemming from this work. Firstly, random DDBs hold significant potential for efficiently predicting properties of unknown quantum states, particularly in n-qubit shadows tomography, where they could enable constant-time post-processing <cit.> for any observable in one experiment—a contrast to the exponential worst-case scenario encountered in random Clifford measurements <cit.>. Secondly, optimizing permutation operations in DDBs, which currently exhibit O(n^4) complexity, presents an intriguing task. Extending to d-level systems, where nontrivial DDBs correspond to unitary matrices with only two nonzero elements per row and column, raises important questions about efficient implementation and identifying physical systems capable of performing these operations seamlessly. Thirdly, integrating DDBs with matrix recovery techniques in QST suggests that with certain prior information, the entire density matrix could be reconstructed using exponentially fewer unitary operations. Our work demonstrates this efficiency with rank-r information, and it is worth exploring whether other types of prior information could similarly simplify the reconstruction process. Lastly, comparing the accuracy of Pauli observables, MUBs, and DDBs in quantum information tasks across different platforms—especially under the influence of errors or when analyzing specific unknown states—could provide valuable guidance in establishing these methods as fundamental tools in characterizing unknown quantum states. These directions underscore the substantial potential of DDBs to advance both theoretical and practical aspects of quantum information science.
Acknowledgements—
We thank Cheng Qian for the brute-force searches that helped clarify the structure of the DDBs construction. This work was supported by the National Natural Science Foundation of China under Grants No. 62001260, 42330707 (Y.W.), 11701536 (Y.L.), and 11905111 (K.L.); the Beijing Natural Science Foundation under Grant No. Z220002 (Y.W.); and the Major Key Project of PCL (Y.L.).
Author Contributions—
Y. W. conceived the idea for this paper, applied the rank-r QST, decomposed the circuits, and drafted the manuscript. H. J. constructed partitions for d = 2^n, while Y. L. handled cases where d/2 is odd. K. L. conducted the error analysis, numerical simulations, and cloud experiments, and provided revisions to the manuscript. All authors reviewed and approved the final version of the manuscript for submission.
§ DETAILED PROOFS IN RESULT 1
In the main text, we have demonstrated how to construct d-1 partitions when d=2^n, such that each pair (j, k) with 0 ≤ j < k ≤ 2^n - 1 is included in one of these partitions.
Now we give the detailed construction of minimal partitions for arbitrary dimension d.
For even d, we can always create d-1 partitions {T^d_1,⋯,T^d_d-1} such that each pair (j, k) with 0 ≤ j < k ≤ d - 1 is in one of the partitions.
We will use mathematical induction to prove the statement 𝕊(d), where d represents the even dimension.
Base Case: For d=2, there is only one pair, and its corresponding partition is denoted as 𝕋^2 = {T_1^2} = {(0,1)}. Consequently, 𝕊(2) holds true.
Inductive Step:
Since d is even, one of the numbers d/2 or d/2+1 is also even.
To prove 𝕊(d) for a general even d, we assume that either 𝕊(d/2) or 𝕊(d/2+1) holds and then construct the partitions for d.
Case 1: d/2 is even.
According to the induction hypothesis 𝕊(d/2), there exists a set of partitions {T_t^d/2 : t = 1, 2, ⋯, d/2 - 1} covering each (j, k) with 0 ≤ j < k ≤ d/2 - 1.
Similar to the construction when d is the power of 2, we can construct the target partitions {T_t^d : t = 1, 2, …, d-1} using the following rules:
T_t^d = T_t^d/2∪ (T_t^d/2+d/2), t=1,2,...,d/2-1,
T_t^d = {(j, d/2+[(t+j) mod d/2]): 0≤ j≤ d/2-1}, t=d/2,d/2+1,...,d-1.
Case 2: d/2 is odd.
According to the induction hypothesis 𝕊(d/2 + 1), there exists a set of partitions {T_t^d/2 + 1 : t = 1, 2, ⋯, d/2} covering each (j, k) with 0 ≤ j < k ≤ d/2.
We select pairs {(c_t, d/2)}, where c_t is a neighbor of d/2. We construct the target partitions {T_t^d : t = 1, 2, …, d-1} using the following rules:
T_t^d = T_t^d/2+1∪(T_t^d/2+1+d/2)-(c_t,d/2)-(d/2+c_t,d)∪ (c_t,d/2+c_t), t=1,⋯,d/2,
T_t^d = {(j, d/2+[(t+j) mod d/2]): 0≤ j≤ d/2-1}, t=d/2+1,...,d-1.
We illustrate the construction method for d=6 in Fig.(<ref>).
Conclusion:
Finally, we prove that this construction covers all pairs (j, k) with 0 ≤ j < k ≤ d - 1, assuming that 𝕊(2⌈d/4⌉) holds.
Case 1: j and k on the same side of d/2. If 0 ≤ j < k ≤ d/2 - 1 or d/2 ≤ j < k ≤ d - 1, using the induction hypothesis 𝕊(2⌈d/4⌉), the tuple (j, k) is either in T_t^d/2 or (T_t^d/2 + d/2), covered by the partitions {T_t^d} as shown in Equations (<ref>) and (<ref>).
Case 2: j and k on different sides of d/2. Assume 0 ≤ j < d/2 ≤ k ≤ d - 1.
If d/2 is even, the tuple (j, k) can be found in the partitions constructed by Equation (<ref>).
If d/2 is odd, there are two subcases:
If k - j < d/2, the tuple (j, k) is in the partitions constructed by Equation (<ref>).
If k - j = d/2, the tuple (j, k) is in the partitions constructed by Equation (<ref>).
Thus, the set of partitions 𝕋^d = {T_t^d : t = 1, 2, …, d - 1} is generated. Each tuple (j, k) can be found in at least one partition, and therefore 𝕊(d) holds.
If d is odd, there exist d partitions such that any tuple (j,k) with 0 ≤ j < k ≤ d - 1 can be found in one of these partitions.
By Lemma 1, we can construct d partitions for an even dimension d + 1. Each partition contains a tuple (c_d, d), where c_d is a neighboring number of d. By replacing (c_d, d) with the single number c_d, we obtain d partitions satisfying the requirements for the odd dimension d.
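To make the construction concrete, the following Python sketch (function names are ours) implements the doubling rule of Eqs. (<ref>) and (<ref>) for the power-of-two case d=2^n and brute-force checks that every pair (j,k) is covered; the d/2-odd branch and the odd-d reduction of the lemmas above are omitted for brevity.

from itertools import combinations

def partitions_power_of_two(n):
    """Build the 2^n - 1 partitions T_t for d = 2^n by the doubling rule.

    Each partition is a list of disjoint pairs (j, k) covering {0, ..., 2^n - 1}.
    """
    parts = [[(0, 1)]]            # partitions for d = 2
    d = 2
    while d < 2 ** n:
        new_parts = []
        # rule (A1): T_t^{2d} = T_t^{d} U (T_t^{d} + d),  t = 1, ..., d-1
        for T in parts:
            new_parts.append(T + [(j + d, k + d) for (j, k) in T])
        # rule (A2): T_t^{2d} = {(j, d + (t + j) mod d)},  t = d, ..., 2d-1
        for t in range(d, 2 * d):
            new_parts.append([(j, d + (t + j) % d) for j in range(d)])
        parts = new_parts
        d *= 2
    return parts

def covers_all_pairs(parts, d):
    covered = {tuple(sorted(p)) for T in parts for p in T}
    return covered == set(combinations(range(d), 2))

if __name__ == "__main__":
    n = 3
    parts = partitions_power_of_two(n)
    print(len(parts), "partitions for d =", 2 ** n)        # expect 7
    print("all pairs covered:", covers_all_pairs(parts, 2 ** n))

For n = 3 this reproduces exactly the seven partitions of dimension 8 listed below.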
§.§ Algorithm for arbitrary dimension based on the lemmas
§.§ Examples of minimal partitions and corresponding DDBs
The deterministic algorithm can generate the minimal partitions required for any dimension d. We will first present the partitions for 1-qubit, 2-qubit, and 3-qubit systems, which have dimensions of 2, 4, and 8, respectively. Subsequently, we will provide the corresponding DDBs for each partition. Then, we will demonstrate the partition and DDBs construction for the odd dimension d = 7.
The case of d=2 is trivial, as there is only one tuple (0,1) with 0≤ j<k ≤ 1. Therefore, 𝕋^2=T^2_1,
T^2_1={(0,1)}.
The three IC DDBs are:
ℬ_0^2 = {|0⟩, |1⟩},
ℬ_1^2 = {|0⟩± |1⟩/√(2)}, 𝒞_1^2 = {|0⟩± i|1⟩/√(2)}.
For d=4, there are 3 partitions, which are the ones in main text. 𝕋^4={T^4_1, T^4_2, T^4_3}, where
T^4_1 = {(0,1), (2,3)},
T^4_2 = {(0,2), (1,3)},
T^4_3 = {(0,3), (1,2)}.
The seven IC DDBs are:
ℬ_0^4 = {|0⟩, |1⟩, |2⟩, |3⟩},
ℬ_1^4 = {|0⟩± |1⟩/√(2), |2⟩± |3⟩/√(2)}, 𝒞_1^4 = {|0⟩± i|1⟩/√(2), |2⟩± i|3⟩/√(2)},
ℬ_2^4 = {|0⟩± |2⟩/√(2), |1⟩± |3⟩/√(2)}, 𝒞_2^4 = {|0⟩± i|2⟩/√(2), |1⟩± i|3⟩/√(2)},
ℬ_3^4 = {|0⟩± |3⟩/√(2), |1⟩± |2⟩/√(2)}, 𝒞_3^4 = {|0⟩± i|3⟩/√(2), |1⟩± i|2⟩/√(2)}.
For d=8, there are 7 partitions in 𝕋^8.
T^8_1 = {(0,1),(2,3),(4,5),(6,7)},
T^8_2 = {(0,2),(1,3),(4,6),(5,7)},
T^8_3 = {(0,3),(1,2),(4,7),(5,6)},
T^8_4 = {(0,4),(1,5),(2,6),(3,7)},
T^8_5 = {(0,5),(1,6),(2,7),(3,4)},
T^8_6 = {(0,6),(1,7),(2,4),(3,5)},
T^8_7 = {(0,7),(1,4),(2,5),(3,6)}.
The fifteen IC DDBs are:
ℬ_0^8 = {|0⟩, |1⟩, …, |7⟩},
ℬ_1^8 = { |ϕ_01^±⟩, |ϕ_23^±⟩, |ϕ_45^±⟩, |ϕ_67^±⟩}, 𝒞_1^8 = { |ψ_01^±⟩, |ψ_23^±⟩, |ψ_45^±⟩, |ψ_67^±⟩},
ℬ_2^8 = { |ϕ_02^±⟩, |ϕ_13^±⟩, |ϕ_46^±⟩, |ϕ_57^±⟩}, 𝒞_2^8 = { |ψ_02^±⟩, |ψ_13^±⟩, |ψ_46^±⟩, |ψ_57^±⟩},
ℬ_3^8 = { |ϕ_03^±⟩, |ϕ_12^±⟩, |ϕ_47^±⟩, |ϕ_56^±⟩}, 𝒞_3^8 = { |ψ_03^±⟩, |ψ_12^±⟩, |ψ_47^±⟩, |ψ_56^±⟩},
ℬ_4^8 = { |ϕ_04^±⟩, |ϕ_15^±⟩, |ϕ_26^±⟩, |ϕ_37^±⟩}, 𝒞_4^8 = { |ψ_04^±⟩, |ψ_15^±⟩, |ψ_26^±⟩, |ψ_37^±⟩},
ℬ_5^8 = { |ϕ_05^±⟩, |ϕ_16^±⟩, |ϕ_27^±⟩, |ϕ_34^±⟩}, 𝒞_5^8 = { |ψ_05^±⟩, |ψ_16^±⟩, |ψ_27^±⟩, |ψ_34^±⟩},
ℬ_6^8 = { |ϕ_06^±⟩, |ϕ_17^±⟩, |ϕ_24^±⟩, |ϕ_35^±⟩}, 𝒞_6^8 = { |ψ_06^±⟩, |ψ_17^±⟩, |ψ_24^±⟩, |ψ_35^±⟩},
ℬ_7^8 = { |ϕ_07^±⟩, |ϕ_14^±⟩, |ϕ_25^±⟩, |ϕ_36^±⟩}, 𝒞_7^8 = { |ψ_07^±⟩, |ψ_14^±⟩, |ψ_25^±⟩, |ψ_36^±⟩}.
For dimension d=7, the seven partitions are constructed by Lemma <ref>; they are
T^7_1 = {(0,1),(2,3),(4,5),6},
T^7_2 = {(0,2),(1,3),(4,6),5},
T^7_3 = {(0,3),(1,2),4,(5,6)}.
T^7_4 = {(0,4),(1,5),(2,6),3},
T^7_5 = {(0,5),(1,6),2,(3,4)},
T^7_6 = {(0,6),1,(2,4),(3,5)},
T^7_7 = {0,(1,4),(2,5),(3,6)}.
Then the 14 IC DDBs are the following:
ℬ_1^7 = { |ϕ_01^±⟩, |ϕ_23^±⟩, |ϕ_45^±⟩, |6⟩}, 𝒞_1^7 = { |ψ_01^±⟩, |ψ_23^±⟩, |ψ_45^±⟩, |6⟩},
ℬ_2^7 = { |ϕ_02^±⟩, |ϕ_13^±⟩, |ϕ_46^±⟩, |5⟩}, 𝒞_2^7 = { |ψ_02^±⟩, |ψ_13^±⟩, |ψ_46^±⟩, |5⟩},
ℬ_3^7 = { |ϕ_03^±⟩, |ϕ_12^±⟩, |4⟩, |ϕ_56^±⟩}, 𝒞_3^7 = { |ψ_03^±⟩, |ψ_12^±⟩, |4⟩, |ψ_56^±⟩},
ℬ_4^7 = { |ϕ_04^±⟩, |ϕ_15^±⟩, |ϕ_26^±⟩, |3⟩}, 𝒞_4^7 = { |ψ_04^±⟩, |ψ_15^±⟩, |ψ_26^±⟩, |3⟩},
ℬ_5^7 = { |ϕ_05^±⟩, |ϕ_16^±⟩, |2⟩, |ϕ_34^±⟩}, 𝒞_5^7 = { |ψ_05^±⟩, |ψ_16^±⟩, |2⟩, |ψ_34^±⟩},
ℬ_6^7 = { |ϕ_06^±⟩, |1⟩, |ϕ_24^±⟩, |ϕ_35^±⟩}, 𝒞_6^7 = { |ψ_06^±⟩, |1⟩, |ψ_24^±⟩, |ψ_35^±⟩},
ℬ_7^7 = { |0⟩, |ϕ_14^±⟩, |ϕ_25^±⟩, |ϕ_36^±⟩}, 𝒞_7^7 = { |0⟩, |ψ_14^±⟩, |ψ_25^±⟩, |ψ_36^±⟩}.
Here the computational basis ℬ_0^7 can be omitted, since the elements {|0⟩,⋯,|6⟩} are already included twice among the other DDBs.
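As a self-contained illustration (function names are ours), the following numpy sketch builds the two dual DDBs attached to a given partition, using the |ϕ^±_jk⟩ and |ψ^±_jk⟩ states defined in the main text, and checks that each DDB is an orthonormal basis.

import numpy as np

def ddb_bases(partition, d):
    """Return the two dual DDBs (B, C) attached to one partition.

    Entries of `partition` are either a pair (j, k) or a single index m
    (used for odd d); pairs give (|j> +- |k>)/sqrt(2) and (|j> +- i|k>)/sqrt(2),
    single indices give the computational state |m>.
    """
    def e(j):
        v = np.zeros(d, dtype=complex); v[j] = 1.0; return v

    B, C = [], []
    for entry in partition:
        if isinstance(entry, tuple):
            j, k = entry
            B += [(e(j) + e(k)) / np.sqrt(2), (e(j) - e(k)) / np.sqrt(2)]
            C += [(e(j) + 1j * e(k)) / np.sqrt(2), (e(j) - 1j * e(k)) / np.sqrt(2)]
        else:
            B.append(e(entry)); C.append(e(entry))
    return np.array(B), np.array(C)

if __name__ == "__main__":
    T7_1 = [(0, 1), (2, 3), (4, 5), 6]          # first partition for d = 7
    B, C = ddb_bases(T7_1, d=7)
    # each DDB is an orthonormal basis of C^7
    print(np.allclose(B @ B.conj().T, np.eye(7)),
          np.allclose(C @ C.conj().T, np.eye(7)))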
§ NUMERICAL EXPERIMENTS ON DIMENSION SIX
While it is known that the projective measurements onto d+1 MUBs are minimal and optimal QST measurements for a d-dimensional system, their construction for each dimension d is still an open question. The first dimension for which d+1 MUBs have not been constructed is 6, corresponding to a qubit-qutrit system, ℋ_2⊗ℋ_3.
For d=6, there are 5 partitions in 𝕋^6,
T^6_1={(0,1),(2,5),(3,4)}, T^6_2={(0,2),(1,4),(3,5)}, T^6_3={(0,3),(1,2),(4,5)},
T^6_4={(0,4),(1,5),(2,3)}, T^6_5={(0,5),(1,3),(2,4)}.
The corresponding 11 IC DDBs are denoted as
{ℬ_0^6,ℬ_1^6...,ℬ_5^6,𝒞_1^6,...,𝒞_5^6}.
ℬ_0^6 = {|0⟩, …, |5⟩},
ℬ_1^6 = { |ϕ_01^±⟩, |ϕ_25^±⟩, |ϕ_34^±⟩}, 𝒞_1^6 = { |ψ_01^±⟩, |ψ_25^±⟩, |ψ_34^±⟩},
ℬ_2^6 = { |ϕ_02^±⟩, |ϕ_14^±⟩, |ϕ_35^±⟩}, 𝒞_2^6 = { |ψ_02^±⟩, |ψ_14^±⟩, |ψ_35^±⟩},
ℬ_3^6 = { |ϕ_03^±⟩, |ϕ_12^±⟩, |ϕ_45^±⟩}, 𝒞_3^6 = { |ψ_03^±⟩, |ψ_12^±⟩, |ψ_45^±⟩},
ℬ_4^6 = { |ϕ_04^±⟩, |ϕ_15^±⟩, |ϕ_23^±⟩}, 𝒞_4^6 = { |ψ_04^±⟩, |ψ_15^±⟩, |ψ_23^±⟩},
ℬ_5^6 = { |ϕ_05^±⟩, |ϕ_13^±⟩, |ϕ_24^±⟩}, 𝒞_5^6 = { |ψ_05^±⟩, |ψ_13^±⟩, |ψ_24^±⟩}.
We tested our proposal numerically in a 6-dimensional system.
We examined four quantum state types: (a) the maximally mixed state I/6, (b) a balanced state 1/6∑_k,j=0^5|k⟩⟨ j|, (c) a separable state, and (d) an entangled state. These states are described by Hermitian, positive semi-definite, unit-trace density matrices. States (a) and (b) were directly generated in our simulation, while states (c) and (d) were prepared using local unitary transformations U_2∈ℋ_2 and U_3∈ℋ_3, distributed uniformly according to the Haar measure.
As U_2 and U_3 are local, the entanglement of the resulting state remains unchanged, which can be verified using the Peres-Horodecki criterion. We prepared states (c) and (d) as U_2⊗ U_3 |ϕ⟩, where |ϕ⟩ represents |0⟩ or |1⟩+|2⟩+|3⟩+|5⟩ (up to normalization), respectively.
For the reconstruction of an unknown density matrix ρ, we utilize two methods. The first method is based on semi-definite programming, where ρ̃ represents the estimated form of ρ and is obtained through a parameterized matrix X≥ 0. The mathematical model is formulated as follows:
ρ̃=argmin_X∑_i=1^66‖ Tr(X E_i)-p_i‖,
where ‖·‖ is a norm function, E_i are taken from the bases (ℬ_0,...,ℬ_5,𝒞_1,...,𝒞_5), and
p_i are the measured probabilities on E_i by unknown density matrix ρ.
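A minimal sketch of this optimization with the cvxpy package is given below; the function name is ours, and we add an explicit unit-trace constraint as a normalization, which is left implicit in the model above.

import numpy as np
import cvxpy as cp

def reconstruct_sdp(projectors, probs, d):
    # X is the candidate density matrix: Hermitian and positive semi-definite,
    # with an explicit unit-trace normalization added here for definiteness
    X = cp.Variable((d, d), hermitian=True)
    residuals = [cp.abs(cp.real(cp.trace(X @ E)) - p)
                 for E, p in zip(projectors, probs)]
    problem = cp.Problem(cp.Minimize(sum(residuals)),
                         [X >> 0, cp.real(cp.trace(X)) == 1])
    problem.solve()
    return X.value

if __name__ == "__main__":
    # toy d = 2 example: projectors onto B_0, B_1 and C_1 (states up to normalization)
    d = 2
    states = [np.array(v, dtype=complex) for v in
              [[1, 0], [0, 1], [1, 1], [1, -1], [1, 1j], [1, -1j]]]
    E = [np.outer(s, s.conj()) / np.vdot(s, s).real for s in states]
    rho_true = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
    p = [np.trace(rho_true @ e).real for e in E]
    print(np.round(reconstruct_sdp(E, p, d), 3))   # close to rho_true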
The second approach is direct reconstruction. This method utilizes a total of 36 probabilities, which is half of the available data.
Consequently, numerical experiments were conducted 20 times for all tested states. The Monte Carlo method was utilized to simulate p_i with 100×2^ shots. The results of these numerical experiments are illustrated in SFig.(<ref>), where the infidelities are represented by the Frobenius distance.
F_f=√(Tr[(ρ-ρ̃)(ρ-ρ̃)^†]).
Error bars come from the standard deviations over the 20 repetitions of the simulation.
§ RIGOROUS PROOFS OF RESULT 2
When d = 2^n, the pair (j, k) with 0 ≤ j < k ≤ 2^n - 1 and |j - k| ≤ r can be found in at most O(r (n - log r)) partitions for minimal DDBs.
Analysis. Using n iterations, 2^n - 1 partitions are constructed such that each pair (j, k) with 0 ≤ j < k ≤ 2^n - 1 is covered. When r ≪ 2^n, we can always find m such that 2^m-1 < r ≤ 2^m.
At iteration t = m for dimension 2^m, a total of 2^m - 1 partitions have been constructed by Result <ref>. Thus, at most 2^m - 1 partitions contain (j, k) for 0 ≤ j < k ≤ 2^m - 1. At iteration t = m + 1, new partitions are constructed by Eq. (<ref>) and Eq. (<ref>), resulting in at most 2^m - 1 and r partitions that satisfy |j - k| ≤ r, 0 ≤ j < k ≤ 2^m+1 - 1. Therefore, from t = m to t = log d = n, the number of relevant partitions is less than 2^m - 1 + r(log d - m) < r(log d + 2 - m) < r log(4d/r).
For any general dimension d, the pair (j, k) with 0 ≤ j < k ≤ d - 1 and |j - k| ≤ r can be found in at most O(r log(d/r)) partitions for minimal DDBs.
Analysis. Based on the construction in Algorithm 1, we iteratively construct b_k-1 partitions for dimension b_k, where k=L,L-1,⋯,1.
Here b_L = 2 and b_1 = f(d).
Firstly, we prove that the number of iterations is exactly L = ⌈log d ⌉.
For example, consider d = 100. We should construct the partitions iteratively for the dimensions:
b_7 = 2, b_6 = 4, b_5 = 8, b_4 = 14, b_3 = 26, b_2 = 50, b_1 = 100.
With ⌈log 100 ⌉ = 7 iterations, we obtain the partitions for d = 100.
For any general d, we observe that b_k-1 = 2b_k or b_k-1 = 2b_k - 2, leading to the inequality b_k-1≤ 2b_k. If the iteration count is ⌈log d ⌉ - 1, the maximum value of b_1 is 2^⌈log d ⌉ - 1.
However, 2^⌈log d ⌉ - 1 < d. This results in a contradiction, since b_1 should equal d or d + 1. On the other hand, it is easy to observe that if we start with dimension b_1=f(d) and iterate L = ⌈log d ⌉ times, the final value can always be reduced to 2. This is because 2^L-1< b_1 ≤ 2^L, and generally 2^L-k< b_k ≤ 2^L-k+1 for k=2,⋯,L.
Similar to the case when d=2^n, there is always a number m such that 2^m-1<r ≤ 2^m. Denote L-k+1=m. Then at iteration m, there are at most b_L+1-m-1≤ 2^m-1 partitions for dimension b_L+1-m.
Repeating the procedure until the iteration L for dimension b_1, the new constructions are based on Eq. (<ref>) and Eq. (<ref>). The number of partitions containing the required elements is less than 2^m-1+r( L-m)< r(2+⌈log d ⌉ -m)<rlog (8d/r), where -m ≤ -log r. Then the number of DDBs required is O(r log(d/r)).
Each partition corresponds to two eigenbases. Together with ℬ_0, O(r log(d/r)) DDBs can reconstruct the elements {ρ_jk : j,k ∈ C}.
§ CIRCUITS ANALYSIS
We may as well label the 2^n+1-1 DDBs on n-qubit systems as follows:
{ℬ_0^2^n, ℬ_j^2^n, 𝒞_j^2^n:j=1,⋯,2^n-1}.
The computational basis is ℬ_0^2^n={|0⟩,⋯,|2^n-1⟩}.
The basis ℬ_j^2^n and 𝒞_j^2^n are dually designed for the same partition. Here we consider the 2^n+1-2 circuits to transform the computational basis to nontrivial DDBs {ℬ_j^2^n, 𝒞_j^2^n:j=1,⋯,2^n-1}.
1-qubit: The DDBs ℬ_1^2 and 𝒞_1^2 are in Eq. (<ref>). The DDB circuits to map the computational basis into them are shown in SFig.(<ref>).
2-qubit: The six nontrivial DDBs are in Eq. (<ref>). With the binary form, the three DDBs for the three partitions are the following
ℬ_1^4 = {|00⟩± |01⟩/√(2), |10⟩± |11⟩/√(2)},
ℬ_2^4 = {|00⟩± |10⟩/√(2), |01⟩± |11⟩/√(2)},
ℬ_3^4 = {|00⟩± |11⟩/√(2), |01⟩± |10⟩/√(2)}.
The corresponding circuits are:
n-qubit:
Denote the 2^n-1 unitary operations as {U_t^2^n:t=1,⋯,2^n-1} for the nontrivial DDBs ℬ_t^2^n. The DDBs {𝒞_t^2^n} are obtained dually by inserting a factor of i in the basis states. At the last iteration of the (n-1)-qubit case, the unitary operations {U_t^2^n-1 : t = 1, ⋯, 2^n-1 - 1} map the computational basis ℬ_0^2^n-1 into ℬ_t^2^n-1.
For the n-qubit case, we have T_t^2^n = T_t^2^n-1∪ (T_t^2^n-1 + 2^n-1) for t = 0, ⋯, 2^n-1 - 1, and T_t^2^n = {(j, 2^n-1 + [(j + t) mod 2^n-1]) : 0 ≤ j ≤ 2^n-1 - 1} for t = 2^n-1, ⋯, 2^n - 1.
Thus, when t = 0, ⋯, 2^n-1 - 1, we have U_t^2^n = I ⊗ U_t^2^n-1. This is because the basis states of ℬ_t^2^n-1 are of the form |k_1⟩± |k_2⟩. The partition T_t^2^n is iteratively constructed by T_t^2^n-1∪ (T_t^2^n-1 + 2^n-1), where t = 0, ⋯, 2^n-1 - 1. Therefore, the basis states of ℬ_k^2^n are in the form |0⟩(|k_1⟩± |k_2⟩) or |1⟩(|k_1⟩± |k_2⟩).
When t = 2^n-1, we have U_2^n-1^2^n = H ⊗ I^⊗ n-1. This follows because the partition T_2^n-1^2^n consists of {(0, 2^n-1), (1, 2^n-1 + 1), ⋯, (2^n-1 - 1, 2^n - 1)}. These numbers can be expressed in binary form, and the corresponding basis states are given by {(|0⟩± |1⟩) ⊗ |j_2, ⋯, j_n⟩ : j_2, ⋯, j_n = 0, 1}. Hence, U_2^n-1^2^n = H ⊗ I^⊗ n-1.
When t = 2^n-1 + 1, ⋯, 2^n - 1, we can express t as 2^n-1 + j. Then, U_t^2^n = [|0⟩⟨ 0|⊗ I + |1⟩⟨ 1|⊗ (𝒱_n-1)^j] · [H ⊗ I^⊗ (n-1)], where 𝒱_n-1 =𝒰^†_n-1= ∑_m=0^2^n-1 - 1 |(m + 1) mod 2^n-1⟩⟨ m|.
This result is derived as follows. The partition T_t^2^n consists of {(0, 2^n-1 + j), (1, 2^n-1 + [(1 + j) mod 2^n-1]), ⋯, (2^n-1 - 1, 2^n-1 + [(2^n-1 - 1 + j) mod 2^n-1])}. Expressing these numbers in binary form, the corresponding basis states are given by {|0⟩⊗ |j_2, ⋯, j_n⟩ + |1⟩⊗ |[(j_2, ⋯, j_n) + j] mod 2^n-1⟩ : j_2, ⋯, j_n = 0, 1}. This basis can be obtained by applying the conditional shift operation |0⟩⟨ 0|⊗ I + |1⟩⟨ 1|⊗ (𝒰^†_n-1)^j on the basis of ℬ_2^n-1^2^n.
Remark: The circuits depicted on the middle and right of Fig. (<ref>) are realized during the n-th iteration. At each k-th iteration, where k=2, …, n-1, the newly generated 2^k circuits correspond to those shown in Fig. (<ref>). The operation P_j, as defined in Eq. (9) of the main text, represents the universal form of the conjugate transpose of controlled operations for iterations k=2, …, n. The structure of these circuits provides an alternative interpretation for the total of 2^n-1 partitions, illustrated by the sum 2^0 + 2^1 + … + 2^n-1 = 2^n - 1.
§.§ Circuits decomposition of permutational operation
As previously mentioned, the projective measurement (PM) of the state ρ onto the eigenbasis {U|k⟩ : k = 0, ⋯, 2^n - 1} can be implemented by first applying U^†, followed by a PM onto the computational basis ℬ_0^2^n.
The operation H (H̃^†) followed by 1-qubit computational measurement (Pauli Z measurement) is equivalent to Pauli X (Y) measurement. Thus we just need to decompose the following operations in Fig. (<ref>):
P_t= |0⟩⟨ 0|⊗ I + |1⟩⟨ 1|⊗ (𝒰_k-1)^j,
where 𝒰_k=∑_m=0^2^k-1|(m-1) mod 2^k⟩⟨ m|, k=2,⋯,n-1 and j=1,⋯,2^k-1-1.
Next, we will demonstrate that all the permutation operations can be efficiently decomposed into polynomially many elementary gates.
For each j = 1, …, 2^k-1, even though j may be exponentially large, (𝒰_k)^j can be decomposed into a product of at most k specific unitary operations, namely (𝒰_k), (𝒰_k)^2^1, …, (𝒰_k)^2^k-1.
Analysis: For each j ∈{1, …, 2^k-1}, we can express j in its binary form as j⃗= (j_0, j_1, …, j_k-1) where j_i ∈{0,1}. Specifically, j is written as:
j = j_0 × 2^0 + j_1 × 2^1 + ⋯ + j_k-1× 2^k-1.
Therefore, we have:
(𝒰_k)^j = [(𝒰_k)^1]^j_0× [(𝒰_k)^2^1]^j_1×⋯× [(𝒰_k)^2^k-1]^j_k-1.
The circuit is illustrated in Fig. (<ref>).
For example, if j_0 = 0, then the term [(𝒰_k)^1]^j_0 simplifies to the identity operation I, and can thus be omitted.
Consequently, we only need to identify the nonzero elements in {j_0, j_1, …, j_k-1} and perform at most k operations from the set {(𝒰_k)^1, (𝒰_k)^2^1, …, (𝒰_k)^2^k-1}. The order of these operations can vary since each of them commutes with the others.
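A short numpy sketch (names ours) verifying this binary decomposition, together with the identity (𝒰_k)^2^l = 𝒰_k-l⊗ I^⊗ l discussed next:

import numpy as np

def cyclic_shift(k):
    """U_k = sum_m |m-1 mod 2^k><m| : the decrement-by-one permutation on 2^k levels."""
    dim = 2 ** k
    U = np.zeros((dim, dim))
    for m in range(dim):
        U[(m - 1) % dim, m] = 1.0
    return U

def shift_power_from_binary(k, j):
    """Build (U_k)^j as the product of (U_k)^{2^i} over the set bits of j."""
    dim = 2 ** k
    out = np.eye(dim)
    i = 0
    while (1 << i) <= j:
        if (j >> i) & 1:
            out = out @ np.linalg.matrix_power(cyclic_shift(k), 2 ** i)
        i += 1
    return out

if __name__ == "__main__":
    k, j = 3, 5
    direct = np.linalg.matrix_power(cyclic_shift(k), j)
    print(np.allclose(direct, shift_power_from_binary(k, j)))    # True
    # the identity (U_k)^{2^l} = U_{k-l} (tensor) I^{(tensor) l}:
    l = 1
    lhs = np.linalg.matrix_power(cyclic_shift(k), 2 ** l)
    rhs = np.kron(cyclic_shift(k - l), np.eye(2 ** l))
    print(np.allclose(lhs, rhs))                                  # True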
Consider the implementation of k unitary operations (𝒰_k)^1, (𝒰_k)^2^1, …, (𝒰_k)^2^k-1. These circuits are equivalent to those for 𝒰_k, 𝒰_k-1, …, 𝒰_1 = X. Specifically,
(𝒰_k)^2^l = 𝒰_k-l⊗ I^⊗ l
for l = 0, …, k-1, as illustrated in Fig. (<ref>). Additionally, the circuit decomposition of 𝒰_l is shown in Fig. (<ref>).
Analysis. We have (𝒰_k)^2^l=∑_m=0^2^k-1|(m -2^l) mod 2^k⟩⟨ m|.
The binary form of 2^l is the following
l⃗=(0, ⋯, 0_k-l-1, 1, 0, ⋯, 0_l).
We express the binary form of m as m⃗=(m_0⋯,m_k-1).
In the binary form,
(𝒰_k)^2^l = ∑_m_0,⋯,m_k-1=0,1 |(m⃗-l⃗) mod 2^k⟩⟨m⃗|
= ∑_m_0,⋯,m_k-1=0,1 |[(m_0⋯ m_k-l-1)-(0, ⋯, 0_k-l-1, 1)] mod 2^k-l⟩⟨ m_0⋯ m_k-l-1| ⊗ |m_k-l⋯ m_k-1⟩⟨ m_k-l⋯ m_k-1|
=∑_m=0^2^k-l-1|(m-1) mod 2^k-l⟩⟨ m| ⊗ I^⊗ l
= 𝒰_k-l⊗ I^⊗ l.
Now we consider the decomposition of 𝒰_l, where l=1,⋯,k.
When l=1, 𝒰_1=∑_m=0^1 |(m-1) mod 2⟩⟨ m|=|0⟩⟨ 1|+|1⟩⟨ 0|=X.
For general l, 𝒰_l = ∑_m=0^2^l-1 |(m-1) mod 2^l⟩⟨ m| represents a global shift operation. The classical counterpart of this operation, the basic increment-by-1 (`+1') counter, serves as its foundation.
The circuit implementation of 𝒰_l is also used in pure state (rank-1 density matrix) QST <cit.>, as depicted in Fig. (<ref>).
Each permutation operation required to perform PM on DDBs can be decomposed into at most O(n^4) elementary 1-qubit and 2-qubit gates.
Analysis: Denote ∧_m(U) as the generalized Toffoli gate with m+1 input bits, which maps |x_1, …, x_m, y⟩ to |x_1, …, x_m, (∏_k=1^m x_k) ⊕ y⟩. On input (x_1, …, x_k, y), the gate applies U to y if and only if ∏_k=1^m x_k = 1. When m = 1, ∧_1(X) corresponds to the Controlled-NOT operation, expressed as ∧_1(X) = |0⟩⟨ 0| ⊗ I + |1⟩⟨ 1| ⊗ X.
Thus, the operation 𝒰_l in Fig. (<ref>) is a combination of the following gates:
X, ∧_1(X), …, ∧_l-1(X).
According to Corollary 7.6 in <cit.>, the l-qubit gate ∧_l-1(X) can be decomposed into Θ(l^2) elementary 1-qubit and 2-qubit gates. Consequently, 𝒰_l can be decomposed into O(l^3) elementary gates. For the controlled operation |0⟩⟨ 0| ⊗ I + |1⟩⟨ 1| ⊗𝒰_l, the cost in elementary gates also scales as O(l^3), as it involves a combination of ∧_1(X), …, ∧_l-1(X), ∧_l(X).
Now, consider the worst-case scenario where the permutation operation P_t in Eq. (<ref>) incurs the maximum cost in terms of elementary gates. This occurs when k = n. For each j = 1, …, 2^n-1 - 1, P_t can be implemented with at most n controlled operations using Decomposition <ref>. Therefore, the upper bound for decomposing all permutation operations involved in DDBs is O(n^4).
Using l-1 ancilla qubits, ∧_l-1(X) can be decomposed into O(l) elementary gates <cit.>. Consequently, each permutation operation in the DDB circuit can be decomposed into at most O(n^3) elementary gates.
It is noteworthy that, with the following decomposition and strategy, the cost of gates could be further reduced.
The operations (𝒰_l)^j and (𝒰_l)^2^l-j can be implemented using the same number of gates, where j = 1, …, 2^l - 1.
Analysis: We have 𝒰_l = ∑_m=0^2^l-1 |m-1 2^l⟩⟨ m|. Thus, (𝒰_l)^j · (𝒰_l)^2^l-j = I. Therefore, if we perform the conjugate transpose circuit of (𝒰_l)^j, we obtain the circuit for (𝒰_l)^2^l-j.
Strategy: By combining the analysis from Decompositions <ref> and <ref>, the circuit decomposition of (𝒰_l)^j can be further simplified compared to the binary expression. For instance, when j = 2^l - 1, we should integrate the circuits for 𝒰_l, (𝒰_l)^2^1, …, and (𝒰_l)^2^l-1 as described in Decomposition <ref>. However, according to Decomposition <ref>, it suffices to implement a single circuit for (𝒰_l)^†. As a result, the circuit components for (𝒰_l)^2^1, …, and (𝒰_l)^2^l-1 can be omitted, leading to a more efficient implementation.
In general, we can define a finite set of integers:
S = {± 1, ± 2^1, …, ± 2^l-1}.
For any j ∈{1, …, 2^l - 1}, we can identify the minimal number of elements of S whose sum equals j modulo 2^l, which is legitimate because (𝒰_l)^2^l = I. We then decompose (𝒰_l)^j according to these elements of S, rather than just using the binary form corresponding to S' = {1, 2^1, …, 2^l-1}.
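As an illustration of this strategy, the following sketch (function name ours) finds a minimal decomposition by breadth-first search over exponents taken modulo 2^l:

from collections import deque

def minimal_shift_decomposition(j, l):
    """Fewest factors from S = {+-1, +-2, ..., +-2^{l-1}} whose exponents
    sum to j modulo 2^l (legitimate because (U_l)^{2^l} = I)."""
    dim = 2 ** l
    moves = [s * 2 ** i for i in range(l) for s in (+1, -1)]
    seen = {0: []}
    queue = deque([0])
    target = j % dim
    while queue:
        r = queue.popleft()
        if r == target:
            return seen[r]
        for m in moves:
            nr = (r + m) % dim
            if nr not in seen:
                seen[nr] = seen[r] + [m]
                queue.append(nr)

if __name__ == "__main__":
    l = 4
    print(minimal_shift_decomposition(15, l))   # e.g. [-1]: one gate instead of four
    print(minimal_shift_decomposition(5, l))    # e.g. [1, 4]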
§.§ Three circuits for arbitrary DM element
Now we consider the three circuits of PMs onto DDBs that directly reconstruct an arbitrary unknown density-matrix element ρ_jk, 0≤ j <k≤ d-1.
The basis for determining the diagonal elements is the computational basis, so the corresponding measurement is the Pauli measurement Z^⊗ n.
The other two circuits are constructed in the following way.
* Write the binary representation of j and k, which are j_1j_2⋯ j_n and k_1k_2⋯ k_n respectively.
* Find the first differing qubit of |j_1j_2⋯ j_n⟩ and |k_1k_2⋯ k_n⟩.
We may as well denote it as q_s. Namely,
j = j_1,⋯,j_s-1, j_s=0, j_s+1,⋯,j_n,
k = k_1,⋯,k_s-1, k_s=1, k_s+1,⋯,k_n.
Denote the difference between the binary numbers j_s+1,⋯,j_n and k_s+1,⋯,k_n as
l=∑_m=s+1^n (k_m-j_m)× 2^n-m.
* The permutational operation for ρ_jk is defined by
P_j,k=I^⊗ s-1⊗ [|0⟩⟨ 0|⊗ I+|1⟩⟨ 1|⊗ (𝒰_n-s)^l].
The circuits for the PMs onto the nontrivial DDBs are depicted in Fig. (<ref>). When we go through all ρ_jk, only O(2^n) circuit types are required instead of O(4^n). We can verify the action of the conjugate transpose of these circuits as follows.
After applying the operation H, the state |j⟩ = |j_1 ⋯ j_n⟩ evolves to
|j_1⋯ j_s-1⟩|0⟩ + |1⟩/√(2) |j_s+1⋯ j_n⟩.
Following the conditional permutation operation, the final state becomes
|j_1⋯ j_n⟩ + |k_1⋯ k_n⟩/√(2) = |ϕ^+_jk⟩.
Thus, in the left circuit of Fig. (<ref>), if the measurement result is j_1,⋯,j_n, it corresponds to the projected state |ϕ^+_jk⟩. Similarly, if the result is k_1,⋯,k_n, it corresponds to |ϕ^-_jk⟩.
In the right circuit, if the measurement result is j_1,⋯,j_n (or k_1,⋯,k_n), it corresponds to the projected state |ψ^+_jk⟩ (or |ψ^-_jk⟩).
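A small numpy check of the prescription above (function names and the explicit gate ordering are our assumptions, to be compared with Fig. (<ref>)): composing P_{j,k} with a Hadamard on qubit q_s sends |ϕ^+_jk⟩ to |j⟩ before the Z^⊗ n readout, while the orthogonal state |ϕ^-_jk⟩ lands on a single, distinct computational basis string.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

def shift(bits):
    # U = sum_m |m-1 mod 2^bits><m|  (the decrement permutation defined above)
    dim = 2 ** bits
    U = np.zeros((dim, dim))
    for m in range(dim):
        U[(m - 1) % dim, m] = 1.0
    return U

def measurement_rotation(j, k, n):
    # P_{j,k} followed by H on the first differing qubit q_s
    jb = [(j >> (n - 1 - q)) & 1 for q in range(n)]
    kb = [(k >> (n - 1 - q)) & 1 for q in range(n)]
    s = next(q for q in range(n) if jb[q] != kb[q])              # 0-based position
    rest = n - 1 - s
    l = sum((kb[q] - jb[q]) * 2 ** (n - 1 - q) for q in range(s + 1, n))
    Ul = (np.linalg.matrix_power(shift(rest), l % 2 ** rest)
          if rest > 0 else np.array([[1.0]]))
    Pjk = np.kron(np.eye(2 ** s),
                  np.kron(P0, np.eye(2 ** rest)) + np.kron(P1, Ul))
    Hs = np.kron(np.eye(2 ** s), np.kron(H, np.eye(2 ** rest)))
    return Hs @ Pjk

if __name__ == "__main__":
    n, j, k = 3, 1, 6
    e = np.eye(2 ** n)
    W = measurement_rotation(j, k, n)
    phi_plus = (e[:, j] + e[:, k]) / np.sqrt(2)
    phi_minus = (e[:, j] - e[:, k]) / np.sqrt(2)
    print(np.allclose(W @ phi_plus, e[:, j]))                    # True
    # |phi^-_{jk}> is mapped to a single computational basis state as well,
    # so the Z-basis readout separates the two projections
    print(np.count_nonzero(np.round(W @ phi_minus, 10)) == 1)    # True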
§ CLOUD EXPERIMENTS
To test the performance of our strategy, real experiments were carried out on two quantum computing platforms: superconducting qubits on the IBM Quantum Lab and nuclear spins on SpinQ.
The ibmq-manila chip has a one-dimensional qubit connectivity and a quantum volume of 32, as shown in SFig.(<ref>); Table (<ref>) lists its detailed parameters.
Only qubits 0 and 1 are used, with frequencies of 4.963 GHz and 4.838 GHz and anharmonicities of -0.34335 GHz and -0.34621 GHz, respectively. The CNOT gate controlled by qubit 0 and acting on qubit 1 has an error of 6.437e-3 and a duration of 277.333 ns, while the CNOT gate controlled by qubit 1 and acting on qubit 0 has an error of 6.437e-3 and a duration of 312.889 ns.
Across the entire set of experiments, states of the form
|ψ_1i⟩ = α_i|0⟩^⊗ 2+β_i|1⟩^⊗ 2
|ψ_2i⟩ = (α_i|0⟩+β_i|1⟩)^⊗ 2
are tested, with i=1,...,21 and α_i=cos(θ_i/2), β_i=sin(θ_i/2), θ_i= (i-1) π/20.
The prepared circuits are depicted in SFig.(<ref>).
With the construction method, 7 measurement circuits are generated, which is shown in SFig.(<ref>).
Two sets of experiments were conducted. The first uses our strategy, with the data analyzed both by direct calculation and by SDP; the second uses the standard tomography protocol. The main results, such as the Frobenius distance and the fidelities, are calculated in the manuscript. Here we list the reconstructed density matrix for each experimentally prepared state as supplementary material.
SFig.(<ref>) and SFig.(<ref>) correspond to Eq.(<ref>) and Eq.(<ref>), respectively. Although 21 experiments were conducted, only 11 density matrices are listed, for θ_i ranging from 0 to π.
SFig.(<ref>) (SFig.(<ref>)) is divided into two rows: the first row shows the real parts of the density matrices and the second row the imaginary parts.
The transparent bars are theoretical values, while the solid bars are experimental results.
The SpinQ cloud quantum computer is a liquid-state NMR architecture that uses crotonic acid as its qubit system.
As shown in SFig.(<ref>), 4 carbon nuclei serve as the 4 qubits, and the related parameters are listed in the table.
With externally programmable radio-frequency pulses as control fields, nearly all 4-qubit quantum logic gates can be implemented.
In the table, the diagonal elements are frequencies, while the off-diagonal elements are J-couplings, all measured at room temperature.
To demonstrate our strategy, 30 measurement circuits are generated, as shown in SFig.(<ref>). ℬ_0 is ignored here since, as in the conventional tomography strategy, it is trivial. However, we could not run these circuits directly, since some of the entangling gates are beyond the capability of the current device and the decoherence time does not allow for more quantum gates.
Thus, we had to simulate the circuits in SFig.(<ref>) by decomposing each measurement basis into Pauli operators and summing up the corresponding results, completing the proposal indirectly.
Therefore, for the entire set of experiments, states of the form
|ψ_1i⟩ = α_i|0⟩^⊗ 4+β_i|1⟩^⊗ 4
|ψ_2i⟩ = (α_i|0⟩+β_i|1⟩)^⊗ 4
are tested, with i=1,...,11 and α_i=cos(θ_i/2), β_i=sin(θ_i/2), θ_i= (i-1) π/10.
The prepared circuits are depicted in SFig.(<ref>).
Similarly, the results are presented using both direct calculation and SDP. As a comparison, the standard tomography protocol was also conducted. The main results, the Frobenius distance and the fidelities, are calculated in the manuscript. Here we only list the reconstructed density matrix for each experimentally prepared state.
SFig.(<ref>) and SFig.(<ref>) correspond to Eq.(<ref>) and Eq.(<ref>), respectively. Although 11 experiments were conducted, only 6 density matrices are listed, for θ_i=0, π/5, 2π/5, 3π/5, 4π/5, π.
SFig.(<ref>) (SFig.(<ref>)) is divided into two rows: the first row shows the real parts of the density matrices and the second row the imaginary parts.
The transparent bars are theoretical values, while the solid bars are experimental results. Since the GHZ-like states were prepared through three CNOT gates, which take around 100 ms, decoherence affects these states heavily.
§.§ Table of fidelity for entire experiments
At the end of this section, we list the fidelities for the entire set of experiments in Table (<ref>).
§ ERROR ANALYSIS
In realistic situations, the measurement bases {ℬ_0, ℬ_k, 𝒞_k, 1≤ k≤ d} cannot be perfectly realized. In most cases we can only implement bases that are very close to the ideal ones, which causes slight deviations in the measured probabilities. That is to say, the unknown target ρ may be projected onto an approximate basis state |ϕ̃_i⟩ instead of the exact one, |ϕ_i⟩.
For a given pair of ideal and realized basis operators, the i-th basis states (eigenstates) are related by
|ϕ̃_i⟩=|ϕ_i⟩+ϵ_i|e_i⟩,
where |ϕ_i⟩ is the exact i-th basis state, ϵ_i is a constant amplitude, and |e_i⟩ is a random state distributed according to the Haar measure.
A large number of repeated measurements then produces averaged effects,
∫|e_i⟩ de_i =0, ∫|e_i⟩⟨e_i| de_i =I/d.
Therefore, the probabilities measured on ρ carry the disturbance
Tr(ρ|ϕ̃_i⟩⟨ϕ̃_i|)=1/(1+ϵ_i^2) Tr(ρ|ϕ_i⟩⟨ϕ_i|)+ϵ_i^2/(1+ϵ_i^2)·1/d.
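As a quick numerical sanity check of this averaged disturbance (a sketch with arbitrary test parameters; the agreement holds up to 𝒪(ϵ^4) corrections and sampling noise):

import numpy as np

rng = np.random.default_rng(1)

def haar_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

d, eps, samples = 6, 0.05, 50_000
phi = np.zeros(d, dtype=complex); phi[0] = 1.0            # an exact basis state |phi_i>
rho = np.diag(rng.random(d)); rho = rho / np.trace(rho)   # a test density matrix

acc = 0.0
for _ in range(samples):
    tilde = phi + eps * haar_state(d)                     # |phi_i> + eps |e_i>
    tilde = tilde / np.linalg.norm(tilde)
    acc += (tilde.conj() @ rho @ tilde).real
monte_carlo = acc / samples
predicted = (phi.conj() @ rho @ phi).real / (1 + eps**2) + (eps**2 / (1 + eps**2)) / d
print(monte_carlo, predicted)                             # agree within sampling noise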
As for the procedure to reconstruct a d-dimensional density matrix ρ, the squared Frobenius distance is employed to quantify the performance of the protocols under the above error assumption. Specifically,
||ρ-σ||_2^2=Tr[(ρ-σ)(ρ-σ)^†]=∑_i,jξ_ijξ_ij^⋆,
where σ is the reconstructed matrix and ξ_ij=ρ_ij-σ_ij with i,j ∈[1, d].
Additionally, all ϵ_i are assumed to be of the same magnitude, denoted ϵ.
Therefore, for the diagonal elements ρ_ii, the accumulated deviation satisfies
∑_iξ_iiξ_ii^⋆ = ∑_i|-ϵ^2/1+ϵ^2ρ_ii+ϵ^2/1+ϵ^21/d|^2
≤ 2 ∑_i|ϵ^2/1+ϵ^2ρ_ii|^2+ 2∑_i|ϵ^2/1+ϵ^21/d|^2
∼ 𝒪(ϵ^4)
Here, the numerical prefactors are ignored, as we assumed that ∑_i |ρ_ii|^2 is bounded.
For off-diagonal elements,
ρ_jk = Tr(ρ|k⟩⟨ j|)
= Tr(ρ|ϕ_jk^+⟩⟨ϕ_jk^+|)-i Tr(ρ|ψ_jk^+⟩⟨ψ_jk^+|)-(1-i)/2(ρ_kk+ρ_jj),
where the notations in Eq.(<ref>) are defined in the main text.
By the multivariable error-propagation formula, |ξ_jk| is bounded as
|ξ_jk|^2 ≤ Δ^2 Tr(ρ|ϕ_jk^+⟩⟨ϕ_jk^+|)+ Δ^2 Tr(ρ|ψ_jk^+⟩⟨ψ_jk^+|)+|ξ_kk|^2+|ξ_jj|^2,
and
Δ^2 Tr(ρ|ϕ_jk^+⟩⟨ϕ_jk^+|)≤ 2× (ϵ^2/(1+ϵ^2) Tr(ρ|ϕ_jk^+⟩⟨ϕ_jk^+|))^2+2× (ϵ^2/(1+ϵ^2)·1/d)^2.
Accordingly, an approximate error is evaluated,
∑_j≠ k|ξ_jk|^2 ≤𝒪(ϵ^4){∑_j≠ k Tr(ρ|ϕ_jk^+⟩⟨ϕ_jk^+|)^2+∑_j≠ k Tr(ρ|ψ_jk^+⟩⟨ψ_jk^+|)^2}+𝒪(ϵ^4).
By the normalization condition, ∑_j≠ k Tr(ρ|ϕ_jk^+⟩⟨ϕ_jk^+|)^2∼𝒪(1). In summary,
∑_j≠ k|ξ_jk|^2 ∼𝒪(ϵ^4).
Specifically, under the assumption of random errors of equal strength, the error of the protocol, expressed as a reconstruction distance, is of higher order with respect to the error of the individual measurement devices.
The above analysis assumes that the basis states carry a discrepancy ϵ, but it does not yet account for the size of the device: the more qubits are involved, the less accurate the basis states become, since 𝒪(poly(n)) quantum gates are required to implement a given measurement operator. Under the reasonable assumption that each elementary gate carries a bounded error ε, we have ϵ∼𝒪(poly(n)) ε. Accordingly, the total error introduced by the measurement settings of our protocol scales polynomially with the error of each individual quantum gate and with the size of the system.
|
http://arxiv.org/abs/2409.02559v1 | 20240904092705 | Thermal density functional theory approach to quantum thermodynamics | [
"Antonio Palamara",
"Francesco Plastina",
"Antonello Sindona",
"Irene D'Amico"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall",
"cond-mat.quant-gas",
"cond-mat.stat-mech",
"cond-mat.str-el"
] |
corresponding author: [email protected]
§ ABSTRACT
Understanding the thermodynamic properties of many-body quantum systems and their emergence from microscopic laws is a topic of great significance due to its profound fundamental implications and extensive practical applications.
Recent advances in experimental techniques for controlling and preparing these systems have increased interest in this area, as they have the potential to drive the development of quantum technologies.
In this study, we present a density-functional theory approach to extract detailed information about the statistics of work and the irreversible entropy associated with quantum quenches at finite temperature.
Specifically, we demonstrate that these quantities can be expressed as functionals of thermal and out-of-equilibrium densities, which may serve as fundamental variables for understanding finite-temperature many-body processes. We, then, apply our method to the case of the inhomogeneous Hubbard model, showing that our density functional theory based approach can be usefully employed to unveil the distinctive roles of interaction and external potential on the thermodynamic properties of such a system.
Thermal density functional theory approach to quantum thermodynamics
Irene D'Amico
September 9, 2024
====================================================================
§ INTRODUCTION
Density-functional theory (DFT) <cit.> and its time-dependent (TD) extension <cit.> are powerful and well-established methods for studying the electronic properties of interacting many-body systems at zero temperature, with DFT providing comprehensive access to ground state properties and TDDFT extending this capability to include the prediction of excited states.
Thermal density-functional theory (ThDFT), introduced by Mermin <cit.>, extends the Hohenberg-Kohn (HK) framework of DFT <cit.> to address the electronic properties of many-body systems under conditions where accounting for finite temperature effects is indispensable <cit.>.
Renewed attention in ThDFT has been driven by the study of thermal properties of out-of-equilibrium interacting quantum systems, which represents a major focus in quantum thermodynamics (QT).
This field is rapidly evolving thanks to advances in preparing and coherently controlling quantum systems at the microscopic scale, enabling experimental verification of fundamental properties such as fluctuation theorems <cit.>.
These developments also hold potential for new quantum technologies based on complex quantum systems (see e.g. <cit.>).
A major goal in QT is to understand the role of purely quantum features, such as coherence and correlations, in thermodynamic processes.
This notably includes work processes, where work is extracted from or performed on a quantum system, and the generation of irreversible entropy <cit.>.
In this realm, the emphasis is rapidly shifting towards coupled many-body systems; see, e.g, <cit.>. Indeed, recent studies have demonstrated that particle interactions can enhance the efficiency of quantum heat engines <cit.>.
On the theoretical side, addressing finite-temperature quantum many-body systems poses significant challenges, often requiring approximations to manage their complexity.
In this context, and drawing inspiration from previous works <cit.>, our objective is to establish a robust theoretical framework for applying DFT to the study of non-equilibrium thermodynamics in quenched interacting many-body systems. Our formalism focuses on the canonical ensemble, of particular relevance for QT thermal machines.
Specifically, we demonstrate that the thermal and out-of-equilibrium densities form the basis of an ab initio framework for deriving thermodynamic properties of quantum systems that experience a sudden quench.
A key advantage is that the out-of-equilibrium thermodynamics of interacting many-body systems can be effectively investigated using the Kohn-Sham (KS) approach to DFT <cit.>.
The strength of this approach lies in its ability to evaluate, in principle exactly, the thermal densities by mapping the original interacting many-body system onto a fictitious non-interacting one.
We thus establish a general framework for the DFT approach to QT, specifically for the canonical ensemble, and validate it through the analysis of the quenched inhomogeneous Hubbard model.
Accordingly, we first compare results from exact diagonalization for small systems with those obtained using our finite temperature KS mapping, and then extend our analysis to larger systems.
This extension allows us to clarify how interactions influence thermal densities and, consequently, the work performed.
The reminder of the paper is organized as follows.
In Sec. <ref>, we recall the Mermin-Hohenberg-Kohn (MHK) theorem <cit.>, adapting it to the context of lattice Hamiltonians in closed quantum systems.
In Sec. <ref>, we review the fundamental concepts of out-of-equilibrium QT, showing that for work protocols of infinitesimal duration, the probability distributions of work and irreversible entropy production are completely determined by the finite-temperature equilibrium densities of the pre-quench and post-quench Hamiltonians.
In Sec. <ref>, we recall the Mermin-Kohn-Sham (MKS) equations <cit.>, employing the finite-temperature KS mapping to calculate the thermodynamic quantities of interest.
In Sec. <ref>, we present a method for calculating the thermal densities of a system of indistinguishable particles within the canonical ensemble using the KS mapping.
In Sec. <ref>, we apply our theoretical framework to short Hubbard chains, solving the problem both exactly and via the finite-temperature KS mapping to validate the robustness and accuracy of our DFT-based approach.
We then extend our DFT-based analysis to the Hubbard model with a larger number of sites. Finally, we draw our conclusions in Sec. <ref>.
§ THERMAL {A}-FUNCTIONAL THEORY
In its original formulation, DFT is founded on the HK theorems <cit.>, which were later generalized to finite temperatures by Mermin <cit.>.
These theorems establish a one-to-one correspondence between the external potential v(𝐫) (or v(𝐫) - μ, where μ is the chemical potential), the quantum (equilibrium) state of the system, and the ground state (or thermal) electron density n(𝐫) (or n^β(𝐫)).
More recently, it has been demonstrated that the HK theorems at zero temperature are still valid for quantum systems whose Hamiltonian is defined on a lattice <cit.>, with some limitations <cit.>.
Similarly, it can be shown that this extension also holds true for finite-temperature closed quantum systems.
To demonstrate this, let us consider a closed quantum system governed by a Hamiltonian ℋ̂ of the following form:
ℋ̂[{λ_i}]= ℋ̂_0+ 𝒱̂_ext[{λ_i}]
=ℋ̂_0 + ∑^ℒ_i=1λ_i Â_i,
where ℋ̂ is quite general and can describe spin chains, fermionic, or bosonic systems.
ℋ̂_0 is the `universal' HK Hamiltonian, and 𝒱̂_ext is the external potential, controlled by a set of ℒ parameters {λ_i} that multiply the local operators {Â_i}.
Let us denote the thermal expectation value of Â_i as
a^β_i := Tr{ρ̂^th_β[{λ_i}]Â_i},
where
ρ̂^th_β[{λ_i}]=
e^-βℋ̂[{λ_i}]/𝒵[{λ_i}]
represents the Gibbs state and 𝒵[{λ_i}] = Tr{e^-βℋ̂[{λ_i}]} expresses the canonical partition function.
By definition, each a^β_i is a function of the parameters {λ_i}.
It can be shown that exactly one parameter set {λ_i} corresponds to a given mean value set {a^β_i} (see appendix <ref>).
Since {a^β_i} uniquely determines {λ_i}, which in turn determines ρ̂^th_β, the thermal state is also a unique functional of {a^β_i}.
This relationship highlights the direct connection between observable mean values and the underlying thermal state, emphasizing the role of {a^β_i} as the fundamental descriptors of the system:
{λ_i}⟷{a^β_i}⟷ρ̂^th_β≡ρ̂^th_β[{a^β_i}].
By the one-to-one relations established in Eq. (<ref>), the thermal HK theorem, as demonstrated by Mermin <cit.>, remains valid for closed quantum systems defined by the Hamiltonian (<ref>).
Consequently, the free energy
ℱ[ρ̂]:=Tr{ρ̂(ℋ̂+lnρ̂/β)},
minimized by the equilibrium Gibbs state, can be expressed as a unique function of {λ_i} or, equivalently, as a unique functional of {a^β_i}:
ℱ[{a^β_i}] = Ω[{a^β_i}]+ ∑^ℒ_i=1λ_i a^β_i.
Here, Ω[{a^β_i}] represents the generally unknown `universal' functional <cit.>, associated with the {λ_i}-independent Hamiltonian ℋ̂_0.
Similarly to the zero-temperature case <cit.>, Eq. (<ref>) allows us to interpret the HK theorem as an expression of duality in the sense of a Legendre transform.
This duality relates the thermal expectation values {a^β_i}, which play a role analogous to the thermal electron densities n^β(𝐫), to the set of work parameters {λ_i}, which fulfill a role analogous to the external potentials v(𝐫).
A special case of the Hamiltonian (<ref>) corresponds to the following single-parameter scenario:
ℋ̂[λ]= ℋ̂_0+ λ∑^ℒ_i=1Â_i,
where the HK theorem remains valid.
However, to ensure a unique mapping between λ and
⟨Â⟩≡Tr{ρ̂^th_β[{λ_i}] ∑_i=1^ℒÂ_i},
it is necessary to require that [ℋ̂_0, ∑_i=1^ℒÂ_i] ≠ 0 <cit.>.
Another important tool is offered by the Hellmann-Feynman (HF) theorem <cit.>,
which establishes a relationship between the first derivative of the free energy with respect to the i-th external parameter and the i-th thermal density:
∂ℱ/∂λ_i = Tr{ρ̂^th_β[{λ_i}] ∂ℋ̂/∂λ_i}
=Tr{ρ̂^th_β[{λ_i}]Â_i}=a^β_i.
This equation will be especially useful in the subsequent sections, particularly in the context of thermal sudden quenches, where thermal expectation values serve as fundamental variables for deriving the thermodynamic quantities of interest.
§ THERMAL {A}-FUNCTIONAL THEORY APPROACH TO QUANTUM THERMODYNAMICS
ThDFT can be effectively employed to extract information about the out-of-equilibrium thermodynamics of a closed quantum system by leveraging its ability to handle thermal and quantum fluctuations.
To establish the formalism, we first review some key concepts in QT.
Our focus is on a closed quantum system that has been driven out of equilibrium by a unitary quantum process.
Specifically, we consider a generic closed quantum system characterized by the Hamiltonian (<ref>), which depends on a set of time-dependent work parameters {λ^t_i}.
The system is initially in equilibrium with a bath at inverse temperature β.
In this configuration, at t=0, the set of work parameters {λ^0_i} defines the thermal state,
represented by the Gibbs density operator ρ̂^th_β[{λ^0_i}].
After the system is decoupled from the bath, it undergoes unitary dynamics, from t=0 to t=τ, governed by the time evolution operator 𝒰̂(τ,0).
This evolution is driven by a protocol that changes the set of work parameters from {λ^0_i}, with corresponding mean values {a^β 0_i}, to {λ^f_i}, with corresponding mean values {a^β f_i}, over the finite time interval τ.
§.§ Probability distributions of work and irreversible entropy production
The work performed or extracted during the protocol is not an observable and cannot be represented by a Hermitian operator. Rather, work is a stochastic variable characterized by a probability distribution, which is determined by performing two projective measurements at the initial and final times, respectively <cit.>.
These measurements involve the instantaneous eigenbasis of the system Hamiltonian
denoted {ϵ_n(t), |n(t)⟩}.
The probability distribution of work (PDW) is defined as follows:
P_τ(w)=∑_nmp_n(0)p_n → m(τ)δ(w-w_mn),
where w_mn = ϵ_m(τ) - ϵ_n(0) is the work performed in a single realization, p_n(0) ≡⟨n(0)|ρ̂^th_β|n(0)⟩ is the probability of finding the system in the n-th eigenstate at time t=0, and p_n → m(τ) is the transition probability from the n-th to the m-th eigenstate due to the protocol.
Fluctuations arising from the protocol and the measurements are encoded by p_n(0)p_n → m(τ) and constrained by Jarzynski's equality <cit.>
⟨e^-β w⟩= e^-βΔℱ,
where Δℱ= ℱ[{λ^f_i}]-ℱ[{λ^0_i}] is the free energy difference between the two equilibrium configurations, corresponding to the initial and final Hamiltonian.
By Jensen's inequality, Eq. (<ref>) implies that ⟨w⟩≥Δℱ, which reflects the second law of thermodynamics.
This leads to the definition of the average irreversible work <cit.>
⟨w_irr⟩:= ⟨w⟩-Δℱ,
which is directly related to the average irreversible entropy production <cit.>
⟨𝒮_irr⟩:=β⟨w_irr⟩.
Both ⟨w_irr⟩ and ⟨𝒮_irr⟩ give a measure of the irreversibility introduced by performing the unitary transformation ρ̂(τ)=𝒰̂(τ,0)ρ̂^th_β[{λ^0_i}]𝒰̂^†(τ,0).
Strictly speaking, due to the unitary nature of the time evolution, no von Neumann entropy is generated during this process, with the entropy of the system remaining constant:
𝒮(ρ̂(τ))=-Tr{ρ̂(τ)logρ̂(τ)}≡𝒮(ρ̂^th_β[{λ^0_i}]).
Equation (<ref>) is referred to as a measure of irreversibility because, when the system is returned to the bath after the protocol, it relaxes from the out-of-equilibrium state to the thermal state ρ̂^th_β[{λ^f_i}].
This relaxation is a non-unitary process, and the entropy produced during this process is precisely ⟨𝒮_irr⟩.
We emphasize that, like the average work, ⟨𝒮_irr⟩ is also the first moment of a probability distribution obtained within the two-point measurement framework.
Indeed, it is possible to define a stochastic variable, associated with the production of irreversible entropy, as follows:
s_mn:=β(ϵ_m(τ)-ϵ_n(0))-βΔℱ.
Then, the probability distribution for entropy production (PDE), analogous to the PDW, takes the form
P_τ(s)=∑_nmp_n(0)p_n → m(τ)δ(s-s_mn).
A complementary approach to investigate the statistical properties of work processes and irreversible entropy production is based on the Fourier transforms of the corresponding probability distributions, namely, P_τ(w) from Eq. (<ref>) and P_τ(s) from Eq. (<ref>).
It is therefore convenient to rely on the characteristic function of work <cit.>
χ_ν(w,τ) := ∫ dw e^iν w P_τ(w)
= Tr{ e^iνℋ̂[{λ^f_i}]𝒰̂(τ,0) e^-iνℋ̂[{λ^0_i}]ρ̂^th_β[{λ^0_i}] 𝒰̂^†(τ,0)},
with associated moments
⟨w^n(τ)⟩= (-i)^n∂^n_νχ_ν(w,τ)|_ν=0.
It is further instructive to introduce the characteristic function for irreversible entropy production:
χ_μ(s,τ) := ∫ ds e^iμ s P_τ(s)
= e^-i βμΔℱ Tr{ e^iβμℋ̂[{λ^f_i}]𝒰̂(τ,0) e^-iβμℋ̂[{λ^0_i}]ρ̂^th_β[{λ^0_i}] 𝒰̂^†(τ,0)},
with associated moments
⟨s^n(τ)⟩= (-i)^n∂^n_μχ_μ(s,τ)|_μ=0.
The two quantities expressed in Eqs. (<ref>) and (<ref>) play a crucial role in the development of the thermal density functional framework for specific protocols, as detailed in Sec. <ref> (sudden quench) and appendix <ref> (finite-time protocols).
§.§ The sudden quench protocol
A sudden quench involves an instantaneous shift of the work parameters, from {λ^0_i} to {λ^f_i}.
This variation occurs in an infinitesimally short timeframe, unlike the finite-time protocols covered in appendix <ref>.
It can be demonstrated that all of the moments of the PDW in this scenario are functionals of the initial thermal densities.
This outcome is due to the fact that, in a sudden quench, the time evolution operator approaches the identity operator, as the quench duration becomes infinitesimally small <cit.>: lim_τ→ 0^+𝒰̂(τ,0)=1̂.
Therefore, the characteristic function of work (<ref>) simplifies to an ensemble average over the initial Gibbs state:
χ_ν(w,0^+) = Tr{ e^iνℋ̂[{λ^f_i}] e^-iνℋ̂[{λ^0_i}]ρ̂^th_β[{λ^0_i}]}.
For a Hamiltonian of the form (<ref>), any thermal average over the initial state can be expressed as a functional of the initial mean values {a^β 0_i}, as dictated by the generalized HK theorem outlined in Sec. <ref> and appendix <ref>.
Consequently, we have: χ_ν(w,0^+)=χ_ν(w,0^+)[{a^β 0_i}], which implies that all the moments of P_0^+(w)
are functionals of {a^β 0_i}, with parametric dependence on both {λ^0_i} and {λ^f_i}.
More explicitly, by using Eq. (<ref>), these moments can be expressed as the following thermal averages:
⟨w^n⟩=Tr{(ℋ̂[{λ^f_i}]-ℋ̂[{λ^0_i}])^nρ̂^th_β[{λ^0_i}]},
which are functionals of the initial thermal densities: ⟨w^n⟩=⟨w^n⟩[{a^β 0_i}].
Now, the linearity of the Hamiltonian ℋ[{λ_i}] in the work parameters {λ_i} leads to
ℋ̂[{λ^f_i}]-ℋ̂[{λ^0_i}]=∑_i(λ^f_i-λ^0_i)∂ℋ̂/∂λ^0_i.
Then, using Eqs. (<ref>) and (<ref>),
the average work becomes:
⟨w⟩=∑_i(λ^f_i-λ^0_i)a^β 0_i
Similarly, the characteristic function of the irreversible entropy production (<ref>), for a sudden quench protocol, takes the simplified expression
χ_μ(s,0^+)= e^-i βμΔℱ Tr{ e^iβμℋ̂[{λ^f_i}] e^-iβμℋ̂[{λ^0_i}]ρ̂^th_β[{λ^0_i}]}.
Again, by virtue of the thermal HK theorem, the final and initial free energies in Δℱ are functionals of {a^β f_i} and {a^β 0_i}, respectively.
On the other hand, as seen in Eq. (<ref>), the trace in Eq. (<ref>) is a functional of {a^β 0_i} only.
Consequently, we can assert that P_0^+(s), or χ_μ(s,0^+), and the associated moments ⟨s^n⟩ are functionals of both {a^β f_i} and {a^β 0_i}.
In particular, the average irreversible entropy production takes the form:
⟨𝒮_irr⟩= β∑_i(λ^f_i-λ^0_i)a^β 0_i
-β{ℱ[{a^β f_i}]-ℱ[{a^β 0_i}]}.
We focus on sudden quenches of infinitesimal variation, where the work parameters{λ^0_i}change by an elementary amount to{λ^0_i + δλ_i}.
In this context, we can derive an explicit functional form for the average irreversible entropy production.
In particular, we can expand Eq. (<ref>) in a Taylor series and apply the HF theorem, as expressed by Eq. (<ref>).
This yields:
⟨𝒮_irr⟩= -β/2∑_i,jδλ_iδλ_j∂ a^β 0_i/∂λ^0_j.
We emphasize that Eqs. (<ref>) and (<ref>) demonstrate that the mean values of work and irreversible entropy production are explicit functionals of the initial thermal densities.
As discussed in the following sections, this is particularly important for extracting information about many-body systems using the KS mapping, which enables the computation of thermal electron densities within a formally non-interacting framework.
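To make these functional expressions explicit, the following minimal numerical sketch uses a generic single-particle toy Hamiltonian in a d=4 Hilbert space (not the Hubbard model studied later; all names and parameter values are illustrative): it evaluates ⟨w⟩ from the initial thermal densities and ⟨𝒮_irr⟩ from the free-energy difference, and checks the former against the direct ensemble average.

import numpy as np

rng = np.random.default_rng(0)

def gibbs(H, beta):
    evals, evecs = np.linalg.eigh(H)
    w = np.exp(-beta * (evals - evals.min())); w = w / w.sum()
    return (evecs * w) @ evecs.conj().T

def free_energy(H, beta):
    ev = np.linalg.eigvalsh(H); e0 = ev.min()
    return e0 - np.log(np.sum(np.exp(-beta * (ev - e0)))) / beta

# toy model: H = H0 + sum_i lambda_i A_i, with A_i = |i><i| as local "densities"
L, beta = 4, 1.3
H0 = rng.normal(size=(L, L)); H0 = (H0 + H0.T) / 2
A = [np.diag(np.eye(L)[i]) for i in range(L)]
lam0 = rng.normal(size=L)
lamf = lam0 + rng.normal(scale=0.3, size=L)             # sudden quench of the couplings

Hi = H0 + sum(l * a for l, a in zip(lam0, A))
Hf = H0 + sum(l * a for l, a in zip(lamf, A))
rho0 = gibbs(Hi, beta)
dens0 = np.array([np.trace(rho0 @ a).real for a in A])  # thermal densities a_i^{beta 0}

w_avg = np.dot(lamf - lam0, dens0)                      # <w> as a functional of dens0
w_direct = np.trace((Hf - Hi) @ rho0).real              # direct two-point-measurement mean
S_irr = beta * w_avg - beta * (free_energy(Hf, beta) - free_energy(Hi, beta))
print(np.isclose(w_avg, w_direct), S_irr >= 0)          # True True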
§.§ Fluctuation-dissipation relations in the sudden quench limit
We now recall that classical quasi-adiabatic processes follow the fluctuation-dissipation relation (FDR)
⟨𝒮_irr|=⟩β^2/2σ^2_w,
whereσ^2_w=⟨w^2|-⟩⟨w|^⟩2represents the variance in the PDW <cit.>.
Recently, it has been demonstrated that for slow processes in open quantum systems, close to equilibrium, the FDR is given by Eq. (<ref>) minus a positive, purely quantum term, which arises from the non-commutativity of the thermodynamic protocol <cit.>.
Here, with the aim of obtaining an explicit functional form in terms of initial thermal densities for the second moment of the work probability distribution, we reobtain a similar generalized FDR that holds in the infinitesimal sudden quench regime.
This should not be surprising, as an adiabatic process, i.e., one that is close to equilibrium throughout, can be considered as a sequence of a large number of sudden quenches, each followed by thermalization towards the equilibrium state <cit.>.
To this end, we focus on the second moment of the PDW.
Then, we distinguish the case where the final and initial Hamiltonians share a common eigenbasis, and the case where they do not.
Additional details on the following derivations are provided in appendix <ref>.
In the specific scenario where[ℋ̂[{λ^f_i}],ℋ̂[{λ^0_i}]]=0, the second moment of the PDW is given by:
⟨w^2⟩_c=∑_i,jδλ_iδλ_j a^β 0_ia^β 0_j-1/β∑_i,jδλ_iδλ_j∂ a^β 0_i /∂λ^0_j.
Notably, Eq. (<ref>) expresses an explicit functional of the initial mean values, independently of the amplitude of the sudden quench.
Nonetheless, with an infinitesimal sudden quench, we can utilize the expressions for the average work, Eq. (<ref>), and the average irreversible entropy production, Eq. (<ref>), to rewrite Eq. (<ref>) as:
⟨w^2⟩_c=⟨w⟩^2+2/β^2⟨𝒮_irr⟩,
This relation validates the FDR, in its classical form, as given by Eq. (<ref>),
to the leading order in{δλ_i}.
Turning to the instance where the initial and final Hamiltonians do not commute, and using Eq. (<ref>), the second moment of the PDW is still a functional of the initial equilibrium thermal densities.
Specifically, the latter can be split into the following two parts:
⟨w^2⟩=⟨w^2⟩_c+ Θ_2[{a^β 0_i}],
whereΘ_2[{a^β 0_i}]arises directly from the incompatibility of the two Hamiltonians.
A possible approximation method for this functional is provided in sec <ref>.
Operating again in the infinitesimal sudden quench limit, we can plug Eqs. (<ref>) and (<ref>) into Eq. (<ref>).
By doing so, we recover the generalized FDR,
⟨𝒮_irr⟩=β^2/2σ^2_w -β^2/2Θ_2[{a^β 0_i}],
which takes into account both thermal fluctuations and quantum fluctuations due to[ℋ̂[{λ^f_i}],ℋ̂[{λ^0_i}]]≠0.
§ THERMAL KOHN-SHAM MAPPING FOR QUANTUM THERMODYNAMICS
The results discussed so far in previous sections are formally exact, at least in the limiting conditions of the protocols investigated for the evolution of the coupling parameters.
However, as a many-body system grows in complexity, the number of interactions and possible configurations needed to determine the exact thermal density becomes computationally infeasible.
To address this challenge, the KS scheme <cit.> provides a powerful approach within the framework of DFT for developing efficient approximations.
This method relies on defining a formally non-interacting many-body system, the KS system, which is designed to replicate the same particle density as the original interacting physical system.
For systems governed by the Hamiltonian (<ref>), and building on methods developed in earlier studies <cit.> at zero temperature, the KS approach can be applied as follows.
We assume the existence of a set of auxiliary systems, each described by the Hamiltonian
ℋ̂^ks=ℋ̂^ks_0+ ∑^ℒ_i=1λ^ks_i Â_i,
where the one-body operatorℋ̂^ks_0replaces the complex many-body termℋ̂_0in Eq. (<ref>).
The KS Hamiltonian simplifies the problem by focusing on non-interacting particles in an effective potential associated to specific coupling parameters.
We further assume thatℋ̂^ksyields the same set of thermal densities as the original Hamiltonian:
Tr{ρ̂^th_β[{λ_i}]Â_i}=Tr{ρ̂^th_βks[{λ^ks_i}]Â_i}.
In this setting,ρ̂^th_βandρ̂^th_β, ksdenote the thermal density matrices of the original and KS systems, respectively, both parameterized by inverse temperatureβand coupling parameters{λ_i}and{λ^ks_i}.
The MHK theorem clearly holds for the KS Hamiltonian, though with some restrictions at absolute zero temperature <cit.>.
Consequently, the coupling parametersλ^ks_iare functionals of the thermal averages{a^β_i}, i.e.,λ^ks_i = λ^ks_i[{a^β_i}].
At this point, the following MKS equations can be solved self-consistently for{a^β_i}:
(ℋ̂^ks_0+∑^ℒ_i=1(λ^h-xc_i[{a^β_i}]+λ_i)Â_i)|ϕ^i_β⟩=ϵ^i_β|ϕ^i_β⟩,
a^β_i=Tr{ρ̂^th_βks[{λ^h-xc_i[{a^β_i}]+λ_i}]Â_i}.
Here, the effective parametersλ^h-xc_i[{a^β_i}]=λ^ks_i-λ_iplay the role of the Hartree (H) and exchange-correlation (XC) potentials in the usual KS mapping, which account for the effect of the many-body interaction term inℋ̂_0.
It is worth recalling that while the eigensystem of the non-interacting KS Hamiltonian exactly reproduces the thermal density, it generally does not correspond to the eigensystem of the interacting Hamiltonian <cit.>.
The approach outlined here is particularly useful when the one-body Hamiltonianℋ̂^ks_0has a simple form, such as in the case of a chain of interacting fermions, whereℋ̂^ks_0reduces to a kinetic energy operator.
In these scenarios, as is typically done within the KS framework, suitable approximations can be employed for the functionalsλ^h-xc_i[{a^β_i}].
In other terms, any thermodynamic quantity expressed as an explicit functional of the thermal densities can be evaluated through a KS mapping, with an accuracy dictated by the approximations made for the functionalsλ^h-xc_i[{a^β_i}].
Nonetheless, not all quantities in the MKS equations can be directly expressed as functionals of the densities.
For example, the functional form ofΘ_2[{a^β 0_i}]in Eq. (<ref>) requires reasonable approximations to be determined.
§.§ Local density approximation for Θ_2[{a^β 0_i}]
The local density approximation (LDA) is the simplest and most widely used approach for modeling XC effects in DFT.
For instance, the LDA has been effectively employed to develop functionals for calculating the entanglement in spatially inhomogeneous many-fermion systems <cit.>.
To construct an LDA scheme for an inhomogeneous system, it is necessary to have an analytical solution for the corresponding homogeneous problem, where all coupling parameters are equal, i.e.,λ_i = λ.
In the homogeneous case, the functionalΘ_2[{a^β 0_i}]reduces toΘ^hom_2[a^β 0], where:
a^β 0=1/ℒ Tr{ρ̂^th_β[{λ_i}]∑^ℒ_i=1Â_i}.
Based on this, the following LDA scheme can be put forward:
Θ^lda_2[{a^β 0_i}]=∑_iΘ^hom_2[a^β 0]|_a^β 0→ a^β 0_i.
A crucial aspect of this implementation is that Eq. (<ref>) approximates the fluctuations inΘ_2[{a^β,0_i}]arising from the incompatibility between the pre- and post-quench Hamiltonians, as discussed in Sec. <ref>.
Therefore, it is essential that the homogeneous system satisfies the condition:[ℋ̂_0, ∑_i=1^N Â_i] ≠0.
Otherwise,Θ^hom_2[a^β 0]would be identically zero.
§.§ Approximation for Θ_2[{a^β 0_i}] via perturbation treatment of the KS system
The fictitious KS world is governed by a Hamiltonian that differs from the one describing the actual system under investigation.
However, if the exact form of the functionalλ^h-xc_i[{a^β_i}]is known, the KS framework can accurately reproduce the density of the original interacting system.
Therefore, a well-founded idea is to treat the KS Hamiltonian as a zeroth-order approximation to the `true' Hamiltonian <cit.>.
This is because the KS Hamiltonian encapsulates key aspects of the many-body properties inherent the real system.
In this perturbation-like approach, we can express the original Hamiltonian asℋ̂ = ℋ̂^ks + Δℋ̂, whereΔℋ̂ = ℋ̂ - ℋ̂^ks.
Accordingly, any observable property𝒬[{a^β 0_i}]can be expanded as
𝒬[{a^β 0_i}] = 𝒬^ks[{a^β 0_i}] + Δ𝒬[{a^β 0_i}],
where𝒬^ks[{a^β 0_i}]represents the zeroth-order approximation of the quantity of interest.
This method is particularly advantageous when the LDA scheme is not applicable.
For example, if the underlying homogeneous system satisfies[ℋ̂_0,∑^N_i=1Â_i]= 0, the zeroth-order approximation can effectively address the incompatibility between the initial and final forms of the interacting Hamiltonian.
In such cases,Θ_2[{a^β 0_i}]can be approximated using the corresponding quantity calculated within the KS framework.
§ THE KOHN-SHAM SCHEME IN THE CANONICAL ENSEMBLE
In Sec. <ref>, we introduced a finite-temperature KS mapping, which enables the accurate evaluation of equilibrium thermal densities through iterative solutions of the self-consistent MKS equations, i.e., Eqs.(<ref>) and (<ref>).
However, dealing with statistical systems that have a fixed number of indistinguishable particles, even within the non-interacting KS framework, presents considerable challenges <cit.>, making the solution of the MKS equations computationally demanding.
This complexity is one reason why finite-temperature DFT calculations typically employ the grand canonical ensemble, where the average number of particles is fixed by ⟨N⟩=∑_i f_μ(ϵ_i).
In a many-fermion system, the Fermi-Dirac distributionf_μ(ϵ_i)=(1+e^β(ϵ_i-μ))^-1describes the occupation probabilities of the KS eigenstates, withϵ_irepresenting the KS eigenvalues andμthe chemical potential.
This framework offers the advantage of a straightforward expression for thermal electron densities:n^β(𝐫)=∑_i f_μ(ϵ_i)|ψ^ks_i(𝐫)|^2, based on the KS wavefunctionsψ^ks_i(𝐫)<cit.>.
Here, we present a method to compute thermal densities within the canonical ensemble using the KS mapping while keeping the calculations feasible.
To elucidate our approach, we consider a system ofNinteracting fermions on a lattice, characterized by the Hamiltonian (<ref>), with the specific form
ℋ̂=ℋ̂_0+∑^ℒ_i=1 V_i n̂_i.
The universal part of this Hamiltonian reads
ℋ̂_0=𝒯̂+𝒲̂,
where
𝒯̂=-J
∑_i; σ=↑,↓(ĉ^†_i,σĉ_i+1,σ + h.c.)
is the kinetic term, while𝒲̂accounts for the two-body interaction.
In Eq. (<ref>),ĉ^†_i,σandĉ_i,σdenote the creation and annihilation operators for a fermion with spinσ=↑,↓, andn̂_i=n̂_i,↑+ n̂_i,↓is the total number operator for thei-th site.
Given the form of the external potential, the thermal densities are naturally defined as n^β_i = Tr{ρ̂^th_β[{V_i}]n̂_i}.
According to the MHK theorem, the following correspondence holds:
{V_i}⟷{n^β_i}⟷ρ̂^th_β≡ρ̂^th_β[{n^β_i}].
As remarked in Sec. <ref>, there exists a KS system having exactly the same set of thermal densities as the original interacting system.
The corresponding KS Hamiltonian is
ℋ̂^ks=𝒯̂+∑^ℒ_i=1(V^h-xc_i[{n^β_i}]+V_i)n̂_i.
At this point, we seek a more computationally tractable equation for the thermal densities than Eq. (<ref>).
In particular, we build on previous research <cit.> that examined the canonical partition function forNnon-interacting fermions.
These studies enable us to express the canonical partition function for the KS fermions using the following recursive formula:
𝒵_N(β) = 1/N ∑^N_m=1 (-1)^m-1 𝒵_1(mβ) 𝒵_N-m(β),
where 𝒵_0(β) = 1 and 𝒵_1(mβ) = ∑_i e^-mβϵ_i for m ≥ 1.
Given the structure of the KS Hamiltonian (<ref>), the partition function for a KS system with N_↑ spin-up and N_↓ spin-down fermions becomes 𝒵^ks_N = 𝒵^ks_N_↑ 𝒵^ks_N_↓,
with particle number conservation ensured by N = N_↑ + N_↓.
The corresponding equilibrium free energy is then ℱ^ks = -(1/β) log(𝒵^ks_N),
from which the equilibrium thermal densities can be extracted using the HF theorem as
n_i^β= ∂ℱ^ks/∂ V^ks_i.
The combination of Eq. (<ref>) and Eq. (<ref>) forms the self-consistent foundation of our finite-temperature KS approach, which, in principle, exactly reproduces the thermal densities of the original interacting system.
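As a concrete illustration, the recursion and the Hellmann-Feynman relation above can be evaluated with a few lines of code. The sketch below is our own illustrative implementation, not part of the original scheme: it assumes an open tight-binding chain with hopping J = 1 for the KS Hamiltonian and uses a central finite difference to differentiate the free energy; both choices are arbitrary.

import numpy as np

def canonical_Z(eps, N, beta):
    # Recursive canonical partition function for N non-interacting fermions
    # (one spin species) with single-particle energies eps.
    Z = np.zeros(N + 1)
    Z[0] = 1.0                                                   # Z_0(beta) = 1
    Z1 = lambda m: np.sum(np.exp(-m * beta * np.asarray(eps)))   # Z_1(m*beta)
    for n in range(1, N + 1):
        Z[n] = sum((-1) ** (m - 1) * Z1(m) * Z[n - m] for m in range(1, n + 1)) / n
    return Z[N]

def ks_free_energy(V_ks, N_up, N_dn, beta, J=1.0):
    # KS free energy of an open chain: diagonalize T + V_ks once (the KS
    # Hamiltonian is spin independent) and combine the two spin sectors.
    L = len(V_ks)
    H = np.diag(V_ks) - J * (np.eye(L, k=1) + np.eye(L, k=-1))
    eps = np.linalg.eigvalsh(H)
    return -np.log(canonical_Z(eps, N_up, beta) * canonical_Z(eps, N_dn, beta)) / beta

def thermal_densities(V_ks, N_up, N_dn, beta, dV=1e-5):
    # n_i^beta = dF^ks/dV_i^ks, evaluated with central finite differences.
    n = np.zeros(len(V_ks))
    for i in range(len(V_ks)):
        Vp, Vm = np.array(V_ks, float), np.array(V_ks, float)
        Vp[i] += dV; Vm[i] -= dV
        n[i] = (ks_free_energy(Vp, N_up, N_dn, beta)
                - ks_free_energy(Vm, N_up, N_dn, beta)) / (2 * dV)
    return n

# Half-filled 4-site chain with a linear potential (v_0 = 0.5, beta*J = 5):
print(thermal_densities([0.5, 1/6, -1/6, -0.5], N_up=2, N_dn=2, beta=5.0))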
§ A NOTABLE EXAMPLE
This section is dedicated to validating our finite-temperature KS approach, as defined by Eqs. (<ref>) and (<ref>), within the context of the Hubbard model <cit.>.
Specifically, we examine a scenario where electrons are influenced by an inhomogeneous external potential dependent on the parameter v_0.
The system Hamiltonian is expressed as:
ℋ̂=ℋ̂_0+∑^ℒ_i=1 V_i(v_0) n̂_i,
where V_i(v_0) = f_i v_0, with the dimensionless set {f_i} defining the spatial shape of the external potential along the chain. Here ℋ̂_0 takes the form given in Eq. (<ref>), with the two-body interaction term given by:
𝒲̂=∑^ℒ_i=1 Un̂_i,↑n̂_i,↓.
As described in Sec. <ref>, the system is initially prepared in the Gibbs state ρ̂^th_β[{n^β_i}].
Subsequently, it is decoupled from the thermal bath and undergoes an instantaneous quench in the work parameter, with amplitudes δV_i = f_i δv_0.
We focus on the half-filled Hubbard model, with the total spin along the z direction set to zero.
Our analysis considers two external potentials. One that decreases linearly along the chain,
𝒱̂_ext = ∑^ℒ_i=1 [ v_0 - 2v_0(i-1)/(ℒ-1) ] n̂_i,
and another one with a harmonic dependence,
𝒱̂_ext = ∑^ℒ_i=1 (1/2) v_0 [ i - (ℒ+1)/2 ]^2 n̂_i.
We begin by analyzing the exact results for the Hubbard dimer, as presented in Sec. <ref>. We then compare these results with those obtained from our KS mapping, as detailed in Sec. <ref> and further explored in Sec. <ref>.
Next, we extend our study to longer Hubbard chains probed by the linear potential defined in Eq. (<ref>), as outlined in Sec. <ref> and Sec. <ref>.
In particular, in Sec. <ref>, we compare exact results for systems with up to 8 sites with corresponding ones from our KS mapping.
Finally, in Secs. <ref> and <ref>, we investigate the impact of electron-electron interactions on work extraction in longer chains, considering both linear and harmonic potentials given in Eqs. (<ref>) and (<ref>).
§.§ Exact results for the Hubbard dimer
In the two-particle subspace with total spin zero along the z-axis, the Hamiltonian (<ref>) characterizes a two-site Hubbard chain and is represented by the matrix
ℋ̂ ≐ [ U+2V_1, -J, J, 0; -J, V_1+V_2, 0, -J; J, 0, V_1+V_2, J; 0, -J, J, U+2V_2 ],
in the basis{|↑↓, 0⟩, |↑, ↓⟩, |↓, ↑⟩, |0, ↑↓⟩}.
This straightforward, exactly solvable model exhibits a diverse range of physical phenomena <cit.>, including a precursor to the Mott metal-insulator transition and, influenced by the external potential, a precursor to the ionic insulator transition.
The two transitions are in competition, with the former favoring single-site occupation and the latter promoting double-site occupation.
The metal phase emerges in the narrow region whereU ∼2v_0, driven by the interplay of the interaction term and the external potential.
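These limiting behaviors can be checked directly from the 4×4 matrix above. The following sketch is ours and purely illustrative; the sign convention f_1 = +1, f_2 = -1 for the potential shape and the value of β are assumptions. It builds the dimer Hamiltonian, forms the Gibbs state in the two-particle sector, and evaluates the average work ⟨w⟩ = ∑_i δV_i n_i^β for a sudden quench of amplitude δv_0.

import numpy as np

def dimer_H(U, V1, V2, J=1.0):
    # Basis: |updown, 0>, |up, down>, |down, up>, |0, updown>
    return np.array([[U + 2*V1, -J,        J,        0.0      ],
                     [-J,        V1 + V2,  0.0,     -J        ],
                     [ J,        0.0,      V1 + V2,  J        ],
                     [ 0.0,     -J,        J,        U + 2*V2 ]])

def gibbs_site_densities(H, beta):
    # Thermal (Gibbs) state and site occupations n_1, n_2 in this basis.
    w, v = np.linalg.eigh(H)
    p = np.exp(-beta * (w - w.min())); p /= p.sum()
    rho = (v * p) @ v.T
    n1 = np.diag([2.0, 1.0, 1.0, 0.0])    # particles on site 1 in each basis state
    n2 = np.diag([0.0, 1.0, 1.0, 2.0])
    return np.trace(rho @ n1), np.trace(rho @ n2)

def extracted_work(U, v0, dv0, beta=10.0, f=(1.0, -1.0)):
    # Sudden quench V_i -> V_i + f_i*dv0: <w> = sum_i dV_i n_i^beta, <w>_ex = -<w>.
    n1, n2 = gibbs_site_densities(dimer_H(U, f[0]*v0, f[1]*v0), beta)
    return -(f[0]*dv0*n1 + f[1]*dv0*n2)

for U in (0.1, 2.0, 8.0):   # ionic, crossover, and Mott regimes at v0 = 1
    print(f"U = {U}: <w>_ex = {extracted_work(U, v0=1.0, dv0=0.05):.4f}")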
In Fig. <ref>(a), we examine the average extracted work, ⟨w⟩_ex = -⟨w⟩, as a function of U and v_0, following a sudden quench of amplitude δv = 0.05J.
For U > 2v_0, the system enters the Mott insulating phase, leading to a decrease in the extractable work as the interaction strength U increases.
In this phase, double occupancy of sites becomes energetically unfavorable, rendering work extraction through the external potential quench impractical.
This behavior can be understood by examining the thermal densities in the limit of large U: for U ≫ 2v_0, we have n^β_1 ∼ n^β_2 ∼ 1, which, according to Eq. (<ref>), results in ⟨w⟩ ∼ 0.
Conversely, for U < 2v_0, the system is in the ionic insulating phase, where the extractable work increases as U decreases, reaching its maximum value as U approaches zero.
This is because for U ≪ 2v_0 we find n^β_1 ∼ 2 and n^β_2 ∼ 0, leading to ⟨w⟩ ∼ -2δv_0 as per Eq. (<ref>).
In Fig. <ref>(b), we show the average irreversible entropy production ⟨𝒮_irr⟩ for the same process.
As expected, ⟨𝒮_irr⟩ exhibits a pronounced peak in the metallic region separating the Mott insulating phase from the ionic insulating phase.
This behavior can be understood through the dependence of the average irreversible entropy production on the thermal densities, as described in Eq. (<ref>).
The thermal densities are sensitive to small variations in the work parameter v_0 when U ∼ 2v_0, which is also reflected in the peak observed in the derivatives of the thermal density with respect to v_0, shown in Fig. <ref>(c).
arXiv:2409.03256v1 [cs.CL, cs.AI], 5 September 2024
E2CL: Exploration-based Error Correction Learning for Embodied Agents
Hanlin Wang, Chak Tou Leong, Jian Wang, Wenjie Li
§ ABSTRACT
Language models are exhibiting increasing capability in knowledge utilization and reasoning. However, when applied as agents in embodied environments, they often suffer from misalignment between their intrinsic knowledge and environmental knowledge, leading to infeasible actions. Traditional environment alignment methods, such as supervised learning on expert trajectories and reinforcement learning, face limitations in covering environmental knowledge and achieving efficient convergence, respectively. Inspired by human learning, we propose Exploration-based Error Correction Learning (E^2CL), a novel framework that leverages exploration-induced errors and environmental feedback to enhance environment alignment for LM-based agents. E^2CL incorporates teacher-guided and teacher-free exploration to gather environmental feedback and correct erroneous actions. The agent learns to provide feedback and self-correct, thereby enhancing its adaptability to target environments. Evaluations in the Virtualhome environment demonstrate that E^2CL-trained agents outperform those trained by baseline methods and exhibit superior self-correction capabilities.
§ INTRODUCTION
Language Models (LMs) are becoming increasingly capable of knowledge utilization and reasoning across various knowledge-intensive tasks <cit.>. This success motivates researchers to apply LMs to build LM-based agents in embodied environments, which similarly requires the use of reasoning and planning upon environmental knowledge <cit.>.
In this case, LM-based agents are asked to plan appropriate actions based on the given environmental information and the history of actions already taken.
However, the knowledge acquired by these LM-based agents comes from general-purpose corpora during pre-training, and as a result the intrinsic knowledge of these models often misaligns with environmental knowledge.
Such environmental knowledge involves physical constraints that LMs have not yet explored.
For example, if the embodied agent is already holding two objects, it cannot grab a third.
This misalignment causes LM-based agents to frequently infer actions that cannot be executed in the environment, hindering their application in real-world environments.
To address this issue, two primary types of environment alignment methods have been explored.
The first type involves having LM-based agents undergo supervised learning on expert trajectories, which are human-labeled sequences of observations and actions <cit.>.
Nevertheless, these trajectories often fail to fully cover the knowledge within the environment, such as scenarios where certain actions cannot be executed.
The second type is based on reinforcement learning, which allows agents to freely explore the environment, collect trajectories that comprehensively cover the environment's knowledge, and obtain rewards based on these trajectories' success or failure <cit.>.
However, the rewards are sparsely obtained because the performance evaluation of the agent is based on a complete trajectory. This makes the learning process difficult to converge.
Human learning is not comprehensive or efficient if it relies solely on imitating experts' behavior or merely knowing whether an action is correct. Instead, by collecting and understanding feedback from the environment via exploration and learning to correct errors based on the feedback, humans can learn comprehensively and efficiently.
Inspired by this, we propose a novel exploration framework for LM-based agents to align with environments, which is called Exploration-based Error Correction Learning (E^2CL).
As depicted in <Ref>, our framework incorporates exploration-induced errors and environmental feedback, leading to a comprehensive alignment with target environments.
During the process of E^2CL, we ask a pretrained agent to perform predefined tasks and explore the environment to collect experiences in both efficient and comprehensive manners. This is achieved by two different proposed schemes, namely teacher-guided exploration and teacher-free exploration.
The former prompts the agent to perform one-step exploration given sliced expert trajectories, whereas the latter allows the agent to continue exploring until it infers a stop.
In these two exploration phases, we collect the feedback given by the environment when the agent makes errors, as well as the correct actions corresponding to these error actions.
Having these exploration trajectories with additional correction, we train the agent to provide feedback on their trajectories and correct their error actions based on the feedback.
To apply learned self-correction ability, we further propose Speculative Inference, which performs corrections if the initial planned actions are inferred to be errors by the agent's feedback.
We evaluate the agent trained by E^2CL in Virtualhome (), a household embodied environment. E^2CL-trained agent surpasses the agents trained by other baseline methods in all agentic metrics, demonstrating its effectiveness.
Furthermore, our analysis reveals that the small models constructed using our method outperform larger models of the same series that have only undergone behavior cloning. In addition, in evaluations based on feedback-driven re-planning, our models demonstrate self-correction capabilities that are comparable to LLMs.
In summary, our main contributions are as follows. (1) We introduce the Exploration-based Error Correction Learning (E^2CL) framework, enabling LM-based agents to align with environments through effective feedback-driven exploration and correction. (2) We propose two novel exploration schemes, teacher-guided and teacher-free exploration, that facilitate the collection of correction and feedback via agent-environment interaction. (3) We introduce a novel action inference algorithm, namely speculative inference, which can prevent executable errors from occurring. (4) We demonstrate the superior performance of E^2CL-trained agents in the Virtualhome environment, surpassing baseline methods and showcasing the potential of our approach for real-world deployment.
§ METHOD
In this section, we introduce our framework, E^2CL. This framework equips LM-based agents with self-feedback and self-correction capabilities, which enhances their ability to tackle tasks in new environments.
§.§ Task Formulation
The LM-based embodied agent is asked to complete a set of tasks via interacting with a virtual environment.
The interaction between the agent and the environment can be formalized as a partially observable Markov decision process (POMDP) (𝒬, 𝒮, 𝒜, 𝒪, 𝒯, ℛ) with instruction space 𝒬, state space 𝒮, action space 𝒜, observation space 𝒪, transition function 𝒯: 𝒮×𝒜→𝒮, and reward function ℛ:𝒮×𝒜→[0,1].
In our LM-based agent scenario, 𝒬, 𝒜, 𝒪 are subsets of language space.
The interaction process between the agent and the environment is described as follows. Given a planning instruction q_p∈𝒬 that prompts the agent to plan for a task, the agent with parameter θ generates the first action a_1∼π_θ(·|q_p) ∈𝒜 according to its policy π_θ. Each action at step t induces a transformation in the latent state space s_t ∈𝒮.
And the agent would face a new observation o_t∈𝒪. Then the agent would incorporate task instruction q_p and interaction trajectories j_t = (a_1,o_1,…,a_t,o_t) to generate the next action a_t+1∼π_θ(·|q_p,j_t). The interaction loop repeats until the agent assumes the task is finished or the number of steps exceeds the maximum steps.
§.§ Exploration-based Error Correction Learning
Our E^2CL framework consists of three phases of learning and exploration within the environment: the pre-tuning phase, the exploration phase, and the training phase. In the pre-tuning phase, we equip the agent with basic planning ability before exploration. Then, in the exploration phase, the agent collects exploration experience in the environment via two complementary schemes, as shown in <Ref>. Following this, in the training phase, the agent is trained to align with the environmental knowledge from the collected experience and to develop the ability to provide feedback and correct its own errors.
Pre-tuning Phase
To serve as the foundation for environmental exploration, we aim to empower LM-based embodied agents with a basic planning capability. Given a dataset 𝒥 = {(q^i_p, j^i_n_i)}_i=1^|𝒥| with |𝒥| task instructions and expert trajectories, where each trajectory has n_i steps, we first construct a planning dataset D_p by slicing each trajectory into sub-trajectories of varying lengths from 1 to n_i. Formally, the planning dataset D_p is defined as:
D_p = ⋃_i=1^|𝒥| ⋃_t=1^n_i { (q^i_p, j^i_t) | j^i_t ⊆ j^i_n_i, (q^i_p, j^i_n_i) ∈ 𝒥 }.
Notably, we sample a subset of D_p, denoted as D_p', for pre-tuning to avoid overfitting to expert trajectories and maintain exploration diversity.
Then, we fine-tune the LM-based agent by minimizing neg-likelihood loss:
ℒ(θ) = 𝔼_(q_p, j_t) ∼ 𝒟_p' [ -log π_θ(a_t | q_p, j_t-1) ].
Notably, a_t consists of multiple tokens. Therefore, when calculating the loss, it effectively becomes an auto-regressive loss over a sequence of tokens, following previous practices. This approach is consistently applied in the latter stages of training as well.
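For concreteness, the construction of D_p and of the pre-tuning subset D_p' can be sketched as follows (illustrative Python, not the authors' released code; the trajectory representation is an assumption, while the subset size of 1000 samples matches the value reported later in the experimental details).

import random

def build_planning_dataset(expert_data):
    # expert_data: list of (task_instruction q_p, trajectory) pairs, where a
    # trajectory is a list of (action a_t, observation o_t) steps.
    # Each trajectory of length n_i yields n_i samples: the context j_{t-1}
    # plus the expert action a_t used as the supervision target.
    D_p = []
    for q_p, traj in expert_data:
        for t in range(1, len(traj) + 1):
            context = traj[:t - 1]            # j_{t-1}
            target_action = traj[t - 1][0]    # a_t
            D_p.append((q_p, context, target_action))
    return D_p

def sample_pretuning_subset(D_p, k=1000, seed=0):
    # D_p' subset used only in the pre-tuning phase, to avoid overfitting to
    # expert trajectories and to maintain exploration diversity.
    random.seed(seed)
    return random.sample(D_p, min(k, len(D_p)))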
Exploration Phase
Intuitively, to gather diverse experiences that fully cover environmental knowledge, we can simply let the agent freely execute its predicted plans and collect the trajectories.
However, when utilizing these trajectories, we need to correct the errors made by the agent.
Since these trajectories are newly generated, we do not have the correct action data corresponding to the errors. Although one can use a more powerful LLM to correct these errors automatically, the quality of the generated data is inevitably lower compared to expert data.
To balance data diversity and quality, we propose a limited exploration scheme guided by expert trajectories, referred to as teacher-guided exploration (TGE).
Correspondingly, we call the aforementioned free exploration scheme teacher-free exploration (TFE).
These two schemes complement each other and enhance the diversity and quality of the collected experiences.
Specifically, for each expert's sub-trajectory (q_p, j_t)∈ D_p, the agent conducts TGE by executing the action â_t ∼π_θ(·|q_p, j_t-1).
The environment then provides feedback f_t indicating the executability of this action.
Since the agent only performs one step of exploration under the guidance of the expert, we can naturally use a_t as the ground truth action for that step.
After traversing all the expert trajectories, we obtain the feedback dataset D_f^TGE consisting of samples in the form of (q_f, j_t-1, â_t, f_t), and the correction dataset D_c^TGE consisting of samples in the form of (q_c, j_t-1, â_t, f_t, a_t) where â_t≠ a_t, q_f is the instruction that prompts the model to generate feedback, and q_c is the instruction that prompts the model to correct errors. Please refer to <Ref> for templates of these samples.
During TFE, the agent iterates through each task instruction q_p ∼ 𝒬 and accordingly obtains trajectories j_t = (â_1, ô_1, …, â_t, ô_t). Similar to TGE, whenever the agent predicts a non-executable â_t, the environment provides feedback f_t indicating why this action is non-executable. To obtain the executable action at this step without manual intervention, we leverage an LLM with powerful reasoning ability (e.g., GPT-4o) to automatically correct the action, yielding a_t. Considering the LLM may not always provide perfect corrections due to a lack of environment alignment, we further filter its predictions to ensure the corrections are executable. As a result, we obtain the feedback dataset D_f^TFE and the correction dataset D_c^TFE, each in the same form as the samples in D_f^TGE and D_c^TGE, respectively.
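To make the exploration schemes concrete, a simplified sketch of the teacher-guided branch is given below (illustrative Python; agent.plan and env.execute are hypothetical interfaces rather than the VirtualHome API). The teacher-free branch differs only in that the correction label comes from an external LLM, filtered for executability, instead of the expert action.

def teacher_guided_exploration(agent, env, D_p, q_f, q_c):
    # Collect feedback samples (D_f) and correction samples (D_c) by letting
    # the agent predict one step on top of each expert sub-trajectory.
    D_f, D_c = [], []
    for q_p, context, a_t in D_p:                    # expert context j_{t-1} and ground truth a_t
        a_hat = agent.plan(q_p, context)             # agent's one-step prediction
        executable, f_t = env.execute(context, a_hat)  # environmental execution feedback
        if a_hat != a_t and not executable:
            D_c.append((q_c, context, a_hat, f_t, a_t))  # expert action is the free correction label
            D_f.append((q_f, context, a_hat, f_t))
        elif a_hat == a_t:
            D_f.append((q_f, context, a_t, "True"))      # action was executable and correct
    return D_f, D_c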
Training Phase
After the above two phases, we obtain the planning dataset D_p, the feedback dataset D_f=D_f^TGE⋃ D_f^TFE, and the correction dataset D_c=D_c^TGE⋃ D_c^TFE.
Next, we train the agent to align with the environmental knowledge gathered from these datasets and to develop the ability to provide feedback and correct its own errors.
This is achieved by fine-tuning the agent to minimize the following losses:
ℒ_p(θ) = 𝔼_(q_p, j_t) ∼ 𝒟_p [ -log π_θ(a_t | q_p, j_t-1) ],
ℒ_f(θ) = 𝔼_(q_f, j_t-1, â_t, f_t) ∼ 𝒟_f [ -log π_θ(f_t | q_f, j_t-1, â_t) ],
ℒ_c(θ) = 𝔼_(q_c, j_t-1, â_t, f_t, a_t) ∼ 𝒟_c [ -log π_θ(a_t | q_c, j_t-1, â_t, f_t) ],
ℒ_total(θ) = ℒ_p(θ) + ℒ_f(θ) + ℒ_c(θ).
We refer the reader to the pseudo-code of the overall E^2CL process in Appendix <ref>.
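In practice, the three objectives can be combined in a single fine-tuning step. The sketch below is illustrative only and assumes a HuggingFace-style sequence-to-sequence interface (e.g., the flan-t5 family used later in the experiments); each batch holds (prompt, target) text pairs built from the planning, feedback, and correction templates, and the interface details are assumptions.

def training_step(agent, tokenizer, batch_p, batch_f, batch_c, optimizer):
    # One gradient step on L_total = L_p + L_f + L_c.
    def nll(prompts, targets):
        enc = tokenizer(prompts, return_tensors="pt", padding=True)
        labels = tokenizer(targets, return_tensors="pt", padding=True).input_ids
        labels[labels == tokenizer.pad_token_id] = -100   # ignore padding in the loss
        return agent(**enc, labels=labels).loss           # autoregressive NLL over target tokens

    loss = nll(*batch_p) + nll(*batch_f) + nll(*batch_c)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()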
§.§ Speculative Inference
To utilize the abilities learned in the training phase, we propose speculative inference, a process in which the agent predicts errors before execution and corrects itself. This process reduces execution errors and generates correct actions based on self-generated feedback.
To be more precise, given each test task instruction q_p, the agent initially predicts an action â_t ∼ π_θ(·|q_p, j_t-1). However, this action â_t is not executed immediately. The agent first `reflects' on it and generates environment feedback f̂_t ∼ π_θ(·|q_f, j_t-1, â_t). If the agent believes the initial action â_t is executable, the action is executed. Otherwise, the embodied agent corrects this action â_t and predicts a new action â_c ∼ π_θ(·|q_c, j_t-1, â_t, f̂_t). Once the corrected action â_c passes its own check, it is executed at this step and appended to the trajectory ĵ_t-1. The above process is iterated until the agent assumes the task is finished or the total number of steps exceeds the maximum threshold.
The process of speculative inference for each test task is shown in <Ref>.
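The loop can be summarized by the following sketch (illustrative only; agent.plan, agent.feedback, and agent.correct stand for the three learned prompting modes, and passes_check is a placeholder parser for the self-generated feedback; the exact interfaces are assumptions).

def passes_check(feedback_text):
    # Placeholder parser: feedback samples for executable actions use the target "True".
    return feedback_text.strip().lower().startswith("true")

def speculative_inference(agent, env, q_p, q_f, q_c, max_steps=20):
    trajectory = []                                           # j_{t-1}
    for _ in range(max_steps):
        a_hat = agent.plan(q_p, trajectory)                   # initial candidate action
        if agent.assumes_finished(a_hat):
            break
        f_hat = agent.feedback(q_f, trajectory, a_hat)        # self-generated feedback
        if not passes_check(f_hat):                           # predicted to be non-executable
            a_corr = agent.correct(q_c, trajectory, a_hat, f_hat)
            if passes_check(agent.feedback(q_f, trajectory, a_corr)):
                a_hat = a_corr                                # corrected action passes its own check
        obs = env.execute(a_hat)                              # only now is the action executed
        trajectory.append((a_hat, obs))
    return trajectory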
§ EXPERIMENTS
§.§ Experimental Settings
VirtualHome Environment & Tasks
We evaluate our method in the VirtualHome Environment <cit.>, a simulation platform for performing typical household activities.
During the experiment, we are able to access data, such as states of objects, observation of agents, and environmental feedback, from the simulator.
We refer to Appendix <ref> for more details of the environment.
We use the predefined tasks from ActivityPrograms Knowledge Base <cit.> for the experiment. It contains 292 unique high-level household tasks, with 1374 unique action plans and 6201 unique environmental settings in total extracted from VirtualHome.
After filtering low-quality tasks, we conduct experiments on a total of 285 tasks. They are randomly divided into a training set of 235 tasks and a test set of 50 tasks.
We select 50 tasks from the training set as seen tasks, while the 50 tasks in the test set as unseen tasks.
We evaluate the method on both seen tasks and unseen tasks.
Baselines We compare our method with both prompting-based methods and other tuning-based baseline methods. Similar to our approach, tuning-based methods achieve alignment between the embodied agent and the environment via model fine-tuning.
(1) Language-planner <cit.> injects environment knowledge into the prompt and prompts large language models to output actions. We utilize GPT-4o as the foundational model for this baseline.
(2) We perform Behavior Cloning (BC) on expert planning data <cit.>, which is the same method used in the pre-tuning phase of our methods and other baselines.
(3) LWM <cit.> employs a robot agent to interact with the environment and collect a large amount of environmental knowledge data to fine-tune the model.
(4) Plasma <cit.> leverages ChatGPT to generate multi-task planning-related data for model training.
(5) Lema <cit.> enhances the agent's reasoning capabilities by providing error-correction data pairs during model fine-tuning.
(6) NAT <cit.> implements a negative-aware training approach, enabling LM-based agents to effectively learn from both positive and negative examples.
Evaluation Metrics
Following previous studies <cit.>, we evaluate our action plans across three metrics: executability (Exec.), affordance rate (AR), and longest common sequence (LCS).
Executability measures whether an action plan can be correctly parsed and satisfies the common-sense constraints of the environment. Specifically, the parsed plan must contain only allowable actions, and the objects must be present in the environment. Moreover, each action must satisfy its pre-conditions (e.g., the embodied agent cannot send an email before walking to the computer) and post-conditions (e.g., the state of the TV changes from closed to open after the agent opens it). Similar to executability, the affordance rate measures the average percentage of plan steps that are executable, even in cases where the entire plan is not executable. However, executability and affordance rate can only reflect whether the agent complies with the physical constraints of the environment; they cannot reflect whether the plan is correct. LCS calculates the length of the longest common subsequence between generated plans and the ground-truth plans, normalized by the maximum length of the two.
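For reference, the LCS metric reduces to the standard dynamic program over the two action sequences; a short illustrative sketch is given below.

def normalized_lcs(plan, gt_plan):
    # Longest common subsequence between two action plans (lists of action
    # strings), normalized by the length of the longer plan.
    m, n = len(plan), len(gt_plan)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if plan[i - 1] == gt_plan[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n] / max(m, n) if max(m, n) > 0 else 1.0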
§.§ Results
As shown in <Ref>, our method outperforms both prompting-based and tuning-based baseline methods across multiple metrics, showing the superiority of our method.
Notably, the prompting-based method significantly lags behind all tuning-based methods in different metrics.
Although this contradicts the expectation that LLMs exhibit exceptional general reasoning capabilities, we observe that the actions generated by prompting-based methods, while seemingly reasonable, often fail to comply with the physical constraints of the environment.
Regarding tuning-based baseline methods, our method demonstrates significant improvements over BC in both seen and unseen tasks.
Moreover, LWM and Plasma, which are also fed by expert planning data and can be seen as augmented versions of BC, only show a marginal increase in performance.
Compared to these BC-based methods, the method utilizing failure data, i.e., Lema and NAT, demonstrates better performance. Taking a step further, we evolve this idea by training the agent to develop self-feedback and self-correction capabilities through its failure experiences. The results show that our method increases executability-related metrics by up to 15% and LCS by up to 10% compared with Lema and NAT. This demonstrates that these two capabilities effectively enable the agent to align with the environment for task-solving.
§.§ Ablation Study on Training Data
In this section, we explore the impact of the collected training data, i.e., feedback data D_f and correction data D_c, on overall performance by ablating them in the training. During the inference phase, we employ speculative inference for all settings to ensure consistency.
As shown in <Ref>, we observe that D_f and D_c are each beneficial for the agent on their own, but lag behind their combination.
We hypothesize that the improvement observed when training with D_c is primarily due to the enhanced self-correction capability of the agent. However, the limited ability to generate high-quality action feedback hampers the effectiveness of self-correction during speculative inference, as demonstrated in Section <ref>.
Compared to the agent trained without both D_f and D_c, training with D_p and D_f improves performance by teaching the agent to predict environmental feedback, which explicitly aligns it with environmental knowledge. However, the weak self-correction capability of the agent prevents it from generating executable and correct actions in speculative inference, as demonstrated in <Ref>.
In our method, we integrate both types of data, enabling our agent to generate higher-quality action feedback and exhibit stronger self-correction abilities. This results in a substantial performance boost compared to other ablation settings.
§.§ Analysis on Different Size of the Model
To investigate the impact of model size on performance, we train models of different sizes on seen tasks using both BC and our method, and evaluated them on unseen tasks.
Our results are in line with common experience, where larger models perform relatively better across all aspects, indicating that model scale significantly impacts performance.
Moreover, it can also be observed that our method outperforms BC in both Affordance rate and LCS across models with different parameter sizes, which demonstrates that our method consistently provides superior performance regardless of model size.
Notably, when using our method, smaller models achieve performance surpassing larger models using BC across all metrics.
This finding suggests that our method is able to release the potential of small language models and lays the foundation for building agents that work on edge devices in the future.
§.§ Evaluation on Self-Correction Ability
We further evaluate the self-correction capability of our constructed agent.
We conduct two different experiment settings to validate the performance of the agent. For seen tasks, we randomly select 100 samples from correction data. For unseen tasks, we collect 100 correction data samples in a similar process to TGE.
For comparison, we also evaluate the prompting-based agent and the agent trained by BC. Since these two agents have both undergone general instruction tuning, we instruct them to conduct self-correction off the shelf.
As shown in <Ref>, our method generates correct corrected actions far more frequently than BC and prompting-based methods in both seen tasks and unseen tasks, which demonstrates our agent's strong self-correction capability.
The powerful self-correction capability reflects our agent can truly align with the environment and generate correct corrective actions that do not violate physical constraints.
Furthermore, we can observe from <Ref> that our agent generates correct actions at a high proportion in both seen and unseen tasks. This ensures a reliable self-correction process in speculative inference.
§.§ Analysis on Speculative Inference
To analyze the contribution of speculative inference to overall performance, as well as to explore the quality and effectiveness of self-generated feedback, we conduct an analysis on speculative inference.
Firstly, as shown in <Ref>, we conduct three kinds of experiment settings and test their performance on unseen tasks. Employing speculative inference significantly improves the agent's executability and affordance rate. This shows that speculative inference effectively reduces errors during execution, which demonstrates the effectiveness of the design. Moreover, LCS has not changed regardless of using speculative inference. This indicates that speculative inference contributes to the performance gain mainly by generating more executable actions, instead of recovering the expert trajectories in the training data.
Next, we provide our agent with self-generated feedback as well as three other types of feedback, and test its performance on unseen tasks.
As shown in <Ref>, given random feedback to the agent, the agent performs worst in both affordance rate and LCS, which underscores the importance of high-quality feedback.
When fed with self-generated feedback, the agent performs better than that of using random feedback and boolean executability signals, while slightly worse than that of using ground truth. This suggests that our method enables the agents to generate feedback with good quality.
Overall, we can observe that feedback with better qualities yields a better performance, which demonstrates that the speculative inference process faithfully relies on high-quality feedback.
§.§ Error Analysis
We also perform an error analysis to identify the aspects where the agent constructed using our method outperforms BC. There are a total of eight types of errors, which can be further classified into grounding errors (object availability) and execution-related errors (others). The detailed demonstration can be found in the Appendix <ref>.
As shown in Figure <ref>, we observe that all error types decreased by more than 24%, with the Over-occupied Agent error showing the highest reduction rate of 94.4%. This highlights the effectiveness of our method in reducing various types of errors and underscores its comprehensiveness.
For the two most frequent types of execution-related errors, unflipped boolean state and agent proximity, our method achieves a reduction in error count by over 37% compared to BC, thereby demonstrating its effectiveness.
Although our method primarily aims to avoid execution errors related to physical constraint and does not specifically target grounding errors such as object availability, the fact that it still reduces this type of error demonstrates the generalizability of our method.
§ RELATED WORK
LM-based Agent
Nowadays, owing to the increasingly powerful generalization capabilities of language models, they are often regarded as the policy function of agents to plan their behavior <cit.>. However, one issue is that there may be a misalignment between the knowledge in the environment and the internal knowledge of the model. Consequently, a significant amount of work aims to ground the language model to the environment <cit.>. Some studies harness the immense capabilities of large language models and employ intricate prompts or integrate specifically designed modules <cit.>. However, LLM-based agents incur heavy costs and are not suitable for offline scenarios. Another line of work deploys language models as decision-making agents that align with embodied environments via reinforcement learning <cit.>. This type of approach tends to have low learning efficiency in embodied environments with large action spaces. In addition, similar to our approach, other research efforts have proposed frameworks where the agent first explores the environment and subsequently utilizes the exploration experience for learning <cit.>. These approaches often focus excessively on the agent and lack comprehensive modeling of environmental feedback, making it difficult to avoid execution errors.
Learning from Failure
After exploration, the agent will have encountered failures in its past experience, which can be treated as negative samples. The topic of learning from negative samples has increasingly gained attention as an alternative to learning solely from positive samples. Traditionally, some studies aim to decrease the probability of negative samples while increasing the probability of positive samples in order to achieve better performance <cit.>. Additionally, some works construct correction datasets and tune language models on these data <cit.>. Other efforts leverage the comprehension abilities of language models to widen the gap between positive and negative samples <cit.>. In our work, we similarly leverage the inherent understanding capabilities of language models and enhance the embodied agent's learning from environmental feedback on exploration errors, as well as its ability to self-correct.
§ CONCLUSION
In this work, we aim to align the embodied agent with the environment to enhance its task-solving performance.
Firstly, we present E^2CL, a novel framework that leverages exploration-induced errors and environmental feedback to enhance environment alignment for LM-based agents during teacher-guided and teacher-free exploration.
Furthermore, we introduce speculative inference, a process in which the agent utilizes learned abilities for self-feedback and self-correction to reduce execution errors. Extensive experiments show that our method outperforms behavior cloning and other baseline methods.
§ LIMITATIONS
The baseline model for the robot agent constructed using our method is a text-based model, meaning the agent's observations are input in textual form. However, there is a gap between textual descriptions of real-world visual images and the actual visual information, which cannot fully encapsulate all real-world details. This discrepancy affects the robot agent's ability to ground itself in the environment. In future work, we aim to incorporate visual information directly into the input to better align with real-world scenarios. Additionally, although VirtualHome <cit.> is a relatively complex environment, we have not conducted experimental validation in other embodied environments or the real world. In the future, we will perform more experiments for validation.
§ ETHICAL CONSIDERATIONS
This work aims to construct a robot agent within Virtual Environment.
The virtual environment setup and related data strictly follow the specifications of VirtualHome <cit.>.
We refer to VirtualHome v2.3.0[<https://github.com/xavierpuigf/virtualhome/tree/master>] to carry out our experiments (MIT license[<https://github.com/xavierpuigf/virtualhome/blob/master/LICENSE>]).
The models, i.e. flan-t5-small, flan-t5-base and flan-t5-large <cit.>, we use for fine-tuning are all open-source, and we will strictly follow the protocols for the academic use of these language models (Apache License 2.0[<https://huggingface.co/google/flan-t5-base>]).
In addition, we partially use AI assistants, such as Copilot and ChatGPT, to help with our coding and writing.
§ APPENDIX
§ DATA TEMPLATE
We show the data template of planning data (D_p), feedback data (D_f) and correction data (D_c). Each piece of data adheres to the specified format.
§ ILLUSTRATION OF ERROR TYPE
During the interaction between the agent and the environment, we collect error feedback from the environment and classify it into the following eight categories.
Unflipped Boolean State error occurs when an action meant to change the state of an object with a Boolean attribute (such as open/closed or on/off) does not achieve the intended effect, like attempting to open an already open door. Missing Object error arises when the agent is not holding the necessary object to complete an action, preventing the task's execution. Enclosed Object error involves the target object being contained within a closed structure, with the action failing to free the object for use. Invalid Action error occurs when the agent attempts to perform an action on a target object that is not afforded to it, such as trying to pull a ceiling. Over-occupied Agent error happens when the agent's hands are occupied or already interacting with objects, leaving it unable to interact with the target object in the current step. Agent Proximity errors arise when the agent is not close enough to the target object to perform the action. Object availability errors occur when the agent attempts to interact with an object that does not exist in the environment. The remaining errors are categorized under Others.
§ LENGTH ANALYSIS
Intuitively, tasks with a greater number of steps are generally more challenging for the agent. To evaluate the performance of the agent on tasks of varying difficulty, we collected and analyzed the executability of tasks with different generated plan lengths for both our method and BC. As shown in <ref>, in terms of execution rates for different generated plan lengths, our method outperforms BC, particularly on tasks with longer plans. This indicates the broad efficacy of our method.
§ MORE DETAILS OF VIRTUALHOME ENVIRONMENT
VirtualHome provides diverse and customizable household environments that support a wide array of possible interactions in the form of atomic action steps. There are three kinds of action templates depending on the action type: "[Action]", "[Action] <Object> <id>", and "[Action] <Object> <id> <Object> <id>". Each [Action] refers to one of the 42 atomic actions supported in VirtualHome. The full list of atomic actions is shown in <Ref>.
In each scene, there are around 350 objects with which the embodied agent can interact, and each object is referred to by a specific <id>. Each object has properties (e.g., drinkable, eatable) corresponding to its action affordances. Some objects also have a semantic state such as heated, washed, or used.
The ActivityPrograms Knowledge Base <cit.> contains 292 unique high-level household tasks, with 1374 unique action plans and 6201 unique environments in total extracted from VirtualHome, with the task and action plan samples manually annotated by Amazon Mechanical Turk workers. Each data sample consists of a high-level task, the description of the task, and the complete action program, which can be directly executed in the VirtualHome environment. A sample is shown in <Ref>.
§ FURTHER EXPERIMENT DETAILS
In our work, we primarily fine-tuned three models of different sizes: flan-t5-small with 77 million parameters, flan-t5-base with 248 million parameters, and flan-t5-large with 783 million parameters <cit.>.
All experiments were conducted on eight NVIDIA RTX A6000 GPUs. During the pre-tuning phase, we selected 1000 samples from the expert planning data D_p and trained for one epoch. During the training process, we set the following hyperparameters: a batch size of 30, training for three epochs, and selecting the best-performing checkpoints from these epochs. The learning rate was set to 1e-4. During the inference process, all generation parameters were kept consistent with the default generation parameters of the flan-t5 series models. All experiments can be reproduced in about one day.
Algorithm: Exploration-based Error Correction Learning (E^2CL)
Input: 𝒟_p: expert planning data; π_θ: initial robot agent policy; T_1: number of epochs in the pre-tuning phase; VS: VirtualHome simulator; M: number of tasks; n_j: the step length of task j; ℳ: GPT-4o; T_2: number of epochs in the training phase.
Output: final policy π_θ.

// Pre-tuning phase
Randomly select a small planning training subset 𝒟_few ⊆ 𝒟_p
for i = 1 to T_1 do
    Optimize θ on the BC objective: ℒ(θ) = 𝔼_(q_p, j_t) ∼ 𝒟_few [ -log π_θ(a_t | q_p, j_t-1) ]

// Teacher-guided exploration
for j = 1 to M do
    for k = 1 to n_j do
        Predict the action â_k ∼ π_θ(q_p, j_k-1)
        Execute â_k in VS; obtain the new observation o_k and environmental execution feedback f_k
        if (â_k ≠ a_k) and â_k is non-executable then
            Add correction data sample (q_c, j_k-1, â_k, f_k, a_k) to D_c
            Add feedback data sample (q_f, j_k-1, â_k, f_k) to D_f
            Execute a_k in VS and obtain the new observation o_k
        else if â_k = a_k then
            Add feedback data sample (q_f, j_k-1, a_k, True) to D_f

// Teacher-free exploration
for j = 1 to M do
    while the agent assumes the task is not finished do
        Predict the action â_k ∼ π_θ(q_p, j_k-1)
        Execute â_k in VS; obtain the new observation o_k and environmental execution feedback f_k
        if â_k is non-executable then
            Obtain the corrected action a_c ∼ ℳ(q_c, j_k-1, â_k, f_k)
            Add correction data sample (q_c, j_k-1, â_k, f_k, a_c) to D_c
            Add feedback data sample (q_f, j_k-1, â_k, f_k) to D_f

// Training phase
for i = 1 to T_2 do
    Optimize θ on the autoregressive objective: ℒ_SFT(π_θ) = 𝔼_𝒟_p[ -log π_θ(a_t | q_p, j_t-1) ] + 𝔼_𝒟_f[ -log π_θ(f_t | q_f, j_t-1, â_t) ] + 𝔼_𝒟_c[ -log π_θ(a_t | q_c, j_t-1, â_t, f_t) ]

return π_θ
§ PSEUDOCODE
This section presents the pseudocode of E^2CL in Algorithm 2. A detailed discussion of the method is given in <Ref>.
arXiv:2409.03143v1 [cs.GR, eess.IV, physics.optics], 5 September 2024
Large Étendue 3D Holographic Display with Content-adaptive Dynamic Fourier Modulation
Brian Chao (Stanford University, Stanford, CA, USA; [email protected])
Manu Gopakumar (Stanford University, Stanford, CA, USA; [email protected])
Suyeon Choi (Stanford University, Stanford, CA, USA; [email protected])
Jonghyun Kim (NVIDIA, Santa Clara, CA, USA; [email protected])
Liang Shi (Massachusetts Institute of Technology, Cambridge, MA, USA; [email protected])
Gordon Wetzstein (Stanford University, Stanford, CA, USA; [email protected])
§ ABSTRACT
Emerging holographic display technology offers unique capabilities for next-generation virtual reality systems. Current holographic near-eye displays, however, only support a small étendue, which results in a direct tradeoff between achievable field of view and eyebox size. Étendue expansion has recently been explored, but existing approaches are either fundamentally limited in the image quality that can be achieved or they require extremely high-speed spatial light modulators. We describe a new étendue expansion approach that combines multiple coherent sources with content-adaptive amplitude modulation of the hologram spectrum in the Fourier plane. To generate time-multiplexed phase and amplitude patterns for our spatial light modulators, we devise a pupil-aware gradient-descent-based computer-generated holography algorithm that is supervised by a large-baseline target light field. Compared with relevant baseline approaches, ours demonstrates significant improvements in image quality and étendue in simulation and with an experimental holographic display prototype.
CCS Concepts: Hardware, Emerging technologies; Computing methodologies, Graphics systems and interfaces.
Teaser figure: Conventional holographic displays use a single laser source that provides a limited étendue, here visualized by a recorded spectrum that only covers a small area of the Fourier plane (b). Single-source holograms therefore only support a limited eyebox size, which means that an image can be observed only when the user's pupil (b, red) is well aligned with the eyebox (c). The image quickly degrades and fades into black as the pupil (b, blue) shifts even a small amount (d). Using multi-source illumination (a), our holographic display creates a significantly expanded coverage of addressable spatial frequencies (e) which, combined with our content-adaptive Fourier modulation strategy, achieves a large étendue with better image quality across an expanded eyebox (f,g).
Large Étendue 3D Holographic Display with Content-adaptive Dynamic Fourier Modulation
Received: 20 June 2024 / Accepted: 05 July 2024
=====================================================================================
§ INTRODUCTION
Holographic near-eye displays offer unique benefits to virtual and augmented reality (VR/AR) applications. For example, holographic displays can present perceptually realistic 3D images with natural parallax to the user in lightweight device form factors <cit.>. Yet, the étendue of holographic displays is fundamentally limited by the pixel count of the underlying spatial light modulators (SLMs), preventing current holographic near-eye displays from achieving a large field of view and eyebox simultaneously. This limitation is a fundamental barrier towards making this a practical display technology.
Increasing the pixel count of an SLM seems like the natural solution. However, developing large-area phase-only SLMs with pixel pitches matching the small feature sizes (i.e., tens of nanometers) of analog holographic films <cit.> is simply not feasible with today's hardware solutions. To overcome this problem, étendue expansion techniques have been described in the literature, including those based on static, high-resolution masks <cit.>, pupil replication <cit.>, steered or multi-source illumination <cit.>, and making use of higher-diffraction orders and pupil optimization <cit.>. However, each of these approaches has its limitations, as mask-based systems do not have sufficient degrees of freedom to achieve a high image quality, pupil replication approaches cannot create natural 3D effects and parallax over the eyebox, and steered sources are hindered by the requirement for high-speed SLMs as well as high diffraction orders (HDOs) that fundamentally limit the image quality. As a result, none of these solutions is able to achieve high-quality 3D holography with a large étendue.
Our work is motivated by the hypothesis that a holographic display requires sufficient degrees of freedom to achieve a large field of view and eyebox simultaneously. In the absence of an extremely high-resolution SLM, this is only achievable using steered or multi-source illumination. We thus build on the latter approach but address its major shortcomings, HDOs and symmetric illumination copies, by introducing a dynamic, programmable amplitude modulation mechanism in the Fourier plane, after the SLM. This unique optical setup allows us to extend steered / multi-source configurations such that they modulate the frequency spectrum of the display image in a content-adaptive manner. For this purpose, we leverage a stochastic optimization approach that factors a target light field into a set of time-multiplexed phase SLM and corresponding Fourier amplitude masks that are displayed in rapid succession while being integrated by the user's eye.
Using the proposed system, we demonstrate improved 3D image quality over a large étendue, surpassing the performances of existing approaches in both simulation and experiment. Specifically, our contributions include
* A novel optical holographic display configuration that combines a time-multiplexed phase SLM near the image plane and a dynamic amplitude SLM that controls the frequency spectrum.
* A computer-generated holography framework that uses stochastic optimization to factor a target light field into a set of phase–amplitude image pairs.
* Demonstration of improved 3D image quality among high-étendue holographic displays.
Our method should be clearly distinguished from Multisource Holography, a system recently proposed by <cit.> for speckle reduction that also leverages a multi-source laser array. In Multisource Holography, the multi-source laser and two phase-only SLMs placed in close proximity are used to remove speckles, but the system étendue remains limited since the spacing between each source is relatively small. In our system, we place the laser sources much farther apart to create a high-étendue backlight for the phase-only SLM to greatly increase the eyebox size and place an amplitude display at the Fourier plane.
§ RELATED WORK
Holographic Near-eye Displays. Holographic displays are a promising technology for virtual and augmented reality applications due to their unique capability to display true 3D content and significant progress has been made recently <cit.>. In particular, the advancement in computer graphics, machine learning, and computing infrastructures have enabled real-time hologram rendering based on neural networks <cit.>, significantly improved image quality with end-to-end optimization <cit.>, higher light efficiency and brightness with simultaneous control of multiple wavelengths and energy-efficiency loss function <cit.>, and thin form factors in eyeglasses-like design <cit.>. Despite offering these unique capabilities, current holographic displays fail to provide a comfortable immersive experience as they cannot simultaneously provide a wide field of view (FoV) and a sufficiently large eyebox (i.e., the region in which a user's eye perceives the displayed content).
In a given display system, the product of the FoV and the eyebox is a constant, referred to as the étendue. For a holographic display, the étendue is directly proportional to the number of pixels in the SLM. A 1080p SLM, for instance, can either support a wide field of view (e.g., 80 degrees) with an eyebox smaller than 1 mm or vice versa.
However, increasing the SLM resolution to the point where both large field of view and eyebox can be achieved simultaneously faces significant challenges in manufacturing, cost, and addressing speed, accuracy, and bandwidth. Instead, efforts have been made to increase the étendue of holographic displays without increasing SLM resolution.
The approaches fall under two categories: (i) increasing étendue after the SLM and (ii) increasing étendue before the SLM.
Post-SLM Étendue Expansion
The most representative method in this category is mask-based étendue expansion <cit.>, where a static mask at a resolution higher than the SLM is placed after the SLM to increase the diffraction angles of the SLM-modulated wavefront, thus increasing the étendue. However, such systems suffer from difficulties in alignment and from reduced image quality and low contrast because their effective degrees of freedom <cit.> are insufficient to synthesize a high-quality large-étendue wavefront. Pupil replication <cit.> is another popular approach. It is implemented either by putting a pupil-replicating waveguide after the SLM to replicate pupil locations at its out-coupler or using a pupil-replicating holographic optical element (HOE) as the eyepiece, effectively expanding the eyebox of the system. However, pupil-replicating displays cannot display 3D content or natural parallax across the expanded eyebox since the content within the eyebox are merely copies of the same wavefront. Higher-diffraction orders combined with pupil optimization can also be leveraged to slightly expand the eyebox in the single-source case <cit.>. Finally, a regular eyepiece can be replaced by a lens array to partition an unexpanded eyebox into an array of smaller chunks that cover an expanded area <cit.>. However, this comes at an explicit cost of image quality and brightness nonuniformity, especially when observed with a small pupil.
Pre-SLM Étendue Expansion
Methods in this category modify the laser illumination to expand étendue either through a multi-source configuration or beam-steering. <cit.> used a micro-electromechanical-system (MEMS) mirror to temporally change the laser illumination and steer the resulting pupils over a larger eye box. <cit.> implemented the same principle by arranging individual laser diodes into a 2D array and sequentially turning each one on to create temporal directional illumination. <cit.> implemented per-pixel beam steering of the phase SLM by using transmissive LCD panels and polarization gratings and demonstrated that the étendue expansion amount scales exponentially with the number of LCD layers.
To permanently expand the étendue, <cit.> activated all illumination sources simultaneously. They introduced a random mask at the Fourier plane to break the correlation among copies in the spectrum formed by directional illuminations. This effectively eliminates duplicate images within the expanded eyebox. However, they did not demonstrate view-dependent effects across the expanded eyebox and the random mask is not content adaptive, resulting in reduced 3D realism and low image quality. Instead of using multiple laser diodes that are incoherent with each other, <cit.> implemented a mutually coherent multi-laser source using a lens array. The mutually coherent sources can interfere constructively and destructively with each other, granting the hologram optimization process more degrees of freedom. However, their system requires eye tracking and a new hologram needs to be optimized for each dynamic pupil location, making the system challenging for real-time applications.
Fourier Modulation.
Holographic display systems often require Fourier plane filtering to remove HDOs created by the pixelated structure of the phase SLM <cit.>. However, it is not straightforward to apply this to a multi-source or beam steering setting since the directional illuminations create shifted copies of the wavefront from normally incident illumination and the associated HDOs in the Fourier domain. When using beam-steering, the filter position needs to be dynamically adjusted to block the HDOs of the shifted wavefront. To achieve this, <cit.> placed a programmable polarization shutter at the Fourier plane and synchronized the laser sources to filter out the HDOs. However, when using multi-source illumination for étendue expansion, HDOs associated with one illumination intermingle with the shifted wavefront of another illumination at the Fourier domain, making it impossible to separate. It is therefore crucial to model HDOs to precisely characterize how they contribute to different angular views. Explicit modeling of HDOs has been demonstrated for 2D images <cit.> and 3D focal stacks <cit.>, but not for 4D light fields under multi-source illumination.
Inspired by <cit.>, we employ multisource laser illumination and additionally place a dynamic amplitude SLM at the Fourier plane to enable content-adaptive modulation and time multiplexing. We jointly optimize the patterns for the phase and amplitude SLMs to reproduce a 4D light field rendered over an expanded eyebox. We also explicitly model the HDOs and demonstrate notable improvement in image quality and contrast. Collectively, this new hardware and software co-design enables a dynamic view-dependent holographic display with a large eyebox.
§ METHOD
In this section, we first review the conventional single-source holographic image formation model before introducing the multi-source image formation model of our system.
§.§ Single-Source Holographic Image Formation Model
For on-axis Fresnel holography, a collimated beam from a laser source illuminates an SLM with a normally incident, coherent field u_src. The SLM imparts a spatially varying phase delay ϕ to the field, which propagates a distance z along the optical axis. The wavefront at this plane can be mathematically described using the angular spectrum method (ASM) <cit.> as a function of the phase pattern and distance from the SLM:
f(ϕ, z) = ℱ^-1{ ℱ{ e^iϕ(x, y) u_src(x, y) } · ℋ(f_x, f_y; z) },
ℋ(f_x, f_y; z) =
e^i (2π/λ) z √(1 - (λ f_x)^2 - (λ f_y)^2) if √(f_x^2 + f_y^2) < 1/λ,
0 otherwise.
Here, λ is the wavelength of light, x, y are the spatial coordinates on the SLM, f_x, f_y are the frequency coordinates,
and ℋ is the transfer function of the ASM.
The operator f models free-space propagation between the parallel SLM and target planes separated by a distance z. For notational convenience, we omit the dependence of the fields on x and y. The intensity generated by a holographic display at a distance z in front of the SLM is therefore |f(ϕ, z)|^2. If a high-speed SLM is available, a time-multiplexed variant of the image formation is ∑_t=1^T |f(ϕ^(t), z)|^2 / T, where T phase SLM patterns ϕ^(t), t = 1, …, T are rapidly displayed in sequence, and the resulting intensities are averaged by the user's eye <cit.>.
§.§ Multi-Source Holographic Image Formation Model with Fourier Modulation
To extend the single-source image formation model to our system, we modify the formulation to incorporate off-axis collimated illumination traveling in direction
𝐤 = (k_x, k_y, k_z) and a programmable amplitude mask 𝒜 at the Fourier plane of the holographic display system. This results in the model
f^(j)(ϕ, 𝒜, z) = ℱ^-1{ U^(j)(f_x, f_y; ϕ) · ℋ(f_x, f_y; z) · 𝒜(f_x, f_y) },
U^(j)(f_x, f_y; ϕ) = ℱ{ e^iϕ(x, y) u^(j)_src(x, y) e^i 𝐤^(j)·𝐱 },
where j is the index of the source,
and u^(j)_src(x, y) is the complex-valued field modeling any deviations in amplitude and phase of source j = 1, …, J from a perfect plane wave e^i 𝐤^(j)·𝐱, 𝐱 = (x, y, z). Moreover, u^(j)_src(x, y) can optionally also include per-source, time-dependent modulation, such as switching individual lasers on and off. In our setup, we do not consider this case and assume that all sources are turned on at all times. Please refer to the supplemental material for more discussion about the generalized configuration with amplitude-controllable laser sources u^(j)_src(x, y).
§.§ Stochastic Optimization of Light Field Holograms
To reconstruct a light field, we use gradient descent to optimize a set of time-multiplexed phase patterns ϕ^(t) and corresponding Fourier masks 𝒜^(t) by minimizing the following objective:
minimize_{ϕ^(t), 𝒜^(t)}‖ s √(1/T∑_t=1^T∑_j=1^J |H2LF( f^(j)(ϕ^(t), 𝒜^(t), z) )|^2) - lf_target‖
where s is a scale factor, lf_target is the amplitude of the target light field, and H2LF is a hologram-to-light field transformation, such as the Short-Time Fourier Transform (STFT) <cit.>.
The memory consumption of the above optimization problem is huge due to time multiplexing, multiple sources, and the explicit modeling of HDOs. Therefore, it is impractical to realize H2LF using the STFT since it reconstructs a whole light field and the memory consumption would explode for a dense lf_target. To solve this problem, we devise a stochastic version of Eq. <ref> that allows us to optimize a single light-field view rather than a full light field in each iteration of the optimization routine.
For this purpose, we randomly choose a view p of the target light field lf^(p)_target in each iteration and run a gradient descent step of Eq. <ref>. A binary pupil mask ℳ^(p) in the Fourier plane of the hologram-to-light field transform is applied to reconstruct one specific view as
H2LF^(p)( f^(j)(ϕ^(t), 𝒜^(t), z) ) = f^(j)(ϕ^(t), 𝒜^(t)·ℳ^(p), z),
ℳ^(p)(f_x, f_y) =
1, if (f_x - f_x^p)^2 + (f_y - f_y^p)^2 ≤ r_p^2,
0, otherwise
where ℳ^(p) is a binary pupil mask in the Fourier plane, r_p is the radius of the pupil, and (f_x^p, f_y^p) are the coordinates of the center of the pupil. This procedure is similar to the pupil-supervision techniques described in <cit.>. Please refer to the supplemental material for more details on our stochastic light field optimization procedure.
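One possible realization of this stochastic, pupil-supervised optimization loop is sketched below, reusing propagate_multisource and multisource_intensity from the previous sketch. The optimizer settings, pupil_centers, and target_views are placeholders of our own, and the quantization and HDO modeling described in the implementation details are omitted for brevity.

import random

def pupil_mask(FX, FY, cx, cy, r_p):
    # Binary disc M^(p) of radius r_p centred at (cx, cy) on the Fourier-plane grid.
    return ((FX - cx) ** 2 + (FY - cy) ** 2 <= r_p ** 2).float()

def optimize_light_field(phis, amps, u_srcs, tilts, H, FX, FY,
                         pupil_centers, r_p, target_views, s=1.0, iters=2000, lr=0.02):
    # phis / amps: lists of T learnable phase and Fourier-amplitude patterns (requires_grad=True).
    opt = torch.optim.Adam(list(phis) + list(amps), lr=lr)
    for _ in range(iters):
        p = random.randrange(len(pupil_centers))                  # pick one light-field view
        Mp = pupil_mask(FX, FY, *pupil_centers[p], r_p)
        intensity = 0.0
        for phi, amp in zip(phis, amps):                          # time-multiplexed frames
            intensity = intensity + multisource_intensity(phi, u_srcs, tilts, amp * Mp, H)
        recon = torch.sqrt(intensity / len(phis))                 # amplitude of the averaged intensity
        loss = torch.nn.functional.mse_loss(s * recon, target_views[p])
        opt.zero_grad(); loss.backward(); opt.step()
    return phis, amps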
§.§ Implementation Details
Since we are using a highly-quantized 4-bit phase SLM, the quantization of pixel values needs to be taken into consideration. Such quantization constraints can be enforced using techniques described in prior work <cit.>. Higher diffraction orders (HDOs) are modeled using the wave propagation model described in <cit.>. We use PyTorch to implement all our algorithms and run optimization.
In all our experiments, the radius r_p of the pupils is set to 2 mm, resulting in a 4 mm diameter pupil. 81 pupils are equally spaced in the Fourier plane (eyebox plane), where each pupil corresponds to a single view in a 9 × 9 light field. The illumination directions of the multisource laser are set such that they match the diffraction angle of the ± 1^st higher diffraction orders of the blue wavelength. This allows the blue spectrum copies to be perfectly tiled in the Fourier plane, while removing the gaps between the red and green wavelength spectrum copies. Please see the supplemental materials for a detailed discussion on choosing the appropriate illumination angles.
§ ANALYSIS
§.§ Optical System Analysis
The étendue G of a display is defined as the product of the display area with the solid angle of emitted light:
G = 4Asin^2θ,
where A is the display area and 2θ is the solid angle of the emission cone of each display pixel. Étendue is conserved through reflections, refractions, and free space propagation in an optical system. When illuminated with normally incident light of wavelength λ, the diffraction angle θ_SLM of an SLM with pixel pitch p can be expressed as θ_SLM = ±sin^-1(λ/2p).
For an SLM of physical size L_x × L_y, its étendue G_SLM can be expressed as:
G_SLM = 4L_x L_ysin^2θ_SLM = λ^2 N_x N_y
where N_x × N_y is the pixel resolution of the SLM. This means that the étendue of a holographic display is directly proportional to the number of pixels of the SLM.
In a Fresnel holography display system, the 1D field-of-view (FoV) and eyebox size w can be expressed as follows:
FoV = 2tan^-1(L/2f ) = 2tan^-1(Np/2f ), w = fλ/p,
where L, N are the size of the SLM and the number of SLM pixels in the x or y axis, respectively, and f is the eyepiece focal length. Under paraxial assumptions (θ≈sinθ≈tanθ), we see that the product of the 2D FoV and eyebox of a holographic display system is exactly the étendue of the system:
FoV_x ·FoV_y · w^2 =
2tan^-1(N_xp/2f ) · 2tan^-1(N_yp/2f ) · f^2λ^2/p^2≈λ^2 N_x N_y = G_SLM
This implies that there is an inherent tradeoff between the FoV and the eyebox of a holographic display.
When the SLM is illuminated with a grid of α×α off-axis, directional illuminations, the system eyebox is expanded due to shifted copies of the original spectrum. Specifically, if the directional illumination is selected such that the illumination direction matches the higher-order diffraction angles, the system 1D eyebox is exactly expanded by α while the FoV remains the same, resulting in an expanded 1D eyebox size of w = α fλ/p. Therefore, the 2D étendue of the system is expanded by a factor of α^2.
We show how the FoV and eyebox size relate to the required number of sources in Fig. <ref>. We assume an SLM pixel pitch of 10.8 μm, a resolution of 1000 × 1000, and a laser wavelength of 632.8 nm. The FoV and eyebox size move along each white line in opposite directions as we vary the eyepiece focal length while the system étendue remains fixed. As we increase the number of sources, the étendue of the system also increases, as the white lines move farther toward the upper-right of the plot.
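For concreteness, these quantities can be evaluated directly from the expressions above. The short script below is our own illustration; the 50 mm eyepiece focal length is an assumed value, not a parameter of the prototype.

import math

wavelength = 632.8e-9          # laser wavelength (m)
p = 10.8e-6                    # SLM pixel pitch (m)
Nx = Ny = 1000                 # SLM resolution
f = 50e-3                      # assumed eyepiece focal length (m); not a value from the paper
alpha = 3                      # 3 x 3 grid of sources

theta_slm = math.asin(wavelength / (2 * p))              # half diffraction angle of the SLM
G_slm = wavelength ** 2 * Nx * Ny                        # etendue (m^2 sr)
fov_1d = 2 * math.degrees(math.atan(Nx * p / (2 * f)))   # 1D field of view (degrees)
w_single = f * wavelength / p                            # single-source 1D eyebox (m)
w_multi = alpha * w_single                               # eyebox with the 3 x 3 source grid (m)

print(f"theta_SLM = {math.degrees(theta_slm):.2f} deg, G_SLM = {G_slm:.2e} m^2 sr")
print(f"FoV = {fov_1d:.2f} deg, eyebox = {1e3 * w_single:.2f} mm -> {1e3 * w_multi:.2f} mm")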
§.§ Baseline Configurations
We next discuss a number of holographic display system configurations that serve as baselines to our proposed design shown in Fig. <ref>. Illustrations of these baselines are shown in Fig. <ref>.
I. Single Source with Fourier Filter. The conventional holographic display setup with a single laser source and a Fourier filter to block HDOs, including <cit.>. Such systems suffer from small étendue and non-uniform brightness across the eyebox.
II. Single Source with Phase Mask. A high-resolution phase mask is placed in front of the SLM to increase the diffraction angle of the SLM and therefore increase the étendue of the system. The phase masks can be random <cit.> or optimized <cit.>. These approaches have been shown to expand the étendue at the cost of decreased image quality and contrast.
III. Multiple Sources. Multiple mutually incoherent lasers illuminate the SLM from different angles simultaneously. Due to the absence of a Fourier filter, the frequency spectrum contains multiple shifted, potentially overlapping copies of the same hologram. These constraints limit this system's capability to perfectly reconstruct a light field.
IV. Multiple Sources with Fixed Random Fourier Mask. Multiple lasers illuminate the SLM from different angles simultaneously while a fixed random mask is placed at the Fourier plane to break the correlation between the image copies, as demonstrated by Jo et al. <cit.>. Time multiplexing and content-adaptive filtering are not feasible since the random masks are custom-printed and fixed.
V. Multiple Sources with Dynamic Fourier Filter (ours). Multiple lasers illuminate the SLM from different angles simultaneously while an amplitude SLM is placed at the Fourier plane. The amplitude SLM can be dynamically refreshed and is synchronized with the phase SLM, allowing for time-multiplexed and content-adaptive Fourier modulation. We additionally compare with a generalized configuration V^* where the amplitudes of the laser sources are controllable rather than fixed. More discussion of this generalized configuration can be found in the supplement.
VI. Steered Illumination. Multiple individually controllable sources or a single, swept source illuminate the SLM from different angles without Fourier filtering, including <cit.>. Reduced image quality due to HDOs remains an issue due to the lack of filtering. Furthermore, such methods only apply to very high-speed SLMs, because each source is sequentially turned on or steered in sequence.
VII. Steered Illumination with Shifting Fourier Filter. Multiple individually controllable sources or a single, swept source illuminate the SLM from different angles while a synchronizable, dynamic filter is placed at the Fourier plane to filter out the HDOs. One example is the steered illumination system described in <cit.>. This method is still sequential in nature, as each source is turned on one at a time, and requires a very high-speed SLM.
§.§ Assessment
Table <ref> and Figure <ref> show the light field reconstruction performance of different baseline configurations in simulation. For configurations with multiple sources (III, IV, V, V*, VI, and VII), we consider a 3×3 grid of sources. We simulate an 800 × 1280 phase SLM for single-SLM configurations (I, II, III, VI, VII) and an additional 20 × 20 Fourier display for configurations IV, V, and V^*. We run our optimization algorithm on all configurations to reconstruct a 9 × 9 light field. Additionally, we optimize the single-source configuration to reconstruct a smaller, 3 × 3 light field in Table <ref>. Time multiplexing is not used for configurations I, II, III, and IV. Please refer to the supplemental material for additional discussions on the optimization parameters, degrees of freedom of the system, and ablation studies on the resolution of the Fourier display.
The naive single-source configuration (I) is able to reconstruct a small 3 × 3 light field, but fails to reconstruct a larger-baseline 9 × 9 light field and cannot support uniform brightness across the expanded eyebox (Fig. <ref>, b–d). Mask-based étendue expansion techniques (II) reconstruct low-contrast and speckly images. By using multiple sources (III), the eyebox is expanded but the light field reconstruction quality is poor due to copies created by multiple sources and HDOs. Introducing a fixed random mask at the Fourier plane (IV) improves image quality, although the improvement is limited due to the lack of time multiplexing and content-adaptive Fourier mask optimization. Steered illumination options (VI, VII) achieve decent image quality; however, both configurations reconstruct speckly images due to HDOs and can only be implemented using high-speed SLMs due to the large number of required time-multiplexed frames.
Our methods (V, V^*) achieve the best image reconstruction quality when using one frame and outperform the steered illumination baseline (VII) while using fewer frames (6 vs. 9). This is achieved through time multiplexing and our novel content-adaptive Fourier modulation optimization framework. Our method successfully removes the copies created by multiple sources and HDOs, reconstructing clean, speckle-free light field views. More importantly, a minimal increase in degrees of freedom in the Fourier plane (a low-resolution 20 × 20 Fourier display) is sufficient to achieve good image quality, and we perform extensive experiments to validate this claim in the supplemental material. Finally, although our generalized configuration V^* with amplitude-controllable sources achieves the best quantitative image quality, the improvement is marginal (<0.5 dB in terms of PSNR) and comes at the cost of much higher system complexity. Hence, we opted for configuration V for our hardware implementation.
§ EXPERIMENTAL RESULTS
Hardware Implementation. We implement the proposed 3D holographic display design and evaluate our algorithms on the system. The hardware setup and the optical path are shown in Fig. <ref>. We implement our multi-source laser by cascading multiple 1:4 fiber splitters (Thorlabs TWQ560HA) and arranging 9 customized fiber tip outputs into a 3×3 array, which is then held together using a custom 3D-printed mount. The spacing between each source is 8.17 mm and a 200 mm lens is used to collimate the multi-source laser. Each collimated source field is, therefore, incident on the phase SLM at a 2.34^∘ incidence angle. We use a TI DLP6750Q1EVM phase SLM for phase modulation and a 1080p SiliconMicroDisplay liquid crystal on silicon (LCoS) display for Fourier amplitude modulation. A 75 mm Fourier transform lens is used to image the spectrum of the phase-modulated wavefront onto the amplitude SLM. Our final design has a diagonal field-of-view (FoV) of 7.78 degrees and an eyebox size of 8.53 mm × 8.53 mm. A FLIR Grasshopper 2.3 MP color USB3 vision sensor paired with a Canon EF 50mm f/1.4 USM camera lens is used to capture all experimental results. Please refer to the supplemental material for additional details on the degrees of freedom of our system and the relevant optimization parameters.
Experimental Capture Details
To capture light field views, we place pupil masks at the Fourier plane to mimic the movement of the user's eyes, which is a technique used in prior works <cit.>. We implement this with a Thorlabs SM1D12 adjustable iris on a translation stage at the Fourier plane. To capture focal stacks, we center the pupil at the Fourier plane and adjust the camera focus to capture images at different depths.
Assessment
Experimentally captured results are shown in Figs. <ref>, <ref>, Table <ref>, and in the supplemental material. The PSNR and SSIM values are averaged across all captured light field views. We observe the same trends as predicted by our simulations both quantitatively and qualitatively: the single-source configuration only supports a limited eyebox and suffers from severe brightness falloff at peripheral viewpoints; 3D multi-source holography without a Fourier filter cannot achieve a high image quality due to the copies created by the multiple sources; a static random mask placed in the Fourier plane only provides limited degrees of freedom and suffers from low contrast; our approach without time multiplexing (i.e., 1 frame) improves the quality over the random mask as it optimizes the amplitude mask pattern in a content-adaptive manner; our method with 6-frame time multiplexing achieves the highest image quality with the largest amount of empirically observed parallax.
§ DISCUSSION
In summary, we present a novel hardware system for étendue expansion and an algorithmic framework for 4D light-field-supervised computer-generated holography. The hardware system includes a multi-source laser array to create a large-étendue coherent backlight for the phase SLM and an amplitude SLM for dynamic Fourier-amplitude modulation. The algorithmic framework includes the joint optimization of time-multiplexed amplitude SLM and phase SLM patterns and a memory-efficient, stochastic light field supervision procedure to create 4D light field holograms. We compare our method with a number of étendue expansion baselines and verify in simulation and experimentally that our system achieves the highest-quality light field reconstruction results for large étendue settings.
Limitations and Future Work
We demonstrate our results on a benchtop display setup but further efforts are required to miniaturize this system. Currently, our multisource laser array is implemented using bulky fiber splitters and could be miniaturized using nanophotonic phased arrays <cit.>. Folding the propagation distance of holograms using optical waveguides could further remove the need for beam splitters and subsequently shrink the form factor, as demonstrated in <cit.>. We illustrate potential compact designs in the supplemental material. The frame rate of our system is limited by our amplitude display (240 Hz native frame rate). This translates to a ∼13.33 Hz frame rate when operating in color-sequential mode with 6-frame time multiplexing. The frame rate can be improved by using more advanced LCoS displays with frame rates > 720 Hz <cit.>. Real-time synthesis of light field holograms is necessary for practical holographic displays, but is not currently supported by our system. Extending recent neural network-based hologram synthesis methods <cit.> to work for 4D light field holograms would be an interesting future direction. Finally, we did not attempt to calibrate a neural network–parameterized wave propagation model of our prototype display system, which has been demonstrated to significantly improve experimentally captured holographic image quality for other types of optical configurations <cit.>.
Conclusion The novel hardware design and algorithmic framework presented in this work improves the étendue of holographic displays and allows for light field holograms synthesis with improved image quality. These help make holographic displays a more practical technology for augmented and virtual reality applications.
We thank Grace Kuo for helpful advice regarding the implementation of the multisource laser setup. Brian Chao is supported by the Stanford Graduate Fellowship and the NSF GRFP. Manu Gopakumar is supported by the Stanford Graduate Fellowship. Suyeon Choi is supported by the Meta Research PhD Fellowship.
|
http://arxiv.org/abs/2409.02414v2 | 20240904034138 | Sphaleron and gravitational wave with the Higgs-Dilaton potential in the Standard Model Two-Time Physics | [
"Vo Quoc Phong",
"Quach Ai Mi",
"Nguyen Xuan Vinh"
] | hep-ph | [
"hep-ph"
] |
|
http://arxiv.org/abs/2409.02421v1 | 20240904040022 | MusicMamba: A Dual-Feature Modeling Approach for Generating Chinese Traditional Music with Modal Precision | [
"Jiatao Chen",
"Tianming Xie",
"Xing Tang",
"Jing Wang",
"Wenjing Dong",
"Bing Shi"
] | cs.SD | [
"cs.SD",
"eess.AS"
] |
September 9, 2024
=====================
§ ABSTRACT
In recent years, deep learning has significantly advanced the MIDI domain, solidifying music generation as a key application of artificial intelligence. However, existing research primarily focuses on Western music and encounters challenges in generating melodies for Chinese traditional music, especially in capturing modal characteristics and emotional expression. To address these issues, we propose a new architecture, the Dual-Feature Modeling Module, which integrates the long-range dependency modeling of the Mamba Block with the global structure capturing capabilities of the Transformer Block. Additionally, we introduce the Bidirectional Mamba Fusion Layer, which integrates local details and global structures through bidirectional scanning, enhancing the modeling of complex sequences. Building on this architecture, we propose the REMI-M representation, which more accurately captures and generates modal information in melodies. To support this research, we developed FolkDB, a high-quality Chinese traditional music dataset encompassing various styles and totaling over 11 hours of music. Experimental results demonstrate that the proposed architecture excels in generating melodies with Chinese traditional music characteristics, offering a new and effective solution for music generation.
Music generation, music information retrieval, music generation, neural networks, deep learning, machine learning
§ INTRODUCTION
Recent advances in deep learning have significantly impacted the MIDI domain, making music generation a key application of artificial intelligence. Melody generation, a central task in music composition, involves creating musical fragments through computational models and presents more challenges than harmony generation and arrangement. A successful model must capture essential features like pitch and rhythm while producing melodies that align with specific styles and emotions. However, most existing methods, whether based on Recurrent Neural Networks<cit.> or Transformer architectures <cit.>, struggle with the complexity and structure of melodies. For example, while <cit.> generated long-term structured melodies, Transformers excel in capturing global dependencies, demonstrating strong performance across various melody tasks <cit.>.
Several studies have integrated music theory into the generation process. For instance, <cit.> introduced chord progressions for melody generation, <cit.> controlled polyphonic music features through chords and textures, and <cit.> improved beat structure representation. Additionally, <cit.> generated harmonious jazz melodies by adjusting harmonic and rhythmic properties, while other works have explored structured music generation using note-to-bar relationships <cit.> and melody skeletons <cit.>.
Meanwhile, State Space Models (SSMs) have also advanced in modeling long-sequence dependencies, particularly in capturing global musical structures. Models like S4 <cit.> and S5 <cit.> have significantly improved parallel scanning efficiency through new state space layers. Mamba <cit.>, a successful SSM variant, enhances parallel computation and has been applied across various fields, including visual domains with VMamba <cit.> and large-scale language modeling with Jamba <cit.>. Recognizing Mamba's potential in sequence modeling, we applied it to symbolic music generation.
However, these methods primarily focus on Western music and struggle with generating Chinese traditional melodies. While they can produce smooth melodies, they often align with modern styles, failing to capture the unique contours and rhythms of Chinese traditional music. As shown in <ref>, existing methods underperform in preserving the stylistic elements of Chinese music. Modes play a central role in Chinese melodies, determining note selection and arrangement, while conveying specific emotions and styles <cit.>. Due to significant differences in scales, pitch relationships, and modal structures between Western and Chinese music, these methods fail to capture these modal characteristics, leading to discrepancies in style and emotional expression <cit.>. The lack of high-quality Chinese traditional music datasets further limits their effectiveness.
To address these issues, we propose a new architecture, the Dual-Feature Modeling Module, which combining the long-range dependency modeling of the Mamba Block with the global structure capturing of the Transformer Block. We also designed the Bidirectional Mamba Fusion Layer, which integrates local details and global structures through bidirectional scanning, enhancing complex sequence modeling. This comprehensive architecture enables the generation of Chinese traditional music with complex structures and coherent melodies. Specifically, our contributions are:
* Mamba architecture to the MIDI domain. We applied the Mamba architecture to MIDI music generation, proposing the Dual-Feature Modeling Module, which combines the strengths of Mamba and Transformer Blocks. Through the Bidirectional Mamba Fusion Layer, we integrated local details with global structures, achieving excellent performance in long-sequence generation tasks.
* REMI-M Representation. We extended the REMI representation with REMI-M, introducing mode-related events and note type indicators, allowing the model to more accurately capture and generate modal information in melodies.
* FolkDB. We created a high-quality Chinese traditional music dataset, FolkDB, designed for studying Chinese traditional music. With over 11 hours of music covering various styles, FolkDB fills a gap in existing datasets and provides a foundation for further research.
§ PROPOSED METHOD
§.§ Problem Formulation
In melody generation, the condition sequence is typically defined as x_1:t = [x_1,...,x_t] and the target sequence as y_1:k=[y_1,...,y_k], where k>t. The prediction of the j-th element in the target sequence can be expressed as y_j|[x_1,...,x_t]∼p(y_j|x_1,...,x_t) where p(y_j|x_1,...,x_t) represents the conditional probability distribution of y_j given the condition sequence.
Chinese traditional music often includes various modes. For example, in pentatonic modes, if a note N_i∈{C,D,E,G,A} serves as the tonic note, then the following notes, if they follow a specific interval relationship, form a mode M. Therefore, to generate Chinese music with mode characteristics, the target sequence can consist of multiple modes and transition notes, represented as y_1:k=(M_1,f(M_1),M_2,f(M_2),...,M_l,f(M_l)), where M_i corresponds to a subsequence of notes within a specific mode, and f(M_i) represents the transition note sequence following M_i. The task of generating melodies with Chinese modes can ultimately be formulated as the following autoregressive problem:
p(𝐲|𝐱, M) = ∏_i=1^l p(M_i|C_i) · p(f(M_i)|C'_i) ,
where M is the collection of multiple modes, C_i = (𝐱, 𝐲_<i) and C'_i = (𝐱, 𝐲_≤ i). During the step-by-step generation of notes, the corresponding mode sequence M_i is generated first, followed by the generation of the transition note sequence f(M_i) based on the mode sequence.
§.§ REMI-M Representation
In generating traditional Chinese music, mode generation is a crucial and complex component. Chinese music often features intricate modal structures, such as pentatonic and heptatonic scales, where the selection and transition of modes are vital to the style and expression of the music. However, existing music representation methods face significant limitations in capturing and generating these modal structures. Although the REMI representation <cit.> effectively captures rhythm, pitch, and velocity information through events such as bar, position, tempo, and note, it struggles with complex modal structures, particularly when handling the dynamic modes in Chinese music.
To address this issue, we extended REMI by introducing two new events in the REMI-M representation to explicitly describe modes:
* Note type event. Distinguishes between mode notes and transition notes, helping the model to more accurately capture modal information.
* Mode-related events. Include the start, end, and type of mode, enabling REMI-M to explicitly annotate and generate modal changes in the music.
As shown in <ref>, the original REMI and MIDI-Like encodings result in low mode generation rates, whereas REMI-M demonstrates significant improvements, achieving mode generation rates exceeding 0.8 across all tested music lengths. These enhancements allow REMI-M to better handle complex modal structures, significantly improving the stylistic consistency and theoretical accuracy in generated music.
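To make the extension concrete, the sketch below shows one way a melody segment could be serialized into REMI-M-style events. The exact event vocabulary, field names, and mode labels (e.g. "Gong-pentatonic") are illustrative and may differ from our actual implementation.

from dataclasses import dataclass

@dataclass
class Note:
    bar: int
    position: int       # sub-beat position within the bar
    pitch: int          # MIDI pitch number
    duration: int
    is_mode_note: bool  # True for mode notes, False for transition notes

def encode_remi_m(notes, mode_spans):
    # mode_spans: list of (start_bar, end_bar, mode_name) annotations from the mode track.
    events, current_bar = [], -1
    for note in sorted(notes, key=lambda n: (n.bar, n.position)):
        while current_bar < note.bar:
            current_bar += 1
            events.append(f"Bar_{current_bar}")
            for start, end, mode in mode_spans:
                if current_bar == start:
                    events.append(f"Mode-Start_{mode}")   # mode-related event
                if current_bar == end:
                    events.append(f"Mode-End_{mode}")     # mode-related event
        events.append(f"Position_{note.position}")
        events.append("NoteType_Mode" if note.is_mode_note else "NoteType_Transition")
        events.append(f"Pitch_{note.pitch}")
        events.append(f"Duration_{note.duration}")
    return events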
§.§ Model
In music generation tasks, it is crucial to capture both local melodic details and global musical structure dependencies within long contexts. To achieve this, as shown in Figure <ref>, we designed a hierarchical feature extraction and integration architecture named the Dual-Feature Modeling Module, which combines the long-range dependency modeling capability of the Mamba Block with the global structure capturing ability of the Transformer Block.
Dual-Feature Modeling Module. In music sequence generation, both melodic details (like note variations and modal transitions) and overall structure (such as phrases and repetition patterns) are crucial. Traditional architectures often struggle to capture these levels of features simultaneously. Let 𝐇 represent the feature matrix. The Mamba Block captures melodic details and modal dependencies by computing a dot product between the mode mask and melody tokens, generating the feature representation 𝐇_1. This provides essential long-range and local information for integration. The Transformer Block primarily models global structural information, processing input melody embeddings with positional encoding to obtain the structural representation 𝐇_2.
Bidirectional Mamba Fusion Layer. To integrate the outputs of the Mamba Block and Transformer Block, we introduce the Bidirectional Mamba Fusion Layer. This layer simultaneously receives the long-range features 𝐇_1 generated by the Mamba Block and the global features 𝐇_2 generated by the Transformer Block. Through a bidirectional scanning mechanism, the forward and backward features are processed separately to obtain F_forward and F_backward. Then, self-attention is applied to the forward and backward features to extract key information:
F_1 = Attention(F_forward), F_2 = Attention(F_backward) ,
Next, the two directional features F_1 and F_2 are concatenated to obtain the fused feature 𝐇_fusion, which is then processed by a linear layer:
Output= Linear(𝐇_fusion) ,
The fused feature 𝐇_fusion combines the long-range dependencies and global structure of the melody, providing complete information support for generating complex and coherent music sequences. Finally, the linear layer maps the fused features to the output space, generating the final music sequence.
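A minimal PyTorch sketch of this fusion layer is given below. It takes the precomputed features 𝐇_1 (from the Mamba Block) and 𝐇_2 (from the Transformer Block) as inputs; summing the two streams before the bidirectional scan and the attention hyper-parameters are simplifying assumptions made for this sketch.

import torch
import torch.nn as nn

class BidirectionalMambaFusionLayer(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn_fwd = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_bwd = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(2 * d_model, d_model)

    def forward(self, h1, h2):
        # h1: (B, L, d) long-range features from the Mamba Block
        # h2: (B, L, d) global structural features from the Transformer Block
        x = h1 + h2                               # combine the two streams (assumed fusion input)
        f_forward = x                             # forward scan order
        f_backward = torch.flip(x, dims=[1])      # backward scan order
        f1, _ = self.attn_fwd(f_forward, f_forward, f_forward)     # F_1 = Attention(F_forward)
        f2, _ = self.attn_bwd(f_backward, f_backward, f_backward)  # F_2 = Attention(F_backward)
        f2 = torch.flip(f2, dims=[1])             # re-align the backward features in time
        h_fusion = torch.cat([f1, f2], dim=-1)    # H_fusion = [F_1 ; F_2]
        return self.out(h_fusion)                 # Output = Linear(H_fusion)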
§ EXPERIMENTS
§.§ Implementation Details
§.§.§ Dataset
We used two datasets: the POP909 dataset<cit.> and a self-collected Chinese Traditional Music dataset (referred to as the FolkDB). The POP909 dataset contains rich musical information, particularly in chords and melodies. Pre-training on this dataset allows the model to learn fundamental musical structures and elements, helping it to adapt more quickly to our FolkDB. Additionally, to address the lack of cultural diversity in the POP909 dataset, we have compiled a dataset of approximately 300 Chinese traditional music pieces. This dataset contains about 11 hours of piano MIDI works, featuring traditional modes such as the pentatonic and heptatonic scales, showcasing the diverse styles and modal characteristics of Chinese music.
In terms of data preprocessing, since the original data consists of single-track Chinese traditional music melodies, we performed additional processing on the self-collected Chinese traditional music dataset to ensure that the model can effectively capture and generate music with Chinese cultural characteristics. The specific steps are as follows:
* Tonic Track Extraction. We employed the tonic extraction framework mentioned in the Wuyun model<cit.>. This framework uses a layered skeleton-guided approach, first constructing the skeleton of the melody and then extending it.
* Mode Detection and Annotation. After extracting the tonic track, we conducted mode detection on the melody using interval relationships. By analyzing the intervals between each pair of tonic notes and leveraging the knowledge-enhanced logic within the Wuyun model, we obtained the mode track for each piece of music.
§.§.§ Model Settings
We adopted an architecture based on the MambaBlock2 module<cit.>, with the model's hidden dimension set to 256 and the feedforward network's intermediate layer dimension set to 1024. Additionally, we employed GatedMLP to enhance the model's nonlinear representation capabilities. During training, we used the Adam optimizer with an initial learning rate of 2 × 10^-4, dynamically adjusted via the LambdaLR scheduler. The training data was processed in batches, with each batch containing 8 samples and a fixed sequence length of 512 tokens. We used the cross-entropy loss function to measure the difference between the model's predictions and the target labels. This loss function is defined as follows:
Loss = L_CE - λ_1L_NT - λ_2L_MR ,
Among these, L_CE is used to focus on the model's ability to accurately predict musical events, while L_NT and L_MR correspond to note type events and mode-related events, respectively. λ_1 and λ_2 are used to balance the contributions of the two losses, with both values typically ranging between 0 and 1.
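A compact PyTorch sketch of this objective and the optimizer configuration is given below. Restricting L_NT and L_MR to the positions of note-type and mode-related target tokens, and the warmup schedule passed to LambdaLR, are assumptions made for illustration rather than details specified above.

import torch
import torch.nn.functional as F

def remi_m_loss(logits, targets, nt_mask, mr_mask, lam1=0.5, lam2=0.5):
    # logits: (B, V, L) event logits; targets: (B, L) target event ids.
    # nt_mask / mr_mask: boolean masks over target positions holding note-type / mode-related events.
    per_token = F.cross_entropy(logits, targets, reduction="none")   # (B, L)
    l_ce = per_token.mean()
    l_nt = per_token[nt_mask].mean()
    l_mr = per_token[mr_mask].mean()
    return l_ce - lam1 * l_nt - lam2 * l_mr                          # combined as in the equation above

def make_optimizer(model, warmup_steps=1000):
    # Adam with an initial learning rate of 2e-4, adjusted via LambdaLR (warmup length assumed).
    opt = torch.optim.Adam(model.parameters(), lr=2e-4)
    sched = torch.optim.lr_scheduler.LambdaLR(
        opt, lr_lambda=lambda s: min(1.0, (s + 1) / warmup_steps))
    return opt, sched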
§.§ Objective Evaluation
§.§.§ Metric
To evaluate our music generation model, we selected the following four objective metrics: Pitch Class Entropy, Groove Consistency<cit.>, Style Consistency, and Mode Consistency.
* Pitch Class Entropy. This metric reflects the diversity of pitch distribution, with higher entropy indicating a more dispersed distribution of generated notes, while lower entropy indicates a more concentrated distribution.
* Groove Consistency. Higher groove consistency indicates less variation in rhythm, resulting in a smoother, more stable musical flow.
* Style Consistency. A higher style consistency score indicates that the generated music aligns more closely with the expected style.
* Mode Consistency. This metric evaluates whether the notes in the generated music conform to the predefined mode structure. We improved the traditional scale consistency metric <cit.> to better align with the modal characteristics of Chinese folk music. The specific formula is as follows:
Consistency Score = |𝒫_melody∩𝒫_scale|/|𝒫_melody|× 100% ,
Here, 𝒫_melody represents the set of melody notes, 𝒫_scale represents the set of scale notes, |𝒫_melody∩𝒫_scale| denotes the size of the intersection between the melody note set and the scale note set, and |𝒫_melody| represents the size of the melody note set. The consistency score is determined by calculating the overlap ratio between the sets of notes in the melody and scale tracks.
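A simple reference implementation of this score is given below for illustration; reducing MIDI pitches to pitch classes is an assumption of this sketch rather than a detail specified above.

def mode_consistency(melody_notes, scale_notes, use_pitch_class=True):
    # Overlap ratio |P_melody ∩ P_scale| / |P_melody|, reported in percent.
    reduce = (lambda p: p % 12) if use_pitch_class else (lambda p: p)
    p_melody = {reduce(p) for p in melody_notes}
    p_scale = {reduce(p) for p in scale_notes}
    if not p_melody:
        return 0.0
    return 100.0 * len(p_melody & p_scale) / len(p_melody)

# Example: a melody drawn from a C pentatonic collection plus one out-of-mode note (F).
print(mode_consistency([60, 62, 64, 67, 69, 65], [60, 62, 64, 67, 69]))  # ~83.3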
§.§.§ Results
Before presenting the objective metric results, we tested the key restoration of each model. It is clear from <ref> that MusicMamba not only effectively restores the keys in the original sequence but also introduces additional key changes, whereas MusicTransformer, although it captures some keys, is not as comprehensive or diverse as MusicMamba. Experimental results show that MusicMamba is better at generating melodies with traditional Chinese music styles and can generate richer, more consistent sequences.
We conducted two sets of comparative experiments using the MusicTransformer <cit.> and MelodyT5 <cit.> models as baselines. In each experiment, we randomly generated approximately 50 songs for each model and calculated objective metrics, which are displayed in the table. When evaluating the quality of generated music, we consider values that are closer to real data as better. As shown in <ref>, our model's generated music is closer to the real values in terms of pitch entropy, outperforming the other models. Our model also excels in style consistency and rhythm consistency. Notably, in terms of mode consistency, over 70% of the music generated by our model exhibits a detectable modal structure, and more than 60% of the music performs well in mode consistency. The above metrics are shown in <ref>.
§.§ Subjective Listening Test
To evaluate the quality of the music samples generated by the model, we designed a subjective listening test. We recruited 10 music enthusiasts from social networks, each of whom plays at least one musical instrument. Each participant was asked to listen to 10 generated audio samples. They rated the samples based on three criteria: coherence, richness, and style, with scores ranging from 0 to 10. In the subjective evaluation results, MusicMamba outperformed all baseline models in coherence, richness, and style, showing the best overall performance.
§ CONCLUSION
This paper proposes a new architecture that combines the long-range dependency modeling capability of the Mamba Block with the global structure capturing ability of the Transformer Block. We also designed the Bidirectional Mamba Fusion Layer to effectively integrate local and global information. By introducing the REMI-M representation, we were able to more accurately capture and generate modal features in Chinese traditional music. Experimental results show that the combination of REMI-M and MusicMamba more accurately reproduces and generates specific modes in Chinese traditional music, with the generated music outperforming traditional baseline models in terms of stylistic consistency and quality. Our research provides a new direction and technical foundation for exploring more complex modes in various types of ethnic music, as well as for generating melodies with distinctive styles through the incorporation of traditional instruments.
§ ACKNOWLEDGEMENTS
Hao-Wen thanks J. Yang and Family Foundation and Taiwan Ministry of Education for supporting his PhD study. This project has received funding from the European Research Council (ERC REACH) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement #883313).
|
http://arxiv.org/abs/2409.03630v1 | 20240905154249 | Generalizing Linear Graphs and Bond Graph Models with Hetero-functional Graphs for System-of-Systems Engineering Applications | [
"Ehsanoddin Ghorbanichemazkati",
"Amro M. Farid"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Generalizing Linear Graphs and Bond Graph Models with Hetero-functional Graphs for System-of-Systems Engineering Applications
Ehsanoddin Ghorbanichemazkati, Amro M. Farid
E. Ghorbanichemazkati is a doctoral research assistant with the Department of Systems and Enterprises at the Stevens Insititute of Technology, Hoboken NJ 07030
A.M. Farid is the Alexander Crombie Humphreys Chair Professor of Economics in Engineering in Department of Systems and Enterprises at the Stevens Insititute of Technology, Hoboken NJ 07030. He is also the Principal Systems Scientist at CSIRO Smart Energy (Newcastle, Australia) and a Visiting Scientist at MIT Mechanical Engineering Cambridge, MA.
March 28 2024
==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
In the 20th century, individual technology products like the generator, telephone, and automobile were connected to form many of the large-scale, complex, infrastructure networks we know today: the power grid, the communication infrastructure, and the transportation system. Progressively, these networked systems began interacting, forming what is now known as systems-of-systems. Because the component systems in the system-of-systems differ, modeling and analysis techniques with primitives applicable across multiple domains or disciplines are needed. For example, linear graphs and bond graphs have been used extensively in the electrical engineering, mechanical engineering, and mechatronic fields to design and analyze a wide variety of engineering systems. In contrast, hetero-functional graph theory (HFGT) has emerged to study many complex engineering systems and systems-of-systems (e.g. electric power, potable water, wastewater, natural gas, oil, coal, multi-modal transportation, mass-customized production, and personalized healthcare delivery systems). This paper seeks to relate hetero-functional graphs to linear graphs and bond graphs and demonstrate that the former is a generalization of the latter two. The contribution is relayed in three stages. First, the three modeling techniques are compared conceptually. Next, these techniques are contrasted on six example systems: (a) an electrical system, (b) a translational mechanical system, (c) a rotational mechanical system, (d) a fluidic system, (e) a thermal system, and (f) a multi-energy (electro-mechanical) system. Finally, this paper proves mathematically that hetero-functional graphs are a formal generalization of both linear graphs and bond graphs.
§ INTRODUCTION
In the 20th century, individual technology products like the generator, telephone, and automobile were connected to form many of the large-scale, complex, infrastructure networks we know today: the power grid, the communication infrastructure, and the transportation system<cit.>. Over time, these networked systems began to develop interactions between themselves in what is now called systems-of-systems<cit.>. The “smart grid”<cit.>, the energy-water nexus<cit.>, the electrification of transport<cit.>, are all good examples where one network system has fused with another to form a new and much more capable system. This trend is only set to continue. The energy-water-food nexus<cit.> fuses three such systems and the recent interest in smart cities<cit.> provides a platform upon which to integrate all of these efforts.
Because the component systems in the system-of-systems are unlike each other, there is a need to use modeling and analysis techniques that have modeling primitives that can be applied across multiple domains or disciplines. For example, linear graphs<cit.> and bond graphs<cit.> have been used extensively in the electrical engineering, mechanical engineering, and mechatronic fields to design and analyze a wide variety of engineering systems. They use modeling primitives such as generalized resistors, capacitors, inductors, transformers, and gyrators to address mechanical, electrical, fluidic, and thermal systems. Unfortunately, the application of linear graphs and bond graphs are limited to systems where system elements are connected via flows of power (of various types).
In contrast, hetero-functional graph theory (HFGT)<cit.> has emerged to study many complex engineering systems and systems-of-systems. More specifically, HFGT provides a means of algorithmically translating SysML models<cit.> into hetero-functional graphs and/or Petri Nets<cit.> where they can be structurally analyzed<cit.>, dynamically simulated<cit.>, and ultimately optimized<cit.>. HFGT has been applied to numerous application domains including electric power, potable water, wastewater, natural gas, oil, coal, multi-modal transportation, mass-customized production, and personalized healthcare delivery systems. In so doing, HFGT has demonstrated its ability to model the supply, demand, transportation, storage, transformation, assembly, and disassembly of multiple operands in distinct locations over time<cit.>. These multiple operands include matter, energy, information, money, and living organisms<cit.> and not just energy as in the case of linear graphs and bond graphs.
§.§ Original Contribution
Consequently, this paper seeks to relate hetero-functional graphs to linear graphs and bond graphs and demonstrate that the former is a generalization of the latter two. The contribution is relayed in three stages. First, the three modeling techniques are compared conceptually. Next, the three modeling techniques are contrasted on six example systems: (a) an electrical system, (b) a translational mechanical system, (c) a rotational mechanical system, (d) a fluidic system, (e) a thermal system, and (f) a multi-energy (electro-mechanical) system. Finally, this paper proves mathematically that hetero-functional graphs are a formal generalization of both linear graphs and bond graphs.
To facilitate the discussion, several assumptions and limitations are made.
* As the majority of the literature for all three modeling approaches concerns lumped parameter models, this paper restricts its scope to such systems.
* So as to facilitate the discussion, all physical systems will be modeled with “power-variables"<cit.> – pairs of physical variables whose product equals a quantity of power.
* Linear graphs and bond graphs are assumed to describe the flows of power of various types: translational mechanical, rotational mechanical, electrical, fluidic, and thermal. This has been the typical application of linear graphs and bond graphs in the literature.
* Without loss of generality, and for simplicity of discussion, this paper restricts its discussion to linear constitutive laws. Non-linear constitutive laws can be readily integrated into bond graphs and hetero-functional graphs.
* Without loss of generality, and also for simplicity of discussion, this paper restricts itself to the following types of elements (as understood in bond graphs):
* Effort sources/sinks,
* Flow sources/sinks,
* Generalized resistors,
* Generalized capacitors,
* Generalized inductors,
* Generalized transformers, and
* Generalized gyrators.
While other types of elements have been introduced in bond graphs <cit.>, their exclusion does not detract from the original contribution presented here and can be re-introduced straightforwardly.
§.§ Paper Outline
The remainder of the paper proceeds as follows. Sec. <ref> provides an overview of linear graphs, bond graphs, and hetero-functional graph techniques. Sec. <ref> then introduces six illustrative examples that serve as the basis for comparison. Sections <ref>, <ref>, and <ref> then demonstrate the linear graph, bond graph, and hetero-functional graph methodologies on each of these six systems. Sec. <ref> then proves mathematically that hetero-functional graphs are a formal generalization of both linear graphs and bond graphs. Sec. <ref> brings the paper to a conclusion.
§ OVERVIEW OF LINEAR GRAPH, BOND GRAPH, AND HETERO-FUNCTIONAL GRAPH TECHNIQUES
In order to begin to relate hetero-functional graphs to linear graphs and bond graphs, all three approaches must be briefly described at a conceptual level. Interestingly, linear graphs share more common features with bond graphs and hetero-functional graphs than the other two with each other. Consequently, Sections <ref>, <ref> and <ref> describe linear graphs, bond graphs, and hetero-functional in that order to facilitate discussion. Figure <ref> serves to guide the discussion for the remainder of the section.
§.§ Linear Graphs
As shown in Fig. <ref>, linear graphs are a type of graphical model that shows the interconnectedness of lumped parameter elements depicted as arcs<cit.> that transform and transport the flow of power. It is important to recognize that when defining the linear graph's lumped parameter elements, the system modeler is implicitly choosing either a Lagrangian or an Eulerian view<cit.> of the system. Each of these power flows in the linear graph's arcs is associated with a pair of variables; one “across" variable and another “through" variable. Fig. <ref> shows the across and through variables for each of the five energy domains. Furthermore, as shown in Fig. <ref>, each of the arcs in a linear graph can be classified into one of several different types. These arcs connect with each other at nodes that are associated with points in space that have distinct across-variable values measured relative to a well-chosen absolute reference frame. Once a linear graph has been constructed it is then translated into a “normal tree" so as to eliminate redundant variables<cit.>. From there, three sets of simultaneous differential and algebraic equations are derived to form a mathematical model of the system.
* Continuity laws: For each node in the normal tree, a continuity law (e.g. Newton's 1st Law, Kirchhoff's Current law) is derived that describes the conservation of the relevant through variables.
* Constitutive laws: For each arc in the normal tree, a constitutive law (e.g. Newton's 2nd Law, Ohm's Law) is derived that describes the relationship between across and through variables in the lumped parameter element.
* Compatibility law: For each arc in the normal tree, a compatibility law is derived to relate the across variable of the lumped parameter element to the across variables of the linear graph's nodes.
In most cases, the compatibility laws can be entirely omitted if the simultaneous equations are expressed exclusively in terms of the (absolute) across variables at each of the nodes.
The three sets of simultaneous equations that constitute the mathematical model can then be algebraically simplified into a state space model of the form:
Ẋ= AX + BU + EU̇
y = CX + DU + FU̇
where X is the system state vector, U are the system inputs, y are the system outputs, and A,B,C,D,E and F are constant parameter coefficient matrices of appropriate size. The size of the system state vector is determined by the number of independent energy storage elements (as determined from the normal tree)<cit.>. The size of the system input vector is determined from the number of source/sink elements, and the size of the output vector is left to the modeler's discretion. Finally, the state space model is simulated in the time domain by numerical integration.
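As a brief illustration of this final step, a state space model of the form Ẋ = AX + BU, y = CX + DU (omitting the E and F feed-through terms on U̇ for simplicity) can be integrated numerically as sketched below; the matrices are placeholders rather than the result of any particular normal tree.

import numpy as np
from scipy.integrate import solve_ivp

# Placeholder second-order example, e.g. one independent energy-storage element of each type.
A = np.array([[0.0, 1.0],
              [-4.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

def u(t):
    return np.array([1.0])              # unit step input from the source element

def xdot(t, x):
    return A @ x + B @ u(t)             # X' = AX + BU

sol = solve_ivp(xdot, (0.0, 20.0), y0=[0.0, 0.0], max_step=0.01)
U = np.column_stack([u(t) for t in sol.t])   # 1 x n input history
y = C @ sol.y + D @ U                        # output trajectory y(t)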
§.§ Bond Graphs
Bond graph models are derived in a similar manner, although with several key differences. As shown in Fig. <ref>, bond graphs are a type of graphical model that shows the interconnectedness of lumped parameter elements depicted as labeled nodes <cit.> that transform and transport power. Each of these power flows is associated with a pair of variables; one “effort" variable and another “flow" variable. Fig. <ref> shows the effort and flow variables for each of the five energy domains. Importantly, when the system model adopts a Lagrangian view, the effort and flow variables map to through and across variables respectively. In contrast, when the system model adopts an Eulerian view, the effort and flow variables map to across and through variables respectively. Furthermore, as shown in Fig. <ref>, each of the elements in the bond graph can be classified into one of several different types. In addition to these bond graph elements, the bond graph also includes “0-Junctions" and “1-Junctions". 0-Junctions, or flow junctions, conserve the sum of all flow variables and have a common associated effort variable. Meanwhile, 1-Junctions, or effort junctions, conserve the sum of all effort variables and have a common associated flow variable. Finally, the 0-Junctions and the 1-Junctions are connected to the bond graph elements with arcs (i.e. power bonds) that describe a flow of power between elements and junctions. Once the bond graph has been constructed, three sets of simultaneous differential-algebraic equations are derived to form a mathematical model of the system.
* 0-Junction Laws: For each 0-Junction, the flow conversation law is derived. For systems with a Lagrangian view, these 0-Junction laws are compatibility laws. For systems with an Eulerian view, these 0-junction laws are continuity laws (e.g. Kirchhoff's current law).
* 1-Junction Laws: For each 1-Junction, the effort conservation law is derived. For systems with a Lagrangian view, these 1-Junction laws are continuity laws (e.g. Newton's 1st law). For systems with an Eulerian view, these 1-junction laws are compatibility laws (e.g. Kirchhoff's voltage law).
* Constitutive laws: For each element in the bond graph, a constitutive law (e.g. Newton's 2nd Law, Ohm's Law) is derived that describes the relationship between effort and flow variables in the lumped parameter element.
As with linear graphs, the three sets of simultaneous equations that constitute the mathematical model can then be algebraically simplified into a state space model shown in Eq. <ref> and <ref>.
§.§ Hetero-functional Graphs
As shown in Fig. <ref>, hetero-functional graphs are also a type of graphical model that shows the interconnectedness of lumped parameter elements. Unlike the relatively specific graphical ontologies used in linear graphs and bond graphs, hetero-functional graph theory stems from the universal structure of human language, with subjects and predicates, the latter made up of verbs and objects<cit.>. It includes a set of system resources R as subjects, a set of system processes P as predicates, and a set of operands L as their constituent objects.
Operand: An asset or object l_i ∈ L that is operated on or consumed during the execution of a process.
Process: An activity p ∈ P that transforms or transports a predefined set of input operands into a predefined set of outputs.
Resource: An asset or object r_v ∈ R that facilitates the execution of a process.
As shown in Fig. <ref>, these operands, processes, and resources are organized in an engineering system meta-architecture stated in the Systems Modeling Language (SysML)<cit.>.
Importantly, the system resources R=M ∪ B ∪ H are classified into transformation resources M, independent buffers B, and transportation resources H. Additionally, the set of “buffers" B_S=M ∪ B is introduced to support the discussion. Equally important, the system processes P = P_μ∪ P_η̅ are classified into transformation processes P_μ and refined transportation processes P_η. The latter arises from the simultaneous execution of one transportation process and one holding process. Finally, hetero-functional graph theory emphasizes that resources are capable of one or more system processes to produce a set of “capabilities"<cit.>.
Buffer: A resource r_v ∈ R is a buffer b_s ∈ B_S iff it is capable of storing or transforming one or more operands at a unique location in space.
Capability: An action e_wv∈ E_S (in the SysML sense) defined by a system process p_w ∈ P being executed by a resource r_v ∈ R. It constitutes a subject + verb + operand sentence of the form: “Resource r_v does process p_w".
The highly generic and abstract nature of these definitions has allowed HFGT to be applied to numerous application domains including electric power, potable water, wastewater, natural gas, oil, coal, multi-modal transportation, mass-customized production, and personalized healthcare delivery systems. For a more in-depth description of HFGT, readers are directed to past works<cit.>.
Fig. <ref> also serves to relate the hetero-functional graph theory definitions to the ontological elements found in linear graphs and bond graphs.
* First, linear graphs and bond graphs are concerned with physical flows, and more specifically the flows of force, torque, electrical current, fluids, and heat. Consequently, Fig. <ref> shows these objects as types of operands.
* Second, linear graphs and bond graphs implicitly define a set of physical points at which there are distinct values of across-value attributes measured in absolute terms relative to a well-chosen reference frame. Again, Fig. <ref> summarizes the different types of physical points by the type of across variable. Consequently, Fig. <ref> shows these as types of independent buffers.
* Third, linear graphs and bond graphs are composed of physical sources. Because the across variables at physical points are types of independent buffers (in HFGT), across-variable and through-variable sources are adopted into the taxonomy (rather than effort and flow sources). Because these physical sources inject power across the system boundary as system processes, and all system processes that inject operands across the system boundary are transformation processes<cit.>, then Fig. <ref> shows these physical sources as transformation resources. Furthermore, because they are transformation resources, they are also buffers and inherently have an absolute across variable associated with physical points.
* Fourth, linear graphs and bond graphs are composed of the physical elements shown in Fig. <ref>. Each of these is associated with a through-variable attribute and an across-variable attribute defined relative to the across variables of the physical points (as independent buffers). They are further classified as generalized resistors, capacitors, inductors, transformers, and gyrators (as defined in bond graphs). They dissipate power, store potential energy in an effort variable, store energy in a flow variable, transform power, and gyrate power respectively. (For brevity, the storage of energy in a flow variable will be referred to as storing kinetic energy; although this is a strained analogy in the electrical domain). As each of these system processes is a flow of power between two physical points, Fig. <ref> shows all of these physical elements as transportation resources.
Finally, it is important to recognize that linear graphs and bond graphs are limited to physical sources and physical elements with only a single associated capability. In contrast, HFGT is able to address engineering systems with resources that have an arbitrary number of processes. In this regard, and as shown in Fig. <ref>, HFGT is more ontologically rich than both linear graphs and bond graphs.
Returning to Fig. <ref>, the engineering system meta-architecture stated in SysML must be instantiated and ultimately transformed into the associated Petri net model(s). To that end, the positive and negative hetero-functional incidence tensors (HFIT) are introduced to describe the flow of operands through buffers and capabilities.
The negative hetero-functional incidence tensor M_ρ^- ∈{0,1}^|L|× |B_S| × | E_S| is a third-order tensor whose element M_ρ^-(i,y,ψ)=1 when the system capability ϵ_ψ∈ E_S pulls operand l_i ∈ L from buffer b_s_y∈ B_S.
The positive hetero-functional incidence tensor M_ρ^+ ∈{0,1}^|L|× |B_S| × | E_S| is a third-order tensor whose element M_ρ^+(i,y,ψ)=1 when the system capability ϵ_ψ∈ E_S injects operand l_i ∈ L into buffer b_s_y∈ B_S.
These incidence tensors are straightforwardly “matricized" to form 2^nd Order Hetero-functional Incidence Matrices M = M^+ - M^- with dimensions |L||B_S|× | E|. Consequently, the supply, demand, transportation, storage, transformation, assembly, and disassembly of multiple operands in distinct locations over time can be described by an Engineering System Net and its associated State Transition Function<cit.>.
Engineering System Net: An elementary Petri net N = {S, E_S, M, W, Q}, where
* S is the set of places with size: |L||B_S|,
* E_S is the set of transitions with size: | E|,
* M is the set of arcs, with the associated incidence matrices: M = M^+ - M^-,
* W is the set of weights on the arcs, as captured in the incidence matrices,
* Q=[Q_B; Q_E] is the marking vector for both the set of places and the set of transitions.
The state transition function of the engineering system net Φ() is:
Q[k+1]=Φ(Q[k],U^-[k], U^+[k]) ∀ k ∈{1, …, K}
where k is the discrete time index, K is the simulation horizon, Q=[Q_B; Q_ E], Q_B has size |L||B_S| × 1, Q_ E has size | E_S|× 1, the input firing vector U^-[k] has size | E_S|× 1, and the output firing vector U^+[k] has size | E_S|× 1.
Q_B[k+1] =Q_B[k]+M^+U^+[k]Δ T-M^-U^-[k]Δ T
Q_ E[k+1] =Q_ E[k]-U^+[k]Δ T +U^-[k]Δ T
where Δ T is the duration of the simulation time step.
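As a concrete, deliberately tiny illustration, the following Python sketch applies one step of this state transition function to a hypothetical engineering system net with two places and a single transition; it is not the HFGT toolbox, and every matrix and firing vector below is an assumption made purely for the example.

import numpy as np

def esn_step(Q_B, Q_E, M_plus, M_minus, U_plus, U_minus, dT):
    # One application of the engineering system net state transition function.
    Q_B_next = Q_B + M_plus @ U_plus * dT - M_minus @ U_minus * dT
    Q_E_next = Q_E - U_plus * dT + U_minus * dT
    return Q_B_next, Q_E_next

# Toy net: 2 places (|L||B_S| = 2) and 1 transition (|E_S| = 1) that moves one
# operand from place 0 to place 1.
M_minus = np.array([[1.0], [0.0]])   # the transition pulls from place 0
M_plus = np.array([[0.0], [1.0]])    # the transition injects into place 1
Q_B = np.array([1.0, 0.0])           # the operand initially sits at place 0
Q_E = np.array([0.0])
U = np.array([1.0])                  # fire the transition at unit rate
Q_B, Q_E = esn_step(Q_B, Q_E, M_plus, M_minus, U, U, dT=1.0)
print(Q_B)                           # -> [0. 1.]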
Here, it is important to recognize that the engineering system net state transition function is an explicit restatement of the continuity laws in linear graphs. Similarly, the engineering system net describes the 0-Junction laws in bond graphs that use an Eulerian view and describes the 1-Junction laws in bond graphs that use a Lagrangian view. For this reason, the relationship between hetero-functional graphs and linear graphs is more straightforward than between hetero-functional graphs and bond graphs.
In addition to the engineering system net, in HFGT, each operand can have its own state and evolution. This behavior is described by an Operand Net and its associated State Transition Function for each operand.
Given operand l_i, an elementary Petri net N_l_i= {S_l_i, E_l_i, M_l_i, W_l_i, Q_l_i} where
* S_l_i is the set of places describing the operand's state.
* E_l_i is the set of transitions describing the evolution of the operand's state.
* M_l_i⊆ (S_l_i× E_l_i) ∪ ( E_l_i× S_l_i) is the set of arcs, with the associated incidence matrices: M_l_i = M^+_l_i - M^-_l_i ∀ l_i ∈ L.
* W_l_i : M_l_i is the set of weights on the arcs, as captured in the incidence matrices M^+_l_i,M^-_l_i ∀ l_i ∈ L.
* Q_l_i= [Q_Sl_i; Q_ El_i] is the marking vector for both the set of places and the set of transitions.
The state transition function of each operand net Φ_l_i() is:
Q_l_i[k+1]=Φ_l_i(Q_l_i[k],U_l_i^-[k], U_l_i^+[k]) ∀ k ∈{1, …, K}, i ∈{1, …, |L|}
where Q_l_i=[Q_Sl_i; Q_ E l_i], Q_Sl_i has size |S_l_i| × 1, Q_ E l_i has size | E_l_i| × 1, the input firing vector U_l_i^-[k] has size | E_l_i|× 1, and the output firing vector U_l_i^+[k] has size | E_l_i|× 1.
Q_Sl_i[k+1] =Q_Sl_i[k]+M_l_i^+U_l_i^+[k]Δ T - M_l_i^-U_l_i^-[k]Δ T
Q_ E l_i[k+1] =Q_ E l_i[k]-U_l_i^+[k]Δ T +U_l_i^-[k]Δ T
Here, it is important to recognize that although HFGT introduces operand nets and their respective state transition functions, linear graphs and bond graphs do not have a counterpart modeling concept. This is because when power flows as an operand, regardless of whether it is mechanical, electrical, fluidic, or thermal power, it does not change state and therefore does not require an operand net. In contrast, other application domains, most notably production systems<cit.> and healthcare systems<cit.> respectively have products and patients as operands with often very complex operand state evolution.
Returning to Fig. <ref>, HFGT describes the behavior of an engineering system using the Hetero-Functional Network Minimum Cost Flow (HFNMCF) problem<cit.>. Whereas linear graph and bond graph models insist on the state space form in Eq. <ref>-<ref>, HFGT similarly insists on the HFNMCF mathematical program. It optimizes the time-dependent flow and storage of multiple operands (or commodities) between buffers, allows for their transformation from one operand to another, and tracks the state of these operands. In this regard, it is a very flexible optimization problem that applies to a wide variety of complex engineering systems. For the purposes of this paper, the HFNMCF is a type of discrete-time-dependent, time-invariant, convex optimization program<cit.>.
minimize Z = ∑_k=1^K-1 f_k(x[k],y[k])
s.t. A_CP X = B_CP
E_CP ≤ D_CP(X) ≤ E_CP
g(X,Y) = 0
h(Y) ≤ 0
where
* Z is a convex objective function separable in k.
* k is the discrete time index.
* K is the simulation horizon.
* f_k() is a set of discrete-time-dependent convex functions.
* X=[x[1]; …; x[K]] is the vector of primary decision variables at time k.
x[k] = [ Q_B ; Q_ E ; Q_SL ; Q_ EL ; U^- ; U^+ ; U^-_L ; U^+_L ][k] ∀ k ∈{1, …, K}
* Y=[y[1]; …; y[K]] is the vector of auxiliary decision variables at time k. The need for auxiliary decision variables depends on the presence and nature of the device models g(X,Y)=0 in Eq. <ref> and h(Y)≤ 0 in Eq. <ref>.
* A_CP is the linear equality constraint coefficient matrix.
* B_CP is the linear equality constraint intercept vector.
* D_CP is the linear inequality constraint coefficient matrix.
* E_CP is the linear inequality constraint intercept vector.
* g(X,Y) and h(Y) are sets of device model functions whose presence and nature depend on the specific problem application.
Despite the terse description of the HFNMCF problem presented above, it has immediate relationships to linear graphs and bond graphs.
* The through variables in a linear graph appear amongst the primary decision variables X.
* The across variables in a linear graph appear amongst the auxiliary variables Y.
* Because HFGT assumes that the across variables are stated in absolute terms, the compatibility laws in a linear graph are not required in the HFNMCF.
* The continuity relations in a linear graph appear amongst the linear equality constraints in Eq. <ref>
* The constitutive relations in a linear graph appear amongst the device model constraints in Eq. <ref>.
The relationships between the HFNMCF problem and bond graphs can be similarly deduced via the HFGT-to-linear graph relationships stated above. With the above understanding, Equations <ref>-<ref> are elaborated below.
§.§.§ Objective Function
With respect to the objective function in Eq. <ref>, Z is a convex objective function separable in discrete time steps k. For the remainder of this work, the discrete-time-dependent functions f_k are assumed to be time-invariant quadratic functions. Matrix F_QP and vector f_QP in Equation <ref> allow quadratic and linear costs to be incurred from the place and transition markings in both the engineering system net and operand nets.
Z = ∑_k=1^K-1 x^T[k] F_QP x[k] + f_QP^T x[k]
* F_QP is a positive semi-definite, diagonal, quadratic coefficient matrix.
* f_QP is a linear coefficient vector.
§.§.§ Equality Constraints
Matrix A_QP and vector B_QP in Equation <ref> are constructed by concatenating constraints Equations <ref>-<ref>.
-Q_B[k+1]+Q_B[k]+M^+U^+[k]Δ T - M^-U^-[k]Δ T= 0 ∀ k ∈{1, …, K}
-Q_ E[k+1]+Q_ E[k]-U^+[k]Δ T + U^-[k]Δ T= 0 ∀ k ∈{1, …, K}
- U_ψ^+[k+k_dψ]+ U_ψ^-[k] = 0 ∀ k∈{1, …, K}, ψ∈{1, …, | E_S|}
-Q_Sl_i[k+1]+Q_Sl_i[k]+M_l_i^+U_l_i^+[k]Δ T - M_l_i^-U_l_i^-[k]Δ T= 0 ∀ k ∈{1, …, K} i ∈{1, …, |L|}
-Q_ El_i[k+1]+Q_ El_i[k]-U_l_i^+[k]Δ T + U_l_i^-[k]Δ T= 0 ∀ k ∈{1, …, K} i ∈{1, …, |L|}
- U_xl_i^+[k+k_dxl_i]+ U_xl_i^-[k] = 0 ∀ k∈{1, …, K}, ∀ x∈{1, …, | E_l_i|}, l_i ∈{1, …, |L|}
U^+_L[k] - Λ^+ U^+[k] = 0 ∀ k ∈{1, …, K}
U^-_L[k] - Λ^- U^-[k] = 0 ∀ k ∈{1, …, K}
[ D_Up 0; 0 D_Un ][ U^+; U^- ][k] = [ C_Up; C_Un ][k] ∀ k ∈{1, …, K}
[ E_Lp 0; 0 E_Ln ][ U^+_l_i; U^-_l_i ][k] = [ F_Lpi; F_Lni ][k] ∀ k ∈{1, …, K} i ∈{1, …, |L|}
[ Q_B ; Q_ E ; Q_SL ][1] = [ C_B1 ; C_ E1 ; C_SL1 ]
[ Q_B ; Q_ E ; Q_SL ; U^- ; U_L^- ][K+1] = [ C_BK ; C_ EK ; C_SLK ; 0 ; 0 ]
* Equations <ref> and <ref> describe the state transition function of an engineering system net (Defn <ref> & <ref>).
* Equation <ref> is the engineering system net transition duration constraint where the end of the ψ^th transition occurs k_dψ time steps after its beginning.
* Equations <ref> and <ref> describe the state transition function of each operand net N_l_i (Defn. <ref> & <ref>) associated with each operand l_i ∈ L.
* Equation <ref> is the operand net transition duration constraint where the end of the x^th transition occurs k_dx_l_i time steps after its beginning.
* Equations <ref> and <ref> are synchronization constraints that couple the input and output firing vectors of the engineering system net to the input and output firing vectors of the operand nets respectively. U_L^- and U_L^+ are the vertical concatenations of the input and output firing vectors U_l_i^- and U_l_i^+ respectively.
U_L^-[k] =[U^-_l_1; …; U^-_l_|L|][k]
U_L^+[k] =[U^+_l_1; …; U^+_l_|L|][k]
* Equations <ref> and <ref> are boundary conditions. Eq. <ref> is a boundary condition constraint that allows some of the engineering system net firing vectors decision variables to be set to an exogenous constant. Eq. <ref> is a boundary condition constraint that allows some of the operand net firing vector decision variables to be set to an exogenous constant.
* Equations <ref> and <ref> are the initial and final conditions of the engineering system net and the operand nets where Q_SL is the vertical concatenation of the place marking vectors of the operand nets Q_Sl_i.
Q_SL[k] =[Q_Sl_1; …; Q_Sl_|L|][k]
Q_ EL[k] =[Q_ El_1; …; Q_ El_|L|][k]
§.§.§ Inequality Constraints
Matrix D_QP and vector E_QP in Equation <ref> place capacity constraints on the vector of decision variables at each time step x[k] = [ Q_B ; Q_ E ; Q_SL ; Q_ EL ; U^- ; U^+ ; U^-_L ; U^+_L ][k] ∀ k ∈{1, …, K}. This flexible formulation allows capacity constraints on the place and transition markings in both the engineering system net and operand nets.
§.§.§ Device Model Constraints
As mentioned above, g(X,Y) and h(Y) are a set of device model functions whose presence and nature depend on the specific problem application. They cannot be further elaborated until the application domain and its associated capabilities are identified.
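Although the device models cannot be specified until an application is chosen, the overall shape of the HFNMCF program can still be sketched. The following Python snippet uses the cvxpy modeling library to state a small quadratic program with the structure of the objective, equality constraints, and capacity bounds above; the matrices A_CP, B_CP, F_QP, f_QP and the box bounds are hypothetical placeholders rather than a real engineering system, and the snippet is not the HFGT toolbox implementation.

import numpy as np
import cvxpy as cp

n, m = 8, 4                            # hypothetical numbers of variables and equality constraints
rng = np.random.default_rng(0)
A_CP = rng.standard_normal((m, n))     # linear equality constraint coefficient matrix
B_CP = rng.standard_normal(m)          # linear equality constraint intercept vector
F_QP = np.eye(n)                       # positive semi-definite quadratic coefficient matrix
f_QP = np.zeros(n)                     # linear coefficient vector

X = cp.Variable(n)
objective = cp.Minimize(cp.quad_form(X, F_QP) + f_QP @ X)
constraints = [A_CP @ X == B_CP,       # stands in for continuity, duration, synchronization, boundary conditions
               X >= -10, X <= 10]      # box bounds standing in for the capacity constraints
problem = cp.Problem(objective, constraints)
problem.solve()
print(X.value)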
§ ILLUSTRATIVE EXAMPLES
The overview of linear graphs, bond graphs, and hetero-functional graphs in the previous section, provides a strong foundation for comparison. In order to concretely describe the relationship between linear graphs, bond graphs, and hetero-functional graphs, this section introduces several illustrative examples for further discussion. As these graph-based methodologies have been applied to electrical, translational mechanical, rotational mechanical, fluidic, thermal, and multi-energy domains, one example system for each of these domains is provided. Figure <ref> summarizes each of these systems graphically.
* Figure <ref>a is an electrical system composed of an ideal voltage source V_s, three resistors R_1, R_2, and R_3, two inductors L_1 and L_2, and one capacitor C_1.
* Figure <ref>b is a translational mechanical system composed of an ideal force source F_s(t), two translational dampers B_1 and B_2, two translational springs K_1 and K_2, two masses m_1 and m_2. V_m_1 and V_m_2 indicate the sign convention of positive velocity.
* Figure <ref>c is a rotational mechanical system composed of a rotating disk J, a torsional spring K, a torsional damper b, and a torque source τ_s(t). The angular velocity ω_J indicates the sign convention of positive angular velocity.
* Figure <ref>d is a fluidic system composed of two tanks C_1 and C_2, a pipe with inductance I and resistance R_1 and a valve with resistance R_2, and a volumetric flow rate source V̇_f. The pressure measurement points are P_C_1 and P_C_2.
* Figure <ref>e is a thermal system composed of a “House” and an “Ice box”, which are considered as thermal capacitors C_h and C_i respectively. Also, the “House insulation” and “Ice box insulation” are considered as thermal resistors, R_h and R_i respectively. The system also includes a “Heater” as a heat flow source Q̇_s.
* Figure <ref>f is an electro-mechanical system composed of an ideal voltage source V_s, a resistor R, an inductor L, a torsional damper B, and a rotating disk J. The angular velocity ω_J indicates the sign convention of positive angular velocity of the disk.
§ LINEAR GRAPHS BY EXAMPLE
Building upon the illustrative examples described in the previous section, this section demonstrates the linear graph methodology by example. To recall from Fig. <ref> and the overview provided in Sec. <ref>, the linear graph methodology consists of four essential steps:
* Construct the linear graph from the identified system elements.
* Translate the linear graph into its corresponding normal tree.
* State the continuity, constitutive, and compatibility laws of the system.
* Simplify these laws into a single state-space model.
This section follows each of these four steps for the six illustrative examples identified in Fig. <ref>.
§.§ Electrical System
First, the electrical circuit diagram in Fig. <ref>a is transformed into the linear graph shown in Fig. <ref>. Based on Fig. <ref>, the through-variable in electrical systems is current i, and the across-variable is voltage V. Furthermore Fig. <ref> shows that resistors, capacitors, and inductors are categorized as D-type, A-type, and T-type elements respectively. Consequently, the voltage source in Fig. <ref>a becomes the across-variable source in Fig. <ref>.
Second, the linear graph shown in Fig. <ref> is translated into its associated normal tree in Fig. <ref>. To recall, in the linear graph methodology, a normal tree is a “spanning tree" <cit.> that includes the following elements in order of priority:
* all the system graph nodes,
* all across variable sources,
* as many A-Type elements as possible,
* in the case of existing transformers or gyrators, include one branch of each transformer and both or neither branch of each gyrator,
* as many D-Type elements as possible,
* as many T-Type elements as possible, and
* as many through variable sources as possible,
without creating any loops in the graph <cit.>. For the electrical system in Fig. <ref>, the V_s, C_1, R_1, and R_2 elements are included following the above prioritization, and consequently the R_3, L_1, and L_2 elements are removed. Importantly, the A-Type elements in the normal tree, and T-Type elements hidden from the normal tree, define the state variables in the system: V_C_1 for the capacitor, and i_L_1 and i_L_2 for the two inductors.
Third, the normal tree facilitates the statement of the electrical system's constitutive, continuity, and compatibility laws. The constitutive laws of the system are:
dV_C_1/dt = 1/C_1 i_C_1,
V_R_1 = i_R_1 R_1,
di_L_1/dt = 1/L_1 V_L_1,
V_R_2 = i_R_2 R_2,
di_L_2/dt = 1/L_2 V_L_2,
i_R_3 = 1/R_3 V_R_3.
Additionally, the continuity laws are:
i_C_1 = i_L_1 - i_R_2 - i_R_3,
i_R_1 = i_L_1,
i_R_2 = i_L_2.
Also, the compatibility laws are:
V_L_1 = V_S - V_C_1 - V_R_1,
V_L_2 = V_C_1 - V_R_2,
V_R_3 = V_C_1.
Finally, these laws are simplified algebraically to produce a state space model in Eq. <ref>.
d/dt[ V_C_1; i_L_1; i_L_2 ] = [ -1/R_3C_1 1/C_1 -1/C_1; -1/L_1 -R_1/L_1 0; 1/L_2 0 -R_2/L_2 ][ V_C_1; i_L_1; i_L_2 ] + [ 0; 1/L_1; 0 ] V_S
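As a sanity check, this state-space model can be integrated numerically. The Python sketch below uses SciPy together with the parameter values quoted later for the HFGT comparison (R_1 = R_2 = 200 Ω, R_3 = 220 Ω, L_1 = 100 mH, L_2 = 150 mH, C_1 = 10 μF, and a 1 V step input); starting from a zero initial state is an assumption consistent with that comparison.

import numpy as np
from scipy.integrate import solve_ivp

R1, R2, R3 = 200.0, 200.0, 220.0
L1, L2, C1 = 0.1, 0.15, 10e-6
Vs = 1.0                                    # 1 V step input

A = np.array([[-1/(R3*C1), 1/C1,   -1/C1  ],
              [-1/L1,     -R1/L1,   0.0   ],
              [ 1/L2,      0.0,    -R2/L2 ]])
B = np.array([0.0, 1/L1, 0.0])

def rhs(t, x):                              # x = [V_C1, i_L1, i_L2]
    return A @ x + B * Vs

sol = solve_ivp(rhs, (0.0, 0.01), y0=[0.0, 0.0, 0.0], max_step=1e-4)
V_C1, i_L1, i_L2 = sol.y
print(V_C1[-1], i_L1[-1], i_L2[-1])         # state at t = 0.01 s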
§.§ Translational Mechanical System
The state space model of the translational mechanical system in Fig. <ref>b is developed similarly. First, it is transformed into the linear graph shown in Fig. <ref>. According to Fig. <ref>, the through-variable in translational mechanical systems is force F, and the across-variable is velocity V. Furthermore, Fig. <ref> shows that translational dampers, masses, and translational springs are categorized as D-type, A-type, and T-type elements respectively. Consequently, a force source is a through-variable source.
Second, the linear graph shown in Fig. <ref> is translated into the normal tree in Fig. <ref>.
Given the prioritization exposited in Sec. <ref>, the m_1, m_2, and b_2 elements in this translational mechanical system have been included, and consequently the F_s, b_1, k_1, and k_2 elements are removed. The across variables of the A-Type elements included in the normal tree V_m_1 and V_m_2, and the through variables of T-Type elements not included in the normal tree F_k_1 and F_k_2, are identified as system state variables.
Thirdly, the normal tree facilitates the statement of the translational mechanical system's constitutive, continuity, and compatibility laws. The constitutive laws are:
dV_m_1/dt = 1/m_1 F_m_1,
dV_m_2/dt = 1/m_2 F_m_2,
dF_k_1/dt = k_1 V_k_1,
dF_k_2/dt = k_2 V_k_2,
V_b_2 = 1/b_2 F_b_2,
F_b_1 = b_1 V_b_1
The continuity laws are:
F_m_1 = - F_b_1 - F_k_1 - F_k_2,
F_m_2 = F_b_2 + F_S,
F_b_2 = F_k_2
The compatibility laws are:
V_k_1 = V_m_1,
V_k_2 = V_m_1 - V_m_2 - V_b_2,
V_b_1 = V_m_1
Finally, these equations are algebraically simplified to produce the system's state space model in Equation <ref>.
d/dt[ V_m_1; V_m_2; F_k_1; F_k_2 ] = [ -b_1/m_1 0 -1/m_1 -1/m_1; 0 0 0 1/m_2; k_1 0 0 0; k_2 -k_2 0 -k_2/b_2 ][ V_m_1; V_m_2; F_k_1; F_k_2 ] + [ 0; 1/m_2; 0; 0 ] F_S
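The same kind of numerical check applies here; the sketch below advances the model with a simple forward-Euler scheme at the time step used later for the HFGT comparison (m_1 = 1 kg, m_2 = 2 kg, k_1 = 20 N/m, k_2 = 10 N/m, b_1 = 1 Ns/m, b_2 = 10 Ns/m, F_s = 1 N step, Δ T = 0.02 s), again assuming a zero initial state.

import numpy as np

# Forward-Euler integration of the translational model; the state ordering
# follows the equation above: x = [V_m1, V_m2, F_k1, F_k2].
m1, m2 = 1.0, 2.0
k1, k2 = 20.0, 10.0
b1, b2 = 1.0, 10.0
Fs, dT, T_end = 1.0, 0.02, 20.0

A = np.array([[-b1/m1, 0.0, -1/m1, -1/m1 ],
              [ 0.0,   0.0,  0.0,   1/m2 ],
              [ k1,    0.0,  0.0,   0.0  ],
              [ k2,   -k2,   0.0,  -k2/b2]])
B = np.array([0.0, 1/m2, 0.0, 0.0])

x = np.zeros(4)
for _ in range(int(T_end / dT)):
    x = x + dT * (A @ x + B * Fs)
print(x)                                 # state after 20 s of simulated time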
§.§ Rotational Mechanical System
The linear graph methodology is applied similarly to the rotational mechanical system in Fig. <ref>c. It is transformed into the linear graph shown in Fig. <ref>. Referring back to Fig. <ref>, the through-variable in a rotational mechanical system is torque τ, and the across-variable is angular velocity ω. Also referring back to Fig. <ref>, rotational dampers, rotational disks, and rotational springs are categorized as D-type, A-type, and T-type elements respectively. Consequently, a torque source is a through-variable source.
Second, the linear graph shown in Fig. <ref> is translated into the normal tree in Fig. <ref>. Following the prioritization mentioned in Sec <ref>, the J element in this rotational mechanical system is included accordingly, and consequently, the b, K, and τ_s elements are necessarily removed. Furthermore, the across variables of A-Type elements included in the normal tree ω_J, and the through variables of T-Type elements not included in the normal tree τ_K are the state variables of this system.
Third, the normal tree facilitates the statement of the rotational mechanical system's constitutive, continuity, and compatibility laws. The constitutive laws are:
dω_J/dt = 1/Jτ_J
dτ_K/dt = K ω_K
τ_b = b ω_b
The continuity law is:
τ_J = τ_s - τ_K - τ_b
The compatibility laws are:
ω_K = ω_J
ω_b = ω_J
Finally, these laws are simplified algebraically to produce a state space model in Eq. <ref>.
d/dt[ ω_J; τ_K ] = [ -b/J -1/J; K 0 ][ ω_J; τ_K ] + [ 1/J; 0 ]τ_s
§.§ Fluidic System
The linear graph methodology is also applied to the fluidic system in Fig. <ref>d. It is transformed into the linear graph shown in Fig. <ref>. In the fluidic system domain, Fig. <ref> shows that the through-variable is volumetric flow rate V̇, and the across-variable is pressure difference P. Similarly, Fig. <ref> shows that a fluidic resistance in a pipe or valve, a fluidic capacitance in a tank, and a fluidic inertance are categorized as D-type, A-type, and T-type elements respectively. Consequently, a fluid flow source is a through-variable source.
Second, the linear graph shown in Fig. <ref> is translated into its associated normal tree in Fig. <ref>.
Given the prioritization defined in Sec. <ref>, the C_1, C_2, and R_1 elements are included, and consequently V̇_f, R_2, and I elements are necessarily removed. The across variables of A-Type elements included in the normal tree P_C_1 and P_C_2, and the through variables of T-Type elements not included in the normal tree V̇_ I, are the system state variables.
Third, the constitutive, continuity, and compatibility laws are determined from the normal tree. The constitutive laws are:
dp_C_1/dt = 1/C_1V̇_C_1
dp_C_2/dt = 1/C_2V̇_C_2
dV̇_ I/dt = 1/ I P_ I
P_R_1 = R_1 V̇_R_1
V̇_R_2 = 1/R_2 P_R_2
The continuity laws are:
V̇_C_1 = V̇_f - V̇_ I
V̇_C_2 = V̇_R_1 - V̇_R_2
V̇_R_1 = V̇_ I
The compatibility laws are:
P_ I = P_C_1 - P_C_2 - P_R_1
P_R_2 = P_C_2
Finally, these equations are simplified into the state space model in Eq <ref>.
d/dt[ P_C_1; P_C_2; V̇_I ] = [ 0 0 -1/C_1; 0 -1/C_2 R_2 1/C_2; 1/ I -1/ I -R_1/ I ][ P_C_1; P_C_2; V̇_ I ] + [ 1/C_1; 0; 0 ]V̇_f
§.§ Thermal System
The linear graph methodology is also applied to the thermal system depicted in Fig. <ref>e. To facilitate the process, this thermal system is first converted into an analogous electrical circuit as illustrated in Fig. <ref>. Furthermore, the following notation is adopted.
* T_C_i: The temperature within the icebox, representing the temperature difference across the C_i element, which is the thermal capacitance of the icebox.
* T_R_i: The temperature drop across icebox wall resistor.
* T_C_h: The temperature within the house, representing the temperature difference across the C_h element, which is the thermal capacitance of the house.
* T_R_h: The temperature drop across the house wall resistor.
* Q̇_C_i: The heat flow into the icebox.
* Q̇_R_i: The heat flow through the icebox wall.
* Q̇_C_h: The heat flow into the house.
* Q̇_R_h: The heat flow through the house wall.
From this point, the linear graph methodology is straightforwardly applied. The thermal system in Fig. <ref> is transformed into the linear graph shown in Fig. <ref>. Next, Fig. <ref> shows that, in thermal systems, the through-variable is heat flow rate Q̇, and the across-variable is temperature T. Furthermore, Fig. <ref> states that thermal resistances and capacitances are D-type and A-type elements respectively. Consequently, a heat flow source is a through-variable source.
Second, the associated normal tree is derived in Fig. <ref>. Given the prioritization defined in Sec. <ref>, the C_h, and C_i elements are included, and consequently, Q̇_̇ṡ, R_h, and R_i elements are necessarily removed. The across variables of A-Type elements included in the normal tree T_C_i and T_C_h are the state variables of this system.
Third, the normal tree facilitates the statement of the thermal system's constitutive, continuity, and compatibility laws. The constitutive laws are:
dT_C_i/dt = 1/C_iQ̇_C_i
dT_C_h/dt = 1/C_hQ̇_C_h
Q̇_R_h = 1/R_h T_R_h
Q̇_R_i = 1/R_i T_R_i
The continuity laws are:
Q̇_C_i = Q̇_R_i
Q̇_C_h = Q̇_s - Q̇_R_i - Q̇_R_h
The compatibility laws are:
T_R_h = T_C_h
T_R_i = T_C_h - T_C_i
Finally, these equations are algebraically simplified to the state space model in Eq. <ref>
d/dt[ T_C_i; T_C_h ] = [ -1/C_i R_i 1/C_i R_i; 1/R_i C_h -1/R_i C_h-1/C_h R_h ][ T_C_i; T_C_h ] + [ 0; 1/C_h ]Q̇_s
§.§ Multi-Energy System
First, the linear graph methodology is also applied to the electro-mechanical system shown in Fig. <ref>f. It is transformed into the linear graph shown in Fig. <ref>. As electro-mechanical systems are a combination of electrical and (rotational) mechanical systems, current i and torque τ are the through variables, while voltage V and angular velocity ω are the across-variables. Additionally, referring to Fig. <ref>, the electrical inductor and the rotational spring are the T-type energy storage elements. The electrical capacitor and the rotating disk are A-type energy storage elements. The electrical resistor and the mechanical damper are the D-type elements. The voltage V_S represents an ideal across-variable source. Lastly, a transformer element connects the electrical subsystem to the rotational subsystem and transforms power between the two different domains.
Second, Fig. <ref> shows the normal tree associated with the electro-mechanical system. Given the prioritization defined in Sec. <ref>, the V_S, the J, the electrical branch of the transformer, and the R elements are included. Consequently, the L, the B, and the mechanical branch of the transformer are necessarily removed. As mentioned in the electrical and rotational mechanical system examples, ω_J, and i_L are the state variables of this system.
Third, the normal tree facilitates the statement of the electro-mechanical system's constitutive, continuity, and compatibility laws. The constitutive laws are:
dω_J/dt = 1/Jτ_J
di_L/dt = 1/L V_L
V_R = R i_R
τ_B = B ω_B
V_1 = 1/K_aω_2
τ_2 = -1/K_a i_1
The continuity laws are:
τ_J = -τ_2 - τ_B
i_R = i_L
i_1 = i_L
The compatibility laws are:
V_L = V_s - V_1 - V_R
ω_2 = ω_J
ω_B = ω_J
Finally, these equations are algebraically simplified to the state space model in Eq. <ref>.
d/dt[ ω_J; i_L ] = [ -B/J 1/J K_a; -1/K_a L -R/L ][ ω_J; i_L ] + [ 0; 1/L ] V_s(t)
§ BOND GRAPHS BY EXAMPLE
In order to concretely describe the relationship between linear graphs, bond graphs, and hetero-functional graphs, the same illustrative examples are now modeled using the bond graph methodology. According to Fig. <ref>, and the overview provided in Sec <ref>, the bond graph methodology follows these three main steps:
* Construct the bond graph from the identified system elements.
* State the 0-junction, 1-junction, and constitutive laws of the system using bond graph junctions.
* Simplify the above-mentioned laws into a single state space model.
This section follows each of these three steps for the six illustrative examples identified in Fig. <ref>.
§.§ Electrical System
First, the bond graph associated with the electrical system illustrated in Fig. <ref>a is shown in Fig. <ref>. According to Fig. <ref>, in the electrical system domain, voltage V is the effort variable, and current i is the flow variable. Additionally, as shown in Fig. <ref>, electrical resistors, capacitors, and inductors are categorized as generalized resistors, capacitors, and inductors respectively. Furthermore, in bond graphs, the state variables of this system are the effort variables of generalized C elements (V_C_1), and the flow variables of generalized inductors (i_L_1 and i_L_2).
Second, the constitutive laws in Eq. <ref>-<ref> are retained, Eq. <ref>-<ref> are adopted as 0-junction laws, and Eq. <ref>-<ref> are adopted as 1-junction laws. Finally, these laws are simplified algebraically to produce the state space model in Eq. <ref>.
§.§ Translational Mechanical System
The bond graph methodology is also applied to the translational mechanical system shown in Fig. <ref>b. The bond graph associated with this system is illustrated in Fig. <ref>. According to Fig. <ref>, in the translational mechanical system domain, force F is the effort variable, and velocity V is the flow variable. Referring back to Fig. <ref>, translational dampers, springs, and masses are categorized as generalized resistors, capacitors, and inductors respectively. The state variables of this system are the effort variables of generalized C elements (F_k_1 and F_k_2), and the flow variables of generalized inductors (V_m_1 and V_m_2).
After the bond graph is derived, the constitutive laws in Eq. <ref>-<ref> are retained, Eq. <ref>-<ref> are adopted as 0-junction laws, and Eq. <ref>-<ref> are adopted as 1-junction laws. Finally, these laws are simplified algebraically to produce the state space model in Eq. <ref>.
§.§ Rotational Mechanical System
The bond graph associated with the rotational mechanical system illustrated in Fig. <ref>c is shown in Fig. <ref>. Fig. <ref> shows that in the rotational mechanical system domain, torque τ is the effort variable, and angular velocity ω is the flow variable. Additionally, as shown in Fig. <ref>, rotational dampers, springs, and disks are categorized as generalized resistors, capacitors, and inductors respectively. Furthermore, the state variables of this system are the effort variable of generalized C elements (τ_K), and the flow variable of generalized inductors (ω_J).
Consequently, the constitutive laws in Eq. <ref>-<ref> are retained, Eq. <ref>-<ref> are adopted as 0-junction laws, and Eq. <ref> is adopted as 1-junction law. Finally, these laws are simplified algebraically to produce the state space model in Eq. <ref>.
§.§ Fluidic System
Fig. <ref> is the associated bond graph of the fluidic system illustrated in Fig. <ref>d. According to Fig. <ref>, in the fluidic system domain, pressure P is the effort variable, and volumetric flow rate V̇ is the flow variable. Additionally, as shown in Fig. <ref>, fluid resistances of pipes or valves, fluid tanks, and fluid inertances are categorized as generalized resistors, capacitors, and inductors respectively. Furthermore, the state variables of this fluidic system are the effort variables of generalized C elements (P_C_1 and P_C_2), and the flow variable of the generalized inductor (V̇_ I).
After constructing the bond graph, the constitutive laws in Eq. <ref>-<ref> are retained, Eq. <ref>-<ref> are adopted as 0-junction laws, and Eq. <ref>-<ref> are adopted as 1-junction laws. Finally, these laws are simplified algebraically to produce the state space model in Eq. <ref>.
§.§ Thermal System
First, the bond graph associated with the thermal system and its analogous electrical circuit, illustrated in Fig. <ref>e and Fig. <ref>, is shown in Fig. <ref>. According to Fig. <ref>, in the thermal system domain, temperature T is the effort variable, and heat flow rate Q̇ is the flow variable. Additionally, as shown in Fig. <ref>, thermal resistances and thermal capacitances are categorized as generalized resistors and capacitors respectively. Next, the state variables of this system are the effort variables of generalized C elements (T_C_i and T_C_h).
Second, the constitutive laws in Eq. <ref>-<ref> are retained, Eq. <ref>-<ref> are adopted as 0-junction laws, and Eq. <ref>-<ref> are adopted as 1-junction laws. Finally, these laws are simplified algebraically to produce the state space model in Eq. <ref>.
§.§ Multi-Energy System
The bond graph methodology is applied similarly to the electro-mechanical system shown in Fig. <ref>f. The bond graph associated with this system is shown in Fig. <ref>. As electro-mechanical systems are a combination of electrical and (rotational) mechanical systems, voltage V and torque τ are the effort variables, and current i and angular velocity ω are the flow variables. Additionally, as shown in Fig. <ref>, the electrical resistors and the rotational dampers are generalized resistors. The electrical capacitors and rotational springs are generalized capacitors. The electrical inductors and rotational disks are categorized as generalized inductors. Furthermore, due to the different views in bond graphs, a generalized gyrator connects the electrical subsystem to the mechanical subsystem. Lastly, as mentioned in the electrical and rotational system examples, ω_J and i_L are the state variables of this system.
Next, the constitutive laws in Eq. <ref>-<ref> are retained and Eq. <ref>-<ref> and Eq. <ref>-<ref> are adopted as 1-junction laws. Finally, these laws are simplified algebraically to produce the state space model in Eq. <ref>.
§ HETERO-FUNCTIONAL GRAPHS BY EXAMPLE
In order to continue to concretely describe the relationship between linear graphs, bond graphs, and hetero-functional graphs, the same illustrative examples are now modeled using hetero-functional graph theory. According to Fig. <ref>, and the overview provided in Sec <ref>, the hetero-functional graph methodology follows three main steps:
* Identify the system resources, the system processes, and their associated capabilities following Defn. <ref>-<ref>
* Construct the engineering system net (and operand net if necessary) following Defn. <ref>-<ref>.
* Setup and solve the hetero-functional network minimum cost flow problem as stated in Eq. <ref>-<ref> below.
Before proceeding with derivation for each of the illustrative examples, it is important to recognize that linear graphs and bond graphs make several inherent, and limiting assumptions that are not made in hetero-functional graph theory by default.
* X[k] ∈ℝ ∀ k ∈{1 … K}. In physical systems, the primary decision variables are in the domain of real numbers.
* Y[k] ∈ℝ ∀ k ∈{1 … K}.
Auxiliary decision variables are also in the domain of real numbers.
* Z=0. Linear graphs and bond graphs solve a set of simultaneous differential algebraic equations and do not require optimization. Consequently, a dummy objective function is defined.
* Δ T → 0. Linear graphs and bond graphs model differential algebraic equations where the simulation time step is infinitesimal.
* k_dψ=0 ∀ k, ∀ψ. The duration of each capability (Defn. <ref>) is instantaneous. Consequently, Eq. <ref> becomes:
U_ψ^+[k] = U_ψ^-[k] ∀ k ∈{1, …, K}
Additionally, Eq. <ref> collapses to triviality and is eliminated.
* Q_B[k] = Q_B[k+1] ∀ k ∈{1 … K}. The engineering system does not accumulate operands at its buffers (Defn. <ref>). Furthermore, it is important to recognize that the above treatment of linear graphs and bond graphs only uses power variables (i.e. effort and flow pairs), and Q_B[k] is a displacement variable in the Eulerian view and a momentum variable in the Lagrangian view. Therefore, a 1-to-1 comparison of hetero-functional graphs to linear and bond graphs will not require the Q_B variable. Consequently, Eq. <ref> becomes:
M U[k] Δ T = 0 ∀ k ∈{1, …, K}
where M = M^+ - M^-.
* S_l_i = ∅. E_l_i = ∅. N_l_i = ∅ (Defn. <ref>) All of the operands used in linear graphs and bond graphs have no state evolution and do not require their associated operand nets. Consequently, Eqs. <ref>, <ref>, and <ref> are eliminated. Similarly, without any operand net, there is no need for synchronization with the engineering system net firing vectors. Consequently, Eqs. <ref>, <ref> are eliminated as well.
* The engineering system net boundary condition constraint in Equation <ref> applies only when the engineering system has through-variable sources. In such cases, and in light of the above, the boundary condition constraint becomes:
D_U.U[k] = C_U[k] ∀ k ∈{1, …, K}
Furthermore, the engineering system net boundary condition constraint is used to capture any initial conditions on the engineering system net firing vector.
D_Ui.U[1] = C_Ui[1]
* Without any operand net, its boundary condition constraint in Equation <ref> is eliminated.
* The initial condition constraint in Eq <ref> is also eliminated as Q_B is not retained as a decision variable.
* The final condition constraint in Eq. <ref> is also eliminated as Q_B is not retained as a decision variable. Furthermore, all of the linear graph and bond graph models described above are initial value (rather than final value) problems.
* E_CP→ -∞, E_CP→∞. The linear graph and bond graph methodologies do not place lower or upper bounds on the primary decision variables. Consequently, the inequality constraints on primary decision variables in Eq. <ref> are eliminated.
* The device model functions g(X,Y) in Eq. <ref> become the engineering system's constitutive laws. As elaborated below, they relate the engineering system net's primary variables (i.e. through variables) to its auxiliary variables (i.e. across variables).
* The device model function h(Y) in Eq. <ref> places bounds on the engineering system net's auxiliary variables. Such constraints are used in linear graphs and bond graphs to impose times series from across variable sources.
h(y[k]) = C_Y[k] ∀ k ∈{1, …, K}
They are also used to impose initial conditions on the auxiliary variables.
h_i(y[1]) = C_yi[1]
In summary, the HFNMCF problem stated in Eq. <ref>-<ref> collapses to the following optimization problem in the context of linear and bond graph models.
minimize Z = 0
s.t. MU[k]ΔT = 0 ∀k ∈{1, …, K}
D_U.U[k] = C_U[k] ∀k ∈{1, …, K}
D_Ui.U[1] = C_Ui[1]
g(X,Y) =0
h(Y) = C_Y[k] ∀k ∈{1, …, K}
h_i(y[1]) = C_yi[1]
As mentioned above, the device model constraint in the HFNMCF problem shown first in Eq. <ref> and now in Eq. <ref> represents the constitutive laws in linear graphs and bond graphs. As there are only a small number of generalized elements (e.g. resistor, inductor, capacitor, transformer, and gyrator), it is worthwhile recognizing that these generalized elements take on generic forms in the context of hetero-functional graph theory. For generalized resistors:
S_R · U[k] = Z_R· S_R · (-M)^T · y[k] ∀ k ∈{1, …, K}
where S_R is a projection operator that serves to select out the relevant primary (i.e. through) variables from the engineering system net firing vector. Furthermore, Z_R is a diagonal matrix of resistance values. The engineering system net incidence matrix adopts a negative sign so that the across variables y lose magnitude in the direction of flow. Note that Eq. <ref>, quite appropriately, is an algebraic relation between across and through variables at the same discrete time step k. Next, for generalized inductors (in the Eulerian view, and generalized capacitors in the Lagrangian view):
S_L · (U[k+1]-U[k]) = Z_L · S_L · (-M)^T · y[k] ·Δ T ∀ k ∈{1, …, K-1}
where S_L is a projection operator that serves to select out the relevant primary (i.e. through) variables from the engineering system net firing vector. Furthermore, Z_L is a diagonal matrix of inductance values. Note that Eq. <ref>, quite appropriately, is a difference equation that results from discretizing the generalized inductor law via an Euler transformation. Next, for generalized capacitors (in the Eulerian view and generalized inductors in the Lagrangian view):
S_C · U[k] ·Δ T = Z_C · S_C · (-M)^T · (y[k+1]-y[k]) ∀ k ∈{1, …, K-1}
where S_C is a projection operator that serves to select out the relevant primary (i.e. through) variables from the engineering system net firing vector. Furthermore, Z_C is a diagonal matrix of capacitance values. Once again, Eq. <ref> results from the discretization of the generalized capacitance law via an Euler transformation. Next, for generalized transformers:
S_T y[k] = 0 ∀ k ∈{1, …, K}
where S_T is the projection operator that serves to select out the transformer's relevant auxiliary (i.e. across) variables from the engineering system net. Finally, for generalized gyrators:
S_G_1U[k] = S_G_2 y[k] ∀ k ∈{1, …, K}
where S_G_1 and S_G_2 are the projection operators that serve to select out the gyrator's relevant primary (i.e. through) and auxiliary (i.e. across) variables from the engineering system net.
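For reference, these element laws can be restated as residual functions that a feasible HFNMCF solution drives to zero. The Python sketch below assumes U_k and y_k are NumPy vectors of through and across variables at step k, the S matrices are 0/1 selectors, the Z matrices are the diagonal coefficient matrices defined above, and M is the engineering system net incidence matrix; it is an illustrative restatement rather than the HFGT toolbox API.

# Residuals of the discretized generalized element laws; each should equal the
# zero vector along a feasible trajectory of the collapsed HFNMCF problem.
def resistor_residual(S_R, Z_R, M, U_k, y_k):
    return S_R @ U_k - Z_R @ S_R @ (-M).T @ y_k

def inductor_residual(S_L, Z_L, M, U_k, U_k1, y_k, dT):
    return S_L @ (U_k1 - U_k) - Z_L @ S_L @ (-M).T @ y_k * dT

def capacitor_residual(S_C, Z_C, M, U_k, y_k, y_k1, dT):
    return S_C @ U_k * dT - Z_C @ S_C @ (-M).T @ (y_k1 - y_k)

def transformer_residual(S_T, y_k):
    return S_T @ y_k

def gyrator_residual(S_G1, S_G2, U_k, y_k):
    return S_G1 @ U_k - S_G2 @ y_k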
Given this specialized application of hetero-functional graphs to linear graphs and bond graphs, each of the six illustrative examples shown in Fig. <ref> can be solved as the HFNMCF problem using the three steps identified at the top of the section.
§.§ Electrical System
The first step is to recognize that the electrical system shown in Fig. <ref>a is first, a specialization into the electrical domain, followed by an instantiation of the engineering system meta-architecture in Fig. <ref>. Consequently, Defn. <ref>-<ref> are understood as follows. There are five electrical points that have distinct absolute values of across-variables that serve as independent buffers: V_S, V_R_L, V_C_1, V_L_2 and V_0. Additionally, there is one across-variable source V_S that serves as a transformation resource. Additionally, the transportation resources include generalized resistors R_1, R_2, R_3, generalized inductors L_1, L_2, and a generalized capacitor C_1. Fig. <ref> shows that each of these transformation and transportation resources has exactly one system process; inject power with imposed through variable, dissipate power, store potential energy, and store kinetic energy. The result is that each of these transformation and transportation resources introduces exactly one system capability with a primary through variable and an auxiliary across variable as attributes.
In the next step, the electrical system shown in Fig. <ref>a is transformed into the engineering system net shown in Fig. <ref>.
While it is possible to produce Fig. <ref> from the electrical circuit in Fig. <ref>a by visual inspection; such an approach falls apart in systems with multiple energy domains or capabilities with multiple inputs and outputs. Instead, the Engineering System Net and its state transition function is constructed according to Defn. <ref> and <ref>. Note that the hetero-functional graph theory toolbox <cit.> can automatically calculate the positive and negative hetero-functional incidence matrices from an XML input file that instantiates the information from Defn. <ref> - <ref>. The Engineering System Net in Fig. <ref> shows the system buffers as places, the capabilities as transitions, and the incidence between them.
In the third step, the hetero-functional network minimum cost flow problem is set up and solved. More specifically, Eq. <ref>-<ref> are written out explicitly.
minimize Z = 0
s.t. [ +1 -1 0 0 0 0 0; 0 +1 -1 0 0 0 0; 0 0 +1 -1 -1 0 -1; 0 0 0 0 +1 -1 0; ][ i_V_S; i_R_1; i_L_1; i_C_1; i_R_2; i_L_2; i_R_3; ][k]Δ T = 0 ∀ k ∈{1, …, K}
[ 0 0 1 0 0 0 0; 0 0 0 0 0 1 0 ][ i_V_S; i_R_1; i_L_1; i_C_1; i_R_2; i_L_2; i_R_3 ][1]
=
[ 0; 0 ]
[ 0 1 0 0 0 0 0; 0 0 0 0 1 0 0; 0 0 0 0 0 0 1 ][ i_V_S; i_R_1; i_L_1; i_C_1; i_R_2; i_L_2; i_R_3 ][k]
=
[ 1/R_1 0 0; 0 1/R_2 0; 0 0 1/R_3 ][ 0 1 0 0 0 0 0; 0 0 0 0 1 0 0; 0 0 0 0 0 0 1 ][ -1 0 0 0; +1 -1 0 0; 0 +1 -1 0; 0 0 +1 0; 0 0 +1 -1; 0 0 0 +1; 0 0 +1 0 ][ V_S; V_RL; V_C_1; V_L_2 ][k]
∀ k ∈{1, …, K}
[ 0 0 1 0 0 0 0; 0 0 0 0 0 1 0 ](
[ i_V_S; i_R_1; i_L_1; i_C_1; i_R_2; i_L_2; i_R_3 ][k+1]
-
[ i_V_S; i_R_1; i_L_1; i_C_1; i_R_2; i_L_2; i_R_3 ][k]
)
=
[ 1/L_1 0; 0 1/L_2 ][ 0 0 1 0 0 0 0; 0 0 0 0 0 1 0 ][ -1 0 0 0; +1 -1 0 0; 0 +1 -1 0; 0 0 +1 0; 0 0 +1 -1; 0 0 0 +1; 0 0 +1 0 ][ V_S; V_RL; V_C_1; V_L_2 ][k]Δ T
∀ k ∈{1, …, K-1}
[ 0 0 0 1 0 0 0 ][ i_V_S; i_R_1; i_L_1; i_C_1; i_R_2; i_L_2; i_R_3 ][k] Δ T
=
[ C_1 ][ 0 0 0 1 0 0 0 ][ -1 0 0 0; +1 -1 0 0; 0 +1 -1 0; 0 0 +1 0; 0 0 +1 -1; 0 0 0 +1; 0 0 +1 0 ](
[ V_S; V_RL; V_C_1; V_L_2 ][k+1]
-
[ V_S; V_RL; V_C_1; V_L_2 ][k]
)
∀ k ∈{1, …, K-1}
[ 1 0 0 0 ][ V_S; V_RL; V_C_1; V_L_2 ][k]
= 1 ∀ k ∈{1, …, K}
[ 0 0 1 0 ][ V_S; V_RL; V_C_1; V_L_2 ][k=1]
= 0
This explicit statement of the HFNMCF problem in the context of the electrical system shown in Fig. <ref>a provides the following insights:
* Eq. <ref> shows that the electrical system does not have an objective function and is simply a set of simultaneous equations.
* Eq. <ref> is a matrix restatement of the continuity laws in Eq. <ref> - <ref>. Note that Eq. <ref> introduces an additional matrix row to account for the current provided by the voltage source i_V_S. While this variable is not required in the linear graph and bond graph methodologies, the HFGT derivation requires across and through variables for all capabilities. Also note that Eq. <ref> does not include the current balance associated with the voltage ground V_0. This is because incidence matrices of closed systems (i.e. circuits) have a rank of N-1<cit.> and so the last redundant equation must be eliminated to make Eq. <ref> full rank. Finally, the hetero-functional incidence matrix M can be automatically produced from the HFGT toolbox <cit.> for systems of arbitrary size.
* There are no equations that impose exogenous values on the currents because there are no current sources.
* Eq. <ref> is the initial condition on the inductors' current as the state variables.
* Eq. <ref> is a matrix restatement of the constitutive law for resistors (i.e. Ohm's Law) in Eq. <ref>, <ref>, and <ref>.
* Eq. <ref> is a matrix restatement of the constitutive law for inductors in Eq. <ref> and <ref>.
* Eq. <ref> is a matrix restatement of the constitutive law for capacitors in Eq. <ref>.
* The selector matrices in Eq. <ref>-<ref> can be automatically produced from the HFGT toolbox <cit.> for systems of arbitrary size.
* Eq. <ref> imposes exogenous values on the voltages due to the presence of voltage sources.
* Eq. <ref> is the initial condition on the capacitor voltage as a state variable.
* The compatibility laws stated in Eq. <ref> - <ref> are superfluous because all of the voltages have been stated in absolute terms relative to the ground rather than as voltage differences between electrical points.
Ultimately, the HFNMCF problem restates, in discrete time, and in matrix form, the simultaneous continuity and constitutive equations from the linear graph derivation and eliminates entirely the need for compatibility equations. Furthermore, the HFNMCF problem reveals that only the continuity laws create relationships between capabilities (Defn. <ref>) in the engineering system. The remaining constraints, those tied to initial conditions, exogenous values, and constitutive laws, address capabilities individually. Consequently, the HFGT toolbox<cit.> makes setting up the HFNMCF problem relatively straightforward because it can automatically generate the hetero-functional graph structure and keep track of the indices and types of each capability.
Once the HFNMCF problem for the electrical system has been set up, it can be simulated straightforwardly and compared against the state space ODE model derived by linear graph and/or bond graph. The following parameter values are chosen:
R_1 = 200 Ω, R_2 = 200 Ω, R_3 = 220 Ω, L_1 = 100 mH, L_2 = 150 mH, C_1 = 10 μF, and V_s = 1 V (step input voltage). The simulation time is t = 0.01 seconds, and the time step Δ T = 1e-4 seconds. The HFNMCF results for the primary (current) and auxiliary (voltage) decision variables are shown as solid lines in Fig. <ref>. The associated state space ODE results are shown in dashed lines with embedded triangles.
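Because Z = 0, the collapsed HFNMCF for this circuit is simply a square set of simultaneous linear equations in the stacked firing and across-variable vectors. The Python sketch below assembles and solves that system directly from the continuity matrix, the element laws, the source constraint, and the initial conditions, using the horizon quoted above (K = 100 steps of Δ T = 10^-4 s, i.e. t = 0.01 s); the variable ordering and indexing helpers are our own choices, and the snippet is independent of the HFGT toolbox.

import numpy as np

# Variable ordering per step k: U[k] = (i_VS, i_R1, i_L1, i_C1, i_R2, i_L2, i_R3)
# and y[k] = (V_S, V_RL, V_C1, V_L2).
R1, R2, R3 = 200.0, 200.0, 220.0
L1, L2, C1 = 0.1, 0.15, 10e-6
dT, K = 1e-4, 100

M = np.array([[+1, -1,  0,  0,  0,  0,  0],    # node V_S
              [ 0, +1, -1,  0,  0,  0,  0],    # node V_RL
              [ 0,  0, +1, -1, -1,  0, -1],    # node V_C1
              [ 0,  0,  0,  0, +1, -1,  0]])   # node V_L2
MT = (-M).T                                    # across-variable drop per capability

nU, nY = 7, 4
N = K * (nU + nY)                              # total number of unknowns
iU = lambda k: k * nU                          # offset of U[k] in the unknown vector
iY = lambda k: K * nU + k * nY                 # offset of y[k] in the unknown vector

A = np.zeros((N, N))
b = np.zeros(N)
row = 0
def add(cols, vals, rhs=0.0):
    global row
    A[row, cols] = vals
    b[row] = rhs
    row += 1

for k in range(K):
    for n in range(4):                         # continuity:  M U[k] = 0
        add(iU(k) + np.arange(nU), M[n])
    for j, R in zip([1, 4, 6], [R1, R2, R3]):  # resistors:   i = (1/R) * drop
        add(np.r_[iU(k) + j, iY(k) + np.arange(nY)], np.r_[1.0, -MT[j] / R])
    add([iY(k) + 0], [1.0], 1.0)               # source:      V_S[k] = 1
for k in range(K - 1):
    for j, L in zip([2, 5], [L1, L2]):         # inductors:   di = (drop / L) dT
        add(np.r_[iU(k + 1) + j, iU(k) + j, iY(k) + np.arange(nY)],
            np.r_[1.0, -1.0, -MT[j] * dT / L])
    add(np.r_[iU(k) + 3, iY(k + 1) + np.arange(nY), iY(k) + np.arange(nY)],
        np.r_[dT, -C1 * MT[3], C1 * MT[3]])    # capacitor:   i_C1 dT = C1 d(drop)
add([iU(0) + 2], [1.0])                        # initial condition: i_L1 = 0
add([iU(0) + 5], [1.0])                        # initial condition: i_L2 = 0
add([iY(0) + 2], [1.0])                        # initial condition: V_C1 = 0
assert row == N                                # the system is square

z = np.linalg.solve(A, b)
V_C1_traj = z[[iY(k) + 2 for k in range(K)]]
print(V_C1_traj[-1])                           # capacitor voltage at t = 0.01 s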
§.§ Translational Mechanical System
The first step is to recognize that the translational mechanical system shown in Fig. <ref>b is first, a specialization into the mechanical domain, followed by an instantiation of the engineering system meta-architecture in Fig. <ref>. Consequently, Defn. <ref>-<ref> are understood as follows. There are four points that have distinct absolute values of across-variables that serve as independent buffers: V_m_1, V_k_2b_2, V_m_2 and V_g. Additionally, there is one through-variable source F_S that serves as a transformation resource. Additionally, the transportation resources include generalized resistors b_1, b_2, generalized inductors k_1, k_2, and generalized capacitors m_1, m_2. Fig. <ref> shows that each of these transformation and transportation resources has exactly one system process; inject power with imposed through variable, dissipate power, store potential energy, and store kinetic energy. The result is that each of these transformation and transportation resources introduces exactly one system capability with a primary through variable and an auxiliary across variable as attributes.
In the next step, the translational mechanical system shown in Fig. <ref>b is transformed into the engineering system net shown in Fig. <ref>.
While it is possible to produce Fig. <ref> from either the translational mechanical system in Fig. <ref>b or the associated linear graph in Fig. <ref> by visual inspection; such an approach falls apart in systems with multiple energy domains or capabilities with multiple inputs and outputs. Instead, the Engineering System Net and its state transition function is constructed according to Defn. <ref> and <ref>. Note that the hetero-functional graph theory toolbox <cit.> can automatically calculate the positive and negative hetero-functional incidence matrices from an XML input file that instantiates the information from Defn. <ref> - <ref>. The Engineering System Net in Fig. <ref> shows the system buffers as places, the capabilities as transitions, and the incidence between them.
In the third step, the hetero-functional network minimum cost flow problem is set up and solved. More specifically, Eq. <ref>-<ref> are written out explicitly.
minimize Z = 0
s.t. [ +1 -1 +1 0 0 0 0; 0 0 -1 +1 0 0 0; 0 0 0 -1 -1 -1 -1 ][ F_S; F_m_2; F_b_2; F_k_2; F_m_1; F_b_1; F_k_1 ][k]Δ T = 0 ∀ k ∈{1, …, K}
[ 1 0 0 0 0 0 0 ][ F_S; F_m_2; F_b_2; F_k_2; F_m_1; F_b_1; F_k_1 ][k]
=
1 ∀ k ∈{1, …, K}
[ 0 0 0 1 0 0 0; 0 0 0 0 0 0 1 ][ F_S; F_m_2; F_b_2; F_k_2; F_m_1; F_b_1; F_k_1 ][k=1]
=
[ 0; 0 ]
[ 0 0 0 0 0 1 0; 0 0 1 0 0 0 0; ][ F_S; F_m_2; F_b_2; F_k_2; F_m_1; F_b_1; F_k_1 ][k]
=
[ b_1 0; 0 b_2 ][ 0 0 0 0 0 1 0; 0 0 1 0 0 0 0; ][ -1 0 0; +1 0 0; -1 +1 0; 0 -1 +1; 0 0 +1; 0 0 +1; 0 0 +1; ][ V_m_2; V_k_2b_2; V_m_1 ][k]
∀ k ∈{1, …, K}
[ 0 0 0 0 0 0 1; 0 0 0 1 0 0 0; ](
[ F_S; F_m_2; F_b_2; F_k_2; F_m_1; F_b_1; F_k_1; ][k+1]
-
[ F_S; F_m_2; F_b_2; F_k_2; F_m_1; F_b_1; F_k_1; ][k]
)
=
[ k_1 0; 0 k_2 ][ 0 0 0 0 0 0 1; 0 0 0 1 0 0 0; ][ -1 0 0; +1 0 0; -1 +1 0; 0 -1 +1; 0 0 +1; 0 0 +1; 0 0 +1; ][ V_m_2; V_k_2b_2; V_m_1 ][k]Δ T
∀ k ∈{1, …, K-1}
[ 0 0 0 0 1 0 0; 0 1 0 0 0 0 0; ][ F_S; F_m_2; F_b_2; F_k_2; F_m_1; F_b_1; F_k_1; ][k] Δ T
=
[ m_1 0; 0 m_2 ][ 0 0 0 0 1 0 0; 0 1 0 0 0 0 0; ][ -1 0 0; +1 0 0; -1 +1 0; 0 -1 +1; 0 0 +1; 0 0 +1; 0 0 +1; ](
[ V_m_2; V_k_2b_2; V_m_1 ][k+1]
-
[ V_m_2; V_k_2b_2; V_m_1 ][k]
)
∀ k ∈{1, …, K-1}
[ 1 0 0; 0 0 1 ][ V_m_2; V_k_2b_2; V_m_1 ][k=1]
=
[ 0; 0 ]
This explicit statement of the HFNMCF problem in the context of the translational mechanical system shown in Fig. <ref>b provides the following insights:
* Eq. <ref> shows that the mechanical system does not have an objective function and is simply a set of simultaneous equations.
* Eq. <ref> is a matrix restatement of the continuity laws (i.e. Newton’s First Law) in Eq. <ref> - <ref>. Again, the force balance on the ground place is redundant and therefore eliminated.
* Eq. <ref> imposes exogenous values on the force due to the presence of force sources.
* Eq. <ref> is the initial condition on the spring force as a state variable.
* Eq. <ref> is a matrix restatement of the constitutive law for mechanical dampers in Eq. <ref>, and <ref>.
* Eq. <ref> is a matrix restatement of the constitutive law for springs in Eq. <ref> and <ref>.
* Eq. <ref> is a matrix restatement of the constitutive law for masses in Eq. <ref> and <ref>.
* There are no equations that impose exogenous values on the velocities because there are no velocity sources.
* Eq. <ref> is the initial condition on the mass velocities as state variables.
* The compatibility laws stated in Eq. <ref> - <ref> are superfluous because all of the velocities have been stated in absolute terms relative to the ground reference frame rather than as velocity differences between points.
Once the HFNMCF problem for the translational mechanical system has been set up, it can be simulated straightforwardly and compared against the state space ODE model derived by linear graph and/or bond graph. The following parameter values are chosen: m_1 = 1 kg, m_2 = 2 kg, k_1 = 20 N/m, k_2 = 10 N/m, b_1 = 1 Ns/m, b_2 = 10 Ns/m, and F_s = 1 N (Step input force). The simulation time K = 20 seconds, and the time step Δ T = 0.02 seconds. The HFNMCF results for the primary (force) and auxiliary (velocity) decision variables are shown as solid lines in Fig. <ref>. The associated state space ODE results are shown in dashed lines with embedded triangles.
§.§ Rotational Mechanical System
The first step is to recognize that the rotational mechanical system shown in Fig. <ref>c is first, a specialization into the rotational mechanical domain, followed by an instantiation of the engineering system meta-architecture in Fig. <ref>. Consequently, Defn. <ref>-<ref> are understood as follows. There are two points that have distinct absolute values of across-variables that serve as independent buffers: ω_J, and ω_g. Additionally, there is one through-variable source τ_s that serves as a transformation resource. Additionally, the transportation resources include a generalized resistor b, a generalized inductor K, and a generalized capacitor J. Fig. <ref> shows that each of these transformation and transportation resources has exactly one system process; inject power with imposed through variable, dissipate power, store potential energy, and store kinetic energy. The result is that each of these transformation and transportation resources introduces exactly one system capability with a primary through-variable and an auxiliary across-variable as attributes.
In the next step, the rotational mechanical system shown in Fig. <ref>c is transformed into the engineering system net shown in Fig. <ref>.
While it is possible to produce Fig. <ref> from the rotational mechanical system in Fig. <ref>c or the linear graph in Fig. <ref> by visual inspection; such an approach falls apart in systems with multiple energy domains or capabilities with multiple inputs and outputs. Instead, the Engineering System Net and its state transition function is constructed according to Defn. <ref> and <ref>. Again, the hetero-functional graph theory toolbox <cit.> automatically calculates the positive and negative hetero-functional incidence matrices from an XML input file that instantiates the information from Defn. <ref> - <ref>. The Engineering System Net in Fig. <ref> shows the system buffers as places, the capabilities as transitions, and the incidence between them.
In the third step, the hetero-functional network minimum cost flow problem is set up and solved. More specifically, Eq. <ref>-<ref> are written out explicitly.
minimize Z = 0
s.t. [ +1 -1 -1 -1 ][ τ_s; τ_J; τ_K; τ_b ][k]Δ T = 0 ∀ k ∈{1, …, K}
[ 1 0 0 0 ][ τ_s; τ_J; τ_K; τ_b ][k]
= 1 ∀ k ∈{1, …, K}
[ 0 0 1 0 ][ τ_s; τ_J; τ_K; τ_b ][k=1]
=
0
[ 0 0 0 1 ][ τ_s; τ_J; τ_K; τ_b ][k]
=
[ b_ω ][ 0 0 0 1 ][ -1; +1; +1; +1 ][ ω_J ][k]
∀ k ∈{1, …, K}
[ 0 0 1 0 ](
[ τ_s; τ_J; τ_K; τ_b ][k+1]
-
[ τ_s; τ_J; τ_K; τ_b ][k]
)
=
[ K_ω ][ 0 0 1 0 ][ -1; +1; +1; +1 ][ ω_J ][k]Δ T
∀ k ∈{1, …, K-1}
[ 0 1 0 0 ][ τ_s; τ_J; τ_K; τ_b ][k] Δ T
=
[ J ][ 0 1 0 0 ][ -1; +1; +1; +1 ](
[ ω_J ][k+1]
-
[ ω_J ][k]
)
∀ k ∈{1, …, K-1}
[ ω_J ][k=1]
=
0
This explicit statement of the HFNMCF problem in the context of the rotational mechanical system shown in Fig. <ref>c provides the following insights:
* Eq. <ref> shows that mechanical system does not have an objective function and is simply a set of simultaneous equations.
* Eq. <ref> is a matrix restatement of the continuity law in Eq. <ref>. Again, the torque balance on the ground place is redundant and therefore eliminated.
* Eq. <ref> imposes exogenous values on the torque due to the presence of torque sources.
* Eq. <ref> is the initial condition on the rotational spring torque as a state variable.
* Eq. <ref> is a matrix restatement of the constitutive law for rotational dampers in Eq. <ref>.
* Eq. <ref> is a matrix restatement of the constitutive law for rotational springs in Eq. <ref>.
* Eq. <ref> is a matrix restatement of the constitutive law for rotational inertias in Eq. <ref>.
* The selector matrices in Eq. <ref>-<ref> can be automatically produced from the HFGT toolbox <cit.> for systems of arbitrary size.
* There are no equations that impose exogenous values on the angular velocity because there are no angular velocity sources.
* Eq. <ref> is the initial condition on the disk angular velocity as a state variable.
* The compatibility laws stated in Eq. <ref> and <ref> are superfluous because all of the angular velocities have been stated in absolute terms relative to the ground reference frame rather than as angular velocity differences between points.
Once the HFNMCF problem for the rotational mechanical system has been set up, it can be simulated straightforwardly and compared against the state space ODE model derived by linear graph and/or bond graph. The following parameter values are chosen: J = 0.5 kg·m^2, k = 2 N·m/rad, b = 0.5 N·m·s/rad, and τ_s = 1 N·m (Step input torque). The simulation time K = 15 seconds, and the time step Δ T = 0.1 seconds. The HFNMCF results for the primary (torque) and auxiliary (angular velocity) decision variables are shown as solid lines in Fig. <ref>. The associated state space ODE results are shown in dashed lines with embedded triangles.
§.§ Fluidic System
The first step is to recognize that the fluidic system shown in Fig. <ref>d is first, a specialization into the fluidic domain, followed by an instantiation of the engineering system meta architecture in Fig. <ref>. Consequently, Defn. <ref>-<ref> are understood as follows. There are four fluidic points that have distinct absolute values of across-variables that serve as independent buffers: P_1, P_ℐ R_1, P_2, and P_0. Additionally, there is one through-variable source V̇_f that serves as a transformation resource. Additionally, the transportation resources include generalized resistors R_1 and R_2, a generalized inductor I, and generalized capacitors C_1 and C_2. Fig. <ref> shows that each of these transformation and transportation resources has exactly one system process; inject power with imposed through variable, dissipate power, store potential energy, and store kinetic energy. The result is that each of these transformation and transportation resources introduces exactly one system capability with a primary through-variable and an auxiliary across-variable as attributes.
In the next step, the fluidic system shown in Fig. <ref>d is transformed into the engineering system net shown in Fig. <ref>.
While it is possible to produce Fig. <ref> from the fluidic system in Fig. <ref>d and the linear graph in Fig. <ref> by visual inspection; such an approach falls apart in systems with multiple energy domains or capabilities with multiple inputs and outputs. Instead, the Engineering System Net and its state transition function is constructed according to Defn. <ref> and <ref>. Again, the hetero-functional graph theory toolbox <cit.> automatically calculates the positive and negative hetero-functional incidence matrices from an XML input file that instantiates the information from Defn. <ref> - <ref>. The Engineering System Net in Fig. <ref> shows the system buffers as places, the capabilities as transitions, and the incidence between them.
In the third step, the hetero-functional network minimum cost flow problem is set up and solved. More specifically, Eq. <ref>-<ref> are written out explicitly.
minimize Z = 0
s.t. [ +1 -1 -1 0 0 0; 0 +1 0 -1 0 0; 0 0 0 +1 -1 -1 ][ V̇_f; V̇_ I; V̇_C_1; V̇_R_1; V̇_C_2; V̇_R_2 ][k]Δ T = 0 ∀ k ∈{1, …, K}
[ 1 0 0 0 0 0 ][ V̇_f; V̇_ I; V̇_C_1; V̇_R_1; V̇_C_2; V̇_R_2 ][k]
=
1 ∀ k ∈{1, …, K}
[ 0 1 0 0 0 0 ][ V̇_f; V̇_ I; V̇_C_1; V̇_R_1; V̇_C_2; V̇_R_2 ][k=1]
=
0
[ 0 0 0 1 0 0; 0 0 0 0 0 1; ][ V̇_f; V̇_ I; V̇_C_1; V̇_R_1; V̇_C_2; V̇_R_2 ][k]
=
[ 1/R_1 0; 0 1/R_2 ][ 0 0 0 1 0 0; 0 0 0 0 0 1; ][ -1 0 0; +1 -1 0; +1 0 0; 0 +1 -1; 0 0 +1; 0 0 +1; ][ P_1; P_ℐ R_1; P_2 ][k]
∀ k ∈{1, …, K}
[ 0 1 0 0 0 0 ](
[ V̇_f; V̇_ I; V̇_C_1; V̇_R_1; V̇_C_2; V̇_R_2 ][k+1]
-
[ V̇_f; V̇_ I; V̇_C_1; V̇_R_1; V̇_C_2; V̇_R_2 ][k]
)
=
[ 1/ I ][ 0 1 0 0 0 0 ][ -1 0 0; +1 -1 0; +1 0 0; 0 +1 -1; 0 0 +1; 0 0 +1; ][ P_1; P_ℐ R_1; P_2 ][k]Δ T
∀ k ∈{1, …, K-1}
[ 0 0 1 0 0 0; 0 0 0 0 1 0; ][ V̇_f; V̇_ I; V̇_C_1; V̇_R_1; V̇_C_2; V̇_R_2 ][k] Δ T
=
[ C_1 0; 0 C_2 ][ 0 0 1 0 0 0; 0 0 0 0 1 0; ][ -1 0 0; +1 -1 0; +1 0 0; 0 +1 -1; 0 0 +1; 0 0 +1; ](
[ P_1; P_ℐ R_1; P_2 ][k+1]
-
[ P_1; P_ℐ R_1; P_2 ][k]
)
∀ k ∈{1, …, K-1}
[ 1 0 0; 0 0 1 ][ P_1; P_ℐ R_1; P_2 ][k=1]
=
[ 0; 0 ]
This explicit statement of the HFNMCF problem in the context of the fluidic system shown in Fig. <ref>d provides the following insights:
* Eq. <ref> shows that the fluidic system does not have an objective function and is simply a set of simultaneous equations.
* Eq. <ref> is a matrix restatement of the continuity laws in Eq. <ref> - <ref>. Again, the volumetric flow rate balance on the ground place is redundant and therefore eliminated.
* Eq. <ref> imposes exogenous values on the volumetric flow rate due to the presence of a volumetric flow rate source.
* Eq. <ref> is the initial condition on the pipe inductance volumetric flow rate as a state variable.
* Eq. <ref> is a matrix restatement of the constitutive law for the elements' fluidic resistances in Eq. <ref>, and <ref>.
* Eq. <ref> is a matrix restatement of the constitutive law for the elements' fluidic inductances in Eq. <ref>.
* Eq. <ref> is a matrix restatement of the constitutive law for the tanks in Eq. <ref> and <ref>.
* The selector matrices in Eq. <ref>-<ref> can be automatically produced from the HFGT toolbox <cit.> for systems of arbitrary size.
* There are no equations that impose exogenous values on the pressure because there are no pressure sources.
* Eq. <ref> is the initial condition on the tank pressures as state variables.
* The compatibility laws stated in Eq. <ref> and <ref> are superfluous because all of the pressures have been stated in absolute terms relative to the reference ambient pressure rather than as pressure differences between fluidic points.
Once the HFNMCF problem for the fluidic system has been set up, it can be simulated straightforwardly and compared against the state space ODE model derived by linear graph and/or bond graph. The following parameter values are chosen: R_1 = 2 s/m^2, R_2 = 1 s/m^2, C_1 = 0.02 m^3, C_2 = 0.05 m^3, ℐ = 2 N·s^2/m^5, and v̇_f = 1 m^3/s (Step input volumetric flow rate). The simulation time K = 10 seconds, and the time step Δ T = 0.02 seconds. The HFNMCF results for the primary (volumetric flow rate) and auxiliary (pressure) decision variables are shown as solid lines in Fig. <ref>. The associated state space ODE results are shown in dashed lines with embedded triangles.
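For readers without access to the toolbox, the continuity and constitutive laws written out above can be checked with a few lines of Python. The sketch below is a sequential forward-Euler reading of those laws (the HFNMCF problem instead solves all K time steps as one set of simultaneous equations); the variable names are chosen for readability and are not part of any toolbox API.
```python
# Fluidic example: R1=2, R2=1 s/m^2, C1=0.02, C2=0.05 m^3, I=2 N*s^2/m^5, step source
R1, R2, C1, C2, I = 2.0, 1.0, 0.02, 0.05, 2.0
Vdot_f = 1.0
dT, T_end = 0.02, 10.0

Vdot_I, P1, P2 = 0.0, 0.0, 0.0            # initial conditions, as in the equations above
for _ in range(int(T_end / dT)):
    P_IR1 = P2 + R1 * Vdot_I              # node between inductor and R1: Vdot_I = (P_IR1 - P2)/R1
    Vdot_C1 = Vdot_f - Vdot_I             # continuity at node 1: source splits into I and C1
    Vdot_C2 = Vdot_I - P2 / R2            # continuity at node 2: inductor flow splits into C2 and R2
    Vdot_I += dT * (P1 - P_IR1) / I       # pipe inductance constitutive law
    P1     += dT * Vdot_C1 / C1           # tank 1 constitutive law
    P2     += dT * Vdot_C2 / C2           # tank 2 constitutive law

print(f"t=10s: P1~{P1:.3f}, P2~{P2:.3f}, Vdot_I~{Vdot_I:.3f}")  # step response approaching steady state
```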
§.§ Thermal System
The first step is to recognize that the thermal system shown in Fig. <ref>e is first a specialization into the thermal domain, followed by an instantiation of the engineering system meta architecture in Fig. <ref>. Consequently, Defn. <ref>-<ref> are understood as follows. There are three thermal points that have distinct absolute values of across-variables that serve as independent buffers: T_C_h, T_C_i, and T_0. Additionally, there is one through-variable source Q̇_s that serves as a transformation resource, while the transportation resources include generalized resistors R_i and R_h, and generalized capacitors C_i and C_h. Fig. <ref> shows that each of these transformation and transportation resources has exactly one system process: inject power with imposed through variable, dissipate power, and store (thermal) potential energy. The result is that each of these transformation and transportation resources introduces exactly one system capability with a primary through-variable and an auxiliary across-variable as attributes.
In the next step, the thermal system shown in Fig. <ref>e is transformed into the engineering system net shown in Fig. <ref>.
While it is possible to produce Fig. <ref> from the thermal system in Fig. <ref>e, its analogous electrical circuit in Fig. <ref>, and/or its linear graph in Fig. <ref> by visual inspection, such an approach falls apart in systems with multiple energy domains or capabilities with multiple inputs and outputs. Instead, the Engineering System Net and its state transition function are constructed according to Defn. <ref> and <ref>. Note that the hetero-functional graph theory toolbox <cit.> can automatically calculate the positive and negative hetero-functional incidence matrices from an XML input file that instantiates the information from Defn. <ref> - <ref>. The Engineering System Net in Fig. <ref> shows the system buffers as places, the capabilities as transitions, and the incidence between them.
In the third step, the hetero-functional network minimum cost flow problem is set up and solved. More specifically, Eq. <ref>-<ref> are written out explicitly.
minimize Z = 0
s.t. [ +1 -1 0 -1 -1; 0 +1 -1 0 0; ][ Q̇_s; Q̇_R_i; Q̇_C_i; Q̇_C_h; Q̇_R_h ][k]Δ T = 0 ∀ k ∈{1, …, K}
[ 1 0 0 0 0 ][ Q̇_s; Q̇_R_i; Q̇_C_i; Q̇_C_h; Q̇_R_h ][k]
= [ 1 ] ∀ k ∈{1, …, K}
[ 0 1 0 0 0; 0 0 0 0 1; ][ Q̇_s; Q̇_R_i; Q̇_C_i; Q̇_C_h; Q̇_R_h ][k]
=
[ 1/R_i 0; 0 1/R_h ][ 0 1 0 0 0; 0 0 0 0 1; ][ -1 0; +1 -1; 0 +1; +1 0; +1 0; ][ T_C_h; T_C_i ][k]
∀ k ∈{1, …, K}
[ 0 0 1 0 0; 0 0 0 1 0 ][ Q̇_s; Q̇_R_i; Q̇_C_i; Q̇_C_h; Q̇_R_h ][k] Δ T
=
[ C_i 0; 0 C_h ][ 0 0 1 0 0; 0 0 0 1 0 ][ -1 0; +1 -1; 0 +1; +1 0; +1 0 ](
[ T_C_h; T_C_i ][k+1]
-
[ T_C_h; T_C_i ][k]
)
∀ k ∈{1, …, K-1}
[ 1 0; 0 1 ][ T_C_h; T_C_i ][1]
=
[ 0; 0 ]
This explicit statement of the HFNMCF problem in the context of the thermal system shown in Fig. <ref>e provides the following insights:
* Eq. <ref> shows that the thermal system does not have an objective function and is simply a set of simultaneous equations.
* Eq. <ref> is a matrix restatement of the continuity laws in Eq. <ref> and <ref>. Again, the heat balance on the ground place is redundant and therefore eliminated.
* Eq. <ref> imposes exogenous values on the heat flow rate due to the presence of heat flow sources.
* Due to the absence of generalized inductors in thermal systems, there are no equations for their initial condition.
* Eq. <ref> is a matrix restatement of the constitutive law for element's thermal resistances in Eq. <ref> and <ref>.
* There are no equations that restate the constitutive laws for thermal inductors because there are no generalized inductors in thermal systems.
* Eq. <ref> is a matrix restatement of the constitutive law for the thermal capacitance of system elements in Eq. <ref> and <ref>.
* The selector matrices in Eq. <ref>-<ref> can be automatically produced from the HFGT toolbox <cit.> for systems of arbitrary size.
* There are no equations that impose exogenous values on the temperature because there are no temperature sources.
* Eq. <ref> is the initial condition on the thermal capacitors' temperature as the state variables.
* The compatibility laws stated in Eq. <ref> and <ref> are superfluous because all of the temperatures have been stated in absolute terms relative to 0 C as a reference temperature rather than as temperature differences between points.
Once the HFNMCF problem for the thermal system has been set up, it can be simulated straightforwardly and compared against the state space ODE model derived by linear graph and/or bond graph. The following parameter values are chosen:
R_i = 0.5 K/W, R_h = 0.2 K/W, C_i = 1 J/K, C_h = 2 J/K, and Q̇_S = 1 W (Step input heat flow rate). The simulation time K = 5 seconds, and the time step Δ T = 0.1 seconds. The HFNMCF results for the primary (heat flow rate) and auxiliary (temperature) decision variables are shown as solid lines in Fig. <ref>. The associated state space ODE results are shown in dashed lines with embedded triangles.
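The thermal equations above reduce to two first-order heat balances, which the following Python sketch advances with the same Δ T. As before, this is a sequential forward-Euler reading of the HFNMCF constraints rather than the simultaneous solve, and is offered only as a quick cross-check.
```python
# Thermal example: R_i=0.5, R_h=0.2 K/W, C_i=1, C_h=2 J/K, step heat input 1 W
R_i, R_h, C_i, C_h, Qdot_s = 0.5, 0.2, 1.0, 2.0, 1.0
dT, T_end = 0.1, 5.0

T_Ch, T_Ci = 0.0, 0.0                     # initial temperatures (state variables)
for _ in range(int(T_end / dT)):
    Q_Ri = (T_Ch - T_Ci) / R_i            # conduction between the two capacitances
    Q_Rh = T_Ch / R_h                     # leakage to the 0 C reference
    Q_Ch = Qdot_s - Q_Ri - Q_Rh           # heat balance at the hot node
    Q_Ci = Q_Ri                           # heat balance at the inner node
    T_Ch += dT * Q_Ch / C_h               # thermal capacitance constitutive laws
    T_Ci += dT * Q_Ci / C_i

print(f"T_Ch ~ {T_Ch:.3f}, T_Ci ~ {T_Ci:.3f}")   # both approach R_h * Qdot_s = 0.2
```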
§.§ Multi-Energy System
The first step is to recognize that the electro-mechanical system shown in Fig. <ref>f is first a specialization into the multi-energy domain, followed by an instantiation of the engineering system meta-architecture in Fig. <ref>. Consequently, Defn. <ref>-<ref> are understood as follows. There are four electrical points that have distinct absolute values of across-variables that serve as independent buffers: V_S, V_RL, V_LM, V_0. Also, there are two mechanical points that have distinct absolute values of across-variables that serve as independent buffers: ω_J and ω_0. Additionally, there is one across-variable source V_S that serves as a transformation resource, while the transportation resources include generalized resistors R and B, a generalized inductor L, a generalized capacitor J, and a generalized transformer (i.e. motor) with motor constant 1/k_a. Fig. <ref> shows that each of these transformation and transportation resources has exactly one system process: inject power with imposed across variable, dissipate power, store potential energy, and store kinetic energy. The result is that each of these transformation and transportation resources introduces exactly one system capability with a primary through-variable and an auxiliary across-variable as attributes.
In the next step, the electro-mechanical system shown in Fig. <ref>f is transformed into the engineering system net shown in Fig. <ref>.
Notice that the transformer (i.e. motor) acts as a transition with two inputs and two outputs: the motor current in and out, and the motor torque in and out. Also note that the through variable associated with the motor torque no longer appears explicitly, and instead appears implicitly in the form of the arc weights labeled with the motor constant k_a. While Fig. <ref> strongly resembles the electro-mechanical diagram in Fig. <ref>f and the linear graph in Fig. <ref>, it can also be produced directly from HFGT. The Engineering System Net and its state transition function are constructed according to Defn. <ref> and <ref>. Note that the hetero-functional graph theory toolbox <cit.> can automatically calculate the positive and negative hetero-functional incidence matrices from an XML input file that instantiates the information from Defn. <ref> - <ref>. The Engineering System Net in Fig. <ref> shows the system buffers as places, the capabilities as transitions, and the incidence between them.
In the third step, the hetero-functional network minimum cost flow problem is set up and solved. More specifically, Eq. <ref>-<ref> are written out explicitly.
minimize Z = 0
s.t. [ +1 -1 0 0 0 0; 0 +1 -1 0 0 0; 0 0 +1 -1 0 0; 0 0 0 +1/k_a -1 -1; ][ i_V_S; i_R; i_L; i_m; τ_B; τ_J; ][k]Δ T = 0 ∀ k ∈{1, …, K}
[ 0 0 1 0 0 0 ][ i_V_S; i_R; i_L; i_m; τ_B; τ_J; ][k=1]
=
[ 0 ]
[ 0 1 0 0 0 0; 0 0 0 0 1 0; ][ i_V_S; i_R; i_L; i_m; τ_B; τ_J; ][k]
=
[ 1/R 0; 0 B ][ 0 1 0 0 0 0; 0 0 0 0 1 0; ][ -1 0 0 0; +1 -1 0 0; 0 +1 -1 0; 0 0 +1 -1/k_a; 0 0 0 +1; 0 0 0 +1; ][ V_S; V_RL; V_LM; ω_J; ][k]
∀ k ∈{1, …, K}
[ 0 0 1 0 0 0 ](
[ i_V_S; i_R; i_L; i_m; τ_B; τ_J; ][k+1]
-
[ i_V_S; i_R; i_L; i_m; τ_B; τ_J; ][k]
)
=
[ 1/L ][ 0 0 1 0 0 0 ][ -1 0 0 0; +1 -1 0 0; 0 +1 -1 0; 0 0 +1 +k_a; 0 0 0 +1; 0 0 0 +1; ][ V_S; V_RL; V_LM; ω_J; ][k]Δ T
∀ k ∈{1, …, K-1}
[ 0 0 0 0 0 1 ][ i_V_S; i_R; i_L; i_m; τ_B; τ_J; ][k] Δ T
=
[ J ][ 0 0 0 0 0 1 ][ -1 0 0 0; +1 -1 0 0; 0 +1 -1 0; 0 0 +1 -1/k_a; 0 0 0 +1; 0 0 0 +1 ](
[ V_S; V_RL; V_LM; ω_J; ][k+1]
-
[ V_S; V_RL; V_LM; ω_J; ][k]
)
∀ k ∈{1, …, K-1}
[ 0 0 1 -1/k_a ][ V_S; V_RL; V_LM; ω_J; ][k]=
[ 0 ] ∀ k ∈{1, …, K}
[ 1 0 0 0 ][ V_S; V_RL; V_LM; ω_J; ][k]
= [ 1 ] ∀ k ∈{1, …, K}
[ 0 0 0 1 ][ V_S; V_RL; V_LM; ω_J; ][k=1]
=
[ 0 ]
This explicit statement of the HFNMCF problem in the context of the electro-mechanical system shown in Fig. <ref>f provides the following insights:
* Eq. <ref> shows that an electro-mechanical system does not have an objective function and is simply a set of simultaneous equations.
* Eq. <ref> is a matrix restatement of the continuity laws in Eq. <ref> - <ref>.
The first three rows apply a current balance on each electrical place, while the last applies a torque balance on each mechanical place. Note that the motor constant k_a serves to transform the motor current i_m into the motor torque. This serves to combine the transformer's constitutive law in Eq. <ref> with the torque balance in Eq. <ref>. Note that Eq. <ref> introduces an additional matrix row to account for the current provided by the voltage source i_V_S. While this variable is not required in the linear graph and bond graph methodologies, the HFGT derivation requires across and through variables for all capabilities. Again, the current balance on the electrical ground and the torque balance on the mechanical ground are redundant and therefore eliminated.
* There are no equations that impose exogenous values on the currents and angular velocities because there are no associated sources.
* Eq. <ref> is the initial condition on the inductor current as a state variable.
* Eq. <ref> is a matrix restatement of the constitutive law for the resistor and the physical damper in Eq. <ref> and <ref>.
* Eq. <ref> is a matrix restatement of the constitutive law for the inductor in Eq. <ref>.
* Eq. <ref> is a matrix restatement of the constitutive law for the rotating disk in Eq. <ref>.
* Eq. <ref> is a matrix restatement of the constitutive law for the auxiliary (i.e. across) decision variable of the transformer in Eq. <ref>. Again, the transformer's second constitutive law has already been incorporated in the context of Eq. <ref>.
* The selector matrices in Eq. <ref>-<ref> can be automatically produced from the HFGT toolbox <cit.> for systems of arbitrary size.
* Eq. <ref> imposes exogenous values on the voltages due to the presence of voltage sources.
* Eq. <ref> is the initial condition on the disk angular velocity as a state variable.
* The compatibility laws stated in Eq. <ref> - <ref> are superfluous because all of the voltages and angular velocities have been stated in absolute terms relative to the ground rather than as voltage/angular velocity differences between points.
Once the HFNMCF problem for the electro-mechanical system has been set up, it can be simulated straightforwardly and compared against the state space ODE model derived by linear graph and/or bond graph. The following parameter values are chosen:
R = 1 Ω, L = 0.01 mH, J = 5 kg·m^2, B = 0.1 N·m·s/rad, k_a = 0.1, and V_s = 1 V (Step input voltage). The simulation time K = 0.3 seconds, and the time step Δ T = 0.001 seconds. The HFNMCF results for the primary (current and torque) and auxiliary (voltage and angular velocity) decision variables are shown as solid lines in Fig. <ref>. The associated state space ODE results are shown in dashed lines with embedded triangles.
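A compact cross-check of the electro-mechanical example is sketched below. It uses the reduced two-state form of the laws above (a series electrical loop plus the torque balance at the disk node, with back-EMF V_LM = ω_J/k_a). Because the electrical time constant L/R is far shorter than the chosen Δ T, this sketch advances the states with an implicit backward-Euler step for numerical stability; that choice belongs to the sketch, not to the HFNMCF problem.
```python
import numpy as np

# Electro-mechanical example: R=1 Ohm, L=0.01 mH, J=5, B=0.1, k_a=0.1, V_s=1 V (step)
R, L, J, B, k_a, V_s = 1.0, 0.01e-3, 5.0, 0.1, 0.1, 1.0
dT, T_end = 0.001, 0.3

# Reduced state-space form of the same continuity/constitutive laws:
#   L di_L/dt     = V_s - R*i_L - omega_J/k_a   (series loop with back-EMF)
#   J domega_J/dt = i_L/k_a - B*omega_J         (torque balance at the disk node)
A = np.array([[-R / L, -1.0 / (k_a * L)],
              [1.0 / (k_a * J), -B / J]])
b = np.array([V_s / L, 0.0])

x = np.zeros(2)                      # initial conditions: i_L = 0, omega_J = 0
M = np.eye(2) - dT * A               # backward-Euler step matrix (stiff electrical mode)
for _ in range(int(T_end / dT)):
    x = np.linalg.solve(M, x + dT * b)

print(f"i_L ~ {x[0]:.4f} A, omega_J ~ {x[1]:.4f} rad/s")
```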
§ ON THE GENERALITY OF HETERO-FUNCTIONAL GRAPHS
The previous section concretely demonstrated the relationship between linear graphs, bond graphs, and hetero-functional graphs on the six illustrative examples depicted in Fig. <ref>. Each time, the result of the HFNMCF problem was numerically equivalent to the simulation of the associated state space ODE model. In effect, the linear graph and bond graph methodologies derive continuity laws (i.e. 0-Junction in Eulerian-view systems / 1-Junction in Lagrangian-view systems), constitutive laws, and compatibility laws (i.e. 1-Junction in Eulerian-view systems / 0-Junction in Lagrangian-view systems) and then use algebraic manipulations to simplify them into a state space ODE model. Hetero-functional graph theory, quite similarly, includes the continuity laws in the engineering system net state transition function, includes the constitutive laws in the device models, and eliminates the need for compatibility laws by virtue of its choice of reference frame; it then states these laws as equations that are solved simultaneously and numerically (without manual algebraic manipulations). Consequently, while the numerical evidence in the previous section is compelling from a pedagogical perspective, the pattern of results points to two more general results.
Given an arbitrary linear graph composed of
* across variable sources,
* through variable sources,
* D-Type elements,
* A-Type elements,
* T-Type elements,
* generalized transformers, and
* generalized gyrators,
organized in an arbitrary topology, and a minimal set of initial conditions on the associated state variables, the solution of its associated state space ODE model is equivalent to the solution of a specialized instance of the HFNMCF problem.
The linear graph's state space ODE model is equivalent to a set of simultaneous differential algebraic equations composed of continuity laws, constitutive laws, and compatibility laws. Furthermore, the compatibility laws can be eliminated entirely with an algebraic change of variable that measures all across variables relative to a ground reference frame. Furthermore, the differential algebraic equation form of the continuity and constitutive laws can be stated as algebraic equations with a sufficiently small choice of the discrete-time step Δ T. Next, the continuity laws can be algebraically recast into the form stated in Eq. <ref>. Additionally, the constitutive laws can be algebraically recast into the form stated in Eq. <ref> where
* generalized resistors laws take the form in Eq. <ref>,
* generalized T-Type element laws take the form in Eq. <ref>,
* generalized A-Type element laws take the form in Eq. <ref>,
* generalized transformers laws take the form in Eq. <ref>,
* generalized gyrator laws take the form in Eq. <ref>.
Additionally, the across-variable sources are described by Eq. <ref>, and the through-variable sources are described by Eq. <ref>. Next, the initial conditions on through-type state variables are described by Eq. <ref> and the initial conditions on across-type state variables are described by Eq. <ref>. Next, the solution of the simultaneous equations in Eq. <ref>-<ref> can be recast as an optimization program with the null objective function in Eq. <ref> and these same equations as constraints. Finally, the optimization problem stated in Eq. <ref>-<ref> is a special case of the HFNMCF problem in Eq. <ref>-<ref> under the conditions described in the beginning of Sec. <ref>.
Given an arbitrary bond graph composed of
* effort sources,
* flow sources,
* generalized resistors,
* generalized capacitors,
* generalized inductors,
* generalized transformers, and
* generalized gyrators,
organized in an arbitrary topology, and a minimal set of initial conditions on the associated state variables, the solution of its associated state space ODE model is equivalent to the solution of a specialized instance of the HFNMCF problem.
The bond graph's state space ODE model is equivalent to a set of simultaneous differential algebraic equations composed of 0-junction laws, 1-junction laws, and constitutive laws. Furthermore, 0-junction laws, 1-junction laws, and bond graph system elements can be transformed 1-to-1 to form the continuity laws, compatibility laws, and system elements of the corresponding linear graph, as elaborated in Sec. <ref>. More specifically,
* In systems with an Eulerian view (e.g. electrical systems, fluidic systems, thermal systems)
* effort sources are equivalent to across variable sources,
* flow sources are equivalent to through variable sources,
* generalized resistors are equivalent to D-Type elements,
* generalized capacitors are equivalent to A-Type elements,
* generalized inductors are equivalent to T-Type elements,
* 0-Junction laws are equivalent to continuity laws,
* 1-junction laws are equivalent to compatibility laws.
* In systems with the Lagrangian view (e.g. Mechanical system)
* effort sources are equivalent to through-variable sources,
* flow sources are equivalent to across-variable sources,
* generalized resistors are equivalent to D-Type elements,
* generalized capacitors are equivalent to T-Type elements,
* generalized inductors are equivalent to A-Type elements,
* 0-Junction laws are equivalent to compatibility laws,
* 1-junction laws are equivalent to continuity laws.
* A generalized gyrator in a bond graph is equivalent to either a generalized transformer or generalized gyrator in the linear graph methodology depending on the choice of system on either side of the element.
* Similarly, a generalized transformer in a bond graph is equivalent to either a generalized transformer or a generalized gyrator in the linear graph methodology depending on the choice of system on either side of the element.
Because an arbitrary bond graph model can be transformed to an equivalent linear graph model, then by Theorem <ref>, the ODE state space model derived by the bond graph is a special case of the HFNMCF problem.
§ CONCLUSION AND FUTURE WORK
This paper relates hetero-functional graphs to linear graphs and bond graphs. Despite their completely different theoretical origins, it demonstrates that the former is a generalization of the latter two. To facilitate the comparison, each of the three modeling techniques is described and then compared conceptually. These three descriptions reveal that hetero-functional graphs, linear graphs, and bond graphs have completely different ontologies, so finding a direct relationship through a purely abstract treatment is difficult. Instead, the paper focuses the discussion concretely on six example systems: (a) an electrical system, (b) a translational mechanical system, (c) a rotational mechanical system, (d) a fluidic system, (e) a thermal system, and (f) a multi-energy (electro-mechanical) system. Each of these systems was modeled with hetero-functional graphs, linear graphs, and bond graphs to reveal that the dynamic simulation models produced by these modeling techniques are numerically equivalent. Finally, this concrete numerical evidence provides significant intuition and insight that overcomes the ontological differences between these three types of graph approaches. The paper proves mathematically that hetero-functional graphs are a formal generalization of both linear graphs and bond graphs.
This abstract and highly general result is significant for several reasons. First, linear graphs and bond graphs have a much longer history in the literature and have produced extensive theoretical and practical results. Until now, these contributions have been theoretically divorced from the hetero-functional graph theory literature. A direct relationship between hetero-functional, linear, and bond graphs facilitates the cross-pollination of theoretical and practical results between these approaches. For example, bond graphs are often used to study system causality <cit.>, whereas such an analysis has been elusive in hetero-functional graphs. More practically, the well-known modeling and simulation tool Modelica <cit.> was originally developed on a bond-graph foundation, but its connection to hetero-functional graphs has not been made. Looking further ahead, and as exposited in the introduction, the 21st century is creating challenges that demand a deep understanding of the structure and behavior of systems-of-systems. This requires modeling approaches with an open, rather than closed, set of modeling primitives that spans systems of fundamentally different functions. These systems must address operands of energy, matter, information, money, and living organisms, and not just energy. It also requires modeling approaches that can handle continuous-time, discrete-time, and discrete-event dynamics. This paper reveals that HFGT can provide such analytical flexibility and extensibility without losing the rich tradition of graph-based modeling that originated in the previous century.
| http://arxiv.org/abs/2409.03036v1 | 20240904190929 | Continuation and bifurcations of periodic orbits and symbolic dynamics in the Swift-Hohenberg equation | ["Jakub Czwórnóg", "Daniel Wilczak"] | math.DS | ["math.DS", "nlin.CD", "65P30, 65G20, 37M20, 37C27"] |
§ ABSTRACT
Steady states of the Swift–Hohenberg <cit.> equation are studied. For the associated four–dimensional ODE we prove that on the energy level E=0 two smooth branches of even periodic solutions are created through a saddle-node bifurcation. We also show that these orbits satisfy certain geometric properties, which implies that the system has positive topological entropy for an explicit and wide range of parameter values.
The proof is computer-assisted and it uses rigorous computation of bounds on certain Poincaré map and its higher order derivatives.
§ INTRODUCTION
The Swift-Hohenberg equation <cit.> is a fundamental partial differential equation that plays a crucial role in the study of pattern formation and models various phenomena in physics and biology <cit.>. Originally the one-dimensional equation has the form
∂ U/∂ t =
- ( ∂^2/∂ X^2 + 1 )^2 U + α U - U^3.
The existence of some types of stationary solutions, such as periodic and homoclinic ones <cit.>, has been studied analytically in the one-dimensional case. There are also many numerical simulations that demonstrate the existence of complicated structures and behaviour <cit.>. However, many of the observed phenomena have not been proved by means of mathematical rigour.
The aim of this article is to reproduce and extend results from <cit.> about the one-dimensional stationary Swift-Hohenberg <cit.> equation
-U”” - 2 U” + (α - 1)U - U^3 = 0.
This equation has conserved energy
E = U”' U' - 1/2 (U”)^2 + U'^2 - α - 1/2 U^2 + 1/4 U^4 + (α - 1)^2/4.
The authors of <cit.> proved the following theorem.
<cit.>
The dynamics of the ODE (<ref>) on the energy level E=0 is chaotic for all α≥ 2 in the sense that certain Poincaré map is semi-conjugated to a subshift of finite type with positive topological entropy.
The proof in <cit.> splits into two parts. First, the authors prove <cit.> that the existence of a periodic orbit of (<ref>) with the parameter value α>3/2 on the energy level E=0 and satisfying certain geometric properties (see Section <ref> for details) implies the existence of symbolic dynamics for (<ref>). Then in <cit.> the authors show that the assumptions of <cit.> hold for all α≥ 2. The proof of <cit.> is computer-assisted and is based on the so-called radii polynomial approach. The authors reformulate the problem of the existence of a periodic orbit as a zero-finding problem for the Fourier coefficients of the orbit to be found. Then, by means of interval arithmetic <cit.> and a Newton–Kantorovich type argument, it is shown that this infinite-dimensional system of algebraic equations has a branch of isolated solutions parameterized by α≥ 2.
The aim of this paper is to prove the following result.
There exists α^*∈ 1.9690842080_101989^293001 and two smooth curves U_±:[α^*,∞)→ℝ^4 such that
* E(U_±(α))=0 for α≥α^*,
* the solution to (<ref>) with the parameter value α and with the initial condition (U,U',U”,U”')=U_±(α) is an even and periodic function satisfying geometric properties from <cit.>,
* U_-(α)≠ U_+(α) for α>α^* and U_-(α^*)=U_+(α^*),
* the fold bifurcation occurs at α^*.
For the proof of Theorem <ref> we reformulate the problem of the existence of an even periodic orbit as a zero-finding problem for a univariate map (a certain Poincaré map). In that sense, the proposed approach is much simpler and more geometric, as we work directly in the phase space of the ODE rather than in an infinite-dimensional space of Fourier coefficients. Moreover, the computer-assisted verification of the result is significantly faster than the one presented in <cit.> – less than 13 minutes for two branches of periodic orbits and the fold bifurcation versus 12 hours for one branch.
The paper is organised as follows. In Section <ref> we recall the required geometric properties of these periodic orbits from <cit.>. In Section <ref> we derive a simple scalar equation for even periodic orbits of (<ref>). In Section <ref> we present details of the computer-assisted proof of Theorem <ref>.
§ SYMBOLIC DYNAMICS FROM A SINGLE PERIODIC ORBIT
For self-consistency of the article we recall here the forcing theorem <cit.> and required geometric conditions for periodic orbits.
The system (<ref>) has two equilibria ±√(α-1) on the energy level E=0. For 1<α≤3/2 they are stable foci (pure imaginary eigenvalues) and for all α>3/2 they are of saddle-focus type. Thus, for α>3/2 near these equilibria families of hyperbolic (on isolated energy level) periodic orbits appear giving rise to more complicated dynamics.
In order to make the interval of parameters bounded, and thus much easier for computer analysis, following the ideas from <cit.> we perform a change of coordinates
y = X/√(α - 1), u(y) = U(X)/√(α - 1), ξ = 2/√(α - 1).
Now the parameter range α≥3/2 corresponds to 0<ξ≤√(8) and (<ref>) becomes
-u”” - ξ u” + u - u^3 = 0
with the energy
E = u”' u' - 1/2(u”)^2+ ξ/2(u')^2 + 1/4(u^2 - 1)^2.
<cit.>
Let ξ∈ [0,√(8)) and suppose that there exists a periodic solution ũ of (<ref>) at the energy level E=0, satisfying the following geometric conditions
* ũ has exactly four monotone laps in one period and extrema ũ_1,ũ_2,ũ_3,ũ_4;
* ũ_1, ũ_3 are minima, and ũ_2,ũ_4 are maxima;
* ũ_1 < -1 < ũ_3 < 1 < ũ_2, ũ_4;
* ũ is symmetric at its minima.
Then the system is chaotic in the sense that there exists a two-dimensional Poincaré return
map which has a compact invariant set on which the topological entropy is positive.
The geometric conditions on the periodic orbits are illustrated in Fig. <ref>. The solutions that belong to the chaotic invariant set resulting from Theorem <ref> are encoded by their extrema. While the maximum is always bigger than 1, we are free to choose the minimum to be either less than -1 or between -1 and 1. This leads to symbolic dynamics on three symbols, where each symbol is a building block of the solution – as illustrated in Fig. <ref>.
§ EVEN PERIODIC ORBITS AS A ZERO-FINDING PROBLEM.
Equation (<ref>) can be rewritten as a system of first-order ODEs
x'(t) = y(t)
y'(t) = z(t)
z'(t) = w(t)
w'(t) = -ξ z(t) + x(t) - x^3(t).
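For quick, non-rigorous experiments with this system (in contrast to the rigorous CAPD computations described later in the paper), the right-hand side and the conserved energy above can be coded directly. In the Python sketch below the initial condition is just an arbitrary point on the energy level E=0 used to check conservation.
```python
import numpy as np
from scipy.integrate import solve_ivp

def sh_rhs(t, u, xi):
    """Right-hand side of the first-order system in (x, y, z, w) = (u, u', u'', u''')."""
    x, y, z, w = u
    return [y, z, w, -xi * z + x - x**3]

def energy(u, xi):
    """Conserved energy of the rescaled equation."""
    x, y, z, w = u
    return w * y - 0.5 * z**2 + 0.5 * xi * y**2 + 0.25 * (x**2 - 1.0)**2

# Any point with y = w = 0 and z = (x^2 - 1)/sqrt(2) lies on the energy level E = 0
u0 = [-1.6, 0.0, (1.6**2 - 1.0) / np.sqrt(2.0), 0.0]
sol = solve_ivp(sh_rhs, (0.0, 10.0), u0, args=(2.0,), rtol=1e-11, atol=1e-11)
print(energy(u0, 2.0), energy(sol.y[:, -1], 2.0))   # both ~ 0 up to integration error
```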
Define a Poincaré section
Π = {(x,y,z,w) : y = 0 }.
and let 𝒫_ξ:Π→Π be the associated Poincaré map for (<ref>) with the fixed parameter value ξ. The choice y=u'=0 of Poincaré section is very natural when we are looking for periodic orbits with fixed number of extrema. Then, periodic points for 𝒫_ξ of principal period n correspond to functions with exactly n local extrema in one period.
Another property of this section is that Fix(R)⊂Π, where
R(x,y,z,w) = (x,-y,z,-w).
is a reversing symmetry of (<ref>). It is well known <cit.> that in such a case the Poincaré map is also R-reversible. In order to find an even (R-symmetric) periodic orbit of (<ref>) it suffices to find a point u=(x,0,z,0)∈Fix(R) such that 𝒫_ξ^k(u)∈Fix(R). Then 𝒫_ξ^2k(u)=u and the trajectory of u is periodic and R-symmetric.
Hence, even periodic solutions of (<ref>) correspond to solutions of the scalar equation
π_w 𝒫^2_ξ(x,0,z,0) = 0,
where π_w is the projection onto w variable. Observe that the intersection of Poincaré section with the energy level E=0
E(x,0,z,w) = -1/2 z^2 + 1/4 (x^2 - 1)^2 = 0
gives the following relation
u”=z = ±1/√(2)(x^2 - 1).
This means that a solution u at the energy level E=0 always has proper extrema provided they are attained at x≠±1. Shifting the minimum ũ_1<-1 to t=0 (see the geometric conditions in Theorem <ref> and Fig. <ref>) we must take z(0)=1/√(2)(x(0)^2-1)>0. Substituting (<ref>) into (<ref>), we eventually transform the question of finding even periodic solutions of (<ref>) into a zero-finding problem for the following equation
G(ξ,x) = (π_w ∘𝒫_ξ^2 ) (x,0, 1/√(2)(x^2 - 1), 0 ) = 0.
Finally, we have to check that an orbit corresponding to a solution of G(ξ,x_0)=0 satisfies the geometric conditions from Theorem <ref>. It is sufficient to check if
(x_0 < -1) ∧ (x_1 > 1)∧ (-1<x_2<1),
where
x_i= π_x𝒫_ξ^i(x_0,0,1/√(2)(x_0^2 - 1),0), i=1,2. Indeed, by (<ref>) the function has at most four extrema in one period. From (<ref>) the minima are different, x_0≠ x_2. Hence, the solution has exactly four extrema in one period. All of them are proper because x_i≠±1, i=0,1,2.
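This construction can be mirrored with a standard (non-rigorous) integrator: evaluate G(ξ,x_0) as the w-component of the second return to the section {u'=0}, and check the extrema ordering x_0 < -1 < x_2 < 1 < x_1 from the x-components of the first two returns. In the Python sketch below the integration horizon, the tolerances, the filtering of the trivial crossing at t=0, and the sample point (taken from the approximate fold computed later) are choices of the sketch, not of the rigorous proof.
```python
import numpy as np
from scipy.integrate import solve_ivp

def sh_rhs(t, u, xi):                      # same right-hand side as in the sketch above
    x, y, z, w = u
    return [y, z, w, -xi * z + x - x**3]

def returns_to_section(xi, x0, n=2):
    """First n returns to {u'=0}, starting from the symmetric point on E = 0."""
    u0 = [x0, 0.0, (x0**2 - 1.0) / np.sqrt(2.0), 0.0]
    section = lambda t, u, xi: u[1]        # y = u' = 0
    sol = solve_ivp(sh_rhs, (0.0, 60.0), u0, args=(xi,), events=section,
                    dense_output=True, rtol=1e-11, atol=1e-11)
    times = [t for t in sol.t_events[0] if t > 1e-8][:n]   # drop the crossing at t = 0
    return [sol.sol(t) for t in times]

def G(xi, x0):
    """pi_w of the second return; zero exactly for even periodic orbits."""
    return returns_to_section(xi, x0, 2)[1][3]

def geometric_conditions(xi, x0):
    """Check x_0 < -1, x_1 > 1 and -1 < x_2 < 1 for the computed returns."""
    p1, p2 = returns_to_section(xi, x0, 2)
    return (x0 < -1.0) and (p1[0] > 1.0) and (-1.0 < p2[0] < 1.0)

# Near the approximate fold point quoted later in the text, G should be close to
# zero and the orbit should satisfy the geometric conditions of the forcing theorem.
print(G(2.0316516135713902, -1.5824941113082425))
print(geometric_conditions(2.0316516135713902, -1.5824941113082425))
```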
§ COMPUTER-ASSISTED PROOF OF THEOREM <REF>
As already mentioned (see (<ref>)) the parameter range α≥3/2 corresponds to 0<ξ≤√(8). Numerical simulation shows (see Fig. <ref>, Fig. <ref> and <cit.>) that a saddle-node bifurcation occurs at ξ≈ 2.0316516135713902 creating two branches of even periodic orbits. These branches continue to exist until ξ=0 (in fact they continue until ξ≈ -2.3 but we restrict here to the range ξ≥ 0 due to change of variables (<ref>)).
Computer-assisted proof of Theorem <ref> is split into the following steps.
Step 1. We fix a threshold value ξ_th = 266 291 · 2^-17≈2.0316390991210938 (ξ_th is an IEEE-754 <cit.> representable number) and prove that there are two smooth branches x_±:[0,ξ_th]→ℝ such that for all ξ∈[0,ξ_th] there holds x_-(ξ)<x_+(ξ), G(ξ,x_±(ξ))≡ 0 and x_±(ξ) satisfies (<ref>).
Step 2. We prove that there is a smooth and concave function ξ̃:[x_-(ξ_th),x_+(ξ_th)]→ℝ such that for x∈ [x_-(ξ_th),x_+(ξ_th)] there holds G(ξ̃(x),x)≡ 0 and x satisfies (<ref>) in the system with parameter ξ̃(x).
Step 3. We prove that ξ̃ has a unique maximum ξ^*=ξ̃(x^*) for some x^*∈[x_-(ξ_th),x_+(ξ_th)].
Step 4. We prove that graphs of x_± and ξ̃ glue into a smooth curve in the (ξ,x) plane, as shown in Fig. <ref> and Fig. <ref>.
Finally, from these steps we will conclude that the functions x_± can be extended beyond ξ_th up to ξ^*, at which value they are equal, and that the saddle-node bifurcation occurs at (ξ^*,x^*). Via the change of variables (<ref>) we obtain the assertion of Theorem <ref>.
§.§ Interval Newton Operator
In a computer-assisted verification of Steps 1–4 we will use the standard Interval Newton Operator for proving the existence and uniqueness of zeros of maps <cit.>.
Let f:X⊂ℝ^n→ℝ^n be 𝒞^1 smooth, where X is compact and convex. Fix x_0∈int X and let [A] be an interval matrix such that
{Df(x) : x∈ X} ⊂ [A].
If the interval Newton operator
N := x_0 - [A]^-1f(x_0)
is well defined and N⊂intX then f has unique zero x in X and x∈ N.
Using implicit function theorem and Theorem <ref> it is easy to prove the following extension of Theorem <ref> to parameterised functions.
Let f:Z× X⊂ℝ^k×ℝ^n→ℝ^n be 𝒞^1 smooth, where Z,X are compact and convex. Fix x_0∈int X and let [A] be an interval matrix and [e] be an interval vector such that
{D_xf(z,x) : (z,x)∈ Z× X} ⊂ [A],
{f(z,x_0) : z∈ Z} ⊂ [e].
If the interval Newton operator
N := x_0 - [A]^-1[e]
is well defined and N⊂intX then the set of zeroes of f in Z× X is a graph of a 𝒞^1 smooth function x:Z→ N⊂ X. That is, for (z,x)∈ Z× X
f(z,x)=0⟺ x=x(z).
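The interval Newton test itself is elementary to implement. The toy Python sketch below applies the scalar version of the operator using naive endpoint arithmetic: it ignores outward (directed) rounding, so unlike the CAPD-based computations it does not yield mathematically rigorous enclosures, but it illustrates the inclusion test N ⊂ int X and the resulting existence-and-uniqueness conclusion.
```python
from dataclasses import dataclass

@dataclass
class Ival:                                   # naive interval type (no outward rounding)
    lo: float
    hi: float
    def __add__(s, o): return Ival(s.lo + o.lo, s.hi + o.hi)
    def __sub__(s, o): return Ival(s.lo - o.hi, s.hi - o.lo)
    def __mul__(s, o):
        p = (s.lo * o.lo, s.lo * o.hi, s.hi * o.lo, s.hi * o.hi)
        return Ival(min(p), max(p))
    def __truediv__(s, o):
        assert o.lo > 0 or o.hi < 0, "denominator must not contain zero"
        p = (s.lo / o.lo, s.lo / o.hi, s.hi / o.lo, s.hi / o.hi)
        return Ival(min(p), max(p))
    def strictly_contains(s, o): return s.lo < o.lo and o.hi < s.hi

def interval_newton(f, df, X):
    """N = x0 - f(x0)/Df(X); if N lies in int X, f has a unique zero in X (and in N)."""
    x0 = Ival(0.5 * (X.lo + X.hi), 0.5 * (X.lo + X.hi))
    N = x0 - f(x0) / df(X)
    return N, X.strictly_contains(N)

# Toy example: f(x) = x^2 - 2 on X = [1.3, 1.5] has a unique zero (sqrt(2)) inside N
f  = lambda x: x * x - Ival(2.0, 2.0)
df = lambda x: Ival(2.0, 2.0) * x
N, verified = interval_newton(f, df, Ival(1.3, 1.5))
print(verified, (N.lo, N.hi))
```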
§.§ Details of verification of Steps 1–4.
Step 1. To prove the existence of two smooth curves x_±:[0,ξ_th]→ℝ solving (<ref>) we perform an adaptive subdivision of the parameter range [0,ξ_th]=Ξ^1_±∪⋯∪Ξ_±^N_±, where Ξ^j_± are closed intervals satisfying minΞ_±^1=0, maxΞ_±^N_±=ξ_th and maxΞ_±^j=minΞ_±^j+1 for j=1,…,N_±-1. The sizes of the subdivisions, N_-=24157 and N_+=142821, are returned by our adaptive algorithm. Then for each subinterval Ξ^j_± of parameters we construct (using continuation algorithms) a closed interval X_±^j and using Theorem <ref> we prove that the solution set of (<ref>) in Ξ^j_±× X_±^j is a smooth curve x_±^j:Ξ_±^j→ X_±^j. Moreover, (<ref>) holds true for every point on this curve. Finally we show that
X_±^j∩ X_±^j+1 ≠ ∅, j=1,…,N_±-1.
Because maxΞ^j_±=minΞ_±^j+1 and by the uniqueness property of the Interval Newton Operator we conclude that x_±^j(maxΞ_±^j)=x_±^j+1(minΞ_±^j+1) for j=1,…,N_±-1. Hence, the functions x_±^j, defined on the subintervals Ξ_±^j of the parameter range, join into continuous functions x_± defined on [0,ξ_th]. Since at every ξ∈[0,ξ_th] we have D_xG(ξ,x_±(ξ))≠ 0 (the interval Newton operator is defined), by the implicit function theorem these functions are smooth.
Finally, using Theorem <ref> we validate that
x_+(ξ_th)∈ -1.5824440318_2912^327613,
x_-(ξ_th)∈ -1.58253506275_30319^63035.
From these bounds it is clear that x_-(ξ_th)< x_+(ξ_th) and thus, by the uniqueness property of the interval Newton operator, this relation extends to x_-(ξ) < x_+(ξ) for all ξ∈[0,ξ_th].
Step 2.
To prove the existence of a concave curve ξ̃ : [x_-(ξ_th), x_+(ξ_th)] → ℝ solving (<ref>) we need to compute second order derivatives of G with respect to the parameter ξ. Because of the interface of the CAPD library <cit.>, in order to compute derivatives with respect to a parameter we have to treat it as a variable. Therefore we extend the system (<ref>) to
x'(t) = y(t)
y'(t) = z(t)
z'(t) = w(t)
w'(t) = -ξ(t) z(t) + x(t) - x^3(t)
ξ'(t) = 0.
We are going to seek the zeros of the modified function
G̅(ξ,x) = (π_w ∘𝒫^2 ) (x,0, 1/√(2)(x^2 - 1), 0, ξ),
where 𝒫 : Π̅→Π̅ is the Poincaré map for the section
Π̅ = {(x,y,z,w,ξ) : y = 0 }.
Clearly zeroes of G are in one-to-one correspondence with zeroes of G̅.
The rest of this step is similar to Step 1 with the addition of the verification of concavity. Denote by x_-^th the lower bound of the enclosure of x_-(ξ_th) and by x_+^th the upper bound of the enclosure of x_+(ξ_th) – see (<ref>). To find a curve solving (<ref>) we perform an adaptive subdivision of the range X_*:=[x_-^th, x_+^th] = ⋃_j=1^N X^j into N = 6903 closed intervals with max X^j=min X^j+1, j=1,…, N-1. Then for each subinterval X^j we construct an interval Ξ^j and using Theorem <ref> we prove that the solution set of (<ref>) in Ξ^j× X^j is the graph of a smooth function ξ̃^j:X^j→Ξ^j. We also check Ξ^j∩Ξ^j + 1≠∅ for j = 1,...,N - 1. By the uniqueness property of the interval Newton operator the functions ξ̃^j join into a smooth function ξ̃:X_*→ℝ such that for x∈ X_* there holds G̅(ξ̃(x),x)=0 and x satisfies (<ref>).
Additionally, on each subinterval X^j we compute a bound on the second order derivative of the implicit function ξ̃. For this purpose we use the 𝒞^r-Lohner algorithm <cit.> for integration of second order variational equations for (<ref>) needed for the second derivatives of Poincaré map 𝒫. From these computation we obtain a bound
ξ̃”(x) ∈ [-74010.849232287583, -12744.872650106316],
for x∈ X_*. Hence the function ξ̃ is concave on the interval X_*=[x_-^th,x_+^th].
Step 3. We apply Theorem <ref> to the following equation
H(ξ,x) = (G(ξ,x),∂ G/∂ x(ξ,x))=0.
Using standard Newton method we have found an approximate solution
(ξ_0,x_0) = (2.0316516135713902,-1.5824941113082425)
to H(ξ,x)=0. Then, using Theorem <ref> we proved that there is a unique zero (ξ^*,x^*) of H in the set (ξ_0,x_0)+[-1,1]^2· 10^-10 that belongs to
ξ^* ∈ 2.0316516135_613893^814116,
x^* ∈ -1.582494111^3301776_2863635.
Using bound (<ref>) and the change of variables (<ref>) we obtain a bound for α^* in Theorem <ref>. Finally, using bounds (<ref>)-(<ref>) we checked that (ξ^*,x^*) belongs to one of the sets Ξ^j× X^j used to verify the existence of ξ̃ in Step 2. Hence, by the uniqueness property of the interval Newton operator, we obtain ξ^*=ξ̃(x^*). From the implicit function theorem we have ξ̃'(x^*)=-∂ G/∂ x(ξ^*,x^*)/∂ G/∂ξ(ξ^*,x^*)=0. Given that ξ̃ is concave, we conclude that ξ̃ attains its unique maximum ξ^*=ξ̃(x^*) at x^*.
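A non-rigorous cross-check of Step 3 can be made by solving H(ξ,x)=(G,∂ G/∂ x)=0 with a standard root finder and a finite-difference derivative, reusing the approximate G from the earlier Poincaré-map sketch; finite-difference noise limits the attainable accuracy, so this can only be expected to reproduce the leading digits of the rigorous enclosure.
```python
from scipy.optimize import fsolve

# Assumes the (non-rigorous) function G(xi, x) from the Poincare-map sketch above.
def H(p, h=1e-5):
    """H = (G, dG/dx) with a central finite-difference derivative in x."""
    xi, x = p
    dGdx = (G(xi, x + h) - G(xi, x - h)) / (2.0 * h)
    return [G(xi, x), dGdx]

# Polish the approximate solution quoted above; expect values close to the
# rigorous enclosures of (xi*, x*).
sol = fsolve(H, [2.0316516135713902, -1.5824941113082425])
print(sol)
```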
Step 4.
By the construction of ξ̃ we know that x_±(ξ_th)∈ X_*.
To show that the graphs of x_± and ξ̃ glue into a smooth curve in the (ξ,x) plane it suffices to check that
ξ̃(x_±(ξ_th))=ξ_th.
Indeed, because the interval Newton operator is well defined in Steps 1–2, both partial derivatives ∂ G/∂ x(ξ_th,x_±(ξ_th)) and ∂ G/∂ξ(ξ_th,x_±(ξ_th)) are nonzero. Hence the curves x_± can be extended in a smooth and unique way in a neighborhood of ξ_th and therefore the graphs of ξ̃ and x_± must locally coincide near (ξ_th,x_±(ξ_th)).
In order to check (<ref>), using bounds (<ref>) we verify that
(ξ_th,x_-(ξ_th))∈Ξ^1× X^1 and (ξ_th,x_+(ξ_th))∈Ξ^N× X^N.
Then, from construction of ξ̃ in Step 2 we obtain (<ref>).
§.§ Implementation notes.
In the computer-assisted verification of Steps 1–4 we need to compute bounds on the Poincaré map and its derivatives up to the second order. For this purpose we used interval arithmetic <cit.>, algorithms for rigorous integration of ODEs and of the associated (higher order) variational equations, and algorithms for the computation of Poincaré maps <cit.>, all implemented in the CAPD library <cit.>.
The C++ program that performs verification of Steps 1–4 is a supplement to this article and available at <cit.>.
The computation of the bound for the curve x_-(ξ) takes approximately 109 seconds. This curve corresponds to the solution from <cit.>. The curve x_+(ξ) takes less than 10 minutes (537 seconds) to compute, and the curve ξ̃(x) 1.5 minutes (100 seconds). Overall, the total computation takes less than 13 minutes.
§ CONCLUSIONS AND FUTURE WORKS
In this paper we reproduced and extended some results from <cit.> about the dynamics of the stationary Swift-Hohenberg equation on the energy level E=0. After the change of variables (<ref>) we obtain a system with a parameter ξ related to the original parameter α. Numerical simulation shows that the two curves resulting from Theorem <ref> continue to exist for negative values of ξ and apparently undergo a further bifurcation near ξ≈ -2.3.
Our method is computationally less expensive than the one proposed in <cit.>. This gives hope of obtaining results as in Theorem <ref> for a range of energy levels and, perhaps, of studying codimension-two bifurcations of this family of periodic orbits.
SwiftHohenberg J Swift and PC Hohenberg. Hydrodynamic fluctuations at the convective instability. Physical Review A, 15(1):319, 1977.
TLIDI19941475 M. Tlidi and Paul Mandel. Spatial patterns in nascent optical bistability. Chaos, Solitons & Fractals, 4(8):1475–1486, 1994.
PhysRevLett.73.2978 J. Lega, J. V. Moloney, and A. C. Newell. Swift-Hohenberg equation for lasers. Phys. Rev. Lett., 73:2978–2981, Nov 1994.
MERON201270 Ehud Meron. Pattern-formation approach to modelling spatially extended
ecosystems. Ecological Modelling, 234:70–82, 2012.
Glebsky LY Glebsky and LM Lerman. On small stationary localized solutions for the generalized 1-D Swift-Hohenberg equation, 1995.
BurkeKnoblach John Burke and Edgar Knobloch. Localized states in the generalized Swift-Hohenberg equation. Phys. Rev. E, 73:056211, May 2006.
Deng Shengfu Deng. Periodic solutions and homoclinic solutions for a Swift-Hohenberg equation with dispersion. Discrete and Continuous Dynamical Systems - S, 9(6):1647–1662,
2016.
Yang Junxiang Yang and Junseok Kim. Numerical simulation and analysis of the Swift–Hohenberg equation by the stabilized Lagrange multiplier approach, 2022.
Su Jian Su, Weiwei Fang, Qian Yu, and Yibao Li. Numerical simulation of Swift–Hohenberg equation by the fourth-order compact scheme, 2019.
sanchez2013numerical S Sánchez Pérez-Moreno, S Ruiz Chavarría, and
G Ruiz Chavarría. Numerical solution of the Swift–Hohenberg equation. In Experimental and computational fluid mechanics, pages 409–416. Springer, 2013.
chaos Jan Bouwe Van Den Berg and Jean-Philippe Lessard. Chaotic braided solutions via rigorous numerics: Chaos in the Swift–Hohenberg equation. SIAM Journal on Applied Dynamical Systems, 7(3):988–1031, 2008.
Moore1966 Ramon E. Moore. Interval analysis. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1966.
IEEE1788-2015 IEEE Computer Society. 1788-2015 - IEEE Standard for Interval Arithmetic, 2015.
lamb1992reversing Jeroen SW Lamb. Reversing symmetries in dynamical systems. Journal of Physics A: Mathematical and General, 25(4):925, 1992.
wilczak2003chaos Daniel Wilczak. Chaos in the Kuramoto–Sivashinsky equations—a
computer-assisted proof. Journal of Differential Equations, 194(2):433–459, 2003.
IEEE754-2019 IEEE Computer Society. IEEE Standard for Floating-Point Arithmetic. IEEE Std 754-2019, July 2019.
Neumaier1990 Arnold Neumaier. Interval methods for systems of equations, volume 37 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1990.
capd Tomasz Kapela, Marian Mrozek, Daniel Wilczak, and Piotr Zgliczyński. CAPD::DynSys: a flexible C++ toolbox for rigorous numerical analysis of dynamical systems. Communications in nonlinear science and numerical simulation, 101:105578, 2021.
WilczakZgliczynski2011 Daniel Wilczak and Piotr Zgliczyński. 𝒞^r-Lohner algorithm. Schedae Informaticae, 20:9–46, 2011.
poincare Tomasz Kapela, Daniel Wilczak, and Piotr Zgliczyński. Recent advances in a rigorous computation of Poincaré maps. Communications in Nonlinear Science and Numerical Simulation, 110:106366, 2022.
repo <https://github.com/Jacob4leg/SH_bifurcation>.
| http://arxiv.org/abs/2409.02278v1 | 20240903202437 | Evaluation and Comparison of Visual Language Models for Transportation Engineering Problems | ["Sanjita Prajapati", "Tanu Singh", "Chinmay Hegde", "Pranamesh Chakraborty"] | cs.CV | ["cs.CV"] |
§ ABSTRACT
Recent developments in vision language models (VLM) have shown great potential for diverse applications related to image understanding. In this study, we have explored state-of-the-art VLM models for vision-based transportation engineering tasks such as image classification and object detection. The image classification task involves congestion detection and crack identification, whereas, for object detection, helmet violations were identified. We have applied open-source models such as CLIP, BLIP, OWL-ViT, Llava-Next, and the closed-source GPT-4o to evaluate the performance of these state-of-the-art VLM models to harness the capabilities of language understanding for vision-based transportation tasks. These tasks were performed by applying zero-shot prompting to the VLM models, as zero-shot prompting involves performing tasks without any training on those tasks. It eliminates the need for annotated datasets or fine-tuning for specific tasks. Though these models gave results comparable with benchmark Convolutional Neural Network (CNN) models in the image classification tasks, they still need improvement for object localization tasks. Therefore, this study provides a comprehensive evaluation of the state-of-the-art VLM models, highlighting the advantages and limitations of the models, which can be taken as a baseline for future improvement and wide-scale implementation.
Keywords: visual-language models, crack detection, congestion detection, helmet violation detection, image understanding
§ INTRODUCTION
In recent years, there have been significant advancements in computer vision and language modeling for solving different tasks using deep learning. Inspired by the advancements in natural language processing using transformer-based models, a new concept in computer vision called Vision Transformers (ViT) <cit.> was introduced in 2021 for image understanding. In the field of language modeling, many Large Language Models (LLMs) such as Llama and ChatGPT have shown excellent capability to solve a large variety of tasks. These models, which were initially designed for text inputs, now also support visual inputs, connecting vision to language and enabling zero-shot or few-shot learning. This development has the potential to unlock several applications that will be key to the current AI-based technological revolution.
On the other hand, Convolutional Neural Networks (CNNs) have been extensively utilized for the past decade for vision-based processing, demonstrating efficient real-time performance. Their simpler architecture renders them favorable for real-time deployment. However, CNNs have limitations, such as the requirement for extensive datasets and the need for fine-tuning for almost every use case to achieve better results. This process is labor-intensive requiring manual annotations, highlighting the need for pre-trained models. This has created a necessity for foundational models that can be applied to various tasks without the need for extensive fine-tuning.
In this study, our primary focus is to understand the capabilities and limitations of the state-of-the-art Vision Language Models (VLMs) in the field of vision-based transportation engineering tasks. The study involved the careful selection of three specific transportation engineering tasks, each of which presents distinct challenges and complexities. The first task focuses on detecting congestion on highways from surveillance cameras, which is a critical issue in transportation management. The second task involves the identification of cracks in pavement surfaces, an essential aspect of infrastructure maintenance. While both these tasks fall under the domain of image classification, we go further to understand capabilities of VLMs in object detection tasks. The third task addresses the vital issue of detecting helmet violation, specifically determining whether motorbike riders were wearing helmets or not, which is crucial for enhancing safety on roadways.
These tasks have been chosen due to the fact that they require fine-tuning of pre-trained CNN models. Notably, the classes relevant to these tasks are not included in the COCO dataset <cit.>, necessitating specialized attention and refinement. The performance of the chosen tasks has been rigorously evaluated, providing valuable insights into the effectiveness of zero-shot VLMs in addressing transportation engineering tasks. In this research, both open-source and closed-source foundation models of VLM have been considered. Within the open-source category, the study delved into the performance of models such as CLIP <cit.>, BLIP <cit.>, OWL-ViT <cit.>, and Llava-Next <cit.>. Furthermore, the study also included an analysis of the closed-source GPT-4o <cit.> model, presenting a thorough evaluation of a range of foundational VLMs for transportation engineering tasks.
The following section discusses the past studies on the application of VLM models in transportation. This is followed by the methodology used in both image classification tasks, congestion and crack detection, and then addresses the methodology used in object detection. After that, the paper discusses the datasets used in this study and the results of our research. Finally, the conclusions and the future scope are highlighted.
§ LITERATURE REVIEW
Large Language Models (LLMs) have revolutionized natural language processing (NLP), allowing machines to understand and generate human-like language with unprecedented success. The performance of LLMs in textual understanding and their versatility in different domains of language tasks has led to the exploration of multi-modal LLMs <cit.>. Multimodal LLMs can process and generate information across various data types such as text, images, audio, and video. Vision Language Models (VLMs) blend computer vision and NLP capabilities. They are designed such that they can process and generate human-like text based on visual inputs, or the other way around <cit.>. By bridging the gap between visual and textual data understanding, VLMs have various applications such as image captioning, visual question answering, textual descriptions, and even image generation.
Recently, pre-trained VLM with zero-shot prediction has attracted significant attention, where VLM is pre-trained on a large-scale image-text dataset. The pre-trained VLM with a rich textual and image understanding can then be directly applied to any visual task without fine-tuning. Zero-shot prediction implies that the model can interpret and generate descriptions or answer questions, based on textual instructions it has never seen before.
§.§ VLM in transportation
VLMs and LLMs have lately demonstrated strong zero-shot and human-like reasoning capabilities. Recently, a few studies have integrated VLMs and LLMs into traffic-related tasks such as understanding traffic scenes, autonomous driving, and anomaly detection, with the aim of enhancing interpretability, safety, and generalization.
Some studies have attempted to leverage VLMs in autonomous driving for various purposes such as navigation, forecasting, interpreting vehicle actions, and planning. DriveVLM employs a VLM to interpret and analyze complex traffic scenes in order to plan actions for autonomous driving <cit.>. Similarly, DriveGPT4, a multimodal LLM, takes multi-frame videos and textual queries as input to generate responses and predict low-level control signals for vehicle action <cit.>. GPT in DriveGPT4 stands for Generative Pre-trained Transformer, and the digit “4” represents multimodality. In Vision Language Planning (VLP), researchers also integrated language models with vision-based systems to enhance autonomous driving by improving contextual understanding and generalization capabilities. VLP has two components, an agent-centric learning paradigm and a self-driving-car-centric learning paradigm, which respectively improve the local details in the BEV feature map and enhance the planning process by leveraging the knowledge encoded in the pre-trained language model <cit.>. While these works focused on improving autonomous driving systems, in the domain of Visual Language Navigation a VLN system was developed to reason about navigation actions for intelligent vehicles by leveraging LLMs and VLMs: it extracts landmark names from the user's language instructions, matches them with environmental objects, and finally infers navigation actions for the intelligent vehicle <cit.>. On the other hand, CityLLaVA was developed to understand traffic scenarios in the city by fine-tuning VLMs using bounding-box guided view selection and prompt engineering modules <cit.>.
Apart from autonomous driving applications and scene understanding, the DriveCLIP <cit.> framework explores the application of vision-language models, particularly the CLIP model, to identify distracted driving activities from naturalistic driving videos and images. This system offers zero-shot transfer, fine-tuning, and video-based models for driver state prediction. However, all of these studies have been restricted to homogeneous driving environments consisting mainly of four-wheelers; heterogeneous driving environments, which are more complex because they contain different types of vehicles travelling at varying speeds, have not yet been explored.
Video Anomaly Detection is another field where a few works have applied VLMs and LLMs for improved performance. VAD-LLaMA <cit.> is one such framework, in which traffic anomalies are detected and localized in long-range surveillance videos. The authors incorporated video-based large language models (VLLMs) to perform threshold-free detection and to explain the reasons for the detected anomalies, and introduced a novel Long-Term Context module to alleviate the incapability of long-range context modeling in existing VLLMs.
Apart from the anomaly detection work, the studies exploring VLMs in transportation have primarily used datasets containing in-vehicle camera images or videos. Surveillance-camera-based images and videos, on the other hand, can also be analysed with VLMs for improved image and video understanding. Moreover, all these applications focus on high-level tasks such as vehicle navigation and anomaly detection. In addition to exploring VLMs in such high-level image and video understanding, there is also a need to understand and analyze the potential of VLMs in low-level image understanding tasks such as image classification and object detection. This will involve utilizing the vision and language modalities to significantly improve zero-shot or few-shot classification and detection tasks in the transportation domain.
We recognized these limitations in the application of VLMs in transportation engineering-related problems and applied different state-of-the-art vision-language models in basic image understanding tasks such as classification and object detection to understand the capabilities and limitations of the models.
§ METHODOLOGY
This study aims to leverage the capabilities of VLMs for vision-based transportation engineering tasks which include a) image classification and b) object detection. For image classification, we tested the state-of-the-art VLMs on two tasks: a) congestion detection and b) crack detection. In the domain of object detection, we evaluated the performance of VLMs for detecting helmet violation cases, i.e., whether motorbike riders are wearing helmets or not.
These image classification and object detection tasks are selected to identify the potential of VLMs in tasks that go beyond detecting regular traffic entities (such as cars, pedestrians, etc.) and therefore can harness the capabilities of language understanding for vision-based transportation tasks. In this section, we discuss the details of the state-of-the-art VLM models that have been used for the selected image classification and object detection tasks.
§.§ Image classification task
In this study, the first image classification task involves congestion detection, i.e., detecting congestion in any of the highway lanes and classifying the image as congested or not. The second task, crack detection, requires the model to identify whether any cracks are present in a given pavement surface image.
For these image classification tasks, four models are used: OpenAI’s Contrastive Language-Image Pre-Training (CLIP) <cit.>, Bootstrapping Language-Image Pre-training (BLIP) <cit.>, Large Language and Vision Assistant - Next Generation (LLaVA-NeXT) <cit.>, and GPT-4o. As these models have strong zero-shot performance, the need for annotated training data in the image classification task is eliminated; they were therefore selected for classifying the vision-based transportation tasks using zero-shot prompting. As explained earlier, zero-shot prompting is a technique where a model is given a task or instruction without any prior examples or training on that specific task.
§.§.§ CLIP model
CLIP model <cit.> is trained on 400 million image-text pairs available on the internet, allowing it to learn a range of visual features along with their corresponding text description. During the training of CLIP, it employs contrastive learning where it learns to predict which text and image are paired together. An image and text encoder are trained to maximize the cosine similarity of the correct image and text pairs while minimizing the cosine similarity of the incorrect pairings.
The CLIP model has zero-shot learning capability, allowing it to classify images based on natural language prompts without requiring additional task-specific training.
In this study, image classification has been performed for two vision-based transportation tasks, congestion and crack classification. For both tasks, we used the names of the binary classes as the probable text pairings and employed CLIP to predict the most likely (image, text) pair.
To classify congestion, we used five different class names for the CLIP model:
A1: ["Congested", "Non-congested"],
A2: ["Congested lanes", "Non-congested lanes"],
A3: ["Lanes with congestion", "Lanes without congestion"],
A4: ["Queued traffic", "Free-flow traffic"],
A5: ["Congested lanes", "Free-lanes"].
Similarly, for classifying cracks in the pavement surface, the different class names used were:
B1: [“Cracked”, “Non-Cracked”],
B2: [“Cracks present”, “Cracks absent”],
B3: [“Cracked surface”, “Non-Cracked surface”],
B4: [“Cracked pavement”, “Crack-free pavement”],
B5: [“Crack”, “No crack”].
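To illustrate how such class-name pairs drive zero-shot classification, the sketch below scores an image against the congestion pair A5 using the Hugging Face implementation of CLIP; the same mechanism applies to the crack pairs B1–B5. The checkpoint name and the image path are illustrative assumptions rather than the exact configuration used in the study.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Zero-shot congestion classification with class-name pair A5.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    class_names = ["Congested lanes", "Free-lanes"]   # prompt pair A5
    image = Image.open("highway_frame.jpg")           # hypothetical CCTV frame

    inputs = processor(text=class_names, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # logits_per_image holds scaled cosine similarities between the image and each prompt.
    probs = outputs.logits_per_image.softmax(dim=-1).squeeze()
    print("Predicted class:", class_names[int(probs.argmax())])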
§.§.§ BLIP model
BLIP addresses both vision-language understanding and generation tasks by using a multimodal mixture of encoder-decoder architecture and a novel data bootstrapping technique called CapFilt. BLIP generates synthetic captions and filters out noisy ones to enhance training data quality, leading to state-of-the-art performance across various vision-language tasks, including image-text retrieval, image captioning, and visual question answering <cit.>. BLIP is therefore more versatile across both generation-based and understanding tasks than CLIP, which focuses on alignment and representation learning.
Similar to the CLIP model, probable class names were used by BLIP to classify images for both tasks. The different class names utilized in the BLIP model are the same as those mentioned for the CLIP model.
§.§.§ LLaVA model
The Large Language and Visual Assistant (LLaVA) <cit.> is an open-source multimodal model designed to interpret and generate results based on both visual and textual input. It leverages the LLaMA <cit.> model and incorporates the pre-trained CLIP visual encoder for processing visual content. The encoder extracts visual features from input images and links them to language embeddings through a trainable projection matrix, effectively translating visual features into language embedding tokens and bridging the gap between text and images. Although trained on smaller datasets than closed-source multimodal GPT models, LLaVA purports to demonstrate behavior analogous to the proprietary models.
LLaVA-NeXT (an updated version of LLaVA) <cit.> focuses on enhancing multimodal instruction-following capabilities using data generated to follow detailed visual and textual instructions, targeting interactive and complex visual tasks. In contrast, CLIP learns generalizable visual representations from large-scale natural language supervision, aligning image and text embeddings to enable zero-shot learning across diverse vision tasks.
The LLaVA-NeXT model was also employed in this study for both classification tasks, with different task-specific instructions. We first query the model with an initial prompt to generate a description of each image. To obtain the output as a discrete class name, we then query the model again with the generated description and a follow-up prompt.
The five initial prompts adopted to generate descriptions for the congested/non-congested dataset are as follows:
P1: Classify whether highway lanes are congested or not in the image.
P2: Classify whether highway lanes are congested or not in the image.
P3: Classify whether in the image highway lanes are congested or not.
P4: Classify whether the highway have congested lane or free-lane in the image.
P5: Check whether the highway lanes are congested or not in the image.
The follow-up prompts corresponding to each of these initial prompts were:
F1: Write Yes for congested, No for non-congested.
F2: Write Congested lanes if lanes are congested, Free-lanes if lanes are not congested.
F3: Write Congested lanes if lanes are congested, Free-lanes if lanes are not congested.
F4: Write Congested lanes if lanes are congested, Free-lanes if free-lane.
F5: Write Congested lanes if lanes are congested, Free-lanes if lanes are not congested.
Similarly, for classifying cracked/non-cracked images, the initial prompts adopted were:
P1: Classify whether the pavements have cracks or not in the image?
P2: Classify whether the cracks are present or not in the pavement surface image?
P3: Classify whether the pavement surface is cracked or not in the image?
P4: Classify whether in the image, the pavement surface have cracks or not?
P5: Check whether the pavement surface has any cracks or not?
The corresponding follow-up prompts used for the query were:
F1: Write Cracked if cracks present, Non-cracked if cracks not present.
F2: Write Cracked if cracks present, Non-cracked if cracks not present.
F3: Write Cracked if surface is cracked, Non-cracked if surface is not-cracked.
F4: Write Cracked if surface has cracks, Non-cracked if surface do not have cracks.
F5: Write Cracked if cracks present, Non-cracked if cracks not present.
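The two-stage querying used with LLaVA-NeXT can be organised as in the sketch below. The query_llava helper is a hypothetical stand-in for a call to a LLaVA-NeXT pipeline (e.g., via the Hugging Face llava-hf checkpoints); its exact invocation depends on the deployment, so only the two-stage logic with prompt pair P1/F1 for crack classification is shown.

    def query_llava(image_path: str, prompt: str) -> str:
        """Hypothetical wrapper around a LLaVA-NeXT pipeline returning its text response."""
        raise NotImplementedError("plug in the actual LLaVA-NeXT inference call here")

    def classify_crack(image_path: str) -> str:
        # Stage 1: initial prompt (P1) elicits a free-form judgement/description.
        initial_prompt = "Classify whether the pavements have cracks or not in the image?"
        description = query_llava(image_path, initial_prompt)

        # Stage 2: follow-up prompt (F1), together with the generated description,
        # constrains the answer to a discrete class name.
        follow_up = ("Write Cracked if cracks present, Non-cracked if cracks not present. "
                     "Description: " + description)
        answer = query_llava(image_path, follow_up).strip().lower()
        return "Non-cracked" if answer.startswith("non") else "Cracked"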
§.§.§ GPT-4o
GPT-4o, with the “o” standing for “omni”, is an advanced iteration of the Generative Pre-trained Transformer series by OpenAI. GPT-4o is built to handle multimodal data including text, images, and audio, allowing it not only to process and generate natural language but also to interpret and respond to visual and auditory data. Its ability to understand and generate human-like text makes it a valuable tool in diverse fields.
In this study, we prompted GPT-4o for both image classification tasks for comparison and evaluation purposes. The prompt used for congestion classification was "Can you tell me whether the closer lane are free lanes or not. Only return non-Congested if there are all free lanes otherwise return congested", whereas for crack classification it was "Can you tell me whether the pavements have cracks or not in the image. Only return yes if crack is present and no if crack is not present."
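A sketch of the corresponding GPT-4o query using the OpenAI Python client is shown below; the image is passed as a base64 data URL and the prompt mirrors the crack classification setup. The image path and file handling are illustrative assumptions, and the client call reflects the chat-completions interface as commonly documented, not necessarily the exact code used in the study.

    import base64
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def classify_crack_gpt4o(image_path: str) -> str:
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode("utf-8")
        prompt = ("Can you tell me whether the pavements have cracks or not in the image. "
                  "Only return yes if crack is present and no if crack is not present.")
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
        )
        answer = response.choices[0].message.content.strip().lower()
        return "Cracked" if answer.startswith("yes") else "Non-cracked"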
§.§.§ Benchmark CNN model
The congestion classification results were compared against the DCNN model presented in Chakraborty et al. <cit.>; that model took 25 minutes to train on an NVIDIA Tesla K20m GPU with 4 GB RAM. For crack classification, the VLM models were compared with a CNN based on the EfficientNet B1 architecture, which took 903 seconds to run on the test dataset.
§.§ Object detection tasks
Our study focuses on the object detection task of identifying helmet violations. The task aims to detect whether motorcyclists are wearing helmets, which is a mandatory road safety rule in many countries. This class is not present in the COCO dataset or any other pre-trained CNN model, which would ordinarily necessitate fine-tuning a model for our specific use case. Our objective is to explore how well zero-shot vision language models perform in such scenarios, reducing the need for large annotated datasets and fine-tuning and thereby streamlining the resource-intensive process of dataset creation.
We aim to detect two classes: “Helmet” (a motorcyclist wearing a helmet) and “NoHelmet” (a motorcyclist not wearing a helmet). While identifying the positive class is relatively straightforward, the challenge lies in identifying the negative class, “NoHelmet”, especially for vision language models: language models readily understand a positively described class, but VLMs have been found to face difficulties identifying a class described through negation.
For zero-shot object detection, we use the Vision Transformers for Open-World Localization (OWL-ViT) model <cit.> with basic classes, performing the required post-processing to improve results. Additionally, we utilize textual class prompts to eliminate the need for post-processing. However, OWL-ViT does not perform well on textual classes since it is not built on a large language model. As a result, we also consider open-source large language vision models such as LLaVA-NeXT <cit.>, as well as closed-source VLMs like GPT-4o <cit.>.
§.§.§ OWL-ViT model
OWL-ViT <cit.> is a state-of-the-art open-vocabulary object detection model launched by the Google research team in 2022. The model is designed to understand the relationship between images and text. Operating as a zero-shot object detection model, it leverages CLIP <cit.> as its multimodal backbone in conjunction with a ViT-like (Vision Transformer-like) model. To use CLIP for object detection, OWL-ViT removes the final token pooling layer of the vision model and adds a lightweight classification and box head to each transformer output token. Open-vocabulary classification is achieved by substituting the fixed classification layer weights with the embeddings obtained from the text model.
In this part of our study, we focus on helmet violation detection using OWL-ViT <cit.>. We first explore its performance with basic classes, i.e., one-word class names given as prompts; this method requires post-processing for better accuracy. We then extend beyond one-word classes by evaluating its performance on textual classes.
a) Detection of basic classes and post processing
Initially, single-word class names are input via the prompt, i.e., Motorbike, Person and Helmet. After detecting these classes, post-processing is necessary to obtain the desired output classes, i.e., (1) “Helmet” (a person sitting on a motorbike wearing a helmet) and (2) “NoHelmet” (a person sitting on a motorbike without wearing a helmet).
Three steps are involved in the post-processing module, as shown in Fig 1. First, a non-maximum suppression method is used to remove duplicated bounding boxes. Second, the person bounding boxes that are aligned with a motorbike are selected based on the Intersection over Union (IoU) of the motorbike and person bounding boxes; those with an IoU greater than 60% are retained, thereby excluding persons not seated on motorbikes, such as pedestrians and bicyclists. In the third step, we identify person bounding boxes sharing an IoU of over 60% with the helmet bounding boxes and assign them to the “Helmet” class. Any remaining person bounding boxes, which are not aligned with helmet bounding boxes, are categorized as members of the “NoHelmet” class.
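A sketch of steps two and three of this post-processing is given below (the non-maximum suppression in step one can be performed with standard routines such as torchvision's nms). Boxes are assumed to be in (xmin, ymin, xmax, ymax) format, and the variable names and 60% threshold follow the description above; this is an illustrative outline rather than the exact implementation.

    import numpy as np

    def iou(box_a, box_b):
        """Intersection-over-union of two boxes in (xmin, ymin, xmax, ymax) format."""
        xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, xb - xa) * max(0.0, yb - ya)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def assign_helmet_classes(person_boxes, motorbike_boxes, helmet_boxes, thr=0.6):
        """Steps 2-3: keep riders only, then split them into Helmet / NoHelmet."""
        labels = {}
        for i, person in enumerate(person_boxes):
            # Step 2: discard persons not aligned with any motorbike (pedestrians, cyclists).
            if not any(iou(person, m) > thr for m in motorbike_boxes):
                continue
            # Step 3: riders overlapping a helmet box are "Helmet", the rest "NoHelmet".
            labels[i] = "Helmet" if any(iou(person, h) > thr for h in helmet_boxes) else "NoHelmet"
        return labels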
b) Detection using textual classes directly, so that post-processing is not required
As observed in the section above, OWL-ViT <cit.> with basic classes needs some post-processing. We aim to use VLMs such that no post-processing is needed. In this case, we provide prompts with textual classes, which consist of entire sentences instead of individual words and thus provide a more complete description. OWL-ViT is an open-vocabulary object detector that performs well at identifying basic classes; here we are interested in determining whether it can perform equally well with textual classes, without requiring any post-processing.
For our research we selected the following textual prompts:
Prompt 1: “A person on a motorbike wearing helmet”
Prompt 2: “A person on a motorbike bareheaded”
Prompt 3: “A person on a motorbike without wearing helmet”
§.§.§ LLaVA model
As discussed in the earlier section, OWL-ViT <cit.> needs a post-processing module to handle the negated class and give better results. In this part of the study, we test the performance of large language vision models like LLaVA-NeXT <cit.> on our use case. As mentioned above, LLaVA-NeXT has very good image understanding capabilities.
We undertook several experiments with LLaVA-NeXT <cit.> to evaluate its performance for object detection. To assess its image understanding, we provided the prompt “Describe the image”. The LLaVA-NeXT model is known for its strong image interpretation capabilities and yields precise descriptions. However, LLaVA-NeXT is unable to generate bounding boxes or provide updated images with bounding boxes. We attempted to obtain the coordinates using different prompts but found that it exhibits poor object localization capability and weak understanding of negation. Therefore, to leverage LLaVA-NeXT's image understanding capabilities, we combined OWL-ViT with LLaVA-NeXT.
OWL-ViT (basic classes) with LLaVA-NeXT:
In this part of our study, we integrated OWL-ViT <cit.> with LLaVA-NeXT <cit.> to optimize the outcomes. Initially, OWL-ViT was employed with basic classes in its prompt, specifically Motorbike, Person and Helmet. Following this, we applied non-maximum suppression to remove redundant bounding boxes. Subsequently, we used IoU selection to extract the images of individuals seated on motorbikes, as described in the preceding section. These cropped images were then provided as inputs to LLaVA-NeXT, using the prompt: "Identify whether all person sitting on motorbike is wearing helmet or not?".
Additionally, we employed a follow-up prompt to assign discrete classes. Specifically, we assigned the class “Helmet” to all cropped images in which all visible individuals were wearing helmets, and the class “NoHelmet” to cropped images in which any visible person was not wearing a helmet. The follow-up prompt was: "Write no if any person is not wearing helmet and write yes if all person is wearing helmet."
§.§.§ GPT-4o
As mentioned above, GPT-4o <cit.> is OpenAI's latest LLM. Being a closed-source model, it exhibits better visual language understanding than any other available model. It is exceptionally good at visual understanding, but, similar to LLaVA-NeXT <cit.>, GPT-4o <cit.> also lacks object localization capability and fails to return correct bounding box coordinates. Therefore, we combine OWL-ViT and GPT-4o. The crops from OWL-ViT (basic classes) are given to GPT-4o with the prompt: "Can you tell me the if there is a person wearing helmet or not. Only return helmet if all person are wearing helmet otherwise result nohelmet". We do not require a follow-up prompt when using GPT-4o, as its textual understanding is good and it returns the expected classes, i.e., Helmet and NoHelmet.
§.§.§ Benchmark CNN Model
To compare the results of the VLMs with CNN models, we fine-tune a YOLOv8 model <cit.>. The model is trained using 2500 training images on an Nvidia RTX A4000 GPU, taking around 6 hours for 250 epochs.
§ DATASET
In this section, we describe the datasets used in our study for the tasks discussed earlier, starting with the image classification tasks, i.e., congestion and crack classification, and then object detection.
§.§ Classification tasks
The congestion dataset was taken from the work of Chakraborty et al. <cit.>, where images were obtained from 121 cameras of the Iowa DOT CCTV camera database spread across interstates and highways. The dataset has 1010 images in total, comprising 516 congested and 494 non-congested highway images.
The dataset used for crack classification is SDNET2018 <cit.>, a publicly available dataset containing more than 56,000
images of concrete walls, bridges, and pavements. The pavement images are labeled and categorized into cracked and non-cracked classes. In our study, all 2608 cracked images were used, and we randomly selected 2600 non-cracked pavement images to balance the dataset.
§.§ Object detection tasks
The dataset used is sourced from the AICity Challenge 2024, specifically Track 5 - Detecting Violation of Helmet Rule for Motorcyclists <cit.>. The dataset consists of 100 training and 100 testing videos, recorded at 10 fps and 1080p resolution from various locations in an Indian city. We extracted frames from these videos and selected 2500 training images and 200 test images. The images show a close-up view of traffic captured by the cameras. The dataset contains three object classes: Motorbike, Helmet (a person seated on a motorbike wearing a helmet), and NoHelmet (a person seated on a motorbike without wearing a helmet). It includes images of individuals wearing helmets, scarves, and turbans, as well as those without any headgear, and it also contains footage from congested lanes. The same test dataset of 200 images has been used for the vision-language models OWL-ViT <cit.>, LLaVA-NeXT <cit.> and GPT-4o <cit.>.
§ RESULTS AND DISCUSSION
In this section, we discuss the results achieved by applying the different state-of-the-art VLM models discussed earlier, starting with the image classification tasks, i.e., congestion and crack classification, and then object detection.
§.§ Classification tasks
§.§.§ Congestion classification
The task of classifying congestion was accomplished by applying zero-shot prompting to the models described earlier. As zero-shot prompting involves performing a task without any training on that task, the models have to rely entirely on the instruction provided in the prompt itself.
The performance of the CLIP model varied depending on the class names used as prompts. The accuracies achieved for class names A1, A2, A3, A4 and A5 were 76%, 76%, 66%, 77%, and 88%, respectively. The same class names were used with the BLIP model, which produced different results: accuracies of 86%, 94%, 93%, 49%, and 87% for A1, A2, A3, A4, and A5, respectively. The LLaVA-NeXT model utilized the initial and follow-up prompts instead of class names to classify the images; the combinations P1-F1, P2-F2, P3-F3, P4-F4 and P5-F5 achieved 86%, 87%, 82%, 64%, and 87% accuracy, respectively. The best results of all the models are presented in Table 1 along with the precision, recall, and F1-score.
The True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) examples are shown in Figure 2. As demonstrated in Fig 2 (g-i), the models gave false positive results for night-time images because of the lighting effects and hence assumed them to be congested. Moreover, it can also be inferred from Fig 2 (j-l) that if one of the lanes had free-flowing conditions, the image was classified as non-congested even though the other lanes were congested. The models are thus not yet capable of interpreting the instruction that an image should be labeled congested if any of the lanes is congested.
From Table 1, it can be inferred that all the models gave comparable results, and BLIP even outperformed the benchmark DCNN model.
§.§.§ Crack classification
Zero-shot prompting, being highly flexible, allows a model to be applied to a wide range of tasks. We therefore also applied it to the crack classification task, although a model's ability to adapt to the specific demands of a new task is limited by what it has learned during training.
The prompts play a vital role in obtaining results from the models through zero-shot prompting. The accuracies achieved by the CLIP model for class names B1, B2, B3, B4, and B5 were 79%, 68%, 79%, 74%, and 70%, respectively. The same class names were used with the BLIP model, which gave results that varied from CLIP: accuracies of 50%, 71%, 57%, 61%, and 50% for B1, B2, B3, B4, and B5, respectively. Although the class names used by both models were the same, the results were contrasting. For the first and last class-name pairs, BLIP gave 50% accuracy, indicating that the model is not able to handle negation in prompts. The LLaVA-NeXT model used the initial and follow-up prompts to classify the images; the combinations P1-F1, P2-F2, P3-F3, P4-F4, and P5-F5 achieved 72%, 67%, 53%, 58%, and 76% accuracy, respectively.
The best results of all the models are reported with the precision, recall, and F1-score of each class in Table 2. The TP, TN, FP, and FN examples are shown in Figure 3. It was observed that the models are not able to distinguish between a rough surface and a crack: all the models classified rough surfaces as cracked, as shown in Fig 3 (g-h). On the other hand, if the crack was present near the edge of the image, as shown in Fig 3 (j-l), the models were not able to identify it.
In the image classification tasks, VLMs performed well even with zero-shot prompting. As no task-specific prompt engineering was used, applying further prompt-based strategies could enhance the models’ performance, making them viable for high-level tasks and possibly reducing the reliance on extensively annotated datasets.
§.§ Object detection tasks
§.§.§ OWL-ViT
As shown in Table 3, OWL-ViT <cit.> achieves notably good results when given basic classes in its prompts, such as Motorbike, Person, and Helmet. To obtain the Helmet and NoHelmet classes, some post-processing needs to be done. After post-processing, the zero-shot OWL-ViT model shows significantly better results compared to a trained YOLOv8 <cit.> model, achieving a precision of 95% for the Helmet class and 74% for the NoHelmet class, as reported in Table 3. The inference results are shown in Fig 4. One major advantage of OWL-ViT is that, even with basic prompts, it can distinguish between a cap, turban, scarf, and helmet, as shown in Fig 4 (a,b,e).
OWL-ViT <cit.> is not built on a large language model, which limits its joint visual and textual understanding. Using different prompts, we observed that OWL-ViT performs poorly when processing textual classes consisting of complete sentences; furthermore, it lacks an understanding of negation.
§.§.§ LLaVA-NeXT
LLaVA-NeXT <cit.> builds on LLaMA models <cit.> and shows significant advances in image understanding. However, it cannot detect objects or provide images with bounding boxes, which is achievable with OWL-ViT <cit.>. In our study, by tuning the prompt, we can obtain bounding box coordinates in Pascal VOC format <cit.> (xmin, ymin, xmax, ymax); however, LLaVA-NeXT cannot accurately localize objects and gives incorrect coordinates, despite its excellent understanding of prompts and images.
Our approach therefore involved integrating the OWL-ViT <cit.> model with LLaVA-NeXT <cit.> to leverage the latter's image understanding capabilities. The crops obtained from OWL-ViT with basic class prompts were used as inputs for the LLaVA-NeXT model, as shown in Fig 5(a), and a follow-up prompt was employed to summarize the results. As reported in Table 3, with this experiment we achieved 88% precision for the Helmet class and 90% precision for the NoHelmet class.
§.§.§ GPT-4o
GPT-4o <cit.> has strong visual language understanding, similar to LLaVA-NeXT <cit.>, but it does not support object detection. Its image understanding is excellent, even more so than LLaVA-NeXT. By combining it with OWL-ViT and providing GPT-4o with the crops produced by OWL-ViT <cit.> (basic class prompts), we achieved 99% precision for the Helmet class and 92% precision for the NoHelmet class, which is almost equal to the CNN benchmark, as shown in Fig 5(b). Additionally, GPT-4o achieves a recall of 99% for the NoHelmet class, which is higher than the CNN benchmark, as shown in Table 3.
According to Table 3, VLMs demonstrate notably good precision and recall in zero-shot settings. This capability can reduce costs by eliminating the need to fine-tune models for every new use case and to annotate millions of images. With the right methodology, engineering, and utilization, VLMs have the potential to outperform traditional CNN models. Despite excelling in these areas, VLMs have poor object localization capabilities: models such as LLaVA-NeXT and GPT-4o understand images well but struggle to localize objects. Additionally, VLMs are not currently lightweight or fast compared to CNN models. For example, OWL-ViT with LLaVA-NeXT took 6.2 seconds to process one image, OWL-ViT with post-processing took 0.68 seconds, and the OWL-ViT with GPT-4o pipeline took 3.3 seconds, whereas a CNN model took only 0.14 seconds for the same task. While VLMs can provide better accuracy, further work is needed to make them suitable for real-time field use.
§ CONCLUSIONS
The objective of this study was to understand the performance of vision language models on vision-based transportation tasks. This was done by comparing different state-of-the-art VLMs using zero-shot prompting on two kinds of tasks, image classification and object detection. The transportation-related vision tasks selected for image classification were congestion detection and crack identification, whereas for object detection it was the identification of helmet violations. The VLMs sometimes performed on par with the benchmarks, but their performance needs to be improved in terms of prompt engineering, localization of detected objects, and inference time. This paper provides a comprehensive understanding of the limitations of VLM models that need to be addressed for large-scale implementation. Future studies can consider other case studies related to vision-based transportation tasks, and other VLM models can be explored with different benchmark datasets.
§ AUTHOR CONTRIBUTIONS
The authors confirm their contribution to the paper as follows: study conception and design: T. Singh, S. Prajapati, C. Hedge, P. Chakraborty; data collection: S. Prajapati, T. Singh; analysis and interpretation of results: S. Prajapati, T. Singh, C. Hedge, P. Chakraborty; draft manuscript preparation: S. Prajapati, T. Singh, C. Hedge, P. Chakraborty. All authors reviewed the results and approved the final version of the manuscript.
§ ACKNOWLEDGEMENTS
Our research results are based upon work supported by the IITK-NYU Joint Research Grant. Any opinions, findings, and conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the IITK and NYU.
|
http://arxiv.org/abs/2409.03328v1 | 20240905080411 | Pareto Set Prediction Assisted Bilevel Multi-objective Optimization | [
"Bing Wang",
"Hemant K. Singh",
"Tapabrata Ray"
] | cs.NE | [
"cs.NE"
] |
[email protected]
School of Engineering and Technology, University of New South Wales Canberra
Australia
[email protected]
School of Engineering and Technology, University of New South Wales Canberra
Australia
[email protected]
School of Engineering and Technology, University of New South Wales Canberra
Australia
§ ABSTRACT
Bilevel optimization problems comprise an upper level optimization task that contains a lower level optimization task as a constraint. While there is a significant and growing literature devoted to solving bilevel problems with single objective at both levels using evolutionary computation, there is relatively scarce work done to address problems with multiple objectives (BLMOP) at both levels. For black-box BLMOPs, the existing evolutionary techniques typically utilize nested search, which in its native form consumes large number of function evaluations. In this work, we propose to reduce this expense by predicting the lower level Pareto set for a candidate upper level solution directly, instead of conducting an optimization from scratch. Such a prediction is significantly challenging for BLMOPs as it involves one-to-many mapping scenario. We resolve this bottleneck by supplementing the dataset using a helper variable and construct a neural network, which can then be trained to map the variables in a meaningful manner. Then, we embed this initialization within a bilevel optimization framework, termed Pareto set prediction assisted evolutionary bilevel multi-objective optimization (PSP-BLEMO).
Systematic experiments with existing state-of-the-art methods are presented to demonstrate its benefit. The experiments show that the proposed approach is competitive across a range of problems, including both deceptive and non-deceptive problems [This is the author-submitted version of the work, currently undergoing peer review].
[500]Theory of computation Continuous optimization
[500]Mathematics of computing Evolutionary algorithms
Pareto Set Prediction Assisted Bilevel Multi-objective Optimization
Tapabrata Ray
====================================================================
§ INTRODUCTION
A number of real-world problems manifest as a hierarchical optimization problem, where the objective(s) of an upper-level (UL) problem are optimized subject to the optimality of a lower-level (LL) problem. The UL and LL are also referred to as leader and follower tasks, respectively, and may contain their own additional constraints. Such problems are referred to as bilevel programming or bilevel optimization problems (BLP) <cit.>; and are the most-studied subset of a more generalized class of multi-level optimization problems <cit.>. Such problems are of interest in a number of application domains <cit.>, including transportation<cit.>, economics <cit.>, engineering <cit.> and management <cit.>, to name a few. Moreover, they also pose certain unique theoretical and algorithmic challenges <cit.> compared to traditional optimization (referred to occasionally in this study as single-level optimization for specificity). These include, for example, being NP-hard even when the problems are linear at both levels <cit.>, deceptive nature of UL evaluations for solutions that are sub-optimal at LL, and challenges in environmental selection and benchmarking strategies <cit.>. Considering the above, bilevel problems have attracted interest from both academic researchers and practitioners. Formally, bilevel problems can be represented as shown in Eq. <ref>.
    Minimize_{𝐱_u ∈ 𝕏_u}   F(𝐱_u, 𝐱_l) = ( F^1(𝐱_u, 𝐱_l), …, F^M(𝐱_u, 𝐱_l) )
    subject to:
        𝐱_l ∈ argmin_{𝐱_l ∈ 𝕏_l} { f(𝐱_u, 𝐱_l) = ( f^1(𝐱_u, 𝐱_l), …, f^N(𝐱_u, 𝐱_l) )
                subject to:  g^1(𝐱_u, 𝐱_l), g^2(𝐱_u, 𝐱_l), …, g^q(𝐱_u, 𝐱_l) ≤ 0 },
        G^1(𝐱_u, 𝐱_l), G^2(𝐱_u, 𝐱_l), …, G^p(𝐱_u, 𝐱_l) ≤ 0
Here F^1 … F^M and f^1 … f^N are objective functions at the UL and LL, respectively. The UL and LL design variables are denoted using {𝐱_u, 𝐱_l}, sampled from their design spaces {𝕏_u, 𝕏_l}, respectively. The functions g^j(𝐱_u, 𝐱_l) represent constraints for the LL problem, while G^k(𝐱_u, 𝐱_l) determine the feasible space for the UL problem. For non-exact techniques, the equality constraints, if they exist, are often converted into inequalities with a tolerance; hence, equality constraints are not included in Eq. <ref> for brevity.
Bilevel optimization problems have their origins in Stackelberg game theory and economics <cit.>, but have been since studied in several other fields. However, overwhelming majority of these studies have been dedicated to problems where both levels have a single objective (BLSOP), i.e., M=N=1. A number of competitive algorithms have been developed for this class of problems utilizing ideas such as problem transformation <cit.>, hybridization <cit.>, surrogate-assisted search <cit.>, transfer learning <cit.>. For more comprehensive overview of the works in the area of BLSOP, the readers may refer to review papers such as <cit.>.
However, there has been relatively scarce attention paid to bilevel problems with multiple objectives at one or both levels. The bilevel multi-objective optimization problems (BLMOPs), also denoted with acronym MOBO in <cit.>, typically have multiple objectives at UL, while the LL may have one or multiple objectives. The special case where the UL has a single objective, but LL has multiple objectives, is referred to as semi-vectorial bilevel optimization (SVBO) problem <cit.>. A number of real-world problems can be formulated as BLMOPs, and thus could benefit from efficient methods to solve such problems. In <cit.>, an environmental economics problem is discussed wherein a company wants to set up a gold mine in a specific geographical region. The government acts as the UL decision-maker in this case, with the objectives of maximizing revenues generated by the project (tax, jobs etc.) and minimizing the detrimental impacts on the environment. The mining company, as the LL decision-maker, aims to maximize its profits as well as maximize its reputation/public image. Another problem discussed in <cit.> is that of hierarchical decision-making in a company, wherein the objectives of a chief executive officer (CEO) are to maximize the quality of the products and the company profits. On the other hand, the branch heads working at a lower level seek to maximize the branch profits and the worker satisfaction. In the context of border security, the problem discussed in <cit.> involves UL objectives of maximizing total weighted exposure of the minimal exposure path, minimizing sensor relocation time and the number of sensors relocated, while the LL seeks to minimize the total expected weighted exposure of the intruder’s minimal exposure path. Some other BLMOP application examples include transportation planning and management <cit.>, manufacturing <cit.>, machine learning problems <cit.>. A number of additional applications from the domains of logistics, environmental economics and manufacturing can also be found in <cit.>.
In this study, we are predominantly concerned with addressing BLMOPs where both levels have multiple objectives. Given the limited work in the field so far, the scope of study is limited to two objectives, but in principle higher number of objectives fall within the same category. Beyond the challenges already encountered for BLSOPs, BLMOPs have their own characteristic challenges. The key challenge is that for any given UL solution, the optimum solution at LL is not just a single solution but a set of trade-off solutions (Pareto front, PF). This situation occurs rarely in BLSOPs, where the LL optimum has multiple global optimum solutions, but is ubiquitous for BLMOPs. This implies that for each 𝐱_u, not only a significant number of function evaluations are required at LL to find a good PF approximation, a large set of solutions (corresponding to LL PF) also needs to be evaluated at UL. This results in proliferation of function evaluations used by approaches, especially those that rely on nested strategies. The second, related challenge is that of autonomy in decision-making by UL and LL. Since LL decision-maker has an entire PF approximation to chose from for sending to UL decision-maker, co-operation or conflict between the two decision-makers may result in sub-optimal PF at the UL. In most of the existing works, an optimistic version of the problem is assumed, wherein the UL decision-maker has the authority to choose solutions from the LL PF that bring UL the most benefits. In what follows, we highlight some of the representative approaches that address BLMOPs. Moreover, some of the works convert the LL problem from multi-objective to single-objective through the use of value functions in order to provide a single solution to the UL <cit.>. For a more extensive coverage of literature in BLMOPs, the readers are referred to <cit.>.
Classical/analytical techniques:
Much like the case of BLSOPs, some of the early efforts were directed towards solving BLMOPs utilizing mathematical techniques for exact solutions. The pre-condition to the application of these techniques is often that the underlying response functions (objectives, constraints) need to satisfy certain regularity conditions. For instance, in <cit.>, an approach is developed to solve BLMOPs where all objectives and constraints at both levels are linear. Three different cases were solved with regards to anticipated behavior of the LL decision-maker, namely, optimistic, pessimistic, and historical. In <cit.>, bi/multi-level problems with multiple objectives are solved under a scenario where the decision-makers have certain tolerances, modeled using fuzzy set theory. In <cit.>, an interactive method was presented for BLMOP by replacing the LL problems with Kuhn–Tucker conditions. In <cit.>, an approach to solve non-linear BLMOPs was presented. Differentiability is generally assumed for calculating gradients in such approaches.
In <cit.>, the BLMOP is linearly scalarized to generate Pareto frontier solutions, and a filtering mechanism is then constructed to maintain the distribution of these solutions; linear functions are assumed for the given problems. In <cit.>, Pareto solutions of bilevel linear problems are characterized after converting the bilevel problem into a single-level mixed 0-1 programming problem.
Moreover, in most of these works, the working principles are demonstrated numerically on a very limited set of problems (often one or two), with small number of variables.
Metaheuristic techniques: In order to handle cases where the underlying functions do not satisfy the required mathematical properties, or are not available altogether (so-called “black-box” functions), metaheuristic techniques, such as evolutionary algorithms or swarm intelligence algorithms, are often chosen. However, this versatility typically comes with the requirement to evaluate a large number of candidate solutions to reach near-optimal solutions. Given the structure of a bilevel problem, a common way to solve it using metaheuristic methods is through a nested approach, where a standard multi-objective evolutionary algorithm (MOEA) or equivalent method is used at both levels. Towards this end, the non-dominated sorting genetic algorithm II (NSGA-II) with a special population structure was utilized in <cit.> in a nested manner to solve BLMOPs. This was further hybridized with local search methods in <cit.>. Differential evolution was implemented in a nested form for solving BLMOPs in <cit.>.
With similar intent, nested search has also been proposed using other metaheuristic approaches, such as particle swarm optimization (PSO) and simulated annealing (SA) <cit.>.
Recently, a variable decomposition based cooperative co-evolutionary method was proposed to improve search efficiency of nested search structure of BLMOP <cit.>.
A generic framework to solve BLMOPs was also presented recently in <cit.>, wherein some gain in efficiency was realized via a representation that utilizes grouping of certain variables as families.
Since the implementation of metaheuristic search in a nested mode involves, in particular, a large number of LL function evaluations (FE), a pertinent effort is to develop strategies to expedite the search at the LL and reduce LL FE. Towards this end, transfer of solutions obtained for a neighboring UL solution as a seeding population has been investigated for BLSOPs in <cit.>,<cit.>,<cit.>, and more recently extended to BLMOPs in <cit.>. Though the results were promising, the performance of such simple transfer strategies depends on the neighboring landscapes being similar, as was demonstrated for BLSOPs in <cit.>. Another promising direction to improve LL search efficiency is to generate LL optimal solutions through a prediction mechanism. Often, the LL optimal solutions are not entirely independent of each other, since they are related by the UL variables that act as parameters for the LL problems. Consequently, by learning from the dataset of previously evaluated solutions, predictions can be made regarding the optimal 𝐱_l for a new candidate 𝐱_u. The predicted LL optimum can be used as a starting point for local search, or to seed the initial population to reduce the effort required in carrying out the LL optimization. In mf-BLEAQ <cit.>, a quadratic fiber based surrogate model was proposed to predict LL Pareto sets (PS). Using predicted solutions to start the LL search provided a head-start. However, this modeling scheme needs to build a model for each variable of a solution, and for the LL PS multiple solutions are needed to cover the PF; therefore, a large number of models must be built and maintained. A more recent study <cit.> reduces the number of prediction models for the LL PS to a single model, i.e., a conditional generative adversarial network (cGAN). Relying on the approximation capacity of the cGAN, the proposed algorithm cG-BLEMO only needs to maintain one model. However, due to the Gaussian noise introduced in the cGAN, the LL solutions it generates do not follow the distribution of the PS/PF closely, requiring additional search to improve the solutions and capture the LL PS. Furthermore, to keep the training time practicable for the cGAN, the data size used had to be kept relatively small in the implementation (typically up to 800).
Continuing the above line of inquiry, in this study, we aim to advance the research direction that uses LL PS prediction for solving BLMOPs efficiently. Towards this goal, we build a simple yet accurate model customized for BLMOPs, that can be used to predict LL PS in lieu of evolutionary search to drive majority of the search. The key contributions of this work can be summarized as:
* Ordinarily, the LL optimum comprises multiple solutions (PS) for each UL solution 𝐱_u. Thus, the dataset of previously evaluated solutions results in a one-to-many mapping with regards to LL PS, which is unsuitable for building prediction models. We propose a simple transformation of the data to make them amenable for creating one-to-one mapping.
* We then use simple feedforward neural networks to build the prediction model that maps 𝐱_u and optimal 𝐱_l (LL PS). The model training is relatively fast, uses fewer hyperparameters compared to more sophisticated models previously used in the literature, and is more accurate in terms of its LL PS prediction.
* We embed the above PS prediction into the nested search for BLMOPs. After accumulation of training data and construction of the NN model, LL search can gradually be delegated to prediction. To maintain/improve the prediction accuracy, NN is re-trained periodically during the run with new evaluated solutions.
* We benchmark the proposed approaches on a diverse set of BLMOPs from the literature, including deceptive problems that have been scarcely studied. Comparison with state-of-art algorithms highlights its competent performance in terms of proximity to optimum, and efficiency in terms of function evaluations used.
The remainder of this paper is structured as follows. In Section <ref>, we present the main proposed idea for PS prediction and show proof-of-concept results to establish its viability.
Then, this prediction method is embedded within a nested bilevel multi-objective optimization framework, the details of which are presented in Section <ref>. The numerical experiments and discussions are presented in Section <ref>, followed by concluding remarks in Section <ref>.
§ THE MAIN IDEA AND PROOF-OF-CONCEPT
In standard (single-level) multi-objective optimization, it is common to seek intermittent estimation of Pareto front/Pareto sets to aid the search, especially when the underlying response functions (objectives, constraints) are computationally expensive in nature. This can be done by training a surrogate model on an archive of evaluated solutions to establish a mapping between the input 𝐱_i and output response(s) y_i, as shown on the left side in Fig. <ref>. Thereafter, an internal optimization can be conducted using the surrogate model to come up with solutions that are likely to be Pareto optimal, which can then undergo the expensive evaluation <cit.>. Given that the accuracy of the models is dependent on the dataset, this process usually needs to be conducted multiple times over a run. More recently, there have also been attempts <cit.> to predict the Pareto optimal front based on preference vectors. This is done by utilizing preferences (quantified using, e.g., reference vector coefficients) with regards to different objectives as inputs, and certain known Pareto optimal solutions as output. The mapping between these can be done using various machine learning models such as neural networks, Gaussian process or hypernetworks. The mapping can then be used to generate new solutions that are likely to be Pareto optimal; which can be used, for example to increase the density of solutions on the PF, or aid the search interactively.
For the above modeling tasks, the dataset usually has a one-to-one or many-to-one relationship between inputs and outputs, for which there is no mathematical ambiguity in terms of building the mappings. In BLMOPs, however, this is mostly not the case. If the models are built individually for LL or UL functions to aid the search independently, the above models and types of mappings can still be maintained. However, to potentially bypass the LL optimization for some UL candidate solutions entirely by predicting the LL PS, a different type of mapping is required. As shown on the right side of Fig. <ref>, the available dataset in this case may be the previously evaluated solutions, 𝐱_u^i. For each of these solutions, a lower level PS approximation has been found using an MOEA, denoted as {𝐱_lj^i*,j=1… m }. The task is to predict the PS {𝐱_l^* } directly for a new 𝐱_u sampled during the UL search. For BLSOPs, in most cases this can still be modeled as a one-to-one mapping <cit.>, but for BLMOPs, it is evident that such a mapping is not straightforward because the same value of the input 𝐱_u corresponds to multiple values of 𝐱^*_l.
To address some of the issues above, we propose a simple method in this work to transform the data and use neural networks (NN) for creating a mapping between 𝐱_u and 𝐱^*_l. Neural networks can inherently support vectors as outputs (which is the case with 𝐱^*_l), and are also proven to be universal approximators given sufficient training data. Given the number of points accumulated with LL PS corresponding to multiple 𝐱_u values, the size of the dataset works in favor of neural networks for BLMOPs to create accurate mappings. Of course, the issue of one-to-many mapping still needs to be resolved. We use a helper variable to supplement the input data to overcome this. Our proposed approach is quite simple in its structure and discussed below with an example.
To begin with, the dataset that will be provided to the NN needs to be generated and conditioned. To generate the initial data, the LL PS needs to be identified for certain initial samples of 𝐱_u. These LL PS (or approximations thereof) can be obtained using an MOEA at the LL. Once obtained, the available data resembles the one shown on the right in Fig. <ref>. Then, the data for each 𝐱_u is conditioned schematically as shown in Fig. <ref>. To do so, the LL PS approximations {𝐱_l^*} are first sorted based on increasing order of one of the LL objectives; chosen (without loss of generalization) as f^1 in this case. This ordered set forms the output values for the NN. For the inputs, the value of a given 𝐱_u is replicated m times, where m is the number of solutions in the LL PS approximation corresponding to 𝐱_u. In addition, a helper variable r is appended to the inputs. The variable r assumes a uniformly spaced values in the range of 0 to 1, where r_1 corresponds to 0 and r_m corresponds to 1. The r values can thus be generated easily as per the number of points in the LL PS approximation. As it can be seen, the resulting dataset has unique input vectors, each mapping to a single output vector 𝐱^*_l. Therefore, the dataset is suitable for many-to-one or one-to-one mapping. The significance of having 𝐱^*_l in the sorted order of f^1, which corresponds to sorted order of r in the inputs, can be inferred as providing a sense of direction. This can be thought of as akin to mapping solutions along different preferences or reference vectors in, e.g., decomposition-based evolutionary algorithms. It is worth noting that, using a single helper variable r is only suitable for solving bi-objective problems, because there is an unambiguous spatial ordering among the PF points for bi-objective problems. That is, the PF points can be traversed from one corner of the PF to another, corresponding to the ordered set of reference vectors based on the monotonic progression of r values. Such ordering is not straightforward for problems with more than two objectives. Using the proposed approach for more than two objectives may be feasible following similar ideas of decomposition, but will require (a) more than one helper variables, and (b) careful consideration of the ordering of the data points and helper variables. Given that most of the existing BLMOPs involve two objectives <cit.> and that the existing methods require significant developments even for bi-objective problems, we limit the scope of this study to two objectives.
Once the above processing has been applied to all available 𝐱_u and the corresponding 𝐱^*_l, the data is stacked to form the dataset for training a feed-forward NN (FNN). It takes the inputs (𝐱_u,r) and builds a model 𝕄' to predict the output 𝐱^*_l. This model can then be integrated in the bilevel search method to predict the LL PS for any new candidate 𝐱_u under consideration, or can form a seed population to expedite the LL search towards its PS.
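A sketch of this data conditioning and model fitting in Python/NumPy is given below. The study itself uses MATLAB's feedforward neural network tool (see the proof-of-concept that follows); the scikit-learn regressor here is simply an assumed stand-in for any multi-output feed-forward NN, and the archive format is illustrative.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def build_training_data(archive):
        """archive: list of (x_u, X_l_star, F_l) tuples, where X_l_star stacks the LL PS
        approximation for x_u (one row per solution) and F_l the matching LL objectives."""
        X, Y = [], []
        for x_u, X_l_star, F_l in archive:
            order = np.argsort(F_l[:, 0])        # sort the LL PS by the first LL objective f^1
            X_l_sorted = X_l_star[order]
            m = len(X_l_sorted)
            r = np.linspace(0.0, 1.0, m)         # helper variable: 0 -> 1 across the front
            X.append(np.column_stack([np.tile(x_u, (m, 1)), r]))
            Y.append(X_l_sorted)
        return np.vstack(X), np.vstack(Y)

    # Fit the Pareto-set predictor phi: (x_u, r) -> x_l*
    # X_train, Y_train = build_training_data(archive)
    # phi = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000).fit(X_train, Y_train)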
The benchmark problem DS2 <cit.>, with two variables at each level, is used here for the proof-of-concept. The theoretical LL PS can be inferred for this problem for any given 𝐱_u, which can be used to verify the prediction accuracy. For this illustration, we sample 10 random solutions (𝐱_u) at the UL. For each 𝐱_u, we sample 20 Pareto solutions (𝐱^*_l) on the LL PS. This data is then supplemented with the helper variable values as discussed above, and the FNN is trained.
We use MATLAB's feedforward neural network tool to create the mapping from {(𝐱_u, r)} to {𝐱^*_l}. One hidden layer with 4 nodes is used for this problem. Default parameter settings are used for training, with mean square error as the loss function and backpropagation for training (more detailed parameter settings are discussed in Section IV). Next, we generate a new 𝐱_u and attempt to predict its PS. The generated UL variable, 𝐱_u=[1.2, 1.2], is combined with the r values to predict 20 LL PS solutions. The image of the predicted PS in the LL objective space is shown in Fig. <ref>.
It can be seen that the predicted solutions lie on the true PF and exhibit good diversity on it. In Fig. <ref>, we plot the PS and the predicted solutions in variable space, which shows consistency between the two. For comparison, we also show the results obtained when r is not appended to 𝐱_u in a uniformly increasing sequence but is randomly generated instead (this is also equivalent to not sorting the 𝐱^*_l based on f^1). The results can be seen in Fig. <ref> and Fig. <ref>, where the mapping is able to predict solutions close to only a small part of the PF and PS, respectively. Lastly, we present 20 solutions that are randomly sampled in the LL search space for the same 𝐱_u. This is shown in Fig. <ref>, and highlights that if the LL search were conducted in a standard manner starting from a random population, the initial solutions would be far away from the PF. Instead, if a predicted PS such as that in Fig. <ref> is utilized, the search can be bypassed entirely for some generations, or seeded on/close to the PS, expediting the LL search and potentially saving a significant number of LL evaluations. Using this central idea, we build an algorithmic framework to solve BLMOPs, termed Pareto set prediction assisted bilevel evolutionary multi-objective optimization algorithm (PSP-BLEMO), next.
§ PROPOSED ALGORITHM
The general framework of PSP-BLEMO[The code and data will be made available for research purposes after the review process.] is outlined in Algo. <ref> and Fig. <ref>, followed by a description of its key components in the following subsections. At the UL, the search is conducted through an MOEA, while the LL search has the PSP model integrated within it.
§.§ UL search and variable association check
The UL search commences with an initial LL variable association check, followed by the creation of an initial population P_1
generated through a random uniform distribution. An LL variable association check is done to accommodate an uncommon feature of some BLMOPs, referred to as variable association ambiguity (VAA) <cit.>. If a BLMOP has LL VAA, a subset of its LL variables participates solely in the UL objectives. A detailed discussion of VAA can be found in <cit.>; here we only briefly introduce the VAA checking step adapted to this study, keeping the primary focus on the PSP module. The VAA check is done by perturbing an LL variable while holding all other variables constant. By comparing the LL objective values before and after the perturbation, it becomes possible to ascertain whether this variable is associated with the LL objectives. Likewise, the UL objective values are also checked before and after the perturbation to ascertain which LL variables affect them. This process is repeated for all LL variables one by one. Subsequently, a binary vector v (Algo <ref>, Line <ref>) is generated to record the association status of each variable: 0 signifies no VAA, while 1 indicates otherwise.[Note that theoretically another case is possible where an LL variable is completely redundant, i.e., does not feature in the LL or UL. We have omitted discussion of such cases for brevity in this study, but they can be easily filtered out from the search entirely through the above VAA checks.]
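A sketch of this perturbation-based check is given below; f_lower and F_upper stand for the LL and UL objective evaluations of the problem at hand, and the perturbation size and tolerance are illustrative choices rather than the exact settings of the algorithm.

    import numpy as np

    def variable_association_check(x_u, x_l, f_lower, F_upper, eps=1e-4):
        """Return a binary vector v: v[i] = 1 if LL variable i does not affect the LL
        objectives but does affect the UL objectives (i.e., exhibits VAA), else 0."""
        f_base = np.asarray(f_lower(x_u, x_l))
        F_base = np.asarray(F_upper(x_u, x_l))
        v = np.zeros(len(x_l), dtype=int)
        for i in range(len(x_l)):
            x_pert = np.array(x_l, dtype=float)
            x_pert[i] += eps                      # perturb one LL variable at a time
            ll_unchanged = np.allclose(f_lower(x_u, x_pert), f_base)
            ul_changed = not np.allclose(F_upper(x_u, x_pert), F_base)
            if ll_unchanged and ul_changed:
                v[i] = 1                          # participates solely in the UL objectives
        return v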
Upon completion of the VAA check and the generation of P_1, we proceed to evaluate P_1. To evaluate each individual 𝐱_u^i within P_1, LL search is conducted using an MOEA to identify the corresponding LL PS approximation {𝐱_l^i*} (Algo. <ref> Line <ref>). In case {𝐱_l^i*} is empty, such a solution is assigned a ∞ value at UL. The special cases of VAA require additional search components, which will be discussed later in Algos. <ref>-<ref>.
The 𝐱_u of the non-dominated (ND) solutions in P_1 and the {𝐱_l^*} corresponding to each of them are paired and stored in an archive (Algo. <ref> Line <ref>). All solutions that went through LL search are accumulated into the archive for training/updating the PSP model (Algo. <ref> Line <ref>).
As indicated in the previous section, the helper variable r is also generated and appended to 𝐱_u in the archive. After the first population P_1 has been evaluated through LL search, the UL offspring population is generated using evolutionary operators (Algo. <ref>, Line <ref>). Since the PSP model has been trained in the previous step, it can be used to supplement the LL search when evaluating the new UL offspring. This is done by initializing the LL population using the predicted PS (Algo. <ref>) for the LL search. Here, we introduce an NN update parameter γ to exploit the potential of the predictor, and a parameter ds as the training data size threshold. Without sufficient training data, the accuracy of the model may not be good enough to reliably skip LL search. Therefore, LL search is conducted as usual until the training data size reaches ds (Algo. <ref>, Line <ref>). Thereafter, the LL search is only run every γ generations, initialized with PSP, and is skipped entirely in the intermediate generations (Algo. <ref>, Line <ref>). In this way, the LL search can be forgone for most of the generations to save on evaluations. The training data for the NN are only accumulated from solutions for which LL search is conducted, to ensure the accuracy of the evaluations in the dataset (Algo. <ref>, Line <ref>). When the number of data points in the archive exceeds ds, the most recent ds data points are used for building the model, to keep the model more accurate in the current region of the search.
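The gating of the LL search by the data-size threshold ds and the update interval γ can be summarised as in the sketch below; predict_ps, ll_moea_search, and nondominated are assumed helper functions corresponding to the steps of Algos. 2-3, and the archive here simply collects (x_u, LL PS) pairs from which the training set of the earlier data-conditioning sketch can be assembled. It is an illustrative outline rather than the exact implementation.

    def evaluate_ul_candidate(x_u, phi, archive, gen, gamma, ds):
        """Return an LL PS approximation for x_u, delegating to the PSP model when possible."""
        if phi is None or len(archive) < ds:
            # Not enough training data yet: run the LL MOEA from a random population.
            ps = ll_moea_search(x_u, seed_population=None)
            archive.append((x_u, ps))              # accumulate training data for the next PSP update
        elif gen % gamma == 0:
            # Periodic refresh: seed the LL MOEA with the predicted (and evaluated) ND set.
            seeds = nondominated(predict_ps(phi, x_u))
            ps = ll_moea_search(x_u, seed_population=seeds)
            archive.append((x_u, ps))
        else:
            # Intermediate generations: skip the LL search and use the prediction directly.
            ps = nondominated(predict_ps(phi, x_u))
        return ps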
After each UL child has received its corresponding LL solutions {𝐜_l^i*} and been evaluated at the UL (Algo. <ref>, Line <ref>), the UL ND archive is first updated. Then the parent and child populations are combined and sorted according to feasibility first (FF), non-dominance (ND) and crowding distance (CD) (Algo. <ref>, Line <ref>). The next-generation UL population is selected from the combined sorted population using the environmental selection mechanism outlined in Algo. <ref>. This evolution continues until the termination criterion is met, and the archive of ND solutions obtained is returned.
§.§ PSP and PSP-assisted LL search
The role of PSP is to make the LL search efficient by either replacing it altogether with the predicted PS, or providing the LL search with a better starting population than random initialization. For LL PS prediction, when a new 𝐱_u is given to PSP, a list of linearly spaced helper values r (r∈[0, 1]) is first generated (Algo. <ref> Line <ref>). Then 𝐱_u is replicated to match the number (n^l) of values in r (Algo. <ref> Line <ref>). The collated 𝐱_u and r are provided as input to the PSP model (ϕ) (Algo. <ref> Line <ref>). The output of ϕ forms candidate solutions for the LL PS or initial solutions for the LL search (Algo. <ref> Line <ref>).
The above generated solutions are evaluated using the LL objective/constraint functions. Then, for the generations where the PS approximation is used directly, the ND subset of these solutions is returned to the UL. For the generations where LL search is invoked (i.e., every γ generations; Algo. <ref>, Line <ref>), the ND solutions are used to seed the first LL population (Algo. <ref>, Line <ref>).
For the LL searches in the first UL generation, the initial population is randomly generated, while subsequently it comes partially from the predicted LL ND solutions. If the number of ND solutions is smaller than the pre-defined population size n^l, the remainder of the population is filled with randomly generated solutions (Algo. <ref> Line <ref>). The intent is that if the initial solutions generated by the PSP model ϕ are clustered in a small region, these randomly introduced candidate solutions help maintain diversity to a certain extent. Based on the vector v, an LL solution can be divided into (𝐱_li^U, 𝐱_li^L), where 𝐱_li^U refers to the LL variables that appear only in the UL objectives. If there is no VAA in the problem (which is most often the case), then 𝐱_li^U is empty. The LL search discussed above is applied only to 𝐱_li^L (Algo. <ref> Line <ref>). This brings the benefits of a smaller search space and potentially faster convergence to the LL search for the cases where 𝐱_li^U is non-empty.
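A minimal sketch of the prediction and seeding steps, assuming a scikit-learn-style regressor phi and illustrative helper names, could look as follows:

    import numpy as np

    def predict_ll_ps(phi, x_u, n_l):
        # Build the PSP input: replicate x_u n_l times and append linearly spaced r values.
        r = np.linspace(0.0, 1.0, n_l)
        X = np.column_stack([np.tile(np.asarray(x_u, float), (n_l, 1)), r])
        return phi.predict(X)          # candidate LL PS points

    def seed_ll_population(nd_candidates, n_l, lb, ub, rng=None):
        # Seed the first LL population with predicted ND solutions; top up with random
        # solutions drawn uniformly within the LL bounds if fewer than n_l are available.
        rng = rng or np.random.default_rng()
        lb, ub = np.asarray(lb, float), np.asarray(ub, float)
        cand = np.asarray(nd_candidates, float).reshape(-1, len(lb))[:n_l]
        n_fill = n_l - len(cand)
        if n_fill > 0:
            rand = lb + rng.random((n_fill, len(lb))) * (ub - lb)
            cand = np.vstack([cand, rand]) if len(cand) else rand
        return cand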
After the first population P_1^l is formed and evaluated, evolutionary operators are used to generate the child population C_g (Algo. <ref>, Line <ref>). Environmental selection (Algo. <ref>) is then invoked to select the surviving population. This iterates until the stopping condition is met. Then the ND solutions from the last population are selected (Algo. <ref>, Line <ref>).
With the 𝐱_l^L variables determined in L, we still need to identify the values of 𝐱_l^U by calling Algo. <ref> if VAA is present (i.e., 𝐱_li^U is non-empty), as discussed next.
§.§ Additional UL search in case of VAA
This module determines the 𝐱_l^U part of the search result L from Algo. <ref>. For each solution 𝐱_li = (𝐱_li^U, 𝐱_li^L*) in L, an MOEA search is conducted to optimize the UL objectives with 𝐱_li^U as variables, while 𝐱_li^L* stays fixed. However, running a standard MOEA for every single 𝐱_li would require exorbitant FE consumption. To expedite this, we utilize solution transfer in this MOEA search. As shown in Algo. <ref> Lines <ref>-<ref>, once one or more solutions are available in L^*, each new search is seeded with solutions transferred from L^*. For this, the solution 𝐱_lj^L* that has the smallest Euclidean distance from 𝐱_li^L* (i≠ j) is identified. Then, the 𝐱_lj^U* corresponding to this solution is inserted into the initial population for the search associated with this 𝐱_li^L. Once 𝐱_li^U* is determined for each 𝐱_li^L*, the updated L^* is returned.
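The transfer step amounts to a nearest-neighbour lookup on the fixed part of the solution, sketched below with illustrative names:

    import numpy as np

    def transfer_seed(x_li_L, solved):
        # `solved` holds (x_l_L*, x_l_U*) pairs already optimized; return the x_l_U* of the
        # pair whose x_l_L* is closest (Euclidean distance) to x_li_L, or None if empty.
        if not solved:
            return None
        dists = [np.linalg.norm(np.asarray(x_li_L, float) - np.asarray(xl, float)) for xl, _ in solved]
        return solved[int(np.argmin(dists))][1]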
§.§ Environmental Selection
In the evolutionary process of Algo. <ref> (Line <ref>) and Algo. <ref> (Line <ref>), the new population is selected from the sorted, combined parent and offspring population. In order to maintain diversity in the population, we adopt distance-based subset selection (DSS) <cit.> in this step when the ND front contains more unique solutions than the predefined population size. As shown in Algo. <ref>, Line <ref>, from the sorted population we first identify the number N of unique ND solutions in {𝐱}_g. If N is smaller than the predefined population size, the new population is selected in order of ranking. If N is larger than the predefined population size, DSS is applied to the ND front solutions of {𝐱}_g. DSS selects solutions iteratively by choosing the one with the maximum distance to the already selected solutions. In Eq. <ref>, 𝐱_i refers to a candidate solution, 𝐱_j^s refers to an already selected solution, and k is the number of selected solutions. In each iteration, the solution with the highest value of d_𝐱_i is selected.
d_𝐱_i = min{d(𝐱_i, 𝐱_1^s), d(𝐱_i, 𝐱_2^s), ..., d(𝐱_i, 𝐱_j^s), ..., d(𝐱_i, 𝐱_k^s)}
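A simple Python illustration of this greedy max-min rule is given below; seeding the subset with the first candidate is our own assumption, as the equation does not fix the starting point.

    import numpy as np

    def dss_select(points, n_select):
        # Greedy max-min selection: repeatedly add the candidate whose minimum distance
        # to the already selected subset is largest.
        pts = np.asarray(points, dtype=float)
        chosen = [0]                                 # starting point (assumption)
        d = np.linalg.norm(pts - pts[0], axis=1)     # min-distance of every point to the subset
        while len(chosen) < min(n_select, len(pts)):
            i = int(np.argmax(d))
            chosen.append(i)
            d = np.minimum(d, np.linalg.norm(pts - pts[i], axis=1))
        return pts[chosen]

Selecting points in this way spreads the retained solutions across the ND front rather than letting them cluster.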
§.§ Termination condition
To terminate the algorithm, we adopt the condition proposed in <cit.>, which
attempts to detect the stability (stagnation) of the evolving population based on measurements capturing convergence and diversity. Convergence is measured by the following normalized ideal (z^*) difference and nadir (z^nad) difference.
Δ z_t-1, t^* = max_i=1^M [z_i^*(t-1) - z_i^*(t)] / [z_i^nad(t) - z_i^*(t)]
Δ z_t-1, t^nad = max_i=1^M [z_i^nad(t-1) - z_i^nad(t)] / [z_i^nad(t) - z_i^*(t)]
If Δ z_t-1, t^* and Δ z_t-1, t^nad have not changed significantly over the past ω generations (i.e., their maximum values within the window remain within a threshold ϵ), the algorithm is deemed to have sufficiently converged. Here t refers to a given generation during evolution, and M refers to the number of objectives.
The diversity is assessed using the following metric (which, as noted in <cit.>, is not entirely independent of convergence).
ϕ(t) = IGD(P^t(t-1), P^t(t))
IGD refers to the inverted generational distance <cit.>. Suppose 0≤ t ≤τ, where τ is the current generation; the normalized i^th objective value of the j^th point in generation t with respect to generation τ is computed as
P_i^τ, j(t) = [P_i^j(t) - z_i^*(τ)] / [z_i^nad(τ) - z_i^*(τ)]
A sliding window ω is used to compute the metrics. If over ω generations, the maximum values of three metrics given below are no greater than a threshold ϵ, the algorithm is terminated.
max(Δ z_τ, τ-1^*, ..., Δ z_τ-ω, τ -ω -1^*) ≤ϵ
max(Δ z_τ, τ-1^nad, ..., Δ z_τ-ω, τ -ω -1^nad) ≤ϵ
max(Δϕ_τ, τ-1, ..., Δϕ_τ-ω, τ -ω -1) ≤ϵ
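Putting the three conditions together, the sliding-window check can be sketched as follows; using the absolute difference of successive ϕ values for Δϕ is our reading of the cited condition.

    import numpy as np

    def should_terminate(z_star_hist, z_nad_hist, phi_hist, omega=5, eps=1e-2):
        # Sliding-window stagnation test over the last `omega` generations: stop when the
        # maximum normalized ideal-point change, nadir-point change and change in the
        # diversity metric phi are all no greater than eps.
        if len(z_star_hist) < omega + 1 or len(phi_hist) < omega + 1:
            return False
        dz_star, dz_nad, dphi = [], [], []
        for t in range(-omega, 0):
            rng = np.asarray(z_nad_hist[t], float) - np.asarray(z_star_hist[t], float)
            dz_star.append(np.max((np.asarray(z_star_hist[t - 1], float) - np.asarray(z_star_hist[t], float)) / rng))
            dz_nad.append(np.max((np.asarray(z_nad_hist[t - 1], float) - np.asarray(z_nad_hist[t], float)) / rng))
            dphi.append(abs(phi_hist[t] - phi_hist[t - 1]))
        return max(dz_star) <= eps and max(dz_nad) <= eps and max(dphi) <= eps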
Similar to IGD, it is also possible to monitor HV to formulate a termination condition, as done in some other works <cit.>. Given that the true PF is not known a priori while solving the problem, either metric can only measure whether the population has stabilized, without guaranteeing convergence to the true optimum. We choose the IGD-based stopping condition as the default in our algorithm, to be consistent with the use of the IGD metric later for performance assessment. IGD is a more reliable metric for measuring final performance on BLMOPs, especially where deceptive problems are involved. For deceptive problems, a higher HV value may not reflect the true performance of an algorithm, since it may be an artifact of sub-optimal solutions at the LL. The UL solutions in such cases may appear to dominate the true PF and generate a high HV value, contradicting the algorithm's true performance. The IGD metric, on the other hand, becomes worse whenever the PF approximation is away from the true PF, irrespective of which side it lies on, providing a more accurate measurement. The exceptions are comparisons with state-of-the-art algorithms, where we use termination based on HV and/or the number of evaluations to be consistent with the settings used in other works.
§ EXPERIMENTS AND DISCUSSION
Ten problems from <cit.> are used in our experiments; they are listed in Table <ref>. Five of them (i.e., TP1, TP2, DS1D, DS2D, DS3D) exhibit some level of deceptiveness. The theoretical PF can be derived for all the test problems, which allows the performance of the algorithms to be analyzed. To generate n points approximately uniformly distributed on the PF, we first over-sample by generating 2n approximately uniform points on the PF, utilizing the equations describing the PF of each problem. We then use distance-based subset selection <cit.> to select n solutions (n=1025 to ensure near uniformity) as the final PF. For calculating HV when comparing to state-of-the-art algorithms, the reference point is set to 1.1 times the maximum objective values of the true PF, without normalization, to be consistent with their settings.
For the evolutionary search components of the proposed framework, differential evolution (DE) <cit.> and polynomial mutation (PM) <cit.> are used for generating the offspring population. The crossover rate is set to 1 and the scaling factor to 0.5 for the DE operator. For PM, the mutation probability is set to 1/D (where D is the number of variables) and the mutation index to 20. The first generation of PSP-BLEMO relies on the MOEA for its LL search; this MOEA uses a relatively high fixed budget of 300 generations to ensure that the initial training data are of good quality. The stopping conditions discussed previously are activated during the remainder of the search. 21 independent runs are conducted for each setting. The Wilcoxon rank-sum significance test is used to draw conclusions regarding statistical significance.
For the NN structure used in PSP-BLEMO, one hidden layer is used. The number of hidden-layer nodes is set to twice the number of input nodes or output nodes, whichever is higher. Two parameters of PSP-BLEMO, the re-training gap γ and the training data size ds, are determined empirically through experiments, discussed shortly in more detail; however, it is to be noted that with a sufficient training data size, the algorithm's performance is observed to be not too sensitive to γ. MATLAB is used to train the NN with all parameters at their default values. For example, the learning rate is 0.1 and the loss function is the mean square error between the network output and the target output. Levenberg-Marquardt is the default training algorithm. NN training stops if the validation error fails to decrease for 6 (default) iterations. The split ratio for training, validation and test is 70%, 15% and 15%, respectively.
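For readers who prefer a Python analogue, a roughly equivalent configuration can be written with scikit-learn as below; since scikit-learn has no Levenberg-Marquardt solver, the optimizer and exact stopping behaviour only approximate the MATLAB setup described above.

    from sklearn.neural_network import MLPRegressor

    def build_psp_model(n_inputs, n_outputs):
        # One hidden layer with 2 * max(inputs, outputs) nodes; squared-error loss.
        # Early stopping on a held-out validation split mimics the described behaviour only approximately.
        hidden = 2 * max(n_inputs, n_outputs)
        return MLPRegressor(hidden_layer_sizes=(hidden,),
                            learning_rate_init=0.1,
                            early_stopping=True,
                            validation_fraction=0.15,
                            n_iter_no_change=6,
                            max_iter=2000)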
In the following subsections, we present three sets of experiments. In the first set, we empirically determine two key parameters of PSP-BLEMO, namely the training gap γ and the data size threshold ds. After their values are fixed, we continue to the second set of experiments, which compares the performance of PSP-BLEMO with six state-of-the-art algorithms, namely cG-BLEMO <cit.>, MOBEA-DPL <cit.>, SMS-MOBO <cit.>, BLMOCC <cit.>, mf-BLEAQ <cit.> and H-BLEMO <cit.>. For these comparison studies, both the UL and LL population sizes are fixed to 20, the same as in the most recent study <cit.>. The parameter settings for SMS-MOBO are also consistent with <cit.>. As discussed in the stopping condition section above, for these experiments the HV stopping condition <cit.> is used to be consistent with these methods. The thresholds for the HV stopping condition are 1e-3 over ω=10 generations, the same as for the compared methods in <cit.>. For the additional UL search, the population size is set to 5 and the number of generations for the first search is set to 80. During the UL search assisted by transferred solutions (Algo. <ref>), the HV stopping condition is used for consistency. In the last set of experiments, we compare PSP-BLEMO with a variant in which the training is done only once (referred to as the one-shot (OS) method) in order to save on evaluations. It reveals some interesting observations when larger population sizes (proportional to the number of variables) are used, unlike in the previous two experiments.
§.§ Empirical experiments on key parameters
In this first experiment, we use empirical analysis to gauge the effect of γ and ds. For γ, we investigate 5 different values: 5, 10, 15, 20 and infinite (Inf). Inf here means that once the data reach size ds, the model is trained and thereafter never updated. For ds, we test the values 5e2, 1e3, 2e3 and 5e3. The test bed involves all 10 problems shown in Table <ref>. For each combination of γ and ds, 21 runs are conducted for each problem. Both the UL and LL searches stop using the default IGD stopping condition, with thresholds (ϵ) of 1e-2 over a 5-generation window (ω). The statistics on IGD median values and FE consumption are reported in the supplementary material (Section 1). Here, we include a summary of the statistical significance tests in Table <ref>. For each γ setting, we run Wilcoxon rank-sum tests (0.05 significance level) between each data size setting and the others on each problem, and count how many times that setting shows significantly better performance. For example, when γ=5 is fixed, we run statistical tests on the test problems between (1) ds=5e2 vs ds=1e3, (2) ds=5e2 vs ds=2e3, and (3) ds=5e2 vs ds=5e3, resulting in 30 tests (10 test problems × 3 pairwise comparisons). The setting ds=5e2 shows significantly better, equivalent and worse performance in 0, 13 and 17 tests, respectively.
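The win/equal/worse counting can be sketched as follows; breaking ties in favour of the setting with the lower median IGD when a test is significant is our own convention, not necessarily the one used to build the table.

    import numpy as np
    from itertools import combinations
    from scipy.stats import ranksums

    def count_wins(results, alpha=0.05):
        # results[setting][problem] -> list of final IGD values over the independent runs.
        settings = list(results)
        tally = {s: {"win": 0, "equal": 0, "worse": 0} for s in settings}
        for a, b in combinations(settings, 2):
            for p in results[a]:
                _, pval = ranksums(results[a][p], results[b][p])
                if pval >= alpha:
                    tally[a]["equal"] += 1
                    tally[b]["equal"] += 1
                elif np.median(results[a][p]) < np.median(results[b][p]):
                    tally[a]["win"] += 1
                    tally[b]["worse"] += 1
                else:
                    tally[b]["win"] += 1
                    tally[a]["worse"] += 1
        return tally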
The left half of Table <ref> evaluates which data size ds setting has the most win cases when the γ setting is held fixed. It can be seen that ds=5e3 has the highest number of win cases for all γ settings. Therefore, in the following experiments, we fix the training data size to ds=5e3. Following the same approach, the right half of Table <ref> shows the number of win cases for each γ setting when ds is fixed. We can see that when ds=5e3, γ=10 has the most win cases. Consequently, ds and γ are set to these two values for further experiments.
§.§ Comparison with state-of-the-art approaches
The compared algorithms include the most recently proposed cG-BLEMO <cit.> along with MOBEA-DPL <cit.>, SMS-MOBO <cit.>, BLMOCC <cit.>, mf-BLEAQ <cit.> and H-BLEMO <cit.>. For BLMOCC and H-BLEMO we use the results reported in their corresponding papers, due to unavailability of the codes. Since the key idea of PSP-BLEMO is to replace the LL PS search with a predicted PS, cG-BLEMO and mf-BLEAQ are the most closely related algorithms for comparison, as both involve modules predicting LL solutions. For mf-BLEAQ, the source code was not available; hence, we implemented its LL PS predictor, i.e., the set-valued mapping (SVM), and embedded it into the PSP-BLEMO framework for comparison.
In Table <ref>, the mean and standard deviation of the HV and IGD values obtained by the algorithms are shown.
The symbols ↑, ↓ and ≈ indicate a significantly better, worse or equivalent result, respectively, compared to PSP-BLEMO. For BLMOCC and H-BLEMO the statistical tests are conducted through a confidence interval test <cit.>: if the two algorithms' 95% confidence intervals overlap, there is no statistical difference between them; otherwise, a statistical difference is reported. For the others, the Wilcoxon rank-sum test is used. Since our experiments include deceptive problems, HV performance can be misleading (as discussed earlier in Section <ref>). We therefore focus mainly on IGD performance, but still show HV results for completeness. With regard to cG-BLEMO, we observed certain trade-offs between function evaluations and performance, so we report two sets of values. The second column in Table <ref> shows the performance of cG-BLEMO after its run is completed. The third column is also cG-BLEMO, but marked as FE-truncated (tr): its metric values are extracted at the point where its total FE consumption exceeds the total FE consumption of PSP-BLEMO for the corresponding run (across 21 runs). The process is as follows. Both the PSP-BLEMO and cG-BLEMO runs are sorted based on the FE consumed, and the metric values of cG-BLEMO are then extracted when its total FE consumption exceeds the FE consumption of PSP-BLEMO for the corresponding run in the sorted order. This truncation allows us to see the performance difference between PSP-BLEMO and cG-BLEMO for similar computational budgets.
In terms of mean IGD (Table <ref>), PSP-BLEMO shows the lowest value for 5 out of 10 problems, while cG-BLEMO has the lowest for 1. In terms of significance tests, cG-BLEMO performs better than PSP-BLEMO on 4 problems, and worse or equal on the other 6 problems. PSP-BLEMO shows better mean and standard deviation values, especially on the deceptive problems, when compared with cG-BLEMO. This implies that the performance variation of cG-BLEMO is much higher when dealing with deceptive problems. This can be explained as follows. cG-BLEMO inserts UL evaluations of LL solutions during the LL search. If its LL prediction module initializes LL solutions that are not close enough to the PS, the UL objectives of such LL solutions may dominate the true UL PF.
Consequently, the UL search may be misguided into evolving around the corresponding UL solutions, which show inaccurate UL objective values, resulting in poor search performance. In this respect, the mean and standard deviation reflect how well the LL prediction maintains its performance. PSP-BLEMO shows relatively stable performance compared to cG-BLEMO.
Next, we look into the FE consumption of all algorithms, reported in Table <ref>. For brevity, raw values are reported for PSP-BLEMO, while for the rest of the algorithms the ratio of their FE to that of PSP-BLEMO is reported. It can be seen that cG-BLEMO uses around 7 to 40 times more FE than PSP-BLEMO at the UL. Likewise, at the LL, cG-BLEMO uses around 3 to 15 times more FEs than PSP-BLEMO. In the IGD-based comparison above, cG-BLEMO showed statistically better performance in 4 out of 10 cases, but it also consumes far more resources. To put one of these two metrics (IGD or FE) on the same level, we report cG-BLEMO's IGD performance at the point where its FE consumption exceeds PSP-BLEMO's in column 3 of Table <ref>. Statistical analysis shows that PSP-BLEMO performs significantly better than cG-BLEMO in 8 out of 10 cases, suggesting the improved performance of PSP-BLEMO for lower evaluation budgets. Note that DS4 and DS5 are two special cases, where the additional UL search of Algo. <ref> is invoked. It can be observed that PSP-BLEMO consumes considerably more FEs for such problems, especially at the UL. However, even when considering similar FEs, its performance still remains competitive with cG-BLEMO. In the supplementary material (Section II), we also present a comparison of PSP-BLEMO with cG-BLEMO on deceptive versions of DS4 and DS5 (formulated in <cit.>) to highlight the advantage of PSP-BLEMO for such problems. Additionally, in the supplementary material (Section III), we report FE comparisons based on when these algorithms achieve the same level of IGD as BLMOCC <cit.>. This method of comparison was adopted in <cit.>, hence we report it for completeness. PSP-BLEMO is observed to be competitive in these comparisons, consistent with the results above.
The above performance comparison between PSP-BLEMO and cG-BLEMO is also visually reflected in the convergence plots of the median IGD runs shown in Fig. <ref>. Typically, PSP-BLEMO uses far fewer FEs to converge, at a faster rate than cG-BLEMO. Besides FE and IGD, there is additional information implicit in the plots. For TP1 and DS1D, the IGD values of cG-BLEMO decrease first and then rise. This reflects the negative effect of the deceptive problems on the algorithm's performance. In contrast, such cases do not occur for PSP-BLEMO. For problems DS4 and DS5 (the cases with VAA), a sudden drop in the IGD value can be observed in the first generation for PSP-BLEMO, which shows the effect of the additional UL search in improving the IGD. In Table <ref>, we compare the average runtime of the two algorithms. Consistent with the FE consumption, PSP-BLEMO uses much less time to converge for 8 out of 10 problems. For DS4 and DS5, the two algorithms use similar time.
SVM-BLEMO also uses LL PS prediction. However, its performance is significantly worse than that of PSP-BLEMO, as evident from Table <ref>. Its IGD values are much higher than those of the rest of the compared methods. This suggests that the LL search results of SVM-BLEMO are relatively further away from the optimum. Due to the lower accuracy of its LL PS prediction, it is not advisable for the SVM-based predictor to skip the LL search entirely (for up to γ generations) in the proposed framework.
The four other state-of-the-art algorithms, namely BLMOCC, H-BLEMO, MOBEA-DPL and SMS-MOBO, are less similar conceptually to PSP-BLEMO. BLMOCC adopts co-evolution at both levels, while H-BLEMO involves hybridization via local search. One can observe from Table <ref> that PSP-BLEMO has lower mean IGD values than BLMOCC and H-BLEMO. In terms of FE, compared to BLMOCC, PSP-BLEMO uses fewer FE at the LL and more FE at the UL. However, the FE consumption at the LL is usually 10 to 100 times larger than at the UL (as seen from Table <ref>). Thus, overall, PSP-BLEMO holds an advantage over BLMOCC in both the IGD metric and FE consumption. Similar observations are made with respect to H-BLEMO; additionally, PSP-BLEMO requires fewer evaluations at the UL as well. MOBEA-DPL adopts two populations in the LL search, of which one is evaluated at the UL; therefore, deceptive problems are challenging for MOBEA-DPL. In terms of FE consumption, it uses up to 30 times more FE than PSP-BLEMO at the UL and up to 14 times more FE at the LL. SMS-MOBO shows competitive performance on 4 problems, but it needs up to 100 times more FEs (at the LL) to achieve this. SMS-MOBO also faces challenges on deceptive problems, where it uses more FEs but returns rather high IGD values. Compared to SMS-MOBO, PSP-BLEMO shows stable performance across different problem types.
Lastly, we explicitly examined the quality of the LL PS predictions, to account for the improvements in performance achieved by PSP-BLEMO. In Fig. <ref>, we compare the LL prediction results of PSP-BLEMO and cG-BLEMO. Fig. <ref> shows the prediction results obtained by the two algorithms after the first UL generation. PSP-BLEMO's prediction is not close to the PF in this case. The reason is that after the first generation only 400 data points had been accumulated (the 70% training split therefore uses 280 data points). This amount of training data is not enough to capture the mapping from UL solutions to the LL PS. cG-BLEMO's PS prediction is relatively better in terms of being closer to the PF. However, when the training data size increases to 1000 (the 70% training split is then 700 data points), PSP-BLEMO's prediction is much closer to the true LL PF and well distributed, as shown in Fig. <ref>. The prediction of cG-BLEMO also improves, but a significant amount of search effort is still required to move towards the LL PF.
To summarize, PSP-BLEMO shows competitive performance in terms of UL IGD, FE consumption, and average runtime when compared with other state-of-the-art algorithms, especially when the target problem has a deceptive nature and the number of allowable evaluations is relatively low. The competitiveness of PSP-BLEMO is supported by the quality of its LL PS prediction, which allows it to skip the LL search for most of the generations with little impact on the quality of the solutions obtained.
§.§ Comparison study with `one-shot' PSP-BLEMO
The above comparison with state-of-the-art algorithms shows the competitive performance of PSP-BLEMO in terms of accuracy and FE consumption. The experimental settings used in previous studies (e.g., a population size of 20) have been adopted for fair and consistent comparisons with the state-of-the-art algorithms. In this section, however, we look into a variant of PSP-BLEMO and its performance when the experimental and problem settings are varied. PSP-BLEMO, in its default form, updates the NN every γ generations. To attempt further savings in FE, one may consider a version of PSP-BLEMO that forgoes re-training entirely and trains the model only once, in the first generation. We refer to this version as one-shot (OS) PSP-BLEMO. During the run, the LL PS is always predicted, never obtained through search. At the end of the run, for the final reporting, the solutions of the last population are evaluated by running a prediction-assisted LL search on the ND solutions in the UL archive. This process helps improve the solution quality at the LL, so that the quality of the returned UL solutions is reflected more accurately. For a fair comparison, such a re-evaluation scheme is also applied to PSP-BLEMO in these experiments.
In what follows, we use the shorthand OS and PSP to distinguish these two variants. Further, as a baseline, we use a nested evolutionary algorithm (NE), where both the UL and LL searches are carried out by the MOEA described earlier in Section <ref>, without the use of any predictions. For the test problems, we expand the test bed to include problems with different numbers of variables, so that the scalability of the algorithms can also be observed. The detailed problem variable settings can be found in Table <ref>.
In the first part of this experiment, we keep the population size fixed to 20, as done previously. The performance in terms of normalized UL IGD, normalized LL IGD and total FE is summarized in Table <ref>. The normalized LL IGD is calculated as follows. Firstly, for every UL search result, we compute the normalized LL IGD for each UL solution. Next, the mean of these normalized LL IGD values is associated with each search result. Subsequently, we identify the maximum and minimum normalized IGD values across all three algorithms (considering all seeds) to establish the range for normalizing each LL IGD result between 0 and 1. The reported normalized LL IGD is the final result of these three steps.
Statistical test results are shown using the symbols ↑_i, ↓_i and ≈_i, where the subscript i means that, compared to algorithm (i), the current algorithm (to the left of the symbol) performs better, worse or equivalently. In terms of FE consumption, except for DS4 and DS5, it is not surprising that OS is the best; PSP uses more FE than OS, but both use fewer FE than the baseline.
The larger the number of variables, the more savings on FE the proposed method can achieve.
For DS4 and DS5, due to the additional search at the UL, PSP consumes more FE than the baseline, but this number decreases as the number of variables increases, as shown in the last section of Table <ref>. For OS, its FE consumption is higher than the baseline's when the number of variables is small (S1), but in the remaining settings its FE consumption becomes much smaller than that of both the baseline and PSP.
It can be noted that PSP shows better performance in all variable settings. For S1, PSP performs equal to or better than the baseline on all problems at the UL, and on 9 out of 10 at the LL. For S2 and S3, PSP performs equal to or better than the baseline on all problems at both the UL and LL. Between PSP and OS, a similar trend can be observed at the UL.
For all problems except one across the three settings, PSP performs better than or equal to OS.
The LL situation varies mainly in the S3 setting, where OS outperforms PSP in 4 out of 10 cases at the LL. But when we check the corresponding UL performance, e.g., for DS2, DS3 and DS2D, OS has much worse UL IGD values. Overall, PSP presents the most consistent performance in this setting. The performance of OS is also in line with expectations: with a population size of 20 and a small number of variables, it would have accumulated enough training data for adequate prediction accuracy. However, for larger numbers of variables, its prediction accuracy is compromised, affecting the overall performance.
In the second part of this experiment, we set the population size to 10D and the maximum number of generations to 10D (while also retaining the IGD termination criterion), making them proportional to the number of problem variables. D refers to the number of variables at the corresponding level: for the UL it means D_u, while for the LL it means D_l. Table <ref> shows the results of these experiments. Again, it is seen that OS and PSP use far fewer FE than the baseline, except for DS4 and DS5. For DS4 and DS5, similar to the first experiment, this extra FE consumption relative to the baseline also decreases as the number of problem variables increases. Between PSP and the baseline, at both levels and across all problem settings, PSP performs significantly better.
Between PSP and OS, the performance is not as overwhelmingly in favor of PSP as it has been in the previous experiments.
For S1 and S2, PSP still shows better performance at both levels, while for S3, PSP performs better than OS at the UL on 4 problems and worse on 3 problems. At the LL, PSP performs better than or equal to OS in all cases. For S3, the population size and number of generations are significantly larger (e.g., 100 for 10 variables). Therefore, OS has the same number of training points as PSP, except that PSP can also update its model later. The competitive performance of OS suggests that the proposed LL PS prediction model, when built on a larger dataset, can maintain its performance quite well over the generations. This observation also resonates with the last column of Table <ref> regarding the tests on γ when ds=5e3. Although we chose γ=10 based on the number of win cases, the differences are not significant for the majority of cases (i.e., most outcomes are `equivalent'), and γ=10 is only marginally better than the other settings. Thus, OS performance can be competitive when ds is large. Overall, we still choose PSP with re-training (every γ generations) as the default. Although its FE consumption on DS4 and DS5 is high, it is still competitive with the state-of-the-art, and such problems are not too common among BLMOPs. Considering the performance at both the UL and LL, it is the most consistent and competitive strategy across different settings and provides an opportunity to correct the model intermittently. The insights gained from this last set of experiments, however, provide an avenue for further FE reduction: the OS strategy can be beneficial towards this end without a significant compromise on performance, provided a sufficiently large dataset is used to train the models.
§ CONCLUSION AND FUTURE WORK
Solving BLMOPs using a nested approach requires an exorbitant number of function evaluations. Therefore, it is of interest to develop techniques that can reduce the computational expense while achieving good solution quality. In this paper, we develop a Pareto set prediction assisted search (PSP-BLEMO) to address this challenge. To achieve this, the archive of evaluated solutions is transformed using a helper variable, and an NN model is trained on the resulting dataset. The model is then used to predict a PS approximation, which is used to seed the LL population or to bypass the LL search entirely. The model itself can be re-trained periodically using additional UL solutions that undergo LL search every γ generations. To save additional FEs, a `one-shot' approach can be applied in which the model is not updated during the run.
Extensive numerical experiments were conducted to test the efficacy of the proposed approach in terms of solution quality, FE consumption and runtime. Key parameters and variants of the algorithm were also analyzed. Comparisons with state-of-the-art algorithms show competitive and consistent performance of PSP-BLEMO. The performance is particularly notable where deceptive functions are involved. Moreover, it was concluded that the performance of the algorithm is not too sensitive to the training gap γ, as long as sufficient training data is available to build accurate PSP models.
In future work, further improvements in algorithmic performance will be investigated, e.g., by enhancing the modeling through a more carefully selected subset of the data. Currently, the most recent LL search results are used as training data, and their distribution might not be uniform over the entire PF. Decomposition-based segmentation of the objective space could provide an opportunity to improve this data quality.
Moreover, extensions to a higher number of objectives could also be considered. The way in which the helper variable r is used in this study is particularly suited to two-objective problems, since arranging r and f_1 corresponds to sequentially traversing the span of the PF. For problems with more objectives, however, the composition of r will need to be further refined. One potential way is to utilize reference vectors obtained from systematically sampled points using the normalized boundary intersection method, commonly utilized in decomposition-based MOEAs. In the current state of the field, problems with more than two objectives are rare; hence, this would be an interesting extension of this study in the future. Lastly, the extension of other recent modeling structures, such as hypernetworks, to solve BLMOPs could also be explored.
|
http://arxiv.org/abs/2409.02285v1 | 20240903203924 | Coping or Hoping? Livelihood Diversification and Food Insecurity in the COVID-19 Pandemic | [
"Ann M. Furbush",
"Anna Josephson",
"Talip Kilic",
"Jeffrey D. Michler"
] | econ.GN | [
"econ.GN",
"q-fin.EC"
] |
[1] Ann M. Furbush
[2] Anna Josephson
[3] Talip Kilic
[2] Jeffrey D. Michler
[1] Cambridge Econometrics
[2] Department of Agricultural and Resource Economics, University of Arizona
[3] Development Data Group (DECDG), World Bank
Coping or Hoping? Livelihood Diversification and Food Insecurity in the COVID-19 PandemicCorresponding author email: mailto:[email protected]@arizona.edu. Authors are listed alphabetically. A pre-analysis plan for this study was filed prior to completion of post-outbreak data collection at https://osf.io/nu593https://osf.io/nu593. Funding for data collection and analysis comes from the World Bank Multi-Donor Trust Fund for Integrated Household and Agricultural Surveys in Low and Middle-Income Countries (TF072496). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We acknowledge the research assistance provided by Lorin Rudin-Rush and Joshua Brubaker. We appreciate comments and feedback from participants at the 2021 NIFA Agriculture Policy Conference, the 31^st International Conference of Agricultural Economists, the 2021 World Bank Development Data Group Learning Series, the Learning from Longitudinal Studies in LMICS: Before, During, and After COVID-19 workshop in 2021, the IFAD Conference 2022, and the Agricultural and Applied Economics Association Annual Meeting in Anaheim. We thank the individuals involved in the design, implementation and dissemination of high-frequency phone surveys on COVID-19, specifically the World Bank LSMS team, and the phone survey managers and interviewers at the Malawi National Statistical Office, the Nigeria Bureau of Statistics, the Uganda Bureau of Statistics and Laterite Ethiopia.
[
September 2024
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
We examine the impact of livelihood diversification on food insecurity amid the COVID-19 pandemic. Our analysis uses household panel data from Ethiopia, Malawi, and Nigeria in which the first round was collected immediately prior to the pandemic and extends through multiple rounds of monthly data collection during the pandemic. Using this pre- and post-outbreak data, and guided by a pre-analysis plan, we estimate the causal effect of livelihood diversification on food insecurity. Our results do not support the hypothesis that livelihood diversification boosts household resilience. Though income diversification may serve as an effective coping mechanism for small-scale shocks, we find that for a disaster on the scale of the pandemic this strategy is not effective. Policymakers looking to prepare for the increased occurrence of large-scale disasters will need to grapple with the fact that coping strategies that gave people hope in the past may fail them as they try to cope with the future.
JEL Classification: F6, I38, O2, Q18
Keywords: COVID-19 pandemic, coping strategies, food security, pre-analysis plan, Sub-Saharan Africa
§ INTRODUCTION
The COVID-19 pandemic exacerbated many of the hardships faced by impoverished households. Due to their limited resources, households across Sub-Saharan Africa are particularly vulnerable to shocks, such as natural disasters and pandemics. While households cannot control their risk of exposure, they can employ ex ante and ex post coping strategies to mitigate the impact of realized risks and enhance their own resilience. For households in Sub-Saharan Africa, among whom formal insurance is uncommon, these coping strategies often involve reallocation of resources, monies, and labor within the family or household <cit.>. Households can diversify their incomes to bolster resilience to future shocks and uncertainty - an ex ante coping behavior <cit.> - or decide to diversify only after a shock exposes their vulnerability - an ex post coping behavior <cit.>.
We study the impact of livelihood diversification on mitigating food insecurity, a proxy for welfare, during the COVID-19 pandemic. To do this, we use household panel data collected by the World Bank, which combines face-to-face survey data collected in Ethiopia, Malawi, and Nigeria prior to the COVID-19 pandemic with monthly post-outbreak phone surveys. This allows us to establish causal relationships to understand how households coped with the impacts of the pandemic through livelihood diversification. Prior to the public release of the phone survey data, we pre-specified our analysis and registered the pre-analysis plan with the Open Science Foundation (OSF) <cit.>. We report on the results of two research questions. First, how has household income composition and livelihood diversification changed since the onset of the pandemic? And second, how does household income composition and livelihood diversification impact household food insecurity amid the pandemic?[In our pre-analysis plan, we specify a third research question: how do changes in income composition and livelihood diversification and subsequent effects on food insecurity vary across different population subgroups? We discuss findings related to this research question in the Online Appendix.]
With respect to our first question, we interrogate the data and produce stylized facts regarding how livelihood diversification changes during the pandemic relative to pre-pandemic diversity. We do not observe a substantial nor a systematic change in household income composition nor livelihood diversification since the start of the pandemic. We do observe small differences in Ethiopia and Malawi, where households become more specialized during the pandemic than before. These changes are driven by a decline in the percent of households receiving remittances, government assistance, and wage income following the onset of COVID-19. This effect is relatively modest and we cannot disentangle if this was a voluntary coping strategy or an involuntary separation from diverse activities (e.g., the loss of a wage job rather than leaving a wage job). In Nigeria, diversification increased slightly after the start of the pandemic, mainly due to increased participation in farming and greater government assistance. From these modest trends we conclude that households made limited use of livelihood diversification as an ex post coping strategy. This may be due to the unique nature of government response to the pandemic, which placed constraints on how households could respond, though these restrictions also did not lead to an across the board increase in the specialization of livelihood activities.
To answer our second question, we use a dynamic panel model and an ANCOVA estimation to assess changes in household food insecurity. A rich body of literature evaluates livelihood diversification as a both an ex ante and ex post coping strategy to improve recovery from and resilience to shocks, particularly those related to climate and civil unrest. These studies generally coalesce around the conclusion that income diversification improves household welfare <cit.>, though there is important heterogeneity based on gender of the household head and whether the household is urban or rural <cit.>. We do not find evidence for this positive relationship in our analysis. Across multiple econometric specifications, combining income sources into various indices, and conducting sub-group analysis by gender and location, nearly all our results are nulls. We then discuss possible explanations for our null results, presenting robustness checks were possible. We conclude that the results are true nulls. The finding that livelihood diversification prior to and during the pandemic has no effect on welfare, as proxied by food insecurity, which runs contrary to much of the previous literature, may be due to the extreme conditions of the COVID-19 pandemic.
This paper contributes to the existing literature on livelihood diversification, as well as to the emergent literature on the impacts and effects of the COVID-19 pandemic. In terms of livelihood diversification, many studies conclude that diversification of income sources reduces poverty and enhances resilience <cit.>. A recent paper examines if diversified firms are more resilient than specialized ones in the face of the COVID-induced market shock <cit.>. Using food insecurity, which is a common proxy for household welfare in many studies <cit.>, we build on this body of existing literature to extend our understanding of the role of livelihood diversification in bolstering household resilience to severe socioeconomic shocks like the COVID-19 pandemic.
Despite the relative recency of the COVID-19 outbreak, there is already a substantial body of literature summarizing the socioeconomic ramifications of the pandemic on households in low-income countries, including on income <cit.>, well-being <cit.>, food security <cit.>, and other welfare outcomes <cit.>. A subset of this literature focuses on understanding specific policies or transfer programs associated with the pandemic <cit.> or informational transfer <cit.>. However, much of this literature remains descriptive in nature and does not investigate changes due to the pandemic, but rather changes occurring during the pandemic. We extend this conversation by estimating causal relationships about coping with the pandemic through livelihood diversification, using both pre- and post-outbreak data.
Finally, our paper contributes to the small but growing body of research that uses pre-analysis plans in observational studies. While pre-analysis plans are generally associated with studies relying on experimental data <cit.>, the first use of a pre-analysis plan in economics was an observational study of the impacts of minimum wage laws <cit.>. The main argument against using pre-analysis plans in observational studies is the difficulty in credibly committing to a plan prior to data availability <cit.>. But, as <cit.> argue, there are numerous study settings where research questions can be clearly formulated ahead of the release of data. Democratic elections <cit.>, policy changes <cit.>, and the timed release of government data <cit.> are all examples in which researchers combine pre-analysis plans with observational data. In our case, in the month immediately following the outbreak of COVID-19, the World Bank formulated a plan to collect at least 12 rounds of monthly panel data from households that had been surveyed in the year prior to the pandemic. This commitment to future data collection, following a standardized survey instrument, allowed us to formulate hypotheses, develop an empirical approach, and register our plan prior to the collection and public release of all rounds of data <cit.>. In a research setting in which there are numerous ways one could define the variables of interest and model their relationships, a pre-analysis plan lends credibility to our analysis.
§ DATA
Our analysis focuses on changes to food insecurity after the outbreak of COVID-19 relative to food insecurity status pre-COVID. The spread of the virus impacted household finances indirectly, largely through the closure of businesses and schools and the interruption of supply chains. Governments in the three countries imposed various restrictions to movement, business interactions, and on educational institutions throughout the course of the pandemic. While these restrictions sought to slow the spread of the virus and protect citizens from infection, they disrupted normal activities including household income generation.
§.§ COVID-19 Shock
We describe the circumstances in each country, based on government restrictions that were in place during each data round. In Ethiopia, restrictions were largely implemented at the national-level. Ethiopia closed schools and suspended public gatherings on 16 March 2020. On 8 April 2020, the country declared a state of emergency which included limiting international and domestic travel. However, Ethiopia never went into a complete national lockdown in the sense of closing businesses, restricting movement, or imposing curfews <cit.>. In Malawi, the President declared a state of disaster on 20 March 2020, which included closing schools and limiting the size of public gatherings. A stay at home order was issued in April. However, this order faced legal challenges, which culminated in the High Court barring the regulation and preventing the stay-at-home order from going into effect, leaving daily economic activity largely intact <cit.>. Nigeria’s response primarily occurred at the state-level. Most Nigerian states closed schools and suspended large gatherings by 24 March 2020 and suspended inter-state travel on 23 April. While non-essential shops as well as restaurants were ordered to close, the government's attempts to impose these closures along with curfews, social distancing, and self-quarantine, were largely ignored, meaning daily economic activity was relatively unchanged <cit.>. The government lifted the closure order less than a month later. Compared to lockdowns in China, Europe, and the United States, the closures of businesses and the restrictions to daily activities in Ethiopia, Malawi, and Nigeria were substantially less strict. This is important since it means that households in these countries were less constrained in pursuing livelihood opportunities than those in regions of the world with government imposed lockdowns.
To account for the variation in COVID-19-related restrictions over time, we use Our World in Data's COVID-19 Government Stringency Index in some of our empirical specifications <cit.>. The index considers nine metrics to calculate daily scores for each country: school closures; workplace closures; cancellation of public events; restrictions on public gatherings; closures of public transport; stay-at-home requirements; public information campaigns; restrictions on internal movements; and international travel controls. The stringency index is calculated as the mean score of the nine metrics, each taking a value between 0 and 100. A higher score indicates a stricter regulatory regime. To match these daily data to each round of our data, we take the average daily score during each survey period. Figure <ref> displays the average government stringency index in each country over time.
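Operationally, this matching reduces to a period average of the daily index; a pandas sketch with assumed column names is shown below.

    import pandas as pd

    def round_stringency(daily, rounds):
        # daily: DataFrame with columns ['country', 'date', 'stringency_index'] (names assumed).
        # rounds: dict mapping (country, round) -> (start_date, end_date) of the survey period.
        daily = daily.assign(date=pd.to_datetime(daily["date"]))
        rows = []
        for (country, rnd), (start, end) in rounds.items():
            mask = (daily["country"] == country) & daily["date"].between(start, end)
            rows.append({"country": country, "round": rnd,
                         "stringency": daily.loc[mask, "stringency_index"].mean()})
        return pd.DataFrame(rows)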
§.§ Sample Selection and Surveys
To examine the relationship between livelihood diversification and welfare outcomes, we use panel data from high frequency phone surveys (HFPS) in Ethiopia, Malawi, and Nigeria . In each country, interviewers conduct these surveys on a monthly basis with households for a period of at least 12 months following the outbreak of COVID-19.[The following agencies implement the monthly surveys with support from the World Bank Living Standards Measurement Study (LSMS): Laterite Ethiopia, the Malawi National Statistical Office, and the Nigeria Bureau of Statistics.] The sample for the HFPS is drawn from households that had been interviewed during the most recent (2019) round of the national longitudinal household survey implemented by the respective national statistical office, with assistance from the World Bank . These pre-COVID-19 Living Standards Measurement Study - Integrated Surveys on Agriculture (LSMS-ISA) data are representative at the national, regional, and urban/rural levels and serve as a baseline for our post-COVID-19 analysis.
The HFPS are not nationally representative as participation requires that each household have (1) at least one member who owns a phone, (2) cell network coverage, and (3) access to electricity. These requirements may lead to selection bias in the survey sample. Additionally, the surveys may suffer from non-response bias if targeted households were not willing or able to participate. To address these challenges, we use survey weights provided in the HFPS data, which include selection bias corrections and post-stratification adjustments. Several studies using the HFPS data have found that the use of survey weights and post-stratification adjustments substantially reduces the bias, though it does not fully eliminate it, and our results should be interpreted with this in mind <cit.>. For a detailed description of the weight calculations used in this study, see <cit.>.
The integration of data from the post-outbreak HFPS and pre-outbreak LSMS-ISA surveys allows us to capture the variation in the effects of the pandemic across a diverse set of Sub-Saharan Africa countries and over time. Importantly, the combined data afford us the opportunity to examine the effects of COVID-19 in relation to a pre-pandemic baseline, allowing us to establish a causal relationship between our variables of interest. The surveys feature cross-country comparable questionnaires on a range of topics including participation in income-generating activities and food insecurity. In total, over 9,000 households are included in this analysis. With baseline LSMS-ISA data in all three countries plus 10 rounds of HFPS data in Ethiopia and 11 in Malawi and Nigeria, our research draws from a total of over 34,000 observations. The average number of households in each round of data is: 2,784 in Ethiopia, 1,611 in Malawi, and 1,943 in Nigeria, though the actual number of households in the baseline and each round differ due to attrition.
§.§ Livelihood Diversification Indices
Prior to describing how we construct each of our livelihood diversification indices, we present the disaggregated categories in which households engage in income generation (see Table <ref>). The table shows pre-COVID income sources in each country at the most detailed level provided by the classification system used by the World Bank in the LSMS surveys.[In the LSMS data, income is reported in the local currency. To allow for cross-country comparisons, we convert income values to constant 2019 US dollars.] Income data is collected for all household members and we have summed across individuals to construct a household-level measure. The indices we consider in this paper use aggregated versions of these income sources, though we show the most detailed level here for completeness.
The post-outbreak phone surveys collected less detailed income data than the pre-outbreak in-person surveys. The income module was also not asked in every survey round. In surveys that did include a module on sources of income, that module asked “In the last 3 months, which of the following were your household's sources of livelihood?” Options included: 1) Family farming, livestock or fishing; 2) Non-farm family business, including family business; 3) Wage employment of household members; 4) Remittances from abroad; 5) Assistance from family within the country; 6) Assistance from other non-family individuals; 7) Income from properties, investments or savings; 8) Pension; 9) Assistance from the Government; 10) Assistance from NGOs / charitable organization/religious bodies; 11) Other income source. A household is assigned a one if it reports “yes” to the question and zero if it reports “no.”
Our variable of interest in our analysis consists of a series of indices measuring income diversification. Following methodology from <cit.>, we use two measures to evaluate household income diversification: (1) a simple fractional index (FI) and (2) a Herfindahl-Hirschman Index (HHI).[We also generate four variations of these indices, following our pre-analysis plan, that differ in if they standardize income categories across countries and/or across time. Summaries of these additional indices are presented in Online Appendix <ref>.] HHI scores are negatively related to diversification. That is, they are larger for less diversified households and smaller for more diversified households. For consistency, we adjust the fractional index to maintain this negative relationship. Both indices can be interpreted as specialization indices that are inversely related to diversification. Table <ref> summarizes the characteristics of each index.
The simple fractional index is calculated using the count of the income sources each household is engaged in (m) at time t given the total number of income-generating opportunities (n) in their region (j) over the entire time series:
FI_ijt = 1-m_it/n_j.
The fraction is subtracted from one so that a higher score is associated with fewer income-generating activities, while a lower score indicates a more diversified income portfolio. The fractional index is calculated for both pre- and post-COVID-19 data. To generate the index, we collapse multiple income categories into seven categories standardized across the three countries: (1) farm, (2) wage, (3) pension, (4) remittances, (5) non-farm enterprises, (6) income from properties, savings, and investment, and (7) other, which includes asset sales, income from NGOs, and other government assistance. This index accounts for geographic area to capture livelihood specialization relative to regional diversification opportunities. We count the total number of income sources households participate in for all geographic areas available in the data (e.g., region, zone, district, postal code, ward). We then determine the smallest geographic area with at least 10 available observations. The count of income sources households are engaged in within that smallest geographic area with sufficient observations then serves as the denominator (n_j) in the index calculation for households residing in that area.[Our final results are robust to changes in this benchmark: if we include a control for the size of the benchmark or standardize the geographic region that is the benchmark for diversification we see no difference in how diversification relates to food insecurity.]
The HHI considers the portion of a household's income generated from each income source. In calculating the HHI, we include all revenue generated by households but do not net out costs of production. The HHI is calculated as:
HHI_i = ∑_m=1^M p_m^2,
where M represents each household's total number of income sources. Each p_m represents the share of the household's income generated from income source m. A highly specialized household with only one income source would receive the highest possible score of 1 (1^2). Similarly, a household with two income sources, each accounting for 50 percent of the household's total income, would receive a score of .5 (.5^2+.5^2). As with the simple fractional index, higher scores indicate more income specialization and less diversification. The HHI includes only pre-COVID data and so there are no time sub-scripts. To generate the index, we use 12 income categories standardized across countries, and the respective amount of income each household earns from each source. These categories are: (1) remittances, (2) in-kind assistance from family and friends, (3) investments and savings, (4) income from properties, (5) pension, (6) non-farm enterprises, (7) crop sales and consumption, (8) livestock sales, (9) livestock product sales and consumption, (10) wages, (11) government and NGO assistance, and (12) other.
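For one household, the calculation reduces to a few lines; the treatment of households reporting zero total income is our own assumption.

    def hhi(income_by_source):
        # income_by_source: revenues from the 12 standardized categories for one household.
        total = sum(income_by_source)
        if total <= 0:
            return float("nan")   # no reported income; this handling is an assumption
        return sum((amount / total) ** 2 for amount in income_by_source)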
Before proceeding, it is necessary to comment on the limitations of our measure. First, when it comes to survey-based measures of household finances, asking questions about consumption is generally preferred to asking questions about earned income. Wealthier households tend to under-report earned income <cit.>. In this study, however, we examine specifically the sources of income, and so a consumption approach would provide no information on the sources from which a household earns its livelihood. Second, diversification as a coping strategy is not about how many different sources of income one has but about spreading the risk by diversifying to income sources with different risk profiles. To that end, one would want to have information on crop choice and the sectoral composition of wages. Unfortunately, the data, particularly the HFPS data, do not capture this information. Third, the way in which we have measured diversification cannot distinguish between voluntary and involuntary changes in diversification. For example, a person may be separated from a job, which would appear to be a decrease in diversification, though an involuntary one, not a deliberate coping strategy. Ideally, we could differentiate between involuntary and voluntary diversification actions, but we can only observe the level of diversification and the subsequently associated food insecurity. Finally, as job searches, crop production, and starting a new business typically extend over several months, one would want to have data over a long enough time frame to adequately allow for inter-household adjustments to livelihood sources. While our data span a time period of more than two years, we acknowledge that this time frame is shorter than most other studies of livelihood diversification, some of which span decades <cit.>.
§.§ Food Insecurity
We examine food insecurity as our primary outcome variable to measure household well-being. We use the Food Insecurity Experience Scale (FIES), an experience-based metric that can be used to compare prevalence rates of food insecurity across national and sub-national populations. Following the FIES standard survey model <cit.>, respondents to the pre- and post-outbreak surveys answer eight questions aimed at capturing whether the respondent or other adult household members:
* were worried they would not have enough to eat,
* were unable to eat healthy and nutritious food,
* ate only a few kinds of food,
* had to skip a meal,
* ate less than they thought they should,
* ran out of food,
* were hungry but did not eat, or
* went without eating for a whole day.
Following standard practice <cit.>, we count the number of affirmative answers to these eight questions to categorize households into mild, moderate, and severe food insecurity. Households which answered affirmatively to between one and three FIES questions are classified as experiencing mild food insecurity. Households which answered yes to between four and seven questions are classified as experiencing moderate food insecurity. Households are classified as severely food insecure if they responded affirmatively to all eight questions.
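A minimal sketch of this classification rule follows; treating zero affirmative answers as food secure is our assumption, since the text defines only the three insecurity categories.

```python
def fies_category(n_affirmative):
    """Map the count of affirmative FIES answers (0-8) to a category,
    using the thresholds described in the text."""
    if n_affirmative == 0:
        return "food secure"       # assumption: no affirmative answers
    if 1 <= n_affirmative <= 3:
        return "mild"
    if 4 <= n_affirmative <= 7:
        return "moderate"
    return "severe"                # all eight questions answered affirmatively
```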
FIES scores using these integer values may be limited by several factors. First, some post-outbreak rounds do not include food insecurity modules, so there are gaps in the data, just as there are with income sources. Second, there are inconsistencies in the reference period for food insecurity questions in the pre-outbreak data.[Online Appendix <ref> includes the exact wording of each question (including reference periods) in each country both before and after the pandemic began.] In Ethiopia and Malawi, the reference period in the pre-outbreak data is the last seven days while in the post-outbreak period it is 30 days. In Nigeria, the reference period is consistent (30 days) both pre- and post-outbreak. Finally, in Malawi, only six of the eight questions were included in the pre-outbreak survey.
To ensure our measures of food insecurity are as similar as possible over time, we create a standardized FIES score, developed by <cit.> and implemented in <cit.>, in addition to our mild, moderate, and severe indicators. The standardized measure counts the number of affirmative answers to FIES questions in the pre-outbreak data by country and uses survey weights to standardize the variable such that its mean is zero and its standard deviation is one.[The standardization process creates a z-score by subtracting off the mean and dividing by the standard deviation to get FIES ∼𝒩(0,1). In calculating the mean and standard deviation, we use weights so that the mean is the weighted mean and the standard deviation is the weighted standard deviation. This preserves the representativeness in the standardized variable.] Following a similar process, the post-outbreak data are standardized by country across all data rounds. As such, the standardization process facilitates comparison between pre- and post-outbreak data and across countries by ensuring our measure of food insecurity is as similar as possible over time and across countries. This allows for comparisons of deviations from the pre-pandemic mean and the mean of the variables after the onset of the pandemic within each country. Additionally, standardization allows us to interpret estimated coefficients in terms of standard deviations instead of a unitless score.
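A sketch of the survey-weighted standardization described in the footnote is below; raw would be the count of affirmative FIES answers and weights the survey weights, applied separately by country and by pre-/post-outbreak period. Function and variable names are ours, not the authors'.

```python
import numpy as np

def weighted_standardize(raw, weights):
    """Survey-weighted z-score: subtract the weighted mean and divide by the
    weighted standard deviation, so the result has mean 0 and standard
    deviation 1 under the survey weights."""
    x = np.asarray(raw, dtype=float)
    w = np.asarray(weights, dtype=float)
    mean = np.average(x, weights=w)
    var = np.average((x - mean) ** 2, weights=w)
    return (x - mean) / np.sqrt(var)
```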
As seen in Figure <ref>, food insecurity in all three countries increased substantially between 2019 and the summer of 2020, following the onset of the pandemic. Recovery in food security throughout the subsequent year was slow in all three countries, with about 80 percent of households in Malawi and Nigeria and about 60 percent in Ethiopia experiencing mild food insecurity in almost every month following the outbreak. Prior to the onset of the pandemic, mild food insecurity affected less than 30 percent of households in Ethiopia and about 60 percent in Malawi and Nigeria. Similarly, moderate food insecurity spiked after the COVID-19 outbreak and slowly recovered in subsequent months. Severe food insecurity increased in Malawi and Nigeria in June 2020 and rose slightly in Ethiopia after the initial outbreak period. In all three countries, food security has not returned to pre-pandemic levels.
§ METHOD
We use two econometric approaches to investigate the causal effect of livelihood diversification on mitigating the impact of COVID-19 on food insecurity. The first approach is a dynamic panel data estimator in which food insecurity for a particular round is explained by the diversity index from the previous round. This approach is designed to capture the potential ex post effects of changing livelihood diversification strategies in response to the pandemic. The second approach is an ANCOVA estimator, in which we regress food insecurity for a particular round on the pre-COVID-19 diversity index. This approach is designed to capture the potential ex ante effects of livelihood diversification in anticipation of a shock. For each specification, we run regressions for each country separately. Survey weights are included in all specifications.
Following <cit.>, who investigate poverty outcomes in linear and non-linear settings, our dynamic panel data model with lagged variables takes the following form:
y_it= α + β_1 y_it-1 + β_2 (y_it-1*div_it-1) + β_3 div_it-1 + δ_t + r_j*t_t + u_i + ϵ_it,
where y_it is food insecurity for household i at time t, y_it-1 is the lagged value of food insecurity, and div_it-1 is the lagged value of the diversity index. We lag these values to account for the time it takes for livelihood diversification to actually affect welfare. Diversification does not have an instant impact. Rather, households may use diversification, ex ante, to cope with shocks and thus improve food security in the future. Including lagged food insecurity y_it-1 in our specifications ensures that the variation we observe in our dependent variable is due to livelihood diversification rather than household-level characteristics or differences. In this specification, β_2 is our variable of interest, measuring how lagged income diversification impacts a household's food insecurity, dependent on that household's food insecurity status in the prior round of data.[We pre-specified using past income level and focused on diversification as the coping strategy, but our results are robust to alternative proxies, such as changes in income level, and to the use of other coping strategies and alternative estimators. See the Online Appendix <ref> for additional details and results.]
In estimation, we account for regional and time differences in COVID-19 policies and mitigation strategies. We include time (i.e., round) indicators (δ_t) to capture variation in COVID-19 cases and COVID-19-related policies occurring nation-wide. This also captures other large-scale temporal events such as elections in Malawi and civil unrest in Ethiopia. Regional indicators (r_j) are interacted with a time trend t_t to control for regional differences in COVID-19 mitigation strategies over the evolution of the pandemic as well as other regional shocks such as drought or conflict. Lastly, u_i is a household fixed effect to control for time-invariant, unobservable household heterogeneity, and ϵ_it is an idiosyncratic error term. Robust standard errors are clustered by household to correct for within-household correlation over time.
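As an illustration, the sketch below shows how a specification of this form could be estimated in Python with statsmodels. It is not the authors' code: column names (hhid, round, region, fies_std, fi, weight) are placeholders, household fixed effects are absorbed with explicit dummies rather than a dedicated within estimator, and survey weighting is handled through WLS.

```python
import statsmodels.formula.api as smf

def fit_dynamic_panel(df):
    """Sketch of the dynamic panel specification: standardized food
    insecurity on its lag, the lagged fractional index, and their
    interaction, with household fixed effects, round dummies,
    region-specific time trends, survey weights, and standard errors
    clustered by household. All column names are placeholders."""
    d = df.sort_values(["hhid", "round"]).copy()
    d["l_fies"] = d.groupby("hhid")["fies_std"].shift(1)
    d["l_div"] = d.groupby("hhid")["fi"].shift(1)
    d["trend"] = d["round"].astype(float)
    d = d.dropna(subset=["fies_std", "l_fies", "l_div"])
    formula = ("fies_std ~ l_fies * l_div"       # main effects + interaction (beta_2)
               " + C(round) + C(region):trend"   # round dummies, region-specific trends
               " + C(hhid)")                     # household fixed effects as dummies
    return smf.wls(formula, data=d, weights=d["weight"]).fit(
        cov_type="cluster", cov_kwds={"groups": d["hhid"]})
```

In practice one would use an estimator that absorbs the household dummies (for example a within transformation) rather than creating thousands of indicator columns; the formula is written this way only to mirror Equation (<ref>) term by term.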
The dynamic panel data specification captures causal impacts if the livelihood diversification index is not correlated with the error term. We address issues of endogeneity which might arise from (1) simultaneity or reverse causality between the independent and dependent variables, (2) omitted variable bias, or (3) non-classical measurement error. With regard to the first issue, it is possible that households with better welfare outcomes have more resources, enabling them to diversify their livelihoods. In this case, a simultaneity problem arises as the dependent and independent variables are co-determined. To account for simultaneity, we use lagged independent variables to ensure temporal precedence of livelihood diversification relative to observed household food insecurity, thus isolating the relationship between past diversification and subsequent outcomes.[The use of lagged variables has the potential to bias coefficients downward, which <cit.> and others address through instrumental variables. In Online Appendix <ref> we implement the Arellano-Bover/Blundell-Bond <cit.> GMM estimator and show that the size of the downward bias is small and does not affect the interpretation of our results.] Our panel data, with a pre-COVID baseline, help us avoid omitted variable bias. By including household fixed effects, we account for observable and unobservable time-invariant household characteristics that might influence food insecurity. Including time dummies and region-time trends controls for unobserved time-varying heterogeneity, albeit not at the household level. As a result, we greatly reduce the probability of omitting crucial variables. Finally, we address potential non-classical measurement error by pre-specifying multiple measures of livelihood diversification that categorize and aggregate income in different ways. All robustness checks are presented in Online Appendix <ref>. While we cannot account for all possible sources of endogeneity, such as contemporaneous, time-varying, idiosyncratic shocks to the household, our dynamic model, series of controls, and multiple livelihood diversification measures reduce the likelihood of correlation between dependent and independent variables and enable us to credibly claim to identify causal relationships.
To enrich the dynamic panel data model, we add interaction effects to account for the socioeconomic impacts associated with COVID-19 government restrictions:
y_it = α + β_1 y_it-1 + β_2 div_it-1 + β_3 str_t + β_4 (y_it-1*div_it-1) + β_5 (y_it-1*str_t) + β_6 (div_it-1*str_t) + β_7 (y_it-1*div_it-1*str_t) + δ_t + r_j*t_t + u_i + ϵ_it.
Here str_t is the government stringency score at time t. The triple interaction term (β_7) indicates the combined impact of lagged food insecurity, lagged income diversity, and contemporaneous government stringency. As with Equation (<ref>), the specification of Equation (<ref>) includes household fixed effects, time dummies, region-time trends, and clustered standard errors by household.
In addition to the dynamic panel data models, we use an ANCOVA estimator to generate difference-in-difference-type estimates <cit.>.[We also pre-specified the use of simple difference-in-difference (DID) models in which we include an indicator for the start of the lockdown. We prefer the ANCOVA specifications to simple DID because coefficients are more precisely estimated. However, we present DID results in Online Appendix <ref>.] Here we explicitly control for pre-pandemic welfare:
y_it = α + β_1 y_it=0 + β_2 (y_it=0*div_it=0) + β_3 div_it=0 + δ_t + r_j*t_t + ϵ_it.
In this equation, div_it=0 and y_it=0 are the diversity index and food insecurity in the pre-COVID-19 data. All other terms are as previously defined. Including the pre-COVID outcome variable y_it=0 more precisely attributes the variation in food insecurity to our variable of interest. In this model, we evaluate the impact of ex ante income diversification on post-shock food insecurity. In this context, β_2 is the variable of interest, which is the relationship between pre-COVID-19 income diversification and household food insecurity during the pandemic, dependent on that household’s food insecurity status before the pandemic. As with the other models, we include time dummies, region-time trends, and cluster standard errors at the household-level.
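A corresponding sketch of the ANCOVA specification, under the same caveats (placeholder names, simplified weighting), might look as follows; the coefficient of interest β_2 is the fies_base:hhi_base interaction term.

```python
import statsmodels.formula.api as smf

def fit_ancova(post_df):
    """Sketch of the ANCOVA specification: post-outbreak food insecurity on
    baseline (pre-COVID) food insecurity, the baseline HHI, and their
    interaction, with round dummies, region-specific time trends, survey
    weights, and household-clustered standard errors."""
    d = post_df.dropna(subset=["fies_std", "fies_base", "hhi_base"]).copy()
    d["trend"] = d["round"].astype(float)
    formula = ("fies_std ~ fies_base * hhi_base"
               " + C(round) + C(region):trend")
    return smf.wls(formula, data=d, weights=d["weight"]).fit(
        cov_type="cluster", cov_kwds={"groups": d["hhid"]})
```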
With respect to identification, by observing the impacts of pre-COVID diversification on post-outbreak food insecurity, we avoid simultaneity issues and potential reverse causality. As we compare within-household changes relative to the baseline, the ANCOVA estimator controls for unobserved time-invariant heterogeneity between households. Additionally, our set of controls capture geographic and temporal shocks that evolve over the pandemic. That said, it is still possible that baseline livelihood diversification affects the probability of a household suffering from a time-varying, idiosyncratic shock at some point during the pandemic that impacts household welfare.
§ RESULTS
§.§ Income Composition Over Time
We describe a set of stylized facts using graphical analysis and non-parametric regressions in order to evaluate changes in income composition over time. Recall that our first research question asks: how have household income sources and household diversification changed since the onset of the pandemic?
Figure <ref> reports on changes in the mean number of households earning income from each of the seven income categories included in our fractional index. A first stylized fact that emerges from these non-parametric regressions is that households in all three countries are fairly specialized. While the data are weighted so as to be nationally representative, a majority of households are engaged in farming. In Ethiopia, farming far dominates other sources of income; wages and non-farm enterprises are the next most common sources of income, though less than 40 percent of households earn income from these sources. In Malawi, over 80 percent of the population earn income from farming, though unlike in Ethiopia, a majority of households report earning income from wages and remittances. In Nigeria, nearly as many households (60 percent) report earning income from farming as from non-farm enterprises.
A second stylized fact is that in most countries there was a small but significant increase during the pandemic in the number of households engaged in farming. In Malawi and Nigeria the number of households reporting income from farming increased after the start of the lockdowns and then slowly decreased over the length of the pandemic. Some of this change is likely due to seasonality, as June and July are harvest season for maize and beans in these countries. However, the agro-climatic variation in each country, their location in the tropics, and the diversified nature of smallholder agriculture mean that seasonality is unlikely to account for all the temporal change in farm income. A second likely reason for the increase in farm income is household adaptation to business closures and travel restrictions. In fact, we see declines in wage labor and/or remittances that are contemporaneous with the increase in farm income in each of the countries. Ethiopia is the outlier, with farm income decreasing after the pandemic. Again seasonality offers a partial explanation: the harvest for crops grown during the long-rains (mehir) is in November and December, while the harvest for crops grown during the short-rains (belg) is in May and June, when we see participation in farm income declining. The decline in farm income, as well as the contemporaneous decline in wage and non-farm enterprise income, suggests households in Ethiopia became more specialized as a result of seasonal labor demands and government restrictions related to COVID-19.
A third stylized fact is that most households in most countries lost sources of income during the pandemic, particularly in its first months. In Ethiopia, we see a decline in the share of households reporting income from farming, wages, non-farm enterprises, remittances, and assistance. In Malawi, we see declines in wages, remittances, and assistance. That Ethiopia sees a decline in participation in almost every source again suggests a trend towards specialization in response to the pandemic. In Malawi, while households stopped receiving income from various sources, the increase in participation in farming means that the direction of a household's response is indeterminate. Relative to these two countries, Nigeria is an outlier. While remittance income declined, more households report participating in farming, wage labor, and receiving assistance after the start of the pandemic. There was no apparent immediate effect of the pandemic on non-farm enterprise. All of this suggests that households in Nigeria responded to the pandemic by becoming more diversified.
Figure <ref> illustrates how engagement in income-generating sources changed over time. Recall that our fractional index is constructed so that the number of categories one could earn income from is standardized across countries and across time. Despite changes in specific income sources, a fourth stylized fact that emerges from the data is that on average households did not change their income diversification pattern since the onset of the pandemic. We do observe small differences in Ethiopia and Malawi, where households became more specialized during the pandemic than before. These changes are driven by a decline in the percent of households receiving remittances, government assistance, and wage income. This effect is relatively modest. In Nigeria, diversification increased slightly after the start of the pandemic, mainly due to increased participation in farming and greater government assistance.
From these modest stylized facts we conclude that households made limited use of livelihood diversification as a coping strategy after the onset of the pandemic or their attempts to do so were offset by job losses from illness and/or government restrictions throughout 2020.
§.§ Livelihood Diversification and Food Insecurity
We present country-level results for our two empirical models, examining the impact of livelihood diversification on household welfare as proxied by food insecurity. Our dynamic panel specification relies on the fractional index measured across all rounds, both pre- and post-outbreak, and aims to test the effectiveness of diversification as an ex post coping strategy. Our ANCOVA specification relies on the HHI, which is only measured at baseline, and aims to test the effectiveness of diversification as an ex ante coping strategy.
From equation <ref>, we are interested in β_1 y_it-1 + β_2 (y_it-1*div_it-1) + β_3 div_it-1. The coefficient β_1 measures the persistence of food insecurity for the household, as current food insecurity is likely to be affected by food insecurity in the previous period. The coefficient β_3 captures the impact of past decisions about livelihood diversification on current food insecurity, as it takes time for those decisions to manifest in income shifts. Finally, the coefficient β_2 captures the heterogeneity in how past decisions about income diversification impact current food insecurity based on past food insecurity.
Using this framework, we see that by and large our coefficients of interest are relatively small and not significant. Looking first at Table <ref>, coefficients are not significant across all measures of food insecurity across nearly all countries. Of the twelve regressions, the coefficient on the interaction between the lagged fractional index and the lagged food insecurity measure is significant only for the moderate measure of food insecurity in Ethiopia. In Ethiopia, coefficients are small and tend to be negative, suggesting a null or possibly negative effect in which increased specialization may lead to less food insecurity (greater food security). In Malawi and Nigeria, coefficients are small and tend to be positive, suggesting a null or possibly positive effect in which increased specialization may lead to more food insecurity (less food security). However, as 11 of 12 results are null, we do not place great weight on the signs of regression coefficients.
Next, in Table <ref> we examine the effects of the triple interaction of the lagged fractional index, lagged food insecurity, and contemporaneous government stringency score. For all four measures of food insecurity and across all three countries, coefficients on the interaction term of interest are not significantly different from zero. That said, the same pattern in signs emerges. Three of four results are small and negative in Ethiopia, potentially suggesting that increased specialization leads to less food insecurity. In Malawi and Nigeria, the opposite pattern holds, potentially suggesting that increased specialization leads to more food insecurity. In this specification, we also observe that COVID stringency is significantly associated with food insecurity in Ethiopia and Malawi, suggesting a positive, albeit small, relationship between the strictness of COVID policies and food insecurity. Based on the results in Tables <ref> and <ref>, we conclude that livelihood diversification was not effective as an ex post coping strategy for mitigating food insecurity during the pandemic.
Turning to the ANCOVA specification, Table <ref> reports results that relate pre-COVID livelihood diversification, measured using the HHI, to post-outbreak food insecurity measures. Results using ANCOVA are fundamentally similar to those using the dynamic panel regressions. The majority of coefficients on baseline HHI are not significant, with only three interaction-term coefficients significant. Food insecurity increases in Malawi with more specialization while it decreases in Ethiopia. Coefficients are even closer to zero and confidence intervals are tighter in the ANCOVA regressions, potentially because of the larger sample sizes, since ANCOVA does not rely on a balanced panel of households. We conclude that livelihood diversification was not effective as an ex ante coping strategy for mitigating food insecurity during the pandemic.
§.§ Interpreting Our Results
Summarizing the results, the preponderance of evidence points to a lack of a statistically significant relationship between livelihood diversification and food insecurity, our proxy for welfare. We see this both in terms of the stylized facts from our descriptive analysis and in the regression results from our causal analysis. We see households making limited use of livelihood diversification as a coping strategy after the onset of the pandemic. And we see almost no evidence that greater levels of diversification pre-COVID, or changes to diversification during COVID, had a meaningful impact on food insecurity. Based on this, we conclude that the data fails to support our pre-specified hypotheses.
In interpreting our results, there are four potential mechanisms that could offer an explanation. The first is that livelihood diversification truly is effective but that our measure of livelihood diversification or food insecurity does not adequately capture what really matters. This is a reasonable explanation given that there is no single, fully agreed upon way to measure diversification or food insecurity in the literature and no settled theory about whether some sources of income matter more than others. Thus, any index of diversification or quantification of food insecurity is inherently ad hoc. In anticipation of potential mismeasurement leading to null or unexpected results, we pre-specified a total of six measures of livelihood diversification. In the body of the paper we have presented results from our two preferred diversification measures while in Online Appendix <ref> we describe the other four pre-specified indices. As is evident in Online Appendix <ref>, a minority of results are statistically significant, with the vast majority of coefficients being not significant. While there are clearly more than just six ways to measure livelihood diversification, we believe the weight of the evidence from these various robustness checks demonstrates that our main findings are not simply due to mis-categorization of livelihood diversification.
A second explanation for the null findings is that significant results are masked by uncontrolled heterogeneity. Here the logic is that while the average effects are zero, if we conducted a subgroup analysis we would find significant results for these smaller populations. This is exactly what <cit.> find: diversification impacts male versus female headed households differently. Urban households are also differently impacted, relative to rural households. Anticipating the presence of heterogeneous effects, we pre-specified subgroup analysis by gender of the head of household and by whether the household lives in a rural or urban area. Previous research, including research on the effects of COVID-19 <cit.>, has shown that shocks and welfare impacts vary by these sub-populations <cit.>. In Online Appendix <ref> we present all of our analyses using these pre-specified subgroups. Nearly every estimate is statistically indistinguishable from zero. We find no consistent or coherent evidence that our primary findings of null effects are masking significant effects for certain subgroups.
A third explanation is that our study lacks sufficient power to detect significant effects. The logic here is that we are failing to detect a true, statistically significant, and economically meaningful effect because we lack sufficient observations with sufficient variation. We cannot conclusively rule out this explanation because we did not conduct ex ante power calculations as part of our pre-analysis plan. We failed to do this for two reasons. First, it was not clear what values should be used in a power calculation for means and standard deviations in the control group nor what a reasonable expected effect size would be. Second, and quite honestly, we did not think it necessary given the large size of the data collection effort. In total, the pre- and post-outbreak LSMS-ISA data sets contain more than 84,000 observations. Country-specific regressions contain between 3,000 and 15,000 observations. In writing the pre-analysis plan, we did not expect a lack of power to be something we would eventually need to address. Given the evidence on how misleading ex post power calculations can be, we have not done this sort of calculation <cit.>. Our failure to conduct ex ante power calculations means we cannot provide definitive evidence against the explanation that our analysis lacks power. However, we believe that the large number of observations used in the analysis makes the lack of power an unlikely story.
A final explanation, and the one we find most compelling, is that in the face of a global pandemic and related government restrictions, livelihood diversification was not an effective coping strategy. This is true both in terms of using livelihood diversification as an ex ante strategy, to prepare for a potential shock, and an ex post strategy, to react to a realized shock. Health concerns and government restrictions to stop the spread of the virus may have stripped resource-rich households of their comparative advantage and equalized vulnerability of income-diverse and income-specialized households. Or perhaps moving away from subsistence farming left households unable to access sufficient food during times of crisis, leaving income-diverse households worse off. In the end, we do not find evidence that income diverse households were better equipped to cope with the socioeconomic impacts of the COVID-19 pandemic. Unlike transitory or localized shocks, which are frequently the setting for research on livelihood diversification as a coping strategy, the pandemic lasted several years and occurred at a global scale. To combat the spread of the virus, governments imposed restrictions on travel and business operations, which may have limited a household's ability to diversify income in response to the pandemic. The evidence presented in this paper suggests that livelihood diversification is ill-fit as a coping strategy for households preparing for or reacting to a shock of the immensity and length of the pandemic.
§ CONCLUSIONS
The COVID-19 pandemic exacerbated the challenges faced by households in Sub-Saharan Africa. Much of the existing literature suggests that income diversification bolsters household resilience to shocks and improves household welfare after the experience of unanticipated events. However, the focus of this literature is on weather shocks and localized conflict events and so does not capture the nature of a large-scale disaster, like the COVID-19 pandemic. With this study, we seek to fill that gap, exploring two questions related to household income composition and welfare outcomes in this new and unprecedented context. We take advantage of rich survey data to assess trends in income composition over time and to understand the relationship between livelihood diversification and household welfare outcomes, in particular, food security. As our data include a pre-outbreak baseline, we are able to observe household status prior to the pandemic and then track them through and beyond the first year of the pandemic. This panel data, along with our empirical strategies, allows us to identify causal relationships between a household's choice of livelihood activities, income sources, and their level of food insecurity during the pandemic.
In terms of how household livelihood diversification has changed since the onset of the pandemic, we do not observe substantial or systematic changes in household income composition. Small differences exist in Ethiopia and Malawi, where households become more specialized during the pandemic than before. Conversely, in Nigeria diversification increased slightly after the start of the pandemic. From these trends we conclude that households made limited use of livelihood diversification as a coping strategy during the pandemic.
In terms of how household income composition impacts food insecurity, our regressions provide little evidence to support the idea that livelihood diversification reduced food insecurity during the pandemic. The preponderance of evidence, across countries, estimation methods, and measures of diversification, is that there is no significant relationship between livelihood diversification and food insecurity. We provide a number of robustness checks and evidence that we believe demonstrates that the null results are true nulls. Though income diversification may serve as an effective ex ante or ex post coping mechanism for many shocks, in particular small transitory shocks, we find that for a disaster on the scale of the COVID-19 pandemic this strategy does not appear to be effective.
An optimistic interpretation of the evidence in this paper would be that the extreme socioeconomic impacts of the pandemic appear to necessitate alternative adaptation strategies. A pessimistic interpretation is that a pandemic is too disastrous and omnipresent to prepare for or adequately adapt to. Either interpretation leads to the conclusion that commonly promoted coping strategies, such as livelihood diversification, that households are encouraged to undertake on their own are inadequate for large-scale or long-term disruptions and disasters. As households, development agencies, and governments look to prepare for the increased occurrence of such disasters, either due to climate change or the increased spread of zoonotic disease, this point bears keeping in mind. Future research and development will need to grapple with the fact that the coping strategies that gave people hope in the past may fail them as they try to cope with the increased scale of shocks in the future.
Pre-COVID Engagement in and Earnings from Income Sources
(Columns: Share Engaged, Mean Income in USD, Standard Deviation in USD)

Panel A: Ethiopia
Crop Income 0.522 279 498
Livestock Sales 0.318 249 240
Livestock Product Income 0.489 533 1,202
Wages 0.215 1,490 3,740
Casual Employment Wages 0.088 179 387
Temporary Employment Wages 0.087 96 97
Non-Farm Enterprises 0.224 1,271 2,933
In-Kind Transfers/Gifts 0.024 57 96
Cash Transfers/Gifts 0.099 304 504
Food Transfers/Gifts 0.048 61 79
In-kind Transfers from Govt and NGOs 0.009 39 30
Cash Transfers from Govt and NGOs 0.032 64 62
Free Food 0.049 36 47
Pension 0.013 279 242
Rental Income 0.084 337 584
Asset Sales 0.087 262 244
Savings, Interest, Investment 0.002 145 355
Other 0.008 354 424
Observations: 3,247

Panel B: Malawi
Crop Income 0.776 131 206
Tree Crop Sales 0.062 28 32
Livestock Sales 0.263 56 76
Livestock Product Income 0.298 42 89
Wages 0.261 1,617 3,109
Casual Employment Wages 0.609 314 453
Non-Farm Enterprises 0.438 1,894 5,154
Cash Transfers/Gifts 0.263 68 131
Food Transfers/Gifts 0.272 13 11
In-Kind Transfers/Gifts 0.114 30 50
Cash from Children 0.197 65 87
In-Kind Transfers from children 0.131 45 48
Free Food 0.182 20 14
Cash Transfers from Govt and NGOs 0.064 57 36
Cash or Inputs for Work 0.019 56 28
MASAF Public Works Program 0.043 32 15
Pension 0.011 1,355 1,320
Rental Income 0.080 321 392
Asset Sales 0.076 87 102
Savings, Interest, Investment 0.068 64 77
Other 0.043 43 108
Observations: 1,726

Panel C: Nigeria
Crop Income 0.644 401 449
Tree Crop Sales 0.064 233 369
Livestock Sales 0.216 165 214
Livestock Product Income 0.187 93 178
Wages 0.260 1,598 1,419
Non-Farm Enterprises 0.624 2,926 4,109
Domestic Remittances 0.266 110 115
Foreign Remittances 0.034 267 295
In-Kind Remittances 0.125 47 61
Cash, Food, or In-kind Assistance 0.042 54 54
Pension 0.030 689 900
Rental Income (Non-Ag) 0.049 347 437
Rental Income (Ag) 0.039 45 88
Savings, Interest, Investment 0.021 277 804
Other 0.011 610 485
Observations: 1,950

Note: The table displays the share of households engaged in each category of livelihood activity and the mean and standard deviation of income earned from that category. In the LSMS-ISA data, income is reported in the local currency. To allow for cross-country comparisons, we convert income values to US dollars using 2019 exchange rates.
Livelihood Diversification Indices Summary

Fraction index (standardized across countries; pre- and post-COVID-19 data). To generate this fraction index, we collapse multiple income sources into seven broad income-generation categories that are consistent across rounds and across countries. These categories are: farm; wage; pension; remittances; non-farm enterprise; income from properties, investments and savings; and other. The “other” income category varies across countries and rounds but generally includes asset sales, income from NGOs, and government assistance. [Pre-COVID-19 kernel density graph]

HHI (standardized across countries; pre-COVID-19 data). Given the level of detail provided in the pre-COVID survey data, we are able to generate an HHI to capture household income diversity more precisely. For this index, we use the same 12 income categories used in the simple fraction index but consider the amount earned from each source. [Pre-COVID-19 kernel density graph]

Note: The table summarizes the two livelihood diversification indices used in the main analysis. Higher index values indicate more household specialization (less income diversification). Appendix <ref> contains similar summary information regarding the other four indices we pre-specified.
§ ONLINE-ONLY APPENDIX TO “COPING OR HOPING? LIVELIHOOD DIVERSIFICATION AND HOUSEHOLD WELFARE IN THE COVID-19 PANDEMIC”
§ ADDITIONAL DATA CONSIDERATIONS
§.§ Diversification Indices
Following our pre-analysis plan, in addition to the indices specified and discussed in the main text, we also present findings from four additional indices. These are summarized in Table <ref>.
Each of the indices, here and in the paper, has advantages and limitations. The fractional indices consider engagement in income-generating activities. These measures rely on binary responses, and the dichotomous nature of these variables allows for comparison of income-generating activities over time with the inclusion of the post-outbreak HFPS rounds. However, these fractional indices do not consider the amount of income earned from each source. As such, these indices are a less nuanced representation of household income diversity than the Herfindahl-Hirschman Index (HHI) measures. For example, suppose Household A was engaged in casual employment for one week in 2019. During that week, the household earned five percent of their total annual income and the remaining 95 percent was generated through farm work. Suppose their neighbor, Household B, was also engaged in casual labor and farm work but generated equal incomes from these two sources (a 50 percent split). In our data, Households A and B would receive the same fraction score, even though Household A is much more dependent on a single income source than Household B and as a result Household A would have a higher HHI than Household B. There is a trade-off between using all of the data, pre- and post-outbreak, and just the pre-COVID data. The former gives us more observations over time but less detail about income. The latter provides more detail about income but is only a snapshot in time.
Similarly, there is a trade-off between the simplicity of the fractional indices and the HHI. The fractional indices lack detail because they encode that detail in simple “yes” or “no” answers. Conversely, the HHI indices consider the portion of total income generated from each source, providing a more detailed measure of income diversity. However, these values are influenced by outliers in the data. Income calculations often involve multiplication of different variables:
wage earnings = hourly income * hours worked per week *
weeks worked per month * months worked per year,
aggregation across income sub-categories:
income from livestock products = income from milk sales + value of household milk
consumption + income from meat sales + value of household
meat consumption + ... ,
and other data manipulations. An error, misrepresentation, or miscalculation in any one of these intermediate variables can lead to erroneous estimations, which compound as one continues to aggregate values. Additionally, when considering crop and livestock product income, prices are not available for household consumption. Following standard practices in the literature, we assume the value of a consumed product is equal to the median sale price for that product in the household's geographic area. To account for large outliers, for each income category we winsorize outliers greater than two standard deviations from the median and impute their values. Despite this adjustment, the data are still vulnerable to potential error and subjective assumptions that affect their accuracy, which may lead to inaccuracy and/or bias in our estimated values. Because HHI scores are calculated based on a percentage, a measurement inaccuracy in one income source can distort the overall score.
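As an illustration of the outlier treatment just described, the helper below caps values of an income component lying more than two standard deviations from its median. This is a simplified stand-in, with hypothetical names, for the winsorize-and-impute step; the actual imputation rule, and whether bounds are computed within country or survey round, may differ.

```python
import pandas as pd

def winsorize_income(series, n_sd=2.0):
    """Cap values lying more than `n_sd` standard deviations above or below
    the median of the series at that bound."""
    x = series.astype(float)
    med, sd = x.median(), x.std()
    return x.clip(lower=med - n_sd * sd, upper=med + n_sd * sd)

# Applied component by component before aggregation, for example:
# df["wage_earnings"] = winsorize_income(
#     df["hourly_income"] * df["hours_per_week"]
#     * df["weeks_per_month"] * df["months_per_year"])
```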
§.§ Food Insecurity Experience Scale Questions
Tables <ref> through <ref> show the survey questions used to measure food insecurity in each country included in this study.
Livelihood Diversification Indices Summary

Fraction index (not standardized across countries; pre- and post-COVID-19 data). For this index we collapse variables into income categories that are consistent across rounds within a country but vary across countries. As a result, this index allows us to observe income sources at the most granular level available over multiple waves for each country individually. This fraction index considers 10 income source categories in Ethiopia, 7 in Malawi and Nigeria, and 8 in Uganda. [Pre-COVID-19 kernel density graph]

Fraction index (standardized across countries; pre-COVID-19 data). For this index we collapse pre-COVID variables into income categories that are consistent across countries. Because the index only draws from the rich pre-COVID data, we are able to include 12 income categories available across all four countries: remittances; in-kind assistance from friends and family; investments and savings; income from properties; pension; non-farm enterprise; crop sales and consumption; livestock sales; livestock products sales and consumption; wages; government and NGO assistance; and other. [Pre-COVID-19 kernel density graph]

Fraction index (not standardized across countries; pre-COVID-19 data). For this index we collapse pre-COVID variables into income categories that vary across country. As a result, this index allows us to observe income sources at the most granular level available in the pre-COVID data. We generate this index using 19 income sources in Ethiopia, 21 in Malawi, 15 in Nigeria, and 13 in Uganda. [Pre-COVID-19 kernel density graph]

HHI (not standardized across countries; pre-COVID-19 data). This index is identical to the fractional pre-COVID-19 index described above but uses an HHI instead of a fraction to evaluate the distribution of income from each source. The index varies across country. [Pre-COVID-19 kernel density graph]

Note: The table summarizes the four livelihood diversification indices that we pre-specified but did not include in our main analysis. Higher index values indicate more household specialization (less income diversification).
§ ROBUSTNESS CHECKS ON MAIN ANALYSIS
§.§ Results Using Alternative Indices
Per our pre-analysis plan, we pre-specified tests of our primary hypotheses using six indices. After having collected and cleaned the data and conducted the analysis, we found that nearly all estimates of the coefficients of interest were not statistically significant. Because of this, we simplified the presentation of our findings in the main body of the paper to rely solely on our two preferred indices. These indices are the fractional index and HHI that standardize income categories over country and over time. This allows for the cleanest possible comparison between countries and pre/post-outbreak. To complete the pre-specified analysis, we present the results from all of our main empirical specifications using the other four indices. In short, results do not differ when using any of these other four pre-specified indices.
Table <ref> presents results using what we term Fractional Index 2. This index is similar to our preferred fractional index in that it standardizes income categories across time (pre- and post-outbreak). It differs from our preferred index in that it does not standardize income categories across country. This means that each country can have a different number of income sources, taking full advantage of the richness of the data but making cross-country comparisons more difficult. Results using Fractional Index 2 are not meaningfully different from those with our preferred fractional index. Nearly all coefficients of interest are not significant.
Table <ref> provides results for what we term fractional indices 3 and 4 and HHI 2. Fractional index 3 standardizes income categories across countries but uses only the pre-COVID data for construction. Fractional index 4 also uses only pre-COVID data but does not standardize across countries. Recall that our preferred fractional index standardizes across both time and country, making the index comparable across these dimensions but also resulting in the fewest categories. Fractional index 4 uses all available data, resulting in the most income categories possible for each country, but the loss of standardization means the index values from one country are not comparable to another. Like fractional index 4, income categories in HHI 2 are not standardized across countries.
In Tables <ref> and <ref> we present results for all three indices' impact on food security using the ANCOVA and difference-in-difference specifications. Note that since all of these indices use only baseline data we cannot employ them in the dynamic panel models. Similar to the results using our two preferred indices, the vast majority of point estimates are not statistically significant.
We conclude that our primary results, which show no significant relationship between livelihood diversification and welfare outcomes, are not an artifact of our definition of livelihood diversification. Using alternative, pre-specified indices does not change our findings.
In this Online Appendix we present additional information on our main results. First, we present tabular versions of the main results from the paper. Second, we present results from our pre-specified difference-in-difference regressions, which serve as a robustness check on our preferred ANCOVA results.
In terms of our main results, Table <ref> corresponds to a figure in the main text. The graphical representation of results in the main text is limited to our variable of interest: an interaction term in the dynamic panel model and the value of the pre-COVID index in the ANCOVA model. While the coefficient plots are succinct and condensed, they lack information on sample size as well as coefficient estimates on other terms that might be of interest in the regressions. Below we present results from our main specifications in tabular form so as to provide more complete information for the interested reader.
§.§ Results Using Alternative Specifications
In addition to ANCOVA specifications, we estimate a simple difference-in-difference model in which we include an indicator for the start of COVID-19-related restrictions in Sub-Saharan Africa. This specification takes on the following functional form:
y_it=α+β_1div_it=0+β_2 ( div_it=0*covid_t ) +β_3covid_t+δ_t+c_j*t_t+u_i+ϵ_it.
Here div_it=0 is the diversity index in the pre-pandemic period and covid_t is an indicator for before and after the start of the pandemic. The variable of interest in this specification is β_2, the difference-in-difference effect of income diversity post-pandemic.
As seen in Figure <ref>, results for food insecurity from the difference-in-difference specifications are generally consistent with their ANCOVA counterparts (Table <ref>) but are less precisely estimated, with larger standard errors. The same is true for results on educational engagement.
While not pre-specified, we also test the robustness of our results to using the Arellano-Bover/Blundell-Bond <cit.> GMM estimator. These results are presented in Table <ref>. With this specification, we demonstrate that the size of the downward bias is small and does not affect the interpretation of our results, with both specifications mirroring one another.
§ HETEROGENEOUS EFFECTS ANALYSIS
In our pre-analysis plan we proposed to investigate the heterogeneous effects of livelihood diversification for different population subgroups. Specifically, we planned to assess differences for male- and female-headed households as well as urban and rural households. We did not present these results in the main body of the paper since our primary results were not significant and because the subgroup analysis also produced null results. For completeness, we discuss the method used and results of the sub-group analysis here.
In Table <ref> we present the distributions of all of the indices comparing urban and rural as well as male- and female-headed households. We only include the pre-COVID-19 data, even when post-outbreak rounds are available for that index. Urban households tend to be more specialized than rural households, a result that is particularly evident in Ethiopia. There are not notable differences in income diversification by head-of-household gender.
§.§ Method
To estimate heterogeneous effects of livelihood diversification on welfare outcomes, we use the ANCOVA specifications discussed in section <ref>. We interact the sub-group indicator variables with livelihood diversification at baseline to understand the differential impacts for female-headed households and rural households.
y_it=α+β_1div_it=0+β_2 ( div_it=0*sub_i) +β_3sub_i+ β_4y_it=0+δ_t+r_j*t_t+ϵ_it
sub_i is an indicator variable for population subgroups based on head-of-household gender or household sector for household i. All other terms are as previously defined. The interaction term, β_2, represents the differential impact of pre-COVID-19 livelihood diversification on household welfare outcomes for these population subgroups. Standard errors are clustered at the household level.
Similarly, we estimate heterogeneous effects using a standard difference-in-difference model, with our coefficient of interest being the triple-difference effect of COVID, diversification, and female-headed/rural household status (β_7):
y_it = α + β_1 div_it=0 + β_2 ( div_it=0*covid_t ) + β_3 covid_t + β_4 ( div_it=0*sub_i) + β_5 ( covid_t*sub_i) + β_6 sub_i + β_7 ( div_it=0*covid_t*sub_i) + δ_t + c_j*t_t + ϵ_it.
All other terms are as previously defined and standard errors are clustered at the household level.
§.§ Heterogeneous Effects of Livelihood Diversification
In the main paper we address two pre-specified research questions. In this section, we explore our third pre-specified research question: does income diversification have disparate impacts on different country subgroups in the context of the COVID-19 pandemic? Specifically, we investigate heterogeneous impacts for male- and female-headed households as well as urban and rural households. To detect these potentially disparate effects, we include binary interaction terms indicating head-of-household gender and household sector in our ANCOVA and difference-in-difference specifications. Because we use only the ANCOVA and difference-in-difference specifications to answer this question, we restrict our analysis to just the indices that rely on baseline data (fractional indices 3 and 4 and HHI 1 and 2).
Table <ref> displays coefficient estimates for the ANCOVA specification with a head-of-household gender interaction. In this context, male-headed households are the comparison group. While nearly every coefficient of interest is not significant, coefficients tend to be slightly positive, suggesting that households headed by women may experience increased food insecurity when household incomes are more specialized. As seen in Table <ref>, using the difference-in-difference estimator also produces null results, with a similar tendency for most coefficients to be slightly positive.
We also test for heterogeneous impacts across rural and urban populations. Similar to the results for differences based on gender of the head-of-household, all of the results for differences between urban and rural households are statistically insignificant. Unlike the head-of-household gender results, the specifications that include urban/rural indicators do not point to a consistent relationship in terms of sign. For these specifications, rural households serve as the comparison group.
As seen in Table <ref> and Table <ref>, coefficients for the interaction term do not evidence a differential impact of livelihood diversification on food security for urban versus rural populations. Coefficient estimates are never statistically significant and do not follow a discernible trend across countries.
Overall we do not find any significant differences in how livelihood diversification impacts welfare outcomes based on the gender of the head of household or whether the household is rural or urban. There is a slight pattern of female-headed households experiencing worse outcomes than male-headed households when they are more specialized. No pattern emerges for urban versus rural households.
Pre-COVID-19 Indices Density by Urban/Rural and Head-of-Household Gender

[Kernel density graphs, shown by urban versus rural households and by male- versus female-headed households, for each of the six indices: Fractional Index 1, HHI 1, Fractional Index 2, Fractional Index 3, Fractional Index 4, and HHI 2.]

Note: The table displays kernel density graphs for the pre-COVID-19 round for each of the six livelihood diversification indices. The first column of graphs shows densities for urban versus rural populations while the second column of graphs separates the data by male- versus female-headed households. Higher index values indicate more household specialization (less income diversification).
|
http://arxiv.org/abs/2409.02197v1 | 20240903180543 | Schwinger Effect of Extremal Reissner-Nordström Black Holes | [
"Puxin Lin",
"Gary Shiu"
] | hep-th | [
"hep-th",
"astro-ph.HE",
"hep-ph"
] |
Department of Physics, University of Wisconsin-Madison
1150 University Avenue, Madison, WI 53706, USA
[email protected], [email protected]

The Schwinger effect has a variety of physics applications. In the context of black hole physics, it provides a channel for the decay of charged black holes. While the Schwinger rate has been derived for extremal Reissner-Nordström (RN) black holes using the AdS_2× S^2 geometry of the horizon, a full analysis in the whole geometry is lacking, begging the question of whether it is sufficient to ignore contributions away from the horizon. In this paper, we address this problem and obtain the spatial profile of the Schwinger production rate in an asymptotically flat RN black hole spacetime. We find that the Schwinger effect is strongest on the horizon and decays with distance from the horizon, exhibiting a characteristic scale of the Compton wavelength of the particle. The rate is switched off when the particle's charge-to-mass ratio approaches the corresponding extremality bound for black holes, in accordance with a strong form of the Weak Gravity Conjecture (WGC).

Schwinger Effect of Extremal Reissner-Nordström Black Holes
Puxin Lin, Gary Shiu
============================================================
§ INTRODUCTION
The discussion of the effective action in Quantum Electrodynamics dates back to the work of Euler-Heisenberg <cit.> and Weisskopf <cit.>, in which the polarization of the vacuum due to the creation of virtual charged particles from the electromagnetic field is computed. Later on, Schwinger derived the vacuum-persistence amplitude in a constant electric field using the proper time formulation <cit.> and interpreted the imaginary part of the effective action as the production rate of real charged particles[See <cit.> also regarding this interpretation.], which is now referred to as the Schwinger effect. These seminal works sparked an interest in experimental proposals to observe the effect (see <cit.> for a review) and, moreover, opened up a field of theoretical studies of non-perturbative nucleation effects in Quantum Field Theory (QFT).
The Schwinger effect in constant curvature spaces has been derived with various methods in different contexts, for instance with the worldline approach <cit.>, in hyperbolic space <cit.>, AdS space <cit.> and dS space <cit.>. It has been discussed in (near-extremal) Reissner-Nordström (RN) black hole spacetime with different asymptotic structures <cit.>. Despite the existing work, a full account of the spatial dependence of the Schwinger effect has not been presented and therefore the question of the scale of the region relevant for the decay of RN black holes remains to be understood. In this work, we use the worldline instanton approach to compute the spatial profile of the Schwinger production rate of an extremal RN black hole and identify this scale to be the Compton wavelength of the charged particle.
The occurrence of the Compton scale is suggestive of the relation between the Schwinger effect and black hole superradiance, which was originally proposed as a mechanism to extract energy from a Kerr black hole through scattering with an incoming wave. Black hole superradiance involving bosonic fields[Superradiant amplification of incoming waves by black holes was shown to be absent for fermionic particles, see <cit.> for instance. This is often considered a consequence of Pauli exclusion. For what is relevant to this paper, we note that the existence of stimulated emission of fermions is in no contradiction with the absence of superradiance - the latter simply implies that the absorption rate is higher than the stimulated emission rate.] has been studied in different settings, see <cit.> and references therein for a review of the development of the field. The discussion has been extended to the superradiant effect of charged black holes, where extraction of charge and energy of the black hole can occur when the superradiance condition ω<qΦ_H is met, where ω,q are the energy and charge of the incoming particle and Φ_H is the electric potential at the black hole horizon. The superradiance effect has mostly been studied in a first-quantized context, which is insufficient to directly capture spontaneous processes like the Schwinger effect. However, the fact that superradiance has a typical extent of, and a rate controlled by, the Compton wavelength suggests a connection between the Schwinger effect and charged superradiance. They are two sides of the same coin - the former is a spontaneous charged radiation process and the latter is a stimulated scattering process associated with RN black holes. Such a connection also follows from the long-standing principle of detailed balance, from which the first relation between spontaneous and stimulated emission of a simple quantum system was obtained by Einstein. Soon after the realization that black holes emit Hawking radiation <cit.>, an early application of this principle to black holes was presented by Wald <cit.>, where it was shown that the stimulated emission of neutral particles implies spontaneous emission. The same logic can be generalized to charged black holes, which indicates that stimulated charged emission of black holes is always associated with a spontaneous emission process. We identify the former as charged superradiance and the latter as the Schwinger effect.
The study of the decay of charged black holes has deep relations to the Weak Gravity Conjecture (WGC) originally proposed in <cit.>. (See <cit.> for a general review.) In its simplest form, the conjecture requires the existence of at least one superextremal particle with charge-to-mass ratio larger than the corresponding black hole extremality bound. The superextremal particles can lead to the decay of non-supersymmetric extremal black holes, whose discharge is constrained to prevent exposure of naked singularities. The WGC can take different forms in spacetimes with different asymptotic structures. One interesting direction is to obtain the form of the WGC bound in dS space. Since the decay of extremal black holes is linked to the WGC, understanding the Schwinger production can be of great benefit for identifying the WGC bound in different settings. For instance, the Schwinger rate we compute for an extremal black hole in asymptotically flat space registers the information of the extremality bound - the rate exhibits a switch-off behavior when the charged particle tends to extremality from above. This is in accordance with a strong form of the WGC in flat space and leads to the speculation that the Schwinger rate might be indicative of the WGC bound in more general cases. While in this paper we do not present concrete results on the Schwinger effect of RN black holes in dS space, we note the possibility of generalizing the worldline approach adopted here to the study of dS black holes. We leave this interesting question to future work.
The paper is organized as follows: In section <ref>, we review the worldline path integral formalism used to compute the Schwinger pair production rate. We devote section <ref> to the computation of the instanton paths, instanton action and one-loop determinant in the extremal RN black hole spacetime, obtaining the local Schwinger production rate in the exterior of the black hole. In section <ref>, we summarize our findings and conclude that the radial profile of the Schwinger effect is characterized by the Compton wavelength of the particle and that the production is switched off when the particle's charge-to-mass ratio tends to the extremality bound for charged black holes in flat space. We further discuss the connection between the Schwinger effect and black hole superradiance and the implications of Schwinger effect to bounds on the particle spectrum.
§ EFFECTIVE ACTION AND WORLDLINE INSTANTONS
In this section, we review the formalism for computing the Schwinger effect. The creation rate for charged particles in an electric field was first derived in <cit.>, where the production rate Γ is expressed in terms of a regularized vacuum amplitude,
Γ=1-|⟨0_A|0_A⟩|^2/|⟨0|0⟩|^2.
The quotient |⟨0_A|0_A⟩|^2/|⟨0|0⟩|^2 concerns two vacuum states |0_A⟩, |0⟩ defined with and without the gauge field[The choice of time-coordinate in curved space implicitly defines these vacua, which are the surviving projection in the past and future infinity.] and it has the meaning of vacuum persistence. In other words, Γ is the probability that the system is not in the vacuum state due to the gauge field. The vacuum-to-vacuum amplitudes are written as path integrals
⟨0_A|0_A⟩/⟨0|0⟩=∫D{ψ} e^iS[{ψ},A]/∫D{ψ} e^iS[{ψ}],
where {ψ} is used to generally represent all the fields that are integrated out and S is the action associated with these fields. Eq.(<ref>) defines the effective action e^iS_eff[A]≡⟨0_A|0_A⟩/⟨0|0⟩ whose imaginary part is related to the Schwinger production rate by
Γ=2Im(S_eff).
The specific field content that we consider in this paper is a complex scalar field charged under a U(1), denoted as ϕ(x). The scalar field action is assumed to be quadratic in ϕ, given by
S=∫√(-g) d^4xϕ^* H_A ϕ,
where √(-g)d^4x is the curved space volume form, H_A=D_μ D^μ -m^2 is a Hermitian operator and D_μ=∇_μ+ieA_μ is the covariant derivative in curved space containing also the gauge connection. The gravitational field and gauge field are taken to be fixed, so the associated Einstein-Hilbert and Maxwell action terms contribute only a constant phase to the vacuum amplitude that does not affect the production rate. For this reason, the two action terms will be omitted in discussion of the Schwinger effect. Further, the form of Eq.(<ref>) and the gauge field being non-dynamical means that the effective action is generated by all the one-loop diagrams with arbitrary external gauge field legs and scalar field propagating in the loop.
The path integral over the complex scalar field is Gaussian due to the quandratic nature of its action, which can be formally expressed as a functional determinant
∫Dϕ^*Dϕ e^-S_E=(det H_A)^-1,
where we have chosen to proceed in Euclidean time. Introducing the auxiliary proper time parameter and applying the log-determinant identity, the abstract determinant is put into the form of a proper time integral of the kernel ⟨x|e^sH_A|x⟩,
ln det H_A=tr ln H_A=tr∫ds/s e^sH_A=∫√(-g) d^4x ∫ds/s⟨x|e^sH_A|x⟩.
A common approach to evaluate the above integral is to perform the trace in the eigenvalue basis of the operatorH_A, which involves a sum over the eigenstates
∫ d^4x∫_0^∞ds/s∫ dλ |ϕ_λ(x)|^2 e^-sλ.
Here λ and ϕ_λ are the eigenvalue and eigenfunction of the operator H_A respectively. This is referred to as the heat kernel method <cit.> which has been applied to computations of effective action in hyperbolic space <cit.> and black hole entropy corrections <cit.>.
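As an illustrative aside (not part of the original derivation), the proper-time representation of the logarithm used above rests on an elementary Frullani-type identity, up to a λ-independent reference term that drops out of physical ratios. The short Python sketch below checks it numerically; the values of a and b are arbitrary sample eigenvalues.
import numpy as np
from scipy.integrate import quad

# Frullani-type proper-time identity: int_0^inf ds/s (e^{-a s} - e^{-b s}) = ln(b/a).
# Summing this identity over the eigenvalues of an operator is what turns ln det
# into a proper-time integral of the heat kernel (up to a reference term).
a, b = 2.0, 5.0
val, err = quad(lambda s: (np.exp(-a * s) - np.exp(-b * s)) / s, 0.0, np.inf)
print(f"integral = {val:.6f}, ln(b/a) = {np.log(b / a):.6f}")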
There are manifest divergences in Eq.(<ref>) with different physics origins. The UV divergence at small s is related to the renormalization of the couplings in the theory. The operator H_A can admit zero modes with λ=0, leading to an IR divergence at large s. The zero modes present in the derivation of the Schwinger rate in flat space and AdS_2 arise from the spacetime symmetry - all points in constant curvature space being equivalent. They appear to be pure gauge and the treatment is to extract and replace them by collective coordinates, resulting in a volume factor upon integration. The Schwinger effect of particle creation is closely related to a different type of IR divergence in the heat kernel expression - negative modes. The coupling of the scalar field to the background gauge field shifts the spectrum of the operator H_A and generates negative modes that would have been absent. The negative mode indicates an instability of the system under influence of the background field. For the Schwinger effect, the electric field triggers the transition of the original vacuum state to a state with non-zero particle occupation number. Unlike the pure gauge modes, negative modes contain important physical information of the decay process and thus require special care when being dealt with. It is a goal of this paper to accomplish this in the setting of extremal RN black holes using the worldline path integral formalism, which we review in the following paragraphs.
A central idea of the worldline formalism is to rewrite the kernel ⟨x|e^-sH_A|y⟩ as a quantum mechanical position space integral over all paths ξ(τ) connecting x and y. The formalism was inspired by one-loop vacuum amplitude computations in string theory (see <cit.> for an overview) and was developed into an alternative method <cit.> to Feynman diagrams for computations of effective actions in QFT. It had seen use in the derivation of the Schwinger effect in flat space <cit.> and was discussed in the context of curved space and higher spins <cit.>. Of course, the worldline method is based on an earlier idea of path integrals <cit.>, whose generalization to curved space can be found in <cit.>.
While the perturbative expansion of the worldline action in curved space and its renomalization have been studied <cit.>, an application of the formalism to a black hole spacetime has not been put forth. We achieve the goal of computing the non-perturbative[Here, we mean a result that is non-perturbative in the coupling constant e.] Schwinger rate in the extremal RN spacetime by applying the stationary point approximation to the worldline path integral.
Using the worldline formalism, we express the kernel in Eq.(<ref>) as[This form holds for spacetimes with vanishing Ricci scalar.]
⟨x|e^H_As|x⟩=∫_ξ(0)=x^ξ(s)=xDξ e^-[∫_0^s dτ 1/4(Dξ/dτ)^2+e∫ A_μ dξ^μ+sm^2].
S_wl≡∫_0^s dτ 1/4(Dξ/dτ)^2+e∫ A_μ dξ^μ+sm^2 is interpreted as the worldline action[In some papers, the gauge field related action term has an i in front, but the Euclidean gauge field is imaginary. We choose to absorb the imaginary unit into the gauge field. The different conventions lead to the same equations of motion.] and ξ as the worldline parameterized by its proper time τ. The auxiliary parameter s now has the meaning of the total proper time of the worldline. It appears to be more convenient to uniformly parametrize the total proper time of the worldlines as 1, motivating the following rescaling s→s/m^2 and τ→(s/m^2)τ. Eq.(<ref>) becomes
∫√(g) d^4x ∫ds/s∫_ξ(0)=x^ξ(1)=xDξ e^-[s+m^2/4s∫_0^1dτ (Dξ/dτ)^2+e∫ dτ A_μDξ^μ/dτ].
The advantage of rewriting the kernel in this form is that the UV divergence in the small s regime corresponding to high momentum paths is manifestly regularized by the kinetic term of the worldline action as seen from Eq.(<ref>).
Proceeding from here, one has different choices in what order the integrals in s and ξ are performed. In <cit.>, the proper time integral was first done using a stationary point approximation, which generates a non-local term for the paths ξ in the worldline action. The non-local term hinders the further evaluation of the one-loop determinant of the path fluctuations. Another route, taken in <cit.>, was to first consider the integration over the paths. This will generate an s-dependent term which would then be combined with the remainder in Eq.(<ref>) to be integrated. This choice is less applicable when the integration over the paths does not yield a fully analytical expression. We will instead perform the stationary point approximation to both the s and ξ integrals simultaneously, identifying the stationary points in the space of (s,ξ). The final result should be independent of this choice because the order of evaluation only alters the basis in (s,ξ), not affecting the special points of the worldline action and the determinant of its second variation.
The worldline action is expanded around the stationary points (s,ξ) up to second order, see Appendix <ref>. The stationary points are determined by imposing vanishing first variations,
s=m/2√(∫_0^1dτ g_μνξ̇^μξ̇^ν)
(m^2/2sg_μνD^2/dτ^2-eF_μνD/dτ)ξ^ν=0
.
To simplify notation, the overbar will be omitted and the restriction to the stationary points will be assumed unless otherwise stated. The second order expansion in Appendix <ref> can be expressed compactly as
S^(2)=1/2(δ s H_ssδ s +δ s H_sξδξ +δξH_ξ sδ s +δξH_ξξδξ),
where H is the Hessian operator of the worldline action. Note that an integration over τ is implicit in the definition of how H acts on the path fluctuations, which is explicitly given in Appendix <ref>. To facilitate the computation of the one-loop determinant of the second variation, we diagonalize the Hessian with respect to the path fluctuations
S^(2)=1/2( δ s H̃_ssδ s+δξ'H_ξξδξ')
where the diagonalized element, denoted by H̃_ss, and the shifted path fluctuation is defined as
H̃_ss=H_ss-H_sξH_ξξ^-1H_ξ s
δξ'=δξ +H_ξξ^-1H_ξ sδ s
.
More explicitly, H̃_ss is given by
∫_0^1dτm^2/4sg_μνξ^μξ^ν-∫_0^1 dτ∫_0^1dτ'm^2/2s^2g_μαD^2/dτ^2ξ^α(τ)G^μν(τ,τ')m^2/2s^2g_νβD^2/dτ'^2ξ^β(τ').
In Eq.(<ref>), an operator inverse is involved. It is the matrix-valued Green's function associated with the operator H_ξξ. To simplify the notation in section <ref>, we denote Λ≡H_ξξ and G≡H_ξξ^-1. The Green's function satisfies the following differential equation
Λ G =I δ(τ-τ'),
with I being the identity matrix with the same rank as Λ. The Green's function is defined together with the boundary conditions. From Eq.(<ref>), the requirement that the paths begin and end at the same point suggests the boundary conditions on the path fluctuations be Dirichlet, δξ(0)=δξ(1)=0. Therefore, the Green's function by definition satisfies the same boundary conditions in both variables τ, τ'. This ensures that the shifted path fluctuation δξ' also satisfies the Dirichlet condition, δξ'(0)=δξ'(1)=0. We give the construction of G in Appendix <ref>.
In the next few paragraphs, we analyze the stationary conditions for the stationary points. We observe that Eq.(<ref>) is the geodesic equation of a charged particle coupled to an electric field in curved space with a rescaled mass parameter m^2→m^2/4s, if we interpret ξ̇^μ≡D/dτξ^μ as the particle velocity along the worldline. For this reason, following <cit.>, we will call these solutions worldline instantons, or instanton paths. The geodesic equation is a second order differential equation, from which a set of two first order equations, Eq.(<ref>) and Eq.(<ref>), can be obtained as first integrals. Contracting the second equation in (<ref>) with the particle velocity and making use of the anti-symmetric property of F_μν, one obtains D/dτ(ξ̇)^2=0, so
g_μνξ̇^μξ̇^ν=a^2=const,
and s=ma/2 from Eq.(<ref>).
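The conservation of the worldline speed is easy to check numerically in a toy setting. The following sketch (ours, not from the paper; flat Euclidean 2D metric and a constant antisymmetric field strength, with arbitrary sample values for s, e and m) integrates the analogous equation ξ̈ = (2se/m^2) F ξ̇ and confirms that g_μν ξ̇^μ ξ̇^ν stays constant because the "Lorentz force" does no work.
import numpy as np
from scipy.integrate import solve_ivp

# Toy check of g_mn xidot^m xidot^n = const along (m^2/2s) xiddot = e F xidot,
# here in flat Euclidean 2D with a constant antisymmetric F.
coupling = 2.0 * 0.7 * 1.3 / 1.0 ** 2        # 2 s e / m^2, arbitrary sample values
F = np.array([[0.0, 1.0], [-1.0, 0.0]])      # constant antisymmetric field strength

def rhs(tau, y):
    xi, xidot = y[:2], y[2:]
    return np.concatenate([xidot, coupling * F @ xidot])

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 1.0, 0.5], rtol=1e-10, atol=1e-12)
speeds = np.sum(sol.y[2:] ** 2, axis=0)
print("max relative drift of |xidot|^2:", np.max(np.abs(speeds / speeds[0] - 1.0)))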
Choosing a static gauge A=A_0(r) dt, the geodesic equations can be written as
m/aD^2t/dτ^2=eF^0_ 1Dr/dτ
m/aD^2r/dτ^2=-eF^1_ 0Dt/dτ,
We should note that several assumptions were made when obtaining the above equations: we are considering a four dimensional spherically symmetric spacetime with coordinates ξ^μ=(t,r,θ,φ). We also imposed the condition that the particles have no angular motion. The latter assumption is reasonable since we are identifying the stationary points that dominate the worldline path integral and an addition of angular momentum will increase the worldline action, suppressing the contribution to the path integral. The static gauge choice allows for a first integral of the first line in Eq.(<ref>), which yields a conserved quantity ω that can be understood as the energy of the particle
ω=m̃g_00ṫ+eA_0.
Combining the above expression with Eq.(<ref>), the radial coordinate separates from the time coordinate and we obtain
ṙ=± a√(g^-1_11[1-(eA_0-ω)^2/m^2g_00]).
Eq.(<ref>) and Eq.(<ref>) will be the key equations used to compute the instanton paths.
Finally, we define the local effective action as the integrand of Eq.(<ref>) inside the spacetime integral,
w(x) =∫_ξ(0)=x^ξ(1)=xDξ∫_0^∞ds/s e^-S[s,ξ]
= A(ξ) e^-S^(0)[s,ξ]
where ξ is the stationary path starting and ending at ξ=x, S^(0) is the corresponding stationary worldline action and A(ξ)=s^-1(det H)^-1/2=s^-1(det H̃_ss det Λ)^-1/2. The number of negative modes of the operator Λ determines the phase of the local effective action. The local Schwinger rate corresponds to the imaginary part of the effective action, and with exactly one negative mode, it can be expressed as
Γ(x)=Im[w(x)]=|A(ξ)| e^-S^(0)[s,ξ].
For the case of Schwinger effect in constant curvature space, factoring out the spacetime integral and defining the volume rate is a necessary step towards a physical and finite answer of the particle production rate. A local definition of the production rate extends beyond the homogeneous case since the electric field induced particle creation is clearly a local effect - an observer can place a detector at a specific location in the electric field and expect charged particles to be detected at some rate. We would like to understand the spatial profile of the particle production rate in the extremal RN black hole spacetime.
§ SCHWINGER PRODUCTION RATE OF CHARGED BLACK HOLES
In this section, we compute the Schwinger production rate using the worldline formalism reviewed in section <ref>. The stationary points are first obtained, from which we compute the stationary worldline action and the associated prefactor. The latter is achieved by finding the determinant of the second variation operator evaluated at the stationary points.
Before we dive into the computation of the Schwinger rate, it is beneficial to first identify the physically relevant scenarios for the production process. By that, we refer to the particle spectrum that enables
the decay of extremal black holes. While the analysis in this paper is done in a fixed background without back-reactions on the geometry, the consequence of the back-reactions should not be overlooked. In the allowed parameter space (Q,M) for black holes with charge Q and mass M in flat space, extremal RN black holes lie at the boundary of the valid space Q≤ Mℓ_P where no naked singularity exists. In asymptotically flat space, the charge and energy of the gravitating system is well defined at the asymptotic boundary and the charge and mass of a produced particle is subtracted from the black hole when it reaches the asymptotic region. The black hole extremality bound therefore only permits particles with q>m to escape to infinity and discharge the black hole. This can be shown more concretely by considering the reversed process of the thought experiment in <cit.>. However, a classical picture suffices to demonstrate this point. Away from an RN black hole, the gravitational and electric potential is well described by the inverse power law, generating for a particle with charge q and mass m a total potential of the form
V(r)∼(qQ-mMℓ_P^2)/r.
For superextremal particles with e/(mℓ_P)>Q/(Mℓ_P)=1, the potential is repulsive and the particle will be accelerated by the electric field after nucleating with near-zero kinetic energy around the black hole, and will reach infinity, reducing the black hole to a slightly subextremal one. On the contrary, if the particle has e/(mℓ_P)<1, the potential is attractive and the particle will eventually be re-absorbed by the black hole. In the latter situation, an asymptotic observer will not see a flux of particles coming from the black hole. While this does not rule out the possibility that subextremal particles could locally or temporarily exist around the black hole, for the purpose of computing the Schwinger effect, we will only be interested in the production rate for particles with z=e/mℓ_P>1 that lead to the decay of the black hole.
§.§ Instanton solutions in Extremal RN black hole spacetime
We consider an extremal RN black hole background with metric
ds^2=fdt^2+dr^2/f+r^2(dθ^2+sin^2θ dφ^2),
and gauge field
A=Q/rdt,
where f=(1-Qℓ_P/r)^2. Throughout this paper, we use the Planck length ℓ_P=1/√(G_N) as the only dimensionful unit[Further, the 4π factor and vacuum permittivity are absorbed into the definition of the charge and coupling constant. The complex scalar particle we consider is assumed to carry one charge q=1.].
The radial trajectory is governed by Eq.(<ref>) particularized on the RN spacetime
ṙ=± a √((1-Qℓ_P/r)^2-e^2/m^2(Q/r-ω/e)^2).
To put the above equation into a more convenient form, we make the change of the radial coordinate ρ=Qℓ_P/r, denote the charge-to-mass ratio of the particle as z=e/mℓ_P and define the rescaled energy parameter ρ_0=ωℓ_P/e. The new radial coordinate maps the black hole exterior to ρ∈(0,1]. We further denote the function under the square root of Eq.(<ref>) as h(ρ)=(1-ρ)^2-z^2(ρ-ρ_0)^2, which corresponds to an effective potential in the radial direction. We can now rewrite Eq.(<ref>) as
ρ̇=∓(a/Qℓ_P)ρ^2√(h(ρ)).
The particles that we will be concerned with, as explained at the beginning of this section, are superextremal with z>1. In Euclidean signature, the coordinates are space-like, and consequently the gauge field is magnetic-like. The trajectories of the charged particles are spirals, whose instantaneous radii depend on the local field strength. The paths that contribute to the effective action have the same beginning and ending points according to Eq.(<ref>), which translates to the condition that the radial and time components of the paths admit turn-around points. This gives rise to a constraint on the instanton parameter ρ_0∈ (0,1). The full trajectory is schematically shown in Fig.<ref> and the contributing fraction of the path is the self-intersected loop.
By direct integration, the solution to Eq.(<ref>) is found to be
√((ρ_1-ρ)(ρ-ρ_2))/ρ_1ρ_2ρ+ρ_1+ρ_2/(ρ_1ρ_2)^3/2arctan√(ρ_1(ρ_2-ρ)/ρ_2(ρ-ρ_1))=a√(z^2-1)/Qℓ_P(τ-τ_0),
where τ_0 is an integration constant and ρ_1,2=(z ρ_0± 1)/(z± 1)∈ (0,1] are the two zeros of h(ρ). Eq.(<ref>) implicitly defines the function ρ(τ). To identify the intersection point, we solve for t(τ), which is governed by Eq.(<ref>). In the redefined coordinates and parameters, the equation appears as
ṫ=za(ρ_0-ρ)/f(ρ).
It is convenient to switch from τ to the variable ρ using the identity ṫ=ρ̇ dt/dρ together with Eq.(<ref>), leading to
dt/dρ=∓ Qℓ_P zρ_0-ρ/ρ^2f√(h(ρ)).
Denoting the endpoints of the path by ρ(0)=ρ(1)=ρ_×, t(0)=t(1)=0, we determine a and ρ_×, which specifies the instanton paths, by solving the following equations:
ρ(τ=0)=ρ(τ=1)=ρ_×
ρ(τ=1/2)=ρ_2
t(τ=0)=t(τ=1/2)=0
.
Because the intersection ρ_× is a root of a transcendental equation, in general the paths will have to be computed numerically.
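The following Python sketch (ours, not the authors' code) illustrates one way to carry out this numerical determination, in units where Qℓ_P = 1 and for representative, arbitrarily chosen values z = 2 and ρ_0 = 0.8. The substitution ρ = ρ_1 + (ρ_2−ρ_1)sin^2φ used here to tame the integrable endpoint singularities of 1/√h is our choice and is assumed, not taken from the paper.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Determine the instanton parameters (rho_x, a): rho_x from the time-closure
# condition t(0) = t(1/2) = 0, then a from the proper-time condition rho(1/2) = rho_2.
z, rho0 = 2.0, 0.8
rho1, rho2 = (z * rho0 - 1.0) / (z - 1.0), (z * rho0 + 1.0) / (z + 1.0)  # zeros of h

def rho_of_phi(phi):
    # rho = rho1 + (rho2 - rho1) sin^2(phi), so d(rho)/sqrt(h) = 2 dphi / sqrt(z^2 - 1)
    return rho1 + (rho2 - rho1) * np.sin(phi) ** 2

def turning_integral(weight, rho_x):
    # integral_{rho_x}^{rho_2} weight(rho) d(rho) / sqrt(h(rho))
    phi_x = np.arcsin(np.sqrt((rho_x - rho1) / (rho2 - rho1)))
    return quad(lambda p: 2.0 * weight(rho_of_phi(p)) / np.sqrt(z ** 2 - 1.0),
                phi_x, np.pi / 2.0)[0]

# dt/drho is proportional to (rho0 - rho)/(rho^2 (1-rho)^2 sqrt(h)); closure means
# its integral from rho_x to rho_2 vanishes, which brackets rho_x in (rho_1, rho_0)
closure = lambda rx: turning_integral(lambda r: (rho0 - r) / (r ** 2 * (1.0 - r) ** 2), rx)
rho_x = brentq(closure, rho1 + 1e-9, rho0 - 1e-9)

# rho-dot equation: 1/2 = (1/a) * integral d(rho) / (rho^2 sqrt(h))   (with Q l_P = 1)
a = 2.0 * turning_integral(lambda r: 1.0 / r ** 2, rho_x)
print(f"rho_x = {rho_x:.5f}   (rho_1 = {rho1:.5f}, rho_0 = {rho0}, rho_2 = {rho2:.5f})")
print(f"a / (Q l_P) = {a:.5f}")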
§.§ The instanton action
We next compute the worldline instanton action. This is given by the sum of a kinetic term ma and a gauge field associated term
S_A =e∫_0^1 dτA_0(ω-eA_0)/m̃g_00=e^2a/mℓ_P^2∮_γdρ/ρ̇ ρ(ρ_0-ρ)/(1-ρ)^2
=-Qmℓ_Pz^2/√(z^2-1)× 2I,
where
I=∫_ρ_×^ρ_2F(ρ)dρ≡∫_ρ_×^ρ_2(ρ_0-ρ)dρ/ρ(1-ρ)^2√((ρ_2-ρ)(ρ-ρ_1)).
The same technique used to obtain Eq.(<ref>) is applied here to switch the integration variable to ρ. Generally outside the black hole, the lower limit ρ_× is the solution to a transcendental equation and has to be determined numerically. Making use of the numerical solutions obtained in section <ref>, we compute the instanton action and obtain the full profile of the exponential term of the Schwinger rate outside the extremal RN black hole. The result is presented in Fig.<ref>.
To check consistency with existing results in AdS_2, we seek an approximation of the action integral near the horizon. We first note that ρ_0→ 1 translates to the near horizon limit since the two turn-around points ρ_1,2=(z ρ_0± 1)/(z± 1)→ 1 both tend to the horizon in this limit. We then observe that (ρ_× - ρ_1)/(ρ_2-ρ_1)→ 0 as the instanton path approaches the horizon. This means that in the near horizon region, Eq.(<ref>) is well approximated by the same integral but with the lower limit ρ_× replaced by ρ_1. In fact, as the horizon is approached, the instanton paths, after proper coordinate transformation, tend to those obtained in AdS_2, which are closed trajectories discussed in Appendix <ref>. Setting ρ_×→ρ_1, Eq.(<ref>) can then be analytically integrated using the residue theorem. The contour is chosen to wrap around infinity and the branch cut between ρ=ρ_1,2, shown in Fig.<ref>. The residues of F(ρ) at poles ρ=0,1 are
Res[F(ρ=0)]=iρ_0/√(ρ_1ρ_2)
Res[F(ρ=1)]=iρ_0/2(2ρ_1ρ_2-3ρ_1-3ρ_2+4)/[(1-ρ_1)(1-ρ_2)]^3/2-i/2(2-ρ_1-ρ_2)/[(1-ρ_1)(1-ρ_2)]^3/2.
Only the contour integrals along C_1,3 have a non-zero contribution and each turns out to contribute I, therefore
2I=2π{ρ_0/√(ρ_1ρ_2)+[ρ_0(2ρ_1ρ_2-3ρ_1-3ρ_2+4)-(2-ρ_1-ρ_2)]/2[(1-ρ_1)(1-ρ_2)]^3/2}.
Substituting the turn-around points with the instanton parameter, ρ_1,2=(z ρ_0± 1)/(z± 1), we obtain
2I=2π√(z^2-1)/z(-1+z ρ_0/√(z^2ρ_0^2-1)),
and the total action reads
S_inst=ma+S_A=2π Qmℓ_P[(z^2ρ_0-1)/(z^2ρ_0^2-1)^3/2+z-z^2ρ_0/√(z^2ρ_0^2-1)].
From Eq.(<ref>), it is easy to recover the AdS_2 instanton action by taking the limit ρ_0→ 1, and one finds that
lim_ρ_0→ 1 S_inst=2π Qmℓ_P(z-√(z^2-1)),
in agreement with <cit.> and the computations in Appendix <ref>.
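The algebra behind the last two steps is easy to spot-check with a computer algebra system. The sketch below (ours, not from the paper) compares the residue-sum form of 2I with its closed form at a few admissible parameter points, and verifies the near-horizon limit of the instanton action; the particular test values of z and ρ_0 are arbitrary choices satisfying z>1 and zρ_0>1.
import sympy as sp

z, r0 = sp.symbols('z rho0', positive=True)
rho1 = (z*r0 - 1)/(z - 1)
rho2 = (z*r0 + 1)/(z + 1)

# 2I written in terms of the turning points (residue sum) ...
two_I_roots = 2*sp.pi*(r0/sp.sqrt(rho1*rho2)
    + (r0*(2*rho1*rho2 - 3*rho1 - 3*rho2 + 4) - (2 - rho1 - rho2))
      / (2*((1 - rho1)*(1 - rho2))**sp.Rational(3, 2)))
# ... and its closed form in (z, rho0)
two_I_closed = 2*sp.pi*sp.sqrt(z**2 - 1)/z*(-1 + z*r0/sp.sqrt(z**2*r0**2 - 1))

# spot-check the equality numerically (fully symbolic simplification is awkward
# because of the branch conditions on the square roots)
for zv, rv in [(2, sp.Rational(4, 5)), (3, sp.Rational(1, 2)), (sp.Rational(3, 2), sp.Rational(9, 10))]:
    print(float((two_I_roots - two_I_closed).subs({z: zv, r0: rv})))   # ~ 0

# near-horizon limit of the instanton action (in units of 2*pi*Q*m*l_P)
S = (z**2*r0 - 1)/(z**2*r0**2 - 1)**sp.Rational(3, 2) + z - z**2*r0/sp.sqrt(z**2*r0**2 - 1)
print(sp.simplify(S.subs(r0, 1) - (z - sp.sqrt(z**2 - 1))))            # expect 0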
Using the above approximate form, near the horizon the instanton action can be expressed as a dimensionless ratio of the black hole size and the particle's Compton wavelength λ_c=m^-1,
S∼Qℓ_P/c(z)λ_c,
where c(z) is a z-dependent parameter. When z is large, c(z)∼ z and when z is of order unity, c(z)∼ O(1).
The stationary point approximation applied to obtain this result requires S≫ 1. There are two independent parameters that one can dial to explore the boundary where the action is of order unity S∼ 1 - this is the point where the production rate Γ∝ e^-S is no longer suppressed and the stationary point approximation starts to break down. From Eq.(<ref>) we can see that the particle production process will tend to be unsuppressed when (a) the particle becomes very light (m→ 0 with z or e kept fixed), or (b) the charge-to-mass ratio is very high (z→∞ with m kept fixed).
To put in some context, consider electrons whose z∼ 10^22 and λ_c∼ 10^-12m, and a solar mass charged black hole of radius r_S∼ 10^3m. The action is roughly S∼ 10^-7, indicating no suppression of the Schwinger process and thus a short lifetime of an extremally charged black hole due to rapid discharge. This shows why we do not expect to see black holes carrying high charges in nature. If we consider a particle charged under a different (hidden) U(1) sector with z∼ 1, then the extremal black hole carrying the hidden U(1) charge can potentially have a longer lifetime, if Qmℓ_P≫ 1.
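For concreteness, the back-of-the-envelope estimate quoted above can be reproduced with the round numbers given in the text; the sketch below is only an order-of-magnitude illustration (our own, using S ~ Qℓ_P/(c(z)λ_c) with c(z) ~ z for z ≫ 1), not a precision calculation.
import numpy as np

# Solar-mass extremal hole discharging into electrons, with the text's round numbers.
r_plus   = 1.5e3    # horizon scale Q*l_P of a solar-mass extremal hole, in metres
lambda_c = 1e-12    # electron Compton wavelength, in metres (rounded)
z        = 1e22     # electron charge-to-mass ratio in Planck units (rounded)
S = r_plus / (z * lambda_c)
print(f"S ~ {S:.0e}  ->  exp(-S) ~ 1, so the discharge is unsuppressed")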
The calculation we present concerns an electrically charged black hole, but the analysis and results hold as well if one considers magnetic charge emission of magnetic black holes. For instance, if we consider the 't Hooft-Polyakov magnetic monopole <cit.>, the charge and mass of the monopole are related to the electric coupling and the cut-off scale Λ_c of the theory by
q_mag∼1/e, m_mag∼Λ_c/e^2.
The charge-to-mass ratio of the magnetic monopole is estimated as z_mag∼e/Λ_c ℓ_P and the Compton length will be λ_c,mag∼e^2/Λ_c. If we take the cut-off scale to be the GUT scale, then for a solar sized black hole, S∼ 10^33 and the Schwinger effect for magnetic charge production will be highly suppressed. For a lower cut-off scale such as the electroweak scale, magnetic charge production of sub-solar sized black holes with mass M∼ 10^-3 M_⊙ can potentially be cosmologically relevant.
§.§ The one-loop determinant A
In this section, we analyze the one-loop determinant. An imaginary part of the effective action leads to a non-vanishing probability of particle creation, meaning that a determinant of the path fluctuations is essential to the Schwinger effect. As reviewed in section <ref>, the determinant of the second variation operator factorizes as
det H=det H̃_ss det Λ,
where Λ≡H_ξξ is given by Eq.(<ref>). Formally, the determinant of Λ is an infinite product of its eigenvalues determined by
Λδξ⃗=λδξ⃗,
and supplied with the boundary conditions
δξ⃗(0)=δξ⃗(1)=0.
The infinite product is inherently divergent and has to be regularized. The Gelfand-Yaglom method <cit.> provides just that. The theorem states that
det Λ/det Φ(1)=det Λ_0/det Φ_0(1),
where Λ_0 is a reference operator, Φ(τ) is the d× d matrix formed by a set of linearly independent fundamental solutions {u⃗_i} to the following differential equation and boundary conditions
Λu⃗_i=0
u⃗_i(0)=0
u⃗_i'(0)=w⃗_i
, i=1,2,⋯, d
where d is the rank of Λ and {w⃗_i} is an arbitrary set of d linearly independent vectors[The construction ensures the regularized determinant is independent of this choice.].
Φ_0(τ) is defined analogously with respect to the operator Λ_0. The RHS of Eq.(<ref>) is absorbed into the definition of the integration measure <cit.> and thus det Λ∝det Φ(1).
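To illustrate how the Gelfand-Yaglom rule works in practice, here is a minimal scalar example (a sketch of ours, not tied to the RN fluctuation operator): for Λ = -d^2/dτ^2 + w^2 on [0,1] with Dirichlet conditions and reference operator Λ_0 = -d^2/dτ^2, the regularized determinant ratio is Φ(1)/Φ_0(1) = sinh(w)/w, which a direct numerical integration of Λu = 0 with u(0)=0, u'(0)=1 reproduces.
import numpy as np
from scipy.integrate import solve_ivp

# Gelfand-Yaglom for a single massive mode: det(L)/det(L0) = Phi(1)/Phi0(1) = sinh(w)/w
w = 3.0
sol = solve_ivp(lambda t, y: [y[1], w**2 * y[0]],   # u'' = w^2 u, i.e. L u = 0
                (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
phi_1 = sol.y[0, -1]      # u(1) with u(0)=0, u'(0)=1
phi0_1 = 1.0              # reference operator: u = tau, so u(1) = 1
print(phi_1 / phi0_1, np.sinh(w) / w)   # the two numbers should agree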
The determinants for different z and instanton parameter ρ_0 are presented in Fig.(<ref>), where ρ_0 is translated into the spacetime location ρ_× where the instanton contributes to the effective action.
§.§ The Schwinger production rate
The prefactor A and exponent e^-S_inst together give the local Schwinger rate and its dependence in the radial coordinate.
Close to the horizon where the Schwinger effect is dominant, the Schwinger rate is described by the approximate form of
Γ(r)≈C√(z^2-1)/Q^2ℓ_P^2√(Δr̃)e^-(S_AdS+BΔr̃),
where Δr̃=(r-r_+)/Qℓ_P, S_AdS=2π Qmℓ_P(z-√(z^2-1)) and B=2π Qmℓ_P z(z-1)/(z^2-1)^3/2.
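From this approximate form one can read off the radial e-folding scale of the rate: the exponent grows by one when B Δr̃ = 1, i.e. Δr = (z^2-1)^{3/2}/(2π z(z-1)) · λ_c with λ_c = 1/m. The short sketch below (an illustration of ours, not the paper's numerics) evaluates this Compton-scale fall-off for a few representative charge-to-mass ratios.
import numpy as np

# e-folding distance of the near-horizon Schwinger rate, in units of lambda_c = 1/m
for z in [1.1, 2.0, 10.0, 100.0]:
    delta_r_over_lambda_c = (z**2 - 1.0)**1.5 / (2.0 * np.pi * z * (z - 1.0))
    print(f"z = {z:6.1f}:  Delta r / lambda_c = {delta_r_over_lambda_c:8.3f}")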
Although the dependence of the prefactor on the charge-to-mass ratio and distance to the horizon is computed numerically, they exhibit general features that can be understood intuitively.
First, the local rate is diverging but integrable when the horizon is approached. The diverging local rate at the horizon is a consequence of restoration of the AdS_2 conformal symmetry. Symmetries of this kind lead to zero modes associated with invariance of the stationary action. These zero modes cause the path integral to diverge when summing over the equivalent configurations and contribute an infinite volume factor. In the static RN black hole spacetime, one of such zero modes can be identified as that associated with time translational symmetry, contributing a factorized time interval from the total Schwinger rate. Another zero mode associated with the AdS_2 conformal symmetry exists on the horizon. However, in the exterior of the black hole, the conformal symmetry is broken. This means that we expect the zero eigenvalue of the fluctuation operator on the horizon to be continuously uplifted by the separation from the horizon. The uplifted eigenvalue remains small close to the horizon, giving the divergence of the local rate after the path integral. Unlike in pure AdS_2 space, however, in the full RN geometry, the volume factor associated with the (near) zero mode of AdS_2 cannot be infinite because the AdS_2 geometry is a good approximation only up to a finite size. This implies the integrability of the local rate over the black hole exterior. Indeed, the condition for the local rate to be integrable is a natural expectation for particle creation by finite-sized systems such as the black hole. A diverging rate would indicate that the black hole will lose most of its charge in an arbitrarily short amount of time, which is a clear contradiction to the existence of the black hole.
Another feature of the one-loop prefactor is the switch-off behavior near the extremality of the produced particles. Focusing on the near horizon region where the Schwinger effect is dominant, the instanton paths are small deformations of the AdS_2 instantons computed in Appendix <ref>. The z-dependence of the prefactor A∝1/s is captured by the kinetic term of the stationary action s=ma/2=π Qmℓ_P/√(z^2-1). From this we see that A∝√(z^2-1) and the prefactor continuously approaches zero when the particle becomes extremal, switching off the Schwinger effect. The threshold of the Schwinger effect at z=1 is reminiscent of the WGC bound. As previously argued from the kinematic viewpoint of preventing naked singularities from forming in the backreacted spacetime, particles with z<1 cannot cause the decay of extremal black holes in asymptotically flat space. Here the same bound reappears, but seen directly from the dynamics of the charged emission process. While emission of extremal charged particles by extremal black holes does not lead to exposure of singularities[The black hole extremality bound receives corrections from higher derivative operators, see <cit.>, potentially allowing smaller black holes themselves to count towards the superextremal object responsible for the decay of larger black holes. The emission of charged particles by large extremal black holes that we considered is a special case where one of the decay products is microscopic.
Modular invariance in string theory <cit.> as well as IR consistencies <cit.> suggest the existence of a tower of superextremal states. In certain string theoretical setups, the tower was shown to interpolate between extremal black holes and microscopic superextremal particles <cit.>.], such a process is not dynamically favorable. Viewing the Schwinger effect as a quantum mechanical tunneling process driven by the electric force, the aforementioned process sees a cancellation between the electric repulsion and gravitational attraction between the two extremal objects. This effectively diminishes the residue electric force that sources the tunneling, causing a vanishingly small probability for the process to happen. The same threshold can be determined by an entropic reasoning. It is shown in <cit.> that the particle creation by black hole is closely related to the change in entropy of the system. Consider emission of one single extremal particle by an extremal black hole. This emission process keeps the black hole on the extremal line but reduces the size of the black hole. The entropy of the emitted extremal particle has to be in the zero momentum state at infinity thus should have zero entropy. Therefore, the emission of extremal particles leads to a net decreased of entropy of the system, suggesting that the process is not entropically favored[Emission of extremal particles by extremal black holes resembles the AdS-fragmentation discussed in <cit.>, except the particle is not associated with a horizon. The exponential factor that we obtained is consistent with the instanton action in <cit.> for AdS brane nucleation and with the Brill instanton <cit.>, but our analysis shows a vanishing prefactor which is not evaluated in <cit.>. It would be a useful exercise to see whether the vanishing prefactor remains when considering gravitational back-reactions in our analysis or to compute the prefactor associated with the fragmentation process.].
We note the resemblance of the z>1 condition to the charged superradiance condition
m<ω <q Φ_H,
where Φ_H is the electric potential at the horizon and is unity for an extremal RN black hole. In fact, it is not a surprise that the stimulated emission of charged particles, the superradiance effect, is connected to the spontaneous Schwinger effect in such a way. Black holes as quantum systems are thought to obey detailed balancing where the spontaneous and stimulated production imply one another.
§ CONCLUSIONS
Using the worldline path integral formalism, we compute the spatial profile of the Schwinger production rate in the exterior of an extremal RN black hole. This is the first time a full description of the Schwinger effect outside the RN black hole horizon is given. We notice many interesting aspects of the result that is worth emphasizing, in particular, the characteristic scale of the Schwinger effect outside the black hole, the connection of the Schwinger effect to black hole superradiance and the relation between conditions of non-zero Schwinger rate and bounds on the particle spectrum.
We identify the characteristic scale of the Schwinger effect outside extremal black holes to be the Compton wavelength of the charged particle. From Eq.(<ref>), it is evident that the Schwinger effect is the strongest on the black hole horizon. This is not surprising since the electric field is the strongest there. The scale at which the particle production is not too diminished compared to the horizon rate is determined by the exponential suppression factor. The fall-off scale can be estimated from the value of the exponent. For instance, we define this scale as the distance over which the exponent changes by one, therefore Bλ/Qℓ_P∼ 1. In other words, this scale λ is measured by the Compton wavelength
λ∼ c(z)λ_c,
and λ_c=m^-1 is the Compton wavelength of the particle and c(z) is a z-dependent factor[Concrete values for an electron is given in section <ref>.]. We note that the Compton length is also the characteristic scale of the superradiant effect of black holes.
This scale serves as
an independent check of the detailed balancing principle which relates the spontaneous Schwinger production and the stimulated superradiant scattering effect. The connection can be drawn from both the profile and the strength of the effects. As can be seen from the gravitational atom analysis of superradiance in <cit.>, the extent of a superradiant cloud around the black hole is described by the Compton wavelength, scaled by the gravitational coupling constant, taking the same form as Eq.(<ref>) if we think of z as the relative coupling strength between the electromagnetic and gravitational field felt by the particle. It is also shown that the superradiance rate is controlled by the ratio of the black hole size and the particle's Compton wavelength, giving a highest rate when they are of the same order <cit.>. This suggests an interpolation from suppressed Schwinger effect to catastrophic superradiant instability as we change the parameter M ℓ_P^2/λ_c. The Schwinger effect derived in this paper describes the situation when M ℓ_P^2/λ_c≫ 1. A transition to the unsuppressed regime happens at M ℓ_P^2/λ_c∼ O(1) when the stationary point approximation breaks down, resembling the tuning of a resonance effect. It is important to note that the results referenced above for the superradiant effect is originally considered for a Kerr black hole, so one has to carefully apply the statements to the charged case. If the results for rotating superradiance generalizes to charged superradiant effect, then there will be a unified picture of the Schwinger effect and the superradiant effect - both are consequences of the instability due to some negative modes induced by the electric field. The spontaneous Schwinger effect describes the decay from the vacuum state while the superradiance phenomenon captures the resonance effect of particle-occupied states with the unstable mode.
The scale of the Schwinger effect being the Compton length should hold more generally for charged black holes embedded in different asymptotic spacetimes because it is the intrinsic scale associated with the particle being created. The scale not only indicates the relevant region for the Schwinger effect, it is also suggestive of the regime where the Schwinger formula starts to break down. Having this in mind, we revisit the scenario that led to the Festina Lente (FL) bound on the particle spectrum in dS space m^2≳ eHℓ_P^-1. The bound was proposed in <cit.> and was refined in a later paper <cit.>. The original idea was to put a constraint on the particle spectrum such that the subsequent evolution of the Nariai spacetime is non-singular. Using the Schwinger formula in dS space derived in <cit.>, <cit.> considered the (near) homogeneous rapid discharge of a cosmological-sized charged black hole into light particles throughout the space between the black hole and cosmological horizon. The rapid creation and violent annihilation of the created particles create an oscillating dipole effect that converts the energy of the electric field into radiation, leading to a big crunch of the spacetime. The remedy to this given by the authors in <cit.> was to prevent the rapid discharge by putting a lower bound on the particle mass. We will provide a different physical picture of the Schwinger effect of charged dS black holes in the next paragraph, but before that, it is useful to point out some key ingredients that went into the proposal of the FL bound: (1) a strong Schwinger effect throughout the region between the black hole and cosmological horizons, and (2) the rapid annihilation of charged particles.
Process (1) causes the black hole to discharge and (2) leads to a collapse of the spacetime, as argued in <cit.>. Our findings strongly motivate one to revisit whether the singular collapse envisioned in <cit.> can happen[<cit.> had also argued that the singular collapse may not occur.]. The coordinate separation[The coordinate distance sets the S^2 size and controls the electric field strength, therefore the spatial profile of the Schwinger production is measured in coordinate distance.] of the black hole and cosmological horizon will be relevant to our discussion and we denote this quantity as r_u=r_c-r_+, where r_c is the location of the cosmological horizon and r_+ is location of the black hole horizon, see Fig.<ref>. When the Compton wavelength is small compared to the horizon separation, λ_c≪ r_u, particle production happens only near the black hole horizon. Because the charged particles created are highly localized in space, they will not lead to a singular collapse of the full spacetime described in <cit.>. When the separation between the black hole horizon and cosmological horizon is further decreased such that the Compton wavelength becomes larger[In this limit, the proper distance between the black hole and cosmological horizons stays finite, and is of the dS scale. We note that if the particle's Compton wavelength is even larger than the dS scale, the notion of particle in the spacetime between the horizons breaks down.], λ_c>r_u, the charge and mass of the dS black hole will receive significant corrections if the Schwinger rate is unsuppressed. This is because the black hole will form a particle cloud due to the rapid discharge. The particle cloud should have a density distribution that is proportional to the Schwinger rate profile and would carry a significant fraction of the charge and mass of the black hole. If λ_c>r_u, this cloud extends outside the cosmological horizon, then the actual mass and charge of the black hole appearing in the dS black hole metric will need to be corrected by a potentially large fraction. The significant modification to the mass and charge parameter of a large black hole in dS space (near Nariai black hole) due to the particle cloud calls for a more careful analysis of the particle emission process and the back-reaction on the black hole. This points towards the need of a more complete spatial analysis of the Schwinger production in the general dS black hole spacetime and a proper account of the gravitational back-reaction, which are interesting future extensions of our work.
On the gravitational side, the intuition that the black hole together with the particle cloud of size λ_c should be contained within the cosmological horizon has direct implication to the yet open question of whether a near Nariai black hole can dynamically exit the allowed parameter space (the so called shark fin region). When the particle spectrum includes some charged light particles, there will exist a region R near the Nariai line in which a black hole should always be considered together with its surrounding charged particle distribution. One should refrain from tracing the evolution of a pure dS black hole starting from R because pure black holes cannot be stable here. We speculate that the region R modifies the Nariai line and smooths out the evolution of charged dS black holes as they approach R from inside the shark fin region, preventing a pathological development of the spacetime.
On the particle side, decay of charged dS black holes might provide insights to the WGC. The WGC is a bound on particle spectrum for consistency of quantum gravity allowing for decay of black holes, in a way consistent with the Cosmic Censorship Conjecture. The computation of the Schwinger effect can be indicative of the form of the WGC bound, as seen from the analysis in this paper. It remains an open question of what the Schwinger production rate is between the black hole and cosmological horizons for general dS black holes. This again motivates the application of the worldline approach to the study of Schwinger effect of dS black holes, especially for charged (near) Nariai black holes, to provide hints on the WGC bound in dS space. We will leave this investigation to a future work.
Acknowledgments
We would like to thank Lars Aalsma, Yoshihiko Abe, Gregory Loges, Miguel Montero, and Jan Pieter van der Schaar for helpful discussions and comments. This work is supported in part by the DOE grant DE-SC0017647.
§ EXPANSION OF THE WORLDLINE ACTION
The worldline action of interest is
S=∫_0^1dτ(m^2/4s g_μνξ^μξ^ν+e A_μξ̇^μ+s).
Expanding the kinetic term with respect to the path fluctuation, we obtain
g_μν(ξ+δξ)(ξ^μ+δξ^μ)(ξ^ν+δξ^ν)
= (g_μν+g_μν,σδξ^σ+1/2g_μν,σρδξ^σδξ^ρ+⋯)(ξ^μ+δξ^μ)(ξ^ν+δξ^ν)
= g_μνξ^μξ^ν+2g_μνξ^μδξ^ν+δξ^σ g_μν,σξ^μξ^ν
+ g_μνδξ^μδξ^ν+δξ^σ g_μν,σ(δξ^μξ^ν+ξ^μδξ^ν)+1/2δξ^ρδξ^σ g_μν,σρξ^μξ^ν+⋯.
Similarly, the gauge field term is expanded as
A_μ(ξ+δξ)(ξ^μ+δξ^μ)
= (A_μ+A_μ,νδξ^ν+1/2A_μ,νσδξ^νδξ^σ)(ξ^μ+δξ^μ)
= A_μξ^μ+A_μ,νδξ^νξ^μ+A_μδξ^μ+A_μ,νδξ^νδξ^μ+1/2ξ^μ A_μ,νσδξ^νδξ^σ+⋯.
The action expanded with respect to the path, by expansion order, is
S^(0)=∫_0^1dτ(s+m^2/4s g_μνξ^μξ^ν+e A_μξ̇^μ),
S^(1)_ξ =∫_0^1dτ[m^2/4s(2g_μνξ^μδξ^ν+δξ^σ g_μν,σξ^μξ^ν)+e(A_μ,νδξ^νξ^μ+A_μδξ^μ)
]
=∫_0^1dτδξ^ν{m^2/4s[-2d/dτ
(g_μνξ^μ)+g_μσ,νξ^μξ^σ]+e(A_μ,νξ^μ-d/dτA_ν)
}
=∫_0^1dτδξ^ν{-m^2/2s(g_μνξ^μ+g_μν,σξ^σξ^μ-1/2g_μσ,νξ^μξ^σ)+e(A_μ,ν-A_ν,μ)ξ^μ}
=-∫_0^1dτδξ^ν{m^2/2sg_μνD^2/dτ^2ξ^μ+eF_μνξ^μ}
=-∫_0^1dτδξ^μ{m^2/2sg_μνD^2/dτ^2-eF_μνD/dτ}ξ^ν,
S^(2)_ξξ= 1/2∫_0^1dτδξ^μ{m^2/2s[
-g_μνd^2/dτ^2-ξ^σ g_μσ,ν-2ξ^σΓ_σνμd/dτ-ξ^ρξ^σ g_μσ,νρ+1/2ξ^ρξ^σ g_ρσ,μν]
+e[F_μνd/dτ+ξ^σ(A_σ,νμ-A_μ,νσ)]}δξ^ν,
where the following is applied to obtain Eq.(<ref>)
∫_0^1dτ{
g_μνδξ^μδξ^ν+δξ^σ g_μν,σ(δξ^μξ^ν+ξ^μδξ^ν)+1/2δξ^ρδξ^σ g_μν,σρξ^μξ^ν}
= ∫_0^1dτδξ^μ{-g_μνδξ^ν-ξ^σ g_μν,σδξ^ν-(ξ^ρξ^σ g_μσ,νρδξ^ν+g_μσ,νξ^σδξ^ν+g_μσ,νξ^σδξ^ν)
+ξ^σ g_σν,μδξ^ν
+1/2ξ^ρξ^σ g_ρσ,μνδξ^ν}
= ∫_0^1dτδξ^μ{
-g_μνd^2/dτ^2-g_μσ,νξ^σ-2ξ^σΓ_σνμd/dτ-ξ^ρξ^σ g_μσ,νρ+1/2ξ^ρξ^σ g_ρσ,μν}δξ^ν,
∫_0^1dτ{
A_μ,νδξ^νδξ^μ+1/2ξ^μ A_μ,νσδξ^νδξ^σ}
= ∫_0^1dτ{1/2A_μ,νδξ^νδξ^μ+1/2A_ν,μδξ^μδξ^ν+1/2ξ^σ A_σ,νμδξ^νδξ^μ}
= 1/2∫_0^1dτδξ^μ{
A_ν,μδξ^ν-A_μ,νδξ^ν-A_μ,νσξ^σδξ^ν+ξ^σ A_σ,νμδξ^νδ}
= 1/2∫_0^1dτδξ^μ{F_μνd/dτ+ξ^σ(A_σ,νμ-A_μ,νσ)}δξ^ν.
The expansion with respect to the proper time can be easily computed since it is not dynamical. The results are
S^(1)_s=∫_0^1dτ (1-m^2/4s^2g_μνξ^μξ^ν)δ s
S^(2)_ss=∫_0^1dτ(m^2/2s^3g_μνξ^μξ^ν) δ s^2
S^(2)_sξ=∫_0^1dτδ s (m^2/2s^2g_μνD^2/dτ^2ξ^ν) δξ^μ
§ GREEN'S FUNCTION OF MATRIX DIFFERENTIAL OPERATORS
Consider the matrix generalization of the Sturm-Liouville operator defined on τ∈[0,1]
L=d/dτPd/dτ+Q,
where P,Q are matrices. For the purposes of this paper, the boundary condition is Dirichlet. Assuming that the kernel of the operator is trivial, the inverse is defined as the solution of
LG(τ,τ')=δ(τ-τ'),
which can be constructed as a gluing of two solutions
G(τ,τ')=Θ(τ'-τ)Y_L(τ)A(τ')+Θ(τ-τ')Y_R(τ)B(τ'),
where Y_L,R are matrices formed by independent solutions to Ly⃗_L,R=0 satisfying boundary conditions y⃗_L(0)=y⃗_R(1)=0 and A,B are matrix coefficients.
Continuity of G and the jump of derivative due to the delta function lead to the condition
Y'_L(τ')A(τ')-Y'_R(τ')B(τ')=-P^-1
Y_L(τ')A(τ')-Y_R(τ')B(τ')=0
.
Solving for A,B,
A=[P (Y'_R Y_R^-1 Y_L-Y'_L)]^-1
B=[P(Y'_R-Y'_LY_L^-1Y_R)]^-1,
the full Green's function is then
G(τ,τ') =Θ(τ'-τ)Y_L(τ)[P (Y'_R Y_R^-1 Y_L-Y'_L)]^-1(τ')
+Θ(τ-τ')Y_R(τ)[P(Y'_R-Y'_LY_L^-1Y_R)]^-1(τ').
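A quick sanity check of this gluing construction in the simplest scalar (1×1) case, which is our own illustration rather than part of the paper: for L = -d^2/dτ^2 on [0,1] with Dirichlet conditions one has P = -1, y_L(τ) = τ and y_R(τ) = 1-τ, and the formula reproduces the textbook Green's function G(τ,τ') = min(τ,τ')(1 - max(τ,τ')).
import numpy as np

# Scalar check of the gluing formula for L = -d^2/dtau^2 on [0,1], Dirichlet ends.
P = -1.0
y_L,  y_R  = (lambda t: t), (lambda t: 1.0 - t)
dy_L, dy_R = (lambda t: 1.0), (lambda t: -1.0)

def G(t, tp):
    A = 1.0 / (P * (dy_R(tp) / y_R(tp) * y_L(tp) - dy_L(tp)))
    B = 1.0 / (P * (dy_R(tp) - dy_L(tp) / y_L(tp) * y_R(tp)))
    return y_L(t) * A if t <= tp else y_R(t) * B

for t, tp in [(0.2, 0.7), (0.9, 0.3), (0.5, 0.5)]:
    exact = min(t, tp) * (1.0 - max(t, tp))
    print(f"G({t},{tp}) = {G(t, tp):.6f}   exact = {exact:.6f}")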
When the operator L has zero eigenvalues, a pseudo-inverse can be defined by projecting onto the orthogonal space. Making use of the eigenvalue representation of the Green's function, we can construct
G'(τ,τ')=lim_ϵ→0(G_ϵ(τ,τ')-∑_my_0m^T(τ)y_0m(τ')/ϵ),
where G_ϵ is the Green's function of the shifted operator L_ϵ=L+ϵ and y_0m is the zero mode of L.
§ WORLDLINE INSTANTONS IN ADS_2 SPACE
In this appendix, we will compute the instanton path and action in the Poincaré and global coordinates of AdS_2. The Poincaré coordinate (in Euclidean signature) is
ds^2=L^2dt^2+dr^2/r^2.
The gauge field is taken to be A=(EL^2/r)dt, corresponding to a constant electric field in AdS_2. The equations of motion are
ṙ=±ar/L√(1-z^2(1-ω̃ r)^2)
ṫ=azr/L(ω̃ r-1)
,
where z=eEL/m, ω̃=ω/eEL^2. Solutions to the above equation are circles. This can be seen after substituting t=rdt/dr into the second line of Eq.(<ref>),
dt/dr=±z(ω̃ r-1)/√(1-z^2(1-ω̃ r)^2),
which describes circles that are tangent to the lines t=±r/√(z^2-1). We can thus parametrize the solutions by
t=-1/ω̃ zsinθ(τ)
r=1/ω̃-1/ω̃ zcosθ(τ).
The concrete form of θ(τ) is obtained by inserting Eq.(<ref>) back to Eq.(<ref>)
θ(τ)=2arctan[√(z-1/z+1)tan(aτ/2L√(z^2-1))].
The periodic condition of the instanton path sets a=2π L/√(z^2-1) and the Poincaré coordinates restricts the valid paths to the r>0 region, requiring z>1.
The action can then be computed by reading off the kinetic contribution
ma=2π mL/√(z^2-1)
and computing the gauge field contribution
S_A =e∫_0^1 dτ A_0ṫ=e∫_0^1dτEL^2/razr/L(ω̃r-1)
=2zmL∫_r_1^r_2z(ω̃r-1)/r√(1-z^2(1-ω̃r)^2)dr
=2π mL(z-z^2/√(z^2-1)).
The full instanton action is therefore
S=2π mL(z-√(z^2-1)),
in agreement with that in <cit.>.
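This Poincaré-patch result is straightforward to confirm numerically. The sketch below is ours (not the paper's): we set m = L = 1 and use the substitution u = z(1-ω̃r) = cos φ, which is our own choice to remove the inverse-square-root endpoint singularities of the r-integral, and then add the kinetic term ma = 2π/√(z^2-1).
import numpy as np
from scipy.integrate import quad

# Check S = m*a + S_A = 2*pi*(z - sqrt(z^2-1)) for the Poincare-patch instanton (m = L = 1).
# After u = z(1 - wtilde*r) = cos(phi), the gauge-field term becomes
#   S_A = -2 z * integral_0^pi cos(phi)/(z - cos(phi)) dphi.
z = 1.7
S_A = -2.0 * z * quad(lambda phi: np.cos(phi) / (z - np.cos(phi)), 0.0, np.pi)[0]
S_total = 2.0 * np.pi / np.sqrt(z**2 - 1.0) + S_A
print(S_total, 2.0 * np.pi * (z - np.sqrt(z**2 - 1.0)))   # the two should agree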
In global coordinates[The same symbol r is used here, which should not be confused with the radial coordinate in the Poincaré patch.],
the metric is
ds^2=(1+r^2/L^2)dt^2+dr^2/1+r^2/L^2
and the gauge field A=Erdt. The corresponding geodesic equation is
ṙ=± a√((1+r^2/L^2)-(eEr-ω)^2/m^2)
ṫ=a/m(ω-eEr)/(1+r^2/L^2).
After defining the dimensionless coordinate ρ=r/L and parameters Ẽ=EL^2, ω̃=ω L, z^2=e^2Ẽ^2/m^2L^2, ρ_0=ω̃/eẼ, we rewrite the equation as
ρ̇ =±a/L√((1+ρ^2)-z^2(ρ-ρ_0)^2)
=±a/L√(z^2-1)√((1-z^2ρ_0^2)/(z^2-1)+2z^2ρ_0ρ/(z^2-1)-ρ^2)
=±a/L√(z^2-1)√(b^2-(ρ-ρ̄)^2),
where we further defined b^2=(z^2-1+z^2ρ_0^2)/(z^2-1)^2>0 and ρ̄=z^2ρ_0/(z^2-1). For Eq.(<ref>) to yield periodic solutions, it is required that z>1. The above equation is integrated to obtain
arcsin((ρ-ρ̄)/b)=(a/L)√(z^2-1)(τ-τ_0),
and the instanton path is[Solutions in the global coordinates are not circular but can be mapped to the circles in Poincaré coordinates derived in the earlier part of this appendix and reported in <cit.>.]
ρ=ρ̄+b sin[(a/L)√(z^2-1)(τ-τ_0)].
Periodicity translates to a=2π L/√(z^2-1). To check that the time coordinate is also periodic, we eliminate proper time from Eq.(<ref>) using ṫ=ρ̇ dt/dρ:
dt/dρ=Lz(ρ_0-ρ)/[(1+ρ^2)√(1+ρ^2-z^2(ρ-ρ_0)^2)].
We will not show the integration here but it can be computed using a complex contour integral similar to the one performed below to compute the instanton action. It is not hard to check that the integration of Eq.(<ref>) between the two turn-around points of ρ is zero, ensuring that the paths are closed.
We next compute the action using the instanton solution Eq.(<ref>)
S_A =e∫_0^1 dτ A_0ṫ = e∫ dτA_0(ω-eA_0)/m̃g_00
=ea/mL^2∫_0^1 dτ/1+ρ^2Ẽρ(ω̃-eẼρ)
=2e^2Ẽ^2/mL√(z^2-1)∫_ρ_-^ρ_+dρ/1+ρ^2ρ(ρ_0-ρ)/√(b^2-(ρ-ρ)^2),
where we have changed the integration variable from dτ to dρ in the third line. The original integral is along a periodic curve parametrized by τ∈[0,1], so upon switching to dρ, the path is represented by two oppositely orientated paths ρ=ρ_-→ρ_+ and ρ=ρ_+→ρ_- with Jacobians of opposite signs.
To evaluate the integral, we write ρ(ρ_0-ρ)/1+ρ^2=-1+1+ρ_0ρ/1+ρ^2, separating it into two parts
I_1=∫_ρ_-^ρ_+dρ/√((ρ_+-ρ)(ρ-ρ_-))
I_2=∫_ρ_-^ρ_+dρ/1+ρ^21+ρ_0ρ/√((ρ_+-ρ)(ρ-ρ_-)).
The first integral is just I_1=π and the second can be computed via the residue theorem by taking a contour very similar to that in Fig.<ref>. The result is
I_2=π/[(ρ_+^2+1)(ρ_-^2+1)]^1/4[sin((φ_++φ_-)/2)-ρ_0cos((φ_++φ_-)/2)]
where φ_±=arg(-ρ_±+i). Using the root relations of ρ_± from Eq.(<ref>), the denominator is computed as z√((1+ρ_0^2)/(z^2-1)) and the terms in the bracket reduce to √(1+ρ_0^2).
The total action can finally be expressed as
S =2π mL[1/√(z^2-1)+z^2/√(z^2-1) (√(z^2-1)/z-1)]
=2π mL(z-√(z^2-1)),
where z=eEL^2/mL=eEL/m.
This is exactly the same result obtained in the Poincaré patch and recovers the flat space instanton action for large L, lim_L→∞S=π m^2/eE.
|
http://arxiv.org/abs/2409.03684v1 | 20240905163913 | Predicting quantum channels over general product distributions | [
"Sitan Chen",
"Jaume de Dios Pont",
"Jun-Ting Hsieh",
"Hsin-Yuan Huang",
"Jane Lange",
"Jerry Li"
] | quant-ph | [
"quant-ph",
"cs.DS",
"cs.LG"
] |
Predicting quantum channels over general product distributions
Sitan Chen, Jaume de Dios Pont, Jun-Ting Hsieh, Hsin-Yuan Huang, Jane Lange, Jerry Li
=====================================================================================
§ ABSTRACT
We investigate the problem of predicting the output behavior of unknown quantum channels. Given query access to an n-qubit channel ℰ and an observable 𝒪, we aim to learn the mapping
ρ↦tr(𝒪ℰ[ρ])
to within a small error for most ρ sampled from a distribution 𝒟. Previously, Huang, Chen, and Preskill <cit.> proved a surprising result that even if ℰ is arbitrary, this task can be solved in time roughly n^O(log(1/ϵ)), where ϵ is the target prediction error.
However, their guarantee applied only to input distributions 𝒟 invariant under all single-qubit Clifford gates, and their algorithm fails for important cases such as general product distributions over product states ρ.
In this work, we propose a new approach that achieves accurate prediction over essentially any product distribution 𝒟, provided it is not “classical” in which case there is a trivial exponential lower bound. Our method employs a “biased Pauli analysis,” analogous to classical biased Fourier analysis. Implementing this approach requires overcoming several challenges unique to the quantum setting, including the lack of a basis with appropriate orthogonality properties. The techniques we develop to address these issues may have broader applications in quantum information.
§ INTRODUCTION
When is it possible to learn to predict the outputs of a quantum channel ?
Such questions arise naturally in a variety of settings, such as the experimental study of complex quantum dynamics <cit.>, and in fast-forwarding simulations of Hamiltonian evolutions <cit.>.
However, in the worst case this problem is intractable, as it generalizes the classical problem of learning an arbitrary Boolean function over the uniform distribution from black-box access.
To circumvent this, our goal is to understand families of natural restrictions on the problem under which efficient estimation is possible.
One way to avoid this exponential scaling would be to posit further structure on the channel, e.g. by assuming it is given by a shallow quantum circuit <cit.> or a structured Pauli channel <cit.>.
However, there are settings where the evolutions may be quite complicated — e.g. the channel might correspond to the time evolution of an evaporating black hole <cit.> — and where it is advantageous to avoid such strong structural assumptions on the underlying channel.
Recently, <cit.> considered an alternative workaround in which one only attempts to learn a complicated n-qubit channel in an average-case sense. Given query access to ℰ, and given an observable 𝒪, the goal is to learn the mapping
ρ↦tr(𝒪ℰ[ρ])
accurately on average over input states ρ drawn from some n-qubit distribution 𝒟, rather than over worst-case input states. The authors of <cit.> came to the surprising conclusion that this average-case task is tractable even for arbitrary channels ℰ, provided 𝒟 comes from a certain class of “locally flat” distributions. Their key observation was that the Heisenberg-evolved observable ℰ^†[𝒪] admits a low-degree approximation in the Pauli basis, where the quality of approximation is defined in an average-case sense over input state ρ.
Another interesting feature of this result is that their learning algorithm only needs to query on random product states, regardless of the choice of locally flat distribution . This is both an advantage and a shortcoming. On one hand, if one is certain that the states ρ one wants to predict on are samples from a locally flat distribution, no further information about is needed to implement the learning protocol in <cit.>. On the other hand, locally flat distributions are quite specialized: they are constrained to be invariant under any single-qubit Clifford gate. In particular, almost all product distributions over product states fall outside this class. Worse yet, the general approach of low-degree approximation in the Pauli basis can be shown to fail when local flatness does not hold (see <Ref>). We therefore ask:
Are there more general families of distributions under which one can
learn to predict arbitrary quantum dynamics?
Identifying rich settings where it is possible to characterize the average-case behavior of such dynamics, while making minimal assumptions on the dynamics, is of intense practical interest. Unfortunately, our understanding of this remains limited: even for general product distributions, known techniques break down. In this work we take an important first step towards this goal by completely characterizing the complexity of learning to predict arbitrary quantum dynamics in the product setting.
Informally stated, our main result is that learning is possible so long as the distribution is not classical.
That is, for this problem there is a “blessing of quantum-ness”: as long as the distribution displays any quantitative level of quantum behavior, there is an efficient algorithm for predicting arbitrary quantum dynamics under this distribution.
More formally, note that if D is the uniform distribution over the computational basis states |0⟩ and |1⟩, then the task of predicting (𝒪ℰ[ρ]) on average over ρ∼𝒟≜ D^⊗ n for an arbitrary channel ℰ is equivalent to the task of learning an arbitrary Boolean function from random labeled examples, which trivially requires exponentially many samples. This logic naturally extends to any “two-point” distribution in which D is supported on two diametrically opposite points on the Bloch sphere.
Note that any such distribution, up to a rotation, is an embedding of a classical distribution onto the Bloch sphere.
A natural way of quantifying closeness to such distributions is in terms of the second moment matrix in ℝ^3× 3 of the distribution D, when D is viewed as a distribution over the Bloch sphere (see <Ref> for formal definitions).
We refer to this matrix as the Pauli second moment matrix of D.
For the purposes of this discussion, the key property of this matrix is that its operator norm is at most 1 for all D, and moreover, it equals 1 if and only if D is one of the aforementioned two-point distributions.
With this, we can now state our main result:
Let ϵ, δ, η∈ (0,1).
Let D be an unknown distribution over the Bloch sphere whose Pauli second moment matrix has operator norm at most 1-η.
Let ℰ be an unknown n-qubit quantum channel, and let 𝒪 be a known n-qubit observable.
There exists an algorithm with time and sample complexity
min(2^O(n)/ϵ^2, n^O(log(1/ϵ)/log(1/(1-η))))·log(1/δ) that outputs an efficiently computable map f' such that
𝔼_ρ∼ D^⊗ n [((𝒪ℰ[ρ]) - (f'(ρ)))^2] ≤ ϵ
with probability at least 1-δ.
Note that the only condition on D we require is a quantitative bound on the spectral norm of its Pauli second moment matrix.
In other words, so long as the distribution D is far from any two-point distribution, i.e., it is far from any classical distribution, we demonstrate that there is an efficient algorithm for learning to predict general quantum dynamics under this distribution.
Previously it was only known how to achieve the above guarantee in the special case where D has mean zero. Indeed, as soon as one deviates from the mean zero case, the Pauli decomposition approach of <cit.> breaks down. In contrast, our guarantee works for any product distribution whose marginal second moment matrices have operator norm bounded away from 1.
We note that our techniques generalize to the case where the distribution is the product of different distributions over qubits,
so long as each distribution has second moment with operator norm bounded by 1-η.
However, for readability we will primarily focus on the case where all of the distributions are the same.
See <Ref> for a discussion of how to easily generalize our techniques to this setting.
Beyond low-degree concentration in an orthonormal basis. Here we briefly highlight the key conceptual novelties of our analysis, which may be of independent interest. We begin by recalling the analysis in <cit.> in greater detail. They considered the decomposition of 𝒪'≜ℰ^†[𝒪] into the basis of n-qubit Pauli operators, i.e. 𝒪' = ∑_P ∈{I,X,Y,Z}^nα_P· P and argued that this is well-approximated by the low-degree truncation 𝒪'_low = ∑_|P|<tα_P · P. This can be readily seen from the following calculation. By rotating the distribution D, we may assume the covariance is diagonal, with entries bounded by 1 - η. Then the error achieved by the low-degree truncation is given by
𝔼[((𝒪' - 𝒪'_low)ρ)^2] = 𝔼[(∑_|P|≥ tα_P·(Pρ))^2] ≤∑_|P|≥ t (1 - η)^|P|·α_P^2 ≤ (1 - η)^t ·(1/2^n)‖𝒪'‖^2_F ≤ (1 - η)^t ,
where the second step follows from the fact that
𝔼[(Pρ) (Qρ)] = 0 if P ≠ Q and is at most (1-η)^|P| if P = Q (since D is mean zero and its covariance is diagonal with entries bounded by 1 - η), and the last step follows by the assumption that ‖𝒪'‖_op ≤ 1.
Note that when D is not mean zero, this step breaks, and we do not have this nice exponential decay in t. In fact, in <Ref> we construct examples of operators which are not well-approximated by their low-degree truncations in the Pauli basis when D has mean bounded away from zero.
A natural attempt at a workaround would be to change the basis under which we truncate. At least classically, biased product distributions over the Boolean hypercube still admit suitable orthonormal bases of functions, namely the biased Fourier characters. As we show in <Ref>, this idea can be used to give the following learning guarantee in the classical case where D is only supported along the Z direction in the Bloch sphere:
Let ϵ, δ, η∈ (0,1).
Let D be an unknown distribution over the interval [-(1-η),1-η].
Let f:[-1,1]^n → [-1,1] be an unknown bounded, multilinear function.
There exists an algorithm with time and sample complexity
n^O(log(1/ϵ)/log(1/(1-η)))·log(1/δ) that outputs a hypothesis f' such that
𝔼_x ∼ D^⊗ n[(f(x) - f'(x))^2] ≤ ϵ
with probability at least 1-δ.
Unfortunately, when we move beyond the classical setting, the picture becomes trickier. In particular, it is not immediately clear what the suitable analogue of the biased Fourier basis should be in the quantum setting. We could certainly try to consider single-qubit operators of the form P̃ = P - μ_P· I for P∈{X,Y,Z}, where μ_P denotes the P-th coordinate of the mean of D regarded as a distribution over the Bloch sphere. We could then extend naturally to give a basis over n qubits, and the functions ρ↦(P̃ρ) would by design be orthogonal to each other with respect to the distribution D^⊗ n. Writing 𝒪' = ∑_P α_P ·P̃ and defining 𝒪'_low ≜∑_|P|<tα_P ·P̃, we can mimic the calculation above and obtain
𝔼[((𝒪' - 𝒪'_low)ρ)^2] = 𝔼[(∑_|P|≥ tα_P·(P̃ρ))^2] ≤∑_|P|≥ t(1 - η)^|P|α^2_P ≤ (1 - η)^t∑_|P|≥ tα^2_P .
Unfortunately, at this juncture the above naive approach hits a snag. In the mean zero case, we could easily relate ∑_P α_P^2 to (1/2^n)‖𝒪'‖^2_F because of a fortuitous peculiarity of the mean-zero setting. In that setting, we implicitly exploited both that the Pauli operators P are orthogonal to each other with respect to the trace inner product, and also that the functions ρ↦(Pρ) are orthogonal to each other with respect to D^⊗ n. In the above approach for nonzero mean, we achieved the latter condition by shifting the Pauli operators to define P̃, but these shifted operators are no longer orthogonal to each other with respect to the trace inner product.
Circumventing this issue is the technical heart of our proof. As we will see in <Ref>, several key technical moves are needed. First, instead of assuming that the covariance of D is diagonalized, we will fix a rotation that simplifies the mean, 𝔼[ρ]. Second, instead of shifting the basis operators {X,Y,Z} so that the resulting functions ρ↦(P̃ρ) are orthogonal with respect to D^⊗ n, we shift them so that 𝔼[ρ] is orthogonal to them, and define 𝒪'_low by truncating in this new basis instead. Finally, instead of directly bounding the truncation error 𝔼[((𝒪' - 𝒪'_low)ρ)^2] using the above sequence of steps, we crucially relate it to the quantity
((𝒪')^2 𝔼[ρ])
in order to establish exponential decay. Note that ((𝒪')^2 𝔼[ρ]) ≤‖𝒪'‖^2_op·(𝔼[ρ]) ≤ 1. To our knowledge, all three of these components are new to our analysis. We leave it as an intriguing open question to find other applications of these ingredients to domains where “biased Pauli analysis” arises.
Just as in <cit.>, we can also easily extend our guarantee to the setting where we wish to learn the joint mapping
(ρ, 𝒪) ↦(𝒪ℰ[ρ]) .
This is the natural channel learning analogue of the question of classical shadows for state learning <cit.> – recall that in the latter setting, one would like to perform measurements on copies of ρ and obliviously produce a classical description of the state that can then be used to compute some collection of observable values. We sketch the argument for extending to learning the joint mapping in Eq. (<ref>) in <Ref>.
Impossibility for general concentrated distributions.
It is natural to wonder to what extent our results can be generalized, especially to states that are entangled. Could it be that all one needs is some kind of global covariance bound? Unfortunately, we show in <Ref> that even in the classical setting, this is not the case. Since classical distributions can be encoded by distributions over qubits,
this implies hardness for learning in the quantum setting as well.
There exists a distribution D over [-(1-η), 1-η]^n and a concept class such that no algorithm PAC-learns the class over D in subexponential time.
There is a wide spectrum of distributional assumptions
that interpolates between fully product distributions and general concentrated distributions — for instance, products of k-dimensional qudit distributions, output states of small quantum circuits, or distributions over negatively associated variables.
As discussed earlier, our understanding of when it is possible to predict the average-case behavior of arbitrary quantum dynamics is still nascent, and understanding learnability with respect to these more expressive distributional assumptions remains an important open question.
Organization.
In <Ref>, we state some preliminaries.
In <Ref>, we prove <Ref>, which is the classical setting and can be viewed as a warm-up to the quantum setting.
In <Ref>, we prove <Ref>, our main result.
Finally, in <Ref>, we show impossibility results including <Ref> and the failure of low-degree truncation in the standard Pauli basis.
§.§ Related work
Our work is part of a growing literature bridging classical computational learning theory and its quantum counterpart. Its motivation can be thought of as coming from the general area of quantum process tomography <cit.>, but as this is an incredibly extensive research direction, here we only focus our attention on surveying directly relevant works.
Quantum analysis of Boolean functions.
In <cit.>, it was proposed to study Pauli decompositions of Hermitian unitaries as the natural quantum analogue of Boolean functions. One notable follow-up work <cit.> proved various quantum versions of classical Fourier analytic results like Talagrand's variance inequality and the KKL theorem in this setting, and also obtained corollaries about learning Hermitian unitaries in Frobenius norm given oracle access (see also <cit.> for the non-Hermitian case). Recently, <cit.> considered the Pauli spectrum of the Choi representation of quantum channels and proved low-degree concentration for channels implemented by 𝖰𝖠𝖢^0. One technical difference with our work is that these notions of Pauli decomposition are specific to the channel, whereas the object whose Pauli decomposition we consider is specific to the Heisenberg-evolved operator ℰ^†[𝒪].
Additionally, we note that all of the above mentioned works focus on questions more akin to learning a full description of the channel and thus are inherently tied to channels with specific structure.
In contrast, our focus is on learning certain properties of the channel, and only in an average-case sense over input states. As mentioned previously, this specific question was first studied in <cit.>. There have been two direct follow-up works to this paper which are somewhat orthogonal to the thrust of our contributions. The first <cit.> establishes refined versions of the so-called non-commutative Bohnenblust-Hille inequality which was developed and leveraged by <cit.> to obtain logarithmic sample complexity bounds. In this work, we did not pursue this avenue of improvement but leave it as an interesting open question to improve our sample complexity guarantees accordingly. The second follow-up <cit.> to <cit.> studies the natural qudit generalization of the original question where the distribution over qudits is similarly closed under a certain family of single-site transformations.
Finally, we note the recent work of <cit.> which studied the learnability of quantum channels with only low-degree Pauli coefficients. Their focus is incomparable to ours as they target a stronger metric for learning, namely ℓ_2-distance for channels, but need to make a strong assumption on the complexity of the channel being learned. In contrast, we target a weaker metric, namely average-case error for predicting observables, but our guarantee applies for arbitrary channels.
Classical low-degree learning.
The general technique of low-degree approximation in classical learning theory is too prevalent to do full justice to in this section. This idea of learning Boolean functions by approximating their low-degree Fourier truncation was first introduced in the seminal work of <cit.>. Fourier-analytic techniques have been used to obtain new classical learning results for various concept classes like decision trees <cit.>, linear threshold functions <cit.>, Boolean formulas <cit.>, low-degree polynomials <cit.>, and more.
While Fourier analysis over biased distributions dates back to early work of Margulis and Russo <cit.>, it was first applied in a learning-theoretic context in <cit.>, extending the aforementioned result of <cit.>.
§ PRELIMINARIES
§.§ Bloch sphere and Pauli covariance matrices
The Pauli matrices I, X, Y, Z provide the basis for 2 × 2 Hermitian matrices.
This is captured by the following standard fact on expanding a single-qubit state using Pauli matrices.
[Pauli expansion of states]
Any single-qubit mixed state ρ can be written as
ρ = 1/2(I + α_xX + α_yY + α_zZ) ,
where α⃗∈^3, α⃗_2 ≤ 1, and X,Y,Z are the standard Pauli matrices:
X = [ 0 1; 1 0 ] Y = [ 0 -i; i 0 ] Z=[ 1 0; 0 -1 ] .
The set of all such α⃗ of unit norm is the Bloch sphere.
Any single-qubit distribution D can be viewed as a distribution over the Bloch sphere.
We use 𝔼_D[ρ] ∈ℂ^2 × 2 and μ⃗∈ℝ^3 to refer to the expected state and the expected Bloch vector respectively.
By taking tensor products of Pauli matrices, we obtain the collection of 4^n Pauli observables {I,X,Y,Z}^⊗ n, which form a basis for the space of 2^n × 2^n Hermitian matrices:
[Pauli expansion of observables]
Let 𝒪 be an n-qubit observable. Then 𝒪 can be written in the following form:
𝒪 = ∑_P ∈{I,X,Y,Z}^⊗ n𝒪̂(P) · P ,
where 𝒪̂(P) ≜ (𝒪 P) / 2^n.
Next, we define the Pauli covariance and second moment matrices associated to any distribution D over the single-qubit Bloch sphere.
Let D be a distribution over the Bloch sphere.
We will associate with D a second moment matrix and a covariance matrix Σ, both in ℝ^3× 3 and
indexed by the non-identity Pauli components X, Y, and Z.
The second moment matrix has (P,Q) entry 𝔼_ρ∼ D[(Pρ)(Qρ)],
and Σ is defined by Σ_P,Q = 𝔼_ρ∼ D[(Pρ)(Qρ)] - (P𝔼[ρ])(Q𝔼[ρ]).
The following is a consequence of <Ref>:
For any distribution over the Bloch sphere, the Pauli second moment matrix has trace equal to 1.
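To make these definitions concrete, here is a minimal numerical sketch (the helper name and interface are ours, not from the paper) of estimating the Pauli second moment and covariance matrices of a single-qubit distribution from samples of Bloch vectors.

```python
import numpy as np

# Minimal sketch (assumed helper): estimate the Pauli second moment matrix and
# covariance matrix of a distribution D over the Bloch sphere from i.i.d. samples
# of Bloch vectors a = (tr(X rho), tr(Y rho), tr(Z rho)).
def pauli_moments(bloch_samples):
    A = np.asarray(bloch_samples)     # shape (N, 3), rows are Bloch vectors
    M = A.T @ A / len(A)              # (P,Q) entry: E[tr(P rho) tr(Q rho)]
    mu = A.mean(axis=0)               # mean Bloch vector
    Sigma = M - np.outer(mu, mu)      # covariance matrix
    return M, Sigma

# Example: the uniform distribution over the Bloch sphere has M = I/3 (trace 1,
# operator norm 1/3), while a "two-point" distribution on +/-Z has operator norm 1.
```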
§.§ Access model
In this paper we consider the following, standard access model, see e.g. <cit.>. We assume that we can interact with the unknown channel ℰ by preparing any input state, passing it through ℰ, and performing a measurement on the output. Additionally, we are given access to training examples from some distribution 𝒟 = D^⊗ n over product states, and their corresponding classical descriptions.
Because product states over n qubits can be represented efficiently using O(n) bits, the training set can be stored efficiently on classical computers.
The standard approach to represent a product state on a classical computer is as follows.
For each state ρ = ⊗^n_i=1|ψ_i⟩⟨ψ_i| sampled from 𝒟, the classical description can be given by the 1-qubit Pauli expectation values: (P |ψ_i ⟩⟨ψ_i |) for all i∈[n] ranging over each qubit and P∈{X,Y,Z}.
Given these classical samples from 𝒟 and the ability to query ℰ, the learning goal is to produce a hypothesis f' which takes as input the classical description of a product state ρ and outputs an estimate for (𝒪ℰ[ρ]). Formally, we want this hypothesis to have small test loss in the sense that 𝔼_ρ∼𝒟[((𝒪ℰ[ρ]) - (f'(ρ)))^2] ≤ϵ with probability at least 1 - δ over the randomness of the learning algorithm and the training examples from 𝒟.
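A minimal sketch of this classical description (the helper name is ours): each single-qubit factor of a product state is stored via its three Pauli expectation values.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Minimal sketch of the classical description used for training examples:
# an n-qubit product state |psi_1> x ... x |psi_n> is stored as the n-by-3 array
# of single-qubit Pauli expectations tr(P |psi_i><psi_i|) for P in {X, Y, Z}.
def classical_description(single_qubit_states):
    desc = []
    for psi in single_qubit_states:        # each psi is a length-2 complex vector
        rho = np.outer(psi, psi.conj())
        desc.append([np.trace(P @ rho).real for P in (X, Y, Z)])
    return np.array(desc)                  # O(n) numbers per product state
```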
§.§ Generalization bounds for learning
For our learning protocol, we will use the following elementary results about linear and polynomial regression:
[Rademacher complexity generalization bound <cit.>]
Let ℱ be the class of bounded linear functions [-1,1]^d→[-B,B], and let ℓ be a loss function with Lipschitz constant L and a uniform upper bound of c.
With probability 1-δ over the choice of a training set S of size m drawn i.i.d. from a distribution 𝒟, every f ∈ℱ satisfies
𝔼_(x,y) ∼𝒟[ℓ(f(x),y)] ≤𝔼_(x,y) ∼ S[ℓ(f(x),y)] + 4LB√(d/m) + 2c √(log(1/δ)/(2m)) .
Let f be a function that is ϵ-close to a degree-≤ d polynomial f^⋆:
𝔼_x ∼𝒟[(f(x)-f^⋆(x))^2] ≤ϵ.
Then linear regression over the set of degree-d polynomials with coefficients in [-1,1] has time and sample complexity poly(n^d, log(1/ϵ)) ·log(1/δ) and finds h such that
𝔼_x ∼𝒟[(f(x)-h(x))^2] ≤ O(ϵ)
with probability 1-δ.
§ WARM-UP: THE CLASSICAL CASE
In this section we will prove <Ref>, which is a special case of <Ref> where the distribution is classical, i.e. supported only on the Z component.
The key ingredient in the proof is to show that for any function which is L^2-integrable with respect to a product distribution over [-(1-η),1-η]^n and whose extension to the hypercube is bounded, the function admits a “low-degree” approximation under an appropriate orthonormal basis. Roughly, the intuition is that the space of linear functions over a distribution D on [-(1-η), 1-η] has an orthonormal basis which is a (1-η)-scaling of a basis for a distribution on {-1,1}.
Therefore, the space of multilinear functions over the corresponding product distribution D^⊗ n has a basis whose degree-d components are scaled by (1-η)^d.
This, combined with the assumption that the function is bounded on the hypercube, allows us to conclude that the contribution of the degree-d component to the variance of f over D^⊗ n is at most (1-η)^2d.
We prove this structural result in <Ref> and conclude the proof of <Ref> in <Ref>.
§.§ Existence of low-degree approximation
We first review some basic facts about classical biased Fourier analysis. For a more extensive overview of this topic, we refer the reader to <cit.>.
Given a measure μ, we let L^2(μ) denote the space of L^2-integrable functions with respect to μ.
[Biased Fourier basis]
Let D be a distribution over ℝ with mean μ∈ (-1,1). Given f∈ L^2(D^⊗ n), the μ-biased Fourier expansion of f: ℝ^n →ℝ is
f(x) = ∑_S ⊆ [n]f̂(S) ϕ_S(x) ,
where ϕ_S(x) = ∏_i ∈ S(x_i - μ)/√(1-μ^2) and
f̂(S) = 𝔼_x∼ D^⊗ n[ϕ_S(x)f(x)].
The functions ϕ_S provide an orthonormal basis for the space of functions L^2(D^⊗ n), where D is the distribution over {1, -1} with mean μ. We can naturally extend this to arbitrary product distributions over ^d as follows:
[Basis for an arbitrary product distribution]
Let D be a distribution over ℝ with mean μ and variance σ^2 > 0.
Then {1, (x - μ)/σ} is an orthonormal basis for L^2(D),
and thus {1, (x - μ)/σ}^⊗ n is an orthonormal basis for L^2(D^⊗ n).
The orthonormality of the basis immediately implies the following simple fact:
[Parseval’s Theorem]
For any function f expressed as f(x) = ∑_S ⊆ [n]f̂(S) ϕ_S(x), we have
𝔼_x∼ D^⊗ n[f(x)^2] = ∑_S⊆ [n]f̂(S)^2.
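As a small self-contained check of the orthonormality just stated (with toy values of μ and n chosen by us, and D supported on {±1} with mean μ):

```python
import numpy as np
from itertools import combinations, product

# Verify numerically that the mu-biased characters phi_S(x) = prod_{i in S} (x_i - mu)/sqrt(1 - mu^2)
# form an orthonormal family under D^{tensor n}, for D on {+1,-1} with mean mu.
mu, n = 0.4, 3
p = (1 + mu) / 2                                   # Pr[x_i = +1]

def phi(S, x):
    return np.prod([(x[i] - mu) / np.sqrt(1 - mu**2) for i in S]) if S else 1.0

def inner(S, T):
    total = 0.0
    for x in product([1, -1], repeat=n):
        w = np.prod([p if xi == 1 else 1 - p for xi in x])
        total += w * phi(S, x) * phi(T, x)
    return total

subsets = [()] + [S for r in range(1, n + 1) for S in combinations(range(n), r)]
G = np.array([[inner(S, T) for T in subsets] for S in subsets])
assert np.allclose(G, np.eye(len(subsets)))        # Gram matrix is the identity
```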
The following is the crucial structural result in the classical setting that gives rise to <Ref>. Roughly speaking, it ensures that for the “concentrated product distributions” D considered therein, any bounded multilinear function has decaying coefficients when expanded in the orthonormal basis for L^2(D^⊗ n).
Let f:[-1,1]^n → [-1,1] be a multilinear function and let D be a distribution over [-1,1] with mean μ and variance σ^2 < 1-μ^2.
Then there exists a function f^≤ d of degree at most d such that
𝔼_x ∼ D^⊗ n[(f(x) - f^≤ d(x))^2] ≤ (σ^2/(1-μ^2))^d.
Let f be expressed in the basis B_hypercube = {1, x - μ/√(1-μ^2)}^⊗ n; i.e. as
f(x) = ∑_S ⊆ [n]f(S)ψ_S(x)
where ψ_S = ∏_i ∈ Sx_i - μ/√(1-μ^2).
Note that {ψ_S}_S⊆ [n] is orthonormal with respect to the distribution D^⊗ n where D is supported on {±1} with mean μ.
Since |f(x)| ≤ 1 for x ∈ [-1,1]^n, it follows that
1 ≥_x∼D^⊗ n[f(x)^2] = ∑_S ⊆ [n]f(S)^2
via <Ref>.
Now, consider the basis ϕ_S ≜∏_i∈ S(x_i-μ)/σ = (√(1-μ^2)/σ)^|S|·ψ_S.
By <Ref>, we know that {ϕ_S}_S⊆[n] is orthonormal with respect to D^⊗ n.
Let f^>d ≜∑_|S| > df̂(S)ψ_S.
We have
𝔼_x ∼ D^⊗ n[f^>d(x)^2] = 𝔼_x ∼ D^⊗ n[(∑_|S| > df̂(S)ψ_S)^2]
= 𝔼_x ∼ D^⊗ n[(∑_|S| > df̂(S) (σ/√(1-μ^2))^|S|ϕ_S)^2 ]
= ∑_|S|>df̂(S)^2 (σ^2/(1-μ^2))^|S|
≤(σ^2/(1-μ^2))^d∑_|S|>df̂(S)^2 ,
again using <Ref>.
Since ∑_Sf(S)^2 ≤ 1,
we have
𝔼_x ∼ D^⊗ n[(f(x) - f^≤ d(x))^2] ≤(σ^2/(1-μ^2))^d
as desired.
§.§ Sample complexity and error analysis
In light of <Ref>, the proof of <Ref> is straightforward given the following elementary fact:
[Bernoulli maximizes variance]
Let D be a distribution over the interval [-(1-η), 1-η] with mean μ.
Then Var_x ∼ D(x) ≤ (1-η)^2(1-μ^2).
We can now conclude the proof of <Ref>:
By <Ref>, we have Var_D(x) ≤ (1-η)^2(1-μ^2).
Then <Ref> gives us a degree-d approximation f^≤ d to f such that
𝔼_x ∼ D^⊗ n[(f(x)-f^≤ d(x))^2] ≤ (1-η)^{2d}.
Taking d = log(1/ϵ)/log(1/(1-η))
gives an approximation with error ≤ϵ.
Then by <Ref>, linear regression on the space of polynomials of degree log(1/ϵ)/log(1/(1-η)) finds an O(ϵ)-error hypothesis in
n^O(log(1/ϵ)/log(1/(1-η)))·log (1/δ) time and samples.
In the classical setting linear regression is not required,
as we can estimate the mean, and the coefficients satisfy
f̂(S) = _x ∼ D^⊗ n [f(x) ϕ_S(x)] ,
so they can be estimated directly.
We give the guarantee in terms of linear regression because the approach of directly estimating the coefficients does not generalize to the quantum setting.
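For concreteness, here is a minimal sketch (function names and interface ours) of the regression-based learner from the proof above: ordinary least squares over all monomials of degree at most d, which is feasible for small toy sizes since the feature space has size n^{O(d)}.

```python
import numpy as np
from itertools import combinations

# Minimal sketch (assumptions: X holds samples x ~ D^{tensor n} as rows, y holds
# noisy values of the target function f, and d is the degree cutoff from the theorem):
# least-squares regression over all monomials of degree <= d.
def low_degree_regression(X, y, d):
    n = X.shape[1]
    feats = [()] + [S for r in range(1, d + 1) for S in combinations(range(n), r)]
    Phi = np.array([[np.prod(x[list(S)]) if S else 1.0 for S in feats] for x in X])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda x: sum(c * (np.prod(x[list(S)]) if S else 1.0)
                         for c, S in zip(coef, feats))
```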
§ LEARNING AN UNKNOWN QUANTUM CHANNEL
In this section we will prove <Ref>.
First we will show that under any product distribution with second moment matrix of operator norm at most 1-η for some η∈ (0,1),
every observable has a low-degree approximation.
A distribution has Pauli second moment matrix of operator norm equal to 1 only if it is effectively a classical distribution; i.e. it is supported on two antipodal points in the Bloch sphere.
So we are showing that any product distribution which is “spread out” within the Bloch sphere behaves well with low-degree approximation.
To do this, we cannot use exactly the same argument as in <Ref>, because
there is not necessarily an orthonormal basis for our product distribution D^⊗ n that is a “stretched” basis for some other distribution over the Bloch sphere.
Instead, we compare the variance of the observable under D^⊗ n to the quantity 𝔼_ρ∼ D^⊗ n[(𝒪^2 ρ)].
This allows us to use the boundedness of 𝒪 to derive bounds for the contribution of the degree-d part to the variance of 𝒪 under D^⊗ n.
The learning algorithm will find the low-degree approximation by linear regression
over the degree-log(1/ϵ) Pauli coefficients. The notion of low-degreeness will be with respect to a basis adapted to D^⊗ n.
Let D be a distribution over the Bloch sphere.
Let U^† A U be the eigendecomposition of ρ̄ ≜𝔼_D[ρ]. Let
X̃, Ỹ, Z̃ := U^† X U, U^† Y U, (U^† Z U - (ρ̄ U^† Z U)· I)/√(1 - (ρ̄ U^† Z U)^2).
Let B = {I, X̃, Ỹ, Z̃}^⊗ n.
The degree of P ∈ B is the number of non-identity elements in the product. The degree of a linear combination ∑_P∈ Bα_P P of elements in B is the largest degree of P ∈ B such that α_P ≠ 0.
The fact that the degree is defined for any observable (which is equivalent to X̃, Ỹ, Z̃ forming a basis of the space of operators) is the content of <Ref>. The existence of low-degree approximation is guaranteed by the following lemma.
Let 𝒪 be a bounded n-qubit observable and
let D be a distribution over the Bloch sphere with mean μ⃗ and Pauli second moment matrix of operator norm at most 1-η for some η∈ (0,1).
Then there exists a degree-d observable 𝒪^≤ d and a constant η' ∈ (0,1) such that
𝔼_ρ∼ D^⊗ n [((𝒪ρ) - (𝒪^≤ dρ))^2] ≤ (1-η')^d,
where η' is a function of η.
Once <Ref> is established, <Ref> follows readily from an application of <Ref>.
Let 1-η be a known upper bound on the operator norm of the Pauli second moment matrix of D.
We assume the access model of <Ref>, where we get a set S of examples [(Pρ)] for 1-local P.
Our algorithm is as follows:
* Compute η' as in the last line of <Ref>. Let d ≜ O(log(1/ϵ)/log(1/(1-η'))).
* Draw a set S of size n^d·log (1/δ), and initialize S' to be empty.
* For each x ∈ S, prepare a set T of log(|S| + 1/ϵ^2 + 1/δ) copies of the state ρ that matches the 1-local expectations of x. Let est((𝒪ℰ[ρ])) be the estimate of (𝒪ℰ[ρ]), where for each ρ∈ T, we measure with respect to {ℰ^†[𝒪], I - ℰ^†[𝒪]}, and est((𝒪ℰ[ρ])) is the empirical probability of measuring the first outcome.
Add
(x^⊗ log(1/ϵ)/log(1/(1-η')), est((𝒪ℰ[ρ])))
to the set S'.
* Run linear regression on S' and output the returned hypothesis h.
By Hoeffding's inequality, each estimate of (𝒪ℰ[ρ]) is within ϵ of its expectation with probability 1-exp(-Ω(|T| ϵ^2)).
By union bound over the |S| estimates,
with probability ≥ 1 - δ, all estimates are within ϵ of (𝒪ℰ[ρ]).
The time, sample, and error bounds follow from <Ref>, which guarantees that ℰ^†[𝒪] is ϵ-close to some degree-d polynomial. This implies
the labels of our sample set are 2ϵ-close to such a polynomial.
The dimension of the linear regression problem is ≤ nd · 3^d ≤ n^O(d), as there are 3 choices for each non-identity component.
Then by <Ref>, linear regression has time and sample complexity
n^O(log(1/ϵ)/log(1/(1-η')))·log (1/δ)
and outputs a hypothesis h such that
𝔼_ρ∼ D^⊗ n [((𝒪ℰ[ρ]) - (hρ))^2] ≤ O(ϵ) .
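For concreteness, a minimal sketch (helper name and interface of our choosing) of the biased degree-≤d feature map that the regression above operates on, assuming the single-qubit mean has already been rotated onto the Z axis as in <Ref>:

```python
import numpy as np
from itertools import combinations, product as cartesian

# Minimal sketch (hypothetical helper): given the n-by-3 array `desc` of 1-local
# expectations (tr(X rho_i), tr(Y rho_i), tr(Z rho_i)) and the Z-mean mu of D
# (assumed already rotated onto the Z axis), build all degree <= d biased features
# prod_{i in S} tr(Ptilde_i rho_i) with Ptilde_i in {X, Y, (Z - mu I)/sqrt(1-mu^2)}.
def biased_features(desc, mu, d):
    n = desc.shape[0]
    vals = np.column_stack([desc[:, 0], desc[:, 1],
                            (desc[:, 2] - mu) / np.sqrt(1 - mu**2)])
    feats = [1.0]
    for r in range(1, d + 1):
        for S in combinations(range(n), r):
            for choice in cartesian(range(3), repeat=r):
                feats.append(np.prod([vals[i, c] for i, c in zip(S, choice)]))
    return np.array(feats)      # at most 1 + sum_r C(n,r) 3^r <= n^{O(d)} features
```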
The proof of <Ref> follows from three technical Lemmas. The first gives, for any product distribution over n-qubit states, a (non-orthogonal) decomposition of any observable into operators which are centered and bounded in variance with respect to that distribution.
Let D be a distribution over the Bloch sphere.
Let U^† A U be the eigendecomposition of ρ̄ ≜𝔼_D[ρ], and let
X̃, Ỹ, Z̃ := U^† X U, U^† Y U, (U^† Z U - (ρ̄ U^† Z U)· I)/√(1 - (ρ̄ U^† Z U)^2).
Then {I, X̃, Ỹ, Z̃} is a basis for the space of 2× 2 Hermitian matrices, and thus every n-qubit observable 𝒪 can be written as
𝒪 = ∑_P ∈ B𝒪̂(P) P
for B = {I, X̃, Ỹ, Z̃}^⊗ n.
Furthermore, each non-identity P ∈ B satisfies 𝔼_ρ∼ D^⊗ n[(Pρ)] = 0 and 𝔼_ρ∼ D^⊗ n[(Pρ)^2] ≤ 1.
It is clear that {I, X̃, Ỹ, Z̃} is linearly independent and thus B forms a basis for the space of n-qubit observables.
Note that (Pρ) = ∏_i=1^n (P_i ρ_i) as ρ is a product state, so we can restrict the analysis to single qubit states drawn from D.
For P = X̃ or Ỹ, it is clear that 𝔼_ρ∼ D[(Pρ)] = 0 and 𝔼_ρ∼ D[(Pρ)^2] ≤ 1.
For Z̃, we have
𝔼_ρ∼ D[(Z̃ρ)] = (𝔼[(ρ U^† Z U)] - (ρ̄ U^† Z U))/√(1-(ρ̄ U^† Z U)^2) = 0 .
Moreover,
𝔼_ρ∼ D[(Z̃ρ)^2] = Var[(ρ U^† Z U)]/(1-(ρ̄ U^† Z U)^2) ≤ 1 ,
because Var[(ρ U^† Z U)] + 𝔼[(ρ U^† Z U)]^2 = 𝔼[(ρ U^† Z U)^2] ≤ 1.
Going forward, we will assume w.l.o.g. that the mean of the distribution, 𝔼_D[ρ], is diagonal because we can always transform the basis according to U.
Thus, we will assume that the mean state is 𝔼_D[ρ] = (1/2)(I+μ Z) and the mean Bloch vector is μ⃗ = (0,0,μ) for some μ∈ [-1,1].
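As a quick numerical sanity check of the lemma above (using an arbitrary toy distribution of our choosing, biased towards |0⟩), one can verify the centering and boundedness of the rescaled Z operator:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Toy distribution (ours): pure states with polar angle in [0, 0.6*pi], so the
# mean Bloch vector points along +Z. Check that Ztilde = (Z - mu*I)/sqrt(1 - mu^2)
# is centered and has second moment at most 1 under this distribution.
rng = np.random.default_rng(0)
thetas = rng.uniform(0, 0.6 * np.pi, size=5000)
phis = rng.uniform(0, 2 * np.pi, size=5000)
states = [np.array([np.cos(t / 2), np.exp(1j * ph) * np.sin(t / 2)])
          for t, ph in zip(thetas, phis)]
rhos = [np.outer(s, s.conj()) for s in states]

mu = np.mean([np.trace(Z @ r).real for r in rhos])
Zt = (Z - mu * I2) / np.sqrt(1 - mu**2)
vals = np.array([np.trace(Zt @ r).real for r in rhos])
print(abs(vals.mean()))   # ~ 0 (centering)
print((vals**2).mean())   # <= 1 (bounded second moment)
```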
We next prove two important components required in the proof of <Ref>.
The first lemma gives an eigenvalue lower bound on a Hermitian matrix that arises when we expand (^2 [ρ]).
Let μ∈ (-1,1) and
M = [ 1 iμ; -iμ 1 ].
Then λ_min(Re(M^⊗ k)) ≥ (1-μ^2)^{k/2} for any k∈ℕ.
Note that M = I + μ Y and Y is imaginary. Thus,
Re((I + μ Y)^⊗ k) = ∑_P∈{I,Y}^k: |P| evenμ^|P|⊗_i=1^k P_i .
Note also that the eigenvalues of the above remain the same if we replace the Y's with Z's.
Thus, the eigenvalues of the above can be indexed by x∈{±1}^k, and can be expressed as follows:
λ(x) = ∑_S⊆ [k]: |S| evenμ^|S| x^S .
Let f: ^k → be the function f(x) = ∏_i=1^k (1+x_i) = ∑_S⊆[k] x^S. Then, we have λ(x) = 1/2(f(μ x) + f(-μ x)).
By the AM-GM inequality,
λ(x) ≥√(f(μ x) f(-μ x)) = ∏_i=1^k √((1+μ x_i) (1-μ x_i)) = (1-μ^2)^k/2 .
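The bound can also be checked numerically for small k (toy values of μ and k chosen by us):

```python
import numpy as np

# Check lambda_min(Re(M^{tensor k})) >= (1 - mu^2)^{k/2} for M = [[1, i*mu], [-i*mu, 1]].
mu, k = 0.7, 4
M = np.array([[1, 1j * mu], [-1j * mu, 1]])
T = M
for _ in range(k - 1):
    T = np.kron(T, M)
lam_min = np.linalg.eigvalsh(np.real(T)).min()
print(lam_min, (1 - mu**2) ** (k / 2))   # lam_min is at least (1 - mu^2)^{k/2}
```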
The next lemma gives an upper bound on the Pauli covariance matrix Σ (recall <Ref>) scaled by a specific diagonal matrix.
Let D be a distribution over the Bloch sphere with mean μ⃗ = (0,0,μ), Pauli second moment matrix of operator norm at most 1-η for some η∈ (0,1), and Pauli covariance matrix Σ.
Let Λ be the diagonal matrix diag((1-μ^2)^{-1/4},(1-μ^2)^{-1/4},(1-μ^2)^{-1/2}).
Then there exists η' ∈ (0,1) such that ‖ΛΣΛ‖_op ≤ 1-η', where η' is a function of η.
We split into two cases based on whether μ^2 ≥η/2.
The case where μ^2 < η/2
is the simpler case: since ‖Σ‖_op is at most the operator norm of the second moment matrix, which is at most 1-η, and ‖Λ‖_op^2 ≤ (1-μ^2)^{-1}, we have
‖ΛΣΛ‖_op ≤‖Σ‖_op/(1 - μ^2) ≤ (1 - η)/(1 - μ^2) ≤ (1-η)/(1-η/2) ≤ 1-η/2 .
Now we consider the case where the inequality is not satisfied.
We note that Σ = - μ⃗μ⃗^⊤ and μ⃗ = (0,0,μ), thus we have that
Σ has the following block structure:
Σ = [ _2 × 2/√(1 - μ^2) b; b^† _zz - μ^2/1 - μ^2 ] ,
where _2× 2 is the top left 2 × 2 block of , _zz is the bottom right entry, and b is the remaining part (its values will not be directly relevant).
Note also that Σ≽ 0, and it is well-known that any PSD matrix of the form [ A b; b^† c ]≽ 0 has operator norm at most A_ op + c.
We thus know that
Σ_ op≤_zz - μ^2/1 - μ^2 + _2 × 2_ op/√(1 - μ^2)≤_zz - μ^2/1 - μ^2 + (_2× 2)/√(1 - μ^2)≤_zz - μ^2/1 - μ^2 + 1 - _zz/√(1 - μ^2) ,
where we use the fact that 1 = () = (_2× 2) + _zz.
Then we have
Σ_ op ≤_zz - μ^2/1 - μ^2 + 1 - _zz/√(1 - μ^2)
= _zz(1/1-μ^2 - 1/√(1-μ^2)) + 1/√(1 - μ^2) - μ^2/1 - μ^2
≤ (1-η)(1/1-μ^2 - 1/√(1-μ^2)) + 1/√(1 - μ^2) - μ^2/1 - μ^2
= 1 - η(1/1-μ^2-1/√(1-μ^2))
≤ 1 - η1 -√(1-η/2)/1-η/2 .
The third line is by the fact that _zz≤_𝗈𝗉≤ 1 - η,
and the last inequality is because the function 1/1-μ^2 - 1/√(1-μ^2) is increasing with μ^2 and that we are in the μ^2 ≥η/2 case.
Thus, combining the two cases, there always exists η' such that
Σ_ op≤ 1 - η'.
Specifically, we have
η' ≥min{η1 -√(1-η/2)/1-η/2, η/2 } > 0 .
As mentioned in <Ref>, our techniques can be extended to learn not just the mapping ρ↦(O[ρ]), but also the joint mapping (O,ρ)↦(O[ρ]). As this is standard, here we only briefly sketch the main ideas.
The general strategy is to produce a classical description of the channel that we can then use to make predictions about properties of output states. To do this, we draw many input states ρ_1,…,ρ_N from , query the channel on each them, and for each of the output states [ρ_j], we apply a randomized Pauli measurement to each of them and use these to form unbiased estimators for the output state. Concretely, given an output state [ρ_j], a randomized Pauli measurement will result in a stabilizer state |ψ^(j)⟩ = ⊗^n_i=1|s^(j)_i⟩∈{|0⟩,|1⟩,|+⟩,|-⟩,|y+⟩,|y-⟩}^⊗ n, and the expectation of ⊗^n_i=1 (3|s^(j)_i⟩⟨s^(j)_i| - I) is [ρ_j]. The classical description of the channel is given by the O(log(1/ϵ))-body reduced density matrices of the input states ρ_1,…,ρ_N, together with the classical encodings of |ψ^(1)⟩,…,|ψ^(N)⟩.
Given an observable O, we can then perform regression to predict the labels (O⊗^n_i=1(3|s^(j)_i⟩⟨s^(j)_i|-I)) given the features {(Pρ_j)}_|P| ≤ O(log (1/ϵ)). Because the labels are unbiased estimates of (O[ρ_j]), the resulting estimator will be an accurate approximation to ρ↦(O[ρ]) for ρ∼.
In this work, we do not belabor these details as they are already investigated in depth in <cit.>. Instead, we focus on the single observable case as this is where the main difficulty lies in extending the results of <cit.> to more general input distributions.
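The single-qubit estimator used in the randomized Pauli measurements above can be written down directly; here is a minimal sketch (helper names ours):

```python
import numpy as np

# Classical-shadow estimator: after a randomized Pauli measurement with outcome
# stabilizer state |s>, the matrix 3|s><s| - I is an unbiased estimator of the
# measured qubit; tensoring over qubits gives the n-qubit estimator used above.
stabilizers = {
    '0': np.array([1, 0]), '1': np.array([0, 1]),
    '+': np.array([1, 1]) / np.sqrt(2), '-': np.array([1, -1]) / np.sqrt(2),
    'y+': np.array([1, 1j]) / np.sqrt(2), 'y-': np.array([1, -1j]) / np.sqrt(2),
}

def shadow_estimate(outcome_labels):
    """Tensor product of 3|s_i><s_i| - I over the measured qubits."""
    est = np.array([[1.0]])
    for lab in outcome_labels:
        s = stabilizers[lab]
        est = np.kron(est, 3 * np.outer(s, s.conj()) - np.eye(2))
    return est
```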
Now we prove <Ref> which shows the existence of a low-degree approximator.
Assume w.l.o.g., as in <Ref>, that _D[ρ] = 1/2(I + μ Z) and μ⃗ = (0,0,μ).
Let be expressed in the basis B = {I, X, Y, Z - μ I/√(1-μ^2)}^⊗ n as in <Ref>.
For a subset S ⊆ [n], we will denote by {P ∈ B: P ∼ S} the set of basis elements with non-identity components at indices in S and
identity components at indices in S.
We will also denote by |P| the number of non-identity components in P.
Let ^> d∑_|S| > d∑_P ∼ S(S)P.
We will show that ^>d satisfies _ρ∼ D^⊗ n[(^> dρ)^2] ≤ (1 - η')^d where η' ∈ (0,1) depends on η.
We first note a bound on the related quantity _ρ∼ D^⊗ n [(^2 ρ)].
Because _ op≤ 1 and ([ρ]) ≤ 1, (^2 [ρ]) ≤ 1 as well.
We will expand this quantity as a quadratic form.
_ρ∼ D^⊗ n [(^2 ρ)] = ∑_P,Q ∈ B(P) (Q)(PQ [ρ])
= ∑_P,Q ∈ B(P) (Q)(⊗_i=1^n P_iQ_i [ρ_i])
= ∑_P,Q ∈ B(P) (Q)∏_i=1^n( P_iQ_i [ρ_i]) .
Note that the product is 0 whenever exactly one of P_i, Q_i is I for any i ∈ [n] (as ((Z-μ I)· (I+μ Z)) = 0).
Therefore, we can partition the terms into groups that share a subset of identity variables:
_ρ∼ D^⊗ n [(^2 ρ)] = ∑_S ⊆ [n]∑_P,Q ∼ S(P) (Q)∏_i=1^n( P_iQ_i [ρ_i])
= ∑_S ⊆ [n]_S^† M^⊗ |S|_S ,
where _S is the vector of coefficients for the set {P: P ∼ S}, and M is the 3 × 3 matrix such that M_P,Q = ( PQ _D[ρ]):
M = [ 1 iμ 0; -iμ 1 0; 0 0 1 ] .
Here, the entry iμ arises because XY = iZ and (XY ·1/2(I+μ Z)) = iμ.
Since M is positive semidefinite,
we have _S^† M^⊗ |S|_S ≥ 0 for all S,
and therefore we have
1 ≥_ρ∼ D^⊗ n [(^2 ρ)]
= ∑_S ⊆ [n]_S^† M^⊗ |S|_S
≥∑_S ⊆ [n]:|S| > d_S^† M^⊗ |S|_S .
Now we will write the desired quantity _ρ∼ D^⊗ n [(^> dρ)^2] as a quadratic form as well:
_ρ∼ D^⊗ n [(^> dρ)^2] = ∑_P,Q ⊆ B: |P|,|Q|>d(P) (Q) ∏_i ∈ [n][(P_i ρ) (Q_i ρ)] .
Similarly, the product is 0 whenever exactly one of P_i and Q_i is identity (by the guarantee of <Ref> that [(P_iρ)] = 0 for P_i ≠ I), so we can make the same partition:
_ρ∼ D^⊗ n [(^>dρ)^2] = ∑_|S|>d_S^† M'^⊗ |S| .
Here M' is 3 × 3 matrix such that M'_PQ = _ρ∼ D[(Pρ)(Qρ)] for P,Q ∈{X,Y,Z-μ I/√(1-μ^2)}; in other words, it is the second moment matrix of the non-identity elements of our biased Pauli basis.
Below, we show the entries of M' in terms of the Pauli covariance matrix Σ:
M' = [ Σ_xx Σ_xy Σ_xz/√(1 - μ^2); Σ_xy Σ_yy Σ_yz/√(1 - μ^2); Σ_xz/√(1 - μ^2) Σ_yz/√(1 - μ^2) Σ_zz/1 - μ^2 ] .
Our aim is to bound M' in terms of M in order to show the existence of some η' ∈ (0,1) such that
[(^> dρ)^2] ≤ (1 - η')^d ·((^> d)^2 [ρ]) ≤ (1-η')^d.
Comparing Eq. (<ref>) and (<ref>),
it suffices to show that
(1 - η')^|S|_S^† M^⊗ |S|_S ≥_S^† M'^⊗ |S|_S
for all vectors _S.
Crucially, since the Pauli coefficients _S must be real, it suffices to prove that
M'^⊗ |S|≼
(1-η')^|S|·Re(M^⊗ |S|) .
Let M_2× 2 be the top left 2 × 2 block in M, which is exactly the matrix in <Ref>.
From <Ref>, we have λ_min(Re(M_2× 2^⊗ |S|)) ≥ (1- μ^2)^ |S|/2.
By the block structure of M, it follows that Re(M^⊗ |S|) ≽(√(1-μ^2), √(1-μ^2), 1)^⊗ |S|.
Thus, we can establish Eq. (<ref>) by proving that
M' ≼ (1-η') · diag( √(1-μ^2), √(1-μ^2), 1 ) .
Now, note that M' can be written as CΣC, where Σ is the covariance matrix of D and C = diag(1,1, (1-μ^2)^{-1/2}).
Letting Λ = diag((1 - μ^2)^{-1/4}, (1 - μ^2)^{-1/4}, (1 - μ^2)^{-1/2}) (which is the diagonal matrix defined in <Ref>), we can see that Eq. (<ref>) is equivalent to
‖ΛΣΛ‖_op ≤ 1-η' ,
since ΛΣΛ = diag((1-μ^2)^{-1/4},(1-μ^2)^{-1/4},1)· M' ·diag((1-μ^2)^{-1/4},(1-μ^2)^{-1/4},1). By <Ref>, this is true by our assumption that the Pauli second moment matrix of D has operator norm at most 1-η.
This establishes Eq. (<ref>) and thus Eq. (<ref>), and from Eq. (<ref>) and (<ref>), we have
1 ≥_ρ∼ D^⊗ n [(^2 ρ)]
≥∑_|S| > d_S^† M^⊗ |S|_S
≥ (1-η')^-d∑_|S| > d_S^† M'^⊗ |S|_S
= (1-η')^-d_ρ∼ D^⊗ n[(^>dρ)^2] .
Therefore,
_ρ∼ D^⊗ n[((ρ) - (^≤ dρ))^2] ≤ (1-η')^d .
Hence, we have obtained the claimed result.
In the proof of <Ref>, we relate the quantity _ρ∼ D^⊗ n[(^>dρ)^2] that we wish to bound to the related quantity _ρ∼ D^⊗ n [(^2 ρ)], which is at most 1 since _ op≤ 1.
Due to the choice of our biased Pauli basis B = {I,X,Y, Z-μ I/√(1-μ^2)}^⊗ n, we may write both quantities as sums of _S^† M'^⊗ |S|_S and _S^† M^⊗ |S|_S
(see Eq. (<ref>) and (<ref>)).
Suppose ρ is a product of different distributions where each qubit has mean Bloch vector μ⃗_i = (0,0,μ_i) (after rotation; see <Ref>) and second moment bounded by 1-η.
We will instead use the basis B = ⊗_i=1^n {I,X,Y, Z-μ_i I/√(1-μ_i^2)}.
One can easily see that the above quantities are essentially the same, with M^⊗ |S| replaced by ⊗_i∈ S M_i and similarly for M'.
Then, the next steps in the proof (establishing Eq. (<ref>) and (<ref>)) are exactly the same.
§ LOWER BOUNDS
In this section, we show lower bounds in the classical case, which automatically imply hardness in the quantum case.
In <Ref>, we prove <Ref>, which shows hardness of learning without the product distribution assumption in <Ref>.
In <Ref>, we show that truncating in the unbiased basis fails when the distribution is not mean zero.
§.§ Lower bounds for learning non-product distributions
In this section, we prove <Ref>. We show that if is an arbitrary distribution, then even if is supported on [-(1-η), (1-η)]^n (i.e., in the interior of the hypercube), there is no learning algorithm.
Let C be a code over {±1}^n of size 2^Θ(n) and distance n/4.
The following fact is standard.
For any constant > 0, any learning algorithm that can learn an arbitrary function f: {±1}^n →{±1} over C to error requires 2^Ω(n) queries.
Let η = 0.1.
We set the distribution to be the uniform distribution over (1-η)· C, which is supported in [-(1-η), (1-η)]^n.
Let f: {±1}^n →{±1} be an arbitrary function.
Since the code C has distance n/4, we can without loss of generality assume that f(x') = f(x) whenever d(x,x') ≤ n/8.
Suppose for contradiction that there is an algorithm that, with only 2^o(n) queries to f, outputs a function g: [-1,1]^n → such that _x ∼[(g(x) - f(x))^2] ≤.
Let p 1-η/2, and let Ber(p) be the distribution where we have +1 with probability p and -1 otherwise.
Then, for any x∈{±1}^n, we have f((1-η)x) = _z∼Ber(p)^⊗ n[f(x ∘ z)], where ∘ denotes entry-wise product.
We claim that for any x∈ C, |_z∼Ber(p)^⊗ n[f(x ∘ z)] - f(x)| ≤ o_n(1).
The number of -1 coordinates in z is distributed as a Bin(n,η/2).
By the Chernoff bound, [Bin(n,η/2) ≥ (1+δ)1/2η n] ≤ e^-O(δ^2 η n) = o_n(1).
In particular, for η = 0.1, we have that [Bin(n,η/2) ≥ n/8] ≤ o_n(1).
Thus, with probability 1-o_n(1), d(x∘ z, x) < n/8, which means that f(x∘ z) = f(x).
This proves that |f((1-η)x) -f(x)| ≤ o_n(1) for all x∈ C.
Then, let h(x) = g((1-η)x).
_x∈ C(h(x) - f(x))^2 = _x∈ C(g((1-η)x) - f(x))^2
≤_x∈ C(g((1-η)x) - f((1-η)x))^2 + o_n(1)
= _x∼(g(x) - f(x))^2 + o_n(1)
≤ + o_n(1) .
This means that h is an -approximation of f over C.
This contradicts <Ref> thus completing the proof.
§.§ Lower bounds for unbiased degree truncation
In <Ref>, if the product distribution over [-(1-η), 1-η]^n has mean zero, then directly truncating f with respect to the standard monomial basis at degree O(log(1/)/log(1/(1-η))) (independent of n) suffices.
However, in this section, we will show that without the mean zero assumption, even truncating at Ω(n) degree w.r.t. the monomial basis does not give a small approximation error.
Our counter-example implies that truncation in any distribution-oblivious basis will fail on some product distribution.
In the quantum setting, this implies that low-degree truncation in the standard Pauli basis fails on some product distribution as well.
The counter-example is quite simple: f is the multilinear extension of the Boolean majority function, and the distribution is supported on a single nonzero point (which is in fact a product distribution).
[<cit.>]
Let f be the multilinear extension of the Boolean majority function.
Then the ℓ_2 Fourier weight on terms of degree k is Θ(k^-3/2).
The following is a well-known fact in approximation theory.
[Chebyshev extremal polynomial inequality <cit.>]
Let p be a degree-d univariate polynomial with leading coefficient 1.
Then, max_x∈[-1,1] |p(x)| ≥ 2^-d+1.
Let f: {±1}^n →{±1} be the majority function extended to the domain [-1,1]^n.
Let 0 < a < b < 1 be fixed constants.
Then, there exist δδ(a,b) ∈ (0,1) and t^*∈ [a,b] such that the degree-δ n truncation f^≤δ n has |f^≤δ n(t^* ·1⃗)| ≥ω_n(1).
Let g(t) ≜ f^≤ d(t·1⃗), which is a univariate polynomial of degree d, and consider the shifted polynomial g̃(t) = g((b-a)/2 · t + (a+b)/2).
The leading coefficient of g̃, denoted c̃_d, is c̃_d = c_d·((b-a)/2)^d, where c_d is the leading coefficient of g.
Since g(t) = f^≤ d(t·1⃗), we have c_d = ∑_S: |S|=d f̂(S).
For the majority function, <Ref> implies that \binom{n}{d}·f̂(S)^2 = Θ(d^{-3/2}) for all S of size d, thus
|c̃_d| = ((b-a)/2)^d ·\binom{n}{d}^{1/2}·Θ(d^{-3/4}) .
Then, by <Ref>, there must be an s^*∈[-1,1] such that
|g̃(s^*)| ≥ 2^{-d+1}·((b-a)/2)^d ·\binom{n}{d}^{1/2}·Θ(d^{-3/4}) .
If d = δ n, then \binom{n}{d}≥ (1/δ)^d = e^{d·log(1/δ)}.
Thus, given 0 < a < b < 1, there exists a δ∈ (0,1) such that the above is exp(Ω(d)).
Thus, there exists a t^*∈ [a,b] such that |f^≤ d(t^* ·1⃗)| ≥ω_n(1).
§ ACKNOWLEDGMENTS
SC and HH would like to thank Ryan O'Donnell for a helpful discussion at an early stage of this project. Much of this work was completed while JD and JL were interns at Microsoft Research.
|
http://arxiv.org/abs/2409.03526v1 | 20240905133652 | Does Subset Sum Admit Short Proofs? | [
"Michał Włodarczyk"
] | cs.DS | [
"cs.DS",
"cs.CC"
] |
§ ABSTRACT
We investigate the question whether Subset Sum can be solved by a polynomial-time algorithm with access to a certificate of length poly(k) where k is the maximal number of bits in an input number.
In other words, can it be solved using only a few nondeterministic bits?
This question has motivated us to initiate a systematic study of certification complexity of parameterized problems.
Apart from Subset Sum, we examine problems related to integer linear programming, scheduling, and group theory.
We reveal an equivalence class of problems sharing the same hardness with respect to having a polynomial certificate.
These include Subset Sum and Boolean Linear Programming parameterized by the number of constraints.
Secondly, we present new techniques for establishing lower bounds in this regime.
In particular, we show that Subset Sum in permutation groups is at least as hard for nondeterministic computation as 3Coloring in bounded-pathwidth graphs.
§ INTRODUCTION
Nondeterminism constitutes a powerful lens for studying complexity theory.
The most prominent instantiation of this concept is the class NP
capturing all problems with solutions checkable in polynomial time.
Another well-known example is the class NL of problems that can be solved nondeterministically in logarithmic space <cit.>.
But the usefulness of nondeterminism is not limited to merely filtering candidates for deterministic classes.
A question studied in proof complexity theory is how much nondeterminism is needed to solve certain problems or, equivalently, how long proofs have to be to prove certain theorems <cit.>.
Depending on the considered logic, these theorems may correspond to instances of problems complete for NP <cit.>, coNP <cit.>, or W[SAT] <cit.>.
The central goal of proof complexity is to
establish lower bounds for increasingly powerful proof systems in the hope of building up techniques to prove, e.g., NP coNP.
What is more, there are connections between nondeterministic running time lower bounds and fine-grained complexity <cit.>.
In the context of online algorithms, nondeterminism is used to measure how much knowledge of future requests is needed to achieve a certain performance level <cit.>.
Bounded nondeterminism plays an important role in organizing parameterized complexity theory.
The first class studied in this context was W[P] comprising parameterized problems solvable in FPT time, i.e., f(k)·poly(n), when given access to f(k)·log(n) nondeterministic bits <cit.>.
In the last decade, classes defined by nondeterministic computation in limited space have attracted significant attention <cit.> with a recent burst of activity around the class XNLP <cit.> of problems solvable in nondeterministic time f(k)·poly(n) and space f(k)·log(n).
While the study of W[P] and XNLP concerns problems considered very hard from the perspective of FPT algorithms, a question that has eluded a systematic examination so far
is how much nondeterminism is necessary to solve FPT problems in polynomial time.
A related question has been asked about the amount of nondeterminism needed to solve d-CNF-SAT in sub-exponential time <cit.>.
To concretize our question,
we say that a parameterized problem P admits a polynomial certificate if an instance (I,k) can be solved in polynomial time when given access to poly(k) nondeterministic bits[It is more accurate to say that a “certificate” refers to a particular instance while a problem can admit a “certification”. We have decided however to choose a shorter and more established term. We also speak of “certificates” instead of “witnesses” because “witness” sometimes refers to a concrete representation of a solution for problems in NP, see e.g., <cit.>.].
For example, every problem in NP admits a polynomial certificate under parameterization by the input length.
This definition captures, e.g., FPT problems solvable via branching as a certificate can provide a roadmap for the correct branching choices.
Furthermore, every parameterized problem that is in NP and admits a polynomial kernelization has a polynomial certificate given by the NP certificate for the compressed instance.
The containment in NP plays a subtle role here: Wahlström <cit.> noted that a polynomial compression for the K-Cycle problem is likely to require a target language from outside NP exactly because K-Cycle does not seem to admit a polynomial certificate.
In this article we aim to
organize the folklore knowledge about polynomial certification into a systematic study, provide new connections, techniques, and motivations,
and lay the foundations for a hardness framework.
When is certification easy or hard?
The existence of a certificate of size p(k) = poly(k) entails an FPT algorithm with running time 2^{p(k)}·poly(n),
by enumerating all possible certificates. We should thus restrict ourselves only to problems solvable within such running time.
On the other hand, when such an algorithm is available then one can solve the problem in polynomial time whenever p(k) ≤log n.
Therefore, it suffices to handle the instances with log n < p(k).
Consequently,
for such problems it is equivalent to ask for a certificate of size poly(k+log n) as this can be bounded polynomially in k via the mentioned trade-off (see <Ref>).
This observation yields polynomial certificates for problems parameterized by the solution size, such as Multicut <cit.> or Planarization <cit.>, which do not fall into the previously discussed categories.
What are the problems solvable in time 2^{k^{O(1)}}·n^{O(1)} yet unlikely to admit a polynomial certificate?
The Bandwidth problem has been conjectured
not to belong to W[P] because one can merge multiple instances into one, without increasing the parameter, in such a way that the large instance is solvable if and only if all the smaller ones are.
It is conceivable that a certificate for the large instance should require at least one bit for each of the smaller instances, hence it cannot be short <cit.>.
The same argument applies to every parameterized problem that admits an AND-composition, a construction employed to rule out polynomial kernelization <cit.>, which effectively encodes a conjunction of multiple 3SAT instances as a single instance of the problem.
Such problems include those parameterized by graph width measures like treewidth or pathwidth, and it is hard to imagine polynomial certificates for them.
However, kernelization hardness can be also established using an OR-composition, which does not stand at odds with polynomial certification.
The close connection between AND-composition and polynomial certificates has been observed
by Drucker, Nederlof, and Santhanam <cit.>
who focused on parameterized search problems solvable by
One-sided Probabilistic Polynomial (OPP) algorithms (cf. <cit.>).
They asked which problems admit an OPP algorithm that finds a solution with probability 2^{-poly(k)}.
This may seem much more powerful than using poly(k) nondeterministic bits but the success probability can be replaced by Ω(1) when given a single access to an oracle solving k-variable Circuit-SAT <cit.>.
A former result of Drucker <cit.> implies that an OPP algorithm with success probability 2^{-poly(k)} (also called a polynomial Levin witness compression) for a search problem admitting a so-called constructive AND-composition would imply <cit.>.
Constructive vs. non-constructive proofs.
The restriction to search problems is crucial in the work <cit.> because
the aforementioned hardness result does not apply to algorithms that may
recognize yes-instances without constructing a solution explicitly but by proving its existence in a non-constructive fashion.
As noted by Drucker <cit.>, his negative results do not allow to rule out this kind of algorithms.
In general, search problems may be significantly harder than their decision counterparts.
For example, there are classes of search problems for which the solution is always guaranteed to exist (e.g., by the pigeonhole principle, in the case of class PPP <cit.>) but the existence of a polynomial algorithm computing some solution is considered unlikely.
For a less obvious example, consider finding a
non-trivial divisor
of a given integer n.
A polynomial (in log n) algorithm finding a solution could be used to construct the factorization of n, resolving a major open problem.
But the existence of a solution is equivalent to n being composite and this can be verified in polynomial time by the AKS primality test <cit.>.
As yet another example, consider the problem of finding a knotless embedding of a graph, i.e., an embedding in ℝ^3 in which every cycle forms a trivial knot in a topological sense.
The class 𝒢 of graphs admitting such an embedding is closed under taking minors so Robertson and Seymour's Theorem ensures that 𝒢 is characterized by a finite set of forbidden minors <cit.>, leading to a polynomial algorithm for recognizing graphs from 𝒢.
Observe that excluding all the forbidden minors yields a non-constructive proof that a knotless embedding exists.
On the other hand, the existence of a polynomial algorithm constructing such an embedding remains open <cit.>.
To address this discrepancy, we propose the following conjecture which asserts that not only finding assignments to many instances of 3SAT requires many bits of advice but even certifying that such assignments exist should require many bits of advice.
We define the parameterized problem AND-3SAT[k], where an instance consists of a sequence of n many 3SAT formulas on k variables each, and an instance belongs to the language if all these formulas are satisfiable. We treat k as a parameter.
does not admit a polynomial certificate unless .
Observe that AND-3SAT[k] is solvable in time 2^k·poly(k)· n, so the questions whether it admits a certificate of size poly(k), poly(k)·log n, or poly(k + log n) are equivalent.
We formulate <Ref> as a conditional statement because we believe that its proof in the current form is within the reach of the existing techniques employed in communication complexity and kernelization lower bounds <cit.>.
Then the known examples of AND-composition could be interpreted as reductions from AND-3SAT[k] that justify non-existence of polynomial certificates.
§.§ The problems under consideration
Our focus: Subset Sum.
In Subset Sum we are given a sequence of n integers (also called items), a target integer t (all numbers encoded in binary), and we ask whether there is a subsequence summing up to t.
This is a fundamental NP-hard problem that can be solved in pseudo-polynomial time O(tn) by the classic algorithm by Bellman from the 50s <cit.>.
In 2017 the running time was improved to Õ(t+n) by Bringmann <cit.>.
Subset Sum reveals miscellaneous facets in complexity theory: it has been studied from the perspective of exponential algorithms <cit.>, logarithmic space <cit.>,
approximation <cit.>, kernelization <cit.>, fine-grained complexity <cit.>,
cryptographic systems <cit.>, and average-case analysis <cit.>.
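For reference, Bellman's dynamic program mentioned above is only a few lines; the sketch below (function name ours) decides an instance in time O(t·n).

```python
# Minimal sketch of Bellman's pseudo-polynomial dynamic program for Subset Sum:
# reachable[s] records whether some subsequence sums to s; runtime O(t * n).
def subset_sum(items, t):
    reachable = [False] * (t + 1)
    reachable[0] = True
    for a in items:
        for s in range(t, a - 1, -1):   # iterate downwards: each item used at most once
            if reachable[s - a]:
                reachable[s] = True
    return reachable[t]

# Example: subset_sum([3, 34, 4, 12, 5, 2], 9) == True  (4 + 5 = 9)
```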
Our motivating goal is the following question.
Does Subset Sum admit a polynomial certificate for parameter k = log t?
From this point of view, the pseudo-polynomial time O(tn) can be interpreted as FPT running time O(2^k n).
It is also known that a kernelization of size poly(k) is unlikely <cit.>.
Observe that we cannot hope for a certificate of size o(log t) because the algorithm enumerating all possible certificates would solve Subset Sum in time 2^{o(log t)}·n^{O(1)} = t^{o(1)}·n^{O(1)}, contradicting the known lower bound based on the Exponential Time Hypothesis (ETH) <cit.>.
The parameterization by the number of relevant bits exhibits a behavior different from those mentioned so far, that is, width parameters and solution size, making it an uncharted territory for nondeterministic algorithms.
The study of Subset Sum[log t] was suggested by Drucker et al. <cit.> in the context of polynomial witness compression.
These two directions are closely related yet ultimately
incomparable: the requirement to return a solution makes the task more challenging but the probabilistic guarantee is less restrictive than constructing a certificate.
However, establishing hardness in both paradigms boils down to finding
a reduction of a certain kind from AND-3SAT[k].
The density of a Subset Sum instance is defined as n / log(t) in the cryptographic context <cit.>.
As it is straightforward to construct a certificate of size n, we are mostly interested in instances of high density, which also appear hard for exponential algorithms <cit.>.
On the other hand, instances that are very dense enjoy a special structure that can be leveraged algorithmically <cit.>.
The instances that seem the hardest in our regime are those in which n is slightly superpolynomial in log t.
Apart from the obvious motivation to better understand the structure of Subset Sum, we believe that the existence of short certificates for dense instances could be valuable for cryptography.
There are several other studied variants of the problem. In Unbounded Subset Sum the input is specified in the same way but one is allowed to use each number repeatedly.
Interestingly, this modification enables us to certify a solution with
O(log^2 t) bits (<Ref>).
Another variant is to replace the addition with some group operation.
In Group-G Subset Sum we are given a sequence of n elements from G and we ask whether one can pick a subsequence whose group product equals the target element t ∈ G.
Note that we do not allow to change the order of elements when computing the product what makes a difference for non-commutative groups.
This setting has been mostly studied for G being the cyclic group ℤ_q <cit.>.
To capture the hardness of an instance, we choose the parameter to be log |G| (or equivalent).
In particular, we will see that Group-ℤ_q Subset Sum[log q] is equivalent in our regime to Subset Sum[log t].
We will also provide examples of groups for which certification is either easy or conditionally hard.
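The analogous pseudo-polynomial dynamic program for the cyclic case, i.e. FPT time 2^k·n for the parameter k = log q, is equally short (sketch with names of our choosing):

```python
# Minimal sketch: the natural O(q * n) dynamic program for Group-Z_q Subset Sum,
# deciding whether some subsequence of `items` sums to t modulo q.
def zq_subset_sum(items, t, q):
    reachable = [False] * q
    reachable[0] = True
    for a in items:
        new = reachable[:]              # copy so that each item is used at most once
        for s in range(q):
            if reachable[s]:
                new[(s + a) % q] = True
        reachable = new
    return reachable[t % q]
```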
Integer Linear Programming.
We shall consider systems of equations in the form {Ax=b | x ∈{0,1}^n} where A ∈ℤ^m × n, b ∈ℤ^m, with the parameter being the number of constraints m.
This is a special case of Integer Linear Programming (ILP) over boolean domain, known in the literature as pseudo-boolean optimization.
This case has been recognized as particularly interesting by the community working on practical ILP solvers because pseudo-boolean optimization can be treated with SAT solvers <cit.>.
It is also applicable in the fields of approximation algorithms <cit.> and election systems <cit.>.
Eisenbrand and Weismantel <cit.> found an elegant application of the Steinitz Lemma to this problem and gave
an FPT algorithm
with running time (||A||_∞ + m)^{O(m^2)}· n (cf. <cit.>).
For simplicity we will consider variants with bounded ||A||_∞.
In the 0-1 ILP problem we restrict ourselves to matrices A ∈{-1,0,1}^m × n and in Monotone 0-1 ILP we consider A ∈{0,1}^m × n.
A potential way to construct a short certificate would be to tighten the proximity bounds from <cit.> for such matrices: find an extremal solution x^* to the linear relaxation {Ax=b | x ∈ [0,1]^n} and hope that some integral solution z lies nearby, i.e., ||z-x^*||_1 ≤(m).
This however would require tightening the bounds on the vector norms in the Graver basis of the matrix A.
Unfortunately, there are known lower bounds making this approach hopeless <cit.>.
It may be tempting to seek the source of hardness in the large values in the target vector b.
We will see however that the problem is no easier when we assume b=0 and look for any non-zero solution.
We refer to such problem as 0-Sum 0-1 ILP.
A special case of Monotone 0-1 ILP is given by a matrix A ∈{0,1}^m × n with n = \binom{m}{d} columns corresponding to all size-d subsets of {1,…,m}.
Then Ax = b has a boolean solution if and only if b forms a degree sequence of some d-hypergraph, i.e., it is d-hypergraphic.
There is a classic criterion by Erdős for a sequence to be graphic <cit.> (i.e., 2-hypergraphic) but already for d=3 deciding if b is d-hypergraphic becomes NP-hard <cit.>.
It is straightforward to certify a solution with m^d bits, which places the problem in NP for each fixed d, but it is open whether it is in NP when d is a part of the input (note that the matrix A is implicit, so the input size is (mdlog m)).
This basically boils down to the same dilemma: can we certify the existence of a boolean ILP solution x without listing x in its entirety?
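As a purely illustrative sketch (ours, not from the paper; all names are made up), the straightforward m^d-bit certificate mentioned above can be checked by listing the chosen d-subsets explicitly and verifying that they realize the degree sequence b.

def verify_hypergraphic(b, d, chosen_edges):
    """b: degree sequence over [m]; chosen_edges: distinct d-subsets of range(m)."""
    m = len(b)
    degrees = [0] * m
    seen = set()
    for edge in chosen_edges:
        edge = tuple(sorted(set(edge)))
        if len(edge) != d or edge in seen or not all(0 <= v < m for v in edge):
            return False
        seen.add(edge)
        for v in edge:
            degrees[v] += 1          # each hyperedge contributes 1 to d degrees
    return degrees == list(b)

print(verify_hypergraphic([2, 1, 2, 1], 2, [(0, 2), (0, 1), (2, 3)]))   # True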
There is yet another motivation to study the certification complexity of 0-1 ILP.
If this problem admits a certificate of size (m) then any other problem that can be modeled by 0-1 ILP with few constraints must admit a short certificate as well.
This may help classifying problems into these that can be solved efficiently with ILP solvers and those that cannot.
Other problems parameterized by the number of relevant bits.
A classic generalization of Subset Sum is the Knapsack problem where each item is described by a size p_i and a weight w_i; here we ask for a subset of items of total weight at least w but total size not exceeding t.
Following Drucker et al. <cit.> we parameterize it by the number of bits necessary to store items' sizes and weights, i.e., log(t + w).
The need to process weights makes Knapsack harder than Subset Sum from the perspective of fine-grained complexity <cit.>, but the two problems are essentially equivalent from the viewpoint of exponential algorithms <cit.>.
We will see that they are equivalent in our regime as well.
We will also consider the following scheduling problem which is in turn a generalization of Knapsack.
In Scheduling Weighted Tardy Jobs
we are given a set of n jobs, where each
job j ∈ [n] has a positive integer processing time p_j, weight w_j, and due date d_j.
We schedule the jobs on a single machine and we want to minimize the total weight of jobs completed after their due dates (those jobs are called tardy).
Equivalently, we try to maximize the total weight of jobs completed in time.
In the scheduling literature, Scheduling Weighted Tardy Jobs is referred to as 1 || ∑ w_jU_j using Graham’s notation.
The problem is solvable in pseudo-polynomial time (n· d_max) by the classic Lawler and Moore's algorithm <cit.>.
The interest in 1 || ∑ w_jU_j has been revived due to the recent advances in fine-grained complexity
<cit.>.
Here, the parameter that captures the number of relevant bits is
log (d_max + w_max).
§.§ Our contribution
The standard polynomial parameter transformation (PPT) is a polynomial-time reduction between parameterized problems that maps an instance with parameter k to one with parameter k' = (k).
We introduce the notion of a nondeterministic polynomial parameter transformation (NPPT) which extends PPT by allowing the reduction to guess (k) nondeterministic bits.
Such reductions preserve the existence of a polynomial certificate.
We write P ≤_nppt Q (resp. P ≤_ppt Q) to indicate that P admits an NPPT (resp. a PPT) into Q.
We demonstrate how NPPTs help us organize the theory of polynomial certification, similarly to how PPTs are useful for organizing the theory of Turing kernelization <cit.>.
As our first result, we present an equivalence class of problems that share the same certification-hardness status as Subset Sum[log t].
In other words, either all of them admit a polynomial certificate or none of them.
Despite apparent similarities between these problems, some of the reductions require a nontrivial use of nondeterminism.
The following parameterized problems are equivalent with respect to NPPT:
* Subset Sum[log t]
* Knapsack[log(t+w)], Knapsack[log(p_max+w_max)]
* 0-1 ILP[m], Monotone 0-1 ILP[m], 0-Sum 0-1 ILP[m]
* Group-_q Subset Sum[log q]
Even though we are unable to resolve <Ref>, we believe that revealing such an equivalence class supports the claim that a polynomial certificate for Subset Sum[log t] is unlikely.
Otherwise, there must be some intriguing common property of all problems listed in <Ref> that has eluded researchers so far despite extensive studies in various regimes.
Next, we present two negative results.
They constitute a proof of concept that AND-3SAT[k] can be used as a non-trivial source of hardness.
First, we adapt a reduction from <cit.> to show that scheduling with weights and due dates is hard assuming <Ref>.
[]theoremthmMainScheduling
AND-3SAT[k] ≤_ppt Scheduling Weighted Tardy Jobs[log(d_max + w_max)].
It is possible to formulate this result in terms of AND-composition but we chose not to work with this framework since it is tailored for refuting kernelization and relies on concepts that do not fit into our regime (e.g., polynomial relation <cit.>).
Our second hardness result involves Group-S_k Subset Sum[k]: a variant of Subset Sum on permutation groups.
Such groups contain exponentially-large cyclic subgroups (see <Ref>) so this problem is at least as hard as Group-_q Subset Sum[log q] (which is equivalent to ).
We reduce from 3Coloring parameterized by pathwidth, which is at least as hard as AND-3SAT[k] with respect to PPT.
Indeed, we can transform each 3SAT formula in the input (each of size (k^3)) into an instance of 3Coloring via the standard NP-hardness proof, and take the disjoint union of such instances, which implies AND-3SAT[k] ≤_ppt 3Coloring[pw].
Notably, the reduction in the other direction is unlikely (see <Ref>) so
3Coloring[pw] is probably harder than AND-3SAT[k].
theoremthmMainPermutation
3Coloring[pw] ≤_nppt Group-S_k Subset Sum[k].
Consequently, Group-S_k Subset Sum[k]
does not admit a polynomial certificate assuming <Ref> and NP ⊄ coNP/poly.
Unlike <Ref>, this time establishing hardness requires a nondeterministic reduction.
An interesting feature of 3Coloring[pw] is that it is NL-complete under logspace reductions when
the pathwidth pw is restricted to (log n) <cit.>.
On the other hand, Subset Sum can be solved in time (tn^2) and space polylog(tn) using algebraic techniques <cit.>.
Therefore, obtaining a logspace PPT from 3Coloring[pw] to Subset Sum[log t] (where pw = log n implies t = 2^polylog(n))
would lead to a surprising consequence: a proof that
NL ⊆ DSPACE(polylog(n)) that is significantly different from Savitch's Theorem (see also discussion in <cit.> on low-space determinization).
This suggests that a hypothetical reduction to Subset Sum[log t] should either exploit the “full power” of NPPT (so it cannot be improved to a logspace PPT) or start directly from AND-3SAT[k].
Finally, we examine the case of the group family _k^k on which Subset Sum is still NP-hard (as this generalizes Subset Sum on cyclic groups)
but enjoys a polynomial certificate.
Specifically,
we exploit the bound on the maximal order of an element in ^k_k to prove that
there always exists a solution of bounded size.
[]lemmathmZkk
Group-^k_k Subset Sum[k] admits a polynomial certificate.
In summary, Group-G Subset Sum appears easy for G=^k_k (due to bounded maximal order), hard for G = S_k (due to non-commutativity), and the case G = _2^k lies somewhere in between.
In the light of <Ref>, tightening this gap seems a promising avenue to settle <Ref>.
Organization of the paper.
We begin with the preliminaries where we formally introduce the novel concepts, such as NPPT.
We prove Theorems <ref> and <ref> in Sections <ref> and <ref>, respectively.
The proofs marked with () can be found in the appendix.
§ PRELIMINARIES
We denote the set {1,…,n} by [n].
For a sequence x_1, x_2, …, x_n, its subsequence is any sequence of the form x_i_1, …, x_i_m for some choice of increasing indices 1 ≤ i_1 < … <i_m ≤ n.
All considered logarithms are binary.
A parameterized problem P is formally defined as a subset of Σ^* ×.
For the sake of disambiguation, whenever we refer to a parameterized problem, we denote the choice of the parameter in the [·] bracket, e.g., 3Coloring[pw].
We call P fixed-parameter tractable (FPT) if the containment (I,k) ∈ P can be decided in time f(k)·(|I|) for some computable function f.
We say that P admits a polynomial compression into a problem Q if there is a polynomial-time algorithm that transforms (I,k) into an equivalent instance of Q of size (k).
If Q coincides with the non-parameterized version of P then such an algorithm is called a polynomial kernelization.
A polynomial Turing kernelization for P is a polynomial-time algorithm that determines if (I,k) ∈ P using an oracle that can answer if (I',k') ∈ P whenever |I'|+k' ≤(k).
Let P Σ^* × be a parameterized problem.
We say that P has a polynomial certificate if there is an algorithm 𝒜 that, given an instance (I,k) of P and a string y of (k) bits, runs in polynomial time and accepts or rejects (I,k) with the following guarantees.
* If (I,k) ∈ P, then there exists y for which 𝒜 accepts.
* If (I,k) ∉P, then 𝒜 rejects (I,k) for every y.
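As a short illustration of this definition (the code and all names in it are ours, not the paper's): for Subset Sum the trivial certificate is an n-bit string selecting a subsequence, and it is verified in polynomial time; the regime studied here asks whether certificates of size polynomial in log t can exist instead.

def verify_subset_sum(p, t, certificate_bits):
    """Accept iff the 0/1 string selects a subsequence of p summing to t."""
    assert len(certificate_bits) == len(p)
    total = sum(p_i for p_i, bit in zip(p, certificate_bits) if bit)
    return total == t

# Example: instance ([3, 5, 7, 11], 18) with a certificate picking 7 and 11.
print(verify_subset_sum([3, 5, 7, 11], 18, [0, 0, 1, 1]))   # True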
Let P Σ^* × and Q Σ^*.
Suppose that Q ∈ NP and P admits a polynomial compression into Q.
Then P admits a polynomial certificate.
For a given instance (I,k) of P we execute the compression algorithm to obtain an equivalent instance I' of Q of size (k).
Since Q ∈ NP, the instance I' can be solved in polynomial time with access to a string y of (|I'|) = (k) nondeterministic bits.
Then y forms a certificate for (I,k).
Let P,Q Σ^* × be parameterized problems.
An algorithm 𝒜 is called a polynomial parameter transformation (PPT) from P to Q if, given an instance (I,k) of P, it runs in polynomial time and outputs an equivalent instance (I',k') of Q with k' ≤(k).
An algorithm ℬ is called a nondeterministic polynomial parameter transformation (NPPT) from P to Q if, given an instance (I,k) of P and a string y of (k) bits, it runs in polynomial time and outputs an instance (I',k') of Q with the following guarantees.
* k' ≤(k)
* If (I,k) ∈ P, then there exists y for which ℬ outputs (I',k') ∈ Q.
* If (I,k) ∉P, then ℬ outputs (I',k') ∉Q for every y.
Clearly, PPT is a special case of NPPT.
We write P ≤_ppt Q (P ≤_nppt Q) if there is a (nondeterministic) PPT from P to Q.
We write P ≡_ppt Q (P ≡_nppt Q) when we have reductions in both directions.
It is easy to see that the relation ≤_nppt is transitive.
Just as the relation ≤_ppt is monotone with respect to having a polynomial kernelization, the relation ≤_nppt
is monotone with respect to having a polynomial certificate.
Let P,Q ∈Σ^* × be parameterized problems. If P ≤_nppt Q and Q admits a polynomial certificate then P does as well.
Given an instance (I,k) of P the algorithm guesses a string y_1 of length (k) guiding the reduction to Q and constructs an instance (I',k') with k' = (k).
Then it tries to prove that (I',k') ∈ Q by guessing a certificate y_2 of length (k') = (k).
A different property transferred by PPT is polynomial Turing kernelization.
Hermelin et al. <cit.> proposed a hardness framework for this property by considering complexity classes closed under PPT (the WK-hierarchy).
Next, we prove the equivalence mentioned in the Introduction.
Suppose P Σ^* × admits an algorithm 𝒜 deciding if (I,k) ∈ P in time 2^p(k)(|I|) where p(·) is a polynomial function.
Then P[k] ≡_ppt P[k+log |I|].
The direction P[k+log |I|] ≤_ppt
P[k] is trivial.
To give a reduction in the second direction, we first check if p(k) ≤log |I|.
If yes, we execute 𝒜 in time (|I|) and according to the outcome we return a trivial yes/no-instance.
Otherwise we have log |I| < p(k) so we can output (I,k') for the new parameter k' = k + log |I| being polynomial in k.
Pathwidth.
A path decomposition of a graph G is a
sequence 𝒫 = (X_1,X_2,…,X_r) of bags, where X_i V(G), and:
* For each v ∈ V(G) the set {i | v ∈ X_i} forms a non-empty subinterval of [r].
* For each edge uv ∈ E(G) there is i ∈ [r] with {u,v}⊆ X_i.
The width of a path decomposition is defined as max_i=1^r |X_i| - 1. The pathwidth of a graph G is the minimum width of a path decomposition of G.
<cit.>
If a graph G has pathwidth at most p, then it admits a nice path decomposition 𝒫 = (X_1,X_2,…,X_r) of width at most p, for which:
* X_1 = X_r = ∅.
* For each i ∈ [r-1] there is either a vertex v ∉X_i for which X_i+1 = X_i ∪{v} or a vertex v ∈ X_i for which X_i+1 = X_i ∖{v}.
Furthermore, given any path decomposition of G, we can turn it into a nice path decomposition of no greater width, in polynomial time.
The bags of the form X_i+1 = X_i ∪{v} are called introduce bags while the ones of the form X_i+1 = X_i ∖{v} are called forget bags.
Similarly as in the previous works <cit.> we assume that a path decomposition of certain width is provided with the input.
This is not a restrictive assumption for our model since pathwidth can be approximated within a polynomial factor in polynomial time <cit.>.
Group theory.
The basic definitions about groups can be found in the book <cit.>.
A homomorphism between groups G,H is a mapping ϕ G → H that preserves the group operation, i.e., ϕ(x) ∘_H ϕ(y) = ϕ(x ∘_G y) for all x,y ∈ G.
An isomorphism is a bijective homomorphism and
an automorphism of G is an isomorphism from G to G.
We denote by Aut(G) the automorphism group of G with the group operation given as functional composition.
A subgroup N of G is normal if for every g ∈ G, n ∈ N we have g ∘_G n ∘_G g^-1∈ N.
The symmetric group S_k comprises permutations over the set [k] with the group operation given by composition.
For a permutation π∈ S_k we consider a directed graph over the vertex set [k] and arcs given as {(v,π(v)) | v ∈ [k]}.
The cycles of this graph are called the cycles of π.
We denote by _k the cyclic group with addition modulo k.
We write the corresponding group operation as ⊕_k.
The order of an element x ∈ G is the size of the cyclic subgroup of G generated by x.
Landau's function g(k) is defined as the maximum order of an element x in S_k. It is known that g(k) equals max𝗅𝖼𝗆(k_1,…,k_ℓ) over all partitions k = k_1 + … + k_ℓ (these numbers correspond to the lengths of cycles in x) and that g(k) = 2^Θ(√(klog k)) <cit.>.
An element of large order can be found easily if we settle for a slightly weaker bound.
For each k there exists π∈ S_k of order 2^Ω(√(k)/log k) and it can be found in time (k).
Consider all the primes p_1,…,p_ℓ that are smaller than √(k).
By the prime number theorem there are ℓ = Θ(√(k)/log k) such primes <cit.>.
We have p_1 + … + p_ℓ≤√(k)·√(k) = k so we can find a permutation in S_k with cycles of lengths p_1,…,p_ℓ (and possibly trivial cycles of length 1).
We have 𝗅𝖼𝗆(p_1,…,p_ℓ) = ∏_i=1^ℓ p_i ≥ 2^ℓ = 2^Ω(√(k)/log k).
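The following sketch (ours; the function name is illustrative) mirrors this construction: take all primes below √k as cycle lengths, so the order of the resulting permutation is their product.

from math import isqrt, lcm

def high_order_permutation(k):
    """Return a permutation of range(k) (as a list) together with its order."""
    limit = isqrt(k)
    sieve = [True] * (limit + 1)
    primes = []
    for p in range(2, limit + 1):
        if sieve[p]:
            primes.append(p)
            for m in range(p * p, limit + 1, p):
                sieve[m] = False
    perm = list(range(k))            # start from the identity permutation
    start = 0
    for p in primes:                 # install one cycle of each prime length
        cycle = list(range(start, start + p))
        for i, v in enumerate(cycle):
            perm[v] = cycle[(i + 1) % p]
        start += p
    return perm, (lcm(*primes) if primes else 1)

perm, order = high_order_permutation(100)
print(order)   # 210, the product of the primes 2, 3, 5, 7 below isqrt(100)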
For two groups N, H and a homomorphism ϕ H → Aut(N) we define the outer semidirect product <cit.> N ⋊_ϕ H as follows.
The elements of N ⋊_ϕ H are {(n,h) | n ∈ N, h ∈ H} and the group operation ∘ is given as (n_1,h_1) ∘ (n_2,h_2) = n_1 ∘ϕ_h_1(n_2), h_1 ∘ h_2.
A special case of the semidirect product occurs when we combine subgroups of a common group.
Let G be a group with a normal subgroup N and a subgroup H, such that every element g ∈ G can be written uniquely
as g = n∘ h for n ∈ N, h ∈ H.
Let ϕ H → Aut(N) be given as ϕ_h(n) = h∘ n ∘ h^-1 (this is well-defined because N is normal in G).
Then G is isomorphic to the semidirect product N ⋊_ϕ H.
Group-G Subset Sum
A sequence of elements g_1, g_2,…,g_n ∈ G, an element g ∈ G
log |G|
Is there a subsequence (i_1 < i_2 < … < i_r) of [n] such that g_i_1∘ g_i_2∘…∘ g_i_r = g?
We assume that the encoding of the group elements as well as the group operation ∘ are implicit for a specific choice of a group family.
For a group family parameterized by k, like (S_k)_k=1^∞, we treat k as the parameter.
In all considered cases it holds that k ≤log |G| ≤(k) so these two parameterizations are equivalent under PPT.
§ EQUIVALENCES
We formally introduce the variants of ILP that will be studied in this section.
0-1 ILP
A matrix A ∈{-1,0,1}^m × n, a vector b ∈^m
m
Is there a vector x ∈{0,1}^n for which Ax=b?
In Monotone 0-1 ILP we restrict ourselves to matrices A ∈{0,1}^m × n.
In 0-Sum 0-1 ILP we have A ∈{-1,0,1}^m × n and we seek a binary vector x ≠ 0 for which Ax=0.
We first check that all the parameterized problems considered in this section are solvable in time 2^k^(1)n^(1).
For Subset Sum[log t] we can use the classic (tn)-time algorithm <cit.>, which can be easily modified to solve Group-_q Subset Sum[log q] in time (qn).
For Knapsack[log(p_max + w_max)] there is an (p_max· w_max· n)-time algorithm <cit.> which also works for the larger parameterization by log(t+w).
Next, both 0-1 ILP[m] and Monotone 0-1 ILP[m] can be solved in time 2^(m^2log m)· n using the algorithm for a general matrix A <cit.>.
This algorithm can be used to solve 0-Sum 0-1 ILP[m] due to <Ref>.
Hence by <Ref> we can assume in our reductions that (log n) is bounded by a
polynomial function of the parameter.
Knapsack[log(t + w)] ≡_ppt Knapsack[log(p_max + w_max)].
We only need to show the reduction from Knapsack[log(p_max + w_max)].
When p_max· n < t we can afford taking all the items.
On the other hand, if w_max· n < w then no solution can exist.
Therefore, we can assume that log t ≤log p_max + log n and log w ≤log w_max + log n.
By <Ref> we can assume log n to be polynomial in log(p_max + w_max) so the new parameter log(t + w) is polynomial in the original one.
Subset Sum[log t] ≡_nppt Knapsack[log(t + w)].
The (≤) reduction is standard: we translate each input integer p_i into an item (p_i,p_i) and set w=t.
Then we can pack items of total weight t into a knapsack of capacity t if and only if the Subset Sum instance is solvable.
Now consider the (≥) reduction.
Let k = log(t + w).
By the discussion at the beginning of this section we can assume that log n ≤log(t· w) ≤ 2k.
We can also assume that w_max < w, as any item with weight at least w and size fitting into the knapsack would form a trivial solution, and any item with size exceeding t can be discarded.
Let W = w · n + 1.
Suppose there is a set of items with total size equal to t' ≤ t and total weight equal to w' ≥ w.
Note that w' must be less than W.
We nondeterministically choose t' and w': this requires guessing log t + log W ≤ 4k bits.
Now we create an instance of Subset Sum by mapping each item (p_i, w_i) into integer p_i · W + w_i and setting the target integer to t” = t' · W + w'.
If we guessed (t',w') correctly then such an instance clearly has a solution.
On the other hand, if this instance of Subset Sum admits a solution then we have ∑_i ∈ I (p_i · W + w_i) = t' · W + w' for some I [n].
Since both w' and ∑_i ∈ I w_i belong to [1,W) we must have ∑_i ∈ I w_i = w' and ∑_i ∈ I p_i = t', so the original instance of Knapsack has a solution as well.
Finally, it holds that log t”≤ 5k so the parameter is being transformed linearly.
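A sketch (ours) of the pairing trick from the (≥) direction: with W = w·n + 1, an item (p_i, w_i) becomes p_i·W + w_i, so the total size and the total weight of any selection can be read off independently from a single Subset Sum target; the totals t' and w' would be guessed nondeterministically and are fixed here only for illustration.

def knapsack_to_subset_sum(items, t_prime, w_prime, w):
    """items: list of (p_i, w_i); (t_prime, w_prime): the guessed totals."""
    n = len(items)
    W = w * n + 1                       # weights can never overflow into sizes
    numbers = [p * W + wt for (p, wt) in items]
    target = t_prime * W + w_prime
    return numbers, target

items = [(4, 3), (2, 5), (6, 1)]
numbers, target = knapsack_to_subset_sum(items, t_prime=6, w_prime=8, w=8)
print(numbers, target)   # [103, 55, 151] 158; the first two items sum to 158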
We will need the following extension of the last argument.
Let W ∈ and a_1,…,a_n, b_1,…,b_n be sequences satisfying a_i,b_i ∈ [0,W) for each i∈[n].
Suppose that S := ∑_i=1^n a_i W^i-1 = ∑_i=1^n b_i W^i-1.
Then a_i=b_i for each i∈ [n].
Consider the remainder of S when divided by W.
Since W divides all the terms in S for i ∈ [2,n] and a_1,b_1 ∈ [0,W) we must have a_1 = (S W) = b_1.
Next, consider S' = (S - a_1) / W = ∑_i=2^n a_i W^i-2 = ∑_i=2^n b_i W^i-2.
Then a_2 = (S' W) = b_2. This argument generalizes readily to every i ∈ [n].
Subset Sum[log t] ≡_nppt Monotone 0-1 ILP[m].
(≤): Consider an instance ({p_1, …, p_n}, t) of Subset Sum. Let k = ⌈log t ⌉. We can assume that all numbers p_i belong to the interval [t].
For an integer x ∈ [t] let (x) ∈{0,1}^k denote the binary encoding of x so that x = ∑_j=1^k (x)_j · 2^j-1.
Observe that the condition x_1 + … + x_m = t can be expressed as ∑_j=1^k ∑_i=1^m (x_i)_j· 2^j-1 = t.
We nondeterministically guess a sequence b = (b_1,…,b_k)
so that b_j equals ∑_i ∈ I(p_i)_j where I [n] is a solution.
This sequence must satisfy max_j=1^k b_j ≤ t, and so we need k^2 nondeterministic bits to guess b.
We check if the sequence b satisfies ∑_j=1^k b_j · 2^j-1 = t; if not, then the guess was incorrect and we return a trivial no-instance.
Otherwise we construct an instance of Monotone 0-1 ILP[k] with a system Ax = b.
The vector b is given as above and its length is k.
The matrix A comprises n columns where the i-th column is (p_i).
This system has a solution x ∈{0,1}^n if and only if there exists I [n] so that ∑_i ∈ I(p_i)_j = b_j for all j ∈ [k].
This implies that ∑_i ∈ I p_i = t.
Conversely, if such a set I [n] exists, then there is b ∈ [t]^k
for which Ax=b admits a boolean solution.
(≥): Consider an instance Ax=b of Monotone 0-1 ILP[m]. As usual, we assume log n ≤(m).
We can also assume that ||b||_∞≤ n as otherwise Ax=b is clearly infeasible.
We construct an instance of Subset Sum with n items and target integer t = ∑_j=1^m b_j · (n+1)^j-1.
Note that t ≤ m· ||b||_∞· (n+1)^m so log t ≤(m).
For i ∈ [n] let a^i ∈{0,1}^m denote the i-th column of the matrix A.
We define p_i = ∑_j=1^m a^i_j · (n+1)^j-1 and we claim that the instance J = ({p_1, …, p_n}, t) of Subset Sum is solvable exactly when the system Ax=b has a boolean solution.
First, if x ∈{0,1}^m forms a solution to Ax = b then for each j∈ [m] we have ∑_i=1^n x_ia^i_j (n+1)^j-1 = b_j(n+1)^j-1 and so ∑_i=1^n x_ip_i = t.
Hence the set I = {i ∈ [n] | x_i = 1} encodes a solution to J.
In the other direction, suppose that there is I [n] for which ∑_i ∈ I p_i = t.
Then t = ∑_j=1^m ( ∑_i ∈ I a^i_j) · (n+1)^j-1.
Due to <Ref> we must have b_j = ∑_i ∈ I a^i_j for each j∈[m], so there is a subset of columns of A that sums up to the vector b.
This concludes the proof.
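The following sketch (ours) illustrates the (≤) direction: the columns of A are the binary encodings of the p_i, and the guessed vector b records, bit position by bit position, the column sums of a solution; here b is derived from a known solution only to show the intended semantics.

def subset_sum_to_monotone_ilp(p, t, solution_indices):
    """Build (A, b) from a Subset Sum instance; solution_indices is illustrative."""
    k = max(t.bit_length(), 1)
    A = [[(p_i >> j) & 1 for p_i in p] for j in range(k)]    # k x n, entries in {0,1}
    b = [sum((p[i] >> j) & 1 for i in solution_indices) for j in range(k)]
    assert sum(b_j << j for j, b_j in enumerate(b)) == t     # the consistency check on b
    return A, b

A, b = subset_sum_to_monotone_ilp([3, 5, 6, 7], 12, solution_indices=[1, 3])
print(b)   # [2, 1, 2, 0]: column sums of the chosen columns, per bit position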
For the next reduction, we will utilize the lower bound on the norm of vectors in a so-called Graver basis of a matrix.
For two vectors y,x ∈^n we write y ⊲ x if for every i ∈ [n] it holds that y_ix_i ≥ 0 and |y_i| ≤ |x_i|.
A non-zero vector x ∈^n belongs to the Graver basis of A ∈^m × n if Ax = 0 and no other non-zero solution Ay=0 satisfies y ⊲ x.
In other words, x encodes a sequence of columns of A, some possibly repeated or negated, that sums to 0 and none of its nontrivial subsequences sums to 0.
The following lemma concerns the existence of vectors with a large ℓ_1-norm in a Graver basis of a certain matrix.
We state it in the matrix-column interpretation.
For every k ∈ there is a sequence (v_1,…,v_n) of vectors from {-1,0,1}^k such that
* n = Θ(2^k),
* the vectors v_1,…,v_n sum up to 0, and
* no proper non-empty subsequence of (v_1,…,v_n) sums up to 0.
Monotone 0-1 ILP[m] ≤_ppt 0-Sum 0-1 ILP[m].
Consider an instance Ax = b of Monotone 0-1 ILP[m] with A ∈{0,1}^m × n.
We can assume that A contains 1 in every column as otherwise such a column can be discarded.
Let v_1,…,v_ℓ∈{-1,0,1}^k be the sequence of vectors from <Ref> with ℓ≥ n and k = (log n).
Next, we can assume that ||b||_∞≤ n ≤ℓ as otherwise there can be no solution.
We decompose b into a sum b_1 + … + b_ℓ of vectors from {-1,0,1}^m, possibly using zero-vectors for padding.
Now we construct a matrix A' ∈{-1,0,1}^(m+k)×(n+ℓ).
The first n columns are given by the columns of A with 0 on the remaining k coordinates.
The last ℓ columns are of the form (-b_i,v_i) for i ∈ [ℓ].
See <Ref> for an illustration.
The new parameter is m+k which is m + (log n).
We claim that Ax = b is feasible over boolean domain if and only if A'y = 0 admits a non-zero boolean solution y.
Consider a solution x to Ax = b.
We define y as x concatenated with vector 1^ℓ.
In each of the first m rows we have (A'y)_j = (Ax)_j - b_j = 0.
In the remaining k rows, the first n columns contribute zero and the last ℓ columns contribute v_1 + … + v_ℓ, which sums up to 0 by construction.
Hence A'y = 0 while y ≠ 0.
Now consider the other direction and let y ≠ 0 be a solution to A'y = 0.
Let us decompose y as a concatenation of y_1 ∈{0,1}^n and y_2 ∈{0,1}^ℓ.
First suppose that y_2 = 0.
Then y_1 ≠ 0 and Ay_1 = 0 but this is impossible since A ∈{0,1}^m × n and, by assumption, every column of A contains 1.
It remains to consider the case y_2 ≠ 0.
By inspecting the last k rows of A' we infer that the non-zero indices of y_2 correspond to a non-empty subsequence of v_1,…,v_ℓ summing up to 0.
By construction, this is not possible for any proper subsequence of (v_1,…,v_ℓ) so we must have y_2 = 1^ℓ.
Hence 0 = A'y = A'y_1 + A'y_2 = A'y_1 + (-b,0) and so Ay_1 = b. This concludes the proof of the reduction.
We will now reduce from 0-Sum 0-1 ILP[m] to 0-1 ILP[m].
The subtlety comes from the fact that in the latter problem we accept the solution x = 0 while in the first we do not.
Observe that the reduction is easy when we can afford guessing a single column from a solution.
For a matrix A ∈^m × n and i ∈ [n] we denote by A^i ∈^m × 1 the i-th column of A and by A^-i∈^m × (n-1) the matrix obtained from A by removal of the i-th column.
An instance Ax=0 of 0-Sum 0-1 ILP[m] is solvable if and only if there is i ∈ [n]
such that the instance A^-iy= -A^i of 0-1 ILP[m] is solvable.
Using the FPT algorithm for 0-1 ILP[m] we obtain an FPT algorithm for 0-Sum 0-1 ILP[m].
0-Sum 0-1 ILP[m] is solvable in time 2^(m^2log m)n^2.
0-Sum 0-1 ILP[m] ≤_nppt 0-1 ILP[m].
<Ref> enables us to solve 0-Sum 0-1 ILP[m] in polynomial time when log n is large compared to m, by considering all i ∈ [n] and solving the obtained instances.
Hence we can again assume that log n ≤(m).
In this case, <Ref> can be interpreted as an NPPT that guesses (m) bits to identify the index i ∈ [n].
0-1 ILP[m] ≤_nppt Monotone 0-1 ILP[m].
We decompose the matrix A as A^+ - A^- where A^+, A^- have entries from {0,1}.
Suppose that there exists a vector x satisfying Ax = b.
We nondeterministically guess vectors b^+, b^- that satisfy A^+x = b^+, A^-x = b^- and we check whether b^+ - b^- = b; if not, the guess is rejected.
This requires mlog n nondeterministic bits.
We create an instance of Monotone 0-1 ILP with 2m constraints given as A^+y = b^+, A^-y = b^-.
If we made a correct guess, then y=x is a solution to
the system above.
On the other hand, if this system admits a solution y then Ay = A^+y - A^-y = b^+ - b^- = b so y is also a solution to the original instance.
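A sketch (ours) of this decomposition; the vectors b^+ and b^- would be guessed nondeterministically, and are derived below from a known solution x only for illustration.

def split_instance(A, b, x):
    """A: list of rows over {-1,0,1}; x: a known boolean solution (illustrative)."""
    A_plus = [[max(a, 0) for a in row] for row in A]
    A_minus = [[max(-a, 0) for a in row] for row in A]
    b_plus = [sum(a * xi for a, xi in zip(row, x)) for row in A_plus]
    b_minus = [sum(a * xi for a, xi in zip(row, x)) for row in A_minus]
    assert [bp - bm for bp, bm in zip(b_plus, b_minus)] == list(b)
    return A_plus + A_minus, b_plus + b_minus    # 2m monotone constraints

A2, b2 = split_instance([[1, -1, 0], [0, 1, -1]], [0, 0], x=[1, 1, 1])
print(b2)   # [1, 1, 1, 1]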
Subset Sum[log t] ≡_nppt Group-_q Subset Sum[log q].
For the reduction (≤) consider q = nt and leave t intact.
We can assume that each input number belongs to [1,t), hence the sum of every subset belongs to [1,q) and so there is no difference between performing the addition over the integers or modulo q.
Now we handle the reduction (≥).
Let S be the subset of numbers that sums up to t modulo q.
Since each item belongs to [0,q), their sum over the integers is bounded by nq; let us denote this value by t'.
We nondeterministically guess t' ∈ [0, nq] and check whether t' ≡ t (mod q).
We consider an instance J of Subset Sum over with the unchanged items and the target t'.
We have log t' ≤log n + log q, which bounds the new parameter as well as the number of necessary
nondeterministic bits.
If the guess was correct then J will have a solution.
Finally, a solution to J yields a solution to the original instance because t' ≡ t (mod q).
Using the presented lemmas, any two problems listed in <Ref> can be reduced to each other via NPPT.
§ PERMUTATION SUBSET SUM
This section is devoted to the proof of <Ref>.
We will use an intermediate problem involving a computational model with ℓ binary counters,
being a special case of bounded Vector Addition System with States (VASS) <cit.>.
This can be also regarded as a counterpart of the intermediate problem used for establishing XNLP-hardness, which concerns cellular automata <cit.>.
For a sequence = (f_1, …, f_n), f_i ∈{O,R} (optional/required),
we say that a subsequence of [n] is -restricted if it contains all the
indices i with f_i = R.
We say that a sequence of vectors v_1, …, v_n ∈{-1,0,1}^ℓ forms a 0/1-run if v_1 + … + v_n = 0 and for each j ∈ [n] the partial sum v_1 + … + v_j belongs to {0,1}^ℓ.
0-1 Counter Machine
Sequences 𝒱 = (v_1, …, v_n), v_i ∈{-1,0,1}^ℓ, and = (f_1, …, f_n), f_i ∈{O,R}.
ℓ
Is there a subsequence (i_1 < i_2 < … < i_r) of [n] that is -restricted and such that (v_i_1,v_i_2,…,v_i_r) forms a 0/1-run?
Intuitively, a vector v_i ∈{-1,0,1}^ℓ tells which of the ℓ counters should be increased or decreased.
We must “execute” all the vectors v_i for which f_i = R plus some others so that the value of each counter is always kept within {0,1}.
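A sketch (ours; names are illustrative) of the acceptance condition: the selected indices must contain every required index and keep all ℓ counters within {0,1} along every prefix, ending at the zero vector.

def is_accepting(vectors, flags, selected):
    """vectors: tuples over {-1,0,1}; flags: 'O'/'R' per index;
    selected: increasing list of chosen (0-based) indices."""
    required = {i for i, f in enumerate(flags) if f == 'R'}
    if not required.issubset(selected):
        return False                         # not a restricted subsequence
    counters = [0] * len(vectors[0])
    for i in selected:
        counters = [c + d for c, d in zip(counters, vectors[i])]
        if any(c not in (0, 1) for c in counters):
            return False                     # a counter left {0,1}
    return all(c == 0 for c in counters)     # a 0/1-run ends at the zero vector

vecs = [(1, 0), (0, 1), (-1, 0), (0, -1)]
flags = ['R', 'O', 'R', 'O']
print(is_accepting(vecs, flags, [0, 2]))         # True
print(is_accepting(vecs, flags, [0, 1, 2, 3]))   # True
print(is_accepting(vecs, flags, [1, 3]))         # False: misses required indices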
We give a reduction from 3Coloring[pw] to 0-1 Counter Machine[ℓ].
3Coloring
An undirected graph G and a path decomposition of G of width at most pw
pw
Can we color V(G) with 3 colors so that the endpoints of each edge are assigned different colors?
[]lemmalemColoring
3Coloring[pw] ≤_ppt 0-1 Counter Machine[ℓ].
In the proof,
we assign each vertex a label from [pw + 1] so that the labels in each bag are distinct.
We introduce a counter for each pair (label, color) and whenever a vertex is introduced in a bag, we make the machine increase one of the counters corresponding to its label.
For each edge uv there is a bag containing both u,v; we then insert a suitable sequence of vectors so that running it is possible if and only if the labels of u, v have active counters in different colors.
Finally, when a vertex is forgotten we deactivate the corresponding counter.
In order to encode the operations on counters as composition of permutations, we will employ the following algebraic construction.
For q ∈ consider an automorphism ϕ_1 _q^2 →_q^2 given as ϕ_1((x,y)) = (y,x).
Clearly ϕ_1 ∘ϕ_1 is the identity, so there is a homomorphism ϕ_2 from _2 to Aut(_q^2) that assigns the identity to 0 ∈_2 and ϕ_1 to 1 ∈_2.
We define the group U_q as the outer semidirect product _q^2 ⋊_ϕ_2 (see <Ref>).
That is, the elements of U_q are {((x,y),z) | x,y ∈_q, z ∈_2} and the group operation ∘ is given as
((x_1,y_1),z_1) ∘ ((x_2,y_2),z_2) =
((x_1 ⊕_q x_2, y_1 ⊕_q y_2), z_1 ⊕_2 z_2) if z_1 = 0,
((x_1 ⊕_q y_2, y_1 ⊕_q x_2), z_1 ⊕_2 z_2) if z_1 = 1.
The z-coordinate works as addition modulo 2 whereas the element z_1 governs whether we add (x_2,y_2) or (y_2,x_2) modulo q on the (x,y)-coordinates.
The neutral element is ((0,0),0).
Note that U_q is non-commutative.
For ((x,y),z) ∈ U_q we define its norm as x+y.
Consider a mapping Γ{-1,0,1}→ U_q given as Γ(-1) = ((1,0),1), Γ(0) = ((0,0),0), Γ(1) = ((0,1),1).
Let b_1,…, b_n ∈{-1,0,1} and q > n.
Then b_1,…, b_n forms a 0/1-run (in dimension ℓ=1) if and only if the group product g = Γ(b_1) ∘Γ(b_2) ∘…∘Γ(b_n) in U_q is of the form g = ((0,n'),0) for some n' ∈[n].
Recall that Γ(0) is the neutral element in U_q.
Moreover, removing 0 from the sequence does not affect the property of being a 0/1-run, so we can assume that b_i ∈{-1,1} for each i ∈ [n].
Note that the inequality q > n is preserved by this modification.
This inequality is only needed to ensure that the addition never overflows modulo q.
Suppose now that b_1,…, b_n is a 0/1-run.
Then it comprises alternating 1s and -1s: (1,-1,1,-1,…,1,-1).
Hence the product g = Γ(b_1) ∘…∘Γ(b_n) equals (Γ(1) ∘Γ(-1))^n/2.
We have ((0,1),1) ∘ ((1,0),1) = ((0,2),0) and so g = ((0,n),0).
Now suppose that b_1,…, b_n is not a 0/1-run.
Then either ∑_i=1^n b_i = 1 or ∑_i=1^j b_i ∉{0,1} for some j ∈ [n].
In the first scenario n is odd so g has 1 on the z-coordinate and so it is not in the form of ((0,n'),0).
In the second scenario there are 3 cases: (a) b_1 = -1, (b) (b_i,b_i+1) = (1,1) for some i∈[n-1], or (c) (b_i,b_i+1) = (-1,-1) for some i∈[n-1].
Case (a): g = Γ(-1) ∘ h = ((1,0),1) ∘ h for some h ∈ U_q of norm ≤ n-1.
Then g cannot have 0 at the x-coordinate because n<q and the addition does not overflow.
Case (b): Γ(1)^2 = ((0,1),1)^2 = ((0,1) ⊕_q (1,0), 1 ⊕_2 1) = ((1,1),0).
For any h_1,h_2 ∈ U_q of total norm ≤ n-1 the product h_1 ∘ ((1,1),0) ∘ h_2 cannot have 0 at the x-coordinate.
Case (c): Analogous to (b) because again Γ(-1)^2 = ((1,0),1)^2 = ((1,0) ⊕_q (0,1), 1 ⊕_2 1) = ((1,1),0).
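A sketch (ours) of the group U_q and the mapping Γ, illustrating the criterion just proved on two short sequences; the function names are not from the paper.

def uq_mul(a, b, q):
    """Multiply two elements of U_q, each represented as ((x, y), z)."""
    (x1, y1), z1 = a
    (x2, y2), z2 = b
    if z1 == 0:
        return ((x1 + x2) % q, (y1 + y2) % q), (z1 + z2) % 2
    return ((x1 + y2) % q, (y1 + x2) % q), (z1 + z2) % 2

GAMMA = {-1: ((1, 0), 1), 0: ((0, 0), 0), 1: ((0, 1), 1)}

def gamma_product(seq, q):
    g = ((0, 0), 0)                         # the neutral element of U_q
    for b in seq:
        g = uq_mul(g, GAMMA[b], q)
    return g

q = 100                                      # any q larger than the sequence length
print(gamma_product([1, -1, 1, -1], q))      # ((0, 4), 0): a 0/1-run
print(gamma_product([1, 1, -1, -1], q))      # ((2, 2), 0): x-coordinate nonzero, not a 0/1-run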
Next, we show how to embed the group U_q into a permutation group over a universe of small size.
On an intuitive level, we need to implement two features: counting modulo q on both coordinates and a mechanism to swap the coordinates.
To this end, we will partition the universe into two sets corresponding to the two coordinates.
On each of them, we will use a permutation of order q to implement counting without interacting with the other set.
Then we will employ a permutation being a bijection between the two sets, which will work as a switch.
See <Ref> for a visualization.
For every n ∈ there exist q > n and r̂ = (log^3 n) for which there is a homomorphism χ U_q → S_r̂.
By <Ref> we can find a permutation g of order q > n in S_r for some r = (log^3 n).
The subgroup of S_r generated by g is isomorphic to _q.
We will now consider the permutation group over the set [r] ×_2, which is isomorphic to S_2r.
Instead of writing χ explicitly, we will identify a subgroup of S_2r isomorphic to U_q.
Let π_z be the permutation given as π_z(i,j) = (i,1-j) for (i,j) ∈ [r]×_2, i.e., it switches the second coordinate.
Let π_0 act as g on [r] ×{0} and as the identity on [r] ×{1}.
Analogously, let π_1 act as g on [r] ×{1} and as the identity on [r] ×{0}.
Let N be the subgroup of S_2r generated by π_0 and π_1; it is isomorphic to _q^2 and each element of N is of the form (π_0^x, π_1^y) for some x,y ∈_q.
Next, let H be the subgroup generated by π_z; it is isomorphic to _2.
Now consider a homomorphism ϕ H → Aut(N) given as conjugation ϕ_π(g) = π∘ g ∘π^-1.
In this special case, the semidirect product N ⋊_ϕ H is isomorphic to the subgroup of S_2r generated by the elements of N and H (<Ref>).
On the other hand, ϕ_π_z maps (π_0^x, π_1^y) ∈ N into (π_0^y, π_1^x) so this is exactly the same construction as used when defining U_q.
We infer that U_q is isomorphic to a subgroup of S_2r and the corresponding homomorphism is given by the mapping of the generators: χ((0,1),0) = π_0, χ((1,0),0) = π_1, χ((0,0),1) = π_z.
Armed with such a homomorphism, we translate <Ref> to the language of permutations.
For every n ∈ there exists r = (log^3 n), a permutation π∈ S_r of order greater than n, and a mapping Γ{-1,0,1}→ S_r so that the following holds.
A sequence b_1,…, b_n ∈{-1,0,1} is a 0/1-run if and only if the product Γ(b_1) ∘Γ(b_2) ∘…∘Γ(b_n) is of the form π^n' for some n' ∈ [n].
Let χ U_q → S_r be the homomorphism from <Ref> for q > n and r = (log^3 n).
To avoid a clash of notation, write Γ_0 for the mapping from <Ref> and define Γ{-1,0,1}→ S_r as Γ(i) = χ(Γ_0(i)).
Since χ is an injective homomorphism, the condition Γ_0(b_1) ∘…∘Γ_0(b_n) = ((0,n'),0) is equivalent to Γ(b_1) ∘…∘Γ(b_n) = χ(((0,n'),0)).
We have g = χ(((0,n'),0)) for some n' ∈ [n] if and only if g = π^n' for π = χ(((0,1),0)).
The order of π is q > n, as requested.
For a sequence of vectors from {-1,0,1}^ℓ we can use a Cartesian product of ℓ permutation groups S_r to check the property of being a 0/1-run by inspecting the product of permutations from S_ℓ r.
This enables us to encode the problem with binary counters as Group-S_k Subset Sum[k].
We remark that we need nondeterminism to guess the target permutation.
This boils down to guessing
the number n' from <Ref> for each of ℓ coordinates.
Finally, <Ref> follows by combining <Ref> with <Ref>.
[]lemmalemPermFinal
0-1 Counter Machine[ℓ] ≤_nppt Group-S_k Subset Sum[k].
§ CONCLUSION
We have introduced the nondeterministic polynomial parameter transformation (NPPT) and used this concept to shed some light on the unresolved questions about short certificates for FPT problems.
We believe that our work will give an impetus for further systematic study of certification complexity in various contexts.
The main question remains to decipher the certification complexity of Subset Sum[log t].
Even though Subset Sum enjoys a seemingly simple structure, some former breakthroughs required advanced techniques such as additive combinatorics <cit.> or number theory <cit.>.
<Ref> makes it now possible to analyze Subset Sum[log t] through the geometric lens using concepts such as lattice cones <cit.> or Graver bases <cit.>.
Drucker et al. <cit.> suggested also to study k-Disjoint Paths and K-Cycle in their regime of polynomial witness compression.
Recall that the difference between that model and ours is that they ask for a randomized algorithm that outputs a solution.
Observe that a polynomial certificate (or witness compression) for k-Disjoint Paths would entail an algorithm with running time 2^k^(1)n^(1) which seems currently out of reach <cit.>.
What about a certificate of size (k + log n)^(1)?
Interestingly, Planar k-Disjoint Paths does admit a polynomial certificate: if k^2 ≤log n one can execute the known 2^(k^2)n^(1)-time algorithm <cit.> and otherwise one can guess the homology class of a solution (out of n^(k)≤ 2^(k^3)) and then solve the problem in polynomial time <cit.>.
Another interesting question is whether k-Disjoint Paths admits a certificate of size (k + log n)^(1) on acyclic digraphs.
Note that we need to incorporate (log n) in the certificate size because the problem is W[1]-hard when parameterized by k <cit.>.
The problem admits an n^(k)-time algorithm based on dynamic programming <cit.>.
For K-Cycle we cannot expect to rule out a polynomial certificate via a PPT from AND-3SAT[k] because the problem admits a polynomial compression <cit.>, a property unlikely to hold for AND-3SAT[k] <cit.>.
Is it possible to establish the
certification hardness by NPPT (which does not preserve polynomial compression) or
would such a reduction also lead to unexpected consequences?
A different question related to bounded nondeterminism is whether
one can rule out a logspace algorithm for
directed reachability (which is NL-complete) using only polylog(n) nondeterministic bits.
Observe that relying on the analog of the NP ⊄ coNP/poly assumption for NL would be pointless because NL = coNL by the Immerman-Szelepcsényi Theorem.
This direction bears some resemblance to the question whether directed reachability can be solved in polynomial time and polylogarithmic space, i.e., whether NL SC <cit.>.
plainurl
§ PROBLEM DEFINITIONS
Subset Sum
A sequence of positive integers p_1, p_2,…,p_n, integer t, all in binary
log t
Is there a subsequence of p_1, p_2,…,p_n summing up to t?
Knapsack
A sequence of positive integer pairs (p_1,w_1), (p_2,w_2),…,(p_n,w_n), integers t,w
log (t+w)
Is there a subset I ⊆ [n] such that ∑_i ∈ I p_i ≤ t and ∑_i ∈ I w_i ≥ w?
AND-3SAT
A sequence I_1, I_2, …, I_n of instances of 3SAT, each on at most k variables
k
Are all the instances I_1, I_2, …, I_n satisfiable?
§ SCHEDULING
A job is represented by a triple (p_i,w_i,d_i) of positive integers (processing time, weight, due date).
For a sequence of n jobs a schedule is a permutation ρ [n] → [n].
The completion time C_i of the i-th job in a schedule ρ equals ∑_j ∈ [n], ρ(j) ≤ρ(i) p_j.
A job is called tardy if C_i > d_i.
A job is being processed at time x if x ∈ (C_i - p_i, C_i].
Scheduling Weighted Tardy Jobs
A sequence of jobs (p_1,w_1,d_1), (p_2,w_2,d_2),…,(p_n,w_n,d_n), integer w
log (d_max+w_max)
Is there a schedule in which the total weight of tardy jobs is at most w?
We show that Scheduling Weighted Tardy Jobs does not admit a polynomial certificate as long as <Ref> holds.
The proof utilizes a construction that appeared in a conditional lower bound on pseudo-polynomial running time for Scheduling Weighted Tardy Jobs <cit.>.
On an intuitive level, we will pack multiple instances of Subset Sum into a single instance of Scheduling Weighted Tardy Jobs and adjust the weights to enforce that all the Subset Sum instances must be solvable if a good schedule exists.
*
Consider an instance given by a sequence of formulas Φ_1, …, Φ_n of 3SAT, each on at most k variables.
If 2^k ≤ n then the running time 2^k·(k)· n becomes polynomial in n so we can assume that n ≤ 2^k.
We transform each instance Φ_j into an equivalent instance (S_j,t_j) of Subset Sum following the standard NP-hardness proof for the latter <cit.>.
The number of items in S_j, as well as logarithm of t_j, becomes linear in the number of variables plus the number of clauses in Φ_j, which is (k^3).
Now we create an instance I of Scheduling Weighted Tardy Jobs with ∑_i=1^n |S_i| items.
Let s_j = t_1 + … + t_j for j ∈ [n] and s_0 = 0.
For each j ∈ [n] and x ∈ S_j we create a job u with due date d_u = s_j, processing time p_u = x, and weight w_u = p_u · (n+1-j); we say that such job u comes from the j-th set.
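A sketch (ours) of this construction: the jobs coming from the j-th instance share the due date s_j and the weight multiplier (n + 1 - j).

def build_jobs(subset_sum_instances):
    """subset_sum_instances: list of (S_j, t_j) pairs with S_j a list of integers.
    Returns jobs as (processing_time, weight, due_date) triples."""
    n = len(subset_sum_instances)
    jobs, s = [], 0
    for j, (S_j, t_j) in enumerate(subset_sum_instances, start=1):
        s += t_j                                   # s_j = t_1 + ... + t_j
        for x in S_j:
            jobs.append((x, x * (n + 1 - j), s))   # (p_u, w_u, d_u)
    return jobs

print(build_jobs([([2, 3, 5], 5), ([1, 4, 6], 7)]))
# [(2, 4, 5), (3, 6, 5), (5, 10, 5), (1, 1, 12), (4, 4, 12), (6, 6, 12)]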
We claim that I admits a schedule with total weight of non-tardy jobs at least ∑_i=1^n t_i·(n+1-i) if and only if all n instances (S_j,t_j) of Subset Sum are solvable.
The implication (⇐) is easy.
For j∈ [n] let S'_j ⊆ S_j be the subset summing up to t_j.
We schedule first the jobs corresponding to the elements of S'_1, then the ones corresponding to S'_2, and so on.
The tardy jobs are scheduled in the end in an arbitrary order.
Then the jobs coming from the j-th set are being processed within the interval (s_j-1, s_j] of length t_j so they all meet their deadlines.
The weight of the non-tardy jobs from the j-th set equals ∑_x ∈ S'_j x · (n+1-j) = t_j · (n+1-j) and the total weight is as promised.
Now we prove the implication (⇒).
Consider some schedule of jobs in the instance I with total weight of the non-tardy jobs at least ∑_i=1^n t_i·(n+1-i).
We define two sequences (a_i), (b_i) of length s_n.
We set a_i = (n+1-j) where j ∈ [n] is the unique index satisfying i ∈ (s_j-1, s_j].
If there is a non-tardy job u being processed at time i we set b_i = w_u/p_u, otherwise we set b_i = 0.
We claim that for each i ∈ [s_n] it holds that b_i ≤ a_i.
Indeed, when i ∈ (s_j-1, s_j] then any non-tardy job u processed at time i must have its due date at s_j or later.
Hence u comes from the j'-th set, where j' ≥ j, so b_i = w_u/p_u = n+1-j' ≤ n+1-j = a_i.
Now observe that ∑_{i=1}^{s_n} a_i = ∑_j=1^n t_j·(n+1-j) whereas ∑_{i=1}^{s_n} b_i equals the total weight of the non-tardy jobs in the considered schedule.
By assumption, we must have ∑_{i=1}^{s_n} a_i = ∑_{i=1}^{s_n} b_i and so a_i = b_i for all i ∈ [s_n].
Therefore, for each j ∈ [n] and i ∈ (s_j-1, s_j] we have b_i = (n+1-j), so the total processing time of the non-tardy jobs coming from the j-th set equals t_j.
Hence each instance (S_j,t_j) of Subset Sum admits a solution.
It remains to check that the parameter log (d_max + w_max) is polynomially bounded in k.
The maximum due date d_max equals s_n ≤ n · 2^(k^3).
The maximum weight w_max is bounded by n times the maximum size of an element in ⋃_i=1^n S_i which is 2^(k^3).
By the initial assumption n ≤ 2^k so log (d_max + w_max) ≤(k).
This concludes the proof.
§ PERMUTATION SUBSET SUM: MISSING PROOFS
*
Let k = pw + 1.
By <Ref> we can assume that the given path decomposition of the graph G is nice.
We can turn it into a sequence C of commands of the form intro(v), forget(v), edge(u,v) such that (i) each vertex v is introduced once and forgotten once afterwards, and (ii) for each edge uv ∈ E(G) the command edge(u,v) appears at some point when u,v are present (i.e., after introduction and before being forgotten).
Next, we assign each vertex a label from [k] in such a way that no two vertices of the same label share a bag.
To this end, we scan the sequence C, store the current bag, and assign the colors greedily.
Whenever we see a command intro(v), the current bag must contain fewer than k vertices, so there is some label not being used in the current bag.
We then assign this label to v and continue.
We will now assume that the arguments of the commands intro, forget, edge refer to labels, e.g., intro(x) means that a vertex with label x appears in the current bag and edge(x,y) means that the vertices labeled x,y in the current bag are connected by an edge.
Let D ⊆ [3]^2 be the set of all 6 ordered pairs of different colors.
For each label x ∈ [k] and each color c ∈ [3] we create a counter called x_c.
The intended meaning of x_c = 1 is that the vertex labeled x in the current bag is assigned color c.
We also create special counters: one named S
and 6 counters Z_c,d for each (c,d) ∈ D.
Hence the total number of counters is ℓ = 3k + 7.
We translate the sequence C into sequences 𝒱, forming the instance of .
Instead of specifying directly, we will indicate which vectors in 𝒱 are optional and which are required.
To concisely describe a vector in which, e.g., counter y_1 is being increased, counter y_2 is being decreased, and the remaining ones stay intact, we write
[y_1↑, y_2↓].
We scan the sequence C and for each command we output a block of vectors, according to the following instructions.
intro(x): We insert optional vectors [x_1↑, S↑], [x_2↑, S↑], [x_3↑, S↑], followed by a required vector [S↓].
Since the counter S cannot exceed 1, at most one of the first three vectors can be used.
In the end we are required to decrease S so if we began with S set to 0 then exactly one of x_1, x_2, x_3 must be set to 1.
forget(x): We insert optional vectors [x_1↓, S↑], [x_2↓, S↑], [x_3↓, S↑], followed by a required vector [S↓].
Similarly as before, we can decrease at most one of the counters x_1, x_2, x_3 and we will check that exactly one of these events must happen.
edge(x,y):
We insert 6 optional vectors [x_c↓, y_d↓, Z_c,d↑, S↑], one for each (c,d) ∈ D.
This is followed by required vectors [S↓], [S↑].
Then we insert 6 optional vectors [x_c↑, y_d↑, Z_c,d↓, S↓], for each (c,d) ∈ D, this time followed by [S↑], [S↓].
Suppose that initially S is set to 0.
The required vectors manipulating S enforce that exactly one of the first 6 vectors and exactly one of the last 6 vectors are used.
This verifies that the vertices labeled x,y are colored with different colors.
If the graph is 3-colorable then there exists a subsequence of 𝒱 whose indices are -restricted and which forms a 0/1-run.
We will maintain the following invariant: between the blocks (1) all the special counters are set to 0, (2) for each vertex in the current bag with label x and color c the counter x_c is set to 1.
When a vertex with label x and color c is introduced we choose the optional vector [x_c↑, S↑] and subsequently S gets decreased.
When a vertex with label x and color c is forgotten we know that x_c is currently set to 1 so we can choose the optional vector [x_c↓, S↑] and then again S gets decreased.
When the command (x,y) is processed we know that there is a pair (c,d) ∈ D so the counters x_c, y_d are set to 1.
Therefore we can execute the sequence [x_c↓, y_d↓, Z_c,d↑, S↑], [S↓], [S↑], [x_c↑, y_d↑, Z_c,d↓, S↓], [S↑], [S↓], maintaining the invariant in the end.
Finally, when a vertex labeled x with color c is being forgotten then the counter x_c is decreased so in the end all the counters are 0.
If there exists a subsequence of 𝒱 whose indices are -restricted and which forms a 0/1-run, then the graph is 3-colorable.
We will prove two invariants about the states of counters between the blocks: (1) all the special counters are set to 0, (2) for each x ∈ [k] if the vertex labeled x is present in the current bag then there is exactly one c ∈ [3] so that x_c is set to 1, and if there is no vertex labeled x in the current bag then all x_1,x_2,x_3 are set to 0.
We first prove (1).
Observe that each block ends with required [S↓] so it remains to analyze the counters Z_c,d.
They can be activated only in a block corresponding to command (x,y).
The required actions on S enforce that exactly one of the first 6 optional vectors and exactly one of the last 6 optional vectors are used.
Since each vector in the second group decreases some counter Z_c,d and the first group can increase only one, these two vectors must process the same pair (c,d) ∈ D.
Hence in the end all the special counters are again deactivated.
We move on to invariant (2).
First consider a command intro(x). Due to the initial preprocessing, no other vertex labeled x can be present in the current bag, so the counters x_1,x_2,x_3 are inactive.
We must activate exactly one of the counters x_1,x_2,x_3 so the invariant is preserved. An analogous argument applies to forget(x).
Now consider the block corresponding to command edge(x,y).
By the argument from the analysis of invariant (1), the two optional vectors used in this block process the same pair (c,d) ∈ D.
So the state of all the counters after processing this block is the same as directly before it.
We define the 3-coloring of the graph as follows.
When a vertex v with label x is being introduced, we know that one counter x_c for some c ∈ [3] gets increased.
We assign the color c to v.
To check that this is a correct coloring, consider an edge uv.
Let x,y be the labels of u,v.
There is a command edge(x,y) in C being executed when both u,v are present in the current bag.
By the analysis above, we must take one of the vectors [x_c↓, y_d↓, Z_c,d↑, S↑] at the beginning of the corresponding block, which implies that the counters x_c and y_d were active at the beginning of the block.
But (c,d) ∈ D so c d and we infer that u,v must have been assigned different colors.
This concludes the correctness proof of the reduction.
*
Consider an instance of given by the sequences 𝒱 = (v_1, …, v_n), v_i ∈{-1,0,1}^ℓ, and = (f_1, …, f_n), f_i ∈{O,R}.
The problem can be easily solved in time (2^ℓ· n) so we can assume from now on that log n ≤ℓ.
Let r = (log^3 n), π∈ S_r, and Γ{-1,0,1}→ S_r be as in <Ref>.
For each i ∈ [n] we construct an (ℓ+1)-tuple of permutations (p^1_i, p^2_i, …, p^ℓ+1_i) from S_r as follows.
For each j ∈ [ℓ] we set p^j_i = Γ(v_i^j) where v_i = (v_i^1,…, v_i^ℓ).
We set p^ℓ+1_i = π if f_i = R (i.e., the i-th vector is required) and otherwise we set p^ℓ+1_i to the identity permutation.
Let f_C denote the number of indices with f_i = R.
Next, let I = (i_1 < … < i_m) denote a subsequence of [n].
We claim that the following conditions are equivalent.
* The subsequence I is -restricted.
* The subsequence of p^ℓ+1_1,…, p^ℓ+1_n given by indices I yields product π^f_C.
To see this, observe that π has order larger than n (as guaranteed by <Ref>) so the product π^f_C is obtained exactly when I contains all f_C indices for which f_i = R.
We move on to the next equivalence.
* The subsequence of v_1, …, v_n given by indices I forms a 0/1-run.
* For each j ∈ [ℓ] the subsequence of p^j_1,…, p^j_n given by indices I yields product π^n_j for some n_j ∈ [n].
A sequence of ℓ-dimensional vectors forms a 0/1-run if and only if each single-dimensional sequence (corresponding to one of ℓ coordinates) forms a 0/1-run.
Fix j ∈ [ℓ].
By <Ref> the subsequence of v^j_1, …, v^j_n given by indices I forms a 0/1-run if and only if multiplying their images under the mapping Γ yields a product of the form π^n_j for some n_j ∈ [n].
This justifies the second equivalence.
To create an instance of Group-S_k Subset Sum[k] we take k = (ℓ +1)· r so we can simulate multiplication in S_r^ℓ+1 by dividing [k] into (ℓ+1) subsets of size r.
We have k = (ℓ^4) so indeed the new parameter is polynomial in ℓ.
For each i ∈ [n] we transform the tuple (p^1_i, p^2_i, …, p^ℓ+1_i) into p_i ∈ S_k using the aforementioned natural homomorphism.
We nondeterministically guess the numbers n_1,…, n_ℓ∈ [n]
and set the target permutation p_T as the image of (π^n_1, π^n_2, …, π^n_ℓ, π^f_C) under the natural homomorphism.
This requires guessing ℓ·log n ≤ℓ^2 bits.
By the equivalences above, the instance of 0-1 Counter Machine[ℓ] is solvable if and only if there is a tuple (n_1,…, n_ℓ) for which the constructed instance ((p_1,…,p_n), p_T) of Group-S_k Subset Sum[k] is solvable.
This concludes the proof.
§ REMAINING PROOFS
First, we show that Group-G Subset Sum becomes easy for the group family _k^k.
To this end, we need the following result from the 0-sum theory (see <cit.> for a survey).
Let G be a finite commutative group, m be the maximal order of an element in G, and s satisfy s>m(1 + log(|G| / m)).
Then any sequence a_1, …, a_s of elements in G has a non-empty subsequence that sums to zero.
*
We apply <Ref> to the group G = _k^k.
The maximal order m in G is k and so we can set s = k^2log k.
Consider an instance of Group-_k^k Subset Sum[k] and a solution a_1 + … + a_ℓ = t that minimizes ℓ.
If ℓ≥ s then there exists a non-empty subsequence of a_1, …, a_ℓ that sums to 0.
Removing this subsequence from a_1, …, a_ℓ does not modify the sum so we obtain a shorter solution, which yields a contradiction.
Hence ℓ < s and so we can guess the solution using ℓ· (klog k) = k^3log^2 k bits.
Next, we prove that Subset Sum becomes easy when we drop the restriction that each element can be used by a solution only once.
Recall that in Unbounded Subset Sum we ask for a multiset of integers from {p_1, p_2,…,p_n} that sums up to t.
In fact, in this variant there always exists a solution with small support.
This is a special case of the integer version of Carathéodory's theorem <cit.>.
Unbounded Subset Sum[log t] admits a polynomial certificate.
A solution can be represented as a multiset of input integers.
Consider a solution S that minimizes the number ℓ of distinct integers.
Suppose for the sake of contradiction that ℓ > log t and let S_D denote the set of distinct elements in S.
There are 2^ℓ > t subsets of S_D and each of them has a sum in the interval [0,t].
Consequently, there are two subsets S_1, S_2 S_D that give the same sum.
The same holds for S_1 ∖ (S_1 ∩ S_2) and S_2 ∖ (S_1 ∩ S_2) so we can assume that S_1,S_2 are disjoint.
Let x ∈ S_1 ∪ S_2 be an element with the least multiplicity in S; let m denote this multiplicity and assume w.l.o.g. that x ∈ S_1.
We construct a new solution S' from S: we decrease the multiplicity of each element from S_1 by m and increase multiplicity of each element from S_2 by m.
By the choice of m, the multiplicity of x drops to 0 and we never decrease the multiplicity of any element below 0.
Therefore, S' is also a valid solution with a lower number of distinct elements; a contradiction.
We have shown that ℓ≤log t.
Moreover, the multiplicity of each element in S cannot exceed t.
We can guess this solution by guessing the set S_D and the numbers from [t] representing their multiplicities, using log^2 t nondeterministic bits.
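A sketch (ours) of the certificate format described in this proof: at most log t distinct values together with their multiplicities, verified by a single weighted sum.

def verify_unbounded(p, t, certificate):
    """certificate: list of (value, multiplicity) pairs with distinct values."""
    allowed = set(p)
    values = [v for v, _ in certificate]
    if len(values) != len(set(values)) or len(values) > max(t.bit_length(), 1):
        return False
    if any(v not in allowed or not (1 <= m <= t) for v, m in certificate):
        return False
    return sum(v * m for v, m in certificate) == t

print(verify_unbounded([4, 7, 30], 101, [(4, 6), (7, 11)]))   # True: 24 + 77 = 101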
Binary (k,Δ)-ILP admits a Levin certificate of size (k+Δ).
Finally, we justify why we should not expect a PPT from 3-Coloring[pw] to AND-3SAT[k].
In fact, our argument works already for parameterization by treedepth.
We consider the standard CNF-SAT problem with unbounded arity parameterized by the number of variables
n.
It is known that CNF-SAT[n] is MK[2]-hard <cit.> and as such it is considered unlikely to admit a polynomial Turing kernelization.
On the other hand, AND-3SAT[k] admits a trivial polynomial Turing kernelization
and so it suffices to show a PPT from CNF-SAT[n] to 3-Coloring[pw].
An analogous hardness has been observed for Independent Set parameterized by treewidth <cit.>.
CNF-SAT[n] ≤_ppt 3-Coloring[pw]. Consequently, 3-Coloring[pw] is MK[2]-hard.
We adapt the known NP-hardness proof of 3-Coloring and we refer to the 3 colors as T (true), F (false), B (blocked).
Let g_B, g_F be two special vertices, connected by an edge, and let B, F refer to the colors used by g_B, g_F respectively.
We will utilize the 2-OR-gadget that, for given vertices u,v, uses a fresh vertex g_uv (the output), so that (i) u,v must be colored with T or F,
(ii) the color of g_uv must be F if both u,v have color F,
(iii) the vertex g_uv can be colored with T if one of u,v has color T.
First, connect both u,v to g_B to ensure (i).
Next, create auxiliary vertices u',v' and edges uu', vv', u'v'.
The vertex g_uv is connected to u',v',g_B.
Suppose that both u,v have color F.
Then the colors used by u',v' must be {T,B} and the only color left for g_uv is F.
Next, if one of u,v has color T then u',v' can be colored with {F,B} and g_uv can use T.
We can use the output of the 2-OR-gadget as an input of the next gadget and so for each d ∈ we can construct a d-OR-gadget that takes d vertices, uses (d) auxiliary vertices, and its output implements the OR-function of the input colors.
We are now in position to give the reduction from CNF-SAT[n].
For each variable x_i we create vertices x_i^Y, x_i^N, corresponding to the literals x_i and ¬x_i, connect them to each other, and connect both to the special vertex g_B.
For each clause ϕ with arity d we create a d-OR-gadget over the vertices corresponding to its literals, and connect its output to the special vertex g_F.
First, we analyze the pathwidth of such a graph G by constructing a path decomposition.
We put all the literal and special vertices (2n+2 in total) in all the bags.
Then each bag corresponds to some clause and contains the respective OR-gadget on (n) vertices.
Consequently, we obtain a path decomposition of G of width (n).
Suppose now that the given instance from CNF-SAT[n] admits a satisfying assignment.
If the variable x_i is set to True, we color x_i^Y with T and x_i^N with F.
Otherwise we use the opposite coloring.
By the property (iii) of the OR-gadget, there is a coloring that assigns T to each output of the gadget and so we obtain a proper 3-coloring.
In the other direction, consider a proper 3-coloring of G.
Let B,F be the colors used by g_B,g_F.
Each pair (x_i^Y,x_i^N) is colored as T/F or F/T; we treat x_i as set to True when x_i^Y uses color T.
The output of each OR-gadget must utilize the color unused by g_B,g_F, that is, T.
By the property (ii) of the OR-gadget, one of the inputs also must be colored with T.
Hence the described truth-assignment satisfies every clause.
|
http://arxiv.org/abs/2409.02341v1 | 20240904000204 | Combinatorial description of Lusztig $q$-weight multiplicity | [
"Seung Jin Lee"
] | math.RT | [
"math.RT",
"math.CO",
"05E10"
] |
§ ABSTRACT
We conjecture a precise relationship between Lusztig q-weight multiplicities for type C and Kirillov-Reshetikhin crystals. We also define a 𝔤𝔩_n-version of the q-weight multiplicity for type C and conjecture its positivity.
Combinatorial description of Lusztig q-weight multiplicity
===========================================================
§ INTRODUCTION
The Kostka-Foulkes polynomials K_λμ(t) are coefficients for the modified Hall-Littlewood polynomials when expressed in the Schur basis. They are primarily studied in algebraic combinatorics and representation theory. In 1978, Lascoux and Schützenberger <cit.> showed that
K_λμ(t)=∑_T∈SSYT(λ,μ)t^charge(T),
where the sum is taken over all semistandard Young tableaux of shape λ and weight μ. Since the discovery of the charge formula <cit.>, there have been various interpretations of the charge statistic.
Kostka-Foulkes polynomials can be generalized in mainly two ways. One important generalization is the Macdonald-Kostka polynomials K_λμ(q, t), which are the coefficients for the modified Macdonald polynomials expressed in terms of Schur functions. When q=0, the Macdonald-Kostka polynomial reduces to the Kostka-Foulkes polynomial. Regarding positivity, Haiman <cit.> showed that K_λμ(q, t) lies in ℤ_≥ 0[q, t] by investigating Hilbert schemes. Macdonald polynomials can also be expanded positively in terms of LLT polynomials <cit.>, and Haiman and Grojnowski <cit.> demonstrated that LLT polynomials are Schur positive by proving that the coefficients appear in the theory of Hecke algebras and Kazhdan-Lusztig polynomials, which are known to be nonnegative. Despite various proofs of the positivity of Kostka-Foulkes polynomials, no combinatorial (manifestly positive) formula is currently known.
Another way to generalize the Kostka polynomials is to consider those for other types, known as Lusztig's q-weight multiplicities <cit.>, which are the main focus of this paper. For a simple Lie algebra 𝔤 and dominant weights λ and μ, the Lusztig's q-weight multiplicity ^𝔤_λμ(q) is defined by the formula
D_w_0(e^λ∏_α∈Δ^+1/1 - q e^α) = ∑_μ^𝔤_λμ(q) χ^μ,
where w_0 is the longest element in the Weyl group of 𝔤, Δ^+ is the set of positive roots, D_w is the Demazure operator, and χ^μ is the irreducible character for 𝔤 indexed by μ. Note that D_w_0(e^λ) = χ^λ is the Weyl character formula. Lusztig's q-weight multiplicities are known to be nonnegative due to the theory of affine Kazhdan-Lusztig polynomials, but manifestly positive formulas are not generally known. For type C, Lecouvey <cit.> constructed the cyclage on Kashiwara-Nakajima tableaux <cit.> and conjectured that the statistics defined by the cyclage might provide a manifestly positive formula for Lusztig's q-weight multiplicity in type C. Additionally, Lecouvey and Lenart <cit.> found a combinatorial formula for ^C_n_λμ(q) when μ = 0.
In this paper, we describe the precise relationship between Lusztig's q-weight multiplicities for type C_n and the energy functions in the affine KR crystals for affine type B^(1)_g for large g. Note that Nakayashiki and Yamada <cit.> demonstrated that the charge for type A corresponds to the energy function in the affine KR crystals for affine type A. To describe the conjecture, we introduce semistandard oscillating tableaux (SSOT) and their realization as certain classical highest weights in the tensor products of Kirillov-Reshetikhin column crystals.
Let 𝔤 be an affine Lie algebra and U'_q(𝔤) the corresponding quantum algebra without derivation. Kirillov-Reshetikhin crystals (KR crystals) are crystal bases B^r,s <cit.> of a certain subset of irreducible finite-dimensional U'_q(𝔤)-modules known as Kirillov-Reshetikhin (KR) modules W^r,s <cit.>. For a partition γ, we define the tensor products of KR column crystals B^t_γ as B^γ_1,1⊗⋯⊗ B^γ_n,1. For each element r in B^t_γ, we can associate a non-negative integer E(r), called the energy, which is invariant under the Kashiwara operators e_i and f_i for i ≠ 0. In our specific case, 𝔤 is of type B^(1)_g for large g, but it will be evident that the idea can be generalized to other types (see Section 3 for the stable version).
On the other hand, the author <cit.> constructed a bijection between King tableaux of shape λ and weight μ and SSOTs of shape λ̃ and μ̃ with at most g columns, where λ̃ is the rectangular complement partition in (g^n) for large g. Note that the number of such objects is equal to the weight multiplicity of weight μ in the irreducible 𝔰𝔭_n-module indexed by λ.
The goal of this paper is to define a certain statistic E(T), also called energy, for each SSOT T by making an injective map from the set of SSOTs of shape λ̃ and weight μ̃ to the set of some classical highest weights in B^t_μ̃ of weight λ̃, and then take the energy function defined on B^t_μ̃. To be more precise, an SSOT consists of oscillating horizontal strips (α,β,γ) where
α, β, γ are partitions such that α and γ are contained in β, β/α and β/γ are horizontal strips, and the size of an oscillating horizontal strip is ℓ=2|β|-|α|-|γ|, where |α| is defined as the sum of parts of α. For each oscillating horizontal strip (α,β,γ), we associate an element in B^ℓ,1 which has an entry i if and only if there is a cell in the i-th column of β/α, and has an entry i̅ if and only if there is a cell in the i-th column of β/γ. For example, for the oscillating horizontal strip ((1),(3,1),(2,1)) we associate an element
with entries (1, 2, 3, 3̅) = (1, 2) in B^4,1. The right-hand side uses the sage notation, making the column admissible.
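The map from a strip to its column can be spelled out in a few lines; in the sketch below (our own illustration, not from the paper) a barred letter i̅ is encoded as the negative integer -i.

def columns_of_skew(outer, inner):
    # columns (1-indexed) occupied by the horizontal strip outer/inner
    cols = []
    for row, size in enumerate(outer):
        start = inner[row] if row < len(inner) else 0
        cols.extend(range(start + 1, size + 1))
    return cols

def strip_to_column(alpha, beta, gamma):
    entries = columns_of_skew(beta, alpha)                  # unbarred entries i
    entries += [-c for c in columns_of_skew(beta, gamma)]   # barred entries -i
    assert len(entries) == 2 * sum(beta) - sum(alpha) - sum(gamma)
    # sort in the order 1 < 2 < ... < n < n-bar < ... < 1-bar
    return sorted(entries, key=lambda e: (e < 0, e))

# the example above: ((1), (3,1), (2,1)) gives the column {1, 2, 3, 3-bar}
print(strip_to_column((1,), (3, 1), (2, 1)))   # [1, 2, 3, -3]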
Let ϵ^C(T) be the maximum number of columns among partitions in the SSOT T. In KR crystal language, ϵ^C(T) is the maximum number i appearing in the element T in B^t_μ̃. Then we have the following conjecture:
^C_n_λ,μ(q)=∑_T q^E(T)
where the sum runs over all SSOT T of shape λ̃ and weight μ̃ and ϵ^C(T)≤ g.
In general, defining and computing the energy function can be challenging, so we provide an example where the energy function is easy to compute, which is the case μ̃=(1^n).
For an element a_n ⊗ a_n-1 ⊗⋯⊗ a_1 in (B^1,1)^⊗ n, written with the sage notation, we define the energy function by ∑_i=1^n-1 (n-i)H(a_i+1,a_i), where
H(b,a)=
2 if a=1 and b=1̅,
1 if b ≽ a and (b,a)≠ (1̅,1),
0 if b ≺ a,
under the order
1 ≺ 2 ≺ ⋯ ≺ n ≺ n̅ ≺ ⋯ ≺ 1̅.
Note that when all a_i's are positive, the energy function is the same as the charge statistic of a standard Young tableau in the Lascoux-Schützenberger formula. In general, Conjecture <ref> generalizes Lascoux-Schützenberger's charge formula for semistandard Young tableaux, which applies to the case where |λ| = |μ|. This conjecture is a significant generalization of the charge formula for type A, as it not only describes the q-weight multiplicities for type C but also establishes a connection with affine crystal theory.
Also note that Conjecture <ref> naturally implies the following monotonicity:
_λ+(1^n),μ+(1^n)^C_n(q)-_λ,μ^C_n(q) ∈ℤ_≥ 0[q].
The following is an easy example of Conjecture <ref>.
Let n=3, λ=(1,1) and μ=∅. When g=1, λ̃=(1) and μ̃=(1,1,1), and all classical highest weights in (B^1,1)^⊗ 3 with the weight λ̃ are
1⊗ 1 ⊗ 1, 1 ⊗1⊗ 1, 2⊗ 2⊗ 1.
Since the values of ϵ^C for the first two elements are 1 and the value of ϵ^C for the third element is 2, with corresponding energies being 2, 4, and 3, respectively, we have the following:
_(1,1),∅^C_n(q)=q^2+q^4.
Similarly, we have
_(2,2,1),(1,1,1)^C_n(q)=q^2+q^3+q^4.
In Section 3, we prove the conjecture for the stable case by relating q-weight multiplicities to a one-dimensional sum and the X = ^∞ theorem by Shimozono and Lecouvey <cit.>. The proof in Section 3 applies not only to type C but also to all nonexceptional types.
During the course of this work, the author identified another natural version of the q-weight multiplicity, which we term the 𝔤𝔩_n q-weight multiplicities for nonexceptional types. Consider a function L_A on the set of positive roots, where L_A(α) = 1 if α is a positive root of type A, namely ϵ_i - ϵ_j for i < j, and 0 otherwise. Then the 𝔤𝔩_n q-weight multiplicity for λ,μ of type 𝔤 is defined by the following:
For a function L from the set of positive roots to ℚ, define
D_w_0(e^λ∏_α∈Δ^+1/1 - q^L(α) e^α) := ∑_μ^𝔤, L_λμ(q) χ^μ.
Then the 𝔤𝔩_n q-weight multiplicity for λ,μ of type 𝔤 is defined by _λμ^𝔤,L_A.
^C_n,L_A_λ,μ(q) is in ℤ_≥ 0[q].
Proofs of both Conjecture <ref> and Conjecture <ref> will be presented in a separate paper <cit.>. In this paper, we investigate the connection between Conjectures <ref> and <ref> and known results, particularly in relation to affine crystal theory and X=^∞ theorem.
§ ACKNOWLEDGEMENT
The author formulated Conjecture <ref> around December 2019 and shared it with Mark Shimozono and Jaehoon Kwon. I thank them for keeping our discussions confidential and for their valuable insights. I also thank Hyunjae Choi and Donghyun Kim for helpful discussions. This work is supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 0450-20240021).
§ LUSZTIG Q-WEIGHT MULITIPLICITY AND 1-DIMENSIONAL SUM
Let 𝔤 be an affine algebra for nonexceptional type. In this section, we explain X=^∞ theorem briefly and relate the theorem with Conjecture <ref> and <ref>.
First, note that Equation (<ref>) is equivalent to the following formula:
^𝔤,L_λμ(q)= ∑_w∈ W (-1)^w [e^w(λ+ρ)-μ-ρ]∏_α∈Δ^+1/1-q^L(α)e^α
where [e^β]f denotes the coefficient of e^β in f ∈ℤ[P]. The stable version is defined by
^∞^𝔤,L_λμ(q)= ∑_w∈ S_n (-1)^w [e^w(λ+ρ)-μ-ρ]∏_α∈Δ^+1/1-q^L(α)e^α,
where S_n is the symmetric group, as a subgroup of W.
The reason why this definition is called stable version is because for large k, we have
^∞^𝔤,L_λμ(q)=^𝔤,L_λ+(k^n),μ+(k^n)(q).
One-dimensional (1-d) sums X are graded tensor product multiplicities for affine Kac-Moody algebras, which arise from two-dimensional solvable lattice models <cit.>, and which may be defined using the combinatorics of affine crystal graphs <cit.>. For any nonexceptional family of affine algebras, the 1-d sums have a large rank limit which is called the stable 1-d sums. There are only four distinct kinds of stable 1-d sums <cit.>, and they are labeled by the four partitions ◊∈{∅,(1),(2),(1,1)} having at most 2 cells. See <cit.> for more description. To explain Conjecture <ref> we only need the case ◊=(2) which corresponds to the case when 𝔤 is of type B^(1)_g , A^(2)_2g-1 or D^(1)_g for large g. To be precise, let
B_μ=B^1,μ_1⊗⋯⊗ B^1,μ_ℓ
B^t_μ=B^μ_1,1⊗⋯⊗ B^μ_ℓ,1
where ℓ=ℓ(μ).
For any ◊ and for any tensor product B of KR modules, the X polynomial is defined by
X^◊_λ,B= ∑ q^E(T)
where T runs over all classical highest weight elements in B for type ◊, of weight λ. In this paper, B is either B_μ or B^t_μ.
The following is X=^∞ theorem in <cit.>.
Let ||μ||=∑_i=1^n (i-1)μ_i. Then we have
X^◊_λ,B_μ(q)=q^||μ||+|μ|-|λ|^∞^𝔤_n_λ̃μ̃(q^-1)
§ PROOF OF CONJECTURE <REF> AND <REF> FOR THE STABLE CASE
There is the following duality in X.
<cit.>
X^◊^t_λ^t,B^t_μ(q)=q^||μ||+|μ|-|λ|X^◊_λ,B_μ(q^-1)
Therefore, we have
^∞^𝔤_n_λμ(q)=X^(1,1)_λ̃,B^t_μ̃(q)
by Theorem <ref> and <ref>, and Conjecture <ref> for the stable case follows.
To prove Conjecture <ref> for the stable case, consider the automorphism ϕ(e^ϵ_i)=q^1/2 e^ϵ_i. Note that the automorphism ϕ commutes with the action of S_n, but not with the action of W. For type C positive roots, ϕ(e^α) is e^α if α is of type A, or q e^α otherwise. Therefore, we have
ϕ(e^w(λ))=ϕ(e^λ)= q^⟨λ,ω_n⟩/2 e^λ
where w∈ S_n and ω_n=(1^n). Also, we have
ϕ(∏_α∈Δ^+1/1-q^L_A(α)e^α)=∏_α∈Δ^+1/1-qe^α.
Therefore, we have
Assume that 𝔤 is nonexceptional. Then
q^|λ|-|μ|^∞_λ,μ^𝔤,L_A(q)=^∞_λ,μ^𝔤(q).
Therefore, Conjecture <ref> for the stable case holds.
There is no such elegant identity as in <ref> for non-stable cases, because ϕ(e^w(α)) is not equal to ϕ(e^α) for w∈ W in general.
§ FURTHER DIRECTIONS
One natural question is whether Conjectures <ref> and <ref> can be generalized to other types. Partial progress on this topic will appear in <cit.>. Applications of Conjecture <ref> include its connections with parabolic q-weight multiplicities <cit.>, type C analogues <cit.> of Catalan functions <cit.>, and potentially with Macdonald polynomials and LLT polynomials for type C.
Another question is whether there is a geometric or representation-theoretic interpretation of the duality that appears in Conjecture <ref>. It would be interesting to explain Theorem <ref> without relying on the X = K result, where K denotes Kostka polynomials. The proof of Theorem <ref> utilizes the X = K theorem and the duality for Kostka polynomials.
Lastly, Conjecture <ref> is not compatible with Lecouvey's conjecture in <cit.>, but it seems to be compatible with the work <cit.> by Lecouvey and Lenart when μ=0. It would be interesting to generalize the computation of the energy in their way.
|
http://arxiv.org/abs/2409.02758v1 | 20240904143533 | Power-grid modelling via gradual improvement of parameters | [
"Bálint Hartmann",
"Géza Ódor",
"Kristóf Benedek",
"István Papp"
] | nlin.AO | [
"nlin.AO",
"cond-mat.dis-nn",
"cond-mat.stat-mech",
"physics.soc-ph"
] |
[email protected]
Institute of Energy Security and Environmental Safety, HUN-REN Centre for Energy Research, P.O. Box 49, H-1525 Budapest, Hungary
Institute of Technical Physics and Materials Science, HUN-REN Centre for Energy Research, P.O. Box 49, H-1525 Budapest, Hungary
Institute of Technical Physics and Materials Science, HUN-REN Centre for Energy Research, P.O. Box 49, H-1525 Budapest, Hungary
Department of Theoretical Physics, Budapest University of Technology and Economics, Budafoki út 8, H-1111 Budapest, Hungary
Institute of Technical Physics and Materials Science, HUN-REN Centre for Energy Research, P.O. Box 49, H-1525 Budapest, Hungary
§ ABSTRACT
The dynamics of electric power systems are widely studied through the phase synchronization of oscillators, typically with the use of the Kuramoto equation. While there are numerous well-known order parameters to characterize these dynamics, shortcomings of these metrics are also recognized. To capture all transitions from phase disordered states over phase locking to fully synchronized systems, new metrics were proposed and demonstrated on homogeneous models. In this paper we aim to address a gap in the literature, namely, to examine how the gradual improvement of power grid models affects the goodness of certain metrics. To study how the details of models are perceived by the different metrics, 12 variations of a power grid model were created, introducing varying levels of heterogeneity through the coupling strength, the nodal powers and the moment of inertia. The grid models were compared using a second-order Kuramoto equation and an adaptive Runge-Kutta solver, measuring the values of the phase, the frequency and the universal order parameters. Finally, frequency results of the models were compared to grid measurements. We found that the universal order parameter was able to capture more details of the grid models, especially in cases of decreasing moment of inertia. The most heterogeneous models showed very low synchronization and thus suggest a limitation of the second-order Kuramoto equation. Finally, we show local frequency results related to the multi-peaks of static models, which implies that spatial heterogeneity can also induce such multi-peak behaviour.
Power-grid modelling via gradual
improvement of parameters
I. Papp
September 4, 2024
================================================================
Modeling power-grid systems has gained major importance in recent times, as the transformation to renewable energy sources requires the complete re-design of energy transmission.
Renewable energy sources can be located quite far from their consumption points because urban and industrial structures do not follow physical constraints and capabilities. Important examples are the sea coast vs
inland divisions in the case of wind power. Ill-constructed high-voltage (HV) power grids can cause catastrophic damages to economies as it was demonstrated in recent history via the emergence of large blackout events<cit.>. The probability distributions of such events was found to be fat-tailed, exhibiting power-law (PL) tails very often
<cit.>. To understand them, self-organized critical direct current (DC) models have been constructed <cit.> and have been shown to describe well the PL exponents of empirical values. However, many details could not be understood as power-grids work with alternating currents (AC) in which phase differences are the primary causes of the power-flows.
§ INTRODUCTION
AC modelling of power-grids have been proposed since the equivalence of swing equations to the second-order Kuramoto model was shown <cit.>.
Failures leading to blackouts have been studied by composite Kuramoto and threshold models <cit.> and the PL tailed cascade failures could be modelled by them <cit.>. Network topological features, which lead to desynchronization by network fragmentation and Braess paradox phenomena, have been identified <cit.>. We have shown that these are basically consequences of quenched heterogeneity, which can be mitigated by the enhanced fluctuations, that arise naturally in the neighborhood of synchronization transition points, where power grids self-organize themselves by the competition of supply and demand <cit.>.
In power grid systems, the focus is often on the phase synchronization of the individual oscillators <cit.> since their steady state is usually a stable limit cycle. To study the synchronization dynamics, several order parameters are used to characterize the dynamic state of the system. In the literature, there are numerous well-known Kuramoto order parameters, such as the complex order parameter <cit.>, the local order parameter measuring the phase coherence and its global variant <cit.>, a mean-field variant of the complex order parameter <cit.> or the one respecting network topology <cit.>.
Shortcomings of these metrics have been highlighted in a number of papers, most importantly by Ref. <cit.>, who claim that existing order parameters are not fully suitable to characterize complex oscillator networks as they don't capture all transitions from incoherence over phase locking to full synchrony for arbitrary, finite networks. Hence a universal order parameter was also introduced, which captures partial phase locking, respects the topology of the network, and has been shown to increase monotonically with the coupling strength.
In this paper, we aim to address a research gap in the literature, namely to examine how different modeling assumptions regarding the heterogeneities of a power grid are captured by the different order parameters. In the different scenarios, to analyze and compare the dynamic behavior of the various models, we will use the frequency spread, the global order parameter, and the newly proposed universal order parameter by ref. <cit.> as the main measures. We also present a frequency analysis of the simulation results and confirm q-Gaussian distributions, matching real data distributions presented in one of our earlier work <cit.>.
The remainder of the paper is structured as follows. Section <ref> introduces the synchronization model and the twelve grid models. Section <ref> presents the results organized around five aspects. Finally, these results are discussed in Section <ref>, and conclusions are drawn.
§ DATA AND MODELS
§.§ The synchronization model
Modeling of power-grid systems comes in different flavors, but at the heart of most approaches describing the time evolution lie the so-called swing equations <cit.>, set up for mechanical elements (e.g. rotors in generators and motors) with inertia. Mathematically they are formally equivalent to the second-order Kuramoto equation <cit.>, for a network of N oscillators with phases θ_i(t).
To investigate the effects of different parametrizations and to facilitate benchmarking with previous results, we used a more specific form <cit.>, which includes dimensionless electrical parametrization and approximations for unknown ones:
θ̈_i + α θ̇_i = P_i/(I_i ω_S) + P_i^max/(I_i ω_S) ∑_j=1^N W_ij sin(θ_j-θ_i) + Ω_i .
In this equation θ_i is the phase angle, ω_i=θ̇_i is the frequency of node i, α is the dissipation or damping factor, W_ij is the coupling strength and P_i is the source/load power. Furthermore, I_i denotes the rotation inertia, ω_S the system frequency, and P_i^max the maximal transmitted power in the system. Note that we can also have an intrinsic frequency of nodes Ω_i = 50 Hz (in Europe), but it can be transformed out in a rotating frame and we have omitted it in the calculations. Our frequency results show the deviations from this value.
If we know more details of the electrical parameters we can cast this into the form with real physical dimensions:
ω̇_i = -D_i ω_i/M_i ω_S + L_i/M_i ω_S + ∑_j=1^NY_ij V_i V_j/M_i ω_Ssin(θ_j - θ_i),
where D_i has dimension of [kg·m^2/s^2] and describes the damping effect of element i in the system, L_i [kg·m^2/s^3] is the power capacity of node i, Y_ij=1/X_ij [1/Ω] is the susceptance of lines, the inverse of reactance, V_i [V] is the nodal voltage level and M_i [kg·m^2] is the moment of inertia.
Topological heterogeneity of power grids is the result of two factors, (i) the structure and connectivity of the grid itself, and (ii) the heterogeneity of power line capacities and nodal behaviors, as it was presented in our recent work <cit.>. In the second-order Kuramoto equation, these varying properties are represented by the parameters L_i, Y_ij and M_i.
The time step resolution of the calculations was set to be Δ t= 0.25 and α=0.1 was used, similarly as in Refs <cit.>. Ref. <cit.> also used α=0.4. In this way, the results of the Kuramoto equation become dimensionless.
To model station fluctuations, we have added a multiplicative, quenched noise to the source/sink terms of <ref>
η_i,j = 0.05 ξ_j P_i/I_i ω_s ,
where ξ_j ∈ N(0,1) is drawn from a zero-centered Gaussian distribution. To solve the equations of motion we used an adaptive Runge-Kutta-4 method from the Numerical Recipes package.
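As an illustration of this setup, a minimal fixed-step fourth-order Runge-Kutta integrator for the dimensionless swing equation above is sketched below (the paper uses an adaptive solver; the ring network, the drive values and all other numbers in the toy example are placeholders of ours, and the prefactors P_i/(I_i ω_S) and P_i^max/(I_i ω_S) are lumped into the drive term and the coupling matrix).

import numpy as np

def rhs(state, A, drive, lam, alpha):
    """state = [theta; omega]; A is the weighted coupling matrix W_ij."""
    n = A.shape[0]
    theta, omega = state[:n], state[n:]
    # pairwise interaction sum_j W_ij * sin(theta_j - theta_i)
    interaction = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return np.concatenate([omega, -alpha * omega + drive + lam * interaction])

def rk4_step(state, dt, *args):
    k1 = rhs(state, *args)
    k2 = rhs(state + 0.5 * dt * k1, *args)
    k3 = rhs(state + 0.5 * dt * k2, *args)
    k4 = rhs(state + dt * k3, *args)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# toy example: 5 nodes on a ring, homogeneous parameters (placeholders)
rng = np.random.default_rng(0)
n, dt, alpha, lam = 5, 0.25, 0.1, 0.5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
drive = 0.05 * rng.standard_normal(n)        # quenched source/sink term with noise
state = np.concatenate([rng.uniform(0, 2 * np.pi, n), np.zeros(n)])
for _ in range(400):
    state = rk4_step(state, dt, A, drive, lam, alpha)
print(state[:n] % (2 * np.pi))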
We investigated the standard synchronization measures of the phases R(t) and the frequency spread Ω(t), called the frequency order parameter. We measured the Kuramoto phase order parameter:
z(t_k) = r(t_k) exp[i θ(t_k)] = 1 / N ∑_j exp[i θ_j(t_k)] .
Sample averages over different initial fluctuations for the phases
R(t_k) = ⟨ r(t_k)⟩
and for the variance of the frequencies
Ω(t_k) = 1/N⟨∑_j=1^N (ω(t_k)-ω_j(t_k))^2 ⟩
were determined, where ω(t_k) denotes the mean frequency within each respective sample at time step
t_k = 1 + 1.08^k, k=1,2,3....
Sample averages were computed from the solutions with hundreds of independent self-frequency realizations (i.e. η_i,j) for each control parameter.
Besides, we measured a more complex order parameter suggested for the second-order Kuramoto model, which is claimed to accurately track the degree of partial phase locking and synchronization <cit.>
r_uni(t_k) = 1/(∑_i,j^N w_ij) ∑_i,j^N w_ij cos(θ_i-θ_j)
and it's sample and temporal average in the steady state:
R_uni=⟨ r_uni(t_k) ⟩
The fluctuations of the order parameters are measured by the standard deviations of the sample and temporal averages in the steady state, typically after 100 s transient time.
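For completeness, the three measures can be computed from the phase and frequency arrays of one sample at one time step as follows (the helper names are ours).

import numpy as np

def kuramoto_r(theta):
    """|z(t_k)|: modulus of the Kuramoto phase order parameter."""
    return np.abs(np.exp(1j * theta).mean())

def frequency_spread(omega):
    """Variance of the frequencies around the sample mean, Omega(t_k)."""
    return np.mean((omega.mean() - omega) ** 2)

def r_uni(theta, W):
    """Universal order parameter r_uni(t_k) for coupling-weight matrix W."""
    cosdiff = np.cos(theta[:, None] - theta[None, :])
    return (W * cosdiff).sum() / W.sum()

# R(t_k), Omega(t_k) and R_uni are then obtained by averaging these values
# over the independent realizations (and over time in the steady state).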
§.§ The grid model
We chose the Hungarian high-voltage (132, 220, and 400kV) network to create our grid models.
The network consists of 387 nodes and 640 edges, and its most important features are presented in Table <ref>.
Cross-border transmission lines were reduced to their domestic terminals as sources or consumers,
thus resulting in a standalone synchronous system. In the modeled loading state, 351 nodes behave as consumers and 36 as sources.
In order to study how modeling depth is perceived by the different order parameters, we created 12 different variations of the Hungarian network. These variations introduce heterogeneity to the parameters W_ij, L_i, and M_i. The resulting representations thus range from completely homogeneous networks, which are the most widely covered in related literature, to completely heterogeneous ones, where electric parameters and nodal behaviors are defined using the actual data and measurements of the Hungarian system.
The following assumptions are used for the three parameters.
* W_ij, coupling strength:
* Identical value for each edge, the value corresponding to the largest thermal capacity (ampacity) limit in the system (approx. 1400 MW). This option represents the benchmark used by e.g. Refs. <cit.>.
* Unique value for each edge, depending on their actual thermal capacity limits (range between 40 and 1400 MW).
* Unique value for each edge, depending on their actual admittance Y_ij and voltage level.
* L_i, nodal power:
* The sources (L_i<0) are distributed equally among the nodes representing power plants and the consumers (L_i>0) are distributed equally among nodes truly representing consumption. This is a slightly modified assumption of Ref. <cit.>, where half the nodes correspond to consumers (L_i>0), while the other half to power sources (L_i<0).
* Every L_i value is uniquely assigned, based on measured data (SCADA).
* M_i, moment of inertia:
* Aligning with the literature, we set a constant value for M_i, corresponding to a 400 MW gas turbine power plant as in <cit.>.
* We evenly distribute the moment of inertia among the 400 kV and 220 kV nodes, which host the majority of synchronous generators (conventional power plants).
* We evenly distribute the moment of inertia along all nodes of the model.
* We set unique values based on measured data and whether the node actually hosts a synchronous machine or not.
In the following the 12 scenarios will refer to the different models as shown in Table <ref>.
§ RESULTS
In the following, results of the synchronization studies are presented in a sequential, interdependent way. First, the coupling strength was varied to compare how different order parameters display criticality (Section <ref>). Then the transient behaviour of the three order parameters, R, Ω and R_uni are analysed in sections <ref>, <ref> and <ref>, respectively. Section <ref> compares the frequency data of the simulations to grid frequency data presented in Ref. <cit.>.
Note that to assist the interpretation of the results, not all scenarios are displayed in all figures.
§.§ Dependence of the critical point on the models
We assume that power-grid-like systems operate near the state of self-organized criticality (SOC) <cit.>. This means that the system is not operated at 100% load capacity, but usually at a lower level. To mimic the not fully loaded behavior we can cast equation (<ref>) in the following form:
θ̈_i + α θ̇_i = P_i/(I_i ω_S) + P_i^max/(I_i ω_S) λ ∑_j=1^N W_ij sin(θ_j-θ_i) .
The multiplicative factor λ in front of the interaction term is a constant chosen for the system, and its value corresponds to different load levels. Alternatively, it can be said that the initial P_i^max/(I_i ω_S) term transforms to P_i/(I_i ω_S), with arbitrary power P_i. In a mathematical sense this maps to a changing coupling strength, which allows us to identify the SOC behavior by analyzing the standard deviation of the Kuramoto order parameter.
To identify the cross-over point to synchronization (cf. fig. <ref>), we varied the λ parameter from 0.1 to 1.0 with steps of 0.1. For the sake of completeness, we performed analysis starting the system from a phase-ordered state, i.e. all oscillators have the same initial phase and some noise, or from a disordered phase, i.e. all the oscillators have random initial phase assignment. In the figures of this section, dash-dotted lines with unfilled markers will represent the phase-ordered states, and continuous lines with filled markers are the disordered states.
We have chosen model scenarios 1, 5, and 9 for finding the optimal λ value, as these scenarios represent homogeneous nodal behavior with three different options for defining the coupling strength. We performed the statistical calculation on a minimum of 2500 and a maximum of 7500 samples for each λ value. The results are shown in Figs. <ref>-<ref> for R, Ω and R_uni, respectively.
Fig. <ref> shows that the behavior of the phase-ordered curves (dash-dotted lines with unfilled markers) is non-trivial. As the curves of scenarios 1, 5, and 9 overlap, this implies that R does not capture the difference in modeling the coupling strength. Very similar results were obtained for Ω-s, as shown in Fig. <ref>.
Fig. <ref> displays that the R_uni order parameter indeed increases monotonically, with the exception of the case of scenario 9 with phase-ordered initial conditions and large λ values. Also, the standard deviation of the phase-disordered results shows a peak at λ=0.4 for scenarios 5 and 9, and at λ=0.3 for scenario 1. Phase-ordered results show non-trivial behavior, with a decreasing tendency with the exception of scenario 9. Finally, it is important to notice that R_uni captures the difference in modeling the coupling strength. Scenario 5, representing the unique thermal capacity of the lines, shows a steep increase as a function of λ, showing that a heterogeneous grid structure tends to desynchronize under less severely weakened states.
§.§ Kuramoto order parameter transient behavior
Fig. <ref> shows the Kuramoto order parameter, R in the transient from different initial conditions. It can be seen that the scenarios align in three curves. The highest R values are reached by scenarios 1, 5, and 9 (R≈0.7 for phase disordered initial conditions), then 2, 6, and 10 (R≈0.5), and finally by 3, 7, and 11 (R≈0.4). These results indicate that changing the coupling strength, W_ij does not cause visible changes in steady-state values of R if all other parameters are fixed. This implies that R is unable to describe the synchronization behavior by changing the characteristics of the transmission lines.
It can also be seen that increasing the heterogeneity of the moment of inertia, M_i, decreases the value of the order parameter, as expected.
§.§ The Ω order parameter transient
Fig. <ref> shows the evolution of the order parameter Ω. Similarly to the results of R, the scenarios form three distinct curves. However, the order of these curves is the opposite of the Kuramoto order parameter. Scenarios 1, 5, and 9 exhibit the lowest results (ΔΩ≈6 for phase disordered initial conditions), followed by 2, 6, and 10 (ΔΩ≈40), and by 3, 7, and 11 (ΔΩ≈5000). These results again imply that changing the coupling strengths, W_ij does not affect the steady-state values of Ω too much if all other parameters are fixed, thus heterogeneity in the edge behavior seems to be irrelevant.
§.§ The new order parameter R_uni
Fig. <ref>
shows the evolution of R_uni from disordered (left) and from ordered (right) initial conditions. This time all nine curves are separated, thus it is validated that R_uni is able to capture the different heterogeneities of the models. Similarly to the other order parameters, the highest synchronization is shown by the completely homogeneous model, Scenario 1. However, the results of the other scenarios are significantly different than before. In general, unique nodal behavior modeled through L_i and M_i decreases synchronization, which is an important advantage compared to the use of R, shown in Section <ref>.
When comparing the scenarios, an interesting behavior can be observed. If only the heterogeneity of the coupling strength W_ij is increased, better synchronization is found in the case of the scenarios where the values of W_ij are calculated from admittances (Scenarios 9, 10, 11), as compared to the ones where the actual thermal capacity was considered (5, 6, 7). If we compare curves with the same coupling strength, it is seen that the benchmark of the literature, namely assuming the moment of inertia of a large power plant at each node, is a significant factor in reaching high synchronization (e.g., R_uni≈0.9 for Scenario 1 and ≈0.8 for Scenario 2). However, such models are demonstrably overly optimistic, especially with the increase of non-synchronous generation in the mix <cit.>. As R_uni is able to capture the different nodal behaviors and also shows their dependency on the coupling strength, its use over R should be favored, especially when using the Kuramoto model for grid case studies.
§.§ Frequency distributions
Testing the predictive power of the Kuramoto-based modeling is a great challenge and, to our knowledge, has not been done at the quantitative level for realistic power grids. Here we show node frequency results obtained by different levels of parameter approximations. We calculated the PDF-s at nodes obtained in the steady state from samples at the last 10 time steps and from thousands of independent realizations, which differ in the input/output power by the addition of the small quenched fluctuation values as shown in Eq. (<ref>).
These calculations were done for each scenario and for each λ.
We tried to fit the PDF-s with the 8 most popular distributions: Gaussian, exponential, Student's t, log-normal, Pareto, double Weibull, generalized extreme value, and beta, from the Python distfit package, as well as with q-Gaussian functions, as this distribution has commonly fitted well in other HV studies of AC electrical data <cit.>. Agreement with the q-Gaussian is remarkably good for lower λ values, see Fig. <ref> in the case of
Scenario 9.
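Since the q-Gaussian is not among the distfit candidates, it can be fitted separately, e.g. by least squares on the histogram; the snippet below is one possible way to do this (with placeholder data), not the exact procedure used here.

import numpy as np
from scipy.optimize import curve_fit

# q-Gaussian shape A * [1 - (1-q) * beta * x^2]_+^{1/(1-q)}; for q -> 1 it tends
# to a Gaussian. Illustrative fit only; data below are placeholders.
def q_gaussian(x, A, q, beta):
    base = 1.0 - (1.0 - q) * beta * x ** 2
    return A * np.clip(base, 0.0, None) ** (1.0 / (1.0 - q))

delta_omega = np.random.standard_t(df=4, size=20000) * 0.05   # placeholder data
hist, edges = np.histogram(delta_omega, bins=200, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])

popt, _ = curve_fit(q_gaussian, centers, hist, p0=(hist.max(), 1.3, 200.0),
                    bounds=([0.0, 1.0001, 1e-6], [np.inf, 2.9, np.inf]))
print("fitted A, q, beta:", popt)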
For λ > λ_c we found multi-peak behavior, as displayed in the inset of Fig. <ref>, even though the width of the frequency spread decreases with increasing λ. Multi-peak frequency behavior is very common in the European power grid, especially in island systems such as Great Britain, Ireland, and Mallorca <cit.>. A numerical analysis based on the extension of the swing equations with a time-dependent damping factor could reproduce such global frequency fluctuations, suggesting that the system is wandering around the nominal 50 Hz peak <cit.>.
Our swing equation solutions on the full Hungarian power-grid network suggest that even with constant parameters multi-peak frequency behavior can emerge when we overload the system with global power transmission above the synchronization point λ_c.
Thus, beyond temporally different behavior, we can also find sub-peaks in static power grids, due to the network heterogeneity. Note that in our previous large-scale simulations, we showed different synchronization behaviors at fixed control parameters in different communities of Europe <cit.>.
This may hint at the dangers in power grids with multi-peaks being out of optimal operation control even if the frequency spread is narrow.
For scenarios other than 9, fat-tailed distributions were also obtained. As they are very numerous, they will be published elsewhere.
§ DISCUSSION AND CONCLUSIONS
As was shown in the comparative analysis of Section <ref>, the use of R_uni, proposed by Ref. <cit.>, is encouraged to display the differences of heterogeneous power grid models. We found that, in contrast to R and Ω, this order parameter is able to capture the differences in the coupling strength, showing higher synchronization values when W_ij is calculated based on the admittances of power lines. We also found that the decreasing inertia of the system is more distinctly presented across the different scenarios. This feature of R_uni is especially advantageous for case studies, as the benchmark models of the literature tend to overestimate the amount of inertia in the system.
Considering the modeling depth, we found that completely heterogeneous models with unique nodal behavior based on SCADA measurements show very low synchronization and thus seem to be inappropriate for Kuramoto models. The underlying reasons could be (i) the neglect of power losses and (ii) the neglect of reactive power. To bridge these gaps, a promising approach was presented in Refs <cit.>, where the voltage magnitudes are incorporated in the Kuramoto model.
We provided local frequency results for the whole Hungarian power grid, agreeing quite well with empirical measurements <cit.>. The calculated PDF-s of ΔΩ, with respect to the nominal 50 Hz, exhibit similar widths and shapes as those recorded and
published in <cit.>. For λ values that drive the system above the synchronization point, we earlier observed community-dependent synchronization <cit.>. These are related to the frequency multi-peaks of static models we report now. This means that
they occur not only in systems with time-dependent parameters, but spatial heterogeneity can also induce them and warn of over-driven power grids, in the sense that they are away from the optimal SOC behavior.
Bálint Hartmann acknowledges the support of the Bolyai János Research Scholarship of the Hungarian Academy of Sciences. Support from the Hungarian National Research, Development and Innovation Office NKFIH (K146736) is also acknowledged.
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
http://arxiv.org/abs/2409.03378v1 | 20240905092931 | On Using Curved Mirrors to Decrease Shadowing in VLC | [
"Borja Genoves Guzman",
"Ana Garcia Armada",
"Maïté Brandt-Pearce"
] | eess.SP | [
"eess.SP",
"cs.SY",
"eess.SY"
] |
On Using Curved Mirrors to Decrease
Shadowing in VLC
Borja Genoves Guzman has received funding from the European Union under the Marie Skłodowska-Curie grant agreement No 101061853.
Borja Genoves Guzman12,
Ana Garcia Armada2, and
Maïté Brandt-Pearce1
1Electrical and Computer Engineering Dept.,
University of Virginia, Charlottesville, VA 22904 USA
2Signal Theory and Communications Dept., University Carlos III of Madrid, Leganes, Madrid 28911 Spain
E-mails: [email protected], [email protected], [email protected]
September 9, 2024
=========================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Visible light communication (VLC) complements radio frequency in indoor environments with large wireless data traffic. However, VLC is hindered by dramatic path losses when an opaque object is interposed between the transmitter and the receiver. Prior works propose the use of plane mirrors as optical reconfigurable intelligent surfaces (ORISs) to enhance communications through non-line-of-sight links. Plane mirrors rely on their orientation to forward the light to the target user location, which is challenging to implement in practice. This paper studies the potential of curved mirrors as static reflective surfaces to provide a broadening specular reflection
that increases the signal coverage in mirror-assisted VLC scenarios. We study the behavior of paraboloid and semi-spherical mirrors and derive the irradiance equations. We provide extensive numerical and analytical results and show that curved mirrors, when developed with proper dimensions, may reduce the shadowing probability to zero, while static plane mirrors of the same size have shadowing probabilities larger than 65%. Furthermore, the signal-to-noise ratio offered by curved mirrors may suffice to provide connectivity to users deployed in the room even when a line-of-sight link blockage occurs.
Curved mirrors, line-of-sight (LoS) link blockage, optical reconfigurable intelligent surface (ORIS), reflector, visible light communication (VLC).
§ INTRODUCTION
Visible light communication (VLC) has been demonstrated to be a complementary technology to alleviate the congestion in the radio frequency bands produced by the massive increase of wireless communication data. VLC retrofits the light-emitting diode (LED) infrastructure, which has been primarily deployed for illumination, to also provide wireless communication services. However, the large path-loss produced in the visible wavelengths when light goes through obstacles makes VLC unreliable, which impacts its quality of service and limits its mass market adoption.
Prior works have strengthened the VLC technology by proposing cooperative multi-point transmissions <cit.>, exploiting reflections coming from the wall <cit.>, or creating angle diversity receivers that allow receiving signals from multiple angles <cit.>. However, these techniques involve complex deployments to provide high-quality VLC-based services. Recently, the research community has considered the use of optical reconfigurable intelligent surfaces (ORISs) to make the non-line-of-sight (NLoS) VLC signals stronger. Mirrors have been demonstrated to be the most promising ORIS material, showing a larger maturity and providing a better performance than metasurfaces <cit.>. Mirrors have been explored in ORIS-assisted VLC systems to optimize their orientation and LED-user association with the aim of maximizing the data rate <cit.>, the spectral efficiency <cit.> or the secrecy rate <cit.>. Recently, the authors studied the optimal location of both static and movable mirrors in <cit.> to minimize the outage probability, which is one of the main problems in VLC. All these works consider the use of plane mirrors that are optimally oriented so that the light coming from an LED hits in the center of such mirror and it is forwarded to the destination. Indirectly, the light impinging onto the mirror does not only contribute to power level at the desired destination, but it can potentially provide coverage to users located surrounding the destination.
Although the installation of static mirrors is more realistic than mirrors with optimal and movable orientations, the overall coverage provided by them is still unexplored.
In the case of a plane mirror, the volume occupied in the space by the rays impinging onto a single reflecting point is the same as the one occupied by the reflected rays. This limits the potential of plane mirrors in VLC when the objective is to provide a coverage as wide as possible. Differently, diffuse reflections produced by walls scatter light, which potentially increases the coverage, but the power loss produced in the material is dramatic. In this paper we introduce the concept of broadening specular reflection,
created by exploiting curved (convex) mirrors where the power loss occurs in the propagation and not in the reflecting material. This phenomenon is depicted in Fig. <ref>. This paper, for the first time to the best of the authors' knowledge, proposes the use of curved mirrors to increase the coverage in a mirror-assisted VLC system.
We can find few works in the literature that invoke curved mirrors for light communication: Spherical concave mirrors have been proposed in free space optical (FSO) communications to compensate the effects of the air turbulence <cit.>; <cit.> proposed paraboloid mirrors to fabricate a small corner cube retroreflector and provide sunlight communication; spherical reflectors have been installed in user equipment to enable visible light positioning with a broad coverage <cit.>; and the authors of <cit.> theoretically proposed the use of concave mirrors on the transmitter side to expand the lighting area of VLC. Besides, curved convex mirrors have been traditionally placed in corners with low visibility such as hospitals or roads to increase the vision area. Inspired by this traditional use case and different from prior works, we propose the use of convex mirrors placed on walls to extend the VLC coverage and evaluate its performance.
Notation: In this paper, capitalized bold letters such as 𝐀 stand for vectors starting at the origin and ending at point A. 𝐀𝐁 is the unit vector of 𝐀𝐁=𝐁-𝐀. 𝐞_k stands for the k-th column vector of the 3×3 identity matrix, and (·)^ T represents the transpose operator.
§ SYSTEM MODEL
The considered indoor VLC scenario has one light source located at the room ceiling with a height of h_ d from the horizontal plane where the receiver D is located. The set of points representing the light source and all possible positions of D are denoted by 𝒮 and 𝒟, respectively. We assume that the source has a uniform power distribution along its area A_ s=w_ sl_ s, and that every point in 𝒮, denoted by S, has a Lambertian emission pattern. We consider a convex mirror reflector located on a wall, where the set of its points is denoted by ℛ and whose center is defined as the origin. The detector D is located in a horizontal plane that is parallel to the room floor, and its size is assumed to be a point due to its small active area, typically in the range of mm^2 or few cm^2. The x, y and z axes create a coordinate system with their positive directions as the ones described in Fig. <ref>. The point vectors to the center of the source and detector are
𝐒_ c = [ -x_ s; y_ s; -z_ s ], 𝐃 = [ x_ d-x_ s; y_ d+y_ s; h_ d-z_ s ].
§.§ Line-of-sight received power
The lighting industry often installs large lighting sources to provide a homogeneous illumination. Typically, the research community has based its LiFi studies assuming a point source, which puts aside large lighting sources. However, the source size has an important impact on the line-of-sight (LoS) link blockage, and, since large lighting sources are widespread, we consider them in this study.
The total source size is divided into small blocks of size dS that transmit an optical power of P_ opt,S = P_ opt/A_ s. Assuming that the detector size is very small with respect to the source image in the detector plane <cit.>, the optical power received by the detector located at D through the LoS link is computed as
P^𝒮_ opt,rx = E_ D^𝒮· A_ pd,
where A_ pd is the area of the photodetector (PD) and E_ D^𝒮 is the irradiance at a point D from the source 𝒮 that, if detector D is assumed to be looking upwards, can be calculated by
E_ D^𝒮 = (m+1)· P_ opt,S/2π∫_𝒮cos^(m+1)(θ^ S_ D)/D^2_ S,Dd𝒮,
where cos(θ^ S_ D)=𝐞_3^ T𝐒𝐃, m=-1/log_2(cos(ϕ_1/2)) is the Lambertian index of the LED that models the radiation pattern defined by its half-power semi-angle ϕ_1/2, and D_ S,D is the Euclidean distance between S and D points.
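Equation (<ref>) can be evaluated numerically by summing over small source blocks; the sketch below does this for an upward-looking detector in a simplified stand-alone geometry (the z axis points upwards here, and the resolution, names and example numbers are ours).

import numpy as np

def los_irradiance(S_c, D, w_s, l_s, P_opt, phi_half_deg, n=50):
    m = -1.0 / np.log2(np.cos(np.radians(phi_half_deg)))   # Lambertian index
    P_opt_S = P_opt / (w_s * l_s)                          # power per unit area
    dx, dy = w_s / n, l_s / n
    xs = -w_s / 2 + dx * (np.arange(n) + 0.5)
    ys = -l_s / 2 + dy * (np.arange(n) + 0.5)
    acc = 0.0
    for ddx in xs:
        for ddy in ys:
            S = S_c + np.array([ddx, ddy, 0.0])
            v = D - S                                      # vector S -> D
            d = np.linalg.norm(v)
            cos_theta = -v[2] / d                          # source faces downwards
            if cos_theta <= 0:
                continue
            acc += cos_theta ** (m + 1) / d ** 2 * dx * dy
    return (m + 1) * P_opt_S / (2 * np.pi) * acc

# example: 0.2 m x 0.2 m source 2 m above the detector plane, detector 1 m off-axis
print(los_irradiance(np.array([0.0, 2.0, 2.0]), np.array([1.0, 2.0, 0.0]),
                     0.2, 0.2, 20.0, 80.0), "W/m^2")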
We can now evaluate the impact of the source size on the LoS-link blockage probability. The irradiance at D can be either 0 or E_ D^𝒮, depending on whether the detector has a LoS-link blocked or not, respectively. Let us assume a user whose position is uniformly distributed along the room, oriented at any horizontal angle following a uniform distribution 𝒰[0, 2π), and assuming that the device is always looking upwards. The user body is modelled by a cylinder of height 1.75 m and radius 0.15 m, with the device separated at a distance of 0.3 m from the body <cit.>. The room height is 3 m, h_ d=2 m, and the room and source sizes and location are the ones depicted in Fig. <ref>. After distributing 10^4 users, we compute the user locations where the LoS-link with all points in 𝒮 may be potentially blocked. In Fig. <ref> we represent those user locations with a grey shadowed area. Note that the area changes with respect to the size of the source. The larger the source size, the smaller the area representing those locations suffering from LoS-link blockage. The user locations that most likely suffer from a LoS-link blockage are those close to the edges, and they must rely on reflections coming from the walls. This issue was also pointed out by the authors in <cit.>, and it was addressed by looking for the optimal location and orientation of planar mirrors. Instead, in this work we aim at simplifying the scenario and analyzing the performance of NLoS coverage when employing a convex mirror that provides a broadening specular reflection
to cover those user locations at the edges.
§.§ Non-line-of-sight received power
For the sake of simplicity, we analyze the performance of two three-dimensional reflectors. Namely, we study an elliptic paraboloid and a semi-sphere. Both are depicted in Fig. <ref>. The former is defined by the parameters w_ par, l_ par and h_ par, which indicate the dimensions in the x, y and z axes, respectively. The latter is only defined by the parameter r_ sph, which indicates the radius of the sphere.
The set of points ℛ can be either ℛ_ par or ℛ_ sph when invoking a paraboloid or a semi-sphere, respectively. For simplification, let us use the term ℛ to refer to the reflective surface in a general way, without specifying its shape.
The optical power received by the detector located at D through the NLoS reflector links is computed as
P^ℛ_ opt,rx = E_ D^ℛ· A_ pd,
where E_ D^ℛ is the irradiance at a point D from the reflector ℛ, and is derived in Section <ref>.
§.§ Figures of merit
Two main figures of merit are considered in this study:
§.§.§ NLoS shadowing probability
To strengthen the VLC performance, we desire to provide a NLoS link to all users distributed along the room that can perform as a backup link in case the LoS one is blocked. This metric defines the probability for a user, uniformly distributed along the room, to have no NLoS contributions due to the incapacity of the reflector. It is represented by
Prob(E_ D^ℛ=0 ∀D∈𝒟).
§.§.§ Signal-to-noise power ratio
It is formulated as
SNR=[η_ pd·(𝕀_ LoSP^𝒮_ opt,rx + P^ℛ_ opt,rx)]^2/N_0B,
where η_ pd is the PD's responsivity, 𝕀_ LoS is the binary variable that takes values 0 or 1 when the LoS link is blocked or not, respectively, N_0 is the power spectral density of the additive white Gaussian noise (AWGN), and B is the communication bandwidth.
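For reference, the SNR defined above can be evaluated with a small helper of our own, here using the receiver parameters quoted later in the numerical results (η_pd = 0.4 A/W, N_0 = 2.5·10^-20 W/Hz, B = 1 MHz); the input powers in the example call are placeholders.

import numpy as np

def snr_db(P_los, P_nlos, los_available, eta=0.4, N0=2.5e-20, B=1e6):
    P_rx = (P_los if los_available else 0.0) + P_nlos      # received optical power
    snr = (eta * P_rx) ** 2 / (N0 * B)
    return 10 * np.log10(snr) if snr > 0 else -np.inf

print(snr_db(1e-6, 5e-8, los_available=False))             # NLoS-only example, W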
§ IRRADIANCE PERFORMANCE OF CURVED MIRRORS
The irradiance at a point D from a small point R∈ℛ can be computed as <cit.>
E_ D^ R=ρ_ℛ· L_ R← I_R,D·cos(θ^ D_ R)· A_dℛ·cos(θ'_ R,D)/D_ R,D^2,
where ρ_ℛ is the reflection coefficient of the reflective surface ℛ, which is constant for all R points; L_ R← I_R,D is the radiance at position R coming from I_ R,D point; θ^ D_ R is the angle created by the vector 𝐃𝐑 and the z-axis; A_dℛ is a small point area of ℛ; θ'_ R,D is the incidence angle in R created by the ray coming from I_ R,D that, due to the Snell's law, is the same as the angle created by the vector 𝐑𝐃 and the normal vector of R (𝐍_ R); and D_ R,D is the Euclidean distance between R and D points. The cosine of both angles can be computed by cos(θ^ D_ R)=𝐞_3^ T𝐃𝐑 and cos(θ'_ R,D)=𝐍_ R^ T𝐑𝐃. The geometry of the scenario and all its angles are represented in Fig. <ref>. The points R∈ℛ_ par can be defined as
ℛ_ par = {(x,y,z) | y = x^2/A + z^2/B + C, -w_par/2 < x < w_par/2, y > 0, -h_par/2 < z < h_par/2},
where A=-w_ par^2/(4l_ par), B=-h_ par^2/(4l_ par) and C=l_ par. The points R∈ℛ_ sph can be defined as
ℛ_ sph = {(x,y,z) | x^2+y^2+z^2 = r_sph^2, -r_sph < x < r_sph, y > 0, -r_sph < z < r_sph}.
Differently from works considering a plane reflector where all reflector points have the same normal vector, when considering a curved mirror each point R has its own normal vector, which is computed as
𝐍_ R_par=[-2x/A,1,-2z/B]^ T/√(4x^2/A^2+1+4z^2/B^2),
in the case of a paraboloid, and
𝐍_ R_sph=[2x,2y,2z]^ T/√(4x^2+4y^2+4z^2)
in the case of a semi-sphere. To generalize, we will use the term 𝐍_ R.
Since each point in the transmitter is assumed to follow a Lambertian emission pattern, the radiance at position R coming from the I_R,D point can be defined as
L_ R← I_R,D = (m+1)P_ opt,S/2π · cos^(m-1)(θ^ I_R,D_R) ×𝕀(|𝐞_1^T𝐒_c𝐈_R,D| ≤ w_s/2, |𝐞_2^T𝐒_c𝐈_R,D| ≤ h_s/2) ×𝕀(𝐍_R^T𝐑𝐃 ≥ 0),
where θ^ I_R,D_R is the irradiance angle from I_R,D to R with respect to the z-axis, and cos(θ^ I_R,D_R)=𝐞_3^ T𝐈_R,D𝐑, where 𝐈_R,D𝐑 is computed by the law of reflection as <cit.>
𝐈_R,D𝐑 = -(2(𝐍_R^T𝐑𝐃)·𝐍_R - 𝐑𝐃).
The two binary variables in (<ref>) represented by 𝕀(·) take a value of 1 when the condition inside is satisfied, and 0 otherwise. The first variable considers when the point I_ R,D∈𝒮, and the second variable ensures Snell's reflection law. The point I_ R,D for each R and D is computed as the incident point that the ray coming from D reflected in R has on the plane of S, computed by <cit.>
𝐈_R,D = [ 𝐞_1^T(𝐑 + (𝐞_3^T𝐑𝐒_c/𝐞_3^T𝐑𝐈_R,D)·𝐑𝐈_R,D); 𝐞_2^T(𝐑 + (𝐞_3^T𝐑𝐒_c/𝐞_3^T𝐑𝐈_R,D)·𝐑𝐈_R,D); 𝐞_3^T𝐒_c ].
The total irradiance at a point D from the whole reflective surface ℛ can be written as
E_D^ℛ = ρ_ℛ(m+1)P_ opt,S/2π ∫_ℛ [(𝐞_3^T𝐈_R,D𝐑)^(m-1)(𝐞_3^T𝐃𝐑)(𝐍_R^T𝐑𝐃)/D^2_R,D] ×𝕀(|𝐞_1^T𝐒_c𝐈_R,D| ≤ w_s/2, |𝐞_2^T𝐒_c𝐈_R,D| ≤ h_s/2) ×𝕀(𝐍_R^T𝐑𝐃 ≥ 0) dℛ.
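The surface integral can be approximated by meshing the mirror; the sketch below does this for the paraboloid in a self-contained geometry (z pointing upwards, mirror centred at the origin of the wall plane y = 0, source facing down, detector facing up); mesh density, helper names and the example dimensions are ours, for illustration only.

import numpy as np

def nlos_irradiance_paraboloid(D, S_c, w_par, l_par, h_par, w_s, l_s,
                               P_opt, phi_half_deg, rho=0.99, n=300):
    m = -1.0 / np.log2(np.cos(np.radians(phi_half_deg)))
    P_S = P_opt / (w_s * l_s)                       # power per unit source area
    A, B = -w_par**2 / (4*l_par), -h_par**2 / (4*l_par)
    dx, dz = w_par / n, h_par / n                   # fine mesh: the region R' is small
    xs = -w_par/2 + dx * (np.arange(n) + 0.5)
    zs = -h_par/2 + dz * (np.arange(n) + 0.5)
    E = 0.0
    for x in xs:
        for z in zs:
            y = x**2/A + z**2/B + l_par             # paraboloid surface y = f(x,z)
            if y <= 0:
                continue
            R = np.array([x, y, z])
            grad = np.array([-2*x/A, 1.0, -2*z/B])  # normal direction of the surface
            dA = np.linalg.norm(grad) * dx * dz     # surface element
            N = grad / np.linalg.norm(grad)
            to_d = D - R
            dist = np.linalg.norm(to_d)
            u_rd = to_d / dist                      # unit vector R -> D
            cos_mirror = N @ u_rd                   # reflection angle at the mirror
            if cos_mirror <= 0:
                continue
            u_ri = 2*cos_mirror*N - u_rd            # incident direction R -> source
            if u_ri[2] <= 0:
                continue                            # must point up to the ceiling
            t = (S_c[2] - R[2]) / u_ri[2]
            I = R + t * u_ri                        # hit point on the source plane
            if abs(I[0] - S_c[0]) > w_s/2 or abs(I[1] - S_c[1]) > l_s/2:
                continue                            # the ray misses the luminaire
            cos_src = u_ri[2]                       # source normal points down
            cos_det = -u_rd[2]                      # detector normal points up
            if cos_det <= 0:
                continue
            E += rho * (m+1) * P_S / (2*np.pi) \
                 * cos_src**(m-1) * cos_mirror * cos_det * dA / dist**2
    return E

# placeholder geometry: source 2 m above the mirror centre, detector 1 m below it
E = nlos_irradiance_paraboloid(D=np.array([1.0, 2.5, -1.0]),
                               S_c=np.array([0.0, 2.0, 2.0]),
                               w_par=0.6, l_par=0.1, h_par=0.5,
                               w_s=0.2, l_s=0.2, P_opt=20.0, phi_half_deg=80.0)
print(E, "W/m^2")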
For simplification purposes, we can derive an approximation for this irradiance equation. Let us define the points R'∈ℛ:E_ D^R'≠ 0, which conform the subset ℛ'⊂ℛ involving all points in ℛ that contribute to D when the light is coming from any point in 𝒮. The subset ℛ' conforms a small sub-area in ℛ that can be approximated as a single plane mirror of area A_ℛ' and normal vector 𝐍_ R'_c, as represented in Fig. <ref>, where R'_c represents the centroid of ℛ' whose point vectors can be computed as 𝐑'_c=∑_𝐑'∈ℛ'𝐑'/|ℛ'|, and 𝐍_ R'_c=(𝐑'_c𝐒_c/D_R'_c,S_c + 𝐑'_c𝐃/D_R'_c,D)/√(2+2𝐑'_c𝐃^ T𝐑'_c𝐒_c/D_R'_c,S_cD_R'_c,D), as R'_c is located in the bisector formed by lines 𝐑'_c𝐒_c and 𝐑'_c𝐃. |ℛ'| is the cardinality of set ℛ', and D_R'_c,S_c and D_R'_c,D are the Euclidean distances between R'_c and S_c points, and between R'_c and D points, respectively. Once the whole reflector has been simplified to a small sub-area, we can assume that the largest dimension of the surface ℛ' is much smaller than the shortest distance between a point in ℛ' and a point in the source 𝒮. Therefore every variable in (<ref>) depending on I_ R,D and R can be replaced by S_ c and R'_c, respectively. We can follow up the large source and small reflector case approximation in <cit.>, and we can write an approximation for the irradiance equation initially described in (<ref>) as
Ê_D^ℛ = ρ_ℛ(m+1)P_ opt,S A_ℛ' (𝐞_3^T𝐒_c𝐑'_c)^(m-1)(𝐞_3^T𝐃𝐑'_c)(𝐍^T_R'_c𝐑'_c𝐃) / (2π D^2_R'_c,D).
§ ANALYTICAL AND SIMULATION RESULTS
We aim to analyze the contribution that a curved reflector may have into the signal power received by a user at a position that can be distributed along a room. We select a room with (x,y) dimensions 4x4 m. The reflector is installed in the vertical plane y=0, and its center is in (x,z)=[2,1] coordinates. We select a mirror as the reflector surface, with a high reflection coefficient of ρ_ℛ=0.99. We consider a single light source of dimensions w_ s=0.2 m and l_ s=0.2 m located in the center of the ceiling, at relative coordinates x_ s=0 m and y_ s=2 m. The light source transmits a total optical power of P_ opt=20 W with a radiation pattern defined by ϕ_1/2=80^∘. The detector can be located at any point in the horizontal plane D, and it is distributed uniformly in our simulations. The PD's responsivity and area are η_ pd=0.4 A/W and A_ pd=4 cm^2, respectively, and the communication bandwidth and power spectral density of the noise are assumed to be B=1 MHz and N_0=2.5· 10^-20 W/Hz, respectively. All simulation parameters are summarized in Table <ref>.
We first evaluate the accuracy of the approximation for the irradiance equation derived in Section <ref>. For this purpose, we assume that the reflector is a paraboloid with dimensions in the range of h_ par∈[0.1,1] m, l_ par∈ [0.1, 0.2] m and w_ par∈ [0.2,1] m, or a semi-sphere with dimensions r_ sph∈ [0.1,1] m. We then evaluate the maximum (peak) relative error produced by the exact and approximated NLoS irradiance expressed in (<ref>) and (<ref>), respectively. The relative error is computed as |E_ D^ℛ-Ê_ D^ℛ|/E_ D^ℛ for all realistic D positions, i.e., all except for those which are located at a distance from the wall y=0 that is lower than l_ par. As can be seen in Fig. <ref>, the peak relative error is lower than 5% for all configurations. It slightly increases with w_ par at low curvature (l_ par=0.1 m), because the shadowing effect increases, as will be seen next, and then the accuracy reduces. For the semi-sphere reflector case, it slightly increases with r_ sph due to approaching the assumption limit in (<ref>)
about the dimension of ℛ' and the distance between ℛ' and 𝒮, but it is always below 0.6%. This demonstrates the good accuracy of the approximated equation (<ref>) to compute the irradiance coming from a curved mirror.
Let us now evaluate the performance of the curved mirror when it has different dimensions. We analyze the shadowing probability defined by (<ref>). Fig. <ref> shows the shadowing probability for multiple combinations of l_ par, w_ par and h_ par when using a paraboloid-shaped mirror reflector. We consider that h_ par may take values of 0.1 m, 0.5 m, or 1 m for a small, medium, or large mirror, respectively. Then w_ par can go from 0.1 m to 1 m, and the curvature of the mirror defined by the l_ par parameter can go from 0 m (plane mirror) to 0.2 m. Results show that the larger the height of the mirror denoted by h_ par, the higher the shadowing probability is due to a curvature decrease, which introduces some shadowing just below the reflector as shown in Fig. <ref>. A similar phenomenon happens when w_ par increases for the same h_ par and l_ par values, which is shown in Fig. <ref>, where we can see that a lower curvature introduces shadowing in the room corners. However, for the same h_ par and w_ par values, a larger curvature (l_ par) makes the shadowing probability reduce, since the visibility at the room edges increases. This gives the insight that, given a mirror size defined by h_ par and w_ par, there is a minimum l_ par so that the shadowing probability equals zero.
Although a graph representation is not included in this paper, note that, when using a semi-sphere-shaped mirror reflector, the shadowing probability is zero due to its perfect curvature.
Finally, we evaluate the SNR defined in (<ref>) when the LoS is blocked (NLoS case) or not (LoS + NLoS case). For the NLoS case, we analyze a number of cases that are grouped by the area occupied by the mirror on the wall. We analyze three areas according to the paraboloid and semi-sphere dimensions: small size (area = 0.0314 m^2), medium size (area =0.1571 m^2), and large size (area = 0.3142 m^2). Their dimensions are included in Table <ref>, and the results are depicted in Fig. <ref>. As expected, the larger the size, the larger the SNR obtained. Plane mirrors provide a shadowing floor, i.e., locations with a SNR =-∞ dB, with probability values of 68%, 76% and 86% for large, medium and small mirrors. However, the use of curved mirrors reduces the shadowing probability to zero, as seen in previous results and also in Fig. <ref>. In terms of the SNR distribution, the best curvature l_ par is an intermediate value of 0.1 m, as a very low l_ par value (plane mirror) increases the shadowing probability, and very large l_ par values distribute the light to the room edges to a very large extent. Thus, there is a balance in the selection of the l_ par value that reduces the shadowing probability while maintaining a good light distribution. When comparing the paraboloid and semi-sphere mirror shapes, the paraboloid provides better performance when the size of the mirror is medium or large. However, for small area sizes, it is more convenient to use a semi-sphere as it will distribute reflections in a better way for the evaluated configurations. Note that the dimensions of the convex mirror determine its curvature, which plays a key role in the light distribution as shown in Fig. <ref> and Fig. <ref>, and they must be carefully configured for the SNR improvement.
The results show that the LoS link provides much larger SNR values than NLoS links. NLoS contributions may be insignificant when the LoS link exists, but they are extremely important in the case of a LoS link blockage. That is, curved mirrors allow the user to have communication even when the LoS link is blocked, regardless of its location, as the SNR values provided by the NLoS link are good enough to invoke modulation and coding schemes <cit.>.
§ CONCLUSION
This paper investigated the contribution of curved mirrors to improve the VLC coverage. We derived equations for the irradiance received from a curved mirror with a paraboloid or semi-sphere shape, and we provided an approximation that matches the irradiance accurately. We studied the influence of multiple mirror dimensions in the shadowing probability, and results reveal that curved mirrors may offer a zero shadowing probability.
Finally, we studied the SNR contribution coming from curved mirrors, and the results show that curved mirrors are promising static reflective surfaces for providing connectivity to users located anywhere in the room when they experience a LoS link blockage. As future work, experimental research is needed to validate the performance of curved mirrors in VLC scenarios relying on NLoS links. Additionally, small-sized curved mirrors equipped with mobility could be considered for dynamic adaptation, minimizing the intrusiveness of such mirrors. The mirrors' curvature and placement could then be optimized for best performance.
|
http://arxiv.org/abs/2409.03472v1 | 20240905123438 | On torsion in eulerian magnitude homology of Erdos-Renyi random graphs | ["Giuliamaria Menara"] | math.CO | ["math.CO", "math.AT", "math.PR"] |
On torsion in eulerian magnitude homology of Erdős–Rényi random graphs
Giuliamaria Menara
=====================================================
§ ABSTRACT
In this paper we investigate the regimes where an Erdős–Rényi random graph has torsion-free eulerian magnitude homology groups.
To this end, we start by introducing the eulerian Asao–Izumihara complex – a quotient CW-complex whose homology groups are isomorphic to direct summands
of the graph's eulerian magnitude homology groups.
We then proceed by producing a probabilistic threshold for the existence of a shelling of the eulerian Asao–Izumihara complex.
This leads to a result establishing the regimes where the eulerian magnitude homology of Erdős–Rényi random graphs is torsion free.
§ INTRODUCTION
Magnitude, introduced by Leinster in <cit.>, is an invariant for metric spaces that quantifies the number of effective points in the space.
Hepworth and Willerton introduced magnitude homology for graphs as a categorification of magnitude <cit.>, and this concept was later extended to metric spaces and enriched categories by Leinster and Shulman <cit.>.
In recent years, various methods have been devised to calculate the magnitude homology groups <cit.>.
Eulerian magnitude homology is a variant recently introduced by Giusti and Menara in <cit.> to highlight the connection between magnitude homology of simple graphs equipped with the path metric and their combinatorial structure.
Here the authors introduce the complex of eulerian magnitude chains, which are supported by trails without repeated vertices.
Then they describe the strong connections between the (k,k)-eulerian magnitude homology groups and the graph's structure.
Further, in the context of Erdös-Rényi random graphs they derive a vanishing threshold for the limiting expected rank of the (k,k)-eulerian magnitude homology in terms of the density parameter.
In this paper, we will make some progress towards investigating the presence of torsion in eulerian magnitude homology.
Torsion in standard magnitude homology was first studied by Kaneta and Yoshinaga in <cit.>, where the authors have analyzed the structure and implications of torsion in magnitude homology.
Torsion in the magnitude homology of graphs was also studied by Sazdanovic and Summers in <cit.> and by Caputi and Collari in <cit.>.
In the present work, as a first step towards exploring whether graphs have torsion in their eulerian magnitude homology groups, we turn our attention to Erdös-Rényi model for random graphs.
This model is the most extensively studied and utilized model for random graphs, and it represents the maximum entropy distribution for graphs with a given expected edge proportion.
Random complexes originating from Erdös-Rényi graphs are widely studied in stochastic topology <cit.>, and in studying this “unstructured" example our intent is to create a foundation for understanding the torsion in “structured" graphs.
Adapting the construction introduced by Asao and Izumihara in <cit.> to the context of eulerian magnitude homology, we are able to produce for every pair of vertices (a,b) ∈ G two simplicial complexes ET_≤ℓ(a,b) and ET_≤ℓ-1(a,b) such that the homology of the quotient ET_≤ℓ(a,b) / ET_≤ℓ-1(a,b) is isomorphic to a direct summand of the eulerian magnitude homology EMH_∗,ℓ(G) up to a degree shift.
Therefore, a shellability result for the complexes ET_≤ℓ(a,b) and ET_≤ℓ-1(a,b) will in turn yield a torsion-free result for EMH_∗,ℓ(G).
In Theorem <ref> we achieve such a shellability result for ET_≤ℓ(a,b) in terms of the density parameter.
Further, in Corollary <ref> we link the torsion-free result for eulerian magnitude homology groups stated in Theorem <ref> with the vanishing threshold produced in <cit.>, determining sufficient conditions under which if eulerian magnitude homology is non-vanishing, then it is also torsion-free.
§.§ Outline
The paper is organized as follows.
We start by recalling in Section <ref> some general background about graphs, eulerian magnitude homology and shellability.
In Section <ref> we introduce the eulerian Asao-Izumihara complex.
We then investigate in Section <ref> the probability regimes in which the eulerian Asao-Izumihara complex is shellable, and we conclude by producing a vanishing threshold for torsion in eulerian magnitude homology groups.
Finally, in Section <ref> we propose extensions of the current work and identify open questions that could deepen the understanding of the topic.
§ BACKGROUND
We begin by recalling relevant definitions and results.
We assume readers are familiar with the general theory of simplicial homology (for a thorough exposition see <cit.>).
Throughout the paper we adopt the notation [m] = {1, …, m} and [m]_0 = {0, …, m} for common indexing sets.
§.§ Graph terminology and notation
An undirected graph is a pair G=(V,E) where V is a set of vertices and E is a set of edges (unordered pairs of vertices).
A walk in such a graph G is an ordered sequence of vertices x_0,x_1,…,x_k∈ V such that for every index i ∈ [k]_0 there is an edge {x_i,x_i+1}∈ E.
A path is a walk with no repeated vertices.
For the purposes of introducing eulerian magnitude homology we assume that all graphs are simple, i.e. they have no self-loops and no multiedges <cit.>.
One can interpret the set of vertices of a graph as an extended metric space (i.e. a metric space with infinity allowed as a distance) by taking the path metric d(u,v) to be equal to the length of a shortest path in G from u to v, if such a path exists, and taking d(u,v) = ∞ if u and v lie in different components of G.
Let G = (V,E) be a graph, and k a non-negative integer. A k-trail x̅ in G is a (k+1)-tuple (x_0,…,x_k) ∈ V^k+1 of vertices for which x_i ≠ x_i+1 and d(x_i,x_i+1)<∞ for every i ∈ [k-1]_0.
The length of a k-trail (x_0,…,x_k) in G, denoted len(x_0,…,x_k), is defined as the minimum length of a walk that visits x_0,x_1,…,x_k in this order:
len(x_0,…,x_k) = d(x_0,x_1)+⋯ + d(x_k-1,x_k).
We call the vertices x_0, … x_k the landmarks, x_0 the starting point, and x_k the ending point of the k-trail.
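As a concrete illustration of these definitions, the sketch below computes trail lengths from pairwise shortest-path distances on a small hypothetical graph (a 5-cycle); it is illustrative code, not code from the paper.

import networkx as nx

# Sketch: length of a k-trail as the sum of consecutive shortest-path
# distances, on a hypothetical 5-cycle with vertices 0,...,4.
G = nx.cycle_graph(5)
dist = dict(nx.all_pairs_shortest_path_length(G))

def trail_len(landmarks):
    """len(x_0,...,x_k) = d(x_0,x_1) + ... + d(x_{k-1},x_k)."""
    return sum(dist[a][b] for a, b in zip(landmarks, landmarks[1:]))

def is_trail(landmarks):
    """Consecutive landmarks distinct and at finite distance."""
    return all(a != b and b in dist[a] for a, b in zip(landmarks, landmarks[1:]))

print(is_trail((0, 2, 4)), trail_len((0, 2, 4)))   # True, d(0,2)+d(2,4) = 2+2 = 4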
§.§ Eulerian magnitude homology
The magnitude homology of a graph G, MH_k,ℓ(G), was first introduced by Hepworth and Willerton in <cit.>, and the eulerian magnitude homology of a graph EMH_k,ℓ(G) is a variant of it with a stronger connection to the subgraph structures of G.
Specifically, while the building blocks of standard magnitude homology are tuples of vertices (x_0,…,x_k) where we ask that consecutive vertices are different, eulerian magnitude homology is defined starting from tuples of vertices (x_0,…,x_k) where we ask that all landmarks are different.
Eulerian magnitude homology was recently introduced by Giusti and Menara in <cit.> and we recall here the construction.
(Eulerian magnitude chain)
Let G=(V,E) be a graph.
We define the (k,ℓ)-eulerian magnitude chain EMC_k,ℓ(G) to be the free abelian group generated by trails (x_0,…,x_k) ∈ V^k+1 such that x_i ≠ x_j for every 0≤ i < j ≤ k and len(x_0,…,x_k)=ℓ.
It is straightforward to demonstrate that the eulerian magnitude chain is trivial when the length of the path is too short to support the necessary landmarks.
Let G be a graph, and k > ℓ non-negative integers. Then EMC_k,ℓ(G) ≅ 0.
Suppose EMC_k,ℓ(G)≠ 0.
Then, there must exist a k-trail (x_0,…,x_k) in G so that len(x_0,…,x_k)=d(x_0,x_1)+⋯+d(x_k-1,x_k) = ℓ.
However, as all vertices in the k-trail must be distinct, d(x_i,x_i+1) ≥ 1 for i ∈ [k-1]_0, so k can be at most ℓ.
(Differential)
Denote by (x_0,…,x̂_̂î,…,x_k) the k-tuple obtained by removing the i-th vertex from the (k+1)-tuple (x_0,…,x_k). We define the differential
∂_k,ℓ: EMC_k,ℓ(G) → EMC_k-1,ℓ(G)
to be the signed sum ∂_k,ℓ= ∑_i∈ [k-1](-1)^i∂_k,ℓ^i of chains corresponding to omitting landmarks without shortening the walk or changing its starting or ending points,
∂_k,ℓ^i(x_0,…,x_k) =
(x_0,…,x̂_̂î,…,x_k) , if (x_0,…,x̂_̂î,…,x_k) = ℓ,
0, otherwise.
For a non-negative integer ℓ, we obtain the eulerian magnitude chain complex, EMC_*,ℓ(G), given by the following sequence of free abelian groups and differentials.
(Eulerian magnitude chain complex)
We indicate as EMC_*,ℓ(G) the following sequence of free abelian groups connected by differentials
⋯→ EMC_k+1,ℓ(G) ⟶ EMC_k,ℓ(G) ⟶ EMC_k-1,ℓ(G) →⋯, where the maps are the differentials ∂_k+1,ℓ and ∂_k,ℓ.
The differential map used here is the one induced by standard magnitude, and it is shown in <cit.> that the composition ∂_k,ℓ∘∂_k+1,ℓ vanishes, justifying the name “differential" and allowing the definition the corresponding bigraded homology groups of a graph.
(Eulerian magnitude homology)
The (k,ℓ)-eulerian magnitude homology group of a graph G is defined by
EMH_k,ℓ(G) = H_k(EMC_*,ℓ(G)) = ker(∂_k,ℓ)/im(∂_k+1,ℓ).
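The chain-level objects above can be made concrete with a few lines of code. The sketch below enumerates the generators of EMC_{k,ℓ}(G) for a small hypothetical graph and applies the differential to one of them; it is illustrative only and makes no attempt at efficiency.

import networkx as nx
from itertools import permutations

# Sketch: generators of EMC_{k,l}(G) and the differential, on a 5-cycle.
G = nx.cycle_graph(5)
dist = dict(nx.all_pairs_shortest_path_length(G))

def length(t):
    return sum(dist[a][b] for a, b in zip(t, t[1:]))

def emc_generators(G, k, l):
    """(k+1)-tuples of pairwise distinct vertices with len = l."""
    return [t for t in permutations(G.nodes, k + 1) if length(t) == l]

def differential(t, l):
    """Signed sum over removals of interior landmarks that keep the length l."""
    terms = []
    for i in range(1, len(t) - 1):          # endpoints are never removed
        s = t[:i] + t[i + 1:]
        if length(s) == l:
            terms.append(((-1) ** i, s))
    return terms

gens = emc_generators(G, 2, 2)
print(len(gens), "generators of EMC_{2,2}")     # 10 on the 5-cycle
print(differential(gens[0], 2))                 # boundary of the first generator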
Notice that by construction we have the following proposition.
For ℓ≥ 0, the following direct sum decomposition holds:
EMC_∗, ℓ(G) =⊕_a, b∈ V(G) EMC_∗, ℓ(a, b),
where EMC_∗, ℓ(a, b) is the subcomplex of EMC_∗, ℓ(G) generated by trails which start at a and end at b.
§.§ Shellable simplicial complexes
We recall the definition of shellable simplicial complex.
If X is a finite simplicial complex, then a shelling of X is an ordering F_1,…,F_t of the facets (maximal faces) of X such that F_k ∩⋃_i=1^k-1F_i is a non-empty union of facets of F_k for k ≥ 2.
If X has a shelling, we say it is shellable.
In other words, we ask that the last simplex F_k meets the previous simplices along some union B_k of top-dimensional simplices of the boundary of F_k, so that X can be built stepwise by introducing the
facets one at a time and attaching each new facet F_k to the complex previously built in the nicest possible fashion.
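For small complexes the shelling condition can be checked by brute force. The sketch below uses the standard combinatorial restatement of the definition (for every i < k there is j < k with F_i ∩ F_k ⊆ F_j ∩ F_k and |F_k \ F_j| = 1); it is an illustration only and is not part of the proofs that follow.

# Sketch: brute-force test of whether an ordering of facets is a shelling,
# via the standard restatement of the definition. Facets are frozensets.
def is_shelling(facets):
    for k in range(1, len(facets)):
        Fk = facets[k]
        for i in range(k):
            if not any(facets[i] & Fk <= facets[j] & Fk and len(Fk - facets[j]) == 1
                       for j in range(k)):
                return False
    return True

# Two triangles glued along an edge: shellable in this order.
print(is_shelling([frozenset("abc"), frozenset("bcd")]))   # True
# Two triangles meeting in a single vertex: this order is not a shelling.
print(is_shelling([frozenset("abc"), frozenset("cde")]))   # False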
Suppose X is a non-pure simplicial complex.
In this case the first facet of a shelling is always of maximal dimension.
In fact, if X is shellable there is always a shelling in which the facets appear in order of decreasing dimension.
Let F_1, F_2, … ,F_t be a shelling of X.
Let F_i_1, F_i_2, … ,F_i_t be the rearrangement obtained by taking first all
facets of dimension d = X in the induced order, then all facets of dimension d-1 in the induced order, and continuing this way in order of decreasing dimension.
Then this rearrangement is also a shelling.
Let X be a simplicial complex, and let 0 ≤ r ≤ s ≤ X.
Define X^(r,s)={σ∈ X such that σ≤ s and σ∈ F for some facet F with F ≥ r }.
If X is shellable, then so is X^(r,s) for all r ≤ s.
Lemma <ref> and Theorem <ref> can be interpreted as providing a kind of “structure theorem”, describing how a general shellable complex X is put together from pure
shellable complexes.
First there is the pure shellable complex X^1=X^(d,d) generated by all facets of maximal size.
Then X^1's (d-1)-skeleton, which is also shellable, is extended by shelling steps in dimension d-1 to obtain X^2=X^(d-1,d).
Then X^2's (d-2)-skeleton is extended by shelling steps in dimension (d-2) to obtain X^(d-2,d), and so on until all of X=X^(0,d) has been constructed.
A shellable simplicial complex enjoys several strong properties of a combinatorial, topological and algebraic nature.
Let it suffice here to mention that it is homotopy equivalent to a wedge sum of spheres, one for each spanning simplex of corresponding dimension <cit.>.
§ EULERIAN ASAO-IZUMIHARA COMPLEX
We introduced in this section the eulerian Asao-Izumihara complex.
Recall that the Asao-Izumihara complex is a CW complex which is obtained as the quotient of a simplicial complex K_ℓ (a,b) divided by a subcomplex K'_ℓ (a,b), and was proposed in <cit.> as a geometric approach to compute magnitude homology of general graphs.
Here we adapt this construction to the context of eulerian magnitude homology, providing a way of replacing the computation of the eulerian magnitude homology EMH_k,ℓ(G) by that of simplicial homology.
Let us start by recalling the Asao-Izumihara complex.
Let G=(V,E) be a connected graph and fix k ≥ 1.
For any a,b ∈ V the set of walks with length ℓ which start with a and end with b is denoted by
W_ℓ(a,b):={x̅=(x_0,…,x_k) walk in G | x_0=a, x_k=b, len(x̅)=ℓ}.
Let G be a graph, and a, b ∈ V, ℓ≥ 3.
K_ℓ(a,b) := { ∅≠{(x_i_1,i_1),…,(x_i_k,i_k)}⊂ V ×{1,…,ℓ-1}
| (a, x_i_1,…,x_i_k,b) ≺ (a, x_1,…, x_ℓ-1, b) for some (a, x_1,…, x_ℓ-1, b) ∈ W_ℓ(a,b) },
K'_ℓ(a,b) := { {(x_i_1,i_1),…,(x_i_k,i_k)}∈ K_ℓ(a,b) | len(a, x_i_1,…,x_i_k,b) ≤ℓ-1 }, where ≺ denotes the subsequence relation.
Following <cit.>, we will denote ((x_i_1,i_1),…,(x_i_k,i_k)) by (x_i_1,…,x_i_k) when there is no confusion.
It can also be easily seen that K_ℓ(a,b) is a simplicial complex and K'_ℓ(a,b) is a subcomplex.
Let ℓ≥ 3 and ∗≥ 0.
Then, the isomorphism
(C_∗(K_ℓ(a,b),K'_ℓ(a,b)),-∂) ≅
(MC_∗ +2,ℓ(a,b),∂)
of chain complexes holds.
Let ℓ≥ 3.
* If k ≥ 3, MH_k,ℓ(a,b) ≅ H_k-2(K_ℓ(a,b),K'_ℓ(a,b)).
* If k = 2, we also have
MH_2, ℓ(a, b) ≅H_0(K_ℓ(a, b), K'_ℓ(a, b)) if d(a, b) < ℓ,
H̃_0(K_ℓ(a, b)) if d(a, b) = ℓ,
where H̃_∗ denotes the reduced homology group.
Notice while both K_ℓ -1(a,b) and K'_ℓ(a,b) are subcomplexes of K_ℓ(a,b), in general K_ℓ -1(a,b) ⊊ K'_ℓ(a,b).
Indeed, say v and u are two adjacent vertices, then the tuple (v,u,u) is an element of both K_3(v,u) and K'_3(v,u) because it is a subtuple of (v,u,v,u), but it cannot be in K_2(v,u).
This type of example with consecutively repeated vertices is the only one that can be constructed to show that K_ℓ-1(a,b) is a proper subset of K'_ℓ(a,b), and in the context of eulerian magnitude homology it cannot arise because the tuples have all different vertices.
Therefore when introducing the eulerian Asao-Izumihara complex it will possible to only rely on the (eulerian versions of the) complexes K_ℓ(a,b) and K_ℓ -1(a,b).
Let ET_≤ℓ(a, b) be the set of eulerian trails from a to b with length at most ℓ.
That is, the set of all trails (x_1, …, x_t)∈ V^t such that x_i ≠ x_j for every i ≠ j ∈{1,…,t} and
len(a,x_1, …, x_t,b) ≤ℓ.
The set ET_≤ℓ(a, b) is clearly a simplicial complex, and the complex ET_≤ℓ-1(a, b) is a subcomplex of ET_≤ℓ(a, b); see Figure <ref> for an illustration.
Consider the same graph G as in example <ref>.
Suppose we choose (a,b)=(0,4) and ℓ =4.
Then we have ET_4(0,4)={(1,2,3),(1,2),(1,3),(2,3),(1),(2),(3) } and ET_3(0,4)={(1,2),(2,3),(1),(2),(3) }.
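A brute-force enumeration of these complexes is straightforward. The sketch below computes ET_{≤ℓ}(a,b) for a small hypothetical graph; the example graph G referenced above is not reproduced here, so the output need not coincide with the sets listed in the example.

import networkx as nx
from itertools import permutations

# Sketch: brute-force enumeration of ET_{<=l}(a,b) on a hypothetical 5-cycle.
G = nx.cycle_graph(5)
dist = dict(nx.all_pairs_shortest_path_length(G))

def length(t):
    return sum(dist[u][v] for u, v in zip(t, t[1:]))

def ET(G, a, b, l):
    """Tuples (x_1,...,x_t) of distinct intermediate vertices with
    len(a, x_1, ..., x_t, b) <= l."""
    inner = [v for v in G.nodes if v not in (a, b)]
    return {tup for t in range(1, len(inner) + 1)
            for tup in permutations(inner, t)
            if length((a,) + tup + (b,)) <= l}

print(sorted(ET(G, 0, 4, 4), key=len))
print(sorted(ET(G, 0, 4, 3), key=len))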
The following two results can be shown proceeding similarly to the proofs of <cit.>.
Let a, b be vertices of a graph G, and fix an integer ℓ≥ 3.
Then we can construct a pair of simplicial complexes (ET_≤ℓ(a, b), ET_≤ℓ-1(a, b)) which satisfies
C_∗ -2(ET_≤ℓ(a, b), ET_≤ℓ-1(a, b)) ≅ EMC_∗,ℓ(a, b).
Let ℓ≥ 3. Then
EMH_k, ℓ(a, b)≅ H_k-2(ET_≤ℓ(a, b), ET_≤ℓ-1(a, b)).
Moreover, for k = 2, we also have
EMH_2, ℓ(a, b) ≅ H_0(ET_≤ℓ(a, b), ET_≤ℓ-1(a, b)) if d(a, b) < ℓ,
H̃_0(ET_≤ℓ(a, b)) if d(a, b) = ℓ,
where H̃_∗ denotes the reduced homology group.
§ TORSION IN EMH OF ERDŐS-RÉNYI RANDOM GRAPHS
In this section we investigate the regimes where the eulerian magnitude homology of Erdős-Rényi random graphs is torsion free.
Recall that the Erdős-Rényi (ER) model for random graphs, denoted as G(n,p) and first introduced in <cit.>, is one of the most extensively studied and utilized models for random graphs.
This model represents the maximum entropy distribution for graphs with a given expected edge proportion, making it a valuable null model across a wide array of scientific and engineering fields. Consequently, the clique complexes of ER graphs have garnered significant interest within the stochastic topology community <cit.>.
The Erdős-Rényi (ER) model
G(n, p) = (Ω, P) is the probability space where Ω is the discrete space of all graphs on n vertices, and
P is the probability measure that assigns to each graph G ∈Ω with m edges probability
P(G)=p^m(1-p)^{\binom{n}{2}-m}.
We can sample an ER graph G ∼ G(n, p) on n vertices with parameter p∈ [0,1] by determining whether each of the n 2 potential edges is present via independent draws from a Bernoulli distribution with probability p.
In order to study the limiting behavior of these models as n →∞, it is often useful to change variables so that p is a function of n.
Here we will take p=n^-α, α∈ [0,∞), as in <cit.>.
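For numerical experiments, such graphs are easy to sample; the snippet below draws G(n, p) with the parametrisation p = n^{-α} used in this section (the values of n and α are arbitrary choices for illustration).

import networkx as nx

# Sketch: sampling an Erdos-Renyi graph with density parameter alpha.
n, alpha = 200, 0.6
p = n ** (-alpha)
G = nx.gnp_random_graph(n, p, seed=1)
print(f"p = {p:.4f}, expected edges ~ {p * n * (n - 1) / 2:.0f}, "
      f"sampled edges = {G.number_of_edges()}")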
We will first prove in Section <ref> that, under certain assumptions, the complex ET_≤ℓ(a,b) is shellable for every choice of ℓ≥ 3.
This will imply that H_∗(ET_≤ℓ(a,b), ET_≤ℓ-1(a,b)) is torsion free, and by Corollary <ref> that EMH_∗+2,ℓ(G) is torsion free.
§.§ Homotopy type of the eulerian Asao-Izumihara complex
Recall from Section <ref> that the eulerian Asao-Izumihara chain complex is the relative complex C_∗(ET_≤ℓ(a,b), ET_≤ℓ-1(a,b)), where ET_≤ℓ(a,b) is the set of eulerian tuples (x_0,…,x_k) such that len(a,x_0,…,x_k,b) ≤ℓ, and ET_≤ℓ-1(a,b) is defined similarly.
Fix an integer ℓ≥ 3.
Let G(n,n^-α) be an ER graph.
Suppose the facets f_1,…,f_{t-1},f_t of ET_≤ℓ(a,b) are ordered in decreasing dimension.
Then as n→∞ ET_≤ℓ(a,b) is shellable asymptotically almost surely when
* 0 < α < ∏_{i=1}^{t-1} (dim f_i + dim f_{i+1})/(ℓ + 2 dim f_{i+1} - 2), if dim f_1 < (ℓ-2)/2,
* 0 < α < ∏_{i=1}^{k-1} (dim f_i + 3)/(ℓ+4) · ∏_{i=k}^{t-1} (dim f_i + dim f_{i+1})/(ℓ + 2 dim f_{i+1} - 2), if dim f_i ≥ (ℓ-2)/2 for 1 ≤ i ≤ k-1 and dim f_i < (ℓ-2)/2 for i ≥ k.
Consider the facets f_1,…,f_t of ET_≤ℓ(a,b).
Suppose they are ordered in decreasing dimension and say dim f_1=d.
There are some cases we need to consider.
* If there is a single facet f_1, then ET_≤ℓ(a,b) is homotopic to a sphere S^{d-1} with d = dim f_1 and we are done.
* Say there are two different maximal facets, f_1 and f_2 and suppose they have the same dimension d.
If f_1 and f_2 differ in one vertex, then they intersect in a (d-1)-face, and thus {f_1,f_2} is a shelling.
If f_1 and f_2 differ in two vertices u,v, then we need to distiguish the situations when u and v are adjacent and when they are not.
* If u and v are not adjacent, then we will have f_1=(a,…,u,…,v,…,b) and f_2=(a,…,u',…,v',…,b), and by construction there exists a third facet f_3=(a,…,u',…,v,…,b) such that {f_1,f_3,f_2} is a shelling, see Figure <ref>.
* If u and v are adjacent, then in order to construct a facet f_3 intersecting f_1 in a (d-1)-face we need either the edge (u,v') or the edge (u',v) to be present (see Figure <ref>), and this happens with probability p=n^-α.
Now say f_1 and f_2 differ in m vertices and, indicating the facets f_1 and f_2 only by the vertices they differ in, write f_1=(u_1,u_2,…,u_m) and f_2=(u_1',u_2',…,u_m').
Define a partition A_i with ⋃_iA_i={u_1,…,u_m} such that two vertices u_α^i, u_β^i belong to the same set A_i if and only if they are adjacent in G, see Figure <ref>.
Call A'_i the corresponding partition for the vertices (u_1',u_2',…,u_m').
Notice that |A_i|=|A'_i| for every i.
Indeed, suppose by contradiction this is not true.
Then, because f_1 and f_2 have the same dimension, there exists i_1,i_2 such that |A_i_1| > |A_i_1'| and |A_i_2| < |A_i_2'|.
But then it is possible to construct a facet f_3 visiting vertices from A_i_1 and A_i_2', thus having dim f_3 > dim f_1, dim f_2, contradicting the fact that f_1 and f_2 are maximal facets.
Then in this case we need for every set of adjacent vertices A_i and A'_i a number |A_i|-1 of edges (u_α^i, u_β^', i), α≠β, in order to create a shelling.
Indeed, we need to be able to construct a sequence of facets f'_1,…,f'_m by changing one vertex each time so that the intersection between the j-th facet and the preceding (j-1) facets is a (d -1)- dimensional simplex, see Figure <ref>.
Given the fact that we also require for every set A_i a number |A_i|+1 of edges to connect the vertices in A_i, we obtain that the probability of all the required edges existing is
p^{ℓ + ∑_i(|A_i|+1) + ∑_i(|A_i|-1)} = p^{ℓ+2m}.
With p=n^-α, α∈ [1/2,∞), we get
∑_{m=2}^{d-1} \binom{n}{d+1+m} n^{-α(ℓ+2m)} ≤ (d-2) \binom{n}{d+3} n^{-α(ℓ+4)} ∼ (d-2) n^{d+3}/(d+3)! · n^{-α(ℓ+4)}
⟶ 0, if α > (d+3)/(ℓ+4),
⟶ ∞, if α < (d+3)/(ℓ+4).
Notice that we assumed α ∈ [1/2,∞), and (d+3)/(ℓ+4) ≥ 1/2 holds only when d ≥ (ℓ-2)/2.
With p=n^-α, α∈ [0,1/2), we get
∑_{m=2}^{d-1} \binom{n}{d+1+m} n^{-α(ℓ+2m)} ≤ (d-2) \binom{n}{2d} n^{-α(ℓ+2d-2)} ∼ (d-2) n^{2d}/(2d)! · n^{-α(ℓ+2d-2)}
⟶ 0, if α > 2d/(ℓ+2d-2),
⟶ ∞, if 0 < α < 2d/(ℓ+2d-2).
Since it holds also in this case that 2d/(ℓ+2d-2) ≥ 1/2 if and only if d ≥ (ℓ-2)/2, we can conclude that we can construct a shelling when
0 < α < (d+3)/(ℓ+4), if d ≥ (ℓ-2)/2,
0 < α < 2d/(ℓ+2d-2), if d < (ℓ-2)/2.
* Suppose now there are two different facets, f_1 and f_2, and suppose dim f_2 < dim f_1.
Let dim f_2 = d' ≤ d-1.
Following the structure theorem for non-pure shellable complexes provided by Lemma <ref> and Theorem <ref>, in order to produce a shelling we need to extend the (d')-skeleton of f_1 to f_2 by constructing a sequence of (d')-dimensional facets f'_1,…,f'_m by changing one vertex each time so that the intersection between the j-th facet and the preceding (j-1) facets is a (d'-1)-dimensional simplex.
If the simplices in the (d')-skeleton of f_1 and f_2 differ in m ≤ d'-1 vertices, constructing such sequence is possible if we can find ℓ +2m edges joining the vertices in which f_1 and f_2 differ.
This happens with probability p^{ℓ+2m} and therefore, following the computations done in the previous point, we get, for p = n^-α and α ∈ [1/2, ∞),
∑_{m=2}^{d'-1} \binom{n}{d+1+m} n^{-α(ℓ+2m)} ≤ (d-3) \binom{n}{d+3} n^{-α(ℓ+4)} ∼ (d-3) n^{d+3}/(d+3)! · n^{-α(ℓ+4)}
⟶ 0, if α > (d+3)/(ℓ+4),
⟶ ∞, if 1/2 < α < (d+3)/(ℓ+4).
With p=n^-α, α ∈ [0,1/2), we get
∑_{m=2}^{d'-1} \binom{n}{d+1+m} n^{-α(ℓ+2m)} ≤ (d-3) \binom{n}{d+d'} n^{-α(ℓ+2(d'-1))} ∼ (d-3) n^{d+d'}/(d+d')! · n^{-α(ℓ+2d'-2)}
⟶ 0, if α > (d+d')/(ℓ+2d'-2),
⟶ ∞, if 0 < α < (d+d')/(ℓ+2d'-2).
Again, from the fact that both inequalities (d+3)/(ℓ+4) ≥ 1/2 and (d+d')/(ℓ+2d'-2) ≥ 1/2 are true if and only if d ≥ (ℓ-2)/2, we conclude that we can construct a shelling when
0 < α < (d+3)/(ℓ+4), if d ≥ (ℓ-2)/2,
0 < α < (d+d')/(ℓ+2d'-2), if d < (ℓ-2)/2.
* Suppose there are t facets f_1,…,f_{t-1},f_t ordered in decreasing dimension with dim f_1=d; then we only need to iterate the observations made in point (3).
That is, at each step j ∈ [1,…,t-1] we have a shelling when
0 < α < (dim f_j + 3)/(ℓ+4), if dim f_j ≥ (ℓ-2)/2,
0 < α < (dim f_j + dim f_{j+1})/(ℓ + 2 dim f_{j+1} - 2), if dim f_j < (ℓ-2)/2.
Therefore, suppose d = dim f_1 < (ℓ-2)/2.
Then every smaller facet f_k will be such that dim f_k < (ℓ-2)/2 and we will have a shelling when
α < ∏_{i=1}^{t-1} (dim f_i + dim f_{i+1})/(ℓ + 2 dim f_{i+1} - 2).
On the other hand, if d = dim f_1 ≥ (ℓ-2)/2, let f_k be the first facet in the sequence f_1,…,f_t such that dim f_k < (ℓ-2)/2.
Then we will have a shelling when
α < ∏_{i=1}^{k-1} (dim f_i + 3)/(ℓ+4) · ∏_{i=k}^{t-1} (dim f_i + dim f_{i+1})/(ℓ + 2 dim f_{i+1} - 2).
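A quick numerical illustration of the asymptotics used in the bound above, for assumed example values d = 4 and ℓ = 6 (so that d ≥ (ℓ-2)/2 and the α ≥ 1/2 branch applies): the dominant term (d-2)·C(n, d+3)·n^{-α(ℓ+4)} decays or blows up depending on which side of (d+3)/(ℓ+4) the parameter α lies.

from math import comb

# Sketch: behaviour of the dominant term on either side of the threshold.
d, l = 4, 6                                  # assumed example values
thr = (d + 3) / (l + 4)                      # = 0.7 here
for alpha in (thr + 0.05, thr - 0.05):
    vals = [(d - 2) * comb(n, d + 3) * n ** (-alpha * (l + 4))
            for n in (10**2, 10**3, 10**4)]
    print(f"alpha = {alpha:.2f}:", [f"{v:.3g}" for v in vals])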
Let G(n,n^-α) be an ER graph.
Suppose the facets g_1,…,g_{τ-1},g_τ of ET_≤ℓ-1(a,b) are ordered in decreasing dimension.
Then as n →∞, ET_≤ℓ-1(a,b) is shellable asymptotically almost surely when
* 0 < α < ∏_{i=1}^{τ-1} (dim g_i + dim g_{i+1})/((ℓ-1) + 2 dim g_{i+1} - 2), if dim g_1 < ((ℓ-1)-2)/2,
* 0 < α < ∏_{i=1}^{k-1} (dim g_i + 3)/((ℓ-1)+4) · ∏_{i=k}^{τ-1} (dim g_i + dim g_{i+1})/((ℓ-1) + 2 dim g_{i+1} - 2), if dim g_i ≥ ((ℓ-1)-2)/2 for 1 ≤ i ≤ k-1 and dim g_i < ((ℓ-1)-2)/2 for i ≥ k.
It was shown in both <cit.> and <cit.> that a shellable simplicial complex has the homotopy type of a wedge of spheres.
Therefore using Theorem <ref> and Corollary <ref> we can show the following.
Let G(n,n^-α) be an ER graph.
For any pair of vertices (a,b)∈ V^2 consider the eulerian Asao-Izumihara chain complex C_∗ -2(ET_≤ℓ(a,b), ET_≤ℓ-1(a,b)) ≅ EMC_∗,ℓ(a, b).
Suppose the facets f_1,…,f_t of ET_≤ℓ(a,b) and g_1,…,g_τ of ET_≤ℓ-1(a,b) are ordered in decreasing dimension.
As n →∞, in the regimes where both ET_≤ℓ(a,b) and ET_≤ℓ-1(a,b) are shellable, EMH_k,ℓ(a,b) is torsion free for every k.
In the regimes where both ET_≤ℓ(a,b) and ET_≤ℓ-1(a,b) are shellable we can assume
ET_≤ℓ(a,b) ≃⋁_{i=1}^t S_i^{n_i} and ET_≤ℓ-1(a,b) ≃⋁_{j=1}^τ S_j^{n_j}.
So, H_k(ET_≤ℓ(a,b), ET_≤ℓ-1(a,b)) ≅ H_k(∨ S^{n_i}, ∨ S^{n_j}), and considering the long exact sequence
⋯→ H_k(∨ S^{n_j}) → H_k(∨ S^{n_i}) → H_k(∨ S^{n_i}, ∨ S^{n_j}) → H_{k-1}(∨ S^{n_j}) →⋯
we see that
H_k(ET_≤ℓ(a,b), ET_≤ℓ-1(a,b)) ≅ H_k(∨ S^{n_i}, ∨ S^{n_j}) ≅ ℤ^{m_i}, if k=n_i,
ℤ^m_j, if k=n_j,
0, otherwise.
Finally, from the isomorphism theorem <ref> proved in <cit.>, we can conclude that EMH_k,ℓ(a,b) is torsion free for every k.
Recall that <cit.> provides a vanishing threshold for the limiting expected rank of the (ℓ, ℓ)-eulerian magnitude homology in terms of the density parameter in the contexts of Erdös-Rényi random graphs.
Let G = G(n,n^-α) be an Erdős-Rényi random graph. Fix ℓ and let α > (ℓ+1)/(2ℓ-1).
As n →∞, 𝔼[β_ℓ,ℓ(n, n^-α)] → 0 asymptotically almost surely.
Notice that when the smallest facet of ET_≤ℓ(a,b), f_t, is such that dim f_t ∼ℓ > (ℓ-2)/2, then ET_≤ℓ(a,b) is shellable when
α < ∏_{i=1}^{t-1} ((dim f_i + 3)/(ℓ+4)) ∼ ∏_{i=1}^{t-1} ((ℓ+3)/(ℓ+4)) ∼ 1.
Therefore, putting together Remark <ref> with Theorems <ref> and <ref> we have the following.
Let G(n,n^-α) be an Erdős-Rényi random graph.
When the smallest facet f_t of ET_≤ℓ(a,b) and the smallest facet g_τ of ET_≤ℓ-1(a,b) are such that dim f_t, dim g_τ ∼ ℓ, if EMH_k,ℓ(G(n,n^-α)) is non-vanishing it is also torsion free.
§ FUTURE DIRECTIONS
In this paper we investigated the regimes where an Erdös-Rényi random graph G has torsion free eulerian magnitude homology groups.
While the results presented have provided significant insights into the problem, several aspects remain unexplored, offering fertile ground for continued research.
In this section, we propose extensions of the current work and identify open questions that could deepen the understanding of the topic.
§.§ The choice of ℓ
The result stated in Corollary <ref> relies on the dimension of the minimal facet f_t of ET_≤ℓ(a,b) and of the minimal facet g_τ of ET_≤ℓ-1(a,b) being “close enough” to the parameter ℓ, so that (dim f_i + 3)/(ℓ+4) ∼ 1 and (dim g_j + 3)/(ℓ+4) ∼ 1 for every other facet f_i, g_j.
It is thus natural to ask: how do we choose ℓ so that dim f_t ∼ ℓ?
First, notice that the parameter ℓ cannot be too big with respect to the number of vertices n.
Specifically, ℓ cannot be of the order n^2.
Indeed, suppose we pick ℓ = n(n+1)/2.
The only way we can produce a facet f inducing a path of such length is if we have a path graph on n vertices V={1,…,n}, (a,b)=(1,⌈ n/2 ⌉), and we visit vertex n-i+1 after vertex i, i ∈{1,…,⌊ n/2 ⌋}, i.e. f= (1,n,2,n-1,…,⌈ n/2 ⌉).
Then f= n < n(n+1)/2.
See Figure <ref> for an illustration.
We conclude that a quadratic growth rate for ℓ with respect to n is not appropriate.
On the other hand, setting ℓ = n we do not encounter the same problem as before.
For example, consider the path graph in Figure <ref>.
Choosing (a,b)=(1,4) and ℓ = n =7 we find two facets f_1=(1,2,3,6,5,4) and f_2=(1,2,3,5,6,4).
Both have dimension 6 and thus (dim f_i + 3)/(ℓ+4) = (6+3)/(7+4) = 9/11 > 1/2.
Based on this computation, along with many other examples not displayed here, we make the following conjecture.
Indicate the diameter of the graph G by diam(G).
There exists a linear function φ such that if ℓ≤φ(diam(G)), then dim f_t ∼ℓ.
§.§ Connection with the complex of injective words
A natural development of the work present in this paper (which we are already investigating) concerns a deterministic result about the presence of torsion in eulerian magnitude homology groups of graphs.
It is the author's belief that this kind of result can be achieved by exploiting the strong connection between the eulerian magnitude chain complex and the complex of injective words.
An injective word over a finite alphabet V is a sequence w = v_1v_2⋯ v_t of distinct elements of V.
Call Inj(V) the set of injective words on V partially ordered by inclusion, and recall that the order complex of a poset (P,≤), denoted Δ(P), is the simplicial complex on the vertex set P, whose k-simplices are the chains x_0 < ⋯ < x_k of P.
For example, if P = [n]={1,…,n} with the usual ordering, then Δ(P)=Δ_n-1 is the standard (n-1)-simplex.
A complex of injective words is an order complex Δ(W) associated to a subposet W ⊂Inj(V).
Farmer <cit.> proved that if #(V)=n, then Δ(Inj(V)) has the homology of a wedge of D(n) copies of the (n-1)-sphere S^n-1, where D(n) is the number of derangements (i.e. fixed point free permutations) in 𝕊_n.
The following result was obtained by Björner and Wachs in <cit.> as a strengthening of Farmer's theorem.
Δ(Inj([n])) ≃⋁_D(n) S^n-1.
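The derangement numbers appearing in Farmer's and Björner–Wachs' results satisfy the recursion D(n) = (n-1)(D(n-1) + D(n-2)); the short sketch below tabulates them, i.e. the number of (n-1)-spheres in the wedge.

# Sketch: derangement numbers D(n), the number of (n-1)-spheres in the wedge.
def derangements(n):
    D = [1, 0]                               # D(0) = 1, D(1) = 0
    for k in range(2, n + 1):
        D.append((k - 1) * (D[k - 1] + D[k - 2]))
    return D[n]

print([derangements(n) for n in range(1, 7)])   # [0, 1, 2, 9, 44, 265]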
Let now the alphabet V be the vertex set of a graph G=(V,E).
Let Inj(V) be the set of injective words on the vertex set V and denote by Inj(V,ℓ)={w ∈ Inj(V) such that len(w) ≤ ℓ} the subset containing the words w ∈ Inj(V) such that the length of the walk w in G is at most ℓ.
Then we have a filtration
Inj(V,0) ⊂Inj(V,1) ⊂⋯⊂Inj(V,ℓ) ⊂⋯⊂Inj(V).
The following equivalence easily follows from the definition of the filtration of Inj(V) and the definition of the eulerian Asao-Izumihara complex ET_≤ℓ(a,b)/ET_≤ℓ-1(a,b):
|Inj(V,ℓ)|/|Inj(V,ℓ-1)| = ⋁_{(a,b)} |ET_≤ℓ(a,b)|/|ET_≤ℓ-1(a,b)|,
where |·| denotes the geometric realization.
Further, the connection between the eulerian magnitude chain complex and the complex of injective words is strengthened by the following observation.
Hepworth and Roff <cit.> thoroughly analyzed in the context of directed graphs the magnitude-path spectral sequence (MPSS), a spectral sequence whose E^1 page is exactly standard magnitude homology, path homology <cit.> can be identified with a single axis of page E^2, and whose target object is reachability homology <cit.>.
Reproducing the computations proposed in <cit.> using the filtration of the complex of injective words in <ref>, leads to a version of the MPSS where the E^1 page is exactly eulerian magnitude homology.
Since the homology of the complex of injective words, as the target object, controls the behavior of the spectral sequence, we make the following conjecture.
Let G be a graph.
The eulerian magnitude homology groups of G, EMH_k,ℓ(G), are torsion free for every k,ℓ≥ 0.
§ ACKNOWLEDGMENTS
The author is thankful to Yasuhiko Asao, Luigi Caputi and Chad Giusti for helpful conversations throughout the development of this work.
|
http://arxiv.org/abs/2409.03342v1 | 20240905083911 | Trajectory of the stellar flyby that shaped the outer solar system | ["Susanne Pfalzner", "Amith Govind", "Simon Portegies Zwart"] | astro-ph.EP | ["astro-ph.EP", "astro-ph.GA", "astro-ph.SR"] |
Trajectory of the stellar flyby that shaped the outer solar system
Susanne Pfalzner^1, Amith Govind^1, Simon Portegies Zwart^2
^1 Jülich Supercomputing Centre, Forschungszentrum Jülich, 52428 Jülich, Germany
^2 Leiden Observatory, Leiden University, PO Box 9513, 2300 Leiden, the Netherlands
=====================================================
Unlike the Solar System planets, thousands of smaller bodies beyond Neptune orbit the Sun on eccentric (e > 0.1) and inclined (i > 3^∘) orbits. While migration of the giant planets during the early stages of Solar System evolution can induce substantial scattering of trans-Neptunian objects (TNO), this process cannot account for the small number of distant TNOs (r_p > 60 au) outside the planets' reach. The alternative scenario of the close flyby of another star can instead produce all these TNO features simultaneously, but the possible parameter space for such an encounter is vast. Here, we compare observed TNO properties with thousands of flyby simulations to determine the specific properties of a flyby that reproduces all the different dynamical TNO populations, their location and their relative abundance, and find that a 0.8^+0.1_-0.1 M_⊙ star passing at a distance of about 110 au, inclined by about 70^∘, gives a near-perfect match. This flyby also replicates the retrograde TNO population, which has proved difficult to explain. Such a flyby is reasonably frequent; at least 140 million solar-type stars in the Milky Way are likely to have experienced a similar one. In light of these results, we predict that the upcoming Vera Rubin telescope will reveal that distant and retrograde TNOs are relatively common.
§ INTRODUCTION
The solar system planets accumulated from a disc of dust and gas that once orbited the Sun. Therefore, the planets move close to their common plane on near-circular orbits. About 3000 small objects have been observed to orbit the Sun beyond Neptune (r_p > 35 au); surprisingly, most move on eccentric and inclined orbits <cit.>. Therefore, some force must have lifted these trans-Neptunian objects (TNO) from the disc where they formed and altered their orbits dramatically. One popular hypothesis is that the planets originally were in a more compact configuration; the TNOs formed between them and were scattered outwards when the planets moved to their current locations <cit.>.
However, three distinct TNO dynamical groups are incredibly challenging to explain by the original planet scattering: (i) the cold Kuiper belt objects (KBOs) moving on nearly circular orbits close to the plane, (ii) the Sedna-like TNOs orbiting at large distances on highly eccentric orbits and (iii) TNOs with high inclination (i>60^∘) . While only three Sedna-like objects and two highly inclined TNOs are known so far, they are the make-or-break test for any outer solar system formation theory. Their existence, especially the observed clustering among the Sedna-like and high-inclination TNOs, is unlikely to stem from scattering by the planets <cit.>.
Here, we build on a completely different hypothesis for the TNOs' origin <cit.>. In this model, the TNOs formed in the outer solar system (> 30 au) and the close passage of another star catapulted them to their current orbits. This hypothesis was initially overlooked as such close flybys were deemed too rare. However, recent ALMA observations reveal that close stellar flybys seem to be relatively common <cit.>. Recently, this scenario has gained renewed interest due to simulations showing that flybys can produce a cold Kuiper belt population and Sedna-like objects <cit.>. These proof-of-principle studies considerably strengthened the flyby hypothesis. However, the possible flyby parameter space has remained relatively large, and the resulting predictions remained vague. More precise predictions are essential to decide between the competing hypotheses. Here, we present the essential next step – we provide the close-to-exact parameters of the potential outer-solar-system-shaping flyby. The resulting predictions are distinct and testable by the ≈ 40 000 TNOs awaiting discovery when the LSST becomes operational <cit.>. The TNOs orbiting in the opposite direction to the planets (i > 90^∘) – so-called retrograde TNOs – may be the key to this decision.
§ RESULTS
Our exhaustive numerical parameter study consists of over 3000 individual simulations modelling the effect of a stellar flyby on a planetesimal disc surrounding the Sun extending to 150 au and 300 au. Such sizes have been observed to be typical for protoplanetary and debris discs <cit.>. We vary the mass of the perturber, M_p, its perihelion distance, r_p, the relative orientation of its path in terms of inclination, i, and angle of periastron, ω, and the size of the disc, R_d.
We systematically rejected any flyby that failed to quantitatively match the observed TNO population. This means that any successful candidates had to reproduce the location in the a, e, i parameter space and the relative population sizes of the cold KBOs and the Sedna-like objects. The latter are particularly important as, unlike the resonant TNOs, their relative numbers and orbits are largely unaffected by interactions with Neptune after the flyby, expressed by the Tisserand parameter T<3.05. In addition, we demanded that the planet orbits remain unperturbed (for details, see Methods section). Only three flybys met our strict criteria for an excellent quantitative match to the observed TNOs (see Table 1). These three flybys produced the hot, cold, and Sedna-like TNOs in the observed relative quantities and in the right places in the multi-dimensional parameter space. Each best-fit model emphasised different TNO dynamical groups in the selection process. Still, their parameters are so similar that one can combine them into a single flyby scenario with a remarkably small error bar.
For a parabolic flyby, we find that a star with mass M_p = 0.8^+0.1_-0.1 M_⊙ at a perihelion distance of 110 au, inclined by 70^∘ and with an angle of periastron of 80^∘, provides the best candidate for an outer solar-system-shaping flyby based on current data. The spatial orientation is given relative to the plane of the pre-flyby disc. For an illustration of the flyby dynamics,
see Fig. 1 and the Supplementary video. The post-flyby orbital parameters are shown in Fig. 2, left. We performed higher-resolution simulations for models A – C with 10^5 tracer particles and modelled two disc sizes (150 au and 300 au – models A1 and A2, respectively).
Interestingly, these parameters are fairly consistent with those of another flyby scenario <cit.>, which argues that a 1.8 M_⊙ star would have passed the Solar system at r_p = 227 au inclined by 17^∘ – 34^∘. The different mass can be explained by the type of encounter studied: where <cit.> adopted an exchange interaction to abduct Sedna from the intruder, whereas here we argue that Sedna (and the other KBOs) are native to the Solar system.
The flyby probably happened several Gyr in the past; thus, how much change the orbital parameters on such a time scale? Investigating the long-term evolution of the TNO population is computationally expensive. Therefore, we studied only the period of 1 Gyr after the flyby. The general outcome remains very similar (see Fig. 2, middle). The changes include an increase in low-inclination TNOs, improving the match to the cold TNO population and filling in the low-inclination distant TNOs missing immediately after the flyby. Thus, the long-term evolution leads to an even better fit.
The final model delivered a surprise: the best-fit flyby created retrograde TNOs despite them not being part of the selection process. So far two retrograde TNOs have been confirmed – 2008 KV_42 and 2011 KT_19 – both having relatively small periastron distances and being inclined by 103.41^∘ and 110.15^∘, respectively.
An additional TNO is suspected of moving on a retrograde orbit – 2019 EE_6 – but its orbit is currently not well constrained. It is more distant (r_p > 30 au) and may be closer to the plane.
Eventually, high-inclination TNOs could be crucial when deciding between different hypotheses. Retrograde TNOs, as such, provide a challenge for the planet instability model. Adding a distant planet (Planet Nine) appeared to solve the problem <cit.>. This combined model can account for retrograde TNOs such as 2008 KV_42 and 2011 KT_19 <cit.>.
However, distant, highly inclined TNOs (r_p> 30 au, i>150^∘), if existing, may provide a challenge also for the planet nine model.
Conversely, retrograde TNOs might also be the key to determining the primordial size of the solar system disc. The maximum inclination of retrograde TNOs is directly related to the primordial disc size (see Fig. 3). The inclinations of 2008 KV_42 and 2011 KT_19 (103.41^∘ and 110.15^∘) demand that the Sun's primordial debris disc must have extended to at least the size indicated by this relation in Fig. 3.
Retrograde TNOs close to the plane would argue for an even larger size. Using this relation, retrograde TNOs detected in the future will enable setting stringent bounds on the primordial disc size.
Currently, only the nearest and brightest TNOs are observable, and high-inclination and very eccentric objects are challenging to detect. The right panel of Fig. 2 supplies a sneak preview of the TNO discoveries we expect from the here presented flyby scenario. It shows that the clustering among the known highly inclined TNOs <cit.> and Sedna-like objects is part of a much larger pattern caused by the flyby. It will be interesting to see this pattern emerge when more TNOs are discovered. Although the pattern becomes slightly less distinctive on Gyr timescales due to secular effects (see middle panel), the clustering as such persists (see Fig. 2, middle).
The information about the flyby parameters enables us to predict how the relative sizes of different TNOs dynamical groups will change when the observable space expands (see Supplementary Figure 1 and Supplemantary Table 1). Matching the observations, Sedna-like TNOs make up only about 0.1% of all TNOs in model A–C in the currently observationally accessible space. However, this will increase to 7% by the end of the ten-year observation campaign of the Vera Rubin telescope as more distant TNOs become observable. Likewise, we anticipate an increase in the fraction of retrograde TNOs from 0.15% to about 5% as the discovery space expands. Although some of the expected retrograde TNOs may orbit close to the plane, we foresee most of them moving at high inclinations from the plane.
However, we caution against overinterpreting Fig. 2. To some extent, we expect the non-detection of TNOs in covered areas. Neither the size nor the structure of the primordial solar disc is known. Any change – smaller size or ring structures – in the primordial disc leads to "holes" in the parameter space indicated in Fig. 2. Indeed, such gaps could even help to determine the solar disc's structure before the flyby. Conversely, if TNOs are found in areas not predicted by Fig. 2 even after including the long-term evolution, this would challenge the presented hypothesis. However, its falsifiability makes the flyby hypothesis methodologically so strong.
So far, we have concentrated on the bound TNO population beyond 30 au. However, while leaving the planetary orbits undisturbed, the flyby injects many TNOs (≈9% of the initial disc mass m_i) into the area inside 30 au. These injected TNOs move on highly eccentric (e> 0.4), highly inclined orbits, regularly revisiting the trans-Neptunian region. At the same time, a considerable fraction (26%) of the TNOs becomes unbound from the Sun (see Supplementary Figure 2), and the perturber captures 8.3% of the material initially bound to the Sun (model A1). While moving on highly eccentric orbits, some of these captured solar TNOs come incredibly close to the perturber star. These TNOs move well within the ice lines of this system, where volatiles evaporate.
§ DISCUSSION
The known TNO population is subject to many different biases <cit.>, and
likely represent only a fraction (<1% – 10%) of the entire population. New TNOs are constantly discovered, some with entirely unexpected orbital properties <cit.>. Thus, searching for flyby parameters best fitting the observations presented here can only be a first step. Once a significant portion of the TNOs is known, this procedure must be repeated, and the flyby parameters adjusted accordingly. Despite these reservations, we expect the final best-fit parameters to be similar. After all, the model must still account for the Kuiper belt, Sedna-like and retrograde TNOs at the currently known positions in the multi-dimensional parameter space. Alternative hybrid schemes combining planet scattering with one or more flybys have been suggested <cit.>. However, it is an open question whether such hybrid scenarios have predictive power.
When would this flyby have occurred? Close encounters are most frequent during the first 10 Myr of a star's life when it is still part of its birth cluster. Simulations find that, in favourable environments (similar to NGC 2244 and M44), a sizeable fraction of all solar-type stars experience an encounter reducing the unperturbed area to 30 au – 50 au <cit.>. Even in low-density clusters, ≈ 1% of solar-type stars experience such an encounter. Putting this number in perspective: in the first 10 Myr of their life, at least 140 million solar-type stars (possibly ten times more) have experienced an encounter similar to the Sun's in the Milky Way. In ≈10% of these cases, the encounter was with a similar-mass perturber at approximately the same periastron distance (r_p = 90 au – 130 au) as the Sun's flyby. Close flybys became less frequent after the solar birth cluster expanded and dissolved at the end of the star formation process. However, the 4.55 Gyr that passed since the solar system formed more than outbalances the much lower encounter frequency, summing up to a probability of 20% – 30% for a late encounter <cit.>. However, due to the stellar velocity dispersion increasing with the Sun's age, these flybys would be mainly on highly hyperbolic orbits. Hyperbolic flybys are much less efficient in exciting the orbits of TNOs. Therefore, the question of whether a later flyby could lead to a similarly good match requires further study.
The flyby scenario excludes neither the planets forming in a more compact configuration nor the existence of a primordial Oort cloud. Planet migration could have scattered additional objects into the trans-Neptunian region, contributing to the hot Kuiper belt population without altering the Sedna-like or retrograde TNO populations. Even if the planets were located at their current positions at the time of the flyby, they would have been unaffected by the flyby, except for Neptune. If Neptune were at its current distance at the time of the flyby, it would have been shielded from the effect of the flyby in 25% of cases, staying in the gravitational shadow of the perturber – meaning flying behind the perturber star (see Supplementary Figure 2).
If the Oort cloud existed before the flyby, it would have been severely affected, but not erased. A flyby of the given parameters would have left a sufficiently large number of TNOs bound to the Sun to account for the current estimates of the Oort cloud mass. Besides, the Oort cloud might have been simultaneously enriched by TNOs with a ≫ 10^4 au, originally belonging to the outer disc and planetesimals initially being part of the potentially existing perturber Oort cloud <cit.>.
Finally, one may speculate whether the probability of the perturber's planetary system developing life increased by the flyby. The probability would have been higher if the flyby happened not during the first 10 Myr but later when pre-forms of life had already developed.
§ CONCLUSION
We demonstrated that the flyby of a star of mass 0.8 M_⊙ passing on a parabolic orbit at a perihelion of 110 au at an inclination of 70^∘ explains several unaccounted-for outer solar system features. It quantitatively reproduces the orbital properties of the cold Kuiper belt population, Sedna-like objects and high-inclination TNOs. Unexpectedly, this flyby also accounts for the otherwise difficult-to-explain retrograde population. The model's beauty lies in its simplicity and ability to make specific predictions. These predictions include a distinct clustering in a-, e-, i-space and a rise in the relative fraction of retrograde and Sedna-like TNOs. Future TNO discoveries may reveal the primordial solar system disc's size and structure.
§ METHOD
§.§ Flyby simulations and selection procedure
Our parameter study consists of 3080 individual simulations
modelling the effect of stellar flybys on a planetesimal or debris disc surrounding the Sun. The aim was to find the subset of simulations that produce the various dynamical groups in the observed quantities and locations in the multi-dimensional parameter space. Previous work <cit.> found that the most promising parameter space for finding the most challenging TNO dynamical groups entails a restricted range of perturber masses and periastron distances, inclinations i = 50^∘ – 70^∘, and a restricted range of angles of periastron. We scanned this parameter space in mass steps of 0.1 M_⊙, r_p in steps of 10 au, i in steps of 5^∘, and ω with a variation of 10^∘.
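A sketch of such a parameter grid is given below. The step sizes follow the text; the end points of the mass, periastron and ω ranges are assumptions chosen for illustration (only the steps and the best-fit neighbourhood are quoted here), so the grid size differs from the 3080 simulations of the actual study.

import numpy as np
from itertools import product

# Sketch of a flyby parameter grid (range end points are assumptions).
masses = np.arange(0.5, 1.11, 0.1)     # perturber mass [M_sun], steps of 0.1
r_peri = np.arange(80, 141, 10)        # periastron distance [au], steps of 10
incl   = np.arange(50, 71, 5)          # inclination [deg], steps of 5
omega  = np.arange(60, 101, 10)        # angle of periastron [deg], steps of 10
discs  = [150, 300]                    # modelled disc sizes [au]

grid = list(product(masses, r_peri, incl, omega, discs))
print(f"{len(grid)} flyby configurations in this sketch grid")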
The simulations start with an idealised thin disc <cit.> represented by N=10^4 mass-less tracer particles. Taking the observed sizes of typically 100 – 500 au of protoplanetary and debris discs for guidance <cit.>, we model disc sizes of and We treat model the N gravitational three-body interactions between the Sun, the perturber star and each of the N test particles <cit.>. Self-gravity and viscosity effects are negligible, as the interaction time is short (< 4000 yr) and the disc's mass is considerably smaller than the Sun's (m_d ≪ 0.001 M_⊙). We use a Runge-Kutta Cash-Karp scheme to determine the particle trajectories. The simulations start and end when the force of the perturber star on each particle is less than 0.1%
<cit.>. We optimise the computational effort by using an initial constant particle surface density to obtain a high resolution in the outer parts of the disc. We then post-process the data by assigning different masses to the particles to model the actual mass density distribution <cit.>.
We set strict standards for matching observations with simulations, rejecting 99.9% of all simulated cases. Nevertheless, this computational expense paid off. We obtained a near-perfect match to the known TNO population. We tested only for those TNOs not strongly coupled to Neptune (T_N > 3.05, where T is the Tisserand parameter). Thus, most resonant TNOs were excluded from the comparison. Similarly, we did not consider TNOs with a >10,000 au as more distant encounters and the galactic potential could affect their orbits over Gyr timescales.
After the flyby, some objects enter into a resonant orbit with Neptune during our long-term simulation. They are not visible in Fig. 1 since they do not meet the T_N > 3.05 threshold. Likely the number of resonant objects is small because the simulation only covers the first 1 Gyr, additional resonant TNOs may be produced over extended periods. A higher resolution of the disc population would also required to describe this process adequately. Besides, resonant TNOs may be produced if Neptune migrated outward after the flyby.
We used a decision tree-based inspection method, first selecting the flybys that avoid strong perturbations inside 30 au – 35 au. We used the approximation,
r_d=0.28 × M_p^-0.32 r_peri
<cit.>, as an indicator of the radial distance r_d up to which the disc remains largely undisturbed. This equation applies only to coplanar encounters while we study inclined encounters. Therefore, we slightly extend the parameter space to account for the difference. A subset of 490 simulations fulfilled the criterion of an unperturbed population up to 30 au – 35 au. Here, we assume that the planets orbit at their current locations. If the solar system was in a more compact configuration during the flyby, the constraints would relax. Next, we retained only flybys that produce a cold Kuiper belt population and Sedna-like objects in the suitable regions of the parameter space. Only a small subset clustering around perturber masses 0.7 – 0.9 M_⊙ and periastron distances of 90 au – 110 au fulfils this criterion. Among the few remaining possibilities, additional cases can be excluded on more stringent criteria. For example, among the r_p = 110 au cases, higher-mass perturbers tend to produce too few cold Kuiper belt objects, while lower-mass perturbers (M_p ≤ 0.7 M_⊙) have difficulties reproducing the high eccentricity TNOs. For 0.8 M_⊙ perturbers, only perihelion distances of 100 au and 110 au can produce the right size of the unperturbed region. The closer encounter (100 au) produces 80% fewer cold TNOs than the 110 au perturber. Hence, a 0.8 M_⊙ perturber passing at a periastron distance of 110 au best fits the observational data.
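As a quick numerical check, evaluating the truncation estimate quoted above for the best-fit parameters gives an unperturbed region consistent with the 30 au – 35 au criterion:

# r_d = 0.28 * M_p**(-0.32) * r_peri for the best-fit flyby (M_p in M_sun).
M_p, r_peri = 0.8, 110.0
r_d = 0.28 * M_p ** (-0.32) * r_peri
print(f"r_d ~ {r_d:.1f} au")     # ~33 au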
We simultaneously tested for the inclinations and the argument of perihelion of the perturber's orbit. Again, the relative number of cold Kuiper belt objects is a key element. Supplementary Figure 3 shows the dependence of the number of cold population particles as a function of i and ω for a flyby with M_p = 0.8 at r_p = 110 au. The cold population decreases significantly below 70^∘ inclination and 80^∘ argument of perihelion. While above these values, the simulations do not reproduce the inclination and eccentricity distributions of the TNOs correctly. Hence, an inclination of 70^∘ and an angle of perihelion of 80^∘ produce the best fit.
We tested the best-fit flyby to check their influence on the giant planets orbits. Our criterion is that the changes in i and e due to the flyby should be less than the difference of currently observed planetary orbits from being circular and in the plane. Neptune's orbit is more vulnerable than those of the other planets. However, the key parameter is the orbital position at the moment of flyby. Even Neptune's orbit remains nearly unaffected (Δ < today's e and i) at the locations indicated in blue in Supplementary Figure 2). Uranus's eccentricity remains unaffected; however, small ranges of positions are excluded because the inclination is slightly higher (1^o) than today's (0.7^o). The influence on Jupiter and Saturn is negligible, independent of orbital location.
When performing such a comparison, one faces two challenges: (i) the biases in the known TNO population <cit.> and (ii) the fact that the size of the primordial disc is unknown. Therefore, we determined three best fits emphasising different populations (see Table 1). Model B gives a slightly larger cold population than A 1 (see Supplementary Figure 4). Model C produces more high-inclination objects (see Supplementary Figure 6). Models A1 and A2 only differ in their assumed disc sizes of 150 au and 300 au, respectively.
While this method was labour-intensive, it was the most reliable approach. Automated statistical methods <cit.> generally test against deviations from the median or mean. We find that taking a mean as the decision basis fails to account for multiple clustering in TNO dynamical groups, especially in multidimensional parameter space. Besides, various observational biases make it problematic to compare “unbiased" simulation results in an automated way.
We also tested using the observation simulator OSSOS <cit.>, applying the default absolute magnitude distribution to assign magnitudes to the test particles. We find that for model A1, 70 objects of our simulated objects should be currently observable. However, rating the quality of this match in an automated way faces the problem that the result depends sensitively on the size of the chosen comparison parameter space.
§.§ Long-term evolution
Determining the long-term evolution after the flyby requires a high-precision integrator, which makes these simulations computationally expensive. Therefore, we modelled the long-term evolution only for a subset of the results, consisting of all particles fulfilling the conditions:
and a < 2000 au. These TNOs correspond to ≈20 % of the total TNO population and roughly to the TNOs that should be visible with instruments like the Vera Rubin telescope. In addition to the test particles from the flyby simulation, the four outer giant planets were included in the long-term simulation. We start with the particle positions and velocities at 12 000 years after the perihelion passage. Using the GENGA code <cit.>, we follow the trajectories of the test particles for the consecutive 1 Gyr. These trajectories are determined using a hybrid symplectic integrator.
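The long-term integrations in the paper were run with the GPU code GENGA; purely as an illustration of the setup (giant planets as massive bodies, flyby survivors as test particles, hybrid symplectic integration over 1 Gyr), a sketch using the open-source REBOUND package and its MERCURIUS hybrid integrator is shown below. The planetary elements, time step, and the `survivors` placeholder are assumptions for the sketch, not the paper's actual configuration.

```python
import rebound

survivors = []                                  # placeholder: load the flyby snapshot
                                                # taken 12,000 yr after perihelion passage

sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')

sim.add(m=1.0)                                  # Sun
sim.add(m=9.55e-4, a=5.20)                      # Jupiter
sim.add(m=2.86e-4, a=9.58)                      # Saturn
sim.add(m=4.37e-5, a=19.2)                      # Uranus
sim.add(m=5.15e-5, a=30.1)                      # Neptune
sim.N_active = 5                                # everything added below is a test particle

for p in survivors:                             # flyby survivors with a < 2000 au
    sim.add(m=0.0, x=p.x, y=p.y, z=p.z, vx=p.vx, vy=p.vy, vz=p.vz)

sim.integrator = "mercurius"                    # hybrid symplectic scheme
sim.dt = 0.5                                    # yr; a small fraction of Jupiter's period
sim.move_to_com()
sim.integrate(1e9)                              # follow the subsequent 1 Gyr
```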
§.§ Flyby frequency determination
We determined the occurrence rate of such close flybys in different cluster environments, ranging from short-lived low-N clusters to massive, compact, long-lived clusters. We performed an extensive set of simulations using the code NBODY6++ <cit.>. In these simulations <cit.>, the cluster development matches that of observed clusters in terms of the temporal evolution of cluster mass and size with cluster age. We computed hundreds of realisations so that the results have high statistical relevance. We record the parameters of any close interaction between two stars and use this information in a post-processing step to determine the effect of each encounter on the disc size (taken to equal the size of the region left unperturbed by the encounter). We study the subset of solar-type stars and test for the frequency of encounters leading to a 30 – 50 au-sized unperturbed disc. We also test for solar-type stars encountering a perturber of mass comparable to the best-fit value of 0.8 M_⊙.
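The post-processing step can be sketched as follows. The disc-size parameterization used here is the coplanar, parabolic fit of Breslau et al. (2014), quoted from memory for orientation only; the study itself extends this treatment to inclined encounters, so treat the coefficients as indicative.

```python
import numpy as np

def disc_size_after_encounter(m_ratio, r_peri_au):
    """Approximate size (au) of the region left unperturbed by a coplanar,
    parabolic flyby, with m_ratio = M_perturber / M_host and r_peri_au the
    periastron distance in au (fit of Breslau et al. 2014, indicative only)."""
    return 0.28 * r_peri_au**1.02 * m_ratio**(-0.32)

def fraction_solar_system_like(encounters, m_host=1.0):
    """Fraction of recorded encounters that truncate the disc of a solar-type
    star to 30-50 au, i.e. flybys resembling the one studied here.
    `encounters` is an array of (m_perturber, r_peri) pairs."""
    m, rp = np.asarray(encounters, dtype=float).T
    size = disc_size_after_encounter(m / m_host, rp)
    return np.mean((size >= 30.0) & (size <= 50.0))
```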
§.§ Toy model for effect on the Oort cloud
We estimated the effect of such a flyby on a potentially existing Oort cloud. Our toy model consisted of 10 000 particles randomly distributed in a 100 000 au-sized sphere surrounding the Sun. We simulated the effect of model A's flyby on this Oort cloud. The particle trajectories are calculated using the REBOUND N-body code <cit.>, employing IAS15, a 15th-order Gauss-Radau integrator <cit.>.
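A minimal REBOUND sketch of such a toy model is shown below. It assumes, for simplicity, that the cloud particles start at rest and that the 0.8 M_⊙ perturber approaches on a placeholder hyperbolic orbit with a 110 au periastron; the actual study used the full model-A flyby geometry.

```python
import numpy as np
import rebound

rng = np.random.default_rng(1)

sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')
sim.integrator = "ias15"                  # 15th-order adaptive Gauss-Radau integrator
sim.add(m=1.0)                            # Sun
sim.add(m=0.8, a=-220., e=1.5, f=-2.0)    # perturber; placeholder hyperbolic orbit, q = 110 au
sim.N_active = 2                          # only the Sun and the perturber are massive

# 10,000 massless particles distributed randomly within a 100,000 au sphere
for _ in range(10_000):
    u = rng.normal(size=3)
    u *= 1.0e5 * rng.random()**(1.0 / 3.0) / np.linalg.norm(u)
    sim.add(m=0.0, x=u[0], y=u[1], z=u[2])   # started at rest for simplicity

sim.move_to_com()
sim.integrate(2.0e5)                      # follow the passage for a placeholder 200 kyr
```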
Data availability The data of the complete parameter study are available on the DESTINY database under the following link https://destiny.fz-juelich.de.
Code availability The codes REBOUND and GENGA are open-access codes. The DESTINY code will be available upon reasonable request. However, the DESTINY database (https://destiny.fz-juelich.de) also allows diagnostics to be performed online: it can be used to reproduce Fig. 2, as well as similar plots for the entire parameter study. A complete illustration of the dynamics of the flyby scenario of model A1 is available in the Supplementary video.
Author contributions
Conceptualization: S.P.; Simulations of the parameter study, long-term evolution and effect on the Oort cloud: A.G.; Diagnostics: S.P., A.G.; Comparison to observational data: A.G., S.P., S.P.Z.; Writing: S.P., A.G., S.P.Z.; Funding acquisition & resources: S.P.; Supervision: S.P.; Data curation: A.G.
Acknowledgments
We want to thank Sonja Habbinga for her assistance with the visualization of model A1, M. Bannister and R. Dorsey for advising us on interpreting TNO survey results and the OSSOS simulator, and F. Wagner for supporting us in implementing the code GENGA on the FZJ system. SP has received funding for this project through grant 450107816 of the Deutsche Forschungsgemeinschaft.
Competing interests
The authors declare no competing interests related to the topic of this paper.
Gladman:2021
Gladman, B.,
Volk, K.:
Transneptunian Space.
Ann. Rev. Astron. Astr.
59
203–246
(2021).
10.1146/annurev-astro-120920-010005
Kavalaars:2020
Kavelaars, J.J.,
Lawler, S.M.,
Bannister, M.T.,
Shankman, C.:
Perspectives on the distribution of orbits of distant Trans-Neptunian
objects.
In: Prialnik, D.,
Barucci, M.A.,
Young, L. (eds.)
The Trans-Neptunian Solar System,
61–77
(2020).
10.1016/B978-0-12-816490-7.00003-5
Fernandez:1984
Fernandez, J.A.,
Ip, W.-H.:
Some dynamical aspects of the accretion of Uranus and Neptune: The
exchange of orbital angular momentum with planetesimals.
Icarus
58(1),
109–120
(1984).
10.1016/0019-1035(84)90101-5
Hahn:1999
Hahn, J.M.,
Malhotra, R.:
Orbital Evolution of Planets Embedded in a Planetesimal Disk.
Astron. J.
117(6),
3041–3053
(1999)
https://arxiv.org/abs/astro-ph/9902370arXiv:astro-ph/9902370
[astro-ph].
10.1086/300891
Gomez:2003
Gomes, R.S.:
The origin of the Kuiper Belt high-inclination population.
Icarus
161(2),
404–418
(2003).
10.1016/S0019-1035(02)00056-8
Morbi:2003
Morbidelli, A.,
Brown, M.E.,
Levison, H.F.:
The Kuiper Belt and its Primordial Sculpting.
Earth Moon and Planets
92(1),
1–27
(2003).
10.1023/B:MOON.0000031921.37380.80
Levison:2008
Levison, H.F.,
Morbidelli, A.,
Van Laerhoven, C.,
Gomes, R.,
Tsiganis, K.:
Origin of the structure of the Kuiper belt during a dynamical
instability in the orbits of Uranus and Neptune.
Icarus
196(1),
258–273
(2008)
https://arxiv.org/abs/0712.0553arXiv:0712.0553
[astro-ph].
10.1016/j.icarus.2007.11.035
Raymond:2018
Raymond, S.N.,
Izidoro, A.,
Morbidelli, A.:
Solar System Formation in the Context of Extrasolar Planets.
In: Meadows, V.S.,
Arney, G.N.,
Schmidt, B.E.,
Des Marais, D.J. (eds.)
Planetary Astrobiology,
p. 287
(2020).
10.2458/azu_uapress_9780816540068
Brown:2004
Brown, M.E.,
Trujillo, C.,
Rabinowitz, D.:
Discovery of a Candidate Inner Oort Cloud Planetoid.
Astrophys. J.
617(1),
645–649
(2004)
https://arxiv.org/abs/astro-ph/0404456arXiv:astro-ph/0404456
[astro-ph].
10.1086/422095
Trujillo:2014
Trujillo, C.A.,
Sheppard, S.S.:
A Sedna-like body with a perihelion of 80 astronomical units.
Nature
507(7493),
471–474
(2014).
10.1038/nature13156
Sheppard:2019
Sheppard, S.S.,
Trujillo, C.A.,
Tholen, D.J.,
Kaib, N.:
A New High Perihelion Trans-Plutonian Inner Oort Cloud Object: 2015
TG387.
Astron. J.
157(4),
139
(2019)
https://arxiv.org/abs/1810.00013arXiv:1810.00013
[astro-ph.EP].
10.3847/1538-3881/ab0895
Gladman:2009
Gladman, B.,
Kavelaars, J.,
Petit, J.-M.,
Ashby, M.L.N.,
Parker, J., et al.,
Discovery of the First Retrograde Transneptunian Object.
Astrophys.J. Lett.
697(2),
91–94
(2009).
10.1088/0004-637X/697/2/L91
Chen:2016
Chen, Y.-T.,
Lin, H.W.,
Holman, M.J.,
Payne, M.J.,
Fraser, W.C., et al.,
Discovery of a New Retrograde Trans-Neptunian Object: Hint of a
Common Orbital Plane for Low Semimajor Axis, High-inclination TNOs and
Centaurs.
Astrophys. J. Lett.
827(2),
24
(2016)
https://arxiv.org/abs/1608.01808arXiv:1608.01808
[astro-ph.EP].
10.3847/2041-8205/827/2/L24
2014MNRAS.444.2808P
Punzo, D.,
Capuzzo-Dolcetta, R.,
Portegies Zwart, S.:
The secular evolution of the Kuiper belt after a close stellar
encounter.
Mon. Not. R. Astron.Soc.
444(3),
2808–2819
(2014)
https://arxiv.org/abs/1403.6633arXiv:1403.6633
[astro-ph.EP].
10.1093/mnras/stu1650
Kobayashi:2001
Kobayashi, H.,
Ida, S.:
The Effects of a Stellar Encounter on a Planetesimal Disk.
Icarus
153(2),
416–429
(2001)
https://arxiv.org/abs/astro-ph/0107086arXiv:astro-ph/0107086
[astro-ph].
10.1006/icar.2001.6700
Kenyon:2004
Kenyon, S.J.,
Bromley, B.C.:
Stellar encounters as the origin of distant Solar System objects in
highly eccentric orbits.
Nature
432(7017),
598–602
(2004)
https://arxiv.org/abs/astro-ph/0412030arXiv:astro-ph/0412030
[astro-ph].
10.1038/nature03136
Kobayashi:2005
Kobayashi, H.,
Ida, S.,
Tanaka, H.:
The evidence of an early stellar encounter in Edgeworth Kuiper
belt.
Icarus
177(1),
246–255
(2005).
10.1016/j.icarus.2005.02.017
Dai:2015
Dai, F.,
Facchini, S.,
Clarke, C.J.,
Haworth, T.J.:
A tidal encounter caught in the act: modelling a star-disc fly-by in
the young RW Aurigae system.
Mon. Not. R. Astron. Soc.
449(2),
1996–2009
(2015)
https://arxiv.org/abs/1502.06649arXiv:1502.06649
[astro-ph.SR].
10.1093/mnras/stv403
Rodriguez:2018
Rodriguez, J.E.,
Loomis, R.,
Cabrit, S.,
Haworth, T.J.,
Facchini, S., et al.,
Multiple Stellar Flybys Sculpting the Circumstellar Architecture in
RW Aurigae.
Astrophys. J.
859(2),
150
(2018)
https://arxiv.org/abs/1804.09190arXiv:1804.09190
[astro-ph.SR].
10.3847/1538-4357/aac08f
Rosa:2019
De Rosa, R.J.,
Kalas, P.:
A Near-coplanar Stellar Flyby of the Planet Host Star HD 106906.
Astron. J.
157(3),
125
(2019)
https://arxiv.org/abs/1902.10220arXiv:1902.10220
[astro-ph.EP].
10.3847/1538-3881/ab0109
Winter:2018b
Winter, A.J.,
Booth, R.A.,
Clarke, C.J.:
Evidence of a past disc-disc encounter: HV and DO Tau.
Mon. Not. R. Astron. Soc.
479(4),
5522–5531
(2018)
https://arxiv.org/abs/1807.04295arXiv:1807.04295
[astro-ph.SR].
10.1093/mnras/sty1866
Akiyama:2019
Akiyama, E.,
Vorobyov, E.I.,
Baobabu Liu, H.,
Dong, R.,
de Leon, J., et al.,
A Tail Structure Associated with a Protoplanetary Disk around SU
Aurigae.
Astron. J.
157(4),
165
(2019)
https://arxiv.org/abs/1902.10306arXiv:1902.10306
[astro-ph.EP].
10.3847/1538-3881/ab0ae4
Menard:2020
Ménard, F.,
Cuello, N.,
Ginski, C.,
van der Plas, G.,
Villenave, M., et al.,
Ongoing flyby in the young multiple system UX Tauri.
Astron. Astrophys.
639,
1
(2020)
https://arxiv.org/abs/2006.02439arXiv:2006.02439
[astro-ph.SR].
10.1051/0004-6361/202038356
Pfalzner:2018
Pfalzner, S.,
Bhandare, A.,
Vincke, K.,
Lacerda, P.:
Outer Solar System Possibly Shaped by a Stellar Fly-by.
Astrophys. J.
863(1),
45
(2018)
https://arxiv.org/abs/1807.02960arXiv:1807.02960
[astro-ph.GA].
10.3847/1538-4357/aad23c
Moore:2020
Moore, N.W.H.,
Li, G.,
Adams, F.C.:
Inclination Excitation of Solar System Debris Disk Due to Stellar
Flybys.
Astrophys. J.
901(2),
92
(2020)
https://arxiv.org/abs/2007.15666arXiv:2007.15666
[astro-ph.EP].
10.3847/1538-4357/abb08f
LSST_book
LSST Science Collaboration:
LSST Science Book, Version 2.0.
arXiv e-prints,
0912–0201
(2009)
https://arxiv.org/abs/0912.0201arXiv:0912.0201
[astro-ph.IM].
10.48550/arXiv.0912.0201
Andrews_2020
Andrews, S.M.:
Observations of Protoplanetary Disk Structures.
Ann. Rev. Astron. Astrophys.
58,
483–528
(2020).
10.1146/annurev-astro-031220-010302
Hendler_2020
Hendler, N.,
Pascucci, I.,
Pinilla, P.,
Tazzari, M.,
Carpenter, J., et al.,
The Evolution of Dust Disk Sizes from a Homogeneous Analysis of 1-10
Myr old Stars.
Astrophys. J.
895(2),
126
(2020)
https://arxiv.org/abs/2001.02666arXiv:2001.02666
[astro-ph.EP].
10.3847/1538-4357/ab70ba
2015MNRAS.453.3157J
Jílková, L.,
Portegies Zwart, S.,
Pijloo, T.,
Hammer, M.:
How Sedna and family were captured in a close encounter with a solar
sibling.
Mon. Not. R. Astron. Soc.
453(3),
3157–3162
(2015)
https://arxiv.org/abs/1506.03105arXiv:1506.03105
[astro-ph.EP].
10.1093/mnras/stv1803
Batygin:2016
Batygin, K.,
Brown, M.E.:
Generation of Highly Inclined Trans-Neptunian Objects by Planet
Nine.
Astrophys. J. Lett.
833(1),
3
(2016)
https://arxiv.org/abs/1610.04992arXiv:1610.04992
[astro-ph.EP].
10.3847/2041-8205/833/1/L3
Batygin:2024
Batygin, K.,
Morbidelli, A.,
Brown, M.E.,
Nesvorný, D.:
Generation of Low-inclination, Neptune-crossing Trans-Neptunian
Objects by Planet Nine.
Astrophys. J. Lett.
966(1),
8
(2024)
https://arxiv.org/abs/2404.11594arXiv:2404.11594
[astro-ph.EP].
10.3847/2041-8213/ad3cd2
Bannister:2018
Bannister, M.T.,
Gladman, B.J.,
Kavelaars, J.J.,
Petit, J.-M.,
Volk, K., et al.,
OSSOS. VII. 800+ Trans-Neptunian Objects—The Complete
Data Release.
Astrophys. J. Supp.
236(1),
18
(2018)
https://arxiv.org/abs/1805.11740arXiv:1805.11740
[astro-ph.EP].
10.3847/1538-4365/aab77a
Bernardinelli:2022
Bernardinelli, P.H.,
Bernstein, G.M.,
Sako, M.,
Yanny, B.,
Aguena, M., et al.,
A Search of the Full Six Years of the Dark Energy Survey for Outer
Solar System Objects.
Astrophys. J. Supp.
258(2),
41
(2022)
https://arxiv.org/abs/2109.03758arXiv:2109.03758
[astro-ph.EP].
10.3847/1538-4365/ac3914
Shephard:2016
Sheppard, S.S.,
Trujillo, C.,
Tholen, D.J.:
Beyond the Kuiper Belt Edge: New High Perihelion Trans-Neptunian
Objects with Moderate Semimajor Axes and Eccentricities.
Astrophys. J. Supp.
825(1),
13
(2016)
https://arxiv.org/abs/1606.02294arXiv:1606.02294
[astro-ph.EP].
10.3847/2041-8205/825/1/L13
Nesvorny:2023
Nesvorný, D.,
Bernardinelli, P.,
Vokrouhlický, D.,
Batygin, K.:
Radial distribution of distant trans-Neptunian objects points to
Sun's formation in a stellar cluster.
Icarus
406,
115738
(2023)
https://arxiv.org/abs/2308.11059arXiv:2308.11059
[astro-ph.EP].
10.1016/j.icarus.2023.115738
Pfalzner:2020
Pfalzner, S.,
Vincke, K.:
Cradle(s) of the Sun.
Astrophys. J.
897(1),
60
(2020)
https://arxiv.org/abs/2005.11260arXiv:2005.11260
[astro-ph.EP].
10.3847/1538-4357/ab9533
2021A A...647A.136P
Portegies Zwart, S.:
Oort cloud Ecology. I. Extra-solar Oort clouds and the origin of
asteroidal interlopers.
Astron. Astrophys.
647,
136
(2021)
https://arxiv.org/abs/2011.08257arXiv:2011.08257
[astro-ph.EP].
10.1051/0004-6361/202038888
Pringle:1981
Pringle, J.E.:
Accretion discs in astrophysics.
Ann. Rev. Astron. Astrophys,
19,
137–162
(1981).
10.1146/annurev.aa.19.090181.001033
Musielek:2014
Musielak, Z.E.,
Quarles, B.:
The three-body problem.
Reports on Progress in Physics
77(6),
065901
(2014)
https://arxiv.org/abs/1508.02312arXiv:1508.02312
[astro-ph.EP].
10.1088/0034-4885/77/6/065901
Breslau:2014
Breslau, A.,
Steinhausen, M.,
Vincke, K.,
Pfalzner, S.:
Sizes of protoplanetary discs after star-disc encounters.
Astron. Astrophys.
565,
130
(2014)
https://arxiv.org/abs/1403.8099arXiv:1403.8099
[astro-ph.GA].
10.1051/0004-6361/201323043
Hall:1996
Hall, S.M.,
Clarke, C.J.,
Pringle, J.E.:
Energetics of star-disc encounters in the non-linear regime.
Mon. Not. R. Astron. Soc.
278,
303–320
(1996)
https://arxiv.org/abs/astro-ph/9510153arXiv:astro-ph/9510153
[astro-ph].
10.1093/mnras/278.2.303
Steinhausen:2012
Steinhausen, M.,
Olczak, C.,
Pfalzner, S.:
Disc-mass distribution in star-disc encounters.
Astrophys. J.
538,
10
(2012)
https://arxiv.org/abs/1111.2466arXiv:1111.2466
[astro-ph.SR].
10.1051/0004-6361/201117682
Jilkova:2015
Jílková, L.,
Portegies Zwart, S.,
Pijloo, T.,
Hammer, M.:
How Sedna and family were captured in a close encounter with a solar
sibling.
Mon. Not. R. Astron. Soc.
453(3),
3157–3162
(2015)
https://arxiv.org/abs/1506.03105arXiv:1506.03105
[astro-ph.EP].
10.1093/mnras/stv1803
Grimm_2014
Grimm, S.L.,
Stadel, J.G.:
The GENGA code: Gravitational encounters in n-body simulations with
gpu accelaration.
Astrophys. J.
796(1),
23–39
(2014).
10.1088/0004-637X/796/1/23
Aarseth:2003
Aarseth, S.J.:
Gravitational N-Body Simulations,
(2003)
rebound
Rein, H.,
Liu, S.-F.:
REBOUND: an open-source multi-purpose N-body code for collisional
dynamics.
Astron. Astrophys.
537,
128
(2012)
https://arxiv.org/abs/1110.4876arXiv:1110.4876
[astro-ph.EP].
10.1051/0004-6361/201118085
reboundias15
Rein, H.,
Spiegel, D.S.:
IAS15: a fast, adaptive, high-order integrator for gravitational
dynamics, accurate to machine precision over a billion orbits.
Mon. Not. Astron. Soc.
446(2),
1424–1437
(2015)
https://arxiv.org/abs/1409.4779arXiv:1409.4779
[astro-ph.EP].
10.1093/mnras/stu2164
|
http://arxiv.org/abs/2409.02816v2 | 20240904153502 | Simple fusion-fission quantifies Israel-Palestine violence and suggests multi-adversary solution | [
"Frank Yingjie Huo",
"Pedro D. Manrique",
"Dylan J. Restrepo",
"Gordon Woo",
"Neil F. Johnson"
] | physics.soc-ph | [
"physics.soc-ph",
"cs.CE",
"math-ph",
"math.MP",
"nlin.AO"
] |
Dynamic Online Networks Laboratory, George Washington University, Washington, DC 20052, U.S.A.
Dynamic Online Networks Laboratory, George Washington University, Washington, DC 20052, U.S.A.
International and Global Studies, Brandeis University, Waltham, MA 02453, U.S.A.
Moody's (Risk Management Solutions), London, EC3R 7BB U.K.
Dynamic Online Networks Laboratory, George Washington University, Washington, DC 20052, U.S.A.
^* corresponding author: [email protected]
§ ABSTRACT
Why humans fight has no easy answer. However, understanding better how humans fight could inform future interventions, hidden shifts and casualty risk. Fusion-fission describes the well-known grouping behavior of fish etc. fighting for survival in the face of strong opponents: they form clusters (`fusion') which provide collective benefits and a cluster scatters when it senses danger (`fission'). Here we show how similar clustering (fusion-fission) of human fighters provides a unified quantitative explanation for complex casualty patterns across decades of Israel-Palestine region violence, as well as the October 7 surprise attack – and uncovers a hidden post-October 7 shift. State-of-the-art data shows this fighter fusion-fission in action. It also predicts future `super-shock' attacks that will be more lethal than October 7 and will arrive earlier. It offers a multi-adversary solution. Our results – which include testable formulae and a plug-and-play simulation – enable concrete risk assessments of future casualties and policy-making grounded by fighter behavior.
§ INTRODUCTION
Human conflict and terrorism pose a seemingly intractable challenge for governments, humanitarian organizations, development agencies and academia, and have attracted a wealth of valuable in-depth studies from myriad disciplinary perspectives <cit.>.
Part of this challenge is the urgent need of the trillion-dollar insurance industry and others to better quantify the risk (i.e. probability) that a future conflict/terrorism event will produce a large number of casualties. This translates mathematically into understanding the tails of a casualty distribution. The major complication is that a large casualty event like Hamas's 7 October 2023 attack on Israel may never have happened before, or is so rare that the data are sparse. A more mechanistic understanding of how such violent events are generated could provide the insurance industry, policymakers and others with a concrete platform to explore what-if scenarios, potential impacts of interventions, and more rigorous counterfactual thinking <cit.>. Indeed, the need for innovation in this area is now so great that it has spawned the 2024 joint U.S. National Science Foundation-government-industry initiative on terrorism and catastrophic online/cyber risks <cit.>.
We are all familiar with footage showing how fish fight for survival in the face of strong opponents: they repeatedly cluster together
(`fusion') then a cluster breaks up (`fission') when it senses danger <cit.>. Fusion brings collective benefits such as aggregated strength and awareness, while total fission (scattering) can be an effective response to imminent danger. Fusion-fission is ubiquitous across timescales, environments, geographical locations and species <cit.>.
Here we show how similar fusion-fission among human fighter forces can explain patterns in the violence across the Israel-Palestine region. These fighter forces include Hamas/Al-Qassam Brigades, Palestinian Islamic Jihad (PIJ), Fatah, Hezbollah, Al-Aqsa Martyrs' Brigades, Houthis, Islamic State (IS), Al-Nusrah Front – all of which regard Israel as a strong opponent and all of which will typically need to adapt quickly in any fight to avoid annihilation <cit.>. This suggests a commonality of tactics that can then generate similar patterns in the violence, as we find. Our mathematical analysis focuses on the mesoscale fighter cluster dynamics and does not need to specify why they fight or individual-level identities, links or animosities, nor does the violence always need to directly involve Israel. We hence use the generic term `fighter force' instead of assigning labels like non-state army, terrorist organization, armed civilians, insurgency etc.; `overall fight' instead of war, small war, asymmetric war, conflict, civilian uprising, insurgency, terrorist campaign etc.; `fighter' instead of combatant, terrorist, armed civilian, non-state actor, insurgent, extremist, freedom fighter etc.; and `cluster' to denote some operationally cohesive unit of such fighters. As such, our findings can also help
deepen understanding of organizational behavior across categories of violence that can be hard to separate conceptually (e.g. terrorism vs. insurgency) <cit.>.
Our findings also suggest a counterintuitive policy takeaway: one of the world's most complex regions for violence and non-state armed actors (Israel-Palestine) may be among the simplest to understand in terms of how those fighters fight. Our quantitative results provide a rigorous foundation for discussions of future interventions, hidden shifts and casualty risk – and crucially, our mathematical approach and its results can be scrutinized and explored by any non-specialist via our plug-and-play fusion-fission simulation which requires no mathematical or coding knowledge:
<https://gwdonlab.github.io/netlogo-simulator/>
§ RESULTS
§.§ Fusion-fission empirical evidence
Akin to footage of fish, the snapshots in Fig. 1A show clusters of fighters (nodes) forming and breaking up. They use data from a state-of-the-art study of fighter behavior in the Provisional Irish Republican Army (PIRA), which fought rather successfully against a strong opponent (the U.K.) that labelled it a terrorist organization <cit.>. PIRA provides unique behavioral insight into fighter forces facing Israel <cit.> because its operational details often got copied <cit.>. Each fighter is a node, and their faction (brigade), role and skill are denoted by the node's shape, size, and color respectively.
Two fighters are linked if they partnered in a prior operation and/or are close as friends, relatives or by marriage: each link hence facilitates operational interactions and cohesion
<cit.>. Reference <cit.> confirms these links/interactions gave PIRA strong operational cohesion. Each emerging cluster (e.g. size s=8 in Fig. 1A) is a cohesive unit since its fighters are all directly or indirectly linked. A link says nothing about fighters' spatial proximity, since linked fighters can interact by phone or online and may create new links with distant fighters.
Despite their army name and brigade structure, PIRA therefore exhibits cluster dynamics in time. Even though each snapshot aggregates over a finite time window, these cluster dynamics are strikingly similar to our simple plug-and-play fusion-fission simulation.
Figure 1B shows these cluster dynamics at daily-scale resolution. This time-lapse snapshot of pro-IS fighters operating in the online communications space <cit.> reveals day-to-day fusion and total fission of clusters when in danger from security agencies. SI Fig. 1 illustrates how the individual fighter characteristics make them very different from casual anti-U.S. or anti-Israel etc. users <cit.>. Each link denotes a fighter (white node) being a member of a given online community (colored node) <cit.>. Studies show that an online community's members feel strong links of trust with each other <cit.>. Hence each online community is a single cluster of linked fighters (i.e. cohesive unit) akin to those in Fig. 1A. Interestingly, the U.K.'s 2024 violent uprisings also featured fighter (rioter) fusion-fission behavior online and offline
<cit.>: their opponents (U.K. authorities) later confirmed that this fusion-fission made the fighters more unpredictable and hence harder to fight against <cit.>.
§.§ Fusion-fission mathematics and its predictions
Our mathematics provides a mesoscale description of this cluster fusion-fission (Figs. 1A,B). The plug-and-play simulation shows it in real time.
This mathematics is quite general and works by performing cluster-level averages: hence it is agnostic to the microscale details of which fighters and links are in what clusters; the precise nature of each operationally relevant link (e.g. trust, duty or something else); whether fighters leave the fighter force or become casualties and others join, as long as the fighter force size N and its heterogeneity change slower than the fusion-fission rates <cit.>; and whether the fusion and fission of clusters is spontaneous, pre-meditated, self-organized or managed, and its root causes. For example, for a given set of fusion-fission rates the fusion between large clusters may be pre-meditated for strategic reasons, the fusion between medium-size clusters may be more tactical depending on how a fight is evolving, and the fusion of small clusters may be ad hoc – or any variant of these.
The starting equation is the rate of change of the number of clusters n_s that contain s fighters (size s=1,2 … N). This equals the number of new clusters of size s being created minus the number of existing clusters of size s being lost.
A cluster of size s=s_1+s_2 is created by fusion of two smaller clusters s_1 and s_2 (e.g. 6+5=11, Fig. 1C). A cluster of size s is lost either by its fusion with another cluster or by its fission (e.g. 11=1+1+…+1+1, Fig. 1C). The algebra only requires that the post-fission cluster fragments are small (< s_min), i.e. the fission does not have to be total.
If fighters' links/interactions are independent of the distance between them (e.g. unlimited online/phone use) a new link can emerge between any of the s_1 fighters in cluster 1 and any of the s_2 fighters in cluster 2, which fuses the two clusters to create a new cluster of size s_1+s_2. Hence the fusion rate depends on the product s_1 s_2. In the opposite case of links/interactions being limited to fighters who are near each other (e.g. clusters on a two-dimensional checkerboard with no online/phone use) the fusion rate dependence becomes s_1^1/2 s_2^1/2 because it only involves fighters on the cluster perimeters (clusters of size s_1 and s_2 have perimeters s_1^1/2 and s_2^1/2). Both cases have a pre-factor F: If the fighter force comprises only one adversarial species (e.g. Hamas fighters), F is a number that depends on those Hamas fighters' average heterogeneity; but with D>1 adversarial species (e.g. Hamas, PIJ etc. fighters) F becomes a D-dimensional matrix.
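The process described above can be sketched as a simple Monte Carlo model (a minimal analogue of the plug-and-play simulation, not the authors' NetLogo code): picking two random fighters selects a pair of clusters with probability proportional to s_1 s_2, which is the distance-independent fusion kernel, while occasional total fission shatters a size-biased cluster back into single fighters.

```python
import random

def fusion_fission(N=10_000, steps=500_000, nu_frag=0.05, seed=0):
    """Cluster-size dynamics for N fighters under fusion (rate ~ s1*s2)
    and occasional total fission.  Returns the final list of cluster sizes."""
    random.seed(seed)
    cluster_of = list(range(N))            # cluster label of each fighter
    members = {i: [i] for i in range(N)}   # label -> fighters in that cluster
    for _ in range(steps):
        i = random.randrange(N)            # size-biased choice of a cluster
        ci = cluster_of[i]
        if random.random() < nu_frag:      # total fission: the cluster scatters
            for f in members.pop(ci):
                cluster_of[f] = f
                members[f] = [f]
            continue
        j = random.randrange(N)            # second size-biased choice -> kernel ~ s1*s2
        cj = cluster_of[j]
        if ci != cj:                       # fusion of the two clusters
            for f in members[cj]:
                cluster_of[f] = ci
            members[ci].extend(members.pop(cj))
    return sorted((len(m) for m in members.values()), reverse=True)
```

For small nu_frag the resulting cluster-size distribution approaches the s^-5/2 form behind Prediction 1, and switching off fission altogether lets the largest cluster grow into the giant cluster of Prediction 2.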
Following reaction kinetics, the number of fighters s in a cluster is likely to determine the total number of casualties x in an event that involves that cluster: x can include fighters on either side and civilians, and it can be any constant multiple or fraction of s. Taking the rate at which clusters are involved in events as a constant (e.g. because every fighter needs a similar time to recover or re-equip, regardless of its cluster size) the distribution n_s will have the same mathematical form as the casualty distribution n_x, i.e. the number of events with x casualties.
This mathematics yields three key predictions for any overall fight in which a fighter force is undergoing fusion and occasional total (or near total) fission in its fight against some typically strong opponent. Each of these can be seen and explored visually using the plug-and-play simulation:
Prediction 1: The number of events n_x with x casualties will have an approximate power-law distribution x^-α where 2.0≤α≤ 2.5.
As proved mathematically in SI Eq. 37, α=2.5 when fighter links/interactions are distance independent (e.g. unlimited online/phone use), and α=5/2-1/2≡ 2.0 when fighters only form links/interact with other fighters nearby (e.g. no online/phone use). Intermediate cases will lie between these values. This Prediction 1 is exact for x≥ x_ min when the fighter force is large (N≫ 1). The plug-and-play simulation shows explicitly the 2.5 case. Prediction 1 has a remarkable robustness to mathematical variations <cit.> which means it should also be a reasonable approximation more generally.
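Prediction 1 can be confronted with casualty data using the standard maximum-likelihood estimator for a power-law tail (as in the statistical analyses cited in Methods); a minimal continuous-tail version is:

```python
import numpy as np

def tail_exponent(casualties, x_min):
    """Maximum-likelihood estimate of alpha for n_x ~ x^(-alpha), x >= x_min
    (continuous approximation), together with its standard error."""
    x = np.asarray([c for c in casualties if c >= x_min], dtype=float)
    alpha = 1.0 + x.size / np.sum(np.log(x / x_min))
    return alpha, (alpha - 1.0) / np.sqrt(x.size)
```

Prediction 1 is then the statement that the fitted α should fall between 2.0 and 2.5: towards 2.0 when fighters interact only with nearby fighters, and towards 2.5 when interactions are distance independent.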
Prediction 2: If fission becomes extremely infrequent, a single giant cluster of fighters will suddenly emerge at a time t_c which means high risk of a giant attack. Kinks in its growth curve mean that multiple adversarial species are involved and undergoing joint fusion-fission (e.g. Hamas with PIJ and Fatah). The term `giant' is used in Physics to mean that the largest cluster's size has become a significant fraction (G(t)) of the entire fighter population N <cit.>. Hence it suddenly appears as a surprise to any observer who does not know about the mesoscale cluster dynamics. Its mathematical `shock' shape is due to smaller but substantial clusters fusing together in quick succession <cit.>.
The SI Secs. 3 and 4 derive exact formulae for its onset time t_c and growth curve, but crudely t_c ≈ N/(2F).
Prediction 3: If multiple adversarial species (e.g. Hamas, PIJ and Fatah) undergo joint fusion-fission with strong couplings (i.e. links/interactions) between them, their separate giant clusters will pile together to create a single super-shock cluster – and hence attack – that will be stronger (more casualties) and arrive earlier than an attack like October 7.
§.§ Prediction 1 matches existing casualty data (Fig. 1C) and provides risk estimates for future casualties
In agreement with Prediction 1, Fig. 1C shows that all the casualty distributions from the Israel-Palestine region GED conflict and GTD terrorism data <cit.> (see Methods) follow an approximate power-law distribution with α values broadly across the predicted range 2.0≤α≤ 2.5. This result, and Prediction 1 itself, involve no cherry-picking or fine-tuning of parameters, e.g. N, F and the fusion/fission rates can have any values as long as N≫ 1 and fission is less frequent than fusion. The plug-and-play simulation shows the 2.5 limit emerging explicitly.
Approximate power-laws are known to arise for conflicts/terrorism and mechanisms have been proposed <cit.>. However, the comprehensive study in Ref. <cit.> showed that the empirical α values for conflicts/terrorism around the globe range from 1.37 to 5.21, i.e. they spread well outside Prediction 1's narrow range 2.0 ≤ α ≤ 2.5. Richardson's original result of α=1.7 across all wars also falls outside it <cit.>. Furthermore, no other theoretical model has predicted this same narrow range 2.0 ≤ α ≤ 2.5. This all suggests that Israel-Palestine violence represents a special subset of global violence, and that it is an archetypal example of fighter fusion-fission.
Prediction 1 also explains a hidden post-October 7 shift in the specific Israel-Hamas etc. data-point from α=2.6 to α=2.0 (Fig. 1C). Given that the post-October 7 violence became focused in Gaza, and hence on a grid-like battlefield, and that long-range communications <cit.> became risky for Hamas etc. as well as being curtailed by Israel, Prediction 1 implies a shift to α=2.0, exactly as observed in the empirical casualty data (Fig. 1C).
Furthermore, Prediction 1 explains why the datapoint for pro-IS fighters online sits near α=2.5 (blue box, Fig. 1C). Their online interactions are not restricted by geographical separation, hence Prediction 1 predicts α=2.5 as observed empirically. The fact that the IS online and offline α values are so similar, suggests strong online-offline operational and organizational interplay.
Prediction 1 also allows calculation of concrete risk estimates for future casualties in the Israel-Palestine region. For example, suppose the probability that a future event will produce x_0 casualties has been assessed as p. Prediction 1 shows the probability that it will instead produce f times more casualties is f^-α p. So if the chance of x_0=100 casualties is 10%, the chance of 1000 casualties is
10^-α × 10% = 0.1% for the case of nearby-fighter interactions, as currently in Gaza (i.e. α → 2.0). This is far higher than the value obtained under the standard Gaussian risk assumption, which would hence dangerously underestimate this risk.
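The scaling used in this worked example is a one-line calculation, included here only to make the arithmetic explicit (the function name is ours, not the paper's):

```python
def scaled_event_probability(p_ref, factor, alpha):
    """Probability of an event `factor` times larger than a reference event of
    probability p_ref, under the power-law n_x ~ x^(-alpha) of Prediction 1."""
    return p_ref * factor ** (-alpha)

scaled_event_probability(0.10, 10, 2.0)   # 0.001, i.e. the 0.1% quoted in the text
```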
§.§ Prediction 2 matches October 7 fighter data (Fig. 2)
Prediction 2 matches fighters' collective behavior online around October 7: specifically, the massive growth in membership of anti-Israel military wing communities on Telegram (Fig. 2) which attract a diverse set of fighters akin to SI Fig. 1, i.e. they are not casual online users. The mathematical fit suggests t_c is October 6 (see SI Sec. 1.5), i.e. a giant cluster of fighters surfaced the day before the attack, leaving the subset who were physically nearby several hours for the logistics of advancing into Israel. Figure 2A's inset shows evidence of smaller-scale fighter fusion prior to this giant cluster emergence, which is consistent with the cluster fusion-fission mathematics and can also be seen explicitly in the plug-and-play simulation.
Prediction 2 also reproduces the details of the growth kinks, as shown in Fig. 2B, which plots Fig. 2A's rate of change. It implies that the number of adversarial species that participated in the giant cluster, and hence in the October 7 attack, was at least three, i.e. D ≥ 3.
A simple – but misleading – takeaway would be that the October 7 attack happened because Israel stopped generating effective fission events. However, the good fit for D≥ 3 adversarial species in Fig. 2 suggests the answer is more complex: fighters from a minimum of three adversaries (e.g. Hamas, PIJ and others) were undergoing fusion together in the same way at the same time. This explains why a multi-adversary attack force could so easily assemble, i.e. clusters could easily slot into each other's activity. It also explains why this multi-adversary assembly would have been missed by single-adversary surveillance. Eye-witness accounts in SI Sec. 1.4 provide additional independent support for this takeaway of multi-adversary fusion of fighters.
§.§ Prediction 3: future super-shock clusters and attacks (Fig. 3)
Figure 3 shows how Prediction 3's super-shock cluster, and hence a likely attack, emerges at large values of the couplings between adversarial species (i.e. many links/interactions). It also shows that an October 7-like attack would have been progressively more lethal and would have occurred earlier had these couplings been larger. The plug-and-play simulation shows visually how the super-shock arises: giant clusters from each of the adversarial species pile up together. A key takeaway for policymakers is that any future such super-shock attack will not be attributable to a single adversarial species (e.g. Hamas).
The super-shock formation time and hence earliest start of a super-shock attack approximates to:
t_c^super-shock = [ f / ( f + (D-1)ϵ ) ] t_c^Oct 7     (1)
given equal couplings ϵ>0 between different adversarial species and equal couplings f>0 within each species. Here t_c^Oct 7 is the formation time for an October 7-like attack. SI Secs. 3 and 4 derive exact formulae. Equation 1 guarantees t_c^super-shock < t_c^Oct 7, and hence the super-shock attack will arrive increasingly early as the number of adversarial species D increases or as the interactions ϵ between them increase.
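Equation 1 is easy to explore numerically; a minimal helper (function and parameter names chosen here for readability, not taken from the paper) is:

```python
def super_shock_onset(t_oct7, D, eps, f=1.0):
    """Equation (1): onset time of the multi-adversary super-shock cluster for
    D adversarial species with intra-species coupling f and inter-species
    coupling eps, relative to the single-species onset time t_oct7."""
    return t_oct7 * f / (f + (D - 1) * eps)
```

For example, with D = 3 and eps = f the onset time drops to one third of t_c^Oct 7, consistent with the statement that the super-shock arrives increasingly early as D or ϵ grows.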
The key takeaway for policymakers is that the likelihood of a super-shock attack is decreased by interventions that decrease the links/interactions between adversarial species ϵ, and that this decrease can be estimated from Eq. 1 and explored explicitly using the plug-and-play simulation.
The SI Sec. 5 proves mathematically that globalized (i.e. multi-adversary) surveillance and intervention is indeed the best mitigation strategy for controlling fighter buildup. This can be verified visually using the plug-and-play simulation.
§ DISCUSSION
Our findings present a unified quantitative description and explanation of Israel-Palestine region violence.
They hence complement the rich body of existing work in conflict studies which addresses the politics, religion, history, economics, ideology and human psychology that undoubtedly play a key role in other aspects.
Our `crude look at the whole' <cit.> allows us to calculate the consequences in a rigorous way, with the good empirical agreement (Figs. 1C, 2B) suggesting that the net effect of the many missing details cancels out to some degree. But the question of why this is so will require further study.
Our findings also establish a concrete connection to fusion-fission studies across the animal kingdom <cit.> which suggests that the task of finding a lasting solution to human violence could benefit directly from combining their insights with those of conflict studies experts.
§ METHODS
We provide an online plug-and-play simulation of the paper's fusion-fission mathematics and its consequences including the Predictions 1,2 and 3, at <https://gwdonlab.github.io/netlogo-simulator/> which can be accessed and used in full by anyone, anytime using any browser – including on a smartphone. It requires no coding or mathematical knowledge. It shows the fusion-fission process of single or multiple adversarial species, and it allows visual exploration of how the mathematical results and predictions arise and what they mean. It can also be used to explore interventions, run what-if-scenarios, and investigate their consequences. It is self-explanatory but SI Sec. 1.7 provides a brief starter manual in case useful.
The fighter data in Fig. 1A are from the state-of-the-art study led by John Horgan and Paul Gill (which lists N.F.J. as co-contributor) and are used with their kind permission
<cit.>. The fighter data in Fig. 1B are from the state-of-the-art study of online fighter behavior led by N.F.J. and first reported in Ref. <cit.>. These references and SI Sec. 1 contain detailed discussions of how these data were collected etc. and some further examples of the fighters themselves.
The conflict and terrorism data in Fig. 1C are from the Georeferenced Event Dataset (GED <https://ucdp.uu.se/downloads/>) and the Global Terrorism Database (GTD <https://www.start.umd.edu/gtd/>) respectively, as analyzed and discussed in depth in Ref. <cit.> which was co-authored by N.F.J. Ref. <cit.> also provides full replication code and statistical testing. It gives the error bars (standard deviation) in the α values and shows that these are typically small (hence we do not give them and instead refer to Ref. <cit.> for these values); and it gives the statistical goodness-of-fit values for the power-laws
and shows these are typically significant/large (hence again we refer to Ref. <cit.> for these values).
The dataset label `Israel: Palestine' from Ref. <cit.> was too vague and hence has been specified more precisely in Fig. 1C as `Israel: Hamas, PIJ, Fatah etc.'. Goodness-of-fit values for all the empirical power-laws are generally high (see Ref. <cit.> for details). The goodness-of-fit value goes down for the post-October 2023 data-point featured in Fig. 1C but this is understandable given the preliminary form of the casualty data for the current war: see SI Sec. 1.6 for evidence of the crude estimation scheme being used for current Gaza casualties, specifically the tendency to report to the nearest factor of 10. Our findings are unchanged if the casualty data are a systematic underestimate or overestimate by some factor, since taking the logarithm of a power-law distribution just means that factor adds to the intercept – and α does not essentially depend on the intercept.
Figure 2 data comes from militant communities on Telegram whose data is publicly available (see <https://ir.tgstat.com>). It shows the increase in members. The small, steady member attrition (e.g. moving to the Telegram equivalent of a private WhatsApp group, or un-joining) is not included because it is so small, steady, and has not changed significantly for years.
The underlying mathematics is – given its fusion-fission starting point (Sec. IIB) – algebra that is agnostic of topic and sociopolitical labels, and is 100% reproducible using undergraduate skills without further debate. Hence we place it in the SI to avoid disrupting the flow of the main paper. It is written out in substantial detail in the SI so that anyone interested can see the rigorous foundations and then be taken through step-by-step.
§ DATA AVAILABILITY
Data are publicly available as discussed in Methods, e.g. from Ref. <cit.>, GED and GTD websites, and Telegram. The personal identities of the PIRA fighters in Fig. 1A, the pro-IS fighters in Fig. 1B, and the anti-Israel fighters in Fig. 2, are not available but nor are they needed or used since our study's methodology and results only deal with the clusters that they form.
§ CODE AVAILABILITY
The plug-and-play software for readers to scrutinize our fighter fusion-fission findings and explore interventions, is given online to use directly using any browser (<https://gwdonlab.github.io/netlogo-simulator/>). The code itself is at the same link (see tabs at bottom of webpage) together with a full explanation. Generic instructions for the underlying NetLogo machinery are publicly available through the NetLogo website (<https://ccl.northwestern.edu/netlogo/>) but the plug-and-play format itself is self-explanatory. SI Sec. 1.7 provides a brief starter manual. We are very grateful to Akshay Verma for helping set up this simulation.
No special code was used to generate the results or the figures. The power-law results and code are given in Ref. <cit.> and on various other publicly available websites. Any type of open-source plotting software can be used that plots the equation solutions given in the SI.
99
Israel1 M. Kaldor.
New and Old Wars: Organized Violence in a Global Era.
Stanford University Press, 2012.
ISBN: 9780804747877.
Israel4 L. Richardson.
The Roots of Terrorism.
New York: Routledge, 2006.
ISBN: 9780415954389.
H4 J. Zartman. Conflict in the Modern Middle East: An Encyclopedia of Civil War, Revolutions, and Regime Change. ABC-CLIO, 2020. ISBN 1440865027
H7 T. May. A Quick Look at Hamas. The New York Times (2023). Archived from the original on 14 October 2023. Retrieved 9 October 2023. <https://www.nytimes.com/2023/10/08/world/middleeast/hamas-military-gaza-explained.html>
H8 C. Edwards. Have war crimes been committed in Israel and Gaza and what laws govern the conflict?. CNN. 16 November 2023. Archived from the original on 16 November 2023. Retrieved 18 November 2023. <https://www.cnn.com/2023/11/16/middleeast/israel-hamas-gaza-war-crimes-international-law-explainer-intl/index.html>
Richardson L.F. Richardson. Statistics of Deadly Quarrels. Boxwood Press (January 1, 1960)
ISBN-10 0910286108. ISBN-13 978-0910286107
Mackay1
B. Fagan, I. Horwood, N. MacKay, C. Price, A.J. Wood. Quantifying Counterfactual Military History.
Chapman and Hall, 2023. ISBN 9781138592384
Mackay2
B.T. Fagan, N.J. MacKay, A.J. Wood. Robustness of steady state and stochastic cyclicity in generalized coalescence-fragmentation models. The European Physical Journal B 97, 21 (2024)
Mackay3 N.J. MacKay. When Lanchester met Richardson, the outcome was stalemate: A parable for mathematical models of insurgency. Journal of the Operational Research Society 66, 191–201 (2015). <https://doi.org/10.1057/jors.2013.178>
Gutfraind
A. Gutfraind and M. Genkin. A Graph Database Framework for Covert Network Analysis: An Application to the Islamic State Network in Europe. Social Networks, vol. 51, pp. 178–188, October 2017.
Johnson
D.D.P. Johnson.
Darwinian selection in asymmetric warfare.
<https://www.cl.cam.ac.uk/ rja14/shb10/johnson.pdf>
Johnson2
D.D.P. Johnson.
Strategic Instincts: The Adaptive Advantages of Cognitive Biases.
Princeton University Press, 2020.
Peacock
T.N. Peacock. Cromwell’s `Spymaster'? John Thurloe and Rethinking Early Modern Intelligence. The Seventeenth Century 35, 3–30 (2018)
Spagat2
M. Spagat. The violent death toll from the Iraq War: 2003–2023. PLoS ONE 19, e0297895 (2024)
Slaughter A.M. Slaughter. The Chessboard and the Web: Strategies of Connection in a Networked World. Yale University Press, 2017. ISBN 9780300215649.
Wrangham1
R.W. Wrangham. Two types of aggression in human evolution. PNAS 115, 245–253 (2018)
Wrangham2
J. Manson, R. Wrangham, J. Boone, B. Chapais, R. Dunbar, C. Ember, W. Irons, L. Marchant, W. McGrew, T. Nishida, J. Paterson, E. Smith, C. Stanford, and C. Worthman. Inter-group aggression in chimpanzees and humans. Curr.
Anthropol. 32, 369 (1991).
Epstein J.M. Epstein. Nonlinear Dynamics, Mathematical Biology, and Social Science.
CRC Press, 1997. ISBN 9780201419887
Kertesz A. Alrhmoun, C. Winter, J. Kertész. Automating Terror: The Role and Impact of Telegram Bots in the Islamic State’s Online Ecosystem. Terrorism and Political Violence. Feb 7 (2023). <https://doi.org/10.1080/09546553.2023.2169141>
Gill I. Van der Vegt, P. Gill, and B. Kleinberg. Online influence, offline violence. J. Comp. Social Sci. 4, 333 (2021)
HH
A. Soulier and T. Halpin-Healy, The Dynamics of Multi-dimensional Secession: Fixed Points and Ideological Condensation, Phys. Rev. Lett. 90, 258103 (2003).
Idriss C. Miller-Idriss, Hate in the Homeland (Princeton University Press, 2022).
Overton
I. Overton. The Price of Paradise: How the Suicide Bomber Shaped the Modern World. Quercus (4 April 2019)
Horwood
I. Horwood, C. Price. A Fundamental Weapon: The Transatlantic Air Power Controversy of the Early 1920s and the US Navy as a Learning Organisation. Journal of Transatlantic Studies, 19, 4-26 (2021)
sergey S. Gavrilets. Collective action and the collaborative brain. J. R. Soc. Interface 12, 20141067 (2015).
Lazer D. Lazer et al. Computational Social Science. Science 323, 721–723 (2009).
Axtell J.M. Epstein, R.L. Axtell.
Growing Artificial Societies: Social Science from the Bottom Up.
MIT Press, 1996.
ISBN: 9780262550259.
Kalyvas S.N. Kalyvas. The Logic of Violence in Civil War. Cambridge Studies in Comparative Politics. Cambridge University Press, 2006. ISBN: 978-0521670043. <https://www.cambridge.org/core/books/logic-of-violence-in-civil-war/1845F0DCCA285AC71BBED83F416F42E6>
Cederman L-E. Cederman. Modeling the Size of Wars: From Billiard Balls to Sandpiles. American Political Science Review. 97, 135-150 (2003) <doi:10.1017/S0003055403000571>
singh J. P. Singh. Diffusion of Power and Diplomacy: New Meanings, Problem Solving, and Deadlocks in Multi-lateral Negotiations, International Negotiation 20, 73 (2015).
Strogatz S. H. Strogatz. Nonlinear Dynamics and Chaos. 3rd Edition. CRC Press, 2024.
Kress M. Kress, J.P. Caulkins, G. Feichtinger, D. Grass, A. Seidl.
Lanchester model for three-way combat.
European Journal of Operational Research,
264, 46 (2018).
ISSN 0377-2217.
<https://doi.org/10.1016/j.ejor.2017.07.026.>
Science2011 N.F. Johnson et al. Pattern in Escalations in Insurgent and Terrorist Activity. Science 333, 81-84 (2011). DOI:10.1126/science.1205068
Tivnan B. Tivnan. Coevolutionary Dynamics and Agent-based Models in Organization Science.
Proceedings of the 37th Winter Simulation Conference, Orlando, FL, USA, December 4-7, 2005.
DOI:10.1109/WSC.2005.1574353
Danforth E. W. McGinnis, S. Lunna, I. Berman, S. Bagdon, G. Lewis, M. V. Arnold, C. M. Danforth, P. S. Dodds, M. Price, W. E. Copeland, R. S. McGinnis. Expecting the Unexpected. IEEE Open Journal of Engineering in Medicine and Biology 5, 14-20 (2024). doi: 10.1109/OJEMB.2024.3354208.
irwin
C. Irwin. Gaza war: a ceasefire depends on a leap of faith from both sides – Northern Ireland showed us how.
Strong
S. Marble, P. Strong. Artillery in the Great War.
Pen and Sword Military (2011)
Storr
J. Storr.
Something Rotten: Land Command in the 21st Century.
Howgate Publishing Limited (2022)
common A. Moghadam, R. Berger, P. Beliakova. Say Terrorist, Think Insurgent: Labeling and Analyzing Contemporary Terrorist Actors. Perspectives on Terrorism. 8, 2–17 (2014)
Hossack
A. Hossack.
Historical Analysis of Terrorist Campaigns with observations on current operations in Iraq.
<http://www.ismor.com/cornwallis/cornwallis_2004/2004_24Hossack-9-Oct.pdf>
Russell
T. Russell. The Territorial Force: lessons for force design and generation. HADSS 2024. York, U.K.
Coulson
S. Coulson.
Lanchester modelling of intelligence in combat.
IMA Journal of Management Mathematics 30, 2, 149–164 (2019) <https://doi.org/10.1093/imaman/dpx014>
Robinson
M. Flanagan, D. Lambert, T.C. Lipscombe, A. Northey, I.M. Robinson.
Lanchester’s Fighting Strength as a Battle Outcome Predictor Applied to a Simple Fire and Manoeuvre Wargame.
<https://www.researchgate.net/publication/379398241_Lanchester's_Fighting_Strength_as_a_battle_outcome_predictor_applied_to_a_simple_fire_and_manoeuvre_wargame>
DOI:10.5772/intechopen.1002384
Lucas
T. Lucas, T. Turkes. Fitting Lanchester Equations to the Battles of Kursk and Ardennes. Naval Research Logistics, 51, February 2004, pp. 95-116.
Hernandez
A.S. Hernandez, J.R. Hummel, G.T. Vaucher, J.A. Bell, M.C. Petri. Operational Energy, Powering the National Security Community. Phalanx, 1/SPRING 2022(55), pp. 38-42 (2022)
Armstrong
M.J. Armstrong.
The Effectiveness of Rocket Attacks and Defenses in Israel.
Journal of Global Security Studies, Volume 3, Issue 2, April 2018, Pages 113–132 (2018). <https://doi.org/10.1093/jogss/ogx028>
Berger J.M. Berger and H. Perez. The Islamic State’s Diminishing Returns on Twitter. 2016. <https://extremism.gwu.edu/sites/g/files/zaxdzs5746/files/downloads/JMB
IUCRC IUCRC Proposals for Research and Thought Leadership on Insurance Risk Modeling and Underwriting Related to Terrorism and Catastrophic Cyber Risks, April 24 2024 <https://www.nsf.gov/pubs/2024/nsf24082/nsf24082.jsp?WT.mc_ev=click WT.mc_id= utm_medium=email utm_source=govdelivery>
Gordon G. Woo. Calculating Catastrophe. Imperial College Press, 2011
GordonNeil G. Woo, N.F. Johnson. Stochastic Modelling of Possible Pasts to Illuminate Future Risks. The Oxford Handbook of Complex Disaster Risks and Resilience, Oxford University Press (2023) doi.org/10.1093/oxfordhb/9780190466145.013.12
Gordon2 G. Woo.
Downward Counterfactual Search for Extreme Events
Front. Earth Sci. 7, 340 (2019). doi: 10.3389/feart.2019.00340
Couzin
I.D. Couzin, M.E. Laidre. Fission-fusion populations. Current Biology 19, PR633-R635, (2009)
Levin S. Gueron, S.A.
Levin.
The dynamics of group formation.
Math. Biosci. 128, 243-264 (1995)
Aureli F. Aureli et al. Fission-fusion dynamics: New research frameworks. Current Anthropology 49, 627–654 (2008)
Stijn M. Spagat, N.F. Johnson, S. van Weezel. Fundamental patterns and predictions of event size distributions in modern wars and terrorist campaigns. PLOS ONE, https://doi.org/10.1371/journal.pone.0204639 (2018)
B2B J. Horgan et al.
From Bomb to Bomb-maker: A Social Network Analysis of the Socio-Psychological and Cultural Dynamics of the IED Process. Final Report.
Office of Naval Research Code 30. See also <https://www.psu.edu/news/research/story/international-center-study-terrorism-focuses-latest-research-improvised-explosive/>
Gill2 P. Gill, J. Lee, K. Rethemeyer, J. Horgan, V. Asal. Lethal connections: The determinants of network connections in the Provisional Irish Republican Army, 1970–1998. International Interactions 40, 52-78 (2014)
White R. White. Out of the Ashes: An Oral History of the Provisional Irish Republican Movement. Merrion Press. ISBN 9781785370939 (2017)
Frampton M. Frampton.
The Long March: The Political Strategy of Sinn Fein, 1981-2007.
Palgrave Macmillan, 2009.
ISBN: 9780230220938.
IRAPLO S. Gannon. IRA-PLO cooperation: A long, cozy relationship. April 7, 2009. The Jerusalem Post.
<https://www.jpost.com/opinion/op-ed-contributors/ira-plo-cooperation-a-long-cozy-relationship>
groups Flashpoint October 2023. Militant and Terrorist Groups Involved in the October 7 Attack on Israel. <https://flashpoint.io/blog/israel-hamas-war-military-and-terrorist-groups/>
PRL2023 P. D. Manrique, F. Y. Huo, S. El Oud, M. Zheng, L. Illari, N. F. Johnson. Shockwave-like behavior across social media. Phys. Rev. Lett. 130, 237401 (2023)
Science2016 N. F. Johnson, M. Zheng, Y. Vorobyeva, A. Gabriel, H. Qi, N. Velásquez, P. Manrique, D. Johnson, E. Restrepo, C. Song et al., New online ecology of adversarial aggregates: Isis and beyond, Science 352, 1459 (2016)
onlinetrust T. Ammari, S. Schoenebeck, Thanks for your interest in our Facebook group, but it’s only for dads, in Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work and Social Computing (2016), <10.1145/2818048.2819927>.
bbc1 D. De Simone. Riots show how the UK's far right has changed. BBC News August 20, 2024 <bbc.com/news/articles/c74lwnxxxzjo>
bbc2
Briefing: IS supporters turn to Facebook amid Telegram crackdown. BBC Monitoring. May 28, 2024. <https://monitoring.bbc.co.uk/product/b0001pbp>
ben B. Ruszczycki, Z. Zhao, B. Burnett, N. F. Johnson. Relating the microscopic rules in coalescence-fragmentation models to the cluster-size distribution. European Physical Journal 72, 289 (2009)
scirep2013
N.F. Johnson et al. Simple mathematical law benchmarks human confrontations. Sci Rep 3, 3463 (2013). https://doi.org/10.1038/srep03463
Nature2009
J.C. Bohorquez et al. Common ecology quantifies human insurgency. Nature 462, 911–914 (2009).
math W. H. Stockmayer. Theory of
molecular size distribution and gel formation in branched-chain polymers. J. Chem. Phys. 11, 45 (1943)
Ziff R. M. Ziff, E. M. Hendriks, and M. H. Ernst. Critical Properties for Gelation: A Kinetic Approach. Phys. Rev. Lett. 49, 8 (1982)
Oxford K. Tkacova, A. Idler, N.F. Johnson, E. Lopez. Explaining conflict violence in terms of conflict actor dynamics. Sci. Rep. 13, 21187 (2023)
Clauset
A. Clauset, M. Young, K.S. Gleditsch. On the frequency of severe terrorist events. J. Confl. Resolut. 51, 58–87 (2007)
Dylan1 M. Spagat, S. van Weezel, D.J. Restrepo, M. Zheng, N.F. Johnson. Unifying casualty distributions within and across conflicts. Heliyon 6 e04808 (2020)
Dylan2 D.J. Restrepo, M. Spagat, S. van Weezel, M. Zheng, N.F. Johnson. A computational science approach to understanding human conflict. Journal of Computational Science. Online 3 February (2020), doi.org/10.1016/j.jocs.2020.101088
phone
P. Brown, Z. Cohen. Hamas operatives used phone lines installed in tunnels under Gaza to plan Israel attack over 2 years, sources familiar with intelligence say. CNN News. 25 October 2023.
<https://edition.cnn.com/2023/10/24/politics/intelligence-hamas-israel-attack-tunnels-phone-lines/index.html>.
Gellmann M. Gell-Mann. A Crude Look at the Whole. Nanyang Technological University, Singapore, 2013. <https://www.paralimes.org/2018/07/transcript-of-a-crude-look-at-the-whole-by-murray-gell-mann/>
complex J. H. Miller. A Crude Look at the Whole: The Science of Complex Systems in Business, Life, and Society. Basic Books, 2016.
§ ACKNOWLEDGEMENTS
N.F.J. is supported by U.S. Air Force Office of Scientific Research awards FA9550-20-1-0382 and FA9550-20-1-0383 and The Templeton Foundation.
§ AUTHOR CONTRIBUTIONS
F.Y.H, G.W. and N.F.J. conceived the empirical and theoretical connections to form the paper. F.Y.H., P.M., D.J.R. and N.F.J. conducted the analysis. All authors analyzed the results. All authors reviewed the manuscript.
§ COMPETING INTERESTS
The authors have no competing financial and/or non-financial interests in relation to the work described.
§ SUPPLEMENTARY INFORMATION (SI)
Supplementary Information (SI) is available for this paper.
Correspondence and requests for materials should be addressed to N.F.J. ([email protected])
> |
http://arxiv.org/abs/2409.03566v1 | 20240905142116 | Quantum gravity effects on particle creation and evaporation in a non-commutative black hole via mass deformation | [
"A. A. Araújo Filho",
"N. Heidari",
"Ali Övgün"
] | gr-qc | [
"gr-qc",
"hep-th"
] | |
http://arxiv.org/abs/2409.02308v1 | 20240903213731 | Black holes of type D revisited: relating their various metric forms | [
"Hryhorii Ovcharenko",
"Jiri Podolsky",
"Marco Astorino"
] | gr-qc | [
"gr-qc",
"hep-th"
] |
LIFT–7-2.24
[email protected], [email protected]
Charles University, Faculty of Mathematics and Physics,
Institute of Theoretical Physics,
V Holešovičkách 2, 18000 Prague 8, Czechia
[email protected]
Charles University, Faculty of Mathematics and Physics,
Institute of Theoretical Physics,
V Holešovičkách 2, 18000 Prague 8, Czechia
[email protected]
Laboratorio Italiano di Fisica Teoretica (LIFT),
Via Archimede 20, I-20129 Milano, Italy
§ ABSTRACT
We investigate a complete family of spacetimes which represent black holes with rotation, NUT twist, acceleration, electric and magnetic charges. These are exact solutions of the Einstein-Maxwell equations with any cosmological constant, such that the (non-null) electromagnetic field is aligned with both the double-degenerate principal null directions of the Weyl tensor. In particular, we explicitly relate various coordinates and the corresponding physical parameters of such solutions, namely the original Plebański-Demiański (PD) form, the convenient Astorino (A) form, which was found recently and is formally improved here (A^+), the Griffiths-Podolský (GP), and Podolský-Vrátný (PV) form of the metric. It is demonstrated that, if properly mapped and physically interpreted, all these representations cover the complete class of type D black holes. Using the new A-parameters, the two main PD quartic metric functions are factorized into the product of quadratic expressions, thus enabling an explicit analysis. Moreover, we clarify the role of the twist parameter ω, related to both the Kerr-like rotation and the NUT parameters a and l, respectively. Special attention is paid to the elusive subclass of accelerating NUT black holes with a=0.
04.20.Jb, 04.40.Nr, ...
Black holes of type D revisited:
relating their various metric forms
Marco Astorino
September 9, 2024
======================================================================
§ INTRODUCTION
The aim of this article is to elucidate mutual relations between various metric representations of a large family of solutions in the Einstein-Maxwell theory (with or without the cosmological constant) which belong to the type D of the Petrov-Penrose classification. In particular, we focus on stationary and axisymmetric solutions (i.e., with a couple of commuting Killing vectors ∂_t and ∂_φ) such that two expanding repeated principal null directions of the Weyl tensor are both aligned with the two principal null directions of the electromagnetic field.
Such a family of spacetimes is relevant because it describes the most renowned black holes in general relativity, starting from the static and spherically symmetric metrics, such as the Schwarzschild line element, the C-metric describing accelerating black holes, the Newman-Unti-Tamburino twisting spacetime, to a general stationary rotating, accelerating, and charged Kerr-Newman solution <cit.>. This class is often identified with the Plebański-Demiański family <cit.> (but see also the earlier work of Debever <cit.>), subsequently investigated in detail in <cit.>.[Actually, this is the subclass of all type D spacetimes for the theory under consideration. Other solutions, including non-expanding cases, were studied in a number of works. See, e.g., the review <cit.>.]
Recently a large class of such type D solutions was systematically investigated by means of the solution generating technique, and a nice new metric form was thus obtained <cit.>. This novel spacetime representation has the advantage to directly contain the limits to all the subcases of type D black holes contained in the general Plebański-Demiański solution, including also the peculiar accelerating solutions with (just) the NUT parameter, which was previously considered to exist only outside the type D class <cit.>, <cit.>. It came as a surprise because the only accelerating black holes with NUT parameter known before <cit.>, namely the Chng-Mann-Stelea metric <cit.> investigated in <cit.>, were of a general algebraic type I (see <cit.> for the rotating and charged generalization).
This discovery of a novel general form of type D metric, which comprises accelerating black holes with NUT parameter, naturally opens the way to questions about the actual generality of the Plebański-Demiański metric and its different parameterizations, namely:
* Is the Plebański-Demiański solution the most general black hole spacetime of type D, or is the metric presented in <cit.> its extension?
* Might the metric of <cit.> be just another equivalent reparametrization of the Plebański-Demiański metric, but more suitable for description of all type D black hole specializations?
* What is the relation between the new spacetime of <cit.> and the type D metrics known so far in the literature, such as those in <cit.>?
* Why have the accelerating NUT black holes not been explicitly identified in previous works <cit.>?
* Which is the more appropriate/convenient parametrization to describe the physical and geometrical properties of the whole class of accelerating Kerr-Newman-NUT black holes?
In our paper we address these open questions. In Section <ref> we start by revisiting the solution of <cit.>, denoted here as A, putting it into a simpler metric form which we will denote A^+. In subsequent Section <ref> the transformation from the Astorino metric to the original Plebański-Demiański coordinates, together with explicit relation of the physical A and A^+ parameters to the PD integration constants, is presented (full details can be found in Appendix <ref>). This leads to a factorized form of the PD metric functions, and their simplification for various special cases. Transformation to the Griffiths-Podolský form of this family of black-hole spacetimes is presented in Section <ref>, and their special cases are discussed in Section <ref>, after elucidating the role of the twist parameter ω and clarifying the physical dimensionality of the parameters. Relation to the Podolský-Vratný metric representation is contained in Section <ref>. Section <ref> summarizes and compares the key special cases in A, PD, GP, and PV metric forms, followed by concluding remarks in Section <ref>.
For convenience of the reader, we summarize our nomenclature and conventions for A, PD, GP, and PV coordinates and parameters in Table <ref>, together with references to original articles. These metrics admit any value of the cosmological constant Λ, but in this paper we only investigate black hole spacetimes with Λ=0.
Mutual relations between all these metric forms are shown in the scheme on Figure <ref>. Particular connections are presented in full detail in Sections which are indicated by the corresponding arrows between them. Namely, the double-arrows show equivalence proven already in previous works, the single-arrows are the equivalences proven in this paper.
§ THE ASTORINO MOST GENERAL METRIC FORM
In 2024, Astorino <cit.> presented a very convenient explicit metric for the most general type D black hole (with a cosmological constant Λ and doubly-aligned electromagnetic field) in the form[We have relabeled the parameters, namely α to -α, l to -l, and p to g.]
ds^2=-f(r,x) [ dt-ω(r,x) dφ]^2+(1/f(r,x))[
e^2γ(r,x)( dr^2/Δ_r(r)+ dx^2/Δ_x(x))
+ ϱ^2(r,x) dφ^2],
where the metric functions are
f= ( [1+α^2(l^2-a^2)x^2]^2 Δ_r
-[a+2α l r+α^2a r^2]^2 Δ_x ) / ( (1-α r x)^2 ρ^2 ),
ω= ( (a+2l x+a x^2)[1+α^2(l^2-a^2)x^2] Δ_r
+(r^2+l^2-a^2)(a+2α l r+α^2a r^2) Δ_x ) / ( [1+α^2(l^2-a^2)x^2]^2 Δ_r
-[a+2α l r+α^2a r^2]^2 Δ_x ),
e^2γ= C_f ( [1+α^2(l^2-a^2)x^2]^2 Δ_r-(a+2α l r+α^2 a r^2)^2 Δ_x ) / (1-α r x)^4,
ϱ^2= Δ_r Δ_x / (1-α r x)^4,
Δ_r= (1-α^2r^2)[(r-m)^2 - (m^2+l^2-a^2-e^2-g^2)]
- (Λ/3) ( (3l^2/(1+α^2 a^2)) r^2
- (4α a l/(1+α^2 a^2)) r^3 + r^4 ) ,
Δ_x= (1-x^2)[(1-α m x)^2-α^2x^2(m^2+l^2-a^2-e^2-g^2)]
- (Λ/3) ( (3l^2/(1+α^2 a^2)) x^2
+(4al/(1+α^2 a^2)) x^3 + ((a^2+α^2(a^2-l^2)^2)/(1+α^2 a^2)) x^4 ) ,
and ρ^2 in the numerator of (<ref>) is defined as
ρ^2= (l+a x)^2 + 2α l (a+2l x+a x^2) r + α^2(a^2-l^2)^2 x^2 + [1+α^2(a+l x)^2 ] r^2 .
It was argued in <cit.> that m is the mass parameter, a denotes rotation, l is the NUT parameter, α is acceleration, e and g are electric and magnetic charges,[More precisely, they are directly related to physical conserved charges only in some special cases, not in the general case.] and C_f is an additional normalization constant (which could be related to conicity).[Here, without loss of generality, we have set the auxiliary constant parameters ϵ, κ, ξ, χ which appear in Eq. (2.7) in <cit.> to zero. See also the Wolfram Mathematica Notebook in Supplementary Material.] Here we naturally assume that m, a, α are positive (or zero), while l, e, g, Λ can take any value.
We should also clarify that these quantities have usual physical dimensions, namely m, a, l, e, g have the dimension of length, α and √(Λ) are inverse length, and C_f is dimensionless. The coordinates t, r have dimension of length while x, φ are dimensionless. Consequently, Δ_x, f are dimensionless, Δ_r, ϱ^2, ρ^2, e^2γ have the dimension of length squared, and ω(r,x) has the dimension of length.
The electromagnetic field is given by the vector potential A(r,x)=A_t dt + A_φ dφ with
A_t = √(1+α^2(a^2-l^2)/1+α^2a^2) 1/ρ^2[ [ g (r - α a l + α^2 a^2 r) + e l ] r
+ l (g+α a e) (a-2 α l r+α^2a r^2) x
+(a - α l r)[ ag + α l(el-gr) + α^2a [g(a^2-l^2)-el r] ] x^2]
+ g/l ,
A_φ = -√(1+α^2(a^2-l^2)/1+α^2a^2) 1/ρ^2 x/l[(1+α^2a^2)[ ae(x-α r) r - gal x + g(r+α a^2x) r]
- α l^3 (e-α ag)x + l^2[α^2a(e-α a g) xr - (g+α a e)] ] - (a+ω_c) A_t ,
where ω_c is a constant (for the choice ω_c=-a the last term vanishes).
The apparently problematic terms l^-1 are included to remove the divergencies in the limit l → 0.
Now, to simplify the metric (<ref>)–(<ref>), it is useful to define the following functions
A(x) := 1 + α^2(l^2-a^2) x^2,
B(x) := a + 2l x + a x^2,
C(r) := a + 2α l r + α^2 a r^2,
D(r) := (l^2-a^2) + r^2,
and
Ω(r,x) := 1-α r x .
Interestingly, their specific combination,
ρ^2 = AD+BC,
gives exactly the complicated expression introduced in (<ref>). Notice also that (<ref>)–(<ref>) are just quadratic polynomials, and (<ref>) also includes up to the second power, both in x and r.
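This identity is elementary but easy to miss; as a minimal illustration (a Python/sympy snippet added here for the reader's convenience, not part of the original derivation), one can confirm symbolically that AD+BC reproduces the expression for ρ^2 quoted above:

```python
import sympy as sp

r, x, alpha, a, l = sp.symbols('r x alpha a l', real=True)

# quadratic building blocks A(x), B(x), C(r), D(r) defined above
A = 1 + alpha**2*(l**2 - a**2)*x**2
B = a + 2*l*x + a*x**2
C = a + 2*alpha*l*r + alpha**2*a*r**2
D = (l**2 - a**2) + r**2

# the explicit expression for rho^2 quoted above
rho2 = ((l + a*x)**2 + 2*alpha*l*(a + 2*l*x + a*x**2)*r
        + alpha**2*(a**2 - l**2)**2*x**2 + (1 + alpha**2*(a + l*x)**2)*r**2)

assert sp.simplify(A*D + B*C - rho2) == 0   # rho^2 = AD + BC holds identically
```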
With these shorthands, we can rewrite the metric functions f, ω and γ as
f= (A^2Δ_r-C^2Δ_x)/(Ω^2ρ^2),
ω= (AB Δ_r + CD Δ_x)/(A^2 Δ_r-C^2 Δ_x),
e^2γ= C_f (A^2 Δ_r - C^2 Δ_x)/Ω^4,
ϱ^2= Δ_r Δ_x/Ω^4,
and after substituting (<ref>)–(<ref>) into (<ref>) we get the metric
ds^2 = (1/Ω^2)[-((A^2Δ_r-C^2Δ_x)/ρ^2)( dt-((AB Δ_r + CD Δ_x)/(A^2Δ_r-C^2Δ_x)) dφ)^2
+(ρ^2Δ_r Δ_x/(A^2Δ_r-C^2Δ_x)) dφ^2
+ C_f ρ^2 ( dr^2/Δ_r + dx^2/Δ_x )].
Next, we simplify the parts depending on dt and dφ, namely
-((A^2Δ_r-C^2Δ_x)/ρ^2) dt^2
+ 2((AB Δ_r +CD Δ_x)/ρ^2) dt dφ
-((AB Δ_r + CD Δ_x)^2/(ρ^2(A^2Δ_r-C^2Δ_x))) dφ^2
+(ρ^2Δ_r Δ_x/(A^2Δ_r-C^2Δ_x)) dφ^2.
Using the definition (<ref>) the last two terms combine to
((D^2Δ_x - B^2Δ_r)/ρ^2) dφ^2,
so that the whole complicated expression (<ref>) “miraculously” simplifies to
-(Δ_r/ρ^2)(A dt-B dφ)^2+(Δ_x/ρ^2)(C dt+D dφ)^2.
This allows us to finally write the Astorino complete metric in a very compact and explicit form as
ds^2=(1/Ω^2)[-(Δ_r/ρ^2)(A dt - B dφ)^2
+ (Δ_x/ρ^2)(C dt + D dφ)^2
+ C_f ρ^2 ( dr^2/Δ_r + dx^2/Δ_x )]
.
It may naturally be given a nickname “A^+ metric”.
Recall that ρ^2 is the polynomial expression (<ref>), the quadratic functions A, B, C, D, Ω have the form (<ref>)–(<ref>), and the two quartics Δ_r, Δ_x are given by (<ref>), (<ref>).
The Astorino (A) solution (<ref>) rewritten in the new compact (A^+) form of the metric (<ref>) resembles the Griffiths-Podolský (GP) representation <cit.> of the family of type D black holes in the Plebański-Demiański (PD) class of electrovacuum solutions with Λ. An exact relation between these metric forms, and the physical parameters of the black holes, will be derived and investigated later in Sections <ref>–<ref>.
However, first we will present an explicit and direct transformation from the Astorino metric (<ref>), that is (<ref>), to the original Plebański-Demiański form of the metric. This will demonstrate that both the Astorino class of solutions and the Plebański-Demiański class of solutions are equivalent, and that they represent all solutions of a given type, including the elusive accelerating (purely) NUT black holes of algebraic type D.
§ TRANSFORMATION TO THE PLEBAŃSKI-DEMIAŃSKI METRIC FORM
A general Plebański-Demiański metric representing all solutions of Einstein-Maxwell-Λ equations of algebraic type D (with double aligned, non-null electromagnetic field) — which includes black holes of this type — is originally given by Eq. (3.30) in the seminal paper <cit.>.
It is also repeated (with a slight modification of the symbols used) in Chapter 16 of <cit.> as Eqs. (16.1) and (16.2), namely
ds^2=1/(1-p' r')^2 [
-Q' (dτ'-p'^2 dσ')^2/(r'^2+p'^2)
+P' (dτ'+r'^2 dσ')^2/(r'^2+p'^2)
+(r'^2+p'^2)/P' dp'^2
+(r'^2+p'^2)/Q' dr'^2 ] .
This contains two quartic functions
[ P'(p') = k' +2 n'p' - ϵ'p'^ 2 +2 m'p'^ 3-(k'+e'^2+g'^2+Λ/3) p'^ 4 ,; Q'(r') =(k'+e'^2+g'^2) -2m'r' +ϵ'r'^ 2 -2n'r'^ 3-(k'+Λ/3) r'^ 4 , ]
with 7 arbitrary real parameters Λ, e', g', m', n', ϵ', k' (the parameter γ of <cit.> is obtained by putting k'=γ-g'^2-Λ/6). Here Λ is the cosmological constant, while e' and g' represent electric and magnetic charges, as the vector potential reads
A = - (e' + i g')/(r' + i p') ( dτ' - i p' r' dσ' ).
In <cit.> a convenient rescaling of the original PD metric (<ref>), (<ref>) was performed, namely
p' ↦√(αω) p', r' ↦√(α/ω) r', σ' ↦√(ω/α^3) σ', τ' ↦√(ω/α) τ',
with the relabelling of constants
m'+ n' ↦ (α'/ω)^3/2(m'+ n'),
e'+ g' ↦ (α'/ω)(e'+ g'),
ϵ' ↦ (α'/ω) ϵ',
k' ↦α'^2k.
This introduced two important kinematic parameters α and ω, later interpreted as the acceleration and the twist of the black hole, respectively. Such a rescaled metric, we will denote as PD_αω, reads
ds^2=1/(1-α' p' r')^2 [
- Q/(r'^2+ω^2p'^2) (dτ'-ω p'^2 dσ')^2
+ P/(r'^2+ω^2p'^2) (ω dτ'+r'^2 dσ')^2
+(r'^2+ω^2p'^2)/Q dr'^2
+(r'^2+ω^2p'^2)/P dp'^2],
where the key functions are
[ P(p') = k' +2ω^-1n'p' -ϵ'p'^ 2 +2α' m'p'^ 3
-[α'^ 2(ω^2 k'+e'^ 2+g'^ 2)+ω^2Λ/3] p'^ 4 ,; Q(r') = (ω^2 k'+e'^ 2+g'^ 2) -2m'r' +ϵ' r'^ 2 -2α'ω^-1n'r'^ 3
-(α'^ 2k'+Λ/3) r'^ 4 , ]
see Eqs. (16.5) and (16.6) in <cit.>, with the vector potential
A = - (e' + i g')/(r' + i ω p') ( dτ' - i p'r' dσ' ).
Here we consider the Plebański-Demiański metric, denoted as PD_α,
ds^2=1/(1-α' r' x')^2 [
- Q'/(r'^2+x'^2) (dτ' - x'^2 dϕ' )^2
+ P'/(r'^2+x'^2) (dτ' + r'^2 dϕ' )^2
+ C^2 (r'^2+x'^2) ( dr'^2/Q' + dx'^2/P' )],
with the metric functions
[ P'(x') = k'+2n' x' - ϵ'x'^ 2 +2α'm' x'^ 3
- [α'^ 2 (k'+e'^2 + g'^2)+Λ'/3] x'^ 4 ,; Q'(r') = (k'+e'^2 + g'^2)-2m' r' + ϵ'r'^ 2 - 2α' n' r'^ 3 - (α'^ 2 k' + Λ'/3) r'^ 4 . ]
After relabelling x' as p' and ϕ' as σ, and dropping the primes, these are exactly the expressions (<ref>) for the twist parameter ω=1. They are equivalent to the metric functions (16.6) in <cit.>.
From now on, we will only consider the case Λ'=0. Generalization to any value of the cosmological constant will be presented in our subsequent paper elsewhere.
A direct transformation of coordinates between the original Astorino metric (<ref>)–(<ref>), equivalent to the A^+ metric (<ref>), and the Plebański-Demiański metric (<ref>), (<ref>) is
t = (a/(α(a^2-l^2))) √((K-1)/√(I^2 ∓ J^2)) [ [K-α^2(a^2-l^2)] τ' + ((K-1)/(α^2(a^2-l^2))) ϕ' ],
φ = (1/(α(a^2-l^2))) √((K-1)/√(I^2 ∓ J^2)) [ ϕ' - α^2(a^2-l^2) τ' ],
x = (aK x'-l)/(aK - lα^2(a^2-l^2) x') ,
α r = (aK(a^2-l^2) α r' - al(K-1))/((a^2K-l^2) - l(a^2-l^2) α r') ,
in which the convenient (auxiliary) dimensionless constants I, J, and K are defined as
I := 1+α^2(a^2-l^2) ,
J := (2 α l/a) √(|a^2-l^2|) ,
2K := I + √(I^2 ∓ J^2) .
Here the upper sign is used when a^2>l^2, while the lower sign is used in the complementary case a^2<l^2 (and a=0 in particular). Therefore,
a^2(I^2 ∓ J^2) = a^2 [1+α^2(a^2-l^2)]^2 + 4α^2 l^2(l^2-a^2) .
and
2aK = a [1+α^2(a^2-l^2)] + √( a^2[1+α^2(a^2-l^2)]^2 + 4α^2 l^2(l^2-a^2)) ,
which are useful explicit relations to be employed later.
Actually, the transformation has a simple structure. For t and φ it is just a linear combination, and for x and r it is a fraction of linear expressions of the respective coordinates (a real version of the Möbius transformation). For l=0 this reduces to x=x', r=a r'.
Let us observe that the coordinates τ', ϕ' and the constant C in the metric (<ref>) have the physical dimension of length, while r', x' are dimensionless. Also the PD coefficients α', k', n', ϵ', m', e', g', and thus both the metric functions P', Q', are dimensionless.[There are other possibilities.
By a rescaling of the coordinates τ' ↦ C τ' and ϕ' ↦ C ϕ', the constant C^2 can be removed from the last two terms in (<ref>) because it then becomes an overall constant conformal factor determining the specific physical scale of the metric in the square brackets with dimensionless coordinates τ', ϕ', r', x'. Alternatively, by performing r' ↦ r'/C and x' ↦ x'/C all the new coordinates τ', ϕ', r', x' have the dimension of length (after the PD parameters are properly rescaled).]
As explained in full detail in Appendix, to exactly identify the metrics A and PD_α, that is (<ref>) and (<ref>), it is also necessary to perform a constant dimensionless scaling such that the conformal factor in (<ref>) is
Ω'^ 2 = S^2 ( 1 -α' r' x' )^2,
where
S^2 = a^2/α^4 |a^2-l^2|^3 K-1/√(I^2 ∓ J^2),
see (<ref>) and (<ref>). This specific rescaling is already included in the transformation of t and φ given by (<ref>) and (<ref>).
Even more importantly, the transformation uniquely relates the PD acceleration parameter α' in Ω'^ 2 and the more convenient acceleration parameter α as
α' = α a [ K - α^2(a^2-l^2)],
see (<ref>) which follows from (<ref>). Written explicitly in terms of the new Astorino parameters, it reads
α' = α 12[ a - α^2 a (a^2-l^2)
+ √(a^2+α^4 a^2(a^2-l^2)^2+2α^2(a^2-l^2)(a^2-2l^2)) ].
By setting α=0 or l=0 we get α'=α a, while by setting a=0 we get α'=α^2 l^2.
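These limits are easy to check numerically. The following sketch (plain Python/NumPy; the function name and the sample values are ours and serve only as an illustration) evaluates the auxiliary constants and the combination α a [K - α^2(a^2-l^2)], using the uniform expression for 2aK given above:

```python
import numpy as np

def pd_acceleration(alpha, a, l):
    """alpha' = alpha*a*[K - alpha^2(a^2 - l^2)] built from the Astorino parameters.
    The sign convention (upper sign for a^2 > l^2, lower sign for a^2 < l^2) is
    absorbed by the uniform expression for a^2 (I^2 -/+ J^2)."""
    I = 1 + alpha**2*(a**2 - l**2)
    a2IJ = a**2*I**2 + 4*alpha**2*l**2*(l**2 - a**2)     # = a^2 (I^2 -/+ J^2)
    aK = 0.5*(a*I + np.sqrt(a2IJ))                       # = a*K
    return alpha*(aK - a*alpha**2*(a**2 - l**2))         # = alpha'

alpha, a, l = 0.3, 0.7, 0.4
print(pd_acceleration(alpha, a, 0.0), alpha*a)           # l = 0 : alpha' = alpha*a
print(pd_acceleration(alpha, 0.0, l), alpha**2*l**2)     # a = 0 : alpha' = alpha^2*l^2
```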
Relations between the other six parameters in the Plebański-Demiański metric functions (<ref>)
are more involved, namely
k' = 1/I^2 ∓ J^2 K-1/α^2(a^2-l^2) I L ,
n' = -1/I^2 ∓ J^2 [ K-1/α^2(a^2-l^2) I M - [1-α^2(a^2-l^2)] L l/a ],
ϵ' = 1/I^2 ∓ J^2 [1-α^2(a^2-l^2)]
( I L + 4 M l/a)
- (e^2+g^2)K-1/(a^2-l^2) I/√(I^2 ∓ J^2) ,
α' m' = 1/I^2 ∓ J^2 [ (K-1) I M
+ [1-α^2(a^2-l^2)]( I M
+ α^2 [(a^2-l^2) L+ (e^2+g^2)√(I^2 ∓ J^2) ]l/a)],
e'^2 + g'^2 = (e^2 + g^2) K-1/α^2(a^2-l^2)^2 I/√(I^2 ∓ J^2) ,
where we introduced specific dimensionless combinations of the physical parameters as
L := I+2α m l/a + α^2(e^2+g^2)1/K l^2/a^2 ,
M := α m I+α^2(2a^2-2l^2+e^2+g^2)l/a .
Notice that in (<ref>) we can alternatively employ the identity
1/K l^2/a^2 = I-√(I^2 ∓ J^2)/2α^2(a^2-l^2) .
A systematic, step-by-step derivation of this transformation from the Astorino form to the Plebański-Demiański form of the metric is contained in Appendix <ref>.
To complete the discussion of the mutual relation between the original Plebański-Demiański form (<ref>) of the metric PD_α (with Λ=0) and the new Astorino representation of this entire family of black holes, we will now substitute the parameters k', n', ϵ', m', e', g' and α', as given by (<ref>) and (<ref>), into the metric functions (<ref>), resulting in
P'=1I^2 ∓ J^2 [ I(K-1)α^2(a^2-l^2)+2 [1-α^2(a^2-l^2)]la x'-I [K-α^2(a^2-l^2)] x'^ 2]
× [ L -2M x' + α^2 [(a^2-l^2)L+(e^2+g^2)√(I^2 ∓ J^2) ] x'^ 2],
Q'=1I^2 ∓ J^2 [I-2α l [1-α^2(a^2-l^2)] r'-α^2(a^2-l^2)I r'^ 2]
× [ K-1α^2(a^2-l^2)^2 [(a^2-l^2)L + (e^2+g^2)√(I^2 ∓ J^2) ]
-2α aM r'+L [K-α^2(a^2-l^2)] r'^ 2].
We have thus arrived at a very nice result: using the new parameters introduced in <cit.>, the quartic PD polynomials <cit.> are factorized into the product of quadratic expressions. Such a factorized form is crucial for the geometrical and physical interpretation because the roots of Q' and P', which can now be easily found, represent the horizons and poles (axes of symmetry), respectively, of the black-hole spacetimes. This was previously achieved in the GP and PV forms of the metric <cit.>, but the new Astorino parametrization has now enabled us to factorize the metric functions P' and Q' directly in the original PD metric.
There is a simplification in special cases when some of the physical parameters vanish:
∙ Special case l=0: no NUT
For black holes without the NUT twist (when l=0),
I = 1+α^2a^2 ,
J = 0 ,
K = I ,
L = I ,
M = α m I ,
so that S^-2 = α^2a^2(1+α^2a^2) and the Plebański-Demiański parameters (<ref>) simplify to
α' = α a ,
k' = 1 ,
n' = -α m ,
ϵ' = 1 - α^2(a^2+e^2+g^2) ,
α' m' = α m ,
(e'^2 + g'^2) = a^-2 (e^2 + g^2) .
The two key metric functions (<ref>), (<ref>) reduce to
P' = ( 1 - x'^ 2)( 1 -2α m x' + α^2 (a^2+e^2+g^2) x'^ 2),
Q' =a^-2( 1 -α^2a^2 r'^ 2) ( a^2+e^2+g^2 - 2 m a r' + a^2 r'^ 2).
After restoring the correct physical dimensionality of the PD parameters and coordinates (as explained in Subsections <ref> and <ref>, in particular by applying the analogue of the relations (<ref>) for the choice γ=a) we get
Q' = ( 1 -α^2 r^2 ) ( a^2+e^2+g^2 - 2 m r + r^2 ). These results fully agree with Eq. (16.24) in Section 16.3.2 of the monograph <cit.>. Moreover, due to the nice factorization, the roots of P' and Q' — which identify the axes and horizons, respectively — can easily be determined.
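As a consistency check (added here as an illustration; Python/sympy, with plain symbols standing for the dimensionless PD quantities), one can verify that the quartic Q'(r') built from the l=0 coefficients listed above indeed reproduces the factorized form quoted above:

```python
import sympy as sp

r, alpha, a, m, e, g = sp.symbols('r alpha a m e g', positive=True)  # r stands for the PD r'
q2 = e**2 + g**2

# l = 0 values of the PD parameters listed above
alpha_p, k, n = alpha*a, 1, -alpha*m
m_p, eps, e2g2 = m/a, 1 - alpha**2*(a**2 + q2), q2/a**2

# quartic Q'(r') with Lambda = 0 ...
Q_quartic = (k + e2g2) - 2*m_p*r + eps*r**2 - 2*alpha_p*n*r**3 - alpha_p**2*k*r**4
# ... versus the factorized form
Q_factor = (1 - alpha**2*a**2*r**2)*(a**2 + q2 - 2*m*a*r + a**2*r**2)/a**2

assert sp.simplify(Q_quartic - Q_factor) == 0
```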
∙ Special case α=0: no acceleration
For non-accelerating black holes, that is in the limit α→0, we get
I = 1 ,
J = 0 , K-1/α^2(a^2-l^2) = a^2-l^2/a^2 ,
L = 1 ,
M = α m ,
where the expression for K can be calculated using (<ref>), so that K → 1, S^-2→α^2|a^2-l^2|, and α' →α a.
The PD parameters thus become
α' = 0 ,
k' = 1 - l^2/a^2 ,
n' = l/a ,
ϵ' = 1 ,
m' = m/a ,
e'^2 + g'^2 = a^-2 (e^2 + g^2) .
Note that setting l=0 in (<ref>) agrees with the previous l=0 case if α=0 is set in (<ref>).
The metric functions (<ref>), (<ref>) simplify to
P' = 1 - ( x' - la )^2,
Q' = a^-2( a^2-l^2+e^2+g^2 - 2 m a r' + a^2 r'^ 2).
It agrees with Eq. (16.23) in Sec. 16.3.1 of <cit.> after performing a shift x'-l/a ↦ x' in P', and restoring the correct dimensionality in Q' by rescaling it by a^2 and redefining r' to r=a r'.
∙ Special case a=0: no rotation
For black holes without the Kerr-like rotation, by setting a = 0 in (<ref>), (<ref>), (<ref>) and (<ref>), we get
I = 1-α^2l^2 ,
a^2(I^2 ∓ J^2) = 4α^2l^4 ,
aK = α l^2 ,
aL = α (2ml + e^2+g^2) ,
aM = α^2l (-2l^2 +e^2+g^2) .
Consequently, S^-2→ 2α^4l^6/a^2 and
α' = α^2 l^2 ,
k' = α^2 l^2-1/4α^2 l^4 (2m l +e^2+g^2 ) ,
n' = (m-l) l+(e^2+g^2)/2 α l^3 + α/2(m+l) ,
ϵ' = -2 (1+α^2l^2) + 1/2(e^2+g^2)( α^2 + 3/l^2) ,
α' m' = α/2 [ -(m+l) + α^2l^2(l-m)
+e^2+g^2/l ] ,
e'^2 + g'^2 = 1-α^2l^2/2α^2 l^4 (e^2 + g^2) .
The two key metric functions (<ref>), (<ref>) thus take an explicit factorized
form
P'=14α^2l^4 [ (1-α^2l^2) - 2α l (1+α^2l^2) x' + α^2l^2 (1-α^2l^2) x'^ 2]
× [ - (2ml + e^2+g^2)
+ 2 α l (-2l^2 +e^2+g^2) x'
- α^2l^2(-2ml +e^2+g^2) x'^ 2],
Q'=14α^2l^4 [(1-α^2l^2)-2α l (1+α^2l^2) r' + α^2l^2(1-α^2l^2) r'^ 2]
× [ (-2ml+e^2+g^2) -2 α l (-2l^2 +e^2+g^2) r'
+ α^2l^2(2ml + e^2+g^2) r'^ 2].
This particular choice of the PD coefficients, expressed in terms of the genuine Astorino physical parameters, identifies the elusive family of accelerating NUTty black holes without the Kerr-like rotation within the Plebański-Demiański family of metrics. This will be investigated in more detail in Section <ref>. Moreover, we will present these accelerating (possibly charged) purely NUT black holes in the Griffiths-Podolský form, see the metric (<ref>)–(<ref>), which has not been known until now.
§ TRANSFORMATION TO THE GRIFFITHS-PODOLSKÝ METRIC FORM
In previous section we have proven that, assuming Λ=0, the complete Astorino (A) class of solutions (<ref>) can be equivalently written in the Plebański-Demiański (PD_α) form of the metric (<ref>),
ds^2=1/(1-α' r' x')^2 [
- Q'/(r'^2+x'^2) (dτ' - x'^2 dϕ' )^2
+ P'/(r'^2+x'^2) (dτ' + r'^2 dϕ' )^2
+ C^2(r'^2+x'^2) ( dr'^2/Q' + dx'^2/P' )],
where the metric functions P'(x') and Q'(r'), explicitly expressed in terms of the very convenient Astorino parameters and factorized, are generally given by (<ref>) and (<ref>), respectively.
Our aim in this section is to relate the PD_α form (<ref>) of the type D black holes to the Griffiths-Podolský (GP) form of these solutions, summarized in <cit.>. This will elucidate the direct relation of the GP metric to the A metric. In particular, it will clarify the relation between the Astorino initial parameters α, a, l, m, e, g and the physical parameters α̃, ã, l̃, m̃, ẽ, g̃ employed in the GP form of the metric previously <cit.>.
We start by performing a simple rescaling of coordinates, bringing the PD_α metric to the PD_αω metric (<ref>) which involves an additional twist parameter ω, namely[Note that the acceleration parameter α' is already included in the metric PD_α so we do not have to include it in the coordinate transformation as was done in <cit.> and repeated at the beginning of Section <ref>.]
x' ↦√(ω) x',
r' ↦ r'/√(ω), τ' ↦√(ω) τ', ϕ' ↦√(ω) ϕ'.
These rescalings bring the metric (<ref>) to the PD_αω form
ds^2=1/(1-α' r' x')^2 [
- 𝒬/(r'^2+ω^2x'^2) (dτ'-ω x'^2 dϕ')^2
+(r'^2+ω^2x'^2)/𝒬 dr'^2
+ 𝒫/(r'^2+ω^2x'^2) (ω dτ'+r'^2 dϕ')^2
+(r'^2+ω^2x'^2)/𝒫 dx'^2],
where
𝒫(x') := P'(√(ω) x') and
𝒬(r') := ω^2 Q'(r'/√(ω)).
Following <cit.>, the next step is to perform a coordinate transformation
x' = (l̃ + ã x̃)/ω, τ' = t-((ã+l̃)^2/ã) φ, ϕ' = -(ω/ã) φ,
r' = r̃,
where ã represents the Kerr-like rotational parameter, while l̃ represents the NUT-like parameter. After these linear transformations, the metric becomes
ds^2=(1/Ω̃^2)[ -(𝒬̃/ρ̃^2)[ dt-(ã(1-x̃^2)+2l̃(1-x̃)) dφ]^2
+ (ρ̃^2/𝒬̃) dr̃^2
+ (ρ̃^2/𝒫̃) dx̃^2
+(𝒫̃/ρ̃^2)[ã dt-(r̃^2+(ã+l̃)^2) dφ]^2],
in which
Ω̃ := 1-(α'/ω) r̃ (l̃+ã x̃),
ρ̃^2 := r̃^2+(l̃+ã x̃)^2,
𝒫̃(x̃) := (ω^2/ã^2) P'((l̃+ã x̃)/√(ω)),
𝒬̃(r̃) := ω^2 Q'(r̃/√(ω)) ,
where the functions P'(x'), Q'(r') are given by (<ref>), (<ref>). This is the general Griffiths-Podolský form of the metric, as summarized in Eq. (16.12) in <cit.>.
By inspecting the conformal factor Ω̃ we observe that the GP acceleration parameter α̃ is equal to the PD acceleration parameter, α̃=α'. Using (<ref>), we can thus directly relate the GP acceleration to the A parameters as
α̃ = α a [ K - α^2(a^2-l^2)].
Expressed explicitly in terms of the new Astorino parameters, this is actually quite an involved expression (<ref>), namely
α̃ = α 12[ a - α^2 a (a^2-l^2)
+ √(a^2+α^4 a^2(a^2-l^2)^2+2α^2(a^2-l^2)(a^2-2l^2)) ].
Clearly, α=0 implies α̃=0, which is expected. However, by setting l=0 we get α̃=α a. It means that the GP acceleration parameter α̃ also vanishes for l=0=a. Similarly, for a=0 we get α̃=α^2 l^2, so that
α̃ also vanishes for a=0=l. This degeneracy is an unfortunate feature of the original GP representation of the whole class of type D black holes, preventing us from identifying the genuine subclass of accelerating NUT black holes without the Kerr-like rotation — which exists, and is nicely contained in the Astorino metric (<ref>)–(<ref>).
Let us now concentrate on the GP “rotational” parameters ã and l̃. In the transformation (<ref>) these are arbitrary constants. However, they are naturally constrained by the requirement that in the final form of the GP metric the spherical-like coordinate θ should be introduced instead of x̃ in (<ref>) via the relation x̃=cosθ. To this end, the function 𝒫̃(x̃) must be written in the specific factorized form
𝒫̃=(1-x̃^2)(1-a_3 x̃-a_4 x̃^2),
and this can be achieved by a unique values of the parameters ã and l̃. Indeed, for the choice[In view of the definition (<ref>), the upper sign applies for a^2>l^2, while the lower sign applies for a^2<l^2.]
ã = √(ω)K-α^2(a^2-l^2) √(I^2 ∓ J^2)/I ,
l̃ = √(ω)K-α^2(a^2-l^2)
F la ,
where F denotes the fraction
F := 1-α^2(a^2-l^2)1+α^2(a^2-l^2),
the first bracket in P' given by (<ref>), expressed in the new coordinate x̃ such that x'=(l̃+ã x̃)/√(ω), becomes
I(K-1)α^2(a^2-l^2)+2 [1-α^2(a^2-l^2)] la x'-I [K-α^2(a^2-l^2)] x'^ 2
= I^2 ∓ J^2I [K-α^2(a^2-l^2)] (1-x̃^2) .
Notice also that this natural fixing of ã and l̃ can be rewritten using the relation (<ref>) as
α̃ ã = α a √(ω) √(I^2 ∓ J^2)I ,
α̃ l̃ = α l √(ω) F .
Evaluating also the second bracket in P', we obtain the metric function 𝒫̃ in (<ref>)
𝒫̃(x̃)= (1-x̃^2) ω^2ã^2 1I [K-α^2(a^2-l^2)]
×[ L - 2Ml̃√(ω)
+α^2[(a^2-l^2)L+(e^2+g^2) √(I^2 ∓ J^2) ] l̃^2ω
-(2Mã√(ω)-2α^2[(a^2-l^2)L+(e^2+g^2) √(I^2 ∓ J^2) ]
l̃ãω) x̃
+α^2 ã^2ω[(a^2-l^2)L+(e^2+g^2) √(I^2 ∓ J^2) ] x̃^2 ],
which is indeed of the required factorized form (<ref>) — up to an overall rescaling which can always be achieved.
Indeed, so far ω has been a free “twist” parameter introduced by (<ref>). To describe a black hole with the horizon topology of a sphere, the function 𝒫̃(x̃) has to satisfy the condition 𝒫̃(x̃=0)=1 which directly follows from (<ref>) (see also <cit.>). For (<ref>), using (<ref>), (<ref>) and (<ref>), we thus derive a special value of ω, namely[Let us note, however, that in Sec. <ref> we will demonstrate that an arbitrary value of ω in the metric can be restored by a rescaling of the acceleration parameter.]
ω_0 =
α aα̃ I^2 ∓ J^2I( L - 2Mα lα̃ F
+α^4l^2α̃^2[(a^2-l^2)L+(e^2+g^2) √(I^2 ∓ J^2)] F^2
)^-1.
This brings 𝒫̃ exactly to the desired form (<ref>), that is
𝒫̃(x̃)=(1-x̃^2) P̃(x̃),
where P̃(x̃) := 1-a_3 x̃-a_4 x̃^2,
with the coefficients a_3 and a_4 given by
a_3= 2√(I^2 ∓ J^2) ω_0
( M - α^3 l α̃[(a^2-l^2)L+(e^2+g^2) √(I^2 ∓ J^2) ] F ),
a_4= -α^3 aα̃ I ω_0[(a^2-l^2)L+(e^2+g^2) √(I^2 ∓ J^2) ].
The last metric function 𝒬̃(r̃) easily follows from the relations (<ref>) and (<ref>),
𝒬̃(r̃)=
ω_0^2(k'+e'^ 2+g'^ 2)-2 ω_0^3/2m' r̃+ω_0ϵ' r̃^2
-2√(ω_0) α'n' r̃^3-α'^ 2k' r̃^4 ,
recalling that α̃=α', and introducing the rescaled parameters
k̃:=k', m̃:=ω_0^3/2 m', ñ:=ω_0^3/2 n' , ϵ̃:=ω_0 ϵ', ẽ:=ω_0 e', g̃:=ω_0 g',
where k', n', ϵ', m', e', g' are given by (<ref>).
To obtain the Griffiths-Podolský form of the metric it now suffices to introduce the angular coordinate
θ in (<ref>) via the simple relation x̃=cosθ, resulting in
ds^2=(1/Ω̃^2)[ -(Q̃/ρ̃^2)[ dt-(ã sin^2θ+2l̃(1-cosθ)) dφ]^2
+ (ρ̃^2/Q̃) dr̃^2
+ (ρ̃^2/P̃) dθ^2
+(P̃/ρ̃^2) sin^2θ [ ã dt-(r̃^2+(ã+l̃)^2) dφ]^2],
where
Ω̃ = 1-(α̃/ω_0) r̃ (l̃+ã cosθ),
ρ̃^2 = r̃^2+(l̃+ã cosθ)^2,
P̃(θ) = 1-a_3 cosθ-a_4 cos^2θ ,
Q̃(r̃) = (ω_0^2 k̃+ẽ^2+g̃^2)
-2m̃ r̃+ϵ̃ r̃^2
-2(α̃ñ/ω_0) r̃^3
-α̃^2 k̃ r̃^4 .
It is exactly the GP metric given by Eqs. (6.18), (6.19) in the monograph <cit.>.
Moreover, a straightforward (but somewhat lengthy) calculation proves that the above parameters satisfy the following set of relations
a_3 = 2(α̃ã/ω_0) m̃
-4(α̃^2ãl̃/ω_0^2)(ω_0^2k̃+ẽ^2+g̃^2),
a_4 =
-(α̃^2ã^2/ω_0^2)(ω_0^2k̃+ẽ^2+g̃^2),
ϵ̃ = ω_0^2 k̃/(ã^2-l̃^2)+4(α̃l̃/ω_0) m̃
-(ã^2+3l̃^2)[ (α̃^2/ω_0^2)(ω_0^2k̃+ẽ^2+g̃^2)],
ñ = ω_0^2 k̃ l̃/(ã^2-l̃^2)
-(α̃(ã^2-l̃^2)/ω_0) m̃
+(ã^2-l̃^2) l̃ [ (α̃^2/ω_0^2)(ω_0^2k̃+ẽ^2+g̃^2)],
( ω_0^2/(ã^2-l̃^2)+3α̃^2 l̃^2 ) k̃ =
1+2(α̃l̃/ω_0) m̃
-3(α̃^2l̃^2/ω_0^2)(ẽ^2+g̃^2),
which are exactly the expressions in Eqs. (16.20) and (16.15)–(16.17) in <cit.>.
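Since these relations are used repeatedly below, a minimal computational sketch may be helpful (Python; the function name is ours, Λ=0, and ã^2 ≠ l̃^2 is assumed). Given the GP parameters and the twist ω_0, it solves the last relation for k̃ and then evaluates ϵ̃, ñ and the coefficients a_3, a_4:

```python
def gp_coefficients(alpha, a, l, m, e, g, omega):
    """Inputs are the (tilded) GP parameters; Lambda = 0 and a**2 != l**2 are assumed.
    Solves the last constraint for k, then returns (k, epsilon, n, a3, a4)."""
    k = (1 + 2*alpha*l*m/omega - 3*alpha**2*l**2*(e**2 + g**2)/omega**2) \
        / (omega**2/(a**2 - l**2) + 3*alpha**2*l**2)
    W = alpha**2*(omega**2*k + e**2 + g**2)/omega**2       # recurring combination
    eps = omega**2*k/(a**2 - l**2) + 4*alpha*l*m/omega - (a**2 + 3*l**2)*W
    n   = omega**2*k*l/(a**2 - l**2) - alpha*(a**2 - l**2)*m/omega + (a**2 - l**2)*l*W
    a3  = 2*alpha*a*m/omega - 4*a*l*W
    a4  = -a**2*W
    return k, eps, n, a3, a4

print(gp_coefficients(0.2, 1.0, 0.3, 0.5, 0.1, 0.0, 1.0))  # sample (tilded) parameters
```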
This finishes the construction of the Griffiths-Podolský form of the general metric of black holes of algebraic type D. Moreover, it explicitly demonstrates the full equivalence of the GP form with the PD and A forms of this large class of spacetimes.
However, it should be emphasized that there is a subtle but very important difference: the original Griffiths-Podolský physical parameters α̃, ã, l̃ (representing acceleration, Kerr-like rotation, NUT twist) and m̃, ẽ, g̃ (representing mass, electric charge, magnetic charge) in the metric (<ref>) are not the new Astorino genuine parameters α, a, l and m, e, g, which properly separate the corresponding subclasses when they are set to zero.
In fact, the old GP parameters are now explicitly expressed in terms of the new A parameters via the complicated relations (<ref>), (<ref>), (<ref>) and (<ref>) with (<ref>). Moreover, the additional twist parameter ω has a very special value ω_0 given by (<ref>). Due to their highly involved structure, these could not be guessed in previous investigation of this family of spacetimes.
To elucidate the relation between the GP and A (that is also A^+) physical parameters in more detail, it seems to be instructive to consider the special cases α=0, l=0, a=0. It will clearly demonstrate in which situations the two sets of parameters agree, and what are their specific differences.
§ SPECIAL CASES
To analyze various special subcases of black holes, it is first necessary to clarify the freedom in the choice of the twist parameter ω, and also to consider the physical dimensionality of the parameters α̃, l̃ and ã.
§.§ An issue of the twist parameter ω
In the previous section, an explicit transformation from the Astorino form of the metric to the Griffiths-Podolský one was presented. It involves a very special, unique choice of the twist parameter ω=ω_0 given by (<ref>). This may be seen as a contradiction to statements in the published works, such as <cit.>, where it was argued that ω is a free parameter (with a general restriction that it is related to the twist of the congruence generated by PNDs, and thus to both the Kerr-like rotational parameter ã and the NUT-like parameter l̃).
However, it can be demonstrated that there is no such contradiction because arbitrary (nonzero) value of ω can be restored from (nonzero) ω_0. This is achieved by a simple rescaling of the acceleration parameter,
α̃ ↦ α̃ ω_0ω ,
while keeping all other physical parameters (that is ã, l̃, m̃, ẽ, g̃) the same. After this substitution, the set of relations (<ref>)–(<ref>) become
a_3 = 2α̃ãωm̃
-4α̃^2ãl̃ω^2(ω_0^2k̃+ẽ^ 2+g̃^ 2),
a_4 =
-α̃^2ã^2ω^2(ω_0^2k̃+ẽ^ 2+g̃^ 2),
ϵ̃ = ω_0^2 k̃ã^2
-l̃^2+4α̃l̃ωm̃
-(ã^2+3l̃^2)[ α̃^2ω^2(ω_0^2k̃+ẽ^ 2+g̃^ 2)],
ñ = ω_0^2 k̃l̃ã^2-l̃^2
-α̃ã^2-l̃^2ωm̃
+(ã^2-l̃^2)l̃[ α̃^2ω^2(ω_0^2k̃+ẽ^ 2+g̃^ 2)],
(ω^2ã^2-l̃^2+3α̃^2 l̃^ 2) ω_0^2/ω^2 k̃ =
1+2α̃l̃ωm̃
-3α̃^2l̃^ 2ω^2(ẽ^ 2+g̃^ 2).
The last equation suggests a rescaling
k̃ ↦ ω^2ω_0^2 k̃ ,
which replaces the special value of ω_0 in all relations (<ref>)–(<ref>) by an arbitrary value ω (because ω_0^2k̃ is replaced by ω^2k̃).
Concerning the metric functions given by (<ref>)–(<ref>), ρ̃^ 2 and P̃ remain the same, while Ω̃ and Q̃ change to
Ω̃ = 1-(α̃/ω) r̃ (l̃+ã cosθ),
Q̃(r̃) = (ω^2 k̃+ẽ^2+g̃^2)
-2m̃ r̃+ϵ̃ r̃^2
-2(α̃ñ/ω) r̃^3
-α̃^2 k̃ r̃^4 .
The metric (<ref>) thus takes exactly the form of Eq. (16.18)–(16.20) in <cit.>, that is the original Griffiths-Podolský metric with a general (not fixed) value of ω.
To conclude, the simple rescaling (<ref>) of the acceleration parameter α̃, accompanied by the rescaling (<ref>) of the parameter k̃, restores arbitrariness of ω in the GP metric. In other words, the original GP metrics with different values of ω are equivalent (unless ω=0).
§.§ Restoring the physical dimensionality of the black-hole parameters
Recall that it is convenient and natural to consider that all the coordinates and genuine physical parameters in the Astorino metric (<ref>)–(<ref>) have the usual physical dimension — as in the Boyer-Lindquist-type coordinates for the Kerr-Newman black holes (and thus their generalization in the Griffiths-Podolský metric form). In particular, the physical parameters m, a, l, e, g have the dimension of length (while α has the dimension 1/length). Also the coordinates r and t have the dimension of length.
On the other hand, the GP parameters m̃, α̃, ã, l̃, ẽ, g̃ in the metric (<ref>) are dimensionless. The reason is that they have been obtained from the dimensionless PD coefficients k', n', ϵ', m', e', g' — given by (<ref>) — using the relations (<ref>) and α̃=α'.
Their proper physical dimensionality can be restored by introducing a parameter γ with the dimension of length, namely by rescaling the GP parameters in such a way that
r̃↦γ r̃, m̃↦γ m̃, ã↦γ ã, l̃↦γ l̃, ω↦γ ω,
ẽ↦γ ẽ, g̃↦γ g̃, α̃↦α̃γ, Q̃↦γ^2 Q̃ .
Let us note that after this rescaling the conformal factor Ω̃ given by (<ref>) remains the same, while (<ref>) changes to
Ω̃ = 1-α̃/γω_0 r̃ (l̃+ãcosθ)
if we keep ω_0, fixed by (<ref>), dimensionless.
In the most general case, it is not a priori clear how to choose the unique value of γ. However, it can be easily identified in the particular cases of black holes, to recover the standard forms of these well-known solutions.
§.§ The special case l=0: no NUT
Using (<ref>), from (<ref>) we get ω_0 =
α a / α̃. Recalling (<ref>), which gives the relation
α̃ = α a,
we immediately obtain a nice result ω_0 = 1. The expressions (<ref>), (<ref>) then reduce to
ã = √(ω_0) = 1 , l̃ = 0 ,
and the coefficients a_3 and a_4 are simply a_3 = 2α m, a_4 = -α^2 (a^2+e^2+g^2),
so that
P̃ = 1 - 2α m cosθ + α^2 (a^2+e^2+g^2) cos^2θ .
The function Q̃ is given by (<ref>) with the coefficients determined by (<ref>) and (<ref>),
Q̃ = 1/a^2((a^2+e^2+g^2) - 2m a r̃ +a^2r̃^ 2)
(1 - α^2a^2 r̃^ 2).
Together with
Ω̃ = 1-α a r̃cosθ, ρ̃^ 2 = r̃^ 2+cos^2θ,
it gives an explicit form of the metric (<ref>) in terms of the Astorino physical parameters.
Restoring now the correct dimensionality of the PD parameters by using (<ref>) with the simple choice
γ = a,
we obtain
ã=a, l̃=0, α̃=α
The metric functions take the form
Ω̃ = 1-α̃ r̃cosθ,
ρ̃^ 2 = r̃^ 2+ã^2 cos^2θ,
P̃ = 1 - 2α̃m̃ cosθ + α̃^ 2 (ã^ 2+ẽ^ 2+g̃^ 2) cos^2θ ,
Q̃ = ((ã^ 2+ẽ^ 2+g̃^ 2) - 2m̃ r̃ + r̃^ 2) (1 - α̃^ 2r̃^ 2),
which fully agrees with previous GP and PV forms of this class of accelerating Kerr-Newman black holes without the NUT parameter, as presented in Eq. (35)–(39) of <cit.> and also in Sec. 16.3.2 of <cit.>. In this standard form of the metric, any remaining physical parameters can be set to zero, in any order.
For m̃^ 2>ã^ 2+ẽ^ 2+g̃^ 2, both the key metric functions can be written in a factorized form
P̃ = (1 - α̃ r̃_+ cosθ)(1 - α̃ r̃_- cosθ) ,
Q̃ = ( r̃ - r̃_+ )( r̃ - r̃_- )
(1 - α̃ r̃)(1 + α̃ r̃) ,
where r̃_± = m̃±√(m̃^ 2-ã^ 2-ẽ^ 2-g̃^ 2). The roots of Q̃ define the position of the horizons. As explained in detail in <cit.>, there are two black-hole horizons and two acceleration horizons. Extremal black holes with a degenerate horizon occur when m̃^ 2=ã^ 2+ẽ^ 2+g̃^ 2, while for α̃=0 the acceleration horizons disappear.
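Both factorizations are immediate to verify; for instance (an illustrative Python/sympy check added here, with plain symbols standing for the tilded quantities):

```python
import sympy as sp

r, th, alpha, a, m, e, g = sp.symbols('r theta alpha a m e g', positive=True)
q2 = e**2 + g**2
rp = m + sp.sqrt(m**2 - a**2 - q2)    # r_+
rm = m - sp.sqrt(m**2 - a**2 - q2)    # r_-

P = 1 - 2*alpha*m*sp.cos(th) + alpha**2*(a**2 + q2)*sp.cos(th)**2
Q = (a**2 + q2 - 2*m*r + r**2)*(1 - alpha**2*r**2)

assert sp.simplify(sp.expand(P - (1 - alpha*rp*sp.cos(th))*(1 - alpha*rm*sp.cos(th)))) == 0
assert sp.simplify(sp.expand(Q - (r - rp)*(r - rm)*(1 - alpha*r)*(1 + alpha*r))) == 0
```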
§.§ The special case α=0: no acceleration
In this case
I = 1 ,
J = 0 ,
K = 1 ,
L = 1 ,
M = α m ,
see (<ref>). From (<ref>) we get the limit α̃→α a, so that using (<ref>) we obtain
α̃=0 , ω_0 = 1 .
Consequently, the relations (<ref>), (<ref>) give
ã = 1 , l̃ = l/a ,
and the coefficients a_3 and a_4 are simply a_3 = 0 = a_4. Therefore, employing
(<ref>)–(<ref>) with (<ref>) and (<ref>), the metric functions in the GP metric (<ref>) become
Ω̃ = 1 ,
ρ̃^ 2 = r̃^ 2+1/a^2(l +a cosθ)^2 ,
P̃ = 1 ,
Q̃ = 1/a^2((a^2-l^2+e^2+g^2) - 2m a r̃ +a^2r̃^ 2).
This gives an explicit form of the metric in terms of the Astorino physical parameters. Restoring the correct dimensionality of the coordinates and parameters by the choice
γ = a ,
we finally obtain
ρ̃^ 2 = r̃^ 2+(l +a cosθ)^2 ,
Q̃ = (a^2-l^2+e^2+g^2)-2m r̃+r̃^ 2 ,
which fully agrees with Eq. (16.23) in Sec. 16.3.1 of <cit.>.
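For completeness, the familiar horizon structure can be read off directly from Q̃; a trivial numerical illustration (Python/NumPy, sample values ours):

```python
import numpy as np

a, l, m, e, g = 0.6, 0.8, 1.0, 0.2, 0.0
Q = lambda r: (a**2 - l**2 + e**2 + g**2) - 2*m*r + r**2
r_pm = m + np.array([1.0, -1.0])*np.sqrt(m**2 + l**2 - a**2 - e**2 - g**2)
print(r_pm, Q(r_pm))   # Q vanishes at the Kerr-Newman-NUT horizons r = m ± sqrt(m^2+l^2-a^2-e^2-g^2)
```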
§.§ The special case a=0: no Kerr-like rotation
In such a case the auxiliary parameters are
I = 1-α^2l^2,
a|J| = 2α l^2,
aK = α l^2,
aL = α (2m l+e^2+g^2),
aM = α^2 l (-2l^2+e^2+g^2),
so that using (<ref>), (<ref>), (<ref>), (<ref>) we obtain
α̃ = α^2l^2,
ã = 2√(ω_0) 11-α^2l^2, l̃= √(ω_0)α l 1+α^2l^21-α^2l^2,
ω_0 = 1-α^2l^2/ 1 - 2α^2 ml + α^4l^2(e^2+g^2-l^2) .
Consequently,
P̃ = 1-a_3cosθ-a_4cos^2θ,
Q̃ = ω_0^24α^2l^4[(1-α^2l^2) - 2α l (1+α^2l^2)r̃√(ω_0)
+α^2l^2 (1-α^2l^2)r̃^ 2ω_0 ]
×[e^2+g^2-2ml - 2α l (e^2+g^2-2l^2)r̃√(ω_0)
+α^2l^2(e^2+g^2+2ml)r̃^ 2ω_0 ],
where
a_3 = 2α m (1+α^2l^2) - l - α^2 l (e^2+g^2-l^2)/1 - 2α^2 ml + α^4l^2(e^2+g^2-l^2),
a_4 = α^22ml-e^2-g^2 1 - 2α^2 ml + α^4l^2(e^2+g^2-l^2).
Here we have used the definition (<ref>) together with (<ref>) (<ref>), that is
𝒫̃(x̃) :=
ω_0^2ã^2
P'(√(ω_0) x'=l̃+ã x̃√(ω_0)), 𝒬̃(r̃) :=
ω_0^2
Q' (r'=r̃√(ω_0)),
in which the explicit transformation (<ref>),
x' = l̃+ã x̃ω_0
= (1+α^2l^2) + 2α l x̃/α l (1-α^2l^2) √(ω_0),
was inserted. In fact, the resulting metric functions P̃, Q̃ are fully consistent with the expressions (<ref>), (<ref>). In particular, the first quadratic factor in P' gives the term (1-x̃^ 2), while the second term leads to P̃(x̃) = 1-a_3 x̃-a_4 x̃^2. It is then natural to introduce x̃=cosθ.
Correct dimensionality of the physical parameters can be restored by applying (<ref>) with
γ = α l^2,
so that
α̃ = α, ã= 2√(ω_0) α l^21-α^2l^2, l̃= l √(ω_0) 1+α^2l^21-α^2l^2.
Recall that the mass and charge parameters m,e,g in (<ref>)–(<ref>) already have the proper physical dimension (of length) but it is also necessary to rescale r̃↦γ r̃, Q̃↦γ^2 Q̃. Thus,
Q̃(r̃) = 14 [(1-α^2l^2) ω_0 - 2(1+α^2l^2)√(ω_0) r̃l
+(1-α^2l^2) r̃^ 2l^2 ]
×[(e^2+g^2-2ml) ω_0 - 2(e^2+g^2-2l^2)√(ω_0) r̃l
+(e^2+g^2+2ml) r̃^ 2l^2 ],
while the function P̃ remains the same as in (<ref>). Also, here we keep the same dimensionless parameter ω_0 given by (<ref>).
We can thus write the metric for accelerating charged NUT black hole without the Kerr-like rotation in the Griffiths-Podolský metric representation (<ref>) as
s^2=1Ω̃^2[ -Q̃ ρ̃^ 2[ t - ( (ã+l̃) + (l̃+ã cosθ))(1-cosθ) φ ]^2
+ ρ̃^ 2Q̃ r̃^2
+ ρ̃^ 2P̃ θ^2
+P̃ ρ̃^ 2sin^2θ [ ã t-(r̃^2+(ã+l̃)^2) φ]^2],
where from (<ref>) it follows that
ã+l̃ = l √(ω_0) 1+α l1-α l,
l̃+ã cosθ = l √(ω_0) [ 1+α l1-α l - 2α l1-α^2l^2(1-cosθ) ].
Applying these relations we get an explicit metric
s^2=1Ω̃^2{ -Q̃ ρ̃^ 2[ t -
2l √(ω_0) [ 1+α l1-α l - α l1-α^2l^2(1-cosθ) ]
(1-cosθ) φ ]^2
+ ρ̃^ 2Q̃ r̃^2
+ ρ̃^ 2P̃ θ^2
+P̃ ρ̃^ 2sin^2θ [ 2√(ω_0) α l^21-α^2l^2 t
-(r̃^2+l^2ω_0 (1+α l)^2(1-α l)^2) φ]^2},
in which (<ref>) and (<ref>) takes the form
Ω̃ = 1-[ 1+α l1-α l - 2α l1-α^2l^2(1-cosθ) ]
r̃l √(ω_0),
ρ̃^ 2 = r̃^ 2+l^2ω_0 [ 1+α l1-α l - 2α l1-α^2l^2(1-cosθ) ]^2,
while the functions P̃ an Q̃ are given by (<ref>) and (<ref>), respectively.
It seems convenient now to perform a rescaling of the coordinates and the metric functions
r̅ = r̃/√(ω_0), t̅ = √(ω_0) t, φ̅ = ω_0 φ,
P̅ = P̃/ω_0, Q̅ = Q̃/ω_0^2, ρ̅^ 2 = ρ̃^ 2/ω_0,
which brings the metric to the form
ds^2=(1/Ω̅^2){ -(Q̅/ρ̅^2)[ dt̅ - 2l ((1+α^2l^2)(1-cosθ) + α l sin^2θ)/(1-α^2l^2) dφ̅ ]^2
+ (ρ̅^2/Q̅) dr̅^2
+ (ρ̅^2/P̅) dθ^2
+(P̅/ρ̅^2) sin^2θ [ (2 α l^2/(1-α^2l^2)) dt̅
-( r̅^2+l^2 (1+α l)^2/(1-α l)^2 ) dφ̅ ]^2},
where
Ω̅ = 1-
((1+α^2l^2+2α l cosθ)/(1-α^2l^2)) (r̅/l),
ρ̅^2 = r̅^2+l^2
[ (1+α^2l^2+2α l cosθ)/(1-α^2l^2) ]^2 ,
P̅ = (1/(1-α^2l^2))[ 1 - 2α^2 ml + α^4l^2(e^2+g^2-l^2)
+ 2α( l - m (1+α^2l^2) + α^2 l (e^2+g^2-l^2))cosθ
+ α^2 ( e^2+g^2-2ml)cos^2θ],
Q̅ = (1/(4l^4))[ (r̅-l)^2 - α^2 l^2(r̅+l)^2]
[ 2ml(r̅^2-l^2) + 4l^3 r̅ +(e^2+g^2)(r̅-l)^2 ].
This is an explicit GP metric form (<ref>) of the class of accelerating charged NUT black holes (of type D) without the Kerr-like rotation. It has not been identified in previous works <cit.> due to the fact that the Kerr-like parameter ã was not properly chosen to cover this special subcase. Its convenient choice is (<ref>), coupled to the NUT parameter l and acceleration α, whereas previously it was incorrectly assumed that ã=0 in such a situation. This led to a degenerate parametrization of this sector of the complete family of type D black hole spacetimes.
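The horizon structure of this solution follows immediately from the factorized Q̅: its first quadratic factor vanishes at r̅ = l(1±α l)/(1∓α l), the images of the acceleration horizons r=±1/α of the Astorino form, while the second factor locates the black-hole horizons. A short numerical illustration (Python/NumPy; the sample values are ours):

```python
import numpy as np

alpha, l, m, e, g = 0.05, 1.0, 2.0, 0.3, 0.0   # a = 0: accelerating charged NUT black hole
q2 = e**2 + g**2

# first factor of Q-bar:  (r - l)^2 - alpha^2 l^2 (r + l)^2
c1 = [1 - alpha**2*l**2, -2*l*(1 + alpha**2*l**2), l**2*(1 - alpha**2*l**2)]
# second factor of Q-bar: 2 m l (r^2 - l^2) + 4 l^3 r + q2 (r - l)^2
c2 = [2*m*l + q2, 4*l**3 - 2*l*q2, q2*l**2 - 2*m*l**3]

print("acceleration horizons:", np.roots(c1))   # = l(1±αl)/(1∓αl)
print("black-hole horizons:  ", np.roots(c2))
```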
In this metric we can independently set α=0 and l=0, expecting to obtain the (charged) NUT solution without acceleration and the (charged) C-metric without the NUT parameter, respectively. Let us investigate these two special subcases in detail.
For α=0 we get
ds^2=
-(Q̅/(Ω̅^2 ρ̅^2))[ dt̅ - 2l(1-cosθ) dφ̅ ]^2
+ (ρ̅^2/(Ω̅^2 Q̅)) dr̅^2
+((r̅^2+l^2)/Ω̅^2)( dθ^2 + sin^2θ dφ̅^2 ),
with P̅ = 1 and
ρ̅^2 = r̅^2+l^2,
Ω̅ = (1/l)(l-r̅),
Q̅ = (1/(4l^4)) (l-r̅)^2
[ 2ml(r̅^2-l^2) + 4l^3 r̅ +(e^2+g^2)(r̅-l)^2 ].
Performing now the following transformation and rescaling of the physical parameters,
r̅/l = (R-L)/(R+L), t̅ = √(2) T,
l =√(2) L,
m=√(2) M,
e=√(2) E,
g=√(2) G,
we obtain
ds^2=
-F [ dT - 2L(1-cosθ) dφ̅ ]^2
+ dR^2/F + (R^2+L^2)( dθ^2 + sin^2θ dφ̅^2),
where
F = (R^2-2MR-L^2+E^2+G^2)/(R^2+L^2).
This is the standard form of the NUT solution, see Eqs. (12.3), (12.2) and (12.19) in <cit.>. For vanishing NUT parameter (L=0), it reduces to the Reissner-Nordström solution.
It should also be remarked that the inverse transformation to (<ref>) reads
R/L = (l+r̅)/(l-r̅),
so that R=∞⇔r̅=l ⇔Ω̅=0. The conformal infinity is thus approached as R→∞ in the standard coordinates of (<ref>), while it is located at r̅=l in the new (unfamiliar) metric representation (<ref>) when α=0. This may be another “technical” reason why it was previously difficult to identify the genuine accelerating purely NUT black holes in the whole type D class.
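As an independent check of this limit (an illustrative Python/sympy computation added here; it is not part of the original derivation), one can verify that the coefficient of dT^2 obtained from the α=0 metric above via the substitution (<ref>) is exactly -F:

```python
import sympy as sp

R, L, M, E, G = sp.symbols('R L M E G', positive=True)

# alpha = 0 case: substitute r-bar = l (R-L)/(R+L) with l = sqrt(2) L, m = sqrt(2) M, etc.
l, m, q2 = sp.sqrt(2)*L, sp.sqrt(2)*M, 2*(E**2 + G**2)
rb = l*(R - L)/(R + L)

Qb    = (rb - l)**2*(2*m*l*(rb**2 - l**2) + 4*l**3*rb + q2*(rb - l)**2)/(4*l**4)
Omb2  = ((l - rb)/l)**2
rhob2 = rb**2 + l**2

g_tt = -Qb/(Omb2*rhob2)                 # coefficient of d(t-bar)^2
F = (R**2 - 2*M*R - L**2 + E**2 + G**2)/(R**2 + L**2)

assert sp.simplify(2*g_tt + F) == 0     # t-bar = sqrt(2) T, so the dT^2 coefficient is -F
```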
Finally, let us investigate the complementary special subcase l=0, which should lead to the (charged) C-metric without the NUT parameter. To do so, we consider the metric (<ref>)–(<ref>) and perform the change of coordinates
r̅ = l R/(R+2l) , φ̅ = φ + α t̅ .
It is then possible in the transformed line element to take the limit l → 0 to zero NUT parameter, obtaining
2 ds^2 = 1/(1-α R cosθ)^2 [ - Q̅ dt̅^2 + dR^2/Q̅
+(R^2/P̅) dθ^2
+P̅ R^2 sin^2θ dφ^2 ],
where
P̅ = 1 - 2α m cosθ + α^2 (e^2+g^2) cos^2θ ,
Q̅ = ( 1 - α^2R^2)(1 - 2m/R + (e^2+g^2)/R^2) .
This is precisely the standard form of the C-metric solution, as given by Eqs. (14.6), (14.41) in <cit.> (up to a trivial overall conformal rescaling of the line element ds^2 by the constant factor 2). Recall that the roots of Q̅ identify horizons. In general, there are two acceleration horizons located at R_a± = ± 1/α and two black-hole horizons at R_b± = m ±√(m^2-e^2-g^2) <cit.>. For vanishing acceleration (α=0), the metric (<ref>) reduces to the usual form of the Reissner-Nordström solution.
§ TRANSFORMATION TO THE PODOLSKÝ-VRÁTNÝ METRIC FORM
Finally, we will elucidate the relation between the Astorino (A) metric (<ref>) of all type D black holes and the metric presented by Podolský and Vrátný (PV) in 2021 and 2023 <cit.>. The PV metric is an improvement of the GP metric in the sense that the key metric functions are considerably simpler, fully explicit and factorized. This turned out to be convenient for geometrical and physical interpretation of these spacetimes (identification and description of horizons and singularities, finding the global structure, ergoregions, identification of cosmic string, including regions with closed timelike curves if these strings are rotating, etc.).
Actually, the new PV metric has the same general form as the GP metric (<ref>), that is the metric given by Eqs. (6.18) in <cit.>, but the metric functions P̃, Q̃ are much simpler (this was achieved by introducing a new set of the mass and charge parameters, rescaling the metric by a uniquely chosen constant conformal factor, and making a suitable choice of the twist parameter ω). Therefore, we can employ the same initial steps as in Section <ref>, starting from the PD_α form of the metric (<ref>) — that is equivalent to the A form — arriving at (<ref>)–(<ref>). Changing tildes to hats in all the parameters and coordinates (except t and φ which remain the same), we thus obtain
s^2=1Ω̂^2[ -𝒬̂ ρ̂^ 2[ t-(â(1-x̂^2)+2l̂(1-x̂))φ]^2
+ ρ̂^ 2𝒬̂ r̂^2
+ ρ̂^ 2𝒫̂ x̂^2
+𝒫̂ ρ̂^ 2[â t-(r̂^2+(â+l̂)^2) φ]^2],
with
Ω̂ := 1-α'ω r̂ (l̂+â x̂),
ρ̂^ 2 := r̂^ 2+(l̂+â x̂)^2,
𝒫̂(x̂) := ω^2â^2 P'(l̂+â x̂√(ω)),
𝒬̂(r̂) := ω^2 Q' (r̂√(ω)) ,
where the functions P'(x'), Q'(r') are given by (<ref>), (<ref>). Again, this is the general Griffiths-Podolský metric, see Eq. (16.12) in <cit.>.
The conformal factor Ω̂ already has the PV form, so that we can directly identify the Podolský-Vrátný (dimensionless) acceleration parameter as
α̂ = α̃ = α' = α a [ K - α^2(a^2-l^2)].
The PV, GP, and PD acceleration parameters are thus the same, and expressed explicitly in terms on the Astorino parameters,
α̂ = α 12[ a - α^2 a (a^2-l^2)
+ √(a^2+α^4 a^2(a^2-l^2)^2+2α^2(a^2-l^2)(a^2-2l^2)) ].
If α=0 then α̂=0. As in the GP case, for l=0 we get α̂=α a, while for a=0 we get α̂=α^2 l^2.
Concerning the PV rotational parameters â and l̂, as in the previous GP case they are fixed by the condition that the metric function 𝒫̂(x̃) is factorized to 𝒫̂∝ (1-x̂^2), this leads to the same expressions as in (<ref>) and (<ref>), that is
â = ã = √(ω)K-α^2(a^2-l^2) √(I^2 ∓ J^2)/I,
l̂ = l̃ = √(ω)K-α^2(a^2-l^2)
F la .
Recall that the upper (minus) sign in (<ref>) applies when a^2>l^2, while the lower (plus) sign is used when a^2<l^2. With these parameters, the metric function 𝒫̂ in (<ref>) takes the form analogous to (<ref>),
𝒫̂(x̂)= (1-x̂^2) ω^2â^2 1I [K-α^2(a^2-l^2)]
×[ L - 2Ml̂√(ω)
+α^2[(a^2-l^2)L+(e^2+g^2) √(I^2 ∓ J^2) ] l̂^2ω
-(2Mâ√(ω)+2α^2[(a^2-l^2)L+(e^2+g^2) √(I^2 ∓ J^2) ]
l̂âω) x̂
+α^2 â^2ω[(a^2-l^2)L+(e^2+g^2) √(I^2 ∓ J^2) ] x̂^2 ].
So far, the steps were the same as those deriving the GP form of the metric in previous section. However now, instead of fixing the twist parameter ω=ω_0 by (<ref>), we choose a different value ω=ω_1, where
ω_1 = α aα̂I^2 ∓ J^2I L≡I^2 ∓ J^2I [ K - α^2(a^2-l^2)] L .
The function 𝒫̂ then becomes
𝒫̂= (1-x̂^2)[1-2MLl̂+â x̂√(ω_1)
+α^2[a^2-l^2+(e^2+g^2) √(I^2 ∓ J^2)L ](l̂+â x̂√(ω_1))^2].
Now we introduce new mass and charge parameters as
m̂ := 1α̂ ML √(ω_1) ,
ê^2+ĝ^2 :=α^2α̂^2 (e^2+g^2) √(I^2 ∓ J^2)L ω_1 ,
and employ an important identity
α̂^2(â^2-l̂^ 2)=α^2(a^2-l^2) ω_1 ,
so that 𝒫̂ becomes
𝒫̂=(1-x̂^2)[1-2 α̂ m̂ l̂+â x̂ω_1
+α̂^2(â^2-l̂^2+ê^2+ĝ^2)(l̂+âx̂ω_1)^2].
Defining
r̂_± :=m̂±√(m̂^2+l̂^2-â^2-ê^2-ĝ^2),
the metric function takes a compact and fully factorized form
𝒫̂(x̂)=(1-x̂^2)(1-α̂ω_1 r̂_+(l̂+â x̂))
(1-α̂ω_1 r̂_-(l̂+â x̂)).
Notice that such a factorization corresponds to the factorization of the Astorino metric function (<ref>),
Δ_x = (1-x^2)( 1 - α r_+ x ) ( 1 - α r_- x),
where r_± := m ±√(m^2+l^2-a^2-e^2-g^2).
Similarly we analyze the metric function 𝒬̂ in (<ref>). From the definition (<ref>) with (<ref>), we obtain the expression
𝒬̂(r̂)=ω_1^2I^2 ∓ J^2 [I-2α l[1-α^2(a^2-l^2)] r̂√(ω_1)
-α^2(a^2-l^2)I r̂^ 2ω_1]
× [ K-1α^2(a^2-l^2)^2 [(a^2-l^2)L + (e^2+g^2)√(I^2 ∓ J^2) ]
-2α aM r̂√(ω_1)+L [K-α^2(a^2-l^2)] r̂^ 2ω_1].
Using the relation α̂ l̂ = α̃ l̃ = α l √(ω_1) F which follows form (<ref>), and the identity (<ref>), the first (square) bracket can be rewritten in the factorized form
I (1-2 α̂ l̂ r̂ω_1-α̂^2(â^2-l̂^ 2)r̂^2ω_1^2)
=I (1-α̂(â+l̂ )r̂ω_1)
(1+α̂(â-l̂ )r̂ω_1).
The second (square) bracket, applying the relations
K-1α^2(a^2-l^2)^2 = αα̂ a ,
K-α^2(a^2-l^2) = α̂α a ,
see (<ref>), and the definitions of the “hatted” charges and masses (<ref>) and (<ref>), becomes
α̂/α a L [α^2(a^2-l^2)α̂^2 + (ê^2+ĝ^2)1ω_1
- 2 m̂ r̂ω_1 + r̂^2ω_1].
Using the identity (<ref>) we then obtain a factorized expression
α̂α a Lω_1 [r̂^2-2 m̂ r̂+(â^2-l̂^2+ê^2+ĝ^2)]
= α̂α a Lω_1 (r̂-r̂_+)(r̂-r̂_-).
Therefore, the metric function (<ref>) takes the form
𝒬̂= ω_1 α̂α a I LI^2 ∓ J^2 (r̂-r̂_+)(r̂-r̂_-)
(1-α̂(â+l̂ )r̂ω_1)
(1+α̂(â-l̂ )r̂ω_1),
which for the convenient choice (<ref>) of ω_1 simplifies to a nice factorized expression
𝒬̂(r̂)=
(r̂-r̂_+)(r̂-r̂_-)
(1-α̂(â+l̂ )r̂ω_1) (1+α̂(â-l̂ )r̂ω_1).
This directly corresponds to the factorization of the Astorino metric function (<ref>),
Δ_r = (r-r_+)(r-r_-)(1-α r)(1+α r) .
Finally, using (<ref>) the conformal factor (<ref>) reads
Ω̂ = 1-α̂ω_1 r̂ (l̂+â x̂). As argued in section <ref>, the special (nonzero) twist parameter ω_1 can by put to any (nonzero) value ω by the simple rescaling (<ref>) of the acceleration parameter (provided it is nonzero),
α̂ ↦ α̂ ω_1ω .
It explicitly restores a freedom to choose ω in Ω̂, and also in
the metric functions (<ref>) and (<ref>). The metric (<ref>) thus becomes
s^2=1Ω̂^2[ -𝒬̂ ρ̂^ 2[ t-(â(1-x̂^2)+2l̂(1-x̂))φ]^2
+ ρ̂^ 2𝒬̂ r̂^2
+ ρ̂^ 2𝒫̂ x̂^2
+𝒫̂ ρ̂^ 2[â t-(r̂^2+(â+l̂)^2) φ]^2],
with
Ω̂ = 1-α̂ω r̂ (l̂+â x̂),
ρ̂^ 2 = r̂^ 2+(l̂+â x̂)^2,
𝒫̂(x̂) =(1-x̂^2)(1-α̂ω r̂_+(l̂+â x̂))
(1-α̂ω r̂_-(l̂+â x̂)),
𝒬̂(r̂) = (r̂-r̂_+)(r̂-r̂_-)
(1-α̂(â+l̂ )r̂ω) (1+α̂(â-l̂ )r̂ω) .
It now only remains to introduce the angular coordinate via the relation x̂=cosθ, and choose the twist parameter to be
ω :=â^2+l̂^ 2â .
This finally leads to
ds^2=(1/Ω̂^2)[ -(Q̂/ρ̂^2)[ dt-(â sin^2θ+2l̂(1-cosθ)) dφ]^2
+ (ρ̂^2/Q̂) dr̂^2
+ (ρ̂^2/P̂) dθ^2
+(P̂/ρ̂^2) sin^2θ [ â dt-(r̂^2+(â+l̂)^2) dφ]^2],
where
Ω̂ = 1-(α̂ â/(â^2+l̂^2)) r̂ (l̂+â cosθ) ,
ρ̂^2 = r̂^2+(l̂+â cosθ)^2,
P̂(θ) =(1-(α̂ â/(â^2+l̂^2)) r̂_+(l̂+â cosθ))
(1-(α̂ â/(â^2+l̂^2)) r̂_-(l̂+â cosθ)),
Q̂(r̂) = (r̂-r̂_+)(r̂-r̂_-)
(1-α̂ â ((â+l̂)/(â^2+l̂^2)) r̂) (1+α̂ â ((â-l̂)/(â^2+l̂^2)) r̂).
This is exactly the Podolský-Vrátný form of the metric, as expressed by Eqs. (1)–(5) in <cit.>.
Let us emphasize that the choice (<ref>), although convenient in many situations, prevents us from identifying accelerating “purely NUT” black holes because â=0 completely removes the acceleration parameter α̂ from the metric (<ref>)–(<ref>). However, because ω is an additional free parameter, the PV metrics (<ref>) with different values of the twist ω are equivalent by the rescaling (<ref>) of the acceleration parameter (unless ω=0). This keeps open the possibility to explicitly identify the accelerating purely NUT black holes (without the Kerr-like rotation, charged, of type D) even within the PV class of metrics for a better choice of ω than (<ref>), such that the degeneracy â=0 ⇒α̂=0 is removed. This possibility was explicitly demonstrated in the GP form of the metric, see Eqs. (<ref>)–(<ref>).
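For practical work with this representation, the factorized functions are straightforward to evaluate; the following sketch (Python/NumPy; the function name and sample values are ours) returns r̂_± and the metric functions P̂(θ), Q̂(r̂) for the twist choice ω = (â^2+l̂^2)/â:

```python
import numpy as np

def pv_metric_functions(alpha, a, l, m, e, g):
    """Hatted PV parameters as inputs; requires m^2 + l^2 - a^2 - e^2 - g^2 >= 0."""
    root = np.sqrt(m**2 + l**2 - a**2 - e**2 - g**2)
    rp, rm = m + root, m - root
    s = alpha*a/(a**2 + l**2)                       # = alpha*a/omega
    P = lambda th: (1 - s*rp*(l + a*np.cos(th)))*(1 - s*rm*(l + a*np.cos(th)))
    Q = lambda r: (r - rp)*(r - rm)*(1 - s*(a + l)*r)*(1 + s*(a - l)*r)
    return rp, rm, P, Q

rp, rm, P, Q = pv_metric_functions(0.1, 0.9, 0.3, 1.0, 0.2, 0.0)
print(rp, rm, Q(rp), Q(rm))                         # Q vanishes at the black-hole horizons r = r_±
```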
§ SUMMARY OF THE SPECIAL CASES
Let us summarize special cases of the Astorino (A, or A^+) metric in which the physical parameters are set to zero, and the corresponding Plebański-Demiański (PD), Griffiths-Podolský (GP) and Podolský-Vrátný (PV) parameters.
§.§ The limit l → 0
As derived in (<ref>), the dimensionless PD acceleration parameter is α' = α a. After restoring the correct physical dimensionality of the GP and PV parameters in Subsection <ref>, the result is simple, namely
α̃ = α̂ = α ,
ã =â=a ,
l̃ =l̂=l=0 .
The acceleration, Kerr-like and NUT parameters in A, A^+, GP, and PV metric forms are thus identical.
§.§ The limit α→ 0
From (<ref>) we obtain
α' →α a, so that α' = α̃ = α̂→ 0.
The Kerr-like and NUT parameters (rescaled to a correct physical dimension) are
ã =â=a ,
l̃ =l̂=l ,
respectively. This gives us the Kerr-Newman-NUT solution in A, A^+, GP, and PV metrics.
§.§ The limit a → 0
From (<ref>)–(<ref>) it follows that
ã/l̃ = (√(I^2 ∓ J^2)/(1-α^2(a^2-l^2))) (a/l) → 2 α l/(1+α^2 l^2).
In the limit a → 0 the ratio ã/l̃ thus remains finite, and nonzero (unless α l=0). It means that taking the limit in which Astorino's Kerr-like parameter a tends to zero does not bring either of the GP parameters ã or l̃ to zero.
Thus we conclude that the accelerating purely NUT black hole solution with a=0, identified by Astorino, corresponds (after a suitable coordinate transformations presented in previous sections) to “accelerating Kerr-NUT solution” with a non-zero Kerr-like GP parameter ã0 (and l̃0), and PV parameter â0 (and l̂0).
This also explains why in the A (and A^+) metric form the acceleration parameter α is not redundant in the case a=0, as opposed to the GP (and PV) forms. Similar considerations show that the GP, PV, and PD acceleration parameters are non-zero in the case a=0. Indeed, from (<ref>), (<ref>) we obtain that the dimensionless acceleration parameters are given by
α̃ = α̂ = α' = α^2 l^2 ,
which does not vanish when α l ≠ 0.
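This behaviour is easily confirmed numerically (an illustrative Python/NumPy snippet added here; sample values ours), using the uniform expression a^2(I^2 ∓ J^2) = a^2[1+α^2(a^2-l^2)]^2 + 4α^2 l^2(l^2-a^2):

```python
import numpy as np

alpha, l = 0.2, 1.5
limit = 2*alpha*l/(1 + alpha**2*l**2)
for a in [1e-1, 1e-3, 1e-6]:
    I = 1 + alpha**2*(a**2 - l**2)
    a_sqrtIJ = np.sqrt(a**2*I**2 + 4*alpha**2*l**2*(l**2 - a**2))   # = a*sqrt(I^2 -/+ J^2)
    ratio = a_sqrtIJ/(l*(1 - alpha**2*(a**2 - l**2)))               # = a_tilde/l_tilde
    print(a, ratio, limit)                                          # ratio -> 2*alpha*l/(1+alpha^2 l^2)
```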
§ CONCLUSIONS
We thoroughly studied a large class of spacetimes representing black holes with mass m, rotation a, NUT parameter l, acceleration α, and electric and magnetic charges e and g (generating electromagnetic field aligned with both principal null directions of the type D Weyl tensor).
In particular, we found relations between coordinates and physical parameters of various metric forms of such exact solutions to the Einstein-Maxwell system, namely those of Plebański-Demiański (PD), Astorino (A, improved here to a more compact metric A^+), Griffiths-Podolský (GP), and Podolský-Vrátný (PV). The references to original articles, nomenclature and conventions are summarized in Table <ref>.
Main conclusions resulting from our investigation are:
* If properly mapped and physically interpreted, all these representations cover the complete class of such type D black holes.
* The physical parameters of the A metric representation (and thus also A^+) are a very good choice for describing the type D black holes. Moreover, they can be set to zero in any order, leading to expected special cases (and further subcases), without any unpleasant coordinate degeneracies or divergencies.
* Explicit coordinate transformations and relations between the parameters of A, A^+, PD, GP and PV metric forms (and their variants which also include the twist parameter ω) were found and discussed. These mutual relations are shown in the scheme on Figure <ref> by arrows, with the references to the corresponding Sections of our paper.
* We clarified the role of the twist parameter ω, related to both the Kerr-like rotation parameter a and to the NUT parameter l.
* Expressed in terms of the new convenient physical parameters m, a, l, α, e, g, the key A^+, PD, GP and PV metric functions are explicitly factorized into the product of quadratic expressions. This is very helpful for the physical interpretation, namely the identification of horizons and axes.
* Special attention was paid to the main subclasses of type D black holes, namely those with no NUT (l=0), no acceleration (α=0) and — until recently elusive — black holes with no Kerr-like rotation a=0.
* We proved that in the subclasses l=0 and α=0, the rotation, NUT, and acceleration parameters are the same in A, GP, and PV metric forms, that is a=ã=â, l=l̃=l̂, and α=α̃= α̂.
* On the other hand, in the non-rotating subclass a=0, the Kerr-like parameters ã=â and the NUT parameters l̃=l̂ were not properly identified in previous GP and PV metric forms <cit.>. The correct general relations are given by (<ref>)–(<ref>), which in the case a=0 reduce to (<ref>).
* The accelerating (charged) purely NUT black holes of type D (a=0, but α l nonzero) thus have ã=â0 in the GP and PV forms. With this correction, we were able to derive the GP (equivalent to PV) metric form of such black holes (<ref>)–(<ref>).
* In this metric it is possible to independently set α=0 and l=0, obtaining thus the (charged) NUT solution without acceleration and the (charged) C-metric without the NUT parameter, respectively, in their usual form.
§ ACKNOWLEDGMENTS
This work has been supported by the Czech Science Foundation Grant No. GAČR 23-05914S.
10
Stephanietal:2003
H. Stephani, D. Kramer, M. MacCallum, C. Hoenselaers and E. Herlt,
Exact Solutions of Einstein's Field Equations
(Cambridge University Press, Cambridge, 2003).
GriffithsPodolsky:2009
J. B. Griffiths and J. Podolský,
Exact Space-Times in Einstein's General Relativity
(Cambridge University Press, Cambridge, 2009).
PlebanskiDemianski:1976
J. F. Plebański and M. Demiański,
Rotating, charged and uniformly accelerating mass in general relativity,
Ann. Phys. (N.Y.) 98 (1976) 98–127.
Debever:1971
R. Debever,
On type D expanding solutions of Einstein–Maxwell equations,
Bull. Soc. Math. Belg. 23 (1971) 360–76.
GriffithsPodolsky:2005
J. B. Griffiths and J. Podolský,
Accelerating and rotating black holes,
Class. Quantum Grav. 22 (2005) 3467–79.
GriffithsPodolsky:2006
J. B. Griffiths and J. Podolský,
A new look at the Plebański–Demiański family of solutions,
Int. J. Mod. Phys. D 15 (2006) 335–69.
PodolskyGriffiths:2006
J. Podolský and J. B. Griffiths,
Accelerating Kerr–Newman black holes in (anti-)de Sitter space-time,
Phys. Rev. D 73 (2006) 044018 (5pp).
PodolskyVratny:2021
J. Podolský and A. Vrátný,
New improved form of black holes of type D,
Phys. Rev. D 104 (2021) 084078 (26pp).
PodolskyVratny:2023
J. Podolský and A. Vrátný,
New form of all black holes of type D with a cosmological constant,
Phys. Rev. D 107 (2023) 084034 (29pp);
Erratum: Phys. Rev. D 108 (2023) 129902(E) (2pp).
VandenBergh:2017
N. Van den Bergh,
Algebraically special Einstein–Maxwell fields,
Gen. Relativ. Grav. 49 (2017) 9 (16pp).
Astorino:2024a
M. Astorino,
Equivalence principle and generalised accelerating black holes from binary systems,
Phys. Rev. D 109 (2024) 084038 (16pp).
Astorino:2024b
M. Astorino,
Most general type-D black hole and accelerating Reissner-Nordstrom-NUT-(A)dS,
arXiv2404.06551 [gr-qc] (9pp).
PodolskyVratny:2020
J. Podolský and A. Vrátný,
Accelerating NUT black holes,
Phys. Rev. D 102 (2020) 084024 (27pp).
ChngMannStelea:2006
B. Chng, R. Mann and C. Stelea,
Accelerating Taub-NUT and Eguchi–Hanson solitons in four dimensions,
Phys. Rev. D 74 (2006) 084031 (9pp).
Astorino:2023elf
M. Astorino and G. Boldi,
Plebanski-Demianski goes NUTs (to remove the Misner string),
JHEP 08 (2023) 085 (35pp).
Astorino:2023b
M. Astorino,
Accelerating and charged type I black holes,
Phys. Rev. D 108 (2023) 124025 (24pp).
§ DERIVATION OF THE TRANSFORMATION TO THE PLEBAŃSKI-DEMIAŃSKI FORM OF THE METRIC, AND IDENTIFICATION OF THE PHYSICAL PARAMETERS
The A^+ metric (<ref>) has a general form which is very similar to the Griffiths–Podolský representation <cit.> (see also Eq. (16.18) in <cit.>, and <cit.>)
of the family of type D black holes in the Plebański-Demiański class of electrovacuum solutions with Λ.
Nevertheless, a closer look at both the metrics reveals a crucial difference. In the Griffiths–Podolský representation, the functions A(x) and C(r) are constants, namely
A ≡ A_c = 1,
C ≡ C_c = a
(there is a nice agreement with (<ref>), (<ref>) for α=0).
However, we can remedy this problem by a suitable transformation of the coordinates x ↦ X and r ↦ R given by
X ≡ A_c xA(x),
R ≡ C_c rC(r).
We also introduce new metric functions, expressed in these coordinates, as
Δ_X(X) ≡A_c^2A^2 Δ_x,
Δ_R(R) ≡C_c^2C^2 Δ_r,
and analogously
Ω̃^2(R,X) ≡A_c C_cA C Ω^2,
ρ̃^ 2(R,X) ≡A_c C_cA C ρ^2,
so that
ρ̃^ 2Ω̃^2 = ρ^2/Ω^2.
Then the A^+ metric (<ref>) takes the form
s^2=1Ω̃^2[
- Δ_Rρ̃^ 2(A_c t - ℬ φ)^2
+ Δ_Xρ̃^ 2(C_c t + 𝒟 φ)^2
+ C_f ρ̃^ 2( R^2Δ_R + X^2Δ_X )]
,
where A_c, C_c are constants, as desired, and the new metric functions are
ℬ(X) ≡ A_c BA,
𝒟(R) ≡ C_c DC.
It is now possible to find an explicit relation between the Astorino representation of all type D black holes, written in the form (<ref>), and the Plebański-Demiański class of the same solutions, written in the Griffiths–Podolský representation. In particular, this will elucidate the problem of purely NUT accelerating black holes of type D without the Kerr-like rotation (i.e., black holes with m, l, α 0 and a=0, admitting also the charges e, p and a cosmological constant Λ). These are clearly contained in the Astorino class of solutions (<ref>) but so far have not been identified in the Plebański-Demiański family.
The procedure starts with an integration of (<ref>),
X=A_c∫ xA(x),
R=C_c∫ rC(r),
of the specific quadratic functions A(x) and C(r) given by (<ref>) and (<ref>), respectively. Fortunately, it can be done explicitly, yielding for a^2>l^2 quite a simple transformation
x = 1α√(a^2-l^2) tanh[α√(a^2-l^2)A_c (X + X_0) ],
r = √(a^2-l^2)α a tan[α√(a^2-l^2)C_c (R + R_0) ] - lα a,
where X_0, R_0 are free constants of integration. Notice that for very small values of X and R we obtain just linear relations
x ≈1A_c (X + X_0) and
r + lα a≈a^2-l^2C_c a (R + R_0).
Here we assume a^2>l^2, i.e., that the Kerr-like rotation parameter a is greater than the NUT parameter l, and a>0. The complementary case a^2<l^2 can be treated similarly by changing the trigonometric functions to hyperbolic ones, and vice versa. It leads to the same expressions, but with the term √(a^2-l^2) generalized to √(|a^2-l^2|), and 2K := I + √(I^2+J^2) instead of the definition 2K := I + √(I^2-J^2) employed in (<ref>). It also covers various special cases when (some of) parameters are zero.
Next we have to evaluate the functions A(X), B(X), C(R), D(R) given by (<ref>)–(<ref>). Then we obtain ℬ(X), 𝒟(R) by using (<ref>), and finally
Δ_X(X), Δ_R(R), Ω̃^2(R,X), ρ̃^ 2(R,X) by using (<ref>), (<ref>).
A direct substitution into (<ref>), (<ref>) gives
1/A(X) = cosh^2ξ ,
1/C(R) = aa^2-l^2 cos^2χ,
where we introduced convenient dimensionless parameters
ξ≡α √(a^2-l^2)A_c (X + X_0) ≡ξ' + ξ_0 ,
χ≡α √(a^2-l^2)C_c (R + R_0) ≡χ' + χ_0 .
General expressions for B(X), D(R) look more complicated, namely
B(X) = aα^2(a^2-l^2)tanh^2 ξ + 2lα√(a^2-l^2)tanhξ + a ,
D(R) = 1α^2a^2[(a^2-l^2)tan^2χ - 2l√(a^2-l^2) tanχ + [l^2+α^2 a^2(l^2-a^2) ] ] ,
and thus using (<ref>)
ℬ(X) = A_c aα^2(a^2-l^2)[
sinhξ (I sinhξ + J coshξ) + α^2(a^2-l^2)
],
𝒟(R) = C_cα^2(a^2-l^2)a[
sinχ ( (a^2 I - 2 l^2) sinχ - 2l√(a^2-l^2)cosχ)
+l^2-α^2(a^2-l^2)a^2],
where the constants I, J, and K are defined as
I := 1+α^2(a^2-l^2) ,
J := 2 l αa √(a^2-l^2) ,
2K := I + √(I^2-J^2) .
Following the idea employed previously in
[Debever et al 1984] (see Section 4 in and subcases A2 and A3 therein), these involved functions can be considerably simplified by employing a remaining coordinate freedom encoded in the integration constants X_0, R_0 of (<ref>). By their unique choice
tanh(2 ξ_0) = -J/I,
tan(2 χ_0)= 2l√(a^2-l^2)a^2 I - 2 l^2 ≡aα Ja^2 I - 2 l^2,
(assuming a^2 I > 2 l^2, which is natural because it admits a special case l=0) we achieve
ℬ(X) = A_c ( b_0 + b_1 sinh^2 ξ' ),
𝒟(R) = C_c ( d_0 + d_1 sin^2 χ' ),
where
the constant coefficients read
b_1 = + d_1 = a/α^2(a^2-l^2) √(I^2-J^2),
b_0 = - d_0 = a/α^2(a^2-l^2) (K - 1).
Next step is to apply a transformation of all the coordinates
t= 12[α√(a^2-l^2) (√(I^2-J^2)-I+2) τ'+(√(I^2-J^2)+I-2) ϕ'/α√(a^2-l^2) ],
φ= 1a[α√(a^2-l^2) ϕ'-(α√(a^2-l^2) )^3 τ'],
tanhξ'= α √(a^2-l^2) x' ,
tanχ'= α √(a^2-l^2) r' ,
so that
( t - b_0 φ) = ( t + d_0 φ) = α√(a^2-l^2) √(I^2-J^2) τ' ,
and
b_1 sinh^2 ξ' = a √(I^2-J^2) x'^2/1-α^2(a^2-l^2) x'^2 ,
d_1 sin^2 χ' = a √(I^2-J^2) r'^2/1+α^2(a^2-l^2) r'^2 .
This leads to a great simplification of the key combinations of the functions
A_c ( t - ( b_0 + b_1 sinh^2 ξ' ) φ) = A_c α√(a^2-l^2)√(I^2-J^2)/1-α^2(a^2-l^2) x'^2 ( τ' - x'^2 ϕ' ) ,
C_c ( t + ( d_0 + d_1 sin^2 χ' ) φ) = C_c α√(a^2-l^2)√(I^2-J^2)/1+α^2(a^2-l^2) r'^2 ( τ' + r'^2 ϕ' ).
Using (<ref>), (<ref>), (<ref>) we also obtain ρ̃^ 2=A_c 𝒟+C_c ℬ. The functions 𝒟, ℬ are given by (<ref>), (<ref>), (<ref>), so that (with the help of the relation d_0+b_0=0)
ρ̃^ 2 = A_c C_c a √(I^2-J^2) (r'^2+x'^2)/[1+α^2(a^2-l^2) r'^2][1-α^2(a^2-l^2) x'^2] .
If we now conveniently introduce metric functions Ω'(r', x'), P(x'), Q(r') as
Ω'^ 2≡ a [1+α^2(a^2-l^2) r'^2]
[1-α^2(a^2-l^2) x'^2]/A_c C_c α^2(a^2-l^2) √(I^2-J^2) Ω̃^2 ,
P' ≡ A_c^-2 [1-α^2(a^2-l^2) x'^2]^2 Δ_X ,
Q' ≡ C_c^-2 [1+α^2(a^2-l^2) r'^2]^2 Δ_R ,
the metric (<ref>) takes a nice form
s^2=1Ω'^ 2[
- Q'r'^2+x'^2 ( τ' - x'^2 ϕ' )^2
+ P'r'^2+x'^2 ( τ' + r'^2 ϕ' )^2
+ a^2 C_f/α^2(a^2-l^2) (r'^2+x'^2)
( r'^2Q' + x'^2P' )].
Actually, this is a general Plebański-Demiański metric representing all (double aligned, non-null) solutions of Einstein-Maxwell-Λ equations of algebraic type D, including black holes of this type, see Eq. (3.30) in <cit.>, and Chapter 16 of <cit.> in which a special gauge α^2(a^2-l^2)=-1 and a^2 C_f=-1 is considered. The former condition can always be obtained by a rescaling on the angular coordinate ϕ', while the latter is achieved by the choice of the free constant C_f. Also, these operations relate the metrics (<ref>) and (<ref>).
To complete the transformation from the Astorino new metric representation (<ref>)-(<ref>) to the Plebański-Demiański metric representation (<ref>), it remains to prove that the conformal factor (<ref>) has the form Ω' = 1-α' r'x', with a suitable acceleration parameter α', and also that the metric functions P'(x'), Q'(r') are quartic functions of the respective coordinates.
Moreover, this explicit transformation will yield a unique relation between the physical parameters of the Astorino metric and the (purely mathematical) Plebański-Demiański parameters, and with the physical parameters in the Griffiths-Podolský form of these black-hole metrics. In particular, it will give their fully general form, which will identify the elusive (overlooked) accelerating NUT black holes without the Kerr-like rotation parameter a.
To this end, we have to explicitly evaluate these metric functions. We start with the conformal factor Ω̃^2 given by (<ref>), (<ref>),
Ω̃^2= A_c C_c/A C (1-α r x)^2.
In view of (<ref>), (<ref>), (<ref>) we get
Ω̃^2 = A_c C_c aa^2-l^2 [ cosχ coshξ
- 1α√(a^2-l^2) sin (χ - β) sinhξ ]^2,
where we defined a useful auxiliary constant β as
sinβ≡l/a ,
so that cosβ = √(a^2-l^2)/a,
and tanβ = l/√(a^2-l^2).
Recalling (<ref>)
we rewrite (<ref>) as
Ω̃^2 = A_c C_c aα^2(a^2-l^2)^2 [ -α√(a^2-l^2) cos(χ' + χ_0) cosh(ξ' + ξ_0)
+sin(χ' + (χ_0 - β)) sinh(ξ' + ξ_0) ]^2,
where
tanh(2 ξ_0) = -J/I,
tan(2 χ_0) = aα Ja^2 I - 2 l^2
see (<ref>), (<ref>). Using standard identities for goniometric and hyperbolic functions, one derives the equivalent expressions (to be employed below), namely
cosh(2 ξ_0) = I/√(I^2-J^2),
cos(2 χ_0) = a^2 I - 2 l^2/a^2 √(I^2-J^2),
so that
cosh^2ξ_0 = I+√(I^2-J^2)/2√(I^2-J^2),
cos^2χ_0 = a^2(√(I^2-J^2)+I) - 2l^2/2a^2 √(I^2-J^2),
sinh^2ξ_0 = I-√(I^2-J^2)/2√(I^2-J^2),
sin^2χ_0 = a^2(√(I^2-J^2)-I) + 2l^2/2a^2 √(I^2-J^2),
and thus
tanhξ_0 = -J/2K,
tanχ_0 = a/α J/2(a^2 K-l^2).
Notice that l=0 implies J=0=β, so that ξ_0 = 0 = χ_0.
Applying these relations, it is possible to prove that
α√(a^2-l^2) sinχ_0 coshξ_0 = -cos(χ_0-β) sinhξ_0,
α√(a^2-l^2) cosχ_0 sinhξ_0 = +sin(χ_0-β) coshξ_0.
Using these two identities, after employing in (<ref>) usual formulae for the sum in the argument of trigonometric and hyperbolic function , the terms containing cosχ' sinhξ' and sinχ' coshξ' vanish, while the two remaining combine in such a way that
Ω̃^2 = A_c C_c aa^2-l^2 [ cosχ_0/coshξ_0 cosχ' coshξ'
+sinχ_0/sinhξ_0 sinχ' sinhξ' ]^2.
Performing now the transformation (<ref>), (<ref>) we get
Ω̃^2 = A_c C_c a c_0^2 a^2-l^2 ( 1 - α' r' x' )^2/[1+α^2(a^2-l^2) r'^2][1-α^2(a^2-l^2) x'^2],
where
c_0^2 = cos^2χ_0/cosh^2ξ_0, α' = α^2(a^2-l^2) tanχ_0/tanhξ_0 .
The conformal factor (<ref>) thus, using (<ref>) and (<ref>),
takes the form
Ω'^ 2 = S^2 ( 1 - α' r' x' )^2,
where the dimensionless parameters are
α' = α a (a^2-l^2)K/a^2K-l^2,
S^2 = 1/α^2(a^2-l^2)^2 a^2K-l^2/K √(I^2-J^2) ≡a^2α^4(a^2-l^2)^3K-1√(I^2-J^2).
For l=0, these relations simplify considerably to α' = α a and S^-2=α^2 a^2 (1+α^2a^2). On the contrary, for a=0, it reduces to α' = α^2 l^2.
Using these relations, we can rewrite the metric (<ref>) in the form:
s^2=1S^2(1-α'r'x')^2[
- Qr'^2+x'^2 ( τ' - x'^2 ϕ' )^2
+ Pr'^2+x'^2 ( τ' + r'^2 ϕ' )^2
+ a^2 C_f/α^2(a^2-l^2) (r'^2+x'^2) ( r'^2Q + x'^2P )].
By rescaling time and angular coordinates
τ' ↦ S τ' , ϕ' ↦ S ϕ' ,
and introducing a new constant
C :=a^2 C_fα^2(a^2-l^2)S^2 ,
the metric takes exactly the form (<ref>). Thus, to conclude, the required transformation of coordinates from the A to the PD form of the spacetimes is given by
t = aα(a^2-l^2) √(K-1√(I^2-J^2)) [[K-α^2(a^2-l^2)] τ' + K-1/α^2(a^2-l^2) ϕ' ],
φ = 1α(a^2-l^2)√(K-1√(I^2-J^2)) [ϕ'-α^2(a^2-l^2)^2 τ' ],
together with the transformation of x and r given by (<ref>) and (<ref>).
Finally, the metric functions P' and Q' are explicitly obtained from the Astorino metric functions Δ_X and Δ_R using the relations (<ref>) and (<ref>), respectively. It turns out to be the quartics (<ref>), where the parameters in the Plebański-Demiański metric functions are given by (<ref>).
Recall that throughout this Appendix we assumed a^2>l^2. The case a^2<l^2 leads to similar expressions, but with √(a^2-l^2) replaced by √(|a^2-l^2|), and I^2-J^2 replaced by I^2+J^2. In particular, it generalizes the definition of the parameters J and K in (<ref>) to (<ref>), and yields a more general transformation (<ref>).
|
http://arxiv.org/abs/2409.03048v1 | 20240904193339 | Entanglement content of kink excitations | [
"Luca Capizzi",
"Michele Mazzoni"
] | cond-mat.str-el | [
"cond-mat.str-el",
"hep-th",
"quant-ph"
] |
equationsection
bibliography.bib
ł()̊⟨⟩#1 #1
#1 #1
Entanglement content of kink excitations
Luca Capizzi^1, Michele Mazzoni^2
==========================================
^1Université Paris-Saclay, CNRS, LPTMS, 91405, Orsay, France.
^2Department of Mathematics, City, University of London, 10 Northampton Square EC1V 0HB, UK.
§ ABSTRACT
Quantum one-dimensional systems in their ordered phase admit kinks as elementary excitations above their symmetry-broken vacua. While the scattering properties of the kinks resemble those of quasiparticles, they have distinct locality features that are manifest in their entanglement content. In this work, we study the entanglement entropy of kink excitations. We first present detailed calculations for specific states of a spin-1/2 chain to highlight the salient features of these excitations. Second, we provide a field-theoretic framework based on the algebraic relations between the twist fields and the semilocal fields associated with the excitations, and we compute the Rényi entropies in this framework. We obtain universal predictions for the entropy difference between the excited states with a finite number of kinks and the symmetry-broken ground states, which do not depend on the microscopic details of the model in the limit of large regions. Finally, we discuss some consequences of the Kramers-Wannier duality, which relates the ordered and disordered phases of the Ising model, and we explain why, counterintuitively, no explicit relations between those phases are found at the level of entanglement.
§ INTRODUCTION
In the last decades, several studies regarding the entanglement of quantum systems shed light on the fundamental properties of quantum correlations in the ground state of many-body systems <cit.>. In particular, many exact results have been provided in the context of critical ground states via Conformal Field Theory (CFT) techniques <cit.>. Moreover, analytical investigations of gapped systems close to criticality have been given in a series of works by means of form-factor bootstrap for integrable systems <cit.>. Despite the abundance of works for quantum systems with a unique vacuum, for instance, many-body models in their disordered phase, only a few results are present for the symmetry-broken ground states, especially in the context of Quantum Field Theory (QFT). Considering the importance of spontaneous symmetry breaking in modern physics, this fact is quite surprising.
Since there are Kramers-Wannier dualities connecting ordered and disordered phases <cit.>, one could erroneously argue that there should be no discernible distinction in the entanglement content between these phases, and consequently identical predictions should arise at corresponding dual points. However, this is not the case, as previous exact lattice results for the Ising chain pointed out <cit.>. While the presence of these discrepancies is known (see also Ref. <cit.>), some fundamental issues regarding their origin are not completely understood: is there a relation between the ground state entanglement of ordered and disordered phases? Is there a quantity dual to the entanglement entropy under Kramers-Wannier?
In a previous work <cit.>, we have shown how the universal corrections to the area law found in Ref. <cit.> for a large class of theories with a single vacuum do not apply to the Ising Field Theory in its ordered phase (where spontaneous symmetry breaking arises and two vacua are present); that was traced back to the topological nature of the kink excitations and the corresponding selection rules for the form factors. A natural question, that is the focus of this work, is to understand whether similar discrepancies can be found for the low-lying excited states as well. Namely, in the series of works <cit.> the excess of entropy for the particle excitations in the disordered phase of a 1+1D gapped many-body system was proved to be universal. Those predictions boil down to general formulae with a simple semiclassical interpretation, and we ask whether similar results can be found for the ordered phase. We anticipate that we will find universal results different from those of the disordered phase; discrepancies are manifest, for example, in the Rényi entropies of tripartite geometries in the presence of a single kink.
It is worth mentioning that the importance of the intrinsic non-locality of the kinks has been remarked for the quench dynamics in quantum spin chains in Refs. <cit.>. Moreover, topological frustration, as for antiferromagnetic chains with an odd number of sites, forces the ground state to be a one-kink state, with many counterintuitive consequences highlighted in Refs. <cit.>. Therefore, a deeper understanding of the entanglement content of kinks is imperative to correctly interpret a plethora of phenomena arising in one-dimensional quantum systems.
We organize the manuscript as follows. In Section <ref> we first introduce the problem and obtain some explicit results on the lattice, and then, in the spirit of <cit.>, we provide a qubit picture that is sufficient to infer the correct universal contribution to the entanglement entropy arising from the kinks. Section <ref>, which is the core of our work, contains a field-theoretic explanation of the mechanism. In particular, we make use of the replica trick to compute the Rényi entropies via the so-called twist fields, and we trace back the origin of the novel entanglement properties to the algebraic relations between the twist fields and the disorder lines associated with the kinks. Finally, in Section <ref> we analyze the Kramers-Wannier duality and we explain the faith of the density matrix associated with local regions under the duality within a C^*-algebraic approach; this allows us to understand in detail why the entropy is not self-dual and we give an explicit construction of its dual. Conclusions are given in Section <ref>.
§ PREDICTIONS FROM SPIN CHAINS AND THE QUBIT PICTURE
In this section, we compute the Rényi entropies of states of a 1D spin chain containing one or two kink excitations and we show that, in the case of a spatial tripartition, our predictions differ from those obtained for localised quasiparticle excitations. For the latter, the excess of Rényi and entanglement entropy with respect to the ground state is well known in the case of CFTs <cit.>, massive integrable QFTs <cit.> and spin chains <cit.>. In particular, in a state of a massive QFT containing a single localised excitation, the excess of entanglement entropy of a region A with fixed relative volume r= V_A / V in the large volume limit is given by
Δ S = -r log r - (1-r)log(1-r).
This result relies on the locality of the excitation, and it also applies whenever A is made of disjoint connected regions. On the other hand, if we consider the topological excitations of a spin chain, such as the kinks in the ordered phase of the Ising model, the situation is different. Indeed, these excitations separate regions of different magnetisations, and thus by measuring the local magnetisation at a given point one could tell, in principle, whether the domain wall is found to the right or to the left of that point. The same is not possible for (local) quasiparticles, and physically one expects that this is the main feature responsible for the difference between kinks and particles at the level of entanglement content: in particular, we will show that (<ref>) does not hold for kinks if A consists of multiple disjoint regions.
The situation becomes more complex in the presence of two or more kinks, leading to a richer phenomenology. In particular, if two kinks are sufficiently close they behave as a single collective particle (bound state): measuring the magnetisation at a distant point does not give information on the position of the bound state. Conversely, if the kinks are well separated, a measurement at a given point can reveal whether the two kinks belong to the opposite sides of the system. We depict the two scenarios in Figure <ref>.
In this section, we study the low-lying spectrum of a spin chain deep in its order phase (e.g. the quantum Ising chain at small transverse field), where the two vacua become product states with opposite magnetisation and the kinks are plane-wave superpositions of domain wall configurations: this limiting regime, where the correlation functions of the vacua shrink to zero, is paradigmatic in its simplicity and it allows to elucidate the difference between quasiparticles and kinks. For states with one or two kinks, we are able to perform exact calculations on the lattice. However, since the task becomes challenging as the number of kinks increases, we also propose an alternative picture based on multi-qubit states; we adapt the qubit picture of Refs. <cit.>, initially introduced to describe the entanglement entropy and negativity of localised excitations, to deal with kinks.
§.§ Rényi entropies of some one- and two-kink states in a spin-1/2 chain
§.§.§ Structure of the Hilbert space
We consider a spin-1/2 chain of length L, whose Hilbert space is
ℋ = Span{|s_1,…,s_L⟩ , s_j = ±}≃łℂ^2^̊⊗ L.
We regard the two states |+… +⟩ and |-… -⟩ as vacua, related by a global ℤ_2 symmetry, and we parametrise the other configurations via domain walls at different lattice sites as explained below.
First, according to the sign of the spin on the left boundary, we identify two sectors (of dimension 2^L-1) defined as
ℋ^(±)≡Span{|±,s_2,…,s_L⟩, s_j = ±},
and such that ℋ = ℋ^(+)⊕ℋ^(-). Additionally, we split ℋ^(±) into sectors with a fixed number of (N) domain walls as
ℋ^(±) = ⊕^L-1_N=0ℋ^(±)_N.
Specifically, ℋ^(±)_N is generated by the configurations starting from ± with N changes of sign (domain walls) between neighboring sites.
In the following, without loss of generality, we will focus on the sector ℋ^(+), with the first spin being up. For N=0, ℋ^(±)_0 is generated by |+,…,+⟩, meaning that no domain walls are present. For N=1, ℋ^(+)_1 admits the following basis
|K_+-(j)⟩≡|+,+,…,+,-,…,-⟩, j=1,…,L-1,
with a domain wall between the positions j and j+1. Similarly, for N=2, ℋ^(+)_2 is spanned by the states
|K_+-(j_1)K_-+(j_2)⟩≡|+,+…,+,-,…,-,+,…,+⟩, 1 ≤ j_1 <j_2 ≤ L-1,
with the first domain wall between sites j_1, j_1 +1, and the second between sites j_2 and j_2 +1. More generally, a basis for ℋ^(+)_N is parametrised by sequences of N ordered positions (j_1,…,j_N), belonging to {1,…,L-1}, and therefore the dimension of this sector is
dimℋ^(+)_N = L-1N;
this is consistent with ∑^L-1_N=0dimℋ^(+)_N = dimℋ^(+), since ∑_N=0^L-1L-1N=2^L-1.
§.§.§ One-kink state
In this section, we analyse a state given by a coherent superposition of configurations with a single domain wall:
|𝒦_+-(p)⟩ = 1/√(L-1)∑^L-1_j=1e^i p j|K_+-(j)⟩.
This state represents a kink with a given momentum p that is completely delocalised in space.
We are interested in the computation of the Rényi entropy of extended regions, and we first analyse the simplest case of a bipartition A∪A̅, with
A = {1,…,ℓ}, A̅ = {ℓ +1,… L}.
We can compute the reduced density matrix (RDM) by tracing out the degrees of freedom in A̅:
ρ^(1)_A = Tr_A̅ł|𝒦_+-(p)⟩⟨𝒦_+-(p)|≡̊s_ℓ + 1,…,s_L = ±∑A̅⟨s_ℓ +1,…,s_L | 𝒦_+-(p)|⟩⟨𝒦_+-(p)| s_ℓ +1,…,s_L|_⟩A̅,
obtaining
ρ^(1)_A = 1/L-1∑_j,j'=1^ℓ-1 e^i p (j-j')|K_+-(j)⟩_A A⟨K_+-(j')| + ł1-ℓ/L-1|̊%̊s̊⟩̊+,…,+_A A⟨+,…,+|
+1/L-1∑_j=1^ℓ-1ł e^i p (j-ℓ)|K_+-(j)⟩_A A⟨+,…,+| + h.c.,̊
with |K_+-(j)⟩_A the state of A with a kink at position j. We identify three types of contributions in Eq. (<ref>): a term with one kink in A, a term with no kinks in A, and a mixed term that couples the two sectors; in particular, one can show that the latter contribution is due to the configurations with a domain wall lying between ℓ and ℓ+1, that is the edge of the region A. While the exact diagonalisation of ρ^(1)_A at finite size is quite cumbersome, a simplification occurs in the limit
ℓ→∞, L →∞, r≡ℓ/L fixed.
In particular, the first two terms of (<ref>) can be diagonalised simultaneously and the associated non-vanishing eigenvalues are r and 1-r respectively: the third term can be treated in perturbation theory, and in the limit (<ref>) one can easily show that the associated contribution to the spectrum goes to zero. Therefore, in this limit, the nth Rényi entropy of the one-kink state is
S_n = 1/1-nlog[(ρ_A^(1))^n] ≃1/1-nlog(r^n + (1-r)^n).
We observe that this result coincides with the predictions of Ref. <cit.> for quasiparticles; in particular, the entanglement entropy, obtained in the limit n→ 1 of Eq. (<ref>), directly yields (<ref>) since the entropy of the vacuum state |+,…,+⟩ vanishes. This means that in the case of the geometry (<ref>) there is no distinction between kinks and quasiparticle at the level of entanglement entropy (at least, in the large volume limit).
We now turn to a tripartite geometry, and we consider the entanglement between a single interval and the rest. Namely, we choose A = A_1∪ A_3, A̅=A_2 with
A_1 = {1,…,ℓ_1}, A_2 = {ℓ_1 +1,…,ℓ_1 + ℓ_2}, A_3 = {ℓ_1 + ℓ_2 +1,…,L};
therefore, after defining ℓ_3 ≡ L-ℓ_1-ℓ_2, the length of the interval A_i is ℓ_i.
The evaluation of the reduced density matrix ρ^(1)_A_1∪ A_3 is analogous to the bipartite case discussed above. Thus, after some algebra, we obtain the following expression
ρ^(1)_A_1 ∪ A_3 = 1/L-1∑_j,j'=1^ℓ_1-1 e^ip (j-j')|K_+-(j)⟩_A_1A_1⟨K_+-(j')|⊗|-…,-⟩_A_3A_3⟨-,…,-|
+ 1/L-1∑_j,j'=ℓ_1+ℓ_2 +1^L-1 e^ip(j-j')|+,…,+⟩_A_1A_1⟨+,…,+|⊗|K_+-(j)⟩_A_3A_3⟨K_+-(j')|
+ ℓ_2 +1/L-1|+,…,+⟩_A_1A_1⟨+,…,+|⊗|-…,-⟩_A_3A_3⟨-,…,-|
+ 1/L-1∑_j=1^ℓ_1 -1ł e^i p (j-ℓ_1)|K_+-(j)⟩_A_1A_1⟨+,…,+|⊗|-…,-⟩_A_3A_3⟨-,…,-| + h.c.
+ 1/L-1∑_j=ℓ_1 + ℓ_2 + 1^L-1ł e^i p (j-ℓ_1-ℓ_2)|+,…,+⟩_A_1A_1⟨+,…,+|⊗|-…,-⟩_A_3A_3⟨K_+-(j)| + h.c..̊
As expected, the number of kinks belonging to A_1 ∪ A_3 can be either 0 or 1, and we identify the first three terms in (<ref>) as those associated with a fixed number of kinks in the regions A_1 and A_3; specifically, after decomposing the Hilbert space ℋ_A_1⊗ℋ_A_3 in sectors, we find
* One (ℓ_1 -1) × (ℓ_1-1) block acting on ℋ_A_1,1^(+)⊗ℋ_A_3,0^(-) (one kink in A_1, no kinks in A_3),
* One (ℓ_3 -1) × (ℓ_3-1) block acting on ℋ_A_1,0^(+)⊗ℋ_A_3,1^(+) (no kinks in A_1, one kink in A_3),
* One 1 × 1 block on acting ℋ_A_1,0^(+)⊗ℋ_A_3,0^(-) (no kinks in A).
On the other hand, the last two terms of (<ref>) mix the sectors. However, in the limit L→∞ with
r_i ≡ℓ_i/L fixed, i=1,2,3;
the eigenvalues of the first three blocks converge to r_1, r_2, r_3 respectively, and the contribution coming from the two mixing terms goes to zero. In conclusion, we compute the Rényi entropy as
S_n = 1/1-nlog[(ρ_A_1∪ A_3^(1))^n] ≃1/1-nlog(r_1^n + r_2^n + r_3^n),
which coincides with (<ref>) for r_3=0. As anticipated, this result differs from the one of quasiparticles in a tripartite geometry (<ref>), where no distinction between connected and disconnected regions is found. The difference between the two results is a hallmark of the non-locality of kink excitations, and it is physically better understood when one takes the limit r_2 → 0: we shall come back to this point at the end of Section <ref>.
§.§.§ Two-kink states
Interesting features related to the non-locality of kink excitations emerge in the presence of two kinks for the tripartite geometry in Eq. (<ref>). As a paradigmatic example, we focus on the case where the two kinks have equal momenta, p=p'=0, and we consider the state
|𝒦_+-(0)𝒦_-+(0)⟩=√(2/(L-1)(L-2))∑_1≤ j < j' ≤ L-1|K_+-(j)K_-+(j')⟩,
where the prefactor ensures the correct normalisation. The computation of its reduced density matrix, denoted by ρ^(2)_A_1 ∪ A_3, does not pose additional difficulties, but it is lengthy and requires some care. We leave the details of the computation and the explicit expression of ρ^(2)_A_1 ∪ A_3 to Appendix <ref>; here, we discuss its salient features.
First, since two kinks are present in the entire system, only zero, one, or two kinks can be present in A_1 ∪ A_3. Second, in analogy to the previous calculation, only the contributions coming from the blocks with a fixed number of kinks in A matter in the limit (<ref>); we report them below, together with their non-zero eigenvalues:
* One 1 × 1 block acting on ℋ_A_1,0^(+)⊗ℋ_A_3,0^(+) (no kinks in A). It yields an eigenvalue r_2^2.
* One (ℓ_1 -1) × (ℓ_1-1) block acting on ℋ_A_1,1^(+)⊗ℋ_A_3,0^(+) (one kink in A_1, one in A̅). It yields an eigenvalue 2r_1 r_2.
* One (ℓ_3 -1) × (ℓ_3-1) block acting on ℋ_A_1,0^(+)⊗ℋ_A_3,1^(-) (one kink in A̅, one in A_3). It yields an eigenvalue 2r_3 r_2.
* One (ℓ_1 -1)(ℓ_3-1) × (ℓ_1 -1)(ℓ_3-1) block acting on ℋ_A_1,1^(+)⊗ℋ_A_3,1^(-) (one kink in A_1 and one in A_3). It yields an eigenvalue 2r_1 r_3.
* One [(ℓ_1 -1)(ℓ_1-2)/2 + (ℓ_3 -1)(ℓ_3-2)/2] ×[(ℓ_1 -1)(ℓ_1-2)/2 + (ℓ_3 -1)(ℓ_3-2)/2] block acting on[Here, the tensor product ⊗ is performed first, followed by the direct sum ⊕.]ℋ_A_1,2^(+)⊗ℋ_A_3,0^(+)⊕ℋ_A_1,0^(+)⊗ℋ_A_3,2^(+) (two kinks in A). It yields an eigenvalue r_1^2 + r_3^2.
It is worth noting that the last block gives rise to a mixing of two sectors, corresponding to the two kinks being in either A_1 or A_3, respectively. From the point of view of entanglement, this contribution can be interpreted as the two kinks behaving like a single bound state which has a probability r_1^2 + r_3^2 of being found in A_1 ∪ A_3. Similarly, the first block corresponds to a single bound state belonging to A_2 with probability r^2_2. Conversely, in all the other cases the kinks belong to distinct regions, and their genuine non-locality manifests explicitly in their entanglement contributions.
We conclude by reporting the value of the Rényi entropy for the tripartition in the large volume limit:
S_n ≃1/1-nlog(r_2^2n + (r_1^2 + r_3^2)^n + 2^n(r_1^n r_2^n + r_1^n r_3^n+r_2^n r_3^n)).
This expression is different from that of the two quasiparticle excitations in Ref. <cit.>, which depends only on the sum r_1 + r_3 and not on r_1 and r_3 separately.
§.§ The qubit picture
In this section we further explore the difference between multi-kink and multi-particle states at the level of entanglement by adopting the qubit formalism, first introduced in <cit.>, and later extended to deal with internal symmetries <cit.>. The idea is to recast the states considered in Section (<ref>), into simpler states by retaining the same entanglement content (at least, in the large volume limit). Before showing how to generalise the formalism to kinks, we first review the qubit description in the case of quasiparticle excitations.
§.§.§ One- and two-particle states, revisited
Let us consider the same tripartite geometry of Section (<ref>), and we focus on a single quasiparticle at a given momentum (completely delocalised in space). We assign 1 to the region A_i whenever the particle belongs to it, and 0 otherwise. The qubit representation of the state above is
|Ψ^(1)⟩ = √(r_1)|100⟩ + √(r_2)|010⟩ + √(r_3)|001⟩,
so that r_i is the probability that the particle belongs to A_i.
Its RDM is
ρ_A^(1) = _A_2|Ψ^(1)⟩⟨Ψ^(1)| =
ł[ |00⟩ |10⟩ |01⟩; ⟨00| r_2 0 0; ⟨10| 0 r_1 √(r_1 r_3); ⟨01| 0 √(r_1 r_3) r_3; ].̊
After introducing
|Φ_0⟩ = |00⟩, |Φ_1⟩ = √(r_1)|10⟩+√(r_3)|01⟩/√(r_1 + r_3),
we express the RDM in terms of its spectral projectors as
ρ_A^(1) = (1-r)|Φ_0⟩⟨Φ_0| + r|Φ_1⟩⟨Φ_1|,
with r=r_1+r_3. In particular, the entanglement content, encoded in the non-zero eigenvalues of ρ^(1)_A (r and 1-r) depends only on the probability r that the particle belongs to A=A_1∪ A_3.
The situation is slightly more complex in the presence of two indistinguishable particles. In that case, we assign the value 2 to the region A_i if both particles belong to A_i and we express the qubit-state as
|Ψ^(2)⟩ = r_1 |200⟩ + r_2|020⟩ + r_3 |002⟩ + √(2r_1 r_2)|110⟩ + √(2r_1 r_3)|101⟩+√(2r_2 r_3)|011⟩,
whose corresponding RDM is
ρ_A^(2) = _A_2|Ψ^(2)⟩⟨Ψ^(2)|
=
ł[ |00⟩ |10⟩ |01⟩ |20⟩ |02⟩ |11⟩; ⟨00| r_2^2 0 0 0 0 0; ⟨10| 0 2 r_2 r_1 2 r_2√(r_1 r_3) 0 0 0; ⟨01| 0 2 r_2 √(r_1 r_3) 2 r_2 r_3 0 0 0; ⟨20| 0 0 0 r_1^2 r_1 r_3 r_1 √(2 r_1 r_3); ⟨02| 0 0 0 r_1 r_3 r_3^2 r_3 √(2 r_1 r_3); ⟨11| 0 0 0 r_1 √(2 r_1 r_3) r_3 √(2 r_1 r_3) 2 r_1 r_3; ].̊
Its non-zero eigenvalues are (1-r)^2, 2r(1-r), r^2 and, again, they only depend on r_1,r_3 though their sum r.
An important message is that mixing is always present between the states with a given number of particles in A_1∪ A_3: as we will see, this is not the case for the kinks.
§.§.§ Rényi entropies of multi-kink states in a tripartite geometry
We now adopt a qubit-like formalism to describe the states containing multiple kinks/antikinks. The key idea consists of fixing the number of kinks in a given region and then forgetting about all the other details related to the spatial distributions of spins in that region. To this aim, we introduce the states |±⟩ to represent the two vacua, and |1^±⟩ as a kink (or antikink) interpolating between |±⟩ and |∓⟩. The state of a single kink delocalised in space is
|Ψ^(1)⟩ = √(r_1)|1^+, -, -⟩ +√(r_2)|+, 1^+, -⟩+ √(r_3)|+, +, 1^+⟩,
and its RDM for the region A=A_1∪ A_3 is
ρ_A^(1) = r_1 |1^+,-⟩⟨1^+,-| + r_2 |+,-⟩⟨+,-| + r_3 |+,1^+⟩⟨+,1^+|.
Crucially, this result reproduces the same non-zero spectrum of (<ref>) in the large-volume limit and it shows the absence of mixing between the states with a single kink in A_1 or A_3 respectively (in contrast with the quasiparticle state in Eq. (<ref>)).
We now focus on a two-kink state. First, we introduce the state |2^±⟩ to represent the presence of a kink-antikink pair interpolating between |±⟩ and |±⟩; then we consider
|Ψ^(2)⟩ = r_1 |2^+, +, +⟩ +r_2 |+, 2^+, +⟩+ r_3 |+, +, 2^+⟩
+ √(2 r_1 r_2)|1^+, 1^-, +⟩ + √(2 r_1 r_3)|1^+, -, 1^-⟩ + √(2 r_2 r_3)|+, 1^+, 1^-⟩,
whose RDM is
ρ_A^(2) =
ł[ |+,+⟩ |+,1^-⟩ |1^+,+⟩ |2^+,+⟩ |+,2^+⟩ |1^+,1^-⟩; ⟨+,+| r_2^2 0 0 0 0 0; ⟨+,1^-| 0 2 r_2 r_3 0 0 0 0; ⟨1^+,+| 0 0 2 r_1 r_2 0 0 0; ⟨2^+,+| 0 0 0 r_1^2 r_1 r_3 0; ⟨+,2^+| 0 0 0 r_1 r_3 r_3^2 0; ⟨1^+,1^-| 0 0 0 0 0 2 r_1 r_3; ].̊
In analogy with Section <ref>, a mixing occurs between the state with a pair kink-antikink in A_1 and that with the pair in A_3. For completeness, we diagonalise the RDM as
ρ_A^(2) = r_2^2 |Φ_0^+⟩⟨Φ_0^+| + 2 r_1 r_2 |Φ_1^-⟩⟨Φ_1^-| + 2 r_2 r_3 |Φ_1^+⟩⟨Φ_1^+| + 2 r_1 r_3 |Φ_2^-⟩⟨Φ_2^-| + (r_1^2 + r_3^2) |Φ_2^+⟩⟨Φ_2^+|,
with |Φ_0^+⟩ = |+,+⟩, |Φ_1^-⟩ = |1^+,+⟩, |Φ_1^+⟩=|+,1^-⟩, |Φ_2^-⟩=|1^+,1^-⟩ and
|Φ_2^+⟩ = r_1 |2^+,+⟩ + r_3 |+,2^+⟩/√(r_1^2 + r_3^2).
We find, as expected, the same non-zero spectrum reported in Section <ref>.
Finally, we discuss the situation of a generic number N of kinks. In analogy with the case of indistinguishable particles <cit.>, we consider an N-kink state
|Ψ^(N)⟩ = ∑_N_1, N_2, N_3 ∈ℕ
N_1 + N_2 + N_3 = N√(N! r_1^N_1r_2^N_2r_3^N_3/N_1! N_2! N_3!)δ_N_1 + N_2 + N_3, N|N_1^+, N_2^(-)^N_1, N_3^(-)^N_1+N_2⟩_Kinks in all the three regions
+ ∑_N_1=1^N-1√(NN_1r_1^N_1r_2^N-N_1)|N_1^+, (N-N_1)^(-)^N_1,(-)^N⟩
+ ∑_N_1=1^N-1√(NN_1r_1^N_1r_3^N-N_1)|N_1^+, (-)^N_1,(N-N_1)^(-)^N_1⟩
+ ∑_N_1=1^N-1√(NN_1r_2^N_1r_3^N-N_1)|+, N_1^+,(N-N_1)^(-)^N_1⟩_Kinks in two regions
+ r_1^N/2|N, (-)^N,(-)^N⟩+r_2^N/2|+, N^+, (-)^N⟩+r_3^N/2|+, +, N^+⟩_Kinks in one region.
In the expression above |N^±_j⟩ is the state with N_j kinks in a given region with the vacuum |±⟩ on their left. The coefficients of the state are the square roots of the probabilities of finding a certain configuration of kinks among the regions A_1, A_2, A_3. While the RDM does not have a very insightful structure, we write its diagonal form as
ρ_A^(N) = λ_0 |Φ_0⟩⟨Φ_0| + ∑_ϵ=±
N_A=1,…,Nλ_N_A^ϵ|Φ_N_A^ϵ⟩⟨Φ_N_A^ϵ|.
The eigenstates |Φ_N_A^ϵ⟩ have a fixed number N_A of kinks in A_1 ∪ A_3 and a fixed parity ϵ of the number of kinks in one of these two regions: without loss of generality, we choose A_1, so ϵ =+ iff there is an even number of kinks in A_1. The parity of the number of kinks/antikinks in A_3 is then automatically fixed by N_A and N. In the eigenspace with N_A and ϵ fixed, the qubit states are combined in a superposition to form |Φ_N_A^ϵ⟩, as in Eq. (<ref>).
The mechanism behind the mixing is the following: two states form a linear superposition if and only if they have the same number N_Aand they can be obtained one from the other by moving one magnon (i.e. a pair kink/antikink) from one region to the other. For every N_A 0 there are two eigenspaces labeled by the parity ϵ, while for N_A = 0 there is a single one-dimensional eigenspace. Specifically, the eigenvalues are:
λ_0 = r_2^N,
and for N_A=1,…,N
λ_N_A^+ = ∑_j=0,
j even^ N_AN!/j!(N_A - j)! (N-N_A)!r_1^j r_3^N_A - jr_2^N-N_A,
λ_N_A^- = ∑_j=1, j odd^ N_AN!/j!(N_A - j)! (N-N_A)!r_1^j r_3^N_A - jr_2^N-N_A,
where in every sum j, N_A-j and N-N_A correspond to the number kinks in A_1, A_3, A_2 respectively.
From (<ref>) and (<ref>) we can finally express the tripartite nth Rényi entropy of the N-kink state as
S_n = 1/1-nlog[λ_0^n + ∑_N_A = 1^N ł(λ_N_A^+)^n + (λ_N_A^-)^n].
The above expressions can be checked via a direct calculation; in Appendix <ref>, as a detailed example, we discuss explicitly the cases with N=3,4.
§ FIELD-THEORETIC APPROACH
In this section, we discuss the origin of the unusual entanglement content of kink excitations within a general field-theoretical framework. Our discussion relies on the description of those excitations in terms of semilocal fields acting on a symmetry-broken ground state, and their relations with the twist fields, which are the building blocks to compute entanglement measures via replica trick. In particular, we characterise the algebraic relations between the twist fields and semilocal fields such as the disorder operator in the Ising QFT. We recall that, as explained in Ref. <cit.>, the algebraic relations between twist fields and local operators are sufficient to recover the entanglement content of quasi-particles; here, by generalising the commutation relations so to take into account the semilocality of the disorder operator, we are able to obtain the entanglement content of kink excitations.
We consider a 1+1 QFT displaying spontaneous symmetry breaking of a ℤ_2 symmetry, for instance the Ising model in the ferromagnetic phase, and we denote its vacua by |±⟩. We take a finite-size system of length L, and we specify the boundary conditions at the extremal points x=0,L. In particular, we denote by b_± the boundary conditions associated with the presence of positive/negative boundary magnetic field. We assume that the asymptotic spectrum of the theory is described by kinks interpolating between the two vacua. Due to topological constraints, if the boundary conditions at x=0,L are equal, say they are both b_+, the ground state is the vacuum |+⟩ and kinks above it can only appear in an even number. On the other hand, if the system has different boundary conditions at the two edges, it can only host an odd number of kinks and the ground state of the finite-size system is a one-kink state.
Let μ(x) be a disorder field, that is a semilocal field connecting the two vacua in the region x'>x. Roughly speaking [
In the context of Integrable Field Theories <cit.>, one usually refers to μ as a field with UV scaling dimension Δ_μ = 1/8, while the elementary excitations correspond to the fermionic field with dimension Δ = 1/2; there is no direct relation between those fields, and μ has non-vanishing form factors with an arbitrary odd number of kinks. However, only the semilocal properties of the fields will enter our calculation and these issues do not play a role.], μ(x) generates a kink at position x, and we are interested in the associated state at a given momentum. A single kink state can only connect sectors with different boundary conditions. For instance, the state
μ(x)|+⟩,
interpolates between b_+ (at x=0) and b_- (at x=L). We now construct the Fourier transform of μ(x) as
μ(p) = ∫ dx (e^-ipx +ℛe^ipx)μ(x),
which generates a kink excitation at a given momentum (in absolute value). Here, ℛ is a phase (|ℛ|=1) which depends on the boundary conditions of the field μ, but it does not play a major role in the discussion below. We describe eigenstates with a finite number of kinks (in the large volume limit) as
μ(p_1)…μ(p_2N)|+⟩, (b_+,b_+) sector,
μ(p_1)…μ(p_2N)|-⟩, (b_-,b_-) sector,
μ(p_1)…μ(p_2N+1)|+⟩, (b_+,b_-) sector,
μ(p_1)…μ(p_2N+1)|-⟩, (b_-,b_+) sector.
We remark that, in principle, the momenta associated with eigenstates of the finite-size systems have to be quantised; however, this issue does not play a role as long as the number of particles is finite and the momenta are kept fixed in the large volume limit (as in Ref. <cit.>).
Let us now define smeared fields by restricting the support of μ(p) to some spatial region A. That is, we introduce
μ_A(p) = ∫_A dx (e^-ipx +ℛe^ipx)μ(x).
Then, we study the product of two smeared fields in the semiclassical limit where the momenta are fixed and the size of the regions becomes very large:
łμ_A'(p') ^̊†μ_A(p) = ∫_A dx ∫_A' dx' (e^-ipx +ℛe^ipx) (e^ip'x' +ℛe^-ip'x') μ^†(x') μ(x) ≃
∫_A dx ∫_A' dx' (e^-ipx +ℛe^ipx) (e^ip'x' +ℛe^-ip'x') μ^†(x')μ(x),
where we only kept the lightest contribution coming from the fusion μ^†×μ→ 1, as all terms corresponding to heavier operators are less relevant in the limit of large regions; thus, we replace μ^†(x') μ(x) with its vacuum expectation value over |+⟩ (a similar discussion can be found in Ref. <cit.> with periodic boundary conditions for local fields). We can further simplify Eq. (<ref>) if we assume that μ^†(x)μ(x') decays fast enough with the distance |x-x'|, e.g. exponentially as it happens in the massive (ordered) phase of the Ising model. In that case, we can perform the change of variable x' = x+x” and approximate
łμ_A'(p') ^̊†μ_A(p) ≃∫ dx” ∫_A ∩ A'dx (e^i(p'-p)x+ip'x” + ℛe^i(p+p')x+ip'x”+c.c.)μ^†(x)μ(x+x”)≃
∫ dx”ł V_A ∩ A'δ_p,p'e^ipx” + V_A ∩ A'δ_p+p',0ℛ + c.c.μ^†(x)μ(x+x”),
where the vacuum correlation function μ^†(x)μ(x+x”) is assumed to be x-independent if x is sufficiently distant from the boundary points and we denoted by V_A ∩ A' the volume of the region A ∩ A'. Eventually, we get
łμ_A'(p') ^̊†μ_A(p) ∝ V_A ∩ A'δ_p,p',
where the proportionality constant comes from the integration of the vacuum correlation function.
We remark that the considerations above hold only if the momentum is kept fixed in the large-volume limit. However, in some relevant situations, this is not the case. For example, the lowest-lying one-kink state is described by a single-body wave function ∼sinłπ x/L$̊ (p=π/Landℛ=-1in Eq. (<ref>)) if Dirichlet boundary conditions are present. In particular, the density of particles is not uniform in the large-volume limit, and the state looks inhomogeneous. These cases can be tackled as well with minor modifications. In particular, one can consider general smearing induced by a functionf(x)as
μ_f ≡∫ dx f(x)μ(x).
Inhomogeneous states can then be studied by computing correlation functions of these smeared fields; however, this is beyond the purpose of our work.
§.§ Twist fields and their algebra
We first review some properties of the twist fields and their relations with the local operators, following Refs. <cit.>, and then we generalise the discussion to the case of the semilocal disorder operator. We anticipate that the twist fields relevant to our discussion are those with an even number of disorder lines attached to them and associated with distinct copies of the replica field theory. To the best of our knowledge, these fields have been studied for the first time in our previous work <cit.> in the context ofℤ_2entanglement asymmetry; in particular, they have the same scaling dimension of the standard twist field but different monodromy properties, which become manifest in the ordered phase. We also mention that in previous works, for the symmetry-resolved entanglement, twist fields attached to a single disorder line have been studied in the disordered phase <cit.>: these are the composite branch-point twist fields obtained from the fusion of the standard field with the disorder operator.
Let us considernreplicas of the QFT. A field𝒪(x)inserted in thejth replica is denoted by𝒪^j(x). We focus on an algebra𝒜of local fields; in the Ising model, this is the algebra generated by the order operatorσ(x)and the energy densityε(x). To compute the entropy of a spatial region, that is the entropy of the subalgebra of𝒜associated with that region, we employ the branch-point twist fields. These are fields in the replica model satisfying
𝒯(x)𝒪^j(y) = 𝒪^j(y)𝒯(x), x>y,
𝒯(x)𝒪^j(y) = 𝒪^j+1(y)𝒯(x), x<y.
Loosely speaking, we say that𝒯(x)introduces a branch cut atx'>xwhich connects the replicas through the permutationj→ j+1. Similarly, we introduce a conjugate twist field𝒯̃(x)such that
𝒯̃(x)𝒪^j(y) = 𝒪^j(y)𝒯̃(x), x>y,
𝒯̃(x)𝒪^j(y) = 𝒪^j-1(y)𝒯̃(x), x<y
acting as the permutationj→ j-1onx'>x.𝒯and𝒯̃are the building blocks to construct the Rényi entropy. For example, one can express the moments of the RDM of|Ψ⟩associated with the regionA = [x_1,x_2]as
Trłρ^n_A∝̊ ^n⟨Ψ|𝒯(x_1)𝒯̃(x_2)|Ψ⟩^n.
Here|Ψ⟩^n ≡|Ψ⟩^⊗ nis the replicated state, and the non-universal proportionality constant can be absorbed in the normalisation of the twist fields.
While Eq. (<ref>) is sufficient to reconstruct the semiclassical predictions of the entropy of particles <cit.> (generated by local operators), when kinks are involved one needs to understand the corresponding generalisation of Eq. (<ref>) to semilocal operators. To the best of our knowledge, this problem has not been considered in the previous literature. We conjecture that the commutation relations betweenμand𝒯read as follows
𝒯(x)μ^j(y) = μ^j(y)( μ^jμ^j+1·𝒯 )(x), x>y,
𝒯(x)μ^j(y) = μ^j+1(y)𝒯(x), x<y,
with(μ^jμ^j+1·𝒯)(x)the lightest field generated by the fusion
μ^jμ^j+1×𝒯,
and(μ^jμ^j+1)(x)is a semilocal field of the replica model with disorder lines on replicasjandj+1. Physically, this means thatμcharges the standard twist field𝒯generating a new composite fieldμ^jμ^j+1·𝒯. We give a pictorial representation of Eq. (<ref>) in Fig. <ref>: the branch cut of the twist field is represented as a black dashed line while the disorder line is the red line.
A simple derivation of Eq. (<ref>) can be obtained assuming thatμ(x)is generated by a product of local spin flip operators𝒪(x')inserted at positionx'>xμ(x) = ∏_x'>x𝒪(x).
In the casex>y, a straightforward computation gives
𝒯(x)μ^j(y) = 𝒯(x)∏_y'>y𝒪^j(y') = ∏_y<y'<x𝒪^j(y') ∏_y'>x𝒪^j+1(y')𝒯(x)=
∏_y'>y𝒪^j(y')∏_y'>x𝒪^j(y')∏_y'>x𝒪^j+1(y')𝒯(x) = μ^j(y)μ^j(x)μ^j+1(x)𝒯(x),
where the locality of𝒪(x)has been employed, through Eq. (<ref>), together with the property𝒪(x)𝒪(x)=1. We remark that, while the derivation above can be made rigorous in the lattice, its extension to field theories is not straightforward due to divergences coming from the insertion of fields at the same points; we claim that these issues are only technical and Eq. (<ref>) still holds in the QFT.
In general, the commutation relation between𝒯and the string(μ^j_1…μ^j_k)(x), associated with disorder lines in the replicasj_1…,j_k, can be found in a similar way: each disorder lineμ^jcrosses the branch cut, and it charges the twist field with an additional insertion ofμ^jandμ^j+1. Furthermore, two disorder lines associated with the same replica simplify (due to theℤ_2fusion ruleμ×μ→ 1). In conclusion, via this procedure only twist fields with an even number of disorder lines attached to them are generated, and we refer the interested reader to Ref. <cit.> for the characterisation of their form factors. For our purposes, we only need to recall the properties of the 0-kink form factors, namely the matrix elements between the (2^n) vacua in the replica theory. In particular, due to the topological constraints, it was argued in <cit.> that the only non-vanishing 0-kink form factors for the operators above are
^n⟨+|𝒯(x)|+⟩^n = ^n⟨-|𝒯(x)|-⟩^n∝ m^-Δ_𝒯.
Here,m^-1is a correlation length,Δ_𝒯is the scaling dimension of𝒯, and the non-universal proportionality constant depends on the normalisation of the field. A technical observation is in order: these results refer to the theory at infinite size, while here we are interested in the finite-size counterpart. However, as investigated in Ref. <cit.>, we expect deviations from the infinite-size theory to be exponentially suppressed in the system size, and therefore we neglect those in the forthcoming discussion.
§.§ One-kink entropy
The algebraic relations (<ref>) summarise the essential distinction between (local) particles and kink excitations at the level of entropy of regions. We now claim that these relations are sufficient to give explicit predictions for the Rényi entropies in the semiclassical limit where the regions are large compared to the microscopic lengths. In this section we provide an application of the formalism developed so far by analysing a simple yet non-trivial case, that is the entropy of a single kink in a tripartite geometry.
We consider an intervalA = [ℓ_1,ℓ_2]and its complementB = B_1∪ B_2 = [0,ℓ_1]∪[ℓ_2,L]and we study the Rényi entropy of the state
|Ψ⟩ = μ(p)|+⟩.
To do so, we need to compute the ratio
^n⟨Ψ|𝒯(ℓ_1)𝒯̃(ℓ_2)|Ψ⟩^n/^n⟨Ψ|Ψ|^⟩n,
where the denominator ensures proper normalisation, and compare it with
^n⟨+|𝒯(ℓ_1)𝒯̃(ℓ_2)|+⟩^n,
which yields the ground state contribution. Let us first compute the normalisation^n⟨Ψ|Ψ|^⟩nas follows
^n⟨Ψ|Ψ|^⟩n = ł⟨Ψ|Ψ| ̊⟩^n = ł⟨+|(μ(p))^†μ(p)|+⟩^̊n ∝ L^n,
where Eq. (<ref>) and⟨+|+|=⟩1have been employed.
To proceed with the computation, it is convenient to split the fieldμ^j(p)in terms of its spatial restrictions:
μ^j(p) = μ^j_A(p)+μ^j_B_1(p)+μ^j_B_2(p).
The operators above have definite commutation relations with the twist fields, which, as a consequence of (<ref>), read
𝒯(ℓ_1)𝒯̃(ℓ_2)μ^j_B_1(p) = μ^j_B_1(p)(μ^jμ^j+1·𝒯 )(ℓ_1)( μ^jμ^j+1·𝒯̃ )(ℓ_2),
𝒯(ℓ_1)𝒯̃(ℓ_2)μ^j_B_2(p) = μ^j_B_2(p)𝒯(ℓ_1)𝒯̃(ℓ_2),
𝒯(ℓ_1)𝒯̃(ℓ_2)μ^j_A(p) = μ^j+1_A(p)𝒯(ℓ_1)( μ^jμ^j+1·𝒯̃)(ℓ_2).
We expand the numerator of Eq. (<ref>) in terms of the restrictions of the fields as
^n⟨Ψ|𝒯(ℓ_1)𝒯̃(ℓ_2)|Ψ⟩^n = ^n⟨+| (μ^n(p))^†…(μ^1(p))^†𝒯(ℓ_1)𝒯̃(ℓ_2) μ^1(p)…μ^n(p)|+⟩^n
=
^n⟨+| (μ^n(p))^†…(μ^1(p))^†𝒯(ℓ_1)𝒯̃(ℓ_2)
× (μ^1_A(p)+μ^1_B_1(p)+μ^1_B_2(p))…(μ^n_A(p)+μ^n_B_1(p)+μ^n_B_2(p))|+⟩^n.
We employ Eq. (<ref>), bringing the restrictions ofμ^j(p)to the left of the twist fields, so that they can be eventually contracted withłμ^j(p)^̊†. The contractions produce a total of3^nterms. However, most of these are vanishing in the limit we are considering
For instance, since Eq. (<ref>) gives
(μ^j'(p))^†μ^j_C(p) ∝ V_C δ_jj',
for any regionC, it follows that only contractions between disorder lines in the same replica survive. In addition, among the remaining terms, we should only keep those for which the field obtained after performing all the commutation relations is the standard twist field, otherwise, the corresponding vacuum expectation values vanish.
In conclusion, only three terms remain from the expansion in Eq. (<ref>), so we can write
^n⟨Ψ|𝒯(ℓ_1)𝒯̃(ℓ_2)|Ψ⟩^n ≃ ^n⟨+| (μ^n(p))^†…(μ^1(p))^†μ^1_B_1(p)…μ^n_B_1(p) 𝒯(ℓ_1)𝒯̃(ℓ_2) |+⟩^n+
^n⟨+| (μ^n(p))^†…(μ^1(p))^†μ^1_B_2(p)…μ^n_B_2(p) 𝒯(ℓ_1)𝒯̃(ℓ_2) |+⟩^n+
^n⟨+| (μ^n(p))^†…(μ^1(p))^†μ^2_A(p)…μ^1_A(p) 𝒯(ℓ_1)𝒯̃(ℓ_2) |+⟩^n ∝
(ℓ_1^n + (ℓ_2-ℓ_1)^n + (L-ℓ_2)^n)×^n⟨+|𝒯(ℓ_1)𝒯̃(ℓ_2)|+⟩^n.
A schematic representation of the mechanism depicted above is sketched in Fig. <ref>.
Putting everything together, we compute the following universal ratio
^n⟨Ψ|𝒯(ℓ_1)𝒯̃(ℓ_2)|Ψ⟩^n/^n⟨Ψ|Ψ|^⟩n× ^n⟨+|𝒯(ℓ_1)𝒯̃(ℓ_2)|+⟩^n≃ (r_1^n+r^n_2+r_3^n),
withr_jdefined by
r_1 ≡ℓ_1/L, r_2 ≡ℓ_2-ℓ_1/L, r_3 ≡L-ℓ_2/L,
corresponding to the probabilities to find the kink in the regionB_1,A,B_2respectively. We finally express the difference of entropy between|Ψ⟩and|+⟩as
S_n-S_n,0 = 1/1-nlogł r^n_1+r^n_2+r^n_3.̊
This is the same result (<ref>), up to the vacuum entropyS_n,0that is, in general, non-vanishing in a field theory. Remarkably, we find universality of the entropy difference between the low-lying excited states and the vacuum, which is one of the significant achievements of the field-theoretic framework.
Some comments regarding the regime of small regions are needed. So far, we analysed the scaling limit where the ratios of the sizes are fixed, and they are much larger than the microscopic lengths: in particular, the regimeℓ_2-ℓ_1 ≲ m^-1is not described by the previous formula. Indeed, formally, ifℓ_2=ℓ_1the regionAbecomes the empty set and the entropy vanishes. Unfortunately, this behavior is not recovered by the limitr_2→ 0of Eq. (<ref>), and it is not obvious a priori why there should be a residual entropy when the regionAis much smaller than the rest of the system. The reason is that, albeit the probability that the kink belongs toAvanishes in the limit above, one can still distinguish whether the latter belongs toB_1orB_2just by measuring the local magnetisation inA. In particular, the state becomes locally indistinguishable from the statistical mixture
… = r_1 ⟨+|…|+⟩ + r_3 ⟨-|…|-⟩,
and thus one observes an entropy differenceS_n-S_n,0 = 1/1-nlogł r^n_1+r^n_3$̊.
§ KRAMERS-WANNIER DUALITY AND THE ENTANGLEMENT OF ALGEBRAS
In the previous sections, we have shown that the entanglement content of kinks explicitly differs from that of particles. This might sound surprising at first, since the ordered/disordered phases are related by duality in the quantum Ising chain: in particular, the kinks in the ordered phase are dual to the particles in the disordered phase. Naively, one could expect that the entanglement content of the corresponding excitations should be the same, but this is not the case. The only reasonable conclusion is that the entanglement of regions is not self-dual under Kramers-Wannier duality. This fact can be understood in terms of the non-local nature of the duality. In particular, it has been shown in <cit.> that the entanglement is transferred from local to non-local degrees of freedom. However, while it is well-established that some local and semilocal operators are related by duality (e.g. the order and disorder operators), the identification of a possible dual of twist fields or of the entropy of a region is less obvious. In this Section, we adopt an algebraic approach, in the spirit of Refs. <cit.>, to answer those questions. The formalism, based on the theory of (finite-dimensional) C^*-algebras, allows dealing systematically with observables that are not local in the spin basis, such as the disorder operators and provides a natural framework to properly define their entanglement content.
We give first a brief review of the Kramers-Wannier duality in the context of the quantum Ising chain. Then, we discuss the notion of density matrix associated with observables in the context of C^*-algebras. Finally, we identify the notion of the duals of twist fields and Rényi entropies associated with regions.
§.§ Kramers-Wannier duality in a quantum spin-1/2 chain
The Kramers-Wannier duality, in the context of quantum chains, is associated with the possibility of representing the states of a system using spin or kink variables. Specifically, this proves beneficial for models such as the Ising spin chain, as the Hamiltonian remains local in both representations, albeit ordered phases correspond to disordered phases.
Here, we are interested in a systematic treatment of the duality map between the two descriptions. While this is usually non-invertible, and the Kramers-Wannier duality is known as a non-invertible symmetry <cit.>, specific sectors associated with given boundary conditions are in one-to-one correspondence as we explain below.
Let us consider the Hilbert space ℋ generated by the 2^L configurations of spins ± along the z axis, for a chain of length L. This was defined in Eq. (<ref>). We call ℋ the site space, and the configurations of site variables give rise to an orthonormal basis. We consider another Hilbert space ℋ' of dimension 2^L-1, defined similarly:
ℋ'= Span{|s'_3/2,…, s'_L-1/2⟩, s'_j+1/2 = ±}.
This is referred to as the bond space, being generated by configurations of L-1 bond variables. We can associate any site configuration with a bond configuration considering the sign changes of adjacent sites. More precisely, we construct a linear map
T: ℋ→ℋ',
acting on the basis elements as follows
T:|s_1,…, s_L⟩→|s'_3/2,…, s'_L-1/2⟩, s'_j+1/2 = s_js_j+1.
Roughly speaking, we say that a kink at position j+1/2 is present if there is a change of sign between the sites j and j+1. In the following, we refer to T as the duality map.
We can also consider the adjoint of T as
T^†: ℋ' →ℋ,
such that the associated matrices of T, T^† are hermitian conjugate of each other in the configuration basis. From the definition, it holds T T^† =1, namely T^† is an isometry from ℋ' into ℋ. However, T^† T≠ 1; this is not surprising, since the two Hilbert spaces ℋ,ℋ' have different dimensions, and, in particular, T has to display a non-trivial kernel. We fix this issue by focusing on a sector of ℋ such that the associated restriction of T becomes invertible. We choose to fix the last spin and define two sectors of dimension 2^L-1:
ℋ_±≡Span{| s_1,…, s_L-1,±⟩, s_j = ±},
satisfying
ℋ = ℋ_+ ⊕ℋ_-.
From now on we focus on ℋ_+ and, with a slight abuse of notation, we refer to T as the restriction
T:ℋ_+ →ℋ'.
In this way, T becomes invertible and both T^† T and TT^† give the identity operator (on ℋ_+ and ℋ' respectively).
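As an explicit illustration of this construction (not part of the original derivation), the restricted duality map can be realized as a permutation matrix for a small chain and checked to be an invertible isometry; the following Python sketch assumes a simple lexicographic ordering of the basis configurations.

    # Minimal sketch (illustrative): build the duality map T between the sector
    # H_+ (last spin fixed to +1) and the bond space H' for a small chain, and
    # check that it is an invertible isometry, T T^dag = T^dag T = 1.
    import numpy as np
    from itertools import product

    L = 6                                                               # number of sites
    site_configs = [c + (1,) for c in product([1, -1], repeat=L - 1)]   # basis of H_+
    bond_configs = list(product([1, -1], repeat=L - 1))                 # basis of H'

    bond_index = {c: i for i, c in enumerate(bond_configs)}
    T = np.zeros((len(bond_configs), len(site_configs)))
    for j, s in enumerate(site_configs):
        kinks = tuple(s[i] * s[i + 1] for i in range(L - 1))   # s'_{j+1/2} = s_j s_{j+1}
        T[bond_index[kinks], j] = 1.0

    assert np.allclose(T @ T.T, np.eye(len(bond_configs)))     # T T^dag = 1 on H'
    assert np.allclose(T.T @ T, np.eye(len(site_configs)))     # T^dag T = 1 on H_+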
The duality map we defined above is a correspondence between states, but we can extend it to a map between observables. For instance, we consider
End(ℋ_+) →End(ℋ')
𝒪 ↦𝒪' ≡ T𝒪T^†.
This is an isomorphism between C^*-algebras, since it is invertible and (𝒪_1𝒪_2)' = 𝒪'_1𝒪_2', (𝒪^†)' = (𝒪')^† hold. One can show, by checking, for example, the action of the operators on basis elements related by duality, that the following relations hold
σ^z_jσ^z_j+1 ↦σ^z_j+1/2,
σ^x_j ↦σ^x_j-1/2σ^x_j+1/2.
Moreover, by representing the one-site operators as products of two-site operators and using the fact that Eq. (<ref>) is an isomorphism, one gets
σ^z_j ↦∏_j'≥ jσ^z_j'+1/2,
∏_j'≥ jσ^x_j' ↦σ^x_j+1/2.
This means in particular that local and semilocal operators are mixed with each other by the duality map. It is also worth stressing that, since σ^z_j,σ^x_j generate the algebra End(ℋ_+) (via products and linear combinations), the relations above unambiguously characterise the isomorphism (<ref>).
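Continuing the small-chain sketch above (again purely illustrative), the operator dictionary can be verified numerically for a bulk site; the site and bond operators below are built directly in the configuration bases defined in the previous snippet, whose variables (T, site_configs, bond_configs, bond_index) are reused.

    # Sketch continuing the previous block: check T (s^z_j s^z_{j+1}) T^dag = s'^z_{j+1/2}
    # and T s^x_j T^dag = s'^x_{j-1/2} s'^x_{j+1/2} for a bulk site j (0-based indices;
    # bond index b corresponds to the bond between sites b and b+1).
    def op_site_z(j):
        return np.diag([s[j] for s in site_configs])            # sigma^z_j on H_+

    def op_site_x(j):
        idx = {c: i for i, c in enumerate(site_configs)}
        M = np.zeros((len(site_configs), len(site_configs)))
        for i, s in enumerate(site_configs):                    # sigma^x_j flips spin j
            t = list(s); t[j] *= -1
            M[idx[tuple(t)], i] = 1.0
        return M

    def op_bond_z(b):
        return np.diag([c[b] for c in bond_configs])            # sigma^z_{b+1/2} on H'

    def op_bond_x(b):
        M = np.zeros((len(bond_configs), len(bond_configs)))
        for i, c in enumerate(bond_configs):                    # sigma^x_{b+1/2} flips bond b
            t = list(c); t[b] *= -1
            M[bond_index[tuple(t)], i] = 1.0
        return M

    j = 2   # a bulk site of the L = 6 chain
    assert np.allclose(T @ op_site_z(j) @ op_site_z(j + 1) @ T.T, op_bond_z(j))
    assert np.allclose(T @ op_site_x(j) @ T.T, op_bond_x(j - 1) @ op_bond_x(j))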
Finally, we comment on some consequences of the non-locality of the duality map. Let us consider a spatial bipartition ℋ_+ =ℋ_A⊗ℋ_B with
ℋ_A = Span{|s_1,…, s_ℓ⟩, s_j = ±},
ℋ_B = Span{|s_ℓ+1,…, s_L-1,+⟩, s_j = ±}.
Here, A and B are two subregions consisting of ℓ and L-ℓ sites respectively. Unfortunately there is no way to define the image of ℋ_A under duality due to the non-locality of the latter: the map T cannot be canonically extended to a map T:ℋ_A→ℋ'_A obtained from a possible spatial bipartition ℋ' = ℋ'_A ⊗ℋ'_B. Nonetheless, it is meaningful to map the observables associated with A to other observables of ℋ'. Namely, the former can be first canonically embedded in the space of the observables of ℋ_+ via
End(ℋ_A) →End(ℋ_A)⊗1_ℋ_B⊂End(ℋ_+),
and then the duality map (<ref>) can be applied as
End(ℋ_A)⊗1_ℋ_B→End(ℋ').
The price to pay is that the image of the map above contains observables with a string (product) of σ^z attached to them, as happens for σ^z_j (with j≤ℓ) in Eq. (<ref>). In this respect, we say that the isomorphism (<ref>) mixes local and semilocal observables.
§.§ An algebraic definition of the reduced density matrix
Here, we review how to construct the reduced density matrix associated with an algebra. This construction, which is well-established in the context of mathematical physics (see e.g. the textbook <cit.>), is nonetheless mostly overlooked by the vast majority of works on entanglement. For instance, people usually refer to a spatial bipartition of a Hilbert space <cit.>
ℋ = ℋ_A ⊗ℋ_B,
and the entanglement of A is probed via the properties of the reduced density matrix ρ_A ∈End(ℋ_A). This approach is satisfactory when the observables of interest are realised locally in the corresponding model. However, in the presence of semilocal operators, or when dealing with maps (such as the Kramers-Wannier duality) that do not preserve locality, a more general approach is needed.
In the following, we will assume that the algebra of observables 𝒜 is a finite-dimensional C^*-algebra. A state is then a linear functional from 𝒜 to the complex numbers:
𝒜 →ℂ
a ↦⟨ a⟩,
and it corresponds to the usual notion of expectation value of the observables in 𝒜. As we show below, it is possible to construct a density matrix ρ_𝒜 as an observable itself by requiring
Tr(ρ_𝒜 a) = ⟨ a⟩, ∀ a ∈𝒜.
The trace Tr(…) entering the above expression is defined unambiguously thanks to the isomorphism between 𝒜 and a direct sum of matrix algebras (briefly reviewed in Appendix <ref>), that is a standard result of the representation theory of C^*-algebras (see e.g. Ref. <cit.>). Specifically, given an orthonormal basis {a_i} of 𝒜, satisfying Tr( a^†_i a_j) = δ_ij,
one expresses ρ_𝒜 as
ρ_𝒜 = ∑_j r_j a^†_j, r_j ≡⟨ a_j⟩.
By applying the definitions above to the bipartition (<ref>) and the algebra End(ℋ_A) ⊗ 1_ℋ_B⊂End(ℋ), one obtains
ρ_𝒜 = ρ_A ⊗1_ℋ_B/Tr( 1_ℋ_B).
The main advantage of this formulation is thatρ_𝒜becomes part of the local algebra𝒜and the local Hilbert spaceℋ_Ais never explicitly used.
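A minimal numerical sketch of this statement (our own illustration, with arbitrary small dimensions): for the algebra End(ℋ_A)⊗1_ℋ_B, the density matrix reconstructed from an orthonormal operator basis coincides with the usual reduced density matrix embedded back into the algebra, as in the formula above.

    # Sketch: algebraic density matrix rho_alg = sum_j <a_j> a_j^dag for the algebra
    # End(H_A) (x) 1_B, compared with rho_A (x) 1_B / Tr(1_B).
    import numpy as np

    dA, dB = 3, 4
    rng = np.random.default_rng(0)
    psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())                      # global (pure) state

    basis = []                                           # orthonormal basis of the algebra
    for k in range(dA):
        for l in range(dA):
            E = np.zeros((dA, dA)); E[k, l] = 1.0
            basis.append(np.kron(E, np.eye(dB)) / np.sqrt(dB))   # Tr(a_i^dag a_j) = delta_ij

    rho_alg = sum(np.trace(rho @ a) * a.conj().T for a in basis)     # sum_j <a_j> a_j^dag

    rho_A = np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)  # partial trace over B
    assert np.allclose(rho_alg, np.kron(rho_A, np.eye(dB)) / dB)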
In the framework depicted above, it is possible to make sense of the dual of a density matrix under the Kramers-Wannier map considered in the previous Sec. <ref>. Indeed, one can pick the algebra 𝒜⊆End(ℋ_+) associated with a region of the site space defined in Eq. (<ref>). Then, given a state ⟨…⟩ for 𝒜 and its density matrix ρ_𝒜, from the map (<ref>) one can define a dual density matrix ρ_𝒜'≡ T ρ_𝒜 T^†∈End(ℋ').
It is not difficult to show that ρ_𝒜' is just the density matrix of the dual algebra
𝒜' ≡ T𝒜T^†,
associated with the dual state defined by
⟨ TaT^†⟩' ≡⟨ a⟩, a ∈𝒜.
Indeed, by making use of the property T^† T=1, a straightforward calculation yields
Tr(ρ_𝒜' a') = Tr(ρ_𝒜 a) = ⟨ a⟩ = ⟨ a'⟩', a ∈𝒜,
with a' = TaT^† a generic observable in 𝒜', that is the defining property of the density matrix. It is worth pointing out that expressions such as Tρ_A T^†, with ρ_A ∈End(ℋ_A), are meaningless since T is defined at the level of the global Hilbert space only, which is the main reason we adopted the algebraic approach to investigate the duality.
Once the density matrices are defined in the algebraic framework, the usual entanglement measures can be introduced as well. For instance, the Rényi entropy is
S_n = 1/(1-n) logTr(ρ^n_𝒜),
and it matches the standard definition (in Ref. <cit.>) if applied to the algebra of regions. Moreover, since T is an invertible isometry, from Eq. (<ref>) we get
Tr(ρ_𝒜^n ) = Tr(ρ_𝒜'^n ),
and the two algebras have the same entropy for the corresponding dual states. At this point, it is worth giving a simple paradigmatic example to discuss the consequences of the construction above. We consider a state ⟨…⟩ of ℋ, defined in Eq. (<ref>), namely the ground state of an Ising chain in its ordered phase. We focus on a region A, we compute the associated entropy, that is the entropy of the algebra End(ℋ_A), then we turn our attention to the Kramers-Wannier duality. For instance, we consider the dual state ⟨…⟩', which is a ground state of the paramagnetic phase, and we aim to characterise the entropy of some region in this state. The main problem is that, although the states ⟨…⟩ and ⟨…⟩' are related by duality, the algebras of regions are not. In particular, the local observables in the region A are now mapped onto an algebra that is not associated with a region of the system ℋ'. Similar considerations hold for low-lying excited states with a finite number of kinks, dual to states with a finite number of particles.
In conclusion, the Kramers-Wannier duality does not give any direct relation regarding the entropy of regions. Nonetheless, we mention for completeness that, since the duality can be implemented by a Matrix Product Operator (MPO) of bond dimension 2<cit.>, the entropy difference between a state and its dual has an upper bound given by b log 2, with b the number of entangling points.
§.§ The dual of the twist field
In this section, we discuss the twist operators in the algebraic framework and their transformation under Kramers-Wannier duality. In the context of spin chains, these operators have been defined in Ref. <cit.> via their explicit basis representation (see also Ref. <cit.>). Here, as we did for the reduced density matrix, we follow a slightly more abstract construction based on algebraic properties of finite dimensional C^*-algebras, relaxing the hypothesis of the strict locality of the observables in the computational basis. In this way, we can safely discuss the notion of twist operators associated with algebras and their dual.
Let us consider an algebra 𝒜, we replicate it n times and we call 𝒜^⊗ n the replica algebra. 𝒜^⊗ n is generated by the operators a^j, obtained by inserting the elements a ∈𝒜 in the jth replica, for every j:
a^j ≡ 1⊗…⊗ a⊗ 1 ⊗… 1.
The twist operator 𝒯_𝒜∈𝒜^⊗ n is an operator which acts as a replica cyclic permutation j→ j+1. More precisely, one requires
𝒯_𝒜 a^j = a^j+1𝒯_𝒜.
Eq. (<ref>) is already present in <cit.>, where it is obtained from a more fundamental definition. Instead, here we ask whether this can be truly considered a defining property of the twist field. Clearly, if 𝒯_𝒜 satisfies Eq. (<ref>), then also λ𝒯_𝒜 with λ∈ℂ has the same property, and one may wonder whether (<ref>) fixes the twist operator up to a proportionality constant.
In the discussion below, we assume that the algebra 𝒜 is isomorphic to End(ℂ^d) for a given d; this is the framework of main interest since it corresponds to both the algebra of regions in spin chains, and the (non-local) algebra arising from its Kramers-Wannier dual. In those cases, the only element of 𝒜 commuting with every other element is, up to a constant, the identity 1∈𝒜, and the center of the algebra is trivial. A few minor issues arise for other (finite-dimensional) algebras, and they are discussed in the Appendix <ref>.
The existence of a non-zero operator 𝒯_𝒜 is not obvious a priori (we comment on a general algebraic proof in Appendix <ref>). However, since 𝒜 can be realised as an algebra of matrices, one can employ the explicit construction of Ref. <cit.>, ending up with an operator that has the desired properties. In particular, it is possible to construct 𝒯_𝒜 satisfying
⟨𝒯_𝒜⟩^⊗ n = Tr(ρ^n_𝒜),
with ⟨…⟩^⊗ n the replica state of 𝒜^⊗ n associated with the density matrix ρ^⊗ n_𝒜∈𝒜^⊗ n.
The uniqueness of the twist operator comes from the fact that 𝒜^⊗ n has a trivial center, since it is isomorphic to End(ℂ^dn), and we give an explanation below.
We first observe that, from Eq. (<ref>), 𝒯_𝒜𝒯_𝒜^† belongs to the center of 𝒜^⊗ n; therefore, if 𝒯_𝒜≠ 0, it is an invertible element of 𝒜^⊗ n and its inverse is proportional to 𝒯^†_𝒜. Let us now consider two non-zero twist operators 𝒯_𝒜, 𝒯̂_𝒜 satisfying Eq. (<ref>): (𝒯_𝒜)^-1𝒯̂_𝒜 has to be in the center of 𝒜^⊗ n and, since the latter is trivial, it implies 𝒯_𝒜∝𝒯̂_𝒜. The only arbitrariness in the definition of 𝒯_𝒜 is, therefore, the proportionality constant, which can be fixed unambiguously by requiring the normalisation condition (<ref>).
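For a single full matrix factor the twist operator can be written down explicitly as the cyclic permutation of the replicas; the sketch below (an illustration with arbitrary small d and n, not taken from the paper) checks both the defining exchange relation and the normalisation condition discussed above.

    # Sketch: cyclic replica permutation ("twist") operator on (C^d)^{x n}; check
    # Twist a^(j) = a^(j+1) Twist and <Twist> = Tr(rho^n) in the replica state rho^{x n}.
    import numpy as np
    from itertools import product
    from functools import reduce

    d, n = 3, 3
    dim = d ** n

    P = np.zeros((dim, dim))                 # sends |i_1,...,i_n> to |i_n,i_1,...,i_{n-1}>
    for idx in product(range(d), repeat=n):
        src = np.ravel_multi_index(idx, (d,) * n)
        dst = np.ravel_multi_index((idx[-1],) + idx[:-1], (d,) * n)
        P[dst, src] = 1.0

    def replica(a, j):
        ops = [np.eye(d)] * n                # embed a one-copy operator into replica j
        ops[j] = a
        return reduce(np.kron, ops)

    rng = np.random.default_rng(1)
    a = rng.normal(size=(d, d))
    for j in range(n):
        assert np.allclose(P @ replica(a, j), replica(a, (j + 1) % n) @ P)

    rho = rng.normal(size=(d, d)); rho = rho @ rho.T; rho /= np.trace(rho)
    rho_rep = reduce(np.kron, [rho] * n)
    assert np.isclose(np.trace(rho_rep @ P), np.trace(np.linalg.matrix_power(rho, n)))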
At this point, we have an operator 𝒯_𝒜 defined canonically at the algebraic level, without explicit reference to any local Hilbert space; in this way, we can give meaning to its dual under Kramers-Wannier duality, as done for the density matrix in the previous section. We start from the isomorphism in Eq. (<ref>), between two algebras. We extend it to an isomorphism of their replica as follows
𝒜^⊗ n →𝒜'^⊗ n
𝒪 ↦𝒪' ≡ T^⊗ n𝒪(T^†)^⊗ n.
Following this procedure, we construct a dual twist operator associated with 𝒜' as
𝒯_𝒜'≡ T^⊗ n𝒯_𝒜( T^†)^⊗ n.
The field above satisfies some natural expected properties. In particular, it allows representing the moments of the dual-density matrix associated with the dual algebra. A direct computation, based on Eqs. (<ref>) and (<ref>), gives indeed
⟨𝒯_𝒜'⟩^' ⊗ n = ⟨ T^⊗ n𝒯_𝒜 (T^⊗ n)^†⟩^' ⊗ n = ⟨𝒯_𝒜⟩^⊗ n = Tr(ρ^n_𝒜) = Tr(ρ^n_𝒜').
As an instructive example, we reinterpret the construction above in the context of the Ising Field Theory of Sec. <ref>. The twist field 𝒯(x) in Eq. (<ref>) is the twist operator associated with the algebra 𝒜 generated by the insertion of ε(y) and σ(y) in the region y>x. In particular, 𝒯(x)σ^j(y) = σ^j+1(y)𝒯(x) holds if x<y. We can consider another algebra 𝒜', related to 𝒜 via Kramers-Wannier duality, associated with the insertions of ε(y) and μ(y) in y>x. We call 𝒯'(x) the associated (dual) twist operator, which satisfies
𝒯'(x)μ^j(y) = μ^j(y)𝒯'(x), y<x.
In other words, 𝒯' sees μ as a local operator. 𝒯 and 𝒯' are explicitly different, and we remark that the commutation relations in Eq. (<ref>) are distinct from those in Eq. (<ref>). In conclusion, there is no reason why the correlation functions of 𝒯, associated with the usual notion of entanglement of the Ising model, should be equal in the ordered and disordered phases: we can only infer relations between 𝒯 and 𝒯'.
§ CONCLUSIONS
In this work we studied the entropy of kink excitation, showing that universal results emerge in the limit of large regions: this finding extends previous studies on the entanglement content of quasiparticles. We provided a detailed analysis of specific states of a spin-1/2 chain and made use of the qubit-picture, where computations can be carried out explicitly with elementary methods. Furthermore, we discussed a field-theoretic framework and found that the discrepancy between particles/kinks is ultimately traced back to the algebra of twist fields and the corresponding local/semilocal operators. In particular, the disorder fields charge non-trivially the twist fields, as shown by Eq. (<ref>), and a new family of twist fields (first introduced and characterised in Ref. <cit.>) is generated. Remarkably, while the qubit picture is simpler to deal with, the field-theoretic approach reveals the universal origin of the entropy difference between the kink states and the vacuum, which is far from obvious.
We pointed out the importance of the notion of entropy of algebras, which is crucial to understanding the lack of correspondence of entanglement entropy under Kramers-Wannier duality. A related, albeit different, discussion found in the previous literature concerns the entropy of the Ising model compared to that of free Majorana fermions: while the two models are related (by Jordan Wigner transformation), a discrepancy in the two-interval entropy has been found in Refs. <cit.>. The origin of the mismatch comes from the computation of the entropy of two distinct algebras: the one generated by local operators (say, Pauli matrices), and the one generated by fermions; this difference can be equivalently expressed in terms of spin-structure in the path integral formalism. We also mention recent results <cit.> on the presence of novel entanglement features in one-dimensional spin models mapped to free fermions, such as universal tripartite information and topological order, previously overlooked.
We remark that the main results of this work do not refer specifically to the Ising model and they do not make use of free fermionic techniques; the only important assumption is the spontaneous symmetry breaking of a ℤ_2 symmetry with kink excitations interpolating between the vacua as a low-lying spectrum. In particular, our field-theoretic approach could be used to discriminate the entanglement content of deconfined kinks and their bound states appearing in the more exotic spectrum of the confining chain (Ising model with both transverse and longitudinal fields) of Ref. <cit.>. Furthermore, we expect generalisations to other systems with distinct symmetry-breaking patterns to be straightforward (e.g. the Potts model in its ordered phase): one should identify the algebraic relations between disordered fields and the twist fields and carry out the contractions to evaluate the correlation functions (as done in Sec. <ref>). In this respect, a promising framework to formulate a general theory, that incorporates both fields with disorder lines and duality lines attached to them (as for Kramers-Wannier) is that of Non-invertible generalised symmetries, recently reviewed in Ref. <cit.>.
Acknowledgements:
LC acknowledges support from ERC Starting grant 805252 LoCoMacro. LC thanks Vanja Maric and Michele Fossati for their insightful comments on the manuscript. MM is grateful for funding under the EPSRC Mathematical Sciences Doctoral Training Partnership EP/W524104/1. MM thanks Olalla A. Castro-Alvaredo, David Horvath and Fabio Sailis for the many useful discussions.
§ DETAILS OF SOME LATTICE AND QUBIT CALCULATIONS
In this appendix, we collect some technical calculations on the reduced density matrix of multi-kink states on the lattice and within the qubit picture.
§.§ RDM of the lattice two-kink state in tripartite geometry
Here, we derive the explicit expression for the reduced density matrix in Section <ref>, that is
ρ^(2)_A_1 ∪ A_3 = 𝒩∑_s_k=±, k=ℓ_1+1,…,ℓ_1+ℓ_2 ∑_1 ≤ i_1 < j_1 ≤ L-1, 1 ≤ i_2 < j_2 ≤ L-1 A_2⟨s_ℓ_1 +1,…,s_ℓ_1+ℓ_2|K_+-(i_1)K_-+(j_1)⟩×⟨K_+-(i_2)K_-+(j_2)|s_ℓ_1 +1,…,s_ℓ_1+ℓ_2⟩_A_2,
with 𝒩≡ 2/[(L-1)(L-2)] a normalisation constant. For convenience, we split the indices appearing in the sum into six subsets:
𝒮_1 = {(i,j)|i,j ∈ A_1, i < j },
𝒮_2 = {(i,j)|i ∈ A_1, j ∈ A_2∖{ℓ_1 + ℓ_2}},
𝒮_3 = {(i,j)|i ∈ A_1, j ∈{ℓ_1 + ℓ_2}∪ A_3∖{L}},
𝒮_4 = {(i,j)|i, j ∈ A_2∖{ℓ_1 + ℓ_2}, i < j },
𝒮_5 = {(i,j)|i ∈ A_2∖{ℓ_1 + ℓ_2}, j ∈{ℓ_1 + ℓ_2}∪ A_3∖{L}},
𝒮_6 = {(i,j)|i, j ∈{ℓ_1 + ℓ_2}∪ A_3∖{L}, i < j }.
Then, we compute
A_2⟨s_ℓ_1 +1,…,s_ℓ_1+ℓ_2|K_+-(i_1)K_-+(j_1)⟩
= (δ_(i_1, j_1) ∈𝒮_1∏_k ∈ A_2δ_s_k,-) |K_+-(i_1)K_-+(j_1)⟩_A_1⊗|+,…,+⟩_A_3
+ (δ_(i_1, j_1) ∈𝒮_2∏_k = ℓ_1+1^j_1δ_s_k,-∏_k = j_1+1^ℓ_1+ℓ_2δ_s_k,+) |K_+-(i_1)⟩_A_1⊗|+,…,+⟩_A_3
+ (δ_(i_1, j_1) ∈𝒮_3∏_k ∈ A_2δ_s_k,-) |K_+-(i_1)⟩_A_1⊗|K_-+(j_1)⟩_A_3
+ (δ_(i_1, j_1) ∈𝒮_4∏_k = ℓ_1+1^i_1δ_s_k,+∏_k = i_1+1^j_1δ_s_k,-∏_k = j_1+1^ℓ_1 + ℓ_2δ_s_k,+) |+,…,+⟩_A_1⊗|+,…,+⟩_A_3
+ (δ_(i_1, j_1) ∈𝒮_5∏_k = ℓ_1+1^i_1δ_s_k,+∏_k = i_1+1^ℓ_1 +ℓ_2δ_s_k,-) |+,…,+⟩_A_1⊗|K_-+(j_1)⟩_A_3
+ (δ_(i_1, j_1) ∈𝒮_6∏_k ∈ A_2δ_s_k,+) |+,…,+⟩_A_1⊗|K_+-(i_1)K_-+(j_1)⟩_A_3.
In the above expression, to keep the formula as compact as possible we have made the identifications |K_+-(ℓ_1)⟩_A_1≡|+,…,+⟩_A_1, |K_-+(ℓ_1+ℓ_2)⟩_A_3≡|+,…,+⟩_A_3, |K_+-(i)K_-+(ℓ_1)⟩_A_1≡|K_+-(i)⟩_A_1, and |K_+-(ℓ_1+ℓ_2)K_-+(i)⟩_A_3≡|K_-+(i)⟩_A_3. The RDM is given by the sum of eight blocks, obtained by matching the delta constraints coming from the two matrix elements:
ρ^(2)_A_1 ∪ A_3 = 𝒩∑_(i_1,j_1) ∈𝒮_1, (i_2,j_2) ∈𝒮_1 |K_+-(i_1)K_-+(j_1)⟩_A_1 A_1⟨K_+-(i_2)K_-+(j_2)|⊗|+,…,+⟩_A_3 A_3⟨+,…,+|
+ 𝒩∑_(i_1,j_1) ∈𝒮_6, (i_2,j_2) ∈𝒮_6 |+,…,+⟩_A_1 A_1⟨+,…,+|⊗|K_+-(i_1)K_-+(j_1)⟩_A_3 A_3⟨K_+-(i_2)K_-+(j_2)|
+ 𝒩∑_(i_1,j_1) ∈𝒮_1, (i_2,j_2) ∈𝒮_6 |K_+-(i_1)K_-+(j_1)⟩_A_1 A_1⟨+,…,+|⊗|+,…,+⟩_A_3 A_3⟨K_+-(i_2)K_-+(j_2)|
+ 𝒩∑_(i_1,j_1) ∈𝒮_6, (i_2,j_2) ∈𝒮_1 |+,…,+⟩_A_1 A_1⟨K_+-(i_2)K_-+(j_2)|⊗|K_+-(i_1)K_-+(j_1)⟩_A_3 A_3⟨+,…,+|
+ 𝒩∑_(i_1,j_1) ∈𝒮_3, (i_2,j_2) ∈𝒮_3 |K_+-(i_1)⟩_A_1 A_1⟨K_+-(i_2)|⊗|K_-+(j_1)⟩_A_3 A_3⟨K_-+(j_2)|
+ 𝒩(ℓ_2 - 1)∑_i_1, i_2 ∈ A_1 |K_+-(i_1)⟩_A_1 A_1⟨K_+-(i_2)|⊗|+,…,+⟩_A_3 A_3⟨+,…,+|
+ 𝒩(ℓ_2 - 1)∑_j_1, j_2 ∈{ℓ_1 + ℓ_2}∪ A_3 ∖{L} |+,…,+⟩_A_1 A_1⟨+,…,+|⊗|K_-+(j_1)⟩_A_3 A_3⟨K_-+(j_2)|
+ 𝒩(ℓ_1 - 1)(ℓ_2 -2)/2 |+,…,+⟩_A_1 A_1⟨+,…,+|⊗|+,…,+⟩_A_3 A_3⟨+,…,+|.
§.§ Qubit description of three- and four-kink states
Here we diagonalise the reduced density matrix of the states with N=3,4 kinks in Eq. (<ref>). These states are
|Ψ^(3)⟩ = √(6 r_1 r_2 r_3)|1^+, 1^-, 1^+⟩
+ √(3 r_1^2 r_2)|2^+,1^+,-⟩ + √(3 r_1 r_2^2)|1^+,2^-,-⟩
+ √(3 r_1^2 r_3)|2^+,+,1^+⟩ + √(3 r_1 r_3^2)|1^+,-,2^-⟩
+ √(3 r_2^2 r_3)|+,2^+,1^+⟩ + √(3 r_2 r_3^2)|+,1^+,2^-⟩
+ r_1^3/2|3^+,-,-⟩ + r_2^3/2|+,3^+,-⟩ + r_3^3/2|+,+,3^+⟩,
|Ψ^(4)⟩ = √(12 r_1^2 r_2 r_3)|2^+, 1^+, 1^-⟩ + √(12 r_1 r_2^2 r_3)|1^+, 2^-, 1^-⟩ + √(12 r_1 r_2 r_3^2)|1^+, 1^-, 2^+⟩
+ √(4 r_1 r_2^3)|1^+,3^-,+⟩ + √(6 r_1^2 r_2^2)|2^+,2^+,+⟩ + √(4 r_1^3 r_2)|3^+,1^-,+⟩
+ √(4 r_1 r_3^3)|1^+,-,3^-⟩ + √(6 r_1^2 r_3^2)|2^+,+,2^+⟩ + √(4 r_1^3 r_3)|3^+,-,1^-⟩
+ √(4 r_2 r_3^3)|+,1^+,3^-⟩ + √(6 r_2^2 r_3^2)|+,2^+,2^+⟩ + √(4 r_2^3 r_3)|+,3^+,1^-⟩
+ r_1^2 |4^+,+,+⟩ + r_2^2 |+,4^+,+⟩ + r_3^2 |+,+,4^+⟩.
We trace out the degrees of freedom of A_2, obtaining the diagonal form of the two RDMs
ρ_A^(3) = r_2^3 |Φ_0⟩⟨Φ_0| + 3r_1 r_2^2 |Φ_1^-⟩⟨Φ_1^-| + 3r_3 r_2^2 |Φ_1^+⟩⟨Φ_1^+|
+ 6 r_1 r_2 r_3 |Φ_2^-⟩⟨Φ_2^-| + 3r_2(r_1^2 + r_3^2) |Φ_2^+⟩⟨Φ_2^+|
+ r_1 (r_1^2 + 3r_3^2) |Φ_3^-⟩⟨Φ_3^-| + r_3(3r_1^2 + r_3^2) |Φ_3^+⟩⟨Φ_3^+|,
and
ρ_A^(4) = r_2^4 |Φ_0⟩⟨Φ_0| + 4r_1 r_2^3 |Φ_1^-⟩⟨Φ_1^-| + 4r_3 r_2^3 |Φ_1^+⟩⟨Φ_1^+|
+ 12 r_1 r_2^2 r_3 |Φ_2^-⟩⟨Φ_2^-| + 6r_2^2(r_1^2 + r_3^2) |Φ_2^+⟩⟨Φ_2^+|
+ 4 r_1 r_2(r_1^2 + 3r_3^2) |Φ_3^-⟩⟨Φ_3^-| + 4 r_2 r_3(3r_1^2 + r_3^2) |Φ_3^+⟩⟨Φ_3^+|
+ 4 r_1 r_3(r_1^2 + r_3^2) |Φ_4^-⟩⟨Φ_4^-| + (r_1^4 + r_3^4 + 6r_1^2 r_3^2) |Φ_4^+⟩⟨Φ_4^+|.
The eigenvectors of ρ_A^(3) are:
* |Φ_0⟩ = |+,-⟩ (no kinks in A)
* |Φ_1^-⟩ = |1^+,-⟩ (one kink in A_1, no kinks in A_3)
* |Φ_1^+⟩ = |+,1^+⟩ (no kinks in A_1, one in A_3)
* |Φ_2^-⟩ = |1^+,1^+⟩ (one kink in A_1, one in A_3)
* |Φ_2^+⟩ = r_1 |2^+,-⟩+r_3|+,2^-⟩/√(r_1^2 +r_3^2) (one magnon in A_1 or one magnon in A_3)
* |Φ_3^-⟩= r_1 |3^+,-⟩+√(3)r_3|1^+,2^-⟩/√(r_1^2 +3r_3^2) (3 kinks in A_1 or one kink in A_1 and one magnon in A_3)
* |Φ_3^+⟩= √(3)r_1 |2^+,1^+⟩+r_3|+,3^+⟩/√(3r_1^2 +r_3^2) (1 magnon in A_1 and one kink in A_3 or 3 kinks in A_3),
while those of ρ_A^(4) are:
* |Φ_0⟩ = |+,+⟩ (no kinks in A)
* |Φ_1^-⟩ = |1^+,+⟩ (one kink in A_1)
* |Φ_1^+⟩ = |+,1^-⟩ (no kinks in A_1, one in A_3)
* |Φ_2^-⟩ = |1^+,1^-⟩ (one kink in A_1, one in A_3)
* |Φ_2^+⟩ = r_1 |2^+,+⟩+r_3|+,2^+⟩/√(r_1^2 +r_3^2) (one magnon in A_1 or one magnon in A_3)
* |Φ_3^-⟩= r_1 |3^+,+⟩+√(3)r_3|1^+,2^+⟩/√(r_1^2 +3r_3^2) (3 kinks in A_1 or one kink in A_1 and one magnon in A_3)
* |Φ_3^+⟩= √(3)r_1 |2^+,1^+⟩+r_3|+,3^+⟩/√(3r_1^2 +r_3^2) (1 magnon in A_1 and one kink in A_3 or 3 kinks in A_3)
* |Φ_4^-⟩ = r_1 |3^+,1^-⟩+r_3|1^+,3^-⟩/√(r_1^2 +r_3^2) (3 kinks in A_1 and one in A_3 or 1 kink in A_1 and three in A_3 )
* |Φ_4^+⟩=r_1^2 |4^+,+⟩+√(6r_1^2 r_3^2)|2^+,2^+⟩+ r_3^2|+,4^+⟩/√(r_1^4 +r_3^4 + 6r_1^2 r_3^2) (2 magnons in A_1 or 2 magnons in A_3 or 1 magnon in A_1 and one magnon in A_3).
The expressions of the eigenstates show how the superposition mechanism explained at the end of Section <ref> works in detail. It is easy to check that the eigenvalues in Eqs. (<ref>) and (<ref>) match the formulae (<ref>), (<ref>).
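As a quick numerical cross-check of the last statement (our own sanity check, not part of the original text), the eigenvalues listed above can be verified to sum to one for any r_1+r_2+r_3=1, and the corresponding Rényi entropies evaluated directly:

    # Sketch: normalisation check of the eigenvalues of rho_A^(3) and rho_A^(4)
    # and evaluation of the Renyi entropy for a random probability triple.
    import numpy as np

    rng = np.random.default_rng(2)
    r1, r2, r3 = rng.dirichlet(np.ones(3))     # random r_i with r1 + r2 + r3 = 1

    eig3 = [r2**3, 3*r1*r2**2, 3*r3*r2**2, 6*r1*r2*r3, 3*r2*(r1**2 + r3**2),
            r1*(r1**2 + 3*r3**2), r3*(3*r1**2 + r3**2)]
    eig4 = [r2**4, 4*r1*r2**3, 4*r3*r2**3, 12*r1*r2**2*r3, 6*r2**2*(r1**2 + r3**2),
            4*r1*r2*(r1**2 + 3*r3**2), 4*r2*r3*(3*r1**2 + r3**2),
            4*r1*r3*(r1**2 + r3**2), r1**4 + r3**4 + 6*r1**2*r3**2]

    assert np.isclose(sum(eig3), 1.0) and np.isclose(sum(eig4), 1.0)

    def renyi(eigs, n=2):
        return np.log(sum(e**n for e in eigs)) / (1 - n)

    print(renyi(eig3), renyi(eig4))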
§ MORE ON C^*-ALGEBRAS AND TWIST OPERATORS
In this appendix, we first review some elementary properties of the finite-dimensional C^*-algebras, and then we discuss some additional details regarding the notions of twist operators and entropy. We remark that infinite-dimensional algebras arise systematically in rigorous treatments of the observables of infinite dimensional spin lattices <cit.> and QFT <cit.>, but their treatment is beyond the purpose of this work. However, in this context, we consider the algebra of finite regions to be finite-dimensional, potentially incorporating appropriate ultraviolet regularisation when dealing with quantum field theories.
The first important result is a classification theorem for those algebras, which goes under the name of Wedderburn–Artin theorem (see for instance <cit.>). It states that every finite-dimensional C^*-algebra 𝒜 is isomorphic to a direct sum of full matrix algebras, namely
𝒜≃⊕_λEndłℂ^d_λ.̊
Each term in the previous sum is called a factor, and it is (isomorphic to) an algebra of d_λ× d_λ complex matrices. The center of the algebra 𝒜, defined as
𝒵(𝒜) ≡{ c ∈𝒜 | [c,a] = 0 ∀ a ∈𝒜},
is generated by the identity operators of each factor
𝒵(𝒜) = Span{ 1_λ∈Endłℂ^d_λ}.
In the case of a single factor, which is the usual scenario for the algebra of regions of spin chains, the center is trivial as it is generated by the identity matrix. Nonetheless, in other contexts, as for lattice gauge theories <cit.>, the physical observables may display multiple factors, and we refrain from making specific assumptions in this regard.
An important concept in defining the density matrix and the entropy is the (intrinsic) notion of trace. In the context of algebras, a trace is defined as a positive linear functional
𝒜 →ℂ,
a ↦Tr( a),
which satisfies Tr(ab-ba) = 0. Without further hypotheses, it might not be obvious whether a trace exists or is unique (up to a proportionality constant). The first important result is that, whenever 𝒜 is a full matrix algebra, say
𝒜 = End(ℂ^d),
then the trace is uniquely defined up to a proportionality constant, and it corresponds to the usual notion of the trace of (d× d complex) matrices. To prove the statement, it is sufficient to observe that the image of the commutator
[a,b]≡ ab-ba,
has dimension d^2-1[
Given an orthonormal basis {|i⟩} one has [|i⟩⟨j|,|j⟩⟨k|] = |i⟩⟨k| for i ≠ k, and [|i⟩⟨j|,|j⟩⟨i|] = |i⟩⟨i|-|j⟩⟨j|. The linear combinations of those matrices which are obtained via commutators generate the space of traceless matrices.
] and it is in direct sum with the span of the identity matrix. Therefore, to define a trace satisfying Eq. (<ref>), it is sufficient to specify its value for the identity operator. The most common normalisation, associated with the usual trace of matrices, is
Tr( 1) = d.
For algebras with multiple factors, many traces can be constructed in principle, and they are identified by the values of the elements in the center; a natural choice is nonetheless
Tr( 1_λ) = d_λ.
It is worth commenting on the entropy for algebras with multiple factors (see details in Ref. <cit.>). Given a state ⟨…⟩ for 𝒜 in (<ref>), we split it as
⟨…⟩ = ∑_λ p_λ⟨…⟩_λ.
Here p_λ≡⟨ 1_λ⟩ is the probability of the factor λ; ⟨…⟩_λ is the state conditioned to the factor λ, which satisfies ⟨ 1_λ⟩_λ =1 and ⟨𝒪⟩_λ =0 if 𝒪∈𝒜_λ' (λ'≠λ). Similarly, the density matrix associated with ⟨…⟩ is decomposed as
ρ_𝒜 = ∑_λ p_λρ_𝒜_λ,
with ρ_𝒜_λ the density matrix of ⟨…⟩_λ.
S(ρ_𝒜) = ∑_λ p_λ S(ρ_𝒜_λ) - ∑_λ p_λlog p_λ.
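This decomposition can be checked numerically for a block-diagonal density matrix; the following sketch (illustrative only, with arbitrary factor dimensions) verifies it for the von Neumann entropy.

    # Sketch: S(rho) = sum_l p_l S(rho_l) - sum_l p_l log p_l for a block-diagonal state.
    import numpy as np
    from scipy.linalg import block_diag

    def vn_entropy(rho):
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]
        return -np.sum(w * np.log(w))

    rng = np.random.default_rng(3)
    dims = [2, 3, 4]                                  # factor dimensions d_lambda
    p = rng.dirichlet(np.ones(len(dims)))             # probabilities p_lambda
    blocks = []
    for d in dims:
        m = rng.normal(size=(d, d)); r = m @ m.T; r /= np.trace(r)
        blocks.append(r)

    rho = block_diag(*[pi * ri for pi, ri in zip(p, blocks)])
    lhs = vn_entropy(rho)
    rhs = sum(pi * vn_entropy(ri) for pi, ri in zip(p, blocks)) - np.sum(p * np.log(p))
    assert np.isclose(lhs, rhs)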
We mention that a similar decomposition for the entropy arises whenever the density matrix displays a block-diagonal structure, as pointed out in Refs. <cit.> for U(1) conserving systems. The computation of the moments of the reduced density matrix, and thus of the Rényi entropies, can be given in terms of twist operator for algebras with non-trivial center as well, as we explain below.
First, one defines 𝒯_𝒜_λ as twist operators associated with 𝒜_λ and belonging to 𝒜^⊗ n_λ: no issues arise here since the center of 𝒜_λ is trivial. Then, one introduces
𝒯_𝒜≡∑_λ𝒯_𝒜_λ∈𝒜^⊗ n,
which acts as a replica cyclic permutation and it does not connect distinct factors. We define the replica state ⟨…⟩^⊗ n as the state of 𝒜^⊗ n associated with the density matrix ρ^⊗ n_𝒜∈𝒜^⊗ n. A direct computation gives
Tr(𝒯_𝒜ρ^⊗ n_𝒜) = ∑_λ,λ_1,…,λ_n p_λ_1… p_λ_nTr(𝒯_𝒜_λ(ρ_𝒜,λ_1⊗…⊗ρ_𝒜,λ_n)) = ∑_λ p^n_λTr(𝒯_𝒜_λρ^⊗ n_𝒜,λ) =
∑_λ p^n_λTr(ρ^n_𝒜,λ) = Tr(ρ^n_𝒜),
where we used the fact that contributions associated to distinct factors vanish and only the terms with λ=λ_1=… = λ_n remain.
Finally, we give some remarks regarding the existence of twist operators and their generalisation. In the main text, we mention that a microscopic construction for an operator that implements the replica shift j→ j+1 via Eq. (<ref>) can be provided explicitly.
In different contexts, other twist operators are found to be useful in the study of entanglement, as for example composite fields with ℤ_2 lines attached to them, as for the fields in Sec. <ref> and those of Ref. <cit.>. Therefore, a natural question to pose is whether a given transformation of observables can always be implemented via a certain operator. More precisely, given
ϕ: 𝒜→𝒜,
an algebra automorphism (that is ϕ(ab) = ϕ(a)ϕ(b), ϕ(a^†) = ϕ(a)^†) we look for an element 𝒯^ϕ∈𝒜 satisfying
𝒯^ϕ a = ϕ(a) 𝒯^ϕ, ∀ a ∈𝒜.
The cyclic permutation of the replica model in Eq. (<ref>) is indeed a specific example of this general problem. In the case of algebras with trivial center, a general result, under the name of Skolem–Noether theorem <cit.>, ensures that such a 𝒯^ϕ always exists: in other words, every automorphism can be realised as an inner automorphism via 𝒯^ϕ a (𝒯^ϕ)^-1 = ϕ(a). This result is far from being trivial, and we are not aware of any simple constructive proof. Nonetheless, a proof of the uniqueness (up to constants) is definitely simpler, as it follows the same argument of Sec. <ref>, and it relies on the triviality of the center of 𝒜. Lastly, we point out that, in the presence of a non-trivial center, not only the uniqueness but also the existence of an operator 𝒯^ϕ satisfying (<ref>) is not guaranteed a priori. A simple example is provided by the commutative algebra generated by two central elements a,b (that is, 𝒜≃End(ℂ)⊕End(ℂ)) satisfying a^2=a,b^2=b: the map ϕ(a) = b, ϕ(b) = a is an automorphism, but it clearly cannot be realised as an inner automorphism since the algebra is commutative.
|
http://arxiv.org/abs/2409.03333v1 | 20240905081614 | YOLO-CL cluster detection in the Rubin/LSST DC2 simulation | [
"Kirill Grishin",
"Simona Mei",
"Stephane Ilic",
"Michel Aguena",
"Dominique Boutigny",
"Marie Paturel",
"the LSST Dark Energy Science Collaboration"
] | astro-ph.CO | [
"astro-ph.CO"
] |
YOLO-CL for LSST
Grishin et al.
Université Paris Cité, CNRS(/IN2P3), Astroparticule et Cosmologie, F-75013 Paris, France [email protected], [email protected]
Jet Propulsion Laboratory and Cahill Center for Astronomy & Astrophysics, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, California 91011, USA
IJCLab, Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France
LAPP, Université Savoie Mont Blanc, CNRS/IN2P3, Annecy; France
The next generation large ground-based telescopes like the Vera Rubin Telescope Legacy Survey of Space and Time (LSST) and space missions like Euclid and the Nancy Roman Space Telescope will deliver wide area imaging surveys at unprecedented depth. In particular, LSST will provide galaxy cluster catalogs up to z∼1 that can be used to constrain cosmological models once their selection function is well-understood. Machine learning based cluster detection algorithms can be applied directly on images to circumvent systematics due to models, and photometric and photometric redshift catalogs. In this work, we have applied the deep convolutional network YOLO for CLuster detection (YOLO-CL) to LSST simulations from the Dark Energy Science Collaboration Data Challenge 2 (DC2), and characterized the LSST cluster selection function. We have trained and validated the network on images from a hybrid sample of (1) clusters observed in the Sloan Digital Sky Survey and detected with the red-sequence Matched-filter Probabilistic Percolation, and (2) dark matter haloes with masses M_200c > 10^14 M_⊙ from the DC2 simulation, resampled to the SDSS resolution. We quantify the completeness and purity of the cluster catalog with respect to DC2 haloes with M_200c > 10^14 M_⊙. The cluster catalog is 100% and 94% complete for halo mass M_200c > 10^14.6 M_⊙ at 0.2<z<0.8, and M_200c > 10^14 M_⊙ and redshift z ≲ 1, respectively, with only 6% false positive detections. We find that all the false positive detections are dark matter haloes with 10^13.4 M_⊙≲ M_200c≲ 10^14 M_⊙, which corresponds to galaxy groups. We also found that the selection function is almost flat with respect to the halo mass at 0.2 ≲ z ≲ 0.9. The overall performance of YOLO-CL is comparable to or better than that of other cluster detection methods used for current and future optical and infrared surveys. YOLO-CL shows better completeness for low mass clusters when compared to current detections based on Matched Filter cluster finding algorithms applied to Stage 3 surveys using the Sunyaev Zel'dovich effect, such as SPT-3G, and detects clusters at higher redshifts than X-ray-based catalogs. Future complementary cluster catalogs detected with the Sunyaev Zel'dovich effect will reach similar mass depth and will be directly comparable with optical cluster detections in LSST, providing cluster catalogs with unprecedented coverage in area, redshift and cluster properties. The strong advantage of YOLO-CL over traditional galaxy cluster detection techniques is that it works directly on images and does not require photometric and photometric redshift catalogs, nor does it need to mask stellar sources and artifacts.
YOLO-CL cluster detection in the Rubin/LSST DC2 simulations
Kirill Grishin1,
Simona Mei1,2,
Stephane Ilic3,
Michel Aguena1,
Dominique Boutigny4,
Marie Paturel4,
and the LSST Dark Energy Science Collaboration
===============================================================================================================================================================================
§ INTRODUCTION
Galaxy clusters are the largest gravitationally bound structures in the Universe, and their distribution is a probe for cosmological models. Upcoming deep large-scale survey like those performed with the Vera C. Rubin Observatory <cit.>, the Euclid space telescope <cit.> and the Nancy Grace Roman Space Telescope <cit.> will give us unprecedented deep optical and infrared imaging of hundreds of thousands of clusters up to z∼2.
In particular, the Vera Rubin Telescope Legacy Survey of Space and Time (LSST; ) will deliver deep optical imaging data over ∼20,000 sq. deg. of the sky. LSST will observe in six bandpasses (u, g, r, i, z, y) and reach a depth of r∼27.5mag on about half of the sky <cit.>. These observations will permit us to obtain constraints on cosmological models using galaxy clusters, once we can provide a precise selection function.
Cluster detection in optical and near-infrared multi-wavelength imaging surveys is mainly based on the search of spatial overdensities of galaxies of a given class, which can be quiescent, line-emitter, massive, etc. <cit.>.
Most of these methods require a high-quality photometric calibration, an accurate calibration of galaxy colors as a function of redshift, and unbiased photometric and photometric redshift catalogs. Photometric catalogs might be affected by aperture or model choices in measuring magnitudes and background subtraction. These systematics propagate to the estimation of photometric redshifts, which also rely on being calibrated on available spectroscopic redshift samples and galaxy spectral energy distribution templates that do not cover the entire galaxy population <cit.>. These uncertainties on both photometric and photometric redshift catalogs make it essential to complement traditional cluster detection algorithms with new techniques that do not rely on catalogs, but instead work directly on images, such as deep machine learning (ML) neural network.
Over the last years, deep ML techniques have been widely used in astrophysics for different purposes <cit.>, including object classification <cit.>, estimation of redshifts of individual galaxies <cit.>, and solution of ill-posed problems, including reconstructions of matter distributions <cit.>. The purity of the samples, defined as the percentage of true objects recovered by the network as opposed to false detections, was high enough to search for rare or elusive objects <cit.>. Among these methods, convolutional neural networks (CNN) are well adapted for object detection and characterization in astrophysics <cit.>, in particular for galaxy cluster detection <cit.>.
Recently, our team developed a cluster detection method modifying the well-known detection-oriented deep machine learning neural network YOLO <cit.>. Our network, YOLO-CL <cit.>, detects galaxy clusters on multi-wavelength images, and shows a higher performance with respect to traditional cluster detection algorithms in obtaining cluster catalogs with high completeness and purity. When applied to the Sloan Digital Sky Survey <cit.>, YOLO-CL provides cluster catalogs that are complete at ∼ 98% for X-ray detected clusters with I_X, 500≳ 20 × 10^-15 erg/s/cm^2/arcmin^2 at 0.2 ≲ z ≲ 0.6, and at ∼ 100% for clusters with I_X, 500≳ 30 × 10^-15 erg/s/cm^2/arcmin^2 at 0.3 ≲ z ≲ 0.6. The contamination from false detections is ∼ 2%.
It is also interesting that <cit.> found that the selection function is flat as a function of redshift, with respect to the X-ray mean surface brightness. The advantage of YOLO-CL, and other ML networks that work directly on images, is that they are independent of models and systematics that might arise when building photometric and photometric redshift catalogs in traditional methods. They also do not need stellar sources and artifacts to be masked. If the training sample is representative of the entire observed sample, the ML methods should be less impacted by modeling choices and systematics.
In this paper, we evaluate the YOLO-CL efficiency in detecting galaxy clusters in the LSST survey. Given that LSST observations have not started yet, we apply the network to simulations from the LSST Data Challenge 2 <cit.>, which were developed within the LSST Dark Energy Science Collaboration (DESC[<https://lsstdesc.org/>]). We quantify the cluster catalog selection function in terms of completeness and purity (see below) with respect to DC2 haloes with M_200c > 10^14 M_⊙. The cluster catalog is 100% complete for halo mass M_200c > 10^14.6 M_⊙ at 0.2<z<0.8, and 94% complete for M_200c > 10^14 M_⊙ and redshift z ≲ 1, with only 6% false positive detections. This contamination is expected from the intrinsic accuracy of convolutional neural networks, and our network is highly efficient with respect to traditional cluster detection algorithms based on photometric and photometric redshift catalogs. It is interesting that all the false positive detections are groups with 10^13.4 M_⊙≲ M_200c≲ 10^14 M_⊙, and that the catalog selection function is flat with respect to the halo mass at 0.2 ≲ z ≲ 0.9.
This article is organized as follows: in Section <ref> we describe the observations and simulations used to train and validate our network. In Section <ref> we present YOLO-CL and its training and validation. The results and the discussion and conclusions are presented in Section <ref> and Section <ref>, respectively. The summary is in Sec. <ref>. All magnitudes are given in the AB system <cit.>. We adopt a Λ CDM cosmology, with Ω_M =0.3,
Ω_Λ =0.7, h=0.72, and σ_8 = 0.8.
§ OBSERVATIONS AND SIMULATIONS
Since the DESC DC2 simulated area includes only ≈ 2,000 synthetic galaxy clusters (see Sec. <ref>) and we need at least 10,000 objects for training our network, we trained on a hybrid sample of cluster images that includes both the same set of SDSS observed images <cit.>
that we used in <cit.>, and synthetic cluster images from the DESC DC2 simulations.
This strategy is widely used in astrophysics when the target sample (in our case the LSST DC2 simulations) is large enough to provide a statistical application of a network, but too small to be used for the network training and validation.
In the case of convolutional networks such as YOLO-CL, <cit.> demonstrated that transfer learning allows for rapid adaptation from one astrophysical survey application to another. Specifically, the weights obtained by training a convolutional network on images from a given survey can be efficiently transferred to another survey by fine-tuning them, i.e., by retraining the network adding a smaller number of images from the new survey, roughly an order of magnitude fewer than the initial training sample. In their case, the initial survey was SDSS, and they applied transfer learning to the Dark Energy Survey <cit.>. We demonstrate in this section that this approach is also effective when re-training using our initial training set from SDSS as utilized in <cit.>, and incorporating approximately one order of magnitude fewer synthetic cluster images from the DESC DC2 simulations.
§.§ The SDSS observations
The SDSS is an imaging survey that was performed with the 2.5-m. Apache Point telescope in five optical bandpasses (u, g, r, i, z) using the SDSS camera in a scanning regime. It covers ∼ 14,055 sq. deg. of the sky in two main areas in the Northern hemisphere split by the Milky Way: one within 7h < RA < 16h and -1 deg <Dec< +62 deg. and the other within 20h<RA<2h and -11 deg.<Dec<+35 deg. The 5-σ point-source depth in the g, r and i bandpasses is 23.13, 22.70 and 22.20 mag, respectively. The seeing quality for SDSS images varies from 1.2 to 2.0 arcsec[<https://www.sdss4.org/dr17/imaging/other_info/>].
As reference SDSS cluster catalog, we used the red-sequence Matched-filter Probabilistic Percolation
(redMaPPer) Data Release 8 (DR8) catalog from <cit.>. The redMaPPer algorithm finds overdensities of red sequence galaxies in large photometric surveys. The cluster catalog that we used[Version 6.3 of the catalog, from <risa.stanford.edu/redMaPPer>.] covers ∼10,000 square degrees of the SDSS DR8 data release, and includes 26,111 clusters over the redshift range z ∈ [0.08, 0.55]. The redMaPPer catalog is 100% complete up to z=0.35 for clusters from the MCXC (Meta-Catalog of X-Ray Detected Clusters of Galaxies) X-ray detection catalog <cit.>, with temperature T_X ≳ 3.5 keV, and luminosity L_X ≳ 2 × 10^44 erg s^-1, decreasing to 90% completeness at L_X ∼ 10^43 erg s^-1. The centers of 86% of the redMaPPer clusters correspond well to their X-ray centers <cit.>. For each cluster, redMaPPer provides its position, the richness λ[By definition, the cluster richness is the number of cluster members above a given luminosity. For redMaPPer it is defined as a sum of the probability of being a cluster member over all galaxies in a cluster field <cit.>.], and a list of cluster members. The richness is correlated to the cluster mass. All redMaPPer rich clusters (λ > 100) are detected in the X-ray ROSAT All Sky Survey <cit.>.
We excluded clusters with redshifts z<0.2 from the original redMaPPer cluster catalog, because they cover regions in the sky larger than the images that we consider when optimizing our network execution time and computational power (see sec. 3.2). Our final redMaPPer catalog includes 24,406 clusters, whose distribution is shown in Fig. 1 from <cit.>.
For the network training and validation, we used JPEG color images of the original SDSS DR16 images centered on each of the 24,406 redMaPPer clusters, using the web service[<http://skyserver.sdss.org/dr16/en/help/docs/api.aspx#imgcutout>]. These images were derived from the g, r, and i-band FITS corrected frame files from the Science Archive Server, and the color images are built using the conversion algorithm[Detailed here: <https://www.sdss.org/dr16/imaging/jpg-images-on-skyserver>]
based on <cit.>. We chose these three bandpasses because they are sufficient to identify passive early-type galaxies in clusters at z ≲ 1.
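For reference, composite gri images following the same asinh scheme of <cit.> can be produced with the astropy helper make_lupton_rgb; the sketch below uses hypothetical file names, and its stretch/Q arguments play the role of the (α, Q) parameters discussed in the text, although the exact mapping between the two conventions should be checked against the SDSS conversion code.

    # Illustrative sketch (not the exact SDSS pipeline): build a gri color composite
    # with the Lupton et al. asinh scaling implemented in astropy.
    import numpy as np
    from astropy.io import fits
    from astropy.visualization import make_lupton_rgb

    # hypothetical file names for the g, r, i corrected frames of one cutout
    g = fits.getdata("cutout_g.fits").astype(float)
    r = fits.getdata("cutout_r.fits").astype(float)
    i = fits.getdata("cutout_i.fits").astype(float)

    # i -> red, r -> green, g -> blue, as in the SDSS color images
    rgb = make_lupton_rgb(i, r, g, Q=8, stretch=0.5, filename="cutout_gri.jpeg")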
§.§ The DESC DC2 simulation
In ten years, LSST will reach the 5-σ point-source depth of 27.4, 27.5, and 26.8 mag in the g, r and i bandpasses, respectively <cit.>. This will allow the construction of a catalog of 20 billion individual galaxies, and over 100,000 galaxy clusters at z<1.2. The average seeing quality at the Rubin telescope site is 0.67” with a best value of 0.4”, which is very close to the best spatial resolution that can be achieved from the ground.
The primary goal of the LSST DESC DC2 simulation is to create realistic LSST synthetic observations that can be used to test all DESC primary pipelines.
DC2 is based on the Outer Rim cosmological N-body simulation, that contains around a trillion particles in 4.225 Gpc^3 of co-moving volume <cit.>. An extragalactic catalog, CosmoDC2, was built from the snapshots of Outer Rim simulation by: 1) assigning galaxies to each halo of the dark matter simulation with properties obtained from empirical relations <cit.>, and 2) fully characterizing galaxies in this sample adding missing properties derived from the semi-empirical model (SAM) Galacticus <cit.>.
The CosmoDC2 catalog was used to simulate images over an area of 445 sq. deg., with galaxies at z<3. The sample of galaxies in the initial truth catalog is complete down to r=28.0 mag, and galaxies fainter than r=29.0 mag are excluded from the simulations for computation performance purposes.
The catalog is stored in the HEALPix format <cit.> and split into three redshift bins: 0<z<1, 1<z<2 and 2<z<3. The quality of this catalog was evaluated in the framework of the LSST DESC collaboration using the DESCQA validation framework <cit.>.
This validation confirmed that the simulation reproduces galaxies, their properties, and their distribution in the Universe reasonably well <cit.>. This makes the DC2 simulation one of the best datasets to test the DESC cosmological pipelines and algorithms, including cluster finders.
The simulation includes both a catalog and synthetic images. The simulation of the DC2 synthetic images consisted of two main steps: 1) simulation of raw images that resemble those obtained with LSSTCam, and 2) reduction of these raw images using the LSST science pipeline [<https://pipelines.lsst.io/>], based on the Hyper Supreme-Cam pipeline <cit.>. In the first step, each object from the cosmoDC2 catalog was simulated using the GalSim package <cit.>, taking into account the LSST depth and noise, accounting for CCD effects, night sky background <cit.>, cosmic ray hits, etc. Galaxy colors and spectral energy distributions were modeled using templates from <cit.>.
The raw synthetic images were then processed by the LSST science pipeline, which covers: 1) single-frame processing, by basic corrections like bias subtraction, non-linearity and flat-field corrections, and first iteration of astrometric and photometric calibration 2) joint calibration, which uses synthetic observations of the same area of the sky from different frames to improve the calibration 3) image co-addition, when individual images are resampled on the same coordinate grid, and then coadded, and 4) source detection. The 5σ point-source depth of the simulation in the r-band is 27.3 mag, which corresponds to 5 years of the LSST survey, the deeper DC2 images on a large area currently available.
Using the Dark Energy Survey <cit.> exposure checker <cit.>, a few dozens of DESC members performed a quality check of ∼9,000 synthetic co-added images, which did not show substantial issues <cit.>. The galaxy catalogs comply with the LSST Science Requirements <cit.> and the DESC Science Requirements <cit.>.
These images are expected to have properties, including depth and seeing quality, very close to those that will be obtained with LSSTCam <cit.>.
The cosmoDC2 v1.1.4 catalog includes 2,342 dark matter halos with M_200c > 10^14 M_⊙[M_200c is defined as the mass within the circular region of radius R_200 containing a mean mass density equal to two hundred times the critical density of the Universe at a given redshift.] (the typical minimal virialized halo mass that defines galaxy clusters; hereafter we will refer to these haloes as DC2 clusters) and redshift in the range 0.2<z<1.0. Hereafter, we refer to this sample as our DC2 "true cluster" sample.
We exclude halos on the simulation edges, which are not entirely included in the images. Fig. <ref> shows our DC2 cluster sample and its redshift and mass distributions.
For each halo, the catalog includes its position, the true redshift, the dark matter halo mass M_200c, and a richness parameter defined as the sum of the probabilities for galaxies brighter than m^*(z)+2 to be a halo member. Here m^* is the characteristic magnitude that corresponds to the luminosity of the knee of the Schechter luminosity function <cit.> at the redshift of the cluster. To find m^*, we fitted the galaxy luminosity function in the K-band <cit.>. Then, we predicted m^* in optical bands using the PEGASE2 library <cit.> for a burst galaxy that passively evolves from z=3. The probability for a galaxy to be a cluster member was computed by assigning a weight depending on the projected distance from the cluster center, following <cit.>.
To generate composite color images, we used the deepCoadd frames delivered by the LSST pipeline in the DC2 Run2.2 simulation run <cit.> for the cosmoDC2 v1.1.4 extragalactic catalog <cit.>. These images are fully reduced, calibrated, sky subtracted and co-added science frames with a pixel scale of 0.2”/pix. To make our analysis fully consistent with the SDSS images, we have resampled the DC2 images to the SDSS pixel scale of 0.39”/pix using the astropy-based reproject package <cit.>.
To build composite JPEG color images for DC2 simulation we used the same algorithm used in the SDSS survey <cit.>. This algorithm has two main parameters: nonlinearity (Q) and flux scale (α). For SDSS, the parameters are Q=8 and α=0.2[<https://sdss4.org/dr17/imaging/jpg-images-on-skyserver/>]. For the DC2 color images, we used Q=8 and α=0.08, in order to partially compensate the depth and magnitude zeropoint difference (the zero magnitudes are m_0^SDSS=22.5 mag and m_0^LSST=27 mag). In fact, with α=0.08, the DC2 scale visually reproduces the SDSS scale. We also adjusted the DC2 flux count range to have a similar range in surface brightness as in SDSS. We performed a sky subtraction, and registered the composite images on a final JPEG scale from 0 to 255. We set to zero and 255 all pixels with fluxes less than zero and larger than 255, respectively.
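A minimal sketch of the resampling step described above, using the reproject package to bring a DC2 cutout onto the SDSS pixel scale before building the color composite; the file name and the simple TAN output WCS are assumptions made here for illustration, not the actual DC2 data products.

    # Sketch: resample a DC2 deepCoadd cutout (0.2"/pix) onto the SDSS sampling
    # (0.39"/pix as quoted in the text) with the astropy-based reproject package.
    from astropy.io import fits
    from astropy.wcs import WCS
    from reproject import reproject_interp

    hdu = fits.open("dc2_deepCoadd_r.fits")[0]           # hypothetical DC2 cutout
    wcs_in = WCS(hdu.header)

    wcs_out = WCS(naxis=2)                               # target grid at SDSS pixel scale
    wcs_out.wcs.ctype = ["RA---TAN", "DEC--TAN"]
    wcs_out.wcs.crval = wcs_in.wcs.crval
    wcs_out.wcs.crpix = [1024, 1024]
    wcs_out.wcs.cdelt = [-0.39 / 3600.0, 0.39 / 3600.0]

    resampled, footprint = reproject_interp(hdu, wcs_out, shape_out=(2048, 2048))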
§ TRAINING AND VALIDATION
§.§ YOLO-CL
YOLO-CL[GITHUB PAGE] is based on the third iteration
of YOLO, <cit.>, which represents
a significant improvement over the first versions, and proved to be very well adapted for cluster detection <cit.>. We outline here the algorithm's main characteristics, and more details can be found in <cit.>. The architecture applies a single neural network to images, combining object detection and classification into a single process. This results in several orders of magnitude faster execution times, compared to other detection convolutional networks such as R-CNN <cit.>.
The network divides the image into a S× S grid of cells, within which the detection and classification are performed. For each object detection the network predicts B bounding boxes, to which it assigns a set of parameters, including its position, size, the probability of being an object and the probability of belonging to a certain class of objects. The network is trained on a sample of images on which it optimizes the parameters to better detect and classify objects (i.e., converges on the optimal weights).
During the training process, YOLO-CL optimizes a multi-component loss function ℒ <cit.>:
ℒ = ℒ_ obj + ℒ_ bbox +ℒ_ class .
where ℒ_obj is the "objectness loss" and optimizes the object identification, ℒ_bbox is the "bounding box loss" and optimizes the bounding box position and size, and ℒ_class is the “classification loss” and optimizes the object class. The loss functions quantify the distance between the true parameter values and those estimated by the network. With respect to the original “classification loss” function that considers several object classes, in YOLO-CL we removed multiple object classes because we use a single object class, which is "cluster". As "bounding box loss", we used the generalized Intersection over Union
(gIoU) loss <cit.>. In fact, the traditional IoU (Intersection over Union[The IoU is defined as the ratio between the area of intersection and the area of union between the detected object bounding box and the "true object" bounding box <cit.>]) metric does not permit us to optimize the corresponding loss term when the true and predicted bounding boxes are non-overlapping. More details can be found in <cit.>.
The training consists of several iterations, called epochs. At each epoch, all the images from the training sample are fed to the network, which updates its weights and biases to decrease the loss function, bringing the values estimated by the network closer to the true values.
The network is then validated on a validation sample.
The final network output is a catalog of detections with an associated detection probability (see below).
§.§ Training and validation
We used two equal hybrid samples of 12,203 redMaPPer and 1,171 DC2 cluster images each for both training and validating YOLO-CL, with the same number but different images for the training and validation. Each of these two samples has identical redshift and mass distributions, for a total of 24,406 redMaPPer and 2,342 DC2 cluster images. Our hybrid training and validation sample approach makes the learning invariant to the differences in object densities, and all the other differences between SDSS and DC2.
Following <cit.>, we start with images of dimension 2048×2048 pixels, which corresponds to ∼13.5 x 13.5 arcmin^2, twice the size of a typical cluster virial radius of 1 Mpc at z∼ 0.2, and much larger than the typical cluster virial radius at z>0.5. For the input to the first layer of the network, we resize each image by average pooling to 512×512 pixels (with a pixel size equal to eight times the LSST resolution[four times the SDSS resolution]) and 1024×1024 pixels (with a pixel size equal to four times the LSST resolution[twice the SDSS resolution]), and keep the same stride parameters as in the original publication, namely 8, 16, and 32.
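The average pooling step can be illustrated with a simple block-averaging, as in the sketch below (our own minimal example; the actual resizing is performed in the network input pipeline):

    # Sketch: block-average a 2048x2048 cutout down to 512x512 (factor 4).
    import numpy as np

    def average_pool(img, factor):
        h, w = img.shape[0] // factor, img.shape[1] // factor
        return img[:h * factor, :w * factor].reshape(h, factor, w, factor).mean(axis=(1, 3))

    img = np.random.rand(2048, 2048, 3)       # stand-in for a gri JPEG cutout
    pooled = np.stack([average_pool(img[..., c], 4) for c in range(3)], axis=-1)
    print(pooled.shape)                       # (512, 512, 3)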
These image sizes and stride parameters are a good compromise between keeping high image resolution and our computational power. Our training and validation runs were performed on Centre de Calcul IN2P3[<https://cc.in2p3.fr/>] computing cluster on a NVIDIA Tesla V100-SXM2-32GB GPU, equipped with 32 GB of memory.
§.§.§ Hyperparameter optimization
Our hyperparameter optimization is performed with respect to memory limits and the stability of the training. Since the weight optimization during the training is done using gradient descent, the whole process can be subject to instabilities. There are two main hyper-parameters responsible for the mitigation of these instabilities: the batch size and the learning rate.
The size of the training sample is too large to store in memory, and it is not possible to complete the training on the entire sample in one iteration. To overcome this limitation, we split our training sample in subsets (batches) that are processed by the network at the same time. The batch size is limited by two main factors: it cannot be too small, because in this case the derived direction of the gradient would be unstable, and at the same time it cannot be too big, given that memory resources are limited. Due to memory limitations, we used a batch size of 8 for the 512x512 images and of 2 for the 1024x1024 images.
The other hyper-parameter that is crucial for training is the learning rate. It defines how large the weight variations can be at each epoch. It cannot be too small, otherwise the most optimal weight configuration would never be achieved, and it cannot be too big because it would make the training process less stable. We choose a learning rate that varies with the epoch: it starts from a small value and grows during the first few epochs, called “warm-up” epochs, and after reaching its maximum value it asymptotically decreases to its final value <cit.>. The starting, maximal and final values of the learning rate, as well as the number of “warm-up” epochs, are also hyper-parameters, and should be defined before the training. We start by setting a learning rate of 10^-10, which grows to 10^-5 during the first eight warm-up epochs, and then slowly decreases to 10^-6.
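A hedged sketch of such a schedule with the values quoted above is given below; a linear warm-up followed by a cosine decay is shown for illustration, while the exact functional form used in the training follows the cited YOLO setup and may differ in detail.

    # Sketch: warm-up + decay learning-rate schedule (1e-10 -> 1e-5 over 8 epochs,
    # then decaying towards 1e-6); the cosine decay is an assumption for illustration.
    import numpy as np

    def learning_rate(epoch, n_epochs=100, n_warmup=8,
                      lr_init=1e-10, lr_max=1e-5, lr_final=1e-6):
        if epoch < n_warmup:
            return lr_init + (lr_max - lr_init) * epoch / n_warmup
        t = (epoch - n_warmup) / (n_epochs - n_warmup)
        return lr_final + 0.5 * (lr_max - lr_final) * (1 + np.cos(np.pi * t))

    lrs = [learning_rate(e) for e in range(100)]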
Our input image cutouts are centered on the redMaPPer cluster or DC2 selected dark matter halo positions. This centering should not have an impact on the network learning, which should understand that cluster features do not depend on the cluster position in the image. For this reason, we apply data augmentation, including translation and flipping by a random quantity between zero and half of the image, which changes the initial cluster position in the image. This forces the network to focus on the relevant features associated with clusters, independently of their position in the images.
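The augmentation can be sketched as follows (illustrative only; the translation is implemented here as a periodic roll, and the target bounding-box centre is shifted consistently with the image):

    # Sketch: random flips and a random translation of up to half the image size,
    # applied to the image and to the target bounding-box centre.
    import numpy as np

    def augment(img, box_center, rng):
        h, w = img.shape[:2]
        dy = rng.integers(-h // 2, h // 2 + 1)
        dx = rng.integers(-w // 2, w // 2 + 1)
        img = np.roll(img, (dy, dx), axis=(0, 1))            # translation (periodic here)
        cy, cx = (box_center[0] + dy) % h, (box_center[1] + dx) % w
        if rng.random() < 0.5:                               # horizontal flip
            img, cx = img[:, ::-1], w - 1 - cx
        if rng.random() < 0.5:                               # vertical flip
            img, cy = img[::-1, :], h - 1 - cy
        return img, (cy, cx)

    rng = np.random.default_rng(0)
    img_aug, center_aug = augment(np.zeros((512, 512, 3)), (256, 256), rng)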
We provide the main parameters of the training configuration in Table <ref>.
§ RESULTS
§.§ Network initial detection catalog
We run YOLO-CL on the training and validation samples for ∼100 epochs. Fig. <ref> shows the loss functions for the two samples with different image sizes. For both cases the training epochs can be split into three parts: 1) in the first epochs the weights converge fast towards optimal values due to the large value of the gradient, 2) the search for an optimal loss minimum (epochs 10-40) and 3) the fine-tuning of the solution. In both cases the lowest value of the validation loss function is reached in the first half of the training epochs – for 512x512 it was in the range of epochs 10-45, and for 1024x1024 in the range 10-30.
At each epoch, the network output is a catalog of detections on the validation sample, with the bounding box coordinates, and the probability to belong to the class "cluster" (hereafter detection probability). The network usually outputs multiple detections of the same object, which we discard by following the standard approach in YOLO applications <cit.>. In this case, we define the IoU as the ratio between the area of intersection and the area of union between multiple detection bounding boxes. The gIoU is an optimization of the IoU <cit.>, and is defined as:
gIoU = IoU + 𝒰/𝒜_c - 1
where 𝒰 and 𝒜_c are the areas of the union of the two boxes and the smallest box enclosing both boxes, respectively.
Both the IoU and the gIoU measure the overlap of the bounding boxes that define two different detections. A value of 1 indicates perfect agreement (we are detecting the same object), while a value approaching 0 indicates increasingly disjoint boxes and/or significantly different sizes (we are detecting different objects). We discard multiple detections of the same object by applying a gIoU threshold of 0.5, which is the same threshold used for the IoU in the original YOLO <cit.>. This standard choice means that when two bounding boxes overlap by more than 50%, we consider that they define the same detected object. In this case, we keep the highest-probability detection and discard the others.
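The following Python sketch illustrates the gIoU computation and the duplicate-removal step for a list of detections; the detection data structure (a dictionary with a bounding box in (x1, y1, x2, y2) format and a probability) and the greedy ordering by probability are assumptions, not the exact implementation.

def giou(box_a, box_b):
    """Generalized IoU for two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    # intersection and union areas
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = area_a + area_b - inter
    iou = inter / union
    # smallest box enclosing both boxes
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    a_c = (cx2 - cx1) * (cy2 - cy1)
    return iou + union / a_c - 1.0

def deduplicate(detections, threshold=0.5):
    """Keep the highest-probability detection among boxes with gIoU > threshold."""
    kept = []
    for det in sorted(detections, key=lambda d: d["prob"], reverse=True):
        if all(giou(det["box"], k["box"]) <= threshold for k in kept):
            kept.append(det)
    return kept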
For each epoch, after discarding multiple detections, we obtained a catalog of single detections, each with the coordinates of the bounding box of the detection and the probability of the detection being a cluster.
§.§ Final cluster catalog
At this point, we needed to choose our best epoch and which probability threshold to use to select the best cluster candidates for our final catalog.
Our best epoch was chosen as the epoch in which the validation loss function reaches its minimum value. This means that in this epoch we reach on average the best values of all the network parameters.
Once we chose the best epoch, to assess the best probability threshold we used two quantities, the completeness and purity of the final cluster detection catalog, which are calculated on the DC2 detections with respect to our reference DC2 "true cluster" sample from the simulation catalog. While we need a hybrid SDSS and DC2 sample for transfer learning, hereafter all our results focus on the performance on the DC2 simulations, which are the sample on which we want to test the expected LSST performance and which defines our cluster catalog selection function.
The cluster catalog completeness quantifies the fraction of true clusters that are detected. The cluster catalog purity quantifies the fraction of detections that are true clusters, as opposed to false positive detections. In the machine learning literature, the completeness corresponds to the recall and the purity to the precision.
To calculate the purity (see below), we applied YOLO-CL to a sample of images (“random” fields) that do not contain DC2 clusters, meaning that the center of each random field is more than 12 arcmin (∼4.5 Mpc at z ≳ 0.5) away from any DC2 cluster. For this purpose, we added to our validation sample 6,451 random fields, which correspond to all the regions that do not contain clusters in DC2.
We optimized the detection probability threshold to obtain cluster detection catalogs with the highest values of completeness and purity. Following <cit.>, we optimized purity and completeness to the same value, so as not to favor one quantity over the other. The final catalog includes only detections with a detection probability higher than the optimized threshold at which completeness and purity are equal. A more fine-tuned selection function can be defined depending on whether the catalog is used for cosmology, galaxy formation and evolution studies, etc.
Fig. <ref> shows the catalog completeness and purity as a function of the detection probability threshold at our best epoch. The completeness C=N_td/N_tc is calculated as the ratio between the number of true cluster detections N_td and the number of true clusters in our images N_tc. The purity is calculated as P=1-N_fd/N_rf, where N_rf is the number of random fields and N_fd is the number of cluster detections in the random fields, which are by definition false positive detections.
We assume that the ratio N_fd/N_rf is a good approximation of the true ratio of false positive detections over the total number of detections, independently of the survey area considered.
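A sketch of this completeness/purity computation as a function of the detection probability threshold is given below. It assumes that the matching between detections and true clusters has already been performed, so that each true cluster carries the probability of its matched detection (0 if undetected) and each random-field detection carries its own probability; the variable names are illustrative.

import numpy as np

def completeness_purity(det_probs_true, det_probs_random,
                        n_true_clusters, n_random_fields):
    """C(t) = N_td / N_tc and P(t) = 1 - N_fd / N_rf versus threshold t."""
    thresholds = np.linspace(0.0, 1.0, 101)
    completeness, purity = [], []
    for t in thresholds:
        n_td = np.sum(np.asarray(det_probs_true) >= t)     # detected true clusters
        n_fd = np.sum(np.asarray(det_probs_random) >= t)   # detections in random fields
        completeness.append(n_td / n_true_clusters)
        purity.append(1.0 - n_fd / n_random_fields)
    completeness, purity = np.array(completeness), np.array(purity)
    # threshold at which completeness and purity are (approximately) equal
    best_threshold = thresholds[np.argmin(np.abs(completeness - purity))]
    return thresholds, completeness, purity, best_threshold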
Completeness and purity reach the same value, 90% and 94%, at threshold values of 27% and 32% when using the 512x512 pixel and 1024x1024 pixel images, respectively.
Figure <ref> shows the completeness as a function of the DC2 "true cluster" mass M_200c and redshift.
The completeness is almost flat at 0.2<z<0.8, varying in the range 80%-90% and 90%-96% when YOLO-CL is applied to 512x512 and 1024x1024 pixel images, respectively. At z>0.8, we observe a decrease in completeness, which is larger for the 512x512 pixel images. The completeness also increases with halo mass. For the 512x512 pixel images the completeness is ≳ 95% only for halos with M_200c>10^14.7M_⊙, while for 1024x1024 images it is ≳ 94% for M_200c>10^14M_⊙.
§.§ Final catalog completeness and purity
Given the higher network performance with 1024x1024 pixel images, hereafter we concentrate on the catalog obtained with this image size. In this final catalog, we only keep cluster candidates with detection probability higher than a 32% threshold, which corresponds to a catalog 94% complete and pure.
Fig. <ref> shows the detection catalog completeness as a function of redshift and of both DC2 halo mass and richness. Halo mass and richness are correlated, with a large scatter, and M_200c = 10^14M_⊙ corresponds to a richness of ∼ 35. The selection is almost flat with respect to halo mass up to z∼0.9, but not with respect to richness. This might indicate that the features found by the network to identify a cluster, or their non-linear combination, are more closely linked to the cluster mass than to its richness.
The catalog is ∼100% complete for M_200c≳ 10^14.6M_⊙ and richness ≳ 100 at all redshifts. At M_200c≳ 10^14M_⊙, the completeness is ≳95% up to z∼0.8, and decreases to ≳80-85% at higher redshifts. However, when characterizing halos by their richness, the completeness is less flat as a function of redshift, as also shown for SDSS observations in <cit.>, and decreases abruptly to ∼ 70-75% at z>0.8.
To better understand the purity of the catalog as a function of redshift, we matched the 6% of false detections to lower-mass DC2 dark matter haloes, which are the most probable interlopers. Unfortunately, we cannot estimate purity as a function of both mass and redshift, because we would need the number of detected clusters at a given observed mass and redshift, and YOLO-CL does not provide an estimation of these parameters. We found that 49%, 97%, and 100% of the false detections match halos with 10^13.8 M_⊙ < M_200c < 10^14 M_⊙, 10^13.5 M_⊙ < M_200c < 10^14 M_⊙, and 10^13.4 M_⊙ < M_200c < 10^14 M_⊙, respectively, of which 24%, 79%, and 85% are at z<1, respectively.
Fig. <ref> shows their distributions as a function of mass and redshift.
Most of the contamination of the final cluster sample is due to groups with 10^13.7 M_⊙ < M_200c < 10^14 M_⊙ (i.e., objects with masses less than 0.3 dex below our cluster mass limit) at z ≳ 0.6.
From the DES Y1 redMaPPer cluster catalog <cit.>, the cluster mass uncertainty is estimated to be 0.13 dex at z ≲ 1 and M_200c > 10^14 M_⊙ <cit.>. This means that our false positive detections cannot be distinguished from "true clusters" within 3 σ of the current DES observational mass uncertainty, which might be taken as a hypothetical lower limit on future LSST cluster mass uncertainties.
The observational mass uncertainty would also introduce an Eddington bias: some of the more numerous M_200c < 10^14 M_⊙ haloes will be assigned a mass estimate of M_200c > 10^14 M_⊙ and will then contaminate our cluster sample with lower-mass groups. To estimate this bias, again using the current DES cluster mass uncertainty as a reference, we statistically estimated the number of groups in the DC2 footprint with M_200c < 10^14 M_⊙ that may have a mass estimate of M_200c > 10^14 M_⊙ due to the scatter of the cluster mass-richness relation, and obtain an Eddington bias of 11%.
Under this hypothesis, 6% of the detections in the cluster catalog would be groups with 10^13.4 M_⊙ < M_200c < 10^14 M_⊙, and at least ∼ 10% of these groups are expected to be assigned a mass M_200c > 10^14 M_⊙. In practice, in current surveys the uncertainty on halo mass at M_200c < 10^14 M_⊙ is about two times larger than the uncertainty at M_200c > 10^14 M_⊙, about 0.25-0.3 dex <cit.>. If that also holds for LSST, the Eddington bias contamination will be of the order of ∼ 30%, and all these estimates will have to be re-assessed once LSST cluster mass uncertainties are measured.
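For illustration, the up-scatter part of such an Eddington bias estimate can be sketched with a simple Monte Carlo, assuming a log-normal mass uncertainty of fixed width in dex; this is only a schematic version of the calculation, which in the paper also involves the scatter of the mass-richness relation and the actual DC2 halo mass function.

import numpy as np

def eddington_upscatter_fraction(log10_masses, scatter_dex=0.25, log10_cut=14.0,
                                 n_draws=1000, rng=np.random.default_rng(0)):
    """Fraction of M_200c < 10^14 Msun haloes scattered above the cut by a
    log-normal mass uncertainty of `scatter_dex` dex (Monte Carlo estimate)."""
    masses = np.asarray(log10_masses)
    below = masses[masses < log10_cut]
    # add Gaussian scatter in log10(M) and count up-scattered draws
    scattered = below[None, :] + rng.normal(0.0, scatter_dex, size=(n_draws, below.size))
    return np.mean(scattered > log10_cut)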
§ DISCUSSION AND CONCLUSIONS
Our results show that YOLO-CL detects DC2 clusters (M_200c > 10^14 M_⊙) in regions centered around them with ∼ 94% completeness and purity at 0.2 ≲ z ≲ 1, and with 100% completeness for M_200c > 10^14.6 M_⊙ within the same redshift range. We also found that the selection function is almost flat with respect to halo mass up to z∼0.9. In this section, we discuss how this performance compares with other cluster detection methods in optical imaging surveys and at other wavelengths.
At lower redshift than LSST, the current DES covers 5,000 sq. deg. in the g, r, i, z and Y bandpasses and reaches a 10 σ depth of 24.7, 24.4 and 23.8 mag in g, r and i, respectively[<https://des.ncsa.illinois.edu/releases/dr2>]. This corresponds to a 5 σ depth ∼ 2 mag shallower than LSST. The DES redMaPPer cluster catalog <cit.> is 100% complete for richness λ>70, which corresponds to a halo mass of M_200c∼ 10^14.8 M_⊙, using weak lensing and X-ray halo mass estimations for redMaPPer clusters <cit.>. Given the large difference in survey depth, it is not surprising that this catalog is less complete at lower masses than our DC2 catalog.
When comparing to predictions of cluster catalog completeness and purity at the LSST depth, empirical simulations and a Bayesian cluster finder <cit.> predict completeness and purity similar to YOLO-CL (86-98%) in the redshift range 0.5 ≲ z ≲ 1.0 for M_h>10^14.3 M_⊙ <cit.>, which corresponds to M_200c > 10^14.24 M_⊙ [Hereafter, M_200c masses were derived from the original M_h and M_500 masses found in the literature, using the web-calculator for the equations from <cit.>, <https://c2papcosmosim.uc.lrz.de/static/hydro_mc/webapp/index.html>].
For observational comparisons, the first survey that reached a depth close to that of LSST is the Canada-France-Hawaii Telescope Legacy Survey (CFHT-LS)[<https://www.cfht.hawaii.edu/Science/CFHLS/>] <cit.>. The median 50% completeness limits in its four deep fields (∼ 4 sq. deg.) are 26.3, 26.3 and 25.9 in the g, r, and i bandpasses, respectively <cit.>. In terms of the 5 σ limit, these correspond to depths similar to LSST in the three bands. Several algorithms were applied to the CFHT-LS deep fields and yielded galaxy cluster samples 90-95% complete at 0.2<z<0.8 and 90% pure for clusters with richness λ>50 in simulated data <cit.>, and 100% complete and 85-90% pure at M_200c>10^14.5M_⊙ <cit.>.
A recent survey that reaches a depth similar to LSST and uses similar optical filters is the Hyper Suprime Camera Strategic Survey Program <cit.>, which covers an area of ∼ 1,000 sq. deg. and reaches a 5σ depth of 26.8 mag and 26.4 mag in the g and i bandpasses, respectively. The HSC-SSP cluster catalog obtained with the CAMIRA algorithm <cit.> is 100% and ∼ 90% complete and ≳ 90% pure for M_200c>10^14.64M_⊙ and M_200c>10^13.94M_⊙, respectively, in the redshift range 0.1≲z≲1.1 <cit.>. The CAMIRA algorithm is similar to redMaPPer, and searches for red-sequence galaxy overdensities. The WHL09/12 algorithm <cit.>, applied to a compilation of the HSC-SSP and unWISE catalogs, delivers a cluster catalog 100% complete for M_200c>10^14.8 M_⊙ <cit.>, and 80-90% complete for M_200c>10^14.4 M_⊙ at 0.2≲z≲1. The purity of the sample is not discussed. The completeness significantly decreases for lower cluster masses, reaching ≲60-70% for M_200c > 10^14.1 M_⊙.
When compared to the completeness and purity expected for Euclid cluster catalogs at z<1 <cit.>, based on simulations from <cit.>, the YOLO-CL DC2 detections are more complete and pure for M>10^14 M_⊙. The best purity and completeness of ∼90% in these mass and redshift ranges were obtained with the AMICO algorithm <cit.>. The other Euclid cluster finder, PZWav, based on wavelet filtering, gives catalogs that are ∼85-87% complete and pure <cit.>.
Overall, the performance of YOLO-CL on the DC2 simulations is similar to or higher than that of both current optical surveys at the same depth and in the same redshift range, and of LSST and Euclid simulation predictions for future cluster catalogs.
Turning to present and future cluster catalogs obtained at other wavelengths, we compare our results with cluster catalogs based on the Sunyaev–Zeldovich <cit.> effect and on X-ray flux measurements, which are both sensitive to the cluster hot gas content.
SZ cluster catalogs are mass-limited, and the deepest catalogs available at present reach 100% completeness at M_200c > 10^14.86-10^14.94M_⊙ at z ≲ 1.5 from observations with the South Pole Telescope Polarimeter <cit.>, a much higher mass limit than optical and infrared surveys. The SPT-SZ survey <cit.> catalog is 100% complete at M_200c > 10^14.94-10^15.00M_⊙ in a similar redshift range. The cluster catalog obtained from the fifth data release (DR5) of observations (13,211 deg^2) with the Atacama Cosmology Telescope (ACT) is 90% complete for clusters with M_200c > 10^14.76-14.66 M_⊙ at 0.2<z<2.0 <cit.>. The Planck space mission PSZ2 all-sky cluster catalog <cit.> is 80% complete for M_200c > 10^14.76 M_⊙ at 0.4<z<0.6, and for M_200c > 10^14.3 M_⊙ for clusters at z∼0.2.
Simulations of the current SPT-3G survey, which will provide much deeper observations <cit.>, were used to estimate the completeness and purity that can be attained with another deep convolutional neural network <cit.>, combined with a classical matched filter <cit.>. This work shows that ∼95% completeness and purity are predicted to be attained at M_200c > 10^14.7 M_⊙ at z ≳ 0.25.
This means that all present SZ surveys reach ∼ 95% completeness at cluster masses much higher than what is predicted for LSST from this work. However, the next-generation SZ experiments, such as SPT-3G, the Simons Observatory, and CMB-S4, will obtain cluster catalogs with a limiting mass of M_200c∼ 10^14 M_⊙, more comparable to the LSST mass limit <cit.>. The CMB-S4 WIDE <cit.> survey will reach an S/N=5 cluster detection limit of M_200c = 10^14.1 M_⊙ in the redshift range 0.2<z<1 over 67% of the sky; the S/N=5 detection threshold for the Simons Observatory <cit.> is planned to be M_200c = 10^14.3 M_⊙ in the same redshift range over 40% of the sky; and the CMB-S4 ULTRADEEP and CMB-HD <cit.> surveys are designed to reach down to M_200c = 10^14 M_⊙ and M_200c = 10^13.8 M_⊙, respectively. However, the CMB-S4 ULTRADEEP survey covers only 3% of the sky, while CMB-HD is planned to cover ∼ 50% of the sky. All these surveys are planned for ≳ 2030, most probably at about the same time as the 5-year LSST data release.
Regarding X-ray surveys, the reference all-sky X-ray cluster catalog is the Röntgensatellit <cit.> Extended Brightest Cluster Sample <cit.>, which contains 201 clusters in the Northern hemisphere and is 90% complete for z<0.3 and X-ray fluxes higher than 4.4 · 10^-12 erg/cm^2/s.
The MCXC cluster catalog <cit.> is a compilation of several catalogs/surveys, consisting of ROSAT-based and serendipitous catalogs, summarized in Table <ref>. As expected, X-ray surveys detect clusters at much higher masses than LSST at z=0.5-1.
The ComPRASS catalog <cit.> is a compilation of Planck <cit.> and RASS <cit.> catalogs of galaxy clusters observed in the X-ray and through the SZ effect, and it reaches deeper than each survey used to compile it. Its selection function is therefore a complicated combination of the selection functions of several surveys. ComPRASS is 100% complete for M_200c > 10^14.6M_⊙, M_200c > 10^14.8M_⊙, and M_200c > 10^14.7M_⊙ at z<0.3, z<0.6, and 0.6<z<1.0, respectively, which are mass limits much lower than the completeness limits of the SZ catalogs used to build it.
In conclusion, YOLO-CL shows completeness and purity similar to other algorithms applied to current deep optical imaging surveys such as CFHT-LS Deep and HSC-SSP, and better completeness and purity than most of the other methods that have been applied to Euclid simulations.
Compared to current SZ and X-ray surveys, YOLO-CL can obtain more complete and pure catalogs at much lower masses. However, future SZ surveys are planned to provide much deeper complete and pure catalogs that will be directly comparable with ours. In this respect, we note that both the SZ surveys and the YOLO-CL selection function are mass-limited, so that the SZ-optical comparison is based on similar selection functions. YOLO-CL detections can also be combined with SZ and X-ray detections, as was done for the ComPRASS compilation, to obtain catalogs with higher completeness and purity at lower masses.
We note that in this paper we focus our analysis on targeted detections, with the goal of analyzing the performance of the algorithm itself, independently of possible systematics and biases introduced by variations of the image parameters in survey mode. In future papers, we will apply YOLO-CL to DC2 images in survey mode, and our detections will be compared to other LSST cluster detection algorithms applied to the DC2 simulations.
§ SUMMARY
We applied the YOLO-CL deep convolutional network <cit.> to SDSS observations and DESC DC2 simulations to estimate its performance for LSST. We trained the network on 12,203 and 1,171 g, r and i composite color images from SDSS and from the DESC DC2 simulations, respectively, and validated it on the same number of cluster images (for a total of 24,406 SDSS and 2,342 DC2 training and validation images) and 6,451 random fields. We conclude that:
* When using DC2 LSST simulated images with a pixel size equal to four times the LSST pixel resolution (≈ 0.8”/pix), the DC2 cluster catalog is 94% pure and complete for M_200c > 10^14 M_⊙ and at 0.2<z<1, and 100% complete for M_200c > 10^14.6 M_⊙.
* The cluster selection function is mass-limited at 0.2<z<0.9.
* When compared to other cluster detection methods in current optical surveys that reach LSST depth and to simulations of the Euclid surveys, YOLO-CL shows similar or better completeness and purity.
* Current X-ray and SZ cluster surveys do not reach this completeness and purity at M_200c > 10^14 M_⊙ and 0.2<z<1, while future SZ surveys will be directly comparable to the LSST YOLO-CL detections and will have similar mass-limited selection functions.
This paper shows that YOLO-CL will permit us to obtain LSST cluster catalogs that are 94% pure and complete for M_200c > 10^14 M_⊙ at 0.2<z<1, and 100% complete for M_200c > 10^14.6 M_⊙. The cluster selection function is mass-limited in the redshift range 0.2<z<0.9. We focused our analysis on targeted detections, with the goal of analyzing the performance of the algorithm itself, independently of possible systematics and biases introduced by a survey mode.
We compared our algorithm to other cluster detection methods in current optical surveys that reach LSST depth and to simulations of the Euclid surveys, and YOLO-CL shows similar or better completeness and purity. When compared to current X-ray and SZ cluster surveys, YOLO-CL reaches higher completeness and purity at M_200c > 10^14 M_⊙ and 0.2<z<1. However, future SZ surveys will reach similar completeness and purity at the same depth as the LSST detections, and will have similar mass-limited selection functions.
We note that this analysis was based on LSST DC2 images and did not involve the image processing required to obtain galaxy photometric and photometric redshift catalogs, or the masking of stellar sources and artifacts. The advantage of this deep machine learning approach, which works directly on images, is to obtain cluster catalogs that will be complementary to other optical detection methods used in the LSST DESC collaboration, and that will be independent of systematic and statistical uncertainties inherent to galaxy catalog production.
In future papers, we will study the YOLO-CL performance in survey mode, and our detections will be compared to other LSST cluster detection algorithms.
We thank Université Paris Cité (UPC), which funded KG's Ph.D. research. We gratefully acknowledge support from the CNRS/IN2P3 Computing Center (Lyon - France) for providing computing and data-processing resources needed for this work.
We describe below the author’s contributions. Kirill Grishin applied YOLO-CL to the DC2 simulations, produced the results and figures in the paper, and was the main writer of Sections 2.2 and 5. Simona Mei co-conceived the YOLO-CL network with Stéphane Ilic, developed the content of this paper, supervised the work of Kirill Grishin, Stéphane Ilic and Michel Aguena, and was the main writer of the paper's text, answered the internal DESC reports. She is the contact with the editor. Stéphane Ilic modified the original YOLO network to adapt it for galaxy cluster detection. He co-conceived YOLO-CL with Simona Mei and developed the network and analysis software to derive the completeness and purity plots. Michel Aguena contributed to the generation and validation of the DC2 images, and to the analysis and discussion of the cluster detection, including the improvement on the purity estimation model. He also shaped the final image generation software used, and provided the masses and richnesses estimations to the dark matter halo catalog. Dominique Boutigny and Marie Paturel helped with image generation at the beginning of the project and experimented with different versions of YOLO. These statements have been validated with the DESC publication board after having the confirmation of the authors.
The Dark Energy Science Collaboration (DESC) acknowledges ongoing support from the IN2P3 (France), the STFC (United Kingdom), and the DOE, NSF, and LSST Corporation (United States). As members of the DESC collaboration, we used resources of the IN2P3 Computing Center (CC-IN2P3–Lyon/Villeurbanne - France) funded by the Centre National de la Recherche Scientifique; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported under Contract No. DE-AC02-05CH11231; STFC DiRAC HPC Facilities, funded by UK BEIS National E-infrastructure capital grants; and the UK particle physics grid, supported by the GridPP Collaboration. This work was performed in part under DOE Contract DE-AC02-76SF00515. This paper has undergone an internal review by the LSST DESC, and we thank the internal reviewers, Camille Avestruz and Markus Michael Rau, for fruitful discussions that improved the paper.
|
http://arxiv.org/abs/2409.02487v1 | 20240904072657 | Category-theoretic formulation of relational materialism | [
"Bekir Baytaş",
"Ozan Ekin Derin"
] | physics.hist-ph | [
"physics.hist-ph"
] |
|
http://arxiv.org/abs/2409.02690v1 | 20240904132350 | Detecting Calls to Action in Multimodal Content: Analysis of the 2021 German Federal Election Campaign on Instagram | [
"Michael Achmann-Denkler",
"Jakob Fehle",
"Mario Haim",
"Christian Wolff"
] | cs.SI | [
"cs.SI",
"cs.CL"
] |
Detecting Calls to Action in Multimodal Content: Analysis of the 2021 German Federal Election Campaign on Instagram
Michael Achmann-Denkler, Jakob Fehle, Mario Haim, Christian Wolff
=====================================================================================================================
§ ABSTRACT
This study investigates the automated classification of Calls to Action (CTAs) within the 2021 German Instagram election campaign to advance the understanding of mobilization in social media contexts. We analyzed over 2,208 Instagram stories and 712 posts using fine-tuned BERT models and OpenAI’s GPT-4 models. The fine-tuned BERT model incorporating synthetic training data achieved a macro F1 score of 0.93, demonstrating a robust classification performance. Our analysis revealed that 49.58% of Instagram posts and 10.64% of stories contained CTAs, highlighting significant differences in mobilization strategies between these content types. Additionally, we found that FDP and the Greens had the highest prevalence of CTAs in posts, whereas CDU and CSU led in story CTAs.
§ INTRODUCTION
In this study, we experiment with the automated classification of Calls to Action (CTAs) from the 2021 German Instagram campaign to advance the understanding of mobilization in social media election campaigns. Our primary goal is to determine the efficacy of several computational approaches for binary classification of the presence or absence of CTAs in Instagram posts and stories from the 2021 Federal election in Germany. To this end, we fine-tuned a BERT model <cit.>, experimented with synthetic training data to enhance the model, and contrasted these approaches with zero- and few-shot prompting using OpenAI's GPT-4 model family. Through our study, we aim to address the three gaps in computational text analysis for the social sciences identified by <cit.>: 1) We experiment with a non-English language, 2) We evaluate all classifications against human annotations for external validation <cit.>, and 3) We investigate the potential of LLMs for overcoming the specialization before integration gap.
The 2021 election marked a shift in Germany’s political landscape, with the long-serving Chancellor Angela Merkel stepping down. The key parties in the race included the CDU/CSU, SPD, Greens, FDP, AfD, and The Left. In 2021, Instagram was used by almost the same share of the German population as Facebook and was particularly popular among younger users under the age of 30 <cit.>. About half of the candidates had profiles on Instagram, with notable differences between parties <cit.>. We are interested in the front-runner and party accounts and how they utilized CTAs on Instagram to gain insight into their mobilization and audience engagement strategies. Understanding these strategies reveals how political actors use Instagram to engage voters. Thus, our secondary goal is to use the CTA classifications to contrast mobilization strategies between Instagram stories and posts, filling a gap as ephemeral stories have often been overlooked. Therefore, we want to answer the following research questions:
RQ1a Which of the currently available GPT-4 model variants, when tested with few-shot and zero-shot prompts, achieves the highest performance in automated detection of CTAs in German-language Instagram content?
RQ1b Does incorporating synthetic training data enhance the performance of a fine-tuned BERT model in detecting CTAs in German-language Instagram content?
RQ1c When comparing the best-performing GPT and BERT models, what are the performance differences in detecting CTAs between different types of Instagram content (stories vs. posts) and text types (OCR vs. caption vs. transcript)?
RQ2 How does the usage of CTAs vary between different types of Instagram content (stories vs. posts) and between different political parties?
§.§ Political Communication on Instagram
Instagram's role in political communication has been extensively studied, addressing various political actors and nations. Studies commonly reveal that political figures use Instagram to project positive imagery rather than for policy discussion or voter engagement <cit.>. Studies of the 2021 German Federal election have focused on visual personalization and political issues in posts <cit.>, and Instagram stories were compared to regular posts using topic modeling <cit.>.
Voter engagement and mobilization on social media have been the focus of recent studies: () illustrated that about half of the posts in the 2013 German and Austrian election campaigns on Facebook included CTAs, primarily focusing on mobilization. () proposed a framework for comparing political actors' campaign strategies across social media platforms. They investigated the Norwegian parliamentary election campaign on three social media platforms: Facebook, Instagram, and Twitter. () examined the mobilization strategies used by German political parties during the 2021 election campaign on Facebook and Instagram. Their findings revealed that 43% of Instagram posts from parties and candidates included mobilization calls. The study found notable differences in mobilization strategies among parties, with the Greens using calls to vote more frequently than others.
The current research offers a comprehensive view of how CTAs are used in social media campaigns. This paper aims to extend the analysis to include both Instagram posts and Stories, offering a more holistic view of political campaigning on this platform.
§.§ Ephemeral Instagram Stories
Few studies have investigated ephemeral Instagram stories in the context of political campaigns and communication: <cit.> analyzed stories from 2020 U.S. presidential candidates. They collected 304 images one week before and after the election campaign. They found the campaigns missed opportunities to share user-generated content and inconsistently followed communication norms for Instagram Stories. () studied how gubernatorial candidates utilized Instagram Stories during the 2018 elections. They found that candidates primarily used stories to mobilize voters and showcase indoor events, preferring static images to videos. This area remains relatively unexplored compared to the analysis of Instagram posts.
§.§ Text-Mining in Political Communication
Textual analysis of Instagram content includes a frequency study to analyze Islamist extremist content <cit.>, and an analysis of political advertisements on Instagram and Facebook, utilizing computational text classification methods <cit.>.
The computational detection of CTAs in social media content has, for example, been investigated by (). They classified CTAs on VKontakte, focusing on their role in mobilization and potential for censorship. Their model demonstrates a classification performance of F1=0.77. They used a relatively small ground-truth dataset (n=871) and employed RuBERT, a Russian version of BERT. Similarly, () developed a rule-based Natural Language Processing (NLP) pipeline to identify CTAs in Spanish social media posts. Their approach yields F1 scores between 0.81 and 0.85. () report in their working paper on training a fine-tuned BERT model for classifying political tweets and Facebook posts from the 2016 US General Election. They achieved an F1 score of 0.92 for CTAs on Twitter and 0.95 for Facebook.
In conclusion, these studies highlight the potential of using advanced NLP approaches and BERT variants to detect political CTAs in different languages and social media platforms.
§.§ Large Language Models for Social Science Tasks
LLMs have shown proficiency in various text classification tasks, including social sciences tasks, with some studies indicating performance superior to human annotators <cit.>.
While they are promising for tasks with clear and well-defined criteria, such as identifying misinformation or distinguishing political stances, applying LLMs requires caution, particularly in tasks needing deep semantic understanding <cit.>.
Beyond prompting, LLMs may also be used to augment training data: () explored using GPT-3.5 Turbo to generate synthetic Instagram captions for detecting sponsored content. Combining synthetic with real data improved their classification F1 score from 0.71 to 0.78, demonstrating that synthetic data can enhance classifier training.
In summary, Instagram is a critical platform for political communication. Prior research validates the potential of advanced NLP models, including BERT variants and LLMs, for detecting CTAs. Our study aims to compare GPT-4 and a fine-tuned BERT model to classify CTAs in German Instagram texts, using synthetic training data for enhanced performance.
§ THE CORPUS
We collected two types of Instagram content: permanent posts that may include multiple images or videos with a caption and stories that typically consist of a single image or video. Captions in posts represent the primary textual content on Instagram, varying in length and often featuring hashtags. While captions are the primary text elements, many images and videos incorporate embedded text or spoken words.
For our computational analysis, we deconstructed each Instagram post and story into smaller units to analyze text in various forms: captions, embedded text (through Optical Character Recognition, OCR), and speech (transcriptions) for video audio. This approach resulted in up to two text documents per image and up to three documents per video. As a post can contain multiple images, this leads to a maximum of 3 · n_images documents per post, plus an additional document for the caption. In contrast, Instagram stories typically comprise a single image or video, resulting in one OCR document and an optional transcription document per story. See table <ref> for an overview of corpus statistics for each text type, and table <ref> for examples.
§.§ Data Collection & Preprocessing
We collected stories and posts published by eight parties, namely AfD (@afd_bund), CDU (@cdu), CSU (@christlichsozialeunion), Die Grünen (@die_gruenen), Die Linke (@dielinke), FDP (@fdp), FW (@fw_bayern), and SPD (@spdde), and 14 front-runners[We only collected stories from verified accounts. In case of missing accounts or verification marks, we followed the hierarchy Chancellor-Candidate > Front-Runner > Head of Party > Deputy Head of Party. CDU and CSU are running a joint campaign; therefore, just one candidate each is included.] (see table <ref> in the appendix). Data collection started two weeks before election day, from Sept. 12th until Sept. 25, 2021, excluding election day. During this time, parties and politicians shared 712 posts and 2208 stories. Posts were collected retrospectively using CrowdTangle, amounting to 1153 images and 151 videos. Stories were collected daily at 0:00 using a Python package that simulates a human user browsing the stories.[We cannot guarantee completeness for Sep 14 due to technical problems.] A majority of the posted stories are videos (n=1246).
Many images contain embedded text, which we extracted using OCR (). We transcribed videos using the whisper-large-v2-cv11-german model,[https://huggingface.co/bofenghuang/whisper-large-v2-cv11-germanhttps://huggingface.co/bofenghuang/whisper-large-v2-cv11-german] a version of OpenAI's Whisper model <cit.> fine-tuned for German. We also applied OCR to the first frame of videos.
§ METHODS
We have operationalized CTAs as a binary variable, indicating their presence or absence in documents, simplifying our model's classification process. Each social media post or story is analyzed by decomposing it into several text documents, enabling the computational analysis of multimodal data. To answer questions on a post/story level, we assign `True' for an entire post or story if Call to Action is marked as `True' in any of the associated documents. This section defines CTAs, describes our annotation study, and the prompt engineering and model training steps.
§.§ Calls to Action
A “Call to Action” (CTA) refers to statements or prompts that explicitly encourage the audience to take immediate action <cit.>. () connect CTAs in political campaigns to three of 's () campaign functions: Informing, Mobilizing, and Interacting. The first function aims at disseminating messages and positions on important issues. Mobilizing encourages supporters to take active steps such as voting, participating in events, or sharing campaign messages. Interacting facilitates dialogue between politicians and citizens, enhancing engagement and potentially persuading voters more effectively through reciprocal communication <cit.>. () relate to these functions and define three types of CTA: “Calls to Inform” encourage the audience to seek further online or offline information. This could include directing users to the party's website or inviting them to read party-related materials. “Calls to Interact” aim to increase engagement through dialogue, such as inviting users to comment on a post or participate in discussions. Finally, “Calls to Support” are direct appeals for actions that benefit the party, such as voting, donating, or sharing posts to increase the campaign's visibility.
We consider CTAs as a dichotomous variable marking the presence or absence of any CTA in a document. While this reduction from three types into a singular CTA reduces the analytical value of our work, we see it as a simplification to create a robust classification model. Such a model can then be used to develop more nuanced classification models in future studies.
§.§ The Annotation Process
Preparing our corpus, we drew a stratified sample across text (caption, OCR, transcript) and content type (story, post) combinations. The documents were annotated across two batches: We started with a 20 % sample in the first batch (n=925) and increased the sample size to 1,388 documents (app. 30 %) through a second batch.[Overall, our text corpus comprises 4,614 documents; sample sizes were rounded when balancing the text- and content-type distribution.] Each document was independently annotated by at least three randomly assigned annotators. A total of nine annotators contributed to the annotation. Alongside one of the authors who participated in the annotation process, we recruited eight non-expert annotators from our staff and students. The latter were rewarded with participant hours for their work. The majority (8) of annotators were native German speakers. Participants received a detailed annotation guide, including examples and the GPT classification prompt (see appendix, figure <ref>). They had to pass a short quiz to ensure they read the manual before being invited to the annotation project. Annotations were collected remotely using the Label Studio software. Participants coded one document at a time, marking the presence of CTAs with “True” or “False”. “Unsure” responses were treated as missing values.
Items with disagreement were passed into a second round of annotations to increase the number of votes. Overall, nine coders created 5290 annotations. Using a majority decision, we deduced the ground truth CTA labels. Ties were resolved through the author's annotation. The interrater agreement measured by Krippendorff's α reached a moderate level of α=0.67 <cit.>. Notably, the agreement between the majority decisions and the annotating author reached a strong level <cit.>, with Cohen's κ=0.88 (n=892, excluding ties) <cit.>. This alignment with the author’s labels confirms the validity of our final dataset, demonstrating that the majority decision effectively captures Calls to Action, despite the expected variability among non-expert student annotators.
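A minimal sketch of the majority decision and the author-agreement check is shown below; the label encoding, the tie handling, and the use of scikit-learn are simplified assumptions rather than the exact annotation pipeline.

from collections import Counter
from sklearn.metrics import cohen_kappa_score

def majority_label(votes, tie_breaker=None):
    """Majority decision over 'True'/'False' votes; a tie falls back to the expert label."""
    counts = Counter(v for v in votes if v in ("True", "False"))
    if not counts:
        return None
    (top, n_top), *rest = counts.most_common()
    if rest and rest[0][1] == n_top:      # tie between 'True' and 'False'
        return tie_breaker
    return top

def author_agreement(majority_labels, author_labels):
    """Cohen's kappa between the majority decisions and the annotating author."""
    return cohen_kappa_score(majority_labels, author_labels)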
§.§ Classification Approaches
We compare several classification approaches using transformer architectures and large language models to detect the presence of Calls to Action within posts and stories shared during the election campaign. Specifically, we compare two main classification methods: fine-tuning the gbert-large German BERT model and utilizing OpenAI's GPT-4 large language model. We tested different variations for each method: we trained two BERT models—one with the original dataset and another with an extended dataset augmented by GPT-4o. For the GPT approach, we tested GPT-4, GPT-4 Turbo, and GPT-4o models in both zero-shot and few-shot settings.
§.§ Fine-tuned BERT models
We fine-tuned the pre-trained `deepset/gbert-large` model for our German language classification task using the library <cit.>. GBERT is a state-of-the-art BERT model trained on German text <cit.>. We trained two classification models: gbert-cta trained on the original dataset, and gbert-w/-synth-cta trained on the original dataset + synthetic data generated using GPT-4o to mitigate the class imbalance of the original dataset.
Both models went through the same preprocessing and training steps. Input documents were tokenized, with truncation and padding to a maximum length of 512 tokens. The training took place on Google Colab, using Nvidia A100 graphics cards. We used wandb[https://wandb.ai/sitewandb.ai] to find the best hyperparameters, focusing on achieving the highest F1 score. To address the class imbalance in the gbert-cta model, we calculated class weights and added them to the loss function. After optimizing the hyperparameters, we validated each model with a five-fold cross-validation. This means we split the dataset into five parts stratified by the call to action variable, trained the model on four parts, and tested it on the remaining part. We added one-fifth of the synthetic data to the training data per fold for the model incorporating the synthetic dataset. We repeated this process five times, each with a different part as the test set, ensuring a robust evaluation.
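A sketch of such a class-weighted fine-tuning setup with the Hugging Face transformers Trainer is shown below; the weight values, the tokenization helper, and the omitted training arguments and datasets are placeholders, not the exact configuration found via the hyperparameter search.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "deepset/gbert-large", num_labels=2)

def tokenize(batch):
    # truncation and padding to a maximum length of 512 tokens
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

# illustrative weights, e.g. inverse class frequencies of the training split
class_weights = torch.tensor([1.0, 4.0])

class WeightedTrainer(Trainer):
    """Trainer with a class-weighted cross-entropy loss to counter class imbalance."""

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss_fct = torch.nn.CrossEntropyLoss(
            weight=class_weights.to(outputs.logits.device))
        loss = loss_fct(outputs.logits, labels)
        return (loss, outputs) if return_outputs else loss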
§.§ Synthetic Dataset
To improve the quality of our BERT classification model, we generated synthetic data to counter the class imbalance of our ground truth dataset. We generated three synthetic texts for each document classified as containing a CTA, using the prompt in the appendix (see figure <ref>). During the training of the gbert-w/-synth-cta model[Available at https://huggingface.co/chaichy/gbert-CTA-w-synthhttps://huggingface.co/chaichy/gbert-CTA-w-synth], we appended the synthetic data to the training set, taking care not to leak any synthetic data into the evaluation dataset and, vice versa, not to leak any evaluation or test data into the training data through synthetic documents derived from them. We used a fixed set of sampling parameters for our API requests; the maximum output length was set individually for each document: we calculated the number of tokens of each original text using the tokenizer package provided by OpenAI and used the original token count as the limit.
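A sketch of this generation step with the OpenAI Python client is shown below; the tokenizer choice, the sampling temperature, and the prompt variable are assumptions, and only the token-count-matched output length follows the description above.

import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.get_encoding("cl100k_base")  # tokenizer choice is an assumption

def generate_synthetic(original_text, synth_prompt, n=3, model="gpt-4o"):
    """Generate n synthetic variants capped at the original document's token count."""
    max_tokens = len(enc.encode(original_text))
    variants = []
    for _ in range(n):
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": synth_prompt},
                {"role": "user", "content": original_text},
            ],
            max_tokens=max_tokens,
            temperature=1.0,  # placeholder: the exact request settings are not reproduced here
        )
        variants.append(response.choices[0].message.content)
    return variants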
§.§ Zero- and Few-Shot using GPT
Following 's () recommendations, we initiated the prompt engineering process by having one author annotate a small random sample of 150 documents. Next, we hand-crafted a preliminary classification prompt: “Given any user input, classify whether the input contains any calls to action”. We tested the initial draft on ChatGPT to classify one document at a time. Responding to misclassifications, we provided nuanced examples and instructed ChatGPT to modify and improve the original prompt accordingly.[for example: https://chatgpt.com/share/fdd306b0-ff2d-4971-bad2-92eb6e8f07a7https://chatgpt.com/share/fdd306b0-ff2d-4971-bad2-92eb6e8f07a7] Thus, we started to improve the prompt by conversation with GPT-4.0 on the ChatGPT platform. Once the classifications on ChatGPT appeared satisfactory, we used the prompt with the API and inferred classifications for all 150 sampled captions. This iterative prompt development process has been previously demonstrated to be effective <cit.>. Through the iterations, we added examples, as few-shot prompts have also been proven effective <cit.>.
During this prompt optimization process, we compared the classification results to the author's annotations and calculated Cohen's κ as a benchmark for the prompt's quality. Ultimately, we settled on a prompt incorporating 's advice to construct prompts around context, the question, and constraints. The context was provided in the objective part of the prompt, the question in the instructions part, and the constraints in the formatting part. Additionally, we enumerated the instructions and potential types of CTAs. Within the instructions, we employed the chain-of-thought approach <cit.>, as the model was prompted to split input messages into sentences, classify each sentence, and then return the final classification. See figure <ref> in the appendix for the final result. We deleted the examples from the few-shot prompt to convert it into the zero-shot prompt.
Our instructions were sent as the system prompt to the API, while each document was sent as a user message. We used fixed request settings for all API calls, and evaluated one specific model version each for GPT-4, GPT-4 Turbo, and GPT-4o.
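The request pattern can be sketched as follows; the model identifier, the request settings, and the simple parsing of the final True/False verdict are illustrative assumptions rather than the exact classification pipeline.

from openai import OpenAI

client = OpenAI()

def classify_cta(document, classification_prompt, model="gpt-4o"):
    """Send the engineered prompt as the system message and the document as the
    user message; parse the final True/False verdict from the reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": classification_prompt},
            {"role": "user", "content": document},
        ],
        temperature=0,  # placeholder setting
    )
    answer = response.choices[0].message.content.strip().lower()
    tokens = answer.rstrip(" .").split()
    return tokens[-1] == "true" if tokens else None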
§.§ Evaluation Approach
We evaluated our classification approaches using established machine learning evaluation metrics: precision, recall, macro F1-score, and binary F1-score. The metrics were calculated using <cit.>. Additionally, we calculated Cohen's κ to measure the interrater agreement between our ground truth data and the model classifications for comparison with social science research.
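A sketch of this evaluation with scikit-learn, assuming binary ground-truth and predicted labels, could look as follows.

from sklearn.metrics import precision_recall_fscore_support, f1_score, cohen_kappa_score

def evaluate(y_true, y_pred):
    """Precision, recall, macro and binary F1, plus Cohen's kappa versus the ground truth."""
    precision, recall, _, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", zero_division=0)
    return {
        "precision": precision,
        "recall": recall,
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
        "f1_binary": f1_score(y_true, y_pred, average="binary"),
        "cohen_kappa": cohen_kappa_score(y_true, y_pred),
    }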
We used an independent test dataset to evaluate our BERT model. The corpus was stratified by "Call to Action" and split into two sets: 80% for training and 20% for testing. The 80% training set was used for hyperparameter tuning and cross-validation, while the 20% test set was reserved for the final evaluation. To evaluate the GPT classifications, we excluded rows containing phrases from the few-shot examples (n=16) and used the entire annotated dataset.
§ RESULTS
In the first part of this section, we will answer our primary questions RQ1a–c regarding the computational classifications through the external evaluation based on human annotations. At the end of the section, we will answer our secondary interest RQ2, uncovering the differences between stories, posts, and parties.
§.§ Evaluation of GPT Models
The performance across all tested GPT models is consistently high: The macro F1 scores[Subsequently, we always refer to macro F1 scores unless stated otherwise.] range from F1=0.85 to F1=0.91 (compare table <ref>). GPT-4o, with the few-shot prompt, achieves the highest classification performance, answering RQ1a. Upon closer inspection, the model performs best when classifying captions, followed by OCR in posts and post transcriptions. For stories, the performance drops to F1=0.85 for OCR and even lower for transcription text.
§.§ Evaluation of BERT Models
Both BERT models display a comparatively high classification quality ranging from F1=0.92 for the model trained on the original data to F1=0.93 for the model incorporating the synthetic training data (see table <ref>). Thus, to answer RQ1b: incorporating synthetic training data generated by GPT-4o improved classification performance. Since the performance has only improved by the second decimal place, the synthetic text generation prompt should be revisited to introduce greater linguistic variety, and the overall results should be interpreted with caution. The small quality improvement might be influenced by other factors, suggesting that the answer to RQ1b is not universally valid. A five-fold cross-validation evaluated the model hyperparameters. The mean F1=0.90 score for the gbert-w/-synth-cta model demonstrates its ability to generalize well across different subsets of the data, and the standard deviation of 0.02 suggests a stable performance with minimal variability.
§.§ Performance Across Text-Type and Post-Type Combinations
To answer RQ1c, we investigated the classification performance for each text-type and post-type combination (compare table <ref>). Notably, the poor results for the classification of story transcriptions and the excellent results for post transcriptions stand out. These outliers may be partly attributed to the low number of cases: of the 12 post transcriptions in the test set, one contains a call to action. Both models classified the document correctly; the F1 score is perfect without false positives. However, across story transcriptions, the BERT model missed three out of four CTAs across 26 documents. Coincidentally, two out of the three false negatives are Calls to Interact. They have been neglected in posts of the 2021 campaign <cit.>, indicating that the training data contains few documents of this type.
The lower classification performance of GPT-4o across OCR texts compared to the BERT model is striking. Across both post types, OCR documents constitute about 70% of all text documents and show the lowest mean token count per document. The OCR process introduces noise by recognizing irrelevant text, i.e., street and shop signs in the background and incorrectly recognized words. The OCR text bits are concatenated and do not necessarily follow the right word order. For captions, the OpenAI model is on par with the BERT model and exceeds the fine-tuned model in transcriptions.
§.§ Calls to Action in Posts and Stories
We used the gbert-w/-synth-cta classifications to answer RQ2: Instagram posts display a higher relative mention of Calls to Action. Almost half of all captions (44.7%) contain CTAs, followed by 16.8% of transcriptions and 15.9% of OCR documents. In stories, we found the most CTAs in the embedded text (10.5%) and a very low number in transcriptions (2.3%). On the post/story level, almost half of all posts contain a Call to Action (49.58%), compared to only 10.64% of all stories. The difference between CTAs in posts and stories is significant (χ^2(1) = 501.84, p < .001), with a medium effect size (Cramer's V = 0.42).
Next, we tested the use of CTAs across parties for all post types: The analysis indicates a significant difference in their usage between different parties (χ^2(15) = 604.13, p < .001), with a medium effect size (Cramer’s V = 0.46). We accounted for the interaction between party and post type to ensure this difference was not due to varying distributions of post types between parties. This suggests that the parties varied in their use of calls to action in their Instagram election campaigns, even considering the different use of stories and posts across parties. For posts, the FDP displayed the highest use of CTAs (70.45%), followed by the Greens (60.23%). On the low end, the SPD made the least use of CTAs (31.97%), followed by AfD (40.54%). In stories, the parties acted differently: The CDU (18.76%) and CSU (14.78%) show the highest use of calls. Similarly, the Freie Wähler party (14.97%) and the Left (14.56%) display relatively high numbers of CTAs in their stories. The CTA leaders for posts, the Greens (5.12%) and the FDP (5.66%), are at the bottom of the list for stories.
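The tests reported above can be sketched as follows with pandas and SciPy, assuming a data frame with one row per post or story, a grouping column (post type or party), and a boolean CTA column; accounting for the party-by-post-type interaction would require an additional model and is omitted in this sketch.

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def chi2_cramers_v(df, group_col, cta_col="cta"):
    """Chi-square test of independence plus Cramer's V effect size
    for a contingency table of `group_col` versus CTA presence."""
    table = pd.crosstab(df[group_col], df[cta_col])
    chi2, p, dof, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    cramers_v = np.sqrt(chi2 / (n * k))
    return chi2, p, dof, cramers_v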
§ DISCUSSION
Our experiments confirm the efficacy of large language models for the binary classification of Calls to Action in social media election campaigns. Overall, the GPT-4 models performed well in zero- and few-shot settings. Regarding Cohen's Kappa, there is a strong agreement between language model classifications and ground truth labels.
Fine-tuning the gbert-large BERT model, however, exceeds the performance of the LLMs. The relatively low number of 1,388 human-annotated documents, with 270 positive cases, yielded a well-performing classification model. Adding synthetic training data generated by the GPT-4o model improved the model further. Both models surpass the performance of CTA classification approaches reported for Russian <cit.> and Spanish <cit.> social media texts. Compared to a fine-tuned version of BERT for classifying CTAs in English Twitter and Facebook messages <cit.>, our models perform similarly well while using only a third of the training data.
A closer look at the classification quality on a text- and post-type level reveals problems with classifying story transcripts. CTAs in these documents account for only 5.16% of the overall training data. This highlights the potential for further improvements in data augmentation using synthetic documents: A qualitative inspection of synthetic training data generated based on transcripts revealed less similarity to original transcripts than to, for example, post captions. Improving the synthetic data prompt to generate more realistic transcripts might improve the classification performance for this type of text while increasing the linguistic variance across synthetic training data might further increase the overall classification performance.
Striving for the best possible annotation quality, we chose the gbert-w/-synth-cta model for our classification task. However, training a robust classification model takes several steps, from annotation through hyperparameter tuning to the final evaluation. Conversely, the GPT-4 models are readily available, and prompt engineering was comparatively uncomplicated in our context. With decreasing prices, evolving models, and the availability of open-source alternatives, like Llama 3, this study further confirms the utility of large language models for computational social science tasks and political science analyses.
After applying the model, we uncovered significant differences between political actors' use of CTAs in stories and posts. We found a slightly higher prevalence of CTAs across posts compared to previous studies <cit.>, which may be attributed to our sample: we collected data close to election day, and CTAs have been shown to increase closer to election day <cit.>. Our study contributes to the study of election campaigns mainly by uncovering a significant difference between posts and stories and between parties. The Greens, for example, have been highlighted before as the party with the highest prevalence of CTAs across their posts. At the same time, we found the party's stories contain the lowest number of CTAs relative to the number of stories posted. Overall, the use of CTAs in stories was low, which contrasts with 's () observations of the 2018 U.S. gubernatorial election, raising questions about what other elements or content constituted political stories in the 2021 election.
§.§ Limitations
Our study has several limitations. We observed the campaign for a relatively short period – two weeks – due to the necessary effort to capture ephemeral stories. Additionally, we limited our study to verified accounts only. We also limited the analysis to the first frame of each video to decrease complexity, possibly dismissing embedded text in any other frame.
§.§ Future Work
The literature on calls to action in election campaigns distinguishes between different types of CTAs that fulfill various campaign functions. To gain a more holistic understanding of election campaigns and increase the analytical power of our approach, we see future work to build on top of our classification model: Using the positive classifications, future studies can collect human annotations to train a multi-label classification model. Following 's argumentation, future work should evaluate the classification performance of open-source LLMs.
§.§ Ethical Considerations
We collected publicly available data posted by parties and verified party officials only. We followed the recommendations towards a conscientious approach to data collection by Venturini and Rogers, who considered scraping a “necessary evil” <cit.>. In our article, we do not address personal or sensitive data.
acl_natbib
§ APPENDIX
→ See next page.
|
http://arxiv.org/abs/2409.02395v2 | 20240904024534 | Deep Brain Ultrasound Ablation Thermal Dose Modeling with in Vivo Experimental Validation | [
"Zhanyue Zhao",
"Benjamin Szewczyk",
"Matthew Tarasek",
"Charles Bales",
"Yang Wang",
"Ming Liu",
"Yiwei Jiang",
"Chitresh Bhushan",
"Eric Fiveland",
"Zahabiya Campwala",
"Rachel Trowbridge",
"Phillip M. Johansen",
"Zachary Olmsted",
"Goutam Ghoshal",
"Tamas Heffter",
"Katie Gandomi",
"Farid Tavakkolmoghaddam",
"Christopher Nycz",
"Erin Jeannotte",
"Shweta Mane",
"Julia Nalwalk",
"E. Clif Burdette",
"Jiang Qian",
"Desmond Yeo",
"Julie Pilitsis",
"Gregory S. Fischer"
] | physics.med-ph | [
"physics.med-ph",
"cs.RO"
] |
Deep Brain Ultrasound Ablation Thermal Dose Modeling with in Vivo Experimental Validation
Zhanyue Zhao, Benjamin Szewczyk, Matthew Tarasek, Charles Bales, Yang Wang, Ming Liu, Yiwei Jiang, Chitresh Bhushan, Eric Fiveland, Zahabiya Campwala, Rachel Trowbridge, Phillip M. Johansen, Zachary Olmsted, Goutam Ghoshal, Tamas Heffter, Katie Gandomi, Farid Tavakkolmoghaddam, Christopher Nycz, Erin Jeannotte, Shweta Mane, Julia Nalwalk, E. Clif Burdette, Jiang Qian, Desmond Yeo, Julie Pilitsis, and Gregory S. Fischer
Z. Zhao, B. Szewczyk, C. Bales, Y. Wang, M. Liu, Y. Jiang, K. Gandome, F. Tavakkolmoghaddam, C. Nycz, and G. S. Fischer are with Worcester Polytechnic Institute, Worcester, MA e-mail: [email protected], [email protected].
B. Szewczyk, J. Qian, and J. Pilitsis are with the Department of Neurosurgery, Albany Medical Center, Albany, NY
M. Tarasek, C. Bhushan, E. Fiveland, and D. Yeo are with GE Global Research Center, Niskayuna, NY
Z. Campwala, R. Trowbridge, Z. Olmsted, S. Mane, J. Nalwalk, J. Qian, and J. Pilitsis are with the Department of Neuroscience and Experimental Therapeutics, Albany Medical Center, Albany, NY
P. M. Johansen and J. Pilitsis are with Charles E. Schmidt College of Medicine, Florida Atlantic University, Boca Raton, FL
E. Jeannotte is with Animal Resources Facility, Albany Medical Center, Albany, NY
G. Ghoshal, T. Heffter, and E. C. Burdette are with Acoustic MedSystems, Inc., Savoy, IL
This research is supported by National Institute of Health (NIH) under the National Cancer Institute (NCI) under Grant R01CA166379 and R01EB030539.
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Intracorporeal needle-based therapeutic ultrasound (NBTU) is a minimally invasive option for intervening in malignant brain tumors, commonly used in thermal ablation procedures. This technique is suitable for both primary and metastatic cancers, utilizing a high-frequency alternating electric field (up to 10 MHz) to excite a piezoelectric transducer. The resulting rapid deformation of the transducer produces an acoustic wave that propagates through tissue, leading to localized high-temperature heating at the target tumor site and inducing rapid cell death. To optimize the design of NBTU transducers for thermal dose delivery during treatment, numerical modeling of the acoustic pressure field generated by the deforming piezoelectric transducer is frequently employed. The bioheat transfer process generated by the input pressure field is used to track the thermal propagation of the applicator over time. Magnetic resonance thermal imaging (MRTI) can be used to experimentally validate these models. Validation results using MRTI demonstrated the feasibility of this model, showing a consistent thermal propagation pattern. However, a thermal damage isodose map is more advantageous for evaluating therapeutic efficacy. To achieve a more accurate simulation based on the actual brain tissue environment, a new finite element method (FEM) simulation with enhanced damage evaluation capabilities was conducted. The results showed that the highest temperature and ablated volume differed between experimental and simulation results by 2.1884^∘C (3.71%) and 0.0631 cm^3 (5.74%), respectively. The lowest Pearson correlation coefficient (PCC) for peak temperature was 0.7117, and the lowest Dice coefficient for the ablated area was 0.7021, indicating a good agreement in accuracy between simulation and experiment.
Needle-Based Therapeutic Ultrasound (NBTU), Magnetic Resonance-Guided Robotically Conformal Ablation, Finite element modeling, in vivo Swine Ablation
§ INTRODUCTION
Thermal ablation techniques are being developed as minimally invasive or non-invasive alternatives to primary and metastatic brain tumor surgery <cit.>. Different types of technologies have been developed, including four conventional methods: (1) Radio frequency (RF) ablation, which uses an alternating current at a typical frequency of around 500 kHz to produce ionic agitation in tissue, resulting in friction and heating of the tumor <cit.>. (2) Microwave ablation (MWA), which uses electromagnetic waves of up to 2450 MHz to agitate polar molecules, generating friction and heat in tissue and causing coagulative necrosis of cells <cit.>. (3) Laser ablation, which utilizes intense and highly collimated beams of monochromatic light designed for biological tissues to thermally destroy tumors <cit.>. (4) Ultrasound ablation, in which the transmission of high-frequency sound waves through biological tissue transfers mechanical energy that leads to localized heating and coagulative necrosis of surrounding cells <cit.>. Ultrasound ablation comprises two types of techniques, namely extracorporeal methods using high intensity focused ultrasound (HIFU) for non-invasive ablation procedures <cit.>, and intracorporeal methods using an inserted transducer in direct contact with the target area for minimally invasive ablation procedures <cit.>. A promising new ablation technology developed in recent years is pulsed-field ablation (PFA), which uses a train of microsecond-duration, high-amplitude electrical pulses that ablate myocardium by electroporation of the sarcolemmal membrane without measurable tissue heating; this method is aimed at the treatment of atrial fibrillation <cit.>.
Numerical modeling of thermal ablation solves the physical equations governing energy deposition and heat transfer in tissue to determine the transient temperature profile and assess tissue damage upon heating <cit.>. However, due to patient-specific tissue compositions that affect local features such as absorption, attenuation, and perfusion, thermal ablation can be challenging to predict with absolute accuracy. Finite element modeling (FEM) can be applied to thermal ablation modeling. By subdividing a large system into smaller, simpler elements through meshing, it can calculate the acoustic pressure produced by the piezoelectric transducer and the resulting bioheat deposition in the acoustic medium. The finite element formulation of the boundary value problem finally results in a system of algebraic equations.
In this work, we extend our FEM simulation of NBTU to a realistic brain tissue environment with tissue damage evaluation, which is one of the primary studies required for MR-guided, robot-assisted conformal ablation procedures. We developed a Comsol FEM simulation based on brain tissue material properties with a CEM43 isodose map for evaluating tissue damage. Moreover, we validated the model against in vivo swine ablation experimental results, demonstrating the feasibility and accuracy of this new model.
§ FEM ANALYSIS OF NBTU IN BRAIN TISSUE
Temperature measurements can provide absolute or relative temperature changes in comparison to unheated tissue or reference data <cit.>. However, thermal damage is generally evaluated using the Sapareto-Dewey equation, and the ablation zone is defined by the cumulative equivalent minute (CEM) standard <cit.>. CEM43 is defined as the equivalent time for which the target tissue would have to be held at 43^∘C to induce the same thermal damage <cit.>. We selected a CEM43 of 70 based on our collaborators' experience with ablation in liver tissue <cit.>, a comparison to similar work <cit.>, and studies demonstrating that a CEM43 in the range of 50 to 240 induces cell death in soft tissue <cit.>. Using Sapareto and Dewey's formulation, the thermal damage dose was computed from the MRTI temperature as given by Equation <ref>
CEM43=∑_t=0^t=final R^43-T_tΔ t, { R=0.25 for T<43^∘ C
R=0.50 for T≥43^∘ C
.
where T_t is the average temperature during time Δ t. The unit of thermal dose is equivalent to minutes at 43^∘C. The application of CEM43 can estimate the damage to brain tissue.
Based on the thermal damage of soft tissue, a FEM simulation can be created and used as a numerical prediction tool for accurately tracking thermal changes using NBTU in preparation for human trials.
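For readers who wish to reproduce the dose bookkeeping, the following Python sketch (ours, not part of the original processing pipeline) accumulates CEM43 from a sampled temperature-time curve and thresholds it at 70 CEM43; the temperature ramp used below is a made-up placeholder rather than measured MRTI data.

import numpy as np

def cem43(temps_c, dt_min):
    # Sapareto-Dewey cumulative equivalent minutes at 43 C
    # temps_c: average temperature per time step [deg C]; dt_min: step length [min]
    temps_c = np.asarray(temps_c, dtype=float)
    r = np.where(temps_c >= 43.0, 0.50, 0.25)   # R = 0.5 for T >= 43 C, else 0.25
    return np.sum(r ** (43.0 - temps_c) * dt_min)

# Hypothetical example: 690 s of heating sampled every 5 s, ramping 37 C -> 58 C
t = np.arange(0.0, 690.0, 5.0)
temps = 37.0 + 21.0 * np.clip(t / 200.0, 0.0, 1.0)
dose = cem43(temps, dt_min=5.0 / 60.0)
print(f"CEM43 = {dose:.1f}, ablated: {dose >= 70.0}")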
§.§ FEM Analysis of NBTU in Brain Tissue
The FEM study for simulating thermal propagation and CEM43 threshold calculation was informed by NBTU parameters during surgery, while the parameters for the application were based on the phantom study <cit.>. We enhanced our existing numerical models with an acoustic medium based on brain-tissue-specific parameters for comparison to our in vivo animal data.
§.§.§ Simulation Setup
The FEM model was implemented in Comsol to model the probe's mechanical deformation, the resultant stationary acoustic pressure field of the applicator, and the thermal damage isodose threshold of 70 CEM43. A cross-sectional model of the probe's geometry was created as a ring with a 1.5 mm outer diameter and a 1.1 mm inner diameter, and notches of 0.1 mm depth by 0.05 mm width were added such that the probe was segmented into 90^∘ and 180^∘ sectors. A detailed drawing of the probe dimensions is shown in Fig. <ref>. Lead zirconate titanate (PZT-4) from the Comsol material library was selected as the probe cylinder material with no further modifications, and the simulation properties used for the transducer are shown in Table <ref>. The probe was surrounded by a 100 mm × 100 mm (L×W) 2D acoustic medium.
The solid mechanics physics interface defined the transducer as a piezoelectric material and defined a fixed constraint within the inner surface of the cylinder. The piezoelectric poling direction was defined by a base vector coordinate system defined by the transform shown in Table <ref> and was aligned radially outward from the center of the transducer geometry. The electrostatics physics interface was also used to define the electric potential, or electrical terminal along the 90^∘ outer surface of the transducer and to define a ground constraint along the inner surface. Different types of probes (ablation range) were used in the experiment, and the specific parameters of probes that were used in the simulation are shown in Table <ref>. The electric potential can be calculated by Equation <ref>:
V = √(P(1+ε)Z/η)
where P is the desired acoustic power needed for ablation and ε is the power loss introduced by the RF inline filter; the power loss for these probes ranges from approximately 10-20% depending on the probe type and the method of fabrication, and based on our collaborators' suggestion we used the median value of 15% in our simulation. Z is the transducer impedance, for which we used 50 ohm as a typical piezoelectric material value. η is the efficiency of the probe; detailed values can be found in Table <ref>.
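As a small worked example of Equation <ref> (a sketch under assumed numbers, not the tabulated probe data), the drive voltage can be evaluated as follows; the values P = 3 W, ε = 0.15, Z = 50 ohm and η = 0.6 are placeholders.

import math

def drive_voltage(p_acoustic_w, power_loss, impedance_ohm, efficiency):
    # V = sqrt(P (1 + eps) Z / eta)
    return math.sqrt(p_acoustic_w * (1.0 + power_loss) * impedance_ohm / efficiency)

print(f"V = {drive_voltage(3.0, 0.15, 50.0, 0.6):.1f} V")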
The pressure acoustics physics interface was used to define the wave propagation through the medium as a linear elastic fluid model with an attenuation coefficient of 31.96 Np/m. The bioheat transfer physics interface was used to define a heat source applied to the medium with a user-defined activation function shown in equation 1, where Q is the absorbed ultrasound energy over the acoustic field, acpr. Q_pw is the dissipated power intensity, step is a step function from 0 to 1 with smoothing of 0.005, t is the current time step of the simulation, and t_ProbeOn is a constant that represents the total time the probe was activated.
The acoustic medium material properties, referenced to brain tissue, are described in Table <ref>. Far-field and thermally insulated boundary conditions were applied to the acoustic medium, and no reflection of ultrasonic waves from the edges was assumed. A frequency domain study was conducted to obtain the acoustic pressure field produced by the applicator, applying a tetrahedral mesh with a maximum element size of λ/6, where λ represents the wavelength of the produced ultrasonic wave within the medium.
§.§.§ Acoustic Pressure Mapping
The ultrasound waves produced by the transducer can be simulated in Comsol using a frequency domain study at the selected resonant mode, and the resulting pressure pattern can be found in Fig. <ref>. Since the brain tissue properties introduce only a small change in the speed of sound propagation relative to the previously used medium, the patterns are similar to those in our previous simulation work reported in <cit.>.
§.§.§ CEM43 Mapping
The time-dependent bioheat transfer study was conducted to validate the heat deposition by the simulated acoustic pressure field, and we modified the default equation in this node by using the generalized Pennes bioheat transfer equation shown in Equation <ref>:
ρ C∂ T/∂ t=∇· k ∇ T-c_bρ_bω_b(T-T_b)+Q_m+Q_p
Where ρ (kg/m^3) is the tissue density, C (J/kg/^∘ C) is the tissue specific heat capacity, k (W/m/^∘ C) is the tissue thermal conductivity, T (^∘ C) is the current temperature, T_b (^∘ C) is the temperature of blood, and we assumed the initial temperature of both tissue and blood as 37^∘C. ρ_b (kg/m^3) is the blood density, ω_b (kg/m^3/s) is the blood perfusion, c_b (J/kg/^∘ C) is the blood specific heat capacity, which we used the same value as the brain tissue heat capacity (C). Q_p (W/m^3) is the heat deposition due to sonication given by Equation <ref>
Q_p=α_attenp^2/ρ c
Where α_atten (Np/m) is the medium attenuation, p (Pa) is the acoustic pressure field, ρ (kg/m^3) is the medium density, c (m/s) is the speed of sound within the medium. Q_m (W/m^3) is metabolic heat, which is the heat generated by the tissue.
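To make the role of the individual terms in Equation <ref> and Equation <ref> concrete, the following self-contained 1D explicit finite-difference sketch integrates the Pennes equation with a fixed Gaussian heat source; it is a didactic toy with illustrative parameter values and simplified boundary conditions, not the Comsol model used in this work.

import numpy as np

# Illustrative brain-like parameters (placeholders, not the tabulated values)
rho, c_p, k = 1040.0, 3650.0, 0.51           # tissue density, heat capacity, conductivity
rho_b, c_b, w_b = 1050.0, 3650.0, 0.004      # blood density, heat capacity, perfusion rate [1/s]
T_b, Q_m = 37.0, 400.0                       # blood temperature [C], metabolic heat [W/m^3]

L, n = 0.05, 201                             # 5 cm domain, number of grid points
dx = L / (n - 1)
x = np.linspace(0.0, L, n)
Q_p = 2.0e6 * np.exp(-((x - L / 2) / 0.002) ** 2)   # toy sonication deposition [W/m^3]

dt = 0.4 * dx**2 * rho * c_p / k             # explicit stability limit
T = np.full(n, 37.0)
t, t_probe_on = 0.0, 120.0
while t < 180.0:
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    src = Q_p if t < t_probe_on else 0.0     # probe switched off after t_probe_on seconds
    dTdt = (k * lap - c_b * rho_b * w_b * (T - T_b) + Q_m + src) / (rho * c_p)
    T[1:-1] += dt * dTdt[1:-1]               # domain ends held at 37 C (Dirichlet)
    t += dt

print(f"peak temperature after {t:.0f} s: {T.max():.1f} C")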
A general form PDE study was then conducted to calculate the CEM43 thermal dose map and the 70 CEM43 isodose lines, which are shown in Fig. <ref>. Predicted thermal dose maps for the NBTU probes showed inhomogeneous CEM43 thermal dose patterns because of the random blood perfusion and metabolic heat influence generated by Comsol. Using the same volume acquisition method as for the MRT images, we used surface integration to calculate the 70 CEM43 isodose lines surrounding the area of the transducer in the middle section. Areas of additional slices with 5 mm slice thickness were acquired, multiplied by 5 mm, and summed together to calculate the numerically predicted ablated volume.
§.§.§ Model Sensitivity Analysis
We also performed a Taguchi-based sensitivity analysis with the above model to determine the contribution and ranking of the parameters. We considered the maximum obtained temperature as the response variable, and the six 3-level parameters, namely thermal conductivity (k), heat capacity (C), metabolic heat (Q_m), blood perfusion (ω_b), attenuation (α_atten), and voltage (V) applied to the transducer, as factors of interest. The three levels of each parameter, namely low, medium, and high, based on human soft tissue properties, are shown in Table <ref>. To obtain the minimum number of simulations needed for these parameters and levels, we used the standard Taguchi L-27 orthogonal array. After obtaining all parameter combinations and their corresponding maximum temperatures, we performed an analysis of variance (ANOVA) to quantify the effect of each factor on the response variable <cit.>. Calculations of the Taguchi design and ANOVA were carried out in Minitab 21 software (Minitab LLC., PA). Results of the ANOVA are shown in Table <ref>. V has the largest effect on the maximum obtained temperature, with more than a 50% contribution in the model. The second and third major contributing factors are α_atten and k, respectively. These three parameters together contribute over 83% to the response variable. C, Q_m, and ω_b are not significant in our model.
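The percentage contributions reported in Table <ref> follow from the between-level sums of squares of each factor. The short Python sketch below reproduces that computation on a synthetic stand-in (a full 3-level factorial with a voltage- and attenuation-dominated response), since the actual L-27 response column is not reproduced here; it is meant only to illustrate the arithmetic behind the ANOVA table, not to replace the Minitab analysis.

import numpy as np
import pandas as pd
from itertools import product

rng = np.random.default_rng(0)
factors = ["k", "C", "Qm", "wb", "atten", "V"]

# Stand-in design: full 3-level factorial (the paper uses the Taguchi L-27 array instead)
grid = pd.DataFrame(list(product([0, 1, 2], repeat=6)), columns=factors)
grid["Tmax"] = 45 + 8 * grid["V"] + 4 * grid["atten"] + 2 * grid["k"] \
               + rng.normal(0.0, 1.0, len(grid))     # synthetic response

grand_mean = grid["Tmax"].mean()
ss_total = ((grid["Tmax"] - grand_mean) ** 2).sum()
for f in factors:
    g = grid.groupby(f)["Tmax"].agg(["mean", "count"])
    ss_f = (g["count"] * (g["mean"] - grand_mean) ** 2).sum()
    print(f"{f:6s} contribution: {100.0 * ss_f / ss_total:5.1f} %")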
§ MRTI IMAGING VALIDATION
This section summarizes the experiments conducted from 2020 to 2022; selected swine data were used for validation. All animal procedures were approved by the Albany Medical College (AMC) Institutional Animal Care and Use Committee (IACUC). MR-guided delivery of NBTU was conducted on male and female Sus scrofa domesticus swine (8 to 20 weeks old, 10 weeks on average) weighing between 18 and 25 kg. One acute swine (sacrificed immediately post-procedure), one subacute swine (sacrificed 1 to 4 days post-procedure), and 5 survival swine were selected to create a total of 7 lesions. MRT images were acquired from a 3T GE Signa Architect scanner (GE Global Research, Niskayuna, NY, USA), and the ablated volumes calculated from the MRT images were compared with the simulation results <cit.>. To perform the interstitial thermal ablation procedure, a seven-DOF NeuroAblation robot was created as an MR-compatible device. The end effector of the NeuroAblation robot is modular and may be changed to accommodate the preferred applicator for the intended intervention. In this work, a needle-based therapeutic ultrasound (NBTU) probe developed by Acoustic MedSystems Inc (Illinois, United States) and powered by the TheraVision control system was deployed in the subject. A detailed description can be found in our previous work <cit.>.
§.§ In-Vivo Swine Ablation Experiment
In this study, pigs were positioned in a prone position using a custom head holder to align with a NeuroAblation robot, which was fixed to MRI bed rails and registered with the MRI scanner's coordinate system. The procedure involved dividing the MRI room into a sterilized zone for the surgeon and nurse and a standard zone for the robot operator. The entry and target points for the procedure were selected using 3D MRI images, and the NeuroAblation robot validated the coordinates.
Once the points were confirmed within the robot's range, the pig was removed from the scanner, and the robot was operated to reach the target position, except for needle insertion, which was performed by the surgeon. Afterward, the pig was repositioned in the scanner for the ablation procedure. Thermal dose mapping was done using intraoperative MRI to track temperature changes and assess the ablation's effectiveness.
The procedure included low-dose test ablations to confirm the ablation volume's shape and direction before full treatment. Post-ablation imaging was conducted, and pigs were euthanized at various time points for further analysis. The study measured temperature distribution, thermal damage, and ablated volumes to assess the effectiveness of the procedure. A detailed description can be found in <cit.>.
§.§ MRT-Images Temperature Validation
Experimental data from 7 swine were selected for simulation validation. Firstly, the 90^∘ probe with 6 W acoustic power and 690 s ablation duration (swine 7 experimental and simulation results) was selected as one of the examples analyzed in this section; the procedure consisted of a short period of low-power ablation for pre-testing followed by a high-power ablation phase, and the MRTI peak temperature change over time, compared with the simulated maximum temperature, is shown in Fig. <ref>. Note that the ablation process includes both the heating and cooling periods because, during the cooling period, the surrounding area was still being heated, and this continued heating contributed to the CEM43 thermal isodose map. The results indicate that the central ablation plane lay between slices 2 and 3, since only these two slices showed significant temperature changes while the other slices changed very little, and the maximum temperature increased up to 57^∘C. The right figure shows the selected slices (slices 2 and 3) versus the simulation result in terms of temperature change over time. Based on the simulation, the predicted maximum temperature reached 58.5823^∘C, which yields a maximum peak temperature difference of 1.5823^∘C (2.78%). The Pearson correlation coefficient (PCC) was evaluated to compare the experimental and simulation results. The PCC between the experimental data from slice 2 and the simulated peak temperature curve reached 0.9527.
The rest of the lesions' results are also compared with the simulation data, and all the images can be found in Fig. <ref> and Fig. <ref>. The results of 180^∘ is shown in Fig. <ref>, where the (a)-(b) shows the Swine 4 results with 180^∘ probe under 3W acoustic power and 180s duration time condition, and the peak temperature reached 62^∘C in experiment versus 63.2925^∘C in simulation with Pearson correlation coefficient (PCC) of 0.9566. (c)-(d) shows the Swine 6 results with 180^∘ probe under 4W acoustic power and 180s duration time condition, and the highest temperature reached 64^∘C in experiment versus 64.9839^∘C in simulation with PCC of 0.9275. Finally, the results of 360^∘ are shown in Fig. <ref>, where the (a)-(b) shows the Swine 1 experimental and simulation results with 360^∘ probe under 3W acoustic power and 90s duration time condition, and the highest temperature reached 41^∘C in experiment versus 42.3701^∘C in simulation with PCC of 0.9652. Note that the base temperature was 38^∘C. (c)-(d) shows Swine 2 results with 360^∘ probe under 3W acoustic power and 100s duration time condition, and the highest temperature reached 43^∘C in experiment versus 44.3343^∘C in simulation with PCC of 0.9321. Note that there was no low-temperature testing process in this experiment. (e)-(f) Swine 3 results with 360^∘ probe under 3W acoustic power and 120s duration time condition, and the highest temperature reached 61^∘C in experiment versus 60.5230^∘C in simulation with PCC of 0.8145. Note that the base temperature was also 38^∘C in this experiment. (g)-(h) Swine 5 results with 360^∘ probe under 4W acoustic power and 120s duration time condition, and the highest temperature reached 59^∘C in experiment versus 61.1884^∘C in simulation with PCC of 0.7117. Detailed results can be found in Table <ref>.
§.§ Ablated CEM43 Map Results
The previous section discussed the peak temperature changes between the MRTI data and the simulation results; however, the CEM43 isodose is the preferred indicator for evaluating tissue damage in the field <cit.>. Using volumetric MRTI, we measured the temperature changes at different distances from the focal point; the necrotic zone boundaries were determined at a CEM43 of 70 and multiplied by the 5 mm slice thickness, allowing us to calculate the ablated volume for comparison with our simulation results.
The CEM43 of 70 isodose maps from the 7 swine experiments are compared with the simulation results. An outline was drawn around the margins of the ablated area with the lasso feature, and the area of the circled region was then measured in cm^2. Each ablated area was multiplied by the 5 mm slice thickness if the ablation went through the posterior side of the slice. In slice 2 the largest distance from the probe center to the edge of the ablated area is 9 mm and the ablated volume is 0.2344 cm^3, while in slice 3 the largest distance from the probe center to the edge of the ablated area is 7.4 mm and the ablated volume is 0.2234 cm^3. Fig. <ref> shows the simulation versus the experimental results for the CEM43 of 70 isodose maps. The largest distance from the center of the probe to the edge of the ablated area is 9 mm in the experiment versus 8.9778 mm in the simulation, an error within 0.25%. Another metric, the Dice coefficient, was used for evaluating the degree of overlap between the simulated and experimental CEM43 isodose patterns. For swine 7, the Dice coefficient between the simulation (center slice of the transducer) and the experimental results (MRTI slice 2 data) is 0.8496.
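For completeness, the two agreement metrics used above can be computed as follows; the temperature curves and ablation masks below are synthetic placeholders for the MRTI and simulation data.

import numpy as np

def pearson_cc(a, b):
    return np.corrcoef(np.ravel(a), np.ravel(b))[0, 1]

def dice(mask_a, mask_b):
    mask_a, mask_b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Synthetic stand-ins: peak-temperature curves and 70 CEM43 masks on one slice
t = np.linspace(0.0, 690.0, 100)
T_meas = 37.0 + 21.0 * np.tanh(t / 200.0) + np.random.default_rng(1).normal(0.0, 0.3, t.size)
T_sim = 37.0 + 21.5 * np.tanh(t / 200.0)
yy, xx = np.mgrid[-20:20, -20:20]            # 1 mm pixels
mask_meas = xx**2 + yy**2 < 9.0**2           # 9 mm measured ablation radius
mask_sim = xx**2 + (yy - 1)**2 < 8.98**2     # slightly shifted simulated ablation

print(f"PCC  = {pearson_cc(T_meas, T_sim):.4f}")
print(f"Dice = {dice(mask_meas, mask_sim):.4f}")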
Fig. <ref> and Fig. <ref> show the simulation versus the experimental results of the CEM43 of 70 isodose maps for the 360^∘ and 180^∘ probes, and the Dice coefficients of the selected swine experiments versus the simulation can be found in Table <ref>. The lowest Dice coefficients between experiment and simulation for the 360^∘ and 180^∘ probes are 0.7021 and 0.8145, respectively. Note that these results include all the in vivo animal experiments attended by the author, which extend beyond the 7 lesions mentioned in this study.
Detailed results of ablated volume data from the 7 swine experiments can be found in Table <ref>. By using the same method mentioned in MRTI, the ablated area calculated from simulated CEM43 of 70 isodose maps was multiplied by the 5mm slice thickness and summed together to get the ablated volume, which is also shown in Table <ref>.
§ CONCLUSION
In this work, we developed a 2D simulation with realistic brain tissue parameters focused on thermal damage modeling and validated it with multiple in vivo animal surgeries.
A Taguchi-based sensitivity analysis was performed, and it was concluded that the applied voltage (V) and the attenuation (α_atten) are the most important parameters determining the maximum obtained temperature in our bioheat transfer model. Consistent with other researchers' findings, we observed a significant contribution of the voltage to the ablation outcome <cit.>. We also suggest prioritizing the attenuation (α_atten) and the thermal conductivity (k) and considering these parameters carefully in both bioheat transfer simulations and experiments.
We presented a 2D thermal damage FEM simulation in brain tissue, validated with multiple in vivo animal surgeries. This model focuses on actual tissue thermal damage (CEM43) rather than thermal propagation alone, which is what is required for NBTU thermal ablation therapy. The largest differences in peak temperature and ablated volume between the experimental and simulation results were 2.1884^∘C (3.71%) and 0.0631 cm^3 (5.74%), respectively; the lowest Pearson correlation coefficient (PCC) for peak temperature was 0.7117, and the lowest Dice coefficient for the ablated area was 0.7021. These results show good agreement between simulation and experiment. Note that the case with the highest temperature error (swine 5, 3.71%) is not the same as the case with the highest volume error (swine 6, 5.74%); this may be because the temperature error depends more on the thermal conductivity and absorption, while the volume error depends more on thermal dissipation and the perfusion rate.
§ DISCUSSION
In this study, changes in tissue properties are not considered; however, as reported in <cit.>, the perfusion and attenuation change dynamically during the ablation procedure, and in particular the ablated region differs from un-ablated tissue in both healthy tissue and tumor cells. Both low PCC and low Dice coefficients were observed when using the 360^∘ probe, especially during the heating period and at some unexpected edges of the ablation zone. According to the sensitivity analysis we performed, the attenuation ranks second among the parameters in our simulation, and it can change by up to ± 50% during the ablation procedure in soft tissue <cit.>, which affects the ablation performance significantly. Also, the surrounding vasculature may further induce these unexpected misalignments of the ablation boundaries. Moreover, the heat transfer between ablated and un-ablated tissue should also be considered to make the modeling more accurate. Future work may focus on simulations that integrate dynamic parameters, especially the shift in attenuation, and add more physics nodes for heat transfer involving the blood vascular system. The current sensitivity study only considers the individual parameters. In the future, we may include interactions between parameters in the analysis to obtain a more comprehensive ranking of all possible factors affecting the temperature and damage isodose.
|
http://arxiv.org/abs/2409.03313v1 | 20240905074014 | On the asymptotics of real solutions for the Painlevé I equation | [
"Wen-Gao Long",
"Jun Xia"
] | math.CA | [
"math.CA"
] |
On the asymptotics of real solutions for the Painlevé I equation
Wen-Gao Long^1, Jun Xia^2,[Corresponding author]
================================================================
^1School of Mathematics and Computational Science, Hunan University of Science and Technology,
Xiangtan, 411201, China,
^2School of Mathematics and Systems Science, Guangdong Polytechnic Normal University,
Guangzhou 510665, China,
§ ABSTRACT
In this paper, we revisit the asymptotic formulas of real Painlevé I transcendents as the independent variable tends to negative infinity, which were initially derived by Kapaev with the complex WKB method. Using the Riemann-Hilbert method, we improve the error estimates of the oscillatory type asymptotics and provide precise error estimates of the singular type asymptotics. We also establish the corresponding asymptotics for the associated Hamiltonians of real Painlevé I transcendents. In addition, two typos in the mentioned asymptotic behaviors in literature are corrected.
2010 mathematics subject classification: 33E17; 34A30; 34E05; 34M55; 41A60
Keywords and phrases: The Painlevé I equation, asymptotic expansions, Riemann-Hilbert approach.
§ INTRODUCTION
Painlevé equations are a group of six nonlinear second-order ordinary differential equations
that possess the Painlevé property. This property requires that the movable singularities of the solutions must be poles, not branch points or essential singularities.
In general, the solutions of these equations do not have explicit expressions. Consequently,
asymptotic analysis is a common tool to study the behavior of solutions for Painlevé equations.
In this paper, we are concerned with the Painlevé I equation
y”(x)=6y^2(x)+x,
and we concentrate on the asymptotic behaviors as x→-∞ of real solutions to (<ref>).
Using the method of dominant balance (cf. <cit.>), it is easy to see that there are two classes of behaviors
y(x)∼(-x/6)^1/2, or
y(x)∼ -(-x/6)^1/2,
as x→-∞. For the second case, one can further show that as x→-∞,
y(x)=-(-x/6)^1/2+d/(-x)^1/8cos(4·24^1/4/5(-x)^5/4-5d^2/8ln(-x)+χ)
+O(x^-5/8),
where d and χ are constants; see <cit.>. It should be noted that the asymptotic formula (<ref>) also appears in the NIST handbook of mathematical functions (see <cit.>).
Indeed, Holmes and Spence <cit.> proved that there exist exactly three types
of real solutions by studying
a boundary value problem for (<ref>). Subsequently, with the help of the isomonodromy approach <cit.>, Kapaev <cit.> obtained the following classification of asymptotic behaviors as x→-∞ of real Painlevé I transcendents, in terms of the Stokes multiplier s_2 (see Section <ref> for definition).
* When |s_2|<1, we have
y(x)∼-(-x/6)^1/2+d/(-24x)^1/8cos(4·24^1/4/5(-x)^5/4-5d^2/8ln(-x)+χ),
where
{
d^2 =-1/πln(1-|s_2|^2),
χ = arg s_2-(19/8ln2+5/8ln3)d^2-argΓ(-id^2/2)+3π/4.
.
* When |s_2|=1, we have
y(x)=ŷ(x)+(s_1-s_-1)/(4·24^1/4√(π))·(-x)^-1/8
e^-4·24^1/4/5(-x)^5/4[1+O(x^-5/4)],
where ŷ(x)=(-x/6)^1/2[1+O(x^-5/2)] as x→-∞ is the solution to (<ref>) with the Stokes multipliers s_2=s_-2=i, s_-1=s_1=i/2.
* When |s_2|>1, we have
y(x)∼-(-x/6)^1/2+(-x)^1/2/[(√(6)/3)sin^2(2·24^1/4/5(-x)^5/4
+5ρ/8ln(-x)+σ)],
where
{ρ =1/2πln(|s_2|^2-1),
σ =1/2 arg s_2+(19/8ln2+5/8ln3)ρ
+1/2 argΓ(1/2-iρ)-π/4.
.
The type (A), (B) and (C) solutions are called oscillatory, separatrix and singular solutions, respectively. In <cit.>, with the Riemann-Hilbert approach, Kapaev found all separatrix (also known as tronquée) solutions of (<ref>). Recently, Deaño <cit.> extended Kapaev's results by explicitly calculating the exponentially small corrections to the tronquée solutions. However, to the best of our knowledge, the error estimate in the asymptotic formula (<ref>) and the asymptotics for the Hamiltonians in cases (A) and (C) have not been provided in the existing literature.
In this paper, by performing a Deift-Zhou steepest descent analysis <cit.> to the Riemann-Hilbert problem of (<ref>), we obtain the following asymptotics for the Painlevé I transcendent y(x) and its Hamiltonian ℋ(x) (see (<ref>) for definition).
If |s_2|<1, we have, as x→-∞,
y(x) =-(-x/6)^1/2+√(a)/(-24x)^1/8cos(4·24^1/4/5(-x)^5/4-5a/8ln(-x)+ϕ)
+O(x^-3/4),
ℋ(x) =-4(-x/6)^3/2
+a(-3x/2)^1/4+√(a)/(-24x)^3/8sin(4·24^1/4/5(-x)^5/4-5a/8ln(-x)+ϕ)
+O(x^-1),
where
{
a =-1/πln(1-|s_2|^2),
ϕ = arg s_2-(19/8ln2+5/8ln3)a-argΓ(-ia/2)-π/4.
.
If |s_2|>1, we have, as x→-∞,
y(x) =-(-x/6)^1/2+(-x)^1/2/[(√(6)/3)sin^2(2·24^1/4/5(-x)^5/4
+5b/8ln(-x)+ψ)]+O(x^-3/4),
ℋ(x) =-4(-x/6)^3/2
-b(-24x)^1/4-(-3x/2)^1/4cot(2·24^1/4/5(-x)^5/4+5b/8ln(-x)+ψ)
+O(x^-1),
where the error terms hold uniformly for x bounded away from the singularities appearing in the denominator and
{
b =1/2πln(|s_2|^2-1),
ψ =1/2 arg s_2+(19/8ln2+5/8ln3)b+1/2 argΓ(1/2-ib)+π/4.
.
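For readers who wish to evaluate the theorem numerically, the constants and the leading-order asymptotics can be computed directly from a given Stokes multiplier. The Python sketch below does this with scipy's complex-valued Gamma function; the sample values s_2 = 0.3+0.4i and s_2 = 1.2+0.9i are arbitrary choices satisfying |s_2|<1 and |s_2|>1, respectively.

import numpy as np
from scipy.special import gamma

LN2, LN3 = np.log(2.0), np.log(3.0)

def oscillatory_constants(s2):
    # a and phi for the case |s2| < 1
    a = -np.log(1.0 - abs(s2) ** 2) / np.pi
    phi = (np.angle(s2) - (19.0 / 8.0 * LN2 + 5.0 / 8.0 * LN3) * a
           - np.angle(gamma(-0.5j * a)) - np.pi / 4.0)
    return a, phi

def singular_constants(s2):
    # b and psi for the case |s2| > 1
    b = np.log(abs(s2) ** 2 - 1.0) / (2.0 * np.pi)
    psi = (0.5 * np.angle(s2) + (19.0 / 8.0 * LN2 + 5.0 / 8.0 * LN3) * b
           + 0.5 * np.angle(gamma(0.5 - 1.0j * b)) + np.pi / 4.0)
    return b, psi

def y_oscillatory(x, a, phi):
    # leading terms of the oscillatory asymptotics for large negative x
    arg = 4.0 * 24.0 ** 0.25 / 5.0 * (-x) ** 1.25 - 5.0 * a / 8.0 * np.log(-x) + phi
    return -np.sqrt(-x / 6.0) + np.sqrt(a) / (-24.0 * x) ** 0.125 * np.cos(arg)

a, phi = oscillatory_constants(0.3 + 0.4j)
print(a, phi)
print(singular_constants(1.2 + 0.9j))
print(y_oscillatory(np.array([-20.0, -40.0]), a, phi))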
In the asymptotic formula (<ref>), we improve the error estimate O(x^-5/8) in (<ref>) to O(x^-3/4). In the asymptotic formula (<ref>), we provide a rigorous error estimate O(x^-3/4). Moreover, we find that the angle 3π/4 in (<ref>) and -π/4 in (<ref>) should be corrected to -π/4 and π/4, respectively.
The first author (W.-G. Long) of this paper apologizes for committing the same typos in the Introduction of his other papers <cit.>, even though these typos do not affect the correctness of the main results of those papers. In Figures <ref> and <ref>, we have verified (<ref>) and (<ref>) numerically in special cases.
One should note from <cit.> that the Stokes multipliers of the special Painlevé I transcendent with (y(0),y'(0))=(0,0) are s_k=2icos(2π/5), k=0, ± 1,± 2.
In Appendix <ref>, we calculate the Stokes multipliers of the special solution with (p,H)=(0,0) by using the complex WKB method and find that s_k=-2icos(π/5), k=0, ± 1,± 2, where p is the pole location of the solution and H is a free parameter in the Laurent series
y(x)=1/(x-p)^2-p/10(x-p)^2-1/6(x-p)^3+H(x-p)^4+⋯.
With this result, we successfully verify our asymptotic formula (<ref>) numerically for this special solution.
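A quick consistency check (ours, not taken from <cit.>): for constant Stokes data s_k≡ s, the cyclic constraint 1+s_ks_k+1=-is_k+3 reduces to the single equation 1+s^2+is=0, whose two roots are exactly 2icos(2π/5) and -2icos(π/5), so both candidate values quoted above are admissible monodromy data. The following lines verify this numerically.

import math

def residual(s):
    # constant Stokes data: 1 + s_k s_{k+1} + i s_{k+3} = 1 + s**2 + i*s
    return 1.0 + s ** 2 + 1j * s

for s in (2j * math.cos(2.0 * math.pi / 5.0), -2j * math.cos(math.pi / 5.0)):
    print(s, abs(residual(s)))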
In Figure <ref> and Figure <ref>, we numerically verify the asymptotic behaviors of the Hamiltonian ℋ(x) in (<ref>) and (<ref>), respectively.
The rest of the paper is organized as follows. In Section <ref>, we formulate the Riemann-Hilbert problem of the Painlevé I equation (<ref>). According to the two cases (1) |s_2|<1 and (2) |s_2|>1, we apply the Deift-Zhou approach to analyze the Riemann-Hilbert problem for Ψ(z,x) as x→-∞ in Sections <ref> and <ref>, respectively. In the final Section <ref>, we prove Theorem <ref>. For the convenience of the reader, we collect in the Appendix the parabolic cylinder and Airy parametrix models used in the asymptotic analysis, together with the computation of the Stokes multipliers of a special solution to Painlevé I.
§ RIEMANN-HILBERT PROBLEM
In this section, we describe the Riemann-Hilbert (RH) problem of the Painlevé I equation (<ref>), which was first introduced in <cit.>.
We seek a 2×2 matrix-valued function Ψ(z,x) satisfying the following properties.
* Ψ(z,x) is analytic for z∈ℂ∖Γ, where Γ=∪_k=-2^2Γ_k∪Γ_3 with
Γ_0=ℝ_+, Γ_3=e^π iℝ_+, Γ_±1=e^±2π i/5ℝ_+, Γ_±2=e^±4π i/5ℝ_+,
see Figure <ref> for an illustration.
* Ψ(z,x) satisfies the jump relation
Ψ_+(z,x)=Ψ_-(z,x)S_k, z∈Γ_k,
where the Stokes matrices {S_k} take the forms
S_0=[ 1 0; s_0 1 ], S_1= [ 1 s_1; 0 1 ], S_2=[ 1 0; s_2 1 ],
S_3=[ 0 -i; -i 0 ], S_-1= [ 1 s_-1; 0 1 ], S_-2=[ 1 0; s_-2 1 ].
Here, the Stokes multipliers {s_k} satisfy the restricted conditions
1+s_ks_k+1=-is_k+3, s_k+5=s_k, k∈ℤ.
* Ψ(z;x) satisfies the following asymptotic behavior as z→∞
Ψ(z,x)=z^σ_3/41/√(2)[ 1 1; 1 -1 ][I+∑_k=1^∞Ψ_k(x)z^-k/2]
e^(4/5z^5/2+xz^1/2)σ_3,
where the branches of powers of z are chosen such that arg z∈(-π,π).
It should be mentioned that in the RH analysis throughout, we will use frequently the matrix notations
σ_1=[ 0 1; 1 0 ], σ_2=[ 0 -i; i 0 ], σ_3=[ 1 0; 0 -1 ].
From <cit.>, we obtain the following expressions for the Painlevé I transcendent y(x) and the associated Hamiltonian
ℋ(x)=1/2[y'(x)]^2-2y(x)^3-xy(x).
Let Ψ_k:=Ψ_k(x), k=1,2 be the coefficient in the expansion (<ref>). We have
Ψ_1=[ -ℋ(x) 0; 0 ℋ(x) ], Ψ_2=1/2[ ℋ(x)^2 y(x); y(x) ℋ(x)^2 ].
Hence, we have
y(x) =2Ψ_2,12=2Ψ_2,21,
ℋ(x) =-Ψ_1,11=Ψ_1,22,
where Ψ_k,ij denotes the (i,j) entry of Ψ_k.
By (<ref>), it follows that
1+s_2s_-2=-is_0, s_1=(i-s_-2)/(1+s_2s_-2), s_-1=(i-s_2)/(1+s_2s_-2),
if 1+s_2s_-2≠0. For the real solutions of the Painlevé I equation, we further have (see <cit.>)
s̅_2=-s_-2,
which implies from (<ref>) that
s̅_0=-s_0, s̅_1=-s_-1,
and
-is_0=1+s_2s_-2=1-|s_2|^2=1-|s_-2|^2∈ℝ.
§ ASYMPTOTICS OF Ψ FOR |S_2|<1
In this section, we first perform the Deift-Zhou nonlinear steepest descent analysis to the RH problem for Ψ(z,x) with 0<|s_2|<1. Then, the reduced case s_2=0 will be considered in the end of this section.
§.§ Scaling transformation
Assume in what follows that x<0. We define
A(z)=|x|^-σ_3/8Ψ(|x|^1/2z,x).
Then, A(z) satisfies the following RH problem.
* A(z) is analytic for z∈ℂ∖Γ.
* A(z) satisfies the same jump relation as Ψ(z,x); see (<ref>).
* As z→∞, we have
A(z)=z^σ_3/41/√(2)[ 1 1; 1 -1 ][I+∑_k=1^∞Ψ_k(x)(|x|^1/2z)^-k/2]
e^|x|^5/4θ(z)σ_3,
where
θ(z)=4/5z^5/2-z^1/2.
§.§ The g-function transformation
To normalize the behavior of A(z) at infinity, we introduce the g-function
g(z)=4/5(z-2α)^3/2(z+3α), α:=√(6)/6.
where the branch of (z-2α)^3/2 is taken such that (z-2α)∈(-π,π). It is clear that
g(z)=θ(z)+2α/3z^-1/2+O(z^-3/2), z→∞,
and g(z) has saddle point at z=-α. The sign of g(z) in the complex plane is depicted in Figure <ref>.
The next transformation is defined by
B(z)=[ 1 2α/3|x|^5/4-Ψ_1,11|x|^-1/4; 0 1 ]
A(z)e^-|x|^5/4g(z)σ_3.
We come to the following RH problem for B(z).
* B(z) is analytic for z∈ℂ∖Γ.
* B(z) satisfies the jump relation
B_+(z)=B_-(z){ e^|x|^5/4g(z)σ_3S_ke^-|x|^5/4g(z)σ_3, z∈Γ_k, k=±1,±2,
-iσ_1, z∈Γ_3,
e^|x|^5/4g_-(z)σ_3S_0e^-|x|^5/4g_+(z)σ_3, z∈(0,2α),
e^|x|^5/4g(z)σ_3S_0e^-|x|^5/4g(z)σ_3, z∈(2α,+∞).
.
* As z→∞, we have
B(z)=[I+B_1/z+O(z^-2)]z^σ_3/41/√(2)[ 1 1; 1 -1 ],
where
B_1,21=Ψ_1,11|x|^-1/4-2α/3|x|^5/4,
B_1,22=(Ψ_2,11-Ψ_2,12)|x|^-1/2
-2α/3Ψ_1,11|x|+2α^2/9|x|^5/2.
§.§ Contour deformation
In view of Figure <ref>, we see from (<ref>) that the jumps on Γ_±2 do not tend to the identity matrix as x→-∞. To overcome this, we define the transformation
C(z)=B(z){ e^|x|^5/4g(z)σ_3S_2^-1e^-|x|^5/4g(z)σ_3, z∈Ω_2,
e^|x|^5/4g(z)σ_3S_-2e^-|x|^5/4g(z)σ_3, z∈Ω_-2,
I, elsewhere.
.
Thus, we have the following RH problem for C(z).
* C(z) is analytic for z∈ℂ∖Γ_C, where Γ_C is shown in Figure <ref>.
* C(z) satisfies the jump relation
C_+(z)=C_-(z){ e^|x|^5/4g(z)σ_3S_ke^-|x|^5/4g(z)σ_3, z∈Γ_k, k=±1,
e^|x|^5/4g(z)σ_3S_ke^-|x|^5/4g(z)σ_3, z∈Γ_k, k=±2,
-iσ_1, z∈(-∞,-α),
e^|x|^5/4g_-(z)σ_3S_2(-iσ_1)S_-2
e^-|x|^5/4g_+(z)σ_3, z∈(-α,0),
e^|x|^5/4g_-(z)σ_3S_0e^-|x|^5/4g_+(z)σ_3, z∈(0,2α),
e^|x|^5/4g(z)σ_3S_0e^-|x|^5/4g(z)σ_3, z∈(2α,+∞).
.
* As z→∞, we have
C(z)=[I+B_1/z+O(z^-2)]z^σ_3/41/√(2)[ 1 1; 1 -1 ],
where B_1 is given in (<ref>).
On the other hand, since g_±(z) is purely imaginary on (-∞,2α), the diagonal entries of the jumps on (-α,2α) are highly oscillatory for large |x|. To turn the oscillations into exponential decays, we open a lens around the segment (-α,2α) according to the sign of Re g(z) depicted in Figure <ref>. This process depends on the following matrix decompositions:
[S_2(-iσ_1)S_-2]^-1=[ 1 -is_2s_0^-1; 0 1 ][ 0 -s_0^-1; s_0 0 ][ 1 -is_-2s_0^-1; 0 1 ],
S_0=[ 1 s_0^-1; 0 1 ][ 0 -s_0^-1; s_0 0 ][ 1 s_0^-1; 0 1 ].
Owing to (<ref>) and (<ref>), we define
D(z)=C(z){ [ 1 -s_0^-1e^2|x|^5/4g(z); 0 1 ], z∈ℒ_1+,
[ 1 s_0^-1e^2|x|^5/4g(z); 0 1 ], z∈ℒ_1-,
[ 1 is_-2s_0^-1e^2|x|^5/4g(z); 0 1 ], z∈ℒ_2+,
[ 1 -is_2s_0^-1e^2|x|^5/4g(z); 0 1 ], z∈ℒ_2-,
.
where the regions ℒ_1±, ℒ_2± are depicted in Figure <ref>. Therefore, we arrive at the following RH problem for D(z).
* D(z) is analytic for z∈ℂ∖Γ_D, where Γ_D is shown in Figure <ref>.
* D(z) satisfies the jump relations
D_+(z)=D_-(z){ [ 1 -is_∓2s_0^-1e^2|x|^5/4g(z); 0 1 ], z∈Γ_±1,
[ 1 0; s_±2e^-2|x|^5/4g(z) 1 ], z∈Γ_±2,
[ 1 -s_0^-1e^2|x|^5/4g(z); 0 1 ], z∈Γ^+_1∪Γ^-_1,
[ 1 -is_∓2s_0^-1e^2|x|^5/4g(z); 0 1 ], z∈Γ^±_2,
.
and
D_+(z)=D_-(z){ [ 0 -i; -i 0 ], z∈(-∞,-α),
[ 0 -s_0^-1; s_0 0 ], z∈(-α,2α),
[ 1 0; s_0e^-2|x|^5/4g(z) 1 ], z∈(2α,+∞).
.
* As z→∞, we have
D(z)=[I+B_1/z+O(z^-2)]z^σ_3/41/√(2)[ 1 1; 1 -1 ],
where B_1 is given in (<ref>).
Again, it is readily seen from the sign of Re g(z) shown in Figure <ref> that, as x→-∞, the jumps of D(z) tend to the identity matrix except on (-∞,2α).
§.§ Global parametrix
In this section, we construct the global parametrix. By (<ref>) and (<ref>), we consider the following model RH problem for a function N(z).
* N(z) is analytic for z∈ℂ∖(-∞,2α), where (-∞,2α) is oriented from left to right.
* N(z) satisfies the jump relation
N_+(z)=N_-(z){ [ 0 i; i 0 ], z∈(-∞,-α),
[ 0 -s_0^-1; s_0 0 ], z∈(-α,2α).
.
* As z→∞, we have
N(z)=[I+O(z^-1)]z^σ_3/41/√(2)[ 1 1; 1 -1 ].
It is straightforward to verify that
N(z)=[ 1 2√(3α) iν; 0 1 ]
(z-2α)^σ_3/41/√(2)[ 1 1; 1 -1 ]d(z)^σ_3,
where d(z) is the Szegő function
d(z) =exp(-ln(-is_0)/2π√(z-2α)∫^2α_-αdξ/√(2α-ξ) (ξ-z))
=(i√(z-2α)+√(3α)/i√(z-2α)-√(3α))^ν
with
ν=-1/2π iln(-is_0)=-1/2π iln(1-|s_2|^2)∈ iℝ.
The branches of the multi-valued functions in (<ref>) and (<ref>) are fixed by the conditions
arg(z-2α)∈(-π,π), arg((i√(z-2α)+√(3α))/(i√(z-2α)-√(3α)))∈(-π,π).
A direct calculation gives
N(z)=[I+N_1/z+O(z^-2)]
z^σ_3/41/√(2)[ 1 1; 1 -1 ], z→∞,
where
N_1=[ 6αν^2-α/2 2√(3)α^3/2iν(1-4ν^2); -2√(3α) iν α/2-6αν^2 ].
§.§ Local parametrices
In this section, we construct relevant local parametrices in the neighborhood U(z_0)={z∈ℂ:|z-z_0|<δ} of the special point z_0∈{-α,0,2α}.
Firstly, we focus on the saddle point z_0=-α. In view of the RH problem for D(z), we consider the following local RH problem.
* P^(-)(z) is analytic for z∈ U(-α)∖Γ_D.
* P^(-)(z) satisfies the same jump relation as D(z) on Γ_D∩ U(-α).
* For z∈∂ U(-α), we have
P^(-)(z)N(z)^-1=I+O(|x|^-5/8), x→-∞.
Let us define a conformal mapping
λ_1(z)=2{ √(g_+(-α)-g(z)), Im z>0,
√(g_+(-α)+g(z)), Im z<0,
.
where the branch of the square roots are chosen by the condition
λ_1(z)=e^-π i/42(3α)^1/4(z+α)[1-z+α/18α
+O((z+α)^2)], z→-α,
and
g_+(-α)=-4/5√(3α) i.
Then, we construct P^(-)(z) as
P^(-)(z)=E^(-)(z)Φ^(PC)(|x|^5/8λ_1(z))
s_2^-σ_3/2h_1^σ_3/2M(z)e^-|x|^5/4g(z)σ_3,
where Φ^(PC) is the parabolic cylinder parametrix model with the parameter ν given in (<ref>) (see Appendix <ref>), h_1 is defined in (<ref>) and
E^(-)(z) =N(z)M(z)^-1h_1^-σ_3/2s_2^σ_3/2
e^-|x|^5/4g_+(-α)σ_3(|x|^5/8λ_1(z))^νσ_32^-σ_3/2[ λ_1(z) 1; 1 0 ]
with
M(z)={ iσ_1, Im z>0,
I, Im z<0..
With (<ref>) and (<ref>), we check directly that P^(-)(z) fulfills the same jump relation as D(z) on Γ_D∩ U(-α) and E^(-)(z) is analytic in U(-α). In addition, we obtain from (<ref>), (<ref>) and (<ref>) that as x→-∞,
P^(-)(z)N(z)^-1 =N(z)M(z)^-1[I+L(z)/|x|^5/8
+O(|x|^-5/4)]M(z)N(z)^-1
=I+O(|x|^-5/8),
where
L(z)=[ 0 ν s_2|x|^5/4νλ_1(z)^2ν-1/h_1e^2|x|^5/4g_+(-α); h_1e^2|x|^5/4g_+(-α)/s_2|x|^5/4νλ_1(z)^2ν+1 0 ].
Notice that we have made use of the fact that ν and g_+(-α) are purely imaginary (see (<ref>) and (<ref>)).
For the local parametrix near the origin z_0=0, we consider the following local RH problem for a function P^(0)(z).
* P^(0)(z) is analytic for z∈ U(0)∖Γ_D.
* P^(0)(z) satisfies the same jump relation as D(z) on Γ_D∩ U(0).
* For z∈∂ U(0), we have
P^(0)(z)N(z)^-1=I+O(|x|^-∞), x→-∞.
The solution to the above problem is elementary. First, we note from the real solution condition (<ref>) that
s_2=re^iβ, s_-2=-re^-iβ,
where r:=|s_2|, β:=arg s_2. Define
F(λ)=e^λσ_3{ I, arg λ∈(π/4,3π/4)
∪(-3π/4,-π/4),
[ 1 ir; 0 1 ], arg λ∈(3π/4,5π/4),
[ 1 0; -ir 1 ], arg λ∈(-π/4,π/4).
.
Then, we see that F(λ) solves the following RH problem.
* F(λ) is analytic for λ∈ℂ∖{λ: arg λ=±π/4,±3π/4}, where the rays are oriented from 0 to ∞.
* F(λ) satisfies the jump relation
F_+(λ)=F_-(λ){ [ 1 0; ir 1 ], arg λ=π/4,
[ 1 ir; 0 1 ], arg λ=3π/4,
[ 1 -ir; 0 1 ], arg λ=-3π/4,
[ 1 0; -ir 1 ], arg λ=-π/4.
.
* As λ→∞, we have
F(λ)=[I+O(λ^-∞)]e^λσ_3.
Introduce the conformal mapping
λ_2(z) =g_-(0)± g(z), ±Im z>0,
=e^π i/22√(2α)z(1+o(1)), z→0.
We construct P^(0)(z) as
P^(0)(z)=E^(0)(z)F(|x|^5/4λ_2(z))
e^iβσ_3/2Q(z)
s_0^σ_3/2e^-|x|^5/4g(z)σ_3,
where
E^(0)(z)=N(z)s_0^-σ_3/2Q(z)^-1e^-iβσ_3/2
e^-|x|^5/4g_-(0)σ_3
and
Q(z)={ I, Im z>0,
[ 0 1; -1 0 ], Im z<0.
.
Using (<ref>), (<ref>) and (<ref>), it is direct to see from (<ref>)-(<ref>) that P^(0)(z) satisfies the same jump relation as D(z) on Γ_D∩ U(0) and E^(0)(z) is analytic in U(0). Moreover, it follows from (<ref>) and the fact that g_-(0) is purely imaginary that the matching condition (<ref>) is also satisfied.
Finally, we concentrate on the branch point z_0=2α and we consider the following local RH problem for some function P^(+)(z).
* P^(+)(z) is analytic for z∈ U(2α)∖Γ_D.
* P^(+)(z) satisfies the same jump relation as D(z) on Γ_D∩ U(2α).
* For z∈∂ U(2α), we have
P^(+)(z)N(z)^-1=I+O(|x|^-5/4), x→-∞.
As usual, in the neighborhood U(2α), we define a conformal mapping
λ_3(z)=(3/2 g(z))^2/3=(6α)^2/3(z-2α)(1+o(1)), z→2α.
Then the desired parametrix near z_0=2α is defined by
P^(+)(z)=E^(+)(z)Φ^(Ai)(|x|^5/6λ_3(z))σ_1
s_0^σ_3/2e^-|x|^5/4g(z)σ_3,
where Φ^(Ai) is the Airy parametrix model (see Appendix <ref>) and
E^(+)(z)=N(z)s_0^-σ_3/2σ_11/√(2)[ 1 -i; -i 1 ](|x|^5/6λ_3(z))^σ_3/4.
§.§ Small norm problem
Using the model functions N(z) given in (<ref>), P^(-)(z) given in (<ref>), P^(0)(z) given in (<ref>) and P^(+)(z) given in (<ref>), we define the final transformation as
R(z)=D(z){ P^(-)(z)^-1, z∈ U(-α),
P^(0)(z)^-1, z∈ U(0),
P^(+)(z)^-1, z∈ U(2α),
N(z)^-1, elsewhere.
.
As a result, we have the following RH problem for R(z).
* R(z) is analytic for z∈ℂ∖Γ_R, where Γ_R is illustrated in Figure <ref>.
* R(z) satisfies the jump relation R_+(z)=R_-(z)J_R(z)
J_R(z)={ P^(-)(z)N(z)^-1, z∈∂ U(-α),
P^(0)(z)N(z)^-1, z∈∂ U(0),
P^(+)(z)N(z)^-1, z∈∂ U(2α),
N(z)J_D(z)N(z)^-1, z∈Γ_R∖(∂ U(-α)∪∂ U(0)∪∂ U(2α)),
.
where J_D(z) denotes the relevant matrix in (<ref>) and (<ref>).
* As z→∞, we have
R(z)=I+R_1/z+O(z^-2).
From the matching conditions (<ref>), (<ref>) and (<ref>), we see from (<ref>) that the jump matrix J_R(z) satisfies the estimate
J_R(z)=I+O(|x|^-5/8), x→-∞.
With the standard small norm theory <cit.>, we have
R(z)=I+O(|x|^-5/8), x→-∞,
uniformly for z∈ℂ∖Γ_R.
§.§ The reduced case |s_2|=0
In the reduced case |s_2|=0, we obtain from (<ref>) that s_0=i. One sees from the RH problem for D(z) in Section <ref> that there are no jumps on Γ_±2∪Γ_2^±∪Γ_±1. The subsequent asymptotic analysis is almost the same as in the case 0<|s_2|<1.
First of all, we solve a RH problem with jump matrix iσ_1 on (-∞,2α). The solution to this RH problem can be constructed as (<ref>) by taking ν=0 therein. Next, we need to build a local parametrix near z=2α. The desired parametrix is defined as (<ref>) by taking |s_2|=0 therein. Similar to (<ref>), by defining R(z) as the ratio of the RH problem for D(z) and the parametrices, it is readily seen that R(z) satisfies the estimation R(z)=I+O(|x|^-5/4) as x→-∞.
§ ASYMPTOTICS OF Ψ FOR |S_2|>1
In this section, we will perform the Deift-Zhou nonlinear steepest descent analysis of the RH problem for Ψ(z,x) with |s_2|>1. As in the case 0<|s_2|<1, it consists of a series of invertible transformations, and the first four transformations Ψ(z,x)→A(z)→B(z)→C(z)→D(z) are exactly the same as those defined in (<ref>), (<ref>), (<ref>) and (<ref>), respectively. The main differences lie in the constructions of the global and local parametrices. Indeed, by ignoring the exponentially small terms of J_D(z), we are led to the same RH problem for N(z). Since |s_2|>1, a solution is given by (<ref>) but with ν therein replaced by
ν=-1/2-1/2πiln(|s_2|^2-1)
:=-1/2+ν_0.
It is obvious that ν_0∈iℝ. If we continue to construct the local parametrix P^(-) as in (<ref>) and (<ref>), it turns out that P^(-)(z)N(z)^-1 does not tend to the identity matrix as x→-∞ for z∈∂ U(-α); see (<ref>) with ν given by (<ref>). This implies that the matching condition (<ref>) is no longer valid. To overcome this, our idea is to make some modifications of the global and local parametrices.
§.§ Global parametrix
We modify the global parametrix by imposing specific singularity at z_0=-α. More precisely, we consider the following RH problem.
* N(z) is analytic for z∈ℂ∖(-∞,2α), where (-∞,2α) is oriented from left to right.
* N(z) satisfies the same jump relation as N(z); see (<ref>).
* As z→∞, we have
N(z)=[I+O(z^-1)]z^σ_3/41/√(2)[ 1 1; 1 -1 ].
* As z→-α, N(z) behaves like
N(z)=O((z+α)^-3/2).
We look for a solution in the form
N(z)=(X/(z+α)+I)N(z), det(X/(z+α)+I)=1,
where N(z) is given via (<ref>) and (<ref>). The z-independent matrix X is to be determined. It is easy to see that
N(z)=[I+X+N_1/z+O(z^-2)]
z^σ_3/41/√(2)[ 1 1; 1 -1 ], z→∞,
where N_1 is given in (<ref>).
§.§ Local parametrices
Near the saddle point z_0=-α, we instead consider the following local RH problem.
* P^(-)(z) is analytic for z∈ U(-α)∖Γ_D.
* P^(-)(z) satisfies the same jump relation as D(z) on Γ_D∩ U(-α).
* For z∈∂ U(-α), we have
P^(-)(z)N(z)^-1=I+O(|x|^-5/4), x→-∞.
We define the local parametrix P^(-)(z) as
P^(-)(z)=E^(-)(z)
Φ^(PC)(|x|^5/8λ_1(z))
s_2^-σ_3/2h_1^σ_3/2M(z)e^-|x|^5/4g(z)σ_3,
where Φ^(PC) is the parabolic cylinder parametrix model with the parameter ν given in (<ref>) (see Appendix <ref>), λ_1(z) is the conformal mapping (<ref>) and
E^(-)(z)=N(z)M(z)^-1h_1^-σ_3/2 s_2^σ_3/2e^-|x|^5/4g_+(-α)σ_3(|x|^5/8λ_1(z))^νσ_3
×[ 1 0; -1/|x|^5/8λ_1(z) 1 ]2^-σ_3/2[ λ_1(z) 1; 1 0 ].
Using the jumps (<ref>), (<ref>) and (<ref>), we see that P^(-)(z) has the same jump matrices as D(z) on Γ_D∩ U(-α) and E^(-)(z) is analytic in U(-α)∖{-α}. In addition, with the asymptotic behavior (<ref>), it is easily seen that the matching condition (<ref>) is fulfilled. To make sure that E^(-)(z) is also analytic at the isolated point z=-α, we have to make an appropriate choice of X in (<ref>). Indeed, by calculating the Laurent expansion at z=-α of E^(-)(z) using (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we have
E^(-)(z)=(X/z+α+I)
(H/z+α+I)Y(z),
where Y(z) is analytic near z_0=-α and
H=6α c/1-c[ 2ν_0 -4√(3α)iν_0^2; -i/√(3α) -2ν_0 ],
with
c=c(x)=h_1e^2|x|^5/4g_+(-α)/s_2|x|^5/4ν_0e^3π iν_0/22^6ν_0(3α)^5ν_0/2.
Since H^2=0, it follows from (<ref>) that the X is determined by
X=-H=6α c/1-c[ -2ν_0 4√(3α)iν_0^2; i/√(3α) 2ν_0 ].
With this choice of X, it is clear that det(X/(z+α)+I)=1. It should be mentioned that we assume in (<ref>) and (<ref>) that x does not belong to the zero set of the function 1-c(x) (see (<ref>) below).
Since X/z+α+I is analytic at z=0,2α, the parametrices near these points can be just defined by replacing N(z) by N(z) in (<ref>), (<ref>) and (<ref>) and (<ref>).
§.§ Small norm problem
With the modified model function N(z) in (<ref>), P^(-)(z) in (<ref>), P^(0)(z) in (<ref>) and P^(+)(z) in (<ref>), the final transformation is defined by
R(z)=D(z){ P^(-)(z)^-1, z∈ U(-α),
P^(0)(z)^-1, z∈ U(0),
P^(+)(z)^-1, z∈ U(2α),
N(z)^-1, elsewhere.
.
Therefore, we get the following RH problem for R(z).
* R(z) is analytic for z∈ℂ∖Γ_R, where Γ_R is shown in Figure <ref>.
* R(z) satisfies the jump relation R_+(z)=R_-(z)J_R(z)
J_R(z)={ P^(-)(z)N(z)^-1, z∈∂ U(-α),
P^(0)(z)N(z)^-1, z∈∂ U(0),
P^(+)(z)N(z)^-1, z∈∂ U(2α),
N(z)J_D(z)N(z)^-1, z∈Γ_R∖(∂ U(-α)∪∂ U(0)∪∂ U(2α)),
.
where J_D(z) denotes the jump matrix in (<ref>) and (<ref>).
* As z→∞, we have
R(z)=I+R_1/z+O(z^-2).
It is readily seen from (<ref>), (<ref>) and (<ref>) that
J_R(z)=I+O(|x|^-5/4), x→-∞.
Thus, we have that for z∈ℂ∖Γ_R,
R(z)=I+O(|x|^-5/4), x→-∞.
§ PROOF OF THEOREM <REF>
In this section, we prove Theorem <ref>. The proof depends on the formulas (<ref>) and the asymptotic analysis of the RH problem for Ψ(z,x) given in the previous two sections. Below, it should be understood that |s_2|<1 in Section <ref> and |s_2|>1 in Section <ref>.
§.§ Derivation of (<ref>) and (<ref>)
By tracing back the transformations Ψ(z,x)→ A(z)→ B(z)→ C(z)→ D(z)→ R(z) as given in (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we obtain that for z∈ℂ∖Γ_R,
D(z)=R(z)N(z).
Taking the limit z→∞ in (<ref>), it follows from the expansions (<ref>), (<ref>) and (<ref>) that
B_1,21 =R_1,21+N_1,21,
B_1,22 =R_1,22+N_1,22.
In views of (<ref>)-(<ref>), we obtain from (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) that
y(x) =-(α+4√(3α) iν R_1,21+2R_1,22-R_1,21^2)|x|^1/2,
ℋ(x) =-2α/3|x|^3/2
+2√(3α) iν|x|^1/4-R_1,21|x|^1/4.
Next, we compute R_1. Since the RH problem for R(z) is equivalent to the following singular integral equation
R(z)=I+1/2π i∫_Γ_RR_-(ξ)(J_R(ξ)-I)dξ/ξ-z,
we get from (<ref>), (<ref>) and (<ref>) that as x→-∞,
R_1 =-1/2π i∫_Γ_RR_-(ξ)(J_R(ξ)-I)dξ
=-1/2π i∫_∂ U(-α)(J_R(ξ)-I)dξ+O(|x|^-5/4)
=-|x|^-5/8/2π i∫_∂ U(-α)N(ξ)M(ξ)L(ξ)M(ξ)^-1N(ξ)^-1dξ
+O(|x|^-5/4).
Via a direct residue calculation using (<ref>), (<ref>), (<ref>) and (<ref>), we obtain that as x→-∞,
R_1,21 =i/2√(3α)(A-B)|x|^-5/8+O(|x|^-5/4),
R_1,22 =[-1/2(A+B)+ν(A-B)]|x|^-5/8+O(|x|^-5/4),
where
A=ν s_2|x|^5/4νe^3π iν/22^6ν(3α)^5ν/2/h_1e^2|x|^5/4g_+(-α)
e^-π i/42(3α)^1/4,
B=h_1e^2|x|^5/4g_+(-α)/s_2|x|^5/4νe^3π iν/22^6ν(3α)^5ν/2e^-π i/42(3α)^1/4.
Inserting the asymptotics in (<ref>), (<ref>) into (<ref>) and (<ref>) gives
y(x) =-α|x|^1/2+(A+B)|x|^-1/8+O(|x|^-3/4),
ℋ(x) =-2α/3|x|^3/2
+2√(3α) iν|x|^1/4-i(A-B)/2√(3α)|x|^-3/8
+O(|x|^-1),
as x→-∞.
Using (<ref>) and (<ref>), we immediately have
A± B=νΓ(-ν) s_2|x|^5/4νe^π iν/22^6ν(3α)^5ν/2/√(2π)e^2|x|^5/4g_+(-α)
e^-π i/42(3α)^1/4±√(2π)h_1e^2|x|^5/4g_+(-α)/Γ(-ν)s_2|x|^5/4νe^π iν/22^6ν(3α)^5ν/2e^-π i/42(3α)^1/4.
With (<ref>), (<ref>), the fact that Γ(-ν) is the complex conjugate of Γ(ν) (since ν is purely imaginary) and the reflection formula of the Gamma function
-νΓ(-ν)Γ(ν)=π/sinπν=2π i/e^π iν|s_2|^2,
we find that
A+B =√(2π)/(3α)^1/4|s_2||Γ(ν)|e^π iν/2cosφ(x),
A-B =i√(2π)/(3α)^1/4|s_2||Γ(ν)|e^π iν/2sinφ(x),
where
φ(x)=2ig_+(-α)|x|^5/4-iνln[2^6(3α)^5/2|x|^5/4]-argΓ(ν)+arg s_2-π/4.
On the other hand, it follows from (<ref>) that
|Γ(ν)|=√(2π)/(√(iν) e^π iν/2|s_2|).
Substituting (<ref>)-(<ref>) into (<ref>) and (<ref>) and then recalling (<ref>), (<ref>) and (<ref>), we arrive at (<ref>) and (<ref>) for the case 0<|s_2|<1.
In the reduced case |s_2|=0, by applying the asymptotic analysis of the RH problem for Ψ discussed in Section <ref>, we obtain the following asymptotic formula
y(x) =-(-x/6)^1/2+O(x^-3/4),
ℋ(x) =-4(-x/6)^3/2+O(x^-1),
as x→-∞. We notice from (<ref>) that ν=0 when |s_2|=0. Thus, (<ref>) and (<ref>) can be viewed respectively as the limits of (<ref>) and (<ref>) as a→0.
Note that both the parabolic cylinder parametrix and Airy parametrix have a full asymptotic expansion as their variables tend to infinity. Hence, the jump matrices of R(z) on ∂ U(-α) and ∂ U(2α) can also be fully asymptotically expanded as |x|→+∞. This indicates that we can theoretically obtain the full asymptotic expansion of the real solutions of Painlevé I equation. We conjecture that (<ref>) can be improved as
y(x)∼√(-x/6)∑_k=0^∞a_k/(-x)^5k/4
+√(a)/(-24x)^1/8∑_n=1^∞cos(nψ(x))
∑_k=0^∞b_k^(n)/(-x)^5k/8,
where ψ(x)=4·24^1/4/5(-x)^5/4-5a/8ln(-x)+ϕ and a_k, b_k^(n) are constants. One may try to determine the coefficients a_k and b_k^(n) by following the way above in this section, while we think it is better to obtain these coefficients by substituting (<ref>) into the Painlevé I equation (<ref>) directly.
§.§ Derivation of (<ref>) and (<ref>)
Inverting the transformations Ψ(z,x)→ A(z)→ B(z)→ C(z)→ D(z)→R(z) as given in (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we now get that for z∈ℂ∖Γ_R,
D(z)=R(z)N(z).
Letting z→∞ in (<ref>), we obtain from the expansions (<ref>), (<ref>) and (<ref>) that
B_1,21 =R_1,21+N_1,21+X_21,
B_1,22 =R_1,22+N_1,22+X_22.
Using (<ref>)-(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we have
y(x) =-α|x|^1/2-12α c/(1-c)^2|x|^1/2-2R_1,22|x|^1/2-R_1,21^2|x|^1/2
-4√(3α) iν_0 R_1,21|x|^1/2
+2√(3α)iR_1,211-c/1+c|x|^1/2,
ℋ(x) =-2α/3|x|^3/2
+2√(3α) iν_0|x|^1/4-i√(3α)(1+c)/1-c|x|^1/4
-R_1,21|x|^1/4.
For the asymptotics of R_1, we use the singular integral equation for R(z) and the estimations (<ref>), (<ref>) to deduce that
as x→-∞,
R_1=-1/2π i∫_Γ_RR_-(ξ)
(J_R(ξ)-I)dξ=O(|x|^-5/4).
On the other hand, it follows from (<ref>) and (<ref>) that
c=√(2π)e^-1/2π ie^2|x|^5/4g_+(-α)/Γ(1/2-ν_0)s_2|x|^5/4ν_0e^π iν_0/22^6ν_0(3α)^5ν_0/2.
Since
|Γ(1/2-ν_0)|^2=Γ(1/2-ν_0)Γ(1/2+ν_0)
=π/sinπ(1/2-ν_0)=2π/e^π iν_0|s_2|^2,
we have
c=exp{i(-2ig_+(-α)|x|^5/4+5/4iν_0ln|x|+iν_0ln[2^6(3α)^5/2]-arg s_2-argΓ(1/2-ν_0)-π/2)}.
Hence, by (<ref>),
c/(1-c)^2=-1/(4sin^2ω(x)), (1+c)/(1-c)=-icotω(x),
where
ω(x)=ig_+(-α)|x|^5/4-5/8iν_0ln|x|-1/2iν_0ln[2^6(3α)^5/2]+1/2arg s_2+1/2argΓ(1/2-ν_0)+π/4.
Plugging the asymptotics (<ref>), the equations (<ref>) and (<ref>) into (<ref>), (<ref>) and then recalling (<ref>), (<ref>) and (<ref>), we obtain (<ref>) and (<ref>).
§ ACKNOWLEDGEMENTS
The work of Jun Xia was supported in part by the National Natural Science Foundation of China [Grant no. 12401322] and the Guangdong Basic and Applied Basic Research Foundation (Grant no. 2024A1515012985).
The work of Wen-Gao Long was supported in part by the National Natural Science Foundation of China
[Grant no. 12401094], the Natural Science Foundation of Hunan Province [Grant no. 2024JJ5131] and the Outstanding Youth Fund of Hunan Provincial Department of Education [Grant no. 23B0454].
§ MODEL PARAMETRICES
§.§ Parabolic cylinder parametrix
Let
𝐏(λ)=2^-σ_3/2[ D_-ν-1(iλ) D_ν(λ); D_-ν-1'(iλ) D_ν'(λ) ][ e^i π/2(ν+1) 0; 0 1 ],
where D_ν(λ) is parabolic cylinder function (see <cit.>).
Define
H_0=[ 1 0; h_0 1 ], H_1=[ 1 h_1; 0 1 ], H_n+2=e^i π(ν+1/2) σ_3 H_n e^-i π(ν+1/2) σ_3, n=0,1,
where
h_0=-i √(2 π)/Γ(ν+1), h_1=√(2 π)/Γ(-ν) e^i πν, 1+h_0 h_1=e^2 π i ν.
We introduce
Φ^(PC)(λ)={ 𝐏(λ), arg λ∈(-π/4, 0),
𝐏(λ)H_0, arg λ∈(0,π/2),
𝐏(λ)H_0H_1, arg λ∈(π/2, π),
𝐏(λ)H_0H_1H_2, arg λ∈(π,3π/2),
𝐏(λ)H_0H_1H_2H_3, arg λ∈(3π/2,7π/4).
.
Thus, Φ^(PC) solves the following RH problem (cf. <cit.>).
* Φ^(PC) is analytic for λ∈ℂ∖⋃^5_k=1Σ_k, where Σ_k={λ∈ℂ: arg λ=(k-1)π/2}, k=1,2,3,4, and Σ_5={λ∈ℂ: arg λ=-π/4}; see Figure <ref>.
* Φ^(PC) satisfies the jump conditions
Φ^(PC)_+(λ)=Φ^(PC)_-(λ){ H_0, λ∈Σ_1,
H_1, λ∈Σ_2,
H_2, λ∈Σ_3,
H_3, λ∈Σ_4,
e^2π iνσ_3, λ∈Σ_5.
.
* As λ→∞, we have
Φ^(PC)(λ)=[ 0 1; 1 -λ ]2^σ_3/2[I+∑_k=1^∞Φ^(PC)_kλ^-k]
e^(λ^2/4-νlnλ) σ_3,
where Φ^(PC)_2k-1 is off-diagonal and Φ^(PC)_2k is diagonal,
Φ^(PC)_1=[ 0 ν; 1 0 ], Φ^(PC)_2=[ ν(ν+1)/2 0; 0 -ν(ν-1)/2 ].
§.§ Airy parametrix
Let φ=e^2π i/3. We consider
Φ^(Ai)(λ)=Υ{ [ Ai(λ) Ai(φ^2λ); Ai^'(λ) φ^2Ai^'(φ^2λ) ] e^-i π/6σ_3, λ∈I,
[ Ai(λ) Ai(φ^2λ); Ai^'(λ) φ^2Ai^'(φ^2λ) ] e^-i π/6σ_3[ 1 0; -1 1 ], λ∈II,
[ Ai(λ) -φ^2Ai(φλ); Ai^'(λ) -Ai^'(φλ) ] e^-i π/6σ_3[ 1 0; 1 1 ], λ∈III,
[ Ai(λ) -φ^2Ai(φλ); Ai^'(λ) -Ai^'(φλ) ] e^-i π/6σ_3, λ∈IV,
.
where Ai(λ) is the Airy function (cf. <cit.>) and
Υ=√(2π)[ e^1/6π i 0; 0 e^-1/3π i ].
It is direct to see that Φ^(Ai) satisfies the following RH problem (cf. <cit.>).
* Φ^(Ai) is analytic for λ∈ℂ∖⋃^4_k=1Σ_k, where Σ_k is depicted in Figure <ref>.
* Φ^(Ai) satisfies the jump conditions
Φ^(Ai)_+(λ)=Φ^(Ai)_-(λ){ [ 1 1; 0 1 ], λ∈Σ_1,
[ 1 0; 1 1 ], λ∈Σ_2∪Σ_4,
[ 0 1; -1 0 ], λ∈Σ_3.
.
* Φ^(Ai) satisfies the asymptotic behavior
Φ^(Ai)(λ)=λ^-σ_3/41/√(2)[ 1 i; i 1 ][I+O(λ^-3/2)] e^-2/3λ^3/2σ_3, λ→∞.
§ STOKES MULTIPLIERS OF A SPECIAL SOLUTION OF PAINLEVÉ I
According to <cit.> (see also <cit.>), Ψ(z,x) satisfies the following differential equation
∂Ψ/∂ z
=[ y_x 2z^2+2yz+x+2y^2; 2(z-y) -y_x ]Ψ.
The only singularity of the above equation is the irregular singular point at z=∞.
Following <cit.> (see also <cit.>),
there exist the canonical solutions Ψ_k(z,x), k∈ℤ, of (<ref>) with the asymptotic expansion
Ψ^(k)(z,x)
=z^1/4σ_3(σ_3+σ_1)/√(2)[I-ℋ(x)σ_3/√(z)
+O(1/z)]
e^(4/5z^5/2+xz^1/2)σ_3
as z→∞ with z∈Ω_k, uniformly for all x bounded away from the poles of the Painlevé I transcendents, where ℋ(x) is introduced in (<ref>), and the canonical sectors are
Ω_k={z∈ℂ: arg z∈(-3π/5+2kπ/5,π/5+2kπ/5)}, k∈ℤ.
These canonical solutions are connected by
Ψ^(k+1)(z,x)=Ψ^(k)(z,x)S_k, S_2k-1=[ 1 s_2k-1; 0 1 ], S_2k=[ 1 0; s_2k 1 ].
By <cit.>, we can proceed further as follows.
Define
Ψ̂(z,x)=[ 0 1; 1 -1/2(-y_x+1/2(z-y)) ]
(z-y)^σ_3/2Ψ(z,x)
Then Ψ̂(z,x) satisfies
∂/∂ zΨ̂(z,x)=[ 0 2; V(z,x) 0 ]Ψ̂(z,x),
where
2V(z,x)=y_x^2+4z^3+2zx-2yx-4y^3-y_x/(z-y)+(3/4)/(z-y)^2.
Moreover, using the expansion of Ψ(z,x) in (<ref>), one can verify that
Ψ̂^(k)(z,x)
=z^-3/4σ_3/√(2)[ 1 -1; 1 1 ](I+O(z^-1/2))
e^(4/5z^5/2+xz^1/2)σ_3
as z→∞ with z∈Ω_k.
The asymptotic expansions of Ψ^(k)(z,x) and Ψ̂^(k)(z,x) in (<ref>) and (<ref>) are valid
only when x is bounded away from p.
When x→ p, according to <cit.>,
the system (<ref>) reduces to
∂/∂ zΨ̂(z,p)
=[ 0 2; 2z^3+pz-14H 0 ]Ψ̂(z,p),
and the corresponding asymptotic expansions of Ψ̂_k(z,p) in (<ref>) should be replaced by
Ψ̂^(k)(z,p)
=z^-3/4σ_3/√(2)[ -i -i; -i i ](I+O(z^-1/2))
e^(4/5z^5/2+pz^1/2)σ_3
as z→∞ with z∈Ω_k; see <cit.>. It is readily seen that Ψ̂^(k) also satisfy the connection formula
Ψ̂^(k+1)(z,p)=Ψ̂^(k)(z,p)S_k, k∈ℤ.
Finally, if denoting
Ψ̂(z,p)=[ ϕ_1; ϕ_2 ],
and letting Y(z;p)=ϕ_1, then we arrive at the following reduced triconfluent Heun equation (RTHE); see <cit.>
d^2Y/d z^2=[4z^3+2pz-28H]Y.
When (p,H)=(0,0), the equation (<ref>) can be explicitly solved in terms of Bessel functions. For any solution of (<ref>), there exist two constants C_1 and C_2 such that
Y(z)=C_1z^1/2I_1/5(4/5z^5/2)
+C_2z^1/2K_1/5(4/5z^5/2).
Applying the asymptotic behavior of the Bessel functions I_ν(z) and K_ν(z) (cf. <cit.>) and matching them with the behaviors of (Ψ̂^(k))_11 and (Ψ̂^(k))_12 in (<ref>), we have
(z^1/2I_1/5(4/5z^5/2),
z^1/2K_1/5(4/5z^5/2)) ∼ ((Ψ^(0))_11,(Ψ^(0))_12)
(
c_1 0
c_2 c_3)
as z→∞ with arg z∼ -π/5, where
c_1=(4π/5)^-1/2i, c_2=(4π/5)^-1/2e^-π i/5, c_3=(4/5π)^-1/2i.
Similarly, when z→∞ with arg z∼π/5, we have
(z^1/2I_1/5(4/5z^5/2),
z^1/2K_1/5(4/5z^5/2)) ∼ ((Ψ^(1))_11,(Ψ^(1))_12)
(
c_1 0
c̃_2 c_3),
where c̃_2=-e^2π i/5c_2=ie^π i/5c_1.
Hence, according to the connection formula (<ref>), we conclude that
S_0=(
c_1 0
c_2 c_3)(
c_1 0
c̃_2 c_3)^-1
=(
1 0
c_2-c̃_2/c_1 1
),
which implies that s_0=(c_2-c̃_2)/c_1=-2icos(π/5).
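For readers who want a quick numerical sanity check of this value, the short script below evaluates (c_2-c̃_2)/c_1 directly from the constants c_1, c_2 and c̃_2 given above (the common prefactor (4π/5)^{-1/2} cancels in the ratio and is kept only for completeness):

```python
# Numerical check of s_0 = (c_2 - c_2_tilde)/c_1 = -2*i*cos(pi/5) from the
# constants defined above; the common prefactor cancels in the ratio.
import numpy as np

pref = (4*np.pi/5)**(-0.5)
c1 = 1j*pref
c2 = pref*np.exp(-1j*np.pi/5)
c2_tilde = -np.exp(2j*np.pi/5)*c2        # = i*exp(i*pi/5)*c1, as stated in the text
s0 = (c2 - c2_tilde)/c1
print(s0, -2j*np.cos(np.pi/5))           # both approximately -1.618j
```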
Combining <cit.> with the connection formulas between I_ν(z) and K_ν(z) (cf. see <cit.>), we can further obtain
(z^1/2I_1/5(4/5z^5/2),
z^1/2K_1/5(4/5z^5/2))
∼((Ψ^(2))_11,(Ψ^(2))_12)(
d_1 d_3
d_2 d_4),
as z→∞ with arg z∼3π/5,
where
d_1=ie^π i/5c̃_2=-c_1, d_2=ie^π i/5c_1=-c_2, d_3=e^-π i/5ic_3+πc̃_2, d_4=π c_1=c_3.
Comparing (<ref>) and (<ref>), it is readily seen that
S_1=(
c_1 0
c̃_2 c_3)(
d_1 d_3
d_2 d_4)^-1=(
1 -c_1d_3
0 1
).
This implies that s_1=-c_1d_3=-2icos(π/5).
Finally, making use of (<ref>), we find that s_k=-2icos(π/5) for all k∈ℤ.
BO
C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and
Engineers I, Springer Science & Business Media, New York, 1999.
Bertola-Tovbis-2013
M. Bertola and A. Tovbis,
Universality for the focusing nonlinear Schrödinger equation at the gradient catastrophe point:
rational breathers and poles of the tritronquée solution to Painlevé I,
Comm. Pure Appl. Math., 66 (2013), pp. 678–752.
Deano
A. Deaño, On the Riemann-Hilbert approach to asymptotics
of tronquée solutions of Painlevé I, J. Phys. A, 56 (2023), 314001.
Deift
P. Deift, Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes, vol. 3, New York University, 1999.
DZ1
P. Deift and X. Zhou, A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation, Ann. Math., 137 (1993), 295-368.
DZ2
P. Deift and X. Zhou, Asymptotics for the Painlevé II equation, Commun. Pure Appl. Math., 48 (1995), 277-337.
FIKN
A. S. Fokas, A. R. Its, A. A. Kapaev and V.Yu. Novokshenov, Painlevé transcendents. The Riemann-Hilbert approach, Mathematical Surveys and Monographs, Vol. 128, Amer. Math. Soc., Providence, RI, 2006.
GLS
V. I. Gromak, I. Laine and S. Shimomura, Painlevé differential equations in the complex plane, de Gruyter Studies in Mathematics, Vol. 28, Walter de Gruyter GmbH & Co. KG, Berlin, 2002.
HS
P. Holmes and D. Spence, On a Painlevé-type boundary-value problem, Quart. J.
Mech. Appl. Math., 37 (1984), 525-538.
JK
N. Joshi and M. D. Kruskal, The Painlevé connection problem: An asymptotic
approach. I, Stud. Appl. Math., 86 (1992), 315-376.
Kap1
A. A. Kapaev, Asymptotic behavior of the solutions of the Painlevé equation of the first kind, Differ. Uravn. 24 (1988), 1684-1695.
Kap2
A. A. Kapaev, Quasi-linear Stokes phenomenon for the Painlevé
first equation, J. Phys. A, 37 (2004), 11149–11167.
Kapaev-Kitaev-1993
A. A. Kapaev and A. V. Kitaev, Connection formulae for the first Painlevé transcendent in the complex domain, Lett. Math. Phys., 27 (1993), 243–252.
Kitaev
A. V. Kitaev,
Symmetric solutions for the first and the second Painlevé equation, J. Math. Sci., 73 (1995), 494–499
LongLi
W.-G. Long and Y.-T. Li
Connection problem of the first Painlevé transcendents with large initial data,
J. Phys. A: Math. Theor., 56 (2023), 175201.
LongLLZ
W.-G. Long, Y.-T. Li, S.-Y. Liu and Y.-Q. Zhao,
Real solutions of the first Painlevé equation with large initial data,
Stud. Appl. Math., 139 (2017), 505–532.
LongLiWang
W.-G. Long, Y.-T. Li, and Q.-h. Wang,
Connection problem of the first Painlevé transcendent between poles and negative infinity,
SIAM J. Math. Anal., 55 (2023), 6676–6706.
NIST
F. W. J. Olver, A. B. O. Daalhuis, D. W. Lozier, B.I. Schneider, R.F. Boisvert, C.W. Clark, B.R. Miller, B.V. Saunders (Eds.), 2020, NIST Digital Library of Mathematical Functions, Release 1.0.28 of 2020-09-15, http://dlmf.nist.gov/.
QL
H.-Z. Qin and Y.-M. Lu, A note on an open problem about the first Painlevé
equation, Acta Math. Appl. Sin. Engl. Ser., 24 (2008), 203-210.
Xia-Xu-Zhao
J. Xia, S.-X. Xu and Y.-Q. Zhao, Isomonodromy sets of accessory parameters for Heun class equations,
Stud. Appl. Math., 146 (2021), 901–952.
|
http://arxiv.org/abs/2409.02242v1 | 20240903191549 | Curve collapse for the isospin-2 pion scattering length from QCD with 3, 4, and 5 colors | [
"Thomas DeGrand"
] | hep-lat | [
"hep-lat"
] |
|
http://arxiv.org/abs/2409.03254v1 | 20240905051831 | Granular-ball Representation Learning for Deep CNN on Learning with Label Noise | [
"Dawei Dai",
"Hao Zhu",
"Shuyin Xia",
"Guoyin Wang"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
Chongqing Key Laboratory of Computational Intelligence, Key Laboratory of Big Data Intelligent Computing, Key Laboratory of Cyberspace Big Data Intelligent Security, Ministry of Education, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
Granular-ball Representation Learning for Deep CNN on Learning with Label Noise
Dawei Dai1()0000-0002-8431-4431 Hao Zhu10000-0002-4655-7336 Shuyin Xia10000-0001-5993-9563 Guoyin Wang10000-0002-8521-5232
September 5, 2024
==============================================================================================================================
First Author and Second Author contributed equally to this work.
§ ABSTRACT
In real scenarios, label noise is inevitably generated in the training data, whether it is annotated manually or automatically, and it can degrade the effectiveness of deep CNN models. Popular solutions require data cleaning or the design of additional optimization objectives that penalize mislabeled samples, thereby enhancing the robustness of models. However, these methods come at the cost of weakening or even discarding some data during the training process. As we know, content is an inherent attribute of an image that does not change with changes in annotation. In this study, we propose a general granular-ball computing (GBC) module that can be embedded into a CNN model, where the classifier finally predicts the label of granular-ball (gb) samples instead of each individual sample. Specifically, considering the classification task: (1) in the forward process, we split the input samples into gb samples at the feature level, each of which corresponds to a varying number of individual samples sharing a single label; (2) during the backpropagation process, we modify the gradient allocation strategy of the GBC module so that gradients can propagate normally; and (3) we develop an experience replay policy to ensure the stability of the training process. Experiments demonstrate that the proposed method can improve the robustness of CNN models with no additional data or optimization.
§ INTRODUCTION
In recent years, deep CNN models have achieved great success in many fields owing to their powerful feature representation and learning abilities <cit.>. However, their usefulness is usually dependent on high-quality annotated data. Typically, two common data-annotation methods can be used, that is manual and automatic model annotation <cit.>. Both of them are inevitably bound to produce a certain proportion of wrong annotation data (label noise) owing to myriad constraints, including the professional domain knowledge of the annotation personnel, data quality, malicious data poisoning, and the performance of the annotation model. Excessive mislabeled data (label noise) can cause changes or even confusion in the distribution of the training data, leading to diminished performance or even specific bias discrimination in related tasks <cit.>. Consequently, constructing a CNN model robust to label noise is of practical significance.
Currently, two main solutions are employed for label noise, that is noise containment and noise filtering. The former refers to reducing the impact of noisy labels by designing the additional optimizations to punish samples with wrong labels. Noise filtering involves clearing or correcting noise samples before returning them to the model for training, which has problems of its own. For example, (1) when the proportion of label noise is high, clearing all noisy samples greatly reduces the size of the training dataset, which can lead to insufficient or imbalanced training samples, and (2) this type of method usually targets low-dimensional samples and can have difficulty handling high-dimensional samples such as images. Our aim is to develop a general module that can be embedded in the CNN models to improve their robustness.
The content and feature space of samples are inherent attributes, whereas labels are generated by human induction and definition. Therefore, samples can often be mislabeled, while the content of a sample and its feature space do not change with changes in the labeling. As we know, traditional classifiers learn the mapping from each individual sample, that is, the single and finest granularity, to its label; thus, label noise has a major impact on the models. When the classifier instead learns the mapping from cluster samples (granular balls), built at the feature level from content similarity, to one label, that is, when multiple samples share one label (see Fig. <ref>), the impact of single-sample label noise on the model is substantially reduced, provided that the label-noise ratio of granular-ball samples is much lower than that of individual samples.
In this study, we develop a novel GBC module for CNN models to learn with noisy labels, aiming to improve the robustness of CNNs with no additional requirements on the original model. GBC, first proposed by Xia et al. <cit.>, is considered to be an effective method for describing a multi-granularity knowledge space. However, current GBC methods focus only on areas such as statistical machine learning and rough sets <cit.>. We extend GBC to split the hidden feature vectors of the input batch samples into granular-ball samples for deep CNN models. Specifically, our GBC module splits the input into granular-ball (gb) samples at the feature level, where each gb sample contains an unequal number of individual samples sharing one label. The label shared by most of the individual samples in a gb sample is assigned as that gb sample's label, and the classifier finally learns the mapping of each gb sample to its label. Experiments show that the label-noise proportion of gb samples is much lower than that of individual samples, and that our proposed method can improve the robustness of original CNN models on image classification tasks. Our contributions can be summarized as:
(1) We develop a general GBC module that can be embeded into a CNN model to learn the multi-granularity representation for the classifier, where the traditional mode of learning the mapping from each individual sample to one label is transformed into a multi-granularity (MG) mapping of each gb sample to its label.
(2) Our proposed GBC module can be embedded into a CNN model with no additional design and enhance the robustness of the original model on learning with label noise. When the GBC module is applied to a contrastive learning framework, it achieves the state-of-the-art results.
§ RELATED WORK
§.§ Noise Filtering
A direct approach to deal with label noise is to design a specific method to remove mislabeled data. Han et al. <cit.> proposed a co-learning noise memory method, in which two networks with different learning capabilities were designed to perform collaborative learning on small batches of data to filter noise label samples. Guo et al. <cit.> developed principled learning strategies to achieve the goal of effectively dealing with a large number of noisy data label imbalances. Jiang et al. <cit.> proposed learning other types of neural networks, called MentorNet, to supervise the training of basic deep networks (i.e., StudentNet), during which MentorNet could provide StudentNet with a course (sample weight scheme) to focus on samples with potentially correct labels. Jie et al. <cit.> proposed making the learning rate change periodically—the model swinging between overfitting and underfitting, resulting in the loss of samples with noise labels changing considerably—to detect noise label samples. Jindal et al. <cit.> introduced a nonlinear processing layer to model the data with incorrect labels, thereby preventing the model from overfitting noise. Yao et al. <cit.> considered that co-learning could not accurately express the true learning status of a network by manually setting the forgetting rate, and proposed to adaptively obtain the forgetting rate and enhance its autonomy. Liang et al. <cit.> proposed a two-stage training algorithm, that is, in the first stage, a pre-trained language model was adapted to named entity recognition (NER) tasks; in the second stage, remote label removal and self-training were used to enhance the robustness of the model. Meng et al. <cit.> proposed a noise-robust learning scheme comprising a new loss function and noise label deletion process, training the model to label the data. Garg et al. <cit.> proposed a two-component beta mixture model, assigning probability scores with clean or noisy labels to each sample before training the classifier and noise model using denoising losses. Zhang et al. <cit.> proposed self-cooperative noise reduction learning, which trained a teacher-student network, with each network using reliable labels through self-denoising, and explored unreliable annotations through collaborative denoising. Li et al. <cit.> proposed a global noise filter called Federated Noise Filter(FedDiv) for effectively identifying samples with noisy labels.
§.§ Noise Containment
These methods attempt to design specialized optimization goals to construct robustness models. Manwani et al. <cit.> verified that risk minimization using the 0–1 loss function had noise tolerance characteristics and the square error loss only tolerated uniform noise. Sukhbaatar et al. <cit.> introduced an additional noise layer in a neural network that adjusted the output to match the distribution of noise labels so that the probability transfer matrix continuously tended toward the true probability transfer matrix during the training process. Azadi et al. <cit.> proposed an auxiliary image regularization technique, encouraging the model to select reliable images to improve the learning process. Jindal et al. <cit.> augmented a standard deep network using a SoftMax layer that model the label-noise statistics before training the deep network. Zhuang ed al. <cit.> proposed an end-to-end weakly supervised deep-learning framework which was robust to label noise in web images. Li et al. <cit.> proposed a unified distillation framework to use "edge" information to "hedge" the risk of learning from noisy labels. Patrini et al. <cit.> proposed a forward correction method that does not depend on the application domain and network architecture, but only needs to know the probability of each class being polluted into another class. Zhang et al. <cit.> proposed a robust generalized cross-entropy (GCE) loss which combined the fast convergence speed of cross-entropy and the robustness advantages of the mean absolute error. Wang et al. <cit.> proposed a symmetric cross-entropy learning method that symmetrically enhances the CE using reverse cross-entropy corresponding to robust noise. Jun Shu et al. <cit.> proposed a meta-learning method to train a reliable network with a set of clean and small data to guide the subsequent training of noisy data, so as to alleviate the adverse effects of label noise or long-tail data on model training. Harutyunyan et al. <cit.> proposed a method to control the label noise information in the weights of neural networks, which reduced the label memorization problem. Ma et al. <cit.> proposed to combine two mutually reinforcing robust loss functions to mitigate the underfitting problem and improve the learning performance. Chen et al. <cit.> proposed a stochastic label noise (SLN) to help models avoid falling into "sharp minima" and "overconfidence" situations. Li et al. <cit.> proposed a contrastive regularization function to learn robust contrastive representations over noisy data. Zhang et al. <cit.> proposed a representation calibration method, RCAL, which improves the robustness of the representation by recovering the multivariate Gaussian distribution.
§.§ Granluar Computing
Chen<cit.> pointed out that the brain gives priority to recognizing a "wide range" of contour information in image recognition, and human cognition has the characteristics of "global precedence". This differs from major existing artificial intelligence algorithms, which use the most fine-grained points as inputs. Granular computing can be used to partition data distribution and knowledge space. Wang <cit.> introduced a large-scale cognitive rule into granular computing and proposed multigranular cognitive computing. Xia and Wang <cit.> proposed hyperspheres of different sizes to represent "grains" and proposed GBC, in which a large gb represented coarse granularity, while a small gb represented fine-granularity. Xia et al. <cit.> proposed the granular-ball support vector machine (GBSVM) method, in which gb samples replaced the original finest-grained sample; this method exhibited better efficiency and robustness than the traditional classifier. GBC has also been applied in many other fields to improve model generalizability or efficiency, such as rough sets <cit.>, sampling <cit.>, fuzzy sets <cit.>. In this study, we develop an extended GBC framework to construct robust deep CNN models for learning with label noise.
§ METHODOLOGY
§.§ Motivation
At present, the learning process of all deep CNN models attempts to map each individual sample in the training dataset to its label, that is, a single-granularity information processing mode. Therefore, a certain proportion of label noise in the training dataset can affect the usefulness of neural models. The popular solutions enhance the robustness of models at the cost of weakening or even discarding the mislabeled data. In this study, we propose a GBC module that can be embedded in CNN models; it splits the feature vectors of the input into multi-granularity grains (gb samples). Consequently, the final classifier learns the mapping of each gb sample to its label (Fig. <ref>). Intuitively, since gb samples are generated based on content similarity, the proportion of gb samples with incorrect labels is unlikely to exceed, and may even be much lower than, that of individual samples. Therefore, multi-granularity information processing can achieve better robustness than processing at the single, finest granularity.
§.§ Overview of our method
For image classification tasks, a deep CNN model can be divided into the feature-learning module (FLM) and classifier, and FLM converts the input images into low-dimensional feature vectors, based on which the classifier predicts the label of each individual image sample.
In this study, we design a GBC module and integrate it into FLM and classifier modules. And we develop an experience replay strategy to train the model, which requires the input images to be divided into empirical and non-empirical samples.
As shown in Fig. <ref>, (1) through the FLM, two types of input images are converted into a set of low-dimensional feature vectors; (2) each empirical sample from the experience pool is not required to reproduce the gb sample, and the center vector of each empirical gb sample is updated using the feature vectors of individual samples belonging to that gb sample that have just been updated; (3) the GBC module splits the feature vector set of non-empirical samples into the MG grains (i.e., gb samples), each of which contains different number individual samples and corresponds to one single label; (4) a portion of high-purity gb samples is placed as empirical gb samples into the experience pool; and (5) two types of gb samples are merged, and classifier predicts the label of each gb sample rather than the individual sample in the training process. In the error backpropagation of the GBC Layer, we adopt a similar average pooling operation to copy the error of the gb samples to all individual samples within it. In the reasoning process, each individual sample can be considered to be one gb sample.
§.§ Adaptive gb Sample Generation
Definition 1: Given a granular-ball sample gb_i, it can contain individual samples with different labels, and each label can correspond to a different number of individual samples. We define the label_j that corresponds to the most individual samples as the label of gb_i, | label_j | as the number of individual samples with label_j in gb_i, | gb_i | as the number of individual samples in gb_i, and p_gb_i as the purity of gb_i; then:
p_gb_i=| label_j|/| gb_i|
Definition 2: A set of a low-dimensional feature vector D∈R^d is given. We define C as the center of gravity of all individual sample points in a gb sample gb_i, v_i as the feature vector of an individual sample in gb_i, and v_c as the center vector of gb_i, then:
v_c=1/| gb_i|∑_i=1^| gb_i| v_i.
A formal description of the proposed GBC module is expressed in Eq. <ref>. N denotes the total number of samples in the input, and m denotes the number of gb samples into which the input is divided. The construction of gb samples needs to meet the following constraints: (1) each gb sample meets the purity requirement; (2) each gb sample should cover as many samples as possible, and the number of gb samples should be as small as possible. The purpose of the GBC module is to divide a single-granularity input into a multi-granularity (MG) representation at the feature level. The overall process is summarized in Algorithm <ref>; a schematic code sketch of this splitting is given below.
f(x, w) → g(gb, θ),
s.t. Min N / ∑_j=1^m(| gb_i |) + m,
s.t. quality(gb_i) ≥ T.
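The following is a minimal, schematic sketch of such a purity-driven splitting at the feature level. It is an illustration rather than the exact Algorithm <ref>: the 2-means split rule, the threshold value T and the minimum ball size are assumptions made here for concreteness.

```python
# Schematic sketch of purity-driven granular-ball generation (illustrative only;
# the 2-means split rule, threshold T and minimum size are assumptions).
import numpy as np
from sklearn.cluster import KMeans

def purity(labels):
    _, counts = np.unique(labels, return_counts=True)
    return counts.max() / counts.sum()            # p_gb = |label_j| / |gb_i|

def split_into_granular_balls(feats, labels, T=0.8, min_size=2):
    queue, balls = [(feats, labels)], []
    while queue:
        f, l = queue.pop()
        assign = None
        if purity(l) < T and len(l) > min_size:
            assign = KMeans(n_clusters=2, n_init=10).fit_predict(f)
        if assign is None or len(np.unique(assign)) < 2:
            # accept the ball: centre vector v_c, majority label, and size
            balls.append((f.mean(axis=0), np.bincount(l).argmax(), len(l)))
        else:
            for k in (0, 1):
                queue.append((f[assign == k], l[assign == k]))
    return balls
```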
§.§ Error Backpropagation in GBC Layer
The batch input of N_b samples is mapped to N_b d_0-dimensional feature vectors ([N_b, d_0]) through the FLM, and the GBC layer further divides them into N_gb gb samples ([N_gb, d_0]), usually with N_b>N_gb. Because of this mismatch between the output of the FLM and the input of the classifier, error propagation is interrupted between the GBC layer and the FLM. Consequently, only the error corresponding to each gb sample is returned to the GBC layer during the backpropagation process, whereas an error for each individual sample is needed for the feature-learning module to learn layer by layer. The GBC layer performs an average-pooling-like operation at the feature level of the input samples. Therefore, we adopt a similar rule and copy the error of each gb sample to all individual samples within it.
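A minimal sketch of this custom backward rule is given below: the forward pass averages the member features of each ball, and the backward pass copies the gradient received by a ball to every individual sample assigned to it. The array shapes and the explicit assignment vector are illustrative assumptions; in practice this would typically be wrapped in the framework's custom-gradient mechanism.

```python
# Sketch of the GBC layer's forward/backward rule (illustrative shapes and data).
import numpy as np

def gbc_forward(feats, ball_ids, n_balls):
    centres = np.zeros((n_balls, feats.shape[1]))
    for b in range(n_balls):
        centres[b] = feats[ball_ids == b].mean(axis=0)   # centre vector v_c of ball b
    return centres

def gbc_backward(grad_centres, ball_ids):
    # copy the error of each gb sample to all individual samples within it
    return grad_centres[ball_ids]

feats = np.random.randn(8, 4)                            # N_b = 8, d_0 = 4
ball_ids = np.array([0, 0, 0, 1, 1, 2, 2, 2])            # N_gb = 3
centres = gbc_forward(feats, ball_ids, n_balls=3)        # shape (3, 4)
grad_feats = gbc_backward(np.ones((3, 4)), ball_ids)     # shape (8, 4)
```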
§.§ Experience Replay
Since the individual samples for each iteration are drawn randomly, gb samples generated for each iteration can exhibit non-static distribution. Consequently, reusing past experience not only reduces training costs but also enables better fitting of the model. Therefore, we design an experience replay strategy that stores the previous gb samples to address these problems. The overall process is summarized as:
(1) In each training step, we first randomly select a certain number of empirical gb samples from the experience pool and extract the original samples (X_empirical) contained in these empirical gb samples; we also randomly select a certain number of non-empirical samples (X_non-empirical) directly from the training set; batch data [X_empirical, X_non-empirical] are finally fed into the proposed models.
(2) In the forward process, it is not necessary to generate gb samples for the empirical samples, and the center vector of each empirical gb sample is simply updated using the current feature vectors; for non-empirical samples, we generate gb samples through the GBC module and place a portion of high-purity gb samples into the experience pool as empirical gb samples, and the original samples contained in these empirical gb samples are called empirical samples; finally, we merge the two types of gb samples and feed them into the classifier. A minimal sketch of this bookkeeping is given below.
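The sketch below illustrates the experience-pool bookkeeping described in steps (1) and (2); the pool capacity and purity threshold are illustrative values, not settings reported in the text.

```python
# Illustrative experience-replay pool for empirical granular balls
# (capacity and purity threshold are assumptions for the sketch).
import random

class GBExperiencePool:
    def __init__(self, capacity=1024, purity_threshold=0.9):
        self.capacity = capacity
        self.purity_threshold = purity_threshold
        self.pool = []                                # entries: (member indices, ball label)

    def push(self, balls):
        """Store high-purity gb samples produced from non-empirical samples."""
        for member_idx, label, p in balls:
            if p >= self.purity_threshold:
                self.pool.append((member_idx, label))
        self.pool = self.pool[-self.capacity:]        # keep only the newest entries

    def sample(self, n_balls):
        """Draw empirical gb samples; their members skip the GBC split and only
        have their centre vectors refreshed with the current feature extractor."""
        return random.sample(self.pool, min(n_balls, len(self.pool)))
```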
§ EXPERIMENTS
We applied our method on base ResNet (RN) <cit.>, DenseNet (DN) <cit.> and contrastive learning <cit.> models, and then we conducted experiments on several image classification datasets (including CIFAR-10, CIFAR-100, CIFAR-10N and ANIMAL-10N). Among them, the noise in CIFAR-10 and CIFAR-100 is generated by random methods, while the noise in CIFAR-10N and ANIMAL-10N is generated by manual annotation.
§.§ Experiments Settings
Dataset. For CIFAR-10 and CIFAR-100 datasets, we test two types of label noise: symmetric noise(Sym.) and asymmetric noise(Asym.). For symmetric noise, a fixed proportion of samples being randomly selected from each category for random label modification; for asymmetric noise, we flipped labels between DEER↔HORSE, BIRD↔AIRPLANE, TRUCK↔AUTOMOBILE, and CAT↔DOG(Asym.).
The ANIMAL-10N dataset contains 5 pairs of easily confused animals with a total of 55,000 images, which are crawled from several online search engines using the predefined labels as search keywords; the images are then classified by 15 recruited participants; each participant annotated a total of 6,000 images with 600 images per class; after removing irrelevant images, the training dataset contains 50,000 images and the test dataset contains 5,000 images; the noise rate is about 8% <cit.>.
CIFAR-10N, variations of CIFAR-10 with human-annotated real-world noisy labels collected from Amazon's Mechanical Turk <cit.>.
Implementation details. We implemented the proposed method in PyTorch and conducted experiments on a 24 GB NVIDIA RTX 3090 GPU. We used SGD with Nesterov momentum and set the initial learning rate to 0.1, momentum to 0.9, and minibatch size to 512-1024. The learning rate was dropped by 0.1 at 32k and 48k iterations, and we trained for 64k iterations. The basic models used in the experiments were ResNet and DenseNet models. We used cross-entropy losses with a weight decay of 0.0001. For GBC layer setting, the purity p was set to a value between 0.6 and 1.
Baseline methods. To evaluate our method, we also compared our method to other methods that also without additional data and optimization: (1) CE, which uses Cross-Entropy loss to train the DNNs on noisy datasets. (2) Forward <cit.>, which corrects loss values by a label transition matrix. (3) LIMIT <cit.>, which introduces noise into the gradient to avoid memorization. (4) SLN <cit.>, which proposes to combat label noise by adding noise to the data labels. (5) CTRR <cit.>, which proposes a contrastive regularization function to learn robust contrastive representations of data over noisy data.
§.§ Comparisons with the Original Models
We first applied the proposed GBC module to two classical models (ResNet and DenseNet) and trained them on the benchmark datasets containing different proportions of label noise. The comparison results of the original CNN and our GB_CNN models are listed in Table <ref>, Table <ref> and Fig. <ref>. From the results, we can make two major observations:
(1) The purpose of our proposed method is not to push the state-of-the-art performance of the original models learning on clean data, but to reduce the influence of label noise. From the results in Table <ref>, we note that the CNN models embedding the GBC module perform almost as well as the original models when learning with no label noise, that is, the proposed GBC layer does not decrease the performance of the original models. This is because our method does not filter or penalize mislabeled samples, and all samples are used for learning.
(2) On learning with varying proportions of label noise (see Table <ref>), our method can significantly improve the robustness of the original deep models with no additional data or optimization. One fundamental reason for this is that the label-noise proportion of the gb samples in the training process is considerably lower than that of the individual samples, as shown in Table <ref>. We can conclude that our method improves the robustness of CNN models by reducing the noise ratio of the training samples during the training process, without any additional data or optimization. In addition, our method can also improve the robustness of the models on datasets whose noise comes from manual annotation, which conforms to the natural distribution of noise (see Table <ref>).
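A simple back-of-the-envelope calculation illustrates why the ball-level noise ratio drops: if a ball groups k samples of the same true class and each label is flipped independently with probability p, its majority label is wrong only when more than half of its members are mislabelled. The sketch below estimates this with made-up values of p and k (it also idealizes the multi-class case by treating all flipped labels as a single wrong class, so it overestimates the ball-level noise).

```python
# Idealised illustration: per-sample noise p vs. majority-vote (ball-level) noise
# for balls of k same-class samples (p, k and the binary treatment are assumptions).
import numpy as np

rng = np.random.default_rng(0)
p, k, trials = 0.4, 9, 100_000
flips = rng.random((trials, k)) < p                 # True = sample label flipped
ball_wrong = (flips.sum(axis=1) > k / 2).mean()     # majority label is wrong
print(f"per-sample noise {p:.0%} -> ball-label noise about {ball_wrong:.1%}")
```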
§.§ Comparisons with Other Methods
Since PreAct ResNet-18 (PRN18, see Table <ref>), a much wider and larger model than RN20, 32, 44, 56 and DN121, was used in the experiments of previous studies, we also applied our method to PRN18 to ensure a fair comparison. In addition, the CTRR method achieved state-of-the-art robustness in learning with label noise; therefore, we also applied the proposed method to the CTRR framework, in which the GBC module was embedded into the supervised learning branch with no other changes. From the results in Table <ref>, we can make the following major observations: (1) Our method significantly improves the robustness of the original models (CE and GB_PRN18), and also performs better across varying noise ratios and noise types compared with the listed methods. (2) When the proposed GBC module is embedded into the CTRR framework, the new method further improves the robustness of CTRR and achieves state-of-the-art results.
§ CONCLUSION
In practice, a certain proportion of mislabeled samples always occurs when collecting data, which can affect the effectiveness of models. Labels can often change due to subjective factors, while the content of a sample and its features do not change with changes in the labeling. Inspired by this, we propose learning multi-granularity representations based on feature similarity, where the classifier predicts the label of each gb sample instead of each individual sample. The experimental results verify that the proposed method can improve the robustness of deep CNN models without any additional data or optimization. Nevertheless, our proposed method still needs improvement in classification tasks with many categories, which is worth further exploration.
|
http://arxiv.org/abs/2409.02600v1 | 20240904103128 | Solubility of carbon dioxide in water: some useful results for hydrate nucleation | [
"Jesús Algaba",
"Iván M. Zerón",
"José Manuel Míguez",
"Joanna Grabowska",
"Samuel Blazquez",
"Eduardo Sanz",
"Carlos Vega",
"Felipe J. Blas"
] | cond-mat.soft | [
"cond-mat.soft"
] |
Solubility of carbon dioxide in water: some useful results for hydrate nucleation
Laboratorio de Simulación Molecular y Química Computacional, CIQSO-Centro de Investigación en Química Sostenible and Departamento de Ciencias Integradas, Universidad de Huelva, 21006 Huelva Spain
Laboratorio de Simulación Molecular y Química Computacional, CIQSO-Centro de Investigación en Química Sostenible and Departamento de Ciencias Integradas, Universidad de Huelva, 21006 Huelva Spain
Laboratorio de Simulación Molecular y Química Computacional, CIQSO-Centro de Investigación en Química Sostenible and Departamento de Ciencias Integradas, Universidad de Huelva, 21006 Huelva Spain
Department of Physical Chemistry, Faculty of Chemistry and BioTechMed Center, Gdansk University of Technology, ul. Narutowicza 11/12, 80-233 Gdansk, Poland
Dpto. Química Física I, Fac. Ciencias Químicas, Universidad Complutense de Madrid, 28040 Madrid, Spain
Dpto. Química Física I, Fac. Ciencias Químicas, Universidad Complutense de Madrid, 28040 Madrid, Spain
Dpto. Química Física I, Fac. Ciencias Químicas, Universidad Complutense de Madrid, 28040 Madrid, Spain
Dpto. Química Física I, Fac. Ciencias Químicas, Universidad Complutense de Madrid, 28040 Madrid, Spain
Laboratorio de Simulación Molecular y Química Computacional, CIQSO-Centro de Investigación en Química Sostenible and Departamento de Ciencias Integradas, Universidad de Huelva, 21006 Huelva Spain
[email protected]
§ ABSTRACT
In this paper, the solubility of carbon dioxide (CO_2) in water along the isobar of 400 bar is determined by computer simulations using the well-known TIP4P/Ice force field for water and TraPPE model for CO_2. In particular, the solubility of CO_2 in water when in contact with the CO_2 liquid phase, and the solubility of CO_2 in water when in contact with the hydrate have been determined. The solubility of CO_2 in a liquid-liquid system decreases as temperature increases. The solubility of CO_2 in a hydrate-liquid system increases with temperature. The two curves intersect at a certain temperature that determines the dissociation temperature of the hydrate at 400 bar (T_3). We compare the predictions with the T_3 obtained using the direct coexistence technique in a previous work. The results of both methods agree and we suggest 290(2) K as the value of T_3 for this system using the same cutoff distance for dispersive interactions. We also propose a novel and alternative route to evaluate the change in chemical potential for the formation of hydrate along the isobar. The new approach is based on the use of the solubility curve of CO_2 when the aqueous solution is in contact with the hydrate phase. It considers rigorously the non-ideality of the aqueous solution of CO_2, providing reliable values for driving force for nucleation of hydrates in good agreement with other thermodynamic routes used. It is shown that the driving force for hydrate nucleation at 400 bar is larger for the methane hydrate than for the carbon dioxide hydrate when compared at the same supercooling. We have also analyzed and discussed the effect of the cutoff distance of the dispersive interactions and the occupancy of CO_2 on the driving force for nucleation of the hydrate.
Felipe J. Blas
July 2024
==================
§ INTRODUCTION
At ambient conditions of temperature and pressure (298 K and 1 bar), the thermodynamically stable phase of water is the liquid phase. If the temperature is decreased at a constant pressure of 1 bar, the liquid is no longer the most stable phase and a first-order phase transition takes place at 273.15 K. Consequently, and according to the Thermodynamics laws, water must freeze. The new thermodynamically stable phase is the well-known ordinary ice, also known as Ih or hexagonal ice. This solid phase is formed by a crystalline structure characterized by the oxygen atoms forming hexagonal symmetry with nearly tetrahedral bonding angles. The same happens if the pressure is above ambient conditions up to 2100 bar, approximately. Above this pressure, water can freeze into other ices, including ice III, V, VI, among others, as the pressure is increased. <cit.> These are only some of the solid crystalline phases of the well-known polymorphic phases of water. However, this only happens if the original liquid phase is formed from pure water. When liquid water is mixed with another substance the story can be different.
There exist aqueous solutions of small compounds that exhibit different behavior when cooled down at constant pressure. Particularly, aqueous solutions of methane (CH_4), carbon dioxide (CO_2), nitrogen (N_2), hydrogen (H_2) or larger organic molecules, among many other different compounds, do not transform into a crystalline ice phase when the temperature is lowered. In fact, all these aqueous solutions freeze into new crystalline solid compounds named clathrate hydrates or simply hydrates. <cit.> Hydrates are non-stoichiometric crystalline inclusion compounds consisting of a network of hydrogen-bonding water molecules forming cages in which solutes (for instance, CH_4, CO_2, N_2 or H_2) are enclathrated at appropriate thermodynamic conditions of temperature and pressure.
Fundamental and applied research on hydrates and clathrates has been motivated by several reasons. First at all, hydrates are potential alternative sources of energy since huge amounts of CH_4 have been identified in hydrate deposits, either in the sea floor or in the permafrost frozen substrates, but their exploitation is not technically accessible yet due to a poor physicochemical characterization and various engineering issues. <cit.> Another remarkably relevant aspect of hydrates from both the scientific point of view and practical interest is the possibility to capture <cit.> and store CO_2. <cit.> This places gas hydrates at the center of environmental concerns regarding atmospheric greenhouse gases. Sequestration and capture of CO_2 in hydrates constitute a technological breakthrough which is seen as a promising alternative to other conventional methodologies for CO_2 capture, such as reactive absorption using amines and selective adsorption using adsorbent porous materials including sieves and zeolites. <cit.>
It is clear from the previous discussion that an accurate knowledge of the thermodynamics and kinetics of the formation and growth of hydrates is necessary from the fundamental and practical points of view. The thermodynamics of hydrates has been relatively well-established experimentally for years. <cit.> In addition, it is also possible to describe theoretically the phase equilibria of hydrates using the van der Waals and Platteeuw (vdW&P) formalism. <cit.> This approach, combined with an equation of State (EOS) allows us to satisfactorily determine the phase equilibrium of both pure hydrates and mixtures. <cit.> Additionally, from the point of view of molecular simulation, there has been an enormous development in techniques and methodologies for the study of the formation and dissociation of a huge variety of hydrates. <cit.> Particularly, several research groups have determined the phase equilibrium of CO_2<cit.> and CH_4 hydrates under oceanic crust conditions<cit.> using the direct coexistence technique. The precise knowledge of phase equilibria of hydrates, and particularly their phase boundaries, is essential to provide a detailed description of the kinetic and nucleation processes of these systems.
Unfortunately, a complete description from a molecular perspective of the mechanisms of growth and hydrate formation is far from being satisfactory. In the last few years, some of the authors of this work have been working on the development and use of the Seeding Technique, <cit.> in combination with the Classical Nucleation Theory (CNT), <cit.> to deal with several systems including the hard-sphere and Lennard-Jones models and more complex systems such as water and salty water. <cit.> More recently, we have extended the study to deal with methane hydrates. <cit.> It is important to recall here that Molinero and collaborators used the Seeding Technique to estimate nucleation rates of hydrates <cit.> modeled through the well-known mW water model. <cit.> Other authors have also contributed significantly to the understanding of the dynamics of nucleation and dissociation of hydrates from computer simulation. <cit.> This work constitutes the extension of our most recent study <cit.> on methane hydrates to deal with CO_2 hydrates. Before undertaking nucleation studies of CO_2 hydrates it is necessary to account for several issues, including the solubility of CO_2 in the aqueous solution when it is in contact with the CO_2-rich liquid phase and with the hydrate, an accurate prediction of the dissociation temperature, and the driving force for the nucleation.
The phase behavior of the CO_2 + water binary mixture is dominated by a large region of liquid-liquid (L_w-L_CO_2) immiscibility.^51,52 Since the critical point of pure CO_2 and a liquid-liquid-vapor (L_w-L_CO_2-V) three-phase line are at conditions similar to those at which the CO_2 hydrates are found, between 270-295 K and 10-5000 bar, approximately, another three-phase coexistence line involving a hydrate phase (i.e., a triple point that occurs at a certain temperature T_3 for each pressure) exhibits two branches. This is contrary to what happens with methane hydrates, that only exhibit one branch. <cit.> Fig. <ref> shows the pressure-temperature (PT) projection of the phase diagram of the CO_2 + water binary mixture. At pressures below 44.99 bar, the dissociation line is a H-L_w-V three-phase line at which the hydrate, the aqueous solution of CO_2, and the vapor phases coexist. Above that pressure, the hydrate and the solution coexist with a CO_2 liquid phase and a three-phase H-L_w-L_CO_2 line where the hydrate, the aqueous solution of CO_2, and the liquid phase of CO_2 coexist starts. Both branches meet at a Q_2 quadruple point located at 283 K and 44.99 bar (black filled circle) at which the hydrate, the aqueous solution, the CO_2 liquid, and the vapor phases coexist, <cit.> as can be seen in Fig. <ref>. Note that at Q_2 the L_w-L_CO_2-V three-phase line also meets with another H-L_CO_2-V three-phase in which the hydrate, the CO_2 liquid, and the vapor phases coexist at lower temperatures. In addition to this, there exists another quadruple point Q_1, located at 273 K and 12.56 bar (black filled square), at which the hydrate, the Ih ice, the solution, and the vapor phases coexist. This quadruple point connects the H-L_w-V three-phase line with a new three-phase H-Ih-V line involving the hydrate, the Ih ice, and the vapor phases that runs towards lower temperatures and pressures.
In this work, we concentrate on 400 bar of pressure (see the 400 bar isobar in Fig. 1 along which all the simulations are performed). At these conditions, the key solubility curves in the context of nucleation of CO_2 hydrates are the solubility of CO_2 in water when the aqueous solution is in contact with the CO_2 liquid phase and with the hydrate phase. In the first case, the solubility of CO_2 increases as the temperature is decreased. In the second case, as it occurs for the methane hydrate, there is little or no information from computer simulations or experiments. Here we determine the solubility of CO_2 in water from the hydrate along the isobar of 400 bar. This will allow us to estimate the dissociation line of the hydrate at this pressure, as we have done in our previous work for the case of the methane hydrate. <cit.>
The dissociation line of the CO_2 hydrate has been already determined by us several years ago. <cit.> It is important to mention that also other authors have obtained similar results using computer simulations <cit.> and free energy calculations. <cit.> Our previous results are slightly different than those found by Costandy and coworkers and Waage an collaborators since unlike dispersive interactions between water and CO_2 are different. However, we follow our previous work <cit.> and determine the dissociation line of the hydrate using the solubility curve of CO_2 in the aqueous solution when it is in contact with the CO_2 liquid phase and the hydrate. We have found that the new estimations agree with the initial prediction of Míguez et al. <cit.> within the corresponding uncertainties.
The formation of the CO_2 hydrate can be viewed as a chemical reaction in which water and CO_2 molecules “react” in the aqueous solution phase to form hydrate molecules. <cit.> The change in chemical potential of this reaction is the driving force for nucleation, Δμ_N. It is difficult to get good estimates of Δμ_N from experiments since it requires accurate values for a number of thermodynamic properties, including enthalpies and volumes of reactions, among other magnitudes. <cit.> Here we use the three independent routes introduced in our previous paper <cit.> to deal with the nucleation driving force for the nucleation of CO_2 hydrates. Particularly, we calculate the driving force for nucleation with respect to the state on the H-L_w-L_CO_2 three-phase line at 400 bar. Note that this point is well above the two quadruple points Q_1 and Q_2 shown in Fig. <ref>. In addition to this, we also propose a novel and alternative thermodynamic route based on the use of the solubility curve of CO_2 with the hydrate. This new route, that considers rigorously the non-ideality of the aqueous solution of CO_2 and provides reliable results of the driving force for nucleation, can be also used to determine Δμ_N of other hydrates.
The organization of this paper is as follows: In Sec. II, we describe the methodology used in this work. The results obtained, as well as their discussion, are described in Sec. III. Finally, conclusions are presented in Sec. IV.
§ METHODOLOGY
We use the GROMACS simulation package <cit.> to perform MD simulations. Computer simulations have been performed using three different versions of the NPT or isothermal-isobaric ensemble. For pure systems which exhibit fluid phases (pure water and pure CO_2) and aqueous solutions of CO_2 which exhibit bulk phases, we use the standard isotropic NPT ensemble, i.e., the three sides of the simulation box are changed proportionally to keep the pressure constant. For the hydrate phase, we use the anisotropic NPT ensemble in which each side of the simulation box is allowed to fluctuate independently to keep the pressure constant. This ensures that the equilibrated solid phase has no stress and that the thermodynamic properties are correctly estimated. The same ensemble is used to simulate the two-phase equilibrium between the hydrate and the aqueous solution of CO_2 (SL coexistence). Finally, the two-phase equilibrium between the solution and the CO_2 liquid phase is obtained using the NP_z𝒜T ensemble in which only the side of the simulation box perpendicular to the LL planar interface is allowed to change, with the interface area kept constant, to keep the pressure constant. For simulations involving LL and SL interfaces, we have used sufficiently large values of interfacial areas 𝒜. The thermodynamics and interfacial properties obtained from simulations of LL interfaces do not show a dependence on the surface area for systems with 𝒜>10× 10 σ^2. <cit.> Here σ is the largest Lennard-Jones diameter of the intermolecular potentials. In all simulations, the 𝒜 values used are higher than this value for LL and SL interfaces. See Sections III.A and III.C for the particular values used in this work.
In all simulations, we use the Verlet leapfrog<cit.> algorithm with a time step of 2 fs. We use a Nosé-Hoover thermostat, <cit.> with a coupling time of 2 ps, to keep the temperature constant. In addition to this, we also use the Parrinello-Rahman barostat <cit.> with a time constant equal to 2 ps to keep the pressure constant. We use two different cutoff distances for the dispersive and coulombic interactions, r_c=1.0 and 1.9 nm.
We use periodic boundary conditions in all three dimensions. The water-water, CO_2-CO_2, and water-CO_2 long-range interactions due to coulombic forces are determined using the three-dimensional Ewald technique. <cit.> Particularly, the real part of the coulombic potential is truncated at the same cutoff as the dispersive interactions. The Fourier term of the Ewald sums is evaluated using the particle mesh Ewald (PME) method. The width of the mesh is 0.1 nm, with a relative tolerance of 10^-5. In some calculations, we also use the standard long-range corrections for the LJ part of the potential to energy and pressure with r_c=1.0 nm. Water molecules are modeled using the TIP4P/Ice model <cit.> and the CO_2 molecules are described using the TraPPE model. <cit.> The H_2O–CO_2 unlike dispersive energy value is given by the modified Berthelot combining rule, ϵ_12=ξ(ϵ_11 ϵ_22)^1/2, with ξ=1.13. This is the same used by Míguez et al, <cit.> which allows us to predict accurately the three-phase hydrate–water–carbon dioxide coexistence or dissociation line of the CO_2 hydrate, particularly the coexistence temperature at the pressure considered in this work, 400 bar (see Fig. 10 and Table II of the work of Míguez and co-workers for further details). Very recently, we have demonstrated that the same molecular parameters are able to predict accurately the CO_2 hydrate-water interfacial free energy. <cit.>
Finally, uncertainties are estimated using the standard deviation of the mean values or the sub-block average method. In particular, bulk densities in the LL and SL coexistence studies are obtained by averaging the corresponding density profiles over the appropriate regions sufficiently far from the interfacial regions. The statistical uncertainties of these values are estimated from the standard deviation of the mean values. Solubilities of CO_2 in all the liquid phases are calculated as molar fractions from the densities of both components, and the corresponding errors are obtained from propagation-of-uncertainty formulae. Uncertainties associated with LL interfacial tension values, molar enthalpies, and partial molar enthalpies are estimated using the standard sub-block average method. In particular, the production periods are divided into 10 (independent) blocks, and the statistical errors are estimated from the standard deviation of the block averages.
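As a reference for this error estimate, a minimal implementation of the 10-block sub-block average is sketched below (the helper name and the example series are illustrative).

```python
# Minimal sub-block average: mean and statistical error from 10 block means.
import numpy as np

def block_error(timeseries, n_blocks=10):
    blocks = np.array_split(np.asarray(timeseries, dtype=float), n_blocks)
    means = np.array([b.mean() for b in blocks])
    return means.mean(), means.std(ddof=1) / np.sqrt(n_blocks)

# example: an artificial production time series
mean, err = block_error(np.random.default_rng(1).normal(1.0, 0.05, 40_000))
```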
§ RESULTS
§.§ Solubility of carbon dioxide in water from the CO_2 liquid phase
We first concentrate on the solubility of CO_2 in the aqueous solution when the system exhibits LL immiscibility. In this case, there exists a coexistence between the water-rich and CO_2 liquid phases. We have used the direct coexistence technique to determine the solubility of CO_2 in the aqueous solution from the CO_2 liquid phase at several temperatures along the 400 bar isobar. Particularly, we have performed MD NP_z𝒜T simulations to ensure that temperature and pressure are constant. According to this, the planar interfacial area 𝒜=L_x× L_y is kept constant and only L_z is varied along each simulation. Here, L_x, L_y, and L_z are the dimensions of the simulation box along the x-, y-, and z-axis, respectively. In this work, the z-axis is chosen to be perpendicular to the planar interface. The initial simulation box is prepared in the following way. We build a slab of 2800 water molecules in contact, via a planar interface, with a second slab of 1223 molecules of CO_2. The dimensions of L_x and L_y of all the simulation boxes used in this part of the work are kept constant with L_x=L_y=3.8 nm (𝒜≃ 12× 12 σ^2). Since the pressure is constant, L_z varies along each simulation for all the temperatures considered. In this work, L_z varies from 11.06 to 12.29 nm. Simulations to calculate solubilities are run during 100 ns. The first 20 ns are used to equilibrate the system and the last 80 ns are used as the production period to obtain the properties of interest. We have also determined the LL interfacial tension and details of the simulations are explained later in this section.
Figure <ref> shows the density profiles of water and CO_2 as obtained by MD NP_z𝒜T simulations at 400 bar and temperatures from 250 to 310 K. For a better visualization for the reader, we only plot half of the profiles corresponding to one of the interfaces exhibited by the system. The right side of the figure corresponds to the CO_2 liquid phase and the left side to the aqueous liquid phase. We divide the inhomogeneous simulation box into 200 parallel slabs along the z–direction, perpendicular to the planar LL interface, to study the density profiles. Following the standard approach, density profiles are obtained assigning the position of each interacting site to the corresponding slab and constructing the molecular density from mass balance considerations.
As can be seen, density profiles of water (dashed curves) exhibit preferential adsorption at the interface at all temperatures. Particularly, water molecules are accumulated at the aqueous phase side of the interface. The relative maximum, which is identified with the accumulation of the water molecules, increases as the temperature of the system is decreased. The bulk density of water in the aqueous solution of CO_2 (left side of the figure) slightly decreases as the temperature is lowered, especially in the range 250-290 K. As can be seen, density profiles of water are nearly equal to zero at the bulk CO_2 liquid phase, indicating that the solubility of water in that phase is completely negligible. Míguez et al. <cit.> have previously studied the LL interface of aqueous solutions of CO_2 at similar temperatures (287 and 298 K) but at lower pressures (P≤ 55 bar). These authors found similar behavior for the density profiles of water, but with an important exception: in their case the profiles exhibit the traditional shape of the hyperbolic tangent function, in which the water density decreases monotonically from the bulk density of water in the aqueous phase to zero in the CO_2 liquid phase.
The behavior and structure of the density profiles of CO_2 (continuous curves) along the interface are similar to those exhibited by other mixtures but with an important exception: CO_2 molecules exhibit activity on both sides of the liquid–liquid interface of the system. Particularly, there is an accumulation of CO_2 molecules at the CO_2 liquid phase side of the interface. This accumulation increases as the temperature of the system is decreased, as it happens with water molecules on the other side of the interface. The bulk density of CO_2 in both phases increases as the temperature is increased. This variation is more important in the CO_2 liquid phase (right side of Fig. <ref>). Contrary to what happens with water density in the CO_2 phase, the density of CO_2 in the aqueous solution is not negligible. This indicates that although the solubility of CO_2 in water is small (molar fraction of CO_2 between 0.04 and 0.09 in the range 310–250 K, respectively), its value is not so low as in the case of the solubility of water in CO_2.
It is interesting to mention that density profiles of CO_2 also exhibit depressions in the aqueous solution side of the interface indicating desorption of CO_2 molecules in this region. The desorption of CO_2 molecules at the interface is correlated with the preferential adsorption of water molecules since the relative maxima and minima occur at the same position (z≈ -0.45 nm). Note that preferential adsorption and desorption of CO_2 molecules at the LL interface of this kind of aqueous solutions has not been previously seen in the literature. Particularly, Míguez et al. <cit.> only observe density profiles that exhibit preferential adsorption at the interface.
The solubilities of CO_2 in the aqueous phase have been determined from the information of the density profiles presented in Fig. <ref> at the corresponding temperatures.
Fig. <ref> shows the solubilities of CO_2 along the isobar at the temperatures considered. We have also included in the figure (inset) the same results obtained previously by us corresponding to the methane + water system. <cit.> As can be seen, the solubility decreases as temperature increases. This result is in agreement with our previous work in which we considered the solubility of methane in water along the same isobar (400 bar) and in contact with the gas phase (see the inset). <cit.> In this work, the study is firstly done using a relatively short cutoff distance for dispersive interactions, r_c=1.0 nm, which corresponds to a reduced cutoff value of r^*=r_c/σ≊ 3.16, with σ=0.31668 nm. Here σ is the length scale of the Lennard-Jones intermolecular interactions associated with the water model (TIP4P/ice). <cit.> In order to evaluate the effect of the cutoff distance, we have also determined the solubility of CO_2 using a larger cutoff value for the dispersive interactions (1.9 nm instead of 1.0 nm). As can be seen, the effect of increasing the cutoff is important in the whole range of temperatures considered. In particular, solubility increases between 17 %, at high temperatures (310 K), and 13 % at low temperatures (250 K). This is an expected result according to previous studies of the effect of the cutoff distance on fluid-fluid coexistence. <cit.>
We have checked that there is no a priori temperature limit to perform the simulations as the temperature is decreased. From this point of view, the solubility of CO_2 in the aqueous solution can be computed without any difficulty since we do not observe nucleation of the hydrate at low temperatures. This is in agreement with previous results obtained by Grabowska et al. <cit.> However, as the temperature is decreased the dynamics of the system slows down and the equilibration of the LL interface becomes more difficult and longer simulation runs are required to achieve equilibrated density profiles.
We have also determined the solubility using the standard long-range corrections to energy and pressure
to the Lennard-Jones part of the potential (dispersive interactions). According to our results,
although long-range corrections are able to improve the solubility results, differences between these results and those obtained with a cutoff of 1.9 nm are still noticeable. In particular, the solubilities predicted using this approach are underestimated between 4 and 6 % along the isobar
at the temperatures simulated. It is interesting to compare the behavior of the CO_2 solubility, as a function of the temperature along the 400 bar isobar, with that corresponding to methane obtained by us previously. <cit.> As can be seen in the inset of Fig. <ref>, the effect of the long-range correction on the dispersive interactions is slightly larger in the case of CO_2 than in that of methane. This is an expected result since the CO_2 molecules are modeled using three Lennard-Jones interaction sites and methane with only one, and also because the solubility of CO_2 is about ten times higher than the CH_4 solubility at the same thermodynamic conditions. It is also remarkable that the use of longer cutoff distances has the opposite effect on the solubility of CO_2 to that on the solubility of methane, i.e., the solubility of CO_2 increases with an increase of the cutoff whereas it decreases in the case of the solubility of methane. This is probably due to the presence of the quadrupolar moment of the CO_2 molecule and to the water–CO_2 interactions.
In the previous paragraphs, we have presented and discussed the results corresponding to the solubility of CO_2 in the aqueous solution when it is in contact with the CO_2 liquid phase. Since both phases are in contact through a planar LL interface and are in equilibrium at the same P and T, the chemical potentials of water and CO_2 in both phases must satisfy,
μ_CO_2^I(P,T,x_CO_2^I)= μ_CO_2^II(P,T,x_CO_2^II)
and
μ_H_2O^I(P,T,x_CO_2^I)= μ_H_2O^II(P,T,x_CO_2^II)
Here the superscripts I and II label the aqueous and the
CO_2 liquid phases, respectively. Note that we have expressed the chemical potential of water in each phase in terms of the corresponding CO_2 molar fractions. It is important to note that this is consistent from the thermodynamic point of view and it is always possible since we are dealing with a binary system that exhibits two-phase equilibrium. Following the Gibbs phase rule, such a system has two degrees of freedom, which according to Eqs. (<ref>) and (<ref>) are P and T. Consequently, the thermodynamic behavior of the system is fully described by solving the previous equations since the composition of water in both phases, x_H_2O^I and x_H_2O^II, can be readily obtained as x_H_2O^I=1-x_CO_2^I and
x_H_2O^II=1-x_CO_2^II.
According to our previous results shown in Fig. <ref>, the density of water in the CO_2 liquid phase is ρ_H_2O^II≈ 0, and consequently, x_H_2O^II=ρ_H_2O^II/(ρ_H_2O^II+ρ_CO_2^II)≈ 0
and x_CO_2^II=1-x_H_2O^II≈ 1.
Following the approximations of the previous paragraph combined with Eq. (<ref>), the chemical potential of CO_2 in the aqueous solution can be obtained from the chemical potential of pure CO_2 at the same P and T,
μ_CO_2^I(P,T,x_CO_2^I)≈μ_CO_2^II(P,T,x_CO_2^II≈ 1)≈μ_CO_2^II(P,T)
The chemical potential of CO_2 along the isobar can be obtained from the thermodynamic relation,
(∂(μ_CO_2/T)/∂ T)_P,N_H_2O,N_CO_2=-h_CO_2/T^2
where h_CO_2=h_CO_2(P,T) is the partial molar enthalpy of CO_2, and the derivative is performed at constant pressure, P, and number of water and CO_2 molecules, N_H_2O and N_CO_2, respectively. Since in our case the CO_2 liquid phase is essentially a pure CO_2 liquid, h_CO_2 is simply the molar enthalpy. Consequently, the chemical potential of CO_2, as a function of the temperature, along the 400 bar isobar can be obtained by integrating the Eq. (<ref>) as,
μ_CO_2(T)/(k_BT)=μ_CO_2(T_0)/(k_BT_0)-∫_T_0^T h_CO_2(T')/(k_BT'^2) dT'
where k_B is the Boltzmann constant and T_0 is a certain reference temperature. Following our previous work <cit.>, we set μ_CO_2(T_0)=0. According to Eq. (<ref>), μ_CO_2(T) can be obtained by performing MD NPT simulations of pure CO_2 along the 400 bar isobar. In this case, since we are simulating a bulk phase, the standard NPT ensemble is used in such a way that the three dimensions of the simulation box are allowed to fluctuate isotropically. We use a cubic simulation box with 1000 CO_2 molecules. The dimensions of the simulation box, L_x, L_y, and L_z vary depending on the temperature from 3.9 to 4.2 nm. Simulations to calculate the molar enthalpy, at each temperature, are run for 100 ns: 20 ns to equilibrate the system and 80 ns as the production period to obtain h_CO_2. As in our previous work, we have not included the kinetic energy contribution (i.e., 5/2 k_BT in the case of a rigid linear molecule such as CO_2). Note that this contribution is canceled out since we are evaluating chemical potential differences at constant P and T. In this work we choose T_0=290 K as the reference temperature. The reason for this choice will become clear later in the manuscript. Figure <ref> shows the chemical potential of CO_2 as a function of the temperature (blue curve). We have also included in the same figure the chemical potential values of the bulk methane taken from our previous work <cit.> (green curve) in order to compare both chemical potentials. Note that the reference temperature at which μ of the bulk methane is set to zero is 295 K.
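In practice, the integration in Eq. (<ref>) reduces to a numerical quadrature of the simulated molar enthalpies over temperature. A possible implementation using the trapezoidal rule is sketched below; the enthalpy table is purely illustrative (not simulation data) and μ(T_0) is set to zero at the reference temperature, as in the text.

```python
import numpy as np

k_B = 1.380649e-23  # J/K

def mu_over_kT(temps, h_per_molecule, t0):
    """Integrate d(mu/k_B T) = -h/(k_B T^2) dT with mu(t0) = 0, on a grid of
    simulated molar enthalpies (J per molecule); returns mu(T)/(k_B T)."""
    temps = np.asarray(temps, dtype=float)
    h = np.asarray(h_per_molecule, dtype=float)
    integrand = h / (k_B * temps**2)
    # cumulative trapezoidal integral from the first grid point
    cumulative = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(temps))))
    # shift so that mu/(k_B T) = 0 exactly at the reference temperature t0
    ref = np.interp(t0, temps, cumulative)
    return -(cumulative - ref)

# illustrative enthalpy values only (placeholders, not results of this work)
T = np.arange(250.0, 315.0, 5.0)
h_co2 = -2.0e-20 + 4.0e-23 * (T - 290.0)   # J per CO2 molecule, made up
print(mu_over_kT(T, h_co2, t0=290.0))
```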
§.§ LL interfacial free energy
From the same simulations we have also obtained the LL interfacial tension, γ, from the
diagonal components of the pressure tensor. The coexistence pressure corresponds to the normal component,
P≡ P_zz, of the pressure tensor. The interfacial tension is obtained using the well-known combination of the normal component and the tangential components, P_xx and P_yy through the mechanical route as, <cit.>
γ=(L_z/2)[<P_zz>-(<P_xx>+<P_yy>)/2]
In Eq. (<ref>), the factor 1/2 reflects the fact that during the simulations there exist two LL interfaces in the system, where L_z is the size of the simulation box in the z direction, perpendicular to the planar interface. Fig. <ref> shows the LL interfacial tension values as obtained from MD NP_z𝒜T simulations. Results obtained in our previous work corresponding to the LL interfacial tension of the methane + water mixture are also shown in the inset of the figure. <cit.> The interfacial tension decreases as the temperature is increased. We first calculate the interfacial tension using a cutoff value of 1.0 nm (green circles). In this case, we have used 50 ns for the equilibration period and a further 50 ns for the production period in which the averages are calculated. Our results indicate that the values exhibit large fluctuations, especially at low temperatures. To improve our results, we have extended the simulations. The first 150 ns correspond to the equilibration period and the extra 150 ns are used to obtain the corresponding average values (production period). As can be seen (green diamonds), although the mean values obtained in both cases are similar, the error bars decrease, especially at the lowest temperature.
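A sketch of how Eq. (<ref>) can be evaluated from time series of the diagonal pressure-tensor components, including a sub-block estimate of the error bars, is given below. The synthetic pressure arrays stand in for the values written out by the MD code, and the unit conversion assumes pressures in bar and lengths in nm.

```python
import numpy as np

def surface_tension(pxx, pyy, pzz, lz, n_blocks=10):
    """Interfacial tension (mJ/m^2) and its error from diagonal pressure-tensor
    time series (bar) and the box length Lz (nm), for a box containing two
    planar interfaces (hence the factor 1/2)."""
    gamma_t = 0.5 * lz * (pzz - 0.5 * (pxx + pyy))   # bar*nm, per frame
    blocks = np.array_split(gamma_t, n_blocks)
    means = np.array([b.mean() for b in blocks])
    gamma = means.mean()
    err = means.std(ddof=1) / np.sqrt(n_blocks)
    # 1 bar*nm = 1e5 Pa * 1e-9 m = 1e-4 N/m = 0.1 mJ/m^2
    return gamma * 0.1, err * 0.1

# synthetic example: fluctuating pressure components in bar
rng = np.random.default_rng(1)
pxx, pyy = rng.normal(380, 40, 50000), rng.normal(382, 40, 50000)
pzz = rng.normal(400, 40, 50000)
print(surface_tension(pxx, pyy, pzz, lz=12.0))
```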
It is well-known that the equilibrium interfacial tension value associated with an interface critically depends on the molecular details. In particular, its value is very sensitive to the cutoff used for the dispersive interactions during the simulations. <cit.> To account for the long-range interactions associated with the dispersive interactions, we have performed simulations using a cutoff distance of 1.9 nm, with 50 ns for equilibration and another 50 ns for production time (red circles). Note that in this case we are not using long-range corrections to energy and pressure. This value corresponds to a reduced cutoff distance r_c^*=r_c/σ=6, which is nearly double the value used in the first set of simulations. As can be seen, the main effect of increasing the cutoff distance is to decrease the interfacial tension values. Particularly, the effect is larger at low temperatures, where the difference is about 3-4 mJ/m^2. However, at high temperatures, differences are about 1 mJ/m^2.
In order to account for the effect of the simulation length, we have extended the simulation at 310 K. As in the other set of simulations, we have equilibrated the system during the first 150 ns and used the next 150 ns to perform the corresponding averages (red triangles up). As can be seen, the new results are in practice identical to those obtained using only 50 ns for production time.
To be consistent with the calculations of the solubility of CO_2 in the aqueous solution, we have also determined the interfacial tension using the traditional LRC to energy and pressure. As in the previous case, we have only simulated using these corrections at 250 and 310 K (orange circles). As can be seen, this approximation is not able to provide consistent results at low temperatures when compared with the data obtained using r_c=1.9 nm. In particular, the interfacial tension is overestimated by more than 4.5 mJ/m^2, which represents more than 13% with respect to the value obtained using the larger cutoff distance. At the highest temperature, however, the overestimation of the interfacial tension is only about 0.5 mJ/m^2 (2%). Discrepancies at low temperatures between the results obtained using the largest cutoff distance (1.9 nm) and those using the traditional energy and pressure long-range corrections are probably due to the differences in densities in both liquid phases at these conditions.
We have also compared the predictions obtained from MD simulations with experimental data taken from the literature (magenta squares). <cit.> Unfortunately, to the best of our knowledge, there are no experimental data below 310 K. Our results are in good agreement with experimental measurements, although we slightly overestimate them by about 6%.
Finally, it is also interesting to compare the LL interfacial tension values obtained in this work for the CO_2 + water system with those obtained in our previous work <cit.> for the methane + water binary mixture. Computer simulation values of the latter system are shown in the inset of Fig. <ref>. As can be seen, LL interfacial tension values of the methane + water system are approximately twice those corresponding to the mixture containing CO_2. But perhaps the most interesting feature is that, as happens with the solubilities (see Fig. <ref>), an increase of the cutoff distance due to the dispersive interactions has the opposite effect in the systems containing methane and CO_2: an increase of the cutoff distance in the CO_2 system lowers the LL interfacial tension values, while in the methane mixture the interfacial tension values increase when the cutoff is larger (blue triangles up correspond to r_c=0.9 nm and dark green triangles down to r_c=1.7 nm). Also note that the effect of the long-range dispersive contributions is more important in the CO_2 + water mixture than in the system containing methane. As we have discussed previously, this could be due to the presence of the electric quadrupole of CO_2.
§.§ Solubility of carbon dioxide in water from the hydrate phase
We have also determined the solubility of CO_2 in water when the aqueous solution is in contact with the hydrate along the 400 bar isobar at several temperatures. We first prepare a simulation box of CO_2 hydrate by replicating a unit cell of the hydrate four times along each spatial direction (4× 4 × 4), using 2944 water and 512 CO_2 molecules. This corresponds to a hydrate with the cages (8 cages per unit cell) fully occupied by CO_2 molecules. We equilibrate the simulation box for 40 ns using an anisotropic barostat along the three axes. This allows the dimensions of the simulation box to change independently. The pressure is the same along the three directions and equal to 400 bar to allow the solid to relax and avoid any stress. In order to help the system reach equilibrium, we also prepare boxes of aqueous solutions with different concentrations of CO_2 depending on the temperature. This allows equilibrium to be reached as quickly as possible in the last stage of the simulations when the hydrate and liquid phases are put in contact (see below). Particularly, the hydrate phase will grow or melt depending on the initial conditions, releasing/absorbing water and CO_2 molecules to/from the aqueous solution, until the solution phase reaches the equilibrium condition. Although the final state is independent of the initial CO_2 concentration in the aqueous phase, care must be taken in finite systems such as those studied in this work. Initial conditions must be close enough to coexistence so that the system is able to reach equilibrium before exhaustion of any of the phases at coexistence. In this particular work, we have checked that density profiles of water and CO_2 in the aqueous phase reach the equilibrium value. In practice, this is done by monitoring the averaged profiles every 100 ns until no significant variations in their bulk regions are observed. Once densities of water and CO_2 are obtained, the molar fraction of CO_2 in the aqueous solution is calculated from the corresponding averaged density values.
We use simulation boxes of solutions containing 4000 water molecules and a number of CO_2 molecules that varies with temperature: 50 (250, 260, and 270 K), 120 (280 and 290 K), and 240 (295 K) CO_2 molecules. We equilibrate each simulation box for 40 ns using the isothermal-isobaric or NP_z𝒜T ensemble. In this case, two of the dimensions of the simulation boxes, arbitrarily named L_x and L_y, are kept constant (L_x=L_y vary between 4.77 and 4.82 nm (𝒜≃ 15× 15 σ^2) depending on the temperature) and equal to the values of two lengths of the simulation box of the hydrate. L_z is, however, allowed to vary to achieve the equilibrium pressure of 400 bar. Particularly, L_z varies from 10.28 to 10.53 nm depending on the temperature. Finally, the equilibrated hydrate and aqueous solution simulation boxes are assembled along the z direction sharing a planar solid-liquid interface with interfacial area 𝒜=L_x× L_y. We then perform simulations in the NPT ensemble using an anisotropic barostat with pressures identical in the three directions and equal to 400 bar. This allows the solid to relax, avoiding any stress, and yields the correct value of the solubility at each temperature. Systems are equilibrated for 100 ns. After this, we run an additional 300 ns to obtain the equilibrium density profiles of the system from 250 up to 295 K.
Fig. <ref> shows the density profiles of water and CO_2 molecules as obtained from anisotropic NPT simulations at 400 bar and temperatures ranging from 250 to 295 K. The density profiles have been obtained as explained in section III.A. Note that at temperatures above 295 K it is not possible to determine the solubility because the hydrate melts. In other words, there is a kinetic limit at high temperatures to determine the solubility of CO_2 from the hydrate.
As in the case of the LL coexistence described in section III.A, we only plot half of the profiles corresponding to one of the interfaces exhibited by the system. The right side of the figure corresponds to the hydrate phase and the left side to the aqueous solution phase. The density profiles in the hydrate phase exhibit the usual solid-like behavior for water and CO_2 molecules, with peaks at the corresponding crystallographic equilibrium positions at which the molecules are located in the hydrate. As can be seen, the density profiles at the lowest temperatures, from 250 up to 280 K, show nearly the same structure, and only small differences are observed at the hydrate-solution interface, as expected.
It is also interesting to analyze the behavior of the profiles of water and CO_2 in the aqueous phase. The density profiles near the interface show some structural order due to the presence of the hydrate phase. Note that the positional order of the molecules is more pronounced at low temperatures, T≤ 280 K. The bulk density of water (left side of the figure) slightly decreases as the temperature is increased, especially close to temperatures at which the hydrate melts. It is interesting to mention that bulk density profiles vary with temperature in the opposite way to when the aqueous solution is in contact with the CO_2 liquid phase (see Fig. <ref>).
The bulk density of CO_2 (in the aqueous solution phase) increases as the temperature is increased. From the inspection of Fig. <ref> it is clearly seen that the hydrate phase becomes less stable as the temperature approaches 290-295 K. At lower temperatures, the hydrate-solution interface is located at approximately z≈ 5 nm, with the hydrate phase showing 6-7 well-defined CO_2 layers. However, at 290 and 295 K only 5-6 layers can be observed in the hydrate phase, with the interface located at z≈ 6 nm. In fact, as we have previously mentioned, it is not possible to keep the hydrate stable at temperatures above 295 K; it eventually melts at higher temperatures.
We have calculated, from the information obtained from the density profiles, the solubility of CO_2 in the aqueous solution when it is in contact with the hydrate. As can be seen in Fig. <ref>, the solubility of CO_2 increases as the temperature is raised. We have also included in Fig. <ref> (inset) the same results obtained previously by us corresponding to the methane + water system. <cit.> It can be seen that our results are in agreement with the results of the solubility of methane. Contrary to what happens in the case of the methane hydrate, it is only possible to calculate the solubility up to a temperature of 295 K.
This is just a few degrees (around five) above the three-phase coexistence temperature T_3 of the CO_2 hydrate at this pressure (see section III.D). In the case of our previous work, the hydrate was kept in metastable equilibrium up to about 35 K above the dissociation temperature of the methane hydrate (295 K). <cit.>
We have also considered the impact of using different cutoff distances on solubilities. As in the case of the results presented in Section III.A, we have used three different cutoff distances for the dispersive interactions. Particularly, we use the same two values, 1.0 and 1.9 nm. We have also performed simulations using the standard long-range corrections to energy and pressure with a cutoff value of 1.0 nm. Contrary to what happens with the solubility of CO_2 in water when the solution is in contact with the other liquid phase (CO_2), the solubility does not depend on the cutoff distance, as can be seen in Fig. <ref>. Our results indicate that the long-range corrections due to the dispersive interactions have little or negligible effect on solubilities in aqueous solutions in contact with the hydrate. However, according to Figs. <ref> and <ref>, long-range interactions play a key role in the thermodynamic and interfacial properties in systems involving fluid phases. <cit.>
We have considered the effect of the CO_2 occupancy in the hydrate on the solubility in the aqueous solution. We have prepared the initial simulation boxes in a similar way to the case of full occupancy but with CO_2 occupying half of the small or D cages. Particularly, we use 2944 water (46× 4× 4× 4) and 448 CO_2 molecules. This means that the occupancy of the large or T cages is 100% (384 CO_2 molecules) and the occupancy of the small or D cages is 50% (64 CO_2 molecules). This represents an overall occupancy of 87.5% of the D and T cages. According to the experimental data, <cit.> the equilibrium occupancy of large or T cages of the CO_2 hydrate is nearly 100%. However, although there is a large discrepancy in measurements and predictions of the small cage occupancy, it is generally accepted that occupancy of small or D cages is approximately 30-60% depending on thermodynamic conditions. Notice that with this occupancy (i.e., 87.5%) the ratio of water to CO_2 molecules in the hydrate is not 5.75 (as when the occupancy is 100%) but its value is now 46/7 ≈ 6.57. We follow the same procedure explained in the previous paragraph, with a cutoff distance of 1.0 nm. As a result, we obtain similar density profiles to those shown in Fig. <ref>. The solubility of CO_2 in water when the aqueous solution is in contact with the hydrate with 87.5% of occupancy is also shown in Fig. <ref>. As can be seen, the solubility of CO_2 when it is in contact with the hydrate with an occupancy of 87.5% is, within the error bars, the same as that when it is in contact with the fully occupied hydrate.
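The bookkeeping behind the 87.5% occupancy case is simple enough to make explicit. The small helper below reproduces the molecule counts and the water-to-CO_2 ratio quoted above for a 4×4×4 replica of the sI unit cell (46 water molecules, 2 small D cages, and 6 large T cages per cell); the function name is of course arbitrary.

```python
def si_hydrate_counts(n_cells, occ_small, occ_large=1.0):
    """Molecule counts for an sI hydrate box of n_cells unit cells with
    partial occupancy of the small (D) and large (T) cages."""
    n_water = 46 * n_cells
    n_guest = n_cells * (2 * occ_small + 6 * occ_large)
    return n_water, n_guest, n_water / n_guest

# 4 x 4 x 4 = 64 unit cells, D cages half filled, T cages fully occupied
print(si_hydrate_counts(64, occ_small=0.5))   # (2944, 448.0, ~6.57)
```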
Finally, it is important to remark that, contrary to what we have found in our previous work for the solubility of methane in water, <cit.> we do not find melting of the hydrate in a two-step process, i.e., one in which a bubble of pure methane appears in the liquid phase as a first step and then the methane in the aqueous solution moves to the bubble and the methane from the hydrate moves to the aqueous solution as a second step. We find here that the hydrogen bonds of the layer of the hydrate in contact with the aqueous solution break and the hydrate starts to melt. Particularly, when the temperature is increased the concentration of CO_2 in the aqueous solution increases in order to stabilize the hydrate phase. In the case of the methane hydrate, the number of methane molecules released to the aqueous phase to achieve the new equilibrium state is small (the solubility of methane in water is very small) and the metastable hydrate phase can exist above T_3. However, in the case of the CO_2 hydrate, the CO_2 saturates the aqueous phase (the solubility of CO_2 in water increases greatly with the temperature). The hydrate becomes unstable, the hydrogen bonds of the hydrate layer next to the aqueous solution break, and the hydrate finally melts.
§.§ Three-phases coexistence from solubility calculations
We have obtained the solubility of CO_2 in the aqueous solution, as a function of the temperature at a fixed pressure of 400 bar, when it is in contact with the CO_2 liquid phase (Section III.A) and with the hydrate (Section III.C). In both cases, the system exhibits two-phase coexistence. It is interesting to represent both solubilities in the same plot as we did in our previous study, <cit.> and as it is shown in Fig. <ref>. Since one of the solubility curves is a decreasing function of the temperature and the other an increasing function of the temperature, there exists a certain temperature, that we will call T_3 for reasons that will be clear soon, at which both solubilities are equal at 400 bar.
The points of the solubility curve of CO_2 in the aqueous solution from the CO_2 liquid phase correspond to thermodynamic states at which the pressure and the chemical potentials of water and CO_2 in the aqueous phase are equal to those in the CO_2 liquid phase. In addition to this, the points of the solubility of CO_2 in water from the hydrate phase correspond to states at which the pressure is also the same and at which the chemical potentials of both components in the aqueous phase are equal to those in the hydrate phase. Consequently, at T_3, the temperature, pressure, and chemical potentials of water and CO_2 in the aqueous solution, CO_2 liquid, and hydrate phases are the same. This means that the point at which the two solubility curves cross represents a three-phase coexistence state of the system at 400 bar. This is also known as the dissociation temperature of the CO_2 hydrate at the corresponding pressure (400 bar).
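Since T_3 is simply the temperature at which the two solubility curves cross, it can be located numerically by interpolating both data sets and finding the sign change of their difference. The sketch below uses made-up solubility values purely to illustrate the procedure.

```python
import numpy as np

# illustrative (not simulated) solubility data x_CO2(T) at 400 bar
T_liq = np.array([250., 270., 290., 310.])        # in contact with liquid CO2
x_liq = np.array([0.090, 0.070, 0.055, 0.045])    # decreases with T
T_hyd = np.array([250., 270., 290., 295.])        # in contact with the hydrate
x_hyd = np.array([0.030, 0.042, 0.055, 0.060])    # increases with T

# interpolate both curves on a common fine grid and locate the sign change
T = np.linspace(255.0, 294.0, 4000)
diff = np.interp(T, T_liq, x_liq) - np.interp(T, T_hyd, x_hyd)
i = np.flatnonzero(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
# linear interpolation of the crossing between the two bracketing grid points
T3 = T[i] - diff[i] * (T[i + 1] - T[i]) / (diff[i + 1] - diff[i])
print(f"estimated T3 = {T3:.1f} K")
```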
The value obtained in this work for the T_3 is 290(2) K when the occupancy of the hydrate is 100% (all the T and D cages are occupied by CO_2 molecules). We assume here an uncertainty of 2 K for the dissociation temperature of the hydrate, following the same estimation of the T_3 error of the methane hydrate determined in our previous work. <cit.> We have also determined the dissociation temperature of the hydrate when the occupancy of the small or D cages is 50% (87.5% overall occupancy) using r_c=1.0 nm. In this case, T_3=290.5(2) K, which is the same value obtained for the fully occupied hydrate within the error. Both dissociation temperature results seem to be occupancy-independent with the employed methodology and are in good agreement (within the corresponding uncertainties) with the value obtained by Míguez et al. using the direct coexistence technique,<cit.> 287(2) K. It is important to remark here that we are using the same models for water (TIP4P/ice)<cit.> and CO_2 (TraPPE),<cit.> the same unlike dispersive interactions between both components, and the same cutoff distance for dispersive interactions (r_c=1.0 nm) as in the work of Míguez et al. <cit.> At this point it is important to remark that the system sizes of this work are different from those used in the work of Míguez et al., <cit.> and this could have a subtle effect on T_3 because of finite-size effects, as has been found for the melting point of ice Ih. <cit.> The experimental value of T_3 at 400 bar is 286 K, so the force field used in this work provides a quite reasonable prediction.
Other authors have determined the T_3 for this system from computer simulation. Costandy et al.<cit.> have calculated the dissociation temperature of the CO_2 hydrate at 400 bar using the direct coexistence technique. They obtained a value of 283.5(1.7) K. Although they also used the same water and CO_2 models, a number of differences lead to a slightly different value of T_3: different unlike dispersive interactions between water and CO_2 and cutoff distance for dispersive interaction (1.1 nm). Waage and collaborators <cit.> have also determined the dissociation line of the hydrate using free energy calculations. They also use the same models for water and CO_2 but different unlike dispersive interactions between them. In this case, the cutoff distance for dispersive interactions is r_c=1.0 nm. These authors calculate the dissociation temperature of the hydrate at 200 and 500 bar. The values obtained are 283.9(1.7) and 284.8(0.9) K, respectively. Interpolating to 400 bar, the T_3 is 284.5 K, in good agreement with the results of Costandy et al.<cit.> Unfortunately, the result obtained here can not be compared with the predictions of Costandy et al.<cit.> and Waage and collaborators <cit.> since dispersive interactions are not the same as those used here.
Finally, it is important to focus on the effect of the cutoff distance used to evaluate the long-range dispersive interactions. The T_3 value of 290(2) K has been obtained using a cutoff distance of 1.0 nm. We have also analyzed the solubilities of CO_2 from the CO_2 liquid and the hydrate phases using a much larger cutoff distance (i.e., 1.9 nm). As we have previously shown, the solubility of CO_2 from the CO_2 liquid phase increases when the value of the cutoff is increased. On the other hand, the solubility of CO_2 from the hydrate phase is not affected by the use of larger cutoff values. Consequently, the combined effect of the increase of the cutoff distance of the dispersive interactions is an increase of T_3, since it is given by the intersection of the two solubility curves shown in Fig. <ref>. Particularly, the dissociation temperature of the hydrate is now found at 292(2) K, approximately 2 K above the T_3 observed with a cutoff distance of 1.0 nm.
It is interesting to compare the effect of the cutoff of the dispersive interactions in both the CO_2 and methane hydrates. As can be seen, in the case of the methane hydrate, T_3 is shifted towards lower temperatures, by 2 K, when the cutoff is increased. In the current case (CO_2 hydrate), we observe the opposite effect, i.e., T_3 increases when the cutoff distance is increased. This is the same effect as observed for the solubility curve of CO_2 in the aqueous solution in contact with the CO_2 liquid phase. This effect, contrary to that observed in the methane hydrate, could be due to the electrostatic interactions of the quadrupole of CO_2 with other CO_2 molecules and also with water molecules, which are not present in the case of methane. We think this issue deserves a more detailed study, but this is beyond the scope of the current work.
We have determined the dissociation line of the CO_2 hydrate at 400 bar from the calculation of the solubility of CO_2 when the aqueous solution is in contact with the other two phases in equilibrium, the CO_2 liquid phase and the hydrate phase. Grabowska and collaborators <cit.> have already demonstrated that this route allows the T_3 of hydrates to be determined. This work confirms that this methodology is a good alternative to the direct coexistence method. Particularly, it shows a slightly better efficiency compared with the other technique (shorter simulation times are required) and provides consistent values of T_3.
§.§ Driving force for nucleation of hydrates
The dissociation line of the CO_2 hydrate separates its phase diagram in two parts in which two different two-phase coexistence regions exist. <cit.> At a certain pressure, for instance, 400 bar, at temperatures above T_3 the system exhibits LL immiscibility between an aqueous solution and a CO_2 liquid phase. Note that the solubility of water in CO_2 is very small and the CO_2 liquid phase can be considered pure CO_2 in practice. However, at temperatures below T_3 the system exhibits SL phase equilibrium between the hydrate and a fluid phase (water or CO_2 depending on the global composition of the system). This is consistent with the nature of the dissociation or three-phase line at which the hydrate, aqueous solution, and CO_2 liquid phases coexist.
The fluid phase in equilibrium with the hydrate below T_3 depends on the global composition of the system. Here we assume that the hydrate is fully occupied by CO_2 molecules, i.e., 8 CO_2 molecules for every 46 water molecules according to the stoichiometry of type sI hydrates. Let N_H_2O and N_CO_2 be the number of water and CO_2 molecules used in the fluid phases during the simulations, respectively. If the ratio N_H_2O/N_CO_2> 5.75, one should have hydrate–water phase separation (below T_3). However, if N_H_2O/N_CO_2< 5.75, one should have a hydrate–CO_2 two-phase system for T<T_3.
As described by Kashchiev and Firoozabadi <cit.> and Grabowska and collaborators, <cit.> the formation of a hydrate in the aqueous solution phase can be viewed as a chemical reaction that takes place at constant P and T,
CO_2 (aq,x_CO_2) + 5.75 H_2O (aq,x_CO_2) → [CO_2(H_2O)_5.75]_H
Since we work at constant pressure in this work (P=400 bar), we drop the dependence of P in the rest of equations. Assuming that all the cages of the hydrate are filled, a unit cell of CO_2 hydrate is formed by 46 water molecules and 8 CO_2 molecules, i.e., 1 CO_2 molecule per 46/8=5.75 water molecules. According to this, Eq. (<ref>) considers the hydrate as a new compound formed from one molecule of CO_2 and 5.75 molecules of water. We can also associate to this compound one unique chemical potential for the hydrate at T, μ_H^H(T). Note that this chemical potential is simply the sum of the chemical potential of CO_2 in the solid plus 5.75 times the chemical potential of water in the solid, i.e.,
μ^H_H(T)=μ_CO_2^H(T)+5.75 μ_H_2O^H(T)
According to the previous discussion, the compound [CO_2(H_2O)_5.75]_H is simply the “hydrate” and we call one “molecule” of the hydrate in the solid the molecule [CO_2(H_2O)_5.75].
Following Kashchiev and Firoozabadi, <cit.> we denote the driving force for nucleation of the hydrate formed from the aqueous solution with a concentration x_CO_2 at T as,
Δμ_N(T,x_CO_2) =μ^H_H(T)
-μ^aq_CO_2(T,x_CO_2)-5.75 μ^aq_H_2O(T,x_CO_2)
Note that Δμ_N in this paper is Δμ_nucleation in our previous paper. <cit.> Δμ_N also depends on pressure but since we are working at constant pressure (400 bar), we drop the pressure dependence from all the equations in this study. μ_H^H(T) has been previously defined in Eq. (<ref>) as the chemical potential of the “hydrate molecule” in the hydrate phase, and μ^aq_CO_2(T,x_CO_2) and μ^aq_H_2O(T,x_CO_2) are the chemical potentials of CO_2 and water in the
aqueous solution, respectively, at T and molar fraction of CO_2, x_CO_2. Note that the composition of CO_2 in Eq. (<ref>) is, a priori, independent of the pressure and temperature selected. In other words, one could have different driving forces for nucleation, at a given P and T, changing the composition of the aqueous solution (for instance, in a supersaturated solution of CO_2). However, there exists a particular value of x_CO_2 which is of great interest from the experimental point of view. Experiments on the nucleation of hydrates are performed when the water phase is in contact with the CO_2 liquid phase through a planar interface. Since both phases are in equilibrium at P and T, the solubility of CO_2 in water (molar fraction of CO_2 in the aqueous solution) is fully determined since x^eq_CO_2≡ x_CO_2^eq(T). Following the notation of Grabowska and coworkers, <cit.> the driving force for nucleation at experimental conditions is given by,
Δμ^EC_N(T) =μ^H_H(T) -μ^aq_CO_2(T,x_CO_2^eq(T))
-5.75 μ^aq_H_2O(T,x_CO_2^eq(T))
Note that Δμ^EC_N depends only on T (and on P but in this work we are working at the same P=400 bar). We provide here valuable information for this magnitude when the molecules are described using the TIP4P/Ice and TraPPE models for water and CO_2, respectively.
In the next sections we concentrate on the driving force for nucleation at experimental conditions, Δμ^EC_N(T), obtained using four different routes. In the first one (route 1), we use the definition of the driving force of nucleation given by Eq. (<ref>). In the second one (route 2), we use the solubility curves of CO_2 from the hydrate and the CO_2 liquid phase. In the third one (route 3), we use the enthalpy of dissociation of the hydrate and assume that it does not change with temperature or composition. In the fourth one (route 4), we propose a novel methodology based on the use of the solubility curve of CO_2 with the hydrate, valid not only for the CO_2 hydrate but also for other hydrates. This route can be used to determine Δμ_N at any arbitrary temperature and mixture composition and not only at experimental conditions. Finally, we discuss the results obtained using the different routes and compare the driving force for nucleation of the CO_2 hydrate with that of the methane hydrate obtained in our previous work. <cit.>
§.§.§ Route 1 for calculating Δμ^EC_N.
Route 1 was proposed and described in our previous work <cit.> and we summarize here only the main approximations and the final expression of the driving force for nucleation. To evaluate Δμ^EC_N(T,x^eq_CO_2) in Eq. (<ref>) we need to calculate the chemical potential of the “hydrate molecule” in the hydrate phase, and the chemical potentials of CO_2 and water in the aqueous phase at a supercooled temperature T below T_3. The change in the hydrate chemical potential when the temperature passes from T_3 to T can be evaluated in a similar way to that for pure CO_2 from T_3 to T. In fact, this latter change has already been calculated in Section III.A using Eq. (<ref>) and evaluated from the corresponding thermodynamic integration using computer simulations in the NPT ensemble.
The chemical potential of water in the aqueous phase at T can be estimated using the procedure of Grabowska and collaborators <cit.> applied to the case of the CO_2 hydrate according to Eqs. (20)-(26) of their paper. This is done in two steps. In the first step, the change in the chemical potential of the solution when its temperature passes from T_3 to T is approximated by that of pure water calculated from thermodynamic integration (see Eqs. (24) and (26) of the work of Grabowska and collaborators). The second step involves the change in the chemical potential of water in the solution when the composition of CO_2 changes from x^eq_CO_2(T_3) to x^eq_CO_2(T) (see Eq. (25) of our previous work). The rigorous calculation of this contribution requires the knowledge of the activity coefficient of water in an aqueous solution with a given composition of water, γ^aq_H_2O(T,x^eq_H_2O), at T and T_3. Grabowska and coworkers assume that, in the case of an aqueous solution of methane, this magnitude is γ^aq_H_2O≈ 1 since the solution is very dilute. We follow the same assumption here.
Using these approximations it is possible to compute the driving force at experimental conditions for nucleation given by Eq. (<ref>). The final expression is given by,
Δμ^EC_N(T,x^eq_CO_2)/(k_BT)=
-∫_T_3^T [h_H^H(T')-{h_CO_2^pure(T')+5.75 h_H_2O^pure(T')}]/(k_BT'^2) dT'
- [k_BT ln{x^eq_H_2O(T)}-k_BT_3 ln{x^eq_H_2O(T_3)}]
Here h_H^H=H/N_CO_2 is the enthalpy H of the hydrate per CO_2 molecule and N_CO_2 is the number of CO_2 molecules in the hydrate. Note that Eq. (<ref>) is consistent with the view of Kashchiev and Firoozabadi <cit.> of the hydrate as a new compound formed from one molecule of CO_2 and 5.75
molecules of water when the hydrate is fully occupied. Also note that it is necessary to use the fact that the driving force for nucleation at T_3 is equal to zero, i.e.,
Δμ^EC_N(T_3,x^eq_CO_2) =μ^H_H(T_3) -μ^aq_CO_2(T_3,x^eq_CO_2(T_3))
-5.75 μ^aq_H_2O(T_3,x^eq_CO_2(T_3))=0
This is equivalent to arbitrarily setting to zero the chemical potentials of CO_2 and water in the hydrate at T_3. Eq. (<ref>) is similar to Eq. (<ref>) (route 3 or dissociation route) but taking into account two effects: (1) the temperature dependence of the molar enthalpies of the hydrate, CO_2, and water; and (2) the change of composition of the solution when passing from T_3 to T (see route 3 below for further details).
As we have previously mentioned, each change in the chemical potentials needed to compute the driving force for nucleation is obtained by evaluating molar enthalpies of pure CO_2, water, and hydrate phases involved in the integrals given by Eq. (<ref>). Note that the chemical potential of CO_2 in the aqueous solution has been already obtained in Section III.A. Particularly since in this route one is interested in computing Δμ_N at experimental conditions, the chemical potential of CO_2 in the aqueous solution is equal to that of pure CO_2 at the same P and T. Consequently, it has been obtained from the integration of the molar enthalpy at several temperatures according to Eq. (<ref>).
In the case of water, the chemical potential can be obtained by performing MD NPT simulations of pure water along the 400 bar isobar. As in the case of pure CO_2, we use the standard NPT ensemble, in such a way that the three dimensions of the simulation box are allowed to fluctuate isotropically. We use a cubic simulation box with 1000 H_2O molecules. The dimensions of the simulation box, L_x, L_y, and L_z vary depending on the temperature from 3.14 to 3.08 nm. Simulations to calculate the molar enthalpy, at each temperature, are run for 100 ns: 20 ns to equilibrate the system, and 80 ns as the production period to obtain h^pure_H_2O.
In the case of the pure hydrate, we have obtained the chemical potential in a similar way, by performing simulations in the NPT ensemble using an isotropic barostat at 400 bar. At the beginning of each simulation, we use a cubic box formed by 27 replicas of the unit cell in a 3× 3× 3 geometry. The dimensions of the simulation box vary between 3.85 and 3.62 nm depending on the temperature. As in the rest of the simulations, we calculate the enthalpy at different temperatures, from 260 to 295 K. Simulations are run for 100 ns: 20 ns to equilibrate the system and 80 ns to calculate the molar enthalpy of the hydrate.
§.§.§ Route 2 for calculating Δμ^EC_N.
Route 2 was also proposed and described in our previous work <cit.>. This route is inspired by the work of Molinero and coworkers<cit.> and we summarize here only the main approximations and the final expression of the driving force for nucleation. According to this, it is possible to find a different, but an equivalent, thermodynamic route to calculate the driving force for the nucleation of methane hydrates. We check in this work whether this approach can also be used to deal with CO_2 hydrates. Let us consider Eq. (<ref>) at experimental conditions, i.e., at the equilibrium composition of CO_2 in the aqueous solution when it is in contact via a planar interface with a CO_2 liquid phase (L), x^eq_CO_2(T|L). Note that the vertical line represents flat interface equilibrium with
the liquid CO_2. To clarify the derivation of the final expression, we write explicitly the solubility of water in the solution, at experimental conditions, as x^eq_H_2O(T|L). Obviously, as we have mentioned previously in Section III.A, the solubility of water in the solution can be obtained readily as x^eq_H_2O(T|L)=1-x^eq_CO_2(T|L).
We now assume that the chemical potential of the ideal solution's components can be expressed, in general, in terms of the chemical potentials of the pure components in the standard state and their molar fractions. In other words, since the molar fraction of CO_2 in the solution is small, we are assuming that water is the dominant component (solvent) and the CO_2 is the minor component (solute) in the mixture. Under these circumstances, the activity coefficients of water and CO_2 are close to one. <cit.> According to this and following our previous work,<cit.> Eq. (<ref>) can be written as,
Δμ^EC_N(T) = -k_BT ln[x^eq_CO_2(T|L)/x^eq_CO_2(T|H)] - 5.75 k_BT ln[x^eq_H_2O(T|L)/x^eq_H_2O(T|H)]
x^eq_CO_2(T|H) and x^eq_H_2O(T|H) represent the molar fractions of CO_2 and H_2O in the solution when it is in equilibrium via a planar interface (vertical line) with the hydrate phase (H), respectively. Note that Δμ^EC_N(T) and all the molar fractions also depend on pressure, but here we work at fixed P=400 bar. This is the equation obtained previously by us considering the driving force for the nucleation of the methane hydrate.<cit.> As we will see later in this section, Eq. (<ref>) does not provide reliable values for the driving force of nucleation of the CO_2 hydrate, contrary to what happens with the methane hydrate. The solubilities of methane in the solution when it is in contact with the methane phase and with the hydrate are one order of magnitude lower than those of CO_2 in the case of the CO_2 hydrate. Consequently, this route can be useful only in cases in which the solubility of the guest is extremely low.
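Although, as discussed above, this route turns out not to be reliable for CO_2, Eq. (<ref>) itself is trivial to evaluate once the two solubility curves are known. A hedged sketch, with placeholder molar fractions, is given here.

```python
import numpy as np

k_B = 1.380649e-23  # J/K

def route2_driving_force(T, x_co2_L, x_co2_H):
    """Route 2 estimate of the driving force per 'hydrate molecule' (J),
    assuming ideal-solution behaviour. x_co2_L and x_co2_H are the CO2 molar
    fractions in water in equilibrium with the CO2 liquid and hydrate phases."""
    x_h2o_L, x_h2o_H = 1.0 - x_co2_L, 1.0 - x_co2_H
    return (-k_B * T * np.log(x_co2_L / x_co2_H)
            - 5.75 * k_B * T * np.log(x_h2o_L / x_h2o_H))

# placeholder molar fractions at a supercooled temperature (not simulation data)
dmu = route2_driving_force(T=270.0, x_co2_L=0.07, x_co2_H=0.042)
print(dmu, dmu / (k_B * 270.0))   # in J and in units of k_B*T
```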
§.§.§ Route 3 (dissociation) for calculating Δμ^EC_N.
It is possible to estimate the driving force for nucleation of a hydrate using a simple and approximate route based on the knowledge of the enthalpy of dissociation of the hydrate. <cit.>
The dissociation enthalpy of the hydrate, h_H^diss, is defined as the enthalpy change of the process, <cit.>
[CO_2(H_2O)_5.75]_H→CO_2 (liq) +
5.75 H_2O (liq)
Dissociation enthalpies are usually calculated assuming that the hydrate dissociates into pure water and pure CO_2. Note that this corresponds to the definition of enthalpy of dissociation and that in reality CO_2 will be dissolved in water and an even smaller amount of water will be dissolved in the CO_2 liquid phase. We have determined the dissociation enthalpy of the hydrate simply by performing simulations of the pure phases (hydrate, water, and CO_2) at several temperatures at 400 bar.
According to our previous work, <cit.> we evaluate the driving force for nucleation assuming the following approximations: (1) the enthalpy of dissociation of the hydrate, h_H^diss, does not change with temperature; (2) its value can be taken to be that at T_3; and (3) the enthalpy of dissociation does not vary with the composition of the aqueous solution containing CO_2 when the temperature is changed. According to this, Δμ_N^EC is given by,
Δμ_N^EC=k_BT∫_T_3^T h^diss_H/(k_BT'^2) dT'≊ -h^diss_H(T_3)(1-T/T_3)
Note that Eq. (<ref>) reduces to Eq. (<ref>) under the approximations used in this route.
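Route 3 requires only the dissociation enthalpy at T_3; a minimal sketch of the approximate expression in Eq. (<ref>) follows, with an illustrative (not simulated) value of h_H^diss.

```python
k_B = 1.380649e-23  # J/K

def route3_driving_force(T, T3, h_diss_T3):
    """Approximate driving force per 'hydrate molecule' (J), assuming the
    dissociation enthalpy (J per CO2 molecule) is constant at its T3 value."""
    return -h_diss_T3 * (1.0 - T / T3)

# illustrative dissociation enthalpy only (placeholder, not a simulation result)
h_diss = 1.0e-19  # J per CO2 molecule
dmu = route3_driving_force(T=270.0, T3=290.0, h_diss_T3=h_diss)
print(dmu, dmu / (k_B * 270.0))   # in J and in units of k_B*T
```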
§.§.§ Route 4 for calculating Δμ^EC_N.
The driving force for nucleation of the CO_2 hydrate, at any arbitrary temperature, T_N, and molar fraction of CO_2 in the aqueous solution, x_CO_2^(N), at fixed pressure is defined as,
Δμ_N(T_N,x^N_CO_2) =μ^H_H(T_N) -μ^aq_CO_2(T_N,x^N_CO_2)
-5.75 μ^aq_H_2O(T_N,x^N_CO_2)
Note that Δμ_N also depends on pressure. However, since we work at constant pressure (P=400 bar), we drop the pressure dependence from equations from this point. It is also important to recall that since we are assuming that all cages of the hydrate are filled, the chemical potential of a “hydrate molecule” in the hydrate phase depends only on temperature. Finally, the chemical potentials of CO_2 and water also depend on the molar fraction of water in the aqueous solution, x_H_2O^N. Since we are dealing with a binary mixture, x_H_2O^N=1-x^N_CO_2. For simplicity, we choose x^N_CO_2 as independent variable of the chemical potentials of CO_2 and water in the solution.
The driving force for nucleation of the CO_2 hydrate, Δμ_N, depends on T_N and x_CO_2^N and both are independent variables. This means that route 4 is valid for calculating the driving force for nucleation at any T_N and x_CO_2^N. As we will see later, the method can be particularized to evaluate Δμ_N at experimental conditions. In this case, Δμ_N=Δμ_N^EC(T_N)=Δμ_N^EC(T_N,x_CO_2^eq(T_N|L)), as we have previously mentioned.
To evaluate Δμ_N we need to calculate the chemical potential of the “hydrate molecule”, μ^H_H(T_N), at a supercooled temperature T_N, and the chemical potentials of CO_2 and water molecules of an aqueous solution of CO_2 with molar fraction x_CO_2^N at the same temperature. This route is based on the use of the solubility curve of the hydrate with temperature, at constant pressure, previously described in Section III.C. A schematic depiction of the curve and the thermodynamic route for obtaining Δμ_N at arbitrary T_N and x_CO_2^N is presented in Fig. <ref>. Let us consider a reference state in our calculations at temperature T_ref on the solubility curve in contact with the hydrate. As will become clear later, the particular value of T_ref is not important since we are dealing with differences of chemical potentials and the final value of Δμ_N does not depend on the choice of the reference state. For this reason, the reference state does not appear in Fig. (<ref>).
The first contribution to Δμ_N in Eq. (<ref>) is the chemical potential of the “hydrate molecule” in the hydrate phase at T_N. The chemical potential μ_H^H can be obtained using the Gibbs-Helmholtz thermodynamic relation for pure systems,
(∂(μ_H^H/T)/∂ T)_P,N_H=-h_H^H/T^2
where h_H^H=h_H^H(T) is the molar enthalpy of the “hydrate molecule” and the derivative is performed at constant pressure, P, and number of “hydrate molecules”, N_H. Note that here h_H^H represents the enthalpy of the hydrate per molecule of CO_2 according to the definition in Section III.E.1. The chemical potential of the “hydrate molecule” in the hydrate phase at a supercooling temperature T_N can be obtained by integrating the Eq. (<ref>) from T_ref to T_N as,
μ_H^H(T_N)/(k_BT_N)=μ_H^H(T_ref)/(k_BT_ref)-∫_T_ref^T_N h_H^H(T)/(k_BT^2) dT
where k_B is the Boltzmann constant.
The last two contributions to the driving force for nucleation in Eq. (<ref>), μ^aq_CO_2 and μ^aq_H_2O, need to be evaluated at temperature T_N
and molar fraction x_CO_2^N. Individual chemical potentials of CO_2 and water in the solution at a given temperature are not easy to evaluate, as we have seen in routes 1 and 2. However, it is possible to use the solubility curve of CO_2 with the hydrate to overcome this problem.
Let T_i be the temperature at which the aqueous solution with molar fraction x_CO_2^N is in equilibrium with the hydrate phase, as indicated in Fig. <ref>. Since both phases are in equilibrium at these conditions, the chemical potentials of CO_2 and water in the hydrate phase and in the aqueous solution are equal,
μ_CO_2^H(T_i)=
μ^aq_CO_2(T_i,x_CO_2^N)
μ_H_2O^H(T_i)=μ^aq_H_2O(T_i,x_CO_2^N)
Note that x_CO_2^N=x^eq_CO_2(T_i|H) according to the nomenclature used in Section III.E.2 (route 2) and in our previous paper. <cit.> The vertical line here represents that the aqueous solution is in equilibrium with
the solid hydrate via a flat interface.
Combining Eqs. (<ref>) and (<ref>) with Eq. (<ref>), which gives the chemical potential of the “hydrate molecule” in terms of the chemical potentials of CO_2 and water in the hydrate phase, we obtain,
μ_H^H(T_i)=μ_CO_2^aq(T_i,x_CO_2^N)
+5.75 μ_H_2O^aq(T_i,x_CO_2^N)
Eq. (<ref>) is the heart of route 4. According to it, the combination μ_CO_2^aq+5.75 μ_H_2O^aq is known along the solubility curve of the hydrate at any temperature T_i: it is equal to the chemical potential of the "hydrate molecule" at the temperature considered. This apparently simple result allows the driving force for nucleation to be calculated accurately at any temperature and composition of the solution using a one-step thermodynamic integration. As will become clear at the end of this section, this method can be used to determine the driving force for nucleation of other hydrates.
In the first step, we calculate the difference of μ_CO_2^aq
+5.75 μ_H_2O^aq between the reference state (ref) at T_ref and a second state (i) at T_i, both on the solubility curve of CO_2 with the hydrate, as indicated in Fig. <ref>. According to Eq. (<ref>), this is completely equivalent to evaluating the difference of μ_H^H between T_ref and T_i along the solubility curve. This change can be evaluated by using again Eq. (<ref>) (the Gibbs-Helmholtz relation) and integrating between the two temperatures,
μ_H^H(T_i)/(k_BT_i)=μ_H^H(T_ref)/(k_BT_ref)-∫_T_ref^T_i h_H^H(T)/(k_BT^2) dT
The second step involves the differences between the chemical potentials of CO_2 and water in solution at temperatures T_i and T_N at constant molar fraction x_CO_2^N, Δμ_CO_2^aq and Δμ_H_2O^aq, which can be obtained from the Gibbs-Helmholtz equation for CO_2 and water,
(∂(μ_α^aq/T)/∂ T)_P,x_α=-h_α^aq/T^2
Here α={CO_2, H_2O} labels one of the components of the mixture. Note that the partial derivative is calculated at constant composition. In this case, the composition corresponds to that of the aqueous solution in equilibrium with the hydrate phase at T_i. h_α^aq is the partial molar enthalpy of component α in the aqueous solution. The partial molar enthalpy is defined as,
h_α^aq=N_A(∂ H/∂ N_α)_P,T,N_β≠α=lim_Δ N_α→ 0 N_A(Δ H/Δ N_α)_P,T,N_β≠α
where N_A is Avogadro's number and H is the aqueous solution's enthalpy. To evaluate the partial molar enthalpy of CO_2, the limit can be evaluated numerically by computing the enthalpy of two systems that have the same number of water molecules and a different number of CO_2 molecules (a sketch of this finite-difference estimate is given after the next equation). The partial molar enthalpy of water can be estimated in a similar way, i.e., the number of molecules of CO_2 in the system is kept constant while the number of water molecules changes. According to this, it is possible to evaluate the variation of the chemical potential of CO_2 and water from T_i to T_N, Δμ_CO_2^aq and Δμ_H_2O^aq, from the knowledge of the partial molar enthalpies of both components. In particular, the combination of the chemical potentials of CO_2 and water, as a function of T_N, can be obtained by integrating Eq. (<ref>) as,
[μ_CO_2^aq(T_N,x_CO_2^N)+5.75 μ_H_2O^aq(T_N,x_CO_2^N)]/(k_BT_N)=
[μ_CO_2^aq(T_i,x_CO_2^N)+5.75 μ_H_2O^aq(T_i,x_CO_2^N)]/(k_BT_i)
-∫_T_i^T_N [h_CO_2^aq(T,x_CO_2^N)+5.75 h_H_2O^aq(T,x_CO_2^N)]/(k_BT^2) dT
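The finite-difference estimate of the partial molar enthalpies described above can be organised as in the sketch below, where the total enthalpies of the two solutions (identical except for the number of molecules of the chosen component) would come from separate NPT simulations; the numerical values are placeholders.

```python
AVOGADRO = 6.02214076e23  # 1/mol

def partial_molar_enthalpy(H_big, H_small, dN):
    """Finite-difference estimate of the partial molar enthalpy of one
    component, h_alpha = N_A (dH/dN_alpha) at constant P, T and N_beta,
    from two solutions differing only in dN molecules of that component.
    H_big and H_small are total enthalpies in J; the result is in J/mol."""
    return AVOGADRO * (H_big - H_small) / dN

# placeholder total enthalpies of two boxes differing by 20 CO2 molecules
print(partial_molar_enthalpy(H_big=-3.10e-16, H_small=-3.11e-16, dN=20))
```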
Now, it is possible to find a closed expression for evaluating Δμ_N
at arbitrary T_N and x_CO_2^N in terms of the enthalpies of the "hydrate", CO_2, and water molecules. Using Eqs. (<ref>), (<ref>), (<ref>), and (<ref>), the driving force for nucleation can be written as,
Δμ_N(T_N, x^N_CO_2)/(k_B T_N) = -∫_T_i^T_N [h_H^H(T) - {h_CO_2^aq(T, x_CO_2^N) + 5.75 h_H_2O^aq(T, x_CO_2^N)}]/(k_B T^2) dT
We recall here that h_H^H is the enthalpy of the “hydrate molecule” per molecule of CO_2. Since we are assuming that the hydrate is fully occupied, the factor multiplying the partial molar enthalpy of water must be 46/8=5.75 to be consistent with the stoichiometry of the unit cell. Two aspects of this expression are worth remarking on. As we have previously mentioned, Δμ_N does not depend on the reference state. Note that the two integrations of the molar enthalpy of the “hydrate molecule” between T_ref and T_N, given by Eq. (<ref>), and between T_ref and T_i, given by Eq. (<ref>), are now expressed as a single integration of the molar enthalpy of the “hydrate molecule” between T_i and T_N. In other words, since the driving force for nucleation does not depend on the reference state, the initial state of Eq. (<ref>) is simply T_i. This is also related to another important fact: the driving force for nucleation of the hydrate is zero not only at T_3 but also along the whole solubility curve of the hydrate.
The second interesting aspect of Eq. (<ref>) is that the integrand on the right-hand side can be rewritten in terms of an enthalpy of dissociation of the hydrate, h_H^diss = h_H^H - (h_CO_2^aq + 5.75 h_H_2O^aq), which depends on T_N and x_CO_2^N. The driving force for nucleation can then be expressed in a more compact way as,
Δμ_N(T_N, x^N_CO_2)/(k_B T_N) = -∫_T_i^T_N h_H^diss(T, x^N_CO_2)/(k_B T^2) dT
Eq. (<ref>) resembles Eq. (<ref>) of route 3, since it has the same mathematical form. In Eq. (<ref>), h_H^diss depends on T and x^N_CO_2, and Δμ_N is calculated assuming explicitly that the hydrate dissociates into water and CO_2 in the aqueous solution at these conditions. However, Eq. (<ref>) of route 3 assumes that h_H^diss is constant and that the hydrate dissociates into pure water and pure CO_2.
Eq. (<ref>) is a rigorous and exact expression (within the statistical uncertainties of the simulation results) obtained only from thermodynamic arguments for calculating the driving force for nucleation of the CO_2 hydrate at any T_N and x_CO_2^N. Obviously, this route is general and can be used to calculate driving forces for nucleation of other hydrates from the knowledge of the solubility curve of the corresponding guest with the hydrate.
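To make the one-step integration of route 4 concrete, the short Python sketch below evaluates the expression above with the trapezoidal rule; it is not the code used in this work, and the tabulated enthalpies are placeholder values (not our simulation results). T runs from T_i on the hydrate solubility curve down to the target temperature T_N at constant x_CO_2^N, and h_diss follows the sign convention defined in the text (hydrate minus solution).

import numpy as np

k_B = 1.380649e-23                      # J/K
T = np.linspace(270.0, 250.0, 41)       # K, from T_i down to T_N at fixed x_CO2^N
h_H   = np.full_like(T, -6.60e-19)      # enthalpy of the "hydrate molecule", J (placeholder)
h_CO2 = np.full_like(T, -0.40e-19)      # partial molar enthalpy of CO2 in solution, J (placeholder)
h_H2O = np.full_like(T, -0.90e-19)      # partial molar enthalpy of water in solution, J (placeholder)

h_diss = h_H - (h_CO2 + 5.75 * h_H2O)   # h_H^diss per CO2 molecule (negative with this convention)
dmu_over_kT = -np.trapz(h_diss / (k_B * T**2), T)   # Delta mu_N(T_N, x_CO2^N) / (k_B T_N)
print(dmu_over_kT)                      # negative: nucleation is favorable below the solubility curve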
Let us now apply this route to the particular case of the CO_2 hydrate and evaluate Δμ_N^EC(T) at experimental conditions, i.e., with x_CO_2^N≡ x_CO_2^eq(T|L). Each of the chemical potential changes in Eqs. (<ref>) or (<ref>) can be obtained by evaluating the molar enthalpy of the “hydrate molecule” and the partial molar enthalpies of CO_2 and water in the aqueous solution in Eq. (<ref>). In the case of the pure hydrate, the change in the chemical potential can be obtained by performing simulations in the NPT ensemble using an isotropic barostat at 400 bar. At the beginning of each simulation, we use a cubic box formed by 27 replicas of the unit cell in a 3× 3× 3 geometry. The dimensions of the simulation box vary between 3.85 and 3.62 nm depending on the temperature. As in the rest of the simulations, we calculate the enthalpy at different temperatures, from 260 to 295 K. Simulations are run for 100 ns: 20 ns to equilibrate the system and 80 ns to calculate the molar enthalpy of the hydrate.
As can be seen in Fig. <ref>, the solubility curve of CO_2 with the liquid phase is a convex and decreasing function of temperature. According to this, it is not possible to reach this curve from the solubility curve of CO_2 with the hydrate starting from a temperature T_i below T_3 and following the one-step thermodynamic path of route 4 (see also Fig. <ref>). A feasible solution is to choose a T_i value above T_3, where the hydrate–solution coexistence line is metastable. However, the T_3 of the CO_2 hydrate is located at T_3=290 K (with molar fraction x_CO_2^eq(T_3)=0.05), and the solubility curve with the hydrate can only be obtained up to 295 K, as shown in Fig. <ref>. This means that Eq. (<ref>) can only be applied to calculate Δμ^EC_N at temperatures above approximately 270-280 K (depending on the value of the cutoff). In the case of the methane hydrate it is possible to calculate the hydrate–solution equilibrium curve at temperatures significantly higher than T_3. This allows Δμ^EC_N to be evaluated at lower temperatures than in the case of the CO_2 hydrate (see the inset of Fig. <ref> in this work and Fig. 4 of our previous work <cit.>).
To overcome this problem, we propose to use Eq. (<ref>) for several values of T_i<T_3 and to extrapolate in composition at the temperature at which we evaluate Δμ^EC_N, as indicated schematically in Fig. <ref>. According to this, μ_CO_2^aq and μ_H_2O^aq, or the combination of both according to Eq. (<ref>), can be obtained by performing NPT MD simulations of the solution along the 400 bar isobar at constant composition. As in the case of the “hydrate molecule”, since we are simulating bulk phases, the standard NPT ensemble is used in such a way that the three dimensions of the simulation box fluctuate isotropically. We evaluate the partial molar enthalpies of both components at five different concentrations: x_CO2=0.0680, 0.0521, 0.0413, 0.0335, and 0.0215. These values are the compositions of the aqueous solution along the solubility curve of CO_2 with the hydrate, x_CO_2^eq(T_i|H), at T_i=295, 290, 285, 280, and 270 K, respectively. We calculate the derivative of Eq. (<ref>) numerically by computing the enthalpy of two systems that have the same number of water molecules and a different number of CO_2 molecules to determine h^aq_CO_2, and of two systems that have the same number of CO_2 molecules and a different number of water molecules to determine h^aq_H_2O. In particular, h^aq_H_2O is obtained from the difference of the enthalpies of the aqueous solution using 990 and 1010 water molecules (Δ N_H_2O=20) for all the temperatures and compositions of the mixtures. In the case of h^aq_CO_2, we have used different numbers of CO_2 molecules depending on the composition of the mixture: 20 and 24 (Δ N_CO_2=4) for x_CO2=0.0215, 32 and 38 (Δ N_CO_2=6) for x_CO2=0.0335, 40 and 46 (Δ N_CO_2=6) for x_CO2=0.0413, 50 and 60 (Δ N_CO_2=10) for x_CO2=0.0521, and 68 and 78 (Δ N_CO_2=10) for x_CO2=0.0680. Simulations to calculate the enthalpy, at each temperature, are run for 300 ns: 50 ns to equilibrate the system and 250 ns as the production period. Dividing the enthalpy difference, Δ H, by the difference in the number of CO_2 molecules, Δ N_CO_2, or in the number of water molecules, Δ N_H_2O, respectively, and multiplying by Avogadro's constant, we obtain estimates of h^aq_CO_2 and h^aq_H_2O at several compositions of the mixture and temperatures.
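As a concrete illustration of this finite-difference procedure, the short Python sketch below converts the difference of two NPT box enthalpies into a partial molar enthalpy; it is not the code used for this work, and the enthalpy values are placeholders rather than simulation data.

N_A = 6.02214076e23   # Avogadro's number, 1/mol

def partial_molar_enthalpy(H_small, H_large, dN):
    # h_alpha^aq = N_A * (Delta H / Delta N) at constant P, T and N_beta
    return N_A * (H_large - H_small) / dN   # J/mol if H is given in J

# e.g. h^aq_CO2 at x_CO2 = 0.0215 from boxes with 20 and 24 CO2 molecules
# (placeholder enthalpies in J):
h_CO2_aq = partial_molar_enthalpy(H_small=-6.912e-17, H_large=-6.931e-17, dN=4)
print(h_CO2_aq / 1e3, "kJ/mol")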
Fig. <ref> shows the partial molar enthalpy of CO_2 at five constant compositions, as a function of the temperature. The composition of the mixture in each curve corresponds to the molar fraction of the solution at the several temperatures T_i along the solubility curve of CO_2 with the hydrate, as indicated in the previous paragraph. The difference of the chemical potential of CO_2 along the integration path (see Fig. <ref>), Δμ_CO_2^aq, is represented in the inset of Fig. <ref>. Note that we have five different curves, one for each value of T_i. We follow the same approach and represent the partial molar enthalpy of water at the same compositions in Fig. <ref>. The inset now depicts 5.75Δμ_H_2O^aq, as a function of the temperature, which represents the change of the rest of the "hydrate molecule" dissociated in the solution according to the nomenclature used in this section. As can be seen, the partial molar enthalpies of CO_2 and water show small variations with the composition and decrease as the temperature decreases. The variations of both chemical potentials exhibit similar behavior, i.e., they decrease as the temperature is lowered while keeping the composition of the mixture constant. In addition, both Δμ_CO_2^aq→ 0 and Δμ_H_2O^aq→ 0 at the temperature associated with the corresponding composition on the solubility curve of CO_2 with the hydrate.
Using the values of μ_CO_2^aq and 5.75 μ_H_2O^aq, as functions of temperature and at the molar fractions of CO_2 considered previously, it is possible to evaluate Δμ_N according to the scheme indicated in Fig. <ref>. Figure <ref> shows Δμ_N, as a function of x_CO2, at four temperatures below T_3 (250, 260, 270, and 280 K). As can be seen, Δμ_N varies linearly with the composition of the solution. We have performed linear regressions using the five values of x_CO_2, for each temperature, to obtain Δμ^EC_N at experimental conditions. We have also represented the compositions of the solution in equilibrium with the CO_2 liquid phase on the solubility curve of CO_2, x^eq_CO_2(T|L), at the four temperatures (crosses and stars). As can be seen, there are two different values of x^eq_CO_2(T|L) depending on the cutoff used for the dispersive interactions, r_c=1.0 nm (crosses) and 1.9 nm (stars).
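The extrapolation step described above amounts to a simple linear fit in composition; a minimal Python sketch (with placeholder numbers, not the actual data of the figure) could read:

import numpy as np

x_CO2 = np.array([0.0215, 0.0335, 0.0413, 0.0521, 0.0680])  # compositions used in the fits
dmu_N = np.array([-1.60, -1.38, -1.24, -1.04, -0.75])       # Delta mu_N in k_B T units (placeholders)
x_eq_L = 0.059                                              # x_CO2^eq(T|L) at this temperature (placeholder)

slope, intercept = np.polyfit(x_CO2, dmu_N, 1)              # linear regression in composition
dmu_EC = slope * x_eq_L + intercept                         # Delta mu_N^EC at experimental conditions
print(dmu_EC)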
Δμ_CO_2^aq+5.75 Δμ_H_2O^aq at a given temperature T and at the appropriate composition of the solution in equilibrium with the CO_2 liquid phase, x^eq_CO_2(T|L), can be obtained in a second step, as indicated in Fig. <ref>, according to the following approach. The values of Δμ_CO_2^aq and 5.75 Δμ_H_2O^aq, at any T<T_3, are obtained using Eqs. (<ref>) or (<ref>), with α=CO_2 and H_2O, and taking into account that μ_CO_2^aq+5.75 μ_H_2O^aq at each T_i is equal to μ_H^H(T_i) according to Eq. (<ref>).
It is also interesting to show μ_CO_2^aq+5.75 μ_H_2O^aq, as a function of composition, evaluated at several temperatures below the T_3 of the hydrate as shown in Fig. <ref> (see also Fig. <ref>). Note that we have set to zero the chemical potentials of CO_2 and water in the hydrate at T_3=290 K. This representation contains the same information as Fig. <ref> but allows to discuss important aspects related with the approximations of route 3. We have represented μ_CO_2^aq+5.75 μ_H_2O^aq at T=250, 260, 270, and 280 K at the compositions x^eq_CO_2 previously selected (symbols). In addition to that, we have also represented the value of the change in the chemical potential of the hydrate, at the four temperatures, in equilibrium with the aqueous solution along the solubility curve of the hydrate. Note that the value of μ_CO_2^aq+5.75 μ_H_2O^aq at 270 K is equal to that of the hydrate, -11.2 k_BT, since it is in equilibrium with the solution with x^eq_CO_2=0.0215. The same is true at 280 K, i.e., μ_CO_2^aq+5.75 μ_H_2O^aq=μ_H^H=-5.4 k_BT, state at which the solution with x_CO_2=0.0335 is in equilibrium with the hydrate.
The values of μ_CO_2^aq+5.75 μ_H_2O^aq, at the corresponding temperatures, vary linearly with the composition of the solution. We have performed linear regressions using the five values obtained in this work, for each temperature, and the corresponding lines are also shown in Fig. <ref>. We have also represented the compositions of the solution in equilibrium with the CO_2 liquid phase on the solubility curve of CO_2, x^eq_CO_2(T|L), at the four temperatures (crosses and stars). Note that these values fit the linear regressions extremely well. As can be seen, there are two different values of x^eq_CO_2(T|L) depending on the cutoff used for the dispersive interactions, r_c=1.0 nm (crosses) and 1.9 nm (stars). See Fig. <ref> and Section III.D for further details. According to this, it is possible to obtain accurate values of μ_CO_2^aq+5.75 μ_H_2O^aq by extrapolating the linear fits as a function of temperature and composition (see Fig. <ref>). These values, in combination with the values of μ_H^H(T) (already obtained in routes 1 and 2), can be used to predict with confidence the driving force for nucleation of the CO_2 hydrate using this new approach. Note that the results obtained from Fig. <ref> are the same as those obtained from Fig. <ref>, since they contain the same information.
Before closing this section, we note that Fig. <ref> contains valuable information that deserves to be discussed in detail. Dashed curves represent the values of μ_CO_2^aq+5.75 μ_H_2O^aq obtained at the lowest concentration considered, x_CO2=0.0215, assuming that the variation with composition follows the approximation used in route 2, proposed by us in our previous work<cit.> and also used by Molinero and co-workers <cit.> and by us in this work. In other words, the difference of the chemical potentials of CO_2 and water when the composition is varied is calculated assuming that the activity coefficients of both components are close to 1 or are similar in the solution. Unfortunately, this approximation is not valid for CO_2 hydrates. As can be seen in Fig. <ref>, this approach (route 2) underestimates the value of μ_CO_2^aq+5.75 μ_H_2O^aq by more than 0.8 k_BT (using r_c=1.0 nm) and by more than 1 k_BT (using r_c=1.9 nm) at 250 K with respect to the value obtained from route 4. As we will see in the next section, this result explains from a thermodynamic perspective why route 2 cannot be used with confidence to estimate the driving force for nucleation of the CO_2 hydrate.
§.§.§ Evaluation of Δμ^EC_N using different routes.
We have obtained the driving force for nucleation of the CO_2 hydrate using the four routes presented in the previous sections. All the results have been obtained using a cutoff distance for the dispersive interactions r_c=1.0 nm. As we have seen in the previous sections, this value of r_c gives a dissociation temperature of the hydrate T_3=290(2) K. The results obtained using the different routes are presented in Fig. <ref>. Route 1, given by Eq. (<ref>), predicts an almost linear behavior of Δμ_N^EC with the temperature. This result is in agreement with our previous results obtained for the driving force for nucleation of the methane hydrate. <cit.>
We have also used the novel route proposed in this work (route 4), based on the use of the solubility curve of CO_2 with the hydrate, given by Eq. (<ref>). As mentioned in the previous section, route 4 should provide reliable values of Δμ_N^EC since it is based on rigorous thermodynamic integration calculations. The only approximations made are the extrapolations of Δμ_N to the x^eq_CO_2 values on the solubility curve of CO_2 with the liquid at the corresponding temperatures. However, we think this is a good approach taking into account the low values of the concentration and the results presented in Figs. <ref> and <ref>. As can be seen in Fig. <ref>, small differences are found between the results obtained from routes 1 and 4. Route 1 slightly underestimates the driving force for nucleation in nearly the whole range of temperatures considered in this work, especially in the intermediate range of temperatures.
As in our previous work, <cit.> we have also used the dissociation route (route 3) according to Eq. (<ref>), proposed by Kashchiev and Firoozabadi. <cit.> The agreement between the results from routes 3 and 4 is good, especially at low supercoolings. The dissociation route overestimates the values of Δμ_N^EC obtained from route 4 by about 0.1 k_BT at 260 K, approximately. This represents a value 6.5% higher than that obtained from route 4, the maximum difference found between both approaches in the whole range of temperatures.
Finally, we have also obtained Δμ_N^EC, as a function of the temperature, via route 2, proposed by us in our previous work <cit.> and inspired by the work of Molinero and collaborators. <cit.> This route, given by Eq. (<ref>), entails crude approximations. As can be seen, route 2 is not able to provide reliable predictions of Δμ_N^EC in the whole range of supercoolings. In fact, it overestimates its value by 0.32 k_BT, a value 20% higher than that obtained using the more rigorous route 4, over a wide range of temperatures. This result is in agreement with the findings observed in Figs. <ref> and <ref> and is a direct consequence of the main approximation made in route 2: that the activity coefficients of water and CO_2 are equal to one. Although this is a good approximation for the methane hydrate, <cit.> it is not a realistic option for the CO_2 hydrate. The origin of this behavior lies in the large difference between the solubility of methane in water and that of CO_2 in water (in contact with both the gas/liquid phase and the hydrate phase). See Figs. <ref> and <ref> and the corresponding insets.
In summary, route 2 is in general not a good choice for calculating driving forces for nucleation of hydrates. It can be used when the solubility of the guest molecules in water is extremely low. Route 3 is an easy and fast way to estimate Δμ_N^EC values. However, we do not recommend this route in general either, except for cases in which the solubility of the guest molecules in water is extremely low, as for route 2. Finally, route 4 proposed here is the most rigorous and nearly exact way to evaluate driving forces for nucleation of hydrates.
It is important to discuss the effect of the cutoff distance of the dispersive interactions on Δμ_N^EC. We have already analyzed the effect of r_c on the solubility curve of CO_2 in contact with both the CO_2 liquid (Fig. <ref>) and the hydrate (Fig. <ref>). Although the solubility curve of CO_2 with the hydrate is practically unaffected when r_c is changed from 1.0 to 1.9 nm, the situation is completely different for the solubility curve of CO_2 with the liquid. As a consequence, the T_3 of the CO_2 hydrate changes from 290(2) K when r_c=1.0 nm to 292(2) K when r_c=1.9 nm. Obviously, this change must also affect the values of Δμ_N^EC. We have obtained the driving force for nucleation following route 4 using a cutoff distance r_c=1.9 nm, and the results are compared with those using r_c=1.0 nm. As can be seen in Fig. <ref>, the main effect is to displace the curve towards higher temperatures. This is an expected result due to the difference in the T_3 values obtained using different cutoff distances. However, it is clearly seen that the difference between both curves increases as the temperature is decreased: the difference between both values is 0.125 k_BT in absolute value at 290 K, approximately, but it increases up to 0.286 k_BT at 260 K, approximately. This is more than double the difference predicted at 290 K, suggesting that the increase of the cutoff distance has a deep effect on the driving force for nucleation of the system.
To check the real impact of the cutoff distance of the dispersive interactions on the driving force for nucleation, we have plotted Δμ_N^EC as a function of the supercooling Δ T, instead of the absolute temperature T. This allows us to compare both results at the same supercooling and to have a clearer picture of this effect. As can be seen in Fig. <ref>, there is an important effect on Δμ_N^EC when r_c is changed from 1.0 to 1.9 nm. For instance, at |Δ T|≈ 25 K, Δμ_N^EC changes from -1.5 to -1.65 k_BT when the cutoff distance is increased. According to this, the driving force for nucleation of the hydrate is 10% larger when r_c=1.9 nm than when using 1.0 nm. This effect is not negligible. The origin of this displacement is the strong dependence of the solubility of CO_2 in water on r_c (aqueous solution in contact with the CO_2 liquid phase). Due to the effect of the cutoff distance on Δμ_N^EC, appropriate values of r_c are required in order to obtain reliable values of this magnitude.
It is also very interesting to compare the driving force for the nucleation of the CO_2 and methane hydrates at the same pressure. We have determined in our previous work <cit.> the driving force at experimental conditions along the same isobar. We also present these results in Fig. <ref>, obtained using the route 1 and a cutoff distance of r_c=0.9 nm. We compare these results with that obtained for the CO_2 hydrate using a cutoff distance of 1.0 nm. As can be seen, the driving force of the CO_2 hydrate is, in absolute value, lower than that of the methane hydrate along the isobar of 400 bar. For instance, at a supercooling of |Δ T|≈ 25 K, the driving force for the methane hydrate is Δμ^EC_N≈ -1.7 k_BT. At the same supercooling for the CO_2, Δμ^EC_N≈ -1.5 k_BT. This means that the driving force for the nucleation of the CO_2 hydrate, at 400 bar, is 13% lower than that of the methane hydrate at the same supercooling (Δ T=25 K). According to this, the nucleation of the methane hydrate should be more favorable than that of the CO_2 hydrate. Obviously, this would be true if the other factors that affect the nucleation rate of the hydrates are equal, i.e., the water–hydrate interfacial energy.
We have also considered the effect of the occupancy of CO_2 in the hydrate on the driving force for nucleation. Particularly, we study hydrates with 7 CO_2 molecules per unit cell, i.e., 50% of occupancy in the small or D cages and 100% of occupancy in the large or T cages, which is equivalent to 87.5% of overall occupancy. According to the work of Kashchiev and Firoozabadi, <cit.> the formation of a hydrate in the aqueous solution phase can be described as the chemical reaction of Eq. (<ref>). This reaction can be viewed as the formation of a “hydrate molecule” per each CO_2 molecule in the aqueous solution.
However, since we now calculate and compare driving forces for nucleation of hydrates with different occupancies, it is more convenient to write Eq. (<ref>) per cage of hydrate formed from the aqueous solution rather than per CO_2 molecule used to form the hydrate from the solution. In the case of a fully occupied hydrate, the reaction is the same in both descriptions, since a unit cell of hydrate is formed from 8 cages (6 T and 2 D cages) and is occupied by 8 CO_2 molecules as well. Let us define the occupancy x_occ as the fraction of cages occupied by CO_2, x_occ=n_CO_2/n_cg, where n_CO_2 and n_cg are the number of CO_2 molecules and cages per unit cell. When the occupancy is 100%, x_occ=8/8=1, and x_occ=7/8=0.875 when it is 87.5%. According to this, the formation of one cage of hydrate, with occupancy 87.5%, from the aqueous solution phase can be viewed as a classical chemical reaction that takes place at constant P and T,
x_occ CO_2 (aq) + 5.75 H_2O (aq) ⟶ [(CO_2)_x_occ(H_2O)_5.75]_H
In this particular case, since each unit cell of CO_2 hydrate is formed from n_cg=8 cages and 46 water molecules, we only need 7/8=0.875 CO_2 molecules (i.e., an occupancy x_occ=7/8=0.875) and 46/8=5.75 water molecules in the solution to form 1 cage of hydrate with the desired occupancy (7 CO_2 molecules per unit cell). The compound [(CO_2)_x_occ(H_2O)_5.75]_H is simply a “cage” of hydrate. According to this, we call [(CO_2)_x_occ(H_2O)_5.75] a “molecule” of one cage of the hydrate in the solid. Note that the stoichiometry of the reaction given by Eq. (<ref>) is in agreement with a unit cell of this partially occupied hydrate, formed from 8 “cages” of hydrate with 8× 0.875=7 CO_2 molecules and 8× 5.75=46 water molecules.
We have used the route 1 described in Section III.E.1 with a cutoff distance for the dispersive interactions of r_c=1.0 nm. We have followed the same procedure previously explained but instead of simulating a hydrate fully occupied by CO_2 molecules we have considered a hydrate with occupancy of the small or D cages of 50% (87.5% overall occupancy of the hydrate). To be consistent with the description of the previous paragraph, we have used Eq. (<ref>) to evaluate the driving force for nucleation of the partially occupied hydrate per cage of hydrate instead of per CO_2 molecule. According to this, the corresponding molar enthalpy of the hydrate, h_H^H, as a function of the temperature, must be expressed as an enthalpy per cage of the hydrate, h̃_H^H,
h_H^H = H/N_CO_2 = (H/N_cg)(N_cg/N_CO_2) = h̃_H^H (N_cg/N_CO_2) = h̃_H^H/x_occ
Here H is the enthalpy of the hydrate, and N_CO_2 and N_cg are the total number of CO_2 molecules and cages used in the simulations, respectively. N_CO_2=n_cells× n_CO_2 and N_cg=n_cells× n_cg, with n_cells=3× 3× 3=27 the number of unit cells in the simulations. Note that the enthalpy per cage, h̃_H^H, is obtained by dividing the enthalpy of the hydrate by the total number of cages, N_cg=n_cells× n_cg=27× 8=216. H is calculated in the same way as in Section III.E.1 for the fully occupied hydrate, but now using a simulation box with 189 CO_2 molecules and 1242 water molecules, arranged in the same 3× 3× 3 geometry of 27 replicas of the unit cell used in Section III.E.1.
Fig. <ref> shows the enthalpy per cage, h̃_H^H, of the hydrate partially occupied by CO_2 molecules as a function of temperature (blue diamonds). We also present the enthalpy per cage of the hydrate with 100% occupancy (red circles). Note that h̃_H^H is equal to the enthalpy of the hydrate per molecule of CO_2, h_H^H, in the case of full occupancy. As can be seen, the enthalpy per cage when the occupancy is 87.5% is systematically less negative than h̃_H^H when the hydrate is fully occupied. The difference between both values is ∼ 3 kJ/mol. This is an expected result, since there is one fewer CO_2 molecule per unit cell (7 instead of 8) in the hydrate with occupancy of 87.5%. Although the difference is below 1%, there are fewer favorable (negative) CO_2-water dispersive interactions, and this contributes to increasing the energy and consequently the enthalpy of the system. Note that the lattice parameters of the unit cell (for a certain P and T) of the hydrate depend on the occupancy. In particular, the lattice parameter becomes about 0.16% smaller when the occupancy changes from 100% to 87.5%.
Once h̃_H^H(T) is known, it is possible to use Eq. (<ref>) to evaluate the driving force for nucleation of the hydrate. However, Eq. (<ref>) is only valid for hydrates with 100% occupancy. It is possible to reformulate route 1 for partially occupied hydrates, taking into account that the enthalpy of the hydrate is expressed per cage of the hydrate, h̃_H^H, and using the appropriate stoichiometry when the hydrate has an occupancy of x_occ. According to this, the driving force for nucleation per cage of hydrate is given by,
Δμ^EC_N(T, x^eq_CO_2)/(k_B T) = -∫_T_3^T [h̃_H^H(T') - {x_occ h_CO_2^pure(T') + 5.75 h_H_2O^pure(T')}]/(k_B T'^2) dT' - [k_B T ln{x^eq_H_2O(T)} - k_B T_3 ln{x^eq_H_2O(T_3)}]
Note that Eq. (<ref>) is consistent with the view of Kashchiev and Firoozabadi <cit.> and with the reaction given by Eq. (<ref>), in which the hydrate is a new compound formed from x_occ molecules of CO_2 and 5.75 molecules of water when the hydrate has an occupancy x_occ.
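The per-cage bookkeeping entering the expression above can be summarized in a few lines of Python; the box enthalpy below is a placeholder, and only the conversion between per-CO_2, per-cage, and occupancy-corrected quantities is illustrated (this is a sketch, not the analysis code used in this work).

n_cells, n_cg_cell = 27, 8                 # 3x3x3 replicas, 8 cages per unit cell
N_CO2 = 189                                # 7 CO2 per unit cell -> overall occupancy 87.5%
N_cg = n_cells * n_cg_cell                 # 216 cages in the box
x_occ = N_CO2 / N_cg                       # = 0.875

H_box = -1.23e-15                          # NPT enthalpy of the hydrate box, J (placeholder)
h_cage = H_box / N_cg                      # enthalpy per cage, h~_H^H
h_per_CO2 = h_cage / x_occ                 # h_H^H = h~_H^H / x_occ, per CO2 molecule

def integrand(T, h_cage_T, h_CO2_pure_T, h_H2O_pure_T, k_B=1.380649e-23):
    # integrand of the occupancy-corrected route 1 (per cage of hydrate)
    return (h_cage_T - (x_occ * h_CO2_pure_T + 5.75 * h_H2O_pure_T)) / (k_B * T**2)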
Fig. <ref> shows the comparison between the driving force for nucleation obtained using route 1 and a cutoff distance of r_c=1.0 nm when the hydrate is fully occupied and when only half of the small or D cages are occupied by CO_2 molecules. It is important to remark here that we are calculating driving forces for nucleation per cage of hydrate. This allows us to compare Δμ_N^EC for both hydrates at the same conditions, since the number of water molecules that form both solids is the same. In particular, it is possible to know whether a fully occupied hydrate is thermodynamically more stable than a partially occupied one (87.5%) when both are formed from an aqueous solution of CO_2 at fixed conditions of pressure and temperature.
As can be seen, Δμ_N^EC is similar in both cases at low supercoolings. However, as the supercooling increases, the difference between both values increases. In particular, Δμ_N^EC becomes less negative (the driving force for nucleation is lower) when the hydrate is partially occupied than when the hydrate is fully occupied. This means that the fully occupied hydrate is more stable, from the thermodynamic point of view, than the hydrate with occupancy of 87.5%, since the driving force for nucleation is larger in magnitude. However, it remains to be studied in the future whether an occupancy between 0.875 and 1 could be more stable than the fully occupied hydrate.
Finally, it is also interesting to compare the driving force for the CO_2 hydrate with that corresponding to ice Ih obtained previously by Espinosa and coworkers. <cit.> They obtained a value of Δμ^EC_N≈ -0.29 k_BT per water molecule considering a supercooling of 35 K at 1 bar with r_c=0.9 nm. For the case of the CO_2 hydrate, the driving force is -2.1 k_BT per “hydrate molecule” at the same supercooling and 400 bar. Dividing by the 5.75 water molecules contained in one “hydrate molecule”, this corresponds to -0.36 k_BT per water molecule. Comparing both driving forces for nucleation, the nucleation of the CO_2 hydrate is more favorable than that of ice Ih.
The previous result holds if the other factors that affect the nucleation rate are equal or similar, particularly the interfacial free energy. The ice Ih–water interfacial free energy for the TIP4P/Ice model has been recently determined by Espinosa and collaborators <cit.> using the Mold Integration methodology. They found a value for the interfacial energy of 29.8(8) mJ/m^2. This value corresponds to the interfacial free energy averaged over all crystal orientations. Algaba et al. <cit.> and Zerón et al. <cit.> have also determined the CO_2 hydrate–water interfacial free energy using the Mold Integration-Host and Mold Integration-Guest techniques, respectively. In particular, they found 29(2) mJ/m^2 and 30(2) mJ/m^2, respectively, at 400 bar and 287 K. According to this, the nucleation of the CO_2 hydrate is more favorable than that of ice Ih, since the interfacial energies of the ice and the CO_2 hydrate are similar.
§ CONCLUSIONS
In this work, we have studied the solubility of CO_2 in aqueous solutions when they are in contact, via planar interfaces, with a CO_2-rich liquid phase and with the hydrate phase at 400 bar using molecular dynamics computer simulations. We have also estimated the driving force for the nucleation of the CO_2 hydrate using four different routes. These properties are key to understanding, from a thermodynamic point of view, the parameters that control the nucleation of CO_2 hydrates. Water is described using the TIP4P/Ice water model and CO_2 using the TraPPE model. The unlike dispersive interactions between water and CO_2 are taken into account using the approach proposed by us several years ago. This selection allows us to describe very accurately not only the dissociation temperature of the hydrate at the pressure considered in this work, but also the CO_2 hydrate–water interfacial free energy. Calculations of solubilities have been carried out using the direct coexistence technique between two phases. Additional simulations of the pure systems, at several temperatures, have also been performed to calculate the driving force for nucleation along the isobar considered in this work.
We have analyzed the aqueous solution of CO_2 when it is in contact with the liquid phase (pure CO_2) and with the hydrate using two different values of the cutoff associated with the dispersive interactions. From this information, we have obtained the solubility of CO_2 in water when the solution is in contact with the CO_2 liquid phase. The solubility of CO_2 decreases with temperature, in a similar way to that of methane. However, the solubility of CO_2 is one order of magnitude larger than that of methane. We also observed an important effect of the long-range dispersive interactions on the solubility curve along the 400 bar isobar. The solubility of methane in water is also affected by these contributions, but their effect is smaller. It is interesting to remark that the corrections due to the long-range dispersive interactions affect the two systems in different ways. Whereas the solubility of CO_2 increases with the cutoff distance, in the case of methane it decreases. This is probably an effect of the CO_2-CO_2 and CO_2-water electrostatic interactions. We have also studied the solubility of CO_2 in the aqueous solution when it is in contact with the hydrate and analyzed its interfacial structure. This magnitude increases with the temperature, as is the case for the solubility of methane in water. Contrary to what happens when the aqueous solution is in contact with the CO_2 liquid phase, the variation of the cutoff distance of the long-range dispersive interactions has no effect on the solubility. This behavior has also been observed in our previous study dealing with the solubility of methane in water.
The dissociation temperature of the CO_2 hydrate (T_3), at 400 bar, can be evaluated from the intersection of the two solubility curves obtained in this work. This intersection is accessible because the formation of the hydrate phase, at T<T_3, and the formation of the CO_2 liquid phase, at T>T_3, are activated processes. This means that metastability exists below and above the dissociation temperature of the hydrate at 400 bar, and because of this one can find the intersection between the two solubility curves. The temperature at which this occurs is the T_3 of the hydrate at the fixed pressure. From this analysis we find that the dissociation temperature of the hydrate is located at 290(2) K when the cutoff distance for dispersive interactions is equal to 1.0 nm. This is in good agreement (within the error bars) with our previous estimate of T_3 obtained from direct coexistence simulations using the same cutoff distance, 287(2) K. If the cutoff distance is larger (1.9 nm), T_3 is located at 292(2) K. Although the value obtained in this work for a cutoff distance of 1.0 nm compares well with our previous estimate of 287(2) K (within the error bars), it is possible that finite-size effects, as well as the treatment of the long-range dispersive interactions, produce a shift of T_3 towards higher temperatures.
We have also estimated the driving force for nucleation of the CO_2 hydrate. In particular, we have calculated Δμ_N using the three routes proposed in our previous paper (routes 1-3). <cit.> Since the solubility of CO_2 in water is higher than that of methane by one order of magnitude, we have proposed a novel and alternative route based on the use of the solubility curve of CO_2 with the hydrate. This new route (which we refer to as route 4 in this paper) rigorously accounts for the non-ideality of the aqueous solution of CO_2 and provides reliable results for Δμ_N. Routes 1, 3, and 4 provide similar values of the driving force for nucleation of the CO_2 hydrate in a wide range of supercoolings. Unfortunately, route 2 cannot be used for CO_2 hydrates due to the non-ideality of the water + CO_2 mixture at the conditions considered.
Finally, we have also analyzed the effect of the cutoff distance of the dispersive interactions and of the occupancy of the cages on the driving force for nucleation of the CO_2 hydrate. In both cases, there is a non-negligible effect on the driving force for nucleation. In particular, the driving force for nucleation increases when the cutoff distance increases and when the overall occupancy of the hydrate increases from 87.5% to 100% (i.e., when the occupancy of the small or D cages increases from 50% to 100%).
The driving force for nucleation of the CO_2 hydrate obtained in this work using the novel route 4 is Δμ_N≈ -0.36 k_BT (per water molecule) at 400 bar and a supercooling of 35 K. This value lies below (is more negative than) the driving force for nucleation of ice Ih at 1 bar and the same supercooling, Δμ_N≈ -0.29 k_BT (also per water molecule). Since the other factors that affect the nucleation rate are similar (i.e., the interfacial free energy), our results indicate that CO_2 hydrates nucleate more easily than ice Ih. However, it should be taken into account that this comparison is not straightforward, since CO_2 is needed for the formation of the CO_2 hydrate, but not for the ice.
§ CONFLICTS OF INTEREST
The authors have no conflicts to disclose.
§ ACKNOWLEDGEMENTS
This work was financed by Ministerio de Ciencia e Innovación (Grant No. PID2021-125081NB-I00), Junta de Andalucía (P20-00363), and Universidad de Huelva (P.O. FEDER UHU-1255522 and FEDER-UHU-202034), all four cofinanced by EU FEDER funds. We also acknowledge the Centro de Supercomputación de Galicia (CESGA, Santiago de Compostela, Spain) for providing access to computing facilities. The authors also acknowledge Project No. PID2019-105898GB-C21 of the Ministerio de Educación y Cultura. We also acknowledge access to supercomputer time from RES through project FI-2022-1-0019. J. G. acknowledges the national support from Gdansk University of Technology through the DEC-09/2021/IDUB/II.1/AMERICIUM/ZD grant under the AMERICIUM - "Excellence Initiative - Research University" program. Part of the computations were carried out at the Centre of Informatics Tricity Academic Supercomputer & Network. The research was supported in part by PL-Grid Infrastructure.
§ DATA AVAILABILITY
The data that support the findings of this study are available within the article.
|
http://arxiv.org/abs/2409.03651v1 | 20240905160821 | Gravitational waves from decaying sources in strong phase transitions | [
"Chiara Caprini",
"Ryusuke Jinno",
"Thomas Konstandin",
"Alberto Roper Pol",
"Henrique Rubira",
"Isak Stomberg"
] | gr-qc | [
"gr-qc",
"astro-ph.CO"
] |
KOBE-COSMO-24-03, TUM-HEP-1522/24, DESY-24-131
Chiara Caprini (a,b), Ryusuke Jinno (c), Thomas Konstandin (d), Alberto Roper Pol (a; corresponding author: [email protected]), Henrique Rubira (e), Isak Stomberg (d; corresponding author: [email protected])
[a] Département de Physique Théorique, Université de Genève, CH-1211 Genève, Switzerland
[b] Theoretical Physics Department, CERN, CH-1211 Genève, Switzerland
[c] Department of Physics, Kobe University, Kobe 657-8501, Japan
[d] Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany
[e] Physik Department T31, Technische Universität München, James-Franck-Straße 1, D-85748 Garching, Germany
We study the generation of gravitational waves (GWs) during a first-order cosmological phase transition (PT)
using the recently introduced Higgsless approach to numerically evaluate the fluid motion induced by the PT.
We present for the first time spectra from strong first-order PTs (α = 0.5), alongside
weak (α = 0.0046) and intermediate (α = 0.05) transitions previously considered in the literature.
We test the regime of applicability of the
stationary source assumption, characteristic of the
sound-shell model, and
show that it agrees with our numerical results when the
kinetic energy, sourcing GWs, does not
decay with time.
However, we find in general that for intermediate and strong
PTs, the kinetic energy in our simulations decays
following a power law in time, and we provide a theoretical framework that extends the stationary assumption to one that allows the time evolution of the source to be included.
This decay of the kinetic energy, potentially determined
by non-linear dynamics and hence related to the production of vorticity,
modifies the usually assumed linear growth
with the source duration to an integral over time
of the kinetic energy fraction, effectively reducing the growth rate.
We validate the novel theoretical model with the results of our simulations covering a broad
range of wall velocities.
We provide templates for the GW amplitude and spectral shape for a broad range of PT parameters.
Gravitational waves from decaying sources in strong phase transitions
September 9, 2024
§ INTRODUCTION
The door to a new era with the promise of groundbreaking discovery was opened with the inaugural direct detections by the LIGO-Virgo collaboration
of gravitational waves (GWs) emanating from mergers of black holes and neutron stars <cit.>.
The forthcoming observing run by the LIGO-Virgo-KAGRA (LVK) collaboration is expected to accumulate more events <cit.>.
Efforts among Pulsar Timing Array (PTA) collaborations have furthermore unveiled convincing evidence of a stochastic gravitational wave background (SGWB) at nano-Hertz frequencies <cit.>.
While a compelling
candidate for the source of this radiation is the superposition of supermassive black hole mergers, implying an astrophysical origin,
it is important to point out that
primordial sources of cosmological origin
can also explain the observed signal
(see, e.g., Refs. <cit.>).
These breakthroughs in GW detection give us ears to astrophysical events and cosmological history inaccessible through other means of observation.
Looking into the 2030s, the launch of the Laser Interferometer Space Antenna (LISA) mission <cit.>, designed to probe
GWs in the unexplored milli-Hertz frequency band,
is setting the stage for a potential
overhaul of modern cosmology <cit.>.
Until the time of launch, joint strides in data analysis techniques and theoretical progress are necessary to leverage the full potential of the LISA mission once it flies.
It is in this spirit that
studies on cosmological SGWB production gain their motivation.
Cosmological sources of GWs that can be explored by LISA include inflation, particle production, topological defects, and primordial black holes throughout different stages of the expansion history of the Universe (see Refs. <cit.> and references therein).
Of particular interest is the phenomenon of first-order phase transitions (PTs) <cit.> that could have occurred in the early Universe.
While the Standard Model (SM) predicts a crossover at the electroweak scale <cit.>, many theories beyond the Standard Model accommodate a first-order electroweak PT (see Ref. <cit.> and references therein).
During such a transition, the order parameter initially becomes trapped in a false vacuum expectation
value in the symmetric phase.
Subsequently, vacuum or thermal fluctuations locally induce a transition to the true vacuum in the broken phase, forming tiny seeds of bubbles <cit.>.
The released vacuum energy drives the expansion of these bubbles, which eventually collide with each other, generating anisotropic stresses in the energy distribution and thus sourcing GWs. This process is highly non-thermal, suggesting that the baryon asymmetry of the Universe might have its origin in it <cit.>.
While bubble collisions themselves are an important source of GWs <cit.>, it has been shown in Ref. <cit.> that compressional
fluid motion induced in the primordial plasma by the scalar walls often dominates the GW production for PTs when the broken-phase bubbles do not run away,[These results have been found in the absence of magnetic fields
and for small fluid perturbations. A
primordial magnetic field generated or present during the
PT, and/or the production of non-linear fluid perturbations, can efficiently induce vortical motion <cit.> due to the
high conductivity and Reynolds number of the primordial plasma <cit.>.]
which is expected to be
the case unless the PT is dominated by vacuum, e.g., for supercooled PTs <cit.>.
This occurs when the friction exerted on the bubble
walls by the fluid particles is strong enough to balance
the vacuum energy released and, hence, the bubbles
reach a terminal velocity <cit.>.
The production of GWs from fluid perturbations can be
decomposed in two contributions: sound waves (or acoustic/compressional turbulence) <cit.> and vortical turbulence <cit.>.
While analytical modeling of GW production
from fluid perturbations is important <cit.>, numerical simulations are essential for a comprehensive understanding of the entire process.
It is believed that after a first-order PT, the fluid motion initially manifests as compressional motion
(well approximated by sound waves when the fluid perturbations can be assumed to be of linear order)
and then develops non-linearly,
allowing for the formation of shocks and vorticity,
and the subsequent development of turbulence <cit.>.
The non-linear evolution is inevitable due to the large Reynolds number of the fluid in the early Universe <cit.>.
The transition from the fully compressional
to the vortical turbulence regime can be especially important in strong transitions, when non-linearities can play an important role.
Currently, large-scale simulations have been performed by the Helsinki-Sussex group, which numerically solve a coupled scalar field-fluid system <cit.>.
First-order PTs in the early Universe exhibit a significant hierarchy: this is the hierarchy between the typical scales inherent in the order parameter field (including the thickness of the walls) and those in the cosmological fluid (including the bubble size and the sound-shell thickness). For the electroweak PT, the hierarchical separation can be as large as M_P / T ∼ 10^16.
This fact naturally leads to the idea of the Higgsless scheme proposed by part of the authors of the present paper <cit.>.
In this scheme, the microphysics of the wall is introduced as a non-dynamical (although space and time dependent)
energy-injecting boundary condition within the bag equation of state <cit.>,
and the bubble walls are assumed to have reached a terminal velocity,
such that the fluid perturbations reach a self-similar
solution in a very short time scale (much
shorter than the time scale for collisions) <cit.>.
See also Refs. <cit.> for discussions on the bubble wall terminal velocity.
As a result, the scheme is able to capture the macroscopic dynamics necessary for GW production without being
required to also solve for the hierarchically smaller scales.
In this paper, we explore the previously uncharted realm of GWs
sourced by fluid perturbations induced in
strong first-order PTs.[We clarify that hereby, by strong transitions, we refer to α = 0.5 (see alpha for definition), still far from the supercooled regime where the scalar field potential energy dominates the energy content of the Universe <cit.>.]
We also update results for weak and intermediate transitions and compare with other results in the literature <cit.>.
We perform approximately 1000 simulations involving a parameter scan over wall
velocities ∈ [0.32, 0.8] in increments of 0.04
for weak (α = 0.0046), intermediate (α = 0.05), and strong (α = 0.5) PTs,
using the Higgsless approach, to clarify the dependence on the underlying physical quantities of various characteristics of the GW spectrum, especially
focusing on its overall amplitude.
We then assess the long-term evolution of the system, and
discuss indications of non-linearities
and their impact on the GW production.
We also include an analysis of the longitudinal and transverse components of the velocity field spectra.
The outline of the paper is as follows.
In sec:setup, we review the GW production in a PT.
We first describe the bubble nucleation history and the conservation
laws of the fluid perturbations in
nucleation_histhydro_eqs.
Then, GW_production presents the
description used to compute the GW spectrum
at present time from the numerical results,
more detail is given in sec:GW_prod.
We review in GW_sw the GW production under the stationary
assumption, found, for example,
in the sound shell model (which assumes linear fluid perturbations) <cit.>, and used to interpret
numerical results <cit.>.
We then
present
a novel model in sw_extended that extends the unequal time correlator (UETC)
to a locally stationary UETC, allowing us to introduce
the effect of the source decay.
In sec:expansion, we extend this model
to estimate the effects of the expansion
of the Universe.
In Sec. <ref>,
we discuss updates to the previous Higgsless simulations <cit.>, and summarize
the physical and numerical setup of the simulations.
In Sec. <ref>, we discuss the numerical results.
We first present a convergence analysis for the kinetic
energy and the integrated GW amplitude in sec:convergence, which is combined with
a convergence analysis that includes
the potential effects of underresolving the fluid perturbations,
presented in sec:kinetic_ed.
The result of these convergence analyses is an estimate
of the kinetic energy fraction
when the entire volume is converted to the broken phase, which is then compared
to the single-bubble value commonly used in GW studies.
We then analyse in decay_K2 the time evolution of the kinetic energy fraction and present a decaying power-law in time fit that accurately reproduces the simulations.
To investigate the origin of the observed decay, we
briefly discuss the development of vorticity
in sec:vort, but we leave a detailed treatment of the onset of vorticity for future work.
The growth of the GW integrated amplitude with the source
duration is presented in GW_spec_time, together with estimates
of the GW efficiency Ω̃_ GW.
We investigate the GW spectral shape in sec:shape and
fit the numerical results to
a double broken power law, providing
estimates for the positions of the relevant
spectral scales.
We summarize our results in sec:summary, and present a template for the GW spectrum.
§ GRAVITATIONAL WAVES FROM A PHASE TRANSITION
In this section we describe the production of gravitational waves from the fluid
perturbations induced in the primordial plasma by the nucleation
of bubbles in a first-order PT.
In nucleation_hist, we review the nucleation history of
broken-phase bubbles that is used
in the Higgsless approach, and in hydro_eqs, we review the
the relativistic hydrodynamic equations.
In GW_production, we
describe how the GW generation is tackled within the Higgsless simulations, and discuss its applicability (see also App <ref> for details).
In GW_sw, we
review the production of GWs under the
assumption that the anisotropic stress UETC is stationary. This is usually assumed in the literature for sound-wave sourcing
of GWs <cit.>. In sw_extended, we propose an extension of the stationary model to a locally stationary UETC,
allowing to account for the dynamics of a source that is decaying in time.
We also provide a proxy to extend our results to the expanding Universe in sec:expansion.
We validate this novel proposed model in sec:results with the results from
our numerical simulations, described in sec:numerical_setup.
§.§ Bubble nucleation histories
In the Higgsless approach <cit.>, a fundamental assumption is that the
broken-phase bubbles reach a terminal wall velocity due to the friction exerted by the fluid particles
<cit.>,
such that can be prescribed as an input of the
simulations. This enables the construction of bubble nucleation histories, encompassing nucleation times and locations, as well as the predetermined expansion of the bubbles.
We assume a probability of bubble nucleation that increases exponentially in time,
P (t) ≃ P_n exp[β(t-t_n )],
where β determines the usual rate of bubble nucleation evaluated at the nucleation time t_n, such that the action S has been Taylor-expanded around this time
<cit.>.
The time dependence is hereby inherited from the
temperature dependence of the tunnelling action S/T and the fact that the temperature scales inversely with the scale factor
β = d/dt(S/T)|_t = t_n = - H T d/dT(S/T)|_T = T_n ,
where H is the Hubble scale.
A detailed description of how such bubble nucleation histories are constructed is found in Ref. <cit.>,
and examples of how modified bubble nucleation histories can be constructed are found in Refs. <cit.>.
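A minimal sketch of how such a history can be drawn is given below (this is an illustration, not the Higgsless code): nucleation times follow the exponentially growing rate of the probability above, integrated over the box volume with a hypothetical normalization Gamma0 playing the role of P_n, positions are uniform, and seeds falling inside already nucleated bubbles are discarded; periodic images and the shrinking false-vacuum volume are ignored for simplicity.

import numpy as np

rng = np.random.default_rng(1)
beta, L, vw = 1.0, 10.0, 0.6                 # nucleation rate, box size (units of 1/beta), wall velocity
Gamma0 = 1e-3                                # overall normalization of the rate (placeholder)
t_grid = np.linspace(-5.0, 5.0, 1001)        # times around the reference nucleation time

bubbles = []                                 # list of (t_n, x_n)
for i in range(len(t_grid) - 1):
    t, dt = t_grid[i], t_grid[i + 1] - t_grid[i]
    n_new = rng.poisson(Gamma0 * np.exp(beta * t) * L**3 * dt)   # attempted nucleations in [t, t + dt]
    for _ in range(n_new):
        x = rng.uniform(0.0, L, size=3)
        # keep the seed only if it is still in the symmetric phase
        if all(np.linalg.norm(x - xb) > vw * (t - tb) for tb, xb in bubbles):
            bubbles.append((t, x))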
We note that in previous numerical work
<cit.>, bubbles
are nucleated simultaneously.
This is expected to have an impact on the spectral peak and the amplitude, but not on the spectral
shape <cit.>.
§.§ Relativistic hydrodynamic equations
The relativistic hydrodynamic equations of motion are derived from
the conservation
of the energy-momentum tensor T^μν, which, in Minkowski space-time reads
∂_μ T^μν=0 .
These equations of motion hold during the PT under the assumption that the
duration of the transition is much shorter than a Hubble time, i.e., β/ H_∗≫ 1, which allows to neglect the expansion of the Universe.
They can also be applied
to the fluid motion after the PT ends if the fluid is dominated
by radiation particles: indeed, the conservation laws then become conformally invariant and
hence reduce
to those in Minkowski space-time by a conformal transformation <cit.>.
We take T^μν to be that of a perfect fluid
T^μν=u^μ u^ν w- η^μν
p ,
where u^μ=γ(1, v^i) is the fluid four-velocity, γ=1 / √(1-v^2) is the gamma factor, w and p are respectively the enthalpy and pressure in the system,
and η^μν = diag{1, -1, -1, -1} is the Minkowski metric tensor.
The equations of motion couple to the state of the vacuum through the bag equation of state, for which we take the sound speed to be c_s^2 = 1/3,
p=1/3 a T^4-ϵ , w=T d p/d T=4/3 a T^4,
with T being the temperature.
The bag constant ϵ <cit.>, defined as the difference in vacuum energy density between the symmetric and broken phases, is thus promoted to a time- and space-dependent quantity,
ϵ(t, x) = { 0 inside bubbles ; ϵ outside bubbles } ,
whose time evolution is uniquely determined for each bubble nucleation history by the terminal wall velocity.
We therefore neglect the (model-dependent)
possibility that heating in the broken phase
can slow down the expansion of the Higgs front
when the latter propagates as a deflagration <cit.>,
and also that strong PTs can lead to runaway behavior <cit.>.
The relevant quantity for boundary conditions of the fluid at the Higgs interface is
the difference in the trace of the energy-momentum tensor, θ = w-4p,
normalized to the enthalpy <cit.>,
α = Δθ/(3w) = 4ϵ/(3w) ,
where the second equality holds within the bag equation of state, for which the trace anomaly reduces to Δθ = 4 ϵ.
We use this quantity in the following to parameterize the strength of the PT in the system <cit.>.
The conservation laws for a relativistic perfect fluid are
∂_t T^00+∇_i T^i0 =0 ,
∂_t T^j0 +∇_i T^i j(T^μ0, ϵ) =0 .
Note that T^i j(T^μ 0, ϵ) depends on the state of the vacuum such that, effectively, the expanding bubbles perturb the fluid as the latent heat of the vacuum is (locally) deposited. For more details on these equations, we refer the reader to the original Higgsless reference <cit.>.
§.§ Gravitational wave production
The GW spectrum as a present-time observable
is computed using the following relation
Ω_GW(k) = (1/ρ_tot) dρ_GW/d ln k = 3 T_GW (H_∗/β)^2 I(k) ,
where ρ_tot is the total energy density of the Universe at present time,
T_ GW≡ (a_∗/a_0)^4 (H_∗/H_0)^2 is the transfer function, with a value
h^2 T_ GW≃ 1.6 × 10^-5 (g_∗/100)^-1/3<cit.>, k is
the comoving wave number, which can be converted to the observable frequency as f = k/(2 π a_0),
and g_∗ and g_∗ s are respectively the relativistic and entropic
degrees of freedom at the time of GW production.
The ratio
of scale factors is a_∗/a_0≃ 8 × 10^-16 (100 GeV /T_∗)(g_∗ s/100)^-1/3.
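For orientation, the redshifting implied by these relations can be scripted as follows; this is a rough sketch that assumes g_*s ≈ g_* and uses the reference values quoted in the text, and the function name and default parameters are ours rather than part of the simulation code.

import numpy as np

def to_today(k_over_beta, I_k, T_star=100.0, g_star=100.0, beta_over_H=100.0):
    """Map the normalized spectrum I(k/beta) to a present-day amplitude and frequency."""
    h2_T_GW = 1.6e-5 * (g_star / 100.0) ** (-1.0 / 3.0)        # h^2 (a*/a0)^4 (H*/H0)^2
    h2_OmGW = 3.0 * h2_T_GW * beta_over_H ** (-2.0) * I_k      # present-day h^2 Omega_GW
    H_star = 1.66 * np.sqrt(g_star) * T_star**2 / 1.22e19      # Hubble rate at T_star, GeV
    H_star_Hz = H_star * 1.519e24                              # GeV -> 1/s
    a_ratio = 8.0e-16 * (100.0 / T_star) * (g_star / 100.0) ** (-1.0 / 3.0)
    f_today = k_over_beta * beta_over_H * H_star_Hz * a_ratio / (2.0 * np.pi)  # f = k/(2 pi a0)
    return f_today, h2_OmGW

print(to_today(k_over_beta=1.0, I_k=1e-2))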
The function I(k) represents the spectrum of the stochastic GW signal, and it is given by a double time integral of the anisotropic stress UETC, E_Π (see twop_Pi in sec:GW_prod), multiplied by the Green's function of the GW equation.
We have introduced the prefactor (H_∗/β)^2 in OmGW_Ik to
express I in normalized time and wave number units
t̃≡ t β and k̃≡ k/β
(see App. <ref>, in particular OmGW_aver, for its derivation from the solution to the GW equation)
I (t̃_∗, t̃_fin, k̃) = (k̃/2) ∫_t̃_∗^t̃_fin ∫_t̃_∗^t̃_fin E_Π (t̃_1, t̃_2, k̃) cos[k̃(t̃_1 - t̃_2)] dt̃_1 dt̃_2 .
Here t̃_∗ and t̃_ fin denote the initial and final times of action of the GW source.
In Eq. (<ref>) the expansion of the Universe is not taken into account,
and it therefore holds only for
a short duration of the GW production process, t_ fin - t_∗≪ H_∗^-1, where H_∗ is the Hubble rate at the PT time t_∗.
It is important to notice that,
even though the duration of the PT, determined by the inverse of the nucleation rate β^-1,
must be short β/H_∗≥ 1, this does not imply that the GW sourcing time is also short. In particular, for GWs generated by fluid motion, the typical time scale of dissipation of the fluid kinetic energy is very long, as it is set by the kinematic viscosity in the early universe <cit.>.
We discuss in sec:expansion how to extend our results, obtained with simulations performed
in Minkowski space-time, to an expanding Universe.
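In practice, the double time integral defining I(k) above can be evaluated by direct quadrature once the UETC has been tabulated on a time grid; the snippet below does this for a toy kernel (the exponential form of E_Π is purely illustrative and not a physical model).

import numpy as np

def I_of_k(k, t, E_Pi):
    """Double time integral of I(k); t in 1/beta units, E_Pi[i, j] = E_Pi(t_i, t_j, k)."""
    phase = np.cos(k * (t[:, None] - t[None, :]))
    inner = np.trapz(E_Pi * phase, t, axis=1)
    return 0.5 * k * np.trapz(inner, t)

t = np.linspace(0.0, 30.0, 600)
E_Pi_toy = 1e-4 * np.exp(-np.abs(t[:, None] - t[None, :]))    # toy stationary kernel
print(I_of_k(2.0, t, E_Pi_toy))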
The anisotropic stress UETC
E_Π (t̃_1, t̃_2, k̃)
can be evaluated
numerically.
In particular, within the Higgsless simulations, it is evaluated from the following expression
(see OmGW and Refs. <cit.>),
given in the normalized units of the simulation t̃, k̃,
and Ṽ≡ V β^3:
I_sim (t̃_∗, t̃_fin, k̃) = k̃^3/(4 π^2 Ṽ) ∫_Ω_k̃ dΩ_k̃/(4π) Λ_ijlm(k̂) [T̃_ij (t̃_∗, t̃_fin, q̃, k̂) T̃_lm^∗ (t̃_∗, t̃_fin, q̃, k̂)]_q̃ = k̃ ,
where Λ_ijlm is the transverse and traceless projection operator
Λ_ijlm (k̂) = P_il P_jm - (1/2) P_ij P_lm , with P_ij = δ_ij - k̂_i k̂_j .
The function T̃_ij (t̃_∗,
t̃_ fin, q̃, k̃)
is computed from the normalized
stress-energy tensor T̃_ij (t̃, x̃), sourcing the GWs, as [see Tij_qk]
T̃_ij(t̃_∗, t̃_ fin, q̃, k̃)
= ∫_t̃_∗^t̃_ fin dt̃ e^i q̃t̃ ∫ d^3x̃ e^-i k̃·x̃ T̃_ij(t̃, x̃) ,
where T̃_ij (t̃, x̃) = w γ^2 v_i v_j/ρ̅, and ρ̅ = 3 w̅/4 + ϵ = 3(1 + α) w̅/4 is the average total energy density [see wpalpha].
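For concreteness, the following Python sketch shows one possible way of accumulating the on-shell (q̃ = k̃) Fourier transform of T̃_ij from stored velocity-field snapshots and of assembling I_sim; the grid size, time stepping and the random test fields are placeholders for the hydrodynamic solver output, and the production code differs in implementation details.

import numpy as np

# A sketch of the accumulation of the on-shell Fourier transform of T_ij and of
# I_sim from velocity-field snapshots; grid size, time step and the random test
# fields below are placeholders, not the solver output.
N, L = 32, 20.0                       # grid points per dimension, box size in 1/beta units
dx, dt, nsteps = L / N, 0.05, 40
V = L**3
rho_bar = 1.0                         # normalization by the average total energy density

# Fourier grid and unit vectors k-hat (the k = 0 mode is left unprojected, harmless here)
k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
kmod = np.sqrt(kx**2 + ky**2 + kz**2)
khat = np.stack([kx, ky, kz]) / np.where(kmod > 0, kmod, 1.0)

acc = np.zeros((3, 3, N, N, N), dtype=complex)        # time-Fourier accumulator at q = |k|
rng = np.random.default_rng(1)

for step in range(nsteps):
    t = step * dt
    w = 4.0 / 3.0 + 0.01 * rng.standard_normal((N, N, N))   # placeholder enthalpy field
    v = 0.05 * rng.standard_normal((3, N, N, N))            # placeholder velocity field
    gamma2 = 1.0 / (1.0 - np.sum(v**2, axis=0))
    for i in range(3):
        for j in range(3):
            Tij = w * gamma2 * v[i] * v[j] / rho_bar
            # simplest (Riemann-sum) accumulation of the space + time Fourier transform
            acc[i, j] += dt * np.exp(1j * kmod * t) * np.fft.fftn(Tij) * dx**3

# transverse-traceless projection Lambda_ijlm and contraction with the conjugate
P = np.eye(3)[:, :, None, None, None] - khat[:, None] * khat[None, :]
TT = np.einsum("il...,lm...->im...", P, acc)
TT = np.einsum("im...,mj...->ij...", TT, P) - 0.5 * P * np.einsum("ij...,ij...->...", P, acc)
spec = np.einsum("ij...,ij...->...", TT, np.conj(acc)).real

# angular (shell) average and the k^3/(4 pi^2 V) prefactor
nbins = N // 2
kbins = np.linspace(0.0, k1d.max(), nbins + 1)
kc = 0.5 * (kbins[1:] + kbins[:-1])
shell_avg = []
for lo, hi in zip(kbins[:-1], kbins[1:]):
    mask = (kmod >= lo) & (kmod < hi)
    shell_avg.append(spec[mask].mean() if mask.any() else 0.0)
I_sim = kc**3 / (4.0 * np.pi**2 * V) * np.array(shell_avg)
print(I_sim[:5])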
From Weinberg
we can directly obtain the GW spectrum at present time [see OmGW_Ik]
using the normalized source T̃_i j(t̃_ init, t̃_ end, q̃, k̃) computed in the
simulations in an interval of time t̃∈ (t̃_ init, t̃_ end),
provided that
the source has stopped operating by the
end of the simulation t̃_ end.
This does not necessarily imply that one needs
to compute the GW spectrum until t̃_ fin in a
simulation, as this might not be computationally
affordable, especially for
slowly decaying sources.
We can also assume that the source has effectively stopped operating at a given wave
number k if the amplitude of the GW spectrum at that wave number has
reached its saturation value, entering its free-propagation
regime.
Therefore, the GW spectrum evaluated from the simulation
would be accurate
for any wave numbers k that have already
reached their saturated amplitudes by
the end of the simulation at t̃_ end,
even if t̃_ end < t̃_ fin.
However, the modes in our simulations have not reached this regime, as was also the case in previous numerical work <cit.>, so one needs to take this limitation into account
when interpreting the numerical results, as
we discuss in sec:results.
For this purpose, we will use the stationary UETC usually assumed for
sound waves (see GW_sw)
and its extension to a locally stationary UETC that can incorporate
the decay of the source with time (see sw_extended).
These models will allow us to extrapolate our results from the final time
of the simulations till the final time of GW production t_ fin to estimate the present-day GW spectrum.
We also note that
numerical viscosity, which in the simulations can
dissipate
kinetic energy over time, could limit the time and k range for which the simulations are actually accurate.
This can potentially affect the transfer of energy from the fluid perturbations to GWs, especially at large frequencies.
§.§ Gravitational waves from stationary sound waves
In the following, we describe the theoretical understanding
of the GW production from sound waves from previous work,
which will be useful to describe our numerical results
in terms of the quantities previously used in the literature.[Although
we refer to sound waves throughout the paper, which strictly applies to the linearized regime
of fluid perturbations, our simulations generally evolve the full
hydrodynamical system starting from compressional motion, which can produce
large fluid perturbations, especially when the PT
is not weak.]
In the context of GW production from sound waves, it has
extensively been assumed that the UETC is stationary, i.e., that it only depends on the time difference t_- = t_2 - t_1, E_Π (t_1, t_2, k) = 2 k^2 K^2 f(t_-, k) <cit.>,
where K = ρ_ kin/ρ̅ is a time-independent kinetic
energy fraction with ρ_ kin = ⟨w γ^2 v^2⟩.
Under this assumption, Ik becomes
I (t̃_∗, t̃_ fin, k̃) =
k̃^3 K^2
∫_t̃_∗^t̃_ fin dt̃ ∫_t̃_∗ - t̃^t̃_ fin - t̃ cos (k̃ t̃_-) f(t̃_-, k̃) dt̃_- .
In the sound-shell model of Refs. <cit.>,
the limits of the integral over t̃_- were extended
to ±∞, allowing the two integrals to be commuted, such that the integral over t̃ simply
yields the source duration, τ̃_ sw = t̃_ fin - t̃_∗,
yielding the linear growth of the GW amplitude
usually assumed in the literature
<cit.>.
The duration of the sound-wave sourcing of GWs
can then be taken to correspond to
the time that it takes non-linearities to develop in the fluid, τ̃_ sw≡τ_ swβ∼ (β R_∗)/√(K), with β R_∗≡ (8π)^1/3 max(v_w, c_s) being the
mean separation of the bubbles at the end of the PT <cit.>.
The aforementioned assumptions about the integration limits
can only be considered
for k τ_ sw∼ kR_∗/√(K)≫ 1,
and when τ_ sw/R_∗∼ 1/√(K)≫ 1,
as shown in Ref. <cit.>, where the results
of the sound-shell model are extended to all wave numbers and
values of R_∗.
Therefore, for small values of K, we expect this
approximation to hold for all relevant wave numbers
k R_∗≫√(K).
Finally, it can be shown that under the same
assumptions, the remaining integral over t̃_- is proportional
to (β R_∗)/c_ s (see, e.g., App. B of Ref. <cit.>), such that
I (t̃_∗, t̃_ fin, k̃) = Ω̃_ GW
K^2 β R_∗ τ̃_ sw S(kR_∗) ,
where Ω̃_ GW corresponds to the GW production
efficiency and S is a normalized spectral shape,
such that ∫ d ln k S(k) = 1.
Using OmGW_Ik, the final GW spectrum
can be written as <cit.>
Ω_ GW (k) = 3 T_ GW Ω̃_ GW K^2
H_∗ R_∗ H_∗τ_ sw S(k R_∗) .
OmGW_sshell shows that one could divide the function
I_ sim^ int, obtained in the simulations [see Weinberg]
and integrated over ln k,
by K^2 β R_∗ T̃_ GW to estimate Ω̃_ GW, where T̃_ GW = t̃_ end - t̃_ init is the time interval of the simulation in which
Weinberg is evaluated, as done
in previous numerical
studies <cit.>.
Alternatively,
as we work with non-dimensional
length scales and times given in units of 1/β,
the previous work on Higgsless simulations presented
the results of the GW amplitude considering the following parameterization
<cit.>,
Q'(k) = (ρ̅/w̅)^2
4 π^2/T̃_ GW I_ sim (t̃_ init, t̃_ end, k̃) ≈ 9 π^2/(4 T̃_ GW) (1 + α)^2
I_ sim (t̃_ init, t̃_ end, k̃) ,
where the prefactor (ρ̅/w̅)^2 in Qprime_lin
takes into account that the authors in Refs. <cit.> used the mean
enthalpy to normalize T_ij in the definition of Q'
instead of the total
energy density ρ̅, as done in our case [see eq:Fourier transform].
However,
we confirm
with simulations in sec:results
that using R_∗ in OmGW_sshell to describe the GW amplitude, instead of using Q'/K^2 <cit.>, allows us to find a value of Ω̃_ GW that is almost independent of v_w.[References <cit.>
found a strong dependence of the parameter Q'/K^2 ∼ I_ sim/(K^2 T̃_ GW) on v_w, as the authors did not incorporate the β R_∗ term in the parameterization.
This led the authors to interpret that I_ sim∼ K_ξ^2 ξ_ shell T̃_ GW. ]
The kinetic energy fraction K is
usually taken to be the
one corresponding to a single bubble, which can be expressed
as
K_ξ≡κ_ξ α/(1 + α) ,
where α characterizes the strength of the PT
[see alpha],
and κ_ξ≡ρ_ kin/ϵ is the single-bubble
efficiency factor <cit.>.
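As a concrete illustration, the snippet below evaluates K_ξ for the three PT strengths considered in this work; for κ_ξ it uses, as a simplifying assumption, the single-bubble efficiency fit of Espinosa et al. valid only in the v_w → 1 (detonation) limit, rather than the full κ_ξ(v_w, α) employed in the analysis.

# Evaluation of K_xi for the PT strengths used in this work. The efficiency
# factor below is the fit of Espinosa et al. in the v_w -> 1 (detonation) limit
# only, used here as an illustrative assumption.
def kappa_detonation_limit(alpha):
    return alpha / (0.73 + 0.083 * alpha**0.5 + alpha)

def K_xi(alpha, kappa):
    return kappa * alpha / (1.0 + alpha)

for alpha in (0.0046, 0.05, 0.5):
    kappa = kappa_detonation_limit(alpha)
    print(f"alpha = {alpha}: kappa ~ {kappa:.3f}, K_xi ~ {K_xi(alpha, kappa):.2e}")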
We compare in sec:results the kinetic energy fraction found in the
simulations with the single-bubble result to take into
account the effect of collisions and non-linear dynamics in K and express the results in terms of K_ξ.
We note that in the sound-shell model, K is time-independent but differs from K_ξ; the exact value of
K/K_ξ∼ O (1) depends on the PT parameters <cit.>.
Similarly, Refs. <cit.> have also reported maximum
values of the kinetic energy fraction in their simulations
K_ max different than K_ξ.
We find in sec:results that OmGW_sshell
holds in our simulations as long as
the kinetic energy does not decay with time after
the PT ends.
Furthermore, we find that when the source
decays, a generalization of the stationary UETC
to include the decay of the source,
which is described in sw_extended, predicts a growth of the GW amplitude with the source duration that is set by the integrated kinetic energy fraction K_ int.
This model is accurately validated by the numerical
results in GW_spec_time.
This allows us to still compute numerically the GW efficiency Ω̃_ GW and estimate the expected final GW amplitude at the final
time of GW production, even when the GW does not grow linearly
with the source duration.
§.§ Gravitational waves from decaying sources
As we show in sec:results,
when the
kinetic energy starts to decay within the duration of the
simulations,
potentially due to the fact that the
system enters the non-linear regime,
we find that the GW amplitude deviates from the linear growth
of OmGW_sshell, impeding its use
to estimate the GW efficiency.
In such cases, we propose a generalization of OmGW_stat, by
assuming a locally stationary UETC that allows us to
include the time dependence
of K^2,
E_Π (t_1, t_2, k) = 2 k^2 K^2 (t_+) f(t_-, k),
where t_+ = (t_1 + t_2)/2.
Then, under the same approximations discussed above that yield
to the linear growth in the source duration τ_ sw (i.e., for k τ_ sw≫ 1 and τ_ sw≫ R_∗), we find that
K^2 τ̃_ sw in OmGW_sshell can be substituted by K_ int^2,
K_ int^2 (t̃_∗, t̃_ fin) ≡∫_t̃_∗^t̃_ fin K^2 (t̃) dt̃ ,
yielding
I (t̃_∗, t̃_ fin, k̃) = Ω̃_ GW K^2_ int (t̃_∗, t̃_ fin) (β R_∗) S(k R_∗) .
We note that K_ int^2 reduces to K^2 (t̃_ fin - t̃_∗) = K^2 τ̃_ sw when K^2 is constant and we recover OmGW_sshell found
in the stationary assumption.
A similar UETC has been recently considered in Ref. <cit.>,
E_Π(t_1, t_2, k) = 2 k^2 √(P_v(t_1, k) P_v(t_2, k)) cos (k t_-), where P_v is the kinetic spectrum.
We note that the double time integral of the latter can be
reduced to the integral in K2rms under the assumptions discussed
in GW_sw, since the t_- dependence can
usually be neglected in the integral over t_+
owing to the
assumed small compact support in t_- <cit.>
(see discussion in Sec. 5 of Ref. <cit.>).
Therefore, we expect both UETC to have the same impact on the
integrated GW amplitude and we emphasize that when we validate the
proposed model in GW_spec_time, we are validating
the overall amplitude but not necessarily the GW spectral shape.
After validating this model with the results of numerical simulations
in GW_spec_time, we can
estimate the GW efficiency Ω̃_ GW even when the kinetic energy
is decaying with time.
In particular,
we show in decay_K2 that the kinetic energy evolution
in the simulations can be in general fit to a decaying
power law,
K (t̃) = K_0
(Δt̃/Δt̃_0)^-b,
where b > 0 and K_0 are
parameters to be fit using the numerical results.
We note that Δt̃ and Δt̃_0 are time
intervals with respect to the time-coordinate origin in the simulations and, hence, we will simply use t̃ and t̃_0 in the following
whenever the expansion of the Universe is ignored.[Since the GW equation is invariant under
time translations when expansion of the Universe is ignored,
we can freely choose the origin of time coordinates t̃_ ref in our simulations.
Then, choosing t̃_ ref = 0,
the time intervals become Δt̃ = t̃ - t̃_ ref→t̃. ]
We will take t̃_0 to be
the time when all the simulation box is in the broken phase.
If we assume that the GW production starts around this time
t̃_∗≃t̃_0, we find using K2rms that the dependence of K_ int^2
with the source duration, τ̃_ sw = t̃_ fin - t̃_∗, is
K^2_ int =
K_0^2 t̃_0 [(1 + τ̃_ sw/t̃_0)^{1 - 2b} - 1]/(1 - 2b) .
This expression reduces to
K_ int^2 → K_0^2 τ̃_ sw
for any value of b
when the source duration is very short τ̃_ sw/t̃_0 ≪ 1, while
for long durations
τ̃_ sw/t̃_0 ≫ 1,
it takes the following asymptotic limits:
lim_τ̃_ sw≫t̃_0 K_ int^2 = K_0^2 t̃_0/(1 - 2 b) (τ̃_ sw/t̃_0)^{1 - 2 b} , when b < 1/2 ,
lim_τ̃_ sw≫t̃_0 K_ int^2 = K_0^2 t̃_0/(2 b - 1) ,
when b > 1/2 .
Hence, K_ int^2 grows unbounded proportional to τ̃_ sw^1 - 2b when 2 b ≤ 1, thus generalizing
the linear growth obtained in the stationary assumption to
any decay rate b.
On the other hand, when the decay rate is larger than 1/2, K_ int^2 saturates
to the value K_0^2 t̃_0/(2b - 1).
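The closed form and its limits can be checked numerically; the following sketch (with placeholder values of K_0, t̃_0 and b) compares the expression above with a direct integration of the power-law decay.

import numpy as np
from scipy.integrate import quad

# Check of the closed form for K_int^2 against direct integration of
# K^2(t) = K_0^2 (t/t_0)^(-2b); parameter values are placeholders.
def K2_int_closed(tau_sw, K0, t0, b):
    if np.isclose(2.0 * b, 1.0):                     # logarithmic limit for 2b = 1
        return K0**2 * t0 * np.log(1.0 + tau_sw / t0)
    return K0**2 * t0 * ((1.0 + tau_sw / t0)**(1.0 - 2.0 * b) - 1.0) / (1.0 - 2.0 * b)

def K2_int_numeric(tau_sw, K0, t0, b):
    val, _ = quad(lambda t: K0**2 * (t / t0)**(-2.0 * b), t0, t0 + tau_sw)
    return val

K0, t0 = 0.03, 10.0
for b in (0.0, 0.25, 0.6):
    for tau_sw in (1.0, 30.0, 1e3):
        print(f"b = {b:.2f}, tau_sw = {tau_sw:7.1f}: "
              f"closed = {K2_int_closed(tau_sw, K0, t0, b):.4e}, "
              f"numeric = {K2_int_numeric(tau_sw, K0, t0, b):.4e}")
# For b < 1/2 the result keeps growing as tau_sw^(1 - 2b); for b > 1/2 it
# saturates at K_0^2 t_0 / (2b - 1).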
§.§ Effect of the Universe expansion
Based on the assumption that the expansion of the Universe can be neglected,
a superposition of sound waves would emit GWs with an amplitude that increases unbounded linearly with the
sourcing time τ_ sw (i.e., until the development
of non-linearities or the kinetic energy is efficiently
dissipated)
if the UETC of the source is stationary and K ≪ 1
(see OmGW_stat2 and Refs. <cit.>).
Similarly, with the proposed locally stationary UETC,
the GW amplitude would increase unbounded proportional to
τ_ sw^1 - 2b when the kinetic energy decays with a decay rate b < 1/2 [see growthrate_K], and it would
only saturate when b > 1/2.
In the stationary UETC case,
to take into account the expansion of the Universe,
the linear increase τ_ sw of OmGW_stat2
can be substituted by Υ (τ_ sw)/H_∗, with the suppression factor Υ (τ_ sw) = H_∗τ_ sw/(1 + H_∗τ_ sw) <cit.>,
where τ_ sw≡τ_ fin - τ_∗
now refers
to an interval in conformal time,
and still apply the
results of our simulations to long-lasting sources.
We note that it is the interval in conformal time,
instead of the interval in cosmic time, that should be
associated to the eddy turnover time
R_∗/√(K) when evaluating the expected time to develop
non-linearities, due to the conformal invariance of the
fluid equations when the fluid is radiation-dominated <cit.>.
When including expansion, the results are no longer
invariant under time translations, so we need to choose
absolute values for conformal times.
Assuming that the PT is short and occurs during radiation-domination, we can
set the initial and final conformal times of GW production to be
H_∗τ_∗ = 1 and H_∗τ_ fin = 1 + H_∗τ_ sw, where we set a_∗ = 1, such that the conformal
Hubble rate is _∗ = H_∗ a_∗ = H_∗.
In the generalized case when we take into account the decay of K(t) [see OmGW_general],
unless K^2 decays faster than 1/t, the integrated K^2_ int would also diverge, requiring to cut off
the GW growth at
a final time of GW sourcing t_ fin = t_∗ + τ_ sw.
Extending
OmGW_stat to apply in an expanding Universe <cit.>,
an effective integrated K^2 that can be used in OmGW_general to
estimate the effect of expansion is the following
K^2_ int, exp≡ (β/H_∗)^2 ∫_τ̃_∗^τ̃_ fin K^2 (τ̃)/τ̃^2 dτ̃ = ∫_0^τ̃_ sw K^2 (τ̃_∗ + δτ̃)/(1 + δτ̃/τ̃_∗)^2 d(δτ̃)
,
where δτ̃≡τ̃- τ̃_∗ and τ̃_∗ = β/H_∗.
Taking into account that
the power-law decay in flat space-time
should be taken in conformal time
K (τ̃) = K_0 (Δτ̃/Δτ̃_0)^-b due to the conformal
invariance of the dynamics for a radiation-dominated fluid,
then we need to express the absolute times in a flat space-time as time intervals in conformal time (see footnote <ref>), Δτ̃= δτ̃+ Δτ̃_∗.
If we again assume that Δτ̃_∗ = Δτ̃_0,
the resulting integral for 2 b ≠ 1 can be expressed as
K_ int, exp^2 = K_0^2 Δτ̃_0^{2b}∫_0^τ̃_ sw (δτ̃+ Δτ̃_0)^{-2b}/[1 + (H_∗/β) δτ̃]^2 d(δτ̃) =
K_0^2 Υ_b (τ_ sw) (β/H_∗) ,
where we have defined a suppression factor
Υ_b (τ_ sw) = Δ F_b
/(1 - 2b) with
Δ F_b ≡ F_b (H_∗τ_ sw) - F_b (0) that reduces to the one found for stationary
sources when b = 0, i.e., Υ_0 (τ_ sw) ≡Υ (τ_ sw) = H_∗τ_ sw/(1 + H_∗τ_ sw)<cit.>.
The function
F_b is the following
F_b (H_∗τ) = [(Δτ_∗ + τ)/Δτ_0]^{1 - 2b} H_∗Δτ_0/(1 - H_∗Δτ_∗)^2 _2 F_1 [2, 1 - 2b; 2 - 2b; - H_∗ (Δτ_∗ + τ)/(1 - H_∗Δτ_∗)] ,
where _2 F_1 is the hypergeometric function.
We highlight that the emergence of a hypergeometric function has no deep physical meaning, since
hyper arises from introducing the chosen fit K(τ)
in Eq. (<ref>).
The relevant physical quantity is the resulting modification
Υ_b with respect to Υ (i.e., with no decay of the source)
obtained from the integral in Kexp_fit.
The value of Δτ̃_0 corresponds to the characteristic time
t̃_0 used in the fit of K^2
in flat space-time.
We note that, in principle, using Δτ̃_∗ = τ̃_∗ - τ̃_0 + Δτ̃_0 in the integrand of Kexp_fit allows to
compute the GW spectrum starting at any time τ̃_∗.
For simplicity, we have chosen Δτ̃_∗ = Δτ̃_0.
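The following sketch (with illustrative values of β/H_∗ and Δτ̃_0, and assuming Δτ̃_∗ = Δτ̃_0) evaluates Υ_b both from the direct integral in Kexp_fit and from the hypergeometric closed form, recovering Υ_0 = H_∗τ_sw/(1 + H_∗τ_sw) for b = 0.

import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

# Comparison of the hypergeometric closed form for Upsilon_b with a direct
# numerical evaluation of the integral in Kexp_fit, assuming dtau_* = dtau_0.
# H_* = 1 units; beta/H_* = 100 and tau_0 = 10/beta are illustrative assumptions.
H, dtau0 = 1.0, 10.0 / 100.0

def F_b(tau, b):
    u = dtau0 + tau
    return ((u / dtau0)**(1.0 - 2.0 * b) * H * dtau0 / (1.0 - H * dtau0)**2
            * hyp2f1(2.0, 1.0 - 2.0 * b, 2.0 - 2.0 * b, -H * u / (1.0 - H * dtau0)))

def Upsilon_closed(tau_sw, b):
    return (F_b(tau_sw, b) - F_b(0.0, b)) / (1.0 - 2.0 * b)

def Upsilon_numeric(tau_sw, b):
    # Upsilon_b = H_* dtau0^(2b) int_0^tau_sw (dt + dtau0)^(-2b) / (1 + H_* dt)^2 d(dt)
    integrand = lambda dt: dtau0**(2.0 * b) * (dt + dtau0)**(-2.0 * b) / (1.0 + H * dt)**2
    val, _ = quad(integrand, 0.0, tau_sw)
    return H * val

for b in (0.0, 0.3, 0.8):
    for tau_sw in (0.1, 1.0, 10.0):
        print(f"b = {b:.1f}, H tau_sw = {tau_sw:5.1f}: "
              f"closed = {Upsilon_closed(tau_sw, b):.4f}, numeric = {Upsilon_numeric(tau_sw, b):.4f}")
# For b = 0 this reproduces Upsilon = H tau_sw / (1 + H tau_sw), and all curves
# approach the linear growth H tau_sw for H tau_sw << 1.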
We also find that for any values of b,
the functions Υ_b (τ_ sw) always
reduce to the linear growth H_∗τ_ sw for short source duration, H_∗τ_ sw≪ 1.
Furthermore, we note that with the inclusion of the Universe expansion,
the relevant integrated K^2 becomes K_0^2 Υ_b (H_∗τ_ sw)
(β/H_∗),
where the β/H_∗ cancels with the normalization introduced in OmGW_Ik
and the resulting Υ_b only depends on the source duration
in
units of the Hubble time, H_∗τ_ sw, which becomes the relevant time scale in addition to the PT duration, β^-1.
Then, the final GW spectrum becomes
Ω_ GW (k) = 3 T_ GW Ω̃_ GW K_0^2
H_∗ R_∗ Υ_b(τ_ sw) S(k R_∗) ,
where Υ_b is obtained from the integrated K_ int^2 and in particular reduces to the expression determined
by hyper when the fit K^2 = K_0^2 (Δτ̃/Δτ̃_0)^-2b holds at all times of GW production.
As discussed above, in decay_K2 we use the numerical
results of the simulations to find the values of the fit
parameters b
and K_0 for different PTs.
Then, we validate the assumption that OmGW_general applies
within the duration of our simulations
in GW_spec_time,
and provide an estimate of the
GW amplitude as a function of the source duration τ̃_ sw.
We compare the resulting evolution in flat space-time (both from the analytical fit and using the numerical results)
with the one obtained including the expected effect of the
Universe expansion for different values of β/H_∗.
We emphasize that the suppression of the time intervals by Υ_b due to the Hubble expansion works as a proxy
to estimate its effect.
§ NUMERICAL SETUP
In this section, we focus on describing the numerical setup of the Higgsless simulations: in subsec:Updates to the simulation, we comment on the updates in the numerical scheme with respect to Ref. <cit.>, and in sec:params, we describe the simulation suite considered for this work.
§.§ Updates to the numerical setup
In this section, we highlight three updates to the
Higgsless simulations with respect to Ref. <cit.>
aimed at improving (1) the time integration scheme,
(2) the mapping
between the discrete and the continuum momenta,
and (3) the criterion for numerical stability in simulations of strong first-order PTs (α = 0.5). For a complete description of the Kurganov-Tadmor (KT) numerical scheme <cit.> used for the Higgsless simulations, we refer to Refs. <cit.>.
Commencing with (1), in practice, the
integral in eq:Fourier transform
must be computed numerically on the grid of space and time. For the space grid, this is accomplished through a fast Fourier transform routine <cit.>. For the time grid, in order to overcome the practical limitation of memory (i.e., storing a large number of 3D time slices), one needs to resort to another method. In the first iteration of the Higgsless simulation code, the discrete integral in time of eq:Fourier transform was approximated as
T̃_ij (t̃_ init, t̃, q̃, ) = ∑_t̃^' =
t̃_init ^t̃δt̃ e^i q̃t̃^'T̃_ij(t̃^', ) ,
i.e., through its Riemann sum, by stacking past time slices weighted by a complex factor from
t̃_ init until
t̃≤t̃_ end
for each time step δt̃ over which the GWs are sourced.
In the current version, we improve upon this scheme by treating T_ij as a piecewise linear function interpolating between the support points, using a similar scheme to the one proposed in Ref. <cit.> for solving the GW equation.
Since the integrand, involving an oscillating exponential as well as the linearly interpolated T_ij, is now integrated analytically, this modified routine
captures the UV behavior at large k better,
relaxing the time-step δt̃ required
to obtain accurate spectra in this
regime (see discussion in Ref. <cit.>).
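A minimal one-dimensional illustration of the two schemes is sketched below for a toy source history and a single frequency q̃; it is not the simulation routine itself, but it shows why the analytic treatment of the oscillating factor improves the accuracy at large q̃ for a fixed time step.

import numpy as np

# Toy comparison of the Riemann-sum accumulation with the analytic integration
# of the piecewise-linear interpolant of T(t), for a single frequency q.
def riemann(ts, T, q):
    dt = ts[1] - ts[0]
    return np.sum(dt * np.exp(1j * q * ts) * T)

def piecewise_linear(ts, T, q):
    total = 0.0 + 0.0j
    for t0, t1, T0, T1 in zip(ts[:-1], ts[1:], T[:-1], T[1:]):
        slope = (T1 - T0) / (t1 - t0)
        e0, e1 = np.exp(1j * q * t0), np.exp(1j * q * t1)
        # integral of exp(i q t) (T0 + slope (t - t0)) over [t0, t1], done analytically
        total += (T1 * e1 - T0 * e0) / (1j * q) - slope * (e1 - e0) / (1j * q)**2
    return total

q, dt_coarse = 12.0, 0.2                               # UV-ish mode, coarse time step
ts = np.arange(0.0, 16.0 + 1e-9, dt_coarse)
T = np.cos(3.0 * ts) * np.exp(-0.05 * ts)              # toy source history
ts_fine = np.linspace(0.0, 16.0, 20001)                # well-resolved reference
ref = piecewise_linear(ts_fine, np.cos(3.0 * ts_fine) * np.exp(-0.05 * ts_fine), q)
print(riemann(ts, T, q), piecewise_linear(ts, T, q), ref)
# with the same coarse time step, the piecewise-linear result stays much closer
# to the reference than the Riemann sum, since the oscillation is treated exactly.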
However, no sizable discrepancies have been observed in the UV range of the
GW spectra through this change
for the dynamical range and choice of δt̃ used in
our simulations.
Continuing with (2), we begin by noting that the first version of the Higgsless simulations employed a sin-prescription for the mapping of discrete momenta on the grid to their
continuum counterparts.
Care must be taken that on the grid of the simulation with N points per dimension, Fourier modes with momenta -l_i and N-l_i (in the ith direction) are equivalent and mapped to the same
momenta in the continuum.
At the same time, momenta of order l_i ≃ N are equivalent to l_i ≃ 0 and should be considered soft.
Depending on whether the observable under consideration is sensitive to the sign of the momentum, this
motivated the mapping
k̃_i = (3 - a)/δx̃ sin(a π l_i/N) ,
where δx̃ = L̃/N, with
a = 2 when the sign is relevant and a = 1 when it is not.
In the current simulations, we generally use a saw description for the momenta
k̃_i =
2 π l_i / (N δx̃ ) , l_i < N / 2 ,
0 , l_i = N / 2 ,
2 π (l_i - N) / (N δx̃) , l_i>N / 2 .
As such, the saw-prescription avoids different descriptions in different contexts (such as the space Fourier transforms for the GW estimate or for the numerical fluid evolution) and maintains a good map of momenta all the way to l_i ≃ N/2, while the previous method is only accurate in the linear regime of the sine function.
At the moment, we do not find substantial differences between
the two implementations, but we expect this implementation to improve
the results when increasing the resolution of the Higgsless simulations.
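For reference, the two momentum prescriptions can be written in a few lines; the following sketch (with placeholder values of N and δx̃) implements both maps and confirms that they agree in the linear regime l_i ≪ N.

import numpy as np

# The two discrete-to-continuum momentum maps discussed above; N and dx are placeholders.
def k_sin(l, N, dx, a):
    """sin-prescription: a = 2 if the sign of the momentum matters, a = 1 otherwise."""
    return (3.0 - a) / dx * np.sin(a * np.pi * l / N)

def k_saw(l, N, dx):
    """saw-prescription: standard FFT frequencies, with the Nyquist mode set to zero."""
    l = np.asarray(l)
    k = np.where(l < N // 2, 2.0 * np.pi * l / (N * dx), 2.0 * np.pi * (l - N) / (N * dx))
    return np.where(l == N // 2, 0.0, k)

N, dx = 64, 20.0 / 64
l = np.arange(N)
# the prescriptions agree in the linear regime of the sine (l << N) and differ near l ~ N/2
print(np.max(np.abs(k_sin(l, N, dx, 2) - k_saw(l, N, dx))[: N // 8]))
print(np.max(np.abs(k_sin(l, N, dx, 2) - k_saw(l, N, dx))))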
The third point (3) concerns the choice of the maximal local velocity a_j+1 / 2 (on a staggered cell in direction j), appearing in Eq. (3.7) of Ref. <cit.>.
In summary, this quantity enters the flux limiter used in the KT scheme to preserve the shock structures in the lattice
by setting a minimal numerical viscosity
that reduces spurious oscillations
and improves the stability of the numerical scheme.
In the limit of small fluid velocities, i.e., for weak and intermediate PTs,
a_j+1/2 = c_s = 1/√(3) is a good choice.
In the case of strong PTs, however, fluid velocities often exceed 1/√(3) and approach 1. To improve the numerical stability of the simulation, we therefore choose a_j+1/2 = 1 for strong PTs.
In the weak regime, the numerical changes due to this choice are negligible but for stronger
PTs, it improves the stability of the code significantly.
In rare occasions and close to shocks, the simulation can lead to unphysical fluid velocities (essentially v>1) as a numerical artifact.
In these cases, we opted to enforce the local fluid velocity to 1. This only happened in isolated points and had
no measurable impact on the conservation of T^0μ or the GW spectra.
In all other regards, the current version of the Higgsless implementation is identical to the first version in Ref. <cit.>.
§.§ Simulations and parameter choices
We list the parameters considered in this study in Tab. <ref>.
We expand upon Ref. <cit.> by including in our parameter scan strong PTs with α = 0.5.
We thus run reference simulations for α∈{ 0.0046, 0.05, 0.5} and wall velocities v_w∈{0.32, 0.36, ..., 0.76, 0.8},
except for strong PTs where v_w=0.32 is excluded due to the non-existence of
deflagrations for α≳ (1 - v_w)^-13/10<cit.>,
implying a total of 3× 13 - 1 = 38 PT parameter points.
To extract our main results,
we run reference simulations for each simulation box size, in which a single reference bubble nucleation history is used for all wall velocities, PT strengths, and grid sizes, thus keeping the sample variance in our reference measurements identical for different values of v_ w, α, and N.
The bubble nucleation histories result in an (asymptotic) number of bubbles
of the order of
N_b ≃L̃^3/(8 π v_w^3),
where L̃≡ Lβ is the simulation box size,
nucleated following a statistical distribution that is exponential in time and uniform in space, as described in nucleation_hist,
and then removing bubbles that nucleate inside the future
causal cone of previous bubbles to take into account
the evolution of the broken-phase volume with time (see Ref. <cit.> for details).
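A schematic version of such a nucleation-history generator is sketched below; the effective rate normalization, the time window, the random seed and the box parameters are illustrative assumptions, and the causal-cone removal uses minimum-image periodic distances.

import numpy as np

# Sketch of a bubble nucleation history: exponential in time, uniform in space,
# with removal of bubbles nucleating inside the causal cone of earlier ones.
# The candidate count, t_max and the random seed are illustrative assumptions.
beta, vw = 1.0, 0.6
L = 20.0 * vw                        # box size L-tilde for L-tilde/v_w = 20
rng = np.random.default_rng(0)

# candidate nucleation times with density ~ e^{beta t} on [0, t_max]
n_cand, t_max = 5000, 15.0
u = np.sort(rng.random(n_cand))
t_cand = np.log(1.0 + u * (np.exp(beta * t_max) - 1.0)) / beta
x_cand = rng.random((n_cand, 3)) * L

bubbles_t, bubbles_x = [], []
for t, x in zip(t_cand, x_cand):
    prev_x = np.array(bubbles_x).reshape(-1, 3)
    prev_t = np.array(bubbles_t)
    d = np.abs(x - prev_x)
    d = np.minimum(d, L - d)                      # periodic (minimum-image) distance
    r = np.sqrt((d**2).sum(axis=1))
    if np.all(r > vw * (t - prev_t)):             # outside all earlier causal cones
        bubbles_t.append(t)
        bubbles_x.append(x)

print(len(bubbles_t), "bubbles kept; compare with L^3/(8 pi vw^3) =",
      round(L**3 / (8.0 * np.pi * vw**3)))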
In our simulations,
L̃/v_w takes on values of 20 and 40, yielding of the order of 300 and 2500 bubbles, respectively.
Using the same numerical resolution, simulations with
L̃/v_w = 40 yield a reduction in the statistical variance by increasing the number
of bubbles and by offering an increased resolution of the measured quantities in the IR regime, while simulations with
L̃/v_w = 20 cover a larger dynamical range in the UV regime.
For comparison, the number of bubbles for L̃/v_w = 40 (N_b ≃ 2500) in our work
and previous Higgsless simulations <cit.> is in general larger
than most[We note that Ref. <cit.> uses N_b = 32558 for
a weak PT with =0.44, while it takes either N_b = 988, 125, or 37 for the rest of the PT parameter space.
In Ref. <cit.>, 5376 bubbles are used
for some weak PTs, while 11 and 84 are used for other
weak PTs, and for intermediate ones.
Reference <cit.> considers 8 bubbles
for all simulations.] of the previous numerical simulations of the fluid-scalar system
<cit.>, especially for intermediate PTs,
allowing
for a reduction of the statistical variance.
A potential issue of small box sizes is that for small wall velocities,
the shock in front of the wall of the first nucleated bubble might collide with its mirror images
(due to the use of periodic boundary conditions) before the
end of the PT.
To avoid this issue, we take the minimum value of to be 0.32
in our simulations, such that the
numerical domain is filled with the broken phase
before the largest bubble reaches the edges
of the simulation box even for the smaller box L̃/v_w = 20.
For each of the 76 parameter points {v_ w, α, L̃/v_w},
we then run simulations with different number of grid points N^3 with
N∈{64, 128, 256, 512}, yielding
a total of 76 × 4 = 304 reference simulations.
Running simulations of different grid sizes allows us
to test the degree of convergence of our numerical
results and
to estimate physical
quantities in the continuum limit by extrapolation (see sec:convergence).
To ensure the stability of our simulations,
we choose the number of time steps N_t =t̃_end/δt̃ to satisfy
the Courant-Friedrichs-Lewy (CFL) condition
δt̃/δx̃<1/4
with δx̃ = L̃/N. We have confirmed that even for strong transitions, increasing N_t beyond this threshold does not change the numerical results.
For each parameter point {v_w, α, L̃/v_w}, we have also run single-bubble simulations to track the convergence of the self-similar fluid profiles, leading to 76 “single-bubble” simulations.
These results are presented in sec:kinetic_ed and will be used
to improve the extrapolated predictions of the “reference” multiple-bubble simulations in sec:convergence.
We note that single-bubble simulations are only run until t̃_ end = L̃/[2 max(v_w, c_s)], which is roughly the time when the fluid shell reaches the edge of the simulation domain.
In addition to the reference simulations, we also run
multiple-bubble simulations based on 9 additional distinct bubble nucleation histories per box size for all strengths, resolutions, and box sizes, for v_w ∈{0.32/0.36, 0.6, 0.8}, where the lower v_w = 0.32 is used for weak and intermediate transitions, and v_w = 0.36 for strong ones.
These velocities correspond to deflagrations, hybrids, and detonations, respectively, except for strong transitions for which also v_w = 0.8 corresponds to a hybrid. This implies a total of 3 × 3 × 2 × 4 × 9 = 648 “seed” simulations from which
the statistical variance of the results can be estimated.
We will use these simulations to provide error bars in our measured quantities,
corresponding to the standard deviation from the 10 different bubble
nucleation histories in sec:results.
All reference and seed simulations are run between 0 < t̃≡ t β < 32 and the GW spectrum
is extracted from
the time interval spanning from
t̃_ init = 16 to t̃_ end = 32.
We set the origin of time coordinates
t̃ = 0 at
a reference value such that the first bubble nucleates at t̃ = 0.5, based on the invariance of our equations under time translations when the expansion of the Universe can be ignored.
For this approximation to be valid we then require
β/H_∗≫t̃_ end = 32.
We specifically cut out the early times up to
t̃_ init = 16 to extract the contributions from the fluid perturbations after the collisions of bubbles, and to reduce the realization-dependent effects on the GW production.
Consequently, we also
suppress contributions to the GW spectrum from the initial collisions
(see also the discussion in Ref. <cit.>).
In this regime, we then compute I_ sim (t̃_ init, t̃_ fin, k̃) that allows us to robustly
test the scaling of OmGW_general and compute
the GW efficiency Ω̃_ GW and the spectral shape
S(k R_∗).
The time t̃_ init = 16 is shortly after
the time when the broken phase fills up
the whole volume of the simulation,
t̃_0 ≃ 10, for the reference nucleation history
with L̃/v_w = 20.
We will consider times t̃ > t̃_0
to fit the time evolution
of the kinetic energy fraction K(t̃) = K_0 (t̃/t̃_0)^-b in decay_K2.
In total, we have performed 1028 simulations, which we summarize in Tab. <ref>, with an estimated time of ∼ 10^6 CPU hours.
We note that
each large-resolution simulation (N=512) takes ∼ 10^3 CPU hours, a quite modest value that indicates the
numerical efficiency
of the Higgsless approach.
§ NUMERICAL RESULTS
Before we present a detailed account of our numerical results, we would like to put them in perspective.
Overall, our results can be summarized by the following findings:
* Simulations of strong first-order PTs with α = 0.5:
We present numerical results for strong PTs covering a wide range of wall velocities and performing systematic checks of the numerical convergence of our results.
For the first time, we obtain the full GW spectra for strong
PTs.[Reference <cit.> also provides estimates of the kinetic energy and the integrated
GW spectra for α = 0.5, but does not
present results about the spectral shape.]
Simulations of stronger PTs are more challenging when it comes to numerical stability and the proper resolution
of non-linearities.
At the same time, stronger PTs lead to a larger GW signal and therefore are
preferred by a potential detection with LISA.
Hence the importance of developing an accurate understanding of the resulting GW spectrum.
We provide in Sec. <ref>
a template based on the expected GW spectrum from
compressional fluid perturbations
extended to decaying sources (see GW_swsw_extended) to incorporate
information from our simulations that can be used for phenomenological studies.
We show in fig:bubbles an example of a simulation
for a strong PT that corresponds to
a deflagration with = 0.36.
* Template parameterizations: Ideally, all our numerical findings can be expressed in terms of
a few physical quantities to facilitate their use in phenomenological studies.
In our simulations, all quantities evaluated are dimensionless such that β/H_∗ does not affect the
numerical results and only appears when we recover the
physical quantities, as indicated in OmGW_Ik.
This motivated the authors in Refs. <cit.> to use the variable Q'
[see Qprime_lin] to interpret the numerical results.
In the present work, we instead characterize the numerical results
based on R_∗ and K_ int^2 [see OmGW_general],
as to allow to capture the essential results in a form as simple as possible
(with an almost invariant GW efficiency Ω̃_ GW),
while allowing for deviations with respect to
the linear growth of the GW amplitude with the source duration, which is expected for
stationary sources (see discussion in GW_swsw_extended).
We also provide in sec:expansion a definition of K_ int, exp^2
that allows to incorporate a posteriori
the effect of the expansion of the Universe [see Kexp_fit].
* Development of non-linearities: For strong PTs, and some intermediate PTs with confined hybrids, we observe several phenomena that probably
stem from non-linear dynamics of the fluid.
In the first place,
we observe a decay in the kinetic energy of
the fluid at later times (after the PT ends)
that could indicate that non-linearities might be leading to a cascading from larger
to smaller scales in the fluid perturbations, making the
viscous dissipation at small scales more effective.
Potentially due to this decay,
we find that the dependence of the
GW amplitude with the source duration starts to deviate
from the expected linear growth, transitioning toward its
saturation amplitude at the latest times of the simulations.
The cascading of kinetic
energy from large to small scales could
also impact the UV part of the GW spectrum,
leading to a modification from the expected k^-3 found
for sound waves
<cit.> towards a shallower spectrum, for example like the one that is found
in vortical turbulence, k^-8/3<cit.>.
We test the numerical robustness of our results along this section and comment on future studies that would be required to confirm some of our findings.
In particular, we present in sec:vort a preliminary study
of the potential development of vorticity in our simulations,
and show the vorticity found for an example simulation in
the lower panel of fig:bubbles.
To test the validity of our numerical results,
we will pay special attention to the following points, to be addressed throughout this section.
In sec:convergence, we study the convergence
of our results with respect to the grid spacing, δx̃.
In decay_K2, we study the time dependence of the fluid kinetic energy fraction K, and fit
the numerical results to the decaying power-law presented in sw_extended.
In GW_spec_time, we test the expected
scaling of the GW spectrum
with K_ int^2 and R_∗,
evaluating the evolution of the integrated
GW amplitude with the source duration,
and compute the GW efficiency
Ω̃_ GW, according to OmGW_general.
We also provide in GW_spec_time an estimate of the
expected GW amplitude in a flat Minkowski space-time, based on the numerical results of
sec:convergenceGW_spec_time,
and in an expanding background, using the model presented
in sec:expansion.
Finally, in sec:shape, we study the spectral shape of the GW spectrum, paying special attention to the UV
regime, where we find deviations with respect to the
expected slope in the sound-wave regime.
However, we note that
to be able to confirm the presence of a forward
cascade in our simulations, and therefore the accuracy of the fits provided for the time decay of K and the resulting GW amplitude,
a detailed study of the dependence of the kinetic spectra on the numerical parameters would be required,
checking whether
the cascading within an inertial range of scales
is developed and unaffected by the numerical parameters.
Furthermore,
one should keep in mind that it is not in general expected that all
wave numbers evolve with the source duration in the same way, as shown in Ref. <cit.>.
§.§ Convergence analysis of the kinetic energy and GW amplitude
In the present study, for each parameter point {α, v_w}, simulations at four resolutions N∈{64, 128, 256, 512} and two box sizes L̃/v_w ∈{20, 40} have been performed.
Since our Higgsless simulations use relatively sparse grids compared to simulations with scalar
fields <cit.>, the resolution
can become an issue when reproducing some of the expected self-similar profiles induced by the uncollided nucleated bubbles at the initial stages of the simulations, especially for parameter points with v_w ≲ v_ CJ,
where v_ CJ is the Chapman-Jouguet speed, determining the transition between hybrids and
detonations.
In these situations, the fluid profiles
become very thin hybrid profiles as v_w approaches v_ CJ.
Since the Chapman-Jouguet speed is v_ CJ = {0.63, 0.73, 0.89} for α = {0.0046, 0.05, 0.5}, respectively, for our choice of parameters,
we have very thin profiles when v_w = 0.6 and the PT is weak, when v_w = 0.72 and the PT has intermediate
strength, and when v_w = 0.8 and the PT is strong.
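These values follow from the standard bag-model expression for the Chapman-Jouguet velocity, v_CJ = (c_s + √(α^2 + 2α/3))/(1 + α) with c_s = 1/√3, as the short check below (a sketch, not part of the analysis code) confirms.

import numpy as np

# Quick check of the quoted Chapman-Jouguet velocities (bag model).
def v_CJ(alpha):
    cs = 1.0 / np.sqrt(3.0)
    return (cs + np.sqrt(alpha**2 + 2.0 * alpha / 3.0)) / (1.0 + alpha)

for alpha in (0.0046, 0.05, 0.5):
    print(f"alpha = {alpha}: v_CJ = {v_CJ(alpha):.2f}")
# prints 0.63, 0.73, 0.89, matching the values quoted above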
Especially in these cases, the resolution in ξ≡ r/(t - t_n),
with r being the radial distance to the nucleation location and t_n
the time of nucleation,
might not be enough to resolve the
self-similar profiles at the time when the bubble collides,
since the resolution in ξ for a fixed N is initially low and then improves as
time evolves.
We study this rate of convergence in sec:kinetic_ed using single-bubble simulations to understand for each
PT what is the required time for the self-similar profiles to converge to the expected ones (see also discussion in Ref. <cit.>).
For reference, we show in sec:kinetic_ed (see fig:1d_profiles) the self-similar profiles
of the fluid perturbations expected to be
produced by uncollided expanding bubbles <cit.>, computed using CosmoGW<cit.>.
In the following, we first analyze the convergence of the numerical
results for multiple-bubble reference runs to provide estimates of the integrated kinetic
energy K_ int (t̃_ init, t̃_ end) and GW amplitude I_ sim^ int≡∫ I_ sim d ln k, where I_ sim (t̃_ init, t̃_ end, k̃)
is defined in Weinberg, for initial and final times t̃_ init = 16 and t̃_ end = 32.
These integrated quantities will be used in GW_spec_timesec:shape to estimate respectively the
GW efficiency Ω̃_ GW and the spectral shape S(kR_∗).
We also provide estimates for
the kinetic energy fraction K_0 evaluated
at the time when the PT completes, t̃_0 ≃ 10.
We will then attempt to improve this estimate by including
the results of single-bubble runs studied in sec:kinetic_ed,
tracking the degree of convergence of each bubble at the
time when they collide, and leading to a new estimate, K_0, defined in eq:calK_0 definition (see sec:kinetic_ed for details).
We note that we will only use the improved estimates K_0 to make predictions of the resulting GW
template presented in sec:summary, while we study
K_0 computed from the reference runs for the remaining of
this section.
In addition to the required resolution for thin self-similar profiles before collisions, as the fluid perturbations become non-linear, we expect large numerical resolutions to be required to
fully capture the dynamics during and after collisions.
In order to take these effects into account, we study
the numerical results as a function of N and attempt to
potentially improve our measurements by extrapolating
our results to N→∞, based on the underlying assumption that the extrapolation
method obtained for the computed values of N also applies in this limit.
On general grounds, it is possible to define a particular number of grid points N_∗ such that for N ≫ N_∗ a simulation has reached
a converged solution, i.e., the numerical results are unaffected
within some acceptable tolerance and, hence,
we can assume that they accurately represent their continuum values.
Empirically, we find that when the grid resolution is insufficient,
the kinetic energy fraction K is in general underestimated,
as the values of the velocity profiles around
the peak are underresolved.[As we show in fig:extrapolation,
we find this to be the case for all simulations when studying K_0 at
the reference time t̃_0.
For the integrated K over time, described by K_ rms,
this is also the case for all simulations but for
a few cases with large v_w when α = 0.5.
In these exceptional cases, as the self-similar profiles become better resolved, hence increasing
the numerical K before collisions,
the stronger decay induced by the increase of
the non-linearities can eventually overcome
the initial increase of K,
leading to a decrease of the integrated K_ rms.
We study the time evolution of K for different N in decay_K2.
]
This motivates us to use the following function when extrapolating the
numerical values of the kinetic energy fraction:
K = K_∞/[1+(N_*/N)^a] ,
where a, N_∗, and K_∞ are found by fitting the numerical results as a function of N.
In the few cases when this fit is not valid, in particular for the largest values of K
(see footnote <ref>), we take K_∞
to be the value computed in the simulations for N = 512, and the error ε_ K is then estimated comparing this value to the one obtained for N = 256.
We note that the value of a in eq:E_kin_convergence
indicates
the degree of convergence of the numerical results, as the relative error in the kinetic energy ε_K can be expanded as
ε_K ≡ (K_∞ - K)/K_∞ = (δx̃/δx̃_∗)^a + O (δx̃^2a) ,
where δx̃_∗ = L̃/N_∗.
In some simulations, as we discuss later, we can observe that the numerical results have already converged for N = 512, indicated by a small
relative error ε_K ≡ |K - K_∞|/K_∞
(see values in Tab. <ref>).
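In practice, the extrapolation amounts to a standard least-squares fit of eq:E_kin_convergence to the four measured resolutions; the sketch below illustrates the procedure with synthetic placeholder values of K(N) rather than the measured ones.

import numpy as np
from scipy.optimize import curve_fit

# Sketch of the N -> infinity extrapolation; the data points are synthetic placeholders.
def K_of_N(N, K_inf, N_star, a):
    return K_inf / (1.0 + (N_star / N)**a)

N_vals = np.array([64, 128, 256, 512], dtype=float)
K_vals = np.array([0.0045, 0.0060, 0.0068, 0.0071])   # illustrative values, not from the paper

popt, _ = curve_fit(K_of_N, N_vals, K_vals, p0=[K_vals[-1], 100.0, 1.5])
K_inf, N_star, a = popt
eps_K = np.abs(K_vals[-1] - K_inf) / K_inf            # relative error of the N = 512 run
print(f"K_inf = {K_inf:.4f}, N_* = {N_star:.1f}, a = {a:.2f}, eps_K = {eps_K:.2%}")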
The result of applying the convergence analysis based on eq:E_kin_convergence to the reference runs
with multiple bubbles
is shown in fig:extrapolation,
where we first consider the value of the
kinetic energy fraction K_0 (left panels).
For the full evolution of the kinetic energy, see Sec. <ref> and Fig. <ref>.
The middle panels of fig:extrapolation correspond to the rms
kinetic energy fraction, computed from K_ int
[see K2rms] as T̃_ GW K_ rms^2 ≡ K_ int^2,
where T̃_ GW≡t̃_ end - t̃_ init = 16
corresponds to the time interval
over which the GW spectrum is computed.
We note that we focus on the simulations with L̃/v_w = 20 as these
cover smaller scales than those with L̃/v_w = 40
for a fixed
N, providing a better resolution of the kinetic
energy density in the UV regime.
We show in Tab. <ref> the resulting
values of the fit
parameters of eq:E_kin_convergence for the rms kinetic energy fraction
K_∞^ rms, normalized by the single-bubble
kinetic energy fractions
K_ξ [see Kxi], and a_K,
and the relative errors ε_K as defined in rel_errorK (see also footnote <ref>),
for the set of PTs shown in fig:extrapolation.
We expect K_ rms^2 T̃_ GW to be the relevant
kinetic energy fraction entering the computed GW amplitude, I_ sim^ int, according to the model proposed in sw_extended.
We find empirically that the exponent a in (<ref>) usually varies between one and two in our numerical
simulations, indicating that the dynamics of the system reduces
the effective degree of convergence with respect to the
one expected from the numerical scheme, which corresponds to second order <cit.>.
We expect that further decreasing δx̃ would be required to find an exact quadratic dependence of the error on δx̃.
In any case, we note that for most of the PTs (besides highly confined profiles with ≲ v_ CJ), we already find absolute errors below 10%, as indicated in Tab. <ref>.
For confined profiles,
the relative error is large, and we need to take into account that the
extrapolated result K_∞ presents a larger degree of uncertainty.
In these simulations, we expect
the lack of convergence to also become visible in the GW spectra: For example, we observe that the expected UV behavior, S(k) ∼ k^-3, found in the sound-shell model <cit.> and in PT simulations
<cit.>
is in these
cases obscured by an exponential decay (see the discussion in sec:shape and the fit used in Ref. <cit.>).
Furthermore, we also apply the extrapolation of eq:E_kin_convergence
to
the integrated GW spectrum obtained in the code I_ sim^ int.
The resulting values as a function of δx̃ are shown in the lower panels
of fig:extrapolation, while the numerical values of the extrapolated
I_∞^ int, the fit parameter a_ I,
and the relative error ε_ I are given in Tab. <ref>.
We note that for the GW amplitude, the fit of eq:E_kin_convergence is
not valid when the PT is strong for most of the wall velocities,
while it remains valid
for weak and intermediate PTs.
In the cases when the fit is not valid, we take the extrapolated values in fig:extrapolation as those obtained for the
largest resolution runs with N = 512.
However, the relative errors comparing simulations with N = 512 and N = 256 are already very small compared to the errors in K, making the estimate
of I_ sim^ int less sensitive to numerical inaccuracies
than the estimate of K_ int.
Finally, we also display in fig:extrapolation the kinetic energy fraction K_0 at the reference time t̃_0 ≃ 10. As we will see later, K_0 is essential to determine the resulting GW amplitude, as can be
observed from OmGW_generalKint_decay.
Therefore, to correctly capture the GW amplitude we need to accurately
reproduce K_0.
A first attempt is to directly take the extrapolated values K_∞^0 using
the fit of eq:E_kin_convergence (see upper panel of fig:extrapolation).
However, this result does not take into account that the self-similar profiles
have not reached convergence at the time when the bubbles collide,
leading to underestimating the ratio K_0/K_ξ.
We present a methodology that estimates the required correction for this underresolution in sec:kinetic_ed.
The corrected values K_0/K_ξ are presented in fig:kappa_eff_kappa, together with
a modified “efficiency” κ_0, such that
K_0 ≡κ_0 α/(1 + α) ,
in analogy to Kxi.
We find a general trend that K_0/K_ξ≳ 1 for wall velocities below a threshold value,
while K_0/K_ξ≲ 1 above it.
If we take an average value of K_0 over the PT parameters v_w
and α, we find
K_0 = 0.84^+0.24_-0.29 K_ξ ,
where the super and subscripts are the maximum and minimum values found over all
wall velocities,
indicating that the typical use of K_ξ for the kinetic energy would overestimate the GW production by a factor (K_ξ/ K_0)^2, which can be as large as 0.55^-2∼ 3.3, for example when α = 0.5 and v_w = 0.8 (see fig:extrapolation).
For different PT parameters (α and ), one could take the values of fig:kappa_eff_kappa
to predict the correction of the resulting GW amplitude.
According to the locally stationary UETC presented in sw_extended, we find that K_ int^2 ≡ K_ rms^2 T̃_ GW
is
the relevant quantity determining the GW amplitude.
Therefore, based on the results of Ref. <cit.>,
Ref. <cit.> used the estimated value K_ rms≃ 0.6 K_ξ.
However, we note that using the power-law fit presented in sw_extended and validated in decay_K2,
we can relate the final GW amplitude
to the corrected values K_0 and the decay rate b
(as we will do in GW_spec_time).
§.§ Time evolution of the kinetic energy
In this section, we evaluate the time evolution of the
kinetic energy fraction K for different numerical resolutions
N.
We show the results in fig:E_kin_evolution for the largest
resolution runs N = 512 (upper panel), and for a range of N = {64, 128, 256, 512} (lower panel).
We find that the kinetic energy is underestimated for low resolution, as argued in sec:convergence (see upper and middle panels of fig:extrapolation, and lower panel of fig:E_kin_evolution).
This is, for most PT parameters, the case at early times, before the bubbles
collide and while the self-similar profiles develop in each
nucleated bubble (see sec:kinetic_ed).
At later times (usually t̃≳t̃_0 ≈ 10), for weak and most of intermediate
PTs we find a decay of the kinetic energy with time that becomes less pronounced as we
increase the resolution N, directly related to the
underresolution of K at earlier times.
Since the kinetic energy is typically damped by numerical viscosity,[In the Kurganov-Tadmor scheme used in our simulations <cit.>, the numerical viscosity is expected to scale proportional to (δx̃)^3<cit.>.]
it is in general expected that the
decay is less pronounced when the grid spacing is reduced, as can be observed for the weak
and most of intermediate PTs.
Besides this general trend, the opposite is found
for strong PTs, and intermediate ones with thin hybrid profiles (≲ v_ CJ), such that
in the decaying phase of the kinetic energy,
the decay becomes
steeper with smaller grid spacing.
To study the dependence of the decay rate with resolution, we show in the lower panel of fig:E_kin_evolution the
evolution of K (t̃) normalized by the corresponding values of K_0, found using the fit
presented below.
In these cases,
the enhancement of the decay with resolution might indicate
that as the
fluid shells carry larger kinetic energies at the time
of collisions, non-linearities are enhanced and might eventually overcome the effect of numerical dissipation.
From energy conservation,
we then expect that as non-linearities develop, kinetic energy transfers from
larger
to smaller scales where it can be converted to thermal energy at the scale
determined by numerical viscosity.
In addition to the time decay of K, we also find oscillations in time that
can be associated to the sound-wave regime, where an
oscillatory conversion between kinetic and thermal
energies is expected, and confirmed by the fact that we
conserve T^00 to machine precision (see results in App. C of Ref. <cit.>).
We then fit the numerical results at times t̃ > t̃_0
when the PT is complete, using
the following
power-law decay with time,
effectively getting rid of the oscillations over time,
K (t̃ > t̃_0) = K_0 (t̃/t̃_0)^-b ,
where b indicates the power-law decay rate of K.
This power-law decay prescription
accurately fits the numerical data (see fig:E_kin_evolution),
and we have checked that it remains accurate
up to t̃_ end = 64
for an example strong PT with α = 0.5 and = 0.8.
We define the half-life of the kinetic energy as the
time when K (t̃_0 + t̃_1/2) = K_0/2, i.e.,
t̃_1/2 = (2^1/b-1) t̃_0 .
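The fit and the derived half-life can be obtained along the following lines; the time series below is a synthetic stand-in for the measured K(t̃), with placeholder values of K_0 and b.

import numpy as np
from scipy.optimize import curve_fit

# Sketch of the power-law fit of K(t) for t > t0 and of the resulting half-life;
# the data are synthetic, not simulation output.
t0 = 10.0
t = np.linspace(t0, 32.0, 200)
K_data = 0.03 * (t / t0)**-0.35 * (1.0 + 0.02 * np.sin(8.0 * t))   # toy data with oscillations

def K_fit(t, K0, b):
    return K0 * (t / t0)**-b

(K0, b), _ = curve_fit(K_fit, t, K_data, p0=[K_data[0], 0.3])
t_half = (2.0**(1.0 / b) - 1.0) * t0        # time after t0 at which K drops to K0/2
print(f"K0 = {K0:.4f}, b = {b:.3f}, t_1/2 = {t_half:.1f}")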
We display in Fig. <ref> the fit of the decay index b (left) as well
as the half-life t̃_1/2 (right)
as a function of v_w for weak, intermediate, and strong PTs.[This time, we refrain from extrapolating to infinite resolution due to the complex behavior of the index.
Hence, we use the values found for the largest resolution runs with N = 512, and with the best resolution in the UV regime, L̃/v_w = 20.]
In the right panel of Fig. <ref>, we also plot the eddy turnover time based on
the kinetic energy ratio expected for uncollided bubbles,
t̃_ eddy = (β R_∗)/√(K_ξ), which
corresponds to the time scale of
fluctuations in the plasma and it is expected to determine
the time decay into turbulent motion.
We compare t̃_1/2 to t̃_ eddy in fig:decay_2.
The eddy turnover time is t̃_ eddy≃ 5 for strong
PTs, t̃_ eddy≃ 10–30
for intermediate PTs, and t̃_ eddy∼ O (100) for weak PTs.
Therefore, we expect that the time scale for non-linearities to develop is reached within the simulations for strong
and some intermediate PTs, while for other
intermediate PTs, the eddy turnover time
occurs towards the end of our simulations.
To evaluate the development of vortical motion in
our simulations, we briefly discuss in App. <ref>
the presence of vorticity in our simulations and present some preliminary results.
For weak transitions, the rate of kinetic energy damping is greatly reduced as we increase the resolution,
which we interpret as a reduction of the numerical viscosity
(see footnote <ref>).
This observation therefore means that for weak transitions, decay is always dominated by numerical viscosity.
Only for the hybrid solution with v_w = 0.6 ≲ v_ CJ, when larger fluid velocities can be achieved (see self-similar profiles in fig:1d_profiles),
does b (and hence t̃_1/2) appear to stagnate with increasing
resolution, pointing towards the onset of resolving the physics
responsible for the damping.
However, in this extreme case the fluid profile is highly confined and
the simulations are far from reaching the converged profiles (see sec:kinetic_ed), so it is not completely clear whether the obtained
decay rate b is physical.
The results are more interesting in the case of intermediate transitions.
For both small and large v_w, corresponding to subsonic deflagrations
and detonations respectively, b decreases with increasing resolution.
However, for a large range of intermediate velocities v_w ∈{0.52, 0.6, 0.68}≲ v_ CJ, the trend is reversed for the highest resolutions. We interpret this point of reversal as a transition from a decay of the kinetic energy
dominated by numerical viscosity to a decay determined by the development of non-linearities.
We note that in this case, some of the confined hybrids are still underresolved but this is no longer the case for the subsonic
deflagration with v_w = 0.52, indicating that the decay rate seems to
be physical (see fig:E_kin_evolution and sec:kinetic_ed).
Furthermore, a similar decay of the kinetic energy was
already found for intermediate PT simulations
of the scalar-fluid system <cit.>.
For strong transitions, we are universally in the regime where increasing the numerical resolution N leads to
a larger decay rate, indicating that the physical
non-linear decay dominates over the numerical viscosity.
As discussed above, this is expected to be the case, as
the expected time scale for non-linearities to develop, i.e., the
eddy turnover time, is around t̃_ eddy≃ 5,
occurring during the duration of our numerical simulations.
§.§ Time evolution of the integrated GW spectrum and GW efficiency
We show in fig:GWgrowth (upper panel) the time evolution of the
integrated GW amplitude I_ sim^ int≡∫ I_ sim d ln k,
where I_ sim (t̃_ init, t̃, k) is evaluated at t̃_ init = 16, and we allow t̃ to vary from t̃_ init to t̃_ end = 32 [see Weinberg].
We find that for weak and intermediate PTs,
the evolution with the source duration t̃ - t̃_ init is close to linear in most cases (unless ≲ v_ CJ),
as expected from the usual stationary
assumption in the sound-wave regime (see GW_sw)
and as argued in previous numerical work
<cit.>.
In these situations, K does not significantly evolve with
time within the simulations (see fig:E_kin_evolution).
However, when the decay of K is significant, we observe deviations
with respect to the linear growth, as expected from the generalized
locally stationary UETC proposed in sw_extended.
To test the validity of OmGW_general, found under this assumption,
we plot in the lower panels of fig:GWgrowth the
following ratio
Ω̃_ GW (t̃) = I_ sim^ int (t̃_ init, t̃)/K^2_ int (t̃_ init, t̃) (β R_∗) ,
with the objective to estimate the
GW efficiency Ω̃_ GW while including
the effect of the decay of K.
We note that when K does not significantly decay with time,
we recover the linear growth, K_ int^2 → K^2 T̃_ GW.
For α = 0.0046 and v_w = 0.6, which corresponds to a confined
hybrid (see fig:1d_profiles), K presents a sharper decay
with time than
the other PTs (see fig:E_kin_evolution) that translates
into a decay of the GW amplitude with respect to the linear
growth.
A similar situation occurs for confined hybrids with α = 0.05
and intermediate wall velocities
v_w ∈{0.6, 0.68}≲ v_ CJ.
We see in fig:GWgrowth that in these cases again
the GW amplitude grows slower than linearly with the
sourcing time.
However, the proposed
ratio Ω̃_ GW (t̃) of OmGW_numerical,
defined with K_ int^2, is closer
to be constant with time (see lower panel of fig:GWgrowth), allowing us to still compute the GW
efficiency and spectral shape.
We note that the initial time growth of Ω̃_ GW is a numerical artifact from abruptly
starting the GW computation at t̃_ init.
For strong PTs, the kinetic energy decays significantly for all wall velocities within the time
of our simulations.
We then see that the growth of the GW amplitude
deviates soon from the linear growth and one
needs to incorporate the effect of the time evolution of K.
Again,
we show in the lower panels of fig:GWgrowth that the
ratio Ω̃_ GW (t̃) is very close to
constant, validating the generalization of the linear
growth to a growth proportional to K^2_ int (t̃)
as found by the model proposed in sw_extended.
Therefore, as long as the fit K(t̃') = K_0 (t̃'/t̃_0)^-b accurately represents
the numerical results at times t̃' ∈ [t̃_ init, t̃] (see fig:E_kin_evolution),
the growth of the
GW amplitude is
I_ sim^ int (t̃_ init, t̃) = Ω̃_ GW K_0^2 (β R_∗)
t̃_ init (t̃_0/t̃_ init)^{2b} {[1 + (t̃ - t̃_ init)/t̃_ init]^{1 - 2 b} - 1}/(1 - 2 b) ,
as found in Kint_decay for the case
t̃_0 = t̃_ init.
Since we find that Ω̃_ GW (t̃)
is roughly constant in time after incorporating K_ int^2
in the scaling of the GW amplitude (see fig:GWgrowth),
we take this value at the end of the simulations t̃_ end.
The resulting GW efficiency
Ω̃_ GW is shown in fig:GW_efficiency
for different numerical resolutions N and for both box sizes
L̃/v_w = 20 and 40.
We show the values of Ω̃_ GW, computed from OmGW_general
using the extrapolated values to δx̃→ 0 of I_ sim^ int and K_ int,
as described in sec:convergence (see also fig:extrapolation).
We compare the extrapolated efficiencies with those found using
the sound-shell model <cit.>, under the assumptions described in
GW_sw (see also App. B of Ref. <cit.>),
and those obtained from numerical simulations of the full coupled scalar field-fluid system
<cit.>.
However, we note that the latter are found using
simultaneous bubble
nucleation, which in general leads to smaller values of
Ω̃_ GW compared to exponential
nucleation (see Tabs. 2 and 3 in Ref. <cit.>).
We have modified the values of Ω̃_ GW
from Refs. <cit.> to take into account that they consider β R_∗ = (8π)^1/3, instead of the corrected β R_∗ = (8π)^1/3 max(v_w, c_s) that we use in OmGW_general.
Furthermore, we note that the integrated Ω̃_ GW might be modified by the structure that
develops below the peak, described in Ref. <cit.>.
However, we neglect this effect for two reasons: (1) we expect that the inclusion of the small wave numbers in
Ω̃_ GW is negligible when
we are under the assumptions described in GW_sw, (2)
the dynamical range in the IR available in our and previous simulations is usually not large enough to clearly reconstruct
the exact spectral shape described in Ref. <cit.>, see results in sec:shape.
Overall, we find reasonably good agreement for weak PTs, while deviations start to be more significant for intermediate and
strong PTs.
The error bars in fig:GW_efficiency show the standard deviation obtained from 10 different bubble nucleation histories, corresponding to the “seeds” set of simulations listed in Tab. <ref>.
For weak transitions (α = 0.0046), the extrapolated values obtained from the
Higgsless simulations accurately reproduce not just the numerical values but also the trend of Ω̃_ GW with v_w observed in both the sound-shell model and the coupled scalar field-hydrodynamical simulations.
This is important for two reasons: (1) the agreement between three independent approaches lends support to the conclusion that the general trend may be physical; (2) since weak transitions are expected to be described
by linear dynamics, the limit in which the sound-shell model applies, we would a priori expect the Higgsless simulations to accurately reproduce the sound-shell model results, validating this model.
However, as α becomes larger, non-linearities become more relevant and
full 3D simulations are necessary to push beyond the reach of the sound-shell model.
Only a few points of reference data for Ω̃_ GW exist for intermediate PTs (α = 0.05) and
so far none[Reference <cit.> presents results of Ω_ GW/Ω_ GW, exp = I_ sim^ int/ I_ exp^ int, where I_ exp^ int
would correspond to the value found using OmGW_sshell with K = K_ξ and Ω̃_ GW = 10^-2.
The ratio that Ref. <cit.> presents therefore corresponds to a combined
estimate of Ω̃_ GW K_ rms^2/K_ξ^2 and extraction
of Ω̃_ GW for comparison is not straightforward.] for strong PTs (α=0.5).
We note that reference data points
Ω̃_ GW in Refs. <cit.> are computed
assuming a linear growth with the source duration as in OmGW_sshell.
Hence, incorporating K_ int as in OmGW_general can modify the
value of Ω̃_ GW when the source decays.
The extrapolation method described in Sec. <ref> and presented in Fig. <ref> as solid lines seems to behave very well, delivering agreement between the numerical results from both
simulation domains L̃/ = 20 and 40.
For intermediate PTs, we begin to see deviations from the sound-shell model, in particular for v_w = 0.68 ≲ v_CJ.
We observe that the v_w-dependence seen for weak transitions has flattened and that the overall efficiency Ω̃_ GW is larger.
Our findings are consistent with the two available data points for scalar field-hydrodynamical
simulations from Ref. <cit.>, indicating a departure from linearity in the fluid perturbations and, hence
from the sound-shell model.
We note
that discrepancies with the numerical results of Ref. <cit.> might be due to the different nucleation histories
considered (simultaneous in Ref. <cit.> and exponential in our simulations). Again, extrapolation seems overall good as the extrapolated values agree well for the simulations
with L̃/ = 20 and 40.
For strong PTs, we observe even larger efficiencies overall. Besides, the impact of the non-linearities seems to
wash out again the dependence of Ω̃_ GW on the wall velocity.
We note that for weak PTs, the
relative difference between the extrapolated values and those
obtained in our largest resolution runs with N = 512 is still
large.
Hence, the exact values provided in fig:GW_efficiency might still present
numerical errors related to those
listed in Tab. <ref>.
Indeed, we find these potential errors to be larger for weak PTs (up to 50%
for extremely thin profiles, and usually below 10% otherwise),
where we can compare our extrapolated results to those found
by the sound-shell model, while for intermediate and strong PTs,
the errors of our extrapolated values
are below 10% for all wall velocities.
As a final note, we point out that the definition of Ω̃_ GW in terms of the integrated kinetic energy K^2_ int reduced the dependence on the wall velocity and the strength of the PT significantly — compared to normalizing
it to a stationary kinetic energy ratio (e.g., the one found
for the self-similar bubbles or K_0) multiplied by the source
duration.
Partially, this is due to the decay
found in the kinetic energy that is not
captured by the stationary assumption for the UETC (see discussion
in sw_extended and numerical results in decay_K2).
Furthermore, the universality
of Ω̃_ GW is also due
to the use of OmGW_general instead of Q'/K^2 considered
in previous work <cit.>, as discussed in footnote <ref>.
The average values of Ω̃_ GW
over v_w for each strength α from the simulations are the
following
10^2 Ω̃_ GW =
1.04^+0.81_-0.67 , for α = 0.0046 ;
1.64^+0.29_-0.13 , for α = 0.05 ;
3.11^+0.25_-0.19 , for α = 0.5 ,
where the superscripts and subscripts refer to the maximum and
minimum values found in the extrapolated values from our simulations.[Based on the parameterization of OmGW_stat2
and the results of Ref. <cit.>, Ref. <cit.> reported a value A_ sw = 3 Ω̃_ GW≃ 0.11,
slightly larger than the extrapolated values in eq:Omegatilde with our updated
numerical simulations and results.]
We note that these values only take into account variations of Ω̃_ GW at different values of v_w, but not
uncertainties from numerical inaccuracies in our numerical results and, hence, in the extrapolated
values.
Finally, we note that for most weak and intermediate PTs we
still find a growth rate with the source duration
close to linear and, hence, we
have not reached the free-propagation regime of the GW
amplitude.
Therefore, we need to make the usual assumption that the linear growth will persist until the development of non-linearities at t̃ - t̃_ init≡τ̃_ sw∼t̃_ eddy = (β R_∗)/√(K) to then
saturate at that time.
For PTs where the non-linearities timescale has been reached
within the duration of the simulation (strong and some intermediate
PTs), we find that
even though the GW amplitude
is growing with the source duration slower than linearly, it
is still growing after t̃_ eddy, and until
the final time of our simulations.
Based on the decay found for K^2 (t̃) ∼t̃^-2b in decay_K2, the
GW amplitude of Isim asymptotically
grows proportional to
I_ sim^ int∼t̃^ 1 - 2b for b < 1/2,
I_ sim^ int∼lnt̃ for b = 1/2, and I_ sim^ int∼t̃^ 0 for b > 1/2.
Then, we need to extrapolate the
resulting GW amplitude by extending K^2_ int to
times beyond the final time of the simulation using OmGW_general.
We note that unless b > 1/2, the GW
amplitude keeps growing unbounded as long as the UETC
assumed in GW_sw describes the source dynamics.
However, we expect that the UETC deviates from this description as vortical motion and turbulence development dominates in the simulation
<cit.>.
This can effectively be modelled by an appropriate choice of
the source duration τ̃_ sw at which to stop
the GW sourcing, which we leave as a free parameter in our current
estimates.
We emphasize that these results seem to indicate that, after
the fluid perturbations enter the non-linear regime, the GW
amplitude still takes some time to saturate to its free-propagation
value and, hence, assuming a linear growth that is cut
at τ̃_ sw = t̃_ eddy would underestimate the GW amplitude.
We present in fig:GWmodel the dependence with the sourcing time τ̃_ sw≡t̃ - t̃_ init of the numerical
integrated GW amplitude
I_ sim^ int, compared
to the one computed analytically for
the power-law fit K (t̃) = K_0 (t̃/t̃_0)^-b using the
scaling in OmGW_general to extend the results to times after
the end of the simulations, and
using the decay rate b found in fig:decay_2 for each PT.
Furthermore, we assume that the GW production starts at the time
when the PT is completed, t̃_0 ≃ 10, instead of at the starting time of the numerical GW evaluation[We note that by the time t̃_0, significant decay of K has already occurred for strong PTs, as can be seen in fig:E_kin_evolution.
As GWs can start to be produced from the first time of bubble collisions,
this will lead to an underestimation of the GW amplitude from our extrapolated results, as discussed in sec:convergence, based
on a low estimate of K_0/K_ξ.
However, we avoid extrapolating our results to earlier times than t̃_0 as the decay fit for K^2 does not apply, and because in this regime,
the results are expected
to strongly depend on the nucleation history and our
assumptions on the GW production are not expected to be valid.]
at t̃_ init = 16, and that the scaling of OmGW_general
can also be extended to times t̃∈ [t̃_0, t̃_ init].
Then, the integrated K^2_ int in this initial time interval
based on the power-law fit of K is added to the numerical values in fig:GWmodel,
and we use the extrapolated values K_0 presented in sec:convergence to estimate the corrections
due to the underresolution of the self-similar
profiles.
To estimate the effect of the Universe expansion on the
GW amplitude, we also include the proxy presented in sec:expansion for the values β/H_∗ = 100 and 1000 [see Kexp_general].
§.§ Gravitational wave spectral shape
In this section, we present the numerical results concerning the spectral shape for weak, intermediate, and strong transitions
and a range of wall velocities.
We present fits to the data and extract spectral features.
Results for weak and intermediate transitions were previously obtained in hybrid simulations in Ref. <cit.> and Higgsless simulations in Ref. <cit.>.
Utilizing the improved Higgsless code, we update the results of Ref. <cit.> and present new results for strong transitions.
In addition to updating the results, we present scaling relations derived from normalizing to R_* rather than β, evidently revealing a better scaling behavior of the knee position in the spectrum associated with the typical bubble size.
The findings in Ref. <cit.> indicate that the GW spectrum (k) is characterized by a double broken power law:
at small k, a (k)∝ k^3 scaling was observed, which is also expected from causality.
At large k, the spectrum decays as (k)∝ k^-3.
These scalings are in agreement with those found in
the sound-shell model <cit.>.
At intermediate scales, a linear scaling regime (k)∝ k was observed.
Due to limited resolution, the spectrum appears to exponentially decay
beyond a damping scale k_e as a result of numerical viscosity.
At scales around or beyond the Nyquist wave number, (k) behaves erratically and is always neglected for the purpose of analysis and parameter extraction.
To capture the behavior of the spectral
shape S(k̃) ≡ I_ sim (k̃)/ I_ sim^ int, we use the following double-broken
power law function,
S(k, k_1, k_2, k_e)=S_0 (k/k_1)^n_1[1+(k/k_1)^a_1]^(n_2-n_1)/a_1[1+(k/k_2)^a_2]^(n_3-n_2)/a_2 e^-(k / k_e)^2,
which corresponds to the shape function used
in Ref. <cit.> with an additional exponential damping factor effective above the damping scale k>k_e.
We expect that the exponential damping found in the simulated spectra is purely due to numerical viscosity, so we disregard the parts of the spectra where the exponential
damping is relevant.
Assuming k_1 < k_2 and k < k_e,
the fitting parameters correspond to the slopes n_1, n_2, and n_3, such that S(k) ∼ k^n_1 at small
wave numbers k < k_1, S(k) ∼ k^n_2 at intermediate k_1 < k < k_2, and S(k) ∼ k^n_3 at large k > k_2.
The parameters a_1 and a_2 allow us to control the sharpness/smoothness of the spectral shape
around the knee and peak at k_1 and k_2.
S_0 is a normalization constant defined by the condition that ∫ S dln k = 1.
We note that the choice a_1=2, a_2=4, n_1=3, n_2=1, and n_3=-3 renders Eq. <ref> equivalent to
S_f(k, k_1, k_2, k_e)=S_0 ×(k / k_1)^3/{[1+(k / k_1)^2][1+(k / k_2)^4]}× e^-(k / k_e)^2,
which was previously used in Ref. <cit.>. eq:shape function, however, allows for a more adaptable
recovery of the GW spectrum peak position and slopes
by adapting the sharpness/smoothness
of the spectral shape around the knee and the peak to the one
found in the numerical data.
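For concreteness, a minimal Python sketch (not the analysis code used here) of eq:shape function, with the normalization ∫ S dln k = 1 imposed numerically, could read:

import numpy as np

def shape(k, k1, k2, ke, n1=3.0, n2=1.0, n3=-3.0, a1=3.6, a2=2.4):
    s = (k / k1)**n1
    s = s * (1.0 + (k / k1)**a1)**((n2 - n1) / a1)
    s = s * (1.0 + (k / k2)**a2)**((n3 - n2) / a2)
    return s * np.exp(-(k / ke)**2)

def normalized_shape(k, **kw):
    s = shape(k, **kw)
    lnk = np.log(k)
    norm = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(lnk))   # trapezoidal  int S dln k
    return s / norm

k = np.logspace(-2, 2, 2000)                      # k in units of 1/R_*, illustrative range
S = normalized_shape(k, k1=2.5, k2=6.0, ke=40.0)  # illustrative break positions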
We expect the characteristic knee and peak of the GW spectra to be determined by the scale of the fluid perturbations R_∗.
Another important length scale is the fluid shell thickness
ξ_shell :=ξ_front -ξ_rear ,
where
ξ_front =
ξ_shock , for deflagrations and hybrids ,
v_w , for detonations ,
and
ξ_rear =
v_w , for deflagrations ,
c_s , for detonations and hybrids .
The scale R_∗ ξ_ shell is
expected to determine the peak of the GW spectrum <cit.>.
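The following snippet is a sketch of these definitions (the deflagration/detonation/hybrid assignments mirror the piecewise expressions above, and ξ_shock is assumed to be supplied by the self-similar single-bubble profile):

def xi_shell(vw, cs, xi_shock, mode):
    # mode is one of 'deflagration', 'hybrid', 'detonation'
    if mode == 'deflagration':
        xi_front, xi_rear = xi_shock, vw
    elif mode == 'hybrid':
        xi_front, xi_rear = xi_shock, cs
    elif mode == 'detonation':
        xi_front, xi_rear = vw, cs
    else:
        raise ValueError(mode)
    return xi_front - xi_rear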
§.§.§ Fitting to the numerical data
We fit Eq. (<ref>) to our numerically computed
GW
spectra and thus extract spectral features from our data.
We show in fig:fits the numerical GW spectra I_ sim (t̃_ init, t̃_ end, k̃) found
in the simulations with numerical resolution N = 512 and box sizes L̃/ = 20 and 40, for a range of wall velocities,
and for weak, intermediate, and strong PTs, together with the
analytical fits.
We use t̃_ init = 16 and t̃_ end = 32 to evaluate the GW spectra.
In the fitting procedure, we impose the constraint that k_1 < k_2.
However, since k_e does not represent a physical scale, we do not require that k_2<k_e, but allow k_e to take on any value independently.
In the cases where k_2>k_e, the spectral peaks are not resolved properly and suffer from numerical viscosity.
In obtaining the fit, we neglect the first
bin for simulations with L̃/ = 20 and the first two
bins for L̃/ = 40
to avoid the associated significant statistical scatter.
We cut the spectra in the UV where the fit including
the exponential damping deviates from the broken power law with no exponential damping.
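Schematically, the fit can be performed with a standard least-squares routine in log space; the sketch below is illustrative only (initial guesses are placeholders) and enforces k_1 < k_2 by parameterizing the second break relative to the first:

import numpy as np
from scipy.optimize import curve_fit

def log_model(logk, logS0, logk1, dlogk, logke, n3):
    n1, n2, a1, a2 = 3.0, 1.0, 3.6, 2.4
    k = np.exp(logk)
    k1 = np.exp(logk1)
    k2 = np.exp(logk1 + np.abs(dlogk))      # guarantees k_1 < k_2
    ke = np.exp(logke)
    s = (n1 * np.log(k / k1)
         + (n2 - n1) / a1 * np.log1p((k / k1)**a1)
         + (n3 - n2) / a2 * np.log1p((k / k2)**a2)
         - (k / ke)**2)
    return logS0 + s

# k_bins, S_bins = binned GW spectrum with the first bin(s) removed, e.g.:
# popt, _ = curve_fit(log_model, np.log(k_bins), np.log(S_bins),
#                     p0=[0.0, np.log(2.0), 1.0, np.log(30.0), -3.0])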
While in fig:fits we show fits of eq:shape function to the spectra for different v_w,
in fig:money plot we show the fitted spectral features
k_1, k_2, and k_e as functions of v_w for
weak (α = 0.0046), intermediate (α = 0.05), and strong (α = 0.5) PTs.
We present these characteristic wave numbers in units of 1/β
and 1/R_∗ to evaluate the resulting dependence on v_w and
determine the scale characterizing the spectral knee and peak.
We note that the maximum value of the spectral shape used in
eq:shape function is located at k_ peak, which does not in general
exactly coincide with k_2 (see discussion in Ref. <cit.>) and their relation depends on the
fitting parameters.
We show the resulting spectral peaks obtained from the fit in the
right columns of fig:money plot.
Extraction of the scale of exponential damping k_e gives us a handle on the reliability of the measurement of the other parameters and the peak; clearly, finding k_2>k_e means we are in a regime where damping already dominates on scales larger than the peak of the spectrum. In this case, even though for weak transitions k_2 is found to track 1/ξ_shell well even when k_2>k_e (which means that we are potentially recovering a trend expected from physical considerations), caution should be taken in interpreting k_2 and k_peak as true physical parameters.
However, for intermediate and strong PTs, we do
not find any evidence for k_2 to be determined by ξ_ shell, as previously pointed out in Ref. <cit.>.
Using our numerical results with L̃/ = 20, which present
better resolution in the UV, averaged over v_w and 10 nucleation histories, we find the following values for k_2,
k_2 R_∗/2 π≃
(0.49 ± 0.024) / Δ_w , α = 0.0046 ,
0.93±0.13 , α = 0.05 ,
0.45±0.042 , α = 0.5 ,
where the indicated uncertainty corresponds to the standard deviation in the measurements among the reference simulations, and
Δ_w = ξ_ shell/max(v_w, c_s) is the normalized
sound-shell thickness.
Sample variance is generally of the order of the scatter with wall velocity.
On the other hand, the numerical values at the knee, expected to be related
to the fluid perturbations scale R_∗, are found to be
k_1 R_∗/2 π≃ 0.39 ± 0.1 .
We note that both scales k_1 R_∗ and k_2 R_∗ (for intermediate
and strong transitions)
present
very small variability with v_w, indicating a rather universal
behavior.
For weak PTs, k_2 R_∗ Δ_w ∼k̃_2 is
also almost independent of v_w, as expected from the
sound-shell model.
The values of k_2 R_∗ and k_1 R_∗ used in Ref. <cit.> are based on the numerical
results of Ref. <cit.>.
For weak PTs, we find k_2 R_∗Δ_w consistent with
the values used in Ref. <cit.>, while
we find the value of k_1 R_∗ to be
twice the one used in Ref. <cit.>.
We note that the extraction of the knee k_1 in the IR part of our
spectra is more sensitive to statistical variance, underresolution,
and the end time of our simulations.
§.§.§ Time evolution of the spectral shape in the simulations
We show in fig:GWgrowth_spec the GW spectrum
I_ sim (t̃_ init,
t̃, k̃) for different values of t̃ of the
simulation.
We find in general that the causal tail, proportional to k^3
at small k,
is initially present from early times and a more complex structure
seems to develop below the peak as time advances,
potentially consistent
with analytical work <cit.> and numerical
simulations <cit.>.
We find that the growth of the GW amplitude with the source duration is faster than
linear at small wave numbers, which could be described by
the quadratic growth found in Ref. <cit.>, where
it is shown that the transition from
a quadratic to a linear growth with t̃ (when no significant
decay of K occurs) happens at later times for smaller k.
The resulting spectral shape at the end of the simulation, t̃_ end = 32, is then shown in fig:fits and used to provide
fits of the spectral shape.
As discussed in GW_sw, the modelling presented and validated
for the integrated GW spectrum is only expected to hold at wave
numbers k R_∗≫ (β R_∗)/(t̃ - t̃_ init).
Then, we note that using the GW spectrum shape as measured at t̃_ end as the one that ultimately enters the proposed model in OmGW_general effectively implies the assumption that all wave numbers evolve
with the source duration in the same way as the overall amplitude until t̃_ fin > t̃_ end is
reached.
However,
different time evolutions than those validated for the
integrated amplitude at wave numbers that
do not significantly contribute to the integrated
amplitude
and/or at times after the end of the simulation,
could potentially affect the
resulting spectral shape of the GWs.
This can occur within the sound-shell model in the IR regime, as shown in Ref. <cit.>, where a transition from the linear
towards a quadratic growth is expected at small k in the stationary case,
as well
as when the sound-shell model (or its generalization in sw_extended to decaying sources) is no longer valid due to,
for example,
the potential development of
non-linear fluid perturbations and vortical motion.
In the latter case, the
resulting GW spectrum is expected to have a different time evolution
than the one expected for compressional motion
<cit.> and we
expect that the GW modes would reach their saturation amplitudes
in this regime.
§.§.§ GW spectral slopes
In general, we find a clear n_1 = 3 slope at the smallest
frequencies, consistent with the expected causal tail, S(k) ∼ k^3.
At intermediate wave numbers, we fix n_2 = 1, although this
range of k is not large enough to have a clear prediction of
the exact intermediate slope.
However, it is clear that a smoothing with respect to the k^3
occurs in this range that eventually leads to the decrease
k^n_3 with n_3 < 0 at large wave numbers k > k_2.
In this regime, the sound-shell model predicts a
slope n_3 = -3<cit.>, and our simulations show a clear n_3 ≈ -3
whenever k_2 ≪ k_e.
However, we allow n_3 to be a parameter in our fits to allow
for deviations, potentially due to the development of non-linearities.
Looking at fig:fits, it is apparent that generally for strong PTs, the simulations offer sufficient dynamical range to sample the UV slope of the GW spectrum.
This is particularly interesting, since for strong PTs, we expect a departure from n_3 = -3
if
non-linearities lead to a cascade of energy into the UV, thus modifying the slope towards a Kolmogorov turbulence
spectrum with n_3 = -8/3<cit.>, or a shallower acoustic turbulence spectrum <cit.>.
Hence, we allow n_3≥-3 when deriving the fit for strong and intermediate PTs, while restricting n_3 = -3 for weak PTs since the dynamical range is typically insufficient to recover the UV behavior in these cases, due to thinner shells and hence k_2 ≳ k_e.
In fig:n_3, we plot the fitted
values of n_3.
For intermediate transitions, we observe a marginal increase in n_3 towards -2.5 as the wall velocity is increased. Strong transitions exhibit a similar trend, while also preferring an optimal n_3≲-2.75 for small v_w.
§.§.§ Smoothing/sharpening of the knees
Introducing two new free parameters a_1 and a_2 obviously improves the fits to the numerical data, compared to using the simpler eq:old_shape_function,
but also exposes
large degeneracies among the fitting parameters, thereby thwarting meaningful interpretation and extraction of the relevant spectral features.
Since the peak of the GW spectrum k_ peak is of greatest phenomenological interest, we adjust the parameters a_1 and a_2 to constants that universally recover the peak position well for all wall velocities and strengths. Empirically, we find that a slight sharpening of the knee and a slight smoothing of the peak typically improves the peak position recovery and yields good results for the fit overall. Measurements of a_1 benefit from simulations with more data points in the IR, and we use exclusively
simulations with L̃/ = 40
for its estimation, whereby a_1=3.6 (i.e., an increase from a_1=2 as used in Ref. <cit.>) is found suitable. Measurements of a_2, on the other hand, benefit from resolving the UV, for which we use exclusively simulations with L̃/ = 20, and find that a_2 = 2.4 (i.e., a reduction from a_2=4 as used in Ref. <cit.>) is an adequate choice. We use these values for a_1 and a_2 throughout this study, but point out that in principle,
the spectral fit could be improved by varying these parameters at the cost of a larger scatter in the parameter extraction (due to degeneracies).
The extraction of the parameters k_1, k_2, k_e, and k_ peak are shown in Fig. <ref>.
In the different rows, these parameters are expressed in terms of the physical length scales discussed in the
last section. Some example spectra are shown in Fig. <ref>.
The slope of the UV tail is plotted in Fig. <ref>.
§ SUMMARY AND CONCLUSIONS
We have conducted numerical simulations of cosmological
first-order phase transitions (PTs) using the Higgsless approach
<cit.> to compute the fluid perturbations in
the primordial plasma induced by a PT and the resulting GW
spectra, for a range of PT parameters: α = 0.0046 (weak),
0.05 (intermediate), and 0.5 (strong); and a broad range of
wall velocities v_w ∈ (0.32, 0.8).
These results extend the previous numerical results of Ref. <cit.> to strong PTs, and include a larger number
of numerical simulations for weak and intermediate PTs.
We present for the first time
results of the GW amplitude and spectral shape sourced by
fluid perturbations from strong PTs with α = 0.5.
We have slightly updated the numerical code, although with no
significant impact on the numerical results.
We have compared our results to those
expected considering a stationary unequal time
correlator (UETC), an assumption
usually made in analytical computations of compressional motion (e.g., sound waves in the limit of linear perturbations) and commonly used
to extrapolate
the results from numerical simulations, based on the hypothesis
that the GWs are produced by a stationary
superposition of sound waves.
We find strong numerical evidence for the decay of the kinetic
energy fraction K with time for intermediate PTs with
highly confined profiles and for strong PTs, and a clear deviation
with respect to the linear growth of the GW amplitude with the
source duration found in previous numerical simulations and
analytical studies, assumed in the GW templates used
in the literature.
We attribute this deviation to the decay of the kinetic energy
fraction K and
extend the stationary UETC modelling to a locally stationary UETC that allows us to introduce the effect of the numerically
found decay rate of K with time.
The numerical results presented in this work have allowed us
to generalize the usual stationary UETC assumption to an assumption of a locally-stationary UETC, and to test the validity of this assumption in predicting the integrated GW spectral amplitude.
Furthermore, the proposed model has allowed us to
numerically find the relevant scales that enter
in the resulting
GW amplitude and spectral shape; see OmGW_general.
We also have shown that the GW production might
not abruptly stop at the time when non-linearities develop
but it might keep increasing
for a duration that is uncertain at the moment.
For this reason, we present our
results as a function of the GW source duration.
It is of paramount
importance to determine its exact value and, hence,
the resulting saturated GW amplitude,
to make accurate predictions of the GW spectra expected from first-order
phase transitions.
The modelling of the remaining stage will also require numerical
simulations (as well as the stage over which we find decay of the
kinetic energy in our simulations), as it is deep in the non-linear regime and
potentially dominated by turbulence.
In the following, we summarize our numerical results by
providing a template
that can be used by the community to estimate the GW amplitude from
first-order phase transitions, validated for the duration
of our simulations and extrapolated to later times,
taking into account that some of the values presented
might be sensitive to numerical
uncertainty.
Based on the model presented in sw_extended and validated with our numerical simulations in GW_spec_time, we find the following parameterization
of the GW spectrum when the Universe expansion can be ignored
Ω_ GW (k) = 3 T_ GW Ω̃_ GW (H_∗/β)^2 K_ int^2 R_∗β S(k R_∗) ,
where S(k) denotes the shape function of the spectrum that is normalized to ∫ dln k S(k) =1, and K_ int^2
is the integrated kinetic energy fraction K^2 over t̃≡ t β, such that it reduces to K^2 τ_ swβ when K is
constant, where τ_ sw is the GW source duration.
Therefore, eq:base_normalization is a generalization of the
parameterization used in the stationary UETC assumption previously
tested with numerical simulations <cit.> and
usually assumed for sound-wave sourcing of GWs <cit.> that predicts
a linear growth with the GW source duration when K does not decay
with time.
The most robust results (i.e., an almost independent value
of Ω̃_ GW with the PT parameters) are obtained when the typical bubble separation R_∗,
which determines the length scale of fluid perturbations, is given by the front of the expanding bubbles <cit.>β R_* = (8π)^1/3 max (v_w, c_s) ,
where 1/β parameterizes the duration of the PT, v_w is the
wall velocity, and c_s the speed of sound. This way,
the residual dependence on the wall velocity in Ω̃_ GW
is quite limited and we estimate from our numerical simulations values for the GW efficiency Ω̃_ GW∼ O (10^-2) for a range of PTs [see fig:GW_efficiency and eq:Omegatilde],
10^2 Ω̃_ GW =
1.04^+0.81_-0.67 , for α = 0.0046 ;
1.64^+0.29_-0.13 , for α = 0.05 ;
3.11^+0.25_-0.19 , for α = 0.5 ,
consistent with
previous numerical simulations <cit.> for weak and intermediate PTs, and with the sound-shell model <cit.>
for weak PTs.
For intermediate and strong PTs, we find much
less dependence on v_w than for weak PTs,
clearly showing a departure with respect to
the predictions of the sound-shell model (see fig:GW_efficiency).
We also provide an estimate of the relevant kinetic energy
fraction K_0 at the end of the PT using our numerical results
[see fig:kappa_eff_kappa and K0_fit], given in units of the single-bubble K_ξ [see Kxi], which, averaged over wall velocities, becomes
K_0 = 0.84^+0.24_-0.29 K_ξ .
As a function of v_w, we generally find
that K_ξ might slightly underestimate K_0 for the smallest
v_w, while it tends to overestimate it for larger v_w
(see fig:kappa_eff_kappa).
This might be a consequence of the energy transfer between thermal
and kinetic energies during the phase of collisions and
the expected development of the sound-wave regime <cit.>.
We have studied the decay of the kinetic energy fraction K with
time t̃ in decay_K2, and provide a power-law fit K(t̃) = K_0 (t̃/t̃_0)^-b, with b > 0 indicating the decay rate, that accurately reproduces the numerical results
(see fig:E_kin_evolutionfig:decay_2).
For small values of b (hence, negligible decay), one can directly use
K_ int^2 (b = 0) → K_0^2 τ̃_ sw→ K_0^3/2 β R_∗
in eq:base_normalization,
assuming that the duration of the GW
sourcing is given by the eddy turnover time τ̃_ sw∼t̃_ eddy = β R_∗/√(K), when non-linearities are expected to develop.
In general, we find b ≪ 1 when the eddy turnover
time is larger than our final simulation time t̃_ eddy≫t̃_ end = 32 (for weak PTs and some intermediate ones).
For non-negligible values of b, we find that the decay of K occurs
within the duration of our simulations, potentially indicating that we
are already modelling the GW production in the non-linear regime.
We indeed find that this might be the case as the eddy turnover time
is included in the duration of our simulations for some intermediate PTs and for
strong ones, where we find larger values of b.
For these PTs, we find that the integrated K_ int^2 becomes
K_ int^2 (b, τ_ sw) → K_0^2 β t_∗{(1 + τ_ sw/t_∗)^1 - 2b - 1}/(1 - 2b) ,
when one uses the power-law fit for K(t̃) and assumes that the
GW production roughly starts at the time t̃_∗≃t̃_0 ≃ 10 (note that the actual value of t̃_0 only
appears as a consequence of our particular fit).
It is unclear what should be the final time of GW sourcing in these cases,
as the simulations seem to already be modelling
the non-linear regime, so we leave
τ̃_ sw as a free parameter.
We note that this is an indication that the GW spectrum might
still grow once non-linearities develop in the fluid, such that
the use of Kint_const would
in general underestimate the GW production.
We compare in fig:GWmodel the numerical
dependence of the GW amplitude with the source duration τ̃_ sw found in the simulations to the one obtained
using Kint_decaying, extending
the analytical fit beyond the time when the simulations end.
As a final remark on the integrated GW amplitude, we note that so far
Universe expansion has been ignored, which is not justified for
long source durations.
Taking into account that the fluid equations are conformally invariant
after the PT if the fluid is radiation-dominated, we can apply the
results from our fluid simulations in Minkowski space-time to
an expanding Universe, as long as the PT duration is short (β/H_∗≫ 1) even if the GW source duration is not short (see discussion in sec:expansion).
Then, as a proxy to estimate the effect of the Universe expansion, we
can use the following value for K_ int^2 [see Kexp_fit]
K_ int^2 → K_0^2 Υ_b (τ_ sw) (β/H_∗) ,
which generalizes the suppression factor Υ = H_∗τ_ sw/(1 + H_∗τ_ sw) when the source does not decay <cit.> to any decay rate b using hyper
for the presented power-law decay fit of K(t̃).
We also compare in fig:GWmodel the expected evolution of the GW amplitude
with the source duration according to Kint_exp_decay for β/H_∗ = 100 and 1000.
We note that when one associates τ_ sw to the
eddy turnover time τ_ eddy = R_∗/√(K),
they should correspond to conformal time intervals, instead of cosmic time, due to the conformal invariance of the fluid equations.
Regarding the spectral shape S(kR_∗) in eq:base_normalization,
we find that the following template fits accurately our numerical results (see sec:shape)
S(k, k_1, k_2)=S_0×(k/k_1)^n_1[1+(k/k_1)^a_1]^(n_2-n_1)/a_1[1+(k/k_2)^a_2]^(n_3-n_2)/a_2 ,
with n_1≃3, n_2≃1, a_1 ≃ 3.6, and a_2 ≃ 2.4.
We note that to compare with our numerical results we have included
an exponential damping e^-(k/k_e)^2 [see eq:shape function], effective at k > k_e, but we omit
it here as we expect it to correspond to numerical viscosity and not have
physical relevance.
The slope of the UV tail
is n_3≃ -3 for weak PTs, and intermediate ones with small wall velocities v_w ≲ c_s.
The slope
becomes slightly shallower (up to -2.5) for intermediate PTs with supersonic v_w and for strong
PTs (see fig:n_3).
This effect should not play a major role in phenomenological studies but a
more detailed description is given in Sec. <ref>.
Furthermore, this shallower GW spectral slope, together with the
decay of the kinetic energy, seems to indicate the development of non-linearities.
To confirm this statement would require a detailed study of the kinetic spectrum properties.
For now,
we present a preliminary study of the vorticity production in
our simulations in sec:vort.
The most relevant feature of the spectrum is the position of the peak,
determined by k_2.
Here, we find a distinction between weak and intermediate/strong PTs (as already previously
seen in Ref. <cit.>).
For weak PTs, the peak follows the
thickness of the fluid shells, ξ_ shell, as given in eq:xi_shell,
while for intermediate and strong PTs the dependence on the wall velocity is
much weaker.
As shown in k2R_fit and fig:money plot, we find the following results for k_2,
averaged over all wave numbers and 10 different nucleation histories:
k_2 R_∗≃π/Δ_w for weak PTs,
k_2 R_∗≃ 2 π for intermediate PTs, and k_2 R_∗≃π for strong ones,
where Δ_w = ξ_ shell/max(v_w, c_s) is the normalized
sound-shell thickness, which is only found to determine k_2 in
weak PTs.
Finally, the position of the knee that relates to the typical
size of the bubbles does not even depend on the strength of the PT
and, quite generally, we find
k_1 R_∗≃ 0.4 × 2 π (see k1_fit and fig:money plot).
This research was supported in part through the Maxwell computational resources operated at Deutsches Elektronen-Synchrotron DESY, Hamburg, Germany. TK and IS acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2121 “Quantum Universe” - 390833306. HR is supported by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy EXC 2094 `ORIGINS'. (No. 390783311).
ARP acknowledges support by the Swiss National Science Foundation
(SNSF Ambizione grant https://data.snf.ch/grants/grant/208807182044).
ARP and CC acknowledge the working space provided during
the program on the “Generation, evolution, and observations of cosmological magnetic fields” at the Bernoulli Center in Lausanne.
RJ is supported by JSPS KAKENHI Grant Numbers 23K17687, 23K19048, and 24K07013.
§ GRAVITATIONAL WAVE PRODUCTION FROM EARLY
UNIVERSE SOURCES
The solution to the GW equation for the tensor-mode
perturbations h_ij, defined such that the line
element is
ds^2 = a^2 [- dτ^2 + (δ_ij + h_ij) dx^i dx^j] ,
while the source is active[The final time of GW sourcing can be taken as the time at which the source stops operating such that all GW modes have
reached a saturated amplitude in their free-propagation regime.
However, we note that different modes k can saturate (hence reaching a free-propagation regime) at earlier times than t_ fin.] (t < t_ fin) is
<cit.>
h_ij (t < t_ fin, k) = 6 H_*^2/k∫_t_*^tΠ_ij (t', k) sin k (t - t') dt',
where t_* is the initial time at which the tensor of anisotropic
stresses, Π_ij = Λ_ijlm T_lm/ρ̅, starts to source GWs, where ρ̅= 3 H^2 M_ Pl^2 is the critical energy density
and Λ_ijlm = P_il P_jm - (1/2) P_ij P_lm is the traceless and transverse projector,
with P_ij = δ_ij - k̂_i k̂_j.
We have assumed that the expansion of the Universe
is negligible during the sourcing process, i.e., t_ fin - t_∗≪ H_∗^-1.
At later times,
after the sourcing has ended at t_ fin, the solution is
h_ij (t ≥ t_ fin, k) = 6 H_*^2/k∫_t_*^t_ finΠ_ij (t', k) sin k (t - t') dt' .
Then, the time derivatives of the strains h_ij are
∂_t h_ij (t ≥ t_ fin, k) = 6 H_*^2 ∫_t_*^t_ finΠ_ij (t', k) cos k (t - t') dt' ,
which can be used to find the
fractional energy density at present time t_0<cit.>,
Ω_ GW (t_0) = ρ_ GW/ρ_ tot^0 = (a_*/a_0)^4/(12 H_0^2) ⟨∂_t h_ij (t_0, x) ∂_t h_ij (t_0, x)⟩ .
We consider the GW spectrum Ω_ GW (t_0, k) ≡ dΩ_ GW (t_0)/dln k, which describes
the two-point correlation
function of the statistically homogeneous
and isotropic strain derivatives,
following the notation of Ref. <cit.>
(a_*/a_0)^4/(12 H_0^2) ⟨∂_t h_ij (t_0, k) ∂_t h_ij^* (t_0, k')⟩ = (2 π)^6 δ^3 (k - k') Ω_ GW (t_0, k)/(4 π k^3) ,
such that Ω_ GW (t_0) = ∫Ω_ GW (t_0, k) dln k.
Substituting der_strains into two_point_GW, we find
Ω_ GW (t_0, k) = 3 k T_ GW H_∗^2 ∫_t_*^t_ fin∫_t_*^t_ fin
E_Π (t_1, t_2, k) cos k(t_0 - t_1) cos k (t_0 - t_2)
 dt_1 dt_2 ,
where T_ GW≡ (a_*/a_0)^4 (H_*/H_0)^2 is the transfer function and E_Π (t_1, t_2, k) is the unequal-time correlator (UETC)
of the anisotropic stresses,
⟨Π_ij (t_1, k) Π_ij^* (t_2, k')⟩ = (2 π)^6 δ^3 (k - k') E_Π(t_1, t_2, k)/(4 π k^2) .
At present time, for modes k t_0 ≫ 1, we can average the product of Green's functions in OmGW_today_k over oscillations to find
Ω_ GW (k) = (3 k/2) T_ GW H_∗^2 ∫_t_*^t_ fin∫_t_*^t_ fin
E_Π (t_1, t_2, k) cos k(t_1 - t_2)
 dt_1 dt_2 .
Therefore, once we know the UETC
of the source of GWs, E_Π (t_1, t_2, k), in this
case produced by the fluid perturbations Π_ij = w γ^2 Λ_ijlm v_l v_m/ρ̅, we can directly
compute the GW spectrum.
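As an illustration of this statement (a sketch only; the toy UETC below is an assumption, not the measured one), the double time integral in OmGW_aver can be evaluated numerically for any model E_Π:

import numpy as np

def omega_gw_from_uetc(E_Pi, k, t_star, t_fin, T_gw=1.0, H_star=1.0, n=400):
    # E_Pi(t1, t2, k) must accept array arguments (broadcasting over t1, t2)
    t = np.linspace(t_star, t_fin, n)
    t1, t2 = np.meshgrid(t, t, indexing='ij')
    dt = t[1] - t[0]
    integrand = E_Pi(t1, t2, k) * np.cos(k * (t1 - t2))
    return 1.5 * k * T_gw * H_star**2 * np.sum(integrand) * dt * dt

# toy stationary, sound-wave-like UETC: E_Pi = A cos(c_s k (t1 - t2))
cs = 1.0 / np.sqrt(3.0)
E_toy = lambda t1, t2, k: 1e-3 * np.cos(cs * k * (t1 - t2))
print(omega_gw_from_uetc(E_toy, k=5.0, t_star=0.0, t_fin=20.0))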
However, this is in general unknown before the simulation and,
hence, we need to estimate it using the numerical results.
For this purpose, we can compute the UETC by
approximating the ensemble average of the
anisotropic stresses in twop_Pi with the average
over spherical shells of radius k in Fourier space,
E_Π (t_1, t_2, k) = k^2/(2 π^2 V) ∫_Ω_k dΩ_k/(4π) Π_ij (t_1, k) Π_ij^* (t_2, k) .
Then, substituting EPi_Pi into OmGW_aver
and taking into account that, since cos k (t_1 - t_2) = cos kt_1 cos kt_2 + sin kt_1 sin kt_2,
the double integral over t_1 and t_2 can be expressed as the following product
Ω_ GW (k) = 3 k^3/(4 π^2 V) T_ GW H_∗^2 ∫_Ω_k dΩ_k/(4π) ∫_t_∗^t_ finΠ_ij (t_1, k) e^ikt_1 dt_1
× ∫_t_∗^t_ finΠ_ij^∗ (t_2, k) e^-ikt_2 dt_2 ,
where we have used the fact that the resulting
Ω_ GW (k) is real.
Finally, defining the following integral over the stress-energy
tensor T_ij = w γ^2 v_i v_j,
T_ij (q, k) = ∫_t_*^t_ fin T_ij (t, k) e^iqt dt ,
and using the property Π_ijΠ_ij^∗ = Λ_ijlm T_ij T_lm^∗/ρ̅^2, we find the following expression
Ω_ GW (k) = 3 k^3/(4 π^2 V ρ̅^2) T_ GW H_∗^2 ∫_Ω_k dΩ_k/(4π) Λ_ijlm (k̂) [T_ij (q, k) T_lm^* (q, k)]_q=k ,
previously used in Refs. <cit.> and referred
to as Weinberg's formula, due to its similarity to the expression obtained
for the power emitted by isolated deterministic binaries <cit.>.
However, we note that the applicability of OmGW relies on the knowledge
of the stochastic variables T_ij, which are generated by the simulations
from a single realization, and results from the average
over shells under the assumption of statistical homogeneity and isotropy.
This expression allows us to compute the GW spectrum at present
time after averaging over oscillations in time.
However, we compute numerically this expression until
the end of the simulation at t_ end, which is,
in general, smaller than t_ fin.
Therefore, the numerical result will represent the physical
amplitude at a mode k if it
has already reached its free-propagation regime by
this time, i.e., if its amplitude has already saturated.
Otherwise, the numerical result presents the GW amplitude
that would be obtained misleadingly assuming that the source is abruptly switched off at t_ end.
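For reference, a schematic NumPy implementation (a sketch, not the Higgsless code) of the projection entering OmGW is given below; the GW spectrum then follows by averaging the result over spherical shells of |k| and multiplying by the prefactor 3 k^3 T_GW H_∗^2/(4 π^2 V ρ̅^2):

import numpy as np

def tt_power(T, khat):
    # T    : (..., 3, 3) complex array, the time-integrated stress T_ij(q=|k|, k)
    # khat : (..., 3) real array of unit wave vectors
    # Returns Lambda_ijlm T_ij conj(T_lm), with
    # Lambda_ijlm = P_il P_jm - (1/2) P_ij P_lm and P_ij = delta_ij - khat_i khat_j.
    P = np.eye(3) - khat[..., :, None] * khat[..., None, :]
    PTP = np.einsum('...il,...lm,...jm->...ij', P, T, P)       # P T P
    trace = np.einsum('...lm,...lm->...', P, T)                 # P_lm T_lm
    Ttt = PTP - 0.5 * P * trace[..., None, None]                # TT projection
    return np.einsum('...ij,...ij->...', Ttt, np.conj(Ttt)).real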
§ CORRECTIONS TO THE KINETIC ENERGY FOR MULTIPLE BUBBLES
In this section, we study in more detail the convergence of
the numerical simulations of multiple bubbles by
comparing their convergence to that of single-bubble simulations,
for which we expect the self-similar profiles described in Ref. <cit.> to develop.
For reference, we show in fig:1d_profiles the self-similar velocity and enthalpy
profiles for the PTs considered in the present work (see Tab. <ref>).
We have previously defined the kinetic energy fraction K such
that ρ̅ K(t̃) ≡⟨ρ_ kin(x, t̃)⟩, where ⟨ρ_ kin⟩ corresponds to the kinetic energy
density averaged over the simulation volume V.
However, we note that K_ξ for single bubbles, defined in Kxi,
is taken as the average of the kinetic energy density fraction
induced by
a single bubble over the broken-phase volume.
Then, defining the ratio of the volume in the broken phase (bp) to the total volume,
𝒱(t̃)=V_bp/V ,
we can define the analog of K_ξ for multiple bubbles
as the ratio K(t̃)/ V(t̃).
Before fluid shells collide,
and in the limit of infinite resolution, this ratio should be identical to K_ξ after a very short transient period, over which the fluid profiles develop.
Deviations from K_ξ before collisions thus correspond to
an artifact due to numerical inaccuracy.
We plot the ratio K(t̃)/[ V(t̃) K_ξ]
as solid lines in fig:KoK for all four resolutions N∈{64, 128, 256, 512}.
In the multiple-bubble runs, we can similarly define ρ̅ K_i (t̃) = ⟨ρ_ kin, i (x, t̃)⟩ as the kinetic energy fraction
of each of the single bubbles i before their corresponding first collision,
and define the ratio K_i(t̃)/ V_i (t̃),
where V_i corresponds to the fractional
broken-phase volume occupied by each
bubble i.
To monitor the time-dependence of K_i, we simulate single bubbles nucleated at the center of the simulation box (see “single-bubble” runs in Tab. <ref>).
As the convergence of the single-bubble profiles depends on the resolution
in ξ≡ r/(t - t_i), where t_i is the nucleation time of the bubble i and r the radial distance to the nucleation center,
we empirically find that doubling the resolution from N to 2N is
equivalent
to evaluating the profile at time 2(t̃ - t̃_i) (see also discussion in Ref. <cit.>).
Hence,
the kinetic energy of single-bubble simulations (with t_i = 0) obeys K_i^2N(t̃)/𝒱_i(t̃) = K_i^N(2t̃)/𝒱_i(t̃) to an excellent degree
and it suffices to run single-bubble simulations for the largest resolution
N=512.
These simulations are run approximately until the front of the fluid profile collides with its own mirror image at the edge of the simulation box,
which occurs around t̃_end^ sb = L̃v_w/2/max(c_s, v_w).[Note that this t̃_end^ sb (where sb stands for “single-bubble” runs) is always smaller than t̃_ end = 32, the final time of the multiple-bubble simulations. Thus, in producing fig:KoK, we extend the fit of the observed convergence for times greater than t̃_end, enforcing
that in the limit of infinite time, it converges to the value of K_ξ.
This extrapolation always represents K_i(t̃) accurately from
the measured values (below 1% error).
In any case, since we never use values of K_i(t̃) at times larger than t̃_coll < t̃_ end^ sb, it does not affect the analysis and it is only
used to indicate
the expected convergence of the self-similar profiles in fig:KoK.]
Then, in the full simulations and before fluid shells collide, the state of the simulation is exactly the superposition of single bubbles nucleated at times t̃_i < t̃ in the bubble nucleation history. We thus construct the sum
K_Σ(t̃) ≡∑_i : {t̃_i < t̃} K_i(t̃-t̃_i) ,
which corresponds to the expected kinetic energy fraction for multiple-bubble simulations in the hypothetical case that no single bubble would collide, following the
bubble nucleation history up to time t̃.
Then, before the first fluid-shell collision occurs,
we have that K_Σ(t̃) = K(t̃),
while K(t̃) starts to deviate from K_Σ (t̃) after the
first collision at t̃_ coll.
Similarly, we can construct the fractional broken-phase volume
occupied by the superposition of single bubbles as
𝒱_Σ(t̃) ≡∑_i : {t̃_i < t̃}𝒱_i(t̃-t̃_i) ,
which can become larger than one, as it ignores interactions between bubbles.
However, the ratio K_Σ/ V_Σ is bounded by K_ξ.[This is the case for all the considered PTs, with the
exception of strong PTs with v_w=0.36 and v_w=0.4, where
values K_Σ/𝒱_Σ≳ K_ξ are found before collisions due to numerical inaccuracy
(see fig:KoK).
However, as time evolves the fraction asymptotically tends to K_ξ.]
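A trivial helper (hypothetical, mirroring the construction above) that builds K_Σ and V_Σ from a nucleation history and the single-bubble functions could look like:

import numpy as np

def superposed(t, t_nuc, K_single, V_single):
    # K_single(dt), V_single(dt): kinetic energy fraction and broken-phase
    # volume fraction of a single bubble of age dt (from the single-bubble runs)
    ages = t - np.asarray(t_nuc, dtype=float)
    ages = ages[ages > 0.0]                 # only bubbles nucleated before t
    return (sum(K_single(a) for a in ages),
            sum(V_single(a) for a in ages))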
We plot the time evolution of the ratio K_Σ/ V_Σ
as dotted lines in Figure <ref> using the nucleation history of the
reference multiple-bubble simulations with L̃/ = 20.
The ratio K_Σ/ V_Σ indicates the global degree of
convergence of the full multiple-bubble simulations in the hypothetical case
that all bubbles keep evolving without interacting with other bubbles.
Therefore, the
ratio K/ V computed in the multiple-bubble simulations
is initially identical to K_Σ/ V_Σ
at times t̃ < t̃_ coll.
However, as collisions take place,
we clearly see in fig:KoK that both fractions deviate from each other,
as a consequence of mainly four phenomena: (1) the self-similar profiles
stop converging towards the expected ones when collisions take place,
and since the kinetic energy of the uncollided bubbles is in general
underestimated,
the saturated value of K(t̃)/𝒱(t̃) found quickly
after a short transient period dominated by collisions, will be underestimated;
(2) oscillatory conversion between thermal and kinetic energy;
(3) upon collisions, the fluid self- and inter-shell interactions may be non-linear and result in the kinetic energy decay studied in decay_K2, which is again affected
by the previous two effects; and (4) numerical viscosity also
results in damping of the kinetic energy.
The first and last phenomena are purely numerical, while the remaining two
are physical effects that might be affected by the numerical accuracy.
In fig:KoK, we mark with orange stars the time of first collision t̃_ coll,
where we assume that the maximum degree of convergence of the self-similar profiles is reached, as at later times collisions might affect
the development of the fluid-shell profiles.
Thus, we can attempt to compensate for the underestimation of the kinetic energy fraction due to insufficient resolution at t̃ > t̃_ coll
multiplying K(t̃)
by the factor S = V (t̃_ coll) K_ξ/K (t̃_ coll),
effectively correcting to the expected value K_ξ at the time when
collisions affect the value of K/ V.
In particular, the kinetic energy fraction at the time when the PT ends,
t̃_0, can be corrected to the following value
𝒦_0 = 𝒮 K_0 = V(t̃_ coll) K_ξ/K(t̃_ coll) K_0 .
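In code form, this correction is a one-liner (illustrative only):

def corrected_K0(K0, K_at_coll, V_at_coll, K_xi):
    S = V_at_coll * K_xi / K_at_coll     # compensation factor evaluated at t_coll
    return S * K0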
In fig:kappa_eff_kappa, we plot 𝒦_0 obtained
for numerical resolutions N∈{64, 128, 256, 512}.
Compared to the extrapolated value
of K_0 in fig:extrapolation,
we find faster convergence in K_0 when we compare the results for
the two largest resolutions N = 256 and N = 512.
As we are taking into account the known degree of convergence of the self-similar
profiles when computing K_0, we propose that it is a better estimate
of the actual value at t̃_0 than K_0^∞.
Furthermore, the resulting values are closer to K_ξ
and therefore are more conservative estimates, reducing potential deviations with respect to K_ξ that might
be a numerical artifact.
However, one needs to keep in mind that the underresolution at the time of
collisions might strongly affect the posterior evolution of the kinetic
energy when non-linearities dominate the dynamics.
§ PRELIMINARY RESULTS FOR VORTICITY
In this section, we present some preliminary measurements of
the fluid vorticity in our numerical simulations.
§.§ Vorticity on the lattice
The vorticity is computed as
∇×𝐯=(∂ v_z/∂ y-∂ v_y/∂ z) x̂+(∂ v_x/∂ z-∂ v_z/∂ x) ŷ+(∂ v_y/∂ x-∂ v_x/∂ y) ẑ .
On the lattice, we approximate the derivatives
using first-order central differences,
∂ v_i/∂x̃_j () ≃v_i[+ δx̃ x̂_j]
- v_i [- δx̃ x̂_j ]
/2 δx̃ ,
where δx̃ is the uniform grid spacing.
With this choice of derivative operator, the magnitude of the curl |∇̃×𝐯| is computed at every grid point. 2D simulation slices at different times are shown in the lower panels of fig:bubbles.
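A minimal sketch (assumed, not the simulation code) of this lattice curl, using periodic central differences implemented with np.roll, reads:

import numpy as np

def lattice_curl(v, dx):
    # v: array of shape (3, N, N, N), velocity field on a periodic cubic grid
    def d(f, axis):
        return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2.0 * dx)
    vx, vy, vz = v
    return np.array([d(vz, 1) - d(vy, 2),    # x component
                     d(vx, 2) - d(vz, 0),    # y component
                     d(vy, 0) - d(vx, 1)])   # z component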
Note that the definition of the numerical derivative operator of
eq:central derivative inevitably introduces potentially large vorticity at points where the velocity field varies considerably from lattice site to lattice site. This occurs, e.g., around the bubble shock fronts where discontinuities are present. Ideally, the velocity gradients are aligned with the radial direction, in which case no vortical component is present. However, on the lattice, artifacts may arise from the discretization, causing rather strong vorticity to appear at and just around the bubble walls.
This is clearly seen in the lower left frame of fig:bubbles. The numerical nature of this vorticity is nevertheless clear from the observation that the vortical structure, as we traverse around bubble wall, is seen to inherit the symmetry of the lattice (see, in particular, the largest bubble in the center).
Furthermore, mostly small but spurious oscillations of the fluid velocity occur at the bubble wall interface.
These oscillations additionally give rise to extremely local but very steep velocity gradients, potentially showing up as spurious vorticity with large amplitude, confined to very small scales.
In the lower panel of Figure <ref>, and in particular when presented with the opportunity to study the full time-evolution of the system frame by frame, it is observed that production of vorticity occurs, upon collisions, at the interface of a sound-shell from one bubble crossing over the bubble wall of another, as seen, e.g., immediately to the right of the top section of the central bubble in the left column. The velocity field in the upper panel is included to make vorticity production easy to correlate with the velocity field. In this sense, the resulting vorticity pattern initially appears to track the sweeping of this sound-shell-bubble-wall-crossing interface over time. However, during this process, frame-by-frame inspections indicate that convective non-linear
motion is induced in the fluid.
This motion implies the presence of slowly evolving structures compared to sound waves propagating at the speed of sound, c_s.
The former convective structures evolve on time scales proportional to the average convection speeds, which are typically much smaller than c_s. It therefore appears that, in addition to a pure longitudinal velocity component, a fluid velocity field characterized by convective motion develops as a result of fluid interactions during, and possibly after, the collision phase. This convective component is marginally evident in the right column of Figure <ref>, where additional small-scale structures are hinted at in the velocity field but are absent in the enthalpy field.
We have described the expected and observed presence of spurious vorticity components associated with the choice of the derivative operator and the lattice structure, and spurious oscillations around the bubble wall interface. These are very localized effects and do not contribute meaningfully to large-scale vorticity correlated over macroscopic scales. Therefore, vorticity components that emerge from numerically induced oscillations and limited grid resolution will contribute mostly to the UV part of the velocity spectra. Furthermore, the presence of convective motion implies the presence of a transverse component of the velocity spectra. The development of such a component should thus be visible in spectra of the velocity fields decomposed into longitudinal and transverse contributions, and in particular, physical macroscopic contributions should distinguish themselves from numerical contributions through a separation of scales.
§.§ Velocity power spectra
We generate the Fourier transform v(k) of the fluid velocity field and then
extract the power spectrum from the two-point
correlation in Fourier space <cit.>, ⟨v_i (k) v_i^∗ (k')⟩ = (2 π)^3 δ^3(k - k') P_v (k) ,
where the ensemble average is performed over
momenta with the same absolute value || = |'| due
to statistical homogeneity and isotropy. Along the same lines
we also construct the power spectrum of the longitudinal modes
⟨k̂_i v_i (k) k̂_j v_j^∗ (k')⟩
= (2 π)^3 δ^3(k - k') P_∥ (k) ,
where k̂ = k_ saw(k) / |k_ saw(k)| is a unit vector according to the saw description, as discussed in subsec:Updates to the simulation [see eq:momenta mapping].
Likewise, we extract the
vortical component of the spectra
⟨[k̂×v (k) ]_i[k̂' ×v^∗ (k')]_i⟩
= (2 π)^3 δ^3(k - k') P_⊥ (k) ,
such that P_v (k) = P_⊥(k) + P_∥(k).
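The longitudinal/vortical split can be obtained schematically as follows (a sketch: it uses the plain FFT wave vectors rather than the saw-remapped ones mentioned above, and omits overall normalization factors):

import numpy as np

def velocity_spectra(v, L, nbins=64):
    # v: (3, N, N, N) real velocity field in a periodic box of side L
    N = v.shape[1]
    vk = np.fft.fftn(v, axes=(1, 2, 3))
    k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kvec = np.array(np.meshgrid(k1d, k1d, k1d, indexing='ij'))
    kmag = np.sqrt(np.sum(kvec**2, axis=0))
    khat = np.where(kmag > 0, kvec / np.where(kmag > 0, kmag, 1.0), 0.0)

    v_par = np.sum(khat * vk, axis=0)              # longitudinal amplitude
    P_tot = np.sum(np.abs(vk)**2, axis=0)
    P_par = np.abs(v_par)**2
    P_perp = P_tot - P_par                         # vortical part

    bins = np.linspace(0.0, kmag.max(), nbins + 1)
    idx = np.digitize(kmag.ravel(), bins) - 1
    counts = np.maximum(np.bincount(idx, minlength=nbins + 1)[:nbins], 1)
    shell = lambda P: np.bincount(idx, weights=P.ravel(), minlength=nbins + 1)[:nbins] / counts
    kcent = 0.5 * (bins[1:] + bins[:-1])
    return kcent, shell(P_par), shell(P_perp)      # shell averages of |v(k)|^2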
Figure <ref> shows the power in longitudinal and vortical modes as
well as the fraction of power in vortical modes for strong PTs. The power in longitudinal modes builds up when the
first bubbles nucleate while vorticity requires collisions, as expected. There are some artifacts
in the UV even before collisions which result from the discretization of space on a grid.
Overall, the power in vorticity can reach values of around P_⊥/P_v ≃ 0.3 for the deflagration (v_w = 0.44),
while this fraction is somewhat smaller, P_⊥/P_v ≃ 0.1, for the hybrid (v_w = 0.80).
Still, a sizable fraction of vorticity is observable in both cases.
Moreover, the power spectra in vortical modes appear somewhat less steep than the ones in longitudinal modes.
All this is consistent with the hypothesis that the energy loss observed
over time for strong PTs is due to the decay of fluid kinetic energy into vortical motion and
eventually turbulence. This point deserves further attention in the future. We also have obtained velocity power spectra for weak and intermediate PTs, finding that P_⊥/P_v < 10^-3 for those scenarios. This indicates that turbulence becomes progressively more important the stronger the PT, and that for strong PTs (α=0.5) it already has a dominant role in determining the hydrodynamical evolution after the PT ends. This observation underlines the importance of fully non-linear 3D simulations.
|
http://arxiv.org/abs/2409.03325v1 | 20240905075423 | Non-Uniform Noise Rates and Griffiths Phases in Topological Quantum Error Correction | [
"Adithya Sriram",
"Nicholas O'Dea",
"Yaodong Li",
"Tibor Rakovszky",
"Vedika Khemani"
] | quant-ph | [
"quant-ph",
"cond-mat.dis-nn",
"cond-mat.stat-mech"
] |
Department of Physics, Stanford University, Stanford, CA 94305
§ ABSTRACT
The performance of quantum error correcting (QEC) codes is often studied under the assumption of spatio-temporally uniform error rates.
It is therefore important to understand if and how their presence can affect the performance of QEC in qualitative ways.
In this work, we study effects of non-uniform error rates in the representative examples of the 1D repetition code and the 2D toric code, focusing on when they have extended spatio-temporal correlations; these may arise, for instance, from rare events (such as cosmic rays) that temporarily elevate error rates over the entire code patch.
These effects can be described in the corresponding statistical mechanics models for decoding, where long-range correlations in the error rates lead to extended rare regions of weaker coupling.
For the 1D repetition code where the rare regions are linear,
we find two distinct decodable phases: a conventional ordered phase in which logical failure rates decay exponentially with the code distance, and a rare-region dominated Griffiths phase in which failure rates are parametrically larger and decay as a stretched exponential.
In particular, the latter phase is present when the error rates in the rare regions are above the bulk threshold.
For the 2D toric code where the rare regions are planar, we find no decodable Griffiths phase: rare events which boost error rates above the bulk threshold lead to an asymptotic loss of threshold and failure to decode.
Unpacking the failure mechanism implies that techniques for suppressing extended sequences of repeated rare events (which, without intervention, will be statistically present with high probability) will be crucial for QEC with the toric code.
Non-Uniform Noise Rates and Griffiths Phases
in Topological Quantum Error Correction
Adithya Sriram, Nicholas O'Dea, Yaodong Li, Tibor Rakovszky, Vedika Khemani
September 5, 2024
=====================================================================================
§ INTRODUCTION
A quantum error correcting (QEC) code aims to protect quantum information against the effects of environmental decoherence by robustly and redundantly encoding logical qubits into entangled states of many physical qubits.
The threshold theorem of fault-tolerant quantum computing allows for arbitrarily long computations to be performed on the logical qubits with arbitrarily high accuracy; i.e., the logical error rate goes to zero as the number of physical qubits is scaled up, provided that noise rates are below the error threshold, whose value depends on both the code being used and the types of errors that occur. <cit.>.
Accurately estimating the threshold and the logical failure rates is of great importance for any hardware implementation.
While this often requires taking into consideration various details (including a detailed description of the error model), it is believed that much of the qualitative aspects of QEC can be understood within simplified toy models, involving only phenomenological noise.
Within this setting, mappings to statistical mechanics models of decoding <cit.> have been particularly useful, in informing both the theoretical upper bound on the threshold, and the asymptotic scaling of the sub-threshold logical failure rate.
Phenomenological noise is often studied under the simplifying assumption of uniform error rates on all qubits for all times, while realistic implementations almost always display some sort of heterogeneity, resulting in e.g. non-uniform noise rates.
It is conceivable that when the fluctuations are short-range correlated, the simplifying assumption is still valid.
On the other hand, with extended spatio-temporal correlations — which naturally occur due to effects ranging from fabrication errors <cit.> to stochastic events such as cosmic rays striking the quantum hardware <cit.> —
it is unclear if and how things will be qualitatively different.
Studies of such effects are therefore necessary for a qualitative understanding of code performance in practice <cit.>.
In this work, we investigate these effects of non-uniform, spatio-temporally correlated noise rates,
which map to extended, sub-dimensional rare regions in the corresponding stat mech models.
We show that such rare regions, despite being sub-dimensional and rare, can often have a disproportionately large effect and dominate the logical failure of the code, a phenomenon known as a Griffiths effect.
Griffiths effects have been extensively studied in disordered systems, and correspond to rare but large disorder fluctuations which dramatically change the properties of phase transitions (and the proximate phases) <cit.>.
We mainly focus on rare events (such as cosmic rays) that temporarily increase the error rate from p_ bulk to p_ rare > p_ bulk over the entire code patch.
We study this for the representative examples of the 1D repetition code and the 2D toric code, in a phenomenological error model with both measurement and qubit errors.
They are described by the 2D random bond Ising model (RBIM) and the 3D random plaquette Ising gauge theory (RPGT), respectively, both with non-uniform coupling strengths; see Sec. <ref>.
In the case of the 1D repetition code, we leverage known results from disorder physics to argue that the rare regions lead to a new Griffiths phase above the conventional decodable phase, where the logical failure rate decays as a stretched exponential of the code distance, instead of the exponential scaling that occurs in the conventional decodable phase (when rare regions are absent).
Therefore, while the rare regions do not lead to a failure of error correction (there is still a finite threshold), they can qualitatively change the code's performance by parametrically increasing the logical failure rate.
Our results are described in detail in Sec. <ref>.
In contrast, for the 2D toric code, rare events can be catastrophic: we find that as soon as p_ rare is above the bulk threshold, the entire code loses its threshold (i.e. the probability of a logical error no longer vanishes in the thermodynamic limit)[We often have in mind a situation where p_ bulk can be tuned by improving the qubit quality, whereas p_ rare is an external parameter that we have no control over.
Therefore, we say the effects are catastrophic if there is a large enough p_ rare which makes the code undecodable even when p_ bulk→ 0.
However, it might be possible to gain some control over p_ rare in the hardware, as pointed out in Ref. <cit.>.]. Notably, this happens independently of the bulk error rate, i.e. even as p_ bulk→ 0. We note that similar error models were recently studied numerically by Tan et al. <cit.>, where it was assumed that rare events such as cosmic rays last for at most a finite duration. Within their setup, the rare events are found to be “benign”, in the sense that when the rare region error rate p_ rare is slightly above the bulk threshold, the code can remain in the decodable phase by correspondingly decreasing the bulk error rate p_ bulk. The key difference in our work is that we consider a more realistic setup where the rare events occur at a finite rate in time. This, in turn, implies that the longest period with elevated error rates is typically unbounded from above: an L× L toric code system requires O(L) rounds of repeated measurements for decoding, which typically produces a largest rare-event sequence of size O(log(L)). We argue that these largest rare regions dominate the logical failure rate, and lead to loss of threshold. Both our setup and that of Ref. <cit.> can be understood within our theoretical framework, as we detail in Sec. <ref>.
Our results imply that techniques for suppressing lasting rare events <cit.> will be crucial to QEC with the toric code.
We also provide a physical interpretation of the difference between the 1D repetition code and the 2D toric code as follows.
Both models have excitations that are pointlike, and logical errors that are one-dimensional.
It is convenient to consider their dual models under Kramers-Wannier duality, after the random sign disorder in the stat mech models is neglected.
We obtain 2D and 3D Ising models with non-uniform couplings, respectively, where logical failure of the code can be related to dual Ising correlation functions in both cases.
The difference between the two phase diagrams can be attributed to whether the rare regions can order by themselves in isolation, in this dual picture.
Finally, we note that our results for the 1D repetition code are obtained by leaning on known results for Griffiths physics in the celebrated McCoy-Wu model, which is a disordered 2D Ising model with correlated columnar disorder <cit.>. In contrast, the toric code problem yields a 3D RPGT with correlated disorder, which has not been studied much before in the context of Griffiths physics. Thus our analysis of this problem should also be of independent interest as a stat mech problem.
§ BACKGROUND AND MODELS
We will focus on two familiar examples of error correcting codes, the 1D repetition code and the 2D toric / surface code. Both are characterized by a set of stabilizer parity checks. Errors are characterized by their syndrome, the set of checks they violate. In both examples, these are point-like excitations (domain walls and anyons, respectively). The goal of error correction is to pair up these excitations in a way that undoes the effect of the error. One also has to deal with errors in the measurement of the stabilizers, which is usually done by combining information from many rounds of measurements; this introduces a time coordinate, making the two problems 1+1 and 2+1 dimensional, respectively.
For any given error correcting algorithm, the logical failure rate refers to the probability that the algorithm fails to correctly recover the encoded logical information. It is related to the probability of large error chains whose syndromes are incorrectly paired up by the algorithm. The error threshold of the code is the maximal error rate (of both physical and measurement errors) such that the failure rate goes to zero as the number of (qu)bits is taken to infinity. Of particular interest is the maximum likelihood decoder, which corresponds to the theoretically optimal decoding and thus yields the largest threshold.
While usually both the physical and the measurement error rates are taken to be constants, which are the same everywhere in the system, and for all the different measurement rounds, in realistic scenarios, they would be different for different qubits / stabilizers and can also fluctuate in time. We will be interested in the effect of such fluctuations, particularly those with long-range correlations in space or time (focusing on the former case).
§.§ Statistical mechanical models
The threshold and failure rate of stabilizer codes can be understood in terms of appropriate disordered statistical mechanics models <cit.>.
Detailed derivations of these models are of secondary importance for our purposes, and we refer the reader to <cit.> for a more thorough explanation.
Throughout this paper, we focus on the illustrative examples of the 1D repetition code and the 2D toric code.
As both are CSS codes, we look solely at the Z stabilizers of the stabilizer code with stochastic single-qubit X noise acting on the system.
For the 1D repetition code, the stat mech model associated with decoding is known to be a 2D random bond Ising model (RBIM) on a square lattice,
Z_ RBIM = ∑_{σ}exp(∑_⟨ j,k ⟩ K_jkτ_jkσ_j σ_k ),
and for the 2D toric code, the model is a 3D random plaquette (Ising) gauge theory (RPGT),
Z_ RPGT = ∑_{σ}exp(∑_□ K_□τ_□∏_⟨ j, k⟩∈□σ_jk).
In Eq. (<ref>), the sum is over edges jk containing nearest-neighbor spins σ_j, σ_k.
In Eq. (<ref>), the sum is over plaquettes and each term is a product over spins which reside on the edges jk of the plaquette.
In both models, there is one Ising spin per constant time plane for each physical qubit.
Couplings that extend into the time direction (timelike) correspond to bitflip errors, and in Fig. <ref> they are represented as either vertical bonds (for the repetition code, see Fig. <ref>(a)) or plaquettes with normals pointing in spatial directions (for the toric code, see Fig. <ref>(b)).
Couplings within constant time planes (spacelike) take the form of Z stabilizers and correspond to measurement errors.
In Fig. <ref> they are represented by horizontal bonds (for the repetition code, see Fig. <ref>(a)) or plaquettes with normals pointing in the time z direction (for the toric code, see Fig. <ref>(b)).
For the ease of discussion, in the following we use α as a label for couplings, to refer to either a bond ⟨ j, k ⟩ in the RBIM, or a plaquette in the RPGT. We denote the local error rate by p_α
(i.e. it is a measurement error rate for a spacelike coupling, and a physical error rate for a timelike coupling). The local coupling strengths K_α are related to the local error rates via the Nishimori condition
e^-2 K_α = p_α/(1-p_α).
Here, this equation is understood to hold for each and every α, in both Eqs. (<ref>,<ref>).
The τ_α variables incorporate the error history and introduce quenched random sign disorder into the system.
They take the value τ_α=-1 when an error happens on α i.e. with probability p_α, and take the value τ_α=+1 with probability 1-p_α.
As we are interested in error models with spatio-temporal heterogeneity, we introduce a second type of randomness, and allow the local error rates p_α to differ for different α, while maintaining the Nishimori condition Eq. (<ref>) everywhere.
Correspondingly, the coupling constants K_α also vary in spacetime. Thus, we will study stat mech models with two types of disorder, namely (i) random sign disorder τ_α, and (ii) spatio-temporally varying K_α.
Physically, (i) can be understood as coming from different stochastic error realizations within a fixed error model, whereas (ii) is due to randomness in the error model itself, defined by the error rates at all locations {p_α}.
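For concreteness, the two kinds of disorder can be generated as in the following minimal numpy sketch; the array shape, seed and rates are illustrative assumptions rather than the values used in our simulations.

import numpy as np

rng = np.random.default_rng(0)

def nishimori_coupling(p):
    # K_alpha satisfying exp(-2 K_alpha) = p_alpha / (1 - p_alpha)
    return 0.5 * np.log((1.0 - p) / p)

p_alpha = np.full((64, 64), 0.02)      # local error rates: type-(ii) disorder lives here
p_alpha[20:24, :] = 0.12               # e.g. a few noisier rows
K_alpha = nishimori_coupling(p_alpha)  # coupling magnitudes
tau_alpha = np.where(rng.random(p_alpha.shape) < p_alpha, -1, +1)  # type-(i) sign disorder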
§.§ Defects and failure rates
Condition (<ref>) ensures that the stat mech models correctly encode the success of maximal-likelihood decoders that have full knowledge of the error model.
In particular <cit.>, the failure and success probabilities of decoding are related to the free energy cost of the topological defect in the stat mech model which corresponds to a logical error.
For the RBIM, this is a domain wall; and for the RPGT, this is a magnetic flux tube (see Fig. <ref>), both extending perpendicular to the temporal direction.
As the defects in both models are linelike, they have a free energy cost which to leading order scales as
Δ F ∝σ(ℓ) ·ℓ,
where ℓ is the length of the defect and σ(ℓ) is its tension. In the ordered phases of the stat mech model (ferromagnetic for RBIM, deconfined for RPGT), defects are costly with diverging free-energy cost. This maps to the decodable phase with success probability approaching 1 as the system size is scaled up. A given defect may or may not correspond to a logical error. For instance, in the RBIM mapping of the 1+1D repetition code, only the defect in the horizontal direction is a logical operator (see Fig. <ref>).[Throughout this work, we denote the line tension of defects associated with logical operators as σ_∥, as they are parallel to the rare regions. We denote the line tension associated with the transversal defect going in the temporal direction as σ_⊥, for it is perpendicular to the rare regions.]
In codes with uncorrelated disorder, the line tensions σ_∥ and σ_⊥ are expected to scale in the same way throughout the phase diagram. However, as we will see below, this need not be the case for long-range correlated disorder.
For a given error model {p_α}, which thereby fixes the set { K_α}, the logical failure probability for this error model is given by <cit.>
ℙ_fail({K_α}) =
[ e^- Δ F/(1+e^- Δ F)]_{τ_α},
Numerically, we use the minimum weight perfect matching (MWPM) decoder as implemented through the PyMatching package <cit.> to calculate ℙ_fail.
Though many of our physical arguments are derived from considerations of the maximum likelihood decoder, in App. <ref> we provide arguments for why they are also expected to hold for the MWPM decoder.
We calculate ℙ_fail for each error model { p_α}.
This process yields a distribution of ℙ_fail.
We do not expect ℙ_fail to be self-averaging, as it will be dominated by rare error models where the LRR is exceptionally large; see App. <ref> for further discussion.
Instead of the mean ℙ_fail, we will mostly use the median ℙ_fail in our numerics.
From this we may numerically define the defect cost Δ F as
Δ F ≡ -ln[ (ℙ_fail)_ med/(1- (ℙ_fail)_ med)],
in analogy with Eq. (<ref>).[Given the average over the sign disorder {τ_α} in Eq. (<ref>), Δ F in Eq. (<ref>) is not the median over error rate realizations of the sign-disorder-averaged free energy. This does not matter for our purposes.]
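As a small sketch, the conversion between the median ℙ_fail and the defect cost is a one-line transformation; the sample values below are placeholders rather than measured data.

import numpy as np

def defect_cost(pfail_samples):
    p_med = np.median(pfail_samples)        # median over error-model realizations {p_alpha}
    return -np.log(p_med / (1.0 - p_med))   # Delta F = -ln[ P_med / (1 - P_med) ]

print(defect_cost(np.array([0.02, 0.05, 0.03, 0.08])))   # hypothetical P_fail values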
§.§ Rare regions
Our discussions so far have been fairly general.
Now, we turn to our primary focus, that is long range correlations in the distribution of non-uniform error rates, and the effect of these on the free-energy cost of defects.
For concreteness, we focus on cases with uniform measurement error rates p_ meas.
They correspond to uniform spacelike couplings in the stat mech model.
In contrast, we set bit-flip error rates p_ bf to be spatially uniform on each time slice, but temporally varying between time slices, and are drawn from a Bernoulli distribution:
p_ bf(t) =
p_bulk with probability 1 - γ,
p_rare with probability γ .
These produce varying timelike couplings in the stat mech model via Eq. (<ref>), see Fig. <ref>.
This is a minimal model of stochastic rare events which globally affect the system, inspired by phenomena such as cosmic rays.
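A sketch of how such an error history can be drawn is given below; the duration, rates, and γ are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
T = 256                                   # number of measurement rounds (we take T = L)
gamma, p_bulk, p_rare = 1.0 / 3.0, 0.02, 0.12

is_rare = rng.random(T) < gamma           # rounds hit by a rare event
p_bf = np.where(is_rare, p_rare, p_bulk)  # time-dependent bit-flip rate p_bf(t)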
Let us denote by p_0 the threshold of the code when the error rate is uniform, i.e. when p_ bf = p_ bulk everywhere.
(This number is set by p_ meas, but we omit this dependence here.) In App. <ref>, we calculate p_0 for the models used in this work.
We are interested in the following regime,
p_ bulk < p_0 < p_ rare,
which models an experiment that would be below threshold if not for rare events that temporarily elevate the error rate to p_ rare.
Parts of the stat mech model where p_ bf = p_ rare are henceforth referred to as rare regions.
Whenever γ > 0,
we have rare regions whose typical temporal extent L_⊥ is finite, which should be benign from an error correction point of view.
However,
we argue below that logical failure events will be dominated by the largest rare region (LRR), and with γ > 0 the largest L_⊥ will typically be unbounded from above in an infinitely long experiment.[Throughout the paper, we will take the time for the experiment (hence also the “height” of the stat mech models) to be proportional to L.]
In Sec. <ref> and Sec. <ref> below, we present this rare region analysis for the 1D repetition code and for the 2D toric code, respectively, using both analytic arguments and numerics.
§ LINEAR RARE REGIONS FOR THE 1+1D REPETITION CODE
In this section, we analytically predict the effects of the correlated disorder in bitflip rates Eq. (<ref>) in the 1D repetition code using the model described in Sec. <ref>, and we numerically test these predictions.
The repetition code is a classical code that can serve as a simple theoretical model and as an experimental benchmark <cit.>.
In particular, here, the repetition code allows an analogy with the McCoy-Wu model, where the asymptotic scaling of the defect free energy can be predicted analytically <cit.>.
As we noted in the previous section, the stat mech model describing decoding the 1D repetition code of length L is an L × T RBIM for a time duration of T; we take T=L in the following.
Logical failure rates are controlled by defect free energy costs, and the defects in the RBIM are domain walls.
The correlated disorder in bitflip rates introduces rows of weak vertical bonds in the RBIM that also have a higher likelihood of sign errors; we call regions of consecutive weak bonds in time rare regions.
§.§ Central assumption about largest rare region
A central simplifying assumption we make in order to derive a qualitative phase diagram is that the logical failure of the code is dominated by the largest rare region (LRR) of the model[By “dominate," we do not require the LRR to strictly set the logical failure rates; rather, we only assume that studying the LRR will give correct predictions for the asymptotic behavior in the phases on either side of the threshold, even if the LRR by itself does not necessarily predict nonuniversal quantities like phase boundaries in the full L × L model.].
Correspondingly, in the stat mech model, we assume that the cost of the spatial defect Δ F is controlled by the LRR of weak couplings.
We will see that much of the numerical results can be understood qualitatively by focusing on the LRR and treating it as an isolated system.
Our assumption can be understood from the intuitive picture that logical errors will most likely occur when the error rates are above threshold for the longest duration. One might also anticipate the assumption directly from the stat mech model by noting that the LRR has the most “room” for the domain wall to move (relative to smaller rare regions), and therefore the defect gains the most entropy (note that, since the bulk couplings are larger, it is energetically favorable for the defect to stay within rare regions). Both of these pictures neglect the interaction between the LRR and the bulk, as well as between the LRR and other rare regions. We will find that much of the phase diagram is qualitatively described by the LRR, although we note that there are discrepancies at large p_ rare sufficiently far above threshold.
We briefly summarize the leading order scaling of defect free energy in clean models where neither the random sign disorder nor the spatio-temporal randomness in {K_α} are present.
These clean models are a convenient toy picture for the LRR, and provide a basis for comparison with our numerical results of Δ F.
For the LRR in the 1D repetition code, we obtain the 2D Ising model living in a thin strip of dimensions L × L_⊥.
In the paramagnetic phase with K_ rare < K_c, a high-temperature expansion can be used to show that the line tension decays to zero with increasing L_⊥ as
σ_∥(L) ∝ e^-α L_⊥,
where α depends continuously on K_ rare. We discuss this scaling in more detail in Appendices <ref> and <ref>.
For a rate of weak couplings γ > 0 and for an O(L) height of the stat mech model, we have that the typical L_⊥ for the LRR grows as
L_⊥∝log L,
with a nonzero constant of proportionality. L_⊥ is distributed identically to the longest run of heads in a sequence of L biased coin flips, which is known to have mean ∝log(L) up to O(1) corrections <cit.>. This random variable also has just O(1) variance, meaning that its mean and median behave similarly. The mean ∝log(L) can be understood heuristically as striking a balance between L opportunities to have a length L_⊥ string of rare regions and the exponentially decaying probability of such a consecutive string: the expected number of rare regions of height L_⊥ is asymptotically proportional to L γ^L_⊥, which is one for L_⊥∼log L.
Together, Eqs. <ref> and (<ref>) reproduce the same scaling as in Eq. (<ref>). The value of z depends on several factors. It depends on the height of the rare region (and hence γ, which sets the proportionality constant in Eq. (<ref>)). It also depends on how deep into the paramagnetic phase the rare region is (and hence on K_rare and hence p_rare, which sets α in Eq. (<ref>)).
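Both ingredients, the log L growth of the largest rare region and the resulting power-law line tension, can be checked with a few lines of Monte Carlo; the decay constant α below is an illustrative assumption, not a fitted value.

import numpy as np

rng = np.random.default_rng(2)
gamma, alpha = 1.0 / 3.0, 0.5             # alpha is illustrative

def longest_run(bits):
    best = run = 0
    for b in bits:
        run = run + 1 if b else 0
        best = max(best, run)
    return best

for L in (64, 256, 1024, 4096):
    L_perp = np.mean([longest_run(rng.random(L) < gamma) for _ in range(200)])
    # L_perp / log(L) approaches a constant, so exp(-alpha * L_perp) scales as L^{-z}
    print(L, L_perp / np.log(L), np.exp(-alpha * L_perp))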
§.§ Comparison to McCoy-Wu
A useful comparison can be made between the stat mech model for the 1D repetition code and the McCoy-Wu model <cit.>.
The latter can be obtained from Eq. (<ref>) by removing the sign disorder (setting τ_jk = +1 everywhere) but still keeping the correlated disorder in the couplings {K_α}.
By analogy with Eq. (<ref>), we are interested in the regime,
K_ bulk > K_0^2D > K_ rare,
where K_0^2D is the critical coupling strength of the 2D uniform Ising model without any weak bonds.
The rare regions of the model are locally in the paramagnetic phase, whereas the bulk is still in the ferromagnetic phase, and the McCoy-Wu model is said to be in the Griffiths phase.
Refs. <cit.> showed that a defect parallel to the rare region direction has a line tension of the form (compare Eq. (<ref>))
σ_∥(L) ∝ L^-z,
where L is the linear size of the system and z is a dynamical exponent which depends continuously on K_ rare <cit.>. z=0 when K_rare = K_0^2D.
Unlike the LRR approximation, z tends to infinity at a finite value of K_rare = K_c^MW. This point coincides with the loss of spontaneous magnetization in the McCoy-Wu model, and is typically identified as the critical point of the McCoy-Wu model. By spontaneous magnetization, we mean the magnetization in the thermodynamic limit on taking a bulk longitudinal field h → 0^+. Note that other common metrics of order may show transitions that do not necessarily coincide with K_c^MW; vertical correlation functions that cut perpendicular to the rare regions can still decay to zero for some choices of K>K_c^MW.
For K_rare < K_c^MW, the domain wall tension σ_∥ decays faster than any power law. This is not captured in the LRR approximation.
§.§ Predicted phase diagram
Using intuition from the McCoy-Wu model, we can now piece together the phase diagram shown in Fig. <ref>(c) as a function of p_ rare at fixed p_ bulk < p_0^2D. Here, p_0^2D is the critical bitflip error rate of the L × L repetition code. This should be distinguished from both the decodability threshold p_D and the stat mech critical point p_ SM introduced below.
When we write ℙ_fail in this section, we mean the median ℙ_fail.
First, when p_rare < p_0^2D, the entire system is in the decodable phase and the code is necessarily decodable. The RBIM exhibits ordinary ferromagnetic behavior such as a finite domain wall tension. This corresponds to a failure rate decaying exponentially in system size.
For p_rare between p_0^2D and p_ SM, the rare regions are in the paramagnetic phase, but the stat mech model remains ordered. The stat mech model is in the ferromagnetic Griffiths phase with σ_∥ described by Eq. (<ref>), giving a domain wall cost of the form Δ F ≈ L σ_∥(L) ∝ L^1-z.
We expect z to vary continuously with p_ rare. In particular, z=0 when = p_0^2D. This phase is denoted by the hatched region in Fig. <ref>(c).
The Griffiths phase further splits into two distinct regimes depending on whether z<1 or z>1. We denote the p_ rare at which z=1 as p_D (a distinct error rate from the critical p_ SM, with p_D < p_ SM), at which point ℙ_fail is a constant greater than zero that is asymptotically independent of system size.[A constant ℙ_fail at threshold assumes that the power law does not have subleading multiplicative corrections. For example, σ_∥∼log(L)/L^z would cause ℙ_fail at threshold to decay to 0 with L. We do not expect any such multiplicative corrections.]
When p_0^2D < p < p_D, we have 0<z<1, and the code is still decodable.
Here, ℙ_fail tends to zero as L →∞: the defect cost L^1-z diverges, and ℙ_fail≈ e^-Δ F decays as a stretched exponential in L.
On the other hand, p > p_D means z>1, and the domain wall cost L^1-z asymptotically vanishes, making ℙ_fail→ 1/2 and the code nondecodable. Nevertheless, this regime is still in the ordered phase of the stat mech model, due to the fact that transversal domain walls (which are not related to ℙ_fail) continue to have a line tension σ_⊥∼𝒪(1), yielding a growing free energy cost; see Fig. <ref> below.
When p_rare is very large and above the critical disorder strength p_ SM of the stat mech model, the system fails to order and the domain wall tension decays faster than any power law, giving a vanishing domain wall cost and ℙ_fail≈ 1/2. This is captured by McCoy-Wu but is beyond the LRR approximation. We emphasize that the LRR nevertheless qualitatively captures the part of the phase diagram surrounding p_D, and only fails substantially above p_D. Note also that this discrepancy is small: it concerns only the qualitative description of how rapidly ℙ_fail approaches 1/2 at sufficiently large p_ rare.
§.§ Numerical results
We now compare this predicted phase diagram to numerical simulations of the repetition code. For the numerics on the 1+1D repetition code, we take a length L system with L measurements.
We choose a uniform measurement error rate p_ meas = 0.11, which sets p_0^2D = 0.10.
We fix the bulk bitflip rate to be p_ bulk = 0.02 < p_0^2D, and vary the value of p_ rare.
We sample the rare regions according to the Bernoulli distribution Eq. (<ref>), where we take γ = 1/3 in both cases.
Our numerical results supporting the phase diagram are shown in Fig. <ref> and Fig. <ref>. In Fig. <ref>, we compute ℙ_fail as a function of p_rare. We identify the decodable-to-nondecodable transition from the crossing of curves with different L and find that it is clearly distinct from the threshold of the bulk system p_0^2D≈ 0.1[We also observe an empirical separation between the mean and typical (median) failure rate, not pictured in the main text, which we comment on in Appendix <ref>.]. In Fig. <ref>, we depict the scaling of domain wall cost with system size. We see that there is an extended regime where the domain wall cost scales as a power law with continuously varying exponent 1-z, consistent with our prediction for Δ F within the Griffiths phase p_ rare∈ (p_0^2D, p_ SM).
The inset of Fig. <ref> shows the numerically extracted z obtained from the linear fits in the main panel. We indeed find that it changes continuously with p_rare, dropping below 1 at p_D; however, we also observe that z remains finite even when p_rare is made smaller than the bulk threshold, contrary to our analytical argument. We note, though, that our theoretical predictions neglected subleading contributions to the domain wall cost. In general, one might expect
Δ F ∼ L^1-z + O(L^α)
where L^α with α < 1-z is a subleading term dependent on p_ rare. We do not attempt to compute such subleading corrections, but we note that these may cause significant finite-size effects, particularly in estimating z numerically.
We view the persistence of z>0, even for p_ rare < p_0^2D where we expect z=0, as an artifact of such subleading terms.[In Appendix <ref>, we consider an L × L RBIM without correlated line disorder for comparison. There, we again fit z in the same manner as in the inset of Fig. <ref>, but we still find a z that fails to reach 0 even at small p_ bulk.]
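For reference, the exponent extraction described here amounts to a straight-line fit in log-log coordinates, as in the following sketch; the arrays are placeholders for measured defect costs, not our data.

import numpy as np

L_values = np.array([16, 32, 64, 128])
delta_F = np.array([3.1, 4.0, 5.2, 6.7])   # hypothetical defect costs Delta F(L)

slope, _ = np.polyfit(np.log(L_values), np.log(delta_F), 1)
z_estimate = 1.0 - slope                    # from Delta F ~ L^{1-z}
print(z_estimate)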
To summarize, while our numerics on z in Fig. <ref> are suggestive, they are not yet conclusive.
To further test our theoretical predictions, we directly investigate an L ×log_2 L system with uniform bitflip error rate p_ bulk = p_ rare = p. A uniform system of this size mimics the largest rare region of an L by L heterogeneous system.
With numerical results shown in Fig. <ref> for log_2 L ≤ 10, we confirm that a scale invariant point p_ D > p_0^2D exists in the latter systems, and that failure rates below this point decay as stretched exponentials in L.
We note however that the numerical value of p_ D is non-universal, so a direct comparison with the L × L heterogeneous system cannot be made.
When p_ rare is between p_ D and p_ SM, the code is no longer decodable.
However, within this region, the overall phase of the RBIM is still ferromagnetic and the cost of the defect in the time direction continues to grow.
This defect can be thought of as a “temporal” logical operator, relevant for fault-tolerant gates involving moving logical information in space <cit.>.
Our numerical results in Fig. <ref> show that this temporal logical operator has a higher threshold than the logical operator parallel to the correlation direction, confirming the phase diagram in Fig. <ref>.
As the order-disorder transition occurs when defects in all directions cease to grow in free energy cost with system size, it is at p_ SM where the system truly transitions to being paramagnetic.
Finally, we close this section by noting that p_D will additionally depend upon the value of γ used. Predictably, the code will be more tolerant to rare events if they are rarer and we show this explicitly in Appendix <ref>.
§.§.§ Additional numerical details for the repetition code
Here we provide additional numerical details for Figs. <ref>, <ref>, <ref> and <ref>.
In Fig. <ref>, each point on each curve is determined from the median of 10^3 error model realizations { p_α}. For each error model realization, ℙ_fail is determined from averaging over 10^4 physical error configurations.
In Fig. <ref>, z is determined by a linear fit to logΔ F (determined from Eq. (<ref>)) vs log L. The extracted value of z should be viewed qualitatively. Due to potential subleading corrections which are present even in a system with uniform error rates (see Appendix <ref>), we do not see z = 0 for p_ rare < p_0^2D as predicted. Furthermore, at low error rates and large system sizes, the median ℙ_fail is often 0, which is why the smallest values of p_ rare are restricted to smaller system sizes.
We note that techniques such as importance sampling (e.g. chapter 9 of Ref. <cit.>) may give greater access to this difficult regime of small ℙ_fail, and leave detailed exploration to future work.
In Fig. <ref>, each point on each curve was determined from 10^6 physical error configurations.
In Fig. <ref>, each point on each curve is determined from the median of 10^3 error model realizations. ℙ_fail for each error model realization is determined from 10^4 physical error configurations. To realize the transversal defect, we make the measurement error rate a random variable in the spatial direction, and set the bit flip error rate to 0.11.
§ PLANAR RARE REGIONS FOR THE 2+1D TORIC CODE
In this section we turn to time-dependent bitflip error rates in the 2D toric code.
They lead to rare regions in the random-plaquette gauge theory (RPGT) Eq. (<ref>) that are planar and have infinite extent in the two spatial dimensions, see Fig. <ref>(b).
Similarly to Sec. <ref>, we draw bitflip error rates on different time steps from the Bernoulli distribution in Eq. (<ref>), and choose the time duration to be T = L.
We proceed as in Sec. <ref>, first detailing our analytic predictions of the logical defect scaling for a quasi-2D system of dimensions L × L × L_⊥ with uniform error rates.
Treating this isolated system as a model of the LRR, we then turn to the L × L × L heterogeneous system with non-uniform error rates, where quantitative agreement between numerical results and analytic predictions is found. Our key finding is the absence of a decodable Griffiths phase: as soon as p_rare exceeds the bulk threshold, and the rare regions are in the “wrong” phase, they make the decoder fail. As we will see, this is due to the 2D nature of the rare regions, which allows them to order by themselves.
§.§ High temperature expansion within a clean planar LRR
As we have seen in Sec. <ref>, clean models (where neither the random sign disorder nor the spatio-temporal randomness in {K_α} is present) are useful for gaining intuition.
In parallel with Sec. <ref>, a high temperature expansion within a thin slab of dimensions L × L × L_⊥ yields
σ_∥(L) ∝ e^-α L_⊥· L ⇒ -logσ_∥(L) ∝ L · L_⊥,
see details in Appendix <ref>.
Once again, the LRR has width L_⊥∝log L, as in Eq. (<ref>).
As in Sec. <ref> and Eq. (<ref>), we perform this expansion for K < K_c^3D, where K_c^3D is the critical coupling strength of the 3D uniform bulk system[Strictly speaking, the expansion is only valid for K < K_c^2D(L_⊥), where K_c^2D(L_⊥) < K_c^3D is the L_⊥-dependent critical coupling strength of the slab with height L_⊥.
Our statement here can be justified by noting that K_c^2D(L_⊥) approaches K_c^3D as L_⊥ increases, see Sec. <ref>, <ref> for detailed discussions.].
This result suggests that σ_∥(L) vanishes exponentially with L in the limit L→∞.
The defect cost is therefore Δ F ∝ L σ_∥(L) ∝ L^1-zL for some positive constant z (note that the exponent itself decreases linearly with L), which vanishes in the thermodynamic limit when K < K_c^3D.
Translating this back to the decoding problem, an L × L toric code within time duration T = L_⊥∝log L becomes immediately undecodable whenever p > p_0^3D, where p_0^3D is the threshold of the 3D RPGT with uniform error rates.
Comparing Eq. (<ref>) with Eq. (<ref>), there is an extra multiplicative factor L in -logσ_∥(L) due to the increased dimension of the rare region.
The strong vanishing of σ_∥(L) in Eq. (<ref>) suggests qualitative differences between linear and planar rare regions.
§.§ Dual picture
Such qualitative differences can be more clearly appreciated in a dual picture. We summarize this here and explain this picture in depth in Appendix <ref>. Under a Kramers-Wannier duality, the linear and planar LRRs are described by quasi-1D and quasi-2D Ising models, respectively.
The defect line tension maps to the inverse dual correlation length;
the vanishing of the line tension (and therefore proliferation of line defects) corresponds to the ordering of the dual Ising models.
The difference in the scaling of σ can be therefore attributed to whether the dimension of the rare region is below or above the lower critical dimension of the Ising model.
In particular, for the planar rare regions, the critical dual coupling strength K_c^2D, ∗(L_⊥) of the L × L × L_⊥ dual Ising model is finite, and can be arbitrarily close to K_c^3D, ∗ of a 3D bulk Ising model, as L_⊥∝log L can be arbitrarily large[In particular, under Kramers-Wannier duality, we have K_c^2D, ∗(L_⊥) > K_c^3D, ∗ for all L_⊥ < ∞.
As L_⊥→∞, K_c^2D, ∗(L_⊥) approaches K_c^3D, ∗ from above.
For any K^∗ > K_c^3D,∗, we will also have K^∗ > K_c^2D, ∗(L_⊥) for sufficiently large L_⊥.].
Therefore, for any K^∗ > K_c^3D,∗, the LRR will become ordered for sufficiently large L_⊥∝log L.
We can also understand these statements from the perspective of quantum models in one lower dimensions, related to the stat mech models via quantum-classical duality. We can relate the clean, L × L × L_⊥ classical problem to a quantum system on a L × L_⊥ lattice[Note that in this case, the “time” direction corresponds to one of the spatial dimensions of the toric code, rather than to physical time.]. In this quantum model, the calculation of σ(L) amounts to asking about the energy cost of inserting a point-like flux defect in the trivial paramagnetic phase (since we are dealing with rare regions). Using the quantum version of Kramers-Wannier duality, this is the same as the energy gap between the two symmetry broken ground states of a two-dimensional quantum Ising model in its ordered phase, which is exponentially small in the volume of the system, yielding the scaling in Eq. (<ref>). The same argument could be used to deduce the scaling (<ref>) in the case of 1D repetition code.
§.§ Crossover scaling
The picture above offered by the Kramers-Wannier duality for the clean model suggests that when analyzing numerical data of ℙ_fail at p_ rare≳ p_0^3D of the toric code problem, we must take into account the proximity of p_0^3D to the 2D critical point of the planar LRR.
We denote this L_⊥-dependent critical error rate as p_0^2D(L_⊥), and we have lim_L_⊥→∞ p_0^2D(L_⊥) = p_0^3D.
As we increase L_⊥, we expect a crossover from the 2D RBIM transition to the 3D RPGT bulk transition.
To capture the finite L_⊥ crossover, we propose the following phenomenological two-parameter scaling function
ℙ_fail(p_ rare, L, L_⊥) ≈Φ[ (p_ rare-p_0^3D) · L^1/ν_3, L_⊥ / L ].
We expect the scaling function to be descriptive of ℙ_fail when both L_⊥ and L are much larger than 1.
Here, ν_3 is the correlation length exponent for the 3D RPGT transition.
For any finite L_⊥ / L > 0, this reduces to the conventional scaling function of the 3D RPGT with a nonzero aspect ratio.
The planar limit is L_⊥ / L → 0, where we expect to see critical scaling near a 2D critical point.
With these considerations, we find that
p_0^2D(L_⊥) - p_0^3D = z_0 · L_⊥^-1/ν_3,
where z_0 is a non-universal positive constant. See Appendix <ref> for more detail.
We also find that at small L_⊥ / L the two-parameter scaling function Φ reduces to the following asymptotic form with a single parameter
ℙ_fail|_L_⊥ / L → 0 =Φ^2D[ (p_ rare-p_0^2D(L_⊥)) · L_⊥^1/ν_3·( L_⊥ / L)^-1/ν_2]
=
Φ^2D[ ( (p_ rare-p_0^3D) · L_⊥^1/ν_3 - z_0 ) ·( L_⊥ / L)^-1/ν_2].
Here, ν_2 is the correlation length exponent for the 2D RBIM transition, and Φ^2D is the corresponding universal scaling function.
Note that for any finite L_⊥, this function reduces to the standard 2D scaling form with the argument ∝ (p_ rare-p_0^2D(L_⊥)) · L^1/ν_2.
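These relations can be packaged into a small helper for producing the collapses; in the sketch below we plug in ν_3 = 1, ν_2 = 1.5, z_0 ≈ 0.08 and p_0^3D ≈ 0.045, the values quoted elsewhere in the text, purely for illustration.

nu3, nu2 = 1.0, 1.5           # correlation-length exponents used in the collapses
z0, p0_3d = 0.08, 0.045       # non-universal constant and bulk threshold

def p0_2d(L_perp):
    # L_perp-dependent critical error rate, p_0^2D(L_perp) = p_0^3D + z0 * L_perp^{-1/nu3}
    return p0_3d + z0 * L_perp ** (-1.0 / nu3)

def collapse_variable(p_rare, L, L_perp):
    # argument of Phi^2D in the L_perp << L limit
    return ((p_rare - p0_3d) * L_perp ** (1.0 / nu3) - z0) * (L_perp / L) ** (-1.0 / nu2)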
§.§ Predicted phase diagram
With these results, and the assumption that the logical failure of the code is dominated by the LRR (see Sec. <ref>),
we now discuss the predicted phase diagram in Fig. <ref>(d).
Recall that we choose p_ bulk < p_0^3D, and vary p_ rare across p_0^3D.
When p_ rare < p_0^3D, the entire system has error rates below p_0^3D, and the code is in its decodable phase.
Correspondingly, the RPGT is in its deconfined phase, where the conventional scaling - logℙ_fail∝ L holds.
When p_ rare > p_0^3D, we predicted in Sec. <ref>, <ref> that the code becomes undecodable in the thermodynamic limit, i.e. lim_L→∞ℙ_fail = 3/4 for any p_ bulk.
In Fig. <ref>(d), we highlight the critical point p_ D(L) ≡ p_0^2D(L_⊥∝log L) of the planar LRR, which approaches p_0^3D from above as L →∞.
For p_ rare between p_0^3D and p_ D(L), the system is in the “mesoscopic crossover regime” (dashed green region), whose extent shrinks with increasing L.
Within this regime, we expect ℙ_fail to be suppressed with increasing L for L below a crossover length scale ξ_ crossover∝ (p_ rare - p_0^3D)^-ν_3.
Above the crossover length scale, ℙ_fail grows with L, and eventually saturates to 3/4 in the infinite L limit.
Therefore, a non-monotonic dependence of ℙ_fail on L is expected.
All of this phenomenology should be captured by the scaling function Eq. (<ref>) for sufficiently large L and L_⊥, and for p_ rare sufficiently close to p_0^3D.
Similar to the 1+1D repetition code, there is again a range of values of p_ rare between p_D(L) and p_ SM, where the rare regions are disordered but the line tension σ_⊥ of the transversal defect (perpendicular to the planar rare regions) is still finite. Due to the presence of the disordered rare regions, we call this regime a Griffiths phase, but unlike the 1+1D repetition code, it is non-decodable.
§.§ Numerical results
We calculate ℙ_fail for both an L× L × L heterogeneous system with rare regions, and an L× L ×log_2 L isolated system with uniform error rates.
In both cases, we choose a uniform measurement error rate p_ meas = 0.01, which sets p_0^3D = 0.045.
For the L × L × L system, we choose p_ bulk = 0.01 < p_0^3D, and generate the rare regions according to Eq. (<ref>) with γ = 1/3.
Our numerical results are shown in Fig. <ref> and <ref>.
In Fig. <ref> for the L× L × L system, we find no scale-invariant point p_ D for the median ℙ_fail.
Instead, we find a crossover regime with a non-monotonic dependence of ℙ_fail on L.
This is consistent with our expectations from the discussions above.
In Fig. <ref> for the isolated L× L × L_⊥ system, these predictions are more clearly borne out.
When setting L_⊥ = log_2 L, we observe a clear downward drift of p_D(L) towards p_0^3D with increasing L, as well as the non-monotonic dependence of ℙ_fail on L.
We also extract p_0^2D(L_⊥) at fixed finite values of L_⊥.
The results can be fitted well to Eq. (<ref>), see inset of Fig. <ref>.
Furthermore, the single-parameter scaling function Eq. (<ref>) is relevant as L_⊥≪ L, and can be directly tested.
In Fig. <ref>(triangles) we show the data collapse of the heterogeneous L× L× L system, and find good agreement with Eq. (<ref>) when setting L_⊥∝log L in its parameter.
In Fig. <ref>(squares), collapse of data from the L× L × L_⊥ system is shown, where agreement is again found.
Finally, the functional form of Φ^2D in Eq. (<ref>) is universal and thus can be extracted independently by decoding the 2D toric code with perfect measurements.
These results (Fig. <ref>(circles)), when plotted together with the numerical scaling collapses from the previous two numerical experiments, lie on top of each other (up to a constant rescaling of their parameters), see Fig. <ref>.
This strongly supports the crossover scaling picture provided by Eqs. (<ref>,<ref>).
In particular, the data collapse is strong evidence for our theoretical prediction that the 2D scaling function, modulo a rescaling of its argument, describes the failure probability close to threshold even in the (2+1)D planar-disordered toric code described by a 3D stat mech model.
To summarize, in the toric code, whenever p_ rare > p_0^3D, planar rare regions render decoding entirely impossible in the thermodynamic limit[As a consistency check, this should follow directly from Eq. (<ref>). Taking L_⊥∝log L, we indeed find lim_L→∞ℙ_fail = 3/4 for any p > p_0^3D.]. However, a transverse defect perpendicular to the planar rare regions may still have a diverging Δ F. Indeed, the defect cost will diverge whenever the transverse correlation length in the dual Ising model is finite. It is known <cit.> that one can have a spontaneous magnetization without a diverging transverse correlation length in 3D Ising models with planar disorder, corresponding in the primal gauge theory to a non-decodable region that has a diverging transverse defect cost. These dualities are complicated by the presence of quenched random sign disorder, which we have largely neglected in our discussion of dualities. We do find such a region via numerical results on small systems L ≤ 18 in Appendix <ref>, and we find a transition in the transverse defect cost at a p_ SM > p_0^3D. We leave a more thorough investigation of this transition to future work.
§.§.§ Additional Numerical Details
Here we provide additional numerical details for Fig. <ref> and Fig. <ref>. In Fig. <ref>, each point on each curve is determined from the median of 10^3 error model realizations, and ℙ_fail for each error model realization is determined from 10^4 physical error configurations. For Fig. <ref>, for L = 4,8,16,32, each point is determined from 10^5 physical error configurations. For L = 64,128,256 we use 10^4 physical error configurations. In the inset, for each value of L_⊥, we obtain the critical error rate from system sizes of L = 24 to L = 64. For each system size, we obtain ℙ_fail from 10^5 physical error configurations.
In Fig. <ref>, we plot the same data from Fig. <ref> along with additional data collected for a toric code with no measurement errors. In all data collapses, we take ν_3 = 1 and ν_2 = 1.5.
We use z_0 = 0.08 for the planar-disordered 2+1D toric code and z_0 = 0.09 for the log_2 L 2+1D toric code.
z_0 is not universal, and it is reasonable to expect different z_0 values for these two systems.
For the 2D toric code with no measurement errors, we use w = (p - p_0^2D)· L^1/ν_2 for the argument of the scaling form, where p_0^2D≈ 0.103 <cit.>.
A slight x axis rescaling was also necessary for good collapse. For the log_2 L toric code, we rescale the x axis by 1.1 and for the disordered toric code, we rescale by 1.2.
§ CONCLUSIONS
In this work we analyzed the performance of topological quantum codes in the presence of non-uniform error rates that are long range correlated.
We find that rare events with increased error rates (above the bulk threshold) can have dramatic effects on the code performance.
We point out crucial differences between linear and planar rare regions: the former lead to a new decodable phase for the 1D repetition code where the logical failure rate scales as a stretched exponential in the code distance, while the latter make decoding immediately impossible for the 2D toric code, as soon as the error rate in the rare region exceeds the global decoding threshold[Our analysis can be immediately extended to the case with fabrication errors, leading to parts of the code patch which have a larger-than-bulk error rate. They will result in linear rare regions parallel to the temporal direction, for both the 1D repetition code and the 2D toric code. For this reason, our analysis implies that a decodable stretched exponential phase is present in both cases.].
We expect these analyses to be more broadly applicable to other topological codes with point-like excitations.
While we focused on toy error models for simplicity, the asymptotic scalings predicted here are expected to be universal and to hold for more realistic error models.
For instance, fabrication errors may lead to qubits with different fixed error rates and this may be better modeled by a noise distribution other than Bernoulli <cit.>.
For cosmic ray events, an error model incorporating the relaxation time of the qubits can be considered.
While current quantum error correction (QEC) experiments are restricted to small code distances, we expect our results to be descriptive of future experiments when they approach the scaling limit.
More broadly, QEC experiments provide new platforms and new motivations for exploring disorder physics.
This work provides a first exploration in this direction, and in future work it would be interesting to consider other models, including (generalized) gauge theories, which otherwise have little independent motivation in solid-state systems.
§ ACKNOWLEDGEMENTS
We acknowledge helpful discussions with Aditya Mahadevan, Arpit Dua, David Huse, Chaitanya Murthy and especially Akshat Pandey.
We thank Hengyun Zhou for helpful discussions and for bringing Ref. <cit.> to our attention.
A.S. acknowledges support from the National Science Foundation Graduate Research Fellowship. N.O.D. acknowledges support
from the ARCS Foundation for ARCS Scholar funding. Y.L. was supported in part by the Gordon and Betty Moore Foundation’s EPiQS Initiative through Grant GBMF8686. Y.L. and T.R. were supported in part by the Stanford Q-FARM Bloch Postdoctoral Fellowship in Quantum Science and Engineering.
V.K. acknowledges support from the Office of Naval Research Young Investigator Program (ONR YIP) under Award Number N00014-24-1-2098, the Alfred P. Sloan Foundation through a Sloan Research Fellowship and the Packard Foundation through a Packard Fellowship in Science and Engineering.
Numerical simulations were performed on Stanford Research Computing Center's Sherlock cluster.
§ MINIMUM WEIGHT PERFECT MATCHING
Throughout this work, we use minimum weight perfect matching (MWPM) for decoding.
MWPM finds the minimum weight error consistent with the observed syndrome.
As the weight of an error is associated with an energy, MWPM neglects entropic contributions and can be thought of as a zero temperature decoder.
MWPM works as follows.
First, a decoding graph is defined such that every node of the graph corresponds to a check of the code, and every edge in the graph represents a local error that connects all checks triggered by this error.
We assume that each edge connects at most two checks (i.e. no hyperedges), which is the case for all the models considered in this paper.
Second, a syndrome measurement marks which of the checks were triggered and those nodes are highlighted on the decoding graph.
Next, the decoder works to pair up each of the highlighted nodes on the check graph with the minimum number of edges.
From such a minimum weight matching, a proposed correction operation can be obtained.
The pairing can be carried out efficiently in O(n^3) time, where n is the number of nodes on the decoding graph <cit.>.
To account for non-uniform error rates, weights can be assigned to edges based upon the rate of the corresponding error.
If error a has error rate p_a, then its respective edge on the decoding graph has weight ln[(1-p_a)/p_a].
Throughout this work, we supply the decoder with the information about the non-uniform error rates.
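As an illustration, the sketch below decodes a periodic distance-L repetition code (a single round of perfect measurements) with non-uniform bit-flip rates, weighting each edge by ln[(1-p_a)/p_a]. It assumes the PyMatching 2 interface (Matching.add_edge with the fault_ids keyword, called qubit_id in older releases, and Matching.decode); the rates and sizes are illustrative.

import numpy as np
import pymatching

rng = np.random.default_rng(3)
L = 11
p = np.full(L, 0.05)
p[4:7] = 0.20                       # a noisier "rare region" of qubits

m = pymatching.Matching()
for i in range(L):                  # qubit i sits between checks i and i+1 (mod L)
    m.add_edge(i, (i + 1) % L, fault_ids=i, weight=np.log((1 - p[i]) / p[i]))

H = np.zeros((L, L), dtype=np.uint8)   # check j = parity of qubits j-1 and j (mod L)
for j in range(L):
    H[j, j] = 1
    H[j, (j - 1) % L] = 1

errors = (rng.random(L) < p).astype(np.uint8)
syndrome = (H @ errors) % 2
correction = m.decode(syndrome)
residual = (errors + correction) % 2   # syndrome-free, so either trivial or the logical
print("logical failure:", bool(residual[0]))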
§.§ Repetition Code and Toric Code with Uniform Error Rates
Here we provide baseline numerics using MWPM for the repetition code and the toric code. We utilize a uniform error rate in both systems in order to obtain p_0^2D and p_0^3D. This data is shown in Fig. <ref>(a) and Fig. <ref> respectively. The values we obtain are consistent with those found in the literature <cit.>.
For the repetition code, we also calculate the defect cost similarly to the procedure used in Fig. <ref>. This is shown in Fig. <ref>(b). The defect cost is expected to scale linearly in L (i.e. z = 0), corresponding to a purely exponentially decaying failure rate. However, due to subleading corrections in the defect cost scaling, the empirical value of z is slightly greater than 0. Furthermore, the error rates shown here are close enough to the threshold that we cannot rule out contributions to the scaling form from critical scaling. This is to be contrasted with the case of the rare-event repetition code in the main text where the nonzero empirical value of z for p_ rare > p_0^2D is theoretically understood to be coming from a sublinear leading term, and we see its effects reasonably far from the threshold.
§.§ Griffiths physics for MWPM
We used free energies of defects to inform our theoretical treatment of success probabilities. This picture holds for the maximum likelihood (ML) decoder, where success probabilities can be reframed in terms of free energies of defects in statistical mechanical models with couplings K=β J set in terms of the error rates by “Nishimori conditions."
For MWPM, ℙ_fail is actually set by the probability that introducing a defect lowers the energy of the system. Quantities that depend on details of energies probe zero temperature physics, which can be in a different phase from the finite-temperature phase probed by the ML decoder.
However, we argue that the relevant Griffiths physics at play does not change much from the picture for the ML decoder.
For specificity, consider the case of the RBIM. Here, the defect is a domain wall formed by flipping the sign of a column of couplings. For sufficiently large disorder strengths at zero temperature, there is a phase transition from the ferromagnet to a spin glass phase. This spin glass phase will control the physics of the “wrong phase" spatial regions in the Griffiths phase.
Assume that ℙ_fail is largely set by the largest rare region living in the wrong phase. In an L by L system, this largest rare region has size L_⊥ by L with L_⊥ scaling as ln(L). The corresponding ℙ_fail for MWPM on a model with the dimensions of the rare region will be bounded below by that of the maximal likelihood decoder on the same region.
However, the maximal likelihood decoder has its failure probability controlled by free energy differences, and the free energy difference in the paramagnetic phase is exponentially suppressed in system size as a L e^-b L_⊥ for some a and b that are functions of error rates but are independent of system size. This gives ℙ_fail∼ 1/(1+e^a L e^-b L_⊥) for the ML decoder whenever the error rate is sufficient to enter the paramagnetic phase. Note again that ℙ_fail for the MWPM decoder is also bounded below by this quantity; though this is not a proof, we believe it plausible that ℙ_fail for the MWPM decoder in the spin glass phase behaves in a similar manner to that of the ML decoder in the paramagnet. That is, we believe ℙ_fail∼ 1/(1+e^a' L e^-b' L_⊥) for some new functions a',b' of the error rates. Furthermore, we expect this functional form to hold through the whole spin glass phase, not only where the error rate is sufficient for the better ML decoder to leave the ferromagnetic phase. Under this assumption, the phases of ℙ_fail in Fig. <ref>(c) remain unchanged for the MWPM decoder, even if the phase boundaries shift relative to the ML decoder.
An analogous argument holds for MWPM in the 3D gauge theory.
§.§ Mean-Median Separation
In the main text, we noted that our numerical results yielded a separation between the mean and median logical failure rates as calculated from the MWPM decoder. This phenomenon is shown in Fig. <ref>. When there is uncorrelated or short range correlated disorder, this distribution is expected to be approximately Gaussian. We find that this is emphatically not the case for long range correlations. There are several sources of randomness that could lead to asymptotically different scaling forms for the two quantities. For instance, the largest rare region in a given realization could be larger than its typical size by a big 𝒪(1) constant times log(L), which could enhance the failure rate to be 𝒪(1) instead of decaying. Those rare disorder realizations, which occur with probability ∼ 1/poly(L), would skew the mean failure probability to be at least ∼ 1/poly(L) rather than stretched exponential. Such uncommon realizations will cause the mean to scale differently than the median. Though we have theoretical reason to suspect that the mean and median should scale differently (power-law versus stretched exponential), we do not resolve this asymptotically different scaling in the numerics.
§.§ Distribution Dependence
In Fig. <ref>, we present data on how p_D for the repetition code changes with γ. We predictably find that p_D increases with decreasing γ.
§ DERIVATION OF DEFECT TENSIONS
Here, we will derive the defect tension for both the Ising model and the plaquette gauge theory using low and high temperature expansions. These derivations will be asymptotic in nature. Though the models discussed in the main text are disordered and contain couplings with both sign and magnitude disorder, we expect that the bulk phase properties to have the same asymptotic scaling behavior as the clean models.
§.§ 2D Ising Model
In a ferromagnet, the defect cost scales as L with a 𝒪(1) line tension. We can directly calculate this from the low temperature expansion. At lowest order in e^-K, the partition function of an L by L_⊥ clean Ising system is Z ∼ 2 e^2K L L_⊥. On introducing a domain wall via antiperiodic boundary conditions along the length L direction, the lowest order contribution is Z_DW∼ 2L_⊥ e^2 K L L_⊥ -2KL. This contribution comes from the L_⊥ locations that the shortest domain wall of length L can be placed. The definition of the domain wall free energy (up to a multiplicative factor of β) is Δ F = log(Z) - log(Z_DW), giving Δ F ∼ 2 K L - log(L_⊥) ∼ 2KL and a domain wall tension of 2K. Higher-order contributions renormalize the domain wall tension of 2K via a series in e^-K.
In a paramagnet, the defect cost should scale as L e^-zL_⊥. We may calculate this from the high temperature expansion, which sums over weighted loops on the real lattice. Recall the high temperature expansion of the Ising model:
Z = (cosh K)^2N∑_{σ_i }∏_i,ν( 1 + (tanh K) σ_i σ_i+ν)
where ν is a lattice translation. Since σ_i^2 = 1 and ∑_σ_i = ± 1σ_i = 0, only terms which contain no factors of σ_i can contribute to the partition function. These terms correspond to closed loops built out of bonds. The first few terms of the partition function series are
Z = (cosh K)^2N(1 + L L_⊥ (tanh K)^4 + 2 LL_⊥ (tanh K)^6 + …)
The effect of anti-periodic boundary conditions (i.e. flipping the sign of the couplings on some column) is to put a - sign on some of the loops. Note that any contractible loop must contain an even number of links on the antiferromagnetic column.
Thus, the first nontrivial contribution to
Δ F = log(Z) - log(Z_DW)
will be from the shortest-length non-contractible loops that run perpendicular to the antiferromagnetic column. These loops have length L_⊥, and there are L of them.
Since such loops carry opposite signs between Z and Z_DW, we have
Δ F ∼ 2 L (tanh K)^L_⊥
with manifest exponential decay in L_⊥.
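This leading-order formula can be checked by brute-force enumeration on a tiny clean strip, periodic in both directions, with the row of vertical bonds closing the L_⊥ direction flipped to insert the wall; the sizes and coupling below are illustrative, and agreement is only expected up to higher-order loop corrections.

import itertools
import numpy as np

L, L_perp, K = 5, 3, 0.2        # illustrative strip dimensions and high-temperature coupling

def log_Z(antiperiodic):
    Z = 0.0
    for bits in itertools.product((1, -1), repeat=L * L_perp):
        s = np.array(bits).reshape(L_perp, L)
        E = np.sum(s * np.roll(s, -1, axis=1))   # horizontal bonds (periodic in x)
        v = s * np.roll(s, -1, axis=0)           # vertical bonds (periodic in y)
        if antiperiodic:
            v[-1, :] *= -1                       # flip the row of bonds closing the y direction
        Z += np.exp(K * (E + np.sum(v)))
    return np.log(Z)

dF = log_Z(False) - log_Z(True)
print(dF, 2 * L * np.tanh(K) ** L_perp)          # should agree to leading order in tanh(K)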
§.§ 3D Gauge Theory
Magnetic flux tubes are the defects of the ℤ_2 lattice gauge theory stat mech model corresponding to the 2+1D toric code. In the low temperature phase of the clean gauge theory, we may obtain the cost of this defect through a low temperature expansion. This cost comes from the free energy difference of flipping the sign of the coupling of a column of plaquettes. The defect cost is
Δ F = F_ flipped column - F_ no flipped column
= -logZ_ flipped column/Z_ no flipped column.
Similar to the 2D Ising model, at lowest order in e^-K, the partition function of a clean L_x × L_y × L_z system is Z_ no flipped column∼ 4e^3K L_x L_y L_z. Upon fixing a flux tube by flipping a the sign of row of plaquettes transverse to the xy plane, the lowest order contribution to Z_ flipped column is ∼ 4L_x L_y e^K(3L_x L_y L_z - 4L_z. This form is the result of the column of antiferromagnetic bonds lowering the energy of the ground state configuration. There are L_x L_y places to put this column. The flux tube cost becomes Δ F = log (Z_ no flipped column) - log (Z_ flipped column) again up to a multiplicative constant of β. This reduces to Δ F ∼ 4KL_z - log L_x L_y ∼ 4KL_z. The flux tube tension is an 𝒪(1) number, 4K and again, higher order contributions renormalize the tension in powers of e^-K.
Next, we calculate how the defect cost should scale in the high temperature phase. The first non-unity term in the high temperature expansion are closed surfaces of area 6. However, any closed surface which is contractible will always contain even numbers of plaquettes from the tube, similar to the 2D case. Therefore, the first term which is different between the Z_ flipped column and Z_ no flipped column will be the product of all the plaquettes within a plane pierced by the tube. This occurs at order L_xL_y. We obtain the defect cost in the high temperature phase to be
Δ F ∼ 2 L_z (tanh K)^(L_x L_y)
with manifest exponential decay.
We see that the defect cost decreases exponentially in the area of the plane within the high temperature phase.
§ DISCUSSION OF THE PHASE DIAGRAMS WITH KRAMERS-WANNIER DUALITY
In this Appendix, we provide additional discussion of the phase diagrams in Fig. <ref>. We emphasize how differences in these phase diagrams arise from the dimensions of the rare regions.
To build intuition, we consider success probabilities controlled by free energies in the context of maximum likelihood decoding. We neglect the random sign disorder in these stat mech models for simplicity, as we do not believe the sign disorder affects the relevant properties of the phases. At zero temperature (appropriate for describing the MWPM numerics in the main text), the phase transitions are in fact driven entirely by the random sign disorder; however, see the discussion in Appendix <ref>.
We make use of Kramers-Wannier dualities where defect free energy costs are mapped to spin correlation functions in the dual Ising models. Within this formulation, we attribute the difference in the phase diagrams to whether the dimension of the rare regions are below or above the lower critical dimension of the dual Ising models. In particular, the fact that a two dimensional Ising model can order is responsible for the absence of a stretched exponential phase of the toric code in (2+1)D.
§.§ Quasi-1D rare regions for repetition code in (1+1)D spacetime
As we explain in the main text, the stat mech model relevant to the repetition code in (1+1)D with non-uniform noise rates is the McCoy-Wu model (after neglecting weak random sign disorder). The z direction corresponds to time, where faulty measurements are performed up to a time t=L_z, and perfect measurements (simulating readout via single-site measurements) are performed at t=L_z.
In the stat mech model,
P_succ - P_fail = (Z_++ - Z_+-)/(Z_++ + Z_+-),
where Z_++ refers to fixing both the top and bottom boundary to be all σ = 1, Z_+- refers to fixing the top and bottom boundaries to have opposite orientation, and P_succ and P_fail denote the success and failure probabilities of maximum-likelihood decoding. Note that this quantity can be interpreted in two equivalent ways in this primal model. It is asymptotically 1 whenever the free-energy cost of inserting a domain wall grows unboundedly in system size. It additionally can be rewritten as P_succ - P_fail = ⟨σ_i,z=0σ_j,z=L_z⟩_∞ in a model without fixed boundary conditions but with infinite-strength horizontal couplings K_x = ∞ on the top and bottom boundaries.
When thinking in terms of free-energy costs, it is useful to consider just
P_fail = Z_+-/(Z_++ + Z_+-),
and in particular the ratio Z_+-/Z_++ ≤ 1. Note that when Z_+-/Z_++ is small, P_fail ∼ Z_+-/Z_++, and when Z_+-/Z_++ ∼ 1, P_fail ∼ 1/2.
We can rewrite Z_+-/Z_++ in terms of quantities in the Kramers-Wannier dual models, and we detail this construction in Fig. <ref> and Fig. <ref> for the respective cases of a periodic and open repetition code. For simplicity, in the figures we do not visually distinguish between rare and bulk regions; later, we will view these as uniform L by L_⊥ systems that model the largest rare regions of height L_⊥∼ln(L).
Under Kramers-Wannier duality, the spins of the dual model (which we will label as μ to distinguish them from the spins σ of the primal model) live on the dual lattice, and local couplings K = β J are in one-to-one correspondence with the dual couplings K → K^* = -1/2log(tanh(K)).
The rare regions have K_x, rare < K_c^2D < K_x, bulk, see Fig. <ref>(a).
Therefore, in the dual Ising model, the local couplings in the rare regions and in the bulk satisfy K_x, rare^∗ > K_c^2D > K_x, bulk^∗. In the main text, K_z is always set to K_c^2D in all regions; more generally, K_x,c^2D would then be a function of K_z.
For the repetition code in periodic boundary conditions, an additional η “domain wall" degree of freedom is required in the dual model. This η variable allows the low temperature expansion of the dual model to match the high temperature expansion of the primal model; the latter includes lines that cross the bulk and terminate at the bottom and top boundaries. Such lines correspond to a single vertical domain wall in the dual model. When the horizontal boundary conditions are periodic, domain walls naively come in pairs in the dual model. The introduction of η allows for single domain walls in the dual model by converting a column of couplings in the dual model to be antiferromagnetic instead of ferromagnetic. In particular, as noted in Fig. <ref>, the couplings on such a column are weighted by η. Summing over η=± 1 thus sums over all configurations with an even and odd number of vertical domain walls in the dual model.
In the high temperature expansion of Z_++, configurations with even and odd numbers of vertical lines crossing the bulk come with the same sign. However, such configurations come with opposite signs for Z_+-. By weighting the Boltzmann weights of the dual model by η, we reproduce the relative sign differences.
That is,
Z_+-/Z_++ = ⟨η⟩_*
where * denotes an expectation in the dual model.
In our numerics, we always take periodic boundary conditions for the repetition code. However, the dual of the free energy cost of a domain wall can be massaged into a conceptually simpler form in open boundary conditions. We summarize the dual model for open boundary conditions in Fig. <ref>.
In Fig. <ref>(b), we show the corresponding dual model. Unlike in periodic boundary conditions, the dual expression can now be massaged into a more natural correlation function between dual spins on opposite edges.
By sending μ→ -μ to the right or left of the η-weighted bonds, we can convert configurations with η=-1 into configurations with η=1 but oppositely polarized left and right boundary conditions. We can view these fully polarized boundary conditions as coming from free boundary conditions with infinite couplings. Furthermore, the relevant signs of configurations can be found by weighting with μ_L μ_R instead of η; here μ_L is any dual spin on the left boundary and μ_R is any dual spin on the right boundary; the infinite couplings on the boundary make the choice immaterial.
Explicitly,
Z_+-/Z_++ = ⟨μ_L μ_R⟩_*,∞ (OBC)
for any choice of μ_L on the leftmost boundary and any choice of μ_R on the rightmost boundary. Here ∞ denotes that the left and right boundaries have infinite vertical couplings; see Fig. <ref>(c).
From this relation, we can identify the line tension of the domain wall with the inverse correlation length of the dual Ising model, namely σ = (ξ^∗)^-1.
Below, we focus on the largest rare regions (LRR). We assume that P_succ - P_fail is controlled by configurations where the domain wall is entirely contained within the largest paramagnetic rare region, where the domain wall gains the most entropy and where the line tension is the smallest. For conceptual simplicity, we use an L by L_⊥ system as a model of the LRR. The horizontal couplings are all (K_x, rare)^*.
The height of the largest rare region L_⊥ goes as ln(L), but it is worthwhile to consider L_⊥ fixed and finite first. Importantly, the dual Ising model is (quasi) one-dimensional and does not order at any nonzero temperature (K_x, rare)^* < ∞, and always has a finite correlation length ξ^∗.
Correspondingly, the line tension in the primal model σ = (ξ^∗)^-1 is nonzero at any non-infinite temperature K_x, rare > 0.
The infinite critical temperature for the vanishing of this domain wall line tension can be compared with the corresponding decoding problem at L_⊥ = O(1).
When introducing the random sign disorder back into the primal Ising model according to the Nishimori conditions, the primal model describes the decoding problem of a repetition code with length L and run for L_⊥ time steps.
In this case, perfect state initialization and perfect final syndrome measurement are assumed, so that the domain wall can only fluctuate between y ∈ [0, L_⊥].
It is easy to show that this code has threshold p_ th = 0.5 for any L_⊥ = O(1).
For both finite L_⊥ and L_⊥ = ln(L), the μ-μ correlation function can be obtained from a low temperature expansion within the dual model, yielding (ξ^∗)^-1∝ e^-2K^∗· L_⊥.
This expansion is similar to that leading to Eq. (<ref>).
Alternatively, viewing the dual partition function as a path integral with the x direction as a Euclidean time, the inverse correlation length is given by the splitting between ground state energies of the transfer matrix, i.e. a transverse field Ising model of length L_⊥.
This is again exponentially suppressed by L_⊥ when K_x, rare^∗ > K_c^2D.
This points to a difference between strictly finite L_⊥ and L_⊥ growing unboundedly with L; the latter can have an asymptotically vanishing inverse correlation length. For L_⊥ ∼ log(L) and K_x, rare^∗ > K_c^2D, the inverse correlation length in the dual model vanishes as a power law in L. Correspondingly, the domain wall tension in the primal model decays algebraically as 1/L^z, and the domain wall cost goes as L^1-z, where z is a function of the coupling strength.
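To make the power law explicit: writing L_⊥ = c ln(L) for the height of the largest rare region (c an 𝒪(1) constant, used again below), the leading-order estimate (ξ^∗)^-1 ∝ e^-2K^∗· L_⊥ of Eq. above gives
σ = (ξ^∗)^-1 ∝ e^(-2K_x, rare^∗ c ln(L)) = L^-z, with z = 2 K_x, rare^∗ c at this leading order,
so that the domain wall cost σ · L indeed scales as L^(1-z); higher-order terms in the expansion renormalize z.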
§.§ Quasi-2D rare regions for toric code in (2+1)D spacetime
After neglecting weak random sign disorder, the toric code in (2+1)D spacetime is described by a ℤ_2 lattice gauge theory in three dimensions, where the coupling strengths are uniform within each plane but may vary from plane to plane, see Fig. <ref>.
We again focus on the LRR, and treat it as a quasi-2D system. We similarly assume that a homologically nontrivial flux loop along the x or y directions receives the most contribution from configurations where the flux loop is completely contained within the LRR. More precisely, we assume that the behavior of the phases can be understood in terms of the properties of the LRR, even if some of the phase boundaries change when considering the additional effects of smaller rare regions.
Under Kramers-Wannier duality, the LRR is described by a quasi-2D Ising model with dimensions L_x × L_y × L_⊥, where L_x, L_y →∞. The height of the largest rare region L_⊥∼ln(L_z) will therefore slowly diverge with system size L, but it is again worthwhile to consider L_⊥ fixed and finite first.
Within the LRR we have T_ rare^∗ < T_c^3D, *, where T_c^3D, * is the critical temperature of the dual 3D Ising model.
Under appropriate boundary conditions, we also have the relation Eq. (<ref>) and can identify σ = (ξ^∗)^-1. The dual Ising model in the LRR can develop long-range order at finite temperature, with a critical temperature denoted T_c^2D, ∗(L_⊥). T_c^2D, ∗(L_⊥) increases monotonically with L_⊥, and approaches the 3D transition temperature T_c^3D, * as L_⊥→∞.
Thus, for any T_ rare^∗ > T_c^3D, *, we have T_ rare^∗ > T_c^2D, ∗(L_⊥) for L_⊥ larger than a T_ rare^∗-dependent constant.
Correspondingly, whenever T_ rare^∗ > T_c^3D, *, the inverse correlation length and the domain wall line tension asymptotically vanish as L_⊥ increases.
In the toric code decoding problem with L_x = L_y = L_z = L, L_⊥ ∝ ln(L), and hence L_⊥ is unbounded above as L grows. The above discussion about the approach of T_c^2D, ∗(L_⊥) to T_c^3D, * implies that whenever the error rate in the rare regions exceeds p_c^3D, logical errors will proliferate and the code is not decodable.
The contrast with the repetition code case can therefore be attributed to the dimension of the rare regions.
When T_ rare^∗ < T_c^2D, ∗(L_⊥), the correlation length can either be obtained via an expansion along the lines of Appendix <ref>, or via the ground state splitting of a transverse field Ising model of size L_y × L_z, both yielding lnσ∝ - L_y · L_z for a defect in the x direction.
§.§ Transversal defects for toric code in (2+1)D spacetime
In our time dependent error rate toric code, we obtained a gauge theory where the plaquette coupling magnitudes were completely correlated in the spatial direction. By duality, this is related to a 3D Ising model with bond coupling magnitudes which are correlated in planes. The 3D Ising model with planar defects is believed to have a “smeared" phase transition <cit.>.
As each of the rare regions have infinite extent in two spatial dimensions, they are able to undergo phase transitions independently of the bulk. Then, as one lowers the temperature, different parts of the system order independently at different temperatures and the global order parameter develops smoothly from 0, when the smallest rare region orders. At the temperature where the smallest rare region orders, there is an essential singularity in the free energy.
In Figure <ref>, we show numerics for the threshold of the transverse defect for the toric code in (2+1)D spacetime.
This defect goes in the temporal direction, perpendicular to the planar rare regions. In the repetition code, we were able to interpret the vanishing of the transverse defect as the point at which the underlying statistical mechanics model undergoes a phase transition. It is not clear whether that interpretation is true for the gauge theory. In future work, it would be interesting to further investigate this and whether symptoms of the smearing could be observed in the code. Furthermore, it would also be interesting to study what happens in the classical statistical mechanics model with planar defects when there is also uncorrelated sign disorder.
§ CROSSOVER SCALING FUNCTION
Here we discuss in some detail the crossover scaling function for an L× L × L_⊥ system with uniform error rate p.
We posit that the mean failure probability P_fail of the code near p_0^3D is captured by the following phenomenological crossover scaling function (see Eq. (<ref>))
P_fail(p, L, L_⊥) ≈ Φ[ (p-p_0^3D) · L^1/ν_3, L_⊥ / L ] ≡ Φ[x,y].
As usual for finite size scaling, we require both L and L_⊥ to be large for the scaling function to be descriptive.
We are particularly interested in the behavior of this function when y → 0, as the largest rare regions will have typically (L_⊥)_ max∝ln L.
For future convenience, we define single-parameter scaling functions Φ^3D_y[z] for each y,
Φ^3D_y[z ≡ y^1/ν_3· x] ≡Φ[x,y].
They are therefore cross sections of Φ at constant values of y (up to a rescaling of x).
For any finite y > 0, Φ^3D_y is analytic, and describes the 3D bulk RPGT transition with aspect ratio y.
We extract the functional form of Φ^3D_y → 0[z] in two steps.
(i)
We first take L →∞, while keeping (p-p_0^3D) and L_⊥ finite.[In this limit, we have y → 0, x →∞, but z = y^1/ν_3· x = (p-p_0^3D) · L_⊥^1/ν_3 remains finite. This justifies our choice of the parameter z for Φ_y^3D, see Eq. (<ref>).]
In this case, the system is an infinite 2D slab with height L_⊥, which has an L_⊥-dependent critical error rate, denoted p_0^2D(L_⊥).
Therefore, as we take L →∞, we expect that
P_fail = (3/4) · Θ(p-p_0^2D(L_⊥)).
On the other hand, by definition of Φ^3D_y we have
P_fail = Φ^3D_y=0[z = (p-p_0^3D) · L_⊥^1/ν_3].
Comparing Eqs. (<ref>, <ref>), we conclude that Φ^3D_y=0 is necessarily singular, with a step singularity at z = z_0.
Matching the location of the singularity, we have
z_0 = (p_0^2D(L_⊥) - p_0^3D) · L_⊥^1/ν_3
⇒ p_0^2D(L_⊥) - p_0^3D = z_0 · L_⊥^-1/ν_3,
see also Eq. (<ref>).
Again, this relation holds when L_⊥ is sufficiently large.
Therefore, from the crossover scaling function we can infer how p_0^2D(L_⊥) approaches p_0^3D with increasing L_⊥.
(ii)
Next, to see how Φ^3D_y becomes singular as y → 0, we continue to keep (p-p_0^3D) and L_⊥ finite, and consider large but finite L, so that y = L_⊥ / L ≪ 1.
In this case, we expect to recover the scaling function near the 2D RBIM transition, namely
P_fail = Φ^3D_y → 0[z] = Φ^2D [w = λ(L_⊥) · (p-p_0^2D(L_⊥)) · L^1/ν_2],
where λ(L_⊥) is an L_⊥-dependent multiplicative factor.
We emphasize that Φ^2D is an analytic universal scaling function, and can be extracted e.g. from logical failure rates of the 2D toric code with perfect measurements.
Noticing that
(p - p_0^2D(L_⊥)) · L^1/ν_2
= ((p - p_0^3D) - (p_0^2D(L_⊥) - p_0^3D)) · L^1/ν_2
= ((p - p_0^3D) - z_0 · L_⊥^-1/ν_3) · L^1/ν_2
= ((p - p_0^3D) · L_⊥^1/ν_3 - z_0) · L_⊥^-1/ν_3 · L^1/ν_2
= (z - z_0) · y^-1/ν_2 · L_⊥^(1/ν_2 - 1/ν_3),
we may therefore define as our scaling variable
w ≡ (z-z_0) · y^-1/ν_2 = λ(L_⊥) · (p - p_0^2D(L_⊥)) · L^1/ν_2
where λ(L_⊥) = L_⊥^(1/ν_3 - 1/ν_2),
and conclude that
P_fail|_L_⊥/L → 0 = Φ^3D_y → 0[z = y^1/ν_3 · x]
= Φ^2D[w = (z-z_0) · y^-1/ν_2],
see also Eq. (<ref>).
Based on Eq. (<ref>), we include in Fig. <ref> a schematic visualization of the function Φ^3D_y for small values of y as y → 0.
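As a practical illustration of how this crossover form is used for data collapse, the following minimal Python/NumPy sketch computes the scaling variable w from raw (p, L) pairs. The function and argument names are ours, and the numerical values of p_0^3D, z_0, ν_2, ν_3 and the proportionality constant c in L_⊥ = c ln L are placeholders to be fitted or taken from independent estimates, not values asserted here.

```python
import numpy as np

def scaling_variable_w(p, L, p0_3d, z0, nu2, nu3, c=1.0):
    """Crossover scaling variable w = lambda(L_perp) * (p - p0_2d(L_perp)) * L**(1/nu2),
    with L_perp = c * ln(L), p0_2d(L_perp) = p0_3d + z0 * L_perp**(-1/nu3),
    and lambda(L_perp) = L_perp**(1/nu3 - 1/nu2)."""
    L = np.asarray(L, dtype=float)
    L_perp = c * np.log(L)                        # height of the largest rare region
    p0_2d = p0_3d + z0 * L_perp ** (-1.0 / nu3)   # shifted quasi-2D critical error rate
    lam = L_perp ** (1.0 / nu3 - 1.0 / nu2)       # multiplicative factor lambda(L_perp)
    return lam * (p - p0_2d) * L ** (1.0 / nu2)

# Usage: plotting the median failure probability against w for several system sizes
# should collapse the curves onto the single 2D scaling function Phi^2D.
```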
§.§ Fluctuations in Size of Largest Rare Region
In the main text we found that this scaling function Eq. (<ref>) collapsed the data for the planar disordered L × L × L system, by setting L_⊥≡ln L.
We claimed that this worked because the physics of the disordered model was entirely controlled by the largest rare region, which is of height (L_⊥)_max ∝ ln L. However, there are expected to be subleading 𝒪(1) fluctuations in the height of the largest rare region. It is then worth asking how these fluctuations affect the mean and median of P_fail.
In particular, we argue that scaling collapse of the mean is asymptotically destroyed by these small fluctuations in L_⊥ as L →∞; however, this only noticeably occurs in systems with at least hundreds of millions of spins. On the other hand, the median is more resilient with scaling collapse maintained at all sizes. The only caveat is that the form of the scaling variable w as a function of L at similarly large system sizes will need to change slightly to include subleading-in-L corrections to L_⊥.
For a fixed system size L, we may study the effect of a random L_⊥ through the scaling function Eq. (<ref>), where the analytic scaling function Φ^2D now has a random parameter w(L_⊥, L).
(For notational simplicity, we write Ψ for Φ^2D henceforth.)
We expand w around the mean of L_⊥ to the first order,
w(L_⊥, L) = w(𝔼[L_⊥], L) + ϵ · (∂w/∂L_⊥)|_(L_⊥ = 𝔼[L_⊥]) + O(ϵ^2),
where ϵ≡ L_⊥ - 𝔼[L_⊥].
We define w_0 = w(𝔼[L_⊥], L), and write w' ≡ (∂w/∂L_⊥)|_(L_⊥ = 𝔼[L_⊥]).
With these, we may write the mean failure probability as
𝔼[ Ψ(w)]
= 𝔼[ Ψ(w_0 + ϵ· w') ]
= 𝔼[ Ψ(w_0) ] + 𝔼[ ϵ· w' ·Ψ'(w_0) ] +𝔼[ ϵ^2 · (w')^2 /2Ψ”(w_0) ] + …
= Ψ(w_0) + 𝔼[ϵ] · w'·Ψ'(w_0) +𝔼[ ϵ^2 ] ·(w')^2/2·Ψ”(w_0) + …
= Ψ(w_0) +𝔼[ ϵ^2 ] (w')^2 /2Ψ”(w_0) + …
Here, we used 𝔼[ ϵ]=0, and we expect from standard extreme value statistics that 𝔼[ ϵ^2 ] = 𝒪(1); by 𝒪(1), we mean that it is asymptotically independent of system size.
We expect Ψ”(w_0) to also be 𝒪(1), as Ψ is analytic and bounded.
Note in particular that 𝔼[ ϵ^2 ] (w')^2 /2Ψ”(w_0) (and the higher-order corrections) will generically depend on parameters like L and error rates differently than w_0 depends on such parameters. This makes the expectation of the scaling function a sum of scaling functions with different scaling parameters, which will generically destroy single-parameter scaling if the corrections are not small.
Furthermore, we have that
w' = A L^(1/ν_2) L_⊥^(-1/ν_2 - 1) + B L^(1/ν_2) L_⊥^(-1/ν_2 + 1/ν_3 - 1).
Here, A and B are constants which depend on p - p_0^3D, z_0, ν_2 and ν_3.
Notably, w' diverges with L even when the scaling argument w is fixed and small.
Therefore, 𝒪(1) fluctuations in L_⊥ result in unbounded fluctuations in w, which will generically destroy the scaling collapse for the mean when L →∞.
However, for L_⊥ = ln(L), w' < 1 for L less than about 5000, so this destruction of scaling collapse only occurs at quite large sizes. We still see scaling collapse of the mean of P_fail at the sizes of L that we probe (not pictured).
Because we do not expect single-parameter collapse for the mean in the thermodynamic limit, our plots of collapse are all for the median. We do expect good single-parameter collapse of the median of P_fail at all L.
In particular, for a monotonic function like Ψ,
median(Ψ(w(L,L_⊥))) = Ψ(median(w(L,L_⊥)))
so collapse is maintained so long as the non-random median(w(L,L_⊥)) is used as the scaling variable.
As a last technical note, the form of median(w(L,L_⊥)) will slightly differ at sufficiently large sizes from what we use for collapse. We use w = (p - p_0^2D(L_⊥)) L^(1/ν_2) L_⊥^(1/ν_3 - 1/ν_2) evaluated at L_⊥ = c ln(L), taking L_⊥ directly proportional to ln(L) and neglecting the subleading 𝒪(1) corrections. At large sizes L, the multiplicative factor of L^(1/ν_2) nevertheless makes the effect of these subleading corrections non-negligible; at sufficiently large L, single-parameter scaling collapse occurs for the slightly more complicated parameter obtained by evaluating the same expression at L_⊥ = c ln(L) + 𝒪(1), i.e. with the explicit 𝒪(1) corrections included in L_⊥. However, this minor distinction only matters once the spins number in the hundreds of millions.
|
http://arxiv.org/abs/2409.02740v1 | 20240904142100 | Convolutional Neural Networks for Automated Cellular Automaton Classification | [
"Michiel Rollier",
"Aisling J. Daly",
"Jan M. Baetens"
] | nlin.CG | [
"nlin.CG",
"cs.LG"
] |
Convolutional neural networks for automated cellular automaton classification
Michiel Rollier Faculty of Bioscience Engineering, Ghent University, Coupure Links 653, 9000 Gent, Belgium, [email protected]
Michiel Rollier, Aisling J. Daly, Jan M. Baetens
Received 2024; accepted 2024
The emergent dynamics in spacetime diagrams of cellular automata (CAs)
is often organised by means of a number of behavioural classes.
Whilst classification of elementary CAs is feasible and well-studied,
non-elementary CAs are generally too diverse and numerous to exhaustively classify manually.
In this chapter we treat the spacetime diagram as a digital image,
and implement simple computer vision techniques to perform an automated classification of elementary cellular automata
into the five Li-Packard classes.
In particular, we present a supervised learning task to a convolutional neural network,
in such a way that it may be generalised to non-elementary CAs.
If we want to do so,
we must divert the algorithm's focus away from the underlying `microscopic' local updates.
We first show that previously developed deep learning approaches have in fact been trained to identify the local update rule,
rather than directly focus on the mesoscopic patterns that are associated with the particular behavioural classes.
By means of a well-argued neural network design,
as well as a number of data augmentation techniques,
we then present a convolutional neural network that performs nearly perfectly at identifying the behavioural class,
without necessarily first identifying the underlying microscopic dynamics.
§ INTRODUCTION
The emergent behaviour of elementary cellular automata (ECAs),
upon evolution from an initial configuration following a local update rule,
is typically displayed in a so-called spacetime diagram (Fig. <ref>).
Patterns in this diagram can generally be identified as belonging to a particular behavioural class.
Many ECA diagrams, for example, display repetitive emergent behaviour,
allowing for a collective characterisation as `periodic' ECAs.
In this chapter, we develop and evaluate a neural network (NN)
that is capable of establishing such a classification automatically,
based on rudimentary computer vision techniques.
We design and train the network in such a way
that it achieves excellent performance in identifying the behavioural class,
without necessarily inferring the (elementary) local update rule.
This achieves two goals.
First, we essentially train the network to focus on mesoscopic structures,
rather than patterns at the pixel level.
Such structures are a hallmark of emergent complexity (e.g. gliders in the Game of Life).
Complexity, in turn, is a property of non-trivial CAs of the highest interest
to mathematicians, computer scientists, and mathematical modellers <cit.>.
Second, if the network can classify the CA without the need to know
the governing elementary rule, it may also be suitable for identifying interesting classes
in non-elementary CAs.
Combining both goals, a sophisticatedly developed NN
would be able to automatically identify complex emergent behaviour
in a set of spacetime diagrams from non-elementary CAs
such as non-uniform or multi-state CAs
(see Rollier et al. <cit.> for an overview of CA families).
The number of possible variations in non-elementary CAs quickly becomes overwhelmingly large.
Human classification-by-inspection is therefore liable to missing certain patterns,
and is simply too large of a task to perform manually.
An automatic classification tool constructed from the principles of computer vision
on the other hand, is objective, scrutinous, and fast.
We present the basics ingredients of such a tool in this chapter,
explore a number of strategies for our objective,
and quantitatively compare our results.
This chapter's first Section contains an overview of the CA classification problem,
and a very brief introduction to image classification using deep learning.
We conclude the introduction with a clear definition of the available data and the research objective.
§.§ Behavioural classification of cellular automata
Systematic behavioural classification is generally an efficient means to expose
underlying structure in a heterogeneous set of phenomena.
This is certainly also the case for ECA classification,
for which it has been known for a long time that there are strong correlations between
e.g. the Langton parameter and the resulting behavioural types <cit.>.
There is no a priori `correct' way of classifying the various ECAs,
but a number of useful approaches have been proposed,
the most familiar (and earliest) being the phenomenological four-fold classification of Wolfram <cit.>.
Martínez <cit.> proposes a different classification,
and in addition reviews 15 existing classifications (see Tab. <ref>).
All classifications are based on some properties
of the ECA, an overview of which was compiled by Vispoel et al. <cit.>.
Generally, one distinguishes properties associated with the local update rule (the CA's genotype)
from properties associated with the spacetime diagram (the phenotype).
Our approach will focus on the phenotype.
In this chapter, we adopt the Li-Packard (LP) classification of ECAs
(see Li and Packard (1990) <cit.> and Tab. <ref>),
for three reasons.
First, because it is a common and well-studied classification,
which facilitates comparison of our results with those found in the literature.
Second, because the class is directly related to local structures
in the spacetime diagram (the phenotype), without the need of some intermediate analysis
such as the calculation of the power spectrum.
This is a prerequisite for the proposed classification approach,
inspired by computer vision.
Third, because the definition of its classes imposes a surjective mapping
from the set of ECA rules to the set of LP classes.
The first two reasons apply to the Wolfram classification as well,
but the third does not: Wolfram classes are qualitative,
and spacetime diagrams of the same rule can be assigned to different Wolfram classes.
This indeterminacy obstructs the generation
of a reliable dataset of rule-diagram pairs,
which is required (or at least strongly preferred)
for properly training a NN (cf. section <ref>).
§.§ Artificial neural networks for image classification
We will perform an automated LP classification by means of a convolutional neural network (CNN). Many excellent monographs and review articles on this popular topic are available; see e.g. Goodfellow et al. <cit.> for a general introduction to deep learning, and Dhillon and Verma (2020) for CNNs in particular <cit.>.
Our choice for CNNs is motivated by four observations that we briefly cover below.
First, NNs have long been shown to be capable of ECA classification.
Kunkle <cit.> has shown (over twenty years ago) that
a very simple fully-connected NN is capable of correctly identifying the LP class
in over 98% of the cases. Kunkle feeds a combination of seven parameter values to the input layer
of the NN. These parameters are based on the rule table (genotype),
rather than on a simulated spacetime diagram.
Whilst our approach will attempt to avoid precisely this kind of a priori knowledge of the CA setup,
the work does present a proof of concept regarding automated classification of ECAs.
Second, spacetime diagrams can be interpreted as digital images.
These images are as simple as they come:
single-channel, binary (black-and-white), and with a relatively low resolution.
da Silva et al. <cit.>
(further enhanced by Machicao et al. <cit.>)
have shown the feasibility of Wolfram classification of ECAs
by treating the spacetime diagram as a two-dimensional texture.
They do not use a NN approach, but rather use two conventional texture descriptors:
local binary pattern variance and Fourier descriptors.
Whilst the accuracy for identifying complexity (in Wolfram's interpretation) is only 67%,
it is highly accurate for other classes.
Generally, the work of da Silva et al. demonstrates that observing structures within a spacetime diagram
by treating it as an image containing some texture, is a promising avenue.
Third, within the spectrum of artificial intelligence,
CNNs have been shown to be
the most promising tool for image recognition and classification.
Arguably the most notable early publication that has established this claim
is LeCun et al. <cit.>
(for a general introduction to (the terminology of) CNNs,
we refer the reader to this standard work as well).
Images used in that study are typically
not as abstract and `geometrical' as the spacetime diagrams we are considering here,
but the authors make a convincing case for the general applicability of CNNs.
Fourth, CNNs have quite recently been used for automated ECA classification
in at least two publications. Silverman <cit.> constructs
a CNN for automated Wolfram classification. Going one step further,
Comelli et al. <cit.> train a simple CNN for automated class identification
for 11 classifications (all included in Tab. <ref>),
effectively comparing how easily the various classifications are taught to this CNN.
Whilst both approaches show that it is indeed feasible to construct a CNN
that is capable of inferring an ECA class simply based on its spacetime diagram,
both also fail to notice that the transition rules hidden within the diagram are a dead giveaway (cf. section <ref>).
This implies that the CNN is not necessarily trained to recognise mesoscopic patterns
that human observers would use for classification, but could simply be a (computationally demanding)
way of inferring the local update rule.
This is the issue we will phrase in more detail,
and will attempt to avoid in this chapter, with the aim of improving generalisability.
§.§ Data and objectives
We consider all 256 ECA rules, evolved over 64 time steps, for finite ECAs with 64 cells and periodic boundary conditions.
This allows for 2^64 initial configurations, resulting in 2^64× 256 ≈ 10^21 possible spacetime diagrams.
From this set, we sampled 1024 spacetime diagrams for each of the 256 local update rules for a total of 262144 diagrams,
with an additional 128 diagrams per rule for testing the CNN accuracy (cf. section <ref>).
Fig. <ref> displays an example of these data for every LP class.
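For reference, a minimal NumPy sketch of how such diagrams can be generated is given below; it is not the code used for the experiments, and the function name and defaults (64 cells, 64 time steps, periodic boundaries, uniformly random initial row) are simply chosen to match the setup described above.

```python
import numpy as np

def eca_spacetime(rule, width=64, steps=64, rng=None):
    """Evolve an elementary CA with periodic boundaries from a random initial row.

    Returns a (steps, width) binary array: row 0 is the initial configuration,
    and each subsequent row follows the Wolfram-numbered local update rule."""
    rng = np.random.default_rng() if rng is None else rng
    # rule table: index = 4*left + 2*centre + 1*right, value = new centre state
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    diagram = np.zeros((steps, width), dtype=np.uint8)
    diagram[0] = rng.integers(0, 2, size=width, dtype=np.uint8)
    for t in range(1, steps):
        left = np.roll(diagram[t - 1], 1)    # left neighbour of each cell
        right = np.roll(diagram[t - 1], -1)  # right neighbour of each cell
        idx = 4 * left + 2 * diagram[t - 1] + right
        diagram[t] = table[idx]
    return diagram
```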
The objective of the classification task is to predict the LP class, given a spacetime diagram.
We first show that this objective is easily achieved through a grid search (section <ref>).
We next show that this objective can also be achieved by fitting the diagrams (`data') to the rule or class (`labels') by means of a CNN (section <ref>).
§ A BASIC CNN FOR CA CLASSIFICATION
In this Section we show how the classification task can be solved almost perfectly with a basic algorithm.
We subsequently construct an alternative solution using a CNN and assess its performance.
§.§ Benchmark: a simple grid search
Clearly, simply scanning the spacetime diagram will allow to `fill in' the associated rule table
(all eight `T-tetrominos', such as in Fig. <ref>).
In practice, one could simply observe three adjacent cells together with
the central cell in the next time step to fill in the first rule table entry.
Next, shift the observation one cell to the right,
and fill in the next rule table entry – provided that the ordered triplet of cell states is different.
Continuing in this fashion for a random initial configuration,
the probability of encountering all eight triplets
– and hence completing the rule table – is over 99%
in the first 64-cell timestep (the first row) alone.
Due to the surjective mapping from rules to LP class,
this simple grid search allows for solving the classification objective
(presented in Section <ref>)
with a near-perfect accuracy in a very efficient way.
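A sketch of this scan in NumPy might look as follows; the helper names are ours, and the final rule-to-class lookup (the surjective map onto the five LP classes) is assumed to be available as a precomputed 256-entry table rather than reproduced here.

```python
import numpy as np

def scan_rule_table(diagram):
    """Grid search: read off the local update rule from a spacetime diagram.

    Returns an array of 8 entries (one per neighbourhood); entries whose
    neighbourhood never occurs in the diagram are left as -1 (missing information)."""
    steps, width = diagram.shape
    table = -np.ones(8, dtype=int)
    for t in range(steps - 1):
        left = np.roll(diagram[t], 1)
        right = np.roll(diagram[t], -1)
        idx = 4 * left + 2 * diagram[t] + right
        table[idx] = diagram[t + 1]          # observed outcome for each neighbourhood
    return table

def rule_number(table):
    """Convert a complete rule table (no -1 entries) back to its Wolfram number."""
    return int(np.sum(table * (2 ** np.arange(8))))

# The Li-Packard class then follows from a fixed 256-entry lookup (rule -> class),
# which implements the surjective mapping discussed above.
```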
In some rare cases fewer than all eight rule table entries
are encountered in the 64×64 diagram.
Then, for every missing rule table entry,
the probability of still guessing the right rule is halved.
The probability of still guessing the correct class, however,
decreases more slowly, as a result of the relationships between rule tables of rules
belonging to the same LP class.
Scanning our simulated data set, we find that 370 diagrams from the training set
and 33 diagrams from the test set contain incomplete information,
which amounts to ∼0.1%.
Not a single diagram contains fewer than seven rule table entries,
which enables correct LP classification for all but one spacetime diagram,
despite an incomplete rule table.
Fig. <ref> displays these results.
Null rules often (and disproportionately) generate missing information,
as do fixed-point rules.
This of course is due to the fact that such rules either annihilate or simply repeat any input information,
respectively, whilst (locally) chaotic rules generate unpredictable patterns.
Additionally, in Fig. <ref>,
two spacetime diagrams with missing information are shown.
These two examples are interesting for different reasons.
On the left, we display the only diagram (out of the nearly 3 × 10^5 samples)
that could be incorrectly classified (with a probability of 50%) using this benchmark scan.
Whilst it is the diagram of a fixed-point ECA,
it could be identified as being in the fixed-point class.
On the right, we show a diagram that is chaotic, but counter-intuitively still manages to withhold an eighth rule table entry.
§.§ The default CNN
Finite ECA spacetime diagrams can be interpreted as digital images,
which allows our objective to be interpreted as a well-defined computer vision task.
Given the massive data set at hand, it should be possible
to train a CNN to correctly infer the class of a diagram it has not encountered before.
The possibility of this approach has been explored by Silverman <cit.>
for the Wolfram classification on ECAs,
reporting an accuracy of over 99.7% whilst training,
and a perfect 100% score on the test set.
More recently, Comelli et al. <cit.>
used a similar simple CNN to compare 11 variations of ECA classification,
reporting an accuracy of 97.58% for LP classification.
These extremely high accuracies can be understood by acknowledging that
for many ECA classifications, we are feeding a solvable problem to the algorithm
– as was demonstrated in the benchmark grid search (section <ref>).
As a first step, we reproduce the CNN architecture proposed by
Silverman <cit.>,
briefly explaining every design choice
(see Fig. <ref> for a visual overview).
§.§.§ Architecture and activation functions
From input to output, the information stream encounters two consecutive convolutional layers,
a global maximum pooling layer, and a 256-node dense layer.
The input layer is a single-channel black-and-white image with a resolution of 64×64.
Next is a 16-channel convolutional layer,
where every channel contains the result of a 3×2 convolution
for a particular choice of values for the six weights and one bias parameter.
This sums to 112 free parameters.
By choosing 16 channels,
the weights and biases for each channel can be optimised to recognise a T-tetromino from the rule table,
that comes in 16 variations.
Due to the shape of the convolutional kernel,
the leftmost and rightmost columns of pixels cannot be convolved,
and neither can the bottom row.
The resulting channels therefore have a resolution of 62×63.
The output of this convolution is activated by means of a rectified linear unit (ReLU),
simply defined as x ↦max(0,x).
For an intuitive understanding of the information flow, see
Fig. <ref>.
A similar convolutional layer maps to 16 60×62 channels, again ReLU-activated,
for further increasing the spatial sensitivity and overall power of the model.
This layer contributes 1552 trainable parameters.
In the information flow we next have a bottleneck:
the global max pooling layer simply keeps only the maximum value
of each of the 16 channels, in order to filter out noise.
After the bottleneck comes a fully-connected layer of 256 nodes,
i.e. each of the maximum values of the 16 convolutional channels
is connected to all of the 256 nodes, for a total of 4096 weights and 256 biases.
Note that this layer contributes over half of the total number of trainable parameters.
If the objective is to predict the ECA rule, the 256-node layer is the output layer.
If, however, we aim to predict the LP class,
the values in the previous layer are connected to a five-edge output layer.
The values from the final layer
are mapped to the unit interval by means of a SoftMax function (similar to the hyperbolic tangent function),
such that the output vector can be read as a discrete probability distribution.
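A Keras sketch consistent with this description is shown below. The orientation of the 3×2 kernel (here two time steps by three cells) and the absence of an activation on the hidden 256-node layer in the five-class variant are our reading of the text rather than details taken from the original code; the layer sizes do, however, reproduce the quoted parameter counts (112 and 1552 for the convolutions, 4096 weights plus 256 biases for the dense layer, and 1285 for the optional five-class head).

```python
from tensorflow import keras
from tensorflow.keras import layers

def default_cnn(n_outputs=5):
    """Default CNN: two 3x2 convolutions, global max pooling, 256-node dense layer.

    n_outputs=5 gives the Li-Packard classifier (7301 parameters);
    n_outputs=256 makes the 256-node dense layer itself the softmax output
    for rule identification (6016 parameters)."""
    inputs = keras.Input(shape=(64, 64, 1))                   # binary spacetime diagram
    x = layers.Conv2D(16, (2, 3), activation="relu")(inputs)  # 2 time steps x 3 cells, 112 params
    x = layers.Conv2D(16, (2, 3), activation="relu")(x)       # 1552 params
    x = layers.GlobalMaxPooling2D()(x)                        # keep only each channel's maximum
    x = layers.Dense(256)(x)                                  # 4096 weights + 256 biases
    if n_outputs == 256:
        outputs = layers.Activation("softmax")(x)
    else:
        outputs = layers.Dense(n_outputs, activation="softmax")(x)  # 1285 params for 5 classes
    return keras.Model(inputs, outputs)
```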
§.§.§ Training strategy
With the architecture outlined in Section <ref> in place, the 6016 or 7301 free parameters
need to be inferred.
This is achieved by back-propagation using the highly convenient Keras infrastructure <cit.>.
In short, first the generated data set of 2^18 diagrams is randomly split
into a 3/4 training set and a 1/4 validation set.
No cross-validation is required due to the size of the data set.
Next, labels associated with each diagram are translated to one-hot vectors
and compared with the CNN's outcome by means of categorical cross-entropy.
We select a batch size of 64: a new set of parameter values is proposed after evaluating 64 diagrams.
This effectively means that the back-propagation method implements 3072 parameter updates for each time the full training set is evaluated (each `epoch').
Weights and biases are initialised randomly,
and updated by means of Keras's built-in Adam optimiser.
The optimiser's learning rate parameter
– which determines the velocity at which the optimisation process moves through parameter space –
is set at 10^-3.
We run over maximally 50 epochs, but abort training when the difference in accuracy on the validation set is less
than 10^-5 over 5 subsequent epochs.
The model is trained using an NVIDIA T4 GPU,
running over a single epoch in approximately 25 seconds.
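The corresponding training loop, expressed with standard Keras components, could look like the sketch below; `model` is the network from the previous sketch, `x_train`, `y_train`, `x_val`, `y_val` are placeholder names for the one-hot-labelled diagram arrays, and restoring the best weights after early stopping is our own choice rather than something stated in the text. The optimiser, learning rate, batch size, epoch limit and stopping criterion are those quoted above.

```python
from tensorflow import keras

# model = default_cnn(n_outputs=5)    # or n_outputs=256 for rule identification
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    loss="categorical_crossentropy",  # labels are one-hot vectors
    metrics=["accuracy"],
)
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_accuracy", min_delta=1e-5, patience=5, restore_best_weights=True
)
model.fit(
    x_train, y_train,                 # 3/4 of the 2**18 diagrams
    validation_data=(x_val, y_val),   # remaining 1/4
    batch_size=64,                    # 3072 parameter updates per epoch
    epochs=50,
    callbacks=[early_stop],
)
```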
§.§ Performance assessment of the default model
The default model achieves an accuracy of 99.99% for both the training set and the validation set for the LP classification,
stopping its training after 12 epochs. From the 65536 diagrams in the validation set, the model infers the wrong LP class for only six.
Note – whilst this was not our objective – that the accuracy we achieved
is much higher than the one reported by Comelli et al. <cit.>,
which may be partly due to our larger training set.
Our training accuracy is even higher than the one reported by Silverman <cit.>. Clearly, his reported 100% for test set accuracy is not surpassed, but to be fair (looking at the benchmark results) this perfect score is probably rather lucky.
Keep in mind, moreover, that we have chosen a LP-based classification rather than Wolfram's.
For rule identification rather than LP classification,
the model achieves an accuracy of 99.86% on the training set,
and 99.85% on the validation set.
In both cases, the accuracy history exceeds 99.5% after the first epoch,
displaying the power given to the model via the large number of free parameters.
Results for rule identification of the validation set are shown in Fig. <ref>.
Comparing these results to Fig. <ref>,
we observe largely the same trends.
Due to the near-perfect accuracy, however, the standard deviation of the wrongly-classified diagrams is quite high
and results are to be compared with some leniency.
Fig. <ref> shows all six wrongly classified diagrams in the LP classification.
The goal, now, is first to trim down the clearly overly powerful model.
Next, we experiment with a number of changes to the data and the neural architecture,
exploiting characteristics of the classification objective.
§ VARIATIONS AND EXTENSIONS OF THE CLASSIFICATION CNN
In this Section we first design an algorithm that is equally good at determining the ECA rule (and hence the LP class) as the benchmark grid search,
whilst following the architecture of a CNN.
Next, we show how the CNN of Section <ref>
can be re-designed and simplified with the objective of class identification
in mind, rather than rule identification.
Next, we explore various methods of data augmentation that again
reflect this research objective.
Third, we assess and visualise the performance of the
maximally-simplified model with appropriate data augmentation.
§.§ A perfect CNN for rule identification
Any ECA diagram that contains all neighbourhood configurations
can be decomposed into its `neighbourhood contributions'
via a convolution into eight channels.
An example for a single channel was shown in Fig. <ref>,
and Fig. <ref> contains a full decomposition for a diagram of rule 120 (01111000_2 in binary).
The global maximum of these channels is either 0 or 1,
and the ordered octuple of maxima encodes the local update rule.
We feed this binary pattern into a 256-node output layer,
once more by manually choosing the weights of each node i:
𝐰_i = (w_i,0, …, w_i,j, …, w_i,7),
with w_i,j = 2 bin(i)_j - 1,
where bin(i)_j is the jth binary digit of the integer i.
The corresponding biases are
b_i = 1 - ∑_j=0^7bin(i)_j.
The output layer is again mapped to a probability with a SoftMax function.
The resulting CNN has 2360 (fixed) parameters.
It incorrectly identifies only 331 diagrams in the training set, and as such reaches an accuracy of 99.87% for rule identification. The former number is close to the number of diagrams (370) that contain an insufficient amount of information for a decisive grid search.
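The fixed weights can be written down explicitly; the NumPy sketch below constructs them. The readout weights and biases are exactly those of the equations above, whereas the particular ±1 pattern used for the eight neighbourhood-detecting kernels is our own reconstruction of one way to realise the decomposition, not necessarily the choice made in the original figure.

```python
import numpy as np

def neighbourhood_kernels():
    """Eight fixed 3x2 kernels; after a ReLU, channel n attains the value 1 exactly
    where neighbourhood n is followed by state 1, so its global maximum equals the
    n-th bit of the rule. This weight pattern is one possible realisation."""
    kernels = np.zeros((8, 2, 3))          # (channel, time, space)
    biases = np.zeros(8)
    for n in range(8):
        bits = [(n >> 2) & 1, (n >> 1) & 1, n & 1]       # left, centre, right cell states
        kernels[n, 0, :] = [2 * b - 1 for b in bits]     # +1 / -1 template on row t
        kernels[n, 1, 1] = 1.0                           # centre cell at row t + 1
        biases[n] = -sum(bits)
    return kernels, biases

def readout_weights():
    """Fixed dense layer: node i peaks only when the octuple of channel maxima equals bin(i)."""
    W = np.array([[2 * ((i >> j) & 1) - 1 for j in range(8)] for i in range(256)], dtype=float)
    b = 1.0 - np.array([bin(i).count("1") for i in range(256)], dtype=float)
    return W, b   # rule i maximises W @ maxima + b, so the softmax selects it
```

Together these amount to 8 × 7 = 56 kernel parameters plus 256 × 9 = 2304 readout parameters, i.e. the 2360 fixed parameters quoted above.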
Clearly, this approach for retrieving the local update rule is far more tedious than simply scanning the grid. So, if we are to harvest the power of CNNs,
we would do better to design them for actual pattern recognition,
rather than use them as a long computational detour for an essentially trivial task.
§.§ Trimming the default CNN for mesoscopic pattern recognition
We trim down the default CNN architecture for two reasons.
First, it demonstrates that even simpler models are capable of reaching a high classification accuracy,
even if they are not tailored for reconstructing the rule table.
Second, using these trimmed-down versions of the CNN, the accuracy decreases slightly,
which provides the headroom that is required to inspect the effect
of the various data augmentations that we will explore in Section <ref>.
When re-designing the NN, a first obvious observation
is that we must remain true to the convolutional approach,
cleverly exploiting the local structure.
For comparison: for our dataset,
a fully-connected NN with a single 256-node hidden layer performs acceptably,
with a validation accuracy for LP classification of just over 95%.
However, it requires well over a million trainable parameters.
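For concreteness, such a baseline can be written in a few lines of Keras; the ReLU activation on the hidden layer is an assumption on our part, and the parameter count (4096 × 256 + 256 + 256 × 5 + 5 ≈ 1.05 × 10^6) matches the `well over a million' quoted above.

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(64, 64, 1))
x = layers.Flatten()(inputs)                         # 4096 input pixels
x = layers.Dense(256, activation="relu")(x)          # contributes the bulk of the ~1.05e6 parameters
outputs = layers.Dense(5, activation="softmax")(x)   # Li-Packard class probabilities
mlp = keras.Model(inputs, outputs)
```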
We therefore explore variations on the default CNN presented in Section <ref>.
Guided by the objective of designing a very simple CNN that is good at predicting
LP classes, we considered a number of architectures
that, next to the general principles of successful CNN design <cit.>, all have the following three additional principles in common.
First, the convolutional kernel of the first hidden layer must not envelope
an ECA neighbourhood and the resulting subsequent state (a T-tetromino).
Second, the CNN should have a sufficiently large receptive field,
enabling the observation of mesoscopic structure.
Third, the hidden dense layer before the five-fold output layer must not
be capable of encoding the ECA rule.
Note that none of these principles are explicitly reflected
in the default CNN of Section <ref>.
The CNN shown in Fig. <ref> fulfils the above principles. The convolutional kernel has dimensions 2×2, which cannot fit a T-tetromino.
The two additional hidden convolutional layers and (especially) the central maximum-pooling layer increase the receptive field substantially.
The final hidden layer is a 16-node dense layer, which certainly cannot uniquely encode information on ECA rules.
A validation accuracy of at least 99.10% can be achieved with this CNN
for LP class identification, whilst it comes with only 389 trainable parameters.
For rule identification, however, the validation score drops
substantially, to 71.19%, despite involving 1504 parameters.
§.§ Data augmentation tailored for class identification
We want to train the model Fig. <ref> such that it is sensitive to what matters
to the classification, and insensitive to what does not.
One way of doing so is by manipulating the data set in a way
that affects the content at pixel level, but does not affect
the overall pattern structure of the spacetime diagram.
In other words, to alter it in a way that does not make
it too difficult for a human to still perform the classification.
This is one aspect of a type of regularisation known as data augmentation <cit.>.
Data augmentation is a preprocessing technique that is typically called upon
to artificially enlarge and diversify the size of the dataset,
such that the CNN is not overfit on the (often sparse) training data.
Due to the practically unlimited size and maximal diversity of the set of ECA spacetime diagrams, overfitting is not an issue, though we will use data augmentation techniques to average over non-essential aspects of the data,
thereby helping the algorithm to focus on what we want it to.
Importantly, this will not improve validation accuracy, but it will increase the gap between the accuracies of predicting the rule versus the LP class.
Fig. <ref> visualises the four types of data augmentations that we tested:
inversion, mirroring, coarse-graining, and adding salt-and-pepper noise.
First, the way one decides to colour the diagram is not of any
significance to the underlying binary structure, so inverting
this colour should not influence the LP classification.
Second, neither should horizontal flipping, because the emergent patterns are not fundamentally different when they manifest from left to right or vice versa.
The mirror and inversion operations (and their combination) are of course precisely what define the subsets of ECA rules that are equivalent to each other,
such that only 88 `independent' rules remain <cit.>.
Third, the patterns and mesoscopic structures in the spacetime diagrams should still be distinguishable when they are slightly blurred. In particular, we can coarse-grain the image such that, in blocks of 2×2, every pixel value is changed to the average value in that block.
Clearly this removes some information on the microscopic level, disrupting rule identification.
Fourth, we explore the addition of so-called salt-and-pepper noise,
which simply means that a certain percentage of pixel values are randomly changed to 0 or 1. This will introduce contradictions in the rule table, whilst mostly leaving the larger structures untouched.
In practice, data augmentation is performed while training the CNN, i.e. `online'. Every batch of 64 diagrams
is randomly augmented using one or more of the augmentation techniques.
A diagram has a probability of N/(N+1) of being augmented by one (and only one) of N selected augmentations, and a probability of 1/(N+1) of remaining unaffected.
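In NumPy, the four augmentations and the online selection scheme can be sketched as follows; the function names, the default noise level, and the handling of array dtypes are our own choices, and the coarse-graining keeps the 64×64 resolution by writing each block average back into every pixel of its 2×2 block, as described above.

```python
import numpy as np

def invert(d):
    return 1 - d                        # swap the two colours

def mirror(d):
    return d[:, ::-1]                   # flip the spatial axis

def coarse_grain(d):
    """Replace every pixel by the average of its non-overlapping 2x2 block."""
    h, w = d.shape
    blocks = d.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(blocks, 2, axis=0), 2, axis=1)

def salt_and_pepper(d, level=0.01, rng=None):
    """Randomly reset a fraction `level` of the pixels to 0 or 1."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = d.copy()
    mask = rng.random(d.shape) < level
    noisy[mask] = rng.integers(0, 2, size=mask.sum())
    return noisy

def augment_batch(batch, augmentations, rng=None):
    """Online augmentation: each diagram is transformed by one of the N selected
    augmentations with probability N/(N+1), and left untouched with probability 1/(N+1)."""
    rng = np.random.default_rng() if rng is None else rng
    out = []
    for d in batch:
        choice = rng.integers(0, len(augmentations) + 1)
        out.append(d if choice == len(augmentations) else augmentations[choice](d))
    return out
```

Passing augmentations = [invert, mirror, coarse_grain, salt_and_pepper] corresponds to N = 4 in the probability N/(N+1) quoted above.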
We additionally take the equivalence between rules into account in the training phase.
We evaluate the CNN's capacity of determining which of the 88 equivalence classes the rule belongs to (cf. Tab. 1 in <cit.>),
by decreasing the number of nodes in the final layer from 256 to 88 and adapting the diagram labels accordingly.
The resulting CNN contains 664 trainable parameters.
§.§ Performance assessment of the trimmed CNN on augmented data
We trained the CNN with different combinations of data augmentation techniques,
with various learning rates, and with noise levels varying between 1% and 10%.
The highest accuracies achieved on the validation set
are displayed in Tab. <ref> for no augmentation, individual augmentation techniques, and a combination of all techniques.
As expected, all accuracies decrease compared to the CNN trained on the non-augmented data.
While the LP classification accuracy remains above 97%,
the rule identification accuracy decreased significantly,
and – interestingly – differently for the various augmentation techniques.
When inverting or mirroring the diagrams, it becomes much more difficult
to detect the actual rule, because the augmentation effectively transforms the local update rule into a different (equivalent) one. The effect is larger for inversion, which might be caused by convergence difficulties while training the CNN.
However, this is probably mostly due to the fact that 64 rules are their own mirror image, whilst only 16 rules are their own inversion.
That is to say: the mirror operation often is the same as the identity operator on the microscopic level,
effectively nullifying the augmentation altogether.
Moreover, the relation between inversion and mirror augmentation on the one hand,
and equivalence classes of local update rules on the other,
is clear from the results as well:
the accuracies on independent rule identification are largely unaffected
(or even go up) when applying these data augmentations.
Coarse-graining by averaging every 2×2 block of pixels
clearly has the strongest effect on the CNN's capacity for rule detection,
bringing down the accuracy from over 71% to less than one in six.
Whilst the number of wrong LP classifications more than doubled,
the gap with rule detection is by far the largest of the considered augmentation techniques.
We observed that even a little noise was enough to negatively affect
all accuracies, without favouring LP classification, which is undesirable in terms of our objective.
The values reported in Tab. <ref> correspond to
a noise level of only 1%.
Clearly, adding this type of noise does not impact the CNN's ability to
achieve our objectives in a positive way.
In order to inspect any kind of interference between different augmentation techniques, we also investigated their combined effects.
This results in accuracies that are significantly smaller than the average of all accuracies of single-technique augmentations,
which is testimony to the fact that convergence to an optimum is harder to achieve
when the training process is `distracted' with augmentations.
Informed by the results in Tab. <ref>,
we perform a final optimisation on the complete LP-classified training set
(including the validation set), with all augmentations except salt-and-pepper noise.
This yields an accuracy on the test set of 98.17% for the LP classification,
and 61.25% for identifying the independent rule.
Fig. <ref> shows the confusion matrix for the LP classification,
summarising which predictions were made by the CNN.
§ DISCUSSION AND FUTURE DEVELOPMENTS
The overlap between the research domains of deep learning on the one hand,
and discrete dynamics systems on the other, continues to grow.
In the case of CAs, this is due to the readily available vast computational resources
and to exciting developments in the study of computer vision.
Due to their simple, convolution-like nature, CAs have been used
for studying the internal mechanism of CNNs <cit.>.
In contrast, here we examined techniques where CNNs can aid CA research.
Such techniques – especially cleverly designed CNNs – can help to identify
a CA spacetime diagram as belonging to a particular behavioural class.
Automated classification can therefore be a very useful tool for
identifying large numbers of spacetime diagrams.
In turn, this is an important requirement to make large-scale statistical
assessments of classification schemes practically feasible,
which would help formulate an answer to the first original problem in the theory of CAs <cit.>.
This has to be done right, however. In this chapter, we first explicitly showed how easy it is to find a perfect automated
classification of ECAs when `cheating' is allowed, i.e. by simply scanning the local update rule or by designing a CNN and hand-picking the weights and biases.
Next, we started from existing CNN implementations for automated classification, but re-designed them.
We did so in such a way that the algorithm picks up on the mesoscopic structures
that determine their phenomenology – and hence their classification –
whilst having a hard time identifying the microscopic structures that directly reveal their genotype.
This redesign was guided by first altering the architecture itself,
making it less obvious for the information flow to contain a direct encoding of the local update rule.
Second, we considered four techniques of data augmentation,
and demonstrated that out of these four, adding coarse-graining is the best way to keep
the model from learning the underlying rule.
Our final model was trained using all augmentations except salt-and-pepper noise.
LP classes were assigned erroneously in less than 2% of the space-time diagrams.
An important note for future development is that this CNN relatively often wrongly identifies chaotic or periodic diagrams as locally chaotic.
The future of the research on the edge between CAs and deep learning is first of all to explore more architecture variations and training strategies,
to increase the gap between rule accuracy and class accuracy.
Second, we may explore different classification schemes,
much like Comelli et al. <cit.> did,
but now again with the goal of finding the best way to maximise the performance gap.
After all, it is not obvious that the LP classification is the most useful,
especially not when it comes down to pinpointing Wolfram's `class-IV' complex behaviour <cit.>.
Next, armed with a good automatic classification scheme that is sensitive to mesoscopic patterns, the CNN may be used for classification of non-elementary CAs,
such as CAs with a larger neighbourhood or non-uniform CAs.
One must remain critical when interpreting the inferences obtained using a deep learning model, especially when it is used on data that it was not trained on.
On the other hand, this is arguably the best we can do for a supervised learning approach without spending ages of mind-numbing and (at least partially) subjective manual labelling
for the creation of a proper training set.
Because of that reason, another promising approach in automated classification
is self-supervised learning <cit.>, where the algorithm decides `for itself'
which phenomenologies are to be grouped together, without the need for a label.
This approach will be the topic of our forthcoming work,
and is a powerful ally to the supervised approach presented in this chapter,
as it may uncover different aspects of the same problem.
Looking further, many research opportunities on the edge of CAs and NNs are still open for discovery.
One such possibility is the mobilisation of time series in the context of AI-aided CA classification.
This implies deriving a time series from the CA evolution and using, for example, the time-dependent spatial entropy as an input for a recurrent neural network <cit.>.
Whilst this kind of an approach does not strictly contain more information
than the full spacetime diagram, it may help to direct the algorithm's focus,
much like we did for CNNs in this chapter.
Yet another next step would be not to predict classes,
but to predict values associated with the CA that are otherwise computationally demanding
to calculate directly, such as the Lyapunov spectrum (cf. M. Vispoel's contribution in this book).
A further exciting possibility is the implementation of (modest) generative AIs,
where a particular local update rule is selected that generates
a desired emergent behaviour (see e.g. Mordvintsev et al. <cit.>),
effectively turning the classification problem on its head.
In one way or another, all these roads lead to the identification of links
between interesting emergent behaviour on the one hand
and, on the other hand, the very simple algorithmic sequences at the heart of CAs.
This further strengthens both our mathematical understanding of this fascinating dynamical model,
and further enables applications in computer science and mathematical modelling.
wolfram1994complexity Wolfram S (1994) Cellular Automata and Complexity: Collected Papers. Westview press, Boulder, Colorado
rollier2024comprehensive Rollier M, Zielinski K M C, Daly A J, Bruno O M, Baetens J M (2024) A Comprehensive Taxonomy of Cellular Automata. Preprint: arXiv:2401.08408
langton1990computation Langton C G (1990) Computation at the edge of chaos: Phase transitions and emergent computation. Physica D: Nonlinear Phenomena 42(1-3): 12–37
martinez2013classification Martínez G J (2013) A Note on Elementary Cellular Automata Classification. Journal of Cellular Automata 8(3-4): 233–259
vispoel2022progress Vispoel M, Daly A J, Baetens J M (2022) Progress, gaps and obstacles in the classification of cellular automata. Physica D 432: 133074
li1990structure Li W, Packard N (1990) The structure of the elementary cellular automata rule space. Complex Systems 4(3): 281–297
goodfellow2016deep Goodfellow I, Bengio Y, Courville A (2016) Deep Learning. MIT Press
dhillon2020convolutional Dhillon A, Verma G K (2020) Convolutional neural network: a review of models, methodologies and applications to object detection. Progress in Artificial Intelligence 9(2): 85–112
kunkle2003automatic Kunkle D R (2003) Automatic Classification of One-Dimensional Cellular Automata. MSc thesis (Rochester Institute of Technology)
dasilva2016classification da Silva N R, Baetens J M, Oliveira M W D, de Baets B, Bruno O M (2016) Classification of cellular automata through texture analysis. Information Sciences 370: 33–49
machicao2018cellular Machicao J, Ribas L C, Scabini L F S, Bruno O M (2018) Cellular automata rule characterization and classification using texture descriptors. Physica A 497: 109–117
lecun1998gradient LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11): 2278–2324
silverman2019convolutional Silverman E (2019) Convolutional Neural Networks for Cellular Automata Classification. In: Fellermann H et al. (ed) Artificial Life Conference Proceedings, MIT Press, Cambridge, MA: 280–282
comelli2021comparing Comelli T, Pinel F, and Bouvry P (2021) Comparing Elementary Cellular Automata Classifications with a Convolutional Neural Network. In: Rocha A P et al. (ed) Proceedings of the 13th International Conference on Agents and Artificial Intelligence, Volume 2, Springer Cham: 467–474
fernandes2019nnclassification Fernandes T (2019) Cellular Automaton Neural Network Classification. In: Wolfram Community. <https://community.wolfram.com/groups/-/m/t/1417114>
chollet2015keras Chollet F et al. (2015) Keras. <https://keras.io>
shorten2019augmentation Shorten C, Khoshgoftaar T M (2019) A Survey on Image Data Augmentation for Deep Learning. Journal of Big Data 6(1): No. 60
wolfram1984twenty Wolfram S (1985) Twenty Problems in the Theory of Cellular Automata. Physica Scripta T9: 170–183
rani2023selfsupervised Rani V, Nabi S T, Kumar M, Mittal A, Kumar K (2023) Self-supervised Learning: A Succinct Review. Archives of Computational Methods in Engineering 30: 2761–2775
gilpin2019convolutional Gilpin W (2019) Cellular automata as convolutional neural networks. Physical Review E 100(3): 032402
mordvintsev2020growing Mordvintsev A, Randazzo E, Niklasson E, Levin M (2020) Growing Neural Cellular Automata. Distill. <https://distill.pub/2020/growing-ca>
|
http://arxiv.org/abs/2409.03636v1 | 20240905155050 | DiffEVC: Any-to-Any Emotion Voice Conversion with Expressive Guidance | [
"Hsing-Hang Chou",
"Yun-Shao Lin",
"Ching-Chin Sung",
"Yu Tsao",
"Chi-Chun Lee"
] | eess.AS | [
"eess.AS"
] |
Limited but consistent gains in adversarial robustness by co-training object recognition models with human EEG
Manshan Guo^{1,2,4}, Bhavin Choksi^{1,2}, Sari Sadiya^{1,2,3}, Alessandro T. Gifford^{4}, Martina G. Vilas^{1,2,5}, Radoslaw M. Cichy^{4,†}, Gemma Roig^{1,2,⋆} († jointly directed work)
§ ABSTRACT
Emotional Voice Conversion (EVC) modifies speech emotion to enhance communication by amplifying positive cues and reducing negative ones. This complex task involves entangled factors like voice quality, speaker traits, and content. Traditional deep learning models like GANs and autoencoders have achieved some success in EVC by learning mappings or disentangling features but face challenges like instability and voice quality degradation. Diffusion models offer stable training and high-quality generation. We propose a diffusion-based EVC framework that disentangles emotion and speaker identity using mutual information loss and auxiliary models. An expressive guidance mechanism is introduced to improve emotion conversion while maintaining speaker traits. Experimental results demonstrate our approach's effectiveness for unseen speakers and emotions, achieving state-of-the-art performance in EVC tasks.
Emotion Voice Conversion, Diffusion, Disentanglement, Guidance
§ INTRODUCTION
Emotional Voice Conversion (EVC), which focuses on artificially modifying the emotional expression of speech signals, offers potential application scenarios such as enhancing conversational fluidity by amplifying positive emotional cues and attenuating harmful emotions, thereby reducing potential friction between individuals <cit.>. This cutting-edge research challenge requires deep control of complex factors, including voice quality, emotional expressiveness, speaker traits, and linguistic content, which are mutually entangled and together produce the final manifestation of human voices. With the effectiveness of deep learning techniques, building EVC models for specific speakers or on small-scale parallel emotional speech has achieved preliminary success in previous works <cit.>. In this work, different from these controlled scenarios, we target the in-the-wild any-to-any EVC problem, which requires a comprehensive treatment of each of the factors mentioned above.
To solve the EVC problem, most deep learning methods fall into two families: generative adversarial networks <cit.> and autoencoder-based models <cit.>. GAN-based methods, which leverage adversarial mechanisms to learn direct mappings between the data distributions of different emotional classes, can produce speech with high naturalness without compromising voice quality. However, the lack of robust modeling of fundamental vocal attributes can result in instability during the conversion process and diminish the accuracy of transforming to the desired emotional class. Autoencoder-based approaches tackle this issue by introducing a disentanglement mechanism that separates linguistic features and speaker identity features from emotional representations, enabling better control over emotion conversion. Despite this advantage, effective disentanglement remains a challenging task, often leading to a degradation in the quality of the converted voice. The experiments in previous works point to an important fact: an effective EVC learning method should jointly design the disentanglement mechanism and the generation process.
The diffusion model has recently garnered significant attention for generative capabilities that are more effective than those of GAN-based methods, offering stable training and the ability to produce highly realistic samples in multiple application scenarios, such as image generation <cit.>, voice conversion <cit.> and noise purification <cit.>. EmoConvDiff <cit.>, the first work to address in-the-wild EVC, has been shown to be effective for learning the EVC task from large-scale, non-parallel emotional speech corpora such as MSP-PODCAST <cit.>. Building on diffusion models, we further design disentanglement mechanisms in two separate aspects to boost the effectiveness of the EVC model. First, we incorporate a disentanglement loss into the training process of the diffusion EVC model. Second, within the reverse diffusion process, we design an emotion-speaker disentangled guidance mechanism to enhance the expressiveness of the target emotion and mitigate the distortion of the source speaker traits.
We propose a diffusion-based any-to-any emotional voice conversion framework for in-the-wild data. It conditions on pretrained speech representations for both emotion and speaker identity, which are further disentangled by the proposed disentanglement mechanism. Our objective evaluations on any-to-any EVC tasks for both in-the-wild and acted data show that our disentanglement mechanism successfully makes the conversion results more emotional. Moreover, both objective and subjective evaluations demonstrate that the overall framework generates more natural and higher-quality samples while remaining emotional compared to state-of-the-art approaches.
§ METHODOLOGY
§.§ Proposed Method
This work aims to solve the problem of in-the-wild any-to-any emotion voice conversion. Given a pair of source (X_{c,s,e}, or simply X_0) and reference (X_{c̃,s̃,ẽ}) speech utterances, where the source X_{c,s,e} has linguistic content c, speaker identity s and emotional expression e, and the reference X_{c̃,s̃,ẽ} has completely different linguistic content, speaker identity, and emotion, our proposed method F aims to perform the conversion X̂_{c,s,ẽ} = F(X_{c,s,e}, X_{c̃,s̃,ẽ}), which preserves both content and speaker identity while transforming the emotion from e to ẽ. The in-the-wild setting requires the source and reference speech utterances to be completely unseen in the training set.
Figure 1 shows the overall framework of our proposed method, which is composed of a set of encoders, a diffusion-based decoder, and an expressive guidance method <cit.>.
§.§.§ Encoders
Three pre-trained encoders are used to capture the representations of linguistic content, speaker identity, and emotion expression.
Phoneme Encoding: To encode the linguistic content X̅ := ϕ(X_0), we adapt the pre-trained transformer-based encoder ϕ(·) from <cit.> to convert the input mel-spectrogram X_0 into speaker- and emotion-independent "average voice" mel features, replacing each phoneme-level mel feature with the corresponding average phoneme-level mel feature. The encoder output has the same dimension as the source mel-spectrogram X_0∈ℝ^n× T.
Speaker Encoding: To encode the speaker identity z_s := S(·), we use a pretrained speaker verification model S(·) <cit.>, adapted from <cit.>, that generates a 256-dimensional d-vector speaker representation.
Emotion Encoding: To encode the emotional information z_e = E(·) ∈ ℝ^1024, we use an SSL-based SER system adapted from <cit.>, built by fine-tuning the Wav2Vec2-Large-Robust network <cit.> on the MSP-Podcast (v1.7) dataset <cit.>.
§.§.§ Diffusion-based decoder
We employ a diffusion framework based on stochastic differential equations (SDE), conditioned on the given linguistic content X̅, speaker representation z_s, and emotion representation z_e, to generate high-quality speech. In this model, the forward diffusion process gradually transforms the real sample X_0 into the average-voice mel-spectrogram X̅, while the reverse process generates X_0 from X̅.
§.§.§ Expressive Guidance
To amplify the effectiveness of the diffusion model on the converted speech, we further design an expressive guidance method that steers the reverse diffusion process with positive- and negative-direction scores. During the inference stage, we replace s_θ with ŝ_θ as follows:
ŝ_θ = s_θ,neg + λ_EG (s_θ,pos - s_θ,neg)
λ_EG, set to a value greater than 1, controls the intensity of this guidance method and pushes the generation process away from the negative condition and toward the positive condition.
The scores s_θ,pos and s_θ,neg are derived with different conditional embeddings, namely speaker (z_s) and emotion (z_e). In order to keep the naturalness of the converted emotional voice, we find the ideal positive condition to be [Spk_src, Emo_ref] and the ideal negative condition to be [Spk_ref, Emo_src].
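A minimal sketch of how such a guidance step could be combined at inference time (the score-network callable and its argument names are placeholders, not the paper's actual interface):

from typing import Callable

def guided_score(score_net: Callable, x_t, t, mean_cond,
                 spk_src, spk_ref, emo_src, emo_ref, lambda_eg: float = 1.25):
    """Expressive guidance: s_hat = s_neg + lambda_EG * (s_pos - s_neg)."""
    # positive condition: source speaker together with the reference (target) emotion
    s_pos = score_net(x_t, t, mean_cond, spk=spk_src, emo=emo_ref)
    # negative condition: reference speaker together with the source emotion
    s_neg = score_net(x_t, t, mean_cond, spk=spk_ref, emo=emo_src)
    return s_neg + lambda_eg * (s_pos - s_neg)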
§.§.§ Disentangled Loss
In order to reduce the correlation between different speech representations, specifically the emotion information and the speaker identity, we minimize an MI loss between the representations, L_MI = Î(z_s, z_e),
where Î denotes the unbiased vCLUB estimation described in <cit.>. The MI loss has been shown to be effective for disentangling different speech representations in several studies <cit.>.
To further preserve the speaker identity and emotion information residing in the representations after disentanglement, we use two auxiliary supervised models that 1) predict the speaker identity from the disentangled speaker representation z_s, and 2) predict the emotion label (Neutral, Angry, Happy and Sad) and emotion attributes (Arousal and Valence) from the disentangled emotion representation z_e. These models are trained to minimize a loss L_style, where the negative log-likelihood loss is used for the categorical prediction tasks and the concordance correlation coefficient loss is used for the regression task.
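A simplified sketch of a CLUB-style mutual-information upper bound between the two embeddings; the Gaussian variational network q(z_e | z_s) and its architecture are illustrative assumptions, not the exact estimator used in the paper:

import torch
import torch.nn as nn

class CLUBUpperBound(nn.Module):
    """Variational CLUB-style upper bound on I(z_s; z_e), usable as an MI loss.
    The variational net q(z_e | z_s) would be trained separately by maximum likelihood."""
    def __init__(self, dim_s: int, dim_e: int, hidden: int = 256):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(dim_s, hidden), nn.ReLU(), nn.Linear(hidden, dim_e))
        self.logvar = nn.Sequential(nn.Linear(dim_s, hidden), nn.ReLU(), nn.Linear(hidden, dim_e))

    def log_q(self, z_s, z_e):
        # Gaussian log-density up to terms that cancel between paired/shuffled batches
        mu, logvar = self.mu(z_s), self.logvar(z_s)
        return (-0.5 * (z_e - mu) ** 2 / logvar.exp()).sum(dim=-1)

    def forward(self, z_s, z_e):
        paired = self.log_q(z_s, z_e)                                   # joint samples
        shuffled = self.log_q(z_s, z_e[torch.randperm(z_e.size(0))])    # approx. marginal
        return (paired - shuffled).mean()                               # MI upper bound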
In addition to L_diff for training the diffusion-based decoder, we follow <cit.> and use a mel-spectrogram reconstruction loss L_rec that measures the L_1-norm between X_0 and X̂_0, where X̂_0 is the single-step approximation relying on X_t, X̅, s_θ using Tweedie's formula <cit.>. We use λ_rec = (1-t^2), adapted from <cit.>, to reduce the importance of this loss when X_t contains more Gaussian noise for larger t.
The final objective function for our proposed method is as follows
ℒ_Total=ℒ_diff+λ_MIℒ_MI+λ_guideℒ_guide+λ_recℒ_rec
where λ_MI and λ_guide are hyperparameters controlling the importance of the respective losses.
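A schematic of how the weighted objective above might be assembled in a training step; all loss tensors are assumed to be computed elsewhere, and the weights follow the values stated in the text:

def total_loss(l_diff, l_mi, l_guide, l_rec, t,
               lambda_mi: float = 0.1, lambda_guide: float = 1.0):
    """L_Total = L_diff + lambda_MI*L_MI + lambda_guide*L_guide + lambda_rec*L_rec,
    with lambda_rec = (1 - t^2) down-weighting reconstruction at noisier timesteps."""
    lambda_rec = 1.0 - t ** 2
    return l_diff + lambda_mi * l_mi + lambda_guide * l_guide + lambda_rec * l_rec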
§ EXPERIMENTAL SETUP AND RESULTS
§.§ Experimental Setup
§.§.§ Implementation Details
Our proposed method is trained on the in-the-wild MSP-Podcast corpus version 1.10 <cit.>, which contains real podcast recordings (16kHz, 1ch) with naturalistic emotional expressions segmented into utterances. We select 53685 utterances labeled with the four emotions and emotion attributes from 1385 labeled speakers. The Adam optimizer with a learning rate of 1×10^-4 is used for updating all of the model parameters. We adopt pre-trained model parameters from <cit.> and fine-tune them on MSP-Podcast for 368k iterations with a batch size of 32. We set λ_MI=0.1 and λ_guide=1 during training, and set λ_EG=1.25 for expressive guidance during inference.
§.§.§ Evaluation Setup
We evaluate our method on both an in-the-wild dataset, MSP-Podcast (v1.11), reflecting real-world scenarios, and an acted dataset, ESD <cit.>, with high-quality recordings. We randomly sample 100 utterances of each emotion category with unseen speakers from both datasets to conduct the following experiments.
First, we perform any-to-any emotion voice conversion that includes all of the transformations between angry, happy, sad and neutral, including transforming from emotional speech to neutral speech. The experiments are conducted on both the in-the-wild MSP-Podcast dataset, reflecting real-world scenarios, and the high-quality acted dataset (ESD). We compare methods under different training schemes, i.e., using only ℒ_diff or using the full ℒ_Total in <ref>. For evaluating the effectiveness of our guidance method, we compare different settings of s_θ,neg by replacing the original conversion input with either the reference speaker, the source emotion, or both.
Second, we compare our proposed method with baseline models, namely
StarGAN-EVC <cit.>, Seq2Seq-EVC <cit.>, Emovox <cit.> and Prosody2Vec <cit.>, following the conversion samples presented in Prosody2Vec[https://leyuanqu.github.io/Prosody2Vec/]. All of the comparison models are trained on the acted ESD dataset, while only Prosody2Vec utilized both the predominant and in-the-wild datasets. The audio samples are available on our demo page[https://henrychou36.github.io/DiffEVC/].
§.§.§ Evaluation Metric
For both experiments, we incorporate non-intrusive objective evaluation, i.e., UTMOS <cit.> for naturalness, and DNSMOS <cit.> for speech quality (SIG) and overall signal quality (OVRL). Both methods are designed to predict the mean opinion score (MOS) of subjective listening tests. To assess speaker similarity, we compute the speaker embedding cosine similarity (SECS) between the extracted embeddings of the source and generated speech, based on Resemblyzer <cit.>. For emotion classification accuracy (ECA), we utilize a speech emotion recognition (SER) model for four-class emotion classification, pre-trained on both the MSP-Podcast and ESD datasets based on the emotion embedding from <cit.>. For the second experiment, in addition to the objective evaluation, we conduct a subjective evaluation using the mean opinion score (MOS) for speech quality and naturalness (nMOS), and emotion similarity (sMOS) between the synthesized speech and the target emotional utterances, with a 5-point scale ranging from 1 to 5. We also ask listeners to label the primary emotion for ECA. The evaluations of the first and second experiments are presented in Table <ref> and Table <ref>, respectively.
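SECS itself reduces to a cosine similarity between two fixed-dimensional speaker embeddings; a minimal sketch, assuming the embeddings have already been extracted by a Resemblyzer-style encoder (not computed here):

import numpy as np

def secs(embed_source: np.ndarray, embed_generated: np.ndarray) -> float:
    """Speaker Embedding Cosine Similarity between source and generated utterances."""
    a = embed_source / np.linalg.norm(embed_source)
    b = embed_generated / np.linalg.norm(embed_generated)
    return float(np.dot(a, b))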
§.§ Experimental Results
From the results presented in Table <ref>, the proposed method with both the disentanglement mechanism and the expressive guidance method significantly improves ECA, by about 15% compared to the base model, while sustaining roughly the same performance in naturalness and speech quality for both datasets. This shows that our method helps the underlying model generate more expressive samples for any-to-any emotion voice conversion.
Comparing guidance methods under different settings of the unwanted representation, with only the disentanglement mechanism enabled, we find that by utilizing the unwanted emotion, i.e., the emotion information from the source utterance, we can further make the generated speech more emotional in terms of ECA, with around a 7% increase, while damaging naturalness, speaker similarity and speech quality. On the other hand, using the unwanted speaker identity, i.e., the speaker embedding from the reference speaker, decreases the emotion accuracy while improving the other criteria slightly. The downsides of both methods can be alleviated by jointly considering the unwanted speaker and emotion information. This overall result shows that the guidance method can control the expressiveness of emotion voice conversion through either speaker identity or emotion information. Inspecting the confusion matrices in Figure <ref>, we find that for both datasets the proposed method can effectively transfer any emotion into sad and happy, whereas conversion results toward angry are sometimes recognized as happy, especially for the acted dataset. This demonstrates that the proposed method has room to be improved by incorporating acted data and the difference between angry and happy into the training scheme.
Comparing the source speech and the synthesized speech regardless of the training and inference schemes, the synthesized results have overall better speech and audio quality, which shows that noise can be alleviated through speech decomposition and reconstruction. However, the level of noise still inherently affects the quality and naturalness of the generated speech.
Based on the results presented in Table <ref>, we find that our method yields statistically significant improvements in terms of naturalness and speech and audio quality compared to the other models, and is close to the performance of the target speech. In terms of expressiveness, we find that the GAN-based method has the highest speaker similarity but unrecognizable emotion, whereas Prosody2Vec generates the most emotional speech while suffering from low naturalness and audio quality. Our method, on the other hand, generates emotional speech and preserves quality without incorporating these data into the training scheme.
§ CONCLUSION AND FUTRUE WORK
It is important to develop any-to-any EVC under real-world scenarios.
In this work, we propose an in-the-wild any-to-any emotion voice conversion framework that combines a disentanglement mechanism and expressive guidance, and provide a thorough evaluation with both objective and subjective tests. We show that our proposed method effectively converts speech into different emotions for both in-the-wild and acted datasets while maintaining high speech and audio quality. Moreover, compared to other EVC methods developed in controlled environments, our method generates natural and high-quality emotional speech.
IEEEbib
|
http://arxiv.org/abs/2409.02289v1 | 20240903205824 | Query answering in lattice-based description logic | [
"Krishna Manoorkar",
"Ruoding Wang"
] | cs.LO | [
"cs.LO"
] |
§ ABSTRACT
Recently, the description logic LE-𝒜ℒ𝒞 was introduced for reasoning in the semantic environment of the enriched formal contexts, and a tableaux algorithm was developed for checking the consistency of ABoxes in this logic <cit.>. In this paper, we study the ontology-mediated query answering in LE-𝒜ℒ𝒞. In particular, we show that several different types of queries can be answered efficiently for LE-𝒜ℒ𝒞 knowledge bases with acyclic TBoxes using our tableaux algorithm directly or by extending it with some additional rules.
§ INTRODUCTION
Description logic (DL) <cit.> is a class of logical formalisms, rooted in classical first-order logic, widely used in Knowledge Representation and Reasoning to articulate and infer relationships among pertinent concepts within a specified application domain. It is widely utilized across various fields such as the semantic web <cit.>, ontologies <cit.>, and software engineering <cit.>. Description logic offers solutions to diverse reasoning tasks arising from a knowledge base. Among the notable reasoning services offered by description logic is ontology-mediated query answering, which involves answering queries based on a given knowledge base <cit.>.
In <cit.>[We noticed a mistake in the proof of termination and I-compatibility in an earlier version of this paper <cit.>, in which the concepts ⊤ and ⊥ were included as concept names. In the updated version <cit.> we prove that the result holds in the restriction which does not contain ⊤ and ⊥ in the language of concept names. In this paper, we work with the restricted language, as in <cit.>.], a two-sorted lattice-based description logic LE-𝒜ℒ𝒞[Even though concept names in LE-𝒜ℒ𝒞 do not contain negation, we still refer to this description logic as LE-𝒜ℒ𝒞 rather than LE-𝒜ℒℰ, as negation on ABox terms is included in the description logic language.] was introduced based on non-distributive modal logic, with semantics grounded in an enriched formal context <cit.>.
In this paper, we adapt and modify the LE-𝒜ℒ𝒞 tableaux algorithm provided in <cit.> to answer several different types of queries based on LE-𝒜ℒ𝒞 knowledge bases with acyclic TBoxes. We show that for any consistent LE-𝒜ℒ𝒞 ABox 𝒜, the model constructed from the tableaux completion of 𝒜 is a universal or canonical model for answering different queries like relationship queries asking if an object and a feature are related, membership queries asking if an object or a feature belongs to a concept, and subsumption queries asking if a concept is included in some other concept. This allows to answer multiple such queries in polynomial time in |𝒜|. We show that it also acts as a universal model w.r.t. negative relational queries, however this is not true for negative membership or subsumption queries.
Finally, we consider separation queries which ask if two objects or features can be distinguished from each other by means of some role (relation). We convert these queries into an equivalent problem of checking the consistency of the given ABox w.r.t. some extension of LE-𝒜ℒ𝒞 and providing a tableaux algorithm for such extension. This method allows to answer separation queries of different types in polynomial time in |𝒜|.
Structure of the paper. In Section <ref>, we briefly review non-distributive modal logic and polarity-based semantics, lattice-based description logic LE-𝒜ℒ𝒞, and tableaux algorithm for checking its ABox consistency. In Section <ref>, we demonstrate that the model obtained from the Tableaux Algorithm (Section <ref>) is a universal model for various queries, and define different types of queries and corresponding algorithms. Section <ref> provides a specific LE-𝒜ℒ𝒞 knowledge base and illustrates how the algorithms answer queries discussed earlier. Finally, Section <ref> summarizes the paper and outlines future directions.
§ PRELIMINARIES
In this section, we collect preliminaries on non-distributive modal logic and its polarity-based semantics, i.e. semantics based on formal contexts, and the lattice-based description logic LE-𝒜ℒ𝒞 with the tableaux algorithm developed for it in <cit.>.
§.§ Basic non-distributive modal logic and its polarity-based semantics
In this section, we briefly introduce the basic non-distributive modal logic and polarity-based semantics for it. It is a member of a family of lattice-based logics, sometimes referred to as LE-logics (cf. <cit.>), which have been studied in the context of a research program on the logical foundations of categorization theory <cit.>.
Let 𝖯𝗋𝗈𝗉 be a (countable) set of atomic propositions. The language ℒ is defined as follows:
φ ::= ⊥ | ⊤ | p | φ∧φ | φ∨φ | □φ | ◊φ,
where p∈𝖯𝗋𝗈𝗉. In the following part, we define the polarity-based semantics for this logic.
The basic, or minimal normal ℒ-logic is a set 𝐋 of sequents ϕ⊢ψ, with ϕ,ψ∈ℒ, containing the following axioms:
p ⊢ p ⊥⊢ p p ⊢ p ∨ q p ∧ q ⊢ p ⊤⊢□⊤ □p ∧□q ⊢□(p ∧ q)
p ⊢⊤ q ⊢ p ∨ q p ∧ q ⊢ q ◊⊥⊢⊥ ◊(p ∨ q) ⊢◊p ∨◊q
and closed under the following inference rules:
ϕ⊢χ χ⊢ψ/ϕ⊢ψ ϕ⊢ψ/ϕ(χ/p)⊢ψ(χ/p) χ⊢ϕ χ⊢ψ/χ⊢ϕ∧ψ ϕ⊢χ ψ⊢χ/ϕ∨ψ⊢χ ϕ⊢ψ/□ϕ⊢□ψ ϕ⊢ψ/◊ϕ⊢◊ψ
Relational semantics.
The following preliminaries are taken from <cit.>.
For any binary relation T⊆ U× V, and any U'⊆ U and V'⊆ V, we let[For any u ∈ U (resp. v ∈ V) we will sometimes write T^(1)[u] (resp. T^(0)[v]) in place of T^(1)[{u}] (resp. T^(0)[{v}]).]
T^(1)[U'] ≔ {v|∀ u(u∈ U'⇒ uTv) } and T^(0)[V'] ≔ {u|∀ v(v∈ V'⇒ uTv)}.
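For finite sets these operators are straightforward to compute; a minimal sketch with a relation stored as a set of pairs (an encoding chosen here purely for illustration):

def T1(T, U_prime, V):
    """T^(1)[U'] = { v in V | for all u in U', u T v }."""
    return {v for v in V if all((u, v) in T for u in U_prime)}

def T0(T, V_prime, U):
    """T^(0)[V'] = { u in U | for all v in V', u T v }."""
    return {u for u in U if all((u, v) in T for v in V_prime)}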
A polarity or formal context (cf. <cit.>) is a tuple ℙ =(A,X,I), where A and X are sets, and I ⊆ A × X is a binary relation.
A and X can be understood as the collections of objects and features, and for any a∈ A and x∈ X, aIx exactly when the object a has the feature x. For any formal context ℙ = (A, X, I), the pair of maps
(·)^↑: 𝒫(A)→𝒫(X) and (·)^↓: 𝒫(X)→𝒫(A),
defined by B^↑ ≔ I^(1)[B] and Y^↓ ≔ I^(0)[Y],
[For any a ∈ A (resp. x∈ X) we will sometimes write a^↑ and a^↑↓ (resp. x^↓ and x^↓↑) in place of {a}^↑ and {a}^↑↓ (resp. {x}^↓ and {x}^↓↑).],
forms a Galois connection, and hence induces the closure operators(·)^↑↓ and (·)^↓↑ on 𝒫(A) and on 𝒫(X), respectively.
A formal concept of a polarity ℙ=(A,X,I) is a tuple c=(⟦c⟧, ⦇c⦈) such that ⟦c⟧⊆ A and ⦇c⦈⊆ X, and ⟦c⟧ = ⦇c⦈^↓ and ⦇c⦈ = ⟦c⟧^↑,
i.e. the sets ⟦c⟧ and ⦇c⦈ are Galois-stable. The set of formal concepts of a polarity ℙ, with the order defined by
c_1 ≤ c_2 iff ⟦c_1⟧⊆⟦c_2⟧ iff ⦇c_2⦈⊆⦇c_1⦈,
forms a complete lattice ℙ^+, namely the concept lattice of ℙ.
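Specialising the operators above to I gives the maps (·)^↑ and (·)^↓, and the formal concepts of a small polarity can then be enumerated by brute force. A sketch (the tiny example context is our own, and the exponential enumeration is acceptable only because the example is tiny):

from itertools import chain, combinations

def up(B, I, X):      # B^up = I^(1)[B]: features shared by all objects in B
    return frozenset(x for x in X if all((a, x) in I for a in B))

def down(Y, I, A):    # Y^down = I^(0)[Y]: objects having all features in Y
    return frozenset(a for a in A if all((a, x) in I for x in Y))

def concepts(A, X, I):
    """All formal concepts (extension, intension) of the polarity (A, X, I)."""
    found = set()
    for B in chain.from_iterable(combinations(sorted(A), r) for r in range(len(A) + 1)):
        Y = up(frozenset(B), I, X)
        found.add((down(Y, I, A), Y))   # closing B yields a Galois-stable pair
    return found

A, X = {"a1", "a2"}, {"x1", "x2"}
I = {("a1", "x1"), ("a2", "x1"), ("a2", "x2")}
for ext, intn in concepts(A, X, I):
    print(sorted(ext), sorted(intn))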
An enriched formal ℒ-context is a tuple 𝔽 =(ℙ, R_□, R_◊), where R_□⊆ A × X and R_◊⊆ X × A are I-compatible relations, that is, for all a ∈ A and x ∈ X, the sets R_□^(0)[x], R_□^(1)[a], R_◊^(0)[a], R_◊^(1)[x] are Galois-stable in ℙ.
Given the operations [R_□] and ⟨ R_◊⟩ on ℙ^+ corresponding to R_□ and R_◊, respectively, we have for any c ∈ℙ^+,
[R_□] c =(R_□^(0)[⦇c⦈], I^(1)[R_□^(0)[⦇c⦈]]) and ⟨ R_◊⟩ c =( I^(0)[R_◊^(0)[⟦c⟧]], R_◊^(0)[⟦c⟧]).
We refer to the algebra 𝔽^+=(ℙ^+, [R_□], ⟨ R_◊⟩) as the complex algebra of 𝔽. A valuation on such an 𝔽
is a map V: 𝖯𝗋𝗈𝗉→ℙ^+. For each p∈𝖯𝗋𝗈𝗉, we let ⟦p⟧ ≔ ⟦V(p)⟧ (resp. ⦇p⦈ ≔ ⦇V(p)⦈) denote the extension (resp. intension) of the interpretation of p under V.
A model is a tuple 𝕄 = (𝔽, V), where 𝔽 = (ℙ, R_□, R_◊) is an enriched formal context and V is a valuation on 𝔽. For every ϕ∈ℒ, we let ⟦ϕ⟧_𝕄 (resp. ⦇ϕ⦈_𝕄) denote the extension (resp. intension) of the interpretation of ϕ under the homomorphic extension of V. The `satisfaction' and `co-satisfaction' relations ⊩ and ≻ can be recursively defined as follows:
𝕄, a ⊩ p iff a∈⟦p⟧_𝕄
𝕄, x ≻ p iff x∈⦇p⦈_𝕄
𝕄, a ⊩⊤ always
𝕄, x ≻⊤ iff a I x for all a∈ A
𝕄, x ≻⊥ always
𝕄, a ⊩⊥ iff a I x for all x∈ X
𝕄, a ⊩ϕ∧ψ iff 𝕄, a ⊩ϕ and 𝕄, a ⊩ψ
𝕄, x ≻ϕ∧ψ iff (∀ a∈ A) (𝕄, a ⊩ϕ∧ψ⇒ a I x)
𝕄, x ≻ϕ∨ψ iff 𝕄, x ≻ϕ and 𝕄, x ≻ψ
𝕄, a ⊩ϕ∨ψ iff (∀ x∈ X) (𝕄, x ≻ϕ∨ψ⇒ a I x).
As to the interpretation of modal operators:
𝕄, a ⊩□ϕ iff (∀ x∈ X)(𝕄, x ≻ϕ⇒ a R_□ x)
𝕄, x ≻□ϕ iff (∀ a∈ A)(𝕄, a ⊩□ϕ⇒ a I x)
𝕄, x ≻◊ϕ iff (∀ a∈ A) (𝕄, a ⊩ϕ⇒ x R_◊ a)
𝕄, a ⊩◊ϕ iff (∀ x∈ X)(𝕄, x ≻◊ϕ⇒ a I x).
The definition above ensures that, for any ℒ-formula φ,
𝕄, a ⊩ϕ iff a∈⟦ϕ⟧_𝕄, and 𝕄, x ≻ϕ iff x∈⦇ϕ⦈_𝕄.
𝕄⊨ϕ⊢ψ iff ⟦ϕ⟧_𝕄⊆⟦ψ⟧_𝕄 iff ⦇ψ⦈_𝕄⊆⦇ϕ⦈_𝕄.
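A sketch of the two modal operations on concepts in the same toy encoding, reusing the up/down maps from the earlier sketch (box and dia stand for [R_□] and ⟨R_◊⟩; this is only an illustration of the defining equations, not an implementation from the paper):

def box(c_int, R_box, I, A, X):
    """[R_box]c: extension = R_box^(0)[int(c)], intension = I^(1) of that extension."""
    ext = frozenset(a for a in A if all((a, x) in R_box for x in c_int))
    return ext, up(ext, I, X)

def dia(c_ext, R_dia, I, A, X):
    """<R_dia>c: intension = R_dia^(0)[ext(c)], extension = I^(0) of that intension."""
    intn = frozenset(x for x in X if all((x, a) in R_dia for a in c_ext))
    return down(intn, I, A), intn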
The interpretation of the propositional connectives ∨ and ∧ in the framework described above reproduces the standard notion of join and meet of formal concepts used in FCA. The interpretation of operators and is motivated by algebraic properties and duality theory for modal operators on lattices (see <cit.> for an expanded discussion).
In <cit.>, it is shown that the semantics of LE-logics is compatible with Kripke semantics for classical modal logic, and thus, LE-logics are indeed generalizations of classical modal logic.
This interpretation is further justified in <cit.> by noticing that, under
the interpretations of the relation I as
a I x iff “object a has feature x”
and R=R_□ =R_◊^-1 as a R x iff “there is evidence that object a has feature x”, then, for any concept c, the extents of the concepts □c and ◊c can be interpreted as “the set of objects which certainly belong to c” (upper approximation), and “the set of objects which possibly belong to c” (lower approximation), respectively. Thus, the interpretations of □ and ◊ have similar meanings in LE-logic as in classical modal logic.
§.§ Description logic LE-𝒜ℒ𝒞
In this section, we recall the lattice-based description logic LE-𝒜ℒ𝒞 introduced in <cit.> as a counterpart of non-distributive modal logic.
The language of LE-𝒜ℒ𝒞 contains two types of individuals, usually interpreted as objects and features. Let 𝖮𝖡𝖩 and 𝖥𝖤𝖠𝖳 be disjoint sets of individual names for objects and features. The set ℛ of the role names for LE-𝒜ℒ𝒞 is the union of three types of relations: (1) a unique relation I ⊆𝖮𝖡𝖩×𝖥𝖤𝖠𝖳; (2) a set of relations ℛ_□ of the form R_□⊆𝖮𝖡𝖩×𝖥𝖤𝖠𝖳; (3) a set of relations ℛ_◊ of the form R_◊⊆𝖥𝖤𝖠𝖳×𝖮𝖡𝖩. The relation I is intended to be interpreted as the incidence relation of formal contexts and encodes information on which objects have which features, and the relations in ℛ_□ and ℛ_◊ encode additional relationships between objects and features (see <cit.> for an extended discussion). In this paper, we work with an LE-𝒜ℒ𝒞 language in which the sets of role names ℛ_□ and ℛ_◊ are singletons. All the results in this paper can be generalized straightforwardly to a language with multiple role names in each of these sets.
For any set 𝒟 of atomic concept names, the language of LE-𝒜ℒ𝒞 concepts is:
C ::= D | C_1 ∧ C_2 | C_1∨ C_2 | ⟨ R_◊⟩ C | [R_□]C
where D ∈𝒟.
This language matches the LE-logic language and has an analogous intended interpretation of the complex algebras of the enriched formal contexts (cf. Section <ref>).
As usual, ∨ and ∧ are to be interpreted as the smallest common superconcept and the greatest common subconcept, as in FCA.
We do not use the symbols ∀ r and ∃ r in the context of LE-𝒜ℒ𝒞 because using the same notation verbatim would be ambiguous or misleading, as the semantic clauses of modal operators in LE-logic use universal quantifiers.
TBox assertions in LE-𝒜ℒ𝒞 are of the shape C_1 ≡ C_2[As is standard in DL (see <cit.> for more details), general concept inclusions of the form C_1 ⊑ C_2 can be rewritten as C_1 ≡ C_2 ∧ C_3, where C_3 is a new concept name.], where C_1 and C_2 are concepts defined as above.
ABox assertions are of the form:
a R_□ x, x R_◊ a, a I x, a:C, x::C, ¬α,
where α is any of the first five ABox terms. We refer to the first three types of terms as relational terms. We denote an arbitrary ABox (resp. TBox) by 𝒜 (resp. 𝒯). The interpretations of the terms a:C and x::C are: “object a is a member of concept C”, and “feature x is in the description of concept C”, respectively. Note that we explicitly add negative terms to ABoxes, as the concept names in LE-𝒜ℒ𝒞 do not contain negations.
An interpretation for LE-𝒜ℒ𝒞 is a tuple ℳ = (𝔽, ·^ℳ), where 𝔽=(ℙ, R_, R_) is
an enriched formal context, and ·^ℳ maps:
1. individual names a ∈𝖮𝖡𝖩 (resp. x ∈𝖥𝖤𝖠𝖳) to some a^ℳ∈ A (resp. x^ℳ∈ X);
2. role names I, R_□ and R_◊ to relations I^ℳ⊆ A× X, R_□^ℳ⊆ A× X and R_◊^ℳ⊆ X× A in 𝔽;
3. any atomic concept D to D^ℳ∈𝔽^+, and other concepts as follows:
(C_1 ∧ C_2)^ℳ = C_1^ℳ∧ C_2^ℳ (C_1∨ C_2)^ℳ = C_1^ℳ∨ C_2^ℳ
([R_□]C)^ℳ = [R_□^ℳ]C^ℳ (⟨ R_◊⟩ C)^ℳ =⟨ R_◊^ℳ⟩ C^ℳ
where the all the connectives are interpreted as defined as in LE-logic (cf. Section <ref>). The satisfiability relation for an interpretation ℳ is defined as follows:
1. ℳ⊨ C_1≡ C_2 iff ⟦C_1⟧^ℳ = ⟦C_2⟧^ℳ iff ⦇C_2⦈^ℳ = ⦇C_1⦈^ℳ.
2. ℳ⊨ a:C iff a^ℳ∈⟦C⟧^ℳ
and ℳ⊨ x::C iff x^ℳ∈⦇C⦈^ℳ.
3. ℳ⊨ a I x (resp. a R_□ x, x R_◊ a) iff a^ℳ I^ℳ x^ℳ (resp. a^ℳ R_□^ℳ x^ℳ, x^ℳ R_◊^ℳ a^ℳ).
4. ℳ⊨¬α, where α is any ABox term, iff ℳ⊭α.
The satisfaction definition can be extended to concept inclusion as follows. For any concepts C_1 and C_2, and an interpretation ℳ, ℳ⊨ C_1 ⊑ C_2 iff C_1^ℳ≤ C_2^ℳ.
An interpretation ℳ is a model for an LE-𝒜ℒ𝒞 knowledge base (𝒜, 𝒯)
if ℳ⊨𝒜 and ℳ⊨𝒯.
§.§ Tableaux algorithm for checking LE-𝒜ℒ𝒞 ABox consistency
In this section, we introduce the tableaux algorithm for checking the consistency of LE-𝒜ℒ𝒞 ABoxes. An LE-𝒜ℒ𝒞 ABox 𝒜 contains a clash iff it contains both β and ¬β for some relational term β. The expansion rules below are designed so that
the expansion of 𝒜 will contain a clash iff 𝒜 is inconsistent.
The set sub(C) of sub-formulas of
any LE-𝒜ℒ𝒞 concept name C is defined as usual. A concept name C' occurs in 𝒜 (in symbols: C' ∈𝒜) if C'∈ sub(C) for some C such that one of the terms a:C, x::C, ¬(a:C), or ¬(x::C) is in 𝒜. A constant b (resp. y) occurs in 𝒜 (in symbols: b ∈𝒜, or y ∈𝒜) iff some term containing b (resp. y) occurs in it.
The tableaux algorithm below constructs a model
(𝔽,·^ℳ) for every consistent 𝒜, where 𝔽= (ℙ, R_□, R_◊) is such that, for any C ∈𝒜, some a_C ∈ A and x_C ∈ X exist such that, for any a ∈ A (resp. any x ∈ X), a ∈⟦C⟧^ℳ (resp. x ∈⦇C⦈^ℳ) iff a I x_C (resp. a_C I x).
We call a_C and x_C the classifying object and the classifying feature of C, respectively.[To make the notation easily readable, we write a_□C, x_□C (resp. a_◊C, x_◊C) instead of a_[R_□]C, x_[R_□]C (resp. a_⟨ R_◊⟩ C, x_⟨ R_◊⟩ C).] The commas in each rule are meta-linguistic conjunctions, hence every tableau is non-branching.
Creation rule: for any C ∈𝒜, create a_C:C and x_C::C.
Basic rule (I): from b:C and y::C, add b I y.
Rules for the logical connectives:
(∧_A): from b:C_1 ∧ C_2, add b:C_1, b:C_2. (∨_X): from y::C_1 ∨ C_2, add y::C_1, y::C_2.
(□): from b:[R_□]C and y::C, add b R_□ y. (◊): from y::⟨ R_◊⟩ C and b:C, add y R_◊ b.
I-compatibility rules:
(□y): from b I □y, add b R_□ y. (▪y): from b I ▪y, add y R_◊ b.
(◊b): from ◊b I y, add y R_◊ b. (◆b): from ◆b I y, add b R_□ y.
Inverse rules for connectives:
(∧_A^-1): from b:C_1, b:C_2 with C_1 ∧ C_2 ∈𝒜, add b:C_1 ∧ C_2.
(∨_X^-1): from y::C_1, y::C_2 with C_1 ∨ C_2 ∈𝒜, add y::C_1 ∨ C_2.
Adjunction rules:
(R_□): from b R_□ y, add ◆b I y and b I □y. (R_◊): from y R_◊ b, add ◊b I y and b I ▪y.
Basic rules for negative assertions:
(¬b): from ¬(b:C), add ¬(b I x_C). (¬x): from ¬(x::C), add ¬(a_C I x).
Appending rules:
(x_C): from b I x_C, add b:C. (a_C): from a_C I y, add y::C.
In the adjunction rules, the individuals ◆b, ◊b, □y, and ▪y are new and unique individual names[The new individual names ◆b, ◊b, □y, and ▪y appearing in the tableaux expansion are purely syntactic entities. Intuitively, they correspond to the classifying objects (resp. features) of the concepts ◆𝐛, ◊𝐛 (resp. □𝐲, resp. ▪𝐲), where 𝐛=(b^↑↓, b^↑) (resp. 𝐲=(y^↓, y^↓↑)) is the concept generated by b (resp. y), and the operation ◆ (resp. ▪) is the left (resp. right) adjoint of the operation □ (resp. ◊).] for each relation R_□ and R_◊, and individuals b and y, except for ◊a_C = a_◊C and □x_C = x_□C.
It is easy to check that the following rules are derivable in the calculus.
(∨_A): from b:C_1 ∨ C_2, y::C_1 and y::C_2, add b I y. (∧_X): from y::C_1 ∧ C_2, b:C_1 and b:C_2, add b I y.
(adj_□): from ◆b:C, add b:[R_□]C. (adj_◊): from ▪y::C, add y::⟨ R_◊⟩ C.
The following theorem follows from the results in <cit.>:
The tableaux algorithm <ref> provides a sound and complete polynomial-time decision procedure for checking consistency of LE-𝒜ℒ𝒞 ABoxes.
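A highly simplified sketch of the saturate-and-check-for-a-clash structure behind such a procedure; the term encoding and the small subset of rules shown (∧_A, the basic rule and the appending rules) are our own illustrative choices, not the full calculus:

def saturate(terms):
    """Forward-chaining sketch of the completion. Encoding: ("m", b, C) for b:C,
    ("d", y, C) for y::C, ("I", b, y) for b I y, ("not", t) for a negated relational
    term; classifying individuals a_C and x_C are encoded as ("a", C) and ("x", C)."""
    terms = set(terms)
    while True:
        new = set()
        for t in terms:
            if t[0] == "m" and isinstance(t[2], tuple) and t[2][0] == "and":
                new |= {("m", t[1], t[2][1]), ("m", t[1], t[2][2])}   # rule AND_A
            if t[0] == "I" and isinstance(t[2], tuple) and t[2][0] == "x":
                new.add(("m", t[1], t[2][1]))                         # appending rule x_C
            if t[0] == "I" and isinstance(t[1], tuple) and t[1][0] == "a":
                new.add(("d", t[2], t[1][1]))                         # appending rule a_C
        members = [(t[1], t[2]) for t in terms if t[0] == "m"]
        descrs = [(t[1], t[2]) for t in terms if t[0] == "d"]
        new |= {("I", b, y) for b, c in members for y, c2 in descrs if c == c2}  # basic rule
        if new <= terms:
            break
        terms |= new
    clash = any(t[0] == "not" and t[1] in terms for t in terms)
    return terms, clash

# toy ABox: b : D1 AND D2, with x_D1 :: D1 seeded by hand in place of the creation rule,
# and a negative term asserting that b is not related to x_D1
abox = {("m", "b", ("and", "D1", "D2")), ("d", ("x", "D1"), "D1"),
        ("not", ("I", "b", ("x", "D1")))}
completion, clash = saturate(abox)
print(clash)   # True: b : D1 follows, hence b I x_D1, clashing with the negative term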
For any consistent LE-𝒜ℒ𝒞 ABox 𝒜,
we construct a model ℳ =(𝔽,·^ℳ) from the tableaux completion 𝒜̄ of 𝒜, where 𝔽 =(A,X,I,R_□,R_◊) is described as follows:
A and X are taken to be the sets of all individual names of objects and features that occur in 𝒜̄, respectively, and all individuals are interpreted by their names. For any role name R, its interpretation R^ℳ is defined as follows: for any individual names l, m, l R^ℳ m iff l R m ∈𝒜̄. Finally, for any atomic concept D, its interpretation is set to the concept (x_D^↓, a_D^↑). The following result was proved in <cit.>.
For any LE-𝒜ℒ𝒞 ABox 𝒜, the model ℳ constructed above is a model for 𝒜 of size polynomial in |𝒜|. Moreover, for any individual names b, y, and any concept C occurring in 𝒜, b ∈⟦C⟧ iff b I x_C ∈𝒜̄, and y ∈⦇C⦈ iff a_C I y ∈𝒜̄.
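A sketch of how such a model could be read off from a saturated term set in the toy encoding used above (only the polarity part and the atomic-concept interpretations are shown; a full implementation would also extract R_□ and R_◊):

def extract_model(completion):
    """Read off a polarity-style model from a saturated set of ABox terms."""
    A = {t[1] for t in completion if t[0] in ("I", "m")}
    X = {t[2] for t in completion if t[0] == "I"} | {t[1] for t in completion if t[0] == "d"}
    I = {(t[1], t[2]) for t in completion if t[0] == "I"}

    def interp(D):
        # interpretation of atomic concept D: extension x_D^down, intension a_D^up
        ext = {a for a in A if (a, ("x", D)) in I}
        intn = {x for x in X if (("a", D), x) in I}
        return ext, intn

    return A, X, I, interp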
The algorithm can be easily extended to acyclic TBoxes (exponential-time), using the unraveling technique (see <cit.> for details).
In this paper, we use LE-𝒜ℒ𝒞 knowledge bases to mean LE-𝒜ℒ𝒞 knowledge bases with acyclic Tboxes unless otherwise stated.
§.§ Ontology-mediated query answering
A key task in description logic ontologies (knowledge bases) is to support various reasoning tasks, one of which is to answer queries based on ontologies <cit.>.
Let 𝒦 =(𝒜, 𝒯) be a consistent knowledge base given in a specific description logic DL. Given a query q(p) (with a possibly empty tuple of free variables p) in an (appropriate) first-order language and a model ℳ of 𝒦, we say that a sequence of individuals a in 𝒜 is an answer for the query q(p) w.r.t. the model ℳ of the knowledge base 𝒦 if ℳ⊨ q(a). An answer a in 𝒜 is said to be a certain answer for the query q(p) w.r.t. a knowledge base 𝒦 if it is an answer for q(p) w.r.t. all the models of 𝒦. An important notion used in ontology-mediated query answering is that of a universal or canonical model. For a query q(p) on a knowledge base 𝒦, we say that a model ℳ of 𝒦 is a universal or canonical model for 𝒦 if for any a appearing in 𝒦, 𝒦⊨ q(a) iff ℳ⊨ q(a).[In case p is empty, the answer or the certain answer for such a query is true or false depending on whether ℳ⊨ q or not.] Thus, we can provide certain answers for a query q over 𝒦 by only looking at the universal or canonical model ℳ. Universal models for different description logics have been extensively studied <cit.>. In this paper, we focus on answering some specific types of queries over knowledge bases in the non-distributive description logic LE-𝒜ℒ𝒞. To this end, we show that for any LE-𝒜ℒ𝒞 ABox 𝒜, the model constructed from it by applying the LE-𝒜ℒ𝒞 tableaux algorithm <ref> acts as a universal model for 𝒜 w.r.t. several different types of queries. As the tableaux algorithm runs in polynomial time and produces a model of polynomial size in |𝒜|, this provides a polynomial-time algorithm to answer these types of queries.
§ QUERY ANSWERING OVER LE-𝒜ℒ𝒞 ABOXES
In this section, we discuss different types of queries pertaining to LE-𝒜ℒ𝒞 ABoxes and develop algorithms to answer them. We start by showing that for any consistent LE-𝒜ℒ𝒞 ABox 𝒜, the model obtained using Algorithm <ref> behaves like universal model w.r.t. several types of queries.
§.§ Universal model for LE-𝒜ℒ𝒞 ABox
For any individual name appearing in the tableaux expansion 𝒜 of an LE-𝒜ℒ𝒞 ABox 𝒜 , we define its concept companion as follows :
1. For any constant b (resp. y) appearing in 𝒜, con(b) (resp. con(y)) is a concept such that for any interpretation ℳ, con(b)^ℳ = 𝐛 (resp. con(y)^ℳ = 𝐲), where 𝐛 (resp. 𝐲) denotes the concept generated by b^ℳ (resp. y^ℳ), i.e. 𝐛=((b^ℳ)^↑↓, (b^ℳ)^↑) (resp. 𝐲=((y^ℳ)^↓, (y^ℳ)^↓↑)).
2. For any constant ◆b (resp. ◊b, resp. ▪y, resp. □y) appearing in 𝒜̄, con(◆b)= ◆con(b) (resp. con(◊b)= ◊con(b), resp. con(▪y)= ▪con(y), resp. con(□y)= □con(y)), where the operation ◆ (resp. ▪) is the left (resp. right) adjoint of □ (resp. ◊).
For any consistent LE-𝒜ℒ𝒞 ABox 𝒜, individual names b, y appearing in its completion 𝒜̄, and concept C appearing in 𝒜:
1. 𝒜⊨ con(b) ⊑ con(y) iff b I y ∈𝒜̄, 2. 𝒜⊨ con(b) ⊑□con(y) iff b R_□ y ∈𝒜̄,
3. 𝒜⊨◊con(b) ⊑ con(y) iff y R_◊ b ∈𝒜̄, 4. 𝒜⊨ con(b) ⊑ C iff b I x_C ∈𝒜̄,
5. 𝒜⊨ C ⊑ con(y) iff a_C I y ∈𝒜̄, 6. 𝒜⊨ con(b) ⊑ C iff b:C ∈𝒜̄,
7. 𝒜⊨ C ⊑ con(y) iff y::C ∈𝒜̄, 8. 𝒜⊨ C_1 ⊑ C_2 iff a_C_1 I x_C_2∈𝒜̄.
The proofs from left to right for items 1-5 follow immediately from Theorem <ref>. We prove the right to left implications by simultaneous (over all the items) induction on the number of expansion rules applied. The base case is when the term in the right appears in 𝒜. In this case, it is immediate from the definition that we get the required condition on the left.
Creation rule. By this rule, a_C : C and x_C :: C are added by any C∈𝒜, which imply C⊑ C.
Basic rule. By this rule, bIy is added from b:C and y::C. By induction applied to items 6 and 7, we get con(b)⊑ C and C⊑ con(y), which imply that con(b)⊑ con(y). It is easy to check item 4 and item 5 also hold. For item 8, a_C_1 I x_C_2 is added from a_C_1:C and x_C_2::C. By induction applied to items 6 and 7, we have C_1⊑ C and C⊑ C_2, which imply that C_1⊑ C_2.
Rules ∧_A, ∨_X, ∧_A^-1, ∨_X^-1. We give the proofs for rules ∧_A and ∧_A^-1. The proofs for ∨_X and ∨_X^-1 are analogous.
By rule ∧_A, b:C_1 and b:C_2 are added from b:C_1∧ C_2. By induction applied to item 6, con(b)⊑ C_1∧ C_2, and thus con(b)⊑ C_1 and con(b)⊑ C_2.
By rule ∧^-1_A, b:C_1∧ C_2 is added from b:C_1, b:C_2, and C_1∧ C_2∈𝒜. By induction applied to item 6, we have con(b)⊑ C_1 and con(b)⊑ C_2. Since C_1∧ C_2 exists in 𝒜, we get con(b)⊑ C_1∧ C_2.
I-compatibility rules. We give the proofs for rules □y and ▪y. The proofs for ◊b and ◆b are analogous.
By rule □y, b R_□ y is added from b I □y. By induction applied to item 1, we get con(b)⊑ con(□y), and by definition we have con(b)⊑□con(y).
By rule ▪y, y R_◊ b is added from b I ▪y. By induction applied to item 1, we have con(b)⊑ con(▪y), and by definition con(b)⊑▪con(y). By adjunction, we have ◊con(b)⊑ con(y).
Rules □ and ◊. We give the proof for rule □; the proof for ◊ is analogous. By rule □, b R_□ y is added from b:[R_□]C and y::C. By induction applied to item 6, we have con(b)⊑□C. By induction on item 7, we get C⊑ con(y). As □ is a monotone operator, we have □C⊑□con(y). Thus, con(b)⊑□con(y).
Adjunction rules. We give the proof for rule R_□; the proof for R_◊ is analogous. By rule R_□, b I □y (resp. ◆b I y) is added from b R_□ y. By induction applied to item 2, we have con(b)⊑□con(y) (resp. ◆con(b)⊑ con(y) by adjunction), and thus con(b)⊑ con(□y) (resp. con(◆b)⊑ con(y)).
Appending rules. By rule x_C, the term b:C is added from b I x_C. By induction applied to item 4, we have con(b)⊑ C. By rule a_C, y::C is added from a_C I y and by induction applied to item 5, we have C⊑ con(y).
Lemma <ref> implies that for any consistent ABox 𝒜, the model generated from 𝒜 using Algorithm <ref> acts as a universal model for several types of queries. We describe some such queries below.
Relationship queries. These queries are either Boolean queries asking if two individuals are related by the relation I, R_□ or R_◊, e.g. q= b I y, or queries asking for the names of all individuals appearing in 𝒜 that are related to some element by the relation I, R_□ or R_◊, e.g. q(p)= b R_□ p.
Membership queries. These queries are either Boolean queries asking if some object or feature belongs to a given concept, e.g. q= y::C, or queries asking for names of all individuals appearing in 𝒜 that are in the extension or intension of a concept C, e.g. q(p)=p:C.
Subsumption queries. These queries are Boolean queries asking if a concept C_1 is included in C_2, i.e. q=C_1 ⊑ C_2.[Note no non-trivial subsumptions are implied by knowledge bases with acyclic TBoxes. However, we include such queries as the algorithm can be used to answer queries regarding trivial (those implied by logic) subsumption efficiently. Moreover, we believe that the algorithm extend ideas used to answer these queries may be used in future generalizations to knowledge bases with cyclic TBoxes.]
As Algorithm <ref> is polynomial-time and gives a model which is of polynomial-size in |𝒜|, we have the following corollary.
For any LE-𝒜ℒ𝒞 ABox 𝒜, a query q of the above forms consisting of concepts and individual names appearing in 𝒜, can be answered in polynomial time in |𝒜| using Algorithm <ref>.
Relationship, membership, and subsumption queries can also be answered in polynomial time by converting them into a problem of consistency checking (see <cit.> for more details). However, it involves performing tableaux expansion for each query, while our result implies that we can answer multiple Boolean and naming queries with a single run of tableaux algorithm.
If a subsumption or membership query involves a concept C not appearing in 𝒜, we can answer such a query by adding C to 𝒜 through the creation rule, i.e. adding the terms a_C:C and x_C::C to 𝒜. If we have multiple queries involving concepts not appearing in 𝒜, we can add all of these concepts simultaneously and answer all the queries with a single run of the tableaux algorithm.
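Once the completion has been computed, these query types reduce to membership tests in the completed term set. A sketch in the same toy encoding as above (a real implementation would also handle R_□/R_◊ terms and the creation of classifying individuals for fresh concepts, as in the remark above):

def holds_relation(completion, b, y):
    """Boolean relationship query: b I y."""
    return ("I", b, y) in completion

def members_of(completion, C):
    """Naming membership query: all objects b with b : C, read off via b I x_C."""
    return {t[1] for t in completion if t[0] == "I" and t[2] == ("x", C)}

def subsumed(completion, C1, C2):
    """Subsumption query C1 <= C2, read off via a_{C1} I x_{C2}."""
    return ("I", ("a", C1), ("x", C2)) in completion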
Disjunctive relationship and membership queries. A disjunctive relationship (resp. membership) query is formed by taking the
disjunction of a finite number of relationship (resp. membership) queries, e.g. q=b I y ∨ b I z, and q(p)= p:C_1 ∨ p:C_2[The symbol ∨ in this paragraph refers to join in the first-order (meta) language, and not join of concepts in LE-𝒜ℒ𝒞.]. The following lemma implies that we can answer such queries in LE-𝒜ℒ𝒞 by answering each disjunct separately.
Let t_1 and t_2 be any LE-𝒜ℒ𝒞 ABox terms not containing negation. Then, for any consistent LE-𝒜ℒ𝒞 ABox 𝒜, 𝒜⊨ t_1 ∨ t_2 iff 𝒜⊨ t_1 or 𝒜⊨ t_2.
𝒜⊨ t_1 ∨ t_2 iff 𝒜∪{¬t_1, ¬t_2} is inconsistent. By the tableaux algorithm for LE-𝒜ℒ𝒞, the completion of 𝒜∪{¬t_1, ¬t_2} is 𝒜̄∪ B, where the only terms in B are ¬t_1, ¬t_2, and the terms obtained by applying the negative rules ¬b or ¬x to these terms. Therefore, as 𝒜∪{¬t_1, ¬t_2} is inconsistent, 𝒜̄∪ B must contain a clash. But as 𝒜 is consistent, 𝒜̄ does not contain any clash. Therefore, some term in 𝒜̄ clashes with ¬t_1 or ¬t_2 or a term obtained by applying the negative rule ¬b or ¬x to these terms. This implies that 𝒜∪{¬t_1} or 𝒜∪{¬t_2} must be inconsistent. Therefore, we have 𝒜⊨ t_1 or 𝒜⊨ t_2.
§.§ Negative queries
Negative queries are obtained by applying negation to relationship, membership and subsumption queries discussed above. These queries ask if the given ABox implies that some object is not related to some feature or some object (resp. feature) does not belong to some concept, or that one concept is not included in another concept. We start with negative relationship queries.
For any consistent LE-𝒜ℒ𝒞 ABox 𝒜 and for any individual names b and y,
1. 𝒜⊨¬(b I y) iff ¬(b I y)∈𝒜̄,
2. 𝒜⊨¬(b R_□ y) iff ¬(b R_□ y) ∈𝒜̄,
3. 𝒜⊨¬(y R_◊ b) iff ¬(y R_◊ b) ∈𝒜̄.
We only prove items 1 and 2. The proof for item 3 is similar. For item 1, the right to left implication is trivial. For the left to right implication, suppose 𝒜⊨¬(b I y). Then 𝒜∪{b I y} must be inconsistent. As b and y appear in 𝒜, no tableaux expansion rule has the term b I y in its premise. Therefore, the tableaux completion of 𝒜∪{b I y} is 𝒜̄∪{b I y}. As 𝒜 is consistent, 𝒜̄ does not contain a clash. Therefore, since 𝒜̄∪{b I y} must contain a clash, we have ¬(b I y) ∈𝒜̄. However, note that no tableaux expansion rule can add such a term for individual names b, y appearing in the original ABox 𝒜. Therefore, ¬(b I y) ∈𝒜.
For item 2, the right to left implication is also trivial. For the left to right implication, suppose 𝒜⊨¬(b R_□ y). Then 𝒜∪{b R_□ y} must be inconsistent. As b and y appear in 𝒜, the only tableaux expansion rule having the term b R_□ y in its premise is the adjunction rule R_□, which adds the terms ◆b I y and b I □y. Again, the only rules that have any of these terms in their premise are the I-compatibility rules, which add b R_□ y back to the tableaux expansion. Therefore, the tableaux completion of 𝒜∪{b R_□ y} is 𝒜̄∪{b R_□ y, ◆b I y, b I □y}. As 𝒜 is consistent, 𝒜̄ does not contain a clash. Therefore, as 𝒜̄∪{b R_□ y, ◆b I y, b I □y} must contain a clash, one of the terms ¬(b R_□ y), ¬(◆b I y) or ¬(b I □y) must be in 𝒜̄. However, no expansion rule can add terms of any of these forms. Furthermore, terms of the form ¬(◆b I y) or ¬(b I □y) cannot appear in the original ABox 𝒜. Therefore, ¬(b R_□ y) must be in 𝒜̄.
As a result, we can answer negative relationship queries over a consistent LE-𝒜ℒ𝒞 ABox 𝒜 in linear time by searching through 𝒜̄.
We cannot apply a similar strategy to membership queries of the form ¬(b:C) or ¬(y::C), as such terms can be implied by 𝒜 without being present in 𝒜̄. For example, consider the ABox 𝒜 = {b:C_1, ¬(b:C_1 ∧ C_2)}, which implies ¬(b:C_2), but this term does not appear in
𝒜̄. This means that the model obtained by the tableaux algorithm is not a universal model for these types of queries. Hence, to answer queries of the form ¬(b:C) or ¬(y::C), we must proceed by the usual route of adding the terms b:C or y::C to 𝒜 and checking the consistency of the resulting ABox.
We can also consider negative subsumption queries, i.e. queries asking whether the given ABox 𝒜 implies that one concept C_1 is not included in another concept C_2, denoted ¬(C_1⊑ C_2).
Answering this query amounts to checking whether the knowledge base obtained by adding the TBox axiom C_1⊑ C_2 to the ABox 𝒜 is consistent. We can answer these queries for any TBox term C_1⊑ C_2, such that no sub-formula of C_1 appears in C_2, by using unraveling on C_1⊑ C_2 and then applying Algorithm <ref>.
In this and previous sections, we have discussed answering Boolean queries of all the forms which an LE-𝒜ℒ𝒞 ABox term can take. Hence, we can combine these methodologies to answer ontology equivalence queries asking if two ABoxes are equivalent, i.e. 𝒜_1 ≡𝒜_2, by checking if every term in 𝒜_2 is implied by 𝒜_1 and vice versa.
§.§ Separation and differentiation queries
An important set of queries is queries asking if the given knowledge base implies (ensures) that two individuals can be differentiated from each other by a certain property. In this section, we consider some queries of this type in LE-𝒜ℒ𝒞.
Separation queries are queries of the form S(b,d)=∃ p (b I p ∧¬(d I p)) or S(y,z)= ∃ p (p I y ∧¬(p I z)) for two object (resp. feature) names b, d (resp. y, z) appearing in a given ABox. These queries can be understood as asking whether two given objects or features can be separated for sure using the relation I, based on the given knowledge base. Note that for any
LE-𝒜ℒ𝒞 ABox 𝒜,
1. 𝒜⊭∃ p(b I p ∧¬(d I p)) iff 𝒜∪{∀ p(b I p ⇒ d I p)} is consistent, and
2. 𝒜⊭∃ p (p I y ∧¬(p I z)) iff 𝒜∪{∀ p (p I y ⇒ p I z)} is consistent.
Therefore, a separation query S(b,d) (resp. S(y,z)) can be answered by checking if 𝒜 is consistent in the extension of LE-𝒜ℒ𝒞 with the axiom ∀ p (b I p ⇒ d I p) (resp. ∀ p (p I y ⇒ p I z)). To this end, we consider the expansion of the LE-𝒜ℒ𝒞 tableaux algorithm with the rules
(SA(b,d)): from b I x, add d I x. (SX(y,z)): from a I y, add a I z.
The tableaux algorithm obtained by adding the rule SA(b,d) (resp. SX(y,z)) to the LE-𝒜ℒ𝒞 tableaux expansion rules provides a polynomial-time sound and complete decision procedure for checking the consistency of 𝒜∪{∀ p (b I p ⇒ d I p)} (resp. 𝒜∪{∀ p(p I y ⇒ p I z)}).
See Appendix <ref>.
Hence, we can use this expanded tableaux algorithm to answer separation queries over a given ABox 𝒜 in polynomial time. This also allows us to answer differentiation queries Dif(b,d) = S(b,d) ∧ S(d,b) (resp. Dif(y,z) = S(y,z) ∧ S(z,y)), which ask if 𝒜 implies that b and d (resp. y and z) can be differentiated from each other by the relation I, in polynomial time. We can similarly define and answer separation and differentiation queries for the relations R_□ and R_◊. Furthermore, we can answer identity queries asking if 𝒜 implies that two individuals are not identical by checking if they can be differentiated by some relation.
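A sketch of how S(b,d) could be answered with this extension, reusing the toy `saturate` function from the earlier sketch; the wrapper below merely emulates the extra rule SA(b,d) by copying I-edges of b to d until a fixpoint or a clash is reached:

def separates(abox, b, d):
    """Answer the separation query S(b,d): saturate under the extra rule
    b I x => d I x and report whether the extended ABox becomes inconsistent."""
    terms, clash = saturate(abox)
    while not clash:
        extra = {("I", d, t[2]) for t in terms if t[0] == "I" and t[1] == b}
        if extra <= terms:
            break                      # fixpoint reached without a clash
        terms, clash = saturate(terms | extra)
    return clash   # True iff A with "forall p (b I p => d I p)" is inconsistent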
The strategy used to answer separation queries can also be used to answer many other interesting types of queries. For example, we can consider separation queries which ask about separation between different relations. Consider the queries SA(R_□, R_◊, b) = ∃ p (b R_□ p ∧¬(p R_◊ b)) and SX(R_◊, R_□, y) = ∃ p (y R_◊ p ∧¬(p R_□ y)). These queries ask if the given ABox implies that the relations R_□ and R_◊ are not local inverses of each other at some object b or feature y appearing in the given ABox.
Note that for any LE-𝒜ℒ𝒞 ABox 𝒜,
1. 𝒜⊭∃ p (b R_□ p ∧¬(p R_◊ b)) iff 𝒜∪{∀ p(b R_□ p ⇒ p R_◊ b)} is consistent, and
2. 𝒜⊭∃ p (y R_◊ p ∧¬(p R_□ y)) iff 𝒜∪{∀ p (y R_◊ p ⇒ p R_□ y)} is consistent.
Therefore, a separation query SA(R_□, R_◊, b) (resp. SX(R_◊, R_□, y)) can be answered by checking if 𝒜 is consistent in the extension of LE-𝒜ℒ𝒞 with the axiom ∀ p(b R_□ p ⇒ p R_◊ b) (resp. ∀ p (y R_◊ p ⇒ p R_□ y)). To this end, we consider the expansion of the LE-𝒜ℒ𝒞 tableaux algorithm with the following rules
(SA(R_□, R_◊, b)): from b R_□ y, add y R_◊ b. (SX(R_◊, R_□, y)): from y R_◊ b, add b R_□ y.
The tableaux algorithm obtained by adding the rule SA(R_□, R_◊, b) (resp. SX(R_◊, R_□, y)) to the LE-𝒜ℒ𝒞 tableaux expansion rules provides a polynomial-time sound and complete decision procedure for checking the consistency of 𝒜∪{∀ p(b R_□ p ⇒ p R_◊ b)} (resp. 𝒜∪{∀ p (y R_◊ p ⇒ p R_□ y)}).
See Appendix <ref>.
We can similarly answer queries of the forms SA(R_□, I, b) = ∃ p (b R_□ p ∧¬(b I p)) and SX(R_◊, I, y) = ∃ p (y R_◊ p ∧¬(p I y)), using the tableaux algorithm expanded with the rules: from b R_□ y, add b I y; and from y R_◊ b, add b I y.
However, this does not apply to all separation queries on relations. For example, consider queries of the form SA(I, R_□, b) = ∃ p (b I p ∧¬(b R_□ p)) and SX(I, R_◊, y) = ∃ p (p I y ∧¬(y R_◊ p)). This is because the expansion of the LE-𝒜ℒ𝒞 tableaux algorithm with the rules: from b I y, add b R_□ y; and from b I y, add y R_◊ b, may not be terminating. We leave this as part of future work.
§ EXAMPLES
In this section, we give a toy example of an LE-𝒜ℒ𝒞 knowledge base and some queries of different types, to demonstrate how the various algorithms discussed earlier in the paper answer these queries.
Suppose we want to create a knowledge base representing the categorization of some movies on a streaming website, which can be used to answer queries based on them.
concept name symbol
concept name symbol
concept name symbol
Italian movies IM German movies GM French movies FM
European movies EUM Recent movies RM Recent drama movies RDM
Drama movies DM Famous drama movies FDM
object symbol
object symbol
All the President's Men m_1 Spirited Away m_2
Oppenheimer m_3
Cinema Paradiso m_4
feature symbol feature symbol feature symbol
German language f_1 French language f_2 Based on real story f_3
Serious plot f_4 Released after 2015 f_5 Released after 2020 f_6
Suppose the following knowledge base 𝒦_1 presents the information obtained by the website regarding the movies, their features and their categorization into the above categories, from some initial source which possibly has incomplete information.
𝒜_1={ m_4:IM, ¬(m_4 I x_∅), x_∅ :: FM∧ IM,
x_∅::GM∧ IM, f_1::GM, f_2::FM,
f_4::DM, m_3:RDM, m_3 I f_6, m_1 I f_3,
¬(m_1 I f_2), ¬(m_2:EUM)},
𝒯_1={EUM ≡ GM∨ FM,
RDM ≡ RM∧ DM,
IM ⊑ EUM}.
For any movie m and feature y, we have m I y (resp. ¬(m I y)) iff according to the initial source database, movie m has (resp. does not have) feature y. The feature x_∅ intuitively represents a contradiction. The terms x_∅::FM∧ IM and x_∅::GM∧ IM state that there is no movie that is both a French movie and an Italian movie, or both a German movie and an Italian movie. The term m_4:IM specifies that Cinema Paradiso is an Italian movie. The term f_4::DM states that drama movies have a serious plot. Other terms in 𝒜_1 can be explained similarly. The term EUM≡ GM∨ FM states that the category of European movies is the smallest category on the website which contains both German movies and French movies. The term IM⊑ EUM can be equivalently written as IM≡ EUM∧ C for some new category C, meaning all Italian movies are European movies. Other terms in 𝒯_1 can be explained similarly. Note that the terms ¬(m_4 I x_∅), x_∅::FM∧ IM, and x_∅::GM∧ IM together imply that m_4 is not in (FM∧ IM)∨(GM∧ IM). However, m_4 is in IM = EUM∧ IM = (GM∨ FM) ∧ IM. Therefore, this knowledge base is inconsistent in distributive logic, but it is consistent in our setting of LE-𝒜ℒ𝒞.
Additionally, the website also tries to get an understanding of subjective (epistemic) view of different user groups on the website regarding movies, their features, and categorization. To this end, website asks some users from different groups the following two questions:
(a) Given a list of movies:
(a1) Please choose movies which have feature y from the list;
(a2) please choose movies which do not have feature y from the list.
(b) Given a list of features:
(b1) please choose features that describe movie m from the list;
(b2) please choose features which do not describe movie m from the list.
Note that there can be movies (resp. features) in the list of options which are not chosen as answer to either (a1) or (a2) (resp. (b1) or (b2)).
We model the information obtained from the above questions as follows: if some user from group i chooses movie m (resp. movie m, resp. feature y, resp. feature y) as an answer to question (a1) (resp. (a2), resp. (b1), resp. (b2)), then we add m R__i y (resp. (m R__i y), resp. y R__i m, resp. (y R__i m)) to the knowledge base. Note that, in general, neither of the terms y R__i m and m R__i y implies the other. This is because question (a) and question (b) may be asked to different users from group i.
Then, for any category C, [R__i] C denotes the category defined by objects which are reported to have all the features in C (description of C) by some user in group i. Thus, [R__i] C can be seen as the category of movies which are considered to be in C according to the user group i. This means that for any movie m in [R__i] C and any feature y of C, some user in the group i will name m as a movie having feature y as an answer to (a1).
Similarly, ⟨R__i⟩ C denotes the category defined by features which all objects in C (objects in C) are reported to have by some user in group i. Thus, ⟨R__i⟩ C can be seen as the category of movies defined by features which are considered to be in the description of C according to the user group i. This means that for any feature y of ⟨R__i⟩ C and any movie m in C, some user in the group i will name y as a feature of the movie m as an answer to (b1).
Moreover, the website can also ask users the following questions:
(c) Please choose movies which belong to the category C from a given list of movies.
(d) Please choose features that describe the category C from a given list of features.
If m (resp. y) is chosen as an answer of (c) (resp. (d)) by some user in group i, then we add term m:[R__i] C (resp. y::⟨R__i⟩ C) to the knowledge base. Here, we assume that for any feature y in description of C, if some user chooses m as a movie in category C, then there is some user from the same group who will also choose m as a movie with feature y in answer to (a1). This assumption ensures that the [R__i] C is interpreted in accordance with LE-𝒜ℒ𝒞 semantics from relation R__i.
Similarly, for any movie m in C, we assume that if some user chooses y as a feature in the description of category C, then there is some user from the same group who will also choose y as a feature of the movie m in answer to (b1).
This assumption ensures that the ⟨R__i⟩ C is interpreted in accordance with LE-𝒜ℒ𝒞 semantics from relation R__i [These assumptions can be justified if we assume we have a large number of users in each group so that at least some users in the group will have the information regarding all the movies and their features under consideration.].
The following table presents the knowledge base 𝒦_2 representing the different user groups' views regarding the movies, obtained from the answers to the above questions. For simplicity, we assume that we have only two different user groups. From here on, we will use _i (resp. _i) to denote [R__i] (resp. ⟨R__i ⟩) for i=1,2.
𝒜_2={ m_3 R__1 f_3, m_3 R__2 f_3,(m_1 R__1 f_6),
f_3 R__1 m_3, f_3 R__2 m_3, m_3: _2 RDM,
f_5::_1 RM, (m_1 R__2 f_5)},
𝒯_2={FDM ≡_1 DM ∧_2 DM}.
Let 𝒦 =𝒦_1 ∪𝒦_2
be the knowledge base obtained by combining knowledge from the source database and from users.
Given the knowledge base 𝒦, we can answer the following queries.
Positive queries.
By Lemma <ref>, these queries can be answered using the universal model constructed using the Tableaux Algorithm <ref> from the ABox obtained by unraveling 𝒦. We depict this model in Appendix <ref>.
(1) q(p)=m_3 I p asking to name all the features implied by 𝒦 that the movie Oppenheimer has. Using the universal model, we can give the answer q(a)={f_4,f_6}.
(2) q=m_4:FDM asking if 𝒦 implies that Cinema Paradiso is a Famous drama movie. We can give answer `No' since in the universal model, I(m_4, x_FDM)=0.
(3) q=_2 RDM ⊑_2 DM asking if 𝒦 implies that all the movies considered to be recent drama movies by users in group 2 are also considered to be drama movies by them. We can give answer `Yes' since in the universal model, a__2 RDM I x__2 DM.
(4) q=m_3:_2_1 RM asking if 𝒦 implies that, for any feature which is considered to be in the description of Recent movies according to some user in group 1, there is some user in group 2 who considers (reports) Oppenheimer to have this feature. We can give the answer `No' since, in the universal model of the knowledge base 𝒦'= 𝒦∪{a__2_1 RM: _2_1 RM, x__2_1 RM:: _2_1 RM}, we have I(m_3, x__2_1 RM)=0. In Appendix <ref>, we provide the universal model for 𝒦, which is obtained from the complete tableaux expansion of 𝒦. It is easy to check from the shape of the tableaux expansion rules that no tableaux expansion rule can add the term m_3 I x__2_1 RM during the expansion of the knowledge base 𝒦'. Hence,
I(m_3, x__2_1 RM)=0 holds in the universal model of 𝒦'.
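To make the lookup-based answering of positive queries concrete, the following toy snippet (ours, not part of the paper's algorithms) hard-codes only the small fragment of the universal model needed above, with ad-hoc string names for the classifying objects and features, and reproduces the stated answers by simple membership tests.

```python
# Toy illustration: positive queries are answered by lookups in the universal model.
# The relation below is only a fragment, encoded with ad-hoc names.
universal_I = {
    ("m_3", "f_4"), ("m_3", "f_6"),            # Oppenheimer: serious plot, released after 2020
    ("a_box2_RDM", "x_box2_DM"),               # witnesses query (3)
}

def features_of(obj):
    """Answer q(p) = obj I p over the model fragment."""
    return {y for (o, y) in universal_I if o == obj}

print(features_of("m_3"))                          # {'f_4', 'f_6'}: the answer to query (1)
print(("m_4", "x_FDM") in universal_I)             # False; in the full model I(m_4, x_FDM) = 0, so (2) is `No'
print(("a_box2_RDM", "x_box2_DM") in universal_I)  # True, so (3) is `Yes'
```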
Negative queries.
(1) q=(m_1:_2_1 RM) asking if 𝒦 implies that the movie All the President's Men is not a movie in the category _2_1 RM (the interpretation of this category is given in the previous example). We can give the answer `Yes': if we add the term m_1:_2_1 RM to 𝒦, then from this term, together with the term f_5::_1 RM appearing in 𝒦, we would obtain m_1 R__2 f_5 by the corresponding expansion rule. Thus, the resulting knowledge base is not consistent, which means that 𝒦 implies (m_1:_2_1 RM).
(2) q=(m_3 R__1 f_4) asking if 𝒦 implies that some user in group 1 considers Oppenheimer to be a movie which does not have a serious plot. We can give the answer `No' since the ABox obtained by unraveling 𝒦 does not contain the term (m_3 R__1 f_4) and by Lemma <ref> it is not implied by 𝒦.
Separation queries.
(1) q= Dif(m_2, m_4) asking if 𝒦 implies that there is a feature that one of the movies Spirited Away and Cinema Paradiso has but the other does not. We can give the answer `Yes'. If we add the rules SA(m_2,m_4) and SA(m_4,m_2) to the LE-𝒜ℒ𝒞 tableaux expansion rules and run the resulting tableaux algorithm on the ABox obtained by unraveling 𝒦, we get the clash shown below.
rule | premises | added terms
create | – | x_GM∨ FM:: GM∨ FM
∧_A | m_4:(GM∨ FM)∧ C | m_4:GM∨ FM, m_4:C
I | m_4:GM∨ FM, x_GM∨ FM:: GM∨ FM | m_4 I x_GM∨ FM
SA(m_4,m_2) | m_4 I x_GM∨ FM | m_2 I x_GM∨ FM
x | (m_2:GM∨ FM) | (m_2 I x_GM∨ FM)
(2) q=SA(R__1, I, m_4) asking if 𝒦 implies that there is a feature that some user in group 1 considers Cinema Paradiso has but according to the initial source database it does not. We can give the answer `No', since if we add the rule SA(R__1, I, m_4) to the LE-𝒜ℒ𝒞 tableaux expansion rules and run the resulting tableaux algorithm on ABox obtained by unraveling 𝒦, we will get no clash, i.e. 𝒦 is consistent in the extension of LE-𝒜ℒ𝒞 with the axioms ∀ y (m_4 R__1 y ⇒ m_4 I y).
§ CONCLUSION AND FUTURE WORK
In this paper, we have shown that the tableaux algorithm for LE-𝒜ℒ𝒞, or its extension with appropriate rules, can be used to answer several types of queries over LE-𝒜ℒ𝒞 ABoxes in polynomial time. Additionally, these algorithms can be generalized, by unraveling, to exponential-time algorithms for LE-𝒜ℒ𝒞 knowledge bases with acyclic TBoxes.
Dealing with cyclic TBoxes and RBox axioms.
In this paper, we introduced a tableaux algorithm only for knowledge bases with acyclic TBoxes. In the future, we intend to generalize the algorithm to deal with cyclic TBoxes as well. Another interesting avenue of research is to develop tableaux and query answering algorithms for extensions of LE-𝒜ℒ𝒞 with RBox axioms. RBox axioms are used in description logics to describe the relationship between different relations in knowledge bases and the properties of these relations such as reflexivity, symmetry, and transitivity. It would be interesting to see if it is possible to obtain necessary and/or sufficient conditions on the shape of RBox axioms for which a tableaux algorithm can be obtained. This has an interesting relationship with the problem in LE-logic of providing computationally efficient proof systems for various extensions of LE-logic in a modular manner <cit.>.
Universal models for other types of queries
In this work, we showed that the model constructed by the tableaux Algorithm <ref> acts as a universal model for several types of positive queries. In the future, it would be interesting to study whether we can develop tableaux algorithms in such a way that the resulting models act as universal models for negative queries and other types of queries. This would allow the algorithm to answer multiple such queries efficiently.
Answering more types of queries
In Section <ref>, we mentioned that certain separation queries cannot be answered by our method due to the potential non-termination of the tableaux arising from the naive extension of the LE-𝒜ℒ𝒞 tableaux expansion rules corresponding to these queries. However, it may be possible to achieve termination in some of these cases by incorporating appropriate loop-check conditions into these expansion rules. In the future, we intend to study such extensions.
Generalizing to more expressive description logics. The DL LE-𝒜ℒ𝒞 is the non-distributive counterpart of 𝒜ℒ𝒞. A natural direction for further research is to explore the non-distributive counterparts of extensions of 𝒜ℒ𝒞 such as 𝒜ℒ𝒞ℐ and 𝒜ℒ𝒞ℐ𝒩 and fuzzy generalizations of such description logics.
This would allow us to express more constructions like concepts generated by an object or a feature, which can not be expressed in LE-𝒜ℒ𝒞. This would provide us language to answer many more types of interesting queries regarding enriched formal contexts.
§ PROOFS
In this section, we collect proofs of some results stated throughout the paper.
§.§ Proof of Theorem <ref>
In this part, we will prove the termination, soundness and completeness of the tableaux algorithms for checking consistency of 𝒜∪{∀ p (b I p ⇒ d I p)} and 𝒜∪{∀ p (p I y ⇒ p I z)} defined in Section <ref>. We only give the proof for 𝒜∪{∀ p (b I p ⇒ d I p)}; the proof for 𝒜∪{∀ p (p I y ⇒ p I z)} would be similar.
In this paper, we will only explain the changes that must be made to the termination, soundness, and completeness proofs of the LE-𝒜ℒ𝒞 tableaux algorithm provided in <cit.>. We refer to <cit.> for details of these proofs.
Termination. To prove termination, we prove that the following lemma proved for LE-𝒜ℒ𝒞 tableaux algorithm in <cit.> also holds for its extension with rule SA(b,d).
<cit.>
For any individual names b, and y, and concept C added during tableau expansion of 𝒜,
_𝒟(C) ≤_𝒟(𝒜)+1 _𝒟(C) ≤_𝒟(𝒜)+1,
-_𝒟(𝒜 )-1 ≤_𝒟 (b) _𝒟(b) ≤_𝒟(𝒜)+1,
-_𝒟(𝒜)-1 ≤_𝒟(y) _𝒟(y) ≤_𝒟(𝒜)+1
The proof proceeds by showing that the following stronger claim holds. For any tableaux expansion 𝒜, obtained from 𝒜 after any finite number of expansion steps:
1. For any term b I y ∈𝒜, _𝒟 (b) - _𝒟 (y) ≤_𝒟(𝒜)+1, and _𝒟 (y) - _𝒟 (b) ≤_𝒟(𝒜)+1.
2. For any term b R_ y ∈𝒜, _𝒟 (b) +1 - _𝒟 (y) ≤_𝒟(𝒜)+1, and _𝒟 (y) - _𝒟 (b) ≤_𝒟(𝒜)+1.
3. For any term y R_ b ∈𝒜, _𝒟 (b) - _𝒟 (y) ≤_𝒟(𝒜)+1, and _𝒟 (y) +1 - _𝒟 (b) ≤_𝒟(𝒜)+1.
4. For any term b:C ∈𝒜, _𝒟 (b) + _𝒟 (C) ≤_𝒟(𝒜)+1, and -_𝒟 (b) - _𝒟 (C) ≤ 0.
5. For any term y::C ∈𝒜, -_𝒟 (y) - _𝒟 (C) ≤ 0, and _𝒟 (y) + _𝒟 (C) ≤_𝒟(𝒜)+1.
The proof proceeds by induction on the number of rules applied. The proofs for the initial case (i.e., for all the terms in the original ABox 𝒜) and for all the LE-𝒜ℒ𝒞 tableaux expansion rules are provided in <cit.>. Therefore, to complete the proof we need to show that the rule SA(b,d) also preserves these properties. Suppose a term d I y is added from a term b I y using rule SA(b,d). In this case, by induction, we have _𝒟 (b) - _𝒟 (y) ≤_𝒟(𝒜)+1, and _𝒟 (y) - _𝒟 (b) ≤_𝒟(𝒜)+1. As b and d are object names appearing in 𝒜, we have _𝒟 (b) = _𝒟 (d)=_𝒟 (b) = _𝒟 (d)=0. Therefore, we have _𝒟 (d) - _𝒟 (y) ≤_𝒟(𝒜)+1, and _𝒟 (y) - _𝒟 (d) ≤_𝒟(𝒜)+1. Hence proved.
This lemma bounds the number of new constant and concept names that can appear in the tableaux expansion. Therefore, it implies that the number of terms that can appear in the tableaux expansion is bounded by poly(size(𝒜)). As the tableaux algorithm has no branching rules, this implies termination. (See <cit.> for more details.)
Soundness. The soundness follows immediately from the soundness for LE-𝒜ℒ𝒞 tableaux algorithm <cit.>, and the fact that the rule SA(b,d) ensures that the model obtained from completion satisfies the axiom ∀ p (b I p ⇒ d I p).
Completeness. To prove completeness, we show that the following lemma proved for LE-𝒜ℒ𝒞 <cit.> also holds for its extension with the axiom ∀ p (b I p ⇒ d I p).
For any ABox 𝒜, any model M=(𝔽, ·^ℳ) of 𝒜 can be extended to a model M'=(𝔽', ·^ℳ') such that 𝔽'=(A',X',I',{R_'}_∈𝒢, {R_'}_∈ℱ), A ⊆ A' and X ⊆ X', and moreover for every ∈𝒢 and ∈ℱ:
1. There exists a_C ∈ A' and x_C ∈ X' such that:
C^ℳ' =(I'^(0)[x_C^ℳ'], I'^(1)[a_C^ℳ']), a_C^ℳ'∈C^ℳ', x_C^ℳ'∈C^ℳ',
2. For every individual b in A there exist b and b in A' such that:
I'^(1)[ b] = R_'^(1)[b^ℳ'] I'^(1)[ b] = R_'^(0)[b^ℳ'],
3. For every individual y in X there exist y and ▪ y in X' such that:
I'^(0)[▪ y] = R_'^(1)[y^ℳ'] I'^(0)[ y] = R_'^(0)[y^ℳ'].
4. For any C, C^ℳ= C^ℳ'∩ A and C^ℳ= C^ℳ'∩ X.
In <cit.>, this lemma was proved by constructing such a model M'. Here, we show that if the model M additionally satisfies the axiom ∀ p (b I p ⇒ d I p), then so does the model M'. This follows from the fact that the model M' is constructed in such a way that (see <cit.> for more details), for any b,d ∈ A, a term b I^ℳ' x_C is added for some newly added element x_C∈ X'∖ X iff we have b I^ℳ y for all y ∈ C^ℳ. Then, as M ⊨ ∀ p (b I p ⇒ d I p), we also get d I y for all y ∈ C^ℳ, which implies d I^ℳ' x_C.
The above lemma ensures that if 𝒜∪{∀ p (b I p ⇒ d I p)} has a model, then it has a model with classification of objects and features. The completeness proof then proceeds by showing that if 𝒜 is consistent, then the model for 𝒜 with the above properties satisfies all the terms in the completion of 𝒜. We refer to <cit.> for details.
§.§ Proof of Theorem <ref>
We only give the proof for SA(R_, R_, b). The proof for SX(R_, R_, y) is similar. The proof for soundness and completeness is analogous to the proof for soundness and completeness of Theorem <ref> given in Section <ref>.
To prove termination, we would need the following lemma.
We define -leading concepts as the smallest set of concepts satisfying the following conditions.
* For any atomic concept C, the concept C is -leading.
* If C is -leading, then C is -leading.
* If C is -leading, then C∨ C_1 is -leading for any C_1.
* If C_1 and C_2 are -leading, then C_1 ∧ C_2 is -leading.
We define -leading concepts as the smallest set of concepts satisfying the following conditions.
* For any atomic concept C, the concept C is -leading.
* If C is -leading, then C is -leading.
* If C is -leading, then C∧ C_1 is -leading for any C_1.
* If C_1 and C_2 are -leading, then C_1 ∨ C_2 is -leading.
Note that a concept C_1 ∧ C_2 (resp. C_1 ∨ C_2) is -leading (resp. -leading) iff both C_1 and C_2 are -leading (resp. -leading).
For any LE-𝒜ℒ𝒞 ABox 𝒜 and any individual names b, y, the following holds:
* If a term of the form b I x_C or b :C or x_C R_ b or b I ▪ x_C appears in 𝒜, then C must be -leading.
* If a term of the form a_C I y or y::C or a_C R_ y or a_C I y appears in 𝒜, then C must be -leading.
* If we have a term of the form x_C::C' or a_C' I x_C or a_C':C in 𝒜, and C is -leading, then C' is also -leading.
* If we have a term of the form a_C:C' or a_C I x_C' or x_C'::C in 𝒜, and C is -leading, then C' is also -leading.
* No term of the form b I y
can belong to 𝒜.
* No term of the form b R_ y
can belong to 𝒜.
* No constant of the form b or ▪ y appears in 𝒜 for any b or y.
The proof follows by a simultaneous induction on the number of applications of the expansion rules. The proof for base case is obvious as 𝒜 does not contain individual name of the form b or y. We give the proof for all inductive cases now.
Creation rule: Only terms added by this rule are of the form a_C:C or x_C::C for some C ∈𝒜. For terms of both of these types, all the items in lemma hold trivially.
Basic rule: In this case, we add term b I y from terms b:C and y::C. We only need to consider the following cases: (1) b is of the form d, and y is of the form x_C' for some C'. By induction item 1, C is -leading.
Hence, by induction item 4, C' is also -leading. Therefore, the new term
d I x_C' also satisfies item 1. (2) b is of the form a_C' for some C', and y is of the form z. By induction item 2, C is -leading. Hence, by induction item 3, C' is also -leading. Therefore, the new term a_C' I z also satisfies item 2. (3) b is of the form a_C_1, and y is of the form x_C_2. In this case, if C_1 (resp. C_2) is -leading (resp. -leading), then by induction item 4 (resp. item 3) C would be -leading (resp. -leading).
By again applying the same items, we would get C_2 (resp. C_1) is -leading (resp. -leading). Therefore, the added term a_C_1 I x_C_2 satisfies items 3 and 4. Item 5 is satisfied, since if any of these terms is of the form b I y, then both b:C, and y::C appear in 𝒜. By induction items 1 and 2 C must be both -leading and -leading. However, no such concept exists. Item 7 is satisfied as this rule does not add new individual names.
Rules ∧_A and ∨_X: We only give the proof for ∧_A, the proof for ∨_X is dual. In this case, we add terms b:C_1, and b:C_2 from term b:C_1 ∧ C_2. We need to consider the following cases: (1) b is of form d. By induction item 1, C_1 ∧ C_2 is -leading. Therefore, both C_1 and C_2 must be -leading. Hence, the newly added terms b:C_1 and b:C_2 also satisfy item 1. (2) b is of the form a_C. We have to show items 3 and 4 hold. If C is -leading, then by induction item 4, C_1 ∧ C_2 is -leading, which implies that both C_1 and C_2 are -leading. Hence, the added terms b:C_1 and b:C_2 satisfy item 4. To show item 3 holds for the new terms, w.l.o.g. suppose C_1 is -leading. Then, by def. C_1 ∧ C_2 is -leading as well. Therefore, by induction item 3, C is -leading. We can similarly show C is -leading, when C_2 is -leading. Item 7 is satisfied as this rule does not add new individual names.
Rules and : We only give the proof for , the proof for is dual. In this case, we add term of the form b R_ y from terms b:[R_]C and y::C. By induction item 6, b can not be of the form d. If b is of the form a_C' for some C', then by induction item 3, C' is -leading. Hence, the added term a_C' R_ y satisfies item 2. It also satisfies item 6 because C' being -leading can not have as the outermost connective. Item 7 is satisfied as this rule does not add new individual names.
Rules y, ▪ y, b, and b: We only give the proof for y, the proofs for other rules are similar. In this case, we add term of the form b R_ y from term b I y. By induction item 6, b can not be of the form d. If b is of the form a_C for some C, then by induction item 2, C must be -leading. Therefore, the added term a_C R_ y satisfies item 2. It also satisfies item 6, since C being -leading, can not have as the outermost connective. Item 7 is satisfied as this rule does not add new individual names.
Rules ∧_A^-1 and ∨_X^-1: We only give the proof for ∧_A^-1, the proof for ∨_X^-1 is dual. In this case, we add term of the form b:C_1 ∧ C_2 from terms of the form b:C_1 and b:C_2. We need to consider the following cases: (1)
b is of the form d, then by induction item 1, C_1 and C_2 are -leading. Then, by def. C_1 ∧ C_2 is also -leading. Therefore, the new term b:C_1 ∧ C_2 satisfies item 1. (2) b is of the form a_C. We have to show items 3 and 4 hold. If C is -leading, then by induction item 4, both C_1 and C_2 are -leading, which implies that C_1 ∧ C_2 is -leading. Hence, new term b:C_1 ∧ C_2 satisfies item 4. To show item 3 holds for the new term, suppose C_1 ∧ C_2 is -leading. Then, C_1 is -leading or C_2 is -leading. Therefore, by induction item 3, C is -leading. It also satisfies item 6, since C being -leading, can not have as the outermost connective. Item 7 is satisfied as this rule does not add new individual names.
Rules R_ and R_: We only the give proof for R_, the proof for R_ is dual. In this case, we add terms b I y, and b I y from b R_ y.
By induction item 5, b can not be of the form d. If b is of the form a_C for some C, then by induction item 2, C must be -leading. Therefore, the added terms a_C I y and a_C I y satisfy item 2. As b can not be of the form d, the possibly new constant b is not of the form d for any d. Hence, item 3 is satisfied.
Rules a_C and x_C: We only give the proof for a_C, the proof for x_C is dual. In this case, we add term of the form b:C from term b I x_C. We need to consider the following cases: (1) b is of the form d, then by induction item 1, C must be -leading. Therefore, the added term b:C also satisfies item 1. (2) If b is of the form a_C' for some C'. We have to show items 3 and 4 hold. If C' is -leading, then by induction item 4, C is -leading. Hence, added term a_C':C satisfies item 4. To show item 3 holds for the new term, w.l.o.g. suppose C is -leading. Then, by induction item 3, C' is -leading.
As a corollary, we get the following result.
For any LE-𝒜ℒ𝒞 ABox 𝒜 and any terms d R_ z and y R_ b, if d R_ z belongs to the completion of 𝒜∪{y R_ b}, then d R_ z belongs to the completion of 𝒜.
For any term of the form y R_ b, the only rule that has it in premise is the adjunction rule R_ which adds terms b I y, and b I ▪ y. The term b I ▪ y cannot lead to the addition of any other term. If the term b I y leads to the addition of a term of the form d R_ y, then it means that we must have b : C ∈𝒜, for some C or b I z ∈𝒜 for some z. However, none of these is possible by the lemma <ref>.
This lemma immediately implies the following modified version of Lemma <ref>.
For any individual names b, and y, and concept C added during tableau expansion of 𝒜,
_𝒟(C) ≤_𝒟(𝒜)+1 _𝒟(C) ≤_𝒟(𝒜)+1,
-_𝒟(𝒜 )-2 ≤_𝒟 (b) _𝒟(b) ≤_𝒟(𝒜)+1,
-_𝒟(𝒜)-1 ≤_𝒟(y) _𝒟(y) ≤_𝒟(𝒜)+2
The proof proceeds by showing that the following stronger claim holds. For any tableaux expansion 𝒜, obtained from 𝒜 after any finite number of expansion steps:
1. For any term b I y ∈𝒜, _𝒟 (b) - _𝒟 (y) ≤_𝒟(𝒜)+1, and _𝒟 (y) - _𝒟 (b) ≤_𝒟(𝒜)+2.
2. For any term b R_ y ∈𝒜, _𝒟 (b) +1 - _𝒟 (y) ≤_𝒟(𝒜)+1, and _𝒟 (y) - _𝒟 (b) ≤_𝒟(𝒜)+1.
3. For any term y R_ b ∈𝒜, _𝒟 (b) - _𝒟 (y) ≤_𝒟(𝒜)+1, and _𝒟 (y) +1 - _𝒟 (b) ≤_𝒟(𝒜)+2.
4. For any term b:C ∈𝒜, _𝒟 (b) + _𝒟 (C) ≤_𝒟(𝒜)+1, and -_𝒟 (b) - _𝒟 (C) ≤ 1.
5. For any term y::C ∈𝒜, -_𝒟 (y) - _𝒟 (C) ≤ 0, and _𝒟 (y) + _𝒟 (C) ≤_𝒟(𝒜)+1.
The proof relies on the idea that the new rule SA(R_, R_, b) introduces a new term of the form y R_ b from a term b R_ y. However, by Corollary <ref>
the term b R_ y must belong to the LE-𝒜ℒ𝒞 completion of 𝒜. Hence, it satisfies Condition 2 by Lemma <ref>. The proof for all other conditions follows by a straightforward generalization of the proof of <cit.>.
§ MODELS FOR EXAMPLE KNOWLEDGE BASE
The knowledge base in Section <ref> is given by 𝒦 =𝒦_1 ∪𝒦_2, where the initial source database 𝒦_1=(𝒜_1,𝒯_1) is given by
𝒜_1={ m_4:IM, (m_4 I x_∅), x_∅ :: FM∧ IM,
x_∅::GM∧ IM, f_1::GM, f_2::FM,
f_4::DM, m_3:RDM, m_3 I f_6, m_1 I f_3,
(m_1 I f_2), (m_2:EUM)},
𝒯_1={EUM ≡ GM∨ FM,
RDM ≡ RM∧ DM,
IM ⊑ EUM},
while the database from users of two different groups 𝒦_2 = (𝒜_2,𝒯_2) is given by
𝒜_2={ m_3 R__1 f_3, m_3 R__2 f_3,(m_1 R__1 f_6),
f_3 R__1 m_3, f_3 R__2 m_3, m_3: _2 RDM,
f_5::_1 RM, (m_1 R__2 f_5)},
𝒯_2={FDM ≡_1 DM ∧_2 DM}.
By unraveling TBoxes we get the following terms:
* EUM≡ GM∨ FM
* RDM≡ RM∧ DM
* IM≡ (GM∨ FM)∧ C for some C not appearing in 𝒦
* FDM ≡_1 DM ∧_2 DM
Note that the terms FM∧ IM and GM∧ IM in 𝒜_1 are denoted as follows.
* FM∧ IM≡ FM∧ ((GM∨ FM)∧ C)≡ FM∧ C
* GM∧ IM≡ GM∧ ((GM∨ FM)∧ C)≡ GM∧ C
We denote the objects and features in the model of the form a_C, and x_C as below:
a_1 = a_GM, x_1 = x_GM
a_2 = a_FM, x_2 = x_FM
a_3 = a_GM∨ FM, x_3 = x_GM∨ FM
a_4 = a_RM, x_4 = x_RM
a_5 = a_DM, x_5 = x_DM
a_6 = a_RM∧ DM, x_6 = x_RM∧ DM
a_7 = a_C, x_7 = x_C
a_8 = a_(GM∨ FM)∧ C, x_8 = x_(GM∨ FM)∧ C
a_9 = a_FM∧ C, x_9 = x_FM∧ C
a_10 = a_GM∧ C, x_10 = x_GM∧ C
a_11 = a__1 DM, x_11 = x__1 DM
a_12 = a__2 DM, x_12 = x__2 DM
a_13 = a__1 DM∧_2 DM, x_13 = x__1 DM∧_2 DM
a_14 = a__2 (RM∧ DM), x_14 = x__2 (RM∧ DM)
a_15 = a__1 RM, x_15 = x__1 RM
a_16 = a_⊤, x_16 = x_
x_17 = x_∅
We give the following table depicting all the objects and features appearing in the model and whether or not they are related by I.
The relations R__i, and R__i are given as follows. We have R__1 = {(a_11,x_5), (a_13, x_5), (m_3,f_3))}, and R__2 = {(a_12,x_5), (a_13, x_5), (a_14, x_4), (a_14, x_5), (a_14, x_6),(a_14,f_4),(m_3,x_4), (m_3,x_5), (m_3,x_6), (m_3,f_3),(m_3,f_4)}. R__1 = {(f_5,a_4),(f_5,a_6),(f_3,m_3),(f_5,m_3),(x_15,a_6),(x_15,a_4),(x_15,m_3)} and R__2 = {(f_3,m_3)}. The model contains atomic concepts GM, FM, RM, DM, and C. For any of these concepts D, its interpretation is given by the tuple (x_D^↓, a_D^↑).
Figure <ref> depicts the concept lattice generated by the formal context defining the model with the help of lattice visualization by LatViz <cit.>.
|
http://arxiv.org/abs/2409.02416v1 | 20240904034144 | Relative-Translation Invariant Wasserstein Distance | [
"Binshuai Wang",
"Qiwei Di",
"Ming Yin",
"Mengdi Wang",
"Quanquan Gu",
"Peng Wei"
] | cs.LG | [
"cs.LG",
"stat.ML"
] |
§ ABSTRACT
We introduce a new family of distances, relative-translation invariant Wasserstein distances (RW_p), for measuring the similarity of two probability distributions under distribution shift. Generalizing it from the classical optimal transport model, we show that RW_p distances are also real distance metrics defined on the quotient set 𝒫_p(ℝ^n)/∼ and invariant to distribution translations. When p=2, the RW_2 distance enjoys more exciting properties, including decomposability of the optimal transport model, translation-invariance of the RW_2 distance, and a Pythagorean relationship between RW_2 and the classical quadratic Wasserstein distance (W_2). Based on these properties, we show that a distribution shift, measured by W_2 distance, can be explained in the bias-variance perspective. In addition, we propose a variant of the Sinkhorn algorithm, named RW_2 Sinkhorn algorithm, for efficiently calculating RW_2 distance, coupling solutions, as well as W_2 distance. We also provide the analysis of numerical stability and time complexity for the proposed algorithm. Finally, we validate the RW_2 distance metric and the algorithm performance with three experiments. We conduct one numerical validation for the RW_2 Sinkhorn algorithm and show two real-world applications demonstrating the effectiveness of using RW_2 under distribution shift: digits recognition and similar thunderstorm detection. The experimental results report that our proposed algorithm significantly improves the computational efficiency of Sinkhorn in certain practical applications, and the RW_2 distance is robust to distribution translations compared with baselines.
§ INTRODUCTION
Optimal transport (OT) theory and Wasserstein distance <cit.> provide a rigorous measurement of similarity between two probability distributions. Numerous state-of-the-art machine learning applications are developed based on the OT formulation and Wasserstein distances, including domain adaptation, score-based generative model, Wasserstein generative adversarial networks, Fréchet inception distance (FID) score, Wasserstein auto-encoders, distributionally robust Markov decision processes, distributionally robust regressions, graph neural networks based objects tracking, etc <cit.>. However, the classical Wasserstein distance has major limitations in certain machine learning and computer vision applications. For example, a meteorologist often focuses on identifying similar weather patterns in a large-scale geographical region <cit.>, where he/she cares more about the “shapes” of weather events rather than their exact locations. The weather events are represented as images or point clouds from the radar reflectivity map. Here the classical Wasserstein distance is not useful since the relative location difference or relative translation between two very similar weather patterns will add to the Wasserstein distance value. Another example is the inevitable distribution shift in real-world datasets. A distribution shift may be introduced by sensor calibration error, environment changes between train and test datasets, simulation to real-world (sim2real) deployment, etc. Motivated by these practical use cases and the limitations of Wasserstein distances, we ask the following research question:
Can we find a new distance metric and a corresponding efficient algorithm to measure the similarity between probability distributions (and their supports) regardless of their relative translation?
To answer this research question, we introduce the relative translation optimal transport (ROT) problem and the corresponding relative-translation invariant Wasserstein distance RW_p. We then focus on the quadratic case (p=2) and identify three exciting properties of the RW_2 distance. We leverage these properties to design a variant of the Sinkhorn algorithm to compute RW_2 distance, coupling solutions, as well as W_2 distance. In addition, we provide analysis and numerical experiment results to demonstrate the effectiveness of the new RW_2 distance against translation shifts. Finally, we show the scalability and practical usage of the RW_2 in a real-world meteorological application.
Contributions. The main contributions of this paper are highlighted as follows: (a) we introduce a family of new similarity metrics, relative-translation invariant Wasserstein (RW_p) distances, which are real distance metrics like the Wasserstein distance and invariant to the relative translation of two distributions; (b) we identify three useful properties of the quadratic case RW_2 to support our algorithm design: decomposability of the ROT problem, translation-invariance of both the ROT problem solution and the resulted RW_2, and Pythagorean relationship between RW_2 and the classical W_2; (c) we show that the RW_2 can be used to analyze and explain a general distribution shift (measured by W_2) in the bias-variance perspective; and (d) we propose an efficient variant of Sinkhorn algorithm, named the RW_2 Sinkhorn, for calculating RW_2 distance, coupling solutions as well as W_2 distance with significantly reduced computational complexity and enhanced numerical stability under relative translations. Empirically, we report promising performance from the proposed RW_2 distance when the relative translation is large, and the RW_2 Sinkhorn algorithm in illustrative numerical examples and a large-scale real-world task for similar weather detection. Figure <ref> shows our major findings in this work.
Notations.
Assume that 𝒫_p(ℝ^n) is the set of all probability distributions with finite moments of order p defined on the space ℝ^n and ℳ(ℝ^n) is the set of all probability distributions with finite supports defined on the space ℝ^n. ℝ_*^m_1 × m_2 represents the set of all m_1× m_2 matrices with non-negative entries. [μ] represents the equivalence class (orbit) of μ under the shift equivalence relation in 𝒫_p(ℝ^n). μ̅ represents the mean of probability distribution μ (data points). m denotes a vector in ℝ^m where all elements are ones. ./ represents the component-wise division.
Related work.
Optimal transport theory is a classical area of mathematics with strong connections to probability theory, diffusion processes and PDEs. Due to the vast literature, we refer readers to <cit.> for comprehensive reviews. Computational OT methods have been widely explored, including Greenkhorn algorithm <cit.>, Network Simplex method <cit.>, Wasserstein gradient flow <cit.>, neural network approximation <cit.>. Significant research has also been conducted on Wasserstein distances, such as the sliced Wasserstein distance <cit.>, Gromov-Wasserstein distance <cit.>, etc. Other important topics include Wasserstein barycenter <cit.> and unbalanced optimal transport <cit.>.
§ PRELIMINARIES
Before delving into the details of our proposed method, it is essential to focus on the groundwork with an introduction to key aspects of classical optimal transport theory and formulations. This foundation will support the subsequent derivations and proofs presented in Section 3.
§.§ Optimal Transport Theory
The optimal transport theory focuses on finding the minimal-cost transport plans for moving one probability distribution to another probability distribution in a metric space. The core of this theory involves a cost function, denoted as c(x,y), alongside two probability distributions, μ(x) and ν(y). The optimal transport problem is to find the transport plans (coupling solutions) that minimize the cost of moving the distribution μ(x) to ν(y), under the cost function c(x,y). Although the cost function can take any non-negative form, our focus will be on those derived from the p-norm, expressed as x-y_p^p for p ≥ 1, because the optimal transport problem is well-defined <cit.>.
Assuming μ(x) as the source distribution and ν(y) as the target distribution, μ, ν∈𝒫_p(ℝ^n), we can formulate the optimal transport problem as a functional optimization problem, detailed below:
[p-norm optimal transport problem <cit.>]
OT(μ, ν, p) = min_γ∈Γ(μ, ν)∫_ℝ^n×ℝ^nx-y_p^p dγ(x,y),
with Γ(μ, ν) = {γ∈𝒫_p(ℝ^n×ℝ^n) | ∫_ℝ^nγ(x,y) dx = ν(y), ∫_ℝ^nγ(x,y) dy = μ(x), γ(x,y) ≥ 0}.
Here γ(x,y) represents the transport plan (or the coupling solution), indicating the amount of probability mass transported from source support x to target support y. The objective function is to minimize the total transport cost, which is the integrated product cost of distance and transported mass across all source-target pairs (x, y).
After the foundational optimal transport problem is outlined, we can introduce a family of real metrics, the Wasserstein distances, for measuring the distance between probability distributions on the set 𝒫_p(ℝ^n). These distances are defined based on the optimal transport problem.
[Wasserstein distances <cit.>]
The Wasserstein distance between μ and ν is the pth root of the minimal total transport cost from μ to ν, denoted as W_p, (p≥ 1):
W_p(μ, ν) = OT(μ, ν, p)^1/p.
The Wasserstein distance is a powerful tool for assessing the similarity between probability distributions. It is a real metric admitting the properties of indiscernibility, non-negativity, symmetry, and triangle inequality <cit.>. Meanwhile, it is well-defined for any probability distribution pairs, including discrete-discrete, discrete-continuous, and continuous-continuous.
For practical machine learning applications, the functional optimization described in Equation (<ref>) can be adapted into a discrete optimization framework. This adaptation involves considering the distributions, μ and ν, as comprised of finite supports, {x_i}_i=1^m_1 and {y_j}_j=1^m_2, with corresponding probability masses {a_i}_i=1^m_1 and {b_j}_j=1^m_2, respectively, where m_1 and m_2 are the number of supports (data points). Since all m_1 and m_2 are finite numbers, we can use an m_1 × m_2 matrix C to represent the cost between supports, where each entry represents the transporting cost from x_i to y_j, i.e., C_ij = x_i - y_j_p^p. This discrete version of the optimal transport problem can then be expressed as a linear programming problem, denoted as OT(μ, ν, p):
OT(μ, ν, p) = min_P ∈Π(μ, ν)∑_i=1^m_1∑_j=1^m_2 C_ij P_ij,
with Π(μ, ν) = {P ∈ℝ_*^m_1 × m_2 | P m_1 = a, P^⊤m_2 = b},
where Π(μ, ν) is the feasible set of this problem, and the vectors a and b are the probability masses of μ and ν, respectively. The coupling solution P_ij indicates the amount of probability mass transported from the source point x_i to the target point y_j. This linear programming approach provides a scalable and efficient way of solving discrete optimal transport problems in various data-driven applications.
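To make the linear program concrete, the following minimal sketch (ours, using SciPy's general-purpose LP solver rather than a dedicated OT routine) assembles the cost matrix C_ij = x_i - y_j_p^p and the two marginal constraints explicitly.

```python
# A minimal sketch of the discrete OT linear program above, solved with SciPy.
import numpy as np
from scipy.optimize import linprog

def ot_lp(x, y, a, b, p=2):
    """min_P <C, P> s.t. row sums = a, column sums = b, P >= 0, with C_ij = ||x_i - y_j||_p^p."""
    m1, m2 = len(x), len(y)
    C = np.sum(np.abs(x[:, None, :] - y[None, :, :]) ** p, axis=2)   # cost matrix
    A_eq = np.zeros((m1 + m2, m1 * m2))
    for i in range(m1):
        A_eq[i, i * m2:(i + 1) * m2] = 1.0       # sum_j P_ij = a_i
    for j in range(m2):
        A_eq[m1 + j, j::m2] = 1.0                # sum_i P_ij = b_j
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(m1, m2)        # OT(mu, nu, p) and the coupling P

# Example: W_2 between two small point clouds with uniform masses.
x, y = np.random.randn(5, 2), np.random.randn(6, 2) + 1.0
cost, P = ot_lp(x, y, np.full(5, 1/5), np.full(6, 1/6), p=2)
W2 = cost ** 0.5
```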
§.§ Sinkhorn Algorithm
Equation (<ref>) formulates a linear programming problem, which is commonly solved by simplex methods or interior-point methods <cit.>. Because of the special structure of the feasible set Π(μ, ν), another approach for solving this problem is to transform it into a matrix scaling problem by adding an entropy regularization in the objective function <cit.>. The matrix scaling problem can be solved by the Sinkhorn algorithm, which is an iterative algorithm that enjoys both efficiency and scalability. In detail, the Sinkhorn algorithm will initially assign u^(0) and v^(0) with vector m_1 and m_2, then the vector u^(k) and v^(k) (k≥ 1) are updated alternatively by the following equations:
u^(k+1)← a./K v^(k), v^(k+1)← b./K^⊤ u^(k+1),
where K_ij = e^-C_ij/λ (λ is the coefficient of the entropy regularized term) and the division is component-wise. When the convergence precision is satisfied, the coupling solution P will be calculated by the matrix diag(u) K diag(v). It has been proved the solution calculated by the Sinkhorn algorithm can converge to the exact coupling solution of the linear programming model, as λ goes to zero <cit.>. One caveat of this calculation is the exponent operation, which may cause “division by zero", we will show how we can improve the numerical stability in Section <ref>.
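For concreteness, a minimal NumPy sketch of these iterations (ours; a direct transcription of the updates above, without the safeguards discussed later) is given below.

```python
# A minimal sketch of the entropy-regularized Sinkhorn iterations.
import numpy as np

def sinkhorn(C, a, b, lam=0.1, n_iter=2000):
    """Return the entropic coupling P = diag(u) K diag(v), K = exp(-C/lam), and its transport cost."""
    K = np.exp(-C / lam)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)          # u <- a ./ (K v)
        v = b / (K.T @ u)        # v <- b ./ (K^T u)
    P = u[:, None] * K * v[None, :]
    return P, float(np.sum(P * C))
```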
§ RELATIVE TRANSLATION OPTIMAL TRANSPORT AND RW_P DISTANCES
Here we present the relative translation optimal transport model and the RW_p distances. We first introduce the general case of the relative translation optimal transport problem and the RW_p distances. We will then focus on the quadratic RW_2 case and its properties.
§.§ Relative Translation Optimal Transport (ROT) Formulation and RW_p Distances
As discussed in Section 1, the classical optimal transport (OT) problem is not useful when there is a relative translation between two distributions (or the two datasets known as their supports). We introduce the relative translation optimal transport problem, ROT(μ, ν, p), which is formulated to find the minimal total transport cost under any translation.
[Relative translation optimal transport problem]
Continuing with the previous notations,
ROT(μ, ν, p) = inf_s ∈ℝ^nmin_P ∈Π(μ, ν)∑_i=1^m_1∑_j=1^m_2x_i + s - y_j _p^p P_ij,
where variable s represents the translation of source distribution μ, and variables P_ij represent the coupling solution between the support x_i and the support y_j.
The ROT problem can be viewed as a generalized form of the classical OT in Equation (<ref>). Different from the one-stage optimization in the classical OT problem, there are two stages in this optimization. The inner optimization is exactly the classical OT, whereas the outer optimization finds the best relative translation for the source distribution to minimize the total transport cost.
For Equation (<ref>), the domain of the variable s can be confined to a compact set {s∈ℝ^n | s_p ≤ 2 max_ijx_i-y_j_p}. Thus, we have
ROT(μ, ν, p) = min_s ∈ℝ^nmin_P ∈Π(μ, ν)∑_i=1^m_1∑_j=1^m_2x_i + s - y_j _p^p P_ij,
where the minimum can be achieved.
The proof of Theorem <ref> is provided in Appendix A.
From the perspective of group theory, we could have a better view of which space the ROT problem is defined on. Assume that ∼ is the translation relation on the set 𝒫_p(ℝ^n). When distribution μ can be translated to distribution μ', we denote it by μ∼μ'. Because the translation is an equivalence relation defined on the set 𝒫_p(ℝ^n), we may divide set 𝒫_p(ℝ^n) by the translation relation, which leads to a quotient set, 𝒫_p(ℝ^n)/∼. 𝒫_p(ℝ^n)/∼ consists of the equivalence class of distributions, and each equivalence class, denoted by [μ], contains all mutually translatable probability distributions. Therefore, the ROT problem can also be regarded as an OT problem defined on the quotient set, 𝒫_p(ℝ^n)/∼, which tries to find the minimal total transport cost between [μ] and [ν]. Figure <ref> illustrates this idea.
We can see that the value of the ROT problem is invariant to translations. This is because the ROT problem is only associated with the equivalence classes of probability distributions.
Building upon the ROT model, we introduce a new family of Wasserstein distances to measure the minimal total transport cost between different equivalence classes of probability distributions. As mentioned above, the value of the ROT problem is invariant to any relative translations, thus, we name the corresponding Wasserstein distances as relative-translation invariant Wasserstein distances, denoted by RW_p, (p ≥ 1):
[Relative-translation invariant Wasserstein distances]
RW_p(μ, ν) = ROT(μ, ν, p)^1/p.
Similar to the situation where W_p is a real metric on 𝒫_p(ℝ^n), we can obtain the following theorem.
RW_p is a real metric on the quotient set 𝒫_p(ℝ^n)/∼.
The proof of Theorem <ref> is provided in Appendix A. The RW_p can also be regarded as a pseudo-metric defined on the 𝒫_p(ℝ^n), where the indiscernibility is no longer admitted.
It should be noted that we cannot incorporate rotation into Equation (<ref>), since allowing rotations would violate the metric properties and the convexity of the problem.
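As a simple illustration of the RW_p definition in one dimension, the sketch below (ours) approximates RW_1 by a brute-force search over a grid of candidate translations, solving each inner 1-D W_1 problem by sorting; the search range follows the compact set given in Theorem <ref>.

```python
# Brute-force illustration of RW_1 for two 1-D point clouds with uniform masses.
import numpy as np

def w1_1d(x, y):
    """W_1 between uniform distributions on two equal-size 1-D samples (monotone coupling)."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def rw1_grid(x, y, num=2001):
    span = 2 * np.max(np.abs(x[:, None] - y[None, :]))   # compact search range for s
    return min(w1_1d(x + s, y) for s in np.linspace(-span, span, num))

x = np.random.randn(100)
y = np.random.randn(100) + 7.0        # similar shape, large relative translation
print(w1_1d(x, y), rw1_grid(x, y))    # W_1 is dominated by the shift; RW_1 stays small
```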
§.§ Quadratic ROT and Properties of the RW_2 Distance
We show three useful properties in the quadratic case of ROT and the resulted RW_2 distance: decomposability of the ROT optimization model (Theorem <ref>), translation-invariance of coupling solutions of the ROT problem (Corollary <ref>), and Pythagorean relationship of RW_2 and W_2 (Corollary <ref>). Decomposability will motivate our algorithm design for efficiently solving an ROT problem, translation-invariance provides RW_2 the robustness against distribution shift, and the Pythagorean relationship helps us better understand a distribution shift in the perspective of bias-variance.
Continuing with previous notations, the two-stage optimization problem in quadratic ROT can be decomposed into two independent single-stage optimization problems:
ROT(μ, ν, 2) =
s ∈ℝ^nminP ∈Π(μ, ν)min H(P, s) = P ∈Π(μ, ν)min (E(P)) + s ∈ℝ^n min (V(s))
where horizontal function H(P, s) is described by H(P, s) = ∑_i=1^m_1∑_j=1^m_2x_i + s - y_j _2^2 P_ij,
the overall function E(P) is described by,
E(P) = ∑_i=1^m_1∑_j=1^m_2x_i- y_j _2^2 P_ij,
and vertical function V(s) is described by,
V(s) = s_2^2 + 2 s · (μ̅ - ν̅).
Moreover, the variables P_ij are fully determined by min_P ∈Π(μ, ν) E(P), which is the classical optimal transport problem between μ and ν; and the variable s is fully determined by min_s ∈ℝ^n V(s), which is a minimization problem of a quadratic function in ℝ^n whose minimum is achieved when s = ν̅ - μ̅.
Functions E(P), H(P,s) and V(s) are illustrated in Figure <ref>. Due to the page limit, the proof of Theorem <ref> is provided in Appendix A.
Theorem <ref> is the core idea for our algorithm design in Section 4. It indicates that the coupling solutions P_ij to the OT problem are always the same as its ROT version, and verse versa. In other words,
The coupling solutions to the quadratic ROT problem are invariant to any translation of distributions.
Corollary <ref> not only guarantees the robustness of RW_2 against translational shifts but also suggests that the coupling solution of a ROT problem (including the classical OT problem) can be calculated by finding a more “proper” cost matrix of a translated probability distribution pair. This helps us improve the numerical stability and reduce the time complexity in many practical conditions. We provide a detailed analysis in Section <ref> and demonstrate it in Section <ref>.
Let s be the minimizer ν̅ - μ̅, it follows that,
W_2^2(μ, ν)= μ̅ - ν̅_2^2 + RW_2^2(μ,ν).
Corollary <ref> indicates that there exists a Pythagorean relationship among three types of distances, W_2, RW_2, and L_2, as illustrated in Figure <ref>.
Corollary <ref> provides a refinement to understand a distribution shift (measured by W_2) in the perspective of bias and variance decomposition. The L_2 Euclidean distance between the expectations of two distributions corresponds to the “bias” between two distributions, and the value of RW_2 corresponds to the difference of “variances” or the “shapes” of two distributions. When the source distribution is a Dirac distribution, the situation degenerates to a classical bias-variance decomposition.
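As a quick numerical sanity check of this decomposition (ours, not an experiment from the paper), the sketch below verifies the Pythagorean relationship on two uniform point clouds, where the exact W_2 reduces to a linear assignment problem.

```python
# Numerical check of W_2^2 = ||mu_bar - nu_bar||_2^2 + RW_2^2 on uniform point clouds.
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_squared(x, y):
    """Exact W_2^2 between uniform distributions on two equal-size point clouds."""
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=2)
    r, c = linear_sum_assignment(C)      # optimal assignment = optimal coupling here
    return C[r, c].mean()                # each point carries mass 1/m

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = rng.normal(size=(200, 2)) + np.array([5.0, -3.0])     # large relative translation

bias = float(np.sum((x.mean(0) - y.mean(0)) ** 2))        # ||mu_bar - nu_bar||_2^2
rw2_sq = w2_squared(x - x.mean(0) + y.mean(0), y)         # RW_2^2: translate so the means coincide
print(w2_squared(x, y), bias + rw2_sq)                    # the two numbers agree up to float error
```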
§ RW_2 SINKHORN ALGORITHM
Based on the first two properties of RW_2, we design a variant of the Sinkhorn algorithm, named RW_2 Sinkhorn, to compute RW_2 under translational shift. Furthermore, we show the new algorithm can be used as a technique to compute W_2 distance, and it provides enhanced numerical stability and a reduction of time complexity when the difference between the means of two distributions is large.
§.§ RW_2 Algorithm Design
Based on Theorem <ref> and Corollary <ref>, we propose the RW_2 Sinkhorn algorithm for computing RW_2 distance and coupling solution P_ij, which is described in Algorithm <ref>. The key idea of this algorithm involves precomputing the difference between the means of two distributions, as shown in Line 3. Subsequently, it addresses a specific instance of the optimal transport problem where the means of the two distributions are identical by a regular Sinkhorn algorithm. It is important to note that alternative algorithms, such as the network-simplex algorithm or the auction algorithm <cit.>, can also be employed to complete the specific instance procedure.
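A compact sketch of this procedure (ours; the pseudocode of Algorithm 1 is not reproduced here, so details such as stopping criteria may differ) is given below: translate the source points so that the two means coincide, run a regular Sinkhorn solver on the translated cost matrix, and recover W_2 from RW_2 via the Pythagorean relationship.

```python
# Sketch of the RW_2 Sinkhorn idea: precompute s = nu_bar - mu_bar, then solve the
# mean-aligned OT instance with a regular Sinkhorn loop.
import numpy as np

def rw2_sinkhorn(x, y, a, b, lam=0.1, n_iter=2000):
    s = np.average(y, axis=0, weights=b) - np.average(x, axis=0, weights=a)
    C = np.sum(((x + s)[:, None, :] - y[None, :, :]) ** 2, axis=2)   # translated quadratic cost
    K = np.exp(-C / lam)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]         # coupling; unchanged by the translation (Corollary above)
    rw2_sq = float(np.sum(P * C))           # approximate RW_2^2
    w2_sq = rw2_sq + float(np.sum(s ** 2))  # W_2^2 = ||s||_2^2 + RW_2^2
    return P, np.sqrt(rw2_sq), np.sqrt(w2_sq)
```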
§.§ RW_2 Technique for W_2 Computation
With the observation of Corollary <ref>, we can also propose a new approach to compute the W_2 distance from the right side of Equation (<ref>). When ν̅ - μ̅_2 is large enough, this new RW_2-based technique has advantages over the regular Sinkhorn in terms of numerical stability and time complexity. We provide an analysis of this new approach in the rest of this section. In addition, the experiment in Section 5.1 validates the analysis of our proposed RW_2 Sinkhorn algorithm for computing the W_2 distance.
§.§ Numerical Stability
Division by zero is a known numerical issue of the Sinkhorn algorithm, as shown in Equation (<ref>): infinitesimally small values often occur when exponentiating the (negative) cost matrix, that is, in K← e^-C/λ. The result of Corollary <ref> suggests that it is possible to switch to another “mutually translated" cost matrix under a relative translation s to increase the numerical stability while preserving the same coupling solutions.
To measure the numerical stability of the matrix, we introduce g(K), defined by the product of all entries K_ij. Since all entries K_ij are in the exponential format and lie in the range of (0,1], g(K) is also in the range of (0,1]. As g(K) increases, most entries K_ij deviate from zero, which means numerical computation will be more stable. Because of the following transformation,
g(K)=∏_i=1^m_1∏_j=1^m_2 K_ij = ∏_i=1^m_1∏_j=1^m_2exp(-C_ij/λ) = exp(-∑_i=1^m_1∑_j=1^m_2x_i +s - y_j_2^2/λ),
one can choose the relative translation s=y̅ - x̅, so that g(K) can achieve its maximum. When we assume the probability mass on each data is equal, y̅ - x̅ is the same as ν̅ - μ̅.
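The effect can be seen directly in a small example (ours): after translating by s = y̅ - x̅, far fewer entries of K underflow to zero.

```python
# Fraction of entries of K = exp(-C/lam) that underflow to zero, before and after translation.
import numpy as np
rng = np.random.default_rng(1)
x = rng.normal(size=(300, 2))
y = rng.normal(size=(300, 2)) + 10.0       # large relative translation
lam = 0.1
C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=2)
s = y.mean(axis=0) - x.mean(axis=0)
C_shift = np.sum(((x + s)[:, None, :] - y[None, :, :]) ** 2, axis=2)
for name, M in [("original", C), ("translated", C_shift)]:
    print(name, float(np.mean(np.exp(-M / lam) == 0.0)))
```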
§.§ Complexity Analysis
<cit.> provided a proof for the time complexity of the optimal transport model by Sinkhorn algorithm with τ approximation, which is O(m^2C_∞^3(log m) τ^-3), where C_∞ = max_ij C_ij, with assuming m = m_1 = m_2 for the sake of simplicity. When the translated cost matrix has a smaller infinity norm C_∞, we can see that the time complexity will be reduced. When two distributions are sub-Gaussian and ν̅ - μ̅_2 is large enough, we can have a more rigorous analysis for the complexity analysis, which is provided in Appendix C. Our results from the first experiment in Section <ref> also empirically validate this idea.
§ EXPERIMENTS
To comprehensively evaluate and validate our proposed method, we conducted three experiments: numerical validation, digit recognition, and similar weather pattern detection. The three experiments are motivated differently: the first experiment is used to validate the numerical stability and computational time of the RW_2 Sinkhorn algorithm; the second experiment serves as validation of the robustness of RW_2 distance to the distribution translation in a real-world application; and the third experiment is to show the RW_2 can scale up to large-scale datasets to identify the similar weather patterns. All tests are executed on a workstation with a 2.60 GHz Intel Core i7 processor and 16GB of RAM using a single computation thread.
§.§ Numerical Validation
[Figure: Schematic illustration of the first experiment. Assume that the distributions μ and ν are the same type of distribution. We compare the performance of Algorithm 1 and the classical Sinkhorn by translating the distribution μ along vector s, i.e., s = ν̅ - μ̅.]
First we demonstrate the benefits of applying the RW_2 Sinkhorn algorithm for computing W_2 distance on specially designed examples. Let us consider two sets of data points randomly sampled from two identical distributions μ and ν, each with 1,000 samples. To compare the performance of Algorithm 1 and the classical Sinkhorn, we translate the distribution μ with a vector s, where the length of translation s ranges within [0, 3]. The idea is illustrated in Figure <ref>.
Experimental setting
We compare the two versions of Sinkhorn algorithms in W_2 error and running time. We repeat each experiment and data sampling 10 times. We test a pair of Gaussian distributions in ℝ (Figure 3(a) and 3(d)), a pair of Gaussians in ℝ^10 (Figure 3(b) and 3(e)), and a pair of uniform distributions in ℝ^10 (Figure 3(c) and 3(e)). We set λ = 0.1 and ϵ = 0.01 for both algorithms.
Experiment results
Figure <ref> shows that when the length of the translation increases, the proposed RW_2 Sinkhorn algorithm significantly outperforms the regular Sinkhorn algorithm in both the running time and computational errors. We also test the performance of the RW_2 Sinkhorn algorithm on other distributions. See more results in Appendix B.
§.§ Digit Recognition
We test the robustness of the RW_2 distance on the MNIST dataset, perturbed by random translations. Each digit image is a 28 x 28 grid and can be converted into a discrete probability distribution by normalizing the intensity on each pixel by the sum of all intensities. We consider a subset of N data points randomly sampled from the MNIST dataset, where N ∈ {100, 1000}. We embed each image in a larger 84 x 84 grid and translate each image with a random vector s (varying lengths and directions). We test performance for the length of s ranging over {0, 4, 8, 12, 16, 20, 24, 28} pixels.
Experimental setting
We report the mean and standard deviation of classification error using a 1/4 test ratio scheme, repeated 10 times. We use the nearest neighbor search as a classifier with RW_2 distance, as well as other distances (L_1, L_2, W_1, W_2) as the baselines. We set the λ = 0.1 and ϵ = 0.1 for the RW_2 Sinkhorn algorithm.
Experiment results
Figure <ref> shows that RW_2 distance significantly outperforms the other distances when the translation s is large in both sample sizes N = 100 and N = 1,000.
§.§ Similar Thunderstorm Pattern Detection
We apply the RW_2 distance to a real-world thunderstorm dataset, to show that RW_2 can be used for identifying similar weather patterns and focuses more on shape similarity compared with the W_2 distance. Our data are radar images from the MULTI-RADAR/MULTI-SENSOR SYSTEM (MRMS) <cit.> over a 150 km × 150 km rectangular area centered at the Dallas Fort Worth International Airport (DFW), where each pixel represents a 3 km × 3 km area. The data are assimilated every 10 minutes over a long history from 2014 to 2022, with 205,848 images in total. Vertically Integrated Liquid Density (VIL density) and reflectivity are two common measurements for assessing thunderstorm intensity, with threshold values of 3 kg· m^-3 and 35 dBZ, respectively <cit.>. For the sake of simplicity, we will focus on reflectivity as our main indicator of thunderstorm intensity.
We consider two types of thunderstorm events: thunderstorm snapshots and thunderstorm sequences. More comparison results can be found in the Appendix B.
Experimental setting We apply the RW_2 Sinkhorn algorithm to compute both the W_2 distance and the RW_2 distance to identify the most similar thunderstorm event given a reference event, with λ = 0.1 and ϵ = 0.01. For the thunderstorm sequence identification, we consider a one-hour thunderstorm event as our reference, consisting of 6 images in time order.
Thunderstorm snapshot results
Given the same reference thunderstorm snapshot, Figure <ref> shows that the similar events identified by RW_2 focus more on shape similarity compared with W_2.
Thunderstorm sequence results
Given a reference thunderstorm sequence, Figure <ref> shows that the similar sequences identified by RW_2 focus more on shape similarity compared with W_2.
§ CONCLUSIONS
In this paper, we introduce a new family of distances, relative-translation invariant Wasserstein (RW_p) distances, for measuring the pattern similarity between two probability distributions (and their data supports). Generalizing from the classical optimal transport model, we show that the proposed RW_p distances are real distance metrics defined on the quotient set 𝒫_p(ℝ^n)/∼ and invariant to the translations of distributions. When p=2, this distance enjoys more useful properties, including decomposability of the reformulated optimal transport model, translation-invariance of coupling solutions and RW_2, and Pythagorean relationship of RW_2 and W_2 distances. Based on these properties, we show a distribution shift, measured by W_2 distance, which can be explained from the perspective of bias-variance. In addition, we propose a variant of the Sinkhorn algorithm, named RW_2 Sinkhorn algorithm, for efficiently calculating RW_2 distance, coupling solutions, as well as W_2 distance. We provide the analysis of numerical stability and time complexity for the proposed algorithm. Finally, we validate the RW_2 distance metric and the algorithm performance with illustrative and real-world experiments. The experimental results report that our proposed algorithm significantly improves the computational efficiency of Sinkhorn in practical applications with large translations, and the RW_2 distance is robust to distribution translations compared with baselines.
§ MAIN PROOFS
§.§ Theorem <ref>
It is straightforward to verify that when s_p ≥ 2 max_ijx_i-y_j_p, for any i,j (1≤ i ≤ m_1, 1 ≤ j ≤ m_2), it follows that x_i + s-y_j_p ≥s _p - x_i - y_j_p ≥ 2max_ijx_i-y_j_p - x_i - y_j_p ≥x_i - y_j_p. In other words, when s_p ≥ 2max_ijx_i-y_j_p, the distance between each pair of support points x_i + s and y_j is greater than or equal to the corresponding non-translated distance, which implies that the total transport cost for the translated case is also greater than or equal to the non-translated cost. Since we seek the minimal value, we can therefore restrict attention to the compact set {s∈ℝ^n | s_p ≤ 2 max_ijx_i-y_j_p}.
§.§ Theorem <ref>
With the previous notation, we first show that the translation relation ∼ is an equivalence relation on the set 𝒫_p(ℝ^n).
An equivalence relation requires reflexivity, symmetry, and transitivity, and the following observations show that the translation relation is indeed an equivalence relation.
* Reflexivity, (x∼ x).
For any distribution μ∈𝒫_p(ℝ^n), it can translate to itself with zero vector.
* Symmetry, (x∼ y y ∼ x).
For any distribution μ, ν∈𝒫_p(ℝ^n), if μ can be translated to ν, then ν can also be translated to μ.
* Transitivity, (x∼ y and y ∼ z x ∼ z).
For any distribution μ, ν, η∈𝒫_p(ℝ^n), if μ can be translated to ν, and ν can be translated to η, then μ can also be translated to η.
Since ∼ is an equivalence relation, the quotient set 𝒫_p(ℝ^n)/∼ is well-defined. Let [μ] be an element of 𝒫_p(ℝ^n)/∼, where μ is a representative of [μ], i.e., [μ] is the set of distributions that can be obtained from μ by translation. Noticing that W_p(·, ·) is a genuine distance metric on 𝒫_p(ℝ^n) <cit.>, and hence satisfies identity, positivity, symmetry, and the triangle inequality, we show that RW_p(·, ·) satisfies identity, positivity, symmetry, and the triangle inequality with respect to elements of 𝒫_p(ℝ^n)/∼.
For any μ, ν, η∈𝒫_p(ℝ^n)/∼,
* Identity,
RW_p([μ], [μ]) = μ∈ [μ],μ∈ [μ]min. [W_p(μ, μ)] = 0.
* Positivity,
RW_p([μ], [ν]) = μ∈ [μ],ν∈ [ν]min. [W_p(μ, ν)] ≥ 0.
* Symmetry,
RW_p(μ, ν) = μ∈ [μ], ν∈ [ν]min.[W_p(μ, ν)] =ν∈ [ν], μ∈ [μ]min. [W_p(ν, μ )]= RW_p(ν, μ).
* Triangle inequality,
RW_p(μ, ν)
= μ∈ [μ], ν∈ [ν]min.[W_p(μ, ν)]
≤ μ∈ [μ], ν∈ [ν], η, η' ∈ [η]min.[W_p(μ, η) + W_p(η, η') + W_p(η', ν)]
= μ∈ [μ], ν∈ [ν], η, η' ∈ [η]min.[W_p(μ, η) + W_p(η', ν)]
(given minimizers of the two remaining terms, ν can be translated together with η' so that η' = η, which makes W_p(η, η') = 0 without changing the other two terms)
= μ∈ [μ], η∈ [η]min.[W_p(μ, η)] + ν∈ [ν], η' ∈ [η]min.[ W_p(η', ν)]
= RW_p(μ, η) + RW_p(η, ν).
§.§ Theorem <ref>
With the previous notations, firstly, we show the two-stage optimization problem, s ∈ℝ^nminP ∈Π(μ, ν)min H(P, s), can be decomposed into two independent one-stage optimization problems, P ∈Π(μ, ν)min (E(P)) and s ∈ℝ^n min (V(s)).
For the objective function H(P, s), we expand it with respect to s,
H(P, s)
= ∑_i=1^m_1∑_j=1^m_2x_i + s -y_j _2^2 P_ij
= ∑_i=1^m_1∑_j=1^m_2(x_i-y_j_2^2+ s_2^2 + 2 s · (x_i-y_j)) P_ij
= ∑_i=1^m_1∑_j=1^m_2x_i-y_j_2^2 P_ij
+∑_i=1^m_1∑_j=1^m_2s_2^2 P_ij + 2∑_i=1^m_1∑_j=1^m_2 s · (x_i-y_j) P_ij.
We can rewrite the second and the third terms in Equation (<ref>) under the condition P ∈Π(μ, ν), which implies that,
∑_i=1^m_1∑_j=1^m_2 P_ij =1, ∑_j=1^m_2 P_ij = a_i, ∑_i=1^m_1 P_ij = b_j, 1≤ i ≤ m_1, 1 ≤ j ≤ m_2.
For the second term, it follows that
∑_i=1^m_1∑_j=1^m_2s_2^2 P_ij = s_2^2 · (∑_i=1^m_1∑_j=1^m_2 P_ij) = s_2^2 · 1= s_2^2.
For the third term, it follows that
2∑_i=1^m_1∑_j=1^m_2 s · (x_i-y_j) P_ij
= 2 s ·∑_i=1^m_1∑_j=1^m_2 (x_i-y_j) P_ij
= 2 s · (∑_i=1^m_1∑_j=1^m_2 x_i · P_ij - ∑_i=1^m_1∑_j=1^m_2 y_j · P_ij)
= 2 s · (∑_i=1^m_1 x_i · (∑_j=1^m_2 P_ij) - ∑_j=1^m_2 y_j · (∑_i=1^m_1P_ij))
= 2 s · (∑_i=1^m_1 x_i · a_i - ∑_j=1^m_2 y_j · b_j)
= 2 s · (μ̅ - ν̅).
Thus, we have the following transformation,
s ∈ℝ^nminP ∈Π(μ, ν)min H(P, s)
= s ∈ℝ^nminP ∈Π(μ, ν)min (∑_i=1^m_1∑_j=1^m_2x_i-y_j_2^2 P_ij
+∑_i=1^m_1∑_j=1^m_2s_2^2 P_ij + 2∑_i=1^m_1∑_j=1^m_2 s · (x_i-y_j) P_ij)
= s ∈ℝ^nminP ∈Π(μ, ν)min∑_i=1^m_1∑_j=1^m_2x_i-y_j_2^2 P_ij
+ s ∈ℝ^nminP ∈Π(μ, ν)min (∑_i=1^m_1∑_j=1^m_2s_2^2 P_ij + 2∑_i=1^m_1∑_j=1^m_2 s · (x_i-y_j) P_ij)
= s ∈ℝ^nminP ∈Π(μ, ν)min∑_i=1^m_1∑_j=1^m_2x_i-y_j_2^2 P_ij
+ s ∈ℝ^nmin ( s_2^2+ 2 s · (μ̅ - ν̅))
= P ∈Π(μ, ν)min∑_i=1^m_1∑_j=1^m_2x_i-y_j_2^2 P_ij
+ s ∈ℝ^nmin ( s_2^2 + 2 s · (μ̅ - ν̅))
= P ∈Π(μ, ν)min (E(P)) + s ∈ℝ^n min (V(s))
Since V(s) is a quadratic function of the variable s, it is easy to see that the minimum is achieved at s = ν̅ - μ̅.
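The decomposition above suggests a simple way to compute RW_2 in practice: center both point clouds at their (weighted) means and solve a standard entropic optimal transport problem on the centered supports. The following is a minimal illustrative sketch, not the authors' reference implementation; the function name and the plain (non-stabilized) Sinkhorn loop are assumptions made for brevity, with a and b the probability masses of the two supports.

import numpy as np

def rw2_squared(X, Y, a, b, reg=0.1, n_iter=500):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # optimal relative translation is s* = mean(nu) - mean(mu), so centre both clouds
    Xc = X - np.average(X, axis=0, weights=a)
    Yc = Y - np.average(Y, axis=0, weights=b)
    C = ((Xc[:, None, :] - Yc[None, :, :]) ** 2).sum(axis=-1)   # centred squared-distance costs
    K = np.exp(-C / reg)                                        # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):                                     # plain Sinkhorn updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                             # entropic coupling
    return float((P * C).sum())                                 # approx. RW_2^2(mu, nu)

By the Pythagorean relationship mentioned in the conclusions, W_2^2 can then be recovered as RW_2^2 plus the squared Euclidean distance between the two means.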
§ ADDITIONAL EXPERIMENT RESULTS
§.§ Additional experiment results for Section 5.1 - Numerical Validation
§.§ Additional experiment results for Section 5.3 - Similar Thunderstorm Pattern Detection
§ COMPLEXITY ANALYSIS FOR ALGORITHM 1 UNDER SUB-GAUSSIAN DISTRIBUTIONS
This section is organized as follows. In section <ref>, we state and prove the theorem regarding the time complexity of Algorithm 1. We leave the definitions and theorems used in the proof to section <ref>.
§.§ Theoretical Results of Time Complexity
Assuming the two distributions are sub-Gaussian and the translation ν̅ - μ̅ is large enough, we can prove that, with high probability, the translated cost matrix has a smaller max-norm C_∞, and thus the time complexity can be reduced.
Let μ, ν be two high-dimensional sub-Gaussian distributions in ^n, and let (X_1, X_2, …, X_m_1), (Y_1, Y_2, …, Y_m_2) be i.i.d. samples drawn from μ and ν, respectively. Let μ̅ and ν̅ denote the means of μ and ν, and let X̅ = ∑_i=1^m_1 X_i / m_1, Y̅ = ∑_i=1^m_2 Y_i / m_2. Assume μ-μ̅_ψ_2 < ∞, ν - ν̅_ψ_2 < ∞, and let l = μ̅- ν̅_2. If
l ≥ C√(n)[1 + μ-μ̅_ψ_2 + ν-ν̅_ψ_2]
+ C [√(log(4m_1/δ))·μ - μ̅_ψ_2 + √(log(4m_2/δ))·ν - ν̅_ψ_2],
where C is an absolute constant,
then with probability at least 1-δ, we have
max_i,jX_i - X̅ - Y_j + Y̅_2 ≤max_i,jX_i - Y_j_2.
Sub-Gaussian distributions represent a broad class of distributions that encompass many common types, including multivariate normal distribution, multivariate symmetric Bernoulli, and uniform distribution on the sphere. Theorem <ref> demonstrates that when the translation is significant, the maximum absolute entry of the cost matrix C_∞ = max_ij |C_ij| tends to decrease. Consequently, our RW_2 method achieves better time complexity compared to W_2. This theoretical finding is consistent with our experimental results, as shown in Figure <ref>.
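The following small numerical check (illustrative only; the Gaussian data, dimensions, and shift value are arbitrary assumed choices) mirrors the statement of the theorem: for two well-separated samples, centring reduces the largest pairwise distance, and hence the max-norm of the cost matrix.

import numpy as np

rng = np.random.default_rng(0)
n, m1, m2, shift = 10, 200, 200, 50.0
X = rng.normal(size=(m1, n))
Y = rng.normal(size=(m2, n)) + shift       # large relative translation between the samples
Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)

def max_pair_dist(A, B):
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return D.max()

print(max_pair_dist(Xc, Yc) <= max_pair_dist(X, Y))   # expected: True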
For i=1,2,…, m_1, X_i - μ̅ is a sub-Gaussian random vector. Using Theorem <ref> and taking a union bound over all the random vectors, we have for all X_i with probability at least 1- δ/4, the following inequality holds
X_i - μ̅_2 ≤ c(√(n) + √(log(4m_1/δ)))·μ - μ̅_ψ_2.
Similarly, for all Y_j, with probability at least 1-δ/4, the following inequality holds
Y_j - ν̅_2 ≤ c(√(n) + √(log(4m_2/δ)))·ν - ν̅_ψ_2.
Using Theorem <ref>, ∑_i=1^m (X_i - μ̅) is a sub-Gaussian random vector, with ∑_i=1^m_1 (X_i - μ̅)_ψ_2≤√(C ∑_i=1^m_1X_i - μ̅_ψ_2^2) Then using Theorem <ref>, with probability at least 1-δ/4, we have
∑_i=1^m_1 X_i - m_1μ̅_2
≤ c(√(n) + √(log(1/δ)))·∑_i=1^m_1 (X_i - μ̅)_ψ_2
= c(√(n) + √(log(1/δ)))·√(C ∑_i=1^m_1X_i - μ̅_ψ_2^2)
= c'(√(n) + √(log(1/δ)))·√(m_1)μ-μ̅_ψ_2,
where c' is an absolute constant. Similarly, with probability at least 1-δ/4, we have
∑_j=1^m_2 Y_j - m_2ν̅_2 ≤ c'(√(n) + √(log(1/δ)))·√(m_2)ν-ν̅_ψ_2,
where c' is an absolute constant. In the following proof, we consider the union bound of all the high-probability events above, such that (<ref>), (<ref>), (<ref>) and (<ref>) hold. It occurs with probability at least 1-δ.
First, for max_i,jX_i - Y_j_2, we have
max_i,jX_i - Y_j_2 ≥max_i,jμ̅-ν̅_2 - X_i - μ̅_2 - Y_j-ν̅_2
≥ l - [c(√(n) + √(log(4m_1/δ)))·μ - μ̅_ψ_2]
- [c(√(n) + √(log(4m_2/δ)))·ν - ν̅_ψ_2]
= l - 2c√(n) - c√(log(4m_1/δ)))·μ - μ̅_ψ_2
- c√(log(4m_2/δ)))·ν - ν̅_ψ_2,
where the first inequality holds due to the triangle inequality. The second inequality holds due to (<ref>) and (<ref>).
For max_i,jX_i - Y_j - X̅ + Y̅_2, we have
max_i,jX_i - Y_j - X̅ + Y̅_2 ≤max_i,jX_i-μ̅_2 + Y_j-ν̅_2
+ X̅ - μ̅_2 + Y̅ - ν̅_2
≤[c(√(n) + √(log(4m_1/δ)))·μ - μ̅_ψ_2] + [c(√(n) + √(log(4m_2/δ)))·ν - ν̅_ψ_2]
+ [c'√(n) + √(log(1/δ))/√(m_1)μ-μ̅_ψ_2] + [c'√(n) + √(log(1/δ))/√(m_2)ν-ν̅_ψ_2]
≤ C√(n)[1 + μ - μ̅_ψ_2/√(m_1) + ν - ν̅_ψ_2/√(m_2)]
+ C [√(log(4m_1/δ))·μ - μ̅_ψ_2 + √(log(4m_2/δ))·ν - ν̅_ψ_2],
where the first inequality holds due to (<ref>), (<ref>), (<ref>) and (<ref>).
Therefore, we have the following conclusion:
As long as
l ≥ C√(n)[1 + μ-μ̅_ψ_2 + ν-ν̅_ψ_2]
+ C [√(log(4m_1/δ))·μ - μ̅_ψ_2 + √(log(4m_2/δ))·ν - ν̅_ψ_2],
where C is an absolute constant, we can conclude that
max_i,jX_i - Y_j - X̅ + Y̅_2 ≤max_i,jX_i - Y_j_2.
This completes the proof of Theorem <ref>.
§.§ High Dimensional Probability Basics
In this section, we introduce some basic facts used in the proof of the theorem in the previous subsection.
The results mostly come from <cit.>.
We first introduce a broad and widely used distribution class.
[Sub-Gaussian]
A random variable X that
satisfies one of the following equivalent properties is called a subgaussian random variable.
(a) There exists K_1 > 0 such that the tails of X satisfy
{|X| ≥ t}≤ 2 exp(-t^2/K^2_1) for all t ≥ 0.
(b) There exists K_2 > 0 such that the moments of X satisfy
X_L^p = ( |X|^p)^1/p≤ K_2 √(p) for all p ≥ 1.
(c)
There exists K_3 > 0 such that the moment-generating function (MGF) of X^2 satisfies
exp(λ^2X^2) ≤exp(K^2_3λ^2) for all λ such that |λ| ≤1/K_3.
(d) There exists K_4 > 0 such that the MGF of X^2
is bounded at some point,
namely,
exp(X^2/K^2_4) ≤ 2.
(e) Moreover, if X = 0, the following property is also equivalent.
There exists K_5 > 0 such that the MGF of X satisfies
exp(λ X) ≤exp(K^2_5λ^2) for all λ∈.
The parameters K_i > 0 appearing in
these properties differ from each other by at most an absolute constant factor.
The sub-gaussian norm of X, denoted X_ψ_2, is defined to be
X_ψ_2 = inf{
t > 0 : exp(X^2/t^2) ≤ 2}.
A random vector X ∈^d
is sub-Gaussian
if for any vector ∈^d
the inner product ⟨ X, ⟩ is a sub-Gaussian random variable. And the
corresponding ψ_2 norm of X is defined as
X_ψ_2 = sup__2=1⟨ X, ⟩_ψ_2.
Let X_1,…,X_N ∈^d be
independent, mean zero, sub-Gaussian random vectors. Then
∑_i=1^N X_i is also a
sub-Gaussian random vector, and
∑_i=1
^N X_i^2_ψ_2≤ C ∑_i=1^N X_i^2_ψ_2.
where C is an absolute constant.
For any vector ∈, _2 = 1, consider ⟨∑_i=1
^N X_i, ⟩. Using independence, we have for all λ,
exp(λ∑_i=1
^N ⟨ X_i, ⟩) = ∏_i=1^N exp(λ⟨ X_i, ⟩)
≤∏_i=1^N exp(C ⟨ X_i, ⟩_ψ_2^2 λ^2)
= exp(Cλ^2 ∑_i=1^N ⟨ X_i, ⟩_ψ_2^2),
where C is an absolute constant and the first inequality holds due to property (e) of the sub-Gaussian variables. Taking supreme over , we prove that ∑_i=1^N X_i is also a
sub-Gaussian random vector. Moreover,
∑_i=1
^N X_i^2_ψ_2≤ C ∑_i=1^N X_i^2_ψ_2.
where C is an absolute constant.
Let X ∈^d be a sub-Gaussian random vector. Then with probability at least 1-δ,
X_2 ≤ c(√(d) + √(log(1/δ)))·X_ψ_2.
Let B_d be the d-dimensional unit ball, N
be a 1/2-covering of B_d in 2-norm with covering number = N(B_d, ·_2, 1/2). Therefore,
∀∈ B_d, ∃∈ N, s.t. - ≤ 1/2.
Using Lemma <ref>, we have
N ≤ 5^d.
Using the fact _2 = max__2 ≤ 1⟨, ⟩, we have
X_2 = max_∈ B_d⟨, X ⟩
≤max_∈ N⟨, X⟩ + max_∈ (1/2) B_d⟨
, X ⟩
= max_∈ N⟨, X⟩ + 1/2max_∈ B_d⟨
, X ⟩.
Therefore, we have
X_2 ≤ 2 max_∈ N⟨, X⟩.
Then we can provide a high probability upper bound for the Euclidean norm of the random vector X by considering the probability (X_2 ≥ t).
(X_2 ≥ t) ≤(max_∈ N⟨, X⟩≥t/2)
≤(∃∈ N,
⟨, X⟩≥t/2)
≤∑_∈ N(
⟨, X⟩≥t/2)
≤ N exp(- ct^2/X_ψ_2^2)
≤ 5^d exp(- ct^2/X_ψ_2^2),
where c is an absolute constant. Here the first inequality holds due to (<ref>). The second inequality holds due to {max_∈ N⟨, X⟩≥
t/2}⊆{∃∈ N,
⟨, X⟩≥
t/2}. The third inequality holds due to the union bound. The fourth inequality holds due to the definition of the sub-Gaussian vector and the property
(a) of a sub-Gaussian variable. The last inequality holds due to (<ref>).
Finally, let t = √([d log 5 + log(1/δ)]/c)·X_ψ_2. We have with probability at least 1-δ,
X_2 ≤ t.
Finally, using √(a+b)≤√(a) + √(b), we complete the proof of Theorem <ref>.
[ϵ-covering]
Let (V, ·) be a normed space, and Θ⊂ V. V_1,…, V_N is an
ϵ-covering of Θ if Θ⊆∪_i=1^N V_i, or equivalently, ∀θ∈Θ, ∃ i such that θ - V_i≤ϵ.
[Covering number]
The covering number is defined by
N(Θ, ·, ϵ) := min{n : ∃ϵ-covering over Θ of size n}.
Let B_d be the d-dimensional Euclidean unit ball. Consider N(B_d, ·_2, ϵ). When ϵ≥ 1, N(B_d, ·_2, ϵ) =
1. When ϵ < 1, we have
(1/ϵ)^d≤ N(B_d, ·_2, ϵ) ≤(1+2/ϵ)^d.
§ PROPOSED OPTIMAL TRANSPORT FRAMEWORK WITH RW DISTANCE FOR SPATIAL-TEMPORAL DATASETS
Having introduced the RW distance as a new metric for the similarity between two distributions, we now describe how to use it to identify similar spatial-temporal data patterns by treating the data as discrete probability distributions.
Generally speaking, a spatial-temporal datum is described as a series or sequence of vectors. Here, we use a set of spatial-temporal vectors to represent a spatial-temporal datum d, i.e.,
d = { (t_i, x_i)| 1 ≤ i ≤ m},
where t_i is a scalar value representing the time information of the vector, x_i is a multi-dimensional vector carrying the spatial information, and there are m spatial-temporal vectors in the datum d.
By treating each spatial-temporal vector as a support point, we can treat the spatial-temporal datum d as a probability distribution, where the probability mass of each support point can be distributed uniformly or customized for different objectives. Here, we assume the probability mass is distributed uniformly over the spatial-temporal vectors.
Since the time information is independent of the spatial information, it is unwise to treat the raw data directly as distributions. Thus, we introduce a time-weight parameter w to accommodate different demands on spatial and temporal similarity. Formally, the time axis dilation operation is defined as follows:
Definition 5 The time axis dilation operation ω(·).
Assume that there is spatial-temporal data d and a time weight w, i.e.,
d = {(t_1, x_1), ..., (t_m, x_m) }, w ∈ [0,+∞).
Then the time axis dilation operation:
ω(d): = {(w· t_1, x_1), ..., (w· t_m, x_m) }.
The advantage of this time weight is that we can adjust the spatial and temporal similarity by changing the value of w. Specifically,
* when w goes to 0, which leads to w · t_i (1≤ i ≤ m) also going to 0, the cost of matching along the time axis will be cheaper. Thus, the weighted data will reflect more of the spatial information;
* when w goes to infinity, which leads to w · t_i (1≤ i ≤ m) also going to infinity, the cost of matching along the time axis will be more expensive. Thus, the weighted data will reflect more of the time series information.
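A minimal sketch of the time axis dilation is given below; it also checks numerically that the dilation commutes with a move-to-origin shift, here assumed to subtract the mean support point (the exact form of τ is defined elsewhere in the paper, so this particular shift is an assumption for illustration).

import numpy as np

def dilate_time(d, w):
    # scale only the time coordinate (stored first in each row)
    out = np.array(d, dtype=float)
    out[:, 0] *= w
    return out

def move_to_origin(d):
    # assumed form of the move-to-origin shift: subtract the mean support point
    d = np.array(d, dtype=float)
    return d - d.mean(axis=0)

d = np.array([[0.0, 1.0, 2.0],
              [10.0, 3.0, 4.0],
              [20.0, 5.0, 6.0]])   # rows are (t_i, x_i) support points
w = 0.1
print(np.allclose(dilate_time(move_to_origin(d), w),
                  move_to_origin(dilate_time(d, w))))   # True: the two operations commute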
It is important to consider whether applying the move-to-origin shift and the time axis dilation to a distribution in different orders leads to different results. In fact, the following theorem shows that the two operations are independent, so the order does not matter.
Theorem 6 Given any data d, the time axis dilation operation ω(d) and move-to-origin shift τ (d) are commutative, i.e.,
ω∘τ (d) =τ∘ω (d).
The proof of Theorem 6 can be found in the supplementary material.
The independence of the above two operations provides a flexible framework to unify different definitions of similarity, as shown in Table 1.
The histogram method <cit.> is another common approach for measuring the similarity of high-dimensional temporal sequences. The basic idea is to extract a few essential features, use them to build feature vectors for the temporal sequences, and finally compute the distance between feature vectors to obtain the similarities. Its advantage is that it is a flexible framework that can incorporate all kinds of global or local features, such as motion, time length, level of clustering, and shape. However, it relies on human knowledge, and it is often difficult to find proper features for a dataset when understanding and experience are lacking. In real-world applications, it can be too complicated to tune such a heavy multi-parameter model properly.
Dynamic time warping (DTW) is a traditional method for measuring the similarity between two temporal sequences in time series analysis, possibly of different lengths. The main idea is that it allows one-to-many and many-to-one matches between sequences of different time lengths, and the goal is to minimize the sum of the distances of the resulting mapping between the two sequences. A common strategy to find the best matches is dynamic programming, which has a time complexity of O(nm) <cit.>, where n and m are the lengths of the two input sequences. This method has many variants in real applications, including SparseDTW <cit.> and FastDTW <cit.>. However, because of the curse of dimensionality, it is not easy to define a one-to-many or many-to-one optimal matching in high-dimensional space; the method is usually applied in low-dimensional cases and is hard to extend to high-dimensional spaces. In addition, the distance between two temporal sequences computed by DTW is not a true metric and is sensitive to shifts or disturbances of the absolute coordinates of the two sequences.
Due to the space limit, we leave the remaining results for k = 1 and k = 5 to the supplementary material. Table 2 shows the results when k = 3.
We choose the Modified National Institute of Standards and Technology database (MNIST) <cit.> as the dataset for this task. MNIST is a large dataset of handwritten digits containing 60,000 images of 10 digits, where each image is a greyscale image with 28×28 pixels. The k-nearest neighbors algorithm (k-NN) is used as the basic framework with different distance metrics, including the Euclidean distance (L_2), the Wasserstein distances (W_1 and W_2) <cit.>, and the shift-invariant Wasserstein distance (RW_2). The difference between W_2 and RW_2 is shown in Figure 4.
We collect N images randomly from the MNIST dataset and then randomly split them into training and test sets with a 3:1 ratio. Using the k-NN framework as a classifier (k = 1, 3, 5), where the similarity measures include L_2, W_1, W_2, and RW_2, we compute the classification accuracy on the test set. Finally, we repeat the above procedure many times to compute the average accuracy and its deviation.
* Purely spatial similarity. The results are shown in Figure <ref> and Figure <ref>.
* Spatial-temporal similarity. Assume that the time weight is 0.1. The results are shown in Figures <ref>, <ref> <ref>.
From an airline operations perspective, W_2 and RW_2 may satisfy different demands. Since most flights are scheduled routinely at the same time each day, W_2 is a good option because it is tied to specific times and locations. For the purpose of clustering thunderstorms in a local area, RW_2 might be a better metric, since it removes the impact of coordinate shifts.
Note that the existence of the RW_p distances is inherited from the underlying geometric structure of Euclidean space. (If we change to another geometric structure, such as a spherical surface, those properties may no longer hold.)
|
http://arxiv.org/abs/2409.02441v1 | 20240904043737 | Dynamics of Dissipative Gravitational Collapse in the Morris-Thorne Wormhole Metric: One Scenario -- Several Outcomes | [
"Subhasis Nalui",
"Subhra Bhattacharya"
] | gr-qc | [
"gr-qc"
] | |
http://arxiv.org/abs/2409.03728v1 | 20240905173413 | Multiplicity dependent $J/ψ$ and $ψ(2S)$ production at forward and backward rapidity in $p$$+$$p$ collisions at $\sqrt{s}=200$ GeV | [
"PHENIX Collaboration",
"N. J. Abdulameer",
"U. Acharya",
"C. Aidala",
"Y. Akiba",
"M. Alfred",
"V. Andrieux",
"S. Antsupov",
"N. Apadula",
"H. Asano",
"B. Azmoun",
"V. Babintsev",
"N. S. Bandara",
"E. Bannikov",
"K. N. Barish",
"S. Bathe",
"A. Bazilevsky",
"M. Beaumier",
"R. Belmont",
"A. Berdnikov",
"Y. Berdnikov",
"L. Bichon",
"B. Blankenship",
"D. S. Blau",
"J. S. Bok",
"V. Borisov",
"M. L. Brooks",
"J. Bryslawskyj",
"V. Bumazhnov",
"S. Campbell",
"R. Cervantes",
"D. Chen",
"M. Chiu",
"C. Y. Chi",
"I. J. Choi",
"J. B. Choi",
"Z. Citron",
"M. Connors",
"R. Corliss",
"N. Cronin",
"M. Csanád",
"T. Csörgő",
"T. W. Danley",
"M. S. Daugherity",
"G. David",
"K. DeBlasio",
"K. Dehmelt",
"A. Denisov",
"A. Deshpande",
"E. J. Desmond",
"A. Dion",
"D. Dixit",
"V. Doomra",
"J. H. Do",
"A. Drees",
"K. A. Drees",
"J. M. Durham",
"A. Durum",
"H. En'yo",
"A. Enokizono",
"R. Esha",
"B. Fadem",
"W. Fan",
"N. Feege",
"D. E. Fields",
"M. Finger, Jr.",
"M. Finger",
"D. Firak",
"D. Fitzgerald",
"S. L. Fokin",
"J. E. Frantz",
"A. Franz",
"A. D. Frawley",
"Y. Fukuda",
"P. Gallus",
"C. Gal",
"P. Garg",
"H. Ge",
"F. Giordano",
"Y. Goto",
"N. Grau",
"S. V. Greene",
"M. Grosse Perdekamp",
"T. Gunji",
"T. Guo",
"H. Guragain",
"T. Hachiya",
"J. S. Haggerty",
"K. I. Hahn",
"H. Hamagaki",
"H. F. Hamilton",
"J. Hanks",
"S. Y. Han",
"S. Hasegawa",
"T. O. S. Haseler",
"T. K. Hemmick",
"X. He",
"J. C. Hill",
"K. Hill",
"A. Hodges",
"R. S. Hollis",
"K. Homma",
"B. Hong",
"T. Hoshino",
"N. Hotvedt",
"J. Huang",
"K. Imai",
"M. Inaba",
"A. Iordanova",
"D. Isenhower",
"D. Ivanishchev",
"B. Jacak",
"M. Jezghani",
"X. Jiang",
"Z. Ji",
"B. M. Johnson",
"D. Jouan",
"D. S. Jumper",
"J. H. Kang",
"D. Kapukchyan",
"S. Karthas",
"D. Kawall",
"A. V. Kazantsev",
"V. Khachatryan",
"A. Khanzadeev",
"C. Kim",
"E. -J. Kim",
"M. Kim",
"D. Kincses",
"E. Kistenev",
"J. Klatsky",
"P. Kline",
"T. Koblesky",
"D. Kotov",
"L. Kovacs",
"S. Kudo",
"K. Kurita",
"Y. Kwon",
"J. G. Lajoie",
"A. Lebedev",
"S. Lee",
"M. J. Leitch",
"Y. H. Leung",
"S. H. Lim",
"M. X. Liu",
"X. Li",
"V. -R. Loggins",
"S. Lökös",
"D. A. Loomis",
"K. Lovasz",
"D. Lynch",
"T. Majoros",
"Y. I. Makdisi",
"M. Makek",
"V. I. Manko",
"E. Mannel",
"M. McCumber",
"P. L. McGaughey",
"D. McGlinchey",
"C. McKinney",
"M. Mendoza",
"A. C. Mignerey",
"A. Milov",
"D. K. Mishra",
"J. T. Mitchell",
"M. Mitrankova",
"Iu. Mitrankov",
"G. Mitsuka",
"S. Miyasaka",
"S. Mizuno",
"P. Montuenga",
"T. Moon",
"D. P. Morrison",
"B. Mulilo",
"T. Murakami",
"J. Murata",
"K. Nagai",
"K. Nagashima",
"T. Nagashima",
"J. L. Nagle",
"M. I. Nagy",
"I. Nakagawa",
"K. Nakano",
"C. Nattrass",
"T. Niida",
"R. Nouicer",
"N. Novitzky",
"T. Novák",
"G. Nukazuka",
"A. S. Nyanin",
"E. O'Brien",
"C. A. Ogilvie",
"J. D. Orjuela Koop",
"M. Orosz",
"J. D. Osborn",
"A. Oskarsson",
"G. J. Ottino",
"K. Ozawa",
"V. Pantuev",
"V. Papavassiliou",
"J. S. Park",
"S. Park",
"M. Patel",
"S. F. Pate",
"D. V. Perepelitsa",
"G. D. N. Perera",
"D. Yu. Peressounko",
"C. E. PerezLara",
"J. Perry",
"R. Petti",
"M. Phipps",
"C. Pinkenburg",
"R. P. Pisani",
"M. Potekhin",
"M. L. Purschke",
"K. F. Read",
"D. Reynolds",
"V. Riabov",
"Y. Riabov",
"D. Richford",
"T. Rinn",
"S. D. Rolnick",
"M. Rosati",
"Z. Rowan",
"A. S. Safonov",
"T. Sakaguchi",
"H. Sako",
"V. Samsonov",
"M. Sarsour",
"S. Sato",
"B. Schaefer",
"B. K. Schmoll",
"K. Sedgwick",
"R. Seidl",
"A. Seleznev",
"A. Sen",
"R. Seto",
"A. Sexton",
"D. Sharma",
"I. Shein",
"T. -A. Shibata",
"K. Shigaki",
"M. Shimomura",
"T. Shioya",
"P. Shukla",
"A. Sickles",
"C. L. Silva",
"D. Silvermyr",
"B. K. Singh",
"C. P. Singh",
"V. Singh",
"M. Slunečka",
"K. L. Smith",
"M. Snowball",
"R. A. Soltz",
"W. E. Sondheim",
"S. P. Sorensen",
"I. V. Sourikova",
"P. W. Stankus",
"S. P. Stoll",
"T. Sugitate",
"A. Sukhanov",
"T. Sumita",
"J. Sun",
"Z. Sun",
"J. Sziklai",
"K. Tanida",
"M. J. Tannenbaum",
"S. Tarafdar",
"G. Tarnai",
"R. Tieulent",
"A. Timilsina",
"T. Todoroki",
"M. Tomášek",
"C. L. Towell",
"R. S. Towell",
"I. Tserruya",
"Y. Ueda",
"B. Ujvari",
"H. W. van Hecke",
"J. Velkovska",
"M. Virius",
"V. Vrba",
"N. Vukman",
"X. R. Wang",
"Y. S. Watanabe",
"C. L. Woody",
"L. Xue",
"C. Xu",
"Q. Xu",
"S. Yalcin",
"Y. L. Yamaguchi",
"H. Yamamoto",
"A. Yanovich",
"I. Yoon",
"J. H. Yoo",
"I. E. Yushmanov",
"H. Yu",
"W. A. Zajc",
"A. Zelenski",
"L. Zou"
] | hep-ex | [
"hep-ex"
] |
PHENIX Spokesperson: [email protected]
Deceased
Deceased
PHENIX Collaboration
§ ABSTRACT
The J/ψ and ψ(2S) charmonium states, composed of cc̅
quark pairs and known since the 1970s, are widely believed to serve as
ideal probes to test quantum chromodynamics in high-energy hadronic
interactions. However, there is not yet a complete understanding
of the charmonium-production mechanism. Recent measurements of
J/ψ production as a function of event charged-particle
multiplicity at the collision energies of both the Large Hadron
Collider (LHC) and the Relativistic Heavy Ion Collider (RHIC) show
enhanced J/ψ production yields with increasing multiplicity. One
potential explanation for this type of dependence is multiparton
interactions (MPI). We carry out the first measurements of
self-normalized J/ψ yields and the ψ(2S) to J/ψ ratio at
both forward and backward rapidities as a function of self-normalized
charged-particle multiplicity in p+p collisions at √(s)=200
GeV. In addition, detailed pythia studies tuned to RHIC energies
were performed to investigate the MPI impacts. We find that the PHENIX
data at RHIC are consistent with recent LHC measurements and can only
be described by pythia calculations that include MPI effects. The
forward and backward ψ(2S) to J/ψ ratio, which serves as a
unique and powerful approach to study final-state effects on charmonium
production, is found to be less dependent on the charged-particle
multiplicity.
Multiplicity dependent J/ψ and ψ(2S) production at
forward and backward rapidity in p+p collisions at √(s)=200 GeV
L. Zou
September 9, 2024
===================================================================================================================
Charmonium, a bound cc̅ state, has been studied extensively over
the past several decades, but a clear understanding of its formation
has not yet been reached. Several models are currently available to
describe the evolution of a cc̅ pair into the bound J/ψ or ψ(2S) meson, such as the nonrelativistic-quantum-chromodynamics
(NRQCD) <cit.>, color-evaporation <cit.>,
color-singlet <cit.>, and
jet-fragmentation <cit.> models.
The formation appears to involve both
perturbative (above the Λ_ QCD scale) and nonperturbative
(below the Λ_ QCD scale) aspects of QCD. The initial
creation of cc̅ pairs through hard scattering can be described as
perturbative, and their evolution into a color-neutral state is likely
nonperturbative.
In this Letter, we present PHENIX measurements at the Relativistic Heavy Ion Collider (RHIC) of the self-normalized J/ψ production versus self-normalized event multiplicity to study multi-parton-interaction (MPI) effects at forward rapidity, as well as the ψ(2S) to J/ψ ratio, which is sensitive to final-state interactions in p+p collisions at RHIC energies. We measure inclusive J/ψ and ψ(2S) production without separating prompt from nonprompt charmonium because the nonprompt contributions are less than 3% of the total charmonium production.
The STAR Experiment at RHIC has measured the self-normalized J/ψ yields as a function of self-normalized charged-particle multiplicity at midrapidity in p+p collisions <cit.>. The results show an increase in the J/ψ yields with increasing multiplicity, a dependence suggesting MPI. Additionally, at Large-Hadron-Collider (LHC) energies, the ALICE experiment has reported similar results for the normalized J/ψ yields versus event multiplicity at both forward rapidity <cit.> and midrapidity <cit.>. In addition to potential MPI, the ψ(2S) to J/ψ ratio could reveal final-state effects from either hot (formation of quark-gluon plasma <cit.>) or cold (comover-interaction model <cit.>) nuclear-matter effects. In early 2024, the LHCb collaboration reported the ratio of the normalized ψ(2S) to J/ψ yields in p+p collisions at √(s)=13 TeV as a function of multiplicity <cit.>, where suppression of promptly produced charmonia is observed at high event multiplicity.
The present analysis relies on the data from the PHENIX
experiment <cit.> obtained using the muon-arms detector
subsystem covering 1.2<|η|<2.4, which includes the muon tracker
(MuTr), the muon identifier (MuID), the hadron absorbers, the forward
silicon vertex detectors
(FVTX) <cit.>
in the forward rapidity region and the central arm barrel silicon
vertex tracker (VTX) <cit.> at |η|<1.0. Two
beam-beam counters (BBC), covering the full azimuth and
3.1<|η|<3.9, measure the vertex position along the beamline,
located at z=±144 cm from the nominal interaction point. The
BBCs also serve as the minimum-bias (MB) trigger and measure the beam
luminosity.
The data set used in this analysis was collected in 2015 and
recorded at =200 GeV center-of-mass energy. The analyzed events
were selected by the MB trigger and the dimuon triggers, which
required two or more muon tracks in the MuID.
The collision vertex was constrained to be within ±10 cm with
respect to the center of the interaction region. The total sampled
luminosity for the data set is 47 pb^-1.
The observable used in this analysis has theoretical <cit.> and experimental <cit.> motivations. We first define the self-normalized event charged-particle multiplicity as the number of reconstructed charged-particle tracks detected by the forward- or backward-rapidity FVTX with 1.2<η<2.4 and -2.4<η<-1.2, divided by its average for MB events. Then the relative yield of J/ψ, denoted as N_J/ψ/⟨N_J/ψ⟩ in a given multiplicity range, is measured by the forward or backward muon arms covering 1.2<y<2.2 and -2.2<y<-1.2, respectively:
N_J/ψ/⟨ N_J/ψ⟩ = (N_J/ψ^ raw/N_ evt) (N_ evt^ total/N_J/ψ^ raw,total) (ε^ MB_ trig/ε^J/ψ_ trig) (⟨ε^J/ψ_ trig⟩/⟨ε^ MB_ trig⟩) f_ coll,
where N_J/ψ^ raw is the raw J/ψ signal yield extracted from the dimuon invariant-mass fit, and N_ evt is the number of recorded MB events in a given multiplicity bin. The average MB trigger efficiency is ⟨ε^ MB_ trig⟩=55±5% and the average J/ψ trigger efficiency is ⟨ε^J/ψ_ trig⟩=79±2%. The superscript “total" stands for quantities integrated over all multiplicities. Here ε^ MB_ trig is the MB trigger efficiency, ε^J/ψ_ trig is the dimuon trigger efficiency, and f_ coll is a correction factor for multiple collisions. The same observable is then measured for ψ(2S), and the ratio of relative yields is defined as
(N_ψ(2S)/ N_J/ψ)/⟨N_ψ(2S)/N_J/ψ⟩.
We present the data in terms of the same arms, defined as measuring the charged-particle multiplicity and N_J/ψ in the same rapidity range, and opposite arms, defined as measuring them in opposite rapidity ranges. When combining the data from the two arms, we define 1.2<|η|<2.4 for the multiplicity and 1.2<|y|<2.2 for N_J/ψ.
In PHENIX the trigger efficiency has event-multiplicity dependence due
to finite acceptance of the BBC. The event multiplicity-dependent MB
and dimuon trigger efficiencies are determined using a data-driven
approach. The MB trigger efficiency versus event multiplicity was
evaluated with random-collision-clock triggered events by checking
whether the MB trigger is fired for events satisfying the MB trigger
condition. To check the multiplicity dependence of the MB trigger for
hard-scattering events, we require at least one EMCAL cluster of E>2
GeV on top of the random-collision-clock trigger.
The possibility of multiple-collision events, defined as an event having collisions in addition to the one that produced the J/ψ or ψ(2S), increases with increasing multiplicity. A data-driven method
is utilized to estimate the fraction of multiple collision events,
which assumes Poisson statistics and estimates the BBC (MB) trigger
rate R_ BBC using the following formula:
R_ BBC = f_BC[1-e^-μϵ_F-e^-μϵ_B+e^-μ(ϵ_F+ϵ_B-ϵ_FB)],
where f_BC is the beam-crossing rate at RHIC, μ is the mean
number of collisions, ϵ_F (ϵ_B) is the trigger efficiency
(≈75%) of each BBC at forward (backward) rapidity, and
ϵ_FB is the trigger efficiency (≈50%) when both BBCs are
fired. The maximum R_ BBC during the data run is 2.5 MHz,
corresponding to an ≈10% rate of double collision events. Due to the
average R_ BBC being ≈1 MHz and the low probability of having
more than two collisions per beam crossing, only contributions from double
collisions were considered. The ratio of double to single collisions,
f_ coll, is evaluated as a function of the measured-BBC-trigger rate
R_ BBC.
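For illustration only (this is not the collaboration's analysis code), the mean number of collisions per crossing μ can be obtained by numerically inverting the coincidence-rate formula above; the beam-crossing rate and the measured trigger rate used below are assumed placeholder values chosen only to make the snippet run, while the efficiencies are the approximate numbers quoted in the text. Under a simple Poisson picture, the double-to-single collision ratio is then ≈ μ/2.

import math

f_BC   = 9.4e6   # approximate RHIC beam-crossing rate in Hz (assumed, illustrative)
eps_F  = 0.75    # single-side BBC efficiency quoted in the text
eps_B  = 0.75
eps_FB = 0.50    # efficiency for a collision to fire both BBCs, as quoted in the text

def bbc_rate(mu):
    # coincidence-rate formula above, evaluated for a mean collision count mu per crossing
    return f_BC * (1.0 - math.exp(-mu * eps_F) - math.exp(-mu * eps_B)
                   + math.exp(-mu * (eps_F + eps_B - eps_FB)))

def mean_collisions(R_meas, lo=0.0, hi=10.0):
    # bisection; bbc_rate is monotonically increasing in mu
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if bbc_rate(mid) < R_meas else (lo, mid)
    return 0.5 * (lo + hi)

mu = mean_collisions(1.0e6)      # e.g. for an assumed 1 MHz measured MB trigger rate
f_coll = mu / 2.0                # Poisson double-to-single collision ratio
print(mu, f_coll)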
A crystal-ball (CB) function <cit.> is used to model the J/ψ and ψ(2S) signal shapes to extract the raw yields from the dimuon invariant-mass distribution. The tail parameters α and n of the CB function were fixed with values from integrated multiplicity data for J/ψ and from simulation studies for ψ(2S). The FVTX detector delivers the mass resolution needed for ψ(2S) measurements but reduces the acceptance due to a finite matching efficiency between the MuTr and FVTX detectors. For N_J/ψ/⟨N_J/ψ⟩ as a function of the self-normalized multiplicity shown in Figs. <ref> and <ref>, the FVTX detector is not used, to maximize signal statistics. For ψ(2S) measurements with the FVTX detector, a Gaussian function is used
alongside the CB function to account for the broadening of signal shape
due to misassociation between the FVTX and MuTr
detectors <cit.>. The individual multiplicity bins are
fitted by fixing the shape parameters of the CB function to the
integrated multiplicity fit results to extract the J/ψ signal
yield.
For the analysis without the FVTX detector, an exponential decay
function is used to describe the background. For the ψ(2S) to J/ψ
ratio, a modified Hagedorn function is used to model the combinatorial
and correlated-background
contributions <cit.>. The shape of the
combinatorial background is estimated with real data using the
mixed-event method, where opposite sign single muons are selected from
different events. The combinatorial background normalization is
obtained from like-sign single muons selected from the same events. The
correlated background shape is determined using detailed simulation
studies <cit.> and is implemented in the total fit
function by fixing three of the five parameters of the modified
Hagedorn function to constrain the shape.
The sources of systematic uncertainties for the J/ψ measurements include the following: the J/ψ reconstruction efficiency, the dimuon trigger efficiency, the MB trigger efficiency, and the multiple-collision-correction factor f_ coll per beam crossing. A systematic uncertainty is assigned for the J/ψ reconstruction efficiency based on the largest variation when three different z-vertex selections are applied. For the systematic uncertainty related to the dimuon trigger efficiency, the efficiency as a function of multiplicity is first fit with the following function: f(x) = p_0 + p_1e^-p_2x. Then, the efficiency distribution is re-evaluated by
moving the parameter values by ±1σ of the statistical
uncertainty in the fit result. The largest difference with respect to
the nominal value is assigned as the systematic uncertainty. The
MB-trigger efficiency can be affected by multiple collisions. The
systematic uncertainty related to the MB-trigger efficiency is
determined by dividing the trigger efficiency into three different collision rates, determined by the MB trigger: 600–800, 1000–1500,
and 2000–2500 kHz. The trigger efficiency for the high and low trigger
rates are compared to the central rate, and the maximum deviation is
scaled by 1/√(12) (the standard deviation of a uniform
distribution), which is assigned as the systematic uncertainty. The
systematic uncertainty related to the multiple-collision correction
factor is evaluated by comparing correction factors using an
alternate method, which is the ratio of probability distributions
P() between two different trigger rates, less than 500 and
1000–1500 kHz.
The sources of systematic uncertainty for the normalized ψ(2S) to J/ψ ratio include the following: the mixed-event background
normalization, the signal shapes (fixed CB tail parameters and the
second Gaussian width), and the correlated background shape. Most
systematic uncertainties cancel for the double ratio, including
uncertainties related to fixing the CB tail parameters, the second
Gaussian width, and the multiple collision correction factor. The
correlated background uncertainty was determined by allowing two of the
five parameters in the fit function to vary and comparing the resulting
signal yields from each fit. The determination of the uncertainty in
the normalization of the mixed-event background follows the methods
described in Ref. <cit.> and an uncertainty is assigned by
repeating the fit to the invariant-mass distribution over a mass range
extended first below and then above the nominal mass range of the fit.
The self-normalized J/ψ yields
(N_J/ψ/⟨N_J/ψ⟩) as a function of
self-normalized charged-particle multiplicity are shown in
Fig. <ref> and compared to
STAR <cit.> and ALICE <cit.>
data. The J/ψ decay-daughter-muon contributions to the
charged-particle multiplicity have been subtracted in the PHENIX
results to remove the auto-correlation effect. Before subtracting the J/ψ daughter-muon contributions, the PHENIX results show a multiplicity dependence that is similar to the STAR results at midrapidity. For ALICE at 13 TeV, the results at forward rapidity are systematically lower than at midrapidity, but it is difficult to reach conclusions for PHENIX and STAR at 200 GeV due to the larger systematic uncertainties. After subtracting the daughter-muon contributions, the J/ψ yields shift to lower multiplicity, leading to a significant drop of N_J/ψ/⟨ N_J/ψ⟩, which is consistent with the results of the opposite arms, where the multiplicity is free from the daughter-muon contributions. The PHENIX results with subtraction are significantly lower than the ALICE results at forward rapidity, but a comparable slope is seen in the STAR and ALICE results at midrapidity. Note
that the auto-correlation correction was not applied to the ALICE and
STAR results. The impact on the slope is expected to be more
significant for the STAR than the ALICE results due to the smaller
multiplicity.
The self-normalized J/ψ yields are also compared with pythia calculations. In Fig. <ref>, where the simulations are compared to the normalized J/ψ yields in data, the pythia Detroit tune with MPI <cit.> agrees with the data within ≈1σ, while calculations without MPI effects fail to describe the data. Therefore, MPI effects need to be included to accurately model J/ψ production in p+p collisions at RHIC. In addition, comparison between the PHENIX and ALICE results at forward rapidity indicates that the MPI effects on the multiplicity dependence would be even stronger at LHC energies.
The (N_ψ(2S)/N_J/ψ)/⟨ N_ψ(2S)/N_J/ψ⟩ as a function of multiplicity, shown in Fig. <ref>, was measured to investigate final-state interaction effects on charmonium production. The self-normalized yield ratios of ψ(2S) to J/ψ at forward rapidity are shown as a function of normalized charged-particle multiplicity, where the charged-particle multiplicity has been measured using both the forward (same arms) and backward (opposite arms) FVTX detectors. The (N_ψ(2S)/ N_J/ψ)/⟨ N_ψ(2S)/ N_J/ψ⟩ measurements are consistent with unity within ≈1σ, and no significant rapidity dependence is observed. Therefore, final-state interactions on charmonium production appear negligible in the measured multiplicity range and at this collision energy.
Furthermore, in Fig. <ref>, the ALICE forward-rapidity prompt <cit.> and LHCb prompt and nonprompt measurements are included for comparison. The ALICE results also demonstrate similar consistency near unity up to a normalized multiplicity of ≈6. Recently, LHCb has reported suppression of (N_ψ(2S)/N_J/ψ)/⟨N_ψ(2S)/N_J/ψ⟩ in high-multiplicity collisions <cit.>. The final-state comover effect <cit.> becomes more pronounced at high multiplicity, and may result in increased breakup of the ψ(2S) compared to the J/ψ due to its lower binding energy. Finally, Fig. <ref> also shows the pythia Detroit tune with MPI effects for the (N_ψ(2S)/N_J/ψ)/⟨N_ψ(2S)/N_J/ψ⟩ as a function of multiplicity, but without any final-state effects on quarkonia. The pythia calculations yield similar results within uncertainties, and generally reproduce the data near unity. This indicates that the data favor no significant final-state interaction in p+p collisions at √(s)=200 GeV.
In summary, we have reported multiplicity-dependent studies of J/ψ and ψ(2S) production at forward rapidity in p+p collisions at √(s)=200 GeV. The self-normalized J/ψ yields are measured up to ≈6 units of normalized multiplicity, and reasonable consistency with the STAR and ALICE data has been achieved when the J/ψ and charged particles are measured in the same pseudorapidity region. The self-normalized J/ψ yields, both before and after subtracting decay-daughter muons from the multiplicity counting, can be well described by the pythia Detroit tune with MPI. Also, forward self-normalized ψ(2S) to J/ψ yields as a function of forward and backward track multiplicity have been studied. Agreement with unity has been observed within uncertainties, and the results are consistent with each other for different rapidity gaps, suggesting no significant final-state interactions on charmonium production. The pythia model without final-state interactions generally reproduces the self-normalized ψ(2S) to J/ψ yields, indicating that no such effect occurs in p+p collisions at RHIC energies.
Consistency with unity is also reported by ALICE. However, LHCb
observes a suppression of prompt ψ(2S) production at high
multiplicity, aligning with the prediction of final-state comover
effects <cit.>. The LHCb results also demonstrate
consistency with the PHENIX data points within uncertainties.
Investigations of charmonium production in p+Al and p+Au
collisions with PHENIX data, particularly in the backward-rapidity
region, would provide an excellent opportunity to further study
final-state effects in small collision systems at RHIC energies.
We thank the staff of the Collider-Accelerator and Physics
Departments at Brookhaven National Laboratory and the staff of
the other PHENIX participating institutions for their vital
contributions.
We acknowledge support from the Office of Nuclear Physics in the
Office of Science of the Department of Energy,
the National Science Foundation,
Abilene Christian University Research Council,
Research Foundation of SUNY, and
Dean of the College of Arts and Sciences, Vanderbilt University
(U.S.A),
Ministry of Education, Culture, Sports, Science, and Technology
and the Japan Society for the Promotion of Science (Japan),
Conselho Nacional de Desenvolvimento Científico e
Tecnológico and Fundação de Amparo à Pesquisa do
Estado de São Paulo (Brazil),
Natural Science Foundation of China (People's Republic of China),
Croatian Science Foundation and
Ministry of Science and Education (Croatia),
Ministry of Education, Youth and Sports (Czech Republic),
Centre National de la Recherche Scientifique, Commissariat
à l'Énergie Atomique, and Institut National de Physique
Nucléaire et de Physique des Particules (France),
J. Bolyai Research Scholarship, EFOP, HUN-REN ATOMKI, NKFIH,
MATE KKF, and OTKA (Hungary),
Department of Atomic Energy and Department of Science and Technology (India),
Israel Science Foundation (Israel),
Basic Science Research and SRC(CENuM) Programs through NRF
funded by the Ministry of Education and the Ministry of
Science and ICT (Korea).
Ministry of Education and Science, Russian Academy of Sciences,
Federal Agency of Atomic Energy (Russia),
VR and Wallenberg Foundation (Sweden),
University of Zambia, the Government of the Republic of Zambia (Zambia),
the U.S. Civilian Research and Development Foundation for the
Independent States of the Former Soviet Union,
the Hungarian American Enterprise Scholarship Fund,
the US-Hungarian Fulbright Foundation,
and the US-Israel Binational Science Foundation.
[Brambilla et al.(2000)Brambilla, Pineda, Soto, and Vairo]Brambilla:1999xf
author author N. Brambilla, author A. Pineda,
author J. Soto, and author A. Vairo, title
title Potential NRQCD: An Effective theory for heavy
quarkonium, https://doi.org/10. 1016/S0550-3213(99)00693-8
journal journal Nucl. Phys. B volume 566, pages 275 (year
2000)NoStop
[Amundson et al.(1997)Amundson, Eboli, Gregores, and Halzen]Amundson:1996qr
author author J. F. Amundson, author O. J. P. Eboli, author E. M. Gregores, and author F. Halzen, title title Quantitative tests of
color evaporation: Charmonium production, https://doi.org/10.
1016/S0370-2693(96)01417-7 journal journal Phys.
Lett. B volume 390, pages 323
(year 1997)NoStop
[Kuhn et al.(1980)Kuhn,
Nussinov, and Ruckl]Kuhn:1979zb
author author J. H. Kuhn, author S. Nussinov, and author R. Ruckl, title title Charmonium Production in B Decays, https://doi.org/10. 1007/BF01576192 journal journal Z. Phys. C volume 5, pages
117 (year 1980)NoStop
[Baumgart et al.(2014)Baumgart, Leibovich, Mehen, and Rothstein]Baumgart:2014upa
author author M. Baumgart, author A. K. Leibovich, author T. Mehen, and author I. Z. Rothstein, title title Probing Quarkonium Production
Mechanisms with Jet Substructure, https://doi.org/10.
1007/JHEP11(2014)003 journal journal J. High
Energy Phys. volume 11number
(2014), pages 003NoStop
[Adam et al.(2018)Adam et al.]STAR:2018smh
number author author J. Adam et al. (collaboration STAR
Collaboration), title title J/ψ production
cross section and its dependence on charged-particle multiplicity in p + p
collisions at √(s) = 200 GeV, https://doi.org/10.
1016/j.physletb.2018.09.029 journal journal
Phys. Lett. B volume 786, pages 87
(year 2018)NoStop
[Abelev et al.(2012a)Abelev et al.]ALICE:2011gej
author author B. Abelev et al. (collaboration ALICE Collaboration), title title J/ψ polarization in pp
collisions at √(s)=7 TeV, https://doi.org/10.
1103/PhysRevLett.108.082001 journal journal
Phys. Rev. Lett. volume 108, pages
082001 (year 2012a)NoStop
[Acharya et al.(2022a)Acharya et al.]ALICE:2021zkd
author author S. Acharya et al. (collaboration ALICE Collaboration), title title Forward rapidity J/
production as a function of charged-particle multiplicity in pp collisions
at √(s) = 5. 02 and 13 TeV, https://doi.org/10.
1007/JHEP06(2022)015 journal journal J. High
Energy Phys. volume 06number
(2022), pages 015NoStop
[Acharya et al.(2020)Acharya
et al.]ALICE:2020msa
number author author S. Acharya et al. (collaboration ALICE
Collaboration), title title Multiplicity
dependence of J/ψ production at midrapidity in pp collisions at
√(s) = 13 TeV, https://doi.org/10.
1016/j.physletb.2020.135758 journal journal
Phys. Lett. B volume 810, pages
135758 (year 2020)NoStop
[Arsene et al.(2005)Arsene
et al.]BRAHMS:2004adc
author author I. Arsene et al. (collaboration BRAHMS Collaboration), title title Quark gluon plasma and color glass
condensate at RHIC? The Perspective from the BRAHMS experiment, https://doi.org/10. 1016/j.nuclphysa.2005.02.130 journal
journal Nucl. Phys. A volume 757, pages 1 (year 2005)NoStop
[Adcox et al.(2005)Adcox
et al.]PHENIX:2004vcz
author author K. Adcox et al. (collaboration PHENIX Collaboration), title title Formation of dense partonic matter in
relativistic nucleus-nucleus collisions at RHIC: Experimental evaluation by
the PHENIX collaboration, https://doi.org/10.
1016/j.nuclphysa.2005.03.086 journal journal
Nucl. Phys. A volume 757, pages 184
(year 2005)NoStop
[Back et al.(2005)Back et al.]PHOBOS:2004zne
author author B. B. Back et al. (collaboration PHOBOS Collaboration), title title The PHOBOS perspective on discoveries
at RHIC, https://doi.org/10. 1016/j.nuclphysa.2005.03.084
journal journal Nucl. Phys. A volume 757, pages 28 (year
2005)NoStop
[Adams et al.(2005)Adams
et al.]STAR:2005gfr
author author J. Adams et al. (collaboration STAR Collaboration), title title Experimental and theoretical
challenges in the search for the quark gluon plasma: The STAR Collaboration's
critical assessment of the evidence from RHIC collisions, https://doi.org/10. 1016/j.nuclphysa.2005.03.085 journal
journal Nucl. Phys. A volume 757, pages 102 (year 2005)NoStop
[Ferreiro(2015)]Ferreiro:2014bia
author author E. G. Ferreiro, title title Excited charmonium
suppression in proton-nucleus collisions as a consequence of comovers, https://doi.org/10.1016/j.physletb.2015.07.066 journal
journal Phys. Lett. B volume 749, pages 98 (year 2015)NoStop
[Aaij et al.(2024)Aaij et al.]LHCb:2023xie
author author R. Aaij et al. (collaboration LHCb Collaboration), title title Multiplicity dependence of
_(2S)/_J/ in pp collisions at
√(s) = 13 TeV, https://doi.org/10. 1007/JHEP05(2024)243
journal journal J. High Energy Phys. volume 05number (2024), pages
243NoStop
[Morrison et al.(1998)Morrison et al.]PHENIX:1998vmi
number author author D. P. Morrison et al. (collaboration
PHENIX Collaboration), title title The PHENIX
experiment at RHIC, https://doi.org/10.1016/S0375-9474(98)00390-X
journal journal Nucl. Phys. A volume 638, pages 565 (year
1998)NoStop
[Akikawa et al.(2003)Akikawa
et al.]Akikawa:2003zs
author author H. Akikawa et al. (collaboration PHENIX Collaboration), title title PHENIX muon arms, https://doi.org/10.1016/S0168-9002(02)01955-1 journal
journal Nucl. Instrum. Methods Phys. Res., Sec. A volume 499, pages 537 (year
2003)NoStop
[Adachi et al.(2013)Adachi
et al.]Adachi:2013qha
author author S. Adachi et al., title title Trigger
electronics upgrade of PHENIX muon tracker, https://doi.org/10.1016/j.nima.2012.11.088 journal journal Nucl. Instrum. Methods Phys. Res., Sec. A volume 703, pages 114 (year
2013)NoStop
[Aidala et al.(2014)Aidala
et al.]Aidala:2013vna
author author C. Aidala et al., title title The PHENIX
Forward Silicon Vertex Detector, https://doi.org/10.1016/j.nima.2014.04.017 journal journal Nucl. Instrum. Methods Phys. Res., Sec. A volume 755, pages 44 (year 2014)NoStop
[Allen et al.(2003)Allen
et al.]Allen:2003zt
author author M. Allen et al. (collaboration PHENIX Collaboration), title title PHENIX inner detectors, https://doi.org/10.1016/S0168-9002(02)01956-3 journal
journal Nucl. Instrum. Methods Phys. Res., Sec. A volume 499, pages 549 (year
2003)NoStop
[Nouicer(2007)]Nouicer:2007rb
author author R. Nouicer (collaboration PHENIX Collaboration), title title PHENIX Upgrade: Novel Stripixel Detector for
Heavy Quark Detection and Proton Spin Structure Measurements at RHIC
Energies, https://doi.org/10.1016/j.nimb.2007.04.265 journal journal Nucl. Instrum. Methods Phys. Res., Sec. A volume 261, pages 1067 (year
2007)NoStop
[Ferreiro and Pajares(2012)]Ferreiro:2012
author author E. G. Ferreiro and author C. Pajares, title title High multiplicity pp
events and J/ψ production at energies available at the CERN Large Hadron
Collider, https://doi.org/10.1103/PhysRevC.86.034903 journal journal Phys. Rev. C volume
86, pages 034903 (year 2012)NoStop
[Abelev et al.(2012b)Abelev et al.]ALICE:2012165
author author B. Abelev et al. (collaboration ALICE Collaboration), title title J/ψ production as a function of
charged particle multiplicity in pp collisions at √(s)=7 TeV, https://doi.org/https://doi.org/10.1016/j.physletb.2012.04.052 journal journal Phys. Lett. B volume
712, pages 165 (year
2012b)NoStop
[Oh and Lim(2023)]Oh:2023lvj
author author J. Oh and author S. Lim, title title Simulation study of
multiplicity-dependent charmonia production with pythia, https://doi.org/10.1007/s40042-023-00753-6 journal journal J. Korean Phys. Soc. volume 82, pages 651 (year 2023)NoStop
[Gaiser(1982)]Gaiser:1982yw
author author J. E. Gaiser, title Charmonium Spectroscopy From Radiative
Decays of the J/ψ and ψ^', @noop type
Masters thesis, school Stanford University (year
1982), note SLAC-0255, UMI-83-14449-MC, SLAC-R-0255,
SLAC-R-255NoStop
[Adare et al.(2017)Adare
et al.]PHENIX:2016vmz
author author A. Adare et al. (collaboration PHENIX Collaboration), title title Measurement of the relative yields of
ψ(2S) to ψ(1S) mesons produced at forward and backward rapidity in
p+p, p+Al, p+Au, and ^3He+Au collisions at √(s_NN)=200
GeV, https://doi.org/10. 1103/PhysRevC.95.034904 journal journal Phys. Rev. C volume
95, pages 034904 (year 2017)NoStop
[Aidala et al.(2019)Aidala
et al.]Aidala:2018ajl
author author C. Aidala et al. (collaboration PHENIX Collaboration), title title Measurements of μμ pairs from
open heavy flavor and Drell-Yan in p+p collisions at √(s)=200
GeV, https://doi.org/10.1103/PhysRevD.99.072003 journal journal Phys. Rev. D volume
99, pages 072003 (year 2019)NoStop
[Acharya et al.(2022b)Acharya et al.]PHENIX:2022nrm
author author U. A. Acharya et al. (collaboration PHENIX Collaboration), title title Measurement of ψ(2S) nuclear
modification at backward and forward rapidity in p+p, p+Al, and p+Au
collisions at √(s__NN)=200 GeV, https://doi.org/10.1103/PhysRevC.105.064912 journal journal Phys. Rev. C volume 105, pages 064912 (year 2022b)NoStop
[Acharya et al.(2023)Acharya
et al.]ALICE:2022gpu
author author S. Acharya et al. (collaboration ALICE Collaboration), title title Measurement of ψ(2S) production
as a function of charged-particle pseudorapidity density in pp collisions
at √(s) = 13 TeV and p-Pb collisions at √(s_NN) = 8. 16 TeV
with ALICE at the LHC, https://doi.org/10. 1007/JHEP06(2023)147
journal journal J. High Energy Phys. volume 06number (2023), pages
147NoStop
[Aguilar et al.(2022)Aguilar, Chang, Elayavalli, Fatemi, He, Ji, Kalinkin,
Kelsey, Mooney, and Verkest]Aguilar:2021sfa
number author author M. R. Aguilar, author Z. Chang, author R. K. Elayavalli, author R. Fatemi, author Y. He, author Y. Ji, author D. Kalinkin,
author M. Kelsey, author I. Mooney, and author
V. Verkest, title title pythia8 underlying event tune for RHIC energies, https://doi.org/10.1103/PhysRevD.105.016011 journal journal Phys. Rev. D volume 105, pages 016011 (year 2022)NoStop
[Ferreiro(2014)]FERREIRO201457
author author E. Ferreiro, title title Charmonium dissociation
and recombination at LHC: Revisiting comovers, https://doi.org/https://doi.org/10.1016/j.physletb.2014.02.011 journal journal Phys. Lett. B volume
731, pages 57 (year 2014)NoStop
|
http://arxiv.org/abs/2409.02345v2 | 20240904001605 | Combined Plant and Control Co-design via Solutions of Hamilton-Jacobi-Bellman Equation Based on Physics-informed Learning | [
"Kenjiro Nishimura",
"Hikaru Hoshino",
"Eiko Furutani"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Combined Plant and Control Co-design via Solutions of Hamilton-Jacobi-Bellman Equation Based on Physics-informed Learning
Kenjiro Nishimura, Hikaru Hoshino, Eiko Furutani
Department of Electrical Materials and Engineering
University of Hyogo
2167 Shosya, Himeji, Hyogo 671-2280, Japan
[email protected], {hoshino, furutani}@eng.u-hyogo.ac.jp
================================================================================================================================================================================================================================================================
§ ABSTRACT
This paper addresses integrated design of engineering systems, where the physical structure of the plant and the controller design are optimized simultaneously. To cope with uncertainties due to noise acting on the dynamics and modeling errors, an Uncertain Control Co-design (UCCD) problem formulation is proposed. Existing UCCD methods usually rely on uncertainty propagation analyses using Monte Carlo methods for open-loop solutions of optimal control, which suffer from stringent trade-offs among accuracy, time horizon, and computational time. The proposed method utilizes closed-loop solutions characterized by the Hamilton-Jacobi-Bellman equation, a Partial Differential Equation (PDE) defined on the state space. A solution algorithm for the proposed UCCD formulation is developed based on PDE solutions obtained with Physics-informed Neural Networks (PINNs). Numerical examples of regulator design problems are provided, and it is shown that simultaneously updating the PINN weights and the design parameters works effectively for solving UCCD problems.
Control Co-design (CCD), optimal control, Physics-informed Neural Networks (PINNs), stochastic processes
§ INTRODUCTION
When designing an engineering system, one can adopt a sequential strategy where the physical system design is optimized first, followed by the controller design.
For instance, consider the system-level design of a robotic manipulator.
The physical system design may optimize the geometric parameters of the links, and the control design may determine the joint torque time trajectories for specific tasks <cit.>.
Although the physical system design is often performed based only on static characteristics of the system, the resulting systems are suboptimal in most cases, and it is desirable to consider dynamic characteristics in the physical system design in cooperation with the controller design.
This view is called as Control Co-design (CCD) approach <cit.>, and many authors have shown its benefit in various applications including robotic manipulators <cit.>, quadruped robots <cit.>, flexible space structures <cit.>, electric motors <cit.>, offshore wind farms <cit.>, Field Programmable Gate Array (FPGA) circuits <cit.>, and civil structures <cit.>.
One of the main challenges in CCD that remains to be addressed is to consider the impact of uncertainties coming from the noise acting on the control channels, unmodeled or neglected dynamics of the system, estimation errors in model parameters, and so on.
All of these uncertainties may propagate through the dynamical system and transform the states into uncertain trajectories <cit.>.
These problems are termed as Uncertain CCD (UCCD) <cit.> or Robust CCD <cit.>, and several formulations and solution methods have been proposed.
In <cit.>, it is proposed to optimize a metric that represents the sensitivity of the trajectory to perturbation.
This can reduce the sensitivity of the trajectory to a specific uncertainty by relying on custom-made cost formulations, but it increases the complexity of the problem, and the cost function has to be carefully designed.
Another approach is stochastic programming, in which the optimal trajectory is found for a set of perturbed scenarios <cit.>.
In <cit.>, a bi-level optimization scheme is proposed, where trajectories are optimized by the Differential Dynamic Programming (DDP) algorithm in an inner loop, and hardware is optimized in an outer loop with a genetic algorithm based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) <cit.>.
However, the above methods rely on Monte Carlo simulation for analyzing uncertainty propagation through dynamical systems, and are subject to stringent trade-offs among accuracy, time horizon, and computational time <cit.>.
In this paper, we propose a novel approach to UCCD problems, where optimality of the trajectories are characterized by the Hamilton-Jacobi-Bellman (HJB) equation in optimal control.
In contrast to the DDP-based approach in <cit.>, where open-loop solutions of an optimal control problem are computed, the HJB equation characterizes closed-loop solutions in the form of a deterministic Partial Differential Equation (PDE), and uncertainty propagation can be naturally incorporated within the framework of stochastic control theory <cit.>.
In our previous work <cit.>, we proposed a CCD solution method for deterministic dynamics with uncertainties in initial conditions based on Galerkin-approximations of the HJB equation <cit.>.
This paper presents an extension of the CCD framework in <cit.> to deal with stochastic dynamics and uncertainties in model parameters.
To the best of our knowledge, this is the first work to use the HJB equation for the purposes of solving UCCD problems.
Furthermore, we present a UCCD solution method based on Physics-informed Neural Networks (PINNs) <cit.>.
PINNs are widely used for solving various PDEs <cit.> and have potential for dealing with high-dimensional systems <cit.>, whereas the HJB equation is notoriously difficult to solve using standard numerical methods when the dimension becomes 5 or more <cit.>.
This paper presents numerical examples of regulator design for a 2-dimensional nonlinear system and Linear Quadratic Regulator (LQR) problems of up to 10 dimensional systems to examine the effectiveness of the proposed method.
The rest of this paper is organized as follows.
In sec:formulation, the proposed UCCD problem formulation is presented.
A solution method for the UCCD problem based on PINNs is introduced in sec:method.
Numerical results are provided in sec:simulation.
Conclusions and future works are summarized in sec:conclusion.
§.§ Notation
Let ℝ be the set of real numbers, and ℝ^n be the n-dimensional Euclidean space.
For an open set A, A stands for the closure of A, and ∂ A for the boundary of A.
For a scalar function ϕ,
∂_x ϕ stands for the gradient of ϕ with respect to x, and ∂_x^2 ϕ for the Hessian matrix of ϕ.
Let tr(M) be the trace of the matrix M.
For random variables X and Y, let 𝔼[X] be the expectation of X, and 𝔼[X|Y=y] be the conditional expectation of X given Y=y.
We use upper-case letters (e.g., Y) to denote random variables and lower-case letters (e.g., y) to denote their specific realizations.
§ CO-DESIGN PROBLEM FORMULATION
In this section, we firstly introduce a basic co-design problem commonly studied in literature in sec:basic_formulation, and then present the proposed formulation in sec:proposed_formulation.
§.§ Deterministic Control Co-design Problem
There are two commonly studied strategies for CCD problems: simultaneous and nested <cit.>.
The simultaneous strategy optimizes both the plant and control variables in a same optimization formulation and is the most fundamental representation <cit.>.
The nested strategy can be seen as a specific reorganization of the simultaneous formulation. An outer loop optimizes the design of the controlled plant, and an inner loop identifies the optimal control for each plant design tested by the outer loop.
Here we briefly introduce a basic formulation in the simultaneous strategy.
Consider the following dynamical system with an l-dimensional design parameter ρ=[ρ_1, …, ρ_l ]^⊤:
ẋ = f( x, u; ρ )
where x ∈ℝ^n stands for the n-dimensional state, ẋ for the time derivative of x, and u ∈ℝ^m for the m-dimensional control action.
For this system, a standard optimal control problem can be considered with the following cost integral J_c:
J_c(ρ, u(·)) =
∫_0^T L ( x(t), u(t), ρ) dt
+ M(x(T), ρ)
where T stands for the horizon length, and L and M for the Lagrange cost (running cost) and Mayer cost (terminal cost), respectively.
The above notation shows that the cost integral J_c can be affected by the design parameter ρ through the dependence of L and M on ρ.
Besides, J_c depends on ρ through the change in the dynamics (<ref>).
By combining the cost J_c representing the control performance and an additional cost J_p about the choice of the parameter ρ of the plant, which represents, e.g., hardware materials costs or assembling costs, the co-design problem can be formulated as
min_ρ, u(·) w_p J_p(ρ) + w_c J_c(ρ, u(·))
s.t. ẋ(t) = f(x(t), u(t); ρ), ∀ t∈ [0, T],
x(0) = x_0
where w_p and w_c are weighting coefficients, and x_0 stands for the initial state of the optimal control problem.
Although a more concise formulation can be obtained by including the term J_p in the Mayer term of the optimal control problem, the above formulation is commonly used to allow for more natural representations of CCD problems.
§.§ Proposed Formulation
Here we present the proposed formulation for UCCD problems.
Let (Ω, ℱ, {ℱ_t}_t≥0, ) be a filtered probability space, and consider a control system with stochastic noise represented by ℱ_t-standard w-dimensional Brownian motion { W_t }_t ≥ 0 starting from W_0 = 0.
For an open set 𝕏⊂ℝ^n in the n-dimensional state space, the state X_t ∈𝕏 evolves according to the following Stochastic Differential Equation (SDE):
dX_t = { f(X_t;ρ)+g(X_t;ρ)U_t }dt + σ(X_t; ρ) dW_t,
where ρ stands for the design parameter of the hardware, and {U_t}_t≥0 with U_t ∈𝕌⊂ℝ^m is
an {ℱ_t}_t≥0-adapted control process, which takes values in 𝕌.
Throughout this paper, we assume sufficient regularity in the coefficients of the system (<ref>).
That is, the functions f, g and σ are chosen in a way such that the SDE (<ref>) admits a unique strong solution (see, e.g., Section IV.2 of <cit.>).
The size of σ(X_t; ρ) is determined from the uncertainties in the disturbance, unmodeled dynamics, and prediction errors of the environmental variables.
As we solve the equation in the domain 𝕏, define a stopping time τ as
τ = inf{ t | X_t ∉𝕏}.
Then, for each ρ, consider an optimal control problem to minimize the following cost functional:
J_c(ρ, x, U) = 𝔼[ ∫_0^τ L(X_s, U_s, ρ) e^{-γ s} ds + e^{-γτ} M(X_τ, ρ) | X_0 = x ],
where L and M represent the Lagrange and Mayer costs as mentioned in sec:basic_formulation, and γ stands for the discount factor.
The control process U:={U_t}_t≥0 is chosen over a set 𝒰 of admissible control processes that have values in 𝕌 and are adapted to the filtration {ℱ_t}_t≥0.
Then, by defining the value function V as
V(ρ, x) = inf_U∈𝒰 J_c (ρ, x, U),
the control performance is characterized as a function of the parameter ρ and the initial state x.
From stochastic control theory <cit.>, it can be shown that the value function (<ref>) satisfies the following HJB equation:
inf_u ∈𝕌{ℒ^u V(ρ, x) + L(x,u,ρ) -γ V(ρ, x) }=0,
in 𝕏, where ℒ^u is defined by
ℒ^u V(ρ,x) := (1/2) Tr[ σ^⊤ (∂_x^2 V) σ ](ρ, x) + { f(x;ρ) + g(x;ρ)u }^⊤ ∂_x V(ρ,x)
with the boundary condition
V(ρ, x) = M(ρ, x),
on ∂𝕏.
By using the above, the UCCD problem proposed in this paper is formulated as
min_ρ J = w_p J_p(ρ) + w_c∫_𝕏ω(x) V(ρ, x) dx
s.t.
inf_u ∈𝕌{ℒ^u V(ρ, x) + L(x,u,ρ) -γ V(ρ, x) }=0, x ∈𝕏,
V(ρ, x) = M(ρ, x), x ∈∂𝕏
where ω: ℝ^n →ℝ is a weighting function to take an expectation of the value V(ρ, x) satisfying
∫_𝕏ω(x) dx = 1.
The problem (<ref>) is a partial differential equation constrained optimization problem, and a proper computational technique is needed to be solved.
In the above formulation, both the effects of the noise acting on the control channels and unmodeled/neglected dynamics are represented by the nonlinear function σ.
Also, uncertainties in initial conditions are represented by the distribution ω. Furthermore, if there are uncertain model parameters, denoted by ϕ, the system dynamics can be augmented as
[ dX_t; dϕ ]=
[ f(X_t;ρ, ϕ)+g(X_t; ρ, ϕ) U_t; 0 ]dt
+[ σ(X_t; ρ); 0 ]dW_t.
Thus, by applying the formulation (<ref>) adopted for the augmented system (<ref>), uncertainties in model parameters can also be captured by the distribution ω.
In conclusion, the proposed formulation can capture all the uncertainties due to the noise acting on the control channels, unmodeled/neglected dynamics, initial conditions, and model parameters.
§ SOLUTION ALGORITHM
This section presents a solution method for the proposed formulation for UCCD problems.
To deal with the HJB equation as a PDE constraint in the optimization problem (<ref>), we use the PINN framework, which is able to solve PDEs by exploiting machine learning techniques <cit.>.
fig:method shows a schematic overview of the proposed UCCD method based on PINN.
The PINN takes the pair (ρ, x) of the design parameter ρ and the state x (as well as the uncertain model parameter ϕ stated in Remark 1 if the augmented representation is used), and outputs the prediction V of the value V(ρ, x), as well as its derivatives ∂V/∂ρ and ∂V/∂ x computed by automatic differentiation <cit.>.
By assuming that the PINN is parameterized by θ, the loss function L_PINN for the learning is defined as
L_PINN(θ, 𝒮_h, 𝒮_b) = μ_h L_HJB(θ, 𝒮_h) + μ_b L_bdry(θ,𝒮_b)
where L_HJB and L_bdry are loss terms for the HJB equation (<ref>) and the boundary condition (<ref>), respectively, and μ_h and μ_b are the weighting coefficients.
The HJB loss term L_HJB is a function of θ and a set of N_h random samples with 𝒮_h = { (ρ_i, x_i) | i∈{1,… N_h}, x_i ∈𝕏}, and given by
L_HJB(θ, 𝒮_h) = (1/N_h) ∑_i=1^N_h | F(ρ_i, x_i, θ) |^2
with
F(ρ, x, θ) = inf_u ∈𝕌{ ℒ^u V(ρ, x, θ) + L(x,u,ρ) - γ V(ρ, x, θ) }.
Here, the optimal control u needs to be addressed, and in this paper, we assume a specific form of the cost function.
The Lagrange cost term L in the cost functional J_c in (<ref>) takes the following form
L(x, u) = L̂(x) + u^⊤𝖱 u,
where 𝖱 is a positive-definite matrix.
With this assumption, we have an explicit optimal control as
u^∗(x) = -(1/2) 𝖱^-1 g(x)^⊤ ∂_x V.
The boundary loss term L_bdry in (<ref>) is computed from a set of N_b random samples with 𝒮_b = { (ρ_j, x_j) | j∈{1,… N_b}, x_j ∈∂𝕏}, and given by
L_bdry(θ,𝒮_b) = (1/N_b) ∑_j=1^N_b | V(ρ_j, x_j, θ) - M(ρ_j, x_j) |^2.
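To make this concrete, a minimal PyTorch-style sketch of how the residual F(ρ, x, θ) and the two loss terms can be assembled with automatic differentiation is given below. The helper names (value_net, f, g, L_hat, and so on) are placeholders, and a diagonal diffusion matrix σ is assumed only to keep the trace term short, so this is an illustrative sketch rather than the exact implementation used for the experiments in sec:simulation.

import torch

def hjb_residual(value_net, rho, x, f, g, sigma_diag, L_hat, R, gamma):
    # F(rho, x, theta) for the quadratic control cost L = L_hat(x) + u^T R u,
    # assuming a diagonal diffusion matrix sigma (an assumption of this sketch only).
    x = x.requires_grad_(True)
    V = value_net(rho, x).squeeze(-1)                                # (batch,)
    gradV = torch.autograd.grad(V.sum(), x, create_graph=True)[0]    # (batch, n)

    # explicit minimizer u* = -(1/2) R^{-1} g(x)^T dV/dx
    gT_gradV = torch.einsum('bnm,bn->bm', g(x, rho), gradV)
    u_star = -0.5 * gT_gradV @ torch.inverse(R)                      # (batch, m)

    # (1/2) Tr[sigma^T (d^2 V/dx^2) sigma] using only diagonal second derivatives
    trace_term = torch.zeros_like(V)
    for i in range(x.shape[1]):
        d2V_ii = torch.autograd.grad(gradV[:, i].sum(), x, create_graph=True)[0][:, i]
        trace_term = trace_term + sigma_diag[i] ** 2 * d2V_ii

    drift = f(x, rho) + torch.einsum('bnm,bm->bn', g(x, rho), u_star)
    running_cost = L_hat(x) + torch.einsum('bm,mk,bk->b', u_star, R, u_star)
    return 0.5 * trace_term + (drift * gradV).sum(dim=1) + running_cost - gamma * V

def pinn_loss(value_net, rho_h, x_h, rho_b, x_b, M, mu_h=1.0, mu_b=1.0, **dynamics):
    # L_PINN = mu_h L_HJB + mu_b L_bdry, both evaluated as mean-squared errors
    F = hjb_residual(value_net, rho_h, x_h, **dynamics)
    loss_hjb = (F ** 2).mean()
    V_b = value_net(rho_b, x_b).squeeze(-1)
    loss_bdry = ((V_b - M(rho_b, x_b)) ** 2).mean()
    return mu_h * loss_hjb + mu_b * loss_bdry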
To solve the UCCD problem (<ref>), we propose to simultaneously update the weights θ of the PINN and the design parameter ρ.
The proposed solution method is presented in Algorithm <ref>.
At each epoch, the sets 𝒮_h = { (ρ_i, x_i )}_i=1^N_h and 𝒮_b = { (ρ_j, x_j )}_j=1^N_b are randomly sampled with x_i ∈𝕏 and x_j ∈∂𝕏.
The samples ρ_i and ρ_j are drawn by
ρ_i=ρ+ϵ_i, ρ_j = ρ + ϵ_j, ϵ_i,ϵ_j ∼𝒩(0, σ_n^2)
where 𝒩(0, σ_n^2) stands for the Gaussian distribution with zero mean and standard deviation σ_n.
The noise term ϵ is added for exploration of ρ, and its effect is discussed in sec:simulation.
The minimization of the loss L_PINN imposes the constraints in the UCCD problem (<ref>).
The design parameter ρ is then updated to minimize the objective function J.
To this end, a set 𝒮_r of N_r random samples is drawn as 𝒮_r = { x_k | k∈{ 1, …, N_r}, x_k ∈𝕏}, and the following loss L_r is considered:
L_r(ρ, 𝒮_r) = w_p J_p(ρ) + (w_c/N_r) ∑_k=1^N_r V(ρ, x_k, θ)
Note that the samples x_k need to be drawn from the distribution ω in (<ref>) for this estimate to be unbiased.
The parameter ρ is updated by performing a gradient step on the loss L_r once in N_up epochs.
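The simultaneous update can then be summarized by the following sketch, in which the PINN weights θ are stepped on L_PINN every epoch while ρ is perturbed for exploration during sampling and stepped on L_r once every N_up epochs; the sampling routines, the use of Adam, and the learning rates are illustrative placeholders rather than the exact settings of Algorithm <ref>.

import torch

def co_design(value_net, rho_init, sample_interior, sample_boundary,
              loss_pinn, loss_rho, n_epochs=20000, n_up=500, sigma_n=0.1):
    # Simultaneous update of the PINN weights theta and the design parameter rho.
    rho = torch.nn.Parameter(torch.atleast_1d(torch.as_tensor(rho_init, dtype=torch.float32)))
    opt_theta = torch.optim.Adam(value_net.parameters(), lr=3e-3)
    opt_rho = torch.optim.Adam([rho], lr=2e-2)

    for epoch in range(n_epochs):
        x_h = sample_interior()                    # x_i in X
        x_b = sample_boundary()                    # x_j on the boundary of X
        # perturb rho for exploration when building S_h and S_b
        rho_h = rho.detach() + sigma_n * torch.randn(x_h.shape[0], rho.numel())
        rho_b = rho.detach() + sigma_n * torch.randn(x_b.shape[0], rho.numel())

        opt_theta.zero_grad()
        loss_pinn(value_net, rho_h, x_h, rho_b, x_b).backward()
        opt_theta.step()                           # gradient step on theta

        if (epoch + 1) % n_up == 0:
            x_r = sample_interior()                # drawn according to the weight omega
            opt_rho.zero_grad()
            loss_rho(value_net, rho.expand(x_r.shape[0], -1), x_r).backward()
            opt_rho.step()                         # gradient step on rho
    return rho.detach()

Here loss_pinn and loss_rho are callables closed over the problem data (dynamics, costs, and weights), standing in for the losses defined above.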
§ NUMERICAL EXAMPLES
This section provides numerical examples of CCD problems for regulator designs.
A 2-dimensional nonlinear deterministic dynamical system is treated in sec:planer, and LQR problems for up to 10-dimensional systems are treated in sec:stochasticLQR.
The algorithm was implemented by a deep learning framework PyTorch <cit.>, and our implementation is available at <https://github.com/er24h020/SCIS_ISIS_2024>.
§.§ Deterministic Nonlinear Planar System
We consider a CCD problem for a planar nonlinear dynamical system with uncertainty in initial states, for which a numerical analysis has been performed in our previous work <cit.> based on a Galerkin approximation-based CCD method.
Consider the following deterministic dynamics:
ẋ = [ ẋ_1; ẋ_2 ] = f(x; ρ) + g(x; ρ) u = [ -x_1^3 - x_2; x_1 + x_2 ] + [ 0; ρ ] u
where x = [x_1, x_2]^⊤∈ℝ^2 stands for the state, u ∈ℝ for the input, and ρ∈ℝ for the design parameter.
The objective function for the optimal control is given by
J_c(ρ, x) = ∫_0^∞ p x(t)^⊤ x(t) + q u^2(t) dt,
with p=q=1.
With this cost function, we have the following HJB equation:
∂_x V^⊤ (f + g u) + p x^⊤ x+ q u^2 =0
with u= -ρ∂_x_2V/(2q).
The boundary condition is given as V(ρ, 0) = 0.
Then, consider the CCD problem with the following objective function:
J = w_p |ρ| + w_c∫_𝕏ω(x) V(x) dx
where 𝕏 = { (x_1, x_2) ∈ℝ^2 | |x_1|≤ 1, |x_2|≤ 1 }, w_p=1, w_c=4, and ω(x)≡ 1/4.
With this problem setting, the control performance is improved as ρ increases, but a penalty is imposed on the increase in ρ by the first term of the objective function.
The objective function J was minimized by Algorithm <ref>.
For the function approximator V of the value function, we used a neural network with 3 hidden layers of 64 units each and the hyperbolic tangent as the activation function.
The softplus function was used at the output layer.
For updating the neural network's weights, an optimizer with a learning rate of 3E-3 was used.
The weights in the loss (<ref>) were set as μ_h=μ_b=1, and the numbers of samples were N_h=1000 for the inside of 𝕏 and N_b=100 for the boundary.
The design parameter ρ was initially set to ρ = 1, and updated using an optimizer with a learning rate of 2e-2 every 1 or 500 epochs (N_up=1 or 500), with N_r=1000.
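For concreteness, one possible PyTorch definition of this approximator (three hidden layers of 64 tanh units with a softplus output, taking the concatenated pair (ρ, x) as input) is sketched below; the class name and default dimensions are arbitrary choices for this sketch.

import torch

class ValueNet(torch.nn.Module):
    # V(rho, x; theta): the input is the concatenated pair (rho, x).
    def __init__(self, dim_rho=1, dim_x=2, width=64):
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Linear(dim_rho + dim_x, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1), torch.nn.Softplus(),   # nonnegative output
        )

    def forward(self, rho, x):
        return self.body(torch.cat([rho, x], dim=-1))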
fig:2d_results shows results of the proposed CCD algorithm applied to the planar example.
We compare the cases where ρ is sampled with and without adding the noises ϵ_i and ϵ_j at lines 4 and 6 of Algorithm <ref>.
For the additive noise, we used a uniform distribution with an amplitude of 0.1.
In fig:rho_nolta, the cyan and blue lines show the changes in the design parameter ρ when it is updated at each epoch (N_up=1). The purple line shows the results with N_up=500 and added noise.
The solid curves correspond to the mean of ten repeated experiments, and the shaded region shows their standard deviations.
It can be seen that the standard deviation of ρ is significantly reduced by adding the noise, and the parameter ρ converges to the optimal solution ρ = 2.1, which is calculated in <cit.>, only when ρ is explored by adding the noise.
As shown in fig:loss_nolta, the PINN loss L_PINN is kept small, and it can be confirmed that the simultaneous update of ρ does not impede the learning process of the PINN.
§.§ Stochastic LQR problem
Next we present a stochastic LQR example based on <cit.>.
Consider the following d-dimensional controlled stochastic process given by
dX_t = ρ U_t dt + √(2)dW_t
where X_t ∈𝕏⊂ℝ^d, U_t ∈ℝ^d, W_t ∈ℝ^d, and ρ∈ℝ.
For the domain 𝕏, consider d-dimensional sphere of radius R:
𝕏 = { x ∈ℝ^d | ‖x‖ < R }.
The cost functional for the optimal control is given by
J_c(ρ, x) = 𝔼[ ∫_0^τ ( p ‖X_s‖^2 + q ‖U_s‖^2 - 2k d) e^{-γ s} ds + e^{-γτ} k R^2 ]
where τ stands for the exit time of the domain 𝕏, and the constant k is given by
k = (√(q^2 γ^2 + 4pqρ^2) -γ q )/( 2 ρ^2 ).
With this setting, the value function V satisfies the following HJB equation:
Tr(∂_x^2 V(ρ, x)) + inf_{u ∈ℝ^d} ( ρ u^⊤ ∂_x V(ρ,x) + q ‖u‖^2 ) + p ‖x‖^2 - 2kd - γ V(ρ, x) = 0.
with the boundary condition V(ρ, x) = k R^2.
It is known that the PDE has the exact solution given by the quadratic function V(x) = k ‖x‖^2, and the optimal control is given as u^∗(x) = -ρ ∂_x V(x)/(2q) = -kρ x /q.
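This can be checked directly: substituting V(x) = k‖x‖^2 into the HJB equation reduces it to ρ^2 k^2 + γ q k - pq = 0, whose positive root is the constant k given above. A short numerical verification of this identity, with illustrative parameter values, is sketched below.

import numpy as np

d, p, q, gamma, rho = 5, 1.0, 1.0, 1.0, 1.5          # illustrative values
k = (np.sqrt(q**2 * gamma**2 + 4 * p * q * rho**2) - gamma * q) / (2 * rho**2)

x = np.random.default_rng(0).normal(size=(1000, d))  # random interior points
V = k * np.sum(x**2, axis=1)                         # candidate value function
gradV = 2 * k * x
trace_hess = 2 * k * d                               # Tr(d^2 V/dx^2) for V = k|x|^2
u_star = -rho * gradV / (2 * q)                      # minimizing control

residual = (trace_hess
            + rho * np.sum(u_star * gradV, axis=1) + q * np.sum(u_star**2, axis=1)
            + p * np.sum(x**2, axis=1) - 2 * k * d - gamma * V)
print(np.max(np.abs(residual)))                      # vanishes up to rounding error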
Then, the objective function for the UCCD problem is given by
J = w_p |ρ| + w_c∫_𝕏ω(x) V(x) dx
where w_p= w_c=1, and ω(x)≡ 1/|𝕏|.
fig:lqr_results shows results for d=5 (shown by blue) and 10 (shown by purple). The other parameters are given as p=q=γ=1, and R=2 and 4 for d=5 and 10, respectively.
The shape of the neural network is the same as in the above example, and the learning rates are 1e-4 and 1e-2 for θ and ρ, respectively, for both d=5 and d=10.
The weights in the loss (<ref>) are μ_h=μ_b=1.
The numbers of samples are N_h = N_b = N_r = 1000 for d=5, and N_h = N_b = N_r = 100,000 for d=10.
fig:lqr_rho shows the mean (solid lines) and standard deviation (shaded area) of the design parameter ρ in 10 repeated experiments, and it can be seen that ρ converges at around 120,000 epochs.
fig:lqr_loss shows the changes in the loss function L_PINN, and it is kept low for d=5 as shown by the blue line.
For d=10, purple shows the mean value of the ten experiments, and cyan shows the results when the loss function is the smallest among these experiments.
Although a relatively large variation was observed in the loss, the design parameter ρ converged to similar values in these 10 experiments.
§ CONCLUSIONS
This paper proposed a novel UCCD problem formulation to cope with uncertainties coming from noise acting on the dynamics and modeling errors.
The proposed method utilizes closed-loop solutions of an optimal control problem characterized by the Hamilton-Jacobi-Bellman equation as a PDE constraint in the UCCD problem.
A solution algorithm is developed based on Physics-informed Neural Networks (PINNs), and numerical examples show that simultaneous update of PINN weights and the design parameters effectively works for solving UCCD problems.
Future directions of this work include extension of the proposed algorithm to address problems where the optimal control is obtained only in an implicit form.
A possible approach for this task is to use a reinforcement learning framework to obtain an optimal policy.
Another future direction is to use the proposed method in practical applications, e.g., in robotics and energy systems, as mentioned in sec:introduction.
§ ACKNOWLEDGMENT
This work was supported in part by JST, ACT-X Grant Number JPMJAX210M, Japan, and the Kansai Research Foundation for Technology Promotion, Japan.
|
http://arxiv.org/abs/2409.02831v1 | 20240904155506 | LIGO Detector Characterization in the first half of the fourth Observing run | [
"S. Soni",
"B. K. Berger",
"D. Davis",
"F. Di. Renzo",
"A. Effler",
"T. A. Ferreira",
"J. Glanzer",
"E. Goetz",
"G. González",
"A. Helmling-Cornell",
"B. Hughey",
"R. Huxford",
"B. Mannix",
"G. Mo",
"D. Nandi",
"A. Neunzert",
"S. Nichols",
"K. Pham",
"A. I. Renzini",
"R. M. S. Schofield",
"A Stuver",
"M. Trevor",
"S. Álvarez-López",
"R. Beda",
"C. P. L. Berry",
"S. Bhuiyan",
"R. Bruntz",
"N. Christensen",
"L. Blagg",
"M. Chan",
"P. Charlton",
"G. Connolly",
"R. Dhatri",
"J. Ding",
"V. Garg",
"K. Holley-Bockelmann",
"S. Hourihane",
"K. Jani",
"K. Janssens",
"S. Jarov",
"A. M. Knee",
"A. Lattal",
"Y. Lecoeuche",
"T. Littenberg",
"A. Liyanage",
"B. Lott",
"R. Macas",
"D. Malakar",
"K. McGowan",
"J. McIver",
"M. Millhouse",
"L. Nuttall",
"D. Nykamp",
"I. Ota",
"C. Rawcliffe",
"B. Scully",
"J. Tasson",
"A. Tejera",
"S. Thiele",
"R. Udall",
"C. Winborn",
"Z. Yarbrough",
"Z. Zhang",
"R. Abbott",
"I. Abouelfettouh",
"R. X. Adhikari",
"A. Ananyeva",
"S. Appert",
"K. Arai",
"N. Aritomi",
"S. M. Aston",
"M. Ball",
"S. W. Ballmer",
"D. Barker",
"L. Barsotti",
"J. Betzwieser",
"G. Billingsley",
"S. Biscans",
"N. Bode",
"E. Bonilla",
"V. Bossilkov",
"A. Branch",
"A. F. Brooks",
"D. D. Brown",
"J. Bryant",
"C. Cahillane",
"H. Cao",
"E. Capote",
"F. Clara",
"J. Collins",
"C. M. Compton",
"R. Cottingham",
"D. C. Coyne",
"R. Crouch",
"J. Csizmazia",
"T. J. Cullen",
"L. P. Dartez",
"N. Demos",
"E. Dohmen",
"J. C. Driggers",
"S. E. Dwyer",
"A. Ejlli",
"T. Etzel",
"M. Evans",
"J. Feicht",
"R. Frey",
"W. Frischhertz",
"P. Fritschel",
"V. V. Frolov",
"P. Fulda",
"M. Fyffe",
"D. Ganapathy",
"B. Gateley",
"J. A. Giaime",
"K. D. Giardina",
"R. Goetz",
"A. W. Goodwin-Jones",
"S. Gras",
"C. Gray",
"D. Griffith",
"H. Grote",
"T. Guidry",
"E. D. Hall",
"J. Hanks",
"J. Hanson",
"M. C. Heintze",
"N. A. Holland",
"D. Hoyland",
"H. Y. Huang",
"Y. Inoue",
"A. L. James",
"A. Jennings",
"W. Jia",
"S. Karat",
"S. Karki",
"M. Kasprzack",
"K. Kawabe",
"N. Kijbunchoo",
"P. J. King",
"J. S. Kissel",
"K. Komori",
"A. Kontos",
"Rahul Kumar",
"K. Kuns",
"M. Landry",
"B. Lantz",
"M. Laxen",
"K. Lee",
"M. Lesovsky",
"F. Llamas",
"M. Lormand",
"H. A. Loughlin",
"R. Macas",
"M. MacInnis",
"C. N. Makarem",
"G. L. Mansell",
"R. M. Martin",
"K. Mason",
"F. Matichard",
"N. Mavalvala",
"N. Maxwell",
"G. McCarrol",
"R. McCarthy",
"D. E. McClelland",
"S. McCormick",
"L. McCuller",
"T. McRae",
"F. Mera",
"E. L. Merilh",
"F. Meylahn",
"R. Mittleman",
"D. Moraru",
"G. Moreno",
"A. Mullavey",
"M. Nakano",
"T. J. N. Nelson",
"J. Notte",
"J. Oberling",
"T. O'Hanlon",
"C. Osthelder",
"D. J. Ottaway",
"H. Overmier",
"W. Parker",
"A. Pele",
"H. Pham",
"M. Pirello",
"V. Quetschke",
"K. E. Ramirez",
"J. Reyes",
"J. W. Richardson",
"M. Robinson",
"J. G. Rollins",
"C. L. Romel",
"J. H. Romie",
"M. P. Ross",
"K. Ryan",
"T. Sadecki",
"A. Sanchez",
"E. J. Sanchez",
"L. E. Sanchez",
"R. L. Savage",
"D. Schaetzl",
"M. G. Schiworski",
"R. Schnabel",
"E. Schwartz",
"D. Sellers",
"T. Shaffer",
"R. W. Short",
"D. Sigg",
"B. J. J. Slagmolen",
"C. Soike",
"V. Srivastava",
"L. Sun",
"D. B. Tanner",
"M. Thomas",
"P. Thomas",
"K. A. Thorne",
"C. I. Torrie",
"G. Traylor",
"A. S. Ubhi",
"G. Vajente",
"J. Vanosky",
"A. Vecchio",
"P. J. Veitch",
"A. M. Vibhute",
"E. R. G. von Reis",
"J. Warner",
"B. Weaver",
"R. Weiss",
"C. Whittle",
"B. Willke",
"C. C. Wipf",
"V. A. Xu",
"H. Yamamoto",
"L. Zhang",
"M. E. Zucker"
] | astro-ph.IM | [
"astro-ph.IM",
"gr-qc"
] |
CBs: cryo-manifold baffles
DetChar: Detector Characterization
PEM: Physical Environment and Monitoring
IFO: interferometer
LHO: LIGO Hanford
LLO: LIGO Livingston
GW: gravitational-wave
LIGO: Laser Interferometer Gravitational-Wave Observatory
BBH: binary black hole
O1: first observing run
O2: second observing run
DQR: Data Quality Report
O3: third observing run
O3a: first half of the third observing run
O3b: second half of the third observing run
O4: fourth observing run
BH: black hole
IMBH: intermediate-mass black hole
IMC: input mode cleaner
HEPI: hydraulic external pre-isolator
SNR: signal-to-noise ratio
BNS: binary neutron star
PSD: power spectral density
GR: general relativity
FAR: false-alarm rate
GCN: the Gamma-ray Coordinates Network
CBC: compact binary coalescence
VT: volume-time
ASD: amplitude spectral density
DAC: digital-to-analog
GWB: gravitational-wave background
DQ: data quality
RRT: rapid response team
GRB: gamma-ray burst
DARM: differential arm readout measurement
OSEM: optical shadow sensors and magnetic actuator
oplev: optical lever
ETM: end test mass
ETMY: end test mass at the Y-end
ETMX: end test mass at the X-end
AERM: annular end reaction mass
LSC: length sensing and control
PSL: pre-stabilized laser
FSS: frequency stabilization system
HAM3: third horizontal access module
L2: penultimate
L3: third
ESD: electrostatic drive
RC: reaction chain
ETG: event trigger generator
SQZ: squeezer
RMS: root-mean-square
RF: radio frequency
LED: light emitting diode
ASC: alignment sensing and control
TMS: transmission motor stage
OMC: output mode cleaner
DAQ: data acquisition
FFT: fast Fourier transform
PRC: Power Recycling Cavity
SRC: Signal Recycling Cavity
QPD: quadrant photo-diode
ACB: arm cavity baffle
CW: continuous gravitational-wave
gws: gravitational-wave strain
IGWN: International Gravitational-wave Network
LSC: LIGO Scientific Collaboration
LVK: LSC-Virgo-KAGRA Collaboration
h(t): gravitational-wave strain timeseries
GWOSC: Gravitational-wave Open Science Center
HVAC: heating, ventilation, and air conditioning
^1LIGO, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
^2Stanford University, Stanford, CA 94305, USA
^3LIGO, California Institute of Technology, Pasadena, CA 91125, USA
^4Université Lyon, Université Claude Bernard Lyon 1, CNRS, IP2I Lyon IN2P3, UMR 5822, F-69622 Villeurbanne, France
^5LIGO Livingston Observatory, Livingston, LA 70754, USA
^6Louisiana State University, Baton Rouge, LA 70803, USA
^8University of Oregon, Eugene, OR 97403, USA
^7University of British Columbia, Vancouver, BC V6T 1Z4, Canada
^9Embry-Riddle Aeronautical University, Prescott, AZ 86301, USA
^10The Pennsylvania State University, University Park, PA 16802, USA
^11LIGO Hanford Observatory, Richland, WA 99352, USA
^12School of Physics and Astronomy, University of Minnesota, 55455 MN, USA
^13Dipartimento di Fisica “G. Occhialini”, Università degli Studi di Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy
^14Villanova University, 800 Lancaster Ave, Villanova, PA 19085, USA
^15University of Maryland, College Park, MD 20742, USA
^16Syracuse University, Syracuse, NY 13244, USA
^17Max Planck Institute for Gravitational Physics (Albert Einstein Institute), D-30167 Hannover, Germany
^18Leibniz Universität Hannover, D-30167 Hannover, Germany
^19OzGrav, University of Adelaide, Adelaide, South Australia 5005, Australia
^20University of Birmingham, Birmingham B15 2TT, United Kingdom
^21University of California, Riverside, Riverside, CA 92521, USA
^22Cardiff University, Cardiff CF24 3AA, United Kingdom
^23University of Florida, Gainesville, FL 32611, USA
^24OzGrav, University of Western Australia, Crawley, Western Australia 6009, Australia
^25Vrije Universiteit Amsterdam, 1081 HV, Amsterdam, Netherlands
^26National Central University, Taoyuan City 320317, Taiwan
^27Missouri University of Science and Technology, Rolla, MO 65409, USA
^28Bard College, Annandale-On-Hudson, NY 12504, USA
^29Sungkyunkwan University, Seoul 03063, Republic of Korea
^30The University of Texas Rio Grande Valley, Brownsville, TX 78520, USA
^31Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth, PO1 3FX, UK
^32Montclair State University, Montclair, NJ 07043, USA
^33OzGrav, Australian National University, Canberra, Australian Capital Territory 0200, Australia
^34University of Washington, Seattle, WA 98195, USA
^35Universität Hamburg, D-22761 Hamburg, Germany
^36Oberlin College, Oberlin, OH 44074, USA.
^37Durham University, South Road, Durham DH1 3LE, UK
^38University College London, Gower St, London WC1E 6BT, United Kingdom
^39OzGrav, Charles Sturt University, Wagga Wagga, New South Wales 2678, Australia
^40Universiteit Antwerpen, Prinsstraat 13, 2000 Antwerpen, Belgium
^41Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS, Artemis, 06304 Nice, France
^42Carleton College, Northfield, MN 55057, USA
^43Christopher Newport University, Newport News, VA 23606, USA
^44NASA Marshall Space Flight Center, Huntsville, Alabama 35811, USA
^45School of Physics, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
^46 SUPA, School of Physics and Astronomy, University of Glasgow, University Ave, Glasgow G12 8QQ, United Kingdom
^47 Vanderbilt University, Department of Physics and Astronomy, 6301 Stevenson Science Center, Nashville, TN 37212, USA
^48Fisk University, Department of Life and Physical Sciences, 1000 17th Avenue N. Nashville, TN 37208, USA
§ ABSTRACT
Progress in gravitational-wave astronomy depends upon having sensitive detectors with good data quality. Since the end of the LIGO-Virgo-KAGRA third Observing run in March 2020, detector-characterization efforts have led to increased sensitivity of the detectors, swifter validation of gravitational-wave candidates and improved tools used for data-quality products. In this article, we discuss these efforts in detail and their impact on our ability to detect and study gravitational waves. These include the multiple instrumental investigations that led to reduction in transient noise, along with the work to improve software tools used to examine the detectors' data quality. We end with a brief discussion on the role and requirements of detector characterization as the sensitivity of our detectors further improves in future Observing runs.
§ INTRODUCTION
The LIGO <cit.> and Virgo <cit.> detectors started the era of GW astronomy when the first GW signal from the merger of two black holes was detected in 2015 <cit.>. Since then, the LIGO, Virgo and KAGRA collaborations have published 90 probable detections of signals involving black holes and neutron stars, including the spectacular multi-messenger discovery in August 2017 <cit.>. The data used for these analyses is publicly available in the Gravitational Wave Open Science Center (GWOSC)
[https://gwosc.org].
The LIGO and KAGRA detectors started taking data in the fourth Observing run (O4) on May 24, 2023. The first part of the run, O4a, finished on January 16, 2024. The LIGO and Virgo detectors started taking data again on April 10, 2024 (O4b). In O4a, 92 significant detection candidates were shared as public alerts, including 11 retracted for data-quality problems. The 81 significant detections are almost as many as the 90 detections published in the GWTC-3 catalog <cit.> from the previous three Observing runs.
As shown in O3bO4aASDs_fig, the broadband sensitivity of the LIGO detectors in O4a was significantly better than in O3b. Some key upgrades commissioned between O3 and O4 include frequency-dependent squeezing <cit.>, replacement of some optics that had small defects (point absorbers) allowing for increased laser power, and replacing squeezing subsystem Faraday isolators with more efficient ones to increase the level of squeezing achieved <cit.>.
As a measure of sensitivity, we can calculate the distance to which the noise would allow a detection of a binary neutron star system (1.4 M_⊙ each) with a signal-to-noise ratio of 8; this is often called the “binary neutron star inspiral range" or “BNS range". The spectra shown in O3bO4aASDs_fig correspond to dates in O3b (H1 March 19, 2020, and L1 January 4, 2020) with BNS ranges 112 Mpc and 134 Mpc respectively; and dates in O4a (H1 December 12 2023, and L1 December 31 2023) with BNS ranges 160 Mpc and 158 Mpc respectively.
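As an illustration of how such a figure of merit can be estimated, the sketch below computes a BNS inspiral range from a one-sided amplitude spectral density using the standard stationary-phase inspiral amplitude and the conventional ≈2.26 angle-averaging factor. The file name and frequency limits are placeholders, and this simplified calculation is only indicative; the ranges quoted above come from the full calibrated detector data.

import numpy as np

G, c, M_sun, Mpc = 6.674e-11, 2.998e8, 1.989e30, 3.086e22

def bns_range_mpc(freq, asd, m1=1.4, m2=1.4, snr_thresh=8.0, f_low=10.0):
    # Angle-averaged BNS inspiral range [Mpc] from a one-sided ASD [1/sqrt(Hz)].
    mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2 * M_sun          # chirp mass
    f_isco = c**3 / (6**1.5 * np.pi * G * (m1 + m2) * M_sun)  # inspiral cutoff
    band = (freq >= f_low) & (freq <= f_isco)
    f, S = freq[band], asd[band] ** 2
    # |h(f)| at 1 Mpc for an optimally oriented inspiral (stationary-phase approximation)
    amp = (np.sqrt(5.0 / 24.0) * np.pi ** (-2.0 / 3.0)
           * c * (G * mc / c**3) ** (5.0 / 6.0) * f ** (-7.0 / 6.0) / Mpc)
    snr_at_1mpc = np.sqrt(4.0 * np.trapz(amp**2 / S, f))
    horizon = snr_at_1mpc / snr_thresh                        # optimal-orientation distance [Mpc]
    return horizon / 2.2627                                   # average over sky location and inclination

# freq, asd = np.loadtxt("H1_asd.txt", unpack=True)           # hypothetical ASD file
# print(bns_range_mpc(freq, asd))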
At the junction between instrument scientists and data analysis groups, the DetChar group works to understand instrumental noise and improve data-quality <cit.>. The group carries out instrumental investigations in collaboration with instrument scientists at the observatory sites, striving to understand and mitigate sources of noise that are not inherent to the detector design and thereby to improve detector performance. The DetChar group also provides data-quality information for GW searches, which is crucial for searches to avoid or remove noise artifacts, and to evaluate and validate gravitational-wave signals.
The DetChar group carries out a variety of regular activities to monitor noise and provide data-quality information to searches. The LIGO DetChar summary pages <cit.> provide detector-site and off-site scientists, technicians, and staff an overview of the performance of the LIGO detectors and environmental monitors through a series of regularly updated webpages. The summary pages centralize plots, figures of merit, and links to additional resources for analyses of the GW strain data and various detector subsystems. During Observing runs, members of the DetChar group participate in data-quality shifts, which utilize the summary pages to closely monitor the performance of the detector. Members also engage in event validation shifts in order to assess any data-quality issues near and during the time of GW signals <cit.>. Finally, the DetChar group works together with GW analysis efforts to construct data-quality products in order to avoid noise contamination in GW analyses (see data_qual.)
Data recorded by the LIGO detectors can be characterized as generally Gaussian and stationary with non-Gaussian noise appearing in several forms: 1) short-duration artifacts, often referred to as “glitches”, which are typically broadband in nature; 2) persistent or slowly time-varying, broadband artifacts; 3) persistent or slowly time-varying, narrowband artifacts, often referred to as “lines” <cit.>. Typically, glitches impact transient GW searches and parameter estimation of candidate transient GW events but do not strongly impact searches for persistent GW. Similarly, lines impact persistent GW searches but do not strongly impact searches and parameter estimation for transient GW. Persistent broadband artifacts impact all types of searches because the artifacts elevate the noise background.
Glitches and lines can be further categorized by the morphology of the artifact. Many common glitch classes have been named (e.g., Tomte, Fast Scattering, Low Frequency Burst) based on a combination of their shape in spectrogram plots and what is known of their origin, see common-glitches <cit.>. Some glitch classes are more detrimental to transient gravitational-wave searches and CBC parameter estimation than others. Lines, by contrast, have fewer morphological distinctions, see fig:common-lines <cit.>. The primary distinction is between individual line artifacts and combs of lines. Combs arise when narrowband noise that couples into the gravitational-wave data is not purely sinusoidal, creating a series of peaks with common frequency spacing. Combs are particularly problematic because one source can impact multiple narrow frequency bands at the same time.
This paper describes the activities performed to characterize the strain data measured by the LIGO detectors, and investigations of instrumental and environmental noise between the end of O3 and end of O4a.
In ins_invs, we describe the instrumental investigations. In event_valid we describe the activities to promptly validate gravitational-wave candidates, using tools to analyze the data-quality surrounding the event. In data_qual, we describe the use of data-quality products in searches of gravitational-waves of different kinds (compact binary coalescences, un-modelled transients, continuous waves and stochastic background). In summ_conc, we summarize the results, present conclusions and present prospects for the near future.
§ INSTRUMENTAL INVESTIGATIONS
Instrumental investigations carried out by the DetChar group are crucial for understanding the impact of various noise sources on detector data quality <cit.>. The PEM investigations are often carried out at the sites and require a strong co-ordination between the DetChar group and the instrument scientists at the site <cit.>. These investigations rely on vibration, acoustic, and magnetic injections, which enable us to estimate environmental couplings between different parts of the detector and the GW strain data. As explained in detail later in this section, some noise couplings can be reduced or eliminated thereby reducing the amount of noise in GW strain channel. In addition to artificially inducing environmental noise, we also routinely induce differential-arm displacements via the photon calibrator <cit.> to study coupling between the GW channel and interferometer auxiliary channels (see safety_section) and add simulated gravitational waveforms for end-to-end testing of analysis pipelines <cit.>.
In addition to studies using artificially induced environmental noise and signals, the DetChar group also analyzes instrument data in other ways to enhance our understanding of the detector. These studies usually use several DetChar tools for determining the coupling between the environment, auxiliary sensors, and the detector noise characteristics. For example, through these investigations we may find that ground motion at a specific location in the detector is more strongly correlated with the noise in the GW strain channel than at other locations. Such a hypothesis can then be tested using the PEM tests.
These investigations are central to solving problems that adversely impact the detector data quality and uptime and consequently reduce the number of gravitational-wave observations. The transient noise impacts the parameter estimation process and could generate false alerts, which have to be retracted later. In this section, we first give an overview of transient noise in O4a at both detectors, and then we discuss several DetChar investigations carried out between the end of O3 and the end of O4a.
§.§ Transient noise investigations at both sites
§.§.§ Transient Noise
Glitches are short-duration bursts of excess power with their origins in environmental and/or instrumental couplings. Omicron is an event trigger generator that is used to search for this excess power in h(t) and auxiliary channels <cit.>. The time, frequency, and SNR of short-duration transient noise can be visualized using “glitchgrams”. glitchgram shows an example of a glitchgram which is generated by plotting Omicron triggers.
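As an illustration, a glitchgram can be produced from a table of Omicron triggers roughly as follows; the array names stand in for the trigger columns and are hypothetical.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

def plot_glitchgram(gps_time, frequency, snr, snr_min=5.0):
    # Scatter Omicron triggers in time-frequency, colored by SNR.
    keep = snr >= snr_min
    fig, ax = plt.subplots()
    sc = ax.scatter(gps_time[keep], frequency[keep], c=snr[keep],
                    s=8, cmap="viridis", norm=LogNorm(vmin=snr_min, vmax=100))
    ax.set_yscale("log")
    ax.set_xlabel("Time [GPS s]")
    ax.set_ylabel("Frequency [Hz]")
    fig.colorbar(sc, label="Signal-to-noise ratio")
    return fig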
Different tools are used to investigate glitches, and one of them is Gravity Spy, which classifies noise transients by their morphologies in spectrograms <cit.>. common-glitches shows time-frequency spectrograms for six common glitch categories at the two sites <cit.>. Certain environmental or instrumental conditions can generate similar signals. Glitch classification enables the identification of patterns in the data for similar noise transients.
Examining the relationship between noise in the primary gravitational-wave channel and multiple auxiliary channels can potentially lead us to the source of the noise. Hveto is one of the main tools used to study the correlation between transient noise in the primary strain and auxiliary data <cit.>. Auxiliary channels that also witness the same noise are called witness channels.
The glitchgram shown in glitchgram is from a day when seismometers recorded elevated ground motion, which caused the excess transient noise at low frequencies around 10–20 Hz. Sometimes, this motion is so strong that the interferometer cannot maintain the servo-controlled resonance condition and stops observing <cit.>. This is referred to as losing lock of the interferometer.
Multiple variables affect the glitch rate in the detectors, including instrumental upgrades, addition of new subsystems, and environmental conditions such as wind, elevated ground motion, or the passing of trucks or trains near the site. glitch-rate shows the comparison of glitch rates between O3 and O4 for two different SNR thresholds. In O3, transient noise at both detectors was dominated by stray light. These couplings were greatly reduced during O3b and after O3. Further details are discussed in <cit.> and in llo_invs.
The O4a transient noise at LHO was dominated by low-SNR glitches, mostly in the 10–50 Hz band. Most of the LHO transient noise was of low SNR, as can be seen in glitch-rate and in glitch_rate_fig, which shows the daily glitch rate at LHO and LLO during O4a. These broadband transients had a common source and were mitigated during O4a. low_snr_glitches provides more details on this.
The transient noise at LLO was dominated by low-frequency ground motion that induces laser light scattering. Most of this ground motion can be attributed to the impact of atmosphere-driven ocean waves on the ocean surface, also known as microseism <cit.>. The microseismic motion is seasonal and is caused by intense ocean wave activity from winter storms. This is why we see an increase in the glitch rate during the latter half of O4a, as shown in glitch_rate_fig. slow_scatter_llo provides more detail on this noise.
§.§.§ Overview of scattering noise
Noise due to light scattering is a common problem at both detectors. Scattering happens when a small fraction of laser light gets reflected off a test mass, hits another moving surface in the vicinity, and then rejoins the main laser beam. This rejoining leads to the introduction of a time dependent phase modulation to the main laser beam <cit.>. The additional phase noise shows up as h_ph(f),
h_ph(f) = (K/2) (λ/(4π L)) ℱ[sin δϕ_sc]
where
ϕ(t) = ϕ_0 + δϕ_sc(t) = (4π/λ)[x_0 + δ x_sc(t)].
Here, K is the ratio of stray light amplitude to the amplitude of light in the main beam (usually unknown but small, of the order of 1e-9 in O4a microseismic scatter), ℱ indicates a Fourier transform, λ is laser wavelength (1064 nm) and L is the length of interferometer arms (4 km) <cit.>.
When the relative motion between the scattering surfaces is not small compared to the laser wavelength, we get fringe wrapping and the noise shows up as arches in time-frequency spectrograms (see common-glitches). Transient scattering noise can be classified into two categories, depending on the frequency of the ground motion that produces it: Slow Scattering and Fast Scattering <cit.>. Scattering is discussed in the following section, and again in llo_invs.
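As a simple illustration of fringe wrapping, for a scattering surface moving as δx_sc(t) = A sin(2π f_m t) the phase above implies an instantaneous fringe frequency of 2|δẋ_sc(t)|/λ, so the arches extend up to 4πA f_m/λ. The sketch below evaluates this for illustrative (assumed) values of the scatterer amplitude and motion frequency.

import numpy as np

lam = 1064e-9                        # laser wavelength [m]
A, f_m = 5e-6, 0.15                  # assumed scatterer amplitude [m] and motion frequency [Hz]

t = np.linspace(0, 60, 60 * 4096)
dx = A * np.sin(2 * np.pi * f_m * t)             # scatterer motion
fringe = np.sin(4 * np.pi * dx / lam)            # modulation term entering F[sin(delta phi)]

f_fringe = 2 * np.abs(np.gradient(dx, t)) / lam  # instantaneous arch frequency
print(f"maximum fringe frequency: {f_fringe.max():.1f} Hz "
      f"(predicted 4*pi*A*f_m/lambda = {4 * np.pi * A * f_m / lam:.1f} Hz)")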
§.§.§ Cryo-manifold baffle noise
Most of the optics in LIGO are housed in corner and end stations (X-arm and Y-arm).
During O3, vibrating CBs at the LIGO end stations were identified as a cause of light scattering noise <cit.>. These baffles prevent most light reflecting from the beamtube reduction flange, where the beamtube narrows at the corner and end stations, from reentering the detector arms <cit.>. Some light reflected from these baffles, however, still interferes with the circulating light in the main beam. When a mechanical resonance of the CB with a high quality factor was rung up from heightened ground motion, scattered light noise at about 4 Hz and harmonics were visible in h(t).
At both detectors, rubber viton dampers were installed on three of the four CBs to decrease the quality factor of the mechanical resonance prior to the start of O4a. Damping reduced the velocity of the reflecting surface and thus lowered the cutoff frequency of the scattering noise to low frequencies where the interferometer is not sensitive. The vibration coupling at the remaining undamped cryobaffles increased between O3 and O4 as the power in the arms increased. The difference in coupling between 60 W and 75 W at the undamped CBs was about a factor of 3 at LHO <cit.>.
In the O4a-O4b commissioning break an incursion was made into each detector's X-arm end station vacuum enclosure to damp the final cryo-manifold baffle <cit.>. After the viton damper installation, the quality factor of the ∼ 4 Hz mechanical mode in the CB dropped by a factor of ∼20, reducing the maximum frequency of the scattering noise by roughly the same amount <cit.>.
§.§.§ Safety studies
A crucial aspect of verifying gravitational-wave candidates is ensuring that they are not introduced by environmental noise sources observed in auxiliary channels. In general, however, the transfer functions between the h(t) (GW strain) channel and the thousands of auxiliary channels are unknown, so it is possible for real gravitational-wave events to produce signals in some of the auxiliary channels. If we were to use such a channel (an “unsafe” channel) to veto a gravitational-wave event, without knowledge of this coupling, we would in effect be using an astrophysical signal to veto itself.
To probe the “safety” of auxiliary channels for vetoing candidate events, we use the photon calibrator to inject sine-Gaussian waveforms into h(t) at each detector, mimicking a gravitational-wave signal. A safe auxiliary channel should not respond to these injected waveforms in h(t). Then, we perform a statistical analysis of the 𝒪(5000) auxiliary channels sampled above 16 Hz using the tool <cit.> to classify auxiliary channels as either “safe” (i.e., acceptable to use to veto potential gravitational-wave events) or “unsafe.” The resulting list of safe channels is then passed on to downstream data-quality analyses.
The injection and safety analysis process was repeated every few months in each detector during Engineering Run 15 (ER15) (April 26, 2023 - May 24, 2023) and O4a (May 24, 2023 - January 16, 2024) to track possible changes in safety and to ensure that any new channels were correctly classified as either safe or unsafe.
During the first set of analyses in ER15, a handful of LHO channels were found to be unsafe compared to the existing list from O3: the ESD voltage monitors at the X-arm end station, and a handful of suspension rack magnetometers.
We found no substantial changes in channel safety during the subsequent duration of ER15 and O4a.
§.§.§ Narrow spectral artifacts
A typical daily-averaged, high-resolution spectrum at either LHO or LLO reveals hundreds or thousands of narrow spectral artifacts (lines) within 10-2000 Hz, the band of particular interest to persistent gravitational-wave searches. These lines display a variety of amplitudes, widths, and shapes. Some are stable over long periods of time (weeks, months), while others are variable. Identifying and mitigating the most problematic lines requires both routine monitoring and focused investigations.
Since noise lines impact CW and stochastic gravitational-wave searches in slightly different ways, specific tools are used to evaluate these artifacts to aid the analyses. CW-focused line studies typically use a tool called <cit.>, while stochastic-focused studies use tools called <cit.> and <cit.>. These tools provide complementary information about lines, and their results are often used together to inform line investigations, mitigation efforts, and data-quality products.
Line investigations with Fscan
Fscan produces high-resolution spectra averaged over long periods of time. It was largely rewritten for O4: modernizing the code, improving the stability of data generation, making new data products and visualization tools available for analysis, and enabling production of custom spectra. In O4a, Fscan was used to generate daily, weekly, and monthly spectra for about 80 channels at each observatory site, using FFTs of 1800 s-long data segments. Additional analyses were performed to track lines of interest (determining witness channels and times at which the artifacts changed) using Fscan data.
In lho-combs, we highlight examples of successful investigation and mitigation efforts in O4a at LHO. Because LLO has generally cleaner data for persistent gravitational-wave searches, there have been fewer notable examples of mitigation. A number of high-priority narrow spectral noise artifacts, however, have not yet been mitigated, including artifacts present at both detector sites. Additional work is ongoing in this area. The highest-priority artifacts are those that contaminate a broad spectral region (i.e., combs, especially those with many visible peaks) and artifacts that are present at both sites (e.g., 60 Hz power mains) because these have a disproportionate impact on persistent gravitational-wave searches.
The highest line artifacts in O3bO4aASDs_fig have known causes, typically choices inherent to the detector design (mirror suspension resonances at various frequencies, strongest at 300 Hz and 500 Hz and their harmonics) or calibration and dither lines used to monitor or control interferometric cavities, respectively.
Monitoring strain-strain narrow-band coherence.
Stochastic searches rely on cross-correlating h(t) data from detector pairs, so understanding and monitoring potential noise sources that could detrimentally impact the cross-correlated data is crucial. A dedicated stochastic monitoring tool is designed to calculate strain–strain coherence in medium latency and to flag problematic frequency bins that have excess coherence. Bins are flagged when they pass a Gaussian coherence threshold of 1 - (1/N_f)^{1/(N_eff - 1)},
where N_f is the number of frequency bins, and N_eff is the effective number of segments used in the coherence calculation.
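A sketch of this flagging step, assuming a pre-computed coherence spectrum and effective segment count, is shown below.

import numpy as np

def flag_coherence_outliers(freq, coherence, n_eff):
    # Flag bins whose coherence exceeds the Gaussian threshold 1 - (1/N_f)**(1/(N_eff - 1)).
    n_f = len(freq)
    threshold = 1.0 - (1.0 / n_f) ** (1.0 / (n_eff - 1))
    outliers = coherence > threshold
    return threshold, freq[outliers]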
An example of the calculated coherence and its outliers is shown in fig:coherence_stochmon. The outlier bins shown have all been traced back to specific detector noise sources. The 505.75 Hz and 1496.2 Hz outliers lie within the first and third violin modes of the detectors, which are resonances of the detector’s mirror suspension fibers and are caused by changes in the fiber tension. The 24.5 Hz outlier corresponds to a calibration line that was turned on from July 25, 2023 to August 9, 2023 <cit.>. The 960 Hz outlier corresponds to one of the Duotone signals in the timing system <cit.>.
In practice, the investigation to determine the instrumental causes of the coherence outliers is crucial. These investigations are carried out with dedicated spectral monitoring tools. One of these is a stochastic monitoring tool that keeps track of daily and weekly coherence between h(t) and physical environmental channels at 0.1 Hz resolution. In terms of computing resources, this moderately low resolution of 0.1 Hz allows it to monitor a wide range of channels (about 1000 channels per observatory site). The high-resolution spectral information from Fscan and the auxiliary-channel coherence information from this tool provide complementary information to support the investigation of strain-strain coherence outliers.
§.§.§ Broadband persistent artifacts
Investigations of coherence noise
Correlated magnetic noise was investigated as a potential noise source for stochastic searches. During O4a, two sets of coordinated magnetic injections were performed to study the coherence of h(t) between the two sites in the presence of an increased correlated magnetic field, thereby determining the magnetic noise budget. The first set of injections lasted 5 minutes and was composed of broadband white noise to test the coordinated injection capability between the sites. The second set lasted ∼45 minutes and consisted of a realistic Schumann-like spectrum (see, e.g., <cit.> for a description of Schumann noise relevant for ground-based gravitational-wave detectors). fig:stochastic correlated injections shows the coherence spectra between the magnetometers as well as the coherence between h(t) at LHO and LLO during the second set of injections compared to a reference background time. Details on the estimated level of correlated magnetic noise will be provided in the O4 stochastic analyses release.
§.§ LIGO Hanford noise investigations
§.§.§ Electronics ground noise
During the commissioning period before O4, spectra of the variation in current flowing from building electronics ground to neutral earth were observed to be correlated with noise in h(t). A newly developed electronics ground injection system showed that noise in h(t) could be produced by injecting ∼100 mA currents onto the building electronics ground. The coupling is thought to be produced by fluctuations in the potential of the electronics ground system due to the variations in current flows across the finite resistance between electronics ground and true neutral earth, measured to be about 2 Ω at LHO <cit.>. Forces on the charged test mass may fluctuate with the potentials of nearby electronic systems that are referenced to the fluctuating electronics ground, such as the electrostatic drive and ring heaters.
The noise from electronics ground potential fluctuations was reduced in two ways. First, the resistance between certain electronics chassis and the building electronics ground was reduced in order to reduce the total resistance to neutral earth for those electronics. Second, the biases of the electrostatic drives were swept and set to values that minimized the coupling to h(t) of injections onto the electronics ground. It is thought that, at the coupling minimum, the forces on the charged test mass due to ground potential fluctuations are partially canceled out by an opposite dipole force associated with the bias-polarization of the test mass <cit.>. These mitigations resulted in range improvements of a few megaparsecs. Further mitigation could be obtained by shielding electronics inside the chamber from the test mass with shields connected to the chamber.
The LIGO test masses are held in place by a quadruple pendulum suspension <cit.>. The top three stages of the pendulum suspension employ magnetic actuators to hold the suspension fixed, while the test mass itself is held in place by four low-noise ESDs <cit.>. The test masses located at the ends of the interferometer arms require more actuation than the input test masses <cit.>.
A changing electrical environment local to the end test masses modulates the force applied by the ESDs to the test masses, introducing noise into the GW data. At LHO it was found that lowering the resistance on the grounding wires for electronics chassis used to control test mass motion made the electronics less sensitive to ground potential fluctuations <cit.>. Changing the grounding of controls chassis located at the end stations lowered the noise in h(t) overall and also reduced coherence between test mass motion and current to ground below 100 Hz <cit.>. Sensitivity to ground potential fluctuations was further reduced by selecting a bias voltage for the DC component of the ESDs which minimized coupling between h(t) and currents injected onto an electronics chassis at each end station <cit.>. The results of these two changes are illustrated in fig:elec_ii. The bias voltage that minimizes the coupling of currents to h(t) changes over time and continues to be tracked <cit.>.
A minimum noise setting for the Y-end ESD bias was identified at LLO before O4 <cit.>, and mid-run changes in the X-arm end-station ESD bias were found to reduce noise at ∼11 Hz and ∼60 Hz and at harmonics of these frequencies <cit.>.
§.§.§ Broadband transient noise
We noticed increased noise at low frequencies, which showed non-stationary behavior in the frequency band 10-50 Hz. A bicoherence analysis of h(t) noise with itself found that this noise was modulated by a low-frequency h(t) signal mostly around 2.6 Hz <cit.>. Most of the longitudinal drive control sent to the ESD was being applied in the band 1-3 Hz, which could have contributed to this increased noise. A new longitudinal control scheme that reduced the amount of control applied at these low frequencies was developed and implemented. This led to a significant improvement in h(t) noise, thereby reducing the non-stationary (glitch) behavior as well <cit.>.
§.§.§ Modifying input power
The amount of laser power input into the LIGO interferometers has increased in each Observing run. Increased power circulating in the arms improves the high-frequency sensitivity of the LIGO detectors by reducing the effect of quantum shot noise. In O4a, both LIGO detectors were slated to operate with 75 W of laser power sent into the IMC.
As input laser power was increased at both detectors prior to the fourth Observing run, vibration coupling also increased. This includes vibration coupling through both scattered light noise and input beam jitter noise. The dramatic increases in coupling have prompted a warning that vibration coupling may become increasingly problematic as power increases <cit.>. A possible explanation is that increased thermal distortion of the test mass surfaces around coating defects may increase scattered light and also reduces the symmetry of the arms, decreasing common mode rejection of input noise.
Due to duty cycle and control scheme concerns associated with high-power operation, the laser power sent to the IMC was reduced from 75 W to 60 W at LHO during O4a <cit.>. fig:pslccf shows the reduction in vibrational coupling between PEM sensors placed around the in-air optics table, where the input laser light is produced, and the apparent differential arm length.
§.§.§ Weekly magnetic monitoring
Regular measurements were taken before and during O4a to understand the potential for coupling between local magnetic fields and the interferometer <cit.>. Local magnetic fields were generated by running a current through large coils of wire placed near electronics racks used for interferometer controls as well as in the experiment hall. The response of each interferometer to the resulting magnetic fields was quantified using the network of PEM magnetometers set up around each observatory <cit.>. In order to vet GW candidates at kilohertz frequencies, such as transients from neutron star f-modes <cit.>, more computing space was allocated prior to O4a to store accelerometer and magnetometer data up to 8192 Hz. Nearly every week of O4a, a broadband magnetic field was injected at 1000-4096 Hz at 7 locations around LHO for 36 s at each coil to quantify magnetic coupling in the newly-monitored part of the kilohertz regime. These injections provoked a response in h(t) at the LIGO LHO corner station. fig:weeklymags shows h(t) response to several of these weekly injections compared to a reference background time. Weekly probes of the high frequency magnetic coupling could be used to more accurately estimate the total environmental contribution to a high-frequency GW candidate <cit.>. Broadband magnetic injections were also made over 10-100 Hz and 100-1000 Hz from these 7 coils as part of the weekly injection campaign.
During the course of O4a, the magnetic coupling of the detector at kilohertz frequencies fluctuated from week to week. At the beginning of O4a, the LHO detector was somewhat sensitive to large external magnetic fields applied at the Corner Station. Towards the middle of O4a, the detector's response to these applied magnetic fields increased, before dropping to being only weakly coupled towards the end of O4a. The most likely mechanism for the observed magnetic coupling in the kilohertz regime is magnetic interference with cables that control the LIGO suspensions and optics. Specific mid-run electronics configuration changes which affected the degree of magnetic coupling, such as cables being rerouted, have not yet been identified.
§.§.§ Cosmic ray glitch correlations
A high energy cosmic ray shower is a potential mechanism for creating glitches via momentum transfer to, heating of, or changing the electric potential near the test masses <cit.>. Cosmic rays are monitored by four photomultiplier tubes placed beneath the vacuum chamber which houses the X-arm input test mass at LHO. During O2 and O3, we compared the time difference between cosmic ray arrival times and blip glitches at LHO and found no evidence of a correlation <cit.>. We expanded this search in O4a to include blips, low-frequency blips, repeating blips, and tomtes as identified by Gravity Spy <cit.>. A description of the cosmic ray sensor systematics in O4a can be found in <cit.>. No temporal correlation was found between cosmic rays and any of these glitch classes. Additionally, we found that the amplitudes of cosmic rays which struck LHO within a second of a glitch were consistent with the overall amplitude distribution of cosmic rays witnessed by the cosmic ray detectors installed at LHO.
§.§.§ Scattering noise from the input arm
In O4a, short shutdowns of the LHO building HVAC system produced several percent increases in astrophysical range <cit.>. Localized vibration injections indicated that the coupling of the HVAC vibrations at the corner station was in the input arm of the interferometer <cit.>. The coupling was further localized, using laser vibrometry, to particular baffles in the input arm <cit.>. During the O4a-O4b break, the baffles were moved and damped, and new baffles added, greatly reducing the vibration coupling in the input arm <cit.>.
§.§.§ Comb investigations
During O4a, two sources of comb artifacts were identified and mitigated at LHO. The first source was in the electronics driving a mirror heating element. This created a comb of approximately 1.6611 Hz (though different mitigation efforts caused changes in the spacing) centered around 280 Hz. A time-correlation was found between changing electronic settings for the mirror heating element and variations in the 1.6611 Hz comb amplitude, which suggested a possible source for the comb <cit.>. This insight motivated subsequent mitigation efforts that more clearly identified the problem. Electrical connections were changed to stop the 1.6611 Hz comb from being created <cit.>.
The second source created a near-1 Hz comb, as well as a near-5 Hz and near-7 Hz comb at various times. All three combs were traced to Hartmann wavefront sensors (HWS), which are part of the interferometer ASC subsystem <cit.>. It was determined that the comb frequency spacing changed when the HWS camera shutter frequency setting was changed <cit.>. The low amplitude of the comb makes observing this artifact in short stretches of data (less than ∼1 day) challenging, and thus mitigation efforts more time-consuming. Once the connection between changes in the comb spacing and changes in HWS hardware settings was established, efforts to change the hardware configuration while in observing mode helped to mitigate this comb <cit.>.
§.§ LIGO Livingston noise investigations
§.§.§ Slow scattering
Noise due to high ground motion in the band 0.1–0.5 Hz was the most dominant source of glitches in the LLO data during O4a. These glitches, also known as Slow Scattering, adversely impacted the strain sensitivity mostly in the 10–50 Hz band. fig:gm_and_glitch_rate shows the glitch rate and ground motion for three days. For two of these days, ground motion in the band 0.1–1 Hz was high, which led to a high rate of Slow Scattering glitches in the data.
The additional phase noise as given by eq2 shows up as arches in the time-frequency spectrogram as shown in the middle plots in figure <ref>. The time separation between subsequent scattering arches gives a direct measure of the frequency with which the scattering surface is moving. During O4a, we have seen that the frequency of the scattering surface is not constant because it moves at whatever frequency is dominant in the ground motion <cit.>. We have not yet found any optics which have enough velocity to create noise above 10 Hz in h(t).
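For a single-bounce scattering path, the extra phase noise corresponds to a fringe frequency of roughly f_fringe(t) = 2|v(t)|/λ, where v is the velocity of the scattering surface and λ the laser wavelength. The short sketch below (not LIGO code; the toy displacement and sampling rate are illustrative) applies this relation to ask whether a candidate optic moves fast enough to produce noise above 10 Hz in h(t):

import numpy as np

LAMBDA_LASER = 1.064e-6   # m, main laser wavelength

def max_fringe_frequency(displacement, fs):
    """Maximum fringe frequency 2|v|/lambda for surface motion sampled at fs (Hz)."""
    velocity = np.gradient(displacement, 1.0 / fs)        # m/s
    return 2.0 * np.max(np.abs(velocity)) / LAMBDA_LASER

# toy example: a surface swinging at 0.2 Hz with 1 micron amplitude
fs = 256.0
t = np.arange(0.0, 60.0, 1.0 / fs)
x = 1e-6 * np.sin(2.0 * np.pi * 0.2 * t)
print(max_fringe_frequency(x, fs))        # about 2.4 Hz, i.e. well below 10 Hz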
§.§.§ Fast scattering
During O3, Fast Scattering glitches were the most common glitch source at LLO, making up about 27% of all glitches with a confidence of 90%, according to Gravity Spy <cit.>. Fast scattering is typically found to be correlated with ground motion in the microseismic band 0.1–0.3 Hz, and the anthropogenic band 1–6 Hz. Fast scattering arches are short in duration, shown in Figure <ref>, and impact the detector sensitivity in the 10–100 Hz frequency range. Trains, logging, construction, and other human activity were the main sources of Fast Scattering, as the anthropogenic motion upconverts to higher frequency <cit.>.
In O3, trains near the LLO Y-arm end station produced low-frequency seismic noise that would upconvert into the gravitational-wave sensitive frequency band. For this reason, they provided an avenue to study how periods of large ground motion impacted the detector. Spectrograms of the ground motion revealed many harmonic lines with changing frequency, and short bursts of increased amplitude in the strain data. The suspicion was that each burst was produced by the low-frequency ground motion exciting mechanical resonances of some scattering surface. Two methods, Lasso regression <cit.> and Spearman correlation <cit.>, were employed to identify which narrow-band seismic frequencies contributed the most to increased detector noise. Both methods consistently pointed to ground motion in the 1.8–2.2 Hz range as the primary factor correlating with heightened strain noise at the corner station <cit.>. The subsequent mitigation of noise from these frequencies for O4 is discussed in acbres.
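As an illustration of this kind of analysis, the sketch below correlates band-limited ground-motion proxies with a strain-noise proxy using both Spearman correlation and Lasso regression; the data, frequency bands, and regularization strength are placeholders rather than the production summary-page tooling:

import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
bands = [(1.0, 1.4), (1.4, 1.8), (1.8, 2.2), (2.2, 2.6)]          # Hz
band_rms = rng.lognormal(size=(500, len(bands)))                   # stand-in seismic band RMS
strain_noise = 0.8 * band_rms[:, 2] + 0.1 * rng.normal(size=500)   # toy strain-noise proxy

# Spearman: rank correlation of each band with the strain-noise proxy
for band, col in zip(bands, band_rms.T):
    rho, p = spearmanr(col, strain_noise)
    print(f"{band[0]:.1f}-{band[1]:.1f} Hz: rho = {rho:.2f}, p = {p:.1e}")

# Lasso: sparse linear fit that keeps only the most predictive bands
X = StandardScaler().fit_transform(band_rms)
coefs = Lasso(alpha=0.05).fit(X, strain_noise).coef_
print("Lasso coefficients per band:", np.round(coefs, 2))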
From roughly June 2023 through August 2023, there was a significant amount of logging occurring near LLO <cit.>. The anthropogenic ground motion in the vertical direction near the corner station consistently reached amplitudes greater than 1000 nm/s, as shown in Figure <ref>. These high ground motion levels caused the detector to lose lock multiple times for many hours during the daytime. After August 2023, the logging activity ceased, significantly reducing the disruptive ground motion near the corner station.
§.§.§ Arm cavity baffle resonances
Arm cavity baffles (ACBs) <cit.> are located at each of the test masses, attached to the first stage of the active seismic isolation system (HEPI <cit.>), and are used to catch the light from wide-angle scattering. ACB resonances are sensitive to the physical state of the system, and changes to it can lead to shifts in the resonant frequencies <cit.>. After O3, but before O4, at the corner and Y-arm end station, the ACB had a high-quality-factor resonance at around 1.6 Hz <cit.>. When rung up, noise appears in the gravitational-wave data from around 20–100 Hz. In the absence of high microseismic ground motion, 1.6 Hz motion would create scattering noise at 3.2 Hz. After O3 ended, 3.3 Hz scattering noise was observed that had not been seen before; this can be explained by ACB resonances at 1.6 Hz.
We suspect that during O3, the ACB resonance was around 2 Hz, which would produce the common 4 Hz Fast Scattering observed. In late 2022, the ACB resonances at the corner and Y-arm end station were mechanically damped. As a result, the rate of Fast Scattering decreased dramatically, and it was subsequently found that this noise coupling was no longer present <cit.>. The effect of this remediation can be seen in the sweep injections performed in July 2022 (see acb-sweep-test) and again in February 2023 <cit.>.
Comparisons of the impact of logging during O3 and O4 showed that for similar ground motion amplitudes, the rate of transients was significantly reduced by a factor of about 50 <cit.>. In O3, anthropogenic ground motion at such high levels would have produced many Omicron glitches (assuming lock is not lost). Due to the damping of the arm cavity baffles, we did not observe significant strain noise due to the logging activities in O4 <cit.>.
§.§.§ Binary neutron star range oscillations
During O4a, from time to time, the observed BNS range exhibited oscillations with a period of about 30 minutes and a range variation of about 5-15 Mpc, lasting for all or part of a day. These variations can be seen in fig:range_osc. The range variations are the result of broadband excess noise in h(t).
Searches for the cause of these oscillations identified accelerometers that seemed to witness motion that aligned with the oscillations <cit.>. There is a line at around 30 Hz produced by the HVAC system, and initial investigations hypothesized that this line changing amplitude could be responsible for the observed BNS range oscillations. To check, a shaker injection at 30.5 Hz was performed on the vacuum enclosure of the chamber containing the X-arm end test mass. The 30.5 Hz shaker injection could not re-create the broadband effect we observe in h(t) <cit.>.
The hunt for what may be causing these oscillations was continued by analyzing the output of the summary page tool Lasso. As described in <cit.>, Lasso can produce overlays of the BNS range with auxiliary channels, allowing identification of channels with similar periodicity to the range variations.
However, range variations often include secular trends combined with the oscillations, such that the channels found by Lasso often fail to align with the period of the oscillations and are thus unlikely to yield information about their cause.
The channels that seemed to correlate the most, according to the Lasso algorithm, were primarily located at the X-arm end station and were related to temperature sensors. Additional days with similar leading channels were found – along with channels not explicitly measuring temperature but sensitive to it <cit.>. Additional investigations have mentioned issues with temperature control <cit.>. While the evidence seems to implicate temperature effects at the X-arm end station, a deeper study during all of O4a could yield the actual cause, as the oscillations are still present in O4b. The coupling mechanism between the thermal variations and h(t) is not currently known.
§.§.§ 84 Hz h(t) noise
During ER15, which immediately preceded O4a, excess noise around 84 Hz in h(t) data was present. The noise would appear and disappear, suggesting that its cause might be related to dehumidifiers and fans which turn off and on.
This noise was present at both end stations, but was louder at the Y-arm end station. Analysis of spectrograms and outputs of the channels that monitor the dehumidifiers and fans pointed towards the 84 Hz source being related to two exhaust fans located at the Y-arm end station <cit.>. To further confirm that this excess noise is coming from the Y-arm end station, broadband acoustic injections were done at both the X-arm end station and the Y-arm end station. These injections revealed a sharp mechanical resonance at around 84 Hz at the Y-arm end station <cit.>. The exact coupling mechanism of the fans into h(t) is still unknown, but the source was removed by moving the fans off the beam tube enclosure doors <cit.>.
§ EVENT VALIDATION IN O4
Validation of GW candidates is a crucial step that enhances our confidence in the astrophysical origin of the candidate events and the reliability of source parameter-estimation results. Event Validation refers to the process of checking for the presence of any data-quality issues surrounding the time of an event and conveying this information to the relevant data analysis groups in the collaboration.
These assessments build upon the initial vetting conducted by the Rapid Response Team (RRT), a joint LVK working group tasked with promptly responding to event candidate alerts.
The RRT conducts a series of prescribed data-quality checks, informed by the DQR (see dqr), and verifies the overall functioning of the low-latency pipeline infrastructure.
This team provides round the clock coverage, comprising rotating on-shift scientists and experts from the various areas of detector operations, including DetChar. The prompt assessment, made after the alert for a significant event is produced by online searches, is then reconsidered more thoroughly and in more detail by the Event Validation task force to determine a final evaluation of the data-quality for every event candidate.
Multiple checks are performed on the data, e.g., making sure the detector is operating in a nominal data-taking configuration, identifying any noise artifacts that could bias source property estimation, and checking for any inconsistencies in the output of the PEM sensors as that may suggest noise coupling between the environment and strain data.
If the data-quality around the candidate event is found to be unsatisfactory, further data processing techniques such as Bayesian noise inference and transient noise removal may be applied <cit.>. For example, BayesWave data cleaning and linear noise subtraction were applied to a total of 17 events during O3. The catalog papers discuss these events and techniques in more detail <cit.>.
There have been a number of changes and improvements in the event validation procedure since the last observation run. These changes include:
* An LVK event validation roster for all active detectors: O4a event validation infrastructure is designed to take information from LIGO, Virgo and KAGRA interferometers. This centralization has reduced the person power and time required to perform event validation compared to past Observing runs, during which the LIGO and Virgo Collaborations conducted validation of their detector data separately <cit.>. Additionally, the unified framework has ensured more standardization and uniformity in the procedures for evaluating data-quality and the tools utilized for the assessment.
* More automated event validation software infrastructure: The event validation infrastructure in O4 is centralized and is maintained using git on the event validation website, accessible to all LVK members.
This centralized infrastructure allows easier information flow from the DQR to noise mitigation teams and other data analysis groups. The event validation website acts as a repository of all the details related to event validation, including a list of events, the contact information of the volunteers and RRT experts, and links to each event's DQR report and event validation form. For each event, a Gitlab issue page is created where any additional details regarding the event's data-quality can be discussed. This page is also linked from the Event validation website.
* More automated DQR infrastructure: The DQR is a DetChar tool used to assess the data-quality surrounding an event time, as detailed in dqr.
The O4 version of the DQR has undergone significant upgrades compared to O3, incorporating automated checks that offer insights into the interferometer's state and data-quality around the event time.
The results of these checks are displayed in the form of labels (Pass, DQ issue, or Task fail), indicating whether a particular data-quality check has been successful or not.
The event validation volunteers in O4 have made extensive use of these results for validation purposes.
§.§ Data Quality Report
As mentioned, the primary tool used in event validation is the DQR <cit.>. In O3, similar DQR toolkits were separately used by the LSC <cit.> and Virgo Collaboration <cit.> to evaluate candidates from the GWTC catalog.
For O4, we used the experience gained from O3 to improve the DQR, with a focus on improving the speed and robustness of analyses, increasing the fraction of analyses that were automated, and generalizing the software to support analysis of data from all ground-based gravitational-wave detectors.
One key upgrade was the use of a p-value to automatically identify data-quality issues that could impact the detection or analysis of gravitational-wave candidates.
Additional details about the DQR architecture used in O4a can be found in <cit.>.
A wide variety of different analyses were used as part of the DQR framework during O4a.
Tests that were used to analyze LHO and LLO data include:
estimates of noise contributions from the observatory environment <cit.>,
statistical correlations between strain data and auxiliary-channel information <cit.>,
predictions of the presence of glitches using only auxiliary information <cit.>,
analytic identification of excess power in spectrograms of the strain data <cit.>,
machine-learning image classification of spectrograms of the candidate <cit.>,
quantitative estimates of the data stationarity <cit.>,
estimates of the Rayleigh statistic of the data,
measurements of the local glitch rate <cit.>,
and monitors of the detector range at the time of the candidate.
These tasks were completed on two different timescales.
Most tasks were completed within 5 minutes of a DQR being launched, allowing these tasks to be used as part of the initial rapid response to identified gravitational-wave signals.
Additional tasks were available within a few hours to help with additional offline event validation of each candidate.
A key feature of this updated DQR was the ability to automatically flag DQ issues in the candidate events identified by tasks based on the reported p-value.
Candidates with DQ issues reported by the DQR underwent additional scrutiny as part of the rapid response to gravitational-wave candidates in O4a.
No additional human follow-up of the candidate data-quality was completed in low latency when no DQ issue was identified.
All candidates were further analyzed offline, however, regardless of the conclusion reached in low latency.
We found that the choice of p-value threshold strongly impacted the rate of false alarms from the DQR.
The p-value threshold chosen to identify a data-quality issue was changed partway through the run for this reason;
at the start of O4a, a threshold of 0.1 was used, but was eventually changed to 0.05.
This lower threshold reduced the rate of false alarms with minimal reduction in the true alarm rate.
We also changed the set of tasks used throughout the run to improve the true alarm rate and reduce the false alarm rate.
Using a single p-value threshold was also suboptimal, as the exact definition of the reported p-value varied between tasks.
This led to tasks either overreporting or underreporting the presence of DQ issues.
This limitation has been addressed for O4b by introducing task-specific thresholds that are informed by our O4a experience.
§.§ Event Validation procedure
The Event Validation workflow is shown in fig <ref>.
For each event, the volunteer assigned to the week-long validation shift is immediately notified. They receive all necessary information about the event, as well as instructions on how to validate it. This includes links to the Event Validation form, the DQR and the GraceDB (Gravitational-Wave Candidate Event Database, https://gracedb.ligo.org) page for the event. In the event validation form, the validator can fill in the event details for each detector. These details include the “validation conclusion for the detector”; the options are “Not Observing”, “No Data Quality Issues”, and “Data Quality Issues”. In case the validator finds DQ (Data Quality) issues, they can enter the noise duration and frequency bandwidth in the “Noise Box” fields for time and frequency.
This information is then used as input for glitch subtraction before parameter-estimation analysis, as detailed in the next paragraphs.
The validation form also has an option to write any important details about the event in the “Notes” tab. The Gitlab page for each event allows for any further data-quality related discussion.
Once the validator is satisfied with their findings, they can submit the validation form. This validation conclusion is then passed to the noise mitigation review team.
The noise mitigation team is responsible for assessing whether any excess power within the target time-frequency analysis window of any candidate is sufficiently non-Gaussian to require further action <cit.>. We do this by comparing the PSD noise variance in each identified time-frequency region and checking that it is consistent with Gaussian noise. For regions which are not consistent with Gaussian noise (p < 0.01), there are two options available. If the noise is sufficiently isolated in time and frequency, the noise transient can be subtracted from the data. All noise-subtracted data in O4a were produced by the BayesWave algorithm <cit.>. The procedure is described in the Appendix of Ref. <cit.>.
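One plausible form of such a check, shown purely as an illustration (the statistic actually used by the noise mitigation team may differ), compares the summed spectrogram power of whitened data in the target time-frequency window with its chi-squared expectation under stationary Gaussian noise:

import numpy as np
from scipy import signal, stats

def window_pvalue(whitened, fs, t_range, f_range):
    """p-value for the power of whitened data in a time-frequency window under Gaussian noise."""
    f, t, Sxx = signal.spectrogram(whitened, fs=fs, nperseg=int(fs))
    in_t = (t >= t_range[0]) & (t <= t_range[1])
    in_f = (f >= f_range[0]) & (f <= f_range[1])
    inside = Sxx[np.ix_(in_f, in_t)]
    outside = Sxx[np.ix_(in_f, ~in_t)]
    # each spectrogram pixel of stationary Gaussian noise is roughly exponential,
    # so 2 * sum(power) / mean follows a chi-squared with 2N degrees of freedom
    stat = 2.0 * inside.sum() / outside.mean()
    return stats.chi2.sf(stat, df=2 * inside.size)

rng = np.random.default_rng(0)
data = rng.normal(size=4096 * 64)                                 # 64 s of white noise at 4096 Hz
print(window_pvalue(data, 4096.0, (20.0, 24.0), (30.0, 300.0)))   # not small for pure noise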
To assess the efficacy of the noise-subtracted data, we compare the Gaussianity of the noise-subtracted data within the targeted time–frequency window to Gaussian noise <cit.>. Noise-subtracted data consistent with Gaussian noise were deemed sufficiently stationary for parameter estimation. If the noise is extended in time and frequency such that noise subtraction is not appropriate, or the noise-subtracted data were not sufficiently stationary, the noise mitigation team recommended restricting the time-frequency analysis window, so that the parameter estimation analysis does not take into account any of the noise. The final recommended time-frequency analysis window, along with the recommended data frame name, are then sent through CBCFlow for parameter estimation to automatically retrieve the information and start their analyses. CBCFlow is a Python library that facilitates storage and transfer of event metadata <cit.>.
§.§ Validation of O4a events found by online search pipelines
In O4a, online searches generated alerts for 92 significant event candidates.
Out of these, 11 were retracted by the RRT due to evident contamination from noise artifacts or other issues that resulted in inaccurately estimated event significance, rendering their astrophysical origin improbable <cit.>.
Some of the events from the remaining 81 candidates required noise mitigation through glitch subtraction.
The procedures for assessing the necessity of glitch subtraction and its execution, along with the evaluation of the result, are detailed in <cit.>.
For the remaining events showing data-quality issues but deemed not to require glitch subtraction, restrictions were implemented on the analyzed times and frequency bands surrounding the events. A common problem was low-frequency non-stationary noise, particularly in the lowest part of the detector sensitivity range, between 10 and 40 Hz, often caused by ground motion.
Examples of this noise can be seen in the middle and lower panels of Fig. <ref>.
While GW searches and parameter estimation typically start at 20 Hz <cit.> to avoid the noise wall below 15 Hz (as shown in Fig. <ref>), when non-stationary noise is present, the lowest frequency may be increased to exclude the affected frequency range and avoid biases in the analysis results.
§ DATA QUALITY FOR ASTROPHYSICAL SEARCHES
DetChar seeks to help mitigate or eliminate identified noise sources as a top priority. Since this is not always feasible and excess non-Gaussian noise remains in archival data even in cases where the issue was corrected, the DetChar group prepares various data-quality products applied to astrophysical searches to reduce the impact of non-Gaussianity of the data on these searches.
§.§ Data quality products for all searches
As in previous observation runs <cit.>, the LIGO DetChar group recommended that specific periods when the data is unusable due to severe data-quality issues be removed from analyzed data prior to performing astrophysical searches. This is handled through defining segments (time periods specified by start and stop times) to be vetoed by data-quality flags. Category 1 flags define times to be removed prior to running an analysis. It is generally recommended that a consistent set of these flags be applied across all searches, making them relevant for all analyses described in later subsections. Searches for gravitational waves have continued to become more sophisticated in their handling of suboptimal detector data, including utilization of noise subtraction techniques to extract gravitational-wave signals in data with noise transients present. For this reason, LIGO DetChar was less aggressive about recommending periods of data for removal through defining data-quality flags in O4a than in previous runs.
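As a concrete illustration of how such segment-based vetoes act on the data, the minimal sketch below (plain interval arithmetic on toy times; real analyses use dedicated segment libraries) intersects a list of Category 1 flag segments with observing segments and returns the percentage of observation time removed, i.e. the deadtime discussed below:

def intersect(segs_a, segs_b):
    """Pairwise intersection of two lists of (start, stop) segments."""
    out = []
    for a0, a1 in segs_a:
        for b0, b1 in segs_b:
            lo, hi = max(a0, b0), min(a1, b1)
            if hi > lo:
                out.append((lo, hi))
    return out

def deadtime_percent(observing, cat1_flags):
    """Percentage of observing time removed by the Category 1 flag segments."""
    vetoed = intersect(observing, cat1_flags)
    t_obs = sum(stop - start for start, stop in observing)
    t_veto = sum(stop - start for start, stop in vetoed)
    return 100.0 * t_veto / t_obs

observing = [(0, 10000), (12000, 30000)]     # toy start/stop times in seconds
cat1 = [(5000, 5010), (15000, 15002)]        # two short Category 1 segments
print(deadtime_percent(observing, cat1))     # ~0.04%, i.e. well under 0.1%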
Periods of non-stationarity with significant salvageable data in some frequency bands were left in place. There were still stretches of data, however, during which the detector was nominally operating as an astronomical observatory but the data were in practice unanalyzable, either because of severe issues sufficient to bias the PSD or because the detector status was otherwise inconsistent with the detection of gravitational waves. The total deadtime percentage, defined as the percentage of observation time removed by data-quality flags, was less than one tenth of one percent for each interferometer during O4a. Category 1 flags covered issues including:
* Incorrect line subtraction for the LHO or LLO, generally at the start of lock stretches.
* Parametric instability mode <cit.> rung up and severely impacting data shortly before causing lockloss. This issue occurred infrequently in both LIGO interferometers in O4a and also occurred in the third Observing run <cit.>.
* A servo causing severe issues with squeezing <cit.> at LHO.
* Violin modes <cit.>, i.e. resonances of the mirror suspension fibers, rung up severely at LLO, in one case leading directly to distorted strain data followed by lockloss, and in another case causing issues with data calibration.
* Observing mode was defined incorrectly in either LIGO interferometer early in the ER15 Engineering run prior to O4a.
* Observing mode was defined but h(t) data was not stored permanently due to technical issues.
§.§ Data quality for transient searches
Searches for transient gravitational-waves cover gravitational-wave emission that will be in the detectable LIGO frequency band for relatively short duration (sub-second to minutes). These searches include matched-filter analyses detecting compact binary coalescences (CBCs) as well as searches for less well-modelled phenomena referred to as GW bursts. In previous runs, both types of transient searches used Category 2 flags, which are typically shorter in duration than Category 1 and targeted the needs of specific analyses. Due to improved confidence in gravitational-wave detection in the presence of noise, CBC searches have eliminated the use of traditional Category 2 flags, although some of these searches use supplementary data-quality products in their place as described below. Unmodelled transient searches, which cannot rely on the characteristic chirp structure of the gravitational-wave signal for confirmation, continue to use these additional data-quality flags, but fewer kinds of flag resulting in reduced deadtime compared to previous runs.
§.§.§ Data quality for compact binary coalescence transient searches
The primary data-quality products used in CBC searches were the iDQ <cit.> timeseries. The iDQ pipeline produces statistical data-quality information based on auxiliary channel activity. In O4a, the Ordered Veto List (OVL) <cit.> algorithm was implemented within iDQ to create and rank an ensemble of vetoes for strain data triggered on auxiliary channels. The internal rank of OVL is then calibrated to probabilistic statements on the presence of a glitch by iDQ. To produce timeseries, the generated vetoes are applied to the strain data, and the probability that any time in the strain data contains a glitch is given by the highest ranked veto active at that time. These output timeseries are available for each detector in real-time and contain a number of statistics calculated by OVL for each time sample: the ranking statistic; the false-alarm probability (FAP), which is the probability that a random time with no glitch would be ranked at least as high as the current sample; the natural logarithm of the likelihood that transient noise is present (log L); and a state vector for iDQ indicating the quality of the iDQ data.
CBC searches in O4a were performed in two different operating modes, online and offline. In online operations, CBC detection pipelines search for gravitational waves in near real-time with initial low-latency alerts sent to the public for significant detections on a timescale of seconds to minutes. In offline operations, CBC detection pipelines analyze archival data in high latency on a timescale of days to weeks. These offline searches have data from the entire Observing period available to them as well as additional data-quality products such as the Category 1 vetoes described previously. This makes the offline configuration of detection pipelines typically more sensitive than their low-latency counterparts, but comes at an additional computational cost and by definition cannot provide real-time alerts for astronomers.
The iDQ pipeline was also run in online and offline modes. As with CBC searches, the online configuration produced data available in near real-time to detection pipelines and data-quality experts. One detection pipeline, PyCBC Live <cit.>, integrated the iDQ FAP timeseries into their search to reject candidate gravitational-wave signals caused by glitches. This implementation discarded all gravitational-wave candidates with coalescence times within one second of any time satisfying FAP(t) < 10^-4.
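This condition can be written compactly as follows (simplified stand-in arrays rather than the pipeline's actual interfaces):

import numpy as np

def idq_vetoed(t_coalescence, idq_times, idq_fap, window=1.0, threshold=1e-4):
    """True if any iDQ sample within `window` seconds of the candidate has FAP below threshold."""
    near = np.abs(np.asarray(idq_times) - t_coalescence) <= window
    return bool(np.any(np.asarray(idq_fap)[near] < threshold))

# toy usage: one iDQ sample dips below threshold 0.4 s before the candidate
print(idq_vetoed(1234.0, [1233.6, 1235.2], [5e-5, 0.3]))   # True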
Offline iDQ differs from the low-latency version by having access to larger amounts of data when ranking and calibrating vetoes, allowing for more accurate estimation of their statistical properties. The log-likelihood timeseries produced by offline iDQ were used to construct data-quality flags. All of the times satisfying log L(t) ≥ 5 were identified, and segments were constructed covering times from 0.25 s before to 0.25 s after each identified time. Data-quality flags were made by taking the union of all such segments. Different CBC searches may use these flags as they see fit. For example, the flags may be used as vetoes in the style of the Category 2 flags used in previous Observing runs, or they may be incorporated into a ranking statistic as in <cit.>.
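The construction of these offline flags can be sketched as follows, using the thresholds quoted above (illustrative code, not the iDQ implementation):

import numpy as np

def idq_flag_segments(times, log_l, threshold=5.0, pad=0.25):
    """Union of [t - pad, t + pad] intervals around samples with log L >= threshold."""
    flagged = np.sort(np.asarray(times)[np.asarray(log_l) >= threshold])
    segments = []
    for t in flagged:
        start, end = t - pad, t + pad
        if segments and start <= segments[-1][1]:
            segments[-1][1] = max(segments[-1][1], end)   # merge overlapping windows
        else:
            segments.append([start, end])
    return [tuple(seg) for seg in segments]

times = np.arange(0.0, 10.0, 1.0 / 16.0)      # a 16 Hz iDQ timeseries
log_l = np.zeros_like(times)
log_l[50:53] = 6.0                            # a short stretch of glitchy data
print(idq_flag_segments(times, log_l))        # -> [(2.875, 3.5)]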
§.§.§ Data quality for unmodelled transient searches
The coherent WaveBurst (cWB) pipeline <cit.> was the primary online search algorithm for unmodelled transients, or bursts, used in O4a with other algorithms (oLIB <cit.> and MLy <cit.>) planned to be used later in O4. cWB has several modes of operation which allow it to be applied to both short duration and long duration searches for GW signals from the entire sky, and searches for signals from binary black holes, Galactic core collapse supernovae or magnetar bursts or flares.
Times of poor data-quality were removed from the burst searches through Category 1 and 2 flags as described above. All searches applied the same Category 1 flags as applied to the CBC searches, while Category 2 flags were developed by determining auxiliary data channels (those that monitor environmental or instrumental changes that are not sensitive to the effects of gravitational-waves) with high correlation to glitches that affect the burst searches. Category 2 flags are applied primarily to the offline burst searches. Similar to issues related to light intensity dips in O3, two Category 2 flags (one for the LHO and another for the LLO) were developed to exclude very loud glitches (usually with SNRs >100) <cit.>. These flags had deadtime percentages of 0.122% for LHO and 0.019% for LLO during O4a. There were 60 Hz glitches at LLO witnessed by ESD monitors which were used to develop an effective Category 2 flag specific to this observatory, with deadtime percentage of 0.069% during O4a.
§.§ Data quality for persistent searches
Persistent gravitational-wave signals are predicted to take a variety of forms. For example, rapidly-rotating non-axisymmetric neutron stars can emit nearly monochromatic gravitational-wave signals <cit.>, a superposition of many gravitational-wave emitters can create a broadband stationary stochastic background <cit.>, conditions in the early Universe may have generated a stochastic background signal <cit.>, etc. The wide variety of possible signal models and our knowledge (or lack thereof) of waveform parameters motivates a similarly wide variety of analysis efforts <cit.>. Nevertheless, these different search techniques may be served well by a relatively small number of data-quality products. Common data-quality products include information on which frequency bands are contaminated by narrowband noise lines, and cleaned strain data sets in which loud non-Gaussian transient noise has been removed. Different searches take differing approaches towards handling non-stationary noise; CW searches tend to de-weight those time-frequency intervals of higher noise while stochastic searches typically remove those periods from the analysis. Below we describe the data-quality products used for persistent searches in greater detail.
§.§.§ Data quality for continuous wave searches
Self-gated h(t)
Although noise transients typically do not impact CW analyses, the cumulative effect of frequent and loud noise transients can degrade search sensitivity by effectively increasing the noise background, especially below ∼500 Hz <cit.>. Removing these loud transient artifacts has proven useful in improving the sensitivity of broadband CW searches. In O4, we have employed a more sophisticated algorithm to identify and remove such artifacts, creating a new calibrated h(t) dataset useful for CW searches <cit.>.
Lists of narrow-band instrumental artifacts
Most CW searches depend on a catalog of known instrumental lines to veto spurious candidates or remove contaminated spectral bands from analysis. To produce the catalog, all lines visible in a high-resolution O4a-averaged spectrum (using Hann-windowed, 50% overlapping FFT of 7200-second-long data segments) are listed and evaluated. Artifacts that are confirmed to be non-astrophysical are added to a curated list and made available for searches to use, as in <cit.>. A corresponding O4a list is currently being produced. Combs are always considered non-astrophysical because they do not align with a CW signal model. Other lines are considered non-astrophysical when their instrumental/environmental source is known. Artifacts that do not have identified non-astrophysical causes are placed in a separate curated list, as in <cit.>. Only the confirmed non-astrophysical list is used to veto outliers, whereas the unvetted list is used for investigation purposes.
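The run-averaged spectrum itself can be approximated with standard tools, as in the toy sketch below; the sampling rate, amount of data, and injected 60 Hz line are arbitrary, and the production line search uses dedicated CW software rather than this shortcut:

import numpy as np
from scipy.signal import welch

fs = 256.0                        # Hz, toy sampling rate
t_fft = 7200.0                    # s, segment length -> ~0.14 mHz frequency resolution
rng = np.random.default_rng(1)
n = int(fs * 4 * t_fft)           # four 7200 s stretches of fake data
h = rng.normal(size=n)
h += 0.1 * np.sin(2.0 * np.pi * 60.0 * np.arange(n) / fs)   # a narrow "line"

nperseg = int(fs * t_fft)
freq, psd = welch(h, fs=fs, window="hann", nperseg=nperseg, noverlap=nperseg // 2)
print(freq[np.argmax(psd)])       # recovers the 60 Hz artifact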
§.§.§ Data quality for stochastic searches
Non-stationarity cuts in stochastic searches.
Stochastic searches in LVK data are optimal under the assumption that the noise is stationary and Gaussian <cit.>.
This is not the case in general, however, as introduced in section_intro and described throughout ins_invs. To mitigate these effects on stochastic analyses, we split our data into smaller segments, historically 192 s <cit.>. The data stationarity is enforced by removing h(t) segments with a standard deviation that varies by more than a chosen threshold between adjacent segments <cit.>.
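A minimal sketch of such a cut, assuming a fractional threshold on the change in standard deviation between adjacent fixed-length segments (the actual threshold and implementation differ), is:

import numpy as np

def stationarity_mask(h, fs, seg_len=192.0, max_frac_change=0.2):
    """Boolean mask of fixed-length segments kept by the adjacent-segment sigma cut."""
    n = int(seg_len * fs)
    segs = np.asarray(h)[: (len(h) // n) * n].reshape(-1, n)
    sigma = segs.std(axis=1)
    keep = np.ones(len(sigma), dtype=bool)
    frac_change = np.abs(np.diff(sigma)) / sigma[:-1]
    bad = frac_change > max_frac_change
    keep[:-1][bad] = False            # drop both members of a discrepant pair
    keep[1:][bad] = False
    return keep

rng = np.random.default_rng(2)
h = rng.normal(size=int(4096 * 192 * 4))                 # four 192 s segments at 4096 Hz
h[int(4096 * 192 * 1.5):int(4096 * 192 * 1.7)] *= 5.0    # a non-stationary stretch
print(stationarity_mask(h, 4096.0))                      # -> [False False False  True]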
Auto-gating and stochastic DQ vetoes.
To mitigate the effect of glitches on frequency-domain non-stationarity cuts, a gating procedure is implemented to pre-process the data. In O4a, gating in stochastic searches is handled by the stochastic analysis workflow <cit.> as described in <cit.>, by multiplying the data with an inverse Planck-taper window. Periods around samples in the whitened data with an absolute value above a chosen threshold are marked for gating, to remove the entirety of the glitch present in the data segment. Occasionally, the gating method implemented in this workflow produces extended gates (≥20 s) that cover periods marked with Category 1 flags, as explained in data_qual_all_search. This tool helps in monitoring the emergence of new Category 1 flagged periods.
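The gating step can be sketched as follows; the amplitude threshold, taper length, and gate construction are illustrative assumptions rather than the implementation used in the stochastic workflow:

import numpy as np

def planck_ramp(n):
    """Smooth 0 -> 1 ramp of n samples with a Planck-taper profile."""
    x = np.arange(1, n) / float(n)
    z = np.clip(1.0 / x - 1.0 / (1.0 - x), -700.0, 700.0)   # avoid overflow in exp
    return np.concatenate(([0.0], 1.0 / (np.exp(z) + 1.0)))

def find_gates(whitened, fs, threshold=8.0, min_gate=2.0):
    """Centre a gate of at least min_gate seconds on each loud sample, merging overlaps."""
    loud_times = np.flatnonzero(np.abs(whitened) > threshold) / fs
    gates = []
    for t in loud_times:
        lo, hi = t - 0.5 * min_gate, t + 0.5 * min_gate
        if gates and lo <= gates[-1][1]:
            gates[-1][1] = max(gates[-1][1], hi)
        else:
            gates.append([lo, hi])
    return [tuple(g) for g in gates]

def inverse_planck_gate(n_samples, fs, gates, taper=0.5):
    """Window equal to 1 outside the gates, rolling smoothly to zero inside them.
    Assumes each gate sits at least `taper` seconds away from the data edges."""
    w = np.ones(n_samples)
    ramp = planck_ramp(int(taper * fs))
    for t0, t1 in gates:                      # gate start/end times in seconds
        i0, i1 = int(t0 * fs), int(t1 * fs)
        w[i0:i1] = 0.0
        w[i0 - ramp.size:i0] = ramp[::-1]     # roll down into the gate
        w[i1:i1 + ramp.size] = ramp           # roll back up after it
    return w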
In O4a, there were time segments marked with Category 1 flags unique to the stochastic searches, collected in a stochastic veto definer file (VF). These involve periods when a violin mode was rung up and interfering with the stationarity of the data, and a specific instance when calibration lines were not being properly subtracted. The stochastic isotropic search <cit.> was run with and without VF vetoed times to verify that the VF correctly excludes data segments that trigger long gates. In fig:stochastic_gates, we compare gate distributions between detectors. The VF effectively removed the long gates while maintaining the overall bulk of the distribution. Of these gates, over 80% match the minimum gate duration (2 s); others cluster between 2.5-4.5 s.
Notch lists.
As described in sec:spec-artifacts, we monitor the coherence between the h(t) channels at different sites, and between the h(t) channel at one site and auxiliary channels at the same site. The coherence data between h(t) channels indicate the frequency bins that pass the coherence threshold, as shown in fig:coherence_stochmon. They are further examined for possible instrumental causes by spectral monitor tools that also keep track of auxiliary channels.
Frequency bins containing lines known to have an instrumental origin are documented and removed from the analysis.
§ SUMMARY AND FUTURE PROSPECTS
Detector characterization is the study of the detectors and of how various noise sources couple into and impact the GW strain data. In this paper, we summarize the work of the DetChar group between the end of the third Observing run and the end of the first half of the fourth Observing run. These efforts led to a factor of ∼50 reduction in the rate of daytime laser light scattering at LLO as detailed in <ref>, identification of the high-frequency magnetic noise coupling at LHO, identification of several persistent narrow-band noise features at LLO and LHO, a more comprehensive Event Validation and DQR framework, and an overall better understanding of the noise characteristics for astrophysical searches.
As the LIGO detectors become more sensitive, detector characterization will become more challenging. Increased sensitivity translates to higher rate of events but could also lead to an increase in glitch rate. Our work entails not just glitch characterization and reduction, but also validation of the data quality surrounding an event.
Lower detector noise across the band also implies higher sensitivity to persistent signals as well as narrow-band, broad-band, and/or correlated terrestrial noise sources.
Continuous monitoring of the data quality, identification of potential noise couplings in the detector, and improvement of our software tools are some of the prerequisites for the timely dissemination of robust and accurate astrophysical results. This would, among other things, require more person power, increased automation of tools such as the DQR and Event Validation, and a stronger collaboration between the instrument science and DetChar groups. These efforts will lead to more robust identification of weak astrophysical GW signals in noisy LIGO data, and thus a deeper probe of the GW sky.
Data-quality products described in this paper from previous Observing runs have been released publicly on the Gravitational-Wave Open Science Center (GWOSC) webpage (https://gwosc.org), and when the O4a data are publicly released, data-quality products will be released alongside <cit.>.
We would like to thank Christopher Berry, Jess McIver and Alan Weinstein for their many helpful comments and suggestions.
This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation.
LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation and operates under Cooperative Agreement PHY-1764464. Advanced LIGO was built under grant No. PHY-0823459. This work uses the LIGO computing clusters and data from the Advanced LIGO detectors. This document has been assigned LIGO-number LIGO-P2400320.
The HI reservoir in central spiral galaxies and the implied star formation process (arXiv:2409.03168, astro-ph.GA)
Jing Dou, Yingjie Peng, Qiusheng Gu, Alvio Renzini, Luis C. Ho, Filippo Mannucci, Emanuele Daddi, Chengpeng Zhang, Jiaxuan Li, Yong Shi, Tao Wang, Dingyi Zhao, Cheqiu Lyu, Di Li, Feng Yuan, Roberto Maiolino, Yulong Gao
Corresponding authors: Jing Dou ([email protected]), Yingjie Peng ([email protected]), Qiusheng Gu ([email protected])
Jing Dou (ORCID 0000-0002-6961-6378): School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China; Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China
Yingjie Peng: Department of Astronomy, School of Physics, Peking University, 5 Yiheyuan Road, Beijing 100871, China; Kavli Institute for Astronomy and Astrophysics, Peking University, 5 Yiheyuan Road, Beijing 100871, China
Qiusheng Gu (ORCID 0000-0002-3890-3729): School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China; Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China
Alvio Renzini (ORCID 0000-0002-7093-7355): INAF - Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, I-35122 Padova, Italy
Luis C. Ho (ORCID 0000-0001-6947-5846): Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, China; Department of Astronomy, School of Physics, Peking University, Beijing 100871, China
Filippo Mannucci (ORCID 0000-0002-4803-2381): Istituto Nazionale di Astrofisica, Osservatorio Astrofisico di Arcetri, Largo Enrico Fermi 5, I-50125 Firenze, Italy
Emanuele Daddi (ORCID 0000-0002-3331-9590): AIM, CEA, CNRS, Université Paris-Saclay, Université Paris Diderot, Sorbonne Paris Cité, F-91191 Gif-sur-Yvette, France
Chengpeng Zhang (ORCID 0000-0001-6469-1582): George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, College Station, TX 77843-4242, USA; Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843-4242, USA
Jiaxuan Li (ORCID 0000-0001-9592-4190): Department of Astrophysical Sciences, 4 Ivy Lane, Princeton University, Princeton, NJ 08544, USA
Yong Shi (ORCID 0000-0002-8614-6275): School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China; Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China
Tao Wang (ORCID 0000-0002-2504-2421): School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China; Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China
Dingyi Zhao: Department of Astronomy, School of Physics, Peking University, Beijing 100871, China; Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, China
Cheqiu Lyu (ORCID 0009-0000-7307-6362): Department of Astronomy, School of Physics, Peking University, Beijing 100871, China; Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, China
Di Li (ORCID 0000-0003-3010-7661): CAS Key Laboratory of FAST, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, China; School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China
Feng Yuan (ORCID 0000-0003-3564-6437): Center for Astronomy and Astrophysics and Department of Physics, Fudan University, Shanghai 200438, People's Republic of China
Roberto Maiolino (ORCID 0000-0002-4985-3819): Cavendish Laboratory, University of Cambridge, 19 J. J. Thomson Avenue, Cambridge CB3 0HE, UK; Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK; Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
Yulong Gao (ORCID 0000-0002-5973-694X): School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China; Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China
§ ABSTRACT
The cold interstellar medium (ISM), as the raw material for star formation, is critical to understanding galaxy evolution. It is generally understood that galaxies stop making stars when, in one way or another, they run out of gas. However, here we provide evidence that central spiral galaxies remain rich in atomic gas even if their star formation rate and molecular gas fraction have dropped significantly compared to “normal" star-forming galaxies of the same mass. Since HI is sensitive to external processes, here we investigate central spiral galaxies using a combined sample from the SDSS, ALFALFA, and xGASS surveys. After proper incompleteness corrections, we find that the key HI scaling relations for central spirals show significant but regular systematic dependence on stellar mass. At any given stellar mass, the HI gas mass fraction is about constant with changing specific star formation rate (sSFR), which suggests that the HI reservoir is ubiquitous in central spirals with any star formation status down to M_* ∼ 10^9 M_⊙. Together with the tight correlation between the molecular gas mass fraction and sSFR for galaxies across a wide range of different properties, it suggests that the decline of the SFR of all central spirals in the local universe is due to the halt of H_2 supply, though there is plenty of HI gas around. These results hence provide critical observations of the dramatically different behavior of the cold multi-phase ISM, and a key to understanding the star formation process and quenching mechanism.
§ INTRODUCTION
The cold interstellar medium (ISM) plays a vital role in driving galaxy formation and evolution. As galaxies evolve, ISM is replenished by gas cooling and accretion from the circumgalactic medium; consumed by star formation; and enriched by stellar winds and feedback <cit.>. The global star formation rates (SFRs) of galaxies are determined by their cold gas content (M_ gas) and star formation efficiency (SFE, defined as SFR/M_ gas) or the depletion timescale (the inverse of SFE, defined as M_ gas/SFR). The SFE is a key galaxy parameter that reflects the efficiency with which a galaxy can convert cold gas into stars. Furthermore, SFE is closely tied to a galaxy’s dynamical time, which represents the time it takes for some fraction of gas to be transformed into stars per galactic orbital time, influenced by the gravitational instability of the cold gas in the galactic disk. This fraction is dependent on the detailed feedback physics <cit.>.
Observational evidence strongly supports the well-established Kennicutt-Schmidt (KS) type star formation law, asserting a tight correlation between star formation rate (SFR) and total gas content, including atomic and molecular gas <cit.>. This relationship is more pronounced with molecular gas, especially dense molecular gas <cit.>, as stars are commonly believed to originate from cold, dense molecular clouds. Numerous studies have corroborated the molecular gas KS law on both global and spatially resolved scales <cit.>, leading to the proposal of additional scaling relations for molecular gas. These include the SFE of molecular gas (SFE_ H_2 = SFR/M_ H_2) and the molecular gas mass fraction (μ_ H_2 = M_ H_2/M_*), both of which have been found to depend tightly on the specific star formation rate (sSFR = SFR/M_*) <cit.>, for both AGNs and non-AGNs <cit.>. The molecular gas mass has also been shown to correlate with stellar mass, a relationship often referred to as the molecular gas main sequence <cit.>. Although the underlying physical mechanisms of these scaling relations remain debated and somewhat elusive, they are extensively utilized to estimate cold gas masses and to place constraints on theoretical models and simulations.
In <cit.>, utilizing the xCOLD GASS survey <cit.>, we proposed the Fundamental Formation Relation (FFR), a tight relation between sSFR, SFE_ H_2, and μ_ H_2. Other scaling relations, including the integrated KS law, star-forming main sequence (SFMS), and the molecular gas main sequence can all be derived from this fundamental cube. The molecular gas FFR demonstrates that star formation levels in galaxies are determined by the combined effects of galactic dynamical timescales (related to the gas depletion timescale, 1/SFE) and gas instability (associated with μ_ H_2). The molecular gas FFR governs the star formation and quenching processes with small scatters. Galaxies with different stellar masses, sizes, structures, metallicities, and in different environments all evolve on the same single scaling relation of μ_ H_2-SFE_ H_2-sSFR. These unique features and simplicities make the molecular FFR an ideal framework to study galaxy formation and evolution, for instance, to accurately derive the gas cycles in galaxy populations with different stellar masses, from star-forming galaxies to the galaxies in the process of being quenched <cit.>.
Another critical component of the ISM in galaxies is atomic gas (HI). HI is generally more extended and more loosely bound than H_2, and reacts sensitively to external environmental influences like ram pressure stripping and galaxy interactions <cit.>. Using the xGASS survey <cit.>, <cit.> explored whether HI follows a similar FFR as H_2, and found that the relation between SFE_ HI (defined as SFR/M_ HI) and sSFR for all galaxies shows significant scatter and strong systematic dependence on the key galaxy properties that have been investigated, revealing that HI does not follow a similar FFR as H_2. The dramatic difference between HI and H_2 indicates that different physical processes, such as environmental effects, notably impact the HI gas, while the H_2 relations remain largely unaffected.
Given that HI serves as the precursor of H_2, it plays a critical role in star formation. To accurately assess the intrinsic properties of HI and its actual role in star formation, we must exclude any external environmental factors to which HI is particularly sensitive. Central galaxies are the most massive galaxies within a dark matter halo and are usually located at the center of a galaxy cluster or group. They are typically distinguished from satellite galaxies, which orbit around the central galaxy. Since external environmental effects mainly operate on satellite galaxies <cit.>, in this work only central galaxies are selected to minimize environmental effects, which allows us to focus on star formation and quenching processes due to internal mechanisms.
Different galaxy types can follow very different evolutionary paths. We focus only on spiral galaxies for the following reasons. Firstly, many of today's ellipticals were already quenched at higher redshift <cit.>. Including them can introduce severe progenitor bias. Secondly, ellipticals in the local universe are mostly transformed from spiral galaxies via mergers rather than internal secular evolution <cit.>. Internal quenching mechanisms, such as AGN feedback alone, cannot directly transform spirals into ellipticals. In addition, the formation mechanisms for irregulars and S0 galaxies are still not fully understood and may involve recent mergers or strong interactions as well <cit.>.
Central spiral galaxies are ideal candidates for studying the overall properties and behavior of gas and star formation, significantly contributing to our understanding of galaxy formation and evolution. In combination with the molecular FFR, we aim to provide a comprehensive understanding of the cold multi-phase ISM, star formation process and (internally-driven) quenching mechanisms in local galaxies. Throughout this work, we adopt the cosmological parameters: Ω_m=0.3, Ω_Λ=0.7, H_0=70 km s^-1 Mpc^-1.
§ SAMPLE
§.§ The ALFALFA-SDSS matched sample
The main sample used in this work is the Arecibo Legacy Fast ALFA (ALFALFA) survey <cit.>, which is a blind HI-selected survey out to z ∼ 0.06. The survey utilized the seven-horn Arecibo L-band Feed Array (ALFA) to map ∼ 7000 deg^2 of the high Galactic latitude sky in drift-scan mode <cit.>. Conducted between 2005 and 2011, ALFALFA covered a frequency range from 1335 to 1435 MHz, corresponding to heliocentric velocities from -2000 km/s to 18000 km/s, and detected ∼31,500 extragalactic 21cm line sources.
The ALFALFA survey contains sources of quality code 1 and quality code 2. Code 1 refers to the sources with the highest quality, evaluated using several criteria: (1) a good signal consistency between the two independent polarizations observed by ALFALFA, (2) a spatial extent consistent with or larger than the telescope beam, (3) a spectral profile free from radio-frequency interference (RFI), (4) an approximate minimum signal-to-noise ratio (S/N) threshold of 6.5. Code 2 refers to sources with low S/N (≲ 6.5) that have nevertheless been matched with optical counterparts whose known optical redshifts coincide with those measured in HI. They are also included in the ALFALFA catalog as they are highly likely to be real. Both Code 1 and Code 2 sources are considered detections in our analysis.
The parent SDSS Data Release 7 (DR7) <cit.> sample was retrieved from the SDSS CasJobs site, covering ∼9,500 deg^2 of the sky. The SDSS spectroscopic survey covers a wavelength range of 3,800 to 9,200Å, utilizing fibers with a diameter of 3 arcseconds on the sky. Following the criteria by <cit.>, galaxies with clean photometry and Petrosian r magnitudes in the range of 10.0 - 18.0 after correction for Milky-Way galactic extinction are selected. The bright limit of 10 magnitudes is set to exclude extremely bright, large nearby galaxies, while the faint limit of 18 magnitudes ensures the completeness of the SDSS Spectroscopic Survey. After removing duplicates, the parent photometric sample contains 1,579,314 objects, of which 72,697 have reliable spectroscopic redshift measurements in the redshift range 0.02 < z < 0.05. This narrow and low redshift range is used to match the depth of the ALFALFA survey. The physical scales within this redshift range vary from 1.2kpc to 2.9kpc.
There are 43,448 SDSS galaxies with reliable spectroscopic redshift measurements in the SDSS and ALFALFA overlapped region within the redshift range of z = 0.02 - 0.05. The ALFALFA detections are then cross-matched with the parent SDSS sample in this narrow redshift range. Briefly, the spatial separation between the most probable optical counterpart of each HI detection and the SDSS galaxy is less than 5”. Also, the velocity difference between the HI source and the SDSS galaxy is less than 300 km/s. For most galaxies, the line-of-sight velocity difference is generally less than 100 km/s, and using 300 km/s can encompass more than 99% of galaxies. Slightly adjusting the matching conditions will not significantly affect the matched sample. It should be noted that the angular resolution of ALFALFA is ∼ 3.5 arcmin. Close companions may contaminate the measured HI spectra within the large Arecibo beam. We hence exclude galaxies that have multiple SDSS counterparts within the ALFALFA beam size (∼ 3.5 arcmin) and within a velocity difference of three times the HI line width (W_50), where W_50 is defined as the difference between the velocities corresponding to the fitted polynomial at a level of 50% of the maximum value of the flux on each horn. The threshold of three times W_50 is an empirical standard that has been found effective and employed in many studies because it balances the need for accurate matches without missing potentially relevant sources <cit.>. Also, galaxies that are in dense regions or have close companions are more vulnerable to environmental effects. Since the goal of this work is to investigate the in-situ star formation process and mass-quenching (i.e. the internal quenching) mechanism, we try to exclude any environmental effect. About 12% of ALFALFA galaxies were excluded to avoid contamination from interlopers. This subset of HI detections with clean optical counterparts is referred to as the ALFALFA-SDSS matched sample, consisting of 9571 galaxies.
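The matching procedure can be sketched schematically as follows; the array interfaces are simplified placeholders rather than the actual catalog format, and the separation and velocity cuts are those quoted above:

import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

C_KMS = 299792.458   # speed of light in km/s

def match_alfalfa_sdss(hi_ra, hi_dec, hi_v, hi_w50, sd_ra, sd_dec, sd_z):
    """Return the index of the best SDSS counterpart for each HI source and a
    boolean mask of clean, unconfused matches (angles in degrees, velocities
    and W50 in km/s)."""
    hi = SkyCoord(hi_ra * u.deg, hi_dec * u.deg)
    sd = SkyCoord(sd_ra * u.deg, sd_dec * u.deg)
    idx, sep, _ = hi.match_to_catalog_sky(sd)
    dv = np.abs(hi_v - C_KMS * sd_z[idx])
    good = (sep < 5.0 * u.arcsec) & (dv < 300.0)

    # beam-confusion cut: require exactly one SDSS galaxy inside the ~3.5 arcmin
    # beam and within three times the HI line width
    clean = np.zeros(len(hi), dtype=bool)
    for i in np.flatnonzero(good):
        in_beam = hi[i].separation(sd) < 3.5 * u.arcmin
        in_vel = np.abs(hi_v[i] - C_KMS * sd_z) < 3.0 * hi_w50[i]
        clean[i] = (in_beam & in_vel).sum() == 1
    return idx, good & clean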
The HI aperture is also an important issue. The HI spatial scale is, on average, much larger than the optical and molecular gas scales. Before the Square Kilometre Array <cit.>, it is difficult to obtain a large sample with deep and spatially resolved HI measurements. Nevertheless, with the well-established HI mass-size relation based on resolved, deeper observations of nearby spiral galaxies <cit.>, we can roughly estimate the HI spatial scales of our galaxies. The massive star-forming galaxies (which are also roughly the most HI-rich galaxies in terms of absolute HI mass) on average have an HI mass of ∼10^10 M_⊙, which corresponds to an HI diameter of about 50 kpc. The ∼3.5 arcmin beam of the Arecibo telescope corresponds to a physical size of approximately 85 kpc at z = 0.02 and 200 kpc at z = 0.05, both larger than the HI diameter. Hence, this should not cause a significant bias. The most relevant bias might be interlopers entering the large beam, as discussed above.
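As a rough illustration of this aperture argument, the following sketch compares the HI diameter expected from an HI mass-size relation with the physical size subtended by the Arecibo beam. The cosmological parameters and the relation coefficients (log D_HI ≈ 0.5 log M_HI - 3.3) are assumptions chosen only to reproduce the approximate numbers quoted above, not the exact relation used here.

import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology, for illustration

def hi_diameter_kpc(log_mhi):
    # Approximate HI mass-size relation (coefficients assumed for illustration):
    # log D_HI [kpc] ~ 0.5 log M_HI [Msun] - 3.3
    return 10.0 ** (0.5 * log_mhi - 3.3)

def beam_size_kpc(z, beam=3.5 * u.arcmin):
    # Physical size subtended by the Arecibo beam at redshift z
    return (beam * cosmo.kpc_proper_per_arcmin(z)).to(u.kpc).value

print(hi_diameter_kpc(10.0))                       # ~50 kpc for M_HI = 10^10 Msun
print(beam_size_kpc(0.02), beam_size_kpc(0.05))    # ~85 kpc and ~200 kpc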
§.§ Incompleteness corrections
As detailed in Appendix <ref> and illustrated in Figure <ref>, both SDSS and ALFALFA are flux-limited samples, resulting in significant selection biases in both stellar mass and HI gas mass, even within the narrow redshift range of z = 0.02-0.05. Therefore, when conducting statistical studies using the ALFALFA-SDSS matched sample, it is essential to apply a joint incompleteness correction to account for these biases.
Following the method introduced in <cit.>, we use a joint V_ max correction to account for the volume incompleteness within the given redshift range. For the ALFALFA-SDSS matched sample, we calculate for each individual galaxy the maximum redshift to which it could still be observed according to the ALFALFA and SDSS detection limits. This maximum redshift is then used to calculate the maximum observable comoving volume (V_ max) for each galaxy. By assuming that the spatial distribution of our sample is homogeneous and that there is no evolution within the comoving volume (over this narrow redshift range), we weight each galaxy by V_ total/V_ max to account for the galaxies missed by the surveys, where V_ total is the total comoving volume spanned by our sample. The corrections for the SDSS and ALFALFA samples are performed independently and then combined to correct the ALFALFA-SDSS matched sample, i.e., we simultaneously correct the strong sample selection effects shown in both panels of Figure <ref>.
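A minimal sketch of the joint V_max weighting is given below, assuming a flat ΛCDM cosmology and that the two surveys' limits are combined by taking the more restrictive (smaller) of the two maximum redshifts; this combination scheme and the cosmology are assumptions of the sketch, and the survey-specific computation of z_max from the SDSS magnitude limit and the ALFALFA sensitivity is not reproduced here.

import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology, for illustration
Z_MIN, Z_MAX = 0.02, 0.05

def shell_volume(z_lo, z_hi):
    # Comoving volume between two redshifts (the sky coverage cancels in the ratio)
    return (cosmo.comoving_volume(z_hi) - cosmo.comoving_volume(z_lo)).to(u.Mpc**3).value

V_TOTAL = shell_volume(Z_MIN, Z_MAX)

def joint_vmax_weight(z_max_sdss, z_max_alfalfa):
    """V_total / V_max for one galaxy, using the more restrictive of the two
    survey-specific maximum redshifts (one possible combination scheme)."""
    z_max = min(z_max_sdss, z_max_alfalfa, Z_MAX)
    return V_TOTAL / shell_volume(Z_MIN, z_max)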
One advantage of this method is that the weighting factor is calculated for each individual galaxy in our sample, which allows us to perform statistical analyses conveniently for any subsample, e.g., central spirals, ellipticals and uncertains. As discussed and emphasized in Figure <ref> and Appendix <ref>, galaxies with different visual morphologies (i.e., spiral, uncertain and elliptical) have intrinsically distinct HI properties and should be treated and studied separately, for instance in a stacking analysis.
§.§ XGASS
To test our incompleteness correction on the large but shallow ALFALFA-SDSS sample and verify the results, we include in our analysis observations from a much deeper targeted survey, the extended GALEX Arecibo SDSS Survey (xGASS) <cit.>. It provides HI gas measurements for 1,179 galaxies in the nearby universe. These galaxies are selected only by redshift (0.01 < z < 0.05) and stellar mass (10^9 < M_* < 10^11.5 M_⊙) from the overlapping area of the SDSS DR7 spectroscopic survey, the GALEX Medium Imaging Survey <cit.> and projected ALFALFA footprints <cit.>. Galaxies with reliable HI detections already available from the 40% ALFALFA catalog or the Cornell HI digital archive <cit.> were not reobserved in xGASS, to optimize observing efficiency. The remaining galaxies were observed with the Arecibo telescope until the HI line was detected or a limit of a few per cent in M_ HI/M_* was reached. This gas-fraction limit is 2% for galaxies with log M_*/M_⊙ > 9.7, and a constant gas mass limit of log M_ HI/M_⊙ = 8 is used for galaxies with lower stellar masses. Each galaxy is weighted by a correction factor to account for selection effects in stellar mass, as described in <cit.>.
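The xGASS gas-fraction limit described above can be encoded compactly as follows; this is a sketch that simply reproduces the quoted numbers, and the function name is illustrative.

import numpy as np

def xgass_hi_limit(log_mstar):
    # Detection limit in log M_HI: 2% of M_* above log M_* = 9.7,
    # and a constant log M_HI = 8 below (as quoted in the text)
    log_mstar = np.asarray(log_mstar, dtype=float)
    return np.where(log_mstar > 9.7, log_mstar + np.log10(0.02), 8.0)

print(xgass_hi_limit([9.0, 10.0, 11.0]))   # -> [8.0, ~8.3, ~9.3]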
§.§ The selection of central spiral galaxies
It is well known that HI is sensitive to external environmental effects <cit.>. Therefore, since we want to focus on the star formation and quenching processes driven by internal mechanisms, only central galaxies are selected, minimizing the environmental effects that operate strongly on satellites <cit.>.
The galaxies are classified into centrals and satellites using the SDSS DR7 group catalogue of <cit.>. Central galaxies are defined as the most massive and most luminous (in the r band) galaxies within a given group. "Centrals with satellites" and "isolated centrals/singletons" are both included as "centrals" in our analysis. Among the 4,470 central spiral HI detections in ALFALFA, 87% are singletons in the <cit.> group catalog, i.e., they belong to groups with richness N = 1. The remaining 13% are centrals with N > 1 (i.e., with at least one satellite). This is expected, since the majority of low-mass centrals (much more abundant than the massive ones) are singletons just above the detection limit, and their fainter satellites are not observed in a flux-limited survey. We have also checked the HI detection fraction and scaling relations separately for "centrals with satellites" and "isolated centrals/singletons"; the results show little difference between the two cases and our conclusions remain robust.
As mentioned in the introduction, our aim is to better understand the processes of in-situ star formation and internal-origin quenching. Environmental effects can strongly impact HI gas; hence our analysis focuses exclusively on central spiral galaxies. In practice, spiral galaxies can be classified based on various criteria, such as visual morphology, structural parameters, or kinematics. Each classification method results in significantly different samples, as illustrated in Appendix <ref> and Figure <ref>. Visually classified "spiral/disk" is the most effective parameter for distinguishing HI-rich from HI-poor galaxies. This is because the HI gas disk (which extends to much larger radii than the stellar disk) is the component most sensitive to external interactions or perturbations, followed by the visual morphology of the disk, while the least sensitive parameters are the structural ones, such as the bulge-to-total ratio (B/T), concentration index (R_90/R_50) and Sérsic index. Hence, we focus only on central galaxies that are visually defined as spiral galaxies.
Galaxies are classified into different morphologies using data from the Galaxy Zoo (GZ) project <cit.>. In this project, hundreds of thousands of volunteers were asked to view each SDSS image and classify the galaxy morphology. A clean sample has been defined by requiring at least 80% of the corrected votes to be in a particular category. A morphology flag ("spiral", "elliptical" or "uncertain") is assigned to each galaxy after a careful debiasing process. The contamination of lenticulars or S0s in the GZ clean spiral sample is small, about 3% <cit.>. Most S0 galaxies with smooth and rounded profiles are classified in GZ as "elliptical" or "uncertain". It should be noted that in GZ, "spiral" includes disky galaxies with spiral arms as well as those without clear spiral arms; in this work, we categorize both as "spirals".
Since galaxy mergers can affect star formation in a complicated way, enhancing or suppressing it depending on the gas content of the merging galaxies and the phase of the merger, we exclude systems whose GZ merger vote fraction exceeds 30%. As we show later in Figures <ref> and <ref>, central spiral galaxies defined by the visual morphology classified in the GZ project achieve a very high HI detection fraction, which is essential for obtaining unbiased intrinsic HI scaling relations. In contrast, central spiral galaxies defined by structural parameters show a significantly lower average detection fraction.
§.§ Stellar mass and SFR measurements
The stellar masses (M_*) of the galaxies are estimated with the k-correction program <cit.>, using the <cit.> stellar population synthesis models and a <cit.> initial mass function (IMF); they show a small scatter of ∼0.1 dex compared with the published stellar masses of <cit.>.
The star formation rates (SFRs) are taken from the value-added MPA-JHU catalog <cit.> and are based on emission-line luminosities, corrected for extinction using the Hα/Hβ ratio. To correct for aperture effects, the SFRs outside the SDSS 3" fiber were obtained by fitting the spectral energy distribution of the ugriz photometry outside the fiber, using the models and methods described in <cit.>. The emission lines of AGN and composite galaxies can be contaminated by nuclear activity; their SFRs are derived from the strength of the 4000 Å break, calibrated with Hα for non-AGN, purely star-forming galaxies <cit.>. These SFRs are computed for a Kroupa IMF, and we convert them to a Chabrier IMF using log SFR(Chabrier) = log SFR(Kroupa) - 0.04. Different SFR estimators may produce different results; results using an alternative SFR estimator are shown and discussed in Appendix <ref>.
§ RESULTS
§.§ HI detection fraction
ALFALFA is a blind, relatively shallow, flux-limited survey over a significant volume. As discussed in Section <ref> and shown in Figure <ref>, even within the narrow redshift range of z = 0.02-0.05, both ALFALFA and SDSS are highly biased samples, with strong selections in HI mass, line width and redshift for ALFALFA, and in stellar mass, color and redshift for SDSS. Hence, when combining ALFALFA and SDSS (i.e., in the ALFALFA-SDSS matched sample), a joint incompleteness correction is necessary, as discussed in Section <ref>. The corrected ALFALFA-SDSS matched sample contains 9,571 reliable HI detections, of which 4,470 are central spirals classified by visual morphology.
In Figure <ref>, we show the detection fraction (DF) on the SFR-M_* plane for different samples, with and without the incompleteness correction. The DF is defined as the ratio between the number of galaxies with HI detections and the number of all galaxies in the parent SDSS sample, computed using a moving box of size 0.5 dex in SFR and 0.5 dex in M_*.
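The detection-fraction map can be computed with a simple moving-box estimator like the one sketched below; the grid spacing and the way the V_total/V_max weights enter both the numerator and the denominator are assumptions of this sketch rather than a description of the exact code used.

import numpy as np

def detection_fraction_map(log_mstar, log_sfr, detected, weights=None,
                           box=0.5, step=0.1):
    """HI detection fraction on the SFR-M_* plane using a 0.5-dex moving box."""
    log_mstar, log_sfr = np.asarray(log_mstar), np.asarray(log_sfr)
    detected = np.asarray(detected, dtype=bool)
    weights = np.ones_like(log_mstar) if weights is None else np.asarray(weights)
    m_grid = np.arange(log_mstar.min(), log_mstar.max(), step)
    s_grid = np.arange(log_sfr.min(), log_sfr.max(), step)
    df = np.full((s_grid.size, m_grid.size), np.nan)
    for j, m0 in enumerate(m_grid):
        for i, s0 in enumerate(s_grid):
            in_box = (np.abs(log_mstar - m0) < box / 2) & (np.abs(log_sfr - s0) < box / 2)
            w_all = weights[in_box].sum()
            if w_all > 0:
                # weighted fraction of detections inside the box
                df[i, j] = weights[in_box & detected].sum() / w_all
    return m_grid, s_grid, df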
The upper left panel of Figure <ref> shows the raw DF for all galaxies in the ALFALFA-SDSS matched sample, without the incompleteness correction. The overall DF is evidently low, even for galaxies on the SFMS. The lower left panel shows the raw DF for central spirals (defined by visual morphology as in the Galaxy Zoo project; see Section <ref> for details). Compared to all galaxies, the DF increases significantly, particularly for those with low SFRs.
The middle panels are similar to the left panels but include the incompleteness corrections. In both panels, the corrected DFs are significantly elevated compared to the raw DFs shown in the left two panels. For all galaxies (upper middle panel), the corrected average DF for galaxies on and above the SFMS (above the lower dotted lines) is significantly higher than that below the SFMS (below the lower dotted lines), indicating that quenching or quenched galaxies are indeed cold-gas poor on average. Remarkably, for central spiral galaxies (lower middle panel) the corrected DF is further enhanced, although it should be noted that below the SFMS the DF is slightly lower in both the ALFALFA and xGASS samples. The average DF for central spiral galaxies, including low-mass galaxies with M_* > 10^9 M_⊙ and galaxies below the SFMS, is ∼95%, i.e., there is a ubiquitous HI reservoir in the vast majority of central spirals.
To test our incompleteness correction of the ALFALFA-SDSS sample and the robustness of our results, we show in the right two panels the results obtained from the much deeper targeted xGASS survey. There are 662 reliable HI detections (i.e., the HI line is detected and not confused by close companions) in the xGASS sample, 273 of which are central spirals (classified by visual morphology). The DFs for all galaxies and for central spiral galaxies in the xGASS sample (right two panels) look distinct from the left two panels (without incompleteness corrections) but similar to the middle two panels (with proper incompleteness corrections). In particular, the xGASS central spirals consistently show a nearly 100% DF across a wide range of stellar mass and SFR. The overall raw DF is ∼96%, and ∼98% with the statistical weights given by <cit.>, which strongly supports the ubiquitous HI gas content of central spiral galaxies. There are some "green holes" of lower DF in both the ALFALFA and xGASS central-spiral panels. They do not seem to follow a systematic trend and do not occupy similar regions of the SFR-M_* plane; they could be caused by small-number statistics (for the small but deep xGASS) or by imperfect incompleteness corrections (for the large but shallow ALFALFA). Future larger and deeper HI surveys will verify these results.
As shown in Figure <ref>, at a given stellar mass a significant fraction of visually defined central "spirals" have large values of R_90/R_50, B/T and Sérsic index, and would be classified as "elliptical" if the definition were based on their structural parameters. As shown in <cit.>, these central spirals with a massive bulge have lower sSFR but are HI-rich. On the other hand, as shown in Figure <ref>, many visually defined "uncertain" galaxies have small values of R_90/R_50, B/T and Sérsic index, and would be classified as "spiral" according to their structural parameters. These "uncertain" galaxies are, on average, HI-poor, as shown in the lower left panel of Figure <ref>; in particular, those below the SFMS are almost all non-detections. Therefore, using "spirals" defined by quantitative structural parameters produces a lower DF towards low sSFR (by including many HI-poor "uncertains"). In the upper panels of Figure <ref>, we show that for central galaxies with B/T < 0.3 and B/T < 0.5, the DFs drop significantly towards low sSFR in both cases, consistent with the results of <cit.>; the overall DF is ∼64% for both B/T cuts, much lower than that of the central spirals defined by visual morphology (∼95%).
For completeness, we also show the DFs for the central "elliptical" galaxies defined by the Galaxy Zoo project in the lower right panel of Figure <ref>. The overall DF is only ∼13.4%, consistent with ellipticals being, on average, cold-gas poor systems. These results show that galaxies with different visual morphologies (i.e., spiral, uncertain and elliptical) have intrinsically distinct HI properties and should be treated and studied separately, for instance in a stacking analysis.
§.§ HI scaling relations
The high overall DF of ∼95% for the central spiral galaxies makes it possible to derive unbiased HI scaling relations for this particular population. The left panels of Figure <ref> show the distributions of the central spirals in the ALFALFA-SDSS matched sample on the SFE_HI-sSFR plane (upper left panel, where the HI star formation efficiency is SFE_HI = SFR/M_HI) and the μ_HI-sSFR plane (lower left panel, where the HI gas fraction is μ_HI = M_HI/M_*). SFE_HI is the inverse of the HI gas depletion timescale, describing how efficiently a galaxy converts its available cold gas into stars, or equivalently how long the current star formation activity would take to deplete the HI reservoir. Although SFR and M_HI refer to very different spatial scales, SFE_HI can be interpreted as the product of the H_2 star formation efficiency and the molecular-to-atomic gas mass ratio M_H_2/M_HI (SFE_HI = SFE_H_2 × M_H_2/M_HI). The size of each dot represents the weight used for the incompleteness correction. The weighting factor for most galaxies ranges from 1 to 10, with a median of about 1.34 and a mean of about 2.73, while the maximum value is around 42. Different studies may use different sSFR cuts to define star-forming, green-valley and passive galaxies. Typically, star-forming galaxies are defined by log sSFR (Gyr^-1) > -1.5 and fully quenched galaxies by log sSFR (Gyr^-1) < -2.5, with green-valley galaxies occupying the range in between. It is evident that even many star-forming galaxies with high sSFR have a large dot size, i.e., a large weighting factor, indicating that the sample is highly incomplete there. Therefore, the incompleteness correction is necessary for both star-forming and passive galaxies. On average, both SFE_HI and μ_HI increase with increasing sSFR, but with significant scatter.
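For reference, the two quantities plotted here are tied to sSFR by an exact identity that follows directly from their definitions (this is not an additional assumption):

\[
\mathrm{sSFR} \equiv \frac{\mathrm{SFR}}{M_*}
 = \frac{M_{\rm HI}}{M_*}\,\frac{\mathrm{SFR}}{M_{\rm HI}}
 = \mu_{\rm HI}\,\mathrm{SFE}_{\rm HI},
\qquad
\log \mathrm{sSFR} = \log \mu_{\rm HI} + \log \mathrm{SFE}_{\rm HI}.
\]

Consequently, at fixed stellar mass the logarithmic slopes of the SFE_HI-sSFR and μ_HI-sSFR relations must sum to unity, so a slope of about one for the former implies a slope of about zero for the latter, as found below.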
The thick grey line in each panel of Figure <ref> shows the fundamental formation relation (FFR) for H_2 as found in <cit.>. As discussed in <cit.>, for H_2 both the SFE_H_2-sSFR and μ_H_2-sSFR relations are very tight, independent of other key galaxy parameters, and their scatter can be entirely explained by measurement errors. However, the HI relations for all galaxies do not follow a similarly tight FFR <cit.>. Here we further show that even for the central spiral galaxies, the HI relations still exhibit significant scatter, as shown in Figure <ref>. It is well known that SFR does not correlate well with HI content, while it does correlate with H_2 <cit.>. In a sense, we demonstrate here that this lack of correlation is driven by central spirals, which retain HI even when their SFR is low.
The middle panels of Figure <ref> use the common one-dimensional analysis to show the median SFE_HI and μ_HI as a function of sSFR in different stellar mass bins, calculated with a sliding window of 0.5 dex in sSFR; this window is chosen to match the typical observational uncertainty in measuring sSFR while including enough data points to ensure statistical robustness. Error bars on each line indicate the 5th and 95th percentiles of the distribution. The right panels use the two-dimensional analysis to show the average M_* on the SFE_HI-sSFR and μ_HI-sSFR planes, calculated with a moving box of 0.5 dex in sSFR, SFE_HI and μ_HI; the contour lines mark equal stellar mass. The overall trends in the corresponding panels are similar, but with notable differences at low stellar mass. As discussed in detail in Figure <ref>, the widely used one-dimensional analysis (which considers only the variation of y at fixed x) can be heavily affected by the shape of the sample distribution, in particular for the low stellar mass bin, while the two-dimensional analysis (which takes the variation along both axes into account) is more robust. Therefore, the results shown in the right panels of Figure <ref> are a more objective and less biased representation of the true underlying trends.
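For concreteness, a weighted version of the one-dimensional sliding-window statistic used in the middle panels might look like the sketch below; the step size, the minimum number of points per window and the exact treatment of the V_total/V_max weights in the percentiles are assumptions of this illustration.

import numpy as np

def sliding_median(x, y, weights, window=0.5, step=0.1, min_points=5):
    """Weighted running median of y vs x with 5th/95th percentiles (1D analysis)."""
    x, y, weights = np.asarray(x), np.asarray(y), np.asarray(weights)
    centres = np.arange(x.min(), x.max() + step, step)
    med, p05, p95 = [], [], []
    for c in centres:
        sel = np.abs(x - c) < window / 2
        if sel.sum() < min_points:
            med.append(np.nan); p05.append(np.nan); p95.append(np.nan)
            continue
        order = np.argsort(y[sel])
        yy, ww = y[sel][order], weights[sel][order]
        cdf = np.cumsum(ww) / ww.sum()          # weighted cumulative distribution
        med.append(np.interp(0.50, cdf, yy))
        p05.append(np.interp(0.05, cdf, yy))
        p95.append(np.interp(0.95, cdf, yy))
    return centres, np.array(med), np.array(p05), np.array(p95)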
The contour lines in the upper right panel of Figure <ref> are almost all parallel to lines of constant μ_HI, consistent with Figure <ref>, where the distribution of the galaxies (indicated by the ridge of the contour lines) follows constant μ_HI and the slope of the distribution (indicated by the red line) is about unity at a given stellar mass. Consistent results are shown in the lower right panel of Figure <ref>, where the contour lines are roughly horizontal. This is expected since, by definition, sSFR = μ_HI × SFE_HI; hence the μ_HI-sSFR relation is closely related to the SFE_HI-sSFR relation. Both panels indicate that for central spirals at a given stellar mass, star-forming, green-valley and passive galaxies all host a similar amount of HI gas. The HI gas fraction depends primarily on stellar mass and only weakly on sSFR. We also investigated the SFE_HI-sSFR and μ_HI-sSFR relations for central spirals in the xGASS sample, which exhibit consistent general trends but with considerable scatter compared to ALFALFA, owing to the much smaller sample size.
We also tested the SFE_HI-sSFR and μ_HI-sSFR relations for central spirals selected by other spiral definitions, such as concentration, B/T, Sérsic index and the probability of a galaxy being a late-type disk galaxy with T-type > 0, as shown in Figure <ref> of Appendix <ref>. Interestingly, these different definitions do not significantly affect the scaling relations. This is because ALFALFA is a relatively shallow survey and these relations are derived using only galaxies with HI detections. However, as demonstrated in Figures <ref> and <ref>, different definitions of spiral galaxies strongly affect the detection fraction of the ALFALFA-SDSS sample. Central spiral galaxies defined by the visual morphology classified in the Galaxy Zoo project achieve a very high detection fraction, which is essential for obtaining unbiased intrinsic scaling relations; other definitions result in lower detection fractions.
As mentioned above, there is a single tight FFR for all galaxies (thick grey line), while for central spiral galaxies both the SFE_HI-sSFR and μ_HI-sSFR relations depend systematically on stellar mass; nevertheless, both relations show a similar slope in different stellar mass bins (i.e., the contour lines in the right two panels of Figure <ref> are nicely parallel to each other). As above, the two relations are closely related. It is important to note that the FFR has a non-zero slope (∼0.5), while the slope of the SFE_HI-sSFR relation is about unity and that of the μ_HI-sSFR relation is about zero at all stellar masses. This indicates that for central spirals at any stellar mass, when sSFR decreases both the H_2 gas content and SFE_H_2 decrease, the HI gas fraction remains almost constant while SFE_HI decreases, and the M_H_2/M_HI ratio also decreases. Previous studies found that some HI-rich galaxies with low SFRs do exist <cit.>. Here we show that for a significant population of central spirals, the HI reservoir is ubiquitous, down to M_* ∼ 10^9 M_⊙.
§ SUMMARY AND DISCUSSION
Since HI gas usually extends much further than the stars, external environmental effects such as interactions and mergers can strongly impact the HI gas. To better understand the processes of in-situ star formation and internal-origin quenching, this study focuses exclusively on central galaxies, minimizing the strong environmental effects that can significantly influence satellite galaxies. Additionally, a galaxy's visual morphology is more sensitive to external interactions or mergers than structural parameters such as the bulge-to-total ratio (B/T), concentration index (R_90/R_50) and Sérsic index. Therefore, in this work we focus only on central galaxies that are visually defined as spiral galaxies.
Because both SDSS and ALFALFA are flux-limited surveys, they exhibit strong selection biases in stellar mass and HI gas mass, respectively. Consequently, a joint incompleteness correction is necessary for the SDSS-ALFALFA matched sample. After performing a proper incompleteness correction, the average HI detection fraction in the ALFALFA-SDSS matched sample increases significantly. For central spiral galaxies (defined by visual morphology), the detection fraction is, on average, larger than 90%, even for many low-mass and low-SFR systems. These results are in good agreement with those from the deeper xGASS survey, supporting our incompleteness correction method. It is important to note that if "spirals/disks" are defined by their structural parameters (e.g., bulge-to-total ratio B/T < 0.3), the average detection fraction (∼64%) is significantly lower than that of spiral galaxies defined by visual morphology, especially for galaxies below the SFMS. This is likely because the structure of a galaxy is generally less sensitive to external environmental effects than its visual morphology, which more readily reveals features such as tidal disturbances, mergers and asymmetries.
The high overall detection fraction for the central spiral galaxies (defined by visual morphology) allows the derivation of unbiased HI scaling relations for this particular population. We find that the SFE_HI-sSFR and μ_HI-sSFR scaling relations for central spirals show a strong systematic dependence on stellar mass. At any given stellar mass, the HI gas mass fraction remains roughly constant with varying sSFR, suggesting that an HI reservoir is present in central spirals regardless of their star formation status, down to M_* ∼ 10^9 M_⊙.
Together with the tight correlation between the molecular gas mass fraction and sSFR for galaxies across a wide range of properties, these results indicate that a cessation of molecular gas (H_2) supply drives the decline in SFR of central spirals in the local universe, despite the abundance of HI gas. Our findings hence provide key observational evidence for the dramatically different behaviour of the different phases of their multi-phase ISM.
In regular, undisturbed spiral galaxies, the atomic gas usually extends to larger radii than the stellar and molecular gas disks <cit.>. Gas inflow carrying excess angular momentum can accumulate in a stable outer ring, persisting over extended periods in the absence of perturbations <cit.>. This would deplete the material available for new star formation in the inner disk, providing a straightforward explanation for our observations, a scenario also supported by simulations <cit.>.
Moreover, the surface density of HI gas at larger radii is often below the critical threshold needed for conversion into H_2, which is necessary for initiating star formation <cit.>. Coupled with the tight correlation between molecular gas mass fraction and sSFR across diverse galaxy types, our results indicate that the reduction in SFR in central spirals predominantly results from a cessation of H_2 supply and a concomitant decline in star formation efficiency, even though there is plenty of HI gas around.
Alternatively, the disruption of the H_2 supply could result from feedback mechanisms. In this scenario, our findings indicate that such feedback should have minimal impact on the HI located in the outer disk regions, thereby imposing significant constraints on the strength and nature of feedback in theoretical models <cit.>.
Future high-spatial-resolution 21 cm observations (e.g., with the SKA) of a sample of central spiral galaxies with different star formation levels and feedback statuses will be necessary to obtain their resolved HI gas properties, including surface density, kinematics and disk instability. Such observations will help determine whether the HI in quenched central spirals is confined beyond the optical disk, as proposed in the scenario of <cit.>, or whether the HI lies within the disk itself but does not convert into H_2 for some reason. These observations will be crucial for testing these hypotheses and advancing our knowledge of the complex interplay between gas dynamics, star formation and feedback processes in galaxies. The dark matter halo may also play an important role in regulating gas cycling and star formation, and will be explored in our following work.
§ ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China (No. 12303010, 12125301, 12192220, 12192222, 12121003, 12192223), and the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A07.
§ SELECTION BIASES IN SDSS AND ALFALFA SAMPLES
Due to the SDSS spectroscopic selection of r < 17.77, the sample is flux-limited. As shown in the left panel of Figure <ref>, even within the narrow redshift range of 0.02 < z < 0.05, this selection produces a strong bias towards massive and star-forming galaxies, and faint sources are progressively missed at higher redshifts.
Similar to SDSS, ALFALFA is also a flux-limited sample, with a typical effective integration time of 48 seconds per source, leading to a strong selection in HI gas mass. As shown in <cit.>, the completeness limits for Code 1 and Code 2 sources in ALFALFA are both functions of W_50, with a break at W_50 = 300 km/s. At the same integrated line flux, ALFALFA sources with narrower profiles (i.e., smaller W_50) have higher S/N and hence are easier to detect than broader ones. The selection effects of ALFALFA within the narrow redshift range of 0.02 < z < 0.05 are clearly shown in the right panel of Figure <ref>. At a given (low) HI gas mass, the number of non-detections increases significantly with both redshift and W_50 due to sample incompleteness, which must be properly accounted for in any statistical analysis.
§ COMPARISON BETWEEN DIFFERENT CLASSIFICATIONS OF SPIRAL GALAXIES
Spiral galaxies can be defined by their visual morphology, structure or kinematics. To demonstrate the impact of different methods of defining "spiral" galaxies, in Figure <ref> we show the distribution functions of the concentration index R_90/R_50 (upper left panel), bulge-to-total ratio B/T (upper right panel) and Sérsic index (lower left panel) for central galaxies in a narrow stellar mass range of 10^10-10^10.3 M_⊙ in the SDSS parent sample within the redshift range 0.02 < z < 0.05. We retrieved the r-band Sérsic index and the Petrosian half-light radii R_50 and R_90 from the New York University Value-Added Catalog <cit.>. The mass-weighted bulge-to-total ratios (B/T) are taken from <cit.> and <cit.>, based on a fitting model with a pure exponential disk and a de Vaucouleurs bulge. Galaxies are split into "spirals" (blue histograms, 3,166 galaxies), "ellipticals" (red histograms, 537 galaxies) and "uncertains" (grey histograms, 2,918 galaxies) as classified by Galaxy Zoo (visual classification). Figure <ref> clearly shows that morphology does not correspond one-to-one with structure: visually defined spirals/ellipticals/uncertains span a wide range of R_90/R_50, B/T and Sérsic index. Other stellar mass bins show similar results: at a given structural parameter there is significant overlap between the three populations. Hence, different definitions (i.e., by visual morphology or by structural parameters) can produce significantly different galaxy samples.
It should be noted that machine learning (ML) methods trained on visually classified morphologies (e.g., Galaxy Zoo) have also been widely used to classify galaxies. For instance, using the ML-based morphology catalog of <cit.>, disk galaxies can be selected with T-type > 0 and P_ disk > 0.5, where T-type describes the galaxy morphology type (T-type > 0 corresponding to late-type morphologies in the Hubble sequence) and P_ disk is the probability of a galaxy being a disk. The lower right panel of Figure <ref> clearly shows that centrals with T-type > 0 and P_ disk > 0.5 include significant numbers of "uncertains" as classified in Galaxy Zoo. Therefore, although ML-based morphology classification works very well in general for the whole SDSS galaxy population, in this particular case it fails to recover the human visual classification. As discussed above, many disk-like "uncertains" share similar structural parameters (e.g., B/T, Sérsic index) with "spirals", but their visual morphological features look different from those of "spirals". These differences may be related to external environmental effects such as perturbations and interactions, which are difficult for some ML methods to capture. Such external environmental effects tend to produce HI-poor systems.
§ BIAS INTRODUCED BY 1-DIMENSIONAL STATISTICAL ANALYSIS
Figure <ref> shows the SFE_HI-sSFR relation of central spirals in the ALFALFA-SDSS matched sample in three stellar mass bins. Blue dots in the upper panels are individual galaxies. Level contours of the galaxy number density in each stellar mass bin are shown in the lower three panels, with colors ranging from blue to red as a function of number density, after applying the sample incompleteness corrections. The red lines are the best fits to the data using the orthogonal distance regression (ODR) method, which minimizes the sum of the orthogonal distances from the data points to the fitted line. The blue lines show the average SFE_HI (at a given sSFR) as a function of sSFR, calculated with a moving average of 0.5 dex in sSFR.
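An orthogonal-distance straight-line fit of this kind can be performed with scipy.odr, as in the sketch below; applying the incompleteness weights through the wd/we arguments is an assumption of this illustration rather than a statement about the exact procedure used for the red lines.

import numpy as np
from scipy import odr

def odr_linear_fit(x, y, weights=None):
    """Straight-line fit minimizing orthogonal distances (scipy.odr).

    weights: optional V_total/V_max weights applied to both axes
    (an assumption of this sketch); None gives an unweighted ODR fit."""
    def linear(beta, x):
        return beta[0] * x + beta[1]
    data = odr.Data(np.asarray(x), np.asarray(y), wd=weights, we=weights)
    model = odr.Model(linear)
    fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
    slope, intercept = fit.beta
    return slope, intercept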
In the lower panels, the contour lines of the two-dimensional density distributions clearly indicate the underlying trends. Evidently, the ODR fit (red line) traces the ridge of the contour lines better than the blue line, and hence represents a more accurate fit to the data, in particular for the low-mass bin, where the scatter is larger. This is expected, since ODR fitting takes the variation along both axes into account. The slope of the best ODR fits is around unity in all three stellar mass bins, indicating that galaxies are distributed along lines of constant HI gas-to-stellar-mass ratio, i.e., as sSFR decreases, the HI gas-to-stellar-mass ratio remains constant.
The commonly used one-dimensional scaling relation (e.g., the blue lines), which considers only the variation along the y-axis at a given x, agrees with the ridge line and the ODR fit for the massive galaxies (lower right panel) because of the relatively narrow data distribution, i.e., the variation is small along both axes. In the low-mass bin (lower left panel), however, the blue line deviates significantly from the ridge line and the red line, owing to the larger scatter and the shape of the data distribution. The blue line also has, on average, a slope shallower than unity, which would suggest that as sSFR decreases the HI gas-to-stellar-mass ratio also decreases, i.e., that galaxies become more HI-poor.
The results shown in Figure <ref> demonstrate that using one-dimensional scaling relations (e.g., the blue lines in the lower panels of Figure <ref> and the lines in the middle panels of Figure <ref>) to describe the trends can be biased, owing to the shape of the data distribution. For instance, in the upper middle panel of Figure <ref>, the flattening of the blue line (lowest stellar mass bin) at high sSFR is due to the effect described in Figure <ref>. Clearly, the corresponding contour lines in the upper left panel are all closely parallel to the diagonal line of constant HI gas-to-stellar-mass ratio.
§ ALTERNATIVE SFR INDICATORS
As discussed in the main text, with proper sample incompleteness corrections the overall HI detection fraction (DF) for central spirals reaches ∼95% for the ALFALFA-SDSS matched sample and ∼98% for the xGASS sample. Hence, our main conclusion that there is a ubiquitous HI reservoir in the vast majority of central spirals (defined by visual morphology) is largely independent of any particular SFR indicator. Here we further test the detailed HI scaling relations shown in Figure <ref> with an alternative SFR indicator.
We repeat the analyses using SED SFRs derived from UV, optical and mid-IR fluxes <cit.>; the results are shown in Figure <ref>. As discussed in the previous section and in Figure <ref>, the trends derived from the one-dimensional method (middle two panels of Figure <ref>) can be significantly biased, whereas the two-dimensional analysis and ODR fitting produce more accurate representations of the true underlying trends. Comparing the right two panels of Figure <ref> with those of Figure <ref>, although the range of the galaxy distribution in sSFR is narrower (owing to the different SFR indicator), the general trends are similar. Both SFR indicators reveal that the SFE_HI-sSFR and μ_HI-sSFR relations correlate strongly with stellar mass and that, at a given stellar mass, μ_HI is largely constant across sSFR.
It should be noted that although there are few fully quenched central spirals with log sSFR (Gyr^-1) < -2 based on the SED SFRs, our main conclusions remain robust: there is a ubiquitous HI reservoir in the vast majority of central spirals, and at a given stellar mass μ_HI is largely constant across sSFR. The exact label applied to the galaxies with the lowest sSFR, i.e., fully quenched or green valley, is therefore less critical.
In addition, in the right two panels of Figure <ref> we also show the FFR derived from the SED-based SFRs, which has a steeper slope than the one in Figure <ref>. It is important to note that although the decrease in sSFR is smaller for the SED-based SFRs (owing to the narrower sSFR range), the corresponding decrease in molecular gas mass is similar to that shown in Figure <ref>. The roughly ten times lower molecular gas content, accompanied by the suppressed star formation efficiency, in the central spirals with the lowest sSFRs indicates that they are indeed in the process of being quenched, while their HI gas mass fraction remains roughly constant.
[Abazajian et al.(2009)Abazajian, Adelman-McCarthy,
Agüeros, Allam, Allende Prieto, An, Anderson, Anderson,
Annis, Bahcall, Bailer-Jones, Barentine, Bassett, Becker,
Beers, Bell, Belokurov, Berlind, Berman, Bernardi, Bickerton,
Bizyaev, Blakeslee, Blanton, Bochanski, Boroski, Brewington,
Brinchmann, Brinkmann, Brunner, Budavári, Carey, Carliles,
Carr, Castander, Cinabro, Connolly, Csabai, Cunha, Czarapata,
Davenport, de Haas, Dilday, Doi, Eisenstein, Evans, Evans,
Fan, Friedman, Frieman, Fukugita, Gänsicke, Gates,
Gillespie, Gilmore, Gonzalez, Gonzalez, Grebel, Gunn,
Györy, Hall, Harding, Harris, Harvanek, Hawley, Hayes,
Heckman, Hendry, Hennessy, Hindsley, Hoblitt, Hogan, Hogg,
Holtzman, Hyde, Ichikawa, Ichikawa, Im, Ivezić, Jester,
Jiang, Johnson, Jorgensen, Jurić, Kent, Kessler, Kleinman,
Knapp, Konishi, Kron, Krzesinski, Kuropatkin, Lampeitl,
Lebedeva, Lee, Lee, French Leger, Lépine, Li, Lima, Lin,
Long, Loomis, Loveday, Lupton, Magnier, Malanushenko,
Malanushenko, Mand elbaum, Margon, Marriner, Martínez-Delgado,
Matsubara, McGehee, McKay, Meiksin, Morrison, Mullally, Munn,
Murphy, Nash, Nebot, Neilsen, Newberg, Newman, Nichol,
Nicinski, Nieto-Santisteban, Nitta, Okamura, Oravetz, Ostriker,
Owen, Padmanabhan, Pan, Park, Pauls, Peoples, Percival, Pier,
Pope, Pourbaix, Price, Purger, Quinn, Raddick, Re Fiorentin,
Richards, Richmond, Riess, Rix, Rockosi, Sako, Schlegel,
Schneider, Scholz, Schreiber, Schwope, Seljak, Sesar, Sheldon,
Shimasaku, Sibley, Simmons, Sivarani, Allyn Smith, Smith,
Smolčić, Snedden, Stebbins, Steinmetz, Stoughton,
Strauss, SubbaRao, Suto, Szalay, Szapudi, Szkody, Tanaka,
Tegmark, Teodoro, Thakar, Tremonti, Tucker, Uomoto, Vanden
Berk, Vandenberg, Vidrih, Vogeley, Voges, Vogt, Wadadekar,
Watters, Weinberg, West, White, Wilhite, Wonders, Yanny,
Yocum, York, Zehavi, Zibetti, & Zucker]Abazajian:2009ef
Abazajian, K. N., Adelman-McCarthy, J. K., Agüeros, M. A., et al.
2009, , 182, 543, 10.1088/0067-0049/182/2/543
[Baldry et al.(2006)Baldry, Balogh, Bower, Glazebrook,
Nichol, Bamford, & Budavari]2006MNRAS.373..469B
Baldry, I. K., Balogh, M. L., Bower, R. G., et al. 2006, , 373,
469, 10.1111/j.1365-2966.2006.11081.x
[Bamford et al.(2009)Bamford, Nichol, Baldry, Land,
Lintott, Schawinski, Slosar, Szalay, Thomas, Torki, Andreescu,
Edmondson, Miller, Murray, Raddick, &
Vandenberg]2009MNRAS.393.1324B
Bamford, S. P., Nichol, R. C., Baldry, I. K., et al. 2009, , 393,
1324, 10.1111/j.1365-2966.2008.14252.x
[Barnes(1992)]1992ApJ...393..484B
Barnes, J. E. 1992, , 393, 484, 10.1086/171522
[Barrera-Ballesteros et al.(2018)Barrera-Ballesteros, Heckman,
Sánchez, Zakamska, Cleary, Zhu, Brinkmann, Drory, & THE
MaNGA TEAM]2018ApJ...852...74B
Barrera-Ballesteros, J. K., Heckman, T., Sánchez, S. F., et al.
2018, , 852, 74, 10.3847/1538-4357/aa9b31
[Bekki & Couch(2011)]2011MNRAS.415.1783B
Bekki, K., & Couch, W. J. 2011, , 415, 1783,
10.1111/j.1365-2966.2011.18821.x
[Bigiel et al.(2008)Bigiel, Leroy, Walter, Brinks, de
Blok, Madore, & Thornley]2008AJ....136.2846B
Bigiel, F., Leroy, A., Walter, F., et al. 2008, , 136, 2846,
10.1088/0004-6256/136/6/2846
[Blanton & Roweis(2007)]2007AJ....133..734B
Blanton, M. R., & Roweis, S. 2007, , 133, 734, 10.1086/510127
[Blanton et al.(2005)Blanton, Schlegel, Strauss,
Brinkmann, Finkbeiner, Fukugita, Gunn, Hogg, Ivezić, Knapp,
Lupton, Munn, Schneider, Tegmark, & Zehavi]2005AJ....129.2562B
Blanton, M. R., Schlegel, D. J., Strauss, M. A., et al. 2005, , 129,
2562, 10.1086/429803
[Bournaud et al.(2005)Bournaud, Jog, &
Combes]2005A A...437...69B
Bournaud, F., Jog, C. J., & Combes, F. 2005, , 437, 69,
10.1051/0004-6361:20042036
[Brinchmann et al.(2004)Brinchmann, Charlot, White,
Tremonti, Kauffmann, Heckman, & Brinkmann]2004MNRAS.351.1151B
Brinchmann, J., Charlot, S., White, S. D. M., et al. 2004, , 351,
1151, 10.1111/j.1365-2966.2004.07881.x
[Broeils & Rhee(1997)]1997A A...324..877B
Broeils, A. H., & Rhee, M. H. 1997, , 324, 877
[Bruzual & Charlot(2003)]2003MNRAS.344.1000B
Bruzual, G., & Charlot, S. 2003, , 344, 1000,
10.1046/j.1365-8711.2003.06897.x
[Casasola et al.(2017)Casasola, Cassarà, Bianchi,
Verstocken, Xilouris, Magrini, Smith, De Looze, Galametz,
Madden, Baes, Clark, Davies, De Vis, Evans, Fritz, Galliano,
Jones, Mosenkov, Viaene, & Ysard]2017A A...605A..18C
Casasola, V., Cassarà, L. P., Bianchi, S., et al. 2017, , 605,
A18, 10.1051/0004-6361/201731020
[Catinella et al.(2010)Catinella, Schiminovich, Kauffmann,
Fabello, Wang, Hummels, Lemonias, Moran, Wu, Giovanelli,
Haynes, Heckman, Basu-Zych, Blanton, Brinchmann, Budavári,
Gonçalves, Johnson, Kennicutt, Madore, Martin, Rich,
Tacconi, Thilker, Wild, & Wyder]2010MNRAS.403..683C
Catinella, B., Schiminovich, D., Kauffmann, G., et al. 2010, ,
403, 683, 10.1111/j.1365-2966.2009.16180.x
[Catinella et al.(2013)Catinella, Schiminovich, Cortese,
Fabello, Hummels, Moran, Lemonias, Cooper, Wu, Heckman, &
Wang]2013MNRAS.436...34C
Catinella, B., Schiminovich, D., Cortese, L., et al. 2013, , 436,
34, 10.1093/mnras/stt1417
[Catinella et al.(2018)Catinella, Saintonge, Janowiecki,
Cortese, Davé, Lemonias, Cooper, Schiminovich, Hummels,
Fabello, Geréb, Kilborn, & Wang]2018Catinella
Catinella, B., Saintonge, A., Janowiecki, S., et al. 2018, , 476,
875, 10.1093/mnras/sty089
[Chabrier(2003)]2003PASP..115..763C
Chabrier, G. 2003, , 115, 763, 10.1086/376392
[Chung et al.(2009)Chung, van Gorkom, Kenney, Crowl, &
Vollmer]2009AJ....138.1741C
Chung, A., van Gorkom, J. H., Kenney, J. D. P., Crowl, H., &
Vollmer, B. 2009, , 138, 1741, 10.1088/0004-6256/138/6/1741
[Cicone et al.(2017)Cicone, Bothwell, Wagg, Møller, De
Breuck, Zhang, Martín, Maiolino, Severgnini, Aravena,
Belfiore, Espada, Flütsch, Impellizzeri, Peng, Raj,
Ramírez-Olivencia, Riechers, & Schawinski]2017A A...604A..53C
Cicone, C., Bothwell, M., Wagg, J., et al. 2017, , 604, A53,
10.1051/0004-6361/201730605
[Cortese et al.(2020)Cortese, Catinella, Cook, &
Janowiecki]2020MNRAS.494L..42C
Cortese, L., Catinella, B., Cook, R. H. W., & Janowiecki, S. 2020,
, 494, L42, 10.1093/mnrasl/slaa032
[Cortese et al.(2021)Cortese, Catinella, &
Smith]2021PASA...38...35C
Cortese, L., Catinella, B., & Smith, R. 2021, , 38, e035,
10.1017/pasa.2021.18
[Daddi et al.(2005)Daddi, Renzini, Pirzkal, Cimatti,
Malhotra, Stiavelli, Xu, Pasquali, Rhoads, Brusa, di Serego
Alighieri, Ferguson, Koekemoer, Moustakas, Panagia, &
Windhorst]2005ApJ...626..680D
Daddi, E., Renzini, A., Pirzkal, N., et al. 2005, , 626, 680,
10.1086/430104
[Daddi et al.(2010)Daddi, Bournaud, Walter, Dannerbauer,
Carilli, Dickinson, Elbaz, Morrison, Riechers, Onodera, Salmi,
Krips, & Stern]2010ApJ...713..686D
Daddi, E., Bournaud, F., Walter, F., et al. 2010, , 713, 686,
10.1088/0004-637X/713/1/686
[Davis et al.(2019)Davis, Greene, Ma, Blakeslee, Dawson,
Pandya, Veale, & Zabel]2019MNRAS.486.1404D
Davis, T. A., Greene, J. E., Ma, C.-P., et al. 2019, , 486, 1404,
10.1093/mnras/stz871
[de los Reyes & Kennicutt(2019)]2019ApJ...872...16D
de los Reyes, M. A. C., & Kennicutt, Robert C., J. 2019, , 872, 16,
10.3847/1538-4357/aafa82
[Dewdney et al.(2009)Dewdney, Hall, Schilizzi, &
Lazio]2009IEEEP..97.1482D
Dewdney, P. E., Hall, P. J., Schilizzi, R. T., & Lazio, T. J. L. W.
2009, IEEE Proceedings, 97, 1482, 10.1109/JPROC.2009.2021005
[Domínguez Sánchez et al.(2018)Domínguez
Sánchez, Huertas-Company, Bernardi, Tuccillo, &
Fischer]2018MNRAS.476.3661D
Domínguez Sánchez, H., Huertas-Company, M., Bernardi, M.,
Tuccillo, D., & Fischer, J. L. 2018, , 476, 3661,
10.1093/mnras/sty338
[Dou et al.(2021a)Dou, Peng, Renzini, Ho,
Mannucci, Daddi, Gao, Maiolino, Zhang, Gu, Li, Lilly, &
Yuan]2021ApJ...907..114D
Dou, J., Peng, Y., Renzini, A., et al. 2021a, , 907,
114, 10.3847/1538-4357/abd17c
[Dou et al.(2021b)Dou, Peng, Renzini, Ho,
Mannucci, Daddi, Gao, Maiolino, Zhang, Gu, Li, Lilly, Pan,
Yuan, & Zheng]2021ApJ...915...94D
—. 2021b, , 915, 94, 10.3847/1538-4357/abfaf7
[Elmegreen & Efremov(1997)]1997ApJ...480..235E
Elmegreen, B. G., & Efremov, Y. N. 1997, , 480, 235,
10.1086/303966
[Freundlich et al.(2019)Freundlich, Combes, Tacconi,
Genzel, Garcia-Burillo, Neri, Contini, Bolatto, Lilly,
Salomé, Bicalho, Boissier, Boone, Bouché, Bournaud,
Burkert, Carollo, Cooper, Cox, Feruglio, Förster Schreiber,
Juneau, Lippa, Lutz, Naab, Renzini, Saintonge, Sternberg,
Walter, Weiner, Weiß, & Wuyts]2019A A...622A.105F
Freundlich, J., Combes, F., Tacconi, L. J., et al. 2019, , 622,
A105, 10.1051/0004-6361/201732223
[Gao et al.(2018)Gao, Ho, Barth, &
Li]2018ApJ...862..100G
Gao, H., Ho, L. C., Barth, A. J., & Li, Z.-Y. 2018, , 862, 100,
10.3847/1538-4357/aacdac
[Gao & Solomon(2004)]2004ApJ...606..271G
Gao, Y., & Solomon, P. M. 2004, , 606, 271, 10.1086/382999
[Gavazzi et al.(2005)Gavazzi, Boselli, van Driel, &
O'Neil]2005Gavazzi
Gavazzi, G., Boselli, A., van Driel, W., & O'Neil, K. 2005, , 429,
439, 10.1051/0004-6361:20041678
[Genzel et al.(2010)Genzel, Tacconi, Gracia-Carpio,
Sternberg, Cooper, Shapiro, Bolatto, Bouché, Bournaud,
Burkert, Combes, Comerford, Cox, Davis, Förster Schreiber,
Garcia-Burillo, Lutz, Naab, Neri, Omont, Shapley, &
Weiner]2010MNRAS.407.2091G
Genzel, R., Tacconi, L. J., Gracia-Carpio, J., et al. 2010, ,
407, 2091, 10.1111/j.1365-2966.2010.16969.x
[Genzel et al.(2015)Genzel, Tacconi, Lutz, Saintonge,
Berta, Magnelli, Combes, García-Burillo, Neri, Bolatto,
Contini, Lilly, Boissier, Boone, Bouché, Bournaud, Burkert,
Carollo, Colina, Cooper, Cox, Feruglio, Förster Schreiber,
Freundlich, Gracia-Carpio, Juneau, Kovac, Lippa, Naab, Salome,
Renzini, Sternberg, Walter, Weiner, Weiss, &
Wuyts]2015ApJ...800...20G
Genzel, R., Tacconi, L. J., Lutz, D., et al. 2015, , 800, 20,
10.1088/0004-637X/800/1/20
[Giovanelli et al.(2005)Giovanelli, Haynes, Kent,
Perillat, Saintonge, Brosch, Catinella, Hoffman, Stierwalt,
Spekkens, Lerner, Masters, Momjian, Rosenberg, Springob,
Boselli, Charmandaris, Darling, Davies, Garcia Lambas, Gavazzi,
Giovanardi, Hardy, Hunt, Iovino, Karachentsev, Karachentseva,
Koopmann, Marinoni, Minchin, Muller, Putman, Pantoja, Salzer,
Scodeggio, Skillman, Solanes, Valotto, van Driel, & van
Zee]2005Giovanelli
Giovanelli, R., Haynes, M. P., Kent, B. R., et al. 2005, , 130,
2598, 10.1086/497431
[Grossi et al.(2009)Grossi, di Serego Alighieri, Giovanardi,
Gavazzi, Giovanelli, Haynes, Kent, Pellegrini, Stierwalt, &
Trinchieri]2009A A...498..407G
Grossi, M., di Serego Alighieri, S., Giovanardi, C., et al. 2009, ,
498, 407, 10.1051/0004-6361/200810823
[Haynes & Giovanelli(1984)]1984Haynes
Haynes, M. P., & Giovanelli, R. 1984, , 89, 758, 10.1086/113573
[Haynes et al.(2011)Haynes, Giovanelli, Martin, Hess,
Saintonge, Adams, Hallenbeck, Hoffman, Huang, Kent, Koopmann,
Papastergis, Stierwalt, Balonek, Craig, Higdon, Kornreich,
Miller, O'Donoghue, Olowin, Rosenberg, Spekkens, Troischt, &
Wilcots]2011AJ....142..170H
Haynes, M. P., Giovanelli, R., Martin, A. M., et al. 2011, , 142,
170, 10.1088/0004-6256/142/5/170
[Haynes et al.(2018)Haynes, Giovanelli, Kent, Adams,
Balonek, Craig, Fertig, Finn, Giovanardi, Hallenbeck, Hess,
Hoffman, Huang, Jones, Koopmann, Kornreich, Leisman, Miller,
Moorman, O'Connor, O'Donoghue, Papastergis, Troischt, Stark, &
Xiao]2018ApJ...861...49H
Haynes, M. P., Giovanelli, R., Kent, B. R., et al. 2018, , 861, 49,
10.3847/1538-4357/aac956
[Janowiecki et al.(2020)Janowiecki, Catinella, Cortese,
Saintonge, & Wang]2020MNRAS.493.1982J
Janowiecki, S., Catinella, B., Cortese, L., Saintonge, A., & Wang,
J. 2020, , 493, 1982, 10.1093/mnras/staa178
[Kauffmann et al.(2003)Kauffmann, Heckman, White, Charlot,
Tremonti, Brinchmann, Bruzual, Peng, Seibert, Bernardi,
Blanton, Brinkmann, Castander, Csábai, Fukugita, Ivezic,
Munn, Nichol, Padmanabhan, Thakar, Weinberg, &
York]2003MNRAS.341...33K
Kauffmann, G., Heckman, T. M., White, S. D. M., et al. 2003, ,
341, 33, 10.1046/j.1365-8711.2003.06291.x
[Kennicutt(1998)]1998ApJ...498..541K
Kennicutt, Robert C., J. 1998, , 498, 541, 10.1086/305588
[Kennicutt & De Los Reyes(2021)]2021ApJ...908...61K
Kennicutt, Robert C., J., & De Los Reyes, M. A. C. 2021, , 908, 61,
10.3847/1538-4357/abd3a2
[Kennicutt & Evans(2012)]2012ARA A..50..531K
Kennicutt, R. C., & Evans, N. J. 2012, , 50, 531,
10.1146/annurev-astro-081811-125610
[Koss et al.(2021)Koss, Strittmatter, Lamperti, Shimizu,
Trakhtenbrot, Saintonge, Treister, Cicone, Mushotzky, Oh,
Ricci, Stern, Ananna, Bauer, Privon, Bär, De Breuck,
Harrison, Ichikawa, Powell, Rosario, Sanders, Schawinski, Shao,
Megan Urry, & Veilleux]2021ApJS..252...29K
Koss, M. J., Strittmatter, B., Lamperti, I., et al. 2021, , 252,
29, 10.3847/1538-4365/abcbfe
[Krumholz et al.(2009)Krumholz, McKee, &
Tumlinson]2009ApJ...699..850K
Krumholz, M. R., McKee, C. F., & Tumlinson, J. 2009, , 699, 850,
10.1088/0004-637X/699/1/850
[Lemonias et al.(2014)Lemonias, Schiminovich, Catinella,
Heckman, & Moran]2014ApJ...790...27L
Lemonias, J. J., Schiminovich, D., Catinella, B., Heckman, T. M., &
Moran, S. M. 2014, , 790, 27, 10.1088/0004-637X/790/1/27
[Leroy et al.(2008)Leroy, Walter, Brinks, Bigiel, de
Blok, Madore, & Thornley]2008Leroy
Leroy, A. K., Walter, F., Brinks, E., et al. 2008, , 136, 2782,
10.1088/0004-6256/136/6/2782
[Lin et al.(2019)Lin, Pan, Ellison, Belfiore, Shi,
Sánchez, Hsieh, Rowland s, Ramya, Thorp, Li, &
Maiolino]2019ApJ...884L..33L
Lin, L., Pan, H.-A., Ellison, S. L., et al. 2019, , 884, L33,
10.3847/2041-8213/ab4815
[Lintott et al.(2011)Lintott, Schawinski, Bamford, Slosar,
Land, Thomas, Edmondson, Masters, Nichol, Raddick, Szalay,
Andreescu, Murray, & Vandenberg]2011MNRAS.410..166L
Lintott, C., Schawinski, K., Bamford, S., et al. 2011, , 410,
166, 10.1111/j.1365-2966.2010.17432.x
[Lintott et al.(2008)Lintott, Schawinski, Slosar, Land,
Bamford, Thomas, Raddick, Nichol, Szalay, Andreescu, Murray, &
Vandenberg]2008MNRAS.389.1179L
Lintott, C. J., Schawinski, K., Slosar, A., et al. 2008, , 389,
1179, 10.1111/j.1365-2966.2008.13689.x
[Lu et al.(2022)Lu, Xu, Wang, Cai, He, Xu, Xia,
Mao, Springel, & Hernquist]2022MNRAS.509.2707L
Lu, S., Xu, D., Wang, S., et al. 2022, , 509, 2707,
10.1093/mnras/stab3169
[Martin et al.(2005)Martin, Fanson, Schiminovich,
Morrissey, Friedman, Barlow, Conrow, Grange, Jelinsky,
Milliard, Siegmund, Bianchi, Byun, Donas, Forster, Heckman,
Lee, Madore, Malina, Neff, Rich, Small, Surber, Szalay,
Welsh, & Wyder]2005ApJ...619L...1M
Martin, D. C., Fanson, J., Schiminovich, D., et al. 2005, , 619,
L1, 10.1086/426387
[Mendel et al.(2014)Mendel, Simard, Palmer, Ellison, &
Patton]2014ApJS..210....3M
Mendel, J. T., Simard, L., Palmer, M., Ellison, S. L., & Patton,
D. R. 2014, , 210, 3, 10.1088/0067-0049/210/1/3
[Morselli et al.(2020)Morselli, Rodighiero, Enia,
Corbelli, Casasola, Rodríguez-Muñoz, Renzini, Tacchella,
Baronchelli, Bianchi, Cassata, Franceschini, Mancini, Negrello,
Popesso, & Romano]2020MNRAS.496.4606M
Morselli, L., Rodighiero, G., Enia, A., et al. 2020, , 496, 4606,
10.1093/mnras/staa1811
[Onodera et al.(2012)Onodera, Renzini, Carollo,
Cappellari, Mancini, Strazzullo, Daddi, Arimoto, Gobat, Yamada,
McCracken, Ilbert, Capak, Cimatti, Giavalisco, Koekemoer, Kong,
Lilly, Motohara, Ohta, Sanders, Scoville, Tamura, &
Taniguchi]2012ApJ...755...26O
Onodera, M., Renzini, A., Carollo, M., et al. 2012, , 755, 26,
10.1088/0004-637X/755/1/26
[Parkash et al.(2019)Parkash, Brown, Jarrett,
Fraser-McKelvie, & Cluver]2019MNRAS.485.3169P
Parkash, V., Brown, M. J. I., Jarrett, T. H., Fraser-McKelvie, A., &
Cluver, M. E. 2019, , 485, 3169, 10.1093/mnras/stz593
[Peng et al.(2012)Peng, Lilly, Renzini, &
Carollo]2012ApJ...757....4P
Peng, Y.-j., Lilly, S. J., Renzini, A., & Carollo, M. 2012, , 757,
4, 10.1088/0004-637X/757/1/4
[Peng & Renzini(2020)]2020MNRAS.491L..51P
Peng, Y.-j., & Renzini, A. 2020, , 491, L51,
10.1093/mnrasl/slz163
[Peng et al.(2010)Peng, Lilly, Kovač, Bolzonella,
Pozzetti, Renzini, Zamorani, Ilbert, Knobel, Iovino, Maier,
Cucciati, Tasca, Carollo, Silverman, Kampczyk, de Ravel,
Sanders, Scoville, Contini, Mainieri, Scodeggio, Kneib, Le
Fèvre, Bardelli, Bongiorno, Caputi, Coppa, de la Torre,
Franzetti, Garilli, Lamareille, Le Borgne, Le Brun, Mignoli,
Perez Montero, Pello, Ricciardelli, Tanaka, Tresse, Vergani,
Welikala, Zucca, Oesch, Abbas, Barnes, Bordoloi, Bottini,
Cappi, Cassata, Cimatti, Fumana, Hasinger, Koekemoer,
Leauthaud, Maccagni, Marinoni, McCracken, Memeo, Meneux, Nair,
Porciani, Presotto, & Scaramella]Peng:2010gn
Peng, Y.-j., Lilly, S. J., Kovač, K., et al. 2010, , 721,
193, 10.1088/0004-637X/721/1/193
[Renzini(2020)]2020MNRAS.495L..42R
Renzini, A. 2020, , 495, L42, 10.1093/mnrasl/slaa054
[Renzini & Peng(2015)]2015Renzini
Renzini, A., & Peng, Y.-j. 2015, , 801, L29,
10.1088/2041-8205/801/2/L29
[Saintonge & Catinella(2022)]2022ARA A..60..319S
|
http://arxiv.org/abs/2409.03201v1 | 20240905024759 | Model Predictive Online Trajectory Planning for Adaptive Battery Discharging in Fuel Cell Vehicle | [
"Katsuya Shigematsu",
"Hikaru Hoshino",
"Eiko Furutani"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Model Predictive Online Trajectory Planning for Adaptive Battery Discharging in Fuel Cell Vehicle
Katsuya Shigematsu, Hikaru Hoshino, Eiko Furutani
Department of Electrical Materials and Engineering
University of Hyogo
2167 Shosya, Himeji, Hyogo 671-2280, Japan
[email protected], {hoshino, furutani}@eng-u-hyogo.ac.jp
September 5, 2024
==================================================================================================================================================================================================================================================================
§ ABSTRACT
This paper presents an online trajectory planning approach for optimal coordination of Fuel Cell (FC) and battery in plug-in Hybrid Electric Vehicle (HEV).
One of the main challenges in the energy management of plug-in HEVs is generating State-of-Charge (SOC) reference curves by optimally depleting the battery under high uncertainties in driving scenarios.
Recent studies have begun to explore the potential of utilizing partial trip information for optimal SOC trajectory planning, but dynamic responses of the FC system are not taken into account.
On the other hand, research focusing on dynamic operation of FC systems often focuses on air flow management, and battery has been treated only partially.
Our aim is to fill this gap by designing an online trajectory planner for dynamic coordination of FC and battery systems that works with a high-level SOC planner in a hierarchical manner.
We propose an iterative LQR based online trajectory planning method where the amount of electricity dischargeable at each driving segment can be explicitly and adaptively specified by the high-level planner.
Numerical results are provided as a proof of concept example to show the effectiveness of the proposed approach.
plug-in hybrid vehicle, charge-depleting strategy, model predictive control, nonlinear systems
§ INTRODUCTION
Hydrogen Fuel Cell Vehicles (FCVs) have emerged as a promising solution in the automotive industry for reducing greenhouse gas emissions.
In particular, for heavy-duty applications, FCVs offer distinct advantages over battery electric vehicles (BEVs) due to their long driving range and cost-effectiveness when operating at full payload capacity <cit.>.
FCVs usually have a hybrid configuration with a Fuel Cell (FC) system and a battery storage system, which overcomes the slow dynamic response of the FC system <cit.>.
With plug-in Hybrid Electric Vehicles (HEVs), grid electricity charged to the battery can be consumed during a trip.
The so-called charge-depleting/charge-sustaining strategies <cit.> involve discharging the battery initially to a certain level of State-of-Charge (SOC) and then maintaining the SOC around that level in the remaining operation.
However, the myopic battery depletion is not optimal, and devising an appropriate energy management strategy is crucial for optimal coordination of FC and battery storage systems to achieve high fuel economy.
Various methods have been proposed for determining SOC reference curves.
Linear or affine decrease of SOC with distance traveled is used in, e.g., <cit.>, and SOC reference curves are built using historical driving data in, e.g., <cit.>.
These methods do not require online trip information.
In turn, they may not achieve optimal battery usage due to the lack of trip details.
With advancements in intelligent transportation systems and navigation technology, it has become feasible to obtain partial trip information such as route segment lengths and average speeds.
Thus, recent studies have begun to explore the potential of utilizing these trip information to predict future driving cycles and to generate SOC reference curves <cit.>.
However, how to effectively and efficiently exploit partial trip information for online energy management of plug-in HEV is an open problem for further investigation.
In this paper, we discuss how to construct a hierarchical energy management structure that is suitable for adaptive SOC trajectory planning for plug-in HEVs under high uncertainties in driving scenarios.
The above-mentioned approaches <cit.> using partial trip information are promising, but these studies utilize a variant of the Equivalent Consumption Minimization Strategy (ECMS), which does not consider the dynamic responses of FC systems and only ensures a near-optimal local solution <cit.>.
On the other hand, for the purpose of dynamic optimization of FC systems, air flow controllers are designed using various methods such as adaptive control <cit.>, sliding-mode control <cit.>, and Model Predictive Control (MPC) <cit.>.
However, these methods do not directly optimize battery usage.
Thus, to construct an efficient energy management strategy that harnesses the dynamic coordination of the FC and battery systems in FCVs, an appropriate trajectory planner needs to be designed to work between an upper-level SOC planner and lower-level feedback controllers (see sec:model for the proposed hierarchical strategy).
The main contribution of this paper is to propose an online trajectory planning method where the amount of electricity dischargeable in each driving segment can be explicitly and adaptively specified to facilitate generating SOC references in the upper-level SOC planner.
The proposed method has the following advantages:
* We utilize a state-of-the-art online trajectory planning method called iterative Linear Quadratic Regulator (iLQR) <cit.>, which has been developed for robot motion planning and solves an optimal control problem at each time step in an MPC (receding horizon) framework. While many MPC-based energy management strategies are proposed in, e.g.,<cit.>, most of them use highly simplified models to balance with computational burden. This paper uses a nonlinear dynamical model that captures air flow transients in FC systems leveraging computationally efficient algorithm of iLQR.
* We carefully formulate an optimal control problem such that the amount of electricity dischargeable in each driving segment can be specified to facilitate coordination between the upper-level planner. While several studies have addressed dynamic coordination of FC and battery/ultracapacitor systems <cit.>, they focus on charge-sustaining behavior in non plug-in HEVs and coordination with the upper-planner is not intended.
The paper is organized as follows.
In sec:model, a mathematical model of the hybrid FC-battery system is introduced.
The proposed hierarchical energy management architecture and the online trajectory planning approach are described in sec:method.
Numerical results are provided in sec:simulation.
Conclusions and future works are summarized in sec:conclusion.
§ MODEL OF FUEL CELL AND BATTERY SYSTEMS
This section introduces the plant model used for the design of the proposed online trajectory planning method.
fig:fc_system schematically shows the structure of the FC system, which consists of a Proton Exchange Membrane (PEM) FC stack,
the air supply subsystem including a compressor and a supply manifold, the hydrogen supply subsystem, and the humidifier and the cooling controller <cit.>.
Among them, the air supply system, which provides oxygen to the PEMFC stack, is crucial for the purpose of energy management, since the air compressor can consume up to 30% of the fuel cell power during rapid increases in the air flow <cit.>.
More importantly, there is a risk that oxygen is depleted when the stack current increases rapidly.
This oxygen starvation results in a rapid drop of the stack voltage and even reduces the lifetime of the fuel cell membrane.
However, the air supply rate is limited by the supply manifold dynamics, and centrifugal compressors are susceptible to surge and choke, which limit the efficiency and performance of the compressor <cit.>.
One way to avoid oxygen starvation and to match an arbitrary level of current demand is to split the current demand with a battery, which can be connected with the FC system through a DC/DC converter.
In this study, we use a control-oriented model of PEMFC systems proposed in <cit.>.
This model describes essential dynamics
of the air flow into the cathode of the PEMFC.
It is assumed that a fast proportional–integral (PI) controller ideally regulates the hydrogen flow to the anode to match the oxygen flow, which means that the hydrogen is supplied from a compressed tank timely and a quasi-steady state is achieved.
It is also assumed that humidity and temperature are regulated to their desired levels, and the model does not consider the effect of temperature or humidity fluctuations.
This assumption should not limit the validity of our approach since the temperature and humidity dynamics are considerably slower than the FC power dynamics, and the variation of the temperature and humidity can be captured as a slow variation in model parameters.
For the design of the proposed online trajectory planner, the dynamics of the FC system combined with a battery storage system are represented by a nonlinear state-space model:
ẋ = f(x,u)
where the state vector x is defined as
x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8]^⊤
:= [ x_fc^⊤, x_bat^⊤, q_dis ]^⊤,
x_fc := [ p_O_2, p_N_2, ω_cp, p_sm ]^⊤, x_bat := [ v_soc, v_s, v_f ]^⊤,
where x_fc and x_bat stand for the state vectors for the FC and battery systems, respectively.
The control vector u is given by
u = [u_1, u_2, u_3]^⊤ := [ v_cm, I_st, I_bat]^⊤
where v_cm is the compressor motor voltage, I_st is the stack current, and I_bat is the battery current.
The FC state x_fc consists of p_O_2 and p_N_2 representing the oxygen and nitrogen partial pressures in the cathode, respectively, ω_cp is the angular speed of the compressor, and p_sm is the air pressure in the supply manifold.
The dynamics of the FC system are described by the fist four elements of the vector field f and given by
f_1(x,u) = c_1(-x_1 - x_2 + x_4 - c_2) -c_3 x_1 ψ_ca(x_1, x_2)/c_4 x_1 + c_5 x_2 + c_6
-c_7 u_2,
f_2(x) = c_8(-x_1 - x_2 + x_4 - c_2) - c_3 x_2 ψ_ca(x_1, x_2)/c_4 x_1 + c_5 x_2 + c_6,
f_3(x,u) = -c_9 x_3 - c_10 x_3 [(x_4/c_11)^c_12 - 1] ψ_cm(x_3, x_4)
+c_13 u_1,
f_4(x) = c_14[1 + c_15[(x_4/c_11)^c_12 - 1]]
×[ψ_cm(x_3, x_4) - c_16(-x_1 - x_2 + x_4 - c_2)],
where the meaning and values of the parameters c_i for i=1,…,16 and the representations of the functions ψ_ca(x_1,x_2) and ψ_cm(x_3,x_4)
can be found in <cit.>.
The battery state x_bat consists of v_soc, which represents the SOC as the voltage across a fully charged capacitor C_b, and v_s and v_f, which represent the voltages across the two RC networks of an equivalent-circuit model <cit.>.
The dynamics of the battery system are described by
f_5(x,u) = -1/R_sd C_b x_5 - 1/C_b u_3
f_6(x,u) = -1/R_s(x_5) C_s(x_5) x_6 + 1/C_s(x_5) u_3
f_7(x,u) = -1/R_f(x_5) C_f(x_5) x_7 + 1/C_f(x_5) u_3
where the values of the parameters R_sd and C_b, and functions R_s, C_s, R_f, and C_f are found in <cit.>.
Finally, the last state variable x_8 = q_dis is added to integrate the battery usage:
f_8(u) = u_3.
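To make the battery sub-model concrete, the following minimal Python sketch implements the right-hand sides f_5–f_8 above. The numerical values of R_sd and C_b and the SOC-dependent maps R_s, C_s, R_f, C_f used here are placeholders chosen only for illustration; the actual values and fitted curves are those of the cited equivalent-circuit model.
import numpy as np

# Placeholder equivalent-circuit parameters (illustrative only; the real values
# and SOC-dependent fits are taken from the cited battery model).
R_SD, C_B = 1.0e4, 3.0e3                    # self-discharge resistance, bulk capacitance

def R_s(v_soc): return 0.02 + 0.01 * np.exp(-10.0 * v_soc)   # short-term RC pair
def C_s(v_soc): return 500.0 + 100.0 * v_soc
def R_f(v_soc): return 0.05 + 0.02 * np.exp(-10.0 * v_soc)   # long-term RC pair
def C_f(v_soc): return 5.0e3 + 1.0e3 * v_soc

def battery_rhs(v_soc, v_s, v_f, i_bat):
    """Time derivatives of (v_soc, v_s, v_f, q_dis) for battery current i_bat = u_3."""
    dv_soc = -v_soc / (R_SD * C_B) - i_bat / C_B                      # f_5
    dv_s   = -v_s / (R_s(v_soc) * C_s(v_soc)) + i_bat / C_s(v_soc)    # f_6
    dv_f   = -v_f / (R_f(v_soc) * C_f(v_soc)) + i_bat / C_f(v_soc)    # f_7
    dq_dis = i_bat                                                     # f_8
    return np.array([dv_soc, dv_s, dv_f, dq_dis])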
§ PROPOSED ENERGY MANAGEMENT STRATEGY
This section introduces the proposed hierarchical energy management structure in sec:hierarchical_structure,
and presents the design of the online trajectory planner for adaptive battery discharging in sec:formulation.
§.§ Overall Hierarchical Structure
The hierarchical energy management structure proposed in this study is schematically shown in fig:control_structure.
This structure is inspired by recent work on SOC planning using partial trip information <cit.> and energy management based on short-term velocity predictor <cit.>.
The controller consists of three layers.
At the highest level, an SOC planner determines a battery SOC reference trajectory based on partial trip information.
In real-world scenarios, obtaining detailed trip information, including precise distance, elevation profiles, and speed profiles, can be challenging or even unattainable.
However, map service providers may offer simplified information such as road networks, speed limits, and estimated distances between locations <cit.>.
Neural network based learning approach is considered as a promising candidate for designing the SOC planner that needs to handle intricate relationship between vehicle speed, distance to be covered, and desirable battery depleting schedule <cit.>.
At the second level of the hierarchy, an online trajectory planner generates short-term reference state trajectories considering optimal dynamic coordination between the FC and battery systems.
It utilizes velocity forecasting for several seconds to minimize hydrogen consumption while tracking the power demand and ensuring safety constraints such as avoiding oxygen starvation and compressor limits.
To cope with forecasting errors in power demand, online re-planning based on an MPC (receding horizon) strategy is expected to work effectively.
At the bottom level of the hierarchy, a feedback tracking controller works for achieving the desired state trajectory generated by the planner.
Also, a state observer is required for estimating the entire state vector x, which includes variables that are not directly measurable.
For this, a nonlinear observer is designed for PEMFC systems in, e.g., <cit.>, and for batteries in, e.g., <cit.>.
In this paper, we focus on the development of the online trajectory planner at the second layer in fig:control_structure.
The key idea of the design in this paper is that
the amount of electricity dischargeable from the battery in each driving segment can be explicitly and adaptively specified by the SOC planner.
With this design, we aim to facilitate the coordination between the SOC planning and trajectory planning in the first two layers in the hierarchy.
With existing learning methods for the SOC planner <cit.>, training dataset has been generated based on simulations using ECMS as energy management strategy, which does not consider the dynamic responses of the FC system.
However, considering that the efficiency of the FC system deteriorates during the transients following rapid changes in the power demand due to slow responses of the compressor and the supply manifold <cit.>, it would be crucial to take into account the dynamic coordination of the FC and battery systems during the training of the SOC planner.
To this end, the proposed method enables quantification of the value of discharging a fixed amount of electricity for saving hydrogen consumptions, and these data can be used for training of the SOC planner.
This is illustrated and further discussed with a numerical example in sec:simulation (see fig:H2_consumption).
Consider the discrete-time control system described by the state equation:
x(k+1)=f̅(x(k),u(k),k)
where x stands for the state variable, u for the control input, and f̅(x(k),u(k),k) is the function representing the time evolution of the state.
iLQR is an iterative method: each iteration starts with the sequence of nominal control inputs {u̅_k }_k=0^N-1 obtained in the previous iteration, and the nominal state trajectory {x̅_k }_k=0^N obtained by applying these inputs to the controlled system.
Let δ x_k := x_k - x̅_k and δ u_k := u_k - u̅_k be the deviations from the nominal trajectory and input, respectively, then the system (<ref>) can be linearized as
δx_k+1=A_kδx_k+B_kδu_k
where A_k:=∂f̅/∂x_k and B_k:=∂f̅/∂u_k. Then, for this linear model (<ref>), we solve the LQR problem with the following cost function at each step k:
J = (x̅_N+δx_N)^⊤ Q_f(x̅_N+δx_N)
+∑_k=0^N-1{(x̅_k+δx_k)^⊤ Q (x̅_k+δx_k) .
.+ (u̅_k+δu_k)^⊤ R (u̅_k+δu_k)}
where Q_f∈ℝ^n × n and Q ∈ℝ^n × n are positive semidefinite matrices and R ∈ℝ^m × m is a positive definite matrix.
The matrices A_k and B_k in Eq. (<ref>) are defined as the Jacobians of f̅. However, if the system is given as a continuous-time system ẋ = f(x,u,t), no analytic expression is available for f̅.
In the field of control engineering, a finite difference approximation, such as the forward Euler method, is generally used (see, e.g., <cit.>),
and the system (<ref>) is approximated by
x(t+Δ t) ≃ x(t) + Δ t f(x, u).
Thus, the matrices A_k and B_k are given by
A_k = ∂f̅/∂ x_k = I + Δ t∂ f/∂ x_k,
B_k = ∂f̅/∂ u_k = Δ t∂ f/∂ u_k.
However, with the finite difference method, the approximation accuracy deteriorates as the sampling period Δ t increases. In this paper, we propose to apply the variational equation <cit.> for deriving the time-varying linear system used in iLQR.
For this, from the state x_k and the input u_k, the next state x_k+1 is given by solving the differential equation from time t_k to t_k+Δ t:
ẋ̃̇=f̃(x̃,t), x̃(t_k)=x̃_k :=
[ x_k; u_k ]
where x̃=[x^⊤ u^⊤]^⊤ and f̃=[f(x,u)^⊤ 0]^⊤.
Let ϕ_t(x̃_k,t_k) be the solution starting from the initial value x̃_k of Eq. (<ref>) at time t_k, then the following equations hold:
ϕ̇_t(x̃_k,t_k) = f̃(ϕ_t(x̃_k,t_k),t),
ϕ_t_k(x̃_k,t_k) = x̃_k.
Differentiating these by x̃_k we obtain,
∂ϕ̇_t/∂x̃_k(x̃_k,t_k) =∂f̃/∂x̃(ϕ_t(x̃_k,t_k),t)∂ϕ_t/∂x̃_k(x̃_k,t_k),
∂ϕ_t_k/∂x̃_k(x̃_k,t_k) = I.
Let Φ_t(x̃_k,t_k) =∂ϕ_t/∂x̃_k(x̃_k,t_k), then Eqs. (<ref>) and (<ref>) can be rewritten as follows:
Φ̇_t(x̃_k,t_k) = ∂f̃/∂x̃(ϕ_t(x̃_k,t_k),t)Φ_t,
Φ_t_k(x̃_k,t_k) = I.
Here, Φ_t_k+Δ t can also be written in block form as
Φ_t_k+Δ t=
[ ∂f̅/∂x_k ∂f̅/∂u_k; 0 I ],
since the input component of x̃ is held constant over [t_k, t_k+Δ t].
Therefore, by solving Eqs. (<ref>), (<ref>), (<ref>) and (<ref>) simultaneously, we obtain A_k and B_k.
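As a concrete illustration of this linearization step, the sketch below integrates the augmented state x̃ = [x; u] together with its variational matrix Φ over one sampling period and reads A_k and B_k off the upper blocks of Φ. The dynamics f used here are a simple toy system, not the FC–battery model; the sketch only shows the mechanics of the variational-equation approach.
import numpy as np
from scipy.integrate import solve_ivp

def f(x, u):
    """Toy nonlinear dynamics used only to illustrate the method."""
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u[0]])

def dfdx(x, u):   # Jacobians of the toy dynamics
    return np.array([[0.0, 1.0], [-np.cos(x[0]), -0.1]])

def dfdu(x, u):
    return np.array([[0.0], [1.0]])

def linearize_variational(x_k, u_k, dt):
    """Return A_k, B_k by integrating the variational equation over [t_k, t_k + dt]."""
    n, m = x_k.size, u_k.size

    def rhs(t, z):
        xt = z[:n]                              # current state; the input is held at u_k
        Phi = z[n + m:].reshape(n + m, n + m)
        J = np.zeros((n + m, n + m))            # Jacobian of the augmented field [f; 0]
        J[:n, :n] = dfdx(xt, u_k)
        J[:n, n:] = dfdu(xt, u_k)
        dxdt = np.concatenate([f(xt, u_k), np.zeros(m)])
        return np.concatenate([dxdt, (J @ Phi).ravel()])

    z0 = np.concatenate([x_k, u_k, np.eye(n + m).ravel()])
    sol = solve_ivp(rhs, (0.0, dt), z0, rtol=1e-8, atol=1e-10)
    Phi_T = sol.y[n + m:, -1].reshape(n + m, n + m)
    return Phi_T[:n, :n], Phi_T[:n, n:]         # A_k, B_k

A_k, B_k = linearize_variational(np.array([0.3, 0.0]), np.array([0.1]), dt=0.05)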
§.§ Online trajectory planner
This paper proposes an online trajectory planning method based on iLQR <cit.>, which is one of the state-of-the-art trajectory optimization tools developed for robot motion planning.
It numerically solves an optimal control problem at each time step and works in an MPC (receding horizon) framework.
While various solvers are proposed for optimal control problems, direct shooting <cit.> is a popular class of methods,
and iLQR and its variant, Differential Dynamic Programming, have attracted great attention especially in robotics <cit.>, since they are fast and have a low memory footprint, making them amenable to embedded implementation.
Although the original iLQR <cit.> was not able to handle state and input constraints, the ALTRO algorithm proposed in <cit.> combines an augmented Lagrangian method to handle general constraints.
While some discretization method is required to obtain a discrete-time state equation from the original continuous-time model (<ref>), we used variational-equation based discretization presented in <cit.>.
The algorithm of iLQR is iterative.
It starts from a nominal control sequence {u̅^k }_k=0^N-1 and the corresponding nominal trajectory {x̅^k }_k=0^N, where N stands for the prediction horizon.
Each iteration consists of two processes called the backward pass, which solves an optimal control problem for the linearized dynamics around the nominal trajectory and quadratic approximation of the cost function, i.e., a time-varying LQR problem, based on the dynamic programming principle, and the forward pass, which integrates the original nonlinear state equation to update the nominal control sequence {u̅^k }_k=0^N-1 and the nominal trajectory {x̅^k }_k=0^N while verifying the validity of the optimal solution obtained for the approximated LQR problem.
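Conceptually, this iLQR solve sits inside a receding-horizon loop that re-plans at every sampling instant and warm-starts from the shifted previous solution. The skeleton below is only a sketch of that loop: ilqr_solve, step_plant and predict_demand are hypothetical placeholders standing in for the iLQR/ALTRO solver, the plant (or its simulator), and the short-term demand predictor.
import numpy as np

def mpc_loop(x0, u_init, N, dt, n_steps, ilqr_solve, step_plant, predict_demand):
    """Receding-horizon planning: re-solve the trajectory optimization at every step."""
    x = x0
    u_warm = u_init                      # nominal control sequence, shape (N, m)
    for k in range(n_steps):
        p_ref = predict_demand(k, N)     # forecast of P_ref over the next N samples
        xs, us = ilqr_solve(x, u_warm, p_ref, dt)   # planned state/input trajectories
        x = step_plant(x, us[0], dt)     # apply only the first planned input
        # warm start: shift the plan one step forward and repeat the last input
        u_warm = np.vstack([us[1:], us[-1:]])
    return x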
The nonlinear optimal control problem used in this paper is given by the following minimization problem where the update of the nominal control sequence {Δ u^k }_k=0^N-1 with u^k := u̅^k + Δ u^k is determined at each iteration:
min J = ∑_k=0^N-1{l_ref(x^k,u^k) +l_e(u^k) +l_s(Δu^k) }
s.t. x^0 = x(0),
x^k+1 = F(x^k, u^k), ∀ k ∈𝒩,
λ_O_2(x^k) ≥λ_min, ∀ k ∈𝒩,
x_2^k ≥ a_1ψ_cm(x_1^k, x_2^k) + b_1, ∀ k ∈𝒩,
x_2^k ≤ a_2ψ_cm(x_1^k, x_2^k) + b_2, ∀ k ∈𝒩,
v_cm, min≤ u_1^k ≤ v_cm,max, ∀ k ∈𝒩,
I_st, min≤ u_2^k ≤ I_st,max, ∀ k ∈𝒩,
I_bat,cmax≤ u_3^k ≤ I_bat,dmax, ∀ k ∈𝒩,
x_8^k ≤ Q_max, ∀ k ∈𝒩,
where 𝒩 := {0, …, N }, and F stands for the state transition map derived from the state equation (<ref>).
The objective function consists of three terms: l_ref for the error between the total output power P_sys and the demand power P_ref, l_e for the minimization of the stack current, which is proportional to the hydrogen consumption, and l_s for improving the numerical stability of the iLQR algorithm; they are given as
l_ref(x^k,u^k) := W_ref{P_sys^k(x^k,u^k)-P_ref^k}^2,
l_e(u^k) := W_e u_2^k,
l_s(Δu^k) := Δu^k^⊤ W_sΔu^k,
where W_ref, W_e, and W_s are the weights for these terms.
The constraint (<ref>) is for avoiding oxygen starvation by imposing the lower bound λ_min for the oxygen excess ratio λ_O_2.
The constraints (<ref>) and (<ref>) represents the choke and surge boundaries of the compressor, respectively, and the constants a_1, b_1, a_2, and b_2 are given in <cit.>.
The constraints (<ref>)–(<ref>) specify the upper and lower limits of the inputs u_1 to u_3, respectively.
The constraint (<ref>) imposes the upper limit on x_8 = q_dis; this condition restricts the discharged electricity to the fixed amount Q_max specified by the upper-level planner.
In addition to the above, we added constraints to ensure that x_1=p_O_2 becomes non-negative and that the cathode and supply manifold pressures are larger than the atmospheric pressure.
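For illustration, the sketch below assembles the stage cost l_ref + l_e + l_s for one time step and checks the discharge-limit constraint on x_8 = q_dis. The function p_sys is a dummy stand-in for the total supplied power P_sys (which in the actual model is computed from the stack, battery, and compressor powers), and the weights mirror the values quoted in the simulation section.
import numpy as np

W_REF, W_E = 100.0, 0.01
W_S = np.diag([1.0, 1.0, 0.01])

def p_sys(x, u):
    # Dummy affine stand-in so that the sketch runs; NOT the real power model.
    return 250.0 * u[1] + 120.0 * u[2] - 0.5 * x[2]

def stage_cost(x, u, du, p_ref):
    l_ref = W_REF * (p_sys(x, u) - p_ref) ** 2   # power-tracking term
    l_e = W_E * u[1]                             # stack current ~ hydrogen consumption
    l_s = du @ W_S @ du                          # smoothing term on the input update
    return l_ref + l_e + l_s

def discharge_limit_ok(x, q_max):
    return x[7] <= q_max                         # constraint on x_8 = q_dis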
§ SIMULATION AND DISCUSSION
This section provides numerical results as a proof of concept example to show the effectiveness of the proposed online trajectory planner.
The closed-loop behavior of the proposed planner is discussed, and the value of the dynamic coordination of the FC and battery systems is quantified.
The parameters of the FC and battery systems are based on <cit.>, and summarized in tab:parameters.
The sampling period is set to Δ t = 0.05s and the horizon to N=10.
The weights are W_ref=100, W_e = 0.01, and W_s=diag(1,1,0.01), and the lower limit of the oxygen excess ratio is λ_min = 1.5.
With this setting, we compare the results for four cases of discharging amount: Q_max=72As, 36As, 18As, and 3.6As, which correspond to the amounts of charge such that the battery can be discharged with the maximum current of 36A for 2s, 1s, 0.5s, and 0.1s, respectively.
An arbitrary test sequence consisting of two steps is considered for the reference of the power demand P_ref, which is shown by the red line in fig:demand_power.
By utilizing the short-term velocity predictor, the controller obtains information on the upcoming power demand T_h = 0.5 s in advance and can subsequently prepare for the transient responses.
As shown in fig:demand_power, the supplied power P_sys tracks the power demand P_ref without significant error.
fig:input_trajectory shows the time courses of the planned input variables: the compressor motor voltage v_cm, the stack current I_st, and the battery current I_bat.
It can be seen that the controller prepares for the step increase in the power demand at t = 2.0s by increasing the compressor voltage v_cm in advance at around t = 1.5s, which increases the oxygen flow to the PEMFC stack.
As the compressor consumes more power after t = 1.5s, the stack current I_st and/or the battery current I_bat also increases.
To clarify the difference in the battery usage, fig:q_dis shows the time courses of the state q_dis representing the amount of electricity discharged.
In all cases, q_dis increases gradually until about t=1.5s.
After that, with the setting of Q_max = 3.6As, as an overall trend, the battery is charged before the step increase in the demand, and gradually discharged after the increase in the demand.
The value of q_dis becomes lowest just before the first increase in the power demand at t=2s, and then rises gradually.
Similarly, with the setting of Q_max = 18As, the value of q_dis drops near the second increase in the power demand at t=2.2s.
These behaviors are similar to the battery usage in the charge-sustaining mode in the sense that the battery is charged when the total load is low and discharged when the total load is high.
In contrast, with Q_max = 36As and 72As,
the battery is discharged for all the time until the specified amount of electricity is consumed, and the value of q_dis monotonically increases.
These behaviors are similar to the operation in the charge-depleting mode.
The above results show that the proposed planner can generate trajectories corresponding to both charge-depleting and charge-sustaining modes depending on the setting of Q_max.
In all cases, the value of q_dis reaches and does not exceed the specified limit of Q_max, meaning that the proposed trajectory planner effectively achieves the dynamic coordination of the FC and battery systems within the specified discharging limit
in an adaptive manner.
Finally, fig:H2_consumption shows the performance comparison with respect to the hydrogen consumption.
It can be seen that there is an approximately 2.3% difference in hydrogen consumption between the cases of Q_max=72 As and 3.6 As.
These results quantify the value of discharging a fixed amount of electricity.
Given the fact that the proposed planner allows to explicitly specify the discharging amount, it is not difficult to gather data on the trade-off between the amount of battery usage and reduction in hydrogen consumption both through offline simulations and online data acquisition that can be used as historical data in the learning process of the SOC planner.
Thus, the proposed method can be seen as a key component for a better coordination between the first two layers in the hierarchical energy management structure in fig:control_structure, where the dynamic performance of the FC and battery systems can be included in the training of the SOC planner.
§ CONCLUSIONS
In this paper, we discussed how to construct a hierarchical energy management structure that is suitable for adaptive SOC trajectory planning under high uncertainties in driving scenarios.
Although existing learning methods for designing an SOC planner effectively incorporate partial trip information, dynamic responses of the FC and battery systems cannot be taken into account.
For the better coordination between the SOC planner and trajectory optimizer, we proposed an iLQR based online trajectory planning approach where the dischargeable amount of electricity in each driving segment can be explicitly and adaptively specified by the SOC planner.
Owing to the proposed approach, it becomes easy to gather data on the value of utilizing a fixed amount of battery both through offline simulations and online data acquisition for the training of the SOC planner.
Future directions of this work include development of a learning method for the SOC planner based on the proposed hierarchical strategy.
Also, as next steps toward a real application of the proposed online trajectory planner, implementation of the velocity predictor, the state observer, and the low-level controllers is required. A robustness analysis of the proposed planner under prediction errors is crucial to verify the real-life performance.
It is also necessary to cope with slow variations in model parameters due to temperature and humidity dynamics.
|
http://arxiv.org/abs/2409.03081v1 | 20240904211036 | SUSY Quantum Mechanics, (non)-Analyticity and $\ldots$ Phase Transitions | [
"Alexander V Turbiner"
] | math-ph | [
"math-ph",
"math.MP",
"quant-ph"
] |
^1Instituto de Ciencias Nucleares, Universidad Nacional
Autónoma de México, Apartado Postal 70-543, 04510 México,
D.F., Mexico
[email protected]
§ ABSTRACT
It is shown by analyzing the 1D Schrödinger equation that discontinuities in the coupling constant can occur
in both the energies and the eigenfunctions. Surprisingly, the discontinuities that appear in the energies versus the coupling constant are of three types only:
(i) discontinuous energies (similar to 1st order phase transitions), (ii) a discontinuous first derivative of the energy while the energy itself is continuous (similar to 2nd order phase transitions), (iii) the energy and all its derivatives are continuous but the functions are different below and above the point of discontinuity (similar to infinite order phase transitions). Supersymmetric (SUSY) Quantum Mechanics provides a convenient framework to study this phenomenon.
SUSY Quantum Mechanics, (non)-Analyticity and … Phase Transitions
Alexander V. Turbiner
September 9, 2024
=================================================================
§ INTRODUCTION
One of the general beliefs in finite-dimensional quantum mechanics is
that we deal with analytic functions of the coupling constant,
contrary to Quantum Field Theory (QFT) and/or statistical mechanics, where non-analyticity/discontinuity
in temperature can occur in the form of 1st, 2nd, 3rd, …, infinite (Berezinsky-Kosterlitz-Thouless)
order phase transitions. In this talk we demonstrate the existence of discontinuities in the coupling
constant in 1D quantum mechanics, meaning that the functions of the coupling constant can be piecewise analytic.
Note that a possible loss of analyticity was mentioned by the author long ago <cit.> in relation to spontaneous SUSY breaking.
∙ Harmonic oscillator (example)
Take the potential
V = ω^2 x^2 ,
of the one-dimensional harmonic oscillator and construct the Hamiltonian
H = - ∂_x^2 + ω^2 x^2 , x ∈ (-∞ , +∞) ,
in atomic units with mass m=1/2, where ω plays the role of frequency.
It is evident that the ground state energy
E_0 = ω , ω > 0 , E_0 = |ω| , ω < 0 ,
see Fig.<ref>.
In turn, the ground state eigenfunction
Ψ_0 = e^-ω x^2/2 , ω > 0 , Ψ_0 = e^-|ω| x^2/2 , ω < 0 .
Needless to say, both the eigenvalue and the eigenfunction are non-analytic at ω=0 while being analytic at ω > 0
and ω < 0. Their analytic continuation to ω < 0 or ω > 0 does exist but does not correspond to the physical reality: the eigenfunction becomes non-normalizable in these domains, respectively.
This is the simplest example of a discontinuity in the parameter of the Hamiltonian, which itself is continuous in the parameter: there are no singularities in the eigenvalues and eigenfunctions, but they are analytically disconnected at some point in the parameter space! It is evident that the analyticity of the eigenstates is restored when the harmonic oscillator is placed in the finite box x ∈ (-L , +L): the analytic continuation exists, see Fig.<ref>. A similar situation occurs for the excited states.
§ SUSY QUANTUM MECHANICS, SEE E.G. <CIT.>
Let us take SUSY Quantum Mechanics with two supercharges:
Q = σ_- (i ∂_x + i w(x)) , Q^2=0 ,
Q̅ = σ_+ (i ∂_x - i w(x)) , Q̅^2=0 ,
where σ_± are Pauli matrices, w(x) is called superpotential,
and then form the Hamiltonian
H̃ = { Q , Q̅} = -∂_x^2 + w^2 + σ_3 w' ,
here { a , b } = ab + ba is anticommutator, σ_3 is the diagonal
Pauli matrix diag(1,-1). Hence, H̃ is the 2 × 2 diagonal Hamiltonian matrix. The spectral problem
H̃ Ψ = Ẽ Ψ ,
where Ψ, Ẽ are 2-columns, is reduced to two disconnected spectral problems
H_B Ψ_B ≡ (-∂_x^2 + w^2 - w') Ψ_B = E_B Ψ_B ,
H_F Ψ_F ≡ (-∂_x^2 + w^2 + w') Ψ_F = E_F Ψ_F ,
for the so-called bosonic and fermionic sectors, respectively. If the ground state
Ψ^(0)_B = e^-∫ w dx∈ L_2(R), the SUSY is exact and the lowest eigenvalue of the bosonic sector vanishes, E^(0)_B = 0 (zero mode), otherwise SUSY is broken and E^(0)_B ≠ 0. Let us introduce a parameter a into the superpotential
w → a w .
What are analytic properties of the spectrum wrt the parameter a?
It is evident that
H_B(a) = H_F(-a)
thus, the bosonic and fermionic Hamiltonians are interchanged!
Let us consider two spectral problems:
H_B Ψ_B ≡ (-∂_x^2 + a^2 w^2 - a w') Ψ_B = E_B Ψ_B ,
with spectra E^(0)_B , E^(1)_B , E^(2)_B … and
H_F Ψ_F ≡ (-∂_x^2 + a^2 w^2 + a w') Ψ_F = E_F Ψ_F ,
with spectra E^(0)_F , E^(1)_F , E^(2)_F ….
It is well known that if SUSY is exact the spectra of these two spectral problems are remarkably related
E^(1)_B=E^(0)_F , E^(2)_B=E^(1)_F , … ,
hence,
E^(1)_B(a) = E^(0)_F(-a) .
Now we consider several examples.
§.§ w = ω x - Harmonic oscillator
The potentials of the bosonic and fermionic sectors are
V_B = ω^2 x^2 - ω ,
V_F = ω^2 x^2 + ω ,
respectively, where a=ω. Hence, for ω > 0 the ground state energy is
E_0 = 0 ,
while for ω < 0 the ground state energy takes the form
E_0 = 2 |ω| .
It is evident that the discontinuity in the first derivative of E_0 occurs at ω=0, see Fig.<ref>; it resembles the 2nd order type phase transition at ω=0.
§.§ w = a x^3 - primitive quasi-exactly-solvable (QES) sextic oscillator,
see <cit.>
The potentials of the bosonic and fermionic sectors are
V_B(a) = a^2 x^6 - 3 a x^2 ,
see Fig.<ref>,
V_F(a) = V_B(-a) = a^2 x^6 + 3 a x^2 ,
see Fig.<ref>, respectively, hence, for a > 0 the ground state energy
of the bosonic sector
E_0 = 0 ,
while for a < 0 the ground state energy of the fermionic sector
E_0 ∼ 1.93556 |a|^1/2 ,
see <cit.>.
Thus, the discontinuity in the first derivative of E_0 occurs at a=0, see Fig.<ref> below; it resembles the 2nd order type phase transition at a=0. A similar discontinuity appears for the energy of any excited state.
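The degeneracy pattern behind these statements is easy to check numerically. The sketch below diagonalizes finite-difference discretizations of the partner Hamiltonians -d^2/dx^2 + x^6 ∓ 3x^2 (i.e., a=1) and compares their low-lying spectra: one expects E_B^(0) ≈ 0, E_F^(0) close to the value 1.9356 quoted above, and E_B^(n+1) ≈ E_F^(n). The box size and grid are ad hoc choices for illustration.
import numpy as np

def spectrum(V, L=5.0, n=2000, k=5):
    """Lowest k eigenvalues of -d^2/dx^2 + V(x) on [-L, L] (Dirichlet ends)."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    H = (np.diag(np.full(n, 2.0 / h**2) + V(x))
         - np.diag(np.full(n - 1, 1.0 / h**2), 1)
         - np.diag(np.full(n - 1, 1.0 / h**2), -1))
    return np.linalg.eigvalsh(H)[:k]

E_B = spectrum(lambda x: x**6 - 3 * x**2)   # bosonic sector: zero mode expected
E_F = spectrum(lambda x: x**6 + 3 * x**2)   # fermionic sector: E_0 ~ 1.9356 expected

print("E_B:", E_B)
print("E_F:", E_F)
print("degeneracy check:", np.abs(E_B[1:] - E_F[:-1]).max())   # should be small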
§.§ w = a x^2 - SUSY broken quartic oscillator,
Ψ_0 ∉ L_2(R), E_B^(0)≠ 0
The potentials of the bosonic and fermionic sectors are
V_B = a^2 x^4 - 2 a x ,
V_F = a^2 x^4 + 2 a x .
respectively. They are symmetric wrt x → -x and/or a → -a,
V_B (x, a) = V_F (-x, a) = V_F (x, -a) = V_B (-x, -a) .
The ground state energy
E_0(a) ∼ 0.562 136 |a|^2/3 ,
is symmetric wrt a → -a [We thank J C del Valle for the numerical calculation of E_0(1) in the Lagrange Mesh method, see <cit.>.].
Thus, the ground state energy E_0 is continuous but the discontinuity in the first derivative of E_0
occurs at a=0, see Fig. <ref> below, it resembles the 2nd order type phase transition at a=0. Similar behavior appears for the energies of the excited states.
§.§ w = a x^3 + b x - generalized primitive QES sextic polynomial oscillator <cit.>
Take the two-term superpotential
w = a x^3 + b x .
The ground state function at a > 0
Ψ_0 = e^-a/4 x^4 - b/2 x^2 ,
is normalizable with vanishing ground state energy, E_0 = 0, at any b.
The bosonic potential takes the form
V_B = a^2 x^6 + 2 ab x^4 + (b^2 - 3 a) x^2 - b ≡ V(x; a, b) ,
while the fermionic potential is of the form
V_F = a^2 x^6 + 2 ab x^4 + (b^2 + 3 a) x^2 + b ≡ V(x; -a, -b) ,
respectively.
Remarkable Observation-I by Herbst and Simon <cit.>
Take (<ref>)
V(x; a, b) ≡ V_B = V_0 + V_1 ≡ (b^2 x^2 - b) + (a^2 x^6 + 2 ab x^4 - 3 a x^2) ,
and develop perturbation theory (PT) for V_1 as perturbation by taking V_0 as unperturbed potential. This PT is, in fact, in powers of the parameter a. For the ground state energy the PT has the form
E_0 = ∑_i=0^∞ e_i a^i ,
with a remarkable property that all coefficients vanish (!)
e_i=0 , i=0,1,2,… ,
which was proved rigorously in <cit.>, although it was evident from the physics point of view.
If a > 0, the Schrödinger equation can be solved exactly by finding the square-integrable nodeless eigenfunction (<ref>) in the explicit form with the ground state energy E_0=0, however, for a<0 the explicit solution (<ref>) becomes non-square-integrable, although, the square integrable ground state eigenfunction of (<ref>) exists, hence, the energy E_0 is NOT zero(!), see <cit.>.
For b > 0 the energy can be presented in the form of the one-instanton expansion at small (in modulo)
negative a,
E_0 = β |a|^1/2 e^-b/√(|a|) (1 + …) , a → -0 ,
where
β=1.93566 , =0.149 ,
see <cit.>. Hence, the energy is exponentially small when a → -0. This is a remarkable example where the sum of infinitely many zeroes is not equal to zero! Eventually, it leads to the infinite order type phase transition, see Fig.<ref>. On the other hand, if b=0 in (<ref>): V=V(x;a,0), we arrive at the 2nd order type phase transition, see Fig.<ref>. If b < 0, the energy gets discontinuous at a=0 and the plot of the energy versus a looks like the 1st order phase transition, see Fig. <ref>. Similar behavior appears for the energies of the excited states in the potential V_B (7) as well as for other QES potentials, see <cit.>.
§.§ w=a x^2 + b x - primitive, SUSY broken, quartic anharmonic oscillator
Take the superpotential
w = a x^2 + b x
The function (a formal solution of the Schrödinger equation) at real a
Ψ_0 = e^-a/3 x^3 - b/2 x^2 ,
with (formal) energy E_0 = 0 at any b and a ≠ 0 is NOT in L_2(R).
Bosonic potential
V_B = a^2 x^4 + 2 ab x^3 + b^2 x^2 - 2a x - b ≡ V(x; a, b)
while the fermionic potential
V_F = a^2 x^4 + 2 ab x^3 + b^2 x^2 + 2a x + b = V(x; -a, -b)
Remarkable Observation-II by Herbst and Simon <cit.>
Take
V(x; a, b) ≡ V_B = V_0 + V_1 ≡ (b^2 x^2 - b) + (a^2 x^4 + 2 ab x^3 - 2 a x)
and develop PT wrt V_1, which appears to be in powers of a, for the ground state energy
E_0 = ∑_i=0^∞ e_i a^i .
It was proved in <cit.> that all coefficients in the expansion vanish,
e_i=0 , i=0,1,2,… .
Since for real a ≠ 0 the function Ψ_0 (<ref>) is non-normalizable, the ground state energy E_0 ≠ 0 and E_0(a)=E_0(-a). For b>0 and small a the energy can be written as one-instanton-type expansion
E_0 = β |a|^2/3 e^-b/|a|^2/3 (1 + …) , a → ± 0 ,
where the parameter in the exponent still needs to be calculated, while β ∼ 0.562 136, cf. (<ref>).
Discontinuities in a of the ground state energy E_0, which occur at a=0, depending on the parameter b in the potential (<ref>) are presented in Figs.<ref>-<ref>. They are of the type of the 1st, 2nd and infinite order phase transitions, respectively.
Similar analysis to one presented in this manuscript can be carried out for different superpotentials. All attempts to find other types of discontinuities than 1st, 2nd and infinite-order type phase transitions fail.
Placing the system in a finite box leads to the disappearance of the discontinuities.
§ EXACT BOHR-SOMMERFELD (B-S) QUANTIZATION CONDITION AND SUSY QUANTUM MECHANICS
The exact B-S quantization condition proposed by del Valle & AVT <cit.> has the form
∫_x_a^x_b√(E_exact - V) dx = π (N + 1/2 + γ (N)) ,
where x_a,b(E_exact) are the turning points, γ (N) is the so-called WKB correction, and ħ=1.
It was shown in <cit.> that for power-like potentials
V = |x|^m , m > 0 ,
γ is a small, dimensionless, bounded function:
|γ (N)| ≤ 1/2
for any m > 0. In particular, for the harmonic oscillator (m=2) γ(N)=0,
while for the quartic oscillator (m=4) γ_max=γ(0) ∼ 0.08, and
for the sextic oscillator (m=6) γ_max=γ(0) ∼ 0.15. Note that for the
infinite square well potential (m=∞) with x ∈ [-1,1], γ(N) = 1/2.
The typical behavior of the WKB correction γ is shown in Fig.<ref> for the cases
of the quartic and sextic oscillators: it is a smooth function, decreasing as the quantum
number N grows, with asymptotics ∼ 1/N.
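As a numerical illustration of this condition, the sketch below computes γ(N) for the quartic oscillator V = x^4 by (i) obtaining E_exact from a finite-difference diagonalization and (ii) evaluating the left-hand-side integral by quadrature; γ(0) should come out near the value 0.08 quoted above and decrease with N. Grid and box parameters are ad hoc.
import numpy as np
from scipy.integrate import quad

def fd_levels(V, L=6.0, n=2000, k=6):
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    H = (np.diag(2.0 / h**2 + V(x))
         - np.diag(np.full(n - 1, 1.0 / h**2), 1)
         - np.diag(np.full(n - 1, 1.0 / h**2), -1))
    return np.linalg.eigvalsh(H)[:k]

def gamma(E, N, m=4):
    x_t = E ** (1.0 / m)                                        # turning point of V = |x|^m
    integrand = lambda x: np.sqrt(np.maximum(E - np.abs(x) ** m, 0.0))
    action, _ = quad(integrand, -x_t, x_t)
    return action / np.pi - N - 0.5                             # B-S condition solved for gamma

for N, E in enumerate(fd_levels(lambda x: x ** 4)):
    print(N, E, gamma(E, N))    # gamma(0) expected near 0.08, decreasing with N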
§ WKB CORRECTION Γ FOR SUSY POTENTIALS X^6∓ 3 X^2
Take potentials of bosonic and fermionic sectors
V_B/F = a^2 w^2 ∓ a w' ,
with energies
E^(B)(0)=0 , E^(B)(N+1) = E^(F)(N) , N=0,1,2, … ,
assuming that SUSY is exact. Exact B-S quantization condition (<ref>)
∫_x_a^x_b√(E^B/F_exact - a^2 w^2 ∓ a w') dx =π (N + 1/2 + γ_B/F (N))
Take the superpotential w=a x^3 as an example and substitute it into (<ref>),
∫_x_a^x_b√(Ẽ^B/F_exact a^1/2 - a^2 x^6 ± 3a x^2) dx =
[ x = a^-1/4 y ]
∫_y_a^y_b√(Ẽ^B/F_exact - y^6 ± 3 y^2) dy =
= π (N + 1/2 + γ_B/F (N)) .
Formally, the a-dependence in γ_B/F disappears.
In the Lagrange Mesh method <cit.> the Schrödinger equation for the SUSY partner potentials
x^6∓ 3 x^2 can be easily solved for the first several eigenstates, the integral (<ref>) on the lhs of (<ref>) can be calculated, and eventually
the WKB correction γ_B/F (N) can be found. These corrections are presented
in Fig.<ref> for the first 21 eigenstates. Not surprisingly,
in the large-N limit the corrections γ_B/F (N) coincide with high accuracy.
It must be emphasized that, in general, γ^(B)(N+1) ≠ γ^(F)(N) , N=0,1,2, … although the corresponding exact energies coincide.
Those WKB corrections are analytically disconnected in a.
§ ACKNOWLEDGEMENTS
This text is dedicated to the 40 years of successful scientific career by
Professor David J Fernandez:
we sincerely congratulate David and wish him a very successful continuation.
The author thanks J C del Valle for preparation of the Fig.13.
The research is partially supported by DGAPA grant IN113022 (UNAM, Mexico).
99
Turbiner:1992
Turbiner A.V.
A New Phenomenon of Non-Analyticity and Spontaneous
Supersymmetry
Breaking,
Phys.Lett. B276, 95-102 (1992);
ibid B291, 519 (1992)(erratum)
SUSY:1995
Cooper F., Khare A., Sukhatme U.,
Supersymmetry and Quantum Mechanics,
Phys.Rept. 251, 267-385 (1995)
Turbiner:1988
Turbiner A.V.
Quasi-Exactly-Solvable Problems and the SL(2,R) algebra,
Comm.Math.Phys. 118, 467-474 (1988);
One-dimensional quasi-exactly solvable Schrödinger equations,
Physics Reports 642, 1-71 (2016)
H-Simon:1978
Herbst I.W. and Simon B.
Some remarkable examples in eigenvalue perturbation theory,
Phys. Lett. B78, 304-306 (1978)
BS:2021
J.C. del Valle, A.V Turbiner,
Power-like potentials: from the Bohr-Sommerfeld energies to exact
ones,
Int.Journ.Mod.Phys. A36(18) (2021) 2150221, pp.14
dV:2024
J.C. del Valle,
Solving the one-dimensional time-independent Schrödinger equation with
high accuracy: The LagrangeMesh Mathematica® package,
Intern Journ Mod Phys C 35, 2450011 (2024)
|
http://arxiv.org/abs/2409.03176v1 | 20240905020842 | Pseudo-Gorenstein edge rings and a new family of almost Gorenstein edge rings | [
"Yuta Hatasa",
"Nobukazu Kowaki",
"Koji Matsushita"
] | math.AC | [
"math.AC",
"math.CO",
"Primary 13D40, Secondary 13F55, 13F65, 05C25"
] |
[Y. Hatasa]
Department of Mathematics,
Tokyo Institute of Technology, 2–12–1 Ōokayama, Meguro-ku,
Tokyo 152–8551, Japan
[Y. Hatasa]
International Institute for Sustainability with Knotted Chiral Meta Matter (WPI-SKCM^2),
Hiroshima University, 1–3–1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739–8526, Japan
[email protected]
[N. Kowaki]Department of Pure and Applied Mathematics, Graduate School of Information Science and Technology, Osaka University, Suita, Osaka 565-0871, Japan
[email protected]
[K. Matsushita]Department of Pure and Applied Mathematics, Graduate School of Information Science and Technology, Osaka University, Suita, Osaka 565-0871, Japan
[email protected]
[2020]
Primary
13D40;
Secondary
13F55,
13F65,
05C25,
§ ABSTRACT
In this paper, we study edge rings and their h-polynomials.
We investigate when edge rings are pseudo-Gorenstein, which means that the leading coefficients of the h-polynomials of edge rings are equal to 1.
Moreover, we compute the h-polynomials of a special family of edge rings and show that some of them are almost Gorenstein.
Pseudo-Gorenstein edge rings and a new family of almost Gorenstein edge rings
Yuta Hatasa, Nobukazu Kowaki, Koji Matsushita
September 9, 2024
=============================================================================
§ INTRODUCTION
§.§ Backgrounds
Let R = ⊕_k≥ 0 R_k be a Cohen–Macaulay homogeneous domain of dimension d over an algebraically closed field 𝕜 = R_0 of characteristic 0.
Then the Hilbert series of R is defined as the formal power series ∑_k≥ 0(dim_𝕜 R_k)t^k, and it is known that we can write the Hilbert series of R in the following form:
∑_k ≥ 0 (dim_𝕜 R_k) t^k = (h_0+h_1t+⋯+h_st^s)/(1-t)^d,
where h_s ≠ 0.
We call the polynomial h_0+h_1t+⋯+h_st^s the h-polynomial of R, denoted by h(R;t), and call the sequence (h_0,h_1,…,h_s) the h-vector of R.
In addition, we also call the index s the socle degree of R, and denoted by s(R).
The h-polynomial (h-vector) of R is one of the most important invariants in the theory of commutative algebra because it tells us what commutative-algebraic properties R has.
In fact, it is well known that R is Gorenstein if and only if R has the symmetric h-vector, that is, h_i=h_s(R)-i holds for each i=0,…,s(R)/2 (<cit.>).
Moreover, recent studies have shown that the h-vector has connections not only to Gorensteinness, but also to its generalized notions such as levelness (<cit.>), almost Gorensteinness (<cit.>) and nearly Gorensteinness (<cit.>).
For example, the following facts are known:
* R is level if and only if the Cohen–Macaulay type of R is equal to h_s(R) (see <cit.>). This implies that if R is level, then h_s(R)=1 if and only if R is Gorenstein;
* If s(R)≥ 2 and R is almost Gorenstein, then h_s(R)=1 (<cit.>);
* If R is nearly Gorenstein, then h_s(R)=1 if and only if R is Gorenstein (<cit.>).
These show that the leading coefficient h_s(R) of the h-polynomial of R has wealth information.
In particular, these motivate determining when h_s(R)=1.
A Cohen–Macaulay homogeneous ring R is called pseudo-Gorenstein if h_s(R)=1 (<cit.>), and certain classes of pseudo-Gorenstein rings are characterized (see, e.g., <cit.>).
In this paper, we study the h-polynomials and pseudo-Gorensteinness of certain homogeneous domains, called edge rings.
For a finite simple graph G on the vertex set V(G)=[d] := {1,…,d} with the edge set E(G), we define the edge ring of G as follows:
[G] := 𝕜[t_i t_j : {i,j}∈ E(G)] ⊂ 𝕜[t_1,…,t_d].
Edge rings began to be studied by Ohsugi–Hibi (<cit.>) and Simis–Vasconcelos–Villarreal (<cit.>).
Since then, many researchers have studied the commutative ring-theoretic properties of edge rings.
In particular, the h-polynomials of the edge rings have been investigated.
As far as we know, the h-vectors of the edge rings of the following graphs have been computed:
* Complete graphs and complete bipartite graphs (see <cit.>);
* Bipartite graphs (via interior polynomials) (<cit.>);
* A family of graphs composed of a complete bipartite graph and a cycle graph (<cit.>);
* A family of graphs consisting of odd cycles that share a single common vertex (<cit.>).
Moreover, Gorensteinness, leveleness, almost Gorensteinness, nearly Gorensteinness and pseudo-Gorensteinness of edge rings have been investigated, respectively (see <cit.>).
There are two goals of this paper; one is to characterize when edge rings are pseudo-Gorenstein.
Another one is to find a new family of edge rings having commutative ring-theoretic properties mentioned above and to examine the behavior of their h-polynomials.
In this paper, we focus on almost Gorenstein edge rings and their h-polynomials.
§.§ Results
First, we completely characterize when the edge rings of bipartite graphs are pseudo-Gorenstein:
Let G be a bipartite graph.
Then the edge ring [G] is pseudo-Gorenstein if and only if every block of G is matching-covered.
We also derive some corollaries from <Ref> (<Ref>).
Moreover, we investigate the case of non-bipartite graphs in <Ref>.
This case becomes much more difficult than the bipartite graph case, and the assertions of <Ref> and corollaries mentioned above are no longer true.
However, the assertions can still be ensured with the addition of certain assumptions (<Ref>).
Next, we give a new family of almost Gorenstein edge rings whose h-vectors we compute.
For integers m, n ≥ 3 and 0 ≤ r ≤ min{m, n}, let G_m, n, r be the graph obtained from a
complete bipartite graph K_m, n by removing a matching M with |M| = r (see <Ref> for the precise definition of G_m,n,r).
We have
h_[G_m, n, r](t) = 1 + ((m - 1)(n - 1) - r) t
+ ∑_i = 2^min{m, n}\binom{m - 1}{i}\binom{n - 1}{i} t^i.
Moreover, [G_m, n, r] is almost Gorenstein if and only if m = n.
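The formula above is easy to test computationally for small parameters. The following sketch counts, for each degree k, the distinct exponent vectors of degree-k monomials of the edge ring of G_m,n,r (equivalently, distinct sums of k edge vectors ρ(e)), recovers the h-vector by multiplying the Hilbert series by (1-t)^D with D = m+n-1 (the Krull dimension for a connected bipartite graph; see the preliminaries below), and compares the result with the claimed expression. The labeling of the removed matching is an arbitrary choice.
from math import comb

def h_vector(m, n, r, kmax):
    # vertices 0..m-1 (left) and m..m+n-1 (right); remove the matching {i, m+i}, i < r
    edges = [(i, m + j) for i in range(m) for j in range(n) if not (i == j and i < r)]
    d, D = m + n, m + n - 1                       # number of vertices, Krull dimension
    rho = [tuple(1 if v in e else 0 for v in range(d)) for e in edges]
    level, counts = {tuple([0] * d)}, [1]         # degree-0 piece
    for _ in range(kmax):
        level = {tuple(p + q for p, q in zip(v, w)) for v in level for w in rho}
        counts.append(len(level))                 # dimension of the k-th graded piece
    # multiply the Hilbert series by (1-t)^D: h_i = sum_j (-1)^j C(D, j) counts[i-j]
    return [sum((-1) ** j * comb(D, j) * counts[i - j] for j in range(min(i, D) + 1))
            for i in range(kmax + 1)]

def claimed(m, n, r, kmax):
    h = [0] * (kmax + 1)
    h[0], h[1] = 1, (m - 1) * (n - 1) - r
    for i in range(2, min(m, n) + 1):
        h[i] = comb(m - 1, i) * comb(n - 1, i)
    return h

for m, n, r in [(3, 3, 1), (3, 4, 2), (4, 4, 0)]:
    kmax = min(m, n) + 2
    print((m, n, r), h_vector(m, n, r, kmax) == claimed(m, n, r, kmax))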
§.§ Organization
The structure of this paper is as follows.
In <Ref>, we prepare the required materials for the discussions later.
We recall definitions and notations associated with commutative algebras and edge rings.
In <Ref>, we discuss the pseudo-Gorensteinness of edge rings.
Before that, we recall some definitions and facts on graph theory.
We then provide the characterization of pseudo-Gorenstein edge rings for bipartite graphs and investigate the case of non-bipartite graphs.
In <Ref>, we introduce a new family of almost Gorenstein edge rings and compute their h-polynomials.
Moreover, we give an observation on the behavior of the h-polynomials of almost Gorenstein edge rings.
§.§ Acknowledgement
The author would like to thank Akihiro Higashitani, Tamás Kálmán, and Sora Miyashita for their helpful comments and advice on improving this paper.
The third author is partially supported by Grant-in-Aid for JSPS Fellows Grant JP22J20033.
§ PRELIMINARIES
§.§ Cohen–Macaulay homogeneous domains and their h-polynomials
Throughout this subsection, let R = ⊕_k≥ 0 R_k be a Cohen–Macaulay homogeneous domain of dimension d over an algebraically closed field 𝕜 = R_0 of characteristic 0, and let (h_0,…,h_s) be its h-vector.
First, we introduce some fundamental objects associated with homogeneous rings (consult, e.g., <cit.> for the introduction).
* Let ω_R denote a canonical module of R and let a(R) denote the a-invariant of R, i.e., a(R)=-min{j:(ω_R)_j ≠ 0}.
When it is clear from context, we simply write a instead of a(R).
* For a graded R-module M, we use the following notation:
* Let μ(M) denote the number of minimal generators of M.
* Let e(M) denote the multiplicity of M. Then the inequality μ(M) ≤ e(M) always holds.
* Let M(-ℓ) denote the R-module whose grading is given by M(-ℓ)_k=M_k-ℓ for any k ∈ ℤ.
* As mentioned in the introduction, we denote the h-polynomial (resp. the socle degree) of R by h(R;t) (resp. s(R)).
We simply write s instead of s(R) as in the a-invariant.
Note that h_s=dim_𝕜 (ω_R)_-a and d+a=s (see <cit.>).
Next, we recall the definitions and some properties of pseudo-Gorenstein rings and almost Gorenstein rings, respectively.
[<cit.>]
We call R pseudo-Gorenstein if h_s=dim_𝕜 (ω_R)_-a=1.
[<cit.>]
We call R almost Gorenstein
if there exists an exact sequence of graded R-modules
0 → R →ω_R(-a) → C → 0
with μ(C)=e(C).
Under our assumptions on R, there always exists a degree-preserving injection from R to ω_R(-a) (<cit.>).
Moreover, we see that μ(C)=μ(ω_R)-1 and e(C)=∑_j=0^s-1((h_s+⋯+h_s-j)-(h_0+⋯+h_j)) (<cit.>); in particular, μ(C) and e(C) do not depend on C.
Let e(R) := ∑_j=0^s-1((h_s+⋯+h_s-j)-(h_0+⋯+h_j)). Note that e(R)≥μ(ω_R)-1.
Work with the same notation as above.
Then R is almost Gorenstein if and only if one has e(R)=μ(ω_R)-1.
If R is almost Gorenstein and s(R) ≥ 2, then h_s(R)=1, that is, R is pseudo-Gorenstein.
§.§ Edge rings and edge polytopes
Throughout this paper, all graphs are finite and have no loops and no multiple edges.
We recall the definition and some properties of edge rings.
We also recall edge polytopes, which are lattice polytopes arising from graphs.
We can regard edge rings as affine semigroup rings associated with edge polytopes.
See, e.g., <cit.> or <cit.> for the introduction to edge rings.
For a graph G on the vertex set V(G)=[d] with the edge set E(G), we consider the morphism ψ_G of 𝕜-algebras:
ψ_G : 𝕜[x_{i,j} : {i,j}∈ E(G)] → 𝕜[t_1,…,t_d], induced by ψ_G(x_{i,j})=t_it_j.
Then we denote the image (resp. the kernel) of ψ_G by [G] (resp. I_G) and call [G] the edge ring of G.
The edge ring [G] is a homogeneous domain by setting deg(t_it_j)=1 for each {i,j}∈ E(G).
Given an edge e={i,j}∈ E(G), let ρ(e) := 𝐞_i+𝐞_j, where 𝐞_i denotes the i-th unit vector of ℝ^d for i=1,…,d.
We define the convex polytope associated to G as follows:
P_G := conv({ρ(e) : e ∈ E(G)}) ⊂ ℝ^d.
We call P_G the edge polytope of G.
We can regard [G] as an affine semigroup ring as follows:
Let 𝒜_G={ρ(e) : e ∈ E(G)} and let S_G be the affine semigroup generated by 𝒜_G, that is, S_G=ℤ_≥ 0𝒜_G. Then we can easily see that [G] is isomorphic to the affine semigroup ring of S_G.
Let b(G) be the number of bipartite connected components of G; then we have dim P_G=d-b(G)-1 (see <cit.>).
This implies that the Krull dimension of [G] is equal to d-b(G).
Let G be a graph. Then the following are equivalent:
* [G] is normal;
* S_G=ℝ_≥ 0𝒜_G∩ℤ𝒜_G;
* dim_𝕜 [G]_k =#(kP_G∩ℤ^d) for any k∈ℤ_≥ 0.
* each connected component of G satisfies the odd cycle condition, i.e., for each pair of odd cycles C and C' with no common vertex, there is an edge {v,v'} with v ∈ V(C) and v' ∈ V(C')
For a lattice polytope P⊂ℝ^d and k∈ℤ_≥ 0, L_P(k) := #(kP∩ℤ^d) is called the Ehrhart polynomial of P.
If [G] is normal, we can compute the Ehrhart polynomial of P_G using the h-vector of [G] as follows:
Let G be a graph with dim P_G=δ and let (h_0,…,h_s) be the h-vector of [G].
If [G] is normal, then the Ehrhart polynomial L_P_G(k) of P_G can be described as follows:
L_P_G(k)=h_0\binom{k+δ}{δ}+h_1\binom{k+δ-1}{δ}+⋯ +h_s\binom{k+δ-s}{δ}.
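The right-hand side above is a simple binomial transform of the h-vector and can be evaluated directly; the following generic utility (not tied to a particular graph) computes L_P_G(k) from a given h-vector and δ. For instance, for the h-vector (1, 3, 1) with δ = 4 — the case G_3,3,1 of the theorem in the introduction, whose edge ring is normal since the graph is bipartite — it should return the lattice-point counts 1, 8, 31 for k = 0, 1, 2.
from math import comb

def ehrhart_from_h(h, delta, k):
    """Evaluate L_{P_G}(k) = sum_i h_i * binom(k + delta - i, delta)."""
    return sum(h_i * (comb(k + delta - i, delta) if k + delta - i >= 0 else 0)
               for i, h_i in enumerate(h))

print([ehrhart_from_h((1, 3, 1), 4, k) for k in range(3)])   # expected: [1, 8, 31]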
Moreover, if [G] is normal, then [G] is Cohen–Macaulay and the ideal generated by the monomials corresponding to the elements in int(S_G), where int(S_G) denotes the set of elements of S_G in the relative interior of the cone generated by S_G, is the canonical module of [G] (see <cit.>).
In particular, let ℓ_G := min{n : int(nP_G)∩ℤ^d≠∅}, where int(P) denotes the relative interior of a polytope P; then we have ℓ_G=-a([G]).
To observe int(S_G), we give a description of the cone ℝ_≥ 0𝒜_G.
To this end, we recall some terminologies and notations.
We say that a subset T of V(G) is an independent set of G if {v,w}∉ E(G) for any v,w∈ T.
Note that each singleton is regarded as an independent set.
We denote the set of non-empty independent sets of G by 𝒮_G.
For i ∈ [d] and T ∈𝒮_G, let
H_i := {(x_1,…,x_d) ∈ℝ^d : x_i= 0},
H_T := {(x_1,…,x_d) ∈ℝ^d : ∑_j ∈ N_G(T)x_j - ∑_i ∈ Tx_i = 0},
H_i^+ := {(x_1,…,x_d) ∈ℝ^d : x_i≥ 0},
H_T^+ := {(x_1,…,x_d) ∈ℝ^d : ∑_j ∈ N_G(T)x_j - ∑_i ∈ Tx_i ≥ 0},
H_i^> := {(x_1,…,x_d) ∈ℝ^d : x_i > 0},
H_T^> := {(x_1,…,x_d) ∈ℝ^d : ∑_j ∈ N_G(T)x_j - ∑_i ∈ Tx_i > 0},
where N_G(T) := {v∈ [d] : {v,w}∈ E(G) for some w∈ T}.
The following terminologies are used in <cit.>: Suppose that G is connected.
* For a subset W ⊂ V(G), let G∖ W denote the induced subgraph with respect to V(G)∖ W (for a vertex v, we denote by G ∖ v instead of G ∖{v}).
* We call a vertex v of G ordinary if G ∖ v is connected.
* Given an independent set T ⊂ V(G), let B(T) denote the bipartite graph on T ∪ N_G(T) with the edge set {{v,w}∈ E(G) : v ∈ T, w ∈ N_G(T)}.
* When G is a bipartite graph with the partition V(G)=V_1⊔ V_2, a non-empty set T ⊂ V_1 is said to be an acceptable set if the following conditions are satisfied:
* B(T) is connected;
* G ∖ V(B(T)) is a connected graph with at least one edge.
Let G be a connected graph.
Then the cone ℝ_≥ 0𝒜_G has the following representation:
ℝ_≥ 0𝒜_G=⋂_i∈ [d]H^+_i∩⋂_T∈_GH^+_T.
Moreover, if G is bipartite, each facet of P_G is defined by a supporting hyperplane H_i for some ordinary vertex i or H_T for some acceptable set T.
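The half-space description above can be tested by brute force on small graphs. The sketch below is our own illustration (it enumerates all independent sets, so it only scales to tiny examples) and checks whether a given lattice point satisfies every inequality H_i^+ and H_T^+.

```python
# Brute-force membership test for the cone described by the theorem above.
from itertools import combinations
import networkx as nx

def independent_sets(G):
    nodes = list(G.nodes())
    for r in range(1, len(nodes) + 1):
        for T in combinations(nodes, r):
            if not any(G.has_edge(u, v) for u, v in combinations(T, 2)):
                yield set(T)

def in_cone(G, x):
    """x: dict vertex -> coordinate; check all H_i^+ and H_T^+ inequalities."""
    if any(x[i] < 0 for i in G.nodes()):
        return False
    for T in independent_sets(G):
        N_T = set().union(*(set(G.neighbors(v)) for v in T))
        if sum(x[w] for w in N_T) - sum(x[v] for v in T) < 0:
            return False
    return True

# Example: on the 4-cycle, the generator rho({0,1}) = (1,1,0,0) lies in the
# cone, while (1,0,0,0) violates H_T^+ for T = {0} and is rejected.
C4 = nx.cycle_graph(4)
print(in_cone(C4, {0: 1, 1: 1, 2: 0, 3: 0}))  # True
print(in_cone(C4, {0: 1, 1: 0, 2: 0, 3: 0}))  # False
```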
For a subset F⊂ E(G), we set v_F := ∑_e∈ Fρ(e).
The following lemma is obvious but important to our results:
Let G be a graph.
For F⊂ E(G) and T∈_G, we have v_F ∈ H_T^> if there exists f∈ F with f∈ E(G)∖ E(B(T)) and f∩ N_G(T)≠∅.
A connected graph G is said to be 2-connected if all vertices of G are ordinary.
A block is a maximal 2-connected component of G.
Let G be a bipartite graph and let B_1,…, B_n be their blocks.
Then we have [G]≅[B_1]⊗⋯⊗[B_n].
§ PSEUDO-GORENSTEIN EDGE RINGS
In this section, we study the pseudo-Gorensteinness of edge rings.
§.§ Preliminary on graph theory
Before that, we need some notions and notations on (directed) graphs.
Let G be a 2-connected graph.
Then G has an (open) ear decomposition (cf. <cit.>), i.e., G can be decomposed as C∪ P_1∪⋯∪ P_r where C is a cycle, P_i is a path and (C∪ P_1 ∪⋯∪ P_i-1)∩ P_i consists of end vertices of P_i for each i.
Let ϕ(G) denote the minimum number of even paths (containing the first cycle) in an ear decomposition of G.
Note that ϕ(G)≥ 1 if G is bipartite since the first cycle must be an even
cycle.
We call an ear decomposition of G optimal if the number of even paths is just ϕ(G).
Let G be a 2-connected bipartite graph and fix an optimal ear decomposition G=C∪ P_1∪⋯∪ P_r with
V(P_i)={p_i,0,p_i,1,…,p_i,m_i}, V(C)={c_1,…,c_2n},
E(P_i)={{p_i,0,p_i,1},{p_i,1,p_i,2},…,{p_i,m_i-1,p_i,m_i}} and
E(C)={{c_1,c_2},{c_2,c_3},…,{c_2n-1,c_2n},{c_2n,c_1}}.
We set
E_i {{p_i,1,p_i,2},{p_i,3,p_i,4},⋯,{p_i,m_i -2,p_i,m_i-1}} if m_i is odd,
{{p_i,1,p_i,2},{p_i,3,p_i,4},⋯,{p_i,m_i-1,p_i,m_i}} if m_i is even,
E_C {{c_1,c_2},{c_3,c_4},⋯,{c_2n-1,c_2n}} and
ℰ := E_C∪⋃_i=1^rE_i.
Note that E_i=∅ if m_i=1, and that #ℰ=(ϕ(G)+#V(G)-1)/2.
The following terminologies and facts are mentioned in <cit.>:
For a directed graph D,
* the set of edges entering a subset W⊂ V(D) is called a directed cut if no edges leave W.
* A directed graph D is strongly connected if there is a directed path from every vertex to any other.
* The following are equivalent:
* D is strongly connected;
* D does not contain directed cuts;
* D has a so-called directed ear decomposition, which is an ear decomposition of D, denoted as <Ref>, such that each edge c_i,c_i+1 (resp. p_i,j,p_i,j+1) is directed from c_i to c_i+1 (resp. p_i,j to p_i,j+1).
For a 2-connected bipartite graph G with the partition V(G)=V_1⊔ V_2, let G' be the directed graph obtained from G by orienting each edge from V_1 to V_2
and let G' be the directed graph obtained by reversing the orientation of edges of G'.
We can see that G' has a directed ear decomposition.
Let σ(G) denote the minimum cardinality of a subset of E(G) that contains at least one edge of each directed cut of G'.
According to <cit.> and <cit.>, we have the following:
Let G be a 2-connected bipartite graph.
Then we have
ℓ_G=σ(G)=#ℰ=(ϕ(G)+#V(G)-1)/2.
A graph G is called matching-covered if it is connected and each edge is contained in some perfect matching.
There are many characterizations of matching-covered bipartite graphs (see, e.g., <cit.> or <cit.>), of which the following is the most important one for this paper:
Let G be a connected bipartite graph with the partition V(G)=V_1⊔ V_2.
Then the following are equivalent:
* G is matching-covered;
* #V_1=#V_2 and #N_G(T)>#T for every non-empty subset T⊊ V_1;
* G is 2-connected and ϕ(G)=1.
If G is not bipartite, then (i) and (iii) in <Ref> are not equivalent, but the implication “(i) ⟹ (iii)” is true in general (see <cit.>).
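For small graphs, condition (i) can be checked directly from the definition ("every edge lies in some perfect matching"). The following brute-force sketch uses networkx's maximum-matching routine; function names are ours.

```python
# Sketch: check the matching-covered property for small graphs.
import networkx as nx

def has_perfect_matching_using(G, e):
    """Is there a perfect matching of G containing e = (u, v)?
    Equivalent to G - {u, v} having a perfect matching."""
    u, v = e
    H = G.copy()
    H.remove_nodes_from([u, v])
    M = nx.max_weight_matching(H, maxcardinality=True)
    return 2 * len(M) == H.number_of_nodes()

def is_matching_covered(G):
    if G.number_of_nodes() % 2 == 1 or not nx.is_connected(G):
        return False
    return all(has_perfect_matching_using(G, e) for e in G.edges())

# Examples: an even cycle is matching-covered; a path on 4 vertices is not
# (its middle edge lies in no perfect matching).
print(is_matching_covered(nx.cycle_graph(6)))  # True
print(is_matching_covered(nx.path_graph(4)))   # False
```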
For a graph G, we say that G is k-regular if all vertices of G have the same degree k, and G is regular if G is k-regular for some k.
Let G be a connected k-regular graph.
Then for an independent set T⊂ V(G),
one has #T = #N_G(T) if and only if G = B(T).
Since G is k-regular, we have
k·#T = ∑_v ∈ N_G(T)deg_B(T)(v) ≤∑_v ∈ N_G(T) k = k·#N_G(T),
where deg_B(T)(v) is the degree of v in the subgraph B(T).
If #T = #N_G(T), then the equality in <Ref> holds, which implies that G=B(T) since G is connected.
Conversely, assume that G = B(T).
In this case, G is bipartite with the partition V(G) = T ⊔ N_G(T).
Moreover, the equality in <Ref> holds, so we have
#T = #N_G(T).
§.§ The case of bipartite graphs
In this subsection, we investigate when the edge rings of bipartite graphs are pseudo-Gorenstein.
Let G be a bipartite graph with the partition V_1⊔ V_2 and let B_1,…,B_m be the blocks of G.
Then we have h_s([G])=∏_i=1^m h_s([B_i]) from <Ref>, so it is enough to study the case when G is 2-connected.
In what follows, we assume G is 2-connected, fix an optimal ear decomposition described as in <Ref> and let ℰ be the subset of E(G) defined in <Ref>.
We first give the following lemma:
We have v_ℰ∈∫(S_G).
From <Ref>, it is enough to show that v_ℰ∈ H_i^> for each i∈ [d] and v_ℰ∈ H_T^> for any acceptable set T⊊ V_1.
Since ⋃_e∈ℰe=V(G),
we have v_ℰ∈ H_i^> for any i∈ V(G).
For an acceptable set T⊊ V_1, let W := T∪ N_G(T) and 𝒞 := {{v,w}∈ E(G) : v∈ V_1∖ T and w∈ N_G(T)}.
Then is a directed cut of G', but is not a directed cut of G' since G' has a directed ear decomposition, and hence G' does not contain directed cuts.
This implies that ℰ∩𝒞≠∅, and hence v_ℰ∈ H_T^> from <Ref>.
We now present the following main theorem in this subsection:
Let G be a 2-connected bipartite graph.
Then the following are equivalent:
* [G] is pseudo-Gorenstein;
* ϕ(G)=1;
* G is matching-covered.
It follows from <Ref> that (ii) and (iii) are equivalent, so it is enough to show that (i) and (ii) are equivalent.
First, we show that (i) implies (ii).
Suppose that ϕ(G)≥ 2.
Then there exists i such that P_i has the even length.
Let E'_i := {{p_i,0,p_i,1},{p_i,2,p_i,3},…,{p_i,m_i-2,p_i,m_i-1}}
and let ℰ' := (ℰ∖ E_i)∪ E'_i.
We can see that v_ℰ≠ v_ℰ' and v_ℰ,v_ℰ'∈∫(S_G) from <Ref>.
In particular, we have v_ℰ,v_ℰ'∈∫(ℓ_G P_G)∩ℤ^d
since #ℰ=#ℰ'=ℓ_G from <Ref>.
Therefore, we get h_s= dim_𝕜(ω_[G])_ℓ_G=#(∫(ℓ_G P_G)∩ℤ^d)≥ 2.
Next, we prove that (ii) implies (i).
Actually, this has been already shown in <cit.> via interior polynomials, but we will provide a short self-contained proof of it.
In this case, we can easily see that v_ℰ=(1,…,1) and this is the unique lattice point in ∫(ℓ_G P_G).
Thus, we have h_s=1.
<Ref> gives some corollaries.
Let G be a bipartite graph and let B_1,…, B_m be the blocks of G.
Then [G] is pseudo-Gorenstein if and only if B_i is matching-covered for each i=1,…,m.
It follows immediately from h_s([G])=∏_i=1^m h_s([B_i]).
Let G be a graph.
A cycle containing all the vertices of G is said to be a Hamilton cycle, and a graph containing a Hamilton cycle is said to be Hamiltonian.
Let G be a Hamiltonian bipartite graph.
Then [G] is pseudo-Gorenstein.
In this situation, we have ϕ(G)=1.
Indeed, G has an optimal ear decomposition C∪ P_1∪⋯ P_r where C is a Hamilton cycle of G and P_i is a path of length 1.
Let G be a connected regular bipartite graph with the partition V_1⊔ V_2.
Then [G] is pseudo-Gorenstein.
From <Ref>, it is enough to show that #V_1=#V_2 and #N_G(T)>#T for every non-empty subset T⊊ V_1.
The fact that G is regular implies that G has a perfect matching (and hence #V_1=#V_2) and that #N_G(T)≥#T for every subset T⊂ V_1 (<cit.>).
If #N_G(T)=#T holds, then we have T=V_1 from <Ref>; thus #N_G(T)>#T for every non-empty subset T⊊ V_1.
Let G be a 2-connected bipartite graph with the partition V_1⊔ V_2.
If [G] is pseudo-Gorenstein, then we have #V_1=#V_2 (in particular, #V(G) is even).
It follows immediately from <Ref>.
We summarize the results of <Ref> in the following figure:
§.§ The case of non-bipartite graphs
In this subsection, we investigate the pseudo-Gorensteinness of the edge rings of non-bipartite graphs.
Actually, most of the implications in <Ref> are no longer true.
We first give their counterexamples in the case of 2-connected non-bipartite graphs satisfying the odd cycle condition.
(i) Let H_1 be the graph on the vertex set V(H_1)={1,2,3,4} with the edge set E(H_1)={{1,2},{2,3},{3,4},{1,4},{1,3}} (see <Ref>).
This graph is not matching-covered since there is no perfect matching containing {1,3}.
On the other hand, we can compute h([H_1];t)=1+t, so H_1 is a counterexample to the implication “pseudo-Gorenstein ⟹ matching-covered”.
(ii) Let H_2 be the graph on the vertex set V(H_2)={1,2,3,4,5,6,7} with the edge set E(H_2)={{1,2},{2,3},{3,4},{4,5},{1,5},{1,6},{3,6},{3,7},{5,7}} (see <Ref>).
We can see that ϕ(H_2)=2, #V(H_2) is odd and h([H_2];t)=1+2t+t^2.
Thus, H_2 is a counterexample to the implications “pseudo-Gorenstein ⟹ ϕ(G)=1” and “pseudo-Gorenstein ⟹ even number of vertices”.
(iii) Let H_3 be the graph on the vertex set V(H_3)={1,2,3,4,5,6} with the edge set E(H_3)={{1,2},{2,3},{3,4},{4,5},{5,6},{1,6},{1,3},{3,5},{3,6}} (see <Ref>).
This graph is Hamiltonian, and
we can see that ϕ(H_3)=1 and h([H_3];t)=1+3t+3t^2.
Therefore, H_3 is a counterexample to the implications “ϕ(H_3)=1 ⟹ pseudo-Gorenstein” and “Hamiltonian ⟹ pseudo-Gorenstein”.
(iv) Consider the complete graph K_5 with five vertices.
This graph is regular, but we can compute h([K_5];t)=1 + 5t + 5t^2.
This implies that K_5 is a counterexample to the implication “regular ⟹ pseudo-Gorenstein”.
In particular, each direction of the implication (3) in <Ref> is incorrect.
Moreover, the following example shows that the value of ϕ(G) does not depend on the pseudo-Gorensteinness of edge rings, that is, for any k∈ℤ_>0, there exists a 2-connected graph G satisfying the odd cycle condition with h_s([G])=1 and ϕ(G)=k:
For k∈ℤ_>0, let 𝒢_k be the graph on the vertex set V(𝒢_k) := {v_1,…,v_k+1}∪{u_1,…,u_k}∪{w_1,…,w_k} with the edge set
E(𝒢_k) := {{v_1,v_k+1}}∪⋃_i=1^k {{v_i,u_i},{v_i,w_i},{u_i,v_i+1},{w_i,v_i+1}}
(see <Ref>).
Then 𝒢_k is a 2-connected graph satisfying the odd cycle condition with ϕ(𝒢_k)=k.
Indeed, any odd cycle of 𝒢_k contains the edge {v_1,v_k+1}, so 𝒢_k satisfies the odd cycle condition.
Moreover, we can easily see that for any ear decomposition of 𝒢_k, the number of even paths appearing in it is just k, which implies that ϕ(𝒢_k)=k.
We show that [𝒢_k] is pseudo-Gorenstein.
From <cit.>, we can see that
I_𝒢_k=(b_i := x_v_i,u_ix_w_i,v_i+1-x_u_i,v_i+1x_v_i,w_i : i ∈ [k]).
Since these binomials b_i have no common variables, we obtain
[𝒢_k] ≅(⊗_i=1^k [x_v_i,u_i,x_w_i,v_i+1,x_u_i,v_i+1,x_v_i,w_i]/(b_i))[x_v_1,v_k+1]
and h([𝒢_k];t)=(1+t)^k, which implies that h_s([𝒢_k])=1.
The pseudo-Gorensteinness of the edge rings of non-bipartite graphs is quite different from that of bipartite graphs, and it seems difficult to completely characterize it.
We end this section by giving the following two sufficient conditions for edge rings to be pseudo-Gorenstein:
Let G be a matching-covered connected graph satisfying the odd cycle condition.
Then [G] is pseudo-Gorenstein.
Our assertion holds from <Ref> if G is bipartite, thus we may assume that G is not bipartite.
Let v=(1,…,1), then v belongs to S_G since G has a perfect matching.
We show that v is the unique lattice point in ∫(ℓ_G P_G).
From <Ref>, it suffices to prove that v∈ H_T^> for any independent set T of G.
Since G is a non-bipartite connected graph, there exists f∈ E(G)∖ E(B(T)) with f∩ N_G(T)≠∅.
Since G is matching-covered, we can take a perfect matching M containing f, and hence v_M=v∈ H_T^> from <Ref>.
Let G be a connected regular graph with an even number of vertices
satisfying the odd cycle condition.
Then [G] is pseudo-Gorenstein.
Our assertion holds from <Ref> if G is bipartite, thus we may assume that G is not bipartite.
As in the proof of <Ref>, it is enough to show that (1,…,1) ∈ H_T^> for any independent set T of G.
We obtain #T≤#N_G(T) by <Ref>.
If #T = #N_G(T) holds, then we have G = B(T) from <Ref>,
so G is bipartite, which contradicts the assumption.
Therefore, for any independent set T, we have #T < #N_G(T), and the point (1, …, 1) belongs to H_T^>.
§ ALMOST GORENSTEIN EDGE RINGS
§.§ A new family of almost Gorenstein edge rings
In this subsection, we consider the following graph:
For integers m, n ≥ 3 and 0 ≤ r ≤min{m, n}, let G_m, n, r be the graph on the vertex set V(G_m, n, r)=[m + n] with the edge set
E(G_m, n, r) = {{i, j + m} : i ∈ [m], j ∈ [n]}∖{{1, 1 + m}, {2, 2 + m}, …, {r, r + m}}.
We can easily see that V(G_m,n,r) has the partition V(G_m,n,r)=V_1⊔ V_2, where V_1 := {1,…,m} and V_2 := {m+1,…,m+n}, and G_m,n,r is bipartite.
Moreover, G_m,n,r is 2-connected and a subset T⊂ V_1 is an acceptable set if and only if T={i} for some i∈ [r].
To compute the h-polynomial of [G_m,n,r], we recall the h-polynomial of the edge ring of a complete bipartite graph.
Let K_m,n denote the complete bipartite graph with m+n vertices. Then
h([K_m,n];t)=∑_i=0^min{m,n}\binom{m-1}{i}\binom{n-1}{i}t^i.
We have
h([G_m, n, r]; t) = 1 + ((m - 1)(n - 1) - r) t
+ ∑_i = 2^min{m, n}\binom{m - 1}{i}\binom{n - 1}{i} t^i.
By <Ref>, it suffices to show that
L_P_K_m, n(k) - L_P_G_m, n, r(k)
= #(k P_K_m, n∩ℤ^m + n) - #(k P_G_m, n, r∩ℤ^m + n)
= r \binom{k + m + n - 3}{m + n - 2}.
For each i = 1, …, r, let
A_i := {(x_1, …, x_m + n) ∈ℤ^m + n_≥ 0 : x_i + x_m + i≥ k + 1, ∑_j = 1^m x_j = ∑_j = 1^n x_m + j = k}
and A := A_1∪⋯∪ A_r.
Note that A_i ∩ A_j = ∅ if i ≠ j.
We show that A ⊂ k P_K_m, n∩ℤ^m + n and
#A_i=\binom{k + m + n - 3}{m + n - 2} for each i, and hence #A = r \binom{k + m + n - 3}{m + n - 2}.
Let S_i := [m+n] ∖{i} for each i ∈ [r] and let σ_i : S_i →ℤ^m+n be the map defined as follows:
σ_i(l) := 𝐞_l+𝐞_m+i if 1 ≤ l ≤ m, and
σ_i(l) := 𝐞_i+𝐞_l if m + 1 ≤ l ≤ m + n, for l∈ S_i.
Moreover, we denote the set of all (k-1)-element multisubsets of S_i by U_i and define the map f_i : U_i→ℤ^m + n as
f_i(u) := 𝐞_i + 𝐞_m+i + ∑_l∈ uσ_i(l) for u ∈ U_i.
Then we can see that f_i(U_i) = A_i and f_i is a bijection onto A_i.
Therefore, we have A_i ⊂ k P_K_m, n∩ℤ^m + n and
#A_i = #U_i = \binom{k+m+n-3}{m+n-2}.
It remains to show that
(k P_K_m, n∩ℤ^m+n) ∖ A = k P_G_m, n, r∩ℤ^m+n.
For each i ∈ [r], any a = (a_1, …, a_m + n) ∈ A_i violates the inequality ∑_j ≠ i a_m + j - a_i ≥ 0.
Hence a ∉ H_{{i}}^+, and consequently a ∉ kP_G_m, n, r∩ℤ^m+n.
Conversely, for any a = (a_1, …, a_m+n) ∈ (kP_K_m, n∩^m+n) ∖ A,
the element a can be expressed as
a = ∑_i ∈ [m], j ∈ [n] b_i, j (𝐞_i + 𝐞_m + j) for some b_i, j∈ℤ_≥ 0.
Since (𝐞_i + 𝐞_m + i) + (𝐞_j + 𝐞_m + j) = (𝐞_i + 𝐞_m + j) + (𝐞_j + 𝐞_m + i),
we may assume b_i, i≥ 0 and b_j, j = 0 for j ∈ [m] ∖{i}.
Suppose b_i, i > 0. Then there exist j ∈ [m] ∖{i} and l ∈ [n] ∖{i} such that b_j, l > 0 since a_i + a_m + i≤ k,
and we have (𝐞_i + 𝐞_m + i) + (𝐞_j + 𝐞_m + l) = (𝐞_i + 𝐞_m + l) + (𝐞_j + 𝐞_m + i).
Hence we may assume b_i, i = 0, and therefore we have a ∈ kP_G_m, n, r∩ℤ^m+n.
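The two h-polynomial formulas can be cross-checked numerically. The sketch below (sympy-based, our own) confirms that the G_{m,n,r} formula reduces to the K_{m,n} formula when r = 0, as it should since G_{m,n,0} = K_{m,n}.

```python
# Sketch: the h-polynomials of K_{m,n} and of G_{m,n,r} from the formulas above.
from math import comb
import sympy as sp

t = sp.symbols('t')

def h_complete_bipartite(m, n):
    return sum(comb(m - 1, i) * comb(n - 1, i) * t**i
               for i in range(min(m, n) + 1))

def h_G(m, n, r):
    poly = 1 + ((m - 1) * (n - 1) - r) * t
    poly += sum(comb(m - 1, i) * comb(n - 1, i) * t**i
                for i in range(2, min(m, n) + 1))
    return sp.expand(poly)

print(sp.expand(h_G(4, 4, 0) - h_complete_bipartite(4, 4)))  # 0
print(h_G(4, 4, 2))   # t**3 + 9*t**2 + 7*t + 1
```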
We have μ(ω_[G_n,n,r])≥ r(n-3)+1.
It is enough to show that the set of the lattice points in ∫(S_G_n,n,r) corresponding to the minimal generators of ω_[G_n,n,r] contains the following:
v_i,j := (1,…,1,\overset{i}{j},1,…,1,\overset{i+n}{j},1,…,1) for i∈{1,…,r} and j∈{1,…,n-2}, that is, the vector whose i-th and (i+n)-th entries equal j and whose other entries equal 1.
We can see that v_i,j belongs to ∫(S_G_n,n,r) since v_i,j∈ H_k^> for any k∈ [2n] and v_i,j∈ H_{{l}}^> for any l∈ [r].
If we can write v_i,j=v'+ρ(e) for some v'∈∫(S_G_n,n,r) and e∈ E(G_n,n,r), then e={i,i+n} since v'∈ H_k^> for any k∈ [2n], which contradicts {i,i+n}∉ E(G_n,n,r).
Therefore, v_i,j cannot be written as a sum of an element in
∫(S_G_n,n,r) and an element in S_G_n,n,r∖0, which is the desired result.
The edge ring [G_m,n,r] is almost Gorenstein if and only if m=n.
If [G_m,n,r] is almost Gorenstein, then we have h_s([G_m,n,r])=1 from <Ref>, so we get m=n.
Suppose that m=n.
It follows from <Ref> and <Ref> that
r(n-3)=e([G_n,n,r]) ≥μ(ω_R)-1 ≥ r(n-3).
Therefore, we have e([G_n,n,r])=μ(ω_R)-1 and [G_n,n,r] is almost Gorenstein from <Ref>.
§.§ Observations and questions on almost Gorenstein edge rings
<Ref> and <Ref> tell us that [G_n,n,r] is almost Gorenstein and its h-vector (h_0,h_1,…,h_s) satisfies the following condition:
h_i=h_s-i for i=0,2,3,…,⌊ s/2⌋. (*)
As far as the authors know, the h-vectors of almost Gorenstein edge rings discovered so far satisfy the condition <Ref>.
For example, the edge ring of a complete graph K_2m is almost Gorenstein (<cit.>), its h-vector has been computed (<cit.>) and satisfies condition <Ref>.
Moreover, the edge ring of a complete multipartite graph K_1,n,n (n≥ 2) is almost Gorenstein (<cit.>), which is isomorphic to a certain Hibi ring (<cit.>).
The h-vector of this Hibi ring satisfies condition <Ref> (<cit.>).
Furthermore, the h-vectors of the edge rings of a certain family of graphs _n, consisting of n triangles that share a single common vertex, have been investigated in <cit.>.
According to <cit.>, [_n] is almost Gorenstein and its h-vector satisfies condition <Ref>.
These results naturally pose us with the following question:
Do the h-vectors of almost Gorenstein edge rings satisfy condition <Ref>?
Actually, the following example gives a negative answer to this question:
Let be the Petersen graph (see <Ref>).
We can see that satisfies the odd cycle condition.
Moreover, we can check
h([];t)=1+5t+15t^2+25t^3+5t^4+t^5
by using (<cit.>).
It follows from <cit.> that [] is not Gorenstein but almost Gorenstein.
Moreover, this h-vector does not satisfy condition <Ref>.
We can still see that the h-vectors of almost Gorenstein edge rings are “almost symmetric”, meaning that the equality h_i=h_s-i holds for all but at most one value of i with 0≤ i≤⌊ s/2⌋.
Unfortunately, that is also not true in general.
Let W_10 be the wheel graph with 10 vertices (see <Ref>).
This graph satisfies the odd cycle condition, and a direct computation
tells us that the h-polynomial of [W_10] is
h([W_10];t)=1+8t+27t^2+30t^3+9t^4+t^5
and that μ(ω_[W_10])=7.
Therefore, we have e([W_10])=μ(ω_[W_10])-1=6, and hence [W_10] is almost Gorenstein from <Ref>.
While these counterexamples exist, we have yet to find the edge ring of a “bipartite graph” that is almost Gorenstein and whose h-vector does not satisfy condition <Ref>.
Do the h-vectors of the almost Gorenstein edge rings of bipartite graphs satisfy condition <Ref>?
plain
|
http://arxiv.org/abs/2409.03235v1 | 20240905041239 | $SLE_6$ and 2-d critical bond percolation on the square lattice | [
"Wang Zhou"
] | math.PR | [
"math.PR",
"math-ph",
"math.CV",
"math.MP",
"82B27, 60K35, 82B43, 60D05, 30C35"
] |
|
http://arxiv.org/abs/2409.03466v1 | 20240905122151 | Panopticon: a novel deep learning model to detect single transit events with no prior data filtering in PLATO light curves | [
"H. G. Vivien",
"M. Deleuil",
"N. Jannsen",
"J. De Ridder",
"D. Seynaeve",
"M. -A. Carpine",
"Y. Zerah"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.IM",
"cs.LG"
] |
Transit detection in unfiltered PLATO light curves
Aix Marseille Univ, CNRS, CNES, Institut Origines, LAM, Marseille, France
[email protected] Institute for Astronomy, KU Leuven, Celestijnenlaan 200D bus 2401, 3001 Leuven, Belgium AIM, CEA, CNRS, Université Paris-Saclay, Université Paris Diderot, Sorbonne Paris Cité, 91191 Gif-sur-Yvette, France
Vivien et al.
To prepare for the analyses of the future PLATO light curves, we develop a deep learning model, Panopticon, to detect transits in high precision photometric light curves. Since PLATO's main objective is the detection of temperate Earth-size planets around solar-type stars, the code is designed to detect individual transit events. The filtering step, required by conventional detection methods, can affect the transit, which could be an issue for long and shallow transits. To protect transit shape and depth, the code is also designed to work on unfiltered light curves.
The model is based upon the Unet family architectures, able to more efficiently extract and combine features of various scale length, leading to a more robust detection scheme. We trained the model on a set of simulated PLATO light curves in which we injected, at pixel level, either planetary, eclipsing binary, or background eclipsing binary signals. We also include a variety of noises in our data, such as granulation, stellar spots or cosmic rays. We then assessed its capacity to detect transits in a separate dataset.
The approach is able to recover 90% of our test population, including more than 25% of the Earth-analogs, even in the unfiltered light curves. The model also recovers the transits irrespective of the orbital period, and is able to retrieve transits on a unique event basis. These figures are obtained when accepting a false alarm rate of 1%. When keeping the false alarm rate low (<0.01%), it is still able to recover more than 85% of the transit signals. Any transit deeper than ∼180ppm is essentially guaranteed to be recovered.
This method is able to recover transits on a unique event basis, and does so with a low false alarm rate. Due to the nature of machine learning, the inference time is minimal; around 0.2 s per light curve of 126 720 points. Thanks to light curves being one-dimensional, model training is also fast, on the order of a few hours per model. This speed in training and inference, coupled to the recovery effectiveness and precision of the model make it an ideal tool to complement, or be used ahead of, classical approaches.
: a novel deep learning model to detect single transit events with no prior data filtering in PLATO light curves
H. G. Vivien1 (https://orcid.org/0000-0001-7239-6700), M. Deleuil1 (https://orcid.org/0000-0001-6036-0225), N. Jannsen2 (https://orcid.org/0000-0003-4670-9616), J. De Ridder2 (https://orcid.org/0000-0001-6726-2863), D. Seynaeve2 (https://orcid.org/0000-0002-0731-8893), M.-A. Carpine3, Y. Zerah1 (https://orcid.org/0000-0003-1786-7367)
Received Month dd, yyyy; accepted Month dd, yyyy
§ INTRODUCTION
§ DEEP LEARNING MODEL
In this paper, we present a DL model able to identify transit signals in a light curve by localizing the position of transit events. We do this in the context of ESA's forthcoming PLATO mission, designed to determine the frequency of Earth-sized planets orbiting Sun-like stars. We opt for a classifier approach, rating the probability that a transit is occurring at each point of a light curve. This allows our approach to retrieve an arbitrary number of transits in a light curve, including the case of mono-transits. Additionally, we rely on the ability of deep learning to extract and classify features to properly identify the events, which allows us to bypass the filtering process.
§.§ Architectures
We implement a custom 1-dimensional version of the Unet family: Unet, Unet++, Unet3+ <cit.>. This architecture is a type of fully convolutional neural network, that adds successive upsampling layers to the usual contracting networks. By combining the features extracted from the contracting during the upsamling process, the model can yield a high resolution output. For our light curves, the output generated acts as a one-to-one map of the input, where each point is classified individually based on neighboring context.
A Unet model can be seen as an auto-encoder with skip connections between the layers of the encoder part and the decoder part. The encoder extracts contextual information from the input, while the decoder builds the output point by point. During the encoding process the input is iteratively down-sampled, allowing a fixed-size kernel to extract information over a larger window at each step. Then, the decoder iteratively upsamples the output of the encoder, combining it with the features previously extracted at various timescales. The output of the decoder is therefore a segmentation map covering the input point-to-point, allowing for precise localization of the object of interest in the input light curve. Besides, a DL model with skip connections is beneficial, as they have been shown to increase training speed, and also allow for deeper network <cit.>.
Because this approach is a point-wise detection, it presents a few advantages. First and foremost, it allows any transit in a light curve to be detected individually, as points are classified based on local context. Second, we can extract the T_0 and duration for an arbitrary number of transits in a given light curve. Third, because the output yields a probability, it is possible to define the confidence level depending on the required certainty to extract the TCEs. Finally, because DL builds an internal noise model, there is no need for prior filtering of the light curves. Bypassing this step ensures that no shallow signal will be removed by mistake, and simplifies the detection process significantly. The theoretical detection of a transit in a light curve by our model is illustrated in Fig. <ref>, and the models architectures are given in Figs. <ref>, <ref> and <ref>.
§.§ Implementation
The three variants, Unet, Unet++ and Unet3+, are illustrated in Figs. <ref>, <ref> and <ref>, respectively. The encoder and decoder can be seen as a series of nodes, either extracting features or combining them, respectively. Each node is built upon a basic convolution block, illustrated in Fig. <ref>. This block consists of three base operations: a convolution, a batch normalization and a rectified linear unit. An additional, optional, drop-out layer can be included at the beginning of the block. The nodes of the backbone make use of either two consecutive blocks, in the case of Unet and Unet++, or a single block, for Unet3+.
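A minimal PyTorch rendering of this block, as we read the description, is given below; the kernel size, dilation, and the placement and type of the optional dropout are our assumptions rather than the authors' exact choices.

```python
# Minimal 1-D convolution block: (optional dropout) -> conv -> batch norm -> ReLU.
import torch
import torch.nn as nn

class ConvBlock1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel=3, dilation=1, dropout=0.0):
        super().__init__()
        layers = []
        if dropout > 0:
            layers.append(nn.Dropout(dropout))
        layers.append(nn.Conv1d(in_ch, out_ch, kernel_size=kernel,
                                dilation=dilation,
                                padding=(kernel - 1) * dilation // 2))  # "same" length
        layers.append(nn.BatchNorm1d(out_ch))
        layers.append(nn.ReLU(inplace=True))
        self.block = nn.Sequential(*layers)

    def forward(self, x):              # x: (batch, channels, time)
        return self.block(x)

x = torch.randn(2, 1, 1024)            # two light-curve segments, one channel
print(ConvBlock1d(1, 8, kernel=3, dilation=4, dropout=0.1)(x).shape)
# torch.Size([2, 8, 1024])
```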
We identify the nodes of the model as x^i,j. The index i corresponds to the depth of the node in the network. It increases with each downsample operation, and decreases with each upsample. Conversely, index j tracks number of upsampling steps to reach a given node. We can therefore easily identify every node in the models. The backbone of the models corresponds to nodes x^i,0. Reciprocally, the decoder is made up of the nodes x^i,j>0. The backbone is common to all model and can be computed using:
x^i,0 = 𝒩(x^input) if i=0,
x^i,0 = 𝒩(𝒟(x^i-1,0)) if i>0,
where 𝒩 is the operation assigned to the node using the default block, described above. 𝒟 corresponds to the downsampling operation (Max Pooling; selecting the maximum value within a certain kernel). In this case, the kernel is set to a length of 2, and results in halving the resolution of the input at each level. Additionally, the number of feature channels also increases at each level. The decoders of each model can then be computed as follow:
x^i,j>0 = 𝒩([ x^i,0, 𝒰(x^i+1,j-1) ]) for Unet,
x^i,j>0 = 𝒩([ [x^i,k]_k=0^j-1, 𝒰(x^i+1,j-1) ]) for Unet++,
x^i,j>0 = 𝒩([ [𝒩(𝒟(x^k,0))]_k=0^i-1, 𝒩(x^i,0), [𝒩(𝒰(x^0,k))]_k=0^j-1 ]) for Unet3+,
where 𝒟 remains the downsampling operation and 𝒰 is the upsampling operation (transpose convolution for Unet and Unet++, upsampling for Unet3+). The upsampling operation is set up so that it brings the data to the same length as the target node. Anything contained within [ ] is concatenated feature-wise.
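To make the indexing concrete, the following sketch assembles a very small plain-Unet variant of these equations, reusing the ConvBlock1d sketch above. The depth, channel counts and kernel sizes are placeholders, not the values used in the paper.

```python
# Tiny 1-D Unet: max-pool downsampling, transposed-conv upsampling, and
# concatenation with the same-depth encoder node, following the equations above.
import torch
import torch.nn as nn
# ConvBlock1d is the block sketched in the previous listing.

class TinyUnet1d(nn.Module):
    def __init__(self, channels=(8, 16, 32), n_classes=2):
        super().__init__()
        c0, c1, c2 = channels
        self.enc0 = ConvBlock1d(1, c0)             # x^{0,0}
        self.enc1 = ConvBlock1d(c0, c1)            # x^{1,0} = N(D(x^{0,0}))
        self.enc2 = ConvBlock1d(c1, c2)            # x^{2,0}
        self.pool = nn.MaxPool1d(2)                # D
        self.up1 = nn.ConvTranspose1d(c2, c1, 2, stride=2)   # U
        self.dec1 = ConvBlock1d(c1 + c1, c1)       # x^{1,1} = N([x^{1,0}, U(x^{2,0})])
        self.up0 = nn.ConvTranspose1d(c1, c0, 2, stride=2)
        self.dec0 = ConvBlock1d(c0 + c0, c0)       # x^{0,2}
        self.head = nn.Conv1d(c0, n_classes, 1)    # per-point class logits

    def forward(self, x):
        x00 = self.enc0(x)
        x10 = self.enc1(self.pool(x00))
        x20 = self.enc2(self.pool(x10))
        x11 = self.dec1(torch.cat([x10, self.up1(x20)], dim=1))
        x02 = self.dec0(torch.cat([x00, self.up0(x11)], dim=1))
        return self.head(x02)                      # (batch, n_classes, time)

print(TinyUnet1d()(torch.randn(2, 1, 1024)).shape)  # torch.Size([2, 2, 1024])
```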
The kernel size of the convolution operation is key on two aspects; (i) the number of trainable parameters and (ii) the coverage of the signal it offers. A larger kernel can increase the quality of the features identification, at the cost of longer training time. Additionally, the kernel must be large enough to encompass recognizable features within the signal. To achieve a good balance between feature quality, training time and feature coverage, we make substantial use of kernel dilation:
k_s = k_t + (k_t - 1)(d - 1)
where k_s is the total length of the kernel, k_t is the number of active parameters in the kernel, and d is the dilation factor, namely, the spacing between active points in the kernel. For a default kernel where all active points are next to each other, the dilation factor is 1. This allows us to increase the size of the kernel for a fixed number of trainable parameters, at the cost of a lower resolution per feature. Typical kernel sizes for each architecture are shown in Table <ref>, for a constant depth of four and the number of initial feature maps set to eight. The number of learnable kernel parameters has a strong impact, while the models appear fairly similar to one another. The Unet3+ version displays the smallest number of parameters for a given number of kernel parameters, showing the advantage of the full skip connections over the nested skip connections of Unet++.
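The kernel coverage implied by this relation is straightforward to tabulate; a one-line helper (ours):

```python
# Effective kernel length: k_s = k_t + (k_t - 1)(d - 1).
def effective_kernel_size(k_t, d):
    return k_t + (k_t - 1) * (d - 1)

# e.g. 9 trainable weights spread with dilation 8 cover 65 time samples
print(effective_kernel_size(9, 8))  # 65
```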
As described above, the goal of the model is to identify transit events directly within the stellar noise. We therefore first limit the model to a binary classification scheme, distinguishing two classes: "continuum" and "event". Each point in the light curve is assigned a likelihood score to belong to either the continuum (0) or an event (1). To retrieve the classification, we need to define a threshold to separate the two classes. Given the intrinsic class imbalance present within our data, due to the short nature of the transits compared with that of the light curve, it is not guaranteed that setting the threshold to 0.5 will yield the best results <cit.>. To more effectively constrain the best threshold value, we evaluate the performance of the model for values ranging from 0.05 to 0.95.
Finally, to compare performances, we train multiple versions of the model over a wide range of parameters. This allows us to compare the aforementioned kernel sizes and feature coverage. We limit the model to a binary cross entropy (BCE) loss function, which has proven effective. We use the AdamW optimizer, with γ=0.001, β_1=0.9 and β_2=0.999. We set the feature dropout rate to 10%.
§ DATASET AND ENVIRONMENT
To prepare a realistic dataset, tailored to the PLATO instrument, while controlling the astrophysical content of the light curves, we took advantage of the mission end-to-end camera simulator <cit.>. The simulator is developed to generate accurate and realistic simulated images as they will be received from the PLATO satellite. It includes a wide range of instrumental noise sources at different levels: platform, camera (with realistic PSFs), and detector. Because stars can be observed by a variable number of cameras and the instrument is complex, accommodating 26 cameras on a single optical bench, we used the simulator's toolkit which, based on the PLATO input catalog <cit.>, enables the simulator to be programmed in a user-friendly way and the observation conditions of a given star to be controlled. In addition to a realistic representation of the instrument, the other advantage of using the simulator is that the signal is injected at pixel level. While this is not fundamental to test the mere detection, it is central to later assess our capacity to separate false positives generated by background eclipsing binaries from bona fide planets.
To build our simulated dataset, we chose stars identified in the PIC as potential targets for the prime sample (P1; main sequence stars with V_mag<11), as the signal-to-noise ratio (SNR) of these stars enables a detection of an Earth-like planet. We then simulate those stars, including various astrophysical signals:
- stellar activity effects that include granulation, stochastic oscillations and stellar spots
- exoplanet transits, simulated with <cit.>
- eclipsing binaries, simulated with <cit.>
We can thus simulate a target combining all of these effects to produce either a transiting planet in front of an active star, or an eclipsing binary on the target, or an eclipsing binary on a nearby contaminant. All the physical characteristics used to generate the signal (masses, radii, effective temperature, orbital period, ephemerides, eccentricity, rotation period of the star, pulsation frequency, etc.) are also documented and saved.
The prime sample stars can potentially be observed by up to 24 cameras, and the simulator generates the resulting signals and light curve for each camera individually. We include all main effects that are currently implemented in the simulator, including the photometry module. At the time when the dataset was generated, only on-board algorithms were implemented by this module. This means that once the full processing chain is complete, the flux is extracted at pixel level using optimal aperture photometry <cit.>. We underline that the photometry for bright stars will eventually be derived by PSF fitting. This prevents us from making use of centroids for the targets. However, since we have control over the simulation, this is a great baseline to evaluate the performance of the method. To reduce the computational cost, the simulations were not performed on a complete PLATO field of view (i.e. simulating full-frame CCD images) but star by star, on a CCD subfield of 10×10 pixels. We also chose to reduce the cadence from the nominal 25 sec to 1 min and limit the simulations to the first four quarters, to cover a one-year time span. We underline that our objective was not to assess the performances of the instrument but to test the ability of our software to detect transit-like events. Depending on the type of simulation (planetary transit, eclipsing binary, number of contaminants...) and the version of the simulator, the computation of a single target, on one quarter and for one camera, takes on the order of ≃ 12 minutes for version 3.6 of the simulator, while previous versions took around ≃ 7 minutes. Finally, still in an effort to save computational resources, we decided to adapt the simulation to the orbital period of the transiting body, and did not generate light curves on a given quarter when no transit is expected to occur. As a result, the number of quarters for a given star and a given astrophysical signal is not constant, but is tailored to the orbital period of the transit signal.
Table <ref> summarises the number of the different astrophysical signals used in this study. Taking into account the fact that simulations cover a one-year time span, and that we forego quarters where no transit is present, we end up with a total of 16 094 quarters that were treated as independent light curves. This is all the more relevant as the periodic nature of transits does not come into play in our approach, and the light curves were not corrected for any trend, such as instrument ageing, or even cosmic ray impacts.
We further filter the dataset by removing edge cases where, due to numerical errors, the transits were not visible in the quarters. We also truncate the dataset to remove cases where non-physical parameters were used to generate light curves. For instance, we remove cases where the stellar radius is >2.5 R_⊙, or where the transit depth is <50 ppm. This leaves us with 14 594 light curves, which we randomly split into two datasets: 85% training and 15% validation, that is, 12 405 and 2 189 quarters, respectively. The final counts of signals in the dataset used are shown in the right column of Table <ref>.
§ PERFORMANCES
Evaluation of the performance of the model can be done in two ways: first, by directly evaluating the raw output against the desired label; second, by assessing the ability of the model to detect transit events, or lack thereof. The former is achieved by computing conventional metrics, such as precision, recall, average precision, F_1 score and Jaccard score (or intersection over union; IOU). The latter is done by comparing the positions of the ground truths of the events to the positions predicted by the model. While the direct approach allows a straightforward evaluation of the model, it does not reflect its actual ability to detect transits. We therefore focus our estimates on the ability of the model to recover transits, as well as its false alarm rate (FAR). We deem a transit to be successfully recovered if an overlap between the prediction and the ground truth exists. The FAR is defined as the fraction of false positives among the total number of predicted events.
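The recovery and FAR bookkeeping described here amounts to simple interval-overlap counting; a sketch of our reading of it:

```python
# A true event counts as recovered if any predicted interval overlaps it;
# the FAR is the fraction of predictions overlapping no true event.
def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def recovery_and_far(true_events, predicted_events):
    recovered = sum(any(overlaps(t, p) for p in predicted_events)
                    for t in true_events)
    false_pos = sum(not any(overlaps(p, t) for t in true_events)
                    for p in predicted_events)
    recovery = recovered / len(true_events) if true_events else float('nan')
    far = false_pos / len(predicted_events) if predicted_events else 0.0
    return recovery, far

# Two injected transits (index ranges), three detections, one spurious:
print(recovery_and_far([(100, 160), (5000, 5060)],
                       [(110, 150), (5010, 5055), (9000, 9020)]))
# (1.0, 0.333...)
```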
Models are trained on A40 GPUs in the in-house cluster of the laboratory. We trained a total of 16 models using the Unet3+ architecture, expected to perform the best. We test multiple initial kernel lengths and trainable parameters, using 4 or 8 initial features, and did this up to 70 training epochs, using batches of 40 light curves per training pass. To find the best performing versions of the models, we check the recovery and FAR performance on the last 20 epochs. We explored two options: a conservative approach that takes the model that has the lowest FAR at the 0.95 confidence threshold, and finding the version of the models that retrieve the most planets for a constant FAR of 1%. We show the summary of the parameters, and their associated results, in Table <ref>.
When taking the models in their conservative regime, we find that our models are able to retrieve more than 80% of our test population, and that with a FAR under 0.1%, less than 1 false positive for 1 000 predictions. When fixing the FAR to 1%, we find that we are able to retrieve 90% of the planets in our test dataset. These performances demonstrate that this approach is not only viable, but beneficial. The inference mechanism is very fast (∼0.2 seconds per light curve on a CPU), allowing for processing large amounts of data, which will be the case of PLATO. In this case, keeping the number of false positives small is key to enabling rapid and accurate processing of the vast amount of light curves.
As highlighted in Table <ref>, we consider three models that offer the best performances. Model A retrieves the largest fraction of the test population, model B yields a FAR of less than 0.01% while still recovering more than 85% of the planets, and finally model C provides a solid compromise between recovery and FAR. We use these models as an illustration for our approach on the population of our dataset. We show in Fig. <ref> the effectiveness of model C at 1% FAR at recovering the planets in our test population. The limiting factor for detection that emerges is the depth of the transits, and their associated SNR. We here compute the SNR after <cit.>:
SNR = δ/σ_CDPP√(n_tr· t_dur/3hr)
where δ is the depth of the transit, σ_CDPP is the combined differential photometric precision, n_tr is the number of observed transits and t_dur the transit duration. While the recovery rate noticeably drops for depths lower than ∼150 ppm (SNR of ∼15), Earth-analogs are detectable by the model. Fig. <ref> (c) shows the depths of the events, where the expected Earth depth is highlighted as a black vertical line, and neighboring planets are recovered at a rate between 25–33%. Additionally, the duration of transits is found to have little impact on the recovery rate (panel d), and crucially, the orbital period also has no impact on transit recovery (panel e). This holds true even for planets with orbital periods longer than a single quarter, indicating that transits are indeed identified on a unique event basis. We therefore find that our approach should be able to identify at least 25% of the Earth-analogs robustly.
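For reference, the SNR expression above is easy to evaluate; the CDPP value in the example below is a placeholder, not a PLATO performance figure.

```python
# Transit SNR as in the expression above (depth and CDPP in ppm).
from math import sqrt

def transit_snr(depth_ppm, cdpp_ppm, n_transits, duration_hr):
    return depth_ppm / cdpp_ppm * sqrt(n_transits * duration_hr / 3.0)

# A single 13-hour, 84 ppm (Earth-like) transit over an assumed 50 ppm CDPP:
print(transit_snr(84.0, 50.0, 1, 13.0))  # ~3.5
```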
We also subsequently train the Unet and Unet++ architectures to compare their performances relative to the best Unet3+ versions. We find that these alternative models perform slightly worse. Namely, the recovery is lower at equal FAR, especially in the small-planet regime. We therefore limit our analysis to the best-performing Unet3+ models.
To better highlight the performances of the models, we illustrate in the top panel of Fig. <ref> the trade-off between the recovery rate and the FAR in a receiver operating characteristic (ROC). We show the three selected models and their compromise, identifying the FAR selected in Table <ref>. We see for each case that the number of planets recovered increases with the FAR. Importantly, we find that even for the lowest possible FAR, here model B with <0.01%, a sizable 85.81% of the population is successfully recovered. The lower panel of Fig. <ref> illustrates the recovery for various depths of transits. It illustrates clearly that the recovery rate rapidly rises above ∼180 ppm, essentially reaching 100% (as also visible in panel c of Fig. <ref>, for model C).
We illustrate the detection of an Earth-analog signal (R_P=1.11 R_⊕, R_S=1.23 R_⊙, δ=83.16 ppm) in Fig. <ref>, using model C. This unique event is recovered with a confidence level of more than 0.65, meaning this planet is detected at a corresponding FAR of less than 0.3%. While not strictly equivalent to the false alarm probability, the FAR is sufficiently analogous to give insight on the likelihood that this event is a true positive.
§ DISCUSSION
HV and MD acknowledge funding from the Institut Universitaire de France (IUF) that made this work possible. This research made use of the computing facilities operated by the CeSAM data center at the LAM, Marseille, France.
§ MODEL ARCHITECTURES
(Figures: architecture diagrams of the Unet, Unet++ and Unet3+ models.)
|
http://arxiv.org/abs/2409.03238v1 | 20240905043849 | Preserving Empirical Probabilities in BERT for Small-sample Clinical Entity Recognition | [
"Abdul Rehman",
"Jian Jun Zhang",
"Xiaosong Yang"
] | cs.CL | [
"cs.CL",
"cs.LG",
"68T50",
"I.2.7"
] |
§ ABSTRACT
Named Entity Recognition (NER) encounters the challenge of unbalanced labels, where certain entity types are overrepresented while others are underrepresented in real-world datasets. This imbalance can lead to biased models that perform poorly on minority entity classes, impeding accurate and equitable entity recognition.
This paper explores the effects of unbalanced entity labels on the BERT-based pre-trained model. We analyze the different mechanisms of loss calculation and loss propagation for the task of token classification on randomized datasets. Then we propose ways to improve the token classification for the highly imbalanced task of clinical entity recognition.
§ INTRODUCTION
Named entity recognition (NER) is a Natural Language Processing (NLP) task of identifying and categorizing entities (such as names, events, things, and places) in a given raw text. An entity may span over just a single word or many continuous words. This process facilitates the automated analysis, search, and organization of extensive text datasets. One of the aims for NER is achieving higher precision for the entity labels with a relatively small number of training samples <cit.>. This creates the problem of imbalance between the entities that are recognized <cit.>.
Within the clinical domain, NER holds particular significance, as it operates amidst a lexicon rich with specialized terminology necessitating accurate interpretation while imposing strict tolerances for errors. Capitalizing on the efficacy of transformer-based language models <cit.>, particularly BERT, has emerged as a dominant deep learning model for NER tasks. These models thrive on their foundational unsupervised pre-training on vast textual data, enabling them to encapsulate intricate linguistic structures that lend themselves to various language processing tasks.
The challenge of imbalanced labels considerably complicates the process of fine-tuning transformer models for Named Entity Recognition (NER). The distribution of entity classes often exhibits a significant skew, leading to disparities in the frequency of different entity types. This scenario frequently results in the underrepresentation of certain entity categories and the overrepresentation of others. Consequently, this inequality poses a hurdle to the model's capacity to generalize effectively to novel and unobserved data, particularly when it comes to the identification of essential yet infrequent entities, such as critical clinical information. In the realm of NER, it becomes imperative to address the issue of unbalanced labels, as it is pivotal to the development of models that ensure precise and equitable recognition of entities across all classes. This endeavor subsequently enhances the comprehensive efficacy and dependability of NER systems within a diverse array of practical applications.
This work makes a twofold contribution. Initially, we introduce a novel empirical bias testing methodology for BERT in token classification. We analyze the implications stemming from the application of arbitrary labels in BERT training. Furthermore, drawing on the observations and insights derived from related studies, we present a binary token labeling approach aimed at mitigating biases unsupported by empirical evidence. This enhancement seeks to augment BERT's capability to accurately discern entities characterized by a relatively low number of samples in contrast to the entities prevailing in the majority class.
§.§ Related Works
The proliferation of text-based data in the biomedical field, such as electronic health records, clinical documents, and pharmaceutical specifications, has led to the widespread adoption of deep learning and Natural Language Processing (NLP) methods for extracting and processing information <cit.>. Additionally, studies have demonstrated that language models can partially encode clinical knowledge <cit.>. Contemporary generalized large-scale language models, which represent the forefront of language technology, exhibit suboptimal performance when deployed in clinical contexts, consequently undermining their reliability for clinical text analysis, as extensively noted in recent scholarly contributions <cit.>.
Therefore, biomedical and clinical NLP pose unique challenges, particularly the need to integrate structured domain knowledge into text representations, which is less prevalent in other domains <cit.>. To ensure the reliability of neural language modeling in the specialized medical field, models must learn directly from domain-specific terminologies rather than solely relying on general text data. As a result, significant research efforts within the medical NLP community have been devoted to integrating information from knowledge graphs into language models <cit.>. As a consequence, NER retains its position as the prevailing technique for clinical text analysis. However, it is noteworthy that the utilization of NER for clinical texts introduces a noteworthy challenge in the form of unbalanced accuracies. This imbalance in performance is intricately linked to the disparate distribution of data across distinct entity categories <cit.>.
The recent advancements in the field of biomedical NER, as highlighted in previous studies <cit.>, predominantly revolve around a restricted set of named entities such as diseases, chemicals, and genes. Nonetheless, it becomes imperative to broaden the scope of consideration to encompass a broader spectrum of biomedical entities. This includes entities pertinent to clinical diagnoses like diseases, symptoms, medical terms, risk factors, and vital signs, as well as epidemiological entities like infectious diseases and patient demographic information.
Certain studies in the biomedical field have explored the applicability of BERT in tasks related to biomedical NER <cit.>, yielding remarkable levels of performance. An analysis of BioBERT, a transformer-based model specifically refined through fine-tuning procedures within the clinical text domain, has unveiled a pivotal insight. It discerns that the principal factor contributing to erroneous inferences generated by the BioBERT model resides in its limited grasp of the domain-specific knowledge <cit.>.
§ BLACKBOX ANALYSIS FOR EMPIRICAL UNCERTAINTY PERSISTENCE
BERT (Bidirectional Encoder Representations from Transformers), introduced by , represents a profound architecture grounded in self-attention mechanisms. It undergoes a pretraining phase utilizing substantial volumes of data, guided by a language modeling objective. This model yields intricate linguistic text representations that have exhibited their utility across a multitude of tasks within the domain of natural language processing. Since its inception, BERT has undergone meticulous examination and practical application across diverse domains <cit.>. One of the downstream tasks for BERT is token classification, i.e., to use its contextualization capability to identify labels for words. BERT produces a 768-dimension vector for each token, processed to take into account a small amount of information about each of the other tokens in the input text. A downstream layer of a neural network can then learn to classify each token into entity categories. During fine-tuning, BERT's pre-trained weights and the final classification layer are modified to match the target task's label set. This process allows BERT to capture intricate contextual information from the input text and refine its predictions according to the token-level classification objectives, leading to robust and state-of-the-art performance across a diverse range of token-level classification tasks. In the case of NER, these labels identify types of entities that are learned from a particular annotated dataset.
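A minimal sketch of this standard setup with the Hugging Face interface is shown below; the checkpoint name and label count are placeholders, and the classification head is freshly initialized.

```python
# Token classification with a BERT encoder and a per-token classification head.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=3)       # e.g. 'O', 'M', 'N'

enc = tokenizer("The patient reported severe chest pain.",
                return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits           # (batch, tokens, num_labels)
probs = logits.softmax(dim=-1)             # per-token class probabilities
print(logits.shape)
```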
Fine-tuning BERT with unbalanced token labels is a significant consideration when adapting the model to token classification tasks where certain classes are disproportionately represented in the dataset. In scenarios where some classes occur much less frequently than others, the standard fine-tuning process can lead to biased models that perform well on majority classes but struggle with minority classes. The Unbalanced labels pose a significant challenge, as skewed distributions of entities in real-world datasets lead to accuracy issues, particularly for underrepresented classes. This imbalance affects model generalization and can hinder accurate recognition of minority entities, necessitating solutions for equitable and effective entity recognition in various applications.
§.§ Maximum Likelihood Dilemma
Because NER datasets can have limited labelling and significant imbalances, we contend that the conventional cross-entropy loss function, while theoretically capable of asymptotically generating the optimal token classifier based on the maximum likelihood principle, would not succeed in delivering satisfactory performance in situations characterized by imbalanced data distributions. In simpler words, if there is a training set comprising 99% of class `O' and the remaining 1% of class `M', there is little incentive for the optimizer to optimize in favour of class `M' if it comes at the cost of a loss for class `O'. Half of this problem is solved by using weighted cross-entropy loss. However, when a calculated loss is back-propagated, the optimizer is likely to mostly adjust the weights that were modified by the majority class because the majority class had a bigger share in the gradient during the forward pass. The layers of the neural network are oblivious to loss at the individual token level because the loss is calculated for the whole batch that contains at least one sentence.
§.§ Exaggerated Empirical Bias in BERT for Token Classification
We test the above-mentioned argument on the clinical entity dataset MACCROBAT, which has 41 annotated entities ranging in occurrences from 10 to 1208 in 200 clinical documents <cit.>. We use 85% of this annotated data to fine-tune the BioBERT v1.1 model for 20 epochs on the token classification task, as shown in Fig. <ref>. Then we use the remaining 15% of the documents to create a histogram of logits distributions and calculate the percentage of all the (TP+FP) positive predictions (represented by A) for each class, as shown in Fig. <ref>.
§.§ Testing BERT with Arbitrary Token Labels
Assuming that there is inherent arbitrariness to language <cit.>, certain factors in languages cannot be learned as hard and fast rules. We argue that if there is fuzziness in the training corpus, a good language model should retain the fuzziness if there is no significant evidence for clarity. We test this assumption on BERT by fine-tuning it on randomly generated token labels for the same clinical text corpus. We replace the original labels with 3 randomly generated classes of labels: 60% of class `O', and 20% each for classes `M' and `N', as shown in Fig. <ref>. Since the labels are randomly assigned, the model should not learn any significant pattern other than the unbalanced amount of labels. We train the BERT-base-cased model for 30 epochs using 85% of the corpus and measure the logits distributions on the remaining 15% of the corpus after each epoch.
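Generating such arbitrary labels is straightforward; a sketch (the seed and token count are arbitrary):

```python
# Random token labels: 60% 'O' and 20% each for 'M' and 'N',
# assigned independently of the text.
import numpy as np

rng = np.random.default_rng(0)

def random_labels(n_tokens):
    return rng.choice(["O", "M", "N"], size=n_tokens, p=[0.6, 0.2, 0.2])

print(random_labels(20))
```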
§ BINARY LABELS FOR TOKEN CLASSIFICATION
When the loss is calculated over all token labels, it gets diluted into a generic loss that does not vary much from batch to batch. The problem of deeper layers being oblivious to the non-variant loss at the final layer can be solved by having only two classes, creating a higher variation in loss so that it can be backpropagated deeper into the model <cit.>.
The weighted cross-entropy loss at the token level is calculated as
l_n= -w_y_nlogexp(x_n, y_n)/∑_c=1^Cexp(x_n, c)
where x_n, c is the value of the logit at the output layer for class c, x_n, y_n is the logit value for the target class, C is the total number classes, w_y_n is the weight of the target class that is calculated as
w_c = 1 - N_c/N
where N_c is the number of tokens for the class c and N is the number of all labelled tokens in the training set.
The mean loss L for the whole batch is
L = ∑_n=1^Bl_n/∑_n=1^Bw_y_n·𝟙{v_n ≠ X }
where B is the number of total tokens in the batch, and X is the masked label conditioned as
v_n= X if y_n ∉{O, C_b}, and v_n=1 otherwise,
where C_b is the label for batch b that is left unmasked along with the true-negative label O.
We performed the uncertainty persistence test on BERT-base-case again using the BTL approach for the randomly labelled dataset. It can be seen in Figures <ref> and <ref> that using BTL the model is less sensitive to the empirical bias as it converges to the maximum-likelihood over the epochs. This makes it possible to intervene in the training process before the maximum likelihood end goal of the optimizer is reached. This approach focuses on the core task of entity presence or absence, allowing the model to learn to distinguish entities from non-entities effectively. Lastly, binary fine-tuning can yield models that are less sensitive to label noise or annotation inconsistencies for a batch.
§.§ Experimentations
Based on the observations made on the randomly labelled dataset in Section <ref>, we tested the BTL approach on the MACCROBAT dataset with the intention that the loss caused by the majority classes should not undermine the loss caused by the small classes, as it did in Figure <ref> when we tested the conventional token classification learning approach. The larger documents in the MACCROBAT dataset are broken into smaller chunks to fit the maximum input size of 512 tokens. This results in 200 documents being divided into 886 passages for the training set and 169 passages for the test set. In addition, a few quantitative entity labels, such as Volume, Mass, Height, and Weight, are merged into a single category.
To achieve a balanced performance across all entity classes we create a clinical NER method with the following measures:
* Weighted cross-entropy using the class weights as calculated in Section <ref>.
* The binary token label (BTL) approach is used to create segregated batches for each entity class. We also test the conventional all-token-labels (ATL) approach where each batch has all entity labels without any masking.
* The number of batches for each entity class is balanced. The minority class batches are repeated more frequently to match the number of batches of the largest class `O'.
* Both models (BTL and ATL) are fine-tuned using pre-trained BioBERTv1.1. The training is run for 20 epochs using an SGD optimizer with a learning rate of 5e-5.
* A latent KNN classifier (17 neighbours) to predict the final entity label using the raw logits from the output layer. The KNN classifier is trained separately after finetuning.
The utilization of K-Nearest Neighbors (KNN) serves the purpose of additional independent calibration after fine-tuning. The outcomes for 33 entities within the MACCROBAT dataset are presented in Table <ref>. Notably, employing the BTL method leads to a substantial increase in unweighted measures such as unweighted accuracy and mean precision across all entities. This improvement is particularly pronounced for entities with small sample sizes.
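A sketch of this post-hoc calibration step, assuming the raw output-layer logits have been collected for a held-out calibration split, could look as follows:

```python
from sklearn.neighbors import KNeighborsClassifier

def fit_logit_knn(calib_logits, calib_labels, k=17):
    # calib_logits: (num_tokens, C) raw logits; calib_labels: gold entity ids.
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(calib_logits, calib_labels)
    return knn

# Final predictions are taken from the KNN rather than from the argmax of the logits:
# preds = fit_logit_knn(calib_logits, calib_labels).predict(test_logits)
```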
Figure <ref> shows the distribution of logits for the test set using the BTL approach. The number of positive predictions for small-sample entity classes is much closer to the empirically expected number of positive predictions than in the distributions of the ATL model (Figure <ref>). Another observable difference from Figure <ref> is the wider spread of the distributions. This is due to the increase in the entity-specific loss variation discussed at the start of this section.
In Table <ref>, a comparison of F1 scores between the proposed method and the baseline is provided. However, it is important to clarify that the baseline's F1-score represents the overall weighted F1 score. When examining performance metrics for selected individual entities, a marked distinction from the baseline results becomes evident. The advantage of BTL is the significant increase in unweighted accuracy, which is a better measure of balanced accuracy across all entities; however, it is rarely reported by other works.
§ LIMITATIONS
This study conducted an analysis of the empirical biases inherent in BERT's NER token classification. A novel black box testing method was introduced to assess empirical biases, independent of linguistic content, with the caveat that matching expectations through empirical evidence is valuable in domains with established parameters. Subsequently, insights gleaned from the black box testing were leveraged to enhance NER performance on a heavily imbalanced clinical dataset. While there was an overall enhancement in recognition metrics, it's worth noting that the performance improvements were not uniform across all entities. Certain entity labels, such as 'Coreference,' experienced a decrease in accuracy when utilizing the proposed approach.
§ ETHICAL STATEMENT
Our conducted experiments and the model framework we propose are designed to promote investigation within the clinical information extraction domain while prioritizing the prevention of privacy breaches. The data utilized in our study is publicly accessible and has been thoroughly de-identified. Although recent studies have demonstrated the challenge of reconstructing sensitive personal information from such data, a minimal potential risk exists for future models to achieve this. It's important to note that we have not made any modifications to the data's content that would enhance the probability of such an eventuality, thus ensuring the mitigation of any risks related to the leakage of private information.
|
http://arxiv.org/abs/2409.03431v2 | 20240905112341 | UV-Mamba: A DCN-Enhanced State Space Model for Urban Village Boundary Identification in High-Resolution Remote Sensing Images | [
"Lulin Li",
"Ben Chen",
"Xuechao Zou",
"Junliang Xing",
"Pin Tao"
] | cs.CV | [
"cs.CV"
] |
UV-Mamba: A DCN-Enhanced State Space Model for Urban Village Boundary Identification in High-Resolution Remote Sensing Images
Lulin Li^1,*, Ben Chen^1,*, Xuechao Zou^2, Junliang Xing^3, Pin Tao^1,3,†
^1School of Computer Technology and Applications, Qinghai University, Xining, China
^2School of Computer Science and Technology, Beijing Jiaotong University, Beijing, China
^3Department of Computer Science and Technology, Tsinghua University, Beijing, China
^*Lulin Li and Ben Chen contribute equally. ^†Corresponding author.
{lulinlee, benchen1997}@163.com, [email protected], {jlxing, taopin}@tsinghua.edu.cn
September 9, 2024
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Owing to the diverse geographical environments, intricate landscapes, and high-density settlements, the automatic identification of urban village boundaries using remote sensing images is a highly challenging task. In this paper, we propose a novel and efficient neural network model called UV-Mamba for accurate boundary detection in high-resolution remote sensing images. UV-Mamba mitigates the memory loss problem in long sequence modeling, which arises in state space models (SSM) with increasing image size, by incorporating deformable convolutions (DCN). Its architecture utilizes an encoder-decoder framework, comprising an encoder with four deformable state space augmentation (DSSA) blocks for efficient multi-level semantic extraction and a decoder to integrate the extracted semantic information. We conduct experiments on the Beijing and Xi'an datasets, and the results show that UV-Mamba achieves state-of-the-art performance. Specifically, our model achieves 73.3% and 78.1% IoU on the Beijing and Xi'an datasets, respectively, representing improvements of 1.2% and 3.4% IoU over the previous best model, while also being 6 × faster in inference speed and 40 × smaller in parameter count. Source code and pre-trained models are available in the supplementary material.
Urban Village, High-resolution Remote Sensing Images, State Space Model (SSM), Segmentation
§ INTRODUCTION
Urban villages, as historical remnants in the urbanization process, present significant challenges in urban planning and management because of their low-rise and densely packed buildings, substandard environmental conditions, and outdated municipal infrastructure <cit.>. The issue of urban villages not only concerns the aesthetic and cleanliness of the city's image but also directly affects residents' quality of life, public safety, and social stability <cit.>. Traditional methods of collecting information on urban villages mainly rely on manual field surveys which is time-consuming and labor-intensive <cit.>.
To achieve the automatic identification of urban village boundaries, the exploration of image segmentation techniques using satellite imagery has garnered widespread attention <cit.>. Several studies have employed advanced semantic segmentation models, including Fully Convolutional Networks (FCN) and U-Net, to map urban village areas <cit.>. <cit.> utilizes adversarial learning to fine-tune the semantic segmentation network, thereby adaptively generating consistent outputs for input images across various domains. UisNet <cit.> enhances segmentation accuracy by integrating features from both remote sensing imagery and building contours through a spatial-channel feature fusion module. UV-SAM <cit.> capitalizes on the strengths of both a general model and a specialized model to apply the zero-shot capabilities of SAM <cit.> to the task of urban village boundary identification.
However, accurately delineating the boundaries of urban villages in existing research is challenging due to two primary factors. First, the unique architectural characteristics of urban villages, including high density, narrow streets, and diverse building forms, pose inherent difficulties. Second, the limitations of CNN in capturing global information and the computational complexity of transformers <cit.>, as shown in Fig. <ref>, further complicate this task. Moreover, when ultra-high-resolution (UHR) remote sensing images are divided into smaller patches, spatial features and dependencies can be lost.
To address the above issues, we propose the UV-Mamba model, which leverages the global modeling capability of SSM with linear complexity and the spatial geometric deformation ability of deformable convolutions. Our model mitigates the memory loss issue of SSM in long sequence modeling by employing DCN to allocate greater weights to regions of interest, thereby improving SSM's capacity to retain information across extended sequences. The main contributions of our architecture are summarized as follows:
* We introduce UV-Mamba, a novel and efficient architecture based on SSM that effectively preserves linear computational complexity while delivering enhanced global modeling capabilities.
* We design a DSSA module that mitigates memory loss in SSM during long-distance modeling as the sequence grows, by assigning greater weights to regions of interest using deformable convolutions.
* We conduct extensive experiments on two Chinese cities, Beijing and Xi'an, and the results show that our method achieves superior performance, surpassing the state-of-the-art CNN-based and Transformer-based models.
§ METHODOLOGY
§.§ Preliminaries: State Space Model
The state space model is a concept derived from linear time-invariant systems in modern control theory. It maps a one-dimensional input signal x(t) ∈ℝ to an N-dimensional latent state h(t) ∈ℝ^N, and then projects it to a one-dimensional output signal y(t). This process can be described by the following linear ordinary differential equations (ODE):
h'(t) = 𝐀h(t) + 𝐁x(t),
y(t) = 𝐂h(t),
where 𝐀∈ℝ^N × N is the state transition matrix, and 𝐁∈ℝ^N and 𝐂∈ℝ^N are the projection matrices.
To better adapt to the discrete inputs in deep learning such as text sequences, 𝐀 and 𝐁 are discretized using a zero-order hold (ZOH) technique with a learnable time scale parameter Δ, which transforms the continuous SSM into a discrete SSM. The process is as follows:
𝐀̅ = exp(Δ𝐀),
𝐁̅ = (Δ𝐀)^-1(exp(Δ𝐀) - 𝐈) ·Δ𝐁.
After discretization, the Eq. <ref> can be represented as:
h_k = 𝐀̅h_k-1 + 𝐁̅x_k,
y_k = 𝐂h_k,
where 𝐀̅ and 𝐁̅ represent the discretized versions of the 𝐀 and 𝐁 matrices, respectively. h_k-1 represents the previous state information and h_k represents the current state information.
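To make the discretisation and the recurrence concrete, a naive (non-optimised, single-channel) sketch in PyTorch is shown below; the actual selective-scan implementations used in SSM models are considerably more efficient.

```python
import torch

def zoh_discretize(A, B, delta):
    # A: (N, N), B: (N, 1), delta: scalar step size.
    dA = delta * A
    A_bar = torch.matrix_exp(dA)
    B_bar = torch.linalg.solve(dA, (A_bar - torch.eye(A.shape[0])) @ (delta * B))
    return A_bar, B_bar

def ssm_scan(A_bar, B_bar, C, x):
    # x: (L,) scalar input sequence; C: (1, N). Returns the (L,) output sequence.
    h = torch.zeros(A_bar.shape[0], 1)
    ys = []
    for x_k in x:
        h = A_bar @ h + B_bar * x_k      # h_k = A_bar h_{k-1} + B_bar x_k
        ys.append((C @ h).squeeze())     # y_k = C h_k
    return torch.stack(ys)
```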
§.§ Architecture Overview
The architecture of the proposed UV-Mamba model, as depicted in Fig. <ref> (a), is composed of three principal components:
A stem module with varying convolutional kernel sizes, a hierarchical multi-path scan encoder, and a lightweight decoder. The stem module, which performs initial feature extraction and downsamples the input image by a factor of 2, consists of four convolutional layers with 7 × 7 and 3 × 3 kernels, padding of 3 and 1, and strides of 2 and 1, respectively. The multi-path scan encoder consists of four deformable state space augmentation (DSSA) blocks, which progressively reduce the feature map size by half at each stage, resulting in feature maps of various scales relative to the model input: H/4×W/4×C_1, H/8×W/8×C_2, H/16×W/16×C_3, H/32×W/32×C_4. The decoder comprises four upsample modules, each incorporating a transposed convolution to upsample the feature map from the encoder by a factor of two, followed by two 3 × 3 convolutions for feature fusion. Finally, bilinear interpolation is used to restore the image to the input size.
§.§ Deformable State Space Augmentation Block
For UHR remote sensing of dense urban environments, two primary challenges are refining pixel-level representation and ensuring robust global modeling for accurate boundary extraction. To address these challenges, we design the DSSA Block, which includes patch embeddings, a spatially adaptive deformable enhancer (SADE), a multi-path scan SSM module (MSSM), and patch merging, as illustrated in Fig. <ref> (b). Notably, our SADE and MSSM modules are stacked twice as intermediate modules. By assigning greater weights to regions of interest through the SADE, the issue of memory loss during global modeling with SSM can be mitigated. This approach achieves linear complexity while enhancing global modeling capabilities beyond those of SSM, enabling more effective differentiation between buildings, as shown in Fig. <ref>.
Multi-path Scan SSM Module (MSSM).
A series of studies <cit.> have demonstrated that in SSM-based models, increasing the number of scanning directions is crucial for achieving comprehensive global modeling capabilities. To better delineate the boundaries between urban villages and adjacent communities, we aggregate scanning results from eight directions (horizontal, vertical, diagonal, and anti-diagonal, both forward and backward) to capture the complex spatial relationships of surrounding structures and to provide a thorough understanding of the contextual environment. To better adapt to varying input sizes, we introduce Mix-FFN <cit.>, which is more effective at providing positional information than traditional positional encoding <cit.>, by applying a 3 × 3 convolution within the feed-forward network.
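As an illustration of the multi-path scanning, the sketch below builds the eight token orderings as index permutations of a flattened H × W feature map; the exact traversal order used inside MSSM may differ.

```python
import numpy as np

def eight_scan_orders(H, W):
    ids = np.arange(H * W).reshape(H, W)
    horizontal = ids.flatten()
    vertical = ids.T.flatten()
    diagonal = np.concatenate([ids.diagonal(o) for o in range(-(H - 1), W)])
    anti_diag = np.concatenate([np.fliplr(ids).diagonal(o) for o in range(-(H - 1), W)])
    forward = [horizontal, vertical, diagonal, anti_diag]
    return forward + [order[::-1] for order in forward]   # add backward scans

# tokens: (H*W, C) feature sequence; one reordered view per scan direction:
# scans = [tokens[order] for order in eight_scan_orders(H, W)]
```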
Spatially Adaptive Deformable Enhancer (SADE). As shown in Fig. <ref> (c), the design of the SADE adopts a structure similar to that of the transformer <cit.>. By leveraging the spatial geometric deformation learning capabilities of the deformable convolution, it more effectively adapts to the diverse spatial distribution characteristics of urban villages. Specifically, we utilize the DCNv4 <cit.> operator for spatial feature enhancement, valued for its fast convergence and processing efficiency. The process is as follows:
𝐲(p_0) = ∑_g=1^G∑_k=1^K𝐰_g𝐦_gk𝐱_g(p_0 + p_k + Δp_gk ),
where G denotes the total number of aggregation groups.
For the g-th group, w_g represents the location-irrelevant projection weights, m_gk is the modulation scalar for the k-th sampling point, x_g denotes the sliced input feature map, and Δp_gk is the offset for the grid sampling location p_k. The extracted features are subsequently aggregated using Mix-FFN, which reduces computational complexity while maintaining the model's representational capacity.
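A simplified stand-in for this operator is sketched below using torchvision's modulated deformable convolution (DCNv2-style); DCNv4's grouped aggregation and its different treatment of the modulation scalars are omitted here for brevity, so this is an approximation of the equation above rather than the actual DCNv4 operator.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformableEnhancer(nn.Module):
    def __init__(self, channels, k=3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.empty(channels, channels, k, k))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        # One conv predicts the 2*k*k sampling offsets and the k*k modulation scalars.
        self.offset_mask = nn.Conv2d(channels, 3 * k * k, kernel_size=k, padding=k // 2)

    def forward(self, x):
        o1, o2, m = torch.chunk(self.offset_mask(x), 3, dim=1)
        offset = torch.cat([o1, o2], dim=1)          # (B, 2*k*k, H, W)
        mask = torch.sigmoid(m)                      # (B, k*k, H, W)
        return deform_conv2d(x, offset, self.weight, padding=self.k // 2, mask=mask)
```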
§ EXPERIMENTS
§.§ Experimental Settings
Dataset.
We use datasets from Beijing and Xi'an <cit.>, two Chinese cities with distinct architectural styles due to their significant geographical differences. Both cities feature a mix of traditional and modern buildings, creating complex urban structures that challenge our model in extracting urban village boundaries. The Beijing dataset contains 531 images, while the Xi'an dataset comprises 205 images. We divided these datasets into training, validation, and test sets in a 6:2:2 ratio. Each image has a resolution of 1024 × 1024 to ensure the inclusion of the main urban information.
Implementation details. Our experiments are conducted on a single Tesla V100 GPU, training for 100 epochs. To prevent overfitting and improve generalization, we apply a consistent data augmentation strategy across all experiments, which included random rotation, horizontal flipping, and vertical flipping.
The model is pre-trained on the Cityscapes dataset <cit.>, then fine-tuned on the urban village dataset. During pre-training, we utilize the Adam <cit.> optimizer with an initial learning rate of 0.001. The learning rate is warmed up for the first 10 epochs and subsequently decreases gradually to 1e-6. Cross Entropy loss <cit.> is utilized during the pre-training phase to optimize the model's performance.
The pre-trained weights are then fine-tuned on the urban village dataset. For fine-tuning on the Beijing and Xi'an datasets, we continue to use the Adam optimizer. The learning rate is warmed up for the first 30 epochs and then gradually decreased to 1e-6. Specifically, for the Beijing dataset, we set the learning rate to 0.0004 and use the Dice loss function <cit.>. For the Xi'an dataset, the learning rate is set to 0.0002 and the Cross Entropy loss function is employed. The models' accuracy is evaluated using Intersection over Union (IoU), accuracy (ACC) and overall accuracy (OA). The efficiency is assessed by the number of parameters (Params, M) and floating-point operations (FLOPs, G), denoted as #P and #F, respectively, in the table for brevity.
§.§ Ablation Studies
Image Size: To assess the impact of contextual information and spatial features on urban village boundary detection, we evaluate the model's performance using input images of varying sizes, with the results presented in Table <ref>. The results demonstrate that as image size increases, the accuracy of urban village detection consistently improves, likely due to the continuous spatial distribution of these areas. This finding highlights the importance of utilizing UHR remote sensing images for precise boundary detection.
DSSA Module: To evaluate the effectiveness of the DSSA module in UV-Mamba, we present the segmentation performance of different model variants on Beijing and Xi'an datasets in Table <ref>. The results indicate that the model's performance decreases by 2.4% and 5.5% without the SADE module. Similarly, without the MSSM module, performance drops by 2.8% and 6.7%. These findings underscore the importance of robust global modeling capabilities for accurate urban village segmentation. Furthermore, we experiment with various positional combinations of the SADE and MSSM modules within the DSSA module. The results showed that when the SADE and MSSM modules are arranged in parallel, the performance is suboptimal, achieving 72.7% and 74.9% IoU, respectively. Conversely, placing the SADE module after the MSSM module results in the worst overall model performance, suggesting that the long sequence modeling limitations of the SSM lead to feature map information loss, thereby misleading the model. In summary, these results indicate that the SADE can partially complement the global modeling capabilities of the SSM, helping to mitigate the memory loss issue when modeling high-resolution remote sensing images with the SSM.
§.§ Comparison to the State-of-the-Arts
As illustrated in Table <ref>, UV-Mamba outperforms the advanced urban village identification models <cit.>, achieving state-of-the-art performance on both the Beijing and Xi'an datasets. The visualized segmentation results are presented in Fig. <ref>. Regarding segmentation accuracy, our model demonstrates a 1%-3% improvement in IoU across the two datasets compared to the previous best urban village boundary identification model, UV-SAM, while using a parameter size that is 40 × smaller. Similar enhancements are observed in the ACC and OA accuracy metrics.
§ CONCLUSION
In this paper, we introduce the UV-Mamba model, which mitigates memory loss in long sequence SSM modeling, maintaining global modeling capabilities with linear complexity for precise segmentation and localization of urban village buildings in dense environments. We anticipate that our research will provide significant technical support for the modernization of urban villages, thereby advancing urban development towards increased livability, harmony, and sustainability.
|
http://arxiv.org/abs/2409.02525v1 | 20240904083302 | A Topic-wise Exploration of the Telegram Group-verse | [
"Alessandro Perlo",
"Giordano Paoletti",
"Nikhil Jha",
"Luca Vassio",
"Jussara Almeida",
"Marco Mellia"
] | cs.SI | [
"cs.SI"
] |
Politecnico di Torino
Torino
Italy
[email protected]
Politecnico di Torino
Torino
Italy
[email protected]
Politecnico di Torino
Torino
Italy
[email protected]
Politecnico di Torino
Torino
Italy
[email protected]
Universidade Federal de Minas Gerais
Belo Horizonte
Minas Gerais
Brazil
[email protected]
Politecnico di Torino
Torino
Italy
[email protected]
§ ABSTRACT
Although currently one of the most popular instant messaging apps worldwide, Telegram has been largely understudied in the past years.
In this paper, we aim to address this gap by presenting an analysis of publicly accessible groups covering discussions on topics as diverse as Education, Erotic, Politics, and Cryptocurrencies. We engineer and offer an open-source tool to automate the collection of messages from Telegram groups, a non-straightforward problem. We use it to collect more than 50 million messages from 669 groups.
Here, we present a first-of-its-kind, per-topic analysis, contrasting the characteristics of the messages sent on the platform from different angles — the language, the presence of bots, the type and volume of shared media content. Our results confirm some anecdotal evidence, e.g., clues that Telegram is used to share possibly illicit content, and unveil some unexpected findings, e.g., the different sharing patterns of video and stickers in groups of different topics. While preliminary, we hope that our work paves the road for several avenues of future research on the understudied Telegram platform.
§ INTRODUCTION
Telegram has experienced remarkable growth in the past years, becoming one of the most popular instant messaging apps in the world. In July 2023, it surpassed the mark of 800 million monthly active users worldwide<cit.>. Telegram offers several features to its users, who can organize themselves into different spaces of communication such as groups (many-to-many) or channels (one-to-many).
Yet, the literature on Telegram is still limited in breadth. Targeting publicly accessible groups and channels, most prior works focused on textual content (e.g. news, hate speech), specific groups (e.g., terrorists <cit.>) or countries (e.g., Iran <cit.>), and often a single topic of discussion (e.g., far-right politics <cit.>).
In contrast, we are driven by the hypothesis that user activity patterns on Telegram groups may be influenced by the main topic of discussion. We aim to offer a broad analysis of the platform usage by providing a first-of-its-kind, multi-faceted, topic-wise exploration of usage patterns on popular Telegram groups.
We use our crawler to gather data from more than a thousand open groups and select 669 of them with at least 100 active users, distributed across 10 different topics like Politics, Cryptocurrency, Video and Films, etc., where users discuss about some specific topic. In total our data covers around 51.6 M messages and 1.4 M distinct users over a two-month observation period.
We analyse our data aiming to precisely measure and contrast user activity patterns in groups across different topics, analysing the mix of languages, the footprint of official Telegram bots, the diverse habits in sharing media (e.g., videos, audios, images, GIFs) and in pointing to external content via URLs. We overall witness very different behavioural patterns; some are expected (e.g., the large usage of emojis in all topics and languages, or the sharing of possibly illicit content), others are more surprising (e.g., users in Darknet post much longer messages than users in the other topics; or videos published in Video and Films are much longer than those published in Erotic, the latter having higher resolution).
Although preliminary, our results expose a very diverse universe. We believe that our work offers notable insights into the Telegram platform usage and constitutes a first step toward better understanding how users behave on such a platform.
§ RELATED WORK
The characteristics and dynamics of messaging platforms have attracted a lot of attention. Notably, prior studies analysed content properties <cit.> and information spread <cit.> on WhatsApp's groups, hinting at the catalytic role of the platform in various real-world events <cit.>.
More recently, attention has been dedicated to groups and channels on Telegram, as the platform's popularity increases across the globe. Some studies were interested in the inner workings of the mobile application <cit.>, and its use by particular user populations, such as Iranian immigrants <cit.>, terrorist organizations <cit.>, extremist groups <cit.>, or particular countries (e.g., Iran and Russia <cit.>). Others analysed the formation of communities within Telegram channels <cit.> and their connection to information spread <cit.>. Some other efforts studied content properties, limiting to textual content, and usage patterns, with attention given to news content <cit.>, hate speech and abusive language <cit.>, as well as the presence of fake channels (i.e., those impersonating important services or persons) <cit.>. The use of Telegram to perform illicit activities (e.g., pump-and-dump activities in cryptocurrency markets <cit.>, manipulation of social media popularity <cit.>) has also been previously addressed. Overall, previous works focused on the information people exchange on Telegram, in groups and channels of a specific topic.
Only Morgia et al. <cit.> used TGStat to gather channels associated with multiple topics. Yet, they did not distinguish between such topics and aggregated all of them to discover fake channels.
In contrast, we here offer a topic-wise analysis of common user activities in Telegram groups. Rather than focusing on the type of exchanged information, or how it spreads, we show how differently people leverage different features (e.g., media types, links to external sites) to interact with each other when discussing different topics.
§ CRAWLER AND DATA COLLECTION
Given our interest in user interactions,
our data collection effort is focused on Telegram public groups, i.e., public chats where all the members can send messages.
To collect the data, we design an open-source, two-stage crawler that we offer to the community.[The code and data will be made available upon publication of the paper.]
At the first stage, the tool periodically crawls the TGStat website to discover public Telegram groups on various topics. At the second stage, the tool crawls Telegram, by joining the discovered groups and collecting all shared messages.
§.§ TGStat crawling
TGStat is a mostly undocumented service that catalogues popular Telegram groups and channels worldwide.
Currently, TGStat's database covers almost 1.9 M channels and groups <cit.>, which are categorised into 48 pre-defined topics.
For each topic, TGStat shows the lists of
the top-100 groups/channels according to various metrics. Like the whole database, these lists are dynamically updated.
Each group is characterised by some metadata, including the group name, its topic, its language, and the monthly Active Users (AU), i.e., the number of unique users who have written at least one message inside the group in the past month.
We extract information from TGStat engineering a Python-based crawler using the BeautifulSoup Python package <cit.>. We periodically run the crawler to automatically extract the lists of groups.
This allows us to grow the group lists in those topics of interest to us (see discussion in Section <ref>).
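A minimal sketch of this stage is shown below; the URL pattern and the selectors are placeholders (TGStat's markup is undocumented and changes over time), so the snippet only illustrates the fetch-and-parse loop rather than the exact crawler.

```python
import requests
from bs4 import BeautifulSoup

def crawl_topic(topic_slug):
    # NOTE: URL pattern and selector below are illustrative assumptions, not TGStat's actual markup.
    url = f"https://tgstat.com/{topic_slug}"
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    groups = []
    for card in soup.select("a"):            # in practice, narrow this to the group cards
        href = card.get("href", "")
        if "t.me/" in href or "@" in href:
            groups.append({"name": card.get_text(strip=True), "link": href})
    return groups
```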
Most prior studies of Telegram searched for links to existing groups in social media, news and even word-of-mouth <cit.>. Such an approach potentially leads to strenuous crawling work, especially if one is interested in covering diverse topics (as we do here). TGStat offers, on a single platform, easy access to a large number of already categorised groups, with dynamically updated activity metrics and topics. Such information allows us to narrow our focus to the most relevant groups, besides enabling a per-topic analysis. Indeed, a few recent studies have used TGStat as a starting point to explore Telegram <cit.>. Yet,
unlike these studies which gathered a single snapshot of listed groups, we here continuously monitor TGStat by growing the initial list of groups over multiple days. Moreover, given our interest in the per-topic analysis, we first estimate how reliable TGStat categorisation is, a neglected step in past works (see Section <ref>).
§.§ Telegram crawling
Given a list of previously discovered groups, our crawler automatises the group join and message collection tasks. We rely on the Telethon Python package <cit.> and design a scalable tool based on threads: a master instructs workers to join (and leave — if desired) a group, check if a pending request for join has been accepted, collect new messages, or just wait. For scalability, we use multiple Telegram IDs, each associated with a worker.
The master maintains a list of groups to join
and instructs workers to join these groups and collect all messages from a desired initial date until the present. We store the collected information in a MongoDB database for later processing.
We instrument our crawler to join and stay in groups discovered on TGStat. To refresh the collection of messages, workers download only the new messages since the last retrieved snapshot.
For every group and message, the crawler stores all the returned information in JSON format in the MongoDB instance. In this paper, we focus on the following message information: sender user's identifier, message body, message time, and possible media contained in the message (image, video, GIF, poll, etc.).
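A stripped-down version of one worker is sketched below (credentials, group link, and field names are placeholders; error handling, rate limiting, and the master/worker coordination are omitted):

```python
import asyncio
from telethon import TelegramClient
from telethon.tl.functions.channels import JoinChannelRequest
from pymongo import MongoClient

API_ID, API_HASH = 12345, "your_api_hash"   # placeholder Telegram credentials
messages = MongoClient("mongodb://localhost:27017")["telegram"]["messages"]

async def collect(group_link, since=None):
    async with TelegramClient("worker0", API_ID, API_HASH) as client:
        entity = await client.get_entity(group_link)
        await client(JoinChannelRequest(entity))
        # reverse=True iterates from the oldest message after `since` to the present.
        async for msg in client.iter_messages(entity, offset_date=since, reverse=True):
            messages.insert_one({
                "group": entity.id,
                "sender": msg.sender_id,
                "text": msg.message,
                "date": msg.date,
                "media": type(msg.media).__name__ if msg.media else None,  # metadata only
            })

asyncio.run(collect("https://t.me/some_public_group"))
```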
§.§ Crawler design challenges
Telegram implements several countermeasures to limit API abuse, notably: i) a limit of 500 groups a given Telegram ID can join; ii) an unspecified upper limit on the rate to join new groups which, if not respected,
causes a lengthy temporary ban<cit.>;
iii) a permanent ban, issued without any notice, of newly activated Telegram IDs.
Respecting these limitations requires ingenuity when designing the crawler. First, we declared our intentions to the official Telegram support channel.
Second, we carefully limited the group joining rate. Third, we used multiple Telegram IDs, each associated with a worker thread to scale the data gathering.
Telegram offers the possibility of setting up administration bots (known as Telegram bots) that ease group management. Captcha protection bots are popular for filtering fake user bots, i.e., actual Telegram accounts used to programmatically spam messages in open groups. Such captcha protection bots may kick users if they do not solve the captcha after a specific time. However, other bots or administrators might enforce different rules or criteria for group participation. Whenever we were removed from a group, we respected the administrators' willingness and did not try to join the group again.
Similarly, to respect the privacy indications of the group administrators, we only consider groups where “auto-delete” functionality is not enabled.[The auto-delete setting erases messages for all participants either 24 hours or 7 days after sending. ]
Our crawler can collect up to thousands of messages per second per worker and join about tens of channels per hour without causing Telegram rate limitations.
§ TOPIC CHARACTERISATION
§.§ Data collection and filtering
On April 1st, 2024 we
collected
the top-100 groups
for all TGStat topics. Out of these, we select the subset of topics in which we are able to join at least 10 English language groups, a condition that allows us to manually validate the accuracy of TGStat's topic labelling.
§.§.§ Topic selection
Out of the 48 topics, 12 topics have 10 or more English groups. We keep crawling TGStat every week to refresh the lists of top-100 groups for these 12 topics to observe how those lists change over time. We stop on May 1st, 2024, discovering 1,368 groups in total. We notice the largest growth in Erotics, Cryptocurrencies and Bookmaking, where we find around 20% new groups. This illustrates that taking a single snapshot from TGStat, as prior work <cit.>, would limit the lists of discovered groups.
Feeding these growing lists to the Telegram crawler, 8.6% of groups have the auto-delete function enabled. We abandon them immediately. We instead fail to join 18.8% of groups because the group: (i) expired, or (ii) changed its name before we could join it, or (iii) is moderated and either the administrator did not admit us or a bot kicked us out after joining. We thus successfully join and monitor 993 groups overall. For each tracked group, we collect all messages starting from March 1st to April 30th.[For this work, we limit the collection period to avoid overloading the Telegram servers.] In total, we collect more than 50 M messages, with about 1 M new messages gathered each day.
§.§.§ TGStat topic verification
As mentioned, we evaluate whether the per-topic categorisation provided by TGStat is reliable. To that end, we check if the actual topic of discussion is (i) coherent with the topic assigned by TGStat and (ii) consistent across time. We pick all 206 English groups. For each group, we select three sets of 30 consecutive messages, each set separated from the others by ten days. We split the 206 groups into three partitions (with TGStat's topics evenly distributed among partitions) and assign each to a human verifier who checks if the TGStat topic is consistent with the contents of the messages. In case of doubts, the verifier asks for the support of the other two.
TGStat's topic assignment proves mostly correct, with two exceptions: the Courses and guides groups are mostly filled with spam; and the Economics groups mostly host discussions about cryptocurrencies, for which a dedicated topic already exists. We thus discard these two topics' remaining 166 groups, ending up with 827 groups.
§.§.§ Selecting active groups
To guarantee that groups are active and sufficiently diverse, we keep groups that have at least 100 active users, i.e. users who sent at least one message in the two-month observation period.
From 827 groups, we discard 158 of them, ending with 669 groups, as detailed in column 3 of Table <ref>. For comparison, the total numbers of groups discovered in TGStat in each topic is shown in column 2.
The table also details the average number of users per group, the percentage of active ones, the total number of messages and the average number of messages per active user. Figures vary widely, showing already very different interests, engagement and activity levels across the topics.
For the sake of completeness, Figure <ref> in the Appendix <ref> details the breakdown of the various cases one can face when trying to collect messages from Telegram. Depending on the topic, we observe various failure cases which may significantly reduce the number of groups to follow.
§.§ Per-Topic Characterisation
We characterize our dataset by extracting various features from each group and then aggregating them on a per-topic basis (i.e., per-group macro average). This allows us to avoid the bias induced by large groups.
Our goal is to explore how differently users interact on each topic.
§.§.§ Telegram Bot usage and user activity
Telegram group administrators can incorporate bots into their groups. Bots offer a wide array of functionalities, from welcoming newcomers with group rules to responding to user commands, from collecting statistics to moderating messages.
Telegram bots are in fact quite popular: only 10.2% of groups in our dataset do not include bots; in the median, there are four bots per group. Interestingly, 1.5% of groups have 20 bots (the maximum allowed by Telegram). Some bots enjoy significant popularity: Combot<cit.> and MissRose_bot<cit.> are installed in 145 and in 129 groups, respectively. Both provide moderation services, analytics, and anti-spam features.
Bots' footprint is not negligible: in groups with bots, they generate on average 8.6% of messages, with notable variations across topics. Figure <ref> shows, for 4 topics, the Empirical Complementary Cumulative Distribution Function (ECCDF) of the fraction of messages sent by bots in different groups. In Linguistics, bots generate the highest fraction of messages (brown dotted curve). In fact, some bots are integral to learning platforms and merit examination. For instance, Quizbot<cit.> is widely deployed in Linguistics and Education groups (27.5% and 28.6%, respectively). It generates an average of 39.0% and 13.0% of messages. In one Linguistics group it generates 91% of messages. Conversely, Politics groups see the smallest fraction of messages generated by bots (green dotted line), possibly testifying to a higher user engagement[For messages sent by a user account, we cannot distinguish between messages sent by a human or by an automated system.] in political groups than in other topics.
Curiously, there is a quite large fraction of groups with bots that simply collect statistics or moderate the group without sending any message (leftmost part of Figure <ref>).
For the remainder of our analysis, we ignore messages sent by Telegram bots.
Focusing on the amount of messages actual users generate, we observe that a few users send thousands of messages, while the majority are not active (see column 4 in Table <ref>) or send few messages. Indeed, the Empirical Probability Density Function (EPDF) of the number of messages generated by users follows a heavy-tailed shape that can be fit by a Pareto distribution with α=1.9. Remarkably, the fittings of the per-topic EPDFs are very similar, hinting at the universality of these behaviours.
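The tail exponent can be estimated, for instance, with the standard maximum-likelihood estimator for a continuous Pareto distribution; a minimal sketch:

```python
import numpy as np

def pareto_alpha(msg_counts, x_min=1):
    # MLE for the shape parameter alpha of a Pareto(x_min, alpha) distribution.
    x = np.asarray([c for c in msg_counts if c >= x_min], dtype=float)
    return len(x) / np.sum(np.log(x / x_min))
```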
§.§.§ Language
The next question we answer is which languages people speak in each topic and group. For each textual message in a group, we identify its language by employing the FastText language identification library <cit.>, obtaining the distribution of languages for each group.
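A sketch of the per-group language profiling, assuming the off-the-shelf fastText language-identification model file is available locally:

```python
import fasttext

lid = fasttext.load_model("lid.176.ftz")   # fastText language-ID model, downloaded separately

def group_language_share(group_messages):
    counts = {}
    for text in group_messages:
        text = text.replace("\n", " ").strip()   # predict() does not accept newlines
        if not text:
            continue
        label = lid.predict(text)[0][0].replace("__label__", "")
        counts[label] = counts.get(label, 0) + 1
    total = sum(counts.values())
    return {lang: c / total for lang, c in sorted(counts.items(), key=lambda kv: -kv[1])}
```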
In Figure <ref> we report the breakdown of the most popular language for groups on the same topic:
∙ English (a global language) and Russian (Telegram being very popular in Russia) are the two most popular languages.
Their share changes based on the topic. For instance, most groups in Bookmaking and Darknet have Russian messages. Conversely, the majority of Education and Cryptocurrency groups are in English, possibly due to the worldwide interest in such topics.
∙ Despite Telegram's restricted use in Iran, some popular groups are in Persian, especially in Politics. This is in line with the claim that Telegram is among the platforms to evade state censorship in Iran <cit.>.
∙ Telegram is blocked in China. In fact, it does not appear to be popular for discussing topics in Chinese. Yet, few groups have Chinese as the dominant language.
We observe that in almost half of the groups, more than 75% of the messages are written in the same dominant language. Still, in all groups and topics, we observe messages written in other languages, hinting to a global user population.
Linguistics is the topic where groups contain the largest mix of languages, which supports the anecdotal observation of users practising foreign languages and mixing messages in their native language in such
groups. In contrast, both Darknet and Technology stand out with more than 58%
of the groups having more than 80% of the messages in a single language (English). This agrees with the intuition that technology-related discussions are carried over in English.
§.§.§ Message length
Figure <ref> shows the ECCDF of the length of textual messages for each topic. Since message length may be influenced by language, we consider only messages written in English-dominated groups. We observe some great distinctions across topics. On one hand, Darknet groups are dominated by very long messages (80% longer than 100 characters).
A manual check unveils that most messages contain samples of illicit content people trade. The same can be said for Bookmaking and Technologies groups, where long messages describe bookmaking websites, experiences and results of betting, or devices to sell. In Erotic, people advertise their services.
Conversely, Linguistics and Politics groups are dominated by very short messages in which people debate or provide suggestions (80% shorter than 30 characters). In a nutshell, the presence of “advertisement” messages inflates their length, while discussion-driven groups see a predominance of short messages.
Lastly, steps in the distributions signal the presence of repeated automated messages (mostly spam).
§.§.§ Usage of non-textual elements
We now broaden our analysis to consider non-textual elements.
Specifically, we extract, for each group, the fraction of messages containing images, external links, voice messages, polls, GIFs, stickers, videos, and emojis. How are these elements used? To gauge this, we compute the average usage fraction over all groups of a given topic (macro average).
Figure <ref> visually compares two pairs of selected topics using radar charts. Table <ref> and Figure <ref> in Appendix <ref> provides the complete set of results. Some interesting findings emerge:
∙ Politics groups represent the typical average usage of non-textual elements: 20–30% of messages contain emojis; 10% of messages share an image; stickers are more popular than GIFs; few messages contain voice content; polls are a mostly unused feature (present only in Linguistics groups).
∙ Cryptocurrencies groups represent some mixed usage: no videos and voice messages, fewer photos and emojis but more stickers and GIFs.
∙ Groups in Video and Films and Politics have very similar usage patterns (i.e. radar shape), though, surprisingly, the former has fewer videos (present in only 1% of the messages) — see Section <ref>.
∙
Erotic groups present the minimum usage of non-textual elements: no stickers, no GIFs, while photos are found in 6% of messages.
Surprisingly, we see very few links to external platforms. Indeed, readers are invited to contact “advertisers” via private chat.
Notice that sending stickers requires manual actions that are hard to automate.
Also, stickers are commonly used as reactions to other messages. Their prominent use in a group or topic may testify to a larger fraction of messages being sent by real users, or to a more confidential exchange: Erotic, Darknet and Technologies have the smallest fraction of stickers and are dominated by ad-style messages. Conversely, in Bookmaking and Cryptocurrencies people use more stickers as reactions to suggestions.
§ MULTIMEDIA AND EXTERNAL LINKS
We now delve deeper into the usage of specific non-textual elements, notably shared videos and URLs to external sites.
§.§ Video size and duration
We focus our analysis on groups in the three topics with the largest share of videos: Politics, Video and Films, and Erotic. Although Figure <ref> suggests some similarities in the amount of shared videos, a closer examination of the video duration and file size reveals noteworthy differences in goals and types of shared videos.
The left plot of Figure <ref> compares the video duration. Observe how videos shared in Video and Films are notably longer than in other topics,
with peaks roughly around the 60- and 120-minute marks (notice the log-y scale). Manual inspection confirms that people share entire TV series episodes and movies. Their total volume amounts to 7.45 TB of data.
A comparison of video duration with file size — see the right graph of Figure <ref> — exposes a noticeable difference between topics: videos shared in Erotic groups have a significantly higher bitrate (and thus quality), as evidenced by the steeper slope of the regression line.
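The per-topic slope can be obtained with a simple least-squares fit of file size against duration; up to unit conversion, the slope is the topic's average bitrate. A minimal sketch:

```python
import numpy as np

def average_bitrate_mbps(durations_s, sizes_mb):
    # Slope of size (MB) vs. duration (s) ~ average bitrate in MB/s; x8 gives roughly Mbit/s.
    slope, _ = np.polyfit(np.asarray(durations_s), np.asarray(sizes_mb), deg=1)
    return 8.0 * slope
```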
§.§ External URL lookup
Analyzing the sharing of links to external websites (i.e., different from <telegram.me> and <t.me>) allows us to understand
how Telegram users may be redirecting (or driving) attention (and traffic) to other websites.
In Figure <ref>, we present the average frequency at which a domain appears in a given topic. We focus on the union of the 5 most popular domains across each topic. These collect from ≈20% to ≈50% of links — with a fragmented list of other platforms (bottom row).
The three most frequent platforms are social networks: X, YouTube and Instagram. Usage varies a lot: X sees significant use in Cryptocurrency; YouTube and, to a lesser extent, Instagram are the two most transversal platforms. Conversely, in line with their discussion topics, Technology and Software and Applications show a significant usage of GitHub pages. The <telegra.ph> open and anonymous publishing platform is used in Software & Applications, e.g. to share installation guides and tutorials.
§ CONCLUSION
In this paper, we offered a first-of-its-kind transversal analysis of Telegram as observed through the lens of public groups retrieved from TGstat.
We observed a large diversity in how communities use the platform's capabilities, which in turn reflects the type of content and the goals of sharing: for instance, bots may have a huge footprint, with up to 80–90% of bot-generated messages; user message length varies a lot, with self-advertisement-driven content inflating it; video sharing is quite common: in Politics videos are short, while in Video and Films entire movies get shared, with Erotic having the highest quality.
All in all, our work, albeit preliminary, paves the road to investigating and understanding how people use the Telegram platform.
We hope our work can foster discussion towards a deeper understanding of this platform.
§ ETHICS
In our work, we take ethics under utmost consideration throughout the whole process.
In the first place, while crawling TGStat, we respect the page visit rate imposed by the website itself. We keep our data extraction rate (few pages per minute) at a level to minimise TGStat's load. For the same reason, we repeat the crawling only once a week.
Monitoring Telegram groups can also be delicate. First of all, we checked the privacy policy of the Telegram platform, which does not forbid crawling. We also explicitly contacted Telegram's support at [email protected] and [email protected] to declare our intentions, asking them to share with us any restrictions or limitations. We received no answer.
To respect privacy restrictions imposed by users and administrators, we restrict our analysis to public groups where it is common knowledge and well-accepted that anybody with a link to access the chat can participate as a member — if not as an active user. We respect both the groups where the administrator sets the auto-delete functionality and those where admins refuse — or ignore — our join request. We do not monitor nor store any data about these groups.
Considering users participating in groups, we store only their TelegramID (which are randomly generated), and do not store any Personally Identifiable Information such as usernames (which are in any case freely chosen by the users and can change at any time) or profile pictures. Usernames can appear in the message content when a user is mentioned by someone else. We did not make any effort to re-identify any users.
Finally, we are aware that Telegram is sometimes used to share copyright-protected material and illicit content — more so, our results suggest that this behaviour is frequent. To prevent being implicated in such activity, we avoid downloading and storing any actual media such as pictures and videos, only collecting metadata (e.g., video duration and size).
§ APPENDIX
In the appendix, we include some secondary results that can help the reader have a more complete understanding of the context of our analysis.
In Figure <ref>, we show the breakdown of the groups we found on TGStat, and those we actually monitor for the paper.
In Table <ref>, we provide a qualitative explanation of the topics under observation.
In Table <ref>, we present per-topic detailed quantities.
|
http://arxiv.org/abs/2409.02322v1 | 20240903223157 | TimeDiT: General-purpose Diffusion Transformers for Time Series Foundation Model | [
"Defu Cao",
"Wen Ye",
"Yizhou Zhang",
"Yan Liu"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
§ ABSTRACT
With recent advances in building foundation models for texts and video data, there is a surge of interest in foundation models for time series. A family of models has been developed, utilizing a temporal auto-regressive generative Transformer architecture, whose effectiveness has been proven in Large Language Models. While the empirical results are promising, almost all existing time series foundation models have only been tested on well-curated “benchmark” datasets very similar to texts. However, real-world time series exhibit unique challenges, such as variable channel sizes across domains, missing values, and varying signal sampling intervals due to the multi-resolution nature of real-world data. Additionally, the uni-directional nature of temporally auto-regressive decoding limits the incorporation of domain knowledge, such as physical laws expressed as partial differential equations (PDEs).
To address these challenges, we introduce the Time Diffusion Transformer (TimeDiT), a general foundation model for time series that employs a denoising diffusion paradigm instead of temporal auto-regressive generation. TimeDiT leverages the Transformer architecture to capture temporal dependencies and employs diffusion processes to generate high-quality candidate samples without imposing stringent assumptions on the target distribution, via novel masking schemes and a channel alignment strategy.
Furthermore, we propose a finetuning-free model editing strategy that allows the seamless integration of external knowledge during the sampling process without updating any model parameters.
Extensive experiments conducted on a variety of tasks, such as forecasting, imputation, and anomaly detection, demonstrate the effectiveness of TimeDiT.
§ INTRODUCTION
Time series analysis is pivotal in a diverse set of AI applications, such as natural science <cit.>, social science <cit.>, sustainability <cit.>, health <cit.>, etc. These applications are rooted in diverse domains <cit.>, leading to time series with various distributions <cit.> and a diverse set of analysis tasks, such as forecasting <cit.>, imputation <cit.>, anomaly detection <cit.>, etc. Even though considerable progress has been made in developing specialized models optimized for specific scenarios and individual tasks, an open question remains: Can a single time series foundation model excel across domains? Recent initiatives have explored the possibility of universal time series models in the zero-shot setting <cit.>, drawing inspiration from large pre-trained language models in natural language processing (NLP) and computer vision (CV), such as GPT <cit.> and CLIP <cit.>, which are known for their robust transfer learning capabilities. However, due to the fundamentally different semantics between text/images and time series data, the unique challenges of achieving a truly flexible and general-purpose time series model remain an open problem.
Recently, the emergence of LLMs like GPT-4 <cit.> and LLaMA <cit.> suggests the potential for building time series foundation models enabling a general solution to handle multiple time series distributions.
Previous attempts typically build upon the transformer backbone, which has achieved state-of-the-art performance on various time series tasks, particularly in modeling long-term dependencies.
However, the tokenization of time series data for transformers is especially sensitive to variations in data sources and sampling rates. Previous tokenization approaches with different schemes including token patching <cit.>; discretization tokens <cit.> and tokens based on time series features <cit.> have either fragmented the global information or have been constructed in a manner that inherently loses important information. Most, if not all, of them employ a channel independence strategy <cit.> or focus solely on univariate time series. Channel independence strategy, though beneficial in certain contexts, often overlooks the complex inter-temporal and cross-feature dependencies in practical applications and thus presents an opportunity for optimization <cit.>.
Moreover, compared with texts and images, time series exhibit unique characteristics such as missing values <cit.>, irregular sampling <cit.>, multi-resolution <cit.>, etc.
To address these challenges, a foundation model for time series must be capable of demonstrating flexibility across different scales to handle diverse inputs with varying distributions. However, these unique natures and challenges are not covered by the popular well-curated benchmark datasets<cit.>. As a result, most existing works, which are developed and evaluated primarily on these datasets, may not fully address the complexities encountered in real-world time series applications. In addition, time series processes are often governed by underlying physical principles <cit.> and can be guided by domain-specific textual information <cit.>. However, integrating these diverse sources of information into a unified model poses further challenges, as the model must effectively leverage the relevant physics context while adapting to the unique characteristics and distributions of each domain. Addressing each of these issues requires innovative approaches in data preprocessing, model architecture, and training strategies to create models that can seamlessly handle the diverse and complex nature of time series data.
Current approaches to time series modeling lack a unified framework for handling the aforementioned diverse and imperfect data inputs, frequently prioritizing performance on well-curated benchmarks over addressing real-world challenges. Diffusion models, such as DDPM <cit.>, offer a promising solution by framing data generation as a series of conditional transformations, effectively recasting density estimation as sequential reconstruction. Unlike autoregressive methods that generate future tokens sequentially, diffusion models can directly produce high-quality samples through a reverse denoising process. This process can be analogized to solving partial differential equations (PDEs), allowing for the natural incorporation of physics-based knowledge.
This capability, combined with transformers' strength in capturing temporal dependencies, presents an opportunity to develop a more versatile and robust time series foundation model. Such a hybrid approach could effectively address the complexities of real-world data while maintaining the flexibility to adapt to various forecasting tasks and data conditions.
In this work, we introduce —a diffusion transformer-based foundation model equipped with a standardized training pipeline for different shapes of input time series and tailored for diverse distributions and downstream tasks. TimeDiT leverages the Transformer architecture's inherent ability to capture temporal dependencies through its attention mechanisms, while also benefiting from the scalability that allows for increased model capacity crucial for complex time series tasks. By adopting a diffusion model approach, TimeDiT treats time series holistically, avoiding the error accumulation issues common in autoregressive solutions. The model incorporates a novel comprehensive mask mechanism that enables a single, unified foundation model to handle multiple tasks without additional modules or parameters. This design naturally addresses real-world challenges such as multi-resolution data and missing values. During the sampling stage, TimeDiT introduces an innovative strategy to incorporate physics knowledge as an energy-based prior, supported by theoretical guarantees. This approach guides the reverse diffusion process using physics-based constraints, including partial differential equations, resulting in generated samples that adhere to known physical laws and domain-specific requirements, thereby enhancing sample quality and model applicability across various scientific and engineering contexts.
TimeDiT's performance is rigorously evaluated through an extensive experimental setup encompassing over 20 diverse datasets from domains including traffic, weather, finance, etc. The model is benchmarked against more than 25 open-source baselines, ranging from linear-based models to diffusion-based models, transformer-based models, and other forecasting foundation models. These comprehensive experiments cover multiple challenging time series tasks, including in-domain and zero-shot probabilistic forecasting, imputation, anomaly detection, and synthetic data generation. TimeDiT demonstrated state-of-the-art or highly competitive results across these tasks, showcasing its effectiveness and efficiency as a foundation model for various time series applications. Notably, TimeDiT achieved new state-of-the-art CRPS_sum scores on the Electricity and Traffic datasets for probabilistic forecasting. In addition, the results on zero-shot experiments show that our model can be used as a foundation model even without fine-tuning, although fine-tuning may be necessary in some cases. Furthermore, TimeDiT's scalability and adaptability are evident in its ability to incorporate external knowledge, such as physical constraints, during the sampling stage. This feature allows for the generation of samples that better conform to known physical laws and domain-specific requirements. This combination of state-of-the-art performance, adaptability across diverse tasks, scalability, and the ability to incorporate domain-specific knowledge positions TimeDiT as a powerful and versatile foundation model, capable of addressing a wide spectrum of time series challenges and opening new avenues for advanced time series analysis across various fields.
In summary, our contributions are threefold:
* We introduce TimeDiT, a novel diffusion transformer-based foundation model for time series analysis. By combining the strengths of diffusion models and transformers, our approach offers a flexible architecture adaptable to various downstream tasks. The model incorporates a comprehensive mask mechanism for reconstruction pretraining and task-specific fine-tuning, ensuring a standardized training pipeline capable of handling diverse input shapes and distributions.
* Unlike autoregressive approaches, TimeDiT addresses real-world challenges in time series data by directly processing multivariate inputs and employing a denoising process to generate cohesive target time series. This method effectively handles issues such as missing values and multi-resolution data. In addition, TimeDiT can generate time series that adhere to known physical laws and domain-specific requirements, enhancing its applicability in scientific and engineering contexts.
* Evaluated on multiple datasets across different domains and tasks, TimeDiT achieves state-of-the-art or competitive results. It excels in probabilistic forecasting, imputation, anomaly detection, and data generation, showcasing its versatility as a foundation model in both in-domain and zero-shot settings.
§ RELATED WORK
General Purpose Time Series Model
In the past decades, researchers have excelled in designing sophisticated models for specific time series analysis tasks <cit.>. In recent years, however, the emergence of large language models has inspired the development of general-purpose time series models <cit.>, and the field has seen tremendous exploration towards foundation models. <cit.> simply encoded time series as strings, while <cit.> converted time series into language representations by alignment. <cit.> and <cit.> further incorporated decomposition techniques and prompt design and generalized to unseen data and multimodal scenarios. <cit.> worked towards a foundation model from a probabilistic perspective but considered only univariate time series, which rarely appear in real-world applications. Additionally, many studies started to follow a two-stage training paradigm of pretraining and finetuning <cit.>. However, these works mainly focused on the forecasting task <cit.>. <cit.> first adapted GPT2 as a general-purpose time series analysis model and extended it to various time series tasks. <cit.> leveraged VQVAE as a tokenizer for a transformer to handle time series tasks, and <cit.> employed a scaling and quantization technique to embed time series. For a more detailed literature review of general-purpose time series models, please refer to recent surveys and position papers <cit.>.
Diffusion models for Time Series
Despite the growing interest in diffusion models across various scenarios <cit.>, their use in time series analysis is less explored compared to pre-trained language models and transformers. Most existing studies focused solely on forecasting, and the choice of backbone model varies among VAEs <cit.>, RNNs <cit.>, and transformers. CSDI <cit.> utilized a diffusion model for time series imputation. <cit.> incorporated decomposition into the diffusion model to improve interpretability.
Although <cit.> built a diffusion pipeline for multiple tasks with refinement, they still trained separate models for each task. To the best of our knowledge, there has been no exploration of leveraging unified diffusion models for a comprehensive set of time series tasks yet. Please refer to <cit.> for a comprehensive literature review on diffusion models for time series analysis.
§ PRELIMINARIES
§.§ Diffusion Model
In recent years, diffusion models have emerged as a promising approach in generative modeling. A diffusion process is a Markov chain that incrementally adds Gaussian noise to data over a sequence of steps, progressively destroying the data structure in the forward process; the reverse process then learns to recover the structure from noise.
The forward process adds noise to the data 𝐱_0 over a series of timesteps t according to a variance schedule β_t, resulting in a set of noisy intermediate variables 𝐱_1, 𝐱_2, …, 𝐱_T. Each subsequent 𝐱_t is derived from the previous step by applying Gaussian noise:
q(𝐱_t |𝐱_t-1) = 𝒩(𝐱_t; √(1 - β_t)𝐱_t-1, β_t 𝐈)
The reverse process aims to denoise the noisy variables step by step, sampling each 𝐱_t-1 from the learned distribution p_θ(𝐱_t-1|𝐱_t). This distribution, modeled by a neural network parameterized by θ, approximates the Gaussian distribution:
p_θ(𝐱_t-1|𝐱_t) = 𝒩(𝐱_t-1; μ_θ(𝐱_t, t), Σ_θ(𝐱_t, t))
By iterating this reverse process from t=T down to t=0, the model gradually reconstructs the original data from noise. The reverse process learns to predict the mean and covariance of each intermediate distribution, effectively approximating the original data distribution.
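For concreteness, the following minimal PyTorch sketch implements the forward noising and reverse sampling equations above. The linear variance schedule, the fixed reverse variance sigma_t^2 = beta_t, and the eps_model(x, t) noise-prediction interface are illustrative assumptions rather than the exact TimeDiT configuration.

```python
import torch

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    # Linear variance schedule beta_t and cumulative products alpha_bar_t.
    betas = torch.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    return betas, alphas, alpha_bars

def q_sample(x0, t, alpha_bars):
    # Forward process: sample x_t ~ q(x_t | x_0) in closed form.
    # x0: (B, ...), t: LongTensor of shape (B,)
    noise = torch.randn_like(x0)
    shape = (-1,) + (1,) * (x0.dim() - 1)
    a = alpha_bars[t].sqrt().view(shape)
    s = (1.0 - alpha_bars[t]).sqrt().view(shape)
    return a * x0 + s * noise, noise

@torch.no_grad()
def p_sample_loop(eps_model, shape, betas, alphas, alpha_bars):
    # Reverse process: start from pure noise and denoise step by step.
    x = torch.randn(shape)
    for t in reversed(range(len(betas))):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = eps_model(x, t_batch)                      # predicted noise epsilon_theta
        coef = betas[t] / (1.0 - alpha_bars[t]).sqrt()
        mean = (x - coef * eps) / alphas[t].sqrt()       # mu_theta(x_t, t)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise               # sigma_t^2 = beta_t (fixed variance)
    return x
```

Training then reduces to regressing eps_model against the noise injected by q_sample, the same denoising objective that is reused later as a likelihood proxy in the physics-informed editing.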
§ METHODOLOGY
In this section, we present our main contributions: the proposed foundation model, TimeDiT, a diffusion model with a transformer backbone designed for multiple time series tasks, along with uniform masking strategies and the incorporation of physics knowledge and textual information as an extension. We first outline the unified problem setting for multiple downstream tasks and offer an in-depth examination of the model architecture. Subsequently, we delve into the training pipeline with mask strategies, which help to build the training scheme in self-supervised learning for time series. Next, we present how to incorporate external information to improve the model's performance during both the training and inference stages. By doing so, TimeDiT can generate samples that better conform to real-world requirements and enhance its performance on various downstream tasks. These extensions showcase the flexibility and adaptability of our proposed model, making it a powerful tool for a wide range of time series applications.
§.§ Problem Definition
We denote a multivariate time series as 𝐗 = {x_i,j}∈ℝ^K × L, where K is the number of features and L is the length of the time series. Each individual entry x_i,j represents the i-th feature at time step j, for i ∈{1, …, K} and j ∈{1, …, L}.
We define an observation mask 𝐌_𝐨𝐛𝐬 = {m_i,j}∈{0, 1}^K × L, where m_i,j = 0 if x_i,j is missing, and m_i,j = 1 if x_i,j is observed.
Let 𝐱_0^obs∈ X^obs denote the observed subsequence, and let 𝐱_0^tar denote the target subsequence of 𝐱_0^obs, which could be the forecasting target, the imputation target, or the whole sequence depending on the task. Let 𝐱_0^con denote the unmasked partial observations in 𝐱_0^obs, which act as conditions for the masked area 𝐱_0^tar. Subscripts of 𝐱 denote the diffusion timestep, and a subscript of 0 means no noise has been applied to the original data. Formally, the goal of our task is to approximate the true conditional data distribution given the conditional information
q_𝐗(𝐱_0^tar|𝐱_0^con)
with a model distribution p_θ(𝐱^tar_0 |𝐱^con_0), which can be calculated by a diffusion model with conditional information:
p_θ(𝐱_0: T^tar|𝐱_0^con) := p(𝐱_T^tar) ∏_t=1^T p_θ(𝐱_t-1^tar|𝐱_t^tar, 𝐱_0^con), 𝐱_T^tar∼𝒩(0, 𝐈), where
p_θ(𝐱_t-1^tar|𝐱_t^tar, 𝐱_0^con) := 𝒩(𝐱_t-1^tar ; μ_θ(𝐱_t^tar, t |𝐱_0^con), σ_θ(𝐱_t^tar, t |𝐱_0^con) 𝐈).
The mask mechanism 𝐌 plays a critical role in identifying the positions of 𝐱_0^con and 𝐱_0^tar. By leveraging these positional differences, our model adeptly adapts to various downstream tasks, including forecasting, imputation, anomaly detection, etc, within a unified framework.
§.§ Time Series Diffusion Transformer
Figure <ref> shows the overall framework of TimeDiT. We first establish 𝐌_𝐨𝐛𝐬 and 𝐱_0^obs from the given input, which may come from different distributions and contain multivariate sequences, missing values, and multiple resolutions, by injecting placeholders to standardize the input shape across different time series, facilitating more efficient and consistent processing.
Then, the unified time series mask unit adapts to diverse time series scenarios and builds the 𝐱_0^con, 𝐌 and 𝐱_0^tar, with shape ℝ^B × L × K,
to help learn robust representations in a self-supervised manner by reconstructing the original sequence through denoising 𝐱_T^tar. After that, the embedding layer directly treats 𝐱_0^con and 𝐱_0^tar as tokens without any patching, as the diffusion process is designed to handle multivariate input and operate in a continuous token space.
By preserving the integrity of the input time series, TimeDiT ensures that the model can effectively capture and utilize the rich information contained within the data. The block's attention mechanism is designed to autonomously learn cross-channel and temporal correlations through end-to-end training.
Standardized Pipeline
We introduce placeholders within the input sequences to standardize the input shape across different time series, accounting for varying channel numbers K and sequence lengths L. Specifically, we define the maximum channel number K_max such that any input with fewer than K_max channels is padded to have K_max channels, while any input with more than K_max channels is segmented into ⌈K/K_max⌉ blocks, where each block has K_max channels and undergoes independent processing. This segmentation allows our model to manage high-dimensional data efficiently, reducing computational overhead and maintaining the relative positional integrity of the data and consistency across inputs. Additionally, for any input with sequence length less than the designated maximum length L_max, we pad the sequence at the front to achieve the desired length. This standardization is essential for establishing a uniform input structure that enhances processing efficiency and consistency.
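A sketch of this standardization step is given below. The value K_max = 40 matches the training details reported later, while L_max, the padding value, and the function names are illustrative assumptions.

```python
import numpy as np

def standardize(x, k_max=40, l_max=96, pad_value=0.0):
    """Split/pad a (K, L) series into blocks of shape (k_max, l_max).

    Short sequences are padded at the front so the most recent values stay
    at the end; channels beyond k_max are segmented into ceil(K / k_max)
    independent blocks that are processed separately.
    """
    k, l = x.shape
    if l < l_max:
        x = np.concatenate([np.full((k, l_max - l), pad_value), x], axis=1)
    blocks, channel_masks = [], []
    for start in range(0, k, k_max):
        block = x[start:start + k_max]
        n_valid = block.shape[0]
        if n_valid < k_max:  # pad missing channels with placeholders
            pad = np.full((k_max - n_valid, x.shape[1]), pad_value)
            block = np.concatenate([block, pad], axis=0)
        mask = np.zeros(k_max, dtype=bool)
        mask[:n_valid] = True  # marks real (non-padded) channels for later masking
        blocks.append(block)
        channel_masks.append(mask)
    return blocks, channel_masks
```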
Time Series Mask Unit
We propose a unified time series mask mechanism, comprising a variety of masks that seamlessly integrate with the model during self-supervised task-agnostic pre-training and task-specific fine-tuning, to cater to diverse time series scenarios. The time series mask unit generates four types of masks: reconstruction mask, stride mask, block mask, and random mask. Firstly, the task-agnostic pre-training aims to improve the overall time series representation by encouraging the model to learn robust and generalizable features from the input data. Secondly, the task-specific training is designed for the most common downstream tasks, including forecasting and imputation, enabling the model to adapt to the unique requirements of each task.
As shown in Figure <ref> right top, given 𝐱∈ℝ^K× L, the random mask M^R can be generated by:
𝐌^R(x,r)_i,j = 1 if z_i,j > r, and 0 otherwise, where z∈ℝ^K× L with z_i,j∼ Uniform(0,1),
where r is the mask ratio. In addition, for task-specific training and inference, we allow the user to supply customized imputation masks that could simulate the naturally missing data and multi-resolution cases.
Block mask 𝐌^B can be generated via:
𝐌^B(x,l)_i,j = 1 if j < L-l, and 0 otherwise,
where l is the predicted length. We can randomly select l during pretraining and use the designated prediction length during the finetuning and inference stage for specific experiment settings.
Stride mask 𝐌^S, a variant of 𝐌^B, is placed intermittently within the series and is defined as follows:
𝐌^S(x, n_blocks)_i,j = 1 if ⌊ j/b ⌋ mod 2 = 0, and 0 otherwise,
where n_blocks is the number of blocks into which the sequence is divided, b = ⌈L/n_blocks⌉ is the length of each block, and j is the time index within the sequence. 𝐌^S is designed for task-agnostic pretraining to further enhance the model's representation ability by integrating information across non-contiguous parts of the series.
In addition, reconstruction mask 𝐌^Rec = 0 is used for tasks including synthetic data generation and anomaly detection, where we can directly generate synthetic data or obtain an anomaly score for each temporal position based on the difference between original and reconstructed series.
For the pretraining stage, we randomly select one conditional mask type from 𝐌={𝐌^R, 𝐌^S, 𝐌^B, 𝐌^Rec} for each instance. TimeDiT's goal is to reconstruct 𝐱^tar, defined as 𝐱_0 × (J - 𝐌), where J is the all-ones matrix; the target sequence is thus the masked portion of the original sequence. In the finetuning and inference stage, the choice of mask is tailored to align with the specific requirements of the user. This flexibility allows TimeDiT to apply the most appropriate masking strategy based on the context of the task and application.
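The four mask types above admit a direct implementation; the NumPy sketch below follows the definitions literally, with 1 marking observed (conditioning) entries and 0 marking target entries. Function names are illustrative.

```python
import numpy as np

def random_mask(shape, r, rng=None):
    # M^R: keep entries where uniform noise exceeds the mask ratio r.
    rng = rng or np.random.default_rng()
    return (rng.uniform(size=shape) > r).astype(np.float32)

def block_mask(shape, pred_len):
    # M^B: observe the first L - pred_len steps, mask the final pred_len steps.
    k, l = shape
    m = np.zeros(shape, dtype=np.float32)
    m[:, : l - pred_len] = 1.0
    return m

def stride_mask(shape, n_blocks):
    # M^S: alternate observed / masked blocks of length b = ceil(L / n_blocks).
    k, l = shape
    b = int(np.ceil(l / n_blocks))
    keep = (np.arange(l) // b) % 2 == 0
    return np.tile(keep.astype(np.float32), (k, 1))

def reconstruction_mask(shape):
    # M^Rec = 0: everything is a target (generation / anomaly detection).
    return np.zeros(shape, dtype=np.float32)

def split(x0, m):
    # x^con = x_0 * M (conditioning part); x^tar = x_0 * (J - M) (target to reconstruct).
    return x0 * m, x0 * (1.0 - m)
```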
Condition Injection
Instead of following <cit.> to integrate the diffusion timestep and label information (text in our case) through the layer normalization preceding the attention block, we add the diffusion timestep and text information to the target noise, as these are universal information for the series.
Given that TimeDiT utilizes a transformer-based architecture, a straightforward and intuitive approach is to include conditional information directly as part of the input sequence by concatenation, as done in latent diffusion <cit.>. However, we empirically found that controlling the mean and variance through layer normalization is a stronger form of conditional information injection. We incorporate the partial observations x_0^con through adaptive layer normalization to control the scale and shift of x_0^tar. This design choice is motivated by the fact that the scale and shift of the partial observations are highly relevant to the target observations. This integration can be expressed as
AdaLN(h, c) = c_scale·LayerNorm(h) + c_shift
where h is the hidden state and c_scale and c_shift are the scale and shift parameters derived from the partial observations. We perform temporal attention in the self-attention block to capture the temporal dependency within the input.
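A compact PyTorch sketch of this conditioning path is shown below; the pooled conditioning vector (derived from x_0^con, the diffusion timestep embedding, and optional text features) and the layer names are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class AdaLN(nn.Module):
    """Adaptive layer normalization: the scale and shift applied to the target
    hidden states are predicted from a summary of the conditioning signal."""

    def __init__(self, hidden_dim, cond_dim):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_dim, elementwise_affine=False)
        self.to_scale_shift = nn.Linear(cond_dim, 2 * hidden_dim)

    def forward(self, h, cond):
        # h: (B, L, hidden_dim) hidden states of the noisy target tokens
        # cond: (B, cond_dim) pooled conditioning vector
        c_scale, c_shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        return self.norm(h) * c_scale.unsqueeze(1) + c_shift.unsqueeze(1)
```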
Inference We perform pre-training and finetuning for forecasting and imputation. We exclude the pre-training process from anomaly detection and synthetic generation because the two tasks are very dataset-specific and do not necessarily benefit from learning distributions beyond the target dataset. Let n represent the number of samples generated for each prediction, which we set to n=10 (n=30 for forecasting tasks) in our experimental setup at inference time. We use the median of these n predictions as the final prediction, providing the added benefit of obtaining a confidence interval for TimeDiT's predictions. To prevent channel padding from affecting the generated samples, we mask out the invalid channels during sampling at each diffusion timestep so that TimeDiT does not falsely treat information in the non-valid channels as meaningful. Padding is applied at the beginning of the temporal dimension to ensure that the most relevant information remains at the end, thereby mitigating the effect of padding.
§.§ Physics-Informed TimeDiT
Physics principles are fundamental in shaping the evolution of temporal signals observed in real-world phenomena, such as climate patterns and oceanographic data. Therefore, it is essential to integrate physical knowledge into foundational time series models. In this section, we propose a strategy to incorporate physics knowledge as an energy-based prior for TimeDiT during inference, which iteratively refines the reverse diffusion process. By guiding the denoising process during inference with gradients derived from physical laws represented by partial differential equations (PDEs), the integration of this knowledge can significantly enhance the quality of the generated samples.
A generic form of a physical law represented as a PDE that describes the evolution of a continuous temporal signal 𝐱(𝐮,t) over a spatial coordinate 𝐮 is given by:
∂𝐱/∂ t = F(t,𝐱,𝐮,∂𝐱/∂𝐮_i, ∂^2𝐱/∂𝐮_i ∂𝐮_j, …)
Based on this PDE representation of physical knowledge, the consistency between the predicted time series 𝐱^tar and the physics knowledge can be quantified using the following squared residual function:
K(𝐱^tar;F) = -||∂𝐱^tar/∂ t-F(t,𝐱^tar,𝐮,∂𝐱^tar/∂𝐮_i, ∂^2𝐱^tar/∂𝐮_i ∂𝐮_j, …)||^2_2
This function reaches its maximum when the predicted time series is perfectly consistent with the physical model, resulting in a residual of 0. Using this metric K, physics knowledge can be integrated into a probabilistic time series foundation model p(𝐱^tar|𝐱^con) by solving the following optimization problem to obtain a refined model q(𝐱^tar|𝐱^con):
q(𝐱^tar|𝐱^con) = argmax_q 𝔼_𝐱^tar∼ qK(𝐱^tar;F) - α D_KL(q(𝐱^tar|𝐱^con)||p(𝐱^tar|𝐱^con))
where the first term represents the aforementioned physics knowledge metric, and the second term controls the divergence between q(𝐱^tar|𝐱^con) and p(𝐱^tar|𝐱^con).
The following theorem provides a closed-form solution to the above optimization problem:
The optimal q(𝐱^tar|𝐱^con) in Eq.<ref> is the Boltzmann distribution defined on the following energy function:
E(𝐱^tar;𝐱^con) = 1/αK(𝐱^tar;F)+log p(𝐱^tar|𝐱^con)
in other words, the optimal q(𝐱^tar|𝐱^con) is:
q(𝐱^tar|𝐱^con) = 1/Zexp(1/αK(𝐱^tar;F)+log p(𝐱^tar|𝐱^con)),
where Z = ∫exp(1/αK(𝐱^tar;F)+log p(𝐱^tar|𝐱^con))d𝐱^tar is the partition function.
The theorem illustrates that sampling from the Boltzmann distribution defined in Eq. <ref> is analogous to incorporating physics knowledge into model editing. In the context of diffusion models, this distribution can be effectively sampled using Langevin dynamics <cit.>:
𝐱^tar_j+1 = 𝐱^tar_j + ϵ∇log q(𝐱^tar|𝐱^con) + √(2ϵ)σ, σ∼𝒩(0,1)
=𝐱^tar_j+ϵ/α∇ K(𝐱^tar_j;F)+ϵ∇log p(𝐱^tar_j|𝐱^con) + √(2ϵ)σ, σ∼𝒩(0,1)
In diffusion models, precisely calculating the likelihood log p(𝐱^tar|𝐱^con) is intractable. To tackle this issue, following previous works <cit.>, we approximate the likelihood with the training objective used to edit the pre-trained diffusion model:
log p(𝐱^tar|𝐱^con) ≈ -𝔼_ϵ, t [||ϵ_θ(𝐱^tar,t;𝐱^con)-ϵ||^2]
The approximation presented above constitutes the optimizable component of the evidence lower bound (ELBO). Algorithm <ref> summarizes the comprehensive model editing process.
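The editing loop can be sketched as gradient ascent on the guided log-density, with the PDE residual obtained by finite differences and the likelihood term approximated through the denoising objective. The residual function, step size, and the denoising_loss(x, x_con) interface are illustrative assumptions; the weighting of the two gradient terms follows the Langevin update above.

```python
import torch

def pde_score(x, dt, pde_rhs):
    # K(x; F): negative squared residual of du/dt = F(u), using a
    # finite-difference time derivative; pde_rhs implements F for the PDE at hand.
    du_dt = (x[:, 1:] - x[:, :-1]) / dt
    res = du_dt - pde_rhs(x[:, :-1])
    return -(res ** 2).sum()

def physics_guided_refine(x_tar, x_con, denoising_loss, dt, pde_rhs,
                          step=1e-3, alpha=1.0, n_steps=5):
    """Langevin-style refinement of the current sample x_tar.

    denoising_loss(x, x_con) is assumed to return E[||eps_theta - eps||^2],
    used as a proxy for -log p(x | x_con); alpha trades off physics
    consistency against closeness to the pre-trained model.
    """
    for _ in range(n_steps):
        x = x_tar.detach().requires_grad_(True)
        log_q = pde_score(x, dt, pde_rhs) / alpha - denoising_loss(x, x_con)
        grad = torch.autograd.grad(log_q, x)[0]
        noise = torch.randn_like(x)
        x_tar = (x + step * grad + (2.0 * step) ** 0.5 * noise).detach()
    return x_tar
```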
§ EXPERIMENTS
To comprehensively assess our time series foundation model, we evaluate on a diverse set of tasks that reflect real-world challenges and applications. We begin by testing the model's performance in practical scenarios and its ability to integrate domain knowledge. This includes handling missing data and multi-resolution forecasting on customized datasets, which allows us to evaluate the model's robustness in situations that frequently occur in real-world applications, as well as physics-informed modeling crucial for scientific and engineering domains<cit.>, which uses 6 practical partial differential equations (PDEs). We then assess the model's capabilities in well-established benchmarking tasks across various fields such as finance, healthcare, and industrial monitoring. These tasks include forecasting on Solar, Electricity, Traffic, Taxi, and Exchange datasets<cit.> to evaluate temporal dependency modeling, imputation on ETTh, ETTm, Weather and Electricity datasets<cit.> to assess the handling of missing data,
anomaly detection on MSL, SMAP, SWaT, SMD, and PSM datasets<cit.> to gauge sensitivity to unusual patterns,
and synthetic data generation on Stock, Air Quality, and Energy datasets<cit.> to test understanding of underlying distributions. By evaluating these diverse tasks, we can demonstrate that our model truly serves as a foundation for various time series applications, potentially reducing the need for task-specific models.
§.§ Practical Scenarios: Missing Data and Multi-Resolution Forecasting
To simulate more realistic scenarios in time series tasks, we introduced two additional challenges: missing values and multi-resolution data. These conditions are common in real-world applications and test a model's robustness and adaptability.
For the missing value scenario, we created datasets with various missing ratios, simulating incomplete data often encountered in practice. In the multi-resolution setting, we sampled each individual time series within the multivariate dataset at different resolutions, reflecting the diverse sampling frequencies often present in real-world data collection.
Figure <ref> illustrates TimeDiT's performance in realistic scenarios, showcasing its effectiveness across different sampling frequencies on the Exchange dataset. In Figure <ref> (a), we observe TimeDiT's superior performance in handling missing data. As the missing ratio increases from 5% to 50%, TimeDiT maintains the lowest CRPS_sum across all scenarios, indicating its robustness to data gaps. The performance gap between TimeDiT and other models widens as the missing ratio increases, highlighting its effectiveness in more challenging conditions.
Figure <ref> (b) demonstrates TimeDiT's ability to manage multi-resolution data, where it maintains a clear performance advantage as the number of different sampling resolutions increases from 2 to 6. This demonstrates its ability to effectively integrate and forecast time series data sampled at varying frequencies.
These findings underscore TimeDiT's potential as a practical and versatile tool for time series forecasting in diverse, challenging scenarios that more closely resemble real-world applications.
§.§ Domain Knowledge Integration: Physics-Informed TimeDiT
A key advantage of our approach is the ability to directly incorporate physics knowledge into the pretrained foundation model without additional fine-tuning. This is possible because the diffusion model reconstructs the entire process, allowing for seamless integration of PDE-based constraints during inference. By encoding the known physical laws governing the system into the sampling process, we can guide the model towards more physically consistent and accurate predictions.
In this section, we evaluate how effectively our pre-trained foundation model can integrate physics-informed knowledge into time series forecasting without the need for fine-tuning. We study PDE systems from <cit.>: the general Navier-Stokes equations, Kolmogorov flow (a specific case of the Navier-Stokes equations), the advection equation, the Burgers equation, diffusion sorption, and computational fluid dynamics (CFD). These equations are used to generate synthetic data with random initial conditions, and we apply diffusion models to forecast time series based on data from a historical window.
Table <ref> presents the results, including both mean error and error bars. The table clearly demonstrates that our proposed model editing solution, which incorporates physics knowledge, significantly outperforms previous sampling strategies such as DDPM <cit.>, DDIM <cit.>, and TS Diffusion's Self-Guidance <cit.>. By leveraging domain-specific physical information, our approach achieves substantial performance improvements over these baselines, highlighting the effectiveness of integrating physics-informed priors into the diffusion model sampling process.
This performance gain underscores the potential of combining pretrained foundation models with domain-specific knowledge. Our method offers a flexible framework for enhancing time series forecasting in scientific and engineering applications where the underlying physical laws are partially known. It demonstrates that by bridging the gap between data-driven approaches and physics-based modeling, we can achieve more accurate and physically consistent predictions without the computational overhead of retraining or fine-tuning the entire model. The ability to easily incorporate physics knowledge into a pretrained foundation model represents a significant advance in the field of scientific machine learning. It opens up new possibilities for rapid adaptation to specific physical systems and phenomena, potentially accelerating research and discovery in various scientific domains.
§.§ Forecasting on Full-shot Setting and Zero-shot Setting
In the forecasting task, we conduct two types of experiments. First, we compare our proposed TimeDiT with baselines in a full-shot setting, where models are trained and tested on separate datasets. This approach evaluates their performance on conventional time series forecasting tasks, ensuring that models can effectively learn and generalize from complete data. Second, we assess TimeDiT as a foundation model in a zero-shot setting, comparing it to previous transformer-based time series models. This setting is crucial as it tests the model's ability to generalize and adapt to entirely new datasets without prior exposure, highlighting its robustness and versatility.
Together, these experiments provide a holistic view of TimeDiT's capabilities, addressing both specialized performance and broad applicability in time series forecasting.
Table <ref> presents the full-shot forecasting results, comparing TimeDiT with state-of-the-art models in two categories: deterministic forecasting models, which are trained with the Student's t-distribution head to support probabilistic results, and inherently probabilistic time series forecasting models, including diffusion-based models, such as CSDI and non-diffusion-based, such as GP-copula. Our model achieves the lowest CRPS_sum on four datasets and the second-best performance on the Taxi dataset. In the zero-shot setting, TimeDiT is compared with the open-sourced foundation models including TEMPO <cit.>, which is pre-trained with Student's t-distribution head to support probabilistic results, Moirai <cit.> and LagLLama <cit.> in Table <ref>. TimeDiT's ability to outperform other open-source foundation models in most cases is particularly noteworthy, as it suggests that TimeDiT can be effectively applied to a wide range of time series forecasting tasks across different domains with minimal adaptation.
§.§ Imputation Task
We conduct experiments on six benchmark time-series datasets: ETTh1, ETTh2, ETTm1, ETTm2, Electricity, and Weather. We use random mask ratios {12.5%, 25%, 37.5%, 50%} following previous studies' settings with the sequence length set to 96. We finetune on model checkpoints pretrained on solar, traffic, exchange, taxi, Huawei cloud, air quality, and weather (different from the evaluation weather data). Table <ref> shows the imputation results averaged over the four mask ratios. TimeDiT achieves the best performance on most datasets, obtaining 10 first places out of the 12 evaluations, while the remaining baselines obtained 2 first places in total. In particular, TimeDiT achieved a 39% reduction in MSE and a 22% reduction in MAE compared to the strongest baseline on the ETTh1 dataset. For the full results on each mask ratio, please refer to Section <ref>.
§.§ Anomaly Detection Task
We conduct experiments on five real-world datasets from industrial applications: MSL, SMAP, SWaT, SMD, and PSM. The diffusion model, renowned for its proficiency in distribution learning, may inadvertently overfit by reconstructing anomalies alongside normal data points. To counteract this, we opted to bypass pretraining and introduced the spectral residual (SR) transformation at the preprocessing stage of TimeDiT. This transformation helps to conceal points most likely to be anomalies and their immediate neighbors. The number of neighbors affected is controlled by the hyperparameter n_neighbor. The SR method utilizes the Fourier transform to convert the original time series into a saliency map, thereby amplifying abnormal points, as detailed in <cit.>. For additional information about this transformation, please see Section <ref>.
Consistent with prior methodologies, we set the sequence length to 100 and identify anomalies using the 99th percentile of reconstruction errors. During evaluations, we apply standard anomaly adjustments as suggested by <cit.>. As demonstrated in Table <ref>, TimeDiT outperforms baseline models on four of the five datasets, achieving in particular a 23.03-point improvement in F1 score on the SMAP dataset compared to the strongest baseline.
§.§ Synthetic Generation Task
We conduct experiments to synthesize multivariate time series and evaluate performance using the discriminative score and predictive score metrics under a "train on synthetic, test on real" experimental setup with the sequence length set to 24 <cit.>. We finetune on model checkpoints pretrained on ETTh1, ETTh2, ETTm1, ETTm2, Electricity, and Weather. Table <ref> shows the results on synthetic generation, where TimeDiT consistently generates more realistic synthetic samples compared to baselines, even on the high-dimensional Energy dataset. This demonstrates TimeDiT's strength in complex time series synthesis. We visualize synthesis performance using PCA and t-SNE in Appendix <ref>. As shown in Figure <ref>, TimeDiT's samples overlap the original data distribution markedly better than those of other methods. Qualitative and quantitative results confirm TimeDiT's superior ability to model intricate characteristics for realistic time series synthesis, even on multidimensional, complex datasets.
§ CONCLUSION
In this paper, we introduced TimeDiT, a pioneering approach to creating a versatile and robust foundation model for various time series tasks under practical scenarios. By integrating the transformer architecture with a diffusion model, TimeDiT effectively captures temporal dependencies and addresses real-world challenges unique to time series regarding multi-resolution data and missing values, as well as incorporating external knowledge. Our innovative masking strategies allow for a consistent training framework adaptable to diverse tasks such as forecasting, imputation, anomaly detection, and synthetic data generation. Extensive experiments demonstrated the strong performance of TimeDiT on both practical scenarios and standard benchmarks. However, we recognize some limitations. We primarily explored common sequence lengths and did not assess TimeDiT's performance on very long sequences. While we have introduced randomness in prediction length and feature numbers up to a maximum, we aim to develop more scalable solutions for highly variable multivariate time series. Additionally, our understanding of how different types of external information contribute to performance is still developing. For future work, we envision several key directions: enhancing scalability to improve TimeDiT's ability to handle practical time series with varying numbers of variates; developing techniques for seamless multi-modal integration, allowing TimeDiT to leverage diverse data sources for improved performance across different tasks; and extending TimeDiT's capabilities to effectively process and analyze very long time series sequences, addressing a critical need in many real-world applications.
§ DATASETS
* The ETT datasets <cit.>[ETT: <https://github.com/zhouhaoyi/ETDataset>] include electricity load data at various resolutions (ETTh & ETTm) from two different electricity stations.
* The Weather dataset comprises 21 meteorological indicators collected in Germany over the span of one year.
* The Electricity dataset provides information on electricity consumption.
* The SMD dataset <cit.> includes multivariate time-series data collected from server machines in a data center. It typically contains metrics such as CPU usage, memory usage, and disk activity.
* The PSM dataset <cit.> is used for predictive maintenance and includes sensor data from industrial machines. It often contains readings such as temperature, pressure, and vibration over time.
* The MSL dataset <cit.> comes from the Mars Science Laboratory mission, specifically the Curiosity rover. It includes telemetry data from the rover's sensors and systems.
* The SWaT dataset <cit.> originates from a scaled-down water treatment testbed designed to reflect a real-world water treatment process. It includes sensor and actuator data collected over time.
* The SMAP dataset <cit.> comes from NASA's Soil Moisture Active Passive (SMAP) mission, which measures soil moisture and freeze/thaw state. It includes time-series data from multiple sensors aboard the SMAP satellite.
* The Sine dataset <cit.> is synthetically generated by sinusoidal waves.
* The Air Quality dataset [Air Quality: <https://archive.ics.uci.edu/dataset/360/air+quality>] contains hourly averaged readings from five metal oxide chemical sensors integrated into an Air Quality Chemical Multisensor Device. This device was positioned at road level in a highly polluted area of an Italian city. Data were collected from March 2004 to February 2005, making it the longest freely available record of on-field air quality chemical sensor responses.
* The Stock dataset [Stock: <https://finance.yahoo.com/quote/GOOG>] contains daily historical Google stocks data from 2004 to 2019.
* The UCI Appliances Energy prediction dataset [Energy: <https://archive.ics.uci.edu/ml/datasets>] consists of multivariate, continuous-valued measurements including numerous temporal features measured at close intervals.
* The Cloud dataset: The Huawei cloud datasets contain serverless traces <cit.>. Following <cit.>, we selected 8 time series containing metrics based on the minute-frequency occurrences of the top 10 functions over a period of 141 days. The metrics included in these series are: Function delay; Platform delay; CPU usage; Memory usage; CPU limit; Memory limit; Instances; Requests. The functions were chosen based on their median occurrences throughout the dataset.
* The Weather_2 dataset: The Weather_2 dataset comprises hourly climate time series data collected near Monash University, Clayton, Victoria, Australia, from January 2010 to May 2021. It includes series for temperature, dewpoint temperature, wind speed, mean sea level pressure, relative humidity, surface solar radiation, surface thermal radiation, and total cloud cover <cit.>.
§ FORECASTING EXPERIMENT SETTING
For the forecasting task, we utilized five widely-used open datasets to evaluate probabilistic time series forecasting performance. These datasets were collected in GluonTS <cit.> and have been previously employed in <cit.>:
* Solar[Solar: <https://www.nrel.gov/grid/solar-power-data.html>]: Hourly solar power production records from 137 stations in Alabama State, as used in <cit.>.
* Electricity[Electricity: <https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014>]: Hourly time series of electricity consumption for 370 customers, as used in <cit.>.
* Traffic[Traffic_nips: <https://archive.ics.uci.edu/dataset/204/pems_sf>]: Hourly occupancy rates of 963 San Francisco freeway car lanes, with values between 0 and 1 <cit.>.
* Taxi[Taxi: <https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data>]: Half-hourly spatio-temporal time series of New York taxi rides taken at 1,214 locations, using data from January 2015 for training and January 2016 for testing, as proposed in <cit.>.
* Exchange rate[Exchange: <https://github.com/laiguokun/multivariate-time-series-data>]: Daily exchange rates between 8 currencies, namely Australia,
the United Kingdom, Canada, Switzerland, China, Japan, New Zealand, and Singapore, as used in <cit.>.
Table <ref> summarizes the characteristics of each dataset. The task for these datasets is to predict the future L_2 steps given the observed L_1 steps. We set L_1 and L_2 values based on previous studies <cit.>. For training, we randomly selected L_1 + L_2 consecutive time steps as a single time series and designated the last L_2 steps as forecasting targets. We adhered to the train/test splits used in previous studies and utilized the last five samples of the training data as validation data.
For the full-shot setting, we trained separate models on different datasets. Due to the large number of features in multivariate time series, we adopted subset sampling of features for training. For each input, we split them into subsets based on their order. If the last subset was smaller than the fixed shape, we applied padding to ensure equal input sizes across all subsets. In the multi-resolution setting, we used different resolutions identified by the resolution number, which corresponded to different sampling rates for the exchange features. It is worth noting that the aforementioned strategy was also employed for zero-shot training, as the input feature length varied across datasets.
§ TRAINING DETAILS
The codebase for TimeDiT is modified from <https://github.com/facebookresearch/DiT>, which provides different model sizes, including Small (S), Base (B), Large (L), and Extra Large (XL) <cit.>.
In our training, we used the Adam optimizer with a learning rate of 0.0001 and no weight decay. The batch size is set to 512. The maximum channel number K_max is set to 40. All experiments are run on NVIDIA A100 GPUs. The zero-shot foundation model was trained on the ETT, weather, illness, air quality, and cloud datasets and used for different downstream tasks. As future work, we will include more available time series datasets to develop a more robust time series foundation model.
§ METRICS
MAE describes the mean absolute error that measures the absolute difference between ground truth and prediction.
MAE = 1/n∑_i=1^n |y_i - ŷ_i|
MSE describes the mean squared difference between ground truth and prediction.
MSE = 1/n∑_i=1^n (y_i - ŷ_i)^2
RMSE is the square root of MSE.
RMSE = √(1/n∑_i=1^n (y_i - ŷ_i)^2)
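These point-forecast metrics can be computed directly from arrays of targets and predictions, as in the short sketch below.

```python
import numpy as np

def mae(y, y_hat):
    # Mean absolute error between ground truth y and prediction y_hat.
    return np.mean(np.abs(y - y_hat))

def mse(y, y_hat):
    # Mean squared error.
    return np.mean((y - y_hat) ** 2)

def rmse(y, y_hat):
    # Root mean squared error.
    return np.sqrt(mse(y, y_hat))
```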
Discriminative score Following TimeGAN, we train a post-hoc time-series classification model (by optimizing a 2-layer LSTM) to distinguish between sequences from the original and generated datasets. First, each original sequence is labeled real, and each generated sequence is labeled not real. Then, an off-the-shelf (RNN) classifier is trained to distinguish between the two classes as a standard supervised task. We then report the classification error on the held-out test set.
Predictive Score Following TimeGAN, we train a post-hoc sequence-prediction model (by optimizing a 2-layer LSTM) to predict next-step temporal vectors over each input sequence. Then, we evaluate the trained model on the original dataset. Performance is measured in terms of the mean absolute error (MAE); for event-based data, the MAE is computed as the absolute value of 1 minus the estimated probability that the event occurred.
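As a rough illustration of the discriminative score, the sketch below trains a post-hoc 2-layer LSTM classifier to separate real from synthetic windows and reports the held-out classification error; the split, training budget, and hyperparameters are simplified assumptions and do not reproduce the original TimeGAN protocol exactly.

```python
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    # Post-hoc 2-layer LSTM classifier over (B, L, K) windows.
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        _, (h, _) = self.rnn(x)
        return self.head(h[-1]).squeeze(-1)  # one logit per sequence

def discriminative_score(real, synth, epochs=50, lr=1e-3):
    # Label real windows 1 and synthetic windows 0, train on one half,
    # and report the classification error on the held-out half.
    x = torch.cat([real, synth])
    y = torch.cat([torch.ones(len(real)), torch.zeros(len(synth))])
    perm = torch.randperm(len(x))
    x, y = x[perm], y[perm]
    n_tr = len(x) // 2
    model = SeqClassifier(x.shape[-1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x[:n_tr]), y[:n_tr]).backward()
        opt.step()
    with torch.no_grad():
        pred = (model(x[n_tr:]) > 0).float()
    return (pred != y[n_tr:]).float().mean().item()
```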
Computations of CRPS
We explain the definition and calculation of the CRPS metric. The continuous ranked probability score (CRPS) assesses how well an estimated probability distribution F aligns with an observation x. It is defined as the integral of the quantile loss Λ_α(q, z) := (α - 1_z<q)(z - q) over all quantile levels α∈ [0, 1]:
CRPS(F^-1, x) = ∫_0^1 2Λ_α(F^-1(α), x) dα
where 1 represents the indicator function. We then calculated quantile losses for quantile levels discretized in 0.05 increments. Thus, we approximated CRPS as follows:
CRPS(F^-1, x) ≈1/19∑_i=1^19 2Λ_i · 0.05(F^-1(i · 0.05), x).
Next, we computed the normalized average CRPS for all features and time steps:
∑_k,lCRPS(F^-1_k,l, x_k,l)/∑_k,l |x_k,l|
where k and l denote the features and time steps of the imputation targets, respectively.
CRPS_sum measures CRPS for the distribution F of the sum of all K features, calculated by:
∑_lCRPS(F^-1, ∑_k x_k,l)/∑_k,l |x_k,l|
where ∑_k x_k,l is the total of the forecasting targets for all features at time point l.
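Put together, the CRPS_sum metric can be approximated from forecast samples as in the sketch below, using the 19 quantile levels described above; array shapes and function names are illustrative.

```python
import numpy as np

LEVELS = np.arange(0.05, 1.0, 0.05)  # 19 quantile levels

def quantile_loss(q, z, alpha):
    # Lambda_alpha(q, z) = (alpha - 1{z < q}) * (z - q)
    return (alpha - float(z < q)) * (z - q)

def crps_from_samples(samples, x):
    # Approximate CRPS(F^-1, x) using empirical quantiles of the samples.
    q = np.quantile(samples, LEVELS)  # F^-1(alpha_i)
    return np.mean([2.0 * quantile_loss(q[i], x, a) for i, a in enumerate(LEVELS)])

def crps_sum(samples, target):
    # samples: (n, L, K) forecast samples; target: (L, K) ground truth.
    s = samples.sum(axis=-1)           # sum over features -> (n, L)
    t = target.sum(axis=-1)            # (L,)
    num = sum(crps_from_samples(s[:, l], t[l]) for l in range(t.shape[0]))
    return num / np.abs(target).sum()  # normalize by the sum of absolute targets
```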
§ SYNTHETIC GENERATION
We use 80% of all data for training and evaluate on the same data. For the Air Quality dataset, previous methods did not carefully handle the -200 values, which serve as placeholders for missing values. In our experiments, we masked all the -200 values for TimeDiT and for baselines that support masks. For baselines that do not support masks, we replaced -200 with the mean value. A min-max scaler is used for all models. Figures <ref>, <ref>, <ref>, and <ref> show the PCA plots for all datasets and baselines. The visual comparison also validates the superiority of TimeDiT.
§ IMPUTATION
§ ANOMALY DETECTION
§.§ SR processing for Anomaly Detection
Spectral Residual The SR transformation involves the following equations. Table <ref> shows the full anomaly detection results.
A(f) = Amplitude(F(x))
P(f) = Phase(F(x))
L(f) = log(A(f))
AL(f) = h_q(f) · L(f)
R(f) = L(f) - AL(f)
S(x) = F^-1(exp(R(f) + iP(f)))
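The transformation above corresponds to the following one-dimensional sketch, where the filter h_q is taken to be a simple moving average of width q; the width and the numerical epsilon are illustrative choices.

```python
import numpy as np

def spectral_residual(x, q=3, eps=1e-8):
    # Saliency map S(x) of a 1-D series via the spectral residual transform.
    fft = np.fft.fft(x)
    amp = np.abs(fft)                                     # A(f)
    phase = np.angle(fft)                                 # P(f)
    log_amp = np.log(amp + eps)                           # L(f)
    h_q = np.ones(q) / q                                  # averaging filter h_q(f)
    avg_log_amp = np.convolve(log_amp, h_q, mode="same")  # AL(f)
    residual = log_amp - avg_log_amp                      # R(f)
    return np.abs(np.fft.ifft(np.exp(residual + 1j * phase)))  # S(x)
```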
§ PHYSICS EQUATIONS
The Burgers Equation is:
∂ u/∂ t + u∂ u/∂ x - v∂^2 u/∂ x^2 = 0
where v is the diffusion (viscosity) coefficient. We set v to 0.1 and randomly sample a combination of sine waves as the initial condition.
The Advection Equation is:
∂ u/∂ t + c∂ u/∂ x = 0
where c is the advection speed. We set c to 1.0 and use randomly placed Gaussian peaks as the initial condition.
The diffusion-reaction Equation is:
∂ u/∂ t - D∂^2 u/∂ x^2 -R(u)= 0
where D is the diffusion coefficient and R(u) is the reaction term. Here, we apply a linear reaction term R(u) = -k· u, where k is the reaction speed. We set D to 1.0 and k to 0.1, and use a Gaussian distribution with random parameters as the initial condition.
The Kolmogorov flow is a specific case of the Navier-Stokes (NS) equations. More specifically, it is described by:
𝐮(x, y, z, t) = ( -∂ψ/∂ y, ∂ψ/∂ x, 0 )
where ψ is the stream function. It is usually set as:
ψ(x, y, z, t) = A sin(kx) cos(zy + ω t)
where A, k, and ω are hyperparameters.
§ PROOF OF THE THEOREM
Let us consider the objective function:
O(q(y|x)) = 𝔼_y∼ q(y|x)K(y) - α D_KL(q(y|x)||p(y|x))
=𝔼_y∼ q(y|x)K(y) - α∫_yq(y|x)log(q(y|x)/p(y|x))dy
=∫_yq(y|x)[K(y) + αlog p(y|x) -αlog q(y|x)]dy
We try to find the optimal q(y|x) through Lagrange multipliers. The constraint of the above objective function is that q(y|x) is a valid probability distribution, i.e., ∫_y q(y|x)dy=1. Thus, the Lagrangian is:
L(q(y|x),λ) = ∫_yq(y|x)[K(y) + αlog p(y|x) -αlog q(y|x)]dy - λ (∫_y q(y|x)dy-1)
=∫_y q(y|x)[K(y) + αlog p(y|x) -αlog q(y|x)-λ]dy + λ
We define f(q(y|x),y,λ)=q(y|x)[K(y) + αlog p(y|x) -αlog q(y|x) - λ] + λ h(y), where h(y) can be the density function of any fixed distribution defined on the support set of y, so that ∫_y λ h(y)dy = λ. Therefore, L(q(y|x),λ) = ∫_y f(q(y|x),y,λ)dy.
According to Euler-Lagrange equation, when the above Lagrangian achieve extreme point, we have:
∂ f/∂ q = K(y) + αlog p(y|x) -αlog q(y|x) - λ - α = 0
Thus, we have:
αlog q(y|x) = K(y) + αlog p(y|x) - λ - α
q(y|x) = exp(1/αK(y) + log p(y|x) - λ/α - 1)
=1/exp(λ/α +1)exp(1/αK(y) + log p(y|x))
Meanwhile, since ∫_y q(y|x)dy=1, we have:
∫_y exp(1/αK(y) + log p(y|x) - λ/α - 1)dy =1
1/exp(λ/α +1)∫_y exp(1/αK(y) + log p(y|x))dy =1
Thus, we have exp(λ/α +1) = ∫_y exp(1/αK(y) + log p(y|x))dy=Z, leading to:
q(y|x) = 1/Zexp(1/αK(y)+log p(y|x)), Z = ∫exp(1/αK(y)+log p(y|x))dy
MOSMOS: Multi-organ segmentation facilitated by medical report supervision
Weiwei Tian^1 ([email protected]), Xinyu Huang^2 ([email protected]), Junlin Hou^2,5 ([email protected]), Caiyue Ren^2 ([email protected]), Longquan Jiang^2,* ([email protected]), Rui-Wei Zhao^1 ([email protected]), Gang Jin^4 ([email protected]), Yuejie Zhang^2,3 ([email protected]), Daoying Geng^1 ([email protected])
*Corresponding author
1. Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
2. School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai 200433, China
3. Shanghai Collaborative Innovation Center of Intelligent Visual Computing, China
4. Department of Hepatobiliary Pancreatic Surgery, Changhai Hospital, Second Military Medical University (Naval Medical University), Shanghai 200433, China
5. Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, China
§ ABSTRACT
Owing to a large amount of multi-modal data in modern medical systems, such as medical images and reports, Medical Vision-Language Pre-training (Med-VLP) has demonstrated incredible achievements in coarse-grained downstream tasks (i.e., medical classification, retrieval, and visual question answering). However, the problem of transferring knowledge learned from Med-VLP to fine-grained multi-organ segmentation tasks has barely been investigated. Multi-organ segmentation is challenging mainly due to the lack of large-scale fully annotated datasets and the wide variation in the shape and size of the same organ between individuals with different diseases. In this paper, we propose a novel pre-training & fine-tuning framework for Multi-Organ Segmentation by harnessing Medical repOrt Supervision (MOSMOS). Specifically, we first introduce global contrastive learning to maximally align the medical image-report pairs in the pre-training stage. To remedy the granularity discrepancy, we further leverage multi-label recognition to implicitly learn the semantic correspondence between image pixels and organ tags. More importantly, our pre-trained models can be transferred to any segmentation model by introducing the pixel-tag attention maps. Different network settings, i.e., 2D U-Net and 3D UNETR, are utilized to validate the generalization. We have extensively evaluated our approach using different diseases and modalities on BTCV, AMOS, MMWHS, and BRATS datasets. Experimental results in various settings demonstrate the effectiveness of our framework. This framework can serve as the foundation to facilitate future research on automatic annotation tasks under the supervision of medical reports.
Keywords: Medical report supervision; Multi-label recognition; Multi-organ segmentation; Vision-language pre-training; Visual representation learning
§ INTRODUCTION
Assigning an organ tag to each pixel in a medical image, also known as multi-organ segmentation, is a crucial task in medical image analysis, as it contributes to various computer-aided diagnosis and treatment tasks, including volume measurement <cit.>, 3D reconstruction <cit.>, and treatment planning <cit.>. To achieve these clinical applications, it is necessary to segment multiple organs in medical images accurately and robustly. However, compared to segmenting one particular organ, manually annotating multiple organs is not only time-consuming and laborious for radiologists but also heavily dependent on their experience. Automatic multi-organ segmentation is therefore essential to meet the growing clinical needs <cit.>.
With the development of Fully Convolutional Networks (FCN) <cit.> and Vision Transformers (ViT) <cit.>, impressive segmentation performance has been achieved. However, existing works on multi-organ segmentation are usually based on the supervised learning paradigm <cit.>, which is dramatically limited by the need for high-quality, high-cost annotations. To tackle this issue, pre-training on large-scale datasets and then fine-tuning on smaller target datasets has become a widely adopted paradigm. For instance, Swin UNETR <cit.> leveraged self-supervised pre-training with tailored proxy tasks to alleviate the lack of annotations. Nevertheless, it only learns transferable visual representations from five Computed Tomography (CT) datasets, making it less suitable for segmentation tasks involving other diseases or modalities, such as Magnetic Resonance Imaging (MRI).
In summary, critical difficulties exist in two aspects with multi-organ segmentation: (i) There is a lack of large-scale fully annotated, multi-disease, or multi-modal datasets. (ii) The shape and size of the same organ vary significantly between patients with different diseases, making it difficult for the network to learn representative features. To address the first limitation, we argue that medical reports reflect radiologists' perceptions of multi-disease and multi-modal medical images, which can serve as weakly supervised information to help optimize the multi-organ segmentation network even with fewer annotations (see Fig. <ref>). Moreover, taking into consideration that radiologists prepare medical reports accompanied with radiology images as part of their daily routine, large-scale medical image-report pairs are easy to access without extra cost, in contrast to pixel-level fine-grained annotations. To address the second challenge, we simultaneously introduce global image-report aligning and local pixel-tag aligning to identify discriminative representations for the same organ with different diseases in the pre-training stage. Furthermore, we design pixel-tag attention maps to assist multi-organ segmentation tasks in the fine-tuning stage.
Concretely, we propose a novel pre-training & fine-tuning framework named MOSMOS for multi-organ segmentation based on medical report supervision. In the pre-training phase, image-report contrastive learning is used to align the global features of medical images and corresponding reports. In addition, we apply a more fine-grained pre-training task called multi-label recognition. It can locate image regions with the organ tags in the corresponding reports, which has the following advantages: (i) The tags are the organ classification labels extracted from the reports without additional manual annotations. (ii) The tags are encoded into query embeddings and then fed into a Transformer decoder <cit.> to perform multi-modal interaction, which guarantees the generalizability of the pre-trained model transferred to multi-disease, multi-modal, and multi-organ segmentation tasks. (iii) By implicitly optimizing the attention maps in the Transformer decoder, the organ tags can be associated with fine-grained and interpretable location information. They are capable of assisting multi-organ segmentation tasks to be better optimized since attention maps can also be regarded as segmentation results with low resolution. In the fine-tuning phase, we combine the segmentation loss and the pixel-tag aligning loss to supervise the training process.
Our pre-trained model can be fine-tuned on any downstream segmentation framework to boost performance. A series of comprehensive experiments have proved the effectiveness of our method. In the aspect of the downstream segmentation frameworks, we verify on the representative segmentation models (U-Net <cit.> & UNETR <cit.>) with two mainstream visual backbones (ResNet <cit.> & ViT <cit.>), respectively. As for the downstream segmentation datasets, we evaluate on four publicly available multi-disease and multi-organ datasets (BTCV <cit.> & AMOS <cit.> & MMWHS <cit.> & BRATS <cit.>) with different modalities (CT & MRI).
The main contributions of this work are summarized as follows:
* We establish MOSMOS, a novel pre-training & fine-tuning framework to fully leverage the intrinsic medical report supervision within the paired images and reports to learn medical visual representation instead of purely exploiting radiology images. To the best of our knowledge, this is the first work that the medical vision-language pre-training is applied to downstream tasks of multi-organ segmentation.
* We design global image-report aligning and local pixel-tag aligning in the pre-training stage, which is more suitable for fine-grained segmentation tasks in the downstream.
* We verify the effectiveness of the proposed method on the representative segmentation frameworks and four widely used multi-disease and multi-organ datasets of different modalities with 2D and 3D medical images. Our proposed MOSMOS significantly improves the multi-organ segmentation performance by a substantial margin.
§ RELATED WORK
Before introducing the proposed method, we mainly review previous works that inspired the design of our multi-organ segmentation scheme in this section. The two essential parts are (i) multi-organ segmentation; (ii) language supervision, in order to leverage cross-modal information to guide the multi-organ segmentation.
§.§ Multi-organ segmentation
Many attempts have been made to implement multi-organ segmentation more efficiently. According to the backbone, these approaches can be divided into three categories. (i) FCN-based: To leverage the partially labeled datasets, the multi-head strategy <cit.> was used for segmentation, which consists of a task-shared encoder and multiple decoders (layers) with specific tasks, leading to poor scalability. To improve the flexibility, DoDNet <cit.> built a dynamic on-demand framework that introduced a dynamic segmentation head to the shared encoder-decoder structure. (ii) ViT-based: Swin-Unet <cit.> first utilized hierarchical Swin Transformer <cit.> with shifted window operation to capture global and long-term semantic information. (iii) FCN and ViT combined: Taking advantage of the locality of convolution and the globality of self-attention in Transformer, recent works <cit.> adopted the hybrid architecture. Based on U-Net <cit.> architecture, TransUNet <cit.> and TransDoDNet <cit.> introduced Transformer as a bottleneck feature extractor for modeling long-range organ-wise dependencies, which is conducive to multi-organ segmentation. UNETR <cit.> used Transformer as the encoder and delivered the encoded representations to the FCN-based decoder by skip connections. NnFormer <cit.> applied interleaved convolutional layers and Transformer blocks to play both advantages sufficiently. However, the performance of these supervised learning methods learning from scratch is limited by the quantity and quality of annotations, or that transferring pre-trained weights from ImageNet <cit.> is suboptimal due to the drastic difference between natural and medical images. Performance improvements have been achieved through supervised learning methods that transferred pre-trained weights from large-scale, partially labeled medical datasets. Nonetheless, these methods also need intensive labor and expertise costs.
Recent advances in self-supervised pre-training <cit.> provided the promise of leveraging unlabeled medical images. Specifically, Swin UNETR <cit.> first designed three tailored proxy tasks, that is, masked volume inpainting, rotation prediction, and contrastive learning, to pre-train the Swin Transformer encoder. The pre-trained encoder was transferred to downstream segmentation tasks and achieved observable improvements. Despite its success, a gap exists between the upstream self-supervised task and the downstream segmentation tasks. Consequently, ReFs <cit.> proposed an extra supervised reference task as a bridge to minimize the gap. Unlike these approaches, our pre-training framework introduces the cross-modal supervisory information in paired medical images and reports at no extra cost to facilitate multi-disease, multi-modal, and multi-organ segmentation tasks. Meanwhile, we employ multi-label recognition to align image pixels with organ tags automatically extracted from medical reports, bridging the gap between upstream and downstream tasks.
§.§ Language supervision
Towards the goal of utilizing unlabeled images more efficiently, several follow-ups <cit.> based on Contrastive Language-Image Pre-training (CLIP) <cit.> have achieved promising results in learning visual representation with language supervision using plenty of image-text pairs in the general domain. Inspired by these pioneering works, <cit.> applied modified CLIP to medical classification, retrieval, and visual question answering tasks. For more fine-grained dense prediction tasks, LViT <cit.> introduced medical text annotations to lead the generation of pseudo labels in semi-supervised learning. In addition to using global contrastive learning to align medical images and reports, GLoRIA <cit.> and LoVT <cit.> proposed utilizing local contrastive learning to align image sub-regions and words or sentences in the paired reports, and BioViL <cit.> adopted masked language modeling to leverage text semantics sufficiently. Furthermore, MGCA <cit.> explored the abundant semantic correspondences between radiology images and reports with multiple granularities: disease-level, instance-level, and token-level. Despite achieving exceptional performance, these segmentation or detection approaches that utilize language supervision are confined to the localization of pulmonary lesions or cell nuclei in 2D images. Augmenting the above-mentioned medical segmentation methods, we extend to broader multi-organ segmentation scenarios of different modalities with 2D and 3D medical images by introducing global image-report aligning and local pixel-tag aligning using multi-label recognition in the pre-training stage.
§ MATERIAL AND METHODS
In this section, we first present the datasets and the overview of our MOSMOS framework. Next, we introduce the pre-training method of MOSMOS, including global image-report aligning and local pixel-tag aligning. Then we illustrate the fine-tuning approach, which utilizes weakly supervised positioning to facilitate multi-organ segmentation.
§.§ Datasets
§.§.§ Dataset for pre-training
The Radiology Objects in COntext (ROCO) dataset <cit.> contains over 81,000 2D radiology images, split into 73,594 and 8,176 images for training and validation sets, respectively. ROCO does not concentrate on a specific disease or anatomical structure but addresses multi-modal radiology images, including Angiography, CT, Fluoroscopy, MRI, Mammography, Positron Emission Tomography (PET), PET-CT, Ultrasound, and X-Ray.
All images in ROCO have corresponding medical reports and a set of organ tags obtained from the reports. Each report describes the visual element in its semantic context. To acquire the organ tags for radiology images, we first define K=20 common organ categories, including abbreviations and synonyms. Then the list is double-checked by radiologists. After substituting abbreviations and synonyms with the unified forms, we extract the organ tags from the medical reports by matching. An aggregation step is further executed to merge the multiple mentioned tags in a report. A detailed overview of this tag list is shown in Fig. <ref>.
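A minimal sketch of this normalization-and-matching step is shown below (Python; the SYNONYMS table and the tag subset are illustrative placeholders, not the curated list used in our experiments):

```python
import re

# Hypothetical synonym table: maps abbreviations/synonyms to unified organ tags.
SYNONYMS = {"lv": "left ventricle", "left ventricular": "left ventricle",
            "hepatic": "liver", "renal": "kidney"}
ORGAN_TAGS = ["liver", "kidney", "left ventricle", "spleen"]  # illustrative subset of the K=20 tags

def extract_tags(report: str) -> list[str]:
    text = report.lower()
    # Substitute abbreviations and synonyms with their unified forms.
    for syn, tag in SYNONYMS.items():
        text = re.sub(rf"\b{re.escape(syn)}\b", tag, text)
    # Match unified tags and aggregate duplicate mentions within one report.
    return sorted({tag for tag in ORGAN_TAGS if re.search(rf"\b{re.escape(tag)}\b", text)})

print(extract_tags("Hepatic lesion; LV function preserved."))  # ['left ventricle', 'liver']
```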
§.§.§ Datasets for fine-tuning
We extensively evaluate our multi-organ segmentation approach on four datasets that cover two imaging modalities (CT and MRI) and different human body regions, that is, BTCV <cit.> for abdominal multi-organ segmentation using CT, AMOS <cit.> for abdominal multi-organ segmentation using CT and MRI, MMWHS <cit.> for cardiac substructure segmentation using MRI, and BRATS <cit.> for brain tumor segmentation using MRI. The datasets adopted for fine-tuning provide human-assisted annotations rather than medical reports. Following the split ratios of <cit.>, the percentages for training, validation, and test sets on BTCV, MMWHS, and BRATS are 70%, 10%, and 20%, respectively. The AMOS dataset is divided into training and validation sets at a ratio of 2:1.
* BTCV provides annotations of Q=13 abdominal organs (that is, spleen, right kidney, left kidney, gallbladder, esophagus, liver, stomach, aorta, inferior vena cava, portal vein and splenic vein, pancreas, right adrenal gland, and left adrenal gland). There are 30 abdomen CT scans from colorectal cancer or ventral hernia patients acquired during the portal venous contrast phase. All images are manually annotated and further verified by experienced radiologists from Vanderbilt University Medical Center.
* AMOS consists of 300 CT and 60 MRI scans, collected from multi-center, multi-vendor, multi-phase, multi-disease patients. It provides voxel-level annotations for Q=15 abdominal organs, namely spleen, right kidney, left kidney, gallbladder, esophagus, liver, stomach, aorta, inferior vena cava, pancreas, right adrenal gland, left adrenal gland, duodenum, bladder, and prostate or uterus. Notably, the duodenum, bladder, and prostate or uterus are considered open-set organ categories, for which tags are not extracted during the pre-training stage. Additionally, MRI scans lack annotations for the bladder and prostate or uterus organs.
* MMWHS is a dataset for whole heart segmentation of Q=7 cardiac substructures (that is, myocardium, left atrium, left ventricle, right atrium, right ventricle, ascending aorta, pulmonary artery), containing 20 cardiac MRI images from patients with cardiovascular diseases. These data are obtained using 3D balanced steady-state free precession (b-SSFP) sequences.
* BRATS is specifically designed for brain tumor segmentation, which comprises 484 multi-modal MRI scans (including FLAIR, T1w, T1gd, T2w modalities) from patients diagnosed with gliomas. All Q=3 segmentation targets (that is, tumor core, whole tumor, enhancing tumor) are categorized as open-set.
§.§ MOSMOS
As shown in Fig. <ref>, MOSMOS is a two-stage framework for multi-organ segmentation based on medical report supervision. Given a batch of image-report pairs in the first pre-training stage, we first split the visual representations into global and spatial features through the image encoder, and the textual representations into report-level and tag-level features through the shared text encoder. Then we perform two tasks: global image-report aligning and local pixel-tag aligning. The first task adopts the contrastive learning strategy, which reinforces the matching degree between the visual global representations of the radiology images and the textual global representations of the corresponding reports. The second task leverages multi-label recognition to align the image regions and the organ tags in the original medical reports. For this purpose, we apply a Transformer decoder based on cross-attention to fully leverage visual spatial features to recognize tags located in the images. In the second fine-tuning stage, the pixel-tag attention maps generated by the Transformer decoder are concatenated with the visual spatial features and then fed into the image decoder concurrently. Besides the segmentation loss, the pixel-tag aligning loss is also applied to supervise the training process.
§.§.§ Pre-training
Global image-report aligning
In the routine clinical workflow, medical reports paired with radiology images are generated naturally by experienced radiologists. Assume each image-report pair is unique. We utilize global image-report contrastive learning to align image-report representations. For a mini-batch of B_1 image-report pairs (I, R) sampled from training dataset, we use (I_i, R_i) to represent the i-th pair. We embed the 2D image I_i∈ℝ^H_1× W_1× C_1 with resolution (H_1, W_1) and C_1 input dimension via an image encoder e^I and a linear projection layer p^I into a global feature f^G_i∈ℝ^C and a spatial feature f^S_i∈ℝ^Ĥ_1Ŵ_1× C, where (Ĥ_1, Ŵ_1) and C denote the resolution and dimension of the feature map, respectively:
f_i^G, f_i^S=p^I(e^I(I_i)).
Following the similar processing pipeline, R_i∈ℝ^N × C_2 with token length N and C_2 dimension is converted into a C-dimension global representation f^R_i by a text encoder e^T and a linear projection function p^T:
f_i^R=p^T(e^T(R_i)).
Note that our model is agnostic to the specific choice of image and text encoders. Following previous work <cit.>, we apply ResNet <cit.> and ViT <cit.> as the image encoders e^I. The main difference between them is that ResNet performs a global attention pooling on the spatial feature f^S_i to obtain the global feature f^G_i, while for ViT, f^G_i is the output corresponding to the [class] token. As for the text encoder e^T, we follow the encoder part of the Transformer <cit.> architecture. The projectors p^I and p^T map the representations of images and medical reports into the same C-dimensional space so that contrastive learning can be applied. Based on the bidirectional image-to-report and report-to-image InfoNCE losses <cit.>, the global image-report contrastive loss for each training mini-batch can be formulated as:
ℒ_i2r = -loge^cos(f_i^G, f_i^R) / τ/∑_j=1^B_1 e^cos(f_i^G, f_j^R) / τ,
ℒ_r2i = -loge^cos(f_i^R, f_i^G) / τ/∑_j=1^B_1 e^cos(f_i^R, f_j^G) / τ,
ℒ_irc = 1/2 B_1∑_i=1^B_1(ℒ_i2r+ℒ_r2i),
where cos(·,·) denotes the cosine similarity, cos(f_i^G, f_i^R) = (f_i^G)^⊤ f_i^R /(‖f_i^G‖‖f_i^R‖), ⊤ represents the transpose operation, ‖·‖ denotes the L2 norm, and τ is the learnable temperature parameter initialized to 0.07 following <cit.>.
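For concreteness, a compact PyTorch sketch of this symmetric InfoNCE objective is given below (function and variable names are ours; f_G and f_R denote the projected global image and report features of one mini-batch):

```python
import torch
import torch.nn.functional as F

def image_report_contrastive_loss(f_G: torch.Tensor, f_R: torch.Tensor,
                                  tau: torch.Tensor) -> torch.Tensor:
    """Symmetric InfoNCE loss over a mini-batch of B_1 image-report pairs.

    f_G, f_R: (B_1, C) projected global features; tau: learnable temperature (init 0.07).
    """
    f_G = F.normalize(f_G, dim=-1)                    # L2-normalize so dot products are cosines
    f_R = F.normalize(f_R, dim=-1)
    logits = f_G @ f_R.t() / tau                      # (B_1, B_1) pairwise similarities
    targets = torch.arange(f_G.size(0), device=f_G.device)
    loss_i2r = F.cross_entropy(logits, targets)       # image-to-report direction
    loss_r2i = F.cross_entropy(logits.t(), targets)   # report-to-image direction
    return 0.5 * (loss_i2r + loss_r2i)
```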
Furthermore, compared with natural image-text pairs <cit.>, the publicly available medical multi-modal datasets <cit.> are relatively small to train a generalizable model. Thus we employ CLIP model parameters for initialization. A multitude of medical image-report pairs are subsequently employed for the purpose of fine-tuning CLIP within the medical domain.
Local pixel-tag aligning
In the pre-training stage, we expect to gain more language supervision to learn medical visual representations. The global image-report contrastive learning, however, mainly considers coarse-grained representations of both images and medical reports, while downstream tasks of multi-organ segmentation are pixel-level. To narrow the substantial gap between these stages, we introduce multi-label recognition to implicitly align the image pixels and the organ tags to obtain more fine-grained information.
Multi-label recognition predicts whether each organ tag exists in the radiology images. Unlike the original Query2Label <cit.> that directly used learnable label embeddings as the input queries, we introduce K-class tags as the input, which can be transferred to downstream segmentation tasks based on medical report supervision better. The details of constructing the tag list can be found in Sec <ref>. Motivated by CoOp <cit.>, we apply the learnable textual context to mitigate the domain gap between tags and medical reports. Then the input of the shared text encoder e^T becomes:
T_k = ⟨ p_k, t_k⟩, 1 ≤ k ≤ K,
where p_k∈ℝ^N_1× C_2 is the learnable textual context, shared in K-class tags. t_k∈ℝ^N_2× C_2 is the embedding of k-th organ tag, ⟨·,·⟩ denotes the concatenation, and N_1 and N_2 are the token lengths of the learnable textual context and the tag, respectively. Similar to the procedure of medical reports, we get the global representation f^T_k∈ℝ^C of the tag:
f_k^T=p^T(e^T(T_k)).
On the basis of the spatial feature f^S_i of the input radiology image obtained in Sec <ref>, we treat f^T∈ℝ^K × C as queries and leverage the cross-attention mechanism in Transformer decoder <cit.> to progressively integrate category-related contextualized information from the input image into the query embeddings:
f_i^TS=TransDecoder(f^T,f_i^S,f_i^S),
where f_i^TS∈ℝ^K × C are the updated queries. To perform multi-label recognition, we regard predicting each label as a binary classification task and map the feature f_i,k^TS∈ℝ^C for k-th category of i-th sample into a logit value applying a linear projection layer p^TS followed by a sigmoid function:
y_i,k=Sigmoid(p^TS(f_i,k^TS)),
where y_i,k∈[0,1] is the predicted probability for k-th category of i-th sample. We denote the ground-truth labels of input image I_i as x_i = [x_i,1,⋯,x_i,K] where x_i,k∈{0,1} is a discrete binary label. x_i,k = 1 if the k-th organ tag presents in the corresponding medical report R_i, otherwise x_i,k = 0. Medical reports usually only describe organs that appear abnormal on radiology images, so there may be plenty of false negative labels. To address this issue, we adopt a simple and effective loss, that is, weak assume negative loss <cit.>, which introduces a weight parameter γ∈[0,1] based on binary cross-entropy loss to reduce the effect of false negatives. For a training mini-batch, the multi-label recognition loss is defined as:
ℒ_mlr = -1/B_1K∑_i=1^B_1∑_k=1^K{ log(y_i,k), x_i,k=1; γlog(1-y_i,k), x_i,k=0 },
where γ = 1/(K-1) ensures that the approximate single positive label has the same impact on the loss as the K-1 assumed negatives.
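A PyTorch sketch of this weighted binary cross-entropy is given below (our naming; y are the predicted probabilities and x the binary tag labels):

```python
import torch

def weak_assume_negative_loss(y: torch.Tensor, x: torch.Tensor, K: int) -> torch.Tensor:
    """Multi-label recognition loss with down-weighted assumed negatives.

    y, x: (B_1, K) predicted probabilities and binary labels; gamma = 1/(K-1).
    """
    gamma = 1.0 / (K - 1)
    eps = 1e-8                                        # numerical stability for the logarithms
    pos = x * torch.log(y.clamp(min=eps))             # contribution of positive tags
    neg = gamma * (1 - x) * torch.log((1 - y).clamp(min=eps))  # down-weighted assumed negatives
    return -(pos + neg).mean()                        # mean over B_1 * K entries
```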
Formally, we minimize the total loss function of pre-training tasks of MOSMOS as:
ℒ_total_up=ℒ_irc + ℒ_mlr.
§.§.§ Fine-tuning
Since MOSMOS learns visual representations from medical report supervision in the pre-training stage, we would like to explore the effect of transferring the pre-trained model to multi-organ segmentation tasks. Note that our framework is model-agnostic. For our investigation, we consider two main medical segmentation methods, 2D U-Net <cit.> and 3D UNETR <cit.>, that adopt ResNet <cit.> and ViT <cit.> as their image encoders, respectively. To evaluate the contribution of MOSMOS, we substitute the image encoders with the pre-trained ones and introduce the pre-trained language supervision for multi-label recognition without the classifier.
Weakly supervised positioning
Thanks to the cross-attention mechanism in Transformer decoder <cit.>, the generated pixel-tag attention maps can provide weakly supervised information. Specifically, the attention maps incorporate language supervision into medical visual representations and roughly locate the spatial distribution of the organ tags in the medical images. Take a B_2 mini-batch of 3D radiology images I̅∈ℝ^B_2× H_2× W_2× D_2× C_1 and corresponding Q-class organ tag embeddings T̅∈ℝ^Q ×(N_1+N_2) × C_2, for example. We obtain the spatial features of images f̅^S∈ℝ^B_2×Ĥ_2Ŵ_2D̂_2× C and the global features of tags f̅^T∈ℝ^Q × C using the pre-trained image and text encoders followed by corresponding projectors, respectively, where H_2× W_2× D_2 and Ĥ_2Ŵ_2D̂_2 represent the height, width, and depth of the input images and feature maps, respectively. Regarding f̅^T as queries and f̅^S as keys and values, we pass these features to the Transformer decoder and gain the pixel-tag attention maps f̅^M∈ℝ^B_2×Ĥ_2Ŵ_2D̂_2× Q:
f̅^M=TransDecoder(f̅^T,f̅^S,f̅^S).
The attention maps represent the degree of pixel-tag aligning, which play a significant role in our framework. Firstly, the attention maps can be concatenated with the visual spatial features to integrate medical language prior to guide the segmentation, that is, f̅^SM=⟨f̅^S,f̅^M⟩∈ℝ^B_2×Ĥ_2Ŵ_2D̂_2×(C+Q), and then fed into the image decoder. We obtain the predicted output Y_seg∈ℝ^B_2× H_2W_2D_2× Q. Secondly, we can regard the attention maps as the segmentation results with lower resolution, and thus upsample them to the original resolution by linear interpolation LI to calculate a pixel-tag aligning loss:
Y_pta=LI(f̅^M/ε),
where Y_pta∈ℝ^B_2× H_2W_2D_2× Q is the pixel-tag aligning output, and ε denotes a learnable temperature coefficient and initializes to 0.07 following <cit.>.
Multi-organ segmentation
In addition to the segmentation loss ℒ_seg, we propose a pixel-tag aligning loss ℒ_pta to make better use of the pixel-tag attention maps and help dense segmentation tasks converge faster. Both losses are a combination of cross-entropy loss and dice loss <cit.>:
ℒ(X,Y)=1/B_2∑_b=1^B_2(1-1/V∑_v=1^V∑_q=1^QX_b,v,qlog Y_b,v,q -2/Q∑_q=1^Q∑_v=1^VX_b,v,qY_b,v,q/∑_v=1^VX_b,v,q^2+∑_v=1^VY_b,v,q^2),
ℒ_seg=ℒ(X,Y_seg),
ℒ_pta=ℒ(X,Y_pta),
where X ∈ℝ^B_2× H_2W_2D_2× Q and Y ∈ℝ^B_2× H_2W_2D_2× Q denote the one-hot encoded ground truth and the predicted output, respectively, and V is the number of pixels.
The final loss function is a linear combination of the above two parts:
ℒ_total_down=ℒ_seg + λℒ_pta,
where λ is the hyper-parameter to balance the two-part losses.
§ RESULTS
In this section, we present the experimental details and analyze the results to demonstrate the flexibility and generalization of our proposed multi-organ segmentation algorithm that is facilitated by medical report supervision.
§.§ Implementation details
We implement MOSMOS in PyTorch on a single NVIDIA V100 GPU. Two segmentation baselines are considered, that is, U-Net <cit.> with ResNet-50 <cit.> and UNETR <cit.> with ViT-B/16 <cit.> visual backbones. The textual backbone is the same text encoder as in the CLIP <cit.>. For the sake of a comprehensive analysis, we compare our method with the following seven methods. To ensure a fair comparison, we implement the other methods using the same backbone and hyper-parameter settings as those applied in MOSMOS. The detailed hyper-parameters are listed in Table <ref>.
* Random Init.: The visual backbone of the baseline is initialized using default random initialization.
* ImageNet <cit.> Init.: The visual backbone of the baseline is initialized with weights pre-trained on ImageNet.
* Inpainting+Contrast+Rotation <cit.>: The visual backbone of the baseline is pre-trained through the utilization of three self-supervised proxy tasks on ROCO images, specifically, mask volume inpainting, contrastive learning, and rotation prediction.
* CLIP <cit.>: The visual backbone of the baseline is initialized with weights pre-trained on CLIP.
* CLIP+DenseCLIP <cit.>: The visual and textual backbones of the DenseCLIP are initialized with weights pre-trained on CLIP.
* PubMedCLIP <cit.>: The visual backbone of the baseline is initialized with weights pre-trained on PubMedCLIP.
* PubMedCLIP+DenseCLIP <cit.>: The visual and textual backbones of the DenseCLIP are initialized with weights pre-trained on PubMedCLIP.
In the pre-training stage, we resize all 2D images to H_1× W_1=224 × 224 as the input resolution and set the token lengths of the medical reports, learnable textual context, and tags to N=77, N_1=16, and N_2=10, respectively. The feature dimensions of input images, input texts, and outputs are C_1=768, C_2=512, and C=512, respectively. The network is trained for 50 epochs with a fixed batch size B_1 of 64, and the optimizer is Adam <cit.> with the learning rate of 10^-5. We compute the validation loss after every epoch and save the checkpoint with the lowest validation loss.
During the fine-tuning stage, all images are preprocessed following the procedures in <cit.>. For training, we randomly crop 3D images into a resolution of H_2× W_2× D_2 = 96 × 96 × 96. For 2D images, the D_2 is omitted. We train the whole network using AdamW optimizer <cit.> and set the initial learning rate of 10^-4 for 5,000 epochs. After 50 epochs, the learning rate is decayed according to the cosine attenuation approach <cit.>. Given the memory constraints, we set the batch size B_2 to 96 for ResNet-50-based and 2 for ViT-B/16-based methods. The text encoder is fixed to retain more medical language supervision learned from the large-scale image-report pre-training. For inference, we apply the sliding window method with an overlap ratio of 0.5 and keep the same resolution as the training sets. We calculate the evaluation metrics every 100 epochs and select the models with the best values to perform the test.
§.§ Evaluation metrics
To objectively evaluate the segmentation performance, we apply the Dice similarity coefficient and Hausdorff Distance 95% (HD95) as the evaluation metrics. For a given organ category, let X_v and Y_v represent the ground truth and prediction for pixel v, and X^' and Y^' denote the ground truth and predicted surface point sets. The Dice and HD metrics are defined as:
Dice=2 ∑_v=1^V X_v Y_v/∑_v=1^V X_v+∑_v=1^V Y_v,
HD=max{max _x^'∈ X^'min _y^'∈ Y^'‖ x^'-y^'‖, max _y^'∈ Y^'min _x^'∈ X^'‖ y^'-x^'‖},
where Dice measures the overlaps of ground truth and predicted values of V pixels, and HD95 calculates the 95^th percentile of the surface distances between ground truth and predicted point sets.
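A NumPy/SciPy sketch of the two metrics for a single organ class is given below (binary masks and surface point sets are assumed to be extracted beforehand):

```python
import numpy as np
from scipy.spatial import cKDTree

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between binary masks of the same shape."""
    inter = np.sum(pred * gt)
    return 2.0 * inter / (np.sum(pred) + np.sum(gt) + 1e-8)

def hd95(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between surface point sets of shape (N, 3)."""
    d_pg, _ = cKDTree(gt_pts).query(pred_pts)   # distance from each predicted surface point to the GT surface
    d_gp, _ = cKDTree(pred_pts).query(gt_pts)   # distance from each GT surface point to the predicted surface
    return max(np.percentile(d_pg, 95), np.percentile(d_gp, 95))
```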
§.§ Quantitative segmentation results
§.§.§ Abdominal multi-organ segmentation on BTCV
As shown in Table <ref>, we report the abdominal multi-organ segmentation results of our MOSMOS and other approaches with two different baselines on BTCV. We see that the pre-training methods generally perform better than training from scratch. Compared with other pre-training methods, MOSMOS consistently attains the highest Dice scores, both on average and across the majority of organ categories. This noteworthy accomplishment is attributed to the incorporation of two key components during the pre-training phase: global image-report alignment and local pixel-tag alignment. Specifically, our MOSMOS is 3.37% and 1.18% Dice higher than the ImageNet-based pre-training <cit.> on ResNet-50 and ViT-B/16 visual backbones, respectively. MOSMOS also surpasses the state-of-the-art self-supervised pre-trained baselines (denoted by Inpainting+Contrast+Rotation <cit.>) by 3.68% and 2.37% on average of 13 organs. Besides, MOSMOS consistently maintains advantages of at least 1.10% with respect to these contrastive language-image pre-training models <cit.>. Although MOSMOS with ViT-B/16 visual backbone does not improve as much as with ResNet-50, it outperforms using ResNet-50, so ViT is more suitable for multi-organ segmentation tasks. As for why MOSMOS does not perform best in some organs, we consider that the feature extraction differences in visual backbones affect the positioning capability of attention maps.
§.§.§ Abdominal multi-organ segmentation on AMOS
A performance comparison of multi-organ segmentation tasks on the AMOS dataset for both CT and MRI modalities using MOSMOS versus the baseline UNETR <cit.> is presented in Fig. <ref>. As depicted in Fig. <ref>, MOSMOS consistently outperforms UNETR across all CT segmentation tasks on AMOS, with an average Dice score improvement from 85.37% to 86.29%. Significant improvements can be observed in the closed-set tasks of the stomach, pancreas, right adrenal gland, left adrenal gland, and the open-set tasks of the duodenum, bladder, prostate or uterus, with Dice scores advancing from 87.89% to 89.63%, 80.99% to 82.24%, 72.38% to 74.07%, 73.88% to 74.98%, 73.47% to 75.50%, 85.06% to 87.11%, and 79.48% to 80.87%, respectively. In Fig. <ref>, for all MRI tasks on AMOS, the average Dice score increases from 76.84% to 78.17%. Distinct improvements are evident in the closed-set stomach category and the open-set duodenum category, with Dice scores improving from 79.58% to 83.05% and 57.19% to 61.48%, respectively. The gallbladder category in the closed-set displays the most substantial improvement, with a Dice score of 60.99% compared to 54.08%.
§.§.§ Cardiac substructure segmentation on MMWHS
Table <ref> presents the class-specific results in both Dice and HD95 metrics on cardiac substructure segmentation using the MMWHS dataset. Compared to previous approaches, the proposed MOSMOS displays more strength in Dice than in HD95. Specifically speaking, our MOSMOS outperforms other methods in 9 out of 14 categories in Dice under two different baselines, while 5 out of 14 categories in HD95. From the perspective of average performance, we notice that MOSMOS often achieves better capability. For instance, we gain the state-of-the-art Dice of 83.57% and 87.38% when adopting U-Net <cit.> and UNETR <cit.> as the segmentation baselines, respectively, while keeping the lowest HD95 in UNETR baseline. Moreover, MOSMOS surpasses DenseCLIP <cit.>—the best-performing contrastive language-image pre-training approach—by over 0.51% in Dice.
§.§.§ Statistical significance
In Table <ref> and Table <ref>, we employ Wilcoxon signed rank test to calculate p-values between the average performance of our MOSMOS and PubMedCLIP+DenseCLIP <cit.> in both Dice and HD95 metrics. As we can see, MOSMOS demonstrates statistically significant performance, yielding p-values below 5e-2 across both Dice and HD95 metrics on two distinct baselines (U-Net & UNETR) and two public datasets (BTCV & MMWHS). The sole exception is observed with UNETR baseline on the BTCV dataset. These findings indicate that, in general, MOSMOS has significant advantages over PubMedCLIP+DenseCLIP.
§.§ Analytical ablation studies
§.§.§ Different modules in MOSMOS
We provide a thorough empirical study on MOSMOS by removing the individual modules in Table <ref>. For simplicity, we conduct experiments on the BTCV dataset and choose UNETR as the default baseline.
First, we investigate the effect of medical image-report contrastive learning in the pre-training stage. By comparing row 1 with row 0, we observe that dropping the cross-modal contrastive task would adversely affect the overall performance by 0.54% and 1.59mm in average Dice and HD95, respectively. We argue the reason behind this is that the global image-report aligning is a prerequisite for the local pixel-tag aligning and thus benefits downstream segmentation tasks. Next, we remove the learnable textual context so that each organ tag is embedded alone by the text encoder (row 2). Such an operation causes Dice to drop by 1.30% and HD95 to rise by 1.64mm (compared with row 0). This phenomenon demonstrates the helpfulness of mitigating the gaps between organ tags and reports. In addition, we consider not adopting CLIP parameters for initialization (row 3), where we can see a 1.59% decrease in Dice and a 3.55mm increase in HD95. This result verifies the advantage of large-scale cross-modal pre-training. Last but not least, we evaluate the significance of the entire pre-training process. A comparison between Row 4 and Row 0 reveals that the integration of supervision derived from medical reports can markedly improve overall performance. Such an improvement is attributed to the introduction of comprehensive medical prior knowledge without any additional manpower expense.
§.§.§ Different weights of the pixel-tag aligning loss
We vary the weight of pixel-tag aligning loss to explore the sensitivity of results to the trade-off parameter λ in Eq. (<ref>). To be specific, we range λ∈[0.1, 1.0] at a step of 0.1 and analyze the organ-wise segmentation performance on the BTCV dataset. As shown in Fig. <ref>, the box plot displays the average Dice across each organ of our method based on the ResNet-50 visual backbone. Our MOSMOS achieves the best performance on the BTCV test set when the λ is set to 0.8. The performance fluctuations are very small, except when λ is 0.7. In comparison, our MOSMOS is capable of generally outperforming the baseline (e.g., 74.86% for U-Net shown in Table <ref>). In Fig. <ref>, we compare the performance of MOSMOS based on the ViT-B/16 visual backbone with different λ. Although MOSMOS attains the highest Dice score when λ equals 0.8, it exhibits low sensitivity to λ. Considering the performance across both visual backbones, we empirically set λ to 0.8.
§.§.§ Different patch resolutions of the ViT-B backbone
In Table <ref>, we compare the average performance of the ViT-B visual backbone with different input patch resolutions. It shows that the performance significantly improves when decreasing the patch resolution. Specifically, dropping the resolution from 32 to 16 boosts the Dice of our MOSMOS by 2.12% and 6.59% on BTCV and MMWHS datasets, respectively. We can also observe that the proposed MOSMOS consistently maintains a significant advantage over the baseline UNETR in different resolutions and datasets. However, a lower patch resolution leads to a longer sequence and, therefore, higher memory cost. Considering the trade-off between segmentation performance and memory consumption, we empirically set the input patch resolution of ViT-B to 16.
§.§.§ Different label ratios in the fine-tuning stage
Fig. <ref> displays the performance comparison of various approaches under different label ratios on BTCV test dataset. Using only 25% of labeled data, our MOSMOS achieves a 7% improvement in performance compared to training a model from scratch. When utilizing the full set of labeled data, MOSMOS outperforms models trained from scratch or those using other pre-training methods by an average Dice score increase of 3.37%. Notably, MOSMOS only needs 75% of the annotated training data to match the performance comparable with those of other methods under a 100% labeled ratio. This highlights MOSMOS's efficiency in reducing annotation efforts by approximately 25% for the multi-organ segmentation task on BTCV.
§.§ Visualization for qualitative segmentation results
Fig. <ref> illustrates the segmentation and weakly supervised positioning results for qualitative evaluation. We mainly visualize the segmentation maps of our MOSMOS and other pre-training approaches on two public datasets. Compared to training from scratch and self-supervised or image-report contrastive pre-training, our MOSMOS displays visual improvements in capturing the shape of inferior vena cava (IVC, row 1 on BTCV) and pancreas (Pan, row 1 on BTCV), right atrium (RA, row 1 on MMWHS), and ascending aorta (AA, row 1 and row 2 on MMWHS). In addition, MOSMOS can reduce the prediction of false positives. One representative example is the second case on BTCV. Other methods predict the wrong liver (Liv) pixels near the stomach (Sto). Furthermore, the weakly supervised positioning results show that MOSMOS can distinguish between left and right organs through tailored pre-training tasks, demonstrating the superiority of our approach.
§ DISCUSSION
§.§ Strengths
We verify the effectiveness and generalization of our MOSMOS on multi-disease, multi-modal, and multi-organ datasets. Considering the cost of collection and annotation, the BTCV, AMOS, and MMWHS datasets consist of a small number of annotated images, which are insufficient to train effective models from scratch with randomly initialized weights. Thus we make a thorough comparison of the pre-training strategies. We observe that MOSMOS outperforms the previous pre-training approaches on most of the average metrics and organ categories. The main reasons for this are: (i) The ImageNet-based pre-training utilizes natural images, which exhibit enormous differences from medical images. In addition, this method is supervised and requires extensive annotations. (ii) Without the need for annotation effort, the self-supervised pre-training approach designs proxy tasks to learn solely visual representations, which does not introduce additional potentially exploitable supervisory information and has a gap with downstream tasks. (iii) As for the image-report contrastive pre-training, it adopts language priors paired with medical images as supervision without extra human effort. However, the visual spatial features transferred downstream are indirectly aligned to the text embeddings via the visual global features. In contrast, MOSMOS directly aligns the visual spatial features and the tag embeddings corresponding to the organ tags by introducing multi-label recognition in the pre-training stage, which can roughly identify the same organ with different shapes and sizes using attention maps in the Transformer decoder. Unlike traditional multi-label classification, which encodes the multiple labels into a string of numbers as input, we take the embeddings of multiple organ tags as input. In this way, our MOSMOS is scalable and generalized. Meanwhile, MOSMOS is suitable for any segmentation model. The performance improvements on U-Net and UNETR demonstrate the universality of the MOSMOS framework.
§.§ Limitations
Our approach still has some limitations that can be improved in future works.
First, the proposed MOSMOS has only been pre-trained using 2D medical image-report pairs and transferred to 2D and 3D multi-organ segmentation tasks. This is mainly due to the lack of publicly available 3D image-report pairs, which are more consistent with clinical practice in most medical imaging modalities. In future work, we will extend our framework to 3D image-report pre-training by constructing such a dataset.
Second, in the current pre-training stage, we have constructed only 20 limited organ tag categories, primarily focused on abdominal multi-organs and cardiac substructures, which are not sufficient for fine-grained segmentations of the entire complex human body organs. Despite this, the diversity of medical reports in the pre-training stage provides a preliminary basis, as demonstrated in Fig. <ref>, for MOSMOS to exhibit a degree of open-set segmentation capability for abdominal organs on the AMOS dataset. Therefore, we can further refine our approach by expanding the tag list used in the pre-training stage and developing more advanced algorithms for open-set multi-organ segmentation.
Third, we mainly focus on organ segmentation, but ignore the descriptions of lesion morphology, size, location, and number in the reports, which can guide more significant tasks of fine-grained lesion segmentation. To further explore the generalization of our MOSMOS model, we extend its application to a slightly out-of-domain task—brain tumor segmentation on the BRATS dataset. Table <ref> shows the performance comparison in Dice score between MOSMOS and the baseline UNETR. MOSMOS surpasses UNETR by 1.12% Dice in whole tumor segmentation, yet exhibits comparable or inferior performance in the more granular tumor core and enhancing tumor segmentation tasks. Due to the significant differences between organs and lesions, the performance improvement in open-set brain tumor segmentation on BRATS is not substantial. Consequently, we aim to optimize our framework to be more suitable for lesion segmentation by mining the medical reports for more detailed information to further demonstrate its generality.
§ CONCLUSIONS
In this paper, we present a novel framework, dubbed MOSMOS, for multi-organ segmentation by leveraging cross-modal pre-training with medical image-report pairs. Based on global image-report aligning, MOSMOS first introduces the proxy task of local pixel-tag aligning. It utilizes a multi-label recognition approach to position the organ tags extracted from reports in the corresponding images, which is more suitable for complex fine-grained segmentation tasks in the downstream. Thus, the proposed framework is capable of being general for multi-disease, multi-modal, and multi-organ segmentation tasks on both 2D and 3D networks.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Weiwei Tian: Writing – review & editing, Writing – original draft, Visualization, Validation, Supervision, Software, Resources, Project administration, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Xinyu Huang: Writing – review & editing, Writing – original draft, Visualization, Validation, Supervision, Software, Resources, Methodology, Investigation, Data curation, Conceptualization. Junlin Hou: Writing – review & editing, Writing – original draft, Visualization, Validation, Software, Methodology, Formal analysis. Caiyue Ren: Writing – review & editing, Writing – original draft, Visualization, Validation, Software, Formal analysis. Longquan Jiang: Writing – review & editing, Writing – original draft, Visualization, Validation, Supervision, Software, Resources, Project administration, Investigation, Funding acquisition. Rui-Wei Zhao: Writing – review & editing, Writing – original draft, Visualization, Validation, Funding acquisition, Formal analysis. Gang Jin: Writing – review & editing, Writing – original draft, Supervision, Conceptualization. Yuejie Zhang: Writing – review & editing, Writing – original draft, Supervision, Funding acquisition, Formal analysis. Daoying Geng: Writing – review & editing, Writing – original draft, Supervision, Investigation, Formal analysis.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no competing interests.
§ DATA AVAILABILITY
Data will be made available on request.
§ ACKNOWLEDGMENTS
This work was supported in part by the Science and Technology Commission of Shanghai Municipality (No.22511106003, No.23511100602) and the Shanghai Research and Innovation Functional Program under Grant 17DZ2260900.
|
http://arxiv.org/abs/2409.02509v1 | 20240904081040 | Distributed Quantum Computation via Entanglement Forging and Teleportation | [
"Tian-Ren Jin",
"Kai Xu",
"Heng Fan"
] | quant-ph | [
"quant-ph"
] |
Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
Beijing Academy of Quantum Information Sciences, Beijing 100193, China
Hefei National Laboratory, Hefei 230088, China
Songshan Lake Materials Laboratory, Dongguan 523808, China
CAS Center for Excellence in Topological Quantum Computation, UCAS, Beijing 100190, China
[email protected]
Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
Beijing Academy of Quantum Information Sciences, Beijing 100193, China
Hefei National Laboratory, Hefei 230088, China
Songshan Lake Materials Laboratory, Dongguan 523808, China
CAS Center for Excellence in Topological Quantum Computation, UCAS, Beijing 100190, China
§ ABSTRACT
Distributed quantum computation is a practical method for large-scale quantum computation on quantum processors with limited size.
It can be realized by direct quantum channels carried by flying qubits.
Moreover, pre-established quantum entanglement can also play the role of a quantum channel when combined with local operations and classical channels.
However, even without quantum correlations such as quantum channels or shared entanglement, the entanglement forging technique allows us to forge entangled states classically with local operations and classical channels only.
In this paper, we demonstrate methods to implement a nonlocal quantum circuit on two quantum processors without any quantum correlations, based on the fact that teleportation with classically forged Bell states is equivalent to quantum state tomography.
In compensation, the number of repeated measurements increases, and several auxiliary qubits are required.
Our results extend the possibility of integrating quantum processors.
We expect that our methods will complement the toolbox of distributed quantum computation, and facilitate the extension of the scale of quantum computations.
Distributed Quantum Computation via Entanglement Forging and Teleportation
Heng Fan
==========================================================================
§ INTRODUCTION
Quantum computation has attracted widespread attention in recent years, since it offers advantages in simulating large quantum systems and in solving specific problems with efficient quantum algorithms.
Practical quantum computation requires millions of qubits at a relatively low level of noise, which obstructs the applications of quantum computation.
Quantum error correction and other techniques provide a systematic way to deal with the noise in quantum operations <cit.>.
Recently, remarkable progress in quantum error correction codes has shown that the noise level of state-of-the-art experimental techniques is close to the threshold of fault-tolerant quantum computation <cit.>.
However, fabricating so many qubits on an individual quantum chip remains challenging for state-of-the-art experimental techniques.
Quantum internet and distributed quantum computation provide a feasible way to extend the size of quantum processors <cit.>.
With many integrated quantum processors, the scale of implementable quantum tasks can be much larger than on an individual quantum processor.
To integrate two spatially separated quantum processors, they can be directly connected by flying qubits that transmit quantum states, which has been realized with both photons <cit.> and phonons <cit.>.
The famous quantum teleportation protocol also shows that a quantum channel can be realized by entangled states together with classical channels <cit.>.
In addition, the gate teleportation protocol teleports the nonlocal controlled-unitary quantum gates with the assistance of only a single Bell pair <cit.>.
Therefore, nonlocal circuits can be implemented on separate quantum processors with quantum correlations.
However, even when two quantum processors are connected by only local operations and classical channels (LOCC), nonlocal quantum circuits can still be implemented.
Recently, a technique called entanglement forging has been used to double the size of quantum simulators <cit.>.
In this technique, entangled states are forged classically from separable states.
So far, however, entanglement forging has been used only to double the size of quantum processors in some specially designed circuits <cit.>.
With entanglement forging and the teleportation protocol, any nonlocal quantum circuit can be forged across separate quantum processors with only classical channels.
This paper is organized as follows.
In Sec. <ref>, we review the entanglement forging and the gate teleportation protocol.
In Sec. <ref>, we show that the quantum state teleportation with entanglement forging is equivalent to quantum state tomography, and demonstrate the methods to forge a nonlocal quantum circuit on two separated quantum processors with LOCCs.
In Sec. <ref>, we discuss the suppression of measurement noise in our methods with measurement error mitigation.
The conclusion and discussion are given in Sec. <ref>.
§ PRELIMILARIES
§.§ Entanglement Forging
Entanglement forging employs the Schmidt decomposition of a bipartite entangled state
|ψ⟩ = (Û⊗V̂) ∑_iλ_i |i⟩⊗|i⟩,
where {|i⟩} is the computational basis of the local Hilbert spaces, Û and V̂ are local unitaries, and the Schmidt coefficients λ_i are positive.
In terms of the density matrix, it can be written as
ρ_ψ = (Û⊗V̂) [∑_iλ_i^2 |i⟩⟨i|⊗|i⟩⟨i| + ∑_i∑_j<iλ_iλ_j∑_p ∈ℤ_4 (-1)^p |ij_p⟩⟨ij_p|⊗|ij_p⟩⟨ij_p|] (Û^†⊗V̂^†),
where |ij_p⟩ = 1/√(2)(|i⟩ + i^p |j⟩), with p ∈{0,1,2,3}.
Due to the factor (-1)^p, this decomposition of the entangled state |ψ⟩ is not a classical probabilistic mixture of separable states, in which all coefficients would be positive.
This kind of decomposition is called local pesudomixture <cit.> in the investigation of the robustness measure of quantum entanglement.
In general, we can decompose a state ρ_AB of system AB into the product states of AB
ρ_AB = ∑_i x_i ρ_A^i ⊗ρ_B^i,
where the coefficients x_i can be both positive and negative.
With the normalization of states, the coefficients satisfy ∑_i x_i = 1, thus this decomposition is called the quasiprobability decomposition.
If all the coefficients are positive, this state is a separable state by definition.
Therefore, for an entangled state, there exist coefficients x_i <0.
The decomposition of the entangled state can be rewritten as
ρ_AB = Z ∑_i sgn(x_i) q_i ρ_A^i ⊗ρ_B^i,
where Z = ∑_i |x_i|, and q_i = |x_i|/Z is a probabilistic distribution.
Therefore, the expectation of observable Ô in ρ_AB is
⟨Ô⟩_ρ_AB = Z ∑_i q_i (sgn(x_i) ⟨Ô⟩_ρ_A^i ⊗ρ_B^i),
the probabilistic mixture of the expectations in separable state ρ_A^i ⊗ρ_B^i with signature sgn(x_i).
The entangled state can be simulated by a separable state in the sense of expectations.
A similar technique is also used in the probabilistic error cancellation method of quantum error mitigation <cit.>.
The cost of separable states in the simulation of an entangled state is quantified by the factor Z, and the minimal cost over all possible decompositions is called the implementability of the state ρ_AB with respect to separable states <cit.>
p_𝒮(ρ_AB) = min{∑_i |x_i| | ρ_AB = ∑_i x_i ρ_A^i ⊗ρ_B^i}.
§.§ Gate Teleportation
The gate teleportation protocol was put forward to implement the quantum CNOT gate nonlocally with the assistance of a pair of qubits in a maximally entangled state.
It can also be extended to teleport any N-controlled unitary gate.
Given a controlled-unitary gate 𝒰_C, there are universal starting and ending processes <cit.>, which allow for the nonlocal application of the gate 𝒰_C with the consumption of only a single Bell pair.
By contrast, the nonlocal implementation of a general two-qubit gate via state teleportation consumes two Bell pairs.
The gate teleportation protocol of a controlled-unitary gate 𝒰_C is shown in Fig. <ref>(c).
The starting and ending process is part of the teleportation of state, as shown in Fig. <ref>(d), which contains the Bell measurement and the feedback.
Assume the input state of the starting process is |ψ_in⟩ = α|0⟩ + β|1⟩, which is the state of a qubit called the control qubit in the following.
Then the output state of the starting process is
|ψ_start⟩ = α|00⟩ + β|11⟩,
where the second qubit is an auxiliary qubit.
The ending process can take the state |ψ_start⟩ back to |ψ_in⟩, without the cost of nonlocal quantum operations or quantum states.
If a two-qubit quantum gate Û is applied (to the auxiliary and target qubits) before the ending process, the effective operation on the control and target qubits will generally not be the two-qubit gate Û.
Expand the quantum gate Û in the Pauli basis on the auxiliary qubit prepared by the starting process as
Û = ∑_i=0,x,y,zσ̂_i ⊗Û_i.
The action of this gate is
Î⊗Û|ψ_start⟩⊗|ψ_t⟩
= ∑_i (α|0⟩⊗σ_i |0⟩ + β|1⟩⊗σ_i |1⟩)⊗Û_i |ψ_t⟩,
where |ψ_t⟩ is the state of the qubit not involved in the starting and ending processes, which is called the target qubit in the following.
After the ending process, i.e., the measurement in the x direction on the auxiliary qubit followed by the feedback, the state of the control and target qubits is
|ψ_out^+⟩ = [Î⊗(Û_0 + Û_x) + σ̂_z ⊗(Û_z + iÛ_y )]|Ψ_in⟩ ,
|ψ_out^-⟩ = [Î⊗(Û_0 - Û_x) + σ̂_z ⊗(Û_z -iÛ_y)]|Ψ_in⟩ ,
where |Ψ_in⟩ = |ψ_in⟩⊗|ψ_t⟩.
To realize the gate teleportation, it requires that
|ψ_out^+⟩ = |ψ_out^-⟩ = Û|Ψ_in⟩,
which gives that
Û_x = Û_y = 0.
This means that the quantum gates, which can be teleported with the starting and ending process, are in the form
Û = Î⊗Û_0 + Û_cσ̂_z Û_c^†⊗Û_z,
where Û_c is an arbitrary unitary transformation acting on the control qubit.
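As a quick numerical illustration of this condition, the NumPy sketch below (our own construction) expands a CNOT gate in the Pauli basis on its control qubit and confirms that the X and Y components vanish:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def pauli_component(U, sigma):
    """U_i = (1/2) Tr_1[(sigma ⊗ I) U], the operator acting on the target qubit."""
    M = (np.kron(sigma, I2) @ U).reshape(2, 2, 2, 2)
    return 0.5 * np.einsum('ijik->jk', M)        # partial trace over the first (control) qubit

U0, Ux, Uy, Uz = (pauli_component(CNOT, s) for s in (I2, X, Y, Z))
print(np.allclose(Ux, 0), np.allclose(Uy, 0))    # True True: CNOT satisfies U_x = U_y = 0
print(np.allclose(U0, 0.5 * (I2 + X)), np.allclose(Uz, 0.5 * (I2 - X)))  # True True
```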
In this protocol, the gate teleportation costs one pair of qubits in the Bell state.
For a general two-qubit quantum gate, gate teleportation is instead realized by applying state teleportation twice, which costs two Bell pairs.
§ DISTRIBUTED QUANTUM COMPUTATION WITH LOCAL OPERATIONS AND CLASSICAL CHANNELS
§.§ Teleportation Forged by Separable States
The teleportation protocol teleports qubits through classical channels at the cost of Bell states <cit.>, so entangled states are resources for quantum communication.
Entangled states cannot be prepared by local operations and classical channels (LOCC), which constitute the free set of quantum entanglement theory <cit.>.
However, we can simulate an entangled state ρ^E by a quasiprobability decomposition.
Here is an example of the simulation of the Bell state.
|B^+⟩⟨B^+| = 1/2(|00⟩⟨00| + |00⟩⟨11| + |11⟩⟨00| + |11⟩⟨11|)
= 1/4(Î + Ẑ_1 Ẑ_2 + X̂_1 X̂_2 - Ŷ_1 Ŷ_2 )
= 1/4(Î + Ẑ_1 Ẑ_2) + 1/4(Î + X̂_1 X̂_2) - 1/4(Î + Ŷ_1 Ŷ_2),
where
1/4(Î + Ẑ_1 Ẑ_2) = 1/2(|0_z0_z⟩⟨0_z0_z| + |1_z1_z⟩⟨1_z1_z|) ,
1/4(Î + X̂_1 X̂_2) = 1/2(|0_x0_x⟩⟨0_x0_x| + |1_x1_x⟩⟨1_x1_x|) ,
1/4(Î + Ŷ_1 Ŷ_2) = 1/2(|0_y0_y⟩⟨0_y0_y| + |1_y1_y⟩⟨1_y1_y|) .
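This decomposition is straightforward to check numerically; a short NumPy sketch (our own construction) is given below:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)               # |B+> = (|00> + |11>)/sqrt(2)
rho_bell = np.outer(bell, bell.conj())

forged = (0.25 * (np.kron(I2, I2) + np.kron(Z, Z))
          + 0.25 * (np.kron(I2, I2) + np.kron(X, X))
          - 0.25 * (np.kron(I2, I2) + np.kron(Y, Y)))

print(np.allclose(rho_bell, forged))             # True: the three separable mixtures forge |B+><B+|
```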
The teleportation protocol is implemented by appending a state ρ_C and performing the Bell state measurements Π̂_s, s=0,1,2,3 on the system CA.
If the measurement outcome is s_0, then the state is
Π̂_s_0ρ_C ⊗ρ_ABΠ̂_s_0 = ∑_i x_i Π̂_s_0ρ_C ⊗ρ_Ai⊗ρ_BiΠ̂_s_0
= Π̂_s_0⊗∑_i x_i p(s_0|i) ρ_Bi,
where p(s|i) = Tr(Π̂_sρ_C ⊗ρ_Ai) = Tr(ρ_Ai^Tσ_sρ_C σ_s).
The teleported state ρ_C is recovered by
ρ_C ∝∑_i x_i p(s_0|i) 𝒰_s_0(ρ_Bi) = ∑_s ∑_i x_i p(s|i) 𝒰_s(ρ_Bi).
Thus, with decomposition Eq. (<ref>), the classical information of p(s|i) for a fixed s is sufficient to recover the state ρ_C.
Therefore, the teleportation protocol can be simulated completely by separable states, which means that Alice's quantum information is conveyed to Bob entirely through classical channels.
For simplicity, we consider σ_s=0 = I, then p(0|i) = Tr(ρ_Ai^Tρ_C ).
In the decomposition Eq. (<ref>), ρ_Ai = |i_α⟩⟨i_α|, where i = 0,1 and α = x, y, z, so as ρ_Ai^T.
Therefore, the information of p(0|i) is the same as the required information of standard quantum state tomography (QST) on single qubit <cit.>, so the teleportation with the Bell state simulated in this decomposition is equivalent to the standard QST.
Because the dimension of the single qubit state space is d = 3, which is equal to the number of quantities measured by standard QST, it is reasonable to believe that the implementability is attained by this decomposition, p_𝒮(ρ_AB) = 3.
When simulating many Bell pairs, let the system of Bell states ρ_AB be AB = ⊗_a A_a B_a, where A_a, B_a are single qubit.
If the state that can be freely used is the separable state in the systems A_a and B_a, ℱ_𝒮 = conv[⊗_a𝒮(A_aB_a)], it can be shown that (Proposition 4 in Ref. <cit.>)
p_ℱ_𝒮(ρ_AB) = ∏_a p_𝒮(ρ_A_a B_a) = 3^L,
where L is the number of simulated Bell pairs.
The teleportation with Bell pairs in this simulation is equivalent to the local tomography <cit.>.
Equation (<ref>) implies that the standard QST is the most efficient scheme in local tomography.
On the contrary, if entangled states among the systems A_a (or among the B_a) can be freely used, ℱ = 𝒮(AB) = conv[𝒬(A) ⊗𝒬(B)], it can be shown that (Proposition 5 and Proposition 11 in Ref. <cit.>)
2^L ≤ p_𝒮(ρ_AB) ≤ 3^L.
The investigations in classical teleportation also show that the standard QST is not the most efficient scheme in global tomography <cit.>.
With classically forged Bell states and the teleportation protocol, a quantum computational task can be performed on separate quantum processors with only classical communication.
In compensation, there is an exponentially increasing overhead of repeated measurements.
The teleportation of an arbitrary two-qubit gate requires two pairs of qubits in the Bell state.
With the decomposition of Eq. (<ref>), the classical teleportation of one nonlocal quantum gate therefore incurs an overhead 9 times larger than performing the gate on a single chip, and requires 4 auxiliary qubits.
When the two Bell pairs are simulated jointly, however, the entanglement forging of Eq. (<ref>) is not optimal, and the optimal decomposition has an overhead larger than a factor of 4.
Since the classical teleportations of different nonlocal gates are independent, for a quantum circuit with L layers, each containing one nonlocal gate between the two processors, an overhead of p^2L and 4L auxiliary qubits are required, where 2 ≤ p ≤ 3.
§.§ Identity Operations Forged by Projective Measurements
Since distributed quantum computation aims at scaling up the size of quantum computers, the growing number of auxiliary qubits limits the usefulness of the above method.
In the following, we illustrate a method with fewer auxiliary qubits.
State teleportation can be viewed as an identity operation between two qubits.
In state teleportation with entanglement forging, one auxiliary qubit is needed to realize this identity operation.
Therefore, we hope to classically simulate this operation directly.
In terms of the Choi-Jamiołkowski (CJ) isomorphism <cit.>, the Bell state is dual to the identity operation
|B^+⟩⟨B^+| = 1/2(|00⟩⟨00| + |00⟩⟨11| + |11⟩⟨00| + |11⟩⟨11|)
↦ℐ(·) = |0⟩⟨0|(·) |0⟩⟨0| + |0⟩⟨0|(·) |1⟩⟨1| + |1⟩⟨1|(·) |0⟩⟨0| + |1⟩⟨1|(·) |1⟩⟨1|.
This inspires us to construct the identity operation from the classical forging of the Bell state, Eq. (<ref>), by transposing the “bra” and “ket” in the middle of the states.
This construction gives
ℐ(·) = Π̂_0z(·)Π̂_0z + Π̂_1z(·)Π̂_1z + Π̂_0x(·)Π̂_0x + Π̂_1x(·)Π̂_1x - Π̂_01y(·)Π̂_10y - Π̂_10y(·)Π̂_01y ,
where Π̂_ijα = |i_α⟩⟨j_α| and Π̂_iα≡Π̂_iiα.
It is the mixture of the projective measurements in the x and z directions with the operation [Π̂_01y(·)Π̂_10y + Π̂_10y(·)Π̂_01y] subtracted.
The subtracted operation is not the projective measurement in the y direction, since
|0_y⟩^T = ⟨1_y|, |1_y⟩^T = ⟨0_y|.
Although the operation [Π̂_01y(·)Π̂_10y + Π̂_10y(·)Π̂_01y] is not itself a projective measurement, it is equivalent to one up to a local unitary transformation,
Π̂_01y(·)Π̂_10y + Π̂_10y(·)Π̂_01y = 𝒫_z ∘ℳ_y(·),
where 𝒫_z(·) = Ẑ(·)Ẑ is the Pauli-z operation.
Therefore, the identity operation is
ℐ(·) = ℳ_z(·) + ℳ_x(·) - 𝒫_z ∘ℳ_y(·),
where
ℳ_α(·) = Π̂_0α(·)Π̂_0α + Π̂_1α(·)Π̂_1α
is the projective measurement channel in the α = x, y, or z direction.
This allows us to simulate the identity channel classically.
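A short NumPy sketch (our own construction) verifying this identity on a random qubit state is given below:

```python
import numpy as np

Z = np.diag([1, -1]).astype(complex)

def basis(alpha):
    """Orthonormal eigenbasis of the Pauli-alpha operator."""
    if alpha == 'z':
        return [np.array([1, 0], complex), np.array([0, 1], complex)]
    if alpha == 'x':
        return [np.array([1, 1], complex) / np.sqrt(2), np.array([1, -1], complex) / np.sqrt(2)]
    return [np.array([1, 1j], complex) / np.sqrt(2), np.array([1, -1j], complex) / np.sqrt(2)]

def measure(rho, alpha):
    """Projective measurement channel M_alpha."""
    return sum(np.outer(v, v.conj()) @ rho @ np.outer(v, v.conj()) for v in basis(alpha))

A = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
rho = A @ A.conj().T
rho /= np.trace(rho)

forged_identity = measure(rho, 'z') + measure(rho, 'x') - Z @ measure(rho, 'y') @ Z
print(np.allclose(forged_identity, rho))        # True: M_z + M_x - P_z∘M_y acts as the identity
```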
Assume Alice (A) and Bob (B) want to perform a nonlocal circuit 𝒰.
This circuit can be represented as
𝒰 = (𝒱_2A⊗𝒱_2B) ∘𝒱∘ (𝒱_1A⊗𝒱_1B),
where 𝒱 is a nonlocal gate acting on one qubit q_a of Alice and one qubit q_b of Bob.
Let the initial state be ρ_A⊗ρ_B.
Typically, it can be selected as |0⟩_A^⊗ m⊗|0⟩_B^⊗ n in practice.
The output state is
ρ_out = 𝒰(ρ_A⊗ρ_B).
Insert ℐ before and after the gate 𝒱 on Alice's qubit q_a (or Bob's qubit q_b), on which the gate 𝒱 acts.
Then the output state can be reconstructed as
ρ_out = ∑_i,j,k,l∑_α, β, μ, ν M_jβ,iα M_lν,kμ
×ρ(A|i_α,l_ν) ⊗ρ(B|k_μ,j_β),
where the unnormalized states ρ(A|i_α,l_ν), ρ(B|k_μ,j_β) are
ρ(A|i_α,l_ν) = Tr_q_a [Π̂_iα^(q_a)𝒱_2A^(q_c→ q_a)
∘𝒱_1A^(A)(ρ_A⊗Π̂_lν^(q_c))],
ρ(B|k_μ,j_β) = Tr_q_d [Π̂_kμ^(q_d)𝒱_2B^(B)∘𝒱^(q_d → q_a)
∘𝒱_1B^(B)(ρ_B⊗Π̂_jβ^(q_d))],
and the transition matrix is
M_jβ,iα = (-1)^δ_α,yδ_α,β [δ_i,j(1-δ_α,y) + (1-δ_i,j)δ_α,y].
Here, q_c and q_d are two auxiliary qubits, and the notation [·]^(q_c,d→ q_a) denotes the operation [·] applied after exchanging q_a and q_c,d,
[·]^(q_c,d→ q_a) = [·] ∘𝒮_q_c,d↔ q_a.
This method splits the nonlocal circuit into two separate circuits with two auxiliary qubits.
The diagram is shown in Fig. <ref>.
The auxiliary qubits q_c and q_d are prepared randomly in one of the six states |i_α⟩, and the qubits q_a and q_d are measured randomly in the x, y, and z directions.
After the preparations, evolutions, and measurements on q_a and q_d, they obtain the states ρ(A|i_α,l_ν) and ρ(B|k_μ,j_β), which are labeled by the measurement outcomes |i_α⟩, |k_μ⟩ and the prepared states |l_ν⟩, |j_β⟩.
In the post-processing, they pair the state ρ(A|i_α,l_ν) with ρ(B|k_μ,j_β) and the signature ϵ_αϵ_ν.
The index |i_α⟩ is related to ⟨j_β|, and |k_μ⟩ is related to |l_ν⟩, in the way that for a pair (|i_α⟩, ⟨j_β|), i = j if α = β = x,z and i ≠ j if α = β = y.
The signature ϵ_α = (-1)^δ_α,y is negative when α = y.
This construction follows from the equivalence between teleportation forged by separable states and standard QST shown above.
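The sketch below (NumPy; all names are ours, and the local unitaries 𝒱_1A, 𝒱_2A, 𝒱_1B, 𝒱_2B are taken to be trivial for brevity) reconstructs the output of a nonlocal CNOT from the two separated circuits and the pairing matrix M, as a numerical check of Eq. (<ref>):

```python
import numpy as np

def ket(vec):
    v = np.array(vec, dtype=complex)
    return v / np.linalg.norm(v)

# Six preparation/measurement projectors |i_alpha><i_alpha| and the pairing matrix M.
kets = {('0', 'z'): ket([1, 0]), ('1', 'z'): ket([0, 1]),
        ('0', 'x'): ket([1, 1]), ('1', 'x'): ket([1, -1]),
        ('0', 'y'): ket([1, 1j]), ('1', 'y'): ket([1, -1j])}
labels = list(kets)
proj = {l: np.outer(v, v.conj()) for l, v in kets.items()}

def M(out_label, prep_label):
    """Pairing coefficient M_{j beta, i alpha} of the forged identity channel."""
    (i, a), (j, b) = out_label, prep_label
    if a != b:
        return 0.0
    return -1.0 if a == 'y' and i != j else (1.0 if a != 'y' and i == j else 0.0)

V = np.array([[1, 0, 0, 0], [0, 1, 0, 0],       # nonlocal gate: CNOT on (q_a, q_b)
              [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

rho_A = proj[('0', 'x')]                        # Alice's input qubit state
rho_B = proj[('0', 'z')]                        # Bob's input qubit state
direct = V @ np.kron(rho_A, rho_B) @ V.conj().T

def trace_out_first(rho4):
    """Partial trace over the first qubit of a two-qubit operator."""
    return np.einsum('ijik->jk', rho4.reshape(2, 2, 2, 2))

forged = np.zeros((4, 4), dtype=complex)
for ia in labels:                               # Alice measures q_a -> outcome i_alpha
    for jb in labels:                           # Bob prepares q_d in |j_beta>
        for km in labels:                       # Bob measures q_d -> outcome k_mu
            for ln in labels:                   # Alice prepares q_c in |l_nu>
                w = M(ia, jb) * M(km, ln)
                if w == 0.0:
                    continue
                rho_Apart = np.trace(proj[ia] @ rho_A) * proj[ln]   # weighted state on Alice's outgoing wire
                rho_Bob_in = np.kron(proj[jb], rho_B)               # V acts on (q_d, q_b) on Bob's side
                rho_Bpart = trace_out_first(np.kron(proj[km], np.eye(2)) @ V @ rho_Bob_in @ V.conj().T)
                forged += w * np.kron(rho_Apart, rho_Bpart)

print(np.allclose(forged, direct))              # True: the separated circuits forge the nonlocal CNOT
```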
For a circuit with more nonlocal gates between Alice and Bob, the circuit can be split into separated circuits by the same method.
Splitting L nonlocal gates requires 2L auxiliary qubits.
Moreover, if the qubit q_a can be reset, the role of q_c can be taken by q_a after it has been measured, so only L auxiliary qubits are necessary.
In compensation, however, the overhead is a factor of 3^2L, which is optimal since the state preparations and measurements on Alice's and Bob's sides share no quantum correlation.
§ MITIGATION OF THE NOISE IN MEASUREMENTS
The perfect realization of both methods discussed in the previous section depends on perfect state preparations and measurements.
In practice, there are always errors in preparations and measurements.
These errors can be canceled by measurement error mitigation <cit.>.
The post-processing of measurement error mitigation further increases the overhead.
In the following, we consider the cancellation of measurement errors in detail.
Assume the expectation of observables of interest can be calculated from the projective measurement in basis |s_A⟩⊗|s_B⟩, where s_A and s_B are bit strings.
Then, the probabilities are
P(s_A, s_B) = ∑_i,j,k,l∑_α, β, μ, ν M_jβ,iα M_lν,kμ
× P(s_A|i_α,l_ν) P(s_B|k_μ,j_β),
where
P(s_A|i_α,l_ν) = P(s_A, i_α, l_ν)/∑_s_A P(s_A, i_α, l_ν),
P(s_B|k_μ,j_β) = P(s_B, k_μ,j_β)/∑_s_B P(s_B, k_μ,j_β),
where P(s_A, i_α, l_ν) and P(s_B, k_μ,j_β) can be counted from measurement data of experiments.
In measurement error mitigation, there is an assignment matrix Â, which relates the ideal and noisy outcome probabilities,
P_error(s) = ∑_s' A(s,s') P(s'),
where P and P_error are the ideal and noisy outcome probabilities.
This matrix can be estimated from calibration experiments.
The error is mitigated by
P(s) = ∑_s' A^-1(s,s') P_error(s').
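A minimal NumPy sketch of this correction step is shown below (the assignment matrix A is assumed to have been calibrated beforehand):

```python
import numpy as np

def mitigate(p_error: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Invert the calibrated assignment matrix to recover ideal outcome probabilities.

    p_error: noisy probability vector over outcomes; A[s, s'] = P(measured s | ideal s').
    """
    p = np.linalg.solve(A, p_error)              # P = A^{-1} P_error
    p = np.clip(p, 0, None)                      # clip small negative entries from sampling noise
    return p / p.sum()
```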
In this manner, with the noisy probabilities P_error(s_A, i_α, l_ν) and P_error(s_B, k_μ,j_β) from experiments, the ideal probabilities are calculated by
P(s_A|i_α,l_ν) = ∑_s_A', i_α', l_ν' A^-1(s_A, i_α, l_ν;s_A', i_α', l_ν') P_error(s_A', i_α', l_ν')/∑_s_A∑_s_A', i_α', l_ν' A^-1(s_A, i_α, l_ν;s_A', i_α', l_ν') P_error(s_A', i_α', l_ν'),
P(s_B|k_μ,j_β) = ∑_s_B', k_μ',j_β' A^-1(s_B, k_μ,j_β;s_B', k_μ',j_β') P_error(s_B', k_μ',j_β')/∑_s_B∑_s_B', k_μ',j_β' A^-1(s_B, k_μ,j_β;s_B', k_μ',j_β') P_error(s_B', k_μ',j_β').
§ CONCLUSION
In this paper, we demonstrate two methods for distributed quantum computation on separated quantum processors with local operations and classical channels.
The first, simpler method is based on the teleportation protocol with classically forged entanglement.
Per classically teleported nonlocal gate, it requires 4 auxiliary qubits and an overhead of a factor of 9.
However, the overhead of this construction may not be optimal.
We then construct another method that requires only 2 auxiliary qubits per gate, fewer than the first.
This method is based on identity operations forged by projective measurements.
The demonstrated construction also requires a ninefold overhead, which is, however, optimal.
We also show that the measurement error mitigation technique can cancel the measurement errors of the protocols in noisy cases.
Our results demonstrate methods to implement a nonlocal quantum circuit on two separate quantum processors with only local operations and classical channels, which extends the possibility of integrating quantum processors.
We expect that our methods will complement the toolbox of distributed quantum computation, and facilitate the extension of the scale of quantum computations.
This work was supported by the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301800) and the National Natural Science Foundation of China (Grants Nos. T2121001, 92265207, 12122504). We also acknowledge the support from the Synergetic Extreme Condition User Facility (SECUF) in Beijing, China.
§ REFERENCES
[1] B. M. Terhal, Quantum error correction for quantum memories, Rev. Mod. Phys. 87, 307 (2015).
[2] Z. Cai, R. Babbush, S. C. Benjamin, S. Endo, W. J. Huggins, Y. Li, J. R. McClean, and T. E. O'Brien, Quantum error mitigation, Rev. Mod. Phys. 95, 045005 (2023).
[3] Y. Zhao, Y. Ye, H.-L. Huang, Y. Zhang, D. Wu, H. Guan, Q. Zhu, Z. Wei, T. He, S. Cao, et al., Realization of an error-correcting surface code with superconducting qubits, Phys. Rev. Lett. 129, 030501 (2022).
[4] Google Quantum AI, Suppressing quantum errors by scaling a surface code logical qubit, Nature 614, 676 (2023).
[5] S. J. Evered, D. Bluvstein, M. Kalinowski, S. Ebadi, T. Manovitz, H. Zhou, S. H. Li, A. A. Geim, T. T. Wang, N. Maskara, et al., High-fidelity parallel entangling gates on a neutral-atom quantum computer, Nature 622, 268 (2023).
[6] K. Azuma, S. E. Economou, D. Elkouss, P. Hilaire, L. Jiang, H.-K. Lo, and I. Tzitrin, Quantum repeaters: From quantum networks to the quantum internet, Rev. Mod. Phys. 95, 045006 (2023).
[7] W. Luo, L. Cao, Y. Shi, L. Wan, H. Zhang, S. Li, G. Chen, Y. Li, S. Li, Y. Wang, et al., Recent progress in quantum photonic chips for quantum communication and internet, Light: Science & Applications 12, 175 (2023).
[8] K. Fang, J. Zhao, X. Li, Y. Li, and R. Duan, Quantum network: from theory to practice, Science China Information Sciences 66, 180509 (2023).
[9] P. Kurpiers, P. Magnard, T. Walter, B. Royer, M. Pechal, J. Heinsoo, Y. Salathé, A. Akin, S. Storz, J.-C. Besse, et al., Deterministic quantum state transfer and remote entanglement using microwave photons, Nature 558, 264 (2018).
[10] P. Campagne-Ibarcq, E. Zalys-Geller, A. Narla, S. Shankar, P. Reinhold, L. Burkhart, C. Axline, W. Pfaff, L. Frunzio, R. J. Schoelkopf, and M. H. Devoret, Deterministic remote entanglement of superconducting circuits through microwave two-photon transitions, Phys. Rev. Lett. 120, 200501 (2018).
[11] Y. Zhong, H.-S. Chang, K. Satzinger, M.-H. Chou, A. Bienfait, C. Conner, É. Dumur, J. Grebel, G. Peairs, R. Povey, et al., Violating Bell's inequality with remotely connected superconducting qubits, Nature Physics 15, 741 (2019).
[12] Y. Zhong, H.-S. Chang, A. Bienfait, É. Dumur, M.-H. Chou, C. R. Conner, J. Grebel, R. G. Povey, H. Yan, D. I. Schuster, et al., Deterministic multi-qubit entanglement in a quantum network, Nature 590, 571 (2021).
[13] J. Grebel, H. Yan, M.-H. Chou, G. Andersson, C. R. Conner, Y. J. Joshi, J. M. Miller, R. G. Povey, H. Qiao, X. Wu, and A. N. Cleland, Bidirectional multiphoton communication between remote superconducting nodes, Phys. Rev. Lett. 132, 047001 (2024).
[14] A. Bienfait, K. J. Satzinger, Y. Zhong, H.-S. Chang, M.-H. Chou, C. R. Conner, É. Dumur, J. Grebel, G. A. Peairs, R. G. Povey, et al., Phonon-mediated quantum state transfer and remote qubit entanglement, Science 364, 368 (2019).
[15] A. Zivari, N. Fiaschi, R. Burgwal, E. Verhagen, R. Stockill, and S. Gröblacher, On-chip distribution of quantum information using traveling phonons, Science Advances 8, eadd2811 (2022).
[16] C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels, Phys. Rev. Lett. 70, 1895 (1993).
[17] J. Eisert, K. Jacobs, P. Papadopoulos, and M. B. Plenio, Optimal local implementation of nonlocal quantum gates, Phys. Rev. A 62, 052317 (2000).
[18] X.-M. Hu, Y. Guo, B.-H. Liu, C.-F. Li, and G.-C. Guo, Progress in quantum teleportation, Nature Reviews Physics 5, 339 (2023).
[19] A. Eddins, M. Motta, T. P. Gujarati, S. Bravyi, A. Mezzacapo, C. Hadfield, and S. Sheldon, Doubling the size of quantum simulators by entanglement forging, PRX Quantum 3, 010309 (2022).
[20] P. de Schoulepnikoff, O. Kiss, S. Vallecorsa, G. Carleo, and M. Grossi, Hybrid ground-state quantum algorithms based on neural Schrödinger forging, Phys. Rev. Res. 6, 023021 (2024).
[21] P. Huembeli, G. Carleo, and A. Mezzacapo, Entanglement forging with generative neural network models, arXiv:2205.00933 (2022).
[22] A. Sanpera, R. Tarrach, and G. Vidal, Local description of quantum inseparability, Phys. Rev. A 58, 826 (1998).
[23] G. Vidal and R. Tarrach, Robustness of entanglement, Phys. Rev. A 59, 141 (1999).
[24] K. Temme, S. Bravyi, and J. M. Gambetta, Error mitigation for short-depth quantum circuits, Phys. Rev. Lett. 119, 180509 (2017).
[25] S. Endo, S. C. Benjamin, and Y. Li, Practical quantum error mitigation for near-future applications, Phys. Rev. X 8, 031027 (2018).
[26] Z. Cai, Multi-exponential error extrapolation and combining error mitigation techniques for NISQ applications, npj Quantum Information 7, 80 (2021).
[27] R. Takagi, Optimal resource cost for error mitigation, Phys. Rev. Res. 3, 033178 (2021).
[28] E. van den Berg, Z. K. Minev, A. Kandala, and K. Temme, Probabilistic error cancellation with sparse Pauli-Lindblad models on noisy quantum processors, Nature Physics (2023).
[29] T.-R. Jin, K. Xu, Y.-R. Zhang, and H. Fan, Noisy probabilistic error cancellation and generalized physical implementability, arXiv:2409.01000 (2024).
[30] J.-Y. Wu, K. Matsui, T. Forrer, A. Soeda, P. Andrés-Martínez, D. Mills, L. Henaut, and M. Murao, Entanglement-efficient bipartite-distributed quantum computing, Quantum 7, 1196 (2023).
[31] C. H. Bennett, H. J. Bernstein, S. Popescu, and B. Schumacher, Concentrating partial entanglement by local operations, Phys. Rev. A 53, 2046 (1996).
[32] C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Mixed-state entanglement and quantum error correction, Phys. Rev. A 54, 3824 (1996).
[33] V. Vedral, M. B. Plenio, M. A. Rippin, and P. L. Knight, Quantifying entanglement, Phys. Rev. Lett. 78, 2275 (1997).
[34] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2010).
[35] H. Barnum and A. Wilce, Local tomography and the Jordan structure of quantum theory, Foundations of Physics 44, 192 (2014).
[36] G. Chiribella and R. W. Spekkens, Quantum Theory: Informational Foundations and Foils (Springer, 2016).
[37] S. Massar and S. Popescu, Optimal extraction of information from finite quantum ensembles, Phys. Rev. Lett. 74, 1259 (1995).
[38] A. Jamiołkowski, Linear transformations which preserve trace and positive semidefiniteness of operators, Reports on Mathematical Physics 3, 275 (1972).
[39] M.-D. Choi, Completely positive linear maps on complex matrices, Linear Algebra and its Applications 10, 285 (1975).
[40] S. Bravyi, S. Sheldon, A. Kandala, D. C. McKay, and J. M. Gambetta, Mitigating measurement errors in multiqubit experiments, Phys. Rev. A 103, 042605 (2021).
Hypothesizing Missing Causal Variables with LLMs
Ivaxi Sheth, Sahar Abdelnabi, Mario Fritz
arXiv:2409.02604v1 [cs.LG, stat.ME], 4 September 2024
§ ABSTRACT
Scientific discovery is a catalyst for human intellectual advances, driven by the cycle of hypothesis generation, experimental design, data evaluation, and iterative assumption refinement. This process, while crucial, is expensive and heavily dependent on the domain knowledge of scientists to generate hypotheses and navigate the scientific cycle. Central to this is causality, the ability to establish the relationship between the cause and the effect. Motivated by the scientific discovery process, in this work, we formulate a novel task where the input is a partial causal graph with missing variables, and the output is a hypothesis about the missing variables to complete the partial graph. We design a benchmark with varying difficulty levels and knowledge assumptions about the causal graph. With the growing interest in using Large Language Models (LLMs) to assist in scientific discovery, we benchmark open-source and closed models on our testbed. We show the strong ability of LLMs to hypothesize the mediation variables between a cause and its effect. In contrast, they underperform in hypothesizing the cause and effect variables themselves. We also observe surprising results where some of the open-source models outperform the closed GPT-4 model[<https://github.com/ivaxi0s/hypothesizing-causal-variable-llm>].
§ INTRODUCTION
Scientific discovery has been key to humankind's advances. It is a dynamic process revolving around inquiry and constant refinements driven by new observations. Scientists adhere to a structured process that involves formulating a hypothesis and then collecting pertinent data <cit.>. They then draw inferences from experiments and the collected data, modify the hypothesis, formulate sub-questions, and repeat the process until the research question is answered <cit.>.
Causality empowers scientists to assess hypotheses and interpret the collected data beyond mere correlations and associations. Tools such as Randomised Controlled Trials (RCTs) <cit.> allow for establishing causal relationships between variables.
Naturally, the process of causal discovery heavily relies on human experts to guide the hypothesis formation and experimental design <cit.>. Expert domain knowledge is crucial to narrow the search space of hypotheses, especially when it is expensive to collect data or when systematic exploration is infeasible. However, a possible impediment is that domain knowledge can be difficult to formulate and collect <cit.>.
With the recent advancement of Large Language Models (LLMs) <cit.>, there has been a growing interest in using them for scientific discovery <cit.>. Beyond natural language processing, LLMs have shown exceptional performance in a wide array of tasks, such as reasoning problems <cit.>.
Their potential is now studied in domains such as
natural sciences <cit.>. Despite these promising results, LLMs have many well-known limitations, such as confabulation or hallucination, which would require human supervision when adopting them <cit.>. Previous work also proposed using LLMs as creative solution proposers with task-specific means of verifying said solutions <cit.>.
r0.45
< g r a p h i c s >
Scientific discovery is an iterative process to generate new hypotheses from initial assumptions relying on human expertise. We leverage LLMs as proxy domain experts to propose new hypotheses in causal DAGs.
Given the importance of causality in the scientific discovery process, we focus on how LLMs can assist with causal reasoning. LLMs have achieved state-of-the-art results for causal tasks such as determining pairwise causal relationships by considering variable names <cit.>, combined with causal discovery algorithms <cit.> for refinement.
This step, however, comes after hypothesizing the variables of interest (which require domain knowledge), forming experiments, and potentially costly data collection.
In our work, we extend the application of LLMs in causal reasoning to assist in steps essential before causal discovery. We harness LLMs to identify and hypothesize missing variables in a partially known causal graph, simulating a realistic scientific discovery process of incremental hypotheses formation and testing. Our approach is complementary to existing causal methods and taps into LLMs' capabilities induced by their large-scale training to propose memorized or inferred variables based on their general and even domain knowledge. We do not require LLMs to determine pairwise causal relations or perform numerical calculations, sidestepping their limitations in these tasks <cit.>.
In summary, our main contributions are:
* We introduce a new task of LLM-assisted causal variable identification and hypothesizing.
* We propose a benchmark for hypothesizing missing variables based on a diverse set of existing causal graph datasets.
* We design experimental tests with different difficulty levels and knowledge assumptions, such as open-world and closed-world settings and the number of missing variables, and gather insights on LLMs' capabilities and weaknesses.
* We benchmark several SoTA models and analyze their performance w.r.t. variable types.
§ RELATED WORK
Our work is based on the framework of causality as proposed by <cit.>. The intersection of language and causality is explored in <cit.> to extract causal relationships from a large corpus of text. With the advancements in LLMs and their ability to process large contexts, there has been an interest in using them for causal reasoning <cit.>. Some works have focused on commonsense causality <cit.> and temporal causal reasoning <cit.>. More recently <cit.> introduced a method to discover causal structures by prompting LLMs with variable names. <cit.> extended this work by introducing ancestral constraints to refine the causal structures derived from LLMs. <cit.> combined data-based deep structural causal models, such as <cit.>, with LLMs generated causal structure. Beyond using the ingested information for causal tasks, <cit.> focused on pure causal inference using LLMs. Recent work attempted to train causal transformers <cit.>, however, in this work we aimed to test the hypothesizing abilities of generalist LLMs. In contrast to previous work, we focus on the novel task of identifying and hypothesizing missing variables, a task that comes before data collection and evaluation, with LLMs as assistants.
Additionally, existing works tested inductive hypothesis generation with LLMs <cit.>, although, we look at causal hypothesis generation.
§ PRELIMINARIES: CAUSAL GRAPH
A causal relationship can be modeled via a Directed Acyclic Graph (DAG). A causal DAG represents relationships between a set of N variables defined by 𝐕 = { V_1,...,V_N }. The variables are encoded in a graph 𝒢 = (𝐕, 𝐄) where 𝐄 is a set of directed edges between the nodes ∈𝐕 such that no cycle is formed. Mathematically it can be expressed as:
𝒢 = (𝐕, 𝐄), 𝐄 = {e_i,j| v_i, v_j ∈𝐕 , i ≠ j } and v_i → v_j
Each edge e_i,j∈𝐄 denotes causal relationship between v_i and v_j, v_i v_j, emphasizing the influence from v_i to v_j. Beyond visualization, causal DAGs allow for the mathematical characterization of different node types for a causal model to understand the influences and dependencies.
We define 𝐝(v) as the degree of a node v, representing the total number of edges connected to v.
𝐝_𝐢𝐧(v) is the in-degree, representing the number of incoming edges to v.
𝐝_𝐨𝐮𝐭(v) is the out-degree, representing the number of outgoing edges from v.
Sources are variables v_s with no incoming edges, i.e., d_in(v_s) = 0.
Sinks are variables v_k with no outgoing edges, i.e., d_out(v_k) = 0.
Treatments are variables v_t, characterized as nodes with d_in(v_t) = 0, that are being intervened upon.
Outcomes are variables v_y, characterized as nodes with d_out(v_y) = 0, that are observed under interventions on the treatments.
Mediators are variables v_m that have both incoming and outgoing edges (d_in(v_m) > 0 and d_out(v_m) > 0), acting as intermediaries on the causal pathways between treatment and outcome. Hence v_m is a mediator if it is both a child of v_i and a parent of v_j.
Confounders are variables v_k that influence both treatment and outcome, exhibiting edges directed towards the treatment and outcome nodes (d_out(v_k) ≥ 2). Hence v_k is a confounder if it is a parent of both v_i and v_j.
Colliders are variables v_l at which two edges meet, with in-degree greater than one (d_in(v_l) > 1). Hence v_l is a collider if it is a child of both v_i and v_j.
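To make these degree-based categories concrete, the following small sketch (our illustration, not the paper's code) classifies the nodes of a toy graph with networkx; the variable names are hypothetical.

```python
import networkx as nx

# Toy graph with a confounder, a mediator, and a collider.
G = nx.DiGraph([("smoking", "cancer"), ("smoking", "tar"), ("tar", "cancer"),
                ("genetics", "smoking"), ("genetics", "cancer")])

sources     = [v for v in G if G.in_degree(v) == 0]                          # ['genetics']
sinks       = [v for v in G if G.out_degree(v) == 0]                         # ['cancer']
mediators   = [v for v in G if G.in_degree(v) > 0 and G.out_degree(v) > 0]   # ['smoking', 'tar']
colliders   = [v for v in G if G.in_degree(v) > 1]                           # ['cancer']
confounders = [v for v in G if G.out_degree(v) >= 2]                         # ['smoking', 'genetics']
```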
Mediation Analysis.
Mediation analysis quantifies the treatment's effect on the outcome through a mediator variable. This effect is decomposed into the Natural Direct Effect (NDE) and the Natural Indirect Effect (NIE). The NDE represents the treatment's effect on the outcome without mediation, while the NIE represents the effect mediated by the mediator variable. Further explanation can be found in Appendix <ref>.
NDE = 𝔼[ v_y(v_t = 1, v_m(0)) - v_y(v_t = 0, v_m(0)) ]
NIE = 𝔼[ v_y(v_t = 0, v_m(1)) - v_y(v_t = 0, v_m(0)) ]
where v_m(t) denotes the value the mediator attains when the treatment is set to t, and v_y(v_t, v_m) the corresponding outcome.
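As a sanity check of these definitions, the sketch below estimates the NDE and NIE by Monte Carlo in a toy linear structural causal model (our illustration; in this model the true values are b for the NDE and a·c for the NIE).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a, b, c = 1.5, 0.8, 2.0                 # effects T->M, T->Y (direct), M->Y
eps_m = rng.normal(size=n)
eps_y = rng.normal(size=n)

M = lambda t: a * t + eps_m             # mediator under treatment value t
Y = lambda t, m: b * t + c * m + eps_y  # outcome given treatment and mediator

NDE = np.mean(Y(1, M(0)) - Y(0, M(0)))  # equals b in this linear model
NIE = np.mean(Y(0, M(1)) - Y(0, M(0)))  # equals a * c
print(NDE, NIE)                         # 0.8, 3.0
```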
§ LLMS FOR IDENTIFYING AND HYPOTHESIZING CAUSAL VARIABLES
In this work, we aim to leverage language models to identify and hypothesize variables in a causal DAG. Motivated by the process of hypothesizing a causal graph from a partially known structure <cit.>, this paper proceeds under the assumption that some elements of the graph are already known. The aim is to find additional variables that can be incorporated into the existing causal structure to enhance the underlying causal mechanism.
We assume the presence of a partially known causal DAG, defined as 𝒢^* = (V^*, 𝐄), where V^* ⊆𝐕. The objective is to identify the set of missing variables V_missing = 𝐕∖V^*, thereby expanding 𝒢^* to 𝒢. This implies that
all causal relationships (edges) among variables in V^* are known and correctly represented in 𝒢^*; i.e., E is fully specified.
Our methodology, structured around progressively challenging scenarios, explores the ability of LLMs to identify and hypothesize causal variables. This starts from a restrictive and controlled exploration to an open-ended one. Initially, we restrict the exploration by providing the language models with a partially known causal DAG and a set of multiple choices for the missing variables. The complexity of the task is gradually increased by removing more than one node from the graph. Finally, we move to an open-ended scenario where the ground truth is not known. In this setting, LLM is required to hypothesize the missing variables of the causal DAG without any explicit hints.
We evaluate the causal reasoning capability of LLMs through prompting. Given LLMs' limitation to textual input, we represent the graph 𝒢^* using a prompt template P_LLM(·) which enables LLMs to parse the causal relationships embedded within the DAG.
§.§ Task 1: Out-of-Context Controlled Variable Identification
This task (depicted in <ref>) evaluates LLMs' ability to identify missing variables in a causal graph from a list of multiple choices, thereby reconstructing the original graph.
The partial DAG 𝒢^* is created by removing one variable from the original DAG 𝒢. Let us denote the removed node as v_x.
Along with the partial graphs, we operate in the multiple-choice question answering (MCQA) paradigm.
The role of the LLM is to select a variable from the multiple choices, MCQ_v_x, that can be used to complete the graph.
The multiple choices include the missing variable v_x and out-of-context distractors. The out-of-context distractors are carefully chosen to be irrelevant to the given DAG and its context. Let v_x^* represent the variable selected by the LLM to complete 𝒢^*.
v_x^* = P_LLM(𝒢^*, MCQ_v_x) ∀ v_x ∈V
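A hypothetical sketch of how a Task 1 instance could be constructed is shown below: a node is dropped from the ground-truth DAG and combined with the fixed out-of-context distractors described in the appendix. The toy graph and the helper function are our own illustration, not the exact implementation.

```python
import random
import networkx as nx

DISTRACTORS = ["weather", "book sales", "movie ratings"]   # out-of-context options

def make_task1_instance(G, missing, seed=0):
    """Drop `missing` from the DAG and build the multiple-choice list."""
    G_partial = G.copy()
    G_partial.remove_node(missing)
    options = DISTRACTORS + [missing]
    random.Random(seed).shuffle(options)
    return G_partial, options

G = nx.DiGraph([("pollution", "cancer"), ("smoking", "cancer"),
                ("cancer", "dyspnoea"), ("cancer", "xray")])
G_partial, options = make_task1_instance(G, "xray")
print(list(G_partial.edges()))   # the partial graph G* handed to the LLM
print(options)                   # ground truth mixed with distractors
```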
§.§ Task 2: In-Context Controlled Variable Identification
In practical applications, such as healthcare <cit.> and finance <cit.>, dealing with missing data and unobserved latent variables is a major challenge <cit.>. Therefore, identifying the missing variables and their underlying causal mechanism is an important task. To simulate this, a more challenging task is introduced (see <ref>). Here, instead of removing one node from the ground truth DAG 𝒢, two nodes, v_x_1 and v_x_2, are now removed to create the partial graph, 𝒢^*.
𝒢^* = 𝒢∖{v_x_1, v_x_2} for v_x_1, v_x_2∈𝐕
We use the MCQA paradigm to provide multiple choices that include the missing variables v_x_1 and v_x_2. The task for the LLM here is to select the correct variable v_x_1 only, given an in-context choice v_x_2 and out-of-context choices.
We introduce a non-parental constraint for v_x_1 and v_x_2, which prevents removing a parent node together with its immediate child node from 𝒢^*.
v_x_1^* = P_LLM(𝒢^*, MCQ_v_x_1, v_x_2) ∀ v_x_1, v_x_2∈𝐕 and v_x_1↛v_x_2, v_x_2↛v_x_1
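A small sketch of how candidate node pairs satisfying the non-parental constraint could be enumerated (illustrative only; the toy graph is ours):

```python
from itertools import combinations
import networkx as nx

G = nx.DiGraph([("smoking", "cancer"), ("pollution", "cancer"),
                ("cancer", "dyspnoea"), ("cancer", "xray")])

# Candidate pairs (v_x1, v_x2) to remove: neither node may be a direct
# parent or child of the other.
valid_pairs = [(u, v) for u, v in combinations(G.nodes, 2)
               if not G.has_edge(u, v) and not G.has_edge(v, u)]
print(valid_pairs)
```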
§.§ Task 3: Hypothesizing in Open World
So far, we have described the testbeds for variable identification in a partial DAG given the controlled world knowledge in the form of distractors. This assumption allows for the evaluation of the language model's ability to select the correct answer from a set of options. However, in the open-world setting, we increase the complexity to provide no choices, as shown in <ref>. Hence the task is to predict the missing node v_x given the partial graph 𝒢^* to complete the ground truth graph 𝒢. Here, the model returns a set of potential hypotheses, {v_x,1^*,..., v_x,k^*} where k is the number of hypotheses.
{v_x,1^*, v_x,2^*, ..., v_x,k^*} =P_LLM(𝒢^*) ∀ v_x∈𝐕
§.§ Task 4: Iteratively Hypothesizing in Open World
In addition to the search space relaxation, we further relax the number of missing variables. The partial DAG here, is obtained for one or more missing node variables. 𝒢^* = 𝒢∖{v_x_1... v_x_M}. The fine-grained results from the open-world setting reveal that language models exhibit a particularly strong performance in identifying mediator variables.
Thus, the LLM is used here to iteratively hypothesize mediator variables in a causal DAG given a treatment and an effect. The task (shown in <ref>) is set up as follows: given a partial graph 𝒢^*, which includes observed treatment and outcome variables, we aim to hypothesize a set of mediators, denoted as M = {v_m_1, v_m_2, ..., v_m_H}, that mediates the treatment v_t to the outcome v_y. Here, H represents the number of direct, and indirect mediators. A pair of treatments and outcomes are considered iteratively across the causal DAG. In the first iteration, the LLM generates a hypothesis for the mediator v_m_1. The hypothesized mediator, v_m_1 is then added to the graph, updating 𝒢^* →𝒢^* ∪{v_m_1}. The partial graph that now also includes v_m_1^* can be used to identify the second mediator v_m_2^* and so on. Therefore, in each subsequent iteration i, the LLM is tasked to generate a hypothesis for the next missing mediator v_m_i given the updated graph 𝒢^* ∪{v_m_1^*, ..., v_m_i-1^*}.
v_m_i^* = P_LLM(𝒢^* ∪{v_m_1^*, ..., v_m_i-1^*}) for i = 1, ..., H
The sequence of mediators M = {v_m_1, v_m_2, ..., v_m_H} is chosen at random.
But to formally study the influence of the order of the hypothesized mediator, we borrow concepts from the mediation analysis literature, specifically the Natural Direct Effect (NDE) and the Natural Indirect Effect (NIE). The NDE measures the effect of the treatment on the outcome that is not mediated by a particular mediator, while the NIE measures the effect of the treatment that is mediated by the mediator. We introduce a metric called Mediation Influence Score (MIS) that quantifies the influence of each mediator between a treatment and an effect. MIS is the ratio of the NIE to the NDE. This metric quantifies the relative importance of the indirect effect (through the mediator) compared to the direct effect. MIS (v_m_i) = NIE(v_m_i)/NDE(v_m_i) for i = 1, ..., H.
The mediators are then generated according to these MIS scores, prioritizing mediators with higher scores.
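The loop below is a schematic sketch of this iterative procedure; `ask_llm` stands in for a call to whichever chat model is queried and is assumed to return a ranked list of candidate variable names. The prompt wording and the way the accepted mediator is wired into the graph are our own simplification, not the exact implementation.

```python
def hypothesise_mediators(edges, treatment, outcome, n_mediators, ask_llm):
    """Iteratively grow the partial graph with one hypothesised mediator per step."""
    graph = list(edges)
    accepted = []
    for _ in range(n_mediators):
        prompt = ("Given the causal graph: "
                  + " ".join(f"< {u} > causes < {v} >." for u, v in graph)
                  + f" Suggest one variable that mediates the effect of"
                    f" < {treatment} > on < {outcome} >.")
        candidate = ask_llm(prompt)[0]        # keep the top-ranked suggestion
        accepted.append(candidate)
        # Insert the accepted mediator before asking for the next one.
        graph += [(treatment, candidate), (candidate, outcome)]
    return accepted
```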
§ EVALUATION AND RESULTS
§.§ Experimental setup
We evaluate a variety of causal datasets spanning diverse domains. We use the semi-synthetic datasets from BNLearn repository - Cancer:𝒢(5,4) <cit.>, Survey:𝒢(6,6) <cit.>, Asia:𝒢(8,8) <cit.>, Child:𝒢(20,25) <cit.>, Insurance:𝒢(27,52) <cit.>, and Alarm:𝒢(37,46) <cit.>. We also evaluate our approach on a realistic Alzheimer's Disease dataset:𝒢(9,16) <cit.>, developed by five domain experts.
We evaluate our setups across different open-source and closed models. The models we use are GPT-3.5 <cit.>, GPT-4 <cit.>, LLama2-chat-7b <cit.>, Mistral-7B-Instruct-v0.2 <cit.>, Mixtral-7B-Instruct-v0.1 <cit.>, Zephyr-7b-Beta <cit.> and Neural-chat-7b-v3-1 <cit.>.
§.§ Task 1: Out-of-Context Controlled Variable Identification
Our first experiment is designed to assess the most straightforward setting as a baseline to understand the fundamental abilities of language models in handling causal reasoning tasks given a partial causal graph.
Here, the input to the LLM is the ground truth variable name in addition to out-of-context multiple choices for the missing variable v_x and the partial DAG 𝒢^*. We then calculate the models' accuracy in correctly predicting v_x.
Accuracy = 1/N∑_i=1^N1(v_x^* = v_x^i)
Results.
In Figure <ref>, we report the accuracy of different LLMs in identifying the missing variable.
GPT-4, followed by Mixtral, consistently performs well, achieving perfect accuracy on most of the datasets. GPT-3.5 also shows overall strong performance, apart from the Insurance and Alarm datasets. The other models, including Mistral, Llama-70, and Zephyr, demonstrate varying degrees of success. Insurance is the most challenging dataset, which could potentially be due to the high number of edges present in the DAG. It is noteworthy that all models significantly outperform the random baseline.
However, we may conjecture that the high performance could be attributed to the simplicity of the task. The models might be primarily inferring from the context of the dataset domain, rather than performing actual causal reasoning among multiple plausible choices. To further investigate this, we introduce an in-domain choice in the multiple choices in the next experiment. This can assess LLMs' ability to choose a causal variable for a partial DAG beyond the highly evident correlations.
§.§ Task 2: In-Context Controlled Variable Identification
We introduce a more complex setting to further challenge the causal reasoning capabilities of the language models for causal identification. Recall that the partial graph consists of two missing nodes here. In addition to the out-of-context choices and the ground truth variable, the multiple choices also include the other missing node from the partial graph as an in-context distractor. Here the language model should essentially reason about indirect causal relationships.
To evaluate models' performance, we present two metrics: accuracy and false node accuracy. The false node accuracy, measuring the confusion of LLMs in picking the in-context variable instead of the ground truth choice, is defined as:
False Node Accuracy (FNA)↓ = 1/N∑_i=1^N1(v_x_1^* = v_x_2)
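Both metrics reduce to simple counts over the evaluation set; a minimal sketch with made-up predictions is:

```python
def accuracy_and_fna(preds, ground_truth, in_context_distractors):
    n = len(preds)
    acc = sum(p == g for p, g in zip(preds, ground_truth)) / n
    fna = sum(p == d for p, d in zip(preds, in_context_distractors)) / n
    return acc, fna

acc, fna = accuracy_and_fna(
    preds=["lung cancer", "bronchitis", "weather"],
    ground_truth=["lung cancer", "smoking", "dyspnoea"],
    in_context_distractors=["tuberculosis", "bronchitis", "x-ray"])
print(acc, fna)   # one correct pick, one in-context confusion
```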
Results. In Figure <ref>, we plot both Accuracy and False Node Accuracy across different datasets. Ideally, accuracy should be 1.0 and the FNA should be 0.0. Since there were 5 multiple choices, random chance is 0.2. We observe that most of the models achieve much higher accuracy than random chance on the larger datasets. GPT-3.5 and GPT-4 consistently perform well across all datasets, with high accuracy and low FNA. This suggests that these models are capable of reasoning causally by identifying the missing nodes in the causal graph and are less likely to be confused by the in-context node variable.
On the other hand, open-source models like Mistral, Zephyr, and Mixtral show varying performance across different datasets. For instance, Mistral performs well on the easy Cancer dataset but struggles with the more complex Alarm dataset.
§.§ Task 3: Hypothesizing in Open World
In a realistic scenario, a user or a scientist would provide the partial graph as input without multiple choices and expect the language model to complete the causal DAG. Hence in this test-bed, we aim to leverage LLMs to hypothesize the causal variables. The language model is prompted for k=5 suggestions for the missing node v_x.
We then compare the suggestions against the ground truth. We suspect that traditional semantic similarity metrics may not fully capture the performance of models, given that suggestions need to be evaluated within the context of the entire graph.
Hence, for a robust evaluation of this experiment, we use two metrics, semantic similarity, and LLM-as-Judge that incorporate contextual information.
Semantic Similarity: measures the cosine similarity between the embeddings (of another pretrained sentence embedding model) of each suggestion of the model's predictions, v_x_1:5^* and the ground truth v_x.
The distances of the most similar suggestion are averaged across all nodes v_x ∈V. For a detailed explanation of this process, please refer to Appendix <ref>.
LLM-as-Judge: This metric evaluates the quality of the model's suggestions using a two-step process inspired by <cit.>. In particular, LLM-as-Judge compares against ground truth variables to measure contextual semantic similarity beyond semi-exact matching like in semantic similarity metric. In the first step, the language model is prompted to determine which suggestion best fits the partial graph, given the ground truth and the suggestions, v_x_1:5^*. In the second step, the language model is again prompted to rate the selected suggestion on a scale of 1 to 10 in terms of similarity. This is repeated for all nodes, and the ratings are averaged to provide an overall quality measure. Implementation details can be found in <ref>.
Results.
We report model performances using both semantic similarity and LLM-as-judge metrics in Table <ref>. For brevity, we provided the variances in Appendix <ref>.
To further develop an intuition of LLMs' performance, we provide a detailed analysis of each metric across different types of node variables (defined in Section <ref>). We specifically look at sources, sinks, colliders, and mediators for each causal DAG. The results, fine-grained by node type, are given in Figure <ref> that shows each model's average performance across datasets. For a detailed performance per each dataset individually, see <ref>.
GPT-4 and Mistral generally achieve higher semantic similarity and LLM-as-Judge scores across most datasets (<ref>). GPT-3.5 also shows good average performance. We observe that semantic similarity is a stricter metric than LLM-as-judge since it cannot encode contextual information about the causal DAG (see example in <ref>). Despite different scales, semantic similarity and LLM-as-judge metrics both seem to be fairly correlated.
In Figure <ref>,
we observe that models display stronger performance for colliders and mediators on average. This suggests that these models are relatively proficient at reasoning about common causes and indirect causal relationships.
Sink nodes represent the effects in a causal graph, and the lower performance on these nodes indicates that the models find it challenging to reason about the potential outcomes of the causal graphs.
Source nodes represent the causes in a causal graph, and lower performance on these nodes might indicate difficulties in reasoning about the potential treatments from the partial graph. For datasets such as Survey and Alzheimers, the different LLMs struggle with sources and sink variables at varying levels.
In Figure <ref>, we observe that the model performance increases with k, i.e., with a higher number of suggestions. From Figure <ref>, it is also evident that the performance is proportional to the number of total edges, d_in + d_out (more context about the node). In summary, LLMs show impressive performance across some of the nodes and can be particularly useful to hypothesize mediators and colliders in a partial causal DAG. It is, hence, potentially beneficial to use LLMs in the real world because, in practice, treatment and outcomes are usually known.
§.§ Task 4: Iteratively Hypothesizing in Open World
In the previous open-world experiment, we observed that LLMs excel at identifying mediators when the treatments and outcomes are given. This observation could be particularly relevant in medical settings, where understanding the mediators can provide insights into causal mechanisms through which a treatment affects a patient's outcome.
For unordered mediator evaluation, we consider the random order of the mediators to hypothesize iteratively. The evaluation is similar to the open-world evaluation, where the final semantic similarity is averaged across all mediators. For ordered mediator evaluation according to MIS (v_m_i), we introduce a new metric Δ. We hypothesize that the order of a mediator realization may influence the model predictions for the subsequent mediators. In each step, the previously hypothesized mediator may influence the search space of the current step. Therefore, to evaluate this, we prompt the LLM in two different orders: in ascending and descending orders of significance, as indicated by the MIS score. Δ is defined as the rate of change of semantic similarity from prompting in descending to ascending order. This procedure is repeated across all nodes. Given that some datasets only contain a single mediator, we here selected the Asia, Child, Insurance, and Alarm datasets, as they offer a wider range of mediators, ranging from 1 to 10 for the Alarm dataset.
Results.
The results of this experiment are in <ref>. Results with variances are provided in Appendix <ref>. In a highly complex environment with more than one node missing and with open-world search space, we observe that LLMs can still maintain their performance.
Unlike the overall consistent performance of GPT-4 across all of the datasets from the open-world setting, the model showed superior performance in Insurance and Alarm datasets only. As the complexity of the dataset increases, we observe larger differences in hypothesizing the mediators according to the MIS order. Positive Δ values suggest that prompting the LLM based on the MIS metric leads to higher semantic similarity between the mediator hypotheses and the ground truth variables. In summary, we observe that LLMs can be highly effective in iteratively hypothesizing multiple mediators in a DAG, and if present,
some domain knowledge about the significance of the mediator can boost the performance.
§.§ Hypothesizing Confounder
Confounder hypothesizing performance (mean ± std) on subgraphs extracted from the Sachs, Alarm, and Insurance graphs:
Model    | Sachs       | Alarm       | Insurance
Zephyr   | 0.10 ± 0.01 | 0.45 ± 0.05 | 0.53 ± 0.06
Mixtral  | 0.95 ± 0.10 | 0.85 ± 0.09 | 0.63 ± 0.07
Neural   | 0.30 ± 0.03 | 0.45 ± 0.05 | 0.61 ± 0.06
LLama    | 0.20 ± 0.02 | 0.47 ± 0.05 | 0.63 ± 0.06
Mistral  | 0.20 ± 0.02 | 0.85 ± 0.09 | 0.61 ± 0.06
GPT-3.5  | 0.40 ± 0.04 | 0.49 ± 0.05 | 0.67 ± 0.07
GPT-4    | 0.95 ± 0.10 | 0.73 ± 0.07 | 0.78 ± 0.08
Evaluating Confounders.
In causal inference, backdoor paths are alternative causal pathways that confound the estimation of causal effects. They introduce bias when estimating causal effects if not appropriately addressed, so hypothesizing and controlling for confounders is an important task in causal inference. We extract confounder subgraphs from the Sachs <cit.>, Alarm, and Insurance graphs. From Table <ref>, with detailed results in Appendix <ref>, we observe that while some confounders were easily hypothesized by LLMs, the genomic domain of the Sachs dataset posed challenges for models with potentially less domain-specific knowledge. Similar to the mediator analysis, a large model (GPT-4) does not always perform best across all datasets. This highlights the need for a diverse set of benchmarks, like ours, to fully assess their performance. Given the importance of backdoor paths, we benchmark LLM performance for confounders in addition to colliders. LLMs typically perform well when hypothesizing a collider; the results for confounders, however, are mixed.
§ CONCLUSION
Most causality literature assumes that the necessary data has been collected, and it aims to answer the question of how to establish causal relationships between variables. Generating hypotheses regarding which variables to observe is, however, mostly done by human experts. LLMs, having been trained on large-scale datasets, can be harnessed to act as expert proxies. We establish the novel task of generating hypotheses about missing variables in a causal graph. We formalize the problem by instantiating test procedures that vary in difficulty and knowledge level about the ground truth causal graph. We benchmark various models in identifying missing variables from a list of in-context and out-of-context distractors and hypothesizing missing variables in an open-world setting. We further evaluate an iterative setup to populate a graph with up to 10 missing mediator nodes. Our work shows that LLMs can be a promising tool to generate hypotheses, especially for mediators, which in practice are less known apriori than treatments and outcomes.
§ LIMITATIONS AND FUTURE WORK
Given the non-disclosed datasets of models, it is difficult to confirm with absolute certainty that the datasets are not ingested by models during training. However, one of the datasets we used was released recently <cit.> after the announced cut-off date of models. Additionally, our task itself is novel, including the way we verbalize the graphs and prompt the models. Such textual descriptions are not part of the original datasets. Finally, we did not observe verbatim reconstruction of graphs that would have suggested that they are memorized.
We envision our setup as a human-LLM collaboration under expert supervision. We do not automatically find the most plausible answer out of all models' suggestions. Our metrics compare against the ground truth for evaluation. Also, LLMs show limitations in expressing confidence about their responses <cit.>. Future work could thus investigate other mechanisms to filter out suggestions, in addition to improving models' performance on cause and effect (i.e., sources and sinks) nodes. Another promising direction is using retrieval-augmented models <cit.>.
§ ACKNOWLEDGEMENTS
This work was partially funded by ELSA – European Lighthouse on Secure and Safe AI funded by the European Union under grant agreement No. 101070617 and the German Federal Ministry of Education and Research (BMBF) under the grant AIgenCY (16KIS2012). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or European Commission. Neither the European Union nor the European Commission can be held responsible for them.
§ IMPLEMENTATION
§.§ Datasets
We use 7 real-world based datasets. These datasets span across different domain knowledge topics. These datasets have ground truth graphs along with their observational data. The simplest dataset used is the cancer dataset with 4 edges and 5 node variables. In addition to the semi-synthetic datasets from the BNLearn library, we also evaluate our approach on a realistic Alzheimer's Disease dataset <cit.>, which was developed by five domain experts. Given that each expert created a different causal graph, the final causal DAG comprises only those edges that were agreed upon by consensus.
§.§ Reproducibility
For reproducibility, we used temperature 0 and top-p value as 1 across all of the models. We also mentioned the snapshot of the model used. We have also included the prompts and examples below. Our code can be anonymously found here - <https://github.com/ivaxi0s/hypothesizing-causal-variable-llm.git>. The datasets are under CC BY-SA 3.0 which allows us to freely modify the datasets for benchmarking. Our benchmark will be released under the CC BY-SA License.
GPT-3.5 and GPT-4 were accessed via API. The remaining models were run on a single A100 GPU. Since we used off-the-shelf LLMs, no training was performed. Because many of the models were accessed via API, it is difficult to calculate the total compute; however, all of the experiments for each model took ≈ 6 hours.
§.§ Controlled Variable Identification
For variable identification, we generate multiple choices that remain consistent across all missing nodes and all of the datasets. The words were randomly chosen to be far enough from the nodes. The options chosen were weather, book sales, and movie ratings. We wanted to make sure that the options were not from one specific domain such that the LLM could do the process of elimination.
§.§ Semantic Similarity
Given the task of hypothesizing missing nodes in a partial graph 𝒢^* in the absence of multiple-choices, we evaluate the semantic similarity between the model's predictions and the ground truth node variable. We leverage an open model namely 'all-mpnet-base-v2' to transform the textual representations of the model's predictions and the ground truth into high-dimensional vector space embeddings. Post transforming textual representations into embeddings and normalizing them, we calculate the cosine similarity. Scores closer to 1 indicate a high semantic similarity, suggesting the model's predictions align well with the ground truth. This metric gives a score of similarity without the contextual knowledge of the causal graph. We perform our experiments to consider every node of the ground truth as a missing node iteratively. For all the suggestions for a node variable, we calculate the semantic similarity. The average similarity reported is the highest semantic similarity for each of the variable suggestions.
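A minimal sketch of this computation with the sentence-transformers library, assuming the 'all-mpnet-base-v2' encoder named above and taking the best of the k suggestions (the example strings are hypothetical):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")

def best_similarity(suggestions, ground_truth):
    emb_s = model.encode(suggestions, normalize_embeddings=True)
    emb_g = model.encode([ground_truth], normalize_embeddings=True)
    return float(util.cos_sim(emb_s, emb_g).max())

print(best_similarity(["air pollution", "tobacco use", "exercise"], "smoking"))
```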
§.§ LLM-as-Judge
To capture the domain knowledge of the expert that selects the most relevant causal variable, we use LLM-as-Judge as a proxy expert. This also allows for evaluation based on contextual DAG knowledge as well. Given the impressive results of GPT-4 in <cit.>, we use GPT-4 as a judge for all of the experiments.
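The two-step judging procedure can be sketched as follows; `judge` is a placeholder for a call to the judge model (GPT-4 in our case), the prompt wording is our paraphrase rather than the exact prompts used, and the final reply is assumed to be a bare integer.

```python
def llm_judge_score(context, partial_graph, ground_truth, suggestions, judge):
    pick_prompt = (f"Context: {context}\nPartial causal graph: {partial_graph}\n"
                   f"Ground-truth missing variable: {ground_truth}\n"
                   f"Candidate suggestions: {', '.join(suggestions)}\n"
                   "Which single suggestion best fits the missing variable?")
    best = judge(pick_prompt)                       # step 1: pick the best suggestion
    rate_prompt = (f"On a scale of 1 to 10, how similar is '{best}' to the "
                   f"ground-truth variable '{ground_truth}' in this context? "
                   "Reply with a single integer.")
    return best, int(judge(rate_prompt))            # step 2: rate its similarity
```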
Shortcomings of LLM-as-judge.
LLM-as-judge uses GPT-4 as a judge model which could be biased towards some data. Since the training datasets are not public for this model, it would be hard to judge how these biases might affect the final score. Hence for robust evaluation we also evaluate using the semantic similarity.
§.§ Iteratively Hypothesizing in Open World
For each order, the algorithm prompts the LLM to generate mediator suggestions, selects the suggestion with the highest semantic similarity to the context, and iteratively updates the partial graph with these mediators. Δ, quantifies the impact of mediator ordering by comparing the average highest semantic similarity scores obtained from both descending and ascending orders. This methodical evaluation sheds light on how the sequence in which mediators are considered might affect the LLM's ability to generate contextually relevant and accurate predictions.
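One plausible reading of Δ, as a relative change in mean best-match similarity between the two prompting orders, is sketched below; the exact normalisation is an assumption on our part, and the similarity values are made up.

```python
def delta(sim_descending, sim_ascending):
    """Relative change in mean best-match similarity between the two orders."""
    mean_desc = sum(sim_descending) / len(sim_descending)
    mean_asc = sum(sim_ascending) / len(sim_ascending)
    return (mean_desc - mean_asc) / mean_asc

print(delta([0.71, 0.64, 0.58], [0.66, 0.60, 0.55]))   # positive: MIS order helps
```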
§ CONFOUNDERS
§ FURTHER RESULTS
§.§ Variances
For brevity, we did not include variances in the main text; the following results report them:
§.§ Analysis of difference across tasks
Since the metrics are different to evaluate each task, it is not meaningful or straightforward to compare the raw results. It must also be noted that the tasks are not linear. To address this, we rank the model performances across all models and datasets and present these rankings in Figure <ref>. This allows us to compare the relative performance of the models across different tasks.
As we observe from the graph, GPT-4 model shows consistently top performances in Tasks 1-3, however, it has one of the lowest performances for Task 4. GPT-3.5 shows a strong performance in Task 2 and 4, ranking 2nd, but drops in Tasks 1 and 3. We observe that Zephyr, Neural and Mistral show consistently average performances. These observations motivate the significance of the tasks proposed in our benchmark. They highlight the variability in model performance across different tasks and emphasize the need for comprehensive and diverse benchmarks to fully assess the capabilities of these models.
§.§ Breaking down the performance
§.§ Fine grained model performance
§.§ Effect of context
We observed notable differences in the accuracy of LLM predictions for missing nodes within causal graphs when context was provided versus when it was absent. Specifically, the inclusion of contextual information about the causal graph significantly enhanced the LMs' ability to generate accurate and relevant predictions. In realistic settings, when this setup is being used by a scientist, they would provide the context of the task along with the partial graph. When context was not provided, the models often struggled to identify the most appropriate variables, leading to a decrease in prediction accuracy, especially for smaller models. Unsurprisingly, providing context was more important for smaller graphs than larger graphs. LLMs were able to understand the context of the graph via multiple other nodes in the graph for larger graphs.
§.§ Using explanations
While using LLMs to hypothesize the missing nodes within the causal graph in the open-world setting, we introduced an additional instruction prompting the model to provide explanations for each of its predictions. This was motivated by the possibility that incorporating a rationale behind each prediction might enhance the model's semantic similarity. We present the results in the table below: We observe that evaluating semantic similarity with explanations leads to a decrease in performance as compared to the earlier setting where the language model returned phrases. This is because semantic similarity, as a metric, evaluates the closeness of the model's predictions to the ground truth in a high-dimensional vector space, focusing on the semantic content encapsulated within the embeddings. It is a metric that leaves little room for interpretative flexibility, focusing strictly on the degree of semantic congruence between the predicted and actual variables. The introduction of explanations, while enriching the model's outputs with contextual insights, did not translate into improved semantic alignment with the ground truth.
Ambiguous predictions which semantically represent the same variable.
An important linguistic concern that could be missed by semantic similarity is an ambiguous hypothesis from the LLM that has the same meaning as the ground truth but a different surface form, which again breaks the semantic similarity metric. This further motivates the LLM-judge metric, whose input is the context of the causal graph, the partial causal graph, the ground truth variable, and the model predictions. Given the rich context available to the LLM-judge metric, we suspect it would be able to overcome this ambiguity. We prompted the model to justify its hypothesized variables using explanations. We observe that evaluating semantic similarity with explanations leads to a decrease in performance compared to the earlier setting where the language model returned just phrases. In Table <ref> we observed a drop in performance for semantic similarity. In contrast, we observe a similar or slightly improved LLM-judge score when the explanation of the model hypothesis is given.
§.§ Chain of thought
In recent times, Chain-of-Thought (CoT) prompting has gained popularity due to its impressive performance in improving the quality of LLMs' output <cit.>, including in metadata-based causal reasoning <cit.>. We also incorporated CoT prompting in our prompts and perform ablation studies, reported in the table below. We observe that CoT particularly improves the performance on the identification experiments.
§.§ Iterative mediator search vs all at once
For Task 4, we iteratively hypothesize the missing variables (mediators). Our choice was primarily driven by the complexity of Task 4, which involves predicting multiple missing mediators, ranging from 1 to 10. For a Task with 10 missing mediators, the model would have to predict 50 suggestions at once. We initially hypothesized that LLMs might struggle with making multiple predictions across different variables simultaneously. This was indeed reflected in our results and GPT-4 outputs from Table X. The iterative approach allows the model's prediction to narrow the search space, which would not be possible in a non-iterative approach. This method is more aligned with the scientific discovery process, where hypotheses are often refined iteratively based on new findings.
Furthermore, our approach simulates a human-in-the-loop scenario, where the most plausible answer is selected and used to guide the next prediction.
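To make the control flow concrete, the following minimal R sketch mocks the iterative loop. The helpers query_llm() and select_best(), and the handling of the "< x >" placeholder in the verbalised graph, are hypothetical stand-ins introduced purely for illustration; they are not part of our pipeline or of any real API.

query_llm   <- function(prompt, n = 5) paste("candidate", seq_len(n))  # stub for the real LLM call
select_best <- function(suggestions) suggestions[1]                    # stand-in for the human-in-the-loop choice

hypothesize_mediators <- function(context, graph_text, n_missing, k = 5) {
  accepted <- character(0)
  for (i in seq_len(n_missing)) {
    prompt <- paste(context, graph_text,
                    sprintf("Give %d suggestions for the next missing mediator.", k))
    best   <- select_best(query_llm(prompt, n = k))
    # condition the next round on the accepted answer by writing it into the verbalised graph
    graph_text <- sub("< x >", paste0("< ", best, " >"), graph_text, fixed = TRUE)
    accepted   <- c(accepted, best)
  }
  accepted
}

# example call with a toy verbalised graph
hypothesize_mediators("ICU monitoring.", "< x > causes < expelled co2 >.", n_missing = 2)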
§ FINETUNING
In this work, we aim to assess the LLMs' causal reasoning via prompting. The following are the reasons why fine-tuning is not the most practical solution:
* Pretrained models come with a wealth of general knowledge, which we aim to leverage. Fine-tuning these models could potentially limit their ability to draw on this broad knowledge base. We aim to understand the utility of pretrained models, as fine-tuning large models like GPT-4 is not always feasible.
* The training dataset is too small for fine-tuning. Even for a large graph with 52 edges, such as Insurance, we would have just 27 datapoints, and Alarm yields only 37 datapoints. Additionally:
* Using the same graph as part of train and test would unfortunately lead to training data leakage.
* If we consider different graphs for train and test, there would exist a domain shift in the two graphs and the model may be overfitted to the domain of the train graph.
However, to illustrate this point and alleviate the concern, we performed supervised fine-tuning using QLoRA on the Mistral-7b-Instruct model for the hypothesizing task in the open-world setting.
The training set consists of all of the graphs except the respective graph being tested on; we tested on the Survey, Insurance, and Alzheimer's graphs. The model was trained to give one best-fit suggestion for the missing variable.
The above results show that fine-tuning does not significantly improve over the prompting results. This is because, during training, the LLM becomes biased towards the domains of the training datasets, which are contextually distant from the test domain given the diversity of the datasets chosen. One might think that training helps the LLM understand the task, but the prompt-based model outputs already make clear that the LLM can follow the instructions. In summary, we were able to extract the LLM's knowledge via prompting; domain-specific fine-tuning could be examined more closely in future work.
§ CAUSAL GRAPHS
§ PROMPT TEMPLATE
Hello. You will be given a causal graph. The context of the graph [CONTEXT]. Please understand the causal relationships between the variables - [VERBALISED DAG].
Base prompt to describe the causal graph
Hello. You will be given a causal graph. The context of the graph is hypothetical patient monitoring system in an intensive care unit (ICU). Please understand the causal relationships between the variables - < anaphylaxis > causes < total peripheral resistance >. < arterial co2 > causes < expelled co2 >. < arterial co2 > causes < catecholamine >. < catecholamine > causes < heart rate >. < cardiac output > causes < blood pressure >. < disconnection > causes < breathing tube >. < error cauter > causes < heart rate displayed on ekg monitor >. < error cauter > causes < oxygen saturation >. < error low output > causes < heart rate blood pressure >. < high concentration of oxygen in the gas mixture > causes < pulmonary artery oxygen saturation >. < heart rate > causes < heart rate blood pressure >. < heart rate > causes < heart rate displayed on ekg monitor >. < heart rate > causes < oxygen saturation >. < heart rate > causes < cardiac output >. < hypovolemia > causes < left ventricular end-diastolic volume >. < hypovolemia > causes < stroke volume >. < insufficient anesthesia > causes < catecholamine >. < intubation > causes < lung ventilation >. < intubation > causes < minute volume >. < intubation > causes < alveolar ventilation >. < intubation > causes < shunt - normal and high >. < intubation > causes < breathing pressure >. < kinked chest tube > causes < lung ventilation >. < kinked chest tube > causes < breathing pressure >. < left ventricular end-diastolic volume > causes < central venous pressure >. < left ventricular end-diastolic volume > causes < pulmonary capillary wedge pressure >. < left ventricular failure > causes < previous medical history >. < left ventricular failure > causes < left ventricular end-diastolic volume >. < left ventricular failure > causes < stroke volume >. < the amount of time using a breathing machine > causes < the intensity level of a breathing machine >. < sudden blockage in the pulmonary arteries > causes < shunt - normal and high >. < sudden blockage in the pulmonary arteries > causes < pulmonary artery pressure >. < pulmonary artery oxygen saturation > causes < oxygen saturation >. < oxygen saturation > causes < catecholamine >. < shunt - normal and high > causes < oxygen saturation >. < stroke volume > causes < cardiac output >. < total peripheral resistance > causes < catecholamine >. < total peripheral resistance > causes < blood pressure >. < alveolar ventilation > causes < arterial co2 >. < alveolar ventilation > causes < pulmonary artery oxygen saturation >. < lung ventilation > causes < expelled co2 >. < lung ventilation > causes < minute volume >. < lung ventilation > causes < alveolar ventilation >. < the intensity level of a breathing machine > causes < breathing tube >. < breathing tube > causes < lung ventilation >. < breathing tube > causes < breathing pressure >.
An example of the base prompt for the Alarm dataset. Each relationship is enclosed in angle brackets, < >, and followed by a full stop.
Prompt: Hello. You will be given a causal graph. The context of the graph is different aspects of a patient's health related to lung diseases. Please understand the causal relationships between the variables - < x > causes < tuberculosis >. < bronchitis > causes < dyspnoea-laboured breathing >. < individual has either tuberculosis or lung cancer > causes < positive xray >. < individual has either tuberculosis or lung cancer > causes < dyspnoea-laboured breathing >. < lung cancer > causes < individual has either tuberculosis or lung cancer >. < smoking cigarettes > causes < lung cancer >. < smoking cigarettes > causes < bronchitis >. < tuberculosis > causes < individual has either tuberculosis or lung cancer >.Based on your understanding and analytical skills, please identify what the variable X could most likely be from the given multiple choices. It's important to note that you MUST select an option from the provided choices, as one of them is the correct answer. The choices are: [ visited Asia; weather; book sales; movie rating ]. Please think step by step. Your answer should be formatted as follows - Answer: X = choice. The choice must be one of the provided options, and it should be formatted exactly as it appears in the choices. Generating your own options is not allowed in this task.
Answer: X = visited Asia.My thought process is as follows:The causal graph suggests that X causes tuberculosis, and tuberculosis is related to lung diseases. Visiting Asia is a possible explanation for why an individual may have been exposed to tuberculosis, as the disease is more prevalent in some Asian countries. Therefore, visited Asia is the most likely option for X.
Out-of-context controlled variable identification, Ground truth variable: visited Asia
Prompt: Hello. You will be given a causal graph. The context of the graph is different aspects of a patient's health related to lung diseases Please understand the causal relationships between the variables - < x > causes < tuberculosis >. < bronchitis > causes < dyspnoea-laboured breathing >. < individual has either tuberculosis or lung cancer > causes < y >. < individual has either tuberculosis or lung cancer > causes < dyspnoea-laboured breathing >. < lung cancer > causes < individual has either tuberculosis or lung cancer >. < smoking cigarettes > causes < lung cancer >. < smoking cigarettes > causes < bronchitis >. < tuberculosis > causes < individual has either tuberculosis or lung cancer >. Based on your understanding and analytical skills, please identify what the variable X could most likely be from the given multiple choices. It's important to note that you MUST select an option from the provided choices, as one of them is the correct answer. The choices are: [ weather; book sales; movie rating; positive xray; visited Asia ]. Please think step by step. Your answer should be formatted as follows - Answer: X = choice. The choice must be one of the provided options, and it should be formatted exactly as it appears in the choices. Generating your own options is not allowed in this task.
Answer: Answer: X = visited Asia. My thought process is as follows:1. The graph shows that X causes tuberculosis, which suggests that X is a variable related to healthcare or medicine. 2. The graph also shows that bronchitis causes dyspnoea-laboured breathing, which further supports the idea that X is a variable related to healthcare or medicine. 3. The graph shows that individual has either tuberculosis or lung cancer causes y, which suggests that X could be a variable related to diagnosing or treating lung diseases. 4. The graph shows that smoking cigarettes causes lung cancer and bronchitis, which suggests that X could be a variable related to smoking or lung health. 5. Based on the above observations, it seems unlikely that X would be weather, book sales, or movie rating.6. Visited Asia is a variable that could potentially relate to healthcare or medicine, particularly if the patient has traveled to a region with a high prevalence of tuberculosis or other lung diseases.7. Therefore, based on the information provided in the graph, it seems most likely that X = visited Asia.
In-context controlled variable identification, Ground truth variable: visited asia
Prompt: Hello. You will be given a causal graph. The context of the graph is different aspects of a patient's health related to lung diseases. Please understand the causal relationships between the variables - < x > causes < tuberculosis >. < bronchitis > causes < dyspnoea-laboured breathing >. < individual has either tuberculosis or lung cancer > causes < positive xray >. < individual has either tuberculosis or lung cancer > causes < dyspnoea-laboured breathing >. < lung cancer > causes < individual has either tuberculosis or lung cancer >. < smoking cigarettes > causes < lung cancer >. < smoking cigarettes > causes < bronchitis >. < tuberculosis > causes < individual has either tuberculosis or lung cancer >. Using your causal knowledge, can you suggest what variable X most likely? Give 5 suggestions for what variable X could be. Give reason for your suggestions. Finally, return your answer (without reasoning) in the following format: <Answer> [first suggestion, second suggestion, third suggestion .. and so on] </Answer> No explanations, no reasoning, just answers as a list as described earlier. For example, if the suggestions are A,B,C,D,E then - <Answer> [A, B, C, D, E] </Answer>
Answer: <Answer> [HIV/AIDS, Chronic respiratory conditions, Pneumonia, Exposure to asbestos, Silicosis] </Answer>
Reasons:
1. HIV/AIDS: It is known to weaken the immune system, making the individual more susceptible to infections like tuberculosis.
2. Chronic respiratory conditions: These can make an individual more susceptible to lung diseases like tuberculosis.
3. Pneumonia: It is a lung infection that can lead to serious conditions like tuberculosis if not treated properly.
4. Exposure to asbestos: Long-term exposure to asbestos can lead to lung diseases including lung cancer and tuberculosis.
5. Silicosis: It is a lung disease caused by inhalation of silica dust, leading to inflammation and scarring of the lung tissue, making the individual more susceptible to tuberculosis.
Hypothesizing missing variable in open world, Ground truth variable: Visited Asia
Prompt: Hello. You will be given a causal graph. The context of the graph is different aspects of a patient's health related to lung diseases. Please understand the causal relationships between the variables - < visited asia > causes < tuberculosis >. < bronchitis > causes < dyspnoea-laboured breathing >. < individual has either tuberculosis or lung cancer > causes < positive xray >. < individual has either tuberculosis or lung cancer > causes < dyspnoea-laboured breathing >. < x > causes < individual has either tuberculosis or lung cancer >. < smoking cigarettes > causes < x >. < smoking cigarettes > causes < bronchitis >. < tuberculosis > causes < individual has either tuberculosis or lung cancer >. Using your causal knowledge, can you suggest what variable X most likely? Give 5 suggestions for what variable X could be. Give reason for your suggestions. Finally, return your answer (without reasoning) in the following format: <Answer> [first suggestion, second suggestion, third suggestion .. and so on] </Answer> No explanations, no reasoning, just answers as a list as described earlier. For example, if the suggestions are A,B,C,D,E then - <Answer> [A, B, C, D, E] </Answer>
Answer: <Answer> [Lung Cancer, Chronic Obstructive Pulmonary Disease, Emphysema, Pneumonia, Asthma] </Answer>
Reasons:
1. Lung Cancer: Smoking is a major cause of lung cancer and lung cancer can lead to tuberculosis or other lung diseases.
2. Chronic Obstructive Pulmonary Disease: Smoking is a leading cause of COPD and it can lead to other lung diseases including tuberculosis and lung cancer.
3. Emphysema: It is a type of COPD often caused by smoking and can lead to other lung diseases.
4. Pneumonia: Smoking increases the risk of pneumonia and pneumonia can lead to other lung diseases.
5. Asthma: Smoking can trigger asthma attacks and severe asthma can lead to other lung diseases.
Hypothesizing missing variable in open world, Ground truth variable: Lung cancer
Prompt:
Hello. You will be given a causal graph. The context of the graph is hypothetical patient monitoring system in an intensive care unit (ICU) Please understand the causal relationships between the variables - < anaphylaxis > causes < total peripheral resistance >. < x > causes < expelled co2 >. < x > causes < x3 >. < x3 > causes < x2 >. < x1 > causes < blood pressure >. < disconnection > causes < x7 >. < error cauter > causes < heart rate displayed on ekg monitor >. < error cauter > causes < oxygen saturation >. < error low output > causes < heart rate blood pressure >. < high concentration of oxygen in the gas mixture > causes < x9 >. < x2 > causes < heart rate blood pressure >. < x2 > causes < heart rate displayed on ekg monitor >. < x2 > causes < oxygen saturation >. < x2 > causes < x1 >. < hypovolemia > causes < left ventricular end-diastolic volume >. < hypovolemia > causes < stroke volume >. < insufficient anesthesia > causes < x3 >. < intubation > causes < x5 >. < intubation > causes < minute volume >. < intubation > causes < x4 >. < intubation > causes < shunt - normal and high >. < intubation > causes < breathing pressure >. < kinked chest tube > causes < x5 >. < kinked chest tube > causes < breathing pressure >. < left ventricular end-diastolic volume > causes < central venous pressure >. < left ventricular end-diastolic volume > causes < pulmonary capillary wedge pressure >. < left ventricular failure > causes < previous medical history >. < left ventricular failure > causes < left ventricular end-diastolic volume >. < left ventricular failure > causes < stroke volume >. < the amount of time using a breathing machine > causes < x6 >. < sudden blockage in the pulmonary arteries > causes < shunt - normal and high >. < sudden blockage in the pulmonary arteries > causes < pulmonary artery pressure >. < x9 > causes < x8 >. < x8 > causes < x3 >. < shunt - normal and high > causes < x8 >. < stroke volume > causes < x1 >. < total peripheral resistance > causes < x3 >. < total peripheral resistance > causes < blood pressure >. < x4 > causes < x >. < x4 > causes < x9 >. < x5 > causes < expelled co2 >. < x5 > causes < minute volume >. < x5 > causes < x4 >. < x6 > causes < x7 >. < x7 > causes < x5 >. < x7 > causes < breathing pressure >. Using your causal knowledge, can you suggest what variable X most likely? Give 5 suggestions for what variable X could be. Give reason for your suggestions. Finally, return your answer (without reasoning) in the following format: <Answer> [first suggestion, second suggestion, third suggestion .. and so on] </Answer> No explanations, no reasoning, just answers as a list as described earlier. For example, if the suggestions are A,B,C,D,E then - <Answer> [A, B, C, D, E] </Answer>
Answer: <Answer> [ventilation rate, respiratory effort, airway resistance, lung compliance, alveolar gas exchange] </Answer>
Reasons for the answer:
1. **Ventilation Rate**: The variable X causes both expelled CO2 and another variable (X3), which indirectly influences heart rate, blood pressure, and oxygen saturation. The rate of ventilation directly affects the amount of CO2 expelled from the body, making it a plausible candidate for X.
2. **Respiratory Effort**: This suggestion is based on the relationship between X and expelled CO2, as well as the influence of X on another variable (X3) that affects cardiovascular and respiratory parameters. Respiratory effort, which encompasses the work of breathing, could affect CO2 expulsion and have downstream effects on the cardiovascular system.
3. **Airway Resistance**: Airway resistance could be a candidate for X because it directly influences the amount of CO2 that can be expelled (a function of airflow) and could also affect other physiological variables through its impact on ventilation efficiency.
4. **Lung Compliance**: Lung compliance, or the ease with which the lungs can expand, directly affects ventilation and, consequently, the expulsion of CO2. It could also indirectly influence other variables in the graph through its impact on gas exchange efficiency.
5. **Alveolar Gas Exchange**: This process directly determines the amount of CO2 expelled and could influence the variable X3, affecting cardiovascular parameters. Efficient gas exchange is crucial for maintaining appropriate oxygen and CO2 levels in the blood, impacting heart rate, blood pressure, and oxygen saturation.
These suggestions are not explicitly mentioned in the input graph but are closely related to the physiological processes and outcomes described. They offer plausible explanations for the causal relationships involving the variable X, considering the context of patient monitoring in an ICU setting.
Hypothesizing missing variable in open world #1 Ground truth variable: arterial CO2
Prompt:
Strictly follow the format mentioned otherwise you will be disqualified.', 'ello. You will be given a causal graph. The context of the graph is hypothetical patient monitoring system in an intensive care unit (ICU) Please understand the causal relationships between the variables - < anaphylaxis > causes < total peripheral resistance >. < Alveolar Gas Exchange > causes < expelled co2 >. < Alveolar Gas Exchange > causes < x2 >. < x2 > causes < x1 >. < x > causes < blood pressure >. < disconnection > causes < x6 >. < error cauter > causes < heart rate displayed on ekg monitor >. < error cauter > causes < oxygen saturation >. < error low output > causes < heart rate blood pressure >. < high concentration of oxygen in the gas mixture > causes < x8 >. < x1 > causes < heart rate blood pressure >. < x1 > causes < heart rate displayed on ekg monitor >. < x1 > causes < oxygen saturation >. < x1 > causes < x >. < hypovolemia > causes < left ventricular end-diastolic volume >. < hypovolemia > causes < stroke volume >. < insufficient anesthesia > causes < x2 >. < intubation > causes < x4 >. < intubation > causes < minute volume >. < intubation > causes < x3 >. < intubation > causes < shunt - normal and high >. < intubation > causes < breathing pressure >. < kinked chest tube > causes < x4 >. < kinked chest tube > causes < breathing pressure >. < left ventricular end-diastolic volume > causes < central venous pressure >. < left ventricular end-diastolic volume > causes < pulmonary capillary wedge pressure >. < left ventricular failure > causes < previous medical history >. < left ventricular failure > causes < left ventricular end-diastolic volume >. < left ventricular failure > causes < stroke volume >. < the amount of time using a breathing machine > causes < x5 >. < sudden blockage in the pulmonary arteries > causes < shunt - normal and high >. < sudden blockage in the pulmonary arteries > causes < pulmonary artery pressure >. < x8 > causes < x7 >. < x7 > causes < x2 >. < shunt - normal and high > causes < x7 >. < stroke volume > causes < x >. < total peripheral resistance > causes < x2 >. < total peripheral resistance > causes < blood pressure >. < x3 > causes < Alveolar Gas Exchange >. < x3 > causes < x8 >. < x4 > causes < expelled co2 >. < x4 > causes < minute volume >. < x4 > causes < x3 >. < x5 > causes < x6 >. < x6 > causes < x4 >. < x6 > causes < breathing pressure >. Using your causal knowledge, can you suggest what variable X most likely? Give 5 suggestions for what variable X could be. Give reason for your suggestions. Finally, return your answer (without reasoning) in the following format: <Answer> [first suggestion, second suggestion, third suggestion .. and so on] </Answer> No explanations, no reasoning, just answers as a list as described earlier. For example, if the suggestions are A,B,C,D,E then - <Answer> [A, B, C, D, E] </Answer>
Answer: <Answer> [cardiac output, heart function, blood flow, vascular resistance, cardiac cycle] </Answer>
Reasons for the answer:
1. **Cardiac Output**: The variable X causes blood pressure and is caused by stroke volume. In physiological terms, cardiac output is a primary factor influencing blood pressure, as it represents the volume of blood the heart pumps per minute. Since stroke volume (the amount of blood pumped by the left ventricle of the heart in one contraction) contributes to cardiac output, it's logical to suggest "cardiac output" as a potential identity for X.
2. **Heart Function**: Given that X is related to blood pressure and influenced by stroke volume, it's reasonable to infer that X pertains to the overall function of the heart. Heart function encompasses various aspects of how the heart works, including pumping efficiency, which directly affects blood pressure.
3. **Blood Flow**: Since X has a direct relationship with blood pressure and an indirect relationship with factors affecting the heart's pumping ability, "blood flow" could be a plausible suggestion. Blood flow is crucial for maintaining blood pressure and is inherently linked to the heart's ability to pump blood effectively.
4. **Vascular Resistance**: X's impact on blood pressure and its connection to total peripheral resistance suggest that it might involve vascular resistance. Vascular resistance is the resistance that must be overcome to push blood through the circulatory system and is a key determinant of blood pressure.
5. **Cardiac Cycle**: The cardiac cycle involves all the events related to the flow of blood through the heart during one complete heartbeat, including systole (contraction) and diastole (relaxation). Given X's influence on blood pressure and its relationship with stroke volume, considering the cardiac cycle as a potential identity for X makes sense, as it encompasses the mechanisms affecting blood pressure regulation.
Hypothesizing missing variable in open world #2 Ground truth variable: cardiac output
Prompt:
Hello. You will be given a causal graph. The context of the graph is hypothetical patient monitoring system in an intensive care unit (ICU) Please understand the causal relationships between the variables - < anaphylaxis > causes < total peripheral resistance >. < < Alveolar Gas Exchange > causes < expelled co2 >. < < Alveolar Gas Exchange > causes < x1 >. < x1 > causes < x >. < Cardiac Output > causes < blood pressure >. < disconnection > causes < x5 >. < error cauter > causes < heart rate displayed on ekg monitor >. < error cauter > causes < oxygen saturation >. < error low output > causes < heart rate blood pressure >. < high concentration of oxygen in the gas mixture > causes < x7 >. < x > causes < heart rate blood pressure >. < x > causes < heart rate displayed on ekg monitor >. < x > causes < oxygen saturation >. < x > causes < Cardiac Output >. < hypovolemia > causes < left ventricular end-diastolic volume >. < hypovolemia > causes < stroke volume >. < insufficient anesthesia > causes < x1 >. < intubation > causes < x3 >. < intubation > causes < minute volume >. < intubation > causes < x2 >. < intubation > causes < shunt - normal and high >. < intubation > causes < breathing pressure >. < kinked chest tube > causes < x3 >. < kinked chest tube > causes < breathing pressure >. < left ventricular end-diastolic volume > causes < central venous pressure >. < left ventricular end-diastolic volume > causes < pulmonary capillary wedge pressure >. < left ventricular failure > causes < previous medical history >. < left ventricular failure > causes < left ventricular end-diastolic volume >. < left ventricular failure > causes < stroke volume >. < the amount of time using a breathing machine > causes < x4 >. < sudden blockage in the pulmonary arteries > causes < shunt - normal and high >. < sudden blockage in the pulmonary arteries > causes < pulmonary artery pressure >. < x7 > causes < x6 >. < x6 > causes < x1 >. < shunt - normal and high > causes < x6 >. < stroke volume > causes < Cardiac Output >. < total peripheral resistance > causes < x1 >. < total peripheral resistance > causes < blood pressure >. < x2 > causes < < Alveolar Gas Exchange >. < x2 > causes < x7 >. < x3 > causes < expelled co2 >. < x3 > causes < minute volume >. < x3 > causes < x2 >. < x4 > causes < x5 >. < x5 > causes < x3 >. < x5 > causes < breathing pressure > Using your causal knowledge, can you suggest what variable X most likely? Give 5 suggestions for what variable X could be. Give reason for your suggestions. Finally, return your answer (without reasoning) in the following format: <Answer> [first suggestion, second suggestion, third suggestion .. and so on] </Answer> No explanations, no reasoning, just answers as a list as described earlier. For example, if the suggestions are A,B,C,D,E then - <Answer> [A, B, C, D, E] </Answer>
Answer: <Answer> [patient anxiety level, patient consciousness level, patient pain level, patient respiratory rate, patient skin temperature] </Answer>
Reasons for the answer:
1. **Patient Anxiety Level**: The variable X influences heart rate, blood pressure, oxygen saturation, and cardiac output, which are all physiological parameters that can be affected by a patient's psychological state, such as anxiety. Anxiety can lead to increased heart rate and blood pressure, changes in oxygen saturation, and alterations in cardiac output due to the body's stress response.
2. **Patient Consciousness Level**: Consciousness level can impact heart rate, blood pressure, and oxygen saturation due to the brain's control over these vital signs. Changes in consciousness level, such as becoming more alert or lapsing into unconsciousness, can significantly affect these parameters, which are all influenced by variable X.
3. **Patient Pain Level**: Pain can cause physiological responses that include increased heart rate, elevated blood pressure, changes in oxygen saturation due to hyperventilation or altered breathing patterns, and potentially affect cardiac output as the body reacts to the stress of pain.
4. **Patient Respiratory Rate**: Given that X influences expelled CO2 and is affected by alveolar gas exchange, it's plausible that X could relate to the patient's respiratory rate. Respiratory rate directly impacts alveolar gas exchange, which in turn affects expelled CO2 levels, oxygen saturation, and indirectly influences heart rate and blood pressure through changes in blood chemistry.
5. **Patient Skin Temperature**: While not directly related to the cardiovascular or respiratory systems, skin temperature can be an indirect indicator of changes in these systems. For example, changes in cardiac output and peripheral resistance can affect blood flow to the skin, thereby influencing skin temperature. Variable X's impact on cardiac output and blood pressure suggests it could also indirectly affect skin temperature.
These suggestions are based on the understanding that variable X has a broad impact on both cardiovascular and respiratory parameters, indicating it could be a factor that influences or reflects the overall physiological state of the patient.
Hypothesizing missing variable in open world #3 Ground truth variable: Heart rate
Prompt:
Hello. You will be given a causal graph. The context of the graph is hypothetical patient monitoring system in an intensive care unit (ICU) Please understand the causal relationships between the variables - < anaphylaxis > causes < total peripheral resistance >. < < Alveolar Gas Exchange > causes < expelled co2 >. < < Alveolar Gas Exchange > causes < x >. < x > causes <Patient Respiratory Rate>. < Cardiac Output > causes < blood pressure >. < disconnection > causes < x4 >. < error cauter > causes < heart rate displayed on ekg monitor >. < error cauter > causes < oxygen saturation >. < error low output > causes < heart rate blood pressure >. < high concentration of oxygen in the gas mixture > causes < x6 >. <Patient Respiratory Rate> causes < heart rate blood pressure >. <Patient Respiratory Rate> causes < heart rate displayed on ekg monitor >. <Patient Respiratory Rate> causes < oxygen saturation >. <Patient Respiratory Rate> causes < Cardiac Output >. < hypovolemia > causes < left ventricular end-diastolic volume >. < hypovolemia > causes < stroke volume >. < insufficient anesthesia > causes < x >. < intubation > causes < x2 >. < intubation > causes < minute volume >. < intubation > causes < x1 >. < intubation > causes < shunt - normal and high >. < intubation > causes < breathing pressure >. < kinked chest tube > causes < x2 >. < kinked chest tube > causes < breathing pressure >. < left ventricular end-diastolic volume > causes < central venous pressure >. < left ventricular end-diastolic volume > causes < pulmonary capillary wedge pressure >. < left ventricular failure > causes < previous medical history >. < left ventricular failure > causes < left ventricular end-diastolic volume >. < left ventricular failure > causes < stroke volume >. < the amount of time using a breathing machine > causes < x3 >. < sudden blockage in the pulmonary arteries > causes < shunt - normal and high >. < sudden blockage in the pulmonary arteries > causes < pulmonary artery pressure >. < x6 > causes < x5 >. < x5 > causes < x >. < shunt - normal and high > causes < x5 >. < stroke volume > causes < Cardiac Output >. < total peripheral resistance > causes < x >. < total peripheral resistance > causes < blood pressure >. < x1 > causes < < Alveolar Gas Exchange >. < x2 > causes < x6 >. < x3 > causes < expelled co2 >. < x3 > causes < minute volume >. < x3 > causes < x1 >. < x3 > causes < x4 >. < x4 > causes < x3 >. < x4 > causes < breathing pressure > Using your causal knowledge, can you suggest what variable X most likely? Give 5 suggestions for what variable X could be. Give reason for your suggestions. Finally, return your answer (without reasoning) in the following format: <Answer> [first suggestion, second suggestion, third suggestion .. and so on] </Answer> No explanations, no reasoning, just answers as a list as described earlier. For example, if the suggestions are A,B,C,D,E then - <Answer> [A, B, C, D, E] </Answer>
Answer: <Answer> [oxygen delivery, arterial blood gas, lung compliance, blood oxygen level, respiratory efficiency] </Answer>
Reasons for the answer:
1. **Oxygen Delivery**: Given that "alveolar gas exchange" causes "X" and "X" in turn affects "patient respiratory rate," it's logical to infer that "X" could be related to how effectively oxygen is being delivered throughout the body. Oxygen delivery is a critical aspect of respiratory function and would directly impact respiratory rate.
2. **Arterial Blood Gas (ABG)**: ABG tests measure the levels of oxygen and carbon dioxide in the blood and can indicate how well oxygen is being transported and carbon dioxide is being expelled, which are functions influenced by alveolar gas exchange. Since "X" is caused by "alveolar gas exchange" and affects "patient respiratory rate," it's plausible that "X" could involve the measurement or state of arterial blood gases.
3. **Lung Compliance**: This refers to the elasticity or flexibility of the lungs, which directly impacts alveolar gas exchange and, consequently, the respiratory rate. Since "X" is linked to both "alveolar gas exchange" and "patient respiratory rate," it could suggest a relationship with the mechanical properties of the lung, such as compliance.
4. **Blood Oxygen Level**: This is a direct outcome of effective alveolar gas exchange and would naturally influence the patient's respiratory rate as the body adjusts to maintain adequate oxygenation. Given the causal links mentioned, "X" could very well represent the level of oxygen present in the blood.
5. **Respiratory Efficiency**: This encompasses how effectively the respiratory system can supply oxygen to and remove carbon dioxide from the body. It's influenced by alveolar gas exchange and directly affects the respiratory rate, making it a plausible candidate for "X."
These suggestions are not explicitly mentioned in the input graph but are logically inferred based on the described causal relationships and a basic understanding of respiratory physiology.
Hypothesizing missing variable in open world #4 Ground truth variable: catecholamine. This repeats until all of the mediators are hypothesized.
§ ASSUMPTIONS
The causal sufficiency of 𝒢, by definition, implies that for every pair of variables within 𝐕, all common causes are also included within 𝐕. Extending this assumption to 𝒢^*, we assume that the partial graph inherits causal sufficiency for its variable subset V^*, given that all edges among these variables are preserved as in 𝒢. This preservation ensures that the observed relationships within V^* are not confounded by omitted common causes. The faithfulness of 𝒢 ensures that the observed conditional independencies among variables in 𝐕 are accurately reflected by the causal structure represented by 𝐄; by maintaining the same set of edges 𝐄 in 𝒢^* for the subset V^*, we uphold the faithfulness assumption within the partial graph.
§ NDE AND NIE
The Average Treatment Effect (ATE) quantifies the expected change in the outcome v_y caused by a unit change in the treatment v_t. The ATE is expressed using the do-operator of the causal do-calculus introduced by <cit.>. We consider binary causal DAGs, i.e., each variable takes either 0 or 1 as its value.
ATE = 𝔼[v_y|do(v_t=1)] - 𝔼[v_y|do(v_t=0)]
where the do(·) operator represents an intervention. The term 𝔼[v_y|do(v_t=1)] is the expected value of the outcome variable v_y when we intervene to set the treatment variable v_t to 1 (i.e., apply the treatment), and 𝔼[v_y|do(v_t=0)] is the expected value of v_y when we set v_t to 0 (i.e., do not apply the treatment).
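For illustration with purely hypothetical numbers: if, in some binary DAG, 𝔼[v_y|do(v_t=1)] = 0.7 and 𝔼[v_y|do(v_t=0)] = 0.4, then ATE = 0.7 - 0.4 = 0.3; intervening to apply the treatment raises the expected outcome by 0.3.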
§.§ Mediation Analysis
Mediation analysis is used to quantify the effect of a treatment on the outcome via a third variable, the mediator. The total effect can be decomposed into the Natural Direct Effect (NDE) and the Natural Indirect Effect (NIE). The NDE is the effect of the treatment on the outcome that is not mediated by the mediator variable, and the NIE is the effect of the treatment on the outcome that is transmitted through the mediator variable.
NDE = 𝔼[v_y(v_t=1, v_m=m_0) - v_y(v_t=0, v_m=m_0)],
where m_0 denotes the value the mediator v_m would naturally take under the control treatment v_t=0. That is, the NDE compares the expected outcome when the treatment is set to 1 while the mediator is held at the level it would take under the control treatment, with the expected outcome when both the treatment and the mediator are set to their control levels.
NIE = 𝔼[v_y(v_t=0, v_m=m_1) - v_y(v_t=0, v_m=m_0)],
where m_1 is the value the mediator would naturally take under the treatment v_t=1. Here, the NIE holds the treatment at its control level and compares the expected outcome when the mediator takes the value it would take under treatment with the expected outcome when the mediator takes the value it would take under control, thereby isolating the portion of the effect transmitted through the mediator.
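As a purely illustrative numerical example (not taken from our experiments): if the expected outcome is 0.6 under (v_t=1, v_m=m_0), 0.4 under (v_t=0, v_m=m_0), and 0.5 under (v_t=0, v_m=m_1), then NDE = 0.6 - 0.4 = 0.2 and NIE = 0.5 - 0.4 = 0.1, i.e., most of the effect in this hypothetical case is direct.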
Conditional logistic individual-level models of spatial infectious disease dynamics
Tahmina Akter^1,2, Rob Deardon^1,3
Department of Mathematics and Statistics,
University of Calgary^1
Institute of Statistical Research and Training, University of Dhaka^2
Faculty of Veterinary Medicine, University of Calgary^3
§ ABSTRACT:
Here, we introduce a novel framework for modelling the spatiotemporal dynamics of disease spread known as conditional logistic individual-level models (CL-ILM's). This framework alleviates much of the computational burden associated with traditional spatiotemporal individual-level models for epidemics, and facilitates the use of standard software for fitting logistic models when analysing spatiotemporal disease patterns. The models can be fitted in either a frequentist or Bayesian framework. Here, we apply the new spatial CL-ILM to both simulated and semi-real data from the UK 2001 foot-and-mouth disease epidemic.
Keywords: Disease transmission model, ILMs, Logistic ILM, Conditional logistic ILM, Posterior predictive distribution.
§ INTRODUCTION
Infectious disease outbreaks can have devastating effects on human lives, agriculture, and economic growth. For example, the ongoing coronavirus disease outbreak wreaked havoc on public health and caused substantial losses in economic activity (Barro et al., 2020). High-quality mathematical models can provide powerful insights into how the complex systems underlying infectious disease behave, which in turn can enable outbreaks to be better controlled through the design of efficient public health strategies and resource allocation, such as intervention or vaccination (Tildesley et al., 2006). To this end, Deardon et al. (2010) introduced a class of individual-level models that focus on describing and predicting the behavior of disease at the individual level of interest (e.g., infection between people, households, or farms).
Individual-level models are notable because they incorporate individual-specific covariate information on susceptible and infectious individuals to better describe the dynamics of infectious disease outbreaks. For example, we can account for population heterogeneity in space by including information on separation distance. However, fitting such models to data can be difficult due to the computational cost of calculating the likelihood, particularly when dealing with a large population. Utilizing ILMs is also challenging because it generally requires specialized software such as the EpiILM and EpiILMCT R packages (Warriyar et al., 2020; Almutiry et al., 2020) or coding in fast languages such as Fortran, Julia, or C.
Inference for such models is usually facilitated via Markov chain Monte Carlo (MCMC) within a Bayesian framework. This is a powerful tool because it can deal with high-dimensional and complex models and offers great flexibility in the choice of model. Bayesian MCMC also provides a principled way of imputing missing data and enables the incorporation of prior knowledge, allowing multiple sources of data to be combined to improve parameter identifiability. In practice, however, the repeated likelihood calculations required by MCMC for an ILM can be computationally very expensive, especially when dealing with large population sizes or complex models (Deardon, 2010).
A logistic regression model is a powerful statistical tool for modelling binary response variables and for prediction. It models the relationship between predictor variables and a binary response, and can be used to predict the probability of an event occurring, such as disease status (yes/no), given the associated predictors. Moreover, the logistic model can be used as a valuable tool in epidemiology for understanding the dynamics of disease transmission within a population (Jin et al., 2015).
There is also, of course, a wide range of statistical software for fitting these models. The key features of these models are simplicity, interpretability, and applicability to a wide range of scenarios.
In this study, we propose a framework for logistic ILMs, specifically in the context of spatial individual-level models. The logistic ILM models the probability of infection (or non-infection) at each point in time based on risk factors (e.g., environmental, demographic, or behavioral) associated with individuals in the population. This is done in a similar way to an ILM, but the two models have different underlying functional forms.
From these models, we can understand the spatial pattern of the disease, identify associated risk factors, and make predictions or forecasts just as we can with a standard ILM.
Spatial logistic ILMs are typically non-linear in their covariates because of the spatial distance function typically used. However, we can condition on the spatial parameter of the logistic ILM so that the covariates in the model become linear predictors of the log odds of infection at each time point. This enables us to use standard statistical software to fit the logistic ILM and facilitates faster inference.
We will do this in two stages. In the first stage, we will fix the spatial parameter by choosing an appropriate value from a finite set of plausible values. This leads to a conditional logistic ILM (CL-ILM). In the second stage, the model can be fitted in either a Bayesian or frequentist framework. Here, we will focus on Bayesian CL-ILMs. We can check the performance of this model relative to, say, a standard ILM by using a posterior predictive approach (Gardner et al., 2011) or a model-based information criterion.
The subsequent sections of this paper are organized as follows. In Section 2, we introduce the general framework of ILMs, the logistic ILM, the spatial logistic ILM, the CL-ILM, methods for converting from epidemic data to that suitable for fitting the CL-ILM via standard statistical software, and the posterior predictive approach. In Section 3, we discuss our simulation process. In Section 4, we present our findings and compare the ILM and CL-ILM methods based on simulation studies under SI and SIR frameworks. In Section 5, we apply the CL-ILM to semi-real data based on the UK foot and mouth disease (FMD) outbreak of 2001. Finally, in Section 6, we conclude and propose plans for future research.
§ METHODOLOGY
§.§ Individual-level model
A class of disease transmission models known as individual-level models was introduced by Deardon et al. (2010). These models provide a tool for modelling infectious disease spread through space and time at the individual level (e.g., individual people, households, or geographical regions), with the goal of mimicking the dynamics of infectious disease. The models are placed within a so-called compartmental framework.
We begin by considering the SI, or susceptible (S)-infectious (I), framework and then the SIR, or susceptible (S)-infectious (I)-removed (R), framework. These compartmental frameworks can easily be extended to SEIR or SEIRS, which allow for a latent period and/or reinfection.
In the SI framework, individuals are initially in the susceptible state (S); when infection occurs, the individual becomes infectious immediately and moves to the infectious state (I). In the SIR framework, the same process occurs, but after some time, for example upon recovery or death, the individual moves to the removed state (R).
In our discrete-time setting, the epidemic starts at time t=1, when the first individual is infected, and ends at time t=t_end; t=1, 2, …, t_end. The functional form of the ILM infection probability, as defined in Deardon et al. (2010), is given as
P_it = 1-exp[-{Ω_S(i)∑_j∈ I(t)Ω_T(j) k(i,j)}-ε(i,t)], Ω_S(i), Ω_T(j), ε(i,t)>0
where: P_it is the probability that susceptible individual i is infected at time t; I(t) is the set of individuals who are infectious at time t; Ω_S(i) is a susceptibility function representing potential risk factors associated with the i^th susceptible individual contracting the disease; Ω_T(j) is a transmissibility function representing potential risk factors associated with the j^th infectious individual passing on the disease;
k(i,j) is an infection kernel that involves potential risk factors associated with both the infectious and susceptible individuals (e.g., a function of spatial distance); and ε (i,t) describes random behavior due to some otherwise unexplained infection process.
The likelihood function for the model of (1) is given as
L(D| θ)= ∏_t=1^t_max-1 f_t( D| θ),
where
f_t(D|θ)=[∏_i∈ I(t+1)∖ I(t)P_it] [∏_i∈ S(t+1)(1-P_it)],
and where θ is the vector of unknown parameters, D is the epidemic data set, S(t+1) is the set of individuals susceptible at time (t+1), I(t+1)∖ I(t) is the set of individuals newly infected at time t+1, and t_max≤ t_end is the last time point observed in data. Infectious periods (removal) can be modelled in various ways. For simplicity, here, we assume that the infection times and infection periods are known.
We will focus upon a simple spatial ILM with no covariates aside from spatial distance with the form,
P_it =1-exp[-α∑_j∈ I(t)d_ij^-β], α, β>0, t=1, …, t_max,
where Ω_S(i)=α, Ω_T(j)=1, k(i,j)=d_ij^-β and ε(i,t)=0 in equation (2), and where d_ij is the Euclidean distance between the i^th susceptible and the j^th infectious individual, α is the baseline susceptibility, and β is the spatial parameter.
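For illustration, a minimal R sketch of one transmission step under this spatial ILM; the population size, the initial infective, and the parameter values (α, β) = (0.7, 4), which match the first simulation scenario considered later, are illustrative only.

set.seed(1)
n     <- 500
xy    <- cbind(runif(n, 0, 10), runif(n, 0, 10))   # locations in a 10 x 10 unit square
d     <- as.matrix(dist(xy))                        # Euclidean distances d_ij
alpha <- 0.7; beta <- 4                             # illustrative (alpha, beta)
inf   <- 1                                          # currently infectious individual(s)
sus   <- setdiff(seq_len(n), inf)
p_it  <- 1 - exp(-alpha * rowSums(d[sus, inf, drop = FALSE]^(-beta)))  # P_it for each susceptible
new_inf <- sus[runif(length(sus)) < p_it]           # Bernoulli draws for the new infections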
§.§ Logistic ILM
Here, we discuss the logistic ILM and its general form. The logistic ILM is a logistic version of the ILM, relating the log odds of infection to potential risk factors associated with susceptibility and transmissibility. The general form of the logistic ILM infection probability is defined as
λ_it=Ψ_S ∑_j∈ I(t)Ψ_T K(i, j)+e_it,
where λ_it=log[P_it/(1-P_it)], Ψ_S represents the potential risk factors associated with the susceptible individual, Ψ_T represents the potential risk factors associated with the infectious individual, K(i, j) is the infection kernel, and e_it represents infection arising from unexplained causes.
The likelihood function for the model (3) can be written as,
L(D|θ)=∏_i=1^n∏_t=1^t_max-1 P_it^y_it (1-P_it)^1-y_it,
where y_it is the infection status of the i^th individual at time t, with y_it=1 if the individual is infected and 0 otherwise. We can fit this model to data by maximizing the likelihood and taking a frequentist approach to inference, or by fitting the model in a Bayesian framework using an MCMC algorithm, incorporating prior information on the parameters.
§.§ Spatial logistic ILM
Here, we present a logistic version of the simple spatial ILM of equation (2). It can be considered as an alternative model in its own right, or as an approximation to the spatial ILM.
It is given by,
log[P_it/(1-P_it)] = Xα = α_0+α_1 X_it,
where Ψ_S=α, Ψ_T=1, K(i, j)=d_ij^-β_0, and e_it=0 in equation (3), and where
X=(1, X_it), α^T=(α_0, α_1) and X_it=∑_j∈ I(t)d_ij^-β_0. That is, we relate the force of infection (α_1∑_j∈ I(t)d_ij^-β_0) to the log odds of infection rather than the probability of infection.
We can write the probability of being infected as
P_it = exp(Xα)/(1+exp(Xα)) = exp(α_0+α_1 X_it)/(1+exp(α_0+α_1 X_it)) = 1/(1+exp(-(α_0+α_1 X_it))).
However, standard statistical software for fitting logistic models (e.g., the glm command in R) cannot cope with the non-linearity in X_it introduced by the spatial function. If we fix, or condition on, β_0, however, we can calculate X_it for each susceptible individual i at each time t and then use standard software to fit the model.
§.§ Conditional logistic ILM
The conditional logistic ILM involves conditioning on the spatial parameter β_0. In such cases, the probability of being infected can be written as
P(Y_it=1|β_0=β̃_0) =exp(α_0+α_1 ∑_j∈ I(t)d_ij^-β̃_0)/(1+exp(α_0+α_1 ∑_j∈ I(t)d_ij^-β̃_0))
=1/(1+exp(-(α_0+α_1 ∑_j∈ I(t)d_ij^-β̃_0))),
where β̃_0 denotes our fixed value of β_0.
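As an illustration, a minimal R sketch of this conditional probability; the distance sub-matrix and the coefficient values passed to the function are user-supplied placeholders.

cl_ilm_prob <- function(d_sus_inf, alpha0, alpha1, beta0_tilde) {
  # d_sus_inf: matrix of distances from each susceptible i (rows) to each j in I(t) (columns)
  X_it <- rowSums(d_sus_inf^(-beta0_tilde))     # X_it = sum over j in I(t) of d_ij^(-beta0_tilde)
  1 / (1 + exp(-(alpha0 + alpha1 * X_it)))      # P(Y_it = 1 | beta0_tilde)
}

# example: 3 susceptibles, 2 infectives, illustrative coefficients
d_example <- matrix(c(1.2, 0.8, 2.5, 1.9, 0.6, 3.1), nrow = 3)
cl_ilm_prob(d_example, alpha0 = -6, alpha1 = 1.5, beta0_tilde = 4)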
The conditional likelihood function can be written as
L(α|β̃_0)=∏_i=1^n∏_t=1^t_max-1 P(Y_it|β̃_0)^y_it (1-P(Y_it|β̃_0))^1-y_it.
One simple way to choose β̃_0 is to fit the model for each of a finite set of possible values and choose the value which maximizes the likelihood.
§.§ Converting epidemic data to binary data
The epidemic data for a spatial SI ILM without covariates contain information on the infection times (and removal times, if an SIR model is being fitted) of individuals, together with their (X, Y) coordinates. Here, we explain how to convert such epidemic data into binary data to which the spatial logistic ILM can be fitted. The infection pattern over time is shown in Table 1 for a hypothetical `toy example' consisting of four individuals. Note that the column `Individual ID' is not strictly needed but is included here to aid illustration.
In these data, individual 1 is infected at t=4 and so becomes infectious at t=5; similarly, individual 2 is infected at t=3 and so becomes infectious at t=4, and so on. Here, the epidemic starts when individual 3 becomes infectious at time t=2, and we condition on that initial infection.
To convert the epidemic data to a data set suitable for fitting the CL-ILM using standard software, we create three columns. The first column records the time points for each individual, running up to the time point at which that individual becomes infected. The second column records the infection event status of the individual at each of those time points. We start to observe the binary data from t=2, because the epidemic starts from one individual who was infected at time 1. The third column records the set of infectious individuals, I_t, and contains X_it calculated under the fixed value β̃_0; note that X_it will typically change over time for each individual. The binary data set corresponding to Table 1 is shown in Table 2. Here, time (t) and I_t are supporting information that is not directly used in the fitting of our CL-ILM.
Similarly, we can convert epidemic data to binary data in the context of the SIR framework. In this case, the data contain the time of infection and the time of removal, with (X, Y) coordinates, for each individual. An infectious individual moves to the removed state at the end of their infectious period; from that point, the individual is no longer in the set of infectious individuals, and this is reflected in the calculation of X_it.
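For concreteness, a minimal R sketch of this conversion. This is one possible implementation: here inf_time is taken to be the time at which an individual becomes infectious (NA if never infected), an individual becoming infectious at t+1 contributes y=1 in its row for time t, removal times default to infinity (the SI case), and the covariate is stored as log(X_it), matching the log transformation used later in the paper.

epi_to_binary <- function(inf_time, xy, beta0_tilde, t_max,
                          rem_time = rep(Inf, length(inf_time))) {
  # inf_time: infectious-from times; xy: n x 2 coordinate matrix; rem_time: removal times
  d    <- as.matrix(dist(xy))
  rows <- list()
  for (t in seq_len(t_max - 1)) {
    inf <- which(!is.na(inf_time) & inf_time <= t & rem_time > t)   # infectious set I(t)
    sus <- which(is.na(inf_time) | inf_time > t)                    # susceptible at time t
    if (length(inf) == 0 || length(sus) == 0) next
    X_it <- rowSums(d[sus, inf, drop = FALSE]^(-beta0_tilde))
    y_it <- as.integer(!is.na(inf_time[sus]) & inf_time[sus] == t + 1)
    rows[[length(rows) + 1]] <- data.frame(id = sus, t = t, y = y_it, log_X = log(X_it))
  }
  do.call(rbind, rows)
}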
§.§ Posterior predictive distribution
To investigate the model accuracy or goodness of fit under the Bayesian framework, we can use a posterior predictive approach as introduced by Guttman (1967).
We can generate realizations from the posterior predictive distribution (PPD) of various epidemiological statistics such as some form of epidemic curve or the final size of the epidemic, and then compare that with the equivalent statistic calculated from the observed data, to assess the model fit. Here, we consider the number of newly infectious individuals (incidence) over time, which we refer to as the epidemic curve.
The algorithm for producing posterior predictive realizations in the case of an ILM or CL-ILM consists of the following steps:
* (i) Sample a set of parameters from the MCMC-estimated posterior distribution.
* (ii) Simulate an epidemic from the model using the parameters sampled in Step (i).
* (iii) Summarize the simulated epidemic from Step (ii) via the epidemic curve (or some other statistic of interest).
* (iv) Repeat Steps (i) to (iii) a large number of times; for this study, we used 500 repetitions.
Then, we examine and compare the PPD of the epidemic curve to the original observed epidemic curve to check for accuracy and precision. We consider a model to be a good fit for the data if the observed data lies in the areas of high mass of the PPDs, and the PPD has low variance.
To quantify the posterior predictive model fit, we can also use metrics such as the mean square error (MSE). Here, the MSE is calculated by taking the average of the squared differences between predicted and actual values of new infections over time, which is then averaged over the total number of epidemic simulations.
The MSE is given as,
MSE=1/(500 t_max)∑_s=1^500∑_t=1^t_max(Y_st-Ŷ_st)^2,
where Y_st denotes the observed number of new cases at time t (the same for each sample s), and Ŷ_st is the number of new cases at time t in the s^th posterior predictive simulation.
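A minimal R sketch of this calculation, with dummy data standing in for the observed epidemic curve and the matrix of posterior predictive curves.

ppd_mse <- function(obs, pred) {
  # obs: observed epidemic curve (length t_max); pred: 500 x t_max matrix of
  # posterior predictive curves; returns the MSE defined above
  mean(sweep(pred, 2, obs)^2)
}

# dummy example
obs  <- rpois(20, lambda = 5)
pred <- matrix(rpois(500 * 20, lambda = 5), nrow = 500)
ppd_mse(obs, pred)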
§ SIMULATION STUDY
A simulation study is carried out to assess the performance of our CL-ILMs when the underlying data are generated by the spatial ILM (equation 2); that is, we examine how well a spatial CL-ILM can approximate the basic spatial ILM. Here, we consider a log transformation of the X_it covariate to enhance stability in the fitting. Each data analysis is carried out in two stages. In the first stage, we use maximum likelihood over a finite set of β_0 values to choose the fixed value β̃_0.
In the second stage, we fit the CL-ILM under the Bayesian framework. Then, we examine the model accuracy via the posterior predictive approach described above. Moreover, we compare the prediction error between the basic spatial ILM and CL-ILM under the SI and SIR frameworks.
In this study, we simulate epidemic data under four scenarios with different spatial ILM parameter values. The true parameter values of (α, β) are (0.7, 4), (0.5, 3), (0.2, 4), and (0.9, 5) for the four scenarios, respectively. For each set of parameters, we produce 30 epidemics, which are used to choose the fixed value β̃_0. We then take an arbitrarily chosen subset of 20 epidemics and fit the CL-ILM to these using a Bayesian MCMC framework. Note that the subset of only 20 is taken at the second stage to reduce the computational burden associated with carrying out multiple MCMC analyses. For each simulated epidemic, we randomly generate the spatial locations of 500 individuals uniformly within a 10 × 10 unit square area.
To generate epidemic data from the ILM, we use the epidata function from the `EpiILM' R package. Then, we convert the epidemic data to binary data suitable for analysing with the glm command in R.
§.§ Fixing β_0
We compare a number of spatial logistic ILMs to find the optimal tuning parameter β̃_0. To compare these models, we use the maximum likelihood approach described above, considering spatial parameter values β_0∈{-1, 0.5, 1, …, 9.5, 10}. By fixing β_0 at the best of these values, we construct the conditional logistic model.
We fit the models and calculate the likelihood values using the glm function in R with a logit link. We also record the proportion of times each possible value of β_0 is selected.
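A minimal R sketch of this grid search, reusing the epi_to_binary() helper sketched earlier; inf_time, xy, and t_max are assumed to come from a simulated epidemic.

beta0_grid <- c(-1, seq(0.5, 10, by = 0.5))
loglik <- sapply(beta0_grid, function(b0) {
  dat <- epi_to_binary(inf_time, xy, beta0_tilde = b0, t_max = t_max)
  fit <- glm(y ~ log_X, family = binomial(link = "logit"), data = dat)
  as.numeric(logLik(fit))                    # conditional log-likelihood given b0
})
beta0_tilde <- beta0_grid[which.max(loglik)] # fixed value used in the second stage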
§.§ Model fit
Here, we fit the basic spatial ILM using the MCMC function of the `adaptMCMC' package in R. For the basic spatial ILM, the marginal prior distributions of the parameters α and β are U(0, 5) and U(0, 10), respectively. Posterior predictive simulations are produced using the epidata command in the `EpiILM' R package.
In the case of the CL-ILM, we fit the model using the MCMClogit function from the `MCMCpack' package in R. Here, the marginal prior distributions of the parameters α_0 and α_1 are independent Cauchy distributions with location and scale parameters 0 and 1, respectively.
We use our own R code to produce the epidemic curves under the posterior predictive distribution.
Then we compare the fit of the spatial ILM and the CL-ILMs. We use the average MSE to measure the prediction error and the average standard deviation (SD) to capture the variation in the posterior realizations. Moreover, we report the average proportion of time points at which posterior predictive 95% credible intervals capture the true numbers of new infections.
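A minimal R sketch of this second-stage fit. For brevity, this sketch uses MCMClogit's multivariate normal prior via the b0 and B0 arguments rather than the independent Cauchy(0, 1) priors used in the paper (which would require supplying a user-defined prior density); the burn-in, chain length, and precision values are illustrative, and epi_to_binary(), inf_time, xy, beta0_tilde, and t_max are carried over from the earlier sketches.

library(MCMCpack)

dat <- epi_to_binary(inf_time, xy, beta0_tilde = beta0_tilde, t_max = t_max)
fit <- MCMClogit(y ~ log_X, data = dat,
                 burnin = 10000, mcmc = 50000, thin = 10,
                 b0 = 0, B0 = 0.01)   # vague normal prior on (alpha_0, alpha_1)
summary(fit)                          # posterior summaries for the CL-ILM coefficients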
§ RESULTS
§.§ SI framework
Under the SI compartmental framework, we assess the performance of CL-ILMs when data is generated from a basic spatial ILM.
§.§.§ Choosing β_0 via maximizing the likelihood
For the true parameter (α, β)=(0.7, 4), the spatial parameter β_0 was found to be either 3.5, 4.0, or 4.5 by maximizing the likelihood (Table 3). Further, β_0=4.0 (the true value) was chosen under a majority of epidemics (0.60). Similarly, when the true parameter value was (0.5, 3), the highest proportion was 0.733 for the true value of β_0=3.0. When the true values were (0.2, 4) and (0.9,5), the highest proportion was 0.533 and 0.600 for β_0=4.0 and β_0=5.0, respectively.
This implies that the maximum likelihood approach can be used successfully to fix β_0 for the CL-ILM, picking either the true value or one close to it under all epidemic scenarios tested.
§.§.§ Model fit
Table 4 shows the average MSE and SD under posterior prediction of the incidence-based epidemic curves under the SI framework. For each scenario, the average MSE and SD were higher for the CL-ILM compared to the spatial ILM. This would be expected, of course, since the actual data observed was simulated from the ILM.
We summarize the mean proportion of time points at which credible intervals capture the true number of new infections, together with its standard deviation across epidemic datasets, in Table 5. When comparing the ILM and CL-ILM, we observed that the mean proportion was slightly lower for the CL-ILM compared to the ILM. The mean proportion of successful incidence capture varied between 0.968 and 0.992 for the spatial ILM. In contrast, it varied between 0.881 and 0.950 for the CL-ILM.
As expected, the values of SD were higher for the CL-ILM compared to the spatial ILM. However, we note that under the CL-ILM the lowest capture proportion was 0.881, and in the other scenarios it was larger than 0.90.
Figures 1 to 4 (see Appendix) compare the posterior predictive epidemic curves (number of newly infectious individuals over time) between the ILM and CL-ILM for each scenario under the SI framework.
We observed that the width of the posterior predictive intervals was a little larger under the ILM than the CL-ILM for each scenario, suggesting less uncertainty in epidemic prediction under the CL-ILM. This is presumably due to the fixing of the spatial parameter.
As we have already observed, there is also a higher chance of failing to capture the true incidence under the CL-ILM. However, the patterns of the posterior predictive distributions were fairly similar under the ILM and CL-ILM. Overall, this suggests that the CL-ILM provides a reasonable approximation to the basic spatial ILM.
§.§ SIR framework
Under the SIR compartmental framework, we evaluate the performance of CL-ILMs when data is generated from a basic spatial ILM. Here, we assume the infectious period follows a Poisson distribution with a mean of 4.
§.§.§ Choosing β_0 via maximizing the likelihood
For the true parameter values (α, β)=(0.7, 4), the spatial parameter β_0 was found to be either 3.5, 4.0, 4.5, or 5.5 by maximizing the likelihood (Table 6). The highest proportion was 0.500 for the true value of β_0=4.0. Similarly, when the true parameter values were (0.5, 3), β_0=3.0 was chosen for the majority of epidemics (0.667). When the true values were (0.2, 4) and (0.9, 5), the highest proportions were 0.633 and 0.400 for β_0=4.0 and β_0=5.0, respectively.
Once again, the findings imply that the maximum likelihood approach can be effectively used to fix β_0 for the CL-ILM, picking either the true value or one close to it under all epidemic scenarios tested.
§.§.§ Model fit
Table 7 shows the average MSE and SD under posterior prediction of the incidence-based epidemic curves under the SIR framework.
Once again, the average MSE and SD were higher for the CL-ILM compared to the spatial ILM for all scenarios. This would be anticipated, of course, since the real data observed was simulated from the ILM.
In Table 8, we summarise the mean proportion of time points at which credible intervals capture the true number of infections, with its standard deviation, under the SIR framework. For each scenario, the mean proportion was slightly lower for the CL-ILM compared to the spatial ILM. Under the CL-ILM, the lowest capture proportion was 0.843, and the highest was 0.950. Under the ILM, the mean proportion varied between 0.959 and 0.990.
Moreover, the standard deviations were higher for the CL-ILM compared to the ILM for all scenarios except (0.9, 5).
Figures 5 to 8 (see Appendix) show the posterior predictive distribution of the epidemic curves under the ILM and CL-ILM for each scenario under the SIR framework. Here, we notice that the width of the posterior predictive intervals is larger for the CL-ILM compared to the ILM for almost all scenarios, suggesting more uncertainty in epidemic prediction under the CL-ILM.
Moreover, the patterns of the posterior predictive distributions are slightly different for the CL-ILM compared to the ILM. Note that this differs from performance under the SI model.
However, the credible interval mostly captures the true number of infections, suggesting that the CL-ILM provides a reasonable approximation to the basic spatial ILM.
§ SEMI-REAL DATA
Here, we fit the CL-ILM to a simulated epidemic based on foot and mouth disease (FMD) data from the UK epidemic of 2001. The reason for using this `semi-real' data rather than the actual data set is that the culling strategy imposed by the UK government in 2001 is very hard to mimic, and so the posterior predictive performance of the epidemic model tends to be poor. The culling strategy varied over time and space but essentially aimed at pre-emptively culling animals at farms thought to be at high risk. Thus, we simulate a new `true' epidemic, without any culling strategy, from our ILM fitted to the real data. Then we compare the performance of the ILM and CL-ILM based on this `semi-real' data. We consider a subset of 1101 farms from the Cumbria region, with infection times varying between t=30 and t=71 days (t=1 being the day of the first infection).
In this study, we consider a conditional logistic ILM of the following form,
log[P(Y_it|β_0)/(1-P(Y_it|β_0))] = α_0 + α_1 ∑_j∈ I(t) d_ij^-β_0,
where α_0 is the intercept and α_1 is the slope of the model.
Then we compare the model with the spatial ILM as follows
P_it = 1 - exp[-α∑_j∈ I(t) d_ij^-β] .
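For concreteness, the two competing transmission probabilities can be written as short functions. The Python sketch below is a direct transcription of the two equations above, with argument names chosen purely for illustration.

```python
import numpy as np

def p_spatial_ilm(i, infectious, d, alpha, beta):
    """Basic spatial ILM: P_it = 1 - exp(-alpha * sum_{j in I(t)} d_ij^-beta)."""
    return 1.0 - np.exp(-alpha * np.sum(d[i, infectious] ** (-beta)))

def p_cl_ilm(i, infectious, d, alpha0, alpha1, beta0):
    """CL-ILM with beta0 fixed: logit(P_it) = alpha0 + alpha1 * sum_j d_ij^-beta0."""
    eta = alpha0 + alpha1 * np.sum(d[i, infectious] ** (-beta0))
    return 1.0 / (1.0 + np.exp(-eta))
```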
To simulate the epidemics, we used true parameter values (α, β)=(0.00096, 1.22) and (α, β)=(0.002, 1.18) under the SI and SIR frameworks, respectively. The parameter values were estimated from the real FMD data using the optim function in R. In the case of SIR, we assume the infectious period follows a Poisson distribution with a mean of 8.86, the average infectious period found in the real FMD data.
We compared the spatial logistic ILMs by maximizing likelihood values to fix β_0. Here, the values of β_0 considered were {-1, 0.2, 0.4, …, 3.8, 4.0 }.
The spatial parameter β_0 was found to be 1.0 and 1.2 under the SI and SIR frameworks, respectively.
This implies that the maximizing likelihood approach can be successfully used to fix β_0 for the CL-ILM by picking a value close to the true value.
Then, we assess the performance of CL-ILM under the posterior and compare it with the basic spatial ILM.
Table 9 and Table 10 show the average MSE and SD for posterior prediction of the incidence-based epidemic curves under the SI and SIR frameworks, respectively. In addition, we summarize in these tables the proportion of time points at which credible intervals capture the true number of new infections.
The average MSE and SD were very close for the spatial ILM and CL-ILM under the SI framework (Table 9). We observed that the capture proportion was exactly one for both the spatial ILM and CL-ILM.
In contrast, the average MSE and SD were higher for the CL-ILM compared to the spatial ILM under the SIR framework (Table 10). The proportion of time points at which the true incidence was captured was slightly higher for the spatial ILM (1.000) compared to the CL-ILM (0.976). Note that we simulated additional epidemics under the semi-real data scenario and the findings were very similar.
Figure 9 (see Appendix) demonstrates the posterior predictive distribution and 95% credible interval of the spatial ILM and CL-ILM for the semi-real data. The patterns of the posterior predictive distribution were fairly similar for both the spatial ILM and CL-ILM.
In addition, the posterior uncertainty was almost the same for the ILM and CL-ILM in the context of the SI framework. Alternatively, the posterior uncertainty was slightly higher for the CL-ILM compared to the ILM in the context of the SIR framework. Overall, this suggests that the CL-ILM is a reasonable approximation to the basic spatial ILM.
§ DISCUSSION
This article has proposed a logistic ILM as both an alternative to, and approximation of, the individual-level model. Generally, the ILM is a complicated model and thus inference for these models is computationally expensive, especially when a large population is involved. Moreover, the ILM generally calls for coding in a low-level language, which makes the analysis harder for researchers with limited expertise in computational statistics. We use a new modelling framework called the CL-ILM. The logistic model is a well-understood model with an extensive choice of statistical software for fitting it to data, and the CL-ILM is associated with a substantially lower computational burden.
We use the posterior predictive approach to compare the performance of the CL-ILM when approximating a basic spatial ILM. To quantify prediction accuracy, we measured MSE and standard deviation. We discuss and compare the performance in the context of spatial disease models with simulated datasets and semi-real data from the UK 2001 foot-and-mouth disease epidemic. Overall, we find reasonably good prediction accuracy for the CL-ILM when comparing it with the spatial ILM. However, the posterior predictive uncertainty was found to be greater under the SIR framework compared to the SI framework.
Of course, this study has some limitations and there are other avenues of research worthy of exploration. First, we assumed that event times (infection and removal times) are known. In practice, event times are typically not observed, with MCMC commonly used to impute them. It would therefore be advisable to validate that our conclusions are robust when allowing for uncertain event times, though this would undoubtedly increase computational costs.
Second, we use the maximum likelihood approach for tuning the spatial parameter in the CL-ILM. However, other methods such as probability scoring rules could be considered here. In addition, if susceptibility and/or transmissibility covariates are being included in the model, then the choice of fixed spatial parameter will need to incorporate model uncertainty regarding the covariates. Thus, we might want to consider criteria such as AIC or BIC for a few covariates, or methods such as the LASSO or spike-and-slab priors with large numbers of covariates.
Finally, here we have only considered SI and SIR compartmental frameworks for our CL-ILM, but extension to others, such as the SEIR framework, would be warranted. We can also consider the introduction of more complex data structures and dynamics into our CL-ILM framework. For example, we could consider behaviour change mechanisms (e.g., Ward et al., 2023), models incorporating regional as well as individual-level spatial information (e.g., Mahsin et al., 2022), missing covariate information (Amiri et al., 2023), and contact network based continuous time ILMs (Almutiry & Deardon, 2020).
§.§ CRediT authorship contribution statement
First Author (Corresponding Author): Conceptualization, Formal analysis, Methodology, Software, Visualization, Writing - original draft, Writing - review & editing.
Second Author: Conceptualization, Methodology, Supervision, Validation, Visualization, Writing - review & editing.
§.§ Declaration of competing interest
The authors declare that they have no known financial conflicts of interest or personal connections that might have influenced the work reported in this paper.
§.§ Acknowledgements
This project was funded by an Alberta Innovates Graduate Student Scholarship for Data-Enabled Innovation and a University of Calgary Eyes High Doctoral Scholarship, Doctoral Completion Scholarship, Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants program (RGPIN/03292-2022) and the Alberta Innovates Advance - NSERC Alliance program (222302037).
99
Almutiry2019 Almutiry, W., Deardon, R., 2019. Incorporating contact network uncertainty in individual level models of infectious disease using approximate Bayesian computation. The International Journal of Biostatistics.
Almutiry2020 Almutiry, W., Warriyar, V. K. V., Deardon, R., 2020. Continuous time individual-level models of infectious disease: EpiILMCT. arXiv:2006.00135v1.
Amiri2023 Amiri, L., Torabi, M., Deardon, R., 2023. Analyzing COVID-19 data in the Canadian province of Manitoba: A new approach. Spatial Statistics, 55.
Barro2020 Barro, R.J., Ursua, J.F., Weng, J., 2020. The coronavirus and the great influenza pandemic: lessons from the “spanish flu" for the coronavirus's potential effects on mortality and economic activity. National Bureau of Economic Research.
Deardon2010 Deardon, R., Brooks, S.P., Grenfell, B.T., Keeling, M.J., Tildesley, M.J., Savill, N.J., Shaw, D.J., Woolhouse, M.E.J., 2010. Inference for individual-level models of infectious diseases in large populations. Statistica Sinica, 20 (1), 239–261.
Gardner2011 Gardner, A., Deardon, R., Darlington, G.A., 2011. Bayesian goodness-of-fit measures for individual-level models of infectious disease. Spatial and Spatio-Temporal Epidemiology, 2 (4), 273–281.
Gelman2003 Gelman, A., Carlin, J.B., Stern, H.S., Rubin, D.B., 2003. Bayesian data analysis. Chapman & Hall: London.
Guttman1967 Guttman, I., 1967. The use of the concept of a future observation in goodness-of-fit problems. Journal of the Royal Statistical Society Series B, 29, 83–100.
Jin2015 Jin, R., Yan, F., Zhu, J., 2015. Application of logistic regression model in an epidemiological study. Science Journal of Applied Mathematics and Statistics, 3(5), 225–229.
Keeling2001 Keeling, M. J., Woolhouse, M. E. J., Shaw, D. J., Matthews, L., Chase-Topping, M., Haydon, D. T., et al., 2001. Dynamics of the 2001 UK foot and mouth epidemic: stochastic dispersal in a heterogeneous landscape. Science, 294, 813–817.
Kirasich2018 Kirasich, K., Smith, T., Sadler, B., 2018. Random forest vs logistic regression: binary classification for heterogeneous datasets. S.M.U. Data Science Review, 1(3).
Mahsin2022 Mahsin, M., Deardon, R., Brown, P., 2022. Geographically dependent individual-level models for infectious diseases transmission. Biostatistics, 23, 1–17.
Rosendal2011 Rosendal, T., 2011. The spread of porcine reproductive and respiratory syndrome virus (PRRSV) by genotype and the association between genotype and clinical signs in Ontario, Canada 2004-2007. Doctoral dissertation, University of Guelph.
ster2009 Ster, I.C., Singh, B.K., Ferguson, N.M., 2009. Epidemiological inference for partially observed epidemics: the example of the 2001 foot and mouth epidemic in Great Britain. Epidemics, 1(1), 21–34.
Tildesley2006 Tildesley, M.J., Savill, N.J., Shaw, D.J., Deardon, R., Brooks, S.P., Woolhouse, M.E., Grenfell, B.T., Keeling, M.J., 2006. Optimal reactive vaccination strategies for a foot-and-mouth outbreak in the UK. Nature, 440(7080), 83–86.
Ward2023 Ward, C., Deardon, R., Schmidt, A. M., 2023. Bayesian modeling of dynamic behavioral change during an epidemic. Infectious Disease Modelling, 8(4), 947–963.
Warriyar2020 Warriyar, V. K. V., Almutiry, W., Deardon, R., 2020. Individual-level modelling of infectious disease data: EpiILM. Preprint available at arXiv:2003.04963 [stat.AP].
§ APPENDIX
§.§ SI
§.§ SIR
§.§ Semi-real Data
|
http://arxiv.org/abs/2409.02317v1 | 20240903221230 | Topological communities in complex networks | [
"Luís F Seoane"
] | physics.soc-ph | [
"physics.soc-ph",
"cond-mat.dis-nn"
] |
§ ABSTRACT
Most complex systems can be captured by graphs or networks. Networks connect nodes (e.g. neurons) through edges (synapses), thus summarizing the system's structure. A popular way of interrogating graphs is community detection, which uncovers sets of geometrically related nodes. Geometric communities consist of nodes “closer” to each other than to others in the graph. Some network features do not depend on node proximity—rather, on them playing similar roles (e.g. building bridges) even if located far apart. These features can thus escape proximity-based analyses. We lack a general framework to uncover such features. We introduce topological communities, an alternative perspective to decomposing graphs. We find clusters that describe a network as much as classical communities, yet are missed by current techniques. In our framework, each graph guides our attention to its relevant features, whether geometric or topological. Our analysis complements existing ones, and could be a default method to study networks confronted without prior knowledge. Classical community detection has bolstered our understanding of biological, neural, or social systems; yet it is only half the story. Topological communities promise deep insights on a wealth of available data. We illustrate this for the global airport network, human connectomes, and others.
Topological communities in complex networks
Luís F Seoane
===========================================
Network science has revolutionized our understanding of diverse systems ranging from gene regulation <cit.>; through neural circuitry <cit.>, ecology <cit.>, linguistics <cit.>, or technology <cit.>; to human <cit.>, political <cit.>, or economic interactions <cit.>. Networks abstract away such complex systems into a set of nodes or vertices (e.g. genes, neurons, species, people, or companies; etc.) connected by edges or links (respectively: promotion or inhibition, synapses, predation or mutualism, friendship, or supply dependencies; etc.). These are pair-wise interaction summaries, which often suffice to capture what matters in each system <cit.>. Interrogating the resulting graphs is much simpler than running detailed models of each case.
A common strategy to study networks is hypothesis driven: We suspect that a feature plays an important role (e.g. hierarchy <cit.>, motifs as building blocks <cit.>, or a backbone within the human brain <cit.>), so we set out to find these elements, quantifying node involvement, and measuring graph properties (e.g. communication efficiency and cost to bring that brain backbone to light <cit.>). This requires some prior insight about the system, which we might acquire after visual inspection of the network. Could the graph guide our attention to its outstanding features in a more automated way? That is achieved, within a specific scope, by community detection algorithms, which uncover relevant subgraphs based on proximity criteria—e.g., by grouping nodes that are more connected to each-other (hence closer in a geometric sense) than to the rest of the network <cit.>. Let us call such sets of nodes classic or Geometric Communities (GC). Some GC stand out visually as a graph is plotted, and human feedback can be considered. But, in general, good algorithms see through the tangled web of connections finding us clusters difficult to see with the naked eye. These communities decompose the network often uncovering functional modules—e.g. functional gene relationships <cit.>, brain circuits <cit.>, or ideological political groups <cit.>.
These two approaches (hypothesis-driven and community detection studies) underlie most network analyses, and are behind the revolution that network science brought about. Might there be a blind-spot that we have not exploited yet? Classic community detection is restricted to finding contiguous sets of nodes. But relevant network features are often distributed—e.g. nodes may play similar roles because they act as bridges between communities, or because they constitute a backbone holding the graph together. These functions can be implemented by vertices that are not necessarily close-by, hence would be missed by classic community detection. We might suspect such functionality, as in <cit.>; but the combinatorial possibilities are staggering and our capacity is limited. Is there a more general, automated way for a network to direct our attention to its most salient features, whether they stem from geographic proximity or from similarity between node types?
A minimal example is illustrative. In a Watts-Strogatz network <cit.>, 𝒢^WS, all nodes start as exactly identical, sitting around a circle and each connected to their k nearest neighbors. At this point, whichever topological property we measure on the vertices, they all register the same. Now with a probability p for each edge, we break it and make a shortcut from one of the nodes just separated to another, random one across the graph (Fig. <ref>a). This introduces topological defects: One of the separated neighbors has lost a connection, which is gained in turn by the far-away node. Clustering near the shortcut decreases, since long-range triangles are not completed. Distances across the graph change. Etc. From each node, we measure these and other properties—e.g. centralities such as betweenness, closeness, eigenvector; local connectivity such as clustering, k-coreness, cliques; cycles associated to each vertex; etc. (App. <ref>). We then study how these properties were perturbed by rewiring, and how the earlier topological homogeneity is recovered in nodes further away from shortcuts. Projecting these properties into Principal Components (PC, App. <ref>) reveals how the more homogeneous nodes occupy a limited region (pink tip, Fig. <ref>b), and how the emerging range of topologically distinct vertices spreads over this eigenspace. Using red (PC-1), green (PC-2), and blue (PC-3) to code for position respectively along the first three components (App. <ref>), we map this topological diversity back into the 𝒢^WS circle graph (Fig. <ref>a) or another suitable layout (Fig. <ref>c). Nodes far from topological accidents (i.e. similar to themselves before rewiring) stand out along the red-coded PC-1, and thus they color the network in pink stretches of very regular, almost grid-like structures (Fig. <ref>c). Properties typical of a grid (e.g. square clustering, long average distance to other vertices) have bigger loadings on PC-1 in this example. Nodes closer to a shortcut score high in betweenness centrality instead. In Fig. <ref>a-c, these later vertices acquire more bluish (PC 3) and eventually greener (PC 2) hues, indicating that these PC correlate with betweenness and other defining aspects of shortcuts in Watts-Strogatz graphs.
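The pipeline just described for the Watts-Strogatz example can be reproduced with a few lines of NetworkX. The sketch below builds the graph with the parameters used here (300 nodes, each linked to its 4 nearest neighbours, rewiring probability 0.05; see App. <ref>) and measures a few of the per-node properties whose perturbation near shortcuts is discussed above; the full property set and the projection onto principal components are sketched in the appendices.

```python
import networkx as nx

# Watts-Strogatz ring with the parameters used here: 300 nodes, each initially
# linked to its 4 nearest neighbours, rewiring probability p = 0.05.
G = nx.watts_strogatz_graph(n=300, k=4, p=0.05, seed=1)

# A few of the per-node properties whose perturbation near shortcuts is
# discussed above: betweenness rises, clustering drops, onion layers shift.
betweenness = nx.betweenness_centrality(G)
clustering = nx.clustering(G)
onion = nx.onion_layers(G)

# Nodes attached to shortcuts stand out when ranked by betweenness.
for v in sorted(G, key=betweenness.get, reverse=True)[:5]:
    print(v, round(betweenness[v], 3), round(clustering[v], 2), onion[v])
```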
The topological diversity within this network is limited. A more telling example comes from a scientific collaboration graph <cit.>, 𝒢^CNB (App. <ref>). We measured topological properties (again, capturing centrality, local connectivity, cycles, etc.; App. <ref>) of every node in this graph and projected them onto PC eigenspace (Fig. <ref>d). PC in 𝒢^CNB are different from those in 𝒢^WS. In our analysis, each network reveals those features that better explain the topological diversity of its nodes. In this collaboration network, PC-1 defines an axis of centrality. More central researchers (in terms of eigenvector, betweenness, closeness, etc.) are projected onto smaller values of PC-1; while peripheral vertices score higher there (Fig. <ref>d and Sup. Fig. <ref>). This reveals a core-periphery structure that becomes obvious as we use distances in PC-space to define clusters of similar nodes (Fig. <ref>e). We term such clusters Topological Communities (TC, App. <ref>). TC consist of nodes more similar to each other (in topological terms) than to the rest of the network. Fig. <ref>e shows 5 TC for 𝒢^CNB, but TC form a hierarchy that can be explored seeking finer topological detail (Sup. Fig. <ref>).
Projecting TC back onto a network layout (Fig. <ref>f) helps clarify their roles. 𝒢^CNB consists of a shell (TC-1, black) that separates two topologically different sets of peripheral nodes (TC-4, blue, and TC-5, yellow) from a marked core (TC-3, green) alongside a rich club (TC-2, red). The rich club consists of scientists very central to the network, who collaborate amply across shell and core, but much more among themselves <cit.>. Both the core and rich club score high on centrality measurements (low PC-1). But they are told apart by PC-3, which correlates (among others) with rich-club properties, such as nodes belonging to very large cliques and k-cores, and completing a large number of neighbor triangles. One of the peripheries (TC-5, yellow) consists of researchers with only one collaborator—they are terminal leaves of the graph, suggesting newcomers to the collaboration network. The other periphery, TC-4 (blue), contains researchers with several connections. Both peripheries are told apart by PC-2 (Fig. <ref>e). Nodes in TC-4 are topologically similar to each other (they present akin values of centrality, involvement in cycles, connectivity patterns, etc.). But TC-4 is not contiguous—we cannot visit every TC-4 node without passing through other TC, prominently the shell (Sup. Fig. <ref>a). Hence, despite its topological regularity and marked role within 𝒢^CNB, TC-4 could never be picked up by classic GC. Actually, no 𝒢^CNB TC is recovered by GC despite their prominence and the clear network decomposition they entail (Sup. Fig. <ref>c-d).
Approaches to topological classification exist that divide graphs between core and periphery based on centrality measurements <cit.>. But a network might not have clear-cut core and periphery—e.g. 𝒢^WS and others below. A graph's structure might also be more nuanced than captured by a single, monotonically increasing feature—e.g. in 𝒢^CNB a PC uncovers the core-periphery but other, orthogonal components are needed to extract additional details. The TC framework is more precise, subtle, and unbiased in identifying cores, peripheries, and further structure—if present. It allows an automated analysis in which each graph guides us to its most salient features, whether their nature depends on centrality or other properties. It acts as a microscope that amplifies each graph's defining structures. Let us turn this approach to more relevant graphs.
Airports make up a global transport network <cit.>, 𝒢^GTN (App. <ref>). Its nodes spread over their PC eigenspace (Fig. <ref>a) differently to how 𝒢^WS and 𝒢^CNB vertices did on theirs. This anticipates a distinct decomposition. Some prominent TC appear distributed over network layout (Fig. <ref>b), as well as on a world map (Fig. <ref>c). TC-1 (black, which we term the US TC) contains most US airports except the major ones. TC-2 (red) contains the main hubs world-wide (including in the US). We name this the global backbone. Both the US TC and the global backbone score high in PC-1, which correlates positively with different node centralities (Sup. Fig. <ref>b). These two TC are told apart along PC-3, which correlates, among others, with larger clustering and square clustering (Sup. Fig. <ref>d). This indicates that, while both these TC are very central in the network, the global backbone has denser local connectivity, with each airport presenting more complete triangles among neighbors. Both TC-3 (green) and TC-4 (blue) contain medium-sized and smaller airports. TC-3 seems more Euro- and Caribbean-centered, while TC-4 clusters around South-East Asia and Brazil. But nodes of these TC appear all over the world (see details in Fig. <ref>c)—their difference is not geographical, but topological.
The TC decomposition also aids in producing coarse-grained summary graphs (App. <ref>). Fig. <ref>d has condensed all nodes of each TC to show that the global backbone, despite containing less airports than each of the other three TC discussed, channels most of the connections, including from the US TC to the rest of the world. The global backbone only contains one node in the former Soviet block (PRG, Prague), and none in Africa, Latin America, India, Japan, or Australia. The US TC is quite self-contained—it is the only one fairly captured by classic GC (Fig. <ref>e-f). Hence, it consists of geographically and geometrically close airports, that are also topologically similar as graph nodes. The US TC's topology is also dissimilar to that of other world regions. This likely stems from the US historic decision to prioritize airborne transportation over, e.g., railway. TC insights can carry socioeconomic and strategic relevance. The airport most topologically similar to Mexico's Ciudad Juárez (CJS) is Bodø (BOO), in Norway. These might constitute the best models of each-other for planning logistics or expansions, even though they are an ocean and 4 flights apart (compare to the graph's diameter, 5, and average path length, 2.27).
While one GC encloses the US TC completely, it also subsumes US airports from the global backbone. This misses a much more nuanced structure that is also erased as European, South American, and African-Asian-Austronesian nodes are grouped in their respective GC (Fig. <ref>f). Even though TC are geographically distributed, they remain fairly contiguous (Sup. Fig. <ref>a). Notwithstanding, classic GC fail to capture them. Both TC and GC reveal key, complementary facets of complex networks. Neither approach is superfluous—both should be applied when we start studying a new graph. Geographical clustering of airports was appreciated in an earlier study <cit.>, which also noted that, within each GC, nodes could play different roles. Hypothesis-driven measurements followed to clarify this (App. <ref>). The TC approach is more principled, incorporates more diverse measurements, and allows the graph to guide us towards its relevant internal structure.
Finally, let us turn our attention to connectomes, which summarize connectivity patterns in the brain. We downloaded 1,064 connectomes in which nodes are small brain volumes and an edge exists if at least one axonal fiber was detected connecting the corresponding volumes <cit.> (App. <ref>). Exhaustive analysis will follow in future works. Here we intend to illustrate TC, for which we focus on three brains: 𝒢^HC1, 𝒢^HC2, and 𝒢^HC3. 𝒢^HC1 was chosen because, after visual inspection of numerous connectomes, it presents three TC that are clear-cut and common to many other brains. 𝒢^HC1 nodes spread over PC-eigenspace again differently to previous examples (Fig. <ref>a), suggesting a novel decomposition. The most outstanding cluster (TC-2, red) includes nodes from the somatosensory and primary visual cortices (Fig. <ref>b-c). This is perhaps the feature that we observe more often across the database. TC-2 suggests that the striate and somatosensory cortices are topologically similar (even though they are non-contiguous) and singularly distinct from other brain regions. TC-3 (green) contains mostly nodes located deeper in the brain, while TC-5 (yellow) is more superficial (Fig. <ref>b, d-e). This might seem a trivial division; but it further highlights TC-2, which consists of mostly superficial nodes, yet differently grouped than other cortical regions. Also, the deep-superficial divide varies across brains in the database.
Connectomes 𝒢^HC2 and 𝒢^HC3 were chosen to illustrate symmetry breaking in the brain. In 𝒢^HC2, its TC-1 (black, Fig. <ref>f) groups mostly superficial, but only left-hemispheric, nodes. Compare this with the largely symmetric 𝒢^HC1, for which TC spanned both hemispheres—mirror symmetric, yet distant nodes had a similar topology. But for 𝒢^HC2, certain left-hemispheric nodes are topologically more similar to each other than to their mirror-symmetric counterparts. 𝒢^HC3 presents a more complicated decomposition. Its TC-2 (red, Fig. <ref>g) contains superficial nodes in the left hemisphere but deeper ones on the right. Symmetry and symmetry breaking are prominent brain features with clinical implications <cit.>, often linked to optimality and complexity <cit.>. But they are not easy to formalize and measure (recent efforts mobilized consortia with hundreds of researchers <cit.>). TC nimbly report topological symmetry and asymmetry—a research line that we will explore in the future. TC decomposition offers a refined structural analysis complementary to GC, which again fail to recover outstanding topological features (Sup. Fig. <ref>). Cortical centers singled out by TC participate in distinct cognitive processes. Our analysis suggests a principled way to further explore the effect of connectome topology on cognition by correlating TC and functional regions—which we will also explore in the future.
App. <ref> showcases brief analyses of networks of programming languages <cit.>, a macaque connectome <cit.>, yeast protein-protein interactions <cit.>, and bill co-sponsorship in the US house of representatives <cit.>. These studies illustrate a range of TC decompositions, highlighting the many ways in which a few topological building blocks can be arranged to produce complex networks. Some novel insights hinted at in App. <ref> will be developed in dedicated papers. The wealth of additional data to which this framework could be applied is vast. Our analysis is inspired by earlier observations that topological roles might vary within classic GC <cit.>, as well as by methods of numerical topology and dimensionality reduction in vogue in computational neuroscience <cit.> or cell biology <cit.>. We discuss connections with earlier work in App. <ref>. TC offer a novel network decomposition perhaps on par in importance with classic community detection—itself a cornerstone of network science. 20 years after their introduction <cit.>, GC are a vibrant research field both for the development of more refined algorithms <cit.> and as a revealing tool across the sciences <cit.>. We expect similar applicability for TC. This paper aims at introducing the framework, but tweaks and refinements should expand its possibilities. We limited ourselves to unweighted, undirected graphs—both for conceptual simplicity and ease of handling topological properties. As we introduce directedness and weights, we expect more distinct TC to appear—i.e. more topological building blocks available that can be arranged in more different ways, resulting in new insights across networks. An enticing application might help open the black box of Artificial Intelligence by studying TC in Artificial Neural Networks. The paradigm should also work on multiplex graphs <cit.> or simplicial networks <cit.>. On the technical side, we used PC again for simplicity, but a range of dimensionality reduction methods could be applied instead <cit.>. What matters is the TC conceptual framework, which we think offers a new, relevant tool in network science, covering a blind spot in graph analysis. A researcher who is confronted with a new network and asks “What can this graph tell me? What kind of analyses should I run on it?”, should, by default, try out GC and TC to let the graph shine a light on its most salient geometric and topological features.
§ ACKNOWLEDGMENTS
The author thanks Susanna Manrubia for her support and for early feedback provided about this manuscript (together with Iker Atienza). It was Susanna Manrubia who came up with the term Topological Communities, for which I am deeply indebted. The author also acknowledges the help and support of the “extended” group of Susanna Manrubia and Jose Cuesta, whose biweekly seminars helped shape this paper. This work has been supported by the Jesús Serra Foundation (grant FJSCNB-2022-12-B).
§ NETWORKS
A graph or network, 𝒢, consists of a set, V, containing N^n ≡ |V| nodes or vertices v_i ∈ V; and a set, E, containing N^e ≡ |E| unordered tuples of the kind (v_i, v_j) ∈ E that indicate that nodes v_i and v_j are connected. We call the elements of E edges or links interchangeably. We will say (v_i, v_j) ∈ E if either (v_i, v_j) ∈ E or (v_j, v_i) ∈ E. It is convenient to introduce the adjacency matrix: A ≡{a_ij;i=1, …, N^n;j=1, …, N^n } with a_ij=1 if (v_i, v_j) ∈ E and a_ij=0 otherwise. It is also convenient to introduce the neighborhood of a node: V_i ≡{v_j ∈ V, (v_i, v_j) ∈ E }. In this paper we only work with unweighted, undirected graphs with no self-loops (hence (v_i, v_i) ∉ E for any i).
All networks studied in this paper are unweighted and undirected. We will extend our methods to weighted and directed networks in following papers. We summarize some characteristics of our case-study graphs in Table <ref>. An itemized list follows with additional details where necessary:
* Random Watts-Strogatz (WS) network, 𝒢^WS: We generated a random WS network <cit.> with 300 nodes, connecting each vertex to its 4 nearest neighbors. The rewiring probability was 0.05.
* CNB collaboration network, 𝒢^CNB: In <cit.>, we built the collaboration network of researchers at the Author's home institution, the Spanish National Centre for Biotechnology (CNB). We focused on the time period 2016-2021, during which the CNB was distinguished as a Severo Ochoa center of excellence by the Spanish Ministry for Science, Innovation, and Universities. We collected all papers published by CNB researchers during that period. Each CNB researcher constitutes a node in the network, and two vertices are connected if the corresponding scientists coauthored at least one paper within the studied period. Edges leaving the graph (i.e. collaborations with external researchers) are ignored.
* Top 500 global transport network, 𝒢^GTN: We used data from <cit.> (available at <cit.>) to recreate the global network that connects every two airports between which there is at least one flight. This data is restricted to the top 500 airports in volume of passengers.
* MRI human connectomes, 𝒢^HC: Networks downloaded from <cit.>, generated by <cit.>. The dataset contains several libraries of connectomes generated from MRI from the Human Connectome Project <cit.>. In the original dataset, the brain has been divided into voxels. We are given the number of fibers connecting any two brain regions, which were inferred from diffusion MRI images using standard techniques <cit.>. We assume that two nodes are connected if at least one fiber exists between the corresponding regions. The dataset contains 1,064 brains with 463 regions in each connectome (note that we work with the largest connected components, so the final number of nodes varies from one brain to another). In this paper we discuss connectomes of subjects 101309 (𝒢^HC1), 992774 (𝒢^HC2), and 989987 (𝒢^HC3) within the Human Brain Connectome database. A more thorough analysis will be presented in successive papers.
* Programming languages, 𝒢^PL: In <cit.>, a phylogenetic tree of programming languages was built based on which languages took inspiration from each-other (as documented in each language's manual). This is a directed network, but we reduced the graph to its undirected version.
* Macaque connectome, 𝒢^MC: We used the network in <cit.>, which built upon collated data from 410 tract tracing studies found in the CoCoMac database (http://cocomac.org; http://cocomac.g-node.org; <cit.>). We ignored direction and weights of the connections. Opposed to our human connectomes, this network does not correspond to a unique brain, but to the result of merging data from several macaques.
* Yeast protein-protein interaction network, 𝒢^Y: We used the most recently published data on protein-protein interaction in yeast Saccharomyces cerevisiae <cit.>. While data in <cit.> is provided as directed links, this network is naturally undirected (interaction of protein A with B implies an equal interaction of B with A). Data is also unweighted.
* Bill co-sponsorship network within the US house, 𝒢^US: We used the tools developed in <cit.> and available in <cit.> as an R package. These tools allow us to reconstruct co-sponsorship networks within the US house and senate. Representatives in either chamber can support bills introduced for consideration. The software in <cit.> builds a network considering whether two representatives co-sponsor bills together more often than by random chance. The resulting network is unweighted and undirected. Available data spans from the 93rd to the 114th congresses. We discuss only two networks from the house—the first and last congresses, noted 𝒢^US93 and 𝒢^US114 respectively. Study of the complete dataset is left for future work.
§ TOPOLOGICAL PROPERTIES
For our topological analysis, we measure a series of properties for each node. We first chose a set of primary properties (Tab. <ref>) with the hope that they describe all relevant topological aspects of a node exhaustively. Specifically, we try to capture dimensions such as centrality (which can be of different kinds—e.g. eigenvector, betweenness, etc.), density of local connections (as measured by cliques and k-cores), cycles (girth or abundance of minimal cycles associated to a node), or overlap between neighbor connections (effective size or constraint). In this appendix we define in detail the properties just mentioned and others.
Some of these properties can carry similar information as others. Which measurements are redundant usually changes from one network to another—hence each graph induces a similarity structure between node properties (see App. <ref>). Adequate dimensionality reduction methods prevent redundancies from biasing our results (see App. <ref>). Important topological aspects might have been left out despite our efforts. This could be alleviated in the future by introducing additional measurements. Our central contribution in this paper is only contingent on these details. We have summarized the chosen primary properties, their formulas or notation, and some useful references in Tab. <ref>. Below we expand these properties in an itemized list with lengthier explanations where needed. All numerical evaluations in this paper have been implemented in Python using NetworkX <cit.>.
Let us take an arbitrary primary property, π, and note π^𝒢≡{π_i, v_i ∈𝒢} the result of numerically evaluating this quantity over all nodes in network 𝒢. For each primary property we derive two additional secondary properties: (i) the average over a node's neighbor, <π>_i ≡∑_j∈ V_iπ_j/k_i, where k_i is the node's degree; and (ii) the standard deviation over a node's neighbor, (π)_i ≡√(∑_j∈ V_i (π_j - <π>_i)^2/k_i ). We can run our analysis including or excluding secondary properties—actually, we can run it excluding any combination of measurements, also primary ones. We have found that including secondary properties enriches our methods, suggesting that they capture salient information about each node, and that this allows grouping up vertices that have similar relationships to their neighbors even if they are not contiguous in the network.
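As an illustration of how the secondary properties are obtained from a primary one, the following sketch computes <π>_i and (π)_i over each node's neighbourhood; the example graph (Zachary's karate club, bundled with NetworkX) is used purely for illustration and is not one of the networks analysed here.

```python
import numpy as np
import networkx as nx

def secondary_properties(G, primary):
    """Neighbour mean <pi>_i and neighbour standard deviation std(pi)_i of a
    primary property pi, given as a dict primary[node]."""
    mean, std = {}, {}
    for i in G:
        vals = np.array([primary[j] for j in G[i]], dtype=float)
        mean[i] = vals.mean()
        std[i] = vals.std()   # population std over the k_i neighbours
    return mean, std

# Example with node degree as the primary property (a degree-assortativity probe).
G = nx.karate_club_graph()
mean_deg, std_deg = secondary_properties(G, dict(G.degree()))
```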
The first set of secondary properties, <π>_i, can tell us whether nodes tend to connect with vertices which are similar or dissimilar to themselves (Sup. Fig. <ref>a). This will result in correlations or anti-correlations during PCA, which allows us to generalize ideas of assortativity. Assortativity is used to indicate that nodes with a high degree are connected to others with high degree as well. Assortative networks emerge spontaneously from entropic forces alone—given a configuration, they are much more common <cit.>. In anti-assortative graphs, high-degree nodes avoid each-other and prefer to link with less-connected vertices. This is rarer, suggesting specific mechanisms operating in that direction. Examples of antiassortative graphs are syntax networks or genotype networks explored by viruses. We do not need to stop at node degree. Given a network, do those with large betweenness centrality tend to connect with others scoring also high in this quantity? What about the number of cycles that a node is involved in? If such trends are relevant for some property in a network, our analysis will pick them up.
The second set of derived properties, (π)_i, tells us whether a node is picky regarding which other vertices it connects to (Sup. Fig. <ref>b). If (π)_i is small, then v_i tends to connect with others within a specific range of values of property π. If (π)_i is larger, the neighbors of v_i are heterogeneous. What is small or large only makes sense within the context of the whole network. Again, our analysis provides an automatic way to report on this aspect if it is salient in the graph.
§.§ Itemized list of primary properties
* Node degree: k_i ≡ |V_i|.
* Eigenvector centrality <cit.>: Let ν^1 be the eigenvector that corresponds to the largest eigenvalue of the Adjacency matrix. Then, ν^1_i is the i-th entry of this eigenvector, and it corresponds to the i-th node eigenvector centrality.
* Betweenness centrality <cit.>: Let σ(j,k) be the number of shortest paths connecting v_j and v_k. Let σ(j,k|i) be the number of such shortest paths that pass through v_i. Then the betweenness centrality reads: C^B_i ≡∑_j,kσ(j,k|i) / σ(j,k).
* Closeness centrality <cit.>: This property measures the inverse of the average shortest distance of a node to all others. Let d(i,j) be the shortest distance between v_i and v_j. Then C^C_i ≡ (N^n-1)/∑_j≠ i d(i,j).
* Harmonic centrality <cit.>: Related to the previous one, this quantity measures the sum of the inverses of the shortest distances of a node to all others: C^H_i ≡∑_j≠ i 1/d(i,j).
* PageRank: Pagerank is a popular algorithm that ranks nodes from most to least central (in the eigenvector centrality sense). It obviously correlates with eigenvector centrality, but it is non-linearly related to it and rather provides information about cumulative centrality (much as a cumulative distribution relates to a density distribution in statistics).
* Coreness or core number <cit.>: A k-core is found by iteratively removing all nodes with degree less than k until no more nodes can be removed. The coreness or core number of a node is the largest k-core to which a node belongs.
* Onion layer <cit.>: In the iterative process to compute a k-core we remove nodes sequentially. Assuming a connected component, to find the 2-core we first remove nodes with degree 1, as they have less than k=2 connections. These nodes belong to the most external onion layer. If, after removing this layer, we are left only with nodes with degree equal or larger than 2, we have found the 2-core (which might consist of a connected graph or many). Otherwise, after removing the first layer, a new set of nodes will be left with degree less than 2. This is the second onion layer. We remove them and repeat the process until the 2-core is located. Note again that this might be a unique connected component or many. Next we set up to find the 3-core, which is contained within the 2-core; then proceed for higher k-cores until none is found. A node's onion layer is the order in which it is removed in this process.
* Effective size <cit.>: The ego-network of node v_i (named ego-node in this context), 𝒢_i, is the subgraph 𝒢_i ⊂𝒢 that contains all neighbors of v_i. If nodes within an ego-network are linked, these connections are redundant in a very specific sense—e.g. because information will arrive repeatedly through many paths. Effective size is an attempt to capture this redundancy. In undirected, unweighted networks, it is straightforwardly E_i ≡ k_i - 2t_i/k_i <cit.>, where k_i is the ego-node's degree and t_i is the number of edges within 𝒢_i that do not involve v_i.
* Clique number: A clique is a graph in which all nodes are connected to each-other. A node's clique number is the size of the largest clique to which it belongs within the larger network, 𝒢. Mathematically, over all subgraphs g within a network 𝒢, ω_i ≡max_ω{ω≡ |g| ; g ⊂𝒢, v_i ∈ g ∧ g is a clique}.
* Number of cliques: Number of maximal cliques that a node belongs to. Mathematically, N^ω_i ≡ |{ g ⊂𝒢 : v_i ∈ g, |g|=ω_i, g is a clique}|.
* Number of triangles: Given a node, v_i, and its closest neighbors, V_i, a triangle is completed if v_j ∈ V_i and v_k ∈ V_i and (v_k, v_j) ∈ E. Thus, N^t_i ≡∑_j,k ∈ V_i a_jk. This property is tightly related to clustering. In our analysis we found that, in most networks, nodes with a large centrality also showed a very small clustering. This is so mostly because a node with a large centrality has got many more potential triangles and it is much more difficult that it will complete them all. We speculated that nodes with small degree might present high clustering even though they had a small number of associated triangles, hence that this quantity behaves differently to clustering and that it might be informative in some networks.
* Cycles or loops: Cycles are topologically relevant features. Characterizing a graph's cycle structure is particularly difficult because the number of loops grows combinatorially with network size. We tested several options in smaller graphs before deciding on the four properties reported next. One option was to produce a cycle basis—a set of loops from which all others in a graph can be generated—but each network comprises combinatorially many different bases. A stochastic evaluation was a possibility. Another option was to retain the so-called minimal basis, but finding it grows faster than polynomially with network size. We think that there is much room for improvement in the characterization of a graph's cycle structure. Whenever new breakthroughs appear, they can be seamlessly incorporated into our analysis. Here we opted to use recently published work <cit.> that localizes, for each node, an associated set of minimal cycles, S_i ≡{σ^l_i, l=1, ⋯, N^c_i }. Here, N^c_i is the number of minimal cycles associated to the vertex v_i and each σ^l_i is a collection of vertices σ^l_i ≡{ v^l_i(1), …, v^l_i(λ^l_i) } where λ^l_i ≡ |σ^l_i| is the length of the cycle. σ^l_i are such that an edge exists in 𝒢 connecting each two consecutive nodes in σ^l_i, (v^l_i(m), v^l_i(m+1)) ∈ E, and the last and first nodes of σ^l_i, (v^l_i,(λ^l_i), v^l_i(1)) ∈ E. See <cit.> for more details. From this set, we compute:
* Cycle ratio: Following <cit.>, from the above set we compute the matrix entries c_ij as the number of loops in ∪_i S_i that contain both vertices v_i and v_j. The cycle ratio is defined as r_i = 0 if v_i has no cycles associated and ∑_j c_ij/c_ii otherwise. This measures the presence of node v_i in the loops associated to other vertices.
* Number of minimum cycles: We score the size of the set of minimum cycles associated to each node, N^c_i.
* Inverse of maximum minimum cycle: Among a node's minimum cycles, there is one (or many) with largest length. We wanted to include this information in the analysis, but nodes without associated loops were troublesome. A possibility was to assign them a maximum minimum cycle of 0, but this would introduce an artificial proximity to vertices with small associated cycles. We opted for assigning an infinity-length cycle to such nodes, then working with the inverse of this quantity to avoid numerical problems. Thus, μ_i ≡ 1/max_|σ^l_i|{S_i}.
* Inverse of node girth: This measure presented the same problem as the previous one, and it was solved with the same strategy. A node's girth is the size of the smallest associated cycle. We take its inverse: γ_i ≡ 1/min_λ^l_i{S_i}.
* Clustering coefficient: fraction of possible triangles through a node that exist, c_i ≡ N^t_i/k_i(k_i-1).
* Square clustering coefficient: fraction of possible squares involving a node that exist. This property was developed to attempt a kind of clustering coefficient for bipartite networks, in which triangles are never possible <cit.>.
* Constraint: Node constraint is an alternative way to tackle the redundancy of connections within the immediate neighborhood of a node <cit.>. It is a measurement introduced in economics to quantify how much investment overlap there is between neighbor nodes.
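As a minimal illustration of how several of the primary properties itemized above can be evaluated in practice, the sketch below assembles a nodes × properties table with NetworkX and pandas. Only properties with a direct NetworkX implementation are included (the cycle-based measurements of <cit.> are omitted), and the example graph is an arbitrary stand-in rather than one of the networks in Tab. <ref>.

```python
import networkx as nx
import pandas as pd

def primary_property_table(G):
    """Evaluate a subset of the primary properties listed above for every node
    of an unweighted, undirected graph, returning a nodes x properties table."""
    props = {
        "degree": dict(G.degree()),
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
        "betweenness": nx.betweenness_centrality(G),
        "closeness": nx.closeness_centrality(G),
        "harmonic": nx.harmonic_centrality(G),
        "pagerank": nx.pagerank(G),
        "coreness": nx.core_number(G),
        "onion_layer": nx.onion_layers(G),
        "effective_size": nx.effective_size(G),
        "triangles": nx.triangles(G),
        "clustering": nx.clustering(G),
        "square_clustering": nx.square_clustering(G),
        "constraint": nx.constraint(G),
    }
    return pd.DataFrame(props)

# Example on a small random graph (a stand-in for the networks in Tab. 1).
G = nx.connected_watts_strogatz_graph(100, 4, 0.05, seed=2)
table = primary_property_table(G)
```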
§ PRINCIPAL COMPONENTS ANALYSIS
Some of the measurements might have taken infinite values, or might be the same for all nodes. In this last case, they do not offer any relevant information that clarifies variety of node topology within the graph. We detect and remove these pathological properties before our analysis. We are left with an array, Π_i ≡{π^l_i, l=1, …, N^p}, where N^p is the total number of properties of interest.
Π≡{Π_i, i=1, …, N^n} contains valuable information about our network, 𝒢. Analyses in network science are often driven either by guesses after visual inspection (e.g. because a community structure is outstanding, even though this can be deceiving <cit.>) or hypothesis validation (e.g. we want to check out whether our graph is assortative, whether it is a small world, etc.). Instead, our analysis asks the network to guide us towards its relevant features. Which facets are important usually changes from one network to another. For this task we can use any available dimensionality reduction techniques—e.g. autoencoders <cit.>, umap <cit.>, or other non-linear manifold embeddings <cit.>. For simplicity, we choose the most straightforward one, Principal Component Analysis (PCA) <cit.>. This also allows a more intuitive discussion that helps us focus on the novelty of Topological Communities, not on technicalities. More modern methods will doubtlessly enrich our framework in the future.
We center and normalize all variables before computing the correlation matrix. These matrices start showing us important information about global topological properties in each graph. Sup. Fig. <ref>a and c show, respectively, correlation matrices for our WS graph, 𝒢^WS, and the global airport network, 𝒢^GTN. In this example we only show primary properties to simplify our visualizations. Each network induces a different correlation structure between our primary properties. To the question: “Given a graph, do two distinct properties measure the same thing?” The answer is: “It depends on the specific network that we are looking at.” For example, the onion layer decomposition of a node most often correlates with that vertex's centrality and degree, so we might think that these quantities are always similar. But above we showed that this is not the case for a WS network, where nodes with high centrality might be in low onion layers.
From the correlation matrices we apply hierarchical clustering (using the fcluster tool from SciPy) to derive dendrograms (Sup. Fig. <ref>b and d) that summarize which properties are more similar to each other in a given network. Across all networks studied we tend to observe two blocks: Those that correlate with centrality (black branches) and those that correlate with clustering (red). This split is not always well defined (see the WS graph in Sup. Fig. <ref>a-b). When they are clear, these blocks are usually anti-correlated with each other. Clustering coefficient usually anti-correlates with centrality because very central nodes tend to have many more nearest neighbors and it is hence more difficult to complete all possible triangles. However, this relationship is far from trivial or parsimoniously linear; we think that it deserves further study.
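A sketch of this property-clustering step is given below: columns of the nodes × properties table are grouped by the correlation structure that the graph induces between them, using average-linkage clustering on a correlation-based distance. The choice of distance (1 minus the correlation, so that anti-correlated blocks end up far apart) and of linkage method are illustrative assumptions, not a prescription.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def property_blocks(table, n_blocks=2):
    """Group the columns of the nodes x properties table by the correlation
    structure the graph induces between them (constant columns assumed dropped)."""
    corr = np.corrcoef(table.to_numpy(dtype=float), rowvar=False)
    dist = np.clip(1.0 - corr, 0.0, None)   # anti-correlated properties end up far apart
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=n_blocks, criterion="maxclust")
    return dict(zip(table.columns, labels)), Z
```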
In the figure, arrows show properties that switch blocks when moving from 𝒢^WS to 𝒢^GTN. This shows that the information that a property contains is contingent on the graph and must be understood in relation to other quantities. Take the onion layer, ℒ. Given the way this property is built <cit.>, we would think that central nodes would be removed last, thus have a higher ℒ. This is the case in 𝒢^GTN, but not in 𝒢^WS. WS graphs start with their nodes arranged on a circle, and each vertex connected only to its 4 closest neighbors. Then, a small fraction of these local connections is rewired to a random vertex, building a bridge far away. Nodes involved in such bridges become the most central ones because: (i) the far-away vertex will have its degree increased by 1 and (ii) they now connect distant parts of the graph, scoring higher in betweenness, harmonic, and closeness centrality. However, these central vertices will also be close to the neighbor that has been disconnected to build the bridge. These nodes have their degree reduced by 1, and will hence be removed earlier when looking for k-cores. This will in turn affect vertices nearby, including very central ones, which will hence be removed in early onion layers. In 𝒢^WS, the last onion layers are occupied by nodes in lattice-like parts of the network, which are the least central ones in terms of betweenness or closeness.
From the correlation matrix we extract Principal Components (PC) <cit.>. These are directions within the space of node properties (within which the Π_i data points live) along which nodes present more variability. In other words, this calls our attention to dimensions of our data set along which there is more heterogeneity (hence more interesting structure to report) of nodes. PC define an orthogonal basis of the space of node properties. We can project the original data into this basis—we will note such projected data as Π̂≡{Π̂_i, i=1, …, N^n} and we will say that data is represented in PC-space or eigenspace. Note that each network node is represented as a point either in property space or PC-space. For visualization purposes, we often retain the first three PC and color-code each node according to the values they take in these components. We associate red, green, and blue to the first, second, and third PC, and interpolate linearly from hexadecimal values 00 to ff between the node scoring the least and the most along each PC.
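The projection and colour-coding step can be sketched as follows; scikit-learn's PCA is used here as a stand-in for any standard implementation, and the RGB mapping simply rescales each of the first three components to the interval [0, 1] (equivalently, hexadecimal 00 to ff).

```python
import numpy as np
from sklearn.decomposition import PCA

def pc_colors(table, n_components=3):
    """Centre and normalise the node properties, project them onto the leading
    principal components, and rescale the first three PC to RGB values in [0, 1]."""
    X = table.to_numpy(dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)        # assumes constant columns dropped
    pcs = PCA(n_components=n_components).fit_transform(X)
    lo, hi = pcs.min(axis=0), pcs.max(axis=0)
    rgb = (pcs - lo) / (hi - lo)                    # linear interpolation, 00 to ff
    return pcs, rgb

# rgb rows can be passed as node_color to nx.draw, or converted to hexadecimal
# strings with matplotlib.colors.to_hex for plotting.
```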
Sup. Fig. <ref>a-b shows this color-coded representation for the CNB collaboration network. In this space, some relevant structure becomes readily noticeable. A small set of nodes appears in blue (denoting a high score in PC 3). A valley of greener (high PC 2) points separates this first group from the bulk of the network, which appears in redder tones (high PC 1). Both color similarity and proximity in PC-space indicate that a group of nodes is topologically similar, meaning that they play similar structural roles in the graph. The color code can be projected back onto the network (Sup. Fig. <ref>c). This reveals that the set of bluer nodes is not only topologically similar, but also geometrically close. Visual inspection or classic clustering methods could also have hinted at this densely packed group of nodes. But, even if they missed this feature, our analysis reports it automatically because it is an outstanding one in this graph. From visual inspection or classic clustering it would have been much more difficult to find structure in the remainder of the network, as greener and redder nodes appear rather spread out. Nodes with very similar colors can appear at literally opposite sides of the network. Despite their distance, their topological properties indicate that such vertices are deeply similar.
The final step of our analysis uses the projection in PC-space to reveal Topological Communities (TC). This is explained in App. <ref>. First, let us briefly comment on the interpretation of Principal Components.
Despite their widespread use and conceptual simplicity, PCs are not always easy to interpret. This has led to the development of new methods, such as sparse PCA, that emphasize interpretability by constraining each PC to be associated with as few original properties as possible. This and other, more modern dimensionality-reduction methods will eventually refine our framework. Here we introduce the TC framework with the most straightforward choices, and leave the study of TC with more sophisticated techniques for future work. To close this section, we illustrate some of the information that can be extracted from PCs.
Sup. Fig. <ref> shows eigenvectors for the global airport network, 𝒢^GTN. The main component (Sup. Fig. <ref>b) correlates strongly with most properties in the centrality block from Sup. Fig. <ref>d. This centrality defines a main axis along which topologically diverse nodes are segregated. This pattern is observed in most (but not all) graphs studied, highlighting the importance of the centrality axis in complex networks. Less principal components become more difficult to decipher. The second one includes two cycle-related properties among its most relevant ones: the inverse of the maximum minimum cycle, μ (anti-correlated), and the inverse girth, γ (correlated). This means that nodes with a small girth (i.e. a small associated minimum cycle—say, a triangle) and a large maximum minimum cycle score higher in PC-2. Finally, the two main properties in PC-3 are the two clustering coefficients. Nodes scoring high in this component also present high core number and number of cliques, but score low in betweenness centrality.
§ LOCATING TOPOLOGICAL COMMUNITIES
We define Topological Communities (TC) as sets of nodes that are more topologically similar to each other than to other vertices of the network. We find them by hierarchically clustering nodes that fall close in PC-space. We could use a variety of suitable techniques, such as k-means <cit.> or methods that identify non-linear manifolds. We again opt for a straightforward method in order to focus on the conceptual novelty of TC.
We use the location of each node in PC-space to compute Euclidean distances between all vertices, then use these distances to build a dendrogram (again using fcluster from SciPy). From a bottom-up perspective, this algorithm proceeds as follows. First, each node makes up its own cluster and is represented by the node's position in PC-space. We merge the two closest clusters into a new one, which becomes represented by the center of mass of the vertices just grouped. We repeat this process iteratively, merging nodes and clusters. Elements that are further apart are merged later, inducing a distance in the emerging dendrogram. Looking at the algorithm from a top-down perspective, cutting branches at different distances along the dendrogram unfolds a hierarchy of clusters.
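A minimal sketch of this step, assuming Z_hat (the nodes projected into PC-space) from the previous snippet; SciPy's 'centroid' linkage matches the center-of-mass merging rule described above, and cutting the dendrogram at a chosen number of clusters yields the TC labels.

from scipy.cluster import hierarchy
from scipy.spatial.distance import pdist

# Euclidean distances between all nodes in PC-space (optionally keep only the
# leading components, e.g. Z_hat[:, :3])
node_dist = pdist(Z_hat, metric="euclidean")

# 'centroid' linkage represents each cluster by its center of mass
node_link = hierarchy.linkage(node_dist, method="centroid")

# Cut the dendrogram into a fixed number of Topological Communities
N_TC = 5
tc_labels = hierarchy.fcluster(node_link, t=N_TC, criterion="maxclust")  # label 1..N_TC per node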
Sup. Fig. <ref> shows this process for the CNB collaboration network. Let us examine the first 4 steps from the top-down viewpoint. We start with the whole network undivided, constituting a single cluster; we then increase the number of clusters as the network is split into more topologically coherent subsets of nodes. In Sup. Fig. <ref>, nodes are first divided into a broad core and a periphery. Then the periphery splits twice: first revealing a shell, closer to the core; then the remaining nodes are separated into two kinds of peripheral vertices. Finally, a subset of the core is excised, revealing that it constitutes a rich club.
Let us introduce some notation before moving on. In the dendrogram, we can have any number of TC from N^TC=1 (the whole network) to N^TC=N^n (each node constitutes its own TC). Methods to choose an optimal number of TC in each case will be explored elsewhere; in this paper we pick a suitable number in each case for illustration purposes. Let us call 𝒯𝒞 the sorted set 𝒯𝒞≡{TC(n), n=1, …, N^TC} of all TC once N^TC is fixed. Each TC is a collection of nodes. If a given node, v_i, belongs to a given TC, TC(n), we write v_i ∈ TC(n) and 𝒯𝒞(v_i) = TC(n).
With this in hand, we elaborate a series of strategies to study TC. First we introduce two kinds of Topological Summary Graphs (TSG): expanded and compact summary graphs, which capture the overall relationships between TC. For the expanded TSG (eTSG), we locate all sets of contiguous nodes that belong to the same TC. That is, starting with an arbitrary node, we locate all neighbors that belong to the same TC and that are accessible without visiting vertices belonging to another TC, and so on. Mathematically, such a set of nodes satisfies ν≡{v_i : if v_j∈ V_i and 𝒯𝒞(v_j)=𝒯𝒞(v_i), then v_j∈ν}. Let TC(n) be the TC to which all nodes in ν belong. As with individual nodes, we say ν⊂ TC(n) and 𝒯𝒞(ν) = TC(n). We call ν a contiguous subset of TC(n) and we say of the nodes in ν that they are contiguously connected. We turn each contiguous subset into a vertex of a new graph, the eTSG, as represented in Sup. Fig. <ref>a (elaborated for 𝒢^CNB split into 5 TC). The size of each eTSG vertex is proportional to the number of nodes from the original network that it contains, and the width of each link is proportional to the number of edges in the original network connecting the corresponding contiguous node sets. Note that, by definition, TC impose a coloring of the eTSG, which cannot contain two adjacent vertices belonging to the same TC.
For the compact TSG (cTSG) we group all nodes of the same TC into a single vertex, disregarding their contiguity. Thus, the cTSG (Sup. Fig. <ref>b for 𝒢^CNB with 5 TC) directly tells us the number of nodes in each TC (vertex size) and the number of edges connecting different TC anywhere in the network (link width).
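Both summary graphs are easy to assemble once every node carries its TC label. The sketch below assumes a NetworkX graph G of the original network and a dict tc mapping each node to its label (e.g. built from tc_labels above); a contiguous subset is simply a connected component of the subgraph induced by one TC.

import networkx as nx

# --- expanded TSG: one vertex per contiguous subset of a TC -----------------
subset_of = {}                                 # node -> contiguous-subset id
eTSG = nx.Graph()
for label in set(tc.values()):
    members = [v for v in G if tc[v] == label]
    for comp in nx.connected_components(G.subgraph(members)):
        sid = (label, len(eTSG))               # unique id for this contiguous subset
        eTSG.add_node(sid, tc=label, size=len(comp))
        for v in comp:
            subset_of[v] = sid

for u, v in G.edges():
    su, sv = subset_of[u], subset_of[v]
    if su != sv:                               # only edges across contiguous subsets
        w = eTSG.get_edge_data(su, sv, {"weight": 0})["weight"]
        eTSG.add_edge(su, sv, weight=w + 1)

# --- compact TSG: one vertex per TC, regardless of contiguity ---------------
cTSG = nx.Graph()
for label in set(tc.values()):
    cTSG.add_node(label, size=sum(1 for v in G if tc[v] == label))
for u, v in G.edges():
    if tc[u] != tc[v]:
        w = cTSG.get_edge_data(tc[u], tc[v], {"weight": 0})["weight"]
        cTSG.add_edge(tc[u], tc[v], weight=w + 1)

By construction, two adjacent nodes of the same TC always end up in the same contiguous subset, so the eTSG never links two vertices of the same TC, consistent with the coloring property noted above.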
The TSG for the CNB collaboration network tell us that the most central TC (the rich club, red, and the core, green) are virtually shielded from the periphery of the network by the shell (black). Also, the rich club is almost completely shielded from the shell. Looking at the eTSG, we can count two green and five black vertices. This implies that neither the core nor the shell is a fully contiguous set of nodes within the network. Hence, no community detection algorithm would have been able to identify the complete sets of nodes that share topological characteristics in each case, even though both these TC are large, rather central, and salient features of this graph. That task becomes even more difficult for peripheral nodes, which appear much more fragmented in the eTSG.
To further illustrate how TC encompass nodes that are not necessarily close, we searched for classic communities in each network. We refer to them as Geometric Communities (GC) to contrast the defining criteria—geometric adjacency in GC versus topological similarity in TC. Sup. Fig. <ref>c shows GC for the CNB collaboration graph found using a greedy (descriptive) algorithm. (We are aware of the limitations of such methods <cit.>, but, once again, we want to focus on illustrating TC, not on the technical details of GC.) Sup. Fig. <ref>d shows a bipartite network connecting TC and GC, with edge width proportional to the number of shared nodes. We plot only the 5 largest GC (alongside 5 TC) for convenience. GC algorithms group nodes based on some notion of geometric proximity within the graph (indeed, we know that the three main GC roughly follow departmental divisions within the research institution from which this collaboration network is derived <cit.>). They cut through TC because classic algorithms cannot separate nodes according to the topological role they play in the network.
§ EXTENDED ANALYSIS OF THE GLOBAL AIRPORT NETWORK
We built the global transportation network, 𝒢^GTN, containing the top 500 airports (from data at <cit.>). For each node we measured all the primary and derived properties, finding that none is pathological (i.e. none yields infinite values and none takes the same value for all nodes), so we could include all of them in our analysis. We computed cross-correlations between node properties. Sup. Fig. <ref>c-d shows the correlation matrix and dendrogram for primary properties on this network. The centrality- and clustering-correlated blocks emerge clearly.
We diagonalize and project all node properties into PC-space (Sup. Fig. <ref>b), where some non-trivial structure is already visible. Proximity of nodes in this space, as well as likeness in color (which codes PC 1 to 3 in red, green, and blue, respectively), denotes similar topological characteristics within the network. When projecting this color code back onto a network layout (Sup. Fig. <ref>c) we see that some topologically similar nodes are also geometrically close—i.e. nearby within the network. For example, a large orange and red cluster is prominent in the center-bottom half of the network. But other vertices with similar topology are far apart in the graph—note, e.g., two brown clusters: one at the top left and a smaller one at the bottom right. Both this proximity and this separation between topologically similar nodes become more evident when projecting the PC color code onto a world map, with each node plotted at the location of the corresponding airport (Sup. Fig. <ref>d). The reddest cluster is straightforwardly associated with the United States. Some clustering of brownish nodes appears in the South-East Asian region (around China), yet nodes with a similar color can be found spread over the globe. All other colors appear extended world-wide, without an obvious clustering pattern. This anticipates that our analysis will uncover network features not based on proximity within the graph—as classic geometric communities are.
TC are defined by topological proximity between nodes in PC-space (Sup. Fig. <ref>a). When projected onto a network layout, it becomes apparent that nodes in the same TC are not necessarily connected (Sup. Fig. <ref>b). This becomes even more evident when projecting TC onto the world map (Sup. Fig. <ref>c), and when we project each individual TC alone both on a map (Sup. Fig. <ref>) and on a graph layout (Sup. Fig. <ref>).
Sup. Fig. <ref> shows 5 TC (we chose this number for convenience—as stated elsewhere, we will explore criteria for the optimal number of TC in the future). We do not discuss TC-5 (hardly visible in Sup. Fig. <ref>), which consists of two poorly connected airports in Taiwan and is likely an outlier. Among the other TC, a prominent one (TC-1, black, Sup. Figs. <ref>a and <ref>a) is also spatially clustered around the United States. This shows that our analysis can report geometrically clustered nodes when they constitute a network feature that also stands out topologically. It also means that a potential geometric community of US airports may be topologically homogeneous as well. TC-1 (the US-TC) constitutes a unique contiguous subset, as illustrated in the eTSG (Sup. Fig. <ref>a).
TC-2 (red, Sup. Figs. <ref>b and <ref>b) also forms a contiguously connected component. This TC consists of the most important hubs in the world. Note that the most relevant US airports belong here, not in the US-TC. We dub TC-2 the global backbone (GB) TC. This TC is denser in Europe but includes hubs in South-East Asia and Dubai. Central and South America, Africa, India and most of the Middle East, and Oceania are not present in the GB-TC, highlighting the disconnectedness of these regions from the backbone of the global transport network. Regarding its topological qualities, we see that it scores similarly to the US-TC in PC 1 and 2 (Sup. Fig. <ref>a). PC-1 implies that both the US-TC and the GB-TC have similar centrality features (see App. <ref> for the interpretation of 𝒢^GTN PCs). Both TC differ in PC-3, which correlates with clustering measurements—implying that the GB-TC has a higher clustering than the US-TC.
TC-3 (green, Sup. Figs. <ref>c and <ref>c) is the most widely distributed one across the world. Two US airports (Eastern Iowa, CID, and Quad Cities, MLI) belong to this TC, meaning that they are topologically more similar to airports elsewhere than to the US-TC. They constitute the only non-contiguous nodes of TC-3. The rest of this TC is contiguously connected (Sup. Fig. <ref>c), as summarized by the large green node in the eTSG (Sup. Fig. <ref>a). Note how the main contiguous component of TC-3, as a graph, looks very different from those of the US-TC and the GB-TC (Sup. Fig. <ref>). This illustrates how our analysis picks up different topological classes.
TC-4 (blue, Sup. Figs. <ref>d and <ref>d) looks even more dissimilar to TC-1 and 2. Sup. Fig. <ref>d shows only the largest contiguous component of this TC, which encompasses most airports in South-East Asia. Apart from this large subset, TC-4 consists of 16 contiguous components (visible in the eTSG graph, Sup. Fig. <ref>a) spread all over the world. This TC is densely present in South America, South-East Asia, and Oceania. We remark that all nodes in TC-4 are topologically similar to each other despite being far apart both geographically (on the world map) and geometrically (within the network). TC-4 scores the lowest in PC-1, which correlates with centrality. The global transport network is well connected, has no terminal branches or leaves (as a tree graph would), and can be circumnavigated much like a toroid or a WS graph; but if we were to define a periphery, TC-4 would be the best candidate.
The compact summary graph (cTSG, Sup. Fig. <ref>b) shows the pattern of connections between TC, which suggests a hierarchy. At the top sits the GB-TC, which connects profusely with the US-TC and TC-3. These were, respectively, the topologically homogeneous transport network within the US and the most abundant and widely distributed TC across the world. A few direct connections exist between the US-TC and TC-3, but not as many as between the GB-TC and TC-3. In other words, the global backbone, formed by the main hub airports, is responsible for most connections between topologically dissimilar regions of the network. This is so even though the global backbone is the smallest TC of the four. TC-4 is relatively disconnected from the other topological communities, and is accessed mainly through TC-3.
This summary of the 4 main TC in 𝒢^GTN highlights how our analysis can group airports that are topologically similar even when they are not close in space or in the network. This becomes much clearer if we compare TC to classic Geometric Communities within 𝒢^GTN. Sup. Fig. <ref>a shows the result of applying a greedy community detection algorithm to our graph. It reveals that nodes group preferentially according to geography, as was already known <cit.>. This decomposition of the graph cuts across topological categories (Sup. Fig. <ref>b) and groups together nodes that play different roles within the graph structure at large—for example, all European airports belong to the same GC, even though only a few of them belong to the GB-TC and the rest are split between TC 3 and 4. Both analyses complement each other, as GC look much more compact on the network layout (Sup. Fig. <ref>c).
Sup. Fig. <ref>d illustrates how TC relate to GC and vice-versa. This allows us to build an index that quantifies whether topologically homogeneous communities are also geometrically close, and whether geometric communities are also topologically homogeneous. For each TC, we compute the fraction of its nodes in each GC. We use these fractions as probabilities to compute S(n), the entropy of TC(n) as divided into GC. Similarly, we compute the fraction of nodes from each GC assigned to each TC to compute H(m), the corresponding entropy of GC(m). In this example we obtain S(1)=0 (meaning that all airports in the US-TC are geometrically close), S(2)=1.45, S(3)=1.79, and S(4)=1.13. This confirms that TC-3 is the most widely spread TC, but both the GB-TC and TC-4 are also distributed in space. On the other hand, all GC are rather evenly spread across TC, as we get H(1)=1.34, H(2)=1.32, H(3)=0.91, and H(4)=0.96. This means that, while at least one TC picks up relevant geographic contiguity (as illustrated by S(1)=0), no GC is able to pick up topological homogeneity—not even the cluster centered on North America.
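This index is a pair of simple conditional entropies over the TC-GC contingency. A sketch follows; the function name is ours, and since the text does not state the logarithm base, natural logarithms are used here (so a value of 0 means a community is entirely contained in a single community of the other decomposition).

import numpy as np

def mixing_entropy(labels_a, labels_b):
    """For every community in labels_a, the entropy of how its members spread
    over the communities in labels_b (e.g. S: a=TC, b=GC; H: a=GC, b=TC)."""
    labels_a = np.asarray(labels_a)
    labels_b = np.asarray(labels_b)
    result = {}
    for a in np.unique(labels_a):
        sub = labels_b[labels_a == a]
        counts = np.unique(sub, return_counts=True)[1]
        p = counts / counts.sum()
        result[a] = float(-(p * np.log(p)).sum())
    return result

# S = mixing_entropy(tc_labels, gc_labels)   # geometric spread of each TC
# H = mixing_entropy(gc_labels, tc_labels)   # topological spread of each GC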
§ EXTENDED ANALYSIS OF HUMAN CONNECTOMES
We now study three human connectomes. The data is available at <cit.>. Connectomes were built by <cit.> from publicly available MRI data from the Human Connectome Project <cit.>. These networks comprise 463 voxels of brain tissue, each of them constituting a node in the resulting graphs. The raw data gives us the number of white matter fibers, inferred with standard algorithms, connecting each pair of brain voxels. For our analyses, two nodes are linked if at least one fiber exists between the two corresponding voxels. There are several alternatives to this choice. An obvious one is to study weighted connectomes. We could also connect two regions only if the number of fibers between them exceeds a certain threshold (e.g. if two regions share more fibers than the brain-wide average). We will explore these and other alternatives in future papers. Here we stick to the simplest method. This is not crucial, since our goal is to introduce TC and showcase how they contribute to different fields (here, neuroscience).
The original database contains 1,064 brains. We focus on those with patient IDs 101309 (𝒢^HC1), 992774 (𝒢^HC2), and 989987 (𝒢^HC3) in the Human Connectome Project database. Connectome 𝒢^HC1 was chosen because, after visual inspection of TC in many brains, it summarizes very nicely the common TC found across the database. This brain appears rather symmetric across hemispheres—as do most others in the database. In symmetric connectomes, typical TC span nodes in both hemispheres. This denotes that a given node is often more similar (in topological terms) to its contralateral partner than to other nearby regions. But this is not always the case: connectomes 𝒢^HC2 and 𝒢^HC3 were chosen to illustrate brain asymmetries. An asymmetric node is typically more topologically similar to others nearby than to its symmetric counterpart. Again, these three connectomes are shown to exemplify how TC can help us understand complex networked systems. More thorough analyses are underway—specifically, exploiting the large database to add statistical significance to the typical TC illustrated by 𝒢^HC1, or finding smaller topological communities that correlate with functional structures.
For each connectome, we measured all primary and derived node properties, finding that none is pathological (again, none takes infinite values and none takes the same value across all nodes). All were included in the analysis. Fig. <ref> shows the corresponding projection of nodes into PC eigenspace (panels <ref>a1, b1, and c1), and the projection of the PC color code onto each connectome, both as a network layout (panels <ref>a2, b2, and c2) and at the corresponding location of each node in the brain (panels <ref>a3, b3, and c3). We observe a strong contrast between colors in the first connectome (Fig. <ref>a), where pink and blue hues dominate, and the second and third brains (Fig. <ref>b-c), which appear mostly green. We checked that this was not due to some trivial property of the PCs—explicitly, whether a dimension was just sign-inverted in 𝒢^HC1 with respect to 𝒢^HC2 and 𝒢^HC3. This was not the case. The differences come down to two reasons: (i) PCs differ from one graph to another and (ii) nodes are distributed differently within each eigenspace. And this is so because each graph is different, and the TC paradigm allows each network to direct our attention to its relevant features.
We advanced above that one difference concerns brain symmetry. This becomes much clearer when looking at TC, which we represent from several perspectives in Figs. <ref> to <ref>. Before further discussing asymmetry, let us comment on the TC of 𝒢^HC1, which have been observed to different degrees in most brains in the database. The most outstanding feature is TC-2 (red in Figs. <ref>a, <ref>a, and <ref>a), which contains nodes mostly from the primary visual (or striate) cortex and the somatosensory area. This suggests that human connectomes around these two regions are topologically dissimilar to the rest of the brain, and that both areas are similar to each other. Next we find a set of nodes located in the most exposed layers of the cortex (TC-5, yellow in Figs. <ref>a, <ref>a, and <ref>c). This is in opposition to another set of nodes (TC-3, green in Figs. <ref>a, <ref>a, and <ref>b) that comprises mostly deeper regions. It makes sense that the wiring patterns of more superficial nodes (hence remote with respect to each other) differ from those of nodes at some depth. Our analysis correctly detects this. But both TC-3 and TC-5 score remarkably similarly in the first PC (Fig. <ref>a1), which is indicative of topological similarities in the wiring of most cortical nodes. In this connectome, the remaining TC-1 and TC-4 (respectively black and blue in Figs. <ref>a and <ref>a) hardly show up. They seem to correspond to deep and isolated subcortical structures, which we discuss in future work.
It is interesting to see how some of these structures are partly preserved in the asymmetric brains. The separation between more internal and more external regions seems present in 𝒢^HC2 (Figs. <ref>b, <ref>b, and <ref>), even though the most external nodes break their symmetry into two different TC—notably captured by the corresponding TC-1 (black in Figs. <ref>b, <ref>b, and <ref>a). The visual and somatosensory TC is not particularly differentiated in this brain.
𝒢^HC3 presents an interplay between symmetric and superficial nodes (Figs. <ref>c, <ref>c, and <ref>). Its TC-2 (red in Figs. <ref>c, <ref>c, and <ref>b) contains more external nodes in the left hemisphere and more internal ones in the right. The remaining most external nodes are split between TC-1 (black in Figs. <ref>c, <ref>c, and <ref>a) and TC-3 (green in Figs. <ref>c, <ref>c, and <ref>c), which recovers the striate cortex, some of the somatosensory nodes, and other, more frontal ones.
Looking at geometric communities in connectomes further illustrates how TC and GC implement two different, informative decompositions of the same network. Fig. <ref> shows how GC in human connectomes are strongly influenced by brain geometry. Connections across hemispheres are rare in these connectomes; thus the separation between brain sides, the longitudinal fissure, constitutes a natural barrier for classical community detection algorithms. This is in stark contrast with TC. Note, e.g., TC-3 and TC-5 in the first brain, 𝒢^HC1 (respectively green and yellow in Figs. <ref>a, <ref>a, <ref>b-c). These TC group nodes from both hemispheres. Even though connections along the longitudinal fissure are few, nodes alongside it have connectivity patterns and general topological properties similar to those of either TC-3 or TC-5, and are consequently grouped therein. In turn, GC group nodes according to their hemisphere first (Fig. <ref>c). Then, within each side, nodes are split roughly into the frontal and parietal lobes, on the one hand, and the occipital and temporal lobes, on the other.
If we project these GC back into the eigenspace of topological properties, we see that GC do not capture any of the informative structure in this space (Fig. <ref>a). This indicates that, for human connectomes, classical community detection algorithms miss out on a large share of meaningful information about these graphs. Note that this was not completely so for the global transport network, where one of the TC presented a large overlap with a classical community. In general, we cannot assume that this will happen, and both methodologies should be pursued to obtain complementary information about each network.
§ BRIEF ANALYSES OF ADDITIONAL NETWORKS
We include some extra case studies to illustrate the diversity of topological decompositions that can be found. These analyses are briefer than the previous ones—we will expand some of them in dedicated papers. The range of topologically distinct decompositions reported here is obtained considering just unweighted and undirected graphs, even though some of these networks are naturally weighted, directed, or both. We expect to uncover more topological diversity and deeper insights when including this information in future analyses. That is beyond the scope of this paper, which intends to introduce and illustrate the core concept of TC.
Fig. <ref> shows a TC analysis for programming languages. The network was elaborated in <cit.> based on whether a programming language is based on another—e.g. C++ is trivially based on C, but less obvious relationships were spelt out in <cit.>. This network is unweighted but directed; we ignore the directionality here.
From the projection of nodes into PC eigenspace (Fig. <ref>a) we find a cloud of points, or manifold, different from all previous ones. This indicates that the topological structure of this graph differs from the other examples. At the very center of the network we find TC-1 (black), which we call the graph's backbone. This TC contains very few languages, hence it is hardly noticeable in PC eigenspace (black nodes hidden in the background among green nodes in Fig. <ref>c), but its central place is apparent in the network layout (Fig. <ref>d). It contains the most relevant programming languages (Fig. <ref>e), from which all other structured languages descend. Note a relevant difference between this backbone and the one in the global transport network, 𝒢^GTN: nodes within the 𝒢^GTN backbone were very tightly connected, almost completing a clique (Fig. <ref>b); the ones in Fig. <ref>e are more sparsely connected, yet they hold the graph together.
Another interesting aspect of this network is that it presents two peripheries: a large one, TC-5 (yellow in Fig. <ref>c-d), and a smaller one, TC-3 (blue). This last TC is singular (and different from the two peripheral TC found for the CNB collaboration network) in that all its nodes descend exclusively from backbone languages. They are either very recent or rather unsuccessful languages that have not inspired newer programming languages yet. The TC analysis manages to pick up this interesting topological feature. A classical community detection algorithm would likely group these nodes somewhere alongside backbone languages, despite their deep differences.
Fig. <ref> summarizes the analysis of a macaque brain connectome—built in <cit.> with collated data from 410 tract-tracing studies. This way of reconstructing connectomes is different from the techniques used for the human connectomes; among other things, we are presented with a composite of several macaque brains. The projection in PC eigenspace (Fig. <ref>a) bears some resemblance to that of the typical human connectome, 𝒢^HC1 (Fig. <ref>a), but a lot of its features are absent. Noticeably, when plotted in network layout (Fig. <ref>b), it does not display the marked division between hemispheres seen in human brains (e.g. Fig. <ref>a2). A possibility is that tract tracing recovers many more inter-hemispheric connections than MRI, thus reducing the chasm.
The largest network that we have processed so far is the protein-protein interactome of the yeast Saccharomyces cerevisiae, 𝒢^Y. This network has very recently been presented in exquisite detail <cit.>. It contains 3,839 nodes and 30,955 edges, both an order of magnitude larger than in all other networks in this study.
Network size is a current limitation of our analysis. On the one hand, it is time-consuming to compute all topological properties for each node. Some calculations scale quickly with network size—e.g. betweenness centrality grows as ∼ N_n^3; other properties, even faster. On the other hand, complex networks often present heavy-tailed distributions—notably so for node degree. Heavy tails often appear also in measurements that correlate with degree, such as the different centralities. Thus, for very large networks, the first PC is usually dominated by heavy-tailed variables. This can eclipse more interesting topological features whose distributions decay exponentially, and eventually skews TC detection too. This effect takes a very visual form: eigenspace projections of networks with heavy-tailed properties result in a few nodes stretching the first PC by orders of magnitude, while all other dimensions appear flattened. A possible solution is to take logarithms of heavy-tailed properties, which are more informative in these cases. This would allow other features to have the relevance they deserve in defining TC.
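A possible implementation of that workaround is sketched below; the skewness threshold and the use of log(1+x) are our own illustrative choices, not part of the analysis actually run in this paper.

import numpy as np
from scipy.stats import skew

def tame_heavy_tails(props, skew_threshold=3.0):
    """Log-transform strongly right-skewed, non-negative property columns
    before z-scoring and PCA, so heavy tails do not dominate the first PC."""
    props = np.asarray(props, dtype=float)
    out = props.copy()
    for j in range(props.shape[1]):
        col = props[:, j]
        if col.min() >= 0 and skew(col) > skew_threshold:
            out[:, j] = np.log1p(col)          # log(1 + x) keeps zero values finite
    return out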
It has been possible to obtain an informative TC decomposition of 𝒢^Y because all its properties behave nicely despite it being such a large graph—i.e. no heavy tails. Fig. <ref>c shows a densely populated eigenspace. The cloud of points appears different from the other examples, again revealing a distinct topological disposition of nodes. Features in eigenspace are readily assigned to TC (Fig. <ref>e), which are non-trivially distributed over the network (Fig. <ref>f). The study in <cit.> also provides a detailed map of proteins within the yeast cell. This will allow us to link TC to structure and function in future studies.
Complex networks have been of great aid in understanding social systems <cit.>. A recent, fruitful case study is the US House of Representatives <cit.>, in which voting members can collaborate to sponsor the same bill. We can build a graph, 𝒢^US, that connects representatives who co-sponsored more bills than expected at random. Such networks have helped uncover the pattern of polarization currently in full sway in the US and elsewhere in the world. Applying our TC analysis we can reproduce results concerning polarization and extract some new insights.
Fig. <ref>a-d explores the bill co-sponsorship network of the US House of Representatives for the 93rd congress (Jan 1973 to Jan 1975), 𝒢^US93. Fig. <ref>e-h shows the same analysis for the 114th congress (Jan 2013 to Jan 2015), 𝒢^US114, forty years apart. The evolution toward the current, polarized state is remarkable and leaves a clear imprint both in PC eigenspace and in TC. Note how division along party lines in 𝒢^US93 (Fig. <ref>b) does not result in a clear topological separation of nodes. This means that, within both parties, there are numerous representatives who took on similar topological roles within this social network. Opposed to this, in 𝒢^US114, dividing nodes along party lines also results in a meaningful segregation of topological properties (Fig. <ref>f). In other words, representatives of each party occupy different topological roles within 𝒢^US114—so much so that this chasm is revealed to the naked eye in PC eigenspace.
In future studies, we intend to apply the TC paradigm to data from the intervening years. But an interesting insight is already revealed by the current, limited analysis. In the 114th congress, with social polarization well advanced and with salient topological clusters associated with either party, there is a symmetry breaking between the two largest groups. The Democrats (blue in Fig. <ref>f, h) are made up of mostly one large TC (TC-1, black in Fig. <ref>e, g). This indicates that most bill-support patterns established by Democrats in the 114th congress are very similar to each other. Meanwhile, the Republicans (red in Fig. <ref>f, h) can be decomposed into three prominent TC: (i) TC-3 (green in Fig. <ref>e, g), which contains representatives ready to collaborate across party lines (including a few Democrats); (ii) TC-4 (blue in Fig. <ref>e, g), which contains representatives very central to the Republican subnetwork; and (iii) TC-5 (yellow in Fig. <ref>e, g), which contains what seems to be a large periphery of Republican representatives.
Note that TC in this example summarize bill-support patterns from both parties. Since more Republicans appear in the bridge TC, we may wonder whether Republicans are more eclectic and ready to collaborate than Democrats, as this symmetry breaking of the TC decomposition suggests. This seems not to be the case. Analysis of the 111th congress (𝒢^US111, not shown) portrays a reversal between the Democratic and Republican topological decompositions—Republicans have less topological diversity than Democrats in 𝒢^US111; and vice-versa, with Democrats splitting into 3 TC that include a bipartisan collaboration subnetwork. We hypothesize that the difference stems from who holds the majority (Democrats in the 111th congress; Republicans across the 112th-114th congresses, in which topological patterns are as in Fig. <ref>e-h). This suggests that the minority party adopts a more homogeneous strategy for legislative collaborations, while the majority party may be forced to have representatives playing different roles. Clarifying this becomes relevant only in the current, polarized scenario; these questions are less important in the 93rd congress because representatives of either party are more topologically similar to each other. These and other issues will be investigated in successive papers.
§ CONNECTION WITH EARLIER STUDIES
Our work is directly inspired by methods currently in vogue in neuroscience <cit.> and in cell and molecular biology <cit.>. These recent contributions have explored complex neural or biological systems by studying large collections of objects (e.g. activity of neural assemblies, electric brain waves, or different cells), each object being described by a similarly large collection of qualitative features (e.g. neurons involved in the assembly activity; frequency, intensity, and other dynamical markers of neural waves; arrays of gene expression in cells). Those objects of interest become represented by points in very high-dimensional spaces. But features often correlate with each other, thus the seemingly complex, high-dimensional cloud of points made up of all objects in a study can often be summarized by a very low-dimensional manifold. This manifold can be inferred through dimensionality reduction techniques such as PCA, UMAP, and others.
Take <cit.> to illustrate these methods at their best. Gardner et al. recorded activity from a very large assembly of neurons in the entorhinal cortex of mice while they moved freely across an empty maze—i.e. a flat plane. These regions are known to code the mouse's position. As a consequence, the binary vector that indicates whether each neuron is ON or OFF at a point in time also moves around a high-dimensional space, its trajectory tracking the mouse's whereabouts in this abstract mathematical realm. Those binary collections constitute feature vectors. Adjacent points in physical space often produce similar representations, hence correlations arise. When applying dimensionality reduction techniques, neural assembly activity turns out to dwell on a relatively simple 2-D surface: a toroid. This manifold captures, quite optimally, relevant properties of positions over a plane. This has implications for theories of neural representation, as discussed in <cit.>.
In this work we turned these trendy, powerful methods to finding complex graph decompositions based on explicit topological similarity between individual nodes. The resulting high-dimensional representation of each network can often be reduced to very low-dimensional manifolds, each with a characteristic shape that summarizes the diverse topological roles played by vertices. Because our decompositions (TC) group nodes based on topological similarity, vertices with similar topological roles do not need to be (and often are not) contiguous in the graph. This lies beyond the capabilities of classic community detection techniques, which fail to capture the highly informative structure found by TC.
The need for a better characterization of distinct topological roles within classic geometric communities, and the recognition that nodes with the same topological role might be present in different GC and might not be contiguous, has been made explicit in some of the papers that inspired our work. Notably, in <cit.>, classic geometric communities are detected in a global airport transport network (akin to ours). The (now-)expected clustering of airports by geographical zones is recovered, but an additional analysis is then carried out to find distinct roles within such geographic clusters. For example, some airports within a country are hubs connecting to the wider global network, while others are part of a provincial periphery. Such prior knowledge might guide our intuition, suggesting we measure within-module degree or participation in outside clusters (as the authors in <cit.> did) to separate nodes according to these more refined topological aspects.
We expand this kind of analysis in several ways. We want each network to tell us what is salient in its topology. We do not assume that sets of nodes will stand out because of some aspect of degree or any other specific quantity. Instead, we try to capture all potentially relevant graph-defining facets of nodes. Dimensionality reduction techniques then guide us to each graph's salient structures in a less biased way. Thus, ours is a more principled and encompassing framework to address this problem in complex networks.
Other recent approaches have also tried to group nodes not based on classic geometric communities, but rather on similarities between their neighborhoods. This can be important, e.g., to identify plausible functional routes in genetic regulation. Even if two genes belong to different regulatory modules, if they target similar nodes downstream they might be functionally related, or perhaps make use of the same signaling route for completely different purposes. This is a relevant aspect of network topology, and methods such as the one introduced in <cit.> take care of it. The TC framework should be able to extract similar information if and when such structures are relevant aspects of the studied graph. Note that for some research questions we might not be interested in retrieving, in order of saliency, all TC. We might be interested only in, e.g., the overlap between targeted nodes (for which the framework in <cit.> is preferable). But if we care about the topological make-up of a graph at large, then the TC paradigm improves on this by incorporating more relevant dimensions and by detecting non-contiguous yet similar sets of nodes (e.g. not necessarily connected to the same neighborhood, but sharing other, more abstract similarities).
Finally, take the case of node2vec, an elegant and popular approach to make nodes and network structure readable to neural networks. This technique starts a series of random walks from each node, recording the sequence of vertices visited. These sequences are then provided to artificial neural networks that try to extract patterns from the random walks, and then group nodes according to the patterns detected. Again, node2vec might rely on features that are shared by proximal nodes—since they are more likely to produce similar random walks. On the other hand, node2vec and the neural networks it employs act much like black boxes—we do not control what patterns might be extracted from the random walks, nor whether they exhaustively cover all relevant topological or geometrical dimensions of a graph. While efforts exist to make AI interpretable, we think that by focusing on specific topological aspects (which are readily interpretable and central to what makes graphs different from each other), the TC framework is a more straightforward contribution to expanding our understanding of complex networks.
§ REFERENCES
shen2002network
S. S. Shen-Orr, R. Milo, S. Mangan, and U. Alon, “Network motifs in the
transcriptional regulation network of escherichia coli,” Nature
genetics, vol. 31, no. 1, pp. 64–68, 2002.
farkas2003topology
I. Farkas, H. Jeong, T. Vicsek, A.-L. Barabási, and Z. N. Oltvai, “The
topology of the transcription regulatory network in the yeast, saccharomyces
cerevisiae,” Physica A: Statistical Mechanics and its Applications,
vol. 318, no. 3-4, pp. 601–612, 2003.
luscombe2004genomic
N. M. Luscombe, M. Madan Babu, H. Yu, M. Snyder, S. A. Teichmann, and
M. Gerstein, “Genomic analysis of regulatory network dynamics reveals large
topological changes,” Nature, vol. 431, no. 7006, pp. 308–312, 2004.
dobrin2004aggregation
R. Dobrin, Q. K. Beg, A.-L. Barabási, and Z. N. Oltvai, “Aggregation of
topological motifs in the escherichia coli transcriptional regulatory
network,” BMC bioinformatics, vol. 5, pp. 1–8, 2004.
palla2005uncovering
G. Palla, I. Derényi, I. Farkas, and T. Vicsek, “Uncovering the
overlapping community structure of complex networks in nature and society,”
nature, vol. 435, no. 7043, pp. 814–818, 2005.
chen2008revealing
Z. J. Chen, Y. He, P. Rosa-Neto, J. Germann, and A. C. Evans, “Revealing
modular architecture of human brain structural networks by using cortical
thickness from mri,” Cerebral cortex, vol. 18, no. 10, pp. 2374–2381,
2008.
bullmore2009complex
E. Bullmore and O. Sporns, “Complex brain networks: graph theoretical analysis
of structural and functional systems,” Nature reviews neuroscience,
vol. 10, no. 3, pp. 186–198, 2009.
van2012high
M. P. Van Den Heuvel, R. S. Kahn, J. Goñi, and O. Sporns, “High-cost,
high-capacity backbone for global brain communication,” Proceedings of
the National Academy of Sciences, vol. 109, no. 28, pp. 11372–11377, 2012.
harriger2012rich
L. Harriger, M. P. Van Den Heuvel, and O. Sporns, “Rich club organization of
macaque cerebral cortex and its role in network communication,” 2012.
betzel2014changes
R. F. Betzel, L. Byrge, Y. He, J. Goñi, X.-N. Zuo, and O. Sporns, “Changes
in structural and functional connectivity among resting-state networks across
the human lifespan,” Neuroimage, vol. 102, pp. 345–357, 2014.
montoya2006ecological
J. M. Montoya, S. L. Pimm, and R. V. Solé, “Ecological networks and their
fragility,” Nature, vol. 442, no. 7100, pp. 259–264, 2006.
ings2009ecological
T. C. Ings, J. M. Montoya, J. Bascompte, N. Blüthgen, L. Brown, C. F.
Dormann, F. Edwards, D. Figueroa, U. Jacob, J. I. Jones, et al.,
“Ecological networks–beyond food webs,” Journal of animal ecology,
vol. 78, no. 1, pp. 253–269, 2009.
bascompte2003nested
J. Bascompte, P. Jordano, C. J. Melián, and J. M. Olesen, “The nested
assembly of plant–animal mutualistic networks,” Proceedings of the
National Academy of Sciences, vol. 100, no. 16, pp. 9383–9387, 2003.
corominas2009ontogeny
B. Corominas-Murtra, S. Valverde, and R. Solé, “The ontogeny of scale-free
syntax networks: phase transitions in early language acquisition,” Advances in Complex Systems, vol. 12, no. 03, pp. 371–392, 2009.
goni2011semantic
J. Goñi, G. Arrondo, J. Sepulcre, I. Martincorena, N. Vélez de
Mendizábal, B. Corominas-Murtra, B. Bejarano, S. Ardanza-Trevijano,
H. Peraita, D. P. Wall, et al., “The semantic organization of the
animal category: evidence from semantic verbal fluency and network theory,”
Cognitive processing, vol. 12, pp. 183–196, 2011.
sole2015ambiguity
R. V. Solé and L. F. Seoane, “Ambiguity in language networks,” The
Linguistic Review, vol. 32, no. 1, pp. 5–35, 2015.
seoane2018morphospace
L. F. Seoane and R. Solé, “The morphospace of language networks,” Scientific reports, vol. 8, no. 1, p. 10465, 2018.
corominas2018chromatic
B. Corominas-Murtra, M. Sànchez Fibla, S. Valverde, and R. Solé,
“Chromatic transitions in the emergence of syntax networks,” Royal
Society open science, vol. 5, no. 12, p. 181286, 2018.
valverde2007topology
S. Valverde, R. V. Solé, M. A. Bedau, and N. Packard, “Topology and
evolution of technology innovation networks,” Physical Review
E-Statistical, Nonlinear, and Soft Matter Physics, vol. 76, no. 5,
p. 056118, 2007.
valverde2015punctuated
S. Valverde and R. V. Solé, “Punctuated equilibrium in the large-scale
evolution of programming languages,” Journal of The Royal Society
Interface, vol. 12, no. 107, p. 20150249, 2015.
milgram1967small
S. Milgram, “The small world problem,” Psychology today, vol. 2, no. 1,
pp. 60–67, 1967.
travers1977experimental
J. Travers and S. Milgram, “An experimental study of the small world
problem,” in Social networks, pp. 179–197, Elsevier, 1977.
neal2014backbone
Z. Neal, “The backbone of bipartite projections: Inferring relationships from
co-authorship, co-sponsorship, co-attendance and other co-behaviors,” Social Networks, vol. 39, pp. 84–97, 2014.
andris2015rise
C. Andris, D. Lee, M. J. Hamilton, M. Martino, C. E. Gunning, and J. A. Selden,
“The rise of partisanship and super-cooperators in the us house of
representatives,” PloS one, vol. 10, no. 4, p. e0123507, 2015.
neal2020sign
Z. P. Neal, “A sign of the times? weak and strong polarization in the us
congress, 1973–2016,” Social Networks, vol. 60, pp. 103–112, 2020.
hohmann2023quantifying
M. Hohmann, K. Devriendt, and M. Coscia, “Quantifying ideological polarization
on a network using generalized euclidean distance,” Science Advances,
vol. 9, no. 9, p. eabq2044, 2023.
onnela2003dynamics
J.-P. Onnela, A. Chakraborti, K. Kaski, J. Kertesz, and A. Kanto, “Dynamics of
market correlations: Taxonomy and portfolio analysis,” Physical Review
E, vol. 68, no. 5, p. 056110, 2003.
bialek2007rediscovering
W. Bialek and R. Ranganathan, “Rediscovering the power of pairwise
interactions,” arXiv preprint arXiv:0712.4397, 2007.
corominas2013origins
B. Corominas-Murtra, J. Goñi, R. V. Sole, and C. Rodríguez-Caso, “On
the origins of hierarchy in complex networks,” Proceedings of the
National Academy of Sciences, vol. 110, no. 33, pp. 13316–13321, 2013.
milo2002network
R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon,
“Network motifs: simple building blocks of complex networks,” Science, vol. 298, no. 5594, pp. 824–827, 2002.
newman2006finding
M. E. Newman, “Finding community structure in networks using the eigenvectors
of matrices,” Physical review E, vol. 74, no. 3, p. 036104, 2006.
peixoto2021descriptive
T. P. Peixoto, “Descriptive vs. inferential community detection in networks:
pitfalls, myths, and half-truths,” arXiv preprint arXiv:2112.00183,
2021.
watts1998collective
D. J. Watts and S. H. Strogatz, “Collective dynamics of `small-world'
networks,” nature, vol. 393, no. 6684, pp. 440–442, 1998.
manrubia2022report
S. Manrubia Cuevas, I. Atienza-Diez, and L. F. Seoane, “Report scientific
collaboration network, 2016-2021 national centre for biotechnology (csic),”
2022.
csermely2013structure
P. Csermely, A. London, L.-Y. Wu, and B. Uzzi, “Structure and dynamics of
core/periphery networks,” Journal of Complex Networks, vol. 1, no. 2,
pp. 93–123, 2013.
rombach2014core
M. P. Rombach, M. A. Porter, J. H. Fowler, and P. J. Mucha, “Core-periphery
structure in networks,” SIAM Journal on Applied mathematics, vol. 74,
no. 1, pp. 167–190, 2014.
hebert2016multi
L. Hébert-Dufresne, J. A. Grochow, and A. Allard, “Multi-scale structure
and topological anomaly detection via a new network statistic: The onion
decomposition,” Scientific reports, vol. 6, no. 1, p. 31708, 2016.
marcelino2012critical
J. Marcelino and M. Kaiser, “Critical paths in a metapopulation model of h1n1:
Efficiently delaying influenza spreading through flight cancellation,” PLoS currents, vol. 4, 2012.
networkResources
“Resources for network science by Marcus Kaiser's Dynamic Connectome Lab.”
<https://sites.google.com/view/dynamicconnectomelab/resources>.
Accessed: 2023-08-28.
guimera2005worldwide
R. Guimera, S. Mossa, A. Turtschi, and L. N. Amaral, “The worldwide air
transportation network: Anomalous centrality, community structure, and
cities' global roles,” Proceedings of the National Academy of
Sciences, vol. 102, no. 22, pp. 7794–7799, 2005.
humanConnectome
“The pit group connectomes.”
<https://braingraph.org/cms/download-pit-group-connectomes/>.
Accessed: 2023-08-28.
kerepesi2017braingraph
C. Kerepesi, B. Szalkai, B. Varga, and V. Grolmusz, “The braingraph. org
database of high resolution structural connectomes and the brain graph
tools,” Cognitive Neurodynamics, vol. 11, pp. 483–486, 2017.
davidson1995brain
R. J. Davidson and K. Hugdahl, Brain asymmetry.
Mit Press, 1995.
seoane2020modeling
L. F. Seoane and R. Solé, “Modeling brain reorganization after
hemispherectomy,” bioRxiv, pp. 2020–12, 2020.
carballo2022phase
A. Carballo-Castro and L. F. Seoane, “Phase transitions in a simple model of
focal stroke imitate recovery and suggest neurorehabilitation strategies,”
bioRxiv, pp. 2022–12, 2022.
seoane2023optimality
L. F. Seoane, “Optimality pressures toward lateralization of complex brain
functions,” Physical Review X, vol. 13, no. 3, p. 031028, 2023.
kong2014mapping
X.-Z. Kong, S. R. Mathias, T. Guadalupe, and the ENIGMA Laterality Working Group, “Mapping cortical
brain asymmetry in 17,141 healthy individuals worldwide via the ENIGMA Consortium,” Proceedings of the National Academy of Sciences, vol. 115, no. 22, pp. E5154–E5163, 2018.
kong2022mapping
X.-Z. Kong, M. C. Postema, T. Guadalupe, C. de Kovel, P. S. Boedhoe,
M. Hoogman, S. R. Mathias, D. Van Rooij, D. Schijven, D. C. Glahn, et al., “Mapping brain asymmetry in health and disease through the enigma
consortium,” Human brain mapping, vol. 43, no. 1, pp. 167–181, 2022.
michaelis2023social
A. C. Michaelis, A.-D. Brunner, M. Zwiebel, F. Meier, M. T. Strauss, I. Bludau,
and M. Mann, “The social and structural architecture of the yeast protein
interactome,” Nature, vol. 624, no. 7990, pp. 192–200, 2023.
gallego2017neural
J. A. Gallego, M. G. Perich, L. E. Miller, and S. A. Solla, “Neural manifolds
for the control of movement,” Neuron, vol. 94, no. 5, pp. 978–984,
2017.
houston2022squishy
K. Houston-Edwards, “Squishy math: Topology is becoming an indispensable tool for data analysis
and revealing doughnuts in the brain,” Scientific American, October 2022.
scientificamerican.com.
gardner2022toroidal
R. J. Gardner, E. Hermansen, M. Pachitariu, Y. Burak, N. A. Baas, B. A. Dunn,
M.-B. Moser, and E. I. Moser, “Toroidal topology of population activity in
grid cells,” Nature, vol. 602, no. 7895, pp. 123–128, 2022.
sebastian2023topological
E. R. Sebastian, J. P. Quintanilla, A. Sánchez-Aguilera, J. Esparza,
E. Cid, and L. M. de la Prida, “Topological analysis of sharp-wave ripple
waveforms reveals input mechanisms behind feature variations,” Nature
neuroscience, vol. 26, no. 12, pp. 2171–2181, 2023.
becht2019dimensionality
E. Becht, L. McInnes, J. Healy, C.-A. Dutertre, I. W. Kwok, L. G. Ng,
F. Ginhoux, and E. W. Newell, “Dimensionality reduction for visualizing
single-cell data using umap,” Nature biotechnology, vol. 37, no. 1,
pp. 38–44, 2019.
fortunato202220
S. Fortunato and M. E. Newman, “20 years of network community detection,”
Nature Physics, vol. 18, no. 8, pp. 848–850, 2022.
bianconi2013statistical
G. Bianconi, “Statistical mechanics of multiplex networks: Entropy and
overlap,” Physical Review E-Statistical, Nonlinear, and Soft Matter
Physics, vol. 87, no. 6, p. 062806, 2013.
peixoto2022disentangling
T. P. Peixoto, “Disentangling homophily, community structure, and triadic
closure in networks,” Physical Review X, vol. 12, no. 1, p. 011004,
2022.
mucha2010community
P. J. Mucha, T. Richardson, K. Macon, M. A. Porter, and J.-P. Onnela,
“Community structure in time-dependent, multiscale, and multiplex
networks,” science, vol. 328, no. 5980, pp. 876–878, 2010.
nicosia2013growing
V. Nicosia, G. Bianconi, V. Latora, and M. Barthelemy, “Growing multiplex
networks,” Physical review letters, vol. 111, no. 5, p. 058701, 2013.
moore2012analyzing
T. J. Moore, R. J. Drost, P. Basu, R. Ramanathan, and A. Swami, “Analyzing
collaboration networks using simplicial complexes: A case study,” in 2012 Proceedings IEEE INFOCOM Workshops, pp. 238–243, IEEE, 2012.
patania2017shape
A. Patania, G. Petri, and F. Vaccarino, “The shape of collaborations,” EPJ Data Science, vol. 6, pp. 1–16, 2017.
kramer1991nonlinear
M. A. Kramer, “Nonlinear principal component analysis using autoassociative
neural networks,” AIChE journal, vol. 37, no. 2, pp. 233–243, 1991.
mcinnes2018umap
L. McInnes, J. Healy, and J. Melville, “Umap: Uniform manifold approximation
and projection for dimension reduction,” arXiv preprint
arXiv:1802.03426, 2018.
van2008visualizing
L. Van der Maaten and G. Hinton, “Visualizing data using t-sne.,” Journal of machine learning research, vol. 9, no. 11, 2008.
neal42constructing
Z. P. Neal, “Constructing legislative networks in R using incidentally and
backbone,” Connections, vol. 42, no. 1, pp. 1–9, 2022.
humanConnectomeProject
Human Connectome Project, “The Human Connectome Project.”
<https://wiki.humanconnectome.org/>, 2008.
Accessed: 2024-08-28.
stephan2001advanced
K. E. Stephan, L. Kamper, A. Bozkurt, G. A. Burns, M. P. Young, and
R. Kötter, “Advanced database methodology for the collation of
connectivity data on the macaque brain (cocomac),” Philosophical
Transactions of the Royal Society of London. Series B: Biological Sciences,
vol. 356, no. 1412, pp. 1159–1186, 2001.
kotter2004online
R. Kötter, “Online retrieval, processing, and visualization of primate
connectivity data from the cocomac database,” Neuroinformatics,
vol. 2, pp. 127–144, 2004.
incidentally
“R package to build us congress bill-cosponsorship networks.”
https://cran.r-project.org/web/packages/incidentally/vignettes/congress.html.
Accessed: 2024-02-12.
bonacich1987power
P. Bonacich, “Power and centrality: A family of measures,” American
journal of sociology, vol. 92, no. 5, pp. 1170–1182, 1987.
newman2018networks
M. Newman, Networks.
Oxford university press, 2018.
brandes2001faster
U. Brandes, “A faster algorithm for betweenness centrality,” Journal of
mathematical sociology, vol. 25, no. 2, pp. 163–177, 2001.
freeman2002centrality
L. C. Freeman et al., “Centrality in social networks: Conceptual
clarification,” Social network: critical concepts in sociology.
London: Routledge, vol. 1, pp. 238–263, 2002.
boldi2014axioms
P. Boldi and S. Vigna, “Axioms for centrality,” Internet Mathematics,
vol. 10, no. 3-4, pp. 222–262, 2014.
batagelj2003m
V. Batagelj and M. Zaversnik, “An o (m) algorithm for cores decomposition of
networks,” arXiv preprint cs/0310049, 2003.
allard2019percolation
A. Allard and L. Hébert-Dufresne, “Percolation and the effective structure
of complex networks,” Physical Review X, vol. 9, no. 1, p. 011023,
2019.
lazega1995structural
E. Lazega, “Structural holes: the social structure of competition,” 1995.
borgatti1997structural
S. P. Borgatti, “Structural holes: Unpacking burt's redundancy measures,”
Connections, vol. 20, no. 1, pp. 35–38, 1997.
fan2021characterizing
T. Fan, L. Lü, D. Shi, and T. Zhou, “Characterizing cycle structure in
complex networks,” Communications Physics, vol. 4, no. 1, p. 272,
2021.
lind2005cycles
P. G. Lind, M. C. Gonzalez, and H. J. Herrmann, “Cycles and clustering in
bipartite networks,” Physical review E, vol. 72, no. 5, p. 056127,
2005.
zhang2008clustering
P. Zhang, J. Wang, X. Li, M. Li, Z. Di, and Y. Fan, “Clustering coefficient
and community structure of bipartite networks,” Physica A: Statistical
Mechanics and its Applications, vol. 387, no. 27, pp. 6869–6875, 2008.
burt2004structural
R. S. Burt, “Structural holes and good ideas,” American journal of
sociology, vol. 110, no. 2, pp. 349–399, 2004.
hagberg2008exploring
A. Hagberg, P. Swart, and D. S Chult, “Exploring network structure, dynamics,
and function using networkx,” tech. rep., Los Alamos National Lab.(LANL),
Los Alamos, NM (United States), 2008.
johnson2010entropic
S. Johnson, J. J. Torres, J. Marro, and M. A. Munoz, “Entropic origin of
disassortativity in complex networks,” Physical review letters,
vol. 104, no. 10, p. 108702, 2010.
pearson1901liii
K. Pearson, “Liii. on lines and planes of closest fit to systems of points in
space,” The London, Edinburgh, and Dublin philosophical magazine and
journal of science, vol. 2, no. 11, pp. 559–572, 1901.
lloyd1982least
S. Lloyd, “Least squares quantization in pcm,” IEEE transactions on
information theory, vol. 28, no. 2, pp. 129–137, 1982.
gavish2014optimal
M. Gavish and D. L. Donoho, “The optimal hard threshold for singular values is
4 √(3),” IEEE Transactions on Information Theory, vol. 60,
no. 8, pp. 5040–5053, 2014.
pascual2020functionink
A. Pascual-García and T. Bell, “functionink: An efficient method to
detect functional groups in multidimensional networks reveals the hidden
structure of ecological communities,” Methods in Ecology and
Evolution, vol. 11, no. 7, pp. 804–817, 2020.
|
http://arxiv.org/abs/2409.03495v1 | 20240905130731 | Maximum likelihood inference for high-dimensional problems with multiaffine variable relations | [
"Jean-Sébastien Brouillon",
"Florian Dörfler",
"Giancarlo Ferrari-Trecate"
] | stat.ML | [
"stat.ML",
"cs.LG",
"cs.SY",
"eess.SY",
"stat.CO"
] |
|
http://arxiv.org/abs/2409.03692v1 | 20240905164502 | Advances in Cislunar Periodic Solutions via Taylor Polynomial Maps | [
"Mohammed Atallah",
"Simone Servadio"
] | math.DS | [
"math.DS",
"cs.SY",
"eess.SY"
] |
Advances in Cislunar Periodic Solutions via Taylor Polynomial Maps
Mohammed Atallah (PhD Student, Department of Aerospace Engineering, Iowa State University, IA 50011, USA; email: [email protected])
and Simone Servadio (Assistant Professor, Department of Aerospace Engineering, Iowa State University, IA 50011, USA; email: [email protected])
Received xxxx ; accepted xxxx
==========================================================================================================================================================================================================================================================================================
§ ABSTRACT
In this paper, novel approaches are developed to explore the dynamics of motion in periodic orbits near libration points in cislunar space using the Differential Algebra (DA) framework. The Circular Restricted Three-Body Problem (CR3BP) models the motion, with initial states derived numerically via differential correction. Periodic orbit families are computed using the Pseudo-Arclength Continuation (PAC) method and fitted. Two newly developed polynomial regression models (PRMs) express initial states as functions of predefined parameters and are used in the DA framework to evaluate propagated states. The initial states, expressed via PRM, are propagated in the DA framework using the fourth-order Runge-Kutta (RK4) method. The resultant polynomials of both PRM and DA are employed to develop a control law that shows significantly reduced control effort compared to the traditional tracking control law, demonstrating their potential for cislunar space applications, particularly those requiring computationally inexpensive low-energy transfers.
§ INTRODUCTION
With the increasing potential for deep-space exploration missions, there is significant interest in cislunar space due to its role in designing low-energy trajectories [koon2000dynamical]. However, this region is recognized as a chaotic system because of its multi-body gravitational environment. These characteristics have drawn attention to the bounded motion represented by the periodic and quasi-periodic orbits near libration points [wilmer2024preliminary]. In the past few decades, numerous studies have investigated motion in cislunar space. The Circular Restricted Three-Body Problem (CR3BP) is one of the simplified mathematical models commonly used to find solutions for bounded trajectories. This model is linearized around a libration point to obtain the trajectory of the periodic orbit, which introduces inaccuracy. This inaccuracy is compensated using a high-order differential correction scheme that obtains a more accurate trajectory [richardson1980halo]. However, even the differential correction scheme cannot provide a high-fidelity solution for the trajectory due to unmodeled dynamics and external disturbances. Therefore, there is a need for a real-time correction scheme that retains the satellite in a periodic orbit by leveraging sensor measurements.
In recent years, several missions have utilized periodic orbits near the libration points in cislunar space. For instance, the first stationkeeping operations around ℒ_1 and ℒ_2 in cislunar space were performed by the ARTEMIS mission [woodard2009artemis]. In light of these advancements, the Lunar Orbiter Platform-Gateway (LOP-G) is one of the largest international cooperative space programs, aiming to assemble a space station around the Moon [merri2018lunar]. Additionally, NASA's journey to Mars will utilize cislunar space to conduct advanced operations [national2016nasa]. Moreover, the next decade will witness over thirty missions being launched in the cislunar region [baker2024comprehensive]. These missions require staging locations to conduct various activities, where the periodic orbits near the libration points are being investigated as potential choices for that role [whitley2016options].
The CR3BP is the most commonly used framework for transfers in cislunar space, where approximate trajectory solutions can be obtained analytically [richardson1980analytic]. In [pritchett2018impulsive], a methodology for conducting low-energy transfers between periodic orbits is developed using CR3BP. In [singh2020low], the Halo orbit (HO) family near ℒ_1 is utilized to design a low-thrust transfer of a small spacecraft to a low-altitude lunar orbit. In [davis2017stationkeeping], stationkeeping in cislunar space is investigated, with the CR3BP being employed to generate the periodic orbit families, while the n-body dynamical model simulates higher-fidelity trajectories. In [van2016tadpole], the Pseudo-Arclength Continuation (PAC) method is developed to compute the members across each periodic orbit family based on the CR3BP. In [wilmer2021cislunar], the benefits of ℒ_1 and ℒ_2 HOs for orbit maintenance are investigated, where the CR3BP is used to model the constellations. In [fay2024investigation], search and rescue operations are investigated, and the response times are compared for rescuer spacecraft located in distant retrograde orbits and ℒ_1/ℒ_2 Lyapunov orbit (LO) families. More accurate trajectory solutions can be obtained using the bicircular restricted four-body problem (BCR4BP), as presented in [negri2020generalizing, oshima2022multiple, wilmer2021lagrangian]; however, these solutions are more computationally expensive and cannot be obtained analytically. Therefore, this study employs the CR3BP to develop a methodology for representing motion in a periodic orbit family leveraging Differential Algebra (DA).
DA is a computationally efficient tool based on Taylor expansion, that can be employed to represent differentiable and continuous dynamic models as high-order polynomials [chao2023handbook, hawkes1999modern]. Several tools are supplied in the DA framework to obtain the derivatives and integrals of the models in low-level computation environments, such as FORTRAN [berz1987differential], and C/C++ [massari2018differential]. In addition, DA has been proven to be a reliable tool for numerical integration of Ordinary Differential Equations (ODE) carried out by an arbitrary integration scheme. Several applications have leveraged DA framework, such as describing beam dynamics [berz1988differential], and high-order nonlinear filtering [valli2014nonlinear, cavenago2018based, servadio2021differential]. The DA tool is fundamentally based on expressing a continuous differentiable function as an infinite series expanded at a predetermined operating point [servadio2022maximum]. For a small deviation of this operating point, the series returns a precise value of the function using a finite number of the terms. This introduces the concept of Truncated Power Series (TPS), which is computationally reliable and can be used for applying arithmetic and calculus operations. In this study, the DA is employed to represent the initial states of periodic orbit families as functions of predetermined parameters, then these TPS are propagated according to ODE of the CR3BP using fourth-order Runge-Kutta scheme.
This paper aims to investigate and analyze the periodic orbit families near libration points in cislunar space within the framework of DA. First, the general translational motion in cislunar space is expressed using the CR3BP. Then, the initial states of an arbitrary periodic orbit in a given family are obtained using the linearized model around the nearest libration point. These approximate states are refined using a high-order differential correction scheme to obtain more accurate ones. Next, the members across the family are computed using the Pseudo-Arclength Continuation (PAC) method. After that, these members are employed to fit a Polynomial Regression Model (PRM) using the Least-Squares Error (LSE) method. The resultant polynomials of the periodic orbit initial states are then propagated to specific times using the fourth-order Runge-Kutta scheme within the DA framework. The first propagation process uses absolute time, while the second process uses normalized time, in which each orbit in the family is propagated for a fractional amount of its period. Finally, numerical simulations are conducted to demonstrate the reliability and accuracy of DA in representing different periodic orbit families near libration points in cislunar space. Additionally, a Proportional-Derivative (PD) control law is developed using the proposed method and compared to the traditional tracking control law to demonstrate the optimality of the proposed approach.
The rest of the paper is organized as follows: Section 2 presents the mathematical model of the CR3BP. Section 3 introduces the basics of DA. Section 4 shows the evaluation of the periodic orbits near ℒ_1 and ℒ_2 and the application of DA in that process. Section 5 presents and discusses the results of the numerical simulations and demonstrates the applicability of the proposed methodology. Section 6 concludes the paper.
§ MATHEMATICAL MODEL OF THE CR3BP
The translational motion in cislunar space can be approximated by an autonomous dynamic model by applying the following assumptions:
* The Earth and the Moon are treated as mass points.
* The Moon moves in a circular orbit around the Earth.
* The gravity of the Earth and the Moon is the only source of force influencing the motion, while all other perturbations are neglected.
Conventionally, the parameters and states in the CR3BP model are dimensionless, and the motion is expressed in rotating coordinates centered at the barycenter of the Earth-Moon system. The X axis is in the direction of the vector between the Earth and the Moon. Equation (<ref>) presents the mathematical model of the CR3BP according to the aforementioned assumptions and conventions.
ẍ = 2 ẏ + x - (1-μ)(x+μ)/r_1^3 - μ[x-(1-μ)]/r_2^3
ÿ = -2 ẋ + y - (1-μ) y/r_1^3 - μ y/r_2^3
z̈ = -(1-μ) z/r_1^3 - μ z/r_2^3
Here, x, y, and z denote the components of the dimensionless position vector of the satellite, where x points to the Moon, y is in the direction of the relative motion of the Moon with respect to the Earth, and z completes the set according to the right-hand rule. μ = 0.01215 is the dimensionless mass of the Moon. r_1 and r_2 are the relative distances between the satellite and the Earth, and the satellite and the Moon, respectively.
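As a concrete illustration of the dynamics above, the following minimal Python sketch (not the authors' code; the value of μ is taken from the text, while the initial state and step size are illustrative placeholders) evaluates the CR3BP right-hand side and advances it with the classical fourth-order Runge-Kutta scheme adopted later in the paper.

```python
import numpy as np

MU = 0.01215  # dimensionless Moon mass from the text

def cr3bp_rhs(state):
    """Right-hand side of the CR3BP for state = [x, y, z, vx, vy, vz]."""
    x, y, z, vx, vy, vz = state
    r1 = np.sqrt((x + MU)**2 + y**2 + z**2)          # satellite-Earth distance
    r2 = np.sqrt((x - (1.0 - MU))**2 + y**2 + z**2)  # satellite-Moon distance
    ax = 2.0*vy + x - (1.0 - MU)*(x + MU)/r1**3 - MU*(x - (1.0 - MU))/r2**3
    ay = -2.0*vx + y - (1.0 - MU)*y/r1**3 - MU*y/r2**3
    az = -(1.0 - MU)*z/r1**3 - MU*z/r2**3
    return np.array([vx, vy, vz, ax, ay, az])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = cr3bp_rhs(state)
    k2 = cr3bp_rhs(state + 0.5*dt*k1)
    k3 = cr3bp_rhs(state + 0.5*dt*k2)
    k4 = cr3bp_rhs(state + dt*k3)
    return state + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)

# Illustrative use: propagate an arbitrary (not converged) state for one time unit.
state = np.array([0.82, 0.0, 0.0, 0.0, 0.15, 0.0])
for _ in range(1000):
    state = rk4_step(state, 1.0e-3)
print(state)
```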
§ PERIODIC ORBITS NEAR ℒ_1 AND ℒ_2
In the CR3BP model, there are five equilibrium points, known as libration points, where the gravitational forces exerted by the Earth and the Moon on a satellite are balanced. The first two points, ℒ_1 and ℒ_2, have special characteristics due to their symmetry relative to the Moon. There are two common families of periodic orbits near these points: LOs, which exist in two-dimensional space [henon1969numerical], and HOs, which exist in three-dimensional space [breakwell1979halo]. Computing these periodic orbits requires a series of iterative steps due to the chaotic behavior of the CR3BP and the absence of an analytical solution for the model, as follows:
* The mathematical equations are linearized at the libration point, and approximate initial states are computed using this linear model.
* An iterative high-order differential correction scheme is employed to determine the period and refine the initial states of the orbit.
* The computed member of the orbit family is used to generate other members in the family using the PAC method.
§.§ Linearized Equations of Motion
The detailed steps of linearizing the model are found in [richardson1980analytic, richardson1980halo]. The resultant linear model is derived as follows:
ẍ = 2 ẏ+(1+2 c_2) x
ÿ = -2 ẋ-(c_2-1) y
z̈ = -c_2 z
where
c_n = (1/γ_L^3)[( ± 1)^n μ + (-1)^n (1-μ) γ_L^(n+1)/(1 ∓γ_L)^(n+1)], (L_1 or L_2)
Here, μ_E=G M_E, G is the gravitational constant, M_E is the Earth mass, γ_L=r_E / a, r_E is the Earth mean radius, and a is the astronomical unit.
§.§ Differential Correction for Computing Initial States
The linearized model in Equation (<ref>) has a closed-form analytical solution, from which an analytical formula for the initial states of periodic orbits can be derived. However, the accuracy of these approximate initial states is insufficient due to the chaotic behavior of the system, and the high non-linearity of the real-time system. Therefore, further correction is required to achieve the desired accuracy. The differential correction method, commonly used for this purpose, implements an iterative algorithm using the nonlinear model to modify the initial states. The differential correction scheme used in this study was first proposed in [richardson1980halo]. The procedure of this scheme is as follows: First, the initial states of the periodic orbit are obtained using the closed-form analytical solution of Equation (<ref>). These states are then propagated using the CR3BP model until they intersect the x-z plane for HOs or the y-axis for LOs. Due to the symmetry of the periodic orbit, the states ẋ and ż must equal zero at the intersection point. Next, the state transition matrix is evaluated at this half-period. Using the states and the state transition matrix at the half-period point, the next iteration of the corrected initial states is computed as follows:
[[ Δ x_0; Δẏ_0; Δ T_1 / 2 ]]=-Φ^-1[[ ẋ(x_0, ẏ_0, T_1 / 2); ż(x_0, ẏ_0, T_1 / 2); y(x_0, ẏ_0, T_1 / 2) ]]
where Φ is the matrix of partials that is defined as follows:
Φ={[ ∂ẋ/∂ x_0 ∂ẋ/∂ẏ_0 ∂ẋ/∂ T_1 / 2; ∂ż/∂ x_0 ∂ż/∂ẏ_0 ∂ż/∂ T_1 / 2; ∂ y/∂ x_0 ∂ y/∂ẏ_0 ∂ y/∂ T_1 / 2 ]}_t=T_1 / 2
This process repeats until the desired state error is achieved.
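The loop below is a minimal sketch of this correction procedure (not the authors' implementation): it reuses the cr3bp_rhs and rk4_step helpers from the previous sketch, replaces the analytic state-transition quantities by finite-difference partials in (x_0, ẏ_0) together with the exact time derivative from the equations of motion, and solves the correction equation above. The halo-type seed with fixed z_0, the tolerances, and the step sizes are illustrative assumptions.

```python
import numpy as np

def propagate(state, t, dt=1.0e-4):
    """Fixed-step RK4 propagation to time t (final partial step included)."""
    n = int(t // dt)
    for _ in range(n):
        state = rk4_step(state, dt)
    return rk4_step(state, t - n*dt)

def half_period_residual(x0, vy0, t_half, z0):
    """Residual [xdot, zdot, y] at the candidate half period for a halo-type seed."""
    s = propagate(np.array([x0, 0.0, z0, 0.0, vy0, 0.0]), t_half)
    return np.array([s[3], s[5], s[1]]), s

def differential_correction(x0, vy0, t_half, z0, tol=1e-10, eps=1e-7):
    """Iterate the correction of (x0, vy0, T_1/2) until the residual is small."""
    for _ in range(20):
        res, s = half_period_residual(x0, vy0, t_half, z0)
        if np.max(np.abs(res)) < tol:
            break
        # Matrix of partials: finite differences in x0 and vy0, exact derivative in T_1/2.
        dres_dx = (half_period_residual(x0 + eps, vy0, t_half, z0)[0] - res) / eps
        dres_dv = (half_period_residual(x0, vy0 + eps, t_half, z0)[0] - res) / eps
        f = cr3bp_rhs(s)                      # [vx, vy, vz, ax, ay, az] at T_1/2
        dres_dT = np.array([f[3], f[5], f[1]])
        Phi = np.column_stack([dres_dx, dres_dv, dres_dT])
        delta = -np.linalg.solve(Phi, res)    # correction step of the scheme above
        x0, vy0, t_half = x0 + delta[0], vy0 + delta[1], t_half + delta[2]
    return x0, vy0, t_half
```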
§.§ Computing Members across the Orbit Families
The differential correction scheme is used to obtain the initial states of a single periodic orbit. It could in principle be used to obtain the other members of the periodic orbit family; however, it reaches only a limited range of members and is not the most computationally efficient method for that purpose [servadio2022dynamics]. The PAC method is one of the most reliable methods developed to compute a wide range of members across the family starting from a single predetermined member [van2016tadpole, patel2023low]. It computes the members with a step Δ s, predetermined according to the desired number of members, in the direction tangent to the solution manifold. In this paper, the method is employed for both LOs and HOs, though it can be used for all periodic and quasi-periodic orbit families.
Assume 𝐱_i is the initial state vector of an arbitrary member that satisfies the constraints F(𝐱_i) = 0 of the family. In case of LO, the constraints are y|_t=T/2 = ẋ|_t=T/2 = 0, while ż|_t=T/2 = 0 is added for HOs. Shifting this member by a step size Δ s yields the next member 𝐱_i+1, which also satisfied the constraints F(𝐱_i+1) = 0. To guarantee that the step size equals Δ s in the tangent direction, an additional constraint is added as follows:
G(𝐱_i+1)=[[ F(𝐱_i+1); (𝐱_i+1-𝐱_i)^T Δ𝐱_i-Δ s ]]=0
where G(·) denotes the augmented constraints, and Δ𝐱_i is the null vector of the Jacobian matrix at 𝐱_i, defined as Δ𝐱_i=𝔑(D_F(𝐱_i)). Here, 𝔑(·) denotes the null vector, and D_(·) is the Jacobian matrix of (·).
D_G(𝐱_i+1)=[[ D_F(𝐱_i+1); Δ𝐱_i^T ]]
In order to obtain 𝐱_i+1 that satisfies the constraints in Equation (<ref>), an initial guess of the solution ^0𝐱_i+1 is selected. Then, Newton's method is employed as follows:
^k+1𝐱_i+1=^k𝐱_i+1-[D_G(^k𝐱_i+1)]^-1 G(^k𝐱_i+1)
where ^k𝐱_i+1 is the k^th iteration. This process is repeated iteratively until the constraints in Equation (<ref>) are satisfied within a certain tolerance.
Starting from an arbitrary member in the family, the other members can be computed in both directions ±Δ𝐱. In this study, the PAC method is employed to compute the members across LO families, HO, and NRHO families near ℒ_1 and ℒ_2.
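A generic sketch of one PAC step is given below. The family constraint F is a stand-in callable (for instance, the half-period conditions above) assumed to map an n-dimensional member description to n-1 constraints; its Jacobian is approximated by finite differences, and the tangent direction is taken as the right singular vector associated with the smallest singular value.

```python
import numpy as np

def jacobian_fd(F, x, eps=1e-7):
    """Finite-difference Jacobian of the constraint function F at x."""
    f0 = np.atleast_1d(F(x))
    J = np.zeros((f0.size, x.size))
    for k in range(x.size):
        xp = x.copy()
        xp[k] += eps
        J[:, k] = (np.atleast_1d(F(xp)) - f0) / eps
    return J

def pac_step(F, x_i, ds, n_newton=10, tol=1e-10):
    """Return the next family member a pseudo-arclength ds away from x_i."""
    # Tangent (null) direction of the Jacobian at the current member.
    dx_i = np.linalg.svd(jacobian_fd(F, x_i))[2][-1]
    x = x_i + ds * dx_i                              # predictor
    for _ in range(n_newton):                        # Newton corrector
        G = np.concatenate([np.atleast_1d(F(x)), [(x - x_i) @ dx_i - ds]])
        if np.max(np.abs(G)) < tol:
            break
        DG = np.vstack([jacobian_fd(F, x), dx_i])    # augmented Jacobian
        x = x - np.linalg.solve(DG, G)
    return x
```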
§.§.§ LO Families
Figure <ref> shows several LOs near ℒ_1 and ℒ_2. A key advantage of this method is its ability to obtain orbits that are close to the Moon.
§.§.§ HO and NRHO Families
Similarly, Figures <ref> and <ref> show numerous HOs and NRHOs near ℒ_1 and ℒ_2. Specifically, Figure <ref> displays the ℒ_1 families, separated by the bold blue orbit, while Figure <ref> shows the ℒ_2 families.
§ OVERVIEW OF DIFFERENTIAL ALGEBRA
The basic idea of treating numbers and implementing various operations on them using computers is to represent these numbers with a finite amount of information. Generally, numbers can be irrational and ideally represented by infinite digits, which makes it impractical for computers to handle their ideal representation. Therefore, only a finite amount of relevant information is extracted. These approximations are known as floating-point numbers. This approximated form of the numbers allows operations on real numbers by transforming these numbers into floating-point numbers and implementing the operations as depicted in Figure <ref>. Here, a and b are real numbers, a̅ and b̅ are their floating-point representations, and the binary operation relating them in the figure is arbitrary.
In a similar manner, the DA technique extends the concept of floating-point numbers to encompass differentiable functions [servadio2020recursive]. According to the Taylor expansion, any differentiable function at a certain point can be represented by an infinite series expanded at that point. This brings an analogy between real numbers and differentiable functions, in which both are represented by an infinite amount of information (i.e., digits of real numbers and series coefficients of differentiable functions). Similar to floating-point numbers, DA extracts a finite number of terms to represent the function in an approximated way that can be handled by computers. This approximation is used to implement various operations on these functions in a computationally efficient manner. Figure <ref> depicts the equivalent DA approximation of functions in computer environments [servadio2020nonlinear]. Here, f and g are differentiable functions, while F and G are finite series that represent these functions in the DA framework, defined by their coefficients.
For any differentiable function 𝐲 = f(𝐱), the DA mapping is represented as follows:
𝐲(δ𝐱) = _Nℱ^𝐱̂(δ𝐱)
where 𝐱̂ denotes the operating point of the series expansion, δ𝐱 denotes the deviation of the 𝐱 defined as δ𝐱 = 𝐱 - 𝐱̂, and N is the highest order of the series with nonzero coefficient. This representation can be used to derive a highly accurate approximation of the solution for a dynamical system [servadio2022nonlinear]. For a given dynamic system 𝐱̇ = f(t, 𝐱), the DA framework can be employed to evaluate the states at a certain time t_j, as depicted in Figure <ref>.
Here, ℳ^𝐱̂_t_0 → t_j(δ𝐱) denotes the State Transition Propagation Matrix (STPM) of the states propagated from the initial time t_0 to a given time t_j, expressed as a function of the deviation δ𝐱 with respect to the operating point of the expansion 𝐱̂. This approach allows for the propagation of the neighborhood of a given state 𝐱 to multiple times by propagating the series instead of propagating each point individually [servadio2024likelihood]. This method is computationally efficient in any application that requires computing multiple states at different times. Additionally, the accuracy can be balanced with computation time by tuning the order N of the series.
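The sketch below illustrates the truncated power series (TPS) idea on which DA rests: a small univariate class that stores Taylor coefficients about an expansion point and overloads addition and multiplication so that arithmetic acts on the coefficients rather than on individual point values. It is purely illustrative; the multivariate, arbitrary-order DA libraries cited above are far more general.

```python
import numpy as np

class TPS:
    """Truncated power series in one deviation variable, up to a fixed order."""
    def __init__(self, coeffs, order=4):
        self.order = order
        c = np.zeros(order + 1)
        coeffs = np.asarray(coeffs, dtype=float)[: order + 1]
        c[: coeffs.size] = coeffs
        self.c = c

    def __add__(self, other):
        other = other if isinstance(other, TPS) else TPS([other], self.order)
        return TPS(self.c + other.c, self.order)

    def __mul__(self, other):
        other = other if isinstance(other, TPS) else TPS([other], self.order)
        prod = np.convolve(self.c, other.c)[: self.order + 1]  # truncate the product
        return TPS(prod, self.order)

    __radd__, __rmul__ = __add__, __mul__

    def eval(self, dx):
        """Evaluate the series at a deviation dx from the expansion point."""
        return sum(ck * dx**k for k, ck in enumerate(self.c))

# The identity 'variable' dk = (kappa - kappa_hat), expanded to 4th order:
dk = TPS([0.0, 1.0])
f = dk * dk + 3.0 * dk + 1.0            # a polynomial map built by TPS arithmetic
print(f.eval(0.1), 0.1**2 + 3*0.1 + 1)  # the two numbers agree
```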
§ POLYNOMIAL REGRESSION MODEL
The computed members in each periodic orbit family are used to construct PRMs. Two different approaches are implemented: The first is the global PRM, where the domain of the predefined parameter κ is divided into multiple regions. The mean of each region is selected as the operating point κ̂, and the polynomial for that region is fitted. The deviations of the predefined parameter δκ for the members act as the query points, while the initial states of the members serve as the fitted values at these query points. The second approach is the local PRM, which uses the parameter κ at the designed periodic orbit as the operating point for polynomial fitting. However, this approach only uses the neighbors of the designed member to fit the polynomial, in order to avoid fitting issues.
§.§ Global Polynomial Regression Model
In this study, the x component of the initial states of the members is used as the parameter κ for both Lyapunov and Halo families. The x component is chosen because it is unique for each member, unlike the y component, which is always zero, and the z component, which is zero in Lyapunov families and not unique in Halo families. Figure <ref> illustrates the concept of dividing the domain into multiple regions and fitting a polynomial in each, where κ_il and κ_iu represent the lower and upper bounds of the ith region of the domain, respectively, while 𝒫^κ̂_i_i is the polynomial of the members in the ith region, expanded at the operating point κ̂_i. For any given point κ = x_0, the deviation is calculated with respect to the operating point of each region to evaluate each polynomial. The red line in Figure <ref> represents the deviation of x_0 with respect to the operating point of an arbitrary region, δκ_m.
For any arbitrary parameter κ = x_0, the initial states of a family member 𝐱 are evaluated as follows:
𝐱(κ = x_0) = ∑_i = 1^M a_i(κ) 𝒫_i^κ̂_i(δκ_i)
where δκ_i = κ - κ̂_i, as visualized in Figure <ref>, and a_i(κ) is the activation function of 𝒫_i, defined as follows:
a_i(κ) =
1 if κ_il≤κ < κ_iu,
0 otherwise
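A sketch of how the global PRM is evaluated is given below: the region containing the query point plays the role of the activation function a_i(κ), and the corresponding polynomial is evaluated at the deviation from that region's operating point. The region edges, operating points, and coefficients shown are made-up placeholders, not fitted values from the paper.

```python
import numpy as np

class GlobalPRM:
    def __init__(self, edges, centers, coeff_table):
        self.edges = np.asarray(edges)      # region boundaries, length M+1
        self.centers = np.asarray(centers)  # operating points kappa_hat_i
        self.coeffs = coeff_table           # per-region polynomial coefficients

    def __call__(self, kappa):
        # Activation: index of the single region with a_i(kappa) = 1.
        i = np.searchsorted(self.edges, kappa, side="right") - 1
        i = int(np.clip(i, 0, len(self.centers) - 1))
        dk = kappa - self.centers[i]        # deviation from the operating point
        # Polynomial evaluation at the deviation (coefficients highest power first).
        return np.polyval(self.coeffs[i], dk)

# Illustrative use with two regions and made-up cubic coefficients:
prm = GlobalPRM(edges=[0.80, 0.85, 0.90], centers=[0.825, 0.875],
                coeff_table=[np.array([0.1, -0.2, 0.3, 0.12]),
                             np.array([0.2, 0.1, -0.4, 0.10])])
print(prm(0.86))   # initial-state component for the member with x0 = 0.86
```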
§.§ Local Polynomial Regression Model
The local PRM uses the parameter κ of the designed member as the operating point for the polynomial. This approach yields a single polynomial, which is more computationally efficient than the global method and provides a more precise computation of members that are close to the designed member (i.e., when δκ≪ 1). However, this model becomes less effective for larger values of δκ. The initial states of the member are obtained using this model as follows:
𝐱(κ) = 𝒫^κ̂_d(δκ)
In some cislunar applications, such as station-keeping, only the neighbors of the designed orbit are required, making this local PRM an optimal choice for these cases.
§ PERIODIC ORBIT REPRESENTATION USING DIFFERENTIAL ALGEBRA
As mentioned earlier, a Polynomial Regression Model (PRM) is developed to represent the initial states of each periodic orbit family as a polynomial function of a parameter κ. Using this model, the initial state vector is expressed as a function of δκ, as follows: 𝐱_0(δκ) = _Nℱ^κ̂(δκ). Here, ℱ denotes the series of the initial states. Instead of propagating the initial states of the periodic orbits individually, it is more efficient to propagate the series ℱ. In this approach, the resultant series represents the propagated state vector as a function of δκ and can be used to obtain the state vector at different values of κ in a less computationally expensive manner. The DA technique is employed to represent the propagated state vector 𝐱_ij at a certain time t_j, starting from an initial state vector 𝐱_i0, as a series expanded at a given parameter vector κ̂ to N order, where each parameter vector κ_i can be mapped to a certain initial state vector 𝐱_i0, as follows:
𝐱_i0(δκ_i) = _Nℳ^κ̂_t_0 → t_0(δκ_i)
𝐱_ij(δκ_i) = _Nℳ^κ̂_t_0 → t_j(δκ_i)
where ℳ denotes the STPM. This STPM is obtained by propagating the CR3BP as a function of the PRM of the periodic orbit family using the fourth-order Runge-Kutta scheme. Therefore, the STPM returns a mapped state vector that must be in the subspace of the periodic orbit family for any arbitrary κ in the domain. This mapping is initially performed at discretized times; however, it is used to obtain the states at any randomly selected time by interpolating the states of the surrounding points.
§.§ Propagation with Respect to Normalized Time
Equation (<ref>) represents the mapping of the propagated states at a given time. In this context, any deviation in κ might lead to a significant deviation in 𝐱 due to the different time periods of the deviated members. This large deviation increases the control effort required to transfer to the deviated member. To minimize the deviation of the propagated states, the STPM can be computed at a certain dimensionless normalized time η, where η = t/T_p. In this case, the period T_p is a function of δκ as follows:
T_pi(δκ_i) = _N𝒯^δκ̂(δκ_i)
where _N𝒯^δκ̂(δκ_i) is the map function of the time period.
In this approach, the number of time steps is fixed, while the sampling time is variable and is determined as a ratio of the time period function, as follows:
T_s(δκ) = (1/N_s) _N𝒯^κ̂(δκ)
where N_s denotes the fixed number of time steps per period. In this case, the resultant state vector at any normalized time can be obtained by adjusting the number of time steps (e.g., N_s|_10% = 1/10 N_s). The mapping from time to normalized time is then performed as follows:
η(δκ_i) = t / _N𝒯^κ̂(δκ_i)
The propagated states at a certain η is represented as follows:
𝐱_i0(δκ_i) = _Nℳ^κ̂_η_0→η_0(δκ_i)
𝐱_ij(δκ_i) = _Nℳ^κ̂_t_0 → t_j(δκ_i, T_s(δκ_i)) = _Nℳ^κ̂_η_0→η_j(δκ_i)
where ℳ denotes the STPM using normalized time. Figure <ref> depicts the difference between mapping to time and normalized time, with the horizontal solid lines representing points at the same normalized time η, while the dashed lines representing points at the same time t.
§ NUMERICAL SIMULATIONS
The proposed PRMs and the DA representation of periodic orbit families are verified through a series of numerical simulations.
§.§ Precision of the Derived Polynomial Regression Models
The purpose of the PRM is to obtain the initial states of any member in the periodic orbit family in a computationally efficient manner. In this numerical simulation, the accuracy of the developed model is assessed by measuring the state error after propagation for multiple orbits.
To evaluate the accuracy of the proposed global PRM, random states are generated at random times and propagated using the RK4 method for a finite number of orbits. Both random points and times are generated using a uniform distribution in a MATLAB environment. For the LOs near ℒ_2, the domain is divided into eight regions, with polynomials of order thirty. Figure <ref> shows the propagation of these random initial points over time. Figure <ref> demonstrates that the states maintained the periodic orbit over three orbits with only insignificant deviations. However, after three orbits, Figure <ref> reveals that most of these states fail to maintain the periodic orbits. The accuracy would vary if the number of regions or the polynomial order were changed. Figure <ref> shows the Root Mean Square Error (RMSE) of both position and velocity after different numbers of orbital revolutions. In this analysis, initial states are randomly generated using the developed PRM. The initial position vector is randomly selected, and the corresponding velocity vector is then computed using the PRM to satisfy the periodic orbit conditions. The figure indicates that both position and velocity errors follow the same trend and are of similar magnitude for all number of samples. The errors start from significantly small values and settle to an order of magnitude of one. This analysis demonstrates that the developed PRM can compute accurate initial states for periodic orbits, maintaining a significantly small error even after up to three orbits.
§.§ Periodic Orbit Families Using the Derived Polynomial Regression Models
The developed global PRM is applied to compute the members of the Lyapunov and HO families near the libration points ℒ_1 and ℒ_2. The polynomial approximations of these orbits are obtained, representing the initial states of the periodic orbit as a function of x_0. Then, these polynomials are propagated over time using the RK4 method within a DA framework. To validate the method's accuracy and demonstrate its effectiveness, the polynomials are propagated over various time and normalized time intervals. This approach provides a comprehensive view of the orbits' trajectories and their stability characteristics. The results, showcasing the generated polynomials, are presented in this section, highlighting the efficiency and precision of the global and local PRMs in orbit computation and propagation near the libration points.
§.§ Lyapunov Orbits Family
In LOs, the initial condition on the Earth-Moon rotating frame is governed by two states: x_0 and ẏ_0, as they are planar orbits. Since the predefined parameter of the PRM is x_0, the state ẏ_0 is expressed as a polynomial. However, after propagating the states to a certain time, each of the four planar states will have a different polynomial expressed as a function of δκ or δ x_0. Figure <ref> shows the states of the LOs near ℒ_1 after propagating the initial states to ten different times. Each solid line represents the locus of points that share the same time. Figure <ref> displays the propagation of the states to ten different normalized times, with each solid line representing the locus of points that share the same normalized time. It is worth noting that normalized time propagation covers the entire domain of the family. Additionally, points with the same normalized time are closer to each other than points with the same time, especially in long-term propagation. This demonstrates the superiority of normalized time propagation over time propagation in proximity operations. Figure <ref> illustrates the variation of the position states with x_0 and η, and also demonstrates how η varies with x_0, with each solid line representing normalized times of the family at a given time. Figure <ref> shows the variation of the velocity states with x_0 and η in a similar manner. Here, x_0 denotes the x component of the member's initial state, used as the family parameter κ.
§.§ Halo Orbit Families
In HOs, the initial conditions are governed by three states: x_0, z_0, and ẏ_0, as these are 3-D orbits. The states z_0 and ẏ_0 are defined as functions of δ x_0. After propagation, each of the six states will have a polynomial representation at each time or normalized time. Figure <ref> shows the states of the HOs near ℒ_2 after propagating the initial states to ten different times, with each solid line representing the locus of points that share the same time. Figure <ref> illustrates the propagation of the states to ten different normalized times, with each solid line representing the locus of points that share the same normalized time. Figure <ref> shows the variation of the position states with x_0 and η, while Figure <ref> shows the variation of the velocity states.
§.§ Comparative Analysis of Global and Local Polynomial Regression Models
The strength of the global PRM lies in its ability to cover the entire domain of the orbit family. However, locating the orbit in the correct region can be computationally expensive for some applications. Conversely, if an application only requires information about the neighborhood of a specific design orbit rather than the entire family, the local PRM becomes the optimal choice, since it uses a single polynomial, making it computationally efficient. Figure <ref> compares the accuracy of the global and local PRMs, represented by the error between the final and initial states after one time period. An arbitrary point is selected for comparison, and its neighboring points are propagated using the three methods. The comparison shows that the global model maintains consistent accuracy across the domain, whereas the local models exhibit better accuracy near the designed operating point. However, as expected, the local model fails to provide accurate results when the states deviate significantly from the designed operating point. It is worth noting that the global PRM requires evaluating multiple polynomials to maintain consistent error, while the local PRM requires evaluating only a single polynomial.
§.§ Proportional-Derivative Controller
A PD control law is designed to verify the applicability of the proposed models in real-world missions. The controller gains are manually tuned until the system is stabilized. The objective of the controller is to transfer the satellite to one of the nearest periodic orbits while keeping it within the same family. The control law is implemented as follows:
* The position of the satellite is measured to identify the closest periodic orbit, as determined by the proposed model.
* The nearest periodic orbit parameter, κ_0, and its normalized time, η_0, are determined using the normalized time mapping, as depicted in Figure <ref> and Figure <ref>.
* The normalized transfer time, η_t, is selected; then, the reference state is obtained as follows:
𝐱_r = _Nℳ^κ̂_η_0→η_0 + η_t(δκ_0)
* The thrust is evaluated as follows: 𝐮 = - [𝐊] ×(𝐱 - 𝐱_r),
where [𝐊] = Diag(k_p 𝐈_3, k_d 𝐈_3). Here, 𝐈_3 denotes the identity matrix, and k_p and k_d are the controller gains.
* The equivalent transfer time is computed as follows: t_t = η_t×_N𝒯^κ̂(δκ_0).
* The velocity impulse Δ𝐕 is evaluated as follows: Δ𝐕 = 𝐮 t_t, which is a valid approximation for small t_t.
* These steps are repeated frequently after each transfer.
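The following sketch condenses the control steps listed above into a single update (not the flight implementation). The nearest_member, state_map, and period_map callables stand in for the closest-orbit search and the DA maps ℳ and 𝒯; the gains are illustrative, and the gain matrix [𝐊] = Diag(k_p 𝐈_3, k_d 𝐈_3) is read here as producing a thrust acceleration from the position and velocity errors.

```python
import numpy as np

def pd_update(x, nearest_member, state_map, period_map,
              eta_t=0.05, kp=1.0, kd=1.0):
    """Return the velocity impulse dV steering state x back into the family."""
    dkappa, eta0 = nearest_member(x)            # steps 1-2: closest member and its eta
    x_ref = state_map(dkappa, eta0 + eta_t)     # step 3: reference state from the STPM
    err_r = x[:3] - x_ref[:3]                   # position error
    err_v = x[3:] - x_ref[3:]                   # velocity error
    u = -(kp * err_r + kd * err_v)              # step 4: PD thrust acceleration
    t_t = eta_t * period_map(dkappa)            # step 5: equivalent transfer time
    return u * t_t                              # step 6: impulsive dV approximation
```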
The control law is tested on the LO family near ℒ_2, where a random periodic orbit is selected, and the satellite starts from a random point on that orbit with a disturbed velocity. The control law is then implemented to retain the satellite within the family by performing multiple transfers. The transfer time is set to η_t = 0.05. Figure <ref> shows the transfers over ten orbits until the satellite is retained in a specific periodic orbit. Figure <ref> illustrates the time history of the velocity impulses, indicating that the satellite rapidly converges to a steady-state periodic orbit within two revolutions. Although the final periodic orbit differs from the initial one, it maintains the same ground track on the Moon. These results emphasize the potential of the proposed method for various cislunar applications, including low-energy transfers.
The proposed control law is compared to a traditional controller, which tracks the states of a predetermined orbit without using the STPM. As previously mentioned, the proposed method converges to a random orbit within the family. To efficiently compare the two methods, the same simulation parameters are used, and the final orbit obtained from the proposed method is set as the target orbit for the traditional method. Figure <ref> presents the control efforts of both methods, demonstrating a significant reduction of 13.54% in the impulses required by the proposed method.
§ CONCLUSIONS
In conclusion, this study explores the dynamics of motion in periodic orbits near libration points in cislunar space using the DA framework. The CR3BP model is employed to describe the motion in this environment. Initial states of the periodic orbits are numerically generated using a differential correction scheme that leverages an analytical solution, while the members of each orbit family are computed using the PAC method. These computed members are then used to fit PRMs for the orbit families, with the initial states expressed as functions of predefined parameters. These regression models are incorporated into the DA framework to evaluate propagated states at a given time as functions of deviations in these predefined parameters. The accuracy of the computed states at various times is assessed, and the execution time for computing these states is compared with traditional propagation methods using the Runge-Kutta method. The analysis demonstrates the effectiveness of using DA for representing motion in periodic orbits in cislunar space. Additionally, it shows significantly reduced control effort compared to the traditional tracking control law.
§ ACKNOWLEDGEMENT
The authors wish to thank Erin Ashley and Batu Candan for their help in reviewing and improving this paper.
|
http://arxiv.org/abs/2409.02750v1 | 20240904142826 | Toward the First Gluon Parton Distribution from the LaMET | [
"William Good",
"Kinza Hasan",
"Huey-Wen Lin"
] | hep-lat | [
"hep-lat"
] |
[email protected]
Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824
Department of Computational Mathematics, Science and Engineering, Michigan State University, East Lansing, MI 48824
Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824
Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824
MSUHEP-24-011
12.38.-t,
11.15.Ha,
12.38.Gc
§ ABSTRACT
We present progress towards the first unpolarized gluon quasi-PDF from lattice QCD using high-statistics measurements for hadrons at two valence pion masses M_π≈ 310 and 690 MeV computed on an a ≈ 0.12 fm ensemble with 2+1+1-flavors of HISQ generated by the MILC collaboration.
In this study, we consider two gluon operators for which the hybrid-ratio renormalization matching kernels have been recently derived and a third operator that has been used in prior pseudo-PDF studies of the gluon PDFs.
We compare the matrix elements for each operator for both the nucleon and pion, at both pion masses, and using two gauge-smearing techniques.
Focusing on the more phenomenologically studied nucleon gluon PDF, we compare the ratio and hybrid-ratio renormalized matrix elements at both pion masses and both smearings to those reconstructed from the nucleon gluon PDF from the CT18 global analysis.
We identify the best choice of operator to study the gluon PDF and present the first gluon quasi-PDF under some caveats.
Additionally, we explore the recent idea of Coulomb gauge fixing to improve signal at large Wilson-line displacement and find it could be a major help in improving the signal in the gluon matrix elements, using the perturbative calculation to confirm our results.
This work helps identify the best operator for studying the gluon quasi-PDF, shows higher hadron boost momentum is needed to implement hybrid-ratio renormalization reliably, and
suggests the need to study a more diverse set of operators with their corresponding perturbative calculations for hybrid-ratio renormalization to further gluon quasi-PDF studies.
Toward the First Gluon Parton Distribution from the LaMET
Huey-Wen Lin
Received 2024; accepted 2024
==========================================================
§ INTRODUCTION
Parton distribution functions (PDFs) are nonperturbative functions that represent the probability of finding (anti)quarks and gluons within a hadron at a specific fraction of the hadron's total momentum.
These functions act as crucial inputs for many high energy scattering experiments <cit.>.
The nucleon gluon PDF g(x) is especially important for determining the cross sections of pp collisions, Higgs boson production, J/ψ photoproduction and jet production <cit.>.
In addition to proton structure, there is much interest in elucidating the structure of the pion because of its connection to chiral-symmetry breaking as the pseudo–Nambu-Goldstone boson of quantum chromodynamics (QCD) <cit.>.
The experimental data <cit.> is very limited in the pion case, as the pion's short lifetime forbids its use as a scattering target.
The experiments at the future Electron-Ion Colliders based in the U.S. <cit.> and China <cit.>, along with the proposed COMPASS++ and AMBER facilities <cit.>, will advance our knowledge of gluon PDFs; in the meantime, lattice QCD serves as a tool enabling us to study gluon PDFs from first principles.
Lattice QCD is a theoretical framework that allows us to calculate nonperturbative QCD quantities with full systematic control.
x-dependent calculations for hadron structure in lattice QCD have multiplied since large-momentum effective theory (LaMET) was proposed in 2013 <cit.>.
LaMET, whose application to PDF studies is sometimes referred to as the quasi-PDF method, relies on measuring matrix elements of nonlocal, bilinear quark/gluon operators in boosted hadron states.
The Fourier transforms of these matrix elements are referred to as quasi-PDFs, which can be matched to the lightcone PDFs via a matching procedure that is accurate up to powers of the inverse parton momentum.
We direct readers to the following reviews on LaMET in Refs. <cit.>.
However, the need for signal out to large separation distances and large momentum has previously prevented the use of the quasi-PDF method for the gluon PDF on the lattice <cit.>.
The primary method used in LQCD studies of the unpolarized and helicity gluon PDFs <cit.> has instead been the pseudo-PDF method <cit.>, which relies on a short distance factorization and matching the lightcone PDF to the position space matrix elements.
The pseudo-PDF method requires one to use, typically phenomenologically inspired, model forms for the PDF and fit the model parameters.
It is, therefore, desired to obtain the gluon PDF through LaMET to make comparisons between results from the two methodologies.
This paper is organized as follows. We provide some theoretical background in Sec. <ref>, giving the definitions for the gluon operators, hybrid-ratio renormalization, the quasi-PDF, and matching to the lightcone PDF.
In Sec. <ref>, we explain the numerical setup, define the two-point and three-point correlators, compare the signal for different operators, explain how the bare matrix element are extracted, and present our bare matrix elements for different operators, hadrons, and smearings.
We present the results of our study in Sec. <ref>, including renormalized matrix element comparison between operators and to phenomenological results, a tentative look at the first nucleon gluon quasi-PDF from the data with the best signal, and an early exploration of Coulomb gauge fixing to improve the signal.
The final conclusions and future outlook can be found in Sec. <ref>.
§ THEORETICAL BACKGROUND
§.§ Gluon Operators
Obtaining a lightcone PDF using LaMET starts with the matrix elements of some coordinate-space correlator O(z) having separation in the z-direction,
h^B(z, P_z) = ⟨ H(P_z) | O(z) | H(P_z) ⟩,
where | H(P_z) ⟩ is the ground state of the hadron H with boost momentum P_z.
For the gluon PDF, there is some freedom in the choice of O(z), minding multiplicative renormalizability.
The form of the operators should be <cit.>
O^μν (z) = F_a^μγ(z)W(z,0)F^ν_a,γ(0)
or a combination of such operators, where F_a^μα = ∂^μ A_a^α - ∂^α A_a^μ - g f_abcA_b^μ A_c^α is the gluon field strength tensor, and
W(z, 0) = 𝒫exp[ -ig∫_0^z dz' A^z(z')]
is the Wilson line for gauge invariance with A^z = A^z_a t_a.
Only some choices of operator indices and summations are known to be multiplicatively renormalizable <cit.>.
We will focus on three operators for the unpolarized gluon PDF
O^(1)(z) = F^z i(z)W(z,0)F^z_i(0)
O^(2)(z) = F^z μ(z)W(z,0)F^z_μ(0)
O^(3)(z) = F^t i(z)W(z,0)F^t_i(0) - F^i j(z)W(z,0)F_ij(0).
Here, the repeated μ terms denote summation over all Lorentz indices, while i,j means summation over only the transverse indices (x,y).
Multiplicative renormalizability at the one-loop level was shown for the first two operators in Ref. <cit.> and for the last operator in Ref. <cit.>.
We choose O^(1) and O^(2), as these are the only two operators that have hybrid-ratio scheme matching relations derived, and O^(3), as it has been shown to produce good signal in the many pseudo-PDF studies <cit.>.
§.§ Renormalization Procedure
The hybrid-ratio–renormalized<cit.> matrix elements are defined
h^R(z,P_z) = [h^B(0,0)/h^B(0,P_z)] × [h^B(z,P_z)/h^B(z,0)] for z ≤ z_s
[h^B(0,0)/h^B(0,P_z)] × [h^B(z,P_z)/h^B(z_s,0)] × e^(δ m + m_0)(z-z_s) for z > z_s,
where z_s is some scale distance typically chosen to be less than about 0.3 fm, and δ m and m_0 are the mass renormalization and the renormalon ambiguity terms needed to renormalize the linear divergence from the Wilson line self energy.
If z_s →∞, we recover the standard ratio-scheme renormalization, which does not take into account the Wilson-line self energy.
The quasi-PDF for the gluon has never been studied directly from lattice data in either renormalization scheme, so we are interested in seeing the hybrid-ratio and ratio-scheme results.
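Once the bare matrix elements are tabulated, the prescription above reduces to a few array operations. The sketch below (not the authors' analysis code) assumes a uniform grid of separations with z[0] = 0, a z_s chosen on that grid, and a value of δm + m_0 obtained as described in the following paragraphs.

```python
import numpy as np

def hybrid_ratio(z, hB_Pz, hB_0, dm_plus_m0, z_s):
    """Hybrid-ratio renormalized matrix elements h^R(z, Pz).

    z      : array of Wilson-line lengths (uniform grid, z[0] = 0)
    hB_Pz  : bare matrix elements h^B(z, Pz) at the boosted momentum
    hB_0   : bare matrix elements h^B(z, 0) at zero momentum
    """
    z = np.asarray(z, dtype=float)
    hB_Pz = np.asarray(hB_Pz, dtype=float)
    hB_0 = np.asarray(hB_0, dtype=float)
    norm = hB_0[0] / hB_Pz[0]                         # h^B(0,0)/h^B(0,Pz)
    i_s = np.searchsorted(z, z_s, side="right") - 1   # last grid point with z <= z_s
    hR = np.empty_like(hB_Pz)
    # z <= z_s: the ordinary ratio scheme
    hR[:i_s + 1] = norm * hB_Pz[:i_s + 1] / hB_0[:i_s + 1]
    # z > z_s: fixed denominator h^B(z_s, 0) plus the exponential mass counterterm
    hR[i_s + 1:] = (norm * hB_Pz[i_s + 1:] / hB_0[i_s]
                    * np.exp(dm_plus_m0 * (z[i_s + 1:] - z[i_s])))
    return hR
```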
With multiple lattice spacings, δ m and m_0 can be fit independently <cit.>; however, with only a single lattice spacing, it is simpler to fit the sum δ m + m_0 as one term by matching the P_z = 0 bare matrix elements to the perturbatively calculated “Wilson coefficients”.
The Wilson coefficients have only been explicitly calculated for operators O^(1) and O^(2).
Following Ref. <cit.>, we write these as
ℋ^(i)( 0, μ^2z^2 ) = 1 + α_s/2πC_A(-A^(i) L_z + B^(i))
where L_z = ln( 4e^-2γ_E/μ^2 z^2) and
A^(1) = 11/6, B^(1) = 4,
A^(2) = 11/6, B^(2) = 14/3.
We fit δ m + m_0 at short distances using the form
(δ m + m_0)z -I_0 ≈ln[ℋ(z,μ) /h^B(z,0) ],
where I_0 is a constant not dependent on z.
Typically, one would want to fit using three data points {z-a, z, z+a }, where a is the lattice spacing, but for coarse lattices, interpolation must be used between data points to get a reasonable fit.
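A sketch of this fit is given below: the one-loop Wilson coefficient above is divided by the (interpolated) h^B(z, 0), the logarithm is fit to a straight line in z, and the slope and negated intercept give δm + m_0 and I_0. The default A and B are the O^(1) values quoted above; α_s, μ, and the z grid (in consistent units, e.g. z in GeV^-1 and μ in GeV) are inputs, and the fit range is chosen by the user.

```python
import numpy as np

def wilson_coefficient(z, mu, alpha_s, A, B, CA=3.0):
    """One-loop Wilson coefficient from the expression above (consistent units)."""
    Lz = np.log(4.0 * np.exp(-2.0 * np.euler_gamma) / (mu**2 * z**2))
    return 1.0 + alpha_s / (2.0 * np.pi) * CA * (-A * Lz + B)

def fit_dm_plus_m0(z, hB_z0, mu, alpha_s, A=11.0/6.0, B=4.0):
    """Linear fit of ln[H(z, mu)/h^B(z, 0)] over the supplied short-distance points."""
    z = np.asarray(z, dtype=float)
    y = np.log(wilson_coefficient(z, mu, alpha_s, A, B) / np.asarray(hB_z0, float))
    slope, intercept = np.polyfit(z, y, 1)   # y = (dm + m0) z - I_0
    return slope, -intercept                 # (dm + m0, I_0)
```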
§.§ The Quasi-PDF and Lightcone Matching
The Fourier transform of the renormalized matrix elements defines the quasi-PDF, which gives the leading-order behavior of the PDF:
xg̃(x,P_z) = ∫_-∞^∞ dz P_z/2π e^ixP_z z h^R(z, P_z).
It is important to have matrix elements at large enough distances for the integral to converge.
In many cases this is not tractable, since the noise in the lattice matrix elements increases exponentially with distance;
however, based on minimal assumptions on the small-x form of the lightcone PDF, a model involving an exponential decay can be used for a large-distance extrapolation <cit.>.
Obtaining signal at far enough distances to reliably make this extrapolation is still difficult in the gluon case.
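As an illustration, a naive discretization of this transform is sketched below. It assumes renormalized matrix elements that are real and even in z, so the two-sided integral collapses to a cosine transform over a uniform z ≥ 0 grid; in practice the large-z extrapolation discussed above would be appended to h^R before transforming.

```python
import numpy as np

def x_quasi_pdf(x, z, hR, Pz):
    """Trapezoidal estimate of x*gtilde(x, Pz) from h^R(z, Pz) on a uniform z >= 0 grid."""
    z = np.asarray(z, dtype=float)
    hR = np.asarray(hR, dtype=float)
    dz = z[1] - z[0]
    weights = np.ones_like(z)
    weights[0] = weights[-1] = 0.5           # trapezoid end-point weights
    # h^R(-z) = h^R(z) assumed, so exp(i x Pz z) -> 2 cos(x Pz z) for the z > 0 half.
    integrand = weights * hR * np.cos(x * Pz * z)
    return 2.0 * Pz * dz / (2.0 * np.pi) * np.sum(integrand)
```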
The lightcone PDF is then related to the quasi-PDF through a matching relationship:
g̃(x,P_z) = ∫_-1^1 dy [ K_gg(x,y,μ/P_z)g(y,μ) + K_gq(x,y,μ/P_z)q(y,μ) ]
+ 𝒪(Λ_QCD^2/(xP_z)^2,Λ_QCD^2/((1-x)P_z)^2)
where K_gg(x,y,μ/P_z) and K_gq(x,y,μ/P_z) are the perturbatively calculated glue-glue and glue-quark matching kernels, g(y,μ) and q(y,μ) are the lightcone gluon and quark PDFs, and μ is the renormalization scale.
Lightcone PDFs are most often quoted in the modified minimal subtraction (MS) scheme.
The kernels handle matching between the lattice schemes and continuum schemes, as well.
The quasi-PDF matching kernels for O^(1) and O^(2) for ratio and hybrid-ratio renormalization to the MS are derived in Ref. <cit.>.
Only the pseudo-PDF matching kernels have been explicitly derived in the literature for O^(3) in the ratio scheme to MS <cit.>.
The numerical implementation of the integration in Eq. <ref> can be written as a matrix-vector multiplication and inverted to find the lightcone PDF from the quasi-PDF.
The perturbative scales (Λ_QCD^2/(xP_z)^2,Λ_QCD^2/((1-x)P_z)^2) suggest that the accuracy of the PDF is limited by the hadron momentum and that the PDF will be more accurate in the mid-x region.
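The matrix-vector implementation mentioned above can be sketched as follows (a rough illustration, not the authors' code): the y integral is discretized on the same grid as x so that the quasi-PDF is a matrix times the lightcone PDF, with the matrix built from a user-supplied kernel callable standing in for the explicit one-loop K_gg of the matching references; glue-quark mixing is neglected here, as in the pseudo-PDF studies cited above.

```python
import numpy as np

def invert_matching(x_grid, quasi, kernel, mu_over_Pz):
    """Solve quasi_i = sum_j K(x_i, y_j, mu/Pz) g(y_j) dy for the lightcone g."""
    x = np.asarray(x_grid, dtype=float)
    dy = x[1] - x[0]                      # uniform grid assumed
    K = np.array([[kernel(xi, yj, mu_over_Pz) for yj in x] for xi in x]) * dy
    return np.linalg.solve(K, np.asarray(quasi, dtype=float))
```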
§ BARE LATTICE MATRIX ELEMENTS
We perform high-statistics calculations on one ensemble with lattice spacing a ≈ 0.12 fm at two valence pion masses M_π≈ 310 and 690 MeV generated using 2+1+1 flavors of highly improved staggered quarks (HISQ) <cit.> by the MILC collaboration <cit.> with the lattice volume of 24^3 × 64.
Wilson-clover fermions are used in the valence sector and valence quark masses are tuned to reproduce the lightest light and strange masses of the HISQ sea.
The same valence-quark parameters are used by the PNDME collaboration <cit.>.
1,296,640 two point correlator measurements were performed across 1013 configurations to obtain the data presented in this paper.
For the three-point correlators, we consider two types of gauge smearing to improve the signal.
We look at data from configurations with five steps of hypercubic smearing (HYP5) applied to the gauge links in order to directly compare to previous results from our group <cit.>.
We also consider a more aggressively smeared lattice where we apply Wilson flow with flow time T=3 a^2 (Wilson-3) to the gauge links.
The two-point correlator is defined on the lattice as
C_H^2pt(P_z;t)=⟨ 0|Γ∫ d^3ye^-iy_zP_zχ(y⃗,t)χ(0⃗,0)|0 ⟩
where P_z is the hadron momentum in the spatial z-direction, t is the lattice Euclidean time, χ(y) is the interpolation operator for the specific hadron being analyzed, and Γ=1/2(1+γ_4) is the projection operator.
The three point correlator is then calculated by combining the gluon loop with the two point correlator.
The three point correlator is defined as
C_H^3pt(P_z;t_sep,t)=
⟨ 0|Γ∫ d^3ye^-iy_zP_zχ(y⃗,t_sep)O(z,t)χ(0⃗,0)|0⟩
where t_sep is the source-sink time separation and t is the gluon operator insertion time.
To judge how well the operators perform, we may compare the signal and behavior of the ratios of the two- and three-point correlators.
R_H(P_z;t_sep,t)= C_H^3pt(P_z;t_sep,t)/ C_H^2pt(P_z;t_sep)
We plot selected ratios for each hadron and operator for t_sep = 5a,7a,9a in Figs. <ref> and <ref> for the Wilson-3 and HYP5 smearings.
In these plots, we normalize such that the mean of the left center-most ratio in each plot for each operator is equal to one, otherwise, the results would not be easily comparable.
We see already at this point that in most cases, O^(3) has the best signal compared to the other operators and often very symmetrical behavior, which is to be expected for these plots.
We see that in some cases, the smaller t_sep data for O^(1) and O^(2) have larger error or different behavior than the other t_sep.
This is mostly due to these data being close to 0, so the normalization inflates some of the error and exaggerates some trends.
This is already suggestive that the best ground state matrix elements will likely come from O^(3).
The two and three point correlators can be fit to the energy eigenstate expression as,
C_H^2pt(P_z;t)=|A_H,0|^2e^-E_H,0t + |A_H,1|^2e^-E_H,1t + ⋯
C_H^3pt(z,P_z;t_sep,t) = |A_H,0|^2 ⟨ 0|O_g|0⟩ e^-E_H,0t_sep
+ |A_H,0| |A_H,1| ⟨ 0|O|1⟩ e^-E_H,0(t_sep-t) e^-E_H,1t
+ |A_H,0| |A_H,1| ⟨ 1|O|0⟩ e^-E_H,1(t_sep-t) e^-E_H,0t
+ |A_H,1|^2 ⟨ 1|O|1⟩ e^-E_H,1t_sep+ ⋯
The ground-state (first-excited-state) amplitudes and energies, A_H,0, E_H,0 (A_H,1, E_H,1), are obtained from the two-state fits of the two-point correlators.
⟨ 0|O_g|0 ⟩,⟨ 0|O|1⟩= ⟨ 1|O|0⟩, ⟨ 1|O|1⟩ are ground state and excited state matrix elements which are extracted from the two-state simultaneous fits to the three point correlator at multiple values of t_sep.
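The fit forms described here are summarized in the sketch below: the two-state models for the two- and three-point correlators (with ⟨0|O|1⟩ = ⟨1|O|0⟩) and the corresponding ratio used in the comparison plots. A least-squares driver such as scipy.optimize.curve_fit or a correlated-fit package would be wrapped around these model functions; that wrapper and the parameter ordering are illustrative choices, not the authors' actual code.

```python
import numpy as np

def c2pt_model(t, A0, A1, E0, E1):
    """Two-state model for the two-point correlator."""
    return A0**2 * np.exp(-E0 * t) + A1**2 * np.exp(-E1 * t)

def c3pt_model(t, tsep, A0, A1, E0, E1, O00, O01, O11):
    """Two-state model for the three-point correlator (O01 = <0|O|1> = <1|O|0>)."""
    return (A0**2 * O00 * np.exp(-E0 * tsep)
            + A0 * A1 * O01 * (np.exp(-E0 * (tsep - t) - E1 * t)
                               + np.exp(-E1 * (tsep - t) - E0 * t))
            + A1**2 * O11 * np.exp(-E1 * tsep))

def ratio_model(t, tsep, A0, A1, E0, E1, O00, O01, O11):
    """Fit reconstruction of R_H(t, tsep) compared to the data in the ratio plots."""
    return (c3pt_model(t, tsep, A0, A1, E0, E1, O00, O01, O11)
            / c2pt_model(tsep, A0, A1, E0, E1))
```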
The reliability of our fits for extracting the matrix elements can be checked by comparing the fits to the ratios as defined in Eq. <ref>.
If the excited state contamination is small, the ratios would eventually approach the ground state matrix element.
This is shown in the example ratio plots outlined in Figs. <ref>, <ref>, <ref>, <ref>, <ref> and <ref>.
Each figure represents one operator and one smearing type for strange nucleon, light nucleon, η_s and π.
The left most column of the example ratio plot shows R_H at different source-sink separations t_sep, along with reconstructions of the fit shown in the colored bands and the fitted ground state matrix elements represented by the grey band.
We observe that as we increase the t_sep the ratios and their respective reconstructed bands move towards the grey bands, upwards if the fitted matrix element is positive and downwards if the matrix element is negative.
As per Eq. <ref>, the ratios should be symmetric about the midpoint between source and sink; we see that this is the case for most of the lower t_sep, but the pattern deviates as we go to higher t_sep values.
This deviation is mainly due to statistical fluctuations and the lower signal-to-noise ratio at higher t_sep.
In general, however, the ratio plots do display the expected symmetry and approach the ground-state matrix element (gray band), attesting to the reliability of our fitting process.
Our choice of source-sink separation used in the fits plays a crucial role in the simultaneous fitting process.
We need to determine if our extracted ground state matrix element is stable for our choice of t^min_sep and t^max_sep.
In order to do so we study the t^min_sep and t^max_sep dependence. The middle column of the ratio plots outlined in Figs. <ref>, <ref>, <ref>, <ref>, <ref> and <ref> show the extracted matrix element as we vary the t^min_sep.
Our final choice for t^min_sep is indicated by the light green point in the middle column.
The plots demonstrate that the extracted matrix elements converge as we decrease t^min_sep and are within error of our final choice of ground-state matrix element, showing that our choice of t^min_sep is reliable.
We performed a similar analysis to determine the t^max_sep. The right most column of the ratio plots outlined in Figs. <ref>, <ref>, <ref>, <ref>, <ref> and <ref> show the extracted matrix element as we vary the t^max_sep.
Our final t^max_sep choice and the ground state matrix element used in the rest of the analysis is outlined by the light green point.
As the plots show, when we increase the t^max_sep the extracted matrix elements converge and stay within the error range (grey band) of our final matrix elements.
This shows that our choice of t^max_sep is consistent across the various values tested and therefore reliable.
Using the same process, we determined the t^max_sep and t^min_sep ranges for all hadrons across the three operators, two smearings, and the various Wilson-line displacements z and hadron boost momenta P_z.
With the bare matrix elements fit for each hadron, operator, and smearing, we can compare their behavior.
In Figs. <ref> and <ref>, we show the bare matrix elements h^B(z,P_z) for the Wilson-3 and HYP5 smearings.
The matrix elements are normalized such that h^B(0,0)=1 and not divided out by any kinematic factors.
The behavior of the matrix elements at fixed-z for different P_z is not necessarily monotonic in every case.
These effects are particularly apparent for O^(3) in the Wilson-3 case for all hadrons and for the mesons in both smearing cases (bottom two rows of each figure) for O^(1) and O^(2) operators.
Nonetheless, we expect renormalization to remove any factors (kinematic or otherwise) that could be producing this behavior.
More concerningly, we see in both smearing cases that O^(1) and O^(2) (first and second columns) both cross zero at different momenta and distances, while O^(3) (third column) stays above zero aside from some ambiguity due to statistical noise at large distances.
From Eq. <ref>, we can see that some of these noisy, near-zero, matrix elements may bring large errors into the renormalized matrix elements, especially those at z=0 or P_z = 0.
This is highly suggestive that O^(3) will likely produce more consistent renormalized matrix elements while O^(1) and O^(2) may not work as well.
§ RESULTS AND DISCUSSION
§.§ Ratio Renormalized Matrix Elements
With the bare matrix elements, we may follow Eq. <ref> with z_s →∞ to get ratio renormalized matrix elements.
For each hadron and operator, we plot our results for Wilson-3 and HYP5 smearing in Figs. <ref> and <ref>, respectively.
We plot the data against the unitless and invariant Ioffe time ν = zP_z so as to be able to compare results from different P_z.
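For concreteness, the sketch below illustrates this step with a standard double-ratio combination and the conversion to Ioffe time. The paper's actual scheme is its Eq. <ref> with z_s → ∞, which may differ in detail; the dictionary of bare matrix elements and the use of the a ≈ 0.12 fm lattice spacing quoted later are purely illustrative assumptions.

```python
import numpy as np

def ratio_renormalize(h_bare, z, pz):
    # Common double-ratio combination of bare matrix elements h^B(z, P_z);
    # meant only as a stand-in for the ratio scheme referenced in the text.
    return (h_bare[(z, pz)] / h_bare[(z, 0)]) * (h_bare[(0, 0)] / h_bare[(0, pz)])

def ioffe_time(z_lat, pz_gev, a_fm=0.12):
    # nu = z * P_z, converting z from lattice units to GeV^-1 (hbar*c = 0.1973 GeV fm)
    return z_lat * a_fm / 0.1973 * pz_gev

# toy usage with matrix elements keyed by (z, P_z) in lattice units
h_bare = {(0, 0): 1.0, (0, 2): 0.9, (3, 0): 0.5, (3, 2): 0.3}
print(ratio_renormalize(h_bare, z=3, pz=2), ioffe_time(3, 1.71))
```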
To improve the clarity of the graph, we remove the many points with error over 200% or with means of magnitude greater than three.
Note that the horizontal range increases in the plots from left to right and that the vertical range of the O^(3) plots (rightmost column) is smaller than the first two.
In these two figures, we can immediately see that O^(1) and O^(2) (left two columns) both seem to have poor signal: the matrix elements diverge to infinity and, primarily in the meson cases, are very inconsistent between different momenta.
These effects likely come from zero-crossings in the bare matrix elements.
We see that O^(2) (middle column) has reasonably smooth behavior in the nucleon cases (top two rows).
However, O^(3) has by far the smoothest behavior and does not cross zero at a magnitude of more than 1σ.
At this level, it is clear that the signal and behavior of the ratio renormalized O^(3) matrix elements are much better than the other operators.
We also know from the many previous studies of the gluon PDF through pseudo-PDF matching that O^(3) produces matrix elements and PDFs comparable to phenomenological results <cit.>.
It is worth exploring whether the behavior of the first two operators captures phenomenological behavior in any way.
We narrow down to the more commonly phenomenologically studied nucleon gluon PDF, taking the CT18 <cit.> gluon PDF at MS scheme scale μ=2.0 GeV, and use Eq. <ref> with the ratio matching kernels from Ref. <cit.> to obtain a quasi-PDF.
We ignore the glue-quark mixing term in this case, as it has been shown to be small in the pseudo-PDF studies.
We Fourier transform the quasi-PDF back to position space so that we have “phenomenological matrix elements” with which to compare the O^(1) and O^(2) matrix elements.
We plot the O^(1) and O^(2) operator results for the strange and light nucleons compared to the phenomenological matrix elements in Figs. <ref> and <ref> for the Wilson-3 and HYP5 smearing respectively.
We use the asymmetrical error formula to get the error bars for the phenomenological results.
We can see that the phenomenological matrix elements are reasonably consistent in this range across different P_z and that they decay much more slowly than the lattice matrix elements.
In the top right plot in each figure, we see that the strange nucleon O^(2) results agree best with the phenomenological results at smaller ν; however the phenomenological results capture no sign change.
These results together suggest that on top of the poorer signal in the raw data, there may also be systematic contaminations in these two operators as suggested by Ref. <cit.> in the context of short-distance behavior.
It will be interesting to see if hybrid renormalization can make up at all for the divergent behavior of the matrix elements.
§.§ Hybrid-Ratio Renormalized Matrix Elements
§.§.§ Operators O^(1) and O^(2)
We only have the necessary information to apply hybrid renormalization to O^(1) and O^(2), so we wish to explore this to see if the results change and to possibly make an ansatz for the hybrid renormalization of O^(3).
For operators O^(1) and O^(2), we have the Wilson coefficients as defined in Eq. <ref>, so we may fit δ m + m_0 from Eq <ref> and apply hybrid renormalization with μ=2.0 GeV.
As stated before, the lattice spacing a≈ 0.12 fm is too coarse to capture the range of linear behavior in the small-z region, so we interpolate h^B(z,0) to get finer data to apply the fit.
We fit the interpolated data to Eq. <ref> with points {z-0.2, z, z+0.2} in units of fm, varying z.
We show these results for the δ m + m_0 versus z for the two operators, for each hadron for the two smearings, Wilson3 (left) and HYP5 (right), in Fig. <ref>.
We see that the fits are not consistent between the different values of smearing and for different operators, which are both expected results.
The hadron and pion mass also have noticeable effects on the fits.
The behavior is consistent in that a larger pion mass results in a smaller δ m + m_0, and the fitted values for the nucleon are smaller than those of the mesons.
Ref. <cit.> summarizes a few reasons why the m_0 fit will depend on the specific matrix element fit, and we confirm that this is non-negligible at this level.
We choose the z that results in the minimum δ m + m_0 for the final value for the hybrid renormalization of each respective operator, hadron, and smearing.
We make this choice because the minimum seems to correspond to the region around which the fit is most stable.
The z here seems to be large enough that the logarithm in the Wilson coefficient is not diverging and small enough that perturbation theory still holds.
We leave it to future studies to consider how scale variation, leading renormalon resummation, and renormalization group resummation affect these fits <cit.>.
Now that δ m + m_0 has been fit for O^(1) and O^(2) for each hadron and smearing, we can see if anything has changed in the behavior of the renormalized matrix elements.
We show the hybrid-ratio renormalized matrix elements with z_s = 0.24 fm in Fig. <ref> and <ref> for Wilson-3 and HYP5 smearing respectively.
Again, we remove the many points with error over 200% or means with magnitude greater than three for clarity and use the same plot ranges as for the ratio renormalized matrix elements.
The hybrid renormalized matrix elements exhibit similar behavior to the ratio renormalized ones, overall.
We see that O^(1) (left column) has poor signal and gives inconsistent results at different momenta in nearly all cases.
We see again that the cleanest and most consistent signal comes from the strange nucleon and O^(2) (right column), with more divergent behavior in the mesons (bottom two rows); however, the crossing below zero and overall divergent behavior at such short distances are still concerning for these operators.
At this level, it would not appear that the hybrid renormalization has made up for the ratio renormalized matrix elements' quick decay, but it is worth exploring the phenomenological matrix elements obtained using the hybrid-ratio matching kernels instead.
We plot the O^(1) and O^(2) operator results for the strange and light nucleons compared to the phenomenological matrix elements in Figs. <ref> and <ref> for the Wilson-3 and HYP5 smearing results respectively.
We again use the asymmetrical error formula for the phenomenological error bars, which clearly affects the results just after ν_s.
We also quickly see that there is an interesting bump in the phenomenological matrix elements at ν_s = z_sP_z and also that the lattice matrix elements do not capture this bump at all.
In almost every case, the lattice matrix elements diverge quickly and cross zero in a way that is also not captured by the phenomenological matrix elements.
This suggests that the O^(3) operator needs to be considered more closely.
§.§.§ Operator O^(3)
Although we do not have the Wilson coefficients for O^(3) to fit δ m + m_0 directly, we want to take a guess at what a hybrid-ratio renormalized matrix elements may look like from this operator.
We emphasize that this and the next section are hypothetical and rely on an educated, but still subjective, guess at a value of δ m + m_0, along with using data that is likely smeared too much and has a much heavier than physical pion mass.
We considered our cleanest data, the strange nucleon with Wilson-3 smearing, and measured further to z=23a.
We only consider data up to z=13a, as the data becomes too noisy and likely contaminated by finite-volume effects beyond this point.
To guess δ m + m_0, we fit the zero momentum bare matrix elements between z = 7a-13a to a fit form h^fit(z,0) = Ae^-δ mz, resulting in δ m = 0.65(94) GeV.
Though not used in the final methodology here, some preliminary tests for O^(1) and O^(2) found that fitting δ m like this before fitting m_0 consistently resulted in negative values of m_0 with magnitude O(100) MeV.
Starting from this information and attempting to get reasonable “bump” behavior seen in the phenomenological results for the other two operators, we decided δ m + m_0= 0.5 GeV was a reasonable guess.
Above this point, the bump becomes unreasonably large, below this point, the matrix elements seem to decay too fast.
Again, we emphasize that the choice is subjective and that an objective result calls for the O^(3) Wilson coefficients.
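A minimal sketch of the exponential fit used for δ m is given below. The fit form A e^{-δ m z} and the z = 7a–13a window follow the text; the numerical values of the zero-momentum matrix elements and the initial guesses are illustrative placeholders only.

```python
import numpy as np
from scipy.optimize import curve_fit

A_FM, HBARC = 0.12, 0.1973          # lattice spacing (fm) and hbar*c (GeV fm)

def decay_model(z_fm, amplitude, delta_m_gev):
    # h^fit(z, 0) = A * exp(-delta_m * z); z in fm, delta_m in GeV
    return amplitude * np.exp(-delta_m_gev * z_fm / HBARC)

# zero-momentum bare matrix elements at z = 7a ... 13a (illustrative values only)
z_fm = np.arange(7, 14) * A_FM
h_bare_p0 = decay_model(z_fm, 0.05, 0.65) * (1.0 + 0.02 * np.cos(z_fm))

popt, pcov = curve_fit(decay_model, z_fm, h_bare_p0, p0=(0.1, 0.5))
print(f"delta_m = {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f} GeV")
```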
We plot our guess at the hybrid renormalized O^(3) matrix elements with z_s = 0.24 fm in Fig. <ref>.
We see that with this choice, we recreate the bump after ν_s, which is largest for the smallest P_z and is gradually smoothed out at larger momentum.
After the bump, the different P_z matrix elements start to become compatible again, as with the phenomenological results seen before.
We move forward with the P_z = 1.71 GeV data, as it seems to be on a convergent path while the largest momentum likely displays more finite-volume effects.
§.§ Quasi-PDF
Under the assumption that the small-x behavior of the lightcone PDF trends like x^-α, one may use an ansatz of the form <cit.>
h^R(z,P_z) ≈ Ae^-mν/|ν|^d
to fit the large-ν data, where A, m and d are fitted parameters.
We use this form to fit our P_z = 1.71 GeV data from z=9a to 13a.
At this level of statistical precision and with only five data points, it is hard to separate the algebraic and exponential decay, causing a large amount of instability in the fitted parameters.
Nonetheless, we plot the results of this fit in Fig. <ref>.
We see qualitatively that the fit agrees well with the data.
At the largest ν, the error becomes smaller than the data, and the mean decays quickly, suggesting that we can get a good Fourier transform.
We perform an interpolation of the smaller-ν data and then use the extrapolated data beyond around ν=10 to get a Fourier transform of the matrix elements, corresponding to xg̃(x,P_z).
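The numerical step is sketched below: the interpolated short-distance data are stitched onto the fitted large-ν ansatz and a cosine transform is taken. The stitching point, grid extents, fit parameters, and placeholder data are all illustrative assumptions, and the overall normalization and kinematic prefactors of the quasi-PDF are convention dependent and omitted here.

```python
import numpy as np

def ansatz(nu, A, m, d):
    # large-nu extrapolation h^R(nu) ~ A * exp(-m * nu) / |nu|^d
    return A * np.exp(-m * nu) / np.abs(nu) ** d

def quasi_pdf_x(x, nu, h):
    # schematic cosine transform of a real, nu-symmetric matrix element
    integrand = h * np.cos(x * nu)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(nu)) / np.pi

nu_cut, nu_max = 10.0, 60.0
nu_small = np.linspace(0.0, nu_cut, 200)
nu_large = np.linspace(nu_cut, nu_max, 500)
h_small = np.exp(-0.1 * nu_small)                      # placeholder for interpolated lattice data
m_fit, d_fit = 0.1, 1.5                                # hypothetical fitted decay parameters
A_fit = h_small[-1] * nu_cut**d_fit * np.exp(m_fit * nu_cut)   # match the ansatz at nu_cut
h_large = ansatz(nu_large, A_fit, m_fit, d_fit)

nu_all = np.concatenate([nu_small, nu_large[1:]])
h_all = np.concatenate([h_small, h_large[1:]])
xs = np.linspace(0.01, 1.0, 100)
xg_tilde = np.array([quasi_pdf_x(x, nu_all, h_all) for x in xs])
```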
We show these results in Fig. <ref>.
We see that the uncertainty in the large-ν data seems to mostly affect the small-x region.
Because the data and extrapolation do not go below zero, we see a finite value of xg̃(x=0,P_z).
Interestingly, the quasi-PDF is negative in a range around x ∈ [0.4,0.85], with parts of this range lying statistically significantly below zero.
It would be illuminating to see whether this effect is washed out by either a proper fit of δ m + m_0 with the Wilson coefficients or whether this is something that is taken care of by the lightcone matching.
Overall, we are able to get the first gluon quasi-PDF from lattice data, but this required guesswork for the hybrid renormalization due to the missing Wilson coefficient, as well as a much heavier than physical pion mass and a large amount of gauge-link smearing, both of which likely affect the physics.
Nonetheless, this shows that we are very close to being able to extract a gluon PDF through LaMET.
Further signal improvements will be necessary to make a more confident extrapolation of the matrix elements and improve the error bars.
§.§ Exploration of Coulomb Gauge Fixing
It has been recently suggested and shown for quark PDFs and transverse momentum distributions (TMDs) that fixing to the Coulomb gauge and removing the Wilson line from the operator definitions reduces noise in the calculation, results in consistent lightcone PDFs, and sees minimal systematic uncertainty from Gribov copies <cit.>.
We wish to explore this for the gluon, naively following the methodology of Ref. <cit.>.
We consider only Wilson-3 smeared data for the light nucleon for this preliminary study.
After applying the smearing, we fixed to the Coulomb gauge to an accuracy of 10^-7 and measured each operator defined in Eqs. <ref>, <ref>, and <ref> with the Wilson lines removed.
We plot the bare matrix elements for Coulomb gauge (CG) (opaque markers) and gauge invariant (GI) (lighter markers) operators in Fig. <ref> for each operator.
We see, as expected, that the z=0 GI and CG matrix elements all agree well within statistical errors, except for the smallest two nonzero momenta for O^(2).
Whatever the cause of this disagreement, it is reduced at larger momenta.
Interestingly, the P_z = 0 and 0.427 GeV CG matrix elements begin to disagree significantly with the GI matrix elements at z=a, while the larger momenta data are in better agreement until about z=2a-3a.
The CG data decays much faster than the GI results, as one would expect from the highly smeared gauge links in the GI results.
Overall, these observations suggest that the Coulomb gauge fixing is working as expected for these operators but large momenta may be more desirable to achieve the most consistent results at short distances.
Following what was done for the quarks, we implement hybrid renormalization of the CG gluon matrix elements using Eq. <ref>, setting δ m = m_0 = 0.
We present the hybrid renormalized matrix elements in Fig. <ref> for each operator.
It should be noted that the plots for O^(1) and O^(2) are missing the P_z = 1.71 and 2.14 GeV data respectively because h^B(0,P_z) in each case overlaps with zero, so the normalization term has well over 100% error.
The same thing occurs to a lesser extent for P_z = 1.71 GeV in O^(2) as well, resulting in poor convergence.
We see that O^(1) has particularly good agreement with the GI results at short distances, while the agreement for the other operators is not as good.
This may suggest different contaminations occur in the Coulomb gauge for these operators.
The gauge fixing still does not seem to fix the inconsistent behavior at different momenta for O^(1).
However, every operator now seems to converge to zero much more quickly, with far improved signal.
It is tempting to be wary about the sharp behavior at ν_s, especially at low momentum, but it seems smoothed out at larger momentum, just like the sharp behavior seen in the phenomenological results for the GI matrix elements.
If this sharper decay behavior is confirmed to be reasonable by a calculation of the matching kernel for the Coulomb gauge operators applied to phenomenological results, Coulomb gauge fixing could be a major step forward in gluon PDFs from the lattice.
More numerical study must be done here, too.
Smaller lattice spacings, larger volumes, less smearing, for example, should be considered.
It could be useful to consider more operators, as well.
§ CONCLUSION AND OUTLOOK
We have presented our progress towards obtaining the first gluon PDF through LaMET with hybrid-ratio renormalization.
We consider three operators through which the quasi-PDF can be studied: O^(1) and O^(2) (Eqs. <ref> and <ref>) which have recently had their Wilson coefficients and hybrid-renormalization matching kernels derived <cit.> and O^(3) (Eq. <ref>) that was used in pseudo-PDF studies <cit.>.
We found that operators O^(1) and O^(2) consistently have poorer signal than O^(3) across hadrons and smearing techniques.
We suggest that the O^(1) and O^(2) bare matrix elements crossing zero causes their ratio and hybrid-ratio renormalized matrix elements to have poor consistency between different momenta and to diverge towards ±∞.
We confirm that the behaviors in the renormalized matrix elements O^(1) and O^(2) do not reproduce the behavior of the matrix elements reconstructed from the CT18 nucleon gluon PDF global fit <cit.>.
We found that O^(3) for the nucleon with M_π≈ 690 MeV and Wilson-3 smearing has the best signal and used it to get a tentative first look at the hybrid renormalization for this operator by fitting δ m and making an estimate of m_0.
We found a balance of high momentum and good signal with the P_z = 1.71 GeV matrix elements, allowing us to fit a long-distance extrapolation and produce a quasi-PDF from this tentative data.
Overall, we suggest that operator O^(3) is likely the best for studying the gluon PDF through LaMET;
we can obtain a quasi-PDF from this operator, but only in the case of heavy pion mass and a large amount of smearing, which may change the physics.
We conclude that numerical improvements are still needed to obtain a reliable long-range extrapolation with data that is closer to physical.
Finally, we explored the recent idea of Coulomb gauge fixing to improve signal of the matrix elements for the quark quasi-PDF and TMD <cit.>.
We naively follow the methodology for the quark, presenting a first limited study of gluon matrix elements from the lattice with Coulomb gauge fixing.
We found from our limited exploration that O^(2) and O^(3) show slightly different short-distance behavior between the Coulomb gauge and gauge-invariant results, possibly suggesting different contamination in the Coulomb gauge for these operators.
High momentum will be needed to smooth out the sharp behavior at ν_s, but overall, Coulomb gauge fixing greatly improved the signal.
We have made major progress towards a gluon PDF from LaMET and identified more work to be done.
Once the Wilson coefficients and the hybrid-ratio matching kernel for O^(3) are derived explicitly, we can confirm our estimate for δ m + m_0.
Further numerical improvements will allow us to go to higher momentum and obtain more reliable long distance extrapolations for our matrix elements.
Further work on the Coulomb gauge fixed gluon operators is needed from both the perturbative QCD side and the numerical side to fully utilize their power to improve the signal.
§ ACKNOWLEDGMENTS
We thank Jian-Hui Zhang for clarifying details for the O^(2) operator Wilson coefficients and matching kernels.
We thank Yong Zhao, Xiangdong Ji, and many others who attended the LaMET2024 workshop for useful comments on this project.
We thank the MILC Collaboration for sharing the lattices used to perform this study.
The LQCD calculations were performed using the Chroma software suite <cit.>.
This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 through ERCAP;
facilities of the USQCD Collaboration, which are funded by the Office of Science of the U.S. Department of Energy,
and supported in part by Michigan State University through computational resources provided by the Institute for Cyber-Enabled Research (iCER).
The work of WG is partially supported by the MSU University Distinguished Fellowship and by the U.S. Department of Energy, Office of Science, under grant DE-SC0024053 “High Energy Physics Computing Traineeship for Lattice Gauge Theory”.
The work of KH is partially supported by the Professional Assistant program at Honors College at MSU and by the US National Science Foundation under grant PHY 2209424.
The work of HL is partially supported by the US National Science Foundation under grant PHY 2209424, and by the Research Corporation for Science Advancement through the Cottrell Scholar Award.
|
http://arxiv.org/abs/2409.02617v1 | 20240904111917 | PUB: Plot Understanding Benchmark and Dataset for Evaluating Large Language Models on Synthetic Visual Data Interpretation | ["Aneta Pawelec", "Victoria Sara Wesołowska", "Zuzanna Bączek", "Piotr Sankowski"] | cs.CL | ["cs.CL"] |
PUB: Plot Understanding Benchmark and Dataset for Evaluating Large Language Models on Synthetic Visual Data Interpretation
Aneta Pawelec, Victoria Sara Wesołowska, Zuzanna Bączek, Piotr Sankowski
===========================================================================================================================
§ ABSTRACT
The ability of large language models (LLMs) to interpret visual representations of data is crucial for advancing their application in data analysis and decision-making processes. This paper presents a novel synthetic dataset designed to evaluate the proficiency of LLMs in interpreting various forms of data visualizations, including plots like time series, histograms, violins, boxplots, and clusters. Our dataset is generated using controlled parameters to ensure comprehensive coverage of potential real-world scenarios. We employ multimodal text prompts with questions related to visual data in images to benchmark several state-of-the-art models like ChatGPT or Gemini, assessing their understanding and interpretative accuracy.
To ensure data integrity, our benchmark dataset is generated automatically, making it entirely new and free from prior exposure to the models being tested. This strategy allows us to evaluate the models' ability to truly interpret and understand the data, eliminating possibility of pre-learned responses, and allowing for an unbiased evaluation of the models' capabilities. We also introduce quantitative metrics to assess the performance of the models, providing a robust and comprehensive evaluation tool.
Benchmarking several state-of-the-art LLMs with this dataset reveals varying degrees of success, highlighting specific strengths and weaknesses in interpreting diverse types of visual data. The results provide valuable insights into the current capabilities of LLMs and identify key areas for improvement. This work establishes a foundational benchmark for future research and development aimed at enhancing the visual interpretative abilities of language models. In the future, improved LLMs with robust visual interpretation skills can significantly aid in automated data analysis, scientific research, educational tools, and business intelligence applications.
§ INTRODUCTION
In recent years, large language models (LLMs) have demonstrated remarkable capabilities in understanding and generating human language <cit.>. These advancements have sparked significant interest in their potential applications across diverse domains, including natural language processing and automated reasoning. As LLMs evolve, there is increasing emphasis on their multimodal capabilities <cit.>, particularly their ability to interpret and analyze visual data representations. This encompasses interpreting series plots, clusters, histograms, boxplots, and violin plots—areas where current LLMs encounter notable challenges. Integrating visual and textual data remains a complex task, underscoring the need for further advancements to enhance the effectiveness of LLMs in comprehensive data analysis.
A significant concern in the development and evaluation of multimodal LLMs is data contamination. When a model is trained or evaluated on data that it has previously encountered, the results can be misleading. For instance, <cit.> used data generated by GPT-4 from publicly available Internet sources, while <cit.> relied on Kaggle data. Such practices can result in an overestimation of the true capabilities of the model and do not accurately reflect its generalization performance <cit.>. This issue compromises the reliability of the benchmarks and impedes research progress by presenting an inflated view of the performance of the model.
To address these challenges, we introduce the Plot Understanding Benchmark (PUB), a novel synthetic data set designed to evaluate the proficiency of LLMs in interpreting various forms of data visualization. Our dataset is generated using controlled parameters to ensure comprehensive coverage of potential real-world scenarios. This approach not only eliminates the risk of data contamination, but also ensures that the evaluation is based solely on the model’s ability to interpret and understand visual data rather than relying on previous knowledge, maintaining its validity over time and ensuring that future evaluations remain unbiased and reflective of real model capabilities.
The nature of our benchmark allows for quantifying the LLM's responses to various kinds of visualizations and assessing its performance in a nuanced way. By manipulating different parameters within the generated plots, such as axis scales, data density, color schemes, and overall plot shape, we can systematically control the visual inputs presented to the LLM. This parameterization enables us to identify which aspects of the visualizations most significantly impact the LLM's ability to interpret and respond accurately. Furthermore, it allows for a detailed analysis of the LLM's sensitivity to specific features, thus providing insights into the underlying mechanisms of visual data interpretation. Through this approach, we can pinpoint strengths and weaknesses in the LLM's performance, facilitating targeted improvements and advancements in the development of more robust and capable models. Additionally, this method allows us to observe the conditions under which the model is more likely to produce hallucinations, enabling a clearer understanding of the factors that influence the odds of such occurrences and paving the way for mitigating these issues in future iterations of LLMs.
Furthermore, we introduce quantitative measures for assessing the performance of models, which have been previously missing in this context. These measures provide a robust and accurate tool for evaluating model capabilities, ensuring that our assessments are grounded in objective metrics rather than subjective evaluations.
Our study benchmarks several state-of-the-art LLMs such as GPT-4 <cit.>, Gemini <cit.>, or Claude <cit.> revealing varying degrees of success in interpreting different types of visual data. The results highlight specific strengths and weaknesses, offering valuable insights into the current capabilities of LLMs and identifying key areas for future improvement.
The implications of our findings are far-reaching. Enhancing the visual interpretative abilities of LLMs can significantly advance automated data analysis, scientific research, educational tools, and business intelligence applications. As such, this work not only provides a foundational benchmark for evaluating LLMs' visual interpretation skills but also lays the groundwork for future research and development aimed at creating more robust and versatile language models.
In the following sections, we detail the construction of our synthetic dataset, the methodology for benchmarking LLMs, the results of our evaluations, and the potential future applications of improved LLMs in various domains.
§ RELATED WORK
Recent advancements in LLMs have demonstrated their impressive capabilities in understanding and generating human language. However, extending these abilities to multimodal tasks, particularly in visual data interpretation, presents unique challenges that have sparked considerable research interest.
§.§ Multimodal Large Language Models
Multimodal LLMs (MLLMs) aim to integrate text with visual or other non-textual data to enhance understanding and reasoning. For example, Li et al. <cit.> introduced SEED-Bench, a benchmark designed to evaluate the proficiency of multimodal LLMs in processing and generating visual content. Their work highlights the importance of assessing LLMs' capabilities across diverse visual tasks, emphasizing the need for comprehensive benchmarks in this domain.
Similarly, Zhang et al. <cit.> proposed M3exam, a multilingual and multimodal benchmark to evaluate LLMs' performance across different tasks and languages, highlighting the versatility and potential of these models in handling complex, real-world scenarios.
§.§ Benchmarking and Evaluation
The development of benchmarks plays a crucial role in advancing the field by providing standardized methods to assess and compare the performance of different models. Chen et al. <cit.> introduced VisEval, a benchmark specifically designed to evaluate LLMs in visual data analysis. Their work underscores the challenges LLMs face in interpreting complex visual representations and the importance of robust evaluation methodologies to ensure accurate performance assessments.
Further, Gadre et al. <cit.> discussed the importance of dataset design in benchmarking, introducing DATACOMP as a new standard for multimodal dataset creation. This work highlights the significance of deduplication and data-centric approaches in enhancing the generalization capabilities of LLMs.
§.§ Challenges in Visual Data Interpretation
The ability of LLMs to accurately interpret and analyze visual data remains an underexplored area. Malode <cit.> emphasized the need for optimized LLMs capable of handling multimodal data, identifying key challenges such as data contamination and overestimation of model performance due to exposure to training data during evaluation.
Moreover, McIntosh et al. <cit.> critically analyzed the inadequacies of current benchmarking practices, especially in the context of generative AI. Their work calls for a re-evaluation of existing benchmarks to better reflect the complex, multimodal nature of real-world tasks.
§ DATASET CONSTRUCTION
The objective of this paper is to benchmark the capability of multimodal models in understanding and interpreting various types of plotted data. To achieve this, we generate a variety of synthetic datasets, and create diverse visualizations of the results. This chapter details the steps involved in creating these datasets, ensuring they provide a comprehensive basis for evaluating the performance of multimodal models.
§.§ Time Series
Data Generation
Our priority was to create artificial plots that resemble real-world data as closely as possible. To achieve this, we generate random time series data using a random walk process and a geometric random walk, as these methods are widely used in financial and economic data analysis <cit.>. The random walk process is defined as follows:
W_t = ∑_i=1^t X_i
where X_i are independent and identically distributed random variables with a mean of zero and a standard deviation of one.
The geometric random walk is defined as follows:
S_t = S_0 exp( ( μ - σ^2/2) t + σ W_t )
where S_0 is the initial value, μ is the drift, σ is the variance, and W_t is the random walk process.
This process is parameterized by two additional values: drift and variance. The drift, μ, represents the overall trend of the time series. A positive drift indicates an upward trend over time, suggesting consistent growth, whereas a negative drift implies a downward trend, indicating a gradual decline. The variance, σ, quantifies the degree of volatility or fluctuation in the time series. A higher variance means the data points are more spread out from the trend, leading to larger and more frequent changes, while a lower variance results in data points that are closer to the trend, producing smoother and more stable movements.
Having those two additional parameters allows us to measure how well the model responds to different kinds of plots and quantify its performance in a more nuanced way. The drift and variance values are randomly sampled from a predefined range to ensure a diverse set of time series plots with varying characteristics.
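A minimal generator following the two formulas above is sketched below; the specific sampling ranges for drift and volatility, and the use of standard normal increments for X_i, are illustrative assumptions rather than the benchmark's exact settings.

```python
import numpy as np

rng = np.random.default_rng()

def geometric_random_walk(n_steps, s0=1.0, drift=0.0005, sigma=0.02):
    # W_t is a cumulative sum of i.i.d. increments with mean 0 and std 1;
    # drift and sigma control the trend and volatility of S_t.
    t = np.arange(1, n_steps + 1)
    w = np.cumsum(rng.standard_normal(n_steps))
    return s0 * np.exp((drift - 0.5 * sigma**2) * t + sigma * w)

# drift and volatility sampled from (assumed) predefined ranges
drift = rng.uniform(-0.002, 0.002)
sigma = rng.uniform(0.005, 0.05)
series = geometric_random_walk(500, drift=drift, sigma=sigma)
```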
Data Transformation and Anomaly Introduction
To enhance the realism of artificial time series data and evaluate model robustness, we apply several data transformation techniques and introduce anomalies. Data smoothing is achieved using a moving average filter, where the window size is determined by a smoothing factor, 0 < factor < 1, to stabilize the data and aid trend identification. Pointwise anomalies simulate unexpected deviations at random points, with both the number and magnitude of anomalies being randomly determined to test the model's ability to detect and manage irregularities. To introduce variability in scale and offset, we randomize data ranges applying random shifts (Δ_x, Δ_y) and scaling factors (scale_x, scale_y), assessing the resilience of the model to changes in the scale and range of the data. In addition, consecutive data points are randomly removed to simulate missing-data scenarios, reflecting real-world issues such as sensor failures.
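The transformations described here can be sketched as follows; window fractions, anomaly counts, and magnitudes are illustrative defaults, not the benchmark's exact parameters.

```python
import numpy as np

rng = np.random.default_rng()

def smooth(y, factor=0.05):
    # moving-average filter whose window is a fraction of the series length
    window = max(1, int(len(y) * factor))
    return np.convolve(y, np.ones(window) / window, mode="same")

def add_point_anomalies(y, max_anomalies=3, magnitude=5.0):
    # inject a random number of pointwise spikes of random sign and size
    y = y.astype(float).copy()
    idx = rng.choice(len(y), size=int(rng.integers(1, max_anomalies + 1)), replace=False)
    y[idx] += rng.uniform(-magnitude, magnitude, size=idx.size) * y.std()
    return y, idx

def drop_consecutive(y, frac=0.05):
    # remove a consecutive block of points to mimic missing data
    n_missing = max(1, int(len(y) * frac))
    start = int(rng.integers(0, len(y) - n_missing))
    y = y.astype(float).copy()
    y[start:start + n_missing] = np.nan
    return y, (start, start + n_missing)
```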
§.§ Clusters
Data Generation
In order to check model ability to understand and interpret visualisation of clustered data we create diverse synthetic dataset by varying the number of clusters and samples. The number of clusters and the number of samples per cluster are randomly chosen to introduce variability. Cluster standard deviations are also randomized to ensure different degrees of overlap or separation between clusters. The data is generated using isotropic Gaussian blobs, and metadata such as cluster center location and standard deviations are recorded for each dataset for later evaluation.
Clustering Algorithms
To ensure a comprehensive assessment, we apply a variety of clustering algorithms, randomly selecting from K-Means, Mean Shift, DBSCAN, or no clustering for each sample dataset. Different algorithms reveal diverse patterns and structures, providing a thorough evaluation of the model's capabilities. This approach tests the robustness of the model across various clustering scenarios and enhances its applicability to real-world data, where diverse clustering behaviors are common. Random selection ensures a broad range of clustering scenarios and, for algorithms requiring parameters, these are also randomly determined.
The clustering results are visualized using scatter plots with randomised options for marker styles, colors, and additional plot elements like legend.
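A sketch of this generation step using scikit-learn is given below; the sampling ranges for cluster counts, sample sizes, standard deviations, and DBSCAN's eps are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, MeanShift, DBSCAN

rng = np.random.default_rng()

n_clusters = int(rng.integers(2, 7))
n_samples = int(rng.integers(50, 301))
cluster_std = rng.uniform(0.5, 3.0, size=n_clusters)

# isotropic Gaussian blobs; centers are kept as metadata for later evaluation
X, y_true, centers = make_blobs(
    n_samples=n_samples, centers=n_clusters,
    cluster_std=cluster_std, return_centers=True,
)

algorithm = rng.choice(["kmeans", "meanshift", "dbscan", "none"])
if algorithm == "kmeans":
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
elif algorithm == "meanshift":
    labels = MeanShift().fit_predict(X)
elif algorithm == "dbscan":
    labels = DBSCAN(eps=float(rng.uniform(0.3, 1.5))).fit_predict(X)
else:
    labels = np.zeros(len(X), dtype=int)   # no clustering applied
```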
§.§ Histograms
Data Generation
To evaluate the models' ability to interpret histogram visualizations, we generate diverse synthetic datasets characterized by various parameters, including distribution type, size, and additional specifics such as mean, standard deviation, or skewness, to capture a wide range of real-world scenarios. The datasets are generated to follow several types of distributions, including uniform, normal, exponential, Poisson, multimodal, and skewed (both left and right) distributions. This variety ensures that models encounter a broad spectrum of typical and complex statistical patterns. Parameters for each distribution type are randomly determined within realistic bounds to ensure variability. For example, the mean and standard deviation for normal distributions or the lambda for Poisson distributions are varied to create diverse datasets. Additionally, anomalies such as extra bins outside the normal range or the removal of certain bins are introduced randomly to test the models’ robustness and ability to detect irregular patterns.
§.§ Boxplots and Violin Plots
Data Generation
We generate synthetic datasets to evaluate models on both boxplots and violin plots, using a variety of statistical distributions. For boxplots, data includes normal, log-normal, exponential, and mixed distributions, with varied parameters like mean, standard deviation, and scale.
For violin plots, we use normal, log-normal, exponential, gamma, beta, Weibull, Cauchy, uniform, and triangular distributions. Datasets feature 5-10 series and 50-100 points, with randomized parameters to simulate diverse real-world scenarios.
§.§ Visualisation
To create a more diverse dataset, we employ various visualization techniques with randomized settings, such as line colors, marker styles, grid presence, and axis scaling. This randomization introduces variability, ensuring that the models are tested under a wide range of visual conditions.
For time series, the line colors and the visibility of the grid are chosen randomly. Clustering results are visualized using scatter plots with various markers styles and plot elements. Histograms are created with different bin counts and visual settings like color and grid lines. Boxplots and violin plots are adjusted in terms of series count, color schemes, and axis ranges to reflect the data's spread and outliers.
This approach provides a comprehensive framework for assessing how visual presentation impacts the models' ability to interpret data, ensuring that the evaluation covers a broad spectrum of visual scenarios.
§.§ Image Degradation
To evaluate the models' ability to accurately interpret images under different conditions, we introduce various distortions. After generating the initial dataset, a portion of it is selected and augmented using one of the following methods, resulting in a second, modified dataset.
Noise
We add a noise following a normal distribution to the image. The resulting image is a mixture of the original plot and noise:
image_noisy = (1 - α)·image_real + α·noise
where α is the noise coefficient.
Rotate
We introduce rotation to the image, with the angle of the rotation randomly selected from the range (-60, 60).
Image Overlay
To further challenge the models' interpretative abilities, we introduce an image overlay augmentation. This process involves pasting smaller, distinct images onto the original images within the dataset. By introducing these overlays, we simulate real-world scenarios where visual data may include occlusions, distractions, or additional objects.
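The three degradations can be sketched as below; the noise distribution parameters, fill colour, and toy images are illustrative assumptions, while the blending formula and the (-60, 60) degree range follow the text.

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng()

def add_noise(img, alpha=0.2):
    # image_noisy = (1 - alpha) * image_real + alpha * noise
    arr = np.asarray(img, dtype=np.float32) / 255.0
    noise = rng.normal(0.5, 0.25, size=arr.shape)
    mixed = np.clip((1 - alpha) * arr + alpha * noise, 0.0, 1.0)
    return Image.fromarray((mixed * 255).astype(np.uint8))

def rotate(img):
    # random rotation angle drawn from (-60, 60) degrees
    return img.rotate(float(rng.uniform(-60, 60)), expand=True, fillcolor="white")

def overlay(img, patch):
    # paste a smaller distractor image at a random position
    x = int(rng.integers(0, max(1, img.width - patch.width)))
    y = int(rng.integers(0, max(1, img.height - patch.height)))
    out = img.copy()
    out.paste(patch, (x, y))
    return out

img = Image.new("RGB", (320, 240), "white")      # toy plot image
patch = Image.new("RGB", (48, 48), "red")        # toy distractor
degraded = overlay(rotate(add_noise(img)), patch)
```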
§ BENCHMARK PROCEDURE
§.§ Prompts
In our benchmark, we employ multimodal prompts to evaluate the interpretive capabilities of LLMs on visual data. These prompts consist of a textual question paired with an image of a data plot, which requires the model to analyze the visual information and provide a structured response.
For instance, a prompt may ask the model to identify the largest cluster within a scatter plot and respond with the coordinates of the bounding box in JSON format. Another example involves approximating a plot with a series of points, where the model must generate an ordered list of coordinates that approximate the visual data.
The structured responses are requested in a specific JSON format, ensuring consistency and enabling precise evaluation of the model's performance. By varying the types of questions and the corresponding visual data, we comprehensively assess the models' abilities to interpret and respond to different visual scenarios.
§.§ Time Series
Detecting Minimal and Maximal Values
This test assesses the proficiency of LLMs in identifying the minimum and maximum values within a dataset. Given a plot, the model is tasked with pinpointing the intervals that contain these extreme values. The performance score is calculated using the following formula:
ℳ = 1 - [ (min_pred - min_real)^2 + (max_pred - max_real)^2 ] / [ (min_real - y̅_real)^2 + (max_real - y̅_real)^2 ]
Here, min_pred and max_pred represent the predicted minimum and maximum values, while min_real and max_real denote the actual minimum and maximum values. The score intuitively reaches 1 if the model accurately identifies the minimal and maximal points and drops to 0 if the predicted intervals encompass the entire plot. This evaluation provides a clear metric for determining the model's accuracy in recognizing critical data points, thereby contributing to a comprehensive understanding of its capabilities in visual data analysis.
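A direct implementation of this score is sketched below, treating the predictions as point values (e.g. the midpoints of the returned intervals), which is an interpretation on our part.

```python
import numpy as np

def minmax_score(min_pred, max_pred, y_real):
    # 1 minus the squared prediction errors, normalized by the spread
    # of the true extremes about the series mean
    y_mean = np.mean(y_real)
    min_real, max_real = np.min(y_real), np.max(y_real)
    err = (min_pred - min_real) ** 2 + (max_pred - max_real) ** 2
    norm = (min_real - y_mean) ** 2 + (max_real - y_mean) ** 2
    return 1.0 - err / norm
```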
Data Approximation
This test evaluates the LLM's ability to accurately interpret and replicate a given time series plot through a piecewise linear approximation (Figure <ref>). The LLM is presented with a time series plot and tasked with approximating it using up to n points. The score is derived from the mean squared error between the LLM's piecewise linear approximation and the original plot.
𝒜 = 1 - [ ∑_x_i (y_real(x_i) - y_approx(x_i))^2 ] / [ ∑_x_i (y_real(x_i) - y̅_real)^2 ]
The piecewise linear approximation generated from the LLM's output serves as a robust indicator of the model's comprehension. High performance in this test suggests that the LLM can effectively detect and replicate trends, values, and overall patterns in the data, demonstrating a strong understanding of the graphical information presented. This makes our benchmark particularly valuable for assessing the practical capabilities of LLMs in interpreting visual data. Such an evaluation provides insights into the LLM's ability to process and approximate real-world data, highlighting its potential applicability in various analytical and decision-making tasks involving time series data.
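The score can be computed as sketched below, where the piecewise linear approximation is evaluated at the original x-locations via linear interpolation of the model's predicted points.

```python
import numpy as np

def approximation_score(x_real, y_real, x_pred, y_pred):
    # 1 - SSE/SST between the true series and the piecewise-linear
    # approximation defined by the LLM's predicted points
    order = np.argsort(x_pred)
    y_approx = np.interp(x_real, np.asarray(x_pred)[order], np.asarray(y_pred)[order])
    sse = np.sum((np.asarray(y_real) - y_approx) ** 2)
    sst = np.sum((np.asarray(y_real) - np.mean(y_real)) ** 2)
    return 1.0 - sse / sst
```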
Detecting Pointwise Anomalies
We measure how well the model detects anomalies in the time series data. Given a plot, the model is tasked with identifying points that deviate significantly from the overall trend.
The LLM is asked to output the x coordinate of the points it considers to be anomalies,
returning an empty list, should there be no anomalies.
To calculate the score, we check if the model predicted the correct number of anomalies
and how closely the predicted points match the actual anomalies.
If model predicts the correct number of anomalies, the score is calculated as follows:
𝒫_A = 1 - [ ∑_a ∈ A min_p ∈ P (a - p)^2 ] / [ ∑_a ∈ A (a - x̅)^2 ]
Where A is the set of actual anomalies, P is the set of predicted anomalies, and x̅ is the middle of the x-axis.
Intuitively, the score will be 1 if the model correctly identifies the anomalies, 0 if the model's guess is as good as just guessing the middle of the x-axis, and negative if it's worse.
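A direct implementation of this score is sketched below; it assumes, as in the text, that the predicted number of anomalies already matches the true number.

```python
import numpy as np

def anomaly_score(actual, predicted, x_axis_mid):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    num = sum(np.min((a - predicted) ** 2) for a in actual)
    den = np.sum((actual - x_axis_mid) ** 2)
    return 1.0 - num / den
```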
Detecting Missing Points
We evaluated the model's ability to detect missing points in the time series data. During the data generation process, we randomly remove a subset of points from the time series plot, covering a predefined percentage of the data.
Given a plot, the model is asked to identify the range where the points are missing, if there is any. To calculate the score, we first check if the model correctly identified the presence of missing points. Then we measure how closely the predicted range matches the actual range where the points are missing.
If the model correctly identifies the presence of missing points, the score is calculated as follows, using the Jacaard similarity coefficient:
𝒫_M = |M ∩ P|/|M ∪ P|
Where M is the interval where the points are missing and P is the interval predicted by the model.
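The Jaccard similarity of the two intervals can be computed as sketched below, representing each interval as a (start, end) pair.

```python
def interval_jaccard(a, b):
    # Jaccard similarity of two closed intervals given as (start, end) pairs
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0
```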
§.§ Clusters
Detecting Clusters The task presented to the LLMs is to determine the locations of clusters within a scatter plot and provide the coordinates of the bounding areas occupied by these clusters (Figure <ref>).
The LLMs' responses are evaluated based on the accuracy of the detected clusters' locations. The primary metric used for this evaluation is the Intersection over Union (IoU). IoU is a standard metric in object detection and image segmentation tasks, it is defined as the area of the intersection divided by the area of the union of the two bounding boxes.
Intersection over Union (IoU) Calculation:
IoU = Area of Intersection / Area of Union
The IoU metric provides a robust measure of how well the predicted cluster locations match the actual cluster locations. An IoU score of 1 indicates perfect alignment, whereas a score of 0 indicates no overlap. For each cluster, we calculate the IoU between the model-predicted bounding box and the ground truth bounding box. The overall performance is then assessed by averaging the IoU scores across all clusters.
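A standard axis-aligned IoU implementation is sketched below, with each bounding box represented as (x_min, y_min, x_max, y_max).

```python
def iou(box_a, box_b):
    # IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```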
Detecting Cluster's Center This task involves identifying the centers of clusters within a scatter plot and providing the coordinates of these centers (Figure <ref>).
To evaluate the models' performance in identifying cluster centers, we use a metric based on Euclidean distance.
This metric involves pairing the predicted cluster centers (p) with the ground truth (gt), so that each point of both groups is in at least one pair (m pairs). We compute the Euclidean distance between each paired ground truth and the predicted cluster center ‖ gt_i - p_i ‖. This gives us a measure of how close the predictions are to the actual centers.
Additionally, we compute the Euclidean distance from each ground truth cluster center to the center of the dataset plot (c) ‖ gt_i - c ‖ (for n clusters). This provides a baseline measure of the distribution of the clusters within the plot.
𝒟 = [ ∑_i=1^n ‖ gt_i - c ‖ - ∑_i=1^m ‖ gt_i - p_i ‖ ] / ∑_i=1^n ‖ gt_i - c ‖
This evaluation metric ranges from -∞ to 1, where a positive score indicates that the predicted cluster centers are closer to the ground truth centers than the ground truth centers are to the center of the dataset plot, suggesting high precision in the model's predictions. A score of zero implies that the model's predictions are, on average, as accurate as simply guessing the center of the plot. Conversely, a negative score indicates that the predicted centers are farther from the ground truth than the ground truth centers are from the plot center, highlighting significant inaccuracies in the model's predictions.
§.§ Biggest Cluster
To assess the model's ability to localize the largest cluster within the data, we evaluate three key aspects: the proportion of correctly enclosed points, the area efficiency of the bounding rectangle, and a penalty for incorrectly included points. These factors are combined into a final score, providing a comprehensive measure of the model's performance.
The final score S for cluster localization is calculated as the arithmetic mean of three components:
S = ( P_correct + P_area + P_penalty ) / 3
Where:
- P_correct = N_correct/N_total is the proportion of correctly enclosed points.
- P_area = A_cluster/A_rect compares the area of the minimal bounding box of the cluster to the area of the predicted rectangle.
- P_penalty = 1 - N_incorrect/N_all accounts for incorrectly included points.
This score reflects the accuracy and efficiency of the model in cluster location.
§.§ Histograms
Distribution Identification This task evaluates the model's ability to identify the distribution type in a histogram. The DistributionChecker class compares the model's predicted distribution against the actual distribution type in the metadata. The model scores 1.0 if the predicted distribution matches the actual distribution. Additionally, if the actual distribution is skewed (left or right) and the predicted distribution is exponential, the model also scores 1.0. This scoring accounts for cases where skewed distributions are often approximated by exponential distributions. This approach provides a robust evaluation of the model's ability to recognize different statistical distributions, accommodating both exact matches and reasonable approximations.
MinMax Evaluation This evaluation assesses the model's accuracy in predicting the minimum and maximum ranges within histogram data using the Jaccard Similarity Index. The index measures the overlap between the predicted and actual intervals:
Jaccard Similarity = Intersection(A, B)/Union(A, B)
The Jaccard similarity is computed separately for the minimum and maximum intervals, and the final evaluation metric, the Overall Score, is the average of these two similarities:
Overall Score = ( J_min + J_max ) / 2
Here, J_min and J_max denote the Jaccard similarities for the minimum and maximum intervals, respectively. This metric comprehensively measures the model's ability to predict data ranges within histograms accurately.
Monotonicity Evaluation This task assesses the model's ability to identify monotonic intervals (either increasing or decreasing) in histogram data. The score is determined by the ratio of correctly predicted monotonic intervals to the total number of predicted intervals:
ℳ = ( C_inc + C_dec ) / T_pred
Here, C_inc is the count of correctly predicted increasing intervals, C_dec is the count of correctly predicted decreasing intervals, and T_pred is the total number of predicted monotonic intervals. This score reflects the model's ability to detect and represent underlying patterns accurately.
BelowXValuePercentage This class evaluates the model's accuracy in predicting the percentage of data points falling below a specified threshold. The evaluation metric is:
Score = 1 - |Actual Percentage - Predicted Percentage|/100
The score reflects how close the predicted percentage is to the actual percentage, minimizing the absolute difference.
FindAnomaly This class assesses the accuracy of predicting anomaly ranges within histogram data using the Weighted Jaccard Similarity, which considers a radius around both predicted and actual anomaly ranges:
Weighted Jaccard Similarity = |E_int|/|E_uni|
In this formula,|E_int| is the size of the intersection of the extended ranges, and |E_uni| is the size of the union of the extended ranges. This metric measures the overlap between predicted and actual anomaly ranges, accommodating minor deviations for a comprehensive assessment.
§.§ Boxplots and Violin Plots
Highest and Lowest Median
For both boxplots and violin plots, the median of each plot is calculated to evaluate the predictions of the highest and lowest medians. The indices corresponding to the plots with the highest and lowest medians are identified and compared to predicted values. Correct identification of these indices earns 0.5 points for each correct prediction (one for the highest and one for the lowest), with a maximum score of 1.0.
Biggest and Smallest Range
The range of data in each plot is determined by calculating the difference between the maximum and minimum values. For both types of plot, the indices of the plots with the largest and smallest ranges are identified. Then these indices are compared with predicted values. Correctly predicting the index of the plot with the largest range earns 0.5 points, as does correctly predicting the smallest range, for a total possible score of 1.0.
Biggest and Smallest IQR
The interquartile range (IQR), calculated as the difference between the 75th and 25th percentiles, is used to evaluate predictions regarding variability within each plot. The indices of the plots with the largest and smallest IQRs are identified and compared to the predicted indices. Correct predictions earn 0.5 points each, resulting in a maximum score of 1.0.
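The three scorings can be implemented together as sketched below; the dictionary keys used for the model's predicted indices are hypothetical names chosen here for illustration.

```python
import numpy as np

def summary_scores(series_list, pred):
    # 0.5 points for each correctly identified index (highest/lowest median,
    # biggest/smallest range, biggest/smallest IQR), max 1.0 per criterion
    medians = [np.median(s) for s in series_list]
    ranges = [np.max(s) - np.min(s) for s in series_list]
    iqrs = [np.percentile(s, 75) - np.percentile(s, 25) for s in series_list]

    def pair_score(values, key_hi, key_lo):
        score = 0.5 * (pred.get(key_hi) == int(np.argmax(values)))
        score += 0.5 * (pred.get(key_lo) == int(np.argmin(values)))
        return score

    return {
        "median": pair_score(medians, "highest_median", "lowest_median"),
        "range": pair_score(ranges, "biggest_range", "smallest_range"),
        "iqr": pair_score(iqrs, "biggest_iqr", "smallest_iqr"),
    }

series = [np.random.default_rng(i).normal(loc=i, scale=1 + i, size=80) for i in range(5)]
pred = {"highest_median": 4, "lowest_median": 0, "biggest_range": 4,
        "smallest_range": 0, "biggest_iqr": 4, "smallest_iqr": 0}
print(summary_scores(series, pred))
```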
§ EXPERIMENTS
To evaluate the performance of multimodal models in interpreting various types of plots, we conducted a series of experiments across different plot categories. These experiments were designed to assess how well the models handle diverse visual presentations and tasks.
§.§ Experimental Setup
Models and Data
We evaluated several state-of-the-art multimodal models, including GPT-4o, GPT-4o-mini, Claude-3 (various versions), and Gemini-1.5 (various versions). Each model was tested on a comprehensive dataset of synthetic plots, which included scatter plots, histograms, time series, boxplots, and violin plots. Datasets were created in two versions, one without any distortions and the other with random augmentations.
Reproducibility
All experiments were conducted under consistent conditions, and model performance was evaluated based on the pre-defined metrics. Details of the experimental setup, including model configurations and data generation parameters, are provided in the Appendix.
§.§ Results
Table <ref> presents the overall performance of each model across different plot categories. claude-3-5-sonnet leads with the highest scores in both clustering (0.682) and violin plots (0.579), while gemini-1.5-pro excels in boxplots (0.607). In contrast, gpt-4o-mini consistently shows lower performance across most categories. Detailed results for each model on specific metrics are provided in the Technical appendix.
Clustering
The performance of the various models was evaluated based on their ability to identify the biggest cluster, detect cluster centers, and estimate cluster areas. Among the models, claude-3-5-sonnet achieved the highest overall score of 0.682, excelling in identifying the largest cluster and determining cluster centers. In contrast, gpt-4o-mini showed the lowest performance with an overall score of 0.432, indicating room for improvement in cluster detection and center localization.
Histograms
The models' abilities to interpret histogram data were evaluated using various metrics, including distribution detection, identification of minimum and maximum bin values, monotonicity analysis, and estimating the percentage of data below a specific threshold. Model gpt-4o led in overall performance with a score of 0.57, performing particularly well in predicting the percentage of data below a specified value. On the other hand, claude-3-opus had the lowest overall score of 0.32, struggling particularly with distribution detection and monotonicity assessment.
Series
The models' performance in handling series data was evaluated on a broad range of tasks, such as identifying minimum and maximum intervals, approximating the plot with points, and detecting pointwise anomalies. Notably, most models struggled with the approximation task, with some models like claude-3-5-sonnet and gemini-1.5-flash showing negative overall scores due to significant deviations in approximations. gpt-4o was the most consistent performer with an overall score of 0.44, indicating its relative robustness in series-related tasks.
Boxplots
In the boxplot evaluation the models were assessed based on their ability to identify medians, overall ranges, and interquartile ranges (IQR). gemini-1.5-pro performed the best with an overall score of 0.607, particularly excelling in detecting IQRs. gpt-4o-mini, however, showed the lowest performance with an overall score of 0.342, indicating challenges in accurately identifying key boxplot features.
Violins
Lastly, the models' performance was evaluated based on their ability to interpret the violin plots. claude-3-5-sonnet again emerged as a top performer with an overall score of 0.579, showing strong results in estimating medians and overall ranges. gpt-4o-mini, however, lagged with an overall score of 0.356, indicating difficulties in accurately interpreting the density and distribution characteristics of the violin plots.
§ CONCLUSION
This study provides a detailed evaluation of multimodal models' capabilities in interpreting various types of plots, including clustering results, histograms, time series, boxplots, and violin plots. Our findings indicate significant variability in model performance across different tasks and visualization settings. The introduction of specialized metrics for assessing model accuracy has highlighted both strengths and limitations in current models.
The results underscore the need for robust models that can handle diverse visual data effectively. Future research should focus on improving model performance across different plot types and visualization settings to enhance overall accuracy and reliability.
|
http://arxiv.org/abs/2409.03575v1 | 20240905143012 | Detecting Spatial Dependence in Transcriptomics Data using Vectorised Persistence Diagrams | ["Katharina Limbeck", "Bastian Rieck"] | stat.ME | ["stat.ME", "cs.CG"] |
Detecting Spatial Dependence in Transcriptomics Data
using Vectorised Persistence Diagrams
Katharina Limbeck
Helmholtz Munich, Technical University of Munich
Bastian Rieck
Helmholtz Munich, Technical University of Munich, University of Fribourg
=================================================================================================================================================================================
§ ABSTRACT
Evaluating spatial patterns in data is an integral task across various domains, including geostatistics, astronomy, and spatial tissue biology.
The analysis of transcriptomics data in particular relies on methods for detecting spatially-dependent features that exhibit significant spatial patterns for both explanatory analysis and feature selection.
However, given the complex and high-dimensional nature of these data, there is a need for robust, stable, and reliable descriptors of spatial dependence.
We leverage the stability and multi-scale properties of persistent homology to address this task.
To this end, we introduce a novel framework using functional topological summaries, such as Betti curves and persistence landscapes, for identifying and describing non-random patterns in spatial data.
In particular, we propose a non-parametric one-sample permutation test for spatial dependence and
investigate its utility across both simulated and real spatial omics data.
Our vectorised approach outperforms
baseline methods at accurately detecting spatial dependence.
Further, we find that our method is more robust to outliers than alternative tests using `Moran’s I.'
§ INTRODUCTION
Omics data, that is measurements of molecules within an organism or a cell, provide crucial insights into biological processes and cellular function.
Specifically, transcriptomics plays a vital role in studying RNA transcripts i.e. the expression of genes via the transcriptome and is thus capable of providing snapshots of cellular function and the mechanisms behind health and disease.
Recent advancements in spatial transcriptomics further enable the large-scale analysis of cells’ gene expression and their spatial locations across tissue samples.
Leveraging these data, the detection of spatially variable genes (SVGs) is an important analysis step both for explanatory analysis and further downstream tasks, such as spatial domain identification and gene enrichment analysis, uncovering the spatial components of tissue biology <cit.>.
Nevertheless, accurately identifying the spatial patterns in omics data on both local and global scales remains challenging, particularly due to the high-dimensional, sparse and noisy nature of these data. The difficulty of modelling technical and systematic errors, the ongoing debate on even basic preprocessing steps, as well as an ever growing understanding of the fundamental processes in cellular biology, therefore necessitate the development of robust and expressive analysis methods <cit.>.
In particular, SVG detection methods should be robust to technical noise and sparsity, correctly identify spatially dependent genes, be independent of the sum of expression values, and show low
false positive error rates <cit.>. In practice, reliable SVG detection methods then help to identify the spatial patterns of disease e.g. by detecting spatial clusters of cancer cells or necrotizing tissue within an organ as well as identifying the genes driving these spatial changes <cit.>.
Meanwhile, topological data analysis (TDA) has given scholars a new toolbox for quantifying complex structures in biological networks at multiple scales motivated by the rigorous mathematical study of shapes <cit.>.
Indeed, when adjusting for noise, molecular data in particular is equipped with meaningful coarse geometry
<cit.>.
Capturing both local and global characteristics of the shape of data, TDA is thus poised
to quantify spatial patterns in molecular data.
Computing persistent homology on spatial graphs gives stable and flexible summaries of the distribution of spatial features <cit.> summarising both local and global patterns in gene expression.
In particular, our work demonstrates how topological descriptors can be used to distinguish spatial signals in transcriptomics data and identify genes whose expression varies across space.
Our contributions are as follows:
* We propose a one-sample randomised permutation test for spatial dependence using functional topological summaries.
* We investigate the performance of this approach on both simulated and real data in comparison to the alternative methods Moran's I, sepal, and SpatialDE.
§ RELATED WORK
§.§ Spatially-Variable Gene Detection
The field of spatial omics data analysis has grown rapidly in the last 10 years fueled by advancements in sequencing technologies and data availability <cit.>. The term spatial omics denotes spatially-resolved molecular measurements in general, including features extracted from the genome, transcriptome, or proteome. These features are greatly relevant as they give insights into fundamental biological processes. As a key mechanism in biology, the set of DNA encoded in the genome is read out or transcribed by RNA molecules, which make up the transcriptome. Some of these transcripts later code for proteins, which are of great practical importance as they perform the actual biological functions in cells. Omics measurements then give key insights into these fundamental building blocks of biology.
Depending on the sequencing method, observations from spatial datasets correspond either to individual cells, enabling the advent of single-cell omics, to aggregations of multiple cells, or even to sub-cellular measurements. Associated spatial locations are given by unstructured, i.e. continuous, coordinates on a tissue sample or by spots on a pre-defined grid.
Most frequently, features are transcriptomic measurements that summarise how often a gene is expressed, as measured by the number of RNA fragments detected at each capture location <cit.>; these are the focus of our investigation.
Generally, SVG detection methods fall into numerous categories. Moran's I is the most established statistical measure for spatial auto-correlation across spatial statistics <cit.>. As well as its cousin Geary's C <cit.>, Moran's I is frequently used as a versatile descriptor of spatial patterns in gene expression values <cit.>.
Despite its simplicity as a mean summary based on weighted variance and covariance estimates, and its well-established theoretical benefits,
Moran's I is
known to be sensitive to outliers and noise <cit.>, a disadvantage given the noisy nature of spatial omics data <cit.>.
Further, based on the rich tradition of spatial statistics, there also exists a variety of statistical methods that fit spatial regression models to assess the importance of spatial covariance on gene expression. The most popular of these approaches include SpatialDE, which fits Gaussian process regressions, as well as methods fitting mixed and non-parametric models <cit.>. However, these models rely on fixed assumptions on the nature of the distribution of gene expression values, which remains a contested question in practice <cit.>.
Lastly, there exist non-parametric methods, such as one that models gene expression as a marked point process and uses permutation testing to make inferences <cit.>. This approach benefits from allowing statistical reasoning under less stringent assumptions, but is associated with increased computational costs.
Further, model-free frameworks such as sepal, which measures diffusion times, or methods which model graph Laplacians, have also been proposed <cit.>.
However, despite the appeal of such geometric and model-free approaches, multi-scale descriptors such as persistent homology have not yet been employed in this context.
Motivated by the use of geometric descriptors of spatial graphs, we thus aim to further develop persistent homology as a framework for spatial significance testing for spatial omics data analysis.
§.§ Persistent Homology
Originating from algebraic topology, persistent homology (PH) enables practitioners to measure topological features from data <cit.> thus describing the shape of data <cit.>.
This is of particular importance for molecular data, which after accounting for noise can be surmised to contain meaningful coarse geometry <cit.>. PH then gives one such a tool for encoding this geometry in a robust manner.
From a signal processing perspective, persistent homology has demonstrated its utility for detecting peaks in noisy signals guaranteeing stability properties and robustness to small perturbations <cit.>. The multi-scale nature of filtrations allows practitioners to not only summarise local but also global patterns in spatial data.
Persistence-based methods can thus be applied to detect and encode both local and global extrema alike <cit.>.
Indeed, the use of persistent homology as spatial descriptor has been extended to investigating time-varying data and spatial clustering <cit.>.
Thus, persistence has shown its utility for describing spatial patterns, such as local maxima and local minima, in various contexts. It is then only natural to consider the promising potential of using persistence for SVG detection in particular.
From a statistical perspective,
the increasingly widespread use of PH for data analysis and budding applications to ML have also been driven by advancements in the statistical theory underpinning the theoretical foundation of PH. In particular, summaries of PH allow for the definition of parametric and non-parametric significance tests and uncertainty estimation. Amongst these approaches, non-parametric techniques, specifically permutation testing, are by far the most utilised and assumption-free methods used for statistical inferences <cit.>.
Arguably, one-sample testing remains somewhat under-explored <cit.> with most common statistical approaches focusing on two-sample comparisons.
Nevertheless, recent work by <cit.> demonstrates the use of persistent homology
for the identification of spatially dependent features in geographical data. In particular, they propose a one-sample permutation testing approach using persistence summary statistics
<cit.>.
Their findings suggest that
persistent homology performs well at testing for spatial dependence in the presence of outliers or on sparsely connected graphs resembling ladder graphs <cit.>.
Further, they conclude that there is potential for future work to investigate the use of alternative functional summaries of persistent homology.
Addressing this outlook and widening the application to spatial omics data, we now investigate the use of persistence curves for SVG detection.
§ BACKGROUND ON PERSISTENT HOMOLOGY
§.§ Persistent Homology and Vertex-based Filtrations on Spatial Graphs
Throughout this paper, we set out to study a spatial dataset X = {x_1, x_2, ..., x_n} whose observations are assigned feature values f: X → ℝ and spatial locations s: X → ℝ^2.
We then want to examine the spatial distribution of such a feature, which is modelled by constructing a spatial (neighbourhood) graph G. This approach is appealing because adjacency relationships play a central role in spatial data analysis and we can flexibly construct a graph that best describes the nature of the data as also commonly done for analysing spatial omics data. See <ref> for a more detailed description on our method.
Persistent homology can then be computed from such a spatial graph
by first considering the clique-complex S=(G) defined as the simplicial complex made up of cliques of vertices in G. Here, a clique complex is a generalisation of a graph that consists of 0-simplices corresponding to vertices, 1-simplices corresponding to edges, 2-simplices corresponding to 3-cliques (i.e. triangles), and so on. In particular, this simplicial complex is defined to be a set of simplices that is closed under taking intersections and face decomposition.
In practice, we can consider S as a generalised triangulation of a spatial graph.
As a multi-scale summary of geometry, persistent homology then studies a filtration on this complex. That is, we construct a sequence of topological spaces, and track the evolution of topological features given by homology groups.
The added algebraic structure allows for the computation of boundaries and holes from the generalised graph. We can thus track the emergence and disappearance of connected components, that is 0th dimensional homology groups, H_0(S), and more generally k-dimensional homology groups, H_k(S), representing k-dimensional holes <cit.>. This process as well as the further details explained below are illustrated via an example in <ref>.
While there exist a multitude of methods for defining filtrations on graphs <cit.>, we are interested in one specific construction throughout this work. In particular, we track the evolution of superlevel-sets defined from the spatial vertex feature, f, to investigate the spatial distribution of this feature.
That is, we filter through the simplicial complex as follows
S_δ = Cl(G_δ)= {s ∈ Cl(G) | f(u_i) ≥δ for all u_i ∈ s}
so that the filtration parameter δ decreases from max(f) to min(f). Here, s is a simplex in the clique complex S_δ of the subgraph G_δ∈ G defined as G_δ = {u_i ∈ G | f(u_i) ≥δ}.
The collection of simplicial complexes S_max(f)⊂ ... ⊂ S_min(f) = Cl(G) then gives a Vietoris-Rips filtration.
Intuitively, this vertex-based filtration can be constructed by
filtering through
the feature values starting with an empty simplicial set for δ > max(f).
Then, as δ decreases, a vertex x_i i.e a zero-dimensional simplex is included whenever its associated vertex feature value f(x_i) has been reached. Further, an edge i.e. a one-dimensional simplex, is added to the filtered simplicial complex S_δ whenever both its vertices have been included <cit.>, with the process generalising to higher dimensions. Note that to define a sublevelset-set filtration, this construction can be modified, so that δ increases from min(f) to max(f). Further, for data collected on a rectangular grid, cubical persistence, which filters through cubical complexes made up of cubical cells, offers a natural alternative to the triangle-based simplicial complex construction. See <cit.> for more details on how to generalise to this setting.
The output from computing these filtrations results in a set of persistence diagrams, {D_0, D_1}, one for each homology dimension, where each D_k = { (p_1, q_1), (p_2, q_2), ..., (p_m, q_m)} is a multi-set of persistence pairs in ℝ^2 encoding the birth and death, i.e. the emergence and merging scales, of topological features in the filtration.
In particular, peaks or local maxima in spatial feature values are represented by points in the 0-th dimensional persistence diagram above the diagonal <cit.>.
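As an illustration of how this construction can be carried out in practice, the following Python sketch computes the zero-dimensional persistence pairs of a superlevel-set filtration on a vertex-weighted graph using a union-find structure and the elder rule. It is not the implementation used in this work; the function and variable names are our own, and pairs are returned as (birth level, death level) with birth ≥ death, since the filtration parameter decreases.

import numpy as np

def superlevel_ph0(f, edges):
    """f: array of vertex values; edges: iterable of (u, v) vertex index pairs.
    Returns (birth, death) pairs; essential classes die at min(f)."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent = np.full(n, -1)                       # -1 marks vertices not yet added
    birth = np.empty(n)
    pairs = []

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]         # path halving
            x = parent[x]
        return x

    for v in np.argsort(-f):                      # add vertices by decreasing value
        parent[v], birth[v] = v, f[v]             # a new component is born at f(v)
        for u in adj[v]:
            if parent[u] != -1:                   # edge enters once both ends exist
                a, b = find(v), find(u)
                if a != b:                        # elder rule: younger component dies
                    young, old = (a, b) if birth[a] <= birth[b] else (b, a)
                    pairs.append((birth[young], f[v]))
                    parent[young] = old
    roots = {find(v) for v in range(n)}
    pairs += [(birth[r], f.min()) for r in roots]  # bound essential classes by min(f)
    return pairs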
For a deeper exploration of the mathematical framework and theoretical underpinnings of persistent homology, we refer the interested reader to <cit.>.
We however move on to consider the use of persistence homology for statistical analyses.
§.§ Summarising Persistence Diagrams
Persistence diagrams consist of multi-sets of birth and death pairs and are thus challenging to study from a statistical perspective. In fact, there exists no natural way to directly embed them in a Hilbert space that allows for the application of standard statistical tools. Indeed, inherent distances between diagrams, such as the Bottleneck or Wasserstein distances, are expensive to compute and do not induce a norm on the space of persistence diagrams.
A potential approach for fitting PH into a statistical setting is the use of summary statistics, as for example explored by <cit.> in the context of spatial dependency detection. Moreover, to retain the multi-scale properties of PH, functional summaries can be employed to understand PDs in a Hilbert space setting.
§.§.§ Summary Statistics
While various one-number descriptors, such as persistence entropy or lifetime-derived statistics are available, we choose to include total lifetime (TL) into our study based on the findings of <cit.>.
In particular, the total lifetime is computed as
TL = ∑_i=1^m (q_i - p_i),
i.e. the sum of the lifetimes of persistence pairs, defined as the difference between the death and birth scales of topological features.
§.§.§ Functional Summaries
Our analysis will focus on using functional summaries that map PDs to (piecewise) continuous functions from ℝ to a suitable vector space V <cit.>. Note that in related literature these types of functional summaries are often called persistence curves. In particular, we compare Betti curves
and persistence landscapes, which can be conveniently compared via suitable norms between these functions. We hypothesise that these functional summaries retain more complete information from the whole PD than single number summaries (as e.g. studied by <cit.>) while allowing for the application of statistical methods in a theoretically well-founded manner <cit.>.
Betti curves
Betti curves are defined as β: ℝ → ℕ where
β(δ) = |{ (p_i, q_i) ∈ D | δ ∈ [p_i, q_i] }|.
That is, Betti curves sum up the number of topological features from the persistence diagram that persist at each scale of the filtration.
Given one Betti curve β, we can then take its L^p norm as a natural descriptor:
L^p(β) = ||β||_p = (∫_min(f)^max(f) |β(x)|^p dx)^1/p.
This L^1 norm then is closely related to computing
total lifetime in terms of the information it summarises
<cit.>.
To compare multiple Betti curves, β_1 and β_2, we naturally extend this notion and use the L^p distance between Betti curves, that is
L^p(β_1, β_2) = ||β_2 - β_1||_p = (∫_min(f)^max(f) |β_1(x) - β_2(x)|^p dx)^1/p.
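For concreteness, the following sketch (our own code, not the authors') evaluates a Betti curve on a grid of filtration values and compares two curves with a discretised L^p distance; the uniform grid and the simple rectangle-rule integration are assumptions made for illustration.

import numpy as np

def betti_curve(pairs, grid):
    """pairs: (birth, death) tuples; grid: 1-D array of filtration values."""
    grid = np.asarray(grid, dtype=float)
    curve = np.zeros(len(grid), dtype=int)
    for p, q in pairs:
        lo, hi = min(p, q), max(p, q)
        curve += (grid >= lo) & (grid <= hi)      # count intervals containing each value
    return curve

def lp_distance(curve_a, curve_b, grid, p=1):
    dx = grid[1] - grid[0]                        # assumes a uniformly spaced grid
    return float((np.sum(np.abs(curve_a - curve_b) ** p) * dx) ** (1.0 / p))

def total_lifetime(pairs):
    return float(sum(abs(q - p) for p, q in pairs))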
Persistence Landscapes
A frequently used tool for statistical analyses are persistence landscapes, which have been proven to follow a central limit theorem, even enabling the application of standard parametric tools <cit.>.
First, the rank functions are defined for each persistence pair given as
λ_(p_i, q_i)(δ) =
δ - p_i if p_i < δ ≤ (p_i + q_i)/2
q_i - δ if (p_i + q_i)/2 < δ < q_i
0 otherwise
and the k-th persistence landscapes function is then given by
Λ(k,δ) = kmax{λ_(p_i, q_i)(δ)}_i ∈ I
where kmax is the function giving the k-th largest value of a set. The persistence landscape then is the collection of these functions. Further, we can define the L^p-norm for persistence landscapes as:
L^p(Λ) =
||Λ||_p = ∑_k=1^∞ (∫_-∞^∞ |Λ(k,x)|^p dx)^1/p.
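The landscape functions can be evaluated on the same kind of grid. The sketch below is again our own illustration; the number of retained landscapes k_max and the grid are assumed parameters.

import numpy as np

def persistence_landscape(pairs, grid, k_max=5):
    """Rows of the returned array are the first k landscape functions on the grid."""
    grid = np.asarray(grid, dtype=float)
    tents = []
    for p, q in pairs:
        lo, hi = min(p, q), max(p, q)
        tents.append(np.clip(np.minimum(grid - lo, hi - grid), 0.0, None))
    tents = np.sort(np.vstack(tents), axis=0)[::-1]   # k-th row = k-th largest tent
    return tents[:min(k_max, len(tents))]

def landscape_norm(landscape, grid, p=1):
    dx = grid[1] - grid[0]                            # assumes a uniform grid
    return float(sum((np.sum(np.abs(row) ** p) * dx) ** (1.0 / p) for row in landscape))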
§ METHODS
§.§ Computing Persistent Homology from Spatial Omics Data
Now, to develop our spatially variable gene detection method, we define a spatial graph for the omics data in the manner that best matches the underlying sequencing technology <cit.>.
Amongst these technologies, spatial measurements are collected across varying resolutions.
For example, for data with continuous spatial coordinates, such as high resolution measurements of single cells, we compute the Delaunay triangulation to find an appropriate triangular covering of the space.
In comparison, Visium, one of the most popular sequencing methods, measures spots on a hexagonal grid with spots having diameters of 55 μ m and centers being a fixed distance of 100μ m apart <cit.>. Motivated by this, we choose a spatial graph connecting each grid point to its six directly adjacent neighbouring spots
for further analysis.
For both of these graph choices we then apply simplicial-complex based filtrations.
Lastly, for data measured on rectangular grids, we connect each spot to its four adjacent neighbours and apply cubical persistence to capture the rectangular nature of these grid cells.
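For illustration, the following sketch shows how the spatial graphs described above can be assembled with SciPy; the neighbour counts (6 for hexagonal and 4 for rectangular grids) follow the text, while the helper names are our own.

import numpy as np
from scipy.spatial import Delaunay, cKDTree

def delaunay_edges(coords):
    """coords: (n, 2) array of cell or spot locations; returns undirected edge pairs."""
    tri = Delaunay(np.asarray(coords, dtype=float))
    edges = set()
    for simplex in tri.simplices:                     # each row is a triangle
        for a in range(3):
            for b in range(a + 1, 3):
                edges.add(tuple(sorted((int(simplex[a]), int(simplex[b])))))
    return edges

def grid_edges(coords, n_neighbours=6):
    """Connect each spot to its nearest grid neighbours (6 hexagonal, 4 rectangular)."""
    tree = cKDTree(coords)
    _, idx = tree.query(coords, k=n_neighbours + 1)   # the first hit is the spot itself
    return {tuple(sorted((i, int(j)))) for i, row in enumerate(idx) for j in row[1:]}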
Throughout this work we focus on computing zero-dimensional persistent homology, which allows us to investigate the spatial patterns given by connected components.
In particular, we are interested in encoding peaks in spatial gene expression and therefore construct super-levelset filtrations using gene expression as vertex feature as described in <ref>. Then, the resulting persistence diagrams encode key information on clustering behaviour. Here, we choose to use a super-levelset approach rather than a sub-levelset filtration because we reason that spatial maxima will be more suitable for revealing spatial trends than spatial minima for these data. This choice is informed by the fact that omics data is
affected by technical noise and sparsity.
Indeed, dropout is a well-known measurement error describing the common phenomenon of not observing a transcript even though it is present. In practice, this effect leads to a large proportion of excess zeros in gene expression values <cit.>, which could negatively affect the interpretation of local minima for our purpose. We therefore argue that spatial peaks will be more robust to such sparsity.
For each gene, whose spatial pattern we want to assess, we then compute persistence diagrams and persistence summaries as detailed in <ref>.
By convention, we bound the death value of each persistence pair by the minimum filtration value. That is, rather than allowing one persistence feature to achieve an infinite death value, we bound it by the lowest gene expression value observed in the data.
Note that our construction is similar to <cit.>, but we further extend their methods to also consider cubical persistence for rectangular grids and investigate the use of functional summaries for spatial dependence testing as detailed below.
§.§ One-Sample Permutation Testing for Spatial Dependence
We use randomized permutation testing to test for the following hypotheses:
* H_0: Expression values are randomly distributed in space
* H_1: Expression values are not spatially random
We thus retain the original gene expression values as well as the spatial graph and randomly permute which expression value is assigned to each location to simulate spatial randomness, that is the lack of any spatial pattern. Then, significance is assessed as follows:
p-value = ( ∑_i=1^n_perm 1( ||β̄_0 - β_i||_p ≥ ||β̄_0 - β_obs||_p ) + 1 ) / ( n_perm + 1 )
where 1(·) is the indicator function, so that the sum counts the number of permutations for which the L^p-norm, ||·||_p, between the permuted Betti curve β_i and the empirical mean Betti curve under the null model β̄_0 is at least as large as the distance between the observed Betti curve β_obs and that mean.
Note that we compute β̄_0, the mean Betti curve under the null hypothesis, as the mean across all n_perm permutations together with the observed Betti curve, and we add 1 to both the numerator and the denominator.
This bias correction is included to avoid zero p-values and biases the test towards more conservative estimates by including this pseudo-count <cit.>.
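Putting the pieces together, a sketch of this test is given below; it reuses the helper functions sketched in the previous sections, and the grid of filtration values and the seed handling are our own choices rather than part of the method's specification.

import numpy as np

def betti_permutation_pvalue(f, edges, grid, n_perm=1000, p=1, seed=None):
    rng = np.random.default_rng(seed)
    obs = betti_curve(superlevel_ph0(f, edges), grid)
    null = [betti_curve(superlevel_ph0(rng.permutation(f), edges), grid)
            for _ in range(n_perm)]
    mean0 = np.mean(null + [obs], axis=0)        # null mean includes the observed curve

    def dist(curve):
        return lp_distance(curve, mean0, grid, p=p)

    d_obs = dist(obs)
    n_extreme = sum(dist(c) >= d_obs for c in null)
    return (n_extreme + 1) / (n_perm + 1)        # pseudo-count avoids zero p-values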
We thus conduct a two-sided, one-sample hypothesis test for each spatial feature. Whenever multiple genes are compared simultaneously, we further recommend applying a suitable multiple-testing adjustment, such as the Benjamini-Hochberg procedure, to control the false discovery rate <cit.>, which we apply throughout our experiments. Across our experiments, the number of permutations is set to 1000, a practical compromise with regard to computational efficiency.
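The Benjamini-Hochberg adjustment mentioned above can be applied directly to the vector of empirical p-values; a compact sketch of the standard procedure is given below.

import numpy as np

def benjamini_hochberg(pvals):
    """Returns BH-adjusted p-values in the original order."""
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    adjusted = pvals[order] * m / np.arange(1, m + 1)
    adjusted = np.minimum.accumulate(adjusted[::-1])[::-1]   # enforce monotonicity
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out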
Beyond statistical reasoning, there also exist
more deterministic approaches to spatially variable gene discovery.
Here, the goal is to either rank features by the strength of their spatial dependence, or to find a predefined number of the top most spatially variable features.
In practice, these approaches are key for both explanatory analysis and for feature selection as a first step to uncover and understand spatial patterns in data <cit.>.
For further analysis, we thus choose to rank spatially expressed features by
their empirical p-values following similar statistical approaches <cit.>.
§.§ Alternative Spatial Variability Detection Methods
In order to evaluate the performance of our spatial testing approach we compare our methods with three standard methods for detecting spatial dependence.
Moran's I: Moran's I is one of the oldest and most widely-used measures for spatial auto-correlation. It is known to be a suitable baseline for SVG benchmarking <cit.> and can be computed as
I = ( n_obs / ∑_i,j w_i,j ) · ( ∑_i=1^n_obs ∑_j=1^n_obs w_i,j (f_i - f̄)(f_j - f̄) ) / ( ∑_i=1^n_obs (f_i - f̄)^2 ).
We compute it using
the spatial adjacency matrix to define the weights w_i,j. The value of Moran's I is directly interpretable, ranging from -1 indicating negative autocorrelation, to 0 corresponding to spatial randomness, and 1 indicating positive autocorrelation.
Further, for statistical analysis we use Moran's I in the same randomised permutation testing framework as the previously introduced topological summaries explained in <ref>.
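A direct translation of this statistic into code, with the binary adjacency matrix of the spatial graph used as the weight matrix W (a choice we assume here), reads as follows.

import numpy as np

def morans_i(f, W):
    """f: feature values per location; W: (n, n) spatial weight or adjacency matrix."""
    f = np.asarray(f, dtype=float)
    W = np.asarray(W, dtype=float)
    z = f - f.mean()                              # centred feature values
    return (len(f) / W.sum()) * (z @ W @ z) / np.sum(z ** 2)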
sepal: sepal simulates diffusion processes, modelling a stochastic process that tracks the spread of a feature over the spatial domain as it tends towards randomness. Diffusion is simulated on either a rectangular or hexagonal grid, and the resulting diffusion times, which measure how long the diffusion process takes to converge, are used to assess spatial patterns. The higher the diffusion time, the stronger the spatial signal. Note that this method does not conduct any statistical testing and thus does not provide p-values; instead, it provides a ranking of genes by their diffusion times.
Moreover, sepal is only applicable to hexagonal or rectangular grids. Hence, whenever data is given with unstructured coordinates, we follow the implementation by <cit.> to convert spatial measurements to a rectangular grid.
SpatialDE: Finally, SpatialDE applies spatial covariance testing via Gaussian process regression, a class of models originating from geostatistics. In a nutshell, gene expression is modelled via a multivariate normal model that includes both a non-spatial and a spatial covariance term.
The model fitted with spatial covariance is then compared to a null model of spatial independence without this spatial component.
Likelihood ratio testing is used and p-values are computed analytically using the χ^2 distribution with one degree of freedom. As a multiple testing correction, we follow <cit.> and report q-values, which specifically control the positive false discovery rate <cit.>. Since SpatialDE assumes normally distributed residual noise, we follow the preprocessing steps of <cit.>. First, Anscombe's transformation, a variance-stabilising method for negative binomial data, is applied. Second, log total count values are regressed out
to achieve independence from the total counts per spatial location.
§.§ Datasets and Preprocessing
We use three real transcriptomics datasets for our investigation. First, we analyse spatial data of a mouse olfactory bulb (referred to as MOB, replicate 1) and a breast cancer tumour sample (breast, replicate 1) from <cit.>, standard datasets commonly used to test spatial variability detection methods. Both have been sequenced using 1k spatial transcriptomics arrays, which measure spots on rectangular grids. Each spot has a diameter of 100 μ m and encompasses 10-100 cells.
Further, we study one sample from <cit.> (sample IZP10), which consists of spatial measurements of spots on a hexagonal grid. This sample was taken from the heart of a patient who suffered a heart attack, from tissue containing necrotic areas.
Prior to analysis, preprocessing and quality control of omics data are crucial steps for accounting for technical errors as well as filtering out noise. Indeed, normalisation is a key step in spatial omics data analysis, as many downstream tasks, including the detection of spatially variable features, rely on adequate preprocessing choices to account for technical effects <cit.>.
Following best practices <cit.> and alternative spatial variability studies <cit.>, we decide on the following pre-processing strategy and
select all genes that
have a minimum total expression count of 10, and
are present in at least 1% of locations.
Further, we remove all cells with fewer than 10 counts. These steps are taken to avoid analysing cells or genes for which hardly any biological signal has been detected. Finally,
we exclude all ribosomal and mitochondrial genes from analysis as these indicate technical artifacts and are not related to the mRNA, i.e. protein-coding genes that are the focus of transcriptomics data analysis <cit.>.
For each real dataset, we then identify the 500 most highly variable genes for further analysis. That is, we select a set of genes that vary most notably across cells. Our goal will then be to detect whether these genes, which differ noticeably between dislocated cells, also show distinct spatial patterns. Note that for the simulated data we skip these quality control steps, as we want to compare all SVG detection methods across all simulated features.
Further, as conventionally done in spatial omics data analysis, we decide to pre-process the expression values i.e. count data
in the manner best fitting each SVG detection method <cit.>. For SpatialDE we apply the normalisation strategy preferred for this method, as detailed in <ref> <cit.>.
For sepal, we apply the recommended shifted logarithm normalisation by applying the transformation log(f+2) to each feature f. This transformation is applied to stabilise the variance across the dataset, to counteract variable sampling effects such as overdispersion, and to reduce the skewness of the feature's distribution. The pseudo-count of 2 is added to enable the log-transformation also for zero-valued counts. To ensure comparability across spatial variability tests, we similarly choose to compute our persistence-based test as well as Moran's I from the same log(f+2)-transformed features.
Note however, that this pre-processing strategy is not the only viable choice and our method can be flexibly applied across varying pre-processing choices <cit.>.
In general, given that our persistence-based methods directly scales through each gene's expression values, we highly recommend to apply our framework to transformed and normalised features, but we leave the choice of the best data-dependent transformation to further investigation <cit.>.
§.§ Synthetic Data Generation
We create in-silico examples of spatial transcriptomics data using SRTsim <cit.>, an R package for the simulation of spatial patterns. In particular, we randomly sample spatial locations on a square domain and then simulate count data using (zero-inflated) Poisson or negative binomial distributions.
The choice among this range of count distributions is motivated by the fact that the true distribution and nature of transcriptomics data is highly debated and varies between sequencing technologies and datasets.
Zero-inflation aims to account for the excess zeros often encountered in sparsely sampled gene expression data due to a high dropout probability.
Across our simulations we thus consider zero-inflation
to be a potential source of noise and assess the robustness of the proposed spatially variable gene detection methods against these technical errors.
SRTsim allows us to choose the following settings for simulations. Globally, we set the mean parameter (μ = 1), the dispersion parameter (s) and the zero-proportion parameter (z).
We then define N spatial domains corresponding to distinct spatial shapes, each assigned an effect size x_i resulting in an x_i-fold increase in mean for the specified spatial domain i ∈ {0, 1, 2, ..., N}. The effect size of the background domain is set to x_0 = 1. Count data is then sampled separately for each domain with the thus specified parameters and assigned randomly to locations within the shape.
We then simulate four distinct spatial patterns, a gradient, two lines, a set of clusters and a cellring illustrated in <ref> with the annotated effect sizes. Otherwise, if no spatial signal is simulated, gene expression is sampled randomly across the locations.
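The actual simulations use the SRTsim R package; the NumPy sketch below merely illustrates the generative scheme described above, namely zero-inflated negative binomial counts with a fold change inside one spatial domain. The circular domain shape and all names are our own illustrative assumptions.

import numpy as np

def simulate_counts(n_spots=1000, mu=1.0, s=0.3, z=0.1, effect=3.0, seed=None):
    rng = np.random.default_rng(seed)
    coords = rng.uniform(0.0, 1.0, size=(n_spots, 2))
    in_domain = np.linalg.norm(coords - 0.5, axis=1) < 0.25   # one circular domain
    mean = np.where(in_domain, effect * mu, mu)               # x-fold increase in mean
    nb_n = 1.0 / s                                            # NB with mean m, dispersion s
    counts = rng.negative_binomial(nb_n, nb_n / (nb_n + mean))
    counts[rng.uniform(size=n_spots) < z] = 0                 # zero inflation (dropout)
    return coords, counts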
§ RESULTS
Across our experiments we consider two main goals: (i) using significance testing to determine spatially dependent features, and (ii) ranking features by their spatial dependence to identify a fixed number of the top most spatially variable genes.
Throughout our empirical experiments we then demonstrate the following results:
* PH-based approaches perform well at detecting SVGs in terms of the area under the precision recall curve and are more robust to zero-inflation i.e. high proportions of excess zeros in gene expression counts than Moran's I.
* PH-based significance testing leads to lower sensitivity but higher specificity identifying a smaller set of SVGs at a fixed 0.05 significance threshold.
* PH-based approaches are superior to alternatives at detecting a predefined number of the top most spatially variable genes.
* PH-based approaches capture orthogonal information to alternative methods.
* PH-based spatial dependence tests are less correlated with the total sum of feature values than Moran's I.
We further explore the use of our PH-based approach to detect SVGs for three real spatial omics datasets, for which we plot and examine the top spatially variable features.
We thus demonstrate that persistence-based methods, and Betti curves in particular, offer valid alternatives to existing SVG methods, such as Moran's I.
§.§ Simulation Study
§.§.§ PH-based approaches are robust to zero-inflation
We simulate spatial data as described in <ref> and first vary the zero-inflation parameter z from 0.1 to 0.99 keeping the dispersion fixed at 0.3. For each shape and parameter choice, we then simulate 50 genes that possess the spatial signal
and 50 genes that are randomly sampled without any spatial effect.
Then, across each degree of zero-inflation we summarise the performance of each statistical test and scoring method in the following manner:
<ref>
reports the area under the precision-recall curve (AUPRC) as a function of the degree of zero-inflation for all methods.
This comparison is chosen because it allows us to compare all methods, also the ranking-based method, as a single summary statistic per degree of zero-inflation.
An AUPRC of 1 indicates perfect performance, random guessing is expected to result in an AUPRC of 0.5, and 0 is the worst possible value.
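For reference, this evaluation can be computed with scikit-learn's average precision as the area under the precision-recall curve; for p-value based methods one would pass negated p-values so that larger scores indicate stronger spatial signal (a convention we assume here).

from sklearn.metrics import average_precision_score

def auprc_per_method(is_spatial, scores_by_method):
    """is_spatial: 1 if a spatial signal was simulated, else 0;
    scores_by_method: dict mapping method name -> per-gene score (higher = more spatial)."""
    return {name: average_precision_score(is_spatial, scores)
            for name, scores in scores_by_method.items()}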
§.§.§ Persistence-based testing performs well at detecting SVGs
We observe that all persistence-based methods achieve high AUPRC values very close to 1 up to around 70% zero-inflation across all patterns.
This is followed by a decrease in AUPRC, which remains above 0.6, even for 99% of excess zeros. Overall, these trends show robustness and expressivity of persistent homology as a summary of spatial patterns even in the presence of high zero-inflation.
§.§.§ Persistence-based methods perform similarly to one another
We further observe that both Betti curves and total persistence
follow very similar trends in <ref> with Betti curves performing better on the gradient pattern and total persistence showing higher scores on the streaks pattern.
This finding highlights that both approaches are alike in terms of the information they are summarising.
Persistence landscapes meanwhile achieve consistently higher AUPRC values of above 0.7 for higher degrees of zero-inflation across datasets and thus capture trends in spatial variability more robustly for high degrees of dropout.
For this experiment, we thus observe that persistence landscape summarise different aspects of persistent homology than alternative summaries.
§.§.§ Persistence-based methods outperform Moran's I
Our results thus indicate that persistent homology is more expressive at quantifying spatial signals in simulated expression values under the presence of zero-inflation than Moran's I.
That is, we observe that by tracking the evolution of connected components via superlevel-set filtrations on spatial graphs, we obtain summary statistics with higher discriminative performance than Moran's I, which quantifies spatial autocorrelation based on local neighbourhoods and is the most commonly used baseline measure for spatial variability. Indeed, via this comparison in terms of AUPRC, we assess that persistence-based summaries offer a valid alternative to established tools from spatial statistics.
§.§.§ Persistence detects spatial variability across all simulations
We note that our proposed persistence-based tests do well at detecting spatial variability, in particular for low levels of zero-inflation, as would be expected. However, in comparison, we see that some alternative methods do not follow the same clear trend across datasets. SpatialDE starts decreasing in terms of AUPRC at comparably lower proportions of zero-inflation. This difference is especially pronounced for the streaks pattern, where it performs notably worse than all alternative methods beyond 60% zero-inflation. Potentially this is because this simulation only shows two relatively thin lines that are harder to detect as spatially variable in the presence of excess zeros.
Meanwhile, sepal performs very well for higher values of z between 0.6 and 0.8 across datasets, showing higher AUPRCs than persistence landscapes. However, sepal fails at detecting the gradient pattern at low degrees of zero-inflation, for example receiving an AUPRC of only 0.6 at z=0.1, highlighting that diffusion times are not suitable for detecting this smooth trend in spatial variability. We thus demonstrate disadvantages of both sepal and SpatialDE compared to our persistence-based approach.
We conclude that PH-based methods achieve overwhelmingly higher or similar AUPRCs to the baseline method Moran's I highlighting their utility as robust SVG detection methods.
§.§.§ PH-based methods are more specific than Moran's I
Extending our observations beyond AUPRC, we next decide to view spatially variable gene detection in the context of hypothesis testing. <ref> then compares the performance of each
statistical test in terms of specificity and sensitivity at a fixed 0.05 significance
threshold for corrected p-values.
In agreement with results from <cit.> we observed that
at this fixed threshold, total persistence is either similarly or slightly less powerful i.e. less sensitive than Moran's I. This is followed by
Betti curves, which detect a lower proportion of true positives, in particular on the streaks pattern, and persistence landscapes, which give even more conservative estimates. However, we also observe that all persistence-based approaches show very high specificity across examples, whereas Moran's I demonstrates the highest false positive rate of around 5% to 25%, even after p-value correction, which is undesirable in practice when the set of spatially variable genes should be correctly identified.
Our findings thus agree with <cit.>, who find that persistence is less sensitive than Moran's I.
However, <cit.> further claim that persistence cannot replace Moran's I, a finding we do not fully support. Arguably, specificity as well as sensitivity is important for interpreting spatial dependency tests in practice. Indeed, we observe that at a default 0.05 significance cutoff, corrected p-values from PH-based approaches achieve higher specificity than Moran's I, showing that PH-based results can be more useful for tasks that require a low false positive rate and benefit from a more conservative assessment. Given the high AUPRC achieved by PH-based methods, we further assess that choosing a higher significance threshold could be a suitable adjustment to achieve higher sensitivity and identify a larger set of spatially variable features while retaining superior specificity.
§.§.§ Agreement between methods
Further, we compute the Spearman correlation between each method's ranking
and report the results in <ref>.
For persistence-based approaches we rank genes by their p-values; otherwise we use the values of the scores as defined in <ref> for sepal and Moran's I.
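The pairwise agreement reported below can be computed with SciPy's Spearman correlation; the dictionary-based bookkeeping in this sketch is our own.

import numpy as np
from scipy.stats import spearmanr

def ranking_agreement(scores_by_method):
    """scores_by_method: dict mapping method name -> per-gene ranking score or p-value."""
    names = list(scores_by_method)
    corr = np.eye(len(names))
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            rho, _ = spearmanr(scores_by_method[names[i]], scores_by_method[names[j]])
            corr[i, j] = corr[j, i] = rho
    return names, corr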
Overall, we observe that total persistence and Betti curves perform most similarly across experiments with correlations between 0.64 and 0.82, which is likely driven by their common roots in accumulating persistence features <cit.>.
Further, we find that there are varying but overall moderate levels of agreement between persistence and Moran's I (ρ ≤ 0.53), likely explained by the fact that both are summaries of the same spatial graph and understand spatial variability via modelling clustering behaviour, as summarised by connected components in PH, or autocorrelation, as summarised by differences in neighbourhood scores.
Interestingly, while follows a fundamentally different modelling approach, correlation between and the other approaches
exist at comparable levels to other entries in the correlation matrix and the score is actually most correlated with Moran's I on the clusters dataset (ρ = 0.58). Overall, we thus observe that persistence-based permutation testing captures complementary information not encoded by either Moran's I, or .
§.§ PH-based approaches identify the top SVGs
Next, we keep the zero proportion at z=0.1 and the dispersion parameter at s=0.3, following estimations by <cit.> from real data, but vary the effect size by taking x̂_i = max(x_i / e, 1) for some e in {6, 5, 4, 3, 2, 1} for each domain i, effectively varying the strength of the spatial pattern compared to the background domain, for which x_i = 1. For each parameter choice and spatial shape we then simulate 50 features with spatial signals and 50 completely random patterns.
We then investigate the identification of top spatially variable genes for different strengths of spatial effects.
§.§.§ Correlation with the sum of gene expression counts
As reported in <ref>, we explore the correlation between each spatial variability score and the total count per gene, i.e. the sum of feature values.
We want an ideal SVG detection method to not just correspond to these total counts, but score genes based solely on the existence of spatial dependencies in gene expression.
Across datasets, we see that Moran's I and show the highest correspondence to the sum of feature values. In comparison, persistence-based rankings achieve a consistently lower Spearman correlation of around 0.6 with the total counts, with persistence landscapes reaching slightly higher correlations than Betti curves and total persistence. Overall, this rank correlation is similar to the one achieved by . The comparison therefore highlights that persistence offers a satisfactory alternative to established SVG tests.
§.§.§ PH-based approaches correctly identify the top SVGs
Across methods, we then compare the ground truth
given by the existence of spatial signal with each spatial variability score. <ref> shows the proportion of truly positive features detected by each method. Ideally, a suitable spatial variability detection method would show a high proportion of true positives amongst the top 300 genes (given 300 out of 600 features show spatial patterns).
Notably, Betti curves perform consistently well across examples, correctly identifying more than 90% of the top SVGs. This is followed by total persistence and persistence landscapes, which perform better on the gradient but worse on the cluster pattern.
Interestingly, from these examples we see that while the proportion of genes correctly identified is high across scenarios, there is no strong distinction between persistence-based and non-persistence-based methods based on these examples alone. However, and Moran's I seem to perform slightly worse at identifying the most spatially variable features than persistence-based approaches, which, together with their correlation to total counts, indicates that persistence-based testing is preferable in this scenario.
§.§ Analysing Transcriptomics Data
Having observed that persistence-based approaches give suitable spatial variability tests on simulated data, we next investigate their utility on real datasets as detailed in <ref>.
§.§.§ PH-based methods identify spatially variable genes
First, we decide to focus on one of our proposed methods, namely Betti curves, which performed well across our simulated case study, and investigate spatially variable genes detected by this approach. Then, <ref> shows representative examples of genes that are both the most highly variable and the most spatially variable features reaching the lowest possible p-values in terms of the spatial dependence test. We see that Betti curves identify both genes with low and high expression counts to be spatially variable.
For the mouse olfactory bulb (MOB), we see that there is a clear pattern of genes being more highly expressed and clustered in the center of this brain region, such as Mbp, Pcp4, Nrgn, or Gpsm1. What these genes have in common is that they relate to neural activity in this region of the mouse's brain. For the breast cancer sample we see somewhat sparser patterns that could indicate disease-specific differences between tissue regions and immune response. Finally, for the heart sample the genes COL1A1 and COL1A2 are found to be spatially variable; these code for collagen type 1, which is mainly found in connective tissue and related to wound healing. Overall, studying the set of spatially variable genes in more depth can thus reveal important insights into tissue biology and disease response by specifically focusing on transcripts that vary across tissue regions.
§.§.§ PH-based methods identify fewer SVGs than Moran's I
Across the
real datasets considered for this study, we report the number of spatially variable genes detected by each method. Results in <ref> then confirm that Moran's I reports the largest number of significantly spatially autocorrelated genes, calling more than 50% of transcripts spatially variable. This is followed by Betti curves, which find more spatially variable patterns for the heart sample, but fewer for the mouse brain and breast cancer samples. Interestingly, across these real datasets, total persistence detects a low number of SVGs, comparable to persistence landscapes but notably lower than Betti curves. This trend was not observed across our simulations, indicating that the results of using Betti curves or total persistence are not necessarily similar.
Overall, the reported numbers of SVGs support our findings from the simulation study in <ref> on the sensitivity and specificity of each approach that lead us to conclude that persistence-based methods identify a smaller number of genes than Moran's I but do so with a lower type-1 error rate.
§.§.§ Persistence is robust to the sum of feature values
Next, we investigate the dependence of each score on the total counts measured for each gene, as reported in <ref>. Ideally, a spatial autocorrelation test should also be somewhat robust to whether features have high or low values overall. That is, the genes found to be spatially variable should not be biased towards genes with high expression levels; spatial patterns in lowly expressed genes should also be detected.
Using the Spearman correlation with total counts as a sanity check, we see that amongst all methods, Moran's I generally shows the highest correlation with total counts. This is followed by the persistence-based methods, which are more robust to the sum of feature values. Finally, SpatialDE is designed to quantify spatial variability independently of total counts, but we find that in some examples it actually correlates more with total counts than the persistence-based methods.
§.§.§ Agreement between methods
Further, <ref> reports the rank correlations between the gene rankings reported by each spatially variable gene detection method. Overall, we once again see a high agreement between Betti curves and total persistence ρ∈ [0.7, 0.79]. This is followed by moderate correlation to persistence landscapes, Moran's I and with ρ∈ [0.32, 0.69]. Persistence landscapes rank genes in a notably different order than other persistence-based approaches ρ∈ [0.42, 0.59].
The least agreement is reported between persistence-based methods and with ρ ∈ [0.24, 0.39]. Overall, our results point towards a relatively high disagreement between methods. Similar to the simulation study, we find that persistence summarises distinct information on spatial patterns not encoded by existing SVG detection methods.
§ DISCUSSION
Throughout our study, we proposed randomised permutation testing procedures for spatial variability in omics data via Betti curves or persistence landscapes. We investigated these tests in comparison to the baseline methods Moran's I, sepal, SpatialDE, and total persistence <cit.>.
On simulated data, we showed that persistence is more robust to zero-inflation than Moran's I, showing higher AUPRC and a better detection rate of truly variable features amongst the highest-ranked genes. At a fixed significance cut-off we further find that persistence-based methods lead to slightly lower sensitivity but higher specificity, leading us to conclude that persistence is potentially more useful than Moran's I in applications that require more conservative estimates. We thus demonstrate in silico that persistence curves offer stable approaches to quantify spatial dependence in feature values.
On real data, we further observe that persistence is less dependent on the sum of feature values than Moran's I indicating that it summarises spatial patterns in a more robust manner.
Finally, we visualise genes with representative spatial patterns that are both spatially and highly variable and describe their qualitative biological relevance. Thus, we describe the practical benefits of using persistence-based approaches over Moran's I and showcase examples where persistence curves are superior to
baseline methods.
We find that a non-parametric approach is less reliant on specific distributional assumptions about the spatial distribution of feature values, in contrast to parametric methods such as SpatialDE, making our approach preferable when such assumptions are not satisfied <cit.>, which arguably remains a contested question for omics data in particular.
While our analysis focuses on illustrating examples on the use of persistent homology in the domain of spatial omics data analysis, our methodology can be generalised to other types of spatial data, such as spatial graphs with labelled vertices or edges. We thus propose a flexible method for detecting spatially variable features based on application-driven filtrations.
However, we want to also note the limitations of our study. The computational costs of non-parametric methods, especially the computation-heavy nature of permutation testing, remains a concern in practice <cit.>.
Computing a persistence curve for one spatial feature itself is extremely fast taking less than one second on a local CPU.
However, the proposed spatial variability tests pose some scalability constraints. Permutation testing requires thousands of repetitions, resulting in a notable increase in computational costs <cit.>. While per feature this test might take less than a minute, this scaling behaviour does not suffice when the goal is to study hundreds of thousands of genes simultaneously. Sequential permutation testing could be employed to reduce some of these costs <cit.>, but might not lead to sufficient computational improvements overall. There thus exists great motivation for further work to develop alternative statistical tests, potentially using parametric results such as those that could be derived from a universal null distribution for persistence diagrams <cit.>, to speed up the proposed testing procedures.
Further, we have observed a high degree of disagreement between SVG detection methods across our experiments, which is a known practical issue for benchmarking these methods <cit.>. Even across our simulation study, we find no one spatial variability detection method that is preferable in all scenarios. Indeed, further investigation could be beneficial to gain a more complete understanding of the advantages and disadvantages of each approach.
Nevertheless, we conclude that persistence-based approaches offer a valid alternative to conventional spatial variability detection methods, in particular Moran's I, as they achieve high AUPRC values even in the presence of zero-inflation, give more specific results than Moran's I, and show lower correlation to the total sum of feature values.
In terms of the way persistence is used throughout this study, we investigated zeroth dimensional persistent homology
because of its computational efficiency as well as the utility of encoding local peaks for detecting spatial patterns. However, we note that one could follow an approach similar to <cit.>, extend the filtration by its exterior and compute first dimensional persistent homology. This
would allow us to deduce information about the geographical location of local minima via tracking the generators of 1D homology classes, which could be combined with statistical methods to quantify exactly how spatial patterns deviate from spatial randomness.
|
http://arxiv.org/abs/2409.03129v1 | 20240904233830 | Subsidy design for better social outcomes | [
"Maria-Florina Balcan",
"Matteo Pozzi",
"Dravyansh Sharma"
] | cs.GT | [
"cs.GT",
"cs.LG"
] |
Subsidy design for better social outcomes
Maria-Florina Balcan
Carnegie Mellon University
Matteo Pozzi
Carnegie Mellon University
Dravyansh Sharma
Toyota Technological Institute at Chicago
Received April 17, 2024; accepted September 4, 2024
====================================================================================================================================================================================================================
§ ABSTRACT
Overcoming the impact of selfish behavior of rational players in multiagent systems is a fundamental problem in game theory. Without any intervention from a central agent, strategic users take actions in order to maximize their personal utility, which can lead to extremely inefficient overall system performance, often indicated by a high Price of Anarchy. Recent work <cit.> investigated and formalized yet another undesirable behavior of rational agents, that of avoiding freely available information about the game for selfish reasons, leading to worse social outcomes. A central planner can significantly mitigate these issues by injecting a subsidy to reduce certain costs associated with the system and obtain net gains in the system performance. Crucially, the planner needs to determine how to allocate this subsidy effectively.
We formally show that designing subsidies that perfectly optimize the social good, in terms of minimizing the Price of Anarchy or preventing the information avoidance behavior, is computationally hard under standard complexity theoretic assumptions. On the positive side, we show that we can learn provably good values of subsidy in repeated games coming from the same domain. This data-driven subsidy design approach avoids solving computationally hard problems for unseen games by learning over polynomially many games. We also show that optimal subsidy can be learned with no-regret given an online sequence of games, under mild assumptions on the cost matrix. Our study focuses on two distinct games: a Bayesian extension of the well-studied fair cost-sharing game, and a component maintenance game with engineering applications.
§ INTRODUCTION
Multiagent systems often need a central agent to intervene and avoid harmful consequences of selfish decisions of individual agents. Subsidy is a form of positive intervention where the central agent reduces the cost of some actions in the system, with the goal of leading agents to better social outcomes. The amount of subsidy available to the central agent is typically scarce and therefore it is crucial to optimize its allocation. Prior research has addressed this by designing subsidy schemes that approximately optimize the Price of Anarchy <cit.>. In this work we extend the study of subsidy design for games in three different directions: we study the impact of subsidy beyond the Price of Anarchy objective and show its usefulness in preventing an undesirable information avoidance behavior, we establish formal hardness results for designing optimal subsidy schemes, and show how an alternative novel data-driven approach can be used to design subsidy by exploiting historical data or related games.
Concretely, critical infrastructure maintenance typically involves joint responsibility shared among multiple stakeholders, and failure of coordination can lead to disastrous consequences. For example, different segments of a large road network are typically managed by different civil authorities.
With an increasingly connected physical and digital world, it is a major challenge to coordinate large-scale systems consisting of several disjointly owned components. As a result, one crucially needs a central planner—who has the ability to allocate shared resources to avoid major catastrophic failures—to ensure smooth operation of the overall system. In the above examples, this central agent could be a government department with appropriate jurisdiction. Ideally, the central agent would ensure a judicious use of the common resources which are to be allocated to appropriate stakeholders to incentivize them to do their part. Identifying the optimal resource allocations can be hard, but
very often the central agent manages multiple similar systems or has access to relevant historical data. Could one take advantage of this data availability to improve the allocation?
Alternatively, in modern market systems, consumers have several options and would prefer to select options that meet their needs at the smallest cost. Often the consumer commits to an action based on expected long-term costs. For example, people buy health insurance plans and effectively share the cost of healthcare with other subscribers to the plan. If they could accurately estimate their need for medical services, people who determine they would not need any expensive medical procedures would opt out and drive up the insurance costs. An intervention by the government to reduce the cost of health insurance plans could ensure that these people still opt in and the system is robust to the additional information regarding need for medical services. This intervention to guide the market is expensive, and also needs a careful allocation. As with infrastructure maintenance, optimal allocation is hard and one would like to use historical market data to guide it.
Coordination in infrastructure projects poses multiple challenges when different pieces are owned by different agents.
In systems requiring all of multiple components to simultaneously work, the failure of any single component can bring the entire system down.
As a countermeasure, critical systems often have in place some amount of redundancy in terms of the components needed for the system to function. But this could introduce the volunteer's dilemma, where agents with knowledge of their redundancy can choose not to invest the due maintenance cost in the hope that a different agent will put in the cost instead. Furthermore, selfish agents could choose to ignore or deliberately not collect important information about their own component if public knowledge of that information comes at increased personal cost to them. With these various strategic aspects at play, a good amount of literature is devoted to identifying and circumventing such issues as individual agents <cit.>.
A common lesson from major failures and the primary recommendation for avoiding large-scale failure is ensuring an active role by the top management in coordination and resource allocation <cit.>. For example, in public-private partnership infrastructure projects a central government agency typically decides the allotment of common resources among the various project stakeholders. Resources are typically scarce, and therefore a judicious allocation is crucial to ensure that all the critical components essential for the project function properly. The challenge can be particularly severe in systems with a large number of components.
Games studied.
We consider a component maintenance game based on <cit.>. We model the common responsibility as disjoint components maintained by individual agents or stakeholders, each having a binary state denoting whether the component is functioning or broken. For simplicity, we will assume known (prior) probabilities which govern whether the component will work, which are a common knowledge among all agents. The overall system is also assumed to have a binary state, given by some boolean function of the component states. For example, the overall system consisting of five components might function only if all components are functioning, or it might function if at least one of the first three components is functioning and at least one of the next two components is functioning. The components are repairable, and the agent maintaining it can choose to repair their component at some personal cost, or choose to do nothing. If an agent repairs
their component, then it is assumed to be guaranteed to function. In addition to the repair cost, which is only charged to agents that undertake repair, all agents are assumed to experience a large cost if the overall system fails (see Section <ref> for formal details of the game). Furthermore, any individual component may be publicly inspected (without any inspection cost) to determine whether it is actually working or broken. In this case, an agent may prefer to avoid having the state of their component inspected and revealed to everyone, in order to avoid an increased personal cost at equilibrium. In our model, agents may inspect a component but do not control the information about the inspected state, i.e. the state of an inspected component is revealed to all agents irrespective of who owns the component.
We also study a Bayesian extension of the classical cost-sharing game. We are given a collection of actions, each associated with some cost, with each action available to a fixed subset of agents. All agents that select a given action from the collection share the cost of that action. For example, if the actions are commute options like bus, train or car and the agents are commuters, the cost of running the bus is shared by its users. In the Bayesian extension, there is a prior over the action costs and agents choose actions based on the mean cost under the distribution. The true cost of some action may be inspected and the information revealed to all the agents.
Summary of contributions. We formalize the problem of effective resource allocation by a central agent to improve the performance of a multi-agent system by subsidizing certain costs associated with the system.
We consider distinct objectives that the central agent might have,
(a) to use subsidy to reduce the price of anarchy for the system, i.e. to ensure that the harmful effect of the selfish behavior and lack of coordination and of the agents on the social cost is minimized,
(b) to ensure that the value of information (measured as change in agent's cost at equilibrium before and after inspection) about the state of an inspected component is non-negative for all agents.
We show that the system can perform poorly on each of the above objectives in the absence of any subsidy. The goal of the central agent therefore is to determine the smallest subsidy budget needed to ensure that one of the above objectives is met. We will show (Section <ref>) that this calculation of optimal budget by the central agent can be done exactly for a simple small system, and the optimal subsidy allocation can be different for different objectives. We further show that in contrast
it is computationally hard to do so in more general systems under standard complexity theoretic assumptions. The computational hardness results (located in Section <ref>) for computing the optimal subsidy hold for both the above objectives, for both component maintenance and cost-sharing games.
On the positive side, if the central agent has access to data about multiple games, we show (in Section <ref>) that a good value of the subsidy and allocation can be achieved with a polynomial number of game samples coming from an arbitrary game distribution. Moreover, if the games happen sequentially, the agent can perform nearly as well as the best subsidy scheme in hindsight, under very mild assumptions on the adversarial sequence of games. Since designing an optimal subsidy scheme is computationally hard, we would like to avoid having to solve the problem too many times in repeated games from the same domain. If we have access to similar games (e.g. infrastructure projects in similar counties), the subsidy design problem is still hard, but we can potentially avoid a large number of repeated intensive computations. Moreover, the central agent may need to decide the value of subsidy on a new unseen game instance without observing the relevant parameters for this “test” game, for example the prior distribution on component failure. We obtain polynomial bounds on the sample complexity of the number of sample games needed to learn the optimal subsidy, which imply that we need to optimize the subsidy only for polynomially many games and can use the “learned” subsidy scheme on further game instances.
We also obtain no-regret guarantees on learning the subsidy parameter in an online sequence of games, under a mild smoothness assumption on the repair costs. This could be useful for example in studying potential failure in a communication network with dynamically changing nodes/components.
While Lin et al. <cit.> introduce the component maintenance game and expose the challenge of information avoidance in small systems with a constant number of agents, we study general systems with an arbitrary number of components and consider additional relevant objectives from the perspective of a central agent that can provide subsidy to some agents to reduce the repair cost of their components. While the use of subsidy has been studied in the context of cost-sharing games <cit.>, we study an interesting Bayesian extension where agents may experience negative Value of Information, establish new formal hardness results for optimal subsidy design, and give a data-driven approach for overcoming the computational hardness of subsidy allocation.
Our main tool for showing the above positive results is to employ a recently introduced paradigm for beyond worst-case analysis called data-driven algorithm design <cit.>. Unlike traditional analysis, where one gives exact or approximate performance bounds applicable for worst-case instances, this paradigm focuses on “typical” problem instances that one actually encounters. This is similar to average-case analysis, but instead of a uniform distribution over problem instances, any arbitrary (fixed but unknown) distribution over the problem instances is allowed.
§.§ Related Work
Component maintenance games. Management of engineering systems often involves maintaining multiple components arranged in some scheme that govern the overall functionality of the system; these components are controlled by different agents that make decisions under uncertainty <cit.>. Often this involves careful planning and resource allocation by a central agent whose goal is ensuring that the overall cost to the agents is small, and that the agents make good use of any available information about the component states <cit.>. The central agent could design incentives or subsidy to be given to specific agents for improving the overall system. The role of subsidy and taxation has been studied in both cooperative and non-cooperative games <cit.>. Typically, a central agent designs a subsidy (or taxation) scheme, which effectively alters the game parameters by changing the costs/profits of the agents, to minimize some objective like the Price of Anarchy. We examine optimization of novel objectives in addition to previously analyzed ones in the context of component maintenance games, and demonstrate a first application of a learning-theoretic lens to overcome the worst-case computational hardness of subsidy design.
Cost-sharing games.
Cost-sharing game is a classical game in algorithmic game theory. Several variants of this game have been studied in the literature, including a set cover version <cit.> which we study here, and multicast game where network users connect to a source by paying for a route to the source and sharing cost <cit.>. Prior work has shown that subsidy is crucial for the former, while best response dynamics are sufficient to obtain a small Price of Anarchy for the latter. While <cit.> propose a primal-dual approach for approximately minimizing PoA under subsidy, we complement their results by establishing NP-hardness of exact optimization and further study the information avoidance phenomenon in a Bayesian extension of the game.
Value of Information.
Information avoidance has been studied extensively in behavioral sciences, economics, psychology and public health <cit.>. People can decide to seek or avoid certain information based on their goals, and strategic considerations can cause agents to ignore free and useful information. In the component maintenance games that we study, the information corresponds to the true state of some inspected component. Agents can choose to not seek this information (about their own component, or that of another agent) even if it is freely available, if it could make them worse off (e.g. increase the need to repair their component) even at the cost of making the overall system more likely to fail. <cit.> demonstrates this phenomenon in several multi-agent network systems. <cit.> focuses on component inspection and the value of information metric, and the results apply to games with a small constant number of agents. In contrast, we study a broader variety of metrics of interest to the central agent, provide formal hardness results for n-agent games, and complement them with positive results under the data-driven algorithm design lens.
Central agent improving equilibrium performance. Another related line of work considers steering strategic agents to “good” equilibria in a variety of settings, typically with the help of a central agent. <cit.> consider using a public service advertising campaign, where the central agent prescribes actions to agents and a fraction of the agents (influenced by the campaign) follow actions that could lead to better equilibria. Another variant of the problem is to lead learning dynamics in a certain games where the game happens in phases for the same set of agents, and the impact of the campaign is only assumed in early rounds <cit.>. Similarly, <cit.> consider leading dynamics of agents with vanishing average regret. In contrast, our repeated game settings in Section <ref> consist of non-identical but similar games, for example corresponding to different infrastructure projects managed by the same central agent. <cit.> consider central agents that can modify the network structure, and study computational tractability for different utility functions and notions of “good” equilibrium.
Data-driven algorithm design is a tool for beyond worst-case analysis of algorithms, for learning algorithms that work well on typical problems coming from a common problem domain <cit.>. The technique has been successfully used in designing more effective algorithms for combinatorial problems <cit.>, with applications to machine learning <cit.> as well as mechanism design <cit.>. We will use the data-driven algorithm design approach to learn subsidy schemes from multiple related games. We provide sample complexity guarantees when the games are drawn independently from a fixed distribution, and no-regret guarantees when learning subsidy in an online sequence of games.
§ FORMAL NOTATION, SETUP AND MOTIVATING EXAMPLES
Let G=⟨ N,(S_i),(cost_i)⟩ denote a game, where N is a set of n agents (or players), S_i is the finite action space of agent i∈ N, and cost_i is the cost function of agent i. The joint action space of the agents is S=S_1×…× S_n. Given a joint action s=(s_1,…,s_n)∈ S let s_-i denote the actions of all agents except agent i, i.e. s_-i= (s_1,…,s_i-1,s_i+1,…,s_n). The cost function cost_i:S→ℝ of agent i (which the agent seeks to minimize) is a function of the joint action s∈ S. The social cost function of the game is the sum of cost functions of all the agents in the game, cost=∑_i=1^n cost_i. The optimal social cost is OPT = min_{s∈ S} cost(s).
Given a joint action s, the best response of agent i is the set of actions _i(s_-i) that minimizes its cost given s_-i, i.e., _i(s_-i) = _a∈ S_icost_i(a,s_-i). A joint action s∈ S is a (pure) Nash equilibrium (or NE) if no agent can benefit from unilaterally deviating to another action, in other words every agent is simultaneously playing a best response action in s, i.e., s_i ∈_i(s_-i) for every i∈ N. A Nash equilibrium is said to be global or optimal if it also minimizes the social cost among all Nash equilibria. We say a Nash equilibrium is a local or suboptimal equilibrium if it is not global.
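For intuition, these equilibrium notions can be checked mechanically on small games by enumerating the joint action space. The following Python sketch is our own illustration (the function names are not from any standard library); the cost functions are supplied by the caller, and a joint action with no strictly improving deviation is reported as a pure Nash equilibrium. It is reused in later sketches.

```python
from itertools import product

def pure_nash_equilibria(action_spaces, cost_fns):
    """Enumerate pure Nash equilibria of a finite game.

    action_spaces: list of lists, S_i for each agent i
    cost_fns: list of functions cost_i(s) -> float, with s a tuple of actions
    """
    equilibria = []
    for s in product(*action_spaces):
        is_ne = True
        for i, cost_i in enumerate(cost_fns):
            current = cost_i(s)
            # best-response check: can agent i strictly improve by deviating?
            for a in action_spaces[i]:
                deviation = s[:i] + (a,) + s[i + 1:]
                if cost_i(deviation) < current - 1e-12:
                    is_ne = False
                    break
            if not is_ne:
                break
        if is_ne:
            equilibria.append(s)
    return equilibria

def social_cost(s, cost_fns):
    """Sum of all agents' costs at joint action s."""
    return sum(cost_i(s) for cost_i in cost_fns)
```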
To the above standard model of a game, we add a central agent whose goal is to improve social outcomes by allocating subsidy which reduces costs for certain actions. Formally, we have the following definition of a subsidy scheme.
A subsidy scheme 𝕊 is defined as a set of functions subs_i:S→ℝ_≥0, where subs_i(s) gives the subsidy offered by the central agent to agent i given joint action s. In a subsidized game using scheme 𝕊, the cost of agent i is given by the difference function cost_i^𝕊:=cost_i-subs_i, and the total subsidy provided for joint action s∈ S is subs(s)=∑_i subs_i(s).
The Price of Anarchy measures the reduction in system efficiency (social cost) due to selfish behavior of the agents
<cit.>. We define Price of Anarchy (PoA) in the presence of subsidy along the lines of <cit.> as the ratio of the sum of total social cost and subsidy in the worst case equilibrium, to the optimal social cost.
Let 𝕊={subs_i} denote the subsidy scheme. Let E_NE(𝕊)⊆ S denote the subset of states corresponding to Nash equilibria when the cost for agent i is cost_i-subs_i. Suppose OPT ≠ 0 and E_NE(𝕊)≠∅. Then the Price of Anarchy under subsidy is given by
PoA(𝕊) = max_{s∈ E_NE(𝕊)} (cost^𝕊(s)+subs(s)) / OPT.
We also define a related metric for studying the effectiveness of a subsidy scheme 𝕊, which we denote PoA_NE(𝕊),
PoA_NE(𝕊) = max_{s∈ E_NE(𝕊)} (cost^𝕊(s)+subs(s)) / min_{s∈ E_NE} cost(s),
where E_NE denotes the set of Nash equilibria in the component maintenance game (in the absence of any subsidy), provided min_{s∈ E_NE} cost(s) ≠ 0 and both E_NE(𝕊) and E_NE are non-empty.
By setting zero subsidies (i.e. subs_i(s)=0 for each i,s) we recover the usual Price of Anarchy, PoA <cit.>. Note that finding the subsidy scheme that optimizes PoA(𝕊) or PoA_NE(𝕊) corresponds to the same optimization problem. In some games, it will be easier to show absolute bounds on PoA_NE(𝕊). Note that PoA_NE(𝕊)=PoA(𝕊)/PoS, where PoS is the usual Price of Stability in the unsubsidized game <cit.>.
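As a concrete illustration of these definitions (and only as an illustration; the helper below is our own sketch, not part of the formal development), the subsidized Price of Anarchy of a small game can be computed by brute force over the joint action space. The sketch is self-contained apart from the game description supplied by the caller.

```python
from itertools import product

def poa_under_subsidy(action_spaces, cost_fns, subs_fns):
    """PoA(S) by exhaustive enumeration: worst equilibrium of the subsidised
    game, with the paid subsidy added back, divided by OPT."""
    joint = list(product(*action_spaces))
    n = len(cost_fns)
    # subsidised cost functions cost_i - subs_i
    sub_costs = [(lambda s, i=i: cost_fns[i](s) - subs_fns[i](s)) for i in range(n)]

    def is_ne(s):
        for i in range(n):
            cur = sub_costs[i](s)
            for a in action_spaces[i]:
                if sub_costs[i](s[:i] + (a,) + s[i + 1:]) < cur - 1e-12:
                    return False
        return True

    opt = min(sum(c(s) for c in cost_fns) for s in joint)           # OPT
    equilibria = [s for s in joint if is_ne(s)]
    worst = max(sum(sub_costs[i](s) + subs_fns[i](s) for i in range(n))
                for s in equilibria)
    return worst / opt
```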
Besides PoA, we will also be interested in another quantity called Value of Information <cit.> which we define next. Suppose the costs of agents have some uncertainty, which we model by a prior θ (common belief shared by all the agents) over some finite information set ℐ about the game (e.g. the component states in a component maintenance game, or action costs in a cost-sharing game, see below). The expected cost of agent j under joint action s is given by l_j(s,θ)=E_{I∼θ}[cost_j(s)]. We will refer to this as the prior game. The Value of Information for agent j when information i∈ℐ is revealed is the change in expected cost of the agent from the prior θ to the posterior θ^{i,e_i}, where e_i denotes the revealed value of the information i. For example, in the component maintenance game defined below the information i corresponds to some agent's component and e_i corresponds to the revealed state (working or broken). The game with expected costs given by l_j(s,θ^{i,e_i}) is called the posterior game. Formally, the value of information is defined as follows.
Denote by θ the prior, and by θ^{i,e_i} the posterior when information i∈ℐ is revealed to be e_i. Let s,s̃ be joint actions which are Nash Equilibria in the prior and posterior games respectively. The Value of Information for agent j when information i is revealed as e_i is given by
VoI_{j,i}(s,s̃) := l_j(s,θ)-l_j(s̃,θ^{i,e_i}). We say that the Value of Information is non-negative for agent j if VoI_{j,i}(s,s̃)≥ 0 for any information i∈ℐ and any prior/posterior equilibria s,s̃. The worst-case Value of Information is defined as min_{s,s̃}VoI_{j,i}(s,s̃), where the minimum is over joint actions from prior and posterior Nash Equilibria. We will often call this simply the Value of Information; the worst-case aspect will be clear from context (lack of explicit arguments s,s̃).
We will be interested in a collection of related game instances, specifically the sample complexity of number of game instances needed from a distribution over the games to learn a good value of subsidy. Formal definitions follow the standard in data-driven algorithm design, and are deferred to Section <ref>. We will now proceed to formally describe the component maintenance and cost-sharing games and instantiate the above abstract definitions for both.
Component maintenance game <cit.>.
Each agent is associated with a component c_i which has a binary state x_i∈{0,1}, where x_i=0 corresponds to a broken component and x_i=1 corresponds to a functioning component. The action space of each agent is also binary, S_i={0,1}, where action s_i=1 indicates that the agent repaired the component (denoted RE), and s_i=0 denotes that the agent did nothing (denoted DN). The state x_i of c_i is updated after action s_i as x'_i=max{x_i,s_i}. This corresponds to “perfect repair”, i.e. if an agent picks the RE action, their component is guaranteed to work, and otherwise it stays as is. For a tuple of actions s, we will denote the updated state by x'(s), or simply x' when s is evident from context. The state u of the system is a fixed binary function of the component states, u=ϕ(x), where x=(x_1,…,x_n) and ϕ:{0,1}^n→{0,1}[The component maintenance game intuitively corresponds to monotone boolean functions ϕ, but our results easily extend to general boolean functions.]. For example, if ϕ(x_1,x_2,x_3)=(x_1 ∧ x_2) ∨ x_3, the system functions either when both components c_1 and c_2 are working, or when component c_3 is working. Here u=0 denotes a failure of the system. Let u'=ϕ(x') denote the state of the system after the agents' actions. The cost for agent i is given by cost_i=C_i s_i+1-u' for repair cost C_i∈ℝ. Note that C_i could be negative, for example if there is a reward or incentive associated with the repair of component i which more than offsets what the agent pays for its repair. The actions depend on the belief about the state of the components, which we model by a distribution θ over {0,1}^n. We will assume here that the components (and therefore their probabilities of functioning) are independent, and that all agents share the same common belief about the state of the components, i.e. θ is fixed and known to all the agents. The system failure probability under this belief is P_ϕ(θ)=1-E_{x∼θ}[ϕ(x)], and the expected cost of action s_i to agent i, given other agents' actions are s_-i, is l_i(s_i,s_-i,θ)=E_{x∼θ}[cost_i]=C_i s_i+1-E_{x∼θ}[ϕ(x')]. Similarly, the expected social cost is defined as l(s,θ)=E_{x∼θ}[cost]=∑_i l_i(s_i,s_-i,θ) for s=(s_1,…,s_n).
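The expected costs l_i(s,θ) are straightforward to evaluate for small systems by summing over component states. The sketch below is an illustration only: it assumes independent component priors (as in our setting) and a user-supplied structure function ϕ, and sums over all 2^n states, which is fine for the small examples in this paper.

```python
from itertools import product

def cmg_expected_costs(s, p, C, phi):
    """Expected costs l_i(s, theta) in the component maintenance game.

    s:   tuple of actions, s_i in {0,1} (1 = repair)
    p:   list of prior probabilities that component i works without repair
    C:   list of repair costs C_i
    phi: boolean structure function phi(x) -> {0,1} on component states x
    """
    n = len(s)
    # probability that the system works after the actions are applied:
    # a repaired component works surely, otherwise it works w.p. p_i
    p_work = 0.0
    for x in product([0, 1], repeat=n):
        prob = 1.0
        for i in range(n):
            prob *= p[i] if x[i] == 1 else 1.0 - p[i]
        x_prime = tuple(max(x[i], s[i]) for i in range(n))
        p_work += prob * phi(x_prime)
    p_fail = 1.0 - p_work
    return [C[i] * s[i] + p_fail for i in range(n)]

# two-component series system: phi(x) = x_1 AND x_2
series2 = lambda x: int(x[0] and x[1])
```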
We now imagine that each agent j has the ability, for free, to inspect their own component to determine its state. The catch, however, is that this state is revealed to all agents. The revelation of the state would result in an updated common belief θ̃, corresponding to the conditional distribution given the known component state, and therefore an updated cost function for all agents. As a result, the set of Nash equilibria can now change and the agent may suffer higher personal cost in the new equilibria, inducing the agent to avoid inspecting their own component for selfish reasons. We will use the terminology prior (respectively posterior) game and equilibria to refer to the game before (respectively after) the inspection of any fixed component c_j. We next formalize the setup and instantiate the Value of Information metric for component maintenance game to capture this behavior.
Component inspection game <cit.>. Suppose the inspection of component c_j reveals state y_j∈{0,1}. We will assume perfect inspection, i.e. y_j=x_j. Denote posterior belief after the inspection of component c_j by θ^j,y_j. If agent i switches action from s_i to s_i^j,y_j after the inspection (and other agents switch from s_-i to s_-i^j,y_j), the value of information about inspection of c_j for agent i and posterior y_j is given by VoI_i,j(s_i,s_-i,s_i^j,y_j,s^j,y_j_-i) := l_i(s_i,s_-i,θ)-l_i(s_i^j,y_j,s_-i^j,y_j,θ^j,y_j). The expected value of information is given by VoI_i,j(s_i,s_-i,s_i^j,*,s_-i^j,*) := l_i(s_i,s_-i,θ)-_y_jl_i(s_i^j,y_j,s_-i^j,y_j,θ^j,y_j), where s^j,* is the collection of states s^j,0,s^j,1. Here the posterior loss is computed as the expectation (based on prior) for the various possible outcomes for the inspection of a given component j. Typically we will assume that the joint actions s and s^j,y_j are Nash equilibria. We want the value of information to be non-negative for each agent i, when inspecting any component j, for any choice of equilibrium states. The expected value of information is easier to ensure to be non-negative, but implies a weaker (less robust) guarantee, so we will focus on the (inspection-specific) value of information. A motivation for ensuring that the value of information is non-negative is to not have any undesirable information avoidance behavior among the agents, where agents may choose to ignore freely available information (about the inspected component state) for selfish reasons (to reduce personal cost, for example by choosing to not repair their broken component), which could lead to sub-optimal social cost or an undesirable system state.
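Combining the previous sketches, the worst-case Value of Information of inspecting a single component can be computed by comparing equilibria of the prior and posterior games. The code below is again an illustration (agents and components are indexed from 0, and it reuses cmg_expected_costs and pure_nash_equilibria from the earlier sketches); it minimises over both possible inspection outcomes, so a negative return value indicates a possible incentive for information avoidance.

```python
def worst_case_voi(i, j, p, C, phi):
    """Worst-case Value of Information for agent i when component j is inspected.

    Minimises l_i(s, prior) - l_i(s', posterior) over prior equilibria s and
    posterior equilibria s', where the posterior replaces p_j by the revealed state.
    """
    n = len(p)
    actions = [[0, 1]] * n

    def nash(prob):
        cost_fns = [(lambda s, k=k: cmg_expected_costs(s, prob, C, phi)[k])
                    for k in range(n)]
        return pure_nash_equilibria(actions, cost_fns), cost_fns

    prior_ne, prior_cost = nash(p)
    worst = float("inf")
    for y in (0, 1):                       # revealed state of component j
        post_p = list(p)
        post_p[j] = float(y)
        post_ne, post_cost = nash(post_p)
        for s in prior_ne:
            for s_post in post_ne:
                worst = min(worst, prior_cost[i](s) - post_cost[i](s_post))
    return worst
```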
Fair-cost sharing game. Let 𝒜={a_j} denote the (finite) set of all possible actions for all the agents. There is a function f:𝒜→ 2^N such that
agent i∈[N] may use any action a_j for which i∈ f(a_j). Note that there may be multiple options corresponding to the same subset of agents. Under uniform or fair cost-sharing, all agents that use an action a_j in some state s∈ S equally share its cost. That is, in the classical fair cost-sharing game there is a deterministic function c:𝒜→[0,C_max] such that if k agents from f(a_j) use an action a_j in some state s, then cost_i(s) for each of these agents is c(a_j)/k. Here, we will consider a Bayesian extension where the costs of some actions in 𝒜 are associated with some uncertainty. The action costs are given by a distribution θ_c over [0,C_max]^{|𝒜|} and the agents all know the costs under the prior given by l_i(s,θ_c)=(1/k)·E_{θ_c}[c(a_j)], where k is the number of agents opting for action a_j in state s. This means we expect the agents to act according to the mean costs of actions under θ_c in the absence of additional information.
Analogous to the component inspection game above, suppose the inspection of action a_j reveals its true cost c_j∈[0,C_max]. Denote posterior belief after the revelation of the cost of action a_j by θ_c^j,c_j. If agent i switches action from s_i to s_i^j,c_j after the inspection (and other agents switch from s_-i to s_-i^j,c_j), the value of information about inspection of a_j for agent i and posterior c_j is given by VoI_i,j(s_i,s_-i,s_i^j,c_j,s^j,c_j_-i) := l_i(s_i,s_-i,θ_c)-l_i(s_i^j,c_j,s_-i^j,c_j,θ_c^j,c_j). The expected value of information is given by VoI_i,j(s_i,s_-i,s_i^j,*,s_-i^j,*) := l_i(s_i,s_-i,θ_c)-_c_jl_i(s_i^j,c_j,s_-i^j,c_j,θ_c^j,c_j), where s^j,* is the collection of posterior states s^j,c_j for different posterior costs c_j. As before, typically we will assume that the joint actions s and s^j,c_j are Nash equilibria.
A model for information avoidance. One way to model information avoidance is to adapt the advertising model of <cit.>. The model proceeds in the following steps:
* Some components C⊆ c_[n] are inspected. If a component c_i is inspected, information y_i∈{0,1} about it is collected. Here y_i=0 indicates the component was broken, and y_i=1 indicates it is working. y_i is a random variable with distribution p_i(θ). This updates the belief to θ'.
* The information is revealed to all agents. Each agent may independently decide to ignore the information with probability α.
* Agents that use the information play best response actions based on belief θ', agents that ignore the information play best response actions based on belief θ. The agents settle on a (local) Nash equilibrium.
* This is repeated for T rounds, after this each agent learns their own value of information i.e. whether it wants to use or ignore the information, and does the same thing from then on (learn-then-decide model of <cit.>).
§ FURTHER QUESTIONS
* Is the best response dynamics guaranteed to converge for this game? Is it a potential game <cit.>?
* For two agents and a series system, we know the value of information can be negative for local Nash equilibria. Under what conditions (inspection strategy C, probability α, number of rounds T) can the agents reliably learn the value of information?
* Are there inspection strategies C such that agents are steered towards global Nash equilibrium (for which the value of information is non-negative)?
§.§ Motivating examples for how subsidy can help
Component maintenance game. We first present a motivating example where subsidy for component repair costs can help improve social cost by steering the system to a better equilibrium.
<cit.> give examples of local equilibria in two-agent games where the Value of Information (the difference between the prior loss defined above and the posterior loss when a component is inspected) is negative. This is undesirable as it can lead agents to avoid perfect information about component states for selfish reasons. We present motivating examples where subsidizing component repair costs helps improve the social cost as well as prevent information-avoiding behavior on inspection.
[PoA in a 2-series system] Consider the two component series system depicted in Fig <ref>. Suppose that the components are independent and
the failure probabilities (according to the common prior belief) for components c_1 and c_2 are both 0.5, and the repair cost is
C_1 = C_2 = 0.3. Then the cost matrix for the component maintenance game is given in Table <ref>. For instance, for the joint action DN-DN (recall that action DN stands for “do nothing” and RE for “repair”) the system works with probability 0.5× 0.5=0.25, and therefore the cost to each agent is P_ϕ(θ)=1-0.25=0.75. Notice that both DN-DN and RE-RE are Nash equilibria for the game, but RE-RE has a smaller social cost. If the central authority provides a subsidy of 0.05+ϵ for the repair action, for any ϵ>0, then it would incentivize the agents to switch their actions from DN to RE, and the only Nash equilibrium is RE-RE. Note that the cost reduction in RE-RE (from 0.3 to 0.25-ϵ for each agent) equals the subsidy provided by the central agent in this case, so the social cost plus subsidy for RE-RE is preserved, while the sub-optimal equilibrium DN-DN is ruled out.
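The numbers in this example are easy to verify with the helpers sketched earlier (an illustration only, reusing cmg_expected_costs, pure_nash_equilibria and poa_under_subsidy): the unsubsidized game has the two equilibria DN-DN and RE-RE, and a repair subsidy slightly above 0.05 leaves RE-RE as the unique equilibrium with a price of anarchy of 1.

```python
p, C = [0.5, 0.5], [0.3, 0.3]
cost_fns = [(lambda s, k=k: cmg_expected_costs(s, p, C, series2)[k]) for k in range(2)]
print(pure_nash_equilibria([[0, 1], [0, 1]], cost_fns))      # [(0, 0), (1, 1)]

# a repair subsidy slightly above 0.05 to each agent removes the DN-DN equilibrium
eps = 1e-3
subs = [(lambda s, k=k: (0.05 + eps) * s[k]) for k in range(2)]
print(poa_under_subsidy([[0, 1], [0, 1]], cost_fns, subs))   # approximately 1.0
```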
Suppose there is a central authority that is invested in minimizing the social cost and tackling information avoidance. One approach towards fixing the above problem of negative VoI is for the central authority to give an extra incentive (subsidy, or reward) to some agents to repair their components; this is worked out below for the two-component series system of <cit.> with failure probabilities 0.6 and 0.9 and repair costs C_1=C_2=0.3. More generally, for repair costs C_1=C_2=ϵ<1, if the failure probability of c_2 is larger than 1-C_1, then in the posterior game with y_1=0, DN-DN is a sub-optimal equilibrium which can be worse than RE-RE by a factor of 1/ϵ, which is arbitrarily large as ϵ→0.
The above example illustrates how using a subsidy scheme, the central agent can potentially eliminate undesirable Nash equilibria by effectively modifying the repair costs in the component maintenance game.
In the component maintenance game, we will consider subsidy schemes that incentivize repair, i.e. subs_i(0,s_-i)=0 for all agents i (no subsidy awarded for the action “do nothing”). We also say that a subsidy scheme is uniform if the scheme is identical for all agents and actions, i.e. subs_i(1,s_-i)=c_subs for all agents i, for some fixed constant c_subs≥ 0. We consider two types of subsidies which a central agent, whose goal is to altruistically maximize social welfare, can offer to reduce the repair cost of certain components.
Conditional vs. unconditional subsidies. The central agent may offer an unconditional subsidy which effectively reduces the cost of repair for the components, or may be conditional on inspection in order to encourage agents to inspect their components, even when the information about the state of an agent's component results is something the agent might want to avoid (in the absence of subsidy).
Formally, in a component inspection game involving inspection of some component j, a general subsidy consists of three functions for each agent i given by subs_i, subs_i^1, subs_i^0, corresponding to prior, posterior with component j intact, and posterior with component j damaged respectively. For simplicity, we will say that the central agent provides subsidy conditional on inspected state of component j, y_j=k for k∈{0,1}, to denote that subs_i^k is the only non-zero function in the conditional scheme, and conditional on inspection to denote that subs_i is a zero function and subs_i^1=subs_i^0.
We will now illustrate how subsidy can be used to avoid negative Value of Information, again in a 2-series component inspection game.
Consider the two component series system from <cit.>. Suppose that the
components are independent and the failure probabilities for c_1 and c_2 are 0.6 and 0.9
respectively, and the repair cost is C_1 = C_2 = 0.3. Then the cost matrix for the game
(both prior, and posterior cost when component c_1 is inspected) is given in Table <ref>. Without any subsidy, for the highlighted Nash equilibria, when component c_1 is inspected and revealed to be broken (y_1=0), the value of information is negative (0.3-1.0=-0.7) for both agents.
If the central authority gives agent 1 an incentive of 0.2 + ϵ for any ϵ > 0, then it
would incentivize agent 1 to switch their action from DN to RE in the posterior game
y_1 = 0 (which will cost 1.0 - ϵ after the subsidy) and negative VoI can be avoided in
this example. The expected VoI is 0.3-(0.6*1.0+0.4*0.0)=-0.3 for agent 1 and 0.3-(0.6*1.0+0.4*0.3)=-0.42 for agent 2, and the same subsidy works to ensure the expected VoI is non-negative as well.
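The negative Value of Information in this example can be reproduced numerically with the worst_case_voi sketch given earlier (illustration only; the probabilities passed are the working probabilities, i.e. 0.4 and 0.1 here, and indices are 0-based).

```python
p, C = [0.4, 0.1], [0.3, 0.3]          # failure probabilities 0.6 and 0.9
print(worst_case_voi(0, 0, p, C, series2))   # -0.7: prior RE-RE vs posterior DN-DN (y_1 = 0)
print(worst_case_voi(1, 0, p, C, series2))   # -0.7 for agent 2 as well
```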
Cost-sharing game. In the cost-sharing game, the central agent can subsidize the cost of some actions in . That is, the subsidy can be specified as an allocation of the subsidy budget to actions, c^:→_≥ 0. The corresponding subsidy scheme is given by {subs_i}_i, where subs_i(s_1,…,s_N)=c^(s_i)/k, with k=∑_j=1^N[s_j=s_i]. This corresponds to the subsidy being equally enjoyed by all the agents that select a given subsidized action. Subsidy can again be used to reduce Price of Anarchy in this game <cit.>. We will here show an example where subsidy can be used to avoid negative Value of Information.
Consider a two-agent cost-sharing game where the action set is ={A,B,C,D} with associated subsets f(A)={1,2}, f(B)={2}, f(C)={2}, and f(D)={1}. For example, in a commuting game, A could correspond to a shared public transport, and B,C,D could correspond to private modes of transport. We assume the cost function c is a random function such that with probability 1/2, c(A)=5, c(B)=2, c(C)=6, and c(D)=4, and with probability 1/2, c(A)=5, c(B)=6, c(C)=2, and c(D)=4. In the commute example, for agent 2, B could be a bike and C could be a car, and w_i could be unknown world state that impacts the cost of actions B and C for agent 2. We call these posterior worlds w_1 and w_2 respectively (see Tables <ref> and <ref>). Thus the prior cost for agent 2 (i.e. probability weighted cost of the worlds w_1 and w_2) is 4 for actions B and C. A Nash equilibrium in the prior game is (A,A) with a cost of (2.5,2.5) for both agents. In world w_1, the only NE is (D,B) and in world w_2 the only NE is (D,C), both with cost (4,2). Thus the knowledge of the state of the world leads to negative VoI for agent 1. Specifically, the knowledge of the cheaper option among B and C causes agent 2 to drop out of the cost-shared option A, increasing social cost and cost for agent 1. In this example, using a subsidy of 3+ϵ for ϵ>0 for the option A guarantees that agent 2 will always prefer option A, and is sufficient to ensure that negative Value of Information
is avoided.
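The equilibria claimed in this example can again be checked by enumeration. The sketch below is illustrative only and reuses pure_nash_equilibria, which counts weak best responses as equilibria, so tie equilibria such as (D,B) and (D,C) also show up in the prior game; the subsidy of 3+ϵ on option A keeps agent 2 on the shared option once the world is revealed.

```python
worlds = [  # two equally likely cost functions for the actions
    {"A": 5, "B": 2, "C": 6, "D": 4},
    {"A": 5, "B": 6, "C": 2, "D": 4},
]
prior = {a: sum(w[a] for w in worlds) / len(worlds) for a in "ABCD"}
spaces = [["A", "D"], ["A", "B", "C"]]   # f(A)={1,2}, f(B)=f(C)={2}, f(D)={1}

def cost_fns_for(costs, subsidy_on_A=0.0):
    def make(i):
        def c(s):
            k = s.count(s[i])                                   # users sharing the option
            total = costs[s[i]] - (subsidy_on_A if s[i] == "A" else 0.0)
            return total / k
        return c
    return [make(0), make(1)]

print(pure_nash_equilibria(spaces, cost_fns_for(prior)))         # [('A','A'), ('D','B'), ('D','C')]
print(pure_nash_equilibria(spaces, cost_fns_for(worlds[0])))     # [('D','B')] in world w_1
print(pure_nash_equilibria(spaces, cost_fns_for(worlds[0], 3.01)))  # [('A','A')] with subsidy on A
```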
We also record a simple remark that PoA(𝕊)≥ 1 for any subsidy scheme 𝕊, since the subsidized cost is added back in the numerator for any state s in the definition of PoA(𝕊), and the dependence on the subsidy scheme 𝕊 is governed by the corresponding set of Nash equilibria E_NE(𝕊).
For any subsidy scheme 𝕊, 1≤PoA(𝕊) and PoA_NE(𝕊)≤PoA(𝕊).
Let 𝕊={subs_i} denote the subsidy scheme. By definition,
PoA(𝕊) = max_{s∈ E_NE(𝕊)} (cost^𝕊(s)+subs(s)) / OPT
= max_{s∈ E_NE(𝕊)} (∑_i (cost_i(s)-subs_i(s))+subs_i(s)) / OPT
= max_{s∈ E_NE(𝕊)} cost(s) / OPT ≥ min_{s∈ E_NE(𝕊)} cost(s) / OPT ≥ 1.
Also, PoA(𝕊)≥PoA_NE(𝕊) since min_{s∈ E_NE} cost(s)≥min_{s∈ S} cost(s)=OPT.
Addition of subsidy effectively changes the set of states that correspond to a Nash equilibrium. The goal of the central agent is to ensure that suboptimal (local) Nash equilibria are not included in the set E_NE(𝕊) when the subsidy is applied. The subsidy provided by the central agent could for example come from taxes collected from the agents, and therefore it makes sense to add the total subsidy provided by the central agent to the social cost in the definition of PoA(𝕊).
The central agent designing the subsidy scheme can have several different objectives for the scheme. In this work, we consider the following three objectives:
1. PoA(𝕊) is minimized. This means that the social cost in the worst-case Nash equilibrium after the subsidy is provided (with the total subsidy added) is not much worse than the social cost of the best joint action (or of the best Nash equilibrium, if we instead minimize PoA_NE(𝕊)) when no subsidy is offered. As a result, the harmful effects of selfish behavior and lack of coordination among the agents are minimized.
2. The value of information for each agent is non-negative when a single component is inspected. This corresponds to minimizing the tendency of agents to avoid seeking freely available information about the state of their component to avoid an increase in their personal cost, potentially at the expense of the social cost.
3. The system is guaranteed to work in any Nash equilibrium in the component maintenance game. This is desirable if the central agent is keen on guaranteeing system functionality and willing to provide the needed subsidy for it. This can differ from minimizing the price of anarchy as the optimal social cost could correspond to doing nothing and letting the system stay broken, for example when the repair costs are too large.
For a fixed game, the appropriate subsidy scheme could vary depending on the objective of the central agent. We will illustrate this for a two agent series game in Section <ref>, where we will obtain the optimal subsidy schemes for different objectives below. Similarly, optimal schemes can be derived for the two-agent parallel game, the interested reader is directed to Appendix <ref>. However, for general n-agent games, we show in Section <ref> that computing the optimal total subsidy is NP hard, under each of the various objectives.
Let 𝕊={subs_i} denote the subsidy scheme. Consider the component inspection game for inspection of component j. Let E_NE(𝕊), E_NE^0(𝕊), E_NE^1(𝕊) ⊆ S denote the subsets of states corresponding to Nash equilibria when the cost for agent i is cost_i-subs_i, for the prior and the posteriors y_j=0 and y_j=1 respectively. Let VoI_j(𝕊) = min_{i, s∈ E_NE(𝕊), s'∈ E_NE^0(𝕊)∪ E_NE^1(𝕊)} VoI_{i,j}(s,s') denote the least value of information for any agent i over equilibria under 𝕊. Then the Price of Information Avoidance is given by
PoIA(j) = [min_{𝕊: VoI_j(𝕊)≥0} max_{s∈ E_NE(𝕊)} cost(s)] / [min_{𝕊} max_{s∈ E_NE(𝕊)} cost(s)] = [min_{𝕊: VoI_j(𝕊)≥0} PoA(𝕊)] / [min_{𝕊} PoA(𝕊)].
Recall that Price of Anarchy captures the
effect of selfish behavior on social cost, relative to best centralized (co-ordinated) action under unselfish behavior, and is minimized (equals 1) when selfish behavior does not impact social cost. Similarly, Price of Information Avoidance corresponds to the
effect of information avoidance, relative to the best centralized action when agents do not avoid information for selfish reasons. Also, it equals 1 if there is never any negative value of information under any subsidizing policy. Otherwise it is at least 1, and captures the sacrifice in social cost to avoid negative VoI.
§.§ Optimal subsidy design in two-agent series component maintenance game
This section aims to provide a working intuition for what an optimal subsidy scheme looks like for simple two-agent component maintenance games for the different objectives of the central agent, including Price of Anarchy and Value of Information. The key insights are that subsidy can be greatly beneficial, but exactly optimal subsidy allocation is a complicated function of the game's cost matrix and objective of interest. In the two-agent series game, the goal is typically to provide just enough subsidy to ensure that both agents prefer to repair (and the series system functions as a result), but the smallest amount of subsidy needed depends on repair costs and component failure probabilities.
Suppose we have two agents N={1,2} with components c_1,c_2 connected in series. Let p_1,p_2 denote the (prior) probability that the components c_1,c_2 will work (respectively).
We will use the notation p̄ := 1-p for conciseness, so that, e.g., p̄_1=1-p_1 is the failure probability of component c_1; complements of products are written out explicitly, e.g. 1-p_1p_2.
This corresponds to the prior game in Table <ref>, where DN denotes “do nothing” (s_i=0) and RE denotes “repair” (s_i=1).
We will now consider the above three objectives. For each objective, we will note that subsidy helps and we will obtain the optimal subsidy in the two-agent series game.
§.§.§ Minimizing the price of anarchy with subsidy.
To demonstrate the significance of subsidy for reducing the social cost in the two-agent series games, we will first show a lower bound on the Price of Anarchy in the absence of subsidy. The following proposition indicates that the Price of Anarchy can be very high when the probabilities of the components functioning (p_1,p_2) are small. Moreover, for n agents connected in series, the price of anarchy can increase exponentially with number of agents n.
In the two-agent series prior game (defined above and cost matrix noted in the first row of Table <ref>), the Price of Anarchy in the absence of subsidy satisfies PoA ≥ 2/(p_1+p_2)
for some repair costs C_1,C_2. More generally, for n agents, PoA ≥ H̃/G̃^n for some repair costs C_1,…,C_n, where H̃ and G̃ are the harmonic and geometric means, respectively, of the prior probabilities p_1,…,p_n.
We set C_1=p̄_1p_2 and C_2=p̄_2p_1. Observe that DN-DN is an equilibrium since 1-p_1p_2=p̄_1+p̄_2p_1≤p̄_1+C_2, and similarly 1-p_1p_2≤p̄_2+C_1. Also, RE-RE is an equilibrium since C_1=p̄_1p_2≤p̄_1 and C_2=p̄_2p_1≤p̄_2. Clearly,
PoA ≥ cost(DN,DN)/cost(RE,RE) = 2(1-p_1p_2)/(C_1+C_2) ≥ 2/(p_1+p_2),
where the last inequality follows from the observations
(1-p_1p_2)(p_1+p_2) = p_1+p_2-(p_1+p_2)p_1p_2 ≥ p_1+p_2-2p_1p_2 = C_1+C_2.
For the n-agent series game, we set the costs C_i=p̄_i Π_{j≠ i} p_j. DN^n (i.e., s=0^n) is a Nash equilibrium, since
C_i + (1-Π_{j≠ i}p_j) = (1-p_i)Π_{j≠ i}p_j + 1 - Π_{j≠ i}p_j ≥ 1-Π_j p_j.
Moreover, RE^n is also an equilibrium as C_i≤p̄_i for all i∈[n]. Therefore,
PoA ≥ cost(DN^n)/cost(RE^n) = n(1-Π_i p_i)/∑_i C_i
= n(1-Π_i p_i)/(∑_i Π_{j≠ i}p_j - nΠ_i p_i)
≥ n(1-Π_i p_i)/(∑_i Π_{j≠ i}p_j - (∑_i Π_{j≠ i}p_j)Π_i p_i)
= n/∑_i Π_{j≠ i}p_j
= n/(Π_i p_i · ∑_i 1/p_i),
and the claim follows by noting H̃ = n/∑_i (1/p_i) and G̃ = (Π_i p_i)^{1/n}.
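The construction in this proof is easy to check numerically for small n. The snippet below uses illustrative parameter values of our own choosing and reuses the earlier helpers; it verifies that both DN^n and RE^n are equilibria for the costs C_i = p̄_i Π_{j≠i} p_j and that the resulting cost ratio is at least H̃/G̃^n.

```python
import math

n = 4
p = [0.3, 0.5, 0.2, 0.4]
C = [(1 - p[i]) * math.prod(p[j] for j in range(n) if j != i) for i in range(n)]
series = lambda x: int(all(x))
actions = [[0, 1]] * n
cost_fns = [(lambda s, k=k: cmg_expected_costs(s, p, C, series)[k]) for k in range(n)]

ne = pure_nash_equilibria(actions, cost_fns)
assert (0,) * n in ne and (1,) * n in ne        # DN^n and RE^n are both equilibria

ratio = social_cost((0,) * n, cost_fns) / social_cost((1,) * n, cost_fns)
hm = n / sum(1 / q for q in p)                   # harmonic mean of the p_i
gm = math.prod(p) ** (1 / n)                     # geometric mean of the p_i
print(ratio, hm / gm ** n)                       # the ratio dominates H/G^n
```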
By Proposition <ref>, if the probabilities of the components working are p_1=p_2=ϵ≪ 1, then the Price of Anarchy can be as large as 1/ϵ (or as large as ϵ^{-(n-1)} for general n). We remark that our lower bounds also apply to the ratio PoA/PoS, i.e. when we compare the worst-case Nash equilibrium with the global Nash equilibrium instead of OPT. We will now show that a constant price of anarchy can be achieved in this game using subsidy, more precisely, that PoA_NE(𝕊)=1 for some subsidy scheme 𝕊. In fact, we characterize the total subsidy needed to guarantee this for any game parameters. The proof involves carefully considering cases for the parameters resulting in different sets of Nash equilibria and computing necessary and sufficient amounts of subsidy in each case (see Appendix <ref>).
Consider the two-agent series component maintenance game with C_1,C_2>0 and 0<p_1,p_2<1. Let s^* = 𝟙{(C_1,C_2)∈ [p̄_1p_2, p̄_1]× [p̄_2p_1, p̄_2]}·min{C_1-p̄_1p_2, C_2-p̄_2p_1}, where 𝟙{·} denotes the 0-1 valued indicator function. Then for any s>s^* there exists a subsidy scheme 𝕊 with total subsidy s such that PoA_NE(𝕊)=1. Moreover, a total subsidy of at least s^* is necessary for any subsidy scheme that guarantees PoA_NE(𝕊)=1.
We will characterize the set of values of C_1,C_2 for which there are multiple Nash equilibria and design subsidy schemes that achieve PoA_NE(𝕊)=1. We consider the following cases.
Case 0: C_1<p̄_1p_2, C_2>p̄_2. In this case the only NE is RE-DN (Table <ref>). Thus, PoA_NE = 1 even in the absence of subsidy and s^*=0 in this case.
Case 1: C_1=p̄_1p_2, C_2>p̄_2. Both RE-DN and DN-DN are Nash equilibria, and
cost(RE,DN) = C_1+2p̄_2 = p̄_1p_2+2p̄_2 = 2-p_1p_2-p_2 < 2(1-p_1p_2) = cost(DN,DN).
An arbitrarily small subsidy to agent 1 is sufficient to guarantee PoA_NE(𝕊)=1 (therefore s^*=0 works) as DN-DN would no longer be a NE.
Case 2: C_1>p̄_1p_2, C_2>p̄_2. In this case the only NE is DN-DN. Thus, PoA_NE = 1 even in the absence of subsidy.
Case 3: C_1<p̄_1p_2, C_2=p̄_2. Both RE-DN and RE-RE are Nash equilibria, and
cost(RE,DN) = C_1+2p̄_2 > C_1+p̄_2 = C_1+C_2 = cost(RE,RE).
An arbitrarily small subsidy to agent 2 is sufficient to guarantee PoA_NE(𝕊)=1 (therefore s^*=0 works) as RE-DN would no longer be a NE.
Case 4: C_1<p̄_1p_2, C_2<p̄_2. In this case the only NE is RE-RE. Thus, PoA_NE = 1 even in the absence of subsidy.
Case 5: (C_1,C_2)∈ [p̄_1p_2, p̄_1]× [p̄_2p_1, p̄_2]. Both RE-RE and DN-DN are Nash equilibria, and OPT corresponds to RE-RE. A subsidy greater than C_1-p̄_1p_2 to agent 1, or a subsidy greater than C_2-p̄_2p_1 to agent 2, guarantees that the only NE is RE-RE. Further, in either case PoA_NE(𝕊)=1 as the subsidy equals the reduction in the repair cost of the respective agent.
Further suppose a subsidy of s^*=s^*_1+s^*_2 is sufficient to ensure PoA_NE(𝕊)=1 in this case, with subsidy s_1^*≤ C_1-p̄_1p_2 to agent 1 and subsidy s_2^*≤ C_2-p̄_2p_1 to agent 2. Then both DN-DN and RE-RE are NE and PoA_NE(𝕊)>1, since the worst-case equilibrium (i.e. DN-DN) cost does not depend on the subsidy. Therefore either s_1^*> C_1-p̄_1p_2 or s_2^*> C_2-p̄_2p_1, establishing that a total subsidy of at least s^* is necessary in this case to ensure PoA_NE(𝕊)=1.
Case 6: Otherwise. By symmetry, the case is similar to one of C0 through C4 with agents 1 and 2 switched. s^*=0, and a price of anarchy of 1 is achieved by no or arbitrarily small subsidy as above.
Note that s^* is non-zero only in case C5, in which case we have established that a total subsidy greater than s^* is sufficient, and a total subsidy of at least s^* is necessary, to ensure PoA_NE(𝕊)=1.
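Under the notation p̄_i=1-p_i used above, the threshold s^* from the theorem is a one-line computation. The sketch below is our own illustration (reusing the earlier helpers and the reconstruction of the thresholds given in the theorem); for the instance of the earlier example (p_1=p_2=0.5, C_1=C_2=0.3) it returns s^*=0.05, and subsidizing one agent by slightly more than s^* yields a price of anarchy of 1.

```python
def series2_optimal_subsidy(p1, p2, C1, C2):
    """s^* from the theorem: positive only when DN-DN and RE-RE coexist."""
    lo1, hi1 = (1 - p1) * p2, 1 - p1
    lo2, hi2 = (1 - p2) * p1, 1 - p2
    if lo1 <= C1 <= hi1 and lo2 <= C2 <= hi2:
        return min(C1 - lo1, C2 - lo2)
    return 0.0

p1, p2, C1, C2 = 0.5, 0.5, 0.3, 0.3
s_star = series2_optimal_subsidy(p1, p2, C1, C2)           # 0.05 for this instance

subs = [lambda s: (s_star + 1e-3) * s[0], lambda s: 0.0]   # subsidise agent 1's repair
cost_fns = [(lambda s, k=k: cmg_expected_costs(s, [p1, p2], [C1, C2], series2)[k])
            for k in range(2)]
print(poa_under_subsidy([[0, 1], [0, 1]], cost_fns, subs))  # approximately 1.0
```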
§.§.§ Guaranteeing that the system functions in any NE
As seen in Example <ref>, the agents may be in an equilibrium (e.g. DN-DN) such that the system is not guaranteed to function. This problem can also be remedied using subsidy.
We will quantify the optimal subsidy needed to guarantee that the system functions, i.e. ϕ(x')=1, where x' denotes the component states after the agents' actions.
Consider the two-agent series component maintenance game with C_1,C_2>0 and 0<p_1,p_2<1. Define R⊂ℝ^+×ℝ^+ as the set of cost pairs that satisfy C_1≤p̄_1 ∧ C_2≤p̄_2 ∧ (C_1≤p̄_1p_2 ∨ C_2≤p̄_2p_1), depicted in Figure <ref>. Let s^*=min_{(x,y)∈ R}||(C_1,C_2)-(x,y)||_1, where ||·||_1 denotes the L_1-norm. Then for any s>s^* there exists a subsidy scheme with total subsidy s such that the system functions in any NE. Moreover, a total subsidy of at least s^* is necessary for any subsidy scheme that guarantees that the system functions in any NE.
(Sufficiency of s^*). We do this by cases on the cost vector (C_1,C_2), as follows.
Case 0: (C_1,C_2) lies in the interior of R. In this case it is easy to see that PoA=1 and s^*=0. In particular, the conditions C_1<p̄_1 and C_2<p̄_2 rule out DN-RE and RE-DN as candidate equilibria respectively, and since C_1<p̄_1p_2 or C_2<p̄_2p_1, DN-DN cannot be a NE either (agent 1 or agent 2 will prefer to repair).
Case 1: C_1≤p̄_1p_2, C_2≥p̄_2. In this case, a subsidy of more than s^*=C_2-p̄_2 to agent 2 is sufficient to bring the cost vector to the interior of R.
Case 2: C_2≤p̄_2p_1, C_1≥p̄_1. Symmetric to Case 1.
Case 3: p̄_1p_2<C_1≤p̄_1, p̄_2p_1<C_2≤p̄_2. A subsidy of more than min_i{C_i-p̄_i p_{3-i}} to agent argmin_i{C_i-p̄_i p_{3-i}} is sufficient to bring the cost vector to the interior of R.
Case 4: Otherwise. It is straightforward to verify by direct calculation that a subsidy of s^*_1=max{C_1-p̄_1,0}+p̄_1·p̄_2 to agent 1 and a subsidy of s^*_2=max{C_2-p̄_2,0}+p̄_1·p̄_2 to agent 2 is sufficient.
The necessity argument essentially follows by updating the cost matrix with conditional subsidies (s_1,s_2) and noting that the system is guaranteed to function in any NE if the costs (C_1,C_2) are in the interior of R.
The proof is deferred to Appendix <ref>. In contrast to Theorem <ref>, here the central agent may need to provide subsidy even in regimes where the price of anarchy is 1 without subsidy but the system is not guaranteed to function, as agents may choose to do nothing.
§.§.§ Ensuring value of information is non-negative for each agent.
<cit.> exhibit several examples, including 2-agent series games, where the value of information for certain agents can be negative, when actions in the prior and posterior games are selected according to some Nash equilibria. We will demonstrate the use of subsidy in tackling this undesirable information avoidance behavior.
In addition to the prior game, we will now consider posterior games where component c_1 is inspected, and its state y_1 is revealed on inspection (y_1=1 corresponds to c_1 is functioning, and y_1=0 corresponds to c_1 is broken). Table <ref> summarizes prior and posterior costs for the two agents for each action pair (recall that DN denotes “do nothing” or s_i=0, and RE denotes “repair” or s_i=1). <cit.> show that the expected value of information (VoI) is non-negative for all agents if
a global NE is selected.
In the two-agent series game described above, if component c_j,j∈{1,2} is inspected, and the prior and posterior actions s,s̃ are selected from global equilibria, then the expected value of information VoI_i,j(s,s̃) is non-negative for each agent i∈{1,2}.
Therefore, if we avoid suboptimal local equilibria in this setting then the expected VoI is guaranteed to be non-negative. Combined with Theorem <ref> above, this implies that negative Value of Information may be avoided via subsidizing repair costs in the two-agent series game. In more detail, we can employ a conditional subsidy scheme, with the prior subsidy subs_i set according to the scheme in Theorem <ref>, and, when component c_1 is inspected and reveals state y, the posterior subsidy subs_i^y set according to the subsidy scheme from Theorem <ref> with the parameter p_1 set to y. Theorem <ref> in Appendix <ref> characterizes the optimal unconditional subsidy for ensuring the Value of Information is non-negative for each agent.
In addition to expected VoI, we can also ensure posterior conditioned VoI_i,j when component c_j is inspected is non-negative for each agent i and each posterior y_j via subsidy. Compared to expected VoI, this is a more worst-case perspective as it includes the case when the component is broken which is typically when the agents are more likely to avoid the information about the component state. The following result gives the optimal value of subsidy to ensure this.
Consider the two-agent series component inspection game with C_1,C_2>0 and 0<p_1,p_2<1. Then
(a) VoI_i,j(s_i,s^j,1)≥0 for any agents i,j, any prior NE s_i and any posterior NE s^j,1, when the inspected component j is working, except when C_3-j=p_Rp_3-j and an arbitrarily small subsidy is sufficient to ensure VoI is non-negative in this case.
(b)
WLOG j=1.
(a)
If y_1=1, the only candidate posterior equilibria are DN-DN if C_2≥ p_Rp_2 and DN-RE if C_2≤ p_Rp_2.
If C_2> p_Rp_2, then the prior NE is either DN-DN or RE-DN. In either case both agents have a non-negative Value of Information (Table <ref>).
If C_2< p_Rp_2, then the prior NE is either DN-DN, RE-RE or DN-RE. In each case both agents can be readily verified to have a non-negative Value of Information. If C_2=p_Rp_2 and C_1≤ p_Rp_2, then VoI can be negative for agent 1 if the prior equilibrium is RE-RE and the posterior equilibrium is DN-DN. But in this case a (unconditional, or conditional on inspection) subsidy of s^*_2>0 to agent 2 ensures that DN-DN is not a posterior equilibrium and both agents have non-negative VoI.
(b) For y_1=0, we consider the following cases.
Case 0: C_1≤ p_Rp_1p_2,C_2>p_Rp_2. If C_1<p_Rp_1p_2, the only prior NE is RE-DN and the only posterior NE is also RE-DN. Thus, VoI =0 for each agent even in the absense of subsidy and s^*=0 in this case. If C_1=p_Rp_1p_2, DN-DN is also a prior NE but VoI is still non-negative for each agent, since the poterior costs
cost_1(RE,DN)=C_1+p_Rp_2≤ p_Rp_1p_2+p_Rp_2=p_Rp_1p_2,
and
cost_2(RE,DN)=p_Rp_2<p_Rp_1p_2.
Case 1: p_Rp_1p_2< C_1<p_Rp_2,C_2>p_Rp_2. DN-DN is the only prior NE and the only posterior equilibrium is RE-DN. and
cost_1(RE,DN)=C_1+p_Rp_2> p_Rp_1p_2.
A subsidy of C_1-p_Rp_1p_2 to agent 1 is sufficient to ensure VoI is non-negative for agent 1 as noted in case C0. Alternatively, a subsidy of C_2-p_Rp_1p_2 to agent 2 ensures that RE-RE is the only prior and posterior NE, and VoI is non-negative for each agent. The smaller of the two subsidies works and is necessary in this case.
If we require the subsidy to be conditional on inspection, DN-DN remains the only prior NE as prior costs are not updated by the subsidy. A subsidy of C_1-p_Rp_1p_2 to agent 1 works as above. In the alternative case, if we provide a smaller subsidy of C_2-p_Rp_2 to agent 2, the posterior NE is switched to RE-RE but the prior remains DN-DN and a
subsidy of C_1-p_Rp_1p_2 to agent 1 is still required to ensure non-negative VoI for agent 1.
§ COMPUTATIONAL HARDNESS OF SUBSIDY DESIGN
We will now consider the problem of computing the best subsidy scheme in general component maintenance and cost-sharing games.
We will prove computational hardness results in this section and assume familiarity with fundamental concepts in complexity theory <cit.>. We will do this by reducing the Vertex-Cover problem (one of Karp's original NP-complete problems <cit.>) to the decision-problem version of computing the best subsidy for the component maintenance game, and by reducing from the Min-Set-Cover problem for the cost-sharing game.
Recall that Vertex-Cover is the following decision problem, specified by a graph 𝒢=(V,E) and an integer k.
Vertex-Cover: Does a given (undirected, unweighted) graph 𝒢=(V,E) admit a vertex cover[A set of vertices such that all edges of the graph have at least one endpoint in the set.] of size k?
We also consider the decision version of the minimum set cover problem (U,𝒮,k) stated below, which will be useful in our hardness results for the cost-sharing game.
Min-Set-Cover: Given a finite set U of size n and a collection 𝒮⊆2^U of subsets of U, does there exist a sub-collection S⊆𝒮 of size k<n that covers U, i.e. ∪_{S_i∈ S}S_i=U?
In the next three subsections we will show computational hardness results for designing the optimal subsidy scheme for several different objectives of interest to the central agent, including the Price of Anarchy under subsidy, guaranteeing the Value of Information is non-negative for all agents, and (in the component maintenance game) ensuring the system functions in any Nash equilibrium.
§.§ Price of Anarchy under Subsidy
Component maintanence game. Finding the best subsidy in a given component maintanence game (CMG) to minimize the price of anarchy is an optimization problem. Here we will study the hardness of a corresponding decision problem stated below.
CMG-PoAS: Given a CMG G and a subsidy budget n^*,
does there exist a subsidy scheme 𝕊 with non-zero subsidy provided to at most n^* agents such that PoA(𝕊)=1?
Note that here we define the best subsidy scheme to be the one that provides subsidy to fewest agents (as opposed to providing the smallest total subsidy paid out by the central agent, which we also study subsequently). This meaningfully models situations when subsidy consists of allocation of indivisible resources, for example the central agent providing commitment to intervene and assist but with a bandwidth constraint on the number of agents that could receive the assistance. The question whether the same hardness result can be established when the subsidy budget is continuous is left open in this case (we do consider real-valued subsidy budgets for other objectives, see Sections <ref>, <ref>). As mentioned above, we will do a Karp reduction from the Vertex-Cover problem.
The key idea is to construct a component maintenance game G given any graph 𝒢, with agents corresponding to graph nodes, and the system state function ϕ corresponding to the graph edges. We show that the Nash equilibria of G roughly correspond to minimal vertex covers of 𝒢. To ensure PoA(𝕊)=1, we provide subsidy to the agents in a smallest vertex cover of 𝒢.
CMG-PoAS is NP-Hard.
We will reduce the Vertex-Cover problem to CMG-PoAS. Given an instance (𝒢,k) of the Vertex-Cover problem, we create a corresponding CMG-PoAS problem as follows. Introduce an agent i for every vertex i∈ V and consider the (2-CNF) formula ϕ(x)=⋀_{(i,j)∈ E}(x_i ∨ x_j), where the clauses consist of component states x_i,x_j for all pairs i,j of agents corresponding to edges in E. Set the probability distribution θ to be the constant distribution with the entire probability mass on 0^n (i.e. all the components are guaranteed to fail without repair). Set repair cost C_i=1 for all components i. Then the cost function for agent i for joint action s=(s_i,s_-i) is given by
l_i(s_i,s_-i,θ) = E_{x∼θ}[cost_i] = C_i s_i + P_ϕ(θ)
= 1· s_i + 1 - ϕ(x')
= s_i + 1 - ϕ(s),
where x_i'=max{0,s_i}=s_i denotes the component state after agent i takes action s_i.
We first proceed to characterize the set of Nash equilibria of this game. Note that s=0^n is a NE for this game, since any repair action by any agent increases the agent's cost by 1 if the repair does not change the state ϕ of the system, and by 0 otherwise. This is because ϕ is monotonic, ϕ(s) can only change from 0 to 1 on repair. For the same reason, no agent has any incentive to switch from DN to RE. We will now show that the remaining NEs for the game correspond to minimal vertex covers of .
Suppose K⊆[n] is a set of agents for which the corresponding nodes in constitute a minimal vertex cover. Let s_K'=(s_1,…,s_n) where s_i=[i∈ K'], and [·] is the 0-1 valued indicator function, denote the joint action where agents in set K'⊆[n] choose repair. Clearly, ϕ(s_K)=1. If i∈ K, agent i does not reduce cost by switching from RE to DN since K is a minimal cover therefore not repairing component i causes the system to fail. As noted above, switching from DN to RE never improves an agent's cost in this game.
Further, if K is the set of agents corresponding to a non-minimal vertex cover, then there must be some agent that can reduce its cost by switching from RE to DN (and system continues to function).
Finally, if K' ∅ is the set of agents with one or more agents short of a vertex cover, then any agent in K' can reduce its cost by switching from RE to DN. This establishes that besides 0^n, only possible NE must correspond to a minimal vertex cover. In particular, this implies that =k^*, where k^* is the size of the smallest vertex cover K^* of , and the corresponding NE is s_K^*.
To complete the reduction, we consider the game defined above with subsidy budget n^*=k. We will show a bijection between the YES and NO instances of the two decision problems to complete the proof.
If there exists a vertex cover of size k, then the smallest vertex cover K^* has size k^*≤ k. We design a subsidy scheme with subsidy allocated to k^*≤ n^* agents, allocating subsidy of 1+1/2n for repair (the total subsidy is no more than k^*+1/2) to all agents in the minimum cover K^*, a and subsidy of 0 otherwise.
As argued above, the only candidate NE without subsidy are 0^n and s_K corresponding to some minimal vertex cover K. The social cost for 0^n is n (except the trivial case k^*=0) and that for s_K^* is k^* which is smaller. If we provide subsidy in our scheme to the agents in K^* then 0^n is no longer an NE. In particular, every subsidized agent in K^* would now always choose repair at subsidized cost -1/2n over doing nothing (even when the system stays broken after the repair). Thus, the social cost plus subsidy is k^*(-1/2n)+k^*(1+1/2n)=k^*, and the price of anarchy for the subsidy scheme is 1 (any agent outside of K^* will prefer to do nothing to reduce their cost).
On the other hand, suppose that has no vertex cover of size k. Any minimal vertex cover of therefore has size at least k+1. Suppose is a subsidy scheme with subsidy allocated to most k agents. We will show that PoA()>1. Indeed, let K be an arbitrary minimal vertex cover of . By pigeonhole principle, at least one agent in K does not receive subsidy. Let K' denote the (possibly empty) set of agents that receive subsidy greater than 1. As argued above, these agents will always prefer the repair action. Thus, s_K' is a Nash equilibrium (agents without subsidy never have incentive to switch from DN to RE in this game) in the subsidized game. Note that ϕ(s_K')=0 since at least one vertex is missed in any minimal vertex cover K, and therefore we must have an uncovered edge.
Now ≤cost(s_K)=|K|<n. Therefore, PoA()> 1 as total cost plus subsidy is at least n for s_K'.
We make a couple of remarks about the above result. Our proof implies a similar hardness result if the decision problem is stated for PoA() instead of PoA().
We further note that our reduction from Vertex-Cover does not involve any negative literals in the boolean system function ϕ, i.e. applies to monotone boolean functions where if a component is repaired from broken to a working state, then it cannot cause the overall system to go from working to broken. In fact, for monotone ϕ we can show an even stronger hardness result using parameterized complexity theory. Namely CMG-PoAS is W[2]-hard, by reduction from the Dominating-Set problem, i.e. deciding whether given a graph and integer k, does there exist a subset X of vertices of size k such that each vertex is either in the subset X or has an edge connecting it to a vertex in X. The Dominating-Set problem is known to be W[2]-complete (<cit.>, Lemma 23.3.1). This implies that it is unlikely that the problem is even fixed-parameter tractable (FPT), i.e. it is plausible that there is no computationally efficient algorithm for CMG-PoAS even for a fixed small subsidy budget. Further details and formal proofs are located in Appendix <ref>.
Cost-sharing game. We will now give a similar result for cost-sharing games. While it is known that computing OPT is NP-Hard in cost-sharing games <cit.> and approximation algorithms are known for subsidy design <cit.>, we further show that designing a subsidy scheme to guarantee that PoA=1 is also NP-hard.
CSG-PoAS: Given a cost-sharing game G and a subsidy budget n^*,
does there exist subsidy scheme with non-zero subsidy provided to n^* actions such that PoA()=1?
The key idea is to construct a cost-sharing game G given an instance of the minimum set cover problem, with agents corresponding to set elements, and the actions corresponding to subsets available for covering plus some additional actions uniquely available to each agent. When there exists a set cover of size k, assigning subsidies to actions in the cover guarantees that the only possible NEs correspond to assigning actions from the set cover and the subsidized cost plus the subsidy equals cost of OPT. Conversely, we use the additional actions to show that the Price of Anarchy is greater than 1 when a subsidy smaller than the size of the minimum set cover is available.
CSG-PoAS is NP-Hard.
Consider the cost-sharing game G with n agents that correspond to elements of via a bijection ζ:→[n], set of actions =⊎⊎ with ={{1},…,{n}} and ={{1,2,…,n}} being two distinct collections of actions available uniquely to each agent and all agents simultaneously respectively, function f:S↦{ζ(s)| s∈ S} which assigns action S to agents corresponding its elements, and cost function c given by
c(S)=
1 if S∈⊎,
n-ϵ if S∈.
for some ϵ∈(0,1).
We set n^*=k.
Given a YES instance of Min-Set-Cover, we show that the above contruction yields a YES instance of CSG-PoAS. Indeed, let k^* denote the size of the smallest set cover of (,). In the YES instance this means k^*≤ k<n, and we provide subsidy of value 1 to all actions corresponding the sets in the smallest set cover. Now any assignment of the actions to agents consistent with the set cover (i.e. each agent is assigned an action corresponding to one of the sets in the cover that include the agent) is a Nash Equilibrium with social cost 0. This is because each agent has (subsidized) cost 0 and therefore no incentive to switch actions. Moreover, any other state is not an NE as at least one agent will have non-zero cost and would switch to an action in the cover. Therefore, social cost plus subsidy is k^* for the only NE in this case. Next we argue that the social cost OPT is k^* in this case, which would imply PoA()=1 for the above subsidy scheme. If possible, let there be a state s=(a_1,…,a_n) with social cost less than k^*. Let s_={i| a_i∈} and s_={i| a_i∈}. Now the smallest set cover of ∖ s_∖ s_ has size at least k^*-|s_|-|s_| since each set in the optimal cover must cover at least one element. We consider cases based on |s_|. If s_=0, cost(s)≥ (k^*-|s_|) +|s_|= k^*, a contradiction. If s_=n, then cost(s)=n-ϵ>k^*, again contradicting the assumption that cost of s is less than k^*. Else 1≤|s_|≤ n-1. But then, cost(s)≥ (k^*-|s_|-|s_|) +|s_|+n-ϵ≥ k^*+1-ϵ>k^*. Thus, cost(OPT)=k^* and we have PoA()=1 in this case.
Conversely, consider a NO instance of Min-Set-Cover. The smallest set cover of (,) has size k^*>k. Consider any subsidy scheme assigning subsidy of value 1 to at most k actions. Clearly, all the agents that have at least one of their actions subsidized will select the subsidized action in any NE. Since the smallest set cover has size greater than k, there exists at least one agent with no subsidized action. Let A⊂[n] denote the set of these agents. We will show the existence of two Nash equilibria with different social costs, implying that PoA()>1 in this case. Consider states s_ and s_ for which agents in A are assigned the corresponding actions from and respectively, and agents in [n]∖ A are assigned one of the subsidized actions in either case. The social cost plus subsidy cost(s_)+k=|A|+k for s_ and cost(s_)+k=n-ϵ+k for s_. Thus, PoA()>1.
§.§ Value of Information
We also show NP-hardness of the problem of determining the minimal subsidy needed to avoid negative Value of Information in the component inspection game. Formally, the decision problem is stated below.
CIG-VoI: Given a component inspection game (CIG) and subsidy budget s^*, does some subsidy scheme with total subsidy at most s^* guarantee that no agent has negative value of information when a single component j is inspected (i.e. for fixed inspected component j, VoI_i,j(s,s̃^j,y_j)≥0 for any s∈_NE(𝕊), s̃^j,y_j∈_NE^y_j(𝕊), for each agent i∈ [n], and each posterior component state y_j∈{0,1})?
Our proof (Appendix <ref>) again involves a reduction from the Vertex-Cover problem, but we use a different subsidy budget and examine the non-negativity of VoI when potentially different equilibria are selected in the prior and posterior games.
CIG-VoI is NP-Hard.
We also establish a similar hardness result for designing the optimal subsidy allocation in cost-sharing games to ensure that the value of information is non-negative for all agents.
CSG-VoI: Given a cost sharing game (CSG) and subsidy budget s^*, does some subsidy scheme with total subsidy at most s^* guarantee that no agent has negative value of information when a single action j is inspected (i.e. for fixed inspected action j, VoI_i,j(s,s̃^j,c_j)≥0 for any s∈_NE(𝕊), s̃^j,c_j∈_NE^c_j(𝕊), for each agent i∈ [n], and any posterior action cost c_j)?
We have the following hardness result, again using a reduction from the min set cover problem.
CSG-VoI is NP-Hard.
§.§ Guaranteeing that the system functions in any NE
Here we consider a more challenging optimization objective applicable only to the component maintenance game. We seek the optimal subsidy scheme with the least total value across all agents that receive the subsidy, and the goal of the central agent is to disburse sufficient subsidy to guarantee that the system functions in any Nash equilibrium. The decision problem in this case is stated as follows.
CMG-System: Given a component maintenance game (CMG) and subsidy budget s^*, does some subsidy scheme with total subsidy at most s^* guarantee that the system functions in any NE (i.e. ϕ(s)=1 for any s∈_NE(𝕊))?
The reduction is similar to the proof of Theorem <ref>. We give a Karp mapping from any vertex cover instance ,k to a CMG-System instance and use a slightly different game instance for the reduction. We show that the system is guaranteed to function in any NE iff the subsidy budget is one less than the size of a smallest vertex cover. See Appendix <ref> for a proof of the following result.
CMG-System is NP-Hard.
We will reduce the Vertex-Cover problem to CMG-System. Given an instance ,k of the Vertex Cover problem, we create a corresponding CMG-System problem as follows. Introduce an agent i for every vertex i∈ V and consider the (2-CNF) formula ϕ()=⋀_(i,j)∈ E(x_i x_j), where the clauses consist of states x_i,x_j for all pairs i,j of agents/components corresponding to edges in E. Set the probability distribution θ to be the constant distribution with the entire probability mass on 0^n (i.e. all the components are guaranteed to fail without repair). Set repair cost C_i=1-ϵ for 0<ϵ<1/n for all components i.
Observe that,
l_i(s_i,s_-i,θ) =_∼θ[cost_i]
=C_is_i+P_ϕ(θ)
=(1-ϵ)s_i+1-ϕ(')
=(1-ϵ)s_i+1-ϕ(s),
where x_i'=max{0,s_i}=s_i denotes the state of component i after agent i takes action s_i. Note that WLOG s=0^n is a NE for this game, since any repair action by any agent increases the agent's cost by (1-ϵ) if the repair does not change the state ϕ of the system, and 0 otherwise (since ϕ is monotonic, it can only change from 0 to 1). We will now show that the remaining NEs for the game correspond to vertex covers of .
Suppose K⊆[n] be a set of agents for which the corresponding nodes in constitute a minimal vertex cover. Let s_K'=(s_1,…,s_n) where s_i=[i∈ K'], where [·] is the 0-1 valued indicator function and K'⊂ K with |K'|=|K|-1. Similarly, s_K:=(s_1,…,s_n) where s_i=[i∈ K]. Clearly, ϕ(s_K')=0 and ϕ(s_K)=1. If i∈ K, agent i does not reduce cost by switching from RE to DN since K is a minimal cover therefore not repairing component i causes the system to fail. If j∉ K, agent j does not reduce cost by switching from DN to RE as the system was already functioning. Further, if K is the set of agents corresponding to a non-minimal vertex cover, then there must be some agent that can reduce its cost by switching from RE to DN.
Finally, if K' ∅ is the set of agents with one or more agents short of a vertex cover, then any agent in K' can reduce its cost by switching from RE to DN. This establishes that besides 0^n, only possible NE must correspond to a minimal vertex cover.
To complete the reduction, we consider the game defined above and subsidy budget s^*=k-1. If there exists a vertex cover of size k, then a minimal cover K' has size k'≤ k. We design a subsidy scheme with total subsidy s'=k'-1≤ s^*, allocating subsidy of 1 for repair to all but one agent in the minimal cover K' and subsidy of 0 otherwise. Clearly the only agent not given subsidy will choose repair since cost of repair 1-ϵ is more than compensated by the change due to system state. As argued above, the only candidate NE are 0^n and s_K corresponding to some minimal vertex cover K. Without the subsidy, the social cost for 0^n is n (except the trivial case k=0) and that for s_K' is k'(1-ϵ) which is smaller. If we provide subsidy in our scheme to the agents in K' except one then 0^n is no longer an NE. In particular, every subsidized agent in K' would now choose repair at cost 1-ϵ over doing nothing (even when the system stays broken after the repair) and the remaining agent in K' will choose repair if the system is broken. Thus, the system functions in all NEs.
Conversely, suppose there exists a subsidy scheme with total subsidy at most s^*=k-1, such that the system functions in any NE. Then either the system functions in 0^n and there is a 0-cover for graph , or the NE corresponds to a minimal vertex-cover K' of size k' as the repaired components (in the subsidized game). In the latter case, we seek to show k'≤ k to complete the proof. Since the system is not assumed to function for s=s_κ for repair actions by agents in any κ⊂ K' with κ=k'-1, we
need to provide subsidy at least 1-ϵ to all but one agent in K'. That is, k-1=s^*≥ (1-ϵ)(k'-1)>k'-1-(k'-1)/n (since ϵ<1/n), or k≥ k' since both k,k' are integers.
For the more general game where ϕ need not be monotone, a very similar argument as above can be used to show an even stronger hardness result. Namely CMG-System (with general boolean functions) is W[2]-hard, by reduction from Weighted CNF-SAT, i.e. deciding whether a CNF formula has a satisfying assignment with k variables assigned 1 (<cit.>, Lemma 23.3.1). This implies that it is unlikely that the problem is even fixed-parameter tractable (FPT), i.e. it is plausible that there is no computationally efficient algorithm for CMG-System even for a fixed small subsidy budget.
We will reduce Vertex-Cover to CIG-VoI. Recall that Vertex-Cover is the following decision problem—given a graph =(V,E) and integer k, does there exist a vertex cover of size k?
In contrast to proof of Theorem <ref>, we will need to set a slightly higher subsidy and carefully adapt the argument to the value of information computation.
We will create an instance of the CIG-VoI problem with n=|V|+1 agents, an agent each for vertices in and an additional agent j=|V|+1. The construction of the instance and several arguments are similar to the proof of Theorem <ref>. The key difference is that we have an additional agent j that does not correspond to a vertex in . We will consider the inspection of the component c_j corresponding to this agent.
Consider the (2-CNF) formula ϕ()=⋀_(u,v)∈ E(x_u x_v), where the clauses consist of states x_u,x_v for all pairs u,v of agents/components corresponding to edges in E. Set the probability distribution θ to be the constant distribution with the entire probability mass on 0^n (i.e. all the components are guaranteed to fail without repair). Set repair cost C_i=1-ϵ for 0<ϵ<1/n for all components i∈[|V|] and C_j=1.
Therefore,
l_i(s_i,s_-i,θ)=(1-ϵ)s_i+1-ϕ(s) for i∈[|V|] and l_j(s_j,s_-j,θ)=2-ϕ(s). Note that WLOG s=0^n is a NE for this game, since any repair action by any agent increases the agent's cost by (1-ϵ) if the repair does not change the state ϕ of the system, and 0 otherwise (since ϕ is monotonic, it can only change from 0 to 1). As shown in the proof of Theorem <ref>, the remaining NEs for the game correspond to minimal vertex covers of . Moreover, since ϕ(s) does not depend on s_j by definition, agent j will always prefer action DN for any s_-j. Let s_K:=(s_1,…,s_n) where s_i=[i∈ K] for any K⊆ V.
Notice that the prior and posterior games (for inspection of c_j) have identical cost matrices and equilibria for this component inspection game. To complete the reduction, we consider the game defined above and subsidy budget s^*=k. Suppose there exists a vertex cover of of size k, then there exists a minimal vertex cover, say K' of size k'≤ k. We design a subsidy scheme with total subsidy s'=k'≤ s^*, allocating subsidy of 1 for repair to exactly the agents in K'. Clearly, all subsidized agents will always choose repair. We claim that the only NE after subsidy is s_K'. Indeed, by the above observation, any NE must be s_K for some K⊇ K'. But if K K', then any agent in K∖ K' will choose to do nothing as the system would function without their repair action. Since there is exactly one NE in prior and posterior games, Value of Information is exactly zero for all agents.
Conversely, if there is no vertex cover of size k, then we show that no subsidy scheme with s^*≤ k may guarantee that no agent has negative value of information when a single component j is inspected. In this case the any vertex cover K' has |K'|>k. We consider two cases:
Case 0: |K'|>k+1. Observe that if the subsidy provided to an agent is less than the repair cost 1-ϵ, then the agent will prefer to do nothing, except when repairing their component (given other players actions) changes the system state from 0 to 1. However, with a budget of s^*=k, the maximum number of agents that can receive a subsidy of at least 1-ϵ is at most k/1-ϵ<k+1, since ϵ<1/n and k<n WLOG. Thus, at least two agents are without subsidy at least 1-ϵ in K', and these agents will prefer to do nothing if only the agents K^*={i∈[|V|]| s^*_i>1-ϵ} with sufficient subsidy choose repair. Observe that both s_K' and s_K^* are Nash equilibria in the subsidized game. If s_K' is chosen as the prior equilibrium and s_K^* a posterior equilibrium, then the value of information for agents in K'∖ K^* is (1-ϵ)-1<0 since the system does not work in s_K^*.
Case 1: |K'|=k+1. In this case, the only new possibility is if at least 1-ϵ subsidy is provided to all but one agent (say k') in K', then the remaining agent will choose repair. Without loss of generality, we assume k+1<n, and that K' is a minimal vertex cover. Now if v_k' denotes the vertex corresponding to agent k' in , and let E' denote the set of edges incident on vertices V'⊆ V∖ K' with one end at v_k'. E' is non-empty, as otherwise K'∖{v_K'} would constitute a vertex cover for contradicting minimality of K'. Observe that K_1=K'∖{v_K'}∪ V' is a vertex cover. Let K_2 denote a minimal vertex cover which is a subset of K_1. Now both s_K_2 and s_K' are NEs in the subsidized game. If the former is set as the prior equilibrium, and the latter a posterior equilibrium then, the value of information is negative (equals 0-(1-ϵ)=ϵ-1) for agent k'.
Thus in either case, some agent has a negative value of information when the subsidy budget is k. This completes the reduction.
We consider an extension of the two agents connected in series, with additional components connected in series which are assumed to be uncontrolled by agents, and cannot be inspected or repaired. Let p_R denote the probability that all remaining components connected in series will all work. In addition to the prior game, we will now consider posterior games where component c_1 is inspected, and its state y_1 is revealed on inspection (y_1=1 corresponds to c_1 is functioning, and y_1=0 corresponds to c_1 is broken). Table <ref> summarizes prior and posterior costs for the two agents for each action pair (recall that DN denotes “do nothing” or s_i=0, and RE denotes “repair” or s_i=1). <cit.> show that value of information (VoI) is non-negative for all agents if only one component is inspected, and if a global equilibrium is selected.
In the two-agent series game described above, if component c_j,j∈{1,2} is inspected, and actions are selected from global equilibria, then VoI_i,j is non-negative for each agent i∈{1,2}.
Therefore, if we avoid suboptimal local equilibria in this setting then VoI is guaranteed to be non-negative. Combined with Theorem <ref> above, this implies that negative Value of Information may be avoided via subsidizing repair costs in the two-agent series game.
Suppose C_1>0 and C_2 p_Rp_2. For y_1=1 (component c_1 is inspected and found to be functioning), any Nash equilibrium is global.
Note that RE-RE cannot be a Nash equilibrium as agent 1 can reduce its cost by switching action from RE to DN. Similarly, RE-DN cannot be a Nash equilibrium either. If further C_2 p_Rp_2, or equivalently p_Rp_2p_R+C_2, then exactly one of DN-DN and DN-RE constitutes the unique Nash equilibrium, depending on the cost-minimizing action for agent 2.
Suppose C_1∉{p_Rp_2}, and C_2∉{0, p_Rp_2}. For y_1=0 (component c_1 is inspected and found to be broken), if a Nash equilibrium is not global, it must be DN-DN.
DN-RE cannot be a Nash equilibrium if C_2>0. Now if RE-DN is a Nash equilibrium, it must be the only Nash equilibrium (since 1p_Rp_2+C_1 and p_Rp_2p_R+C_2) and therefore global.
If possible let RE-RE be a Nash equilibrium and not be global. Since RE-RE is a Nash equilibrium, we have p_R+C_1≤ 1 and p_R+C_2≤p_Rp_2. Therefore, p_R+C_1+p_R+C_2≤ 1+p_Rp_2≤ 2. Since p_Rp_2p_R+C_2, the only candidate for global equilibrium is DN-DN. Since RE-RE is not global, we must have p_R+C_1+p_R+C_2>2, a contradiction.
Now suppose there is a central authority which provides incentive/subsidy s for repairing the component. We can potentially have a targeted subsidy only available to a subset of agents, or a uniform subsidy available to all agents. Effectively this changes the cost parameters for the game, and can potentially enable the central authority to avoid negative VoI for the agents. We can also potentially consider conditional subsidy where the subsidy may be conditional on inspection results.
Suppose C_1∉{0,p_Rp_2}, and C_2∉{0, p_Rp_2}. A targeted subsidy s^*_1=C_1-p_Rp_2+ϵ for ϵ>p_Rp_1p_2 for agent 1 conditional on y_1=0 is sufficient to ensure that no agent has negative VoI when component 1 is inspected.
By Lemmas <ref> and <ref>, the only possible posterior suboptimal Nash equilibrium is DN-DN for y_1=0.
With a subsidy s=C_1-p_Rp_2+ϵ, agent 1 has cost p_Rp_2+C_1-s=1-ϵ in RE-DN and p_R+C_1-s=1-ϵ-p_Rp_2 in RE-RE. Thus DN-DN is no longer an equilibrium, and the unique global equilibrium is either RE-DN if C_2>p_Rp_2, or RE-RE if C_2<p_Rp_2. If the prior is also a global equilibrium, by Theorem
<ref>, we have that no agent has negative VoI.
It only remains to consider possible prior suboptimal (local) equilibria. We consider four cases w.r.t. choice of prior suboptimal equilibria:
Case 1: DN-DN is a local equilibrium. If y_1=1, the cost of each agent is only reduced for posterior actions DN-DN for both agents and hence VoI is non-negative. For posterior equilibrium DN-RE agent 1's VoI is non-negative for the same reason. Agent 2 has VoI given by p_Rp_1p_2-(p_R+C_2)=p_Rp_1p_2-C_2≥ p_Rp_2-C_2 which is non-negative if DN-RE is a Nash equilibrium since
p_R+C_2≤p_Rp_2 C_2≤ p_Rp_2
By Lemma <ref> we only have above two cases if y_1=1. If y_1=0, after subsidy, only candidate posterior NEs are RE-DN and RE-RE. VoI for agent 2 is non-negative if it continues to DN, and the expression for RE-RE is identical to the DN-RE case for y_1=1 above. Cost for agent 1 cannot increase if ϵ>p_Rp_1p_2.
Case 2: RE-RE is a local equilibrium, implying C_2≤ p_Rp_2. If y_1=1, Lemma <ref> now implies that DN-RE is the unique posterior NE. Cost for each can only decrease in this case. For y_1=0, only posterior NE is RE-RE where again cost cannot increase for any agent.
Case 3: RE-DN is a local equilibrium. If y_1=1, agent 1's action is DN, and cost can only decrease. Agent 2's cost is at most p_Rp_2 for the candidate equilibria DN-DN and DN-RE. If y_1=0, cost of agent 1 can only decrease for posterior NEs RE-DN and RE-RE. Cost of agent 2 is again at most p_Rp_2 by similar reasoning as above.
Case 4: DN-RE is a local equilibrium, implying C_2≤ p_Rp_1p_2≤ p_Rp_2. If y_1=1, the only candidate posterior is DN-RE for which both agents have VoI p_Rp_1≥ 0. If y_1=0, the condition on C_2 implies that the only possibile posterior NE is RE-RE. For agent 2 VoI is again p_Rp_1≥ 0. For agent 1, the subsidized cost is 1-ϵ-p_Rp_2<1-p_Rp_1p_2-p_Rp_2=1-p_R(p_1p_2+1-p_2)=1-p_R(p_1+(1-p_1)(1-p_2))≤p_Rp_1.
The cost matrix for a two-agent parallel game is summarized in Table <ref>. For this case as well,
the Value of Information may be negative if a local equilibrium is selected for some parameter settings. In the following we will show a dichotomy—if the repair costs are small then a central agent using subsidy must subsidize the full costs of repair to avoid negative Value of Information of component inspection for the agents. Otherwise, the central agent can partially subsidize to avoid negative VoI.
Suppose C_1,C_2∉{0,p_R·p_2,p_R·p_1·p_2}. A subsidy scheme with s_1^*>max{C_1-p_R·p_1·p_2,min{C_1,p_R·p_2}} and s_2^*>max{C_2-p_R·p_1·p_2,min{C_2,p_R·p_2}}, conditional on inspection, is sufficient to avoid negative VoI for both agents when component 1 is inspected.
Note that RE-RE cannot be an equilibrium since C_1,C_2>0. Also negative VoI is not possible when the posterior is y_1=1 as the only Nash equilibrium is DN-DN with zero cost for each agent.
First suppose min{C_1,C_2}≥p_R·p_1·p_2. This implies DN-DN is the prior equilibrium. In this case, the conditional subsidies of s_i^*=C_i-p_R·p_1·p_2 are sufficient to ensure that posterior repair costs for each agent is less than p_R·p_1·p_2≤p_R·p_2. Thus, DN-DN cannot be a posterior NE for y_1=0. For DN-RE and RE-DN, the subsidy ensures that VoI is non-negative for both agents.
Otherwise, we have three cases to consider w.r.t. relative choice values of repair costs and failure probabilities,
Case 1: C_1< p_R·p_1·p_2≤ C_2. In this case, RE-DN is the prior equilibrium.
Moreover, since C_1< p_R·p_1·p_2≤p_R·p_2, agent 1 would prefer action RE over DN and so DN-DN cannot be a posterior NE for y_1=0. If RE-DN is the posterior NE, then by Table <ref> clearly VoI is non-negative for both agents. So it only remains to consider the posterior equilibrium DN-RE. If C_2>p_R·p_2, DN-RE cannot be an equilibrium, and we are done. If C_2≤p_R·p_2, then the subsidy s_2^*>min{C_2, p_R·p_2} ensures that VoI of agent 2 is non-negative even if DN-RE is the posterior equilibrium.
Case 2: C_2< p_R·p_1·p_2≤ C_1. The argument for this case is symmetric to the previous case, with DN-RE as the only possible prior equilibrium.
Case 3: max{C_1,C_2}< p_R·p_1·p_2. In this case DN-RE as well as RE-DN can be prior Nash equilibria. Since max{C_1,C_2}< p_R·p_1·p_2≤p_R·p_2, DN-RE and RE-DN are the only candidate posterior NE. The setting of subsidies s_i^*>p_R·p_2≥p_R·p_1·p_2 > max{C_1,C_2} ensures that VoI is non-negative for both agents in this case.
§.§ Games with incomplete information
In a game of incomplete information, there are n players. Player i has a type space T_i and an action space S_i. We write T = T_1 ×…× T_n and S = S_1 ×…× S_n. We assume that the type vector t is drawn from a distribution D (prior) over T that is common knowledge. The distribution D may or may not be a product distribution — that is, players’ types may or may not be stochastically independent. The cost c_i(t_i;s) of player i is determined by its type t_i and by the actions s chosen by all of the players. For example, in the component inspection game, each agent have two types, corresponding to the binary state of their component.
The expected cost of action s_i to agent i is c_i(s_i,D)=_t∼ D[c_i(t_i;s_i)]. Suppose the inspection of agent j reveals their type t_j. Denote posterior belief after the inspection of agent j by D^j,t_j. If agent i switches action from s_i to s_i^j,t_j after the inspection, the value of information about inspection of agent j for agent i is given by VoI_i,j(s_i,s_i^j,t_j) := l_i(s_i,D)-l_i(s_i^j,t_j,D^j,t_j). Typically we will assume that the actions s_i^j,t_j are Nash equilibria. We want this to be non-negative for each agent i.
We can extend the definition of PoIA to the more general setting.
TODO: extend to multiple inspections?
Let 𝕊={subs_i} denote the subsidy scheme. Consider the single agent inspection in an incomplete information game (T,S,D) for inspection of agent j. Let _NE
(𝕊),_NE^t_j(𝕊)⊆ S (for t_j∈ T_j) denote the subset of states corresponding to Nash equilibria when the cost for agent i is cost_i-subs_i for prior, posteriors y_j=0 and y_j=1 respectively. Let VoI_j(𝕊)=min_i,s∈_NE(𝕊),s'∈∪_t_j∈ T_j_NE^t_j(𝕊)VoI_i,j(s_i,s'_i) denote the least value of information for any agent i for equilibria under 𝕊. Then the Price of Information Avoidance is given by
PoIA(j)=min_𝕊|VoI_j(𝕊)≥0max_s∈_NE
(𝕊)cost(s)/min_𝕊max_s∈_NE(𝕊)=min_𝕊|VoI_j(𝕊)≥0PoA(𝕊)/min_𝕊PoA(𝕊).
§.§ Subsidies can help avoid negative VoI in two-agent series component inspection game
Suppose we have two agents N={1,2} with components c_1,c_2 connected in series. Let p_1,p_2 denote the (prior) probability that the components c_1,c_2 will work (respectively) and p_R denote the probability that all remaining components (assumed uncontrolled by agents, and cannot be inspected or repaired) connected in series will all work. We will use the notation p:=1-p for simplicity. Table <ref> summarizes prior and posterior costs for the two agents for each action pair, where DN denotes “do nothing” (s_i=0) and RE denotes “repair” (s_i=1).
<cit.> show that value of information (VoI) is non-negative for all agents if only one component is inspected, and if a global equilibrium is selected.
If component c_j,j∈{0,1} is inspected, and actions are selected from global equilibria, then VoI_i,j is non-negative for each agent i∈{1,2}.
Therefore, if we avoid suboptimal local equilibria in this setting then VoI is guaranteed to be non-negative.
Suppose C_1>0 and C_2 p_Rp_2. For y_1=1 (component c_1 is inspected and found to be functioning), any Nash equilibrium is global.
Note that RE-RE cannot be a Nash equilibrium as agent 1 can reduce its cost by switching action from RE to DN. Similarly, RE-DN cannot be a Nash equilibrium either. If further C_2 p_Rp_2, or equivalently p_Rp_2p_R+C_2, then exactly one of DN-DN and DN-RE constitutes the unique Nash equilibrium, depending on the cost-minimizing action for agent 2.
Suppose C_1∉{p_Rp_2}, and C_2∉{0, p_Rp_2}. For y_1=0 (component c_1 is inspected and found to be broken), if a Nash equilibrium is not global, it must be DN-DN.
DN-RE cannot be a Nash equilibrium if C_2>0. Now if RE-DN is a Nash equilibrium, it must be the only Nash equilibrium (since 1p_Rp_2+C_1 and p_Rp_2p_R+C_2) and therefore global.
If possible let RE-RE be a Nash equilibrium and not be global. Since RE-RE is a Nash equilibrium, we have p_R+C_1≤ 1 and p_R+C_2≤p_Rp_2. Therefore, p_R+C_1+p_R+C_2≤ 1+p_Rp_2≤ 2. Since p_Rp_2p_R+C_2, the only candidate for global equilibrium is DN-DN. Since RE-RE is not global, we must have p_R+C_1+p_R+C_2>2, a contradiction.
Now suppose there is a central authority which provides incentive/subsidy s for repairing the component. We can potentially have a targeted subsidy only available to a subset of agents, or a uniform subsidy available to all agents. Effectively this changes the cost parameters for the game, and can potentially enable the central authority to avoid negative VoI for the agents. We can also potentially consider conditional subsidy where the subsidy may be conditional on inspection results.
Suppose C_1∉{0,p_Rp_2}, and C_2∉{0, p_Rp_2}. A targeted subsidy s^*_1=C_1-p_Rp_2+ϵ for ϵ>p_Rp_1p_2 for agent 1 conditional on y_1=0 is sufficient to ensure that no agent has negative VoI when component 1 is inspected.
By Lemmas <ref> and <ref>, the only possible posterior suboptimal Nash equilibrium is DN-DN for y_1=0.
With a subsidy s=C_1-p_Rp_2+ϵ, agent 1 has cost p_Rp_2+C_1-s=1-ϵ in RE-DN and p_R+C_1-s=1-ϵ-p_Rp_2 in RE-RE. Thus DN-DN is no longer an equilibrium, and the unique global equilibrium is either RE-DN if C_2>p_Rp_2, or RE-RE if C_2<p_Rp_2. If the prior is also a global equilibrium, by Theorem
<ref>, we have that no agent has negative VoI.
It only remains to consider possible prior suboptimal (local) equilibria. We consider four cases w.r.t. choice of prior suboptimal equilibria:
Case 1: DN-DN is a local equilibrium. If y_1=1, the cost of each agent is only reduced for posterior actions DN-DN for both agents and hence VoI is non-negative. For posterior equilibrium DN-RE agent 1's VoI is non-negative for the same reason. Agent 2 has VoI given by p_Rp_1p_2-(p_R+C_2)=p_Rp_1p_2-C_2≥ p_Rp_2-C_2 which is non-negative if DN-RE is a Nash equilibrium since
p_R+C_2≤p_Rp_2 C_2≤ p_Rp_2
By Lemma <ref> we only have above two cases if y_1=1. If y_1=0, after subsidy, only candidate posterior NEs are RE-DN and RE-RE. VoI for agent 2 is non-negative if it continues to DN, and the expression for RE-RE is identical to the DN-RE case for y_1=1 above. Cost for agent 1 cannot increase if ϵ>p_Rp_1p_2.
Case 2: RE-RE is a local equilibrium, implying C_2≤ p_Rp_2. If y_1=1, Lemma <ref> now implies that DN-RE is the unique posterior NE. Cost for each can only decrease in this case. For y_1=0, only posterior NE is RE-RE where again cost cannot increase for any agent.
Case 3: RE-DN is a local equilibrium. If y_1=1, agent 1's action is DN, and cost can only decrease. Agent 2's cost is at most p_Rp_2 for the candidate equilibria DN-DN and DN-RE. If y_1=0, cost of agent 1 can only decrease for posterior NEs RE-DN and RE-RE. Cost of agent 2 is again at most p_Rp_2 by similar reasoning as above.
Case 4: DN-RE is a local equilibrium, implying C_2≤ p_Rp_1p_2≤ p_Rp_2. If y_1=1, the only candidate posterior is DN-RE for which both agents have VoI p_Rp_1≥ 0. If y_1=0, the condition on C_2 implies that the only possibile posterior NE is RE-RE. For agent 2 VoI is again p_Rp_1≥ 0. For agent 1, the subsidized cost is 1-ϵ-p_Rp_2<1-p_Rp_1p_2-p_Rp_2=1-p_R(p_1p_2+1-p_2)=1-p_R(p_1+(1-p_1)(1-p_2))≤p_Rp_1.
Some observations from the proof:
* Above proof in particular implies that a targeted conditional subsidy of s=C_1-p_rp_2 to agent 1 when component 1 is inspected is sufficient to avoid local equilibria when the component is inspected. Thus, a PoA of 1 is achievable in this case with the help of subsidy.
* Uniform/unconditional Subsidy. Note that if the subsidy is uniformly applied for all repair actions, subsidy plus social cost can exceed the social cost of global equilibrium in the absence of any subsidy. The subsidy to agent 2 is not necessary to avoid bad local equilibrium. The social cost s' for RE-RE is 2p_R+C_1+C_2. We have s+s'/s'< 1+C_1/C_1+C_2< 2. Can we also lower bound the PoA?
TODO. Discuss bounds on PoA and PoIA based on above theorem.
§.§ Subsidies can help avoid negative VoI in two-agent parallel component inspection game
The cost matrix for a two-agent parallel game is summarized in Table <ref>. For this case as well,
the value of information may be negative if a local equilibrium is selected for some parameter settings. In the following we will show a dichotomy. If the repair costs are small then a central agent using subsidy must subsidize the full costs of repair to avoid negative value of information of component inspection for the agents. Otherwise, the central agent can partially subsidize to avoid negative VoI.
Suppose C_1,C_2∉{0,p_R·p_2,p_R·p_1·p_2}. A subsidy scheme with s_1^*>max{C_1-p_R·p_1·p_2,min{C_1,p_R·p_2}} and s_2^*>max{C_2-p_R·p_1·p_2,min{C_2,p_R·p_2}}, conditional on inspection, is sufficient to avoid negative VoI for both agents when component 1 is inspected.
Note that RE-RE cannot be an equilibrium since C_1,C_2>0. Also negative VoI is not possible when the posterior is y_1=1 as the only Nash equilibrium is DN-DN with zero cost for each agent.
First suppose min{C_1,C_2}≥p_R·p_1·p_2. This implies DN-DN is the prior equilibrium. In this case, the conditional subsidies of s_i^*=C_i-p_R·p_1·p_2 are sufficient to ensure that posterior repair costs for each agent is less than p_R·p_1·p_2≤p_R·p_2. Thus, DN-DN cannot be a posterior NE for y_1=0. For DN-RE and RE-DN, the subsidy ensures that VoI is non-negative for both agents.
Otherwise, we have three cases to consider w.r.t. relative choice values of repair costs and failure probabilities,
Case 1: C_1< p_R·p_1·p_2≤ C_2. In this case, RE-DN is the prior equilibrium.
Moreover, since C_1< p_R·p_1·p_2≤p_R·p_2, agent 1 would prefer action RE over DN and so DN-DN cannot be a posterior NE for y_1=0. If RE-DN is the posterior NE, then by Table <ref> clearly VoI is non-negative for both agents. So it only remains to consider the posterior equilibrium DN-RE. If C_2>p_R·p_2, DN-RE cannot be an equilibrium, and we are done. If C_2≤p_R·p_2, then the subsidy s_2^*>min{C_2, p_R·p_2} ensures that VoI of agent 2 is non-negative even if DN-RE is the posterior equilibrium.
Case 2: C_2< p_R·p_1·p_2≤ C_1. The argument for this case is symmetric to the previous case, with DN-RE as the only possible prior equilibrium.
Case 3: max{C_1,C_2}< p_R·p_1·p_2. In this case DN-RE as well as RE-DN can be prior Nash equilibria. Since max{C_1,C_2}< p_R·p_1·p_2≤p_R·p_2, DN-RE and RE-DN are the only candidate posterior NE. The setting of subsidies s_i^*>p_R·p_2≥p_R·p_1·p_2 > max{C_1,C_2} ensures that VoI is non-negative for both agents in this case.
§.§ Beyond two agents
We extend the above results to beyond two agent component inspection games (prior work does not have general results for beyond two component inspection games.
We first consider a series connection of N={1,2,…,n} agents. We will extend the above results to this setting by providing a subsidy scheme to avoid negative value of information for all agents. We use the notation p_-A:=Π_j∈ [n]∖ Ap_j to denote that probability that all components other than components in set A are working. We will write p_-{i} as p_-i and p_-{i,j} as p_-ij for conciseness, given agents i,j∈[n].
Suppose C_i∉{0,p_-i}∪{p_-ijp_i}_j for each agent i∈[n]. A subsidy scheme s^*_i>C_i-p_-i+p_∅ conditional on y_i=0 for each agent i is sufficient to ensure that no agent has negative VoI when a single component is inspected.
Suppose component c_i is inspected. The given assumption implies that C_i∉{0,p_-i}, and C_j∉{0, p_-ijp_j}. By Theorem <ref>, a targeted subsidy s^*_i>C_i-p_-i+p_∅ for agent i conditional on y_i=0 is sufficient to ensure that no agent has negative VoI when component i is inspected. Thus, the proposed subsidy scheme works when any arbitrary component c_i is inspected.
We now consider a parallel connection of N={1,2,…,n} agents. We will again extend the above results to this setting by providing a subsidy scheme to avoid negative value of information for all agents. We use the notation p_-A:=Π_j∈ [n]∖ Ap_j to denote that probability that all components other than components in set A are not working. We will write p_-{i} as p_-i and p_-{i,j} as p_-ij for conciseness, given agents i,j∈[n].
Suppose C_i∉{0,p_∅}∪{p_-{j}}_j. A subsidy scheme with s_i^*>max{C_i-p_∅,min{C_i,p_-{i}}}, conditional on inspection, is sufficient to avoid negative VoI for all agents when a single component is inspected.
We use Theorem <ref> together with the observation that the probability that all components in a given set of components fails is no larger than at least one component in that set fails.
[Above expression for subsidy is somewhat loose; we could probably get a tighter but more complicated expression].
Above results can be used to obtain sufficient subsidies for mixed series-parallel connections to avoid negative value of information. Can we handle arbitrary binary functions relating component status to system status? What about real-valued status with goodness thresholds?
§.§ General series-parallel game
In this case prior work <cit.> notes that agents can have negative VoI even when a global equilibrium is selected. TODO. We show that a central agent can still use subsidy to ensure negative VoI is avoided. The key idea is to express a general binary function ϕ of individual components using the Disjunctive Normal Form (DNF), and extend the above arguments to compute a bound on the subsidy sufficient to ensure at least one of the series of components corresponding to clauses containing the inspected component has all components working.
§ NEGATIVE VOI WHEN SHARING UNCERTAIN COSTS
Cost-sharing games are well-studied in the algorithm game theory literature <cit.>. Possible actions ={a_j} correspond to a subset of agents, i.e. there is a function f:→ 2^N such that
agent i may use any action for which i∈ f(a_j). Note that there may be multiple options corresponding to the same subset of N. Under uniform cost-sharing, all agents that use an action a_j in some state s∈ S equally share its cost. That is, typically there is a deterministic function c:→_≥0 such that if k agents from f(a_j) use an action a_j in some state s, then cost_i(s) for each of these agents is c(a_j)/k. Here, we will consider a Bayesian extension where the costs of some actions in are associated with some uncertainty. In this setting we will show that negative VoI is possible.
Example 1. Consider a two-agent cost-sharing game where the action set is ={A,B,C,D} with associated subsets f(A)={1,2}, f(B)={2}, f(C)={2}, and f(D)={1}. For example, in a commuting game, A could correspond to a shared public transport, and B,C,D could correspond to private modes of transport. We assume the cost function c is a random function such that with probability 1/2, c(A)=5, f(B)=2, f(C)=6, and f(D)=4, and with probability 1/2, c(A)=5, f(B)=6, f(C)=2, and f(D)=4. In the commute example, for agent 2, B could be a bike and C could be a car, and w_i could be unknown world state that impacts the cost of actions B and C for agent 2. We call these worlds w_1 and w_2 respectively (see Tables <ref> and <ref>). Thus the prior cost for agent 2 (i.e. probability weighted cost of the worlds w_1 and w_2) is 4 for actions B and C. Thus the only Nash equilibrium in the prior game is (A,A) with a cost of (2.5,2.5) for both agents. In world w_1, the only NE is (D,B) and in world w_2 the only NE is (D,C), both with cost (4,2). Thus the knowledge of the state of the world leads to negative VoI for agent 2. Specifically, the knowledge of the cheaper option among B and C causes agent 1 to drop out of the cost-shared option A, increasing social cost and cost for agent 2.
In the above example, the public knowledge of the world state of a simple uniform cost-sharing game results in negative VoI for one of the agents. It is possible to consider simple realistic extensions where VoI is negative for both agents, for example by adding additional cost when both agents use a “private mode” in the commuting example
. For example if an additional cost of 0.75 is charged to both agents for states (D,B) and (D,C) then it is easy to verify that prior and posterior equilibria remain the same but the posterior cost is increased to (4.75,2.75) causing negative VoI for both agents.
In Example 1, using a subsidy of 3+ϵ for ϵ>0 for the option A guarantees that agent 2 will always prefer option A, and is sufficient to ensure that negative VoI (as well as suboptimal local equilibrium) is avoided.
TODO. Add sufficient condition on subsidy for this game similar to above results for component inspection games.
§.§.§ Beyond pure series and pure parallel systems
For 3 agents, there are only two systems not covered by above, up to symmetry. For both these systems, one can work out how much subsidy is sufficient to avoid negative VoI but it involves tediously many cases to verify directly.
An alternative idea is to cover these cases by providing algorithms instead of general closed expressions for computing sufficient subsidy to avoid negative VoI, possibly by some lemmas to relate VoI in a system to a simpler system.
A concrete next direction is to try to prove a hardness result for a large number of agents.
§ DATA-DRIVEN SUBSIDY IN REPEATED GAMES
The above computational hardness results for optimal subsidy design under vaious objectives motivate us to consider a beyond worst-case approach to finding a good subsidy for a given game. Specifically, we will consider the data-driven algorithm design paradigm introduced by <cit.>, and further studied by <cit.>. In this framework, we will assume access to multiple games coming from the same domain (e.g. infrastructure management in similar counties) and determine a good value of subsidy for unseen game instances from the same domain. We will consider
games drawn i.i.d. from an (arbitrary, unknown) game distribution, or games arriving in an online sequence.
In the former we
and will be interested in having a small sample complexity of the number of game samples needed to generalize well to an unseen sample from the same distribution. For the latter, we will study regret relative to the best possible subsidy scheme over the online sequence, in hindsight. We leave open questions related to computational complexity, some recently proposed techniques are applicable to our setting <cit.>.
§.§ Sample complexity for subsidy schemes
In this section, we define the notion of sample complexity for designing sample-based subsidy schemes for cost minimization games. The sample complexity for (uniform convergence of) a given set of subsidy schemes measures how many samples are sufficient to ensure the expected social cost of any subsidy scheme in the set approximately matches its average cost over the game samples with high probability, for any given approximation and confidence level.
In particular, if there is a subsidy scheme in the set with small social cost over a sufficiently large set of game samples, then that scheme will almost certainly have low cost in expectation over the distribution over games from which the samples are generated. Guarantees on sample complexity are a central topic in computational learning theory <cit.>.
The sample complexity for a class of subsidy schemes is a function : _≥0×(0,1) →ℤ_≥1 defined such that for any ϵ>0, any δ∈ (0,1), any sample size N∈ℤ_≥1, and any distribution over the games, with probability at least 1 -δ over the draw of a set S ∼^N, for any scheme in , the difference between the average cost of over
S and the expected cost of over is at most ϵ, whenever N≥(ϵ,δ). In other words,
_S∼^N[∃∈ s.t. |1/N∑_v∈ Scost_ (v)- [cost_ (v)]|> ϵ]< δ.
Note that the existence of a single ∈ that violates the ϵ-approximation for its expected cost is sufficient to cause the “failure” event which happens with probability δ. In other words, with probability 1-δ, all schemes must observe uniform convergence of sample cost to expected cost (for sufficiently large sample). The 1 -δ high probability condition is needed because it is always possible that (with a very small but non-zero probability) the set of samples S, no matter how large, is completely unrepresentative of the distribution over the games.
Clearly, (ϵ,δ) should grow as δ or ϵ shrinks since we need to ensure that the difference between the average and expected cost of each subsidy in is at most (ϵ,δ) with probability at least 1 -δ.
The sample complexity (ϵ,δ) of class of course also depends on the specific subsidy class . According to classic computational learning theory, the more “complex” the subsidy class is, the more challenging it is to bound the difference between the average and expected cost of every subsidy in , i.e. richer subsidy classes have larger sample camplexity (ϵ,δ).
For an arbitrary class , a bound on the sample complexity allows the subsidy scheme designer to relate the expected cost of a scheme in which achieves minimum average cost over the set of samples to the expected cost of an optimal scheme in , using classic arguments from learning theory <cit.>. More precisely, for a set of samples S from the distribution over buyers’ values, let Ŝ be the scheme in that minimizes average cost over the set of samples and let ^* be the scheme in that minimizes expected cost over . Finally, let P be the minimum cost achievable by any scheme in over the support of the distribution . For any δ∈ (0,1), with probability at least 1 -δ over the draw of a set of N≥(ϵ,δ) samples S from , the difference between the expected cost of Ŝ over and the expected cost of ^* over is at most 2ϵ.
Therefore, so long as there is a good sample complexity (ϵ,δ) bound for subsidy scheme class , the scheme designer can be confident that an optimal scheme over the set of observed samples competes with an optimal scheme in .
Pseudo-dimension. Pseudo-dimension <cit.> is a well-known learning theoretic measure of complexity of a class of functions (it generalizes the Vapnik-Chervonenkis or VC dimension to real-valued functions), and is useful in obtaining bounds on the sample complexity of fitting functions from that class to given data.
Let be a set of real valued functions from input space . We say that
C = (x_1, …, x_m)∈^m is pseudo-shattered by if there exists a vector
r = (r_1, …, r_m)∈^m (called “witness”) such that for all
b= (b_1, …, b_m)∈{± 1}^m there exists h_b∈ such that sign(h_b(x_i)-r_i)=b_i. Pseudo-dimension of is the cardinality of the largest set
pseudo-shattered by .
For a function class with range [0,H] and pseudo-dimension d, a sample complexity bound of (ϵ,δ)=O(H^2/ϵ^2(d+log1/δ)) is well-known <cit.>. We conclude this section with a useful general lemma from data-driven algorithm design, restated in the specific context of games, for giving upper bounds on the pseudo-dimension of certain loss function classes.
(Lemma 2.3, <cit.>)
Suppose that for every game G ∈, the objective function L_G(σ) : → which maps game parameter σ (e.g. subsidy allocation) to the objective value (e.g. cost of worst-case equilibrium) is piecewise
constant with at most N pieces. Then the dual class family {L_σ(·):→| L_σ(G)= L_G(σ)} defined on games in has pseudo-dimension O(log N).
§.§ Sample complexity for subsidizing games drawn from a distribution
Learning uniform subsidies. We start with some initial results on learning a good value of the subsidy even in the absence of considerations about value of information, or possible equilibria in posterior games. We will consider uniform subsidy σ∈_≥ 0 conditional on repair (i.e. reduces cost of repair for all agents that choose to repair).
A simple loss objective in this uniform subsidy setting is given by
L_prior(σ):= max_s∈_NE(σ)cost^σ(s)+n_sσ,
where n_s is the number of agents that choose repair in joint state s, _NE(σ) and cost^σ(s) denote the set of Nash equilibria and (respectively) the updated total cost, when a uniform subsidy of σ is applied. We further assume all the repair costs as well as subsidy budget is no more than H, i.e. σ, C_i≤ H for each i∈[n]. Therefore, L_prior(σ)≤ (2H+1)n.
Suppose the central agent (learner) who needs to set the subsidy has repeated instances of this game (e.g. cost matrices) drawn from a distribution. Can we learn a good value of uniform subsidy σ^*, that has small expected loss over the distribution?
Our proof involves a bound on the number of critical subsidy values at which the set of Nash equilibria could possibly change by examining the subsidy values at which the preferred actions of an individual agent i conditional on any fixed joint action s_-i of the remaining agents, and use of Lemma <ref> <cit.>. Add more details/sketch on tools used.
For any ϵ,δ>0 and any distribution over component maintenance games with n agents, O(n^2H^2/ϵ^2(n+log1/δ)) samples of the game drawn from are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best value of uniform subsidy σ̂^̂*̂ over the sample has expected loss L_prior(σ̂^̂*̂) that is at most ϵ larger than the expected loss of the best value of subsidy over .
Consider any fixed component maintenance game G. Observe that if actions of all agents except agent i i.e. s_-i' is fixed (2^n-1 possibilities), then the agent i will repair their component provided the cost under subsidy C_i-σ^*+1-_θϕ('(1,s_-i')) is smaller than 1-_θϕ('(0,s_-i')). That is we have at most 2^n-1 critical values of σ^* where the preferred action of agent i may change. Over n agents, we have at most n2^n-1 such points. Moreover, the loss is piecewise constant in any fixed piece.
Given the piecewise-constant structure with a bound on the total number of pieces, the sample complexity bound follows from standard learning theoretic arguments. In more detail, by Lemma <ref>, this implies that the pseudo-dimension of the loss function class parameterized by the subsidy value is at most O(log(n2^n-1))=O(n) and classic bounds <cit.> imply the sample complexity result.
Note that a naive bound of O(2^n) could be derived on the pseudo-dimension of any n player game, where each player has 2 possible actions. This is because there are 2^n distinct states and therefore at most 2^2^n possible distinct state subsets which could correspond to a Nash Equilibrium. The critical subsidy values σ^* correspond to values at which the set of NE changes, and for any pair of state subsets exactly one could be the set of NE for all values of subsidy above (respectively below) some critical value σ^*. The loss is again piecewise constant, and by Lemma <ref> we have that the pseudo-dimension is O(2^n). The above proof makes use of cost matrix of the component maintenance game to obtain the exponentially better upper bound of O(n).
Learning non-uniform subsidies. We are further able to obtain a sample complexity bound even for the non-uniform subsidy scheme defined above, where the central agent can provide a different subsidy to each agent depending on their component cost, failure probability and how critical the component is to overall system functionality. The subsidy scheme consists of a vector of multiple real-valued parameters, one for each agent.-1
L_prior():= max_s∈_NE()cost^(s)+subs(s).
We assume that subs_i(s), C_i≤ H for each i∈[n], and therefore L_prior()≤ (2H+1)n. Again, we are able to give a polynomial sample complexity for the number of games needed to learn a good value of subsidy with high probability over the draw of game samples coming from some fixed but unknown distribution (proof in Appendix <ref>).
For any ϵ,δ>0 and any distribution over component maintenance games with n agents, O(n^2H^2/ϵ^2(n^2+log1/δ)) samples of the game drawn from are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best vector of subsidies over the sample σ̂^̂*̂ has expected loss L_prior that is at most ϵ larger than the expected loss of the best vector of subsidies over .
Learning conditional subsidies. We will now obtain a sample complexity bound for non-uniform subsidy schemes in component inspection games, where the central agent provides subsidy only in posterior games. Let 𝕊 denote the subsidy scheme. Let 𝒮_NE^0(𝕊) (resp. 𝒮_NE^1(𝕊)) denote the subset of states in S corresponding to Nash equilibria when the cost for agent i is the subsidized cost cost_i^𝕊,0 (resp. cost_i^𝕊,1) for posterior y_j=0 (resp. y_j=1). For the component inspection game of component c_1 (wlog), define
L_posterior(𝕊):= p_1L_posterior^1(𝕊)+(1-p_1)L_posterior^0(𝕊),
where L_posterior^i(𝕊):=max_s∈𝒮_NE^i(𝕊)cost^𝕊,i(s)+subs^i(s). We assume that subs_i^j(s)≤ H, C_i≤ H for each i∈[n], j∈{0,1}, thus L_posterior(𝕊)≤ (2H+1)n. In this case too, we are able to give a polynomial sample complexity for the number of games needed to learn a good value of subsidy with high probability over the draw of game samples coming from some fixed but unknown distribution (proof in Appendix <ref>).
For any ϵ,δ>0 and any distribution over component inspection games with n agents, O(n^2H^2/ϵ^2(n^2+log1/δ)) samples of the component inspection game drawn from are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best vector of subsidies over the sample σ̂^̂*̂ has expected loss L_posterior that is at most ϵ larger than the expected loss of the best vector of subsidies over .
A similar sample complexity bound can also be given for learning conditional subsidies from game samples, by minimizing a loss based on the social cost in the posterior game. See Theorem <ref> in Appendix <ref>. Note that minimization of L_prior corresponds to minimization of PoA(𝕊). To guarantee that the system functions, we can simply add a regularization term λ (1-ϕ(s)), for sufficiently large λ > (2H+1)n.
In learning terminology, our results imply a bound on the number of sample games in the “training set” to do well on an unseen “test” game instance from the same distribution. We note that optimization over the training set is still computationally hard, but we can avoid solving the hard problem over and over again for repeated test instances.
Our techniques extend beyond losses that are based on worst-case equilibria in the subsidized game. We define the following loss that considers the average-case Nash equilibrium, and obtain similar sample complexity guarantees as in Theorem <ref> above.
L̃_prior(𝕊):= 1/|𝒮_NE(𝕊)|∑_s∈𝒮_NE(𝕊)(cost^𝕊(s)+subs(s)).
We remark that, generally speaking, average case NE are known to be hard to analyze and give any useful guarantees for <cit.>. Our result below indicates the potential of data-driven algorithm design to handle such challenging objectives and obtain meaningful learning guarantees. In particular, given a sufficiently large sample of games, we can compute near-optimal subsidy schemes with high confidence for minimizing the average cost of Nash equilibria, not just the worst case Nash equilibrium.
Suppose subs_i(s), C_i≤ H for each i∈[n]. For any ϵ,δ>0 and any distribution over component maintenance games with n agents, O(n^2H^2/ϵ^2(n^2+log1/δ)) samples of the game drawn from are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best vector of subsidies over the sample σ̂^̂*̂ has expected loss L̃_prior that is at most ϵ larger than the expected loss L̃_prior of the best vector of subsidies over .
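As a quick illustration of the two objectives (our sketch, not from the paper), the code below enumerates the pure Nash equilibria of a small subsidized component maintenance game by brute force and reports both the worst-case loss L_prior and the average-case variant; note that the subsidized social cost plus the subsidy paid back out equals the unsubsidized social cost of the equilibrium. The series structure function, priors, costs and subsidy vector are assumptions for illustration only.

import itertools

def fail_prob(s, p, phi):
    # Pr_theta[phi(x') = 0] with independent component states x_i ~ Bernoulli(p_i)
    q, n = 0.0, len(p)
    for x in itertools.product([0, 1], repeat=n):
        w = 1.0
        for i in range(n):
            w *= p[i] if x[i] else 1.0 - p[i]
        q += w * (1 - phi(tuple(max(x[i], s[i]) for i in range(n))))
    return q

def losses(p, C, sigma, phi, tol=1e-12):
    n = len(p)
    cost = lambda i, s: (C[i] - sigma[i]) * s[i] + fail_prob(s, p, phi)  # subsidized agent cost
    nash = [s for s in itertools.product([0, 1], repeat=n)
            if all(cost(i, s) <= cost(i, s[:i] + (1 - s[i],) + s[i + 1:]) + tol for i in range(n))]
    score = lambda s: sum(C[i] * s[i] for i in range(n)) + n * fail_prob(s, p, phi)
    vals = [score(s) for s in nash]  # unsubsidized social cost of each equilibrium
    return max(vals), sum(vals) / len(vals)  # (worst-case loss, average-case loss)

series = lambda x: int(all(x))
print(losses([0.6, 0.8, 0.7], [0.3, 0.2, 0.25], [0.05, 0.0, 0.0], series))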
Cost-sharing games. We first consider the problem of learning a good value of subsidy in the prior game. The subsidy scheme consists of a vector of multiple real-valued parameters, one for each action. We define the loss of the central agent in a game G as the total social cost in the worst-case Nash Equilibrium under the subsidy scheme 𝕊, plus the total subsidy paid out by the central agent in the scheme 𝕊, i.e.,
L_prior(𝕊):= max_s∈𝒮_NE(𝕊)cost^𝕊(s)+subs(s).
We assume that c^𝕊(a), c(a)≤ H for each a∈𝒜, and therefore L_prior(𝕊)≤ 2H|𝒜|.
For any ϵ,δ>0 and any distribution 𝒟 over fair cost sharing games with N agents and |𝒜| actions, O(|𝒜|^2H^2/ϵ^2(|𝒜|log(|𝒜|N)+log1/δ)) samples of the game drawn from 𝒟 are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best vector of subsidies over the sample σ̂^* has expected loss L_prior that is at most ϵ larger than the expected loss of the best vector of subsidies over 𝒟.
Another goal of the central agent is to avoid an increase in the social cost due to knowledge of the true costs. We denote the true costs by c̃:𝒜→ℝ_≥0, which are assumed to be known in the posterior game, and the corresponding social costs by c̃ost(·). Let 𝒮̃_NE denote the corresponding set of Nash Equilibria. We have
L_VoI(𝕊):= max_s∈𝒮̃_NE(𝕊)c̃ost^𝕊(s) - max_s∈𝒮_NE(𝕊)cost^𝕊(s).
This corresponds to the increase in the social cost relative to the prior game, when the true costs are known. The central agent would like to minimize this “VoI” loss.
For any ϵ,δ>0 and any distribution 𝒟 over fair cost sharing games with N agents and |𝒜| actions, O(|𝒜|^2H^2/ϵ^2(|𝒜|log(|𝒜|N)+log1/δ)) samples of the game drawn from 𝒟 are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best vector of subsidies over the sample σ̂^* has expected loss L_VoI that is at most ϵ larger than the expected loss of the best vector of subsidies over 𝒟.
§.§ No-regret when subsidizing in an online sequence of games
In the online setting, we receive a sequence of games at times (rounds) t=1,…, T. In each round t, the central agent must set a value of the (say uniform) subsidy σ_t, with potentially some feedback on previous rounds but no knowledge of the game parameters (costs/priors) of the current or future rounds. This is more pessimistic (but potentially also more realistic) than the distributional setting above. In particular, the sequence of games may be adversarially picked. The performance of the algorithm is measured by the difference in the cumulative loss for the selected subsidy values and the cumulative loss of the best fixed value of subsidy in hindsight, also known as regret (R_T).
R_T:= ∑_t=1^TL_t(σ_t)-min_σ∈[0,H]∑_t=1^TL_t(σ), where L_t denotes the loss L_prior for the game presented in round t.
“No-regret” corresponds to R_T being sublinear in T, and the average regret R_T/T approaches zero for large T in this case. We will impose a mild assumption on the repair costs C_i to obtain good results in the online setting. We will assume that the costs are not known exactly, but come from some smooth distribution. Formally,
We assume that the probability distributions generating the costs have κ-bounded probability density, i.e. max_x f_i(x)≤κ for some κ∈ℝ^+, where f_i denotes the probability density function for cost C_i.
The adversary designing the sequence of games may select any bad distribution as long as it is smooth. Under this assumption, our analysis above and tools from <cit.> can be used to show that the online sequence of loss functions is dispersed <cit.>. Dispersion, informally speaking, captures how amenable a non-Lipschitz function is to online learning. As noted in <cit.>, dispersion is a sufficient condition for learning piecewise Lipschitz functions online, even in changing environments. A formal definition is included below.
The sequence of random loss functions L_1,…,L_T is β-dispersed for the Lipschitz constant ℓ if, for all T and for all ϵ≥ T^-β, we have that, in expectation, at most Õ(ϵ T) functions (here Õ suppresses dependence on quantities besides ϵ, T and β, as well as logarithmic terms) are not ℓ-Lipschitz for any pair of points at distance ϵ in the domain. That is, for all T and for all ϵ≥ T^-β,
𝔼[max_ρ,ρ':||ρ-ρ'||_2≤ϵ|{ t∈[T] : |L_t(ρ)-L_t(ρ')|>ℓ||ρ-ρ'||_2}|]≤Õ(ϵ T).
Under Assumption <ref>, we have the following guarantee about online learning of uniform subsidy in a sequence of games, namely one can predict good values of subsidy (with Õ(√(n/T)) average expected regret over T online rounds). We establish 1/2-dispersion of the sequence of loss functions under the above assumption and use results from <cit.> to obtain the regret bound (proof in Appendix <ref>).
Suppose Assumption <ref> holds. Let L_1,…, L_T:[0,H]→[0,(2H+1)n] denote an independent sequence of losses L_prior(σ) as a function of the uniform subsidy value σ, in an online sequence of T component maintenance games.
Then the sequence of functions is 1/2-dispersed and there is an online algorithm with O(√(nT)) expected regret.
The key idea is to observe that each loss function L_t has at most K=n2^n-1 discontinuities (as in the proof of Theorem <ref> above). Further, any interval of length ϵ has at most O(κϵT) functions that are non-Lipschitz in that interval, in expectation. This uses Assumption <ref>, and the observation that the critical values of σ^* are linear in some cost C_i. Indeed, as shown in the proof of Theorem <ref>, the critical values of subsidy are given by σ^*=C_i+𝔼_θ[ϕ(x'(0,s_-i'))]-𝔼_θ[ϕ(x'(1,s_-i'))] for some agent i and joint action s_-i'.
By <cit.> then the expected number of non-Lipschitz losses on the worst interval of length ϵ is at most Õ(Tϵ+√(Tlog(TK)))=Õ(√((n+log T)T)) for ϵ≥1/√(T).
This implies 1/2-dispersion of the sequence of loss functions in the sense of Definition <ref>.
Then Theorem 5 from <cit.> with M=1 implies the desired regret bound.
The online algorithm which achieves the above regret guarantee is the Exponential Forecaster algorithm of <cit.>, a continuous version of the well-known exponential weights update algorithm <cit.>. We can further extend the result to learning a non-uniform subsidy scheme under Assumption <ref>, with an Õ(√(nT)) regret bound, using the same online algorithm.
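The sketch below is a discretized stand-in for the Exponential Forecaster (our code, not the authors'): it keeps exponential weights over a grid of candidate subsidy values in [0,H], samples one value per round, and updates with the observed loss of every grid point (full-information feedback). The continuous algorithm instead samples from a density proportional to the exponentiated cumulative payoff, and the step size lam has to be tuned as in the regret analysis; the toy piecewise-constant losses stand in for L_prior of the sampled games.

import math
import random

def exponential_forecaster(loss_fns, H, loss_bound, lam=None, grid_size=500, seed=0):
    rng = random.Random(seed)
    T = len(loss_fns)
    lam = lam if lam is not None else math.sqrt(math.log(grid_size) / T) / loss_bound
    grid = [H * k / (grid_size - 1) for k in range(grid_size)]
    cum = [0.0] * grid_size                      # cumulative loss of each candidate subsidy
    total = 0.0
    for loss_t in loss_fns:
        weights = [math.exp(-lam * c) for c in cum]
        sigma = rng.choices(grid, weights=weights, k=1)[0]
        total += loss_t(sigma)
        cum = [c + loss_t(g) for c, g in zip(cum, grid)]
    best_fixed = min(sum(f(g) for f in loss_fns) for g in grid)
    return total - best_fixed                    # empirical regret against the best grid point

toy = [lambda s, b=0.3 + 0.4 * (t % 3): 2.0 if s < b else 1.0 for t in range(100)]
print(exponential_forecaster(toy, H=1.0, loss_bound=2.0))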
Suppose Assumption <ref> holds. Let L_1,…, L_T:[0,H]^n→[0,(2H+1)n] denote an independent sequence of losses L_prior(𝕊) as a function of the subsidy scheme 𝕊 parameterized by subsidy values {σ_i}, in an online sequence of T component maintenance games.
Then the sequence of functions is 1/2-dispersed and there is an online algorithm with O(√(nT)) expected regret.
For components connected in pure series or pure parallel games, we can further improve the above bound on the sample complexity of learning the best subsidy from game samples.
For any ϵ,δ>0 and any distribution 𝒟 over component inspection games with n agents, connected either all in parallel or all in series, O(1/ϵ^2(log n log(1/ϵ)+log(1/δ))) samples of the component inspection game drawn from 𝒟 are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best value of subsidy over the sample ŝ^* has expected loss that is at most ϵ larger than the expected loss of the best value of subsidy over 𝒟.
We first consider the all-parallel case. In this setting, an agent i could possibly prefer RE only if all the other agents choose DN, i.e. for exactly one possible s'_-i. That is, we have at most one critical value of s^* where the preferred action of agent i may change. Over n agents, we have at most n such points. The loss is piecewise constant in any fixed piece, and a similar argument as above (i.e. Lemma 2.3 of <cit.> and standard generalization guarantees) implies the pseudo-dimension is O(log n) and hence the claimed sample complexity result.
We will now consider the all series case. This case is more interesting, as the preferred action of an agent i could change for multiple possible s_-i'. However, we note that across any possible critical s^* the preferred action can only change from DN to RE as we increase the subsidy from just below s^* to just above it. Therefore, there is at most one relevant breakpoint in the loss function for any agent i. Adding up over n agents, we have at most n breakpoints and the rest of the argument is the same as above.
§ DISCUSSION
We study the problem of resource allocation for infrastructure maintenance in systems with privately owned components, and in classical cost-sharing games. The former captures the typical organization of engineering systems, computer networks, or project pipelines, and the latter corresponds to typical market systems. We identify useful metrics related to the well-studied price of anarchy, as well as the recently introduced value of information metric that a central agent may care about, and examine the challenge in optimally allocating resources to optimize these metrics.
Our work employs a data-driven approach, which has not previously been used in the literature on subsidy or taxation design, where the focus has largely been on designing approximations for worst-case game parameters. An interesting further question is to extend this idea of analyzing “typical” games to potentially obtain better domain-specific subsidy schemes in other interesting games. Our learning-based approach also allows modeling more realistic settings, where the game parameters or even the set of agents may change over time.
§ ACKNOWLEDGEMENT
We thank Chaochao Lin for helpful initial discussions, and are grateful to Avrim Blum, Hedyeh Beyhaghi, Siddharth Prasad, Keegan Harris and Rattana Pukdee for useful comments. This material is based on work supported by the National Science Foundation under grants CCF1910321, IIS 1901403, and SES 1919453.
§ APPENDIX
§ PROOFS FROM SECTION <REF>
Proposition <ref> (restated). In the two-agent series prior game (defined above and cost matrix noted in the first row of Table <ref>), the Price of Anarchy in the absence of subsidy is at least PoA≥2/p_1+p_2,
for some repair costs C_1,C_2. More generally, for n agents, PoA≥H̃/G̃^n for some repair costs C_1,…,C_n, where H̃ and G̃ are the harmonic and geometric means, respectively, of the prior probabilities p_1,…,p_n.
We set C_1=(1-p_1)p_2 and C_2=(1-p_2)p_1. Observe that DN-DN is an equilibrium since[Note that 1-p_1p_2, the probability that at least one of the two components fails, differs from (1-p_1)·(1-p_2), the probability that both fail.] 1-p_1p_2=(1-p_1)+p_1(1-p_2)≤(1-p_1)+C_2, and similarly 1-p_1p_2≤(1-p_2)+C_1. Also, RE-RE is an equilibrium since C_1=(1-p_1)p_2≤1-p_1 and C_2=(1-p_2)p_1≤1-p_2. Clearly,
PoA≥cost(DN,DN)/cost(RE,RE)=2(1-p_1p_2)/(C_1+C_2)≥2/(p_1+p_2),
where the last inequality follows from the observation
(1-p_1p_2)(p_1+p_2)=p_1+p_2-(p_1+p_2)p_1p_2≥ p_1+p_2-2p_1p_2 = C_1+C_2.
For the n agent series game, we set the costs C_i=(1-p_i)Π_j≠ ip_j. DN^n (i.e., s=0^n) is a Nash equilibrium, since
C_i+1-Π_j≠ ip_j=(1-p_i)Π_j≠ ip_j+1-Π_j≠ ip_j≥ 1-Π_jp_j.
Moreover, RE^n is also an equilibrium as C_i≤ 1-p_i for all i∈[n]. Therefore,
PoA ≥cost(DN^n)/cost(RE^n)=n(1-Π_ip_i)/∑_iC_i
=n(1-Π_ip_i)/(∑_iΠ_j≠ ip_j - nΠ_ip_i)
≥n(1-Π_ip_i)/(∑_iΠ_j≠ ip_j - (∑_iΠ_j≠ ip_j)Π_ip_i)
=n/∑_iΠ_j≠ ip_j
=n/(Π_ip_i·∑_i1/p_i),
and the claim follows by noting H̃=n/∑_i1/p_i and G̃=(Π_ip_i)^1/n.
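The proposition is easy to check numerically. The sketch below (ours, not the authors') enumerates the joint actions of the n-agent series game with the repair costs C_i=(1-p_i)Π_j≠ ip_j used in the proof, verifies that both DN^n and RE^n are equilibria, and compares the resulting price of anarchy with the bound H̃/G̃^n; the particular priors are an arbitrary illustrative choice.

import itertools
import math

def agent_costs(s, p, C):
    # expected cost of each agent in the series game: repair cost plus failure probability
    works = math.prod(p[i] for i in range(len(p)) if s[i] == 0)
    return [C[i] * s[i] + (1.0 - works) for i in range(len(p))]

def is_nash(s, p, C, tol=1e-12):
    base = agent_costs(s, p, C)
    for i in range(len(p)):
        t = list(s); t[i] = 1 - t[i]
        if agent_costs(tuple(t), p, C)[i] < base[i] - tol:
            return False
    return True

p = [0.6, 0.8, 0.7]
C = [(1 - p[i]) * math.prod(p[j] for j in range(len(p)) if j != i) for i in range(len(p))]
states = list(itertools.product([0, 1], repeat=len(p)))
social = {s: sum(agent_costs(s, p, C)) for s in states}
nash = [s for s in states if is_nash(s, p, C)]
poa = max(social[s] for s in nash) / min(social.values())
H_mean = len(p) / sum(1.0 / q for q in p)
G_mean = math.prod(p) ** (1.0 / len(p))
print(nash)                             # contains both (0, 0, 0) and (1, 1, 1)
print(poa, H_mean / G_mean ** len(p))   # PoA and the claimed lower bound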
Theorem <ref> (restated).
Consider the two-agent series component maintenance game with C_1,C_2>0 and 0<p_1,p_2<1. Let s^*={(C_1,C_2)∈ [p_1p_2,p_1]× [p_2p_1,p_2]}·min{C_1-p_1p_2, C_2-p_2p_1}, where {·} denotes the 0-1 valued indicator function. Then there exists a subsidy scheme with total subsidy s for any s>s^* such that PoA(𝕊)=1. Moreover, a total subsidy of at least s^* is necessary for any subsidy scheme that guarantees PoA(𝕊)=1.
We will characterize the set of values of C_1,C_2 for which there are multiple Nash equilibria and design subsidy schemes that achieve PoA(𝕊)=1. We consider the following cases.
Case 0: C_1<p_1p_2,C_2>p_2. In this case the only NE is RE-DN (Table <ref>). Thus, PoA =1 even in the absence of subsidy and s^*=0 in this case.
Case 1: C_1=p_1p_2,C_2>p_2. Both RE-DN and DN-DN are Nash equilibria, and
cost(RE,DN)=C_1+2p_2=p_1p_2+2p_2=2-p_1p_2-p_2<2p_1p_2=cost(DN,DN).
An arbitrarily small subsidy to agent 1 is sufficient to guarantee PoA(𝕊)=1 (therefore s^*=0 works) as DN-DN would no longer be a NE.
Case 2: C_1>p_1p_2,C_2>p_2. In this case the only NE is DN-DN. Thus, PoA =1 even in the absence of subsidy.
Case 3: C_1<p_1p_2,C_2=p_2. Both RE-DN and RE-RE are Nash equilibria, and
cost(RE,DN)=C_1+2p_2>C_1+p_2=C_1+C_2=cost(RE,RE).
An arbitrarily small subsidy to agent 2 is sufficient to guarantee PoA(𝕊)=1 (therefore s^*=0 works) as RE-DN would no longer be a NE.
Case 4: C_1<p_1p_2,C_2<p_2. In this case the only NE is RE-RE. Thus, PoA =1 even in the absence of subsidy.
Case 5: (C_1,C_2)∈ [p_1p_2,p_1]× [p_2p_1,p_2]. Both RE-RE and DN-DN are Nash equilibria, and OPT corresponds to RE-RE. A subsidy greater than C_1-p_1p_2 to agent 1, or a subsidy greater than C_2-p_2p_1 to agent 2 guarantees that the only NE is RE-RE. Further, in either case PoA(𝕊)=1 as the subsidy equals the reduction in the repair cost of the respective agent.
Further suppose a subsidy of s^*=s^*_1+s^*_2 is sufficient to ensure PoA(𝕊)=1 in this case. Now if subsidy to agent 1 s_1^*≤ C_1-p_1p_2 and subsidy to agent 2 s_2^*≤ C_2-p_2p_1. Then both DN-DN and RE-RE are NE and PoA(𝕊)>1 since the worst-case equilibrium (i.e. DN-DN) cost does not depend on the subsidy. Therefore either s_1^*> C_1-p_1p_2 or s_2^*> C_2-p_2p_1, establishing that a subsidy of at least s^* is necessary in this case to ensure PoA(𝕊)=1.
Case 6: Otherwise. By symmetry, the case is similar to one of C0 through C4 with agents 1 and 2 switched. s^*=0 and price of anarchy of 1 is achieved by no or arbitrarily small subsidy as above.
Note that s^* is non-zero only in case C5, in which case we have established both sufficiency and necessity of a total subsidy of s^* to ensure PoA(𝕊)=1.
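A direct numerical check of the threshold in Case 5 is also straightforward. The sketch below is ours; it assumes that PoA(𝕊)=1 means every Nash equilibrium of the subsidized game has unsubsidized social cost equal to the unsubsidized optimum, since the subsidy paid out is added back into the loss. It grid-searches the cheapest targeted subsidy pair for an example cost vector inside the Case 5 region and compares it with s^*=min{C_1-(1-p_1)p_2, C_2-(1-p_2)p_1}; the particular numbers are illustrative assumptions.

import itertools

def agent_costs(s, p, C, sigma=(0.0, 0.0)):
    works = (1.0 if s[0] else p[0]) * (1.0 if s[1] else p[1])
    return [(C[i] - sigma[i]) * s[i] + (1.0 - works) for i in (0, 1)]

def nash(p, C, sigma, tol=1e-12):
    eqs = []
    for s in itertools.product([0, 1], repeat=2):
        stable = True
        for i in (0, 1):
            t = list(s); t[i] = 1 - t[i]
            if agent_costs(tuple(t), p, C, sigma)[i] < agent_costs(s, p, C, sigma)[i] - tol:
                stable = False
        if stable:
            eqs.append(s)
    return eqs

def poa_is_one(p, C, sigma):
    social = lambda s: sum(agent_costs(s, p, C))             # unsubsidized social cost
    opt = min(social(s) for s in itertools.product([0, 1], repeat=2))
    return all(abs(social(s) - opt) < 1e-9 for s in nash(p, C, sigma))

p, C = (0.7, 0.8), (0.29, 0.16)   # Case 5: both DN-DN and RE-RE are equilibria without subsidy
grid = [k * 1e-3 for k in range(0, 101)]
best = min(a + b for a in grid for b in grid if poa_is_one(p, C, (a, b)))
print(best, min(C[0] - (1 - p[0]) * p[1], C[1] - (1 - p[1]) * p[0]))   # ~0.021 vs s* = 0.02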
Theorem <ref> (restated).
Consider the two-agent series component maintenance game with C_1,C_2>0 and 0<p_1,p_2<1. Define R⊂ℝ^+×ℝ^+ as the set of cost pairs that satisfy C_1≤p_1 and C_2≤p_2, and additionally C_1≤p_1p_2 or C_2≤p_2p_1, depicted in Figure <ref>. Let s^*=min_(x,y)∈ R||(C_1,C_2)-(x,y)||_1, where ||·||_1 denotes the L_1-norm. Then there exists a subsidy scheme with total subsidy s for any s>s^* such that the system functions in any NE. Moreover, a total subsidy of at least s^* is necessary for any subsidy scheme that guarantees that the system functions in any NE.
(Sufficiency of s^*). We do this by cases on the cost vector (C_1,C_2), as follows.
Case 0: (C_1,C_2) lies in the interior of R. In this case it is easy to see that PoA=1 and s^*=0. In particular, the conditions C_1< p_1, C_2< p_2 rule out DN-RE, RE-DN as candidate equilibria respectively, and since C_1< p_1p_2 or C_2< p_2p_1, DN-DN cannot be a NE either (agent 1 or agent 2 will prefer to repair).
Case 1: C_1≤p_1p_2, C_2≥p_2. In this case, a subsidy more than s^*=C_2-p_2 to agent 2 is sufficient to bring the cost vector to the interior of R.
Case 2:C_2≤p_2p_1, C_1≥p_1. Symmetric to C1.
Case 3: p_1p_2<C_1≤p_1, p_2p_1< C_2≤p_2. A subsidy of more than min_i{C_i-p_ip_3-i}, given to the agent attaining this minimum, is sufficient to bring the cost vector to the interior of R.
Case 4: Otherwise. It is straightforward to verify by direct calculation that a subsidy of s^*_1=max{C_1-p_1,0}+p_1·p_2 to agent 1 and a subsidy of s^*_2=max{C_2-p_2,0}+p_1·p_2 to agent 2 is sufficient.
The necessity argument essentially follows by updating the cost matrix with conditional subsidies (s_1,s_2) and noting that the system is guaranteed to function in any NE if the costs (C_1,C_2) are in the interior of R.
In addition to expected VoI, we can also ensure posterior conditioned VoI_i,j when component c_j is inspected is non-negative for each agent i and each posterior y_j via subsidy. Compared to expected VoI, this is a more worst-case perspective as it includes the case when the component is broken which is typically when the agents are more likely to avoid the information about the component state. The following result gives the optimal value of subsidy to ensure this.
Consider the two-agent series component inspection game with C_1,C_2>0 and 0<p_1,p_2<1. Then
(a) VoI_i,j(s_i,s^j,1)≥0 for any agents i,j, any prior NE s_i and any posterior NE s^j,1, when the inspected component j is working, except when C_3-j=p_3-j and an arbitrarily small subsidy is sufficient to ensure VoI is non-negative in this case.
(b) Define R_1⊂ℝ^+×ℝ^+ as the set of cost pairs that satisfy either C_j≤p_j and C_3-j≤p_3-jp_j, or C_j≤p_jp_3-j. Let s^*=min_(x,y)∈ R_1||(C_1,C_2)-(x,y)||_1, where ||·||_1 denotes the L_1-norm. Then there exists a subsidy scheme with total (unconditional) subsidy s for any s>s^* such that VoI_i,j(s_i,s^j,0)≥0 for any agents i,j, any prior NE s_i and any posterior NE s^j,0, when the inspected component j is broken. Moreover, a total (unconditional) subsidy of at least s^* is necessary for any subsidy scheme that guarantees that the system functions in any NE.
WLOG j=1.
(a)
If y_1=1, the only candidate posterior equilibria are DN-DN if C_2≥p_2 and DN-RE if C_2≤p_2.
If C_2> p_2, then the prior NE is either DN-DN or RE-DN. In either case both agents have a non-negative Value of Information (Table <ref>).
If C_2< p_2, then the prior NE is either DN-DN, RE-RE or DN-RE. In each case both agents can be readily verified to have a non-negative Value of Information. If C_2=p_2 and C_1≤p_2, then VoI can be negative for agent 1 if the prior equilibrium is RE-RE and the posterior equilibrium is DN-DN. But in this case a (unconditional, or conditional on inspection) subsidy of s^*_2>0 to agent 2 ensures that DN-DN is not a posterior equilibrium and both agents have non-negative VoI.
(b) For y_1=0, we consider the following cases.
Case 0: C_1≤p_1p_2,C_2>p_2. If C_1<p_1p_2, the only prior NE is RE-DN and the only posterior NE is also RE-DN. Thus, VoI =0 for each agent even in the absence of subsidy and s^*=0 in this case. If C_1=p_1p_2, DN-DN is also a prior NE but VoI is still non-negative for each agent, since the posterior costs
cost_1(RE,DN)=C_1+p_2≤p_1p_2+p_2=p_1p_2,
and
cost_2(RE,DN)=p_2<p_1p_2.
Case 1: p_1p_2< C_1≤ p_2,C_2>p_2. DN-DN is the only prior NE and the only posterior equilibrium is RE-DN (except if C_1=p_2, when DN-DN is also a posterior NE), and
cost_1(RE,DN)=C_1+p_2> p_1p_2.
A subsidy of C_1-p_1p_2 to agent 1 is sufficient to ensure VoI is non-negative for agent 1 as noted in case C0. Alternatively, a subsidy of C_2-p_1p_2 to agent 2 ensures that RE-RE is the only prior and posterior NE, and VoI is non-negative for each agent. The smaller of the two subsidies works and is necessary in this case.
If we require the subsidy to be conditional on inspection, DN-DN remains the only prior NE as prior costs are not updated by the subsidy. A subsidy of C_1-p_1p_2 to agent 1 works as above. In the alternative case, if we provide a smaller subsidy of C_2-p_2 to agent 2, the posterior NE is switched to RE-RE but the prior remains DN-DN and a
subsidy of C_1-p_1p_2 to agent 1 is still required to ensure non-negative VoI for agent 1.
Case 2: C_1>p_2,C_2>p_2. In this case the only NE is DN-DN in both prior and posterior games. The value of information is negative for both agents in the absence of subsidy as the posterior costs are 1. Subsidy schemes described in case C1 above can be verified to be optimal in this case as well.
Case 3: C_1<p_1p_2,C_2=p_2. Both RE-DN and RE-RE are prior and posterior Nash equilibria, and
cost(RE,DN)=C_1+2p_2>C_1+p_2=C_1+C_2=cost(RE,RE).
An arbitrarily small subsidy to agent 2 is sufficient to guarantee non-negative VoI as RE-DN would no longer be a NE.
Case 4: Otherwise. Value of Information for agent 1 is negative since either DN-DN or DN-RE is a posterior NE with cost 1, and there is a prior NE with smaller cost to agent 1. It may be verified that the subsidy scheme from Theorem <ref> is optimal in ensuring non-negative VoI in this case.
§ OPTIMAL SUBSIDY IN TWO AGENT PARALLEL GAME
The cost matrix for a two-agent parallel game is summarized in Table <ref>. Here we consider a slight generalization where there are two agents with reparable components connected in parallel, and there are additional components connected in parallel which are not maintained by any agent. Let p_R denote the probability that remaining components work.
§.§ Price of Anarchy
Consider the two-agent parallel component maintenance game with C_1,C_2>0 and 0<p_1,p_2<1. Let s^*={(C_1,C_2)∈ [0,p_R·p_1·p_2]^2}·|C_1-C_2|, where {·} denotes the 0-1 valued indicator function. Then there exists a subsidy scheme with total subsidy s for any s>s^* such that PoA(𝕊)=1. Moreover, a total subsidy of at least s^* is necessary for any subsidy scheme that guarantees PoA(𝕊)=1.
We will characterize the set of values of C_1,C_2 for which there are multiple Nash equilibria and design subsidy schemes that achieve PoA(𝕊)=1. For convenience set p^*:=p_R·p_1·p_2. We consider the following cases.
Case 0: C_1<p^*,C_2>p^*. In this case the only NE is RE-DN (Table <ref>). Thus, PoA =1 even in the absence of subsidy and s^*=0 in this case.
Case 1: C_1=p^*,C_2>p^*. Both RE-DN and DN-DN are Nash equilibria, and
cost(RE,DN)=C_1=p^*=1/2cost(DN,DN).
An arbitrarily small subsidy to agent 1 is sufficient to guarantee PoA(𝕊)=1 (therefore s^*=0 works) as DN-DN would no longer be a NE.
Case 2: C_1>p^*,C_2>p^*. In this case the only NE is DN-DN. Thus, PoA =1 even in the absence of subsidy.
Case 3: C_1<p^*,C_2=p^*. Both RE-DN and DN-RE are Nash equilibria, and
cost(RE,DN)=C_1+2p_2>C_1+p_2=C_1+C_2=cost(RE,RE).
An arbitrarily small subsidy to agent 2 is sufficient to guarantee PoA(𝕊)=1 (therefore s^*=0 works) as RE-DN would no longer be a NE.
Case 4: C_1<p_1p_2,C_2<p_2. In this case the only NE is RE-RE. Thus, PoA =1 even in the absence of subsidy.
Case 5: (C_1,C_2)∈ [p_1p_2,p_1]× [p_2p_1,p_2]. Both RE-RE and DN-DN are Nash equilibria, and OPT corresponds to RE-RE. A subsidy greater than C_1-p_1p_2 to agent 1, or a subsidy greater than C_2-p_2p_1 to agent 2 guarantees that the only NE is RE-RE. Further, in either case PoA(𝕊)=1 as the subsidy equals the reduction in the repair cost of the respective agent.
Further suppose a subsidy of s^*=s^*_1+s^*_2 is sufficient to ensure PoA(𝕊)=1 in this case. Now if subsidy to agent 1 s_1^*≤ C_1-p_1p_2 and subsidy to agent 2 s_2^*≤ C_2-p_2p_1. Then both DN-DN and RE-RE are NE and PoA(𝕊)>1 since the worst-case equilibrium (i.e. DN-DN) cost does not depend on the subsidy. Therefore either s_1^*> C_1-p_1p_2 or s_2^*> C_2-p_2p_1, establishing that a subsidy of at least s^* is necessary in this case to ensure PoA(𝕊)=1.
Case 6: Otherwise. By symmetry, the case is similar to one of C0 through C4 with agents 1 and 2 switched. s^*=0 and price of anarchy of 1 is achieved by no or arbitrarily small subsidy as above.
Note that s^* is non-zero only in case C5, in which case we have established both sufficiency and necessity of a total subsidy of s^* to ensure PoA(𝕊)=1.
§.§ Guaranteeing system functions in any NE
Consider the two-agent parallel component maintenance game such that C_1,C_2>0 and 0<p_1,p_2<1.
Let s^*={(C_1,C_2)∈ [p_R·p_1·p_2,∞)^2}·(min{C_1,C_2}-p_R·p_1·p_2), where {·} denotes the 0-1 valued indicator function. Then there exists a subsidy scheme with total subsidy s for any s>s^* such that the system functions in any NE. Moreover, a total subsidy of at least s^* is necessary for any subsidy scheme that guarantees that the system functions in any NE.
Note that the system does not function only in DN-DN, for which to be a NE we must have (C_1,C_2)∈ [p_R·p_1·p_2,∞)^2. Now a subsidy s^*_i>C_i-p_R·p_1·p_2 to agent i guarantees that agent i chooses repair, establishing the first part of the theorem.
Conversely, suppose (C_1,C_2)∈ [p_R·p_1·p_2,∞)^2 and a subsidy scheme with total subsidy s^* guarantees that the system functions in any NE. But if s^*_i≤ C_i-p_R·p_1·p_2 for i∈{1,2}, then DN-DN is still an NE, and the system does not function for this NE. By contradiction, the total subsidy must be at least s^* as claimed.
§.§ Value of Information
For this case as well,
the Value of Information may be negative if a local equilibrium is selected for some parameter settings. In the following we will show a dichotomy—if the repair costs are small then a central agent using subsidy must subsidize the full costs of repair to avoid negative Value of Information of component inspection for the agents. Otherwise, the central agent can partially subsidize to avoid negative VoI.
Suppose C_1,C_2∉{0,p_R·p_2,p_R·p_1·p_2}. A subsidy scheme with s_1^*>max{C_1-p_R·p_1·p_2,min{C_1,p_R·p_2}} and s_2^*>max{C_2-p_R·p_1·p_2,min{C_2,p_R·p_2}}, conditional on inspection, is sufficient to avoid negative VoI for both agents when component 1 is inspected.
Note that RE-RE cannot be an equilibrium since C_1,C_2>0. Also negative VoI is not possible when the posterior is y_1=1 as the only Nash equilibrium is DN-DN with zero cost for each agent.
First suppose min{C_1,C_2}≥p_R·p_1·p_2. This implies DN-DN is the prior equilibrium. In this case, the conditional subsidies of s_i^*=C_i-p_R·p_1·p_2 are sufficient to ensure that posterior repair costs for each agent is less than p_R·p_1·p_2≤p_R·p_2. Thus, DN-DN cannot be a posterior NE for y_1=0. For DN-RE and RE-DN, the subsidy ensures that VoI is non-negative for both agents.
Otherwise, we have three cases to consider with respect to the relative values of the repair costs and failure probabilities:
Case 0: C_1< p_R·p_1·p_2≤ C_2. In this case, RE-DN is the prior equilibrium.
Moreover, since C_1< p_R·p_1·p_2≤p_R·p_2, agent 1 would prefer action RE over DN and so DN-DN cannot be a posterior NE for y_1=0. If RE-DN is the posterior NE, then by Table <ref> clearly VoI is non-negative for both agents. So it only remains to consider the posterior equilibrium DN-RE. If C_2>p_R·p_2, DN-RE cannot be an equilibrium, and we are done. If C_2≤p_R·p_2, then the subsidy s_2^*>min{C_2, p_R·p_2} ensures that VoI of agent 2 is non-negative even if DN-RE is the posterior equilibrium.
Case 1: C_2< p_R·p_1·p_2≤ C_1. The argument for this case is symmetric to the previous case, with DN-RE as the only possible prior equilibrium.
Case 2: max{C_1,C_2}< p_R·p_1·p_2. In this case DN-RE as well as RE-DN can be prior Nash equilibria. Since max{C_1,C_2}< p_R·p_1·p_2≤p_R·p_2, DN-RE and RE-DN are the only candidate posterior NE. The setting of subsidies s_i^*>p_R·p_2≥p_R·p_1·p_2 > max{C_1,C_2} ensures that VoI is non-negative for both agents in this case.
§ FIXED PARAMETER INTRACTABILITY OF DESIGNING SUBSIDY TO MINIMIZE THE PRICE OF ANARCHY
We strengthen our hardness results from Section <ref> here, by showing that it is unlikely that the optimal subsidy design problem is even fixed parameter tractable with the subsidy budget as the fixed parameter. Formally, a problem is fixed parameter tractable (FPT) with respect to parameter k∈^+ if there exists an algorithm running in f(k)× n^O(1) time, where f is a function of k which is independent of the input size n. The W hierarchy is a sequence of computational complexity classes which, roughly speaking, indicate fixed parameter intractability in an increasing order of conjectured hardness. A parameterized problem L is in the class W[i], if every instance
(x,k) can be transformed in FPT time to a combinatorial circuit that has weft at most i, such that
(x,k)∈ L if and only if there is a satisfying assignment to the inputs that assigns true to exactly k inputs. The weft is the largest number of logical units with fan-in greater than two on any path from an input to the output. Note that W[i]⊆W[j] for each i≤ j ∈_≥ 0, and the inclusion is conjectured to be strict. We refer the reader to standard texts for further details on parameterized complexity including the W hierarchy <cit.>.
Our hardness result involves reduction from the Dominating-Set problem, which is known to be W[2]-complete. Formally, the problem may be stated as follows.
Dominating-Set: Does a given (undirected, unweighted) graph =(V,E) admit a dominating set[A subset X of vertices is said to be a dominating set if for every vertex v∈ V∖ X there is an edge (x,v)∈ E for some x∈ X.] of size k?
We will show W[2]-hardness of the subsidy design problem CMG-PoAS for optimizing the Price of Anarchy, stated in Section <ref>.
CMG-PoAS is W[2]-Hard.
We will reduce the Dominating-Set problem to CMG-PoAS. Given an instance 𝒢,k of the Dominating-Set problem, we create a corresponding CMG-PoAS problem as follows. Introduce an agent i for every vertex i∈ V and consider the formula ϕ(x)=⋀_i∈ V(x_i∨⋁_(i,j)∈ E x_j), where the clauses correspond to each vertex and consist of the component state x_i for that vertex and x_j for all agents corresponding to the neighbors of node i. Set the probability distribution θ to be the constant distribution with the entire probability mass on 0^n (i.e. all the components are guaranteed to fail without repair). Set repair cost C_i=1 for all components i. Then the cost function for agent i for joint action s=(s_i,s_-i) is given by
l_i(s_i,s_-i,θ)=𝔼_x∼θ[cost_i]=C_is_i+P_ϕ(θ)=1· s_i+1-ϕ(x')=s_i+1-ϕ(s),
where x_i'=max{0,s_i}=s_i denotes the component state after agent i takes action s_i.
We proceed to characterize the set of Nash equilibria of this game. Note that s=0^n is a NE for this game, since any repair action by any agent increases the agent's cost by 1 if the repair does not change the state ϕ of the system, and by 0 otherwise (since ϕ is monotonic, ϕ(s) can only change from 0 to 1 on repair). This also implies that no agent has any incentive to switch from DN to RE. Further note that if the components corresponding to a dominating set X have their state x_i=1, then ϕ(x)=1, as each clause of ϕ will have a positive literal corresponding to some node in X (either the first literal is in X or the node in V∖ X has some neighbor in X) by definition of a dominating set. Moreover, if the set of states with x_i=1 does not correspond to a dominating set, then there is some vertex v∈ V such that x_v=0 and x_u=0 for all neighbors u of v in 𝒢, and ϕ(x)=0 in this case as the clause corresponding to v is not satisfied. We remark that the rest of the proof is very similar to the proof of Theorem <ref>, and is included below for completeness.
We will now show that the remaining NEs for the game correspond to minimal dominating sets of .
Let K⊆[n] be a set of agents for which the corresponding nodes in 𝒢 constitute a minimal dominating set. Let s_K'=(s_1,…,s_n), where s_i=[i∈ K'] and [·] is the 0-1 valued indicator function, denote the joint action where agents in set K'⊆[n] choose repair. Clearly, ϕ(s_K)=1. If i∈ K, agent i does not reduce cost by switching from RE to DN, since K is a minimal dominating set and therefore not repairing component i causes the system to fail. As noted above, switching from DN to RE never improves an agent's cost in this game.
Further, if K is the set of agents corresponding to a non-minimal dominating set, then there must be some agent that can reduce its cost by switching from RE to DN with system still functioning.
Finally, if K'≠∅ is a set of agents that falls one or more agents short of a dominating set, then any agent in K' can reduce its cost by switching from RE to DN. This establishes that, besides 0^n, any other NE must correspond to a minimal dominating set. In particular, this implies that OPT=k^*, where k^* is the size of the smallest dominating set K^* of 𝒢, and the corresponding NE is s_K^*.
To complete the reduction, we consider the game defined above with subsidy budget n^*=k. We will show a bijection between the YES and NO instances of the two decision problems to complete the proof.
If there exists a dominating set of size k, then the smallest dominating set K^* has size k^*≤ k. We design a subsidy scheme with subsidy allocated to k^*≤ n^* agents, allocating a subsidy of 1+1/(2n) for repair (the total subsidy is no more than k^*+1/2) to all agents in the minimum dominating set K^*, and a subsidy of 0 otherwise.
As argued above, the only candidate NE without subsidy are 0^n and s_K corresponding to some minimal dominating set K. The social cost for 0^n is n (except the trivial case k^*=0) and that for s_K^* is k^* which is smaller. If we provide subsidy in our scheme to the agents in K^* then 0^n is no longer an NE. In particular, every subsidized agent in K^* would now always choose repair at subsidized cost -1/2n over doing nothing (even when the system stays broken after the repair). Thus, the social cost plus subsidy is k^*(-1/2n)+k^*(1+1/2n)=k^*, and the price of anarchy for the subsidy scheme is 1 (any agent outside of K^* will prefer to do nothing to reduce their cost).
On the other hand, suppose that 𝒢 has no dominating set of size k. Any minimal dominating set of 𝒢 therefore has size at least k+1. Suppose 𝕊 is a subsidy scheme with subsidy allocated to at most k agents. We will show that PoA(𝕊)>1. Indeed, let K be an arbitrary minimal dominating set of 𝒢. By the pigeonhole principle, at least one agent in K does not receive subsidy. Let K' denote the (possibly empty) set of agents that receive subsidy greater than 1. As argued above, these agents will always prefer the repair action. Thus, s_K' is a Nash equilibrium (agents without subsidy never have incentive to switch from DN to RE in this game) in the subsidized game. Note that ϕ(s_K')=0, since K' contains at most k agents and hence is not a dominating set, so some clause of ϕ is unsatisfied.
Now OPT≤cost(s_K)=|K|<n. Therefore, PoA(𝕊)>1, as the total cost plus subsidy is at least n for s_K'.
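For small graphs, the reduction can be checked end-to-end by enumeration. The sketch below (ours, not the authors') builds ϕ from a graph, uses the point-mass prior on 0^n so that agent i's cost is simply s_i+1-ϕ(s), and confirms on a 4-vertex path (an arbitrary example) that the pure equilibria are 0^n together with the minimal dominating sets, and that subsidising a smallest dominating set with 1+1/(2n) per agent leaves only the intended equilibrium.

import itertools

def phi_from_graph(n, edges):
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    # conjunction over vertices of (x_i OR some repaired neighbour), with x' = s under the 0^n prior
    return lambda s: int(all(s[i] or any(s[j] for j in adj[i]) for i in range(n)))

def pure_nash(n, phi, subsidy=None):
    sub = subsidy or [0.0] * n
    cost = lambda i, s: (1.0 - sub[i]) * s[i] + 1 - phi(s)   # C_i = 1 for every component
    return [s for s in itertools.product([0, 1], repeat=n)
            if all(cost(i, s) <= cost(i, s[:i] + (1 - s[i],) + s[i + 1:]) for i in range(n))]

phi = phi_from_graph(4, [(0, 1), (1, 2), (2, 3)])    # path on 4 vertices
print(pure_nash(4, phi))                             # 0^n plus the minimal dominating sets
print(pure_nash(4, phi, subsidy=[0, 1 + 1 / 8, 1 + 1 / 8, 0]))   # only agents {1, 2} repair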
We remark that a similar stronger hardness result can be proved for the other studied objectives for subsidy design as well, i.e. the decision problems CMG-System and CIG-VoI are also W[2]-hard, by adapting the proofs of Theorem <ref> and <ref>
along the lines of the above.
§ ADDITIONAL PROOFS FROM SECTION <REF>
Theorem <ref> (restated).
CMG-System is NP-Hard.
We will reduce the Vertex-Cover problem to CMG-System. Given an instance 𝒢,k of the Vertex Cover problem, we create a corresponding CMG-System problem as follows. Introduce an agent i for every vertex i∈ V and consider the (2-CNF) formula ϕ(x)=⋀_(i,j)∈ E(x_i∨ x_j), where the clauses consist of states x_i,x_j for all pairs i,j of agents/components corresponding to edges in E. Set the probability distribution θ to be the constant distribution with the entire probability mass on 0^n (i.e. all the components are guaranteed to fail without repair). Set repair cost C_i=1-ϵ for 0<ϵ<1/n for all components i.
Observe that
l_i(s_i,s_-i,θ)=𝔼_x∼θ[cost_i]=C_is_i+P_ϕ(θ)=(1-ϵ)s_i+1-ϕ(x')=(1-ϵ)s_i+1-ϕ(s),
where x_i'=max{0,s_i}=s_i denotes the state of component i after agent i takes action s_i. Note that WLOG s=0^n is a NE for this game, since any repair action by any agent increases the agent's cost by (1-ϵ) if the repair does not change the state ϕ of the system, and 0 otherwise (since ϕ is monotonic, it can only change from 0 to 1). We will now show that the remaining NEs for the game correspond to vertex covers of .
Let K⊆[n] be a set of agents for which the corresponding nodes in 𝒢 constitute a minimal vertex cover. Let s_K'=(s_1,…,s_n), where s_i=[i∈ K'], [·] is the 0-1 valued indicator function, and K'⊂ K with |K'|=|K|-1. Similarly, s_K:=(s_1,…,s_n) where s_i=[i∈ K]. Clearly, ϕ(s_K')=0 and ϕ(s_K)=1. If i∈ K, agent i does not reduce cost by switching from RE to DN, since K is a minimal cover and therefore not repairing component i causes the system to fail. If j∉ K, agent j does not reduce cost by switching from DN to RE as the system was already functioning. Further, if K is the set of agents corresponding to a non-minimal vertex cover, then there must be some agent that can reduce its cost by switching from RE to DN.
Finally, if K' ∅ is the set of agents with one or more agents short of a vertex cover, then any agent in K' can reduce its cost by switching from RE to DN. This establishes that besides 0^n, only possible NE must correspond to a minimal vertex cover.
To complete the reduction, we consider the game defined above and subsidy budget s^*=k-1. If there exists a vertex cover of size k, then a minimal cover K' has size k'≤ k. We design a subsidy scheme with total subsidy s'=k'-1≤ s^*, allocating subsidy of 1 for repair to all but one agent in the minimal cover K' and subsidy of 0 otherwise. Clearly the only agent not given subsidy will choose repair since cost of repair 1-ϵ is more than compensated by the change due to system state. As argued above, the only candidate NE are 0^n and s_K corresponding to some minimal vertex cover K. Without the subsidy, the social cost for 0^n is n (except the trivial case k=0) and that for s_K' is k'(1-ϵ) which is smaller. If we provide subsidy in our scheme to the agents in K' except one then 0^n is no longer an NE. In particular, every subsidized agent in K' would now choose repair at cost 1-ϵ over doing nothing (even when the system stays broken after the repair) and the remaining agent in K' will choose repair if the system is broken. Thus, the system functions in all NEs.
Conversely, suppose there exists a subsidy scheme with total subsidy at most s^*=k-1, such that the system functions in any NE. Then either the system functions in 0^n and there is a 0-cover for graph , or the NE corresponds to a minimal vertex-cover K' of size k' as the repaired components (in the subsidized game). In the latter case, we seek to show k'≤ k to complete the proof. Since the system is not assumed to function for s=s_κ for repair actions by agents in any κ⊂ K' with κ=k'-1, we
need to provide subsidy at least 1-ϵ to all but one agent in K'. That is, k-1=s^*≥ (1-ϵ)(k'-1)>k'-1-(k'-1)/n (since ϵ<1/n), or k≥ k' since both k,k' are integers.
Theorem <ref> (restated).
CIG-VoI is NP-Hard.
We will reduce Vertex-Cover to CIG-VoI. Recall that Vertex-Cover is the following decision problem—given a graph =(V,E) and integer k, does there exist a vertex cover of size k?
In contrast to proof of Theorem <ref>, we will need to set a slightly higher subsidy and carefully adapt the argument to the value of information computation.
We will create an instance of the CIG-VoI problem with n=|V|+1 agents, an agent each for vertices in and an additional agent j=|V|+1. The construction of the instance and several arguments are similar to the proof of Theorem <ref>. The key difference is that we have an additional agent j that does not correspond to a vertex in . We will consider the inspection of the component c_j corresponding to this agent.
Consider the (2-CNF) formula ϕ(x)=⋀_(u,v)∈ E(x_u∨ x_v), where the clauses consist of states x_u,x_v for all pairs u,v of agents/components corresponding to edges in E. Set the probability distribution θ to be the constant distribution with the entire probability mass on 0^n (i.e. all the components are guaranteed to fail without repair). Set repair cost C_i=1-ϵ for 0<ϵ<1/n for all components i∈[|V|] and C_j=1.
Therefore,
l_i(s_i,s_-i,θ)=(1-ϵ)s_i+1-ϕ(s) for i∈[|V|] and l_j(s_j,s_-j,θ)=2-ϕ(s). Note that WLOG s=0^n is a NE for this game, since any repair action by any agent increases the agent's cost by (1-ϵ) if the repair does not change the state ϕ of the system, and 0 otherwise (since ϕ is monotonic, it can only change from 0 to 1). As shown in the proof of Theorem <ref>, the remaining NEs for the game correspond to minimal vertex covers of . Moreover, since ϕ(s) does not depend on s_j by definition, agent j will always prefer action DN for any s_-j. Let s_K:=(s_1,…,s_n) where s_i=[i∈ K] for any K⊆ V.
Notice that the prior and posterior games (for inspection of c_j) have identical cost matrices and equilibria for this component inspection game. To complete the reduction, we consider the game defined above and subsidy budget s^*=k. Suppose there exists a vertex cover of of size k, then there exists a minimal vertex cover, say K' of size k'≤ k. We design a subsidy scheme with total subsidy s'=k'≤ s^*, allocating subsidy of 1 for repair to exactly the agents in K'. Clearly, all subsidized agents will always choose repair. We claim that the only NE after subsidy is s_K'. Indeed, by the above observation, any NE must be s_K for some K⊇ K'. But if K K', then any agent in K∖ K' will choose to do nothing as the system would function without their repair action. Since there is exactly one NE in prior and posterior games, Value of Information is exactly zero for all agents.
Conversely, if there is no vertex cover of size k, then we show that no subsidy scheme with s^*≤ k may guarantee that no agent has negative value of information when a single component j is inspected. In this case any vertex cover K' has |K'|>k. We consider two cases:
Case 0: |K'|>k+1. Observe that if the subsidy provided to an agent is less than the repair cost 1-ϵ, then the agent will prefer to do nothing, except when repairing their component (given other players actions) changes the system state from 0 to 1. However, with a budget of s^*=k, the maximum number of agents that can receive a subsidy of at least 1-ϵ is at most k/1-ϵ<k+1, since ϵ<1/n and k<n WLOG. Thus, at least two agents are without subsidy at least 1-ϵ in K', and these agents will prefer to do nothing if only the agents K^*={i∈[|V|]| s^*_i>1-ϵ} with sufficient subsidy choose repair. Observe that both s_K' and s_K^* are Nash equilibria in the subsidized game. If s_K' is chosen as the prior equilibrium and s_K^* a posterior equilibrium, then the value of information for agents in K'∖ K^* is (1-ϵ)-1<0 since the system does not work in s_K^*.
Case 1: |K'|=k+1. In this case, the only new possibility is that at least 1-ϵ subsidy is provided to all but one agent (say k') in K', so that the remaining agent will choose repair. Without loss of generality, we assume k+1<n, and that K' is a minimal vertex cover. Let v_k' denote the vertex corresponding to agent k' in 𝒢, and let E' denote the set of edges incident on vertices V'⊆ V∖ K' with one end at v_k'. E' is non-empty, as otherwise K'∖{v_k'} would constitute a vertex cover for 𝒢, contradicting minimality of K'. Observe that K_1=K'∖{v_k'}∪ V' is a vertex cover. Let K_2 denote a minimal vertex cover which is a subset of K_1. Now both s_K_2 and s_K' are NEs in the subsidized game. If the former is set as the prior equilibrium, and the latter a posterior equilibrium, then the value of information is negative (it equals 0-(1-ϵ)=ϵ-1) for agent k'.
Thus in either case, some agent has a negative value of information when the subsidy budget is k. This completes the reduction.
Theorem <ref> (restated).
CSG-VoI is NP-Hard.
Recall that the minimum set cover problem instance (U,𝒮,k) is given as follows.
Min-Set-Cover: Given a finite set U of size n and a collection 𝒮⊆2^U of subsets of U, does there exist a subcollection S⊆𝒮 of size k<n that covers U, i.e. ∪_S_i∈ S S_i=U?
Consider the cost-sharing game G with n+1 agents that correspond to elements of via a bijection ζ:→[n] plus additional agent n+1, set of actions =⊎⊎⊎{{n+1}} with ={{1},…,{n}} and ={{1,…,n}} being two distinct collections of actions available uniquely to each agent and {n+1} corresponds to a unique action a_n+1 available to agent n+1. Function f:S↦{ζ(s)| s∈ S} assigns action S to agents corresponding its elements, and cost function c is given by
c(S)=
1 if S∈⊎,
n-ϵ if S∈,
∞ if S={n+1},
for 0<ϵ<1.
We set s^*=k.
Given a YES instance of Min-Set-Cover, we show that the above construction yields a YES instance of CSG-VoI. Let k^* denote the size of the smallest set cover of (,). In the YES instance this means k^*≤ k, and we provide subsidy of value 1 to all actions corresponding the sets in the smallest set cover. The total subsidy used is k^*≤ k=s^*. Any assignment of the actions to agents consistent with the set cover is a Nash Equilibrium with social cost 0 and, any other state is not an NE as in the proof of Theorem <ref>. Revealing the cost of action a_n+1 does not impact the choices of agents {1,…,n} as the action is not available to them, and agent n+1 either since the only available action is a_n+1. Thus, the cost of any agent in [n] is 0 in any prior or posterior equilibria and the value of information is zero. The cost of agent n+1 can only decrease when it is revealed, and therefore VoI is non-negative for agent n+1 as well.
Conversely, consider a NO instance of Min-Set-Cover. The smallest set cover of (,) has size k^*>k. Consider any subsidy scheme assigning subsidy of value 1 to at most k actions. All the agents that have at least one of their actions subsidized will select a subsidized action in any NE. Since the smallest set cover has size greater than k, there exists at least one agent with no subsidized action. Let A⊂[n] denote the set of these agents. We will show the existence of two Nash equilibria with different costs for some agent in A, implying that VoI<0 for that agent by selecting the higher cost NE as the posterior equilibrium and the lower cost NE for the prior. Consider states s_ and s_ for which agents in A are assigned the corresponding actions from and respectively, and agents in [n]∖ A are assigned one of the subsidized actions in either case. We have for any agent i∈ A, cost_i(s_)=1 but cost_i(s_)=n-ϵ/|A|cost_i(s_).
§ ADDITIONAL PROOFS FROM SECTION <REF>
We include below proof details for missing proofs for our sample complexity and online learning results.
§.§ Sample complexity results
Theorem <ref> (restated).
For any ϵ,δ>0 and any distribution over component maintenance games with n agents, O(n^2H^2/ϵ^2(n^2+log1/δ)) samples of the component maintenance game drawn from are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best vector of subsidies over the sample σ̂^̂*̂ has expected loss L_prior that is at most ϵ larger than the expected loss of the best vector of subsidies over .
Consider any fixed game G. Given any joint action s=(s_i,s_-i), an agent i's decision to switch their action from s_i to s̄_i:=1-s_i is determined by the inequality (C_i-σ^*_i)s_i+1-Φ(s_i,s_-i')≤ (C_i-σ^*_i)s̄_i+1-Φ(s̄_i,s_-i'), where Φ(s):=𝔼_θ[ϕ(x'(s))], which is linear in σ_i^*, the subsidy provided to agent i. Thus, for each agent i, we have at most 2^n-1 axis-parallel hyperplanes in the parameter space ℝ^n, or a total of n2^n-1 hyperplanes overall. Moreover, the loss function as a function of the subsidy parameters is piecewise constant in any fixed piece. Therefore the loss function class is (n,n2^n-1)-delineable in the sense of <cit.>, that is the subsidy parameter space is Euclidean in n dimensions and is partitioned by at most n2^n-1 hyperplanes into regions where the loss is linear (in this case constant) in the parameters.
By using a general result from <cit.>, which states that a (d,t)-delineable function class has pseudo-dimension O(dlog(dt)), the above structural argument implies that the pseudo-dimension of the loss function class parameterized by the subsidy vector is at most O(nlog(n^22^n-1))=O(n^2), and the sample complexity result follows <cit.>.
Theorem <ref> (restated).
Suppose subs_i(s), C_i≤ H for each i∈[n]. For any ϵ,δ>0 and any distribution over component maintenance games with n agents, O(n^2H^2/ϵ^2(n^2+log1/δ)) samples of the game drawn from are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best vector of subsidies over the sample σ̂^̂*̂ has expected loss L̃_prior that is at most ϵ larger than the expected loss L̃_prior of the best vector of subsidies over .
Our proof of Theorem <ref> above establishes a piecewise-constant structure for the L_prior loss given any fixed game G, together with a bound on the number of hyperplanes in the subsidy parameter space that demarcate the pieces. On a fixed side of each of these n·2^{n-1} hyperplanes, each agent i has a fixed preferred action given any s_-i, and therefore the set _NE() of Nash equilibria is fixed over any piece. Indeed, any joint action s is an NE given the subsidy scheme if and only if s_i is preferred given s_-i for all agents i. Thus, L̃_prior is also piecewise constant over the pieces induced by the same hyperplanes. Therefore, the same upper bound on the sample complexity can be obtained following the arguments in the proof of Theorem <ref>.
Learning conditional subsidies. We will now obtain a sample complexity bound for non-uniform subsidy schemes in component inspection games, where the central agent provides subsidy only in the posterior games. Let 𝕊 denote the subsidy scheme. Let _NE^0(𝕊) (resp. _NE^1(𝕊)) denote the subset of states in S corresponding to Nash equilibria when the cost of agent i is the subsidized cost cost_i^𝕊,0 (resp. cost_i^𝕊,1) for posterior y_j=0 (resp. y_j=1). For the component inspection game of component c_1 (w.l.o.g.), define
L_posterior(𝕊):= p_1 L_posterior^1(𝕊)+(1-p_1) L_posterior^0(𝕊),
where L_posterior^i(𝕊):=max_s∈_NE^i(𝕊) cost^𝕊,i(s)+subs^i(s). We assume that subs_i^j(s)≤ H and C_i≤ H for each i∈[n], j∈{0,1}, so that L_posterior(𝕊)≤ (2H+1)n. In this case too, we are able to give a polynomial sample complexity bound on the number of games needed to learn a good value of subsidy with high probability over the draw of game samples coming from some fixed but unknown distribution.
For any ϵ,δ>0 and any distribution over component inspection games with n agents, O(n^2H^2/ϵ^2(n^2+log1/δ)) samples of the component inspection game drawn from are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best vector of subsidies over the sample σ̂^* has expected loss L_posterior that is at most ϵ larger than the expected loss of the best vector of subsidies over .
Consider any fixed game G. Given any joint action s=(s_i,s_-i), an agent i's decision for switching their action from s_i to s̄_i:=1-s_i in the posterior game y_1=y is determined by the inequality (C_i-s^y_i)s_i+1-Φ(s_i,s_-i')≤ (C_i-s^y_i)s̄_i+1-Φ(s̄_i,s_-i') (with Φ(s):=𝔼_θ^1,y[ϕ('(s))]), which is linear in s_i^y, the subsidy provided to agent i conditional on y_1=y. Thus, for each agent i, we have at most 2·2^{n-1} axis-parallel hyperplanes in the parameter space ℝ^{2n}, or a total of n·2^n hyperplanes overall. Moreover, the loss function is constant as a function of the parameters within any fixed piece. Therefore the loss function class is (2n,n·2^n)-delineable in the sense of <cit.>. The rest of the argument is similar to the proof of Theorem <ref>, differing only in some multiplicative constants.
Theorem <ref> (restated). For any ϵ,δ>0 and any distribution over fair cost sharing games with N agents and || actions, O(||^2H^2/ϵ^2(||log ||N+log1/δ)) samples of the game drawn from are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best vector of subsidies over the sample σ̂^* has expected loss L_prior that is at most ϵ larger than the expected loss of the best vector of subsidies over .
Consider any fixed game G. Given any joint action s=(s_i,s_-i), an agent i's decision for switching their action from s_i to an alternative action s̄_i≠ s_i is given as follows.
Let k=∑_j=1^N[s_j=s_i] and k̄=∑_j=1^N[s_j=s̄_i]. Clearly, k≥ 1 and k̄≥ 0. Agent i's decision to switch from action s_i to s̄_i is governed by the inequality
(c(s_i)-c^(s_i))/k ≶ (c(s̄_i)-c^(s̄_i))/(k̄+1).
Thus, across all agents, we get at most ||^2N^2 distinct hyperplanes in the subsidy parameter space corresponding to c^, one for each choice of s_i, s̄_i, k, and k̄.
Moreover, the loss function as a function of the subsidy parameters is piecewise constant, since in any fixed piece induced by the above hyperplanes the set of NEs is fixed and the reduction in social cost cost^(s) is exactly compensated by the increase in subsidy subs(s) as the subsidy is varied within the piece. Therefore the loss function class is (||,||^2N^2)-delineable in the sense of <cit.>, that is the subsidy parameter space is Euclidean in || dimensions and is partitioned by at most ||^2N^2 hyperplanes into regions where the loss is linear (in this case constant) in the parameters.
By using a general result from <cit.> which states that a (d,t)-delineable function class has pseudo-dimension O(dlog(dt)), the above structural argument implies that the pseudo-dimension of the loss function class parameterized by the subsidy value is at most O(||log(||^3N^2))=O(||log(||N)) and the sample complexity result follows <cit.>.
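As a concrete illustration of the switching rule used in the proof above, the following sketch (Python; the dictionaries and variable names are ours, not from the original analysis, and the subsidy on an action is written as c_sub[a]) checks whether an agent strictly prefers to deviate from its current action to an alternative, given the usage counts k and k̄.

```python
def prefers_switch(c, c_sub, a_cur, a_alt, k_cur, k_alt):
    """Return True if an agent strictly prefers to deviate from a_cur to a_alt.

    c[a]     : unsubsidized cost of action a
    c_sub[a] : subsidy assigned to action a, so the cost shared by its users is c[a] - c_sub[a]
    k_cur    : number of agents currently using a_cur (including this agent), k_cur >= 1
    k_alt    : number of agents currently using a_alt (excluding this agent), k_alt >= 0
    """
    share_cur = (c[a_cur] - c_sub[a_cur]) / k_cur
    share_alt = (c[a_alt] - c_sub[a_alt]) / (k_alt + 1)   # the deviator joins a_alt's users
    return share_alt < share_cur
```

Varying the subsidy values moves the boundary at which this comparison flips, which is exactly the family of hyperplanes counted in the proof.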
Theorem <ref> (restated).
For any ϵ,δ>0 and any distribution over fair cost sharing games with N agents and || actions, O(||^2H^2/ϵ^2(||log ||N+log1/δ)) samples of the game drawn from are sufficient to ensure that with probability at least 1-δ over the draw of the samples, the best vector of subsidies over the sample σ̂^* has expected loss L_VoI that is at most ϵ larger than the expected loss of the best vector of subsidies over .
The key arguments are similar to the proof of Theorem <ref>. We can show that the loss function class as a function of the subsidy is (||,2||^2N^2)-delineable and the result follows.
§.§ Online subsidy design
A key tool is the following theorem due to <cit.>. We present a simplified version (setting M=1 in their general result) as it will suffice for us.
Let ⊂ℝ^d be contained in a ball of radius R and l_1, …, l_T: →[0,H] be piecewise ℓ-Lipschitz functions that are 1/2-dispersed. Then there is an online learning algorithm with regret bound Õ(√(dT)+K_T), where the soft-O notation suppresses terms in R, H and logarithmic terms, provided
𝔼[ max_{ρ,ρ'∈, ||ρ-ρ'||_2≤ 1/√(T)} |{ t∈[T] : |l_t(ρ)-l_t(ρ')|>ℓ||ρ-ρ'||_2}| ] ≤ K_T.
Thus, it is sufficient to establish 1/2-dispersion of the sequence of loss functions, and provide a bound on the expected number of non-Lipschitz losses between worst-case pair of points in the domain, in order to establish our results in Section <ref>.
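For intuition, a standard learner that attains this type of guarantee under dispersion is an exponentially weighted forecaster run on a fine discretization of the parameter space. The sketch below is only illustrative: it handles a one-dimensional subsidy value on [0, H], and the grid size and step size are our own choices rather than anything specified in the cited work.

```python
import numpy as np

def exponential_weights(loss_fns, H=1.0, n_grid=1000, eta=None, seed=0):
    """Discretized exponential-weights learner over subsidy values in [0, H].

    loss_fns : list of callables; loss_fns[t](sigma) is the loss of playing sigma in round t.
    """
    T = len(loss_fns)
    grid = np.linspace(0.0, H, n_grid)
    # Hedge-style step size; illustrative tuning for losses normalized to [0, 1].
    eta = eta if eta is not None else np.sqrt(np.log(n_grid) / max(T, 1))
    weights = np.ones(n_grid)
    rng = np.random.default_rng(seed)
    played = []
    for t in range(T):
        p = weights / weights.sum()
        played.append(rng.choice(grid, p=p))          # sample a subsidy value to play
        losses = np.array([loss_fns[t](s) for s in grid])
        weights *= np.exp(-eta * losses)              # multiplicative weights update
    return played
```

Dispersion is what guarantees that restricting to a grid of this resolution loses little, since only a bounded number of losses can be non-Lipschitz between nearby grid points.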
Theorem <ref> (restated).
Suppose Assumption <ref> holds. Let L_1,…, L_T:[0,H]→[0,(2H+1)N] denote an independent sequence of losses as a function of the subsidy value σ, in an online sequence of T component maintenance games.
Then sequence of functions is 1/2-dispersed and there is an online algorithm with O(√(nT)) expected regret.
The key idea is to observe that each loss function L_t has at most K=n·2^{n-1} discontinuities (as in the proof of Theorem <ref> above). Further, any interval of length ϵ contains at most O(κϵ) discontinuities of any fixed loss function L_t, in expectation. This uses Assumption <ref>, and the observation that the critical values of σ^* are linear in some cost C_i. Indeed, as shown in the proof of Theorem <ref>, the critical values of subsidy are given by σ^*=C_i+𝔼_θϕ(0,s_-i')-𝔼_θϕ(1,s_-i') for some agent i and joint action s_-i'.
By Theorem 7 of <cit.> then the expected number of non-Lipschitz losses on the worst interval of length ϵ is at most K_T= Õ(Tϵ+√(Tlog(TK)))=Õ(√((n+log T)T)) for ϵ≥1/√(T).
This implies 1/2-dispersion of the sequence of loss functions in the sense of Definition <ref>.
Now Theorem <ref> implies the desired regret bound.
Theorem <ref> (restated).
Suppose Assumption <ref> holds. Let L_1,…, L_T:[0,H]^n→[0,(2H+1)N] denote an independent sequence of losses L_prior() as a function of the subsidy scheme parameterized by subsidy values {σ_i}, in an online sequence of T component maintenance games.
Then the sequence of functions is 1/2-dispersed and there is an online algorithm with O(√(nT)) expected regret.
Each loss function L_t can be partitioned by at most K=n·2^{n-1} axis-parallel hyperplanes into pieces such that the loss function is constant over each piece (as in the proof of Theorem <ref> above). Further, the offset of each of these hyperplanes is linear in some cost C_i and therefore along any σ^*_i-aligned line segment of length at most ϵ, there are at most O(κϵT) functions that are non-Lipschitz on that segment, in expectation.
Now for any pair of subsidy vectors σ,σ' such that ||σ-σ'||_2≤1/√(T), we can bound the expected number of non-Lipschitz functions for which |L_t(σ)-L_t(σ')|>0 by taking an axis aligned path connecting σ,σ' and adding up the number of non-Lipschitz functions along each segment. Suppose the segment lengths are ϵ_1,…,ϵ_n, then the above argument gives a bound O(κ T ∑_iϵ_i) on the expected number of non-Lipschitz functions. By Cauchy-Schwarz inequality, we have ∑_iϵ_i≤√(n)√(∑_iϵ_i^2)≤√(n/T), and the bound simplifies to O(κ√(nT)).
By Theorem 4 of <cit.> the expected number of non-Lipschitz losses on the worst point-pair with separation 1/√(T) is at most K_T= O(κ√(nT)+√(Tlog(TK)))=Õ(√(nT)).
Theorem <ref> now implies the claimed regret bound.
|
http://arxiv.org/abs/2409.02719v1 | 20240904135603 | An Extended Closure Relation by LightGBM for Neutrino Radiation Transport in Core-collapse Supernovae | [
"Shota Takahashi",
"Akira Harada",
"Shoichi Yamada"
] | hep-ph | [
"hep-ph"
] |
[email protected]
Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan
0000-0003-1409-0695]Akira Harada
National Institute of Technology, Ibaraki College, Hitachinaka 312-8508, Japan
Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), RIKEN, Wako, Saitama 351-0198, Japan
Advanced Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555, Japan
§ ABSTRACT
We developed a machine learning model using LightGBM, one of the most popular gradient boosting decision tree methods these days, to predict the Eddington tensor, or the second-order angular moment, for neutrino radiation transport in core-collapse supernova simulations. We use not only zero-th and first moments as in ordinary closure relations but also information on the background matter configuration extensively. For training the model we utilize some post-bounce snapshots from one of our previous Boltzmann radiation-hydrodynamics simulations. The Eddington tensor as well as the zero-th and first angular moments are calculated from the neutrino distribution function obtained in the simulation. LightGBM is light indeed and its high efficiency in training enables us to feed a large number of features and figure out which features are more important than others. We report in this paper the results of the training and validation as well as the generalization of our model: it can reproduce the Eddington factor better in general than the M1-closure relation, one of the most commonly employed algebraic closure relations at present; the generalization performance is also much improved from our previous model based on the deep neural network.
§ INTRODUCTION
Core-collapse supernovae (CCSNe) are the explosive demise of massive stars. Despite extensive investigations, the detailed mechanism driving this explosion remains to be established. Neutrino heating <cit.> is the mechanism that most researchers in the community currently think is most likely. It goes as follows. The gravitational collapse of a massive star is halted when the core reaches nuclear density at the center. A shock wave is then formed and sweeps through the accreting outer core, photo-dissociating heavy nuclei therein. The shock energy is depleted through this endothermic reaction, eventually leading to the stalling of the shock. In the meantime, a proto-neutron star (PNS) is formed at the center and starts to emit neutrinos copiously. These neutrinos heat up matter behind the shock and eventually re-energize the stagnant shock wave. It should be obvious that a good understanding of neutrino transport is imperative for firmly establishing the theory of the CCSN mechanism.
Numerical simulations have been extensively employed for this purpose <cit.>. They comprise the computations of hydrodynamics, gravity, and neutrino transport and reactions. The neutrino transport, the target of this study, is described by the Boltzmann equation. Its faithful solution, although preferable, is numerically demanding because of the high dimensionality. The majority of CCSN simulations these days therefore resorts to some approximations. Among them, the truncated moment method is most widely used <cit.>. Instead of the distribution function in momentum space at each spatial point, it considers the angular moments thereof and integrates the Boltzmann equation with respect to the two angles that specify the neutrino momentum to obtain the angular moments of the Boltzmann equation, thereby reducing the dimensionality.
The resultant equations for the moments of different orders are coupled with one another and form an infinite hierarchy. In numerical simulations, it is truncated at some order, typically the first order. The efficacy of this truncated moment method hinges on the accuracy of the closure relation, which one imposes by hand to obtain the unsolved moments from the solved ones. If one truncates the hierarchy at the first order, the Eddington tensor, or the second moment of the distribution function, is normally given by the closure relation as a function of the zero-th and first order moments. Among the various closure relations proposed in the literature to date, the M1-closure relation is the most popular these days <cit.>. Comparing the Eddington tensors obtained by directly integrating the distribution functions from CCSN simulations that solved the Boltzmann equation faithfully, which we refer to as the Boltzmann simulations hereafter, with the Eddington tensors given by the M1-closure relation for the same data, the diagonal and off-diagonal components may develop discrepancies of several tens of percent and a factor of 2, respectively <cit.>.
Some efforts are still being made to enhance the accuracy of the closure relation. <cit.> suggested a fitting formula for the Eddington factor, the largest eigenvalue of the Eddington tensor, based on spherically symmetric Boltzmann simulations. Though it offers a reasonably faithful formula, more improvement is required before applying it to multi-dimensional simulations. In a separate endeavor, <cit.> developed a tensor basis neural network (TBNN), a deep neural network that provides the Eddington tensor from the neutrino energy density and flux, i.e., the zero-th and first order moments, and the local matter velocity. The network was trained using the data of a Boltzmann simulation. Although the TBNN closure outperforms the M1 closure in terms of accuracy, there remains room for improvement both in the accuracy and in the generalization performance.
This paper aims at improving the closure relation with machine learning techniques. From our experience in the previous attempt with the TBNN <cit.>, we think that a substantial improvement will be achieved only by providing the machine learning model with more information than the two lowest-order moments. In fact, <cit.> found that the information on local matter motions was useful as an input to the TBNN. One particularly powerful technique to identify relevant features is the decision tree for regression problems (see Section 2.2 for details). We adopt the Light Gradient Boosting Machine (LightGBM) in this paper. It is known that LightGBM can handle a large number of features very efficiently. As one of the decision tree algorithms, on the other hand, it may lack sufficient smoothness in expressing functions. We attach more importance to the former capability in this study.
This paper is organized as follows. In section 2, we briefly review the M1-closure relation and LightGBM, the machine learning model employed in this paper. Section 3 provides the details of the training: data, feature engineering, validation and hyper parameters. Then, in section 4, we present the main results on the performance of our novel models making comparisons with the Eddington tensors calculated directly from the original data obtained by our Boltzmann simulation as well as with the Eddington tensors derived from the M1-closure relation applied to the zero-th and first angular moments calculated from the same original data. Finally, section 5 is the conclusion. Throughout this paper, we employ the units of c=1, where c is the light speed. The metric signature is (-+++). Greek and Latin indices run over 0–3 (spacetime) and 1–3 (spatial).
§ OVERVIEWS OF THE M1-CLOSURE RELATION AND LIGHTGBM
The purpose of this paper is to improve by machine learning techniques the estimation of the Eddington tensor in the truncated moment method for the neutrino transport in CCSN simulations. This section is devoted to a brief review of the M1-closure relation in section <ref> and LightGBM in section <ref>.
§.§ M1-closure relation
The neutrino population in CCSNe is described by the distribution function f(x,p) in phase space, in which x and p are the spacetime coordinates and momentum of neutrinos. It is governed by the Boltzmann equation. The high dimensionality and stiff source terms of this equation pose a numerical challenge under the current computational resources. To reduce the numerical cost, most researchers opt for the truncated moment method nowadays.
Since the formulation of the moment method is described in <cit.> in detail, we give only its outline in the following. We first define the unprojected second moment of the distribution function as[We follow the definition in <cit.>, which is slightly different from that in <cit.>.]
M^αβ(ε, x ) = ∫ f(x, p^') δ(ε^3/3-ε^' 3/3) p^'α p^'β d V_p^',
where ε^'=-u_μ p^'μ is the neutrino energy measured in the fluid-rest (FR) frame with u being the fluid four-velocity. By applying appropriate projections to this moment, we can derive the energy densities E, energy flux
F^i, and the stress tensors P^ij in both the FR and the laboratory (LB) frames:
E_ FR = M^αβ u_α u_β, E_ LB = M^αβ n_α n_β,
F_ FR^i = -M^αβ u_α h_β^i, F_ LB^i = -M^αβ n_αγ_β^i,
and
P_ FR^ij = M^αβ h_α^i h_β^j, P_ LB^ij = M^αβγ_α^i γ_β^j,
where n is the unit normal vector to the spatial hypersurface with t = const., γ_α^i = δ_α^i + n_α n^i is the projection onto that surface with δ_α^β being the Kronecker delta, and h_α^i = δ_α^i + u_α u^i is the projection onto the plane perpendicular to u^α. In the truncated moment formalism, the time evolutions of E_ LB and F_ LB^i are normally solved by imposing a closure relation to give P_ LB^ij as a function of E_ LB and F_ LB^i[In order to solve the spectral evolution of neutrinos in general relativity, the third moment of the distribution function is also required. We ignore it in this paper for simplicity, though.]. These three quantities are essentially the lowest three angular moments. The exact relations are given, e.g., in <cit.>.
The closure relation is normally imposed to the Eddington tensor k^ij defined as the stress tensor divided by the energy density. A common way to construct it is to employ the interpolation
of the optically thick and thin limits:
P_ LB^ij = 3p_ν-1/2 P_ thin^i j + 3(1-p_ν)/2 P_ thick^i j.
In the above expression, P^ij_ thin and P^ij_ thick are the stress tensors in the optically thin and thick limits, respectively, and can be expressed with the energy density and energy flux density as given shortly; p_ν is the Eddington factor, which should also be prescribed in the truncated moment method. Then the Eddington tensor is obtained as
k^ij = P_ LB^ij/E_ LB.
The functional forms of the stress tensor in the two limits and of the Eddington factor is given as follows <cit.>. In the optically thin limit, neutrinos move freely in one direction specified by the flux vector. The stress tensor P_ thin^ij is then given in the LB frame as
P_ thin^i j = E_ LBF_ LB^i F_ LB^j/|F_ LB|^2.
In the optically thick limit, on the other hand, neutrinos are in thermal equilibrium with matter, and are isotropic in the FR frame. In the LB frame, P_ thick^ij is given by the Lorentz transformation as
P_ thick^i j = 1/3 E_ FR(γ^ij + 4 v^i v^j)
+ F_ FR^i v^j + F_ FR^j v^i.
As for the Eddington factor, the M1 closure scheme adopts the following expression proposed by <cit.>:
p_ν = 3 + 4 χ^2/5 + 2 √(4 - 3 χ^2),
where χ=|F_FR|/E_FR is referred to as the flux factor.
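For concreteness, the M1 interpolation above can be evaluated at a single spatial point once the zero-th and first moments are known in both frames. The following sketch assumes flat space (γ^ij = δ^ij), a non-vanishing laboratory-frame flux, and takes the fluid-rest-frame and laboratory-frame moments as given inputs; the function name and argument conventions are ours, not those of the simulation code.

```python
import numpy as np

def m1_eddington_tensor(E_lab, F_lab, E_fr, F_fr, v):
    """Eddington tensor k^ij from the M1 closure (flat space, c = 1).

    E_lab, F_lab : energy density and energy flux in the laboratory frame
    E_fr,  F_fr  : energy density and energy flux in the fluid-rest frame
    v            : fluid three-velocity (the two frames coincide as v -> 0)
    """
    F_lab, F_fr, v = map(np.asarray, (F_lab, F_fr, v))
    chi = np.linalg.norm(F_fr) / E_fr                          # flux factor
    p = (3.0 + 4.0 * chi**2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * chi**2))

    # optically thin limit: free streaming along the lab-frame flux direction
    P_thin = E_lab * np.outer(F_lab, F_lab) / np.dot(F_lab, F_lab)

    # optically thick limit: isotropic in the fluid frame, boosted to the lab frame
    P_thick = (E_fr / 3.0) * (np.eye(3) + 4.0 * np.outer(v, v)) \
              + np.outer(F_fr, v) + np.outer(v, F_fr)

    P_lab = 0.5 * (3.0 * p - 1.0) * P_thin + 1.5 * (1.0 - p) * P_thick
    return P_lab / E_lab
```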
§.§ Light Gradient Boosting Machine
The decision tree (DT) is a machine learning algorithm that performs classification or regression tasks by recursively partitioning a dataset based on certain criteria <cit.>. Suppose that we want to find a functional relation between an n-dimensional input vector x and an output real number[If the output is an integer or discrete number, the task is classification.] y from a dataset of size N: {( x^i, y^i)}_i=1^N. In the DT method, each data point ( x^j, y^j) is binary-classified according to x^j, and the two destinations are called nodes. The data classified to a certain node are further classified to its daughter nodes recursively. This process hence forms a tree graph. In DT regression, each node has the prediction value ŷ_k = ∑_j∈ D_k y^j/|D_k|, where D_k is the subset of the data classified to node k and |D_k| is its size. The classification criterion at each node k is determined so as to minimize the mean squared error of the prediction value ŷ_R(L)k for the data in the right (left) daughter node of node k. This binary classification is repeated until the error thus obtained gets small enough. The output of the DT for the input x^j is then the prediction value ŷ^j of the node to which x^j is finally classified.
The gradient boosting DT (GBDT) <cit.> is an improved version of the DT. The output of a single DT for regression normally has large errors. We can improve the prediction by training another DT on these errors. With recursive corrections by many DTs, the GBDT achieves an accurate prediction. Suppose that f_ℓ (x) is the output of the ℓ-th DT for the input vector x. In the least-squares GBDT, the ℓ-th DT is trained successively from ℓ=1 so as to minimize the function
∑_j=1^N{(y^j-η∑_m=0^ℓ-1f_m(x^j))-f_ℓ(x^j)}^2,
where 0< η < 1 is the learning rate employed to avoid overfitting and f_0(x) is given conventionally by ∑_j=1^N y^j/N. Then, the prediction of the GBDT is given by ŷ^j = η∑_ℓ=0^M f_ℓ (x^j), where M is the total number of DTs.
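As a minimal illustration of this least-squares boosting recursion, the sketch below fits each new tree to the current residuals with learning rate η, using scikit-learn trees; the depth, tree count and function names are illustrative choices rather than the settings used in this work.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbdt(X, y, n_trees=100, eta=0.1, max_depth=3):
    """Least-squares gradient boosting: each tree is fitted to the current residuals."""
    f0 = float(np.mean(y))                      # f_0(x): mean of the targets
    trees = []
    pred = np.full(len(y), eta * f0)            # eta * sum of f_m so far
    for _ in range(n_trees):
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, y - pred)                   # residuals of the current ensemble
        trees.append(tree)
        pred += eta * tree.predict(X)
    return f0, trees

def predict_gbdt(model, X, eta=0.1):
    """Prediction y_hat = eta * sum_{l=0}^{M} f_l(x); eta must match the fit."""
    f0, trees = model
    return eta * f0 + eta * sum(t.predict(X) for t in trees)
```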
LightGBM (Light Gradient Boosting Machine) is a variant of GBDT introduced by <cit.>. LightGBM incorporates four techniques to enhance GBDT: (1) leaf-wise tree growth, which prioritizes the creation of daughter and granddaughter nodes over that of cousin nodes, suppressing the growth of nodes that focus on variables irrelevant for the regression, as shown in figure <ref>; (2) a histogram-based algorithm, which reduces the numerical cost by binning the data; (3) gradient-based one-sided sampling (GOSS), which reduces the training data by focusing on the critical elements; and (4) exclusive feature bundling (EFB), which combines mutually exclusive features, i.e., features that rarely take non-zero values simultaneously. Thanks to these techniques, LightGBM excels in computational and memory efficiency compared to traditional models, and hence it accelerates the search for the relevant variables of the closure relation.
§ METHOD
This section provides a detailed description of the model developed in this paper. Section 3.1 concerns the data used in this study, section 3.2 discusses the feature engineering, and section 3.3 explains the model.
§.§ Data Description
The data set used in this study consists of five snapshots taken from a 2D axisymmetric CCSN simulation performed with the Boltzmann-radiation-hydrodynamics code <cit.>. This code solves the Boltzmann equation for neutrinos, the hydrodynamics equations for the stellar gas, and the Poisson equation for gravity. We adopt the Furusawa–Togashi EOS <cit.>, which is based on the variational method and is extended to handle a large number of nuclei in nuclear statistical equilibrium. The progenitor is a non-rotating star with a ZAMS mass of 15 M_⊙ modeled by <cit.>. For a detailed description of this code we refer readers to <cit.>, <cit.>, <cit.> and <cit.>. Although we consider three neutrino species, electron-type neutrinos and anti-neutrinos as well as heavy-lepton-type neutrinos, we employ the results for the electron-type neutrinos alone in this study. The grid sizes for radius N_r, zenith angle N_θ, energy N_ε, and momentum angles N_θ_ν, N_ϕ_ν are (N_r, N_θ, N_ε, N_θ_ν, N_ϕ_ν) = (384, 128, 20, 10, 6). The computational zone covers the region up to 5000 km from the stellar center. In this study, on the other hand, we utilize the data within the ranges of 0<r(km)<200, 0<θ(rad)<π, and 0<ε(MeV)<300 because this is the region of the greatest relevance for the neutrino heating and shock revival. We extract the distribution function of the electron-type neutrino in this region from the simulation data at the post-bounce time of 300 ms and use it as the training data. For validation, we employ four other distribution functions sampled at t=100 ms, t=150 ms, t=200 ms and t=250 ms after bounce. Note that the CCSN simulation we adopt in this paper is identical to that used in <cit.>, but the radial extent and the number of snapshots are expanded.
We will construct a machine learning model to give the Eddington tensor as a function of some variables, or features, that include the lower moments. Those we refer to as the basic features are listed in table <ref>. As discussed later, additional features are generated from these basic ones through feature engineering. The basic and additional features are collectively treated as inputs to the machine.
The output of the machine is the Eddington tensor. In this paper, we do not project it onto the local orthonormal vectors: e_r, e_θ and e_ϕ, of the spherical coordinates in the LB frame. Instead we employ the orthonormal vectors, e_ FP, i, one of which is aligned with the neutrino flux and the other two are orthogonal to it and also to each other. We refer to the 3-dimenstional frame defined by these orthonormal vectors as the flux-projected (FP) frame, and the Eddington tensor represented in this frame is denoted as k^ij_ FP hereafter.
The FP frame is constructed for each neutrino energy as follows: e_ FP, 1 is the unit vector directed along the neutrino flux. The choice of the other two unit vectors is rather arbitrary. In this paper we obtain e_ FP, 2 from e_θ by subtracting the component parallel to e_ FP, 1, i.e., via the Gram-Schmidt orthogonalization. As a result, e_ FP, 2 lies in the plane spanned by the flux and e_θ. The last vector is chosen so that e_ FP,1, e_ FP, 2 and e_ FP,3 form a right-handed orthonormal system. The Eddington tensor in the FP frame is then related to that in the LB frame as k_ FP^ij e_ FP,i^l e_ FP,j^m = k_ LB^lm, where the superscripts of e_ FP specify the components and repeated indices are summed over.
The reason why we employ the FP frame is the following. The Eddington tensor is not “aligned” in general with the coordinates we choose rather arbitrarily. Since the Eddington tensor is a symmetric second-rank tensor, it becomes diagonal in an appropriately chosen orthonormal frame, which is not equal to the LB frame in general. Suppose that the neutrino distribution in momentum space is axisymmetric with respect to a certain direction misaligned with any one of the orthonormal vectors in the LB frame, the symmetry axis instead being parallel to the flux vector. In that case the Eddington tensor is indeed diagonal in the FP frame. It should be stressed that the neutrino distribution in momentum space is not axisymmetric in general, except for the notable case with spherical symmetry. For example, the neutrino flux from a rotating oblate PNS is non-radial and even has a nonvanishing ϕ-component, since neutrinos carry non-zero angular momenta. Then the Eddington tensor in the LB frame has substantial off-diagonal components <cit.>. We expect, on the other hand, that the off-diagonal components will be smaller in the FP frame even in this case.
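The flux-projected basis described above amounts to a single Gram-Schmidt step and a cross product. A short sketch of the construction and of the corresponding tensor transformation is given below; it assumes the flux and e_θ are supplied as Cartesian three-vectors in the laboratory frame, and the function names are ours.

```python
import numpy as np

def fp_frame(F, e_theta):
    """Orthonormal FP basis: e1 along the flux, e2 from e_theta by Gram-Schmidt, e3 = e1 x e2."""
    e1 = F / np.linalg.norm(F)
    e2 = e_theta - np.dot(e_theta, e1) * e1
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)                       # completes a right-handed triad
    return np.vstack([e1, e2, e3])              # rows are e_FP,1..3 in lab-frame components

def to_fp_frame(k_lab, basis):
    """Transform the Eddington tensor from the LB frame to the FP frame."""
    return basis @ k_lab @ basis.T
```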
§.§ Feature Engineering
The closure relation is originally meant to give high-order moments such as the Eddington tensor in terms of the lower-order moments like the energy density and flux. In our previous attempt to build a deep neural network to do this task, we realized that the lower-order moments are just not sufficient and added some other local features such as the matter velocity and its shear to the input. In this paper, we extend this idea. Thanks to the efficiency of LightGBM, we are able to incorporate much more information, some of them nonlocal. It is true that the more model-specific information we employ, the less general the machine becomes, but the purpose of this paper is to construct a machine learning model that we can apply to the CCSN simulation with the truncated moment method.
In providing a large number of features to the machine, we find feature engineering very important. By feature engineering, we mean finding better combinations of the original features, normalizing features and, if necessary, removing the basic features after using them for feature engineering. In principle, this process does not increase the information included in the original features. If the output depends on a function of rather involved combinations of these features, the machine may need a long time or even fail to train itself even if those features contain sufficient information. On the other hand, if the human knows a priori those combinations of the original features and teaches them to the machine, we can expect the training efficiency to be improved substantially. Unfortunately, we do not know what should be the best combinations. Thanks to the efficiency of LightGBM, we can try various combinations. The engineered features given below are indeed obtained that way.
As mentioned above, we will incorporate nonlocal quantities into the input features. This is probably understandable if one recalls the fact that radiative transfer is most difficult to treat in the semi-transparent region, where the mean free path of neutrinos becomes comparable to the scale height of matter and the neutrino distribution is indeed not determined locally. It is noted that the basic features in Table 1 include the optical depth, which is nonlocal information. As explained below, we add more.
The engineered features thus produced are classified into three types: advanced features, spatial-shift features, and spatial-difference features. The advanced features, listed in Table <ref>, are those features given as functions of the basic features. Particularly important for the improvement of the model are the flux factor in the LB frame χ_ LB = |F_ LB^i|/E_ LB, the colinearity parameter F_LB^j v_j/E_LB |v^i|, which measures the alignment of the neutrino flux and matter velocity, and the parameter we call the “neutrinosphere indicator,” which is a binary indicator of whether the point is well inside the neutrinosphere or not, i.e., it is 1 if the optical depth τ is greater than 5 and 0 otherwise.
The spatial-shift features are the features concerning the neighboring grids. The closure relation is normally local, i.e., it is a relation among physical quantities at the same location. It turns out, however, that the inclusion of the neighboring physical quantities as input improves the accuracy of the inference. More concretely, we incorporate the flux factor χ_ LB=|F_ LB^i|/E_ LB and the colinearity parameter F_LB^jv_j/E_LB|v^i| not only on the grid point of concern but also on the neighboring grid points. For instance, let Q_k,l represent one of the aforementioned quantities at the k-th radial grid point and the l-th angular grid point; we then incorporate Q_k-5i, l (where i=-1, 1, 2, …, 6), Q_k+1, l, Q_k, l+1, Q_k-1, l and Q_k, l-1 as components of the input feature vector for Q_k,l.[If it goes beyond the spatial boundaries, those points are simply ignored.] In particular, it was found that incorporating information from the inner regions of the star, such as Q_k-5i, l (where i=1, 2, …, 6) and Q_k-1, l, significantly improves the prediction accuracy. The reasoning here is that since neutrinos predominantly move outward, upstream information from radially inner points is particularly valuable.
We also find it useful to employ the difference between the feature on the grid of concern and that on the inner adjacent grid:
Δ Q_k,ℓ = Q_k,ℓ-Q_k-1,ℓ.
to which we refer as the spatial-difference feature. Although it may seem redundant, we include both the spatial-shift and difference features in the input as we do not know which feature is more effective. Note that irrelevant features are automatically ignored in LightGBM.
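For concreteness, the spatial-shift and spatial-difference features can be assembled from a 2D map Q[k, l] of, e.g., the flux factor as sketched below. The index offsets follow the text; the NaN padding at the spatial boundaries and the feature names are our own illustrative choices.

```python
import numpy as np

def shifted_map(Q, dk, dl):
    """Map whose (k, l) entry is Q[k+dk, l+dl]; out-of-range entries are NaN."""
    out = np.full(Q.shape, np.nan)
    K, L = Q.shape
    k0, k1 = max(0, -dk), min(K, K - dk)
    l0, l1 = max(0, -dl), min(L, L - dl)
    out[k0:k1, l0:l1] = Q[k0 + dk:k1 + dk, l0 + dl:l1 + dl]
    return out

def shift_and_difference_features(Q):
    """Spatial-shift and spatial-difference features from a 2D map Q[k, l]."""
    feats = {}
    radial_offsets = [-5 * i for i in (-1, 1, 2, 3, 4, 5, 6)]      # Q_{k-5i, l}
    for dk in radial_offsets + [1, -1]:
        feats[f"shift_r{dk:+d}"] = shifted_map(Q, dk, 0)
    for dl in (1, -1):                                             # Q_{k, l+-1}
        feats[f"shift_th{dl:+d}"] = shifted_map(Q, 0, dl)
    feats["diff_radial"] = Q - shifted_map(Q, -1, 0)               # Q_{k,l} - Q_{k-1,l}
    return feats
```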
§.§ Models
The pipeline of the model employed in this study is shown in Figure <ref>. After creating new features from the basic features through feature engineering, we conduct training with 8-fold cross validation (CV), in which the data set is randomly divided into 8 subsets to build 8 models. For each model, one of the subsets serves as the validation set and the remaining 7 as the training data. The 8 models make predictions individually, and the arithmetic mean of those predictions is adopted as the final prediction. Such an approach is known to effectively prevent the model from over-fitting a particular dataset, thereby improving generalization.
The main hyperparameters and their values in the LightGBM model are listed in Table <ref>. We employ early stopping, i.e., the training is terminated once the mean squared error on the validation set is no longer lowered appreciably. It is also useful to avoid overfitting. As a consequence, we can set a sufficiently large value to , the number of decision trees to be built. Though it is not our main goal in this paper to fine-tune the hyperparameters, we set different values to some hyperparameters for the diagonal components of the Eddington tensor from those for the off-diagonal components, as given in the table.
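A minimal sketch of this pipeline, with one LightGBM regressor per CV fold, early stopping on the held-out fold, and averaging of the per-fold predictions, is given below. The hyperparameter values shown are placeholders rather than the entries of the table, and the helper names are ours.

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import KFold

def train_cv_ensemble(X, y, n_folds=8, params=None):
    """Train one LightGBM regressor per CV fold (X, y as numpy arrays)."""
    params = params or dict(num_leaves=31, learning_rate=0.05,
                            n_estimators=10000)          # placeholder values
    models = []
    for tr_idx, va_idx in KFold(n_splits=n_folds, shuffle=True,
                                random_state=0).split(X):
        model = lgb.LGBMRegressor(**params)
        model.fit(X[tr_idx], y[tr_idx],
                  eval_set=[(X[va_idx], y[va_idx])], eval_metric="l2",
                  callbacks=[lgb.early_stopping(stopping_rounds=100)])
        models.append(model)
    return models

def predict_ensemble(models, X):
    """Final prediction: arithmetic mean of the per-fold predictions."""
    return np.mean([m.predict(X) for m in models], axis=0)
```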
§ RESULTS
In this paper, the data extracted from the snapshot at 300 ms after the bounce are used as the training set. The target variable is the Eddington tensor in the FP frame k_FP^ij. We focus on the estimation of its 11-, 22-, 33- and 12-components, since they are the diagonal and main off-diagonal elements, respectively. The test data were collected from the snapshots at 100 ms, 150 ms, 200 ms and 250 ms after the bounce. In the following, the diagonal and off-diagonal components are considered separately in turn.
§.§ Diagonal Components
The diagonal components of the Eddington tensor play the main role in neutrino transport, and hence their accurate estimation is crucially important. The conventional M1-closure relation does not reproduce it perfectly <cit.>, and the machine-learning closure is expected to provide better predictions.
Figure <ref> shows the learning curve in the training of the 11-component of the Eddington tensor. The training is conducted to minimize the Mean Squared Error (MSE):
L_ε^(2) = 1/|D_ε|∑_D_ε|k_ boltz, FP^ij-k_ infer, FP^ij|^2,
where the sum is taken over the subset D_ε of the data corresponding to a particular neutrino energy and |D_ε| is the number of its elements; k_ boltz, FP^ij is the Eddington tensor obtained by the Boltzmann-radiation-hydrodynamics simulation whereas k_ infer, FP^ij is the one obtained with our LightGBM model. As the solid lines show, the training is successful, with the values of the MSE decreasing monotonically down to <10^-5 for all 8 models. These lines are terminated at different numbers of estimators because of the early stopping. The evaluation on the validation data, also presented in the figure as dashed lines, indicates that overfitting is indeed avoided reasonably well. Similar results are obtained for other components of the Eddington tensor.
We move on to the predictions of these models. In Figures 4, 5, 6 we plot the mean absolute error (MAE) for the diagonal components of the Eddington tensor for different post-bounce times: 250, 200, 150, and 100 ms. MAE is defined as
L_ε^(1) = 1/|D_ε|∑_D_ε |k_ boltz, FP^ij - k_ infer, FP^ij|,
where k_ infer, FP^ij is the ij-component of the Eddington tensor obtained either with our Light-GBM model or via the M1-closure relation. It is apparent that our LightGBM closure gives more accurate predictions than the M1 closure for almost all energies at these times. The behavior of MAE is essentially the same among the three diagonal components. As the time goes back to earlier times, the accuracy is degraded somewhat as expected. It is encouraging, however, that the results are still better than those for the M1 closure except at ∼ 30 MeV, where the M1 closure gives exceptionally good results.
Although MAE is useful to see the overall convergence of the model to the target, what is more important is to what extent the individual components of the Eddington tensor are reproduced at different points in space. We hence compare next the radial profiles of the components of the Eddington tensors along the radial line at θ=π / 2 in figures <ref>–<ref>. The results are not much different for other angles.
We can confirm again that the LightGBM closure outperforms the M1 closure in general. Of particular note is that our LightGBM model can reproduce values of the 11-component of the Eddington tensor smaller than 1/3 and values of the 22- and 33-components greater than 1/3 at rather small radii. Such values are obtained where the opacity decreases rapidly with radius in the vicinity of the neutrinosphere; there the neutrino distribution is almost isotropic in the outward hemisphere whereas it is heavily depleted in the opposite hemisphere <cit.>. By its construction, the M1 closure cannot treat such angular distributions properly. It is noted that our LightGBM model fares even better than the TBNN model in <cit.>.
It is also worth mentioning that the LightGBM model can handle the shock wave with reasonable accuracy. The shock wave is located at 170 km, 140 km, 130 km, 120 km, and 110 km at 100 ms, 150 ms, 200 ms, 250 ms, and 300 ms post bounce, respectively. The Eddington tensor is discontinuous at the shock wave. The jump gets larger as the energy increases. The radial profiles of the Eddington tensor for the energy of 54 MeV presented in figures <ref>–<ref> demonstrate indeed that the predictions of the LightGBM model are closer to the results of the Boltzmann simulations than the M1-closure relation in general.
However, the Eddington tensor predicted by LightGBM has several issues. Firstly, the predicted values are not so smooth as those obtained with the M1 closure. This tendency becomes more pronounced as we go back in time and see the predictions themselves degraded. In fact, the Eddington tensor at 250 ms post bounce plotted in Figures <ref>, <ref>, <ref> is relatively smooth, whereas the lines are more jagged at 100 ms after bounce as shown in Figures <ref>, <ref>, <ref>. We think that this phenomenon occurs because the current predictions are made for each grid point individually and that the issue could be resolved by adopting a machine learning model capable of incorporating non-local information into the predictions. Machine learning models such as Convolutional Neural Networks (CNNs) <cit.> or Recurrent Neural Networks (RNNs) <cit.> may have robustness to local noises and outliers in the input data, thereby giving smoother predictions.
It should also be mentioned that the current model does not take into account physical requirements. For instance, the trace of the Eddington tensor is unity by definition. There is no guarantee, however, that the predicted results satisfy this constraint, although the violation is quite minor in the current models, as can be seen in Figure <ref>. This issue may be addressed with physics-informed neural networks <cit.>: for example, one may incorporate the violations of the constraints into the loss function so that they could be minimized. Although these possibilities are interesting in their own right and warrant further investigation, they are much beyond the scope of this paper and are left for future work.
§.§ Off-Diagonal Component
Although the diagonal components, particularly the 11-component, is dominant in neutrino transport, the off-diagonal components should not be forgotten <cit.>. Designed to reproduce the dominant component, the M1-closure is known to give large errors to the off-diagonal components sometimes <cit.>. It is hence one of our goals in the machine learning modeling of the Eddington tensor to better reproduce the off-diagonal components. It is admittedly true in this respect that the previous TBNN closure was not so good, either <cit.>. In the following, we examine the accuracy of the off-diagonal components in our LightGBM model.
We focus on the 12-component, which is actually the most important off-diagonal component. Figure <ref> shows the MAE for this component as a function of the neutrino energy at different post-bounce times. We find again that our LightGBM model achieves higher accuracies in general compared to the M1 closure. The radial profiles of the 12-component of the Eddington tensor are presented in figures <ref>–<ref>. They are obtained for the radial ray at θ=π/2. At 250 ms post bounce, our LightGBM model fares better than the M1 closure except at ε_ν=54.0 MeV, where the two models give similar errors. As we go back in time, the deviation from the results of the Boltzmann simulation gets larger for the LightGBM model and its advantage becomes less remarkable.
A potential factor that contributes to this compromised success is the variation in the statistical properties among the data at different time steps. Tables <ref> and <ref> give the mean and standard deviation of k_ boltz, FP^11 and k_ boltz, FP^12 at each time step. It is evident that the statistical properties of k_ boltz, FP^12 vary significantly from time to time compared to those of k_ boltz, FP^11. It may be that the generalization should not have been expected for the off-diagonal component in the first place. Standardisation of the dataset across all time steps may be a solution, but that again is beyond the scope of this paper.
It is worth noting that different features were utilized for the diagonal and off-diagonal components in our LightGBM model. Specifically for the off-diagonal components, it was unclear which features held the most significance. In predicting the diagonal components we utilized 35 features, whereas for the off-diagonal components 317 features were employed. For example, features such as the fluid velocity, which were removed when predicting the diagonal components, are included for the off-diagonal components. Including them significantly decreased the prediction accuracy for the diagonal components, whereas the accuracy for the off-diagonal components was not greatly compromised. The off-diagonal components are more significantly affected by factors such as fluid motion than the diagonal components, which suggests that background information may be more important for their prediction.
§.§ Importance of feature engineering
Feature engineering plays a crucial role in predicting the Eddington tensor accurately. In this work we indeed incorporated many advanced features, as listed in table <ref>. This is possible because LightGBM is very efficient both in training and prediction, ignoring irrelevant features automatically. As explained in section 3.2, we find that the spatial-shift and spatial-difference features are the two most important for the improvement of accuracy, which we demonstrate here.
In Figure <ref>, we show how the MAE improves from that of the model with only the basic features included. The MAE for the model with the basic features alone is actually larger than that for the M1-closure method for most of the neutrino energies. The MAE becomes smaller, however, once the advanced features are incorporated, particularly for energies close to the mean energy. It is further improved if the spatial-shift and difference features are taken into account in addition. The same trend is observed for the Eddington tensor k_FP^11 itself as a function of radius as demonstrated in Figure <ref>.
What we learned from these experiments on feature engineering is three-fold. Firstly, the local low-order moments, the inputs for the ordinary closure methods, are not sufficient to reproduce the Eddington tensor accurately. We find that the information on the background matter, such as the baryon density or the optical depth, is particularly important. Secondly, the nonlocal information on the neutrino distribution is probably the most crucial. Although its incorporation in LightGBM is rather straightforward, it will not be so easy for an analytic closure relation to take it into account. Thirdly, finding good combinations of the basic features is also important. Our LightGBM model is useful in this respect, since we can try many possible combinations in relatively short times. The new features so obtained may be employed in other machine learning models.
§ CONCLUSION
In this paper we develop a machine learning model using LightGBM to predict the Eddington tensor, or the second moment, of the angular distribution of neutrinos often employed in core-collapse supernova simulations. Unlike the ordinary closure relations, we employ not only the low-order moments of the neutrino angular distribution in momentum space but also exploit the information on the matter distribution as well as non-local features of the neutrino distribution. For the training and validation of the machine, we utilize the numerical results of the Boltzmann-radiation-hydrodynamics simulation of the core collapse of a non-rotating 15M_⊙ progenitor model. The training data are a snapshot at 300 ms after core bounce, while the validation data are snapshots taken at 100 ms, 150 ms, 200 ms and 250 ms post-bounce. For all these times, our machine learning model shows better accuracy than the M1 closure in predicting the Eddington tensor, highlighting the potential of machine learning to provide an alternative closure relation.
The key factor in the model building is the feature engineering process. It is true that finding the features of the greatest relevance for improving the prediction is the most critical step, but it is equally important to produce good combinations of the basic features that the machine can easily understand. Those features should have some physical meaning so that they could be applied to different evolutionary stages. LightGBM, a variant of the gradient boosting decision tree, is efficient in its management of memory and time, allowing us to handle a large number of features at a time quickly and, as a result, identify the features that improve the generalization ability substantially. In fact, LightGBM is able to reproduce both the diagonal and off-diagonal components of the Eddington tensor better than the M1 closure in most cases, although the accuracy degrades somewhat as we go back to earlier post-bounce times. In particular, the dip in the 11-component near the neutrinosphere is represented quite well, which neither the M1 closure nor our previous tensor-basis neural network model is able to do. The hump in the 12-component in the same region is represented reasonably well too.
Although these successes are encouraging, there remain many unsatisfactory aspects. For instance, the resultant Eddington factor is not very smooth in radius, which may cause trouble when we substitute it for the M1-closure relation in the truncated moment method and perform core-collapse simulations. Although it is much improved from our previous neural network model, the generalization performance is not good enough. This may suggest that we should use all the data collected from different times for the training. We are currently trying such possibilities. The non-local features investigated in this paper are based on physical intuition and are admittedly very primitive, and there is much room for improvement. The ultimate goal of this project is to create a versatile model capable of predicting the Eddington tensor (and even higher moments as well) for stars with different masses and/or degrees of rotation. It may turn out that a machine based on decision trees is not the best one. Even in that case, the features we identify in this work may be used as inputs to other machine learning models. The current model is useful for finding other features to improve the prediction further.
§ ACKNOWLEDGMENTS
We acknowledge Hiroki Nagakura and Keiya Hirashima for fruitful discussions.
This work was supported by KAKENHI Grant Numbers JP21K13913, 21H01083.
This work was also supported by MEXT as “Program for Promoting Researches on the Supercomputer Fugaku” (Toward a unified view of the universe: from large scale structures to planets).
S. T. is supported by International Graduate Program of Innovation for Intelligent World.
S. Y. is supported by Institute for Advanced Theoretical and Experimental Physics, Waseda University and the Waseda University Grant for Special Research Projects (project number: 2023-C141, 2024-C56, 2024-Q014).
We acknowledge the high-performance computing resources of the K-computer / the supercomputer Fugaku provided by RIKEN, the FX10 provided by Tokyo University, the FX100 provided by Nagoya University, the Grand Chariot provided by Hokkaido University, and Oakforest-PACS / Wisteria Odyssey provided by JCAHPC through the HPCI System Research Project (Project ID: hp130025, 140211, 150225, 150262, 160071, 160211, 170031, 170230, 170304, 180111, 180179, 180239, 190100, 190160, 200102, 200124, 210050, 220047, 220223, 230056, 230270, 240041) for producing and processing the supervisor data.
LightGBM (<cit.>)
aasjournal
|
http://arxiv.org/abs/2409.03039v1 | 20240904191747 | Fluctuating Hydrodynamics Describes Transport in Cellular Aggregates | [
"Subhadip Chakraborti",
"Vasily Zaburdaev"
] | cond-mat.soft | [
"cond-mat.soft",
"cond-mat.stat-mech",
"physics.bio-ph"
] |
|
http://arxiv.org/abs/2409.02695v1 | 20240904132951 | How does the critical torus instability height vary with the solar cycle? | [
"Alexander W. James",
"Lucie M. Green",
"Graham Barnes",
"Lidia van Driel-Gesztelyi",
"David R. Williams"
] | astro-ph.SR | [
"astro-ph.SR"
] |
0000-0001-7927-9291]Alexander W. James
Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey, RH5 6NT, UK
0000-0002-0053-4876]Lucie M. Green
Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey, RH5 6NT, UK
0000-0003-3571-8728]Graham Barnes
NorthWest Research Associates, 3880 Mitchell Lane, Boulder, CO 80301, USA
0000-0002-2943-5978]Lidia van Driel-Gesztelyi
Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey, RH5 6NT, UK
LESIA, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, Univ. Paris Diderot, Sorbonne Paris Cité, 5 place Jules Janssen, 92195 Meudon, France
Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, Hungarian Academy of Sciences, Konkoly Thege út 15-17., H-1121, Budapest,
Hungary
0000-0001-9922-8117]David R. Williams
European Space Agency (ESA), European Space Astronomy Centre (ESAC), Camino Bajo del Castillo, s/n, Villanueva de la Cañada, Madrid, 28692 Spain
§ ABSTRACT
The ideal magnetohydrodynamic torus instability can drive the eruption of coronal mass ejections.
The critical threshold of magnetic field strength decay for the onset of the torus instability occurs at different heights in different solar active regions, and understanding this variation could therefore improve space weather prediction.
In this work, we aim to find out how the critical torus instability height evolves throughout the solar activity cycle.
We study a significant subset of Space-Weather HMI and MDI Active Region Patches (SHARPs and SMARPs) from 1996–2023, totalling 21584 magnetograms from 4436 unique active region patches.
For each magnetogram, we compute the critical height averaged across the main polarity inversion line, the total unsigned magnetic flux and the separation between the positive and negative magnetic polarities.
We find the critical height in active regions varies with the solar cycle, with higher (lower) average critical heights observed around solar maximum (minimum).
We conclude this is because the critical height is proportional to the separation between opposite magnetic polarities, which in turn is proportional to the total magnetic flux in a region, and more magnetic regions with larger fluxes and larger sizes are observed at solar maximum.
This result is noteworthy because, despite the higher critical heights, more CMEs are observed around solar maximum than at solar minimum.
§ INTRODUCTION
Coronal mass ejections (CMEs) are a significant component of space weather that affects the Earth and human activities in near-Earth space. These eruptions of plasma and magnetic field can interact with the Earth's magnetic field to induce damaging electrical currents and accelerate harmful bursts of energetic particles.
We are continuously developing capabilities to forecast space weather and the impacts it may cause. CMEs can take anywhere from 1–3 days to reach the Earth after they erupt, so to be able to produce a forecast with a longer lead time than that, we need to be able to predict CMEs before they occur.
This will require an improved understanding of the mechanisms involved in CME initiation.
One theory is that CMEs are driven by the ideal magnetohydrodynamic torus instability <cit.>. Magnetic flux ropes in the solar atmosphere carry electric currents and tend to expand radially via the hoop force, but the ambient magnetic field surrounding the flux rope contributes a stabilising tension force <cit.>. However, if the strength of the restraining magnetic field drops off sufficiently rapidly with height, radial expansion of the flux rope will be unstable, leading to runaway expansion and the eruption of the flux rope as a CME <cit.>.
We can quantify the gradient of the magnetic field with height using a parameter known as the decay index, n, where
n = - d lnB_ext,p/d lnR.
Here, B_ext,p is the poloidal component of the magnetic field external to a magnetic flux rope (the component that contributes the restraining tension force), where the poloidal direction is perpendicular to the axial (toroidal) direction of the flux rope, and R is the distance in the solar radial direction.
For a symmetric torus with a major axis much larger than its minor axis, the critical value of the decay index at and above which the torus instability sets in is n_c=1.5 <cit.>, however observations and simulations suggest values that range from 1<n_c<2 <cit.>.
We can define the height at which the critical value of the decay index occurs as the critical height, h_c, and this will vary in time and space as the magnetic field evolves. Beneath the critical height is a torus-stable zone, and the region above the critical height is a torus unstable zone <cit.>.
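In practice, once the strapping (poloidal) field strength has been evaluated on a grid of heights above the polarity inversion line, for example from a potential-field extrapolation, the decay index and critical height follow directly from the definition above. The sketch below assumes the field profile is already available as an array; the function name and the choice of height coordinate are illustrative.

```python
import numpy as np

def critical_height(heights, B_p, n_crit=1.5):
    """Lowest height at which the decay index n = -dln(B_p)/dln(h) reaches n_crit.

    heights : 1D array of heights (monotonically increasing, > 0)
    B_p     : strapping (poloidal) field strength at those heights (> 0)
    Returns np.nan if the threshold is never reached on the grid.
    """
    n = -np.gradient(np.log(B_p), np.log(heights))   # decay index on the grid
    above = np.where(n >= n_crit)[0]
    return heights[above[0]] if above.size else np.nan
```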
<cit.> studied a sample of 42 active regions, quantifying how magnetic flux, the separation between opposite polarities, and the critical height in each region evolved over time and how these changes correlated with CME occurrence.
Firstly, it was found that the CME rate was twice as high during periods when magnetic flux in the active regions was increasing than when it was decreasing, interpreted as resulting from the emergence of magnetic flux that leads to injection of free magnetic energy and increased complexity in the photospheric inversion lines.
Secondly, during these times of increasing magnetic flux, 63% more CMEs occurred per unit time when the critical height in the source active region was increasing rather than decreasing over a period of several hours before the eruption.
This seems like a surprising result because a rising critical height should make it harder for a flux rope to become torus unstable and erupt as a CME.
To help understand this observation, it is helpful to note that the critical height in bipolar magnetic regions is strongly correlated with the distance between the flux weighted centres of opposite magnetic polarities <cit.>. Rising critical heights are therefore often observed when photospheric magnetic polarities move apart from each other, which can occur on a timescale of hours in the early emergence phase of active regions as an observational manifestation of the emergence of Ω-loops through the photosphere.
Since emerging magnetic flux can cause magnetic polarities to move apart, which in turn leads to higher critical heights, it can be difficult to separate the roles changing magnetic fluxes and critical heights play in causing eruptions.
Furthermore, CMEs were observed during phases of all combinations of increasing/decreasing magnetic flux and increasing/decreasing polarity separation.
Still, the results of <cit.> suggest the increase of magnetic flux is more important for creating the environment in which CMEs occur than changes in the critical height.
In addition to studying how the critical height changes over time, it is also important to understand how the critical height compares to the height of an embedded flux rope.
It has been shown that the critical height is typically lower in complex multipolar regions than in simple bipolar regions <cit.>, and that the most magnetically complex active regions can produce many CMEs <cit.>.
Active region properties show some variation over the solar cycle <cit.>, and with more active regions on the Sun around the cycle maximum, the overall magnetic complexity increases.
More CMEs occur around solar maximum than at solar minimum <cit.>, with CME rates correlating closely with the sunspot number. The simplest explanation for this is that there are more sunspot groups present to produce CMEs, but it is also possible that the rate of CMEs produced per active region varies with the solar cycle.
The above points raise the question of whether the critical height also shows a temporal variation on the timescale of a solar cycle.
In this study, we use data from across three solar cycles to investigate changes in the critical height and how they compare with the magnetic fluxes and separations between opposite polarities in solar magnetic regions.
We outline the data used in this work in Section <ref> and describe our methods in Section <ref>. We present our results in Section <ref>, discuss our findings in Section <ref>, and summarise our conclusions in Section <ref>.
§ DATA
The Solar and Heliospheric Observatory (SOHO; ) was launched on 2 December 1995. Onboard is the Michelson Doppler Imager (MDI; ), which took measurements from 23 April 1996 until 27 October 2010 that enable the line-of-sight component of the photospheric magnetic field to be determined at an image cadence of 96 minutes and a spatial pixel size of 2”. On 11 February 2010, the Solar Dynamics Observatory (SDO; ) was launched, and its Helioseismic and Magnetic Imager (HMI; ) instrument produces 3D vector magnetograms at a cadence of 12 minutes with a pixel size of 0.5”. We use magnetograms from the SHARP (Spaceweather HMI Active Region Patch; ) and SMARP (Spaceweather MDI Active Region Patch; ) datasets, which include cutouts of regions of significant photospheric magnetic flux that are tracked as they transit the solar disc. Each active region patch in the SHARP and SMARP datasets is assigned with a unique HARP or TARP (HMI/Tracked Active Region Patch) number, respectively, for identification, and they may contain zero or more active regions as designated by NOAA (the National Oceanic and Atmospheric Administration of the United States).
In our work, we use HARPs and TARPs from the beginning of May 1996 until the end of October 2023, covering solar cycles 23 and 24, as well as the rising phase of solar cycle 25. We select regions that contain zero or one NOAA active region (patches with zero active regions are hereafter referred to as ephemeral regions), that were observed between 60^∘ east and west of central meridian (as observed by SOHO and SDO), and were significantly non-unipolar (i.e., at least 10% of the total unsigned magnetic flux in the region was of the minority magnetic polarity, whether positive or negative). This non-unipolarity condition was enforced to ensure there would be clear polarity inversion lines in the regions of study, as these are essential to our method of computing the critical torus instability height (see Section <ref>), and indeed are where pre-eruptive magnetic flux ropes necessarily form.
To observe the evolution of the magnetic regions, we used magnetograms in a cylindrical equal area projection at an image cadence of 24 hours (taken at approximately 00:00 UT each day).
In total, we used 21584 magnetograms from 4436 SMARPs and SHARPs (see Table <ref> for a breakdown of the dataset across the ARP (Active Region Patch) types and the numbers of active regions and ephemeral regions).
The number of magnetograms used from each month is shown in the top panel of Figure <ref>. Since we use magnetograms from TARPs and HARPs, it is no surprise that the monthly number of magnetograms in our sample varies with the solar activity cycle.
The “butterfly diagram” in the bottom panel of Figure <ref> shows that the magnetograms in our study come from a wide range of active latitudes that evolves as expected throughout each sunspot cycle.
We spatially resampled magnetograms from the SHARP dataset by a factor of 4 to match the 2” pixel size of the SMARP data.
For SHARP magnetograms, we used magnetograms of the radial magnetic field component (B_r); however, MDI magnetograms represent the line-of-sight field component. To approximate the radial field component in the SMARP magnetograms, we used the method of <cit.>, first performing a potential field extrapolation that uses the line-of-sight field component as the photospheric boundary condition to obtain a 3D vector magnetic field model, and then taking the B_r component from the bottom of the extrapolation as a modelled magnetogram. As noted by <cit.>, this method generally performs better in active regions (where horizontal fields are significant) than a simple geometrical “μ-correction” that assumes all of the observed magnetic field is radial.
There is an approximately six-month overlap period in observations from when the SHARP series begins on 1 May 2010 and the SMARP series ends on 27 October 2010.
During this time, we find 49 NOAA active regions that are in both the SMARP and SHARP datasets (and meet our selection criteria for non-unipolarity and disc longitude).
Using the full MDI image cadence of 96 minutes (one in every eight HMI images), we find 3142 pairs of SHARP and SMARP magnetograms that were observed approximately co-temporally.
In Section <ref>, we perform a comparison of the critical height values obtained from these SMARP and SHARP magnetograms.
§ METHODS
§.§ Calculating the critical height
To compute the decay index and therefore the critical height at which a flux rope would become unstable if present, we need the coronal magnetic field external to such a flux rope. We approximate the external magnetic field with the potential magnetic field, which we produce using the method of <cit.> with B_r magnetograms as the photospheric boundary condition (B_r is obtained for SMARP data using the method described in Section <ref>).
Without modelling flux ropes themselves, we cannot accurately determine the orientation of the poloidal magnetic field component, which is the necessary component in Equation <ref>. Instead we approximate the poloidal component with the horizontal field component, as has been assumed in other studies <cit.>.
We use a method similar to that of <cit.> to calculate the critical height. That is, polarity inversion lines (PILs) are identified by heavily smoothing the magnetograms with a 20 × 20 pixel spatial average and locating pixels whose neighbours are of the opposite magnetic polarity; the critical height is then identified above each PIL pixel, and the mean critical height is calculated along the PIL.
In some cases, the decay index exhibits a “saddle”-shaped profile with height above the PIL <cit.>, in which there are multiple critical heights with a torus-stable zone between them. As in <cit.>, where multiple critical heights are found above one pixel, we select the lowest because these tend to occur at comparable heights to those calculated in regions where there is no saddle (typically ∼40 Mm). Furthermore, although saddle-shaped decay index profiles with multiple critical heights and an intermediate stable zone can cause failed or two-step eruptions <cit.>, CMEs can still occur from such regions, particularly when the decay index at the minimum of the saddle is not too small <cit.> and when a large enough Lorentz force is applied to give the erupting structure enough momentum to traverse the torus-stable region <cit.>. Magnetic reconnection may also play a role in continuing to accelerate an erupting structure after an initial torus instability subsides.
Whilst <cit.> manually defined the relevant sections of the detected PILs, we implement an automated method, owing to the large number of events in our sample, that uses the field strength pixel bitmaps included with the SHARP and SMARP datasets. We select PIL pixels that are within the weak field bitmaps, but also within regions where dilated positive and negative strong field bitmaps overlap with each other (similarly to how the R parameter is computed to quantify sharp gradients across inversion lines), resulting in masked PIL pixels that lie between strong concentrations of opposite polarity magnetic fluxes (see Figure <ref>).
Sometimes this method identifies PIL pixels that are further from the core of the active regions than we would like, and particularly in multipolar active regions, there is sometimes more than one distinct PIL identified, meaning the critical heights from each PIL are averaged into a single critical height for the magnetogram. However, the large number of magnetograms we are using should help to reduce the impact of outliers where the PILs are not identified as clearly.
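As an illustration of this procedure, the following minimal Python sketch computes the decay index and the (lowest) critical height for columns of the extrapolated horizontal field above the masked PIL pixels, and then averages over the PIL. The function names and the array layout are ours; the potential-field extrapolation and the PIL mask are assumed to have been produced as described above, and heights must be strictly positive.

```python
import numpy as np

def critical_height(B_h, heights, n_crit=1.5):
    """Lowest height at which the decay index n = -dln(B_h)/dln(h)
    first reaches n_crit, for one column above a PIL pixel.

    B_h     : 1D array of horizontal field strength vs. height [G]
    heights : 1D array of heights above the photosphere [Mm], all > 0
    Returns np.nan if the column never becomes torus unstable.
    """
    # decay index via centred differences in log-log space
    n = -np.gradient(np.log(B_h), np.log(heights))
    unstable = np.where(n >= n_crit)[0]
    return heights[unstable[0]] if unstable.size else np.nan

def mean_critical_height(Bh_columns, heights, n_crit=1.5):
    """Average the critical height over all masked PIL pixels
    (Bh_columns has shape [n_PIL_pixels, n_heights])."""
    h_c = [critical_height(col, heights, n_crit) for col in Bh_columns]
    return np.nanmean(h_c)
```

Taking the first height at which n >= n_crit automatically selects the lowest critical height when a saddle-shaped decay index profile produces more than one.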
§.§ Quantifying magnetic flux and polarity separation
To quantify the radial magnetic flux in each B_r magnetogram, we first smooth the data using a 15 × 15 pixel spatial average. Then, we create a mask of pixels where the smoothed values are greater than a threshold magnitude of 100 G. We apply this pixel mask to the unsmoothed magnetograms and integrate the positive and negative flux densities, thereby excluding weak field pixels where the signal-to-noise ratio is poor. Finally, we calculate the unsigned magnetic flux as the mean of the absolute values of positive and negative magnetic flux to give a single magnetic flux value for each magnetogram.
We quantify the characteristic separation between positive and negative magnetic polarities in each magnetogram using the same 15 × 15 pixel spatially smoothed magnetograms as are used to calculate the magnetic flux. Using only pixels where the magnitude of smoothed flux density is greater than 100 G, we compute the flux-weighted centres of positive and negative magnetic flux. Examples of these two centroid coordinates are shown with orange circles in Figure <ref>, and we use the distance between the pair of centroids as our measure of the separation between opposite magnetic polarities in each magnetogram.
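A minimal sketch of these two measurements is given below. The function and variable names are ours; we assume a B_r magnetogram in a cylindrical equal area projection with a known pixel area and linear pixel size, and, for illustration, weight the polarity centroids by the smoothed flux densities.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def flux_and_separation(br, pixel_area_cm2, pixel_size_Mm, thresh=100.0):
    """Unsigned flux [Mx] and polarity separation [Mm] from a B_r map [G]."""
    smoothed = uniform_filter(br, size=15)            # 15 x 15 boxcar smoothing
    mask = np.abs(smoothed) > thresh                  # exclude weak-field pixels

    # flux from the unsmoothed map, restricted to the mask
    pos = np.where(mask & (br > 0), br, 0.0)
    neg = np.where(mask & (br < 0), br, 0.0)
    flux_pos = pos.sum() * pixel_area_cm2             # G cm^2 = Mx
    flux_neg = -neg.sum() * pixel_area_cm2
    flux = 0.5 * (flux_pos + flux_neg)                # mean unsigned flux

    # flux-weighted centroids of the smoothed positive/negative polarities
    yy, xx = np.indices(br.shape)
    wp = np.where(mask & (smoothed > 0), smoothed, 0.0)
    wn = np.where(mask & (smoothed < 0), -smoothed, 0.0)
    cp = np.array([(xx * wp).sum(), (yy * wp).sum()]) / wp.sum()
    cn = np.array([(xx * wn).sum(), (yy * wn).sum()]) / wn.sum()
    separation = np.hypot(*(cp - cn)) * pixel_size_Mm
    return flux, separation
```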
§.§ Comparison of critical heights from SMARP and SHARP extrapolations
Before we can make conclusions using observations from MDI and HMI, we need to understand any systematic differences between the datasets.
The magnetograms from the two instruments are produced using observations across different spectral lines, with MDI observing in a tunable band around the Ni1 6768 Å absorption line and HMI similarly around the Fe1 6173 Å absorption line.
Furthermore, unlike the line-of-sight MDI magnetograms, the HMI vector magnetograms are produced using the Very Fast Inversion of the Stokes Vector <cit.>, and as outlined in Section <ref>, HMI has a better spatial resolution than MDI.
Therefore, even when the two telescopes observe the same region of the Sun, there are likely to be some differences in the inferred magnetic fields.
Here, we calibrate the critical height values we obtain from MDI and HMI observations of the same active regions.
As introduced in Section <ref>, we find 3142 pairs of SHARP and SMARP magnetograms that contain the same NOAA active region observed almost cotemporally.
Even when observing the same regions at the same time, the fields-of-view of the SHARPs and SMARPs are generally different, so we crop and use only the common area that is contained within both magnetograms.
To account for the different spatial resolutions, we spatially resample the HMI magnetograms by a factor of 4 to match the MDI resolution.
Then, using the method described in Section <ref>, we extrapolate potential coronal magnetic fields from the SHARP and SMARP magnetograms and calculate the mean critical height above photospheric PILs.
For each SHARP-SMARP magnetogram pair, we identified PIL pixels using the mask made from the SHARP data to ensure the same pixels were used to calculate the critical height in both datasets.
We obtain critical heights from the 3142 pairs of SHARP and SMARP magnetograms during the period of overlapping observations, and these are presented in Figure <ref>.
A number of outliers where the critical heights calculated from the SHARP magnetograms are significantly larger than from the SMARPS can be seen in Figure <ref> (marked with orange `X' symbols).
Specifically, we exclude 11 points where the SHARP critical height is greater than 34 Mm but the critical height from a SMARP magnetogram at the same time is < 5 Mm. All of these datapoints come from early in the observed lifetime of HARP 00224, when it contained the weak and dispersed remnants of a decayed active region before NOAA 11119 emerged into it. The weak field bitmap used to search for PIL pixels between opposite polarities (described in Section <ref>) only covered the negative polarity, so very few PIL pixels were identified. Despite the fact that we compute critical heights using the same PIL pixels in both the SHARP and SMARP magnetograms, the critical height values obtained by averaging across so few PIL pixels are unreliable. We exclude the values obtained from these 11 magnetograms from our study.
We perform an orthogonal distance regression fit to the SMARP and SHARP critical height pairs to see how they compare. Constraining the fit to pass through the origin, we find a gradient of 0.93 with a Pearson's correlation coefficient of r=0.838. This suggests that the critical heights obtained from SMARP magnetograms are, on average, slightly lower than from the corresponding SHARP magnetograms.
However, whilst there are magnetograms where the SHARP critical height is higher than the paired SMARP value, there are also many magnetograms where the opposite is true. We take the ratio of the SHARP-to-SMARP critical height for each magnetogram pair, and find the median ratio across the dataset to be 0.99. This suggests there are roughly as many magnetogram pairs where the SHARP critical height is higher as there are pairs where the SMARP critical height is higher, and therefore no systematic scaling is required for either the HMI or MDI critical heights.
Finally, we perform the Student's t-test and the Wilcoxon signed rank test on the HMI and MDI critical heights and find p-values of 0.40 and 0.93, respectively. Both of these are significantly greater than 0.05, so we cannot reject the null hypothesis. In other words, there is no evidence to suggest the HMI and MDI critical heights are meaningfully different from one another.
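For reference, the comparison statistics quoted above can be reproduced with standard SciPy routines, as in the sketch below. We assume paired arrays of SHARP and SMARP critical heights (in Mm); the t-test is applied here to the paired samples, and the function name is ours.

```python
import numpy as np
from scipy import stats, odr

def compare_critical_heights(hc_sharp, hc_smarp):
    """Compare paired critical heights (Mm) from SHARP and SMARP maps."""
    hc_sharp, hc_smarp = np.asarray(hc_sharp), np.asarray(hc_smarp)

    # orthogonal distance regression through the origin: hc_sharp = m * hc_smarp
    model = odr.Model(lambda beta, x: beta[0] * x)
    fit = odr.ODR(odr.Data(hc_smarp, hc_sharp), model, beta0=[1.0]).run()
    slope = fit.beta[0]

    pearson_r, _ = stats.pearsonr(hc_smarp, hc_sharp)
    median_ratio = np.median(hc_sharp / hc_smarp)
    t_p = stats.ttest_rel(hc_sharp, hc_smarp).pvalue   # paired t-test
    w_p = stats.wilcoxon(hc_sharp, hc_smarp).pvalue    # signed-rank test
    return slope, pearson_r, median_ratio, t_p, w_p
```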
Whereas above we cropped the commonly-observed field-of-view from each pair of SHARP and SMARP magnetograms and averaged their critical heights using the same mask of PIL pixels made using the SHARP data and bitmap, we repeat the analysis whilst treating the SHARP and SMARP magnetograms independently as an additional test of how well the results obtained from each agree. Whilst the spatial resolution of the SHARP magnetograms is still downsampled by a factor of 4 (matching that of the SMARP magnetograms) in order to save computational time, this time we compute critical heights with different pixel masks in the SHARP and SMARP magnetograms. The full SHARP and SMARP fields-of-view are considered and PIL pixels are identified independently for each dataset using their respective bitmaps and B_r.
The independently-computed SHARP and SMARP critical heights are still in very close agreement, albeit slightly less so than we found using the previous method.
We now only find and exclude 3 magnetograms that fit the definition for outliers we used above (i.e. where the SHARP critical height is greater than 34 Mm but the critical height from a SMARP magnetogram at the same time is < 5 Mm). These magnetograms still correspond to NOAA 11119.
The Pearson's correlation coefficient of the SHARP and SMARP critical heights using this method is r=0.762 (compared to the previous r=0.838) and an orthogonal distance regression fit results in a gradient of 0.87 (compared to the previous 0.93), suggesting the average critical height from the SHARP magnetograms is greater than from the SMARP magnetograms by a little bit more than was seen by using the other method (i.e. slightly further away from a ratio of 1.0).
We again calculate the ratio of the critical heights from each SHARP-SMARP pair and take the median ratio across the dataset, resulting in a ratio of 1.05. This is slightly larger than the ratio of 0.99 found using the previous method, but is still very close to unity, suggesting no significant systematic difference between the HMI and MDI critical heights.
§ RESULTS
We investigate how the magnetic flux, polarity separation and critical height vary throughout the SMARP and SHARP datasets.
§.§ Active region flux, polarity separation, and critical height
In the top panel of Figure <ref>, we plot the calculated critical height (h_c) against the measured separation between opposite polarities (d) for each of the 21584 magnetograms. The critical height and polarity separation are strongly correlated, with a Pearson correlation coefficient (r) of 0.854, and we perform a linear fit to the data, resulting in the relationship h_c = 0.5d + 7.35.
The colour of each datapoint represents the unsigned magnetic flux calculated in that magnetogram.
The smallest (largest) magnetic fluxes are generally seen in the magnetograms with the smallest (largest) polarity separations and lowest (highest) critical heights.
Specifically, many of the weaker magnetic fluxes (<10^21 Mx) are measured in magnetograms with small polarity separations (<80 Mm) and small critical heights (<40 Mm).
However, a small population of magnetograms with polarity separations 50<d<170 Mm and critical heights h_c>100 Mm are seen to have abnormally small magnetic fluxes <10^21 Mx, and there are also two magnetograms with exceptionally high critical heights >250 Mm for their polarity separations (≈125 Mm).
These magnetograms all contain magnetic flux concentrations that are very close to quiet Sun conditions, captured either early in the ARP's life before significant flux emergence has taken place, or after the region's magnetic flux has decayed.
In the middle panel of Figure <ref>, we show the critical height against the unsigned magnetic flux on a log-log scale for each of the 21584 magnetograms, with colours corresponding to the polarity separation. The Pearson correlation coefficient between these two quantities is r=0.738, and a linear fit in log-log space gives h_c = 6.5 × 10^-6 Φ^0.31, where Φ is measured in Mx and h_c is in Mm.
The largest polarity separations are generally seen in the magnetograms with the largest magnetic fluxes and the highest critical heights.
There are a few outliers with polarity separations >10^2 Mm and critical heights <10^1.4 Mm.
In the bottom panel of Figure <ref>, we plot the magnetic flux (measured in Mx) against the polarity separation (in Mm) from all 21584 magnetograms using a log-log scale. These parameters have a Pearson correlation coefficient of r=0.600, and a linear fit in log-log space gives the relationship Φ = 1.97 × 10^19 Mx d^1.23, where d is measured in Mm and Φ is in Mx.
For direct comparison with the relationship obtained by <cit.> (presented in Section <ref>), we also perform the fit with d measured in degrees, finding the relationship Φ = 4.23 × 10^20 Mx d^1.23.
We also repeat the linear fitting of unsigned flux against polarity separation after separating the magnetograms that contain a NOAA active region from those which do not (ephemeral regions).
With separations, d, in Mm, we find Φ = 3.27 × 10^19 Mx d^1.14 for NOAA active regions (r=0.566) and Φ = 7.42 × 10^18 Mx d^1.31 in ephemeral regions (r=0.772).
When d is measured in degrees, these relationships are Φ = 5.64 × 10^20 Mx d^1.14 for NOAA active regions and Φ = 1.95 × 10^20 Mx d^1.31 in ephemeral regions.
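All of the power-law relationships quoted in this section follow from linear fits in log-log space, e.g. as in the following short sketch (function name ours).

```python
import numpy as np
from scipy import stats

def powerlaw_fit(x, y):
    """Fit y = A * x**k by linear regression in log-log space.
    Returns (A, k, pearson_r) computed from the log10 values."""
    lx, ly = np.log10(x), np.log10(y)
    res = stats.linregress(lx, ly)
    return 10**res.intercept, res.slope, res.rvalue

# e.g. A, k, r = powerlaw_fit(separation_Mm, flux_Mx)   # expect k ~ 1.2, r ~ 0.6
```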
§.§ Temporal evolution of critical height, polarity separation, and magnetic flux
In Figure <ref>, we show the mean unsigned flux (top), the mean separation between the flux-weighted centres of positive and negative magnetic flux (middle), and the mean critical height (bottom) of all the magnetograms in each month of our data sample containing a single NOAA active region (left column) and ephemeral regions (right).
The standard deviations of the values across all magnetograms in each month are presented as errorbars to indicate the observed ranges in each parameter.
Though the standard deviations, σ, are often large, particularly around solar maximum, the standard error (σ/√(N)) in the monthly means is generally very low due to the large number of magnetograms sampled, N. This suggests the monthly mean values are well-constrained.
For comparison, we also show the 13-month smoothed sunspot number (issued by the Solar Influence Data Analysis Center) in each panel of Figure <ref>, and we indicate the times of sunspot minima which demarcate solar cycles 23, 24 and 25. These took place in August 1996, December 2008 and December 2019.
For the NOAA active regions, the lowest monthly means of unsigned magnetic flux, polarity separation, and critical height are seen around 1996, 2008 and 2019 — around the same times as the sunspot minima — with typical values < 10^21 Mx, 25–40 Mm, and 10–20 Mm, respectively.
There is relatively little range in the values observed in any given month around these times of lower solar activity.
Periods of consistently higher monthly mean values occur from roughly 1998–2007, 2010–2017, and from 2021 until the end of our dataset in October 2023, with unsigned fluxes > 10^21 Mx, polarity separations of 50–80 Mm, and critical heights of 20–35 Mm. These periods span the years around the times of maximum observed sunspot numbers in solar cycles 23 and 24, and appear to fit with the increasing numbers of sunspots in solar cycle 25 that are currently predicted to peak in 2024 or 2025 <cit.>.
Although the monthly means are higher during these periods of higher solar activity, the spreads seen each month show that values from individual magnetograms can still be almost as low as those seen around solar minimum.
However, significantly higher values of unsigned magnetic flux are also seen, with unsigned magnetic fluxes up to ∼ 10^22 Mx, polarity separations up to ≈ 150 Mm, and critical heights of up to ≈ 60 Mm in many months. This demonstrates the larger range of magnetic fluxes, polarity separations, and critical heights that are seen during periods of increased solar activity.
The ephemeral regions also do not exhibit a wide range of values around solar minimum, with very few high values of magnetic flux, polarity separation and critical height around these times (typical mean monthly fluxes ∼ 10^20 Mx, polarity separations ≈ 30 Mm and critical heights ≈ 15 Mm). However, the quantities vary significantly from month to month around solar maximum, from some of the highest observed values (fluxes approaching 10^22 Mx, polarity separations >100 Mm and critical heights > 40 Mm) to lows comparable to those seen around solar minimum.
§ DISCUSSION
§.§ The correlation of magnetic flux, polarity separation, and critical height
We study the relationships between the total unsigned magnetic flux (Φ), the separation between the flux-weighted centres of positive and negative magnetic polarity concentrations (d), and the critical height h_c in 21584 magnetograms from May 1996 – October 2023.
Firstly, we find that the critical height above PILs is positively correlated with the separation between positive and negative polarities, resulting in a relationship of h_c = 0.50d + 7.35 and a strong Pearson's correlation coefficient of r=0.854. This is in good agreement with the findings of previous studies (h_c/d=0.54; , h_c/d=0.4±0.1; , h_c/d=0.52±0.04; ).
The outliers in the top panel of Figure <ref> that have small magnetic fluxes and relatively large critical heights (mentioned in Section <ref>) correspond to magnetograms that show approximately quiet Sun conditions. These may be captured either early in the region's life before significant flux emergence has taken place, or after the region's magnetic flux has decayed. In these quiet-Sun-like conditions, there are no strong, localised bipoles, so there are no strong field gradients (and therefore no critical heights) low down in the corona. Instead, we suggest the coronal magnetic field in the quiet Sun is generally dominated by large-scale global fields from distant bipoles, which decay over larger length scales and result in higher critical heights.
Secondly, the critical height also correlates closely with magnetic flux.
<cit.> identified a correlation between h_c and logΦ, but in our dataset, we find a better fit when considering log(h_c) and log(Φ) (Pearson's correlation coefficient r=0.738). We fit the relation h_c = 6.5 × 10^-6 Φ^0.31, where h_c is in Mm and Φ is in Mx.
The outliers in the middle panel of Figure <ref> that have large polarity separations and relatively small critical heights (mentioned in Section <ref>) come from magnetograms that contain multiple magnetic bipoles or one strong bipole and another region of dispersed flux.
This causes our method of calculating the separation between opposite magnetic polarities to perform poorly, as the positions of the two flux-weighted centres of positive and negative flux are averaged between the multiple bipoles and/or the regions of dispersed magnetic flux.
Furthermore, when more than one distinct PIL is identified in multipolar configurations, we average the critical heights from each PIL into a single value for the magnetogram, even though the critical heights can vary significantly from one PIL to another.
Thirdly, we show that the logarithm of the separation between opposite magnetic polarities, d, correlates with the logarithm of the total magnetic flux in an observed region, Φ, with a relationship of Φ∼ d^1.23 and a Pearson's correlation coefficient of r=0.600 (in log-log space). This is in good agreement with the findings of previous studies (Φ∼ d^1.3; , Φ∼ d^1.15; ).
<cit.> studied only bipolar magnetic regions with two sunspots, and when we examine only the magnetograms from our dataset that contain a NOAA active region, we find Φ∼ d^1.14 (r=0.566), which agrees very well with their result.
Conversely, if we fit to only the magnetograms in our dataset that do not contain a NOAA active region (weaker, ephemeral regions), we find Φ∼ d^1.31 (r=0.772). This is very close to the <cit.> result, although they considered a broad range of magnetic regions with and without sunspots, from strong regions ∼ 10^22 Mx down to weaker (likely ephemeral) regions ∼ 10^20 Mx.
In summary, regions with larger magnetic fluxes tend to have larger separations between their opposite magnetic polarities, and regions with larger polarity separations tend to have higher critical heights.
Therefore, it follows that, since the magnetic flux in our studied magnetograms varies with the solar cycle, so too does the separation between opposite polarities and the critical height (seen clearly in Figure <ref>).
§.§ Solar cycle variation of the critical height
In active regions, the mean critical height for the onset of the torus instability varies with the solar cycle, as does the unsigned magnetic flux and the separation between opposite polarities. Minimum smoothed sunspot numbers were observed in 1996, 2008 and 2019, around which times we see the smallest mean monthly magnetic fluxes (< 10^21 Mx), the shortest polarity separations (30–50 Mm) and the lowest monthly critical heights (15–20 Mm).
Maximum smoothed sunspot numbers were observed in late 2001 and 2014 (with another maximum expected around 2025), i.e., in the middle of the extended periods when we see consistently higher mean monthly unsigned fluxes (> 10^21 Mx), polarity separations (60–100 Mm) and critical heights (30–45 Mm).
The solar cycle variation of these quantities is less clear in ephemeral regions. The largest monthly means of magnetic flux, polarity separation, and critical height are observed around solar minimum, but there are also many months around solar maximum with very low mean quantities. In other words, there is greater variance in the monthly means around solar maximum, but comparably low values are still observed throughout each solar cycle.
There are naturally fewer magnetograms in our dataset around solar minimum, and this will enable extreme values to more strongly affect the mean and the spread of values seen in the magnetograms each month.
However, at least for active regions, it is still interesting that the fluxes, polarity separations and critical heights at solar minimum are generally amongst the lowest values observed, with very few large fluxes, separations, or critical heights identified (as evidenced by the small spreads around these times in Figure <ref>).
We note that the few regions with strong magnetic fluxes (>10^22 Mx) seen around periods of solar minimum are all from latitudes lower than 12^∘, and therefore likely belong to the waning solar cycle, whereas regions with weaker magnetic fluxes (<5×10^20 Mx) are found over a wide range of latitudes up to ≈ 30^∘, which could represent the earliest emergences of the next cycle.
As shown in Figure <ref>, we bin the magnetic fluxes, polarity separations, and critical heights from our 21584 magnetograms into unsigned latitudinal bands of 5^∘ (i.e. no distinction between north and south) and examine the median values in each band.
Firstly, there appear to be exceptionally large polarity separations found between ± 45–50^∘, but this is likely due to anomalously large values in the 8 magnetograms in our sample at these latitudes, which all come from the weak field region, NOAA AR 08175.
Ignoring this, we generally see the largest magnetic fluxes, polarity separations, and critical heights in the ± 15–25^∘ latitudinal range, with smaller values found further towards the equator and the poles.
This fits with the scenario where these parameters vary with the solar cycle, because the latitudes of our magnetic regions also vary with the solar cycle. As shown in the bottom panel of Figure <ref>, magnetic regions typically lie at higher latitudes at the start of each solar cycle, and their latitude decreases over the next 11 years.
Our results suggest that, during the activity minimum start of each 11-year cycle, magnetic regions emerge at high latitudes and have relatively small magnetic fluxes, polarity separations, and critical heights. By solar maximum, magnetic regions exist at lower latitudes (∼ 20^∘) with stronger magnetic fluxes, larger polarity separations, and higher critical heights. Then, as the next sunspot minimum approaches, magnetic patches emerge closer to the equator, and their magnetic fluxes, polarity separations, and critical heights are smaller once again.
Around solar maximum, we do observe values in some magnetograms that are comparably low to those seen at solar minimum (see the lower ends of the standard deviation errorbar spreads in Figure <ref>).
These low critical heights may occur in complex, multipolar active regions, as suggested by <cit.>.
However, there are also many higher critical heights found each month around solar maximum.
In other words, there is a larger spread of high and low magnetic fluxes, polarity separations and critical heights around solar maximum, whereas there is only a small spread of relatively low values at solar minimum.
But more than just the difference in the range of values found from magnetograms at different points in the solar cycle, it is clear that the monthly mean values found in active regions around solar maximum are significantly greater than those seen at solar minimum.
Despite the sunspot peak of solar cycle 23 being larger than in solar cycle 24, the mean monthly values of each parameter are not significantly different from one cycle to another.
This suggests the variations seen in the magnetic flux, polarity separation and critical height throughout each cycle do not depend on the absolute number of sunspots (else, we would expect to see a difference in the flux, polarity separation and critical height values from one cycle to the next that depends on the peak sunspot number).
Instead, there appears to be an intrinsic evolution in the magnetic properties of the regions from solar minimum to solar maximum, irrespective of the magnitude of the sunspot number.
Higher critical heights should make it harder for a flux rope to become torus unstable and erupt as a CME, as the flux rope would have to reach that higher instability height.
But following this logic, the higher critical heights that we find in active regions around solar maximum appear to contradict the observation of more CMEs than at solar minimum.
The lower quantity of active regions could mean that fewer flux ropes form at solar minimum, explaining the relative lack of eruptions. But how are the many CMEs seen at solar maximum able to occur?
Perhaps CMEs around solar maximum are driven by mechanisms other than the torus instability, such as magnetic reconnection.
Or perhaps many of the solar maximum CMEs come from the regions where we still observe low critical heights. After all, low critical heights are found in magnetically-complex active regions <cit.>, which can produce many CMEs <cit.>.
However, we may have excluded many of these complex CME-producing magnetic regions with low critical heights by selecting only ARPs that contain one or no NOAA active region.
We must also consider the limitations of the methodology we have used in this paper. When calculating the critical height for each magnetogram, we take the mean of the critical heights identified in all of the identified PIL pixels.
However, where multiple distinct PILs exist, the critical height associated with one may be lower than the other. Our averaged critical height will be an overestimate for one PIL and an underestimate for the other, and a CME could erupt from either. We tried to minimize the occurrence of this scenario by excluding ARPs that contained multiple active regions and optimising our method of defining PILs, but the presence of complex active regions and new flux emerging into decayed ephemeral regions in our dataset means we still may mischaracterise the true critical height associated with a CME-producing ARP in some cases.
Still, for a statistically significant sample of 21584 magnetograms, our results show that the mean monthly critical height in active regions is higher at solar maximum than at solar minimum, and during many months around solar maximum, high critical heights are also observed in ephemeral regions.
Some pre-eruptive flux ropes may somehow be able to overcome these higher critical heights.
Understanding the heights of flux ropes is just as important as knowing the critical height when determining whether the torus instability can cause an eruption.
The cancellation of magnetic flux in the photosphere and chromosphere can form low-altitude flux ropes <cit.>. These flux ropes may have bald patch separatrix surface configurations <cit.>, in which their underside is line-tied to the photosphere by high β plasma. Similarly dense plasma can also manifest as an associated filament along dips in the helical magnetic field of a flux rope.
<cit.> identified 9477 prominences (filaments observed at the solar limb) between 2007 and 2009. 99% of these prominences were between 30^∘ and 60^∘ in latitude, and 82% lay at a height of about 26 Mm above the photosphere. These prominences were observed around solar minimum, and this height of 26 Mm is comparable to the critical heights we observe around solar minimum in our study (25 ± 5 Mm). However, such filaments would lie well below the typical active region critical heights we observe around solar maximum (roughly 35 ± 10 Mm).
On the other hand, quiescent filaments (long-lived and often appearing at high latitudes) tend to form or rise to much higher heights. <cit.> found the mean critical height at the onset of six quiescent filament eruptions to be 118.3 ± 47.4 Mm.
Quiescent filaments tend to form during the decay phases of active regions (which, after significant decay, may be classed as ephemeral regions), and the increased quantity of active regions around solar maximum means this can happen more frequently.
Indeed, the filament eruptions studied by <cit.> occurred between 2012 and 2014, demonstrating how quiescent filaments can form or rise to meet great critical heights around solar maximum periods.
We can also consider flux ropes that form in active regions without filaments.
<cit.> studied 47 active region CMEs and found the mean critical height at eruption onset was 43 ± 8 Mm.
Furthermore, <cit.> found the mean critical height in “hot channel” eruptions was 58.0 ± 33.6 Mm. These “hot channels” are signatures of heated plasma trapped in the magnetic field of active region flux ropes.
These hot flux ropes can form via magnetic reconnection in the corona that is triggered by the motions of emerging magnetic flux <cit.>.
They can form at heights of ≈ 100 Mm in the corona <cit.> with a hyperbolic flux tube configuration (HFT; <cit.>), in which magnetic reconnection can occur in a current sheet around the flux rope, enabling it to rise and reach higher altitudes.
<cit.> found high CME rates during phases of increasing magnetic flux, even though the critical height was also increasing at the same time.
The result in the present study, that magnetic fluxes and critical heights are higher at solar maximum (when the most CMEs occur), seems to echo this scenario.
We suggest that the effect magnetic flux emergence has on triggering a CME is greater than the extra stabilising effect provided by an associated rise in the critical height.
Emerging magnetic flux could cause hot channel HFT flux ropes to form at relatively high coronal heights, and the photospheric motions associated with the flux emergence could inject magnetic energy into such a flux rope, causing it to inflate <cit.> and rise above an increased critical height.
Additionally, newly-emerged magnetic flux could reconnect with — and be built into — a forming pre-eruptive flux rope, adding twist to the structure. This added twist could evolve a flux rope towards the threshold for the onset of the helical kink instability <cit.>, which could deform the flux rope axis and cause it to rise towards the critical torus instability height.
§ CONCLUSIONS
Using a large sample of 21584 MDI and HMI magnetograms from 1996–2023 containing NOAA active regions and ephemeral regions, we find that the critical torus instability height varies with the solar sunspot cycle.
The mean values of the critical height in months around solar cycle minima are found to be relatively low, and there are small spreads in the observed critical heights. In contrast, larger mean monthly critical heights occur around solar maximum, with larger spreads in the values that are seen each month.
We also see similar variations of the magnitude of unsigned magnetic flux and the separation between the centres of positive and negative magnetic polarities throughout solar cycles 23, 24 and 25, each also peaking around solar maximum and showing the lowest monthly values around solar minimum.
The critical height is strongly correlated with the separation between opposite magnetic polarities (r=0.854), and we fit a relationship of h_c = 0.5d + 7.35 to the values found from our 21584 magnetograms. This is in close agreement with relations found in previous studies <cit.>.
Secondly, we find that the logarithm of the critical height correlates well with the logarithm of magnetic flux (r=0.738), and thirdly, we provide an updated look at the connection between magnetic flux and polarity separation. Across our full dataset, we find the logarithm of magnetic flux correlates with the logarithm of polarity separation (r=0.600), and we fit the power law Φ∼ d^1.23. This sits well between the results found in previous studies <cit.>, and breaking the dataset down to separately examine active regions and ephemeral regions, we find relationships of Φ∼ d^1.14 and Φ∼ d^1.31, respectively.
In summary, all three parameters — unsigned magnetic flux, the separation between opposite magnetic polarities, and the critical torus instability height — correlate well with each other.
Therefore, we suggest the stronger magnetic fluxes found during periods of higher solar magnetic activity lead to larger separations (as emerging bipoles expand), and that the magnetic field strength of these larger-scale magnetic bipoles decays with height over larger length scales, resulting in higher critical heights.
Despite the higher critical heights found around solar maximum, more CMEs still occur then than at solar minimum. For the torus instability to play a role in driving these eruptions, the majority of CMEs must either originate from the regions that have the lowest critical heights, or the flux ropes that form around solar maximum must be able to overcome higher critical heights.
We acknowledge the helpful discussions with K. D. Leka with respect to radialising MDI magnetic field observations.
A.W.J. was supported by a European Space Agency (ESA) Research Fellowship and acknowledges funding from the STFC Consolidated Grant ST/W001004/1.
Data courtesy of NASA/SDO and the HMI science team, as well as the SOHO/MDI consortium. SOHO is a project of international cooperation between ESA and NASA.
This research has made use of SunPy 4.1.3 <cit.>, an open-source and free community-developed solar data analysis Python package <cit.>.
Facilities: SDO (HMI), SOHO (MDI)
Target Detection in Sea Clutter with Application to Spaceborne SAR Imaging
Shahrokh Hamidi
Department of Electrical and Computer Engineering
University of Waterloo
Waterloo, Ontario, Canada
[email protected]
September 9, 2024
=========================================================================================================================================================
§ ABSTRACT
In this paper, the challenging task of target detection in sea clutter is addressed. We analyze the statistical properties of the signals received from the scene and based on that, we model the amplitude of the signals reflected from the sea clutter according to the Weibull distribution.
Subsequently, we utilize the aforementioned information to design an adaptive threshold based on the Constant False Alarm Rate (CFAR) algorithm to detect the energy of the targets which have been buried in the sea clutter.
Thorough analysis of the experimental data gathered from the Canadian RADARSAT-1 satellite demonstrates the overall effectiveness of the proposed method.
Target detection, sea clutter, Weibull distribution, CFAR.
§ INTRODUCTION
As an all-weather, day-night, and active device, Synthetic Aperture Radar (SAR) is capable of providing crucial information from the surface of the Earth <cit.>. Following image reconstruction, a post-processing procedure is performed to obtain further critical information from the scene. Target detection is one of the areas that requires further analysis of the reconstructed image. In this paper, we specifically focus on targets buried in sea clutter. The goal is to establish a mechanism to extract the energy of the targets from the sea clutter. Several techniques have been reported in the literature, such as wavelet-based analysis <cit.> and fixed threshold-based methods <cit.>. The problem with using a fixed threshold to detect targets from clutter is that some of the key features of the desired targets can be ignored while the undesired signals are retained. Moreover, methods such as wavelet-based analysis are not able to capture the statistical properties of the signals. This, in turn, hinders their ability to obtain all the critical information about the desired targets.
In this paper, we utilize the technique based on the Constant False Alarm Rate (CFAR) method <cit.> which creates an adaptive threshold to extract the energy of the desired target while suppressing the energy of the undesired signals.
However, in order to be able to implement the CFAR-based adaptive threshold, we need to obtain the information about the statistical properties of the undesired signals which in this case is the sea clutter. To tackle this issue, we study several well-known distributions, namely, Weibull, Log Normal, Inverse Gaussian, Gamma, and Rayleigh and demonstrate that the Weibull distribution is a more appropriate distribution to model the statistical properties of the sea clutter.
In <cit.>, the authors have studied the statistical properties of the reflected SAR signals. In this paper, however, we specifically focus on sea clutter and also present additional statistical models, which distinguishes our work from <cit.>.
We further describe the SAR imaging procedure in detail and present experimental results to verify the effectiveness and accuracy of the proposed models, which makes the material presented in this paper distinct from <cit.>.
In contrast to <cit.>, we will not consider the K distribution. The reason for this is that the K distribution is formed by compounding two separate probability distributions, one representing the radar cross-section, and the other representing the speckle noise. However, prior to addressing the target detection in sea clutter, we remove the effect of the speckle noise by applying a median filter to the reconstructed SAR image. As a result, the K distribution is no longer a suitable model. Moreover, based on the experimental data that we present in this paper, we show that the Weibull distribution is also capable of modeling the sea clutter contaminated with speckle noise highly accurately.
We present several statistical models to describe the sea clutter and at the end show that the Weibull distribution is the most appropriate distribution to model the statistical properties of the sea clutter. We then utilize the Weibull distribution and establish an adaptive threshold to extract the energy of the desired targets from sea clutter.
The verification of the proposed approach is based on the real data gathered from the Canadian RADARSAT-1 SAR satellite <cit.>. The data has been gathered from English Bay in Vancouver Canada in which several ships have been located inside water which creates a perfect scenario to evaluate the results.
The organization of the paper is as follows.
In Section <ref>, we present the system model as well as the Range-Doppler algorithm to be used to reconstruct the SAR images from the raw data. In Section <ref>, we address the statistical analysis of the SAR data. Finally Section <ref> has been dedicated to the experimental results based on the real data gathered from the RADARSAT-1 SAR satellite which is followed by the concluding remarks.
§ MODEL DESCRIPTION AND IMAGE FORMATION
Fig. <ref> shows the system model in which the satellite is moving along its track and is collecting data from the surface of the Earth. The data collection is based on the strip-map mode.
The signal transmitted by the radar is a chirp signal which after being reflected from a point target and upon down-conversion is described as
s(t, η) = σ w_r(t - 2R(η)/c)w_a(η - η_c)
e^ -j 4 π f_cR(η)/c + j πβ t^2 + j 4 πβ t R(η)/c,
where f_c is the carrier frequency and the parameter β is given as b/T, in which b and T stand for the bandwidth and the chirp time, respectively. In addition, w_r is a rectangular window with length T and t is referred to as fast-time parameter.
In addition, σ is the complex radar cross section for the point reflector, η is referred to as slow-time parameter and R(η) is the instantaneous radial distance between the radar and the target which is given as R(η) = √(R^2_0 + v^2(η - η_c)^2). Moreover, w_a is a rectangular window with its length equal to the synthetic aperture length divided by v, where v is the speed of the satellite.
The ultimate goal in SAR imaging is to estimate the complex radar cross section σ.
The Range-Doppler algorithm is the first algorithm that was developed for spaceborne SAR image reconstruction <cit.>.
The first step in the Range-Doppler algorithm is to perform range compression which results in <cit.>
s_ rc(t, η) = σ p_r(t - 2R(η)/c)w_a(η - η_c)
e^ -j 4 π f_c R(η)/c,
where p_r(x) = sin(π x)/π x.
Next, we compensate for the range cell migration (RCM) phenomenon, which is the result of the coupling between the range and azimuth directions created by the non-zero squint angle of the antenna. Consequently, after RCM compensation, we can describe the signal in the Doppler domain as <cit.>
S_ rcmc(t, f_η) = σ p_r(t - 2R_0/c)W_a(f_η - f_η_c) ×
e^ -j 4 π f_c R_0/c e^ j πf^2_η/K_a,
in which K_a = 2v^2/λ R_0.
The final step for image formation in the Range-Doppler algorithm, is to compress the data in the azimuth direction which upon performing this stage the compressed signal in the range and azimuth directions is given as <cit.>
s_ rcac(t, η) = σ p_r(t - 2R_0/c)p_a(η) e^ -j 4 π f_c R_0/c×
e^ j 2 π f_η_cη,
where f_η_c is the Doppler centroid frequency <cit.>.
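As an illustration of the first step of this processing chain, the following Python sketch performs range compression of the raw data by frequency-domain matched filtering with the chirp replica; the RCM correction and azimuth compression then follow as described above. The function name and argument conventions are ours, and circular (wrap-around) edge effects are ignored.

```python
import numpy as np

def range_compress(raw, beta, T, fs):
    """Matched-filter each range line of the raw data (fast time along axis 1).

    raw  : 2D complex array, shape (n_azimuth, n_range_samples)
    beta : chirp rate b/T [Hz/s],  T : pulse duration [s],  fs : range sampling rate [Hz]
    """
    n = raw.shape[1]
    t = np.arange(int(round(T * fs))) / fs             # fast time over one pulse
    replica = np.exp(1j * np.pi * beta * t**2)          # baseband chirp replica
    H = np.conj(np.fft.fft(replica, n))                 # matched filter in the frequency domain
    return np.fft.ifft(np.fft.fft(raw, axis=1) * H, axis=1)
```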
§ STATISTICAL MODELING
In this section, we attempt to model the amplitude of the reflected signal from sea clutter. The main candidate is the Weibull distribution which its probability density function is described as <cit.>
p_W(x; α, β) = α/β(x/β)^α-1e^ - (x/β)^α, α > 0, β > 0.
Another possible option for consideration to model the sea clutter is the Log-normal distribution which its probability density function is given as <cit.>
p_ LN(x; η, γ) = 1/xη√(2 π) e^ - (log x -γ)^2/2 η^2, γ > 0, η > 0.
The next distribution that we consider in this paper, is the Inverse Gaussian distribution with the following probability density function <cit.>
p_ IG(x; μ, λ) = √(λ/2 π x^3) e^ - λ(x -μ)^2/2 μ^2 x, μ > 0, λ > 0.
The Gamma distribution with 2 degrees of freedom is the next best choice to model the sea clutter. The probability density function for the Gamma distribution is given as <cit.>
p_G(x; a, b) = b^a/Γ(a) x^a-1 e^-bx, a>0, b > 0.
The last distribution that we study in this paper, is the Rayleigh distribution. The probability density function for the Rayleigh distribution is described as <cit.>
p_R(x; σ) = x/σ e^ -x^2/2σ, σ > 0.
Based on the given probability of false alarm (p_ fa), we can calculate the adaptive threshold T_a as <cit.>
p_ fa = p(X > T_a| H_0) = ∫_ T_a^∞ p(X| H_0) dX,
where X describes the statistics of the cell under test and T_a is the adaptive threshold. Furthermore, H_0 represents the null hypothesis.
By using the experimental data that we present in this paper, we will demonstrate that among the proposed distributions, the Weibull distribution is the best distribution to model the sea clutter. Therefore, the calculation of the adaptive threshold is only performed for the Weibull distribution.
Consequently, based on (<ref>), the adaptive threshold T_ aW for the Weibull distribution, which has been given in (<ref>), is calculated as
T_ aW = β[log(1/p_ fa)]^1/α.
The maximum likelihood estimates for the parameters of the Weibull distribution, given in (<ref>), are expressed as <cit.>
β̂ = (1/N∑_i = 1^N x^α̂_i)^1/α̂,
α̂ = N/((1/β̂^α̂)∑_i = 1^N x^α̂_i log x_i - ∑_i = 1^Nlog x_i),
in which x_i is the i^ th realization of the random variable x.
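A minimal sketch of the Weibull parameter estimation and of the resulting adaptive threshold is given below. Combining (<ref>) and (<ref>) yields a single implicit equation for the shape parameter α̂, which we solve numerically; the function names and the bracketing interval for the root are our own choices.

```python
import numpy as np
from scipy import optimize

def weibull_threshold(clutter, p_fa=1e-3):
    """Maximum-likelihood Weibull fit of the clutter amplitudes and the
    adaptive threshold T satisfying P(X > T) = p_fa."""
    x = np.asarray(clutter, dtype=float)

    # implicit ML equation for the shape parameter alpha
    def shape_eq(a):
        xa = x**a
        return np.sum(xa * np.log(x)) / np.sum(xa) - 1.0 / a - np.mean(np.log(x))

    alpha = optimize.brentq(shape_eq, 0.05, 50.0)       # bracketing interval assumed
    beta = np.mean(x**alpha)**(1.0 / alpha)             # ML scale estimate
    T = beta * np.log(1.0 / p_fa)**(1.0 / alpha)        # since P(X > T) = exp(-(T/beta)^alpha)
    return alpha, beta, T

# An equivalent fit is available via scipy.stats:
#   alpha, _, beta = scipy.stats.weibull_min.fit(clutter, floc=0)
#   T = scipy.stats.weibull_min.isf(p_fa, alpha, scale=beta)
```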
In this paper, we implement the 2D CFAR <cit.> algorithm.
The implementation of the 2D CFAR method is based on the structure which has been shown in Fig. <ref>.
Consequently, the decision making process for the cell under test is expressed as <cit.>
X_ CUT≶_H_1^H_0μ_c + σ_c Q,
where X_ CUT is the amplitude of the cell under test. Furthermore, μ_c and σ_c are the sample mean and standard deviation computed from the clutter data of the local background.
Moreover, Q = T_ aW/μ̂ is the detector design parameter, which defines p_ fa and is set empirically, where T_ aW is the adaptive threshold described in (<ref>) and μ̂ denotes the mean value estimated from the underlying clutter model. In addition, H_0 and H_1 are the null and alternative hypotheses, respectively <cit.>.
The null hypothesis, H_0, represents the case in which the cell under test has been occupied by clutter only and the alternative hypothesis, H_1, on the other hand, is for the case in which the target is present and the cell under test contains the energy of the target.
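The decision rule above can be implemented as a sliding-window detector, e.g. as in the following brute-force sketch. The function name and the default guard and training cell counts are illustrative only, and Q is set from the Weibull-based threshold of (<ref>).

```python
import numpy as np

def cfar_2d(image, Q, n_train=(20, 20), n_guard=(10, 10)):
    """Sliding-window 2D CFAR detector.

    For each cell under test (CUT), the sample mean and standard deviation of
    the surrounding training cells (a guard band around the CUT is excluded)
    are computed, and a detection is declared if X_CUT > mu_c + sigma_c * Q.
    """
    image = np.asarray(image, dtype=float)
    tr_r, tr_c = n_train                 # training cells on each side
    gt_r, gt_c = n_guard                 # guard cells on each side
    half_r, half_c = tr_r + gt_r, tr_c + gt_c
    detections = np.zeros(image.shape, dtype=bool)

    rows, cols = image.shape
    for i in range(half_r, rows - half_r):
        for j in range(half_c, cols - half_c):
            window = image[i - half_r:i + half_r + 1,
                           j - half_c:j + half_c + 1].copy()
            # mask out the guard band and the CUT itself
            window[tr_r:tr_r + 2 * gt_r + 1, tr_c:tr_c + 2 * gt_c + 1] = np.nan
            mu_c, sigma_c = np.nanmean(window), np.nanstd(window)
            detections[i, j] = image[i, j] > mu_c + sigma_c * Q
    return detections
```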
In the next section, we present the experimental results.
§ EXPERIMENTAL RESULTS
In this section, we present the result of SAR image reconstruction based on experimental data gathered from the Canadian RADARSAT-1 satellite. The data is from English Bay in Vancouver Canada.
The specifications for the RADARSAT-1 satellite have been given in Table.<ref>.
We select the data related to English Bay. The reason is that there are several ships in this scene that act as isolated strong reflectors, which help in observing the effect of the RCM clearly and are also utilized to perform Doppler centroid estimation.
Furthermore, the ships in the sea water are the main subject of the paper. In other words, we attempt to detect the ships in sea clutter which is the main goal of the paper.
In order to reconstruct the image, we apply the Range-Doppler algorithm to the raw data. Fig. <ref>-(a) shows the range compressed data based on (<ref>).
The range compressed energy of the ships can be seen as a few skewed vertical lines. The skew demonstrates the effect of the RCM.
In order to perform the RCM compensation we need to estimate the unambiguous Doppler centroid frequency.
As we mentioned before, if the Doppler centroid frequency is larger than the PRF of the RADAR, there will be an ambiguity in the Doppler centroid frequency estimation.
One way to estimate the unambiguous value for the Doppler centroid frequency is by analyzing the trajectory of the strong isolated targets.
Fig. <ref> shows the range compressed image of several ships in sea water.
The skew in the trajectory of these targets is due to the non-zero Doppler centroid frequency. The energy of each one of them is spread over several different range cells.
The slope can easily be calculated and is equal to 0.034 range samples per azimuth sample. To estimate the Doppler centroid frequency, we first multiply the slope by c/2Fr to convert it from range samples to range distance, and then multiply it by the PRF to convert it from per azimuth sample to per unit azimuth time.
Hence, we have dR(η)/dη = 198.23 m/s. As a result, from f_ dc = -2/λdR(η)/dη <cit.>, the Doppler centroid frequency can be calculated as f_ dc = -7009 Hz.
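As a quick numerical check of this estimate, the following snippet reproduces the quoted values assuming nominal RADARSAT-1 parameters for the quantities listed in Table <ref> (range sampling rate Fr ≈ 32.317 MHz, PRF ≈ 1256.98 Hz, carrier frequency 5.3 GHz); these parameter values are assumptions made here for illustration.

```python
c = 299792458.0        # speed of light [m/s]

# assumed nominal RADARSAT-1 parameters (see Table 1)
Fr  = 32.317e6         # range sampling rate [Hz]
PRF = 1256.98          # pulse repetition frequency [Hz]
f_c = 5.3e9            # carrier frequency [Hz]

slope = 0.034                                   # range samples per azimuth sample
dR_deta = slope * c / (2 * Fr) * PRF            # radial velocity, ~198 m/s
lam = c / f_c
f_dc = -2 * dR_deta / lam                       # Doppler centroid, ~-7000 Hz
print(f"dR/deta = {dR_deta:.2f} m/s, f_dc = {f_dc:.0f} Hz")
```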
Next, we estimate the fractional part of the Doppler centroid frequency which is f^'_ dc <cit.>. The fractional part is essential in focusing the energy of the targets in the azimuth direction.
Fig. <ref> illustrates the power spectrum of the data in azimuth direction versus slow time frequency f_η. In order to reduce the effect of the noise, we have added the power spectrum corresponding to 2048 range cells. In order to be able to calculate the frequency component at which the signal reaches its maximum value, we have performed curve fitting.
From Fig. <ref>, we can estimate the fractional part of the Doppler centroid frequency as f^'_ dc = 531 Hz.
Finally, at the last stage we perform the azimuth localization. Consequently, the reconstructed image based on the Range-Doppler algorithm is obtained which has been presented in Fig. <ref>.
The image shown in Fig. <ref> suffers from speckle noise <cit.>. In order to remove the effect of the speckle noise, we introduce a 2D m × n filter and slide it over the reconstructed image while solving the following optimization problem,
min_a ∑_i=1^n∑_j=1^m| a_ ij-a|,
where a_ ij is the value for the ( ij)^ th pixel and a is the value chosen by the optimization problem for the (⌊(n-1)/2⌋ + 1, ⌊(m-1)/2⌋ + 1)^ th pixel. In fact, by solving the optimization problem in (<ref>) we are filtering the image with the median filter. In other words, we replace the value of each pixel with the median of the neighboring pixels. This proves to be a powerful method to decrease the effect of the speckle noise, as a result of which the fine structures of the image can be revealed.
The common practice for mitigating speckle noise in SAR images is multi-look processing <cit.>, which sacrifices azimuth resolution to suppress the speckle. In contrast, the median filtering approach that we propose removes the speckle noise efficiently while leaving the azimuth resolution intact; moreover, it does not smear the edges in the image, which is a significant property.
Fig. <ref> shows the result of speckle noise reduction for the reconstructed image depicted in Fig. <ref>. To remove the effect of the speckle noise we have applied the filter given in (<ref>) with m=n=6.
To analyze the effect of sea clutter on target detection, we focus on the part of the reconstructed image shown in Fig. <ref> that contains several ships in sea water. This part of the data-set is highlighted by the red rectangle in Fig. <ref>. The area inside the green rectangle contains only the energy backscattered by the sea water and is used to analyze the statistical properties of the sea clutter. Fig. <ref> shows the histogram of the image over this green-rectangle area, together with the probability density functions estimated for the Weibull, Log-normal, Inverse Gaussian, Gamma, and Rayleigh distributions based on (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), respectively.
From Fig. <ref>, it is evident that the Weibull distribution models the sea clutter much better than the other distributions.
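The comparison of candidate clutter models can be reproduced with standard maximum-likelihood fits. The sketch below (based on SciPy, with the location parameter fixed at zero, which is an assumption) ranks the distributions by their log-likelihood on the clutter-only samples:

```python
import numpy as np
from scipy import stats

def rank_clutter_models(amplitudes):
    """Fit candidate amplitude distributions to clutter-only pixels
    (e.g. the green-rectangle area) and rank them by log-likelihood."""
    models = {
        'Weibull':          stats.weibull_min,
        'Log-normal':       stats.lognorm,
        'Inverse Gaussian': stats.invgauss,
        'Gamma':            stats.gamma,
        'Rayleigh':         stats.rayleigh,
    }
    scores = {}
    for name, dist in models.items():
        params = dist.fit(amplitudes, floc=0)            # maximum-likelihood fit
        scores[name] = np.sum(dist.logpdf(amplitudes, *params))
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))
```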
Fig. <ref> illustrates the result of applying the 2D CFAR algorithm to the part of the data-set marked by the red rectangle in Fig. <ref>. As Fig. <ref>-(b) shows, modeling the sea clutter with the Weibull distribution and applying the 2D CFAR technique allows the ships to be detected in the sea clutter.
To obtain this result, the number of guard cells and the number of training cells on each side (upper, lower, left, and right) of the cell under test are both set to 10; a possible implementation is sketched below.
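The sketch assumes a cell-averaging variant in which the Weibull shape parameter is estimated globally, the local scale is derived from the mean of the training ring, and the threshold follows from the prescribed false-alarm probability; the exact CFAR flavour used in the text may differ:

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def cfar_2d_weibull(img, k_shape, pfa=1e-4, guard=(10, 10), train=(10, 10)):
    """2D CFAR detection in Weibull clutter (sketch).

    guard/train are the per-side numbers of guard and training cells in
    (azimuth, range). For each cell under test (CUT), the training ring gives
    a local Weibull scale estimate lam, and the threshold T solves
    P(X > T) = exp(-(T/lam)**k) = pfa.
    """
    gy, gx = guard
    ty, tx = train
    ry, rx = gy + ty, gx + tx
    detections = np.zeros(img.shape, dtype=bool)
    for i in range(ry, img.shape[0] - ry):
        for j in range(rx, img.shape[1] - rx):
            window = img[i - ry:i + ry + 1, j - rx:j + rx + 1].astype(float)
            window[ty:ty + 2 * gy + 1, tx:tx + 2 * gx + 1] = np.nan  # blank guard cells + CUT
            lam = np.nanmean(window) / gamma_fn(1.0 + 1.0 / k_shape)
            threshold = lam * (-np.log(pfa)) ** (1.0 / k_shape)
            detections[i, j] = img[i, j] > threshold
    return detections
```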
We consider another case in which one of the ships, highlighted by the yellow rectangle in Fig. <ref>, is buried in the sea clutter. Note that the image presented in Fig. <ref> is contaminated with speckle noise; our goal here is to assess the effect of the speckle noise on the modeling of the histogram of the reconstructed image.
Fig. <ref> presents the reconstructed image of the single ship highlighted by the yellow box in Fig. <ref>.
The part of the image in Fig. <ref>-(b) highlighted by the red rectangle is used to estimate the histogram of the sea clutter, and the result is illustrated in Fig. <ref>.
As it is clear from Fig. <ref>, the Weibull distribution can model the histogram with high accuracy.
Next, we apply the 2D CFAR technique to the reconstructed image presented in Fig. <ref>; the result is shown in Fig. <ref>. In this case, the number of guard cells on the upper and lower sides of the cell under test is set to 100 and on the left and right sides to 20, while the number of training cells on each of the four sides is set to 20.
§ CONCLUSION
SAR image reconstruction was discussed in detail. The statistical properties of the sea clutter were investigated, a model based on the Weibull distribution was adopted for the clutter, and its validity was verified on real data.
We believe that the material presented in this paper can support further research on target detection in sea clutter and provide researchers in this field with useful information.
Curlander
J. Curlander and R. McDonough, Synthetic Aperture Radar Systems and Signal Processing. John Wiley and Sons, New York, 1991.
Soumekh
M. Soumekh, Synthetic Aperture Radar Signal Processing with MATLAB Algorithms. John Wiley, 1999.
Cumming
I. G. Cumming and F. H. Wong, Digital Processing of Synthetic Aperture Radar Data. Artech House, Norwood, MA, 2005.
Harger
R. O. Harger, Synthetic Aperture Radar Systems: Theory and Design. Academic Press, New York, 1970.
Sullivan
R. J. Sullivan, Microwave Radar Imaging and Advanced Concepts. Artech House, Norwood, MA, 2000.
Munson_Stripmap
D. Munson, “An introduction to strip-mapping synthetic aperture radar,” vol. 12, pp. 2245–2248, 1987.
Wavelet
G. Davidson and H. Griffiths, “Wavelet detection of low observable targets within sea clutter,” RADAR 2002, Edinburgh, UK, pp. 238–242, 2002.
fixed_threshold
Q. H. Pham, A. Ezekiel, M. T. Campbell, and M. J. Smith, “A new end-to-end SAR ATR system,” in AeroSense'99, International Society for Optics and Photonics, pp. 292–301, 1999.
Skolnik
M. I. Skolnik, Introduction to Radar Systems. McGraw-Hill, New York, 2002.
Mahafza
B. R. Mahafza, Radar Systems Analysis and Design Using MATLAB. Chapman and Hall/CRC Press, Boca Raton, FL, 2000.
SAR_Clutter
C. Oliver, “Optimum texture estimators for SAR clutter,” Journal of Physics D: Applied Physics, vol. 26, no. 11, p. 1824, 1993.
CFAR_Weibull
D. K. Mahapatra, K. R. Pradhan, and L. P. Roy, “An experiment on MSTAR data for CFAR detection in lognormal and Weibull distributed SAR clutter,” International Conference on Microwave, Optical and Communication Engineering (ICMOCE), pp. 377–380, 2015.
ground_clutter
M. S. Greco and F. Gini, “Statistical analysis of high-resolution SAR ground clutter data,” IEEE Trans. Geosci. Remote Sens., vol. 45, no. 3, pp. 566–575, 2007.
RADARSAT
R. K. Raney, A. P. Luscombe, E. J. Langham, and S. Ahmed, “RADARSAT SAR imaging,” Proceedings of the IEEE, vol. 79, no. 6, pp. 839–849, 1991.
Cumming_RD
I. Cumming and J. Bennett, “Digital processing of Seasat SAR data,” vol. 4, pp. 710–718, 1979.
Chen_RD
C. Chen and H. C. Andrews, “Target-motion-induced radar imaging,” IEEE Transactions on Aerospace and Electronic Systems, vol. AES-16, no. 1, pp. 2–14, 1980.
Walker_RD
J. L. Walker, “Range-Doppler imaging of rotating objects,” IEEE Transactions on Aerospace and Electronic Systems, vol. AES-16, no. 1, pp. 23–52, 1980.
Johnson_fdc
F. K. Li and W. T. K. Johnson, “Ambiguities in spaceborne synthetic aperture radar systems,” IEEE Transactions on Aerospace and Electronic Systems, vol. AES-19, no. 3, pp. 389–397, 1983.
Madsen
S. N. Madsen, “Estimating the Doppler centroid of SAR data,” IEEE Transactions on Aerospace and Electronic Systems, vol. 25, no. 2, pp. 134–140, 1989.
Bamler_fdc
R. Bamler and H. Runge, “PRF-ambiguity resolving by wavelength diversity,” IEEE Transactions on Geoscience and Remote Sensing, vol. 29, no. 6, pp. 997–1003, 1991.
Shu_fdc_MLBF
I. G. Cumming and S. Li, “Adding sensitivity to the MLBF Doppler centroid estimator,” IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 2, pp. 279–292, 2007.
Distribution
C. Walck, Handbook on Statistical Distributions for Experimentalists. Stockholm: University of Stockholm Press, 2000.
Weibull_ML
A. C. Cohen, “Maximum likelihood estimation in the Weibull distribution based on complete and on censored samples,” Technometrics, vol. 7, pp. 579–588, 1965.
Kay
S. Kay, Fundamentals of Statistical Signal Processing, Volume II: Detection Theory. Pearson, 1st edition, 1998.
clutter_threshold
S. Demirci, C. Ozdemir, A. Akdagli, and E. Yigit, “Clutter reduction in synthetic aperture radar images with statistical modeling: An application to MSTAR data,” Microwave and Optical Technology Letters, vol. 50, no. 6, pp. 1514–1520, 2008.
Shahrokh Hamidi was born in 1983, in Iran. He received his B.Sc., M.Sc., and Ph.D. degrees, all in Electrical and Computer Engineering. He is with the faculty of Electrical and Computer Engineering at the University of Waterloo, Waterloo, Ontario, Canada. His current research areas include statistical signal processing, mmWave imaging, Terahertz imaging, image processing, system design, multi-target tracking, wireless communication, machine learning, optimization, and array processing.
Laboratoire d'Astrophysique de Bordeaux, Univ. Bordeaux, CNRS, B18N, allée Geoffroy Saint-Hilaire, 33615 Pessac, France
Laboratoire de Météorologie Dynamique (IPSL), Sorbonne Université, Centre National de la Recherche Scientifique, École
Polytechnique, École Normale Supérieure, Paris, France
LESIA, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, Université de Paris, 5 place Jules Janssen,
92195 Meudon, France
Laboratoire ATmosphère Milieux Observations
Spatiales/Institut Pierre-Simon Laplace (LATMOS/IPSL), Sorbonne Universités, UPMC Univ Paris 06, Université Paris-Saclay, Université de Versailles
Saint-Quentin-en-Yvelines, Centre National de la Recherche Scientifique, 78280 Guyancourt, France
Uranus and Neptune have atmospheres dominated by molecular hydrogen and helium. In the upper troposphere (between 0.1 and 10 bars), methane is the third most abundant molecule and condenses, yielding a vertical gradient in CH_4. Because this condensable species is heavier than H_2 and He, the change in mean molecular weight induced by condensation acts against convection, which is traditionally considered to be governed by temperature only, and makes both dry and moist convection more difficult to start. As observations also show latitudinal variations in methane abundance, one can expect different vertical gradients from one latitude to another.
In this paper, we investigate the impact of this methane vertical gradient and the different shapes it can take, on the atmospheric regimes, especially on the formation and inhibition of moist convective storms in the troposphere of ice giants.
We develop a 3D cloud-resolving model to simulate convective processes at the required scale. This model is non-hydrostatic and includes the effect of the mean molecular weight variations associated with condensation.
Using our simulations, we conclude that typical velocities of dry convection in the deep atmosphere are rather low (of the order of 1 m/s) but sufficient to sustain upward methane transport, and that moist convection at methane condensation level is strongly inhibited.
Previous studies derived an analytical criterion on the methane vapor amount above which moist convection should be inhibited in saturated environments. In ice giants, this criterion yields a critical methane abundance of 1.2% at 80 K (this corresponds approximately to the 1 bar level).
We first validate this analytical criterion numerically.
We then show that this critical methane abundance governs the inhibition and formation of moist convective storms, and we conclude that the intensity and intermittency of these storms should depend on the methane abundance and saturation.
- In the regions where CH_4 exceeds this critical abundance in the deep atmosphere (at the equator and the middle latitudes on Uranus, and all latitudes on Neptune), a stable layer almost entirely saturated with methane develops at the condensation level. In this layer, moist convection is inhibited, ensuring stability. Only weak moist convective events can occur above this layer, where methane abundance becomes lower than the critical value. The inhibition of moist convection prevents strong drying and maintains high relative humidity, which favors the frequency of these events.
- In the regions where CH_4 remains below this critical abundance in the deep atmosphere (possibly at the poles on Uranus), there is no such layer. More powerful storms can form, but they are also a bit rarer.
In ice giants, dry convection is weak, and moist convection is strongly inhibited. However, when enough methane is transported upwards, through dry convection and turbulent diffusion, sporadic moist convective storms can form.
These storms should be more frequent on Neptune than on Uranus, because of Neptune's internal heat flow and larger methane abundance.
Our results can explain the observed sporadicity of clouds in ice giants and can help us guide future observations to test the conclusions of this work.
Storms and convection on Uranus and Neptune:
impact of methane abundance revealed by a 3D cloud-resolving model
Noé Clément 1
Jérémy Leconte 1
Aymeric Spiga 2
Sandrine Guerlet 2,3
Franck Selsis 1
Gwenaël Milcareck 2,4
Lucas Teinturier 2,3
Thibault Cavalié 1,3
Raphaël Moreno 3
Emmanuel Lellouch 3
Óscar Carrión-González 3
Submitted December 13, 2023 / Accepted July 26, 2024
§ INTRODUCTION
Uranus and Neptune are the two most distant planets of our Solar System, and thus receive little insolation (3.7 W m^-2 for Uranus and 1.5 W m^-2 for Neptune, against 1366 W m^-2 for the Earth), in addition to having long radiative timescales (more than 100 terrestrial years at 1 bar).
As a result, weak atmospheric activity might be expected, yet observations highlight intense meteorology showing numerous discrete cloud features (presumably composed of methane ice crystals) evolving on short timescales <cit.> and long-lasting powerful storms <cit.>.
<cit.> listed several observations as candidate moist-convection features in Uranus and Neptune, but they conclude that most cloud activity observed so far is probably not convective. The record of observations demonstrating moist convective activity in Uranus and Neptune is almost nonexistent because frequent observations at very high spatial resolution – that are lacking today – would be required.
These clouds and storms need to be modeled in order to understand the atmospheric dynamics of ice giants. A particularly crucial open question is related to the mechanisms of activation and inhibition of convection in those storms.
Among the properties they share, Uranus and Neptune both have atmospheres dominated by molecular hydrogen and helium, where all condensable species (CH_4, H_2S, NH_3, H_2O) are heavier than the background mixture of H_2 and He.
Observations show a high abundance of methane in the troposphere, with significant latitudinal variations: 1 to 4% in Uranus at 2 bars <cit.>, 2 to 6% in Neptune at 4 bars <cit.>.
In ice giants, where the troposphere is located below the 0.1 bar level, methane is expected to condense around 1-2 bars (Figure <ref>). Between the 1-2 bar and 0.1 bar levels, the methane mixing ratio decreases with decreasing pressure (i.e. with increasing altitude). As a consequence, the mean molecular weight of the atmosphere also decreases with decreasing pressure in these layers (Figure <ref>).
In this study, we explore the 0.03-10 bar pressure range.
Because the other minor species condense much deeper in the troposphere (e.g., H_2O around 100 bars) or are much less abundant (e.g., H_2S, above the 10 bars level, <cit.>), we will consider only methane among the condensable species.
An abundant and heavy condensable species like methane is thought to play an important role in convective storms that occur in the troposphere.
In the traditional view of convection, temperature alone sets the rules. Relatively to the adiabatic gradient, warmer air rises, and colder air sinks.
By contrast, in ice giants, both temperature and mean molecular weight control convection.
Air at depth is warmer yet heavier; air higher up is colder yet lighter. This makes convection in ice giants more complex than in the traditional view.
<cit.> showed that the potential temperature
increase required to compensate for the mean molecular weight change and have a density profile that is neutrally stable to convection is 20 K, not including possible latent heat effects.
During flybys of the ice giants, the Voyager 2 spacecraft has measured vertical temperature profiles (Figure <ref>), showing near-dry-adiabatic gradients in the troposphere for both Uranus and Neptune <cit.>.
Since the observed thermal gradient is close to the dry-adiabatic gradient, it is relevant to address the impact of the vertical gradient of mean molecular weight, which can completely change the stability of the atmosphere.
Global Climate Models (GCM) have been used to model the atmosphere of Jupiter <cit.> and Saturn <cit.>, highlighting large-scale phenomena but also the need to include mesoscale processes that GCM cannot directly resolve.
Indeed, GCMs for ice giants have a horizontal resolution of 1° in latitude at best (equivalent to 400 Km), in addition to being vertically hydrostatic.
Because of this assumption of hydrostaticity, a GCM cannot be used to study convective storms.
Solving and studying convection requires models with a higher spatial resolution and capable of solving the vertical momentum equation without making the hydrostatic approximation.
<cit.> and <cit.> have studied moist-convection on Jupiter with cloud-resolving models. In Jupiter, the condensable species (H_2O, NH_3, H_2S) also have a molecular weight higher than H_2 and He.
<cit.> conclude that:
- stable layers associated with condensation act as effective dynamic boundaries,
- intense cumulonimbus clouds develop with clear temporal intermittence, with a period that is roughly proportional to the deep abundance of H_2O gas,
- the active transport associated with these clouds leads to the establishment of mean vertical profiles of condensates and condensable gases that are markedly different from the usual three-layer structure.
However, in none of their simulations do the condensable species (H_2O, NH_3, H_2S) exceed their critical specific concentrations for moist convection inhibition (see next section).
The simulations of an idealized Jovian atmosphere in radiative-convective equilibrium presented by <cit.> show that the temperature gradient is super-adiabatic near the water condensation level because of the change of mean molecular weight.
Using non-hydrostatic simulations applied to Uranus and Neptune, <cit.> show that CH_4 and H_2S condensation induces two stably stratified layers at about 1 bar and 8 bars when the abundance of these elements range from 30 times solar to 50 times solar. They find that, in these stable layers, the temperature profile is super-adiabatic and convection is inhibited, because of the compositional gradient in sub-saturated weather layers.
More generally, they find that weakly forced giant planets are less cloudy than previously expected and that moist convection is limited by the planetary heat flux.
To study how the mean molecular weight variability affects convection, the challenge is to build a model that can resolve convection and account for the effect of condensation on the mean molecular weight.
In this paper, using a 3D cloud-resolving model, we investigate the impact of vertical mean molecular weight gradients, induced in particular by methane condensation, on the formation and inhibition regimes of convective storms in ice giants.
In Section <ref>, we review the criteria for convection inhibition in ice giants. In Section <ref> we present the model, how we adapted it for this study, and which challenges it implies. In Section <ref>, we describe the simulated atmospheric structures, and we analyze their temporal evolution in Section <ref>. In Section <ref>, we discuss the limitations of our model and several open questions about the understanding of ice giants, and we attempt to give an overall scenario for storm formation based on the results of our simulations.
§ CONVECTION INHIBITION CRITERIA IN ICE GIANTS
<cit.> and <cit.> calculated analytical criteria for convection inhibition caused by a vertical gradient of the mean molecular weight.
To determine if convection can occur for a given thermal gradient, the density of a theoretical rising parcel should be compared to its surrounding environment. The rising parcel follows the adiabatic gradient, dry if the pressure level is not saturated, and moist if it is. In a dry and well-mixed environment, the parcel continues to rise (i.e. convection occurs) if the thermal gradient is higher in absolute value than the dry-adiabatic gradient. The parcel remains warmer than the environment, with lower density and positive buoyancy.
In the presence of a gradient of methane (i.e. there is more methane at the bottom than at the top), convection can be inhibited even if the thermal gradient is super-adiabatic.
At dry levels, if there is a methane gradient, a rising parcel coming from below contains more methane than the air above, so its density is increased relative to that of the environment. If the methane gradient is strong enough, the parcel becomes denser than the environment even though it is warmer; its buoyancy is then negative, and convection stops.
The criterion for dry convection inhibition is the Ledoux criterion:
∇_T < ∇_ad + ∇_μ
where ∇_T=dln T/dln P is the thermal gradient of the atmosphere, ∇_ad=.∂ln T/∂ln P|_ad the dry-adiabatic gradient and ∇_μ=dlnμ/dln P the mean molecular weight gradient of the atmosphere, (T,P) being the temperature versus pressure profile of the atmosphere.
Here the mean molecular weight gradient and the thermal gradient come as additive factors: the greater the mean molecular weight gradient, the greater the thermal gradient required to trigger convection.
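This stability check is straightforward to evaluate on a discrete profile. A minimal sketch, assuming an ideal gas and the constant heat capacity adopted later in the text, is:

```python
import numpy as np

def ledoux_stable(p, T, mu, cp=10200.0, R=8.314):
    """Evaluate the Ledoux criterion nabla_T < nabla_ad + nabla_mu on a profile.

    p [Pa], T [K], mu [kg/mol] are arrays along the vertical; cp is the specific
    heat [J kg^-1 K^-1]. Returns True where the profile is stable to dry convection.
    """
    lnp = np.log(p)
    grad_T = np.gradient(np.log(T), lnp)      # d ln T / d ln P
    grad_mu = np.gradient(np.log(mu), lnp)    # d ln mu / d ln P
    grad_ad = R / (mu * cp)                   # ideal-gas dry-adiabatic gradient
    return grad_T < grad_ad + grad_mu
```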
In the "moist" atmosphere, methane condenses, producing a vertical methane gradient.
The resulting methane profile is close to the saturation vapor curve. A rising parcel follows the moist-adiabatic gradient: it is becoming cooler, with a decreasing methane abundance because of condensation.
When the mean molecular weight is vertically constant (i.e. in an atmosphere where the condensable species would have the same weight as the background atmosphere), if the thermal gradient is steeper than the moist-adiabatic gradient, then the parcel continues to rise (i.e. convection occurs). (Warming of the rising parcel by latent heat release is included in the moist-adiabatic gradient, which is lower than the dry-adiabatic gradient.)
When the mean molecular weight decreases with pressure, if the thermal gradient is steeper than the moist-adiabatic gradient, the abundance of methane also decreases with pressure faster in the environment than in the parcel.
A super-moist-adiabatic thermal gradient leads to an inner competition. On the one hand, the steep temperature gradient and the release of latent heat favor convection, and on the other hand varying mean molecular weight prevents convection. The rising parcel is warmer but also heavier than the surrounding environment. The abundance of methane determines which factor dominates, and convection is inhibited if the methane abundance exceeds a critical specific concentration q_cri(T). <cit.> provide the criterion. Moist convection is inhibited if:
(∇_T - ∇_ad^*)(q_v - q_cri(T))>0
where ∇_T is the thermal gradient of the atmosphere,
∇_ad^* is the moist-adiabatic gradient,
q_v (kg/kg) is the specific concentration of vapor,
and q_cri(T) is the critical specific concentration:
q_cri(T) = 1/(1 - M_gas/M_cs) × R T/Δ H_cs = 0.078 (T/80 K) (kg/kg)
where M_gas=2.3 g mol^-1 is the mean molecular weight of the non-condensing atmosphere (here a mix of 85% of H_2 and 15% of He),
M_cs=16.04 g mol^-1 the molecular weight of the condensable species (methane in this study),
Δ H_cs=10000 J mol^-1 is the latent heat of sublimation (or vaporization depending on the pressure-temperature range) of the condensable species, methane in this study,
R=8.314 J K^-1 mol^-1 is the perfect gas constant.
If q_v exceeds q_cri(T), moist convection is inhibited, even if the thermal gradient is stronger than the moist-adiabatic gradient.
In this case, the vapor abundance does not come as an additional factor, as it was through the gradient ∇_μ in the criterion for inhibition of dry convection, but as a multiplicative factor (q_v - q_cri(T)).
Contrary to dry convection inhibition, no thermal gradient, however strong, can drive moist convection when q_v exceeds q_cri(T).
This critical specific concentration is linearly dependent on temperature. As the condensation level is around 80 K in ice giants, we can keep in mind the value of 0.078 kg/kg. This corresponds to a volume mixing ratio of 1.2%.
In <cit.> the value of 0.10 kg/kg at 80 K was proposed. This value was calculated with the latent heat of vaporization (instead of sublimation). As shown by Figure <ref>, considering solid-gas equilibrium (i.e. using the latent heat of sublimation in the formula) is more adequate.
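A quick numerical check of this critical concentration, using the constants quoted above, is sketched below; it also converts the mass fraction into the volume mixing ratio quoted in the text:

```python
R = 8.314            # J K^-1 mol^-1
M_gas = 2.3e-3       # kg mol^-1 (85% H2 + 15% He)
M_cs = 16.04e-3      # kg mol^-1 (CH4)
dH_sub = 1.0e4       # J mol^-1, latent heat of sublimation of CH4

def q_cri(T):
    """Critical specific concentration (kg/kg) above which moist convection is inhibited."""
    return (R * T / dH_sub) / (1.0 - M_gas / M_cs)

def volume_mixing_ratio(q):
    """Convert a specific concentration (kg/kg) into a volume (molar) mixing ratio."""
    n_cs, n_gas = q / M_cs, (1.0 - q) / M_gas
    return n_cs / (n_cs + n_gas)

q80 = q_cri(80.0)
print(q80, volume_mixing_ratio(q80))   # ~0.078 kg/kg and ~1.2%
```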
In Figure <ref>, we plot several possible methane vapor gradients constrained by observations and saturation curves.
We can see that the criterion of moist convection inhibition should apply at some depth and latitude on both planets.
§ A 3D CLOUD-RESOLVING MODEL
The Generic Planetary Climate Model (Generic PCM) gathers different versions of a common structure to model the atmospheres of planets and moons of our Solar System <cit.>, as well as exoplanets <cit.>. PCM is the new name of the planetary versions of the model known so far as the Laboratoire de Météorologie Dynamique (LMD) model. Our model is one of these versions.
To build it, two components are coupled:
* a dynamical core, which solves the Euler equations of motion
* a physical package (by physical, we mean everything that is not related to dynamics), which calculates the tendencies of the relevant physical phenomena
§.§ The dynamical core - The Weather Research and Forecasting (WRF) Model
In this study, the dynamical core used is adapted from the Weather Research and Forecasting (WRF) Model. This model has been developed for a few decades for meteorological applications on Earth (large-eddy simulations) and used by a lot of meteorological research institutes. The version we use is the 4th one <cit.>.
To build our own model, we remove the physical package of the WRF model to keep only the dynamical core, which is coupled to our own physical package.
The dynamical core discretizes and solves the Euler equations of motion in a 3D rectangular grid. In addition to the classic fluid mechanics terms, these equations contain the physical tendencies of the relevant phenomena in the atmosphere, such as radiative transfer, for example, calculated by the physical package.
As it is a cloud-resolving model designed for large-eddy simulations, with this chosen dynamical core, we can solve cloud formation. Clouds (identified here as saturated levels), being bigger than the resolution of the model, are spread over several grid cells.
The WRF dynamical core solves dynamics and transport.
It has several characteristics that are particularly interesting in our case:
* it uses the formalism by <cit.> for the coordinates. The hydrostatic component of dry air pressure is used as the vertical coordinate.
* it is non-hydrostatic, so it can solve convection.
* it takes into account, in its equations, the variability of the mean molecular weight by including the variability of air density due to moisture <cit.>.
Some studies have already been done on Venus and Mars, using the same dynamical core with the adequate physical package from the PCM for each planet. Simulations on Mars <cit.> have demonstrated that localized convective snow storms can occur, and simulations on Venus <cit.> have reproduced the vertical position and thickness of the main convective cloud layer. In parallel with this study, the same association of the WRF dynamical core with the physical package we use here has been made for K2-18b by <cit.> who give more details on the dynamical equations.
§.§ The physical package
The physical package is a "generic" physical package, which can be used for giant planets <cit.>, paleoclimates of telluric planets and exoplanets <cit.>.
Our physical package takes into account the following phenomena:
* Radiative transfer (absorption and emission by the gases with the correlated-k formalism; absorption, emission and scattering by aerosols layers; Collision-Induced Absorption (CIA); Rayleigh scattering). Considered species are H_2, He, CH_4, C_2H_2, C_2H_6.
* Methane thermodynamic cycle as a condensable species: condensation and sublimation with latent heat release, condensates precipitation.
* For Neptune, an internal heat flow (0.43 W m^-2, <cit.>), which is a residual heating of the gravitational contraction of the planet, and is taken into account as a heat source at the bottom of the model.
For the radiative transfer, we use the parameters prescribed for ice giants by <cit.>, which integrate the aerosol scenario described by <cit.>. Radiative-convective equilibrium 1-D simulations produce temperature profiles close to the observed ones.
Contrary to <cit.> who used a fixed methane vertical profile, the methane abundance can vary in our simulations (due to condensation, sublimation, and transport). Radiative transfer calculations are updated during the simulation to take into account these methane variations.
The treatment of the methane cycle is generic and can be used for any condensable species, as it has been done to study cloud formation on the Hot-Jupiter WASP-43b by <cit.>.
This numerical thermodynamic scheme ensures that the methane abundance never exceeds the saturated value, which is derived from the saturation pressure calculated with the Clausius-Clapeyron formula.
As soon as there is too much vapor at a given pressure-temperature level, the scheme condenses enough methane to bring the vapor amount below the saturation.
The so-formed condensates then precipitate whenever their specific concentration exceeds a threshold that we keep arbitrarily small in this first study, so as not to have to take cloud radiative feedback into account.
The condensates, which are ice crystals, precipitate and are transported instantaneously to deeper unsaturated layers where they sublimate. To do so, at each time step, the routine starts from the level where the condensates have formed and carries them downward. At each unsaturated layer, the mass of methane that is needed to bring the layer back to saturation is computed accounting for the thermal effect of the sublimation. This mass is then sublimated into gas before repeating the process in the next layer until no more precipitations remain.
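The logic of this condensation/precipitation scheme can be illustrated with a simple single-column sketch. The thermodynamic constants follow the values used in the text, but the Clausius-Clapeyron reference point and the single-pass structure are illustrative assumptions, not the model's actual implementation:

```python
import numpy as np

R, M_ch4, M_gas = 8.314, 16.04e-3, 2.3e-3
L = 1.0e4 / M_ch4        # latent heat of sublimation per unit mass [J kg^-1]
cp = 10200.0             # J kg^-1 K^-1
P0, T0 = 0.117e5, 90.7   # reference point near the CH4 triple point [Pa, K] (assumption)

def q_sat(p, T):
    """Saturation specific concentration (kg/kg) of CH4 over ice (Clausius-Clapeyron)."""
    p_sat = P0 * np.exp(-(L * M_ch4 / R) * (1.0 / T - 1.0 / T0))
    x = np.minimum(1.0, p_sat / p)                       # saturation mole fraction
    return x * M_ch4 / (x * M_ch4 + (1.0 - x) * M_gas)

def adjust_column(p, T, q):
    """Condense super-saturated layers (latent heating), then precipitate the ice
    downward and sublimate it in unsaturated layers (latent cooling).
    Levels are ordered from top (index 0) to bottom; layer mass is taken ~ dp."""
    T, q = T.astype(float), q.astype(float)
    dp = np.abs(np.gradient(p))
    for i in range(len(p)):
        excess = q[i] - q_sat(p[i], T[i])
        if excess <= 0.0:
            continue
        q[i] -= excess
        T[i] += L * excess / cp                          # condensation warms the layer
        rain = excess * dp[i]                            # falling ice, mass-weighted
        for j in range(i + 1, len(p)):
            deficit = q_sat(p[j], T[j]) - q[j]
            if deficit <= 0.0:
                continue
            dq = min(rain / dp[j], deficit)
            q[j] += dq
            T[j] -= L * dq / cp                          # sublimation cools the layer
            rain -= dq * dp[j]
            if rain <= 0.0:
                break
    return T, q
```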
We constrain the methane profile by a fixed value in the deep atmosphere (q_deep(kg/kg)), that will be chosen among the possible ones allowed by observations. This boundary condition works as an infinite source and sink of CH_4 that always maintains its abundance at the bottom of the model at this fixed value.
§.§ Simulation settings
The parameters common to all simulations are summarized in Table <ref>.
We choose to set the bottom pressure level at 10 bars to have enough pressure levels below methane condensation. We set the top of the model at 0.03 bar to encompass the tropopause, which is just below the 0.1 bar level, and the lower stratosphere.
We have defined 200 vertical pressure levels from 10 bars to 0.03 bar, with an almost regular distribution in log pressure. This pressure distribution allows us to have enough levels where condensation occurs. It corresponds to an average vertical resolution of 0.75 km. The choice of the dynamical time step is constrained by Courant–Friedrichs–Lewy condition. In our case, 5 s is an optimized choice.
With this choice, there is no need to set a vertical-velocity damping in the first 190 levels.
In the top 10 levels, which correspond to the range 0.03-0.05 bar, we set an implicit gravity-wave "damping" layer (the corresponding parameters are detailed in the Modeling System User's Guide of WRF, whose reference is given in <cit.>) that smoothes out high speeds to prevent the model from collapsing.
When velocity damping happens, energy is lost. However, as discussed in Section 2.4 of <cit.>, this energy sink is rather small (less than 2% of the global budget).
In our model, the stratosphere starts at the 0.2 bar level (see Section <ref>). Between the 0.2 bar level (the bottom of the stratosphere) and the 0.05 bar level (the bottom of the damping layer), there are 30 vertical grid points. These 30 points are enough for gravity waves to propagate.
Stratospheric dynamics will not be studied here, as this artificial damping reduces and modifies them, and because the chosen pressure range and discretization of the model are not suited to study them. The last layer of the model at the 0.03 bar level is directly subjected to the solar flux, without attenuation by the non-simulated layers above, in order to have a complete radiative balance.
The WRF dynamical core requires a few specific settings.
We have decided to turn off any subgrid parameterization of diffusion and hyperviscosity to ensure that any mixing that we see in the simulations is due to the resolved dynamics of the dynamical core.
The only diffusive processes left in the dynamical core are divergence damping and external model filters, which are necessary to stabilize the simulations; we use the canonical values recommended by the WRF documentation.
To set the heat capacity, which is constant in our model, we have chosen a value that gives the best fit of a dry-adiabatic temperature profile to Voyager 2's temperature profiles with an associated molecular weight of non-condensing air (85% H_2 + 15% He) M_gas=2.3 g mol^-1.
We take c_p = 10200 J kg^-1 K^-1.
The chosen horizontal resolution is 2 km. It is a good compromise between high resolution and sufficient horizontal coverage, with a 100 km-width domain (50×2 km). Given the horizontal size, the effects of the Coriolis force will not be studied (they appear at larger scales).
Diurnal and seasonal effects are turned off.
Their impact will be addressed in Section <ref>.
We run the simulations with the diurnal-averaged insolation on the planet, which corresponds to 1/4 of the solar flux reaching the planet's orbit.
While Uranus has a very low internal heat flow (0.042^+0.047_-0.042 W m^-2, <cit.>), Neptune has a higher one (0.433±0.046 W m^-2). We set Uranus' internal heat flow to zero and Neptune's one to 0.43 W m^-2.
Calculations of radiative transfer in our model show that a bit less than 0.1 W m^-2 of the incoming solar flux (0.93 W m^-2) penetrates the atmosphere below the condensation level (see Figure <ref>).
For a low methane abundance in the deep atmosphere (0.8%), there is even about 0.05 W m^-2 absorbed below the model bottom.
These fluxes are enough to trigger convection in the deep atmosphere.
In the model, this energy is assumed to be absorbed at the model bottom and is sufficient to power convection from there.
§.§ Initialization and convergence
Ice giants are weakly forced systems because of their long radiative time scales and the little insolation they receive.
We therefore expect long convergence times - decades to centuries - for the 3D simulations to reach a steady state.
A simulation of one terrestrial year with our 3D model requires two weeks of computation on a cluster. Simulating a few terrestrial years would consequently extend the duration of computation while remaining a short time compared to the radiative time scales.
We decide to limit ourselves to this duration of one terrestrial year, being aware that our simulations may not reach a statistical steady state. This limitation must be kept in mind when interpreting the results as discussed later in Section <ref>.
To mitigate this issue, the simulations are started as close as possible to the envisioned equilibrium state. <cit.> have simulated temperate exo-Neptunes for which thermal equilibrium can be reached much faster, because of shorter radiative timescales.
They show that a simple 1D model can well predict the behavior of the thermal profile of 3D simulations. To initialize the 3D simulations with 1D thermal profiles, we have done preliminary work on 1D simulations at radiative-convective equilibrium using their approach.
These 1D simulations use the same single-column physics package as the 3D model, for the radiative and microphysical considerations.
Concerning the dynamics of the 1D simulations, we parameterize convective adjustment and turbulent vertical diffusion.
The convective adjustments are performed by two schemes.
The 1D dry convective adjustment scheme brings back any decreasing profile of virtual potential temperature (this concept will be introduced in the next section) to a constant profile, in dry layers, and mixes the methane accordingly.
The 1D moist convective adjustment scheme is triggered in regions where i) the thermal gradient is steeper than the moist-adiabatic gradient, ii) methane is at saturation, and iii) the methane abundance is lower than the critical abundance discussed in Section <ref>. In these regions, the thermal gradient is brought back to the moist one and methane is condensed accordingly.
The 3D simulations that we will present in the sections hereafter are initialized with 1D simulations that use these parametrizations.
To test the sensitivity to the initial conditions, 3D simulations initialized with 1D simulations that only use a dry convective adjustment have also been run and will be discussed in Section <ref>.
In order to represent a diverse but limited set of conditions, we have chosen different configurations for initial methane profiles.
In 1D simulations, the parameterizations (convective adjustment and turbulent vertical diffusion) and the constraints due to the saturation vapor curve, build the methane profile. It results in a constant profile at the value set by q_deep, from the bottom of the simulation to the condensation level, and above this level, the profile follows the saturation vapor curve.
In this 1D initialization profile, the methane mixing ratio above the cold trap is constant at its value at the cold trap.
Some observations show a decreasing methane gradient in Uranus' stratosphere and an excess of methane in Neptune's stratosphere <cit.>. We do not take these variations into account, as their study would require other simulation settings.
The parameter q_deep is the one that allows us to test different configurations, that are inspired by both observations and analytical criteria.
Observations show a latitudinal minimum around the 2 bar level of about 1% methane on Uranus and 2% methane on Neptune. The critical mixing ratio for moist convection inhibition being 1.2% (at 80 K), we have chosen to include a case study with only 0.8% methane (q_deep = 0.05 kg/kg) in order to be below the critical mixing ratio at all pressures. In the case of Neptune, observations show that this configuration might not exist, however, we keep it as an experiment.
We also simulate atmospheres with more methane in the deep atmosphere: 3.6% for Uranus (q_deep = 0.20 kg/kg) and 6.2% for Neptune (q_deep = 0.30 kg/kg).
Although there might be a (high) pressure level at which CH_4 is latitudinally homogeneous, our simulations do not attempt to capture this aspect which will be discussed later.
Finally, we run 4 different simulations in ice giants (2 on Neptune and 2 on Uranus). Table <ref> synthesizes the varying parameters related to methane between these simulations.
After initialization with a 1D profile, we run 3D simulations for one terrestrial year. They need a few simulated months to reach a steady state. From the one terrestrial year simulation, we keep only the last 200 days, from day 150 to day 350, for our study.
§.§ 3D validation tests
Before using our 3D model in specific configurations, we run more theoretical 3D simulations to check the application of our model to the ice giants.
The first test simulates a dry atmosphere, where we remove the thermodynamic cycle of methane as a condensing species, only keeping it as a radiatively active species. The atmosphere is then only a radiative/dry-convective atmosphere. As expected, this test simulation exhibits a dry convective layer driven by radiative heating at depth overlain by a stratosphere.
The second test simulates a saturated moist atmosphere where we treat methane as a condensing species, but do not account for its different mean molecular weight. Again, as expected, this test simulation exhibits the standard 3-layer structure, with a moist convective layer between the dry troposphere and the stratosphere.
These two tests are conclusive and confirm that the model can be configured to simulate the ice giants.
§ STRUCTURE AND DYNAMICS IN SIMULATED ATMOSPHERES
In this section, we describe the thermal structures arising in our simulations as well as dynamics and convective activity, then we highlight the role of convection inhibition in those properties.
§.§ Mean temperature profiles, dynamics and convective activity
Temperature profiles allow us to check if the simulations are capable of capturing the phenomena we want to study, and if the model provides results in line with observations.
In this section, we look at the temporally and horizontally averaged temperature profiles of the 3D simulations.
Mean temperature profiles are close to Voyager 2 profiles, and similar to those obtained by <cit.>.
Above condensation levels, temperature profiles are moist-adiabatic.
In simulations where we expect inhibition of moist convection and the formation of a stable layer (Uranus 0.20 kg/kg CH_4 and Neptune 0.30 kg/kg CH_4), temperature profiles are super-moist-adiabatic at condensation levels between the 1 and 2 bar levels. However, they remain sub-dry-adiabatic (see Figure <ref>).
This structure is intimately related to the initialization profiles due to the long radiative timescale.
Another recent study of the tropospheres of ice giants by <cit.> exhibits a super-dry-adiabatic profile at these levels. We come back to these differences in Section <ref>.
Profiles in <cit.> have similar temperature ranges for Neptune, although they are a few Kelvin colder.
The potential temperature of a fluid parcel at a given pressure is the temperature the parcel would reach if it were brought adiabatically to a standard reference pressure (the 10-bar pressure at the bottom of the model in our case). In this study, the virtual potential temperature is more useful.
Virtual potential temperature θ_v is the theoretical potential temperature θ of dry air that would have the same density as moist air:
θ_v = (1- (1-1/ϵ_CH_4)q_v) θ
where ϵ_CH_4=M_CH_4/M_gas=16.04/2.3=6.97 is the ratio between the molecular weight of the condensable species, in this case methane, and the molecular weight of non-condensing air; q_v (kg/kg) is the vapor specific concentration.
As ϵ_CH_4 is greater than 1, θ_v is lower than θ.
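A small helper illustrating these definitions, with the 10-bar model bottom as the reference pressure and the heat capacity adopted in the text, could read:

```python
R_spec = 8.314 / 2.3e-3    # specific gas constant of the H2/He mixture [J kg^-1 K^-1]
cp = 10200.0               # J kg^-1 K^-1
p_ref = 10.0e5             # reference pressure = model bottom (10 bar) [Pa]

def theta(T, p):
    """Potential temperature referenced to the 10-bar model bottom."""
    return T * (p_ref / p) ** (R_spec / cp)

def theta_v(T, p, q_v, eps_ch4=16.04 / 2.3):
    """Virtual potential temperature for CH4 (a heavy condensable) in H2/He air."""
    return (1.0 - (1.0 - 1.0 / eps_ch4) * q_v) * theta(T, p)
```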
Variations of q_deep from one simulation to another show no difference on the average temperature profiles above the 1 bar level (see Figure <ref>).
Below the 1 bar level, simulations with a high q_deep are colder than simulations with a low q_deep. These average temperature profiles are highly dependent on the initial 1D profile. The absorption and emission of methane, and its impact on radiative-convective equilibrium, may explain these differences in temperature gradients.
<cit.> have also shown that methane enrichment in the deep atmosphere leads to colder temperature profiles.
Virtual potential temperature profiles also show differences, in line with expectations.
For simulations with q_deep= 0.20 and 0.30 kg/kg, condensation level is reached deeper than for simulations q_deep=0.05 kg/kg.
We expect dry convection to extend higher in simulations with q_deep= 0.05 kg/kg. All simulations share the same global 3-layer structure (see Figure <ref> for Uranus and Figure <ref> for Neptune):
* a dry layer between 10 and 1-2 bars, where the virtual potential temperature is almost constant (Figure <ref>). In this layer methane abundance is almost constant, and equals the value we have fixed in the deep atmosphere. Regions with constant virtual potential temperature correspond to well-mixed layers (CH_4 remains at the same concentration) close to the dry-adiabatic gradient. Dry convection permanently occurs in this layer, with typical speeds of about 1 m/s, as shown by the "typical situation" plot (second panel of Figures
<ref>
and <ref>). Downdrafts and updrafts can be observed. This is a mixing layer. The upper part of this layer may be non-convective as a thin methane gradient is formed to link the two constraints that delimit this layer. The constraint at the bottom is the q_deep quantity at 10 bars which is transported upward by convection, and the constraint at the top is the condensation level.
* a moist layer between 1-2 bars and 0.1 bar, where methane abundance is close to the saturation vapor curve and virtual potential temperature slightly increases. This layer is the moist troposphere where moist convection episodically occurs. Most of the time, no convection occurs in this layer, as shown by "typical situation" plots. Sometimes, convective storms occur, as shown by "storm event" plots (fourth panel of Figures
<ref>
and <ref>).
Convective storms are characterized by positive speeds in the moist layer (that can be higher than 10 m/s), and a strong downdraft in the dry layer (the large blue cell on the fourth panel of Figures <ref> and <ref>). Gravity waves can be identified above the storm. They transport energy.
When a storm occurs, the moist layer is saturated and relative humidity reaches 100% in most of the vertical levels of the moist layer where the storm is located.
* an upper layer above the 0.1 bar level up to the top of the model at 0.03 bar, where virtual potential temperature increases more strongly. No convection is expected in this layer but gravity waves can propagate upward.
The simulated atmosphere alternates between a "typical situation" structure and a "storm event" structure.
In the "typical situation" structure, there is an expected local minimum of relative humidity in the moist layer. The temperature profile of this layer is confined between the dry-adiabatic temperature gradient and the moist-adiabatic temperature gradient. The temperature gradient is not steep enough for dry convection and the layer is not saturated for moist convection to occur. This layer remains as such until it reaches saturation and convective storms occur. The condensation level below and the cold trap above surround this minimum of relative humidity.
§.§ Convection inhibition in simulations
The analysis of the dynamics and convective activity in the 3-layer structure highlights that convection can be inhibited.
In the dry layer, methane abundance is constrained at both ends of the layer:
by the fixed methane abundance q_deep in the deep atmosphere and by the saturation at the condensation level.
In most of the dry layer, methane abundance is constant and is equal to q_deep.
At the top of the dry layer, methane abundance starts to decrease before reaching the condensation level.
This thin part of the dry layer with a methane gradient is a non-convective boundary layer where the criterion for dry convection inhibition applies. This can be particularly seen in the bottom panels of Figure <ref> which are zooms of the first panel of Figures <ref> and <ref>. Dry convection stops slightly before the condensation level, marking the top of the dry layer.
This gradient at the top of the dry layer is about 0.5 bar thick. It could be more or less thick as shown by the possible methane gradients in Figure <ref>. This non-convective layer acts as an obstacle for upward methane transport, limiting the rise of methane from the dry layer to the moist layer. This rise is required for convective storms.
The critical specific concentration for moist convection inhibition (q_cri(T)) is plotted (dotted line) on the first panels of Figures <ref> and <ref>.
For simulations run with q_deep= 0.05 kg/kg, on both Uranus and Neptune, methane abundance is always below the critical value q_cri(T) at all pressures. Consequently, if we examine the fourth panel of Figures <ref> and <ref> (i.e. "storm event" plots) for 0.05 kg/kg CH_4, we can see that moist convection occurs in the entire moist layer. Moist convection is never inhibited.
For simulations run with q_deep= 0.20 and 0.30 kg/kg, the methane abundance may exceed the critical value. A new layer, between the dry layer and the moist layer, appears, with a relative humidity very close to 100%. This is a non-convective layer (Figure <ref>), we call it the "Stable Layer".
In the simulations with q_deep= 0.20 and 0.30 kg/kg, i.e. when the criterion for moist convection inhibition is satisfied, the 3-layer structure becomes a 4-layer structure with this stable moist and non-convective layer appearing between the dry layer and the moist layer.
This non-convective layer is a bit more than 0.5 bar thick in our simulation of Uranus with 0.20 kg/kg methane and almost 1 bar thick in our simulation of Neptune with 0.30 kg/kg methane. The more methane in the deep atmosphere, the thicker this layer is.
It acts both as an obstacle and a reservoir. As moist convection is inhibited in this layer, methane is difficult to transport to higher levels (to levels where moist convection is not inhibited and where convective storms can occur). In addition, the top of this "reservoir" (the level denoted with an X marker in Figure <ref>) must be completely saturated in methane to let convective storms occur in the moist layer (see next section).
§ FORMATION OF CONVECTIVE STORMS AND INTERMITTENCY
While the previous section focused on averaged structures, we describe here their evolution: how convective storms are formed, what is their frequency and released kinetic energy.
§.§ Advection of methane: from the dry to the moist layer
Convective storms are made of rising saturated air.
Figures <ref> and <ref> show the structure of storms. In a few columns of the domain where the storm occurs, the moist layer is filled with methane and reaches 100% of relative humidity. As the moist layer is confined between the dry-adiabatic gradient and the moist-adiabatic gradient, dry convection can never happen and moist convection happens when saturation is reached.
Convective storms form when enough methane from the deep troposphere is brought to lower pressures, at condensation level where inhibition criteria are not satisfied. When these conditions are gathered, a perturbation can thus create a convective storm.
The mechanical energy of the convective storm is dispersed by gravity waves propagating upward. The convective storm ends with condensed methane precipitating deeper.
Dry convection in the dry layer transports methane to lower pressures. Dry convection faces an obstacle at the top of the dry layer that we call the non-convective boundary layer. To cross this obstacle, methane has to be transported by slow eddy diffusion and has to "climb" the methane gradient. This obstacle slows down the transport of methane to the moist layer and thus controls the frequency of convective storms.
In the simulations with q_deep lower than q_cri(T) (Uranus 0.05 kg/kg CH_4 and Neptune 0.05 kg/kg CH_4) this gradient below condensation level is slowly becoming smaller with time until it is small enough for a perturbation coming from deeper to reach the moist layer and produce a convective storm.
In the simulations with q_deep exceeding q_cri(T), the stable layer is filling up with methane, acting as a reservoir.
The reservoir becomes full when saturation reaches its top, where methane concentration is lower than its critical value.
And moist convection can occur.
To quantify methane transport, we estimate the equivalent mixing coefficient (the so-called eddy diffusivity or K_zz) defined as:
K_zz≡⟨ρ q_v w⟩/⟨ρ∂_z q_v⟩
where ρ is the density (kg/m^3), q_v the CH_4 vapor specific concentration (kg/kg), w the vertical speed (m/s), ∂_z the partial derivative along the vertical axis, ⟨⟩ the average over horizontal and temporal dimensions.
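In practice this diagnostic can be computed from the saved simulation fields as follows; the sketch assumes fields of shape (time, z, y, x) and the sign convention of the definition above:

```python
import numpy as np

def eddy_diffusivity(rho, q_v, w, dz):
    """K_zz = <rho q_v w> / <rho dq_v/dz>, averaged over time and the horizontal plane.

    rho [kg/m^3], q_v [kg/kg], w [m/s] : arrays of shape (time, z, y, x);
    dz : vertical grid spacing [m]. Returns a 1D profile of K_zz(z) in m^2/s.
    """
    flux = np.mean(rho * q_v * w, axis=(0, 2, 3))
    grad = np.mean(rho * np.gradient(q_v, dz, axis=1), axis=(0, 2, 3))
    return flux / grad
```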
The K_zz profiles shown in Figure <ref> are consistent with the dynamics observed in the simulations. In the mixing dry layer methane transport is efficient and K_zz is of the order of 10^2 to 10^4 m^2/s. In the moist layer, intermittent convection is illustrated by the peak in the profiles around 0.5 bar and K_zz is on the order of 1 m^2/s. At the interface between these two layers, there is a low K_zz of about 10^-1 m^2/s. K_zz is particularly low in the stable layer of the simulations where the criterion for moist convection inhibition is satisfied (Uranus 0.20 kg/kg CH_4 and Neptune 0.30 kg/kg CH_4).
Using another mesoscale model, <cit.> have also calculated K_zz profiles that are close to ours: a high K_zz in the dry troposphere, a low K_zz in the upper layers where convection is inhibited, and a higher K_zz in the layer where moist convection can occur.
In the dry layer, below the 2 bar level, maximum downward velocities are higher in absolute value than maximum upward velocities and correspond to the strong downdrafts forming when moist convection occurs (Figure <ref>). In the moist layer, maximum upward velocities are higher in absolute value than maximum downward velocities and correspond to the convective storms.
In the simulations run above the critical specific concentration (Uranus 0.20 kg/kg and Neptune 0.30 kg/kg), we can see a strong minimum in the root-mean-square (RMS) velocity profiles around 1 bar. This minimum corresponds to the stable layer where the criterion for moist convection inhibition is satisfied. There is no such minimum in the simulations below the critical specific concentration: the RMS decreases slightly before reaching the 1 bar level, highlighting the non-convective boundary layer at the top of the dry layer, but the local minimum is not very marked.
§.§ Temporal evolution and intermittence
The existence of two different structures - a "typical situation" structure and a "storm event" structure - implies intermittency. The previous subsection has explained how the transition between these two structures happens. It works as a cycle.
To study this cycle, we have identified on the average structures of Figure <ref> two important levels in the moist layer: a minimum (indicated by an O marker) and a maximum (indicated by an X marker) of relative humidity. This maximum is also the top of the "reservoir". These levels allow us to identify when a storm occurs. The level of the relative humidity minimum is suddenly filled with methane. In Figure <ref>, we plot the temporal evolution of the relative humidity at these two levels. While minima (O) remain at a constant value for a long time, at the same time maxima (X) are getting filled until they reach 100%, and then moisture is transferred upward filling the upper levels, among them the minimum level which is more or less the epicenter of the convective storm. In simulations with 0.20 and 0.30 kg/kg CH_4, the levels just below the maximum (X) level and that form what we call the "reservoir" are also getting filled. Relative humidity at the maximum level (the top of the reservoir) is always close to 100% in the simulations exceeding the critical specific concentration, i.e. Uranus 0.20 kg/kg CH_4 and Neptune 0.30 kg/kg CH_4.
The frequency of storms varies a lot from one simulation to another. Over the analysis window (the last 200 days of the one-terrestrial-year simulations), we can count the storms and estimate the period between two successive storms (Table <ref>).
More storms occur in simulations where the critical specific concentration is exceeded: 2.5 times more storms in the simulation of Uranus with 0.20 kg/kg CH_4 than in the simulation of Uranus with 0.05 kg/kg CH_4, 9 times more storms in the simulation of Neptune with 0.30 kg/kg CH_4 than in the simulation of Neptune with 0.05 kg/kg CH_4. Storm occurrence is cyclic, with a chaotic aspect (i.e. the cycle is sometimes irregular), which is well illustrated in Neptune's simulation with 0.05 kg/kg CH_4 (see the third panel of Figure <ref>), where the time between two storms varies a lot.
Storms are triggered when a sufficiently strong perturbation comes from deeper levels for the moisture of the "reservoir" to be transferred from the levels where moist convection is inhibited to the levels just above where moist convection is no longer inhibited.
Simulations show a much lower frequency of convective storms on Uranus than on Neptune. We attribute this to the absence of internal heat flow on Uranus. In the simulations, the internal heat flow warms the deep atmosphere and acts as a heat source at the bottom of the model, so there is more energy flux to evacuate from the deep layers on Neptune than on Uranus.
An interesting test would be to turn off the internal heat flow in the settings of Neptune's simulations, or to add one in Uranus' simulations. We have done this, but it alters the thermal balance so much that the temperature gradients, and hence the conclusions, become unrealistic.
Model and observations agree that Neptune is much more active in producing new cloud systems and cloud variations than Uranus <cit.>.
§.§ Intensity of convective storms
The evolution of relative humidity on plots from Figure <ref> has allowed us to identify each convective storm and monitor their frequency. However, these plots do not provide information about the intensity of storms.
We compute the kinetic energy E_c per unit of mass on the whole domain of our simulations as follows:
E_c(J/kg) = 1/2∑_i(u_i^2+v_i^2+w_i^2)dm_i/m_tot
where u_i, v_i, w_i are the three components of velocity of the i-th cell of the domain, dm_i is the mass of this cell, m_tot=∑_idm_i is the total mass of the domain. Figure <ref> illustrates the temporal evolution of E_c in our simulations.
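As a worked example, a minimal sketch of this diagnostic, assuming the velocity components and cell masses are available as NumPy arrays (the shapes below are illustrative):

```python
import numpy as np

def mean_kinetic_energy(u, v, w, dm):
    """Mass-weighted mean kinetic energy per unit mass (J/kg):
    E_c = 1/2 * sum_i (u_i^2 + v_i^2 + w_i^2) * dm_i / m_tot."""
    m_tot = dm.sum()
    return 0.5 * np.sum((u**2 + v**2 + w**2) * dm) / m_tot

# Toy example with random fields standing in for the simulation domain.
rng = np.random.default_rng(1)
shape = (60, 32, 32)
u, v, w = (rng.normal(0.0, 1.0, shape) for _ in range(3))
dm = rng.uniform(1.0, 2.0, shape)          # cell masses [kg]
print(f"E_c = {mean_kinetic_energy(u, v, w, dm):.3f} J/kg")
```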
The average kinetic energy of simulations with more methane, i.e. Uranus 0.20 kg/kg CH_4 and Neptune 0.30 kg/kg CH_4, is of the same order of magnitude as the average kinetic energy of simulations with less methane - Uranus 0.05 kg/kg CH_4 and Neptune 0.05 kg/kg CH_4.
But when looking at the convective storms that can be identified by the peaks of kinetic energy, we see that the intensity of these peaks is higher in simulations with 0.05 kg/kg CH_4. The non-convective layers of these configurations are thinner, but more importantly, the criterion for moist convection inhibition is never met. The release of energy is thus facilitated.
There is a strong correlation between intermittency and intensity: the rarer the storms, the more intense they are. In Figure <ref>, the kinetic energy of a storm corresponds to the difference between the peak associated with this storm and the "background" kinetic energy. This "background" kinetic energy is mainly due to dry convection in the troposphere and is of the same order of magnitude from one simulation to another.
§ OPEN QUESTIONS, LIMITATIONS AND SUGGESTED SCENARIO
Here we present the limitations of our model and the associated issues.
Knowing these limitations, we propose a scenario for storm formation in ice giants.
Condensates micro-physics. In our model, the micro-physics of condensation is limited to a very basic scheme. Growth, sedimentation and sublimation of condensates would require a more detailed model. In this first study, we have kept these processes simple and controlled by as few parameters as possible, which at least allows us to test extreme cases that bracket the actual behavior of condensates.
The chosen microphysical parameters limit the potential retention of condensates (e.g. in moist updrafts) to 10^-10 kg of methane ice per kg of air to avoid having to introduce a complex microphysical model. In reality, some condensates can be retained and slow down the updraft. To test whether this would affect our conclusions, we have run simulations where 10^-3 kg/kg of methane ice can be retained before precipitating. As expected, the kinetic energy of moist convective events is reduced, by a factor of 2, when more condensates are allowed, which further favors convection inhibition. At the same time, the frequency of moist convective events is multiplied by a factor of 2, demonstrating again that frequency and intensity are strongly correlated.
Convergence.
As discussed in Section <ref>, the radiative timescale is too long to enable running the simulation until complete thermal equilibration. As a result, even though we remove the spin-up phase of the simulations (i.e. the first 150 days) and analyze only the part of the simulation where the frequency of storms is rather stable, the thermal structure still evolves slightly over this period. Figure <ref> shows the evolution of the temperature anomaly with respect to the initial thermal profile for the 4 simulations. These anomalies remain relatively low and are mostly confined to the stable layer. This is to be expected because the thermal gradient in this region is determined by turbulent diffusion, which is one of the most difficult factors to take into account in the 1D model used for the initialization. The maximum value of the anomaly therefore gives a rough estimate of the uncertainty on the equilibrium temperature profile.
However, we believe that this slight remaining thermal disequilibrium does not affect our main conclusions. First, the layered structure with an inhibition layer has been recovered in a similar setup in <cit.>, even though, in their case, they were able to run their simulation until equilibration. Second, the fact that the size of this inhibition layer increases with deep methane abundance is supported by analytical arguments <cit.>.
A test simulation on Neptune was run after initializing far from saturation (less than 70% at condensation levels). After 200 days, the stable layer is reformed, demonstrating that its appearance is independent of initialization.
To see whether the evolution of the frequency of storms with abundance was also robust, we carried out a complete set of simulations with a different initial thermal structure: simulations were started from the output of the 1D model where only a dry convective adjustment was performed. These initial conditions are further away from the anticipated equilibrium state of the atmosphere and therefore show larger temperature anomalies during the run. Yet, these simulations show extremely similar behavior in terms of stable-layer sizes and storm frequencies and intensities, which do not differ by more than 50%.
This confirms that methane turbulent diffusion in the stable layer is the main driver of storm formation.
Thermal gradient in stable layers.
In stable layers, our temperature profiles are super-moist-adiabatic but remain sub-dry-adiabatic. Using less computationally expensive 2D non-hydrostatic simulations that were run for a longer time,
<cit.> found a super-dry-adiabatic temperature profile in the stable region and <cit.> found the same structure with a 3D model but for warmer atmospheres
that have shorter radiative timescales. We thus believe that, given more integration time, our thermal profiles would probably converge toward a super-dry-adiabatic one. This however should not affect our conclusions on the occurrence of storms as we have shown that they are driven mainly by the methane cycle and the turbulence in the stable layer, which is only mildly affected by the thermal gradient in the stable layer.
Global climate. Simulations with a general circulation model that accounts for the large-scale dynamics and seasonal changes would provide crucial information.
But we are still facing major unknowns that prevent us from performing fully consistent large-scale simulations: for instance, are the latitudinal variations in methane abundance observed at 1-2 bars on Uranus and 4 bars on Neptune still valid at 10 bars? Is there a pressure at which methane abundance is homogeneous? How do these strong horizontal gradients affect the dynamics? Ideally, small-scale convective simulations should have evolving boundary conditions fed by large-scale simulations while large-scale simulations should include a convective sub-grid scheme derived from small-scale simulations. There is quite some work to be done before reaching such a sophisticated coupling between models. At this early stage, our approach was to develop the small-scale simulations independently and in parallel with large-scale circulation models but to vary the deep methane concentration in order to capture the variety of conditions met at different latitudes. We do believe that by doing so, we were able to capture the mechanisms controlling storms at the scale of our simulations and to provide parametrizations for moist convection inhibition that can be used in global climate models.
Diurnal cycle and seasonal variations.
Our simulations use a constant averaged solar flux and zenith angle and do not include day/night alternation nor the daily and seasonal variations of solar zenith angle, which could have an impact.
The insolation diurnal cycle should have little impact, as radiative time constants are very long: more than 100 terrestrial years at 1 bar.
Concerning seasonal variations of solar flux, the period that we studied (200 days) was small compared to the Uranian and Neptunian years, so the solar flux reaching the planet was almost constant during that time. We could run simulations on the same planet with different constant averaged solar flux corresponding to different latitudes and seasons on the planet. This would change the thermal profile by several Kelvin and the level of methane condensation. It could increase or diminish storm frequency. Because of its obliquity, the case of Uranus is really interesting and the study of seasonal variations will be the subject of future studies.
Our model was run under average insolation conditions, and we tested different values of methane abundance in the deep atmosphere. One could build a pseudo-2D model (i.e. as a function of altitude and latitude) to study latitudinal variations in temperature and methane.
Stratospheric methane concentrations.
Another debated question is how to explain the abundance of methane in the stratosphere.
Observations of <cit.> show a mixing ratio of CH_4 higher than its value at the cold trap in Neptune's stratosphere, while methane abundance strongly decreases with pressure in Uranus' stratosphere. Though we do not seek to explain those particularities in this study, we do not see any methane transport by moist convection to the stratosphere. The overshoots that we simulate do not bring methane into the stratosphere either. The 3D Global Climate Models are more suitable for such a study, as those variations could probably also be caused by large-scale dynamics.
Mesoscale simulations that include the effect of the Coriolis force could also bring new pieces of information.
Aware of these limitations, we propose a description of the formation and structure of convective storms for given methane abundances in the deep atmosphere. Our simulations highlight a stable moist and non-convective layer in the regions where the critical specific concentration is exceeded. The thickness of this layer, which is located around the 1 bar level (Figure <ref>), depends on the methane abundance in the deep atmosphere (q_deep). The frequency and intensity of storms depend on the presence or absence of this stable layer.
If the abundance at saturated levels is lower than the critical specific concentration, this stable layer does not exist. This situation is encountered in Uranus, at the poles according to observations. It is illustrated in the two plots for methane-poor regions, on the edges of Figure <ref> (Uranus). A very thin line near the 1 bar level (in yellow on those plots) marks the maximum relative humidity and the separation between the dry layer and the moist layer. Convective storms occur when enough methane is brought up from deeper levels and are intense because moist convection is never inhibited. These intense storms reduce the relative humidity in the levels close to condensation by evacuating methane, and it then takes a long time to "reload" them with methane. The frequency of the storms is therefore lower.
If this abundance is higher than the critical specific concentration at saturated levels, a stable layer appears (the larger q_deep the thicker this stable layer). In Uranus, this situation corresponds to the three center plots of Figure <ref> (Uranus), and should occur at mid-latitudes and the equator. In Neptune, this situation is encountered in all plots of Figure <ref> (Neptune), with the variation of q_deep inducing a variation in the thickness of the stable layer. In this saturated layer of variable thickness, moist convection is inhibited, thus retaining methane and acting as a reservoir. This reservoir needs to be completely full for moist convective storms to occur above. Because of moist convection inhibition, this reservoir is always almost full and the frequency of convective storms is high (about a few days when q_deep= 0.30 kg/kg). Also because of moist convection inhibition, the intensity of the storms is weak: the moist non-convective layers can never be driven upwards by the storm above them. The bottom of the storm is thus never deeper than this level where moist convection is no longer inhibited. The more methane there is, the more frequent storms are, but also the weaker they are.
We have previously explained that convective storms should be more frequent in Neptune than in Uranus, due to the presence of an internal heat flow in Neptune. This might also be reinforced by the fact that the methane abundance in Neptune exceeds the critical abundance for the inhibition of moist convection at all latitudes, while in Uranus the methane abundance at the poles might be lower than this critical abundance. As there is more methane in Neptune than in Uranus, convective storms should be even more frequent (but weaker).
The cyclic occurrence of storms in our simulations may suggest a link with the study by <cit.> on the frequency of Saturn’s giant storms. The long timescale (about 60 years) and the resolution of convection that they used in their model are quite different from our study. Temperature variations in Saturn's atmosphere are much larger than in the ice giants (we report very few variations in our simulations), and are the main driver of the cyclic occurrence of storms, whereas in our simulations, it is the evolution of the methane profile that is the driver. The frequencies that we have calculated and presented in Table <ref> should be treated with caution. These frequencies depend very much on the assumptions we make about methane microphysics. A better estimate of the frequencies may be obtained by a parametric study of our microphysical parameters. In the current state of our model, even if the frequency of storms is highly uncertain, we can explain the sporadicity of clouds observed on Uranus <cit.> and on Neptune <cit.>.
As for the other condensable species in ice giants (H_2O, NH_3, H_2S), we expect the same behavior as for methane. However, some species are less abundant than methane (e.g., NH_3, H_2S, <cit.>), and should never exceed their associated critical mixing ratios for inhibition of moist convection. The behavior of these species could be similar to the simulations presented in this article with 0.05 kg/kg methane.
In Uranus and Neptune, water should be very abundant in the deep atmosphere below the 100 bars level <cit.> and should exceed its associated critical mixing ratio. Even if it is difficult to extrapolate our simulated convective regimes in the range of 0.03-10 bars to such high pressures, we could expect behavior similar to the simulations presented in this article with 0.20 and 0.30 kg/kg.
In Jupiter and Saturn, water abundance may exceed the critical mixing ratio associated with water. At certain latitudes, we would then expect behavior similar to the simulations presented in this article with 0.20 and 0.30 kg/kg.
§ CONCLUSIONS
Using a 3D cloud-resolving model, we have investigated the impact of the change in mean molecular weight due to methane condensation on the formation and inhibition regimes of convective storms in ice giants. Methane being heavier than the H_2/He background, its condensation can indeed inhibit convection and moist convective storms.
Observations show both latitudinal variations - 1 to 4% in Uranus at 2 bars <cit.>, 2 to 6% in Neptune at 4 bars <cit.> - and vertical variations caused by condensation.
Vertical variations at non-saturated levels (i.e. dry levels) strongly stabilize the atmosphere and a super-dry-adiabatic gradient - which is convective in a mixed atmosphere - can remain stable.
The literature <cit.> highlights the existence of a critical methane abundance at saturated levels. Whether or not this critical abundance is exceeded can inhibit or activate moist convection.
Depending on the form that the methane gradient takes, saturated levels may or may not be above this critical abundance which is 1.2% at 80 K. It corresponds approximately to the 1 bar level. Latitudinal variations observed at this level are almost all above this critical abundance, but these levels have to be saturated for the criterion of moist convection inhibition to apply.
After having shown in our simulations that this critical methane abundance indeed rules convective storm inhibition and formation, we conclude that:
* typical velocities of dry convection in the deep atmosphere are rather low (of the order of 1 m/s), but sufficient to sustain upward methane transport
* moist convection at methane condensation level is strongly inhibited
* convective storms can form regardless of methane abundance in the deep atmosphere, but they can only form in saturated layers where the methane abundance does not exceed the critical abundance
* the formation of convective storms on Uranus and Neptune should be intermittent and follow a loading/unloading cycle
* the intermittency and intensity of storms depend on the methane abundance:
- where CH_4 exceeds the critical abundance in the deep atmosphere (at the equator and the middle latitudes on Uranus, and all latitudes on Neptune), more frequent but weak storms form.
- where CH_4 remains below the critical abundance in the deep atmosphere (possibly at the poles on Uranus), storms are rarer but more powerful.
* storms should be more frequent on Neptune than on Uranus, because of the internal heat flow of Neptune and because there is more methane in Neptune than in Uranus
* methane-rich latitudes at saturated (or near-saturated) levels should act as a barrier allowing little energy to be released in one storm, while methane-poor latitudes should allow much more energy to be released in one storm
These conclusions could explain the sporadicity of clouds observed in ice giants.
Further observations with the James Webb Space Telescope, which would track moist convective events over a longer observational period or would provide new constraints on methane abundance, could help to bring new insights to the conclusions proposed in this article.
§ ACKNOWLEDGEMENTS
The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR), under grant ANR-20-CE49-0009 (project SOUND).
|
http://arxiv.org/abs/2409.02208v1 | 20240903182052 | Accelerating Graph Neural Networks with a Novel Matrix Compression Format | ["João N. F. Alves", "Samir Moustafa", "Siegfried Benkner", "Alexandre P. Francisco", "Wilfried N. Gansterer", "Luís M. S. Russo"] | cs.DS | ["cs.DS"] |
§ ABSTRACT
The inference and training stages of Graph Neural Networks (GNNs) are often dominated by the time required to compute a long sequence of matrix multiplications between the sparse graph adjacency matrix and its embedding. To accelerate these stages, we first propose the Compressed Binary Matrix (CBM) storage format to succinctly represent the binary adjacency matrix of an unweighted graph. Then, we show how to generalize this representation to normalized adjacency matrices of unweighted graphs which arise in the context of GNNs. Finally, we develop efficient matrix multiplication kernels based on this compressed representation. The matrix multiplication kernels proposed in this work never require more scalar operations than classic sparse matrix multiplication algorithms. Experimental evaluation shows that the matrix multiplication strategies proposed outperform the current state-of-the-art implementations provided by Intel MKL, achieving speedups close to 5×. Furthermore, our optimized matrix-multiplication strategies accelerated the inference time of a GNN by up to 3×.
§ INTRODUCTION
Graph Neural Networks (GNNs) are the preferred tool to learn from
graph-structured data and thus are considered key for future AI applications in
domains like social network analysis, natural language processing, biology,
physics, and many others <cit.>. The training and inference time of different GNN architectures is dominated by a long sequence of matrix products. This is particularly evident in GNNs that resort to Message Passing Layers (MPLs), where in each hidden layer the nodes of the graph aggregate the embedding of neighboring nodes and adjust their own embedding based on the information collected.
In some variants of GNNs, such as the widely used Graph
Convolutional Networks (GCNs) <cit.>, the message produced in each layer is
essentially the product of the adjacency matrix of the underlying graph and its current embedding.
For illustration purposes, consider a two-layer GCN. To propagate and combine node features across the graph, this network must compute the following operations once per inference and once per training epoch <cit.>:
𝐀̂ σ(𝐀̂𝐗𝐖^0) 𝐖^1,
where 𝐀̂ represents the normalized adjacency matrix of the
graph, such that 𝐀̂ = 𝐃^-1/2 (𝐀 + 𝐈) 𝐃^-1/2, 𝐃 is
the degree diagonal matrix of the graph, σ denotes an element-wise activation function, 𝐗 is the matrix of node features, and 𝐖^0 and 𝐖^1 are learnable dense matrices for the first and second layers <cit.>. It is important to note that in real scenarios the adjacency matrix of the graph is typically much larger than the remaining operand matrices of Equation <ref>. Therefore, matrix products involving this matrix represent most of the computational burden of training and inferring GNNs. Graphs arising in the context of GNNs are often extremely sparse. Popular GNNs frameworks, such as PyTorch <cit.>, leverage this sparsity to accelerate both training and inference. This is achieved by
representing the adjacency matrix of the graph, or its normalized form, in standard sparse matrix formats that only consider the nonzero elements of a sparse matrix. Sparse matrix formats, like COO or CSR, enable faster sparse-dense matrix multiplication kernels (SpMMs) which are known to require a number of scalar operations that is proportional to the number of nonzero elements in the sparse matrix.
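For illustration, a minimal NumPy/SciPy sketch of the two-layer forward pass in Equation <ref>; the ReLU activation and the random shapes are assumptions on our part, and real GNN frameworks would dispatch the products involving the sparse matrix 𝐀̂ to optimized SpMM kernels.

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(A):
    """A_hat = D^{-1/2} (A + I) D^{-1/2} for a binary adjacency matrix A (CSR)."""
    n = A.shape[0]
    A_tilde = (A + sp.eye(n)).tocsr()
    d_inv_sqrt = 1.0 / np.sqrt(np.asarray(A_tilde.sum(axis=1)).ravel())
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    return (D_inv_sqrt @ A_tilde @ D_inv_sqrt).tocsr()

def gcn_forward(A_hat, X, W0, W1):
    """Two-layer GCN inference: A_hat relu(A_hat X W0) W1."""
    H = np.maximum(A_hat @ (X @ W0), 0.0)    # first message-passing layer + ReLU
    return A_hat @ (H @ W1)                  # second message-passing layer (logits)

# Tiny random example: 5 nodes, 4 input features, 8 hidden units, 3 classes.
rng = np.random.default_rng(0)
A = sp.csr_matrix((rng.random((5, 5)) < 0.4).astype(float))
A = ((A + A.T) > 0).astype(float)            # symmetric, binary, unweighted graph
X = rng.normal(size=(5, 4))
W0, W1 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))
print(gcn_forward(normalized_adjacency(A), X, W0, W1).shape)   # (5, 3)
```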
In this paper we present a new matrix compression format called Compressed Binary Matrix (CBM) to accelerate widely-used GNN architectures (e.g., GCN <cit.>, GraphSage <cit.>, and GIN <cit.>) that spend most of their processing time on matrix products involving the adjacency matrix of the underlying graph, or its normalized form, when these GNNs learn on unweighted graphs. Our format exploits the fact that the adjacency matrix of an unweighted graph is binary, and therefore it can be compressed beyond what is possible with sparsity alone, while simultaneously reducing the number of scalar operations required to multiply our compressed representation of the adjacency matrix and another dense real-valued matrix.
The key advantage of the CBM format is that it only represents the differences (deltas) of a row with respect to another similar row of the same matrix, which for many use cases tends to be significantly smaller than the number of non-zeros represented by standard compression formats.
§.§ Previous Works
The matrix-matrix and matrix-vector products have been extensively studied <cit.>.
A particular case is the product of a binary sparse matrix by a real-valued vector or (dense) matrix, where the efficient representation of the binary matrix can be exploited to improve both the memory footprint and the operation running time.
Although this was not expected in some preliminary studies <cit.>, and impossibility results exist for more complex compression schemes <cit.>, it works for some representational compression schemes.
One such scheme is the Single Tree Adjacency Forest (STAF) proposed by Nishino et al. <cit.>. The STAF representation of a binary matrix is obtained by reversing and inserting the adjacency list of each row of a binary matrix into a trie data-structure, meaning that suffixes that are common to more than one adjacency list are represented exactly once. The authors have shown that this data-structure can be built in linear time with respect to the number of nonzero elements in the input matrix.
STAFs enable fast matrix-matrix products by traversing the trie in root-to-leaf order, while accumulating partial sums that are common to different rows of the result matrix.
The number of operations required to multiply a binary matrix represented by a STAF and a real-valued vector is proportional to the size of the trie, which is also upper-bounded by the number of nonzero elements in the input matrix.
STAFs do not exploit, however, row-wise similarities outside of common row suffixes. To address this issue, the authors proposed splitting the input matrix into sets of columns and creating a STAF per set. This optimized version achieves a significant speedup and memory footprint reduction compared to CSR and the sparse matrix multiplication kernel offered by the Eigen library.
Francisco et al. <cit.> explored how succinct representations for binary matrices and graphs could be exploited to speedup binary matrix multiplication, namely the Webgraph representation by Boldi and Vigna <cit.> and the Biclique Extraction (BE) based representation by Hernandez and Navarro <cit.>.
Both representations allow to reduce the memory footprint of binary matrices and accelerate the product of compressed binary matrices and real-valued vectors.
The Webgraph representation exploits the similarity among rows, as well as clustering effects present in real-world graphs and matrices, relying on gap compression, referentiation, intervalisation and ζ codes.
The BE representation exploits also rows similarities and clustering effects, extracting maximal bicliques and replacing them with differential compressed sets of nodes that share adjacencies.
The key observations are that, in both cases, we can reuse partial results from previous computations.
Taking an implementation of PageRank using classic adjacency lists as the baseline, the authors achieved significant and similar speedups and memory footprint reductions, showing that the product can be computed in time proportional to the compressed matrix size.
The Webgraph and BE representations require however non-trivial preprocessing steps, Webgraph benefits from a suitable vertex reordering, obtained through graph clustering methods, and BE requires finding maximal bicliques, an NP-hard problem in general.
These steps might take longer to execute than computing the SpMM we are trying to optimize.
On the other hand, the experimental evaluations considered naive implementations of standard representations, possibly leading to unfair comparisons with respect to state-of-the-art linear algebra libraries.
Elgohary et al. <cit.> also addressed the problem that large-scale machine learning algorithms are often iterative, using repeated read-only data access and I/O-bound matrix-vector multiplications, introducing Compressed Linear Algebra (CLA) for lossless matrix compression.
CLA also executes linear algebra operations directly on the compressed representations.
However, it is not focused on binary matrices and, although it achieves a performance close to the uncompressed case, it only presents performance gains when data does not fit into memory.
Our work is somewhat related to the work by Björklund and Lingas <cit.>.
They also consider a weighted graph on the rows of a binary matrix where the weight of an edge between two rows is equal to its Hamming distance, and then they rely on a minimum spanning tree of that graph to differentially compress the rows of a binary matrix with respect to one another, thus accelerating the product computation.
They consider however only a single product of two binary matrices, and do not consider the overhead imposed by operating on a compressed representation of a binary matrix, as their results are purely theoretical.
§.§ Our Contributions
In this paper, we make the following contributions: (1) we present the Compressed Binary Matrix (CBM) format, an efficient compression scheme for binary matrices, that can reduce the memory footprint of unweighted graphs that arise in the context of GNNs; (2) we introduce a new algorithm to significantly accelerate the sequence of matrix products between the (potentially normalized) adjacency matrix of the graph, represented in CBM format, and its embedding; (3) we prove that even in the worst-case scenario, where compression
is not possible, the number of scalar operations required to multiply a matrix represented in CBM format does not exceed the number of scalar operations required to multiply the same matrix using classic sparse storage formats; and (4) we have implemented the CBM format and corresponding matrix multiplication kernels such that they can be used together with state-of-the-art Deep Learning framework, such as PyTorch. Furthermore, experimental evaluation using real-world datasets demonstrates the effectiveness of our approach. Our method is nearly 5× faster than state-of-the-art SpMM implementations in sequential and parallel environments, subsequently shortening the inference time of a 2-layer GCN by more than 3×. Our implementation will be made available at <https://github.com/cbm4scale>.
As previously stated, the CBM format is akin to the work of Björklund and Lingas <cit.>. Nevertheless, it distinguishes itself by being specifically designed to accelerate the product between a single sparse binary matrix and a set of real-valued matrices, thus the binary matrix only needs to be compressed once[Ideally, the adjacency matrix of any unweighted graph would be provided in CBM, avoiding any compression overhead.]. Furthermore, our format resorts to a Minimum Cost Arborescence (MCA), which allows us to ignore compression opportunities that do not lead to improvements in memory-footprint and matrix multiplication. The CBM format also overcomes limitations present in STAF and BE, since our format exploits compression opportunities along complete rows of the matrix (by default), is constructed in polynomial time, and does not allocate additional memory to store partial results.
§ COMPRESSED BINARY MATRIX FORMAT
Let 𝐀∈{0,1}^m × n be a binary matrix, where a⃗_i ∈{0,1}^n represents the i-th row of 𝐀 for i=1, …, m. Also assume that a⃗_i is represented with an adjacency list with the column indices of the nonzero elements of a⃗_i.
The Compressed Binary Matrix (CBM) format resorts to differential compression to represent the rows of a binary matrix 𝐀. This is, if 𝐀 is represented in CBM format, then any row a⃗_x can be represented by another row a⃗_y and two lists of deltas that indicate which elements must be included (Δ^+_x,y), or removed (Δ^-_x,y), from the adjacency list of a⃗_y to obtain a⃗_x:
a⃗_x = (a⃗_y ∪Δ^+_x,y )∖Δ^-_x,y, where Δ^+_x,y=a⃗_x ∖a⃗_y , and Δ^-_x,y=a⃗_y ∖a⃗_x.
Assuming a⃗_y is present in memory, Equation <ref> suggests that the memory required to represent a⃗_x is proportional to the number of deltas between a⃗_x and a⃗_y. If these rows are similar, then it is likely that the number of deltas is smaller than the number of nonzero elements of a⃗_x. In that case, it would be more memory efficient to represent a⃗_x with respect to a⃗_y than through its adjacency list.
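For illustration, a tiny sketch of this reconstruction with adjacency lists held as Python sets (the set-based representation is ours for readability; the actual implementation stores deltas in CSR form, as discussed later):

```python
def compress_row(a_x, a_y):
    """Delta lists needed to obtain row a_x from the reference row a_y."""
    delta_plus = a_x - a_y       # column indices to add to a_y
    delta_minus = a_y - a_x      # column indices to remove from a_y
    return delta_plus, delta_minus

def reconstruct_row(a_y, delta_plus, delta_minus):
    """a_x = (a_y union delta_plus) minus delta_minus."""
    return (a_y | delta_plus) - delta_minus

a_y = {1, 3, 5, 8}               # adjacency list of the reference row
a_x = {1, 3, 6, 8, 9}            # row to be compressed
dp, dm = compress_row(a_x, a_y)
assert reconstruct_row(a_y, dp, dm) == a_x
print(dp, dm)                     # {6, 9} and {5}: 3 deltas instead of 5 nonzeros
```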
Therefore, to further reduce the memory footprint of matrix 𝐀, the compression algorithm that builds the CBM format must find a suitable chain of compression to represent all rows of 𝐀. That is, for each row a⃗_x, identify another similar row a⃗_y that characterizes the former, such that: (1) the number of deltas required to represent each row a⃗_x, given the choice of a⃗_y, is minimized, and (2) the number of deltas required to represent a⃗_x is guaranteed to be less than, or equal to, the number of nonzero elements in a⃗_x.
Minimizing the number of deltas. To address point (1), the CBM format must first measure the number of deltas required to convert each row a⃗_y into all other rows a⃗_x of 𝐀, i.e., measure the Hamming distance for each pair of matrix rows.
This step provides a global view of the dissimilarity between the rows of the matrix 𝐀, and it can be modeled as a fully-connected and undirected distance graph G. This graph has m vertices, where each vertex represents a unique row of the matrix, and the weight of each edge (y,x) corresponds to the number of deltas required to represent a⃗_x with respect to a⃗_y.
To reduce the number of deltas required to compress the rows of 𝐀 we can find a Minimal Spanning Tree (MST) of G, which by definition spans G with the minimum sum of edge weights possible. Naturally, any MST of the distance graph, rooted in vertex x, defines a chain of compression with as many deltas as the weight of this tree plus the number of nonzero elements of a⃗_x. Thus, any MST rooted in the vertex corresponding to the row with the fewest nonzero elements, defines a chain of compression that satisfies point (1).
Worst-case guarantees. Note that the chain of compression obtained by finding an MST of G does not satisfy point (2), because the weight of the lightest incoming edge of any vertex x can be greater than the number of nonzero elements in a⃗_x. In such cases, representing a⃗_x with an adjacency list is clearly more memory efficient. To avoid this issue, we extended the distance graph G with a virtual vertex 0 which is connected to all other vertices of the graph. This virtual vertex represents a null row-vector a⃗_0 ∈{0}^n, which ensures that the weight of each edge (0,x) is equal to the number of nonzero elements in a⃗_x. The inclusion of virtual vertex 0 in the distance graph G ensures that the issue described above cannot occur, since the lightest incoming edge of any vertex x is now at most as heavy as the number of nonzero elements in a⃗_x. Therefore, any chain of compression characterized by an MST of G, rooted on vertex 0, is guaranteed to satisfy points (1) and (2). If we use this chain of compression to represent a matrix in CBM format, then following property will be observed:
The number of deltas required to represent any matrix 𝐀∈{0,1}^m × n in Compressed Binary Matrix (CBM) format is never greater than the number of nonzero elements in matrix 𝐀.
To complete the construction, we simply traverse the compression chain above in topological order, and for every edge (y,x) visited, we compute the lists of positive and negative deltas required to convert row a⃗_y into a⃗_x.
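The construction just described can be sketched as follows; this is a simplified reference implementation on Python sets using Prim's algorithm on the extended distance graph (the function and variable names are ours, and the actual code operates on CSR data):

```python
import heapq

def build_cbm(rows):
    """rows: list of adjacency sets, one per matrix row (rows become nodes 1..m).
    Returns (parent, deltas): parent[x] is the node whose row compresses row x
    (0 is the virtual null row) and deltas[x] = (delta_plus, delta_minus)."""
    m = len(rows)
    ext = [set()] + rows                        # node 0 is the virtual null row

    def dist(x, y):                             # Hamming distance = |symmetric difference|
        return len(ext[x] ^ ext[y])

    # Prim's algorithm on the fully connected extended distance graph, rooted at node 0.
    in_tree = [False] * (m + 1)
    parent = [None] * (m + 1)
    best = [float("inf")] * (m + 1)
    best[0] = 0
    heap = [(0, 0)]
    while heap:
        _, x = heapq.heappop(heap)
        if in_tree[x]:
            continue
        in_tree[x] = True
        for y in range(1, m + 1):
            if not in_tree[y]:
                w = dist(x, y)
                if w < best[y]:
                    best[y], parent[y] = w, x
                    heapq.heappush(heap, (w, y))

    # Delta lists for every edge (parent[x], x) of the resulting spanning tree.
    deltas = {x: (ext[x] - ext[parent[x]], ext[parent[x]] - ext[x])
              for x in range(1, m + 1)}
    return parent, deltas

rows = [{0, 1, 2, 3}, {0, 1, 2, 4}, {5}]        # 3 rows, 9 nonzero elements in total
parent, deltas = build_cbm(rows)
print(parent[1:])                               # [0, 1, 0]: row 2 is compressed against row 1
print(sum(len(p) + len(n) for p, n in deltas.values()))   # 7 deltas <= 9 nonzeros
```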
§.§ Time and Space Analysis
Any matrix 𝐀∈{0,1}^m × n can be represented in CBM format in O((m+1) 𝐧𝐧𝐳(𝐀) + m^2 log m) time.
The construction of the extended distance graph G for 𝐀 requires the computation of m(m+1)/2 Hamming distances between all possible row pairs (a⃗_x, a⃗_y). The Hamming distance of each pair of rows can be reduced to the intersection of their adjacency lists, computed in O(𝐧𝐧𝐳(a⃗_x) + 𝐧𝐧𝐳(a⃗_y)) time. Hence, the time to compute all m(m+1)/2 Hamming distances is upper-bounded by
∑_x=0^m ∑_y=0^m (𝐧𝐧𝐳(a⃗_x) + 𝐧𝐧𝐳(a⃗_y)) = 2(m+1) 𝐧𝐧𝐳(𝐀) = O((m+1) 𝐧𝐧𝐳(𝐀)).
Additionally, well-known MST algorithms, such as Prim or Kruskal, are known to find an MST in O(E log V) time, where E and V denote the number of edges and vertices in the graph. Since the extended distance graph G contains m(m+1)/2 edges and m+1 vertices, finding an MST of G requires O((m+1)^2 log (m+1)) time, and therefore, representing matrix 𝐀 takes time
O((m+1) 𝐧𝐧𝐳(𝐀) + (m+1)^2 log (m+1)).
Finally, a single list of deltas can also be obtained from the intersection of the adjacency lists of a⃗_x and a_y. Hence, the computation of 2m lists of deltas is already accounted for in Equation <ref>.
The CBM representation of a binary matrix A is fully characterized by the edges of an MST of the extended distance graph G, and the lists of positive (Δ^+) and negative (Δ^-) deltas associated with each vertex of G. Therefore, the memory required to represent a matrix in CBM format depends on the number of rows in A and the sum of the size of all lists of deltas.
The space required to represent a binary matrix 𝐀∈{0,1}^m × n in Compressed Binary Matrix (CBM) format is O(m + ∑_x=1^m (|Δ^+_x, r_x| + |Δ^-_x, r_x|)), where r_x represents the index of the row selected to compress row a⃗_x.
Assume matrix 𝐀 is represented in CBM format. Then the chain of compression required to represent this matrix consists of: (1) a list of edges that represents an MST rooted in vertex 0 of graph G, and (2) a list of positive and negative deltas for each edge of this MST. Since the extended version of G is a fully-connected graph with m+1 vertices, it is known that any MST of G contains m edges. Therefore, the list of edges contains m elements, and there are m lists of deltas, whose size totals ∑_x=1^m (|Δ^+_x,r_x| + |Δ^-_x,r_x|).
§.§ Fast Matrix Multiplication with Compressed Binary Matrix Format
Let w∈ℝ^n be a dense and real-valued vector, and a⃗_x and a⃗_y two distinct rows of a binary matrix 𝐀 as previously defined.
It follows from Equation <ref> that we can resort to the inner-product a⃗_y ·w⃗ to compute a⃗_x ·w⃗ as
a⃗_x ·w⃗ = ((a⃗_y ∪Δ^+_x,y )∖Δ^-_x,y)·w⃗ = (a⃗_y ·w⃗) + (Δ^+_x,y·w⃗) - (Δ^-_x,y·w⃗).
This implies that the dot-product a⃗_x ·w⃗ can be calculated in 1+|Δ^+_x,y|+|Δ^-_x,y| scalar operations, if the value of a⃗_y·w⃗ can be reused. Naturally, we can resort to this strategy to design fast matrix-vector multiplication kernels u⃗←𝐀v⃗, where 𝐀∈{0,1}^m × n, u∈ℝ^m, and v∈ℝ^n, by computing all dot-products between the rows of 𝐀 and v in an order where: (1) the value of the dot-product a⃗_y ·v is known before calculating a⃗_x ·v, and (2) the value of all dot-products a⃗_x ·v⃗ is calculated with respect to the value a⃗_y ·v⃗ that results in the minimum overall number of scalar operations. By definition, the chain of compression of matrix 𝐀 already represents such an ordering. Therefore, we can accelerate matrix-vector multiplication by traversing the chain of compression of matrix 𝐀 in topological order, and for each edge visited compute u_x ←a⃗_x ·v⃗ as
u_x ← u_y + (Δ^+_x,y·v⃗) - (Δ^-_x,y·v⃗),
where u_y is known to already contain the value a⃗_y ·v⃗. Note that classic sparse-dense matrix-vector multiplication kernels compute u⃗←𝐀v⃗ in 𝐧𝐧𝐳(𝐀) scalar operations, where 𝐧𝐧𝐳(𝐀) represents the number of nonzero elements in matrix 𝐀. Assuming 𝐀 is represented in CBM format, the number of deltas required to represent each row a⃗_x of 𝐀 is known to be smaller than, or equal to, the number of nonzero elements in a⃗_x. If the number of deltas required to represent a⃗_x is strictly smaller than 𝐧𝐧𝐳(a⃗_x), then it is clear that the dot-product a⃗_x·v⃗ requires at most 𝐧𝐧𝐳(a⃗_x) scalar operations. On the other hand, if the number of deltas required to represent a⃗_x is the same as the number of nonzero elements in a⃗_x, Equation <ref> suggests that the dot-product a⃗_x ·v⃗ would be computed in 1+𝐧𝐧𝐳(a⃗_x) scalar operations. This scenario can be avoided by engineering the MST algorithm to select the out-going edge of the virtual node 0 in case of a draw, so that a⃗_x is compressed with respect to the null-row a⃗_0. Since a⃗_0·v⃗=0, the cost of this extra addition can be engineered away, and the dot-product a⃗_x ·v⃗ is computed in exactly 𝐧𝐧𝐳(a⃗_x) scalar operations. As the number of scalar operations required to compute a⃗_x ·v⃗ never surpasses the number of nonzero elements in a⃗_x for x=1,…,m, the following property becomes evident:
The number of scalar operations required to compute matrix-vector multiplication based on the Compressed Binary Matrix (CBM) format is never greater than those required to compute matrix-vector multiplication based on classic sparse formats.
Additionally, Equation <ref> shows that our matrix-vector multiplication strategy does not require the allocation of additional buffers, since the value of the dot-product required to compute a_x ·v is guaranteed to already be stored in vector u⃗. Therefore, the following property is observed:
The amount of memory required to compute matrix-vector multiplication based on the Compressed Binary Matrix (CBM) format is proportional to the size of its operands and remains constant during execution time.
Intuitively, we can resort to the matrix-vector multiplication strategy described above to design fast matrix-matrix multiplication kernels as described in Algorithm <ref>. This algorithm assumes that the left-hand side operand matrix 𝐀∈{0,1}^m× n is represented in CBM format, while matrices 𝐁∈ℝ^n × p and 𝐂∈ℝ^m× p are dense and correspond to the right-hand side operand matrix and to the product matrix, respectively. As it can be observed, Algorithm <ref> computes the matrix-vector product between matrix 𝐀 and each column of matrix 𝐁. Therefore, we can conclude that Properties <ref> and <ref> hold for the matrix-matrix multiplication case.
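A self-contained sketch of this multiplication strategy; the tree and delta lists below are the ones obtained for the toy matrix used in the construction sketch above, and the names are ours:

```python
from collections import deque
import numpy as np

def cbm_matmul(parent, deltas, B):
    """C = A @ B, where the binary matrix A is given only through its chain of
    compression: parent[x] (0 = virtual null row) and deltas[x] = (set_plus, set_minus)."""
    m = len(parent) - 1
    C = np.zeros((m + 1, B.shape[1]))           # row 0 holds the virtual null row (all zeros)

    children = [[] for _ in range(m + 1)]       # tree adjacency, for a root-to-leaf traversal
    for x in range(1, m + 1):
        children[parent[x]].append(x)

    queue = deque([0])
    while queue:                                # BFS = topological order over the tree
        y = queue.popleft()
        for x in children[y]:
            d_plus, d_minus = deltas[x]
            C[x] = C[y]                         # reuse the parent's dot-products ...
            if d_plus:                          # ... and apply the deltas
                C[x] += B[sorted(d_plus)].sum(axis=0)
            if d_minus:
                C[x] -= B[sorted(d_minus)].sum(axis=0)
            queue.append(x)
    return C[1:]                                # drop the virtual row

# Toy example: the same 3 x 6 binary matrix as in the construction sketch.
rows = [{0, 1, 2, 3}, {0, 1, 2, 4}, {5}]
parent = [None, 0, 1, 0]
deltas = {1: ({0, 1, 2, 3}, set()), 2: ({4}, {3}), 3: ({5}, set())}
A = np.zeros((3, 6))
for i, r in enumerate(rows):
    A[i, sorted(r)] = 1.0
B = np.arange(24, dtype=float).reshape(6, 4)
assert np.allclose(cbm_matmul(parent, deltas, B), A @ B)
```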
Leveraging High Performance SpMM kernels. The representation of a binary matrix 𝐀 in CBM format was until now conceptualized as a chain of compression, where each node of this chain is associated with two lists of deltas. As is, a compressed matrix cannot be represented in a sparse format capable of leveraging efficient SpMM kernels. To address this issue, we represent the lists of deltas that characterize our format as a matrix 𝐀'∈ℝ^m × n, which can be represented in any convenient matrix format, and leverage efficient SpMM kernels provided by Intel MKL to compute all dot-products of Algorithm <ref> in a single matrix product 𝐀'𝐁. Once 𝐀'𝐁 is stored in matrix 𝐂, we can finalize the matrix multiplication with CBM, by traversing the chain of compression in topological order, and updating row c⃗_x of 𝐂 as c⃗_x := c⃗_x + c⃗_y for each edge (y,x) that was visited. Naturally, these updates can also leverage efficient AXPY kernels, which are also provided by Intel MKL.
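In the same spirit, a compact sketch of this two-stage strategy, with SciPy's CSR SpMM standing in for the Intel MKL kernels; the delta matrix 𝐀' and the tree correspond to the toy example above, with a -1 entry encoding a negative delta:

```python
import numpy as np
import scipy.sparse as sp

# Delta matrix A' for the toy example: node 1 stores +{0,1,2,3},
# node 2 stores +{4} and -{3}, node 3 stores +{5}.
A_delta = sp.csr_matrix(np.array([
    [1, 1, 1,  1, 0, 0],
    [0, 0, 0, -1, 1, 0],
    [0, 0, 0,  0, 0, 1],
], dtype=float))
parent = [None, 0, 1, 0]              # node 0 is the virtual null row
B = np.arange(24, dtype=float).reshape(6, 4)

# Stage 1: a single SpMM on the delta matrix (MKL's CSR kernel in the real code).
C = A_delta @ B

# Stage 2: accumulate along the chain of compression in topological order
# (one AXPY per tree edge in the real code).
for x in (1, 2, 3):                   # already a root-to-leaf order for this tree
    if parent[x] != 0:
        C[x - 1] += C[parent[x] - 1]

A = np.array([[1, 1, 1, 1, 0, 0],
              [1, 1, 1, 0, 1, 0],
              [0, 0, 0, 0, 0, 1]], dtype=float)
assert np.allclose(C, A @ B)
```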
Multi-threading parallelism. As suggested in the previous paragraph, matrix multiplication with the CBM format can be divided into two stages. The first one computes the product 𝐂:=𝐀'𝐁 which is embarrassingly parallel, and Intel MKL already provides efficient multi-threaded and vectorized implementations of this operation. The second stage involves updating the rows of matrix 𝐂 with respect to the chain of compression. This stage presents data-dependencies, since the final value of row c⃗_x of 𝐂 can only be calculated once c⃗_y is known. There are however no dependencies between different branches of the chain of compression. Therefore, we can parallelize this stage
by concurrently updating the rows of matrix 𝐂 that are found in different branches of the virtual node 0.
Extending SpMM with CBM to normalized adjacency matrices. Let 𝐀̂∈ℝ^n × n represent the normalized adjacency matrix of an unweighted graph, where 𝐀̂ = 𝐃^-1/2 (𝐀 + 𝐈) 𝐃^-1/2. As is, we cannot resort to the CBM format to represent 𝐀̂ since this matrix is not binary. However, fast matrix multiplication is still possible. Note that 𝐃^-1/2 is a diagonal matrix, and therefore (𝐀 + 𝐈) 𝐃^-1/2 corresponds to a column-scaled matrix, i.e., the elements in a column of the matrix are either 0 or have a constant value that is unique to this column. We can represent this matrix in CBM format, by simply multiplying the corresponding matrix of deltas (𝐀' + 𝐈) by 𝐃^-1/2. At this point, we can efficiently compute 𝐂:=((𝐀 + 𝐈)𝐃^-1/2)𝐁 as described previously.
Finally, note that the leading 𝐃^-1/2 in 𝐃^-1/2((𝐀 + 𝐈)𝐃^-1/2𝐁) simply scales the rows of the matrix (𝐀 + 𝐈)𝐃^-1/2𝐁. The cost of scaling the rows of this matrix can be hidden by fusing it with the update step of the product matrix 𝐂.
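A short sketch of this extension, again with SciPy standing in for MKL: the column scaling by 𝐃^-1/2 is folded into the sparse operand before the SpMM, and the row scaling is applied afterwards, which is where the fusion mentioned above would take place.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
A = sp.csr_matrix((rng.random((5, 5)) < 0.4).astype(float))
A = ((A + A.T) > 0).astype(float)                  # symmetric unweighted graph
B = rng.normal(size=(5, 3))

A_tilde = (A + sp.eye(5)).tocsr()
d_inv_sqrt = 1.0 / np.sqrt(np.asarray(A_tilde.sum(axis=1)).ravel())

# Column-scaled matrix (A + I) D^{-1/2}: it keeps the sparsity pattern of A + I,
# so its CBM delta matrix is just the binary delta matrix scaled column-wise.
M = A_tilde.multiply(d_inv_sqrt[None, :]).tocsr()

# A_hat @ B equals a row scaling of (M @ B) by D^{-1/2}; in the actual kernel
# this row scaling is fused with the update step of the product matrix.
C = d_inv_sqrt[:, None] * (M @ B)

A_hat = sp.diags(d_inv_sqrt) @ A_tilde @ sp.diags(d_inv_sqrt)
assert np.allclose(C, A_hat @ B)
```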
Speeding up SpMM with Edge Pruning. Note that not all compression opportunities contribute to faster matrix multiplication kernels. The overheads associated with differential compression, such as traversing the chain of compression and updating the result matrix, might overcome potential performance gains if the number of scalar operations saved is not above a certain threshold. To address this issue, we can prune all edges of the distance graph of 𝐀 where the number of scalar operations saved does not meet a user-defined threshold α∈ℕ. For each edge (y,x) in the extended distance graph of 𝐀, prune this edge if its weight is greater than the number of nonzero elements in a⃗_x minus α. Naturally, if we prune the edges of the extended distance graph of 𝐀 in this manner, it is possible for a single edge direction to be pruned, while the opposite direction remains in the graph. Therefore, the extended distance graph of 𝐀 is now directed, and a suitable chain of compression corresponds to a Minimum Cost Arborescence (MCA) rooted in the virtual node 0. Note that our compression algorithm remains correct, since the extended distance graph of 𝐀 contains an out-going edge from the virtual node 0 to all other nodes. Furthermore, the time required to build the CBM format, as shown in Lemma <ref>, remains unchanged, since finding an MCA has the same time complexity as finding an MST <cit.>.
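The pruning rule itself amounts to a simple test while building the extended distance graph (α = 0 recovers the unpruned format); a sketch using the same set-based rows as the earlier examples:

```python
def keep_edge(a_x, a_y, alpha):
    """Keep edge (y, x) only if compressing row a_x against row a_y saves at
    least alpha scalar operations compared to storing a_x uncompressed."""
    hamming = len(a_x ^ a_y)           # number of deltas this edge would require
    return hamming <= len(a_x) - alpha

print(keep_edge({0, 1, 2, 3}, {0, 1, 2, 4}, alpha=2))   # True:  2 <= 4 - 2
print(keep_edge({0, 1, 2, 3}, {0, 1, 2, 4}, alpha=3))   # False: 2 >  4 - 3
```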
§ EXPERIMENTAL EVALUATION
Experimental Setting. The experiments found in this section were run on an Intel Xeon Gold 6130 (Skylake) CPU with 16 physical cores and 2.1 GHz fixed clock frequency. This machine runs on CentOS Linux 7 (version 3.10.0) operating system. The SpMM kernels tested in this section were implemented in C++, and rely on Intel MKL 24.0 sparse CSR format and corresponding matrix multiplication kernels. The C++ code developed in this work is then called from Python 3.11, via PyTorch C++ Extensions to reliably emulate common use-cases.
Parallel experiments (with 16 cores) were implemented with OpenMP 4.5, and the threads were pinned to physical cores via the corresponding environment variable.
§.§ Evaluation Metrics
The CBM format was evaluated with respect to quality of compression achieved (compression ratio), and the time required (runtime reduction) to compute SpMM and to infer a 2-layer GCN, when the adjacency
matrices of the graphs are represented in our format. The compression
ratio is defined as the ratio between the memory required to represent a matrix in CSR format and
CBM format. In our implementation, the CBM format is composed of the corresponding matrix of deltas 𝐀' and a tree representing the chain of compression, both stored in CSR format. In the context of sparse-dense matrix multiplication (SpMM), the runtime reduction is measured by comparing the average time, out of
50 runs, it takes to perform matrix multiplication with a randomly generated
dense matrix with 500 columns using the CSR format, to the time taken to compute
the same matrix multiplication with the CBM format. The formula used to capture
this metric is (T_CSR - T_CBM)/T_CSR × 100%, where T_CSR is the time required to carry out sparse-dense matrix multiplication with CSR by the state-of-the-art SpMM implementation offered by Intel MKL, and T_CBM is the time taken to compute the same product with CBM. We used the same formula and number of runs in the context of GCN inference; however, T_CSR and T_CBM then correspond to the time required by the inference stage of this network when resorting to SpMM kernels based on CSR and CBM, respectively. It is important to note that we did not consider the SpMM kernels that are native to PyTorch, because these kernels were substantially slower than the ones implemented in Intel MKL.
§.§ Datasets
To demonstrate the advantages of the CBM format in the context of SpMM and GCN inference, we selected six real-world graphs of varying size and average in-degree as depicted in Table <ref>. The selected graphs depict relationships between authors and/or academic papers, where nodes tend to share many common neighbors. This property suggests that the adjacency matrices of these graphs are good candidates to be represented in CBM format.
In co-paper graphs, each node represents a paper. An undirected edge is placed between two nodes if the corresponding papers share at least one common author. They depict the interconnection and collaborative patterns between various academic publications.
Co-author graphs represent scientific collaborations between authors of academic papers. Here, nodes correspond to authors, and an undirected edge is placed between two nodes if authors of the nodes have co-authored a paper together. If a paper is authored collaboratively by a group of authors, it results in a fully connected subgraph, or clique, encompassing those grouped nodes.
Citation graphs are directed graphs where each node represents an academic paper. Directed edges in these graphs illustrate citations, with an edge pointing from the citing paper to the cited paper. These graphs highlight the directional flow of information and the influence of one paper upon another within the academic community.
§.§ Sparse-Dense Matrix Multiplication (SpMM) Evaluation
Finding the best α is key to improving the performance of matrix multiplication with CBM. Adjusting this parameter not only reduces the overhead associated with traversing the compression chain, but also exposes more parallelism opportunities, as it increases the out-degree of the virtual node. Given the importance of α, we first consider the case where α = 0 and our edge pruning technique is not applied, and then show how fine-tuning α improves the performance of matrix multiplication with CBM.
α equal to zero. Representing Cora (Fig. <ref>) and PubMed (Fig. <ref>) datasets in CBM resulted in minimal, and even negative, compression
gains with respect to CSR. This is likely caused by the small average in-degree of these graphs, which suggests that the compression opportunities found in these graphs do not offset the memory overhead required to represent the corresponding chains of compression. As expected, the poor compression rate of these datasets led to no speedup in the context of matrix multiplication.
On the other hand, compressing ca-AstroPh (Fig. <ref>) and ca-HepPh (Fig. <ref>) with our format increased the compression ratios of these datasets to 2.6× and 1.7×, respectively, subsequently accelerating the matrix product for both. Our matrix multiplication strategy achieved a runtime reduction of 26% and 8% for ca-AstroPh in sequential and parallel environments, respectively. For ca-HepPh, the same multiplication kernel presented a runtime reduction of 40% and 18%, also in sequential and parallel environments.
The datasets that benefited the most from our format were the coPapersCiteseer (Fig. <ref>) and coPapersDBLP (Fig. <ref>), achieving impressive compression ratios of 9.8× and 5.9×, leading to substantial improvements in computational performance. Our matrix multiplication kernel with coPapersCiteseer achieved a runtime reduction of 71% and 77% in sequential and parallel environments, while the same kernel with coPapersDBLP presented a runtime reduction of 59% and 61% in the same experimental settings. These results highlight the potential of the CBM format to efficiently compress and multiply unweighted graphs that present natural communities and an high average in-degree.
α greater than zero. Setting α to a value greater than 8 reduced the overhead associated with traversing the chain of compression of Cora (Fig. <ref>) and PubMed (Fig. <ref>), making the performance decay observed for α = 0 negligible in sequential and parallel settings, even when no compression gains are observed.
Increasing the value of α is also beneficial for sequential matrix multiplication with ca-AstroPh (Fig. <ref>) and ca-HepPh (Fig. <ref>), improving the respective runtime reduction from 26% up to 28% (α=2) and from 40% up to 44% (α=4). Nevertheless, it is important to note that our compression algorithm will start to ignore good compression opportunities once α is large enough, worsening the performance of our matrix multiplication strategy. This effect is evident in both datasets for α greater than 16, where the compression ratio decreases alongside with the runtime reduction.
Experiments with both co-author datasets show that adjusting α is even more important in the parallel case, increasing the runtime reduction of our multiplication strategy with ca-AstroPh from 8% up to 11% (α=8), and with ca-HepPh from 18% up to 26% (α=64). Furthermore, these experiments confirm that the degree of parallelism of our multiplication strategy increases concurrently with α. This effect is easily observed for ca-HepPh (Fig.<ref>), where the performance of our matrix multiplication kernel sharply declines for α greater than 2, followed by a steep increase in performance when α equals to 32 or 64 (even though our compression algorithm is already ignoring good compression opportunities at this point).
Finding the best α is not as relevant for coPapersCiteseer (Fig.<ref>) and coPapersDBLP (Fig.<ref>), as most compression opportunities save more than 8 scalar operations in the context of our matrix multiplication strategy. This observation is verified, since the compression ratio of these datasets shows little to no decrease for α smaller than 8. This effect is most likely due to the high average in-degree of both datasets. Still, adjusting α is required to obtain the best runtime reduction for both coPapers datasets. In our experiments our matrix multiplication kernel with coPapersCiteseer achieved a peak runtime reduction of 71% (α=2) and 79% (α=32) in sequential and parallel environments, while the same kernel with coPapersDBLP peaks at 59% (α=4) and 63% (α=16) also in sequential and parallel environments.
In more detail, for ca-AstroPh, as α increases the sequential runtime reduction initially improves, peaking at 28% for α = 2, before dropping to just 4% for α = 64; similarly, the parallel runtime reduction rises to 11% for α = 8 and then declines to 2% for α = 64, while the compression ratio decreases steadily, reaching 1.02× for α = 64. For ca-HepPh, the sequential runtime improvement peaks at 44% for α = 4 and then declines to 29% for α = 64, whereas the parallel runtime reduction reaches 19% for α = 1, dips for intermediate values, and eventually climbs to 26% for α = 64; the compression ratio gradually falls to 1.56× as α increases.
For coPapersDBLP, the sequential runtime reduction peaks at 59% for α = 4 before falling to 42% for α = 64, and the parallel runtime reduction increases marginally before dropping to 59% for α = 64, with the compression ratio declining to 2.2×. For coPapersCiteseer, the sequential runtime reduction improves marginally before decreasing to 59% for α = 64, while the parallel runtime reduction peaks at 79% for α = 32 before settling at 77% for α = 64, with the compression ratio falling to 3.4×.
Overall, while the compression ratio decreases as α increases, SpMM performance generally improves. Choosing a good α is even more critical in the parallel case because it enhances the degree of parallelism of the SpMM kernel: in some cases performance first decreases as α grows and then spikes once α becomes large enough. For example, ca-HepPh shows a decrease from α=8 to α=16, but a spike from α=16 to α=32. Finally, if α is large enough (around α=8), the performance decay observed for challenging datasets such as Cora and PubMed can be minimized.
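In practice, the most reliable way to pick α for a new dataset is a small benchmark sweep. The sketch below (in Python) illustrates such a sweep under stated assumptions: build_fn and matmul_fn are placeholders standing in for the CBM construction and multiplication bindings rather than the actual API of our implementation, and the commented usage falls back to a plain SciPy CSR product only to show the harness running.

import time

def sweep_alpha(adjacency, features, build_fn, matmul_fn,
                alphas=(0, 1, 2, 4, 8, 16, 32, 64), repeats=10):
    """Time one SpMM per candidate alpha and return the best setting.

    build_fn(adjacency, alpha) should construct the compressed operator
    (e.g. a CBM matrix built with the given alpha); matmul_fn(op, features)
    should compute op @ features. Both are placeholders for the real bindings.
    """
    timings = []
    for alpha in alphas:
        op = build_fn(adjacency, alpha)          # build the operator for this alpha
        start = time.perf_counter()
        for _ in range(repeats):
            matmul_fn(op, features)              # repeated SpMM to average out noise
        timings.append((alpha, (time.perf_counter() - start) / repeats))
    best_alpha, _ = min(timings, key=lambda t: t[1])
    return best_alpha, timings

# Reference run with a plain CSR product (alpha is ignored by this stand-in):
# import numpy as np; import scipy.sparse as sp
# A = sp.random(10_000, 10_000, density=1e-3, format="csr")
# X = np.random.rand(10_000, 500).astype(np.float32)
# sweep_alpha(A, X, build_fn=lambda a, alpha: a, matmul_fn=lambda a, x: a @ x)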
§.§ Integrating CBM with GCN
To assess the impact of CBM in GNNs, we considered the runtime reduction of an inference stage of a 2-layer GCN with 500 features where the normalized adjacency matrix
of each dataset is represented in our format. Matrix products involving the normalized adjacency matrix
were carried out with our extended matrix multiplication kernel, as described in Section <ref>. As baseline for our experiments, we selected the same 2-layer GCN where the normalized adjacency matrix
is represented in CSR and any products involving this matrix are carried out by the SpMM implementation found in Intel MKL.
These experiments are illustrated in Figure <ref>. To keep the discussion concise, we only considered the values of α that led to the best matrix multiplication performance for each dataset analyzed in Section <ref>. Runtimes were measured as the average over 50 repetitions of the inference stage.
Representing both the Cora and PubMed datasets in CBM format increased the inference time of the corresponding GCNs compared to the baseline, by 6% and 7% in sequential execution and by 19% and 8% in parallel execution, respectively. This behavior is expected because the only steps of the inference that benefit from the CBM format are the products involving the normalized adjacency matrix. As previously shown, compressing these datasets using our format did not accelerate the corresponding matrix products.
Experiments with both co-author datasets demonstrated that our format can reduce the inference time for GCN models. For ca-AstroPh, our format reduced the inference time of the network by 17% in sequential environments and 3% in parallel environments. Similarly, for ca-HepPh, our format achieved a runtime reduction of 22% in sequential environments and 11% in parallel environments.
Representing both coPapers datasets in our format resulted in the highest runtime reductions during GCN inference. For coPapersCiteseer, our format achieved an average runtime reduction of 48% in sequential environments and 66% in parallel environments. For coPapersDBLP, our format achieved a runtime reduction of 43% in sequential environments and 52% in parallel environments. These results demonstrate the superior performance of CBM-based SpMM for datasets favorable to block compression.
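For reference, the structure of this inference experiment can be sketched as a 2-layer GCN whose adjacency products are delegated to a pluggable SpMM callable, so that the Intel MKL CSR baseline and a CBM-backed kernel can be swapped without touching the model. This is only an illustrative sketch, not our actual PyTorch integration; the cbm.spmm binding named in the comment is hypothetical.

import torch

class TwoLayerGCN(torch.nn.Module):
    """Minimal 2-layer GCN used to illustrate where the adjacency SpMM plugs in;
    `spmm(a_hat, h)` must return a_hat @ h."""

    def __init__(self, in_dim, hidden_dim, out_dim, spmm):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hidden_dim, bias=False)
        self.lin2 = torch.nn.Linear(hidden_dim, out_dim, bias=False)
        self.spmm = spmm

    @torch.no_grad()
    def forward(self, a_hat, x):
        h = torch.relu(self.spmm(a_hat, self.lin1(x)))  # A_hat @ (X W1)
        return self.spmm(a_hat, self.lin2(h))           # A_hat @ (H W2)

# Baseline: a_hat as a torch sparse tensor multiplied with torch.sparse.mm, e.g.
# model = TwoLayerGCN(num_features, 500, num_classes, spmm=torch.sparse.mm)
# CBM variant (hypothetical binding): spmm=lambda a, h: cbm.spmm(a, h)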
§ FINAL REMARKS
In this work we proposed the Compressed Binary Matrix (CBM) format which simultaneously reduces the memory footprint of unweighted graphs and binary matrices, and enables the implementation of new matrix multiplication kernels that might be significantly faster than the current state-of-the-art.
Experimental evaluation results showed significant speedups, both in sequential and parallel environments, of up to 5×.
We also obtained significant performance improvements in the context of the GCN inference stage by integrating the CBM format into a deep learning framework, namely PyTorch, observing speedups of up to 3×.
Although we did not discuss the CBM format construction time, we observe that it can be built in a reasonable amount of time for a dataset provider. In our experiments it took us less than 16 seconds to convert the largest dataset into our format in a sequential CPU environment.
It is important to stress that the effectiveness of our format depends on the specific dataset, as discussed in the experimental evaluation. While we suspect that graphs with a high average degree and a tendency to form communities are good candidates, the best way to determine whether a graph is suitable is to examine the compression ratio achieved by our format for a reasonable value of α. Finally, we highlight that our format is future-proof, since future optimizations to high-performance SpMM kernels will also accelerate matrix multiplication with the CBM format.
Future work concerns integrating and evaluating the CBM format in the context of different GNN architectures, as well as targeting the training stage of these networks. Additionally, we intend to implement and evaluate our format and the corresponding multiplication kernels on GPU architectures.
§ ACKNOWLEDGMENTS
This work has been supported by the Innovation Study CBM4scale, funded by the Inno4scale project, which is funded by the European High-Performance Computing Joint Undertaking (JU) under Grant Agreement No 101118139. The JU receives support from the European Union's Horizon Europe Programme.
§ AUTHOR CONTRIBUTIONS
JNFA devised the main concept of the Compressed Binary Matrix (CBM) format, including the various matrix multiplication algorithms and their optimizations. JNFA also implemented the CBM format and the related multiplication kernels in C++, including the integration of SpMM and AXPY (from Intel MKL) into the matrix multiplication kernels based on the CBM format, as proposed in Section <ref>. Additionally, JNFA implemented an interface that enables calling the previous C++ routines from Python.
SM designed the Python benchmarks to compare CBM and CSR in matrix multiplication and compression quality, as described in Section <ref>. SM also integrated the CBM and CSR-based matrix multiplication kernels into the Message Passing Layer (MPL) to evaluate the impact of the CBM format during the inference stage of a 2-layer GCN model implemented in PyTorch, as discussed in Section <ref>. SB, APF, WNG, and LMSR supervised the project. JNFA wrote the bulk of this draft. All authors provided critical feedback, participated in discussions, contributed to the interpretation of the results, and approved the final manuscript.
|
http://arxiv.org/abs/2409.02996v1 | 20240904180005 | Enhanced AGN Activity in Overdense Galactic Environments at $2 < z < 4$ | [
"Ekta A. Shah",
"Brian C. Lemaux",
"Benjamin Forrest",
"Nimish Hathi",
"Lu Shen",
"Olga Cucciati",
"Denise Hung",
"Finn Giddings",
"Derek Sikorski",
"Lori Lubin",
"Roy R. Gal",
"Giovanni Zamorani",
"Emmet Golden-Marx",
"Sandro Bardelli",
"Letizia Pasqua Cassara",
"Bianca Garilli",
"Gayathri Gururajan",
"Hyewon Suh",
"Daniela Vergani",
"Elena Zucca"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Department of Physics and Astronomy, University of California, Davis, One Shields Avenue, Davis, CA, 95616, USA
[email protected]
Gemini Observatory, 670 N. A’ohoku Place, Hilo, Hawai'i, 96720, USA
Space Telescope Science Institute, Baltimore, MD 21218, USA
Department of Physics and Astronomy, Texas A&M University, College Station, TX, 77843-4242, USA
INAF-Osservatorio di Astrofisica e Scienza dello Spazio, Via Gobetti 93/3, I-40129, Bologna, Italy
University of Hawai'i, Institute for Astronomy, 2680 Woodlawn Drive, Honolulu, HI 96822, USA
Department of Astronomy, Tsinghua University, Beijing 100084, China
INAF - Osservatorio astronomico di Padova, Vicolo Osservatorio 5, 35122 Padova, Italy
INAF-IASF Milano, Via Alfonso Corti 12, 20159 Milano, Italy
University of Bologna – Department of Physics and Astronomy “Augusto Righi” (DIFA), Via Gobetti 93/2, 40129 Bologna, Italy
SISSA, Via Bonomea 265, I-34136 Trieste, Italy
IFPU - Institute for fundamental physics of the Universe, Via Beirut 2, 34014 Trieste, Italy
We conduct a study on the relationship between galaxy environments and their active galactic nuclei (AGN) activity at high redshifts (2.0<z<4.0). Specifically, we study the AGN fraction in galaxies residing in a range of environments at these redshifts, from field galaxies to the densest regions of highly overdense peaks in the GOODS-S extragalactic field. Utilizing the extensive photometric and spectroscopic observations in this field, we measure local and global overdensities over a large range of environments, including in massive (M_tot≥10^14.8M_⊙) protostructures reported in <cit.>. We employ a multi-wavelength AGN catalog <cit.>, consisting of AGN in nine different categories. Our analysis shows a higher AGN fraction (10.9^+3.6_-2.3%) for galaxies in the highest local-overdensity regions compared to the AGN fraction (1.9^+0.4_-0.3%) in the corresponding coeval-field galaxies at 2.0<z<4.0 (a ∼4σ difference). This trend of increasing AGN fraction in denser environments relative to the field is present in all redshift bins. We also find this trend consistently present in all five AGN categories that have a sufficient number of AGN to make a meaningful comparison: mid-IR SED, mid-IR color, X-ray luminosity, X-ray-to-radio luminosity ratio, and optical spectroscopy at 2.0<z<4.0. Our results also demonstrate a clear trend of higher (∼4×) AGN fractions in denser local overdensity environments for a given stellar mass. Additionally, we observe the same trend (though at a lower significance) with the global environment of galaxies, measured using a metric based on the projected distance of galaxies from their nearest massive (M_tot>10^12.8M_⊙) overdense (σ_δ>5.0) peak, normalized with respect to the size of the peak. These findings indicate that the prevalence of AGN activity is highly dependent on the environment in which a host galaxy resides, even at early times in the formation history of the Universe.
AGN Activity in Different Environments at 2<z<4
E. A. Shah et al.
Enhanced AGN Activity in Overdense Galactic Environments at 2 < z < 4
Ekta A. Shah1,
Brian C. Lemaux2,1,
Benjamin Forrest1,
Nimish Hathi3,
Lu Shen4,
Olga Cucciati5,
Denise Hung2,
Finn Giddings6,
Derek Sikorski6,
Lori Lubin1,
Roy R. Gal6,
Giovanni Zamorani5,
Emmet Golden-Marx7,8,
Sandro Bardelli5,
Letizia Pasqua Cassara9,
Bianca Garilli9,
Gayathri Gururajan10,5,11,12,
Hyewon Suh2,
Daniela Vergani5,
Elena Zucca5
Received xx; accepted xx
§ INTRODUCTION
Galaxies reside in a variety of environments, ranging from sparsely populated field regions where galaxies have few neighbors, to dense galaxy clusters in which thousands of galaxies can reside in a relatively small region. The environment of a galaxy plays a crucial role in its formation and evolution. The gravitational potential wells of large dense structures affect the motion of galaxies as well as their interaction and merger rates with other galaxies <cit.>. For these reasons, among many others, large dense structures can significantly alter the properties of their constituent galaxies, such as their morphology, stellar mass, star formation rate (SFR), and active galactic nuclei (AGN) activity <cit.>, among many other properties.
The supermassive black hole at the center of a galaxy that is actively accreting matter and emitting intense radiation is called an `AGN'. As more material gets accreted onto supermassive black holes, they grow, eventually leading to triggering of the AGN <cit.>. The feedback caused by AGN emission can affect the properties of galaxies such as its star formation <cit.>, morphology <cit.>, and chemical enrichment of the intragalactic medium <cit.> among many other properties. Therefore, the interplay between black hole growth and AGN feedback is a critical aspect of galaxy evolution as it connects the growth of the black hole to the properties and evolution of the host galaxy.
AGN are typically classified into various categories based on their observed properties. For example, radio-loud AGN emit strong radio waves relative to the strength of emission in other bandpasses (typically optical), likely due to the presence of relativistic jets powered by the central supermassive black hole. Radio-quiet AGN have weaker radio emission, possibly due to the absence or weakness of these jets <cit.>. Based on optical spectra, AGN with broad emission lines, known as Type 1, are thought to provide a direct view of the central engine of the AGN and broad-line region, whereas Type 2 AGN, which exhibit narrow emission lines, are thought to be viewed through an optically thick dusty torus that obscures the central engine and broad-line region <cit.>. The re-radiation of absorbed light from dust surrounding the AGN can cause an IR-bright AGN <cit.>. Some types of AGN can also be identified by their strong X-ray emission <cit.>. Among the most extreme examples of AGN are quasars, which are so luminous that they often outshine their host galaxies <cit.>. Additionally, the “little red dots” observed by the James Webb Space Telescope (JWST) could represent a population of heavily obscured, high-redshift AGN, providing insights into early black hole growth <cit.>. Different processes may be responsible for triggering different types of AGN. Since an AGN identified using one criterion does not necessarily meet the criteria in all other categories simultaneously, the results of AGN studies can vary depending on the type of AGN used.
The complex connection of AGN activity and dense environments of galaxy clusters has been studied in-depth at low-to-intermediate redshifts (z≲1.5).
In the local Universe, observational studies show that luminous AGN are fractionally less common among cluster galaxies compared to coeval field galaxies <cit.>, while the population of less-luminous AGN does not show a significant difference with environment <cit.>. <cit.> find that AGN are preferentially found in large-scale environment (such as moderate groups and clusters) compared to the local environment of the galaxy. At 0.65<z<0.96, <cit.> show that radio-AGN are more likely to be found in dense local environments of the large-scale structures (LSSs) identified as a part of the Observations of Redshift Evolution in the Large-Scale Environments (ORELSE) survey <cit.>. However, at 1<z<1.5, <cit.> show consistent X-ray and MIR-selected AGN fractions in the clusters and field. Some studies show significant decrement in the fraction of X-ray AGN with decreasing cluster-centric radius, from r_500 to central regions <cit.>. Additionally, at 0.65<z<1.28, <cit.> generally do not observe any strong connection between AGN activity and location within the LSSs in the ORELSE survey.
However, they observed a notable exception in two dynamically most unrelaxed LSSs (SG0023 and SC1604), where there was an overabundance of kinematic pairs, suggesting that merging activity could be a significant factor in AGN triggering. This highlights that at least some LSS environments might play a role in triggering AGN activity through merging processes at these redshifts. On the other hand, studies such as <cit.> do not see any significant correlations between AGN activity and cluster properties such as mass, X-ray luminosity, size, morphology, and redshift at 0.5<z<0.9 in their sample of AGN identified based on optical variability, X-ray emission, or mid-IR power-law spectral energy distributions (SEDs).
The complexity of the relation between the dense environments of galaxies and their AGN activity is in some ways heightened at high redshift. Several studies of individual protoclusters, such as <cit.> (X-ray AGN; z∼1.7), <cit.> (X-ray AGN; z∼2.156), <cit.> (AGN identification using emission lines in optical/near-IR spectra; z∼2.2), and <cit.> (X-ray AGN; z∼3.09), show enhanced AGN activity in protoclusters compared to the field. <cit.> show strong AGN activity in the brightest protocluster galaxy of Cl J0227-0421 at z∼3.3. However, other studies, such as <cit.> (X-ray AGN; z∼2.53), do not show this same trend. Similarly, <cit.> do not see any AGN activity in member galaxies of a massive protocluster at z∼4.57. A large sample of protostructures[We use the more agnostic term “protostructures” (instead of protoclusters) throughout the paper as we are unsure of the fate of the systems reported in this paper.] over a large redshift range is required to understand the overall nature of this trend. Furthermore, as highlighted in <cit.>, populations of various types of AGN should be used to study the environment-AGN connection. Each type of AGN provides unique insights into different aspects of AGN activity, revealing information about the central engine, obscuring structures, and the interaction of an AGN with its surroundings. A diverse approach encapsulating different types of AGN is key for capturing a broad spectrum of AGN phenomena across various redshifts. However, achieving this across a large redshift range requires deep, multi-wavelength observations, which are difficult to obtain due to the faintness of distant sources and the extensive observational resources needed. These requirements make studying the environment-AGN connection at high redshift challenging.
Utilizing the vast amount of deep multi-wavelength data in the Great Observatories Origins Deep Survey - South (GOODS-S) extragalactic field <cit.>, we study the connection of environment and AGN activity of galaxies at 2<z<4. We use a large sample of spectroscopically and photometrically selected galaxies spanning a wide range of environments, from field to protostructures, with the latter including five of the spectroscopically confirmed massive protostructures presented in <cit.>, as well as a large number of other massive protostructures. We use a novel environmental measurement technique allowing us to define a coeval sample of field galaxies along with the samples of galaxies in denser environments. For the AGN identification, we employ the AGN catalog presented in <cit.>, which uses deep multi-wavelength observations from X-ray to radio to provide AGN samples in nine different categories. These samples of galaxies and AGN enable us to conduct an unprecedented study on the environment-AGN connection of galaxies during an important epoch (2<z<4) of the cosmic history.
The layout of this paper is as follows: we describe the data and methods used for generating the galaxy and AGN samples in <ref>. In <ref>, we present our analysis on the change in AGN fraction with environment overdensity. We discuss our results and compare them with other studies in <ref>. Lastly, we summarize our results in <ref>.
§ DATA AND METHODS
In this section, we first provide the details of the observations and methods used to generate the galaxy sample (<ref>) and the AGN sample (<ref>). We then describe the spectral energy distribution (SED) fitting process used to estimate the stellar masses of galaxies in <ref> and the Voronoi Monte Carlo mapping method used to measure the local overdensity of galaxies in <ref>.
§.§ Galaxy sample
In this study, we utilize the extensive photometric (<ref>) and spectroscopic (<ref>) observations in the Extended Chandra Deep Field South (ECDFS) field <cit.>, which encompasses the GOODS-S field. The ECDFS field has deep multi-wavelength observations available across the electromagnetic spectrum <cit.>, enabling a wide range of extragalactic studies <cit.>.
§.§.§ Photometric catalog
To generate the initial galaxy sample, we select galaxies from the <cit.> photometric catalog for galaxies in the ECDFS field. This catalog includes deep optical medium-band photometry (18 bands) from the Subaru telescope, UBVRIz photometry from the Garching-Bonn Deep Survey <cit.> and the Multiwavelength Survey by Yale-Chile <cit.> survey, deep near-infrared (NIR) imaging in JHK from MUSYC <cit.>, as well as Spitzer Infrared Array Camera (IRAC) photometry obtained using the Spitzer IRAC/MUSYC Public Legacy Survey in ECDFS <cit.>. The multi-wavelength observations used for generating the galaxy catalogs, estimating the stellar masses of galaxies using spectral energy distribution, and utilizing the Voronoi Monte Carlo (VMC) mapping technique to calculate the overdensity values for galaxies are described in detail in <cit.>.
For this study, we only select galaxies whose IRAC1 (3.6μm band) or IRAC2 (4.5μm band) magnitudes are brighter than 24.8. This cutoff was determined by considering the 3σ limiting depth of the IRAC images within the ECDFS field, and it ensures a reliable detection at rest-frame optical wavelengths at these redshifts, which helps constrain the Balmer/4000Å break for galaxies at 2<z<5 (see Lemaux et al. 2018 for details). There are 55,147 unique objects in our sample satisfying this IRAC magnitude cut. Following the method of <cit.>, this IRAC cut effectively results in a sample with an 80% stellar mass completeness limit of log(M_*/M_⊙) ∼ 8.8-9.14, depending on redshift within the range 2.0<z<4.0.
§.§.§ Spectroscopy
For this study, we utilize a combination of publicly available and proprietary spectroscopic observations. We use a compilation of publicly available redshifts in the GOODS-S field compiled by one of the authors (NPH). This compilation includes spectroscopic redshifts from surveys such as VIsible Multi-Object Spectrograph <cit.> VLT Deep Survey <cit.>, the MOSFIRE Deep Evolution Field (MOSDEF) survey <cit.>, the 3D-HST survey <cit.>, a deep VIMOS survey of
the CANDELS CDFS and UDS fields <cit.>, along with numerous other surveys. Additionally, we also use spectroscopic redshifts from the VIMOS Ultra-Deep Survey <cit.>. These surveys predominantly focus on star-forming galaxies (SFGs) with ∼ 0.3- 3L^*_UV luminosity, where L^*_UV is the characteristic UV luminosity at a given redshift <cit.>. These galaxies are generally representative of SFGs at their given redshifts (see Lemaux et al. 2022 for details).
Along with the publicly available spectroscopic redshifts, we also utilize our proprietary spectroscopic redshifts obtained using the Keck/DEep Imaging Multi-Object Spectrograph (DEIMOS) and the Keck/Multi-Object Spectrometer for Infra-Red Exploration (MOSFIRE) as a part of the Charting Cluster Construction with VUDS and ORELSE (C3VO) survey <cit.>. These observations consist of a total of 29 and 26 secure (i.e., reliability of ≳95%) spectroscopic redshifts obtained from five MOSFIRE masks and two DEIMOS masks, respectively. Most of these redshifts are in the range of 2.5<z<4.0. These masks were designed to target a suspected protostructure at z∼3.5 <cit.>, which is now spectroscopically confirmed <cit.>. The details of these MOSFIRE and DEIMOS observations, the data reduction, and the procedure for redshift estimation are described in <cit.>.
We combine these C3VO spectroscopic observations with the publicly available spectroscopic redshifts described above. To assign a spectroscopic redshift to a given photometric object, we perform a nearest-neighbor matching within an aperture of 1 centered on the coordinates of each source in the spectral catalog. For cases where multiple spectroscopic redshifts exist for a specific photometric object, we choose the most reliable z_spec, considering factors such as the quality of the redshift, the type of instrument used, the integration time of the survey, and the photometric redshift. Out of the 55,147 galaxies satisfying the IRAC cut, 9111 have a secure spectroscopic redshift.
§.§ AGN sample
In this study, we utilize the extensive AGN catalog provided by <cit.>, which presents a comprehensive census of AGN identified by multi-wavelength observations (<ref>) from X-ray to radio in the GOODS-S/HUDF region. <cit.> probe diverse populations of obscured as well as unobscured AGN by selecting AGN using nine different criteria (<ref>) based on X-ray properties, ultraviolet to mid-infrared SEDs, optical spectral features, mid-infrared colors, radio loudness and spectral slope, and AGN variability. Using these techniques, <cit.> generated a comprehensive sample of 901 AGN within the ∼ 170 arcmin^2 3D-HST GOODS-S footprint, which significantly expanded the number of known AGN in the region. Their analysis shows the complexity of the AGN population and indicates that no single selection method can exhaustively identify all types of AGN within a field. Hence, their extensive AGN sample identified using various selection methods is invaluable for our study, enabling our detailed analyses of how the overall AGN population as well as various types of AGN get affected by dense environments.
We note that while our density mapping (see <ref>) covers the larger ECDFS region compared to GOODS-S, the <cit.> AGN sample is only available for the GOODS-S region within ECDFS. Therefore, for the AGN fraction analysis, we select galaxies exclusively from the GOODS-S region. However, the overdensity values for these galaxies in the GOODS-S field are still sourced from our density maps covering the entire ECDFS field.
§.§.§ Observations used for generating the AGN sample
For generating their AGN sample, <cit.> compiled a sample of 1886 unique GOODS-S 3D-HST (v4.1) <cit.> sources with multi-wavelength counterparts identified as (i) “radio-detected” sources in VLA 3 GHz images, (ii) “X-ray detected” sources from the Chandra 7 Ms X-ray source catalog <cit.>, (iii) “mid-IR detected” sources from the Spitzer MIPS 24 μm <cit.> or IRS 16 μm images or sources <cit.> with AGN-like IRAC colors in the mid-IR, or (iv) time variable sources in the HST or Chandra images. For the radio sources, the optical to mid-IR counterparts are matched within a radius of 0.5”. For the X-ray sources, they used a matching radius of 2.0” to identify optical to mid-IR counterparts. The <cit.> catalog also includes redshifts of AGN sourced from the 3D-HST catalog, with spectroscopic redshifts for some objects and photometric redshifts for others. The redshift distribution of all AGN listed in the <cit.> catalog, as well as of those AGN that have a match in our C3VO catalog (see <ref>), is shown in Figure <ref>.
§.§.§ AGN Selection criteria
To probe populations of different types of AGN, <cit.> apply various selection criteria on the multi-wavelength source catalog described above. The nine categories of AGN presented in <cit.>, along with their selection criteria as well as the average number of AGN in our sample based on our statistical framework (see <ref>) in the redshift range of 2.0<z<4.0, are briefly described below:
* mid-IR-SED: An AGN in this category has a notable excess emission in the near-to-mid IR (λ∼ 2-8μm) from hot- to warm-dust. There are 288 mid-IR-SED AGN in the <cit.> catalog. <cit.> conducted SED fitting of galaxies using the SED fitting tool Prospector <cit.>, utilizing the Flexible Stellar Population Synthesis (FSPS) stellar model from Prospector along with their own (semi-)empirical models of dust emission and AGN continuum emission <cit.>. To identify AGN based on the SED analysis, they require that the best-fit SED contains a significant AGN component with L_AGN>10^8 L_⊙. These fits are visually inspected to discard cases for which the fitting solutions are degenerate, with multiple peaks of the L_AGN posterior, as well as cases for which the photometric constraints are too limited to determine whether an AGN component exists. For objects whose classification is unclear, or for which L_AGN>10^8 L_⊙ but the visual inspection is borderline or marginal, they conduct the SED fitting both with and without an AGN component; if the inclusion of the AGN component reduces the chi-square of the fit by a factor of 2.5 or more, the object is categorized as a mid-infrared SED AGN. On average, there are ∼27 mid-IR-SED AGN in our sample.
* mid-IR-COLOR: These objects have mid-IR colors that are dominated by AGN warm dust emission. They are selected using the criterion of log(S_5.8/S_3.6)>0.08 and log(S_8.0/S_4.5)>0.15 from <cit.>. <cit.> list 104 mid-IR-COLOR AGN in their catalog. Our AGN sample contains an average of ∼ 13 mid-IR-COLOR AGN.
* X-ray-lum: These objects have intrinsic X-ray luminosity higher than expected for stellar processes in galaxies. These AGN are identified using L_X,int>10^42.5erg s^-1 <cit.>. For cases with slightly lower intrinsic X-ray luminosity cut (L_X,int>10^42.0erg s^-1), if their SED fittings suggest low star formation rates (≲ 10 M_⊙ yr^-1), they are selected as an AGN <cit.>. The observed X-ray luminosity distribution with redshift is shown for all <cit.> AGN that have Lx_obs > 0 in Figure <ref>. The figure also shows this distribution for such sources that are also matched to the C3VO (MUSYC) sources within a projected separation of 0.5”. <cit.> identified 321 AGN satisfying their X-ray-lum criteria. On average, there are ∼ 41 AGN identified based on X-ray-lum in our AGN sample.
* X2R: These objects have an X-ray (0.5–7 keV) to radio (3 GHz) luminosity ratio higher than expected for stellar processes in a galaxy. These AGN are identified using the criterion L_X,int[erg/s]/L_3GHz[W/Hz] > 8×10^18. In their catalog, <cit.> report 588 X2R AGN. Our AGN sample consists of an average of ∼51 X2R AGN.
* radio-loud: These objects are radio-loud AGN with excess emission in the radio band compared with the prediction of templates of normal star-forming galaxies (SFGs). They are selected using q_24,obs = log(S_24μ m,obs/S_1.4GHz,obs), q_24,obs<q_24,temp. Here, S_24μ m,obs and S_1.4GHz,obs are the observed MIPS 24μm flux and the radio flux density at 1.4GHz, respectively. S_1.4GHz,obs is computed by extrapolating the 3 GHz flux assuming that the radio spectrum can be described as a power law with α = -0.7. q_24,obs is computed following <cit.>. q_24,obs is then compared to q_24,temp, which is from the radio–infrared correlations based on the <cit.> SFG templates at the appropriate redshifts. An object that is 0.5 dex below the midpoint of the radio–infrared relation with more than 2σ significance is classified as an AGN. We note that as they compare radio emission with emission at 24 μm, their definition of radio-loud AGN differs from its canonical definition in the literature, where the radio-emission is compared with the optical brightness <cit.>. The selection criterion of <cit.> is very conservative, which might have resulted in many AGN that would have satisfied the traditional radio-AGN criteria <cit.> being missed in the <cit.> AGN sample. <cit.> find 43 radio-loud AGN. Our AGN sample contains, on average, ∼ 4 radio-loud AGN.
* radio-slope: Assuming a power-law spectrum (f_ν∝ν^α) between the observed 3GHz and 6GHz bands, <cit.> calculate the radio slope of any given object. The object with a slope index α of more than -0.5 with a 2σ significance is considered to be a flat-spectrum radio source, and identified as an AGN. There are 18 AGN in the <cit.> catalog identified using this criteria. Our AGN sample does not contain any AGN identified using this radio-slope criteria.
* opt-spectroscopy: This type of AGN shows optical spectra with hydrogen broad emission lines or hard line ratios usually caused by AGN-driven gas ionization. These AGN are identified using either of the two criteria: (i) narrow-line AGN presented in <cit.> identified using “Baldwin, Phillips & Terlevich” (BPT) <cit.> criteria or (ii) broadline AGN presented in <cit.> identified based on FWHM(Hα)> 1000-2000 km/s. <cit.> select 22 radio-detected and 26 radio-undetected AGN by cross-matching these AGN with the 3D-HST sample. Hence, they report a total of 48 AGN selected based on opt-spectroscopy-based criteria. Our AGN sample contains, on average, ∼ 9 AGN selected from this category.
* opt-SED:
These objects are classified as an optical SED AGN as they have L_AGN>10^8 L_⊙ for their best-fit SED as well as a significant UV/optical excess above the stellar component which is attributed to an optically blue AGN component. <cit.> report 99 opt-SED AGN in their catalog. On average, our AGN sample contains ∼ 3 opt-SED AGN.
* variable: These objects exhibit a significant photometric variation at any wavelength within either the X-ray or optical wavebands. They are identified based on their higher median absolute deviation <cit.>. <cit.> include variable AGN identified in optical by <cit.> and in X-ray by <cit.> and <cit.>. There are 111 variable AGN in the <cit.> catalog, and our AGN sample contains, on average, ∼7 variable AGN.
All 901 AGN reported in <cit.> are unique systems, i.e., they each satisfy at least one of the nine AGN selection criteria. We note that the AGN categories are not mutually exclusive, i.e., a given system can be identified as an AGN in multiple categories. In their Figure 1, <cit.> show the overlap of sources in the various AGN categories. Additionally, it is worth noting that type II AGN are not specifically selected in their catalog. A schematic implementation of a few of the selection cuts listed above is sketched below.
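As a concrete illustration, the following Python sketch implements the mid-IR color, X-ray luminosity, and X2R criteria using the thresholds quoted above. It is only a schematic re-implementation with assumed input column names; the actual <cit.> selection involves additional steps (SED fitting, visual inspection) that are not reproduced here.

import numpy as np

def flag_agn_candidates(s36, s45, s58, s80, lx_int, l3ghz, sfr):
    """Schematic (and deliberately incomplete) AGN selection flags.

    s36..s80 : IRAC 3.6/4.5/5.8/8.0 micron flux densities (any common unit)
    lx_int   : intrinsic 0.5-7 keV luminosity [erg/s]
    l3ghz    : 3 GHz radio luminosity [W/Hz]
    sfr      : SED-based star-formation rate [Msun/yr]
    Returns a dict of boolean arrays, one per criterion.
    """
    s36, s45, s58, s80, lx_int, l3ghz, sfr = map(
        np.asarray, (s36, s45, s58, s80, lx_int, l3ghz, sfr))
    flags = {}
    # mid-IR color cuts quoted above
    flags['mid-IR-color'] = (np.log10(s58 / s36) > 0.08) & (np.log10(s80 / s45) > 0.15)
    # X-ray luminosity cut, with the lower-luminosity / low-SFR extension
    flags['X-ray-lum'] = (lx_int > 10**42.5) | ((lx_int > 10**42.0) & (sfr < 10.0))
    # X-ray-to-radio luminosity ratio
    with np.errstate(divide='ignore', invalid='ignore'):
        flags['X2R'] = (lx_int / l3ghz) > 8e18
    return flags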
§.§.§ AGN matching
To identify AGN within our galaxy sample, we cross-match the 3D-HST coordinates of the optical/NIR AGN counterparts provided in the <cit.> catalog with the coordinates of galaxies in our galaxy sample, using a nearest-neighbor matching radius of 0.5”. Multiple matches were found in only one case, for which we selected the match with the most similar redshifts in the C3VO and <cit.> catalogs. We note that we also checked for the presence of a bulk astrometric offset between our galaxy coordinates from <cit.> and the coordinates of the optical/NIR counterparts of <cit.> AGN selected from the 3D-HST catalog, and found no appreciable offset. Out of the 901 AGN reported in the <cit.> catalog, 591 AGN are matched to C3VO (MUSYC) sources within a projected separation of 0.5”. The redshift distribution of all <cit.> AGN and of those that are matched with C3VO sources is shown in Figure <ref>. For approximately 25% of the sources that do not have a match in C3VO, the lack of matching seems to be caused by strong deblending of C3VO sources into multiple sources in the 3D-HST catalog. Most of the remaining sources are missed due to the difference in the depth of observations between the two catalogs, the C3VO catalog adopted here being the shallower of the two. In spite of its shallowness, we use the C3VO catalog instead of the deeper catalogs, which are only available for parts of the ECDFS field, to prioritize uniformity across the ECDFS field in our VMC mapping process.
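The cross-match itself can be reproduced with standard tools; the sketch below shows a minimal nearest-neighbor match within 0.5” using astropy, with coordinate arrays as assumed inputs. It illustrates the procedure rather than our exact matching code.

import numpy as np
from astropy.coordinates import SkyCoord
import astropy.units as u

def match_agn_to_galaxies(agn_ra, agn_dec, gal_ra, gal_dec, max_sep_arcsec=0.5):
    """Nearest-neighbor match of AGN counterparts to galaxy-catalog positions.

    Coordinates are in degrees; returns (agn_index, galaxy_index) pairs whose
    on-sky separation is smaller than max_sep_arcsec."""
    agn = SkyCoord(ra=np.asarray(agn_ra) * u.deg, dec=np.asarray(agn_dec) * u.deg)
    gal = SkyCoord(ra=np.asarray(gal_ra) * u.deg, dec=np.asarray(gal_dec) * u.deg)
    idx, d2d, _ = agn.match_to_catalog_sky(gal)      # nearest galaxy for each AGN
    good = d2d < max_sep_arcsec * u.arcsec
    return np.flatnonzero(good), idx[good]

When several AGN match the same galaxy (or vice versa), a tie-break on redshift similarity, as done above for the single multiply-matched source, can be applied on top of this.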
§.§ Estimation of stellar mass using spectral energy distribution fitting
We estimate the properties of galaxies, such as their stellar mass, by utilizing the SED fitting code LePhare <cit.> for the photometry described above included in the <cit.> catalog. For the SED fitting process, we fix the redshift of the galaxy to its spectroscopic redshift z_spec (if available) or photometric redshift z_phot. The methodology used here is the same as that used in <cit.>. Briefly, we fit a range of <cit.> synthetic stellar population models to the observed photometry (magnitudes) in different wavebands. These population models were created based on a <cit.> initial mass function (IMF), a range of dust contents and stellar-phase metallicities, and exponentially declining and delayed star-formation histories (SFHs).
The output of the SED fitting process in LePhare includes the marginalized probability distribution function (PDF) of various physical parameters, such as stellar mass. Stellar masses of galaxies were used only as a test for examining the impact of a stellar mass-limited sample on AGN enhancement, and were not used for the final AGN enhancement results presented in this paper.
§.§ Measurement of local overdensity using VMC-mapping
We use a metric of local overdensity σ_δ to measure the environment of galaxies. We calculate this metric using a modified version of the Voronoi tessellation used in many studies <cit.>. The version of the VMC mapping used for this study is described in detail in <cit.>. This method partitions an area into discrete regions called Voronoi cells. A Voronoi cell consists of all points in the area that are closer to one particular predefined point (galaxy) than to any other predefined points. The edges between these cells are at the same distance from the nearest two or more galaxies, ensuring that each cell is uniquely associated with the closest galaxy. Furthermore, the sizes of these cells vary based on the proximity of galaxies to one another. This variation in cell size effectively captures the variability in galaxy distribution and serves as a robust indicator of local galaxy density.
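To illustrate the idea behind this density estimator, the following toy Python sketch builds a two-dimensional Voronoi tessellation with scipy and converts cell areas into a crude overdensity (density relative to the median density of all bounded cells). It omits the Monte Carlo realizations, redshift slicing, grid evaluation, and edge treatment of the full VMC pipeline, and is provided purely as a schematic.

import numpy as np
from scipy.spatial import Voronoi

def voronoi_overdensity(x, y):
    """Toy 2D Voronoi density estimate: density = 1/cell_area, overdensity
    measured relative to the median density of all bounded cells. Unbounded
    cells at the survey edge are returned as NaN."""
    points = np.column_stack([np.asarray(x), np.asarray(y)])
    vor = Voronoi(points)
    density = np.full(len(points), np.nan)
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:
            continue                      # unbounded cell at the edge
        poly = vor.vertices[region]
        # shoelace formula for the polygon area
        area = 0.5 * abs(np.dot(poly[:, 0], np.roll(poly[:, 1], 1)) -
                         np.dot(poly[:, 1], np.roll(poly[:, 0], 1)))
        density[i] = 1.0 / area
    return density / np.nanmedian(density) - 1.0     # toy analogue of delta_gal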
This method cannot be directly applied to the redshift dimension due to uncertainties caused by peculiar velocities in spectroscopically confirmed galaxies and relatively large uncertainties in the photometric redshifts. These uncertainties complicate the spatial analysis as they affect the precise location of galaxies in the redshift space. To mitigate these complications, we divide the volume into redshift slices, and apply the VMC method in the projected space of each redshift slice. The redshift slice widths are chosen based on the approximate sizes of protoclusters in simulations <cit.>, with an additional margin to account for peculiar motion.
Our approach incorporates both spectroscopic and photometric redshifts, adjusting for their respective uncertainties to determine the most suitable redshifts for the various Monte Carlo iterations as described in detail in <cit.>. We note that we do not use the spectroscopic redshifts of galaxies as absolute truth; instead, we treat them in a probabilistic manner. Similarly, for the probabilistic consideration of the photometric redshifts, we use the median redshift and the 16^th and 84^th percentiles of the redshift PDF. Our treatment of the redshifts of galaxies based on a Monte Carlo technique is identical to that used in <cit.> and described in detail in the Appendix of <cit.>. Briefly, we generate a suite of 100 Monte Carlo (MC) realizations of z_gal. If the galaxy does not have a spectroscopic redshift, then z_gal is chosen from an asymmetric Gaussian distribution of z_phot based on the median value of the photometric redshift and its 1σ errors (16^th and 84^th percentiles of the photometric redshift PDF). If the galaxy has a spectroscopic redshift, then the selection of z_spec as z_gal is based on reliability, i.e., the quality flag of the z_spec. Galaxies with a spectroscopic redshift quality flag of 3 or 4 had their z_spec selected as z_gal for ∼99.3% of all 100 MC iterations; for the rest of the iterations, their z_gal was drawn from their asymmetric Gaussian distribution of z_phot. Similarly, for galaxies with a quality flag of 2 or 9, z_spec was used as z_gal for ∼70% of the iterations, and for the rest of the iterations, z_gal was drawn from their asymmetric Gaussian distribution of z_phot. All 100 MC iterations were utilized for the AGN fraction analysis as described in Section <ref>.
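For concreteness, a single MC draw of z_gal can be sketched as follows; the adoption probabilities follow the percentages quoted above, while the asymmetric photo-z draw is approximated by a simple two-sided Gaussian and is only a stand-in for the full treatment in <cit.>.

import numpy as np

rng = np.random.default_rng()

def draw_zgal(z_spec, q_flag, z_phot_med, z_phot_lo, z_phot_hi):
    """Draw one Monte Carlo realization of a galaxy's redshift.

    z_spec   : spectroscopic redshift, or np.nan if unavailable
    q_flag   : spectroscopic quality flag (3/4 secure, 2/9 less reliable)
    z_phot_* : median and 16th/84th percentiles of the photo-z PDF
    """
    if np.isfinite(z_spec):
        p_use_spec = 0.993 if q_flag in (3, 4) else 0.70 if q_flag in (2, 9) else 0.0
        if rng.random() < p_use_spec:
            return z_spec
    # two-sided Gaussian draw around the photo-z median (simplified)
    sigma_lo, sigma_hi = z_phot_med - z_phot_lo, z_phot_hi - z_phot_med
    draw = rng.normal()
    return z_phot_med + draw * (sigma_hi if draw > 0 else sigma_lo)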
The VMC technique yields measurements of galaxy overdensity (δ_gal) and the statistical significance of these overdensities (σ_δ) across a three-dimensional grid aligned with the right ascension (RA), declination (DEC), and redshift (z) axes. For additional information on the computation of these metrics, refer to <cit.> and <cit.>. The overdensity value assigned to any given galaxy corresponds to the σ_δ of the Voronoi cell nearest to the galaxy's coordinates and its z_gal. For the 120,525 cases (i.e., on average ∼1205 objects per MC iteration) where 2.0<z_gal<4.0, the redshift vs. σ_δ distribution over all 100 MC iterations is shown in Figure <ref>. As the figure shows, we divide the galaxy sample into three overdensity bins of (i) σ_δ<2 (coeval field), (ii) 2<σ_δ<5 (intermediate overdensity), and (iii) σ_δ>5 (highest overdensity peaks) for our analysis (see details in <ref>).
Using the VMC maps, a protostructure is defined as a contiguous envelope of VMC cells, where each cell has an overdensity of more than 2.5× the RMS of the density distribution in a given slice, as measured by a 5^th-order polynomial fit to σ vs. z. In <cit.>, we presented six spectroscopically confirmed massive (M_tot≥10^14.8M_⊙) protostructures at 2.5<z<4.5. In this study, we exclude the redshift range of 4<z<4.5 due to the limiting depth of the multi-wavelength observations significantly affecting the number of AGN (see Figure <ref> and Figure <ref>) in this redshift range. Therefore, while the first five protostructures Drishti (M_tot∼10^14.9M_⊙, z∼2.67), Surabhi (M_tot∼10^14.8M_⊙, z∼2.80), Shrawan (M_tot∼10^15.1M_⊙, z∼3.3), Smruti (M_tot∼10^15.1M_⊙, z∼3.47), and Sparsh (M_tot∼10^14.8M_⊙, z∼3.70) from <cit.> are included in this study, the highest redshift structure Ruchi (M_tot∼10^15.4M_⊙) at z∼4.14 is not included. The redshifts of these five massive spectroscopically confirmed protostructures are also shown in Figure <ref>.
Furthermore, for this study, we extend the lower bound of the redshift to z=2, selecting the redshift range of 2<z<4 for our analysis. Below this redshift bound (z<2), VUDS and C3VO do not effectively target galaxies. By extending the lower redshift bound from z=2.5 used in <cit.> to z=2.0 in this study, we considerably increase the galaxy sample and the corresponding AGN sample.
The redshift range of 2 < z < 2.5 was not considered in <cit.> because that study concentrated on six of the most massive protostructures within the redshift range 2.5 < z < 4.5. These six protostructures are, thus, not coincidentally, the most massive protostructures in the extended range of 2.0 < z < 4.5, surpassing all protostructures in the 2 < z < 2.5 range in terms of mass. We show the total mass distribution of all large structures (M_tot>10^12M_⊙) in the three redshift bins of 2.0<z<2.5, 2.5<z<3.0, and 3.0<z<4.0 in Figure <ref>. These structures are identified based on a search consistent with the process described in <cit.>. There are considerably more highly massive (M_tot>10^14M_⊙) structures at higher redshift compared to in the lowest redshift bin. We present the most massive structure in the redshift range of 2.0<z<2.5 below.
§.§.§ Newly detected protostructure at z∼2.45
The most massive protostructure in the redshift range of 2.0<z<2.5 is at z∼2.45, which is spread over 2.40<z<2.48. The 3D distribution of the local overdensity distribution in this protostructure along with its two massive (M_tot>10^13M_⊙) overdensity peaks is shown in Figure <ref>. It has a total mass of M_tot=10^14.7M_⊙ and volume of 7387cMpc^3.
To summarize the galaxy sample selection and its property estimation process described in this entire section, the final galaxy sample consists of all 100 MC iterations as described in <ref>. For each galaxy in a given MC iteration, we estimate the stellar mass using SED fitting for its redshift z_gal, and the overdensity σ_δ value using the VMC maps and the location of galaxy (RA, Dec, z_gal). Consequently, the redshift z_gal, and thus the stellar mass and σ_δ of a given galaxy may vary across different MC iterations. We only select galaxies with IRAC1 or IRAC2 magnitudes brighter than 24.8 and 2<z_gal<4. Using these criteria, we generate a sample of 5,514,700 objects (i.e., × 100 Monte Carlo iterations for 55147 unique objects). Out of these, there are 120,525 cases where 2 < z_gal < 4 over all 100 iterations. The z_gal and σ_δ distribution of the 120,525 cases (i.e., average ∼1205 objects per MC iteration) are shown in Figure <ref>.
§ AGN FRACTION ANALYSIS
We define the AGN fraction as the ratio of the number of AGN to the total number of galaxies. We calculate the AGN fraction in each of the 100 MC iterations and use the median AGN fraction value of all iterations as the final AGN fraction. For the errors on the AGN fraction, we use the 16^th and 84^th percentiles of the AGN fraction over all MC iterations, added in quadrature with the error on the AGN fraction computed by assuming binomial statistics <cit.>.
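Schematically, the combination of the MC scatter and the binomial term can be written as below. Note that the binomial error here uses a simple normal approximation purely for illustration, whereas the actual analysis follows <cit.>.

import numpy as np

def agn_fraction_stats(n_agn_per_iter, n_gal_per_iter):
    """Combine the AGN fraction over Monte Carlo iterations.

    n_agn_per_iter, n_gal_per_iter : arrays of length N_iter (here 100)
    Returns (median fraction, lower error, upper error); the MC scatter
    (16th/84th percentiles) is added in quadrature with a normal-approximation
    binomial error.
    """
    n_agn = np.asarray(n_agn_per_iter, dtype=float)
    n_gal = np.asarray(n_gal_per_iter, dtype=float)
    frac = n_agn / n_gal
    f_med = np.median(frac)
    lo_mc = f_med - np.percentile(frac, 16)
    hi_mc = np.percentile(frac, 84) - f_med
    binom = np.sqrt(f_med * (1.0 - f_med) / np.median(n_gal))
    return f_med, np.hypot(lo_mc, binom), np.hypot(hi_mc, binom)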
§.§ Enhancement in AGN fraction with local environment
For our entire sample (2.0<z<4.0), we show the change in AGN fraction (in %) of galaxies with local overdensity (σ_δ) in Figure <ref>. The AGN fraction increases from 1.9^+0.4_-0.3% for galaxies in the coeval field (σ_δ<2) to 10.9^+3.6_-2.3% for galaxies in highly overdense peaks (σ_δ>5), a clear trend of increasing AGN fraction with increasing overdensity of galaxies. This difference in AGN fraction is at the ∼3.9σ level.
To check the sensitivity of our results to the stellar mass limit of galaxies in our redshift range, we conducted a test to see if the AGN fraction results vary when using a redshift-based 80% stellar-mass completeness limited galaxy sample instead of our entire sample. The trend described above remains unchanged and we do not see a significant difference in the results for this stellar-mass limited sample as shown in Appendix <ref>.
§.§.§ AGN fraction of various AGN types in different environments
We show the values of the AGN fraction (in %) corresponding to the different types of AGN in these local overdensity bins in Figure <ref>. There are five categories of AGN that have, on average, more than eight AGN across all MC iterations: mid-IR-SED, mid-IR-color, X-ray-lum, X2R, and opt-spectroscopy. For all five of these categories, there is a clear trend of increasing AGN fraction with increasing overdensity, as shown in the figure. Among these categories, the X2R category shows the highest AGN fraction in all environment overdensity bins. For the X2R category, the AGN fraction increases from 1.0^+0.5_-0.3% to 8.2^+3.4_-2.0% (a ∼3.5σ difference) as the local overdensity increases from σ_δ<2.0 to σ_δ>5.0.
§.§.§ AGN fraction as a function of stellar mass and environment
To study the change in AGN fraction with environment for different stellar masses of the AGN host galaxies, we divide our entire galaxy sample into two stellar mass bins: M_*>10^10.2 M_⊙ and M_*≤10^10.2 M_⊙. The median stellar masses in these two bins differ by approximately an order of magnitude (∼10^9.5M_⊙ vs. ∼10^10.4M_⊙); however, the median stellar mass in different environments is similar (difference ≤ 0.1 dex) within each of the two mass bins. The change in AGN fraction with local overdensity for the two mass bins is shown in Figure <ref>. For the higher stellar mass bin (M_*>10^10.2 M_⊙), the AGN fraction increases from 10.3^+3.2_-2.2% for σ_δ<2.0 to 42.1^+11.0_-10.4% (a ∼2.9σ increment) for σ_δ>5.0. Similarly, for the lower stellar mass bin (M_*≤10^10.2 M_⊙), the AGN fraction increases from 1.1^+0.3_-0.2% for σ_δ<2.0 to 4.0^+3.0_-1.2% (a ∼2.3σ increment) for σ_δ>5.0. Notably, at fixed local environment, galaxies in the higher stellar mass bin have an AGN fraction that is ∼10× higher than that of their counterparts in the lower stellar mass bin in all three environment bins. This increment (∼10×) in the AGN fraction with a 10× increase in stellar mass at a given local overdensity is larger than the increment (∼4×) in the AGN fraction with the change in local overdensity (σ_δ<2.0 to σ_δ>5.0) at a given stellar mass of the AGN host galaxies.
§.§.§ AGN fraction as a function of redshift and environment
We further divide our entire sample into three redshift bins of 2.0<z<2.5, 2.5<z<3.0, and 3.0<z<4.0. We show the AGN fraction for different environments in these three redshift bins in Figure <ref>. For 3.0<z<4.0, the AGN fraction increases from 1.1^+0.7_-0.4% for σ_δ<2.0 to 8.7^+4.8_-2.3% (a ∼3.2σ difference) for σ_δ>5.0. Except for the highest overdensity bin in the lowest redshift bin (σ_δ>5 and 2.0<z<2.5), all other points show a clear and continuous trend of increasing AGN fraction with increasing overdensity in all three redshift bins. The AGN fraction in the highest redshift bin is lower than in the other two redshift bins, which is likely due to the considerable variation in the completeness of L_X,obs (Figure <ref>) and of the other multi-wavelength observations used for AGN identification. We also note that the intrinsic AGN fraction for different AGN types can vary significantly with redshift, so our results at different redshifts are affected by both of these factors. However, even considering these variations, we still see higher AGN fractions in denser local environments compared to the coeval fields at all redshifts.
§.§ Enhancement in AGN fraction with global environment
In addition to local environment metrics, considering the global environment gives a complementary view of the potentially environment-related factors influencing AGN activity. The global environment includes large-scale structures like clusters, filaments, and voids, which can significantly affect galaxy properties and evolution. While local overdensity metrics capture the immediate surroundings of a galaxy, global overdensity metrics provide insights into its larger-scale environment. The global environment can reveal the influence of large-scale gravitational potential wells and other dynamical processes that might not be apparent from local densities alone. This is particularly important for understanding the role of massive protostructures in galaxy evolution. Furthermore, local and global environments can influence galaxies in different ways. While local density likely correlates with immediate processes like galaxy mergers and interactions, the global environment can impact broader phenomena such as gas accretion, stripping, and the infall of galaxies into larger structures. By combining both local and global measures, we can identify whether certain trends in AGN activity are consistent across different scales or whether they exhibit scale-dependent behavior. This helps in understanding the multi-scale nature of environmental effects on galaxies.
To study the variation in the AGN fraction of galaxies with their global environment, we compute an environment metric based on the location of each galaxy with respect to its closest massive (M_tot>10^12.8M_⊙) 5σ_δ peak. We define this metric as R_proj,norm=R_proj/R_eff, where R_proj is the projected distance of the galaxy from its closest 5σ_δ peak and R_eff is the effective radius (R_eff = (R_x+R_y)/2) of the corresponding 5σ_δ peak. For each galaxy in any MC iteration, we first identify all 5σ_δ peaks in the ECDFS field for which the galaxy's redshift falls within the redshift range of the peak, allowing a buffer of ±0.05 (i.e., z_ext±0.05). Additionally, we require that the galaxy is within a projected separation (R_proj) of less than 10 cMpc from the peak. We then select the peak with the lowest R_proj,norm value for the given galaxy in order to study the relation between the AGN fraction of galaxies and their global environment. Galaxies that have an associated peak identified using this method are considered to be in dense environments. Galaxies at 2<z<4 that either (i) do not have an associated peak, i.e., have z_gal outside z_ext±0.05 of all peaks, or (ii) have R_proj of more than 10 cMpc, make up a coeval field sample used for comparison. For this analysis, we only consider massive (M_tot>10^12.8M_⊙) 5σ peaks, as they are likely associated with massive protostructures, providing probes of denser large-scale global environments.
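The assignment of R_proj,norm can be sketched as follows; the projected separations, peak redshift extents, effective radii, and masses are assumed to be precomputed, and the function is an illustrative simplification of the actual peak-matching procedure.

import numpy as np

def nearest_peak_rnorm(gal_z, rproj_to_peaks, peak_zmin, peak_zmax,
                       peak_reff, peak_mass, dz_buffer=0.05,
                       rproj_max=10.0, log_mass_min=12.8):
    """Return R_proj,norm = R_proj / R_eff with respect to the nearest
    qualifying 5-sigma peak, or np.nan if the galaxy has no associated
    peak (and hence falls in the coeval field sample).

    rproj_to_peaks : projected separations [cMpc] from this galaxy to all peaks
    peak_zmin/zmax : redshift extent (z_ext) of each peak
    peak_reff      : effective radii (R_x + R_y)/2 of the peaks [cMpc]
    peak_mass      : total peak masses [Msun]
    """
    rproj = np.asarray(rproj_to_peaks, dtype=float)
    zmin = np.asarray(peak_zmin, dtype=float)
    zmax = np.asarray(peak_zmax, dtype=float)
    reff = np.asarray(peak_reff, dtype=float)
    mass = np.asarray(peak_mass, dtype=float)

    qualifies = ((mass > 10**log_mass_min) &
                 (gal_z > zmin - dz_buffer) & (gal_z < zmax + dz_buffer) &
                 (rproj < rproj_max))
    if not np.any(qualifies):
        return np.nan
    return np.min(rproj[qualifies] / reff[qualifies])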
The AGN fraction for galaxies with an associated peak, as well as for galaxies in this coeval field sample, is shown as a function of R_proj,norm in Figure <ref>. For the galaxies closest to the overdense peaks (R_proj,norm<1.0), i.e., the highest global overdensity, we observe a higher AGN fraction of 5.0^+1.0_-0.8% (a ∼2.0σ increment) compared to the AGN fraction (2.7^+0.8_-0.6%) of the coeval field galaxies. Hence, similar to the local environment result, we see higher AGN activity for galaxies in denser environments compared to coeval field galaxies. Our results are not sensitive to the threshold (R_proj,norm<1) used for the lowest bin, the mass threshold (M_tot>10^12.8M_⊙), or the buffer (z_ext±0.05) around the 5σ peaks. We note that the value of the AGN fraction increases further (though not statistically significantly) for the lowest R_proj,norm bin as we decrease the upper limit from R_proj,norm=1.
The AGN fraction of 5.0^+1.0_-0.8% for the highest global overdensity (R_proj,norm<1.0) is lower than the AGN fraction of 10.9^+3.6_-2.3% for the highest local overdensity (σ_δ>5.0) bin shown in Figure <ref>. Furthermore, the AGN fraction (2.7^+0.8_-0.6%) of the coeval field defined based on the global environment as described above is higher than the AGN fraction (1.9^+0.4_-0.3%) corresponding to the coeval field defined based on local overdensity (σ_δ<2.0). In other words, the increment in the AGN fraction with changes in the local overdensity of galaxies is larger than that with changes in their global overdensity. Part of this difference is caused by the differences between the overdense galaxy and coeval field samples selected based on global overdensity and those selected based on local overdensity. Because we adopt a redshift cylinder around the peaks to match galaxies with peaks for the global environment measure, galaxies that have a relatively low local density can end up with a relatively high global overdensity. This can dilute the contrast in AGN fraction with overdensity, lowering the AGN fraction for the highest global overdensity sample. Similarly, some galaxies that reside in highly locally overdense regions may not live in a globally rich environment. Thus, while the global environment provides a complementary view to the local environment, the considerable uncertainties associated with measuring and characterizing the global environment prevent us from drawing strong conclusions.
§ DISCUSSION
To study the role of environment on the AGN activity of galaxies at high redshift, we conduct an analysis showing the AGN fraction of galaxies in a range of environments - from coeval fields to highly dense protostructure peaks at 2<z<4. For the combined AGN fraction of all nine AGN types, we see a clear trend of increasing AGN fraction with increasing local overdensity of galaxies. The trend is also present for all five types of AGN that have on average more than eight AGN across all MC iterations, including AGN identified using mid-IR-SED, mid-IR-color, X-ray luminosity, X-ray to radio luminosity ratio, or optical spectroscopy.
Our trend of increasing AGN fraction with increasing local overdensity of galaxies is in contrast with the local Universe (z∼0), in which studies show lower AGN fractions of luminous AGN in cluster galaxies compared to coeval field galaxies <cit.>. Our trend is also considerably different from studies at 1<z<1.5, which show similar X-ray and MIR-selected AGN fractions in cluster galaxies and field galaxies <cit.>. Our results, combined with these results, suggest a reversal of the AGN-environment relation at high redshift of z>2.
As we observe this trend across all three redshift bins in 2<z<4, including 2.0<z<2.5, it suggests that the reversal might occur before z∼2.0. This result is consistent with studies based on individual protoclusters, such as <cit.> (X-ray AGN; z∼1.7), <cit.> (X-ray AGN; z∼2.156), <cit.> (X-ray AGN; z∼2.16), <cit.> (AGN identification using emission lines in optical/near-IR spectra; z∼2.2), and <cit.> (X-ray AGN; z∼3.09), all of which show a higher AGN fraction in protocluster galaxies compared to field galaxies. Additionally, our X-ray-luminosity AGN fractions in the intermediate and highly overdense local environments at 2<z<4 are consistent with the X-ray AGN fractions in individual clusters reported in <cit.> (z∼2.2) and <cit.> (z∼3.09). The enhancement of ∼2.5× at 3.0<z<4.0 in the intermediate overdensity bin compared to the field observed in our sample is within the error bars of the enhancement seen in the individual protocluster at z∼3.09 in <cit.> (Lyman break galaxies, X-ray AGN sample). However, our findings differ from those of <cit.>, who did not find a relation between environment and X-ray AGN fraction in a z∼2.53 protocluster.
One possible reason for this difference could be the variation in the dynamical states of protoclusters, which may result in different impacts of the environment on AGN activity in various protoclusters. As a given protocluster can show a range of impacts on AGN activity, a statistical sample like the one utilized in this study is necessary to understand the overall impact of the protocluster population on AGN activity in galaxies. Furthermore, there are also differences in the methods used to characterize protoclusters and their member galaxies between our study and these studies, which may result in variations in AGN fraction enhancement. The substantial photometric and spectroscopic data used for our galaxy sample, and the extensive AGN sample provided by <cit.> allow us to have a large sample of galaxies and AGN spanning a wide range of redshift and environment expanding the scope of this type of analysis.
Studies show approximately an order of magnitude increment in the AGN fraction of galaxies with an order of magnitude increase in host galaxy mass, both at lower redshifts (z≤1.0) <cit.> and in the stellar mass and redshift range of our study <cit.>. These increments are consistent with our results of approximately an order of magnitude higher AGN fraction in the higher mass bin compared to the lower mass bin in a given environment, as shown in Figure <ref>. We note that these studies are based on a single type of AGN (for example, X-ray or optical), as opposed to the nine different AGN categories included in our study. There are also other differences between our study and these studies, such as the depth of the multi-wavelength observations and the methods used to generate the galaxy samples. These differences can lead to differences in the AGN fraction values observed in our study compared to these studies.
The larger increase in the AGN fraction with the stellar mass of the host galaxies at a given local overdensity, compared to the increase in the AGN fraction with the local overdensity of galaxies at a given stellar mass (as shown in Figure <ref>), suggests that the impact of processes related to the stellar mass-AGN connection is larger than that of the environment-AGN connection. We observe approximately the same increment in the AGN fraction with environment in both the higher and lower stellar mass bins (also shown in Figure <ref>), which may suggest that the processes responsible for the increment in AGN activity with environment do not depend strongly on the stellar mass of the host galaxies.
For the global environment measure R_proj,norm, our results show signs of an increasing AGN fraction with decreasing R_proj,norm, i.e., decreasing normalized distance from highly overdense (σ_δ>5) peaks. Therefore, this result also suggests a higher AGN fraction for galaxies in denser global environments. This finding is in contrast to the results of studies in the local Universe showing a significant decrease in the fraction of X-ray AGN with decreasing cluster-centric radius, going from r_500 to the central regions of clusters <cit.>. Our result also differs from the <cit.> study at 0.65<z<1.28, which shows an absence of a strong relation between AGN activity and location within the large-scale structures (LSSs) in the ORELSE survey <cit.>. Therefore, similar to the redshift-based change in the impact of the local environment on AGN activity, there seems to be a reversal of the global environment impact on AGN activity at high redshift.
Both of our results, based on local and global environments, show a higher AGN fraction in denser environments compared to coeval-field environments. Our analysis indicates that the enhancement in AGN fraction is more pronounced for the local environment than for the global environment. This suggests that the processes driving the increase in AGN fraction are more effective on smaller scales than on larger scales. Smaller-scale processes such as galaxy interactions or mergers could be significant factors in this enhancement. Since such processes are more common in denser global environments, the global environment can also contribute to the increased AGN fraction observed in these environments, as demonstrated in this study.
At intermediate redshifts and relatively smaller scales, i.e., in spectroscopic galaxy pairs at 0.5<z<3.0, <cit.> find no significant enhancement in X-ray AGN or IR-AGN fractions compared to a stellar mass-, redshift-, and environment-matched control sample of isolated galaxies. Similarly, at intermediate redshifts, <cit.> show that X-ray and MIR-AGN fractions for galaxies in clusters at 1<z<1.5 are comparable to those of field galaxies. As discussed in <cit.>, <cit.>, and <cit.> (among others), despite the larger gas fractions of galaxies at these intermediate redshifts compared to local galaxies, extreme gas properties (such as high turbulence and temperature) and other processes at these redshifts appear to weaken the nuclear infall of gas that triggers AGN during galaxy interactions and mergers.
In contrast, our study shows that at even higher redshifts of 2<z<5, the role of the local environment (likely galaxy interactions and mergers) is reversed, as we observe a higher AGN fraction in denser local environments. Several factors could contribute to this observed reversal, such as: the increased gas supply in high-redshift galaxies <cit.>, higher merger rates <cit.>, and differences in galaxy properties such as stellar mass <cit.> and morphology <cit.> at these epochs. Our finding suggests a significant redshift evolution in the role of processes influencing AGN activity, highlighting the need for further investigation into how these processes vary across different environment scales and cosmic epochs.
§ SUMMARY
We present a study on the relation between the environment of galaxies and their AGN activity at 2.0<z<4.0. We conduct a robust analysis of the change in the AGN fraction of galaxies with their local overdensity as well as their global overdensity. These overdensities are measured using a novel environment measurement technique, utilizing deep multi-wavelength spectroscopic and photometric observations in the GOODS-S field. The AGN in our sample are sourced from the multi-wavelength AGN catalog spanning nine categories provided by <cit.>. We summarize our findings below:
* For our entire galaxy sample and AGN sample at 2.0<z<4.0, we see a clear trend of increasing AGN fraction with increasing local overdensity of galaxies. The AGN fraction of galaxies increases from 1.9^+0.4_-0.3% to 10.9^+3.6_-2.3% (a ∼ 3.9σ increment) as the local overdensity of galaxies increases from σ_δ<2.0 (coeval field) to σ_δ>5.0 (highly overdense peaks). Our results exhibit ∼3× and ∼5× increments in the AGN fraction compared to the coeval field for the intermediate overdensity bin (2<σ_δ<5.0) and the highly overdense bin (σ_δ>5.0), respectively.
* The trend of increasing AGN fraction with increasing local overdensity of galaxies is present in all five categories of AGN (for which on average there are more than eight AGN across all 100 MC iterations), including mid-IR-SED, mid-IR-color, X-ray luminosity, X-ray-to-radio (X2R) luminosity ratio, and optical spectroscopy at 2.0<z<4.0. Note that these categories are not mutually exclusive, i.e., a given system can be identified as an AGN in more than one category. The highest AGN fraction is 8.2^+3.4_-2.0% for σ_δ>5 in the X2R AGN category, which is ∼3.5σ higher than that for the coeval field (σ_δ<2) galaxies (1.0^+0.5_-0.3%).
* We divide our sample into two stellar mass bins: M_*>10^10.2 M_⊙ and M_*≤10^10.2 M_⊙. We observe a clear trend of higher (∼4×) AGN fractions in denser local overdensity environments in both stellar mass bins. For a given environment, the AGN fractions of galaxies in the higher stellar mass bin are ∼10× higher than those of galaxies in the lower stellar mass bin in all three environment bins.
* We split the sample into three redshift bins: 2.0<z<2.5, 2.5<z<3.0, and 3.0<z<4.0. The results for all three redshift bins show the trend of increasing AGN fraction with increasing local overdensity of galaxies, with the exception of the σ_δ>5 bin at 2.0<z<2.5, which is likely affected by low-number statistics. For 3.0<z<4.0, the AGN fraction increases from 1.1^+0.7_-0.4% for σ_δ<2.0 to 8.7^+4.8_-2.3% (a ∼ 3.2σ increment) for σ_δ>5.0.
* For the global environment measure R_proj,norm, we see a higher AGN fraction (5.0^+1.0_-0.8%, ∼2.0σ higher) for galaxies in denser global environments (R_proj,norm < 1.0) compared to the AGN fraction (2.7^+0.8_-0.6%) of the corresponding coeval field galaxies.
Our results, compared to cluster studies of the low- and intermediate-redshift Universe, suggest a reversal in the role of environment in AGN activity. Specifically, at high redshifts of 2.0 < z < 4.0, galaxies in denser environments exhibit a substantially higher AGN fraction compared to their coeval field counterparts, in contrast to the behavior seen in local clusters. This enhancement in AGN fraction in denser environments at high redshift could be caused by factors such as an increased gas supply, higher galaxy merger rates, and differences in the properties of galaxies or environments. Such a prevalence of AGN activity in these denser environments could result in increased AGN feedback (such as AGN-driven outflows) expelling or heating gas, which may lead to quenching of star formation in galaxies <cit.>. Our study provides a pathway to better understand the complex interplay between environment and AGN activity in galaxies at high redshift, and offers valuable constraints on environment-related impacts on AGN activity for theoretical models.
§ DATA AVAILABILITY
The data used in this study will be shared upon reasonable request to the corresponding author.
Results in this paper were partially based on observations made at Cerro Tololo Inter-American Observatory at NSF’s NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. Supported by the international Gemini Observatory, a program of NSF NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the U.S. National Science Foundation, on behalf of the Gemini partnership of Argentina, Brazil, Canada, Chile, the Republic of Korea, and the United States of America. Results additionally relied on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere. This work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Some of the data presented herein were obtained at Keck Observatory, which is a private 501(c)3 non-profit organization operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. This work was supported by NASA’s Astrophysics Data Analysis Program under grant number 80NSSC21K0986. Some of the material presented in this paper is based upon work supported by the National Science Foundation under Grant No. 1908422.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission.
§ AGN FRACTION IN A STELLAR-MASS COMPLETENESS LIMITED SAMPLE
At the beginning of <ref>, in Figure <ref>, we show the AGN fraction in three different environment bins for our entire galaxy sample at 2.0<z<4.0. We conduct the same analysis for an 80% stellar-mass completeness limited galaxy sample (M_*>10^9.14M_⊙) in our redshift range and present the results in Figure <ref>. The AGN fraction increases from 2.3^+0.5_-0.4% to 12.6^+4.0_-2.7% (a ∼3.8σ increment) as the local overdensity increases from σ_δ<2 (coeval field) to σ_δ>5 (overdense peaks). Hence, the AGN fraction values are consistent within the error bars, and the significance of the AGN fraction increment is approximately the same when these results are compared to those for the overall galaxy sample presented in Figure <ref>.
|
http://arxiv.org/abs/2409.03251v1 | 20240905050843 | Dual-TSST: A Dual-Branch Temporal-Spectral-Spatial Transformer Model for EEG Decoding | [
"Hongqi Li",
"Haodong Zhang",
"Yitong Chen"
] | cs.HC | [
"cs.HC",
"cs.LG",
"cs.SY",
"eess.SY"
] |
Dual-TSST: A Dual-Branch Temporal-Spectral-Spatial Transformer Model for EEG Decoding
Hongqi Li*, Member, IEEE, Haodong Zhang, and Yitong Chen
Manuscript received Aug 21, 2024. This work was supported in part by the Natural Science Basic Research Program of Shaanxi Province under Grant 2024JC-YBQN-0659, in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2022A1515110252, in part by the Basic Research Programs of Taicang under Grant TC2023JC16, in part by the Fundamental Research Funds for the Central Universities under Grant D5000210969 (Corresponding author: Hongqi Li.)
H. Li is with the School of Software, Northwestern Polytechnical University, Xi’an 710072, China, and also with the Research & Development Institute of Northwestern Polytechnical University in Shenzhen, Shenzhen 518063, China, and also with the Yangtze River Delta Research Institute of Northwestern Polytechnical University, Taicang 215400, China (e-mail: [email protected]).
H. Zhang and Y. Chen are with the School of Software, Northwestern Polytechnical University, Xi’an 710072, China (e-mail: [email protected]; [email protected]).
September 5, 2024
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
The decoding of electroencephalography (EEG) signals provides convenient access to user intentions, playing an important role in the field of human-machine interaction. To effectively extract sufficient characteristics of multichannel EEG, a novel decoding network with a dual-branch temporal-spectral-spatial transformer (Dual-TSST) is proposed in this study. Specifically, by utilizing convolutional neural networks (CNNs) on different branches, the proposed network first extracts the temporal-spatial features of the original EEG and the temporal-spectral-spatial features of the time-frequency data obtained by wavelet transformation, respectively. These perceived features are then integrated by a feature fusion block, serving as the input of a Transformer to capture the global long-range dependencies entailed in the non-stationary EEG, and are classified via global average pooling and multi-layer perceptron blocks. To evaluate the efficacy of the proposed approach, competitive experiments are conducted on three publicly available datasets, BCI IV 2a, BCI IV 2b, and SEED, with head-to-head comparisons against more than ten other state-of-the-art methods. As a result, our proposed Dual-TSST performs superiorly in various tasks, achieving promising EEG classification performance with average accuracies of 80.67% on BCI IV 2a, 88.64% on BCI IV 2b, and 96.65% on SEED. Extensive ablation experiments between Dual-TSST and a comparative baseline model also reveal the contribution of each module to the decoding performance. This study provides a new approach to high-performance EEG decoding and has great potential for future CNN-Transformer based applications.
EEG decoding, feature fusion, transformer, convolutional neural network, signal processing.
§ INTRODUCTION
Brain-computer/machine interfaces (BCIs/BMIs) have garnered much attention over the past decades due to their outstanding ability to convert users' brain activity into machine-readable intentions or commands <cit.>. Among various BCI modalities, noninvasive electroencephalography (EEG) has the advantages of adequate temporal resolution, non-surgical electrode placement, and low cost, leading to its widespread application in the fields of rehabilitation engineering <cit.>, cognitive science <cit.>, neuroscience, and psychology <cit.>.
Various brain paradigms, such as motor imagery (MI), the event-related P300, and steady-state visual evoked potentials (SSVEP), have been extensively studied <cit.>, and a complete EEG-based BCI system generally consists of a user-intention decoding pipeline of signal acquisition, preprocessing, feature extraction, and classification, followed by an application interface that converts the output into control signals. To interpret the sampled EEG accurately, two main categories of recognition methods have been investigated: traditional machine learning (ML) algorithms <cit.> and advanced deep learning (DL) techniques <cit.>. Traditional ML methods usually involve feature extraction and feature classification, where the former uses algorithms such as the common spatial pattern (CSP), filter-bank CSP, the fast Fourier transform (FFT), and the wavelet transform. For feature classification, supervised approaches (e.g., linear discriminant analysis (LDA), support vector machines (SVM)) and unsupervised methods (e.g., K-nearest neighbours (KNN)) have been shown to be effective. However, since features are extracted manually from the raw, non-stationary EEG with its low signal-to-noise ratio, specific expertise is generally required, making the process time-consuming and complicated. Worse still, useful information may be lost owing to insufficient expert experience. DL methods, on the other hand, allow end-to-end models composed of multiple processing layers to learn data representations automatically, thereby minimizing the need for manual intervention and domain-specific preprocessing, and have already achieved excellent, even state-of-the-art (SOTA), performance in domains such as computer vision <cit.> and natural language processing <cit.>.
Specifically for EEG decoding, the convolutional neural network (CNN)-based ConvNet has reached classification results comparable to traditional ML methods <cit.>. A compact network called EEGNet was proposed in <cit.>, which utilized depthwise and separable convolutions to build an EEG-specific model capable of learning features across various tasks. Moreover, a long short-term memory (LSTM)-based recurrent neural network (RNN) was developed by Tortora et al. in <cit.> for decoding gait events from EEG, fully leveraging the network's ability to handle time-dependent information. However, despite these commendable advances, CNNs and RNNs are not ideal for processing EEG signals. More specifically, while CNNs are good at learning local features, they struggle to capture long-term dependencies across the whole data scale. RNNs, in turn, also have difficulty capturing long-term dependencies in long sequences. Therefore, to address these shortcomings, research on sequence-signal processing is gradually shifting to the self-attention mechanism, which allows each element in a sequence to be processed while taking into account its relationship with all other elements, thus capturing richer contextual features. Moreover, multi-feature analysis of EEG has also attracted increasing attention, considering that the sampled signals contain multi-dimensional features in the temporal, spectral, and spatial domains.
§.§ Related Work
One of the most famous models based on the self-attention mechanism is the Transformer, which has recently been applied to EEG decoding. Sun et al. <cit.> introduced a novel approach integrating multi-head attention mechanisms with CNNs for motor imagery tasks, and various positional embedding techniques were used to improve the classification accuracy; the five introduced Transformer-based models significantly outperformed existing models. Similarly, a compact hybrid model of CNNs and Transformers, named EEG Conformer, was developed to decode EEG signals by capturing both local and global features; it excelled on three public datasets and potentially establishes a new baseline for EEG processing <cit.>. The ADFCNN proposed in <cit.> utilized convolutions at two different scales to capture comprehensive spatial details in EEG data, with the features fused through a self-attention mechanism. Moreover, Arjun <cit.>, Al-Quraishi <cit.>, and Mulkey <cit.> first converted the EEG data into time-frequency images and then used Vision Transformers, following ideas from the computer vision field. In the realm of pretrained models, BERT <cit.>, GPT, and the Swin Transformer <cit.> have been adopted to transform EEG into textual and visual formats for further processing. In particular, a Transformer-like recognition approach, Speech2EEG, has been proposed to leverage pretrained speech-processing networks for robust EEG feature aggregation, thereby boosting EEG analysis capabilities <cit.>. Given these advancements, the promising potential of Transformers in EEG decoding has been well demonstrated.
On the other hand, with the development of DL models, a single feature can no longer satisfy the performance requirements of increasingly complex models in EEG decoding, and multi-feature analysis methods have therefore gradually become mainstream. In 2019, Tian et al. <cit.> crafted a multi-view DL strategy that first transforms the raw data into representations in the frequency and time-frequency domains, then extracts features independently, and finally merges them to perform classification efficiently. Data from multiple frequency bands were used in <cit.> to create multi-view representations, where the spatial discrimination patterns of the views were learned by a CNN, temporal information was aggregated by a variance layer, and the resulting features were classified by a fully connected layer. Recently, a multi-domain CNN model, TSFCNet, was developed for MI decoding, which significantly outperformed traditional methods by extracting multi-scale features from the time domain and capturing additional spatial, frequency, and time-frequency features <cit.>. Earlier this year, Liang et al. <cit.> developed the EISATC-Fusion model to leverage multi-scale EEG frequency-band information combined with an attention mechanism and temporal convolutional networks (TCN) for an integrated feature extraction process. In addition, a lightweight multi-feature attention CNN was proposed in <cit.> to extract information from the frequency domain, localized spatial domains, and feature maps to enhance the precision of EEG analysis, while a hybrid neural network, SHNN, was designed to autonomously extract spatial, spectral, and temporal features from EEG <cit.>. In summary, the research community has tended to extract temporal-spatial-spectral features simultaneously, which helps to improve the understanding and decoding of specific EEG signals. However, since EEG signals are first collected and expressed in the temporal domain, while the frequency/time-frequency/spatial features are represented or converted by various approaches, how to efficiently extract and integrate features from different dimensions and establish a more robust extraction process remains challenging.
§.§ Contribution and Overview
As mentioned earlier, the application of Transformer-based models and multi-feature analysis to EEG decoding has only recently emerged and is still developing, and there have been few attempts to combine the two naturally. Since convolutional models can automatically learn discriminative local features from raw EEG data, while the attention-based Transformer is adept at describing long-range dependencies, combining these two modules is envisioned to benefit both, yielding a more comprehensive interpretation of users' EEG data.
Driven by this insight, in the present work, a novel decoding architecture with a dual-branch temporal-spectral-spatial transformer, termed Dual-TSST, is proposed to extract the multi-dimensional features hidden in EEG while considering their global correlations. Specifically, the proposed architecture mainly consists of three parts: feature extraction, feature fusion, and classification modules. The feature extraction module is composed of two branches of convolutional neural networks that receive multi-view inputs from the raw EEG and extract the inherent temporal-spectral-spatial features. The obtained features are fed into the feature fusion module, where they are concatenated and their global relationships are learned by a Transformer, and a classifier composed of multilayer perceptron and global pooling layers finally produces the output. The main contributions of this study are summarized below.
1) We propose a natural fusion and collaboration architecture based on the classical CNNs and emerging Transformer, which is highly generalizable to a wide range of EEG decoding tasks. Specifically, the developed network enables to extract abundant powerful features without handcraft while allowing long-range correlation among features being considered and processed concurrently.
2) The designed Dual-TSST mainly comprises dual-scale convolutional networks, where one branch extracts temporal features from the raw EEG, and the other acquires time-frequency/time-spatial information from the converted EEG signals. These features share the same scale, so they can be concatenated and jointly fused effectively by a fusion patch. A self-attention mechanism is applied to adaptively enhance the flexibility of the feature fusion.
3) Dual-TSST has undergone extensive experiments on multiple public datasets to demonstrate its structure and superior performance; the results compared with state-of-the-art models prove the effectiveness of the proposed method.
The rest of this paper is organized as follows. Section <ref> introduces the design ideas and specific structural principles of Dual-TSST. Section <ref> presents the used datasets with related data preprocessing, and experimental setups. The comparable results and visualized model effects are presented in Section <ref>, while a detailed discussion and conclusion is given in the final Section <ref>.
§ APPROACH OF DUAL-TSST NETWORK
The EEG signals are notable for their exceptional temporal resolution while encompassing extensive spectral and spatial properties. With the goal of processing EEG such that the multiple features involved are considered adequately and efficiently, a generalized network adhering to machine learning principles is proposed, in which advanced deep learning techniques are utilized to perform feature extraction, feature fusion, and the final classification step by step.
§.§ Overall Model Architecture
Traditional practice in EEG decoding generally uses exclusively the raw EEG or solely the time-frequency images derived from the transformed data, which may lead to loss of the contained information during the conversion process. Instead, as illustrated in Fig. <ref>, a dual-branch model named Dual-TSST, capable of processing diverse views of EEG, is designed to start from both the raw and the converted EEG signals. For the first module of feature extraction, two branches based on convolutional neural networks are applied to sufficiently extract potential characteristics from the temporal, frequency, and spatial domains. Branch II, in particular, is designed to simultaneously analyze the wavelet-transformed time-frequency EEG data in two mutually transposed formats, collecting the time-frequency-space features comprehensively. To reduce the model complexity and computational load, depthwise separable convolutions and average pooling layers are employed in this module.
The acquired features from branch I and branch II are then synergistically integrated and serve as the input of patch embedding for the feature fusion part, where a Transformer module is exploited to learn global relationships among the extracted properties. Ultimately, for the classification module, a global average pooling (GAP) layer and multilayer perceptron (MLP) module is used to analyze the inputted features and deliver the final classification outcomes.
§.§ Source of Data
1) Data Input: The original EEG data can be represented as EEG(t)∈ℝ^ch× T, where ch is the number of electrodes indicating spatial dimensions, and T represents the time samples of the EEG data. Initially, to convert the two-dimensional time-domain data into three-dimensional time-frequency domain, the Morlet wavelet transform <cit.> provided by MNE-Python is applied, for which the process is described by
W(a,t) = 1/√(a)∫_-∞^+∞EEG(τ)Ψ(τ - t/a f_o) dτ
where W(a,t) represents the transformed outcome, a is the scale parameter related to frequency and sampling rate, f_o is the central frequency, τ is the time variable, and Ψ(t) is the wavelet function
Ψ(t)=1/√(σ_t√(π))e^-t^2/σ_t^2e^i2π f_o t
and σ_t is the wavelet’s temporal standard deviation.
For the Morlet wavelet transformation, we set the frequencies freq to match the frequency range used in the original data filtering. Note that the number of cycles (i.e., n_cycle) determines the width of the wavelet in the transformation and is related to the temporal standard deviation σ_t. A larger n_cycle results in a wider wavelet, leading to lower time but higher frequency resolution. Conversely, a smaller n_cycle results in a narrower wavelet, improving the time resolution at the expense of frequency resolution. To achieve a balance, n_cycle is set to half of the frequency, i.e., n_cycle = freq/2, and the sampling rate for the time resolution remains the same as that of the original EEG signal. We use Wavelet(t) ∈ℝ^ch× T× F to represent the transformed time-frequency data.
Then, both original EEG(t) and transformed time-frequency Wavelet data Wavelet(t) are subjected to Z-Score normalization, which preserves the data’s dimensional shape while ensuring consistency in analysis and can be represented as:
x^'=(x-μ)/σ
where x and x^' represent the input and output data, and μ and σ are the computed mean and standard deviation, respectively.
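As an illustration, a possible implementation of this preprocessing with MNE-Python is sketched below; the exact frequency grid and the per-trial normalisation axes are assumptions rather than prescriptions of our pipeline.

import numpy as np
from mne.time_frequency import tfr_array_morlet

def to_time_frequency(eeg, sfreq, f_min=1.0, f_max=40.0, n_freqs=40):
    """eeg: array of shape (n_trials, ch, T). Returns Morlet power of shape
    (n_trials, ch, F, T), with n_cycles = freq/2 as described in the text."""
    freqs = np.linspace(f_min, f_max, n_freqs)          # assumed frequency grid
    return tfr_array_morlet(eeg, sfreq=sfreq, freqs=freqs,
                            n_cycles=freqs / 2.0, output='power')

def zscore(x, eps=1e-8):
    """Z-score normalisation applied per trial (the normalisation axis is an assumption)."""
    axes = tuple(range(1, x.ndim))
    mu = x.mean(axis=axes, keepdims=True)
    sigma = x.std(axis=axes, keepdims=True)
    return (x - mu) / (sigma + eps)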
2) Data Augmentation: To mitigate the challenge of limited EEG data availability in decoding, several data augmentation strategies can be applied. Here, the Segment and Reassemble (S&R) mechanism is adopted. More specifically, each EEG sample from the same category and its corresponding time-frequency data are divided into a predetermined number R of fixed segments. These segments are subsequently recombined in various random ways that respect the original temporal order. This technique not only diversifies the training dataset but also enhances the model's ability to generalize from limited data. Following the guidelines set forth in <cit.>, we generated augmented data in each epoch, matching the batch size, thereby ensuring consistent model training across different data permutations.
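The sketch below illustrates one common realisation of S&R, in which each of the R time slots of a surrogate trial is drawn from a randomly chosen trial of the same class so that temporal order is preserved; the segment count and batch handling here are illustrative.

import numpy as np

def segment_and_reassemble(x, y, r=8, n_new=32, rng=None):
    """x: (N, ..., T) trials (raw EEG or time-frequency maps), y: (N,) labels.
    Returns n_new surrogate trials built by recombining same-class segments."""
    rng = np.random.default_rng() if rng is None else rng
    seg = x.shape[-1] // r                       # segment length along the time axis
    new_x, new_y = [], []
    for _ in range(n_new):
        c = rng.choice(np.unique(y))             # class of the surrogate trial
        idx = np.flatnonzero(y == c)
        pieces = [x[rng.choice(idx)][..., k * seg:(k + 1) * seg] for k in range(r)]
        new_x.append(np.concatenate(pieces, axis=-1))
        new_y.append(c)
    return np.stack(new_x), np.asarray(new_y)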
§.§ Feature Extraction based on Dual CNN Branches
1) Branch I for original EEG(t): As shown in Fig. <ref>, the shape of the inputted 2D EEG data is [ch × T_B1]. To extract features in the temporal dimension, the time convolution is first used, resulting in a 3D feature map of EEG_TC^D_1, with the shape [D_1 × ch × T_B11]. Here, to capture local details in the temporal dimension as much as possible, the time convolution kernel size is set to be relatively small. The relevant process can be summarized as:
EEG(t)_TC^D_1=TimeConv(EEG^')
Then, separable spatial convolution compresses the spatial dimension and extracts features from the electrodes, changing the feature map shape to [D_1 × 1 × T_B11]. It should be noticed that, to ensure the performance, Batch Norm layers (see in Fig. <ref>) are added after the time convolution and separable spatial convolution, and the ELU activation function is also employed. The above data flow can be expressed as follows:
EEG_SSC^D_1=ELU(BN(SSConv(EEG_TC^D_1)))
where BN means the batch normalization function, and SSConv indicates the related separable spatial convolution.
After that, an average pooling layer is used to extract features while reducing the data in the temporal dimension. With enhanced generalization and noise suppression, a feature map EEG_AP^D_1 is derived with shape [D_1 × 1 × T_B12]. Finally, pointwise convolutions are applied for channel fusion and to moderately increase the channel dimension, enhancing the data's information content and expressive power. The final feature map EEG_Pw^D_2 has shape [D_2 × 1 × T_B12]. The entire data flow of these operations can be represented by the following process:
EEG_Pw^D_2=PWConv(AP(EEG_SSC^D_1))
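A PyTorch sketch of Branch I consistent with this data flow is given below; the pooling size P_1 = 120 and stride P_1/10 follow the configuration described later in the text, while the temporal kernel length and electrode count defaults are assumptions.

import torch
import torch.nn as nn

class BranchI(nn.Module):
    """Sketch of Branch I for raw EEG of shape (batch, ch, T); k_time is an assumed value."""
    def __init__(self, ch=22, d1=40, d2=120, k_time=25, p1=120):
        super().__init__()
        self.time_conv = nn.Conv2d(1, d1, kernel_size=(1, k_time), padding='same')
        self.bn1 = nn.BatchNorm2d(d1)
        # separable (depthwise) spatial convolution collapsing the electrode axis
        self.spat_conv = nn.Conv2d(d1, d1, kernel_size=(ch, 1), groups=d1, bias=False)
        self.bn2 = nn.BatchNorm2d(d1)
        self.act = nn.ELU()
        self.pool = nn.AvgPool2d(kernel_size=(1, p1), stride=(1, p1 // 10))
        self.pw = nn.Conv2d(d1, d2, kernel_size=1)      # pointwise channel fusion

    def forward(self, x):                                # x: (B, ch, T)
        x = self.bn1(self.time_conv(x.unsqueeze(1)))     # time convolution -> (B, D1, ch, T)
        x = self.act(self.bn2(self.spat_conv(x)))        # spatial compression -> (B, D1, 1, T)
        x = self.pw(self.pool(x))                        # pooling + pointwise -> (B, D2, 1, T_B12)
        return x.squeeze(2).transpose(1, 2)              # (B, T_B12, D2) token sequence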
2) Branch II for converted Wavelet(t): Branch II is designed for processing the time-frequency EEG data. As illustrated in Fig. <ref>, to capture multidimensional features, distinct inputs from different viewpoints (i.e., Input 1 and Input 2) are fed into this branch. Here, Input 2 is obtained by transposing Input 1 by 90 degrees. Unlike Branch I, which processes the original EEG in a single stream, Branch II performs simultaneous processing of multiple inputs. Specifically, the time convolution is applied first, followed by batch normalization, which can be expressed as:
Wavelet_i_TC^D_1=TimeConv(Wavelet_i^'),i=1,2
where Wavelet_i (i=1,2) represents the branch inputs, and Wavelet_i_TC^D_1 is the relevant output.
Given the differences in temporal resolution and information content between the time-frequency Wavelet(t) and the original EEG(t), a time convolution of a different scale is used; the two inputs have shapes [ch× F × T_B2] and [F× ch × T_B2], respectively. Indeed, as shown in Fig. <ref>, this choice aims to balance the features derived from the original and time-frequency data, producing feature maps of shapes [D_1× F × T_B21] and [D_1× ch × T_B21].
Similarly, separable spatial and frequency convolutions are employed for feature extraction and dimension compression in the spatial and frequency dimensions, resulting in feature maps Wavelet_1_SSC^D_1 and Wavelet_2_SFC^D_1, both with shapes [D_1× 1 × T_B21]. The detailed operation of these two processing are:
Wavelet_1_SSC^D_1=ELU(BN(SSConv(Wavelet_1_TC^D_1)))
Wavelet_2_SFC^D_1=ELU(BN(SFConv(Wavelet_2_TC^D_1)))
Subsequently, an average pooling layer is used to suppress noise, extract features, and reduce the data volume, resulting in data with shape [D_1× 1 × T_B22]. Finally, pointwise convolutions are applied to achieve channel fusion and dimension elevation, producing the feature maps Wavelet_1_PW^D_2 and Wavelet_2_PW^D_2, each with shape [D_2× 1 × T_B22]. The hyperparameters D_1 and D_2 are set to the same values as in Branch I. The entire data flow of these descriptions is as follows:
Wavelet_i_PW^D_2=PWConv(AP( Wavelet_i_SSC/SFC^D_1))
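Branch II can be sketched analogously; the module below treats the leading axis of the wavelet map as the convolutional channel axis, so the same class handles Input 1 and, with the two size arguments swapped, the transposed Input 2. The pooling size P_2 = 64 with stride P_2/2 follows the text, whereas the temporal kernel length is an assumption.

import torch.nn as nn

class BranchII(nn.Module):
    """Sketch of one Branch II stream. For Input 1 (ch, F, T) use
    BranchII(in_ch=n_channels, collapse=n_freqs); for Input 2 (F, ch, T) swap them."""
    def __init__(self, in_ch=22, collapse=40, d1=40, d2=120, k_time=64, p2=64):
        super().__init__()
        self.time_conv = nn.Conv2d(in_ch, d1, kernel_size=(1, k_time), padding='same')
        self.bn1 = nn.BatchNorm2d(d1)
        # separable convolution collapsing the remaining frequency (or electrode) axis
        self.sep_conv = nn.Conv2d(d1, d1, kernel_size=(collapse, 1), groups=d1, bias=False)
        self.bn2 = nn.BatchNorm2d(d1)
        self.act = nn.ELU()
        self.pool = nn.AvgPool2d(kernel_size=(1, p2), stride=(1, p2 // 2))
        self.pw = nn.Conv2d(d1, d2, kernel_size=1)

    def forward(self, w):                                # w: (B, in_ch, collapse, T)
        x = self.bn1(self.time_conv(w))                  # (B, D1, collapse, T)
        x = self.act(self.bn2(self.sep_conv(x)))         # (B, D1, 1, T)
        x = self.pw(self.pool(x))                        # (B, D2, 1, T_B22)
        return x.squeeze(2).transpose(1, 2)              # (B, T_B22, D2) token sequence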
§.§ Feature Fusion based on Transformer
Three representative feature maps are obtained from the above feature extraction with Branch I and Branch II. To better integrate them, we reshape these outputs into EEG_S^D_2, Wavelet_1_S^D_2, and Wavelet_2_S^D_2 with shapes [T_B12 × D_2], [T_B22 × D_2], and [T_B22 × D_2], respectively. This dimensional conversion suits the data requirements of the succeeding Transformer, which is applied to learn cross-channel context information and whose encoder accepts inputs shaped as [SeqLength × FeatureSize]. The reshaped feature maps are horizontally concatenated to form a unified representation EW_Fusion, which fuses the original EEG and the time-frequency wavelet data:
EW_Fusion = Concat(EEG_S^D_2, Wavelet_1_S^D_2, Wavelet_2_S^D_2)
The new feature EW_Fusion, with shape [(T_B12 + 2T_B22) × D_2], is then processed using a multi-head attention mechanism within a complete Transformer Encoder. This setup captures the detailed correlations within the input sequence, thereby obtaining comprehensive global characteristics across the time, space, and frequency dimensions of the combined EEG and time-frequency data.
An encoding approach akin to those in Vision Transformers is adopted, which involves parameterizable position encodings initialized with random values as:
P = Parameter(P_init)
X_P = EW_Fusion + P
where P is the position encoding matrix, P_init is its initial value determined by random numbers, and X_P represents the encoded feature matrix with shape [(T_B12 + 2T_B22) × D_2], which is subsequently mapped to the Query (Q), Key (K), and Value (V) spaces through linear transformations with learnable weight matrices W_Q, W_K, and W_V as:
Q = X_PW_Q, K = X_PW_K, V = X_PW_V
Attention(Q,K,V)=Softmax(QK^T/√(D_2))V
where D_2 is the dimensionality of patches within the data.
The Transformer Encoder applies multi-head attention to parallelize the computation on data, thereby enhancing the expressivity and efficiency of the model and improving its generalizability. Multi-head attention (MHA) includes several self-attention layers, where each head generates an attention output, and the outputs from all heads are concatenated to form the final multi-head attention, as depicted in the following:
MHA(Q,K,V) = Concat(head_1,...head_h)W^O
Head_i = SelfAttention(QW^Q_i,KW^K_i,VW^V_i)
where h denotes the number of heads, and W^O is the weight matrix that integrates the information captured by the different heads head_i.
After the multi-head attention mechanism, as can be seen in Fig. <ref>, a series of residual connections and layer normalization are performed to facilitate the information flow and stabilize the training process. The output is further processed using the MLP, followed by additional layer normalization, and residual connections, culminating in the final outputs from the multiple Encoder layers.
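The fusion-and-encoding stage can be sketched with PyTorch's built-in encoder, which already bundles multi-head attention, residual connections, layer normalization, and the MLP sub-block; the feed-forward width is an assumption, while the 4 layers and 10 heads follow the configuration reported later.

import torch
import torch.nn as nn

class FusionTransformer(nn.Module):
    """Concatenate the token sequences from both branches, add a learnable
    positional embedding, and apply a stack of Transformer encoder layers.
    seq_len must equal T_B12 + 2*T_B22 for the chosen preprocessing."""
    def __init__(self, seq_len, d2=120, n_layers=4, n_heads=10):
        super().__init__()
        self.pos = nn.Parameter(torch.randn(1, seq_len, d2) * 0.02)   # random init
        layer = nn.TransformerEncoderLayer(d_model=d2, nhead=n_heads,
                                           dim_feedforward=4 * d2,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, eeg_tok, wav1_tok, wav2_tok):
        # each *_tok: (B, L_i, D2); total length must match seq_len of self.pos
        x = torch.cat([eeg_tok, wav1_tok, wav2_tok], dim=1)           # EW_Fusion
        return self.encoder(x + self.pos)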
§.§ Classification Module
The outcome of the Transformer Encoder maintains the same dimensional structure as its input. To effectively distill this complex data, global average pooling (GAP) is employed, which simplifies the feature map by averaging out the features over the entire spatial extent of each channel. This process extracts pivotal global information that is crucial for the next stage of processing.
Following the pooling, the data is routed to an MLP module with two linear layers. The Softmax function, which normalizes the linear outputs to form a probability distribution over the predicted output classes, aids in the transformation of the pooled features into an M-dimensional vector. The model’s performance is evaluated using a cross-entropy loss function, which is essential for classification tasks and is mathematically represented as:
l=-1/N_b∑_i=1^N_b∑_j=1^N_c y_ijlog(ŷ_ij)
where N_b is the batch size indicating the number of samples processed per training iteration, N_c denotes the total number of categories in the classification task, y_ij is the true label of sample i for class j, and ŷ_ij is the corresponding predicted probability. Briefly, the function effectively measures the difference between the predicted probabilities and the actual distribution, guiding the model towards more accurate predictions through training.
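A sketch of this classification head is given below: global average pooling over the token dimension followed by a two-layer MLP, trained with nn.CrossEntropyLoss, which applies the softmax internally. The hidden width is an assumption.

import torch.nn as nn

class ClassifierHead(nn.Module):
    def __init__(self, d2=120, hidden=64, n_classes=4):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d2, hidden), nn.ELU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, tokens):            # tokens: (B, L, D2) from the encoder
        pooled = tokens.mean(dim=1)       # global average pooling over the sequence
        return self.mlp(pooled)           # class logits; softmax is applied in the loss

criterion = nn.CrossEntropyLoss()         # realises the cross-entropy objective above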
§ DATASET AND EXPERIMENTAL SETUP
To evaluate the proposed method, we utilized three public datasets. Specifically, two BCI competition MI datasets <cit.>, sourced from the MOABB (Mother of All BCI Benchmarks) project <cit.>, and the widely used emotional SEED dataset <cit.> are included. This section introduces these datasets along with the necessary preprocessing procedures.
§.§ Datasets
Dataset I: BCI Competition IV 2a - This dataset comprises EEG recordings from 9 subjects performing four distinct MI tasks, i.e., imagined movements of the left hand, right hand, both feet, and the tongue. Data were collected by 22 Ag/AgCl electrodes positioned according to the international 10-20 system. To ensure signal quality, a 250 Hz sampling rate was utilized and the recorded data were filtered between 0.5 Hz and 100 Hz. The dataset includes two sessions, where the first session serves as the training set and the second as the test set. Each session consists of six runs, with 48 trials per run distributed evenly across task categories. In our study, we set the time window for each trial of this dataset between 2 and 6 seconds, and filtered the data using a frequency range from 0 to 40 Hz.
Dataset II: BCI Competition IV 2b - This dataset features data from 9 subjects engaged in left- and right-hand MI tasks. The data were captured from three electrodes, C3, Cz, and C4, with a sampling frequency of 250 Hz. A band-pass filter of 0.5-100 Hz and a notch filter at 50 Hz were used. Each subject participated in five sessions, where the initial two collected data without visual feedback and the subsequent three included online feedback. Moreover, the dataset designates the initial three sessions (400 trials in total) for training and the final two (i.e., 320 trials) for testing. Note that in our study each trial is allocated a time window from 3 to 7.5 s, with data similarly filtered within the 0 to 40 Hz range.
Dataset III: SEED - Provided by the BCMI Lab at Shanghai Jiao Tong University, this dataset consists of EEG data from 15 subjects who viewed clips from Chinese films edited to evoke various emotions (e.g., positive, negative, neutral). The films last about 4 minutes, with data processed using 1-s or 4-s sliding windows across 62 channels and downsampled to 200 Hz. Each subject underwent three experimental sessions, with data filtered through a 0-75 Hz band-pass filter. In addition, five-fold/ten-fold cross-validation techniques were involved in training. In our study, a band-pass filter ranging from 0.5 Hz to 50 Hz was applied to the SEED dataset, and the continuous data from each experiment were segmented into 1-second windows.
§.§ Experiment Setting
We constructed the developed model using Python 3.11 and PyTorch 2.0, and conducted training on an Nvidia GeForce RTX 4090 GPU using the Adam optimizer. The Adam optimizer was configured with a learning rate of 0.0001 and a weight decay of 0.0012, with β_1 and β_2 values of 0.5 and 0.999, respectively. Throughout training, the number of epochs was set to 1000, with a batch size of 32. The critical hyperparameters D_1 and D_2 were set to 40 and 120. On Datasets I and II, the data augmentation parameter R was set to 8 and 9, respectively. Since the data scale is sufficient, no data augmentation was applied to Dataset III. The learning rate was adjusted with cosine annealing <cit.>, which can be described by the following formula:
lr = lr_min + 1/2(lr_max - lr_min)(1+cos(T_cur/T_maxπ))
where lr is the current learning rate, lr_max and lr_min are the related maximum and minimum values, respectively. T_cur is the current training epoch, and T_max is the total number of training epochs in a cycle. The learning rate decreases to lr_min at the end of a cycle. For the experiments, T_max was set to 32 to allow better model convergence and generalization during training.
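For reference, a training loop reproducing these reported settings might look as follows; `model` and `train_loader` are placeholders, and whether the scheduler is stepped per epoch or per iteration is an implementation choice we leave open.

import torch

def train(model, train_loader, n_epochs=1000, device='cuda'):
    model.to(device)
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                                 weight_decay=0.0012, betas=(0.5, 0.999))
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=32)
    for _ in range(n_epochs):
        for xb, yb in train_loader:               # mini-batches of size 32
            xb, yb = xb.to(device), yb.to(device)
            optimizer.zero_grad()
            loss = criterion(model(xb), yb)
            loss.backward()
            optimizer.step()
        scheduler.step()                           # cosine annealing of the learning rate
    return model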
§.§ Choice of Model Parameters
Table <ref> illustrates the input shapes, kernels, strides, and output configurations for each layer in the feature extraction, emphasizing how each layer contributes to the final outputs.
Specifically, from the model structure, it is apparent that the final feature size outputted by each branch is primarily governed by the kernel size and stride of the Average Pooling layer. For Branch I, a relative small convolution kernel is set to capture more granular features along the temporal dimension. However, despite richer details can be extracted, it may result in a larger feature map size. Using a larger Pooling Kernel size helps control the map size and the receptive field of the features. Meanwhile, it helps to reduce the computational requirements and enhance the model’s generalization capabilities while maintaining substantial contextual information. In contrast, a larger convolution kernel set in Branch II aims to capture broader features along the time-frequency dimension, and a following smaller Pooling Kernel Size may facilitate more intensive feature extraction. Indeed, balancing the convolution kernel sizes and pooling parameters between different branches enhances the model’s flexibility, which helps to better adapt to the model’s intrinsic structure and allow the model to learn features of different scales from different data types, thus improving model performance. Here, to balance the features obtained while enhancing the model performance, we set a larger Pooling Kernel size P_1 of 120 with a stride of P_1/10 for Branch I, and a smaller Pooling Kernel size P_2 of 64 with a stride of P_2/2 for Branch II.
Moreover, the Transformer Encoder was configured with 4 blocks, and the multi-head attention mechanism was set with 10 heads. Finally, the model’s performance was evaluated using classification accuracy and the Kappa value, and the Kappa value is defined as:
Kappa=(P_o-P_e)/(1-P_e)
where P_o is the proportion of correctly classified samples to the total number of samples, i.e., overall classification accuracy, and P_e represents the probability of chance agreement, i.e., the correctness of random guesses.
In addition, we used the Wilcoxon signed-rank test to assess statistical significance.
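These evaluation metrics can be computed as sketched below. Here P_e is estimated from the marginal label frequencies (standard Cohen's kappa), which for balanced classes reduces to the chance level 1/M; scipy's wilcoxon provides the paired signed-rank test.

import numpy as np
from scipy.stats import wilcoxon

def kappa_score(y_true, y_pred, n_classes):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    p_o = np.mean(y_true == y_pred)                        # overall accuracy
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c)  # chance agreement
              for c in range(n_classes))
    return (p_o - p_e) / (1.0 - p_e)

# Paired test on per-subject accuracies of two models, e.g.:
# stat, p_value = wilcoxon(acc_dual_tsst, acc_baseline)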
§ RESULTS
In this section, we compare the results of the proposed model against a variety of innovative state-of-the-art methods, with ablation experiments included to demonstrate the contribution of each part of the model. Finally, we illustrate the interpretability of the extracted features visually, which helps in understanding the underlying principles of the model.
In general, for the selected datasets, the classification performance of existing deep learning methods has advanced greatly compared to machine learning. To avoid redundancy and ensure the persuasiveness of the comparison, we mainly review the latest methods applied to these datasets in the past three years, as well as some well-established deep learning techniques (e.g., ConvNet <cit.>, EEGNet <cit.>, FBCNet <cit.>, EEG Conformer <cit.>). Among the compared methods, DRDA <cit.> offers a sophisticated end-to-end domain adaptation approach tailored for EEG-based motor imagery classification tasks, and DAFS <cit.> merges small-sample learning with domain adaptation, enhancing domain-specific classification efficacy in MI-EEG tasks by leveraging source domain insights. EEG-ITNet <cit.> features an interpretable CNN framework that relies on inception modules and dilated causal convolutions, whereas IFNet <cit.> is a streamlined interactive convolutional network focusing on the interplay among various frequency signals to boost EEG feature depiction. MANN <cit.> integrates multiple attention mechanisms with transfer learning for EEG classification and incorporates domain adaptation techniques to enhance its efficacy, while the multi-scale hybrid convolutional network MSHCNN <cit.> leverages convolutions across different dimensions to distinctly extract temporal and spatial features from EEG data. In addition, several other recent models such as TSFCNet <cit.>, Speech2EEG <cit.>, EISATC-Fusion <cit.>, FSA-TSP <cit.>, and FTCN <cit.> have also been used for the evaluation.
§.§ Head-to-head Comparison Results
Table <ref> lists the comparison results of different algorithms applied to Dataset I. Specifically, the proposed Dual-TSST outperformed the existing SOTA methods in terms of overall average classification accuracy and the Kappa metric, notably for subjects S1, S5, S6, and S7. In particular, compared with classical EEG decoding techniques like ConvNet and EEGNet, the average accuracy of the current model improved by 8.14% (p < 0.05) and 6.17% (p < 0.05), respectively, with an obvious corresponding rise in Kappa values. These results underscore Dual-TSST's enhanced capability for global feature extraction, as opposed to the local feature focus of ConvNet and EEGNet. Moreover, for most test subjects, Dual-TSST achieved superior results over the FBCSP-inspired FBCNet and domain adaptation methods such as DRDA and DAFS (p < 0.05), although it was slightly inferior on S2 and S4. Compared to models that introduce attention mechanisms into deep learning networks, such as Conformer, ADFCNN, and M-FANet, the developed Dual-TSST also showed better performance on most subjects' accuracy, the average classification accuracy, and the Kappa value. Among all the compared methods, the SHNN model excels on subjects S8 and S9 while being less accurate on S6. Overall, our proposed Dual-TSST framework delivers varied improvements in classification accuracy across the subjects of Dataset I, and leads in terms of average accuracy and the Kappa metric.
For the binary classification Dataset II, as seen from Table <ref>, several additional models have been included in the evaluation. Consequently, effects similar to those on Dataset I are observed: Dual-TSST not only surpasses conventional deep learning models such as ConvNet and EEGNet but also significantly outperforms other advanced methods like DRDA, SHNN, Conformer, and ADFCNN in almost all metrics (p < 0.05). In head-to-head comparisons with other leading techniques, i.e., MANN, TSFCNet, MSHCNN, and EISATC-Fusion, Dual-TSST consistently achieved superior average accuracy and Kappa values. Furthermore, the standard deviation of the proposed Dual-TSST in Table <ref> is 9.17, which is relatively low compared to most of the compared methods. This result underscores the model's robust generalization ability to deliver consistently strong results for diverse subjects.
To further evaluate the robustness and generalization ability of the model, we extended our analysis to the challenging emotion dataset SEED (Dataset III), which presents a different type of task and requires the model to adapt to new patterns. As listed in Table <ref>, the model continues to outperform the traditional machine learning algorithms and the majority of the compared SOTA methods, indicating a commendable level of adaptability of the designed model, which effectively captures and interprets complex patterns associated with widely used EEG paradigms.
§.§ Parameter Sensitivity
Obviously, for DL models, the internal hyper-parameters of the network significantly affect its performance. The critical hyperparameters of our constructed model mainly include the dimensionality used for channel fusion and upscaling through pointwise convolution, the number of Transformer encoder layers and Transformer Heads.
First, the pointwise dimension refers to the parameter D_2, which is used in Dual-TSST for feature fusion and dimensionality increase through pointwise convolution. To study the effect of this parameter, a range of [40, 160] with an interval of 10 was examined, and Fig. <ref> gives the resulting average accuracy. As shown in Fig. <ref>, with an increase in D_2, the accuracy on Dataset I shows an overall trend of first decreasing, then rising, and finally fluctuating mildly. Similarly, the average accuracy on Dataset II initially increases with D_2 and then oscillates. Interestingly, the optimal value for both is D_2 = 120, which avoids the complexity associated with too high a dimensionality while effectively enhancing the model's expressive capability.
The number of Transformer layers refers to the number of stacked Transformer encoders, which essentially defines the depth of the model and determines the complexity and hierarchy of the information the model can learn. Generally, deeper models enhance representational ability and fit the data better. However, as the number of layers increases, issues such as overfitting and exploding gradients may occur, along with increased computational cost. The accuracy trends with the number of Transformer layers are illustrated in Fig. <ref>, where we see that the introduction of the Transformer (from zero to one layer) leads to a marked performance improvement. Besides, while initial increases in depth (i.e., from 0 to 4 layers) enhance performance on both datasets, the accuracy begins to decline after the fourth layer. This may indicate that the model has reached its learning saturation or is beginning to overfit noise in the data. For Dataset I, the peak accuracy of Dual-TSST exceeded the lowest by 4.36% (p < 0.01), and for Dataset II the corresponding value is 2.03% (p < 0.05). These results suggest that while increasing the number of layers can enhance performance up to a certain limit, excessive depth may hinder training and increase the risk of overfitting.
In the Transformer model, each head can be seen as an independent self-attention mechanism, and multi-head attention allows the model to attend to different semantic information concurrently, thereby capturing diverse relationships and features in the input sequence. More specifically, each head learns different weights to better encode information in various contexts, enhancing the richness and expressive power of the representation. However, too many heads can also lead to overfitting or increased computational complexity. In this study, the influence of the number of heads was examined, with the results depicted in Fig. <ref>. Note that, since the accuracy of Subject 2 of Dataset II differs markedly from the others, it is listed separately.
As illustrated in Fig. <ref>, the average accuracy on Dataset I varies noticeably with the number of heads, while on Dataset II the fluctuation is smaller. Overall, the highest accuracy for both Dataset I and Dataset II is achieved with 10 heads, an improvement over the lowest accuracy of 2.97% (p < 0.05) and 0.61% (p < 0.05), respectively. Since the increase is not substantial, we conclude that changes in the number of Transformer heads do not significantly impact model performance.
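For readers who wish to reproduce the depth and head sweeps, the fragment below shows how the two hyperparameters map onto a standard PyTorch encoder stack. It is a hedged sketch only: the embedding dimension (set to 120 to match D_2), the dropout rate, and the sequence layout are assumptions, not values taken from the Dual-TSST code.

```python
import torch
import torch.nn as nn

def build_encoder(d_model: int = 120, n_heads: int = 10, n_layers: int = 4) -> nn.TransformerEncoder:
    """Stack `n_layers` encoder blocks, each using `n_heads` attention heads."""
    layer = nn.TransformerEncoderLayer(
        d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
        dropout=0.5, batch_first=True,
    )
    return nn.TransformerEncoder(layer, num_layers=n_layers)

# Sweep the two hyperparameters in the spirit of the sensitivity study.
tokens = torch.randn(16, 60, 120)            # (batch, sequence length, feature dim)
for n_layers in (1, 4, 6):                   # depth of the encoder stack
    for n_heads in (4, 10):                  # d_model must be divisible by n_heads
        out = build_encoder(n_layers=n_layers, n_heads=n_heads)(tokens)
        assert out.shape == tokens.shape     # self-attention preserves the token shape
```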
§.§ Ablation Experiment
The Dual-TSST model comprises multiple modules, and data augmentation is incorporated into the proposed framework. To determine the specific contribution of each functional module, ablation experiments were conducted on both Dataset I and Dataset II to assess the impact of data augmentation, the Transformer module, the different branches, and the various inputs.
Initially, we conducted ablation experiments on the data augmentation and Transformer modules. As illustrated in Fig. <ref>, removing the Transformer module leads to an obvious decrease in accuracy for most individual subjects and in the average results on both datasets. However, an increase in performance is observed for Subject 7 of Dataset II, possibly indicating overfitting when the module is used. Overall, across the two datasets, reintegrating the Transformer improved the average accuracy by 8.41% (p < 0.01) and 4.68% (p < 0.05), respectively, underscoring its critical role in boosting accuracy.
Data augmentation is intended to expand the data scale, help the model capture more complex patterns, and mitigate overfitting, although it also introduces additional variability and disturbances. Across Datasets I and II, applying the augmentation strategy led to increases of 5.21% (p < 0.05) and 2.37% (p < 0.01) in average accuracy, respectively, indicating that this module significantly enhances model performance.
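The excerpt does not specify which augmentation is applied, so the snippet below should be read as an assumption-laden illustration of one strategy commonly used in EEG decoding, segmentation-and-recombination of same-class trials, rather than as the paper's exact procedure.

```python
import numpy as np

def segment_recombine(trials, n_segments=8, n_new=100, seed=0):
    """Create surrogate trials by cutting same-class trials into temporal
    segments and stitching randomly chosen segments back together."""
    rng = np.random.default_rng(seed)
    n_trials, n_channels, n_times = trials.shape
    seg_len = n_times // n_segments
    new_trials = np.empty((n_new, n_channels, seg_len * n_segments))
    for i in range(n_new):
        donors = rng.integers(0, n_trials, size=n_segments)  # one donor trial per segment
        for s, d in enumerate(donors):
            new_trials[i, :, s * seg_len:(s + 1) * seg_len] = \
                trials[d, :, s * seg_len:(s + 1) * seg_len]
    return new_trials

# Toy usage: augment 20 same-class trials (22 channels, 1000 samples) to 100 surrogates.
augmented = segment_recombine(np.random.randn(20, 22, 1000))
print(augmented.shape)  # (100, 22, 1000)
```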
We further conducted experiments removing Branch I or Branch II (with Input 1 or Input 2), with the results of the remaining configurations reported in Fig. <ref>. Removing Branch I (i.e., keeping only Branch II) significantly degraded the overall performance on both datasets (p < 0.05), because Branch I provides the majority of the temporal features to the model. Removing the input of Branch II also had an impact, but not as pronounced as that of Branch I. Overall, on both datasets, using two branches was superior to using just one, and within Branch II, using two inputs also improved over using a single input. In addition, the tighter error-bar range of the model with all branches implies enhanced robustness.
§.§ Visualization
To further demonstrate the effectiveness of the designed branches and self-attention mechanism, a comparative study of low-dimensional visualizations using t-SNE <cit.> was conducted for one typical subject (Subject 7 of Dataset I). Fig. <ref> reports the results with and without the prominent components (e.g., Branch I or Branch II of the feature extraction part, and the Transformer module). Specifically, for the test data in Fig. <ref>(a), with Branch II only, the features of the considered categories are closely mixed. In contrast, as shown in (b) and (c), the distance between classes becomes larger with the help of Branch I, even when only part of Branch II is involved. The inter-category separation is most evident with all developed branches, illustrating the capacity of our model.
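As a rough guide to how such plots are produced, the following snippet projects a feature matrix with scikit-learn's t-SNE. The feature matrix here is a random stand-in for the extracted deep features, and the perplexity value is an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features: np.ndarray, labels: np.ndarray, title: str) -> None:
    """2-D t-SNE projection of (n_samples, n_features) deep features."""
    emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=42).fit_transform(features)
    for c in np.unique(labels):
        m = labels == c
        plt.scatter(emb[m, 0], emb[m, 1], s=8, label=f"class {c}")
    plt.title(title)
    plt.legend()
    plt.show()

# Toy usage with random stand-ins for the features of one subject.
feats = np.random.randn(288, 120)        # e.g. 288 trials, 120-dimensional features
labels = np.repeat(np.arange(4), 72)     # four motor-imagery classes
plot_tsne(feats, labels, "t-SNE of learned features (illustrative)")
```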
Moreover, as seen in Fig. <ref>(e), without the Transformer the t-SNE visualization of the training set reveals several well-separated clusters, indicating a clear distribution of categories in the low-dimensional space. However, on the testing set the model exhibits a significant reduction in category separation (see Fig. <ref>(f)). In particular, substantial overlap is observed between the feet and tongue features and between the left- and right-hand features. This overlap indicates that, while the model efficiently learns the properties of each category on the training data, it generalizes poorly to unknown data and fails to discriminate between comparable classes. Conversely, introducing the Transformer results in a dramatic improvement. In the t-SNE visualization, the training set displays highly distinct and well-separated clusters, with each category occupying a clear, even non-overlapping, region of the low-dimensional space. This implies that the Transformer module significantly improves the model's capacity to describe diverse properties, resulting in a better-defined distribution of categories. Importantly, the t-SNE visualization of the test set also exhibits considerable improvement, with the distinctions between hand features and other categories becoming more prominent. Especially for categories prone to confusion (such as left and right hand), the Transformer module significantly lowers the overlap, highlighting its vital role in strengthening the model's generalization performance and its capacity to differentiate between comparable categories.
To further exhibit the impact of the integrated Transformer module, confusion matrices were used to present the classification performance across the specific categories. For each dataset, the results of one subject with and without this component are depicted in Fig. <ref>. The results clearly demonstrate that the model without the Transformer module faces considerable challenges in capturing discriminative features. For instance, the confusion matrix reveals that 23.61% of left-hand features were erroneously classified as right-hand, and a notable 19.44% of tongue features were recognized as feet (see Fig. <ref>(a)). Such results suggest that the model struggles to discern subtle feature differences, which may lead to inadequate generalization, particularly for categories with similar features. In contrast, upon incorporating the Transformer, a marked improvement in classification capability is observed (the corresponding misclassification rates drop to only 5.56% and 6.94%). Likewise, for the hand-imagery recognition in Fig. <ref>(c) and (d), after introducing the Transformer the model achieves a particularly satisfactory classification accuracy of 98.75% for Subject 4 of Dataset II, a notable increase of nearly 5%. These improvements suggest that the Transformer module bolsters the model's feature extraction capability and enhances its generalization ability and robustness across different datasets.
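For completeness, confusion matrices such as those in Fig. <ref> can be computed directly from predicted and true labels; the generic scikit-learn sketch below (with simulated predictions) stands in for the authors' plotting code.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

def plot_confusion(y_true: np.ndarray, y_pred: np.ndarray, class_names: list) -> np.ndarray:
    """Row-normalised confusion matrix (percentage per true class)."""
    cm = confusion_matrix(y_true, y_pred, normalize="true") * 100
    ConfusionMatrixDisplay(cm, display_labels=class_names).plot(values_format=".2f")
    plt.show()
    return cm

# Toy usage for the four-class motor-imagery task (simulated ~85% correct predictions).
rng = np.random.default_rng(0)
y_true = np.repeat(np.arange(4), 72)
y_pred = np.where(rng.random(288) < 0.85, y_true, rng.integers(0, 4, 288))
plot_confusion(y_true, y_pred, ["left hand", "right hand", "feet", "tongue"])
```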
§ DISCUSSION AND CONCLUSION
The statistical distribution of non-stationary EEG data varies across subjects and recording sessions, making it challenging for BCI researchers to design a classifier with high accuracy and generalization capability. Borrowing the machine-learning processing flow of feature extraction, feature fusion, and classification, this study proposes a novel and efficient DL framework that fully integrates CNNs and the Transformer to handle EEG signal processing.
The proposed Dual-TSST model first leverages a dual-branch CNN structure, which accepts data from diverse perspectives, to extract a comprehensive representation of the entailed features. In this configuration, Branch I is tasked with extracting spatio-temporal features from the raw EEG, while Branch II handles the spatio-temporal-frequency features of the wavelet-transformed data. The Transformer module further explores long-range global dependencies and synthesizes all the diverse features into a cohesive feature set, which is finally classified by the classification module. For the proposed Dual-TSST framework, only minimal yet critical preprocessing with band-pass filtering, as in <cit.>, is applied to the EEG signals, avoiding sophisticated dataset-specific preprocessing steps. In essence, the proposed DL model does not require extra expert knowledge but automatically extracts comprehensive spatio-temporal-frequency features, which is conducive to identification from multifaceted data sources.
Experimentally, the framework was evaluated on the two BCI Competition IV datasets 2a and 2b, and on the widely used emotional dataset SEED, where it achieved superior performance compared with state-of-the-art methods. Extensive parameter-sensitivity and ablation studies affirm that each component contributes significantly to the model's effectiveness, particularly highlighting the substantial impact of the pointwise dimension and the Transformer. In particular, the number of Transformer layers, also termed the depth in related studies, directly influences the classification result, highlighting the importance of this module. More importantly, the specific results of Fig. 5 offer instructive suggestions for the layer configuration of future Transformer-based EEG decoders. Conversely, the number of heads in the multi-head attention showed only a marginal impact on the final performance, and such insensitivity may aid the lightweight iterative design of future models. Moreover, the effects of the individual branches and of data augmentation have also been presented intuitively, clarifying the rationale of the developed framework.
Although the approach proposed in this study boosts the model's capability to extract more discriminative EEG features, it still has several limitations. First, a notable limitation of the current model is its structural complexity. The majority of its parameters originate from the comprehensive Transformer module and the fully connected classification layers, which coincides with prior findings for EEG Conformer <cit.>. Although depthwise separable convolutions were applied to mitigate this issue, the current model still maintains a relatively high parameter count. Second, as with any deep learning model, the number of training samples is expected to be sufficiently large. While data augmentation strategies can be strategically adopted, as in the current work, the quest for more effective source data should continue <cit.>. Since it is expensive, burdensome, and often impractical to collect large amounts of recording data, advanced methods such as transfer-learning-based domain adaptation, which exploits knowledge from source subjects to improve the performance on a target subject <cit.>, ought to be applied to all accessible data. Moreover, only subject-specific experiments were conducted in this study; additional cross-subject validation should be pursued to further investigate the generalizability of the model <cit.>.
To sum up, the developed architecture leverages the distinct strengths of the related data types to enhance the accuracy and robustness of the decoding process, while also improving network interpretability by following an ML-style processing flow. Moving forward, our objectives will focus on optimizing the model's architecture and reducing its parameter footprint, alongside exploring potential online applications.
50
1
J. R. Wolpaw et al., “Brain-computer interface technology: a review of the first international meeting,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 8, no. 2, pp. 164-173, Jun. 2000.
2
H. Li, L. Bi, X. Li, and H. Gan, “Robust predictive control for EEG-based brain–robot teleoperation,” IEEE Trans. Intell. Transp. Syst., vol. 25, no. 8, pp. 9130-9140, Aug. 2024.
3
H. Li, L. Bi, and J. Yi, “Sliding-mode nonlinear predictive control of brain-controlled mobile robots,” IEEE Trans. Cybern., vol. 52, no. 6, pp. 5419-5431, Jun. 2022.
4
H. Li, L. Bi, and H. Shi, “Modeling of human operator behavior for brain-actuated mobile robots steering,” IEEE Trans. Neural. Syst. Rehabil. Eng., vol. 28, no. 9, pp. 2063-2072, Sep. 2020.
5
R. Abiri, S. Borhani, E. W. Sellers, Y. Jiang, and X. Zhao, “A comprehensive review of EEG-based brain–computer interface paradigms,” J. Neural Eng., vol. 16, no. 1, Feb. 2019, Art. no. 011001.
6
S. Aggarwal and N. Chugh, “Review of machine learning techniques for EEG based brain computer interface,” Arch. Comput. Method Eng., vol. 29, no. 5, pp. 3001-3020, Aug. 2022.
7
S. Gong, K. Xing, A. Cichocki, and J. Li, “Deep learning in EEG: advance of the last ten-year critical period,” IEEE Trans. Cognit. Develop. Syst., vol. 14, no. 2, pp. 348-365, Jun. 2022.
8
Y. LeCun, Y. Bengio, G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436-444, May. 2015.
9
Z. Li, F. Liu, W. Yang, S. Peng, and J. Zhou, “A survey of convolutional neural networks: analysis, applications, and prospects,” IEEE Trans Neural Netw. Learn. Syst., vol. 33, no. 12, pp. 6999-7019, Dec. 2022.
10
W. Rawat and Z. Wang, “Deep convolutional neural networks for image classification: A comprehensive review,” Neural Comput., vol. 29, no. 9, pp. 2352-2449, Sep. 2017.
11
D. W. Otter, J. R. Medina, and J. K. Kalita, “A survey of the usages of deep learning for natural language processing,” IEEE Trans Neural Netw. Learn. Syst., vol. 32, no. 2, pp. 604-624, Feb. 2021.
12
R. T. Schirrmeister et al., “Deep learning with convolutional neural networks for EEG decoding and visualization,” Human. Brain Mapp., vol. 38, no. 11, pp. 5391-5420, Aug. 2017.
13
V. J Lawhern et al., “EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces,” J. Neural Eng., vol. 15, no. 5, Jul. 2018, Art. no. 056013.
14
S. Tortora et al., “Deep learning-based BCI for gait decoding from EEG with LSTM recurrent neural network,” J. Neural Eng., vol. 17, no. 4, Jul. 2020, Art. no. 046011.
15
J. Sun, J. Xie, and H. Zhou, “EEG classification with transformer-based models,” in Proc. IEEE 3rd Glob. Conf. Life Sci. Technol., (LifeTech)., pp. 92-93, 2021.
16
Y. Song, Q. Zheng, B. Liu, and X. Gao, “EEG conformer: convolutional transformer for EEG decoding and visualization,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 31, pp. 710-719, Dec. 2023.
17
W. Tao et al., “ADFCNN: attention-based dual-scale fusion convolu- tional neural network for motor imagery brain–computer interface,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 32, pp. 154-165. 2024.
18
A. Arjun, A. S. Rajpoot, and M. Raveendranatha Panicker, “Introducing attention mechanism for EEG signals: emotion recognition with vision transformers,” in Proc. 43rd Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC)., pp. 5723-5726, Nov. 2021.
19
M. S. Al-Quraishi et al., “Decoding the user’s movements preparation from EEG signals using vision transformer architecture,” IEEE Access, vol. 10, pp. 109446-109459, Oct. 2022.
20
M. A. Mulkey et al., “Supervised deep learning with vision transformer predicts delirium using limited lead EEG,” Sci. Rep., vol. 13, no. 1, May. 2023, Art. no. 7890.
21
A. Nogales et al., “BERT learns from electroencephalograms about Parkinson’s disease: transformer-based models for aid diagnosis,” IEEE Access, vol. 10, pp. 101672-101682, Jan. 2022.
22
B. Wang, X. Fu, Y. Lan, L. Zhang, and Y. Xiang, “Large transformers are better EEG learners,” arXiv: 2308.11654.
23
J. Zhou, Y. Duan, Y. Zou, Y. -C. Chang, Y. -K. Wang, and C. -T. Lin, “Speech2EEG: leveraging pretrained speech model for EEG signal recognition,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 31, pp. 2140-2153, Apr. 2023.
24
X. Tian et al., “Deep multi-view feature learning for EEG-based epileptic seizure detection,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 27, no. 10, pp. 1962-1972, Oct. 2019.
25
R. Mane et al., “A multi-view CNN with novel variance layer for motor imagery brain computer interface,” in Proc. 42nd Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC)., pp. 2950-2953, Jul. 2020.
26
H. Zhi, Z. Yu, T. Yu, Z. Gu, and J. Yang, “A multi-domain convolutional neural network for EEG-based motor imagery decoding,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 31, pp. 3988-3998, Oct. 2023.
27
G. Liang, D. Cao, J. Wang, Z. Zhang, and Y. Wu, “EISATC-fusion: inception self-attention temporal convolutional network fusion for motor imagery EEG decoding,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 32, pp. 1535-1545, Mar. 2024.
28
Y. Qin, B. Yang, S. Ke, P. Liu, F. Rong, and X. Xia, “M-FANet: multi-feature attention convolutional neural network for motor imagery decoding,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 32, pp. 401-411, Jan. 2024.
29
C. Liu et al., “SincNet-based hybrid neural network for motor imagery EEG decoding,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 30, pp. 540-549, Mar. 2022.
30
M. X. Cohen, “A better way to define and describe Morlet wavelets for time-frequency analysis,” NeuroImage, vol. 199, pp. 81-86, Oct. 2019.
31
F. Lotte, “Signal processing approaches to minimize or suppress calibration time in oscillatory activity-based brain–computer interfaces,” Proc IEEE, vol. 103, no. 6, pp. 871-890, Jun. 2015.
32
M. Tangermann et al., “Review of the BCI competition IV,” Front. Neurosci., vol. 6, p.55, Jul. 2012.
33
V. Jayaram and A. Barachant, “MOABB: Trustworthy algorithm benchmarking for BCIs,” J. Neural Eng., vol. 15, no. 6, Dec. 2018, Art. no. 066011.
34
W. L. Zheng and B. L. Lu, “Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks,” IEEE Trans. Auton. Mental Develop., vol. 7, no. 3, pp. 162-175, Sep. 2015.
35
I. Loshchilov and F. Hutter, “SGDR: Stochastic gradient descent with warm restarts,” 2016, arXiv:1608.03983.
36
R. Mane, et al. “FBCNet: A multi-view convolutional neural network for brain-computer interface,” 2021, arXiv:2104.01233.
37
H. Zhao, Q. Zheng, K. Ma, H. Li, and Y. Zheng, “Deep representation-based domain adaptation for nonstationary EEG classification,” IEEE Trans Neural Netw. Learn. Syst., vol. 32, no. 2, pp. 535-545, Feb. 2021.
38
C. Phunruangsakao, D. Achanccaray, and M. Hayashibe, “Deep adversarial domain adaptation with few-shot learning for motor-imagery brain-computer interface,” IEEE Access, vol. 10, pp. 57255-57265, Jan. 2022.
39
A. Salami, J. Andreu-Perez, and H. Gillmeister, “EEG-ITNet: an explainable inception temporal convolutional network for motor imagery classification,” IEEE Access, vol. 10, pp. 36672-36685, Apr. 2022.
40
J. Wang, L. Yao, and Y. Wang, “IFNet: An interactive frequency convolutional neural network for enhancing motor imagery decoding from EEG,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 31, pp. 1900-1911, Jan. 2023.
41
P. Chen, Z. Gao, M. Yin, J. Wu, K. Ma, and C. Grebogi, “Multiattention adaptation network for motor imagery recognition,” IEEE Trans. Syst. Man, Cybern. Syst., vol. 52, no. 8, pp. 5127-5139, Aug. 2022.
42
X. Tang, C. Yang, X. Sun, M. Zou, and H. Wang, “Motor imagery EEG decoding based on multi-scale hybrid networks and feature enhance¬ment,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 31, pp. 1208-1218, Feb. 2023.
43
M. Jiménez-Guarneros and G. Fuentes-Pineda, “Cross-subject EEG- based emotion recognition via semisupervised multisource joint distribu¬tion adaptation,” IEEE Trans. Instrum. Meas., vol. 72, pp. 1–12, 2023.
44
Y. Li et al., “A novel Bi-hemispheric discrepancy model for EEG emotion recognition,” IEEE Trans. Cogn. Develop. Syst., vol. 13, no. 2, pp. 354-367, 2021.
45
P. Zhong, D. Wang and C. Miao, “EEG-based emotion recognition using regularized graph neural networks,” IEEE Trans. Affect. Comput., vol. 13, no. 3, pp. 1290-1301, 2022.
46
L. Yang et al., “Electroencephalogram-based emotion recognition using factorization temporal separable convolution network,” Eng. Appl. Artif. Intell., vol. 133, 2024, Art. no. 108011.
47
J. Liu et al., “Spatial-temporal transformers for EEG emotion recognition,” in Proc. Int. Conf. Adv. Artif. Intell., 2022, pp. 116-120.
48
L. Van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res., vol. 9, no. 11, pp. 2579-2605, 2008.
|
http://arxiv.org/abs/2409.02361v1 | 20240904011404 | Diversify-verify-adapt: Efficient and Robust Retrieval-Augmented Ambiguous Question Answering | [
"Yeonjun In",
"Sungchul Kim",
"Ryan A. Rossi",
"Md Mehrab Tanjim",
"Tong Yu",
"Ritwik Sinha",
"Chanyoung Park"
] | cs.CL | [
"cs.CL"
] |
Pluralistic Salient Object Detection
Xuelu Feng, Yunsheng Li, Dongdong Chen, Chunming Qiao, Fellow, IEEE, Junsong Yuan, Fellow, IEEE, Lu Yuan, and Gang Hua, Fellow, IEEE
Xuelu Feng, Chunming Qiao, Junsong Yuan are with the Department of Computer Science and Engineering, University at Buffalo, USA (e-mail: [email protected]; [email protected]; [email protected]).
Yunsheng Li, Dongdong Chen, Lu Yuan are with Microsoft GenAI, USA (e-mail: [email protected]; [email protected]; [email protected])
Gang Hua is with Dolby Laboratories, USA (e-mail: [email protected]).
September 9, 2024
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
The retrieval augmented generation (RAG) framework addresses an ambiguity in user queries in QA systems by retrieving passages that cover all plausible interpretations and generating comprehensive responses based on the passages. However, our preliminary studies reveal that a single retrieval process often suffers from low-quality results, as the retrieved passages frequently fail to capture all plausible interpretations. Although the iterative RAG approach has been proposed to address this problem, it comes at the cost of significantly reduced efficiency.
To address these issues, we propose the diversify-verify-adapt framework. It first diversifies the retrieved passages to encompass diverse interpretations. Subsequently, it verifies the quality of the passages and adapts the most suitable generation strategy to that quality.
This approach improves the accuracy and robustness of QA systems by handling the low-quality retrieval issue for ambiguous questions, while also enhancing efficiency.
§ INTRODUCTION
Open-domain question answering (QA) systems are designed to provide users with factually accurate responses across various domains. In these systems, user queries often pose an ambiguity issue, requiring the response encompassing answers from multiple interpretations; in fact, over 50% of Google search queries fall into this category <cit.>. Such ambiguous questions pose a significant challenge to QA systems as they must accurately determine user intentions, which critically affects the user experience. It is therefore crucial for QA systems to offer answers that encompass all potential interpretations of ambiguous questions.
Despite the crucial importance of addressing ambiguous questions in real-world applications, this area remains insufficiently explored compared to the extensive research focused on unambiguous questions <cit.>. In this work, we address this gap by delving into the more complex scenarios of ambiguous QA to better handle such questions.
Retrieval-augmented generation (RAG) framework has made significant progress in open-domain QA tasks <cit.> and also proven to be an effective solution for addressing ambiguous questions <cit.>. Specifically, these approaches first retrieve passages on the given question and prompt the LLM to extract plausible interpretations and answers relying on the passages (c.f. Fig <ref>(a)).
Despite the success of the RAG framework on the ambiguous QA task, we should rethink:
Is a single retrieval process sufficient to retrieve passages encompassing all plausible interpretations? To answer this question, we conduct preliminary experiments (cf. Sec <ref>) on the quality of the retrieved passages used in the RAG framework. We observe that the passages obtained from a single retrieval process often suffer from low quality with respect to addressing ambiguous questions. In other words, the retrieved passages often partially or completely fail to cover all plausible interpretations, leading to significant degradation in factual accuracy.
To address this issue, the iterative RAG approach, ToC <cit.>, has been introduced (c.f. Fig <ref>(b)) to further explore other interpretations that can not be covered by the single retrieval process. Specifically,
to further explore missing interpretations, the interpretations extracted in the previous iteration are utilized as queries to retrieve new passages, from which additional interpretations are then extracted. This exploration process is repeated multiple times, encompassing more diverse interpretations and corresponding answers. However, we argue that this effectiveness comes with a significant increase in computational overhead due to the iterative passage retrieval and LLM reasoning. In our experiments, this method requires an average of 5.5 exploration steps per query. As shown in Figure <ref>, Iterative RAG (i.e., ToC) significantly outperforms the vanilla RAG approach in terms of factual accuracy, but at the cost of greatly reduced efficiency, with notable increases in both inference time and API call costs.
To this end, we introduce an efficient and robust RAG framework for ambiguous QA, referred to as diversify-verify-adapt. It comprises two key components that efficiently address the low-quality retrieval issue: 1) Retrieval Diversification (RD) and 2) Adaptive Generation (AG). The key idea of RD is to infer pseudo-interpretations of a question and use them to retrieve a set of passages that broadly cover these interpretations, thus enhancing retrieval quality without any iterative interpretation-exploration process. To further enhance the robustness of this framework, we propose an adaptive generation (AG) method. The key idea of AG is to carefully verify the overall quality of the passages retrieved by RD before indiscriminately incorporating them. More specifically, we define a new criterion of quality levels tailored to ambiguous questions, according to whether the passages fully cover, partially cover, or do not cover the plausible interpretations. Subsequently, AG adapts the most suitable approach, relying either on the retrieved passages or on the LLM's internal knowledge, tailored to the specific quality level of the passages.
Experiments demonstrate that the proposed RD method efficiently diversifies the retrieval process to obtain passages covering diverse interpretations, thereby enhancing both QA and retrieval accuracy. Additionally, the proposed AG method successfully discriminates low-quality passages, improving QA performance. Consequently, the full framework outperforms existing baselines on ASQA <cit.> and SituatedQA <cit.> across various LLM backbones in a few-shot setup, achieving superior accuracy and efficiency. The key contributions of this work are as follows:
* To the best of our knowledge, this paper is the first attempt to investigate the practical limitations of the existing RAG frameworks when applied to ambiguous QA task: low quality retrieval and inefficiency.
* We propose an efficient and robust RAG framework that efficiently retrieves diverse passages, verifies their quality, and adapts the most suitable approach tailored to each retrieval quality.
* Our framework consistently outperforms state-of-the-art RAG approaches on the ambiguous QA task, while being significantly more efficient (nearly 1.5 to 3 times faster response generation).
§ PRELIMINARY EXPERIMENTS
We investigate the quality of retrieved passages and their impact on the performance of the RAG framework (as in Fig <ref>(a)) in ambiguous QA task.
Experimental Details.
We utilize the recent ambiguous QA dataset ASQA <cit.>. We classify the quality of retrieved passages into three labels: 1) Fully Cover, 2) Partially Cover, and 3) Not Cover. Fully Cover indicates that the retrieved passages encompass all plausible interpretations, Not Cover indicates that the retrieved passages contain none of them, and Partially Cover covers the remaining cases. We obtain these labels for each question by computing a string exact match between the set of retrieved passages and all plausible answers provided in ASQA as ground truth. For implementation details of passage retrieval, see Appendix <ref>.
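Concretely, the labelling rule can be expressed in a few lines. The sketch below is a simplified version that uses case-insensitive substring matching over the concatenated passages and assumes one answer string per interpretation; the actual evaluation may normalise strings differently.

```python
def coverage_label(passages: list, plausible_answers: list) -> str:
    """Classify retrieval quality for one ambiguous question via exact string matching."""
    text = " ".join(passages).lower()
    hits = sum(ans.lower() in text for ans in plausible_answers)
    if hits == len(plausible_answers):
        return "Fully Cover"
    if hits == 0:
        return "Not Cover"
    return "Partially Cover"

# Toy usage.
passages = ["Rupert Grint portrayed Ron Weasley in the film series.",
            "The twins Fred and George were played by James and Oliver Phelps."]
answers = ["Rupert Grint", "James Phelps", "Oliver Phelps", "Chris Rankin"]
print(coverage_label(passages, answers))  # Partially Cover
```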
Results.
In Fig <ref>(a), we observe that for only 34.6% of questions (i.e., Fully Cover) the retriever successfully retrieves passages that cover all plausible interpretations. Additionally, for 15.7% of questions (i.e., Not Cover) the retriever fails to retrieve any relevant passages.
More critically, as shown in Fig <ref>(b), the performance of the RAG framework (i.e., RAG in the figure)
significantly deteriorates in terms of factual accuracy (i.e., D-F1) when the retrieved passages are of low quality (i.e., Partially Cover and Not Cover), indicating that it is highly susceptible to noise and irrelevant information in ambiguous QA.
This observation raises a follow-up question: How can we handle cases where the retrieved passages do not fully cover the plausible answers? To address this issue, we conducted another experiment that compares the effectiveness of LLM's internal knowledge and provided passages for different cases, respectively. We observe that when the retrieved passages do not contain any of the plausible interpretations (i.e., Not Cover), the closed-book LLM (i.e., LLM in the figure) significantly outperforms the RAG framework. This suggests that QA performance benefits more from relying on the LLMs' internal knowledge rather than on external passages containing entirely irrelevant information.
In short, while the quality of retrieval is crucial for the performance of the RAG framework in ambiguous QA, existing works have largely overlooked this critical issue, which notably diminishes their practical applicability.
§ PROPOSED METHOD
Based on these findings, we propose an efficient and robust RAG framework for ambiguous QA, diversify-verify-adapt.
This framework comprises two key components: Retrieval Diversification (Sec <ref>) and Adaptive Generation (Sec <ref>). The retrieval diversification method aims to efficiently diversify the retrieved passages to encompass diverse interpretations. Subsequently, the adaptive generation method verifies the quality of the passages and adapts the most suitable approach to that quality. Fig <ref>(c) and Algorithm <ref> show the overview and the inference algorithm of the proposed framework, respectively.
§.§ Problem Formulation
Given an ambiguous question q_i, the goal of the proposed RAG framework is to generate a comprehensive response r_i that encompasses all plausible answers 𝒜_i= { a_i,1,...,a_i,M} of the interpretations 𝒬_i={ q_i,1,...,q_i,M} based on the retrieved passages 𝒫_i = { p_i,1, ..., p_i, K}, where M and K indicate the number of plausible answers and passages, respectively. Specifically, given the 𝒫_i ideally contains all 𝒬_i and 𝒜_i, an LLM is first prompted with the question and the relevant passages to extract all plausible interpretations and their corresponding answers, formally represented as follows:
𝒬_i, 𝒜_i ← LLM(q_i, 𝒫_i, I_e),
where I_e is a text prompt for extracting 𝒬_i and 𝒜_i from the 𝒫_i. Subsequently, based on the 𝒬_i and 𝒜_i, the LLM is prompted to consolidate them with q_i and 𝒫_i to generate a response r_i, formally represented as follows:
r_i ← LLM(𝒬_i, 𝒜_i, 𝒫_i, q_i, I_g).
For the prompts I_e and I_g, we start with that of <cit.> and modify it for our setup (see Table <ref> and Table <ref> in Appendix <ref>).
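A minimal sketch of the two-stage generation above is given below. The `call_llm` function and the prompt strings are placeholders standing in for the actual prompts I_e and I_g listed in the appendix tables; they are not the verbatim prompts of the paper.

```python
from typing import Callable, List

def rag_answer(question: str, passages: List[str],
               call_llm: Callable[[str], str]) -> str:
    """Two-stage generation: extract interpretations/answers, then consolidate."""
    ctx = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))

    # First stage: extract plausible interpretations and their answers (prompt I_e).
    extraction_prompt = (
        "Given the ambiguous question and the passages, list every plausible "
        f"interpretation and its answer.\n\nQuestion: {question}\n\nPassages:\n{ctx}"
    )
    interpretations = call_llm(extraction_prompt)

    # Second stage: consolidate into one long-form response (prompt I_g).
    generation_prompt = (
        "Write a comprehensive answer that covers all interpretations below.\n\n"
        f"Question: {question}\n\nInterpretations and answers:\n{interpretations}\n\n"
        f"Passages:\n{ctx}"
    )
    return call_llm(generation_prompt)
```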
§.§ Retrieval Diversification (RD)
In this section, we propose a novel retrieval diversification (RD) method aiming to efficiently identify passages 𝒫_i encompassing all plausible answers 𝒜_i of the interpretations 𝒬_i. The key idea of RD is to infer pseudo-interpretations of a question, using them to retrieve a set of passages that maximally cover these interpretations. This approach guarantees the retrieved passages encompass diverse interpretations, without any iterative interpretation exploration process of <cit.>, leading to the generated response r_i covering all 𝒜_i.
Inferring Pseudo-Interpretations.
To infer pseudo-interpretations 𝒬̂_i={q̂_i, 1, q̂_i, 2, ... }, each of which related to a true plausible answer of 𝒜_i, we draw inspiration from a human's reasoning chain inferring multiple interpretations of a question. Given an ambiguous question, a human would first identify the ambiguous part of the question and then determine the reason for the ambiguity, followed by inferring multiple interpretations of the question. For example, given the question "Who played the Weasley brothers in Harry Potter?", the ambiguous part is the object of the question, "Weasley brothers," and the corresponding reason is that "It can refer to multiple characters such as Ron, Percy, and so on." Consequently, a human would generate "Who played Ron Weasley in Harry Potter?", "Who played Percy Weasley in Harry Potter?", etc.
To mimic this reasoning chain, we leverage the LLM's powerful reasoning ability to identify the ambiguous part of the question and the reason for the ambiguity, subsequently, to infer the pseudo-interpretations 𝒬̂_i from the results, formally represented as follows:
𝒬̂_i ← LLM(q_i, I_p, LLM(q_i, I_a)),
where I_p and I_a are carefully designed instructions for each step, respectively. We present a conceptual example of I_a and I_p in Fig <ref> and the full instructions in Table <ref> and <ref> of Appendix <ref>. For LLM(·), we consider GPT-3.5 <cit.> and GPT-4 <cit.>.
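The two-step reasoning chain above can be sketched as follows; again, `call_llm` and the instruction strings are illustrative placeholders for the prompts I_a and I_p, not the actual instructions.

```python
from typing import Callable, List

def infer_pseudo_interpretations(question: str,
                                 call_llm: Callable[[str], str]) -> List[str]:
    """Mimic the human reasoning chain: find the ambiguity, then enumerate interpretations."""
    # Step 1 (I_a): identify the ambiguous part and the reason for the ambiguity.
    ambiguity = call_llm(
        "Identify which part of this question is ambiguous and explain why: " + question
    )
    # Step 2 (I_p): generate disambiguated interpretations conditioned on that analysis.
    raw = call_llm(
        "Ambiguity analysis: " + ambiguity +
        "\nRewrite the question into every plausible unambiguous interpretation, "
        "one per line: " + question
    )
    return [line.strip() for line in raw.splitlines() if line.strip()]
```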
Retrieving Relevant and Diverse Passages. As a first stage retrieval, we obtain the candidate passages 𝒞_i generally relevant to the given question q_i from Wikipedia[We use ColBERT <cit.> and Bing search API as retrievers.].
From the 𝒞_i, we select a set of multiple passages 𝒫_i with maximal coverage of all distinct pseudo-interpretations 𝒬̂_i.
Retrieval for unambiguous questions involves scoring each passage individually based on its relevance to a single interpretation. In contrast, for ambiguous questions we must retrieve a set of passages encompassing multiple interpretations, which makes the problem more challenging. To obtain such a set of passages, we explicitly employ our inferred pseudo-interpretations 𝒬̂_i to retrieve the set of passages 𝒫̃_i that maximally cover these interpretations, formally represented as follows:
𝒫̃_i ←⋃_j=1^|𝒬̂_i|ℛ(𝒞_i, q̂_i, j; K),
where ℛ is a retriever yielding top-K passages from the 𝒞_i by relevance scores to each pseudo interpretation q̂_i, j.
Pruning Noisy Passages. Although this process explicitly enables 𝒫̃_i to encompass all pseudo interpretations, there could be some noisy and irrelevant passages due to the absence of perfect retriever and the noise of the inferred pseudo-interpretations 𝒬̂_i. To this end, we find and prune the passages that are highly likely to be irrelevant and noisy. Our intuition is that 1) noisy passages caused by the imperfect retriever tend to be irrelevant to all pseudo-interpretations and 2) noisy passages caused by noisy pseudo-interpretations tend to be irrelevant to most of the pseudo-interpretations. Based upon this intuition, we measure an averaged relevance of the passage to determine if it is noisy or not. The averaged relevance of a passage 𝒮(p) is calculated as follows:
𝒮(p) ← (1/|𝒬̂_i|) ∑_j=1^|𝒬̂_i| E(q̂_j) · E(p) / ( ||E(q̂_j)|| · ||E(p)|| ),
where E(·) encodes a sentence into a dense vector and p ∈ 𝒫̃_i. We then select the top-K passages from 𝒫̃_i based on these averaged scores as the final passage set 𝒫_i.
Our approach is generic, allowing various sentence embedding models to be used for calculating relevance scores. In line with the sota baseline <cit.>, we employ the frozen SentenceBERT <cit.> for ℛ(·) and E(·) in our implementation.
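Operationally, the retrieval and pruning steps amount to a union of per-interpretation top-K retrievals followed by ranking on the averaged cosine score. The sketch below uses the sentence-transformers library as the frozen encoder; the checkpoint name and K are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed checkpoint; any frozen sentence encoder could be substituted.
enc = SentenceTransformer("sentence-transformers/msmarco-distilbert-base-v4")

def diversified_retrieval(pseudo_qs, candidates, k=5):
    """Union of per-interpretation top-k retrievals, pruned by averaged relevance."""
    q_emb = enc.encode(pseudo_qs, normalize_embeddings=True)    # (Q, d)
    p_emb = enc.encode(candidates, normalize_embeddings=True)   # (P, d)
    sims = q_emb @ p_emb.T                                      # cosine similarities

    # Union of the top-k passages for every pseudo-interpretation.
    pool = set()
    for row in sims:
        pool.update(np.argsort(-row)[:k].tolist())

    # Rank pooled passages by relevance averaged over interpretations,
    # pruning passages that match only few (or no) interpretations.
    ranked = sorted(pool, key=lambda j: -sims[:, j].mean())
    return [candidates[j] for j in ranked[:k]]
```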
§.§ Adaptive Generation (AG)
Despite the effectiveness of the proposed RD method, the retrieved set 𝒫_i may still be of low quality. To further enhance robustness, in this section we propose an adaptive generation method. The key idea of AG is to carefully verify the overall quality of the passages retrieved by RD before indiscriminately incorporating them.
Based on the findings in Section <ref>, if 𝒫_i does not encompass all plausible interpretations 𝒬_i and 𝒜_i, the response generated by the RAG framework is highly likely to be inaccurate. To this end, we introduce an adaptive generation (AG) method that dynamically adjusts the response generation strategy between the RAG framework and the closed-book LLM, achieved by verifying the quality of 𝒫_i before attempting a solution.
Retrieval Verification (RV) To verify the quality of 𝒫_i, we exploit the LLM’s strong natural language understanding ability. The existing works <cit.> verify whether 𝒫_i can sufficiently support answering q_i by prompting or training the LLM to give a proper label V_i (e.g., Yes / No):
V_i ← LLM(q_i, 𝒫_i, I_v),
where I_v is the corresponding instruction. However, the retrieval quality for ambiguous questions should be graded according to how many interpretations are encompassed by the retrieved passages, which cannot be achieved by existing approaches tailored to unambiguous questions. To this end, we define a new criterion of quality levels tailored to ambiguous questions: the passages may fully cover, partially cover, or not cover the plausible interpretations. The first grade indicates that 𝒫_i encompasses all 𝒬_i and 𝒜_i, the last indicates that 𝒫_i contains none of them, and the intermediate grade covers the remaining cases. To determine these grades, we estimate how many interpretations are encompassed by 𝒫_i by explicitly utilizing the pseudo-interpretations 𝒬̂_i:
V_i,1 ← LLM(q̂_i,1, 𝒫_i, I_v)
⋮
V_i,|𝒬̂_i| ← LLM(q̂_i,|𝒬̂_i|, 𝒫_i, I_v),
where each V_i,j is a binary label (i.e., Yes or No). For instance, if all V_i,* are determined to be "Yes", the passages are graded as fully covering the interpretations. We present the full prompt of I_v in Table <ref> in Appendix <ref>. For LLM(·), we consider GPT-3.5 and GPT-4.
Adaptive Generation Once we obtain the verification results from Eq <ref>, if 𝒫_i is classified as fully or partially covering the interpretations, we utilize the retrieved passages 𝒫_i to generate a response via Eq <ref> and <ref>. If 𝒫_i is classified as covering none of the interpretations, we rely only on the LLM's internal knowledge to generate a response: LLM(q_i, I_l). The full prompt of I_l is presented in Table <ref> in Appendix <ref>. This process enables the most suitable approach to be used for each retrieval quality, which benefits both accuracy and efficiency.
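Putting verification and the adaptive switch together, the control flow looks roughly like the following. The verification prompt, `call_llm`, and the helper functions are placeholders; the grade is expressed simply by how many pseudo-interpretations are judged answerable.

```python
from typing import Callable, List

def verify_and_generate(question: str, pseudo_qs: List[str], passages: List[str],
                        call_llm: Callable[[str], str],
                        rag_answer: Callable[[str, List[str]], str],
                        closed_book: Callable[[str], str]) -> str:
    """Verify passage quality per pseudo-interpretation, then pick the generation strategy."""
    ctx = "\n".join(passages)
    votes = []
    for pq in pseudo_qs:  # one Yes/No verification per pseudo-interpretation (prompt I_v)
        reply = call_llm(
            f"Do these passages contain the answer to: {pq}?\n{ctx}\nAnswer Yes or No."
        )
        votes.append(reply.strip().lower().startswith("yes"))

    if not any(votes):
        # Passages cover none of the interpretations: fall back to internal knowledge (I_l).
        return closed_book(question)
    # Fully or partially covering: keep the two-stage RAG pipeline.
    return rag_answer(question, passages)
```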
§.§ Discussion on Efficiency
We examine the factors contributing to our framework's strong efficiency in Figure <ref>, which illustrates the average number of input and output tokens per query when using the GPT-4 backbone. First, the strong efficiency is largely due to the RD method. Unlike Iterative RAG, which involves an average of 5.5 exploration steps per query and requires more than 12,000 input tokens and 1,200 output tokens, the RD method significantly reduces the number of tokens needed. This reduction improves efficiency in both inference time and API costs. Second, although the RV method introduces some additional cost, it is acceptable compared to the complexity of Iterative RAG. Moreover, RV enables the adaptive generation (AG) strategy, in which the faster closed-book LLM is selectively used instead of RAG, further enhancing efficiency. As a result, the full framework, which combines RD, RV, and AG, requires substantially less inference time and lower API costs.
§ EXPERIMENTAL SETUPS
§.§ Datasets
Our proposed method and all baseline models are assessed using the ASQA <cit.> and SituatedQA <cit.> datasets. ASQA is a long-form QA dataset featuring ambiguous questions. SituatedQA is a short-form QA dataset featuring questions that specifically highlight ambiguities related to temporal and geographical contexts. We give these questions to the QA systems and assess how comprehensively the responses cover the provided possible interpretations of questions. Further details about the datasets are provided in the Appendix <ref>.
§.§ Evaluation Metrics
Metrics for QA. Following <cit.>, we mainly adopt F1-based metrics. For the short-form QA dataset (SituatedQA) we utilize F1 score. Given ASQA is the long-form QA dataset, following <cit.>, we use Disambig-F1 (D-F1) score instead of F1. We further leverage ROUGE-L (R-L) to measure correctness of the long-form responses. Finally, Disambiguation-ROUGE (DR), combines R-L and D-F1 scores for overall performance.
Metrics for Passage Retrieval. Following <cit.>, we use MRecall@k to evaluate the quality of retrieved passages.
For more details of the evaluation metrics, please refer to Appendix <ref>.
§.§ Baselines
We compare our framework against relevant models, including fully supervised LMs, few-shot closed-book LLMs, LLMs with RAG, and adaptive generation methods. Specifically, the fully supervised LMs include 1) T5 closed-book <cit.>, 2) T5 w/ JPR <cit.>, and 3) PaLM <cit.> w/ Soft Prompt Tuning.
The few-shot closed-book LLMs include 4) Vanilla LLAMA3, GPT-3.5-turbo, and GPT-4, and 5) Query refinement <cit.>. The few-shot LLMs with RAG include 6) Vanilla RAG, for which we use the RAC prompt of <cit.>, and 7) Iterative RAG, for which we use the sota method ToC <cit.>; for adaptive generation we use 8) Self-RAG <cit.>. For more details of the baselines, please refer to Appendix <ref>.
§.§ Implementation Details
In our framework, the LLM is employed across three modules: retrieval diversification (Eqn <ref>), retrieval verification (Eqn <ref>), and adaptive response generation (Eqn <ref>, <ref>, and the closed-book LLM). For adaptive response generation, we use the same LLM backbones as the other baselines. For the retrieval diversification and verification modules, we assess the performance of GPT-3.5 (gpt-3.5-turbo) and GPT-4 (gpt-4), ultimately opting to use GPT-4 for both modules on the ASQA dataset and GPT-3.5 for both modules on the SituatedQA dataset in all experiments. However, as demonstrated in Section <ref>, other LLMs also perform effectively in these modules. For other implementation details, please refer to Appendix <ref>.
§ EXPERIMENTAL RESULTS AND ANALYSES
§.§ Main Results
Table <ref> presents the long-form ambiguous QA performance of the baselines and our framework on the development set of ASQA.
First, our framework outperforms the sota baseline, Iterative RAG, in terms of both accuracy and efficiency of response generation.
Our method enhances the Vanilla RAG framework by incorporating retrieval diversification and adaptive generation strategies that address low-quality retrieval and improve performance. It is also more efficient, requiring significantly less computational overhead and achieving 1.5x - 3x faster inference across various LLM backbones compared to Iterative RAG. Overall, our method produces more accurate and diverse interpretations without the cumbersome iterative exploration process.
Second, our framework demonstrates good adaptability when switching the underlying LLM backbone.
It consistently enhances Vanilla RAG with its RD and AG modules across different LLM backbones, demonstrating its adaptability and wide applicability. This suggests that it can easily integrate with more advanced LLMs in the future.
Additionally, Fig <ref> shows the performance and efficiency of the baselines and our framework on the SituatedQA test set for short-form ambiguous QA. All results align with those in Table <ref>, demonstrating the strong generalizability of our framework across different types of ambiguous questions.
§.§ Ablation Studies
To evaluate the importance of each component of our framework, namely retrieval diversification (RD) and adaptive generation (AG), we incrementally add them to Vanilla RAG (row 2 in Table <ref>). Table <ref> reveals the following insights:
1) RAG (row 2) with the closed-book LLM (row 1) significantly enhances the ability to handle ambiguity in questions.
2) Implementing the RD module (row 3) enhances all performance metrics, demonstrating that RD effectively diversifies and improves the quality of retrieved passages, thereby enhancing the RAG framework. 3) Incorporating the AG module (row 4) also boosts all metrics, showing that the retrieval verification method accurately identifies passages. Additionally, this supports our finding in Sec <ref> that when retrieved passages are of extremely low quality, the internal knowledge of LLMs proves more advantageous than RAG.
§.§ Retrieval Analysis
We evaluate the effectiveness of our proposed RD method in Table <ref> using MRecall@k <cit.>. Vanilla RAG (row 1) involves basic retrieval of passages using a given question q_i. "+ RD" (row 3) applies the RD method to row 1, using pseudo-interpretations generated by our proposed instructions (i.e., I_p and I_a). Row 2 uses the RD method with pseudo-interpretations generated by the LLM query rewriter as described in <cit.> using simple instructions. "+ Oracle" (row 4) applies RD to Vanilla RAG using ground-truth interpretations from the ASQA dataset.
We observe that 1) adding RD leads to significant improvements in MRecall and D-F1 score compared to Vanilla RAG, demonstrating that RD effectively addresses the low-quality retrieval issue and thereby improves QA performance; 2) "+ RD" outperforms "+ <cit.>", showing the superiority of our carefully designed instruction for inferring pseudo-interpretations; and 3) "+ Oracle" (row 4) significantly outperforms RD, indicating that, as more advanced LLMs become available, there is potential for RD to infer pseudo-interpretations even more accurately.
§.§ Sensitivity Analysis
For the retrieval diversification (RD) and retrieval verification (RV) modules, we explore how their performance is affected by the choice of LLM. We evaluate the impact of using GPT-3.5 and GPT-4 across both modules, comparing the overall QA performance against the sota baseline, ToC <cit.>, on the ASQA and SituatedQA datasets. Fig <ref>(a) and (b) represent using GPT-3.5 and GPT-4 as the response generation models on the ASQA dataset, respectively. Fig <ref>(c) represents using GPT-3.5 as the response generation model on the SituatedQA dataset.
In Fig <ref>, we observe the following: 1) our framework consistently outperforms ToC, regardless of the LLM used in each module. 2) While the RD module shows very stable results, the RV module appears relatively sensitive to the choice of LLM. This highlights that verifying the quality of retrieved passages for ambiguous questions requires stronger natural language understanding, underscoring the need for future work to alleviate the dependency on the choice of LLM. Based on these results, we argue that our framework is a general approach that is robust across different LLM models.
§.§ Case Studies
We conduct a case study to qualitatively compare the reasoning chains of Iterative RAG (ToC <cit.>) and our framework. Due to space limits, please refer to Appendix <ref> for detailed case studies.
§ RELATED WORK
RAG for Handling Ambiguous Question.
To tackle the ambiguity inherent in certain questions, earlier studies such as those by <cit.> necessitated the fine-tuning of models using extensive training datasets. For example, <cit.> introduced JPR designed to identify multiple passages with maximal coverage of all plausible answers, and language model (LM) is trained to generate a comprehensive answer from them. AmbigPrompt <cit.> iteratively generates prompts based on previous answers and uses LM to produce new answers, ensuring coverage of all plausible responses. Recently, some studies have leveraged LLMs to generate comprehensive responses through few-shot in-context learning. For example, RAC <cit.> instructs LLM to extract plausible interpretations and answers from provided passages. <cit.> developed a query rewriter that clarifies ambiguous questions, enabling the retrieval of specific passages. Black-box LLMs then generate answers from these passages. However, they overlook the problem of low-quality retrieval, where the retrieved passages frequently fail to cover all plausible interpretations. This often results in significant performance drops in factual accuracy. To tackle this issue, ToC <cit.> explores missing interpretations by iteratively using previously extracted interpretations as queries to retrieve new passages, from which further interpretations are then extracted. This exploratory process is repeated multiple times. However, this approach incurs significant computational overhead due to the iterative passage retrieval and LLM reasoning.
Retrieval Quality Verification.
Many studies have noted that low-quality retrieval introduces significant irrelevant information to the RAG framework and have proposed various solutions. Self-RAG <cit.> fine-tunes LLM to generate a reflection token that assesses the relevance of a passage to the question at hand. Llatrieval <cit.> employs LLM to check if retrieved passages sufficiently support the answer, updating them if they are of low quality. Meanwhile, CRAG <cit.> trains a lightweight verifier to evaluate the quality of retrieved passages, making corrections if they fall below a set threshold.
Adaptive Generation.
Numerous studies have examined adaptive strategies that dynamically determine the need for retrieval, utilizing only the internal knowledge of LLMs when retrieval is unnecessary <cit.>. <cit.> used an empirical criterion to decide when to retrieve, activating retrieval based on entity frequency.
AdaptiveRAG <cit.> dynamically chooses the optimal response generation strategy tailored to the complexity of the query. TA-ARE <cit.> uses in-context learning to assess whether a query necessitates retrieval.
Compared with recent studies that either overlook or inefficiently address the issue of low-quality retrieval for ambiguous questions, we introduce a retrieval diversification method that efficiently retrieves higher-quality passages without relying on cumbersome iterative processes. Additionally, we propose retrieval verification and adaptive generation strategies specifically designed for ambiguous questions. To the best of our knowledge, this paper is the first effort to thoroughly analyze and address the problem of low-quality retrieval in the context of ambiguous questions and its potential solutions.
§ CONCLUSION
In this study, we examined the shortcomings of current RAG-based methods in dealing with ambiguous questions, specifically their low-quality retrieval and inefficiency. Our proposed diversify-verify-adapt framework effectively diversifies the retrieved passages to capture various interpretations, verifies their quality, and adapts the most appropriate approach based on that quality. This strategy improves QA performance while minimizing inefficiency.
§ LIMITATIONS
While our framework demonstrates clear advantages in effectiveness and efficiency through retrieval diversification and adaptive generation, its design is specifically tailored for ambiguous questions. In real-world QA systems, where queries can be a mix of ambiguous and unambiguous, its applicability may be limited. However, recent work has introduced methods to classify whether a query is ambiguous <cit.>, which allows the suitable approach to be selected according to the query's ambiguity. Although <cit.> proposed simple approaches, there is still significant potential to enhance these methods using advanced techniques such as in-context learning and RAG. Future research could focus on developing systematic approaches for classifying the ambiguity of queries.
Furthermore, the performance of our proposed retrieval verification module is somewhat sensitive to the choice of LLM. Specifically, it tends to work better with GPT-4 than with GPT-3.5, though this may negatively impact the efficiency of the framework. Therefore, future work should focus on developing a more efficient and robust retrieval-quality verifier LLM, tailored to handling ambiguous questions, to enhance both effectiveness and efficiency.
§ ETHICS STATEMENT
Given that our method is built on the RAG framework for QA systems, it is important to consider the following points: (1) the retrieved passages may contain offensive or harmful content, which could result in similarly harmful responses, and (2) user queries themselves may be offensive or harmful. Therefore, developing methods to detect harmful user queries and selectively retrieve passages that are free from harmful content could be a crucial focus for future research.
§ EXPERIMENTAL DETAILS
§.§ Datasets
Our proposed method and all baseline models are assessed using the ASQA <cit.> and SituatedQA <cit.> datasets. ASQA is a long-form QA dataset derived from a subset of ambiguous questions in the AmbigNQ dataset <cit.>. The ASQA dataset contains 6,316 ambiguous questions and their corresponding comprehensive long-form answers that contain all plausible answers, split into 4,353 for training, 948 for development, and 1,015 for testing. SituatedQA is a short-form QA dataset featuring questions that specifically highlight ambiguities related to temporal and geographical contexts. In this dataset, each question is subject to multiple interpretations, with corresponding answers varying by context. We give these questions to the QA systems and assess how comprehensively the responses cover the possible interpretations.
§.§ Evaluation Metrics
Metrics for QA. For both datasets, following previous studies on ambiguous QA <cit.>, we
mainly adopt F1-based metric. Specifically, for the short-form QA dataset (SituatedQA) we measure F1 based on the precision and recall between the ground-truth answers and the generated responses. Given ASQA is the long-form QA dataset, following <cit.>, we use Disambig-F1 (D-F1), which assesses the factual accuracy of long-form responses, instead of F1. Using a RoBERTa model <cit.> trained on SQuAD2.0, we extract short answers from the generated long-form responses and compare them to the ground-truth disambiguation questions (DQs). The F1 score of these extracted answers indicates whether the long-form answers contain correct information. We further leverage ROUGE-L (R-L) to measure correctness of the generated long-form responses to the ground-truth long-form answers. Finally, Disambiguation-ROUGE (DR), combines R-L and D-F1 scores as a geometric mean for overall performance.
Metrics for Passage Retrieval. Following <cit.>, we use MRecall@k to evaluate the quality of retrieved passages by considering retrieval to be successful if all answers or at least k answers in the plausible answer set are recovered by the retrieved passages.
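For reference, the two aggregate metrics reduce to a few lines of arithmetic. The sketch below assumes D-F1 and ROUGE-L have already been computed per question and uses simple substring matching for MRecall@k, which only approximates the official evaluation scripts.

```python
import math

def dr_score(rouge_l: float, disambig_f1: float) -> float:
    """Disambiguation-ROUGE: geometric mean of ROUGE-L and Disambig-F1."""
    return math.sqrt(rouge_l * disambig_f1)

def mrecall_at_k(retrieved: list, answers: list, k: int) -> bool:
    """Success if all answers, or at least k of them, appear in the retrieved passages."""
    text = " ".join(retrieved).lower()
    hits = sum(a.lower() in text for a in answers)
    return hits == len(answers) or hits >= k

print(dr_score(0.40, 0.30))  # ~0.346
```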
§.§ Baselines
We describe the details of models as follows:
1) T5 closed-book. <cit.> fine-tuned T5-large <cit.> to generate long-form response on the whole train set.
2) T5 w/ JPR. <cit.> fine-tuned T5-large <cit.> with JPR <cit.>, fully trained dense retriever for ambiguous QA, to generate long-form response on the whole train set.
3) PaLM w/ Soft Prompt Tuning. <cit.> employed a prompt engineering method to PaLM <cit.> that learn the soft prompts in the closed-book setup.
4) Closed-book LLM. Closed-book LLM indicates the traditional few-shot prompting method used in <cit.>. We consider the backbone LLM as LLAMA3-70B-Instruct, GPT-3.5, and GPT-4.
5) Query refinement. Inspired by <cit.>, we developed an in-context learning method within a closed-book setup. First, we prompt the LLM to refine ambiguous questions into multiple possible interpretations. These interpretations are then used as in-context examples for the LLM to generate a response that addresses all potential interpretations. We consider the backbone LLM as LLAMA3-70B-Instruct, GPT-3.5, and GPT-4.
6) Vanilla RAG. In this method, we begin by retrieving the top 5 relevant passages based on the frozen SentenceBERT similarity between the given query and candidate passages from Wikipedia. We then use the RAC prompt from <cit.> to extract interpretations and generate corresponding answers. We consider the backbone LLM as LLAMA3-70B-Instruct, GPT-3.5, and GPT-4.
7) Iterative RAG. For this approach, we employ the state-of-the-art method ToC <cit.> for handling ambiguous QA. Specifically, ToC iteratively constructs a tree of possible interpretations for the ambiguous question using few-shot prompting that leverages external knowledge, and then uses this tree to generate a long-form response. Following the authors' implementation, we set the tree's maximum depth to 3 and the maximum number of nodes to 10. It is important to note that we do not use the tree pruning method in our implementation, as we observe that adding this method notably degrades the QA performance. The retrieval settings are identical to those used in Vanilla RAG. We consider the backbone LLM as LLAMA3-70B-Instruct <cit.>, GPT-3.5, and GPT-4.
8) Self-RAG. The LLM is trained to adaptively manage retrieval and generation, initiating retrieval when a special token is predicted above a certain threshold, followed by generating the answer. We consider the model trained on LLAMA2-13B.
§.§ Implementations Details
Since our method utilizes few-shot prompting, we dynamically select k-shot examples through nearest neighbor search and incorporate them into the prompt, following the approach in <cit.> using the dsp package <cit.>. For the retrieved passages 𝒫_i, we set the number of passages |𝒫_i| to 5. We use GPT-4 for both the retrieval diversification and verification steps. For adaptive response generation, we use the same LLM backbones as the other baselines. For the retrieval diversification and verification modules, we assessed the performance of GPT-3.5 (gpt-3.5-turbo) and GPT-4 (gpt-4), ultimately opting to use GPT-4 for both modules on the ASQA dataset and GPT-3.5 for both modules on the SituatedQA dataset in all experiments. The APIs provided by OpenAI[https://openai.com/index/openai-api/] are employed for GPT-3.5-turbo and GPT-4, with the following settings: max tokens set to 300, top-p to 1.0, and temperature to 0.3.
§.§.§ Retrieval Process
To retrieve relevant passages for the given question, we follow the method utilized by <cit.>. Specifically, we first gather relevant Wikipedia documents for the question using two retrieval systems: ColBERT <cit.> and the Bing search engine[https://www.microsoft.com/bing]. After compiling a set of passages, we rerank and select the top-k passages. For reranking, we utilize SentenceBERT <cit.>, pre-trained on MS-Marco, as the backbone.
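The rerank-and-select step can be sketched as follows; the MS-MARCO checkpoint name shown here is an illustrative assumption, as the text above only specifies a SentenceBERT backbone pre-trained on MS-Marco.

from sentence_transformers import SentenceTransformer, util

# Frozen bi-encoder used to rerank the pooled candidate passages.
# The checkpoint name is an assumption; any MS-MARCO-trained SentenceBERT
# model would serve the same purpose.
encoder = SentenceTransformer("msmarco-distilbert-base-v4")

def rerank(question, passages, top_k=5):
    """Score every candidate passage against the question with cosine
    similarity of frozen sentence embeddings and keep the top-k."""
    q_emb = encoder.encode(question, convert_to_tensor=True)
    p_emb = encoder.encode(passages, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, p_emb)[0]
    order = scores.argsort(descending=True)[:top_k]
    return [passages[int(i)] for i in order]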
§ ADDITIONAL EXPERIMENTS
§.§ Case Study
Figure <ref> illustrates the reasoning chains of Iterative RAG, ToC <cit.>, and our method on the ASQA question, "The movement of food in the food pipe is called?". In panel (a), the answer "Peristalsis" is easily covered during the first exploration, whereas "Swallowing" requires six steps of passage retrieval and LLM reasoning to be discovered. In contrast, panel (b) shows that our pseudo-interpretations include both interpretations, with the RD retrieving passages that encompass all necessary information. Consequently, the LLM efficiently extracts all plausible interpretations from the retrieved passages without the need for the cumbersome iterative exploration process.
§ PROMPTS
Table <ref> and Table <ref> show examples of the text prompt for inferring pseudo-interpretations (i.e., I_a and I_p in Eqn <ref>). Table <ref> shows an example of the text prompt for verifying the retrieved passages (i.e., I_v in Eqn <ref>). Table <ref> and Table <ref> show examples of the text prompt for response generation in the vanilla RAG framework (i.e., I_e in Eqn <ref> and I_g in Eqn <ref>).
|
http://arxiv.org/abs/2409.02569v1 | 20240904093907 | More is More: Addition Bias in Large Language Models | [
"Luca Santagata",
"Cristiano De Nobili"
] | cs.CL | [
"cs.CL",
"cs.AI",
"cs.CY",
"cs.HC"
] |
Department of Information Engineering and Computer Science,
University of Trento, Italy.
[email protected]
MHPC (SISSA/ICTP), Trieste, Italy.
Pi School, Rome, Italy.
[email protected]
More is More:
Addition Bias in Large Language Models
Luca Santagata,1
Cristiano De Nobili2, 3
September 9, 2024
======================================================
§ ABSTRACT
In this paper, we investigate the presence of additive bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over subtractive changes <cit.>. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, MathΣtral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding letters 97.85% of the time over removing them.
Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one.
In a text summarization task, Mistral 7B produced longer summaries in 59.40% to 75.10% of cases when asked to improve its own or others' writing. These results indicate that, similar to humans, LLMs exhibit a marked additive bias, which might have implications when LLMs are used on a large scale. Additive bias might increase resource use and environmental impact, leading to higher economic costs due to overconsumption and waste. This bias should be considered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.
§ INTRODUCTION
Large Language Models (LLMs) present substantial opportunities as tools to aid a growing variety of decision-making processes. However, because they are trained on data generated by humans, LLMs are known to inherit societal biases and can exhibit biases that closely resemble cognitive biases, defined as systematic and erroneous response patterns in judgment and decision-making <cit.>. Such human-like biases have the potential to hinder the fairness and transparency of decisions made with the help of LLMs.
Adams et al. <cit.> conducted a series of experiments to explore a cognitive phenomenon known as the addition bias in human participants. This bias was examined in scenarios where problems could be resolved by either adding or removing elements. Additive transformations result in a state with more elements than the original, while subtractive transformations lead to a state with fewer elements <cit.>. A key finding from Adams et al.'s work was that people tend to add rather than remove elements when modifying ideas, objects, or situations. This tendency was observed across various tasks, such as stabilizing a Lego structure, improving a miniature golf course, creating symmetry within a grid, or rewriting an article summary. Interestingly, participants often chose to add elements even when a subtractive solution would have been simpler and required fewer steps. Additionally, instructions to “improve” a design amplified the addition bias more than instructions to “worsen” a design.
In this paper, we extend this line of inquiry to LLMs, investigating whether they exhibit a similar additive bias. We conducted a series of experiments designed to test the tendency of LLMs to favor additive over subtractive changes across various tasks. These experiments included creating palindromes from strings, balancing Lego towers, modifying recipes with unusual ingredients, improving soup recipes with varying numbers of ingredients, and revising text summaries. We tested several prominent LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, and Llama 3.1 70B, among others. Our research aims to uncover whether these AI models, trained on human-generated data, have inherited the human tendency towards additive problem-solving, and to explore the implications of such a bias for AI-assisted decision-making and problem-solving processes.
RQ: Do Large Language Models exhibit an additive bias similar to humans when solving problems or generating content, and if so, how does this bias manifest across different tasks and models?
§ RELATED WORKS
LLMs have demonstrated remarkable abilities in various tasks, such as document summarization <cit.>, solving math problems <cit.>, and providing chat support <cit.>. This has led to their growing use for assistance and advice in daily decision-making <cit.>. However, these models are not immune to algorithmic biases <cit.>, highlighting the need for strategies to evaluate and mitigate these issues <cit.>.
During training, LLMs can encode societal biases related to race, gender, and other sensitive areas, potentially generating outputs that reinforce harmful stereotypes or discriminatory views. A common example is the tendency of LLMs to associate certain professions or traits with specific genders or ethnicities, such as linking engineering with male pronouns and nursing with female pronouns <cit.>. Additionally, biases can surface in text generated on sensitive topics like politics, religion, or social issues <cit.>.
In addition to societal biases, LLMs can show answer patterns similar to human cognitive biases <cit.>, which can implicitly mislead a user's decision-making <cit.>.
Cognitive bias refers to a systematic pattern of deviation from norms of rationality in judgment, where individuals create their own “subjective reality” from their perception of the input <cit.>, <cit.>.
Three of the eight experiments conducted by <cit.> were replicated in <cit.>, confirming the presence of the addition bias. Further, <cit.> demonstrated that the addition bias extends beyond behavioral manifestations and is also evident in language. A frequency analysis of the Corpus of Contemporary American English <cit.> revealed that words associated with increasing quantities, such as “add” and “more”, are more common in English than those associated with decreasing quantities, such as “subtract” and “less”.
§ METHODOLOGY
For the various experiments, the following models were used:
- GPT-3.5 Turbo[<https://platform.openai.com/docs/models/gpt-3-5-turbo>], prompted using the OpenAI API.
- Claude 3.5 Sonnet[<https://www.anthropic.com/news/claude-3-5-sonnet>], prompted using Anthropic API[<https://docs.anthropic.com/en/api/getting-started#accessing-the-api>].
- MathΣtral[<https://mistral.ai/news/mathstral/>], a model specializing in mathematical and scientific tasks, whose weights were downloaded from HuggingFace[<https://huggingface.co/mistralai/Mathstral-7B-v0.1>].
- Llama 3.1 70B and 405B[<https://llama.meta.com/>], prompted using NVIDIA AI Foundry API[<https://build.nvidia.com/explore/discover>].
- Mistral 7B[<https://mistral.ai/news/announcing-mistral-7b>], prompted using Mistral AI API[<https://mistral.ai/>].
The temperature was set to 0.7 to enhance the variability of the responses, and the following system prompt was used:
"Imagine being a regular person asked a question for data collection in a scientific study."
All responses provided by the models were saved in CSV files (available at https://github.com/LucaSantagata/More-is-More-Addition-Bias-in-Large-Language-Models) for result analysis. Initially, as described in greater detail in the following sections, all responses deemed incorrect were excluded, including those that were formally incorrect from a logical standpoint and those that did not meet the prompt's requirements.
For each iteration of the process, the history of previous interactions was neither retained nor used to inform the subsequent generation by the model. Each request was thus treated as an independent instance, without any context from prior interactions.
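A minimal sketch of this history-free prompting loop, using the OpenAI Python client (v1.x), is shown below; the model name and the number of repetitions are illustrative.

from openai import OpenAI

client = OpenAI()
SYSTEM_PROMPT = ("Imagine being a regular person asked a question "
                 "for data collection in a scientific study.")

def collect_responses(user_prompt, n_runs=1000, model="gpt-3.5-turbo"):
    """Each call is an independent request: no chat history is carried over,
    matching the protocol described above."""
    responses = []
    for _ in range(n_runs):
        completion = client.chat.completions.create(
            model=model,
            temperature=0.7,
            messages=[{"role": "system", "content": SYSTEM_PROMPT},
                      {"role": "user", "content": user_prompt}],
        )
        responses.append(completion.choices[0].message.content)
    return responses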
§ EXPERIMENTS
§.§ Palindrome sequence task
In this experiment, the objective was to transform a sequence of letters into a palindrome. Specifically, the strings "abb" and "abab" were used, along with the following prompts:
"Knowing that a sequence is said to be a palindrome if it is equal to its reverse, or in other words, if reading the sequence from left to right gives the same result as reading it from right to left, you need to make this sequence 'abb' a palindrome, but you can only add or remove one letter. Give me your answer in one sentence."
"Knowing that a sequence is said to be a palindrome if it is equal to its reverse, or in other words, if reading the sequence from left to right gives the same result as reading it from right to left, you need to make this sequence 'abab' a palindrome, but you can only add or remove one letter. Give me your answer in one sentence."
In particular, to prevent the indication add or remove from influencing the choice by favoring the additive approach over the subtractive one, the experiment was repeated the same number of times for each model using the same prompt, but this time with the inverted instruction remove or add. All results presented are averages of the values obtained in both cases.
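Response validation and labelling can be sketched as follows; how the candidate sequence is parsed out of the free-text answer is omitted here and requires a small amount of response-specific parsing.

def one_insertion_apart(shorter, longer):
    """True if `longer` is `shorter` with exactly one extra letter inserted."""
    return (len(longer) == len(shorter) + 1 and
            any(longer[:i] + longer[i + 1:] == shorter
                for i in range(len(longer))))

def classify_edit(original, proposed):
    """Label a proposed sequence as an additive or subtractive single-letter
    edit of `original` that yields a palindrome, or as invalid."""
    if proposed != proposed[::-1]:
        return "invalid"                      # not a palindrome
    if one_insertion_apart(original, proposed):
        return "additive"                     # one letter was added
    if one_insertion_apart(proposed, original):
        return "subtractive"                  # one letter was removed
    return "invalid"                          # wrong number of edits

# classify_edit("abb", "abba") -> "additive"
# classify_edit("abb", "bb")   -> "subtractive"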
§.§.§ GPT-3.5 Turbo results
In the case of the sequence "abb", as an initial analysis, we considered only the logically correct responses, where a single letter was used to make it symmetric. These correspond to the answers "abba" (where an "a" was added at the end) or "bb" (where the first "a" was removed). Out of the 1000 responses obtained with the suggestion add or remove, 700 responses were deemed valid, while with remove or add, 707 responses were considered correct. The results, obtained from the average of the two cases, are presented in Table <ref>.
Subsequently, we decided also to consider responses that are not logically correct or do not adhere to using only one letter but are still palindromes (e.g., "adding the letter 'a' to the middle of the sequence 'abb' to make it a palindrome, resulting in 'abba'", where the answer 'abba' is a palindrome but does not correspond to the given explanation, therefore logically incorrect).
With these new considerations, the number of responses considered correct with the recommendation add or remove was 928, while with remove or add it was 931. Additionally, beyond the two sequences 'abba' and 'bb', which were considered the only correct ones in the previous case, new palindrome sequences emerged, as shown in Table <ref>.
On the other hand, the sequence ”abab” was tested 100 times for each of the two prompts: with the suggestion add or remove, 22 responses were discarded as they were neither correct from the palindromic perspective nor the logical one, while with remove or add 23 responses were discarded for the same reason. Unlike the previous case, all the remaining responses fell into one of the 4 valid solutions that could be obtained by adding or removing only one letter, whose results are shown in Table <ref>.
§.§.§ Claude 3.5 Sonnet results
With this model, not only were all responses correct both from a palindromic and a logical perspective, but an extreme tendency towards addition was also observed for both the "abb" and "abab" sequences, as shown by the results in Tables <ref> and <ref>.
§.§.§ Llama 3.1 405B results
For the sequence "abb", out of the 200 responses collected, 19 were discarded because they produced a non-palindromic string. In the results shown in Table <ref>, some responses like "remove the last 'b' to get 'ab', then add an 'a' at the end to get the palindrome 'aba'." were included, even though they were infrequent and, while accurate, did not adhere to the rule of adding or removing just one letter.
Regarding the sequence "abab", out of the 200 responses, 13 were discarded from the results with the prompt containing add or remove, and 11 were discarded from those with remove or add. In this case as well, all remaining responses fell into one of the 4 valid solutions that could be obtained by adding or removing only one letter, with the results shown in Table <ref>.
The presented results clearly show that each model extensively pursued an additive approach. This confirms that the presence of an additive bias has influenced the models' choices, with all of them preferring addition over subtraction.
Moreover, as Figure <ref> shows, additive responses are not only more frequent but, in both cases and across all models, prevail by a significant margin.
§.§ Lego towers task
For this experiment (consider Figure <ref> as an example), the models were asked:
"Imagine you have two towers built with Lego bricks. One of them was built with 5 bricks, while the other with 4 bricks. You need to make them the same height using the fewest number of pieces. What do you do? Give me your answer in one sentence."
In this case, using the fewest possible pieces, there are two valid answers to make the towers symmetrical: add one Lego brick to the shorter tower on the left (additive response), or remove one brick from the taller tower on the right (subtractive response). The purpose of the experiment is to determine whether an additive bias in the responses leads to a majority of answers that add a brick rather than remove one.
§.§.§ GPT-3.5 Turbo results
Out of the 1000 responses, 365 were discarded because the result was either logically incorrect or did not result in two symmetrical towers (e.g., "I would take two bricks from the tower with 5 bricks and add them to the tower with 4 bricks to make both towers have 6 bricks each."). The results of the remaining 635 correct responses, presented in Table <ref>, show that the additive one was the more frequent of the two suggestions.
§.§.§ Claude 3.5 Sonnet results
For this model, not only were all 1000 collected responses logically correct, but all of them were additive, as indicated by the results in Table <ref>.
§.§.§ MathΣtral results
As in the previous case, for MathΣtral as well, all 1000 logically correct responses suggested adding a brick to the shorter tower, as indicated in Table <ref>.
§.§.§ Llama 3.1 70B results
Out of the 1000 responses, only 2 were discarded, as they suggested removing the excess brick from the taller tower and placing it on the shorter one, which did not solve the symmetry problem (e.g., "take one brick from the 5-brick tower and attach it to the 4-brick tower.").
The results for the remaining 998 answers, shown in Table <ref>, unlike the previous model cases, indicate that subtracting a brick from the taller tower was the most frequently proposed solution.
A test was conducted to determine whether 485 out of 635 (76.38%) responses for GPT-3.5 Turbo, 1000 out of 1000 (100.00%) for Claude 3.5 Sonnet, 1000 out of 1000 (100%) for MathΣtral, and 19 out of 1000 (1.90%) for Llama 3.1 70B, reject the null hypothesis that the suggestions for the two possible transformations are equally likely. The p-value from a two-sided binomial distribution test for these results was found to be less than 0.001.
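The significance test can be reproduced with SciPy (version 1.7 or later); the snippet below shows it for the GPT-3.5 Turbo counts, and the other models are analogous.

from scipy.stats import binomtest

# 485 additive suggestions out of 635 valid responses; the null hypothesis
# is that additive and subtractive suggestions are equally likely (p = 0.5).
result = binomtest(k=485, n=635, p=0.5, alternative="two-sided")
print(result.pvalue)   # < 0.001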
In conclusion, as can be seen also in Figure <ref>, it is possible to confirm that for this type of task, GPT-3.5 Turbo, Claude 3.5 Sonnet, and MathΣtral showed a strong additive bias, unlike Llama 3.1 70B, which in this case demonstrated a pronounced tendency towards subtractive choices.
§.§ Elementary operation task
In this experiment, the following prompt was used:
"Given these numbers: [n_1, n_2], which of the four basic operations would you suggest performing? Provide your answer in one word."
Where [n_1, n_2] are two numbers within the range of 1 to 10, randomly generated in each of the 1000 iterations. The aim is to investigate the potential tendency to prefer elementary operations that increase the value of the numbers involved, such as addition and multiplication (referred to as additive operations), over those that decrease it, such as subtraction and division (referred to as subtractive operations).
§.§.§ GPT-3.5 Turbo results
Out of the 1000 collected responses, 2 were discarded as they suggested taking the average rather than performing one of the four basic operations. For the remaining 998 responses, the distribution of choices is shown in Table <ref>.
§.§.§ Claude 3.5 Sonnet results
In this case, all the responses suggested one of the four basic operations, which is why none of them were discarded. The obtained results are shown in Table <ref>.
§.§.§ MathΣtral results
Also in this case it was not necessary to remove any responses, and the results obtained, shown in Table <ref>, indicate that the only operations suggested were addition and multiplication, with no indication of subtraction or division.
§.§.§ Llama 3.1 70B results
All 1000 responses suggested one of the four basic operations, which is why they were all included in the analysis presented in Table <ref>. The results demonstrate a clear tendency for the model to favor addition.
A test was conducted to determine whether 753 (sum of addition and multiplication counts) out of 998 responses (75.45%) for GPT-3.5 Turbo, 660 out of 1000 (66.00%) for Claude 3.5 Sonnet, 1000 out of 1000 (100.00%) for MathΣtral, and 686 out of 1000 (68.60%) for Llama 3.1 70B, reject the null hypothesis that suggestions for additive operations are equally likely as those for subtractive operations. The p-value from a two-sided binomial distribution test for these results was found to be less than 0.001.
As summarized in Figure <ref>, all models suggested addition and multiplication more frequently than subtraction and division, clearly indicating the presence of an additive bias for this type of task.
§.§ Anomalous sandwich task
The objective of this experiment is to determine if the tested LLMs are more likely to add or subtract from stimuli with anomalous components. To test this hypothesis, it was asked to modify the recipe for a cheese sandwich. Specifically, the prompt used was as follows:
"Imagine you are hungry and decide to make a sandwich for lunch. Below there is a list of five ingredients: bread, ham, cheese, lettuce, mayonnaise, and ingredient n°6. In one sentence, please describe how you would change this recipe when making your sandwich."
For ingredient n°6, three different possibilities were tested:
banana, chocolate, and pineapple. These three ingredients are extremely unusual for a cheese sandwich recipe, so it would be reasonable to expect the most obvious response to be a subtractive one, namely simply removing the sixth, unusual ingredient. However, the results demonstrated that this is not so straightforward.
The collected responses were divided into four categories:
- no change: the original sandwich recipe is left unaltered with the unusual ingredient (e.g "I would make a ham and cheese sandwich with lettuce, mayonnaise, and banana slices for a unique twist").
- only addition: there is a modification of only additive type, that is, the addition of an ingredient, but no subtractive modification, that is, the removal of an ingredient (e.g "If it were up to me, I would add some sliced tomatoes to my sandwich for an extra burst of freshness and flavor").
- only removal: there is a modification of only subtractive type, but no additive modification (e.g., "I would remove the chocolate from the list of ingredients when making my sandwich").
- both addition and removal: there are simultaneously both an additive modification and a subtractive modification, indicating both the removal and the addition of an ingredient (e.g., "I would skip the pineapple and add some mustard for an extra kick of flavor in my sandwich"). One possible automated assignment of responses to these categories is sketched below.
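The heuristic below illustrates one possible automated implementation of this categorisation; the keyword lists are purely illustrative assumptions and do not necessarily reflect how the responses were actually labelled.

ADD_WORDS = ("add", "adding", "include", "extra", "with some")
REMOVE_WORDS = ("remove", "removing", "skip", "omit", "without", "leave out")

def categorize(response):
    """Assign a free-text answer to one of the four categories above using
    simple keyword matching."""
    text = response.lower()
    adds = any(word in text for word in ADD_WORDS)
    removes = any(word in text for word in REMOVE_WORDS)
    if adds and removes:
        return "both addition and removal"
    if adds:
        return "only addition"
    if removes:
        return "only removal"
    return "no change"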
§.§.§ GPT-3.5 Turbo results
As mentioned earlier, when faced with an extremely unusual ingredient, the most reasonable choice would be to remove it. However, the results in Table <ref> show a marked additive tendency of the model. Instead of simply removing the unusual ingredient, GPT-3.5 Turbo frequently suggested adding a new ingredient. This was the most common choice in the cases of banana and pineapple, and while it was not the most frequent choice in the case of chocolate, it was still suggested a notable number of times.
§.§.§ Claude 3.5 Sonnet results
The responses from Claude 3.5 Sonnet, shown in Table <ref>, generally met the expectation of simply removing the unusual ingredient, especially in the cases of banana and chocolate. However, it is interesting to note that in the case of pineapple, although the suggestion to remove only is the most frequent, a significant percentage of responses included the addition of a new ingredient.
§.§.§ Llama 3.1 70B results
Table <ref> shows that Llama 3.1 70B, for each individual ingredient and in all cases, consistently chose the most straightforward solution, which was to simply remove the unusual component.
In conclusion, as shown in Figure <ref>, in this type of task, only GPT-3.5 Turbo displayed an additive bias. This was confirmed by the interesting fact that it did not merely remove the unwanted ingredient but often preferred to suggest adding a new one as well.
§.§ Increasing ingredients in soup task
For this task, it was asked to transform a soup recipe using the following prompt:
"Below you have a list of ingredients for a soup recipe: {ingredients}. Your job is to make any and all changes necessary to improve this soup. Assume that this soup is for someone who has no dietary restrictions or strong food dislikes. Please provide your answer in only one sentence."
where the number of {ingredients} was increased each time, covering the following cases:
- 5 ingredients: vegetable broth, carrots, peas, garlic, salt/pepper.
- 15 ingredients: vegetable broth, carrots, peas, garlic, salt/pepper, onion, celery, oregano, potatoes, thyme, green beans, corn, zucchini, parsley, and leeks.
- 30 ingredients: vegetable broth, carrots, peas, garlic, salt/pepper, onion, celery, oregano, potatoes, thyme, green beans, corn, zucchini, parsley, leeks, tomatoes, spinach, bell peppers, mushrooms, lentils, cabbage, chickpeas, bay leaves, paprika, cumin, lemon juice, ginger, cilantro, basil, kale.
- 50 ingredients: vegetable broth, carrots, peas, garlic, salt, pepper, onion, celery, oregano, potatoes, thyme, green beans, corn, zucchini, parsley, leeks, tomatoes, spinach, bell peppers, mushrooms, lentils, cabbage, chickpeas, bay leaves, paprika, cumin, lemon juice, ginger, cilantro, basil, kale, cauliflower, green onions, black beans, quinoa, broccoli, radishes, fennel, mint, dill, rosemary, sage, tofu, coconut milk, turmeric, chili powder, sweet potatoes, barley, shallots, pumpkin, asparagus, lime juice.
The goal of this experiment is to investigate whether there is a tendency for models to add ingredients rather than remove them. Specifically, it aims to observe how this tendency might be influenced by the increasing number of ingredients and whether there is a "phase transition" where this tendency is no longer observed.
For this reason, the responses were categorized as follows:
- Only addition: A suggestion was made to add one or more ingredients (e.g., "I would add some diced potatoes and onion to enhance the flavor and texture of the soup").
- Only removal: A suggestion was made to remove one or more ingredients (e.g., "I would remove the radishes and fennel, as they may overpower the other flavors in the soup").
- Both addition and removal: A suggestion was made to remove one or more elements while also suggesting the addition of new ones (e.g., "I would remove the barley and pumpkin and add in a dash of smoked paprika and a splash of balsamic vinegar for a richer flavor profile").
§.§.§ GPT-3.5 Turbo results
As shown by the results in Table <ref>, with 5 and 15 ingredients, almost every suggestion from GPT-3.5 Turbo is additive. It takes 30 ingredients before suggestions that involve removing an ingredient start to appear consistently, although the additive tendency remains the most prevalent overall. With 50 ingredients, a reasonably high number, the most common suggestion shifts to removal. However, surprisingly, the difference is not excessive, and the additive tendency still appears in responses that include both types of suggestions as well as in those that are exclusively additive.
§.§.§ Claude 3.5 Sonnet results
In this case, as shown in Table <ref>, with 5 and 15 ingredients, the suggestions are exclusively additive, proposing to add one or more ingredients. Even with 30 ingredients, the trend remains largely the same, with suggestions involving only removal being completely absent. With 50 ingredients, this trend reverses, but surprisingly, removal is still not the preferred solution. Instead, the preferred suggestion is a combination of both removal and addition. This confirms that for this model, a persistent additive tendency is consistently present in this type of task.
§.§.§ Llama 3.1 70B results
As indicated by the results in Table <ref>, with 5 ingredients, as usual, the model tends to suggest adding. However, as the number of ingredients increases, the strategy of both removing and adding becomes increasingly frequent, eventually becoming the preferred option with 50 ingredients, in contrast to single removals, which only reach a consistent percentage in the scenario with the highest number of ingredients. Even in this case, it is possible to observe a strong additive bias that favors the addition of elements over their removal.
Examining Figure <ref>, it is evident that the response patterns are quite similar across all three models, particularly in the case of purely additive responses (which decrease as the number of ingredients increases) and those focused solely on removal (which become more frequent). Notably, in some instances—especially when only 5 ingredients are involved—additive responses make up the entirety of the suggestions, a situation never observed with subtractive responses. Even with 50 ingredients, subtractive responses never reach totality unless combined with an additive suggestion. These findings highlight that all models exhibit a strong additive bias when performing this specific task.
§.§ Summarization task
This experiment allowed the observation of potential additive or subtractive tendencies in the context of a revision process.
During the first phase, each model was asked 1000 times to summarize a text provided in the prompt (specifically the proposed text was the introduction of Wikipedia's page on the Roman Empire[<https://en.wikipedia.org/wiki/Roman_Empire>], attached in Appendix <ref>).
"Summarize the following text: The Roman Empire was... "
The number of words in the provided summary was counted; then, during the second phase, following the same procedure that <cit.> proposed to human participants, the model was asked to improve the summary in two different contexts.
First case - one's own writing
In this case, the model was asked to improve the previous summary, presented again in the prompt, with the specification that the summary had been written by the model itself.
"Edit your summary with the goal of improving how well you summarized the text."
Finally, the number of words used was counted.
Second case - others' writing
This time, the original summary was again presented with the request to improve it, but with the specification that it had been written by someone other than the model.
"Edit this previous summary made by someone else with the goal of improving how well the text has been summarized."
In this case as well, the number of words used in the improved summary was counted.
The key measurement in this experiment is whether the edited summaries have more or fewer words than the original summary, thereby highlighting the presence or absence of an additive tendency.
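Word counts are compared with a simple whitespace split, which is an assumption on our side, since the exact tokenisation is not essential to the comparison.

def compare_lengths(original_summary, edited_summary):
    """Label an edited summary as longer, shorter, or unchanged in word count
    with respect to the original summary."""
    diff = len(edited_summary.split()) - len(original_summary.split())
    if diff > 0:
        return "more words"      # additive revision
    if diff < 0:
        return "fewer words"     # subtractive revision
    return "same length"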
§.§.§ GPT-3.5 Turbo results
As shown in Table <ref>, especially in the first case, the model most often preferred to adopt a subtractive strategy, using fewer words.
§.§.§ Claude 3.5 Sonnet results
Also in this case, as indicated in Table <ref>, the preferred strategy in both instances was to use fewer words, writing shorter summaries.
§.§.§ Mistral 7B results
Unlike the previous cases, Table <ref> shows that in this instance the tendency in both cases was additive, resulting in edited summaries with a higher number of words.
A test was conducted for both cases to determine if the counts of 16 out of 1000 and 384 out of 1000 for GPT-3.5 Turbo (408 out of 1000 and 394 out of 1000 for Claude 3.5 Sonnet, and 594 out of 1000 and 751 out of 1000 for Mistral 7B) reject the null hypothesis that the probability of producing a summary with fewer words is the same as producing one with more words compared to the original.
The p-value from a two-sided binomial distribution test for these
results was found to be less than 0.001.
As shown in Figure <ref>, it can be concluded that during the revision phase, only Mistral 7B exhibited an additive bias in both cases. In contrast, Claude 3.5 Sonnet and GPT-3.5 Turbo (for the first time in this series of proposed experiments) favored a reductive strategy.
§ CONCLUSION
This study investigated the presence of addition bias in Large Language Models (LLMs), drawing parallels to the cognitive bias observed in humans where individuals tend to favor additive over subtractive changes. Through a series of controlled experiments across various tasks, we tested several prominent LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, MathΣtral, and Llama 3.1.
Our findings consistently demonstrated a significant preference for additive changes across most tested models and tasks. This bias was particularly evident in tasks such as palindrome creation, Lego tower balancing, and elementary operation selection. For instance, in the palindrome task, models like GPT-3.5 Turbo and Claude 3.5 Sonnet showed a strong tendency to add letters rather than remove them. Similarly, in the Lego tower task, most models preferred adding a brick to the shorter tower rather than removing one from the taller tower.
The addition bias persisted even in more complex scenarios, such as modifying recipes with unusual ingredients or improving soup recipes with varying numbers of ingredients. Interestingly, the bias remained present but decreased in intensity as the number of ingredients increased, suggesting a potential "phase transition" point where subtractive changes become more prevalent.
However, it is important to note that the bias was not uniform across all tasks and models. For example, in the text summarization task, some models (GPT-3.5 Turbo and Claude 3.5 Sonnet) showed a tendency towards reduction rather than addition when improving their own or others' summaries. This suggests that the manifestation of addition bias may be task-dependent and can vary across different LLMs.
These findings have significant implications for the development and application of LLMs in various domains. The presence of addition bias could influence how these models approach problem-solving, decision-making, and content generation tasks. It may lead to unnecessarily complex solutions and inefficiencies, or, in certain scenarios, run counter to Occam's razor in situations where simpler, subtractive approaches would be more appropriate. When LLMs are used on a large scale, addition bias can even increase resource use, leading to higher economic costs and increased environmental impact due to overconsumption and waste.
Future research should focus on understanding the root causes of this bias in LLMs, potentially exploring its relationship to training data and model architectures. Additionally, developing strategies to mitigate this bias could be crucial for improving the efficiency and effectiveness of LLMs across a wide range of applications.
§ ROMAN EMPIRE TEXT
"The Roman Empire was the post-Republican state of ancient Rome. It is generally understood to mean the period and territory ruled by the Romans following Octavian's assumption of sole rule under the Principate in 27 BC. It included territories in Europe, North Africa, and Western Asia and was ruled by emperors. The fall of the Western Roman Empire in 476 AD conventionally marks the end of classical antiquity and the beginning of the Middle Ages.
Rome had expanded its rule to most of the Mediterranean and beyond. However, it was severely destabilized in civil wars and political conflicts which culminated in the victory of Octavian over Mark Antony and Cleopatra at the Battle of Actium in 31 BC, and the subsequent conquest of the Ptolemaic Kingdom in Egypt. In 27 BC, the Roman Senate granted Octavian overarching power (imperium) and the new title of Augustus, marking his accession as the first Roman emperor of a monarchy with Rome as its sole capital. The vast Roman territories were organized in senatorial and imperial provinces.
The first two centuries of the Empire saw a period of unprecedented stability and prosperity known as the Pax Romana (lit. 'Roman Peace'). Rome reached its greatest territorial expanse under Trajan (r. 98–117 AD); a period of increasing trouble and decline began under Commodus (r. 180–192). In the 3rd century, the Empire underwent a crisis that threatened its existence, as the Gallic and Palmyrene Empires broke away from the Roman state, and a series of short-lived emperors led the Empire. It was reunified under Aurelian (r. 270–275). Diocletian set up two different imperial courts in the Greek East and Latin West in 286; Christians rose to power in the 4th century after the Edict of Milan. The imperial seat moved from Rome to Byzantium in 330, renamed Constantinople after Constantine the Great. The Migration Period, involving large invasions by Germanic peoples and by the Huns of Attila, led to the decline of the Western Roman Empire. With the fall of Ravenna to the Germanic Herulians and the deposition of Romulus Augustus in 476 AD by Odoacer, the Western Roman Empire finally collapsed. The Eastern Roman Empire survived for another millennium with Constantinople as its sole capital, until the city's fall in 1453.
Due to the Empire's extent and endurance, its institutions and culture had a lasting influence on the development of language, religion, art, architecture, literature, philosophy, law, and forms of government across its territories. Latin evolved into the Romance languages while Medieval Greek became the language of the East. The Empire's adoption of Christianity resulted in the formation of medieval Christendom. Roman and Greek art had a profound impact on the Italian Renaissance. Rome's architectural tradition served as the basis for Romanesque, Renaissance and Neoclassical architecture, influencing Islamic architecture. The rediscovery of classical science and technology (which formed the basis for Islamic science) in medieval Europe contributed to the Scientific Renaissance and Scientific Revolution. Many modern legal systems, such as the Napoleonic Code, descend from Roman law. Rome's republican institutions have influenced the Italian city-state republics of the medieval period, the early United States, and modern democratic republics."
|
http://arxiv.org/abs/2409.02288v1 | 20240903205751 | Interface dynamics of wet active systems | [
"Fernando Caballero",
"Ananyo Maitra",
"Cesare Nardini"
] | cond-mat.soft | [
"cond-mat.soft",
"cond-mat.stat-mech"
] | |
http://arxiv.org/abs/2409.03252v1 | 20240905050903 | Gr-IoU: Ground-Intersection over Union for Robust Multi-Object Tracking with 3D Geometric Constraints | [
"Keisuke Toida",
"Naoki Kato",
"Osamu Segawa",
"Takeshi Nakamura",
"Kazuhiro Hotta"
] | cs.CV | [
"cs.CV",
"68T45, 68U10, 93E11",
"I.2.10; I.4.8; I.5.4"
] |
Meijo University, 1-501 Shiogamaguchi, Tempaku-ku, Nagoya 468-8502, Japan Chubu Electric Power Co., Inc., 1-1 Higashishin-cho, Higashi-ku, Nagoya 461-8680, Japan
Gr-IoU: Ground-Intersection over Union for Robust Multi-Object Tracking with 3D Geometric Constraints
Keisuke Toida10009-0006-4873-3651 Naoki Kato20009-0004-3815-0829 Osamu Segawa20009-0000-2469-6098 Takeshi Nakamura20009-0001-4991-3383 Kazuhiro Hotta10000-0002-5675-8713
September 9, 2024
==============================================================================================================================================================================
§ ABSTRACT
We propose a Ground IoU (Gr-IoU) to address the data association problem in multi-object tracking.
When tracking objects detected by a camera, it often occurs that the same object is assigned different IDs in consecutive frames, especially when objects are close to each other or overlapping.
To address this issue, we introduce Gr-IoU, which takes into account the 3D structure of the scene.
Gr-IoU transforms traditional bounding boxes from the image space to the ground plane using the vanishing point geometry.
The IoU calculated with these transformed bounding boxes is more sensitive to the front-to-back relationships of objects, thereby improving data association accuracy and reducing ID switches.
We evaluated our Gr-IoU method on the MOT17 and MOT20 datasets, which contain diverse tracking scenarios including crowded scenes and sequences with frequent occlusions.
Experimental results demonstrated that Gr-IoU outperforms conventional real-time methods without appearance features.
§ INTRODUCTION
Multi-Object Tracking (MOT) is a significant and extensively studied problem in computer vision.
The primary goal of MOT is to consistently track multiple objects in a video sequence, assigning unique IDs to each object and accurately tracking their movements across frames.
However, MOT presents several technical challenges.
One of the major issues in MOT is data association errors.
When objects are close to each other or overlapping, the same object may be assigned different IDs in consecutive frames, causing ID switches and reducing the tracking accuracy.
In this work, we introduce new constraints to address these issues, focusing on objects that move on the ground plane (e.g., people and vehicles) and therefore, by their physical nature, cannot overlap in 3D space.
We propose a novel matching method that incorporates the 3D structure of the scene.
Our proposed Gr-IoU (Ground-Intersection over Union) leverages vanishing point geometry to transform traditional bounding boxes from image space to the ground plane.
Transformed bounding boxes are visualized in <ref>.
The IoU computed with these transformed bounding boxes is more sensitive to the spatial relationships between objects, thereby improving data association accuracy and reducing ID switches.
We conducted experiments on both the MOT17<cit.> and MOT20<cit.> datasets to evaluate the performance of our proposed Gr-IoU method.
Experimental results demonstrated that our approach outperforms conventional methods that do not utilize appearance features.
Specifically, our method shows significant improvements in reducing ID switches and increasing tracking accuracy across various scenarios, including crowded scenes and those with frequent occlusions.
These findings indicate that Gr-IoU which incorporates 3D geometric information, is effective for tracking multiple objects.
This paper is organized as follows. <Ref> describes related works. In <ref>, the details of the proposed method is explained. Experimental results are shown in <ref>. Finally, we describe conclusions and future works in <ref>.
§ RELATED WORKS
Data association is a critical component in MOT, involving the matching of detected objects across consecutive frames while maintaining consistent object identities.
Various methods<cit.> have been developed to address the challenges of data association.
The Kalman filter<cit.> is widely used for predicting the future positions of objects based on their previous states.
The Hungarian algorithm<cit.>, on the other hand, is employed to solve the assignment problem, matching predicted positions with detected objects.
Although they are effective in simple scenarios, these methods can struggle with occlusions and complex interactions between objects.
With the advent of deep learning, several learning-based approaches have been proposed to improve data association in MOT.
DeepSORT<cit.>, for instance, extends the traditional SORT<cit.> algorithm by integrating deep learning-based appearance features.
It employs a convolutional neural network (CNN) to extract appearance features of detected objects, which are then combined with motion information to enhance data association.
This method has demonstrated significant improvements in tracking performance, particularly in scenarios with visually similar objects.
Other deep learning-based methods, such as <cit.>, have also shown promising results in terms of tracking accuracy.
However, these approaches often come with a significant computational cost, limiting their applicability in real-time scenarios.
The high computational requirements of deep neural networks, especially when processing high-resolution video streams, can lead to substantial latency, making them impractical for many real-world applications that demand real-time performance.
Our proposed Gr-IoU method aims to address the limitations of both traditional and deep learning-based approaches.
By incorporating 3D scene structure into the data association process, Gr-IoU improves the tracking accuracy without the computational overhead associated with deep learning methods.
This geometric approach allows for efficient processing, making it suitable for real-time applications while maintaining robust performance in complex scenes with frequent occlusions and dense object interactions.
Gr-IoU leverages vanishing point geometry to transform bounding boxes from image space to the ground plane, enabling more accurate spatial reasoning.
This approach not only improves data association accuracy but also maintains computational efficiency, striking a balance between performance and real-time applicability that is often challenging to achieve with deep learning-based methods.
§ PROPOSED METHOD
<Ref> illustrates the architecture of our proposed tracking method, which incorporates the Ground IoU (Gr-IoU) method.
It is designed to be able to assign correct IDs of close or overlapping objects without using visual features.
In <ref>, we provide the overview of pipeline for multi-object tracking, outlining the fundamental steps and components involved in the tracking process.
<Ref> introduces and explains in detail our novel Gr-IoU (Ground-Intersection over Union) algorithm.
§.§ Pipeline
We follows the tracking-by-detection paradigm.
Tracking-by-detection is a widely used framework in multi-object tracking (MOT).
This approach involves two main steps.
First, objects are detected in each frame of a video sequence; then, these detections are linked across consecutive frames to form consistent object trajectories.
In our tracking system, we first use an off-the-shelf object detector (e.g., YOLOX<cit.>) to generate bounding boxes for objects in each frame.
Then, our proposed tracking system takes the detected bounding boxes as input and generates a continuous object trajectory as the final tracking result.
Our tracking system incorporates a Kalman filter<cit.> as a motion model to linearly predict the motion of objects.
The state variables of the Kalman filter are expressed by the following equation:
x = [x_c, y_c, h, a, ẋ_c, ẏ_c, ḣ, ȧ]
where (x_c, y_c) are the center coordinates of the bounding box, h is the height, a is the aspect ratio, and (ẋ_c, ẏ_c, ḣ, ȧ) represent their respective velocities.
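A sketch of this constant-velocity Kalman filter, using the filterpy library, is given below; the noise settings are illustrative only and are not the values used in our experiments.

import numpy as np
from filterpy.kalman import KalmanFilter

kf = KalmanFilter(dim_x=8, dim_z=4)
dt = 1.0                               # one frame per step
kf.F = np.eye(8)
for i in range(4):
    kf.F[i, i + 4] = dt                # position components += velocity * dt
kf.H = np.eye(4, 8)                    # only (x_c, y_c, h, a) are observed
kf.P[4:, 4:] *= 1000.0                 # large initial velocity uncertainty
kf.R *= 10.0                           # measurement noise (illustrative)

# For each frame: predict, then update with the matched detection
# z = [x_c, y_c, h, a] of the associated bounding box.
kf.predict()
# kf.update(z)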
To improve data association accuracy, we utilize our proposed Gr-IoU method to calculate the cost matrix.
This geometric approach enhances the spatial reasoning capabilities of the tracker.
Details are given in <ref>.
Finally, we employ the Hungarian algorithm to solve the assignment problem based on this cost matrix, effectively linking detections across frames.
§.§ Ground-IoU
The main contribution proposed in this paper is Ground-Intersection over Union (Gr-IoU).
Gr-IoU addresses a key issue in traditional tracking methods: redundancy in the cost matrix caused by occlusion or when objects appear close together in 2D image space.
In traditional 2D approaches, when objects overlap or occlude each other, the IoU cost matrix often fails to accurately represent the real spatial relationships.
Gr-IoU significantly mitigates this issue by incorporating 3D geometric information by projecting bounding boxes onto the ground.
This projection is performed using the vanishing point estimated in the initial frame.
Gr-IoU is calculated using the rectangle formed by the four projected points, as shown in <ref>.
The left side of <ref> shows the conventional IoU calculation, which uses the bounding box coordinates in camera space. In contrast, our proposed method, shown on the right side of <ref>, uses bounding box coordinates projected onto the ground plane to calculate IoU. By operating on the ground plane, our method is robust to close and occluded objects.
Let the XY-coordinates of the four points of the bounding box in camera space obtained by the detector be denoted as (i_tl, i_tr, i_bl, i_br).
The XY-coordinates of these four points transformed to project onto the ground plane are represented as (o_tl, o_tr, o_bl, o_br).
Given the vanishing point coordinates (x_vp, y_vp), the transformation of each point can be expressed by <ref>.
(o_tl, o_tr, o_bl, o_br) = (t_tl, t_tr, i_bl, i_br)
where transformed points t_tl, t_tr are calculated as in <ref>.
t(x_t, y_t) = (x_i + du_x, y_i + du_y)
where i(x_i, y_i) is a bottom input point, t(x_t, y_t) is a transformed point, (u_x, u_y) is a unit vector in the direction of the vanishing point, and d is a parameter controlling how far the new point is from the original point.
The unit vector (u_x, u_y) is calculated as in <ref>.
(u_x, u_y) = ( (x_vp - x_i) / √((x_vp - x_i)^2 + (y_vp - y_i)^2), (y_vp - y_i) / √((x_vp - x_i)^2 + (y_vp - y_i)^2) )
This transformation produces a trapezoid whose far edge is the bottom edge of the bounding box shifted towards the vanishing point, better representing the object's footprint on the ground plane. The bottom points remain unchanged, anchoring the object to its original position in the image.
In our experiments, we set d = 0.3h (where h is the height of the bounding box) so that it is proportional to the size of the bounding box.
We also add a scaling buffer to make the cost matrix more responsive.
This projection helps resolve ambiguity in object associations and allows for a more accurate representation of spatial relationships even in complex scenarios.
As a result, Gr-IoU reduces the likelihood of tracking errors caused by misleading camera space similarity scores, especially for partially occluded or closely spaced objects.
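A minimal sketch of the Gr-IoU computation, following the transformation equations above as written (the transformed top corners are obtained from the bottom corners shifted towards the vanishing point), is given below; the scaling buffer mentioned above is omitted, and shapely is used for the polygon intersection and union.

import numpy as np
from shapely.geometry import Polygon

def ground_quad(box, vanishing_point, d):
    """Ground-plane quadrilateral of a bounding box (x1, y1, x2, y2),
    with y increasing downwards: the bottom corners stay fixed and
    shifted copies of them form the far edge."""
    x1, y1, x2, y2 = box
    bl, br = np.array([x1, y2], float), np.array([x2, y2], float)
    vp = np.asarray(vanishing_point, float)

    def towards_vp(p):
        u = (vp - p) / np.linalg.norm(vp - p)     # unit vector towards the VP
        return p + d * u                          # point shifted by distance d

    # far-left, far-right, near-right, near-left
    return Polygon([towards_vp(bl), towards_vp(br), br, bl])

def gr_iou(box_a, box_b, vanishing_point, d_frac=0.3):
    """IoU of the two ground-plane quadrilaterals, with d = d_frac * height."""
    quads = [ground_quad(b, vanishing_point, d_frac * (b[3] - b[1]))
             for b in (box_a, box_b)]
    inter = quads[0].intersection(quads[1]).area
    union = quads[0].union(quads[1]).area
    return inter / union if union > 0 else 0.0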
§ EXPERIMENTS
This section describes our experimental setup, evaluation methodology, and results.
In <ref>, we detail the datasets used in the experiments and the specific conditions under which they were employed.
Then, in <ref>, we explain the quantitative measures used to assess the performance of our Gr-IoU method and compare it with existing trackers.
Next, in <ref>, we present a comparison between our proposed Gr-IoU method and baseline trackers.
This subsection includes a detailed analysis of the results, highlighting the strengths and limitations of our approach.
§.§ Datasets
Our experiments were conducted on the MOT17<cit.> and MOT20<cit.> training datasets with private detections produced by the YOLOX<cit.> ablation model from <cit.>.
These datasets provide diverse scenarios for evaluating multi-object tracking algorithms.
For the MOT17 dataset, we evaluate the performance of Gr-IoU using only static camera sequences and vanishing points estimated by ELSED<cit.> and RANSAC<cit.>.
We estimate the vanishing point in the first frame of each sequence and use it for all subsequent frames.
In contrast, for the MOT20 dataset, we adopt a simplified approach where the vanishing point is set to (image_width/2, 0).
This decision was made because MOT20 contains sequences captured by non-static cameras, where per-frame vanishing point estimation can be computationally expensive and potentially unreliable.
By using a fixed vanishing point at the top center of the image, we maintain a reasonable approximation of perspective while avoiding the computational overhead of dynamic vanishing point estimation.
This adaptive approach to vanishing point determination allows our Gr-IoU method balancing accuracy and computational efficiency.
§.§ Metrics
In this section, we describe the evaluation metrics used to evaluate the performance of our proposed Gr-IoU method on the MOT17 dataset.
The primary metric is Multi-Object Tracking Accuracy (MOTA)<cit.>, which provides an overall measure of tracking accuracy by combining false positives, false negatives, and identity switches, thereby penalizing both detection and association errors.
MOTA is a comprehensive metric that reflects the general performance of a tracking system, with higher scores indicating better tracking performance.
In addition to MOTA, we utilize other standard metrics commonly used in multi-object tracking evaluation, including Identity F1 Score (IDF1), Identity Precision (IDP), Identity Recall (IDR), Mostly Tracked (MT), Mostly Lost (ML), Partly Tracked (PT), and Identity Switches (IDsw).
§.§ Results
§.§.§ Comparison on MOT17 training dataset with private detections.
<Ref> compares our Gr-IoU to mainstream MOT methods on the training sets in MOT17 dataset.
We compared our method with ByteTrack<cit.>, which is renowned for its high accuracy among tracking methods that do not utilize appearance features.
Gr-IoU achieves the highest overall MOTA (79.10%) and IDF1 (80.81%), indicating superior tracking accuracy and identity consistency compared to SORT and ByteTrack.
Notably, Gr-IoU also exhibits the highest IDP (87.38%), highlighting its precision in identity matching.
Additionally, Gr-IoU shows a significant reduction in identity switches, further underscoring its effectiveness in maintaining consistent object identities over time.
These results confirm that incorporating 3D constraints through Gr-IoU enhances the overall performance of multi-object tracking.
§.§.§ Comparison in MOT20 training dataset with private detections.
<Ref> illustrates the advantages of the simplified version of our proposed Gr-IoU, denoted as Gr-IoU†, when evaluated on the MOT20 training dataset with private detections.
Gr-IoU† achieves higher overall MOTA (65.60%) and IDF1 (70.35%), indicating improved tracking accuracy and identity consistency compared to ByteTrack.
Additionally, Gr-IoU† demonstrates superior IDP (74.22%) and IDR (66.86%), reflecting its effectiveness in identity matching and recall.
Notably, Gr-IoU† also reduces the number of identity switches (1,372 to 1,187), emphasizing its capability to maintain consistent object identities over time.
These results highlight that even the simplified version of Gr-IoU, with a fixed vanishing point, enhances the performance of multi-object tracking on the challenging MOT20 dataset.
§.§.§ Ablation studies and other experiments
We performed additional experiments on the MOT17 training dataset to evaluate the sensitivity of our method to the parameter d. <Ref> shows the impact of different values of d on MOTA and IDsw.
For d ranging from 0.1h to 0.4h, the MOTA remains relatively stable.
However, when d exceeds 0.5h, the MOTA begins to deteriorate, suggesting that larger values of d lead to a decline in tracking performance.
These results suggest that an excessively large value of d causes the transformed bounding box to expand unnecessarily, as shown in <Ref>, so that it overlaps with other transformed boxes and no longer accurately represents the spatial relationships between objects.
<Ref> shows the IoU distribution used in calculating the cost matrix visualized as a histogram.
The vertical axis represents frequency, and the horizontal axis represents IoU value, focusing on the distribution where the IoU is close to 1.
<Ref> illustrates the distribution of conventional IoU values in image space, while <ref> depicts the distribution of Gr-IoU values.
These histograms demonstrate that the proposed Gr-IoU method mitigates the concentration of values near 1, resulting in a more uniform distribution across the range of IoU values.
A cost matrix with numerous IoU values close to 1 adversely affects the optimization of the Hungarian algorithm<cit.>, which performs one-to-one data association.
This is because IoU values close to 1 are difficult to distinguish and introduce ambiguity in the association process.
The improved distribution of IoU values in the cost matrix achieved by the Gr-IoU method likely enhances the solvability of the assignment problem.
Consequently, this improvement contributes to a reduction in identity switches (IDsw).
<Ref> shows a comparative analysis of Gr-IoU and ByteTrack with per-frame bounding box visualizations. In the ByteTrack results, we observe an ID switch in the frames immediately before and after a detection error. In contrast, Gr-IoU avoids this ID switch and maintains better tracking consistency across frames with detection uncertainty.
§ CONCLUSION
In this paper, we proposed Gr-IoU, a novel approach to address data association errors in MOT.
Gr-IoU incorporates 3D constraints and transforms the coordinates of detected bounding boxes onto the ground plane, thereby computing IoU based on these transformed rectangles.
This method alleviates redundancy in the cost matrix and enhances matching efficiency.
Experiments on the MOT17 and MOT20 datasets show that Gr-IoU outperforms existing methods such as SORT and ByteTrack in terms of MOTA, IDF1, and other key metrics.
As future work, there is room for improvement in the method for selecting the parameter d.
In addition, further improvements in accuracy are expected by applying camera calibration and incorporating appearance features into Gr-IoU.
|
http://arxiv.org/abs/2409.03220v1 | 20240905033605 | FairQuant: Certifying and Quantifying Fairness of Deep Neural Networks | [
"Brian Hyeongseok Kim",
"Jingbo Wang",
"Chao Wang"
] | cs.LG | [
"cs.LG",
"cs.SE"
] |
FairQuant: Certifying and Quantifying Fairness of Deep Neural Networks
Brian Hyeongseok Kim
University of Southern California
Los Angeles, USA
Jingbo Wang
Purdue University
West Lafayette, USA
Chao Wang
University of Southern California
Los Angeles, USA
Received 16 July 2024; accepted 04 September 2024
==============================================================================================================================================================================================================================================================
§ ABSTRACT
We propose a method for formally certifying and quantifying individual fairness of deep neural networks (DNN). Individual fairness guarantees that any two individuals who are identical except for a legally protected attribute (e.g., gender or race) receive the same treatment. While there are existing techniques that provide such a guarantee, they tend to suffer from lack of scalability or accuracy as the size and input dimension of the DNN increase.
Our method overcomes this limitation by applying abstraction to a symbolic interval based analysis of the DNN followed by iterative refinement guided by the fairness property. Furthermore, our method lifts the symbolic interval based analysis from conventional qualitative certification to quantitative certification, by computing the percentage of individuals whose classification outputs are provably fair, instead of merely deciding if the DNN is fair.
We have implemented our method and evaluated it on deep neural networks trained on four popular fairness research datasets. The experimental results show that our method is not only more accurate than state-of-the-art techniques but also several orders-of-magnitude faster.
§ INTRODUCTION
The problem of certifying the fairness of machine learning models is more important than ever due to strong interest in applying machine learning to automated decision making in various fields from banking <cit.> and healthcare <cit.> to public policy <cit.> and criminal justice <cit.>. Since the decisions are socially sensitive, it is important to certify that the machine learning model indeed treats individuals or groups of individuals fairly.
However, this is challenging when the model is a deep neural network (DNN) with a large number of hidden parameters and complex nonlinear activations.
The challenge is also exacerbated as the network size and input dimension increase.
In this work, we aim to overcome the challenge by leveraging abstract interpretation techniques to certify fairness both qualitatively and quantitatively.
Our work focuses on individual fairness which, at a high level, requires that similar individuals are treated similarly <cit.>.
Here, similar individuals are those who differ only in some legally protected input attribute (e.g., gender or race) but agree in the unprotected attributes[This notion can be understood as counterfactual fairness <cit.>, explained in more detail in Section <ref>.]
and being treated similarly means that the DNN generates the same classification output.
Let the DNN be a function f: X→ Y from input domain X to output range Y, where an individual x∈ X is an input and a class label y∈ Y is the output.
Assume that each input x=⟨ x_1,…,x_D⟩ is a D-dimensional vector, and x_j, where 1≤ j ≤ D, is a protected attribute.
We say that the DNN is provably fair (certified) for the entire input domain X if f(x)=f(x') holds for any two individuals x ∈ X and x' ∈ X that differ only in x_j but agree in the unprotected attributes (∀ x_i where i≠ j).
Conversely, the DNN is provably unfair (falsified) for input domain X if f(x) ≠ f(x') holds for any two individuals (x ∈ X and x'∈ X) that differ only in the protected attribute.
If the DNN is neither certified nor falsified, it remains undecided.
Given a DNN f, a protected attribute x_j, and an input domain X, a qualitative certification procedure aims to determine whether f is fair, unfair, or undecided for all x ∈ X.
Qualitative analysis is practically important because, if f is provably fair, the model may be used as is, but if f is provably unfair, the model should not be used to make decisions for any x∈ X.
When the result of qualitative analysis is undecided, however, there is a need for quantitative analysis, to compute the degree of fairness. For example, the degree of fairness may be measured by the percentage of individuals in input domain X whose classification outputs are provably fair.
Both qualitative certification and quantitative certification are hard problems for deep neural networks.
While there are many verification tools for deep neural networks, existing verifiers such as ReluVal <cit.>, DeepPoly <cit.>, and α-β-CROWN <cit.> focus on certifying perturbation robustness, which is a fundamentally different property, and cannot certify individual fairness.
To the best of our knowledge, the only existing technique for certifying individual fairness of a DNN is Fairify <cit.>. However, since it directly analyzes the behavior of a DNN in the concrete domain using the SMT solver, the computational cost is extremely high; as a result, Fairify can only certify tiny networks.
Furthermore, it cannot quantify the degree of fairness.
Prior works on quantitative analysis of fairness focus either on testing <cit.>, which does not lead to sound certification, or on statistical parity <cit.>, which concerns another type of fairness, group fairness, that differs significantly from individual fairness.
To fill the gap, we propose the first scalable method for certifying and quantifying individual fairness of a DNN.
Our method, named FairQuant, takes a certification problem (consisting of the DNN f, protected attribute x_j, and input domain X) as input and returns one of the following three outputs:
(1) certified (fair) for all input x∈ X;
(2) falsified (unfair) for all input x∈ X;
or (3) undecided, meaning that f is neither 100% fair nor 100% unfair.
In the third case, our method also computes the percentage of inputs in X whose classification outputs are provably fair.
More specifically, our method provides a lower bound of the certified percentage, which can guarantee that the DNN meets a certain requirement, e.g., the DNN is individually fair for at least 80% of all inputs in X.
As shown in Fig. <ref>, FairQuant iterates through three steps: abstraction (forward analysis), refinement (backward analysis), and quantification (rate computation).
Assuming that the legally protected attribute x_j has two possible values (e.g., male and female), forward analysis tries to prove that, for each x in the input partition P (which is the entire input domain X initially), flipping the value of the protected attribute of x does not change the model's output.
This is accomplished by propagating two symbolic input intervals I (∀ x∈ P that are male) and I' (∀ x'∈ P that are female) to compute the two corresponding output intervals that are overapproximated.
If the classification labels (for all x and x') are the same, our method returns certified (fair).
On the other hand, if the classification labels (for all x and x') are different, our method returns falsified (unfair).
In these two cases, 100% of the inputs in the partition P are resolved.
Otherwise, we perform refinement (backward analysis) by splitting P into partitions P_l and P_u and apply forward analysis to each of these new partitions.
Since smaller partitions often lead to smaller approximation errors, refinement has the potential to increase the number of certified (or falsified) inputs and decrease in the number of undecided inputs.
To ensure that our method terminates quickly, we propose two early termination conditions based on the refinement depth of each partition P⊆ X. The refinement depth is the number of times X is partitioned to produce P.
There are two predefined thresholds.
Once the refinement depth exceeds the higher threshold, we classify the partition P as undecided and avoid splitting it further.
But if the refinement depth exceeds the lower threshold without exceeding the higher threshold, we use random sampling to try to find a concrete example x∈ P that violates the fairness property. If such a counterexample is found, we classify P as undecided and avoid splitting it further.
Otherwise, we keep splitting P into smaller partitions.
Our method differs from the vast majority of existing techniques for neural network verification, which focus on robustness verification of a single neural network <cit.>. In contrast, our method focuses on certifying individual fairness by simultaneously executing the network twice symbolically, once for ∀ x∈ X and another for ∀ x'∈ X (we further explain in Section <ref> why directly using existing robustness verifiers to certify fairness does not work in practice).
Our method also differs from prior works on differential verification <cit.> or equivalence verification <cit.>, which focus on proving the behavioral difference or equivalence of two different networks, given the same input. In our case, we deal with only one network model f, with different inputs defined by our fairness property.
We have evaluated our method on a large number of deep neural networks trained using four widely-used datasets for fairness research: Bank <cit.> (for predicting marketing), German <cit.> (for predicting credit risk), Adult <cit.> (for predicting earning power),
and Compas <cit.> (for predicting recidivism risk).
For comparison, we apply Fairify <cit.> since it represents the current state-of-the-art in certifying individual fairness; we also apply α-β-CROWN since it is currently the best robustness verifier for deep neural networks.
Our results show that α-β-CROWN is not effective in certifying individual fairness. As for Fairify, our method significantly outperforms it in terms of both accuracy and speed for all DNN benchmarks.
In fact, FairQuant often completes certification in seconds, whereas Fairify often times out after 30 minutes and certifies nothing or only a tiny fraction of the entire input domain.
To summarize, this paper makes the following contributions:
* We propose the first scalable method for certifying and quantifying individual fairness of DNNs using symbolic interval based analysis techniques.
* For forward analysis, we propose techniques for more accurately deciding if the DNN is fair/unfair for all inputs in an input partition.
* For refinement, we propose techniques for more effectively deciding how to split the input partition.
* For quantification, we propose techniques for efficiently computing the percentages of inputs whose outputs can be certified and falsified.
* We demonstrate the advantages of our method over the current state-of-the-art on a large number of DNNs trained using four popular fairness research datasets.
The remainder of this paper is organized as follows. First, we motivate our work in Section <ref> using examples. Then, we present the technical background in Section <ref>. Next, we present the high-level procedure of our method in Section <ref>, followed by detailed algorithms of the abstraction, refinement, and quantification subroutines in Sections <ref>, <ref> and <ref>. We present the experimental results in Section <ref>, review the related work in Section <ref>, and finally give our conclusions in Section <ref>.
§ MOTIVATION
In this section, we use an example to illustrate the limitations of existing methods.
§.§ The Motivating Example
Fig. <ref> (left) shows a DNN for making hiring decisions. It has three input nodes (i_1,i_2 and i_3), two hidden neurons (h_1 and h_2) and one output node (o).
The values of h_1 and h_2 are computed in two steps: first, the values of i_1,i_2 and i_3 are multiplied by the edge weights before they are added up; then, the result is fed to an activation function. For instance, the activation function may be ReLU(z) = max(0,z).
The output of the entire network f is based on whether the value of o is above 0; that is, positive label is generated if o > 0; otherwise, negative label is generated.
The DNN takes an input vector x with three attributes:
x_1 is the interview score of the job applicant,
x_2 is the gender (0 for female and 1 for male), and
x_3 is the number of years of experience.
Furthermore, x_2 is the protected attribute while x_1 and x_3 are unprotected attributes.
In general, the input domain may be unbounded, e.g., when some attributes are real-valued variables.
However, for illustration purposes, we assume that the input domain is X = {x | x_1∈{1,2,3,4,5}, x_2∈{0,1}, x_3∈{0,1,2,3,4,5}}, meaning that X has a total of 5× 2× 6 = 60 individuals.
Consider the individual x = ⟨ 5, 0, 5⟩, meaning that x_1=5, x_2=0 and x_3=5. According to the DNN in Fig. <ref>, the output is the positive label.
After flipping the value of the protected attribute x_2 from 0 to 1, we have the individual x' = ⟨ 5, 1, 5⟩, for which the DNN's output is also the positive label.
Since the DNN's output is oblivious to the gender attribute, we say that it is fair for this input x.
Consider another individual x = ⟨ 1, 0, 5 ⟩ whose gender-flipped counterpart is x' = ⟨ 1, 1, 5 ⟩. Since the DNN produces the negative label as output for both, it is still fair for this input x.
To summarize, the DNN f may be fair regardless of whether a particular x ∈ X receives a positive or negative output; as long as x receives the same label as its counterpart x', the DNN is considered fair.
In contrast, since the individual x = ⟨ 1, 0, 3 ⟩ and its counterpart x' = ⟨ 1, 1, 3 ⟩ receive different outputs from f, where x gets the positive label but x' gets the negative label, the DNN is not fair for this input x. Furthermore, this pair (x, x') serves as a counterexample.
In general, a machine learning model may be fair for some, but not all, individuals of the input domain.
However, this observation seems to be overlooked by prior work on certifying individual fairness, which focuses on searching for counterexamples or proving that they do not exist.
Such qualitative techniques may not be useful if almost all of the DNNs in practice are simply declared as “not fair”.
In contrast, we emphasize in this work the need to quantitatively measure the degree of fairness when a DNN is neither 100% fair nor 100% unfair, by computing the percentages of inputs whose classification outputs we can prove as fair or unfair.
§.§ Limitations of Prior Work
One possible solution to the fairness certification problem as defined above would be explicit enumeration of the (x,x') pairs. For each x∈ X, we may flip its protected attribute to generate x' and then check if f(x)=f(x').
However, since the size of the input domain X may be extremely large or infinite, this method would be prohibitively expensive in practice.
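To make the cost of enumeration concrete, the sketch below exhaustively checks the toy hiring network; the weights are reconstructed from the symbolic expressions reported later for the running example (e.g., 2x_1 + 0.5x_2 + 1.2x_3 for h_1, and output weights 0.2 and -1), so the code is our illustration rather than an artifact of the paper. Even here the check must visit all 30 pairs, and realistic input domains are astronomically larger.

from itertools import product

def relu(z):
    return max(0.0, z)

def toy_dnn(x1, x2, x3):
    # The 3-2-1 ReLU network of the running example (weights reconstructed
    # from the symbolic expressions given in Section 5.3).
    h1 = relu(2.0 * x1 + 0.5 * x2 + 1.2 * x3)
    h2 = relu(-0.2 * x1 + 0.7 * x2 + 0.4 * x3)
    o = 0.2 * h1 - 1.0 * h2
    return 1 if o > 0 else 0  # positive vs. negative label

def enumerate_unfair_inputs(f):
    # Flip the protected attribute x2 for every (x1, x3) combination.
    unfair = []
    for x1, x3 in product(range(1, 6), range(0, 6)):
        if f(x1, 0, x3) != f(x1, 1, x3):
            unfair.append((x1, x3))
    return unfair  # includes (1, 3), matching the counterexample above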
Another possible solution is to leverage existing DNN robustness verifiers, such as ReluVal <cit.>, DeepPoly <cit.>, and α-β-CROWN <cit.>. However, since robustness and individual fairness are fundamentally different properties, applying a robustness verifier would not work well in practice.
The reason is because a robustness verifier takes an individual x and tries to prove that small perturbation of x (often defined by ||x-x'||<δ, where δ is a small constant) does not change the output label.
However, during fairness certification, we are not given a concrete individual x; instead, we are supposed to check for all x ∈ X and x' ∈ X, where x_j ≠ x'_j.
If we force a robustness verifier to take a symbolic input I (∀ x∈ X), it would try to prove that the DNN produces the same output label for all inputs in X (implying that the DNN makes the same decision for all inputs in X).
Recall our example network f in Fig. <ref>. While our method can prove that f is fair for an input domain that contains x=⟨ 5,0,5 ⟩ and x'=⟨ 5,1,5⟩ (both receive a positive outcome) as well as x=⟨ 1,0,5 ⟩ and x'=⟨ 1,1,5⟩ (both receive a negative outcome), this cannot be accomplished by a robustness verifier (since it is almost never possible for all individuals in the input domain to have the same outcome).
The only currently available method for (qualitatively) certifying individual fairness of a DNN is Fairify <cit.>, which relies on the SMT solver and may return one of the following results:
SAT (meaning that there exists a counterexample that violates the fairness property), UNSAT (meaning that there is no counterexample), or UNKNOWN (meaning that the result remains inconclusive).
The main problem of Fairify is that it works directly in the concrete domain by precisely encoding the non-linear computations inside the DNN as logical formulas and solving these formulas using the SMT solver.
Since each call to the SMT solver is NP-complete, the overall computational cost is high.
Although Fairify attempts to reduce the computational cost by partitioning the input domain a priori and heuristically pruning logical constraints, it does not scale as the network size and input dimension increase. Indeed, our experimental evaluation of Fairify shows that only tiny networks (with ≤ 100 neurons) can be certified.
§.§ Novelty of Our Method
We overcome the aforementioned (accuracy and scalability) limitations by developing a method that is both scalable and able to quantify the degree of fairness.
First, FairQuant relies on abstraction to improve efficiency/scalability while maintaining the accuracy of symbolic forward analysis. This increases the chance of quickly certifying more input regions as fair or falsifying them as unfair, and decreases the chance of leaving them as undecided.
Specifically, we use symbolic interval analysis, instead of the SMT solver used by Fairify. The advantage is that symbolic interval analysis focuses on the behavior of the DNN in an abstract domain, which is inherently more efficient and scalable than analysis in the concrete domain.
Second, FairQuant relies on iterative refinement (partitioning of the input domain) to improve the accuracy of forward analysis. Instead of creating input partitions a priori, it conducts iterative refinement on a “need-to” basis guided by the fairness property to be certified.
This makes it more effective than the static partitioning technique of Fairify, which divides the input domain into a fixed number of equal chunks even before verifying any of them.
To see why iterative refinement can improve accuracy, consider our running example in Fig. <ref>. Initially, forward analysis is applied to the DNN in the entire input domain X, for which the certification result is undecided.
During refinement, our method would choose x_1 (over x_3) to split, based on its impact on the network's output. After splitting x_1∈{1,2,3,4,5} into x_1∈{1,2,3} and x_1∈{4,5}, we apply forward analysis to each of these two new partitions.
As shown in Fig. <ref>, while the partition for x_1∈{1,2,3} remains undecided, the partition for x_1∈{4,5} is certified as fair. This partition has 12 pairs of x ∈ X and x' ∈ X, where x_2 ≠ x'_2. Therefore, from the input domain X which has 30 (x, x') pairs, we certify 12/30 = 40% as fair.
Next, we split the undecided partition x_1∈{1,2,3} into x_1∈{1,2} and x_1∈{3} and apply forward analysis to each of these two new partitions. While the first new partition remains undecided, the second one is certified as fair. Since this partition has six (x, x') pairs, it represents 6/30=20% of the input domain.
This iterative refinement process continues until one of the following two termination conditions is satisfied: either there is no more partition to apply forward analysis to, or a predetermined time limit (e.g., 30 minutes) is reached.
§ PRELIMINARIES
In this section, we review the fairness definitions as well as the basics of neural network verification.
§.§ Fairness Definitions
Let f:X→ Y be a classifier, where X is the input domain and Y is the output range. Each input x∈ X is a vector in the D-dimensional attribute space, denoted x=⟨ x_1,…,x_D⟩, where 1,…,D are vector indices. Each output y∈ Y is a class label.
Some attributes are legally protected attributes (e.g., gender and race) while others are unprotected attributes. Let 𝒫 be the set of vector indices corresponding to protected attributes in x. We say that x_j is a protected attribute if and only if j ∈𝒫.
Given a classifier f, an input x ∈ X, and a protected attribute j ∈𝒫, we say that f is individually fair for x if and only if f(x) = f(x') for any x' ∈ X that differs from x only in the protected attribute x_j.
This notion of fairness is local in the sense that it requires the classifier to treat an individual x in a manner that is oblivious to its protected attribute x_j of x.
Given a classifier f, an input domain X, and a protected attribute j ∈𝒫, we say that f is individually fair for the input domain X if and only if, for all x∈ X, f(x)=f(x') holds for any x'∈ X that differs from x only in the protected attribute x_j.
This notion of fairness is global since it requires the classifier to treat all x∈ X in a manner that is oblivious to the protected attribute x_j of x.
The method of explicit enumeration would be prohibitively expensive since the number of individuals in X may be astronomically large or infinite.
§.§ Connecting Robustness to Fairness
Perturbation robustness, which is the most frequently checked property by existing DNN verifiers, is closely related to the notion of adversarial examples. The idea is that, if the DNN's classification output were robust, then applying a small perturbation to a given input x should not change the classifier's output for x.
Given a classifier f, an input x ∈ X, and a small constant δ, we say that f is robust against δ-perturbation if and only if f(x)=f(x') holds for all x' ∈ X such that ||x-x'||≤δ.
By definition, perturbation robustness is a local property defined for a particular input x, where the set of inputs defined by ||x-x'||≤δ is not supposed to be large.
While in theory, a robustness verifier may be forced to check individual fairness by setting δ to a large value (e.g., to include the entire input domain X), it almost never works in practice.
The reason is because, by definition, such a global robustness property requires that all inputs to have the same classification output returned by the DNN – such a classifier f would be practically useless.
This observation has been confirmed by our experiments using α-β-CROWN <cit.>, a state-of-the-art DNN robustness verifier. Toward this end, we have created a merged network that contains two copies of the same network, with one input for one protected attribute group (e.g., male) and the other input for the other group (e.g., female). While the verifier finds counterexamples in seconds (and thus falsifies fairness of the DNN), it has the same limitation as Fairify: it merely declares the DNN as unsafe (unfair, in our context) as soon as it finds a counterexample, but does not provide users with any meaningful, quantitative information.
In contrast, our method provides a quantitative framework for certified fairness by reasoning about all individuals in the input domain.
§ OVERVIEW OF OUR METHOD
In this section, we present the top-level procedure of our method; detailed algorithms of the subroutines will be presented in subsequent sections.
Let the DNN y = f(x) be implemented as a series of affine transformations followed by nonlinear activations, where each affine transformation step and its subsequent nonlinear activation step constitute a hidden layer.
Let l be the total number of hidden layers; then f(x) = f_l(f_l-1(... f_2(f_1(x · W_1) · W_2) ... · W_l-1) · W_l). For each k∈ [1,l], W_k denotes the affine transformation and f_k() denotes the nonlinear activation.
More specifically, W_1 consists of the edge weights at layer 1 and x · W_1 = Σ_i x_iw_1,i. Furthermore, f_1 is the activation function, e.g., ReLU(x · W_1) = max(0, x · W_1).
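Read operationally, the composition above amounts to the forward pass sketched below; the sketch assumes ReLU on the hidden layers only, since (as described next) the raw value of the output node is compared against a threshold to produce the class label, and bias terms are omitted as in the running example.

import numpy as np

def forward(x, weights):
    # weights: list [W_1, ..., W_l]; ReLU after every layer except the last.
    z = np.asarray(x, dtype=float)
    for k, W in enumerate(weights):
        z = z @ W
        if k < len(weights) - 1:
            z = np.maximum(z, 0.0)  # ReLU activation
    return z  # raw score(s) of the output layer

def predict_label(x, weights, threshold=0.0):
    return 1 if forward(x, weights)[0] > threshold else 0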
§.§ The Basic Components
Similar to existing symbolic interval analysis based DNN verifiers <cit.>, our method consists of three basic components: forward analysis, classification, and refinement.
Forward Analysis
The goal of forward analysis is to compute upper and lower bounds of the network's output for all inputs.
It starts by assigning a symbolic interval to each input attribute. For example, in Fig. <ref>, i_1=[x_1,x_1] is symbolic, where x_1∈{1,2,3,4,5}.
Compared to concrete values, symbolic values have the advantages of making the analysis faster and more scalable. They are also sound in that they overapproximate the possible concrete values.
Classification
In a binary classifier, e.g., the DNN in Fig. <ref> for making hiring decisions, the output is a singular node o whose numerical values needs to be turned to either the positive or the negative class label based on a threshold value (say 0).
For example, if o ∈ [0.3, 0.4], the output label is guaranteed to be positive since o>0 always holds,
and if o ∈ [-0.7, -0.6], the output label is guaranteed to be negative since o<0 always holds.
However, if o ∈ [-0.7, 0.4], the output label remains undecided – this is when our method needs to conduct refinement.
Refinement
The goal of refinement is to partition the input domain of the DNN to improve the (upper and lower) symbolic bounds computed by forward analysis.
Since approximation error may be introduced when linear bounds are pushed through nonlinear activation functions (e.g., unless ReLU is always on or always off), by partitioning the input domain, we hope to increase the chance that activation functions behave similar to their linear approximations for each of the new (and smaller) input partitions, thus reducing the approximation error.
§.§ The Top-Level Procedure
Algorithm <ref> shows the top-level procedure, whose input consists of the network f, the protected attribute x_j, and the input domain X.
Together, these three parameters define the fairness certification problem, denoted ⟨ f, x_j, X ⟩.
Within the top-level procedure, we first initialize the input partition P as X and push it into the stack S. Each input partition is associated with a refinement depth. Since P is initially the entire input domain X, its refinement depth is set to 0. Subsequently, the refinement depth increments every time P is bisected to two smaller partitions. In general, the refinement depth of P⊆ X is the number of times that X is bisected to reach P.
In Lines 4-10 of Algorithm <ref>, we go through each partition stored in the stack S, until there is no partition left or a time limit is reached. For each partition P, we first apply symbolic forward analysis (Line 6) to check if the DNN f is fair for all individuals in P. There are three possible outcomes:
(1) fair (certified), meaning that f(x)=f(x') for all x∈ P and its counterpart x';
(2) unfair (falsified), meaning that f(x)≠ f(x') for all x∈ P and its counterpart x'; or
(3) undecided.
Next, if the result is undecided (Line 7), we apply backward refinement by splitting P into two disjoint new partitions P_l and P_u. By focusing on each of these smaller partitions in a subsequent iteration step, we hope to increase the chance of certifying it as fair (or falsifying it as unfair).
Finally, we quantify fairness (Line 10) by updating the percentages of certified (r_cer), falsified (r_fal) and undecided (r_und) inputs of X.
Specifically, if the previously-undecided partition P is now certified as fair, we decrease the undecided rate r_und by |P|/|X| and increase the certified rate r_cer by the same amount.
On the other hand, if P is falsified as unfair, we decrease r_und by |P|/|X| and increase the falsified rate r_fal by the same amount.
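A compressed sketch of this loop is given below. The helpers symbolic_forward, refine, and size are placeholders for the subroutines detailed in Sections 5-7 and are passed in as arguments; representing a partition as a (bounds, depth) pair on a stack is our simplification of Algorithm 1, not the paper's exact code.

import time

def fairquant_loop(f, j, X, symbolic_forward, refine, size, time_limit=1800):
    r_cer, r_fal, r_und = 0.0, 0.0, 1.0
    stack = [(X, 0)]  # (partition, refinement depth)
    deadline = time.time() + time_limit
    while stack and time.time() < deadline:
        P, depth = stack.pop()
        result = symbolic_forward(f, j, P)  # 'fair' | 'unfair' | 'undecided'
        if result == 'undecided':
            children = refine(f, P, depth)  # empty list on early termination
            stack.extend((child, depth + 1) for child in children)
            continue  # undecided mass stays in r_und for now
        share = size(P) / size(X)
        if result == 'fair':
            r_cer += share
        else:  # 'unfair'
            r_fal += share
        r_und -= share
    return r_cer, r_fal, r_und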
In the next three sections, we will present our detailed algorithms for forward analysis (Section <ref>), backward refinement (Section <ref>), and quantification (Section <ref>).
§.§ The Correctness
Before presenting the detailed algorithms, we would like to make two claims about the correctness of our method.
The first claim is about the qualitative result of forward analysis, which may be fair, unfair, or undecided.
When forward analysis declares an input partition P⊆ X as fair, the result is guaranteed to be sound in that f(x)=f(x') holds for all x∈ P and its counterpart x'.
Similarly, when forward analysis declares P as unfair, the result is guaranteed to be sound in that
f(x)≠ f(x') holds for all x∈ P and its counterpart x'.
The above soundness guarantee is because SymbolicForward soundly overapproximates the DNN's actual behavior.
That is, the upper bound UB is no smaller than the actual value, and the lower bound LB is no larger than the actual value. As a result, the symbolic interval [LB, UB] computed by SymbolicForward is guaranteed to include all concrete values.
In the next three sections, we shall discuss in more detail how the symbolic interval is used to decide if P is fair, unfair, or undecided.
When an input partition P is undecided, it means that some individuals in P may be treated fairly whereas others in P may be treated unfairly.
This brings us to the second claim about the quantitative result of our method, represented by the rates r_cer, r_fal and r_und.
The certification rate r_cer computed by our method is guaranteed to be a lower bound of the percentage of inputs whose outputs are actually fair.
Similarly, the falsified rate r_fal is a lower bound of the percentage of inputs whose outputs are actually unfair.
In other words, when our method generates the percentages of fair and unfair inputs, it guarantees that they are provable lower bounds of certification and falsification, respectively.
The reason is because SymbolicForward soundly overapproximates the actual value range. When the output intervals indicate that the model is fair (unfair) for all inputs in P, it is definitely fair (unfair). Thus, both r_cer and r_fal are guaranteed to be lower bounds.
Since the sum of the three rates is 1, meaning that r_und = 1 - r_cer - r_fal, the undecided rate r_und is guaranteed to be an upper bound.
§ SYMBOLIC FORWARD ANALYSIS
Algorithm <ref> shows our forward analysis subroutine, which takes the subproblem ⟨ f, x_j, P ⟩ as input and returns the certification result as output.
§.§ The Two Steps
Our forward analysis consists of two steps.
First, a standard symbolic interval based analysis is invoked twice, for the symbolic inputs I and I', to compute the corresponding symbolic outputs O and O'.
Second, O and O' are used to decide if the certification result is fair, unfair, or undecided.
In the first step, the symbolic input I = P|_x_j∈[0,0] is defined as the subset of input partition P where all inputs have the protected attribute x_j set to 0.
In contrast, I' = P|_x_j∈[1,1] is defined as the subset of P where all inputs have x_j set to 1.
The output O is a sound overapproximation of f(x) for x∈ I, whereas the output O' is a sound overapproximation of f(x') for x'∈ I'.
The subroutine ForwardPass used to compute O and O' is similar to any state-of-the-art neural network verifier based on symbolic interval analysis; in our implementation, we used the algorithm of ReluVal <cit.>.
In the second step, the two output intervals,
O=[O_lb,O_ub] and
O'=[O'_lb,O'_ub],
are used to compute the certification result.
To understand how it works, recall that in the concrete domain, the numerical value of the DNN's output node is compared against a threshold, say 0, to determine if the output label should be positive or negative.
In the symbolic interval abstract domain, the upper and lower bounds of the numerical values are used to determine if the model is fair, unfair, or undecided.
Below are the five scenarios:
* If O_lb > 0 and O'_lb > 0, both O and O' have the positive label, meaning that f is fair for P.
* If O_ub < 0 and O'_ub < 0, both O and O' have the negative label, meaning that f is fair for P.
* If O_lb > 0 and O'_ub < 0, O is positive but O' is negative, meaning that f is unfair for P.
* If O_ub < 0 and O'_lb > 0, O is negative but O' is positive, meaning that f is unfair for P.
* Otherwise, f remains undecided for P.
Fig. <ref> illustrates the first four scenarios above. Specifically, the horizontal line segments represent the value intervals of O and O', whose upper/lower bounds may be either >0 or <0. The vertical lines represent the threshold value 0.
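Once the symbolic bounds are concretized, the five cases translate directly into code; the sketch below assumes binary classification with a decision threshold of 0, as in the running example.

def classify_pair(o_lb, o_ub, op_lb, op_ub):
    # O = [o_lb, o_ub] bounds f(x) for I; O' = [op_lb, op_ub] bounds f(x') for I'.
    if o_lb > 0 and op_lb > 0:   # both provably positive
        return 'fair'
    if o_ub < 0 and op_ub < 0:   # both provably negative
        return 'fair'
    if o_lb > 0 and op_ub < 0:   # provably positive vs. provably negative
        return 'unfair'
    if o_ub < 0 and op_lb > 0:   # provably negative vs. provably positive
        return 'unfair'
    return 'undecided'           # bounds straddle the threshold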
We can extend our method from two protected attribute (PA) groups (e.g., male and female) to more than two PA groups.
For example, if the protected attribute x_j has three values, we will have three symbolic inputs (I,I' and I”) and three corresponding symbolic outputs (O,O' and O”).
To decide if f is fair (or unfair) in this multi-PA group setting, we check if (1) individuals in each PA group receive the same output label; and (2) the output labels for the three PA groups are the same.
We can also extend our method from binary classification to multi-valued classification.
For example, if there are three possible output labels, we will have O_1, O_2, and O_3 as the symbolic intervals for the three values for one PA group (I) and O'_1, O'_2 and O'_3 for the other PA group (I').
To decide if f is fair (or unfair) for this multi-valued classification, we check (1) which output labels are generated for I and I'; and (2) whether these two output labels (O and O') are the same.
§.§ The Running Example
For our running example in Fig. <ref>, consider the initial input partition P = X. For ease of understanding, we denote the symbolic expressions for a neuron n as S_in(n) after the affine transformation, and as S(n) after the ReLU activation.
Furthermore, S will be used for I, and S' will be used for I'.
Let
I = P|_x_j=0 and I' = P|_x_j=1.
After affine transformation in the hidden layer, we have S_in(h_1) = 2x_1 + 1.2x_3 and S'_in(h_1) = 2x_1 + 1.2x_3 + 0.5. If we concretize these symbolic expressions, we will have S_in(h_1) = [2, 16] and S'_in(h_1) = [2.5, 16.5]. Based on these concrete intervals, we know that h_1 is always active for both I and I'. Since the activation function is ReLU, we have S(h_1) = S_in(h_1) and S'(h_1) = S'_in(h_1).
For the hidden neuron h_2, we have S_in(h_2) = -0.2x_1 + 0.4x_3 and S'_in(h_2) = -0.2x_1 + 0.4x_3 + 0.7, whose corresponding concrete bounds are [-1, 1.8] and [-0.3, 2.5], respectively.
In both cases, since h_2 is nonlinear (neither always-on nor always-off), we must approximate the values using linear expressions to obtain S(h_2) and S'(h_2).
While we use the sound overapproximation method of Wang et al. <cit.>, other techniques (e.g., <cit.>) may also be used.
After overapproximating the ReLU behavior of h_2, we obtain S(h_2) = [-0.128 x_1 + 0.257 x_3, -0.128 x_1 + 0.257 x_3 + 0.643] and S'(h_2) = [-0.178 x_1 + 0.357 x_3 + 0.625, -0.178 x_1 + 0.357 x_3 + 0.893].
Finally, we compute S_in(o) = [0.528 x_1 - 0.017 x_3 - 0.643, 0.528 x_1 - 0.017 x_3] and S'_in(o) = [0.578 x_1 - 0.117 x_3 - 0.793, 0.578 x_1 - 0.117 x_3 - 0.525]. From these symbolic bounds, we obtain the concrete bounds of O = [-0.2, 2.64] and O' = [-0.8, 2.368]. Since these output intervals are not tight enough to determine the output labels for I and I', which are needed to decide if the model is fair or unfair for the partition P, the model remains undecided.
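For readers who wish to reproduce numbers of this kind, the sketch below propagates plain (non-symbolic) intervals through affine and ReLU layers. It is a simplified stand-in for the symbolic analysis adopted from ReluVal: still sound, but looser, because it drops the linear dependence on the inputs; with the weights reconstructed for the running example it yields O = [-1.4, 3.2] rather than the symbolic [-0.2, 2.64] quoted above.

import numpy as np

def affine_bounds(lb, ub, W):
    # Interval image of x @ W for x in [lb, ub], element-wise.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return lb @ W_pos + ub @ W_neg, ub @ W_pos + lb @ W_neg

def interval_forward(lb, ub, weights):
    # weights: list [W_1, ..., W_l]; ReLU after every layer except the last.
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    for k, W in enumerate(weights):
        lb, ub = affine_bounds(lb, ub, W)
        if k < len(weights) - 1:
            lb, ub = np.maximum(lb, 0.0), np.maximum(ub, 0.0)
    return lb, ub

W1 = np.array([[2.0, -0.2], [0.5, 0.7], [1.2, 0.4]])  # reconstructed weights
W2 = np.array([[0.2], [-1.0]])
print(interval_forward([1, 0, 0], [5, 0, 5], [W1, W2]))  # -> ([-1.4], [3.2])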
To improve the accuracy, we need to split P into smaller input partitions and then apply symbolic forward analysis to each partition again. How to split P will be addressed by the iterative backward refinement method presented in the next section.
§ ITERATIVE BACKWARD REFINEMENT
The goal of iterative backward refinement is to split the currently-undecided input partition P into smaller partitions, so that for each of these smaller partitions, symbolic forward analysis will obtain a more accurate result. Algorithm <ref> shows the pseudo code, which takes the network f and the partition P as input and returns two smaller partitions P_l and P_u as output. Inside this procedure, Lines 7-14 are related to splitting P, and Lines 2-6 are related to early termination conditions.
§.§ Early Termination Conditions
In Lines 2-6 of Algorithm <ref>, we check if P.depth exceeds the predefined max_refinement_depth. If the answer is yes, we avoid splitting P further. For example, if max_refinement_depth=20, it means the current partition P occupies only |P|/|X|= 1/2^20 of the entire input domain X.
By increasing the refinement depth, we can decrease the percentage of undecided inputs over X.
If P.depth has not exceeded the maximal refinement depth, we check if P.depth exceeds the predefined min_sample_depth, which is set to a value (e.g., 15) smaller than max_refinement_depth. When P.depth exceeds this threshold, we start searching for counterexamples in P via random sampling.
Inside the random sampling subroutine SampledCEX(P) shown in Line 4, we sample up to 10 concrete inputs in P and check if x and its counterpart x' satisfy f(x) ≠ f(x'). If this condition is satisfied, a counterexample is found (but P remains undecided); in this case, we increment cex_count and stop splitting P. If no counterexample is found, we continue splitting P into smaller partitions.
Note that in both early termination cases (Lines 3 and 6), the partition P will be marked as undecided since we are not able to decide whether the DNN model is fair or unfair to all individuals in P.
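A sketch of the SampledCEX subroutine is shown below. Representing a partition as a mapping from attribute index to interval, sampling uniformly within each interval, and treating the protected attribute as binary are our assumptions for illustration; the actual implementation may sample integer-valued attributes differently.

import random

def sampled_cex(f, P, j, num_samples=10):
    # P: dict mapping attribute index -> (lb, ub); j: protected attribute index.
    for _ in range(num_samples):
        x = {i: random.uniform(lo, hi) for i, (lo, hi) in P.items() if i != j}
        x0 = {**x, j: 0.0}  # one protected-attribute value
        x1 = {**x, j: 1.0}  # the flipped counterpart
        if f(x0) != f(x1):
            return (x0, x1)  # counterexample pair found; P stays undecided
    return None  # no counterexample found; keep splitting P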
§.§ Splitting Input Intervals
In Lines 7-14 of Algorithm <ref>, we split P into smaller partitions P_l and P_u by first identifying the input attribute x_i that has the largest influence on the output (Lines 8-12) and then bisecting its input interval x_i∈[lb,ub].
Our method for identifying the input attribute x_i is based on maximizing the impact of an input attribute on the network's output. One way to estimate the impact is taking the product of the gradient g(x_i) and the input range |ub(x_i)-lb(x_i)|. In the literature, the product is often called the smear value <cit.>.
Unlike existing methods such as Wang et al. <cit.>, however, our computation of the smear value is different because we must consider both inputs I and I', which may have different gradients.
Specifically, during forward analysis, we store the neuron activation information in two gradient mask matrices denoted R and R', where R[i][j] is [1,1] if the j-th neuron at i-th layer is always active, [0,0] if it is always inactive, and [0,1] if it is unknown. The neuron activation information is used later to perform backward refinement for this partition P.
During refinement, we first compute the two gradients g_I and g_I' and then take the average. Our goal is to identify the input attribute that has the largest overall influence on the network's output.
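The selection step can be summarized as follows. The per-attribute gradient bounds are assumed to come from the backward pass over the masks R and R', all arguments are per-attribute NumPy float arrays, and skipping the protected attribute (which is already fixed to its two values) is left implicit in the paper but made explicit here.

import numpy as np

def pick_split_attribute(g_lb, g_ub, gp_lb, gp_ub, lb, ub, protected):
    # Average the gradient intervals for I and I', take the largest absolute
    # bound per attribute, and weight it by the attribute's interval width.
    avg_lb = 0.5 * (g_lb + gp_lb)
    avg_ub = 0.5 * (g_ub + gp_ub)
    influence = np.maximum(np.abs(avg_lb), np.abs(avg_ub))
    smear = (influence * (ub - lb)).astype(float)
    smear[protected] = -np.inf  # never bisect the protected attribute
    return int(np.argmax(smear))

def bisect_partition(lb, ub, i):
    mid = 0.5 * (lb[i] + ub[i])
    lower_ub, upper_lb = ub.copy(), lb.copy()
    lower_ub[i], upper_lb[i] = mid, mid
    return (lb.copy(), lower_ub), (upper_lb, ub.copy())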
§.§ The Running Example
Consider our running example in Fig. <ref> again.
To compute the smear value, we start with the output layer's edge weights, which are 0.2 for h_1 and -1 for h_2.
Since the ReLU associated with h_1 is always-on, g_I(h_1) and g_I'(h_1) are set to the interval [0.2,0.2].
However, since the ReLU associated with h_2 is nonlinear, as indicated by the gradient mask matrices R and R', g_I(h_2) and g_I'(h_2) are set to the interval [-1.0, 0].
Then, we propagate these gradient intervals backwardly, to get g_I(i_1) = g_I'(i_1) = [(0.2×2)+(0×-0.2), (0.2×2)+(-1×-0.2)] = [0.4, 0.6] and g_I(i_2) = g_I'(i_2) = [-0.6, 0.1] and g_I(i_3) = g_I'(i_3) = [-0.16, 0.24].
Next, we compute the average g, based on which we compute the smear values. Since x_1 has the smear value of 0.6 × 4 = 2.4 and x_3 has the smear value of 0.24 × 5 = 1.2, we choose to partition P by bisecting the input interval of x_1.
This leads to the smaller partitions shown in Fig. <ref>.
§.§ Generalization
While we only consider ReLU networks in this paper, our refinement technique can be extended to non-ReLU activations. Recall that, by definition, ReLU(z)=0 (inactive) if z<0, and ReLU(z)=1 (active) if z>0. Let σ(z) be a non-ReLU activation function. To compute the gradient mask matrices R and R', we use thresholds (ϵ_1,ϵ_2) to approximate the on/off behavior:
the mask is [0,0] (inactive) if z<ϵ_1 and [1,1] (active) if z>ϵ_2.
Although the approximate on/off behavior of non-ReLU activation function σ(z) is not the same as the on/off behavior of ReLU(z), it serves as a practically-useful heuristic to rank the input attributes. Furthermore, this generalization will not affect the soundness of our method, since the gradient masks computed in this manner are only used for picking which input attribute to split first.
§ FAIRNESS QUANTIFICATION
We now present our method for updating the percentages of certified and falsified inputs, when the DNN model is found to be fair or unfair for the current input partition P. The pseudo code is shown in Algorithm <ref>.
There are three cases.
First, if the current partition P is found to be fair, meaning that all inputs in P are treated fairly, we compute the percentage of input domain X covered by the partition P, denoted r_P, and then add r_P to r_cer, the percentage of certified inputs.
Second, if the current partition P is found to be unfair, meaning that all inputs in P are treated unfairly, we add r_P to r_fal, the percentage of falsified inputs.
In both cases, we also subtract r_p from r_und.
Otherwise, the current partition P remains undecided and the percentages remain unchanged.
Consider our running example with input partition P defined as x_1 ∈ [4,5] ∧ x_2 ∈ [0,1] ∧ x_3 ∈ [0,5], as shown by the right child of the root node in Fig. <ref>. This partition has a total of 24 individuals, and its corresponding I = P|_x_2 = 0 and I'= P|_x_2 = 1 contain 12 individuals each. In contrast, the entire input domain X has 60 individuals, or 30 pairs of x and its counterpart x' (where x_2 ≠ x'_2).
For this input partition P, O = [1.49, 2.65] and O' = [1.0, 2.36] are the output intervals. Assuming that the decision threshold is 0, the bounds of O and O' imply that the DNN model will generate the positive label for both I and I', meaning that the DNN model is fair for all individuals in P.
Since the input partition size is 12 and the input domain size is 30, the rate r_P = 12/30 = 40%. After certifying P to be fair, we can add 40% to r_cer, the certification rate, and consequently subtract 40% from r_und, the undecided rate.
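The bookkeeping itself is only a few lines per case; the sketch below reproduces the 40% update of this example, counting sizes in (x, x') pairs and assuming the uniform population distribution discussed next.

def update_rates(result, size_P, size_X, r_cer, r_fal, r_und):
    share = size_P / size_X
    if result == 'fair':
        r_cer, r_und = r_cer + share, r_und - share
    elif result == 'unfair':
        r_fal, r_und = r_fal + share, r_und - share
    return r_cer, r_fal, r_und  # 'undecided' leaves all rates unchanged

print(update_rates('fair', 12, 30, 0.0, 0.0, 1.0))  # -> (0.4, 0.0, 0.6)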
While the above computation assumes that population distribution for each feature is uniform and thus the percentage (e.g., 40%) is computed directly from the partition size (e.g., 12) and the domain size (e.g., 30), the method can be easily extended to consider a non-uniform population distribution.
Furthermore, note that the method works regardless of whether the input attributes have integer or real values.
§ EXPERIMENTS
We have implemented FairQuant in a software tool written in C, leveraging the OpenBlas[<http://www.openblas.net>] library for fast matrix multiplication and a symbolic representation of the upper and lower bounds. Our forward analysis follows that of Wang et al. <cit.>.
For experimental comparison, we also run Fairify, which is the only currently available tool for DNN individual fairness certification.
Since Fairify cannot quantify the degree of fairness, we compute the certified/falsified/undecided rates based on its reported statistics.
It is worth noting that Fairify and our method (FairQuant) have a fundamental difference in falsification. Fairify stops and declares an input partition as SAT as soon as it finds a counterexample in that partition; thus, the number of counterexamples that it finds is always the same as the number of SAT partitions it reports. However, SAT partitions are not necessarily unfair partitions, since unfair partitions require all inputs to be counterexamples, whereas a SAT partition, excluding the one counterexample, still remains undecided.
FairQuant checks if an entire partition is unfair. Moreover, when a partition is undecided, it can minimize the number of undecided inputs by sampling for counterexamples only after reaching a deep enough refinement depth. This is made possible through our iterative refinement.
For example, in a DNN model named GC-3, Fairify finds 194 SAT partitions (together with 6 UNSAT and 1 UNKNOWN partitions). However, none of these 194 SAT partitions are unfair partitions.
Instead, the percentage of falsified inputs is close to 0% (representing 194 counterexamples out of over 435 trillion individuals in the input domain), the percentage of certified inputs is 2.985% (6 UNSAT partitions out of 201 partitions), and the rest remains undecided.
FairQuant, on the other hand, finds 25,963 counterexamples; furthermore, it is able to formally certify 58.44% of the inputs as fair.
§.§ Benchmarks
Table <ref> shows the statistics of the benchmarks, including 32 deep neural networks trained on four popular datasets for fairness research.
Among the 32 networks, 25 came from Fairify <cit.> and the other 7 were trained by us using TensorFlow. All of these networks have a single node in the output layer, to determine the binary classification result.
Columns 1-2 show the name of each dataset with its considered protected attribute (PA) and the number of input attributes.
Columns 3-6 show the name of each DNN model, its number of hidden layers, number of hidden neurons, and classification accuracy.
The accuracy for DNNs trained on Bank, German, and Adult was provided by the Fairify paper. For the models we trained using Compas, we reserved 10% of the data for testing.
All the networks coming directly from Fairify on the Bank, German, and Adult datasets are small; the largest has only 200 hidden neurons. Moreover, most of them have only 1 or 2 hidden layers. Thus, we additionally trained much larger networks, with up to 10,000 hidden neurons, using the Compas dataset.
Details of the four datasets are given as follows.
Bank <cit.> is a dataset for predicting if a bank client will subscribe to its marketing; it consists of 45,000 samples.
German <cit.> is a dataset for predicting the credit risk of a person; it consists of 1,000 samples.
Adult <cit.> is a dataset for predicting if a person earns more than $50,000; it consists of 32,561 samples.
Finally,
Compas <cit.> is a dataset for predicting the risk of recidivism; it consists of 6,172 samples.[We used the preprocessed Compas data provided by <cit.>.]
We evaluate our method using three legally-protected input attributes.
For Bank and German, we use age;
for Adult, we use gender;
and for Compas, we use race.[
For Bank and German, we use the binarized age attribute provided by Fairify. For Compas, we binarize the race attribute as {white, non-white} as in <cit.>.
]
These are consistent with Fairify and other prior works in fairness research.
§.§ Experimental Setup
We ran all experiments on a computer with 2 CPU, 4GB memory, and Ubuntu 16.04 Linux operating system.
We set a time limit of 30 minutes for each DNN model.
Our experiments were designed to answer three research questions:
* Is FairQuant more accurate than the current state-of-the-art in certifying individual fairness of a DNN model?
* Is FairQuant more scalable than the current state-of-the-art in handling DNN models, especially when the network size increases?
* Is FairQuant more effective than the current state-of-the-art in providing feedback, e.g., by quantitatively measuring the percentages of certified, falsified, and undecided inputs?
Fairify requires a parameter MS (maximum size of an input attribute) based on which it creates a fixed number of input partitions prior to certification.
On the DNN models trained for Bank, German, and Adult, we used the default MS values (100, 100, and 10) for Fairify to create 510, 201, and 16000 partitions, respectively.
On the new DNN models trained for Compas, we set MS to a small value of 2 to create 20 partitions for Fairify.
This was done to maximize Fairify's performance so that it does not “choke” when verifying each input partition.
By default, Fairify uses 100 seconds as the “soft timeout” for each input partition and 30 minutes as the “hard timeout” for the entire DNN. This means it spends at most 100 seconds verifying a single input partition and, if unsolved, simply moves on to the next partition, until the entire 30 minutes runs out.
To run FairQuant, we set the parameters min_check_depth to 15 and max_refinement_depth to 20 for all DNN models. We also use 30 minutes as the “hard timeout”, but FairQuant always finished before the limit.
§.§ Experimental Results
Table <ref> shows the results of our method (FairQuant) in comparison with Fairify[The order in which Fairify sorts the partitions before running the verification query is random and non-deterministic, so there may be minor differences in the reported counterexamples between the original evaluation and ours.].
Columns 1-2 show the names of the dataset and the DNN model.
Columns 3-5 show the statistics reported by Fairify, including the time taken, whether a counterexample was found (Cex), and the number of counterexamples found (#Cex). T/O or M/O in Column 3 means that Fairify either spent all 30 minutes or ran out of memory in the network pruning step prior to verifying any input partition.
Columns 6-8 show the percentage of certified, falsified and undecided inputs (Cer%, Fal%, Und%).
Columns 9-14 show the corresponding results from FairQuant.
§.§.§ Results for RQ 1
To answer the first research question (RQ 1), i.e., whether our method is more accurate than the current state-of-the-art, we need to compare the results shown in Columns 4-5 (for Fairify) with the results shown in Columns 10-11 (for FairQuant).
Specifically, Columns 4 and 10 indicate whether the tool is able to find a counterexample within the time limit.
While our method (FairQuant) found counterexamples for all 32 DNNs, Fairify found counterexamples for only 20 of the 32 models. In addition, it found counterexamples for only one of the 7 newly added models.
Moreover, Columns 5 and 11 show that, on models where both tools found counterexamples, the number of counterexamples found by FairQuant is often thousands of times larger. For example, the largest number of counterexamples found by Fairify is 194 (for GC-3), whereas the largest number found by FairQuant is 71,012 (for AC-5).
§.§.§ Results for RQ 2
To answer the second research question (RQ 2), i.e., whether our method is more scalable than the current state-of-the-art, we need to compare the running time shown in Column 3 (for Fairify) with the running time shown in Column 9 (for FairQuant).
While our method (FairQuant) always finished within the time limit of 30 minutes, Fairify timed out on compas-5 and ran out of memory on compas-6 and compas-7.
Even on the models where both tools finished, the time taken by Fairify is significantly longer.
To illustrate the scalability advantage of our method, we took a subset of the models for which both Fairify and FairQuant finished, and plot the running times in a bar chart, shown on the left side of Fig. <ref>. Here, the red and blue bars represent the times taken by the two tools. The results show that FairQuant is many orders of magnitude faster, and can certify DNN models that are well beyond the reach of Fairify.
§.§.§ Results for RQ 3
To answer the third research question (RQ 3), i.e., whether our method is more effective in providing feedback to the user, we need to compare the results in Columns 6-8 (for Fairify) with the results in Columns 12-14 (for FairQuant), which show the certified, falsified, and undecided percentages.
Since Fairify was not designed to quantitatively measure the degree of fairness, it did poorly in almost all cases. Except for a few DNN models for Bank, GC-4, and compas-1, its certified percentages are either 0 or close to 0, and its undecided percentages are almost 100%. It means that, for the vast majority of individuals in the input domain, whether they are treated by the DNN model fairly or not remains undecided.
In contrast, the certified percentages reported by our method (FairQuant) are significantly higher. For the models trained using the Compas dataset, in particular, the certified percentages are around or above 90%, and more importantly, the undecided percentages are always 0. It means that FairQuant has partitioned the input domain in such a way that each partition is either certified as being fair, or falsified as being unfair. Even on the subset of DNN models where some inputs remain undecided by FairQuant, the undecided percentages reported by our method are significantly lower than those of Fairify, as shown on the right side of Fig. <ref>.
§.§ Summary
The results show that our method is more accurate and more scalable than the current state-of-the-art techniques for qualitative certification. In addition, our method is able to formally quantify the degree of fairness, which is a capability that existing methods do not have.
For some DNN models, FairQuant still has a significant percentage of inputs left undecided. This is because we set min_check_depth=15 and max_refinement_depth=20 for all benchmarks. Thus, as soon as FairQuant reaches refinement depth 15 (see the refinement tree shown in Fig. <ref>) and finds a counterexample in the input partition, it will stop refining further; at that moment, all inputs in the partition are treated conservatively as undecided.
In general, a smaller refinement depth allows FairQuant to terminate quickly. During our experiments, FairQuant terminated after 0.29s and 1.24s for GC-4 and GC-5, respectively, compared to the 4 minutes and 30 minutes taken by Fairify, and yet returned better results.
In fact, for GC-5, Fairify spent 30 minutes but failed to find any counterexample.
If we increase the refinement depth, by increasing the two threshold values of FairQuant, its quantification results will get even better.
§ RELATED WORK
Our method is the first scalable method for certifying and quantifying individual fairness of a deep neural network, and it outperforms the most closely related prior work, Fairify <cit.>, which is the only tool currently available for certifying individual fairness of a DNN.
To the best of our knowledge, no other methods can match the accuracy, scalability, and functionality of our method.
Our method differs from existing techniques for verifying other types of individual fairness properties for neural networks.
For example, techniques in <cit.> and <cit.> tackle a different type of fairness property proposed in <cit.> that allows some perturbation for the non-protected attributes. They are orthogonal to our certification and quantification techniques.
Other techniques <cit.> locally verify a type of property known as ε-fairness which, similar to robustness, is defined using a given input x and a small constant for perturbation.
Group fairness is yet another type of fairness property, which can be verified using probabilistic techniques <cit.>.
The difference between individual fairness and group fairness is that, while individual fairness requires similar individuals to be treated similarly, group fairness requires similar demographic groups to be treated similarly.
Furthermore, dependency fairness is studied in <cit.> and <cit.>.
All of these fairness properties are different from one another in varying degrees.
There are other prior works related to fairness of machine learning models.
For example, existing works like <cit.>, <cit.>, and <cit.> proposed fairness verification techniques for other types of machine learning models, but they are not applicable to deep neural networks.
Testing techniques can quickly detect fairness violations in machine learning models <cit.>,
but they do not provide the formal guarantees that are important for certain applications.
There are also techniques for improving fairness of machine learning models <cit.>, but they differ significantly from our method, which focuses on certifying and quantifying fairness of existing DNN models.
At a high level, our method is related to the large number of robustness verifiers for deep neural networks based on interval analysis <cit.>, SMT solving <cit.> and mixed-integer linear programming <cit.>.
While these verifiers can decide if a model is robust against adversarial perturbation, as explained earlier in Section 2, they cannot directly certify individual fairness.
Other neural network verifiers that deal with differential verification <cit.> or equivalence verification <cit.> are also different, since they evaluate over two networks instead of one network.
§ CONCLUSION
We have presented FairQuant, a scalable method for certifying and quantifying individual fairness of a deep neural network over the entire input domain. It relies on sound abstraction during symbolic forward analysis to improve scalability, and iterative refinement to improve accuracy. In addition to certifying fairness, it is able to quantify the degree of fairness by computing the percentages of inputs whose classification outputs can be certified as fair or falsified as unfair.
We have evaluated the method on a large number of DNN models trained using four popular fairness research datasets. The experimental results show that the method significantly outperforms state-of-the-art techniques in terms of both accuracy and scalability, while also providing the ability to quantify the degree of fairness.
§ ACKNOWLEDGMENTS
This research was supported in part by the U.S. National
Science Foundation (NSF) under grant CCF-2220345.
We thank the anonymous reviewers for their constructive feedback.
|
http://arxiv.org/abs/2409.03059v1 | 20240904201859 | Quantification of stylistic differences in human- and ASR-produced transcripts of African American English | [
"Annika Heuser",
"Tyler Kendall",
"Miguel del Rio",
"Quinten McNamara",
"Nishchal Bhandari",
"Corey Miller",
"Migüel Jetté"
] | cs.CL | [
"cs.CL"
] |
§ ABSTRACT
Common measures of accuracy used to assess the performance of automatic speech recognition (ASR) systems, as well as human transcribers, conflate multiple sources of error. Stylistic differences, such as verbatim vs non-verbatim, can play a significant role in ASR performance evaluation when differences exist between training and test datasets. The problem is compounded for speech from underrepresented varieties, where the speech to orthography mapping is not as standardized. We categorize the kinds of stylistic differences between 6 transcription versions, 4 human- and 2 ASR-produced, of 10 hours of African American English (AAE) speech. Focusing on verbatim features and AAE morphosyntactic features, we investigate the interactions of these categories with how well transcripts can be compared via word error rate (WER). The results, and overall analysis, help clarify how ASR outputs are a function of the decisions made by the training data’s human transcribers.
§ INTRODUCTION
Word error rate (WER) is the standard metric for automatic speech recognition (ASR) evaluation, widely used across industry and research. However, WER is readily affected by the properties of an ASR system’s training and test data. All the idiosyncrasies of the chosen reference transcript play a role in how the system is trained, refined, and assessed.
This means a system's performance could degrade if the reference doesn't reflect the training data's idiosyncrasies, many of which could be considered stylistic in nature.
Transcription styles are not a novel idea: in fact, companies like Rev[<https://www.rev.com/blog/resources/verbatim-transcription>] produce distinct verbatim and non-verbatim transcripts. Verbatim transcription includes filler words, such as “um" and “uh," false starts, and interjections, while non-verbatim transcription allows for light editing, still preserving the content of what was said in the audio.
Both are valid styles, but directly comparing a verbatim vs non-verbatim transcript would misleadingly highlight several “errors", even between humans. To add to the complications, transcription companies commonly assign separate chunks of a single long file to different transcribers in order to maintain reasonable delivery times.
Deep Learning models are well known to capture their training data's distribution. We posit that during training, models acquire additional types of stylistic proclivities which can be explicitly observed, even beyond the verbatim vs non-verbatim axis of variation. It is precisely our limited knowledge about these proclivities that makes comparing different ASR models more challenging. Within the WER evaluation paradigm, a model would be penalized for not having a stylistic proclivity that another model and the reference transcript share. It is, however, critical that we can accurately compare different ASR models, in order to determine which architectures, training strategies, etc. are most effective.
We define transcription style as the collection of decisions made in contexts where there are multiple reasonable alternatives for how to transform the audio signal into an orthographic representation. Additionally, style can make a transcript better suited to its purpose. Not all the differences between two transcripts are stylistic in nature – for example: some might be perceptual disagreements, while others could of course be actual errors, like typos. Some of the differences might seem stylistic to one person but not another, because they do not agree on whether a given transcription choice was among the set of reasonable alternatives for the given context.
Bucholtz <cit.> describes a situation where transcribers might be trying to do the “original speaker a favor by `cleaning up'" their speech. However, some choices could be considered counter-productive or harmful because they might misrepresent the speech of an individual or community. Consequently, many researchers would not want ASR systems to replicate these.
The current project collects multiple transcripts of a subset of the sociolinguistic interviews contained in the Corpus of Regional African American Language (CORAAL) <cit.> and characterizes differences between them. We collected multiple transcripts to demonstrate that professional transcripts of the same audio can differ substantially, with concomitant effect on WER. The original CORAAL transcripts were transcribed and corrected by multiple researchers familiar with AAE, but the other transcripts are still professional-grade. As an underrepresented variety of English <cit.>, AAE's orthography is not nearly as conventionalized as that of Standard American English (SAE).
We divide our transcripts into two groups: those produced by humans and those produced by ASR systems. We compare the distributions of differences within and across these two groups by categorizing the types of differences.
In this paper, we examine three categories of transcription differences, which serve as hypotheses for the potential sources of the differences. For example, a verbatim vs non-verbatim category or hypothesis posits that any given difference between two transcripts could be due to the transcribers having different verbatim objectives. We then quantify what percentage of the time this hypothesis is true for any transcript pairing. In addition to 1) the verbatim vs non-verbatim hypothesis, we also test 2) whether morpho-syntactic features that differentiate AAE from SAE and 3) whether different reduction and contraction orthographic representations (e.g. “going to" vs “gonna" and “she will" vs “she'll") account for the differences.
The morpho-syntactic hypothesis allows us to investigate ASR bias against AAE, and potentially identify its source. The greater the percentage of transcript differences that are accounted for by the morpho-syntactic hypothesis, the more one transcript or the other might be transcribing AAE as SAE. By applying the same test to ASR output and human-produced transcripts from the same distribution as the training data, we can track the extent to which the ASR system is emulating the human rates of transcription decisions regarding AAE morpho-syntactic features.
These 3 hypotheses serve as our metrics to quantify the stylistic differences across transcript versions. The 3 categories examined here are not meant to capture the full range of possible differences, but we hope they can contribute to a complete ontology of the axes of transcription variation, which is left for future work.
§ BACKGROUND
Inter-transcriber variation, while under-explored in the context of ASR, has been examined in allied fields, such as phonetics, conversation analysis, and forensic linguistics. The analogy to verbatim vs non-verbatim in phonetics is narrow vs broad transcription. As might be expected since broad uses fewer symbols/distinctions than narrow, <cit.> found that inter-transcriber reliability was higher for broad transcription. Nonetheless, broad phonetic transcripts are still much “narrower" than word-level ASR transcripts. Conversation analysts often focus closely on transcriber decisions and agreement, in ways that are relevant to the interests of this paper, but focus on a wider-range of speech phenomena (such as pauses and intonation) <cit.>. Forensic linguists are often concerned with content agreement between humans and also ASR transcriptions <cit.>. Given the stakes of transcription in legal contexts, it is perhaps unsurprising that forensic linguists have considered categories of transcription differences. For instance, <cit.> generated a difference ontology by manually examining eight transcripts of an audio recording produced by different linguistically trained transcribers. It consisted of 1) omitted/additional speech, 2) splitting of turns, 3) phonetic similarity, and 4) lexical variation. 1) corresponds to the verbatim vs non-verbatim distinction and 2) corresponds to speaker attribution/diarization differences as opposed to word-level phenomena. Like 1), 3) is a hypothesis of the origin of transcription differences, namely that they are the result of perceptual differences. Finally, 4) corresponds to the rest of the transcription differences.
Recent work, in both forensic linguistics and in ASR research, has investigated transcription accuracy on non-standard varieties of English <cit.>, particularly on AAE <cit.>. However, work thus far has not investigated the categories underlying disagreements and inaccuracies across human- and ASR-produced transcripts of the same audio data.
§ DATA
We selected 27 files from CORAAL, corresponding to about 10 hours of audio. For each file we produced 6 transcript versions (referred to simply as versions from this point forward).
Human Versions
* CORAAL: The original transcript from the CORAAL corpus, produced by <cit.>.
* Rev: Generated by soliciting a verbatim transcript through the web interface of Rev.com.
* Rev (+AA tag): Generated exactly like the Rev transcript, but with the additional specification of “Other - African American" in the accent information, which we expected to recruit transcribers more familiar with the variety.
* Amberscript: A verbatim transcript from Amberscript[<https://www.amberscript.com/en/>]. We were helped by a salesperson who matched our audio with transcribers deemed well-suited.
Machine Versions
* Rev ASR: Generated using Rev.com's internal verbatim ASR model[<https://docs.rev.ai/api/asynchronous/>], which is described in greater detail in <cit.>.
* OpenAI's Whisper: Generated using OpenAI's API[<https://platform.openai.com/docs/guides/speech-to-text>] to their large-v2 Whisper <cit.> model.
It is important to clarify that the CORAAL transcript versions were developed by a team of linguistic researchers; each file passed through multiple stages of transcription and editing where a researcher had access to the whole audio file. The Rev, Rev (+AA tag), and Amberscript versions on the other hand were developed by professional transcribers who were only given a section of the audio to work with; each section could then have its quality verified and improved upon by a senior transcriber.
§ METHODS
We used the open-source fstalign[<https://github.com/revdotcom/fstalign/>] tool with default settings to produce alignments for every pairwise permutation of the transcript versions <cit.>.
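For readers unfamiliar with how such alignments decompose into substitutions, insertions, and deletions, the following is a minimal word-level edit-distance sketch. It is a generic illustration rather than the fstalign implementation, and the example sentence pair is invented.

```python
# Generic word-level alignment sketch (not the fstalign implementation):
# a Levenshtein-style dynamic program that counts substitutions (SUB),
# insertions (INS), and deletions (DEL) between a reference and a hypothesis.

def wer_counts(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimal edit cost between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    # Backtrack to split the total cost into SUB / INS / DEL.
    i, j, counts = len(ref), len(hyp), {"sub": 0, "ins": 0, "del": 0}
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            counts["sub"] += int(ref[i - 1] != hyp[j - 1])
            i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            counts["ins"] += 1
            j -= 1
        else:
            counts["del"] += 1
            i -= 1
    wer = (counts["sub"] + counts["ins"] + counts["del"]) / max(len(ref), 1)
    return wer, counts

# Invented example pair illustrating a reduction-style difference.
print(wer_counts("she is going to leave", "she's gonna leave"))
```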
We generated tests[<https://github.com/revdotcom/speech-datasets/tree/main/coraal-multi>] for our three transcription difference source hypotheses: morpho-syntactic, reductions, and verbatim.
Our morpho-syntactic tests are based on the features enumerated in <cit.>.
We could not translate all the features into potential transcription difference tests. For example, 19p in <cit.> refers to stressed “stay," but the stressed and unstressed versions cannot be differentiated in written form. Another example is 20d in <cit.>, which describes the past participle form being used as the past tense. However, many common verbs have irregular participle or past forms (e.g. “see"/“seen"/“saw" and “run"/“run"/“ran"), making it difficult to algorithmically test for this alternation. Of the tests we were able to develop, some failed to capture any transcript differences. The morpho-syntactic hypothesis ultimately consisted of 17 tests.
The reductions hypothesis consists of common contractions as well as a set of conventions used by CORAAL transcribers for reductions. The common contractions test checks for “she'd/'s/'ve/'ll/'re/'t" contractions and their longer forms (e.g. “she would/did/had," "she is/has," etc.). The CORAAL reduced form test checks for whether a substitution is made up of a full form and reduced form pairing listed in the table on pages 21-22 of the CORAAL user guide[<http://lingtools.uoregon.edu/coraal/userguide/>].
Finally, our verbatim tests checked for filler deletions, filler substitution (e.g. transcript 1 has “uh" while transcript 2 has “um"), restart deletion or lack of restart indication (e.g. “you-" vs “you"), and repetition deletion.
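As a rough illustration of how such tests can be phrased over aligned word pairs, the snippet below implements a few of the checks described above. The filler inventory and contraction pairs are small illustrative samples, not the full test suite used in this work.

```python
# Illustrative classifiers over aligned word pairs (ref_word, hyp_word), where
# None marks an insertion or deletion. The word lists are small samples chosen
# for illustration only and are not the full inventories used in our tests.

FILLERS = {"um", "uh", "er", "mm"}
CONTRACTION_PAIRS = {("going to", "gonna"), ("want to", "wanna"), ("she will", "she'll")}

def is_filler_difference(ref_word, hyp_word):
    # Filler deletion (one side empty) or filler substitution ("uh" vs "um").
    words = {w.lower() for w in (ref_word, hyp_word) if w}
    return bool(words) and words <= FILLERS

def is_restart_difference(ref_word, hyp_word):
    # Restart marking differs only by a trailing hyphen, e.g. "you-" vs "you".
    if ref_word and hyp_word:
        return ref_word.rstrip("-") == hyp_word.rstrip("-") and ref_word != hyp_word
    return False

def is_reduction_difference(ref_phrase, hyp_phrase):
    pair = (ref_phrase.lower(), hyp_phrase.lower())
    return pair in CONTRACTION_PAIRS or pair[::-1] in CONTRACTION_PAIRS

print(is_filler_difference("uh", "um"))              # True: filler substitution
print(is_restart_difference("you-", "you"))          # True: restart indication
print(is_reduction_difference("gonna", "going to"))  # True: reduced form
```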
These hypotheses are in order of most to least indicative of speaker characteristics. AAE feature erasure captured by morpho-syntactic differences results in a more SAE-looking transcript which will potentially misrepresent the speech signal.
Reductions are prevalent in both AAE and SAE, and they do not generally change the expression meaning, unlike some of the alternations caught by the morpho-syntactic tests. They do, however, have pragmatic consequences in that reduced forms are considered vernacular <cit.>; someone speaking in a more formal event, e.g. an interview or a trial, might prefer their reduced speech to be transcribed as the long forms. Finally, verbatim differences do not change the speech content, and speakers are disfluent in every register, though the social context can impact how speakers are disfluent <cit.>.
§ RESULTS
§.§ Word error rates
We first report WERs between the 4 human transcript versions. Though WER traditionally measures the error rate between an ASR hypothesis and a human reference, in this context we utilize the same WER mechanism to quantify the differences among humans by taking one human version as the reference and another as the hypothesis. We report the full WER as well as the individual rates of error that it is composed of, namely the rates of insertions (INS), deletions (DEL), and substitutions (SUB). As noted in <Ref>, the WERs range between 10% and 20%, demonstrating the importance of the reference transcript to evaluation – especially as many papers report traditional human vs machine WERs at much lower rates (e.g. <cit.>).
Unsurprisingly, the lowest WER is between the Rev and Rev (+AA tag) transcripts, likely because they were produced by a similar transcriber population. While the transcribers for the Rev (+AA tag) transcript may have been more familiar with AAE, they used the same style guide as the transcribers of the Rev transcript. It is even possible that there was overlap in the transcribers for the two sets of transcript versions. The Rev and Rev (+AA tag) versions also had relatively low WERs against the CORAAL transcript, suggesting similar stylistic proclivities. On the other hand, the greatest WER was between the CORAAL and Amberscript transcripts, most noticeably caused by the disproportionate amount of insertions.
We turn to the WERs of the Rev ASR and Whisper models, reported in <Ref>. Rev's ASR performance is comparable between the CORAAL and both Rev transcript versions, but worse on Amberscript's version. Unexpectedly, we see a similar trend for the Whisper performance. We theorize that the higher deletion rate, compared to Rev, implies that the main difference between the models is likely where they fall on the verbatim to non-verbatim scale. We explore this hypothesis in the next section.
§.§ Difference source hypotheses
Looking across all transcripts, <Ref> shows that the biggest categories of differences are verbatim and morpho-syntactic, with reductions accounting for very few differences. We tease out the impact of each of these two categories of differences per each transcript version pair.
<Ref> verifies our hypothesis that a higher percentage of the differences between the Whisper transcript version and all the other versions are related to verbatim style choices. In fact, over all pairs, the greatest percentage of verbatim differences is between the Whisper and CORAAL versions while the lowest is between the Rev ASR model and the Rev versions. We note that the verbatim percentage between the Rev ASR model and the Rev (+AA tag) version is particularly large, larger than the difference between the two ASR models' transcript versions. The addition of the AA tag could have resulted in the transcribers taking greater liberty with respect to many parts of the style guide, including the verbatim instructions.
Looking into the morpho-syntactic differences, <Ref> shows that the Rev ASR vs Rev transcript versions and the Rev ASR vs CORAAL transcript versions have the highest percentage of these differences. In contrast, Rev's ASR transcript version vs the Rev (+AA tag) transcript percentage is relatively low. Particularly confusing is that the percentage of morpho-syntactic differences between the individual Rev (ASR and both human) versions is nearly the same as the percentage between each Rev version vs the CORAAL version. Because the CORAAL version was produced by linguists who are familiar with AAE and its morpho-syntactic features, we expect that the CORAAL transcript will typically be a more accurate representation of the speech in the audio. We believe that Rev transcribers and the ASR model may have more standardizing transcription proclivities that are causing these differences. We consider whether the style guide used by Rev transcribers could explain this in the following section.
§ DISCUSSION
In this work, we investigated how, and the extent to which, reference transcripts of the same audio can differ, especially on underrepresented speech. We collected 6 transcript versions, 4 human- and 2 ASR-produced, of the same 10 hours of CORAAL. We found that the human-produced transcripts could vary by WERs as low as ∼10% and as high as ∼20%, and that ASR WER performance could increase or decrease by 5% depending on the reference transcript. We also found the Rev human- and ASR-produced transcripts to be the most similar to one another. This makes sense because the transcribers were all trained on the same style guide and the ASR was trained on data from this same population of transcribers. We next examined three hypotheses about sources of stylistic differences, in order of most to least potentially misrepresentative: 1) morpho-syntactic differences between AAE and SAE, 2) reduction differences, and 3) verbatim vs non-verbatim differences. The verbatim hypothesis accounted for the greatest percentage of the transcript differences, and the morpho-syntactic hypothesis for the second most. The Rev transcripts for the most part had fewer verbatim differences than the other transcript version pairwise comparisons, but they interestingly had more morpho-syntactic differences.
We might attribute this to the Rev style guide[<https://cf-public.rev.com/styleguide/transcription/Transcription+Style+Guide+v5.pdf>], which instructs transcribers to “use English grammar conventions while maintaining the integrity of what was spoken. We are unable to cover and address specific guidelines regarding grammar. We expect you to have prior knowledge of, or be able to research, American English grammar, capitalization, and punctuation guidelines." This is ambiguous with respect to non-standard language varieties. Many AAE features that we tested for are often taught to be “ungrammatical" in schools <cit.>. At the same time, those more familiar with AAE might deem them necessary to “maintaining the integrity of what was spoken." Rev might consider clarifying this part of the style guide for underrepresented language varieties, as well as augmenting the customer-facing definition of verbatim vs non-verbatim, or introducing a new transcription variety option. The inclusion of examples could help as well. Then a user wanting to have audio of a non-standard variety transcribed could choose whether their variety's morpho-syntactic features are transcribed with the standard variety's constructions or not (<cit.> makes a similar proposal for machine translation). Of course, greater awareness and education about underrepresented varieties would also help with this.
With this work, we add to the ever more important research into bias in machine learning. We give more insight to similar discrepancies found by <cit.> and identify key categories of errors. Moreover, we come to the conclusion that a single reference transcript may not be sufficient to conclusively make claims about performance. Our findings indicate that different transcript versions may highlight distinct, yet equally valid, variations (e.g. verbatim vs non-verbatim) that must be considered for fair evaluation. We hope that by making our transcript versions and code available, we assist other research in addressing the important impact of human variation and bias.
|
http://arxiv.org/abs/2409.03373v1 | 20240905092440 | Doubly heavy tetraquark bound and resonant states | [
"Wei-Lin Wu",
"Yao Ma",
"Yan-Ke Chen",
"Lu Meng",
"Shi-Lin Zhu"
] | hep-ph | [
"hep-ph",
"hep-ex",
"hep-lat",
"nucl-th"
] |
[email protected]
School of Physics, Peking University, Beijing 100871, China
[email protected]
School of Physics and Center of High Energy Physics,
Peking University, Beijing 100871, China
[email protected]
School of Physics, Peking University, Beijing 100871, China
[email protected]
Institut für Theoretische Physik II, Ruhr-Universität Bochum, D-44780 Bochum, Germany
[email protected]
School of Physics and Center of High Energy Physics,
Peking University, Beijing 100871, China
§ ABSTRACT
We calculate the energy spectrum of the S-wave doubly heavy tetraquark systems, including the QQ^(')q̅q̅, QQ^(')s̅q̅, and QQ^(')s̅s̅ (Q^(')=b,c and q=u,d) systems within the constituent quark model. We use the complex scaling method to obtain bound states and resonant states simultaneously, and the Gaussian expansion method to solve the complex-scaled four-body Schrödinger equation. With a novel definition of the root-mean-square radii, we are able to distinguish between meson molecules and compact tetraquark states. The compact tetraquarks are further classified into three different types with distinct spatial configurations: compact even tetraquarks, compact diquark-antidiquark tetraquarks and compact diquark-centered tetraquarks. In the I(J^P)=0(1^+) QQq̅q̅ system, there exists the D^*D molecular bound state with a binding energy of -14 MeV, which is the candidate for T_cc(3875)^+. The shallow B̅^*B̅ molecular bound state is the bottom analogue of T_cc(3875)^+. Moreover, we identify two resonant states near the D^*D^* and B̅^*B̅^* thresholds. In the J^P=1^+ bbq̅q̅ (I=0) and bbs̅q̅ systems, we obtain deeply bound states with a compact diquark-centered tetraquark configuration and a dominant χ_3̅_c⊗ 3_c component, along with resonant states with similar configurations as their radial excitations. These states are the QCD analogue of the helium atom. We also obtain some other bound states and resonant states with “QCD Hydrogen molecule" configurations. Moreover, we investigate the heavy quark mass dependence of the I(J^P)=0(1^+) QQq̅q̅ bound states. We strongly urge the experimental search for the predicted states.
Doubly heavy tetraquark bound and resonant states
Shi-Lin Zhu 0000-0002-4055-6906
=================================================
§ INTRODUCTION
In 2021, the LHCb Collaboration discovered the first doubly charmed tetraquark state T_cc(3875)^+ in the D^0D^0π^+ invariant mass spectrum <cit.>. It is a narrow state with a mass extremely close to the D^*+D^0 threshold, having a binding energy of only around -300 keV. The observation of T_cc(3875)^+ significantly advances the hadron spectroscopy and may open a new chapter for the discovery of other doubly heavy exotic states in the future.
Theoretical investigations of the possible existence of doubly heavy tetraquark bound states date back to the 1980s <cit.>. Many studies aimed at predicting the masses of doubly charmed tetraquark states using various frameworks <cit.>, but their conclusions were wildly inconsistent, with the predicted masses ranging from -300 MeV to +300 MeV relative to the DD^* threshold. After the discovery of T_cc(3875)^+, its exotic properties have attracted much attention and reignited interest in doubly heavy tetraquark states <cit.>.
The proximity of T_cc(3875)^+ to the D^*+D^0 threshold favors its interpretation as a D^*D molecular state, while for other doubly heavy tetraquark systems, both the compact tetraquark picture and the hadronic molecular picture have been proposed. More discussions can be found in recent reviews <cit.>.
The discovery of the doubly charmed tetraquark bound state implies the existence of other doubly heavy tetraquark states. While many studies focus on the existence and properties of bound states, some also explore possible resonant states. In Ref. <cit.>, the authors employed the complex scaling method to study the ccq̅q̅, bbq̅q̅, bcq̅q̅ (q=u,d) bound and resonant states in the chiral quark model. However, they predicted a deeply bound ccq̅q̅ state with a binding energy of around -150 MeV, which contradicts with the experimental results. In Ref. <cit.>, the author used the heavy quark spin symmetry to predict a resonant pole T_cc' below the D^*D^* channel as a partner of T_cc(3875). Similar results were obtained in the constituent quark model <cit.> and lattice QCD <cit.>. In Ref. <cit.>, the authors adopted a constituent quark model including the one-pion exchange interaction to study the ccq̅q̅ and bbq̅q̅ systems using the real scaling method. They did not find any resonant states in the doubly charmed sector, but reported a bbq̅q̅ resonant state.
In this work, we conduct a comprehensive study on the S-wave doubly heavy tetraquark systems, including the QQ^(')q̅q̅, QQ^(')s̅q̅, and QQ^(')s̅s̅ (Q^(')=b,c and q=u,d) systems, within the constituent quark model. We utilize the complex scaling method <cit.> to obtain possible bound states and resonant states simultaneously. We employ the Gaussian expansion method <cit.> to solve the four-body Schrödinger equation, which has been successfully used in our previous work on tetraquark bound states <cit.> and resonant states <cit.>. Moreover, we calculate the root-mean-square radii of the tetraquark states to analyze their spatial structures and distinguish between meson molecular states and compact tetraquark states.
We further classify the compact tetraquark states into three different types with distinct spatial configurations, unraveling the rich internal structures and different forming mechanisms of the tetraquark states.
This paper is organized as follows. In Sec. <ref>, we introduce the theoretical framework, including the constituent quark model, the complex scaling method and the wave function construction. In Sec. <ref>, we demonstrate different spatial structures of tetraquarks and how to distinguish them by calculating the root-mean-square radii. In Sec. <ref>, we present the numerical results and discuss the properties of doubly heavy tetraquark states. We summarize our findings in Sec. <ref>.
§ THEORETICAL FRAMEWORK
§.§ Hamiltonian
In a nonrelativistic quark potential model, the Hamiltonian of a tetraquark system in the center-of-mass frame reads
H=∑_i=1^4 (m_i+p_i^2/(2 m_i))+∑_i<j=1^4 V_ij,
where the last term represents the two-body interaction between the i-th and j-th (anti)quark. We adopt the AL1 potential <cit.>, which includes the one-gluon-exchange interaction and a linear quark confinement interaction,
V_ij = -3/16 λ_i ·λ_j ( -κ/r_ij + λ r_ij - Λ + 8πκ'/(3 m_i m_j) exp(-r_ij^2/r_0^2)/(π^(3/2) r_0^3) S_i ·S_j ),
where λ_i and S_i are the SU(3) color Gell-Mann matrix and the spin operator acting on quark i, respectively. The parameters of the AL1 model were determined by fitting the meson spectra across all flavor sectors <cit.>. No additional free parameters are introduced. The theoretical masses as well as the root-mean-square (rms) radii of the corresponding mesons are listed in Table <ref>. It can be seen that the theoretical results for the 1S mesons agree with the experimental values within tens of MeV. We also list possible experimental candidates for the 2S excited mesons, whose theoretical masses deviate from the experimental values by up to 100 MeV. However, these candidates are not yet well established as 2S mesons. Moreover, relativistic effects and coupled-channel effects may play a crucial role in understanding these excited mesons, which is beyond the scope of this work. Nonetheless, since we focus only on the tetraquark states below the M(1S)M'(2S) dimeson thresholds in this work, we expect the uncertainties to be within tens of MeV, similar to those of the 1S mesons.
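To make the structure of the pair potential above explicit, the following sketch evaluates an AL1-type V_ij for given color and spin expectation values. The numerical parameter values are placeholders for illustration only; they are not the fitted AL1 parameters, and the units are left schematic.

```python
# Sketch of the pairwise AL1-type potential. The default parameter values are
# PLACEHOLDERS, not the fitted AL1 parameters; color_factor and spin_factor are
# the expectation values <lambda_i . lambda_j> and <S_i . S_j> for the pair.
import math

def v_pair(r, m_i, m_j, color_factor, spin_factor,
           kappa=0.5, lam=0.2, Lam=0.8, kappa_p=1.5, r0=0.5):
    smeared = math.exp(-r**2 / r0**2) / (math.pi**1.5 * r0**3)
    central = -kappa / r + lam * r - Lam
    spin_spin = (8.0 * math.pi * kappa_p / (3.0 * m_i * m_j)) * smeared * spin_factor
    return -3.0 / 16.0 * color_factor * (central + spin_spin)

# Example: a color-singlet quark-antiquark pair has <lambda_i.lambda_j> = -16/3;
# a spin-triplet (spin-singlet) pair of spin-1/2 quarks has <S_i.S_j> = +1/4 (-3/4).
print(v_pair(r=0.5, m_i=1.9, m_j=0.3, color_factor=-16.0 / 3.0, spin_factor=0.25))
```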
§.§ Complex scaling method
In contrast to bound states, the wave functions of resonant states are not square-integrable and cannot be obtained by solving the eigenequation of the Hermitian Hamiltonian directly. The complex scaling method (CSM) enables us to obtain possible bound states and resonant states simultaneously. In the CSM <cit.>, the coordinate r and its conjugate momentum p are transformed as
U(θ) r=r e^i θ, U(θ) p=p e^-i θ.
Under such a transformation, the Hamiltonian is analytically continued to the complex plane and no longer Hermitian,
H(θ)=∑_i=1^4 (m_i+p_i^2e^-2iθ/2 m_i)+∑_i<j=1^4 V_ij(r_ije^iθ).
By solving the eigenvalue equation of the complex-scaled Hamiltonian, we can obtain the eigenenergies of bound states, scattering states and resonant states simultaneously. Bound states are located on the negative real axis in the energy plane and remain unchanged as θ varies. Scattering states align along rays starting from threshold energies with Arg(E)=-2θ. If the complex scaling angle θ is chosen to be larger than the angle of the resonant state θ_r=1/2tan^-1(Γ_r/2M_r), where M_r and Γ_r represent its mass and width, the wave function of the resonant state becomes square integrable and can be obtained with eigenenergy E_r=M_r-iΓ_r/2.
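In practice, resonance candidates are identified by their insensitivity to the scaling angle: continuum eigenvalues rotate with θ, while bound-state and resonance eigenvalues stay (nearly) fixed. A minimal sketch of this selection step is given below; the eigenvalue arrays, threshold list, and tolerance are assumed, illustrative inputs taken from two diagonalizations at different angles.

```python
# Sketch of selecting bound and resonant states from complex-scaled spectra.
# Energies are measured relative to the lowest dimeson threshold; tolerances
# and the example eigenvalues are illustrative numbers only.
import numpy as np

def classify(eigs_theta1, eigs_theta2, thresholds, tol=1e-3):
    bound, resonances = [], []
    for E in eigs_theta1:
        # Keep only theta-stable eigenvalues, i.e. those present in both spectra.
        if np.min(np.abs(eigs_theta2 - E)) > tol:
            continue  # rotates with theta -> discretized continuum state
        if abs(E.imag) < tol and E.real < min(thresholds):
            bound.append(E)             # on the negative real axis
        elif E.imag < -tol:
            resonances.append(E)        # E = M_r - i * Gamma_r / 2
    return bound, resonances

e1 = np.array([-0.014 + 0.000j, 0.155 - 0.027j, 0.080 - 0.050j, 0.120 - 0.075j])
e2 = np.array([-0.014 + 0.000j, 0.155 - 0.027j, 0.074 - 0.046j, 0.111 - 0.069j])
print(classify(e1, e2, thresholds=[0.0]))   # one bound state, one resonance
```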
§.§ Wave function
The basis functions of the tetraquark wave function are expressed as
ψ=𝒜(ϕ⊗χ),
where 𝒜 is the antisymmetric operator of identical particles, ϕ and χ represent the spatial wave function and color-spin wave function, respectively.
For the spatial wave function, we employ the Gaussian expansion method (GEM) <cit.>. We consider three sets of spatial configurations (dimeson and diquark-antidiquark), which are denoted by (jac) = (a), (b), (c). In each configuration, there are three independent Jacobian coordinates r_jac, λ_jac, ρ_jac, as shown in Fig. <ref>. The S-wave spatial basis function is written as
ϕ^( jac)_n_1,n_2,n_3=ϕ_n_1(r_jac)ϕ_n_2(λ_jac)ϕ_n_3(ρ_jac),
where ϕ_n_i(r) takes the Gaussian form,
ϕ_n_i(r)=N_n_i e^(-ν_n_i r^2),
ν_n_i=ν_1 γ^(n_i-1) (n_i=1∼ n_max),
N_n_i is the normalization factor.
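The geometric progression of Gaussian ranges can be generated in a few lines, as sketched below, together with the S-wave normalization factor N_n_i=(2ν_n_i/π)^(3/4) that normalizes each basis function to unity in three dimensions. The values of ν_1, γ, and n_max shown are arbitrary illustrative choices, not those used in our calculations.

```python
# Sketch of the geometric set of Gaussian ranges used in the GEM, together with
# the S-wave normalization N_n = (2 nu_n / pi)^(3/4) for phi_n(r) = N_n exp(-nu_n r^2).
# nu1, gamma and n_max below are arbitrary illustrative choices.
import numpy as np

def gaussian_ranges(nu1, gamma, n_max):
    nu = nu1 * gamma ** np.arange(n_max)      # nu_n = nu_1 * gamma^(n-1), n = 1..n_max
    norm = (2.0 * nu / np.pi) ** 0.75         # S-wave normalization factors
    return nu, norm

nu, norm = gaussian_ranges(nu1=0.1, gamma=1.6, n_max=10)

# Quick numerical check that int |N_1 exp(-nu_1 r^2)|^2 d^3r is indeed ~1.
r, dr = np.linspace(0.0, 50.0, 100001, retstep=True)
print(np.sum((norm[0] * np.exp(-nu[0] * r**2))**2 * 4.0 * np.pi * r**2) * dr)
```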
For the color-spin wave function, we choose a complete set of bases written as
χ^s_1,s_2,S_3̅_c⊗ 3_c=[(Q_1Q_2)_3̅_c^s_1(q̅_1q̅_2)_3_c^s_2]_1_c^S,
χ^s_1,s_2,S_6_c⊗6̅_c=[(Q_1Q_2)_6_c^s_1(q̅_1q̅_2)_6̅_c^s_2]_1_c^S,
where the subscripts and superscripts denote the color and spin representations, respectively.
§ SPATIAL STRUCTURES
Tetraquark states are generally classified into meson molecules and compact tetraquarks. In the molecular scheme, the (anti)quarks cluster into two color-singlet mesons, and their relative distance is expected to be larger than the typical range of color confinement, Λ_QCD^-1∼ 1 fm, which is also the typical size of a meson. In contrast, in the compact tetraquark scheme, all four (anti)quarks are confined together, with their relative distances being on the order of Λ_QCD^-1. The compact tetraquark scheme can be further subdivided into several types, particularly for doubly heavy systems, as illustrated in Fig. <ref>. In compact even tetraquarks, the relative distances between all four (anti)quarks are of similar size. In compact diquark-antidiquark tetraquarks, two quarks and two antiquarks form two clusters, respectively. The sizes of the clusters are smaller than the distance between them. In compact diquark-centered tetraquarks, the two heavy quarks form a very compact diquark cluster, while the two light antiquarks orbit around the diquark, similar to the way two electrons orbit around the nucleus in a helium atom. The compact diquark-centered tetraquarks correspond to the “QCD Helium atom" in Ref. <cit.>. The compact even tetraquarks roughly correspond to the “QCD Hydrogen molecule" states as coined in Ref. <cit.>. The two light antiquarks are shared by two heavy quarks as in the Hydrogen molecule where two electrons are shared by two protons. It should be noted that in this paper (anti)diquark only refers to two (anti)quarks that form a spatially compact cluster, rather than the color-spin-isospin configuration of two (anti)quarks as discussed in Ref. <cit.>.
The root-mean-square (rms) radius is a commonly used criterion to analyze the spatial structures of tetraquark states and distinguish between different tetraquark configurations. However, we found that the conventional rms radii calculated using the complete wave function could be misleading and fail to identify the molecular configuration due to the antisymmetrization of identical particles <cit.>. In order to eliminate the ambiguity, we proposed a new approach to calculate the rms radius. For systems without identical particles, such as bcs̅q̅, such an ambiguity does not exist and we can calculate the rms radii using the complete wave function directly. For systems with one pair of identical particles (bcq̅q̅ (qqb̅c̅), bcs̅s̅ (ssb̅c̅), ccs̅q̅, bbs̅q̅), we decompose the complete antisymmetric wave function as
Ψ(θ)= [(q_1q̅'̅)_1_c(q_2q̅”̅)_1_c]_1_c⊗|ψ_1(θ)⟩
+[(q_2q̅'̅)_1_c(q_1q̅”̅)_1_c]_1_c⊗|ψ_2(θ)⟩
= 𝒜 [(q_1q̅'̅)_1_c(q_2q̅”̅)_1_c]_1_c⊗|ψ_1(θ)⟩
≡ 𝒜 Ψ_ nA(θ).
For systems with two pairs of identical particles (ccq̅q̅, bbq̅q̅, ccs̅s̅, bbs̅s̅), we decompose the complete antisymmetric wave function as
Ψ(θ)= ∑_s_1≥ s_2([(Q_1q̅_1)^s_1_1_c(Q_2q̅_2)^s_2_1_c]^S_1_c⊗|ψ_1^s_1s_2(θ)⟩.
+[(Q_1q̅_1)^s_2_1_c(Q_2q̅_2)^s_1_1_c]^S_1_c⊗|ψ_2^s_1s_2(θ)⟩
+[(Q_1q̅_2)^s_1_1_c(Q_2q̅_1)^s_2_1_c]^S_1_c⊗|ψ_3^s_1s_2(θ)⟩
+.[(Q_1q̅_2)^s_2_1_c(Q_2q̅_1)^s_1_1_c]^S_1_c⊗|ψ_4^s_1s_2(θ)⟩)
= 𝒜∑_s_1≥ s_2[(Q_1q̅_1)^s_1_1_c(Q_2q̅_2)^s_2_1_c]^S_1_c⊗|ψ_1^s_1s_2(θ)⟩
≡ 𝒜 Ψ_ nA(θ),
where s_1, s_2 sum over spin configurations with total spin S. Instead of using the complete wave function Ψ(θ), we use the decomposed non-antisymmetric wave function Ψ_nA(θ) to calculate the rms radius:
r^rms_ij≡Re[√(⟨Ψ_nA(θ) | r_ij^2 e^2iθ|Ψ_nA(θ)⟩/⟨Ψ_nA(θ) | Ψ_nA(θ)⟩)].
For a meson molecule, r^rms_13 and r^rms_24 are expected to be the sizes of the constituent mesons, and much smaller than the other rms radii. The novel definition of the rms radius is useful for distinguishing the molecular configuration from the others. For a compact tetraquark state, it may seem more reasonable to calculate the rms radii using the complete wave function. However, we find that the results from the conventional definition and the novel definition are qualitatively the same for compact tetraquarks, as shown in Appendix <ref>. Therefore, we will only use the novel definition to calculate the rms radii.
The inner products in the CSM are defined using the c-product <cit.>,
⟨ϕ_n |ϕ_m⟩≡∫ϕ_n(r)ϕ_m(r)d^3r,
where the complex conjugate of the “bra" state is not taken. The c-product definition ensures that the expectation values of physical quantities are independent of the complex scaling angle θ. The rms radius calculated by the c-product is generally not real, but its real part can still reflect the internal quark clustering behavior if the width of the resonant state is not too large, as discussed in Refs. <cit.>.
§ RESULTS AND DISCUSSIONS
§.§ QQ^(')q̅q̅
With the CSM, we calculate the complex eigenenergies of the S-wave QQ^(')q̅q̅ systems. The results for ccq̅q̅, bbq̅q̅ and bcq̅q̅ are shown in Figs. <ref>, <ref> and <ref>, respectively. We only focus on the energy spectra below the M(1S)M'(2S) dimeson thresholds. We identify resonant states by varying complex scaling angles θ. We obtain a series of QQ^(')q̅q̅ bound and resonant states, which are labeled as T_QQ^('),I(J)(M) in the following discussions, where M represents the mass of the state. The complex energies, proportions of different color configurations and rms radii of these states are summarized in Table <ref>.
§.§.§ ccq̅q̅
For the ccq̅q̅ system, we obtain one bound state, three resonant states with quantum number I(J^P)=0(1^+) and a resonant state with I(J^P)=1(2^+). The isoscalar bound state T_cc,0(1)(3864) is located 14 MeV below the D^*D threshold. It has a molecular configuration, with r_c_1q̅_1^ rms and r_c_2q̅_2^ rms close to the sizes of D^* and D mesons, respectively, indicating that it is a D^*D molecular state. Comparing with the experimentally observed T_cc(3875)^+, whose binding energy is around -300 keV and characteristic size is around 7 fm <cit.>, T_cc,0(1)(3864) is lower in energy and smaller in size. However, considering that the uncertainties of quark model are up to tens of MeV, it may still serve as a candidate for T_cc(3875)^+. The discrepancies suggest that improvements of the constituent quark model are needed to better describe the long-range interactions between hadrons. Additionally, in Section <ref>, we calculate the rms radii of T_QQ,0(1) with different binding energies by varying the heavy quark mass. When the binding energy is adjusted to match that of the experimental T_cc(3875)^+, the distance between the (anti)quarks in D and D^* is approximately 6 fm. This result is in excellent agreement with the characteristic size estimated in Ref. <cit.>, further supporting the interpretation of the T_cc(3875)^+ as a very loosely bound molecular state.
We obtain an isoscalar resonant state T_cc,0(1)(4031) with a width of Γ=54 MeV, located 1 MeV below the D^*D^* threshold. This state is in agreement with the results from previous works using heavy quark spin symmetry <cit.>, constituent quark model <cit.> and lattice QCD <cit.>. It can be searched for in the D^*D and DDπ channels in future experiments. One may expect that T_cc,0(1)(4031) is the partner of the D^*D molecular bound state T_cc,0(1)(3864) and has a D^*D^* molecular configuration. However, the rms radii for T_cc,0(1)(4031) are highly numerically unstable and change dramatically as θ varies. The numerical instability of the rms radii may result from the state being sandwiched between the D^*D and D^*D^* thresholds, which makes it strongly coupled to the scattering states. Calculations with higher numerical precision are needed to identify its spatial structure.
We also obtain three higher resonant states T_cc,0(1)(4466), T_cc,0(1)(4542), and T_cc,1(2)(4673), which have compact tetraquark configurations. The state T_cc,1(2)(4673) has 10% isovector (q̅q̅)_6_c^1 component, where the spatial wave function between two light antiquarks must be antisymmetric. Similar configurations also appear in other 2^+ QQ^(')q̅q̅ states. Antisymmetric spatial wave function is possible in our S-wave calculations since we only restrict the total orbital angular momentum to be S-wave. The states T_cc,0(1)(4466) and T_cc,0(1)(4542) can decay into the D^*D and D^*D^* channels while the isovector state can only decay into the D^*D^* channel. These three states were not found in Ref. <cit.>, where the authors used the same constituent quark model as we do. The reason for the discrepancies may be that we use a larger set of wave function bases compared to the previous work, as discussed in Ref. <cit.>. Future experimental explorations of these states may help resolve the discrepancies between different methods.
§.§.§ bbq̅q̅
We first focus on the I(J^P)=0(1^+) bbq̅q̅ system, which is the bottom analogue of the T_cc state. We obtain a deeply bound state T_bb,0(1)(10491) with a binding energy of -153 MeV, and a shallow bound state T_bb,0(1)(10642) with a binding energy of -1 MeV. The lower state is a compact diquark-centered tetraquark, which is similar to the helium atom. The two bottom quarks form a very compact diquark cluster, with the rms radius between them (r_b_1b_2^ rms=0.33 fm) being approximately the size of the bottomonia, as listed in Table <ref>. The two light antiquarks orbit around the diquark, similar to the way two electrons orbit around the nucleus in a helium atom. The rms radii between the bottom quarks and light antiquarks r_bq̅^ rms are around 0.7 fm. The dominant color configuration of the state is χ_3̅_c⊗ 3_c, where the color electric interactions are attractive between two (anti)quarks. Note that the χ_3̅_c⊗ 3_c component reaches 97%! The strong attraction between two bottom quarks contribute to the deep binding energy of the state. A deeply bound state in the I(J^P)=0(1^+) bbq̅q̅ system has been anticipated since the 1980s <cit.> and was recently predicted in lattice QCD studies <cit.>. Our findings agree with the previous works. The bound state T_bb,0(1)(10491) is below the B̅B̅ threshold and can only decay weakly.
The higher bound state T_bb,0(1)(10642) has a molecular configuration, with r_b_1q̅_1^ rms and r_b_2q̅_2^ rms being close to the sizes of B̅^* and B̅ mesons, respectively, indicating that it is a B̅^*B̅ molecular state. It is the bottom analogue of the D^*D molecular bound state T_cc,0(1)(3864). The bound state T_bb,0(1)(10642) can decay radiatively to B̅B̅γ.
In addition to bound states, we also obtain six isoscalar bbq̅q̅ resonant states with J^P=1^+. The state T_bb,0(1)(10700) is located near the B̅^*B̅^* threshold and identified as a B̅^*B̅^* molecular state. It is dominated by the B̅^*B̅^* component, resulting in a small decay width of only 2 MeV to the B̅^*B̅ channel. In contrast, its charmed partner T_cc,0(1)(4031) is a relatively broad state with a width of 54 MeV. This discrepancy may indicate that the nature of the two states is different, namely T_cc,0(1)(4031) may have sizable contributions from both the D^*D and D^*D^* channels. The state T_bb,0(1)(10700) can be searched for in the B̅^*B̅ channel.
The state T_bb,0(1)(11025) is a compact diquark-antidiquark tetraquark state. Comparing its internal structures with those of the bound state T_bb,0(1)(10491), we find an interesting resemblance. Both states are dominated by the χ_3̅_c⊗3_c color configuration. The rms radii between two bottom quarks r^ rms_b_1b_2=0.33 fm are the same for these two states, while the rms radii between the bottom quarks and light antiquarks r^ rms_bq̅ are around 1 fm for T_bb,0(1)(11025), larger than those for T_bb,0(1)(10491). This clearly suggests that T_bb,0(1)(11025) is the radial excitation in the light degree of freedom of T_bb,0(1)(10491). The state T_bb,0(1)(11025) can decay into the B̅^*B̅ and B̅^*B̅^* channels.
Two isoscalar bound states T_bb,0(0)(11195) and T_bb,0(2)(11370) are found in the 0^+ and 2^+ systems, respectively. The T_bb,0(0)(11195) does not decay into the S-wave BB pair and T_bb,0(2)(11370) does not decay into the S-wave B^*B^* pair, since the total wave functions of two identical bosons must be interchange symmetric. Hence, their lowest S-wave dimeson thresholds are M(1S) M'(2S). However, it should be noted that P-wave dimeson channels exist below these states. The coupling between the P-wave channels and these states may alter their positions and nature.
In contrast to the isoscalar systems, no bound state is obtained in the isovector bbq̅q̅ systems. The absence of an isovector bound state suggests that the configuration (q̅q̅)^0_3_c with isospin 0, which is referred to as “good antidiquark" in Ref. <cit.>, is essential for forming the doubly bottomed tetraquark bound states. We obtain two resonant states with J^P=0^+, four resonant states with J^P=1^+, and three resonant states with J^P=2^+. They are identified as either compact diquark-centered tetraquarks or compact even tetraquarks. The two lowest states T_bb,1(1)(10685) and T_bb,1(2)(10715) may be more likely to be observed in experiments. It is worth noting that in Ref. <cit.>, the authors used a different constituent quark model including the one-pion exchange interaction, and identified an isovector resonant state with J^P=1^+, whose mass and width are in agreement with the state T_bb,1(1)(10685) obtained in our calculations. We recommend experimental exploration of this state in the B̅^*B̅ channel.
§.§.§ bcq̅q̅
We identify three isoscalar bound states T_bc,0(0)(7129), T_bc,0(1)(7185), and T_bc,0(2)(7363) in the bcq̅q̅ systems. T_bc,0(0)(7129) and T_bc,0(1)(7185) are compact even tetraquark states, where the rms radii between four (anti)quarks are of similar size. Unlike the χ_3̅_c⊗3_c dominated compact diquark-centered tetraquark T_bb,0(1)(10491), these two compact even tetraquarks have sizable components of both χ_3̅_c⊗3_c and χ_6_c⊗6̅_c configurations. This suggests that both the interactions between two (anti)quarks and the interactions between quarks and antiquarks are important for these two states. The shallow bound state T_bc,0(2)(7363) with a binding energy of -3 MeV is identified as a B̅^*D^* molecular state. The scalar bound state T_bc,0(0)(7129) can only decay weakly, while T_bc,0(1)(7185) can decay radiatively to B̅ Dγ, and T_bc,0(2)(7363) can decay strongly to B̅^*Dπ. In contrast to the isoscalar case, no isovector bcq̅q̅ bound state is found. This again shows that the (q̅q̅)^0_3_c configuration plays a vital role in forming doubly heavy tetraquark bound states, as in the bbq̅q̅ systems. In Refs. <cit.>, the authors investigated possible bcq̅q̅ bound states from the molecular picture. It was shown in Ref. <cit.> that for the isovector system the interactions between charmed and bottomed mesons are repulsive and no bound state exists, which is consistent with our findings. Moreover, both studies <cit.> suggested that shallow bound states with binding energies of several MeV may exist in the I(J^P)=0(0^+),0(1^+), and 0(2^+) bcq̅q̅ systems. For comparison, we also obtain a loosely bound molecular state in the isoscalar 2^+ system. But for the isoscalar 0^+ and 1^+ systems, our quark model calculations lead to bcq̅q̅ compact tetraquark states with binding energies around -30 MeV.
We also obtain a series of resonant states with different quantum numbers. The two lowest states T_bc,0(0)(7301) and T_bc,1(2)(7430) may be more likely to be observed in experiments. They can be searched for in the B̅ D and B̅^*D^* channels, respectively. Moreover, in the I(J^P)=0(1^+) bcq̅q̅ system, we find that the scattering states of B̅D^* and B̅^*D^* deviate from the continuum lines, indicating possible existence of a resonant state, which may be the bcq̅q̅ analogue of T_cc,0(1)(4031) and T_bb,0(1)(10700). However, we cannot determine this state accurately due to the limitations of numerical precision in the present calculations.
§.§ QQ^(')s̅q̅
The complex eigenenergies of the S-wave ccs̅q̅, bbs̅q̅, and bcs̅q̅ systems are shown in Figs. <ref>, <ref> and <ref>, respectively. We obtain a series of QQ^(')s̅q̅ bound and resonant states, which are labeled as T_QQ^(')s̅,J(M) in the following discussions, where M represents the mass of the state. The complex energies, proportions of different color configurations and rms radii of these states are summarized in Table <ref>.
We obtain two resonant states in the ccs̅q̅ system. They are located near the M(1S)M'(2S) dimeson threshold and have compact even tetraquark configurations. T_ccs̅,1(4578) can decay into the D_s^*D, D_sD^*, and D_s^*D^* channel while T_ccs̅,2(4723) can only decay into the D_s^*D^* channel.
For the bbs̅q̅ system, we identify a bound state T_bbs̅,1(10647) with a binding energy of -64 MeV. It is a deeply bound state with a compact diquark-centered tetraquark configuration, and can be considered as the strange partner of T_bb,0(1)(10491). It is below the B̅_sB̅ threshold and can only decay weakly. Moreover, we also obtain four resonant states with J^P=1^+. The state T_bbs̅,1(10766) is located 3 MeV below the B̅_s^*B̅^* threshold and has a compact diquark-centered tetraquark configuration. It is the radial excitation in the light degree of freedom of T_bbs̅,1(10647), similar to T_bb,0(1)(11025) being the radial excitation of T_bb,0(1)(10491). It can decay into the B̅_s^*B̅ and B̅_sB̅^* channels, while the three higher resonant states can also decay into the B̅_s^*B̅^* channel. In addition, we obtain two resonant states with J^P=0^+ and three resonant states with J^P=2^+. The 0^+ states can decay into the B̅_sB̅ and B̅_s^*B̅^* channels while the 2^+ states can only decay into the B̅_s^*B̅^* channel.
For the bcs̅q̅ system, we obtain two resonant states each in the 0^+, 1^+, and 2^+ systems. Similar to the resonant state T_cc,0(1)(4031), T_bcs̅,1(7437) is sandwiched between two thresholds and its rms radii results are numerically unstable. The lowest 0^+ resonant state can decay into the B̅_sD and B̅D_s channels. The lowest 1^+ resonant state can decay into the B̅_s^*D, B̅^*D_s, B̅_sD^*, and B̅D_s^* channels. The lowest 2^+ resonant state can decay into the B̅_s^*D^* and B̅^*D_s^* channels.
§.§ QQ^(')s̅s̅
The complex eigenenergies of the S-wave ccs̅s̅, bbs̅s̅, and bcs̅s̅ systems are shown in Figs. <ref>, <ref> and <ref>, respectively. We obtain a series of QQ^(')s̅s̅ resonant states, which are labeled as T_QQ^(')s̅s̅,J(M) in the following discussions, where M represents the mass of the state. The complex energies, proportions of different color configurations and rms radii of these states are summarized in Table <ref>.
The QQ^(')s̅s̅ systems and the isovector QQ^(')q̅q̅ systems share the same internal symmetries, and their energy spectra bear a resemblance. In the ccs̅s̅ system, we obtain a resonant state T_ccs̅s̅,2(4808) below the D_s^*(2S)D_s^* threshold. It can be considered as the strange partner of T_cc,1(2)(4673). In the bbs̅s̅ system, the two lowest resonant states T_bbs̅s̅,1(10846) and T_bbs̅s̅,2(10877) are the strange partners of T_bb,1(1)(10685) and T_bb,1(2)(10715), respectively. In the bcs̅s̅ system, two resonant states with J^P=2^+ are found. The lowest state T_bcs̅s̅,2(7605) is the strange partner of T_bc,1(2)(7430). The mass differences between the isovector QQ^(')q̅q̅ resonant states and their strange partners QQ^(')s̅s̅ resonant states are around 150 MeV. For higher resonant states, the correspondence between QQ^(')s̅s̅ states and isovector QQ^(')q̅q̅ states is less clear. Compared to the isovector QQ^(')q̅q̅ systems, the QQ^(')s̅s̅ systems have fewer resonant states near the M(1S)M'(2S) dimeson thresholds. All of the QQ^(')s̅s̅ states are identified as compact tetraquarks and can be searched for in corresponding dimeson decay channels.
The state T_bcs̅s̅,2(7605) has a negative proportion of color configuration χ_6_c⊗6̅_c, which is rather confusing. However, it should be emphasized that the inner products calculated by the c-product in Eq. (<ref>) are not positive-definite and generally not real. Moreover, the physical quantities calculated by the c-product may only have probabilistic interpretation for resonant states that are not too broad <cit.>. Therefore, the negative proportions of the broad resonance T_bcs̅s̅,2(7605) should not be taken too seriously.
§.§ Heavy quark mass dependence of T_QQ,0(1) bound states
The I(J^P)=0(1^+) QQq̅q̅ systems, in which both loosely bound molecular states and deeply bound compact states are observed in our calculations, have attracted great attention. We shall investigate very carefully how the properties of the bound states change with the heavy quark mass m_Q. The binding energies of the ground state and the first excited state with varying m_Q are shown in Fig. <ref>. We see that the ground state is bound for m_Q larger than around 1200 MeV, and the binding energy increases as m_Q increases. When m_Q reaches around 4600 MeV, the first excited state also becomes a bound state. The rms radii of the two bound states with varying m_Q are shown in Fig. <ref>. As m_Q increases, r^ rms_Q_1q̅_1 and r^ rms_Q_2q̅_2 remain stable and match the sizes of the ground state Qq̅ mesons with spin 0 and 1 (denoted as M and M^*) respectively, while the other four rms radii decreases significantly. For the ground state, the rms radii between the clusters Q_1q̅_1 and Q_2q̅_2 are larger than 1 fm when m_Q is less than around 2 GeV, indicating a M^*M molecular configuration. In this scenario, the ground state is a loosely bound state with a binding energy |Δ E |< 20 MeV. For a larger m_Q, the two heavy quarks form a compact diquark, and the ground state transforms into a deeply bound compact diquark-centered tetraquark state. For the first excited state, the rms radii between the clusters Q_1q̅_1 and Q_2q̅_2 decreases more slowly compared to the ground state. The first excited state remains a molecular state for m_Q < 6 GeV.
The physical interpretation for the above findings is straightforward. As m_Q increases, the kinetic energies of the heavy quarks are suppressed. The distance between two heavy quarks gets smaller and the attractive color electric interaction gets stronger, leading to a deeper bound state and a potential second bound state. Interestingly, a similar conclusion on the existence of two types of tetraquark bound states in the limit of large heavy quark mass was reached in Ref. <cit.> using the Born-Oppenheimer approximation within the framework of large N QCD. Future experimental explorations of the two T_bb,0(1) bound states may help test theoretical predictions.
§ SUMMARY
In summary, we calculate the energy spectra of the S-wave doubly heavy tetraquark systems QQ^(')q̅q̅, QQ^(')s̅q̅, and QQ^(')s̅s̅ (Q^(')=b,c) using the AL1 quark potential model. We apply the complex scaling method to study possible bound states and resonant states simultaneously, and the Gaussian expansion method to solve the four-body Schrödinger equation. We focus on the low-lying states below the lowest M(1S)M'(2S) dimeson threshold. The uncertainties of these tetraquark states are expected to be of the same order as those of the 1S mesons, which are around tens of MeV.
We obtain bound states in the ccq̅q̅, bbq̅q̅, bcq̅q̅, and bbs̅q̅ systems. The shallow bound state T_cc,0(1)(3864) serves as a candidate for the experimental T_cc(3875)^+ state. The bound state T_bc,0(2)(7363) can decay strongly to B̅^*Dπ. The bound states T_bb,0(1)(10642) and T_bc,0(1)(7185) can decay radiatively. The bound states T_bb,0(1)(10491), T_bc,0(0)(7129), and T_bbs̅,1(10647) can only decay weakly. In addition, a series of doubly heavy resonant states are found. We urge future experimental explorations of these predicted states.
We use the rms radii to distinguish between meson molecular states and compact tetraquark states. The compact tetraquark states are further classified into three different configurations: compact even tetraquark, compact diquark-antidiquark tetraquark and compact diquark-centered tetraquark. The shallow bound states T_cc,0(1)(3864), T_bb,0(1)(10642), and T_bc,0(2)(7363) have molecular configurations, which are QCD molecules.
The deeply bound states T_bb,0(1)(10491) and T_bbs̅,1(10647) are compact diquark-centered tetraquarks, which are coined as the “QCD Helium atom" in Ref. <cit.>. The T_bc,0(0)(7129) and T_bc,0(1)(7185) are compact even tetraquarks, which are ideal candidates of the “QCD Hydrogen molecule" as noted in Ref. <cit.>.
Most of the resonant states are compact tetraquark states, except that T_bb,0(1)(10700) is a B̅^*B̅^* molecular state. The resonant states T_bb,0(1)(11025) and T_bbs̅,1(10766) are considered as the radial excitations in the light degree of freedom of the bound states T_bb,0(1)(10491) and T_bbs̅,1(10647), respectively. It is worth noting that all of the compact diquark-centered tetraquarks and compact diquark-antidiquark tetraquarks identified in our calculations are dominated by the χ_3̅_c⊗3_c color configuration, except for the broad resonant state T_bc,0(0)(7301). In these states, the attractive color electric interactions between two heavy quarks play an important role. On the other hand, mixing effect between χ_3̅_c⊗3_c and χ_6_c⊗6̅_c configurations is important in compact even tetraquarks and meson molecules. Similar classifications of tetraquark configurations were made in Ref. <cit.>. The compact diquark-centered tetraquarks and compact diquark-antidiquark tetraquarks resemble the type-1 tetraquarks in Ref. <cit.>, and the compact even tetraquarks and meson molecules resemble the type-2 tetraquarks. The classifications of tetraquarks based on their color-spatial configurations help unravel the rich internal structures and various forming mechanisms of tetraquark states.
We also explore the heavy quark mass dependence of the T_QQ,0(1) bound states. As the heavy quark mass increases from 1.2 GeV to 6 GeV, the ground state transforms from a loosely bound molecular state to a deeply bound compact diquark-centered tetraquark state, with the emergence of a second loosely bound molecular state. The future experimental explorations of the two T_bb,0(1) bound states may help test theoretical predictions and deepen our understanding of quantum chromodynamics.
§ ACKNOWLEDGMENTS
We thank Zi-Yang Lin for the helpful discussions. This project was supported by the National
Natural Science Foundation of China (No. 12475137). This
project was also funded by the Deutsche Forschungsgemeinschaft (DFG,
German Research Foundation, Project ID 196253076-TRR 110). The computational resources were supported by High-performance Computing Platform of Peking University.
§ TWO DEFINITIONS OF ROOT-MEAN-SQUARE RADIUS
In our calculations, we use the decomposed non-antisymmetric wave function to calculate the rms radii. It seems more reasonable to calculate the rms radii of compact tetraquark states using the complete wave function, which considers contributions from both direct terms and exchange terms. However, our primary interest lies in the general clustering behavior of tetraquark states, rather than in specific numerical results of the rms radii, which are not experimentally observable at present. The rms radii calculated using the decomposed non-antisymmetric wave function are already capable of distinguishing between different tetraquark configurations. To illustrate this, we compare the rms radii calculated using the complete wave function Ψ and the decomposed non-antisymmetric term Ψ^J_ nA in Table <ref>. We take four states with different configurations as examples. For the compact tetraquark states T_bc,0(1)(7185), T_cc,0(1)(4466), and T_bb,0(1)(10491), the results from Ψ and Ψ_ nA are similar. We can draw the same conclusion on their spatial structures from both results. However, for the molecular state T_bb,0(1)(10642), the results from Ψ_ nA can clearly demonstrate the clustering behaviour of a molecular state, while the results from Ψ are more ambiguous due to the antisymmetrization.
In conclusion, the novel definition of rms radii, which are calculated using only the decomposed non-antisymmetric wave function, can reflect the internal spatial structure of tetraquark states more transparently.
|
http://arxiv.org/abs/2409.02803v1 | 20240904152258 | Configurational entropy and stability conditions of fermion and boson stars | [
"P. S. Koliogiannis",
"M. Vikiaris",
"C. Panos",
"V. Petousis",
"M. Veselsky",
"Ch. C. Moustakidis"
] | gr-qc | [
"gr-qc",
"astro-ph.HE",
"nucl-th",
"physics.comp-ph"
] |
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
^1Department of Physics, Faculty of Science, University of Zagreb, Bijenička cesta 32, 10000 Zagreb, Croatia
^2Department of Theoretical Physics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
^3Institute of Experimental and Applied Physics, Czech Technical University, Prague, 110 00, Czechia
§ ABSTRACT
In a remarkable study by M. Gleiser and N. Jiang (Phys. Rev. D 92, 044046, 2015), the authors demonstrated that the stability regions of neutron stars, within the framework of the simple Fermi gas model, and of self-gravitating configurations of a complex scalar field (boson stars) with various self-couplings, obtained through traditional perturbation methods, correlate with critical points of the configurational entropy with an accuracy of a few percent. Recently, P. Koliogiannis et al. (Phys. Rev. D 107, 044069, 2023) found that while the minimization of the configurational entropy generally anticipates the stability point qualitatively for neutron stars and quark stars, this approach lacks universal validity. In this work, we aim to further elucidate this issue by seeking to reconcile these seemingly contradictory findings. Specifically, we calculate the configurational entropy of fermionic and bosonic systems, described by interacting Fermi and Boson gases, respectively, that form compact objects stabilized by gravity. We investigate whether the minimization of configurational entropy coincides with the stability point of the corresponding compact objects. Our results indicate a strong correlation between the stability points predicted by configurational entropy and those obtained through traditional methods, with the accuracy of this correlation showing a slight dependence on the interaction strength. Consequently, the stability of compact objects, composed of components obeying Fermi or Boson statistics, can alternatively be assessed using the concept of configurational entropy.
03.67.-a, 04.40.Dg, 97.60.Jd, 05.30.-d, 02.30.Nw
Configurational entropy and stability conditions of fermion and boson stars
Ch.C. Moustakidis^2
September 9, 2024
============================================================================
§ INTRODUCTION
In recent years there has been an extensive interest in the study of astrophysical objects with the help of the concept of information entropy and related quantities. In particular, Sañudo and Pacheco <cit.> studied the relation between the complexity and the structure of white dwarfs. Later on, the aforementioned study has been applied in neutron star's structure <cit.> and it was found that the interplay between gravity, the short-range nuclear force, and the very short-range weak interaction shows that neutron stars, under the current theoretical framework, are ordered systems. Similar studies took place in the following years in a series of papers <cit.> and Herrera et al. <cit.> elaborated the definition of the complexity factors in self-gravitating systems, approaching the problem in a different way. Some additional applications of the concept of the information measures may be found in
Refs. <cit.>.
A more specific application of the information measure is the configurational entropy (CE). The concept of the CE has been introduced by Gleiser and Stamatopoulos <cit.> in order to study possible relation between the dynamical and information content of physics models with localized energy configurations. In the next years, the CE has been applied in several similar studies <cit.>. In a notable study, Gleiser and Jiang <cit.> investigated the connection between the stability of compact objects (white dwarfs, neutron stars and boson stars) and the corresponding information-entropic measure. According to their notable finding, the minimization of the CE offers an alternative way to predict the stability condition through the maximum mass configuration, for a variety of stellar objects. It is worth noting that there is no theoretical argument (or proof) to relate the stability point to the minimum of the CE. However, it is intuitive to expect that since the maximum mass corresponds to the most compact configuration (maximum mass and minimum radius of a stable configuration), the corresponding CE will exhibit an extreme value (in this case a total minimum).
Recently in Ref. <cit.> we extended the aforementioned study in various compact objects including neutron stars, quark stars, and hybrid stars. Employing a large set of realistic equations of state (EoS), in each case, we found that the suggested prediction of the stability by the minimization of the CE, concerning neutron stars and quark stars, does not have, at least quantitatively, universal validity.
It is worth mentioning that the study of the longstanding problem of the stability of relativistic stars <cit.> is mainly carried out by the following three methods: (a) the method of locating the point that corresponds to the minimization of the binding energy defined as E_B=(M-m_bN)c^2 (where m_b is the mass of a single nucleon, M stands for the gravitational mass, and N is the total number of nucleons) <cit.>, (b) the variational method developed by Chandrasekhar <cit.>, and (c) the method based on the dependence of the gravitational mass M and the radius R on the central energy density E_c (hereafter traditional method TM). The stability condition demands that the mass increases with increasing central energy density dM/d E_c>0. The extrema in the mass indicates a change in the stability of the compact star configuration <cit.>.
In the present work we employ the third method in order to investigate a possible relation between the stability of a relativistic star and the corresponding CE. In this case, the questions that arise and must be answered (or at least investigated) are the following: Is there any one-to-one correspondence between the minimum of the CE and the stability point for each realistic EoS? Is this rule universal or it depends on the specific character of each EoS? Is it possible, even in some special cases, to associate stability with minimization of CE, and if so, which is the underlying reason? Obviously, one can appreciate the importance of discovering new ways, beyond the classical ones, to find stability conditions for compact objects.
In view of the above questions, the main motivation of the present work is to provide an extended examination of the statements of Refs. <cit.> and <cit.>. In Ref. <cit.> the authors found that the stability regions of neutron stars (in the framework of the simple Fermi gas model) as well as of self-gravitating configurations of complex scalar field (boson stars) with various self couplings (detailed reviews in Refs. <cit.>), obtained from traditional perturbation methods, correlating the critical points of the CE with an accuracy of a few percent. In Ref. <cit.> the authors found that the suggested prediction of the stability by the minimization of the CE, concerning neutron stars and quark stars, does not have, at least quantitatively, universal validity although in several cases it qualitatively predicts the existence of the stability point. In this work, we aim to further elucidate this issue by seeking to reconcile these seemingly contradictory findings. Specifically, we calculate the configurational entropy of bosonic and fermionic systems, described by interacting Fermi and Boson gases, respectively, that form compact objects stabilized by gravity. We investigate whether the minimization of configurational entropy coincides with the stability point of the corresponding compact objects. Our results indicate a strong correlation between the stability points predicted by configurational entropy and those obtained through traditional methods, with the accuracy of this correlation showing a slight dependence on the interaction strength.
The paper is organized as follows. In Sec. <ref> we present the basic formalism of the hydrodynamic equilibrium and the role of analytical solutions while in Sec. <ref>, we review the definition of the configurational entropy. The parametrization of the equations of state is provided in Sec. <ref> and in Sec. <ref> the results of the present study are laid out and discussed. Finally, Sec. <ref> contains the concluding remarks.
§ HYDRODYNAMIC EQUILIBRIUM AND ANALYTICAL SOLUTIONS
To construct the corresponding configuration for each compact object, which is the key input for calculating the CE, we employ Einstein's field equations for a spherically symmetric fluid. In this case the mechanical equilibrium of the stellar matter is determined by the well-known Tolman-Oppenheimer-Volkoff (TOV) equations <cit.>
dP(r)/dr = - [G E(r) M(r)/(c^2 r^2)] [1 + P(r)/E(r)] [1 + 4π P(r) r^3/(M(r) c^2)] [1 - 2G M(r)/(c^2 r)]^-1,
dM(r)/dr = [4π r^2/c^2] E(r).
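As an illustration of how the configurations entering the CE calculation are generated, the following minimal sketch integrates the TOV equations outwards from the centre for a given barotropic EoS. It is not the authors' code: geometrized units (G = c = 1), the simple polytropic EoS, the central pressure, and the numerical tolerances are assumptions made purely for illustration, and any of the EoSs discussed below can be substituted for the energy_density function.

```python
# Minimal TOV integration sketch (not the authors' code). Geometrized units
# G = c = 1 and an illustrative polytrope are assumed; any barotropic E(P)
# can be substituted.
import numpy as np
from scipy.integrate import solve_ivp

K, GAMMA = 100.0, 2.0                       # illustrative polytrope P = K * rho^GAMMA

def energy_density(P):
    """Barotropic EoS E(P); here a cold polytrope with E ~ rho (illustrative)."""
    return (np.maximum(P, 0.0) / K) ** (1.0 / GAMMA)

def tov_rhs(r, y):
    P, M = y
    E = energy_density(P)
    if P <= 0.0 or r == 0.0:
        return [0.0, 0.0]
    # dP/dr and dM/dr as in the equations above (with G = c = 1)
    dPdr = -(E * M / r**2) * (1 + P / E) * (1 + 4*np.pi*P*r**3 / M) / (1 - 2*M / r)
    dMdr = 4*np.pi * r**2 * E
    return [dPdr, dMdr]

def solve_star(P_c, r_max=50.0):
    """Integrate from the centre until the pressure drops to (almost) zero."""
    r0 = 1e-6
    y0 = [P_c, (4.0/3.0)*np.pi*r0**3*energy_density(P_c)]   # near-centre expansion
    hit_surface = lambda r, y: y[0] - 1e-12*P_c
    hit_surface.terminal, hit_surface.direction = True, -1
    sol = solve_ivp(tov_rhs, (r0, r_max), y0, events=hit_surface,
                    max_step=1e-2, rtol=1e-8)
    R, M = sol.t[-1], sol.y[1, -1]
    return R, M, sol

if __name__ == "__main__":
    R, M, _ = solve_star(P_c=1e-3)
    print(f"R = {R:.3f}, M = {M:.3f} (geometrized units)")
```

Repeating this for a range of central pressures yields the M-ρ_c and ρ(r) profiles used below.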
In general, to obtain realistic solutions, it is most natural to numerically solve the TOV equations by incorporating an EoS that describes the relationship between pressure and density within the fluid interior. Alternatively, one can seek analytical solutions to the TOV equations, though these solutions may lack physical relevance. While there are hundreds of analytical solutions to the TOV equations <cit.>, only a few are of significant physical interest. In this work, we employ two of these noteworthy solutions: the Schwarzschild (constant-density interior solution) and the Tolman VII solution <cit.>. It is important to note that analytical solutions are highly valuable, as they often provide explicit expressions for the quantities of interest and are instrumental in verifying the accuracy of numerical calculations. Below, we briefly describe these two fundamental analytical solutions.
* Schwarzschild solution: In the case of the Schwarzschild interior solution, the density is constant throughout the star <cit.>. The energy density and the pressure read as
E = E_c = 3Mc^2/(4π R^3),
P(x)/E_c = [√(1-2β) - √(1-2β x^2)] / [√(1-2β x^2) - 3√(1-2β)],
where x=r/R, β=GM/Rc^2 is the compactness of the star and E_c=ρ_c c^2 is the central energy density.
* Tolman VII solution: The Tolman VII solution has been extensively employed in neutron star studies while its physical realization has been examined, very recently, in detail <cit.>. The stability of this solution has been examined by Negi et al. <cit.> and also confirmed in Ref. <cit.>. The energy density and the pressure read as <cit.>
E(x)/E_c = 1 - x^2,    E_c = 15Mc^2/(8π R^3),
P(x)/E_c = (2/15)√(3e^-λ/β) tanϕ - 1/3 + x^2/5.
It is worth mentioning that these solutions are applicable to any kind of compact object, independently of the values of mass and radius. Consequently, they are suitable for studying any massive or supramassive object whose hydrodynamic stability is governed by the TOV equations. In any case, useful insight can be gained from analytical solutions concerning both the qualitative and quantitative behavior of the CE as a function of central density ρ_c.
§ CONFIGURATIONAL ENTROPY IN MOMENTUM SPACE
The key quantity to calculate the CE in momentum space is the Fourier transform F( k) of the density ρ(r)= E(r)/c^2, originating from the solution of the TOV equations, that is
F( k) = ∫∫∫ρ(r)e^-i k· r d^3 r
= 4π/k∫_0^R ρ(r)r sin(kr)dr.
It is notable that the function F(k) in the case of zero momentum, coincides with the gravitational mass of the compact object, that is F(0)≡ M, since by definition (see Eq. (<ref>))
M=4π∫_0^R ρ(r) r^2 dr,
where the density ρ(r) derives from the solution of the TOV equations. Moreover, we define the modal fraction f( k) <cit.>
f( k)=|F( k)|^2/∫|F( k)|^2d^3 k,
and also the function f̃( k)=f( k)/f( k)_ max, where f( k)_ max is the maximum fraction, which is given in many cases by the zero mode k=0, or by the system's longest physics mode, |k_min|=π/R. The above normalization guarantees that f̃( k) ≤ 1 for all values of k.
Finally, the CE, S_C, as a functional of f̃( k), is given by
S_C[f̃]=-∫f̃( k) ln [f̃( k)] d^3 k.
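The following sketch shows how F(k), the modal fraction f(k), and S_C can be evaluated for a tabulated spherically symmetric density profile. It is a minimal illustration rather than the authors' implementation; the k-grid and its upper cut-off are arbitrary choices made here, and for boson stars (where no cut-off is used) the grid would need to extend until the integrand becomes negligible.

```python
# Sketch of the configurational-entropy computation in momentum space for a
# spherically symmetric density rho(r) tabulated on [0, R]. Grid sizes and the
# k-range cut-off are illustrative assumptions, not values from the paper.
import numpy as np

def configurational_entropy(r, rho, k_max=50.0, n_k=2000):
    k = np.linspace(1e-4, k_max, n_k)
    # F(k) = (4*pi/k) * int_0^R rho(r) r sin(k r) dr  (spherical Fourier transform)
    F = np.array([4*np.pi/ki * np.trapz(rho * r * np.sin(ki*r), r) for ki in k])
    power = np.abs(F)**2
    # modal fraction f(k), normalised over d^3k = 4*pi*k^2 dk
    norm = np.trapz(power * 4*np.pi*k**2, k)
    f = power / norm
    f_tilde = f / f.max()
    log_f = np.log(f_tilde, where=f_tilde > 0, out=np.zeros_like(f_tilde))
    # S_C = -int f_tilde ln(f_tilde) d^3k
    return np.trapz(-f_tilde * log_f * 4*np.pi*k**2, k)

# Example: constant-density (Schwarzschild-like) profile with R = 1, rho_c = 1
r = np.linspace(0.0, 1.0, 400)
print(configurational_entropy(r, np.ones_like(r)))
```

Note that F(k) evaluated at k → 0 recovers the gravitational mass, which provides a convenient consistency check on the numerics.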
Summarizing, for each EoS an infinite number of configurations can be constructed, leading to the M-ρ_c and S_C-ρ_c relations. The latter facilitates the investigation of any possible correlation between the minimum of S_C and the stability point of the compact objects under examination.
§ EQUATION OF STATE
The present study focuses on the role of the CE as a stability condition of compact objects obeying Fermi or Boson statistics including a parameterized self-interaction. The compact objects under consideration are neutron stars (introduced by a simplified EoS), boson stars and other type of astrophysical objects composed of fermions or bosons, such as dark matter stars. In following, the equations of interacting Fermi and boson gases are introduced.
§.§ Interacting Fermi Gas (FG)
In the case of compact objects consisting solely of an interacting Fermi gas (FG), we consider the simplest extension of the free fermion gas, in which an extra term introducing a repulsive interaction between fermions is added. The energy density and pressure of the fermions are then given by (for an extensive analysis see Ref. <cit.>)
E(n_χ) = (m_χ c^2)^4/[8π^2 (ħ c)^3] [ x√(1+x^2)(1+2x^2) - ln(x+√(1+x^2)) ] + (y^2/2) (ħ c)^3 n_χ^2,
P(n_χ) = (m_χ c^2)^4/[8π^2 (ħ c)^3] [ x√(1+x^2)(2x^2/3 - 1) + ln(x+√(1+x^2)) ] + (y^2/2) (ħ c)^3 n_χ^2,
where m_χ is the particle mass, which is taken equal to m_χ=939 MeV/c^2 for simplicity (this holds throughout the study), n_χ is the number density and
x=(ħ c)(3π^2n_χ)^1/3/m_χc^2.
The last parameter, y (in units of MeV^-1), is the one that introduces the repulsive interaction. In this study we have considered the values y=[0,0.001,0.005,0.01,0.05,0.1,0.3,0.5] ( MeV^-1) where increasing y increases the strength of the interaction and vice-versa.
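A minimal sketch of the interacting Fermi-gas EoS above is given below; it is not the authors' code, ħc = 197.327 MeV fm is used, and the example density and interaction strength are illustrative.

```python
# Sketch of the interacting Fermi-gas EoS above. hbar*c = 197.327 MeV fm,
# m_chi = 939 MeV/c^2 and the range of y follow the text; the example inputs
# are illustrative.
import numpy as np

HBARC = 197.327          # MeV fm
M_CHI = 939.0            # MeV

def fermi_gas_eos(n_chi, y=0.0):
    """Return (E, P) in MeV/fm^3 for number density n_chi in fm^-3 and y in MeV^-1."""
    x = HBARC * (3*np.pi**2 * n_chi)**(1/3) / M_CHI
    pref = M_CHI**4 / (8*np.pi**2 * HBARC**3)
    asinh = np.log(x + np.sqrt(1 + x**2))
    interaction = 0.5 * y**2 * HBARC**3 * n_chi**2
    E = pref * (x*np.sqrt(1 + x**2)*(1 + 2*x**2) - asinh) + interaction
    P = pref * (x*np.sqrt(1 + x**2)*(2*x**2/3 - 1) + asinh) + interaction
    return E, P

print(fermi_gas_eos(0.16, y=0.01))   # roughly nuclear saturation density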
§.§ Interacting Boson Gas (BG)
To enrich the study, we included the case of compact stars composed of bosonic matter and specifically that of an interacting Boson gas. As the construction of the EoS for the aforementioned gas is not unambiguously defined, we introduce three cases based on different assumptions. It needs to be noted that since the scalar field only vanishes at spatial infinity, boson stars do not have a specific radius where the energy density and pressure vanish. Thus, we do not use a momentum cut-off scheme, 0≤ | k|≤∞.
* BG - C1: The first derivation of the EoS of boson stars with a repulsive interaction was given in Ref. <cit.> and since then it has been used extensively in the corresponding calculations. In particular, the energy density is given as
E(P) = [4/(3w)] [ ( √((9w/4)P) + 1 )^2 - 1 ],
where w=4λ (ħ c)^3/(m_χc^2)^4 (in units of MeV^-1 fm^3). In fact, the parameter λ is the one that is related with the strength of the interaction. However, it is usual to employ the combination of m_χ and λ defined as w. In this study we have considered the values w=[0.001,0.005,0.01,0.05,0.1,0.3,0.5] ( MeV^-1 fm^3) where increasing w increases the strength of the interaction and vice-versa.
* BG - C2: The second way to describe the interior of a boson star is through the EoS provided in Ref. <cit.> and used recently in Ref. <cit.>, where the energy density is given by
E(P)=P +√(2P/z),
where the interaction parameter z is given by z=u^2(ħ c)^3/(m_χ c^2)^2 (in units of MeV^-1 fm^3) and the quantity u= g_χ/m_ϕc^2 defines the strength of the interaction in analogy to the case of fermions. In this study we have considered the values z=[0.001,0.005,0.01,0.05,0.1,0.3,0.5] ( MeV^-1 fm^3).
* BG - C3: Recently, in Ref. <cit.> the authors studied the properties of self-interacting boson stars with different scalar potentials. They concluded that the resulting properties of the boson star configurations differ considerably from previous calculations. Therefore, to enhance the connection between the stability criterion and the CE, we employed two cases of the EoSs introduced in Ref. <cit.>: (a) one with a mass term (MT) and (b) one with a vacuum term (VT) without a mass term. The scaling EoSs read as
ℰ(P) = P^{2/n} + (n+2)P/(n-2)   (MT),
ℰ(P) = 1 + (n+2)P/(n-2)   (VT),
where the index n is restricted to n>2. In this study we have considered the values n=[4,5] for MT (hereafter case (a)) and n=[3,4,5] for VT (hereafter case (b)). An additional reason for using the mentioned cases is because they lead to different mass-radius diagrams (depending on the index values) and thus, cover a large range of cases that may correspond to boson stars.
We consider that pluralism in the use of EoS will greatly help to test the plausibility of the stability criterion through the CE in the case of boson stars.
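For concreteness, the three bosonic parameterizations can be written as simple E(P) functions, as in the sketch below. This is an illustrative transcription of the equations above rather than the authors' code; the bracketing of the BG - C1 expression and the P^{2/n} exponent of the MT case follow the forms written above, and the parameter values are arbitrary examples.

```python
# Sketch of the three bosonic EoS parameterizations E(P) described in the text.
# Parameter values are illustrative; w and z are in MeV^-1 fm^3, P in MeV fm^-3
# (the BG - C3 scaling EoS is dimensionless).
import numpy as np

def eos_bg_c1(P, w=0.1):
    return 4.0/(3.0*w) * ((np.sqrt(9.0*w*P/4.0) + 1.0)**2 - 1.0)

def eos_bg_c2(P, z=0.1):
    return P + np.sqrt(2.0*P/z)

def eos_bg_c3(P, n=4, vacuum_term=False):
    # 'vacuum_term' selects the VT case; otherwise the mass-term (MT) case is used
    low_pressure_part = 1.0 if vacuum_term else P**(2.0/n)
    return low_pressure_part + (n + 2.0)*P/(n - 2.0)

P = np.linspace(0.0, 10.0, 5)
print(eos_bg_c1(P), eos_bg_c2(P), eos_bg_c3(P, n=4), eos_bg_c3(P, n=3, vacuum_term=True))
```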
§ RESULTS AND DISCUSSION
As a first step, we employed two analytical solutions, namely the Schwarzschild and Tolman VII solutions, in order to calculate the CE. It is worth pointing out that although it is more natural to use a realistic EoS for the fluid interior in order to solve Einstein's field equations, analytical solutions have the advantage that, by having an explicit form, the examination of the implied physics becomes simpler. We should remark that the analytical solutions are a source of an infinite number of EoSs (plausible or not). Consequently, they can be used extensively to introduce and establish some universal approximations (for more details and discussion see also Ref. <cit.>). In both cases, we formulate the dependence in the form
S_C ρ_c^-1/4π b^-1= C× 10^5×( km/R)^31/ρ_c/ba^-3,
where
a=1/π(h/mc)^3/2c/(mG)^1/2, b=c^2/Ga,
and C taking the values 1.728 and 0.145 for the Schwarzschild and Tolman-VII solutions, respectively.
Although the above expression does not ensure the location of the stability point, it is very useful for two reasons: (a) comparison with the results produced by using realistic EoSs, for a fixed value of the radius R (see Fig. <ref>(c)), and (b) check and ensure the accuracy of our numerical calculations.
In Fig. <ref> we display in order the four cases corresponding to: (first) Fermi gas, (second) Boson gas - C1, (third) Boson gas - C2, and (fourth) Boson gas - C3. In particular, Fig. <ref>(a) manifests the dependence of the gravitational mass on the radius, Fig. <ref>(b) presents the dependence of the gravitational mass on the central density, and Fig. <ref>(c) indicates the CE as a function of the central density for various values of the interaction parameters. In addition, diamonds demonstrate the stability points due to the TM, while open circles mark the minimum of the CE.
In the case of the FG, the first panel of Fig. <ref> shows that the points due to the TM and the CE are located in close proximity, validating the CE method for the location of the stability point. However, a detailed presentation of the percentage error on some interesting quantities, namely the gravitational mass, the radius, the central energy density and the compactness, as shown in Table <ref>, signals a different behavior. While the error in the gravitational mass is lower than 5%, the error in the central energy density can reach up to almost 99%, depending on the value of the interaction. The latter has its origin in the creation of a plateau immediately after the rapid decrease of the CE. The existence of a plateau maintains the CE in a narrow region, while the central energy density spans a wide region. In addition, as the data indicate, there is no simple relation between the interaction and the corresponding error to establish a pattern. Thus, in the FG case, while the CE can potentially establish the maximum gravitational mass with good accuracy, the proper description of the central energy density is almost impossible. It needs to be noted that for some specific values of the interaction, the location of the total minimum in the CE was not successful, even at high values of densities beyond the maximum mass configuration (unstable region). In those cases, we located a local minimum near the density that corresponds to the maximum mass configuration. The aforementioned statement holds for all cases under consideration in the present study.
In the case of the BG, a similar behavior with the FG is observed. Once more, Fig. <ref> in the second, third and fourth panel, displays a visually small difference between the TM and CE points. Nevertheless, Table <ref> illustrates that the error in the underlying quantities aligns with the trend observed in the FG. Specifically, the error in gravitational mass is less than 5%, while the error in central energy density can extend up to 100%. In this instance as well, the CE can be employed to determine the maximum mass, but not the associated central energy density. The observed behavior in both the FG and BG cases strengthens the argument that the location of the stability point is an intrinsic property of the EoS.
As one might easily suspect that the CE is related to the star's compactness, Fig. <ref> displays the dependence of the CE on the compactness for (a) FG, (b) BG - C1, (c) BG - C2, and (d) BG - C3. In the compactness plane the CE also forms a plateau, similar to the central energy density plane in Fig. <ref>(c), but in the majority of the cases it is not as extended, depending on the corresponding mass-radius diagram of Fig. <ref>(a). In the FG case, the error can reach values up to 12%, while in the majority of cases the error is lower than 6%. This result is in accordance with the error in the gravitational mass along with the corresponding error values in the radius. As the radius is more sensitive to the structure of the star, this sensitivity is also reflected in the error, which reaches values close to 15%. Concerning the BG case, the error in the compactness generally remains under 12%, and in the corresponding radius under 14%.
For a visual presentation of Table <ref> for the predictions of the two methods, Fig. <ref> displays the percentage error on the predictions for various fermionic and bosonic stars properties (gravitational mass, radius, central density and compactness) as a function of the relative parameterization of the interaction strength (y,w,z,n) in each particular case. As a general comment, the convergence of the two methods is clearly better in the case of fermion stars compared to that of boson stars (at least for the selected range of parameterizations). Furthermore, for the case of a boson star, the choice of the EoS is decisive for the accuracy of the convergence of the two methods.
§ CONCLUDING REMARKS
Compact objects composed of either fermionic or bosonic matter have been employed for studying the configurational entropy as a means of assessing stability. The aforementioned quantity should be in alignment with the stability criterion of the TM method. It is important to note that the existence of a stable configuration is a property of gravity and independent of the EoS. However, the specific location of the stability point is influenced by the underlying EoS <cit.>. Considering the above points, one might expect, as suggested in Ref. <cit.>, that the minimization of the CE is a consequence of gravity within the framework of general relativity. If this is the case, the relevant minimum (which should be a total minimum) should be independent of the applied EoS. However, our findings indicate that this is not universally true for compact objects.
The CE was studied in light of the gravitational mass, radius, central energy density and compactness. The TM and the CE method for the location of the stability point converge, with a good or moderate accuracy, for the three out of four quantities under consideration. In fact, the most accurate prediction lies with the maximum mass, where the difference reaches values lower than 5%. In addition, a quantity with good accuracy is also the corresponding radius with errors up to 15%, while in the majority of the cases, the error is lower than 10%. As a result, the combination of the aforementioned macroscopic quantities, which is the compactness, is also a quantity with accuracy lying on values lower than 12%. These three quantities are decisive indicators of the macroscopic quantities of compact stars and leading to the ultimate result that in a macroscopic scale, the two methods for locating the stability point, are in agreement.
The last quantity under consideration, the central energy density, exhibits errors that can reach values up to 100%. From this point of view, the two methods contradict each other, rendering the central energy density unreliable when estimated through the CE method.
The above result does not agree with the recent finding that the prediction of stability via minimization of the CE, in the case of neutron stars and quark stars, does not have, at least quantitatively, universal validity. A possible explanation, at least for neutron stars, is that the existence of the crust, which has a distinct constitutive description, has a dramatic effect on locating the stability point by the CE minimization method. On the other hand, in the case of quark stars, where there is no crust, the failure of the method does not currently have a solid explanation. Thus, the accurate prediction of the stability point is related not only to the uniformity of the EoS, as in the case of the interacting Fermi and Boson gases, where no crust is added, but also to its specific form.
In conclusion, the CE method can be used as a qualitative, rather than a quantitative, tool to macroscopically locate the instability region of certain configurations of compact objects. Finally, although the CE method is an alternative approach for exploring the instability regions of compact objects, the dependence on the specific EoS and the internal structure of the compact star are factors with a decisive role in the validity of the approach.
§ ACKNOWLEDGMENTS
The authors would like to thank Dr. Nan Jiang for correspondence and useful comments. All numerical calculations were performed on a workstation equipped with 2 Intel Xeon Gold 6140 Processors (72 cpu cores in total) provided by the MSc program “Computational Physics” of the Physics Department, Aristotle University of Thessaloniki. This work was supported by the Croatian Science Foundation under the project number HRZZ- MOBDOL-12-2023-6026, by the Croatian Science Foundation under the project number IP-2022-10-7773 and by the Czech Science Foundation (GACR Contract No. 21-24281S).
|
http://arxiv.org/abs/2409.02451v1 | 20240904051215 | Fast, High-Quality and Parameter-Efficient Articulatory Synthesis using Differentiable DSP | [
"Yisi Liu",
"Bohan Yu",
"Drake Lin",
"Peter Wu",
"Cheol Jun Cho",
"Gopala Krishna Anumanchipalli"
] | eess.AS | [
"eess.AS",
"cs.AI",
"cs.SD"
] |
Fast, High-Quality and Parameter-Efficient Articulatory Synthesis using Differentiable DSP
==========================================================================================
§ ABSTRACT
Articulatory trajectories like electromagnetic articulography (EMA) provide a low-dimensional representation of the vocal tract filter and have been used as natural, grounded features for speech synthesis. Differentiable digital signal processing (DDSP) is a parameter-efficient framework for audio synthesis. Therefore, integrating low-dimensional EMA features with DDSP can significantly enhance the computational efficiency of speech synthesis. In this paper, we propose a fast, high-quality, and parameter-efficient DDSP articulatory vocoder that can synthesize speech from EMA, F0, and loudness. We incorporate several techniques to solve the harmonics / noise imbalance problem, and add a multi-resolution adversarial loss for better synthesis quality. Our model achieves a transcription word error rate (WER) of 6.67% and a mean opinion score (MOS) of 3.74, with an improvement of 1.63% and 0.16 compared to the state-of-the-art (SOTA) baseline. Our DDSP vocoder is 4.9x faster than the baseline on CPU during inference, and can generate speech of comparable quality with only 0.4M parameters, in contrast to the 9M parameters required by the SOTA.
Neural vocoder, articulatory synthesis, DDSP, computational efficiency, parameter-efficient, high-quality
§ INTRODUCTION
Articulatory synthesis is the task of generating speech audio from articulatory features, i.e., the physical movements of human articulators, often measured as electromagnetic articulography (EMA). Since the articulatory features are physically grounded <cit.>, EMA-to-speech vocoders are more interpretable than mel-spectrogram-based vocoders <cit.>. Articulatory vocoders are also highly controllable, allowing for nuanced adjustments in speech generation <cit.>. Given these unique characteristics, articulatory synthesis has many applications including helping patients with vocal cord disorders communicate better <cit.>, decoding brain signals to speech waveforms <cit.>, and augmenting silent speech systems <cit.>.
However, to our knowledge, there has been little investigation into the parameter efficiency of articulatory synthesis models, which is important for applications on edge devices, where the memory and computation are limited. Smaller models may also have faster inference speed, which also opens up new possibilities for faster real-time applications. Since articulatory synthesis is mostly utilized in clinical domains, a high-speed low-footprint synthesis model is crucial for maximizing accessibility.
We utilize differentiable digital signal processing (DDSP) <cit.> to achieve efficient articulatory synthesis while maintaining high-fidelity audio generation. A DDSP model consists of a neural network encoder and traditional digital signal processing (DSP) modules. The encoder transforms input features, such as F0, loudness, and spectral features, into control signals like filter coefficients and harmonic amplitudes. DSP modules then generate audio from these control signals. The differentiability of DSP modules allows for end-to-end training, hence the term “Differentiable DSP". DDSP models are light-weight since they utilize the strong inductive bias of known signal-processing modules to explicitly model the speech generation process <cit.>. Consequently, DDSP models only need to learn control signals rather than raw waveforms, delegating synthesis to DSP modules.
In this paper, we introduce a novel articulatory synthesis approach using DDSP with the Harmonic-plus-Noise (H+N) model to convert articulatory features (EMA, F0, loudness) into speech. To our knowledge, this is the first application of DDSP to articulatory synthesis. Our model achieves a word error rate (WER) of 6.67% and a mean opinion score (MOS) of 3.74, improving the state-of-the-art (SOTA) result by 1.63% and 0.16, respectively. It is also 4.9x faster during CPU inference. Additionally, a 0.4M parameter version of our model matches the quality and intelligibility of the previous 9M-parameter SOTA. Codes and audio samples are available at https://tinyurl.com/ddsp-vocodertinyurl.com/ddsp-vocoder.
§ RELATED WORK
§.§ Articulatory Synthesis
Articulatory synthesis with traditional digital signal processing methods has long been investigated <cit.>. In the deep learning era, there are generally three methods for articulatory synthesis: (1) predicting the acoustic parameters first and then using traditional signal-processing-based vocoders, e.g. WORLD <cit.>, to synthesize speech <cit.>; (2) predicting intermediate spectrograms and then utilizing GAN-based vocoders <cit.> to convert spectrograms to speech signals <cit.>; (3) directly synthesizing speech from articulatory features with HiFi-CAR <cit.>. Among them, <cit.> is the SOTA model in terms of synthesis intelligibility and inference speed, and <cit.> extends it to a universal articulatory vocoder. However, there is still scope for improving parameter efficiency and synthesis quality.
§.§ Differentiable Digital Signal Processing
There are two main architectures of DDSP synthesizers: (1) the source-filter model <cit.>, and (2) the Harmonic-plus-Noise (H+N) model <cit.>. Since H+N models are strictly more expressive than source-filter models <cit.>, we investigate the H+N model in this paper. The H+N model divides speech into two components: harmonics, which represent the periodic part of speech produced by vocal cord vibrations; and noise, which models the aperiodic component of speech produced by airflow in the vocal tract. DDSP has wide-spread applications in music generation <cit.>, timbre transfer<cit.>, singing voice synthesis<cit.>, and speech synthesis <cit.>.
§ METHODS
Following <cit.>, our proposed model mainly consists of two parts: an encoder and a DSP generator. The overall model architecture can be found in Figure <ref>. Note that F0 and loudness are pre-computed from the corresponding utterance.
§.§ Encoder
The encoder architecture is shown in Figure <ref>. Inspired by <cit.>, we use a dilated convolution network as the encoder. The input to the encoder is F0, loudness, and EMA, all sampled at f_model = 200Hz. The input features are first concatenated along the channel dimension, then processed by 4 dilated convolution stacks, while keeping the same time steps. In each stack there are 5 ResBlocks <cit.> with dilations [1, 2, 4, 8, 16] respectively. The output is fed to a loudness conditioning FiLM <cit.> layer, which takes in loudness as the condition and generates the affine transformation parameters to modulate the output features of dilated convolution stacks. FiLM helps to balance the amplitudes of harmonics and filtered noise, which will be mentioned in section <ref>.
The loudness FiLM output is processed by two multilayer perceptrons (MLPs). The first MLP produces a 2(K+1)-dimensional output: the first K+1 dimensions control sine waves, and the other K+1 control cosine waves. Each (K+1)-dimensional control signal comprises a global amplitude and a K-dimensional time-varying harmonic distribution: a[n] and c[n] for the sine waves, and ā[n] and c̄[n] for the cosine waves. Here, K represents the total number of harmonics used. Harmonic components exceeding the Nyquist frequency in c[n] and c̄[n] are set to -1e20 to avoid aliasing and then normalized via softmax. The other MLP output is the time-varying filter frequency response H[n], an M-dimensional vector per time point n. To stabilize training, an exponential sigmoid nonlinearity, exp-sigmoid(x) = 2.0 ·sigmoid(x)^log 10 + 10^-7, is applied to a[n] and H[n], as per <cit.>.
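A minimal PyTorch sketch of these output nonlinearities is shown below. The tensor shapes (batch, time, K) and the function names are assumptions made for illustration; only the exp-sigmoid, the Nyquist masking with -1e20, and the softmax normalization follow the text.

```python
# Sketch of the encoder output nonlinearities described above (PyTorch).
# Tensor layout (batch, time, channels) is an assumption for illustration.
import math
import torch

def exp_sigmoid(x):
    # 2.0 * sigmoid(x)**log(10) + 1e-7, applied to amplitudes and H[n]
    return 2.0 * torch.sigmoid(x) ** math.log(10.0) + 1e-7

def normalise_harmonics(raw_c, f0, sample_rate=16000, K=50):
    """Mask harmonics above Nyquist, then softmax over the harmonic axis.

    raw_c: (B, T, K) unnormalised harmonic logits; f0: (B, T, 1) in Hz.
    """
    harmonic_freqs = f0 * torch.arange(1, K + 1, device=f0.device)   # (B, T, K)
    mask = harmonic_freqs >= sample_rate / 2
    raw_c = raw_c.masked_fill(mask, -1e20)
    return torch.softmax(raw_c, dim=-1)
```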
§.§ Digital Signal Processing (DSP) Generator
For the DSP modules, we iterated on the DSP generators of <cit.>. The outputs of the encoder from section <ref> control two DSP modules: a harmonic oscillator and a filtered noise generator. The harmonic oscillator generates the voiced components of speech while the filtered noise generator synthesizes the unvoiced components. The outputs of these two modules are added to get the raw synthesized speech, which will be filtered by the post convolution (post conv) layer to generate the final synthesized speech.
§.§.§ Harmonic Oscillator
Unlike the harmonic oscillator in <cit.>, where only sine harmonic waves are used, we propose to use both the corresponding sine harmonics and cosine harmonics to better approximate a Fourier series for higher expressivity. The harmonic oscillator generates a sum of sine and cosine waves whose frequencies are multiples of F0. The k-th harmonic x_k is controlled by the global amplitudes a[n] and ā[n], the harmonic weights c_k[n] and c̄_k[n], and a frequency contour f_k[n], as shown below in equation <ref>.
x_k[n] = a[n] c_k[n] sin(ϕ_k[n]) + ā[n] c̄_k[n] cos(ϕ_k[n])
ϕ_k[n] = 2π∑_m=0^n f_k[m] is the instantaneous phase and f_k[n] = kF_0[n] is the k-th integer multiple of F0. The harmonic distribution c[n] (or c̄[n] for the cosine waves) output from the encoder has K values (c_1[n], c_2[n], ..., c_K[n])^T for each time point n, which satisfy
∑_k=1^K c_k[n] = 1 and c_k[n] ≥ 0
Thus, the harmonic oscillator output can be calculated as
x[n] = ∑_k=1^K ( a[n] c_k[n] sin(ϕ_k[n]) + ā[n] c̄_k[n] cos(ϕ_k[n]) )
Since a[n], a[n], F_0[n], c[n], c[n] are all sampled at f_model = 200Hz, we need to first upsample them back to the sampling frequency f_s = 16kHz of the speech signals before calculating the above equations, i.e. upsample by a factor of u = 80. Here u is also the frame size. We upsample using the traditional signal-processing method by first inserting u-1 zeros between every two samples and then convolving with a Hann window of size 2u+1.
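The harmonic branch can be sketched as follows in PyTorch. This is an illustrative reconstruction rather than the released implementation: the zero-insertion plus Hann-window upsampler and the sine/cosine sum follow the description above, while the window normalization and tensor layout are assumptions of this sketch.

```python
# Sketch of the harmonic oscillator with control-signal upsampling (PyTorch).
# Controls at f_model = 200 Hz are upsampled by u = 80 to f_s = 16 kHz.
import torch
import torch.nn.functional as F

def upsample(ctrl, u=80):
    """Zero-insert then smooth with a Hann window of length 2u+1. ctrl: (B, T, C)."""
    B, T, C = ctrl.shape
    up = torch.zeros(B, T * u, C, device=ctrl.device)
    up[:, ::u, :] = ctrl
    win = torch.hann_window(2 * u + 1, periodic=False, device=ctrl.device)
    win = (win / win.sum() * u).view(1, 1, -1).repeat(C, 1, 1)   # unit-gain assumption
    x = F.conv1d(up.transpose(1, 2), win, padding=u, groups=C)
    return x.transpose(1, 2)

def harmonic_synth(f0, amp_sin, c_sin, amp_cos, c_cos, sample_rate=16000, u=80):
    """f0, amp_*: (B, T, 1); c_*: (B, T, K), all at the control rate."""
    f0, amp_sin, c_sin, amp_cos, c_cos = (
        upsample(t, u) for t in (f0, amp_sin, c_sin, amp_cos, c_cos))
    K = c_sin.shape[-1]
    harmonic_freqs = f0 * torch.arange(1, K + 1, device=f0.device)       # f_k[n] = k F0[n]
    phases = 2 * torch.pi * torch.cumsum(harmonic_freqs / sample_rate, dim=1)
    audio = (amp_sin * c_sin * torch.sin(phases)
             + amp_cos * c_cos * torch.cos(phases)).sum(dim=-1)          # (B, N)
    return audio
```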
§.§.§ Filtered Noise Generator
This module generates noise signals filtered by learned linear time-varying finite impulse response (LTV-FIR) filters. To avoid complex numbers, we treat H[n] as half of a zero-phase filter's transfer function, which is real and symmetric. We perform an inverse fast Fourier transform (FFT) to obtain zero-phase filter coefficients, shift them to form a causal, linear-phase filter, and apply a Hann window to balance time-frequency resolution, resulting in h[n], which is then multiplied by an attenuation hyperparameter γ to balance the filtered noise and harmonics. The filtered noise output is produced by convolving each h[n] with a noise signal of length u (a noise frame) and performing overlap-and-add with a hop size of u. Noise is generated from a uniform distribution between [-1, 1], and all convolutions are computed via FFT.
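A sketch of the filtered-noise branch is given below (PyTorch), following the recipe above: inverse FFT of the real half-spectrum, circular shift to a causal linear-phase filter, Hann windowing, attenuation by γ, FFT convolution with per-frame uniform noise, and overlap-add with hop u. The frame-by-frame overlap-add loop is written for clarity rather than speed, and the exact FFT sizes are assumptions of this sketch.

```python
# Sketch of the LTV-FIR filtered-noise generator (PyTorch). Frame size u = 80
# and M = 65 frequency bands follow the text.
import torch

def filtered_noise(H, u=80, attenuation=0.01):
    """H: (B, T, M) real, non-negative half-spectrum per control frame."""
    B, T, M = H.shape
    # zero-phase filter -> real, symmetric impulse response via inverse rFFT
    h = torch.fft.irfft(H.to(torch.complex64), n=2 * (M - 1), dim=-1)
    h = torch.roll(h, shifts=M - 1, dims=-1)              # causal, linear phase
    h = h * torch.hann_window(2 * (M - 1), device=H.device)
    h = attenuation * h                                   # gamma
    noise = torch.rand(B, T, u, device=H.device) * 2 - 1  # uniform in [-1, 1]
    n_fft = u + h.shape[-1] - 1                           # linear-convolution length
    frames = torch.fft.irfft(torch.fft.rfft(noise, n=n_fft)
                             * torch.fft.rfft(h, n=n_fft), n=n_fft)
    out = torch.zeros(B, T * u + h.shape[-1] - 1, device=H.device)
    for t in range(T):                                    # overlap-add with hop u
        out[:, t*u : t*u + n_fft] += frames[:, t]
    return out[:, : T * u]
```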
§.§ Post Convolution Layer
To further balance the noise and harmonics amplitudes, we introduce a post convolution (post conv) layer, which is a learnable 1D convolution layer without bias. Unlike the 1D convolution reverb module in <cit.>, which models reverberation or room acoustics, here the post conv layer acts as a filter to suppress the noise level or to compensate for the noise amplitudes depending on the previous amplitude balancing design choices. We explore this further in Section <ref>.
§.§ Loss Functions
§.§.§ Multi-Scale Spectral Loss
We use the multi-scale spectral loss as defined in <cit.>:
ℒ_MSS = ∑_i ∈ W ||S_i - Ŝ_i||_1 + α ||log S_i - logŜ_i||_1
where S and Ŝ are the magnitude spectrograms of the ground truth audio and the generated audio respectively. α is chosen to be 1 in this paper. W = [2048, 1024, 512, 256, 128, 64] is the set of FFT sizes, and the frame overlap is set to be 75%.
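The multi-scale spectral loss can be sketched as below (PyTorch). The ε inside the logarithm and the use of a mean rather than a strict L1 sum are numerical-stability assumptions added here; the FFT sizes and 75% overlap follow the text.

```python
# Sketch of the multi-scale spectral loss above (PyTorch).
import torch

FFT_SIZES = [2048, 1024, 512, 256, 128, 64]

def multi_scale_spectral_loss(pred, target, alpha=1.0, eps=1e-7):
    """pred, target: (B, N) waveforms."""
    loss = 0.0
    for n_fft in FFT_SIZES:
        hop = n_fft // 4                                  # 75% frame overlap
        window = torch.hann_window(n_fft, device=pred.device)
        S_hat = torch.stft(pred, n_fft, hop_length=hop, window=window,
                           return_complex=True).abs()
        S = torch.stft(target, n_fft, hop_length=hop, window=window,
                       return_complex=True).abs()
        # L1 terms implemented as means for scale stability (assumption)
        loss = loss + (S - S_hat).abs().mean() \
                    + alpha * (torch.log(S + eps) - torch.log(S_hat + eps)).abs().mean()
    return loss
```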
§.§.§ Multi-Resolution Adversarial Loss
As mentioned in <cit.>, training only with multi-scale spectral loss for audio often results in over-smoothed spectrogram predictions. L1 / L2 losses aim to reduce large discrepancies and capture the low-frequency components of spectrograms, averaging out rapid changes in spectral details which results in muffled-sounding audio, as shown in Figure <ref>.
To capture the finer details of spectrograms, following the work of <cit.>, we utilize multi-resolution spectrogram discriminators. We treat each input spectrogram as a one-channel image, and perform 2D strided convolution for discrimination. Note that the input spectrograms are calculated from acoustics with different parameters, such as window size, hop size, and number of points for FFT, so that the discriminators have access to spectrograms of the same utterance with multiple resolutions.
For each sub-discriminator, the adversarial loss is calculated as Least Squares GAN (LSGAN) described in <cit.>:
min_D_i ℒ_LSGAN(D_i;G) = (1/2) 𝔼_x∼ p_data(x)[(D_i(S(x)) - 1)^2] + (1/2) 𝔼_z∼ p_z(z)[(D_i(S(G(z))))^2]
min_G ℒ_LSGAN(G;D_i) = 𝔼_z∼ p_z(z)[(D_i(S(G(z))) - 1)^2]
where S is the magnitude STFT, D_i is the i-th sub-discriminator, G is the DDSP vocoder, x is the ground truth audio, and z is the input features.
The loss functions for the generator and discriminator are:
ℒ(G) = ℒ_MSS + λ/R∑_i=1^Rℒ_LSGAN(G;D_i)
ℒ(D) = 1/R∑_i=1^Rℒ_LSGAN(D_i;G)
where R is the total number of sub-discriminators, which is also the total number of resolutions, and λ controls the weight of the LSGAN loss.
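The combined objectives can be sketched as follows (PyTorch). The sub-discriminators and their STFT front-ends are left abstract, since their internals (2D strided convolutions over multi-resolution spectrograms) are described only at a high level above; treating them as callables is an assumption of this sketch.

```python
# Sketch of the combined generator / discriminator objectives (PyTorch).
# `discriminators` and `spectrogram_fns` are assumed lists of R callables.
import torch

def lsgan_losses(discriminators, spectrogram_fns, x_real, x_fake, mss_loss, lam=5.0):
    """mss_loss: precomputed multi-scale spectral loss for x_fake vs x_real."""
    R = len(discriminators)
    g_adv, d_loss = 0.0, 0.0
    for D_i, S_i in zip(discriminators, spectrogram_fns):
        real_logits = D_i(S_i(x_real))
        fake_logits = D_i(S_i(x_fake.detach()))           # detach for the D update
        d_loss = d_loss + 0.5*((real_logits - 1)**2).mean() + 0.5*(fake_logits**2).mean()
        g_adv = g_adv + ((D_i(S_i(x_fake)) - 1)**2).mean()
    return mss_loss + lam / R * g_adv, d_loss / R
```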
§ RESULTS
§.§ Datasets
§.§.§ MNGU0 EMA Dataset
We experiment with the MNGU0 EMA dataset <cit.>, comprising 75 minutes of 16 kHz male speech with 200 Hz EMA recordings. The 12-dimensional EMA features capture the x and y coordinates of jaw, upper and lower lips, and tongue (tip, blade, and dorsum) movements. F0 is extracted from the speech using CREPE <cit.> with a 5ms hop size, and loudness is computed as the maximum absolute amplitude of each 5ms speech frame <cit.>. Consequently, EMA, F0, and loudness are all sampled at 200 Hz. During training, we randomly crop 1-second segments of aligned EMA, F0, and loudness for input, and their corresponding waveforms as targets. The dataset is split into 1129 training utterances (71.3 minutes) and 60 test samples (3.7 minutes), with 60 training utterances used for validation.
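The loudness feature described above reduces to a simple frame-wise maximum, sketched below; the reshape-based framing into non-overlapping 5 ms frames is an assumption consistent with the stated 200 Hz feature rate.

```python
# Sketch of the frame-level loudness feature: the maximum absolute amplitude of
# each 5 ms frame of 16 kHz audio (frame = hop = 80 samples, assumed non-overlapping).
import numpy as np

def frame_loudness(audio, sample_rate=16000, frame_ms=5.0):
    frame = int(sample_rate * frame_ms / 1000)            # 80 samples
    n_frames = len(audio) // frame
    frames = audio[: n_frames * frame].reshape(n_frames, frame)
    return np.abs(frames).max(axis=1)                     # (n_frames,) at 200 Hz
```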
§.§.§ LJ Speech Pseudo-Labelled Dataset
To evaluate our model with a substantial amount of training data, we use the LJ Speech dataset <cit.>, containing 24 hours of 22050 Hz female speech. As it lacks EMA data, we generate pseudo EMA labels using the acoustic-to-articulatory inversion (AAI) model from <cit.>. EMA features are linearly interpolated from 50 Hz to 200 Hz, and waveforms are resampled to 16 kHz. Other features follow the MNGU0 settings. We use a 90%/5%/5% train/validation/test split, corresponding to 21.5, 1.25, and 1.25 hours, respectively.
§.§ Experimental Setup
For our DDSP model, we choose the kernel size of ResBlocks to be 3 with 2 convolution layers inside, the hidden dimension of the dilated convolution stacks to be 256, with K = 50 harmonics, M = 65 frequency bands, and attenuation γ = 0.01. The loudness FiLM module consists of three 1D convolution layers with kernel size 3, and the post convolution layer has a kernel size of 1025. This results in a total of 9.0M parameters. The multi-resolution discriminator uses R = 6 with FFT sizes [2048, 1024, 512, 256, 128, 64] and 75% frame overlap. Weight normalization <cit.> is applied to all sub-discriminators.
We use the Adam optimizer with β_1 = 0.9, β_2 = 0.999, and distinct learning rates: 3 × 10^-4 for the generator and 3 × 10^-6 for the discriminator. The batch size is 32, with λ = 5. For MNGU0 dataset, there are 6400 training epochs. The learning rates are multiplied by 0.3 at epoch milestones [2400, 4800]. For LJ Speech dataset, the total number of epochs is 1280, with epoch milestones = [480, 960]. The HiFi-CAR baseline (13.5M) <cit.> is trained with its original configuration and adapted to our input features.
§.§ Metrics
We use both objective and subjective metrics to evaluate model performance. Objective metrics include: (1) word error rate (WER), which is calculated on the transcription of the synthesized test set speech using the SOTA speech recognition model Whisper-Large <cit.>; A lower WER indicates higher intelligibility of the synthesized speech; (2) Multi-resolution STFT (M-STFT) <cit.><cit.>, which measures the difference between the spectrograms of the ground truth and the prediction across multiple resolutions; (3) perceptual evaluation of speech quality (PESQ) <cit.>, a widely adopted automated method for assessing voice quality; and (4) UTMOS <cit.>, a machine-evaluated mean opinion score (MOS). We use the conventional 5-scale MOS test as the subjective metric. Each model receives 200 unique ratings.
§.§ Synthesis Quality
The subjective and objective quality metrics for DDSP and HiFi-CAR are listed in Table <ref>. For MNGU0, our DDSP model is consistently better than the baseline in every metric, with a reduction in WER of 1.63% and a significant improvement in MOS (+0.16). This indicates that our DDSP model has a strong and appropriate inductive bias for the underlying periodic structure of speech signals and is capable of generating high-fidelity speech. For LJ Speech, with substantially more training data, our model is still better in all metrics. This also indicates that our model is effectively compatible with the inverted EMA from the AAI model.
§.§ Parameter Efficiency
To evaluate parameter efficiency, we retrain the models using the same configurations as in section <ref>, but with varying parameter counts (nparams): [9M, 4.5M, 2.3M, 1.1M, 0.6M, 0.4M]. For each nparams, we train the model with three random seeds (324, 928, 1024) and evaluate the combined synthesized test set speech. To maintain the receptive field size, we reduce nparams by decreasing the hidden dimension. The results are shown in Figure <ref>. Our DDSP model shows no significant performance decline as model size decreases, outperforming HiFi-CAR at all nparams configurations. In contrast, HiFi-CAR's performance drops drastically below 1.1M nparams. Notably, our smallest model (0.4M) is comparable to HiFi-CAR (9M), highlighting our DDSP model's high parameter efficiency and potential for edge device applications.
§.§ Inference Speed
We test the inference speed of DDSP, HiFi-CAR (13M), and HiFi-CAR (9M) on an Apple M1 CPU by varying the input length N from 0.5s to 10s, with 0.5s intervals. For each N, we average the inference time over 50 utterances of the same length N, normalizing by N. Table <ref> reports the model sizes and the mean and standard deviation of the average inference time for 1s of input. Our model is 1.5x smaller and 4.9x faster than HiFi-CAR (13M). Notably, HiFi-CAR (9M) is still 3.9x slower than DDSP despite having the same model size. Furthermore, as shown in Figure <ref>, HiFi-CAR (9M) consistently underperforms compared to DDSP in both WER and UTMOS. This demonstrates that our DDSP model is fast and lightweight without sacrificing synthesis quality.
§.§ Ablation Study
We perform an ablation study on the GAN loss, additional cosine harmonics, post conv layer, and loudness FiLM using the MNGU0 dataset, with all models trained under the same configuration as the original model. The results, summarized in Table <ref>, show that removing any module decreases performance, except for the GAN loss. Without the GAN loss, similarity metrics like PESQ and M-STFT improve, as the model is trained solely on reconstruction loss (ℒ_MSS in Section <ref>), leading to predictions more similar to the ground truth on average but perceptually over-smoothed, as mentioned in Section <ref> and supported by significant drops in UTMOS (-1.739) and MOS (-0.64). The absence of additional cosine harmonics causes substantial performance drops across all metrics, underscoring their importance in speech modeling. The post conv layer is essential for balancing noise and harmonics amplitudes. Omitting the loudness FiLM module results in a small yet noticeable performance decline.
§ DISCUSSION
§.§ Speech Decomposition
Since the synthesized speech is the sum of harmonics and filtered noise, we can decompose the output and visualize each component via spectrograms (Figure <ref>). The harmonics spectrogram shows distinct frequency bands and higher energy, reflecting the quasi-periodic nature of voiced sounds generated by the harmonic oscillator. In contrast, the noise spectrogram displays higher frequency components with a dispersed energy distribution along the frequency axis, modeling the unvoiced, noise-like sounds such as fricatives and consonants.
§.§ Noise / Harmonics Balance
One challenge in achieving a high-quality vocoder using our DDSP model is balancing the amplitudes of harmonics and noise. We employ three methods to address this issue: the attenuation hyperparameter γ, the post conv layer, and the loudness FiLM module. Among these methods, the attenuation and the post conv layer are particularly crucial. If there is no attenuation at all, i.e. γ=1, the model will only learn the filtered noise as shown in Figure <ref>. Although on average the energy distribution seems correct, the predicted spectrogram has lost all finer harmonic structures, while for the ground truth, there are clear and detailed harmonic stripes.
We have also analyzed the frequency responses of the learned post conv filters when trained with different levels of attenuation, as shown in Figure <ref>. The attenuation parameter γ influences the noise energy: higher γ results in greater noise amplitude. Given that harmonic energy is concentrated in the lower frequencies while noise has high energy in the higher frequency range, the post conv filter should suppress high-frequency components to balance the noise and harmonics amplitudes when γ is large. This is evidenced in Figure <ref>, where the gain |H| decreases in the high-frequency range (ω > 0.4π) as γ increases. This demonstrates that the attenuation and post conv filter together effectively balance the noise and harmonics amplitudes.
§ CONCLUSION
In this paper, we present a DDSP articulatory vocoder based on harmonic-plus-noise model. With the strong inductive bias of DDSP, we show that our model is parameter-efficient, fast, and capable of synthesizing high-quality speech from EMA, F0 and loudness. For future work, we plan to explore the multi-speaker capabilities of our DDSP vocoder.
§ ACKNOWLEDGEMENTS
This research is supported by the following grants to PI Anumanchipalli: NSF award 2106928, Google Research Scholar Award, Rose Hills Foundation and UC Noyce Foundation.
IEEEbib
|
http://arxiv.org/abs/2409.03376v1 | 20240905092756 | Journalists are most likely to receive abuse: Analysing online abuse of UK public figures across sport, politics, and journalism on Twitter | [
"Liam Burke-Moore",
"Angus R. Williams",
"Jonathan Bright"
] | cs.CY | [
"cs.CY"
] |
Article Title]Journalists are most likely to receive abuse: Analysing online abuse of UK public figures across sport, politics, and journalism on Twitter
[1]Liam [email protected]
These authors contributed equally to this work.
1]Angus R. [email protected]
These authors contributed equally to this work.
1]Jonathan [email protected]
[1]Public Policy, The Alan Turing Institute, London, NW1 2DB, United Kingdom
Engaging with online social media platforms is an important part of life as a public figure in modern society, enabling connection with broad audiences and providing a platform for spreading ideas. However, public figures are often disproportionate recipients of hate and abuse on these platforms, degrading public discourse. While significant research on abuse received by groups such as politicians and journalists exists, little has been done to understand the differences in the dynamics of abuse across different groups of public figures, systematically and at scale. To address this, we present analysis of a novel dataset of 45.5M tweets targeted at 4,602 UK public figures across 3 domains (members of parliament, footballers, journalists), labelled using fine-tuned transformer-based language models. We find that MPs receive more abuse in absolute terms, but that journalists are most likely to receive abuse after controlling for other factors. We show that abuse is unevenly distributed in all groups, with a small number of individuals receiving the majority of abuse, and that for some groups, abuse is more temporally uneven, being driven by specific events, particularly for footballers. We also find that a more prominent online presence and being male are indicative of higher levels of abuse across all 3 domains.
[
[
September 9, 2024
=====================
§ INTRODUCTION
It is by now commonplace for public figures (well known individuals such as celebrities, sports stars, journalists, politicians) to maintain an active presence on social media platforms such as Instagram, Facebook and X (Twitter). Such platforms allow them to build up a personal `brand' and connection with an audience through the generation of a kind of informal, parasocial intimacy <cit.>; a brand that can then be leveraged for a wide variety of different professional purposes, from securing sponsorship deals <cit.> to influencing opinions <cit.> or distributing ideas <cit.>. Many researchers have highlighted the potential positive consequences of the fact that access to public status is now available through social media channels, in particular by allowing voices into the public sphere who previously would have remained marginalised <cit.>.
However, whilst these positive consequences are important, much of the research on high profile figures on social media has also focused on a negative aspect: the frequency with which these figures receive hateful and abusive messages, facilitated through the peer-to-peer nature of social media. Such messages can be personally distressing for the individuals involved <cit.>, and may lead them to limiting their online presence in order to avoid receiving such messages <cit.>, which in turn will degrade the quality of public discourse and limit the ability for the public to engage with, for example, elected officials. High levels of abuse can also cause more widespread consequences for those witnessing the abuse, who may conclude that public debate is a hostile, angry environment that they should stay away from <cit.>.
Though research on abuse of public figures is widespread, it is also largely siloed, with individual efforts looking at (for example) abuse levels towards politicians, or journalists, or a certain type of celebrity. As methodology can differ across these individual studies, direct comparisons can be complex. We therefore know little about the extent to which dynamics of abuse are similar across different domains, and across demographics within domains. We attempt to advance the debate about abuse towards online public figures by providing a measurement of the extent of abuse faced by three key United Kingdom (UK) based groups (members of parliament, journalists and footballers), and a comparison of the dynamics of abuse between them.
In this paper, we present a cross-group analysis of a novel dataset of 45.5M tweets targeted at 4,602 UK public figures across 3 domains, collected between 2021 and 2023, applying our previous work fine-tuning pre-trained transformer models to classify abusive tweets. We find that MPs as a group receive more abuse than footballers or journalists, but show through statistical modelling that abuse may be more of an intrinsic feature of being a journalist than other domains. We also find that a more prominent online presence and being male are factors predictive of higher levels of abuse, and that abuse is unevenly distributed both individually and temporally across all groups.
§ RELATED WORK
This paper focuses on the issue of abuse towards public figures on social media. We define abuse as content that threatens, insults, derogates, mocks or belittles an individual or their identity <cit.>. This is a broad reaching definition that includes but is not limited to more severe forms of abuse that may constitute `hate speech' (hate focused on protected characteristics <cit.>), also accounting for generic toxicity.
Whilst abuse has been a constant feature of public life, a wide variety of research has found that it appears to be especially prevalent in online discussions. Some of the earliest work on the internet remarked on the apparent prevalence of `flaming' <cit.>, with research continuing to this day about aggressive and uncivil online comments in forums and discussion sections of websites <cit.>. A considerable further body of research has tried to explain why online environments seem to be so much more hostile than offline ones <cit.>, highlighting factors such as anonymity <cit.>, reduced empathy, and group dynamics where witnessing (and receiving) abuse makes one more likely to create it <cit.>.
Early research on online abuse regarded it largely as an interpersonal phenomenon, exchanged amongst nascent internet communities that were at the time a niche pursuit <cit.>. However as the internet itself grew into the major means of online societal communication, a further focus developed in terms of the fact that `public figures' also started to become targets. In this paper, we use the term `public figures' to define those whose profession compels them to seek recognition amongst the public and who therefore become known to a potentially wide section of the community. They communicate with an audience of individuals who are unknown to them. Public figures include celebrities, sports stars and famous politicians who might be known to an audience of millions. However they may also include local journalists or members of parliament who might have much lower name recognition but nevertheless have a public face. As Xu et al. <cit.> show, public figures now exist on all scales, from `traditional' celebrities known to millions to `micro-celebrities'. Such figures enter into `parasocial' relationships with members of the wider public: a type of relationship which feels intimate and personal on some level to audience members despite the fact that the public figure themselves is unlikely to have met all of (or even a small fraction of) the audience members with whom they have such a relationship <cit.>.
While public figures have always been a fact of social life, the way they work has changed dramatically with the rise of the internet, and especially social media. These platforms, which provide the possibility of such figures communicating in a relatively direct way with their audiences (circumventing to an extent the filtering mechanisms of the press), have revolutionised what it means to be a public figure, giving rise to a wide variety of new ways of forming parasocial relationships <cit.>. A presence on at least some social media platforms is by now arguably a requirement of many public facing professions (such as politics and journalism), or at least a highly important way of advancing professional life or monetising status through (for example) endorsements <cit.>.
While access to these platforms arguably represents a boon in many senses, the levels of abuse public figures receive on them is also a source of increasing concern. In this study, we address three different categories of public figure in particular. We look at professional sports stars (in particular football players), journalists, and politicians. This selection does not, of course, address all potential types of public figures (for example, musicians and actors are obvious absences from the list). However, it does offer important variety, with three very different professions with different audiences to communicate with, all united only by the fact that they engage with people across public facing social media. Each of these categories has attracted a considerable amount of research on levels of abuse, which we will review in turn below. What is missing, which we provide here, are studies that address multiple categories in the same framework, and thus provide a more general view on the dynamics of abuse.
§.§ Football
Abuse towards professional athletes (and other professionals such as referees) has long been part of the sporting industry <cit.>, and in the past has prompted multiple campaigns launched by the industry itself to attempt to stamp it out <cit.>. While there was some perception that these campaigns had been partially successful, for many the rise of social media as a forum for the self-presentation of footballers brought about a kind of regression, with abuse once again rife and closely associated with the issue of racism <cit.>. Empirical work has consistently documented relatively high levels of abuse towards footballers <cit.>. However, the vast majority of quantitative studies have, to our knowledge, been directed towards male sports stars, despite clear press attention to abuse towards female athletes as well <cit.>. Concerns about abuse (especially racist abuse) directed towards footballers centre, of course, on the mental health and wellbeing of the players themselves, with players having even contemplated suicide after being on the receiving end of it <cit.> and family members also feeling an enormous amount of strain <cit.>. Furthermore, due to the highly mediatised nature of the phenomenon, there are concerns that witnessing online abuse may serve to normalise it in wider society.
§.§ Politics
Volumes of abuse towards professional politicians have also been a subject of considerable research interest. The world of professional politics is of course a combative one, with threat and harassment unfortunately a part of life for professional politicians of all types <cit.>. The online arena seems to be an extension of this trend, with a wide variety of work documenting the high levels of abuse and vitriol directed towards elected officials <cit.>, and some also arguing that the problem is increasing over time <cit.>.
One of the key debates in this area is whether male and female politicians experience different volumes and types of abuse, with some studies not identifying gender differences <cit.>, whilst others have problematised this type of finding <cit.> or shown mixed results <cit.>. The recent resignations of high profile female politicians, citing patterns of abuse received, seem highly significant in this regard <cit.>. A similar but smaller body of literature has also sought to highlight religious and racial differences <cit.>. Another key debate is the reasons for abuse <cit.>, with some arguing that periodic news attention to different topics is a key driver and others pointing to differences in the profile of the individuals in question <cit.>. These issues are critical not only in terms of effects on politicians themselves and wider public discourse, but also in terms of concerns about impacting on the representativeness of democracy as a whole.
§.§ Journalism
The impact of online abuse on the journalistic profession has also attracted a considerable amount of scrutiny (including noting the clear crossovers with the previous two domains in terms of journalists covering both sports and politics). Findings are in a sense similar to the other two domains: first, a wide variety of scholarship has claimed the problem is serious and widespread <cit.>, as well as being connected to real world acts of violence, a subject of particular concern as many journalists lack the security protections provided to politicians (though this is not to say that politicians do not also frequently experience violent attacks). Diverging patterns of abuse between men and women have also been a frequent area of study <cit.>, though unlike in the political arena, greater levels of abuse directed towards women have been a clear and consistent finding <cit.>.
Personal visibility (as opposed to newsroom visibility) is suggested as another factor of the abuse of journalists <cit.>, and many studies have also linked it to broad societal factors such as the rise of populism <cit.>. Temporal factors have also been considered, with work describing online abuse towards journalists as both a chronic problem and one that is also likely to be boosted by individual events <cit.>. Some of the feared consequences of abuse for journalists are also somewhat similar to the political domain: that the distress and psychological burden created by abuse patterns will drive people out of the public domain <cit.>. However, authors have also noted that this abuse may create a more general perception in the eyes of journalists themselves that news audiences are irrational and low quality <cit.>.
One of the most significant and yet under-explored things emerging from all of this work is that, despite the great differences in the profession and style of work these different public figures are employed in, many similar patterns and claims about online abuse have emerged. However, what the field as yet lacks is comparative work looking at different professions to tie these observations together (one notable exception is <cit.>, though this looks only at female journalists and politicians). We hence lack knowledge about what features of abuse are unique to a given professional context and what are more general features of online public life as a whole.
In this article, we seek to remedy this deficit, by measuring levels of abuse across our three different domains of interest. We structure our enquiry in terms of three key questions:
Distribution of abuse: do all domains experience similar levels of abuse, and is this abuse distributed amongst people within the domain in similar ways?
Temporal patterns: is abuse a generally stable features of domains, or does it fluctuate and respond to events?
Factors linked to abuse: how does abuse vary with the activity of public figures? After accounting for other potential factors linked to abuse, to what extent is abuse an intrinsic feature of a domain?
Our aim is to describe the dynamics of online hate and abuse towards public figures in a way that is not entirely dependent on the domain or field of study.
§ METHODS
§.§ Platform Selection
In this paper we make use of data collected from the social media platform Twitter/ X (we refer to “Twitter” exclusively given data collection took place primarily before the rebrand to “X” ). As a platform focused on broadcasting messages, with no requirement for reciprocal following before messages are exchanged, Twitter has long been a forum where public figures have maintained an active presence and broadcast messages to an audience. It hence represents an ideal choice of venue for our study. It is worth noting that other platforms are also being used by public figures, with the footballers in our study also highly present on Instagram, for example. However, there is no other platform that is widely used by all the groups in our study.
§.§ Target Group Selection
We select 3 domains of public figures to study: professional football players, members of parliament, and journalists. We delineate public figures into male and female groups (this study is limited to binary gender, given the low prevalence of individuals identifying as non-binary or other genders within these groups). This gives us 6 individual groups across 3 domains and 2 genders.
We source lists of public figures from official sources where available, and filter lists down to include only those with an official Twitter account. Full details are visible in <ref>. The final, total number of individual public figures, present on Twitter, across all domains and demographics, included in this study is visible in <ref>. Immediately apparent is the minority of female public figures across all domains. This is mirrored by follower counts, where the average male public figure has a higher follower count than the average female public figure, and the majority of the top 10 most followed individuals in each domain-demographic pair are men. This could be seen as a feature of the domains themselves (and of society in general): while the popularity of women's football grows, men's football receives more widespread engagement <cit.>, the proportion of female MPs was below 20% until 2006 <cit.>, and journalism has been shown to be an industry dominated by men <cit.>.
§.§ Data Collection
Central to Twitter activity are primarily text-based posts called “tweets”. Tweets can be replied to, creating chains or “threads”, or can be reposted as “Quote tweets” or “Retweets” of other tweets. We are interested in the most direct form of communication targeted at public figures, and as such we only consider what we term “audience contact” (AC) tweets: direct replies to a tweet from a public figure account, or top-level tweets (that aren't replies to other tweets) containing a mention of a public figure account. We present aggregate statistics based on these tweets and the labels assigned.
We use the Twitter API Filtered Stream endpoint and Full Archive Search endpoint (provided by the Twitter Academic API, no longer available) to collect all tweets that either contain a mention of a public figure account (including direct and indirect replies) or are quote tweets or retweets of tweets created by a public figure account. Data collection endpoint usage and time windows differed across domains and demographics, as outlined in <ref>, due to the staggered nature of data collection for this project. All data collection ended on the 14th of March 2023, when API access was suspended. We filter the tweets collected to retain only tweets matching the audience contact conditions, that are written in English, and contain text content aside from mentions and URLs. On collection, we extracted lists of public figure accounts mentioned within the tweet text, and created a clean version of the tweet text, replacing mentions of users with domain-specific tokens, and URLs with a URL token. The remaining “valid” audience contact tweets (visible in <ref>) for each domain-demographic pair are used for the modelling and analysis presented in this paper.
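To illustrate the filtering and cleaning steps concretely, a minimal sketch is given below; the token strings, the handle set, and the field names are illustrative assumptions rather than the exact ones used in our pipeline.

```python
import re

# Hypothetical set of public figure handles for one domain (placeholders).
PUBLIC_FIGURE_HANDLES = {"@mp_example", "@footballer_example"}

MENTION_RE = re.compile(r"@\w{1,15}")   # Twitter handles are at most 15 characters
URL_RE = re.compile(r"https?://\S+")

def is_audience_contact(tweet: dict) -> bool:
    """Keep direct replies to a public figure, or top-level tweets (not replies)
    that mention a public figure account."""
    mentions = {m.lower() for m in MENTION_RE.findall(tweet["text"])}
    is_reply = tweet.get("in_reply_to_user") is not None
    replies_to_pf = tweet.get("in_reply_to_user") in PUBLIC_FIGURE_HANDLES
    mentions_pf = bool(mentions & PUBLIC_FIGURE_HANDLES)
    return replies_to_pf or (not is_reply and mentions_pf)

def clean_text(text: str, domain_token: str = "[MP]") -> str:
    """Replace public-figure mentions with a domain-specific token, other mentions
    with a generic token, and URLs with a URL token."""
    def repl(match):
        return domain_token if match.group(0).lower() in PUBLIC_FIGURE_HANDLES else "[USER]"
    text = MENTION_RE.sub(repl, text)
    text = URL_RE.sub("[URL]", text)
    return text.strip()

example = {"text": "@mp_example this is a reply https://t.co/x", "in_reply_to_user": "@mp_example"}
if is_audience_contact(example):
    print(clean_text(example["text"]))   # -> "[MP] this is a reply [URL]"
```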
We additionally collect all tweets authored by public figure accounts, again using the Twitter API Filtered Stream endpoint and Full Archive Search endpoint, in order to enable analysis around the activity of public figures and the relationship with abuse. We retain all of these tweets, regardless of language and content, and we do not label these tweets as abusive / not abusive.
§.§ Abuse Classification
We fine-tune pre-trained transformer-based language models for binary abuse classification for each public figure target group, using the same annotated data and annotation processes outlined in Vidgen et al.<cit.> and Williams et al.<cit.>.
All tweets are annotated with one of four labels: “abusive”, “critical”, “neutral”, or “positive”. Definitions of each class, and guidelines for annotators, are visible in <ref>. Here we define abuse as broad-reaching, including but not limited to hate speech, pertaining to any content that threatens, insults, derogates, mocks or belittles an individual or their identity <cit.>. We collapse multi-class labels to binary labels (abuse / not abuse) for the models in this study. At least 7,000 tweets are annotated for each group: 1,000 for the validation split, 3,000 for the test split, and 3,000 for the training split (more in the case of male footballers).
Initial rounds of annotation (the male and female footballers datasets) were done by crowdworkers, but, due to high levels of disagreement between crowdworkers, and therefore more expert annotation required, a small group of high-quality annotators was used to label the remaining datasets (MPs, Journalists).
For male footballers, we use a version of deBERTa-v3 <cit.> fine-tuned on 9,500 tweets targeted at male footballers. This model is trained using an active learning process, starting with a sample of 3,000 tweets, and using diversity and uncertainty sampling to select 2,000 additional training entries to annotate over 3 rounds, plus one round of 500 adversarial entries, as outlined in Vidgen et al.<cit.>. Tweets were annotated by 3,375 crowdworkers. This model outperformed (F1 score on the male footballers test split) a model trained on the base 3,000 male footballers training dataset, and an ensemble of two models trained on male footballers and female footballers data.
For female footballers, we use an ensemble of two fine-tuned versions of deBERTa-v3 <cit.>, one on tweets targeted at male footballers, the other on tweets targeted at female footballers. Both models were fine-tuned on 3,000 tweets, as outlined in Williams et al.<cit.>. Tweets were annotated by 3,513 crowdworkers. Output probabilities from the two models are averaged during inference to make classifications. This ensemble outperformed (F1 score on the female footballers test split) the model trained on the base 3,000 female footballers training dataset.
For MPs, we use an ensemble of two fine-tuned versions of deBERTa-v3 <cit.>, one on tweets targeted at male MPs, the other on tweets targeted at female MPs. Both models were fine-tuned on 3,000 tweets, as outlined in Williams et al.<cit.>. Tweets were annotated by 23 high quality annotators. Output probabilities from the two models are averaged during inference to make classifications. This ensemble outperformed models trained on the base training datasets for both male and female MPs.
For journalists, we use a version of deBERTa-v3 <cit.> fine-tuned on 3,000 tweets targeted at journalists, following the same processes outlined in Williams et al.<cit.>. Tweets were annotated by 23 high quality annotators.
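As a concrete illustration of the probability-averaging ensembles described above, the sketch below shows how inference could be run with the HuggingFace transformers library; the checkpoint paths are hypothetical placeholders for our fine-tuned models, and the assumption that label index 1 corresponds to the abusive class may differ from the actual label mapping.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical paths to the two fine-tuned DeBERTa-v3 checkpoints (placeholders).
CHECKPOINTS = ["./deberta_male_mps", "./deberta_female_mps"]

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINTS[0])
models = [AutoModelForSequenceClassification.from_pretrained(p).eval() for p in CHECKPOINTS]

@torch.no_grad()
def classify(texts):
    """Average class probabilities from both models; label 1 = abusive (assumed)."""
    enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
    probs = torch.stack([torch.softmax(m(**enc).logits, dim=-1) for m in models]).mean(dim=0)
    return (probs[:, 1] >= 0.5).tolist()   # binary abuse / not-abuse decision

print(classify(["example audience contact tweet"]))
```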
Model evaluation results are visible in <ref>.
§ RESULTS
We structure our analysis in terms of our three research questions.
§.§ Distribution of abuse
We begin our analysis by assessing whether all domains experience similar levels of abuse, and look at whether this abuse is distributed amongst individual public figures within a domain in similar ways.
We present total tweet counts in <ref> and weekly average tweet counts in <ref> (total counts are presented for completeness, weekly averages are used for analysis due to variable time windows between domains). We see that MPs have the highest weekly average rate of abuse, with 11.2% of tweets received by male MPs being classified as abusive, and 9.1% for female MPs. We also see that male footballers receive a higher average proportion of abuse containing identity-based slurs than any other group, at 11%. A total of 32 journalists received no tweets at all during the data collection window.
We present cumulative distributions of total abuse counts by individual public figures in <ref>. We see that, across all domains and demographics, a small number of individuals receive a large proportion of the total abuse. For example, 50% of abuse targeted at male MPs is directed towards just 2.1% of all of those individuals. This observation holds for other domains and demographics, although differences can be partly explained by the differing number of public figures in each group.
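The concentration statistics reported here (the share of individuals accounting for 50% of abuse) can be computed directly from per-individual abuse counts; a minimal sketch, assuming a simple array of counts, is shown below.

```python
import numpy as np

def share_receiving_half(abuse_counts):
    """Fraction of individuals who together receive 50% of all abusive tweets,
    counting from the most-abused individual downwards."""
    counts = np.sort(np.asarray(abuse_counts))[::-1]      # descending order
    cumulative = np.cumsum(counts) / counts.sum()
    k = np.searchsorted(cumulative, 0.5) + 1               # individuals needed to reach 50%
    return k / len(counts)

# Toy example: one heavily abused individual dominates.
print(share_receiving_half([500, 40, 30, 20, 10, 5, 3, 2, 1, 1]))   # -> 0.1
```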
The proportion of public figures that received any abuse across the entire data collection window is visible in <ref>, showing that almost all MPs (99.5% for both men and women) received at least some abuse, as did over 50% of footballers and journalists. The different lengths of the data collection windows do affect this, meaning we might see higher coverage for journalists if their data collection window were more similar to that of MPs and footballers; however, the fact that a smaller share of footballers (71.5% of men, 53.4% of women, the lowest across both demographics) received any abuse, despite the data collection window being the longest of the 3 domains, does emphasise that fewer footballers receive any abuse at all compared with the other domains.
One might assume that the most abused public figures are also the most popular public figures. We use the Spearman rank correlation between the quantity of abuse a public figure receives and the number of followers they have, visible in <ref>.
We see a mild to strong rank correlation between abuse and followers when considering the total population of each group of public figures, highest for footballers (0.83 for men, 0.71 for women). However, when we limit the analysis to only the top 50 most abused public figures from each group, this changes - there is still a positive rank correlation for all groups, but the correlation is significantly weaker for footballers and male journalists. This indicates that some of the most abused individuals within these groups have lower follower counts. This points to more circumstantial abuse centered around specific events, and more work is needed to understand this phenomenon.
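These rank correlations are straightforward to reproduce from per-individual totals; the brief sketch below uses scipy, with toy data standing in for the real counts.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
followers = rng.lognormal(mean=10, sigma=2, size=500)          # toy follower counts
abuse = rng.poisson(lam=followers / followers.mean() * 5)      # abuse loosely tied to followers

rho_all, _ = spearmanr(abuse, followers)

# Restrict to the 50 most-abused individuals, as in the text.
top50 = np.argsort(abuse)[-50:]
rho_top50, _ = spearmanr(abuse[top50], followers[top50])

print(f"all: {rho_all:.2f}, top 50 most abused: {rho_top50:.2f}")
```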
§.§ Temporal patterns
It is well known that abuse of public figures is not a stable phenomenon temporally <cit.>, with abuse rising and falling in relation to real world events. Here, we explore the extent to which abuse levels fluctuate over time, and how the dynamics of temporal fluctuation differ by domain and demographic.
We present cumulative distributions of total abuse counts by day in <ref>. We see that abuse tends to be unevenly distributed over time for all groups. 50% of abuse targeted at footballers takes place over 9.1% of days (on average across both men and women). Abuse towards MPs and journalists is more evenly distributed, with 50% of abuse taking place over 26.1% and 29.6% of days respectively on average. Within each domain, abuse of female public figures tends to be more uneven than for their male counterparts. Female footballers receive 50% of their abuse over just 0.5% of days compared to 17.6% of days for male footballers. This observation holds to a lesser extent for MPs and journalists, where the difference between female and male public figures is 5.1% and 6.0% respectively.
Investigating temporal fluctuation at an individual level, we present percentages of public figures who receive at least 1/3 of their total abuse in a single day in <ref>. This shows the presence of public figures in all groups who receive a significant proportion of the abuse they receive throughout the whole study in a single day. More MPs than any other group receive over 1/3 of their abuse in a single day (15.6% on average), followed by journalists (9.4% on average), and then footballers (7.2% on average). Across all domains, more men receive a significant proportion of their abuse in a single day than women.
Finally, in <ref> we plot histograms of weekly percentages of public figures receiving any abuse within a given week. We see that the majority of MPs receive abuse on a weekly basis, with over 50% of MPs receiving at least one abusive tweet in all but 2 (3.2%) weeks during data collection. No other group receives such regular and widespread abuse - at no point do over 50% of footballers or journalists in our dataset receive at least one abusive tweet in a single week, with the average being 15.3% for male footballers, 3.47% for female footballers, 28.0% for male journalists, and 17.2% for female journalists, compared to 67.0% for male MPs and 65.0% for female MPs.
Taken together, we make several observations about these results. Firstly, MPs as a group receive the most regular and widespread abuse, but also individually see many concentrated periods of abuse. This suggests that whilst abuse may be a stable feature of being an MP, there is also a large element of this abuse which is more unstable and sporadic. These abusive tweets could be in relation to specific events, for example, in response to a controversial tweet or comment made by an MP.
Footballers, on the other hand, receive the least regular and widespread abuse as a group. In most weeks a small proportion of players receive any abuse, which tends to be less evenly distributed over time. This suggests that abuse towards footballers is more sporadic and event driven. However, compared to MPs, a smaller minority of players receive a significant proportion of their abuse in a single day. It may be the case that whilst footballers receive less regular abuse than MPs as a group, many individual footballers receive more regular abuse spread out over specific days, such as match days. One can imagine this as a series of regular peaks in abuse between troughs of low abuse levels, resulting in an uneven distribution but lacking individual peaks that account for significant proportions of abuse.
The temporal nature of abuse towards journalists is somewhere between that of MPs and footballers. In most weeks a reasonable proportion of journalists receive some abuse, which is more evenly distributed over time than for MPs or footballers. The proportion of journalists who receive at least a third of their abuse in a single day (9.35% on average across men and women) is less than MPs (15.6% on average) but greater than footballers (7.15% on average).
On an individual level, fewer women receive over 1/3 of their abuse in a single day than men (across all domains), suggesting a more even distribution for abuse of female public figures. However, at a group level, abuse towards women is in fact less evenly distributed over time than for men. This requires more analysis to understand, but may be due to the presence of events that see female public figures abused as a group, to a greater extent than male public figures.
§.§ Factors linked to abuse
We tackle our third and final research question regarding which factors are linked to abuse. We firstly examine the relationship between the activity of public figures and the abuse they receive, and then attempt to quantify how intrinsic abuse is to a domain or gender through statistical modelling.
§.§.§ Public figure activity
We count the number of “active statuses” written by a public figure (the number of statuses posted by a public figure account that receive at least one reply) as a measure of their activity.
Looking at weekly average activity in <ref>, journalists appear as the most active group by a significant factor, with MPs more active than footballers on average by a smaller margin. This holds when accounting for the number of public figures studied, given the larger number of journalists - the average journalist writes 7.2 (9.7 for men, 4.6 for women) tweets per week, compared to 3.3 (3.1 for men, 3.6 for women) for the average MP and 0.6 (0.8 for men, 0.5 for women) for the average footballer.
<ref> visualises the relationship between activity and abuse for public figures across different domains. In terms of ratios of abuse to activity, only considering public figures receiving at least 1 abusive tweet per week on average, we see 28 male and 5 female MPs receiving at least 100 abusive tweets for every tweet they write per week, compared to 9 and 4 journalists, and 3 and 0 footballers. The average ratio is also higher for MPs, at 32.1 abusive tweets per status for men and 22.7 for women, versus 17.2 and 1.9 for footballers and 1.4 and 1.6 for journalists.
There appears to be some positive relationship between activity and abuse. Correlation coefficients between abuse and activity are mild but positive, with the average Pearson correlation coefficients at 0.32, strongest for male footballers (0.64). However, across all groups, the most active individual is never the most abused individual, but does still consistently rank in the top 8% of public figures. This suggests that, while some level of activity may be a pre-requisite to receiving higher levels of abuse, it doesn't necessarily identify the most abused individuals.
§.§.§ Intrinsic nature of abuse
In <ref> and <ref> we use absolute levels of abuse to compare across domains, demographics, and time. This is important as it affords us a better understanding of how public figures actually experience abuse online. However, this approach does not allow us to assess the extent to which abuse is intrinsic to a domain or demographic – that is, whether abuse is a direct result of belonging to a particular domain or demographic, or whether it can be explained by other factors irrespective of domain or demographic (such as the prominence or activity of a public figure).
To examine the intrinsic nature of abuse we fit a series of count models. Each observation in these models is a public figure and the dependent variable is the number of abusive tweets received by that individual. We first fit models to each domain separately to examine gender differences within each domain (referred to as Model 1 for Journalists, Model 2 for MPs, and Model 3 for Footballers). In these models, our main independent variable of interest is gender, where female gender is used as the reference category. We subsequently fit a model to all the data to assess differences between domains (referred to as Model 4). Here, we are interested in the independent variable domain, where the footballer domain is used as the reference category. We also include in Model 4 an exposure offset term to account for the differing time periods in which data were collected for each domain. This was set to the log of the total number of weeks of data collection for each domain. In all models we include as control variables the total number of audience contact tweets received by the public figure (Count total), the number of people that follow them (Count followers), and the number of tweets written by the public figure that received at least one reply (Count replied to). To assist with model convergence, and to aid interpretation of results, these variables were incremented by 1 and log2 transformed.
For each model, we use a likelihood ratio test (LRT) to determine whether a Poisson or a negative binomial regression is most appropriate. As the Poisson model is nested within the negative binomial model, the LRT is a suitable test to compare the fit of these models. All tests indicated the negative binomial provided a significantly better fit. As the negative binomial model estimates a dispersion parameter, which is held constant in the Poisson model, this suggests our data is over-dispersed. We considered using zero-inflated versions of these models (i.e. zero-inflated Poisson and zero-inflated negative binomial), but the lack of a strong theoretical reason for the existence of excess zeros deemed these inappropriate. Negative binomial models were run using the R package <cit.>, and approximate 95% confidence intervals were obtained by likelihood profiling. We report incident rate ratios (IRRs) by exponentiating the raw model coefficients. For a log2 transformed count variable, the resulting IRR represents the multiplicative change in incident rate when that count is doubled.
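Our models were fit in R; the sketch below shows an equivalent specification in Python with statsmodels (a substitution, not the code we used), illustrating the exposure term for unequal collection windows, the log2-transformed count controls, and the conversion of coefficients to incident rate ratios. The toy data and variable names are placeholders, and the dispersion parameter is fixed here rather than estimated as in the reported negative binomial fits.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "abuse_count": rng.negative_binomial(1, 0.05, n),
    "domain": rng.choice(["footballer", "mp", "journalist"], n),
    "count_total": rng.integers(1, 10_000, n),
    "count_followers": rng.integers(1, 1_000_000, n),
    "count_replied_to": rng.integers(0, 500, n),
    "weeks_collected": rng.choice([52, 76, 104], n),
})

# log2-transform the count controls (incremented by 1), as in the reported models.
for col in ["count_total", "count_followers", "count_replied_to"]:
    df["log2_" + col] = np.log2(df[col] + 1)

model = smf.glm(
    "abuse_count ~ C(domain, Treatment('footballer')) + log2_count_total"
    " + log2_count_followers + log2_count_replied_to",
    data=df,
    family=sm.families.NegativeBinomial(alpha=1.0),   # dispersion fixed; estimated in the R fit
    exposure=df["weeks_collected"],                   # log(exposure) offset is applied internally
).fit()

irrs = np.exp(model.params)   # incident rate ratios
print(irrs)
```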
Models were checked for multicollinearity by calculating variance inflation factors (VIFs), with VIFs for all variables in all models no greater than 5. Outliers were also checked for using Cook’s distance (CD). A cutoff of 4 / N (where N is the number of observations) is typically used to identify potential outliers. Data points with a CD greater than or equal to this cutoff were removed, models refit, and the resulting coefficients checked for changes in direction and significance. Two changes were observed after refitting models. In Model 3, the control variable Count replied to was no longer deemed significant, whilst in Model 4 the raw estimate for Count followers changed from -0.002 to 0.003 (representing a change in the corresponding IRR from 0.998 to 1.003)[We note that 216 potential outliers were observed for Model 4. After carefully inspecting these data points and concluding they were genuine (and not as a result of data errors) we decided against excluding them from our analysis.]. Taken together, these diagnostics suggest no reason to doubt the reported results. We present results for all models in Table <ref>.
Models 1 – 3 look at how the levels of abuse received by public figures varies by gender. In all three models male public figures experience more abuse than female public figures. On average, and with all other variables held constant, male journalists receive a 22% greater incidence of abusive tweets than their female counterparts, whilst male MPs receive a 26% greater incidence. Male footballers receive an incidence that is almost three times greater than for female footballers. The 95% confidence intervals do not contain 1, and so these estimates can be considered statistically significant.
Model 4 looks at how abuse levels differ between domains. The results show that on average, and with all other variables held constant, journalists receive abuse at a rate which is almost 12 times that for footballers. The rate at which MPs receive abuse is almost 6 times that for footballers. Again, since the confidence intervals do not contain 1, these estimates can be considered statistically significant at the 95% level.
§ DISCUSSION AND CONCLUSION
Overall, we find that MPs receive higher absolute levels of abuse at a more constant rate than footballers or journalists, although abuse appears to be a greater intrinsic feature of being a journalist than it is for an MP or footballer. Across all domains, the majority of abuse is directed at a very small number of individuals, but the majority of all public figures studied across each group received at least 1 abusive tweet during the data collection window. Abuse levels are more evenly distributed over time than across individuals, but fluctuate to a much greater degree for footballers than for MPs or journalists. We find that abuse levels tend to be higher for public figures who are more active or have more followers, but the most abused individuals are rarely the most active or most followed. Across all domains, an average male public figure receives more abuse than an average female public figure, but also has more followers and receives more tweets in total - controlling for these factors, statistical models still indicate that being a man is predictive of higher abuse levels. We also see that abuse targeted at female public figures fluctuates with time to a greater extent than for male public figures, and is therefore likely to be driven by specific events.
§.§ Limitations
In this study we focus on a broad-reaching definition of abuse, and count the number of tweets that meet that definition (as classified by a machine learning model). This does not account for the potential range of severity of abuse, and as such all abuse counts equally towards the figures presented, lacking the nuance that some public figures may receive higher levels of more severe abuse than others. In the same vein, abuse may affect individuals in different ways, and measuring counts of tweets does not encapsulate the impact of abuse on individuals. As such, our results are best interpreted as counts of abusive language, with further work needed to understand how the severity and impact of abuse differs across groups of public figures.
Our focus on data collection of public figures enables relatively efficient data collection, and our filtering of tweets to the “audience contact” category maximises the chance of any given tweet being directly addressed at a public figure. However, we do identify cases of, for example, abusive replies to public figures that in fact show support. Equally, abuse doesn't solely exist within this category on social media - many public figures receive abuse via direct messages, which are not accessible to us, and abuse may also take place without mention of or reply to the subject of the abuse. As noted earlier, this study is limited to a single platform, and as such conclusions can only be drawn within the scope of Twitter/X.
We delineate public figures into binary gender categories. As noted, we do not include other possible gender identities due to low prevalence within the groups studied. A public figure from a minority gender identity is likely to receive abuse targeted surrounding their identity, and abuse targeted at these individuals is likely to follow different dynamics.
This study was conducted sequentially, with data collection, annotation, and modelling occurring at different times for different domain-demographic groups. Data collection via archive search may not include tweets that would have been obtained during streaming. Data annotation uses the same schema across all groups, but, as discussed, annotation is done by 2 different groups (the expert annotator group remained the same), which may introduce uncertainty. The first model trained in this study utilised active learning, an effective but resource-intensive approach which could not be replicated for later models.
The variable time windows in data collection in this study represent variation between the groups studied. Arguably the real world events that occur within these time windows skew results, but one would struggle to be able to measure a “baseline” level of abuse through real world data collection. We take measures to account for variable times windows at multiple points in the analysis.
Our annotation schema (<ref>) differentiates between criticism and abuse. Much of the abuse received by public figures could be seen as overly-profane or toxic forms of criticism, highlighting the fuzzy line between the two categories. While annotators were given a strict schema to follow, we note that levels of annotator disagreement were higher in cases where the final majority label was either “critical” or “abusive”.
§.§ Future Work
As discussed, expanding beyond binary gender would be a logical extension to this work. In addition, further work could be done to expand beyond a single demographic attribute (gender) to better understand the dynamics of abuse across a range of identities.
Content analysis (with more nuance than whether content is abusive or not according to a machine learning model) of tweets targeted at public figures would provide a greater understanding of the themes contained within abuse, and could be combined with incorporation of data around real world events to provide more granular explanations of specific peaks in abuse.
Further research questions building on this work include developing a better understanding of the perpetrators of abuse, and how the affiliations and beliefs of perpetrators of abuse varies between groups of public figures. Extending beyond binary abuse classification to be able to measure the severity of abuse, and discern different forms of abuse (e.g. misogyny, racism), would open up avenues to explore the types of abuse received by public figures from different domains and demographics.
§ DECLARATIONS
§.§ Availability of data and materials
The dataset of tweets analysed during the current study is not publicly available due to restrictions on sharing data collected from the Twitter API, as outlined in the API terms and conditions. Anonymised aggregate statistics are available from the corresponding author on reasonable request.
§.§ Competing interests
The authors declare that they have no competing interests.
§.§ Funding
This work was partially supported by the Ecosystem Leadership Award under the EPSRC Grant EPX03870X1 and The Alan Turing Institute.
§.§ Authors' contributions
LBM, AW, and JB designed the study. LBM and AW performed data collection and analysed the results. LBM, AW, and JB wrote the first draft. All authors read and approved the final manuscript.
§.§ Acknowledgements
We thank Eirini Koutsouroupa for invaluable project management support, and Yi-Ling Chung, Ivan Debono, Pica Johansson, Hannah Kirk, Francesca Stevens, and Bertie Vidgen for their roles in previous work that enabled this study.
§ APPENDICES
§ DATA COLLECTION
§.§ Sourcing public figures
For footballers, we focus on the top UK leagues of the men's and women's game, namely the Men's Premier League, consisting of 20 teams at any given point, and the Women's Super League, with 12 teams at any given point. The exact number of players in each league is fuzzy, but we estimate the total number of eligible individuals to be around 1,000 male footballers and 300 female footballers. Within politics, we focus on UK Members of Parliament (MPs), the most prominent public figures in UK politics as elected officials voted for by the public, constituting 650 individuals. Unlike footballers and MPs, journalists are not a finite group of individuals, adding complexity to the selection of individuals and data collection. As such, we use a list of UK journalists on Twitter <cit.> (now no longer maintained), selecting the top 3,000 journalists in terms of Twitter follower numbers. We chose a larger number of journalists than MPs or footballers to account for the fact that there is no finite number, and to capture a range of abuse, given that abuse is not a phenomenon limited to the most popular journalists.
§.§ Collecting public figure information
We scrape the official websites of the Premier League <cit.> and Super League <cit.>, alongside the websites of individual clubs, to gather lists of eligible players. This information includes complete information on name, club, and nationality, and incomplete information on position, date of birth, height, and Twitter account (gender is implicit in the data collection process). For MPs, we collate a complete list using several sources <cit.> <cit.> <cit.> (the latter is no longer available). These provide complete information on name, gender, party, constituency, and incomplete information on Twitter account. The list of journalists <cit.> provides complete information on name, publisher, publication, job role, and Twitter account, and no information on gender.
We estimate gender for journalists using a hybrid approach, first obtaining a gender and probability from the first names of the 3,000 journalists studied using genderize <cit.> (based on census data). In cases where this approach returned a gender with a probability less than 100% (912 entries, 30%), we prompt GPT-4 <cit.> with the full name and publication of the journalist, asking to indicate if it is aware of the journalist and what their gender is. In cases where the two approaches disagreed or GPT-4 did not indicate awareness of the individual (313 entries, 10%), we manually labelled gender using available resources online.
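A simplified sketch of this hybrid lookup is given below; the genderize.io call reflects its public API, while the GPT-4 step is left as a stub since the exact prompt and client code are not reproduced here, and unresolved cases fall through to manual labelling.

```python
import requests

def genderize(first_name: str):
    """Query genderize.io for a first name; returns (gender, probability)."""
    resp = requests.get("https://api.genderize.io", params={"name": first_name}, timeout=10)
    data = resp.json()
    return data.get("gender"), data.get("probability", 0.0)

def ask_gpt4(full_name: str, publication: str):
    """Stub for the GPT-4 lookup: prompt the model with the journalist's name and
    publication, parse the indicated gender, return None if unrecognised."""
    return None   # replaced by an actual chat-completion call in the real pipeline

def resolve_gender(first_name, full_name, publication):
    gender, prob = genderize(first_name)
    if gender is not None and prob == 1.0:
        return gender                      # unambiguous census-based match
    llm_gender = ask_gpt4(full_name, publication)
    if llm_gender is not None and llm_gender == gender:
        return gender
    return None                            # disagreement or unknown: manual labelling
```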
§.§ Social media presence
Where records of Twitter profiles were incomplete (footballers and MPs), we use a combination of the Twitter API and desk research to assign Twitter profiles where they exist (some MPs and footballers are not present on Twitter). During data collection, changes in user name or deletions of account were recorded, as were any changes in affiliation (e.g. footballers transferring to other clubs, MPs losing their seat). Numbers of accounts and any analysis presented is inclusive of all accounts used for data collection through the entire data collection process.
§ ANNOTATOR INSTRUCTIONS
|
http://arxiv.org/abs/2409.03070v1 | 20240904204422 | Hausdorff measure and decay rate of Riesz capacity | ["Qiuling Fan", "Richard S. Laugesen"] | math.CA | ["math.CA", "31B15"] |
§ ABSTRACT
The decay rate of Riesz capacity as the exponent increases to the dimension of the set is shown to yield Hausdorff measure. The result applies to strongly rectifiable sets, and so in particular to submanifolds of Euclidean space. For strictly self-similar fractals, a one-sided decay estimate is found. Along the way, a purely measure theoretic proof is given for subadditivity of the reciprocal of Riesz energy.
§ INTRODUCTION AND RESULTS
The Riesz kernel 1/|x-y|^p allows for energy interactions more general than the electrostatic Coulomb repulsion. The p-capacity generated by the kernel can be used to measure the size of a compact set in . How does this Riesz capacity relate to other notions of size of the set, in particular to its measure? Since the set can have dimension smaller than the ambient dimension n, one intends here the appropriate Hausdorff measure.
We show for a class of sets including smooth submanifolds that as p increases to the dimension of the set, the Hausdorff measure arises from the decay rate of Riesz p-capacity. More precisely, it is the slope of capacity raised to the power p. To state the result precisely, we need some definitions.
[Riesz energy and capacity]
Consider a nonempty compact subset E of ℝ^n, n ≥ 1. The Riesz p-energy of E is
V_p(E) = min_μ∫_E ∫_E |x-y|^-p dμ(x) dμ(y) , p > 0 ,
where the minimum is taken over all probability measures μ on E. For the empty set, define V_p(∅)=+∞. The Riesz p-capacity is
Cap_p(E) = V_p(E)^-1/p .
Notice the energy is positive or +∞ and so the capacity is positive or zero. The energy minimum in (<ref>) is known to be attained by some “equilibrium” measure μ, by an application of the Helly selection principle, in other words, by weak-* compactness of the collection of probability measures on the set <cit.>. The equilibrium measure is unique if the energy is finite, although we will not need that fact.
The capacity is positive if and only if the energy is finite. The classical Newtonian energy V_n-2(E) and Newtonian capacity Cap_n-2(E) arise when n ≥ 3 and p=n-2. Capacity can be regarded as measuring the size of the set, since capacity increases as the set gets larger and it scales linearly under dilation, with Cap_p(sE)=s Cap_p(E) when s>0.
Write ℋ^d for d-dimensional Hausdorff measure, normalized to agree with Lebesgue measure when applied to subsets of ℝ^d. Hausdorff dimension is denoted “dim”.
§.§ Results for rectifiable sets
A definition by Calef and Hardin <cit.> calls a set strongly rectifiable if it can be covered by almost-flat pieces except for an omitted set of lower dimension.
[Strongly rectifiable sets]
Let 1 ≤ d ≤ n be integers. Call a set E⊂ℝ^n strongly d-rectifiable if for each ϵ>0 there exists a finite collection of compact subsets K_1,…,K_m ⊂ℝ^d and corresponding bi-Lipschitz functions φ_i: K_i→ E such that:
∙ each φ_i has bi-Lipschitz constant less than 1+ϵ,
∙ ℋ^d(E_i ∩ E_j)=0 when i≠ j,
∙ dim(F) < d, where F=E\∪_i=1^m E_i is the portion of E not covered by the sets E_i,
and where E_i = φ_i(K_i) is compact.
Such an E necessarily has Hausdorff dimension ≤ d, and has finite measure:
ℋ^d(E)<∞ .
Examples of strongly d-rectifiable sets include smooth d-dimensional submanifolds, and also finite unions of such submanifolds provided they intersect in sets of zero d-dimensional measure <cit.>. The strongly rectifiable concept is most useful when d < n, because when d=n, every compact set E is strongly n-rectifiable simply by taking K_1 to equal E itself.
Now we state the main result, obtaining Hausdorff measure from p-capacity.
Let 1 ≤ d ≤ n be positive integers. If E⊂ℝ^n is compact and strongly d-rectifiable then
lim_p↗ d Cap_p(E)^p/(d-p) = ℋ^d(E)/|𝕊^d-1| .
Here |𝕊^d-1| = 2π^d/2 / Γ(d/2) is the surface area of the unit sphere in ℝ^d.
Cap_d(E)=0.
Since Cap_d(E)=0 by the corollary, the left side of (<ref>) can be interpreted as a limit of difference quotients for p-capacity to the power p. Hence (<ref>) says that the Hausdorff measure is determined by the slope of p ↦ Cap_p(E)^p at p=d. See <ref> for a graphical illustration.
The corollary is known already in greater generality because every d-dimensional set with ℋ^d(E)<∞ has Cap_d(E)=0, by <cit.>.
The full-dimensional case of the theorem (d=n) says for compact E ⊂ℝ^n that
lim_p↗ n Cap_p(E)^p/(n-p) = ℋ^n(E)/|𝕊^n-1| .
That case of the theorem was proved by Clark and Laugesen <cit.>.
The proof of <ref> for dimensions d≤ n, is in <ref> and <ref>. It builds on the case d=n but requires several new ingredients, as follows. Recall that when d<n, the set E decomposes into finitely many pieces. Those pieces can intersect, which we handle in <ref> by removing a local neighborhood of the intersection points in order to eliminate energy interactions between those multiple pieces in the neighborhood. For the other direction of the proof, in <ref>, we globally discard interaction energies between different pieces of the set and estimate only the self-interaction energies. This apparently wasteful technique turns out to suffice because by subadditivity of the reciprocal energy (for which we give a purely measure theoretic proof), one may recombine the estimates and show that the self-interaction terms dominate in the limit.
An alternative proof of <ref> could be constructed using results of Calef and Hardin <cit.>. Specifically, one could use the inequalities in the proof of their Theorem 1.3, along with the result of that theorem that normalized Hausdorff measure is the weak-* limit of p-equilibrium measure as p ↗ d, to establish the formula in our equation (<ref>). Such a proof would depend on the renormalized potential theory at p=d that they develop in their paper, and hence would be more involved than the direct approach in this paper.
The unit sphere 𝕊^d is strongly d-rectifiable in ℝ^n whenever d<n, since it is smooth and d-dimensional. Its Riesz p-capacity is
(^d) =
2 ( Γ(d-p/2) Γ(d/2)/Γ((d-p)/2) Γ(d))^ 1/p , 0<p<d,
by Borodachov, Hardin and Saff <cit.> or see Landkof <cit.>; here the expression in <cit.> has been manipulated using the duplication formula <cit.> to arrive at formula (<ref>). For example, with d=2, formula (<ref>) gives
Cap_p(𝕊^2) = 2(1-p/2)^1/p , 0<p<2,
and so Cap_p(𝕊^2)^p/(2-p) = 2^p-1 → 2 = |𝕊^2|/|𝕊^1| as p↗ 2, which confirms <ref> in this case. A similar calculation works for all d≥ 1, as illustrated in <ref>.
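As a sketch of that general-d verification (not in the original argument, and writing Cap_p for the Riesz p-capacity), the limit follows from formula (<ref>) together with the identity (d-p)Γ((d-p)/2) = 2Γ((d-p)/2 + 1) → 2 and the duplication formula:

```latex
\lim_{p \nearrow d} \frac{\mathrm{Cap}_p(\mathbb{S}^d)^p}{d-p}
  = \lim_{p \nearrow d} \frac{2^p\,\Gamma(d-\tfrac{p}{2})\,\Gamma(\tfrac{d}{2})}
                             {(d-p)\,\Gamma(\tfrac{d-p}{2})\,\Gamma(d)}
  = \frac{2^{d-1}\,\Gamma(\tfrac{d}{2})^2}{\Gamma(d)}
  = \frac{\sqrt{\pi}\,\Gamma(\tfrac{d}{2})}{\Gamma(\tfrac{d+1}{2})}
  = \frac{|\mathbb{S}^d|}{|\mathbb{S}^{d-1}|}
  = \frac{\mathcal{H}^d(\mathbb{S}^d)}{|\mathbb{S}^{d-1}|},
% using the duplication formula \Gamma(d) = 2^{d-1}\Gamma(d/2)\Gamma((d+1)/2)/\sqrt{\pi}
% and |\mathbb{S}^d| = 2\pi^{(d+1)/2}/\Gamma((d+1)/2),
```

which agrees with the right side of <ref> for E = 𝕊^d.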
<ref> plots the capacity of the sphere as a function of p for the first few values of d. <ref> then illustrates the limit in <ref> by plotting capacity raised to the power p and showing the slope at p=d.
§.§ Results for Ahlfors upper regular sets, including fractals
Next we consider sets of real dimension d, which need not be an integer. After introducing the needed notions of density, we will relate the second order density to the limit of Riesz capacity as p ↗ d.
[Densities at dimension d]
Let d > 0. The first-order density at dimension d of a finite Borel measure μ on ^n is
ρ_d(μ,x) = lim_r↘0μ(^n(x,r))/r^d , x ∈^n ,
assuming the limit exists and is finite. The second-order density is
σ_d(μ, x)
= lim_p↗ d (d-p) ∫_0^1 μ(^n(x,r)) r^-p-1 dr , x ∈^n ,
again assuming the limit exists and is finite. The second-order upper density is
σ_d(μ, x)
= lim sup_p↗ d (d-p) ∫_0^1 μ(^n(x,r)) r^-p-1 dr.
Excellent references for densities and Hausdorff measure are the books by Falconer <cit.> and Zähle <cit.>. The second-order density can be expressed equivalently as
σ_d(μ, x)
= lim_η↘ 01/|logη|∫_η^1 μ(^n(x,r))/r^d dr/r ,
although we will not need that formulation. The proof of this equivalence by Hinz <cit.> can be found in Calef <cit.>, all building on earlier work by Zähle <cit.>.
A helpful example is that Hausdorff measure on a strongly d-rectifiable set has constant first order density:
If μ is Hausdorff measure ℋ^d restricted to a strongly d-rectifiable set E, then the first-order density of μ equals |𝔹^d|, the volume of the unit ball in ℝ^d, at ℋ^d-almost every x∈ E.
First order densities are stronger than second order, in the following sense.
Let x ∈^n. If ρ_d(μ,x) exists then so does σ_d(μ,x), and the two numbers are equal.
The next definition controls the rate of growth of Hausdorff measure near x.
[Ahlfors upper d-regular set]
Let d>0 and n ≥ 1. A set A ⊂ℝ^n is said to be upper Ahlfors d-regular if a constant C>0 exists such that
ℋ^d(𝔹^n(x,r)∩ A) ≤ C r^d
for all x ∈ A and r ∈ (0, diam A].
The next theorem gets a lower bound on the decay of Riesz capacity as p ↗ d. It is proved in <ref>.
Let d>0 and n ≥ 1. If E ⊂ℝ^n is compact and upper Ahlfors d-regular with positive ℋ^d-measure then
lim inf_p↗ d Cap_p(E)^p/(d-p) ≥ ℋ^d(E)^2 / ( d ∫_E σ_d(ℋ^d|_E,x) dℋ^d(x) ) .
One would like to prove a reverse inequality on the lim sup, for some suitable class of sets, thus getting equality in the limit. Our attempts have not been successful.
If the Hausdorff measure ℋ^d|_E in <ref> has second-order density that is constant ℋ^d-a.e., denoted σ_d(E), then
lim inf_p↗ d Cap_p(E)^p/(d-p) ≥ ℋ^d(E)/(d σ_d(E)).
Suppose E is a smooth submanifold of ℝ^n with positive integer dimension d, or more generally suppose E is a strongly d-rectifiable set. The second-order density of Hausdorff measure restricted to E has the constant value |𝔹^d|, the volume of the unit ball in ℝ^d, as is easily seen by approximating the submanifold locally with its tangent space. In the strongly rectifiable case, this formula follows from <ref> and <ref>. Thus for these examples, the right side of <ref> equals ℋ^d(E)/|𝕊^d-1|, which matches the right side of <ref>.
§.§ Application to fractals
The right side of <ref> can exceed the “strongly rectifiable value” ^d(E)/ |^d-1| that appears on the right side of <ref>, as we proceed to show for certain fractal sets.
A compact set A ⊂^n is called a strictly self-similar fractal if
A = ∪ _i=1^N φ_i(A)
where φ_i(x) = L_i U_i x + b_i for some L_i ∈ (0,1), unitary matrix U_i, and offset b_i ∈^n, and the sets {φ_i(A )}_i=1^N are disjoint. Strictly self-similar fractals possess several useful properties:
(i) the Hausdorff dimension d>0 of A is determined by ∑_i=1^N L_i^d = 1 and the d-Hausdorff measure of A is positive and finite by <cit.>,
(ii) A is Ahlfors d-regular by <cit.>,
(iii) the second-order density σ_d(^d|_A, x) is positive, finite and constant ^d-a.e. by <cit.>.
This constant second order density value is denoted σ_d(A).
Due to these properties, <ref> immediately implies that:
If A is a strictly self-similar fractal with dimension d then
lim inf_p↗ d Cap_p(A)^p/(d-p) ≥ ℋ^d(A)/(d σ_d(A)).
Incidentally, <ref> applies also to self-similar sets in the sense of Zähle <cit.> and to self-conformal sets <cit.>, since they too are known to be upper Ahlfors regular and have constant second-order density.
The middle-thirds Cantor set is a strictly self-similar fractal, as one verifies by choosing L_1=L_2=1/3, U_1=U_2=1, b_1=0, b_2=2/3. This set A has dimension d=(log 2)/(log 3) with ^d(A)>0 and second order density
σ_d(A) = 2^d (0.62344…) ≃ 0.9654
(see <cit.>, noting that the definition there of second order density is 2^-d times our definition). Hence for the Cantor set, the denominator on the right side of <ref> is d σ_d(A) ≃ 0.6091. Meanwhile, the denominator on the right side of <ref> is 2π^d/2/Γ(d/2) ≃ 1.0113, which is larger. Hence the result of <ref> for strongly rectifiable sets fails for some strictly self-similar fractal sets.
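The numerical values quoted here are easy to reproduce; a short check (using the second-order density constant 0.62344 from the cited computation) could look like the following sketch.

```python
import math

d = math.log(2) / math.log(3)            # Hausdorff dimension of the middle-thirds Cantor set
sigma = 2**d * 0.62344                   # second-order density constant from the literature

print(f"d              = {d:.4f}")
print(f"sigma_d(A)     = {sigma:.4f}")   # ~0.9654
print(f"d * sigma_d(A) = {d * sigma:.4f}")   # ~0.6091, denominator in the fractal corollary
print(f"|S^(d-1)| = 2*pi^(d/2)/Gamma(d/2) = {2 * math.pi**(d/2) / math.gamma(d/2):.4f}")  # ~1.0113
```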
Does equality hold in <ref>? We raise:
If A is a strictly self-similar fractal with dimension d then
lim_p↗ d Cap_p(A)^p/(d-p) = ℋ^d(A)/(d σ_d(A)) .
In support of the conjecture, note that as p ↗ d, the weak-* limit of p-equilibrium measure on the fractal set A equals normalized Hausdorff measure, by Calef <cit.>. Perhaps surprisingly, that proof follows quite different lines from the corresponding work of Calef and Hardin <cit.> for strongly rectifiable sets. The fractal arguments for convergence of the equilibrium measure do not seem to provide tools that might help prove the limit of capacity in <ref>. Nonetheless, the research of those two authors has helped inspire the current paper.
§.§ Remarks
§.§.§ Riesz capacity
Our definition (<ref>) of Riesz capacity follows Hayman and Kennedy <cit.> in taking the p-th root of the energy, whereas other authors such as Landkof <cit.> do not. Another difference is that Landkof uses n-p instead of p as the Riesz exponent. The definition in (<ref>) seems the most natural choice for three reasons: it makes capacity a decreasing function of p, it recovers logarithmic capacity as p ↘ 0 (at least for nice sets), and it permits a natural extension to p<0. For these results see Clark and Laugesen <cit.>.
§.§.§ Variational capacity
The variational capacity min_u ∫_ℝ^n |∇ u|^q dx of a set E, where u ≥ 1 on E and u → 0 at infinity, has been studied by many authors. When q=2, it agrees with the Newtonian capacity up to a constant factor. For other values of q, the variational capacity does not seem to be directly connected with Riesz capacity.
§ PRELIMINARIES ON LIPSCHITZ MAPS
A Lipschitz mapping with constant λ increases the d-dimensional Hausdorff measure by a factor of at most λ^d, as one sees immediately from the definitions.
Let 1 ≤ d ≤ n and λ>0. If f: ℝ^d →ℝ^n is a λ-Lipschitz map then ℋ^d(f(A)) ≤ λ^d ℋ^d(A) for every ℋ^d-measurable set A⊂ℝ^d.
The next lemma estimates a Riesz potential by “straightening out” the set with a bi-Lipschitz map.
Let 0≤ p<d ≤ n, where d and n are integers. If φ:K→ℝ^n is a bi-Lipschitz map with bi-Lipschitz constant λ≥ 1, where K⊂ℝ^d is compact and φ(K)=E, then
∫_E ∩ 𝔹^n(y,r) 1/|x-y|^p dℋ^d(x) ≤ λ^2d r^d-p |𝕊^d-1|/(d-p), y∈ E,
for all r>0. In particular, when p=0 one has
ℋ^d(E∩𝔹^n(y,r)) ≤ λ^2d r^d |𝕊^d-1|/d , y ∈ E .
Fix r>0 and y ∈ E. By a translation of K, we may assume φ(0)=y. Given x ∈ E, write x' = φ^-1(x) for its preimage. First we estimate the integrand, using that
|x-y|^p ≥ (1/λ^p) |φ^{-1}(x) - φ^{-1}(y)|^p
by the lower Lipschitz bound. Next, the upper Lipschitz bound says that φ stretches each direction by at most λ and so it increases d-dimensional volumes by at most λ^d. Hence by the integrand estimate and a change of variable,
∫_{E ∩ 𝔹^n(y,r)} 1/|x-y|^p dℋ^d(x)
≤ ∫_{E ∩ 𝔹^n(y,r)} λ^p/|φ^{-1}(x)-φ^{-1}(y)|^p dℋ^d(x)
≤ ∫_{φ^{-1}(E ∩ 𝔹^n(y, r))} λ^{p+d}/|x'-0|^p dx'
≤ ∫_{𝔹^d(0,λ r)} λ^{p+d}/|x'|^p dx'
= λ^{2d} r^{d-p} |𝕊^{d-1}|/(d-p),
where the third inequality uses that φ^{-1}(𝔹^n(y, r)) ⊂ 𝔹^d(0,λ r), by the lower Lipschitz condition and the fact that φ^{-1}(y)=0.
§ SUBADDITIVITY OF RECIPROCAL ENERGY
Subadditivity of the reciprocal of Riesz energy will be needed later in the paper. The standard proof relies on potential theoretic techniques <cit.>, <cit.>, perhaps because the authors aim at the better result known as strong subadditivity. Following is a short proof relying only on measure theory and Cauchy–Schwarz.
Let (X,𝔐) be a measurable space and suppose G is a nonnegative, product measurable function on X × X. Define the energy of a measurable set E ⊂ X to be
W(E) = inf_μ∫_E ∫_E G(x,y) dμ(x) dμ(y)
where the infimum is taken over all probability measures μ on E, that is, measures on 𝔐 with μ(E)=1. We do not require that the infimum in the definition be attained. If E does not support any probability measures, in particular if it is empty, then the energy equals ∞ by convention.
If E_1, E_2, E_3, … are measurable subsets of X then
1/W(∪_i E_i)≤∑_i 1/W(E_i) .
Subadditivity also holds if there are only finitely many sets E_1,…, E_m, simply by padding the sequence with empty sets, for which the reciprocal energy equals zero.
It suffices to prove the proposition with kernel G+ϵ, because that choice increases the energy of each set by ϵ, after which one may simply take ϵ→ 0 in the conclusion of the proposition. Thus we may suppose from now on that the kernel G and energy W are bounded below by a positive constant.
Write E=∪_i E_i. If W(E)=∞ then there is nothing to prove, and so we may suppose W(E)<∞. Let μ be a probability measure on E that has finite energy. To avoid double-counting in the proof below, we disjointify the sets: let E_1^*=E_1, E_2^*=E_2 ∖ E_1, E_3^*=E_3 ∖ (E_1 ∪ E_2), and so on.
The index set I(μ) = {i: μ(E_i^*)>0} is nonempty and
∑_i∈ I(μ)μ(E_i^*)=∑_i=1^∞μ(E_i^*) = μ(∪_i=1^∞ E_i^* )=μ(E) = 1.
By decomposing E into the E_i^* and discarding all cross terms, we estimate the energy of μ from below by
∫_E ∫_E G(x,y) dμ dμ ≥∑_i∈ I(μ)∫_E_i^*∫_E_i^* G(x,y) dμ dμ
≥∑_i∈ I(μ)μ(E_i^*)^2 W(E_i),
where we used that the restricted and normalized measure μ(·∩ E_i^*)/μ(E_i^*) is a probability measure on E_i^* and hence also on E_i, and hence can serve as a trial measure for the energy W(E_i). Notice that W(E_i) on the right side is finite (and positive) since the left side of the inequality is finite by assumption on μ. Next, by Cauchy–Schwarz,
∑_i∈ I(μ)μ(E_i^*)^2 W(E_i) ≥(∑_i∈ I(μ)μ(E_i^*))^ 2/∑_i∈ I(μ) W(E_i)^-1≥1/∑_i=1^∞ W(E_i)^-1.
Infimizing over the probability measures μ, we deduce that
W(E) = inf_μ∫_E ∫_E G(x,y) dμ dμ≥1/∑_i=1^∞ W(E_i)^-1 ,
which proves the proposition.
§ UPPER LIMIT OF THE ENERGY FOR <REF>
The conclusion of <ref> can be rewritten in terms of energy as
lim_{p↗ d} (d-p) V_p(E) = |𝕊^{d-1}| / ℋ^d(E).
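For orientation, the limiting value can be checked by hand in the simplest non-trivial case E=[0,1] ⊂ ℝ, where d=n=1, ℋ^1(E)=1 and |𝕊^0|=2. Using the uniform (Lebesgue) probability measure as a trial measure in the definition of the energy gives, for 0<p<1,
∫_0^1 ∫_0^1 |x-y|^{-p} dx dy = ∫_0^1 [ x^{1-p} + (1-x)^{1-p} ]/(1-p) dx = 2/((1-p)(2-p)),
so that (1-p) V_p(E) ≤ 2/(2-p) → 2 = |𝕊^0|/ℋ^1(E) as p↗ 1, in agreement with the equality above.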
The next proposition proves the upper direction of this equality. The lower direction is established in <ref>. As usual, d and n are positive integers with 1 ≤ d ≤ n.
If E ⊂ ℝ^n is compact and strongly d-rectifiable then
lim sup_{p↗ d} (d-p) V_p(E) ≤ |𝕊^{d-1}| / ℋ^d(E).
If ℋ^d(E)=0 then there is nothing to prove, and so we suppose ℋ^d(E)>0. Take 0<ϵ<ℋ^d(E). By definition, the strongly rectifiable set E decomposes as E = ( ∪_i=1^m E_i ) ∪ F, where the set of intersection points A = ∪_1≤ i< j ≤ m (E_i∩ E_j) has measure zero, i.e. ℋ^d(A) = 0.
Let A(α) = { x∈ E: dist(x,A)<α} be the subset of E within distance α of A. Notice that ∩_α>0 A(α) = A since A is closed. Thus by continuity of the measure from above, there exists α_0>0 such that the set B = A(α_0) has ℋ^d(B)<ϵ. Hence ℋ^d(E ∖ B)>0.
Let Ẽ_i = E_i \ B, that is, E_i with the “bad” part B removed. Notice that the sets Ẽ_i are disjoint and compact (due to the strict inequality in the definition of A(α_0)) and so are separated by some positive distance δ, meaning dist(Ẽ_i, Ẽ_j) ≥ δ > 0 when i≠ j.
We give two proofs of inequality (<ref>). The first works directly with the Riesz kernel. The second employs an alternative formula for the energy.
§.§ First proof
Take μ to be normalized Hausdorff measure on E ∖ B. That set consists of the Ẽ_i together with F ∖ B, but ^d(F ∖ B)=0 and so we ignore that set in the following proof. Using μ as a trial measure in the definition of the energy, we find
V_p(E) ≤ 1/^d(E\ B)^2∫_E\ B∫_E\ B1/|x-y|^p d^d(x) d^d(y)
≤1/^d(E \ B)^2∫_E\ B∫_(E\ B) ∖^n(y,δ)1/|x-y|^p d^d(x) d^d(y)
+ 1/^d(E \ B)^2∑_i=1^m ∫_Ẽ_i∫_(E ∖ B) ∩ ^n(y,δ)1/|x-y|^p d^d(x) d^d(y).
Expression (<ref>) is bounded straightforwardly by δ^-p, since x ∉^n(y,δ) forces |x-y| ≥δ. For the inner integral in (<ref>), we have y∈Ẽ_i and so the ball ^n(y,δ) does not intersect Ẽ_j for j ≠ i. Hence (E ∖ B) ∩^n(y,δ) = Ẽ_i ∩^n(y,δ). Thus the inner integral can be estimated using <ref> by
∫_Ẽ_i ∩ ^n(y,δ)1/|x-y|^p d^d(x)
≤(1+ϵ)^2d δ^d-p |^d-1|/d-p .
Hence line (<ref>) is bounded by
1/^d(E\ B)^2∑_i=1^m ^d(Ẽ_i) (1+ϵ)^2d δ^d-p |^d-1|/d-p≤(1+ϵ)^2d δ^d-p/d-p|^d-1|/^d(E\ B).
Combining the estimates on (<ref>) and (<ref>) and multiplying by d-p, we find
(d-p)V_p(E) ≤(1+ϵ)^2d δ^d-p|^d-1|/^d(E\ B)
+ (d-p) δ^-p .
Letting p↗ d gives
lim sup_p↗ d(d-p) V_p(E)≤(1+ϵ)^2d|^d-1|/^d(E\ B).
Finally, recalling that ^d(B) < ϵ and letting ϵ→ 0, we conclude
lim sup_p ↗ d(d-p)V_p(E) ≤|^d-1|/^d(E),
which is the desired estimate (<ref>).
§.§ Second proof
As observed by Götz <cit.>, the Riesz kernel can be expressed for p>0 as
|x-y|^{-p} = p ∫_{|x-y|}^∞ r^{-p-1} dr = p ∫_0^∞ 1_{𝔹^n(x,r)}(y) r^{-p-1} dr
and so the energy becomes
V_p(E) = p min_μ ∫_0^∞ ∫_E μ(𝔹^n(x,r)) dμ(x) r^{-p-1} dr
where the minimum is taken over probability measures on the compact set E.
Choosing μ once again to be normalized Hausdorff measure on E\ B, and integrating with respect to r over the intervals (0,δ) and (δ,∞), we deduce
V_p(E) ≤p/^d(E\ B)^2∫_0^δ∫_E\ B^d(^n(x,r)∩ (E\ B)) d^d(x) r^-p-1 dr
+ p∫_δ^∞∫_E 1 dμ(x)r^-p-1 dr .
The second term equals δ^-p, since μ(E)=1.
In the first term, when x∈ E\ B = ∪_i=1^m Ẽ_i and r<δ, we know x belongs to precisely one of the Ẽ_i and ^n(x,r) does not intersect Ẽ_j when j ≠ i. By applying <ref> with p=0 to x ∈Ẽ_i, we find that
^d(^n(x, r)∩ (E\ B))
= ^d(^n (x, r)∩Ẽ_i)
≤ (1+ϵ)^2d r^d |^d-1|/d.
This estimate is the same for each i, and so we conclude
V_p(E) ≤ (1+ϵ)^2d |^d-1| p/d/^d (E\ B)^2∫_0^δ^d(E\ B) r^d-p-1 dr + δ^-p
= (1+ϵ)^2d |^d-1| p/d/^d (E\ B)δ^d-p/d-p +δ^-p.
Multiply both sides by d-p and let p↗ d, getting that
lim sup_p↗ d (d-p) V_p(E)≤ (1+ϵ)^2d|^d-1|/^d (E\ B),
Let ϵ→ 0 to obtain as wanted for (<ref>) that
lim sup_p↗ d (d-p) V_p(E) ≤|^d-1|/^d(E).
§ LOWER LIMIT OF THE ENERGY FOR <REF>
To complete the proof of <ref>, we establish a lower bound on the energy.
Let 1 ≤ d ≤ n. If E ⊂ ℝ^n is compact and strongly d-rectifiable then
lim inf_{p↗ d} (d-p) V_p(E) ≥ |𝕊^{d-1}| / ℋ^d(E).
The inequality is known already when E is flat, meaning E ⊂, by recent work of Clark and Laugesen <cit.>. This flat case provides a key ingredient in the following proof.
Let ϵ>0. By definition of strong rectifiability, the set partitions as E = ( ∪_i=1^m E_i ) ∪̇ F, with E_i = φ_i(K_i) for some compact K_i ⊂ ℝ^d and corresponding bi-Lipschitz function φ_i with bi-Lipschitz constant ≤ 1+ϵ. The intersections have vanishing Hausdorff measure: ℋ^d(E_i ∩ E_j)=0 when i ≠ j. The set F is lower dimensional, with dim F < d, and so
ℋ^d(F)=0 .
Suppose dim F < p < d. Because p exceeds the dimension of F, we know by <cit.> that every compact subset of F has p-capacity zero, that is, has infinite p-energy. We show now that F itself (which need not be compact) has infinite p-energy in the sense that ∫_F ∫_F |x-y|^{-p} dμ dμ=∞ whenever μ is a probability measure on F. For suppose this energy integral is finite; the compact set F_η = { x ∈ E : dist(x, ∪_i=1^m E_i) ≥ η} ⊂ F exhausts F as η↘ 0 and so μ(F_η)>0 for some η, implying finiteness of the energy integral for the probability measure μ(·∩ F_η)/μ(F_η). Hence Cap_p(F_η)>0, which we already observed is not true. Therefore F must have infinite p-energy.
First we prove the proposition for each E_i individually, up to an ϵ-dependent factor. Let μ_i be a probability measure on E_i, so that μ = μ_i ∘φ_i is a probability measure on K_i. Then
V_p(E_i) = min_μ_i∫_E_i∫_E_i1/|x̃-ỹ|^p dμ_i(x̃) dμ_i(ỹ)
= min_μ∫_K_i∫_K_i1/|φ_i(x)-φ_i(y)|^p dμ(x) dμ(y)
≥1/(1+ϵ)^pmin_μ∫_K_i∫_K_i1/|x-y|^p dμ(x) dμ(y)
= 1/(1+ϵ)^p V_p(K_i)
where the inequality uses the upper Lipschitz condition. Now we call on the result by Clark and Laugesen <cit.> for K_i ⊂ ℝ^d, that:
lim inf_{p↗ d} (d-p) V_p(K_i) ≥ |𝕊^{d-1}| / ℋ^d(K_i).
Meanwhile, <ref> for φ_i^{-1} (using the lower Lipschitz condition on φ_i) gives
ℋ^d(K_i) ≤ (1+ϵ)^d ℋ^d(E_i).
Combining these inequalities, we obtain for E_i that the proposition holds up to a factor of (1+ϵ)^2d:
lim inf_{p↗ d} (d-p) V_p(E_i)
≥ |𝕊^{d-1}| / ((1+ϵ)^d ℋ^d(K_i)) ≥ |𝕊^{d-1}| / ((1+ϵ)^{2d} ℋ^d(E_i)).
Notice we cannot simply let 1+ϵ tend to 1 on the right side, because the choice of E_i in our decomposition depends on ϵ.
Next we turn attention to the whole set E = ( ∪_i=1^m E_i ) ∪̇ F. Subadditivity of the reciprocal energy (<ref>) yields that
V_p(E) ≥1/∑_i=1^m V_p(E_i)^-1 + V_p(F)^-1 .
We showed above that F has infinite p-energy and so the term V_p(F)^-1 in the denominator can be dropped.
Multiplying by d-p and letting p ↗ d, we see
lim inf_p ↗ d (d-p) V_p(E)
≥1/∑_i=1^m (lim inf_p ↗ d(d-p)V_p(E_i) )^-1
≥ |𝕊^{d-1}|/(1+ϵ)^{2d} · 1/∑_{i=1}^m ℋ^d(E_i)    by (<ref>) for each E_i
= |𝕊^{d-1}|/(1+ϵ)^{2d} · 1/ℋ^d(E)
→ |𝕊^{d-1}| / ℋ^d(E)
as ϵ→ 0, which proves the proposition.
§ PROOF OF <REF>
Cap_p(E) is monotonically decreasing with respect to p by Clark and Laugesen <cit.> and so the limiting value lim_{p↗ d} Cap_p(E) exists and is greater than or equal to Cap_d(E). That limiting value must be zero, because if it were positive then the left side of (<ref>) in <ref> would be infinite whereas the right side is finite. Hence Cap_d(E)=0.
§ PROOF OF <REF>
Let ϵ_k = 2^-k for k∈. For each k we have a bi-Lipschitz decomposition of the strongly rectifiable set E = ∪_i E_k,i∪ F_k, given as in definition, with constant L_k<1+ϵ_k. Let A_k = ∪_i≠ j (E_k,i∩ E_k,j) be the set of intersection points, which has Hausdorff measure 0.
Suppose x∈ E\ (A_k∪ F_k). Then x belongs to only one of the E_k,i, and since those sets are compact, for sufficiently small r the ball around x with radius r does not intersect any other E_k,j, j≠ i. If x̃ is the preimage of x under the bi-Lipschitz mapping onto E_k,i, then
L_k^-d ^d(^d(x̃,r/L_k))
≤^d(^n(x,r)∩ E) ≤ L_k^d ^d(^d(x̃, L_k r)),
for all small r>0.
Therefore,
|^d|/(1+ϵ_k)^2d ≤lim inf_r→ 0^d(^n(x,r)∩ E)/r^d
≤lim sup_r→ 0^d(^n(x,r)∩ E)/r^d≤ (1+ϵ_k)^2d|^d|.
Let A = ∪ A_k and F = ∪ F_k, so that ^d(A∪ F) = 0. Equation (<ref>) holds for all x ∈ E\ (A∪ F). Letting ϵ_k → 0 completes the proof.
§ PROOF OF <REF>
In terms of energy, the theorem claims that
lim sup_{p↗ d} (d-p) V_p(E) ≤ d/ℋ^d(E)^2 ∫_E σ_d(ℋ^d|_E, x) dℋ^d(x).
We begin with Götz's formula (<ref>), which says
V_p(E)
= p inf_μ∫_E ∫_0^∞μ(^n(x,r)) r^-p-1 dr dμ(x)
≤ p inf_μ∫_E ∫_0^1 μ(^n(x,r)) r^-p-1 dr dμ(x) + 1
since μ(·) ≤μ(E) = 1. Choose μ(·) = ^d(·∩ E) / ^d(E) to be normalized Hausdorff measure on E. Then
V_p(E) ≤p/^d(E)^2∫_E ∫_0^1 ^d(^n(x,r) ∩ E) r^-p-1 dr d^d(x) + 1.
Notice that (d-p) times the inner integral is dominated by the upper Ahlfors d-regular constant of E:
(d-p)∫_0^1 ^d(^n(x,r) ∩ E) r^-p-1 dr
≤ C(d-p) ∫_0^1 r^d-p-1 dr = C.
Hence
lim sup_p↗ d (d-p) V_p(E)
≤d/^d(E)^2lim sup_p↗ d∫_E (d-p) ∫_0^1 ^d(^n(x,r) ∩ E) r^-p-1 dr d^d(x)
≤d/^d(E)^2∫_E lim sup_p ↗ d (d-p) ∫_0^1 ^d(^n(x,r) ∩ E) r^-p-1 dr d^d(x)
by dominated convergence
= d/^d(E)^2∫_E σ_d(^d|_E,x) d^d(x)
by definition of the second order upper density.
§ ACKNOWLEDGMENTS
Laugesen was supported by awards from the Simons Foundation (#964018) and the National Science Foundation (#2246537). The NSF grant supported Fan too.
plain
99
BHS19
S. V. Borodachov, D. P. Hardin and E. B. Saff,
Discrete Energy on Rectifiable Sets. Springer Monographs in Mathematics. Springer, New York, 2019.
C10
M. T. Calef,
Riesz s-equilibrium measures on d-dimensional fractal sets as s approaches d.
J. Math. Anal. Appl. 371 (2010), 564–572.
CH09
M. T. Calef and D. P. Hardin,
Riesz s-equilibrium measures on d-rectifiable sets as s approaches d,
Potential Anal. 30 (2009), 385–401.
CL24b
C. Clark and R. S. Laugesen,
Riesz capacity: monotonicity, continuity, diameter and volume.
Preprint. 2406.10781
F97
K. Falconer,
Techniques in Fractal Geometry.
John Wiley & Sons, Ltd., Chichester, 1997.
G03
M. Götz,
On the Riesz energy of measures,
J. Approx. Theory 122 (2003), 62–78.
HK76
W. K. Hayman and P. B. Kennedy,
Subharmonic Functions. Vol. I. London Mathematical Society Monographs, No. 9. Academic Press (Harcourt Brace Jovanovich, Publishers), London–New York, 1976.
H05
M. Hinz,
Average densities and limits of potentials,
Master’s thesis, Universität Jena, Jena, 2005.
L72
N. S. Landkof,
Foundations of Modern Potential Theory. Translated from the Russian by A. P. Doohovskoy. Die Grundlehren der mathematischen Wissenschaften, Band 180. Springer–Verlag, New York–Heidelberg, 1972.
M46
P. A. P. Moran,
Additive functions of intervals and Hausdorff measure,
Proc. Cambridge Philos. Soc. 42 (1946), 15–23.
Z01
M. Zähle,
The average density of self-conformal measures,
J. London Math. Soc. (2) 63 (2001), 721–734.
Z02
M. Zähle,
Forward integrals and stochastic differential equations.
In: Seminar on Stochastic Analysis, Random Fields and Applications III.
Birkhäuser Basel, 2002, pp. 293–302.
Z24
M. Zähle,
Lectures on Fractal Geometry.
Fractals Dyn. Math. Sci. Arts Theory Appl., 8.
World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2024.
DLMF NIST Digital Library of Mathematical Functions. <http://dlmf.nist.gov/>, Release 1.2.1 of 2024-06-15. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds.
|
http://arxiv.org/abs/2409.02722v1 | 20240904135851 | Efficient Simulation of Non-uniform Cellular Automata with a Convolutional Neural Network | [
"Michiel Rollier",
"Aisling J. Daly",
"Odemir M. Bruno",
"Jan M. Baetens"
] | nlin.CG | [
"nlin.CG"
] |
Univ. Ghent, Dept. Data Anal. & Math. Modelling, BionamiX, Coupure Links 653, B-9000 Ghent, Belgium
[email protected]
Univ. São Paulo, São Carlos Inst. Phys., POB 369, BR-13560970 São Carlos, SP, Brazil
Efficient simulation of non-uniform cellular automata with a convolutional neural network
Michiel Rollier10000-0001-8467-734X Aisling J. Daly10000-0002-3390-2495 Odemir M. Bruno10000-0002-2945-1556 Jan M. Baetens0000-0003-4084-9992
September 9, 2024
=================================================================================================================================================
§ ABSTRACT
Cellular automata (CAs) and convolutional neural networks (CNNs) are closely related due to the local nature of information processing. The connection between these topics is beneficial to both related fields, for conceptual as well as practical reasons. Our contribution solidifies this connection in the case of non-uniform CAs (νCAs), by simulating a global update within a CNN architecture built with the Python package TensorFlow. Additionally, we demonstrate how the highly optimised, out-of-the-box multiprocessing in TensorFlow offers interesting computational benefits, especially when simulating large numbers of νCAs with many cells.
§ INTRODUCTION
§.§ Elementary and non-uniform cellular automata
Arguably the simplest non-trivial and maximally discrete dynamical system is an elementary cellular automaton (ECA). In this model, a finite or countably infinite number of cells are aligned in one dimension. A cell can be in only one of two possible states, and all cells update their state in discrete time steps based on their own and their direct neighbours' states. Additionally, all cells update their states simultaneously, they do so deterministically, and they all follow the same local update rule (see e.g. <cit.>). Relaxing any of these conditions results in a CA that belongs to a family of discrete models that typically exhibit a richer behaviour, a more complex mathematical description, and well-defined `taxonomic' ties to other families. Our forthcoming comprehensive review on this taxonomy <cit.> provides an overview.
In particular, allowing certain cells to follow different local update rules results in the family of CAs collectively identified as non-uniform CAs (νCAs). Our review paper <cit.> covers non-uniformity in the most general sense, where the `rule allocation' varies in space and time. However, in the literature <cit.> νCA rule allocation is typically only spatially non-uniform. For this reason, together with the fact that our proposed implementation is more cumbersome in the general interpretation of a νCA, we will only consider spatially non-uniform CAs in this contribution. Additionally, as we will focus on simulating νCAs, we will only be concerned with finite grids. Fig. <ref> contains an example of a νCA with N=32 cells and N_R = 2 elementary rules.
Clearly, allowing non-uniformity implies that the space of possible CA dynamics increases in size considerably. An ECA with N cells and periodic boundary conditions already has 2^N possible initial configurations, each of which evolves into different dynamics for each of the 256 elementary rules. A νCA consisting of N cells that each evolve according to one of N_R rules has 2^N × N_R^N possible combinations of initial configuration and rule allocation. This large diversity obstructs mathematical generalisation except in particular cases that are quite remote from applications <cit.>. An empirical approach to phenomenological classification is therefore imperative, but such a computational task requires an efficient means of simulation.
§.§ CA classification and simulation by means of CNNs
The CA classification problem <cit.> is a challenge at the centre of CA research (see e.g. <cit.>). Considering the fact that we can interpret the spacetime diagrams of CAs as images, computer vision techniques can be mobilised for their classification, including those researched in the domain of deep learning. Within the spectrum of deep learning, convolutional neural networks (CNNs) are wildly popular, largely due to their undeniable success in image processing and computer vision <cit.>. We refer to excellent monographs to gain a good understanding of the topic (e.g. <cit.>), while a good visual introduction is offered by the deep learning series by 3Blue1Brown on YouTube <cit.>.
A lot of diverse data is required in order to effectively train CNNs to identify classes. Fortunately, the local nature of the convolution operation enables not only the identification of CAs, but also their emulation. After all, nodes in a neural network may be identified with CA cells, and a convolutional operation can be interpreted as an update from a local neighbourhood. In fact, as Gilpin <cit.> shows, the global update mechanism of any kind of CA can be accommodated by the architecture of a CNN. This can be achieved either by a clever choice of weights and biases, or by training the network from a random initialisation.
In the CNN, transforming the input configuration to the neighbourhood encoding is performed by the first 1D convolutional layer, with a kernel of width 3 and fixed weights (4, 2, 1), zero bias, and periodic boundary conditions. The output of this convolution is transformed to a matrix with one-hot vectors as columns, and each of the 8 rows of this matrix corresponds to a channel in the first CNN hidden layer. Next, another convolution layer with a kernel of width 1 essentially sums all channels, where this time the weights are determined by the binary representation of the local update rule. The output is then, by design, the ECA configuration after one global update. With a mere 40 parameters, this is an extremely simple CNN, whose computational complexity scales only with the number of cells N. The subsequent steps required to integrate a global update into a CNN framework are shown schematically in Fig. <ref>. This concrete example uses ECA rule 54 and a random initial condition, but the required operations are independent of this choice
The subsequent (de)composition steps required for updating an ECA configuration, illustrated for 32 randomly initialised cells, evolved over one time step by rule 54. First, each binary size-3 neighbourhood is translated to an integer from 0 to 7 (shown in grey-scale). This integer is encoded as a size-8 one-hot vector (displayed in columns). Depending on the rule table of the local update rule (displayed on the left-hand side), each column is kept or removed. As a final step, all columns are summed, resulting in the output configuration.
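To make this construction explicit, the fixed-weight forward pass just described can be written out in a few lines; the following is a minimal NumPy sketch of the computation the CNN performs, rather than the actual TensorFlow implementation (rule 54 and the random initial state are arbitrary choices, and all names are ours):

import numpy as np

def eca_cnn_step(state, rule):
    # 1) neighbourhood encoding: width-3 kernel with fixed weights (4, 2, 1), periodic boundaries
    code = 4 * np.roll(state, 1) + 2 * state + 1 * np.roll(state, -1)   # integers 0..7, one per cell
    # 2) one-hot encoding: 8 channels, one per possible neighbourhood
    onehot = np.eye(8, dtype=int)[code]                                  # shape (N, 8)
    # 3) width-1 convolution whose weights are the binary representation of the rule
    table = np.array([(rule >> k) & 1 for k in range(8)])
    return onehot @ table                                                # configuration at the next time step

state = np.random.randint(2, size=32)
print(eca_cnn_step(state, rule=54))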
The parameters within the CNN (weights and biases) can be calculated, but for more general CAs they would typically be trained. In order for the CNN to be in practice (and consistently) trainable, starting from random weights and biases, some additional features are required. We will not focus on the training procedure here, but we may mention that the most important of such additional features would be activation functions <cit.>. For illustrative purposes we include our convergence towards an optimum in parameter space for a CNN that emulates rule 54 with near perfection in Fig. <ref>. For details on preferred training procedures for CA emulators, we again refer to <cit.>.
§.§ Scope
The goal of this article is to fill in a gap in the literature, by emulating νCAs by means of CNNs. We can benefit from the extremely streamlined software implementations designed for neural networks, optimised for parallel processing and general performance. That is to say: CNNs can present us with a practical tool for the fast and massive simulation of spacetime diagrams and analyses on these diagrams.
In the next section we will develop a CNN for νCAs, and we will see that this requires only a minimal addition to the architecture outlined above. We discuss some performance characteristics, and conclude with an outlook on the future of the marriage between CAs and CNNs.
§ METHODS
In order to assess the performance of a CNN regarding the simulation of νCAs, we first discuss a popular well-established approach, and then introduce two varieties of CNN extensions.
§.§ Existing approaches
Some programming languages enable very convenient and computationally optimised ways of simulating and analysing CAs. Wolfram Mathematica is an obvious example, which was in fact partially created for this purpose <cit.>. In Python, the most commonly used package is CellPyLib <cit.>.
It is straightforward to implement a νCA in CellPyLib by defining an array that instructs the update method on which rule to apply, when, and to which cell. Adding more or fewer rules (i.e. altering the non-uniformity) should not affect the performance. What does impact the performance, however, is the fact that the non-uniformity of the model no longer allows for caching the states in each step – in CellPyLib this is encapsulated in a dedicated memoisation option. This means that one cannot make any `memory shortcuts' which typically speed up the CA simulation considerably. Fig. <ref> displays an example of an 8-rule νCA simulated in CellPyLib.
A νCA is easily simulated using the Python package CellPyLib. One does so in two steps: first by defining the rule allocation (left, dependent on time and cell), and second by passing this information to the evolution method, which generates the spacetime diagram (right).
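For reference, such a non-uniform evolution takes only a few lines of code; the sketch below uses CellPyLib's documented interface (init_random, evolve, nks_rule), and the two-rule allocation is purely illustrative rather than the configuration used for the experiments:

import numpy as np
import cellpylib as cpl

N, T = 32, 64
rule_alloc = np.where(np.arange(N) < N // 2, 54, 90)     # which elementary rule governs which cell
ca = cpl.init_random(N)
ca = cpl.evolve(ca, timesteps=T,
                apply_rule=lambda n, c, t: cpl.nks_rule(n, rule_alloc[c]))
# ca now contains the full spacetime diagram, one row per time step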
§.§ Two approaches for non-uniform CAs in TensorFlow
Building on the CNN framework for elementary CAs, below we propose two additions which enable the CNN to emulate νCAs. Both additions involve a change in the CNN architecture between the one-hot neighbourhood encoding and the output layer. For technical details we refer to the code and annotations of the corresponding model class, available at mrollier/emulating-and-learning-CAs on GitHub, which is entirely based on TensorFlow (Keras) modules.
Note that the proposed CNNs emulate a single global update; generating an entire spacetime diagram of T time steps therefore requires feeding the output of the CNN back into the input layer T-1 times. Each global update strongly depends on the previous time step, so it is (in general) not possible to distribute the calculation of the CA dynamics into (for example) `all even time steps' and `all odd time steps'. This impossibility is related to the so-called computational irreducibility of CAs, and impedes temporal parallelisation of the computation of their dynamics. Note also that if the rule allocation is independent of time (as is conventionally the case for νCAs), the weights and biases of the CNN remain unchanged between time steps.
§.§.§ A locally-connected hidden layer
The first approach includes a locally connected layer, which is essentially a convolution where the kernel weights are allowed to differ for distinct nodes. As in Fig. <ref>, all the columns are summed, weighted by the binary representation of the local update rule, but now the weights are not shared. The biases remain zero. The rest of the CNN is identical to that for ECAs. In practice one more intermediate step is added: our model first calculates the entire output configuration as if the CA were uniform, for each of the N_R rules. Next, the locally connected layer picks out the relevant cells based on which rule was actually allocated to each of them. While this is less computationally efficient, it does arguably increase the interpretability of the model. More relevant in forthcoming research, however, is that this also facilitates flexibility in the training phase. After all, over-parameterisation is one of the key ingredients of deep learning.
Following the required subsequent calculations as explained in Section <ref>, this brings the total number of parameters in the model to
(3+1) × 8 + 8 × N_R + N_R × N,
if we discard the bias parameters that have been set to zero. For the example depicted in Fig. <ref>, this sums to 352 parameters.
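Schematically, and in the NumPy notation of the earlier sketch rather than the actual Keras layer, the locally connected selection amounts to evaluating every rule everywhere and then picking, for each cell, the output of its allocated rule (reusing eca_cnn_step defined above; the two-rule allocation is again illustrative):

def nu_ca_locally_connected_step(state, rules, alloc):
    # evaluate the uniform update for each of the N_R rules: shape (N_R, N)
    candidates = np.stack([eca_cnn_step(state, r) for r in rules])
    # per-cell selection: cell i keeps the output of rule alloc[i]
    return candidates[alloc, np.arange(len(state))]

rules = np.array([54, 90])
alloc = np.random.randint(len(rules), size=32)
print(nu_ca_locally_connected_step(np.random.randint(2, size=32), rules, alloc))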
§.§.§ A sparsely-populated dense layer
A slightly different approach invokes the power of a fully-connected layer, known in the industry as a dense layer. Here, again, CA outputs are calculated for each of the rules, but the cell selection now occurs by means of this dense layer. Note that, essentially, a locally connected layer is a dense layer for which all edges have been cut that connect nodes representing different cells. Whilst this seems superfluous at first, there are two reasons for doing so. First, TensorFlow is heavily optimised for calculating with large matrices, especially if these are sparse. Second, we again have the consideration of more model power and flexibility in future approaches that also involve training via backpropagation.
Technically, the N_R channels containing the size-N outputs of the uniform case are first flattened, i.e. deconstructed into a single vector of length N_R N. Next, all elements in this vector (the node values) are connected with the size-N output layer by means of an N_R N × N weights matrix, in which the weights are manually set to 0 or 1 (most of them being 0).
The total number of parameters in this model is therefore
(3+1) × 8 + 8 × N_R + N_R × N^2,
if again we do not count the vanishing bias parameters. For the example in Fig. <ref>, this now sums to 8288 parameters.
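The structure of that weights matrix can be pictured with a short sketch that continues the NumPy example above (the flattening order shown here is ours and may differ from the actual TensorFlow implementation):

def selection_matrix(alloc, n_rules):
    n_cells = len(alloc)
    W = np.zeros((n_rules * n_cells, n_cells))
    for i, r in enumerate(alloc):
        W[r * n_cells + i, i] = 1.0        # connect node (rule r, cell i) to output cell i
    return W

candidates = np.stack([eca_cnn_step(state, r) for r in rules])          # shape (N_R, N), as before
new_state = candidates.reshape(-1) @ selection_matrix(alloc, len(rules))  # same result as the locally connected variant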
§.§ Comparison of the three models in four scenarios
It is counter-intuitive that any increase in computing speed is expected at all, considering the clearly large increase in required floating point operations. To be fair, a really well-optimised and parallelised approach tailored to νCA simulation will undoubtedly outperform the proposed over-parameterised CNNs. The main allure, however, is in the combination of ease of use and out-of-the-box high performance of TensorFlow (or comparable deep-learning frameworks, for that matter), as a result of the global scale of its continuous development.
We will briefly examine where the strengths and weaknesses of the CNN approaches lie, compared to the benchmark approach using CellPyLib. In particular we will consider four scenarios: the performance when adding more rules, more time steps, more cells, and more samples. Tab. <ref> provides a summary of the parameter values (or ranges) that were found to be appropriate for best illustrating the trends and comparisons: these are the domains in which the overall trend in performance for all approaches is easily discernible. Every sample starts from a random initial condition but an identical rule allocation, such that the CNN needs to be initialised only once. Using a standard Python timing routine, we simply keep track of how many seconds each model requires for evolving the νCAs, taking the average over ten attempts.
This small-scale experiment was performed using an Intel Core i7-9850H CPU, 6 cores, at 2.6 GHz. We ran Python 3.11.8 and TensorFlow 2.14.0. Note, however, that the numerical value of the timing is secondary to the qualitative comparison.
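The timing itself is done with a plain wall-clock harness of the following kind (a sketch; run_once stands for one full evolution of all samples with a given model, and the returned mean and standard deviation correspond to the values reported below):

import statistics, time

def time_model(run_once, n_repeats=10):
    samples = []
    for _ in range(n_repeats):
        t0 = time.perf_counter()
        run_once()
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)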
§ RESULTS
Here we present the computation times of the various scenarios listed in Table <ref>. We always show the CellPyLib data in blue, the data from the locally connected CNN in orange, and the data from the fully connected CNN in green. In order to assess the trends, we always show the mean value and the standard deviation from ten independent computations per unique combination of parameters.
Fig. <ref> displays the computation times for all four scenarios. First, we show what happens when the number of rules (the `non-uniformity') is increased by factors of two. Because the number of rules N_R goes up to 256, we also chose N=256, allowing the possibility to allocate each rule at least once. We observe that the computation time is largely independent of N_R for all models, except when a large number of rules is chosen in the densely connected CNN. Rather surprisingly, however, for small values of N_R the computation time of this densely connected CNN is significantly smaller than that of the other two models.
For the second scenario, the number of time steps increases linearly between 10 and 100, and the computation time of all models appears to increase linearly as well. Except for small T values, the computation time is similar for all three models.
The results from the third scenario are shown for eight linearly spaced values of N between 32 and 256. As expected, CellPyLib's computation time is proportional to the number of cells. The computation time of the locally connected CNN also increases more or less linearly – but with a higher `start-up cost'. The computation time of the densely connected CNN is largely independent of N.
For the fourth and final scenario, we consider 11 logarithmically spaced values of S between 1 and 1024. We again observe that using the larger models requires a certain initial cost, but once we want to simulate a large number of diagrams, they are clearly the least time consuming option.
§ DISCUSSION, CONCLUSION AND PROSPECTS
The highly optimised `out-of-the-box' multiprocessing of TensorFlow is clearly preferred over CellPyLib in scenarios where we want to generate many samples of νCAs with many cells. This of course is precisely the condition for obtaining statistically significant results in empirical studies of these discrete dynamical systems, especially when training models for automatic classification.
More surprising, however, is that the densely connected CNN almost always beats the locally connected CNN, despite the fact that, mathematically speaking, the latter is a subgraph of the former. This is even the case when simulating more cells, despite, as Eq. (<ref>) shows, the quadratic growth of the number of parameters. This precisely demonstrates the point: TensorFlow is so cleverly optimised that more complex models can outperform the simpler ones. This is arguably the reason why the locally connected layer is discontinued in more recent versions of TensorFlow.
CNNs are a great tool for efficient simulation, which enables a more thorough exploration of the computational landscape of CAs. Similar approaches will enable the simulation of other types of CAs. As we show in forthcoming work, for example, graph CNNs are quite straightforward to mobilise in the simulation of network automata as well. While we do not claim that TensorFlow is the computationally optimal solution for CA simulation, it does present the CA researcher with an educational, ergonomic and flexible engine for efficient simulation.
CNNs are more than a tool for studying CAs, however. Arguably the most promising possibilities are created when, inversely, CAs serve the theoretical study and practical applications of CNNs. The training of CAs in the CNN framework may be used to better understand the information flow and learning process of CNNs <cit.>. Another exciting avenue is the mobilisation of CAs for generative neural networks, as was elegantly illustrated in <cit.>. In any case, increased efforts in joining discrete dynamical modelling and deep learning, such as the one shared in this work, offer interesting benefits for both research domains.
The authors have no competing interests to declare that are relevant to the content of this article.
|
http://arxiv.org/abs/2409.03453v1 | 20240905120215 | Ageing and dynamics of the tailed radio galaxies in Abell 2142 | [
"L. Bruno",
"T. Venturi",
"D. Dallacasa",
"M. Brienza",
"A. Ignesti",
"G. Brunetti",
"C. J. Riseley",
"M. Rossetti",
"F. Gastaldello",
"A. Botteon",
"L. Rudnick",
"R. J. van Weeren",
"A. Shulevski",
"D. V. Lal"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO"
] |
Dipartimento di Fisica e Astronomia (DIFA), Università di Bologna (UNIBO), via Gobetti 93/2, 40129 Bologna, Italy
Istituto Nazionale di Astrofisica (INAF) - Istituto di Radioastronomia (IRA), via Gobetti 101, 40129, Bologna, Italy
Istituto Nazionale di Astrofisica (INAF) - Osservatorio di Astrofisica e Scienza dello Spazio (OAS) di Bologna, Via P. Gobetti 93/3, 40129, Bologna, Italy
Center for Radio Astronomy Techniques and Technologies, Rhodes University, Grahamstown 6140, South Africa
Istituto Nazionale di Astrofisica (INAF) - Osservatorio Astronomico di Padova (OAPD), Vicolo dell'Osservatorio 5, I-35122 Padova, Italy
Istituto Nazionale di Astrofisica (INAF) - Istituto di Astrofisica Spaziale e Fisica cosmica (IASF) di Milano, Via A. Corti 12, I-20133, Milano, Italy
Minnesota Institute for Astrophysics, University of Minnesota, 116 Church St SE, Minneapolis, MN 55455, USA
Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, The Netherlands
ASTRON, Netherlands Institute for Radio Astronomy, Oude Hoogeveensedijk 4, 7991 PD, Dwingeloo, The Netherlands
Tata Institute of Fundamental Research, Post Box 3, Ganeshkhind P.O., Pune 411007, India
[email protected]
Tailed radio galaxies are shaped by ram pressure owing to the high-velocity motion of their host through the intracluster medium (ICM). Recent works have reported on the increasing complexity of the phenomenology of tailed galaxies, with departures from theoretical ageing models and evidence of re-energising mechanisms, which are yet unclear.
The nearby (z=0.0894) galaxy cluster Abell 2142 hosts two tailed galaxies, namely T1 and T2, which exhibit peculiar morphological features. We aim to investigate the properties of T1 and T2 and constrain their spectral evolution, dynamics, and interactions with the ICM.
We combined LOw Frequency Array (LOFAR), upgraded Giant Metrewave Radio Telescope (uGMRT), Very Large Array (VLA), and MeerKAT data (from 30 MHz to 6.5 GHz) to carry out a detailed spectral analysis of T1 and T2. We analysed surface brightness profiles,
measured integrated and spatially-resolved spectral indices, and performed a comparison with single injection ageing models. Chandra X-ray data were used to search for discontinuities in the ICM properties in the direction of the targets.
The spectral properties of T1 at low frequencies are predicted by ageing models, and provide constraints on the 3D dynamics of the host by assuming a constant velocity. However, sharp transitions along sub-regions of the tail, local surface brightness enhancements, and a spectral shape at high frequencies that is not predicted by models suggest a more complex scenario, possibly involving hydrodynamical instabilities and particle mixing. T2 exhibits unusual morphological and surface brightness features, and its spectral behaviour is not predicted by standard models. Two AGN outburst events during the infall of T2 towards the cluster centre could explain its properties.
Ageing and dynamics of the tailed radio galaxies in Abell 2142
L. Bruno^1,2, T. Venturi^2,3,4, D. Dallacasa^1,2, M. Brienza^3,1, A. Ignesti^5, G. Brunetti^2, C. J. Riseley^1,2, M. Rossetti^6, F. Gastaldello^6, A. Botteon^2, L. Rudnick^7, R. J. van Weeren^8, A. Shulevski^9, D. V. Lal^10
=======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
A variety of discrete and diffuse radio sources can be found in galaxy clusters, such as radio galaxies, radio halos, mini-halos, and radio relics (see <cit.> for reviews). These targets offer the chance to probe the evolution and dynamics of galaxy clusters and their members, the cosmic magnetism, and the energy transfer processes on various scales.
Radio galaxies in clusters are typically of Fanaroff-Riley I-type (FRI; <cit.>) exhibiting two jets lacking hotspots at their tips, which are launched in opposite directions from the core at relativistic velocities, and become sub-relativistic at a distance of a few kpc. In galaxy clusters, FRIs can be reshaped due to the high-velocity (v∼ 1000 km s^-1) motion of the host galaxy throughout the rarefied (n_ICM∼ 10^-2-10^-4 cm^-3) thermal intracluster medium (ICM). Specifically, ram pressure (P_ram∝ v^2 n_ICM; <cit.>) can deflect the radio jets by angles ≳ 90^o and generate narrow-angle tail (NAT) galaxies <cit.>, which exhibit a bright core (the head) and roughly parallel jets that rapidly diffuse as tails of length ∼ 100-500 kpc. Ram pressure is also responsible for moderate deflections by angles ≪ 90^o observed in some targets, possibly associated with merging subclusters, forming in this case a wide-angle tail (WAT) galaxy <cit.>. When the jets are not resolved (due to the intrinsic bending, projection effects, or resolution), NAT and WAT galaxies are generally referred to as head-tail (HT) galaxies. Owing to their origin, tailed radio galaxies are useful tracers of high-z and/or low-mass groups and clusters <cit.>, which are hardly detected by current X-ray telescopes.
In typical tailed galaxies, the radio brightness decreases from the core along the tail due to particle ageing. Therefore, the spectral index steepens with the distance from the core <cit.>. However, departures from these trends have been observed in a number of targets <cit.>, which suggest the presence of particle re-acceleration processes and/or amplification of magnetic fields. Furthermore, tailed galaxies, especially those extending throughout large fractions of the radii of clusters <cit.>, likely release relativistic electrons and magnetic fields into the ICM <cit.>. Therefore, tailed galaxies are interesting targets to probe the interplay between thermal and non-thermal components in galaxy clusters, and the complex re-acceleration mechanisms that are yet poorly constrained <cit.>.
Abell 2142 (A2142) is a nearby and massive galaxy cluster in an intermediate dynamical state between relaxed and merging systems <cit.>. It is characterised by complex dynamics <cit.> and hosts peculiar discrete and diffuse radio sources <cit.>. In the present work, we focus on two morphologically interesting HT galaxies in A2142, labelled as T1 and T2 in <cit.>. We aim to analyse their morphological and spectral properties over a wide range of frequencies (30 MHz - 6.5 GHz), search for possible interactions between the tails and the ICM, and constrain their dynamics.
Throughout this paper we adopted a standard ΛCDM cosmology with H_0=70 km s^-1 Mpc^-1, Ω_ M=0.3 and, Ω_Λ=0.7. At the cluster redshift of z=0.0894, 1”=1.669 kpc (or 1'∼ 100 kpc). We adopted the convention on the spectral index α as defined from the flux density S(ν) ∝ν^-α. The paper is organised as follows. In Sect. <ref>, we describe the galaxy cluster A2142. In Sect. <ref>, we present the radio and X-ray data and their processing. In Sect. <ref>, we report on the results of our analysis. In Sect. <ref> we discuss scenarios to explain the properties of T1 and T2. In Sect. <ref>, we summarise our work.
§ THE GALAXY CLUSTER ABELL 2142
A2142 (RA_J2000=15^h58^m20^s, Dec_J2000 = 27^o14'00”) is a nearby (z=0.0894) galaxy cluster of mass M_500=(8.8±0.2)× 10^14 M_⊙ within a radius R_500=14.07± 0.70 arcmin <cit.> (M_500 is the mass within a radius R_500, which encloses a mean density of 500ρ_c(z), where ρ_c(z) is the critical density of the Universe at a given redshift). The galaxy cluster A2142 is located at the centre of the A2142 supercluster, after which it is named <cit.>. In Fig. <ref> we provide a multi-wavelength (optical, X-ray, radio) view of A2142, and label the various sources and features that we discuss below.
A2142 hosts about 900 member galaxies gathered in small groups and hierarchically organised in structures and substructures <cit.>. Furthermore, several groups are infalling towards the richest structure in the cluster centre <cit.>, which hosts the brightest cluster galaxy `BCG1' (RA_J2000 =239.5834, DEC_J2000 = 27.2334, z=0.09081, M_*=1.9× 10^11 M_⊙). The secondary brightest cluster galaxy, BCG2 (RA_J2000 =239.5554, DEC_J2000 = 27.2481, z=0.0965, M_*=1.5× 10^11 M_⊙) is located at a projected distance of ∼ 180 kpc from BCG1 and is thought to be the main member of a merging group. Minor mergers are likely responsible for the uncommon properties of the ICM, owing to their intermediate dynamical state in between that of relaxed cool cores and unrelaxed (major) mergers <cit.>.
Radio observations of A2142 with the Low Frequency Array (LOFAR) revealed diffuse emission from the ICM in the form of a hybrid radio halo (Fig. <ref>) consisting of three distinct components <cit.>. The brightest component (the `core', `H1') in the cluster centre has a roundish morphology and a diameter of ∼ 200 kpc, the second component (the `ridge', `H2') is elongated towards south-east for ∼ 400 kpc, and the third component (`H3') extends for ∼ 2 Mpc embedding both H1 and H2 <cit.>. The origin of the hybrid halo is likely associated with turbulent re-acceleration triggered by mergers taking place on different spatial scales and/or timescales <cit.>.
Besides the hybrid halo, extended radio emission is associated with two prominent HT galaxies, T1 and T2, which are the focus of the present work. The host of T1 (RA_J2000 =239.5596, DEC_J2000 = 27.2721, z=0.09540, M_*=0.7× 10^11 M_⊙) is an elliptical galaxy likely being a member of the merging group associated with BCG2 <cit.>. Its optical spectrum[<https://skyserver.sdss.org/dr12/en/tools/chart/navi.aspx>] shows the presence of weak emission lines (e.g. O[III], Hα) that indicate AGN activity. The core is active at radio wavelengths <cit.> and bright in the 0.5-10 keV band (L_0.5-10=2.1× 10^42 erg s^-1; ), further supporting the association with an AGN. The host of T2 (RA_J2000 =239.5870, DEC_J2000 = 27.3337, z=0.08953, M_*=2.1× 10^11 M_⊙) is an elliptical member galaxy of A2142. Its optical spectrum lacks emission lines that would confirm the presence of current AGN activity, while its radio and X-ray emission will be discussed throughout this paper.
§ OBSERVATIONS AND DATA REDUCTION
In this section we present the radio and X-ray data to study the HT radio galaxies in A2142. We first provide a brief overview of the radio data recently analysed in <cit.> and in <cit.> for the study of the cluster's hybrid radio halo. The additional radio and X-ray data used in the present work are described in Sects. <ref> to <ref>. The details of all radio and X-ray observations are reported in Table <ref> and Table <ref>, respectively.
Some of the radio data exploited in this work have been recently presented in <cit.> to characterise the hybrid radio halo. These include data from LOFAR, the Giant Metrewave Radio Telescope (GMRT), the upgraded GMRT (uGMRT), and the Very Large Array (VLA). Furthermore, MeerKAT data have been recently used by <cit.> for a follow-up analysis. Here we briefly summarise these observations, and refer to <cit.> and <cit.> for details on the telescope-specific setup and calibration strategies.
LOFAR observed A2142 for 16 and 32 hours with the Low Band Antenna (LBA) and High Band Antenna (HBA) arrays operating at 30-70 MHz and 120-168 MHz, respectively. The GMRT observed A2142 for 5 hours at 305-340 MHz <cit.>, and 3 hour observations were carried out with the uGMRT at 300-500 MHz (band-3). Mosaicked pointings (each having a field of view of ∼ 30') on A2142 were obtained with the Very Large Array (VLA) at 1-2 GHz (L-band) in C-array and D-array configurations, for a total of 2 hours. MeerKAT observed A2142 at 872-1712 MHz (L-band) for 5.5 hours as part of the `MeerKAT-meets-LOFAR mini-halo census' project <cit.>.
§.§ GMRT data
Archival GMRT observations of A2142, first presented by <cit.>, are available in the 225-240 MHz and 590-625 MHz bands, for 6 and 5 hours on-source, respectively. The total bandwidths of 16 and 32 MHz are split into 128 and 256 channels, respectively. The sources 3C286 and 3C48 were used as absolute flux density scale calibrators.
Following the same procedure described in <cit.> for the 305-340 MHz band data, we reprocessed the 225-240 and 590-625 MHz band data by means of the Source Peeling and Atmospheric Modeling (SPAM) automated pipeline <cit.>, which corrects for ionospheric effects by deriving directional-dependent gains from bright sources across the field of view. We reached noise levels of ∼ 180 μ Jy beam^-1 at 13”× 10” and ∼ 40 μ Jy beam^-1 at 6”× 4” for the 234 and 608 MHz datasets, respectively, which are consistent with those reported in <cit.>.
§.§ uGMRT band-5 data
New observations of A2142 were performed with the uGMRT in the 1050-1450 MHz (band-5) frequency range in March 2018. The total 400 MHz bandwidth is split into 8192 channels of width ∼ 49 kHz each. The cluster was covered in its full extent with five different pointings (each having a field of view of ∼ 27'), for a total of 7 hours. For the study presented in this paper, only the three pointings covering the northern part of the cluster (∼ 40' in total) are considered (namely `A2142_2', `A2142_3', `A2142_5'), for a total of 4 hours. The sources 3C286 and 1602+334 were used as amplitude and phase calibrators, respectively.
Data reduction of band-5 observations with SPAM has not been tested in depth. However, direction-dependent corrections are negligible for these gigahertz-frequency and small field of view data, and thus we did not use SPAM. To process these data, we split the total bandwidth into 6 sub-bands of ∼ 67 MHz each and carried out a standard data reduction with the Common Astronomy Software Applications <cit.> by iteratively performing flagging of Radio Frequency Interference (RFI), and bandpass, amplitude, and phase calibrations for each sub-band and pointing. We then recombined the calibrated sub-bands of each pointing to perform rounds of phase and phase plus amplitude self-calibration. The self-calibrated pointings were imaged separately with WSClean v. 2.10 <cit.> with multi-frequency and multi-scale synthesis options. After correcting each of the three images by the corresponding primary beam attenuation at the central frequency of 1250 MHz and convolving them to the same resolution, they were combined by means of the tool lm.makemosaic in CASA to produce a single mosaic image. At a resolution of 2.5”, the final noise level is in the range ∼ 25-40 μ Jy beam^-1.
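Schematically, one such calibration pass per sub-band and pointing can be expressed with standard CASA tasks as follows (the task names are standard, but the file name, reference antenna, and solution intervals below are placeholders rather than the exact values adopted here; self-calibration and WSClean imaging follow afterwards as described):

from casatasks import flagdata, setjy, bandpass, gaincal, applycal

vis = 'A2142_band5_sub1.ms'                       # one sub-band of one pointing (hypothetical name)
flagdata(vis=vis, mode='tfcrop')                   # automated RFI excision
setjy(vis=vis, field='3C286')                      # absolute flux density scale
bandpass(vis=vis, caltable='sub1.B0', field='3C286', refant='C00', solint='inf')
gaincal(vis=vis, caltable='sub1.G0', field='3C286,1602+334', refant='C00',
        solint='int', calmode='ap', gaintable=['sub1.B0'])
applycal(vis=vis, gaintable=['sub1.B0', 'sub1.G0'])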
§.§ VLA L-band and C-band radio data
As first presented by <cit.>, the head-tail galaxy T1 was studied with the VLA in A configuration at 1-2 GHz (L-band) and 4.5-6.5 GHz (C-band), for 15 minutes on source in each band. In both observations, the sources 3C286 and 1602+3326 were used as absolute flux density and phase calibrators, respectively. Data were recorded in 16 spectral windows of 128 MHz each.
The field of view of these VLA observations include the core of T2, which we aim to study. We reprocessed the L-band and C-band data in CASA following standard calibration procedures (see Sect. <ref>), and performing an additional cycle of phase self-calibration. Our data processing improved the quality of the images with respect to those reported in <cit.> in terms of noise (improvement by factors of ∼1.6 and ∼ 1.2 in L-band and C-band, respectively). We reached a noise level of ∼ 27 μ Jy beam^-1 in L-band at ∼ 1” resolution and ∼ 11 μ Jy beam^-1 in C-band at ∼ 0.3” resolution.
§.§ Radio imaging
Imaging of all radio data was carried out with WSClean v. 2.10 <cit.> to account for wide-field, multi-frequency, and multi-scale synthesis. For both VLA and uGMRT, mosaicked observations were imaged separately and then properly combined following the procedure described in Sect. <ref> for uGMRT.
In the following, uncertainties on the reported radio flux densities S are computed as:
Δ S= √(( σ^2 · N_ beam) + ( ξ_ cal· S ) ^2) ,
where N_ beam is the number of independent beams within the considered region, and ξ_ cal is the calibration error. We assumed standard calibration errors of ξ_ cal=10% for LOFAR <cit.>, ξ_ cal=7%, 6%, 6%, 5%, 5% for GMRT at 234 MHz, 323 MHz, 407 MHz, 608 MHz, and 1250 MHz, respectively <cit.>, ξ_ cal=5% for VLA and MeerKAT in L-band <cit.>, and ξ_ cal=3% for VLA in S-band and C-band <cit.>.
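For convenience, the expression above translates into a one-line function (an illustrative sketch; the example numbers are arbitrary):

import numpy as np

def flux_density_error(S, sigma_rms, n_beams, xi_cal):
    # sigma_rms: image noise per beam; n_beams: independent beams in the region; xi_cal: calibration error
    return np.sqrt(sigma_rms**2 * n_beams + (xi_cal * S)**2)

flux_density_error(100.0, 0.2, 50, 0.10)    # e.g. a 100 mJy source -> ~10.1 mJy, dominated by the calibration term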
§.§ Chandra X-ray data
In <cit.> we analysed deep Chandra observations of A2142. These consist of 4 pointings of 187 ks in total that mainly cover the central regions of the clusters. In the present work, we considered 3 additional pointings covering the northern and north-eastern regions of A2142, in the direction of T2. These observations were carried out in 2014, in VFAINT mode, with both ACIS-I and ACIS-S CCDs, and were first presented in <cit.>. A summary of all Chandra data considered in this work is reported in Table <ref>.
As for the other pointings, we reprocessed the additional data by means of CIAO v. 4.13, with CALDB v. 4.9.4. After extracting light curves in source-free regions, soft proton flares were filtered out with the lc_clean algorithm, leaving a clean time of 169.1 ks. Overall, the 7 pointings provide a total clean time of 356 ks. In this work, we will use these data to investigate the local conditions of the ICM towards T1 and T2 with a combination of resolution and signal-to-noise ratio (S/N) that depends on the depth of the pointings covering the same regions. We refer to <cit.> for the description of background treatment for imaging and spectral analysis.
§ RESULTS
§.§ Radio morphology and sub-regions
Within the tailed radio galaxies T1 and T2 we define sub-regions that are discussed throughout this section and labelled in Fig. <ref>, along with optical overlays from the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS; <cit.>). Radio images of T1 and T2 are presented in Figs. <ref>, <ref> at different frequencies and resolutions (see details in Table <ref>).
The host of T1 is located at a projected distance of ∼ 270 kpc from the brightest cluster galaxy. Starting from the head, T1 extends from east towards west for a total projected length of ∼ 5.5' (at 143 MHz), corresponding to ∼ 550 kpc at the cluster redshift. The sub-region T1-A, which corresponds to the initial ∼ 200 kpc, is the brightest part of the source and is well imaged at all frequencies. In this region, the width of the tail is ∼ 35 kpc, thus suggesting that the whole structure (formed by the two jets merged into the tail) is highly collimated. For the subsequent ∼ 150 kpc (T1-B), the tail doubles its width up to ∼ 70 kpc and shows clear brightness fluctuations, which we will refer to as wiggles due to their oscillating pattern. We note the presence of a galaxy (RA_J2000 =239.5267, DEC_J2000 = 27.2742, z=0.08684) at the transition from T1-A to T1-B (Fig. <ref>). At the end of sub-region T1-B, a bright compact component is visible at higher frequencies (its location is indicated by the purple circle in Fig. <ref>), which is likely a compact radio source seen in projection. For the last ∼ 200 kpc (T1-C), the tail follows a straight path. Interestingly, additional emission (T1-D) is well detected at 143 MHz (and partly visible at 50 and 323 MHz in lower resolution images). A thin, ∼ 50 kpc long filament connects T1-C with an arc-shaped structure extending for ∼ 50 kpc and 200 kpc along east-west and north-south, respectively. If the arc were the termination of the tail, the total length of T1 would be ∼ 650 kpc (in lower resolution images, more emission from T1-D is recovered, reaching ∼ 700 kpc in total). We aim to shed light on this feature with a spectral analysis in Sect. <ref>.
The morphology of T2, which extends along the SE-NW axis for ∼ 400 kpc, is more peculiar than that of classical tailed galaxies. The optical counterpart of T2 is located at a projected distance of ∼ 600 kpc from the brightest cluster galaxy, and hosts a weak radio core (T2-D; see also Fig. <ref>, bottom right panel) that is resolved from the rest of the source at 608 and 1250 MHz only (this is likely due to a favourable combination of higher resolution and lower sensitivity to extended components compared with the other images). The core is the base of the first sub-region (T2-A), which has a light bulb shape of width ∼ 75 kpc and length ∼ 100 kpc. As highlighted by our high resolution (2.5”) 1250 MHz data, the light bulb ends with a sharp edge (see also Fig. <ref>). The global morphology of T2-A is presumably due to the backward bending of the jets by the ram pressure, but these are not resolved by any of our images. A second sub-region (T2-B) is defined by a choking, that is, the abrupt shrinking of the width of the tail and a drop in the radio surface brightness. T2-B extends for ∼ 150 kpc, has a fairly constant width of ∼ 75 kpc, and exhibits a single bright spot. The last sub-region (T2-C) is defined by the spread of the tail into a diffuse and filamentary plume of length ∼ 150 kpc, maximum width ∼ 200 kpc, and non-uniform brightness. The plume is detected in our images only at low frequencies (ν < 407 MHz), thus suggesting a very steep spectral index for this region. Interestingly, the western part of the plume is bent towards SW <cit.>.
§.§ Radio surface brightness fluctuations
To investigate the brightness fluctuations within each sub-region, we computed the surface brightness profiles of T1 and T2. In the left panels of Fig. <ref> we report the profiles (black data points) measured from the LOFAR HBA image (Figs. <ref>, <ref>) in boxes of width 10”. Each data point of T1 and T2 is normalised by the peak value within T1-A and T2-A, respectively. Analogously, in the right panels of Fig. <ref> we compare the normalised profiles at 50 (brown), 143 (yellow), 323 (green), 608 (light blue), and 1284 (magenta) MHz, as measured from images convolved at the same resolution of 14”. The surface brightness profiles are reflective of the sub-regions that we defined in Sect. <ref> by visual inspection, and reveal interesting features discussed below.
The absolute peak value of T1 is coincident with the radio core (first sampling box). In T1-A, the brightness at 143 MHz rapidly decreases (down to a factor ∼ 10) with the distance from the core. A discontinuity is visible at ∼ 100 kpc, where the tail first deviates from its straight path and is slightly compressed. In T1-B (the region of the wiggles), the brightness is enhanced instead of declining, but such growth becomes progressively shallower with the increasing frequency. In T1-C, the declining trend resumes, but we also report the presence of a moderate peak at ∼ 500 kpc at low-ν. The last peak in T1-D is associated with the arc, which is detected only at 50, 143, and 323 MHz.
The surface brightness profile of T2 is highly unusual for tailed galaxies. The core of T2 is weak (see also Sect. <ref>) and is not coincident with the absolute peak of emission. Overall, the normalised profile exhibits three peaks of similar relative amplitude, corresponding to the light bulb (T2-A), the main body of the tail (T2-B), and the plume (T2-C). The first peak is associated with the bright spot at the edge of T2-A. The choke that separates T2-A and T2-B is identified as a sharp discontinuity in our profiles at a distance of ∼ 100-150 kpc. Within T2-B, the surface brightness increases with distance up to ∼ 200 kpc. Finally, the emission of the plume produces the last peak, which becomes progressively shallower with increasing frequency.
§.§ Integrated radio spectra
To derive the integrated radio spectra of T1 and T2, we imaged all the datasets (except VLA data in A-array) with a common uv-range of 350λ-16 kλ. The chosen minimum baseline length provides more uniform uv-coverage of our data at short spacings. The obtained images were convolved at the same resolution of 14”. We report our flux density measurements in Table <ref>; the corresponding radio spectra are shown in Fig. <ref>, and the obtained spectral indices are summarised in Table <ref>.
We measured the total flux densities of T1 (excluding T1-D) in a box of size 6.0'× 1.3'
(blue box in Fig. <ref>) and fitted the data points with a single power-law. We found that a single power-law of slope α=0.87± 0.01 (blue line in Fig. <ref>) reproduces the integrated spectrum of T1 from 50 to 1810 MHz well. In addition, we obtained the radio spectrum of each sub-region by measuring the flux densities within boxes of size 2.0'× 1.0', 1.5'× 1.0', 2.0'× 1.0', and 1.0'× 2.0' for T1-A (red), T1-B (orange), T1-C (cyan), and T1-D (green), respectively. Fig. <ref> clearly shows that the total flux density of T1 is dominated by T1-A at all frequencies. For T1-A, data points can be described by a single power-law of slope α=0.68± 0.02. However, we find evidence of spectral breaks for T1-B and T1-C, as single power-laws cannot fit our measurements. We thus considered double power-laws, with a fixed break at 234 MHz, as this appears to be roughly the frequency where the spectrum steepens. The fitted spectral indices between 50 and 234 MHz (solid lines) and between 234 and 1284 MHz (dotted lines) are reported in Table <ref>. The arc is detected (above 3σ) only by LOFAR and GMRT at 323 MHz, therefore we did not attempt to fit two power-laws, but the poor single power-law fit (χ^2_ red=7.5) suggests the existence of a break for this sub-region as well.
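A minimal sketch of this fitting step, assuming a least-squares fit in log-log space with scipy and purely illustrative flux densities (not the measured values reported in the tables), could read:

import numpy as np
from scipy.optimize import curve_fit

LOG_NU_BREAK = np.log10(234.0)  # fixed break frequency in MHz

def single_pl(log_nu, log_s0, alpha):
    # single power-law, S ~ nu^-alpha, in log-log space
    return log_s0 - alpha * log_nu

def broken_pl(log_nu, log_sb, alpha_lo, alpha_hi):
    # continuous double power-law with the break fixed at 234 MHz
    return np.where(log_nu <= LOG_NU_BREAK,
                    log_sb - alpha_lo * (log_nu - LOG_NU_BREAK),
                    log_sb - alpha_hi * (log_nu - LOG_NU_BREAK))

nu = np.array([50., 143., 323., 608., 1284.])   # MHz
S = np.array([9.1, 4.0, 2.0, 1.2, 0.62])        # Jy, illustrative only
dS = 0.1 * S
sigma_log = dS / (S * np.log(10.0))

p1, _ = curve_fit(single_pl, np.log10(nu), np.log10(S), sigma=sigma_log)
p2, _ = curve_fit(broken_pl, np.log10(nu), np.log10(S), sigma=sigma_log)
print("single slope:", p1[1], " low/high slopes:", p2[1], p2[2])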
Similarly to T1, we obtained the total flux density of T2 in a box of size 3.0'× 4.5'
(blue box in Fig. <ref>), but a single power-law does not reproduce all our data points (χ^2_ red=5.9). The flux densities of T2-A (red), T2-B (orange), and T2-C (cyan) were measured in an ellipse of axes 0.8'× 1.1', a box of size 1.0'× 1.5', and an ellipse of axes 1.8'× 2.4', respectively, and then fitted with two power-laws having a fixed break at 407 MHz (see details in Table <ref>). Even though we notice that the low-ν spectrum of T2-B is not accurately fitted (χ^2_ red=6.5), likely due to contamination from the plume in overlapping areas, our analysis shows that the spectrum of T2 progressively steepens from the inner to the outer sub-regions. Moreover, we produced additional radio images (including VLA data in A-array) with specific combinations of uv-ranges and weighting schemes to maximise the resolution (<5”) of our images and derive the spectrum of the core of T2 (T2-D, green line). We used the imfit task in CASA to derive the peak value of a Gaussian fit to the core at each frequency. These values provide a fitted spectral index of α=0.69± 0.02 from 143 to 5500 MHz.
§.§ Resolved spectral properties
The analysis of the integrated radio spectra of T1 and T2 in Sect. <ref> has shown that the spectral index is not constant across our targets. Under the hypothesis of pure radiative losses, a gradual steepening of the spectral index is expected along the tail, whereas sudden spectral flattening may suggest re-energising mechanisms. To determine the spectral trend along the tails, we produced spectral index maps by combining sets of radio images (see details in Table <ref>) and setting a minimum flux density threshold of 5σ at each frequency. In Fig. <ref> we report the spectral index maps at 50-143 and 143-323 MHz at 30” (∼ 50 kpc), and at 143-323 MHz and 608-1284 MHz at 14” (∼ 25 kpc); the associated error maps are shown in Fig. <ref>. By measuring the spectral index within boxes of width equal to the beam size, we derived the corresponding spectral index profiles that are shown in Fig. <ref>. As typically observed along the lobes of FRI galaxies due to ageing of the emitting particles, on average both T1 and T2 exhibit a progressive steepening of the spectral index along the tail. We discuss the spectral trends within each sub-region in the following paragraphs.
Within T1-A, the spectral index ranges from α∼ 0.5 (in the core) up to α∼ 1.5, with steeper values for higher frequency pairs. The profiles steepen with the distance with approximately constant slopes (even though the trend is flatter at lower frequencies). Along T1-B and T1-C the spectra further steepen, reaching ultra-steep (α≳1.5) values of α∼ 2 and α∼ 2.5 between 143-323 MHz and 608-1284 MHz, respectively. In the lower-frequency regime (<323 MHz), the constant steepening trend with the distance is retained, whereas deviations in the form of flatter and steeper features are clearly visible at 608-1284 MHz both in T1-B and T1-C. Despite the different resolutions (14” and 30”), we notice that the trends of the 143-323 MHz maps are consistent. Beyond T1-C, emission is detected only below 323 MHz. In T1-D (the arc) the spectral index is α∼ 1.5 and α∼ 2.5 between 50-143 MHz and 143-323 MHz, respectively. The constant steepening of the spectral index is still preserved (with moderate deviations at 143-323 MHz), thus allowing us to confidently conclude that the arc is the oldest part of T1. As a consequence, the total projected length of T1 is ∼ 700 kpc.
In T2-A, the inner regions exhibit a spectral index that is flatter (α∼ 0.5) at 50-143 MHz than that at higher frequencies (α∼ 0.7). For both T2-A and T2-B, the spectral index steepens with the distance with roughly constant and smooth trends at all wavelengths. The spectrum of T2-B reaches values of α∼ 0.8, α∼ 1.1, and α∼ 1.5 at 50-143 MHz, 143-323 MHz, and 608-1284 MHz, respectively. Studying the profile of T2-C is not trivial due to the expansion of the tail into the plume and its western bending at low frequencies. Along our sampling boxes, the measured spectral indices are ultra-steep (α∼ 1.5-2.5) for all maps. The smooth trend is retained below 323 MHz, while a rapid steepening is observed at 608-1284 MHz.
§.§ Testing the radiative ageing
Relativistic particles are ejected from the core of the radio galaxy and then radiatively age along the tail due to synchrotron and inverse Compton losses. By assuming a constant injection rate, we expect to observe a progressive decline of the flux density and a steepening of the spectral index with the increasing distance. However, in Sects. <ref>, <ref> we found complex distributions of the surface brightness and spectral index within each sub-region of T1 and T2, which suggest either local deviations from a pure ageing scenario or a varying particle injection rate. In this section we aim to test ageing models in detail.
Ageing models depend on the form of the initial electron energy distribution (N(E)∝ E^-δ_ inj, where δ_ inj=2α_ inj+1 is the population injection index, and α_ inj is the spectral index at age t=0, based on the assumption of Fermi I acceleration mechanism) and the spatial distribution of the magnetic field (B(r)), which are not known a priori. Models assuming a single injection event are the Kardashev-Pacholczyk <cit.>, Jaffe-Perola <cit.>, and Tribble-Jaffe-Perola <cit.>. Both the KP and JP models assume a uniform magnetic field, but differ in terms of the treatment of the pitch angle θ_ p. In the KP model, a constant and isotropic θ_ p is assumed throughout the entire lifetime of the electrons; the JP model considers electron scattering, which leads to isotropic θ_ p on short-time scales only, and thus assumes a time-averaged pitch angle. As a consequence of the assumption on θ_ p, energetic electrons with small pitch angles can live indefinitely for the KP model if emitting synchrotron radiation only, whereas an exponential cut-off arises in the electron distribution at high energy for the JP model <cit.>. The TJP model is based on the JP model, but introduces Gaussian spatial fluctuations of the magnetic field around a central value (B_0). The high-energy cut-off is shallower in the TJP model than in the JP model, thus allowing particles to live longer in non-uniform magnetic fields.
In the following, we test the KP, JP, and TJP ageing models. We consider a value for the magnetic field that minimises the radiative losses and maximises the lifetime of the source, which is B_0=B_ CMB/√(3), where B_ CMB=3.25(1+z)^2 μ G is the equivalent magnetic field of the cosmic microwave background (CMB); this yields B_ 0∼ 2.2 μ G for both T1 and T2 at the cluster redshift. The injection spectral index was derived by means of the findinject task of the Broadband Radio Astronomy ToolS (BRATS[<https://www.askanastronomer.co.uk/brats/>]; ) software, which fits the radio spectrum of the target from multi-frequency radio images and outputs the value of α_ inj that minimises the distribution of χ^2 for the fitted model; we obtained α_ inj^ T1=0.51± 0.01 for T1 and α_ inj^ T2= 0.72±0.01 for T2, in agreement with the measured spectral index of the radio cores (see Sects. <ref>, <ref>), thus suggesting that their spectra are representative of the injection distributions.
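For reference, the minimum-loss field and a commonly used approximation of the radiative age as a function of the break frequency (with B and B_CMB in μG and ν_b in GHz) can be evaluated as in the sketch below; the ageing formula is the standard synchrotron expression and is quoted here only as an indicative cross-check, not as the exact recipe implemented in BRATS.

import numpy as np

def b_cmb(z):
    # equivalent CMB magnetic field in microgauss, B_CMB = 3.25 (1+z)^2
    return 3.25 * (1.0 + z)**2

def b_min_losses(z):
    # field that maximises the radiative lifetime, B_0 = B_CMB / sqrt(3)
    return b_cmb(z) / np.sqrt(3.0)

def radiative_age_myr(nu_break_ghz, z, b_muG):
    # standard synchrotron ageing formula (assumed here, B in microgauss,
    # nu_b in GHz): t ~ 1590 sqrt(B) / ((B^2 + B_CMB^2) sqrt(nu_b (1+z))) Myr
    return 1590.0 * np.sqrt(b_muG) / ((b_muG**2 + b_cmb(z)**2)
                                      * np.sqrt(nu_break_ghz * (1.0 + z)))

z_cluster = 0.0894
print(b_min_losses(z_cluster))   # ~2.2 microgauss, as quoted above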
Radio colour-colour diagrams (RCCDs; ) are diagnostic plots that probe the local shape of radio spectra, computed from two pairs of frequencies. This shape is independent of the magnetic field and of possible adiabatic expansion and compression (which can only shift the spectrum in frequency and affect the age), and can thus be used to test theoretical ageing models. In RCCDs, the one-to-one line represents a power-law spectrum with α=α_ inj, whereas data points lying below the one-to-one line (α>α_ inj) indicate particle ageing. By considering the spectral index maps and sampling boxes as in Sect. <ref>, we obtained the RCCDs that are shown in Fig. <ref>. We overlaid the theoretical ageing curves (dotted lines) as obtained from the kpdata, jpdata, and tribbledata tasks in BRATS under the assumptions discussed above.
For the low-resolution (30”) and low-ν (50-143-323 MHz) set of images of T1 (upper left panel in Fig. <ref>), both the JP and TJP models reproduce the emission of the source in each sub-region, whereas the KP model cannot describe data points beyond ∼ 400 kpc (T1-C and T1-D). With 14”-resolution (upper right panel), the observed 143-323-608-1284 MHz spectral distribution can be barely described by the models in T1-A (inner ∼ 100 kpc), and progressively increasing deviations are found in T1-B and T1-C. Although a steeper injection index (α_ inj∼ 0.6-0.8) would shift the ageing curves towards our measurements, models still fail to reproduce data points at large distance from the core. We will further discuss the results and implications of the RCCDs for T1 in Sects. <ref>, <ref>.
The bottom panels in Fig. <ref> report the complex distribution of the data points in the RCCD for T2. In the low-ν (50-143-323 MHz) set at 30” (left), data points align roughly parallel to the one-to-one line (except for an outlier associated with the plume). In the high-ν (143-323-608-1284 MHz) set at 14” (right), the KP, JP, and TJP models can approximately reproduce the data points associated with T2-A, but prominent deviations are found for T2-B and T2-C. In summary, none of the considered models can entirely describe the spectral behaviour of T2 (see further discussion in Sect. <ref>).
§.§ Local ICM conditions
Throughout this section we aim to search for direct evidence of interplay between radio emission and local ICM conditions. By means of XMM-Newton images in different energy bands, <cit.> produced the projected temperature map that we show in the upper left panel of Fig. <ref>, as it provides a useful overview of the ICM temperature over the whole cluster. Nevertheless, this map does not allow us to probe the peripheral regions with sufficient resolution. In this respect, following the same procedure described in <cit.>, we produced maps of projected thermodynamic quantities towards T1 and T2 by performing a spectral analysis with our Chandra data. CONTBIN[< https://github.com/jeremysanders/contbin>] <cit.> was used to bin the 0.5-2 keV exposure-corrected Chandra image in regions with a minimum S/N=40. We extracted the spectra of each region from the event and blank-sky files, subtracted the background from the ICM emission, and jointly fitted the resulting spectra in XSPEC <cit.> with an absorbed thermal plasma component (phabs × apec) by fixing the Galactic hydrogen column density in the direction of A2142 and the ICM metal abundance to values of N_ H=3.8×10^20 cm^-2 and Z=0.28 Z_⊙ <cit.>. The Cash statistic <cit.> was adopted for fitting.
Each spectrum provides values of temperature kT (in units of keV) and normalisation 𝒩 (in units of cm^-5), which is proportional to the square of the number density integrated over the volume. We derived the (projected) pressure as
p = kT ×( 𝒩/A)^1/2 [keV cm^-5/2 arcmin^-1] ,
where A is the area of each region (in units of arcmin^2). By propagating errors, uncertainties on p are computed as
Δ p= p √((Δ kT/k T)^2 + 1/4(Δ𝒩/𝒩)^2) .
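The two relations above translate directly into code; a minimal sketch (units as in the equations) is:

import numpy as np

def projected_pressure(kT_keV, norm, area_arcmin2):
    # p = kT * (N / A)^(1/2), in keV cm^-5/2 arcmin^-1
    return kT_keV * np.sqrt(norm / area_arcmin2)

def pressure_error(p, kT_keV, dkT_keV, norm, dnorm):
    # error propagation: dp = p * sqrt((dkT/kT)^2 + (1/4)(dN/N)^2)
    return p * np.sqrt((dkT_keV / kT_keV)**2 + 0.25 * (dnorm / norm)**2)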
In Fig. <ref> we report the temperature and pressure maps in the direction of T1 (top right panels) and T2 (middle panels), respectively, and the corresponding profiles (bottom panels) as measured from regions that are labelled with green and magenta circles. While the chosen high S/N ensures accurate spectral fitting, the extraction regions are relatively large and limit our analysis to minimum spatial scales of ∼ 30 kpc for T1 and ∼ 100 kpc for T2.
For T1, we found drops in temperature and pressure from regions 4 to 5, which correspond to the transition from T1-A to T1-B. We further probed these trends by extracting and fitting the X-ray surface brightness with a broken power-law by means of pyproffit[<https://github.com/domeckert/pyproffit>] <cit.>. As shown in Fig. <ref>, the profile is dominated by the presence of the prominent NW cold front, which we detect as a density jump of C=1.80± 0.14 (Cstat/dof=45/49), consistent with the value reported by <cit.>. Furthermore, we observe a peak at a distance of ∼ 1' (indicated by the red arrow in the profile), which is co-spatial with the transition from T1-A to T1-B. This feature is likely responsible for the observed kT and p drops and is suggestive of local ICM compression, but we were not able to constrain its nature through fitting procedures. While our analysis is inconclusive, it is unlikely that such compression is driven by a shock, as its passage along the tail would have left signatures in the spectral index distribution with a discontinuity between T1-A and T1-B that we do not observe (Fig. <ref>).
For T2, both kT and p profiles are roughly continuous within errors. Even though we notice a tentative (significance ≳ 1σ) drop in pressure between T2-A and T2-B, we cannot draw any solid conclusion on possible discontinuities from these plots. A complementary analysis (not shown) of the X-ray surface brightness profile in the direction of T2 with high spatial resolution (ranging from 2” to 12”) is consistent with continuous trends of the thermodynamic properties. We notice that modelling of the background <cit.> may provide more accurate spectral results than subtraction, especially for the cluster outskirts (as is the case for T2), but this is beyond the aim of the present work.
§ DISCUSSION
In the previous sections we analysed the morphological and spectral properties of T1 and T2. In the following, we discuss possible scenarios explaining the observed features.
§.§ On the discrepancy of RCCD analysis for T1
Through the RCCD analysis in Sect. <ref> we showed that the observed spectral distribution of T1 at low frequency (≲ 300 MHz) can be reproduced by the JP and TJP models under simple assumptions on the magnetic field (Fig. <ref>). Nevertheless, an inconsistent spectral trend is measured at higher frequencies. In RCCDs, data points are expected to follow the same spectral shape in a pure radiative ageing scenario, regardless of the considered frequency pairs. Therefore, the observed inconsistency requires a deeper understanding.
Offsets in the flux density scale and calibration artefacts can systematically shift the data points. The spectra of compact sources in the field do not reveal clear offsets in our datasets, and although images at 608 MHz show spurious emission around the tail (∼ 4σ significance, likely generated by self-calibration errors), this is not driving the observed spectral distribution in the RCCD. A combination of projection effects and mixing of particles with different energies within each sampling region can broaden the intrinsic spectral distribution, mimicking our observed trend. While this hypothesis is plausible, the discrepancy among RCCDs is preserved when using images produced with different weighting schemes and resolutions, changing the considered frequency pairs, and sampling with beam-size circular regions to reduce possible transverse mixing across the boxes. Physical phenomena, such as compression/expansion and re-energising, can also alter the spectral distribution in RCCDs. However, compression and expansion are frequency-independent processes, while standard re-energising mechanisms via shock and turbulence predominantly affect the low-energy particles emitting at low-frequency. In this respect, unknown exotic processes favouring the re-acceleration of high-energy against low-energy particles would be necessary to solve the RCCD discrepancy.
In summary, we did not identify any obvious systematic effects or physical conditions explaining the observed spectral discrepancy. The possible role of subtle calibration effects should be investigated through independent reprocessing and methods. Mixed (either intrinsic or projected) energy distributions are plausible, but difficult to disentangle further.
§.§ Dynamics of T1
In Sect. <ref> we discussed possible solutions to the RCCD discrepancy of T1. Throughout this section, we only consider the low frequency RCCD in Sect. <ref>, as we showed that the measured spectral trend is predicted by the JP and TJP ageing curves.
Following up on the results of the RCCD, we performed a pixel-by-pixel fitting of a TJP spectrum to our 50, 143, and 323 MHz images at 30”-resolution by means of BRATS, and obtained the radiative age map of T1 that is shown in the top panel of Fig. <ref>. The lower panel of Fig. <ref> reports the corresponding age profile. Fitted ages are in the range t∼ 50-150 Myr in T1-A, t∼ 150-200 Myr in T1-B, t∼ 200-300 Myr in T1-C, and t∼ 350 Myr in T1-D, with typical errors of ∼ 20 Myr.
We fitted the age profile data points with a linear relation, which provides the tangential velocity v_ sky (in the plane of the sky) under the assumptions that T1 is moving at a constant velocity, the radiative age coincides with the dynamical age (implying v_ sky∼ L/t), and there is no compression/expansion, re-energising, or bulk motion. We obtained a fitted velocity of v_ sky=2100 ± 87 km s^-1, which has to be considered as a lower limit because the value of B_ 0 that we used provides an upper limit to the radiative age and we ignored projection effects. The peculiar radial velocity (along the line of sight) of T1 is computed from the spectroscopic redshift of its host galaxy (z_ T1=0.0954) and that of A2142 (z_ A2142=0.0894) in terms of the speed of light c as <cit.>:
v_ los = c (z_ T1-z_ A2142/1+ z_ A2142) ∼ 1650 km s^-1 .
The inferred velocity components provide constraints on the deprojected dynamics of T1, as we derive a 3D velocity of v=√( v_ los^2 + v_ sky^2)=2670 km s^-1 and a viewing angle of i = arctan( v_ sky/ v_ los) =52^ o. For comparison, the radial velocity dispersion of A2142 is σ_ A2142 = 1193 km s^-1 <cit.>, meaning that v∼ 2.2 σ_ A2142, which is a reasonable value for head-tail galaxies.
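These numbers follow from a short calculation; a sketch reproducing them (with the fitted v_sky taken from above) is:

import numpy as np

C_KMS = 299792.458            # speed of light in km/s
z_t1, z_a2142 = 0.0954, 0.0894
v_sky = 2100.0                # fitted tangential velocity, km/s

v_los = C_KMS * (z_t1 - z_a2142) / (1.0 + z_a2142)   # ~1650 km/s
v_3d = np.hypot(v_sky, v_los)                        # ~2670 km/s
incl = np.degrees(np.arctan2(v_sky, v_los))          # ~52 degrees
print(v_los, v_3d, incl)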
§.§ Transitions along T1
In Sect. <ref> we demonstrated that a simple TJP ageing scenario with a constant velocity is a good and physically reasonable representation of the global emission of T1. In line with this scenario, the absence of spectral flattening (Sect. <ref>) rules out ongoing large-scale re-acceleration processes, and our analysis in Sect. <ref> does not conclusively support evidence of interaction with the ICM (Fig. <ref>). However, it is evident that local phenomena of unclear nature are taking place within its sub-regions. Indeed, we observed several major discontinuities in the surface brightness profile of T1 (Fig. <ref>), which we further emphasise in Fig. <ref>. In this section we briefly speculate on the possible origin of such discontinuities.
In T1-B, the tail shows a distinct set of wiggles, but they disappear as soon as the tail fades abruptly at the beginning of T1-C. A possible explanation for the wiggles is the precession of the jets before their downstream bending, but this scenario is unlikely. Indeed, we do not detect similar wiggles in T1-A, and such a scenario would thus require the jets to have suddenly stabilised at an epoch more recent than the age of the plasma in T1-B.
Plausibly, the wiggles result from the development of Kelvin-Helmholtz (KH) instabilities, which are caused by the velocity difference across the interface between the jets and surrounding medium. These may be triggered either in T1-A, which is unresolved in our images, or at the transition in jet properties at the beginning of T1-B. With reference to the latter possibility, we recall the tantalising presence of a galaxy visible in the transition from T1-A to T1-B (Fig. <ref>). We stress that the growth rate of KH instabilities depends on the magnitude of the velocity gradient and on the effective viscosity and magnetic fields in the interface. Therefore, for the development of such instabilities, the jets have to move fast enough through the ICM, and the displacement of the jets downstream from the nucleus should result from the combination of the motion of the host galaxy through the ICM and the jet flow <cit.>. As the jet decelerates and comes to rest in the ICM, the instabilities are not efficiently driven anymore, resulting in a turbulent tail which starts mixing with the ICM; this is a promising scenario for the plasma in T1-C and T1-D. Possibly, some local re-energising processes are occurring in T1-D, where the brightness increases and slight departures from the constant velocity scenario are found (Fig. <ref>). Higher resolution images of T1-A could further shed light on the onset of the instabilities.
§.§ Nuclear emission of T2
In this section we discuss the properties of the host and core of T2 at different wavelengths. These are important to probe the overall origin of the source, as outlined in Sect. <ref>.
The host of T2 is an elliptical galaxy with no optical emission lines (Sect. <ref>). It emits in the soft X-ray band, but the analysis of its spectrum (see details in Appendix <ref>) indicates that the observed emission comes from thermal gas. Therefore, the optical and X-ray spectra indicate that the accretion of the black hole is either currently off-state or radiatively inefficient.
Useful information is provided by the core prominence at radio frequencies, CP=S^ T2-D_1400/S^ T2_150= 2× 10^-4, defined as the ratio of the 1.4 GHz flux density of the core to the 150 MHz total flux density, and by the total 1.4 GHz radio power in logarithmic scale, log_10P_1400=23.9 W Hz^-1. These values are particularly low <cit.> and suggest that T2 is a remnant radio galaxy. Similar values were reported by <cit.> for a remnant radio galaxy with a tailed morphology, whereas, for comparison, we obtained CP= 4× 10^-2 and log_10P_1400=24.4 W Hz^-1 for T1, in line with active FRI galaxies.
§.§ Origin of T2
As highlighted in the previous sections, the global morphology of T2 is consistent with that of a HT galaxy, but it also exhibits unusual features. Moreover, its core prominence (Sect. <ref>) suggests that T2 is a remnant tailed galaxy. During its lifetime, the AGN may have experienced either a single or multiple outbursts, which we invoke to discuss possible scenarios explaining the radiatively old components of T2 (Sects. <ref>, <ref>) and some peculiar features.
In the context of a single AGN outburst, the core launched two radio jets in opposite directions, which were then bent by the ram pressure, and originated a tail as in classical HT galaxies. Afterwards, regions of the tail at different distances from the core passed through diverse ICM phases during the infalling towards the cluster centre. Such gas phases reshaped the sub-regions of T2 into the present T2-A and T2-B due to density and pressure gradients. This scenario is supported by the observed radial steepening of the spectral index (Fig. <ref>), in line with standard HT galaxies, but it relies on specific conditions of the ICM. Indeed, the relativistic plasma should be tremendously compressed at the location of the choke to explain the abrupt separation between T2-A and T2-B. Even though our projected thermodynamic maps (Fig. <ref>) do not provide solid conclusions owing to a combination of poor resolution and low ICM counts in the cluster outskirts, the existence of a thin layer where the thermal pressure is dramatically enhanced is disfavoured and appears unlikely.
A multiple AGN outburst scenario is more plausible and can be reconciled with the choke without invoking thermal pressure exerted by particular ICM layers. We can assume that a first AGN outburst was triggered when the host of T2 was far from its present position (beyond T2-B in projection). During the infall, the radio galaxy developed a tail, which we observe today as T2-B, and the core switched off at the location of the present choke. In proximity of its present location, the galaxy experienced a second AGN outburst, which would be responsible for the formation of T2-A. This scenario naturally explains the double-peaked surface brightness profile of T2 (Fig. <ref>) and the choke, and is in line with the complex spectral distribution in the RCCDs that deviates from a single injection event[The complexity of the observed spectral distribution would require ad-hoc ageing modelling that we did not attempt in this work. A useful starting point might be the KGJP <cit.> model, as it assumes a continuous injection of fresh electrons for a certain period, followed by a passive ageing.]. As a first approximation, the integrated spectra of T2-A and T2-B (Fig. <ref>) suggest similar break frequencies that yield consistent radiative ages for a uniform magnetic field. This might be indicative of a short period between the two phases, possibly triggered by ram pressure itself <cit.>, but firm conclusions cannot be drawn (see e.g. for a discussion on complex spectra).
In the context of the two scenarios described above, T2-A may represent the superposition of the two radio lobes in projection. The remnant scenario indicated by the core prominence analysis is supported by the non-detection of radio jets, which suggests that they are switched off on large scales (at least down to ∼ 4 kpc). However, a completely different interpretation is also viable. Indeed, the conical morphology of the light bulb and the sharp transition at its edge (Fig. <ref>) are reminiscent of the structure of FRI-type jets reported in <cit.> (see e.g. the case of M84 in their Fig. 3), which results from their deceleration on kiloparsec-scales. In other words, T2-A itself could potentially be a radio jet caught in its initial deceleration phase through the ambient medium. We notice that the sharp edge of T2-A suggests that the source approximately lies in the plane of the sky, which would imply that it has a one-sided jet. This is unlikely; a possibility is therefore that a combination of the deceleration of the two jets and their downstream bending by ram pressure during the infall is ongoing. In summary, the nature of T2-A remains unconfirmed, and higher-resolution radio data towards the nuclear regions could be helpful for further investigation.
Regardless of a single or double AGN outburst, the plume is interpreted as the oldest part of the tail. The aged relativistic plasma progressively expanded, diffused, mixed with the thermal ICM, and originated T2-C. Our high-resolution spectral index maps (Fig. <ref>) show flatter (but still ultra-steep) patches that deviate from the expected radial steepening of the tail. The morphology and spectral behaviour of T2-C may be indicative of some kind of interplay between thermal and non-thermal components and/or trace substructures of the magnetic field.
§ SUMMARY AND CONCLUSIONS
In this work we reported on the study of two tailed radio galaxies, T1 and T2, in the galaxy cluster A2142. These targets show interesting morphological features that are suggestive of a complex dynamics and interaction with the thermal ICM. By means of LOFAR, uGMRT, VLA, and MeerKAT radio data, we provided a detailed spectral analysis of T1 and T2. Auxiliary Chandra X-ray observations were used to investigate the local conditions of the ICM in the direction of the targets. In this section we summarise our results and discuss future prospects.
T1 (Fig. <ref>) is a long HT galaxy extending for ∼ 700 kpc and exhibiting clear surface brightness fluctuations and discontinuities that define four sub-regions (Figs. <ref>, <ref>). A single power-law of slope α=0.87± 0.01 can fit the flux density measurements of T1 from 50 to 1810 MHz, but spectral breaks are found within its sub-regions (Fig. <ref>). The overall spectral index profile steepens with the increasing distance from the core (Fig. <ref>). In the low-frequency (50-143-323 MHz) regime that we considered, standard ageing models (JP, TJP) can well reproduce the observed spectral behaviour of T1 (Fig. <ref>). Under simple assumptions on the magnetic field, we produced a radiative age map (Fig. <ref>), which we used to constrain the tangential velocity of the target (Sect. <ref>). We computed a lower limit on the 3D velocity of the galaxy of v>2670 km s^-1, which is a factor of ∼2 higher than the radial velocity dispersion of the cluster. Although we showed that a pure radiative ageing scenario and a constant velocity are reasonable approximations, each sub-region shows signs of a complex phenomenology. Indeed, we detected spots, labelled as wiggles due to their oscillating pattern, where the surface brightness is locally enhanced (Fig. <ref>), sharp transitions along the tail, and fossil emission visible only at the lowest frequencies. All these features might be connected with the development and evolution of KH instabilities along the tail at different distances from the core. Moreover, we found that the spectral shape at high frequencies is surprisingly inconsistent with that at low frequencies (Fig. <ref>), hinting at subtle calibration effects, mixing of older and younger emitting electrons, or peculiar physical conditions (Sect. <ref>).
T2 (Fig. <ref>) is a HT galaxy extending for ∼ 400 kpc and featuring morphological properties that are unusual for tailed sources. The whole structure of T2 can be decomposed into three sub-regions (Fig. <ref>) identified by distinct peaks in the surface brightness profile (Fig. <ref>). The radio core is not coincident with one of these peaks, and our spectral analysis suggests that T2 is a remnant radio galaxy. While we observe a spectral steepening along the tail that is in line with classical HT galaxies (Fig. <ref>), none of the standard (single injection) ageing models can reproduce the spectral measurements (Fig. <ref>), and non-trivial ad-hoc modelling would be required. The most intriguing feature of T2 is the choke, which is a sharp depletion of radio emission that separates the sub-regions T2-A and T2-B. We discussed possible scenarios to explain such feature, which involve either a single or double AGN outburst event during the infall towards the cluster centre (Sect. <ref>). The simplest scenario that we proposed assumes two AGN outbursts that produced the tailed morphology of T2-B and the light bulb structure of T2-A, respectively. The X-ray analysis of the ICM disfavours exceptional thermal compression at the location of the choke (Sect. <ref>), but deeper data in this direction are necessary to definitely confirm our findings.
In conclusion, our work further highlights the increasing complexity in the phenomenology of tailed galaxies that has been shown by recent studies with the advent of deep and high-fidelity images from low (∼ 100 MHz) to high (∼ 1 GHz) frequencies. These works are providing valuable constraints on physical parameters, but theoretical modelling and numerical simulations are also necessary to shed light on the complex mechanisms that shape the properties of radio galaxies in clusters, such as re-energising processes, hydrodynamical instabilities, plasma mixing, magnetic field structure, and multiple AGN outbursts. As a follow-up work, we suggest the exploitation of LOFAR HBA data acquired with the international stations, which are available in the archive as part of the observations (with the core and remote stations) analysed here. These sensitive data could provide an insightful view of the sub-regions of T1 and T2 down to spatial scales of ∼ 500 pc, possibly providing information on the onset of instabilities along T1 and the presence of jets and the choke in T2. Additionally, polarisation studies at gigahertz frequencies with MeerKAT are ongoing and will be used to constrain the structure of the magnetic field across T1 and T2.
§.§ Pressure equilibrium
Radio lobes that are under-pressured (p_ nt<p_ t) with respect to the surrounding medium are expected to collapse, whereas over-pressured lobes (p_ nt>p_ t) should expand. It was shown that low-power radio galaxies, such as FRIs, do not supply sufficient non-thermal energy to balance the thermal pressure of the ICM <cit.>. In this section we discuss the pressure balance along T1 and T2.
The non-thermal pressure is given by the total energy budget of CRe, CRp, and magnetic field within the lobes, but the relative contribution of these components is unknown. To compute p_ nt, it is routinely assumed that the lobes are in a condition of minimum energy, meaning that the energy densities of particles and magnetic field are roughly equal. Under this assumption, the total minimum energy density can be computed as <cit.>:
u_ min∼ξ(α_ inj, ν_1, ν_2) (1+k/ϕ)^4/7 (ν/ MHz)^(4/7)α_ inj (1+z)^12/7+(4/7)α_ inj (I_ν/ mJy arcsec^-2)^4/7 (d/ kpc)^-4/7 [erg cm^-3] ,
where ξ (see Table 1 in ) depends on the injection spectral index and the integration frequency limits for the particle spectrum, k is the energy ratio of CRp to CRe, ϕ is the volume fraction of the source occupied by non-thermal components, and d is the depth of the source. In the condition of minimum energy, the corresponding minimum (equipartition) magnetic field is
B_ eq = ( 24π/7)^1/2( u_ min/ erg cm^-3)^1/2 [G] ,
which yields a total non-thermal pressure of p_ nt∼ 0.6 × u_ min.
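A sketch of this calculation (with ξ to be taken from the tabulated values of the cited reference, and all other inputs passed explicitly) is:

import numpy as np

def u_min(xi, k_ratio, phi, nu_mhz, z, alpha_inj, I_mjy_arcsec2, d_kpc):
    # minimum (equipartition) energy density in erg cm^-3, as in the
    # equation above; xi must be taken from the cited tabulation
    return (xi * (1.0 + k_ratio / phi)**(4.0 / 7.0)
            * nu_mhz**(4.0 / 7.0 * alpha_inj)
            * (1.0 + z)**(12.0 / 7.0 + 4.0 / 7.0 * alpha_inj)
            * I_mjy_arcsec2**(4.0 / 7.0) * d_kpc**(-4.0 / 7.0))

def b_eq(umin):
    # equipartition magnetic field in G
    return np.sqrt(24.0 * np.pi / 7.0 * umin)

def p_nt(umin):
    # total non-thermal pressure, p_nt ~ 0.6 u_min
    return 0.6 * umin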
Our high-quality, multi-frequency radio data, as well as literature information from X-ray analysis, allow us to investigate the non-thermal to thermal pressure ratio along T1 and T2. We computed the non-thermal pressure profile based on the radio surface brightness of T1-A and T2-A (with a threshold of 3σ) from our 2.5”-resolution uGMRT image at 1250 MHz, where the apparent width of the two sub-regions is roughly constant, thus providing an upper limit to their minimum energy density. As basic assumptions on the unknown parameters, we considered: i) a cylindrical geometry of depth equal to the projected minimum length, ii) a volume filling factor of ϕ=1, and iii) a CRp to CRe ratio of k=1; these parameters and the resulting average equipartition field are summarised in Table <ref>. The deprojected azimuthally-averaged thermal pressure profile was retrieved from the public data products of the XMM Cluster Outskirts Project <cit.>. We notice that the X-ray surface brightness profiles in the specific directions of T1 and T2 are consistent (within a factor of ≲ 1.5) with the averaged profile of the whole cluster, therefore considering the azimuthally-averaged pressure profile does not significantly affect our results. In our analysis, we ignored the non-thermal pressure from ICM turbulence, which can provide support to the thermal pressure, but its contribution was found to be ∼ 10% of the total pressure at most <cit.>.
In Fig. <ref> we compare the profiles of p_ nt (red) and p_ t (blue) along T1-A and T2-A. Both sub-regions appear to be prominently under-pressured (by factors ∼ 50-100) with respect to the ICM, in line with findings for usual FRIs. This would imply a collapse, which is not observed. Therefore, either the minimum energy condition or the considered parameters (d, k, ϕ) are not reasonable assumptions, or alternative explanations need to be taken into account. Under the hypotheses of minimum energy, plausible 3D geometry, and no significant projection effects, the pressure equilibrium can be approximately reached if (1+k/ϕ)∼ 2000-2500 (where we expect a dominant contribution from k rather than ϕ). Such a ratio implies a magnetic field strength of B_ eq∼ 20-30 μ G, which is a factor ∼ 10 higher than typically measured values, and would significantly shorten the lifetime of the jets. An obvious explanation is the role of projection: if the deprojected distance of T1 and T2 were much larger than the projected distance, the thermal pressure would decrease, thus reducing the discrepancy between p_ nt and p_ t. To reach pressure balance, T1 and T2 should be located at a deprojected distance of r∼ 2 Mpc and r∼ 3 Mpc from the centre, respectively. This hypothesis could be tested by means of polarisation data[The analysis of MeerKAT data in polarisation is ongoing and will be presented in a dedicated work.], but such large distances may disfavour the formation of long tails due to a lower ram pressure. Furthermore, it is worth noticing that the p_ nt-to-p_ t ratio is similar for T1 and T2, although these are significantly different in terms of projected distance and assumed volume. Such similarity may suggest that projection is not a major issue for the pressure balance in our targets. A plausible scenario to preserve the equilibrium in FRIs is the dominant additional contribution of thermal pressure within the lobes, which could be provided by ICM particles entrained by the jets and lobes during their motion <cit.>. Likely, all these effects (higher k, higher r, entrainment) contribute to the pressure equilibrium, but their relative impact remains unclear.
We thank the referee for their comments and suggestions. M.B. acknowledges support from the agreement ASI-INAF n. 2017-14-H.O and from the PRIN MIUR 2017PH3WAT 'Blackout'. A.I. acknowledges the European Research Council (ERC) programme (grant agreement No. 833824, PI B. Poggianti), and the INAF founding program 'Ricerca Fondamentale 2022' (project 'Exploring the physics of ram pressure stripping in galaxy clusters with Chandra and LOFAR', PI A. Ignesti). C.J.R. acknowledges financial support from the ERC Starting Grant ‘DRANOEL’, number 714245. M.R., F.G., and G.B. acknowledge support from INAF mainstream project 'Galaxy Clusters Science with LOFAR'. A.B. acknowledges financial support from the European Union - Next Generation EU. R.J.vW. acknowledges support from the ERC Starting Grant ClusterWeb 804208. D.V.L acknowledges support of the Department of Atomic Energy, Government of India, under project No. 12-R&D-TFR-5.02-0700. The research leading to these results has received funding from the European Unions Horizon 2020 Programme under the AHEAD project (grant agreement No. 654215).
LOFAR <cit.> is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, which are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefited from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Université d’Orléans, France; BMBF, MIWF- NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland; The Istituto Nazionale di Astrofisica (INAF), Italy. This research made use of the Dutch national e-infrastructure with support of the SURF Cooperative (e-infra 180169) and the LOFAR e-infra group. The Jülich LOFAR Long Term Archive and the German LOFAR network are both coordinated and operated by the Jülich Supercomputing Centre (JSC), and computing resources on the supercomputer JUWELS at JSC were provided by the Gauss Centre for Supercomputing e.V. (grant CHTB00) through the John von Neumann Institute for Computing (NIC). This research made use of the University of Hertfordshire high-performance computing facility and the LOFAR-UK computing facility located at the University of Hertfordshire and supported by STFC [ST/P000096/1], and of the Italian LOFAR IT computing infrastructure supported and operated by INAF, and by the Physics Department of Turin University (under an agreement with Consorzio Interuniversitario per la Fisica Spaziale) at the C3S Supercomputing Centre, Italy. This research made use of the HOTCAT cluster <cit.> at Osservatorio Astronomico di Trieste. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We thank the staff of the GMRT that made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation. The scientific results reported in this article are based on observations made by the Chandra X-ray Observatory data obtained from the Chandra Data Archive. This research has made use of SAOImageDS9, developed by Smithsonian Astrophysical Observatory <cit.>. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg Astronomical Observatory, France (DOI: 10.26093/cds/vizier). This research made use of APLpy, an open-source plotting package for Python <cit.>, Astropy, a community-developed core Python package for Astronomy <cit.>, Matplotlib <cit.>, Numpy <cit.>.
§ SPECTRAL INDEX ERROR MAPS
In Fig. <ref> we report the error maps associated with the spectral index maps shown in Fig. <ref>. Errors are computed from the standard error propagation formula as
Δα = |1/ln( ν_ 1/ν_ 2) |√(( Δ S_ 1/S_ 1)^2 + ( Δ S_ 2/S_ 2)^2 ) ,
where Δ S is obtained as in Eq. <ref>.
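In practice, the spectral index and error maps are obtained pixel by pixel from pairs of images; a minimal sketch (with the S ∝ ν^-α convention used throughout) is:

import numpy as np

def spectral_index(S1, S2, nu1, nu2):
    # alpha with the S ~ nu^-alpha convention (works on arrays/maps)
    return np.log(S1 / S2) / np.log(nu2 / nu1)

def spectral_index_error(S1, dS1, S2, dS2, nu1, nu2):
    # Eq. above: dalpha = |1/ln(nu1/nu2)| sqrt((dS1/S1)^2 + (dS2/S2)^2)
    return (np.sqrt((dS1 / S1)**2 + (dS2 / S2)**2)
            / np.abs(np.log(nu1 / nu2)))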
§ X-RAY SPECTRUM OF T2 HOST
The host of T2 emits in the X-ray band (see Fig. <ref>). In this section we analyse its Chandra X-ray spectrum, which is shown in Fig. <ref>.
We extracted the spectrum of the background from a circular annulus of width 25” centred on the target. The spectrum of the source was extracted from a circular region of radius 5” (8.5 kpc), chosen as the one that maximises the S/N, defined as:
S/N = ( N_ cnt-f_ areaN_ bkg )/√(N_ cnt+f_ areaN_ bkg) ,
where N_ cnt is the total (target plus background) count number within the circle, N_ bkg is the background count number within the annulus, and f_ area is the ratio of target to background extraction region areas. We extracted the corresponding background and source spectra from each pointing covering the region of T2, and we jointly fitted the background-subtracted spectra with an absorbed thermal component (phabs × apec). The Galactic hydrogen column density, redshift, and gas metallicity were kept fixed to the values considered for A2142 as in Sect. <ref>. The thermal component is sufficient to reproduce the spectrum of the target, as no hard X-ray (>2 keV) emission, which is distinctive of AGN, is detected. An additional power-law component (phabs × (apec+po)) modelling possible soft X-ray emission from AGN does not improve the fit (significance below 1σ) and is thus rejected.
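The choice of the extraction radius can be sketched as a simple maximisation of the S/N defined above over a set of candidate radii (the counts-versus-radius inputs are assumed to be available from the images):

import numpy as np

def signal_to_noise(n_cnt, n_bkg, f_area):
    # S/N of the equation above for a candidate extraction region
    return (n_cnt - f_area * n_bkg) / np.sqrt(n_cnt + f_area * n_bkg)

def best_radius(radii, n_cnt_per_radius, n_bkg, f_area_per_radius):
    # pick the radius that maximises the S/N (inputs as arrays)
    snr = signal_to_noise(np.asarray(n_cnt_per_radius), n_bkg,
                          np.asarray(f_area_per_radius))
    return radii[int(np.argmax(snr))]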
The best-fit temperature and density (the latter computed from the apec normalisation) are kT=1.0± 0.1 keV and n_ e=(1.7± 0.4)× 10^-3 cm^-3. These values are consistent with those of the hot ionised medium in elliptical galaxies <cit.> and suggest that the observed X-ray emission comes from thermal gas rather than AGN activity.
|
http://arxiv.org/abs/2409.03366v1 | 20240905091402 | Free convection in fractured porous media: a numerical study | ["Arash Andrea Roknian", "Anna Scotti", "Alessio Fumagalli"] | math.NA | ["math.NA", "cs.NA"] |
Free convection in fractured porous media: a numerical study
Arash Andrea Roknian^1 Anna Scotti^2 Alessio Fumagalli 3
MOX - Dipartimento di Matematica “F. Brioschi”, Politecnico di
Milano, via Bonardi 9, 20133 Milan, Italy
^1 [email protected]
^2 [email protected]
^3 [email protected]
September 9, 2024
===============================================================================================================================================================================================================================================
§ ABSTRACT
The objective of this study is to better understand the influence of fractures
on the possibility of free convection in porous media. To this aim, we introduce a mathematical model for density driven flow in the presence of fractures, and the corresponding numerical approximation. In addition to the direct numerical solution of the problem we propose and implement a novel method for the assessment of convective stability through the eigenvalue analysis of the linearized numerical problem. The new method is shown to be in agreement with existing literature cases both in simple and complex fracture configurations. With respect to direct simulation in time, the results of the eigenvalue method lack information about the strength of convection and the steady state solution, they however provide detailed (quantitative) information about the behaviour of the solution near the initial equilibrium condition. Furthermore, not having to solve a time-dependent problem makes the method computationally very efficient. Finally, the question of how the porous matrix interacts with the fracture network to enable free convection is examined: the porous matrix is shown to be of key importance in enabling
convection for complex fracture networks, making stability criteria based on the fracture network alone somewhat limited in applicability.
§ INTRODUCTION
The study of flow in porous media finds application in many different areas ranging from industrial, to biomedical and environmental applications.
In the context of geological porous media many relevant problems such as geothermal engineering and contaminant transport are nowadays addressed with the help of numerical simulations, due to the necessity of estimating flow rates and pathways.
In many cases, due to either temperature variations or the presence of solutes, fluids can exhibit density variations in space and time.
Temperature gradients are particularly important in geothermal applications,
while solute concentration gradients are found for instance in the study of contaminant plume migration or seawater intrusion phenomena.
In situations of inverse density gradients (denser fluid on top) the fluid may spontaneously develop unstable convective plumes.
It is particularly important to predict the possible onset of such instabilities, given the high rate at which convection can transport solutes compared to diffusion alone.
The first studies dealing with free convection in porous media <cit.> examined simple scenarios of inverse density gradients. The possibility of convection is linked to the Rayleigh number, which, accounting for both fluid and porous medium parameters, quantifies the strength of convection against that of diffusion.
For large values of the Rayleigh number, there is the possibility of free convection.
In recent years, different authors have tried to understand how the presence of fractures influences the possibility and strength of free convection <cit.>.
While high-density fracture networks can be mostly treated through averaging/upscaling of porous media properties, low-density fracture networks can manifest unexpectedly high convection strength due to the particular geometry of the network.
Even for simple geometries, e.g. for regular horizontal or vertical fracture grids, the unstable nature of convective plumes makes prediction very hard if not impossible <cit.>.
The main focus of this study is the development of a new method based on the eigenvalue analysis of equilibrium solutions for assessing the possibility of convection in fractured porous media. Leveraging this new method will enable us to better understand the particular mechanisms by which fractures enhance convection. This idea is at the core of the results provided by <cit.>, where the analysis is carried out analytically for homogeneous media, and in <cit.> for layered porous media. Here, given the geometrical complexity of the domain, eigenvalues will be computed by a suitable numerical algorithm starting from the discretized problem.
We start in section <ref> with a detailed description of the mathematical model both in homogeneous and fractured media.
As the focus of this study is the effect of fracture geometry, simple constitutive models are preferred over more elaborate (and more accurate) models.
The Darcy law is used as constitutive law for filtration, Fick's law is used to model solute diffusion and density is modeled as a linear function of solute concentration.
Particular care is given to the averaging procedure used for the dimensional reduction of fractures, consistent with the mixed-dimensional approach presented in <cit.>.
In section <ref> we present the discretization of the continuous model using the finite volume method in space and the implicit Euler scheme in time.
The particular finite volume method used in the implementation is the Multipoint Flux Approximation method <cit.> for both the fluid mass conservation and the solute transport problems.
The MPFA method is particularly appropriate due to its mass conservation properties and consistency for general grids.
Again, density dependence must be treated with care for the numerical implementation to be consistent <cit.>.
Two methods will be described for assessing the possibility of convective cell formation.
Section <ref> describes what we call the direct method of assessing stability: starting from a non-equilibrium initial condition and integrating the time-dependent flow equations until steady state may result in either convective motion or a diffusive equilibrium solution. The two regimes are easily distinguished by measuring the amount of solute transport through the domain.
Section <ref> describes the eigenvalue method for assessing stability. By linearizing the problem and numerically studying the eigenvalues of the resulting discrete system, we can assess the stability of arbitrary perturbations to any equilibrium solution without the need of solving the time-dependent problem.
Sections <ref> and <ref> test the two methods respectively with the Elder and HRL problems relying on the results of <cit.> and <cit.>: both are well known benchmark cases for density driven flow.
Section <ref> validates and examines a three-dimensional generalization of the HRL problem, based on the studies in <cit.>.
In the HRL scenarios, the complementarity of the two approaches described above will enable us to better understand and examine the peculiarities of free convection in the presence of fractures.
The variety of scenarios is also useful for understanding the advantages and limitations of the eigenvalue method with respect to the direct method. A comparison of their computational cost is presented in section <ref>.
Finally, section <ref> will conclude this study with a critical evaluation of what it has accomplished and suggest different options for further developments.
§ MATHEMATICAL MODEL
The problem of our interest stems from the coupling between solute transport and density driven flow in a porous medium.
The mathematical model is based on the one described in the review paper by Diersch and Kolditz <cit.>, which will be here extended to account for the presence of fractures in the domain, since our main goal is to understand the impact of fracture networks on the onset of free convection.
Let us begin by considering a generic advection-diffusion equation expressing solute mass conservation in a porous medium:
∂_t (ρϕω) + ∇·(ρϕωu )+∇·(ρϕi )=0 ,
where ρ [M/L_v^3] is the density of the fluid in which the solute is dissolved, ω [M_s/M] is the concentration of the solute, u [L/T] is the flow velocity, ρϕi [M_s/L^2 T] is the solute mass flux due to diffusion and ϕ [L_v^3/L^3] is the porosity of the medium.
The subscript v in the length dimension L is used to differentiate void volumes from total volumes, while s is used to differentiate solute mass from fluid mass.
For the diffusive flux a Fick-type law is used: i= -D ∇ω .
The diffusion tensor D [L^2/T] is usually modeled as the sum of two components:
a part related to molecular diffusion and a part related to mechanical dispersion (function of the fluid velocity): D= D_d (u)+ D_m I.
In this study only the part related to molecular diffusion has been considered, thus
i=-D_m ∇ω .
In the following, it will be useful to indicate the combined advective and diffusive solute flux as
q [M_s/L^2 T]:
q = ωu + i .
The boundary conditions for equation (<ref>) can be
of Dirichlet type where we set the value of the solute concentration ω = ω_D,
or of Neumann type where the total mass flux in the normal direction is specified q·n = q_N, where n is the unit normal (by convention pointing outwards from the domain).
To close the system we still need an expression for the fluid velocity and a constitutive equation for density as a function of primary variables.
The first can be determined as the solution of the Darcy problem, which is the model used to describe filtration in porous media. In particular we have a mass conservation law for the fluid:
∂_t (ρϕ)+∇·(ρϕu )=0,
and a constitutive law classically used in the context of porous media, the Darcy law:
u = k/ϕμ (- ∇ p+ρg) .
where g=-ge_z is the gravity acceleration vector, k is the permeability tensor of the porous medium k [L_v^2], and μ [M/L_vT] is the viscosity of the fluid.
The boundary condition for (<ref>) can be
of Dirichlet type where the pressure is specified: p = p_D
or of Neumann type where the normal flow velocity is prescribed, u·n = u_N.
For the density constitutive law, we will assume a linear dependence on the concentration ω
ρ = ρ(p, ω) = ρ_0 (1 + αω) ,
where ρ_0 is a reference value and α a dilation coefficient.
A large part of the following discussion applies unchanged to flows driven by temperature gradients instead of solute concentration gradients.
Indeed, apart from a reinterpretation of some of the quantities introduced above e.g. thermal conductivity instead of diffusivity,
the same model can be applied.
§.§ Oberbeck-Boussinesq approximation
The analysis and the solution of system (<ref>) -(<ref>) can be substantially simplified by assuming the solution satisfies what is known as the Oberbeck-Boussinesq (OB) approximation.
The OB approximation consists in neglecting all but the most important among the different density related nonlinearities
in the system.
In particular we will take ρ = ρ_0 (1 + αω) ≈ρ_0 everywhere except in the gravity term.
Using this approximation, and after some algebraic simplifications the system reduces to
{ ∇·u = 0 in Ω,
∂_t ω + ∇·q = 0 in Ω,
u= k/ϕμ (- ∇ p_e + ρ_0 αωg) in Ω,
q= - D ∇ω + ωu in Ω,
.
with boundary conditions
{ u·n = u_N on ∂Ω_N^p,
p = p_D on ∂Ω_D^p,
. { q·n = q_N on ∂Ω_N^ω,
ω = ω_D on ∂Ω_D^ω,
.
where ∂Ω = ∂Ω_N^p ∪∂Ω_D^p, ∂Ω = ∂Ω_N^ω∪∂Ω_D^ω.
Among the simplifications to reach the reduced system, we have replaced the unknown p with the excess pressure
p_e by removing the hydrostatic component p_h= ρ_0 g (y_0 - y) (defined for the reference density, in the absence of solute):
p = p_e + p_h = p_e + ρ_0 g (y_0 - y) ,
-∇ p + ρg = -∇ p_e - ρ_0 g +ρ_0 g + ρ_0 αωg
For readability, in what follows we will drop the subscript e and use the variable p to indicate the excess pressure p_e.
§.§ Horton-Rogers-Lapwood (HRL) problem
The HRL problem is a simple scenario aiming to study the possible onset and strength of natural convection.
The idea is to impose, through the boundary conditions, a layer of heavier fluid overlaid on lighter fluid in a two-dimensional vertical cross section of a homogeneous porous medium.
The density gradient can be caused by temperature difference (as in the original description <cit.>), or by solute concentration difference.
In a situation of inverse mass gradient, the fluid will form convective cells only if diffusivity is small enough to allow it.
The contrast between convection and diffusion speed, respectively v_g and v_d, is described by the Rayleigh number
Ra = v_g/v_d = [k/(ϕμ)] ρ_0 αω_max g / (D/H) ,
where H is the height of the domain, and other quantities have been introduced in the previous section.
In the original presentation of this problem <cit.>, the possibility of convective motion is linked to the value of Ra: Ra_c = 4 π^2 is the critical Rayleigh number, i.e. a threshold such that, for Ra > Ra_c, convection occurs.
Why is free convection only possible for such values of the Rayleigh number?
For high enough diffusion, convective cells cannot sustain the concentration/temperature difference and thus they simply decay.
Figure <ref> illustrates how the diffusive flow tries to restore concentration imbalances caused by the convective motion.
Note that convective motion in general enhances solute transport.
For this reason, an indicator of the presence of convection is the Sherwood number Sh, defined as
Sh = (∫_A i·n) / (∫_A i_0 ·n) = (∫_A i·n) / [(D ω_max/H) A] ,
where A is the diffusive inflow surface (top boundary) and i_0 is the diffusive flux in the absence of convection. When convection is not present inside the domain, i = i_0 and thus Sh = 1. Note that, the boundary conditions for the HRL problem being of zero fluid mass flow all around, the solute transport on the inflow surface A is entirely due to diffusion, regardless of whether convection is present inside the domain. However, when convection is present, the (overall) larger concentration gradient at A makes i > i_0 and Sh > 1.
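To make these two indicators concrete, the following sketch (in Python/NumPy, with purely illustrative parameter values rather than those used later in this paper) evaluates Ra from the material parameters and Sh from a discrete concentration field on a uniform grid; the function name and grid layout are our own assumptions, not part of the reference implementation.

import numpy as np

# Illustrative parameters (placeholders, not the values used in this paper)
k, phi, mu = 1e-14, 0.1, 1e-3        # permeability [m^2], porosity [-], viscosity [Pa s]
rho0, alpha, omega_max = 1000.0, 0.2, 1.0
g, D, H = 9.81, 1e-9, 10.0           # gravity [m/s^2], diffusivity [m^2/s], height [m]

# Rayleigh number: ratio of convective to diffusive velocity scales
v_g = k / (phi * mu) * rho0 * alpha * omega_max * g
v_d = D / H
Ra = v_g / v_d
print(f"Ra = {Ra:.3g}  (critical Ra_c = {4 * np.pi**2:.3g})")

def sherwood(omega, H, L, D, omega_max):
    """Sh from a concentration field omega[j, i] on a uniform grid,
    j = 0..ny-1 spanning y = 0..H (top row j = ny-1 is the inflow boundary)."""
    ny, nx = omega.shape
    dy, dx = H / (ny - 1), L / (nx - 1)
    # one-sided estimate of d(omega)/dy at the top boundary; the sign
    # convention cancels in the ratio with the purely diffusive flux
    dody_top = (omega[-1, :] - omega[-2, :]) / dy
    inflow = np.trapz(D * dody_top, dx=dx)
    inflow_diffusive = D * omega_max / H * L
    return inflow / inflow_diffusive

# Sanity check: the purely diffusive (linear) profile gives Sh = 1
y = np.linspace(0.0, H, 51)
omega_lin = omega_max * y[:, None] / H * np.ones((1, 101))
print(sherwood(omega_lin, H=H, L=20.0, D=D, omega_max=omega_max))  # ~1.0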
§.§ Dimensional reduction
Porous media often present heterogeneities in their material properties.
A particularly strong kind of heterogeneity are fractures: regions of different material properties, with negligible aperture (thickness) with respect to both their length and the characteristic lengths of the medium.
Very different material properties in fractures, such as permeability, can compensate for their small dimensions so that, overall, fractures can strongly influence the behaviour of flow in the medium. Let us define
k = k(x) = { k_b x∈Ω_b,
k_f x∈Ω_f,
.
where Ω_f denotes the fracture, Ω_b the bulk medium and Ω = Ω_b ∪Ω_f.
The approach known as discrete fracture modeling aims to solve this problem by treating fractures as separate domains, often of reduced dimensionality, coupled to the bulk medium by coupling conditions at their interface.
We will now rewrite the fluid mass conservation equation and associated Darcy law as a mixed-dimensional equation.
To keep our derivation simple, we will treat a single two-dimensional fracture as illustrated in figure <ref>. The domain Ω is split between bulk medium and fracture, Ω = Ω_b ∪ Ω_f ∪ Γ, where Γ = ∂Ω_b ∩ ∂Ω_f denotes the interface between the subdomains. We assume that Ω_f can be expressed as
Ω_f = {x∈Ω:x=γ + αn, γ∈Γ^0, α∈ ( -b/2, b/2 ) } ,
Γ^0 = {x∈Ω:x=x_0 + s_1(x_1 - x_0) + s_2(x_2 - x_0), s_i ∈ (0,1) } ,
where we have assumed a planar midsurface Γ^0 as shown in figure <ref>. The lateral boundary of Ω_f will be treated differently from the top and bottom boundaries. We define
Γ^± = {x∈∂Ω_f : x = γ±b/2n, γ∈Γ^0 } ,
Σ = {x∈∂Ω_f : x = σ + αn, σ∈∂Γ^0, α∈ ( -b/2, b/2 ) } ,
such that Γ = Γ^±∪Σ.
Superscripts + and - will also be used to denote quantities evaluated on Γ^±.
Starting from the continuity equation and the associated Darcy law in the two domains, we enforce pressure continuity and conservation of mass across the interface Γ:
{ ∇·u_b = 0 in Ω_b ,
u_b = k_b/(ϕμ) (- ∇ p_b + ρ_0 αω_b g) in Ω_b ,
∇·u_f = 0 in Ω_f ,
u_f = k_f/(ϕμ) (- ∇ p_f + ρ_0 αω_f g) in Ω_f ,
p_b = p_f on Γ ,
u_b ·n_b = u_f ·n_b on Γ ,
.
where n_b is the normal vector defined on Γ, exiting from Ω_b.
At the end of dimensional reduction, the equations solved on the three-dimensional domain Ω_f will be replaced by the solution of (different) equations on the two-dimensional domain Γ^0.
For the sake of simplicity, we will suppose that fractures differ from the bulk only in their permeability: all the other material parameters will be common to both.
We proceed by integrating the mass conservation equation in the fracture across the aperture.
0 = ∫_-b/2^b/2 ∇·u_f = ∫_-b/2^b/2 ∇·(Tu_f + Nu_f)
= ∇_τ·u_γ - (u_b ·n_b)^+ - (u_b ·n_b)^- ,
where we have denoted by T, N the projection operators specific to the fracture:
Nv = (v·n) n and
Tv = (I - N) v, where n is the normal vector to the fracture plane (as indicated in figure <ref>), and I is the identity operator.
We also denoted the in-plane gradient operator by ∇_τ = T∇, and by u_γ the integral of the tangential flow field:
u_γ = ∫_-b/2^b/2Tu_f, [L^2/T].
Indeed, given these definitions,
∫_-b/2^b/2 ∇·(Tu_f) =
∫_-b/2^b/2 ∇_τ·(Tu_f) =
∇_τ·∫_-b/2^b/2 Tu_f =
∇_τ·u_γ.
To obtain a law for u_γ, we integrate the in-plane component of the Darcy law:
u_γ
= ∫_-b/2^b/2Tu_f
= k_f/ϕμ∫_-b/2^b/2T (- ∇ p_f + ρ_0 αω_f g)
= b k_f/ϕμ (-∇_τ p_γ + ρ_0 αω_γg_τ) ,
where p_γ = 1/b∫_-b/2^b/2 p_f, ω_γ = 1/b∫_-b/2^b/2ω_f and g_τ = Tg. Differently from u_γ, scalar quantities in the fracture are averaged across the fractures thus maintaining the same dimensions as the corresponding bulk quantities.
Coupling conditions at the interface must also be expressed in terms of the averaged variables:
(u_b ·n_b)^± ≈ k_f/(ϕμ) [ (p_b^± - p_γ)/(b/2) + ρ_0 αω_b^± g_n ] .
We conclude by replacing Ω_f with its center plane Γ^0 and extending Ω_b: Ω = Ω_b ∪Ω_f ∪Γ≈ (Ω∖Γ^0) ∪Γ^0 ∪Γ^±. Even though the fracture domain Ω_f and the interface Γ have collapsed on one another geometrically, the two play distinct roles and must be kept conceptually separate.
Collecting the last steps, we can write the Darcy part of the mixed-dimensional system:
{ ∇·u_b = 0 in Ω_b ,
u_b = k_b/(ϕμ) (- ∇ p_b + ρ_0 αω_b g) in Ω_b ,
∇_τ·u_γ = [λ] in Γ^0 ,
u_γ = b k_f/(ϕμ) (-∇_τ p_γ + ρ_0 αω_γ g_τ) in Γ^0 ,
u_b ·n_b = λ on Γ^± ,
λ = k_f/(ϕμ) [ -(p_γ - p_b)/(b/2) + ρ_0 αω_b g_n ] on Γ^± .
.
In writing (<ref>), we have introduced the new variable λ [L/T] and the jump operator [v] = v^+ + v^-.
For v defined on Γ^±, we will consider [v] to be defined on Γ^0.
Note that Σ collapsed onto a lower-dimensional object (the boundary of the center plane ∂Γ^0) onto which we will have to impose boundary conditions. We will ignore its contribution to the mass exchange between fracture and bulk by setting zero normal flux (note that this flux scales linearly with the fracture aperture b) on immersed fracture boundaries (or tips) whereas fractures will inherit boundary conditions from the bulk if they touch the boundary.
Note that there are two sources of modeling error associated with dimensional reduction: (i) by collapsing the thin dimension of the fracture we have reassigned part of the domain which previously belonged to the fracture to the bulk medium, and (ii) the flux λ exchanged between the bulk medium and the fracture is a first-order approximation of the true flux, due to the presence of the normal gradient of the pressure ∇ p.
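As a small illustration of how the reduced coupling enters a code, the function below evaluates the interface flux λ on one side of the fracture from the adjacent bulk pressure and concentration and the averaged fracture pressure; it follows the coupling condition written above (in our reconstruction of it), and all names are ours, not those of any existing library.

def interface_flux(p_bulk, p_frac, omega_bulk, b, k_f, phi, mu, rho0, alpha, g_n):
    """Normal Darcy flux (lambda) from the bulk into the fracture on one side:
    two-point approximation of the normal pressure gradient over half the
    aperture b/2, plus the buoyancy contribution along the fracture normal."""
    return k_f / (phi * mu) * ((p_bulk - p_frac) / (b / 2.0)
                               + rho0 * alpha * omega_bulk * g_n)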
§.§ Mixed-dimensional transport
We can now re-apply the reduction procedure to the transport problem. We start by splitting the equation in the two domains and prescribing compatibility conditions at the interface
{ ∂_t ω_b + ∇·q_b = 0
in Ω_b ,
q_b= - D ∇ω_b + ω_b u_b
in Ω_b ,
∂_t ω_f + ∇·q_f = 0
in Ω_f ,
q_f= - D ∇ω_f + ω_f u_f in Ω_f ,
ω_b = ω_f
on Γ ,
q_b ·n_b = q_f·n_b
on Γ .
.
We integrate the conservation equation over the fracture, splitting the fluxes into their normal and tangential parts:
0
= ∫_-b/2^b/2 (∂_t ω_f + ∇·q_f)
= b ∂_t ω_γ + ∇_τ·q_γ - (q_b ·n_b)^+ - (q_b ·n_b)^- .
As we did for the fluid mass conservation equation, we introduced fracture quantities ω_γ = 1/b∫_-b/2^b/2ω_f and the integrated tangential solute mass flux q_γ = ∫_-b/2^b/2Tq_f. As done before, scalar quantities are averaged while vector quantities are integrated.
Unlike the continuity equation, a new approximation is needed in the averaging step to deal with the nonlinearity of the advective term: let us consider a splitting of ω_f into ω_γ + ω̃_f, where ω_γ is constant across the fracture and ω̃_f is a null average fluctuation, and similarly for u_f. We will neglect the product of fluctuations assuming that they are small, i.e.
∫_-b/2^b/2 (ω_f u_f)
≈ω_γu_γ.
For fractures of non-negligible aperture, however, internal motions (see e.g. intrafracture mode 2A in figure <ref>) could make this approximation inappropriate.
Just as for Darcy's law, Fick's law will look identical when projected in the fracture plane, while for the normal projection, we will have to resort to a first order approximation
(q_b ·n_b)^± = D (ω_b^± - ω_γ)/(b/2) + ω_b^± λ^± .
Finally, we obtain a mixed-dimensional system for the transport of a solute in a fractured porous medium,
{ ∂_t ω_b + ∇·q_b = 0
in Ω_b ,
q_b = -D ∇ω_b + ω_b u_b
in Ω_b ,
b ∂_t ω_γ + ∇_τ·q_γ = [θ]
in Γ^0 ,
q_γ = -b D ∇_τω_γ + ω_γu_γ in Γ^0 ,
q_b ·n_b = θ on Γ^±,
θ = D (ω_b - ω_γ)/(b/2) + ω_b λ on Γ^± ,
.
to be complemented with boundary and initial conditions.
It is important to notice that the equations in the bulk and the equations in the fracture, both in (<ref>) and (<ref>), are entirely decoupled apart from their interaction through the interface variables λ and θ.
The way these variables appear in the equations reveals how the domains are coupled across different dimensions: in the higher-dimensional domain interface variables appear as Neumann boundary conditions, while in the lower-dimensional fracture they appear as sources inside the domain.
Although the averaging procedure has been carried out for a three-dimensional domain with a single fracture,
the exact same procedure works for a generic n-dimensional domain (n = 1, 2, 3) with multiple, possibly
intersecting fractures. In this case, the procedure is conceptually carried out hierarchically: n-dimensional
quantities are coupled to (n-1)-dimensional quantities through (n-1)-dimensional interface fluxes (figure <ref>).
In view of dealing with the general case of multiple fractures of different dimensions, we want to write equations which hold for each dimension.
Apart from making the system of equations more compact, it will also make the mechanism by which different domains interact more clear.
We collect all domains of equal dimension d ∈{0, 1, 2, 3} in a single domain Ω_d, and denote by Γ_d the interface between domains Ω_d and Ω_d+1. We also introduce mixed-dimensional variables ω_d, p_d, fluxes u_d, i_d and in-plane gradient operator ∇_d defined on Ω_d, interface fluxes λ_d, θ_d defined on Γ_d.
With these quantities at hand, we can generalize systems (<ref>, <ref>) to
{ ∇_d ·u_d = [λ_d] in Ω_d ,
b^3-d ∂_t ω_d + ∇_d ·q_d = [θ_d] in Ω_d ,
u_d = b^3-d k_d/(ϕμ) (-∇_d p_d + ρ_0 αω_d g_d^τ) in Ω_d ,
q_d = - b^3-d D ∇_d ω_d + ω_d u_d in Ω_d ,
u_d ·n_d = λ_d-1 on Γ_d-1 ,
q_d ·n_d = θ_d-1 on Γ_d-1 ,
λ_d = b^2-d k_d/(ϕμ) [ (p_d+1 - p_d)/(b/2) + ρ_0 αω_d+1 g_d^n ] on Γ_d ,
θ_d = b^2-d D (ω_d+1 - ω_d)/(b/2) + ω_d+1 λ_d on Γ_d .
.
with suitable boundary conditions.
§.§ The impact of fractures on convection onset
The original discussion of the HRL problem addressed the question of the possible onset of convection.
This was expressed as a critical Rayleigh number _c, such that convective motion is possible for ≥_c.
In recent years, different studies have tried to understand in what way fractures influence the possibility of convection.
It is clear that highly permeable fractures constitute preferential paths for flow, thus enabling convection or enhancing its strength.
As shown in <cit.>, for large fracture density, calculating an average Rayleigh number based on the average (upscaled) permeability (neglecting the specific fracture configuration) can be effective at predicting the onset and strength of convection.
For lower fracture densities however, this approach is not adequate.
As shown in figure <ref>, a continuous fracture loop barely modifying the permeability, such that the average Rayleigh number is well below the critical Rayleigh number for homogeneous media, still exhibits convective motion.
In <cit.>, the key factor for enabling convection in the case of low-density fracture configurations is shown to be the presence of continuous fracture circuits, around which convection cells can form.
Simple scenarios (also used as validation cases in this work) tried to relate the location, aspect ratio and size of fracture circuits to the possibility and strength of convection.
A more systematic study remains to be done, with the aim of uncovering better quantitative relations that can be extended to more complex fracture configurations. Moreover, we have to consider that in applicative scenarios the particular geometry of the underground fracture network is mostly unknown, or in the best cases described by statistical parameters such as fracture density, mean lengths and orientations.
In these cases, being able to relate statistical parameters such as the ones mentioned to a quantitative estimate on the strength of convection and its uncertainty would be both useful from an applicative standpoint and interesting in its own right.
Let us consider the diffusion of a solute in a domain cut by a horizontal fracture.
The continuous equidimensional problem (<ref>) admits the linear concentration profile solution for the boundary conditions prescribed by the HRL problem.
ω_j = ω_max y/H , i_j = -D ∇ω_j = -(D ω_max/H) e_y ,
for j ∈ { b, f }.
With dimensional reduction, the system of equations to be solved is replaced by (<ref>), which yields a piecewise linear concentration profile:
ω_b =
{ (ω_max - δω) y/H for y < H/2 ,
(ω_max - δω) y/H + δω for y > H/2 ,
.
ω_f = ω_max/2 ,
δω = ω_max/(1 + H/b) ,
b being the fracture aperture. Fractures which qualify as thin enough to be treated as lower-dimensional regions will always satisfy b ≪ H,
thus making the error in the concentration profile small: δω≪ω_max. This small model reduction error however can manifest itself in the form of (small) artificial fluxes around fracture tips: the concentration gradient that arises from matching the two solutions creates a circulating diffusive flux, as illustrated in figure <ref> on the left. The exact same reasoning can then be applied for the fluid mass conservation equation: small artificial fluid mass fluxes may appear around fracture tips due to the discontinuity of pressure across the fracture (see figure <ref> on the right).
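A quick numerical check of the size of this reduction error (with illustrative values of H and b, chosen by us) confirms that the concentration jump is negligible for any realistically thin fracture:

omega_max, H = 1.0, 10.0                   # illustrative values
for b in (1e-2, 1e-3, 1e-4):               # fracture aperture [m]
    delta = omega_max / (1.0 + H / b)
    print(f"b = {b:.0e} m:  delta_omega / omega_max = {delta / omega_max:.1e}")
# even for b = 1 cm the jump is only about 0.1% of omega_max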
§ NUMERICAL DISCRETIZATION
This section is dedicated to the discretization of system (<ref>).
The method chosen for the spatial discretization of the system is the finite volume method.
In the fractured problem, we have a sequence of domains Ω_d, d ∈{0,1,2,3}.
We start by defining a mesh on each of them.
While the different meshes in principle can be completely independent, the mathematical formulation is more easily expressed using conforming meshes: lower dimensional meshes are implicitly defined by their higher dimensional neighbour.
The sequence of meshes for each of the domains Ω_d will be denoted by 𝒯_d, the set of edges of 𝒯_d by ℰ_d.
Even though the interfaces Γ_d are co-located with the lower dimensional domains Ω_d, we keep the two separate by defining interface meshes 𝒢_d.
In what follows, K will denote a generic element of mesh 𝒯_d, γ a generic element of the interface mesh 𝒢_d and σ a generic face of ℰ_d.
We will also indicate the normal pointing inside the fracture at element γ as n_γ.
The set ℰ_d, or equivalently the faces of any element K, can be partitioned into three sets:
(i) internal or belonging to the Dirichlet boundary,
(ii) belonging to the Neumann boundary,
(iii) adjacent to a lower-dimensional domain.
Notice that we get two different partitions based on which boundary conditions we use to split the boundary (boundary conditions related to the flow problem or to the transport problem):
∂ K = ∂ K_i^p ∪∂ K_N^p ∪∂ K_f = ∂ K_i^ω∪∂ K_N^ω∪∂ K_f.
Also, thanks to the conforming mesh hypothesis, faces in ∂ K_f can be identified with elements of the interface mesh 𝒢_d, thus enabling us to legitimately write integrals such as ∫_γλ_d where γ∈∂ K_f.
With the notation in place, we can start integrating the conservation equations in system (<ref>) over a generic element K ∈𝒯_d:
∫_K ∇_d ·u_d x^d = ∫_K [λ] x^d ,
∫_K b^3-d∂ω_d/∂ t x^d + ∫_K ∇_d ·q_d x^d = ∫_K [θ] x^d .
Working on the integrals one by one we have
∫_K ∇_d ·u_d x^d
= ∫_∂K u_d ·n x^d-1
= ∑_∂K_i ∫_σ u_d ·n x^d-1
+ ∑_∂K_f ∫_σ λ_d-1 x^d-1
+ ∑_∂K_N^p ∫_σ u_N x^d-1 ,
∫_K [λ_d] x^d = ∫_K^± λ_d x^d ,
∫_K b^3-d _t ω_d x^d = b^3-d d/dt ∫_K ω_d x^d ,
∫_K ∇_d ·q_d x^d
= ∫_∂K q_d ·n x^d-1
= ∑_∂K_i ∫_σ q_d ·n x^d-1
+ ∑_∂K_f ∫_σ θ_d-1 x^d-1
+ ∑_∂K_N^ω ∫_σ q_N x^d-1 ,
∫_K [θ_d] x^d = ∫_K^± θ_d x^d .
Now introducing the discrete variables and fluxes
P_K = 1/|K|∫_K p_d x^d ,
U_Kσ≈∫_σu_d ·n_K x^d-1 ,
Λ_K^± = ∫_K^±λ_d x^d ,
W_K = 1/|K|∫_K ω_d x^d ,
Q_Kσ≈∫_σq_d ·n_K x^d-1 ,
Θ_K^± = ∫_K^±θ_d x^d .
we can write the discrete version of the conservation equations in (<ref>):
∑_∂ K_i U_Kσ
+ ∑_∂ K_fΛ_γ
+ ∑_∂ K_N^p U_N,σ = Λ_K^± ,
b^3-d |K| d/dt W_K
+ ∑_∂ K_i Q_Kσ
+ ∑_∂ K_fΘ_γ
+ ∑_∂ K_N^ω Q_N,σ
= Θ_K^± .
The last step is the discretization of the constitutive laws for the fluxes.
Integrating the constitutive laws gives
∫_σu_d ·n x^d-1
= b^3-d(
∫_σ -∂ p_d/∂ n x^d-1
+ ρ_0 α g·n_σ∫_σω_d x^d-1) ,
∫_γλ_d x^d
= b^2-d∫_γ( p_d+1 - p_d/b/2 + ρ_0 αω_d+1 g·n_γ) x^d ,
∫_σq_d ·n x^d-1
= b^3-d(
D ∫_σ -∂ω_d/∂ n x^d-1
+ ∫_σω_d u_d·n x^d-1) ,
∫_γθ_d x^d
= ∫_γ( b^2-d D ω_d+1 - ω_d/b/2 + ω_d λ_d ) x^d .
We rewrite each of these laws in terms of the discrete variables defined in (<ref>):
U_Kσ = b^3-d |σ| ( ∇ P_σ + ρ_0 α g·n_σ W_σ) ,
Λ_K^± = b^2-d |K| ( P_K^± - P_K/b/2 + ρ_0 α g·n_K^± W_K^±) ,
Q_Kσ = b^3-d |σ| D ∇ W_σ + W_σ U_Kσ ,
Θ_K^± = b^2-d |K| D W_K^± - W_K/b/2 + W_K^±Λ_K^± ,
where the quantities ∇ϕ_σ and ϕ_σ, i.e. gradients and face values, depend on the particular finite volume scheme. We choose the Multipoint Flux Approximation scheme described in <cit.>, which computes the gradient on a face by considering values of all cells sharing a node with the face.
As for all finite volume schemes, fluid mass and solute mass conservation is guaranteed.
In contrast to a two-point scheme (TPFA) however, the MPFA scheme is consistent on general grids.
Note that we make use of a centered scheme for the advective term in the concentration equation. This choice, unlike upwind, is known to produce numerical oscillations for convection dominated flows.
In all our numerical experiments however, convection is mild enough for our centered scheme to be numerically stable.
We can easily relate the already introduced Rayleigh number to the Peclet number:
𝑃𝑒 = u h/D = (u H/D)(h/H) = Ra (h/H)
where h is a characteristic cell diameter, H is a characteristic length of the domain.
In all the following numerical experiments, sufficiently fine grids will be computationally feasible for 𝑃𝑒 to be O(1).
The 𝑃𝑒 number is nonetheless numerically monitored in all the following simulations.
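Using the relation 𝑃𝑒 = Ra h/H written above (as reconstructed here), the grid requirement translates into an explicit bound on the cell size; the helper below is a trivial illustration with assumed numbers, not values taken from our simulations.

def max_cell_size(Ra, H, Pe_target=1.0):
    """Largest cell diameter h keeping the grid Peclet number Pe = Ra*h/H
    below Pe_target."""
    return Pe_target * H / Ra

print(max_cell_size(Ra=400.0, H=150.0))   # ~0.4 m for an Elder-like setup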
§.§ Time discretization and direct solution method
We denote as "direct solution method" the integration of the model equations forward in time starting from a zero-solute or equilibrium initial condition, to assess the possible onset of convection.
Once a steady state has been reached, we must verify whether convective motion is present.
We will use the implicit Euler method for advancing in time for its unconditional stability. An adaptive time-stepping is also used to reduce the amount of computation necessary to reach steady state, see section <ref> for details.
To discretize in time, define a set of timesteps
{t^n}_n=0… N
and write our discretized system as:
∑_∂ K_i U_Kσ^n+1
+ ∑_∂ K_fΛ_γ^n+1
+ ∑_∂ K_N^p U_N,σ^n+1 = Λ_K^±^n+1 ,
b^3-d |K| (W_K^n+1 - W_K^n)/Δ t^n
+ ∑_∂ K_i Q_Kσ^n+1
+ ∑_∂ K_fΘ_γ^n+1
+ ∑_∂ K_N^ω Q_N,σ^n+1
= Θ_K^±^n+1,
where K∈𝒯_d, d∈{0,1,2,3}, Δ t^n = t^n+1-t^n, n∈{0, …, N-1}.
If the timestep is long enough and the solution stops changing (according to the norm of the difference between two timesteps of the solution), we declare the system to have reached steady state.
Note that since the system of equations is nonlinear, each timestep requires the solution of a nonlinear problem.
In our case we use Newton iterations by leveraging the automatic differentiation capabilities of the implementation.
In the test cases, the tolerance for the Newton procedure is set to 1e-8 for the concentration increment.
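The time-marching strategy just described can be summarized by the following schematic loop (implicit Euler, Newton iterations, and timestep adaptation driven by the Newton iteration count and the size of the solution increment). Here `residual` and `jacobian` stand in for the discrete nonlinear operator and its automatic-differentiation Jacobian, and the adaptation factors are arbitrary illustrative choices rather than the ones used in our runs.

import numpy as np

def advance_to_steady_state(w0, residual, jacobian, dt0=1.0, dt_max=1e6,
                            newton_tol=1e-8, max_newton=10,
                            steady_tol=1e-10, max_steps=10000):
    """Implicit-Euler march to steady state with simple timestep adaptation.

    residual(w_new, w_old, dt) -> residual vector of the discrete equations
    jacobian(w_new, w_old, dt) -> Jacobian matrix w.r.t. w_new
    """
    w, dt = w0.copy(), dt0
    for _ in range(max_steps):
        w_new = w.copy()
        for it in range(max_newton):
            r = residual(w_new, w, dt)
            dw = np.linalg.solve(jacobian(w_new, w, dt), -r)
            w_new += dw
            if np.linalg.norm(dw) < newton_tol:
                break
        else:
            dt *= 0.5                      # Newton stalled: retry a smaller step
            continue
        increment = np.linalg.norm(w_new - w)
        w = w_new
        if it <= 2:
            dt = min(2.0 * dt, dt_max)     # easy convergence: grow the step
        if dt >= dt_max and increment < steady_tol:
            break                          # long step, no change: steady state
    return w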
§.§ Eigenvalue analysis
The previously outlined method, while effective at predicting the onset of convection, has some shortcomings.
Among the disadvantages are its reliance on the choice of perturbation of the hydrostatic solution (if not starting from zero solute everywhere) and the need to reach, and assess the steadiness of, the solution.
The computational cost of reaching the steady state may become an issue: advancing a nonlinear equation in time with an implicit scheme requires the solution of multiple linear systems for each timestep advancement.
An alternative method, presented in detail below, relies on inspecting the nonlinear system of equations linearized around the equilibrium solution.
The nonlinear discrete system (<ref>) and (<ref>) can be written abstractly as
M d W/d t + F(W, Y)=0 ,
G(W, Y)=0 ,
where W ∈ℝ^n collects the degrees of freedom relative to the discrete variable W, and Y ∈ℝ^N-n collects the degrees of freedom relative to the discrete variables P, Λ and Θ.
F: ℝ^n ×ℝ^N-n→ℝ^n and G: ℝ^n ×ℝ^N-n→ℝ^N-n collect the linear and nonlinear discrete operators in (<ref>).
Any discrete equilibrium solution (W_s, Y_s) will satisfy the system
F(W_s, Y_s)=0 ,
G(W_s, Y_s)=0 .
To assess whether the equilibrium solution is also asymptotically stable we perturb the time-dependent system:
M (d W_s/d t + d δ W/d t )+ F(W_s+ δ W, Y_s + δ Y)=0 ,
G(W_s+ δ W, Y_s + δ Y)=0 ,
and linearize, taking advantage of the fact that δ W, δ Y are small perturbations; using that (W_s, Y_s) satisfies (<ref>), so that F(W_s, Y_s) = 0, G(W_s, Y_s) = 0 and M d W_s/d t = 0, we obtain
M d δ W/d t + ∂ F/∂ W|_W_s, Y_s δ W + ∂ F/∂ Y|_W_s, Y_s δ Y = 0 ,
∂ G/∂ W|_W_s, Y_s δ W + ∂ G/∂ Y|_W_s, Y_s δ Y = 0 .
Renaming the partial derivatives to ease the notation
A_ww = ∂F/∂W |_W_s, Y_s , A_wy = ∂F/∂Y |_W_s, Y_s ,
A_yw = ∂G/∂W |_W_s, Y_s , A_yy = ∂G/∂Y |_W_s, Y_s ,
the system becomes
M ∂δW/∂t +
A_ww δW + A_wy δY = 0 ,
A_yw δW + A_yy δY = 0 .
Now, relying on the invertibility of A_yy , we can eliminate δ Y to obtain a single evolution equation for the perturbation δ W:
M ∂δ W/∂ t = (A_wy A_yy^-1 A_yw - A_ww) δ W.
Substituting an exponential in time trial solution δ W(t) = w e^λ t in the evolution equation yields an eigenvalue problem:
λ M w = (A_wy A_yy^-1 A_yw - A_ww) w,
or, by defining the matrix S= M^-1 (A_wy A_yy^-1 A_yw - A_ww),
S w = λ w .
Note that λ describes the evolution of the perturbation in time, while the vector w describes its shape in space, since each component represents the value in a grid cell.
The equilibrium solution W_s is then linearly stable if and only if all the eigenvalues associated to system (<ref>) have negative real part. Conversely, if we can find one or more eigenvalues with positive real part the perturbation w will grow and be sustained in time.
Three computational considerations are in order: (i) to assess stability there is no need to compute the entire spectrum; it is enough to compute the eigenvalue of largest real part; (ii) automatic differentiation of the discrete system (<ref>) can yield the numerical values of the matrices A_ww, A_wy, A_yw, A_yy without having to write explicit expressions for them; (iii) iterative methods for the computation of eigenvalue spectra are available (e.g. power iterations) which do not require an explicit expression for the matrix under study, only the ability to compute matrix-vector products. In our case, this removes the need to explicitly invert the matrix A_yy. Furthermore, since A_yy only depends on the equilibrium solution, we can factorize it only once, using e.g. an LU decomposition, for fast matrix-vector products during the computation of the spectrum.
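These considerations translate almost directly into standard sparse linear algebra. The sketch below is our own generic version, using an off-the-shelf Arnoldi-type solver in place of the custom Krylov-Schur implementation described in the next subsection, and assuming that the blocks A_ww, A_wy, A_yw, A_yy and the mass matrix M are available as sparse matrices.

import numpy as np
import scipy.sparse.linalg as spla

def leading_eigenpairs(M, A_ww, A_wy, A_yw, A_yy, k=5):
    """Eigenpairs of largest real part of S = M^{-1}(A_wy A_yy^{-1} A_yw - A_ww),
    without ever forming the (dense) matrix S explicitly."""
    lu_yy = spla.splu(A_yy.tocsc())        # factorize once, reuse in every product
    lu_M = spla.splu(M.tocsc())
    n = A_ww.shape[0]

    def S_matvec(v):
        t = A_wy @ lu_yy.solve(A_yw @ v)   # A_wy A_yy^{-1} A_yw v
        return lu_M.solve(t - A_ww @ v)    # apply M^{-1}

    S = spla.LinearOperator((n, n), matvec=S_matvec, dtype=float)
    vals, vecs = spla.eigs(S, k=k, which='LR')    # 'LR': largest real part
    order = np.argsort(-vals.real)
    return vals[order], vecs[:, order]

# A positive real part in vals[0] signals instability of the diffusive state.

Note that the M-weighted orthogonalization mentioned in the next subsection is not captured by this generic call; it would require an eigensolver accepting a custom inner product.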
§.§ Implementation details
The implementation of the numerical methods outlined above is based on the library <cit.>, which provides the necessary tools for meshing fractured domains and assembling the discrete mixed-dimensional operators. The framework also implements forward automatic differentiation, providing the numerical Jacobians used for performing Newton iterations.
For the eigenvalue analysis (<ref>), the automatic differentiation part provides the Jacobian matrix.
Once the different matrix blocks are identified, we can easily define the matrix-vector product procedure yielding S v.
For the computation of few leading eigenpairs, the Krylov-Schur algorithm (outlined in <cit.>) has been combined with the dynamic restarting scheme described in <cit.>.
Using the inner product defined by the mass matrix for the orthogonalization part of the algorithm has been particularly beneficial in accelerating convergence.
The use of a custom procedure for computing eigenvalues has been preferred to an off-the-shelf implementation, mostly because it allows convergence to be monitored and because it provides a possible starting point for devising more efficient algorithms.
§ RESULTS
The model and its numerical approximation have been validated against three reference papers: <cit.> which treats the Elder problem, <cit.> and <cit.> that treat the HRL scenario respectively in two and three dimensions. Both problems have been extensively used in the literature as benchmarks in the context of density driven flows.
§.§ Elder problem
The Elder problem was originally proposed in a paper by Elder <cit.>, studying thermal convection in a Hele-Shaw cell.
It was later reformulated into a solute convection problem by Voss and Souza <cit.>,
where the system of equations is similar to the homogeneous problem (<ref>).
What this benchmark case aims to highlight and validate is the possibility of flow driven purely by density differences: no pressure gradient is being enforced by the boundary conditions.
The problem we want to solve is (<ref>), the domain being Ω = [0, 600] × [0,150]. Boundary conditions are
{ u·n = 0 on ∂Ω ,
ω = ω_max on ∂Ω_i ,
ω = 0 on ∂Ω_o ,
q·n = 0 on ∂Ω∖ (∂Ω_i ∪∂Ω_o) ,
.
where ∂Ω_i = (150, 450) × 150 and ∂Ω_o = (0, 600) × 0.
Since the boundary conditions for the Darcy problem are of Neumann type over the whole boundary we have an ill-posed problem;
we can however restore the well-posedness by adding an additional constraint, such as imposing zero mean pressure over the whole domain:
∫_Ω p = 0.
Initial conditions prescribe ω(x, 0) = 0 , x ∈Ω.
Equations are integrated in time until T = 20yr.
All the other parameters of the problem are reported in table <ref>.
In the absence of gravity, and for small enough Ra, the solute will diffuse from the inlet ∂Ω_i until the solution reaches the diffusive steady state. The characteristic time of the evolution is T_diff = H^2/D, which, for the parameters listed above, is
T_diff = (150 m)^2/(3.565e-6 m^2/s) ≈ 200 yr.
The corresponding characteristic time associated to convection induced by density differences is much smaller:
T_adv = H ϕμ/(k ρ_0 αω_max g) ≈ 0.5 yr.
Note that the Rayleigh number presented above as a ratio of velocities <ref> can be equivalently interpreted as a ratio of these timescales:
Ra = T_diff/T_adv ≈ 400 .
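These orders of magnitude are easy to verify with one-line arithmetic, using the values of H, D and T_adv quoted above:

year = 365.25 * 24 * 3600.0           # seconds per year
H, D = 150.0, 3.565e-6                # [m], [m^2/s]
T_diff = H**2 / D / year              # ~200 yr
T_adv = 0.5                           # [yr], value quoted above
print(f"T_diff = {T_diff:.0f} yr,  Ra = T_diff / T_adv = {T_diff / T_adv:.0f}")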
As noted in <cit.>, the solutions of the Elder problem present grid dependence even for moderately fine meshes, also due to the possible non-uniqueness of the solution.
For this reason, instead of trying to achieve grid independence, we directly validate the implementation by comparing the solutions with <cit.> for corresponding levels of grid refinement.
We use quadrilateral grids identical to the ones reported in the reference paper: number of cells n = 2^2l + 1 , l ∈{ 4, 5, 6 }, and fixed timestep Δ t = 1/12 yr. The qualitative comparison using the contours of the concentration profiles is reported in figure <ref>.
While the differences in the continuous model (which is not explicitly detailed in <cit.>)
and discretization method (<cit.> uses adaptive time stepping and Galerkin-FEM) cause different solutions for low levels of grid refinement,
for higher levels of grid refinement the solutions are in good agreement.
§.§ HRL problem
The HRL problem, already introduced in sections <ref> and <ref>, is the test case analyzed in <cit.>,
which we will use as reference solutions for validating both the direct method and the eigenvalue method.
The geometry of the domain and the parameters common to the different simulations are reported in figure <ref> and table <ref>.
The system of equations solved is system (<ref>), with boundary conditions given by
{ u·n = 0 on ∂Ω ,
ω = ω_max on ∂Ω_i ,
ω = 0 on ∂Ω_o ,
q·n = 0 on ∂Ω∖ (∂Ω_i ∪∂Ω_o) ,
.
where ∂Ω_i = (0, 20) × 10 and ∂Ω_o = (0, 20) × 0.
As for the Elder problem, the model is supplemented by the additional constraint ∫_Ω p = 0 to obtain a well-posed problem.
The solution strategy is inspired by the one outlined in the reference paper: the initial diffusive steady state concentration is perturbed
and the solution is advanced in time, gradually increasing the timestep Δ t.
Criteria for adapting the timestep Δ t include both the number of Newton iterations required for convergence and
the norm of the concentration difference ‖ω^n+1 - ω^n‖.
If Δ t is large enough with respect to characteristic time scales, and the concentration difference between timesteps is small enough, we consider the solution to have reached steady state.
A quantitative comparison between our results and the one presented in the reference paper is presented in table <ref>.
Concentration profiles for a few selected cases are also presented in figure <ref>.
In all the test cases, there is agreement between our results and <cit.> on whether convective motion is present at steady state.
In the majority of cases, there is both qualitative agreement in the concentration profiles and quantitative agreement on the strength of convection, as measured by the Sherwood number, even in test cases with complex fracture configurations such as 𝖤9𝖺 and 𝖤9𝖻.
As for the significant differences in cases such as 𝖡3, 𝖢2 and 𝖢3, they might be due to important differences between the model solved in this study and the one in the reference paper: we use the Boussinesq approximation, neglect dispersivity, and use a different numerical method.
These differences however seem to have a modest overall impact.
The analysis of the different scenarios presented above can be complemented with the stability analysis based on eigenvalues outlined in section <ref>.
We begin with a detailed analysis of scenario 𝖣11.
The method can provide the eigenpairs corresponding to k eigenvalues with largest real part, for reasonably small k.
Figure <ref> shows some of the computed eigenpairs, and, as we can see, one of them is positive, indicating the presence of natural convection.
The computed eigenvalues are consistent with the direct simulation: indeed, the instability of the diffusive steady state is confirmed by the presence of one eigenvalue of positive real part.
The numerical error due to the iterative nature of the eigenvalue computation is estimated by the formula ϵ = ‖ S x - λ x ‖/‖λ x‖, where S is introduced in (<ref>). Grid independence is instead assessed by computing the eigenvalues on a coarser grid and again computing a relative error: ϵ_g = | λ - λ_g |/|λ|, where λ_g are the eigenvalues computed on the coarser grid.
The corresponding errors for this scenario are reported in table <ref>. The errors ϵ and ϵ_g overall show good accuracy for almost every scenario.
Apart from predicting the possibility of convection, the eigenfunctions and eigenvalues can give insight into the evolution dynamics in the vicinity of the diffusive initial condition.
We start by noticing that not only do all of the computed eigenvalues have zero imaginary part, but the corresponding eigenfunctions are also approximately mutually orthogonal. Let ⟨ e_i, e_j⟩ denote the scalar product between (normalized) eigenfunctions: for i ≠ j its value is about three orders of magnitude smaller than 1 for test case D11. We remark that the S matrix is not symmetric, owing to the implementation of the boundary conditions, so there is no obvious reason to expect these results.
The orthogonality in particular suggests the possibility of studying the time evolution of the concentration in the space spanned by these eigenfunctions: given the solution ω(t) and the eigenfunctions { e_1, …, e_k }, let us define the scalar functions
α_i(t) = ⟨ω(t) - ω_0, e_i ⟩, i=1,…,k ,
representing the projections of ω(t)-ω_0 on the eigenfunction basis.
At steady state, different eigenfunctions give non-negligible contributions to the solution ω(t): no obvious relationship exists between the eigenpairs and the steady-state solution. This should come as no surprise, given that the eigenvalue analysis is localized around the initial condition.
In the early stages of the simulation, however, the eigenvalue analysis not only correctly predicts the shape of the growing perturbation (identical to the only eigenfunction associated with a positive eigenvalue), but the eigenvalue λ_1 also provides a good estimate of its growth rate.
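In practice, the coefficients α_i(t) are obtained by projecting stored snapshots of the solution onto the eigenfunctions; a minimal version, assuming the inner product defined by the mass matrix (as used for the orthogonalization mentioned above) and eigenfunctions already normalized in that norm, could read as follows. Variable names are ours.

import numpy as np

def projection_coefficients(snapshots, omega0, eigvecs, M):
    """alpha_i(t_n) = <omega(t_n) - omega_0, e_i> for every stored snapshot.

    snapshots : (n_times, n_cells) array of concentration fields omega(t_n)
    omega0    : (n_cells,) diffusive equilibrium used as reference
    eigvecs   : (n_cells, k) array of (normalized) eigenfunctions e_i
    M         : mass matrix defining the inner product <u, v> = u^T M v
    """
    deviations = snapshots - omega0[None, :]
    return deviations @ (M @ eigvecs)          # shape (n_times, k)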
Let us now consider other scenarios. Table <ref> provides the five eigenvalues with largest real part for each of the scenarios from <cit.> presented above. With no exception, a Sherwood number greater than 1 is associated with an eigenvalue with positive real part, thus correctly predicting the possibility or impossibility of convection. Moreover, larger Ra values are (weakly) associated with a greater number of positive eigenvalues, or with eigenvalues of greater magnitude.
As for the connection between the spectrum and the evolution dynamics at early times, scenarios 𝖠2, 𝖡2 and 𝖤9𝖻 have been analyzed, after checking the (approximate) orthogonality of the eigenfunctions.
In figure <ref>, the comparison between the actual time evolutions and the ones predicted by the spectrum as ∑_i=0^N exp(λ_i t) e_i is shown. Here, we have used N=8 for cases 𝖠2, 𝖡2 and N=12 for 𝖤9𝖻.
The presence of multiple positive eigenvalues precludes the possibility of pinning down uniquely the shape and rate of growth of the instability. In all cases however, the growing perturbation has the form of one particular eigenfunction and its rate of growth is given by the associated eigenvalue, see figure <ref>.
The eigenvalue with largest real part thus also provides a numerical upper bound on the rate of growth.
§.§ The role of fracture circuits
In <cit.> the onset and strength of convection is linked to the existence of continuous fracture circuits, and different numerical experiments confirm its importance.
In this section we argue that quasi-continuous fracture circuits can also enable convective motions, for a surrounding porous matrix of sufficiently high permeability.
Cases 𝖣1 and 𝖣2 in <cit.> are compared to show the necessity of circuit continuity for convection to occur.
However, slightly changing the fracture geometry (parameters remain unchanged), as in scenario 𝖣2^* (see figure <ref>), we see that, although the strength of convection is significantly reduced, it is still possible.
The permeability of the surrounding medium is indeed large enough for the medium to be part of the convective circuit,
though not large enough for convection to occur without the presence of fractures.
Scenario 𝖣2^* is purposefully built to highlight this phenomenon; however, we can find similar configurations in more complex scenarios already introduced in <cit.>.
Let us consider for instance network 𝖤9𝖻, shown in figure <ref>: velocity vectors plotted over the concentration profile clearly indicate, unlike what is claimed in <cit.>, one large convective cell instead of two separate ones.
As in scenario 𝖣2^*, the convective loop is not limited to the fracture network but crosses the porous matrix as well.
As further confirmation, we can decompose the solution of 𝖤9𝖻 using the approximate eigenvector basis (following the approach outlined in section <ref>).
In figure <ref> we see how the dominant modes e_6, e_8, i.e. the modes corresponding to the largest magnitudes of α_i, both involve fluid motion across the gap in the fracture network, through the porous matrix,
while modes looping around large continuous fracture circuits e_1, e_2, e_4, e_7 contribute only marginally to the steady state solution.
Let us parametrize the geometry of a fracture circuit with a gap as in figure <ref>, and relate it to the possibility of convection through the analysis of the sign of the eigenvalues. Results are reported in figure <ref>.
As expected, for very small matrix permeability gaps stop flow, thus inhibiting convection.
On the other hand, for larger matrix permeability k_m = 3e-16, the corresponding Rayleigh number is Ra ≈ 20, near the critical Ra_c = 4 π^2 at which the matrix can exhibit convection even without the aid of fractures; convection is thus possible across very large gaps.
For matrix permeability between these two extremes, the geometry of the gap determines whether convection across is possible or not.
This behavior can be explained, at least qualitatively, by focusing on the top (broken) edge of the fracture circuit, composed of the two fracture segments of length Δ x (segments A and C) and the gap of width ϵ (denoted by B). Assuming that flow across the gap occurs only through an area A, and that exchange with the porous matrix elsewhere is negligible, to ensure convection in the circuit we have to guarantee that the conductance G_B ≥ G_A = G_C, which translates into
A k_m/ϵ ≥ k_f b/Δ x,
or, equivalently,
(k_m A)/(k_f b) (Δ x/ϵ) ≥ 1.
This relation correctly predicts the qualitative behaviour of the numerical simulations reported in figure <ref>: small gaps with large surface areas favour convection across the gap.
The results of a more quantitative evaluation, checking the sign of eigenvalues for different combinations of the parameters, are illustrated in figure <ref>. Although the threshold does not appear to be so sharp, and its numerical value is closer to 10^-1 than to 1, the horizontal separation of positive leading eigenvalues from negative ones indicates that the model is capturing part of the physics of the phenomenon.
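The criterion itself is trivial to evaluate; the scan below (with entirely illustrative parameter values of our choosing, not those of the simulations) shows how the dimensionless group separates gaps that throttle the circuit from gaps that do not.

def gap_conductance_ratio(k_m, A, k_f, b, dx, eps):
    """(k_m A / k_f b) * (dx / eps): values of order 1 or larger suggest the
    matrix gap does not throttle the convective circuit (criterion above)."""
    return (k_m * A) / (k_f * b) * (dx / eps)

k_f, k_m = 1e-8, 1e-15          # fracture and matrix permeability [m^2]
b, A, dx = 1e-4, 1.0, 5.0       # aperture, gap cross-section, segment length [m]
for eps in (0.01, 0.1, 1.0):    # gap width [m]
    print(f"eps = {eps:5.2f} m:  ratio = {gap_conductance_ratio(k_m, A, k_f, b, dx, eps):.3f}")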
§.§ Three-dimensional HRL problem
Both the direct method (exposed in section <ref>) and the eigenvalue method (section <ref>) make no assumptions on the dimensionality of the ambient space.
In this section, we are going to apply the latter to simple three-dimensional scenarios.
In <cit.>, a classification of possible modes of convection in fractured porous media is proposed.
The paper further explains how the assumption of a two-dimensional domain severely limits the applicability to experimental cases.
This is due to the three-dimensional character of the dominant mode of convection – mode 2B in figure <ref> – which cannot be captured by a two-dimensional analysis.
In <cit.>, different three-dimensional scenarios are analyzed and numerically simulated.
The numerical results from the regular three-dimensional fracture circuit (here denoted as scenario 6), show that when convection is available (for large enough fracture aperture), the dominant mode is the interfracture mode 1.
The geometry simulated in <cit.> being practically identical to the geometry studied in <cit.>, the two results stand in contradiction.
Indeed, <cit.> acknowledges the contradiction,
attributing it to both the matrix-fracture coupling conditions and to the Rayleigh averaging strategy used in the analysis of <cit.>.
In particular, the use of a Rayleigh number based on an averaged permeability was shown to be ineffective in the case of low density fracture networks in <cit.> (as briefly discussed in section <ref>).
We analyze the 3D problem with the eigenvalue method for different scenarios presented in <cit.>, and collect the results in table <ref>, where we report for different geometries and different fracture apertures the unstable modes, and compare them with the convection modes predicted by <cit.>.
As in the two-dimensional case, the instability thresholds are identical to the ones obtained in <cit.>, apart from a slight misprediction for scenario 9𝖺.
In all scenarios furthermore, the eigenfunctions clearly follow the classification proposed in <cit.>, corresponding to either interfracture □ or intrafracture ↻ modes.
In particular, for scenario 6 we observe that (1) the interfracture mode is the first unstable one, appearing at b ≈ 1.4e-5;
(2) already at b ≈ 1.6e-5, different intrafracture modes are available and dominate over the interfracture mode:
λ^↻_1 = 21.0, λ^↻_2 = 20.1, λ^□ = 17.6;
(3) intrafracture convection modes remain dominant for larger fracture apertures.
Thus, excluding apertures for which the setup is very near to the instability threshold, intrafracture convection modes dominate over interfracture convection modes when both modes are available.
This conclusion also holds for scenarios 9𝖺 and 9𝖻.
Although our results seem to confirm the results of <cit.>, where the intrafracture mode is also identified as dominant,
they do not contradict the results in <cit.>.
Indeed the eigenvalue analysis is strictly localized around the diffusive equilibrium solution. In particular, the eigenvalues give no indication on which modes will be present at steady state.
As seen in e.g. scenario 𝖠2 (figure <ref>), even modes with decaying behaviour near equilibrium may turn out to be dominant at steady state.
§.§ Computational considerations
We want to discuss some of the computational aspects of the eigenvalue method, comparing it in particular to the direct method of assessing stability in terms of computational cost.
The computationally expensive steps of the eigenvalue method can be divided into
(a) grid construction, initial operator discretization and Newton iterations to reach the steady state (needed as initial condition),
(b) construction of the S operator defined in (<ref>) (including the LU factorization of A_yy), and
(c) computation of the k largest eigenvalues.
Step (c) can itself be subdivided into the two costly operations of (i) matrix-vector products S v and (ii) orthogonalization.
We will initially compare the computational cost of assessing the possibility of convection using the eigenvalue method against the direct method.
Next, we will discuss how the cost changes with number of degrees of freedom and number of sought eigenvalues.
In section <ref>, we saw how the direct method and the eigenvalue method can be used in a complementary fashion,
each of them giving answers inaccessible to the other.
We can however restrict the question to the possibility of convection since both methods are a valid way of finding an answer (e.g. see table <ref>).
For a fairer comparison, the direct forward simulations will be stopped as soon as the Sherwood number exceeds 1, which already indicates the possibility of convection.
Table <ref> reports the timing comparison and the relative time spent in each of the most costly operations for different test cases.
The cost of the direct method
is dominated by the assembly of the problem Jacobian in the two-dimensional cases
and by the linear solves in the three-dimensional cases (which possess a much larger number of degrees of freedom).
Both of these operations are performed for each Newton iteration.
The number of Newton iterations per timestep advancement mostly varies between 2 and 3 for all the simulations.
Relaxing the convergence tolerance of the Newton method is one of the options to speed up the method, at the cost of lower accuracy of the solution.
This might be a valid option considering that what we are interested in is whether convection is possible, not in the detailed behaviour of the solution.
The eigenvalue method is much more efficient with respect to the direct method in low fracture density cases.
For test cases with complex fracture configurations such as 𝖤9𝖺 and 𝖤9𝖻, the performance of the two methods (direct and eigenvalue) approximately matches.
In the different test cases, the eigenvalue method dedicates approximately equal times to orthogonalization and matrix-vector products.
Their relative weights can actually be adjusted by varying the size m of the Krylov basis in the Krylov-Schur algorithm:
for every iteration of the algorithm, the cost of orthogonalization scales with m^2 while the cost of the matrix-vector products scales with m.
Tuning m for the problem at hand may be a possible option for improving the efficiency of the method.
In figure <ref> we illustrate how the number of matrix-vector products N_mv changes with the number of mesh elements, N_el.
The results show that N_mv depends weakly (N_mv ∼ N_el^0.5) on the size of the problem (although a lot of variability remains).
Note, however, that the cost of each matrix-vector product S v scales as C_mv ∼ N_el^2: the presence of A_yy^-1 in (<ref>) makes the S matrix dense.
The scaling estimate for the total cost of the matrix-vector products 𝑇𝐶_mv is thus
𝑇𝐶_mv = N_mv C_mv ∼ N_el^0.5 N_el^2 = N_el^2.5 .
The estimate is consistent with the data reported in table <ref>, for which a least-squares fit gives 𝑇𝐶_mv ∼ N_el^(2.2 ± 0.4).
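For reference, the quoted exponent is simply the slope of a least-squares fit of log 𝑇𝐶_mv against log N_el; a minimal version of such a fit (with made-up data standing in for the measured timings of table <ref>) is:

import numpy as np

N_el = np.array([1e3, 4e3, 1.6e4, 6.4e4])      # placeholder mesh sizes
TC = np.array([0.02, 0.5, 11.0, 240.0])        # placeholder timings [s]
slope, _ = np.polyfit(np.log(N_el), np.log(TC), 1)
print(f"TC_mv ~ N_el^{slope:.1f}")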
Finally, the convergence of the Krylov-Schur algorithm during the search for multiple eigenvalues is shown in figure <ref> for four different test cases. In all the test cases, once the algorithm pins down an eigenvalue, the corresponding error starts decreasing at an exponential rate. Depending on the test case, we can have scenarios in which the eigenvalues all converge practically together, as opposed to cases where computing each successive eigenvalue requires considerably more computation.
§ CONCLUSION
The aim of this study was to better understand the influence of fractures on the possibility of free convection in porous media.
To this aim, we have described a mathematical model for density driven flow in the presence of fractures,
and the corresponding numerical approximation. In addition to the direct "forward" numerical solution of the problem,
we have proposed and implemented a novel method for the assessment of convective stability through the eigenvalue analysis of the linearized numerical problem.
The new method is shown to be in agreement with existing literature cases both in simple and complex fracture configurations.
With respect to direct simulation in time, its results provide further information on the possibility of convection.
In particular, as shown in section <ref> the sign of the leading eigenvalue correctly predicts the onset of convection, and its magnitude provides quantitative estimates of the rate of growth or decay of perturbations.
Furthermore the computational cost of the method has proven to be in the worst cases equal, and in the best cases up to an order of magnitude faster than the direct solution method.
The fact that the eigenvalue method closely mimics the analytical method of investigating stability clarifies what inferences can be made from the results of a linear stability analysis. In particular, analyzing stability through the study of the system around equilibrium solutions (both numerically and analytically) is restrictive in that it cannot predict whether, far from the equilibrium solution, transient convective motion is possible, nor which convective modes will be dominant at steady state.
Moreover, an in depth study of the particular scenario E9b in <cit.> has further complicated the question of analyzing free convection in the presence of fractures: for realistic sets of problem parameters, the porous matrix is indeed able to participate in the convective motion.
Thus, stability criteria based on the fracture network alone, e.g. the presence of large continuous fracture circuits as a trigger for convection, are shown to be somewhat limited in applicability.
Given the results of this study, we could expand the work in different directions.
On the numeric side, the method used to compute the eigenvalues at the moment does not take into account the structure of the particular problem.
However, the S matrix is built up from submatrices of the Jacobian associated to the linearized problem.
Therefore, a close study of the Jacobian may suggest ways of speeding up the computation of the eigenvalues by exploiting the structure of the S matrix.
Finally, though the eigenvalue method has proven to be faster than the direct method of assessing convective stability, the method could be further sped up if we are willing to sacrifice accuracy, since, for this purpose, we are only interested in the sign of the most positive eigenvalue. If during the computation of the eigenvalues we find one to be positive, with error reasonably smaller than its magnitude, we can already predict the possibility of convective motion.
In cases where the method is applicable, such as the fractured HRL problem, the additional speed
may enable a more systematic (or even statistical) study of how global properties of fracture networks such as fracture density, connectivity and characteristic sizes are related to the possibility of convective motion.
§ ACKNOWLEDGEMENTS
The authors acknowledge the support by MUR, grant Dipartimento di Eccellenza 2023–2027.
aavatsmark02
Ivar Aavatsmark.
An Introduction to Multipoint Flux Approximations for
Quadrilateral Grids.
Computational Geosciences, 6(3/4):405–432, 2002.
diersch02
H.-J.G. Diersch and O. Kolditz.
Variable-density flow and transport in porous media: approaches and
challenges.
Advances in Water Resources, 25(8-12):899–944, August 2002.
elder67
J. W. Elder.
Steady free convection in a porous medium heated from below.
Journal of Fluid Mechanics, 27(1):29–48, January 1967.
hortonrogers
C. W. Horton and F. T. Rogers.
Convection Currents in a Porous Medium.
Journal of Applied Physics, 16(6):367–370, June 1945.
porepy
Eirik Keilegavlen, Runar Berge, Alessio Fumagalli, Michele Starnoni, Ivar
Stefansson, Jhabriel Varela, and Inga Berre.
PorePy: an open-source software for simulation of multiphysics
processes in fractured porous media.
Computational Geosciences, 25(1):243–265, February 2021.
lapwood_1948
E. R. Lapwood.
Convection of a fluid in a porous medium.
Mathematical Proceedings of the Cambridge Philosophical
Society, 44(4):508–521, October 1948.
martin05
Vincent Martin, Jérôme Jaffré, and Jean E. Roberts.
Modeling Fractures and Barriers as Interfaces for Flow in
Porous Media.
SIAM Journal on Scientific Computing, 26(5):1667–1691, January
2005.
rees
D.A.S. Rees and L. Storesletten.
The onset of convection in a two-layered porous medium with
anisotropic permeability.
Transport in Porous Media, 128, 2019.
shafabakhsh19
Paiman Shafabakhsh, Marwan Fahs, Behzad Ataie-Ashtiani, and Craig T. Simmons.
Unstable Density-Driven Flow in Fractured Porous Media:
The Fractured Elder Problem.
Fluids, 4(3):168, September 2019.
shikaze98
Steven G Shikaze, E.A Sudicky, and F.W Schwartz.
Density-dependent solute transport in discretely-fractured geologic
media: is prediction possible?
Journal of Contaminant Hydrology, 34(3):273–291, October 1998.
nield
C. T. Simmons, J. M. Sharp, and D. A. Nield.
Modes of free convection in fractured low‐permeability media.
Water Resources Research, 44(3):2007WR006551, March 2008.
starnoni19
M. Starnoni, I. Berre, E. Keilegavlen, and J. M. Nordbotten.
Consistent MPFA Discretization for Flow in the Presence of
Gravity.
Water Resources Research, 55(12):10105–10118, December 2019.
stathopoulos98
Andreas Stathopoulos, Yousef Saad, and Kesheng Wu.
Dynamic Thick Restarting of the Davidson, and the Implicitly
Restarted Arnoldi Methods.
SIAM Journal on Scientific Computing, 19(1):227–245, January
1998.
stewart02
G. W. Stewart.
A Krylov–Schur Algorithm for Large Eigenproblems.
SIAM Journal on Matrix Analysis and Applications,
23(3):601–614, January 2002.
voss1987variable
Clifford I. Voss and William R. Souza.
Variable density flow and solute transport simulation of regional
aquifers containing a narrow freshwater‐saltwater transition zone.
Water Resources Research, 23(10):1851–1866, October 1987.
vg15
Katharina Vujević and Thomas Graf.
Combined inter- and intra-fracture free convection in fracture
networks embedded in a low-permeability matrix.
Advances in Water Resources, 84:52–63, October 2015.
vg14
Katharina Vujević, Thomas Graf, Craig T. Simmons, and Adrian D. Werner.
Impact of fracture network geometry on free convective flow patterns.
Advances in Water Resources, 71:65–80, September 2014.
arXiv:2409.02154v1 [astro-ph.IM] 3 September 2024
COmoving Computer Acceleration (COCA): N-body simulations in an emulated frame of reference
Deaglan J. Bartlett, Marco Chiarenza, Ludvig Doeser, Florent Leclercq
Categories: astro-ph.IM, astro-ph.CO, cs.LG, stat.ML
ORCID: 0000-0001-9426-7723 https://orcid.org/0000-0001-9426-7723; corresponding author.
https://www.florent-leclercq.eu/
ORCID: 0000-0002-9339-1404 https://orcid.org/0000-0002-9339-1404; corresponding author.
§ ABSTRACT
N-body simulations are computationally expensive, so machine-learning-based emulation techniques have emerged as a way to increase their speed.
Although fast, surrogate models have limited trustworthiness due to potentially substantial emulation errors that current approaches cannot correct for.
To alleviate this problem, we introduce COmoving Computer Acceleration (COCA), a hybrid framework interfacing machine learning with an N-body simulator.
The correct physical equations of motion are solved in an emulated frame of reference, so that any emulation error is corrected by design.
This approach corresponds to solving for the perturbation of particle trajectories around the machine-learnt solution, which is computationally cheaper than obtaining the full solution, yet is guaranteed to converge to the truth as one increases the number of force evaluations.
Although applicable to any machine learning algorithm and N-body simulator, this approach is assessed in the particular case of particle-mesh cosmological simulations in a frame of reference predicted by a convolutional neural network, where the time dependence is encoded as an additional input parameter to the network.
We find that COCA efficiently reduces emulation errors in particle trajectories, requiring far fewer force evaluations than running the corresponding simulation without machine learning.
As a consequence, we obtain accurate final density and velocity fields for a reduced computational budget.
We demonstrate that this method shows robustness when applied to examples outside the range of the training data.
When compared to the direct emulation of the Lagrangian displacement field using the same training resources, COCA's ability to correct emulation errors results in more accurate predictions.
Therefore, COCA makes N-body simulations cheaper by skipping unnecessary force evaluations, while still solving the correct equations of motion and correcting for emulation errors made by machine learning.
COmoving Computer Acceleration (COCA): N-body simulations in an emulated frame of reference
Florent Leclercq
September 9, 2024
===========================================================================================
§ INTRODUCTION
N-body simulations represent the state-of-the-art numerical method for studying the dynamics of complex systems, including non-linear gravitational structure formation in the Universe <cit.>.
Such simulations can be incredibly computationally expensive to run <cit.>; hence, various machine learning (ML)-based approaches have been proposed to either remove the requirement to run physical simulations or to reduce the complexity of the simulator used.
The most straightforward application of ML methods is as surrogate models, which take initial conditions as inputs and emulate various features of the corresponding full N-body simulation as outputs.
For example, <cit.> were able to predict halo properties from the initial conditions given certain environmental properties.
<cit.> used Generative Adversarial Networks (GANs) to generate visually plausible three-dimensional cosmological density fields but encountered difficulties in reproducing the correct statistical distribution of physical density fields.
Going further, <cit.> built an emulator of cosmological density fields based on a combination of dimensionality reduction via principal component analysis and supervised ML.
<cit.> demonstrated that one can replicate the full result of particle-mesh (PM) simulations (i.e., a Lagrangian displacement field) using a deep neural network which takes the Zel'dovich approximation <cit.> displacement field as input.
This work was extended to tree-based N-body simulations by <cit.>.
Such emulators can replicate the power spectrum to the percent level up to k ≈ 1 h Mpc^-1.
<cit.> further extended these works by predicting both the displacement and velocity fields through two separate neural networks and by incorporating the cosmological matter density information through the addition of a “style” parameter <cit.>.
The resulting emulator can reproduce power spectra and bispectra to within a few percent and achieves a similar level of cross-correlation with the true simulation run with the same initial conditions.
By adding a time variable as an additional style parameter, <cit.> were able to eliminate the need for two separate networks and produce an emulator capable of predicting N-body outputs as a function of redshift.
The speed and differentiable nature of particle-based emulators enable them to be integrated within field-based inference of initial conditions <cit.>, with posterior re-simulations indicating faithful reconstruction of the initial conditions.
Simulations including the effects of massive neutrinos <cit.> or modified gravity <cit.> can also be emulated using similar neural network techniques.
Instead of completely bypassing the N-body simulation, one can include ML corrections that capture unresolved physics in low-resolution, cheaper simulations.
For example, <cit.> introduced an additional effective force to PM simulations to capture unresolved forces between particles.
Their machine-learnt isotropic Fourier filter was extended by <cit.> to depend not only on time and wavenumber but also on cosmological parameters.
Super-resolution techniques based on GANs <cit.> and U-nets <cit.> have also been proposed, achieving power spectra correct within a few percent, as well as reasonable bispectra, void size functions, and halo mass functions correct to within 10% for halos down to ≈ 10^11 M_⊙ in mass.
Given the computationally demanding nature of hydrodynamical simulations, <cit.> introduced a light model (with only 𝒪(10) learnable parameters) to transform the output of a dark-matter-only simulation to one that resembles the hydrodynamical simulation run with the same initial conditions.
<cit.> also presented a light and interpretable neural network to produce halo catalogues from dark matter density fields.
Accuracy and interpretability are pivotal challenges in the application of machine learning to N-body simulations.
Despite the high reported accuracy of the methods reviewed above on various tests (mainly using summary statistics), none of these models can be expected to perfectly recover the truth.
Are ML-accelerated simulation algorithms sufficiently accurate to be used in real-world applications?
Without a ground-truth model to compare against during actual use (since such algorithms are designed to eliminate the need for it), current approaches have limited means of identifying the emulation error and cannot correct for it.
Since typical simulations usually also involve simplifying assumptions and approximations, perfectly accurate ML-based models may not be required for many purposes.
The question that arises is then that of the interpretability of ML, in order to control the approximation made with respect to a physical simulator.
Unfortunately, many ML algorithms, including (deep) neural networks, lack interpretability.
If machines predict something humans do not understand, how can we check (and trust) the results?
In this paper, we contend that addressing the lack of interpretability of ML is not always necessary to use an emulator of an expensive model while maintaining control over the degree of accuracy.
We elucidate this argument by constructing a framework in which emulation of N-body simulations is made an ML-safe task by physically rectifying emulation inaccuracies. By “ML-safe,” we mean systems that are reliable, robust, and trustworthy by construction.
The key idea is to find a mathematically equivalent form of the system's equations of motion, where we solve for the (not necessarily small) perturbation around the approximate solution provided by ML.
From a physical point of view, in N-body simulations obeying Newtonian dynamics, this is equivalent to solving the equation of motion in an emulated frame of reference.
Since the ML solution is designed to be approximately correct, computing corrections is numerically easier than evolving the full system, thus requiring fewer evaluations of the forces.
Through the number and the temporal positions of force evaluations, the user controls the trade-off between speed and accuracy, ranging from fully trusting the ML solution by never correcting particle trajectories to correcting for ML emulation errors at any time step of the simulation.
The system has the theoretical guarantee of asymptotically converging to the physical solution as the number of force evaluations increases.
For gravitational N-body simulations of dark matter particles, we introduce the COmoving Computer Acceleration (COCA) approach to running cosmological simulations within an emulated frame of reference.
While traditional emulators aim to translate initial conditions into final particle positions, directly representing the non-linear dark matter distribution, COCA aims to emulate a frame of reference in which to run a physical simulation with lower computational cost.
The approach can be seen as a generalisation and improvement of the idea behind COmoving Lagrangian Acceleration (COLA) <cit.>.
As an illustration, we compare the results of COLA and COCA simulations when forces are evaluated through a particle-mesh (PM) scheme.
We find that using our ML-enhanced approach requires very few force evaluations (approximately 8, compared to 20 for COLA) to correct for emulation errors, yielding percent-level accurate power spectra, bispectra, and cross-correlation to a reference simulation.
This paper is organised as follows.
In <ref>, we review the COLA approach to N-body simulations, extend it to yield COCA, and describe the benefits of COCA in terms of computational efficiency. A more thorough description is provided in <ref>.
We introduce our emulator for the frame of reference in <ref> and describe the training procedure and validation metrics for the COCA simulations.
In <ref>, we present our results: the performance of the emulator, the accuracy of COCA simulations as a function of the number of force evaluations, the generalisation to an example known to be outside the range of the training set, the comparison to a Lagrangian displacement field emulator, and a discussion of the computational performance.
We conclude in <ref>, discussing potential future extensions and applications of this study.
§ THEORY
For simplicity, some of the equations in this section are abridged. We reintroduce the omitted constants, temporal prefactors, and Hubble expansion in <ref>.
§.§ Review of COLA
In a cosmological dark matter-only N-body code, one wishes to compute the final Eulerian positions of particles x, as a function of scale factor a, as they interact under gravity. If the initial comoving particle positions are q, then the Lagrangian displacement field is given by <cit.>
Ψ(q,a) ≡x (a) - q.
One then must solve the equation of motion which reads schematically
∂_a^2 Ψ(q,a) = -∇Φ(x,a),
where the gravitational potential Φ satisfies the Poisson equation sourced by the density contrast field δ (x, a),
ΔΦ(x,a) = δ(x,a).
In the perturbative regime, analytic solutions for Ψ(q,a) can be derived, which are known as Lagrangian Perturbation Theory <cit.>. These solutions are valid on large scales but become inaccurate once shell crossing occurs, making the approximation more reliable at early times.
This behaviour is illustrated in <ref>, where initially the LPT trajectories and the true trajectories are indistinguishable, but the discrepancy increases over time.
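To make the leading-order LPT ingredient concrete, the sketch below computes the Zel'dovich displacement of each Lagrangian grid point from a linear overdensity field on a periodic grid. It is illustrative only: the function and variable names are ours, second-order LPT is omitted, and no existing code's interface is implied.

import numpy as np

def zeldovich_displacement(delta_lin, box_size, D1=1.0):
    """First-order (Zel'dovich) displacement of each Lagrangian grid point,
    Psi_ZA(k) = i k / k^2 * D1 * delta_lin(k), on a periodic grid."""
    n = delta_lin.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0                      # the k = 0 mode carries no displacement
    delta_k = np.fft.fftn(delta_lin)
    psi = []
    for ki in (kx, ky, kz):
        psi_k = 1j * ki / k2 * D1 * delta_k
        psi_k[0, 0, 0] = 0.0
        psi.append(np.fft.ifftn(psi_k).real)
    return np.stack(psi)                   # shape (3, n, n, n)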
The temporal COmoving Lagrangian Acceleration <cit.> algorithm aims to separate the temporal evolution of large and small scales by evolving large scales using analytic LPT results and small scales using a numerical solver. This is accomplished by decomposing the Lagrangian displacement field into two components <cit.>:
Ψ(q,a) ≡Ψ_LPT(q,a) + Ψ_res^COLA(q,a),
where Ψ_LPT(q,a) represents the LPT displacement field, and Ψ_res^COLA(q,a) denotes the residual displacement of each particle as observed from a frame comoving with an “LPT observer,” whose trajectory is defined by Ψ_LPT(q,a). Knowing Ψ_LPT(q,a), one does not need to solve for the full trajectory of the particle, but just the residual between the approximation and the truth (the blue arrows in <ref>).
Using <ref>, it is possible to rewrite <ref> as
∂_a^2 Ψ_res^COLA(q,a) = -∇Φ(x,a) - ∂_a^2 Ψ_LPT(q,a).
Therefore, one can view LPT as providing a new frame of reference within which we solve the equations of motion.
The term ∂_a^2 Ψ_LPT(q,a) can be thought of as a fictitious force acting on particles, caused by our use of a non-inertial frame of reference.
Since particles experience lower typical accelerations in the LPT frame compared to the natural cosmological frame, solving the equation of motion numerically becomes a comparatively simpler task, requiring fewer time steps to achieve equivalent accuracy <cit.>. In particular, COLA has been demonstrated to always yield correct results at large scales, even with a small number of time steps (≤ 10), unlike a basic particle-mesh (PM) code. Given that <ref> is mathematically equivalent to <ref>, COLA is asymptotically equivalent to the corresponding standard N-body code (e.g. a PM code if forces -∇(Δ^-1δ) are evaluated via a standard PM technique), in the limit of an infinite number of time steps.
§.§ COCA formalism
While the COLA formalism has proven effective in solving the equation of motion within the frame of reference defined by LPT, there is no requirement to adhere to LPT or any other analytic approximation in the decomposition given by <ref>.
According to the principle of Galilean invariance, the equation of motion can be solved in any frame of reference, provided appropriate fictitious forces are introduced for non-inertial frames.
Considering that the simplest scenario is the one where no motion occurs, we aim at finding a frame of reference where particles are nearly stationary. In such a frame of reference, solving the equation of motion numerically to reach a given level of accuracy becomes an easier problem than in COLA. This is the key insight that underpins the formalism proposed in this paper. We dub this approach COmoving Computer Acceleration (COCA).
We propose utilising a ML algorithm, such as a neural network, as an emulator to learn and predict the trajectories of particles in N-body simulations. Since the LPT frame already provides a good approximation of the trajectories, particularly on large scales, we opt to learn displacements relative to the LPT frame. Therefore, the emulator outputs a displacement field Ψ_ML(q,a) that approximates Ψ(q,a) - Ψ_LPT(q,a).
Rather than directly employing the emulator as a surrogate for simulation results, we use the frame of reference corresponding to the emulated trajectories in order to run a simulation. Following the same spirit as COLA, we split the Lagrangian displacement field into three contributions,
Ψ(q,a) ≡Ψ_LPT(q,a) + Ψ_ML(q,a) + Ψ_res^COCA(q,a),
where Ψ_ML(q,a) is the ML contribution to the Lagrangian displacement field, and the residual displacement Ψ_res^COCA(q,a) represents the emulation error. Different contributions are shown schematically in <ref>.
Reframing <ref> using <ref>, the equation of motion for COCA contains an extra fictitious force with respect to COLA:
∂_a^2 Ψ_res^COCA(q,a) = -∇Φ(x,a) - ∂_a^2 Ψ_LPT(q,a) - ∂_a^2 Ψ_ML(q,a).
In COCA, the predicted displacement Ψ_LPT(q,a) + Ψ_ML(q,a) approximates the optimal frame of reference in which to solve the simulation (the one where all particles are at rest). Ideally, in case of perfect emulation, solving the equation of motion would result in no trajectory adjustment (Ψ_res^COCA(q,a) =0 for any a). Otherwise, numerically solving <ref> corrects the trajectories of particles to produce a more accurate solution.
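Schematically, the right-hand side of this residual equation of motion combines the physical gravitational term with two fictitious forces; the short sketch below (whose array arguments are hypothetical and not the interface of any existing code) makes this structure explicit.

def coca_residual_acceleration(grav, d2_psi_lpt, d2_psi_ml):
    """Right-hand side of the residual equation of motion:
    grav       = -grad Phi(x, a), the physical gravitational acceleration
    d2_psi_lpt = second a-derivative of the LPT displacement (fictitious force)
    d2_psi_ml  = second a-derivative of the emulated displacement (fictitious force)
    For a perfect emulator this sum vanishes identically, so no correction is needed."""
    return grav - d2_psi_lpt - d2_psi_ml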
We describe in more detail the COCA formalism in <ref>. Notably, while above we described the framework in terms of an emulated displacement field Ψ_ML(q,a), we show that we can equivalently define the new frame of reference by the momentum p≡dx/da of particles, so that
p(a) ≡p_LPT(a) + p_ML(a) + p_res(a),
where p_LPT(a) and p_ML(a) denote momenta predicted by LPT and ML, respectively, and the residual momentum p_res(a) is determined by solving the equations of motion.
§.§ Reducing the number of force evaluations
To integrate the equations of motion in the new frame of reference, we utilise a symplectic “kick-drift-kick” (leapfrog) algorithm <cit.>.
With this method, the positions x, and momenta p, of the particles are updated at different times, typically with one momentum update between every two position updates.
A schematic illustration of the technique is given in <ref>, with the full details provided in <ref>.
At each momentum update (“kick”) we face two choices. One can assume that the emulated frame of reference is sufficiently accurate and thus update the particle momenta by simply evaluating the emulator, corresponding to following the “emulated” (purple) trajectory in <ref> (equivalent to assuming that g_δ(t^D) = 0 in equation (<ref>), using the notations of <ref>).
Alternatively, one may deem the emulation error significant and opt to correct the trajectory, aiming to bring the particles back to the “N-body” (black) trajectory in <ref>.
This correction involves evaluating gravitational forces between particles[In a PM scheme, which we employ in this paper, forces are computed by deriving the density field from particle positions through cloud-in-cell binning, solving the Poisson equation in Fourier space to obtain the gravitational potential, and then finite differencing the potential in configuration space to get the forces.] (g_δ(t^D) in <ref>) and using the complete form of the kick operator.
Correcting trajectories is more computationally expensive than simply following the emulated trajectories, so the number of force evaluations n_f, should be as small as possible, but large enough to correct for emulation errors.
During time steps without force evaluations, particles move according to trajectories defined by their respective frames of reference (Ψ_LPT for COLA and Ψ_LPT+Ψ_ML for COCA). Hence, n_f=0 corresponds to the LPT solution in COLA simulations and a purely emulated one in COCA simulations.
In COCA, the ability to reduce the number of force evaluations introduces an additional degree of freedom compared to PM/COLA simulations, where forces are evaluated at every time step. Force evaluations can in principle be placed at any of the time steps; however, we find that concentrating all evaluations towards the end of the simulation, when structure formation is non-linear, typically yields the most accurate results.
Up until the first force evaluation, the COCA framework consists in predicting particle positions x and momenta p at specific times, functioning in a similar way as more traditional emulators <cit.>.
In <ref> we show an example of a kick-drift-kick scheme with ten time steps, with three force evaluations at time steps 8, 9 and 10. At all other time steps, momentum updates (kicks) rely solely on the chosen frame of reference.
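A minimal sketch of this selective time-stepping is given below. It is schematic only: the exact kick and drift operators carry time-dependent prefactors (given in the appendix), the symplectic half-kick structure is simplified to a plain kick-then-drift update, and the names coca_evolve, frame_momentum and pm_forces are placeholders rather than the interface of an actual code.

import numpy as np

def coca_evolve(x, q, a_steps, force_steps, frame_momentum, pm_forces):
    """Schematic COCA time-stepping loop.
    x              : (N, 3) comoving particle positions
    q              : (N, 3) Lagrangian positions, input to the frame emulator
    a_steps        : increasing array of scale factors defining the time steps
    force_steps    : indices of the steps at which gravity is evaluated
    frame_momentum : callable returning p_LPT(q, a) + p_ML(q, a), the emulated frame
    pm_forces      : callable returning the particle-mesh acceleration -grad Phi at x
    """
    p_res = np.zeros_like(x)               # particles start at rest in the emulated frame
    for i in range(len(a_steps) - 1):
        a, a_next = a_steps[i], a_steps[i + 1]
        da = a_next - a
        if i in force_steps:
            # Kick with forces: physical gravity plus the fictitious force of the
            # non-inertial frame (finite-differenced here for simplicity).
            fictitious = (frame_momentum(q, a_next) - frame_momentum(q, a)) / da
            p_res = p_res + da * (pm_forces(x) - fictitious)
        # Otherwise the emulated frame is trusted and the residual momentum is frozen.
        # Drift: the total momentum is the frame momentum plus the residual.
        x = x + da * (frame_momentum(q, 0.5 * (a + a_next)) + p_res)
    return x, p_res

With force_steps empty, the loop returns the purely emulated trajectories; with a force evaluation at every step, it converges to the corresponding PM solution, as described above.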
§ EMULATION
We remind the reader that the field to be emulated, p_ML(q,a), is the residual momentum field with respect to the LPT momentum field, namely
p(q,a) - p_LPT(q,a),
at any value of a corresponding to a kick time step (see <ref>).
§.§ Training data
For the application of COCA described in this work, we chose to emulate the frame of reference in a cubic box of length 128 h^-1 Mpc with N^3 = 64^3 dark matter particles, resulting in a final density field at a=1 on a grid with a resolution of Δ x = 2 h^-1 Mpc. This resolution is approximately the same as that used by <cit.>.[
The number of force evaluations needed in COCA to achieve a given accuracy likely depends on the accuracy of the frame of reference emulator and, therefore, on the resolution. We leave the investigation of such effects to future work.
]
Since the focus of this paper is the time evolution of the fields, we adopt fixed cosmological parameters equal to the best-fit values (TT,TE,EE+lowE+lensing+BAO) from Planck 2018 <cit.>: Ω_ b = 0.0490, Ω_ m = 0.3111, h = 0.6766, τ = 0.0561, n_ s = 0.9665, and σ_8 = 0.8102. We assume a flat Universe and a non-evolving equation of state for dark energy.
Although the COCA formalism can be applied to any method of computing the forces between particles (PM, P^3M, tree-based, etc.), for this paper, we chose to work with a PM force solver, utilising a modified version of the publicly available code[<https://simbelmyne.florent-leclercq.eu/>] <cit.>.
For our simulations, we generated initial conditions at a scale factor a = 0.05 using second-order LPT and solved the equations of motion using COLA with 20 time steps equally spaced in a and a PM grid of size 64^3 <cit.>.
Although we have verified that this initial scale factor and number of time steps are appropriate to give converged results for all k ≤ 1 h Mpc^-1, the “reference” against which we compare in testing refers to a COLA simulation with the same setup, except with 100 time steps equally spaced in a.
At each time step of the simulations, we output the difference between the computed momentum of the particles p and the LPT momentum p_ LPT, which is the quantity we must emulate.
We produce 100 simulations for training, 50 for validation, and a further 50 for testing.
This is a sufficiently small number of training simulations that re-training with a different resolution or specifications does not require significant computational resources.
While one could potentially achieve higher accuracy for the emulator with more training simulations, the aim of this paper is primarily to demonstrate how to correct for emulation errors rather than to produce the optimal emulator. Therefore, we find 100 training simulations to be sufficient for our purposes.
For each simulation, we use all 20 output snapshots, resulting in a total of 2000 fields for training. In addition, we use 1000 fields for validation and 1000 for testing.
§.§ Scaling of momenta
In <ref>, we plot a slice of the field p - p_ LPT as a function of scale factor, for one of our test simulations.
From visual inspection, we find that the large-scale spatial structure of the field to be emulated does not change significantly as a function of time, particularly at late times, but its magnitude does.
We therefore choose to rescale the momenta to be emulated by defining
p_ML(q, a) ≡ D(a) ℋ(a) ϖ(a) p̃_ML(q, a),
where p̃_ ML is defined to have a standard deviation of unity, D(a) is the linear growth factor, ℋ(a) is the conformal Hubble parameter in units of h, and ϖ(a) is a time-dependent function which we wish to approximate.
Our emulator is designed to directly predict p̃_ ML, and thus this scaling has the benefit of standardising the output, since p̃_ ML has zero mean and standard deviation unity.
To find an approximation for ϖ(a), we compute the standard deviation of the 2000 training p - p_ LPT fields and fit these as a function of a using the ESR <cit.> symbolic regression code.
We use a mean squared error loss function and allow functions to be comprised of addition, multiplication, subtraction, division, the power operator, as well as free constants, θ, and the scale factor a.
Upon inspecting the fitted equations, we find that a power law provides a sufficiently simple yet accurate approximation for our purposes:
ϖ (a) ≈( θ_0 a )^θ_1,
with parameters θ_0 = 1.1415174 and θ_1 = 2.3103984, which yields a root mean squared error of 1.5 × 10^-3.
We compare this fit to the training data in <ref>, from which we see that it accurately reproduces ϖ(a) at all scale factors. Note that any error in this fit can be compensated for by the emulator, so a perfect fit is not required.
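For reference, the rescaling and the fitted power law translate into the short helpers below (the function names are ours; the linear growth factor and conformal Hubble parameter are assumed to be supplied by the simulation code).

THETA_0, THETA_1 = 1.1415174, 2.3103984    # constants fitted with symbolic regression (ESR)

def varpi(a):
    """Power-law approximation to the time-dependent amplitude of p - p_LPT."""
    return (THETA_0 * a) ** THETA_1

def rescale_output(p_ml_tilde, a, growth_D, conformal_H):
    """Map the unit-variance network output back to momenta,
    p_ML = D(a) * calH(a) * varpi(a) * p_ML_tilde, where growth_D(a) is the
    linear growth factor and conformal_H(a) the conformal Hubble parameter."""
    return growth_D(a) * conformal_H(a) * varpi(a) * p_ml_tilde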
§.§ Neural network architecture
To emulate p̃_ ML, we utilise a U-net/V-net architecture <cit.>, with a similar implementation to <cit.>.
Our model consists of three resolution levels connected in a “V” shape, using two downsampling (by stride-2 2^3 convolutions) and two upsampling (by stride-1/2 2^3 convolutions) layers. At each level, we apply a 3^3 convolution and, as in a V-Net, we apply a 1^3 convolution as a residual connection <cit.> within each block. A batch normalisation is applied after each convolution, which is followed by a leaky ReLU activation function with a negative slope of 0.01. Each layer has 64 channels, except the input (1), output (3), and those after concatenations (128).
For every convolutional layer, we introduce a “style” parameter[
A style parameter in a neural network is an additional input at each layer that encodes dependence on an important feature. In our case, the scale factor a encodes the dependence on clustering at various cosmological times. For more details, we refer the interested reader to Eq. (1)–(3) of <cit.>.
]
(borrowing the nomenclature from StyleGAN2 <cit.>), where each convolutional kernel is multiplied by an array of the same dimension as the layer's input, with values equal to the style parameter.
Since we are producing a time-dependent emulator, we use the scale factor a as our style parameter.
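The style modulation can be sketched as a PyTorch module as follows. This is a simplified illustration rather than the actual implementation: periodic padding is replaced by zero padding, and the style enters through a small learnt map to per-channel kernel scales, which differs in detail from the modulation described above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StyledConv3d(nn.Module):
    """3D convolution block whose kernel is modulated by a scalar style
    parameter (here the scale factor a): convolution, batch norm, leaky ReLU."""

    def __init__(self, c_in, c_out, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(c_out, c_in, *(kernel_size,) * 3)
            / (c_in * kernel_size ** 3) ** 0.5)
        self.to_scale = nn.Linear(1, c_in)    # maps the style a to per-channel kernel scales
        nn.init.zeros_(self.to_scale.weight)  # start from an unmodulated kernel
        nn.init.ones_(self.to_scale.bias)
        self.norm = nn.BatchNorm3d(c_out)
        self.act = nn.LeakyReLU(0.01)

    def forward(self, x, a):
        # x: (batch, c_in, D, H, W); a: (batch, 1) scale factor of each sample
        b, c_out = x.shape[0], self.weight.shape[0]
        scale = self.to_scale(a)                                   # (batch, c_in)
        w = self.weight.unsqueeze(0) * scale.view(b, 1, -1, 1, 1, 1)
        w = w.reshape(b * c_out, *self.weight.shape[1:])           # per-sample modulated kernels
        y = F.conv3d(x.reshape(1, -1, *x.shape[2:]), w,
                     padding=self.weight.shape[-1] // 2, groups=b)
        y = y.reshape(b, c_out, *y.shape[2:])
        return self.act(self.norm(y))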
Our network is implemented and trained using a modified version of the map2map package[<https://github.com/eelregit/map2map/>] and <cit.>.
The input to the emulator is the redshift-zero linear density field (at a=1). This contrasts with <cit.>, <cit.>, <cit.> and <cit.>, who use the three displacement fields predicted by first-order LPT as input. Given that the latter is computed deterministically from the former, both fields contain the same amount of information. However, the linear density field requires three times less memory to store, making our approach more memory efficient.
We also note that we achieve good performance for our emulator with only N^3=64^3 voxels in the input field, whereas <cit.> uses N^3=128^3 for a similar resolution, resulting in a further factor of 8 reduction in memory requirements.
The smaller input size necessitates one fewer resolution layer in our neural network architecture, thus reducing the number of parameters in our model to 2.4×10^6, compared to 3.4×10^6 in <cit.>.
It also requires less padding of the input field: we use periodic padding of 24 voxels, compared to 48 in <cit.>.
Regarding the dependence of the emulation on cosmology, we expect the sensitivity of p - p_LPT to cosmological parameters to be relatively small, since long-range features should be captured in p_ LPT.
Moreover, we choose to use the linear density field as input instead of the white noise field from which it is produced. This way, our emulator of p - p_LPT only depends on Ω_ m, as the equations of motion depend solely on this parameter.
The dependence on all other cosmological parameters is contained in the linear power spectrum, which is used to transform the white noise field into the linear density field.
Thus, adding only Ω_ m as a second style parameter to the network would be sufficient to capture the dependence of the framework on cosmological parameters. For simplicity, we fix Ω_ m and save this extension for future work.
Omitting Ω_ m as a second style parameter also enables us to test the robustness of the COCA framework in the case of cosmological parameter misspecification, and hence check for ML-safety.
We discuss this aspect in <ref>.
To summarise, our architecture is similar to that of <cit.>, with three main differences:
(i) we use a single channel (linear density) input rather than three channels (LPT displacements or velocities);
(ii) we have three resolution levels instead of four (since we work with N^3=64^3 grids as opposed to N^3=128^3); and
(iii) we include a as a style parameter <cit.> and fix Ω_ m.
§.§ Training
As our loss function, we choose
Loss≡log L_1 + log L_2,
where
L_n ≡ ∑_q ∑_i { [(p_LPT + p_ML)_i]^n - [(p_true)_i]^n }^2,
and the sum runs over the Lagrangian coordinates of the particles, q, and the three Cartesian components, i ∈ x, y, z.
This functional form is partially inspired by <cit.>.
The L_1 term matches p_ML to the residual momenta p_true - p_LPT, whereas the L_2 term ensures that the full momentum field (including the LPT contribution) matches p_true.
<cit.> found that terms similar to L_2 are required to correctly predict redshift-space distortions. We leave the investigation of redshift-space distortions in COCA for future work.
Both terms of our loss function use the mean square error between the fields in Lagrangian coordinates. Unlike <cit.>, we do not include any term in Eulerian coordinates. Given the computational and memory requirements to use the displacement fields in Eulerian coordinates, and the good performance already achieved with our choice, we decided to omit such additional terms.
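In code, this loss can be written compactly; in the sketch below (names ours), p_lpt, p_ml and p_true are tensors of shape (batch, 3, N, N, N), with p_ml assumed to have been rescaled back to physical momenta.

import torch

def coca_loss(p_lpt, p_ml, p_true):
    """log L1 + log L2, where L_n is the squared difference between the n-th powers
    of the predicted (p_LPT + p_ML) and true momenta, summed over the Lagrangian
    grid and the three Cartesian components."""
    p_pred = p_lpt + p_ml
    L1 = torch.sum((p_pred - p_true) ** 2)
    L2 = torch.sum((p_pred ** 2 - p_true ** 2) ** 2)
    return torch.log(L1) + torch.log(L2)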
We use the Adam optimiser <cit.> with decoupled weight decay (AdamW) <cit.>, an initial learning rate of 1.5×10^-4, a weight decay coefficient of 8×10^-3, and parameters β_1=0.85, β_2 = 0.994, and ϵ = 3×10^-9.
The learning rate is reduced on a plateau by a factor of 0.35 when the loss does not improve by more than 10^-3 over 50 epochs. After a change in learning rate, we apply a cooldown of 30 epochs before the scheduler resumes normal operation. We use a batch size of 5 and train on a single V100 GPU, which has 32 GB of RAM.
The entire time for generating the training, validation, and test simulations (for which we use 40 Intel Xeon Gold 6230 cores) and training was 120 hours, corresponding to 277 epochs, by which time the training and validation losses had plateaued.
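The quoted optimiser settings map directly onto standard PyTorch objects; the sketch below (the configure_training helper is ours) gathers them in one place. The scheduler would then be stepped once per epoch with the validation loss.

import torch

def configure_training(model):
    """Optimiser and learning-rate schedule with the hyper-parameters quoted above."""
    optimiser = torch.optim.AdamW(model.parameters(), lr=1.5e-4,
                                  betas=(0.85, 0.994), eps=3e-9,
                                  weight_decay=8e-3)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimiser, factor=0.35, patience=50, threshold=1e-3, cooldown=30)
    return optimiser, scheduler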
§.§ Validation metrics
To quantitatively determine the accuracy of the COCA simulations, we compute the dark matter density field δ using a cloud-in-cell estimator <cit.> and the velocity field v using a simplex-in-cell estimator <cit.>. To work with a scalar field rather than a vector field, we compute the divergence of the velocity field in Fourier space, ∇·v.[
The velocity potential is usually of greater physical interest than the divergence of the velocity field. However, in Fourier space, they are related by a factor of 1/k^2, and since we only compare the ratio of auto and cross spectra at a given k, all quantities shown will be identical for both. Thus, we compute only the divergence.
]
For both fields φ∈{δ, ∇·v}, we compute the (auto) power spectrum P_φ (k) defined by
⟨φ(k) φ(k^')⟩ ≡ (2π)^3 δ_D(k + k^') P_φ(k),
where δ_D is the Dirac delta distribution.
For all simulations, we compute the ratio of power spectra between the simulation of interest and the reference.
We also compute the cross spectrum P_φ_a φ_b (k) between the test simulation and the reference simulation, defined by
⟨φ_a(k) φ_b(k^')⟩ ≡ (2π)^3 δ_D(k + k^') P_φ_a φ_b(k).
Thus, we obtain the cross-correlation coefficient
r_φ_a φ_b(k) = P_φ_a φ_b(k) / √(P_φ_a(k) P_φ_b(k)) .
One can interpret 1-r^2 as the fraction of the variance in the prediction that is not explained by the reference.
In schemes such as carpool <cit.>, where one combines exact and approximate simulations, 1-r^2 is proportional to the required number of simulations. Hence, improving r^2 can dramatically reduce the required computational resources.
Just comparing r can hide the importance of improving the cross-correlation: for example, improving r from 0.9 to 0.99—a change of 0.09—corresponds to explaining an additional 17% of the variance at that scale.
For these reasons, in all figures, we plot r^2 rather than r, since it is more meaningful.
All two-point statistics are computed using .
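As an illustration of these diagnostics, the sketch below estimates the spherically averaged auto/cross power spectra and the cross-correlation coefficient on a periodic grid with plain NumPy; it is simplified (no mass-assignment deconvolution or shot-noise treatment), and the function names are ours.

import numpy as np

def cross_power(field_a, field_b, box_size, n_bins=32):
    """Spherically averaged cross power spectrum of two real fields on a
    periodic grid (auto spectrum when field_a is field_b)."""
    n = field_a.shape[0]
    fa, fb = np.fft.rfftn(field_a), np.fft.rfftn(field_b)
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz = 2.0 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kmag = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2
                   + kz[None, None, :] ** 2)
    power = np.real(fa * np.conj(fb)) * box_size ** 3 / n ** 6
    bins = np.linspace(0.0, np.pi * n / box_size, n_bins + 1)   # up to the Nyquist frequency
    which = np.digitize(kmag.ravel(), bins)
    pk = np.array([power.ravel()[which == i].mean() if np.any(which == i) else np.nan
                   for i in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), pk

def cross_correlation_coefficient(field_a, field_b, box_size):
    """r(k) = P_ab(k) / sqrt(P_aa(k) P_bb(k)); the figures discussed above plot r^2."""
    k, pab = cross_power(field_a, field_b, box_size)
    _, paa = cross_power(field_a, field_a, box_size)
    _, pbb = cross_power(field_b, field_b, box_size)
    return k, pab / np.sqrt(paa * pbb)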
To assess higher-order statistics, we also compute the bispectrum B (k_1, k_2, k_3) of the density field, defined by
⟨δ(k_1) δ(k_2) δ(k_3)⟩ ≡ (2π)^3 δ_D( ∑_i=1^3 k_i ) B(k_1, k_2, k_3),
and, to factor out dependence on scale and cosmological parameters, the reduced bispectrum,
Q(k_1, k_2, k_3) ≡ B(k_1, k_2, k_3) / ( P_1 P_2 + P_2 P_3 + P_3 P_1 ),
with P_i ≡ P_δ(k_i) for i ∈{ 1,2,3 }.
We consider two different configurations in this work, which are designed to be approximately the same as those used in <cit.>. First, we consider a “squeezed” bispectrum, consisting of an isosceles triangle configuration with one small wavenumber, k_ℓ = 9.8 × 10^-2 h Mpc^-1, and two sides of equal but varying size, k_1=k_2=k_ s. For our second configuration, we fix two of the wavenumbers, k_1 = 0.1 h Mpc^-1 and k_2 = 1.0 h Mpc^-1, and vary the angle θ between them.
All bispectrum calculations are performed using <cit.>.
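For completeness, the two configurations can be generated explicitly. In the sketch below (names ours), the third wavenumber of the second configuration follows from triangle closure, taking θ as the angle between k_1 and k_2.

import numpy as np

def squeezed_triangles(k_s_values, k_l=9.8e-2):
    """Isosceles configurations (k_l, k_s, k_s) with one fixed small side k_l [h/Mpc]."""
    return [(k_l, k_s, k_s) for k_s in k_s_values]

def fixed_sides_triangles(thetas, k1=0.1, k2=1.0):
    """Configurations with two fixed sides and a varying opening angle theta;
    the closing side follows from k3 = |k1 + k2|."""
    return [(k1, k2, np.sqrt(k1 ** 2 + k2 ** 2 + 2.0 * k1 * k2 * np.cos(t)))
            for t in thetas]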
§ RESULTS
§.§ Emulator performance
In <ref>, we plot slices of the input δ_linear, output p_ML, target p-p_LPT, and emulation error p_res≡p-p_LPT-p_ML for one of our test simulations, where all fields are evaluated at a=1.
The target fields are obtained by running COLA simulations with 20 time steps, using initial conditions matching those of the test simulations, and saving the residuals between the calculated and LPT momenta.
As described in <ref>, the input is the linear density field, comprising a single channel, whereas the output prediction is a three-component vector for each Lagrangian grid point. Since we are learning the residuals between the true momentum and the LPT prediction, correlations observed in p-p_LPT are highly localised, reflecting the accurate capture of large-scale modes by LPT.
Visually, there is a notable correlation between p - p_LPT (second column of <ref>) and p_ ML (third column of <ref>). Leveraging the linear density field and scale factor information, the emulator accurately identifies the spatial structure of the p_ ML field. The small emulation errors indicate its capability to predict magnitudes as well. We observe that the emulation error is signal-dependent, resulting in larger values of p_ res in the regions where |p_ ML| is large. These regions are highly nonlinear and appear as the simulation progresses. It is noteworthy that the emulation errors become particularly visible when visualising the final quantities in <ref>, given their lesser prevalence at earlier times.
To quantify the magnitude and time-dependence of the emulation error, we plot the root mean squared error (RMSE) between the true p - p_ LPT and p_ ML, as predicted by the emulator, as a function of the scale factor in <ref>.
We present the mean and standard deviation of the RMSE across 50 test simulations.
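Explicitly, the RMSE is taken over all particles and the three Cartesian components of the residual momentum; a short sketch (names ours, values in the units of the input momenta) is:

import numpy as np

def emulation_rmse(p_true, p_lpt, p_ml):
    """Root mean squared emulation error of the residual momentum p - p_LPT - p_ML,
    averaged over particles and Cartesian components."""
    return np.sqrt(np.mean((p_true - p_lpt - p_ml) ** 2))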
At early times, the trajectories of the particles are well described by perturbation theory. Thus, even though LPT is not a perfect description of the dynamics, the emulator can easily correct for the error, maintaining a relatively constant RMSE of less than 20 km s^-1 for a < 0.5.
We observe a slight decrease in RMSE between a=0.2 and a=0.4, which is understandable given <ref>: initially, the field exhibits a high degree of small-scale structure, which becomes less significant and approximately constant over time, making p - p_ LPT easier to predict during this period.
Beyond a ≈ 0.5, the small-scale dynamics become highly non-linear, making it more challenging for the emulator to predict the correct frame of reference.
Consequently, we observe the behaviour schematically illustrated in <ref>: the emulation error grows at late times, approximately doubling between a=0.5 and a=1.
<cit.> and <cit.> found similar issues with predicting virialised motions within collapsed regions due to their chaotic and random nature. It is precisely these emulation errors that we aim to correct using the COCA framework.
§.§ COCA performance
We now turn to testing the use of our frame of reference emulator within a cosmological simulation. To do this, for each realisation of initial conditions in our test set, we run a reference simulation (see <ref>) as well as COCA and COLA simulations with varying specifications. For these runs, we use 20 time steps between a=0.05 and a=1, spaced linearly in scale factor, but we vary the number of force evaluations n_ f.
After some experimentation, we found that the best strategy to maximise the statistics described in <ref> is to place all force evaluations at the end of the simulation.
This is expected, as the dynamics become more non-linear at later times, making it crucial to accurately resolve particle trajectories during these periods, especially since the emulation error is also largest at these times (see <ref>).
In <ref>, we plot a slice of the final density field for one of the reference simulations in our test set, as well as the corresponding COCA simulations with n_ f = 10 and n_ f = 20, and their respective residuals relative to the reference. Both COCA simulations accurately recover the overall structure of the cosmic web, with correctly positioned filaments and nodes. With the smaller number of force evaluations, there is a small residual in the final density, but this has almost completely disappeared when n_ f = 20.
To assess the relative performance of COCA and COLA and to determine the optimal number of force evaluations, in <ref> we plot the fractional error on the matter power spectra and the cross-correlation coefficient for the a=1 density field as a function of wavenumber for both simulation frameworks on the test set.
As a sanity check, we verify that both COLA and COCA achieve similar performance when performing a force evaluation at each of the 20 time steps.
Our first observation is that COCA performs dramatically better than COLA, even when using few force evaluations. It is unsurprising that with n_ f=0 COLA performs poorly, as this is merely the LPT prediction, which is known to be a poor description at this redshift and on these scales. In contrast, we find that COCA with n_ f=0 is already extremely accurate: purely following the trajectories of the emulated frame of reference (n_ f=0) produces a behaviour practically identical to running a COLA simulation with n_ f=12 force evaluations. The matter power spectrum of the emulated field is 99% accurate up to k ≈ 0.3 h Mpc^-1, with r^2(k) > 0.99 up to k ≈ 0.6 h Mpc^-1.
One would expect that, if the training simulations and evolution used a higher-accuracy gravity solver (e.g. P^3M or a tree-based approach), COCA would outperform COLA. However, it is not possible to check this conjecture in this example, since both the frame of reference emulator and COCA solver are based on PM forces.
Despite the good predictions of the emulator, we see that the relative error on the power spectrum increases to more than 10% at k=1 h Mpc^-1 when n_f=0.
However, the error is reduced to less than 1% up to k = 0.5 h Mpc^-1 by adding just 8 force evaluations, and less than 1% up to k = 1 h Mpc^-1 with 10 force evaluations, both in terms of the power spectrum and phase accuracy.
This feature highlights the benefit of the COCA framework: we use machine learning to provide good approximations to the true solution and can run a physical simulation to correct for any errors made, using far fewer force evaluations than is ordinarily required.
The same behaviour is observed when considering the three-point statistics. In <ref> we plot the bispectrum for the COCA simulations in the configurations outlined in <ref>. As with the power spectrum, reasonable agreement with the reference is achieved without any force corrections, with errors of the order of 5-10%. However, with just 8 force evaluations, one achieves close to perfect agreement with the reference for almost all configurations considered, with the only discrepancy occurring for k_ s > 1 h Mpc^-1.
We now evaluate the accuracy of the simulated velocity fields by plotting the error on the power spectrum and cross-correlation coefficient for the final velocity potential in <ref>.
Velocity fields are very poorly predicted for all COLA simulations that skip force evaluations, with an under-prediction of power beyond k ≈ 0.1 h Mpc^-1, and an over-prediction as one approaches the Nyquist frequency of our simulations.
The cross-correlation between the COLA velocities and the reference is also very low, with practically zero correlation at k = 1 h Mpc^-1 when no force evaluations are used, and with r^2(k) ≈ 0.5 at this scale for n_ f = 12.
This is unsurprising, since this latter case is equivalent to initialising a COLA simulation with an LPT prediction at a redshift of z=1.5 and using 12 time steps; one would not expect the initial conditions of such a simulation to be reasonable, as this is well beyond the validity of LPT.
However, this problem is alleviated if one uses an emulated frame of reference. Using only the emulator (n_ f = 0) reduces the error on the velocity field power spectrum to approximately 5% at a=1, which, although still reasonably large, is much smaller than what is found with COLA with up to n_ f=12.
The advantage of the COCA framework is particularly evident when varying n_ f, as the addition of just 6 force evaluations practically eliminates this error, reducing it to 1%.
Similarly, we find that the COCA fields are much more correlated with the reference, even when using far fewer force evaluations, with r^2(k) > 0.8 for all k ≲ 1 h Mpc^-1 and for any number of force evaluations. The degree of correlation improves as one increases n_ f.
In summary, we find that our emulator can reasonably recover the density and velocity fields even without any correction. However, emulation errors of up to 𝒪(10%) remain, but these can be reduced to the sub-percent level with just 8 force evaluations. Thus, COCA is able to correct for mistakes made in the emulation of particle trajectories by running a simulation in the corresponding frame of reference.
§.§ COCA with misspecified cosmological parameters
One of the key motivations behind the COCA framework is the concept of ML-safety. Although emulation techniques have previously been applied to predict the results of dark matter simulations <cit.>, there may be concerns that the emulated solutions might not match the truth if the initial conditions or cosmological parameters are “unusual,” i.e., unlike the training data.
The capacity of emulators to extrapolate was tested by <cit.> in the context of well-understood simple matter distributions that had not been seen during training. Furthermore, <cit.> found that their emulator performed well with initial conditions containing significantly less power than their training examples.
However, with regular emulators, it is not possible to test all possible configurations, and thus, in general, one can only hope that the model extrapolates well to the application of interest.
In contrast, COCA uses a frame of reference emulator but solves the fundamental equations of motion. Therefore, any extrapolation mistake made in the emulation should be automatically corrected, unlike with the use of an emulator alone.
Our frame of reference emulator was trained using simulations run at a single cosmology. To test its out-of-distribution behaviour and its use in the COCA formalism, we ran 50 additional test simulations with a different set of cosmological parameters: Ω_ b = 0.03, Ω_ m = 0.35, h = 0.7, n_ s = 0.99, and σ_8 = 0.9.
These parameters are chosen to be relatively extreme, yet still within the support of a moderately wide prior that could be used for a cosmological analysis.
We note that we use the correct cosmological parameters for producing the initial density field, obtaining the LPT displacement fields, and solving the equations of motion; the only place where cosmological parameters are misspecified is in the prediction of p_ ML.
In <ref>, we plot the fractional error on the power spectra and the cross-correlation coefficients for the density and velocity fields in COCA simulations. The reference is the COLA simulations run with the same initial conditions, which are not subject to model misspecification in this scenario.
Despite the relatively extreme cosmological parameters, the uncorrected fields (n_ f=0) yield reasonable power spectra and cross-correlation. The mean error on the density power spectrum is approximately 20% by k=1 h Mpc^-1 with r^2(k) > 0.93 at these scales, while the velocity power spectrum has slightly smaller errors—around 10%—and r^2(k) ≈ 0.85 by k=1 h Mpc^-1.
This moderate agreement with the truth is enabled by using the initial density field rather than the white noise field as the input to the emulator (see section <ref>).
Indeed, even if the initial density appears different from that of the training simulations, the emulator does not have to predict the relevant initial matter power spectrum, which contains the entire dependence on all cosmological parameters except Ω_ m.
Additionally, since one expects p_ ML to be sourced only by local contributions in Lagrangian coordinates, the sensitivity to cosmological parameters should be relatively small.
Similar moderately accurate extrapolation behaviour has also been observed in other cosmological simulation emulators <cit.>.
Despite the moderate performance of this emulator in the presence of cosmological parameter misspecification, without any force evaluations (i.e., with n_f = 0), the error on the matter power spectrum would be too large for current cosmological analyses <cit.>.
Therefore, relying solely on an emulator of particles' trajectories (i.e., a frame of reference emulator with n_f = 0) as a forward model would produce inappropriate results and would not be a safe use of machine learning.
However, trajectories can be rectified in the COCA framework by evaluating gravitational forces and solving for the residual displacements with respect to the emulated frame of reference.
In our test, using just 8 force evaluations is sufficient to achieve percent-level agreement in both P(k) and r^2(k) for the density field, for all k < 1 h Mpc^-1 (see <ref>). The same conclusion is true for the velocity field up to k ≈ 0.6 h Mpc^-1.
Thus, with only a small additional computational cost, we can convert an unsafe use of machine learning in cosmology into a well-behaved one, even when the emulator is applied outside the range of its training data. This is one of the main benefits of COCA compared to traditional emulators of N-body simulation results.
§.§ COCA versus a Lagrangian displacement field emulator
In this work, we advocate for using an emulator for the frame of reference in N-body simulations, as it allows for correcting emulation errors by introducing force evaluations. This approach contrasts with previous emulators <cit.>, which directly predict the Lagrangian displacement field, i.e., the simulation output. This section compares the relative accuracy of these two approaches as a function of the number of force evaluations in COCA.
To investigate this question, we train a time-dependent emulator for the residual displacement field Ψ_ ML (q, a), defined as the difference between the true Lagrangian displacement field Ψ (q, a) and that predicted by LPT, Ψ_ LPT (q, a). We opted to train a new Ψ-emulator rather than directly compare COCA to existing literature results to minimise the impact of differences in gravity solvers, training set sizes, architecture choices, and training procedures. For a fair comparison, we trained our Ψ-emulator using the same simulations as for the frame of reference emulator, employing the same architecture and training procedure outlined in <ref> (with p replaced by Ψ in the loss function).
In a similar manner as before, we begin by normalising the target variable by defining the function ψ(a) such that
Ψ_ ML (q, a) ≡ψ(a) Ψ̃_ ML (q, a),
where Ψ̃_ ML (q, a) has unit standard deviation. Applying symbolic regression to the function ψ(a), we find that it is well approximated by
ψ(a) ≈ a^ϕ_0 + ϕ_1,
with ϕ_0 = 1.2412539 and ϕ_1 = -0.05402543, yielding a root mean squared error of 5 × 10^-4.
We take the linear density field as input, but this time output Ψ̃_ ML (q, a).
We evaluate this emulator on the same test simulations as for the frame of reference emulator, and convert the returned Lagrangian displacements into an Eulerian density field using a cloud-in-cell estimator.
We compute the power spectrum, cross-correlation coefficient, and bispectra (see <ref>) and plot these in <ref>, where we compare against COCA without force evaluations (n_ f=0; using solely the frame of reference emulator) and both n_ f=4 and n_ f=8.
We perform this analysis for both the fiducial and misspecified cosmology (see <ref>).
When compared to COCA with n_ f = 0, the Lagrangian displacement field emulator more accurately recovers the reference density field. The power spectra of the two methods are relatively similar with fiducial cosmological parameters, but the difference becomes more pronounced when the cosmology is misspecified. For all other metrics, the Ψ-emulator produces summary statistics that are closer to the reference.
This behaviour is expected: the Ψ-emulator is designed to optimise the prediction of dark matter particle positions through its loss function, naturally resulting in an accurate density field. In contrast, the frame of reference emulator in COCA aims to match particle momenta p. Consequently, without force evaluations, emulation errors in p accumulate over time, reducing the quality of the final density field.
Although the Ψ-emulator performs better than COCA with no force evaluations, there is no way to correct its errors, meaning its performance cannot be improved. Conversely, in COCA, force evaluations can be added to correct the errors made by the frame of reference emulator.
<ref> shows that the addition of only four force evaluations results in performance nearly identical to that of the Ψ-emulator for the bispectra, and better results for two-point statistics with residual errors reduced by a factor of 1.4.
Residual errors almost entirely disappear when eight force evaluations are used in COCA: the final power spectrum P(k) has approximately four to five times smaller errors than the one derived from the Ψ-emulator at all scales for both cosmologies.
Thus, even with very limited additional computations beyond the emulation, the COCA framework outperforms a Lagrangian displacement field emulator.
We note that our displacement emulator is slightly less accurate than that of <cit.>, who also emulated a PM-like output. They achieved errors on P(k) of 0.8% and 4% at k=0.4 h Mpc^-1 and k=0.7 h Mpc^-1, respectively, whereas our emulator is accurate to 3% and 9% at these scales.
We attribute this discrepancy to our use of fewer training simulations (2,000 fields compared to 10,000 in ), the need for our emulator to learn time-dependence (only 100 of the 2,000 training fields are at a=1), and <cit.> employing a more optimised architecture and training schedule.
As mentioned in <ref>, since the aim of this paper is to demonstrate how to correct for emulation errors rather than produce the optimal emulator, we chose not to increase the number of training simulations or fine-tune the architecture, as our emulator is already of similar quality to those in the literature.
If a frame of reference emulator with performance similar to that of <cit.> were used in COCA, fewer time steps would be needed to correct for emulation errors, thereby achieving the same theoretical guarantees with reduced computational expense.
§.§ Timing tests
To assess the computational performance of COCA, in <ref> we show the required amount of CPU/GPU time for each of the stages of the framework. To perform the timing tests, we use an Intel Xeon Gold 6230 processor with 40 CPU cores and an Nvidia V100 GPU. We compare the results of running COLA with 20 time steps (and 20 force evaluations) and COCA with the same number of time steps, but with varying numbers of force evaluations (0, 8, and 20). All other settings are identical to those in section <ref>.
The COLA simulation takes approximately 61 CPU-seconds to run, with almost half of this time spent on cloud-in-cell binning (converting particle positions to the density field).
In our test, running COCA with n_ f=0 is approximately four times faster.
In the current implementation, the emulation and the simulation codes are disjoint, with the frame of reference being written to and then read from disk at each kick time step (a process responsible for 77% of the CPU/GPU time in this case).
Separating emulation and N-body evolution is not a fundamental requirement of the COCA framework: one could emulate on the fly, which would effectively reduce the input/output time to zero for a slightly higher memory cost (two p_ML fields need to be kept in memory for each kick operation, see <ref>).
Such an approach would make the n_ f=0 case 18 times faster than the COLA simulation, with this emulator.
To enable the safe use of an ML emulator, COCA relies on including a finite number of force evaluations.
Naturally, if one uses the same number of force evaluations, then COCA is more computationally expensive than COLA, since it must perform the same steps as COLA but with an additional emulation stage.
However, because the ML correction makes the frame of reference more accurate than LPT alone, the number of force evaluations can be reduced to approximately 8 (see <ref>).
With the current implementation of separate emulation and N-body evolution codes, the cost of COCA with n_f=8 is approximately two-thirds of the cost of the COLA simulation. If emulation were done on the fly, the time required for inputs (36% of the total time) would be eliminated, making COCA 2.3 times faster than COLA.
These timing improvements are expected to become more dramatic if COCA were extended to include a more accurate gravity solver, such as a P^3M or tree-based code.
The computational expense for computing the forces in these codes is significantly higher than in the PM-based model used in this work. Therefore, reducing the number of force evaluations would dramatically improve run time. We leave such an investigation to future work.
§ DISCUSSION AND CONCLUSION
In this paper, we have introduced COmoving Computer Acceleration (COCA), a hybrid formalism involving ML and N-body simulations. Unlike previous works that directly emulate the simulation output, COCA solves the dynamics of an N-body simulation using a machine-learnt frame of reference.
COCA can be seen as an improvement of COLA, which solves the dynamics of an N-body simulation in the LPT frame of reference.
By virtue of the principle of Galilean invariance, equations of motion can be solved in any frame of reference, making COCA an ML-safe framework.
COCA is the first framework to use physics to determine the (otherwise uncorrected) emulation error in N-body simulations using ML and correct for it.
The concept behind COCA is entirely independent of the N-body solver and of the ML emulation algorithm used. For this proof-of-concept, we employed a PM approach to solving the equations of motion and a V-net architecture for the frame of reference, with fixed cosmological parameters.
We have demonstrated that after the ML-prediction of the optimal frame of reference (the one in which all particles are at rest), running the N-body simulation corrects for potential emulation errors in the particle trajectories.
We have quantitatively shown that the number of force evaluations required to achieve a given accuracy is reduced compared to COLA. The frame of reference emulator achieves between 1% and 10% accuracy when used in isolation, but only eight force evaluations are needed to reduce emulation errors to the percent level, compared to a 100-time step COLA simulation. Therefore, COCA can be utilised as a cheap N-body simulator.
Furthermore, with eight force evaluations, COCA is four to five times more accurate than a Lagrangian displacement field emulator, when the frame of reference emulator and the Lagrangian displacement field emulator are trained using the same computational resources. This increased accuracy is due to COCA's ability to correct emulation errors and represents one of the main advantages of this framework compared to the direct emulation approach explored in earlier literature.
In <ref>, we demonstrated that our frame of reference emulator is moderately robust to changes in cosmological parameters (despite training at a fixed cosmology). However, the COCA framework can correct for extrapolation errors arising from applying the emulator outside the range of validity of the training simulations.
Even when the frame of reference is inaccurate (because the ML training/prediction and the N-body evolution use different cosmological parameters), we found that percent-level accuracy can be reached on final density and velocity fields up to k ≈ 0.6 h Mpc^-1.
Thus, the COCA framework provides ML-safety even when models are required to extrapolate.
There is no fundamental reason why the emulator cannot depend on cosmological parameters, and future implementations of COCA can include these as additional style parameters of the neural network.
Our example focused on relatively small simulation volumes (with a side length of 128 h^-1 Mpc) compared to those required for modern-day surveys (typically several gigaparsecs in length). With the current memory limitations of GPUs, it is not possible to emulate the entire volume with a single emulator at the desired resolution.
As a workaround, <cit.> splits the volume into several padded sub-boxes and treats each one separately, relying on sequential predictions for particle displacements in each sub-box to cover the full volume.
Similarly, in COCA, one could predict the frame of reference for particles in each sub-box, and then solve the equations of motion in each sub-box independently.
This idea relates to the algorithm introduced by <cit.> for perfectly parallel N-body simulations using spatial comoving Lagrangian Acceleration (sCOLA).
There, a tiling of the simulation volume is used, and the evolution of tiles is spatially decoupled by splitting the Lagrangian displacement field into large and small-scale contributions.
In sCOLA, the frame of reference used in the evolution of tiles is given by LPT, but it could be easily replaced by a frame of reference including both LPT and an ML contribution, as introduced in this paper.
Such an approach would overcome the memory limitations of GPUs, which currently limit COCA to small simulation volumes.
An additional benefit of this approach would be the inexpensive generation of light-cones. Indeed, when using a tiling approach as with sCOLA, only one tile needs to be evolved to a redshift of zero; the tiles farthest from the observer only need to be evolved until they intersect the light-cone at higher redshifts.
It is important to emphasise that the specific implementation details used in this work are not requirements but just an example.
For instance, one could use a perturbation theory-informed integrator for the equations of motion <cit.>, an approach complementary to COLA for fast generation of approximate cosmological simulations.
Furthermore, instead of using training simulations run with a PM gravity solver, one could learn the frame of reference given simulations with higher force accuracy, for example using a P^3M or tree-based gravity solver.
Subsequently, solving the equations of motion in the emulated frame of reference with the same solver would result in simulations with similar accuracy to those of the training set, but with significantly reduced computational cost.
The guarantee that any emulation mistakes are removed asymptotically as the number of force evaluations increases—a central feature of COCA—will remain.
This ML-safety cannot be guaranteed through direct emulation of P^3M or tree-based simulations.
As with COLA, the COCA framework could be adapted to include more extended physical models, such as neutrinos, which induce a scale-dependent growth factor <cit.>.
Finally, although in this work we have focused on gravitational N-body simulations in a cosmological context, the approach of solving equations of motion in an emulated frame of reference could be applied to any kind of simulation involving interacting particles (e.g., electrodynamics, hydrodynamics, radiative transfer, magnetohydrodynamics).
We generally expect a reduction in computational demands while retaining physical guarantees of convergence to the truth.
Benefiting from its modest computational cost, COCA could be used in analyses of cosmological data using fully non-linear models. It could straightforwardly be used as a forward model in implicit likelihood inference algorithms such as delfi <cit.>, bolfi <cit.>, selfi <cit.>, or the LtU-ili pipeline <cit.>.
As COCA is an ML-safe framework, its use as a forward model cannot bias the inference result.
We also note that using a V-net emulator and a PM force solver, the entire COCA framework is differentiable. For the emulation of the frame of reference, differentiability is achieved via automatic differentiation. For the N-body evolution, differentiable PM simulators already exist <cit.>. Building upon these, future work could be dedicated to writing a differentiable COCA solver, which could be used in Bayesian large-scale structure inference using an explicit field-level likelihood <cit.>.
Machine learning offers great promise in the acceleration of forward modelling in the physical sciences.
The output of any ML model is usually an approximation with inevitable emulation errors. In this paper, we have shown that emulation errors are correctable in gravitational N-body simulations.
By solving the correct physical equations while using the ML solution as an approximation, one can exploit the speed of ML while retaining the safety of more traditional methods.
§ DETAILED MODEL EQUATIONS AND TIME STEPPING
§.§ Model equations with COCA
Using the notations of <cit.> and <cit.>, we consider dark matter particles with positions x and momenta p in comoving coordinates. Denoting the scale factor as a and the over-density field as δ, the equations to be solved are:
dx/da = D(a) p with D(a) ≡ 1/(a^2 ℋ(a)),
dp/da = K(a) ∇(Δ^-1 δ) with K(a) ≡ -(3/2) Ω_m^(0) (ℋ^(0))^2/(a ℋ(a)),
where ℋ(a) ≡ a^'/a is the conformal Hubble factor (a prime denoting a derivative with respect to conformal time) and Ω_m^(0) is the matter density parameter at the present time (a=1). For simplicity, we write ∇_x = ∇, Δ_x = Δ and δ(x,a) = δ.
With x(a) = x_LPT(a) + x_ML(a) + x_res(a) (denoting the Lagrangian perturbation theory, machine-learnt, and residual contributions to the position, respectively),
we write for each contribution y∈{LPT, ML, res}:
dx_y/da ≡ D(a) p_y and dp_y/da = d/da [ (1/D(a)) dx_y/da ] ≡ -K(a) V[x_y](a),
where the differential operator V[·](a) is defined by
V[·](a) ≡ -(1/K(a)) d/da [ (1/D(a)) d·/da ] .
Analogously, one writes the momenta as p(a) = p_LPT(a) + p_ML(a) + p_res(a), and thus Eqs. (<ref>) and (<ref>) take the form
dx/da = D(a) {p_res(a) + p_LPT(a) + p_ML(a)},
dp_res/da = K(a) {[∇(Δ^-1 δ)](a) + V[x_LPT](a) + V[x_ML](a)}.
The analytical properties of LPT are <cit.>:
x_LPT(a) = q - D_1(a) Ψ_1 + D_2(a) Ψ_2 ,
D(a) p_LPT = -(dD_1/da) Ψ_1 + (dD_2/da) Ψ_2 ,
V[x_LPT](a) = - D_1(a) Ψ_1 + [D_2(a) - D_1^2(a)] Ψ_2 ,
where Ψ_1 and Ψ_2 are the time-independent first and second order displacements, with corresponding growth factors D_1 and D_2.
This gives
dx/da = D(a) {p_res(a) + p_ML(a)} - (dD_1/da) Ψ_1 + (dD_2/da) Ψ_2,
dp_res/da = K(a) {[∇(Δ^-1 δ)](a) - D_1(a) Ψ_1 + [D_2(a) - D_1^2(a)] Ψ_2 + V[x_ML](a)}.
Furthermore, for any arbitrary positive function u of a, we can rewrite
dx/da = D(a) u(a) {(1/u(a)) p_res(a) + (1/u(a)) p_ML(a)} - (dD_1/da) Ψ_1 + (dD_2/da) Ψ_2,
dp_res/da = (du/da) { [K(a)/(du/da)] × [ [∇(Δ^-1 δ)](a) - D_1(a) Ψ_1 + [D_2(a) - D_1^2(a)] Ψ_2 + V[x_ML](a) ] }.
§.§ Time stepping with COCA
In this paper, we adopt the second order symplectic “kick-drift-kick” algorithm, also known as the leapfrog scheme <cit.>, to integrate the equations of motion, for a series of n+1 time steps t(a) between t_0=t(a_i) and t_n+1=t(a_f). This algorithm relies on integrating the model equations on small time steps and approximating the momenta and accelerations that appear in the integrands (the part between curly brackets in the model equations) by their value at some time within the interval.
The discrete versions of the COCA model equations (equations (<ref>)–(<ref>) or (<ref>)–(<ref>)) give the Drift and Kick operators for COCA:
D(t_i^D,t_f^D,t^K) : x(t_i^D) ↦x(t_f^D) = x(t_i^D) + α_p(t_i^D, t_f^D, t^K) p_res(t^K) - [ D_1 ]_t_i^D^t_f^DΨ_1 + [ D_2 ]_t_i^D^t_f^DΨ_2
+ α_p(t_i^D, t_f^D, t^K) p_ML(t^K),
K(t_i^K,t_f^K,t^D) : p_res(t_i^K) ↦p_res(t_f^K) = p_res(t_i^K) + β_δ(t_i^K, t_f^K, t^D) ×
{[ ∇(Δ^-1δ)](t^D) - D_1(t^D) Ψ_1 + [ D_2(t^D) - D_1^2(t^D) ] Ψ_2 + g_ML(t^D) } .
Using equations (<ref>)–(<ref>), the standard discretisation of operators <cit.> gives the time prefactors as <cit.>,
α_p(t_i^D, t_f^D, t^K) ≡ ∫_t_i^D^t_f^D D(t̃) dt̃ = ∫_t_i^D^t_f^D dt̃ / (t̃^2 ℋ(t̃)), β_δ(t_i^K, t_f^K, t^D) ≡ ∫_t_i^K^t_f^K K(t̃) dt̃ = -(3/2) Ω_m^(0) (ℋ^(0))^2 ∫_t_i^K^t_f^K dt̃ / (t̃ ℋ(t̃)) .
The arbitrary function u of a appearing in equations (<ref>)–(<ref>) can be used to improve upon the standard discretisation of operators <cit.>. Indeed, if during the time step, the terms between brackets in equations (<ref>)–(<ref>) are closer to constants than the terms between brackets in equations (<ref>)–(<ref>), the approximation will hold better.
Therefore, using equations (<ref>)–(<ref>), the modified discretisation of operators gives the time prefactors, for any positive function u of t, as <cit.>,
α_p(t_i^D, t_f^D, t^K) ≡ (1/u(t^K)) ∫_t_i^D^t_f^D D(t̃) u(t̃) dt̃, β_δ(t_i^K, t_f^K, t^D) ≡ [ u(t_f^K) - u(t_i^K) ] × K(t^D) / [du(t)/dt](t^D) .
In this paper, consistently with earlier literature, we use u(t) ≡ a^n_LPT with n_LPT = -2.5 <cit.>.
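As a minimal numerical sketch, the prefactors of the modified discretisation can be evaluated with the scale factor a as the time variable and a flat ΛCDM background; the parameter values, function names, and the use of scipy's quadrature are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.integrate import quad

Omega_m0 = 0.3   # assumed matter density parameter
H0 = 100.0       # Hubble constant in these units; conformal H(a=1) = H0

def conformal_H(a):
    """Conformal Hubble factor H(a) = a H(a) for a flat LCDM background."""
    return a * H0 * np.sqrt(Omega_m0 / a**3 + (1.0 - Omega_m0))

def D_of(a):
    return 1.0 / (a**2 * conformal_H(a))

def K_of(a):
    return -1.5 * Omega_m0 * conformal_H(1.0)**2 / (a * conformal_H(a))

def u_of(a, n_lpt=-2.5):
    """u(a) = a^n_LPT, the choice adopted in the text."""
    return a**n_lpt

def du_da(a, n_lpt=-2.5):
    return n_lpt * a**(n_lpt - 1.0)

def alpha_p(a_iD, a_fD, a_K):
    """Drift prefactor alpha_p in the modified discretisation."""
    return quad(lambda a: D_of(a) * u_of(a), a_iD, a_fD)[0] / u_of(a_K)

def beta_delta(a_iK, a_fK, a_D):
    """Kick prefactor beta_delta in the modified discretisation."""
    return (u_of(a_fK) - u_of(a_iK)) * K_of(a_D) / du_da(a_D)
```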
The ML frame of reference gives particles an acceleration g_ML(t^D) which should satisfy
∫_t_i^K^t_f^K K(t) V[x_ML](t) dt = ∫_t_i^K^t_f^K (du(t)/dt) × { [K(t)/(du(t)/dt)] V[x_ML](t) } dt ≈ β_δ(t_i^K, t_f^K, t^D) g_ML(t^D) .
In the standard discretisation, the integral can be approximated by using the value of V[x_ML](t) at t^D (assuming it is constant during the time step), giving β_δ(t_i^K, t_f^K, t^D) V[x_ML](t^D) with the definition of β_δ(t_i^K, t_f^K, t^D) given in equation (<ref>). In the modified discretisation, the integral can be approximated using the value of [K(t)/(du(t)/dt)] V[x_ML](t) at t^D, giving also β_δ(t_i^K, t_f^K, t^D) V[x_ML](t^D) but with the definition of β_δ(t_i^K, t_f^K, t^D) given in equation (<ref>). In both cases, we get
g_ML(t^D) ≡V[x_ML](t^D) .
But the integral is also:
∫_t_i^K^t_f^K K(t) V[x_ML](t) dt = ∫_t_i^K^t_f^K -d/dt [ (1/D(t)) dx_ML/dt ] dt = ∫_t_i^K^t_f^K -(dp_ML/dt) dt = p_ML(t_i^K) - p_ML(t_f^K),
which gives the alternative form
g_ML(t^D) ≡1/β_δ(t_i^K, t_f^K, t^D)[ p_ML(t_i^K) - p_ML(t_f^K) ].
As such, to use the COCA Kick and Drift operators (Eqs. (<ref>) and (<ref>)), one does not need to emulate both p_ML and g_ML: a single emulator (for p_ML), evaluated at the kick time steps, is sufficient.
In the end, the time evolution between t_0 and t_n+1 is achieved by applying the following operator to the initial state {x(t_0),p(t_0) }:
L_+(t_n+1) E(t_n+1,t_0) L_-(t_0),
where E(t_n+1,t_0) is the operator given by (see <ref>)
K(t_n+1/2,t_n+1,t_n+1)D(t_n,t_n+1,t_n+1/2) [ ∏_i=0^n K(t_i+1/2,t_i+3/2,t_i+1) D(t_i,t_i+1,t_i+1/2) ] K(t_0,t_1/2,t_0),
and L_± will be defined in <ref>.
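For concreteness, the COCA Drift and Kick operators defined above can be sketched in code. This is a schematic illustration only: `p_ml_of` stands in for the trained emulator returning p_ML at a kick time, `g_delta_of` for the PM force ∇(Δ^-1 δ), and the prefactors α_p, β_δ and growth factors D_1, D_2 are assumed to be supplied by background-cosmology routines such as the ones sketched earlier.

```python
def coca_drift(x, p_res, Psi1, Psi2, tiD, tfD, tK,
               p_ml_of, alpha_p, D1, D2):
    """COCA Drift operator: x(tiD) -> x(tfD), following Eq. (Drift)."""
    a = alpha_p(tiD, tfD, tK)
    return (x + a * p_res
            - (D1(tfD) - D1(tiD)) * Psi1
            + (D2(tfD) - D2(tiD)) * Psi2
            + a * p_ml_of(tK))          # frame-of-reference momentum term

def coca_kick(p_res, x, Psi1, Psi2, tiK, tfK, tD,
              p_ml_of, g_delta_of, beta_delta, D1, D2):
    """COCA Kick operator: p_res(tiK) -> p_res(tfK), following Eq. (Kick)."""
    b = beta_delta(tiK, tfK, tD)
    # g_ML is reconstructed from the emulated momenta at the two kick times,
    # so a single emulator (for p_ML) is sufficient.
    g_ml = (p_ml_of(tiK) - p_ml_of(tfK)) / b
    g = (g_delta_of(x, tD)
         - D1(tD) * Psi1
         + (D2(tD) - D1(tD)**2) * Psi2
         + g_ml)
    return p_res + b * g
```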
§.§ Generic Drift and Kick operators for PM, COLA and COCA
The difference between the COCA Kick and Drift operators and the corresponding COLA operators <cit.> is the last term in each operator. Therefore, we can introduce generic operators, valid for both COLA and COCA: for any external momentum p_ext and acceleration g_ext,
D(t_i^D,t_f^D,t^K) : x(t_i^D) ↦x(t_f^D) = x(t_i^D) + α_p(t_i^D, t_f^D, t^K) p_res(t^K) + α_LPT1(t_i^D, t_f^D, t^K) Ψ_1 + α_LPT2(t_i^D, t_f^D, t^K) Ψ_2
+ α_ext(t_i^D, t_f^D, t^K) p_ext(t^K),
K(t_i^K,t_f^K,t^D) : p_res(t_i^K) ↦p_res(t_f^K) = p_res(t_i^K) + β_δ(t_i^K, t_f^K, t^D) g_δ(t^D) + β_LPT1(t_i^K, t_f^K, t^D) Ψ_1 + β_LPT2(t_i^K, t_f^K, t^D) Ψ_2
+ β_ext(t_i^K, t_f^K, t^D) g_ext(t^D) ,
where
α_LPT1(t_i^D, t_f^D, t^K) ≡ - [ D_1 ]_t_i^D^t_f^D,
α_LPT2(t_i^D, t_f^D, t^K) ≡ [ D_2 ]_t_i^D^t_f^D,
α_ext(t_i^D, t_f^D, t^K) ≡ α_p(t_i^D, t_f^D, t^K) for COCA or 0 for COLA,
p_ext(t^K) = p_ML(t^K) for COCA or 0 for COLA,
g_δ(t^D) ≡ [ ∇(Δ^-1δ)](t^D),
β_LPT1(t_i^K, t_f^K, t^D) = - β_δ(t_i^K, t_f^K, t^D) D_1(t^D),
β_LPT2(t_i^K, t_f^K, t^D) = β_δ(t_i^K, t_f^K, t^D) [ D_2(t^D) - D_1^2(t^D) ],
β_ext(t_i^K, t_f^K, t^D) = β_δ(t_i^K, t_f^K, t^D) for COCA or 0 for COLA,
g_ext(t^D) = g_ML(t^D) for COCA or 0 for COLA.
We note that these operators also remain valid for a standard PM algorithm, by setting α_LPT1(t_i^D, t_f^D, t^K), α_LPT2(t_i^D, t_f^D, t^K), β_LPT1(t_i^K, t_f^K, t^D), and β_LPT2(t_i^K, t_f^K, t^D) to zero.
§.§ Machine learning prediction of the frame of reference in COCA
The goal in COCA is to find the frame of reference in which p_res is as small as possible. Therefore, the machine needs to predict:
* At any “kick” time step t^K,
p_ML(t^K) ≡1/α_p(t_i^D, t_f^D, t^K)[ x(t_f^D)- x(t_i^D) - α_LPT1(t_i^D, t_f^D, t^K) Ψ_1 - α_LPT2(t_i^D, t_f^D, t^K) Ψ_2 ] ≡p_res^COLA(t^K),
that is the momentum residual p_res^COLA(t^K) of COLA <cit.>.
* At any “drift” time step t^D,
g_ML(t^D) ≡ - 1/β_δ(t_i^K, t_f^K, t^D)[ β_δ(t_i^K, t_f^K, t^D) g_δ(t^D) + β_LPT1(t_i^K, t_f^K, t^D) Ψ_1 + β_LPT2(t_i^K, t_f^K, t^D) Ψ_2 ]
= 1/β_δ(t_i^K, t_f^K, t^D)[ p_res^COLA(t_i^K) - p_res^COLA(t_f^K) ] ≡ - g_res^COLA(t^D),
that is the residual acceleration g_res^COLA(t^D) of COLA <cit.>, up to a minus sign.
From equations (<ref>) and (<ref>), we see that it is sufficient for the machine to predict the momentum residual p_res^COLA(t^K) of COLA at any “kick” time step t^K, as the accelerations g_res^COLA(t^D) can be derived from the momenta.
§.§ Initial and final momenta of particles in COCA
In the initial conditions, we have p(t_0) = p_LPT(t_0) + p_ML(t_0), which means that the momentum residual in the COCA frame of reference, p_res(t_0) = p(t_0) - p_LPT(t_0) - p_ML(t_0), should be initialised to zero. Furthermore, if initial conditions are generated with LPT, the ML contribution is p_ML(t_0) = 0 initially and we recover p_res(t_0) = p(t_0) - p_LPT(t_0), as in COLA.
At the end, the momentum p_LPT(t_n+1) + p_ML(t_n+1) of the COCA frame of reference has to be added to p_res(t_n+1) to recover the full momentum of particles, p(t_n+1). These operations correspond respectively to the L_-(t_0) : p(t_0) ↦p_res(t_0) and L_+(t_n+1) : p_res(t_n+1) ↦p(t_n+1) operators, given by
L_±(t) : p(t) ↦ p(t) ± p_LPT(t) ± p_ML(t) = p(t) ± (1/D(t)) [ -(dD_1/dt) Ψ_1 + (dD_2/dt) Ψ_2 ] ± p_ML(t) .
§ STATEMENT OF CONTRIBUTION
DJB implemented the frame of reference emulator, ran the COCA and COLA simulations, produced the plots, and contributed to the analysis and interpretation of the results and the writing of the paper.
MC contributed to the model equations and the validation of COCA.
LD provided code for the bispectrum analyses, advised on the design of the emulator, and contributed to the interpretation of ML performance results.
FL conceived and designed the study, wrote the model equations, modified the code to accept any arbitrary frame of reference as input, contributed to the analysis and interpretation of results, edited the manuscript, advised early-career authors, and secured funding.
All authors read and approved the final manuscript.
§ ACKNOWLEDGEMENTS
We thank
Guilhem Lavaux, Natalia Porqueres, Benjamin Wandelt, and Ewoud Wempe
for useful comments and suggestions.
DJB and LD are supported by the Simons Collaboration on “Learning the Universe.”
FL and MC acknowledge financial support from the Agence Nationale de la Recherche (ANR) through grant INFOCW, under reference ANR-23-CE46-0006-01. LD acknowledges support by the Swedish Research Council (VR) under the project 2020-05143 – “Deciphering the Dynamics of Cosmic Structure".
This work has received funding from the Centre National d’Etudes Spatiales (CNES).
This work was done within the Aquila Consortium (<https://www.aquila-consortium.org/>).
For the purposes of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
Inferring Cosmological Parameters on SDSS via Domain-Generalized Neural Networks and Lightcone Simulations

Jun-Young Lee, Ji-hoon Kim, Minyong Jung, Boon Kiat Oh, Yongseok Jo, Songyoun Park, Jaehyun Lee, Yuan-Sen Ting, and Ho Seong Hwang

Corresponding author: Ji-hoon Kim ([email protected])

Jun-Young Lee: Institute for Data Innovation in Science, Seoul National University, Seoul 08826, Korea; Center for Theoretical Physics, Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea
Ji-hoon Kim: Institute for Data Innovation in Science, Seoul National University, Seoul 08826, Korea; Center for Theoretical Physics, Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea; Seoul National University Astronomy Research Center, Seoul 08826, Korea
Minyong Jung: Center for Theoretical Physics, Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea
Boon Kiat Oh: Department of Physics, University of Connecticut, Storrs, CT 06269, USA
Yongseok Jo: Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027, USA; Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA; Center for Theoretical Physics, Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea
Jaehyun Lee: Korea Astronomy and Space Science Institute, Daejeon 34055, Republic of Korea
Yuan-Sen Ting: Research School of Astronomy & Astrophysics, Australian National University, Canberra, ACT 2611, Australia; School of Computing, Australian National University, Acton ACT 2601, Australia; Department of Astronomy, The Ohio State University, Columbus, OH 43210, USA; Center for Cosmology and AstroParticle Physics (CCAPP), The Ohio State University, Columbus, OH 43210, USA
Ho Seong Hwang: Seoul National University Astronomy Research Center, Seoul 08826, Korea; Astronomy Program, Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea
§ ABSTRACT
We present a proof-of-concept simulation-based inference of Ω_ m and σ_8 from the SDSS BOSS LOWZ NGC catalog using neural networks and domain generalization techniques, without the need for summary statistics. Using the rapid lightcone simulation L-PICOLA, mock galaxy catalogs are produced that fully incorporate the observational effects. The collection of galaxies is fed as input to a point cloud-based network, Minkowski-PointNet. We also add relatively more accurate GADGET mocks to obtain robust and generalizable neural networks. By explicitly learning representations that reduce the discrepancies between the two different datasets via the semantic alignment loss term, we show that the latent space configuration aligns into a single plane in which the two cosmological parameters form clear axes.
Consequently, during inference, the SDSS BOSS LOWZ NGC catalog maps onto this plane, demonstrating effective generalization and improving prediction accuracy compared to non-generalized models.
Results from the ensemble of 25 independently trained machines find Ω_ m=0.339±0.056 and σ_8=0.801±0.061, inferred only from the distribution of galaxies in the lightcone slices without relying on any indirect summary statistics.
A single machine that best adapts to the GADGET mocks yields a tighter prediction of Ω_ m=0.282±0.014 and σ_8=0.786±0.036.
We emphasize that adaptation across multiple domains can enhance the robustness of neural networks applied to observational data.
§ INTRODUCTION
Following its success in explaining the clustering of matter over a wide range of scales, the ΛCDM model has now ushered in the era of precision cosmology.
The small perturbations imprinted in the cosmic microwave background grow as cold dark matter falls into and deepens potential wells.
Small structures gravitationally evolve to create the characteristic cosmic web of filaments and voids referred to as the large-scale structure (LSS) <cit.>, which is observable in galaxy surveys <cit.>.
The LSS serves as a widely used probe for constraining the cosmological parameters constituting the ΛCDM model, as it maps the distribution and motion of matter throughout the universe over time.
Over the past few decades, a series of galaxy redshift surveys have been conducted extensively to trace the distribution of galaxies and the growth history of LSS across a large spatial extent and depth <cit.>.
Considering the galaxy distribution as a (biased) proxy for the total matter content of the universe, power spectrum multipoles and n-point correlation functions (n-pCF) can be derived to express matter clustering at different scales.
These summary statistics serve as essential components in the development of mock catalogs and in the inference of cosmological parameters.
The construction of survey-specific mocks, which mimic similar summary statistics and the geometry of the survey, imposes constraints on certain cosmological parameters <cit.>.
Through high-resolution simulations in large volumes and by assigning adequate band magnitudes and spectroscopic information, generic catalogs applicable to various observational surveys can also be generated <cit.>.
Other than producing the mocks that best match the observational catalog, derived summary statistics from realizations simulated with varying cosmology can be compared with the observational counterpart to make inference on the cosmological parameters, an approach referred to as simulation-based inference <cit.>. While these cited works rely on predefined summary statistics, the simulation-based inference framework allows for the potential use of raw inputs together with the neural networks' flexible featurizations, which permits the exploration beyond summary statistics.
With the advent of artificial intelligence and machine learning, simulation-based inference of cosmological parameters has been accelerated.
This involves inferring cosmological parameters from simulations by matching summary statistics or features, with neural networks serving as an option alongside more traditional measures of statistical inference such as Markov chain Monte Carlo <cit.>.
In particular, classic summary statistics such as the n-pCF and power spectra, which convey limited information about the matter distribution of the universe, can be replaced with features extracted by neural networks that capture much more complex information engraved inside <cit.>.
Attributed to this capability of extracting rich information not hinted at in the summary statistics, simulation-based inference with neural networks has shown the possibility of producing tight predictions on the cosmological parameters <cit.>.
Therefore, the importance of simulation-based inference is being recognized as it can serve as an alternative for verifying and possibly resolving tensions in the cosmological parameters predicted from CMB observations and galaxy surveys, especially concerning H_0 and S_8≡σ_8√(Ω_ m/0.3) <cit.>.
In this context, AI-driven projects have been launched to perform diverse tasks, including parameter estimation <cit.>.
Especially in the estimation of cosmological parameters, 21-cm tomography light cones <cit.>, weak lensing (WL) convergence and shear maps <cit.>, dark matter density fields <cit.>, and halo catalogs <cit.> were utilized as inputs for various neural network architectures, typically in a traditional supervised learning setup.
In contrast to the direct input of mocks, derived summary statistics such as the n-pCF, count-in-cell, void probability function, star formation rate density, and stellar mass functions (SMF) were also used as inputs <cit.>.
In addition, individual galaxy properties <cit.>, galaxy cluster properties <cit.>, or snapshots of galaxy catalogs <cit.> were shown to be useful as inputs for neural networks.
Among the listed works, most tested their pipeline on simulated data sets, and only a few successfully generalized their neural networks to the actual observational data.
<cit.> and <cit.> created a masked autoregressive flow using the power spectrum and bispectrum as summary statistics to provide constraints on cosmological parameters, based on the SDSS BOSS CMASS catalog <cit.>.
In contrast, <cit.> leveraged 2-pCF from lognormal mocks as input to fully connected layers (FCL). <cit.> used FCL emulators to perform implicit likelihood inference on observed SMF <cit.> and SFRD <cit.>.
Parameter inferences using WL convergence maps as probes, including the Kilo Degree Survey <cit.> and Subaru Hyper Suprime-Cam first-year surveys <cit.> were also performed with Convolutional Neural Networks (CNNs) or Graph CNNs <cit.>.
Notably, recent studies regard neural networks' outputs of predicted parameters as summary statistics due to their centrally biased nature <cit.>, and perform additional Bayesian inferences.
In line with efforts to use deep learning for constraining cosmological parameters, this paper aims to perform a proof-of-concept test of conducting cosmological inference using the galaxy redshift survey, without relying on any indirect summary statistics, but rather utilizing the total raw distribution of galaxies as input to the neural network.
For this test, we focus mainly on Ω_ m and σ_8, which are directly related to the S_8≡σ_8√(Ω_ m/0.3) tension as mentioned above. As mentioned in <cit.>, this choice is due to the fact that Ω_ m and σ_8 are the parameters that are sensitive to the cosmological information of the clustering galaxies, while others are less constrained.
In order to reduce any artificial priors arising from survey-specific observational biases, we rapidly generate a large mock suite that fully includes observational effects such as redshift space distortion, survey footprint, stellar mass incompleteness, radial selection, and fiber collision in the SDSS BOSS LOWZ Northern Galactic Cap (NGC) catalog.
Then, using the position and mass information of individual and neighboring galaxies, we make inference on Ω_ m and σ_8, again without relying on any indirect summary statistics.
The biggest difficulty in using the whole galaxy catalog as input instead of the summary statistics is that the selection of codes begets overall differences in the resultant realizations.
The differences are easily discernible and distinguishable by complex neural networks.
Consequently, naively merging the different sets of mocks or domains limits the machines to merely learning fragmented domain-specific knowledge.
Recent studies have tried to address such issues, as machines failing to attain robustness exhibit poor performances and lack predictability on unseen domains <cit.>.
Moreover, as simulated catalogs do not perfectly portray the actual universe, such discrepancies may significantly aggravate the performance of machines onto unseen observed data.
Especially, the rapid generation of mocks trades off with the inaccuracies compared to the relatively time-consuming simulations, leading to a clear deviation.
In order to make effective inferences on different types of simulation or domains, the neural network must achieve generalizability.
This study focuses mainly on extracting and learning unified representations originating from distinct domains and exploiting generalized and integrated knowledge on the observational data.
This paper is organized as follows.
In Section <ref>, we illustrate the creation of our mock data, which thoroughly integrate the observational effects. We produce two suites of mocks using two distinct simulations, L-PICOLA and GADGET. The footprint and lightcone slices are shown together with the observational target, the SDSS BOSS LOWZ NGC catalog, and its survey-specific set of mock catalogs, MD-PATCHY, for comparison.
In Section <ref>, input features and the neural network architecture are introduced together with the training strategies in Section <ref>, to align the latent space representations of different mocks and achieve domain generalization or robustness.
In Section <ref>, implicit likelihood estimates in Ω_ m and σ_8 using the SDSS BOSS LOWZ NGC catalog are shown.
We also discuss the impact of fine-tuned mocks on the predictability and generalizability of the machine.
Finally, the results and the following conclusions are summarized in Section <ref>.
The overall approach taken by the paper is schematically shown in Figure <ref>.
§ GALAXY CATALOG: OBSERVATION AND SIMULATION
§.§ The Reference SDSS Catalog
In this study, we utilize the Baryon Oscillation Spectroscopic Survey <cit.>, part of SDSS-III <cit.>, which extends the previously studied distribution of luminous red galaxies <cit.> from SDSS I/II, adding fainter galaxies and thus larger number densities, for the purpose of measuring baryon acoustic oscillations. The survey consists of the LOWZ <cit.> and CMASS <cit.> catalogs, which have different color and magnitude cuts. The LOWZ catalog targets galaxies at a low redshift of z≲ 0.4, while CMASS targets a higher redshift range of 0.4≲ z ≲ 0.7.
The LOWZ samples are roughly considered as volume-limited, whereas the CMASS samples, representing `constant mass', are considered volume-limited within the mass and redshift ranges of M_⋆ > 10^11.3M_⊙ and z ≲ 0.6 <cit.>. Using the MKSAMPLE code, the LSS catalogs for both LOWZ and CMASS were created for BOSS DR12, fully equipped with survey masks and random samples. These samples include completeness and weights calculated for the analysis of large-scale structure <cit.>.
To account for the stellar mass incompleteness of the survey and to incorporate cosmological information from the stellar masses of galaxies later on, we obtain stellar mass data from the value-added Portsmouth SED-fits catalog <cit.>, assuming a passive evolution model with the Kroupa IMF <cit.>. Since the Portsmouth SED-fits catalog includes both BOSS and LEGACY targets, we need to select those that are included in the LSS catalog. Following <cit.>, we match galaxies using the unique combination of tags MJD, PLATEID, and FIBERID, and then assign the stellar masses from the matched galaxies in the Portsmouth catalog to the corresponding entries in the LSS catalog.
In this work, we use the Northern Galactic Cap (NGC) of the LOWZ samples with RA=150^∘–240^∘ and DEC>0^∘. The selection of the LOWZ samples and the cropped regions is due to the limited volume of the lightcone simulations that will be used to generate mocks. Using this catalog as a benchmark, we generate mocks that incorporate the same observational effects: redshift space distortions, survey footprint geometry, stellar mass incompleteness, radial selection matching, and fiber collision (see Section <ref> for more information).
§.§ Rapidly Generated Lightcone Mocks, L-picola
L-PICOLA is a rapid dark matter simulation code that employs the COmoving Lagrangian Acceleration method <cit.> and supports on-the-fly generation of lightcones. At the expense of minute errors—2% in the power spectrum and 5% in the bispectrum—the code allows for the rapid generation of dark matter distributions in large box sizes <cit.>. Numerous studies have leveraged this computational efficiency to produce a vast number of mock catalogs aimed at diverse observations <cit.>.
In a box volume of (1.2h^-1Gpc)^3 we simulate the evolution of 1200^3 dark matter particles on 1200^3 meshes. Each particle has a mass of approximately M_p ≈ 8.3×10^10 (Ω_m/0.3) h^-1 M_⊙. The simulation starts with a 2LPT initial condition generated with 2LPTIC <cit.> at z_initial=9 and progresses in 10 steps to z=0.45, as <cit.> suggests for sufficient precision in the resolution adopted here, with 10 lightcone slices generated from z=0.45 to z=0. A total of 1500 simulations are produced, incorporating cosmic variance across varying Ω_ m and σ_8. Each of the two parameters is randomly sampled from a uniform distribution of Ω_ m∈[0.1, 0.5] and σ_8∈[0.6, 1.0]. We assume H_0=100 h km s^-1 Mpc^-1 with h=0.674 and n_s=0.96, following the results from <cit.>. We select the realization whose pair of cosmological parameters is most similar to the fiducial cosmology of the MD-PATCHY mocks, with Ω_ m=0.3067 and σ_8=0.8238, and use it as our fiducial L-PICOLA realization. We obtain the halos using the ROCKSTAR halo finder <cit.> in lightcone mode, considering a minimum number of 10 particles as a seed halo (the most detailed layer of the subgroup hierarchy determined by the friends-of-friends algorithm). Thus, we impose a cut in the halo mass of log(M_ h/h^-1 M_⊙)=11.45. Subsequently, the 1500 catalogs are rotated and reflected in six directions following <cit.>, generating a total of 9000 realizations referred to as L-PICOLA mocks. These mocks will be further cropped and masked separately according to the observational effects. From these subhalo catalogs, we establish a one-to-one correspondence between subhalos and galaxies.
§.§ Adaptation: Gravitational N-body Simulation Mocks, Gadget
The L-PICOLA mocks described in Section <ref> lack accuracy in the clustering statistics on small scales compared to full N-body simulations (see Section <ref>). Therefore, we similarly generate mocks using GADGET-4 <cit.> in lightcone mode, which we refer to as GADGET mocks. Although they require more computational time and resources to generate than L-PICOLA mocks, GADGET mocks are generally considered to offer higher fidelity at smaller scales <cit.>. Consequently, we use GADGET mocks as adaptation standards for the neural networks, refining the code-specific knowledge from L-PICOLA mocks by implementing a training strategy that aligns the neural networks' extracted representations. For additional details, refer to Section <ref>.
The simulation resolution is the same as that of the mock suites generated with L-PICOLA: a box volume of (1.2h^-1Gpc)^3 and 1200^3 dark matter particles, with a softening length of 10h^-1kpc. The simulation initiates with a 2LPT initial condition generated with N-GenIC <cit.> at z_initial=10, similar to L-PICOLA, and ends at z=0.[We acknowledge that starting a full N-body simulation such as GADGET at such a low redshift may lead to inaccuracies, unlike L-PICOLA, despite the reduction of computational resources. The choice of the initial redshift was based on the comparative analyses presented in <cit.>. We leave such improvements to be addressed in our future work.] The cosmological parameters of the fiducial run are set to be identical to those of the MD-PATCHY mocks in Section <ref>: Ω_ m=0.307115, σ_8=0.8288, and h=0.6777, with other parameters fixed to the previously stated values. Furthermore, in order to test the machine's predictability for non-fiducial mocks, we produce two additional GADGET runs, one with Ω_ m=0.2, σ_8=0.7 and one with Ω_ m=0.4, σ_8=0.8. We generate 6 samples each by rotating and reflecting the three simulations, totalling 18 samples.
§.§ Adaptation: Fine-Tuned Mocks, MD-patchy
MULTIDARK PATCHY mocks (hereafter MD-PATCHY) are mock galaxy catalogs designed to match the SDSS-III BOSS survey <cit.>. They reference the BIGMULTIDARK simulation <cit.>, an N-body simulation run with GADGET-2 <cit.>. The halos from BIGMULTIDARK are populated using the stochastic halo abundance matching technique, and the observational effects including redshift space distortion, survey footprint, stellar mass incompleteness, radial selection, and fiber collision are incorporated using the SUGAR code <cit.>. The reference catalog is used to calibrate PATCHY <cit.>, which employs augmented Lagrangian Perturbation Theory <cit.> to generate dark matter fields. These fields are biased, and the halo masses are identified using the HADRON code <cit.>, which takes the halos' environmental information into account. The halo catalog is further processed into galaxy mocks using the halo abundance matching procedure in the SUGAR code. Specifically, the clustering statistics are fitted by fine-tuning a single parameter–the scatter in the HAM procedure (σ_HAM(V_peak|M_⋆)), where M_⋆ represents the stellar mass and V_peak the peak velocity reached over the history of the halo. In total, 10240 mocks that mimic the clustering statistics, stellar mass functions, and observational effects are produced. The cosmological parameters used are Ω_ m=0.307115, σ_8=0.8288, and h=0.6777. In this work, we focus on the 2048 mocks of the Northern Galactic Cap (NGC) of the LOWZ samples. Similarly to the GADGET mocks in Section <ref>, the MD-PATCHY mocks are used as reference mocks for adaptation of the neural networks during the training phase (see Section <ref> for more information).
§.§ Galaxy-Halo Connection
The galaxy-halo connection is a crucial statistical relation that summarizes the interplay between gravitational evolution and baryonic physics in galaxies and halos, widely studied in the fields of galaxy formation and cosmology <cit.>. Numerous modeling approaches are available, including the halo occupation distribution <cit.>, subhalo abundance matching (SHAM) <cit.>, and combined models such as subhalo clustering and abundance matching <cit.>. In the following, we introduce the two galaxy-halo connection methods used in this work: the fixed stellar-to-halo-mass relation and SHAM.[The galaxy-halo connection models introduced here are indeed simplistic. To account for the detailed connection relation, it may be necessary to track the halo assembly history or to apply varying population models by introducing a few additional parameters. Here, we focus on our proof-of-concept objective rather than investigating this complex relation in depth. Such limitations are left for future work.]
§.§.§ Fixed Stellar-to-Halo-Mass Relation
Here, we adopt a minimal model that connects N-body simulations to galaxy catalogs. Assuming a one-to-one galaxy-subhalo correspondence as employed in previous works <cit.>, we impose a fixed stellar-to-halo-mass relation (SHMR) across different realizations. In other words, we assume that the star formation efficiency of galaxies in halos is equivalent across different cosmologies, within the redshift range of this study.[This is a strong assumption made to derive the stellar masses of each subhalo identified from a dark matter-only simulation, and it is where cosmology-dependent information intervenes. This is due to the impracticality of performing full hydrodynamic simulations of such spatial and temporal extent across varying cosmologies. Despite introducing a weak dependency, we emphasize that this assumption is made for a proof-of-concept test. For a model free of cosmological priors, refer to the SHAM model in Section <ref>.]
We use the SHMR obtained by <cit.>, which compares the DUSTGRAIN-pathfinder simulation <cit.> with the SMF determined in <cit.> from the Cosmological Evolution Survey (COSMOS) <cit.>. The SHMR is analyzed per different redshift bins to account for the temporal variability of the efficiency, parameterized as
M_⋆/M_h(z) = 2A(z) [ (M_h/M_A(z))^-β(z) + (M_h/M_A(z))^γ(z) ]^-1
where M_h is the halo mass and A(z) is the normalization factor at M_A, at which the double power-law breaks. Since our mock galaxies are selected within 0.15<z<0.40, we utilize the SHMR parameters estimated for 0.2<z<0.5. The best-fit parameters are A(z)=0.0429, M_A=11.87, β=0.99, and γ=0.669 when scatter of σ_r=0.2 dex is introduced. We will use these parameters, including the 0.2 dex scatter, for this work.
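A minimal sketch of painting stellar masses onto halos with this double power law and the quoted 0.2 dex scatter is given below; the interpretation of M_A=11.87 as log_10 M_A and the unit convention are our assumptions for illustration, not a prescription from the cited work.

```python
import numpy as np

# Best-fit SHMR parameters quoted in the text for 0.2 < z < 0.5.
A, LOG_MA, BETA, GAMMA, SCATTER_DEX = 0.0429, 11.87, 0.99, 0.669, 0.2

def stellar_mass(m_halo, rng=np.random.default_rng(0)):
    """Assign stellar masses to halo masses via the double power-law SHMR,
    including a 0.2 dex log-normal scatter.  Masses are assumed to be in the
    same (h-scaled) units as M_A."""
    x = np.asarray(m_halo) / 10.0**LOG_MA
    ratio = 2.0 * A / (x**(-BETA) + x**GAMMA)       # M_* / M_h
    log_mstar = np.log10(ratio * np.asarray(m_halo))
    log_mstar += rng.normal(0.0, SCATTER_DEX, size=np.shape(m_halo))
    return 10.0**log_mstar
```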
§.§.§ Subhalo Abundance Matching
In Section <ref>, the fixed SHMR model introduces cosmological priors, as it selects a specific relation connecting the halo mass properties to the baryonic physics. To tackle this issue, we alternatively utilize a non-parametric version of SHAM, a well-known and basic galaxy-halo connection, as previously discussed, which has also been used for constraining cosmological parameters <cit.>. The halo catalogs are painted with stellar masses using a monotonic relation between the simulated halo masses and the stellar masses identified from the observed SDSS BOSS LOWZ NGC catalog. Therefore, the difference between mocks with different cosmologies arises from the clustering of the galaxies rather than from the stellar masses themselves, in contrast to the SHMR model.
We acknowledge that the prescription in our model is simplistic and may not fully describe the galaxy-halo connection. Numerous studies on SHAM have employed the historical peak mass or circular velocity of the halo <cit.>. However, the nature of the on-the-fly generation of lightcones precludes the possibility of utilizing historical information. In order to bypass such limitations, <cit.> use snapshots instead of generating lightcones on-the-fly and employs post-processing to generate lightcones. However, since our focus here is on the proof-of-concept test of inferring cosmological parameters without summary statistics and using neural networks, we accept the inherent crudeness in the galaxy-halo connection model.
§.§ Observational Effects
We include the following observational effects of the SDSS BOSS LOWZ NGC catalog into the simulations: redshift space distortions, survey footprint geometry, stellar mass incompleteness, radial selection matching, and fiber collision. By fully accounting for these observational effects, we can assess how observables from realizations endowed with different sets of cosmological parameters would have deviated from the actual observation.
Firstly, the positions of the model galaxies are shifted using their peculiar velocities to account for the redshift space distortion <cit.>. In order to match the footprint geometry of our mocks to that of the SDSS BOSS LOWZ NGC, we apply acceptance and veto masks. Galaxies are filtered out by applying the MANGLE masks <cit.> using the MAKE_SURVEY code <cit.>. Next, for both the SHMR and SHAM models, we restrict the area of interest to RA=150^∘–240^∘ and DEC>0^∘.[The trimming of the footprint was necessary to accommodate the generation of lightcones in octants of the sphere. This adjustment results in a slight deviation in the data used compared to earlier studies, such as those of <cit.> and <cit.>. Nonetheless, we expect these differences to be minimal, given the modest nature of the change.]
For the SHMR model, we further apply the incompleteness in the galaxy stellar mass function of the SDSS BOSS LOWZ NGC catalog, a statistical bias due to the observational constraints of the survey. Here, we apply the incompleteness of the LOWZ NGC sample, which is modeled by <cit.> using the Stripe 82 Massive Galaxy Catalog to measure the SMF. The incompleteness function is shown in Equation <ref>, where f, σ, and M_1 are free parameters for fitting. We calculate the interpolated incompleteness using the stellar mass and redshift of each galaxy, and decide whether to use or discard the galaxy based on the result.
c = (f/2) [ 1 + erf( log(M_⋆/M_1) / σ ) ]
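A minimal sketch of this completeness-based downsampling is given below; the fit parameters f, M_1, and σ are assumed to have been interpolated in redshift from the cited model and are not reproduced here.

```python
import numpy as np
from scipy.special import erf

def completeness(log_mstar, f, log_m1, sigma):
    """Stellar-mass completeness c of the equation above; parameters are the
    (redshift-interpolated) fit values f, log10 M_1, and sigma."""
    return 0.5 * f * (1.0 + erf((np.asarray(log_mstar) - log_m1) / sigma))

def apply_incompleteness(log_mstar, f, log_m1, sigma,
                         rng=np.random.default_rng(1)):
    """Keep each mock galaxy with probability equal to its completeness."""
    c = completeness(log_mstar, f, log_m1, sigma)
    return rng.uniform(size=np.shape(log_mstar)) < c   # boolean keep-mask
```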
After identifying the galaxies that are not observable due to stellar mass incompleteness and survey geometry, we randomly downsample the galaxies to match the radial selection. This is achieved by finely dividing the redshift range into 260 radial bins with equal redshift space volume spacing.
For the SHAM model, we instead downsample by mass, keeping the most massive objects. Unlike the SHMR model, the SHAM model inherently includes stellar mass incompleteness because we use the observed galaxy catalog, which already carries this incompleteness, as our reference. We downsample by mass rather than randomly in order to preserve the monotonicity of the abundance matching. Similarly to the SHMR model, the sampled galaxies are filtered once more through the fiber collision algorithm, and are then finally assigned the appropriate stellar masses.
Furthermore, we mimic the fiber collisions in the SDSS BOSS LOWZ NGC catalog. The SDSS galaxy spectra were obtained from fibers inserted into perforated plates. Since the fibers have a finite size, with a collision radius of 62'', a portion of fiber-collided galaxies has not been assigned any fibers. Using NBODYKIT <cit.>, we classify the galaxies into two populations, decollided galaxies (D_1) and potentially collided galaxies (D_2) <cit.>, using the angular friends-of-friends algorithm as in <cit.>. The actual abundance matching in the SHAM model is performed after accounting for the fiber collisions in order to fully preserve the number of galaxies. For the SHMR model, however, the stellar mass incompleteness already includes the incompleteness due to fiber collisions. Nevertheless, this reduction should be applied, since fiber collisions are an important systematic bias in the small-scale geometry of the survey. We consider the potential double-counting of fiber collisions within the stellar mass incompleteness to have a negligible impact on our final results.
Figure <ref> compares the galaxy counts per radial bin for the SDSS BOSS LOWZ NGC catalog, the MD-PATCHY mocks, and the GADGET and L-PICOLA mocks generated with the SHMR model. The similarity of the distributions verifies the consistency across all three mocks and the observational catalog. In realizations with low Ω_ m and σ_8 generated with the SHMR model, the absolute number of galaxies is relatively small, and thus the total number of galaxies may be less than that of the fiducial cosmology. Such a deficit can provide critical information, informing the neural network that the real universe is unlikely to have such cosmological parameters. However, the mocks produced with the SHAM model do not differ in the total number of galaxies, as this model directly matches the observed galaxy masses to the halo catalog.
The four panels of Figure <ref> show the footprints of the L-PICOLA, GADGET, and MD-PATCHY mocks and of the SDSS BOSS LOWZ NGC catalog. Notice that the masks are applied identically, showing the same apparent streaks and holes. Figure <ref> shows the lightcone slices from 0^∘<DEC<6^∘ for each of the four catalogs, with the observational effects fully taken into account.
§ NEURAL NETWORK ARCHITECTURE
§.§ Backbone: Minkowski-PointNet
A large portion of the universe is empty, as galaxies are predominantly clustered along the filaments of the LSS. Therefore, depositing galaxies into uniform voxels can be highly inefficient, resulting in many voxels with few or even no galaxies assigned. To mitigate this problem, galaxies are represented as point clouds, with each galaxy depicted as a single point characterized by distinct positions and properties. This representation is then processed through a deep neural network called Minkowski-PointNet, which is a PointNet <cit.> implementation in the Minkowski Engine <cit.>.
PointNet is a neural network architecture that captures the structure of point clouds, which are simplified graphs with no edges, and it can be generalized within the DeepSets framework <cit.>, capturing the permutation invariance and equivariance of point clouds <cit.>. Such geometric priors are captured by the shared 1D convolution layers and the global pooling layers. Although PointNet employs rotation and translation invariance to handle point clouds, such procedures are omitted in our approach because of the redshift dependence of features and clustering, as well as the (RA, DEC) dependence of masking. Moreover, to introduce local properties explicitly, we apply the k-nearest-neighbor (kNN) algorithm to survey the characteristics of neighboring galaxies and add them to the feature vector. Such a step is inevitable since we are not able to perform message-passing between the nodes (points): the computational costs of calculations on the edges are extremely demanding for mocks comprising more than 150,000 galaxies.[In contrast to PointNet++ <cit.>, which uses kNN for grouping and non-uniform sampling of points, we do not adopt such set abstraction layers since the absolute number of galaxies comprising each realization needs to be conveyed to the machine.]
The Minkowski Engine is a library that efficiently handles sparse tensors, including operations such as auto-differentiation and convolution. Galaxies are grouped and quantized into sparse tensors based on their (RA, DEC, z) positions using the engine, where z denotes the redshift. The main advantage of this implementation lies in its ability to handle a variable number of points as inputs to the machine, whereas the original implementation of PointNet operates on fixed sizes. Additionally, it efficiently utilizes memory by grouping galaxies into sparse tensors.
This approach results in approximately 25% of the quantized cells containing more than one galaxy, and around 5% containing more than two galaxies. This strategy effectively preserves the local structure while ensuring better memory consumption and performance.
The specific network layout is illustrated in Figure <ref>. The is capable of receiving point clouds of arbitrary size. The input catalog is transformed into a sparse tensor and passes through a total of five linear layers. Each linear layer is followed by a batch normalization layer <cit.> and a leaky ReLU activation function. The tensor is then passed through the global sum, average, and max-pooling layers and concatenated to a 1536-dimensional vector. Global aggregators are crucial to reflecting the permutation invariance of the neural network. Unlike the original implementation of PointNet, solely using the global max-pooling as the aggregator, we add other aggregators to better capture the embedded information as suggested in <cit.>. After four consecutive linear layers, the machine predicts the Ω_ m, σ_8, and their standard deviations, which will be used for implicit likelihood inference.
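A dense PyTorch stand-in for this layout is sketched below. The layer widths, activation placement, and head structure are illustrative assumptions; the real model operates on sparse tensors through the Minkowski Engine rather than on a dense array of galaxies.

```python
import torch
import torch.nn as nn

class PointNetLike(nn.Module):
    """Simplified, dense analogue of Minkowski-PointNet: a shared per-galaxy
    MLP, three permutation-invariant global poolings concatenated into a
    1536-d vector, and a head ending in a 16-d representation before the
    terminal layer that outputs (Omega_m, sigma_8) and their std. devs."""
    def __init__(self, n_feat=6, width=512):
        super().__init__()
        dims = [n_feat, 64, 128, 256, 512, width]   # five shared linear layers
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.BatchNorm1d(d_out),
                       nn.LeakyReLU()]
        self.point_mlp = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.Linear(3 * width, 512), nn.LeakyReLU(),
                                  nn.Linear(512, 128), nn.LeakyReLU(),
                                  nn.Linear(128, 16), nn.LeakyReLU(),
                                  nn.Linear(16, 4))

    def forward(self, points):            # points: (n_galaxies, n_feat)
        h = self.point_mlp(points)        # (n_galaxies, width)
        # Global sum, mean, and max pooling -> 3 * width = 1536 dimensions.
        pooled = torch.cat([h.sum(0), h.mean(0), h.max(0).values])
        return self.head(pooled.unsqueeze(0))  # (1, 4): Om, s8, and two sigmas
```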
During the training process, we use the ADAM optimizer <cit.> with a learning rate of 10^-7 and a ReduceLRonPlateau scheduler, which reduces the learning rate when the validation loss is not decreased, for a total of 20 epochs. We make use of 80% of the samples as a training data set and 10% each as validation and test data sets. We adopt the loss function for implicit likelihood inference as described in <cit.>, which is the sum of the following two loss functions, where y is the label and σ^2 the variance.
L_1 = ln[∑_i ∈ batch (y_i,pred-y_i,true)^2]
L_2=ln[∑_i ∈ batch ((y_i,pred-y_i,true)^2-σ_i^2)^2]
By minimizing the combined loss function L_vanilla=L_1+L_2, we optimize both prediction accuracy and enable the representation of the second moment, which corresponds to the standard deviation. Such approaches have recently been utilized in many machine learning projects to estimate the model's error in the absence of likelihoods <cit.>.
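In code, the combined loss can be written as the following sketch, a direct transcription of Equations (<ref>) and (<ref>); the tensor shapes and the summation over both parameters are our assumptions.

```python
import torch

def implicit_likelihood_loss(y_pred, y_true, sigma):
    """L_vanilla = L_1 + L_2: L_1 trains the mean prediction, L_2 trains
    sigma to track the prediction scatter.  Shapes: (batch, n_params)."""
    sq_err = (y_pred - y_true) ** 2
    l1 = torch.log(torch.sum(sq_err))
    l2 = torch.log(torch.sum((sq_err - sigma ** 2) ** 2))
    return l1 + l2
```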
§.§ Input Features
The input features of galaxies should align with those derivable from observational data. Thus, we utilize the position and stellar mass of each galaxy, as well as information from its neighbors, to extract details about the local environment, following the methodology presented in <cit.>. Moreover, it is important to note that we do not provide the machine with physical or comoving distances since they already imply a certain cosmology when converted from observed redshifts. Instead, we introduce a transformed position of each galaxy by (X,Y,Z)=(zsin(DEC)cos(RA), zsin(DEC)sin(RA), zcos(DEC)). The redshift will be re-introduced as one feature, allowing the machine to infer the redshift dependence of features.
Additionally, we explicitly incorporate information from neighboring galaxies. This addresses the limitations of Minkowski-PointNet, which does not support message-passing along edges due to the computational constraints arising from the large number of inputs. By introducing neighboring information, we expect these features to serve as proxies for relational local information. From the nine nearest neighbors, four local features are selected: mean distance, maximum distance, mean stellar mass, and maximum stellar mass. Again, since we apply a metric in redshift space, the distances are unitless. The redshift and stellar mass of each galaxy are used as point-specific features. In total, six features are aggregated per galaxy, combining both local and point-specific characteristics. Figure <ref> displays a pair plot of features with contours for 1000 randomly sampled galaxies from the mocks generated with the SHMR model. The distribution exhibits fair consistency across the three mocks and the SDSS BOSS LOWZ NGC catalog. Another comparison between different cosmologies is available in Appendix <ref>. Although not displayed for brevity, the SHAM model exhibits similar levels of consistency in the mocks.
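The feature construction can be sketched as follows, using scipy's cKDTree; whether the neighbour stellar-mass summaries are taken in log or linear mass is our assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_features(ra, dec, z, log_mstar, k=9):
    """Per-galaxy inputs: redshift, stellar mass, and four summaries of the
    k nearest neighbours (mean/max redshift-space distance, mean/max
    neighbour stellar mass).  Positions use the cosmology-free coordinates
    (X, Y, Z) = z (sin DEC cos RA, sin DEC sin RA, cos DEC) from the text."""
    ra_r, dec_r = np.radians(ra), np.radians(dec)
    pos = np.column_stack([z * np.sin(dec_r) * np.cos(ra_r),
                           z * np.sin(dec_r) * np.sin(ra_r),
                           z * np.cos(dec_r)])
    tree = cKDTree(pos)
    # Query k+1 because the nearest "neighbour" of each point is itself.
    dist, idx = tree.query(pos, k=k + 1)
    dist, idx = dist[:, 1:], idx[:, 1:]
    neigh_mass = np.asarray(log_mstar)[idx]
    feats = np.column_stack([z, log_mstar,
                             dist.mean(axis=1), dist.max(axis=1),
                             neigh_mass.mean(axis=1), neigh_mass.max(axis=1)])
    return pos, feats
```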
§ TRAINING STRATEGIES
§.§ Why is Domain Shift Critical?
The small-scale clustering statistics and the low-mass end of the halo mass function may be distorted in L-PICOLA because of the approximate nature of the code. This is due to the dispersive behavior of dark matter particles, which leads to an imprecise subhalo determination <cit.>. Moreover, the on-the-fly lightcone simulation restricts us from exploiting the historical information of individual halos. The evolution of individual subhalos can be tracked using merger trees derived from simulation snapshots. From this, accurate modeling of the galaxy-halo connection through SHAM is feasible using V_peak or V_max, even for dark matter fields generated with COLA simulations as opposed to the lightcone simulation
<cit.>. In an attempt to mitigate the intrinsic limitation of the rapid lightcone simulation L-PICOLA, <cit.> introduce two free parameters to represent the subhalo number and mass ratio. These values are tuned by fitting the power spectrum monopole of the observational catalog. However, since we aim at performing inference rather than fine-tuning simulations to match observational data, such an adaptation step is inapplicable. We could enhance the flexibility of the models by incorporating extra free parameters and marginalizing over them during inference, particularly within the HOD framework. However, this approach restricts the use of the stellar mass information employed in modeling the stellar mass incompleteness and as features in the neural networks. We plan to address such issues in future work.
Minkowski-PointNet demonstrates strengths in its lack of specific limits on the clustering scale, allowing for analysis across a wide range of scales, unlike most studies that impose an upper bound k_ max <cit.>. Even CNNs inherently impose an effective clustering scale through voxelization <cit.>. However, our approach is sensitive to small scales, offering rich clustering information while also being susceptible to small-scale distortions specific to each domain's code. Therefore, it is critical to regularize the training of the neural networks so that they acquire domain-agnostic knowledge.
Addressing the domain shift is crucial to ensuring the robustness of the machines and their applicability to real-world observations. We adapt the machines using the prepared suites of mocks: 9000 L-PICOLA mocks as the source, along with either 18 GADGET mocks or 2048 MD-PATCHY mocks as targets. By training them with specific strategies aimed at achieving domain adaptation and generalization, we expect the machines to learn domain-agnostic information. Consequently, they will be capable of extracting representations that can be generalized to multiple domains, particularly observational data.
§.§ Training Objective: Domain Generalization
The primary goal of this research is to conduct simulation-based inference on actual observational data using machines robust across different codes for generating mocks. A critical question arises: Can we establish a unified approach to forward modeling our universe and making fair inferences on the cosmological parameters? Unfortunately, current neural networks show apparent discrepancies when applied to other domains <cit.>. However, recent trials in generating domain-adaptive graph neural networks to incorporate various sources have shown the possibility of achieving a more robust inference <cit.>.
In the context of transfer learning, which involves the transfer of knowledge from a set of tasks to related tasks, each of the mock suites can be viewed as n mocks sampled from individual domains 𝒟_i, or S_i={ (x^i_j, y^i_j)}^n_j=1∼ (𝒟_i)^n, where x∈𝒳, y ∈𝒴. 𝒳 is the feature space and 𝒴 is the space for labels (cosmological parameters), while 𝒟_i⊂𝒫_𝒳𝒴 is a joint distribution on 𝒳 and 𝒴 <cit.>. Our aim is to develop a machine that generalizes across multiple domains, even those unseen during the training phase, particularly the observational catalog. Attempts to test the generalizability of a machine trained on a single domain have been initiated by various projects in astronomy using machine learning and deep learning, referred to as “robustness tests” <cit.>. In the language of transfer learning, testing on domains uninvolved in the training phase can be viewed as domain generalization <cit.>.
To achieve effective domain generalization, it is crucial that the distributions of the target (unseen) domains and the source domains (those involved in the training phase) are similar, which can be achieved through accurate modeling of mocks and through training strategies that extract common features. Due to limitations in the accuracy of the L-PICOLA mocks, non-negligible discrepancies exist compared to the GADGET or MD-PATCHY mocks. Such domain shift (expressed by the ℋ-divergence, d_ℋ(·,·)) is crucial in setting the upper bound on the empirical risk of any hypothesis <cit.>. Thus, achieving single-domain generalization solely through training on L-PICOLA mocks can be challenging.
To enhance the machine's generalization capabilities, we utilize the GADGET or MD-PATCHY mocks, which enable the machine to acquire common knowledge. Unlike in domain generalization, the GADGET or MD-PATCHY mocks are incorporated during the training phase; hence, this approach is termed domain adaptation. By employing a training strategy to learn from the relatively accurate mocks, the neural networks learn consistent semantics from the two domains, and finally generalize to the observational data, unseen at the training phase.
One such method utilizes the domain-adversarial neural network (DANN) <cit.>, which seeks to derive domain-invariant features through the use of a domain classifier as a regularizer. This technique has recently been adopted for classification tasks in astronomy <cit.>. However, multiple trials show that DANN still suffers from overfitting and that discrepancies between domains remain (see Appendix <ref> for more information). We find that such issues can be effectively mitigated by an alternative training strategy, which is explained in Section <ref>.
Summary of Predictions on the Cosmological Parameters
Models Training Strategy Ω_ m σ_8 ϵ_Ω_ m(%) ϵ_σ_8(%)
Semantic Alignment 0.339±0.056 0.801±0.061 7.6 1.2
Vanilla 0.357±0.044 0.858±0.045 13.3 5.8
Semantic Alignment 0.227±0.035 0.743±0.039 27.9 8.4
Vanilla 0.196±0.021 0.705±0.019 37.8 13.1
<cit.> - 0.315±0.007 0.811±0.006 - -
<cit.> - 0.295±0.010 0.721±0.043 - -
Summary of cosmological parameter predictions from different models trained with L-PICOLA mocks as the source and GADGET mocks as the target.
Each row corresponds to one of the two galaxy-halo connection models (SHMR or SHAM) combined with one of the two training strategies: Semantic Alignment (with domain adaptation) and Vanilla (without domain adaptation).
The predicted values for each models are given with their respective uncertainties, which include both the uncertainty of individual machine and all 25 independently trained machines combined.
Together with our main results, we also display the results from the CMB measurements <cit.> and the full-shape power spectrum analyses of BOSS <cit.> for reference. Relative differences ϵ_Ω_ m and ϵ_σ_8, calculated with respect to the results of <cit.>, are displayed for the models studied in this work.
See Section <ref> for more information on the results, and Section <ref> for the discussion on the comparison between the two training strategies.
§.§ Training Strategy: Semantic Alignment
Our strategy explicitly aligns representations from different domains with similar labels. In other words, given that the samples have similar cosmological parameters, regardless of the selection of simulations, the neural networks extract features that are similar to each other. Aligning the representations can explicitly bring about consistency in terms of their semantics across domains and be effective in domain generalization <cit.>. We adapt the semantic alignment loss in <cit.> to a regression task setup by adding the following loss term:
L_SA = ∑_i∈ B_S ∑_j∈ B_T (1/‖y^S_i - y^T_j‖) ‖g(x^S_i) - g(x^T_j)‖
Here, B_S and B_T represent batches from domains S (source) and T (target), respectively, with g(·) denoting the function that maps input to the representation vector. We apply the semantic alignment loss to the 16-dimensional representation, which can be obtained just before the terminal layer of the neural network, as depicted in Figure <ref>.[In this study, we opted for a reduced representation of 16 dimensions instead of the comprehensive 1536-dimensional representation due to challenges in balancing accuracy and adaptability within our machine learning model. The use of the penultimate layer of linear networks as the representation vector was also used in <cit.>. Modifying the architecture of the neural network and performing detailed fine-tuning of hyperparameters are strategies that could enhance adaptability, which we aim to explore in future research.] The generalization strength can be modified by adjusting the weight α_p in L_total=L_vanilla+α_pL_SA. Here, we slightly modify the adaptation parameter setup proposed by <cit.>,
α_p = α_0[ 2/(1+exp(-γ p)) - 1 ]
where p increases linearly from 0 to 1 over the course of training, with γ=5 and α_0=5. This gradual increase in the strength of the adaptation term allows the machine to first gain predictive power on the labels before aligning the semantics of the representations. The hyperparameters are chosen from multiple trials to balance the trade-off between prediction accuracy and the strength of domain adaptation. To isolate the effect of the alignment process, we do not include samples from the target domain when calculating the vanilla loss (see Equations <ref> and <ref>); the target labels are therefore conveyed to the machine only implicitly, through the semantic alignment loss. When incorporating the target mocks, we reserve 2/3 of them for training and 1/3 for testing, while for the source mocks we use 80% for training and 10% each for validation and testing.
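To make the training objective concrete, the following is a minimal PyTorch-style sketch of the semantic alignment loss and the adaptation-weight schedule above. It assumes Euclidean norms for both the representation and label distances, and all variable names and shapes are illustrative rather than taken from the actual implementation.

import math
import torch

def semantic_alignment_loss(z_src, y_src, z_tgt, y_tgt, eps=1e-8):
    # z_*: (batch, 16) penultimate-layer representations; y_*: (batch, 2) labels (Omega_m, sigma_8)
    rep_dist = torch.cdist(z_src, z_tgt)           # pairwise ||g(x_i^S) - g(x_j^T)||
    label_dist = torch.cdist(y_src, y_tgt) + eps   # pairwise ||y_i^S - y_j^T|| (eps avoids division by zero)
    return (rep_dist / label_dist).sum()

def adaptation_weight(p, alpha_0=5.0, gamma=5.0):
    # alpha_p = alpha_0 * [2 / (1 + exp(-gamma p)) - 1], with p rising linearly from 0 to 1
    return alpha_0 * (2.0 / (1.0 + math.exp(-gamma * p)) - 1.0)

# total loss at training progress p:
# loss = vanilla_loss + adaptation_weight(p) * semantic_alignment_loss(z_src, y_src, z_tgt, y_tgt)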
§ PREDICTION OF MINKOWSKI-POINTNET
In this section, we conduct a series of performance tests of and make predictions on the cosmological parameters of the observational catalog. Given the stochastic nature of the training outcome arising from the existing trade-off between domain adaptability and the accuracy of individual predictions, we train 25 different machines, whose model parameters are randomly initialized. Before predicting on the actual SDSS BOSS LOWZ NGC data, we perform the same feature sampling by identifying their neighbors, as explained in Section <ref>. The designated local and global features are then fed to the trained machines. We compare and discuss the results from a set of machines adapted to different domains, as summarized in Table <ref>.
§.§ Performance Tests of Minkowski-PointNet
Following the training procedures discussed in Sections <ref> and <ref>, the machines are trained to predict Ω_ m, σ_8 and their standard deviations. Figure <ref> displays test results of machines trained with the semantic alignment strategy on the source and target mocks. We present results for an arbitrarily selected single machine and for all 25 individually trained machines. The top two panels show the results for one galaxy-halo connection model and the bottom two panels for the other. In each case, the upper panel compares the true and predicted values, while the lower panel shows the residuals.
The test results are promising for both Ω_ m and σ_8, regardless of the galaxy-halo connection model. The ensemble of 25 machines achieves relative errors on Ω_ m and σ_8 of 3.20% and 1.28% for one model and ϵ=2.65% and 1.34% for the other, respectively. A single machine achieves relative errors of 2.90% and 1.17% for the former, and ϵ=3.20% and 1.33% for the latter. The difficulty in accurately predicting σ_8 reported in recent studies <cit.> is not apparent here.
The blue markers and bins in Figure <ref> show the domain adaptation results for the target mocks. Owing to the semantic alignment loss, we are able to marginalize over the choice of domain, which comes at the cost of some accuracy within each simulation set (for more information on the error analysis, see Section <ref>). Since the machine only infers the cosmological parameters of the target mocks implicitly, through the semantic alignment loss during training, a noticeable bias is observed in its predictions for the target mocks relative to the source mocks. Nevertheless, the fact that the machine can make predictions solely by aligning the semantics of the source and target domains is encouraging.
Moreover, considering that the parameter space of the input labels is constrained to Ω_ m∈ [0.1, 0.5] and σ_8∈ [0.6, 1.0], samples near the edges of this range may suffer from an asymmetry when calculating the semantic alignment loss. In an extreme scenario, if a sample were characterized by Ω_ m=0.6 and σ_8=1.0, it would be biased by the lack of samples with larger parameter values; such samples can receive center-biased predictions, as their representations experience an excessive center-ward pull. Overall, the adaptation results remain quite promising, indicating effective alignment of the representations from the two domains.
§.§ Predictions on the SDSS BOSS LOWZ NGC Catalog
In this section, we present predictions on the SDSS BOSS LOWZ NGC Catalog made by the machines trained with different galaxy-halo connection models and training strategies. Table <ref> summarizes the results of the machines trained with and mocks. Figure <ref> illustrates the aggregated outcomes of 25 distinct machines, each trained using semantic alignment with and mocks, alongside benchmark values from <cit.> and <cit.>.[The main result from <cit.>, which we cite in Table <ref> and figures <ref>, <ref>, <ref>, and <ref>, combines the likelihoods from the Northern Galactic Cap (NGC) and the Southern Galactic Cap (SGC) across two redshift ranges: low-z (z_ eff=0.38) and high-z (z_ eff=0.61). Although our LOWZ NGC mocks differ from the low-z definition, having a lower effective redshift of z_ eff=0.29, the results from the low-z NGC used in <cit.> yield Ω_ m=0.290±0.017 and σ_8=0.808±0.073 (see sections <ref> and <ref> for more information).] Even within a single training scheme, the predicted results vary significantly between machines, illustrating the stochastic nature of the training process. This suggests that there is degeneracy in the final state of the machine, with multiple configurations exhibiting similar, suboptimal performance. In other words, although different machines demonstrate consistent accuracy and precision on the test set, their predictions on the observational catalog unseen during training phase shows notable variability. This justifies our approach of training multiple machines instead of selecting only those with the best performance.
Next, we compare the predictions on the observational data when the machines are trained with the domain-adaptive strategy (semantic alignment) and without it (vanilla). For one galaxy-halo connection model, the ensemble of 25 machines yields Ω_ m=0.196±0.021 and σ_8=0.705±0.019 in the vanilla scheme, and Ω_ m=0.227±0.035 and σ_8=0.743±0.039 after applying the semantic alignment loss. The other model yields Ω_ m=0.357±0.044 and σ_8=0.858±0.045 in the vanilla scheme, and Ω_ m=0.339±0.056 and σ_8=0.801±0.061 with semantic alignment. Semantic alignment therefore worsens the precision relative to the vanilla scheme, while improving the accuracy of the predictions if the Planck 2018 cosmology is taken as the ground truth. Thus, although the same data sets are used, the way they are employed to train the machines strongly affects both the accuracy and the precision of predictions on unseen domains.
The predictions vary significantly depending on the galaxy-halo connection model used to generate the mock catalogs. In particular, one model exhibits considerable divergence from the Planck 2018 cosmology (Ω_ m=0.315±0.007 and σ_8=0.811±0.006), while the other is largely in agreement with it within the 1σ error. Moreover, the Ω_ m predicted by the latter is consistent with the most recent dark energy survey <cit.>, which yields Ω_ m=0.352±0.017 for the flat ΛCDM model, a higher value than the Planck 2018 cosmology. Although the model that is free of cosmological priors in its forward modeling is the most favorable in terms of accuracy, the other exhibits better precision, which likely stems from the additional cosmological priors it incorporates via stellar masses, in contrast to the prior-free model, which relies solely on clustering information.
This bias can be due to several factors, although its precise cause in this model remains unclear. One potential reason is that, regardless of cosmology, any halo of similar mass is assigned a similar stellar mass following the SHMR. As discussed in Section <ref>, the SHMR from <cit.> was obtained from a different survey, COSMOS, which could also contribute to the variations. Additionally, the stellar masses of the galaxies in the observational catalog are determined on the basis of the Kroupa IMF <cit.> with passive evolution from <cit.>, whereas the SHMR we utilize is based on the SMF adjusted for the Chabrier IMF <cit.> and the stellar population synthesis models of <cit.>, which can introduce further differences. Since the exact cause of this discrepancy remains unclear, we stress the limitations of the naive assumptions in this model, and that the results may vary depending on the galaxy-halo connection model. Here, we aim to demonstrate the feasibility of inferring cosmological parameters without using summary statistics, and we leave a detailed investigation of the impact of galaxy-halo connection models to future studies.
As mentioned above, when calculating the uncertainty of the inferred parameters, we adopt the most conservative approach: we account for both the error of the individual predictions and the spread across the 25 independently trained machines, without cherry-picking. However, selecting the single machine that best adapts to and predicts on the target mocks, defined by the smallest distance √(ΔΩ_ m^2+Δσ_8^2), yields Ω_ m=0.267±0.020 and σ_8=0.775±0.0003 for one galaxy-halo connection model and Ω_ m=0.282±0.014 and σ_8=0.786±0.036 for the other. This suggests further potential for more precise inference of the cosmological parameters, achieved through the convergence of individual machines and enhanced robustness (see Section <ref> for a discussion).
§ DISCUSSION
§.§ Effect of Aligning Representations
The improvement in generalizability can be attributed to the distribution of different domains aligned in the feature space. To compare the extracted features from machines trained by the vanilla scheme and the semantic alignment strategy, we visually inspect the distributions of their representations in a lower dimension (Jo et al. in prep). Figure <ref> exhibits the latent space configuration of the targeted 16-dimensional vector reduced to two dimensions, deduced by the t-distributed Stochastic Neighbor Embedding algorithm <cit.>. In the semantic alignment strategy, the samples are evenly distributed in the reduced dimensions and the parameters gradually change along one direction, while being almost independent in the other direction[The gaps in the latent space can arise for several reasons. Firstly, the randomness in sampling the parameter space disrupts the dataset's uniformity. Secondly, the dimension reduction technique relies on the distribution's local structure and is inherently nonlinear. Furthermore, because of the discriminative nature of our neural networks, the distribution is not required to be uniform. Generative models such as normalizing flows and variational autoencoders are better suited for accurately modeling the distributions within specific probability distribution functions.]. This behavior naturally suggests that the machine is extracting features and representing them effectively in a way that removes degeneracy and gains predictability in the two parameters.
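For reference, a dimensionality-reduction step of this kind can be sketched as below; the function and variable names and the t-SNE settings (perplexity, initialisation) are illustrative assumptions, not the exact configuration used for the figure.

import numpy as np
from sklearn.manifold import TSNE

def project_representations(z_source, z_target):
    # z_source, z_target: (N_S, 16) and (N_T, 16) arrays of penultimate-layer representations
    z_all = np.vstack([z_source, z_target])
    # 2D embedding for visual inspection; colour points by Omega_m / sigma_8 or by domain membership
    return TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(z_all)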
The vanilla scheme fails to adapt the source mocks to the target mocks, resulting in a clear separation between their distributions. The greater proximity of the observational target to the target mocks than to the source mocks demonstrates that the former provide a more precise representation of our real universe for this model. On the other hand, when the semantic alignment strategy is employed, the two distinct domains blend into a single distribution. This supports the claim that the machine extracts features common to the two domains and places less weight on domain-specific information, which improves prediction accuracy on the observational data.
However, there is a clear trade-off: the semantic alignment loss degrades precision even as it yields better accuracy. To analyze the effect of semantic alignment on precision, we first decompose the error into two sources: the aleatoric (statistical) error and the epistemic (model, or systematic) error. The two sources can be seen in Figure <ref>: the aleatoric error is estimated from the individual error bars of the machines, and the epistemic error from the variance of the predictions across the ensemble of machines.
Figure <ref> shows the two sources of error for the test sets of and mocks, which are the domains seen during training phase, and and SDSS BOSS LOWZ NGC samples, unseen during training phase, for the model. As we have applied the trained machines to the SDSS BOSS LOWZ NGC catalog, we make inferences on the mocks for further analysis (See Section <ref> for more information on the results). The epistemic errors are calculated by the standard deviation of the predictions on a single input data from the ensemble of 25 machines. On the other hand, the aleatoric errors are calculated by the root mean square of the predicted errors (see Equation <ref>) of the individual machines. Largely, the aleatoric and epistemic errors have comparable values for both the test set and the SDSS BOSS LOWZ NGC catalog. However, the errors for the test set show a larger epistemic error compared to the aleatoric error for the semantic alignment training strategy.
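Under our reading of this procedure, the decomposition can be written as the short sketch below, where preds and pred_errs hold the 25 machines' parameter predictions and predicted uncertainties for a single input catalogue (both of shape (25, 2) for Ω_ m and σ_8); the array and function names are illustrative.

import numpy as np

def decompose_errors(preds, pred_errs):
    epistemic = preds.std(axis=0)                         # spread of the ensemble predictions
    aleatoric = np.sqrt(np.mean(pred_errs ** 2, axis=0))  # RMS of the machines' predicted errors
    return epistemic, aleatoric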
The alignment scheme has a positive effect in reducing errors when predicting on source-domain samples. In particular, the epistemic and aleatoric errors in Ω_ m improve by 23% and 4%, respectively, and by 17% and 33% for σ_8. Conversely, for the target-domain samples, the epistemic and aleatoric errors in Ω_ m degrade by 92% and 38%, respectively, and by 86% and 34% for σ_8. We can therefore interpret the weaker constraints of the domain-adapted machines as arising mostly from the model-wise uncertainty on the target domain. In other words, the alignment scheme is unstable and can lead to significant variability in the machines' end-of-training states. This considerable variability in model performance on the target domain after domain adaptation can be attributed to the implicit provision of cosmological parameters to the models via the semantic alignment loss, in contrast to the vanilla models. However, the predictions on the unseen observational target show no significant inclination towards either of the two sources of error: the ratio of epistemic to aleatoric error increases by 21% for Ω_ m and decreases by 23% for σ_8 after adaptation. Likewise, for the mocks that are also unseen during the training phase, both epistemic and aleatoric errors are present, but the aleatoric error dominates, thus reducing the ratio of epistemic to aleatoric error.
Seen from the analyses above, domain adaptation with semantic alignment improves the overall generalizability on the unseen domains and precision in the source domain while sacrificing precision in target and unseen domains. Although its detailed impact on the precisions are indeed complex, the improvement on generalizability can be mathematically modeled by the domain generalization error bound <cit.>. The upper bound of the domain generalization error can also be decomposed into a few sources. Firstly, the machines have to perform well in each of the source domains individually and jointly. Moreover, the source domains should well depict the unseen domain while reducing the discrepancy between the source domains. The discrepancy between the source domains can be explicitly reduced by the semantic alignment as seen from Figure <ref>, while the discrepancy between the source and the unseen domain can be reduced with the addition of accurate mocks.[Precisely, given multiple sources 𝒟^i_S, we define a convex hull Λ_S={𝒟̅|𝒟̅=∑_i=1^Nπ_i𝒟^i_S, π∈Δ_N-1} with Δ_N-1 being a N-1 dimensional simplex. We can then find an optimal distribution 𝒟^*=∑_i=1^Nπ_i^*𝒟^i_S where π^* minimizes the distance between the optimal distribution 𝒟^* and the target unseen distribution 𝒟_U. Therefore, the domain discrepancy between the optimal distribution 𝒟^* and the unseen domain 𝒟_U measured by the ℋ-divergence term (d_ℋ(𝒟^*, 𝒟_U)), and the discrepancy between the two domains inside the convex hull (sup_𝒟', 𝒟”∈Λ_S d_ℋ(𝒟',𝒟”)) are the two major sources of error. Refer to <cit.> and <cit.> for more information.] The vanilla scheme has increased performance on the target sources by distinguishing between the domains, while semantic alignment aligns the distribution at the expense of degraded performance on the target domains. Thus, while domain adaptation shows a significant advantage in that it enables generalization through the alignment of domains, it still suffers from other trade-offs resulting in variability in the machines' end-of-training state, leading to weaker constraints on the cosmological parameters.
§.§ Comparison with Previous Studies Using the SDSS BOSS Catalog
Our simulation-based inference with neural networks, which replaces the use of summary statistics, yields results that can be compared with several notable studies utilizing the SDSS BOSS catalog. This comparison provides a broader context for evaluating the constraints on cosmological parameters. In the following, we compare our results with previous studies that used summary statistics from the full-shape power spectrum and bispectrum, as well as neural network-based approaches.
Compared to the full-shape power spectrum analyses, which yield Ω_ m=0.295±0.010 and σ_8=0.721±0.043 <cit.>, and the bispectrum analyses, which yield Ω_ m=0.338^+0.016_-0.017 and σ_8=0.692^+0.035_-0.041 <cit.>, our main results of Ω_ m=0.339±0.056 and σ_8=0.801±0.061 represent weaker constraints. However, a direct comparison is not possible, as our analysis is limited to the BOSS LOWZ NGC sample. In contrast, <cit.> combines likelihoods from the Northern Galactic Cap (NGC) and the Southern Galactic Cap (SGC) across two redshift ranges, low-z (z_ eff=0.38) and high-z (z_ eff=0.61), and <cit.> uses both NGC and SGC samples from CMASSLOWZTOT, which combines the LOWZ, LOWZE2, LOWZE3 and CMASS catalogs. Although our LOWZ NGC mocks differ from the low-z definition, having a lower effective redshift of z_ eff=0.29, the low-z NGC results used in <cit.> yield Ω_ m=0.290±0.017 and σ_8=0.808±0.073.
Next, we compare our results with the recently developed simulation-based inference framework SIMBIG, which uses BOSS CMASS samples <cit.>. <cit.> used the power spectrum information up to k_ max=0.5 h/ Mpc together with normalizing flows, obtaining Ω_ m=0.292^+0.055_-0.040 and σ_8=0.812^+0.067_-0.068; compared to these results, we obtain a slightly better constraint on σ_8. <cit.>, on the other hand, analyzed the bispectrum monopole up to k_ max=0.5 h/ Mpc, also using normalizing flows, and obtained Ω_ m=0.293^+0.027_-0.027 and σ_8=0.783^+0.040_-0.038. In other words, <cit.> and <cit.> explicitly provide the machine with cosmological information derived from clustering statistics at various scales. In contrast, <cit.> employ a 3D CNN applied to voxelized galaxy positions in real space, effectively capturing clustering characteristics up to k_ max=0.28 h/ Mpc; the CNN predictions serve as an intermediate summary statistic, which is then used to generate the final predictions through a flow-based neural network, yielding Ω_ m=0.267^+0.033_-0.029 and σ_8=0.762^+0.036_-0.035. Our analysis suggests a weaker constraining power compared to these previous results.
However, our study implements a more direct form of simulation-based inference using the embedding extracted by . As <cit.> point out, such direct inference from neural network embeddings shows weaker constraints. Thus, recent studies consider the predictions of neural networks as summary statistics and perform additional Bayesian inferences <cit.>. Moreover, the major difference in our approach is that we adopt the most conservative form of setting constraints, presenting the ensemble results of 25 individually trained machines instead of a single machine. This highlights the degeneracy of the machines, which show similar performances on known datasets but produce varying predictions on unseen datasets. As mentioned above, using a single machine that is best adapted to the target samples, we obtain comparably tight constraints of Ω_ m=0.282 ±0.014 and σ_8=0.786±0.036.
§.§ Towards Improved Robustness
The ultimate goal of replacing summary statistics with raw input from the mock catalogs for the inference of cosmological parameters would be to give tight and accurate constraints. However, since the neural networks capture the complexities engraved in the input data regardless of the physical importance, such methodology involves advantages and disadvantages at the same time. To maximize the advantage, one must consider building machines robust against the choice of domains.
An example of robustness is shown in Figure <ref>, where our domain-adapted machines are applied to the 2048 mock samples. The results are Ω_ m=0.327±0.070 and σ_8=0.822±0.071 for one galaxy-halo connection model, and Ω_ m=0.236±0.046 and σ_8=0.784±0.038 for the other. The uncertainties are larger than for the predictions on the SDSS BOSS LOWZ NGC catalog, partly owing to the cosmic variance of the samples. The predicted values also differ from those for the SDSS BOSS LOWZ NGC catalog, despite the high degree of similarity of the mocks in their summary statistics. In particular, for one model the machines correctly predict a lower value of Ω_ m and a higher value of σ_8 for these mocks than for the observational counterpart, assuming Planck 2018 as the ground truth. In contrast to the domain-adapted machines, the vanilla machines yield Ω_ m=0.365±0.055 and σ_8=0.875±0.054 for the former model, and Ω_ m=0.199±0.024 and σ_8=0.715±0.016 for the latter. Again, as seen in the predictions for the SDSS BOSS LOWZ NGC catalog, domain adaptation effectively boosts generalizability at the expense of precision.
To enhance the robustness of neural networks across diverse simulation and observation domains with varying cosmological parameters, we need more samples from the target domains. Currently, insufficient target domain data affects our ability to adapt and generalize effectively, resulting in increased epistemic or model uncertainties, as discussed in Section <ref>. This in turn leads to degraded precision in the final predictions, as shown in figures <ref> and <ref>. Moreover, biases may arise from the discriminative nature of our current neural network model as seen for the samples in Figure <ref>. Generative models such as normalizing flows and its variants can be helpful in mitigating such biases and better approximate posterior distributions <cit.>. Addressing these biases is crucial to making reliable inferences in data-driven approaches, as emphasized by <cit.>.
To accommodate a broader range of cosmological parameters while retaining robustness, not only do we require more sophisticated neural network architectures, but also a focus on the accuracy and correctness of input data. In such data-driven approaches using highly sophisticated neural networks, unreliable input data will distort the extracted domain-agnostic representation. Furthermore, as demonstrated in Section <ref>, achieving both precision and accuracy in individual predictions is critical. By improving domain adaptation strategies and utilizing augmented target data, we can potentially enhance the precise inference of cosmological parameters, especially by focusing on reducing the model uncertainties. We plan to explore this potential further in future work.
§.§ Limitations and Considerations
We have demonstrated a proof-of-concept test of inferring cosmological parameters without relying on summary statistics, yet several limitations and considerations merit discussion. Our main source mocks show inaccuracies, especially in modeling the halo mass function and small-scale clustering. These inaccuracies are compounded by the simplified assumptions in our two galaxy-halo connection models. Our machine learning model, Minkowski-PointNet, does not enforce explicit cut-offs, making it sensitive to such inaccuracies. Although we introduced the target mocks and performed domain adaptation to address these issues and improve the models' generalizability, this method involves trade-offs in precision.
To tackle these challenges, we suggest several strategies. To begin with, enhancing the flexibility of our galaxy-halo connection models by incorporating additional modeling parameters may improve both accuracy and robustness. Secondly, our target domain samples currently lack diversity in the domains and cosmologies, which might limit the generalizability of our models. Addressing this issue involves considering the inclusion of more mock samples from diverse codes, despite the higher computational costs. Additionally, exploring alternative techniques for domain adaptation and generalization could foster improvements in model performance across various datasets.
Additionally, it is essential to explore the application of our new methodology to a range of galaxy redshift surveys, which vary in observational effects such as color magnitude cuts, survey depths, completeness, and footprints. Given that our mocks are explicitly modeled to include observational effects unique to the SDSS BOSS LOWZ NGC catalog, our present neural network cannot be applied to other observational surveys. In order to enhance the neural network's robustness against varying observations, we could augment our mock dataset with random cuts and masks, along with modifying radial selection functions. We plan to explore these strategies in our upcoming research.
§ SUMMARY & CONCLUSION
We propose a novel approach to rapidly model vast quantities of galaxy catalogs through lightcone simulations, while fully incorporating the observational effects of the SDSS BOSS LOWZ NGC catalog and inferring Ω_ m and σ_8 from the actual observations using trained neural networks.
This addresses the question of whether performing simulation-based inference on observed galaxy redshift surveys using neural networks is feasible in the absence of summary statistics, but only with the position and mass information of individual galaxies.
Our method extends previous works that perform “robust field-level inference” on different codes without adopting summary statistics <cit.>, and works that use summary statistics to infer values from the actual galaxy redshift surveys <cit.>.
Using lightcone simulation , we generate 9000 galaxy catalogs with varying cosmological parameters in a volume of (1.2h^-1Gpc)^3.
Subhalos are identified using ROCKSTAR, with each subhalo assumed to host a single galaxy. We propose two galaxy-halo connection models. The first assumes a constant star formation efficiency within a certain halo mass range across different cosmologies, allowing us to assign stellar masses with varying values across different redshift bins. However, this model suffers from the inclusion of cosmological priors, since its ingredients are determined from simulations assuming a fiducial cosmology. We therefore introduce a second model, free of cosmological priors, which paints galaxies onto the halo catalog by assuming a monotonic relation with the observed catalog. The catalogs undergo further processing to mimic the observational effects of the SDSS BOSS LOWZ NGC catalog, including RSD, the survey footprint using the MANGLE masks, stellar mass incompleteness (for the SHMR-based model), radial selection, and fiber collisions (Section <ref>).
The results and key takeaways are summarized below. Without employing summary statistics and using galaxies as point cloud inputs (Section <ref>), we perform implicit likelihood inference <cit.> and derive constraints on Ω_ m and σ_8 from the SDSS BOSS LOWZ NGC sample. Rapidly generated mock representations can be aligned with the more accurate mocks to achieve effective domain generalization using the semantic alignment loss (Section <ref>). Machines trained and adapted independently with and mocks infer values of Ω_ m=0.227±0.035 and σ_8=0.743±0.039 for the model and Ω_ m=0.339±0.056 and σ_8=0.801±0.061 for the model, when applied to the SDSS BOSS LOWZ NGC catalog. Despite the divergence in the prediction results from the model, the model, which is free of cosmological priors, agrees with the <cit.> results within 1σ (Section <ref> and Figure <ref>).
Although the limitations highlighted in Section <ref> remain, we have demonstrated an advance in performing simulation-based inference on observations without the use of any summary statistics. This was primarily achieved by adapting across two different simulation-code domains to extract unified knowledge applicable to real-world observations. Moving forward, we aim to incorporate precise data from various fields and utilize more advanced models to enhance the robustness of our approach. This could potentially establish the new method as a competitive approach for precisely constraining cosmological parameters.
§ ACKNOWLEDGMENTS
Jun-Young Lee would like to thank Aleksandra Ćiprijanović, Francisco Villaescusa-Navarro, Yong-uk Cho, Cullan Howlett, Hyeonyong Kim, Seungjae Lee, Jubee Sohn, Jun Yong Park, and Eun-jin Shin for insightful discussions.
He would also like to thank Francisco-Shu Kitaura and Cheng Zhao for providing the mocks.
Jun-Young Lee's work was supported by Korea Institute for Advancement of Technology (KIAT) grant funded by the Korean Government (Ministry of Education) (P0025681-G02P22450002201-10054408, Semiconductor–Specialized University).
Ji-hoon Kim’s work was supported by the Global-LAMP Program of the National Research Foundation of Korea (NRF) grant funded by the Ministry of Education (No. RS-2023-00301976).
His work was also supported by the NRF grant funded by the Korea government (MSIT) (No. 2022M3K3A1093827 and No. 2023R1A2C1003244).
His work was also supported by the National Institute of Supercomputing and Network/Korea Institute of Science and Technology Information with supercomputing resources including technical support, grants KSC-2020-CRE-0219, KSC-2021-CRE-0442 and KSC-2022-CRE-0355.
Jaehyun Lee is supported by the National Research Foundation of Korea (NRF-2021R1C1C2011626).
HSH acknowledges the support by Samsung Electronic Co., Ltd. (Project Number IO220811-01945-01) and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT), NRF-2021R1A2C1094577.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
§ FEATURES OF REALIZATIONS WITH DIFFERENT COSMOLOGICAL PARAMETERS
Features of individual and neighboring galaxies differ across realizations with varying cosmological parameters. In Figure <ref>, we provide the same pair plot as Figure <ref> but for the model of the mock suite with different cosmologies: (Ω_ m=0.4772, σ_8=0.9639), (Ω_ m=0.1185, σ_8=0.6163), and (Ω_ m=0.3067, σ_8=0.8238).
Notice that one of the cosmologies deviates the most from the others, while another shows much better agreement in all features. This tendency is most extreme for the distances to neighboring galaxies, owing to the deficit in the total number of galaxies for that cosmology, which severely affects the separations between galaxies. Although not displayed for brevity, the other galaxy-halo connection model exhibits consistency despite the differences in cosmological parameters. This behavior arises because, in contrast to the first model, it matches the total galaxy count of the mocks to that of the SDSS BOSS LOWZ NGC catalog.
§ EFFECT OF FINE-TUNED MOCKS MD-PATCHY
We further investigate the possibility of increasing the accuracy and precision via the incorporation of fine-tuned mock samples. Similarly to machines trained with and mocks, we train 25 different machines using and with the semantic alignment loss applied. As shown in Figure <ref>, the results yield Ω_ m=0.307±0.035 and σ_8=0.767±0.035 for the model, and Ω_ m=0.343±0.053 and σ_8=0.796±0.051 for the model. Compared to when applying the mocks, better precision is achieved for both galaxy-halo connection models. Moreover, especially for the model, the accuracy drastically increases. Indeed, such behavior is well expected, as the machine can learn from the fine-tuned mocks, which better depict the observational sample.
Semantic alignment loss plays an explicit role in reducing the divergence between representations originating from different domains. For example, if the representations of 𝒟_ L-PICOLA and 𝒟_ MD-PATCHY are aligned closely enough, adding the MD-PATCHY mock samples has only a small impact on the diameter of the convex hull of the domains. Moreover, assuming that the marginal distribution of MD-PATCHY is relatively similar to that of the SDSS BOSS LOWZ NGC catalog, the optimal domain 𝒟^* will be weighted towards 𝒟_ MD-PATCHY and will effectively reduce the generalization risk. This confirms not only the importance of aligning the representations from different domains, but also of including accurate mocks in the training phase. The effect is greatest for the model whose initially biased predictions, when trained with the original two domains, change significantly to produce more accurate results when the Planck 2018 cosmology is assumed as the ground truth.
However, since the MD-PATCHY mocks are based on a single cosmology, the generalization is only effective locally. To train machines that are globally robust, a multitude of high-fidelity mocks spanning diverse cosmologies must be included, as in Section <ref>. Such mocks must cover a range of cosmological parameters, unlike the fine-tuned mocks with a single targeted value, for which generalization is only achieved locally. We leave these improvements for future work.
§ ALTERNATIVE TRAINING STRATEGY: DOMAIN ADVERSARIAL TRAINING
An alternative training strategy for domain adaptation and generalization is to extract domain-invariant features through adversarial training. The essence of this strategy is to prevent the machine from learning domain-specific information. Here, we employ domain-adversarial neural networks <cit.>, which add a domain classifier to the backbone of the machine illustrated in Figure <ref>. The domain classifier is trained to classify which of the two mock suites an input originates from. The preceding gradient reversal layer (GRL) reverses the gradient of the domain loss as it is backpropagated to the feature extractor; consequently, the feature extractor weights are updated to produce domain-invariant features that suffice to deceive the domain classifier.
In this approach, we leverage the DANN strategy to perform regression tasks in a supervised domain adaptation setup using and mocks. The loss function of the supervised DANN setup can be mathematically expressed as follows:
L(θ_f, θ_r, θ_d; 𝐱) = L_vanilla(G_r(θ_r; G_f(θ_f; 𝐱)), 𝐲) + α L_domain(G_d(θ_d; ℛ(G_f(θ_f; 𝐱))), d)
where θ_f, θ_r, and θ_d denote the parameters, and G_f(θ_f;·), G_r(θ_r;·), and G_d(θ_d;·) represent the functions of the feature extractor, regressor, and domain classifier, respectively. Here, 𝐱 represents the input, 𝐲 the cosmological parameters, and d the domain. The GRL ℛ(𝐱) is a pseudo-function with the properties ℛ(𝐱)=𝐱 in the forward pass and ℛ'(𝐱)=-𝐈 in the backward pass. Introducing the GRL reduces the DANN setup to a single minimization problem.
The terminal layer of the domain classifier passes through a sigmoid activation function, classifying the input as coming from the L-PICOLA domain (“1") or the other domain (“0") based on a threshold of 0.5. The domain confusion loss L_domain is calculated using the binary cross-entropy loss with logits, accounting for the imbalance in data set size between the two domains. After training, we further train new domain classifiers, each with two trainable layers, for every machine while keeping the weights of the feature extractor frozen. This allows us to evaluate the classifiability of the extracted features.
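As a concrete illustration, a gradient reversal layer and the supervised DANN objective above can be sketched in PyTorch as follows. The module and variable names are placeholders, and the mean-squared-error term merely stands in for the vanilla regression loss (which in our setup also predicts uncertainties).

from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)        # identity in the forward pass: R(x) = x
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output        # negated gradient in the backward pass: R'(x) = -I

def dann_loss(features, y_true, d_true, regressor, domain_clf, alpha):
    # features: output of the shared feature extractor G_f
    y_pred = regressor(features)                          # G_r
    d_logit = domain_clf(GradReverse.apply(features))     # G_d applied after the GRL
    vanilla = nn.functional.mse_loss(y_pred, y_true)      # stand-in for L_vanilla
    domain = nn.functional.binary_cross_entropy_with_logits(d_logit, d_true)
    return vanilla + alpha * domain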
Figure <ref> displays the results of the 25 independently trained DANN machines. Individual predictions are colored according to the probability, assigned by the domain classifier, that they originate from the L-PICOLA domain, denoted P(𝒟_L-PICOLA|𝐱_SDSS). The results show Ω_ m=0.304±0.033 and σ_8=0.795±0.057. However, we observe that, compared to the semantic alignment strategy, the discrepancy between the distributions of the two domains is not effectively reduced, making this approach susceptible to overfitting. Consequently, the adequacy of a training scheme can vary depending on the characteristics of the sources and targets, and it must be chosen judiciously.
A Different Level Text Protection Mechanism With Differential Privacy
Qingwen Fu
(arXiv:2409.03707)
§ ABSTRACT
Differential privacy is now widely applied to text protection. However, existing text sanitization mechanisms based on metric local differential privacy (MLDP) do not apply to non-metric semantic similarity measures and cannot achieve a good trade-off between privacy and utility. Moreover, current approaches perturb every token in a document. Perturbing all tokens may be reasonably effective for downstream tasks on some datasets, but when applied to long texts it can greatly distort the overall meaning. In this article, we therefore propose to use the attention weights of a pre-trained model to assign importance parameters to different words and to apply differential perturbations according to word importance. In addition to conducting inference attacks, we also use large language models to test the privacy and utility of our perturbed data.
§ INTRODUCTION
In many natural language processing (NLP) applications, the input text often contains sensitive information from which the identity of a specific person can be inferred <cit.>. In addition, legal frameworks such as the CCPA and GDPR may further restrict the sharing of sensitive text data. This makes it difficult for NLP service providers to collect training data unless the privacy concerns of data owners (both individuals and institutions) are properly addressed.
Much work has addressed these privacy concerns <cit.> by training language models with differential privacy (DP) <cit.>, which is considered the standard for privacy-preserving computation. These methods protect the data source by adding noise to the gradients or the training data. However, they still require service providers to collect raw data for LM training, which may itself cause privacy leakage.
To address the leakage problem at its root, the data itself needs to be protected. Typically, such privacy mechanisms <cit.> work by replacing the original tokens in a document with new tokens drawn from an output token set, thereby generating a sanitized text document. Specifically, they adopt metric local differential privacy (MLDP, also known as dχ-privacy) to provide privacy and utility guarantees. MLDP <cit.> inherits the idea of DP and ensures that the outputs of any pair of adjacent input tokens are indistinguishable, protecting the original tokens from being inferred. At the same time, MLDP preserves the utility of the sanitized text by assigning higher sampling probabilities to tokens that are semantically closer to the original token. In these mechanisms, any metric distance (such as the Euclidean distance) can be used to measure the semantic similarity between tokens.
The paper <cit.> proposes an MLDP-based mechanism that assigns a smaller, customized output set to each input token to achieve token-level privacy protection. This method improves on the SanText mechanism <cit.> by limiting the size of the output set, which increases the perturbation rate of the text without weakening the privacy guarantee. A custom parameter K determines the output set size of each input token, allowing different utility-privacy trade-offs, and an improved CusText+ mechanism skips stop words during sampling to achieve higher utility.
These methods do improve the perturbation efficiency of words in the text to a certain extent. However, all previous studies treat every token that appears in the text as equally important and perturb all tokens to the same degree. This may have little impact on performance for datasets tied to specific tasks, but for ordinary long texts, especially medical records or long-form fiction, treating all words as equally important and perturbing them uniformly greatly reduces the utility of the text and loses information that we need. We therefore propose a method based on a pre-trained BERT model. Using the pre-trained model, we extract the attention weights of all tokens in a sample, average the weights over the heads and layers of the multi-head, multi-layer Transformer, and normalize them. The normalized weight serves as a proxy for the importance of each word in the sample, and words of different importance are then selectively perturbed according to this importance parameter. This reduces the damage to the utility of the text. We test the approach on two public datasets, SST-2 and QNLI, and demonstrate the effectiveness of our method for extracting words of different importance.
§ RELATED WORK
When discussing privacy risks and protection measures in natural language processing (NLP), we can see three main research directions: research on privacy attacks on deep learning models, differential privacy (DP) and its application in NLP, and the application of local differential privacy (LDP).
First, privacy attacks against deep learning models, especially language models (LMs), have become an important research area. For example, <cit.> proposed methods for recovering sensitive attributes or parts of the original text from the text embeddings produced by popular LMs, without relying on the structure or pattern of the input text. <cit.> demonstrated a black-box attack against GPT-2 capable of extracting verbatim training data. These studies show that privacy attacks on LMs are realistic and damaging, so it is crucial to develop defenses with strict guarantees.
Second, regarding differential privacy (DP) and its application in NLP, DP has become the de facto standard for private statistical analysis. Some research attempts to inject high-dimensional DP noise into text representations <cit.>, but these methods fail to achieve a good balance between privacy and utility, mainly because of the curse of dimensionality. Another approach is to learn private text representations through adversarial training <cit.>, where an adversary model is trained to infer sensitive information alongside the main model, while the main model is trained to maximize the adversary's loss and minimize the main learning objective.
Third, the application of local differential privacy (LDP) also plays an important role in NLP. LDP allows data owners to sanitize data locally before sending it to the server. This means data owners can share information without revealing the content of their original data. In NLP applications, LDP is particularly valuable because it can collect and analyze text data while protecting user privacy. For example, the LDP mechanism can be used to generate sanitized text datasets that can be used to train machine learning models without exposing personal information. The challenge of LDP is to achieve privacy protection while maintaining data practicality, especially when dealing with text data with complex structure and high-dimensional features.
To sum up, the NLP field faces multiple challenges when dealing with privacy protection issues. On the one hand, effective defense strategies need to be developed against privacy attacks on LMs; on the other hand, differential privacy and local differential privacy provide a series of solutions to protect the privacy of text data. These studies not only help improve the privacy protection capabilities of existing technologies, but also provide important guidance for future privacy protection research in the field of NLP.
§ PRELIMINARIES
Before delving into our technique, which builds on CusText, let us first briefly review some fundamental concepts, including ϵ-differential privacy and the exponential mechanism.
Definition 1 (ϵ-differential privacy)
Given a privacy parameter ϵ≥ 0, for all adjacent input pairs x, x' ∈ X, and for every possible output y ∈ Y, a randomized mechanism M satisfies ϵ-differential privacy if it adheres to the following condition:
Pr[M(x) = y]/Pr[M(x') = y]≤ e^ϵ
In this definition, a smaller ϵ indicates a higher level of privacy protection. Theoretically, ϵ-DP ensures that even adversaries with infinite computing power cannot distinguish between the probability distributions of two adjacent inputs, as their probabilities of producing the same output y are closely matched. In the context of Natural Language Processing (NLP), any pair of input tokens that produce the same output set Y are considered adjacent. This paper continues to use this definition for adjacent inputs.
Definition 2 (Exponential Mechanism).
Given a scoring function u: X × Y →ℝ, the exponential mechanism M(X, u, Y) achieves ϵ-differential privacy by randomly selecting an output token y ∈ Y to perturb the input token x ∈ X with a probability proportional to
exp(ϵ· u(x,y)/2Δ u)
Here, u(x, y) represents the score of the output token y for the input token x. Additionally, the sensitivity of u, denoted as Δ u, for the exponential mechanism (EM) is defined by
Δ u := max_y ∈ Ymax_x, x' ∈ X| u(x, y) - u(x', y) |
According to the second definition, lower sensitivity makes it statistically more difficult to distinguish the original token from its adjacent tokens. In practice, we may standardize the scoring function u, normalizing its sensitivity Δ u to a fixed value (e.g., 1), so that the selection probability for each output token y for an input token x is solely related to u(x, y), considering that ϵ and Δ u are predetermined, and a larger u(x, y) results in a higher sampling probability.
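A hedged sketch of exponential-mechanism sampling over a candidate output set is given below; scores[y] plays the role of u(x, y), the sensitivity is assumed to be normalised to Δu = 1, and the function name is ours rather than taken from any released implementation.

import numpy as np

def sample_output_token(scores, epsilon, rng=None):
    # scores: dict mapping each candidate output token y to its score u(x, y)
    rng = rng or np.random.default_rng()
    tokens = list(scores)
    logits = np.array([epsilon * scores[t] / 2.0 for t in tokens])
    probs = np.exp(logits - logits.max())   # numerically stabilised exponential weights
    probs /= probs.sum()
    return rng.choice(tokens, p=probs)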
In an NLP task, we assume each document D = ⟨ R_i ⟩_i=1^m contains m records, and each record R = ⟨ t_j ⟩_j=1^n contains n tokens. We define the task of text sanitization as follows: Given an input document D containing sensitive information, a set of all possible input tokens X, a set of all possible output tokens Y, and a differential privacy mechanism M (e.g., the EM used in this work), it applies the mechanism M to each input token t_j ∈ D, replacing it with an output token t'_j ∈ Y if t_j ∈ X. All tokens after replacement form the sanitized document, i.e., D' = ⟨ R'_i ⟩_i=1^m and R' = ⟨ t'_j ⟩_j=1^n.
Following previous studies <cit.>, we still adopt a semi-honest threat model in the context of local differential privacy. In this model, the data owner only submits sanitized documents to the service provider. However, a malicious service provider may try to extract sensitive information from the received data. We assume that the adversary can only obtain the sanitized text and all algorithms and mechanisms are public and transparent. In addition, we also assume that the adversary has unlimited computing power.
§ METHOD
Our privacy perturbation method is based on the CusText mechanism. The difference is that we use a pre-trained BERT model to assign weights to the different words in each example. We average the attention weights over all heads and layers, remove the weights of the [CLS] and [SEP] tokens, and normalize the weights of the remaining words. The normalized weight value represents the importance of each word. We then combine this with the CusText mechanism to perturb words of different importance to different degrees.
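To illustrate the extraction step, here is a sketch using the Hugging Face transformers library. The pooling choices (averaging the attention received by each token over layers, heads, and query positions) reflect our reading of the description above, subword-to-word merging is omitted, and the bert-base-uncased checkpoint is an assumption rather than a confirmed detail.

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

def token_importance(sentence):
    enc = tokenizer(sentence, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        out = model(**enc)
    # stacked shape: (layers, batch, heads, query, key); average over layers, heads and query positions
    att = torch.stack(out.attentions).mean(dim=(0, 2, 3))[0]      # attention received per token
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    keep = [i for i, t in enumerate(tokens) if t not in ("[CLS]", "[SEP]")]
    scores = att[keep]
    scores = scores / scores.sum()                                # normalise within the sample
    return [(tokens[i], s.item()) for i, s in zip(keep, scores)]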
"CusText" is a tailored text sanitization framework designed to safeguard privacy by substituting every token within a text. It comprises two primary components: firstly, a semantic correlation-based mapping function, fmap, which identifies the appropriate output set for each input token; secondly, a sampling function, fsample, that selects new tokens from this output set using an exponential mechanism.
Unlike traditional SANTEXT methods, CusText enhances the relevance of the output tokens to the original tokens by customizing the output set for each input token, thus improving the utility of the model. The development of the mapping function involves picking tokens from the input set, identifying those that are semantically closest, and creating a mapping. This mapping is then refined by progressively removing the tokens that have been mapped until a complete mapping is achieved or there are insufficient tokens left to continue. This strategy ensures that every input token is paired with at least one neighboring token, preserving the effectiveness of the privacy measures.
Sampling function: the fsample function, which relies on fmap, selects an output token for each input token. This selection is governed by an exponential mechanism and requires a carefully designed scoring function u to maintain a balance between utility and privacy. The function ensures that the score of each input-output token pair is bounded, with semantically closer pairs receiving higher scores.
Scoring function: CusText uses the same similarity function as the mapping scheme, e.g., Euclidean distance or cosine similarity based on token-vector representations <cit.>. In general, all similarity measures can be divided into two categories, negative and positive, according to the correlation between the score and semantic proximity. For example, Euclidean distance and cosine similarity are negatively and positively correlated measures, respectively, because a smaller Euclidean distance and a larger cosine value between two vectors both indicate higher semantic proximity of the corresponding tokens. Scoring functions are then designed for each of these two types of similarity measures.
The following describes our perturbation method based on words of different importance. We use the pre-extracted words of different importance as the sensitive word list, and then apply the CusText mechanism to perturb the words in this list.
Aggressive mechanism.
When we select the important word list, an aggressive mechanism perturbs all words in the sensitive word list indiscriminately. This may have a larger impact on the original semantics of the text, because the same noun or verb can be perturbed into different words at each occurrence, making the text semantically incoherent. The impact on short texts may be smaller than on long texts.
Conservative mechanism.
When the same sensitive word appears multiple times in a sample, we give it the same perturbation result. This conservative mechanism may be easier to attack, but it assigns the same perturbation to repeated nouns in long texts. In this way, relations such as subject and predicate are better preserved, and the semantic structure is retained. It is thus possible to protect sensitive information while preserving more of the semantic and textual information.
The above two mechanisms can be used to process different categories of text data, and can be freely selected as needed. Combined with our selection mechanism for words of different degrees of importance, the text can be protected more flexibly to better achieve a balance between privacy and utility.
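The two modes can be combined with the sampler sketched earlier into the following sketch; sanitize operates on a tokenised sample, sensitive is the importance-based sensitive word list, output_sets maps each sensitive token to its scored candidate set, and all names are illustrative.

def sanitize(tokens, sensitive, output_sets, epsilon, conservative=False):
    cache = {}        # conservative mode: reuse the first replacement drawn for a token
    sanitized = []
    for tok in tokens:
        if tok not in sensitive:
            sanitized.append(tok)                # non-sensitive tokens are left untouched
        elif conservative and tok in cache:
            sanitized.append(cache[tok])
        else:
            new_tok = sample_output_token(output_sets[tok], epsilon)
            cache[tok] = new_tok
            sanitized.append(new_tok)
    return sanitized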
§ EXPERIMENT
§.§ Experimental Setup
Following <cit.>, we select two datasets from the GLUE benchmark <cit.> for our experiments, both of which are classification tasks.
In our experimental section, we aim to demonstrate the efficacy of using attention mechanism parameters to represent the importance of different words within a sample. This section is divided into two parts, each utilizing the public datasets SST-2 and QNLI to validate our method.
Datasets Description:
* SST-2: A widely-used movie review dataset for sentiment classification, consisting of 67,000 training samples and 1,800 test samples. The evaluation metric is accuracy.
* QNLI: A dataset for sentence pair classification with 105,000 training samples and 5,200 test samples. Accuracy is also used as the evaluation metric here.
In our approach, for both the SST-2 and QNLI datasets, we first identify the most and least important words, quantified as the top and bottom 10%, 20%, 30%, 40%, 50%, and 60% based on the attention scores. These words are considered as the sensitive words that need to be perturbed. We record the number of words actually perturbed during training and compare it under similar total perturbation conditions to gauge the effectiveness of our method.
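The selection of these lists can be sketched as follows, reusing the per-sample importance scores from the earlier extraction sketch; the fraction q and the function name are placeholders.

def sensitive_words(importance, q=0.2, most_important=True):
    # importance: list of (token, score) pairs for one sample; q: fraction of tokens to select
    ranked = sorted(importance, key=lambda ts: ts[1], reverse=most_important)
    k = max(1, round(q * len(ranked)))
    return {tok for tok, _ in ranked[:k]}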
We use the vocabulary from the counter-fitted GloVe word vectors and apply both Euclidean distance and cosine similarity as measures for comparing the vectors. The sensitive word list is derived from the normalized weights associated with different words in the pre-trained model.
For each downstream task, we set the maximum sequence length to 128 and limit training to 3 epochs. On both SST-2 and QNLI, the batch size is set to 64. We use BERT as the pre-trained model, with a learning rate of 2 × 10^-5. The experiments are conducted on an A100 GPU.
The second part of our experimental analysis focuses on demonstrating the effectiveness of our approach. In this phase, we perturb words of varying degrees of importance—specifically, 5%, 10%, and 20% of the words determined by our quantifier. We then evaluate both the privacy and effectiveness of the perturbed datasets using several established mechanisms.
* Evaluation Mechanisms: We apply various metrics to assess the privacy levels and the utility of the datasets after perturbation.
* Data Perturbation: We methodically perturb the words identified as having high, medium, and low importance to measure the impact on the dataset’s utility and privacy.
* Analysis of Important Words: This method also allows us to count and calculate the distribution of words based on their importance. We identify and examine some relatively high-importance words, observe the categories they belong to, and analyze their patterns.
This structured evaluation helps in understanding how different levels of perturbation affect the privacy-security balance and the overall effectiveness of the sensitive data we intend to protect.
§.§ Experimental Results
Below we report our experimental results for ϵ = 3.
§.§ Result Analysis
We conducted experiments on perturbing words of different importance on the SST-2 and QNLI datasets. For a fair comparison, we choose GloVe as the token embedding and keep all other variables the same. Table 1 shows the results of perturbing words of different importance on the SST-2 dataset while keeping the training set unchanged: for the same test set, words of different importance are perturbed with ϵ = 3 held fixed. As the table shows, when roughly the same number of test-set tokens are perturbed, perturbing more important words degrades the results more than perturbing less important words, which supports the correctness of our word extraction method. Figure 2 shows the results of perturbing the training and test data at the same time. Again, when the same number of words of different importance is perturbed, perturbing more important words has a greater impact on the results, which also supports our extraction strategy. Comparing horizontally, we find that when the perturbed training set is used for training, the match with the perturbed test set improves, which further reflects the effectiveness of distinguishing words of different importance. The same conclusions hold for the QNLI dataset. Therefore, our Transformer-based extraction method is effective: when applying differential privacy to text, we can selectively perturb words of different importance. This method can also serve as a screening mechanism that narrows the search scope for keywords and private words; combined with named entity recognition or LLM-based reasoning, it can help us find effective keywords faster. This makes it a general-purpose method.
Conservative strategy. Under the conservative strategy, identical words within the same sample always receive the same perturbed replacement. Here we analyze this strategy for the top 10, 20, 30, 40, and 50 most important words on QNLI, whose longer texts make repeated vocabulary more likely. Adopting the conservative strategy significantly improves the experimental results. It is therefore well suited to protecting relatively long texts, where it better preserves semantic coherence.
Token-inference and query attacks are carried out on the perturbed text to test how well our importance-based extraction relates to privacy. A pre-trained BERT model can be used to estimate how easily the original text can be recovered from the sanitized text: each token in the sanitized text is replaced in turn with the "[MASK]" token and fed to the BERT model, and the model's prediction for "[MASK]" is taken as the inferred original token. If the prediction matches the original input token, the attack attempt is counted as successful. The success rate over all such attacks, r_mask, then quantifies the privacy protection of the text as 1 - r_mask. Because our algorithm builds on the CusText mechanism without modifying the underlying perturbation, its privacy protection under this attack is the same as CusText's.
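For concreteness, one possible implementation of this mask-inference attack with the Hugging Face transformers library is sketched below. The choice of bert-base-uncased and the assumption that the sanitized and original texts are pre-tokenized into aligned, equal-length word lists are ours.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def privacy_score(sanitized_tokens, original_tokens):
    # Mask each position of the sanitized text in turn, let BERT infer the token,
    # and count an attack as successful if the prediction equals the original token.
    hits = 0
    for i, orig in enumerate(original_tokens):
        masked = list(sanitized_tokens)
        masked[i] = tokenizer.mask_token
        inputs = tokenizer(" ".join(masked), return_tensors="pt")
        mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()
        if len(mask_pos) == 0:
            continue
        with torch.no_grad():
            logits = model(**inputs).logits
        pred_id = logits[0, mask_pos[0, 0]].argmax().item()
        if tokenizer.convert_ids_to_tokens(pred_id) == orig:
            hits += 1
    r_mask = hits / len(original_tokens)
    return 1.0 - r_mask   # privacy protection as defined above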
Analysis of important vocabulary. When we use GPT-4 to analyze the words of different importance that we extracted, we find that the more important words are typically nouns, pronouns, punctuation marks, and similar tokens, which matches our intuition about which words carry the meaning of a sentence. However, when GPT-4 is asked to reconstruct the sentence zero-shot from only the extracted high-importance words, the reconstruction differs substantially from the original. Our method therefore does not enable unguided reconstruction and is better suited to identifying important words than to recovering full sentences.
§ CONCLUSION AND LIMITATION
Conclusion: This work shows that the importance of different words in a sentence can be quantified from the attention weights across the multiple layers of a Transformer, although further supporting experiments are needed. When the method is applied to long texts, accuracy suffers from the Transformer's maximum sequence length, so handling longer inputs will require combining it with other models or with techniques for processing longer contexts. More work, including experiments that combine the method with LLMs, is needed to improve its performance.
§ FUTURE WORK
With the rapid development of LLMs, a natural next step is to combine large language models with the discovery of sensitive data. Guided by suitable prompts, an LLM can identify important and sensitive information in text, and combining it with our method would allow filtering of sensitive information beyond a fixed set of categories, since unclassified parts of the text may also contain critical sensitive information. This is a direction worth exploring.
apalike
|
http://arxiv.org/abs/2409.03533v1 | 20240905134611 | Bayesian inference of wall torques for active Brownian particles | [
"Sascha Lambert",
"Merle Duchene",
"Stefan Klumpp"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
Interactive Surgical Liver Phantom for Cholecystectomy Training
[
September 9, 2024
===============================================================
§ INTRODUCTION
Self-propelled particles are a widely studied type of active matter that includes a broad range of systems, from microbial motility to animal flocking <cit.>. Active Brownian Particles (ABP) form one of the simplest models of such systems, based on a fixed active propulsion force and diffusion to describe the motion. These minimal descriptions are often extended to include interactions between the particles <cit.>, with external fields<cit.>, or with their environment <cit.>. The latter is important as self-propelled particles often move in complex environments characterized by confining walls or interactions with obstacles. Many studies have investigated the microscopic origin of alignment with obstacles based on steric and/or hydrodynamic interactions<cit.>. Within a phenomenological description, such alignment can be implemented within ABP models as a 'wall torque', an empirical torque that provides the observed alignment with surfaces <cit.>. Experiments can inform some of the underlying parameters of such extended models, while others are strictly of an empirical nature, combining multiple physical phenomena.
Our goal in this study is two-fold. First, we demonstrate that the aforementioned empirical torque model is a quantitatively plausible description of an active rod's steric interaction with a surface and explore how its parametrization influences accuracy. For this, we utilize a classical least-squares approach to fitting the torque model to simulations of an active rod and evaluate the relevant residuals that describe the errors in the predicted dynamics. Second, we describe a procedure to learn the functional form of wall interactions, such as the wall torque from experimental data. This is done using a Bayesian inference approach, where the posterior model parameters are evaluated from trajectories of self-propelled particles. Notably, this approach does not require any information about the particle's orientation, which might be experimentally inaccessible.
§ MODEL
Our base model is an Active Brownian Particle (ABP)<cit.> in two dimensions with anisotropic diffusion, obeying the following set of Langevin equations:
ṙ(t) = v_0 φ̂e_x + 1/k_BTφ̂^-1 D_Tφ̂F_wall + √(2)φ̂^-1√( D_T)ξ(t),
φ̇ (t) = D_R/k_BTτ_ABP + √(2D_R)χ(t).
They describe the time evolution of the active particle's location 𝐫 and its orientation φ using the active velocity v_0 and the rotational diffusion D_R. In contrast to the simplest ABP models, we allow for anisotropic translational diffusion, as we will use this model to describe elongated (rod-shaped) particles explicitly. The translational diffusion is thus described by a diagonal diffusion matrix D_T=diag(D_T^∥, D_T^⊥) in the particle's coordinate system. We use φ̂ to denote the rotation matrix linking the lab's coordinate system to the particle's coordinate system. The diffusion is set to values representing a spherocylindrical rod, using the parametrization of
Lüders et al. <cit.>:
D_T^∥ = k_BT/2 πη L(log (p) - 0.1404 + 1.034/p - 0.228/p^2),
D_T^⊥ = k_BT/4 πη L(log (p) +0.8369 + 0.5551/p - 0.06066/p^2),
D_R = 3 k_BT/πη L^3(log (p) - 0.3512 + 0.7804/p - 0.09801/p^2).
p is the aspect ratio of the spherocylinder, and L is its length. This parametrization is valid for p=1-30 <cit.>.
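A direct transcription of this parametrization into Python (our own helper function, with k_BT and η as inputs) reads:

import numpy as np

def rod_diffusion(L, p, kBT=1.0, eta=1.0):
    # Translational (parallel/perpendicular) and rotational diffusion coefficients
    # of a spherocylinder of length L and aspect ratio p (valid for p = 1-30).
    logp = np.log(p)
    D_par  = kBT / (2 * np.pi * eta * L) * (logp - 0.1404 + 1.034 / p - 0.228 / p**2)
    D_perp = kBT / (4 * np.pi * eta * L) * (logp + 0.8369 + 0.5551 / p - 0.06066 / p**2)
    D_rot  = 3 * kBT / (np.pi * eta * L**3) * (logp - 0.3512 + 0.7804 / p - 0.09801 / p**2)
    return D_par, D_perp, D_rot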
The ABP may interact with walls, which exert force and torque on the particle.
The term F_wall=-∇ V_WCA in eq. <ref> is the volume exclusion force exerted by walls. We use WCA repulsion <cit.>, a Lennard-Jones potential with a truncated attractive region:
V_WCA(x) =
V_LJ(x) - V_LJ(x_c) for x < x_c =2^1/6/2σ,
0 for x ≥ x_c,
with
V_LJ(x) = 4ϵ[ ( σ/x)^12 - ( σ/x)^6].
Here, σ is the length scale of the interaction. For our simulations, we set the hardness to ϵ=4k_BT.
In addition, the ABP gets subjected to an empirical torque τ_ABP, factorized into a force term proportional to the instantaneous repulsion exerted by the wall (which depends on the distance from the wall) and an angle term dependent on the ABP's incidence angle with the wall:
τ_ABP = |F_wall|· f_ABP(θ_wall - φ).
The angle term f is an arbitrary function of the angle of incidence between the swimming direction φ and the wall's normal direction θ_wall. We aim to learn this function using information gathered from observations of an active rod's trajectory. For this, we parametrize the unknown function in terms of its spectral components
f_ABP(φ) = ∑_n=1^N_Pα_n·sin(n[ θ_wall - φ])
up to order N_P. All anti-symmetric cosine terms are suppressed by demanding symmetry with respect to the wall-normal, leaving only the sine terms. The factors α_i determine the modes' strengths. The first mode (n=1 with α_1<0) always produces a torque away from the wall, irrespective of the angle of incidence. This creates an additional repulsion mechanism that leads to trajectories moving away from the wall in addition to the WCA repulsion. The second mode (n=2) creates a torque that aligns the ABP with the wall tangent. When the particle is heading toward the wall, the torque turns it away from the wall, whereas when the particle is headed away from the wall, the torque turns it back to the wall tangent. With additional higher frequencies, more complex torque functions can be described.
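In code, the spectral torque ansatz amounts to the following short evaluation (a sketch with our own function names):

import numpy as np

def f_spectral(phi, theta_wall, alpha):
    # Angle term: sum_n alpha_n sin(n (theta_wall - phi)), with alpha = (alpha_1, ..., alpha_NP).
    n = np.arange(1, len(alpha) + 1)
    return float(np.sum(np.asarray(alpha) * np.sin(n * (theta_wall - phi))))

def wall_torque(F_wall, phi, theta_wall, alpha):
    # Empirical torque tau_ABP = |F_wall| * f(theta_wall - phi).
    return float(np.linalg.norm(F_wall)) * f_spectral(phi, theta_wall, alpha)

With alpha = [a1] and a1 < 0 this reproduces the purely repulsive n=1 mode, while alpha = [0, a2] gives the tangentially aligning n=2 mode.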
§.§ Training Data generation
The Bayesian method outlined in this paper provides a framework for learning the wall torque function f from experimental data on a swimmer's location over time. However, for the context of this paper, we only employ synthetic data, as this allows for better diagnostics of the method. The data is generated using model variants described in the previous section. We generate two types of trajectories:
* Type-A: ABP trajectories with predefined wall torque functions f. These provide ground truth data to validate the Bayesian inference method.
* Type-R: Rod trajectories from a variant of our ABP model describing rod shape particles explicitly. In this case, the empirical wall torque is disabled, f=0. To capture steric interactions of a rod, the ABP gets equipped with N_S test sites equidistantly distributed at locations ± piσ/2N_S in the front and back of the ABP. The WCA repulsion forces F_± sat, i are evaluated at the locations of the test sites and produce torques piσ/2N_S· F_± sat, i×φ̂ e_x.
Both types of simulations are set up like the example in fig. <ref>. The ABP and the rods are initialized outside the WCA interaction region in front of the wall. The initial orientations were chosen on a grid from -60^∘ to 60^∘. Integration is performed using an Euler-Maruyama discretization and terminated when the particle is sufficiently separated from the wall.
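A minimal Euler-Maruyama update for the Type-A model, written in terms of the body axis and its normal so that no explicit rotation-matrix convention is needed, could look as follows; the wall force F_wall and the wall-normal direction theta_wall are assumed to be supplied by the caller (e.g. from the WCA wall above), and the function and variable names are our own.

import numpy as np

def abp_step(r, phi, F_wall, theta_wall, alpha, dt, v0, D_par, D_perp, D_rot, kBT, rng):
    # One Euler-Maruyama step of the ABP Langevin equations with anisotropic
    # translational diffusion and the empirical wall torque.
    n_hat = np.array([np.cos(phi), np.sin(phi)])     # propulsion / body axis
    t_hat = np.array([-np.sin(phi), np.cos(phi)])    # perpendicular direction
    F_par, F_perp = F_wall @ n_hat, F_wall @ t_hat
    drift = v0 * n_hat + (D_par * F_par * n_hat + D_perp * F_perp * t_hat) / kBT
    noise = (np.sqrt(2 * D_par * dt) * rng.standard_normal() * n_hat
             + np.sqrt(2 * D_perp * dt) * rng.standard_normal() * t_hat)
    modes = np.arange(1, len(alpha) + 1)
    tau = np.linalg.norm(F_wall) * np.sum(np.asarray(alpha) * np.sin(modes * (theta_wall - phi)))
    phi_new = phi + (D_rot / kBT) * tau * dt + np.sqrt(2 * D_rot * dt) * rng.standard_normal()
    return r + drift * dt + noise, phi_new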
§ LEAST-SQUARES ANALYSIS
To evaluate the performance of the empirical torque model in describing the steric interactions of rods, we've extracted the dynamic torque from type-R simulations.
We generated 2500 rod trajectories (100 per initial orientation), each for a range of parameter sets. The rate of change φ̇(t) ≈ (φ(t+h) - φ(t))/h in the orientations, conditionally averaged for given distance and orientation, can then be used to calculate the effective torque on the rod:
τ(t) = k_BT/D_R ⟨φ̇(t)⟩.
Fig. <ref>a shows an example of the resulting torque data, parametrized by the distance to the wall and the rod orientation. The system's dynamics primarily occur in a crescent moon-shaped region, where the rod is inbound to the wall. The white regions are not sampled in the simulations, i.e., they are typically not visited during the scattering process. When the rod is pointed away from the wall, the torque quickly vanishes as the active motion and the repulsive interaction move it out of the interaction zone of the WCA potential.
We then fit eq. <ref> to the data by minimizing the squared residuals:
(α_n)_opt = argmin_(α_n)∑_t,k[ τ^(α_n)(φ^k(t), d^k(t)) - τ^k(t)]^2,
where the index k runs over the simulated trajectories.
Assuming Gaussian errors in the calculated torques (which is expected given the diffusive structure of the equations of motion), this gives us a point estimate of the most likely torque parametrization. We calculate error bands using 68% confidence intervals obtained from bootstrapping <cit.>. Fig. <ref>B shows the best fit obtained for the leading mode (n=1). The corresponding amplitude is (α_1)_opt=2.03σ±0.08σ.
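Because the empirical torque is linear in the amplitudes α_n, the minimization above reduces to ordinary linear least squares; a minimal sketch (our own helper, with the bootstrap omitted) is:

import numpy as np

def fit_mode_amplitudes(F_wall_mag, dphi, tau_meas, N_modes):
    # F_wall_mag, dphi (= theta_wall - phi) and tau_meas are 1D arrays collected
    # along the rod trajectories; the design matrix is |F_wall| * sin(n * dphi).
    n = np.arange(1, N_modes + 1)
    A = F_wall_mag[:, None] * np.sin(np.outer(dphi, n))
    alpha, *_ = np.linalg.lstsq(A, tau_meas, rcond=None)
    return alpha

Confidence bands are then obtained by repeating this fit on bootstrap resamples of the trajectories.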
We've tested the influence of several parameters on the model. Variations in the number of test sites for the repulsive interaction with the wall, the Péclet number of the active rod, and the wall hardness did not significantly influence the performance of the empirical model or the parametrization of f (see Supplementary Material).
In contrast, the rod aspect ratio p, the mode frequency n, and the mode count N_P affect the fit significantly. Fig. <ref>A and B show the fit results and performance for a single-mode description of f. Note that we've varied n continuously instead of only considering integer values. We find a linear relationship between 1/n and its optimal amplitude (α_n)_opt. This suggests that the rod dynamics at small angles of incidence, where f∝α_nn, are of primary importance to the structure of f.
Furthermore, we see a linear relationship between the amplitude and the rod aspect ratio, corresponding to a modulation of the lever arm length. Surprisingly, the tangentially aligning mode (n=2) is not the optimal configuration. Instead, we found that a slightly bigger n≃ 1.1 shows lower residual errors for all aspect ratios, as shown in fig. <ref>C.
When using multiple modes (discrete n=1,2,...), we find very similar results to the single-mode case in terms of performance and dependency on the aspect ratio. The ratios of optimal mode amplitudes show a non-linear dependency on p (see Supplementary Material).
Using up to four modes improves the model's performance, albeit by less than 1% compared to the single-mode model. We did not find that using more than four modes leads to any further improvement of the model (see fig. <ref>D).
From these results, we conclude that using the empirical torque function f successfully introduces a mechanism that captures the steric repulsion seen in rod simulations. However, different modes of the torque function result in approximately equal performance, indicating that the choice of torque function is not unique and that modes are somewhat interchangeable. Likewise, extending the representation to include higher-frequency components does not significantly improve the model quality after four modes.
§ BAYESIAN ANALYSIS
We now turn to a Bayesian approach <cit.> to learn the torque function f from observations. This approach provides intuitive and easily interpretable <cit.> results that quantify our knowledge about f that is gained by observing the system dynamics. Specifically, we construct the posterior probability distribution of f.
§.§ Method Outline
Bayesian inference targets the posterior distribution p(θ|Y), where θ denotes the set of parameters of interest, in our case, the parametrization (α_n) of the torque function f, and Y denotes the data used for the inference, here consisting of sequences of x- and y-positions that form the trajectories of the rod (but not its orientation). We use the tip of the rod as the measured location, as it is the primary interaction point of the rod with the wall. The posterior distribution is calculated from Bayes' rule p(θ|Y) ∝ p(Y|θ)p(θ), using the likelihood p(Y|θ) and the prior p(θ). We use an uninformative uniform prior for all components of θ, only limiting the interval so that the ABP does not turn more than 90^∘ in a single integration time step. Furthermore, we assume a flat prior for all unobserved orientational states.
The likelihood p(Y|θ) is computed for the (type-A) ABP model. Here, we exploit that our observation data originates from another simulation (type-A for test cases, then eventually type-R) with a finite integration time step. We adopt the same time steps for the ABP, thus avoiding introducing another level of discretization error. If the method is applied to real experimental data, one will introduce a discretization error corresponding to the data frequency (for example, the framerate of the experimental video). Using a single time step between observations conveniently decouples the rotational and translational diffusion of the ABP, making it possible to calculate the likelihood just from the particle's location while marginalizing the ABP's orientation with a filtering approach.
After discretization with an Euler-Maruyama scheme, equation <ref> becomes
r_i+1 - r_i - v_0 φ̂_i e_xΔ t - 1/k_BTφ̂^-1_i D̂_Tφ̂_i F_wallΔ t
=√(2)φ̂_i √(D̂_TΔ t)w_i(t) ≡o_i^φ_i,
yielding the displacement the particle experiences through translational diffusion as the offset o_i^φ_i. The random numbers w_i are normal-distributed. Therefore, the offset is normal-distributed as well:
o_i^φ_i∼𝒩(0,2φ̂_iD̂_Tφ̂_i^-1Δ t).
Then, the joint likelihood of all the offsets, conditioned on the orientations φ_i of the ABP at each time step, is
p(o_1^φ_1...o_N^φ_N|θ, φ_1...φ_N)=∏_i^N𝒩(o_i^φ_i|0,2φ̂_iD̂_Tφ̂_i^-1Δ t)
§.§ State Marginalization
This equation needs to be marginalized for all orientations φ_i, which is not analytically possible due to the trigonometric functions in φ̂_i. As we're dealing with a state space model, an effective strategy is using a sequential filter, which calculates the posterior of φ_i given all previous states φ_1...φ_i-1. The filter then steps through all time steps in sequence to construct the joint probability distribution p(φ_1...φ_N|θ) iteratively. We considered a range of commonly used filters and found that particle filters <cit.> work very well, unlike linear Kalman filters. We also decided against non-linear extensions of the Kalman filter as the particle filter produces an unbiased estimation of the posterior density, which allows us to sample from the true posterior via pseudo-Marginal Metropolis-Hastings (PMMH) methods <cit.>.
The particle filter approximates the posterior distribution of the rotational state φ_i as a sum over k=1...N_p delta distributions at locations φ̃_i^k, the name-giving 'particles':
p(φ_i | θ, φ_1...φ_i-1) ≈1/N_p∑_k=1^N_pδ(φ_i - φ̃^k_i-1)
Filter particles, the discretization samples for the orientational states, are characterized only by the unobserved orientational degree of freedom of ABPs, but, together with the positional data, can be used to sample full ABP trajectories that include the unobserved latent orientations. They follow the orientation dynamics of eq. <ref> with the torques that follow from the position data.
With this, we can marginalize equation <ref> as
p(Y|θ) ∝∫ p(o_1^φ_1...o_N^φ_N|θ, φ_1...φ_N)p(φ_1...φ_N) dφ_1... dφ_N
≈∏_i=0^N1/N_p∑_k=1^N_p p(o_i^φ̃_i^k|θ, φ_1^k...φ_i-1^k)
Note that the offsets are the only stochastic terms in Y when conditioned on the latent orientations, which is why, inside the integral, the offsets form the entirety of the observed data. The particles' initial states are sampled from the prior φ̃^k_0 ∼ p(φ_1). After that, the particles follow the orientational dynamics of equation <ref> and are resampled with weights w_i^k=p(o_i^φ̃_i^k|θ, φ_1^k...φ_i-1^k)p(φ_i-1) after each time step. We do the resampling step before applying rotational diffusion to reduce particle degeneracy, where the orientation distribution is poorly represented as many identically resampled particles.
To summarize, we employ the following procedure to construct an estimation of the posterior density of the parameter set θ:
* Initialize N_p filter particles (discrete orientation values) sampled from the prior p(φ_0).
* Likelihood estimation: Calculate the offsets o_i^φ̃_i^k for all particles and update the partial product in equation <ref>.
* Assimilation: Resample the particles based on the latest likelihood estimation and the priors on the orientations.
* Dynamics: Apply the equation of motion <ref> to all filter particles, i.e. update the orientation by rotational diffusion and the torques.
* Repeat from 2 until all data has been used.
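The procedure above corresponds to a standard bootstrap particle filter. The following sketch estimates log p(Y|θ); offsets_fn, propagate_fn and prior_sample_fn are our own placeholder names for the model-specific pieces (offset likelihoods, torque plus rotational diffusion, and the orientation prior).

import numpy as np

def particle_filter_loglik(offsets_fn, propagate_fn, prior_sample_fn, N_p, N_steps, rng):
    # Unbiased estimate of log p(Y | theta) with the orientations marginalized.
    phi = prior_sample_fn(N_p)                       # step 1: initialize filter particles
    loglik = 0.0
    for i in range(N_steps):
        w = offsets_fn(i, phi)                       # step 2: likelihood of offset o_i per particle
        loglik += np.log(np.mean(w) + 1e-300)
        w = (w + 1e-300) / np.sum(w + 1e-300)
        idx = rng.choice(N_p, size=N_p, p=w)         # step 3: assimilation / resampling
        phi = propagate_fn(i, phi[idx])              # step 4: torque + rotational diffusion
    return loglik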
§.§ Posterior Density calculation
The procedure described above does not normalize the posterior density, as the marginal probability is intractable to calculate. We address this with two different techniques. The first is to use a Markov-Chain Monte-Carlo (MCMC) method, the PMMH algorithm <cit.>, to sample directly from the posterior. This method relies on the particle filter as an unbiased estimator of the posterior density. For visualizations of the posterior, we used the second approach, discretizing the parameter space θ and doing an exhaustive sweep. The discretization is then readily normalized.
§.§ Posterior Envelope
To tune the inference performance in large parameter spaces, we employ a procedure to adapt a multivariate Gaussian envelope to the posterior. This can be done with relatively few filter evaluations, and the resulting distribution is then used directly as a proposal for the Markov chain.
Our fitting procedure produces a sequence of Gaussians 𝒩(θ_j, Σ_j). Starting with an initial guess θ_0, Σ_0, we draw k=1...N_T parameter vectors from a test distribution θ_j^k ∼𝒩(θ_j, zΣ_j), with z setting the size of the search window. We then estimate the posterior densities p(Y|θ_j^k) at the test points using the particle filter and calculate the next iteration as
θ_j+1 = Σ_k=1^N_Tw_j^kθ_j^k
Σ_j+1 = ∑_k=1^N_T w_j^k (θ_j^k - θ_j+1)^T (θ_j^k - θ_j+1),
with weights
w̃_j^k = p(Y|θ_j^k)/𝒩(θ_j^k|θ_j, zΣ_j) w_j^k = w̃_j^k/∑_i=1^N_Tw̃_j^k.
This weighs test samples based on the targeted posterior density and compensates for the test distribution, which biases the sampling towards its mean θ_j. This procedure is iterated until convergence is observed in the covariance matrix. Convergence can be monitored from the entries of Σ_j (see Supplementary Material). We've found that N_T=1024 test samples, 1500 particles, and a search radius of z=1.5 work well in our scenarios, but we have not tuned the procedure in detail.
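A compact NumPy/SciPy sketch of this envelope adaptation is given below; we use a fixed number of iterations instead of the covariance-based convergence check, and logpost_fn denotes the particle-filter estimate of the unnormalized log posterior.

import numpy as np
from scipy.stats import multivariate_normal

def fit_posterior_envelope(logpost_fn, theta0, Sigma0, N_T=1024, z=1.5, n_iter=20, seed=0):
    # Iteratively adapt a Gaussian N(theta_j, Sigma_j) to the posterior using
    # importance weights of samples drawn from the widened test distribution.
    rng = np.random.default_rng(seed)
    theta, Sigma = np.asarray(theta0, float), np.asarray(Sigma0, float)
    for _ in range(n_iter):
        samples = rng.multivariate_normal(theta, z * Sigma, size=N_T)
        logp = np.array([logpost_fn(s) for s in samples])
        logq = multivariate_normal.logpdf(samples, mean=theta, cov=z * Sigma)
        w = np.exp(logp - logq - np.max(logp - logq))   # shift for numerical stability
        w /= w.sum()
        theta = w @ samples
        diff = samples - theta
        Sigma = (w[:, None] * diff).T @ diff
    return theta, Sigma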
§.§ B-Spline representation
In our tests, we've found that using the spectral representation from eq. <ref> is causing numerical issues when using more than 3-4 modes. In particular, the posterior mass collapses to a very thin distribution that is not aligned with any specific α_i dimension. The strong correlation makes it numerically difficult to find and represent the envelope of the posterior accurately, given the noisy density estimates from the filter. We attribute this to the fact that the Fourier modes are highly non-local and are affected by all observations. By switching to a quadratic B-spline representation, we can alleviate the problem. The B-spline is of the form
B^(2)(x) = ∑_i=0^N_Bα_i·κ((N-3)x + i - 1/2)
with κ(u) = 1/2u^2 for 0≤ u ≤ 1,
1/2(-2u^2 + 6u - 3) for 1≤ u ≤ 2,
1/2(3 - u)^2 for 2≤ u ≤ 3,
0 else,
and represents functions in the basis κ(·), stitched together from quadratic pieces. The most important property of this spline for our use case is that κ has compact support, so that, unlike the spectral representation, an individual coefficient is not affected by all observations.
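For reference, the quadratic kernel and the resulting spline can be evaluated as follows (a sketch; how N in the definition above relates to the number of coefficients is left to the caller):

import numpy as np

def kappa(u):
    # Quadratic B-spline kernel with compact support on [0, 3].
    u = np.asarray(u, float)
    out = np.zeros_like(u)
    m = (u >= 0) & (u <= 1); out[m] = 0.5 * u[m] ** 2
    m = (u > 1) & (u <= 2);  out[m] = 0.5 * (-2 * u[m] ** 2 + 6 * u[m] - 3)
    m = (u > 2) & (u <= 3);  out[m] = 0.5 * (3 - u[m]) ** 2
    return out

def b_spline(x, alpha, N):
    # B^(2)(x) = sum_i alpha_i kappa((N - 3) x + i - 1/2), i = 0 ... N_B.
    i = np.arange(len(alpha))
    return float(np.sum(np.asarray(alpha) * kappa((N - 3) * x + i - 0.5)))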
§.§ Validation on type-A data
We tested the procedure on data generated from (Type-A) ABP simulations with known torque function f. An example is shown in fig. <ref>A and B. We used a 2-mode torque (with ground truth α_1 = α_2 = 10σ) and generated 20 trajectories. We then inferred the amplitudes to check if the implementation correctly recovers the ground truth. The posterior density shows a highly complex structure that depends on the particular observations used for the inference. Most of the probability mass concentrates into a Gaussian near the ground truth. In all our tests, the 99% highest density interval (HDI) reliably captures the ground truth. One can see a clear anticorrelation between the amplitudes of the two modes, indicating that the modes are, to some extent, interchangeable.
We also tested the inference of a B-spline-represented torque function on the same data (fig. <ref>C). This representation also reliably captures the torque function, at least for inbound trajectories. Strikingly, the torque function is not well-learned in the outbound orientations when the particle moves away from the wall. This matches the observations from the least-squares approach, where dynamical data is collected in a crescent-shaped region of the φ-d-space. Information about the torque function for a specific orientation is primarily gained when the particle is inside the WCA interaction region and pointing in the respective direction. As the center of the crescent shape is never realized in the dynamics, no information about the torque can be gained.
§.§ Inference on rod data
Finally, we learn the torque function f from 20 Type-R simulations of active rods, which serve as a proxy for experimentally collected data. As these simulations now use true steric interactions with the wall instead of a predefined torque function, no ground truth for f is known. Results for rods with aspect ratio p=1.5 are shown in fig. <ref>. Notably, the posterior density coincides with our fitting results for trajectories that are inbound to the wall, giving credence to the previous interpretation that this is the regime where the torque function is most important. The Bayesian inference furthermore reveals that most knowledge about f is gathered in the inbound direction. The HDI remains broad for parallel and outbound trajectories.
Model selection imposes strong constraints on the torque function: Using two modes limits the bandwidth of the function, correlating the inbound and the outbound direction (fig. <ref>A). In this case, the HDI remains finite and relatively small (compared to the prior) for outbound movement, as knowledge about the regime is learned from information that is gathered from inbound movement.
Using a higher resolution representation of f, such as the B-splines representation shown in fig. <ref>B lifts these correlations. Then, the posterior f densities for inbound and outbound movement are constructed from different observations. This reveals two important properties. The first is that f is not strongly defined for small angles of incidence. While the inference strongly prefers negative torques (which corresponds to steric repulsion), there's no dominant magnitude that is needed to explain the trajectories. This may be explained by rotational diffusion being more important when a normally oriented rod experiences little torque, in combination with a lack of data on such rods, as they are in an unstable configuration. Secondly, the uncorrelated B-spline representation shows that the torque function does not impact the trajectory prediction in the outbound regime. The posterior coincides with the prior in this regime.
§ CONCLUSIONS
In this study, we have validated an empirical torque model to describe the steric interactions of active rods with walls. To this extent, we've simulated spherocylindrical active rods scattering with a wall and compared the resulting torques to an Active Brownian Particle model, which experiences a torque described by an orientation-dependent function f. By employing a least-squares approach, we demonstrated how the ABP model accurately fits the simulated rod trajectories. We've looked at the functional structure of f and found that it is primarily influenced by the rod's aspect ratio. We've discovered no significant impact of the discretization of the rod or its activity on the model's accuracy.
We also introduced a Bayesian inference framework to learn the torque model from observational data. This method works robustly in our test cases, where we generated trajectories with predefined torque functions, which the inference could reasonably recover. The posterior density of the torque function directly highlights where information about the system is gained and where the model stays elusive. We've shown that care must be taken when selecting the representation of f, as insufficient complexity introduces unwarranted correlations that suppress details of the inference. We've applied the method to data obtained from rod simulations and found that the empirical torque model is effective for rods inbound to a wall, whereas the outbound direction is, in practice, not affected by the choice of torque model.
The Bayesian inference method presented here relies on the assumption that the system dynamics can be described by a single Euler-Maruyama step between observation times. While this is exact for our synthetic data, it will be an approximation when the method is applied to gather information from real experiments, and its accuracy remains to be explored. Finally, we want to emphasize that the Bayesian approach described here to infer wall torques is general and also applicable to inferring other properties of active particles.
This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – project ID 446142122.
Simulations were run on the GoeGrid cluster at the University of Göttingen, which is supported by DFG (project IDs 436382789; 493420525) and MWK Niedersachsen (grant no. 45-10-19-F-02).
eplbib
|
http://arxiv.org/abs/2409.02756v2 | 20240904143458 | Proper Wilson flow time for calculating the topological charge density and the pseudoscalar glueball mass in quenched lattice QCD | [
"Zhen Cheng",
"Guang-yi Xiong"
] | hep-lat | [
"hep-lat"
] |
Corresponding author.
E-mail addresses: [email protected] (Zhen Cheng), [email protected] (Guang-yi Xiong).
^aDepartment of Science Education, School of Education, Zhejiang
International Studies University, Hangzhou 310023, China
^bDepartment of Physics, School of Information Engineering, Jiangxi Science and Technology Normal University, Nanchang 330036, China
§ ABSTRACT
The proper flow time for the Wilson flow in calculating the topological charge, topological susceptibility, and topological charge density correlator (TCDC) using the gluonic definition is analyzed. The proper flow time for the topological charge and TCDC is determined by different methods. The proper flow time may vary depending on whether the calculation is for the topological charge, topological density, topological susceptibility, or TCDC. Specifically, the flow time identified using the matching procedure is optimal for TCDC and a good choice for the topological susceptibility. Additionally, the pseudoscalar glueball is extracted from the TCDC using the bosonic definition at the identified proper flow time for three ensembles, and the continuum mass value is obtained through continuum extrapolation.
Proper Wilson flow time for calculating the topological charge density and the pseudoscalar glueball mass in quenched lattice QCD
Zhen Cheng^a Guang-yi Xiong^b
September 9, 2024
=================================================================================================================================
§ INTRODUCTION
The QCD vacuum is believed to possess a non-trivial topological structure, characterized by the topological charge and topological charge density. These topological properties are crucial for understanding various phenomena, including the U(1)_A problem, confinement, θ dependence, and spontaneous chiral symmetry breaking <cit.>. Lattice QCD is a powerful tool for investigating these topological properties from first principles. Definitions of topological charge and topological charge density are generally divided into the fermionic and gluonic definitions <cit.>. On the lattice, results of the gluonic and fermionic definitions of the topological charge are consistent in the continuum limit a→0 <cit.>. The topological charge calculated using the fermionic definition is an integer <cit.>; however, the computational cost of this method is prohibitively high. Computing the topological charge on the lattice using the gluonic definition is less computationally demanding, but the numerical value of the topological charge is typically not an integer due to ultraviolet fluctuations in the gauge fields. To ensure that the numerical value of the topological charge nears an integer, a renormalization constant must be applied, or smoothing techniques must be used. Smoothing procedures, such as cooling, smearing (including APE, HYP, and stout smearing), or gradient flow, are commonly employed in the computation of the topological charge density <cit.>. However, determining the optimal level of smoothing in the calculation of the topological charge or topological charge density is a critical issue. By analyzing the relationship between the topological charge of the clover-Dirac operator and the nearest integer, Ref. <cit.> provides a lower bound for calculating the topological charge. The topological susceptibility and topological charge density correlation (TCDC) are also important research subjects in studying the vacuum's topological properties.
TCDC is negative at any non-zero distances due to the reflection positivity and the pseudoscalar nature of the relevant local operator in Euclidean field theory. The negativity of the TCDC has significant implications for the nature of the topological charge structure in the QCD vacuum <cit.>. For instance, the pseudoscalar glueball mass can be extracted from the TCDC in pure gauge theory. The correlator of gluonic observables exhibits large vacuum fluctuations, making the extraction of glueball masses significantly more challenging compared to hadronic masses. However, extracting the pseudoscalar glueball mass from the TCDC does not require calculating connected and disconnected quark diagrams <cit.>. In lattice QCD, due to severe singularities and lattice artifacts in the TCDC, smoothing of the gauge field is necessary. It is well known that undersmearing cannot completely remove lattice artifacts, while oversmearing may wipe out even the negative nature of the correlator <cit.>. Therefore, a lower bound on the smoothing (Wilson flow) is required.
The topological susceptibility χ can be obtained by the four-volume integral of the TCDC <cit.>. Topological susceptibility, which reflects the fluctuations of the topological charge, is of great importance in the study of the QCD vacuum. The universality of the topological susceptibility in the fermionic definition shows that it is free of short-distance singularities <cit.>. The topological susceptibility χ is linked to the U(1) anomaly and the mass of the flavor-singlet pseudoscalar η^' meson in pure Yang-Mills theory, as expressed in the well-known Witten-Veneziano relation <cit.>. Additionally, a lower bound for the proper flow time of the Wilson flow can be determined by using the topological susceptibility <cit.>.
The overlap operator, as a solution to the Ginsparg-Wilson equation <cit.>, is commonly used to calculate the topological charge of the fermionic definition. The topological charge computed using the overlap operator is an exact integer. Traditionally, the topological charge density has been calculated using point sources <cit.>, which is practically impossible on large lattices. To address this, Ref. <cit.> proposed the symmetric multi-probing source (SMP) method to calculate the topological charge density of the fermionic definition. Although the SMP method reduces the computational resources required for calculating the topological charge, applying it to the topological susceptibility or the TCDC remains challenging when the number of configurations is large. In current practical calculations, the topological charge density of the gluonic definition is therefore generally used to compute the topological susceptibility or the TCDC. However, the degree of smoothing in the calculation of the bosonic definition of the topological charge density remains an unresolved issue. Three methods for fixing the flow time are considered in this paper. The first identifies the flow time at which the topological charge of the gluonic definition best matches the fermionic one. The second calculates the matching parameter and selects the flow time at which this parameter is closest to 1. The third identifies the minimum flow time at which the topological susceptibility reaches a plateau.
The matching parameters for different settings of the SMP method, as well as those obtained by comparing the SMP method with the Wilson flow results, will be calculated. By analyzing these matching parameters, the proper flow time for calculating the TCDC will be determined. Further exploration of the SMP method in the calculation of the topological charge of the fermionic definition will be presented. The relationship between the TCDC and topological susceptibility with the matching parameters will be discussed. Additionally, an attempt will be made to extract the pseudoscalar glueball mass from the TCDC at the proper flow time.
§ SIMULATION SETUP
The Lüscher-Weisz gauge action is used to generate the pure gauge lattice configurations. This gauge action is tadpole-improved at tree-level 𝒪(a^2) and combines the plaquette and rectangle gauge actions, implemented using the pseudo-heat-bath algorithm <cit.>. The parameters of the ensembles used to generate configurations with periodic boundary conditions are detailed in Tab. <ref>, and the lattice spacings a are determined through the Wilson flow.
The same overlap operator as in Ref. <cit.> is used to calculate the topological charge density of the fermionic definition, and the parameter κ is the input variable. In this work, we specifically choose κ=0.18 and 0.19. While the topological charge computed using this overlap operator with the point sources yields an integer value, the computational cost is significantly high. To reduce the computational cost, the symmetric multi-probing source (SMP) method is introduced to calculate the topological charge density with the Dirac operator <cit.>. The SMP method is utilized to calculate the topological charge density of the fermionic definition, as follows:
q_smp(x) =∑_α,aψ(x,α,a)(D̃_ov(x))ϕ_P(S(x,P),α,a)
=∑_α,aψ(x,α,a)(D̃_ov(x))ψ(x,α,a),
and the corresponding topological charge is
Q_smp=∑_xq_smp(x),
where ϕ_P(S(X,P),α,a) represents the SMP source vector with (d,mode), and other parameters in the SMP source vector are explained in Ref. <cit.>. The total number of SMP source vectors ϕ_P(S(X,P),α,a) for the (d,mode) is
N_SMPV=
d^4, mode = 0,
2d^4, mode = 1,
d^4/2, mode = 2,
and the number of source vectors required to cover all grid points, including the 12 spin-color components, is 12N_SMPV. In the following, we use (d,mode) to label the SMP source-vector setup. Under a proper scheme P of the SMP method, we can
obtain the expected topological charge density.
The field tensor used in calculating the topological charge density of the gluonic definition is a 3-loop 𝒪(a^4)-improved and defined as <cit.>,
F_μν^Imp=27/18C_μν^(1,1)-27/180C_μν^(2,2)+1/90C_μν^(3,3),
and C_μν^( m,m ) is the clover term constructed by m × m loops.
The Wilson flow is used to smooth the gauge field in the gluonic definition, and the gauge fields do not need to be renormalized. The topological charge density of the gluonic definition by using the Wilson flow is
q_wf( x )=1/32π^2ε_μνρσTr[ F_μν^Imp( x )F_ρσ^Imp( x ) ],
and the corresponding topological charge Q_wf for the gluonic definition is given by
Q_wf=∑_xq_wf( x ).
To determine the proper flow time τ_qpr for calculating the topological charge using the gluonic definition, we introduce a new comparison method: we identify the minimum of the absolute difference between the topological charge obtained from the fermionic definition and that calculated from the gluonic definition at various Wilson flow times. This amounts to finding, over the Wilson flow time, the minimum of the following expression:
min|Q-Q_wf|,
where Q is the topological charge obtained by rounding the topological charge Q_smp calculated using the SMP method to the nearest integer, and Q_wf represents the topological charge of the gluonic definition by using the Wilson flow.
To investigate how to use the SMP method to determine the proper flow time τ_pr when computing the topological charge density of the gluonic definition, a matching procedure is introduced. The matching quantity Ξ_AB will be calculated as follows <cit.>,
Ξ_AB=χ_AB^2/χ_AAχ_BB,
with
χ_AB=1/V∑_x(q_A(x)-q̅_A)(q_B(x)-q̅_B),
where q̅ is the mean value of topological charge density q(x), and V is the volume. When the numerical value of Ξ_AB is nearest to 1, the flow time is the desired proper Wilson flow time τ_pr.
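In practice, Ξ_AB is a one-line computation once the two topological charge density fields are available on the same lattice; a NumPy sketch (our own helper names) is:

import numpy as np

def matching_parameter(q_A, q_B):
    # Xi_AB = chi_AB^2 / (chi_AA chi_BB), with chi_AB the connected correlator defined above.
    a = q_A - q_A.mean()
    b = q_B - q_B.mean()
    chi_AB = np.mean(a * b)     # (1/V) sum over lattice sites
    return chi_AB ** 2 / (np.mean(a * a) * np.mean(b * b))

# Proper flow time: the tau whose Wilson-flowed density matches q_smp best, i.e.
# tau_pr = flow_times[np.argmin([abs(1 - matching_parameter(q_smp, q)) for q in q_wf_list])]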
§ THE MATCHING PARAMETER AND THE PROPER FLOW TIME OF THE WILSON FLOW
In Ref. <cit.>, the results indicate that the outcomes of the SMP method with parameters (8,0) can serve as a benchmark for calculating the matching parameter Ξ_AB. To compare results from different parameters (d≠8,mode≠0) with those obtained using the parameter set (8,0) in the SMP method, the matching parameters Ξ_AB are computed for various hopping parameters κ. Given the constraints of computational resources, only three configurations are analyzed for each lattice ensemble in the SMP method.
The results for 16^4, 24^3×48, and 32^4 with κ=0.18 are shown in Tab. <ref>. In Tab. <ref>, the results for 16^4 and 24^3×48 with κ=0.19 are presented. However, due to computational resource limitations, the results for κ = 0.19 in the lattice 32^4 were not computed. When the SMP parameters are (4,2), the matching parameters are already very close to 1, indicating that the topological charge density calculated with parameters (4,2) can also be used to determine the matching parameter Ξ_AB. Using the SMP method with parameters (4,2) requires approximately 1/32 of the computational resources compared to the parameters (8,0), or 1/512 of the computational resources compared to the point sources, leading to significant resource savings.
In Fig. <ref>, the matching parameter results obtained by comparing the topological charge density calculated using the SMP method with that calculated using the Wilson flow method are presented. The results indicate that as the number of SMP source vectors increases, the value of the matching parameter also increases. However, even when the number of SMP source vectors exceeds the source vector count at the parameters (4,2), the increase in the matching parameters is no longer significant. This suggests that the parameters (4,2) in the SMP method are sufficient for calculating the matching parameters when comparing the SMP method with Wilson flow. Additionally, the proper flow time remains essentially constant, regardless of variations in the parameters (d, mode) within the SMP method. The proper flow time in the calculation of topological density in the gluonic definition can be obtained from the matching procedure. All results indicate that choosing the parameters (4,2) in the SMP method is a good option when selecting a benchmark to determine the matching parameter Ξ_AB.
We adopt the matching results at κ = 0.18. In the calculation of the topological charge density, the proper flow time for lattices of 16^4 at β=4.5, 24^3×48 at β=4.8 and 32^4 at β=5.0 are approximately τ=0.38, τ=0.34 and τ=0.34. The corresponding proper flow radii of the Wilson flow √(8τ_pr)=0.224, 0.148, and 0.109fm, respectively. In the subsequent calculation of TCDC, this proper flow time τ_pr will be used in the Wilson flow.
The topological charges Q obtained using the SMP method with different parameters (d,mode) and the Wilson flow method at the proper flow time τ_pr are illustrated in Fig. <ref>. The results indicate that as the number of SMP source vectors increases, the topological charges Q calculated by the SMP method approach integer values, aligning with expectations. Notably, when the parameters of the SMP source vectors are set to (4,2), the topological charge Q derived from the fermionic definition using the SMP method closely matches that calculated from point sources. These findings indicate that using SMP sources with parameters (4,2) could be a viable approach for calculating the topological charge of the fermionic definition, potentially reducing computational resource requirements compared to traditional point source calculations. All results suggest that the SMP method with parameters (4,2) could be considered for application to the computation of TCDC and the topological susceptibility of the fermionic definition.
However, the topological charges of the gluonic definition, calculated at the proper flow time τ_pr obtained from the matching procedure, deviate considerably from integer values. As shown below, the proper flow time determined by the matching method is not suitable for calculating the topological charge, although it is appropriate for the TCDC. For calculating the topological charge of the gluonic definition using the Wilson flow, the proper flow time can instead be determined using eq. (<ref>).
In Tab. <ref>, the topological charge calculated using the SMP method with parameters (8,0) and the Wilson flow, along with the proper flow time τ_qpr for the gluonic definition, are presented. The value of Q is obtained by rounding the topological charge Q_smp, calculated using the SMP method with parameters (8,0), to the nearest integer. The proper flow time τ_qpr for calculating Q_wf using the eq. (<ref>) is determined by identifying the minimum value of the absolute difference between Q and Q_wf as outlined in eq. (<ref>).
The results demonstrate that the topological charge derived from the SMP method with parameters (8,0) is numerically very close to the precise value obtained from point sources, indicating that Q accurately represents the topological charge of the configuration. Furthermore, it suggests that a larger Wilson flow time is generally necessary when calculating the topological charge for the gluonic definition. This also indicates that the proper flow time τ_pr determined by the matching method may not be the optimal choice to calculate the topological charge of the gluonic definition.
The topological charge calculated using the SMP method with parameters (4,2) and the Wilson flow, along with the proper flow time τ_qpr for the topological charge of the gluonic definition, is presented in Tab. <ref>. The methods for determining Q and τ_qpr are the same as those employed in Tab. <ref>. The values of Q and τ_qpr obtained using the SMP method with parameters (4,2) are fundamentally similar to those derived with parameters (8,0).
These results indicate that the SMP method with parameters (4,2) is indeed a viable approach for accurately determining the topological charge of the fermionic definition. Moreover, this method can effectively establish the proper flow time τ_qpr when calculating the topological charge of the gluonic definition using the Wilson flow. By analyzing the index of the overlap-Dirac operator concerning the clover topological charge during the Wilson flow, Ref. <cit.> shows that max {t_c}∼77 for the Wilson flow. This max {t_c}∼77 corresponds to a lower bound for the Wilson flow time of τ∼ 0.77, which is compatible with our results. Notably, the results show that when calculating the topological charge of the gluonic definition, the proper flow time τ_qpr is generally larger than τ_pr. The results in the next section will demonstrate that when calculating TCDC at the proper flow time τ_qpr, TCDC may exhibit oversmearing, meaning that the negative dip of TCDC may disappear, as shown in Fig. <ref>.
§ THE TOPOLOGICAL CHARGE DENSITY CORRELATOR AND THE PSEUDOSCALAR GLUEBALL
MASS
TCDC is defined as
C(r)=⟨ q(x)q(0)⟩ , r=|x|,
and the four-volume integral of the TCDC gives the topological susceptibility
χ=∫d^4x⟨ q(x)q(0)⟩ =⟨ Q^2⟩/V, V→∞.
Due to the presence of severe singularities and lattice artifacts in TCDC, a smoothing procedure is essential to refine the gauge fields. In this study, we employ the Wilson flow method for this purpose. Undersmearing fails to adequately eliminate the lattice artifacts, while oversmearing can erase even the negative character of the TCDC, as illustrated in Fig. <ref>. The results indicate that the flow times at which the negative dip disappears at a distance of r∼ 0.3 fm are τ≈ 0.4 for 16^4, τ≈ 0.6 for 24^3×48, and τ≈ 1.0 for 32^4, which are very close to the corresponding τ_qpr. In other words, when using the Wilson flow method to calculate TCDC at τ_qpr, TCDC may be oversmearing.
To determine the suitable Wilson flow time, Ref. <cit.> suggests using the stability of the topological susceptibility to establish a lower bound for the Wilson flow time at finite temperature. The topological susceptibility χ for various lattices as a function of the Wilson flow time τ is presented in Fig. <ref>. As the flow time τ increases, UV fluctuations are gradually smoothed out, and the topological susceptibility eventually stabilizes, reaching a plateau around τ≈0.2. This result suggests that the proper flow time τ_ps, based on the stability of the topological susceptibility χ, should be τ_ps=0.2, which is smaller than the proper flow times τ_pr≈0.38 and 0.34 obtained through the matching procedures discussed in the previous section. All results demonstrate that the relationship between the proper flow times in the calculations of the topological charge, the TCDC, and the topological susceptibility is τ_qpr > τ_pr > τ_ps. This hierarchy may arise from the cumulative summation process or lattice artifacts, warranting further investigation.
The results indicate that while the topological susceptibility has stabilized at a Wilson flow time of τ=0.2, significant fluctuations in the TCDC persist, as illustrated in Fig. <ref>. This suggests that the proper flow time τ_ps determined by susceptibility is not sufficient to calculate the TCDC optimally. In other words, when determining the TCDC of the gluonic definition, the required Wilson flow time is longer than the τ_ps determined by the topological susceptibility. However, the TCDC calculated by Wilson flow at the proper flow time τ_pr retains the negative core part while achieving effective smoothing. This highlights the importance of selecting the proper Wilson flow time τ_pr. Therefore, τ_pr is the optimal choice for the calculation of TCDC, and a good choice for the topological susceptibility in the gluonic definition. In this work, the calculation of TCDC will be performed using the Wilson flow method at the proper flow time τ_pr.
TCDC can be used to extract the lowest pseudoscalar glueball mass in the negative region by the following form
⟨ q(x)q(0)⟩ =m/4π^2rK_1(mr),
and K_1(z) is a modified Bessel function, which has the asymptotic form <cit.>
K_1(z) ∼ e^-z√(π/2z)[1+3/8z] for large z.
We aim to extract the glueball mass from the TCDC of three ensembles using the Wilson flow at the proper flow time τ_pr. To do this, we apply Eqs. (<ref>) and (<ref>) to determine the pseudoscalar glueball mass in the negative region. In this fitting procedure, both the amplitude and mass are treated as free parameters, and the χ^2/dof is calculated to evaluate the quality of the fit. It has been shown that the extracted mass remains independent of the endpoint once the error bars of the tail of the TCDC approach zero <cit.>. The fitting is considered optimal when the value of χ^2/dof is closest to 1. Consequently, we fix the endpoint and vary the starting point to extract the mass. The TCDC and the best-fitting curve are illustrated in Fig. <ref> for the ensemble 24^3×48 with τ=0.34 (or √(8τ)=0.15 fm) as an example.
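A possible implementation of this fit with SciPy is sketched below; the explicit amplitude A (which absorbs the sign and normalization of the negative tail) and the lattice-unit convention for r are our assumptions.

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import kv

def tcdc_model(r, A, m):
    # A * m / (4 pi^2 r) * K_1(m r); A and m are the free parameters of the fit.
    return A * m / (4 * np.pi ** 2 * r) * kv(1, m * r)

def fit_glueball_mass(r, C, C_err, r_min, r_max, p0=(-1.0, 2.0)):
    # Fit the negative tail of the TCDC on [r_min, r_max] and return (A, m),
    # their errors, and the chi^2/dof used to select the optimal fit window.
    sel = (r >= r_min) & (r <= r_max)
    popt, pcov = curve_fit(tcdc_model, r[sel], C[sel], sigma=C_err[sel],
                           absolute_sigma=True, p0=p0)
    chi2 = np.sum(((C[sel] - tcdc_model(r[sel], *popt)) / C_err[sel]) ** 2)
    return popt, np.sqrt(np.diag(pcov)), chi2 / (sel.sum() - 2)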
The optimal fit results for the three ensembles reveal minimal differences, as shown in Tab. <ref>. To obtain the particle mass via continuum extrapolation, we perform a constant fit. The plot of mass M versus a^2, along with the fitting results, is illustrated in Fig. <ref>. The red solid line represents the mass of the pseudoscalar glueball, while the magenta lines indicate the associated errors. The mass of the pseudoscalar glueball, obtained through continuum extrapolation, is m=2561(39)MeV, consistent with the findings in Ref. <cit.>.
§ CONCLUSIONS
The topological charge density of lattice QCD using both the fermionic and gluonic definitions is analyzed in this paper. The topological charge density for the fermionic definition is calculated using the SMP method, while the gluonic definition employs the Wilson flow. The SMP method offers advantages over point sources in computing the topological charge of the fermionic definition; in particular, the parameter choice (4,2) computes the topological charge effectively while significantly reducing the computational cost. The SMP method with parameters (4,2) also offers a potential approach for calculating the TCDC and the topological susceptibility of the fermionic definition, and it can be used to determine the proper flow time τ_pr.
We investigate the proper flow time for calculating the topological charge, topological susceptibility, and the TCDC of the gluonic definition using the Wilson flow. It is important to note that the proper flow time may differ when calculating the topological charge, topological charge density, topological susceptibility, and TCDC using the Wilson flow method. Specifically, the flow time for calculating the topological charge is the longest, followed by the time for TCDC, with the shortest time allocated for the topological susceptibility.
By identifying the topological charge calculated via the Wilson flow that is closest to that determined by the SMP method, we can ascertain the proper flow time τ_qpr for the calculation of the topological charge of the gluonic definition. The proper flow time τ_ps for calculating the topological susceptibility using the Wilson flow is identified as the point at which the susceptibility no longer decreases with increasing flow time. However, neither τ_ps nor τ_qpr is suitable for calculating the TCDC. The proper flow time τ_pr can instead be determined using the matching parameter Ξ_AB; τ_pr is the optimal choice for calculating the TCDC and a good choice for the topological susceptibility.
The TCDC of the gluonic definition has also been analyzed using the Wilson flow in this work. Given its severe singularities and lattice artifacts, the Wilson flow serves as an effective smoothing method. We employ the TCDC calculated at the proper flow time to extract the pseudoscalar glueball mass through curve fitting. The pseudoscalar glueball mass obtained from the continuum extrapolation of three ensembles is consistent with results from other studies.
Given limitations in computational resources, the results of this study are preliminary, particularly regarding the determination of the proper flow time and the continuum extrapolation of particle masses.
In the future, we should use more lattice ensembles, larger lattice volumes, or more configurations to further investigate this issue. Additionally, we may consider improving the method for determining the proper flow time to enhance the accuracy and effectiveness of the calculations.
We thank Jian-bo Zhang and Yi-bo Yang for useful discussions and suggestions. Most Numerical simulations have been performed on the Tianhe-2 supercomputer at the National Supercomputer Center in Guangzhou (NSCC-GZ), China. This research was supported by the National Natural Science Foundation of China (NSFC) under the project No. 11335001 and Zhejiang Provincial Natural Science Foundation of China under Grant No. LQ23A050001.
|
http://arxiv.org/abs/2409.03295v1 | 20240905070323 | N-gram Prediction and Word Difference Representations for Language Modeling | [
"DongNyeong Heo",
"Daniela Noemi Rim",
"Heeyoul Choi"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
§ ABSTRACT
Causal language modeling (CLM) is the foundational framework underpinning the remarkable success of recent large language models (LLMs). Despite this success, training by next-word prediction carries the risk of making the model focus overly on local dependencies within a sentence. Prior studies have proposed predicting the future N words simultaneously, but they were primarily applied to tasks such as masked language modeling (MLM) and neural machine translation (NMT). In this study, we introduce a simple N-gram prediction framework for the CLM task. Moreover, we introduce the word difference representation (WDR) as a surrogate, contextualized target representation during model training on the basis of the N-gram prediction framework. To further enhance the quality of next-word prediction, we propose an ensemble method that incorporates the prediction results for the future N words. Empirical evaluations across multiple benchmark datasets encompassing CLM and NMT tasks demonstrate the significant advantages of our proposed methods over the conventional CLM.
§ INTRODUCTION
With the remarkable advancements in deep learning techniques, neural language modeling has become a central component of modern natural language processing (NLP) tasks, such as natural language understanding (NLU), neural machine translation (NMT), and question answering. Among the approaches to language modeling, causal language modeling (CLM), which predicts the next word given the previous words, is a widely employed framework. For example, prominent large language models (LLMs) like GPT-2 <cit.> and GPT-3 <cit.> rely on CLM as their primary training framework. Despite their successful applications, the prevalent next-word prediction scheme can inadvertently lead models to overfit to local dependencies rather than capturing long-term dependencies between words. This tendency arises from phrases or word pairs with strong mutual dependencies, such as "Barack Obama" and "Harry Potter" <cit.>.
A way of mitigating this problem is to predict not only the next word but also the words at subsequent time-steps, i.e., N-gram prediction. Researchers <cit.> have adopted this N-gram prediction methodology for masked language modeling (MLM) during the pre-training phase of LLMs <cit.>. Similar approaches have been applied to the NMT task <cit.>. However, these methods often require significant modifications to the model architecture, a loss function different from the conventional cross-entropy loss, or an expansion of the vocabulary to include N-grams.
This paper introduces a novel N-gram prediction framework designed specifically for CLM and proposes innovative methods aimed at fortifying this framework. The contributions of this work can be summarized as follows.
(1) A simple N-gram prediction for CLM: we propose a simple N-gram prediction integrated into existing CLM models. Apart from an additional multi-layer perceptron (MLP) layer, our method requires no other modifications to the model architecture, loss function, or vocabulary.
(2) Word difference representation: we propose to use the difference between the embedding vectors of contiguous words, termed the word difference representation (WDR), as a surrogate representation for individual words. Departing from the conventional approach of employing a fixed word embedding as the target representation, we provide diverse, context-dependent WDR target representations. We found that this method diversifies the backpropagated gradients during training and can thereby enhance generalizability. The algorithmic reversibility of WDR preserves the feasibility of the simple N-gram prediction method above.
(3) An ensemble method suitable for the CLM task: we propose an ensemble method designed to refine the next-word prediction by leveraging the multiple predictions produced by the N-gram prediction.
Our preliminary and primary experimental results, conducted on several CLM benchmark datasets, highlight the successive improvements in perplexity achieved by our proposed simple N-gram framework, the WDR diverse target representations, and the ensemble method when compared to several baseline models. Our qualitative analysis focusing on gradients elucidates the advantage of the WDR method from the perspective of optimization generalizability. In addition to the main CLM task, we demonstrate the applicability and advantages of our proposed approaches on the NMT task, which is a conditional form of the CLM task.
§ BACKGROUND: CONVENTIONAL CLM
Since the work of <cit.>, neural network-based language modeling has been developed and become mainstream in language modeling. As background knowledge, we describe the conventional training framework of CLM (the next word prediction) in this section.
A sentence consists of words, X={x_1, x_2, ⋯, x_T}, x ∈𝒱, where T is the length of the sentence and 𝒱 is the vocabulary set. Conventional CLM computes the likelihood of a word conditioned on its preceding words in the sentence, p(x_t|x_<t). For processing, words are mapped to embedding vectors <cit.>, and the encoded hidden state at time-step t is formulated as follows:
𝐡_t=Enc_θ({𝐱^e_1,𝐱^e_2, ⋯, 𝐱^e_t-1}) ∈ℝ^d,
where 𝐱^e_t ∈ℝ^d means the embedded vector of x_t. Enc_θ is an encoder model with its parameter set θ. d is the dimension of the encoded hidden state and the embedding vector spaces. Recently, most language models use Transformer <cit.> as their encoder architecture. After encoding, the encoded hidden state is linearly transformed to a logit value of each word in a vocabulary set 𝒱. Finally, the likelihood of the predicted word is formulated as follows:
p(x̂_t|x_<t;θ) = softmax(𝐱̂^l_t),
𝐱̂^l_t = 𝐖^l𝐡_t=𝐖^l𝐱̂^e,l_t,
where 𝐖^l ∈ℝ^|𝒱| × d is the weight matrix of the logit layer.
To aid understanding of our idea, we note that each parameter vector (row) of the logit layer's weight matrix is another word embedding mapped to a target word, that is, 𝐖^l=[𝐱^e,l_1,𝐱^e,l_2,⋯,𝐱^e,l_|𝒱|]^⊤. From this point of view, the encoded hidden state, 𝐡_t, is the predicted word embedding in the logit layer's space, 𝐱̂^e,l_t. The inner product between 𝐖^l and 𝐱̂^e,l_t then outputs a predicted score for each word, indicating how similar the predicted word embedding is to the logit layer's word embedding.
Finally, the model learns to minimize the negative log-likelihood (NLL) loss as follows:
ℒ(X,θ)=-∑_t=1^Tlog p(x̂_t=x_t|x_<t;θ).
This loss becomes the minimum when the model exactly predicts the logit layer's embedding of the target word, that is 𝐱̂^e,l_t=𝐱^e,l_t. This process is illustrated in Fig.<ref>(a).
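The weight-tying view above can be made concrete with a short sketch. The following is a minimal PyTorch illustration (not the authors' code), assuming a causally masked encoder `enc`, an input embedding table `embed`, and a matrix `logit_weight` playing the role of 𝐖^l:

```python
# Minimal sketch of the conventional CLM step, assuming a causally masked
# encoder `enc`, an embedding table `embed`, and `logit_weight` = W^l (|V| x d).
import torch
import torch.nn.functional as F

def clm_nll(x, embed, enc, logit_weight):
    """x: (B, T) token ids; returns the NLL loss of next-word prediction."""
    h = enc(embed(x))                        # (B, T, d) hidden states
    pred = h[:, :-1]                         # h at position j predicts token x[:, j+1]
    logits = pred @ logit_weight.T           # inner products with every row of W^l
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           x[:, 1:].reshape(-1))
```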
§ PROPOSED METHODS
In this section, we propose three ideas: (1) a simple N-gram CLM, (2) word difference representation N-gram CLM, and (3) an ensemble method over N-gram predictions.
§.§ Simple N-gram CLM
First, we propose a simple N-gram prediction built on the conventional CLM framework. The core idea is to add an MLP layer that predicts a future word from the same hidden state used by the conventional CLM. This process is formulated as follows:
𝐱̂^e,l_t+n=MLP^n(𝐡_t).
For instance, assuming N is 3, two MLP layers, MLP^1 and MLP^2, are employed and predict 𝐱̂^e,l_t+1 and 𝐱̂^e,l_t+2, respectively, as shown in Fig.<ref>(b).
The limited capability of the MLP layers to learn an effective function from a large and complicated dataset may regularize the main encoder, Enc_θ, to encode a hidden state that is simultaneously informative for all N-gram predictions. This regularization might help prevent the model from focusing overly on local dependencies.
We compute the likelihoods of the future target words, p(x̂_t+1|x_<t;θ) and p(x̂_t+2|x_<t;θ) in the above example, by applying each logit layer followed by the softmax function. Instead of using individual logit layers for each future-word prediction, we share the parameters of all logit layers, including the conventional CLM model's logit layer. Therefore, this approach adds only a small number of parameters for each additional MLP layer. Furthermore, it re-uses the original (unigram) vocabulary for future-word prediction rather than an additional large vocabulary of N-grams.
The loss for n-th future word prediction is as follows:
ℒ_n(X,θ)=-∑^T-n_t=1log p(x̂_t+n= x_t+n|x_<t;θ).
As with Eq.(<ref>), this loss is minimized when the model exactly predicts the future target word's embedding, i.e., 𝐱̂^e,l_t+n=𝐱^e,l_t+n. The total loss for training this simple N-gram CLM model is a mixture of Eq.(<ref>) and Eq.(<ref>) as follows:
ℒ^tot_N(X,θ)=1/2ℒ(X,θ)+1/2(N-1)∑^N-1_i=1ℒ_i(X,θ).
Notably, we do not weight the original loss, Eq.(<ref>), equally with the other losses, since the next word typically depends more strongly on the preceding words than the other future words do. In other words, averaging all the loss terms equally might introduce excessive regularization.
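To illustrate how the extra heads share the logit layer and how the two halves of the total loss are weighted, here is a hedged PyTorch sketch (names such as `mlps` are illustrative, not from the paper):

```python
# Sketch of the simple N-gram CLM loss: each MLP^m head reuses the same hidden
# state and the same (shared) logit weight; `mlps[m-1]` predicts the word m steps
# beyond the ordinary next-word target. Names are illustrative.
import torch
import torch.nn.functional as F

def simple_ngram_loss(x, embed, enc, logit_weight, mlps):
    h = enc(embed(x))                                    # (B, T, d)

    def nll(pred, target):                               # shared logit layer
        logits = pred @ logit_weight.T
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               target.reshape(-1))

    loss_next = nll(h[:, :-1], x[:, 1:])                 # conventional CLM loss
    future_losses = []
    for m, mlp in enumerate(mlps, start=1):              # MLP^m, m = 1..N-1
        k = m + 1                                        # target offset from h[:, j]
        future_losses.append(nll(mlp(h[:, :-k]), x[:, k:]))
    # Total loss: 1/2 * original + 1/(2(N-1)) * sum of future-word losses
    return 0.5 * loss_next + 0.5 * torch.stack(future_losses).mean()
```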
§.§ Word Difference Representation (WDR) N-gram CLM
To provide a more informative target than the simple N-gram CLM, we introduce the idea of WDR, a contextualized surrogate representation of words within a sentence. It is based on a simple form of word embedding composition: the difference vector, 𝐱^e_t+1-𝐱^e_t. Since <cit.> demonstrated that arithmetic compositions of learned word embeddings can convey semantic meanings, many studies have explored word embedding compositionality <cit.>. These studies utilized composed word embeddings as inputs to models, instead of the original word embeddings, and showcased their advantages across various NLP tasks.
Unlike the prior research, we provide WDR to the model as the target to predict, rather than utilizing it as input. The difference vector of contiguous words offers a different representation of a word depending on its adjacent words. Therefore, by leveraging WDR as the target, we expect the model to learn from more diverse targets than in previous works. Generating WDR is a simple repetition of vector subtractions, which is computationally cheap and easy to parallelize, so it does not impose a high computational cost. Moreover, generating WDR is reversible, so the original embedding vectors can be reconstructed from the WDR. This property allows the WDR-based N-gram CLM to be built within the same framework as the simple N-gram CLM without significant modification. Detailed explanations elucidating these advantages are provided in the subsequent sections.
§.§.§ Definition of n-level WDR
As we briefly mentioned above, we use the difference of contiguous embedding vectors as the base of WDR. Given an embedding vector sequence {𝐱^e_1,𝐱^e_2,⋯,𝐱^e_T}, the 1-level WDR at the time-step t is defined as follows:
Δ_1𝐱^e_t =
𝐱^e_t+1-𝐱^e_t if 1≤ t < T,
𝐱^e_T if t = T.
In an inductive manner, the n-level WDR at the time-step t when n>1 is defined as follows:
Δ_n𝐱^e_t =
Δ_n-1𝐱^e_t+1-Δ_n-1𝐱^e_t if 1≤ t < T,
Δ_n-1𝐱^e_T=𝐱^e_T if t = T.
As an alternative to the above n-level WDR definition, we explored subtracting the contiguous vectors in the opposite direction, that is, Δ_n-1𝐱^e_t-Δ_n-1𝐱^e_t+1. In our internal empirical studies, this alternative design achieved similar performance. Therefore, we follow the design of Eq. <ref> throughout this paper.
Based on the definitions of Eqs. <ref> and <ref>, the n-level WDR can be represented by a composition of the original word embeddings. For example, the 2- and 3-level WDRs at time-step t can be represented as follows: Δ_2𝐱^e_t=𝐱^e_t+2-2𝐱^e_t+1+𝐱^e_t and Δ_3𝐱^e_t=𝐱^e_t+3-3𝐱^e_t+2+3𝐱^e_t+1-𝐱^e_t, respectively. In this manner, we can derive the general formulation of the n-level WDR as follows:
Δ_n𝐱^e_t=∑^n_i=0ni(-1)^i𝐱^e_t+(n-i),
where ni=n!/(n-i)!i! is the binomial coefficient. This equation holds for every positive integer of n
and for every time-step t when t ≤ T-n.
See Appendix <ref> for a proof of this equation.
As we mentioned earlier, n-level WDR is reversible to the original word embedding. For the 1-level WDR, 𝐱^e_t+1 can be reconstructed by adding 𝐱^e_t to Δ_1𝐱^e_t. Likewise, 𝐱^e_t+n can be reconstructed by adding -∑^n_i=1ni(-1)^i𝐱^e_t+(n-i) to Δ_n𝐱^e_t (note that the first term of the right-hand side of Eq.(<ref>) is 𝐱^e_t+n). For simplicity, we use a new notation for the conjugate term that reconstructs the original embedding by addition to the n-level WDR as follows:
Δ^r_n𝐱^e_t=-∑^n_i=1ni(-1)^i𝐱^e_t+(n-i),
which reduces to Δ^r_1𝐱^e_t=𝐱^e_t when n=1.
This leads to Δ_n𝐱^e_t+Δ^r_n𝐱^e_t=𝐱^e_t+n. The conjugate term for reconstruction, Δ^r_n𝐱^e_t, can be obtained by Eq.(<ref>) or iterative operations of Eq.(<ref>).
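The definitions above are easy to check numerically. The sketch below (illustrative PyTorch, not the authors' code) computes the n-level WDR by iterating the contiguous-difference operation and verifies the reconstruction Δ_n𝐱^e_t+Δ^r_n𝐱^e_t=𝐱^e_t+n:

```python
# A sketch of n-level WDR and its conjugate (reconstruction) term, assuming
# `emb` is a (T, d) tensor of word embeddings for one sentence; purely illustrative.
import math
import torch

def wdr(emb, n):
    """Iterate the contiguous-difference operation n times, keeping the last row fixed."""
    delta = emb
    for _ in range(n):
        delta = torch.cat([delta[1:] - delta[:-1], delta[-1:]], dim=0)
    return delta

def conjugate(emb, n, t):
    """Delta^r_n x_t = -sum_{i=1..n} C(n,i) (-1)^i x_{t+n-i}  (valid for t + n < T)."""
    return -sum(math.comb(n, i) * (-1) ** i * emb[t + n - i] for i in range(1, n + 1))

T, d, n, t = 10, 4, 3, 2
emb = torch.randn(T, d)
# Reconstruction: Delta_n x_t + Delta^r_n x_t == x_{t+n}
assert torch.allclose(wdr(emb, n)[t] + conjugate(emb, n, t), emb[t + n], atol=1e-5)
```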
§.§.§ Training of WDR N-gram CLM
We develop the WDR-based N-gram CLM from the framework of simple N-gram CLM.
To provide the WDR as the target of the model, as described above, we apply the definitions and derivations in Sec.<ref> to the logit layer's embeddings.
Following the idea of the simple N-gram CLM described in Sec.<ref>, we employ MLP layers for predictions of N-gram. However, in WDR N-gram CLM, the MLP^n layer outputs Δ_n𝐱̂^e,l_t instead of 𝐱̂^e,l_t+n. Then we produce its corresponding conjugate term, Δ^r_n𝐱^e,l_t, based on the logit layer's embedding matrix. Adding those two, Δ_n𝐱̂^e,l_t+Δ^r_n𝐱^e,l_t, yields 𝐱̂^e,l_t+n as in the simple N-gram CLM. Then, we take the same processes of the logit, likelihood, and loss computations as in the simple N-gram CLM.
An essential design choice in this framework is the detachment of the produced conjugate term, Δ^r_n𝐱^e,l_t, from the backpropagation process. Without this detachment, the model might adjust the logit layer's weight matrix in a distorted manner, because the input to the logit layer would be produced recursively from the layer itself.
In WDR N-gram CLM, the minimum value of NLL loss of x_t+n prediction, Eq.(<ref>), is achieved when 𝐱̂^e,l_t+n=𝐱^e,l_t+n, which is Δ_n𝐱̂^e,l_t+Δ^r_n𝐱^e,l_t=Δ_n𝐱^e,l_t+Δ^r_n𝐱^e,l_t based on the equation led by Eq.(<ref>). Because the conjugate term, Δ^r_n𝐱^e,l_t, is detached, the model would learn to predict Δ_n𝐱^e,l_t, which is true n-level WDR.
In other words, WDR N-gram CLM learns to predict composed word embeddings, offering diverse and contextualized target representations, even for the same target word. The entire process is illustrated for a WDR trigram CLM example in Fig.<ref>(c).
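A minimal sketch of this training step for a single head is given below (illustrative PyTorch; the helper names are ours). It shows the MLP predicting the n-level WDR, the conjugate term built from detached logit-layer embeddings, and the shared logit layer applied to their sum:

```python
# Sketch of the WDR head for one value of n. `h[:, t]` is assumed to be aligned so
# that its conventional next-word target is `x[:, t]`; this head then targets
# `x[:, t+n]`. The conjugate term uses the already-known words x_t..x_{t+n-1} and
# is assembled under no_grad so gradients flow only through the encoder and MLP.
import math
import torch
import torch.nn.functional as F

def wdr_head_loss(h, x, n, mlp, logit_weight):
    T = x.size(1)
    delta_hat = mlp(h[:, : T - n])                        # predicted Delta_n \hat{x}
    with torch.no_grad():                                 # detach the conjugate term
        tgt_emb = F.embedding(x, logit_weight)            # logit-layer embeddings x^{e,l}
        conj = sum(-math.comb(n, i) * (-1) ** i * tgt_emb[:, n - i : T - i]
                   for i in range(1, n + 1))              # Delta^r_n x^{e,l}_t
    pred_emb = delta_hat + conj                           # = \hat{x}^{e,l}_{t+n}
    logits = pred_emb @ logit_weight.T                    # shared (trainable) logit layer
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           x[:, n:].reshape(-1))
```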
§.§.§ How Diverse Are WDR-based Target Representations?
To gain a more profound understanding of WDR as a target representation, we explored how WDR diversifies target representations compared to the conventional CLM and the simple N-gram CLM. As mentioned in Sec.<ref> and Sec.<ref>, the conventional CLM and the simple N-gram CLM use the logit layer's embeddings as the target representations to predict. To see practical examples of these target representations, we collected 1,270 representations from the logit layer's embedding matrix of the pre-trained conventional CLM model (`TF' in the preliminary experiment, Sec.<ref>). The 1,270 representations correspond to all the tokens of 10 randomly selected sentences from the Penn TreeBank (PTB) <cit.> testset. We also computed the 1- and 2-level WDRs from the collected embeddings and added them to the collection, resulting in 3,810 representations in total. Finally, we reduced the total collection to two dimensions with the t-SNE algorithm <cit.>.
Fig.<ref> shows the collected representations in a 2-dimensional space. The first plot illustrates the original embeddings, 𝐱^e,l. Note that the representations of frequent words, such as `to', may appear more times in the collection than those of other words. We interpret this as the reason why t-SNE places frequent words (e.g., `in', `to', and `the') distant from other, less frequent words, so as to resemble the non-uniform distribution of the collection.
On the other hand, the 1-level WDR representations, Δ_1𝐱^e,l, look more diverse than the original embeddings, as shown in the second plot. For example, composing adjacent words such as `want', `unable', and `returned' into the frequent word `to' diversifies its embedding representation according to the previous word, as shown in the third (zoomed-in) plot. The 2-level WDR looks even more diverse than the 1-level WDR, as shown in the last plot. Based on this analysis, we expect WDR N-gram CLM to provide more diverse target representations than the other methods, such as the conventional CLM and the simple N-gram CLM.
§.§ Ensemble Method to Refine the Next Word Prediction Leveraging N-gram Predictions
We propose a new ensemble method to incorporate the N-gram predictions into the next-word prediction. The encoder model, such as a Transformer, outputs {𝐡_2,𝐡_3,⋯,𝐡_t} given the embedded input sentence {𝐱^e_1,𝐱^e_2,⋯,𝐱^e_t-1}. The encoded hidden state 𝐡_i is computed from the inputs up to time-step (i-1). At test time, in addition to the predicted embedding 𝐱̂^e,l_t from the conventional CLM, the MLP^n layer of the N-gram CLM can estimate the target word for time t given 𝐡_t-n. Therefore, we obtain N predicted embeddings for the current time-step. We ensemble these predicted embeddings just before the logit layer using the following formulation:
𝐱̂^e,l_t,ens=(1-λ)𝐱̂^e,l_t+λ[1/N-1∑^N-1_i=1𝐱̂^e,l_t,i],
where 𝐱̂^e,l_t,i=MLP^i(𝐡_t-i) is the output of the MLP^i layer given 𝐡_t-i.
Here λ is a scalar between 0 and 1. It controls the influence of the future-word predictions (derived from past time-steps) on the current word prediction. Similar to the rationale behind the dominance of the original NLL loss in the total loss formulation, Eq.(<ref>), we do not average the original predicted embedding equally with the others. In the case of the WDR-based N-gram CLM, we ensemble MLP^i(𝐡_t-i)+Δ^r_i𝐱^e,l_t-i=𝐱̂^e,l_t,i in the summation part of Eq.(<ref>).
After this ensemble computation, we feed the result to the logit layer and compute the next word's likelihood. At test time, this ensembled likelihood is used to compute perplexity (PPL) in CLM tasks or serves as the candidate scores for beam search in NMT tasks.
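The following sketch (illustrative, assuming the hidden states from the previous N-1 steps are cached during decoding) shows how the ensembled embedding above can be formed at test time before the logit layer; for the WDR variant, each head output would first be completed with its detached conjugate term:

```python
# Sketch of the test-time ensemble: the conventional prediction from h_t is mixed
# with the head outputs evaluated at the cached states h_{t-1}, ..., h_{t-N+1}.
# Names are illustrative.
import torch

def ensemble_next_word(x_hat_t, past_h, mlps, lam, logit_weight):
    """x_hat_t: (B, d) = h_t; past_h[i-1]: (B, d) = h_{t-i}; mlps[i-1] = MLP^i."""
    head_preds = [mlp(h) for mlp, h in zip(mlps, past_h)]     # \hat{x}^{e,l}_{t,i}
    avg = torch.stack(head_preds).mean(dim=0)
    x_ens = (1.0 - lam) * x_hat_t + lam * avg                 # ensembled embedding
    return (x_ens @ logit_weight.T).log_softmax(dim=-1)       # scores for PPL / beam search
```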
§ EXPERIMENTS AND RESULTS
To assess the performances of our proposed methods, we conducted CLM and NMT experiments on multiple benchmark datasets.
§.§ Causal Language Modeling (CLM)
For the CLM task, we carried out two experiments: a preliminary and a primary one. The preliminary experiment was dedicated to monitoring how the two hyperparameters, N and λ, affect performance. In contrast, we report only the results for the best hyperparameters in the primary experiment.
§.§.§ Data Description
We used four benchmark datasets: PTB (0.9M tokens, 10K vocabulary), WikiText-2 (W2, 2M tokens, 33K vocabulary), Text8 (T8, 15M tokens, 254K vocabulary), and WikiText-103 (W103, 103M tokens, 268K vocabulary) <cit.>. To ensure standardization and transparency in our data-related processes (e.g., download, tokenization, vocabulary, and train/valid/test splitting), we relied on open sources. Specifically, the W2 and T8 datasets were sourced from the GitHub repository[https://github.com/chakki-works/chazutsu], while the PTB and W103 datasets were sourced from the Tensorized Transformer <cit.>'s GitHub repository[https://github.com/szhangtju/The-compression-of-Transformer]. In the primary experiment, we used all four datasets, whereas the preliminary experiment was conducted solely on the PTB dataset.
§.§.§ Models and Training
For the baseline model of the preliminary experiment, we implemented a Transformer (TF) encoder-based CLM. The total number of parameters of the TF baseline is 12M, and our proposed simple and WDR methods add only 0.1M parameters per additional MLP layer (note that the logit layer's parameters are all shared). The details of the model architecture and training method for the preliminary experiment are described in the `Small Enc. TF CLM' column of Table <ref> (in Appendix <ref>).
For the primary experiment, we trained two more advanced TF-based baseline models: the Tensorized Transformer (TT) <cit.> and the Reformer (RF)[https://github.com/lucidrains/reformer-pytorch] <cit.>. We mostly followed their reported configurations, except for some minor changes such as the number of tokens in a mini-batch and the learning rates. The details of these changes for each dataset are described in Table <ref> (in Appendix <ref>). As a result, the total numbers of parameters of the (TT, RF) models are (6.7M, 15.3M) for PTB and W2, and (82.4M, 236.6M) for T8 and W103, respectively. Our proposed simple and WDR methods increase the number of parameters by 0.1M and 0.5M, respectively, per additional MLP layer, regardless of the dataset.
On top of the baseline models, we applied our proposed methods, yielding `TF+Sim', `TF+WDR', `TT+Sim', `TT+WDR', `RF+Sim', and `RF+WDR'. We varied N from 2 to 4 and λ from 0.0 to 0.6 in every experiment with our proposed methods. In the primary experiment, we report the result of the best hyperparameter setting for each model. These settings are given in the `CLM Task' column of Table <ref> (in Appendix <ref>).
§.§.§ Preliminary Experimental Results
Table <ref> presents the outcomes of the preliminary experiments. We trained the model of each configuration five times with different seeds and report the average PPL scores. Both `TF+Sim' and `TF+WDR' surpass the conventional CLM baseline. This observation aligns with findings from previous studies on other tasks <cit.>. The ensemble method consistently improves performance compared to the non-ensemble counterparts (λ=0.0), usually achieving the best scores at λ=0.4 for both the `TF+Sim' and `TF+WDR' models. We also observe that the `TF+WDR' model maintains strong performance even at λ=0.6, while the `TF+Sim' model does not. This implies that `TF+WDR' generally generates more accurate predictions for future words. Moreover, `TF+WDR' tends to outperform its `TF+Sim' counterpart in each setting. These findings collectively suggest that the WDR training approach offers benefits over the simple N-gram prediction methodology.
§.§.§ Gradient Diversity Analysis
As a further exploration of the advantages of WDR, we examined the connection between the diverse target representations and their benefit during training. Given the evidence in Sec.<ref> that WDR gives more diverse target representations than the other CLMs, it is plausible that the backpropagated gradients are also more diverse. To quantify this property, we measured the `gradient diversity (GD)' <cit.>, which is formulated as follows:
GD(𝒟,θ) =∑^|𝒟|_i=1||g_i||^2_2/||∑^|𝒟|_i=1g_i||^2_2,
= ∑^|𝒟|_i=1||g_i||^2_2/∑^|𝒟|_i=1||g_i||^2_2+∑_i≠ j⟨ g_i,g_j ⟩,
g_i =∇_θℒ^tot_N(X_i,θ),
where 𝒟={X_1,X_2, ⋯,X_|𝒟|} is a mini-batch, ||·||^2_2 is the squared L^2 norm operation, ⟨·,·⟩ is the inner product operation, and ∇_θ is gradient operator with respect to θ. This metric is large when the inner product terms in denominator are small, which means the gradients are different from each other.
We measured the GD of the `TF+Sim N=4' and `TF+WDR N=4' models in Table <ref> during training. The GDs over epochs are presented in Fig.<ref>. `TF+WDR N=4' usually has higher GD than `TF+Sim N=4'. Since the noisy gradients of stochastic gradient descent are known to enhance generalizability compared to full-batch gradient descent <cit.>, higher GD may offer similar advantages. Given this understanding, we believe WDR-based training could be beneficial for improving generalization.
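A sketch of how GD can be measured over a mini-batch is given below (illustrative PyTorch; `loss_fn` stands for the total loss ℒ^tot_N of one sentence and is an assumed helper):

```python
# Sketch of the gradient-diversity measurement: one gradient per sentence in the
# mini-batch, then the ratio of the summed squared norms to the squared norm of the sum.
import torch

def gradient_diversity(batch, model, loss_fn):
    grads = []
    for X in batch:                                     # one gradient g_i per sentence
        model.zero_grad()
        loss_fn(X, model).backward()
        g = torch.cat([p.grad.flatten()
                       for p in model.parameters() if p.grad is not None])
        grads.append(g.detach().clone())
    sum_sq = sum(g.dot(g) for g in grads)               # sum_i ||g_i||_2^2
    g_total = torch.stack(grads).sum(dim=0)             # sum_i g_i
    return (sum_sq / g_total.dot(g_total)).item()
```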
§.§.§ Primary Experimental Results
Table <ref> presents the full results of the primary experiments (6 models on 4 datasets). With the exception of the TT-based models on W2, our proposed N-gram CLMs consistently match or surpass the baseline CLMs, even without the ensemble method. Remarkably, the WDR N-gram CLMs generally improve performance over the simple N-gram CLMs. Applying our proposed ensemble method generally yields further improvements over the non-ensemble counterparts, except for the models trained on W103. Notably, the effect of the ensemble method is relatively large on the smaller datasets (PTB and W2) compared to the larger ones (T8 and W103). Based on these results, we argue that our proposed methods provide real advantages across various models and datasets for the CLM task.
§.§ Neural Machine Translation
§.§.§ Data Description
Since NMT includes language modeling as part of its decoder, we view NMT as an appropriate additional experimental task for demonstrating the effectiveness of our proposed approach beyond the main CLM tasks. We conducted NMT experiments on several datasets: `IWSLT14 English-German' (En-De, 160K training pairs) <cit.>, `WMT14 English-German' (En-De, 3.9M training pairs) <cit.>, and `WMT18 English-Turkish' (En-Tr, 207K training pairs) <cit.>. We used the same preprocessing, tokenization, and subword byte-pair encoding methods as <cit.>. We used the 10K, 10K, and 32K most frequent subwords to build the vocabularies for the respective datasets.
§.§.§ Models and Training
As a baseline, we used our implementation of the Transformer (TF) <cit.> in the encoder-decoder architecture. We used the small Transformer for the `IWSLT14 En-De' and `WMT18 En-Tr' datasets, and the base Transformer for the `WMT14 En-De' dataset. The total numbers of parameters of the small and base TF baselines are 32M and 77M, respectively. We applied our simple and WDR N-gram CLM methods to the decoder parts of the baselines, yielding `TF+Sim' and `TF+WDR'. Each additional MLP layer in our simple and WDR methods increases the number of parameters by around 0.5M. Information about the models and how the TF models are optimized can be found in the `Small Enc-Dec TF NMT' and `Base Enc-Dec TF NMT' columns of Table <ref>. The hyperparameters (N and λ) for `TF+Sim' and `TF+WDR' are described in the `NMT Task' column of Table <ref> (in Appendix <ref>).
As a more closely related baseline, bag-of-words (BOW) NMT was proposed to predict all the target words in addition to the original NMT objective <cit.>. However, that approach was not applied to the TF architecture, and the model was evaluated only on the NIST English-Chinese translation dataset. To ensure a fair comparison, we re-implemented BOW NMT on our TF architecture and compared it with our proposed methods. Following the prescribed approach, we added the loss for whole-word prediction to the original loss.
§.§.§ BLEU Results
Table <ref> presents the results of the models on each testset with SacreBLEU <cit.> as the evaluation metric. Our proposed `TF+Sim' and `TF+WDR' models usually exhibit enhanced performance compared to the `TF' and `BOW NMT' baselines. `TF+WDR' always outperforms its `TF+Sim' counterpart. Notably, integrating the ensemble method into both `TF+Sim' and `TF+WDR' further increases performance. Specifically, `TF+WDR' with the ensemble method improves performance by 0.7 to 1.5 BLEU compared to the `TF' baseline on both translation directions of `IWSLT14 En-De' and on the German-to-English direction of the `WMT14 En-De' testset.
To explain why the N-gram prediction approaches are more effective for German-to-English translation than for English-to-German translation in the `IWSLT14 En-De' and `WMT14 En-De' experiments, we hypothesize that the difference in word diversity between the two languages plays a role. We analyzed the (subword-level tokenized) `WMT14 En-De' training dataset and found that English has around 33.6K unique unigrams and 6.7M unique bigrams, while German has around 34.9K unique unigrams and 9.3M unique bigrams. This suggests that German-to-English translation may involve simpler local dependencies than English-to-German translation due to the lower number of unique target-side bigrams. Since simple local dependencies can invite over-fitting, we believe this is a potential reason why the N-gram prediction approaches, which help mitigate over-fitting to local dependencies, are more effective for German-to-English translation.
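The unigram/bigram counts above can be reproduced with a few lines over the tokenized training files; the sketch below is illustrative and the file paths are hypothetical:

```python
# Count unique unigrams and bigrams in a whitespace-tokenized (subword-level)
# training file; paths are hypothetical placeholders.
def count_unique_ngrams(path):
    unigrams, bigrams = set(), set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            toks = line.split()
            unigrams.update(toks)
            bigrams.update(zip(toks, toks[1:]))
    return len(unigrams), len(bigrams)

# Example (hypothetical file names):
# print(count_unique_ngrams("train.en"), count_unique_ngrams("train.de"))
```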
§ CONCLUSION
In this work, we have constructed an advanced N-gram prediction framework tailored specifically to causal language modeling.
In addition to the construction of this framework, our work includes the introduction of new strategies for providing diverse target representations and an ensemble method over the predicted N words. Extensive experiments on language modeling and neural machine translation have confirmed the practical benefits of the proposed method.
§ LIMITATIONS
Given the demonstrated performance improvements of the WDR-based N-gram CLM, we attempted to apply the WDR method to tasks beyond CLM, such as MLM. In addition to the standard MLM loss, which involves predicting the masked word <cit.>, we added new loss terms to predict the n-level WDR target representations at the masked position. For this experiment, we utilized the CrammedBERT model <cit.>, a streamlined variant of BERT that enables faster pre-training while maintaining competitive performance on the GLUE benchmark. We integrated the WDR approach into this model and conducted a comparative analysis against the original CrammedBERT configuration. Further experimental details are provided in Appendix <ref>.
Table <ref> (in Appendix <ref>) presents the results of our experiments comparing CrammedBERT and the WDR-augmented models on the GLUE test set. While the application of 2-level WDR resulted in a 1.0-point increase in the average GLUE score, the performance benefits of the WDR method are less consistent across individual sub-tasks than the benefits observed in the CLM tasks. We attribute this to a fundamental difference between the CLM and MLM tasks. Specifically, in MLM, when the WDR method combines the masked word's embedding with the embeddings of the following words, that information is already provided as input. This partial visibility of the target representation might lead to unexpected optimization behavior, such as the model disproportionately focusing on the right-side (future) context that is incorporated in the target, rather than considering the entire context.
Since there are prior works on N-gram prediction within the MLM framework <cit.>, we believe the WDR method could be applied to them by composing only masked words when the WDR is calculated, which would resolve the aforementioned issue. We expect that the high gradient diversity of the WDR method may offer additional benefits within these MLM frameworks.
§ ACKNOWLEDGEMENTS
§ APPENDIX
§.§ Proof of Eq.(<ref>)
We provide a proof of Eq.(<ref>) with the induction method. To avoid confusion, we temporarily change the notation of Δ_n𝐱^e_t in conjecture Eq.(<ref>) to Δ̂_n𝐱^e_t until it is proved. Based on the definitions of the 1 and n-level WDR, Eq.(<ref>) and Eq.(<ref>), we can verify the initial condition, that is n=1, holds as follows:
Δ_1𝐱^e_t =𝐱^e_t+1-𝐱^e_t
=10(-1)^0𝐱^e_t+1+11(-1)^1𝐱^e_t
= ∑_i=0^11i(-1)^i𝐱^e_t+(1-i)
=Δ̂_1𝐱^e_t.
Therefore, the conjecture holds for the initial condition. Then, by following the induction method, we assume the conjecture at n-level is true, that is Δ̂_n𝐱^e_t=Δ_n𝐱^e_t. Then, the (n+1)-level WDR from the definition Eq.(<ref>) is derived to Δ_n+1𝐱^e_t=Δ_n𝐱^e_t+1-Δ_n𝐱^e_t=Δ̂_n𝐱^e_t+1-Δ̂_n𝐱^e_t. Each term is derived as follows:
Δ̂_n𝐱^e_t+1 =n0(-1)^0𝐱^e_t+n+1+n1(-1)^1𝐱^e_t+n+
⋯ +nn-1(-1)^n-1𝐱^e_t+2+nn(-1)^n𝐱^e_t+1,
-Δ̂_n𝐱^e_t =n0(-1)^1𝐱^e_t+n+n1(-1)^2𝐱^e_t+n-1+
⋯ +nn-1(-1)^n𝐱^e_t+1+nn(-1)^n+1𝐱^e_t,
Δ̂_n𝐱^e_t+1-Δ̂_n𝐱^e_t =n0(-1)^0𝐱^e_t+n+1+(n0+n1)(-1)^1𝐱^e_t+n+
⋯+(nn-1+nn)(-1)^n𝐱^e_t+1+nn(-1)^n+1𝐱^e_t
=n+10(-1)^0𝐱^e_t+n+1+n+11(-1)^1𝐱^e_t+n+
⋯+n+1n(-1)^n𝐱^e_t+1+n+1n+1(-1)^n+1𝐱^e_t
=∑^n+1_i=0n+1i(-1)^i𝐱^e_t+(n+1-i)
=Δ̂_n+1𝐱^e_t.
Note that the binomial coefficient, ni, is the n-th row and i-th value of Pascal's triangle, and it satisfies ni-1+ni=n+1i. Based on this outcome, the conjecture holds for (n+1)-level if the n-level is true. Therefore, the conjecture is proved.
§.§ Experiment Details
We trained the models described in Sec. <ref> and Sec. <ref> following the configurations in Table <ref> for the Transformer-based models (`TF'), and the configurations reported in the previous works <cit.>, with the changes described in Table <ref>, for the primary CLM baselines (`TT' and `RF'). For the Transformer-based models' experiments, we saved the best checkpoint based on the validation results. We stopped training early whenever the model failed to beat its previous best validation performance for `Patience' consecutive evaluations <cit.>. For the primary CLM baselines, we followed the pre-defined total number of training iterations. Table <ref> lists the specific configurations, such as N and λ, used for our proposed simple-based and WDR-based N-gram CLMs.
Regarding our computational environment, we used a single NVIDIA RTX3090 GPU for the large CLM datasets (T8 and W103) and a GTX1080Ti GPU for the small CLM datasets (PTB and W2); on average, training took 1 day and 3 hours, respectively. We used 4x NVIDIA RTX3090 GPUs for the large NMT dataset (WMT14 English-German) and 2x GTX1080Ti GPUs for the small NMT datasets (IWSLT14 English-German and WMT18 English-Turkish); on average, training took 3 days.
§.§ Masked Language Modeling Experiment
We adhered to the environmental settings established by CrammedBERT <cit.> for all aspects of our study, including dataset preprocessing, model configurations, pre-training, fine-tuning procedures, and evaluations. Comprehensive details of these settings can be found in the associated GitHub repository[https://github.com/JonasGeiping/cramming]. Building on the CrammedBERT architecture, we apply the WDR method analogously to our WDR-based N-gram CLM experiment. Specifically, we utilized N additional MLP layers designed to predict the n-level WDRs alongside the original word embedding at the masked position. These n-level WDRs are calculated by composing the words following the masked word. The final loss is the average of the original loss and the additional WDR losses, with the original and additional losses averaged unequally, as described in Section <ref>.
Table <ref> presents the experimental results for CrammedBERT and our proposed models, evaluated on the GLUE test set after fine-tuning. We varied the number of grams, N, from 1 to 3. The results indicate that applying 2-level WDR yields a 1.0-point increase in the average GLUE score. However, the improvements across individual sub-tasks are not consistently superior; in some cases, they were similar to or worse than the baseline.
|