arXiv: http://arxiv.org/abs/2409.03296v1 (published 2024-09-05)
Title: An Efficient Two-Dimensional Functional Mixed-Effect Model Framework for Repeatedly Measured Functional Data
Authors: Cheng Cao, Jiguo Cao, Hao Pan, Yunting Zhang, Fan Jiang, Xinyue Li
Categories: stat.ME (primary), stat.AP
An Efficient Two-Dimensional Functional Mixed-Effect Model Framework for Repeatedly Measured Functional Data
CHENG CAO
Department of Data Science, City University of Hong Kong
and
JIGUO CAO
Department of Statistics and Actuarial Science, Simon Fraser University
and
HAO PAN, YUNTING ZHANG, FAN JIANG
Department of Developmental and Behavioral Pediatrics,
Shanghai Children’s Medical Center,
School of Medicine, Shanghai Jiao Tong University
and
XINYUE LI Corresponding author: Xinyue Li, Email: [email protected]
Department of Data Science, City University of Hong Kong
September 9, 2024
§ ABSTRACT
With the rapid development of wearable device technologies, accelerometers can record minute-by-minute physical activity for consecutive days, providing important insight into the dynamic association between the intensity of physical activity and mental health outcomes in large-scale population studies. Using the Shanghai school adolescent cohort, we estimate the effect of health assessment results on physical activity profiles recorded by accelerometers throughout a week, which are recognized as repeatedly measured functional data. To achieve this goal, we propose an innovative two-dimensional functional mixed-effect model (2dFMM) for this specialized data, which vary smoothly over longitudinal day observations with covariate-dependent mean and covariance functions. The modeling framework characterizes the longitudinal and functional structures while incorporating two-dimensional fixed effects for covariates of interest. We also develop a fast three-stage estimation procedure that provides accurate fixed-effect inference for model interpretability and improves computational efficiency for large datasets. We find strong evidence of intraday and interday varying significant associations between physical activity and mental health assessments in our cohort population, which sheds light on possible intervention strategies targeting daily physical activity patterns to improve school adolescent mental health. Our method is also applied to environmental data to illustrate its wide applicability. Supplementary materials for this article are available online.
Keywords: Functional Mixed Effect Model, Wearable Device Data, Physical Activity Data, Mental Health
§ INTRODUCTION
A growing amount of research suggests an essential relationship between adolescents' physical activity and mental health. For instance, prolonged sedentary behaviors have been found to elevate the risk of depression (). Recent studies have further discovered that the context of when and where physical activity occurs is also an influential factor (). Wearable devices can continuously record an individual's physical activity profiles over consecutive days, which offers unprecedented opportunities to analyze the varying associations with health outcomes in large-scale population studies (). However, the dynamicity of the association is little studied, as it relies on both functional variations of repeatedly measured activity profiles, characterizing temporal differences across times of the day, and longitudinal variations, capturing differences across days of the week. New functional methodological tools for repeatedly measured functional data are required to address this scientific problem.
Our motivating dataset comes from a Shanghai school adolescent study, which aims to examine whether a student's physical activity pattern is associated with mental health outcomes after adjusting for covariates such as demographic information. In this study, all adolescent participants wore an ActiGraph for seven consecutive days to obtain activity count signals. A total of 2,313 students aged between 11 and 18 years from several schools in Shanghai participated in the study. The recorded accelerometer signals were summarized into minute-by-minute activity counts, resulting in a total of 1,440 functional grids over a day; one example of the physical activity profile for one participant is shown in Figure <ref>. For each subject, we collected his/her demographic information and mental health assessment results. More details are provided in Section <ref>.
The physical activity recorded by advanced accelerometers can be seen as repeatedly measured functional observations with covariate-dependent mean and covariance functions. Understanding the intraday and interday relationships between physical activity and key factors can provide insights into intervention strategies for school adolescent health. However, the specialized data structure is difficult to characterize using one-dimensional models alone, which implies the importance of two-dimensional modeling to adequately capture the data characteristics. Furthermore, many methods struggle with scalability even for sample sizes and functional sampling grids smaller than those of our motivating data (). Improving the computational efficiency of model estimation and inference is also challenging for practical applications.
Currently, repeatedly measured physical activity data monitored by wearable devices are commonly studied as a special case of longitudinal functional data analysis using the functional mixed effect model (FMEM) (). The method aims at fixed-effect estimation and inference under the mixed-effect framework by applying splines (), functional principal component analysis (FPCA) (), or Bayesian methods (). While longitudinal functional data analysis mainly focuses on sparse and irregular longitudinal-based designs (), repeatedly measured functional data are often densely sampled and measured over regular longitudinal visits. Hence, the analysis of repeatedly measured functional data is of independent interest (), and the methodology can be refined in three aspects when taking into account the practical data structure.
First, the variation of effects along both longitudinal and functional directions underscores the need to evaluate two-dimensional rather than one-dimensional effects as in conventional longitudinal functional studies. While some research suggests bivariate functional models for symmetric two-dimensional functional data, i.e., images, the modeling framework and estimation procedures rely essentially on their univariate cases and lead to extra computational burden (). Besides, they also inadequately account for the complex four-dimensional correlation structures present in longitudinal and functional contexts. Second, enhancing the correlation structure is crucial for improving model flexibility and universality. The random effect component of traditional one-dimensional FMEM incorporates longitudinal visits through a linear framework with additive assumptions (). In comparison, a nonparametric four-dimensional covariance function that considers continuity along both directions and enables a flexible model structure with minimal assumptions is a potential alternative. Third, there is a growing need to boost the computational efficiency of existing methods for application to population studies. Existing methods are often computationally expensive due to challenges in modeling the complex four-dimensional correlation structure and conducting fixed-effect inference. Although some techniques have been proposed to enhance computational efficiency (), implementing FMEM in large-scale population studies remains challenging.
In this study, we propose a novel two-dimensional functional mixed-effect model (2dFMM) framework for repeatedly measured functional data. Our model considers two-dimensional fixed effects for the specialized dense and regular longitudinal designs and leverages a nonparametric four-dimensional covariance structure. This flexible structure can effectively avoid the covariance structure assumption violations that often occur in longitudinal settings. We also develop a fast three-stage estimation procedure using pointwise and smoothing techniques for accurate bivariate coefficient function estimation, which achieves good model interpretability while ensuring fast computation. Moreover, the fixed-effect inference is built under the weak separability assumption of the spatiotemporal covariance function to decompose marginal effects () and enable efficient estimation of the four-dimensional covariance function, leveraging FPCA and basis splines to relieve the computational burden.
The rest of this paper is organized as follows. In Section <ref>, we propose 2dFMM with estimation and inference procedure. Asymptotic results of the proposed estimator are also provided. Extensive simulation studies are conducted in Section <ref> to evaluate the performance of 2dFMM and compare it with existing approaches. Section <ref> further applies 2dFMM to the Shanghai school adolescent study and Australia electricity demand data. Conclusion and discussion are in Section <ref>.
§ METHODS
§.§ Two-Dimensional Functional Mixed-Effect Model
The functional response is denoted by Y_i(s,t), the ith subject's profile at functional time t ∈𝒯, repeatedly measured at longitudinal time s ∈𝒮, where i=1,…,N. We assume that the covariate-dependent mean function μ(s,t,𝐗_i) is a linear combination of the P-dimensional time-invariant or time-varying covariates of interest 𝐗_i = (1, x_i,1(s,t), …, x_i,P(s,t))^T at times s and t, which is also known as a standard linear concurrent model. The proposed two-dimensional functional mixed-effect model (2dFMM) is
Y_i(s,t) = μ(s,t,𝐗_i) + η_i(s,t) + ϵ_i(s,t)
= β_0(s,t) + ∑_p=1^Px_i,p(s,t)β_p(s,t) + ∑_j=1^∞ξ_i,j(t)ψ_j(s) + ϵ_i(s,t),
where {β_0(s,t), β_1(s,t), …, β_P(s,t)} are corresponding coefficient functions, η_i(s,t) is a bivariate random process with mean zero and four-dimensional covariance function C(s,t;u,v), while random measurement error process ϵ_i(s,t) is mean zero with some covariance function ℰ(s,t;u,v) and independent of η_i(s,t).
For dimension reduction, η_i(s,t) is typically decomposed via well-developed two-dimensional FPCA and represented by the Karhunen-Loève expansion with an orthonormal basis of L^2 (𝒮×𝒯). This ensures that a large proportion of the four-dimensional covariance function C(s,t;u,v) can be explained by the top several terms, but it is computationally demanding and not suitable for the asymmetric bivariate arguments usually assumed in repeatedly measured functional data. To address these issues, we exploit the marginal covariance function technique to separate the bivariate process for efficiency (). In model (<ref>), we define W_i(s,t) = η_i(s,t) + ϵ_i,1(s,t), where ϵ_i(s,t) = ϵ_i,1(s,t) + ϵ_i,2(s,t) for model identifiability; C_𝒲(s,u) = C_𝒮(s,u) + Γ(s,u), where C_𝒮(s,u) = ∫_𝒯C(s,t;u,t)dt is the marginal covariance with respect to 𝒮 and Γ(s,u) is the covariance function of ϵ_i,1(s,t). Let ψ_j(s) be the eigenfunctions of the marginal covariance function, such that C_𝒲(s,u)=∑_j=1^∞τ_jψ_j(s)ψ_j(u); {ξ_i,j(t): j ≥ 1} are random coefficient functions obtained by projecting the ith subject's η_i(·,t) onto the direction ψ_j(s), i.e. ⟨ W_i(·,t), ψ_j⟩_𝒮 - ⟨ϵ_i,1(·,t), ψ_j⟩_𝒮. It is easy to see that 𝔼[ξ_i,j(t)] = 0 for t ∈𝒯 and 𝔼[⟨ξ_i,j, ξ_i,h⟩_𝒯] = τ_j1_{j=h}. We further define the covariance function Θ_j(t,v) = 𝔼[ξ_i,j(t)ξ_i,j(v)], which depends on the jth eigenfunction of the marginal covariance C_𝒮(s,u). Therefore, for any s ≠ u and t ≠ v, we can obtain an expression for the four-dimensional covariance function as follows,
C(s,t;u,v) = 𝔼[(Y_i(s,t) - μ(s,t, 𝐗_i))(Y_i(u,v) - μ(u,v, 𝐗_i))]
= ∑_j=1^∞ψ_j(s)ψ_j(u)Θ_j(t,v).
Moreover, regarding the two components of the measurement error ϵ_i(s,t): ϵ_i,1(s,t), with covariance function Γ(s,u)1_{t=v}, captures the variation of repeated visits, and the white noise ϵ_i,2(s,t) has mean zero and variance φ^2. Therefore, ℰ(s,t;u,v) = {Γ(s,u) + φ^21_{s=u}}1_{t=v}, where 1_{·} is an indicator function.
Thus, the functional responses Y_i(s,t) are i.i.d. Gaussian random processes with mean 𝔼[Y_i(s,t)] = μ(s,t, 𝐗_i) and variance function σ^2(s,t), while for any s ≠ u and t ≠ v, the covariance function is σ(s,t;u,v) = C(s,t;u,v) + ℰ(s,t;u,v).
Remark 1. The proposed bivariate model differs from existing FMEM in two key aspects. First, the expectation of the functional response in our model depends on bivariate coefficient functions, which inherently allow variation over the longitudinal domain. Second, our model captures the complex correlation structure through the four-dimensional covariance function C(s,t;u,v), removing the constraint of the linear framework in FMEM and allowing a more flexible representation. In addition, our bivariate model addresses the pre-specification difficulty of the FMEM design: whether to use a random slope effect depends on hard-to-verify a priori assumptions about the longitudinal correlation structure. Our data-driven approach circumvents this specification difficulty. We demonstrate that it better accommodates subtle complexities, such as the distinction between in-school weekday and off-school weekend patterns in our motivating examples.
Remark 2. The four-dimensional covariance function in Equation (<ref>) is decomposed under a weaker assumption than strong separability, which is defined as C(s,t;u,v) = C_𝒮(s,u)C_𝒯(t,v) (). We approximate C_𝒯(t,v) within each score function ξ_i,j(t) and the corresponding covariance function Θ_j(t,v). This representation relaxes the restriction of strong separability and is theoretically sound because it delivers near-optimality under appropriate assumptions ().
Remark 3. Compared with FMEM in longitudinal functional studies, our coefficient functions are expanded from curves to surfaces, indicating two-dimensional effects on functional responses. It is deemed necessary for repeatedly measured functional data varying in two domains. Despite this, we can still approximate the univariate effect by taking the average of the bivariate coefficient function along either domain.
Suppose that the sampling points {s_r, r=1,…,R} and {t_l, l=1,…,L} are prefixed in this study with design densities f_𝒮(s) and f_𝒯(t) such that ∫_s_1^s_r f_𝒮(s)ds = r/R for r ≥ 1 and similarly ∫_0^t_l f_𝒯(t)dt = l/L for l ≥ 1. Both densities are continuously second-order differentiable on their supports 𝒮 and 𝒯 and are uniformly bounded away from zero and infinity. Suppose β_p(s,t) = β_p(s)β_p(t) and consider only 𝒯; satisfying the identification condition 𝔼[β_p(s)] = 1 implies β_p(t) = ∫_𝒮β_p(s,t)f_𝒮(s)ds. Hence, given an estimator of the bivariate coefficient function β̂_p(s,t), the univariate function estimator can be obtained as β̂_p(t) = R^-1∑_r=1^Rβ̂_p(s_r,t)
with covariance estimator Cov{β̂_p(t), β̂_p(v)} =R^-2∑_r_1∑_r_2Cov{β̂_p(s_r_1,t),β̂_p(s_r_2,v)}.
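For concreteness, the marginalization step admits a very short implementation. The following is a minimal R sketch, assuming the bivariate estimate is stored as an R × L matrix and its covariance as an R × R × L × L array; the object names are illustrative and not taken from the authors' 2dFMM package.

```r
## Collapse a bivariate coefficient estimate to its univariate effect
## over the functional domain by averaging over the longitudinal grid.
univariate_effect <- function(beta_hat, cov_beta) {
  L <- ncol(beta_hat)
  beta_t <- colMeans(beta_hat)                     # R^{-1} sum_r beta_hat(s_r, t)
  cov_t  <- matrix(0, L, L)
  for (l1 in seq_len(L)) {
    for (l2 in seq_len(L)) {
      cov_t[l1, l2] <- mean(cov_beta[, , l1, l2])  # R^{-2} double sum over r1, r2
    }
  }
  list(est = beta_t, cov = cov_t)
}
```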
§.§ Estimation of Model Components
We propose a computationally efficient three-stage model estimation procedure. First, we estimate the fixed effects using bivariate pointwise and post-smoothing (pointwise-smoothing) estimation. We also represent the four-dimensional covariance function C(s,t;u,v) as a combination of eigenfunctions and B-spline basis functions for faster estimation. This is achieved by leveraging the decomposition of the marginal covariance. We assume densely and regularly sampled grids for both domains. Estimation with randomly sampled designs is discussed in Section <ref>. Our approach also lends itself well to parallelization, further accelerating the entire process.
Suppose the design points s_r and t_l satisfy the design densities introduced above. We denote the pointwise response vector by 𝐘_s_r,t_l and the design matrix by 𝐗_s_r,t_l = (𝐗_1, s_r,t_l^T, …, 𝐗_N,s_r,t_l^T)^T, where 𝐗_i, s_r,t_l = (1, x_i,1(s_r,t_l), …, x_i,P(s_r,t_l))^T. Our estimation procedure is as follows.
Step I: Bivariate Pointwise Estimation. We form a pointwise linear model
𝐘_s_r,t_l = 𝐗_s_r,t_lβ_s_r,t_l + 𝐞_s_r,t_l, where β_s_r,t_l = (β_0(s_r,t_l), …, β_P(s_r, t_l))^T and 𝐞_s_r, t_l∼ N(0, σ^2_s_r, t_l𝐈_N), under the mutual independence assumption, which is commonly adopted in other fixed-effect estimation approaches (). We use the ordinary least squares estimator β̃_s_r,t_l = (𝐗^T_s_r,t_l𝐗_s_r,t_l)^-1𝐗^T_s_r,t_l𝐘_s_r,t_l for initial estimation because it enjoys nice statistical properties at low computational cost. Considering the bivariate pointwise conditions, the covariance matrix estimates of the coefficient functions can be obtained for any s_r_1, s_r_2, t_l_1, t_l_2,
Cov{β̃_s_r_1,t_l_1, β̃_s_r_2,t_l_2} =σ_s_r_1,t_l_1;s_r_2,t_l_2𝐇_s_r_1,t_l_1𝐇^T_s_r_2,t_l_2,
where 𝐇_s_r,t_l = (𝐗^T_s_r,t_l𝐗_s_r,t_l)^-1𝐗^T_s_r,t_l and the estimates for σ_s_r_1,t_l_1;s_r_2,t_l_2 will be provided in Step III. Note that for each pairwise observation (s_r,t_l), the error-variance estimate of the linear model, denoted by σ̃^2_s_r,t_l, serves as the raw estimate of σ^2_s_r,t_l.
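A minimal R sketch of Step I is given below, assuming time-invariant covariates so that a single N × (P+1) design matrix (with a leading intercept column) is shared across all grid points; shapes and names are illustrative rather than the authors' implementation.

```r
## Pointwise OLS at every grid pair (s_r, t_l); Y is an N x R x L array.
pointwise_ols <- function(Y, X) {
  N <- dim(Y)[1]; R <- dim(Y)[2]; L <- dim(Y)[3]; P1 <- ncol(X)
  H <- solve(crossprod(X), t(X))          # (X'X)^{-1} X', reused at every grid point
  beta <- array(NA_real_, c(P1, R, L))
  sig2 <- matrix(NA_real_, R, L)          # raw pointwise error-variance estimates
  for (r in seq_len(R)) {
    for (l in seq_len(L)) {
      y <- Y[, r, l]
      b <- H %*% y
      beta[, r, l] <- b
      sig2[r, l]   <- sum((y - X %*% b)^2) / (N - P1)
    }
  }
  list(beta = beta, sigma2 = sig2)
}
```

The double loop over (r, l) is embarrassingly parallel, which is what makes the parallelization mentioned above straightforward.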
Step II: Bivariate Smoothing. Once the raw estimate is obtained, bivariate smoothing is applied to refine it by integrating neighboring temporal information. Many post-smoothers are available for the bivariate process, for example, bivariate P-splines (), thin plate regression splines (), tensor product smooths (), and the sandwich smoother ().
Here we illustrate the use of the sandwich smoother due to its computational efficiency and nice asymptotic properties. Denoting the raw pth estimated bivariate coefficient function by the matrix β̃_p= (β̃_p(s_r, t_l))_R × L, the refined estimator is expressed in a closed-form solution as β̂_p = 𝐒_2β̃_p𝐒_1, where 𝐒_1 and 𝐒_2 are smoother matrices for 𝒯 and 𝒮 respectively, utilizing P-splines with prespecified numbers of knots K_L and K_R, respectively.
With the help of the bivariate smoother, the variability of the covariance estimator σ̃^2_s_r,t_l can be further diminished. Applying the sandwich smoother to the error-variance matrix 𝐑 = (σ̃^2_s_r,t_l)_R × L gives the final estimator 𝐑̂ = (σ̂^2_s_r,t_l)_R × L. However, there is no guarantee that the resulting estimates are all non-negative. This issue can be handled by trimming the negative values at zero ().
In this study, we also provide tensor product smooths in the practical implementation of bivariate smoothing. To quantify the wiggliness of the raw estimator, the sandwich smoother relies on differencing matrices to account for the distance between adjacent coefficients, while tensor product smooths use common second-order derivative penalties. The performance of the two smoothers is compared in detail in the simulation studies.
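The sandwich smoother's closed form reduces to two univariate P-spline hat matrices. Below is a minimal R sketch of this construction; the basis size K, penalty lambda, and difference order m are illustrative choices (in practice the penalties are tuned, e.g., by GCV), and the helper name is hypothetical.

```r
library(splines)

## Univariate P-spline smoother matrix: B (B'B + lambda D'D)^{-1} B'.
psmoother <- function(grid, K, lambda, m = 2) {
  B <- bs(grid, df = K, intercept = TRUE)  # B-spline basis, length(grid) x K
  D <- diff(diag(K), differences = m)      # m-th order differencing matrix
  B %*% solve(crossprod(B) + lambda * crossprod(D), t(B))
}

# S1 <- psmoother(t_grid, K_L, lambda_t)   # L x L smoother for the T direction
# S2 <- psmoother(s_grid, K_R, lambda_s)   # R x R smoother for the S direction
# beta_hat <- S2 %*% beta_tilde %*% S1     # beta_tilde is the R x L raw estimate
```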
Step III: Covariance Estimation. To reduce the computing burden of the most time-consuming covariance estimation step, we employ flexible nonparametric methods incorporating FPCA on the marginal covariance C_𝒮(·, ·) and B-splines to approximate the random functions ξ_i,j(·), tailored to the imbalanced numbers of grid points in the two domains. The efficient estimation procedure also consists of three stages.
Firstly, we use the centered data, Ỹ_i(s_r,t_l) = Y_i(s_r,t_l) - μ̂(s_r,t_l, 𝐗_i), to estimate the marginal covariance function C_𝒲(s,u), and obtain the estimates of the eigenfunctions ψ_j(s) and score functions ξ_i,j(t). Specifically, given the refined estimator of the coefficient functions from Step II, we pool {Ỹ_i(·,t_l), i = 1,… N, l=1, … L} and form the sample covariance
C̃_𝒲(s_r_1,s_r_2) = (|𝒯|/NL)∑_i=1^N∑_l=1^LỸ_i(s_r_1,t_l)Ỹ_i(s_r_2,t_l),
where 1 ≤ r_1≤ r_2≤ R. The storage and computation of C̃_𝒲(s_r_1,s_r_2) are light because the number of longitudinal grids is relatively small. We obtain the resulting positive semi-definite covariance function estimates Ĉ_𝒲(s_r_1,s_r_2) by kernel-based local linear smoothing in the PACE algorithm, which excludes the diagonal entries because they are inflated by the white noise (). Let {ψ̂_j(s): j = 1,…, J} be the estimated eigenfunctions and ξ̃_𝒲,i,j(t_l) = ∫_𝒮Ỹ_i(s,t_l)ψ̂_j(s)ds be the estimated score functions, which are used as approximants to ξ_i,j(t_l). The number of components J is determined by the fraction of variance explained (FVE), with the threshold set to 0.99. Additionally, the variance of the white noise, φ^2, can be estimated as the average difference between the diagonal entries of the raw covariance C̃_𝒲 and their smoothed counterparts Ĉ_𝒲.
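A minimal R sketch of this first stage is given below. It pools the centered array over the functional grid and eigendecomposes the result; for brevity it absorbs the |𝒯| weight into a constant (unit interval) and omits the PACE diagonal-smoothing step, so it is a simplified stand-in rather than the full procedure.

```r
## Marginal covariance over the longitudinal domain and its FPCA.
## Yc is the N x R x L array of centered responses.
marginal_fpca <- function(Yc, fve = 0.99) {
  N <- dim(Yc)[1]; R <- dim(Yc)[2]; L <- dim(Yc)[3]
  CW <- matrix(0, R, R)
  for (l in seq_len(L)) {
    CW <- CW + crossprod(Yc[, , l])   # sum_i Yc_i(s, t_l) Yc_i(u, t_l)
  }
  CW <- CW / (N * L)
  eg <- eigen(CW, symmetric = TRUE)
  J  <- which(cumsum(eg$values) / sum(eg$values) >= fve)[1]
  list(psi = eg$vectors[, 1:J, drop = FALSE], tau = eg$values[1:J], J = J)
}
```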
Secondly, we estimate the marginal covariance functions Θ_j(t, v) from the "observed" functional data ξ̃_i,j(t). Suppose each score function has the B-spline basis expansion ξ̃_i,j(t_l)=𝐁^T_j(t_l)𝐛_i,j, where 𝐁_j(t_l) = (B_j1(t_l), …, B_jK(t_l))^T and 𝐛_i,j = (b_i,j1, …, b_i,jK)^T, B_jk(t_l) is the kth B-spline basis function of the jth principal component, and K is the number of basis functions. Letting K be the same for all j, the basis functions 𝐁(t_l) do not depend on j. The covariance estimator of ξ̃_i,j(t_l), denoted by Θ̂_j(t_l_1, t_l_2), can be obtained nonparametrically as
Θ_j(t_l_1, t_l_2) = N^-1∑_i=1^Nξ̂_i,j(t_l_1)ξ̂_i,j(t_l_2) = N^-1∑_i=1^N𝐁^T(t_l_1)(𝐛̂_i,j^T⊗𝐛̂_i,j)𝐁(t_l_2).
The choice of K involves a tradeoff between capturing variation adequately and ensuring computational efficiency. Beyond ensuring that the majority of variation is captured by a sufficiently large number of basis functions, we also consider the computational cost of the basis function expansion. Compared with FPCA, which has computational complexity O(NL^2 + L^3) for each ξ̃_i,j(t) (), the use of B-splines requires O(NLK^2) computations. This implies that when K < (L+L^2/N)^1/2, our approach offers a lighter computational cost. Naturally, we can set K = K_L, and K_L = min{L/2, 35} is recommended by <cit.>. Thus, in practical usage, we suggest K_L = min{(L+L^2/N)^1/2, L/2, 35}.
Finally, given Equation (<ref>), we obtain the estimator of the four-dimensional covariance function C(s_r_1,t_l_1;s_r_2,t_l_2) as follows.
C(s_r_1,t_l_1;s_r_2,t_l_2) =N^-1𝐁^T(t_l_1) {∑_j=1^Jψ̂_j(s_r_1)ψ̂_j(s_r_2)∑_i=1^N𝐛̂_i,j^T⊗𝐛̂_i,j}𝐁(t_l_2),
and therefore, σ̂_s_r_1,t_l_1; s_r_2,t_l_2 = C(s_r_1,t_l_1; s_r_2,t_l_2) + σ̂^2_s_r_1,t_l_11_{s_r_1 = s_r_2, t_l_1 = t_l_2}.
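The following R sketch assembles one entry of this estimator from the pieces above, assuming the estimated eigenfunctions psi (R × J), the B-spline evaluation matrix B (L × K), and the fitted score coefficients bhat (N × J × K) are available; all names and shapes are illustrative.

```r
## Practical basis size from the text: K_L = min{(L + L^2/N)^(1/2), L/2, 35}.
choose_K <- function(N, L) floor(min(sqrt(L + L^2 / N), L / 2, 35))

## One entry C(s_r1, t_l1; s_r2, t_l2) of the four-dimensional covariance.
C4d_entry <- function(r1, l1, r2, l2, psi, B, bhat) {
  N <- dim(bhat)[1]; J <- dim(bhat)[2]
  val <- 0
  for (j in seq_len(J)) {
    G <- crossprod(bhat[, j, ])     # sum_i b_ij b_ij^T, a K x K Gram matrix
    val <- val + psi[r1, j] * psi[r2, j] *
      drop(B[l1, , drop = FALSE] %*% G %*% t(B[l2, , drop = FALSE]))
  }
  val / N
}
```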
§.§ Inference Procedure
Here we show the construction of pointwise and simultaneous confidence bands. These inference procedures require the estimator of Cov{β̃_s_r_1,t_l_1, β̃_s_r_2,t_l_2}, which can be explicitly obtained from Equations (<ref>) and (<ref>). However, this sample covariance function is wiggly because it relies on a pointwise estimator, although the B-spline smoothing technique controls the roughness to some degree. We therefore need a covariance estimator of the refined coefficient, Cov{β̂_s_r_1,t_l_1, β̂_s_r_2,t_l_2}, in the form of the sandwich smoother, which is conveniently equivalent to the covariance smoothing approach (). Let β̃_p = vec(β̃_p) indicate the matrix stacked by columns and Cov{β̃_p, β̃_p} be the RL × RL covariance matrix estimate of the pth bivariate coefficient function formed by the pth diagonal elements of Cov{β̃_s_r_1,t_l_1,β̃_s_r_2,t_l_2} for all s_r_1, s_r_2, t_l_1, t_l_2. By tensor product properties, β̂_p = (𝐒_1⊗𝐒_2)β̃_p, and the ultimate four-dimensional covariance estimator of the pth bivariate coefficient function is
Var{β̂_p(s_r,t_l)} = e_s_r,t_l^T(𝐒_1⊗𝐒_2)Cov{β̃_p, β̃_p}(𝐒_1⊗𝐒_2)^Te_s_r,t_l.
where e_s_r,t_l denotes the RL-dimensional unit vector with 1 at the ((l-1)R+r)th entry.
The analytic inference for two-dimensional fixed effects using confidence bands is straightforward. Depending on the pointwise variability for every pair of (s_r,t_l) estimated by Equation (<ref>), the ± 2 standard error surfaces can be constructed by
β̂_p(s_r,t_l) ± 2Var{β̂_p(s_r,t_l)}^1/2.
Note that our estimator is biased, so the ± 2 standard error surfaces are referred to as 95% pointwise confidence bands (PCB) only when the bias term is neglected, which is justified by the nice approximation property of the sandwich smoother (). While PCB are efficient to construct in analytical form, using them for statistical inference can be flawed because they ignore the inherent correlation of functional data, resulting in false positives ().
To address this issue, we turn to simultaneous confidence bands (SCB) to account for correlation, which are commonly constructed using nonparametric bootstrap approaches. One method involves multivariate normal simulations (), with high computational cost because the dimensionality equals the sampling density of the functional domain. We instead use parameter simulations based on the number of B-spline basis functions, reducing to a tractable number of dimensions (). The details of the bootstrap algorithm are provided in Algorithm <ref>. Our bootstrap SCB algorithm makes the construction computationally practical by leveraging the B-spline technique. In addition, it can also serve as a data-driven inference tool for formal global tests about the coefficient functions without relying on distributional assumptions ().
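Algorithm <ref> itself is in the supplement; the following is a minimal R sketch of the parameter-simulation idea under stated assumptions: the smoothed coefficient surface is represented by basis coefficients theta_hat with estimated covariance V_theta, and Bmat evaluates the basis on the (vectorized) grid. All names are illustrative, not the authors' implementation.

```r
library(MASS)

## Simulate basis coefficients, track the sup of the standardized
## deviation over the grid, and widen the pointwise bands accordingly.
scb_bands <- function(theta_hat, V_theta, Bmat, nboot = 1000, level = 0.95) {
  beta_hat <- drop(Bmat %*% theta_hat)
  se       <- sqrt(rowSums((Bmat %*% V_theta) * Bmat))   # diag(B V B')
  draws    <- mvrnorm(nboot, mu = theta_hat, Sigma = V_theta)
  sup_dev  <- apply(draws, 1, function(th)
    max(abs(drop(Bmat %*% th) - beta_hat) / se))
  q <- quantile(sup_dev, level)
  list(scb_lower = beta_hat - q * se, scb_upper = beta_hat + q * se,
       pcb_lower = beta_hat - 2 * se, pcb_upper = beta_hat + 2 * se)
}
```

Because the simulation lives in the coefficient space (dimension K rather than RL), each bootstrap draw costs only one basis evaluation, which is the source of the computational savings described above.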
§.§ Asymptotic Results
In this section, we derive the asymptotic distribution of our pointwise-smoothing estimator by establishing its asymptotic bias and variance structure. The asymptotics of our estimator are established based on the properties of the least squares estimator and the sandwich smoother, which is equivalent to a bivariate kernel regression estimator with the product kernel (RLh_Rh_L)^-1∑_r,lβ_p(s_r,t_l)H_m_R{h^-1_R(s-s_r)}H_m_L{h^-1_L(t-t_l)}, where H_m is the equivalent kernel for univariate penalized splines, and h_R and h_L are the bandwidths (). The kernel function H_m is symmetric and bounded. For simplicity, our results are for the case of equally spaced design points and knots. For notational convenience, a ∼ b means a/b converges to 1.
We first derive the asymptotic bias at interior points. Let m_R and m_L be the difference orders of the differencing matrices, and write m_T = 4m_Rm_L + m_R + m_L for notational simplicity.
Proposition 1. Suppose conditions (a)-(d) and (g) are satisfied, and further assume K_R∼ c_R(RL)^b_1, K_L∼ c_L(RL)^b_2, with b_1 > (m_R + 1)m_L/m_T, b_2 > (m_L + 1)m_R/m_T, h_R∼ d_R(RL)^-m_L/m_T, h_L∼ d_L(RL)^-m_R/m_T for some positive constants c_R, c_L, d_R, d_L. Then, for any (s,t) ∈ (0,1)^2, we have
bias{β̂_p(s,t)} = (-1)^m_R + 1d^2m_R_R∂^2m_R/∂ s^2m_Rβ_p(s,t) + (-1)^m_L + 1d^2m_L_L∂^2m_L/∂ t^2m_Lβ_p(s,t) + o(h^2m_L_L).
The bias remains the same as that of the sandwich smoother (), containing components from 𝒮 and 𝒯. Note that the bias converges at a slower rate at the boundary than in the interior (); the proof is omitted here.
To derive the asymptotic variance of the estimator, we assume the covariates are independently and identically distributed and time-invariant. For simplicity of illustration, we also assume no missing values.
Proposition 2. Suppose conditions (a)-(g) are satisfied and under the conditions of Proposition 1, when N →∞, we have
var{β̂_p(s,t)} = 2(RLh_Rh_LN)^-1ω_pσ^2(s,t)κ(H_m_R)κ(H_m_L) + o(h_L^4m_L).
where ω_p is the (p,p)th entry of Ω = 𝔼(𝐗_1,s_r,t_l𝐗_1,s_r,t_l^T) and κ(H_m) = ∫ H^2_m(u)du. The proposition implies that the asymptotic variance structure of our estimator has an extra component, arising from the pointwise least squares estimator, compared to the sandwich smoother alone (). Additionally, it shows that the correlation between two points can be ignored, similar to the kernel regression estimator ().
Based on the asymptotic bias and variance structures, and proposition 1 of <cit.>, the corresponding asymptotic distribution of our estimator is given by
(RL)^2m_Rm_L/m_T(β̂_p(s,t) - β_p(s,t)) → N(α_p(s,t), V_p(s,t)),
in distribution as R →∞ and L →∞, N →∞, where α_p(s,t) = (-1)^m_R + 1d^2m_R_R∂^2m_R/∂ s^2m_Rβ_p(s,t) + (-1)^m_L + 1d^2m_L_L∂^2m_L/∂ t^2m_Lβ_p(s,t) and V_p(s,t) = 2(RLh_Rh_LN)^-1ω_pσ^2(s,t)κ(H_m_R)κ(H_m_L).
§ SIMULATION
We conduct extensive simulation studies to evaluate the performance of the proposed estimation and inference procedures. Our method is examined from not only the bivariate but also the univariate perspective, as other competing FMEM estimation methods often consider fixed effects only over the functional domain.
The bivariate functional model is simulated as follows:
Y_i(s,t) = β_0(s,t) + X_i(s)β_1(s,t) + γ_i0(t) + z_i(s)γ_i1(t) + ϵ_i(s, t), (s, t) ∈ [0, 1]^2.
The fixed-effect covariates are generated from X_i(s) ∼ N(0, 4) and z_i(s) = 6(s-0.5)^2 + N(0,ρ^2), where ρ represents how noisy the signal of repeatedly measured visits is. For the bivariate coefficient functions, we consider two different types, as shown in Figure <ref> of the supplementary material. The first scenario (S1) presents a continuous, non-differentiable bivariate function with local zero regions (sparse), while the second scenario (S2) presents a smooth bivariate function. The random effects are simulated as γ_il(t) = a_i1ϕ_1l(t) + a_i2ϕ_2l(t), l=0,1. We use the scaled orthonormal functions
[ ϕ_1l(t); ϕ_2l(t) ] ∝ [ 1.5 - sin(2π t) - cos(2π t), sin(4π t) ]^T if l=0, and [ cos(2π t), sin(2π t) ]^T if l=1,
to capture the subject-level fluctuations. The random coefficients are generated from a_i1∼ N(0, 2σ^2_B) and a_i2∼ N(0, σ^2_B), respectively, where σ^2_B depends on the relative importance of the random effects, SNR_B. The measurement error ϵ_i(s,t) ∼ N(0, σ_ϵ^2), where σ_ϵ^2 depends on the signal-to-noise ratio SNR_ϵ. Here SNR_B is defined as the ratio of the standard deviation of the fixed-effect and random-effect surfaces, while SNR_ϵ is the ratio of the standard deviation of all linear predictors to that of the measurement errors (). We set SNR_B = SNR_ϵ = 1.
The performance of the method is evaluated from three aspects reflecting the accuracy of estimation and inference as well as computational efficiency. First, estimation error is assessed by the integrated squared error (ISE) between the estimate and the underlying truth, defined as ISE(β̂_p) = |𝒮×𝒯|^-1∫_𝒮∫_𝒯(β̂_p(s,t) - β_p(s,t))^2 ds dt, where |𝒮×𝒯| denotes the area of the entire domain. Second, for the bivariate functional slope, we compute the proportion of the domain on which the pointwise bands wrap the true surface in the sandwich form to evaluate inferential performance on fine grids. We use the empirical coverage probability of the 95% PCB, defined by |𝒮×𝒯|^-1∫_𝒮∫_𝒯1_{β_p(s,t) ∈PCB_p(s,t)}dsdt. Additionally, to measure the width of the confidence bands, we report the integrated actual width (IAW), defined as |𝒮×𝒯|^-1∫_𝒮∫_𝒯{UB_p(s,t) - LB_p(s,t)}dsdt, where UB_p(·,·) and LB_p(·,·) are pointwise estimates of the upper and lower bounds, respectively. When comparing with other methods for FMEM from the univariate perspective, we restrict the above metrics to the 𝒯 direction. Computing time for the entire estimation procedure is also reported to present the computational cost.
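For reproducibility of the design just described, a minimal R sketch of one simulation replicate is given below. The true surfaces beta0 and beta1 are user-supplied R × L matrices, sigma_B is fixed at 1 for brevity rather than calibrated to SNR_B, and all names are illustrative.

```r
simulate_once <- function(N, R, L, beta0, beta1, rho = 0.5, sigma_eps = 1) {
  s <- seq(0, 1, length.out = R); t <- seq(0, 1, length.out = L)
  phi10 <- 1.5 - sin(2 * pi * t) - cos(2 * pi * t)  # random-intercept basis (l = 0)
  phi20 <- sin(4 * pi * t)
  phi11 <- cos(2 * pi * t); phi21 <- sin(2 * pi * t)  # random-slope basis (l = 1)
  Y <- array(0, c(N, R, L))
  X <- matrix(rnorm(N * R, 0, 2), N, R)               # X_i(s) ~ N(0, 4)
  for (i in seq_len(N)) {
    a1 <- rnorm(1, 0, sqrt(2)); a2 <- rnorm(1)        # a_i1 ~ N(0, 2), a_i2 ~ N(0, 1)
    z  <- 6 * (s - 0.5)^2 + rnorm(R, 0, rho)          # noisy longitudinal covariate
    for (r in seq_len(R)) {
      Y[i, r, ] <- beta0[r, ] + X[i, r] * beta1[r, ] +
        (a1 * phi10 + a2 * phi20) + z[r] * (a1 * phi11 + a2 * phi21) +
        rnorm(L, 0, sigma_eps)
    }
  }
  list(Y = Y, X = X)
}

## ISE on the unit square: |S x T|^{-1} integral of the squared error.
ise <- function(est, truth) mean((est - truth)^2)
```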
§.§ Bivariate Comparative Analysis
We compare our proposed approach (2dFMM) with the concurrent bivariate functional regression method denoted by 2dGAM, which is developed by tensor product smooths (). The following several simulation scenarios are considered. We let sample size N ∈{50, 75, 100}, the number of functional grids L ∈{100, 150, 200}, and the number of longitudinal grids R ∈{10, 15, 20}. The baseline setting is N = 50, R = 10, and L=100, where all other sample generating parameters are fixed at their baseline values when one is changed. The noise argument of the longitudinal signal is set ρ = 0.5. A total of 100 replicates are independently simulated.
Figure <ref> shows the better performance of the proposed 2dFMM compared with 2dGAM regarding estimation accuracy in most cases under the first scenario. This is primarily attributed to the choice of the sandwich smoother, which depends on differencing matrices and can handle the non-differentiable function. When the difference between the dimensions of the longitudinal and functional domains shrinks, 2dGAM improves and slightly outperforms our method because the tensor product spline basis expansion is appropriate for symmetric grids. The coverage probability of the 95% PCB for 2dFMM approaches the nominal level, while that of 2dGAM is far below it because it ignores the four-dimensional correlation structure, leading to overly narrow confidence bands. Since the 2dGAM model estimation depends on a representation of the generalized additive model, the computing time is unsurprisingly much longer. The storage of intermediate large matrices is also problematic, as it easily runs out of memory even when the sample generating parameters are not very large.
Figure <ref> displays the results for the second scenario of the slope coefficient function, where we use tensor product smooths as the post-smoother instead. The comparable estimation performance of the two methods indicates that the smoother is sufficient to compensate for the violation of the independence assumption underlying the pointwise technique when handling continuous functions. The choice of tensor product smooths also solves the sandwich smoother's problem with symmetric numbers of longitudinal and functional grids. Despite similar estimation accuracy, our method remains preferable given its nice inferential behavior and low computational cost.
In addition, we also report the empirical coverage probabilities of the 95% pointwise and simultaneous confidence bands in different scenarios. The simultaneous confidence bands reach the nominal level. Detailed comparisons are shown in Table <ref> of the supplementary file.
§.§ Univariate Comparative Analysis
In longitudinal functional data analysis, fixed effects in the FMEM framework are often evaluated only over the functional domain. Several methods, including functional additive mixed models (FAMM), fast univariate inference (FUI), and fixed-effect inference for longitudinal functional data (FILF), all allowing between-visit correlations, are considered for comparison; FUI and FILF are designed for simpler computation (). They are prespecified to incorporate random slope covariates using longitudinal time points. The performance of FAMM is not shown because it is similar to FUI while requiring dramatically longer computing time and producing narrower confidence bands ().
All approaches are evaluated in two cases: (i) the bivariate functional slope is retained in the true model, but only the marginal effect over the functional domain is examined; (ii) the bivariate functional intercept and slope are shrunk to univariate ones in the true model, i.e. β_p(t) = |𝒮|^-1∫_𝒮β_p(s,t)ds, p=0,1. The noise argument ρ∈{0.5, 2, 6} controls the magnitude of the longitudinal correlation in the true model. A total of 100 replicates are independently simulated.
Figure <ref>(a) presents the performance of the different methods under case (i). The estimation accuracy of 2dFMM outperforms the other approaches at any strength of the noise argument because it allows two-dimensional effects. The FMEM approaches worsen as the noise argument grows, since larger noise implies nearly no longitudinal correlation structure and therefore the functional random slope model is misspecified. For confidence bands, our method is robust in terms of coverage level and actual width across all noise arguments, while FILF attains a high coverage level at the expense of wider confidence bands. On the other hand, Figure <ref>(b) reveals the disadvantage of 2dFMM in case (ii), where a strong longitudinal correlation structure and a univariate effect appear, precisely matching the prespecified FMEM framework. The relatively low estimation accuracy is due to the misrecognition of strongly correlated longitudinal random signals as fixed-effect quantities. For the same reason, it is not surprising that 2dFMM improves with heavier longitudinal noise, while the others decline due to the misspecification issue. Despite this disadvantage, our method still presents decent coverage close to the nominal level in every case, even though it is slightly conservative under unfavorable conditions. The patterns remain unchanged for larger sample generating parameters (see Figures <ref>-<ref> of the supplementary material).
Table <ref> shows the computing time of these three methods with varying sample sizes and numbers of functional grids. The advantage of 2dFMM for high-dimensional functional data is attributed to the pointwise-smoothing technique and the marginal decomposition of the covariance function, while parallel implementation can further accelerate the process. Remarkably, the computational cost of our approach is not sensitive to either parameter, which is of particular concern in the analysis of our motivating data. However, FUI slows down for a considerable number of functional grids when the sample size is at least moderate, because its estimation procedure for each pair of functional time points becomes computationally challenging for larger sample sizes. In addition, FILF uses the generalized additive model to refine the initial estimator, resulting in the same storage and cost limitations as 2dGAM.
§ APPLICATION
In this section, we apply our proposed method to two studies for illustration. The first study uses motivating accelerometer data to examine the intraday and interday dynamic associations between adolescents' physical activity and their demographic characteristics, family socioeconomic status, and physical and mental health assessments. The second study uses a public environmental dataset to assess the association between electricity demand and temperature.
To present the statistical inference for the bivariate coefficient functions, we define a new metric Î_p(s,t) = β̂_p(s,t)1_{LB_p(s,t) > 0 or UB_p(s,t) < 0} to quantify and interpret the dynamics of the associations. Heatmaps of the significance evaluation Î_p(s,t) are displayed: white regions indicate no significant effects, red indicates significantly positive effects, and blue indicates significantly negative effects, while the darkness of the colors represents the magnitude of the effects.
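This masking step is a one-liner; the sketch below assumes the estimate and band surfaces are R × L matrices from the inference procedure, with names chosen for illustration.

```r
## Significance map I_hat_p(s, t): keep the estimate where the band
## excludes zero, blank it elsewhere.
significance_map <- function(beta_hat, lower, upper) {
  sig <- (lower > 0) | (upper < 0)
  ifelse(sig, beta_hat, NA)   # NA renders as white in image()/ggplot heatmaps
}
```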
§.§ Application to Shanghai School Adolescent Physical Activity Data
In the Shanghai school adolescent study, we collected each subject's demographic information, including a binary indicator of gender, grade from 7th to 12th, annual family income level, and mother's education level. Annual family income contains 7 levels from "≤ 10k" to "≥ 300k", while mother's education level includes 8 levels from "not graduated from primary school" to "at least master degree", both of which are treated as ordinal variables. Mental health screening questionnaires were also administered during the measurement periods. Several common self-report health measurements are included: the Depression Anxiety Stress Scales (DASS), the Anticipatory and Consummatory Interpersonal Pleasure Scale (ACIPS), and the Emotion Regulation Questionnaire (ERQ), the last of which includes two metrics: cognitive reappraisal and expressive suppression. All health assessment scores are numeric, reflecting subjective psychological conditions in different aspects.
We first regress the subjects' profiles on the demographic and socioeconomic covariates in the baseline model. To avoid collinearity among the mental health outcomes, each such covariate is added to the baseline model separately. The functional response consists of a 16,191 × 1,440 dimensional matrix, where each row corresponds to "day of week" and each column corresponds to "time of day".
Figure <ref> shows the results of the baseline model, where only demographic data and socioeconomic status are incorporated. The evaluation heatmap of the estimated intercept function is consistent with the overall student physical activity patterns. It is observed that girls usually fall asleep later but wake up earlier, while boys are more vigorous in the daytime; yet it is surprising that girls are essentially more intensely active than boys on Saturday night and Sunday. Grade is an alternative characterization of age but is a stronger indication of school workload and pressure. From the heatmap, older students are markedly more sedentary most of the time, except at 6 a.m., 6 p.m., and midnight, suggesting longer studying time due to laborious tasks. As for the two socioeconomic covariates, the effect of family income level is weaker than that of mother's education level. As mother's education level increases, students get up and go to bed later, probably because of more demanding assignments from family or richer material well-being, such as commuting by private cars and more entertainment options at night. In addition, the univariate effects along the time of day for weekdays or weekends only in the baseline model are illustrated in Figure <ref> of the supplementary material.
Figure <ref> demonstrates inferential heatmaps for the self-reported health assessment results from well-known questionnaires. The associations between physical activity and scores on the subjective depression, anxiety, and stress (DASS) measures share similar patterns. Students with more severe mental health symptoms tend to be more active around midnight on weekends. Conversely, students considered symptomatic for DASS are likely reluctant to exercise and are more sedentary around 6 p.m. after school and on weekends. The Anticipatory and Consummatory Interpersonal Pleasure Scale (ACIPS) evaluates one's pleasant sensations. As expected, students with high ACIPS scores tend to be more active in the daytime, especially on weekends, revealing that happy students are typically more motivated and energetic. In addition, the emotion regulation strategies (ERQ) measure respondents' propensity to adjust their emotions, comprising cognitive reappraisal (ERQ(CR)) and expressive suppression (ERQ(ES)). For example, heavy reliance on expressive suppression leads to substantial physical activity reduction on weekends, while cognitive reappraisal mainly affects student behavior during the weekend daytime. The univariate effects of the health assessment results along the time of day for weekdays or weekends only are illustrated in Figure <ref> of the supplementary material.
§.§ Application to Electricity Demand Data
In Adelaide, Australia, summer electricity demand exhibits high volatility and a strong correlation with temperature. Several studies have examined this relationship under various temperature conditions (). The electricity demand and temperature data, accessible through the R package (), span from July 6, 1997, to March 31, 2007, providing half-hourly recordings for each day of the week. Our investigation focuses on whether electricity demand is associated with temperature and with the impact of weekends as a binary indicator under our proposed modeling framework. The dataset comprises 63 two-dimensional samples, each representing one day of the week in one year. The response variable is electricity demand, measured in megawatts, recorded at half-hourly intervals within each week of the year, resulting in 48 functional grids over a day and 52 longitudinal grids over a year.
Figure <ref> demonstrates the inferential heatmaps for intercept, temperature, and weekend. The intercept shows that electricity demand was generally higher during the mid-year weeks (Australian winter). During summer months, temperature had a strong positive effect on electricity demand, particularly from 10 a.m. to 8 p.m., suggesting a combination of factors including residential cooling needs and the population's daily activity patterns. Conversely, the mid-year weeks (Australian winter) show decreased demand with rising temperatures. This negative association may be attributed to Adelaide's pleasant winter temperatures, reducing the need for heating. Additionally, compared to weekdays, weekend corresponds to lower electricity demands throughout the entire year, especially in the morning around 6 a.m., which demonstrates the habit of waking up late on weekends. Figure <ref> presents the univariate effects of time of day and week of year separately. While they provide strong supporting evidence for bivariate effects, the interplay of two temporal directions is overlooked. For example, the trend of prolonged positive effect was observed around week 10 and week 40 at 3 p.m. only from a bivariate perspective, possibly because work-related and other activities offset the climate effect.
§ CONCLUSION
Motivated by daily activity profiles obtained by wearable device technologies, our work provides an effective method for analyzing functional data measured over a series of time points, which is common in longitudinal studies and known as repeatedly measured functional data. The analysis of this type of functional data is of increasing interest in large-scale medical health research and many other fields, such as biomedicine and environmental sciences. The development of efficient and flexible statistical tools for analyzing such data, with their ultra-high dimensionality and complex longitudinal-functional structure, is an urgent problem. In this study, we establish the two-dimensional functional mixed-effect model and efficient fixed-effect inference. We illustrate our method with analyses of daily adolescent physical activity profiles and hourly electricity demand data, exploring their associations with various covariates of interest.
The proposed fast three-stage estimation procedure substantially reduces computing costs for big data, especially with large-scale samples and functional grids. While we focus on the situation where the longitudinal sampling design is dense and regular, real data are sometimes irregularly sampled. To address this, the estimation of bivariate coefficient functions can be adjusted by averaging inside equal-size rectangular bins (). Moreover, the covariance function can be estimated by a local-linear smoothing approach and functional principal component analysis through conditional expectation (), while the remaining two-dimensional fixed-effect inference procedure is unchanged. Therefore, our method promotes the use of wearable devices in health research and has wider applications in longitudinal studies and spatial analysis.
SUPPLEMENTARY MATERIAL
Supplementary Material: The Supplementary Materials contain results of additional simulation studies, additional analyses of Shanghai School Adolescent Physical Activity Data study, and the proof of the propositions.
R-package for method and simulations: All code for model implementation and simulation is available at <https://github.com/Cheng-0621/2dFMM>.
arXiv: http://arxiv.org/abs/2409.03591v1 (published 2024-09-05)
Title: Exact anomalous mobility edges in one-dimensional non-Hermitian quasicrystals
Authors: Xiang-Ping Jiang, Weilei Zeng, Yayun Hu, Lei Pan
Categories: cond-mat.dis-nn (primary)
Zhejiang Lab, Hangzhou 311121, China
Zhejiang Lab, Hangzhou 311121, China
[email protected]
Zhejiang Lab, Hangzhou 311121, China
[email protected]
School of Physics, Nankai University, Tianjin 300071, China
§ ABSTRACT
Recent research has made significant progress in understanding localization transitions and mobility edges (MEs) that separate extended and localized states in non-Hermitian (NH) quasicrystals. Here we focus on studying critical states and anomalous MEs, which identify the boundaries between critical and localized states within two distinct NH quasiperiodic models. Specifically, the first model is a quasiperiodic mosaic lattice with both nonreciprocal hopping term and on-site potential. In contrast, the second model features an unbounded quasiperiodic on-site potential and nonreciprocal hopping. Using Avila's global theory, we analytically derive the Lyapunov exponent and exact anomalous MEs. To confirm the emergence of the robust critical states in both models, we conduct a numerical multifractal analysis of the wave functions and spectrum analysis of level spacing. Furthermore, we investigate the transition between real and complex spectra and the topological origins of the anomalous MEs. Our results may shed light on exploring the critical states and anomalous MEs in NH quasiperiodic systems.
Exact anomalous mobility edges in one-dimensional non-Hermitian quasicrystals
Lei Pan
September 9, 2024
§ INTRODUCTION
Anderson localization is a fundamental phenomenon in which quantum wavefunctions become exponentially localized in the presence of random disorder, without the tendency to diffuse <cit.>. In one- and two-dimensional quenched disorder systems, one-parameter scaling theory predicts that all noninteracting eigenstates become localized even for arbitrarily infinitesimal disorder strength <cit.>. However, in three-dimensional disorder systems, it has been demonstrated that localized and extended states can coexist at finite levels of disorder, with a critical energy known as the mobility edge (ME) acting as a boundary between these two phases. Comparatively, one-dimensional (1D) quasiperiodic systems can exhibit unique behaviors and undergo localization transitions. A prototypical example is the Aubry-André-Harper (AAH) model <cit.>, which undergoes a localization transition when the strength of the quasiperiodic potential exceeds a critical threshold. The AAH model is renowned for its exact solvability, offering significant benefits for obtaining exact results due to the self-duality between real and momentum spaces <cit.>. Through the investigation of various extensions of the AAH model, experimental and theoretical researchers have discovered evidence for the existence of energy-dependent MEs in 1D generalized AAH models <cit.>.
In 1D quasiperiodic systems, three primary quantum states have been observed: extended, localized, and critical states. Critical states are extended yet non-ergodic, showing local scale invariance and possessing fundamentally distinct properties in terms of spectral statistics, multifractal characteristics, and dynamical evolution compared to localized and extended states. Conventionally, MEs have been employed to distinguish between localized and extended states. However, recent advances in research have introduced a novel type of MEs referred to as anomalous mobility edges (AMEs) <cit.>, which serve as boundaries between critical states and localized states. These discoveries and analyses of AMEs have significantly advanced our comprehension of critical states and the localization phenomena in quasiperiodic systems <cit.>.
In recent years, there has been escalating interest in the examination of Anderson localization and MEs in non-Hermitian (NH) disordered and quasiperiodic systems <cit.>. Typically, NH systems are constructed by incorporating nonreciprocal hopping processes or gain and loss terms into their Hamiltonians. For example, for NH extensions of the AAH model obtained by complexifying the potential phase, it has been demonstrated that the localization transitions exhibit a topological nature and are characterized by winding numbers of the energy spectrum. Meanwhile, the concept of the ME has also been extended to NH systems. It has been found that the ME can be used to predict the boundary of extended states and the transition from a real to a complex energy spectrum in NH quasiperiodic systems, thereby introducing a topological signature of MEs <cit.>. Despite extensive studies on the effects of non-Hermiticity on localization transitions and traditional MEs in various contexts, the investigation of critical states and AMEs alongside localization transitions in NH quasiperiodic models remains lacking. It remains unclear whether critical states and AMEs exist stably in NH quasiperiodic lattices and, if so, how to characterize the AMEs and whether any correlation exists between the critical-localized state transition and the real-complex spectrum transition.
In this work, we introduce two distinct nonreciprocal NH quasiperiodic models to address the issues above. We endeavor to investigate robust critical states and exact AMEs by employing Avila's global theory, which accurately characterizes critical regions and AMEs. By analyzing the spatial distribution of wave functions and level spacings of the eigenvalues, we discover that an increase in quasiperiodic potential strength results in a critical-localized transition. This localization transition co-occurs with the real-complex spectrum transition, indicating that a winding number can describe this topological transition. Consequently, the emergence of AMEs separating critical and localized states in our models is indeed topological.
The structure of this paper is as follows. In Sec. <ref>, we provide a streamlined introduction to the two NH quasiperiodic models. In Sec. <ref>, we determine the AMEs of model I using Avila's global theory and investigate the mechanism that generates the existence of critical states. In Sec. <ref>, we determine the AMEs of model II. In Sec. <ref>, we show the real-complex spectrum transition and the topological origin of AMEs. We make a summary in Sec. <ref>.
§ THE MODEL HAMILTONIAN
We introduce two NH quasiperiodic models that will be adopted to investigate critical states and AMEs in this work. These two models are pictorially shown in Fig. <ref>. The Hamiltonian of model I [Fig. <ref>(a)] is described by
H_I=∑_n(t_n^+a_n^†a_n+1+t_n^-a_n+1^†a_n)+∑_nV_na_n^†a_n,
where a_n^† (a_n) corresponds to the spinless fermion creation (annihilation) operator at site n. In Eq. (<ref>), the critical components involve the hopping parameter t_n and the on-site potential V_n, both of which exhibit quasiperiodic and mosaic characteristics. The hopping coefficient t_n is defined as
t_n^±={ λ e^± g, n ≡ 1 (mod 2),
2Vcos(2πα n+ϕ), n ≡ 0 (mod 2),
and the on-site potential V_n is considered as
V_n={ 2Vcos[2πα(n-1)+ϕ], n ≡ 1 (mod 2),
2Vcos(2πα n+ϕ), n ≡ 0 (mod 2).
Here λ, g, and ϕ denote the hopping coefficient, nonreciprocal strength, and phase offset, respectively. For convenience, we set the on-site potential amplitude V=1 as the unit of energy.
The Hamiltonian of model II, as shown in Fig. <ref>(b), can be written as
H_II=t∑_n(e^ga_n^†a_n+1+e^-ga_n+1^†a_n)+∑_nλ_na_n^†a_n,
where t=1 is the hopping strength and λ_n is the quasiperiodic potential, which is given by
λ_n=2λcos(2πα n+ϕ)/1-bcos(2πα n+ϕ).
Here λ, g, and b represent the strength of the on-site potential, the nonreciprocal strength, and a control parameter, respectively. When the NH parameter g=0, the Hamiltonian (<ref>) reduces to the Ganeshan-Pixley-Das Sarma (GPD) model <cit.>, which can host energy-dependent MEs for b< 1 and AMEs for b⩾ 1 <cit.>. The current study examines the model's critical states and AMEs for g≠ 0 and b⩾ 1.
In this work, for convenience and without affecting generality, we take ϕ=0 and α=lim_n→∞(F_n-1/F_n)=(√(5)-1)/2, with F_n being the nth Fibonacci number. For a finite system, one chooses the system size L=F_n and α=F_n-1/F_n to impose the periodic boundary condition (PBC) for numerical diagonalization
of the tight-binding models in Eq. (<ref>) and Eq. (<ref>).
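As a concrete illustration of this setup, the following R sketch assembles model I's Hamiltonian as a dense matrix under PBC and diagonalizes it. The parity-based mosaic rule follows Eqs. (<ref>)-(<ref>) as written above; the function name and defaults are illustrative.

```r
build_H1 <- function(L, lam, g, V = 1, phi = 0) {
  ## Rational approximant alpha = F_{n-1}/F_n for a Fibonacci size L = F_n.
  Fib <- c(1, 1)
  while (tail(Fib, 1) < L) Fib <- c(Fib, sum(tail(Fib, 2)))
  alpha <- Fib[length(Fib) - 1] / Fib[length(Fib)]
  n <- 1:L
  cosn <- 2 * V * cos(2 * pi * alpha * n + phi)
  tn_plus  <- ifelse(n %% 2 == 1, lam * exp(g),  cosn)   # t_n^+ on bond (n, n+1)
  tn_minus <- ifelse(n %% 2 == 1, lam * exp(-g), cosn)   # t_n^- on bond (n+1, n)
  Vn <- ifelse(n %% 2 == 1, 2 * V * cos(2 * pi * alpha * (n - 1) + phi), cosn)
  H <- diag(Vn)
  for (j in n) {
    k <- if (j == L) 1 else j + 1   # periodic boundary condition
    H[j, k] <- tn_plus[j]
    H[k, j] <- tn_minus[j]
  }
  H
}

# spec <- eigen(build_H1(L = 610, lam = 2, g = 0.5))  # complex spectrum in general
```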
§ EXACT ANOMALOUS MOBILITY EDGES IN A QUASIPERIODIC MOSAIC MODEL
In this section, we study model I, which features mosaic AAH modulation of both the hopping terms and the on-site potentials. To comprehend the localization transition and the AMEs, we perform a similarity transformation of the Hamiltonian (<ref>) into the Hermitian Hamiltonian via H_I^'=S^-1_IH_IS_I, where the matrix S_I=diag{1,1,r,r,...,r^L/2,r^L/2} and r=e^-g. Let ψ^' denote the eigenstate of the transformed Hamiltonian H_I^', and ψ the eigenstate of the original Hamiltonian H_I; they satisfy ψ = S^-1_Iψ^'. Consequently, under the similarity transformation, for an extended eigenstate of H_I^', S^-1_I localizes the wave function exponentially on the boundary, giving rise to the non-Hermitian skin effect <cit.>. Two localization lengths emerge on either side of the localized center for a localized state of the Hamiltonian H_I. The AMEs and critical states of H_I^' can be analytically derived by calculating the Lyapunov exponent (LE) using Avila's global theory <cit.>. Denote by T_n(ϕ) the transfer matrix of the Jacobi operator; it can be expressed as:
T_2(ϕ)=1/λ M[ E-M -M; λ 0 ][ E-M -λ; M 0 ],
where M=2cos(2πα+ϕ). Thus, the LE for an eigenstate with energy E can be calculated via
γ_ϵ(E)=lim_n→∞1/2π n∫lnT_n(ϕ+iϵ)dϕ,
where · represents the norm of the matrix and ϵ is imaginary part of complexified ϕ, respectively. By a standard complexification procedure and using Avila's global theory, the LE is given by <cit.>
γ_0(E)=max{1/2ln|(|E|+√(E^2-λ^2))/λ|,0}.
For the Hamiltonian H_I, we ultimately derive the LEs γ(E)=max{1/2ln|(|E|+√(E^2-λ^2))/λ|± g,0}. Let γ(E)=0, and we would have exact energy-dependent AMEs separating localized states and critical states, as indicated by
|Re(E_c)|=λcosh(g).
If |Re(E)|>λcosh(g), then γ(E)>0, the eigenenergy belongs to the point spectrum and the corresponding eigenstate is localized. Conversely, if |Re(E)|<λcosh(g), then γ(E)=0, the eigenstates can be extended or critical states, with the corresponding eigenenergy belonging to the absolutely continuous spectrum or singular continuous spectrum <cit.>, respectively.
It is widely accepted that there are two primary methods for eliminating the presence of an absolutely continuous spectrum (extended states): one involves introducing an unbounded spectrum <cit.>, and the other involves introducing zeros in the hopping terms <cit.>. In our model I, there exists a sequence of sites {2n} such that t_2n→0 in the thermodynamic limit, thereby excluding extended states, so the eigenstates associated with |Re(E)|≤λcosh(g) are all critical. In summary, the vanishing LEs, together with the exclusion of extended states by the hopping zeros, unambiguously determine the critical region for |Re(E)|≤λcosh(g), while positive LEs delineate the localized region for |Re(E)|>λcosh(g). Therefore, Eq.(<ref>) signifies the critical energies separating localized and critical states, manifesting the AMEs.
To numerically verify the analytical results we obtained, we use the fractal dimension (FD) and energy spectrum statistics to identify the extended, localized, and critical states <cit.>. For an arbitrary given m-th eigenstate |Ψ_m⟩=∑_n=1^Lψ_m,na_n^†|0⟩, the inverse participation ratio (IPR) is defined as IPR=∑_j|ψ_m,j|^4. Consequently, the FD is Γ=-lim_L→∞ln(IPR)/ln(L). In the thermodynamic limit, Γ approaches 1 for extended states and 0 for localized states, whereas 0<Γ<1 for critical states. Figure <ref>(a) illustrates Γ as a function of λ for various eigenvalues Re(E). The green solid lines, originating from the band center, represent the AMEs |Re(E_c)|=λcosh(g), across which Γ varies from approximately 0.5 to 0.1, highlighting the critical-to-localization transition predicted by the analytic results. We further present the spatial distributions of three typical eigenstates in Fig. <ref>(c), where the eigenstates corresponding to real eigenvalues Re(E_10)<-Re(E_c) or Re(E_2500)>Re(E_c) are localized, whereas the eigenstate with real eigenvalue -Re(E_c)<Re(E_1000)<Re(E_c) is critical. Notably, in Fig. <ref>(b), we fix the parameters λ=2.0, g=0.5 and depict Γ as a function of the corresponding eigenvalues Re(E) for various system sizes L. The green dashed lines in the figure represent the AMEs Re(E_c) ≃± 2.26. One can observe in Fig. <ref>(b) that Γ tends to 0 for all eigenstates in the energy zones with |Re(E)|> 2.26 as the system size increases, suggesting that these eigenstates are localized. In contrast, in the energy zone with |Re(E)|≤ 2.26, Γ is nearly independent of the system size and differs significantly from 0 and 1, approaching a magnitude of 0.5, indicating that these eigenstates are critical. A more meticulous finite-size scaling of the mean fractal dimension (MFD) can be found in Figs. <ref>(a) and (b), where it is shown that the MFD of the critical zone converges to a finite value, whereas the MFD of the localized zone tends to 0 as the system size grows. In Fig. <ref>(c), we also plot the LEs of H_I for different parameters λ.
To more clearly distinguish between extended, critical, and localized states, we define the even-odd (odd-even) level spacings of the eigenvalues <cit.> as δ_n^e-o=Re(E_2n)-Re(E_2n-1) (δ_n^o-e=Re(E_2n+1)-Re(E_2n)), where Re(E_2n) and Re(E_2n-1) denote the even and odd eigenenergies in ascending order of the real eigenenergy spectrum, respectively. In the extended region, the eigenenergy spectrum of the system is nearly doubly degenerate, leading to the vanishing of δ_n^e-o; consequently, a significant gap exists between δ_n^e-o and δ_n^o-e. In the localized region, δ_n^e-o and δ_n^o-e are almost the same and the gap disappears. In the critical region, δ_n^e-o and δ_n^o-e exhibit a scattered distribution, distinct from the extended and localized phases. As depicted in Fig. <ref>(d), our numerical results reveal that the central eigenvalues correspond to critical states, while the energy spectra at the two boundaries correspond to localized states.
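For completeness, both diagnostics are simple to compute once a dense matrix representation of either model is available. The following Python sketch is our illustration (not taken from the original work); it accepts any dense, possibly non-Hermitian Hamiltonian H and assumes an even system size L in the spacing slices.

```python
import numpy as np

def fractal_dimensions(H):
    """FD Gamma = -ln(IPR)/ln(L) for each eigenstate of a dense (non-Hermitian) H."""
    E, psi = np.linalg.eig(H)
    psi = psi / np.linalg.norm(psi, axis=0)        # normalize each eigenvector
    ipr = np.sum(np.abs(psi)**4, axis=0)
    return E, -np.log(ipr) / np.log(H.shape[0])

def level_spacings(E):
    """Even-odd and odd-even spacings of the real parts, sorted ascending (L even)."""
    Er = np.sort(E.real)
    delta_eo = Er[1::2] - Er[0::2]                 # Re(E_2n) - Re(E_2n-1)
    delta_oe = Er[2::2] - Er[1:-1:2]               # Re(E_2n+1) - Re(E_2n)
    return delta_eo, delta_oe
```

Applied to Fibonacci-approximant realizations of the models under PBC, these routines reproduce the qualitative behavior of Γ and of the spacing statistics described above.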
§ EXACT ANOMALOUS MOBILITY EDGES IN AN UNBOUNDED QUASIPERIODIC MODEL
In this section, we investigate the model II Hamiltonian (<ref>), which exhibits nonreciprocal hopping and the GPD potential (<ref>) with b⩾ 1. The LE can characterize the localization properties of the eigenstates. We present the transfer matrix method <cit.> and its relation to the LE. Initially, we transform the Hamiltonian (<ref>) into the Hermitian Hamiltonian using a similarity transformation H_II^'=S^-1_IIH_IIS_II. Then, starting from the eigenstate of the transformed Hamiltonian, we derive the LE of the original Hamiltonian. The similarity matrix S_II=diag{1,r,r^2,...,r^L} is defined with r=e^-g. Let ψ^' denote the eigenstate of the transformed Hamiltonian H_II^'; since ψ is the eigenstate of the original Hamiltonian H_II, it follows that ψ = S^-1_IIψ^'. Assuming the system to be a half-infinite lattice with left-hand end sites n=0 and n=1, the LE of H_II^' can be determined using the transfer matrix method. For instance, by starting with ψ^'(0) and ψ^'(1) of the left-hand end sites, the wave function can be derived through the relation
Ψ^'(n)=T(n)T(n-1)...T(2)T(1)Ψ^'(0)
where matrix
T(n)≡([ E- 2λcos(2πα n+ϕ)/1-bcos(2πα n+ϕ) -1; 1 0; ]).
and
Ψ^'(n)≡([ ψ^'(n+1); ψ^'(n); ]).
Viewing the aforementioned equation as an evolution equation of a dynamical system, ψ^'(0) and ψ^'(1) act as the initial conditions. Given a real number E, as n increases, one may assume that the wave function grows approximately according to an exponential law, i.e., ψ^'(n)∼ e^γ^'(E) n as n→∞, where γ^'(E)≥0 is the LE. If the parameter E is not an eigenenergy of H_II^', the LE is positive, γ^'(E)>0. Conversely, if the parameter E is an eigenenergy of H_II^', the LE can be zero or positive.
For extended or critical states, the LE γ^'(E)≡0. Conversely, for localized states, the LE γ^'(E)>0. Therefore, the LE of H_II^' can be expressed as <cit.>
γ^'(E)=lim_L →∞ln(|Ψ^'(L)|/|Ψ^'(0)|)/L
=lim_L→∞ln(|T(L)T(L-1)...T(2)T(1)Ψ^'(0)|/|Ψ^'(0)|)/L
where L is the system size and |Ψ^'(n)|=√(|ψ^'(n+1)|^2+|ψ^'(n)|^2). In accordance with Refs. <cit.>, we complexify the phase ϕ→ϕ+iϵ and take advantage of the ergodicity of the map ϕ→ 2πα n+ϕ. Consequently, we can express the LE as an integral over the phase ϕ as follows:
γ^'_ϵ(E)=lim_n→∞1/2π n∫lnT_n(ϕ+iϵ)dϕ,
where · signifies the norm of the matrix, and ϵ represents the imaginary component of the complexified ϕ. Utilizing a standard complexification procedure and incorporating Avila's global theory, the LE is derived as
γ^'_0(E)=max{ln| |bE+2λ|+√((bE+2λ)^2-4b^2)/2b |,0}.
As a result, for the Hamiltonian H_II and thanks to the similarity transformation ψ = S^-1_IIψ^', we ultimately determine the LEs γ(E)=max{γ^'_0(E)± g,0}. Upon setting γ(E)=0, we would have exact energy-dependent AMEs that separate localized states and critical states, yielding
Re(E_c)=[± 2bcosh(g)-2λ]/b.
If Re(E)>[2bcosh(g)-2λ]/b or Re(E)<[-2bcosh(g)-2λ]/b, then γ(E)>0, the eigenenergy belongs to the point spectrum, and the corresponding eigenstate is localized. If [-2bcosh(g)-2λ]/b<Re(E)<[2bcosh(g)-2λ]/b, then γ(E)=0, the eigenstates can either be extended or critical states, and the corresponding eigenenergies belong to the absolutely continuous spectrum or the singular continuous spectrum, respectively. It is known that, for our model II, the Hamiltonian H_II has an unbounded spectrum, and the eigenstates associated with γ(E)=0 are all critical states. Thus, Eq.(<ref>) marks the critical energies separating localized states and critical states, manifesting AMEs.
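As a practical cross-check of the analytic LE, γ^'_0(E) can be evaluated directly from the transfer matrices above. The sketch below is ours (parameter values are illustrative): it accumulates the logarithmic growth rate of Ψ^'(n) with per-step renormalization; the localized zone then corresponds to γ^'_0(E)>g, and the boundary γ^'_0(E)=g reproduces the AME energies.

```python
import numpy as np

def lyapunov_model_II(E, lam, b, g, L=200000, alpha=(np.sqrt(5) - 1)/2, phi=0.0):
    """Transfer-matrix estimate of gamma'_0(E) for model II (our sketch).

    Iterates Psi'(n) = T(n) Psi'(n-1) with Psi'(n) = (psi'(n+1), psi'(n)),
    renormalizing every step; the GPD potential is unbounded for b >= 1,
    so isolated large factors are physical and simply enter the average."""
    psi = np.array([1.0, 0.0])          # (psi'(1), psi'(0))
    log_growth = 0.0
    for n in range(1, L + 1):
        c = np.cos(2*np.pi*alpha*n + phi)
        lam_n = 2*lam*c / (1 - b*c)     # GPD on-site potential
        psi = np.array([(E - lam_n)*psi[0] - psi[1], psi[0]])
        norm = np.hypot(psi[0], psi[1])
        log_growth += np.log(norm)
        psi /= norm
    return log_growth / L               # -> gamma'_0(E) as L -> infinity

# localized iff gamma'_0(E) > g; gamma'_0(E) = g reproduces the AME energies
print(lyapunov_model_II(E=1.0, lam=2.0, b=2.0, g=0.5))
```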
To validate the analytical outcomes we have derived, we perform exact diagonalization of H_II under PBC and employ the FD and energy spectrum statistics to distinguish between critical and localized states. As illustrated in Fig. <ref>(a), we display the FD Γ as a function of λ for various eigenvalues Re(E) at the parameter g=0.5. The green solid lines represent the AMEs of Eq.(<ref>). The magnitude of Γ between the two lines is approximately 0.5, signifying critical zones, whereas the magnitude of Γ outside the two lines is close to 0, denoting localized zones. Subsequently, in Fig. <ref>(b), we fix the parameters λ=2.0 and g=0.5 and present Γ as a function of the corresponding eigenvalues Re(E) for different system sizes L. The green dashed lines in the figure represent the AMEs Re(E_c1) ≃ -4.26 and Re(E_c2) ≃ 0.26. One can observe in Fig. <ref>(b) that Γ tends to 0 for all eigenstates in the energy zones with Re(E)<Re(E_c1) or Re(E)>Re(E_c2) as the system size increases, suggesting that these eigenstates are localized. In contrast, in the energy zone with Re(E_c1)<Re(E)<Re(E_c2), Γ≃ 0.5, far from both 0 and 1 and nearly independent of the system size, indicating that these eigenstates are critical. We further present the spatial distributions of several typical eigenstates in Fig. <ref>(c), where the eigenstate of a real eigenvalue Re(E_100)<Re(E_c1) or Re(E_2000)>Re(E_c2) is localized, whereas the eigenstate of a real eigenvalue Re(E_c1)<Re(E_1000)<Re(E_c2) is critical. Further, a finite-size scaling analysis of the MFD for various parameters λ can be found in Figs. <ref>(a) and (b). We observe that the MFD of the critical zone approaches a finite value of 0.25, while that of the localized zone tends to 0 as the system size increases. In Fig. <ref>(c), we also plot the LEs of H_II for different parameters λ, and the numerical results align with the analytical LE γ(E). Finally, considering the even-odd (odd-even) level spacings of the eigenvalues as in the previous section, we define δ_n^e-o=Re(E_2n)-Re(E_2n-1) (δ_n^o-e=Re(E_2n+1)-Re(E_2n)), where Re(E_2n) and Re(E_2n-1) denote the even and odd eigenenergies in ascending order of the real eigenenergy spectrum, respectively. It is known that for localized states, δ_n^e-o and δ_n^o-e are almost the same, and the gap no longer exists. For the critical states, δ_n^e-o and δ_n^o-e have a scattered distribution. As depicted in Fig. <ref>(d), for the system size L=2584 and the parameters λ=2 and g=0.5, our numerical results indicate that the central eigenvalues are critical states, while the energy spectra at the two boundaries are localized states.
§ TOPOLOGICAL ORIGIN OF NON-HERMITIAN ANOMALOUS MOBILITY EDGES
The emergence of critical states and AMEs in the investigated models reveals a universal underlying mechanism, rooted in the zeros of the hopping coefficients in the thermodynamic limit or in the presence of unbounded potentials within the Hamiltonian, both of which facilitate the existence of critical states. This mechanism applies not only to Hermitian systems but also to non-Hermitian systems, such as our models. In this section, we explore the real-complex spectrum transition and the topological origin of the AMEs in our two NH quasiperiodic models. Through numerical diagonalization of the Hamiltonians (<ref>) and (<ref>) with the specific parameters λ=2.0 and g=0.5 under PBC, we can obtain insight into this transition. The numerical results, depicted in Figs. <ref>(a) and (b), indicate that the FD Γ of real energies is nearly 0, suggesting localization of the corresponding eigenstates. Conversely, for complex energies, the FD Γ approaches 0.5, indicating that the corresponding eigenstates are critical. These findings suggest a localization-critical transition that co-occurs with the real-complex spectrum transition. A winding number can describe this topological transition. When the phase factor ϕ of the potential in our NH models is varied continuously, the winding number can be defined as <cit.>
w(E_B)=lim_L →∞1/2 π i∫_0^2 π d ϕ∂_ϕ ln det{ H(ϕ) -E_B },
which measures the change of the spectrum and the topological transition for the base energy E_B as ϕ is varied continuously from 0 to 2π. In Fig. <ref>(c), we set the base energy in the middle of the energy spectrum, E_B=E_mid; the winding number is then w=1/2 when the AMEs emerge. Note that for fixed g=0.5, model I has AMEs for all nonzero quasiperiodic potential strengths λ, and thus the system is always in a topological AME phase with coexisting localized and critical states. However, for the numerical results of model II shown in Fig. <ref>(d), the winding number changes from 0 to 1/2 and then back to 0 as λ varies from -10 to 10. This observation confirms a topological transition from a trivial localized phase to a topological AME phase with changing λ. Based on the above numerical results and discussions, we conclude that the emergence of such AMEs in our models is topological, i.e., the energies of localized and critical states exhibit distinct topological structures in the complex energy plane. This is similar to the NH topological ME separating localized and extended states in the complex energy plane as a result of NH terms in quasicrystals.
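To illustrate how this winding number is evaluated in practice, the following sketch (ours; the system size, Fibonacci approximant, and ϕ grid are illustrative) builds a dense PBC realization of model II and reads off w(E_B) from the accumulated phase of det[H(ϕ)-E_B] over one period of ϕ.

```python
import numpy as np

def H_model_II(L, lam, b, g, phi, t=1.0, alpha=144/233):
    """Dense model-II Hamiltonian under PBC; alpha = F_{n-1}/F_n with L = F_n = 233."""
    n = np.arange(1, L + 1)
    c = np.cos(2*np.pi*alpha*n + phi)
    H = np.diag(2*lam*c / (1 - b*c)).astype(complex)
    idx = np.arange(L)
    H[idx, (idx + 1) % L] += t*np.exp(g)           # nonreciprocal hopping to the right
    H[(idx + 1) % L, idx] += t*np.exp(-g)          # and to the left
    return H

def winding_number(E_B, lam, b, g, L=233, n_phi=401):
    """w(E_B): winding of arg det[H(phi) - E_B] over one period of phi."""
    phases = np.empty(n_phi)
    for k, phi in enumerate(np.linspace(0.0, 2*np.pi, n_phi)):
        sign, _ = np.linalg.slogdet(H_model_II(L, lam, b, g, phi) - E_B*np.eye(L))
        phases[k] = np.angle(sign)                 # phase of the determinant
    unwrapped = np.unwrap(phases)
    return (unwrapped[-1] - unwrapped[0]) / (2*np.pi)
```

In practice E_B is taken in the middle of the energy spectrum (E_mid), as in the figures discussed above.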
§ CONCLUSION
In summary, we have studied the critical states and AMEs of 1D NH quasicrystals with nonreciprocal hopping. We have identified two distinct mechanisms that lead to the emergence of robust critical states in the two NH models investigated: these robust critical states and AMEs are attributed to the zeros of the hopping coefficients in the thermodynamic limit and to the presence of unbounded quasiperiodic potentials. The AMEs and LEs can be analytically obtained for the proposed NH models using Avila's global theory. To confirm the emergence of robust critical states in both models, we perform a finite-size analysis of the MFD and of the level spacings of the eigenvalues. Furthermore, we demonstrate the localization-critical transition that co-occurs with the real-complex spectrum transition and the topological origin of the AMEs in our NH quasiperiodic models.
Our work contributes to developing critical states and AMEs for 1D NH quasicrystals. In future research, it may be valuable to extend the concept of AME to higher-dimensional systems <cit.> or other interacting systems <cit.>. Additionally, it would be intriguing to explore transport phenomena in NH quasiperiodic systems with critical states or AMEs.
§ ACKNOWLEDGMENTS
This work is supported by the China Postdoctoral Science Foundation (No. 2023M743267) and the National Natural Science Foundation of China (Grant No. 12304290, No. 12204432, and No. 62301505). LP also acknowledges support from the Fundamental Research Funds for the Central Universities.
|
http://arxiv.org/abs/2409.03549v1 | 20240905141215 | Reduced-order modelling based on Koopman operator theory | [
"Diana A. Bistrian",
"Gabriel Dimitriu",
"Ionel M. Navon"
] | math.NA | [
"math.NA",
"cs.NA",
"math.DS",
"93A30, 70K75, 65C20"
] |
Reduced-order modelling based on Koopman operator theory
(D.A. Bistrian) University Politehnica of TimisoaraDepartment of Electrical Engineering and Industrial InformaticsRevolutiei Nr.5, 331128 Hunedoara, Romania
Corresponding author: [email protected]
(G. Dimitriu) University of Medicine and Pharmacy “Grigore T. Popa”Department of Mathematics and InformaticsUniversitatii 16, 700115 Iasi, Romania
[email protected]
(I.M. Navon) Florida State UniversityDepartment of Scientific ComputingDirac Science Library Building, Tallahassee, FL 32306-4120, USA
[email protected]
2010 Mathematics Subject Classification: 93A30, 70K75, 65C20.
§ ABSTRACT
The present study focuses on a subject of significant interest in fluid dynamics: the identification of a model with decreased computational complexity from numerical code output using Koopman operator theory. A reduced-order modelling method that incorporates a novel strategy for identifying the most impactful Koopman modes was used to numerically approximate the Koopman composition operator.
Diana A. Bistrian, Gabriel Dimitriu, Ionel M. Navon
5 September 2024
=======================================================
§ INTRODUCTION
Despite the fact that complex nonlinear dynamical systems can appear challenging to understand, the existence of similar flow characteristics indicates that a number of different dynamic phenomena are likely governed by the same fundamental processes. Using the modal decomposition method <cit.> is an effective strategy to find a helpful low-dimensional reference frame for capturing prominent dynamical processes. Model order reduction approaches based on modal decomposition have made significant improvements in the recent decade <cit.>.
The choice of an appropriate reduced order basis to describe the system dynamics in relation to the system order reduction is the main topic of this study.
The Koopman operator theory <cit.> provides the mathematical foundation for determining the reduced-order model of a complex nonlinear dynamical system.
There are various advantages to transposing the nonlinear dynamics into a reduced-order model, which is a linear model by construction. These include the ability to identify the dominant frequencies, the production of a mathematical reduced-order model with higher fidelity, and a notable increase in computing speed.
The authors have made a substantial contribution to the advancement of modal decomposition techniques <cit.> and the introduction of new numerical algorithms for the modeling of nonlinear dynamical systems with lower computational complexity <cit.> in the past years.
The present work discusses the mathematical aspects of the modal decomposition technique based on Koopman operator theory <cit.>, with application to the Saint-Venant nonlinear dynamical system model.
The remainder of the article is organized as follows.
Section 2 discusses the mathematical considerations on the Koopman operator theory.
Section 3 presents the numerical method developed for reduced-order modelling.
Section 4 presents the test problem, consisting of the nonlinear Saint-Venant equations dynamical model. A qualitative study of the reduced-order model is conducted in the case of two experiments.
A summary and conclusions are given in Section 5.
§ MATHEMATICAL CONSIDERATIONS ON THE KOOPMAN OPERATOR
Let Ω⊂ℝ^n be a compact and non-empty space.
Let
L^2( Ω) = {ψ :Ω→ℝ| ∫_Ω| ψ|^2dΩ < ∞}
be a Hilbert space of square integrable functions on Ω, endowed with the inner product ⟨ψ _i,ψ _j⟩ = ∫_Ωψ _iψ _jdΩ and the norm ψ = √(⟨ψ ,ψ⟩) for ψ∈L^2( Ω).
Let us consider a nonautonomous continuous-time dynamical system on domain Ω⊂ℝ^n governed by the nonlinear ordinary differential equation
{[ dy/dt( x,t) = f( y,u,t), t ∈ℝ_≥ 0; y( x,t_0) = y_0( x) ].
where the map f is locally Lipschitz continuous, x∈ℝ^n is the Cartesian coordinate vector, and u ∈ℝ^m is the input vector, with n ≫ m. Forward invariance of the set Ω⊂ℝ^n w.r.t. the system dynamics (<ref>) is assumed, i.e., y( x,t) ∈Ω for all t ≥ 0 and any initial condition y_0.
Dimensionality reduction in reduced-order modelling.
The principle of modal reduction aims to finding an approximation solution of the form
{[ y( x,t) ≈∑_j = 1^p a_j( t )ψ _j( x) , t ∈ℝ_≥ 0; da_j( t )/dt = g( t ), a_j( t_0) = a_j^0 ].
expecting that this approximation becomes exact as p →∞, assuring preservation of dynamic stability, computational stability, and a small global approximation error compared to the true solution of (<ref>).
Let us consider a scalar observable function φ :Ω→ℂ, u = φ( y ), y ∈Ω, t ∈ℝ_≥ 0 with a smooth and Lipschitz continuous flow F^t:Ω→Ω:
F^t( y_0) = y_0 + ∫_t_0^t_0 + tf( y( τ))dτ,
which is forward-complete, i.e. the flow F^t( y ) has a unique solution on ℝ_≥ 0 from any initial condition y_0.
The system class that fits the aforementioned assumption is quite vast and encompasses a wide range of physical systems, including whirling flows, shallow water flows, convection-diffusion processes, and so on.
The Koopman operator describes the propagation of state space observables over time. An observable might be any sort of system measurement or the dynamical reaction of the system. The recurrence of a fixed time-t flow map, i.e. sequential compositions of the map with itself, is assumed to describe the dynamical development.
Koopman operator.
For dynamical systems of type (<ref>), the semigroup of Koopman operators {𝒦^t}_t ∈ℝ_≥ 0:Ω→Ω acts on scalar observable functions φ :Ω→ℂ by composition with the flow semigroup {F^t}_t ∈ℝ_≥ 0 of the vector field f:
𝒦^tφ = φ( F^t).
The Koopman operator is also known as the composition operator.
Linearity of the Koopman operator.
Consider the Koopman operator 𝒦^t, two observables φ _1,φ _2∈Ω, and the scalars α, β∈ℝ. Using (<ref>) it follows that:
𝒦^t( αφ _1 + βφ _2) = ( αφ _1 + βφ _2)( F^t) = αφ _1( F^t) + βφ _2( F^t) = α𝒦^tφ _1 + β𝒦^tφ _2.
Infinitesimal generator.
Let us assume that there is a generator 𝒢_𝒦:ℱ→Ω, ℱ being the domain of the generator and Ω the Banach space of observables. The operator 𝒢_𝒦 stands as the infinitesimal generator of the time-t indexed semigroup of Koopman operators {𝒦^t}_t ∈ℝ_≥ 0, i.e.
𝒢_𝒦φ = lim_t ↘ 0𝒦^tφ - φ/t = dφ/dt.
Koopman eigenfunction.
An observable ϕ∈Ω is called a Koopman eigenfunction if it satisfies the relation:
𝒢_𝒦ϕ( y ) = dϕ/dt( y ) = sϕ( y ),
associated with the complex eigenvalue s ∈ℂ.
Koopman mode.
Let ϕ _i∈Ω be an eigenfunction for the Koopman operator, corresponding to eigenvalue λ _i. For an observable φ :Ω→ℂ, the Koopman mode corresponding to ϕ _i is the projection of φ onto span{ϕ _i}.
Koopman Spectral Decomposition.
Any observable φ :Ω→ℂ admits a Koopman spectral decomposition of the following form:
φ( y ) = ∑_j = 1^∞a_j( φ)λ _j^tϕ _j,
where λ _j^t = e^s_jt w.r.t. s_j = σ _j + iω _j with eigen-decay/growth σ _j and eigenfrequencies ω _j.
The Koopman mode decomposition of form (<ref>), first provided in <cit.>, was later coupled with numerical approaches for modal decomposition, such as dynamic mode decomposition <cit.>.
Since for any semigroup of Koopman operators {𝒦^t}_t ∈ℝ_≥ 0, exists an infinitesimal generator 𝒢_𝒦, the following relation is satisfied for any λ ^t = e^st:
𝒦^tϕ( y ) = ϕ( F^t( y )) = λ ^tϕ( y ).
Let us consider that the space Ω is chosen to be a Banach algebra, i.e. the set of eigenfunctions forms an Abelian semigroup under product of functions. If ϕ _1,ϕ _2∈Ω are two eigenfunctions of the composition operator 𝒦^t with eigenvalues λ _1,λ _2, then the function product ϕ _1ϕ _2 is also an eigenfunction of 𝒦^t with the eigenvalue λ _1λ _2. Thus, products of eigenfunctions are, again, eigenfunctions. It follows that, for any observable function written in the following form:
φ( y ) = ∑_j = 0^∞a_j( φ)ϕ _j( y ) ,
the Koopman operator acts as follows:
𝒦^tφ = ∑_j = 1^∞a_j( φ)( 𝒦^tϕ _j) = ∑_j = 1^∞a_j( φ)λ _j^tϕ _j.
§ REDUCED-ORDER MODELLING BASED ON KOOPMAN OPERATOR
Dynamic Mode Decomposition (DMD) <cit.> , is a data-driven approach for estimating the modes and eigenvalues of the Koopman operator without numerically executing a Laplace transform. DMD has emerged as a popular approach for finding spatial-temporal coherent patterns in high-dimensional data, with a strong connection to nonlinear dynamical systems via the Koopman mode theory <cit.>. We present in the following an improved numerical algorithm based on dynamic mode decomposition.
Let us consider a set of observables in the following form:
u_i( x,t) = u( x,t_i), t_i = iΔ t, i = 0,...,N_t
at a constant sampling time Δ t, x representing the spatial coordinates, whether Cartesian or Cylindrical.
A data matrix whose columns represent the individual data samples, called the snapshot matrix, is constructed in the following manner:
V = [ [ u_0 u_1 ... u_N_t ]] ∈ℝ^N_x× (N_t + 1)
Each column u_i is a vector with N_x components, representing the numerical measurements.
The Koopman decomposition theory assumes that an infinitesimal operator 𝒦^t exists that maps every vector column onto the next one:
{u_0, u_1 = 𝒦^tu_0, u_2 = 𝒦^tu_1 = ( 𝒦^t)^2u_0, ..., u_N_t = 𝒦^tu_N_t - 1 = ( 𝒦^t)^N_tu_0}.
Our aim is to build the best numerical approximation of the Koopman operator using the DMD technique. The next step consists in forming two data matrices from the observables sequence, in the form:
V_0 = [ [ u_0 u_1 ... u_N_t - 1 ]] ∈ℝ^N_x×N_t, V_1 = [ [ u_1 u_2 ... u_N_t ]] ∈ℝ^N_x×N_t.
Assume that over a sufficiently long sequence of snapshots, the latest snapshot may be expressed as a linear combination of preceding vectors, so that:
u_N_t = c_0u_0 + c_1u_1 + ... + c_N_t - 1u_N_t - 1 + ℛ,
where c_i∈ℝ, i = 0,...,N_t - 1, and ℛ is the residual vector. The following relations hold:
{u_1,u_2,...u_N_t} = 𝒦^t{u_0,u_1,...u_N_t - 1} = {u_1,u_2,...,V_0c} + ℛ,
where c = ( [ c_0 c_1 ... c_N_t - 1 ])^T is the unknown column vector. Eq.(<ref>) is equivalent to the following relation:
𝒦^tV_0 = V_0𝒮 + ℛ, 𝒮 = ( [ 0 0 ... 0 c_0; 1 0 ... 0 c_1; ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 … 1 c_N_t - 1 ]),
where 𝒮 is the companion matrix.
The relationship (<ref>) is true when the residual is minimized. It follows that the vector c must be chosen such that ℛ is orthogonal to span{u_0,...,u_N_t - 1}.
The goal of dynamic mode decomposition is to solve the eigenvalue problem of the companion matrix:
V_1 = 𝒦^tV_0 = V_0𝒮 + ℛ,
where 𝒮 approximates the eigenvalues of the Koopman operator 𝒦^t when ‖ℛ‖_2→ 0.
As a direct result of resolving the minimization problem (<ref>), minimizing the residual enhances overall convergence, and so the eigenvalues and eigenvectors of 𝒮 will converge toward the eigenvalues and eigenvectors of the Koopman operator, respectively.
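For concreteness, a minimal companion-matrix DMD can be prototyped as follows. This is our sketch rather than the authors' code: the vector c solves the least-squares problem behind the residual minimization above, the companion matrix 𝒮 is assembled as displayed, and its eigendecomposition delivers the Ritz values λ_j, the modes ϕ_j, and amplitudes a_j fitted to the first snapshot.

```python
import numpy as np

def companion_dmd(V):
    """Companion-matrix DMD of a snapshot matrix V (columns u_0 ... u_Nt)."""
    V0 = V[:, :-1]                                       # u_0 ... u_{Nt-1}
    c, *_ = np.linalg.lstsq(V0, V[:, -1], rcond=None)    # u_Nt ~ V0 @ c minimizes ||R||_2
    Nt = V0.shape[1]
    S = np.zeros((Nt, Nt))
    S[1:, :-1] = np.eye(Nt - 1)                          # ones on the subdiagonal
    S[:, -1] = c                                         # last column holds c_0 ... c_{Nt-1}
    lam, W = np.linalg.eig(S)                            # Ritz values approximate Koopman eigenvalues
    Phi = V0 @ W                                         # Koopman/DMD modes
    a, *_ = np.linalg.lstsq(Phi, V[:, 0], rcond=None)    # amplitudes from the first snapshot
    return lam, Phi, a
```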
The advantage of this method is that the Koopman operator has an infinite number of eigenvalues, whereas its DMD approximation is linear and has a finite number of terms.
Model reduction is highly dependent on the selection of dynamic modes. The superposition of all Koopman modes, weighted by their amplitudes and complex frequencies, approximates the whole data sequence, but some modes contribute insignificantly. In this research, we create a reduced-order model of the data that only includes the most important modes that make a substantial contribution to the representation of the data, which we refer to as the leading modes.
The data snapshots at every time step will be represented as a Koopman spectral decomposition of the form:
u_DMD( x,t_i) = ∑_j = 1^N_DMDa_j( t_i)λ _j^i - 1ϕ _j( x) , i ∈{1,...,N_t}, t_i∈{t_1,...,t_N_t},
where N_DMD≪N_t represents the number of Koopman leading modes ϕ( x) involved in the spectral decomposition of data snapshots, λ _j are the Koopman eigenvalues, and a_j∈ℂ are the modal amplitudes of the Koopman modes, respectively.
The leading modes indicate a subset of Koopman modes that will be chosen from all computed DMD modes using an original criterion, discussed in the following.
We define the weight of each Koopman mode as follows:
w𝒦_j = ∫_Δ t^t_N_t∑_i = 1^N_ta_j( t )λ _j^i - 1 dt,
where λ _j are the Koopman eigenvalues, and a_j∈ℂ are the modal amplitudes of the Koopman modes, respectively.
Let
Er_DMD = u( x) - u_DMD( x)_2/u( x)_2,
be the relative error of the difference between the variables of the full model and approximate DMD solutions over the exact one, where u( x) represents the full solution of the model and u_DMD( x) represents the reduced order solution.
The leading dynamic modes and their related frequencies are chosen in descending order of the modal weights (<ref>), until a minimal relative error of the reduced-order model is obtained. Producing the reduced-order model then amounts to finding the solution of the following minimisation problem:
{[ Find N_DMD∈ N, w.r.t. u_DMD( x,t_i) = ∑_j = 1^N_DMDa_jϕ _j( x)λ _j^i - 1 ,; i ∈{1,...,N_t}, t_i∈{t_1,...,t_N_t},; Subject to min_N_DMD{w𝒦_1 > w𝒦_2 > ... > w𝒦_N_DMD, Er_DMD≤ε}. ].
As a consequence, the modes and frequencies with the highest effect on approximation accuracy are selected to be included in the model with a reduced computational complexity.
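Assuming λ_j, ϕ_j, and a_j from the companion-DMD sketch above (with the first snapshot playing the role of t_1), the selection criterion can be prototyped greedily; note that in this sketch of ours the time integral in the weight is replaced by a discrete Riemann-sum proxy, which is our simplification.

```python
import numpy as np

def select_leading_modes(V, lam, Phi, a, eps=1e-3, dt=1.0):
    """Greedy solution of the minimisation problem: add modes in descending
    order of their weights until the relative error Er_DMD drops below eps."""
    n_snap = V.shape[1]
    powers = lam[None, :] ** np.arange(n_snap)[:, None]      # lambda_j^{i-1}
    weights = dt * np.abs(a[None, :] * powers).sum(axis=0)   # discrete proxy for wK_j
    order = np.argsort(weights)[::-1]
    for p in range(1, len(order) + 1):
        sel = order[:p]
        V_dmd = (Phi[:, sel] * a[sel]) @ powers[:, sel].T    # Koopman spectral reconstruction
        err = np.linalg.norm(V - V_dmd) / np.linalg.norm(V)  # relative error Er_DMD
        if err <= eps:
            break
    return sel, V_dmd, err                                   # N_DMD = len(sel)
```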
§ REDUCED-ORDER MODELLING OF SAINT-VENANT EQUATIONS MODEL
The test problem used in this paper consists of the nonlinear Saint-Venant equations (also called the shallow water equations <cit.>) in a channel on the rotating earth:
∂( ũh)/∂ t + ∂( ũ^2h + gh^2/2)/∂ x + ∂( ũṽh)/∂ y = h( fṽ - g∂ H/∂ x),
∂( ṽh)/∂ t + ∂( ũṽh)/∂ x + ∂( ṽ^2h + gh^2/2)/∂ y = h( - fũ - g∂ H/∂ y),
∂h/∂ t + ∂( ũh)/∂ x + ∂( ṽh)/∂ y = 0,
where ũ and ṽ are the velocity components in the x̃ and ỹ axis directions respectively, h̃ represents the
depth of the fluid, H( x,y) is the the orography field, f̃ is the Coriolis factor and g is the acceleration of gravity.
The reference computational configuration is the rectangular 2D domain Ω = [ 0,L_max] ×[0,D_max]. Subscripts represent the derivatives with respect to time and the streamwise and spanwise coordinates.
The Coriolis parameter is modelled as varying linearly in the spanwise direction, such that
f = f_0 + β (y - D_max),
where f_0,β are constants, L_max,D_max are the dimensions of the rectangular domain of integration.
The height of the orography is given by the fixed two-dimensional field
H( x,y) = αe^y^2 - x^2.
The model (<ref>)-(<ref>) is associated with periodic boundary conditions in the x̃-direction and solid wall boundary condition in
the ỹ-direction:
ũ( 0,ỹ,t̃) = ũ( L_max,ỹ,t̃), ṽ( x̃,0,t̃) = ṽ( x̃,D_max,t̃) = 0,
and also with the initial Grammeltvedt type condition <cit.> as the initial height field, which propagates the energy in wave number one, in the
streamwise direction:
h_0( x̃,ỹ) = H_0 + H_1tanh( 10(D_max/2 - ỹ)/D_max) + H_2sin( 2πx̃/L_max)cosh ^ - 2( 20(D_max/2 - ỹ)/D_max).
Using the geostrophic relationship ũ = - h̃_ỹ( g/f̃), ṽ = h̃_x̃( g/f̃), the initial velocity fields are derived as:
u_0( x̃,ỹ) = - g/f̃10H_1/D_max( tanh^2( 5D_max - 10ỹ/D_max) - 1) -
18g/f̃H_2sinh( 10D_max - 20ỹ/D_max)sin( 2πx̃/L_max)/D_maxcosh^3( 10D_max - 20ỹ/D_max),
v_0( x̃,ỹ) = 2πH_2g/f̃L_maxcos( 2πx̃/L_max)cosh ^ - 2( 20(D_max/2 - ỹ)/D_max).
The constants used for the test problem are
f_0 = 10^ - 4s^ - 1, α = 4000, β = 1.5 ×10^ - 11s^ - 1m^ - 1, g = 9.81ms^ - 2,
D_max = 60×10^3m, L_max = 265×10^3m,
H_0 = 10 ×10^3m, H_1 = -700m, H_2 = -400m.
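As a concrete illustration of this setup, the Grammeltvedt height field and the geostrophic initial velocities can be assembled on a grid as follows. This is our sketch: the grid resolution is illustrative, and the analytic derivatives above are replaced by numerical gradients for brevity.

```python
import numpy as np

# constants of the test problem (SI units)
Lmax, Dmax = 265e3, 60e3
H0, H1, H2 = 10e3, -700.0, -400.0
g, f0 = 9.81, 1e-4

x = np.linspace(0.0, Lmax, 257)                  # illustrative grid resolution
y = np.linspace(0.0, Dmax, 65)
X, Y = np.meshgrid(x, y, indexing="ij")

# Grammeltvedt initial height field
h0 = (H0 + H1*np.tanh(10*(Dmax/2 - Y)/Dmax)
         + H2*np.sin(2*np.pi*X/Lmax)/np.cosh(20*(Dmax/2 - Y)/Dmax)**2)

# geostrophic balance: u0 = -(g/f) dh/dy, v0 = (g/f) dh/dx
u0 = -(g/f0)*np.gradient(h0, y, axis=1)
v0 = (g/f0)*np.gradient(h0, x, axis=0)
```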
The error of the numerical algorithm is set to be less than ε=10^-7. A non-dimensional analysis was performed to assess the performances of the reduced-order shallow water model.
Reference quantities of the dependent and independent variables in the shallow water model are considered, i.e. the length scale
L_ref = L_max and the reference units for the height and velocities, respectively, are given by the initial conditions h_ref =
h_0, u_ref = u_0. A typical time scale is also considered, assuming the form t_ref = L_ref/u_ref.
In order to make the
system of equations (<ref>)-(<ref>) non-dimensional, the non-dimensional variables
( t,x,y) = ( t̃/t_ref,x̃/L_ref,ỹ/L_ref), ( h,u,v) = ( h̃/h_ref,ũ/u_ref,ṽ/u_ref)
are introduced.
The numerical results are obtained employing a Lax-Wendroff finite difference discretization scheme <cit.> and used in further numerical experiments in dimensionless form. The training data comprises 289 unsteady solutions of the two-dimensional shallow water equations model (<ref>)-(<ref>), at regularly spaced time intervals of Δ t = 1800s for each solution variable.
The numerical results of two tests illustrating the computing performance of the approach are presented below. In the first experiment, the threshold is set to be ε = 10^ - 3 for solving the optimization problem (<ref>). In the second experiment, the threshold is set at ε = 10^ - 4 for solving the optimization problem (<ref>).
Figures <ref>–<ref> present the spectra of the Koopman decomposition eigenvalues of the geopotential height field h, the streamwise field u, and the spanwise field v, respectively, in the case of the two experiments, together with the leading Koopman modes selected by resolving the optimization problem (<ref>). In the second experiment, extra modes are selected (darker colored dots) to improve the precision of the reduced-order model.
The representation of the height field compared to its reduced-order model is displayed in Figures <ref>–<ref>, in the case of both experiments.
The vorticity field compared to its reduced-order model is illustrated in Figures <ref>–<ref>, in the case of both experiments, at different time instances.
Table <ref> presents the percentage reduction of the computational complexity of the reduced-order model, in the two experiments performed.
§ CONCLUSIONS
The current study concentrated on a topic of significant interest in fluid dynamics: the identification of a model of reduced computational complexity from numerical code output, based on Koopman operator Theory.
The full model consisted of the Saint-Venant equations model, which was computed using a Lax-Wendroff finite difference discretization scheme. The Koopman composition operator has been numerically approximated with the reduced-order modelling algorithm, endowed with a novel criterion for selecting the most influential Koopman modes based on the mode weights. It automatically selects the most representative Koopman modes, even if they exhibit rapid development with lower amplitudes or are composed of high-amplitude fast damped modes.
Two tests were carried out in order to evaluate the algorithm's computing efficiency in order to enhance the reduced-order model precision. It was demonstrated that the model rank may be decreased by up to 92% without compromising model accuracy.
This approach is a useful tool for creating reduced-order models of complex flow fields characterized by non-linear models.
99
Holmes1996 Holmes, P., Lumley, J., Berkooz, G., Turbulence, coherent structures, dynamical systems and symmetry, Cambridge University Press, 1996.
Kaiser2018 Kaiser, E., Kutz, J., Brunton, S., Sparse identification of nonlinear dynamics for model predictive control in the low-data limit, Proceedings Of The Royal Society A, 474, 2018.
ChKutz2019 Champion, K., Brunton, S., Kutz, J., Discovery of nonlinear multiscale systems: Sampling strategies and embeddings, SIAM Journal On Applied Dynamical Systems, 18, 312-333, 2019.
Arcucci2023 Arcucci, R., Xiao, D., Fang, F., Navon, I.M., Wu, P., Pain, C.C., Guo, Y.K., A reduced order with data assimilation model: Theory and practice, Computers & Fluids 257, 105862, 2023.
Mou2023 Mou, C., Merzari, E., San, O., Iliescu, T., An energy-based lengthscale for reduced order models of turbulent flows, Nuclear Engineering and Design 412, 112454, 2023.
Iliescu2022 Iliescu, T., ROM Closures and Stabilizations for Under-Resolved Turbulent Flows, 2022 Spring Central Sectional Meeting, 2022.
Daba2023 Dabaghian, P.H., Ahmed, S.E., San, O., Nonintrusive Reduced Order Modelling of Convective Boussinesq Flows, International Journal of Computational Fluid Dynamics 36 (7), 578-598, 2022.
Sanfilippo2023 Sanfilippo, A., Moore, I., Ballarin, F., Iliescu, T., Approximate deconvolution Leray reduced order model for convection-dominated flows, Finite Elements in Analysis and Design 226, 104021, 2023.
Koopm1931 Koopman, B., Hamiltonian systems and transformations in Hilbert space, Proc. Nat. Acad. Sci., 17, 315-318, 1931.
BistrianDimitriuNavon2019 Bistrian, D.A., Dimitriu, G., Navon, I.M., Processing epidemiological data using Dynamic Mode Decomposition method, AIP Conference Proceedings 2164, 080002, 2019.
BistrianDimitriuNavon2020a Bistrian, D.A., Dimitriu, G., Navon, I.M. Modeling dynamic patterns from COVID-19 data Using Randomized Dynamic Mode Decomposition in predictive Mode and ARIMA, AIP Conference Proceedings 2302, 080002, 2020.
BistrianDimitriuNavon2020b Bistrian, D.A., Dimitriu, G., Navon, I.M., Application of deterministic and randomized dynamic mode decomposition in epidemiology and fluid dynamics, An. Ştiinţ. Univ. Al. I. Cuza Iaşi. Mat. (N.S.), Tomul LXVI, F. 2, 2020.
BistrianNavon2017b Bistrian, D.A., Navon, I.M., Randomized Dynamic Mode Decomposition for non-intrusive reduced order modelling, International Journal for Numerical Methods in Engineering, ISSN 0029-5981, Volume: 112, Issue: 1, Page:3-25, 2017.
Bistrian2022a Bistrian, D.A., High-Fidelity Digital Twin Data Models by Randomized Dynamic Mode Decomposition and Deep Learning with Applications in Fluid Dynamics, Modelling, Volume 3, Issue 3, pp. 314-332, 2022.
Bistrian2022b Bistrian, D.A., Mathematical considerations on Randomized Orthogonal Decomposition method for developing twin data models, Transylvanian Journal of Mathematics and Mechanics, Volume 14, Number 2, pp. 105-115, 2022.
Mezic2005 Mezic, I., Spectral properties of dynamical systems, model reduction and decompositions, Nonlinear Dynamics, Volume 41, pp. 309-325, 2005.
SchmidSesterhen2008 Schmid, P.J., Sesterhenn, J., Dynamic Mode Decomposition of Numerical and Experimental Data, 61st Annual Meeting of the APS Division of Fluid Dynamics, San Antonio, Texas, Vol. 53(15), American Physical Society, 2008.
Schmid2010 Schmid, P.J., Dynamic Mode Decomposition of Numerical and Experimental Data, Journal of Fluid Mechanics 656: 5–28, 2010.
Mezic2013 Mezic, I., Analysis of fluid flows via spectral properties of the Koopman operator, Annual Review of Fluid Mechanics, Volume 45, pp. 357-378, 2013.
Tu2014 Tu, J., Rowley, C., Luchtenburg, D., Brunton, S., Kutz, J., On dynamic mode decomposition: Theory and applications, Journal Of Computational Dynamics, 1, 391-421, 2014.
Bistrian2015 Bistrian, D., Navon, I., An improved algorithm for the shallow water equations model reduction: Dynamic Mode Decomposition vs POD, International Journal For Numerical Methods In Fluids, 78, 552-580, 2015.
chenoptimal2012 Chen, K., Tu, J., Rowley, C., Variants of dynamic mode decomposition: boundary condition, Koopman and Fourier analyses, Nonlinear Science, 22, 887-915, 2012.
SaintVenant Saint-Venant, A.J.C., Barré de, J.C., Théorie du mouvement non permanent des eaux, avec application aux crues des rivières et a l'introduction de marées dans leurs lits, Comptes rendus de l'Académie des Sciences 73, 237-240, 1871.
Grammeltvedt1969 Grammeltvedt, A., A survey of finite-difference schemes
for the primitive equations for a barotropic Fluid, Monthly Weather Review 97 (5): 384–404, 1969.
Brass2011Brass, H., Petras, K., Quadrature Theory: The Theory of Numerical Integration on a Compact Interval, American Mathematical Soc., 2011.
|
http://arxiv.org/abs/2409.02767v1 | 20240904144235 | Hong-Ou-Mandel Interference in a temporal-average-inversion-symmetric chain | [
"Shi Hu",
"Meiqing Hu",
"Shihao Li",
"Zihui Zhong",
"Zhoutao Lei"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall"
] |
[email protected]
School of Optoelectronic Engineering, Guangdong Polytechnic Normal University, Guangzhou 510665, China
School of Optoelectronic Engineering, Guangdong Polytechnic Normal University, Guangzhou 510665, China
School of Optoelectronic Engineering, Guangdong Polytechnic Normal University, Guangzhou 510665, China
School of Optoelectronic Engineering, Guangdong Polytechnic Normal University, Guangzhou 510665, China
[email protected]
Guangdong Provincial Key Laboratory of Quantum Metrology and Sensing & School of Physics and Astronomy, Sun Yat-Sen University (Zhuhai Campus), Zhuhai 519082, China
§ ABSTRACT
We show how to implement tunable beam splitter and Hong-Ou-Mandel interference in the Su-Schrieffer-Heeger chain by manipulating the topological edge states adiabatically. The boson initially injected in the one end of the chain can be transferred to the two-end with a tunable proportion depends on the dynamical phases accumulated during the adiabatic evolution. We also observe Hong-Ou-Mandel interference via the tunable beam splitter (50:50) and achieve a spatially entangled two-particle NOON state. We demonstrate the robustness of our proposal under chiral- and time-reversal-symmetry-preserving disorder. However, the chiral symmetry is scarce for realist system. Therefore, we demonstrate Hong-Ou-Mandel interference are robust to inversion symmetric disorder breaking the chiral symmetry, highlighting the protection of inversion symmetry. More importantly, the inversion symmetry violated by static disorder can be restored for more common situations where disorder becomes time dependent, giving rise to the temporal-average-inversion-symmetry protected Hong-Ou-Mandel interference. Our approach opens a new way to study quantum effects in topological matter with potential applications.
Hong-Ou-Mandel Interference in a temporal-average-inversion-symmetric chain
Zhoutao Lei
September 9, 2024
===========================================================================
§ INTRODUCTION
Topological states of matter, originally discovered in condensed matter <cit.>, have attracted intensive attention in many fields such as photonics <cit.>, acoustics <cit.>, and cold atoms <cit.> over the last decade. The existence of topologically protected edge states according to the bulk-edge correspondence <cit.> is one of the most appealing characteristics of topological systems. The unique properties of the in-gap topological edge states, such as robustness rooted in the bulk topological invariants <cit.> and unidirectional propagation immune to backscattering <cit.>, drive numerous research efforts in quantum science and technology. These prominent features make topological edge states a reliable platform for topological protection of quantum correlations <cit.>, quantum state transfer <cit.>, topological quantum gates <cit.>, and topological quantum devices <cit.>. Actually, the characterization and properties of topological phases rely on their underlying symmetry. After establishing the celebrated Altland-Zirnbauer symmetry classification <cit.>, researchers began investigating the role of spatial symmetry and introduced the concept of topological crystalline phases <cit.>.
Quantum entanglement and interference play a crucial role in quantum communication, quantum computation, and quantum metrology <cit.>. Topological systems have demonstrated topological protection of quantum entanglement <cit.>. A recent experiment <cit.> has observed the well-known Hong-Ou-Mandel (HOM) interference <cit.> in an integrated photonic circuit described by the off-diagonal Harper model <cit.>. In that work, a topological interface was created in the middle of the system, and interference was observed at this interface by adiabatically manipulating the coupling strength and making the topological edge modes delocalized. Besides, one of us has demonstrated HOM interference by Thouless pumping via topological bulk bands and generated a spatially entangled two-particle NOON state <cit.>. It is interesting to study quantum interference and entanglement generation in topological systems.
In this work, we propose a scheme to achieve HOM interference and entanglement generation based on the Su-Schrieffer-Heeger (SSH) chain <cit.>, one of the most prototypical topological models. In the topologically nontrivial phase, the in-gap eigenstates of a finite-sized SSH chain are odd and even superpositions of edge states localized exponentially on the left and right edges <cit.>. Moreover, the edge states localize strictly at the ends of the chain in the fully dimerized limit (only intercell hopping). Firstly, we show that a boson injected at one end of the chain, acting as a superposition of these two in-gap eigenstates, can be transferred to the two ends with a tunable proportion. This proportion depends on the dynamical phase accumulated in the adiabatic evolution, that is, moving away from the fully dimerized limit and then returning adiabatically. Therefore we achieve a tunable beam splitter (BS) by manipulating the dynamical phase delicately, where the two ends of the SSH chain act as input and output ports. Extending to the two-particle system, HOM interference and the generation of a spatial two-particle NOON state can be observed via the tunable BS (50:50).
After studying the clean system with translation symmetry, we reveal the role of internal and inversion symmetries by introducing different type of disorder. More specifically, the chiral- and time-reversal-symmetry-preserving disorder only make a tiny random shift on the energy spectrum and influence the dynamical phase. In the static case with fixed disorder during the adiabatic evolution, we can adjust the total evolution time to match the desired dynamical phase. In the temporal case with time dependent disorder, the influence on the dynamical phase can be averaged out during the adiabatic evolution. However, the chiral symmetry is delicate for many realistic systems, such as the ones with on-site energy disorders.
Therefore, we examine the effects of spatial symmetry and reveal that the tunable BS and HOM interference can still be established against inversion-symmetric disorders, while they can be destroyed by static generic ones. Fortunately, a temporal-average-inversion-symmetry emerges for the cases with time dependent disorder, which exist universally in realistic systems, and then the BS as well as the HOM interference is restored. These results highlight that our proposal can be protected by temporal-average-inversion-symmetry, dual to previous works <cit.> demonstrating average-spatial-symmetry protected topological phases in statistical ensembles.
This paper is organized as follows. In Sec. <ref>, we introduce our model, SSH chain, its hybridized edge states, and underlying symmetries. In Sec. <ref>, we study how to achieve tunable BS ,HOM interference, and entanglement generation via edge channels of clean or disordered SSH chain remaining in class BDI. In Sec. <ref>, we investigate disordered systems without chiral symmetry and analyze the role of inversion symmetry, especially the case with temporal diagonal disorder where exact inversion symmetry are also broken while temporal-average-inversion-symmetry appears. In Sec. <ref>, we give a summary and discussion of our results.
§ SU-SCHRIEFFER-HEEGER MODEL AND HYBRIDIZED EDGE STATES
The SSH model is one of the most prototypic models exhibiting topological edge states. It describes particles hopping on a one-dimensional chain with staggered hopping amplitudes (ħ=1):
Ĥ=v∑_n=1^Nb̂_2n-1^†b̂_2n
+w∑_n=1^N-1b̂_2n^†b̂_2n+1+ H.c.,
with N unit cells (the total lattice site is 2N). b̂_n^† and b̂_n are bosonic creation and annihilation operators acting on lattice site n, respectively. v and w denote the intracell and intercell hopping amplitudes, as shown in Fig. <ref>. Under the periodic boundary condition, the Hamiltonian can be expressed in momentum space:
Ĥ=∑_kΨ_k^†H(k)Ψ_k, Ψ_k^†
=(b̂_ odd,k^† b̂_ even,k^†),
H(k)=(v+wcosk)σ_x+wsinkσ_y,
with σ_x,y,z being Pauli matrices. For simplicity, we take v and w to be real and nonnegative. In following discussion, the intercell hopping is set as energy unit w=1.
SSH model holds time-reversal symmetry H(k)=H^*(-k) (all hopping being real), particle-hole symmetry σ_zH(k)σ_z=-H^*(-k) and chiral symmetry σ_zH(k)σ_z=-H(k). Because both the time-reversal symmetry operator and particle-hole symmetry operator square to +1, SSH model falls into the BDI class in the Altland-Zirnbauer symmetry classification <cit.>. Moreover, SSH model also holds inversion symmetry read as σ_xH(k)σ_x=H(-k).
Either chiral symmetry <cit.> or inversion symmetry <cit.> can protect nontrivial topological phases for SSH model when v<w. In this regime, single-particle SSH chain hosts two zero-energy edge states localized on the boundaries in the thermodynamic limit of N→∞, according to the well-known bulk-boundary correspondence.
We now turn to consider a finite-sized system. The energies of the edge states remain very close to zero and form a pair of almost-zero-energy eigenvalues opposite to each other due to chiral symmetry. Actually, for an eigenstate |ψ_n⟩ with energy E_n≠0, there is a chiral symmetric partner Ŝ|ψ_n⟩. Here Ŝ is the (single-particle) real space form of the chiral symmetry operator expressed with the orthogonal projectors P̂_ odd and P̂_ even:
P̂_ odd=∑_n=1^N|2n-1⟩⟨2n-1|, P̂_ even=∑_n=1^N|2n⟩⟨2n|,Ŝ=P̂_ odd-P̂_ even,
and the eigenenergy of Ŝ|ψ_n⟩ will be -E_n due to the chiral symmetric relationship (See Appendix <ref> for more details).
These almost-zero-energy eigenstates are approximated given as
|0_+⟩ = (|L⟩+(-1)^N+1|R⟩)/√(2),
E_+=|vη^N-1(η^2-1)/(η^2N-1)|,
|0_-⟩ = (|L⟩-(-1)^N+1|R⟩)/√(2),
E_-=-|vη^N-1(η^2-1)/(η^2N-1)|.
Here
|L⟩ = |1⟩+η|3⟩+⋯+η^N-1|2N-1⟩,
|R⟩ = |2N⟩+η|2N-2⟩+⋯+η^N-1|2⟩,
denote the ideal exponentially localized left and right edge states in thermodynamic limit and η=-v/w is the localization factor (See Appendix <ref> for more details). |n⟩=b̂_n^†|V⟩ denote the state of the chain where the boson is on n-th site with |V⟩ being vacuum state. The hybridized edge states |0_+⟩ and |0_-⟩ are superpositions of states localized exponentially on the left and right edge. What’s more, the odd or even superposition depends on the parity of total number N of the unit cells.
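These almost-zero-energy eigenvalues are easy to verify numerically. The Python sketch below is our illustration (parameters are illustrative): it builds the open SSH chain and compares the two mid-gap energies obtained by exact diagonalization with the analytic |E_±| above.

```python
import numpy as np

def ssh_hamiltonian(N, v, w=1.0):
    """Single-particle SSH chain with N unit cells (2N sites) and open boundaries."""
    H = np.zeros((2*N, 2*N))
    for n in range(N):
        H[2*n, 2*n + 1] = H[2*n + 1, 2*n] = v            # intracell hopping v
    for n in range(N - 1):
        H[2*n + 1, 2*n + 2] = H[2*n + 2, 2*n + 1] = w    # intercell hopping w
    return H

N, v = 8, 0.5
E = np.linalg.eigvalsh(ssh_hamiltonian(N, v))
E_mid = np.sort(np.abs(E))[:2]                           # the two almost-zero edge energies
eta = -v                                                 # localization factor for w = 1
E_analytic = abs(v*eta**(N - 1)*(eta**2 - 1)/(eta**(2*N) - 1))
print(E_mid, E_analytic)
```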
The hybridized edge states |0_+⟩ and |0_-⟩ are also eigenstates of the inversion operator and hold opposite parity,
Î=∑_n=1^2N|2N+1-n⟩⟨ n|, Î|0_+⟩=(-1)^N+1|0_+⟩, Î|0_-⟩=(-1)^N|0_-⟩,
with Î being the real space form of the inversion symmetry operator. What's more, inversion symmetry ensures that the distributions of the eigenstates on the left and right halves of the SSH chain are equal, that is
D_f≡∑_i=1^N|c_i|^2-∑_j=N+1^2N|c_j|^2=0.
Here |c_i|^2 is the distribution on the i-th site (See Appendix <ref> for more details).
§ TUNABLE BEAM SPLITTER AND HONG-OU-MANDEL INTERFERENCE VIA SSH CHAIN
§.§ Tunable beam splitter
In this section, we explore how to achieve a tunable BS via edge channels in SSH model. We consider two end sites as both input ports and output ports, as shown in Fig. <ref>. A boson injected into site 1 (site 2N) can be transferred only to the two output ports with tunable proportion.
We start by considering the topological fully dimerized limit (v=0). The edge state |L⟩ and |R⟩ localized only in site 1 and 2N respectively according to Eq. <ref>. A boson initially injected in site 1 (|1⟩) can be rewritten as the superpositions of two hybridized edge states |0_+⟩ and |0_-⟩, i.e., |1⟩=(|0_+⟩+|0_-⟩)/√(2). Then we move away from the fully dimerized limit by adiabatically changing the intracell hopping amplitudes and taking intercell ones as constant
v(t)=v_0sinθ(t).
Here v_0<1 is the modulation amplitude of the intracell hopping amplitudes, θ(t)=π t/T_f varies linearly with the adiabatic evolution time t∈[0,T_f], and T_f is the total evolution time in units of 1/w.
For an SSH chain with 2N=16 sites, the energy spectrum versus evolution time t is plotted in Fig. <ref>(a), and the hybridized topological edge states |0_+⟩ and |0_-⟩ are well separated from the bulk states. At the initial time t=0, the SSH chain is in the fully dimerized limit, with edge states |L⟩=|1⟩ and |R⟩=|2N⟩ associated with eigenvalues E_+=-E_-=0. When it evolves to θ=π/2, the SSH chain is farthest from the fully dimerized limit, |E_±| reaches its maximum value, and the distribution of |L⟩ (|R⟩) spreads among different odd (even) sites [Fig. <ref>(b)]. After that, the system returns to the fully dimerized limit at θ=π.
As mentioned in Sec. <ref>, the eigenstates |0_+⟩ and |0_-⟩ will evolve separately in different sectors of the Hilbert space labeled by their parity. Following quantum adiabatic theorem, at time t the initial state |1⟩ should evolve to
|ψ(t)⟩=(e^-i∫_0^tE_+dτ|0_+(t)⟩+e^-i∫_0^tE_-dτ|0_-(t)⟩)/√(2).
As a result, when the system returns to the fully dimerized (t=T_f) limit, the final state is given as
|ψ_F⟩ = (e^-iϕ_d|0_+(T_f)⟩+e^iϕ_d|0_-(T_f)⟩)/√(2) = cosϕ_d|1⟩+(-1)^Nisinϕ_d|2N⟩,
where ϕ_d=∫_0^T_fE_+(t)dt=-∫_0^T_fE_-(t)dt is the dynamical phase accumulated in the adiabatic evolution. Here we use the symmetry of the energy spectrum i.e., E_+=-E_-. The dynamical phase depends on the total adiabatic evolution time and we can get desired ϕ_d by choosing suitable T_f. Obviously, the boson initially injects in the input port 1 can be finally observed at the output ports 1 and 2 with a certain proportion. What's more, the splitting proportion can be arbitrarily tuned from 1 to 0 by modulating the dynamical phase ϕ_d. If the boson is initially injecting in the input port 2, we have the similar result, and the tunable BS is written as
|1⟩ → cosϕ_d|1⟩+(-1)^Nisinϕ_d|2N⟩,
|2N⟩ → (-1)^Nisinϕ_d|1⟩+cosϕ_d|2N⟩.
In conclusion, the realization of the tunable BS has to satisfy two key conditions: (i) the hybridized edge states |0_+⟩ and |0_-⟩ must evolve separately, and (ii) the dynamical phase ϕ_d must be well controlled.
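Condition (ii) can be made concrete numerically: since θ=π t/T_f, the dynamical phase is ϕ_d=(T_f/π)∫_0^π E_+(θ)dθ, so T_f can be fixed from the spectrum itself. The sketch below is ours (it reuses ssh_hamiltonian from the previous snippet and a simple first-order time slicing of the adiabatic evolution).

```python
import numpy as np
from scipy.linalg import expm

def propagator(N, v0, Tf, steps=4000):
    """Time-ordered propagator of the sweep v(t) = v0*sin(pi*t/Tf)."""
    U = np.eye(2*N, dtype=complex)
    dt = Tf / steps
    for k in range(steps):
        t = (k + 0.5)*dt
        U = expm(-1j*dt*ssh_hamiltonian(N, v0*np.sin(np.pi*t/Tf))) @ U
    return U

N, v0 = 8, 0.5
thetas = np.linspace(0.0, np.pi, 201)
E_plus = [np.min(np.abs(np.linalg.eigvalsh(ssh_hamiltonian(N, v0*np.sin(th)))))
          for th in thetas]
mean_E = np.trapz(E_plus, thetas)/np.pi              # phi_d = Tf * mean_E
Tf = (np.pi/2)/mean_E                                # target phi_d = pi/2: |1> -> |2N>
U = propagator(N, v0, Tf)
print(abs(U[0, 0])**2, abs(U[-1, 0])**2)             # populations at the two output ports
```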
We plot the distribution of the final state at output ports 1 (|1⟩, solid blue line) and 2 (|2N⟩, dashed red line) vs the dynamical phase ϕ_d∈[π/2,3π/2] in Figs. <ref>(c) and (d). These numerical results are in good agreement with the analytical ones in Eq. <ref>. In Fig. <ref>(c) the boson is initially injected in input port 1 (|1⟩); the final distribution in output port 1 increases from 0 to 1 while the distribution in output port 2 decreases from 1 to 0 simultaneously over the range [ϕ_d:π/2→π]. Their behaviour is reversed over the range [ϕ_d:π→3π/2]. Similar results are obtained for a boson initially injected in input port 2, as shown in Fig. <ref>(d).
The above phenomena demonstrate that we can achieve a tunable BS via the edge channel of the SSH model, with the two end sites of the chain acting as input and output ports. The tunable BS has numerous potential applications in quantum optics and quantum information processing for different splitting proportions, such as optical trapping or photon storage (100:0 and 0:100), quantum state transfer (0:100), photon distribution between two distant nodes (arbitrary splitting proportion), and HOM interference (50:50).
§.§ Hong-Ou-Mandel interference
In this section, we focus on two-boson HOM interference via the tunable BS introduced in Sec. <ref>. We begin the evolution at t=0 with two identical bosons injected in sites 1 and 2N, respectively, i.e., the two-boson initial state is |1,2N⟩. Then, we adiabatically tune the intracell hopping amplitudes according to Eq. <ref>. At the final time t=T_f, as illustrated in Sec. <ref>, the bosons undergo the tunable BS operation according to Eq. <ref>. As a result, the two identical bosons initially in state |1,2N⟩ evolve to (-1)^Nisin2ϕ_d(|1,1⟩+|2N,2N⟩)/√(2)+cos2ϕ_d |1,2N⟩.
Intriguingly, when we take ϕ_d=(2l+1)π⁄4,l∈Z, i.e., 50:50 BS, the antibunching term will disappear. Thus, above state evolves into a two-particle NOON state
|ψ_ NOON⟩=1/√(2)(|1,1⟩+|2N,2N⟩),
where the global phase factor is ignored. The spatial NOON state is a maximally entangled state between two-end sites due to the well-known HOM interference.
To explore the correlation features and characterize the entanglement of the generated two-boson NOON state, we calculate the density distribution ⟨n̂_r⟩, the two-particle correlation Γ_q,r, and the "NOONity" (Nity),
⟨n̂_r⟩ = ⟨b̂_r^†b̂_r⟩, Γ_q,r = ⟨ψ|b̂_q^†b̂_r^†b̂_rb̂_q|ψ⟩,
Nity = ∑_q,r(Γ_q,qΓ_r,r-Γ_q,r^2),
whose values during the adiabatic evolution are given in Fig. <ref>.
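For noninteracting bosons these quantities follow directly from the single-particle propagator: the two-boson output amplitudes for the input |1,2N⟩ are permanents, Γ_r,s=|U_r,1U_s,2N+U_s,1U_r,2N|^2. The sketch below is ours and reuses propagator, mean_E, N, and v0 from the previous snippet, retuning T_f so that ϕ_d=π/4 (the 50:50 HOM condition).

```python
import numpy as np

Tf_hom = (np.pi/4)/mean_E                    # phi_d = pi/4 -> 50:50 BS
U = propagator(N, v0, Tf_hom)
# two-boson correlation Gamma_{r,s} for bosons injected at sites 1 and 2N
amp = np.outer(U[:, 0], U[:, -1]) + np.outer(U[:, -1], U[:, 0])
Gamma = np.abs(amp)**2
d = np.diag(Gamma)
nity = (np.outer(d, d) - Gamma**2).sum()     # -2 for |1,2N>, +2 for the ideal NOON state
print(nity)
```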
The density distributions of both the initial state and the final state at t=T_f take a nonzero value (unity) only at sites 1 and 2N, as shown in Fig. <ref>(a), while the two states differ markedly in their correlation and entanglement features. Specifically, the value of Nity is given in Fig. <ref>(b); it must fall in the range [-2,2] and is larger when the state is more like the NOON state. Thereby, the increase of "NOONity" from Nity=-2 to Nity=2 confirms the formation of the ideal two-particle NOON state from the initial product state |1,2N⟩. More intuitively, the correlations Γ_q,r at several typical evolution times t=0, T_f⁄2, T_f in Fig. <ref>(c) display the process in which the correlation function changes from an uncorrelated antibunching to a correlated bunching pattern.
§.§ Disordered SSH chain in class BDI
After establishing the tunable BS and HOM interference in the SSH model in the clean limit, we now introduce disorder and investigate the role of distinct symmetries in protecting these two phenomena.
In this section, we focus on disordered chains that remain in the BDI class, i.e., for which time-reversal and chiral symmetry are not broken by the disorder. Real hopping-amplitude disorder conforms to this requirement. As we will see in the following, chiral symmetry and time-reversal symmetry together protect the tunable BS and HOM interference against such disorder. More specifically, the disordered parameters read
v_2n-1(t)=v(t)(1+ξr_2n-1), w_2n=w(1+ξr_2n),
where ξ is the disorder strength and r_n∈[-0.5,0.5] is a uniformly distributed random number. Generally speaking, this disorder breaks inversion symmetry, and the eigenstates no longer hold definite parity. However, we will see that chiral symmetry ensures that there is no transition between an eigenstate |ψ_n⟩ and its chiral-symmetric partner Ŝ|ψ_n⟩ during the evolution. The derivative of the disordered Hamiltonian Ĥ also preserves chiral symmetry, i.e., {dĤ/dt, Ŝ}=0. Under time-reversal symmetry, ⟨ψ_n|dĤ/dtŜ|ψ_n⟩ and ⟨ψ_n|ŜdĤ/dt|ψ_n⟩ are both real numbers, and we also have ⟨ψ_n|dĤ/dtŜ+ŜdĤ/dt|ψ_n⟩=0. As a result, we have
⟨ψ_n|dĤ/dtŜ|ψ_n⟩
=⟨ψ_n|ŜdĤ/dt|ψ_n⟩
=0,
which means there is no transition between the eigenstate |ψ_n⟩ and Ŝ|ψ_n⟩. In this way, the hybridized edge states |0_+⟩ and |0_-⟩ can evolve separately even though inversion symmetry is broken.
There are two distinct treatments of the disorder: time-independent static noise and time-dependent temporal noise. In the static case, the disorder is fixed during the adiabatic evolution for each realization of the numerical simulation. In contrast, in the temporal case it fluctuates during the evolution. Here, we study both the static and the temporal case.
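A minimal sketch of how the two treatments can be sampled is given below. The pulse for v(t), the chain length, and the random-number generator are illustrative assumptions; the disorder itself follows Eq. <ref>, and since all amplitudes stay real, chiral and time-reversal symmetry are preserved by construction.

```python
import numpy as np

def disordered_hoppings(v_t, w, xi, N, temporal, rng):
    """Sample v_{2n-1}(t), w_{2n}(t) with real (chiral-preserving) disorder.

    Static:   one frozen random vector, reused at every time step.
    Temporal: a fresh random vector drawn at each time step.
    """
    steps, bonds = len(v_t), 2 * N - 1
    if temporal:
        r = rng.uniform(-0.5, 0.5, size=(steps, bonds))
    else:
        r = np.tile(rng.uniform(-0.5, 0.5, size=bonds), (steps, 1))
    v_dis = v_t[:, None] * (1.0 + xi * r[:, 0::2])   # N intracell bonds
    w_dis = w * (1.0 + xi * r[:, 1::2])              # N-1 intercell bonds
    return v_dis, w_dis

rng = np.random.default_rng(0)
v_t = 0.4 * np.sin(np.pi * np.linspace(0, 1, 100)) ** 2   # assumed ramp
v_dis, w_dis = disordered_hoppings(v_t, w=1.0, xi=0.2, N=10,
                                   temporal=True, rng=rng)
```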
For each realization we calculate the fidelity at a given final time T_f and then average over 100 repetitions of the adiabatic evolution, each with its own random choice of disorder. In Fig. <ref>(a), we plot the average fidelity ℱ, with F=|⟨ψ_F|ψ(T_f)⟩|, for different disorder strengths ξ.
The initial state is again |1⟩, i.e., a boson is injected in input port 1 at the initial time t=0, and we take T_f=504 in units of 1/w and ϕ_d=π/2, corresponding to a 0:100 BS. With this setup, a boson injected in input port 1 (2) is transferred to output port 2 (1) in the clean limit. In the disordered cases discussed in Sec. <ref>, we find that in the temporal case (solid blue line) there is a plateau near 1 for ξ/w∈[0,0.2], whereas in the static case (dashed red line) the fidelity decreases from 1 to 0.94. The results for HOM interference against this disorder are similar; see the average fidelity ℱ with F=|⟨ψ_NOON|ψ(T_f)⟩| in Fig. <ref>(b).
The difference between static and temporal disorder stems from their influence on the dynamical phase ϕ_d. More specifically, the chiral-symmetry-preserving hopping-amplitude disorder only induces a tiny random shift of the energy spectrum and thereby of the dynamical phase. In the temporal case with time-dependent disorder, this influence on the dynamical phase is averaged out during the adiabatic evolution. Therefore, for our protocol, the average fidelity
ℱ remains above 0.99 as long as the disorder strength ξ is less than 0.2w in the temporal case, with the same total evolution time T_f as in the clean case [Figs. <ref>(a) and (b)]. In the static case, where the disorder is fixed during the adiabatic evolution, the total evolution time T_f should be adjusted to match the desired dynamical phase [Figs. <ref>(c) and (d)].
§ EXACT INVERSION SYMMETRY AND TEMPORAL-AVERAGE-INVERSION-SYMMETRY
In Sec. <ref>, we investigated hopping-amplitude disorder (preserving chiral symmetry but breaking inversion symmetry) and illustrated the protection by chiral and time-reversal symmetry. However, chiral symmetry may easily be broken by several common types of disorder, such as on-site disorder. It is therefore necessary to consider what happens when chiral symmetry is broken but inversion symmetry is preserved. In this section, we characterize the protection by inversion symmetry, both in the exact case and in the temporal-averaged one.
§.§ Static disorder preserving or breaking inversion symmetry
We begin with the static on-site disorder described by
Ĥ_ζ=ζ∑_nr_nb̂_n^†b̂_n,
where ζ is the disorder strength and r_n∈[-0.5,0.5] is a uniformly distributed random number. In general, this disorder breaks both chiral and inversion symmetry. To preserve inversion symmetry, the disorder configuration
must respect the inversion center, that is, the random numbers r_n should satisfy
r_n=r_2N+1-n.
We consider both inversion-symmetry-preserving on-site disorder and generic on-site disorder (breaking inversion symmetry) in this subsection.
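The inversion-symmetry-preserving configuration can be generated, for instance, as in the following sketch, where the right half of the random on-site vector is obtained by mirroring the left half about the inversion center (the parameter values are illustrative).

```python
import numpy as np

def onsite_disorder(N, zeta, symmetric, rng):
    """On-site energies zeta * r_n; if symmetric, enforce r_n = r_{2N+1-n}."""
    r = rng.uniform(-0.5, 0.5, size=2 * N)
    if symmetric:                         # mirror left half onto right half
        r[N:] = r[:N][::-1]
    return zeta * r

rng = np.random.default_rng(1)
eps = onsite_disorder(N=5, zeta=0.2, symmetric=True, rng=rng)
assert np.allclose(eps, eps[::-1])        # respects the inversion center
```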
To illustrate the protection of HOM interference by inversion symmetry, we calculate the fidelity F=|⟨ψ_NOON|ψ(t)⟩|, Nity, and the two-particle correlation Γ_q,r during the interference process.
In Fig. <ref>(a), we plot the fidelity for the input state |1,2N⟩ as a function of the evolution time t during HOM interference with disorder strength ζ=0.2. Under inversion-symmetry-preserving disorder (blue solid line), the fidelity increases from F=0 for the input product state to F=1 at t=T_f. Indeed, we generate a NOON state by HOM interference, taking the total evolution time T_f=252, the same as in the clean situation. In contrast, generic on-site disorder breaks inversion symmetry, and the fidelity (red dashed line) always stays close to 0. To explore the entanglement and correlation features, we also provide Nity and Γ_q,r in Figs. <ref>(b)-(d). As shown in Fig. <ref>(b), Nity=-2 for the input state |1,2N⟩, and it increases to 2 at the final evolution time t=T_f under inversion-symmetry-preserving disorder (blue solid line). In contrast, it returns to -2 at t=T_f in the inversion-symmetry-breaking case (red dashed line). In Figs. <ref>(c) and (d), we present the two-particle correlation Γ_q,r at t=T_f. The final correlation function still shows an uncorrelated antibunching pattern in the inversion-symmetry-breaking case [Fig. <ref>(d)], while it shows a correlated bunching pattern when inversion symmetry is preserved [Fig. <ref>(c)], consistent with the evolution of the fidelity F and Nity in Figs. <ref>(a) and (b). Therefore, exact inversion symmetry can protect the HOM interference even though chiral symmetry is broken, which can be further confirmed by investigating the conservation of parity during the evolution process, as given in Appendix <ref>.
§.§ Emergence of temporal-average-inversion-symmetry
Although we can preserve inversion symmetry by engineering the disorder symmetrically <cit.>, it is more important to restore inversion symmetry in a generic way. Previous works <cit.> have demonstrated that an average spatial symmetry of the statistical ensemble of a disordered system can protect topological phases. In this section, we consider the temporal case of generic on-site disorder, in which a temporal-average-inversion-symmetry emerges. Specifically, the disorder described in Eq. <ref> fluctuates during the evolution for each realization of the numerical simulation.
As discussed in Sec. <ref>, our protocol is destroyed by static on-site disorder that breaks inversion symmetry. To explore the difference between the static and temporal cases, we calculate the distribution difference D_f, defined in Eq. (<ref>), of the in-gap instantaneous eigenstates. As shown in Fig. <ref>(a), the average distribution difference of the in-gap instantaneous eigenstates is close to 0, with each point averaged over a duration of 0.1T_f. This result agrees with the clean limit and with inversion-symmetry-preserving static on-site disorder, as given in Appendix <ref>. In contrast, in the static case it is close to 1 or -1, and the in-gap instantaneous eigenstates are no longer hybridized edge states. Consistent with D_f≈0, the parity P=⟨Î⟩ is conserved during the adiabatic evolution in the temporal case [Fig. <ref>(b)]. In conclusion, a temporal-average-inversion-symmetry exists in the temporal case, restoring the parity conservation found for exact inversion symmetry. Protected by this temporal-average-inversion-symmetry, the hybridized edge states |0_+⟩ and |0_-⟩ evolve separately, and the tunable BS and HOM interference are restored, as shown in Figs. <ref>(c) and (d). Here the average fidelity ℱ remains above 0.99 as long as the disorder strength ζ is less than 0.2w in the temporal case, with the same total evolution time T_f as in the clean limit.
§ SUMMARY AND DISCUSSION
Based on the nontrivial boundary states of topological phases, we have established a tunable BS and HOM interference in the SSH chain and elucidated the effects of distinct symmetries.
To realize these behaviours, the hybridized edge states should evolve separately and accumulate a dynamical phase difference.
We reveal that chiral and time-reversal symmetries can protect this process via the adiabatic theorem and examine this numerically by introducing the associated disorders.
For more practical applications, we turn to the role of inversion symmetry and find that the parity conservation protected by this symmetry can also ensure this mechanism.
Although various types of disorder can break exact inversion symmetry and destroy the BS as well as the HOM interference, a temporal-average-inversion-symmetry appears when the disorder becomes temporal.
This emergent symmetry also protects parity conservation, giving rise to temporal-average-inversion-symmetry-protected tunable BS and HOM interference.
Given the wide existence of temporal disorder, our approach can be applied to a large number of realistic systems.
Moreover, we point out that our scheme can be extended to interacting topological states <cit.>.
Lastly, we briefly discuss the experimental feasibility of our protocol.
Photonic waveguide arrays represent a promising platform for exploring topological features, and several topological effects have been observed in this system, including non-quantized adiabatic pumping of topological edge states <cit.>, topological protection of correlation and entanglement <cit.>, quantum interference of topological states of light <cit.>, and 4D quantum Hall physics <cit.>.
The adiabatically modulated Aubry-André-Harper model can be achieved by slowly varying the spacing between the waveguides along the propagation axis <cit.>.
One may use the adiabatically modulated photonic waveguide arrays as a potential platform for testing our protocols.
This work is supported by the National Natural Science Foundation of China under Grant No. 12104103, the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2022A1515010726, and Science and Technology Program of Guangzhou under Grant No. 2023A04J0039.
§ SPECTRUM AND EDGE STATES CONSTRAINED BY CHIRAL SYMMETRY
In this section, we discuss several properties of the spectrum and eigenstates of chiral-symmetric systems.
Owing to chiral symmetry, the single-particle
Hamiltonian of the SSH chain must satisfy the anticommutation
relation
ŜĤŜ^†=-Ĥ,
where the chiral symmetry operator Ŝ is unitary and Hermitian, i.e., ŜŜ^†=Ŝ^2=1. The energy spectrum constrained by chiral symmetry is symmetric: for an eigenstate |ψ_n⟩ with eigenenergy E_n, there is a chiral-symmetric partner Ŝ|ψ_n⟩ with eigenenergy -E_n. The minus sign results from the chiral-symmetric relationship
Ĥ|ψ_n⟩=E_n|ψ_n⟩ ⇒ĤŜ|ψ_n⟩=-ŜĤ|ψ_n⟩
=-E_nŜ|ψ_n⟩.
For E_n≠0, the states |ψ_n⟩ and Ŝ|ψ_n⟩ are eigenstates with opposite energies and must be orthogonal. Moreover, every nonzero-energy eigenstate has equal support on the odd and even sites, which can be seen directly from
0=⟨ψ_n|Ŝ|ψ_n⟩
=⟨ψ_n|P̂_ odd|ψ_n⟩
-⟨ψ_n|P̂_ even|ψ_n⟩.
It is well known that, in the topologically nontrivial phase (|v|<|w|), the SSH chain hosts two zero-energy edge states localized at the boundaries in the thermodynamic limit N→∞. The edge states exponentially localized on the left and right sides of the chain are (un-normalized)
|L⟩ = |1⟩+η|3⟩+⋯+η^N-1|2N-1⟩,
|R⟩ = |2N⟩+η|2N-2⟩+⋯+η^N-1|2⟩,
where η=-v/w denotes the localization factor. We can derive the zero-energy edge states by solving Ĥ|L⟩=0 (Ĥ|R⟩=0) under the left (right) semi-infinite boundary condition. The right edge state |R⟩ has nonvanishing components only on the even sites, while the left edge state |L⟩ is distributed over the odd sites. In a finite-sized system, the energies of the edge states remain very close to zero and form a pair of almost-zero-energy eigenvalues opposite to each other, due to chiral symmetry [Fig. <ref>(a) of the main text].
Obviously, the left edge state |L⟩ and the right edge state |R⟩, localized only on the odd or even sites respectively, are no longer eigenstates according to Eq. <ref> when v≠0. Fortunately, we can obtain the almost-zero-energy eigenstates to a good approximation by diagonalizing the Hamiltonian in the basis {|L⟩, |R⟩}:
⟨L|Ĥ|L⟩ = ⟨R|Ĥ|R⟩ = 0,
⟨L|Ĥ|R⟩ = ⟨R|Ĥ|L⟩ = vη^N-1(η^2-1)/(η^2N-1).
These almost-zero-energy eigenstates |0_+⟩ and |0_-⟩ are given by
|0_+⟩ = (|L⟩+(-1)^N+1|R⟩)/√(2),
E_+=|vη^N-1(η^2-1)/(η^2N-1)|,
|0_-⟩ = (|L⟩-(-1)^N+1|R⟩)/√(2),
E_-=-|vη^N-1(η^2-1)/(η^2N-1)|.
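The quality of this approximation can be checked numerically, as in the sketch below, which compares the perturbative splitting with exact diagonalization of a finite chain (the parameter values are illustrative; the agreement is expected to be best deep in the topological phase, |v|≪|w|).

```python
import numpy as np

N, v, w = 10, 0.3, 1.0
eta = -v / w

# un-normalized edge states on the odd (|L>) and even (|R>) sublattice
L = np.zeros(2 * N); L[0::2] = eta ** np.arange(N)
R = np.zeros(2 * N); R[1::2] = (eta ** np.arange(N))[::-1]

hop = np.empty(2 * N - 1); hop[0::2], hop[1::2] = v, w
H = np.diag(hop, 1) + np.diag(hop, -1)     # open-boundary SSH chain

E_exact = np.sort(np.abs(np.linalg.eigvalsh(H)))[:2]   # two in-gap levels
E_approx = abs(v * eta ** (N - 1) * (eta ** 2 - 1) / (eta ** (2 * N) - 1))
print(E_exact, E_approx)   # nearly degenerate pair, both close to E_approx
```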
§ PARITY AND INVERSION SYMMETRY
Besides its internal symmetries, the SSH chain also possesses inversion symmetry, [Ĥ,Î]=0.
The inversion symmetry operator Î is likewise unitary and Hermitian, i.e., ÎÎ^†=Î^2=1. As is well known, each eigenstate |ψ_n⟩ of the system has a definite parity under inversion symmetry,
Î|ψ_n⟩=±|ψ_n⟩.
We can express |ψ_n⟩ as
|ψ_n⟩=∑_i=1^2Nc_i|i⟩,
which satisfies |c_i|=|c_2N+1-i|. As a result, the distributions of an eigenstate on the left and right halves of the SSH chain are equal, that is,
∑_i=1^N|c_i|^2=∑_j=N+1^2N|c_j|^2.
We now turn to the inversion-symmetry-preserving on-site disorder introduced in Sec. <ref> of the main text.
To proceed, we calculate the distribution difference
D_f=∑_i=1^N|c_i|^2-∑_j=N+1^2N|c_j|^2.
of the two in-gap instantaneous eigenstates. As shown in Fig. <ref>(a), the distribution difference D_f is exactly zero, and the numerical result agrees well with Eq. <ref>. Furthermore, we also calculate the parity during the evolution
P=⟨Î⟩,
with initial states (|1⟩+|2N⟩)/√(2) and (|1⟩-|2N⟩)/√(2), respectively. The numerical result, presented in Fig. <ref>(b), clearly shows the conservation of parity.
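Numerically, the inversion operator Î simply reverses the order of the site amplitudes, so both the parity P and the distribution difference D_f reduce to one-liners, as in the following sketch (the example states (|1⟩±|2N⟩)/√2 are illustrative edge-state superpositions).

```python
import numpy as np

def parity(psi):
    """P = <psi| I |psi>, with inversion reversing the site order."""
    return np.vdot(psi, psi[::-1]).real

def distribution_difference(psi):
    """D_f = sum_{i<=N} |c_i|^2 - sum_{i>N} |c_i|^2."""
    p = np.abs(psi) ** 2
    half = len(psi) // 2
    return p[:half].sum() - p[half:].sum()

psi_plus = np.zeros(20); psi_plus[0] = psi_plus[-1] = 1 / np.sqrt(2)
psi_minus = np.zeros(20); psi_minus[0] = 1 / np.sqrt(2); psi_minus[-1] = -1 / np.sqrt(2)
print(parity(psi_plus), parity(psi_minus))   # +1.0, -1.0 (definite parity)
print(distribution_difference(psi_plus))     # 0.0, cf. Eq. above
```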
DXiaoRMP2010
D. Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. Mod. Phys. 82, 1959 (2010).
MZHasanRMP2010
M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010).
XLQiRMP2011
X.-L. Qi and S.-C. Zhang, Topological insulators and superconductors, Rev. Mod. Phys. 83, 1057 (2011).
HaldanePRL2008
F. D. M. Haldane and S. Raghu, Possible realization of directional optical waveguides in photonic crystals with broken time-reversal symmetry, Phys. Rev. Lett. 100,
013904 (2008).
ZWangNature2009
Z. Wang, Y. Chong, J. D. Joannopoulos, and M. Soljačić, Observation of unidirectional backscattering-immune topological electromagnetic states. Nature 461, 772 (2009).
LuNP2014
L. Lu, J. D. Joannopoulos, and M. Soljačić, Topological photonics, Nat. Photonics 8, 821 (2014).
TOzawaRMP2019
T. Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, M. C. Rechtsman, D. Schuster, J. Simon, O. Zilberberg, and I. Carusotto, Topological photonics, Rev. Mod. Phys. 91, 015006 (2019).
ZYangPRL2015
Z. Yang, F. Gao, X. Shi, X. Lin, Z. Gao, Y. Chong, and B. Zhang, Topological Acoustics, Phys. Rev. Lett. 114, 114301 (2015).
MaNRP2019
G. Ma, M. Xiao, and C. T. Chan, Topological phases in acoustic and mechanical systems, Nat. Rev. Phys. 1, 281
(2019).
XueNRM2022
H. Xue, Y. Yang, and B. Zhang, Topological acoustics, Nat. Rev. Mater. 7, 974 (2022).
MAidelsburgerPRL2013
M. Aidelsburger, M. Atala, M. Lohse, J. T. Barreiro, B. Paredes, and I. Bloch, Realization of the Hofstadter Hamiltonian with Ultracold Atoms in Optical Lattices, Phys. Rev. Lett. 111, 185301 (2013).
CooperRMP2019
N. R. Cooper, J. Dalibard, and I. B. Spielman, Topological bands for ultracold atoms, Rev. Mod. Phys. 91, 015005 (2019).
YHatsugaiPRL1993
Y. Hatsugai, Chern number and edge states in the integer quantum Hall effect, Phys. Rev. Lett. 71, 3697 (1993).
EssinPRB2011
A. M. Essin and V. Gurarie, Bulk-boundary correspondence of topological insulators from their respective green’s functions, Phys. Rev. B 84, 125132 (2011).
Ryu_2010
S. Ryu, A. P. Schnyder, A. Furusaki, and A. W. W. Ludwig, Topological insulators and superconductors: tenfold way and dimensional hierarchy, New J. Phys. 12, 065010 (2010).
CKChiuRMP2016
C.-K. Chiu, J. C. Y. Teo, A. P. Schnyder, and S. Ryu, Classification of topological quantum matter with symmetries, Rev. Mod. Phys. 88, 035005 (2016).
JSeoNature2010
J. Seo, P. Roushan, H. Beidenkopf, Y. S. Hor, R. J. Cava, and A. Yazdani, Transmission of topological surface states through surface barriers, Nature 466, 343 (2010).
ABlancoScience2018
A. Blanco-Redondo, B. Bell, D. Oren, B. J. Eggleton, and M. Segev, Topological protection of biphoton states, Science 362, 568 (2018).
YEKrausPRL2012
Y. E. Kraus, Y. Lahini, Z. Ringel, M. Verbin, and O. Zilberberg, Topological States and Adiabatic Pumping in Quasicrystals, Phys. Rev. Lett. 109, 106402 (2012).
NLangQI2017
N. Lang and H. P. Büchler, Topological networks for quantum communication between distant qubits, npj Quantum Inf. 3, 47 (2017).
CDlaskaQST2017
C. Dlaska, B. Vermersch, and P. Zoller, Robust quantum state transfer via topologically protected edge channels in dipolar arrays, Quantum Sci. Technol. 2, 015001 (2017).
FMeiPRA2018
F. Mei, G. Chen, L. Tian, S. L. Zhu, and S. Jia, Robust quantum state transfer via topological edge states in superconducting qubit chains, Phys. Rev. A 98, 012331 (2018).
SLonghiPRB2019
S. Longhi, Topological pumping of edge states via adiabatic passage, Phys. Rev. B 99, 155150 (2019).
NEPalaiodimopoulosPRA2021
N. E. Palaiodimopoulos, I. Brouzos, F. K. Diakonos, and G.Theocharis, Fast and robust quantum state transfer via a topological chain, Phys. Rev. A 103, 052409 (2021).
LHuangPRA2022
L. Huang, Z. Tan, H. Zhong, and B. Zhu, Fast and robust quantum state transfer assisted by zero-energy interface states in a splicing Su-Schrieffer-Heeger chain, Phys. Rev. A 106, 022419 (2022).
CWangPRA2022
C. Wang, L. Li, J. Gong, and Y.-X. Liu, Arbitrary entangled state transfer via a topological qubit chain, Phys. Rev. A 106, 052411 (2022).
PBorossPRB2019
P. Boross, J. K. Asbóth, G. Széchenyi, L. Oroszlány, and A. Pályi, Poor man’s topological quantum gate based on the Su-Schrieffer-Heeger model, Phys. Rev. B 100, 045414 (2019).
MNarozniakPRB2021
M. Narożniak, M. C. Dartiailh, J. P. Dowling, J. Shabani, and T. Byrnes, Quantum gates for Majoranas zero modes in topological superconductors in one-dimensional geometry, Phys. Rev. B
103, 205429 (2021).
RHammerPRB2013
R. Hammer and W. Pötz, Dynamics of domain-wall Dirac fermions on a topological insulator: A chiral fermion beam splitter, Phys. Rev. B 88, 235119 (2013).
XSWangPRB2017
X. S. Wang, Y. Su, and X. R. Wang, Topologically protected unidirectional edge spin waves and beam splitter, Phys. Rev. B 95, 014435 (2017).
LQiPRB2021
L. Qi, Y. Xing, X. D. Zhao, S. Liu, S. Zhang, S. Hu, and H. F. Wang, Topological beam splitter via defect-induced edge channel in the Rice-Mele model, Phys. Rev. B 103, 085129 (2021).
LQiQuantum2021
L. Qi, Y. Yan, Y. Xing, X. D. Zhao, S. Liu, X. Han, W. X. Cui, S. Zhang, and H. F. Wang, Tunable Topological Beam Splitter in Superconducting Circuit Lattice, Quantum Rep. 3, 1 (2021).
LQiPRA2023
L. Qi, N. Han, S. Hu, and A.-L. He, Engineering the unidirectional topological excitation transmission and topological diode in the Rice-Mele model, Phys. Rev. A 108, 032402 (2023).
PhysRevB.55.1142
A. Altland and M. R. Zirnbauer, Nonstandard symmetry classes in mesoscopic normal-superconducting hybrid structures, Phys. Rev. B 55, 1142 (1997).
PhysRevLett.98.106803
L. Fu, C. L. Kane, and E. J. Mele, Topological insulators in three dimensions, Phys. Rev. Lett. 98, 106803 (2007).
Slager2013
R.-J. Slager, A. Mesaros, V. Juričić, and J. Zaanen, The space group classification of topological band-insulators, Nat. Phys. 9, 98 (2013).
PhysRevX.7.041069
J. Kruthoff, J. de Boer, J. van Wezel, C. L. Kane, and R.-J. Slager, Topological classification of crystalline insulators through band structure combinatorics, Phys. Rev. X 7, 041069 (2017).
Bradlyn2017
B. Bradlyn, L. Elcoro, J. Cano, M. G. Vergniory, Z. Wang, C. Felser, M. I. Aroyo, and B. A. Bernevig, Topological quantum chemistry, Nature (London) 547, 298 (2017).
Po2017
H. C. Po, A. Vishwanath, and H. Watanabe, Symmetrybased indicators of band topology in the 230 space groups, Nat. Commun. 8, 50 (2017).
Elcoro2021
L. Elcoro, B. J. Wieder, Z. Song, Y. Xu, B. Bradlyn, and B. A. Bernevig, Magnetic topological quantum chemistry, Nat. Commun. 12, 5965 (2021).
AEkertRMP1996
A. Ekert and R. Jozsa, Quantum computation and Shor’s factoring algorithm, Rev. Mod. Phys. 68, 733 (1996).
JWPanRMP2012
J.-W. Pan, Z.-B. Chen, C.-Y. Lu, H. Weinfurter, A. Zeilinger, and M. Zukowski, Multiphoton entanglement and interferometry, Rev. Mod. Phys. 84, 777 (2012).
LPezzeRMP2018
L. Pezzè, A. Smerzi, M. K. Oberthaler, R. Schmied, and P. Treutlein, Quantum metrology with nonclassical states of atomic ensembles, Rev. Mod. Phys. 90, 035005 (2018).
MCRechtsmanOptica2016
M. C. Rechtsman, Y. Lumer, Y. Plotnik, A. Perez-Leija, A. Szameit, and M. Segev, Topological protection of photonic path entanglement, Optica 3, 925 (2016).
MWangNanophotonics2019
M. Wang, C. Doyle, B. Bell, M. J. Collins, E. Magi, B. J. Eggleton, M. Segev, and A. Blanco-Redondo, Topologically protected entangled photonic states, Nanophotonics 8, 1327 (2019).
KMonkmanPRR2020
K. Monkman and J. Sirker, Operational entanglement of symmetry-protected topological edge states, Phys. Rev. Res. 2, 043191 (2020).
JXHanPRA2021
J.-X. Han, J.-L. Wu, Y. Wang, Y. Xia, Y.-Y. Jiang, and J. Song, Large-scale Greenberger-Horne-Zeilinger states through a topologically protected zero-energy mode in a superconducting qutrit-resonator chain, Phys. Rev. A 103, 032402 (2021).
JLTambascoSA2018
J. L. Tambasco, G. Corrielli, R. J. Chapman, A. Crespi, O. Zilberberg, R. Osellame, and A. Peruzzo, Quantum interference of topological states of light, Sci. Adv. 4, eaat3187 (2018).
CKHongPRL1987
C. K. Hong, Z. Y. Ou, and L. Mandel, Measurement of subpicosecond time intervals between two photons by interference, Phys. Rev. Lett. 59, 2044 (1987).
PGHarper1955
P. G. Harper, Single band motion of conduction electrons in a uniform magnetic field, Proc. Phys. Soc., London, Sect. A 68, 874 (1955).
SHuPRA2020
S. Hu, Y. Ke, and C. Lee, Topological quantum transport and spatial entanglement distribution via a disordered bulk channel, Phys. Rev. A 101, 052323 (2020).
WPSuPRL1979
W. P. Su, J. R. Schrieffer, and A. J. Heeger, Solitons in Polyacetylene, Phys. Rev. Lett. 42, 1698 (1979).
JKAsboth2016
J. K. Asbóth, L. Oroszlány, and A. Pályi, A short course on topological insulators, Lect. Notes Phys. 919, 1-22 (2016).
PhysRevB.86.115112
C. Fang, M. J. Gilbert, and B. A. Bernevig, Bulk topological invariants in noninteracting point group symmetric insulators, Phys. Rev. B 86, 115112 (2012).
PhysRevB.87.035119
C. Fang, M. J. Gilbert, and B. A. Bernevig, Entanglement spectrum classification of C_n-invariant noninteracting topological insulators in two dimensions, Phys. Rev. B 87, 035119 (2013).
LFuPRL2012
L. Fu and C. L. Kane, Topology, delocalization via average symmetry and the symplectic Anderson transition, Phys. Rev. Lett. 109, 246605 (2012).
PhysRevB.89.155424
I. C. Fulga, B. van Heck, J. M. Edge, and A. R. Akhmerov, Statistical topological insulators, Phys. Rev. B 89, 155424 (2014).
PhysRevResearch.2.012067
A. Agarwala, V. Juričić, and B. Roy, Higher-order topological insulators in amorphous solids, Phys. Rev. Res. 2,
012067 (2020).
PhysRevLett.126.206404
J.-H. Wang, Y.-B. Yang, N. Dai, and Y. Xu, Structural-disorder-induced second-order topological insulators in three dimensions, Phys. Rev. Lett. 126, 206404 (2021).
PhysRevX.13.031016
R. Ma and C. Wang, Average symmetry-protected topological phases, Phys. Rev. X 13, 031016 (2023).
Tao2023
Y.-L. Tao, J.-H. Wang, and Y. Xu, Average symmetry protected higher-order topological amorphous insulators,
SciPost Phys. 15, 193 (2023).
PhysRevB.89.155114
A. Alexandradinata, X. Dai, and B. A. Bernevig, Wilson-loop characterization of inversion-symmetric topological insulators, Phys. Rev. B 89, 155114 (2014).
PhysRevA.102.013301
Z. Lei, Y. Deng, and C. Lee, Symmetry-protected topological phase for spin-tensor-momentum-coupled ultracold atoms, Phys. Rev. A 102, 013301 (2020).
PhysRevB.103.024205
S. Velury, B. Bradlyn, and T. L. Hughes, Topological crystalline phases in a disordered inversion-symmetric
chain, Phys. Rev. B 103, 024205 (2021).
YKePRA2017
Y. Ke, X. Qin, Y. S. Kivshar, and C. Lee, Multiparticle Wannier states and Thouless pumping of interacting bosons, Phys. Rev. A 95, 063630 (2017).
WLiuPRR2023
W. Liu, S. Hu, L. Zhang, Y. Ke, and C. Lee, Correlated topological pumping of interacting bosons assisted by Bloch oscillations, Phys. Rev. Research 5, 013020, (2023).
WalterNP2023
A.-S. Walter, Z. Zhu, M. Gächter, J. Minguzzi, S. Roschinski, K. Sandholzer, K. Viebahn, and T. Esslinger, Quantization and its breakdown in a Hubbard–Thouless pump, Nat. Phys. 19, 1471, (2023).
ZilberbergNature2018
O. Zilberberg, S. Huang, J. Guglielmon, M. Wang, K. P. Chen, Y. E. Kraus, and M. C. Rechtsman, Photonic topological boundary pumping as a probe of 4D quantum Hall physics, Nature 553, 59 (2018).
A Data Selection Approach for Enhancing Low Resource Machine Translation Using Cross-Lingual Sentence Representations
Nidhi Kowtal *
SCTR's Pune Institute of Computer Technology
Pune, India
[email protected]
Tejas Deshpande *
SCTR's Pune Institute of Computer Technology
Pune, India
[email protected]
Raviraj Joshi
Indian Institute of Technology Madras, India
L3Cube Labs, Pune
Pune, India
[email protected]
September 9, 2024
==============================================================================================================================================================================================================================================================================================================================================
* Authors contributed equally.
§ ABSTRACT
Machine translation in low-resource language pairs faces significant challenges due to the scarcity of parallel corpora and linguistic resources. This study focuses on the case of English-Marathi language pairs, where existing datasets are notably noisy, impeding the performance of machine translation models. To mitigate the impact of data quality issues, we propose a data filtering approach based on cross-lingual sentence representations.
Our methodology leverages a multilingual SBERT model to filter out problematic translations in the training data. Specifically, we employ an IndicSBERT similarity model to assess the semantic equivalence between original and translated sentences, allowing us to retain linguistically correct translations while discarding instances with substantial deviations. The results demonstrate a significant improvement in translation quality over the baseline post-filtering with IndicSBERT. This illustrates how cross-lingual sentence representations can reduce errors in machine translation scenarios with limited resources.
By integrating multilingual sentence BERT models into the translation pipeline, this research contributes to advancing machine translation techniques in low-resource environments. The proposed method not only addresses the challenges in English-Marathi language pairs but also provides a valuable framework for enhancing translation quality in other low-resource language translation tasks.
Low Resource Machine Translation, Cross-Lingual Sentence Representations, Indic Languages, Multilingual Natural Language Processing
§ INTRODUCTION AND MOTIVATION
Machine translation in low-resource language pairs encounters several challenges, the most significant being the scarcity of parallel corpora and linguistic resources. To overcome these obstacles, datasets are often generated automatically, simplifying the training of translation models. However, datasets produced through such techniques often contain inherent noise, presenting a significant challenge to the creation of reliable and accurate translations. When translating from a high-resource language to a low-resource language, grammatical errors often occur and a few words are skipped during translation, which contributes to the noise in these datasets.
A few methods of automated dataset generation include:
Parallel Corpora Extraction:
In parallel corpora extraction, texts in multiple languages are aligned using algorithms to match sentences for training translation models. These texts are usually sourced from translated books, articles, or official records. The scarcity of high-quality parallel corpora, particularly for specific language pairs, restricts the effectiveness of this approach and may hinder the size and diversity of the training dataset.
Back Translation:
Utilizing an established translation model, back translation generates a synthetic parallel dataset by translating monolingual data back into the original language. The model's generalization is influenced by potential artefacts and biases introduced from either the translation model or the original training set. Despite this, the method proves effective, especially in scenarios where parallel corpora in the target language pair are limited.
Data Augmentation:
By employing automated methods like paraphrasing, data augmentation can be achieved to enhance the diversity of training data. This aids the model in learning from a broader spectrum of examples, thereby increasing its proficiency in handling linguistic variations. However, excessive and aggressive data augmentation may lead to the generation of nonsensical examples, causing confusion for the model rather than contributing to its generalization.
Web Scraping:
Text content in multiple languages is extracted from online sources using web scraping. Through the use of automated data extraction from websites, we can gather a wide range of texts. To be more precise, we use web scraping to gather parallel translations that are accessible on multilingual websites. Through our website navigation, we can extract sentences that align in different languages, thereby building useful parallel translation corpora. These corpora, which comprise similar sentences in several languages, turn into an essential tool for machine translation model evaluation and training.
Pretrained Embeddings:
The pre-trained embedding method makes use of embeddings produced by pre-existing models that have been trained on linguistic datasets. Semantic relationships between words and sentences are captured by these embeddings. These pre-trained embeddings improve machine translation models' comprehension and representation of linguistic nuances. Through transfer learning, the embeddings help models understand patterns without requiring a lot of task-specific training. By using this technique, the models become more adept at managing a variety of language pairs and translating texts more accurately overall.
To tackle these issues, we concentrate on the English-Marathi language pair. The noise in such data captures linguistic complexities that present significant challenges, particularly for machine translation systems. Because of the inherent difficulty of generating high-quality translations from these datasets, reliable filtering mechanisms are essential.
Cross-lingual similarity models have been used to construct the datasets under review so as to replicate the complexities of real-world translation scenarios. Still, recognizing the need for a more resilient filtering mechanism, we use a stronger model to improve the quality of the filtering process. This switch to a more sophisticated model is driven by the goal of improving the accuracy and reliability of translation outputs, ultimately advancing machine translation capabilities in resource-limited language environments.
Our study explores the difficulties that noisy datasets present for low-resource machine translation and highlights the significance of efficient noise reduction techniques. By eliminating problematic sentences, we enhance the quality of translation outputs through the use of the multilingual IndicSBERT model.
§ LITERATURE SURVEY
Machine translation, with a history dating back to the middle of the 20th century, has evolved tremendously. The earliest attempts were rule-based, translating text between languages using hand-crafted linguistic rules and structures. Unfortunately, these systems had trouble processing the nuances of natural language, which made accurate translation difficult in most cases.
The work <cit.> by Vaswani et al. in 2017, is responsible for the development of neural machine translation (NMT) and the popularity of models such as the Transformer architecture. By presenting a self-attention mechanism that could successfully capture complex linguistic patterns and long-range dependencies, the Transformer architecture described in this paper revolutionized the field of machine translation. Using high-resource language pairs, the Transformer architecture's effectiveness was shown, exhibiting notable gains in training efficiency and translation quality.
Machine translation has been impacted by the new era of natural language processing brought about by BERT (Bidirectional Encoder Representations from Transformers). Devlin et al. (2018) <cit.> pioneered BERT, a paradigm-shifting approach that pre-trains a deep bidirectional representation of language on an enormous volume of unlabeled text. Researchers have looked into integrating BERT-based models into machine translation architectures. In their investigation into the use of BERT in neural machine translation, Shavarani and Sarkar (2021) concentrated on gathering important linguistic information from BERT to improve the quality of the translated text. Their research highlighted BERT's capacity to identify complex language patterns, enhancing the model's comprehension of semantics and context.
Zhu et al. (2020) <cit.> expanded on the investigation of BERT integration into neural machine translation by looking into the advantages of incorporating pre-trained contextual embeddings from BERT. The goal of this study was to improve the representation of source and target language sentences in the translation model by capturing more comprehensive semantic information. The study demonstrated the utility of pre-trained language models in advancing machine translation capabilities by demonstrating the potential of BERT to improve translation accuracy and fluency.
Low-resource language pairs like English to Khasi presented unique challenges that spurred the development of creative methods for achieving efficient machine translation. A Transformer-based method for low-resource neural machine translation from English to Khasi was presented by Thabah and Purkayastha in 2021. <cit.> The approach focuses on utilizing the Transformer architecture to improve translation capabilities for underrepresented languages.
In 2021, Gowtham Ramesh <cit.> and his associates unveiled the Samanantar project, which aims to solve a persistent problem in machine translation for Indian languages: the lack of parallel corpora. Machine translation models that are trained effectively require parallel corpora, which are collections of aligned texts in multiple languages. Offering the largest collection of parallel corpora for 11 Indian languages that is publicly available, Samanantar stands out as a trailblazing project. To ensure representation from a range of domains, the project sources diverse texts, which are then aligned to create parallel datasets. This comprehensive resource has grown to be essential for scholars and professionals involved in machine translation into the Indian language. Samanantar greatly aids in overcoming the challenges posed by data scarcity by offering a sizable and varied collection of parallel corpora. This makes it possible to develop and assess machine translation models with better language coverage and quality.
<cit.> improved the method with semantically weighted back translation for morphologically rich and low-resource languages in the context of unsupervised machine translation. Their study sought to improve unsupervised neural machine translation efficiency while taking into account the unique difficulties presented by the linguistic peculiarities of Indian languages.
One major development in the field was the paradigm shift towards statistical machine translation (SMT). Automated evaluation metrics such as BLEU and METEOR were developed <cit.>, offering numerical values to gauge the quality of translations. SMT performed well for language pairs with abundant resources, but its performance in low-resource settings, particularly for Indian languages, remained limited because of the scarcity of parallel corpora.
Presented in 2023 by AI4Bharat and partners <cit.>, the IndicTrans2 project is an all-encompassing endeavour to fulfil the translation requirements of all 22 scheduled Indian languages. Recognizing India's linguistic diversity, the project seeks to create machine translation models that are both accessible and of excellent quality. To make these models accessible and efficient for all scheduled Indian languages, IndicTrans2 expands on the achievements and difficulties seen in the machine translation field. To emphasize the value of linguistic diversity in the Indian context, the initiative involves the development of specialized models that are suited to the linguistic features of each language. IndicTrans2, with its emphasis on quality and accessibility, stands out as a noteworthy addition to the advancement of machine translation capabilities for all scheduled Indian languages, promoting inclusivity and linguistic representation in the digital sphere.
§ METHODOLOGY
§.§ Overview of our proposed approach
§.§.§ Original Data
The initial dataset consists of 3.6 million sentence pairs from AI4Bharat's BPCC Mined dataset, representing the original noisy data. These sentences are noisy since they are mined rather than manually annotated.
This dataset includes diverse sentences with potential variations in grammar, context, and translation quality.
§.§.§ Discrepancies found in the Dataset
After evaluating the 3.6 million sentence corpus, we found that the dataset contained duplicates. Sentence pairs with identical translations were therefore removed, while language pairs with different translations on either side were retained, since these expose the model to different contexts within a language.
In Table <ref>, we present the discrepancies found in the dataset.
Following a manual assessment of 200 randomly chosen sample sentences from the dataset, various kinds of discrepancies between the Marathi and English translations were found. These include situations where the Marathi translation did not fully convey subtleties of the English sentence, instances where the translated text conveyed a different meaning, translations that lacked specificity or were entirely ambiguous, inconsistencies in details and missing contextual information, and sentences with similar contexts but distinct meanings.
The fact that nearly 50% of the sampled data showed these kinds of inconsistencies is notable and underscores the difficulties in preserving accurate translations across the dataset.
§.§.§ Data selection using multilingual IndicSBERT
The IndicSBERT model <ref> is incorporated into the methodology as a tool for measuring sentence similarity. It is used to obtain a similarity score, ranging from 0 to 1, between each English sentence and its corresponding Marathi sentence. We discarded the sentence pairs whose similarity score was below 0.7 and trained the model only on the filtered pairs with a similarity score greater than 0.7. In this way, we procured a high-quality corpus of 1.5 million sentence pairs.
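A minimal sketch of this filtering step, using the sentence-transformers library, is shown below. The model identifier is an assumption (any L3Cube IndicSBERT checkpoint covering English and Marathi could be substituted), while the 0.7 threshold is the one used in this work.

```python
from sentence_transformers import SentenceTransformer, util

# model identifier assumed; substitute any IndicSBERT checkpoint
# with English-Marathi support
model = SentenceTransformer("l3cube-pune/indic-sentence-similarity-sbert")

def filter_pairs(en_sents, mr_sents, threshold=0.7):
    """Keep only pairs whose cross-lingual cosine similarity >= threshold."""
    en_emb = model.encode(en_sents, convert_to_tensor=True)
    mr_emb = model.encode(mr_sents, convert_to_tensor=True)
    scores = util.cos_sim(en_emb, mr_emb).diagonal()  # aligned row-wise pairs
    return [(e, m) for e, m, s in zip(en_sents, mr_sents, scores)
            if s.item() >= threshold]
```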
§.§.§ Model Training
We trained our model on top of the IndicBART model, using 1 million sentences after pre-processing the dataset, and found some shortcomings in the translations.
During the translation process, we noticed that some translations were grammatically incorrect and included English words. After reviewing the dataset, we found it to be noisy with several discrepancies. Since manually correcting such a large dataset wasn't feasible, we decided to use the IndicSBERT model to address these issues.
By filtering the sentence pairs based on the IndicSBERT similarity score as described above, we retained 1 million sentences in the training dataset. After training the model on this filtered dataset, the results showed a significant improvement over the previous model: the translations were grammatically correct and no longer contained English words. The outputs of all three models, namely the original IndicBART, the IndicBART fine-tuned on the unfiltered data, and the IndicBART fine-tuned on the IndicSBERT-filtered dataset, are analyzed below.
§.§.§ Model Evaluation
The baseline model and our model are evaluated using the BLEU, METEOR, CHRF, CHRF++ and IndicSBERT scores, as reported in the table.
§.§ Dataset details
We chose "AI4Bharat's BPCC" Dataset for our model trainng.
BPCC is an extensive collection of parallel corpora created for eleven different Indic languages. The collection includes parallel texts written in Assamese, Bengali, Marathi, Gujarati, Oriya, Tamil, Telugu, Malayalam, Bengali, Bengal, and Hindi. It is a valuable tool for tackling the problem of sparse parallel corpora for low-resource languages.
§.§ Models:
§.§.§ IndicBART Model
Specifically designed for Indian languages, the IndicBART model is a transformer-based architecture. It serves a wide linguistic spectrum, having been pre-trained on a multilingual corpus that includes 11 major Indian languages. Fine-tuned for tasks like machine translation from English to Marathi, the model excels at interpreting the subtle differences between these two languages. Its specialization in Marathi ensures accurate and contextually relevant translations, making it a useful tool for researchers and developers working on natural language processing tasks for Indian languages.
§.§.§ IndicSBERT Model
The Indic Sentence-BERT (IndicSBERT) model, created by L3Cube and optimized for sentence similarity in Indian languages, has been trained on a wide range of them, such as Bengali, Tamil, Telugu, Kannada, Malayalam, Hindi, Marathi, and Gujarati. Leveraging the SBERT architecture, it is especially good at capturing semantic relationships between sentences, which makes it useful for tasks like filtering out dissimilar sentence pairs in a multilingual context. Because it has been fine-tuned for the intricacies of Indian languages, the model recognizes the contextual subtleties that affect sentence similarity. The embeddings it provides can be used to measure the semantic similarity between sentence pairs in Hindi, Marathi, or any other supported Indian language, offering developers and researchers a practical way to include sentence-similarity measurement in applications and research projects on Indian languages.
§ GOLD TESTSET CURATION
We observed that the dataset previously used as a test set contained some translation errors. We therefore considered the news dataset available in the MahaNLP corpus. We initially sampled 10000 sentences at random, manually went through them, and selected the top 1500 sentences that were translated accurately. We then calculated the metrics of our translation model on this manually curated test dataset. To assess our model's performance, we considered the following metrics.
§ RESULTS AND DISCUSSION
§.§ Evaluation Metrics
* IndicSBERT Score -
The IndicSBERT model is used to compute the similarity score between each translated sentence and its respective Marathi reference sentence from the dataset. Our fine-tuned model is evaluated on the manually curated test dataset using the IndicSBERT score as a metric. The mean IndicSBERT scores of both models are given in Table <ref>.
* BLEU Score -
The BLEU score is the standard benchmark for evaluating the quality of machine translation. It compares the translations produced by the model with reference translations, examining n-grams, i.e., single words, pairs of words, and so forth. For each n-gram size, it counts how many n-grams are shared between the model output and the reference translation and computes a score from these counts. A score of one indicates a perfect match, while a score of zero means the model predicted no n-gram correctly. Higher BLEU scores indicate that the model's translations are closer to the references. Table <ref> reports the BLEU scores of both models.
* Meteor Score -
The METEOR score is a metric used to measure the quality of machine-generated translations by comparing them to reference translations, which are considered the gold standard. It considers various aspects like precision, recall, and alignment to evaluate how well the generated translation captures the meaning and nuances of the original text. METEOR is particularly useful in machine translation evaluation because it goes beyond simple word matching and considers the overall fluency and correctness of the translated sentences. Higher METEOR scores indicate more accurate and contextually relevant translations, providing a quantitative measure for assessing the performance of machine translation models. Table <ref> indicates the Meteor score of both the models.
* CHRF and CHRF++ Score -
These metrics operate at the character level, in contrast to conventional metrics that focus on words. Essentially, they systematically check the agreement between machine-generated and reference translations by matching character n-grams.
A higher CHRF score indicates better performance, i.e., a closer alignment between the machine translation and the human reference. CHRF++, an extended version of CHRF, additionally includes word unigrams and bigrams in the analysis, offering a more nuanced evaluation of translation quality.
§.§ Observations from Evaluation Metrics
The five metrics mentioned above, namely the IndicSBERT, BLEU, METEOR, CHRF and CHRF++ scores, were used for evaluation. The scores in Table <ref> improved significantly after training the model on the filtered dataset: the IndicSBERT score increased by 2%, the BLEU score by 6.7%, the METEOR score by 5.4%, the CHRF score by 6.63%, and the CHRF++ score by 6.4%.
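For reference, the corpus-level scores can be computed, for example, with the sacrebleu and nltk packages as sketched below. This is one possible evaluation setup, not necessarily the exact one used here; METEOR is averaged over sentence-level scores and requires the nltk WordNet data to be downloaded.

```python
from sacrebleu.metrics import BLEU, CHRF
from nltk.translate.meteor_score import meteor_score

def evaluate(hyps, refs):
    """Corpus-level BLEU / CHRF / CHRF++ plus mean sentence-level METEOR."""
    return {
        "BLEU":   BLEU().corpus_score(hyps, [refs]).score,
        "CHRF":   CHRF().corpus_score(hyps, [refs]).score,
        "CHRF++": CHRF(word_order=2).corpus_score(hyps, [refs]).score,
        "METEOR": sum(meteor_score([r.split()], h.split())
                      for h, r in zip(hyps, refs)) / len(hyps),
    }
```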
§.§ Observations from Sample Translations
Observations from the examples in Table <ref> :
1) In the first example, the fine-tuned model's translation is grammatically correct.
2) In sentences 2 and 3, the previous model's translations included some English words, whereas the fine-tuned model's translations contain none.
3) In sentence 4, the previous model's translation contains part of the English sentence verbatim, whereas the fine-tuned model's translation is accurate.
4) In sentence 5, the previous model's translation repeats the same name three times and is inaccurate, whereas the fine-tuned model's translation is accurate.
5) In sentence 6, the previous model's translation is incomplete, whereas the fine-tuned model's translation is complete.
§ CONCLUSION AND FUTURE SCOPE
In this paper, we have presented a method to filter a noisy, low-resource dataset. We conclude that training the model after filtering out the noisy sentences from the dataset improves its performance.
Our long-term goal is to obtain additional high-quality data for low-resource languages, whose scarcity has been a recurring problem in our work. We plan to collaborate with linguists and institutions to gather large, varied datasets so that our models can be trained more effectively.
§ ACKNOWLEDGEMENT
This work was done under the mentorship of Mr. Raviraj Joshi (Mentor, L3Cube Pune). We would like to express our gratitude towards him for his continuous support and encouragement.
Learning-Based Error Detection System for Advanced Vehicle Instrument Cluster Rendering

Cornelius Bürkle, Fabian Oboril, Kay-Ulrich Scholl

September 9, 2024
==================================================================================================================================================================================
§ ABSTRACT
The automotive industry is currently expanding digital display options with every new model that comes onto the market. This entails not just an expansion in dimensions, resolution, and customization choices, but also the capability to employ novel display effects like overlays while assembling the display cluster content. Unfortunately, this raises the need for appropriate monitoring systems that can detect rendering errors and apply appropriate countermeasures when required. Classical solutions such as Cyclic Redundancy Checks (CRC) will soon no longer be viable, as any sort of alpha blending, warping or scaling of content can cause unwanted CRC violations. Therefore, we propose a novel monitoring approach to verify the correctness of displayed content, using telltales (e.g. warning signs) as an example. It uses a learning-based approach to separate “good” telltales, i.e. those that a human driver will understand correctly, from “corrupted” telltales, i.e. those that will not be visible or perceived correctly. As a result, it possesses inherent resilience against individual pixel errors and implicitly supports changing backgrounds as well as overlay or scaling effects. This is underlined by our experimental study, in which all “corrupted” test patterns were correctly classified while no false alarms were triggered.
§ INTRODUCTION
Modern vehicle instrument clusters are becoming increasingly complex and are able to show an ever-increasing amount of information paired with a growing degree of user design configuration. Nowadays, all data, including the speedometer, telltales (signal lights) and navigation, are digitally rendered, and simple analog designs belong to the past <cit.>. The advantage of digital displays is obvious, as they can be customized and updated easily and additionally enable modern display effects such as enlargements (scaling), warping or the overlay of information on top of diverse backgrounds. While this new freedom offers a considerable level of design flexibility, it also raises the challenge of ensuring that the rendered content is correct; rendering/composition errors often need to be detected immediately so that appropriate countermeasures can be applied. For example, an error in the navigation part is certainly inconvenient; even worse is a corrupted telltale for a failed brake system. Hence, it is important that such instrument clusters are paired with adequate error detection systems to ensure that all rendered content is correct <cit.>.
The standard approach to ensure correctness of displayed information is to use classic error detection codes such as CRC to verify the video/image stream after composition but before it is sent to the display matrix. However, these codes are approaching their limits in a world in which automakers have started to use overlay effects (see Figure <ref>) and in which, in the near future, users will be able to customize the look of the display cluster, the background, the rendering effects, the colors or the placement of information. For example, if an icon color is modified from red to orange due to an overlay effect with a greenish background, CRC may flag this modification as erroneous if the CRC check value is not updated as well. This, however, is not trivial, requires expensive hardware modifications, and may not be possible for all rendering effects <cit.>. Hence, new, more flexible error detection approaches are required for advanced digital instrument clusters.
Therefore, we propose a novel learning-based error detection system for the complex rendering process of modern vehicle instrument clusters, using telltales as an example. Our system uses an anomaly detection approach that analyzes rendered telltales to detect errors that would cause a wrong human perception of the telltale. A major advantage of this system over the state-of-the-art is generalization combined with an inherent robustness against minor disturbances: rendering errors that do not alter the human understanding, such as a missing pixel, do not cause an exception, and advanced rendering effects such as alpha blending are supported as well. Our results show that all “corrupted” test samples, i.e. those that were not clearly perceivable by a human, were properly classified as erroneous, while no false alarms occurred in fault-free scenarios.
The remainder of the paper is organized as follows. In Section <ref> the problem of rendering errors in digital instrument clusters is introduced. Afterwards, in Section <ref> our novel solution is presented, followed by an experimental evaluation in Section <ref>. Finally, Section <ref> concludes the paper.
§ BACKGROUND
Modern vehicle instrument clusters use a variety of data sources that need to be aggregated into one image (or video stream) that can then be sent to the display for visualization (see Figure <ref>). This composition involves many processing steps at various levels like converting CAN messages with vehicle status information to telltales which then need to be stitched and overlayed in multiple steps. Consequently, various errors can affect the finally displayed image, which can occur at any of the composition stages, as described in <cit.>. It is important to note that some errors can impair only one composition element, for example a single telltale, while the rest of the image is perfectly rendered. Errors in the finally displayed image may include corruption or artifacts (e.g. wrong pixels), distortion (wrong shape), erroneous zoom factor (wrong size), erroneous orientation/color/contrast or brightness to name only the most relevant types <cit.>.
An important aspect of the content composition is that it can involve non-linear image processing functions, i.e. I_out = f(I_in), where f is not a linear function. In the past, composition was often very simple, as elements were usually placed next to each other without being modified or rendered on top of each other using e.g. blending effects. Hence, CRC checks were an effective means to detect any error during the execution of the composition function f <cit.>. However, when for example alpha blending is used, the CRC check value changes, as illustrated in Figure <ref>. This requires either expensive hardware modifications to calculate the CRC for each render step (e.g. <cit.>) or other approaches to detect errors due to rendering faults.
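The effect can be reproduced in a few lines, as in the following sketch, where a random stand-in icon is alpha-blended over two different backgrounds: both composed results are perceptually valid renderings of the same telltale, yet their CRC32 check values differ, so a single fixed reference value would flag one of them as erroneous.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
icon = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)  # stand-in telltale
bg_a = np.full_like(icon, 30)                                  # dark background
bg_b = np.full_like(icon, 200)                                 # light background

def compose(icon, bg, alpha=0.8):
    """Alpha-blend the icon over a background (the composition step f)."""
    return (alpha * icon + (1 - alpha) * bg).astype(np.uint8)

crc_a = zlib.crc32(compose(icon, bg_a).tobytes())
crc_b = zlib.crc32(compose(icon, bg_b).tobytes())
print(hex(crc_a), hex(crc_b))   # differ although both renderings are correct
```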
In this regard it is worth noting that rendering/composition errors leading to the display of information that a human cannot perceive correctly can be of mixed criticality. If the error affects only a convenience function, e.g. a part of the navigation map is not displayed correctly, it is surely undesired yet tolerable. If in contrast, a warning light about a failed brake system is not correctly perceived, it is a safety critical event. Thus, error detection solutions are required, that can handle both categories <cit.>.
To address the problem of rendering/composition errors, <cit.> proposed an architectural solution depicted in Figure <ref>, which uses additional checks to verify the video/image stream after composition but before it is sent to the display matrix. An abstract description of such an error detector, also referred to as “monitor” or “checker” is provided in <cit.>, which proposed to use a software-based calculation of the check values to be compared including CRC, MISR (multiple independent signature registers) or hash values. While this enables the handling of alpha blending or scaling, single pixel errors or other non-perceivable deviations will be treated as errors. An alternative is to use abstract descriptors to detect errors, as suggested by <cit.>.
Another possibility to validate the final display output is to use diodes <cit.> or even a camera-based monitor <cit.>. However, these come at additional cost and offer limited flexibility, which seems out of scope for most automotive applications.
Finally, it is worth mentioning that AI-based error detection approaches have been proposed. For example, <cit.> presented monitor concepts for perception systems that partially use AI. In summary, AI can be used under certain conditions, including a representative training and validation set, and the use of out-of-distribution monitoring.
In conclusion, a new composition/rendering error detection approach for advanced digital instrument clusters is required. The solution needs to be effective, support high-resolution displays and user-customizable rendering effects, and be robust against errors that humans will not notice or can tolerate. In this paper, we propose a novel learning-based approach that fulfills all these characteristics. It uses an anomaly detection concept on abstract features to verify the correctness, which is explained in depth in the next section.
§ PROPOSED TELLTALE ERROR DETECTION SYSTEM
In this section, we will introduce our novel learning-based error detection approach for rendering, using telltales as example. The goal is to realize a so-called telltale monitor that verifies the rendered output on-the-fly using abstract features rather than pixel-level data, can handle overlays (alpha blending) of multiple telltales on varying backgrounds as well as effects such as warping or scaling, and is resilient against a small number of pixel errors.
Please note that after an error is detected, various means can be employed to mitigate the effects, depending on the criticality of the event (see Section <ref>). In case of the telltales, it may be desired to provide a warning message and possibly re-render a default icon at a replacement location on a fixed (known) background which can be checked with classical means (e.g. CRC). As this is rather straightforward, we focus in this work on the error detection capabilities.
§.§ High-Level Overview
Our proposed telltale monitor, depicted in Figure <ref>, is designed to be permanently active to detect rendering errors that affect the visibility of an active telltale as well as faults that lead to the display of a telltale that should be inactive. The monitor treats the entire rendering step as a black box and only checks if the final result contains any perceivable errors. Therefore, the monitor is realized as a software module comprising multiple processing steps to verify the correct composition of the cluster content before it is handed over to the display.
The first step is to retrieve the required video/image data from the video buffer. Additionally, the location of the telltale is obtained from the instrument cluster configuration storage (e.g. an EPROM). With this input data, the image, which can be quite large (e.g. 4K resolution), is cropped to a smaller region of interest (ROI) around the telltale.
This smaller image (e.g. 52x52 pixels), is then fed into the core component of the system to compute the anomaly score. This step is based on the work from <cit.>, and will be described in detail in Section <ref>.
Finally, the resulting anomaly score is compared against a pre-defined threshold, which can depend on whether the telltale should be displayed or not. The interpretation of the comparison depends on the telltale state: if the telltale should be ON, scores below the threshold represent a successful verification and the content can be displayed, while all other results[Some telltale examples are provided in Figure <ref>] invoke an error handling routine of the display ECU. However, if the telltale should not be displayed, low scores indicate that the telltale is erroneously part of the image. Hence, these are classified as failure, while scores above the threshold represent a success in this situation. The result ('OK' or 'NOK') is then sent to the display ECU.
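This state-dependent reading of the comparison can be condensed into a few lines; the following sketch is illustrative only (the function and parameter names are ours, not taken from the implementation):

```python
def verdict(score: float, tau: float, should_be_on: bool) -> str:
    """Return 'OK' iff the anomaly score is consistent with the intended
    telltale state: a low FRE score means a valid telltale is visible."""
    telltale_visible = score < tau
    return "OK" if telltale_visible == should_be_on else "NOK"
```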
Please note in case that multiple telltales should be displayed at the same time, the system will create multiple crops for the different ROIs and then perform all the required following steps in parallel for all telltales.
§.§ Anomaly Detection Step
The core element of our telltale monitor is the rendering anomaly detection step. For this component, we make use of the anomaly detection system proposed in <cit.>. The idea of this system is to take an input image, feed it into a feature encoder of a CNN[In this work we use a pre-trained ResNet-18 <cit.> as CNN.], and then use a principal component analysis (PCA) on the feature map to analyze if the input image contains any kind of anomaly, in our case any kind of error/corruption. Therefore, the system calculates the feature reconstruction error (FRE), as depicted in Figure <ref>. For this purpose, the feature tensor f from an intermediate CNN layer, for example belonging to the first or second layer, is extracted[In this work we use the first layer.]. Next, f, which originally lives in a high-dimensional space ℱ, is transformed with the help of a PCA, i.e. a transformation 𝒯 : ℱ→𝒮, with dim(ℱ) >> dim(𝒮), is applied to f. The result is f' = 𝒯(f). Afterwards, the inverse transformation 𝒯^-1 is applied on f' resulting in:
f” = 𝒯^-1(f') = 𝒯^-1(𝒯(f))
Finally, the FRE is calculated as follows:
FRE(f) = || f” - f ||
The goal is that valid images achieve a near-zero FRE score. Therefore, during the PCA fitting process only valid telltales without errors are provided to derive the transformation function 𝒯. This can be interpreted as finding a multi-dimensional ellipsoid that encloses the most relevant feature points. At runtime, any valid telltale image that is given to the transformation process will only use features which are already part of the PCA, i.e. that fall within the ellipsoid. Thus the forward and inverse transformations will cancel each other. Consequently, the FRE in such a case will be close to zero according to Equation (2). However, if an error (anomaly) is introduced in the telltale, the transformation steps will lose part of the information contained in the “erroneous” tensor f (the part that lies outside of the ellipsoid), i.e. f”≠ f and as a result FRE >> 0.
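The following minimal sketch illustrates this fit/score cycle with a pre-trained ResNet-18 and a single PCA over the flattened layer-1 features. The crop size, component count and random stand-in data are assumptions for illustration, not the exact configuration used in this work:

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

# Pre-trained ResNet-18; only the stem and the first residual stage are used.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
encoder = torch.nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu,
                              resnet.maxpool, resnet.layer1)

def features(imgs: torch.Tensor) -> np.ndarray:
    """Flatten the layer-1 feature tensor (N, 64, H, W) to (N, 64*H*W)."""
    with torch.no_grad():
        f = encoder(imgs)
    return f.reshape(f.shape[0], -1).numpy()

# Fit the transformation T on fault-free telltale crops only.
good_crops = torch.rand(400, 3, 128, 128)   # stand-in for the training set
pca = PCA(n_components=64).fit(features(good_crops))

def fre(imgs: torch.Tensor) -> np.ndarray:
    """Feature reconstruction error ||T^-1(T(f)) - f|| per image."""
    f = features(imgs)
    f_rec = pca.inverse_transform(pca.transform(f))
    return np.linalg.norm(f_rec - f, axis=1)

# Threshold with margin, cf. Equation (5); scores below tau count as 'OK'.
tau = 2.1 * fre(good_crops).max()
print(fre(torch.rand(2, 3, 128, 128)) < tau)
```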
Under normal operating conditions, with changing background images (for example due to a running navigation with changing map content), the ideal case with FRE = 0 will usually not occur. However, the scores for telltales with visible errors and significant degradation will be considerably different from scores of acceptable telltales. Hence, a threshold τ can be introduced to differentiate between acceptable telltales (even if these include a few non-noticeable faults) and telltales that are clearly corrupted, with the risk that a human driver may mis-perceive the telltale. Based on this, we can define the set of acceptable ('OK') telltales as the set of all telltales with FRE < τ.
𝒜 = {telltale image | FRE(f) < τ}
It is worth noting that the set of fault-free telltales, which is used to train the PCA, is a proper subset of 𝒜, and that any telltale that is not within 𝒜, i.e. with FRE≥τ, is classified as erroneous ('NOK' for 'not OK'). This comparison step is the last component illustrated in Figure <ref>; a score above the threshold may trigger an error mitigation action.
Both τ as well as the size of the training set used to derive a meaningful representation of 𝒯 need to be carefully chosen. If τ is too small, valid telltales may be flagged as erroneous, which can be undesired. Similarly, if τ is too large, telltales with visible errors could be classified as 'OK'. Also, if the training set is too small, not all relevant features may be represented well enough by 𝒯, resulting in the same effect as a τ that is too small. However, as we will show in Section <ref>, a few hundred training and testing samples are sufficient to obtain a meaningful transformation function 𝒯 and definition of τ.
Finally, it is worth noting that it is not required to train the feature extractor itself on a telltale-specific dataset. Instead, any abstract feature representation can be used as input to the PCA. For this reason, we used a standard pre-trained ResNet-18 in this work as feature extractor. In fact, we use the first feature layer of ResNet-18, which consists of 64 different convolutions, each with a size of 8x8, 16x16 or 32x32 depending on the original image size. <cit.> suggests to fit a single PCA covering all convolutions, which however has certain drawbacks. A major one is the size of the resulting transformation matrix (e.g. 65536x65536), which requires a lot of memory and computational resources, both of which are typically scarce for a safety monitor (SM). Therefore, we analyzed the impact of each convolution and identified that only a few are really of importance. Consequently, we propose to fit a PCA for each individual convolution that is of relevance. As explained in Section <ref>, this helps to improve the robustness as well as the computational demands.
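As a hedged sketch of this per-convolution variant (the component count and the set of relevant channels are illustrative assumptions), one PCA can be fitted per feature channel and the per-channel scores combined downstream:

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_channel_pcas(feats: np.ndarray, channels, n_comp: int = 8) -> dict:
    """feats: (N, C, H, W) feature maps; returns one fitted PCA per channel."""
    flat = lambda c: feats[:, c].reshape(feats.shape[0], -1)
    return {c: PCA(n_components=n_comp).fit(flat(c)) for c in channels}

def channel_fres(pcas: dict, feats: np.ndarray) -> np.ndarray:
    """Per-channel FRE scores, shape (N, number of fitted channels)."""
    cols = []
    for c, pca in pcas.items():
        f = feats[:, c].reshape(feats.shape[0], -1)
        cols.append(np.linalg.norm(
            pca.inverse_transform(pca.transform(f)) - f, axis=1))
    return np.stack(cols, axis=1)

feats = np.random.rand(400, 64, 16, 16)        # stand-in layer-1 features
pcas = fit_channel_pcas(feats, channels=range(10))
print(channel_fres(pcas, feats).shape)         # (400, 10)
```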
§.§ FRE Score Hardening
As we discussed before, the decision if a telltale is 'OK' or 'NOK' is based on a single value comparison, which by the nature of the employed dimension reduction (within the PCA) can introduce undesired effects. One of these is a false classification, either 'NOK' instead of 'OK' or vice versa. To mitigate these, we propose an enhanced FRE scoring.
Therefore, we use the anomaly map or FRE map. This map can be interpreted as an image in which a pixel i reflects the difference (f”_i - f_i)^2, as illustrated in Figure <ref>. Visually, this map can be read as a differential image: the brighter a region, the greater the anomaly (error) in that region. This is also shown by some examples in Figure <ref>.
Now, instead of considering all pixels (or elements of the tensors f and f”) as in Equation (2), one can also restrict the analysis to those pixels that are of interest, i.e. those pixels that are part of the telltale's shape but ignore pixels that are part of the background. To achieve this, a mask can be generated using the shape of the telltale and subsequently applied to the anomaly map, as depicted in Figure <ref>. The result is a score that is only influenced by anomalies (or faults) on the telltale, which is more robust than a score over the entire map. Of course, in a similar way also a weighted score is possible by applying different weights to background pixels and pixels belonging to the telltale shape.
Another improvement addresses the robustness of the FRE calculation itself. Following Equation (2), the FRE score can be dominated by a single tensor element, for example if most of the elements of f and f” are similar except one. If for this element i the difference ||f”_i - f_i|| is much larger than for any other element, it can hold FRE(f) ≈ ||f”_i - f_i||. This is of course undesired, as computational errors causing only a single bit (or a few bits) to be considerably off (e.g. by affecting an exponent bit) would influence the entire result. Therefore, we propose to limit the values within the FRE calculation to be no larger than a predefined value d_max.
In summary, Equation (2) can be reformulated using the Euclidean norm (of course also other distance measures could be used) to:
FRE(f) = 1/|𝒫|√(∑_i∈𝒫min{(f”_i-f_i)^2,d_max}),
where 𝒫 is the set of relevant pixels (e.g. just on mask).
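A compact sketch of Equation (4) follows; the broadcasting convention for aligning the mask with the feature maps, and the normalization over all retained tensor elements, are our assumptions:

```python
import numpy as np

def hardened_fre(f: np.ndarray, f_rec: np.ndarray,
                 mask: np.ndarray, d_max: float) -> float:
    """f, f_rec: feature maps of shape (C, H, W); mask: boolean (H, W),
    True on the telltale shape; d_max: per-element contribution cap."""
    diff_sq = (f_rec - f) ** 2               # the anomaly (FRE) map
    diff_sq = np.minimum(diff_sq, d_max)     # limit single-element impact
    on_shape = diff_sq[..., mask]            # drop background pixels
    return float(np.sqrt(on_shape.sum()) / on_shape.size)
```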
Finally, it is important to note that the decision 'OK'/'NOK' need not be taken on the result of a single image (frame) only. Instead, the FRE scores over multiple frames can be combined. By this means, short upsets that only occur in very few frames do not disturb the final signal; instead, these can be filtered out, making the system more robust. One possible filtering approach is to use a running average of the FRE scores over a sliding time window, e.g. the last 60 frames (approx. 1 second on good displays). Hence, a false score in a few frames will not considerably alter the running average, while a permanent issue will become visible within 1 second. Similarly, if the error disappears, the system will switch back to 'OK' within the next second.
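The temporal filtering can be as simple as a running mean over a bounded queue; a sketch with the 60-frame window mentioned above:

```python
from collections import deque

class FreFilter:
    """Running average of the FRE score over the last `window` frames."""
    def __init__(self, window: int = 60):
        self.scores = deque(maxlen=window)

    def update(self, score: float) -> float:
        self.scores.append(score)
        return sum(self.scores) / len(self.scores)
```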
§.§ Testing Mode
So far, we discussed the normal operation of the telltale monitor. If a telltale is inactive, we foresee an additional mode that can be used to test if everything works as expected.
In this mode, instead of fetching the image from the video buffer (see Figure <ref>), a test image from a database is fed into the telltale monitor, for which the anomaly map is computed. The result is then compared to a reference map from the database for the given test image. If any difference at pixel-level is detected, it indicates a problem with the anomaly scoring stage and an error mitigation action can be initiated. In addition the anomaly scoring stage can be reset (i.e. refresh all memories, re-run steps, check if problem still exists), to detect if this was a single event upset or a more persistent issue that the driver should be informed about.
Please note that both good as well as corrupted test images can be used during test mode to verify both, correct detection of 'OK' as well as of 'NOK' telltales. Furthermore, it is worth noting that the monitor in testing mode is not connected to the rest of the display and rendering pipeline, meaning the visual output of the instrument cluster is not altered such that this process remains hidden from the driver.
§ EXPERIMENTAL EVALUATION
§.§ Setup, Dataset Choices & Rendering Errors
For the evaluation we tested our solution on a selection of six different telltales shown in Figure <ref>, which are based on the Automotive Grade Linux sample application[https://github.com/agl-ic-eg/cluster-refgui]. We modified these slightly to have a black edge, which favors robustness of the approach, but in general any telltale could be used.
We used ResNet-18 as feature extractor with input images of size 128x128 pixels. This created a feature tensor of size 64x16x16. We also tested smaller input sizes down to 64x64 pixels and larger images with 256x256 pixels, as well as a ResNet-50. In all cases, the results were basically the same; only the actual anomaly score values differed among the configurations. Thus, we will only show the results for ResNet-18 and 128x128 input images. For this configuration, the runtime on a single E-core of an Intel Core i9-13900K for a C++ implementation of the entire pipeline is around 5ms, i.e. 200 frames per second can be analyzed.
To train the PCA (see Section <ref> for details), and evaluate the performance of our detector, we used three different datasets:
* A training dataset that only contains perfect renderings (4000 images per telltale).
* A test dataset that contains both corrupted as well as perfect telltales (1800 images per telltale).
* An evaluation dataset that is similar to the test dataset but contains 11700 images (per telltale).
The training dataset used various background images to make the training as diverse as possible. Therefore, we placed the telltales at random positions (see Figure <ref>) and then cropped them to the expected input size. Additionally, some actual backgrounds from the target application (here: an enhanced version of the Automotive Grade Linux reference GUI <cit.>) were used. The purpose of the test dataset is to determine the threshold τ (see Equation (3)), to separate 'OK' from 'NOK' input images. Therefore, it comprised 200 images for each error type as well as 200 perfect samples (1800 images in total). Finally, the evaluation was performed on the evaluation dataset, which contained 1300 images for each defect type.
For the evaluation we injected a variety of different error types at different levels (magnitude), starting from small deviations, up to significant impact. The different types are depicted in Figure <ref>, while the influence of the error level is visualized in Figure <ref>. In detail, the following exemplary error types were used, which reflect the most relevant rendering errors according to <cit.>:
* No Render: The telltale is not rendered at all.
* Alpha Blending: Telltale is blended over the background. With increasing error the transparency of the telltale is increased.
* Color Error: Error in the color of the telltale.
* Pixel Noise: Random modification of color value of the entire region.
* Clipping Error: Random erase of pixels on the telltale.
* Partial Rendering: Removal of parts of the telltale.
* Stride: Errors on the stride of the telltale during rendering.
* Scale: Increase of the scale of the telltale.
Most of the errors result in a degraded visibility, which increases with the magnitude of the error. E.g., a low level of Pixel Noise causes just a few erroneous pixels, which does not impact the readability of the telltale. In contrast a high error level impairs the visibility clearly. This is even more obvious for the Alpha Blending, where at the highest error level the telltale is fully transparent (Figure <ref>). Ideally we want the anomaly score to reflect this error quantity.
In addition to the aforementioned errors, we also analyzed a more “binary” error type, where the image region is entirely filled with the dominant color of the telltale. The effect can be seen in Figure <ref>. If the foreground is flooded, the telltale is not visible and the anomaly detection identifies that with a high anomaly score. If the background is flooded, the anomaly score remains low, yet the telltale also remains readable.
§.§ Scoring for different Error Types & Levels
We evaluated the score for the different error types for the individual telltales. In Figure <ref> it can be seen that the good samples have a very low score and small variance. The different error types differ in their mean value, but for all error types the mean is significantly higher than the score of the good samples, which is a good indication that our solution delivers the expected behavior. Furthermore, it can be noted that most errors show a high variance. This is due to the fact that the score increases with the error level (Figure <ref>). For low error levels the score is close to that of the good samples, which is also in line with the expectation, as low error levels do not degrade the visibility noticeably. However, as the error magnitude increases, so does the score, such that visible errors can be recognized through a high score.
Figure <ref> and Figure <ref> also illustrate the threshold τ between 'OK' and 'NOK' telltales, which was obtained on the test dataset. To determine τ, we use the following approach:
τ = max_score (S_test^Good) × m
In other words, the maximum score observed on the perfect samples of the test dataset (S_test^Good) is multiplied with a margin m to account for variations in the scores given the higher variety of data in the final deployment. In our experiments we use m = 2.1, which showed a good balance between robustness and detection accuracy. With this configuration, rendering errors are robustly detected, except very minor deviations in alpha or partial rendering, as shown in Figure <ref> (a) and (b). As further detailed in Figure <ref>, this setting results in no false alarms, and only 1420 degraded telltales were not properly detected as defects. However, as these contain only minor errors that do not affect the understanding of the telltale or its visibility, all of these can be classified as not relevant. Hence, in this setting the system is able to classify all error-free telltales correctly (no false alarms) and also detects all relevant errors.
An advantage of our telltale monitor is the ability to handle alpha blending and other rendering effects. While this is already possible with the aforementioned configuration of m = 2.1, it may be desirable to allow even further reduced alpha settings on purpose. Therefore, one can define an additional
τ_alpha = max_score (S_test^Alpha[error_level]) × m
As illustrated in Figure <ref>(c), this setting can still identify all important rendering errors, but is more relaxed for alpha blending, color drifts and partially missing telltale elements.
With the same approach we can likewise accomplish checks for warping and scaling of telltales. To achieve that, we de-warp / de-scale the transformed telltale back to its original shape. Usually this will create some artifacts in the resulting image, but with the proven tolerance to minor deviations we can still separate correct from anomalous samples.
§.§ Scoring for different telltales
The previous discussions have focused on the results of a single telltale. However, it is important that the same results can be achieved for the other telltales as well. Figure <ref> shows the scores for all six telltales used in our evaluation. It can be seen that the results vary slightly in absolute numbers, but that the overall behavior is comparable. The scores of the good samples are always close to zero, and the injected rendering errors have in all cases a similar degrading effect. This shows that our solution can address different telltales of different shape or color.
§.§ PCA Decomposition
So far we used all features for training and scoring with a single PCA, i.e. the entire feature tensor was fed into a single PCA. As motivated in Section <ref>, using a single large PCA may not be the best choice. Therefore, we analyzed the contribution of the individual features (convolutions) to the final scoring. To achieve this, 64 PCAs were trained, each of them on just one feature. For each of the PCAs we determined the threshold τ as previously described.
With m=1.2, only 10 of 64 PCAs showed a meaningful influence on the final score, and classified the good samples correctly. For the remaining PCAs, no clear separation is possible, as illustrated in Figure <ref>. For one of the relevant PCAs, Figure <ref> depicts the classification results, showing that the maximum tolerated error level for this PCA is 4, which still is acceptable as humans can perceive such telltales correctly. Yet, there are also PCAs that have an even stricter separation and tolerate only lower levels. This shows that in fact only a subset of features can be used within the safety monitor, and that only the most relevant features should be used to achieve the most efficient realization (up to 10x faster). However, it is worth noting that not just a single PCA on a single feature should be used, as this could cause a higher undetectable error rate, which is addressed in the next section.
§.§ Safety Discussion & Undetectable Errors
A great challenge of learning-based safety monitors is to provide evidence that the entire system (main function + safety monitor) is safe enough. For a CRC-based safety monitor it is possible to mathematically prove that it can identify all single pixel errors, but this aspect also limits the overall system functionality (e.g. overlays are hardly possible, as explained in Section <ref>). For our proposed monitor, the roles are inverted: the desired system functionality can be achieved, but the safety evidence is harder to establish. Therefore, some key considerations are provided next.
As mentioned in Section <ref>, our safety monitor covers both cases, errors for active as well as inactive telltales. However, while a rendering error for an inactive telltale is certainly inconvenient, only an error for an active telltale is safety-relevant. Hence, for a critical error the related telltale needs to be active and a rendering error has to occur. As discussed before, many rendering errors are properly detected, however, there is a certain chance for undetectable errors, meaning corrupted input images that are not properly flagged by the safety monitor as 'NOK'.
To test if our safety monitor is susceptible to such images, we used a genetic algorithm (GA) to find such input images. The goal of the GA is to find an image with a score sufficiently low to be acceptable, i.e. the safety monitor would consider the image as a valid telltale. As initial population, 1000 images were selected, which contained up to 5 % faulty pixels within the telltale shape for a given background. With each new generation, the amount of faulty pixels can increase through mutation and crossover, providing a wide variety of images that are tested.
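For illustration, a stripped-down variant of such a search might look as follows. This is a sketch under simplifying assumptions: mutation-only (the crossover step described above is omitted for brevity), illustrative population sizes, and `score_fn` standing in for the monitor's FRE scoring:

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(img: np.ndarray, shape_mask: np.ndarray, p: float = 0.02):
    """Corrupt a fraction p of the pixels inside the telltale shape.
    img: float image of shape (3, H, W); shape_mask: boolean (H, W)."""
    out = img.copy()
    hits = (rng.random(shape_mask.shape) < p) & shape_mask
    out[:, hits] = rng.random((3, hits.sum()))
    return out

def ga_search(score_fn, base_img, shape_mask, tau, pop=1000, gens=10_000):
    """Look for a corrupted image that the monitor still scores below tau."""
    population = [mutate(base_img, shape_mask, 0.05) for _ in range(pop)]
    for gen in range(gens):
        scores = np.array([score_fn(x) for x in population])
        if scores.min() < tau:               # undetectable error found
            return population[int(scores.argmin())], gen
        elite = [population[i] for i in np.argsort(scores)[:pop // 10]]
        population = [mutate(elite[rng.integers(len(elite))], shape_mask)
                      for _ in range(pop)]
    return None, gens
```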
As illustrated in Figure <ref>, there is a significant difference in undetectable errors, depending on the PCA configuration. If only a single PCA is used covering all features, no undetectable error was found within 10^4 generations (each covering a population of 1000 images). Instead, if PCAs are used that address only a single feature (see Section 4.4), undetectable errors can be found. In this case, the overall undetectable error rate depends on how many PCA scores are combined to create the final score. If all of them need to be 'OK', as shown by Figure <ref>(c), no undetectable error was found in 10^4 generations, while if a single PCA score is sufficient, less than 100 generations are required (Figure <ref>(b)). Hence, depending on the safety requirement of the telltale, a different setting can be chosen that achieves the required safety level with best performance and compute demands (fewer single-feature PCAs are faster). For example, an informational telltale (e.g. high beam on/off) can use only the most relevant PCAs, whereas a critical telltale (e.g. brake failure) can use a PCA that contains all possible features.
In any case, it is worth noting that the results also show that there are many more detectable than undetectable errors. In addition, it is worth noting that the image that causes the safety monitor to fail is a combination of random pixel values, which can be clearly detected by other means.
Furthermore, as depicted in Figure <ref>(d), the obtained image that got closest to an undetectable error already shows the black edges of the correct telltale. In fact, the more iterations were executed, the more the created image converged towards the correct telltale. This underlines that the safety monitor “learned” relevant telltale features.
Furthermore, we performed tests where we intentionally limited the number of corrupted pixels. As a result, all images show basically an empty background with only a few faulty pixels (see Figure <ref>). If the safety monitor flagged one of these images as valid, this may result in a safety-relevant error, as a human driver would not be able to understand that a telltale should be displayed. To test these situations, we used the same GA as before, but limited the number of corrupted pixels after each generation to at most 5 %, at most 20 % and at most 50 %. As can be inferred from Figure <ref>, under these conditions the GA was not able to generate an image with a sufficiently low score, i.e. all generated and tested images were correctly flagged as containing no valid telltale. It is also clearly visible that fewer pixel errors lead to higher scores, which is in line with the expectation. Consequently, it can be concluded that just a few random pixel faults on an empty background will not cause an undetected error. In fact, a huge amount of pixel errors needs to be present to make the safety monitor flag a rendered image as a valid telltale, which can then be easily detected by the driver.
§ CONCLUSION
The expectations for the capabilities of digital instrument clusters are increasing with each new automotive generation. Classic dials are now a thing of the past; instead, users desire beautiful 3D navigation maps, design customization options, and smartphone-like rendering, including modern effects such as overlays or color grading. This, however, imposes a challenge for the state-of-the-art error detection approaches used to detect rendering errors. Therefore, we presented in this work a novel learning-based error detector, evaluated for the rendering of telltales. The monitor utilizes a feature-based principal component analysis to detect rendering errors that would disturb the human perception of a telltale. Our evaluation results show that the system is able to detect all significant errors, while not causing false alarms on correctly rendered telltales.
Sharp threshold for the ballisticity of the random walk on the exclusion process
Guillaume Conchon-Kerjan^1, Daniel Kious^2 and Pierre-François Rodriguez^3
§ ABSTRACT
We study a non-reversible random walk advected by the symmetric simple exclusion process, so that the walk has a local drift of opposite sign when sitting atop an occupied or an empty site. We prove that the back-tracking probability of the walk exhibits a sharp transition as the density ρ of particles in the underlying exclusion process varies across a critical density ρ_c. Our results imply that the speed v=v(ρ) of the walk is a strictly monotone function and that the zero-speed regime is either absent or collapses to a single point, ρ_c, thus solving a conjecture of <cit.>. The proof proceeds by exhibiting a quantitative monotonicity result for the speed of a truncated model, in which the environment is renewed after a finite time horizon L. The truncation parameter L is subsequently pitted against the density ρ to carry estimates over to the full model. Our strategy is somewhat reminiscent of certain techniques recently used to prove sharpness results in percolation problems. A key instrument is a combination of renormalisation arguments with refined couplings of environments at slightly different densities, which we develop in this article. Our results hold in fact in greater generality and apply to a class of environments with possibly egregious features, outside perturbative regimes.
September 2024
^1King's College London
Department of Mathematics
London WC2R 2LS
United Kingdom
<[email protected]>
^2University of Bath
Department of Mathematical Sciences
Bath BA2 7AY
United Kingdom
<[email protected]>
^3Imperial College London
Department of Mathematics
London SW7 2AZ
United Kingdom
<[email protected]>
§ INTRODUCTION
Transport in random media has been an active field of research for over fifty years but many basic questions remain mathematically very challenging unless the medium satisfies specific (and often rather restrictive) structural assumptions; see for instance <cit.> and references therein. In this article we consider the problem of ballistic behavior in a benchmark setting that is both i) non-reversible, and ii) non-perturbative (in the parameters of the model). To get a sense of the difficulty these combined features entail, general results in case i) are already hard to come by in perturbative regimes, see, e.g., <cit.> regarding the problem of diffusive behavior.
Our focus is on a certain random walk in dynamic random environment, which lies outside of the well-studied `classes' and has attracted increasing attention in the last decade, see for instance <cit.> and references below. The interest in this model stems in no small part from the nature of the environment, which is driven by a particle system (e.g. the exclusion process) that is typically conservative and exhibits slow mixing. Even in the (1+1)-dimensional case, unless one restricts its parameters to perturbative regimes, the features of the model preclude the use of virtually all
classical techniques: owing to the dynamics of the environment, the model is genuinely non-reversible, and its properties make the search for an invariant measure of the environment as seen by the walker (in the spirit of <cit.>) inaccessible by current methods; see Section <ref> for a more thorough discussion of these and related matters.
The model we study depends on a parameter ρ governing the density of particles in the environment, which in turn affects the walk, whose transition probabilities depend on whether the walk sits on top of a particle (advection occurs) or not. This may (or not) induce ballistic behaviour. Our aim in this article is to prove that ballisticity is a property of the walk undergoing a `sharp transition' as ρ varies, which is an inherently non-perturbative result. Our results answer a number of conjectures of the past fifteen years, and stand in stark contrast with (i.i.d.) static environments <cit.>, where the zero-speed regime is typically extended. The sharpness terminology is borrowed from critical phenomena, and the analogy runs deep, as will become apparent. Drawing inspiration from it is one of the cornerstones of the present work.
§.§ Main results
We present our main results with minimal formalism in the model case where the environment is the (simple) symmetric exclusion process, abbreviated (S)SEP in the sequel, and refer to Sections <ref> and <ref> for full definitions. We will in fact prove more general versions of these results, see Theorem <ref>, which hold under rather broad assumptions on the environment, satisfied for instance by the SEP. Another environment of interest that fulfils these conditions is discussed in Appendix <ref>, and is built using a Poisson system of particles performing independent simple random walks (PCRW, short for Poisson Cloud of Random Walks).
The SEP is the continuous-time Markov process η=(η_t)_t≥ 0 taking values in {0,1}^ℤ and describing the simultaneous evolution of continuous-time simple random walks subject to the exclusion rule. That is, for a given realization of η_0, one puts a particle at those sites x such that η_0(x)=1. Each particle then attempts to move at a given rate ν >0, independently of its previous moves and of the other particles, to one of its two neighbours chosen with equal probability. The move happens if and only if the target site is currently unoccupied; see Section <ref> for precise definitions. For the purposes of this article, one could simply set ν=1. We have kept the dependence on ν explicit in anticipation of future applications, for which one may wish to slow down/speed up the environment over time.
On top of the SEP η, a random walk X=(X_n)_n ≥ 0 starting from X_0=0 moves randomly as follows. Fixing two parameters p_∙,p_∘∈ (0,1) such that p_∙ >p_∘, the process X moves at integer times to a neighbouring site, the right site being chosen with probability p_∙, resp. p_∘, depending on whether X is currently located on an occupied or empty site. Formally, given a realization η of the SEP, and for all n≥ 1,
X_n+1-X_n=
+1, with prob. p_∙1_η_n(X_n)=1+p_∘1_η_n(X_n)=0,
-1, with prob. (1-p_∙)1_η_n(X_n)=1+(1-p_∘)1_η_n(X_n)=0,
(notice that the transition probabilities depend on η since p_∘≠ p_∙ by assumption). For ρ∈ (0,1) we denote by ^ρ the annealed (i.e. averaged over all sources of randomness; see Section <ref> for details) law of (η,X)
whereby η_0 is sampled according to a product Bernoulli measure ((1-ρ)δ_0 + ρδ_1)^⊗ℤ, with δ_x a Dirac measure at x, which is an invariant measure for the SEP, see Lemma <ref>. Incidentally, one can check that except for the trivial cases ρ∈{0,1}, none of the invariant measures for the SEP (cf. <cit.>) is invariant for the environment viewed from the walker (to see this, one can for instance consider the probability that the origin and one of its neighbours are both occupied). In writing ^ρ, we leave the dependence on ν,p_∙ and p_∘ implicit, which are regarded as fixed, and we will focus on the dependence of quantities on ρ, the main parameter of interest. For a mental picture, the reader is invited to think of X as evolving in a half-plane, with space being horizontal and time running upwards. All figures below will follow this convention.
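To fix ideas, here is a hedged toy simulation of the pair (η,X) on a large discrete torus, used as a finite-volume stand-in for ℤ; all parameter values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

def sep_step(eta: np.ndarray, nu: float) -> np.ndarray:
    """Evolve the SEP for one unit of time: each particle attempts a jump to
    a uniform neighbour at rate nu; moves to occupied sites are suppressed."""
    n = len(eta)
    for _ in range(rng.poisson(nu * eta.sum())):
        x = rng.choice(np.flatnonzero(eta))       # uniformly chosen particle
        y = (x + rng.choice([-1, 1])) % n         # target neighbour (torus)
        if eta[y] == 0:                           # exclusion rule
            eta[x], eta[y] = 0, 1
    return eta

def empirical_speed(rho=0.6, p_occ=0.8, p_emp=0.3, nu=1.0,
                    steps=2000, n=10_000):
    eta = (rng.random(n) < rho).astype(int)       # Ber(rho) initial condition
    x = 0
    for _ in range(steps):
        p = p_occ if eta[x % n] == 1 else p_emp   # drift depends on eta_n(X_n)
        x += 1 if rng.random() < p else -1
        eta = sep_step(eta, nu)
    return x / steps

print(empirical_speed())
```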
We now describe our main results, using a language that will make analogies to critical phenomena apparent. Our first main result is simplest to formulate in terms of the fast-tracking probability, defined for n≥0 and ρ∈ (0,1) as
θ_n(ρ)= ^ρ( H_n < H_-1 ),
where H_k= inf{ n ≥ 0: X_n =k }. In words θ_n(ρ) is the probability that X visits n before -1. Analogous results can be proved for a corresponding back-tracking probability, involving the event {H_-n< H_1 } instead; see the end of Section <ref> for more on this. The events in (<ref>) being decreasing in n, the following limit
θ(ρ)def.=lim_n θ_n(ρ)
is well-defined and constitutes an order parameter (in the parlance of statistical physics) for the
model. Indeed, it is not difficult to see from (<ref>)
and monotonicity properties of the environment (see (<ref>) in Section <ref> and Lemma <ref>), that the functions θ_n(·), n ≥ 0, and hence θ(·) are non-decreasing, i.e. that θ(ρ) ≤θ(ρ') whenever ρ≤ρ'. One thus naturally associates to the function θ(·) a corresponding critical threshold
ρ_c def.=sup{ρ∈ [0,1]: θ(ρ)=0 }.
The transition from the subcritical (ρ < ρ_c) to the supercritical (ρ > ρ_c) regime corresponds to the onset of a phase where the walk has a positive chance to escape to the right. In fact it will do so at linear speed, as will follow from our second main result. To begin with, in view of (<ref>) one naturally aims to quantify the behavior of θ_n(ρ) for large n. Our first theorem exhibits a sharp transition for its decay.
For all ρ∈(0,1), there exist constants c_1,c_2 ∈ (0,∞) depending on ρ such that, with ρ_c as in (<ref>) and for all n≥1,
(i) θ_n(ρ) ≤ c_1 exp(-(log n)^3/2), if ρ<ρ_c;
(ii) θ_n(ρ) ≥ c_2, if ρ>ρ_c.
The proof of Theorem <ref> appears at the end of Section <ref>.
Since θ = inf_n θ_n, item (ii) is an immediate consequence of (<ref>). The crux of Theorem <ref> is therefore to exhibit the rapid decay of item (i) in the full subcritical regime, and not just perturbatively in ρ≪ 1, which is the status quo, cf. Section <ref>. In the context of critical phenomena, this is the famed question of (subcritical) sharpness, see for instance <cit.> for sample results of this kind in the context of Ising and Bernoulli percolation models. Plausibly, the true order of decay in item (i) is in fact exponential in n (for instance, a more intricate renormalisation following <cit.> should already provide a stretched-exponential bound in n). We will not delve further into this question in the present article.
Our approach to proving Theorem <ref> is loosely inspired by the recent sharpness results <cit.> and <cit.> concerning percolation of the Gaussian free field and the vacant set of random interlacements, respectively, which both exhibit long-range dependence (somewhat akin to the SEP). Our model is nonetheless very different, and our proof strategy, outlined below in Section <ref>, vastly differs from these works. In particular, it does not rely on differential formulas (in ρ), nor does it involve the OSSS inequality or sharp threshold techniques, which have all proved useful in the context of percolation, see e.g. <cit.>. One similarity with <cit.>, and to some extent also with <cit.> (in the present context though, stochastic domination results may well be too much to ask for), is our extensive use of couplings. Developing these couplings represents one of the most challenging technical aspects of our work; we return to this in Section <ref>. It is also interesting to note that, as with statistical physics models, the regime ρ≈ρ_c near and at the critical density encompasses a host of very natural (and mostly open) questions that seem difficult to answer and point towards interesting phenomena; see Section <ref> and Corollary <ref> for more on this.
Our second main result concerns the asymptotic speed of the random walk, and addresses an open problem of <cit.> which inspired our work. In <cit.>, the authors prove the existence of a deterministic non-decreasing function v:(0,1) →ℝ such that, for all ρ∈ (0,1)∖{ρ_-, ρ_+ },
^ρ-a.s., X_n/n→ v(ρ) as n →∞,
where
ρ_-def.=sup{ρ : v(ρ) < 0 },
ρ_+def.=inf{ρ : v(ρ) > 0 },
leaving open whether ρ_+>ρ_- or not, and with it the possible existence of an extended (critical) zero-speed regime. Our second main result yields the strict monotonicity of v(·) and provides the answer to this question. Recall that our results hold for any choice of the parameters 0<p_∘ <p_∙ <1 and ν>0.
With ρ_c as defined in (<ref>), one has
ρ_- = ρ_+ = ρ_c.
Moreover, for every ρ,ρ'∈ (0,1) such that ρ >ρ', one has
v(ρ)>v(ρ').
The proof of Theorem <ref> is given in Section <ref> below. It will follow from a more general result, Theorem <ref>, which applies to a class of environments η satisfying certain natural conditions. These will be shown to hold when η is the SEP.
Let us now briefly relate Theorems <ref> and <ref>. Loosely speaking, Theorem <ref> indicates that v(ρ) >0 whenever ρ> ρ_c, whence ρ_+ ≤ρ_c in view of (<ref>), which is morally half of Theorem <ref>. One can in fact derive a result akin to Theorem <ref>, but concerning the order parameter θ̂(ρ)= lim_n ^ρ( H_-n < H_1 ) associated to the back-tracking probability instead. Defining ρ̂_c in the same way as (<ref>) but with θ̂ in place of θ, our results imply that ρ̂_c=ρ_c and that the transition for the back-tracking probability has similar sharpness features as in Theorem <ref>, but in opposite directions –
the sub-critical regime is now for ρ>ρ̂_c(=ρ_c). Intuitively, this corresponds to the other inequality ρ_- ≥ρ_c, which together with ρ_+ ≤ρ_c, yields (<ref>).
Finally, Theorem <ref> implies that v(ρ)≠ 0 whenever ρ≠ρ_c. Determining whether a law of large number holds at ρ=ρ_c holds or not, let alone whether the limiting speed v(ρ_c) vanishes when 0< ρ_c< 1, is in general a difficult question, to which we hope to return elsewhere; we discuss this and related matters in more detail below in Section <ref>.
One noticeable exception occurs in the presence of additional symmetry, as we now explain. In <cit.> the authors could prove that, at the `self-dual' point p_∙=1-p_∘ (for any given value of p_∙∈ (1/2,1)), one has v(1/2)=0 and the law of large numbers holds with limit speed 0, but they could not prove that v was non-zero for ρ≠ 1/2, or equivalently that ρ_+=ρ_-=1/2. Notice that the value p_∙=1-p_∘ is special, since in this case (cf. (<ref>)) X has the same law under ^ρ as -X under ^1-ρ, and in particular Xlaw=-X when ρ=1/2. In the result below, which is an easy consequence of Theorem <ref> and the results of <cit.>, we summarize the situation in the symmetric case. We include the (short) proof here. The following statement is of course reminiscent of a celebrated result of Kesten concerning percolation on the square lattice <cit.>; see also <cit.> in related contexts.
If p_∘=1-p_∙, then
ρ_c=1/2 and θ(1/2)=0.
Moreover, under ^1/2, X is recurrent, and the law of large numbers (<ref>) holds with vanishing limiting speed v(1/2)=0.
From the proof of <cit.> (and display (3.26) therein), one knows that v(1/2)=0 and that the law of large numbers holds at ρ=1/2. By Theorem <ref>, see (<ref>) and (<ref>), v is negative on (0,ρ_c) and positive on (ρ_c,1), which implies that ρ_c=1/2. As for the recurrence of X under ^1/2, it follows from the ergodic argument of <cit.> (Corollary 2.2, see also Theorem 3.2 therein; these results are stated for a random walk jumping at exponential times but are easily adapted to our case). Since θ(ρ) ≤^ρ(lim inf X_n ≥ 0) in view of (<ref>), recurrence implies that θ(1/2)=0, and (<ref>) follows.
§.§ Discussion
We now place the above results in broader context and contrast our findings with existing results. We then discuss a few open questions in relation with Theorems <ref> and <ref>.
§.§.§ Related works
Random walks in dynamic random environments have attracted increasing attention in the past two decades, both in statistical physics - as a way to model a particle advected by a fluid <cit.>, and in probability theory - where they provide a counterpart to the more classically studied random walks in static environments <cit.>.
On ℤ, the static setting already features a remarkably rich phenomenology, such as transience with zero speed <cit.>, owing to traps that delay the random walk, and anomalous fluctuations <cit.> - which can be as small as polylogarithmic in the recurrent case <cit.>. On ℤ^d for d≥ 2, some fundamental questions are still open in spite of decades of efforts, for instance the conjectures around Sznitman's Condition (T) and related effective ballisticity criteria <cit.>, and the possibility that, for a uniformly elliptic environment, directional transience is equivalent to ballisticity. If true, these conjectures would have profound structural consequences, in essentially ascertaining that the above trapping phenomena can only be witnessed in dimension one. As with the model studied in this article, a circumstance that seriously hinders progress is the truly non-reversible character of these problems. This severely limits the tools available to tackle them.
New challenges arise when the environment is dynamic, requiring new techniques to handle the fact that correlations between transition probabilities are affected by time. In particular, the trapping mechanisms identified in the static case do not hold anymore and it is difficult to understand if they simply dissolve or if they are replaced by different, possibly more complex, trapping mechanisms. As of today, how the static and dynamic worlds relate is still far from understood. Under some specific conditions however, laws of large numbers (and sometimes central limit theorems) have been proved in dynamic contexts: when the environment has sufficiently good mixing properties <cit.>, a spectral gap <cit.>, or when one can show the existence of an invariant measure for the environment as seen by the walk (<cit.>, again under some specific mixing conditions). One notable instance of such an environment is the supercritical contact process (see <cit.>, or <cit.> for a recent generalization).
We also refer to <cit.> for recent results in the time-dependent reversible case, for which the assumptions on the environment can be substantially weakened.
The environments we consider are archetypal examples that do not satisfy any of the above conditions. In the model case of the environment driven by the SEP, the mixing time over a closed segment or a circle is super-linear (even slightly super-quadratic <cit.>), creating a number of difficulties, e.g. barring the option of directly building a renewal structure without having to make further assumptions (see <cit.> in the simpler non-nestling case). The fact that the systems may be conservative (as is the case for the SEP) further hampers their mixing properties. As such, they have been the subject of much attention in the past decade, see <cit.> among many others. All of these results are of two types: either they require particular assumptions, or they apply to some perturbative regime of parameters, e.g. high density of particles <cit.>, strong drift, or high/low activity rate of the environment <cit.>.
Recently, building on this series of work, a relatively comprehensive result on the Law of Large Numbers (LLN) in dimension 1+1 has been proved in <cit.>, see (<ref>) above, opening an avenue to some fundamental questions that had previously remained out of reach, such as the possible existence of a transient regime with zero speed. Indeed, cf. (<ref>) and (<ref>), the results of <cit.> leave open the possibility of having an extended `critical' interval of ρ's for which v(ρ)=0, which is now precluded as part of our main results, see (<ref>). We seize the opportunity to stress that our strategy, outlined below in Section <ref>, is completely new and rather robust, and we believe a similar approach will lead to progress on related questions for other models.
§.§.§ Open questions
1) Existence and value of v(ρ_c). Returning to Theorems <ref> and <ref>, let us start by mentioning that ρ_c, defined by (<ref>) and equivalently characterized via (<ref>)-(<ref>), may in fact be degenerate (i.e. equal to 0 or 1) if v(·) stays of constant sign. It is plausible that ρ_c∈(0,1) if and only if
p_∘<1/2<p_∙, which corresponds to the (more challenging) nestling case, in which
the walk has a drift of opposite signs depending on whether it sits on top of a particle or an empty site. Moreover, the function v(·), as given by <cit.>, is well-defined for all values of ρ∈(0,1) (this includes a candidate velocity at ρ_c), but whether a LLN holds at ρ_c with speed v(ρ_c) or the value of the latter is unknown in general, except in cases where one knows that v(ρ_c)=0 by other means (e.g. symmetry), in which case a LLN can be proved, see <cit.>. When ρ_c is non-degenerate, given Theorem <ref>, it is natural to expect that v(ρ_c)=0, but this is not obvious as we do not presently know if v is continuous at ρ_c. It is relatively easy to believe that continuity on (0,1)∖{ρ_c}, when the speed is non-zero, can be obtained through an adaptation of the regeneration structure defined in <cit.>, but continuity at ρ_c (even in the symmetric case) seems to be more challenging.
Suppose now that one can prove that a law of large numbers holds with vanishing speed at ρ=ρ_c, then one can ask whether X is recurrent or transient at ρ_c. A case in point where one knows the answer is the `self-dual' point p_∘=1-p_∙ where the critical density equals 1/2 and the walk is recurrent. In particular, this result and Theorem <ref> imply that v(ρ) is zero if and only if ρ=1/2, and thus there exists no transient regime with zero-speed in the symmetric case. This also answers positively the conjecture in <cit.> (end of Section 1.4), which states that in the symmetric case, the only density with zero speed is in fact recurrent.
2) Regularity of v(·) near ρ_c and fluctuations at ρ_c.
In cases where v(ρ_c)=0 is proved, one may further wonder about the regularity of ρ↦ v(ρ) around ρ_c (and similarly of θ(ρ) as ρ↓ρ_c). Is it continuous, and if yes, is it Hölder-continous, or even differentiable? This could be linked to the fluctuations of the random walk when ρ=ρ_c, in the spirit of Einstein's relation, see for instance <cit.> in the context of reversible dynamics. Whether the fluctuations of X under ^ρ_c are actually diffusive, super-diffusive or sub-diffusive is a particularly difficult question. It has so far been the subject of various conjectures, both for the RWdRE (e.g. Conjecture 3.5 in <cit.>) and very closely related models in statistical physics (e.g. <cit.>, <cit.>, <cit.>). One aspect making predictions especially difficult is that the answer may well depend on all parameters involved, i.e. ρ, p_∘ and p_∙. At present, any rigorous upper or lower bound on the fluctuations at ρ=ρ_c would be a significant advance.
3) Comparison with the static setting.
In the past decade, there have been many questions as to which features of a static environment (when ν=0, so that the particles do not move) are common to the dynamic environment, and which are different. In particular, it is well-known that in the static case, there exists a non-trivial interval of densities for which the random walker has zero speed, due to mesoscopic traps that the random walker has to cross on its way to infinity (see for instance <cit.>, and above references). It was conjectured that if ν>0 is small enough, there could be such an interval in the dynamic set-up, see <cit.>. Theorem <ref> thus disproves this conjecture.
Combined with the CLT from <cit.>, which is valid outside of (ρ_-,ρ_+), hence for all ρ≠ρ_c by Theorem <ref>, this also rules out the possibility of non-diffusive fluctuations for any ρ≠ρ_c, which were conjectured in <cit.>. In light of this, one may naturally wonder how much one has to slow down the environment with time (one would have to choose ν=ν_t a suitably decreasing function of the time t) in order to start seeing effects from the static world. We plan to investigate this in future work.
§.§ Overview of the proof
We give here a relatively thorough overview of the proof, intended to help the reader navigate the upcoming sections. For concreteness, we focus on Theorem <ref> (see below its statement as to how Theorem <ref> relates to it), and specifically on the equalities (<ref>), in the case of SEP (although our results are more general; see Section <ref>). The assertion (<ref>) is somewhat reminiscent of the chain of equalities u̅=u_*=u_** associated to the phase transition of random interlacements that was recently proved in <cit.>, and draws loose inspiration from the interpolation technique of <cit.>, see also <cit.>. Similarly to <cit.> in the context of interlacements, and as is seemingly often the case in situations lacking key structural features (in the present case, a notion of reversibility or more generally self-adjointness, cf. <cit.>), our methods rely extensively on the use of couplings.
The proof essentially consists of two parts, which we detail individually below. In the first part, we compare our model with a finite-range version, in which we fully re-sample the environment at times that are multiples of some large integer parameter L (see e.g. <cit.> for a similar truncation to tame the long-range dependence). One obtains easily that for any L and any density ρ∈ (0,1), the random walk in this environment satisfies a strong Law of Large Numbers with some speed v_L(ρ) (Lemma <ref>), and we show that in fact,
v_L(ρ-ε_L)-δ_L≤ v(ρ)≤ v_L(ρ+ε_L) +δ_L,
for some δ_L, ε_L that are quantitative and satisfy δ_L, ε_L =o_L(1) as L →∞ (see Proposition <ref>).
In the second part, we show that for any fixed ρ,ρ+ε∈ (0,1), we have
v_L(ρ+ε) > v_L(ρ)+3δ_L,
for L large enough (Proposition <ref>). Together, (<ref>) and (<ref>) readily imply that v(ρ+ε)>v(ρ), and (<ref>) follows. Let us now give more details.
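To make this deduction explicit (assuming additionally that v_L(·) is non-decreasing in ρ, which can be extracted from the same monotonicity properties as above, and taking L large enough that ε_L ≤ε/4): applying (<ref>) to the pair (ρ+ε/4, ρ+3ε/4) and (<ref>) on both sides,

v(ρ+ε) ≥ v_L(ρ+ε-ε_L)-δ_L ≥ v_L(ρ+3ε/4)-δ_L > v_L(ρ+ε/4)+2δ_L ≥ v_L(ρ+ε_L)+2δ_L ≥ v(ρ)+δ_L,

whence v(ρ+ε)>v(ρ) since δ_L≥ 0; this sketch is ours, and the precise argument is given below.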
First part: from infinite to finite range. For a density ρ∈ (0,1) the environment η of the range-L model is the SEP starting from η_0∼Ber(ρ)^⊗ℤ during the time interval [0,L). Then for every integer k ≥1, at time kL we sample η_kL again as Ber(ρ)^⊗ℤ, independently from the past, and let it evolve as the SEP during the interval [kL,(k+1)L). The random walk X on this environment is still defined formally as in (<ref>), and we denote by ^ρ,L the associated annealed probability measure. Clearly, the increments (X_kL-X_(k-1)L)_k≥ 1 are i.i.d. (and bounded), so that by the strong LLN, X_n/n→ v_L(ρ) :=𝔼^ρ,L[X_L/L], a.s. as n→∞.
Now, to relate this to our original 'infinite-range' (L=∞) model, the main idea is to compare the range-L with the range-2L, and more generally the range-2^kL with the range-2^k+1L model for all k ≥ 0 via successive couplings. The key is to prove a chain of inequalities of the kind
v_L(ρ)≤ v_2L(ρ+ε_1,L)+δ_1,L≤…≤ v_2^kL(ρ+ε_k,L)+δ_k,L≤…,
where the sequences (ε_k,L)_k≥ 1 and (δ_k,L)_k≥ 1 are increasing, with ε_L:=lim_k→∞ε_k,L=o_L(1) and δ_L:=lim_k→∞δ_k,L=o_L(1). In other words, we manage to pass from a scale 2^kL to a larger scale 2^k+1L, at the expense of losing a bit of speed (δ_k+1,L-δ_k,L), and using a little sprinkling in the density (ε_k+1,L-ε_k,L), with δ_0,L= ε_0,L=0.
Let us mention that such a renormalization scheme, which trades scaling against sprinkling in one or several parameters, is by now a standard tool in percolation theory, see for instance <cit.>.
We now describe how we couple the range-L and the range-2L models in order to obtain the first inequality, the other couplings being identical up to a scaling factor. We do this in detail in Lemma <ref>, where for technical purposes we need in fact more refined estimates than only the first moment, but they follow from this same coupling. The coupling works roughly as follows (cf. Fig. <ref>). We consider two random walks X^(1) (with law ^ρ,L) and X^(2) (with law ^ρ+ε_1,L,2L), coupled via their respective environments η^(1) and η^(2) such that we retain good control (in a sense explained below) on the relative positions of their respective particles during the time interval [0,2L], except during a short time interval after time L, when the particles of η^(1) are renewed. In some sense, we want to dominate η^(1) by η^(2) `as much as possible', and use this in combination with the following monotonicity property of the walk: since p_∙ >p_∘, X^(2) cannot be overtaken by X^(1) if η^(2) covers η^(1) (cf. (<ref>); see also Lemma <ref> for a precise formulation). Roughly speaking, we will use this strategy from time 0 to L, then lose control for a short time before recovering a domination and using the same argument but with a lateral shift (in space), as illustrated in Figure <ref>.
The key for recovering a domination in as little time as possible is a property of the following flavour.
Let L,t,ℓ≥ 1 be such that L≫ t^2≫ℓ^4, and let η_0, η'_0 ∈{0,1}^ℤ be such that over [-3L,3L], η_0 (resp. η'_0) has empirical density ρ+ε (resp. ρ) on segments of length ℓ. Then one can couple the time evolutions of η,η' as SEPs during [0,t] such that
ℚ(η_t(x)≥η_t'(x), ∀ x ∈ [-L+ct,L-ct]) ≥ 1-c' tL exp(-c''ε^2 t^1/4),
for some coupling ℚ of the two evolutions and constants c,c',c''>0.
Roughly, for t poly-logarithmic in L the right-hand side of (<ref>) will be as close to 1 as we need.
We do not give precise requirements on the scales ℓ,t,L (nor a definition of empirical density), except that they must satisfy some minimal power ratio due to the diffusivity of the particles. We will formalize this statement in Section <ref>, see in particular condition <ref>, with exponents and constants that are likely not optimal but sufficient for our purposes. Even if the environment were to mix in super-quadratic (but still polynomial) time, our methods would still apply, up to changing the exponents in our conditions <ref>-<ref>.
The only statistic we control is the empirical density, i.e. the number of particles over intervals of a given length. The relevant control is formalised in Condition <ref>. In particular, our environment couplings have to hold with quenched initial data, and previous annealed couplings in the literature (see e.g. <cit.>) are not sufficient for this purpose (even after applying the usual annealed-to-quenched tricks), essentially due to the difficulty of controlling the environment around the walker in the second part of the proof, as we explain in Remark <ref>; cf. also <cit.> for related issues in other contexts.
Let us now track the coupling a bit more precisely to witness the quantitative speed loss δ_L incurred by (<ref>). We start by sampling η^(1)_0 and η^(2)_0 such that a.s., η^(1)_0(x)≤η^(2)_0(x) for all x∈ℤ, which is possible by stochastic domination. In plain terms, wherever there is a particle of η^(1), there is one of η^(2). Then, we can let η^(1) and η^(2) evolve during [0,L] such that this domination is deterministically preserved over time (this feature is common to numerous particle systems, including both SSEP and the PCRW). As mentioned above, conditionally on such a realization of the environments, one can then couple X^(1) and X^(2) such that X^(2)_s≥ X^(1)_s for all s∈ [0,L]. The intuition for this is that 'at worst', X^(1) and X^(2) sit on the same spot, where X^(2) might see a particle and X^(1) an empty site (hence giving a chance for X^(2) to jump to the right, and for X^(1) to the left), but not the other way around (recall to this effect that p_∙ >p_∘).
At time L, we have to renew the particles of η^(1), hence losing track of the domination of η^(1) by η^(2). We intend to recover this domination within a time t:=(log L)^100 by coupling the particles of η^(1)_L with those of η^(2)_L using (<ref>) (at least over a space interval [-3L,3L], that neither X^(1) nor X^(2) can leave during [0,2L]). Since during that time, X^(1) could at worst drift of t steps to the right (and X^(2) make t steps to the left), we couple de facto the shifted configurations η^(1)(·) with η^(2)(· -2t). Hence, we obtain that η^(1)_L+t(x)≤η^(2)_L+t(x -2t) for all x∈ [-3L+ct,3L+ct] with high probability as given by (<ref>). As a result, all the particles of η^(1)_L+t are covered by particles of η^(2)_L+t, when observed from the worst-case positions of X^(1)_L+t and X^(2)_L+t, respectively.
Finally, during [L+t,2L] we proceed similarly as we did during [0,L], coupling η^(1) with η^(2)(· -2t) so as to preserve the domination, and two random walks starting respectively from the rightmost possible position for X^(1)_L+t, and the leftmost possible one for X^(2)_L+t. The gap between X^(1) and X^(2) thus cannot increase, and we get that X^(1)_2L≤ X^(2)_2L+2t, except on a set of very small probability where (<ref>) fails. Dividing by 2L and taking expectations, we get
v_L(ρ)≤ v_2L(ρ +ε)+δ, where δ=O((log L)^100/L),
for suitable ε > 0. The exponent 100 is somewhat arbitrary, but importantly, owing to the coupling time of the two processes (and the diffusivity of the SEP particles), this method could not work with a value of δ smaller than (log^2L)/L. In particular, the exponent of the logarithm must be larger than one, and this prevents potentially simpler solutions for the second part.
Second part: quantitative speed increase at finite range. We now sketch the proof of (<ref>), which is a quantitative estimate on the monotonicity of v_L, equivalently stating that for ρ∈ (0,1) and ε∈ (0,1-ρ),
𝔼^ρ+ε[X_L]>𝔼^ρ[X_L]+3Lδ_L,
with δ_L explicit and supplied by the first part. This is the most difficult part of the proof, as we have to show that a denser environment actually yields a positive gain for the displacement of the random walk, and not just to limit the loss as in the first part. The main issue when adding an extra density ε of particles is that whenever X is on top of one such particle, and supposedly makes a step to the right instead of one to the left, it could soon after be on top of an empty site that would drive it to the left, and cancel its previous gain.
We design a strategy to get around this and preserve gaps, illustrated in Figure <ref> (cf. also Figure <ref> for the full picture, which is more involved). When coupling two walks X^ρ∼ℙ^ρ, X^ρ+ε∼ℙ^ρ+ε with respective environments η^ρ, η^ρ+ε, our strategy is to spot times when i) X^ρ+ε sees an extra particle (that we call a sprinkler) and jumps to the right, while X^ρ sees an empty site and jumps to the left, and ii) this gap is extended with X^ρ then drifting to the left and X^ρ+ε drifting to the right, for some time t that must necessarily satisfy t ≪log L – so that this has a chance of happening often during the time interval [0,L].
Once such a gap is created, on a time interval say [s_1,s_2] with s_2-s_1=t, we attempt to recouple the environments η^ρ, η^ρ+ε around the respective positions of their walker, so that the local environment seen from X^ρ+ε again dominates the environment seen from X^ρ,
which then allows us to preserve the gap previously created, up to some time T which is a polynomial in log L. As we will explain shortly, this gap arises with not too small a probability, so that repeating the same procedure between times iT and (i+1)T (for i≤ L/T-1) will provide us with the discrepancy 3δ_L needed.
The re-coupling of the environments mentioned in the previous paragraph must happen in time less than O(log L) (in fact less than t), in order to preserve the gap previously created. This leads to two difficulties:
* in such a short time, we cannot possibly recouple the two environments over a space interval of size comparable to L (≈ the space horizon the walk can explore during [0,L]), and
* the coupling will have a probability ≫ L^-1 to fail, hence with high probability, this will actually happen during [0,L]! In this case, we lose track of the domination of the environments completely, hence it could even be possible that X^ρ overtakes X^ρ+ε.
We handle the first of these difficulties with a two-step surgery coupling:
1) (Small coupling). We perform a first coupling of η^ρ and η^ρ+ε on an interval of stretched exponential width (think exp(t^1/2) for instance) during a time s_3-s_2=t/2 (cf. the orange region in Fig. <ref>), so as to preserve at least half of the gap created. We call this coupling “small” because t is small (compared to L). If that coupling is successful, X^ρ+ε now sees an environment that strictly covers the one seen by X^ρ, on a spatial interval I of width ≍exp(t^1/2) ≫ t. Due to the length of I, this domination extends in time, on an interval [s_3,T] of duration say t^200 (it could even be extended to timescales ≍exp(t^1/2)): it could only be broken by SEP particles of η^ρ_s_3 lying outside of I, who travel all the distance to meet the two walkers, but there is a large deviation control on the drift of SEP particles. The space-time zone in which the environment as seen from X^ρ+ε dominates that from X^ρ is the green trapezoid depicted in Figure <ref>.
2) (Surgery coupling). When the coupling in 1) is successful, we use the time interval [s_3,T] to recouple the environments η^ρ and η^ρ+ε on the two outer sides of the trapezoid, on a width of length 10L say, so that η^ρ+ε_T(· +t/2) dominates η^ρ_T(· -t/2) on the entire width, all the while preserving the fact that η^ρ+ε(· +t/2) dominates η^ρ(· -t/2) inside the trapezoid at all times (for simplicity and by monotonicity, we can assume that X^ρ+ε_s_3-X^ρ_s_3 is exactly equal to t, as pictured in Figure <ref>). This step is quite technical, and formulated precisely as Condition <ref> in Section <ref>. It requires that different couplings can be performed on disjoint contiguous intervals and glued in a `coherent' fashion, whence the name surgery coupling. Again, we will prove this specifically for the SEP in a dedicated section, see Lemma <ref>. Since this second coupling operates on a much longer time interval, we can now ensure its success with overwhelming probability, say 1-O(L^-100).
We still need to address the difficulty described in the second bullet point above, which corresponds to the situation where the small coupling described in 1) fails. As argued, this will happen a number of times within [0,L] because t is small. If this occurs, we lose control of the domination of the environments seen by the two walkers. This leads to:
3) (Parachute coupling). If the coupling in 1) is unsuccessful, instead of the surgery coupling, we perform a (parachute) coupling of η^ρ and η^ρ+ε in [s_3,T] so that η^ρ+ε(· -(T-s_3)) covers η^ρ(· +(T-s_3)), as we did in the first part of the proof. This again happens with extremely good probability 1-O(L^-100). The price to pay is that on this event, the two walks may have drifted linearly in the wrong direction during [s_3,T] (see the dashed blue trajectories in Figure <ref>), but we ensure at least that X^ρ+ε_T-X^ρ_T≥ -2(T-s_3), and we have now re-coupled the environments seen from the walkers, hence we are ready to start a new such step during the interval [T,2T], etc. This last point is crucial, as we need to iterate to ensure an expected gain 𝔼^ρ+ε[X_L]-𝔼^ρ[X_L] that is large enough, cf. (<ref>).
We now sketch a back-of-the-envelope calculation to argue that this scheme indeed generates the necessary discrepancy between 𝔼^ρ+ε[X_L] and 𝔼^ρ[X_L]. During an interval of the form [iT,(i+1)T] for i=0, 1,… , ⌊ L/T⌋ -1 (we take i=0 in the subsequent discussion for simplicity), with probability ≃ 1-e^-ct, we do not have the first separating event (around time s_1) between X^ρ+ε and X^ρ, and we simply end up with X^ρ+ε_T - X^ρ_T≥ 0, which is a nonnegative expected gain.
Else, with probability ≃ e^-ct we have the first separation happening during [s_1,s_2]. On this event, the coupling in 1) (and the subsequent impermeability of the green trapezoid) succeeds with probability ≥ 1- e^-t^1/100, and we end up with X^ρ+ε_T - X^ρ_T≥ t, resulting in an expected gain of order ≃ t. If the first coupling fails, and we resort to the parachute coupling, we have a (negative) expected gain ≃ -2T e^-t^1/100. Multiplying by the probability of the first separation, we obtain a net expected gain ≃ e^-ct(t-2T e^-t^1/100).
Finally, we evaluate the possibility that the surgery coupling (in step 2) above) or the parachute coupling 3) fails, preventing us from restoring the domination of η^ρ(·+X^ρ) by η^ρ+ε(·+X^ρ+ε) at time T, in which case we no longer control the coupling of the walks. This has probability O(L^-100), and in the worst case we have deterministically X^ρ+ε_L-X^ρ_L =-2L, so the expected loss is O(L^-99).
Putting it all together, we get, for L large enough by choosing e.g. t=√(log L) and T=(log L)^100 (it turns out that our couplings work with this choice of scales), that
𝔼^ρ+ε[X^ρ+ε_T]-𝔼^ρ[X^ρ_T]≥ e^-ct(t-2T e^-t^1/100)+O(L^-99)≥ L^-o(1).
As long as the surgery coupling or the parachute coupling succeeds, we can repeat this coupling up to ≃ L/T times during [0,L], thus obtaining
𝔼^ρ+ε[X^ρ+ε_L]-𝔼^ρ[X^ρ_L]≥ L^1-o(1)/T≥ L^1-o(1),
which establishes (<ref>), with a sizeable margin.
§.§ Organization of the paper
In Section <ref>, we give rigorous definitions of general environments with a minimal set of properties (<ref>)-(<ref>), and of the random walk X.
In Section <ref>, we impose the mild coupling conditions <ref>-<ref> on the environments, state our general result, Theorem <ref>, and deduce Theorems <ref> and <ref> from it. We then define the finite-range models and give a skeleton of the proof.
In Section <ref>, we proceed to the first part of the proof, and in Section <ref>, we proceed to the second part (cf. Section <ref> regarding the two parts). In Section <ref>, we define rigorously the SEP and show that it satisfies all the conditions mentioned above. The (short) Appendix <ref> contains a few tail estimates in use throughout this article.
In Appendix <ref>, we introduce the PCRW environment and show that it equally satisfies the above conditions, which entails that our results also apply to this environment. Throughout this article all quantities may implicitly depend on the two parameters p_∙, p_∘∈ (0,1) that are assumed to satisfy p_∙ >p_∘, cf. above (<ref>) (and also Section <ref>).
§ SETUP AND USEFUL FACTS
In this section, after introducing a small amount of notation (Section <ref>), we proceed to define in Section <ref> a class of (dynamic) random environments of interest, driven by a Markov process η characterized by properties (<ref>)–(<ref>) below. We will primarily be interested in environments driven by the exclusion process, which is introduced in Section <ref> and shown to satisfy these properties. The framework developed in Section <ref> and further in Section <ref> will allow our results to apply directly to a second environment of interest, considered in Appendix <ref>. We conclude by introducing in Section <ref> the relevant walk in random environment along with its associated quenched and annealed laws and collect a few basic features of this setup.
§.§ Notation
We write ℤ_+, resp. ℝ_+, for the set of nonnegative integers, resp. real numbers. We use the letter z ∈ℤ×ℝ_+ exclusively to denote space-time points z=(x,t), and typically x,y, x', y',… for spatial coordinates and s,t,s',t',… for time coordinates. We usually use m,n,… for non-negative integers.
With a slight abuse of notation, for two integers a≤ b, we will denote by [a,b] the set of integers {a,…, b} and declare that the length of [a,b] is |{a,…, b}| =b-a+1. Throughout, c,c',… and C,C',… denote generic constants in (0,∞) which are purely numerical and can change from place to place. Numbered constants are fixed upon first appearance.
§.§ A class of dynamic random environments
We will consider stationary Markov (jump) processes with values in Σ = (ℤ_+)^ℤ. The state space Σ carries a natural partial order: for two configurations η, η' ∈Σ, we write η≼η' (or η'≽η) if, for all x∈ℤ, η(x)≤η'(x). More generally, for I ⊂ℤ, η|_I≼η'|_I (or η'|_I≽η|_I) means that η(x)≤η'(x) for all x ∈ I.
For a finite subset I⊂ℤ, we use the notation η(I)=∑_x∈ Iη(x). We will denote by ≥_st. and ≤_st. the usual stochastic dominations for probability measures. Let J be a (fixed) non-empty open interval of ℝ_+. An environment is specified in terms of two families of probability measures (𝐏^η_0: η_0 ∈Σ) governing the process (η_t)_t ≥ 0 and (μ_ρ : ρ∈ J), where μ_ρ is a measure on Σ, which are required to satisfy the following conditions:
P.1 (Markov property and invariance). For every η_0∈Σ, the process (η_t)_t ≥ 0 defined under 𝐏^η_0 is a time-homogeneous Markov process, such that for all s≥ 0 and x∈ℤ, the process (η_s+t(x+·))_t≥ 0 has law 𝐏^η_s(x+·); moreover, the process (η_t)_t ≥ 0 exhibits `axial symmetry' in the sense that (η_t(x-·))_t≥ 0 has law 𝐏^η_0(x-·).
P.2 (Stationary measure). The initial distribution μ_ρ is a stationary distribution for the Markov process 𝐏^η_0. More precisely, letting
𝐏^ρ= ∫μ_ρ(dη_0) 𝐏^η_0, ρ∈ J,
the 𝐏^ρ-law of (η_t+s)_t≥0 is identical to the 𝐏^ρ-law of (η_t)_t≥0 for all s≥ 0 and ρ∈ J. In particular, the marginal law of η_t under 𝐏^ρ is μ_ρ for all t ≥ 0.
P.3 (Monotonicity). The following stochastic dominations hold:
i) (quenched) For all η'_0 ≼η_0, one has that 𝐏^η'_0≤_st.𝐏^η_0 , i.e. there exists a coupling of (η'_t)_t ≥0 and (η_t)_t ≥ 0 such that η'_t≼η_t for all t≥ 0.
ii) (annealed) For all ρ'≤ρ, one has that μ_ρ'≤_st.μ_ρ. Together with i), this implies that 𝐏^ρ'≤_st.𝐏^ρ.
P.4 (Density at stationarity). There exists a constant c_1 >0 such that for all ρ∈ J, for all ε∈ (0,1) and every positive integer ℓ,
𝐏^ρ({η_0∈Σ: |η_0([0,ℓ-1 ])-ρℓ|≥εℓ})≤ 2exp(-c_1ε^2ℓ).
A prime example satisfying the above conditions is the simple exclusion process, introduced and discussed further in Section <ref>; see in particular Lemma <ref> regarding the validity of properties (<ref>)–(<ref>). We refer to
Appendix <ref> for another example.
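To illustrate (P.4) in the simplest case, suppose that μ_ρ=Ber(ρ)^⊗ℤ, as will be the case for the SEP; the following one-line verification is a sketch of the standard argument. Since η_0([0,ℓ-1]) is then a sum of ℓ i.i.d. {0,1}-valued variables with mean ρ, Hoeffding's inequality gives
𝐏^ρ({η_0∈Σ: |η_0([0,ℓ-1])-ρℓ|≥εℓ})≤ 2exp(-2ε^2ℓ),
i.e. (P.4) holds in this case with c_1=2.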
§.§ Random walk
We now introduce the random walk in dynamic environment (RWdRE) that will be the main object of interest in this article.
To this effect, we fix two constants p_∙,p_∘∈ (0,1) such that p_∙>p_∘ and an environment configuration η=(η_t)_t≥ 0 with η_t ∈Σ (= ℤ_+^ℤ, see Section <ref>).
Given this data, the random walk evolving on top of the environment η is conveniently defined in terms of a family (U_n)_n≥ 0 of i.i.d. uniform random variables on [0,1], as follows.
For an initial space-time position z=(x,m)∈ℤ×ℤ_+, let P^η_z be the law of the discrete-time Markov chain X=(X_n)_n≥ 0 such that X_0=x and for all integer n≥0,
X_n+1=X_n+2×1{ U_n≤ (p_∘-p_∙)1{η_n+m(X_n)=0 } + p_∙}-1.
Let us call x∈ℤ an occupied site of η_t ∈Σ if η_t(x)>0, and empty otherwise.
With this terminology, (<ref>) implies for instance that when X_n, started at a point z=(x,t=0), is on an occupied (resp. empty) site of η_n, it jumps to its right neighbour with probability p_∙ (resp. p_∘) and to its left neighbour with probability 1-p_∙ (resp. 1-p_∘). We call P^η_z the quenched law of the walk started at z and abbreviate P^η=P^η_(0,0); here quenched refers to the fact that the environment η is deterministic.
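To make the one-step rule above concrete, here is a minimal simulation sketch in Python; the numerical values of p_∙, p_∘ and ρ, as well as the frozen Bernoulli environment, are illustrative assumptions only (the true environment evolves in time, as described above).

    import random

    # Illustrative parameters (assumptions): p_bullet > p_circ, density rho.
    P_BULLET, P_CIRC, RHO, N_STEPS = 0.7, 0.3, 0.5, 10000

    def step(x, occupied, u):
        # One move of the walk: jump right with prob. p_bullet on an occupied
        # site and with prob. p_circ on an empty one, cf. the display above.
        p_right = P_BULLET if occupied else P_CIRC
        return x + (1 if u <= p_right else -1)

    # Toy quenched environment: i.i.d. Bernoulli(rho) occupation, frozen in
    # time (a simplification; the actual eta evolves as an exclusion process).
    eta, x = {}, 0
    for n in range(N_STEPS):
        occ = eta.setdefault(x, random.random() < RHO)
        x = step(x, occ, random.random())
    print("empirical speed X_n/n:", x / N_STEPS)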
We now discuss annealed measures, i.e. including averages over the dynamics of the environment η. We assume from here on that η=(η_t)_t≥ 0 satisfies the assumptions of Section <ref>. Recall
that 𝐏^η_0 denotes the law of η starting from the configuration η_0 ∈Σ and that 𝐏^ρ is declared in (<ref>). Correspondingly, one introduces the following two annealed measures for the walk
ℙ^ρ_z[ · ]=∫𝐏^ρ(dη)P^η_z[ · ], ρ∈ J,
ℙ^η_0_z[ · ]=∫𝐏^η_0(dη)P^η_z[ · ], η_0 ∈Σ,
for arbitrary z ∈ℤ×ℤ_+. Whereas the latter
averages over the dynamics of η for a fixed initial configuration η_0, the former includes η_0, which is sampled from the stationary distribution μ_ρ for η. Observe that
ℙ^ρ_z = ∫μ_ρ(dη_0) ℙ^η_0_z. For ⋆∈{η,ρ}, write ℙ^⋆_x for ℙ^⋆_(x,0), for all x∈ℤ, and write simply ℙ^⋆ for ℙ^⋆_0.
We introduce a joint construction for the walk X when started at different space-time points. This involves a graphical representation using arrows similar to that used in <cit.>, but simpler, which will be sufficient for our purposes. For a point w=(x,n)∈ℤ×ℤ_+, we let π_1(w)=x and π_2(w)=n denote the projection onto the first (spatial) and second (temporal) coordinate. We consider the discrete lattice
𝕃= (2ℤ× 2ℤ_+) ∪( (1,1)+(2ℤ× 2ℤ_+)).
Note that the process (X_n,n)_n ≥ 0 evolves on the lattice 𝕃⊂ (ℤ×ℤ_+) defined by (<ref>) when X=(X_n)_n ≥ 0 is started at z=(0,0) under any of P_z^η and the measures in (<ref>)-(<ref>).
We proceed to define a family of processes (X^w=(X^w_n)_n ≥ 0 : w∈𝕃), such that X^w_0=π_1(w) almost surely and (X^(0,0)_n)_n ≥ 0 has the same law as X under P_(0,0)^η. Furthermore, X^w' and X^w have the property that they coalesce whenever they intersect, that is if X^w'_m=X^w_n for some w,w'∈𝕃 and n,m≥0, then X^w'_m+k=X^w_n+k for all k≥0.
Let U= (U_w)_w∈𝕃 be a collection of i.i.d. uniform random variables on [0,1]. Given the environment η, we define a field A=(A_w)_w∈𝕃∈{-1,1}^𝕃 of arrows (see Figure <ref>), measurably in (η,U ) as follows:
A_w = A(η_n(x), U_w) =2×1{ U_w≤ (p_∘-p_∙)1{η_n(x)= 0 } + p_∙}-1, w=(x,n)∈𝕃.
For any w=(x,n)∈𝕃, we then set X^w_0=x and, for all integer k ≥0, we define recursively
X^w_k+1 = X^w_k+A_(X^w_k, n+k).
This defines the coupled family ((X^w_k)_k ≥ 0: w∈𝕃), and we note that trajectories the process (X_k^w , k+n)_k ≥ 0, where π_2(w)=n, are embedded in (i.e. subsets of) 𝕃. In view of (<ref>) and (<ref>)-(<ref>), it follows plainly that the law of X^(0,0)= (X^(0,0)_n)_n ≥ 0 when averaging over U while keeping η fixed is the same as that of X under P^η_(0,0)= P^η.
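The coupled family lends itself to a direct numerical check; in the sketch below, a static Bernoulli field stands in for η (purely for illustration, with hypothetical parameter values), and we verify that two walks driven by the same arrow field coalesce once they meet.

    import random

    P_BULLET, P_CIRC, RHO, T = 0.7, 0.3, 0.5, 200
    random.seed(1)

    # Static Bernoulli field standing in for eta_n(x); the real environment
    # evolves in time, this stand-in only illustrates the arrow mechanism.
    eta = [[random.random() < RHO for _ in range(4 * T + 1)] for _ in range(T)]
    U = [[random.random() for _ in range(4 * T + 1)] for _ in range(T)]

    def arrow(x, n):
        # A_w: the +1/-1 step read off from eta and the uniform U_w.
        p_right = P_BULLET if eta[n][x + 2 * T] else P_CIRC
        return 1 if U[n][x + 2 * T] <= p_right else -1

    def walk(x0, steps):
        xs = [x0]
        for n in range(steps):
            xs.append(xs[-1] + arrow(xs[-1], n))
        return xs

    w1, w2 = walk(0, T), walk(4, T)  # two starting points of equal parity
    merged = next((n for n in range(T + 1) if w1[n] == w2[n]), None)
    if merged is not None:
        assert w1[merged:] == w2[merged:]  # identical after first meeting
    print("first meeting time:", merged)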
We now discuss a variation of the construction specified around (<ref>)-(<ref>), which will be practical in Section <ref>.
Let η_0∈Σ and suppose that (η,U) are coupled under the probability measure Q, with η having marginal law 𝐏^η_0 and (U_w)_w∈𝕃 a collection of i.i.d. uniform random variables on [0,1], in such a way that under Q, for all integers t ≥ 0,
(U_w: π_2(w)=t) is independent from ℱ_t,
where
ℱ_t:=σ ((η_s)_0 ≤ s ≤ t, (U_w: π_2(w)≤ t-1)).
Then, defining X^w= X^w(η,U) as in (<ref>)-(<ref>), it follows that X^(0,0) has law ℙ^η_0 (cf. (<ref>)) under Q.
We show by induction over integers k ≥ 1 that for all such k, the pair ((η_t)_0≤ t≤ k-1,(X_t)_0≤ t≤ k) has the same law under both Q and ℙ^η_0 (which, for the duration of the proof, we extend here to denote the joint distribution of (X, η), by a slight abuse of notation). The initialisation at k=1 is immediate, since Q(X_1=1)=p_∙1{η_0(0)>0}+p_∘1{η_0(0)=0} and η_0 is deterministic.
As for the induction step, assume the induction hypothesis for some k≥ 1, and condition on the σ-algebra Σ_k:=σ((η_t)_0≤ t≤ k-1,(X_t)_0≤ t≤ k). We first let η evolve between times k-1 and k. Under both Q and ℙ^η_0 and conditionally on Σ_k, we have that (η_t)_k-1≤ t≤ k is distributed as (η_t)_0≤ t≤ 1 under 𝐏^η_k-1, given the marginal distribution of η and by the Markov property (<ref>). Therefore, ((η_t)_0≤ t≤ k,(X_t)_0≤ t≤ k) has the same law under both Q and ℙ^η_0. Now, conditioning on ℱ_k as defined in (<ref>), and noticing that ((η_t)_0≤ t≤ k, (X_t)_0≤ t≤ k) (hence also η_k(X_k)) are all ℱ_k-measurable (for (X_t)_0≤ t≤ k this follows from (<ref>)-(<ref>)), we obtain that
Q(X_k+1-X_k=1 | ℱ_k)
(<ref>),(<ref>)=Q(U_(X_k,k)≤ p_∙ | ℱ_k) 1{η_k(X_k)>0}+Q(U_(X_k,k)≤ p_∘ | ℱ_k) 1{η_k(X_k)=0}.
Now, by (<ref>), and using that X_k is ℱ_k-measurable, we can evaluate the conditional probabilities in (<ref>) to find that
Q(X_k+1-X_k=1 | ℱ_k) = p_∙1{η_k(X_k)>0}+p_∘1{η_k(X_k)=0}
=ℙ^η_0(X_k+1-X_k=1 | ℱ_k),
where the second equality comes from the independence of ℱ_k and U_k under ℙ^η_0, see (<ref>) and (<ref>). Integrating both sides of (<ref>) under Q and ℙ^η_0, respectively, against a suitable (ℱ_k-measurable) test function of ((η_t)_0≤ t≤ k,(X_t)_0≤ t≤ k), it follows that ((η_t)_0≤ t≤ k,(X_t)_0≤ t≤ k+1) have the same distribution under Q and ℙ^η_0. This concludes the proof of the induction step.
We conclude this section by collecting a useful monotonicity property for the collection of random walks defined above, similar to <cit.>.
Its proof hinges on the fact that the trajectories considered cannot cross without first meeting at a vertex, after which they merge. In the sequel, for a given environment configuration η̄ =(η̄_t)_t ≥ 0, we refer to X̄^w = (X̄_n^w)_n ≥ 0 as the process defined as in (<ref>) but with η̄ in place of η entering the definition of the arrows in (<ref>). We will be interested in the case where η and η̄ are such that
η_n(x)≤η̄_n(x), for all (x,n)∈𝕃∩ K,
for some K⊆ℤ×ℤ_+.
The following result is already interesting for the choice η̄=η, in which case X̄^w= X^w below.
If η,η̄ and K⊆ℤ×ℤ_+ are such that (<ref>) holds, then for every w,w'∈ K with π_1(w')≤π_1(w) and π_2(w)= π_2(w'), and for every n ≥ 0 such that [π_1(w')-k, π_1(w)+k]× [π_2(w), π_2(w)+k] ⊆ K for all 0 ≤ k ≤ n, one has that
X^w'_n≤X̄^w_n.
We only treat the case K= ℤ×ℤ_+ to lighten the argument. The adaptation to a general K is straightforward, as all possible trajectories considered for X and X̄ lie in K by assumption. The proof proceeds by a straightforward induction argument. Indeed, since X_0^w'= π_1(w') and X̄_0^w= π_1(w), one has that
X̄^w_0- X^w'_0≥ 0 by assumption. To carry out the induction step, one notes that X̄^w_n-X^w'_n is even for any n ≥ 0 with increments ranging in -2, 0 or +2, and combines this with the following observation: if X̄^w_n-X^w'_n=0, then
X̄^w_n+1-X^w'_n+1(<ref>)= Ā_(X̄_n^w, π_2(w)+n) - A_(X_n^w', π_2(w')+n)(<ref>)= A(η̄_n(X̄^w_n), U_z)- A(η_n(X̄^w_n), U_z)≥0, with z=(X̄^w_n, π_2(w)+n),
where Ā denotes the arrow field of (<ref>) attached to η̄, the second equality follows using that π_2(w)=π_2(w') (along with X̄^w_n=X^w'_n), and the inequality is due to (<ref>) and the fact that A(·, ξ) is increasing for any ξ∈ [0,1], which is straightforward from (<ref>).
For later reference, we also record that for any w∈𝕃 and n≥ m≥ 0,
|X^w_n - X^w_m| ≤ n - m,
as follows clearly from (<ref>).
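As a quick numerical illustration of the lemma in the case K=ℤ×ℤ_+, the sketch below runs two walks with shared uniforms on ordered environments η ≼ η̄ and checks that their order is preserved at every step; the static stand-in fields and all parameter values are assumptions made only for this example.

    import random

    P_BULLET, P_CIRC, T = 0.7, 0.3, 300
    random.seed(7)

    W = 4 * T + 1  # spatial window, positions shifted by 2T for indexing
    U = [[random.random() for _ in range(W)] for _ in range(T)]
    eta = [[random.random() < 0.4 for _ in range(W)] for _ in range(T)]
    # eta_bar covers eta: sprinkle extra particles on top of eta.
    eta_bar = [[e or (random.random() < 0.3) for e in row] for row in eta]

    def arrow(field, x, n):
        p_right = P_BULLET if field[n][x + 2 * T] else P_CIRC
        return 1 if U[n][x + 2 * T] <= p_right else -1

    x_low, x_high = -2, 0  # pi_1(w') <= pi_1(w), same starting time and parity
    for n in range(T):
        x_low += arrow(eta, x_low, n)        # X^{w'} on the smaller environment
        x_high += arrow(eta_bar, x_high, n)  # bar-X^{w} on the larger one
        assert x_low <= x_high  # the order is preserved, as in the lemma
    print("final positions:", x_low, x_high)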
§ MAIN RESULTS
In this section, we start by formulating in Section <ref> precise coupling conditions that we require from the environments, and which are of independent interest. These are given by conditions <ref>-<ref> below (a flavor of the second of these was given in (<ref>) in the introduction). Our main result, Theorem <ref>, appears in Section <ref>. It concerns the generic random walk in random environment defined in Section <ref>-<ref>, subject to the conditions of Section <ref>. Our standing assumptions will thus be that all of properties (<ref>)-(<ref>) and the conditions <ref>-<ref> hold. We will verify separately in Section <ref> that all of these conditions hold for SEP. From Theorem <ref> we then readily deduce Theorems <ref> and <ref> at the end of Section <ref>.
Towards the proof of Theorem <ref>, and following the outline of Section <ref>, we proceed to introduce in Section <ref> a finite-range approximation of the model, in which the environment is renewed after L time steps and gather its essential features that will be useful for us. We then state two
key intermediate results, Propositions <ref> and <ref>, which correspond to the first and second parts from the discussion in Section <ref>; cf. also (<ref>) and (<ref>). From these, we deduce Theorem <ref> in Section <ref>. The proofs of the two propositions appear in forthcoming sections.
§.§ Coupling conditions on the environment
Recall the framework of Section <ref>-<ref>, which we now amend with three further conditions. These involve an additional parameter ν>0 which quantifies the activity of the environment (in practice it will correspond to the rate parameter appearing in (<ref>) in the case of SEP). We keep the dependence on ν explicit in the following conditions for possible future applications, for which one may wish to tamper with the speed of the environment (for instance, by slowing it down).
For the purposes of the present article however, one could simply set ν=1 in what follows.
The first two conditions <ref> and <ref> below regard the environment (η_t)_t ≥ 0 alone, which, following the setup of Section <ref>, is assumed to be specified in terms of the measures (𝐏^η_0: η_0 ∈Σ) and (μ_ρ : ρ∈ J) that satisfy (<ref>)-(<ref>). The first condition, <ref>, concerns the empirical density of the environment. Roughly speaking, it gives a quantitative control on how (<ref>) is conserved over time. The second condition, <ref>, is more technical.
In a nutshell, it states that if one environment η covers another environment η' on a finite interval I at a given time, and if η has a larger empirical density than η' outside I at the same time, the evolutions of η and η' can be coupled in a way that η covers η' on a larger interval after some time.
We proceed to formalize these two properties:
*
(Conservation of density). There exist constants c_2, c_3∈ (0,∞) such that for all ρ∈ J and ε∈ (0,1) (with ρ+ε∈ J), and for all
ℓ, ℓ', H,t≥ 1 satisfying H>4ν t > c_2ℓ^2ε^-2(1+|log^3 (ν t)|) and ℓ'≤√(t), the following two inequalities hold. Let η_0 be such that on every interval I of length ℓ included in [-H,H], one has η_0(I)≤ (ρ+ε)ℓ (resp. η_0(I)≥ (ρ-ε)ℓ). Then
𝐏^η_0( for all intervals I' of length ℓ' included in [-H+2ν t,H-2ν t]: η_t(I') ≤ (ρ+3ε) ℓ' (resp. η_t(I') ≥ (ρ-3ε) ℓ') ) ≥ 1 - 4 H exp(-c_3ε^2ℓ').
* (Couplings). There exist c_4, c_5∈ (0,∞) such that the following holds. Let ρ∈ J, ε∈ (0,1) (with ρ+ε∈ J), and H_1, H_2,t,ℓ≥ 1 be integers such that min{H_1, H_2-H_1-1}> 10νt>4νℓ^100>c_4, νℓ>c_4ε^-2(1+|log^3(νℓ^4)|) and ℓ>80νε^-1+ν^-2. Let η_0,η'_0∈Σ be such that η_0|_[-H_1, H_1]≽η_0'|_[-H_1, H_1] and such that for every interval I⊆ [-H_2,H_2] of length ⌊ℓ/2 ⌋≤ |I| ≤ℓ, we have η_0(I)≥ (ρ+3ε/4)| I | and η'_0(I)≤ (ρ+ε/4)| I |. Then there exists a coupling ℚ of two environments η,η' with respective marginals 𝐏^η_0,𝐏^η'_0 such that
ℚ(∀ s∈ [0,t], η_s|_[-H_1+4νt, H_1-4νt]≽η'_s|_[-H_1+4νt, H_1-4νt])≥ 1 - 20t exp(-νt/4)
and
ℚ(η_t |_[-H_2+6νt, H_2-6νt]≽η'_t|_[-H_2+6νt, H_2-6νt])≥ 1 -5c_5ℓ^4 H_2exp(- c_5^-1ν/(ν+1)ε^2ℓ).
For later reference, we record the following two particular instances of <ref>, which correspond to the cases where H_2=0 and H_1=0, respectively. For convenience, we state them with better constants and exponents than in <ref>. In fact, when verifying condition <ref> for the SEP in Section <ref>, we will first prove that these two conditions hold, and use them to prove <ref>.
* (No particle drifting in from the side).
Let t,H ≥ 0 and k≥ 1 be integers, and let η_0,η'_0∈Σ be such that η_0 |_[-H, H]≽η_0' |_[-H, H].
There exists a coupling of environments η,η' with respective marginals 𝐏^η_0 and 𝐏^η'_0 such that
ℚ(∀ s∈ [0,t], η_s|_[-H+2νkt, H-2νkt]≽η'_s|_[-H+2νkt, H-2νkt])≥ 1- 20exp(-kν t/4).
Informally, with high probability no particle of η' outside of [-H,H] can drift into [-H+2νt, H-2νt] (when k=1 for instance) before time t to perturb the domination of η' by η.
* (Covering η' by η).
There exists c_6 >0 such that for all ρ∈ J and ε∈ (0,1) (with ρ+ε∈ J), the following holds. If H,t≥ 1 satisfy H>4ν t, ν^8t>1 and ν t^1/4 >c_6ε^-2(1+|log^3(ν t) |), then for all η_0,η'_0∈Σ such that on each interval I ⊂ [-H,H] of length ⌊ℓ/2 ⌋≤ |I| ≤ℓ, where ℓ:=⌊ t^1/4⌋, η_0(I)≥ (ρ+3ε/4)|I| and η'_0(I) ≤ (ρ+ε/4)|I|, there exists a coupling ℚ of η,η' with marginals 𝐏^η_0, 𝐏^η'_0 so that
ℚ( η_t | _[-H+4ν t, H-4ν t]≽η_t' | _[-H+4ν t, H- 4ν t])≥ 1- c_5 tH exp(-(c_5(1+ν^-1))^-1ε^2t^1/4).
In words, the coupling ℚ achieves order between η and η' at time t under suitable regularity assumptions on the empirical density of their initial configurations η_0 and η_0'.
The third and last property ensures that if an environment η covers another environment η' and has at least one extra particle at distance ℓ from the origin (which typically happens if η has a higher density than η'), then with probability at least exponentially small in ℓ, this particle can reach the origin by time ℓ while the origin is empty for η', all the while preserving the domination of η' by η.
We will combine this property with the uniform ellipticity of the random walk on top of the environment to show that the walker has at least an exponentially small probability to reach a position where η has a particle but not η', which in turn yields a probability bounded away from zero that a walker on η steps to the right while a walker on η' steps to the left (under the coupling mentioned in Section <ref>), hence creating the desired initial gap that we will then exploit in our constructions.
* (Sprinkler).
For all ρ∈ J, for all integers H, ℓ,k≥ 1 with H≥ 2νℓ k and k ≥ 48ν^-1(ν +log(40)-log(p_∘(1-p_∙)ν/2)), the following holds. If η_0,η'_0 ∈Σ satisfy η_0(x)≥η'_0(x) for all x∈ [-H,H], η_0([0,ℓ])≥η'_0([0,ℓ])+1 and η'_0([-3ℓ+1, 3ℓ])≤ 6(ρ+1)ℓ, then there is a coupling ℚ of η' under 𝐏^η_0' and η under 𝐏^η_0 such that, with
δ = (ν/(2e^ν))^6(ρ+1)ℓ,
ℚ(η_ℓ(x)>0 , η'_ℓ(x)=0)≥ 2δ
for x ∈{0,1}, and with δ'= δ (p_∘(1-p_∙))^6(ρ+1)ℓ,
ℚ({∀ s∈ [0, ℓ], η_s|_[-H+2νkℓ, H-2νkℓ]≽η'_s|_[-H+2νkℓ, H-2νkℓ]}^c) ≤ 20e^-kνℓ/4≤δ'.
§.§ Main result
Following is our main theorem.
[Sharpness of v(·)] Let η be an environment as in Section <ref> satisfying <ref>-<ref>. Assume that for all ρ∈ J, there exists v(ρ) such that
ℙ^ρ-a.s., lim_n n^-1X_n= v(ρ) .
Then, for all ρ, ρ'∈ J such that ρ>ρ', one has that
v(ρ)>v(ρ').
With Theorem <ref> at hand, we first give the proofs of Theorems <ref> and <ref>. We start with the latter.
Theorem <ref> concerns the particular case where ℙ^ρ refers to the walk of Section <ref> evolving on top of the exclusion process η started from product Bernoulli(ρ) distribution, for ρ∈ J ⊂ (0,1). The properties (<ref>)-(<ref>) and <ref>-<ref> are indeed all satisfied in
this case, as is proved separately in Section <ref> below, see Lemma <ref> and Proposition <ref>.
In the notation of (<ref>), we now separately consider the intervals J∈{J_-, J_0, J_+}, where J_-=(0,ρ_-), J_0=(ρ_-,ρ_+) and J_+=(ρ_+,1). The fact that the law of large numbers (<ref>) holds for any choice of J is the content of <cit.>. Thus Theorem <ref> is in force, and v is strictly increasing on J for any J∈{J_-, J_0, J_+}. Since by direct application of (<ref>) and Lemma <ref>, v is already non-decreasing on J_-∪ J_0∪ J_+, it must be (strictly) increasing on J_-∪ J_0∪ J_+ altogether, and (<ref>) follows. In view of (<ref>), the first equality in (<ref>) is an immediate consequence of (<ref>) with J=J_0.
As to the second equality in (<ref>), recalling the definition of ρ_c from (<ref>), we first argue that ρ_- ≤ρ_c. If ρ < ρ_-, then by (<ref>)-(<ref>), X_n/n → v(ρ)<0 ℙ^ρ-a.s. and thus in particular ℙ^ρ(lim sup X_n <0)=1 (in fact the lim sup equals -∞, but we will not need this). On the other hand, { H_n < H_-1}⊂{ H_-1>n}⊂{X_k ≥ 0, ∀ k ≤ n}, and the latter has probability tending to 0 as n →∞ under ℙ^ρ. It follows that θ(ρ)= 0 in view of (<ref>), whence ρ≤ρ_c, and thus ρ_- ≤ρ_c upon letting ρ↑ρ_-.
Since ρ_-=ρ_+, in order to complete the proof it is enough to show that ρ_+ ≥ρ_c. Let ρ> ρ_+. We aim to show that θ(ρ) > 0. Using the fact that v(ρ)>0 and (<ref>), one first picks n_0=n_0(ρ) ≥ 1 such that ℙ^ρ_z(X_n >0, ∀ n ≥ n_0) ≥ 1/2 for any z=(m,m) with m ≥ 0 (the worst case is m=0, the other cases follow from the case m=0 using invariance under suitable space-time translations). Now, observe that for all n ≥ 0, under ℙ^ρ,
{ H_n < H_-1}⊃( { X_k-X_k-1= +1, ∀ 1 ≤ k ≤ n_0}
∩{ X_2n_0+k' >0, ∀ k' ≥ n_0}∩{lim sup_n→∞X_n=+∞});
for, on the event on the right-hand side, one has that X_n_0= n_0 and the walk can in the worst case travel n_0 steps to the left during the time interval (n_0, 2n_0], whence in fact X_k ≥ 0 for all k ≥ 0, and H_n<∞ since lim sup_n→∞X_n=+∞. Combining (<ref>), the fact that the last event on the RHS has full probability due to (<ref>)-(<ref>), the fact that P^η_(0,0)(X_k-X_k-1= +1, ∀ 1 ≤ k ≤ n_0) ≥ (p_∙∧ p_∘)^n_0 on account of (<ref>)-(<ref>) and the Markov property of the quenched law at time n_0, one finds that
θ_n(ρ) (<ref>)≥ (p_∙∧ p_∘)^n_0·ℙ_(n_0, n_0)^ρ(X_n_0+k' >0, ∀ k' ≥ n_0 ) ≥ 2^-1 (p_∙∧ p_∘)^n_0 >0,
where the second inequality follows by choice of n_0. Thus, θ(ρ)>0 (see (<ref>)), i.e. ρ≥ρ_c. Letting ρ↓ρ_+ one deduces that ρ_+ ≥ρ_c, and this completes the verification of (<ref>).
Only item (i) requires an explanation. This is an easy consequence of <cit.>, e.g. with the choice ε=(ρ_c-ρ)/4 for a given ρ<ρ_c, and Theorem <ref> (see (<ref>)), as we now explain. Indeed one has that { H_n < H_-1}⊂{X_n≥ v(ρ_c-ε)n} under ℙ^ρ as soon as n is large enough (depending on ρ); to see the inclusion of events recall that ρ_c=ρ_- on account of (<ref>), which has already been proved, and therefore v(ρ_c-ε)<0. The conclusion of item (i) now readily follows using the second estimate in <cit.>.
§.§ The finite-range model 𝐏^ρ,L
Following the strategy outlined in Section <ref>, we will aim at comparing the random walk in dynamic random environment, which has infinite-range correlations, to a finite-range model, which enjoys regeneration properties, and which we now introduce.
For a density ρ∈ J (recall that J is an open interval of ℝ_+) and an integer L≥1, we define a finite-range version of the environment, that is, a probability measure 𝐏^ρ,L governing η=(η_t(x): x ∈ℤ, t∈ℝ_+) (see Section <ref> for notation) such that the following holds. At every time t that is a multiple of L, η_t is sampled under 𝐏^ρ,L according to μ_ρ (recall (<ref>)), independently of (η_s)_0≤ s<t and, given η_t, the process (η_t+s)_0≤ s<L has the same distribution under 𝐏^ρ,L as (η_s)_0≤ s<L under 𝐏^η_t, cf. Section <ref> regarding the latter. We denote 𝐄^ρ,L the expectation corresponding to 𝐏^ρ,L. It readily follows that η is a homogeneous Markov process under 𝐏^ρ,L, and that 𝐏^ρ,L inherits all of Properties (<ref>)-(<ref>) from 𝐏^ρ. In particular μ_ρ is still an invariant measure for the time-evolution of this environment. Note that 𝐏^ρ,∞ is well-defined and 𝐏^ρ,∞= 𝐏^ρ.
Recalling the quenched law P_z^η of the walk X in environment η started at z∈ℤ×ℤ_+ from Section <ref>, we extend the annealed law of the walk from (<ref>) by setting ℙ_z^ρ,L[·] = ∫𝐏^ρ,L(dη) P_z^η[·], so that ℙ^ρ,∞_z=ℙ^ρ_z corresponds to the annealed law defined in (<ref>). We also abbreviate ℙ^ρ,L=ℙ^ρ,L_0.
We now collect the key properties of finite-range models that will be used in the sequel. A straightforward consequence of the above definitions is that
under ℙ^ρ,L, {(X_kL+s-X_kL)_0≤ s≤ L: k∈ℕ} is an i.i.d. family, with common distribution identical to the ℙ^ρ-law of (X_s)_0≤ s≤ L.
Moreover, by direct inspection one sees that 𝐏^ρ,L inherits the properties listed in (<ref>) from 𝐏^ρ; that is, whenever 𝐏^ρ does,
𝐏^ρ,L satisfies <ref>, <ref>, <ref>, <ref> (all for L≥ t) and <ref> (for L≥ℓ); more precisely, all of these conditions hold with 𝐏^η,L in place of 𝐏^η everywhere, where 𝐏^η,L refers to the evolution under 𝐏^ρ,L with initial condition η_0=η.
The next result provides a well-defined monotonic speed v_L(·) for the finite-range model. This is an easy fact to check. A much more refined quantitative monotonicity result will follow shortly in Proposition <ref> below (implying in particular strict monotonicity of v_L(·)).
[Existence of the finite-range speed v_L]
For ρ∈ J and an integer L≥ 1, let
v_L(ρ) := 𝔼^ρ,L[X_L/L]=𝔼^ρ[X_L/L].
Then
ℙ^ρ,L-a.s., lim_n→ +∞ n^-1X_n=v_L(ρ).
Moreover, for any fixed L, we have that
the map ρ↦ v_L(ρ) is non-decreasing on J.
The second equality in (<ref>) is justified by (<ref>). The limit in (<ref>) is an easy consequence of (<ref>), (<ref>), the definition (<ref>) of v_L and the law of large numbers.
The monotonicity (<ref>) is obtained by combining (<ref>) and Lemma <ref>.
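In view of the i.i.d. block structure (<ref>) and the law of large numbers above, v_L(ρ) is amenable to a simple Monte Carlo approximation; the sketch below uses a toy environment that is resampled at times multiple of L and, as a simplifying assumption, frozen in between (the true dynamics is an exclusion process), with illustrative parameter values.

    import random

    P_BULLET, P_CIRC, RHO, L, N_BLOCKS = 0.7, 0.3, 0.6, 50, 2000

    def one_block(rho):
        # One increment X_L - X_0 of the range-L model: the environment is
        # freshly resampled at the start of the block and frozen during it.
        eta, x = {}, 0
        for _ in range(L):
            occ = eta.setdefault(x, random.random() < rho)
            p_right = P_BULLET if occ else P_CIRC
            x += 1 if random.random() <= p_right else -1
        return x

    # By the regenerative structure, the blocks are i.i.d., so averaging
    # block increments estimates v_L(rho) = E[X_L]/L.
    est = sum(one_block(RHO) for _ in range(N_BLOCKS)) / (N_BLOCKS * L)
    print("Monte Carlo estimate of v_L(rho):", est)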
§.§ Key propositions and proof of Theorem <ref>
In this section, we provide two key intermediate results, stated as Propositions <ref> and <ref>, which roughly correspond to the two parts of the proof outline in Section <ref>.
Theorem <ref> will readily follow from these results, and the proof appears at the end of this section. As explained in the introduction, the general strategy draws inspiration from recent interpolation techniques used in the context of sharpness results for percolation models with slow correlation decay, see in particular <cit.>, but the proofs of the two results stated below are vastly different.
Our first result is proved in Section <ref> and allows us to compare the limiting speed of the full-range model to the speed of the easier finite-range model with a slightly different density. Note that, even though both v(·) and v_L(·) are monotonic, there is a priori no clear link between v and v_L.
[Approximation of v by v_L]
Under the assumptions of Theorem <ref>, there exists C_7=C_7(ν) ∈ (0,∞) such that for all L ≥ 3 and ρ such that [ρ -1/log L,ρ +1/log L]⊆ J,
v_L(ρ -1/log L)-C_7(log L)^100/L≤ v(ρ)≤ v_L(ρ +1/log L) +C_7(log L)^100/L.
With Proposition <ref> above, we now have a chance to deduce the strict monotonicity of v(·) from that of v_L(·). Proposition <ref> below, proved in Section <ref>, provides a quantitative strict monotonicity for the finite-range speed v_L(·). Note however that it is not easy to obtain such a statement, even for the finite-range model: indeed, from the definition of v_L(·) in (<ref>), one can see that trying to directly compute the expectation in (<ref>) for any L boils down to working with the difficult full range model. One of the main difficulties is that the environment mixes slowly and creates strong space-time correlations. Nonetheless, using sprinkling methods, it turns out that one can speed-up the mixing dramatically by increasing the density of the environment. This is the main tool we use in order to obtain the result below.
[Quantitative monotonicity of v_L]
Assume (<ref>) holds and let ρ∈ J. For all ϵ>0 such that ρ+ϵ∈ J, there exists L_1=L_1(ρ,ϵ, ν)≥ 1 such that, for all L≥ L_1,
v_L(ρ+ϵ) - v_L(ρ) ≥ 3C_7 (log L)^100/L.
Propositions <ref> and <ref> imply Theorem <ref>, as we now show.
Let ρ, ρ'∈ J with ρ>ρ', and define ε=ρ-ρ'.
Consider L≥ 3 ∨ L_1(ρ+ε/3, ε/3), so that the conclusions of Propositions <ref> and <ref> both hold. By choosing L sufficiently large in a manner depending on ρ and ρ', we can further ensure that (log L)^-1<ε/3 and [ρ-(log L)^-1, ρ+(log L)^-1]∪ [ρ'-(log L)^-1, ρ'+(log L)^-1]⊆ J. Abbreviating α_L = L^-1(log L)^100, it follows that
v(ρ)(<ref>)≥ v_L(ρ -(log L)^-1)-C_7α_L (<ref>)≥ v_L(ρ -ε/3)-C_7α_L (<ref>)≥ v_L(ρ'+2ε/3) - C_7α_L (<ref>)≥ v_L(ρ'+ε/3) +2C_7α_L (<ref>)≥ v_L(ρ'+(log L)^-1) +2C_7α_L (<ref>)≥ v(ρ')+C_7α_L >v(ρ'),
yielding (<ref>).
§ FINITE-RANGE APPROXIMATION OF ℙ^ρ
In this section, we prove Proposition <ref>, which allows us to compare the speed v(·) of the full-range model to the speed v_L(·) (see (<ref>)) of the finite-range model introduced in <ref>. The proof uses a dyadic renormalisation scheme. By virtue of the law(s) of large numbers, see (<ref>) and (<ref>), and for L_0 and ρ fixed, the speed v(ρ) ought to be close to the speed v_2^KL_0(ρ), for some large integer K. Hence, if one manages to control the discrepancies between v_2^k+1L_0(ρ) and v_2^kL_0(ρ) for all 0≤ k≤ K-1 and prove that their sum is small, then the desired proximity between v(ρ) and v_L_0(ρ) follows. This is roughly the strategy we follow except that at each step, we slightly increase or decrease the density ρ (depending on which bound we want to prove), in order to weaken the (strong) correlations in the model. This is why we only compare v(·) and v_L(·) for slightly different densities at the end. This decorrelation method is usually referred to as sprinkling; see e.g. <cit.> for similar ideas in other contexts.
As a first step towards proving Proposition <ref>, we establish in the following lemma a one-step version of the renormalization, with a flexible scaling of the sprinkling (f(L) below) in anticipation of possible future applications. For x∈ℝ, let x_-=max(-x,0) denote the negative part of x. Throughout the remainder of this section, we are always tacitly working under the assumptions of Theorem <ref>.
There exists L_0=L_0(ν) ≥ 1 such that for all L ≥ L_0, all f(L)∈ [(log L)^90, L^1/10] and all ρ such that (ρ-f(L)^-1/40,ρ+f(L)^-1/40)⊆ J, the following holds: there exists a coupling ℚ_L of (X^(i)_s)_ 0 ≤ s ≤ 2L, i=1,2, such that X^(1)∼ℙ^ρ,L, X^(2)∼ℙ^ρ+ε,2L with ε= f(L)^-1/40, and
ℚ_L(min_0≤ s≤ 2L(X^(2)_s-X^(1)_s)≤ -f(L))≤ e^-f(L)^1/40.
Consequently,
𝔼^ℚ_L[max_0≤ s≤ 2L(X^(2)_s-X^(1)_s)_-]≤ 2f(L),
and
Var^ℚ_L(max_0≤ s≤ 2L(X^(2)_s-X^(1)_s)_-)≤ 2f(L)^2.
The same conclusions hold with marginals X^(1)∼ℙ^ρ,2L and X^(2)∼ℙ^ρ+ε,L instead.
Towards showing (<ref>), let us first assume that
there exists L_0>3 such that for all L≥ L_0, (ρ+log^-2L) ∈ J and there exists a
coupling ℚ_L of η^(1)∼𝐏^ρ,L and η^(2)∼𝐏^ρ+ε,2L s.t. ℚ_L(G) ≥ 1 - exp(-f(L)^1/40),
where, setting t=⌊f(L)/2⌋, the `good' event G is defined as
G={η^(1)_s(x) ≤η^(2)_s(x), ∀ (x,s)∈ [-3L,3L] × [0,L) }
∩{η^(1)_s(x) ≤η^(2)_s(x-2t), ∀ (x,s)∈ [-3L, 3L]× [L+t, 2L)}.
Given the above, we now extend the coupling ℚ_L to the random walks X^(1)∼ P^η^(1) and X^(2)∼ P^η^(2), defined as in Section <ref>, up to time 2L. To do that, we only need to specify how we couple the collections of independent uniform random variables (U^(1)_w)_w∈𝕃 and (U^(2)_w)_w∈𝕃 (recall that 𝕃 denotes space-time, see (<ref>)) used to determine the steps of each random walk; the walks X^(i), i=1,2, up to time 2L are then specified in terms of (U^(i), η^(i)) as in (<ref>)-(<ref>). Under ℚ_L, we let (U^(1)_w)_w∈𝕃 be i.i.d. uniform random variables on [0,1] and define, for (x,s)∈𝕃,
U^(2)_(x,s)= U^(1)_(x,s) if 0≤ s<L, and U^(2)_(x,s)= U^(1)_(x-2t,s) if L≤ s<2L
(for definiteness let U^(2)_(x,s)=U^(1)_(x,s) when s ≥ 2L). Clearly (U^(2)_w)_w∈𝕃 are i.i.d. uniform variables, hence X^(2) also has the desired marginal law.
Now, we will explain why, under this coupling, (<ref>) holds, and refer to Figure <ref> for illustration. We will only consider what happens on the event G defined in (<ref>).
On G, by Lemma <ref> with K=[-3L,3L]× [0,L], we have that X^(2)_s≥ X^(1)_s for all 0≤ s≤ L. This implies that X^(2)_L+t≥ X^(1)_L+t-2t owing to (<ref>). Let η=η^(1), define η̄ by η̄_n(x)=η^(2)_n(x-2t) and w=(X^(1)_L+t, L+t). Let X^w and X̄^w be random walks evolving on top of η and η̄ respectively, and both using the collection of uniform random variables U^(1). Then, conditionally on η^(1)_L+t, η^(2)_L+t and X^(1)_L+t, and on the event {X^(1)_L+t= x_1}, X^w_L-t has the law of X^(1)_2L and X̄^w_L-t-2t has the law of the position at time 2L of a random walk started at (x_1-2t,L+t) and evolving on top of η^(2), using the collection U^(2). In particular, it evolves on the same environment as X^(2) and starts on the left of X^(2)_L+t, so that X̄^w_L-t-2t≤ X^(2)_2L (by Lemma <ref> applied with K=ℤ×ℤ_+ and η̄=η).
On G, using the second event in the intersection on the right-hand side of (<ref>), we can again apply Lemma <ref>, now at time L+t, with X and X̄ as described and K=[-3L+2t,3L-2t]× [L+t,2L]. We obtain that X^w_s-t≤X̄^w_s-t for all s∈[t,L]. Thus, subtracting 2t on both sides and using the previous facts, we have on G that X^(1)_s-2t≤ X^(2)_s, for all 0≤ s≤ 2L. All in all, we have proved that
G ⊆{max_0≤ s≤ 2L(X^(2)_s-X^(1)_s)_-≤ 2t },
and therefore (<ref>) follows from our assumption (<ref>). The fact that (<ref>) and (<ref>) hold is a simple consequence of (<ref>) and a straightforward computation using the deterministic inequality (X^(2)_s-X^(1)_s)_-≤ 4L valid for all 0≤ s≤ 2L and the fact that L≥ L_0> 3 (note that for L large enough, we have (2L)^2e^-f(L)^1/40≤ f(L) for all f(L)∈ [log^90L,L^1/10]).
It remains to show that (<ref>) holds, which brings into play several of the properties gathered in <ref> and which hold by assumption. Let L_0>3 be such that for every L≥ L_0 and any choice of f(L)≥log^90L, with ε=f(L)^-1/40 we have that ρ+ε∈ J and
L> t>10^100ν^-1+ν^-8+ν^-7c_6^8ε^-16,
where t=⌊ f(L)/2⌋ as defined above (<ref>).
We will provide a step-by-step construction of ℚ_L, in four steps. First, note that by (<ref>)-ii), there exists a coupling ℚ_L of environments (η^(1)_t(x) : x∈ℤ,t∈ [0,L)) and (η^(2)_t(x) : x∈ℤ,t∈ [0,L]) such that under ℚ_L, η^(1)∼𝐏^ρ,L on the time interval [0,L), η^(2)∼𝐏^ρ+ε,2L on the time interval [0,L] and such that ℚ_L-a.s., η^(1)_t(x)≤η^(2)_t(x) for all x∈ℤ and t∈ [0,L).
We then extend _L to time L for η^(1) by sampling η^(1)_L∼μ_ρ, independently of (η^(1)_t)_0≤ t<L and (η^(2)_t)_0≤ t≤ L.
In particular the above already yields that the inequalities required as part of the event G in (<ref>) which involve (η^(i)_t)_0≤ t<L actually hold ℚ_L-a.s.
Second, letting ℓ:=⌊ t^1/4⌋, we define G_1 to be the event that for every interval I⊆ [-10L(1+ν), 10L(1+ν)] of length ⌊ℓ/2 ⌋≤ |I|≤ℓ, the inequalities η^(1)_L(I)≤ (ρ+ε/4)ℓ and η^(2)_L(I)≥ (ρ+3ε/4)ℓ hold (see the beginning of <ref> for notation). By (<ref>) and a union bound over the choices of such intervals I, we get that
ℚ_L(G_1^c)≤ 40ℓ L(1+ν) exp(-c_1ε^2ℓ/32).
At time L and on G_1^c, extend ℚ_L to the time interval [L,2L] by letting η^(1) and η^(2) follow their dynamics 𝐏^η^(1)_L and 𝐏^η^(2)_L independently of each other (using (<ref>)).
Third, we continue the construction of the coupling on G_1 by applying <ref> with H=10L(1+ν), η_0=η^(2)_L(· -2t) and η'_0=η^(1)_L, checking the assumptions using (<ref>) (note indeed that we have 1+|log^3(ν t)|≤ (ν t)^1/8 since ν t>10^100, and then that c_6ε^-2(ν t)^1/8≤ν t^1/4). This implies that, given η^(1)_L and η^(2)_L, on G_1, there exists an extension of the coupling ℚ_L on the time interval [L,L+t] such that, defining
we have
ℚ_L(G_2^c)≤ 10c_5 t L(1+ν) exp( -c_5^-1ν/(ν+1)ε^2t^1/4).
At time L+t and on G_2^c, we extend ℚ_L on the time interval [L+t,2L] by letting η^(1) and η^(2) follow their dynamics 𝐏^η^(1)_L+t and 𝐏^η^(2)_L+t independently of each other.
Fourth, we continue the construction of the coupling on G_2 by applying <ref> with η_0=η^(2)_L+t(· -2t) and η'_0=η^(1)_L+t, H=8L+6νL, k=1 and t in <ref> equal to L-t. This implies that, given η^(1)_L+t and η^(2)_L+t, on G_2, there exists an extension of the coupling ℚ_L on the time interval [L+t,2L] such that, defining
G_3={∀ x∈ [-8L , 8L ], s∈[L+t,2L], η^(1)_s(x)≤η^(2)_s(x-2t) },
we have
ℚ_L(G_3^c)≤ 20exp(-ν(L-t)/4).
Finally, note that G_1∩ G_2∩ G_3 ⊆ G, so that (<ref>) is a straightforward consequence of (<ref>), (<ref>) and (<ref>) provided that L_0 is chosen large enough, depending only on ν (as well as c_5, c_1 and c_6). Note in particular that with our choices of f(L), ε above (<ref>) and ℓ (≥ c f(L)^1/4), we have that min (L-t,ε^2t^1/4,ε^2ℓ)≥ f(L)^1/20. All in all (<ref>) follows. The case X^(1)∼ℙ^ρ,2L and X^(2)∼ℙ^ρ+ε,L can be treated in the same way, by means of an obvious analogue of (<ref>). The remainder of the coupling (once the environments are coupled) remains the same.
We are now ready to prove Proposition <ref>. The proof combines the law of large numbers for the speed with Lemma <ref>, applied inductively over increasing scales.
The rough strategy is as follows. We aim to compare the speed of the full-range model v(ρ) to the speed of the finite-range model v_L(ρ) for some possibly large, but finite L, and prove that these two are close. For δ<v_L(ρ), we know that the probability for the finite-range model to go slower than speed δ goes to 0 on account of Lemma <ref>. Thus, in a large box of size 2^K L_0, the L_0-range model will most likely be faster than δ as soon as K is large enough. Lemma <ref> is used over dyadic scales to control the discrepancies between the 2^k L_0-range model and the 2^{k+1} L_0-range model for all k from 0 to K-1, and to prove that they are small. It will be seen to imply that with high probability, in a box of size 2^K L_0, the 2^K L_0-range model will be faster than δ. Now, we only need to observe that the 2^K L_0-range model, observed in a box of size 2^K L_0, is equivalent to the full-range model. As Lemma <ref> already hints at, this is but a simplified picture and the actual argument entails additional complications. This is because each increase in the range (obtained by application of Lemma <ref>) comes not only at the cost of slightly `losing speed,' but also requires a compensation in the form of a slight increase in the density ρ, and so the accumulation of these various effects has to be tracked and controlled jointly.
We only show the first inequality in Proposition <ref>, i.e. for L ≥ C(ν) and ρ such that [ρ -(log L)^-1,ρ +(log L)^-1]⊆ J, abbreviating α_L= (log L)^100/L, one has
v_L(ρ - (log L)^-1)-α_L≤ v(ρ).
The first inequality in (<ref>) then follows for all L ≥ 3 by suitably choosing the constant C_7 since v,v_L ∈ [-1,1].
The second inequality of (<ref>) is obtained by straightforward adaptation of the arguments below, using the last sentence of Lemma <ref>.
For L ≥ 1, define L_k= 2^kL for k ≥ 0. As we now explain, the conclusion (<ref>) holds as soon as for L ≥ C(ν) and ρ as above, we show that
ℙ^ρ((X_L_K/ L_K) ≤ v_L(ρ -( log L)^-1)-α_L)→ 0, as K→∞.
Indeed under the assumptions of Proposition <ref>, the law of large numbers (<ref>) holds, and therefore in particular, for all δ>v(ρ), we have that ℙ^ρ(X_L_K≤δL_K) tends to 1 as K →∞. Together with (<ref>) this is readily seen to imply (<ref>).
We will prove (<ref>) for L ≥ C(ν), where the latter is chosen such that the conclusions of Lemma <ref> hold for L, and moreover such that
(log L)^-9/4+100/(log L)^5/4≤1/log L, log L≥ 5 and (3/2)^-k(klog2/log L + 1)^99≤ 1, for all k≥0.
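As a quick sanity check that the three requirements above can be met simultaneously, one may test them numerically in terms of u=log L; the value u=10^9 below is an arbitrary, hypothetical choice, not one made in the proof.

    import math

    u = 1e9  # u stands for log L; any sufficiently large value works here
    cond1 = u ** (-9 / 4) + 100 * u ** (-5 / 4) <= 1 / u
    cond2 = u >= 5
    # Check the third condition over a large range of k; beyond this range,
    # (3/2)^(-k) decays exponentially while the other factor grows at most
    # polynomially, so the inequality persists.
    cond3 = all((1.5) ** (-k) * (k * math.log(2) / u + 1) ** 99 <= 1
                for k in range(10 ** 5))
    print(cond1, cond2, cond3)  # expected output: True True True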
For such L we define, for all integers K,k ≥ 0, all ρ>0 and all δ∈ℝ, recalling the finite-range annealed measures ℙ^ρ,L from <ref>,
p_ρ,K,δ^(k)=ℙ^ρ,L_k(X_L_K≤δL_K),
p_ρ,K,δ^(∞)=ℙ^ρ(X_L_K≤δL_K)
(observe that the notation is consistent with <ref>, i.e. ℙ^ρ,∞= ℙ^ρ).
In this language (<ref>) requires that p_ρ,K,δ^(∞) vanishes in the limit K →∞ for a certain value of δ. We start by gathering a few properties of the quantities in (<ref>).
For all ρ'∈ J, the following hold:
lim_K→ +∞p_ρ',K,δ^(0)=0 for all δ< v_L(ρ') (by (<ref>)),
p_ρ',K,δ^(∞)=p_ρ',K,δ^(k), for all δ∈[-1,1], K≥0 and k≥ K,
p_ρ',K,δ^(∞) and p_ρ',K,δ^(k) are non-increasing in ρ' and non-decreasing in δ.
As explained at the start of the proof, owing to the form of Lemma <ref> we will need to simultaneously sprinkle the density and the speed we consider in order to be able to compare the range-L_k+1 model to the range-L_k model. To this effect, let
ρ_0=ρ-(log L)^-1 and ρ_k+1=ρ_k+(log L_k)^-9/4, for all k≥0,
as well as
δ_0=v_L(ρ_0)-L^-2 and δ_k+1=δ_k- log^99(L_k)/L_k, for all k≥0.
A straightforward computation, bounding the sum below for k≥1 by the integral ∫_0^∞ dx/(xlog 2 +log L)^9/4, yields that
lim_k ρ_k = ρ_0 + ∑_k≥0 (log L_k)^-9/4≤ρ_0 + (log L)^-9/4+100/(log L)^5/4(<ref>)≤ρ_0 +1/log L(<ref>)≤ρ.
Another straightforward computation yields that
∑_k≥0(log L_k)^99/L_k = ∑_k≥0(3/2)^-k(klog2/log L + 1)^99×(log L)^99/L(3/4)^k (<ref>)≤(log L)^99/L∑_k≥0(3/4)^k (<ref>)≤(log L)^100/L-1/L^2.
In particular, since (ρ_k) is increasing in k, (<ref>) implies that for all K≥0, we have ρ≥ρ_K, and (<ref>) yields in view of (<ref>) that δ_K >v_L(ρ_0)-α_L, with α_L (=L^-1(log L)^100) as above (<ref>).
Using this, it follows that, for all K≥ 0,
p^(∞)_ρ,K,v_L(ρ_0)-α_L(<ref>)=p_ρ,K,v_L(ρ_0)-α_L^(K)
(<ref>)≤ p_ρ_K,K,δ_K^(K) = p_ρ_0,K,δ_0^(0)+∑_0 ≤ k < K(p_ρ_k+1,K,δ_k+1^(k+1) - p_ρ_k,K,δ_k^(k)).
As the left-hand side in (<ref>) is precisely equal to the probability appearing in (<ref>), it is enough to argue that the right-hand side of (<ref>) tends to 0 as K →∞ in order to conclude the proof.
By recalling that δ_0<v_L(ρ_0) from (<ref>) and using (<ref>), we see that lim_K→∞ p_ρ_0,K,δ_0^(0)=0, which takes care of the first term on the right of (<ref>).
We now aim to show that the sum over k in (<ref>) vanishes in the limit K →∞, which will conclude the proof. Lemma <ref>
now comes into play. Indeed recalling the definition (<ref>), the difference for fixed value of k involves walks with range L_k and L_k+1, and Lemma <ref> supplies a coupling allowing good control on the negative part of this difference (when expressed under the coupling). Specifically, for a given K ≥ 1 and
0≤ k ≤ K-1, let X^(1)∼ℙ^ρ_k+1,L_k+1 and X^(2)∼ℙ^ρ_k,L_k. Note that for i∈{1,2}, one has the rewrite
X^(i)_L_K=∑_0 ≤ℓ< 2^K-k-1( X^(i)_(ℓ+1)L_k+1-X^(i)_ℓ L_k+1).
Therefore, due to the regenerative structure of the finite-range model, explicated in (<ref>), it follows that, for i∈{1,2}, under ℙ^ρ_k+2-i,L_k+2-i,
X^(i)_L_Klaw=∑_0 ≤ℓ< 2^K-k-1 X^(i,ℓ)_L_k+1,
where X^(i,ℓ)_L_k+1, ℓ≥0, is a collection of independent copies of X^(i)_L_k+1 under ℙ^ρ_k+2-i,L_k+2-i.
Now recall the coupling measure ℚ_L provided by Lemma <ref> with f(L)=(log L)^90 and let us denote by ℚ the product measure induced by this coupling for the choices L=L_k and ε= ρ_k+1-ρ_k in Lemma <ref>, so that ℚ supports the i.i.d. family of pairs (X^(1,ℓ)_L_k+1,X^(2,ℓ)_L_k+1), ℓ≥0, each sampled under ℚ_L_k. In particular, under ℚ, for all ℓ≥0, X^(1,ℓ)_L_k+1 and X^(2,ℓ)_L_k+1 have law ℙ^ρ_k+1,L_k+1 and ℙ^ρ_k,L_k, respectively. Now, one can write, for any K≥1 and 0≤ k≤ K-1, with the sum over ℓ ranging over 0 ≤ℓ< 2^K-k-1 below, that
p_ρ_k+1,K,δ_k+1^(k+1) - p_ρ_k,K,δ_k^(k)(<ref>)=ℚ(∑_ℓ X^(1,ℓ)_L_k+1≤ L_K δ_k+1) - ℚ(∑_ℓ X^(2,ℓ)_L_k+1≤ L_K δ_k)
≤ℚ(∑_ℓ X^(1,ℓ)_L_k+1≤ L_K δ_k+1, ∑_ℓ X^(2,ℓ)_L_k+1> L_K δ_k)
(<ref>)≤ℚ(∑_ℓ (X^(1,ℓ)_L_k+1- X^(2,ℓ)_L_k+1)≤ -2^K-klog^99(L_k)).
Now, using Chebyshev's inequality together with Lemma <ref> (recall that f(L)=log^90L), it follows that for K≥1 and 0≤ k≤ K-1,
p_ρ_k+1,K,δ_k+1^(k+1) - p_ρ_k,K,δ_k^(k) ≤2^K-k(log L_k)^190/( 2^K-k(log L_k)^99-2^K-k(log L_k)^90)^2≤7/2^K-k(k+5)^8≤ 2^-K/2+2^24K^-8,
where the last inequality is obtained by considering the cases when k< K/2 or k≥ K/2 separately, together with straightforward computations. The bound (<ref>) implies in turn that
∑_k=0^K-1(p_ρ_k+1,K,δ_k+1^(k+1) - p_ρ_k,K,δ_k^(k))≤ K2^-K/2+2^24K^-7K→∞⟶ 0,
which concludes the proof.
§ QUANTITATIVE MONOTONICITY FOR THE FINITE-RANGE MODEL
The goal of this section is to prove Proposition <ref>. For this purpose, recall that J is an open interval, and fix ρ∈ J and ϵ>0 such that (ρ+ϵ)∈ J. Moreover, in view of (<ref>), we can assume that ϵ<1/100. The dependence of quantities on ρ and ϵ will be explicit in our notation. As explained above Proposition <ref>, even if we are dealing with the finite-range model, the current question is about estimating the expectation in (<ref>), which is actually equivalent to working with the full-range model. Thus, we retain much of the difficulty, including the fact that the environment mixes slowly. The upshot is that the speed gain to be achieved is quantified, and rather small, cf. the right-hand side of (<ref>).
The general idea of the proof is as follows. First, recall that we want to prove that at time L, the expected position of X^ρ+ϵ∼ℙ^ρ+ϵ (where ∼ denotes equality in law in the sequel) is larger than the expected position of X^ρ∼ℙ^ρ by 3(log L)^100. The main conceptual input is to couple X^ρ and X^ρ+ϵ in such a way that, after a well-chosen time T ≪ L, we create a positive discrepancy between them with a not-so-small probability, and that this discrepancy is negative with negligible probability, allowing us to control its expectation. We will also couple these walks in order to make sure that the environment seen from X^ρ_T is dominated by the environment seen from X^ρ+ϵ_T, even if these walks are not in the same position, and we further aim for these environments to be `typical'. The last two items will allow us to repeat the coupling argument several times in a row and obtain a sizeable gap between X^ρ_L and X^ρ+ϵ_L. In more quantitative terms, we choose below T=5(log L)^1000 and create an expected discrepancy of exp(-(log L)^1/20) at time T. Repeating this procedure L/T times provides us with an expected discrepancy at time L larger than 3(log L)^100 (and in fact larger than L^1+o(1)). We refer to the second part of Section <ref> for a more extensive discussion of how the expected gap size comes about.
We split the proof of Proposition <ref> into two parts. The main part (Section <ref>) consists of constructing a coupling along the above lines. In Section <ref>, we prove Proposition <ref>.
§.§ The trajectories Y^±
In this section, we define two discrete-time processes Y_t^- and Y^+_t, t ∈ [0,T], for some time horizon T, see (<ref>) below, which are functions of two deterministic environments η^- and η^+ and an array U=(U_w)_w∈𝕃 (see Section <ref> for notation) of numbers in [0,1]. This construction will lead to a deterministic estimate of the difference Y^+_t-Y^-_t, stated in Lemma <ref>. In the next section, see Lemma <ref>, we will prove that there exists a measure ℚ on (η^±, U) such that under ℚ, Y^- dominates stochastically the law of a random walk X^ρ∼ℙ^ρ (see below (<ref>) for notation), Y^+ is stochastically dominated by the law of a random walk X^ρ+ϵ∼ℙ^ρ+ϵ, and we have a lower bound on 𝔼^ℚ[Y^+_T-Y^-_T], thus yielding a lower bound on 𝔼^ρ+ϵ[X^ρ+ϵ_T] -𝔼^ρ[X^ρ_T].
The construction of the two processes Y^± will depend on whether some events are realized for η^± and U. We will denote E_1, E_2,… these events, which will occur (or not) successively in time. We introduce the convenient notation
E_i-jdef.=E_i∩ E_i+1∩…∩ E_j, for all j>i≥ 1,
and write E_i-j^c for the complement of E_i-j. We refer to Figure <ref> (which is a refined version of Figure <ref>) for visual aid for the following construction of Y^±, and to the discussion in Section <ref> for intuition. Let us give a brief outlook on what follows. The outcome of the construction will depend on whether a sequence of events E_1-E_9 happens or not: when all of these events occur, which we call a success, then Y_T^+ and Y^-_T have a positive discrepancy and we retain a good control on the environments at time T, namely η^+ dominates η^- (or, more precisely, η^- shifted by Y_T^+-Y^-_T). When at least one of these nine events does not happen, this can either result in a neutral event, where the trajectories end up at the same position and we still have control on the environments, or it could result in a bad event, where we can have Y_T^+<Y^-_T (but we will show in Lemma <ref> that we keep control on the environments with overwhelming probability, owing to the parachute coupling evoked at the end of Section <ref>). Whenever we observe a neutral or a bad event, we will exit the construction and define the trajectories Y^± at once from the time of observation all the way up to time T. Finally, we note that the walks Y^+ and Y^- are actually not Markovian. This is however not a problem as we use them later to have bounds on the expected displacements of the actual walks X^ρ and X^ρ+ε.
We now proceed to make this precise. For L≥ 3, define
ℓ_G=⌊ (log L)^1000⌋, ℓ_g = ⌊ (log L)^1/1000⌋ and T=5ℓ_G,
as well as
s_0= ℓ_G, s_1= s_0+ℓ_g, s_2= s_1+2ℓ_g^20 and s_3=s_2+ℓ_g^20.
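To get a feel for these scales: ℓ_G is polynomial of degree 1000 in log L, while ℓ_g grows so slowly that it exceeds any fixed value only for astronomically large L. The sketch below (Python; purely illustrative) compares logarithms, since ℓ_G itself overflows floating point for any interesting L.

```python
# Logarithms of the time scales in the two displays above, for a sample log L.
import math

log_L = 1e6                                  # stands for log L
log_lG = 1000 * math.log(log_L)              # log of l_G = (log L)^1000
log_lg = math.log(log_L) / 1000              # log of l_g = (log L)^{1/1000}

print("log(s_1 - s_0) = log l_g       =", log_lg)            # sprinkler window
print("log(s_2 - s_1) = log(2 l_g^20) =", math.log(2) + 20 * log_lg)
print("log(s_3 - s_2) = log(l_g^20)   =", 20 * log_lg)
print("log(s_0)       = log l_G       =", log_lG)
print("log(T)         = log(5 l_G)    =", math.log(5) + log_lG)
# Note how tiny l_g is here (log l_g ~ 0.014, i.e. l_g ~ 1): the construction
# below is genuinely asymptotic in L.
```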
Let M≥ 10(ν+1)L, which will parametrize the spatial length of a space-time box in which the entire construction takes place, and let Y_0^±=0. The definition of Y^± depends on the values of M and L, but we choose not to emphasize it in the notation. Given processes
γ = (η^+, η^-, U)
on a state space Ω such that the first two coordinates take values in ( ℤ_+ )^[0,T] ×ℤ and the last one in [0,1]^𝕃, we will define the events E_i=E_i(γ) below measurably in γ and similarly Y^± =(Y^±_t(γ))_ 0 ≤ t ≤ T with values in the set of discrete-time trajectories starting at 0. Unlike the sample paths of X^ρ, X^ρ+ε, the trajectories of Y^± may perform jumps that are not to nearest neighbors. Probability will not enter the picture until Lemma <ref> below, see in particular (<ref>), which specifies the law of γ. We now properly define the aforementioned three scenarios of success, neutral events and bad events, which will be mutually exclusive, and we specify Y^± in all cases.
Below s and t always denote integer times. Define the event (see Section <ref> regarding the notation ≽)
E_1= {∀ s∈ [0,s_0], η^+_s|_[-M+2νs_0, M-2νs_0]≽η^-_s|_[-M+2νs_0, M-2νs_0]}.
The event E_1 guarantees the domination of the environment η^- by η^+ and is necessary for a success, while E_1^c will be part of the bad event.
On E_1, for all 1≤ t≤ s_0, let recursively
Y^-_t=Y^+_t=Y^-_t-1+A(η^-_t-1(Y^-_t-1), U_(Y^-_t-1,t-1))
where A was defined in (<ref>), so that we let the walks evolve together up to time s_0. On the bad event E_1^c, we exit the construction by defining
Y^-_t=t and Y^+_t=-t, for all 1≤ t≤ T.
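The function A from (<ref>) is not reproduced in this section; the sketch below (Python) uses a guessed form consistent with the way U is used in E_6 further down: a uniform variable below p_∘ forces a right step whatever the occupation, above p_∙ a left step, and in between the occupation decides. Only the monotonicity property checked at the end is what driving both walks by the same array U actually relies on.

```python
# Illustrative stand-in for the step function A(occupation, u) of (<ref>);
# p_circ < p_bullet are placeholder values, not taken from the text.
import random

P_CIRC, P_BULLET = 0.3, 0.7

def A(occupation: int, u: float) -> int:
    """+1/-1 step given the local occupation and a uniform variable u."""
    threshold = P_BULLET if occupation >= 1 else P_CIRC
    return +1 if u < threshold else -1

# Monotonicity used throughout: with the *same* driving value u, an occupied
# site never sends the walk further left than an empty one.
for _ in range(10_000):
    u = random.random()
    assert A(1, u) >= A(0, u)
```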
Next, define the events
E_2= {η^+_s_0([Y^-_s_0, Y^-_s_0+ℓ_g])>η^-_s_0([Y^-_s_0, Y^-_s_0+ℓ_g]) }
∩{η^-_s_0([Y^-_s_0-3ℓ_g+1, Y^-_s_0+3ℓ_g]) ≤ (ρ+1)· 6ℓ_g},
E_3= {∀ s∈ [s_0, s_1], η^+_s|_[-M+(4ν+1) s_1, M-(4ν+1) s_1]≽η^-_s|_[-M+(4ν+1) s_1, M-(4ν+1) s_1]}.
The event E_3 again ensures the domination of η^- by η^+ on a suitable spatial interval while E_2 creates favourable conditions at time s_0 to possibly see a sprinkler at time s_1.
On the event E_1-3 (recall (<ref>) for notation), for all s_0+1≤ t≤ s_1, let recursively
Y^-_t=Y^+_t=Y^-_t-1+A(η^-_t-1(Y^-_t-1), U_(Y^-_t-1,t-1)),
so that the walks, from time s_0, continue to evolve together up to time s_1 and, in particular, we have that
on E_1-3, Y^-_s_1=Y^+_s_1.
In order to deal with the case where E_2 or E_3 fail, we will distinguish two mutually exclusive cases that will later contribute to an overall neutral and bad event, respectively; cf. (<ref>) and (<ref>). To this end, we introduce
E_2,bis={∀ s∈ [s_0,T], η^+_s|_[-M+2νT, M-2νT]≽η^-_s|_[-M+2νT, M-2νT]}.
On E_1∩ E_2^c∩ E_2,bis, which will be a neutral event, we exit the construction by letting the walk evolve together up to time T, that is, we define
Y^-_t=Y^+_t=Y^-_t-1+A(η^-_t-1(Y^-_t-1), U_(Y^-_t-1,t-1)), for all s_0+1≤ t≤ T.
On the bad event (E_1∩ E_2^c∩ E_2,bis^c)∪(E_1-2∩ E_3^c), we exit the construction by defining
Y^-_t=t and Y^+_t=-t, for all s_0+1≤ t≤ T.
In the above, Y^+ and Y^- may take a non-nearest neighbour jump at time s_0, which is fine for our purpose. Next, define
E_4= {η^+_s_1(Y_s_1^ + )≥ 1,η^-_s_1(Y_s_1^ - )=0},
E_5= {∀ s∈ [s_1,s_2], η^+_s|_[-M+(6ν+1) s_2, M-(6ν+1) s_2]≽η^-_s|_[-M+(6ν+1) s_2, M-(6ν+1) s_2]},
where E_4 states that the walkers see a sprinkler at time s_1 and E_5 guarantees domination of the environments from time s_1 to s_2. We will first define what happens on the neutral and the bad events. For this purpose, define
E_4,bis={∀ s∈ [s_1,T], η^+_s|_[-M+(4ν+1) T, M-(4ν+1) T]≽η^-_s|_[-M+(4ν+1) T, M-(4ν+1) T]}.
Recall that on E_1-3, we have defined Y^± up to time s_1. On the neutral event E_1-3∩ E_4-5^c∩ E_4,bis, we define
Y^-_t=Y^+_t=Y^-_t-1+A(η^-_t-1(Y^-_t-1), U_(Y^-_t-1,t-1)), for all s_1+1≤ t≤ T.
On the bad event E_1-3∩ E_4-5^c∩ E_4,bis^c, we exit the construction by defining
Y^-_t=t and Y^+_t=-t, for all s_1+1≤ t≤ T.
On the event E_1-5, because of the sprinkler, the walkers have a chance to split apart; hence, for t∈ [s_1+1,s_2], we let
Y^-_t=Y^-_t-1 +A(η^-_t-1(Y^-_t-1), U_ ( Y^-_t-1 ,t-1) ) and Y^+_t=Y^+_t-1 +A(η^+_t-1(Y^+_t-1), U_ ( Y^+_t-1 ,t-1) ).
Above, Y^+ and Y^- evolve on top of their respective environments η^+ and η^- from time s_1+1 to time s_2. To be able to continue the construction from time s_2 to s_3, we define two events E_6 and E_7 concerning the environments from times s_1 to s_2. If we are on E_1-6, between times s_1+1 and s_2 we will require that Y^+ and Y^- drift away, regardless of the states of η^+ and η^-, using only the information provided by U. For this purpose, define
E_6= ⋂_s_1≤ t≤ s_2-1 E_6^t,
where we set
E^s_1_6={U_(Y^-_s_1,s_1)∈ (p_∘, p_∙)} (recall that p_∙ >p_∘) and
E^t_6={ U_ ( Y_s_1^- -(t-s_1) ,t) > p_∙}∩{ U_ ( Y^-_s_1+ t-s_1 ,t)< p_∘} for s_1+1≤ t ≤ s_2-1.
On E_4∩ E_6^s_1, Y^- steps to the left and Y^+ steps to the right from their common position, thus creating a gap at time s_1+1. Then for all s_1<t< s_2, as long as E_4 and E_6^s_1 to E_6^t-1 happen, Y^-_t and Y^+_t are at distinct positions and E_6^t allows Y^- to take one more step to the left and Y^+ one more step to the right. Hence, using what we have constructed so far on E_1-5, we have that
on E_1-6, Y^+_s_2=Y^-_s_2+2(s_2-s_1),
and we still have the domination of η^- by η^+ at time s_2, cf. (<ref>).
Since Y^+ and Y^- are no longer at the same position, we will momentarily allow this domination to be lost, and then recreate it at time s_3 in a suitably shifted manner, namely, achieve that η^+(Y^+_s_3+·)|_I ≽η^-(Y^-_s_3+·) |_I for a suitable interval I, see (<ref>). To do so, we first need favourable conditions at time s_2 encapsulated by the event
E_7={[ for all intervals I⊆ [Y^-_s_1 - 3(ν+1)T, Y^-_s_1 + 3(ν+1)T] with; ⌊ℓ_g^2 /2⌋≤| I|≤ℓ_g^2, η^+_s_2(I)≥ (ρ+3ϵ/4) | I | and η^-_s_2(I)≤ (ρ+ϵ/4) | I | ]}
The above requires good empirical densities at time s_2 on an interval of length of order T centred around the common position of the walkers Y^± at time s_1. On E_1-7, we do not precisely control the position of the walkers from time s_2 to s_3 and define, for all s_2< t≤ s_3,
Y_t^- -Y_t-1^- = 1, Y_t^+ - Y_t-1^+ = -1,
which corresponds to the worst case scenario assuming nearest-neighbour jumps. In particular, using (<ref>), (<ref>) and (<ref>), we have that
on E_1-7, Y^+_s_3-Y^-_s_3=2ℓ_g^20.
We now need to consider the case where E_6-7 fails, and we will again distinguish two types of failure. For this purpose, define
E_6,bis={∀ s∈ [s_2,T], η^+_s|_[-M+(6ν+1) T, M-(6ν+1) T]≽η^-_s|_[-M+(6ν+1) T, M-(6ν+1) T]},
and, on the neutral event E_1-5∩ E_6,7^c∩ E_6,bis, we merge Y^+ with Y^- at time s_2+1 and then let them walk together, by defining
Y_s_2+1^+ =Y_s_2+1^-= Y_s_2^- + A(η_s_2^-(Y^-_s_2), U_(Y^-_s_2,s_2)), and
Y^-_t =Y^+_t=Y^-_t-1+A(η^-_t-1(Y^-_t-1), U_(Y^-_t-1,t-1)), for all s_2+2≤ t≤ T
(above one line would be sufficient but we single out (<ref>) because the merging will typically occasion a jump for Y^+). On the remaining bad event E_1-5∩ E_6-7^c∩ E_6,bis^c, we exit the construction in the now usual way by defining
Y^-_t=t and Y^+_t=-t, for all s_2+1≤ t≤ T.
It remains to define Y^± from time s_3 to T on the event E_1-7. The good event will require
E_8def.={η^+_s_3(·+ℓ_g^20)|_[Y^-_s_1-2(ν+1)T,Y^-_s_1+2(ν+1)T]≽η^-_s_3(·-ℓ_g^20)|_[Y^-_s_1-2(ν+1)T,Y^-_s_1+2(ν+1)T]},
that is, we want that the environment η^+ seen from Y^+_s_3 covers η^- seen from Y^-_s_3 on an interval of length of order T, which will enable us to let the walkers move in parallel (i.e. taking the same steps at the same time), even if they are at different positions. Moreover, we want this domination to persist from time s_3 to time T, hence we require
E_9def.={∀ s∈ [s_3,T], η^+_s(·+ℓ_g^20)|_[Y^-_s_1-2 T,Y^-_s_1+2T]≽η^-_s(·-ℓ_g^20)|_[Y^-_s_1-2 T,Y^-_s_1+2T]}.
On the good event E_1-9, we let, for t∈ [s_3+1,T],
Y^-_t=Y^-_t-1+A(η^-_t-1(Y^-_t-1),U_(Y^-_t-1,t-1)) and
Y^+_t=Y^+_t-1+A(η^+_t-1(Y^+_t-1),U_(Y^+_t-1-2ℓ_g^20,t-1)).
Note that above, we choose to shift the collection (U_w) spatially by 2ℓ_g^20 for Y^+, which corresponds to the difference between Y^- and Y^+. Therefore, both walks are using the same U_w to determine their next step. Using (<ref>), (<ref>) and (<ref>), one thus proves recursively that
on E_1-9, Y^+_t-Y^-_t=2ℓ_g^20, for all s_3≤ t≤ T.
Finally, on the bad event E_1-7^c∩ E_8-9^c, we finish the construction by defining
Y^-_t=Y^-_t-1+1, Y^+_t=Y^+_t-1-1, for all s_3+1≤ t≤ T-1, and Y^-_T=T,Y^+_T=-T.
This ends the definition of the trajectories Y^+ and Y^-. Let us emphasize once more that these trajectories are not nearest-neighbour and not Markovian w.r.t. the canonical filtration associated to (η^+,η^-,U), but that this will not prevent us from obtaining the desired bounds.
Below, we proceed to define our three key events and summarise in Lemma <ref> some of the important (deterministic) properties we will use.
* Scenario I: Good event. We define
E_gooddef.= E_1-9.
Notice that by (<ref>), on the event E_good we have that
Y^+_T-Y^-_T=2ℓ_g^20>0,
and also that
∀ s∈ [0,s_2], η^+_s|_[-T,T]≽η^-_s|_[-T,T],
which follows by (<ref>), (<ref>) and (<ref>), provided that M-(6ν+1)s_2≥ T. Finally, (<ref>) and the fact that on E_good, | Y^-_s_1|≤ s_1 (by (<ref>) and (<ref>)) imply that
∀ s∈ [s_3,T], η^+_s(·+2ℓ_g^20)|_[-T-2ℓ_g^20,T-2ℓ_g^20]≽η^-_s|_[-T,T].
* Scenario II: Neutral event. We define
E_neutraldef.=(E_1∩ E_2^c∩ E_2,bis) ∪(E_1-3∩ E_4-5^c∩ E_4,bis)
∪(E_1-5∩ E_6-7^c∩ E_6,bis).
Note that on E_neutral, (<ref>)-(<ref>), (<ref>) and (<ref>) imply that
Y^+_T=Y^-_T
and (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>)
imply that
Y^-_t=Y^-_t-1+A(η^-_t-1(Y^-_t-1), U_(Y^-_t-1,t-1)), for all 1≤ t≤ T.
From (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we also have
∀ s∈ [0,T], η^+_s|_[-T, T]≽η^-_s|_[-T,T],
provided that M-(6ν+1)T≥ T.
* Scenario III: Bad event. We finally define the event
E_bad def.=( E_good∪ E_neutral)^c=E_1^c ∪(E_1∩ E_2^c∩ E_2,bis^c) ∪(E_1-2∩ E_3^c) ∪
(E_1-3∩ E_4-5^c∩ E_4,bis^c) ∪(E_1-5∩ E_6-7^c∩ E_6,bis^c) ∪(E_1-7∩ E_8-9^c ).
In particular, (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) yield
Y^+_T = Y^-_T-2T=-T on the event E_bad.
The next deterministic lemma is a restatement of (<ref>), (<ref>) and (<ref>).
The events E_good, E_neutral and E_bad defined in (<ref>), (<ref>)
and (<ref>), respectively, form a partition of Ω such that
Y_T^+ = Y_T^-+2ℓ_g^20 on E_good;
Y^+_T =Y_T^- on E_neutral;
Y_T^+ = Y^-_T -2T on E_bad.
§.§ The coupling ℚ
We now aim to compare Y^± to X^ρ and X^ρ+ε, and to integrate over the dynamics of the environments when started from a typical initial configuration. We will derive from this a bound on the expected discrepancy between Y^+ and Y^-, and thus on the one between X^ρ and X^ρ+ε. The main result of this section is Lemma <ref>, which entails a coupling ℚ with these features. Lemma <ref> is the key ingredient in the proof of Proposition <ref>, which appears in the next subsection.
We will require the environments η^+ and η^- to have a typical initial configuration under 𝐏^ρ+ϵ and 𝐏^ρ in the following sense. Recall that ρ∈ J and ϵ∈ (0,1/100) have been fixed at the start of this section, and that (ρ+ϵ)∈ J. For M,L∈ℕ, we say that (η^+_0,η^-_0)∈Σ^2 (recall that Σ= (ℤ_+)^ℤ from Section <ref>) is (M,L)-balanced if all of the following occur:
* η^+_0(x)≥η^-_0(x) for all x∈ [-M,M],
* η^+_0([x,x+⌊(log L)^2⌋ -1])≥ (ρ + 99ϵ/100) ⌊(log L)^2⌋ for all x∈ [-M,M-⌊(log L)^2⌋ +1],
* η^-_0([x,x+⌊(log L)^2⌋ -1])≤ (ρ + ϵ/100) ⌊(log L)^2⌋ for all x∈ [-M,M-⌊(log L)^2⌋ +1].
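To make the quantifiers explicit, here is a direct transcription of (i)-(iii) into a checker (Python; illustrative, with environments given as integer-keyed maps on [-M,M]):

```python
import math

def is_balanced(eta_plus, eta_minus, M, L, rho, eps):
    """Check conditions (i)-(iii) for (eta_plus, eta_minus) to be (M, L)-balanced."""
    ell = math.floor(math.log(L) ** 2)
    # (i) sitewise domination on [-M, M]
    if any(eta_plus[x] < eta_minus[x] for x in range(-M, M + 1)):
        return False
    for x in range(-M, M - ell + 2):          # windows [x, x + ell - 1]
        count_plus = sum(eta_plus[y] for y in range(x, x + ell))
        count_minus = sum(eta_minus[y] for y in range(x, x + ell))
        if count_plus < (rho + 99 * eps / 100) * ell:    # (ii) fails
            return False
        if count_minus > (rho + eps / 100) * ell:        # (iii) fails
            return False
    return True
```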
There exists L_2=L_2(ρ,ϵ,ν)≥ 1 such that for all L≥ L_2, the following holds. For all M∈ [10(ν+1)L,20(ν+1)L] and for every (M,L)-balanced choice of (η^+_0,η^-_0), there exists a coupling ℚ = ℚ_(η^+_0,η^-_0) of (η^+, η^-, U) with the following properties:
η^-∼𝐏^η^-_0, η^+∼𝐏^η^+_0, and (U_w)_w∈𝕃 are i.i.d. uniform variables on [0,1];
There exist X^±=X^±(η^±,U) such that X^±∼ℙ^η^±_0 and ℚ-a.s., the inequalities X^-_T ≤ Y^-_T and Y^+_T ≤ X^+_T hold, with Y^±= Y^± (η^+,η^-,U) as in Section <ref>;
𝔼^ℚ[Y^+_T - Y^-_T] ≥exp(- (log L)^1/20);
ℚ( E_restart) ≥ 1- L^-100, where
E_restartdef.={∀ x∈ [-(M-(8ν+3)T),M-(8ν+3)T], η^+_T(x +Y^+_T) ≥η^-_T(x +Y^-_T) }.
Let L_2≥ 1 to be chosen later, and L≥ L_2. Fix M∈ [10(ν+1)L, 20(ν +1)L], and let (η_0^+, η_0^-) be (M, L)-balanced.
We construct the coupling ℚ below. We will then define X^- and X^+ such that
(<ref>) is satisfied. To prove the main estimate (<ref>), we will control the probabilities of the events E_good, E_neutral and E_bad emerging from the construction of Y^±, and use Lemma <ref>. Finally, proving (<ref>) will require bounding the probability of losing the synchronisation of η^+ and η^- by time T.
We split the proof into five parts: the construction of ℚ and the proofs of (<ref>)-(<ref>).
Part I: Construction of ℚ.
We will denote the natural filtration generated by the triplet (η^+,η^-,U) by
ℱ_t=σ{(η^+_t')_0≤ t'≤ t,(η^-_t')_0≤ t'≤ t, (U_w)_w∈, π_2(w)≤ t-1}, for t≥0.
Under ℚ, we first let U=(U_w)_w∈ be a collection of i.i.d. uniform random variables on [0,1]. We now define the coupling of η^- and η^+ under ℚ in such a way that given ℱ_t, the evolution of (η^±_s)_t≤ s≤ t+1 has the right marginal 𝐏^η^±_t (up to time one), cf. (<ref>). Moreover, throughout the construction of ℚ, i.e. from (<ref>) to (<ref>) below, we will in fact ensure that for all integer t ∈ [0,T], under ℚ,
(U_w: π_2(w)=t) is independent from ℱ_t.
We now proceed to specify η^± under ℚ, and refer again to Figure <ref> for visual aid. On the time-interval [0,s_0], we
couple (η^+,η^-) as (η,η') in <ref> with η_0=η^+_0, η'_0=η^-_0, H=M, t=s_0 and k=1;
in particular, the domination η_0 |_[-H, H]≽η_0' |_[-H, H] required by <ref> is ensured by the fact that (η^+_0,η^-_0) is (M,L)-balanced; see item (i) in the corresponding definition above Lemma <ref>.
Since E_1 ∈ℱ_s_0 by (<ref>) and (<ref>), we can observe at time s_0 whether E_1 occurred. Conditionally on ℱ_s_0 and on E_1^c, during [s_0,T], we
let η^- and η^+ evolve independently with respective marginals ^η^-_s_0 and ^η^+_s_0.
Recalling the definition (<ref>), we have that E_2∈ℱ_s_0.
Conditionally on ℱ_s_0 and on E_1∩ E_2^c, we couple (η^+,η^-) during the interval [s_0,T] as
(η,η') in <ref> with η_0=η^+_s_0, η'_0=η^-_s_0, H=M-2ν s_0, t=T-s_0 and k=1.
On E_1-2, note that | Y^-_s_0|≤ s_0 <s_1. Hence, as we now explain, during the time-interval [s_0, s_1], conditionally on ℱ_s_0 and on E_1-2, we can
apply the coupling of <ref> to η_ t(·)=η^+_s_0+ t (·+Y^-_s_0) and η_ t '(·)=η^-_ s_0+ t (·+Y^-_s_0), t ≥0,
with ℓ=ℓ_g, H=M-(2ν+1) s_1, k=⌊ s_1/ℓ_g⌋ and x= ℓ mod 2.
Note indeed that by (<ref>) and (<ref>), k≥ℓ_g> 48ν^-1(ν +log (40) - p_∘(1-p_∙)ν/2) for L large enough and H= M-(2ν+1) s_1≥ 2ν s_1≥ 2ν kℓ_g = 2 νℓ k for all L≥ 3, as required in <ref>. Together with the definition of E_2 in (<ref>), this allows us to apply the coupling of <ref>.
So far we have specified η^± for all of [0,T] on E_1-2^c and up to time s_1 on E_1-2. As we now briefly elaborate, it is also plain from the construction above that the independence property postulated in (<ref>) holds for all t ≤ s_1. This is a trivial matter for t<s_0 in view of (<ref>), which does not involve U at all. For s_0 ≤ t ≤ s_1, the only dependence on U arises through Y^-_s_0 via E_2 and (<ref>). However, on account of (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), Y^-_s_0 only relies on variables in U with time label at most s_0-1, whence the claim. In the sequel (more precisely, up to (<ref>)), considerations along similar lines allow to extend (<ref>) to larger times t. These will not be made explicit.
Returning to the construction of ℚ, it remains to specify η^±
after time s_1 on the event E_1-2. Recall that by (<ref>) and (<ref>), both E_3 and E_4 are in ℱ_s_1. Over the time interval [s_1, T], conditionally on ℱ_s_1 and on E_1-2∩ E_3^c, we choose to
let η^- and η^+ evolve independently with respective marginals ^η^-_s_1 and ^η^+_s_1.
On the other hand, conditionally on ℱ_s_1 and on E_1-3∩ E_4^c, during [s_1,T], we
couple (η^+,η^-) as (η, η') in <ref>,
with H=M-(4ν+1) s_1, t=t=T-s_1, k= 1 .
Conditionally on ℱ_s_1 and on E_1-4, during the time-interval [s_1, s_2], we
couple (η^+,η^-) as (η, η') in <ref>,
with H=M-(4ν+1) s_1, t=s_2-s_1, k= ⌊ s_1/(s_2-s_1)⌋.
Note that k ≥ 1 in (<ref>) by (<ref>) and since L ≥ L_2 by assumption (upon possibly enlarging L_2). Moreover, note that the conditions on the environments at time s_1 needed for <ref> to apply in both (<ref>) and (<ref>) are met owing to the occurrence of E_3, see (<ref>).
We still have to specify η^± from time s_2 onwards on the event E_1-4. By definition (<ref>), we have that E_5∈ℱ_s_2. Hence, conditionally on ℱ_s_2 and on E_1-4∩ E_5^c, during [s_2, T], we
let η^- and η^+ evolve independently with respective marginals ^η^-_s_2 and ^η^+_s_2.
Using the definitions (<ref>) and (<ref>), we have that both E_6 and E_7 are in ℱ_s_2. During [s_2,T], conditionally on ℱ_s_2 and on E_1-5∩ E_6-7^c, observing that the defining features of E_5 (see (<ref>)) allow to apply <ref>, we
couple (η^+,η^-) as (η, η') in <ref> with H=M-(6ν+1) s_2, t=T-s_2 and k=1.
Conditionally on ℱ_s_2 and on E_1-7, during [s_2,s_3], as we now explain, we
couple (η^+(·+Y^-_s_2+ℓ_g^20),η^-(·+Y^-_s_2-ℓ_g^20 )) as (η, η') in <ref>
with H=3(ν+1)T, t=s_3-s_2 and (ρ,ε)=(ρ,ϵ).
Indeed, one checks that the conditions needed for <ref> to apply are all satisfied on the event E_7 whenever L_2 (ρ,ϵ,ν) is large enough. First, by choice of L_2 and for all L ≥ L_2, one readily ensures (recall (<ref>)-(<ref>)) that all of 3(ν+1)T > 4ν(s_3-s_2), ν^8(s_3-s_2)>1 and ν(s_3-s_2)^1/4>C_3ϵ^-2(1+|log^3(ν(s_3-s_2)) |) hold. Moreover, the conditions on the empirical densities of η_0 and η_0' appearing above (<ref>) hold by definition of E_7 at (<ref>), since ⌊ (s_3-s_2)^1/4⌋≥ℓ_g^2 by (<ref>). Note that the relevant intervals I in the context of <ref>, which have length ⌊ℓ/2 ⌋≤ |I| ≤ℓ with ℓ= ⌊ (s_3-s_2)^1/4⌋, may in practice be much larger than those appearing in the definition of E_7, but they can be paved by disjoint contiguous intervals as entering E_7. The required controls on the corresponding empirical densities η_0(I) and η_0'(I) are thus inherited from those defining E_7. Similar considerations also apply below (whenever either of <ref> or <ref> are used).
It remains to specify η^± on the event E_1-7 for the time interval [s_3,T]. We now define an additional event whose realisation will allow us to couple η^± on the whole window of width order M at time T, regardless of whether the coupling at (<ref>) succeeds or not; here, by “succeed” we mean that the (high-probability) event appearing in (<ref>) is realized. The following will play a key role when establishing (<ref>). We set
E_10def.={[ ∀ I ⊆ [-M+2ν s_3,M-2ν s_3] with ⌊⌊ (log L)^2⌋/2⌋≤| I |≤⌊ (log L)^2⌋,; η^-_s_3(I)≤ (ρ+ϵ/4)| I| and η^+_s_3( I )≥ (ρ+3ϵ/4)| I | ]}.
The event E_10 above and E_8 defined in (<ref>) are both ℱ_s_3-measurable. Conditionally on ℱ_s_3 and on E_1-8∩ E_10, during [s_3,T], we couple
(η^+(· +Y^-_s_1+ℓ_g^20 ),η^-(· +Y^-_s_1-ℓ_g^20 )) as in <ref> with H_1=2(ν+1)T,
H_2=M-2ν s_3-s_1, t=T-s_3, ℓ=ℓ_G^1/200 and (ρ,ε)=(ρ,ϵ).
To this effect, we verify that for L large enough, by (<ref>), (<ref>) and since M ≥ 10(ν+1)L, we indeed have that
min{H_1, H_2-H_1-1}> 10νt>4νℓ^100>compatible, νℓ>compatibleϵ^-2(1+|log^3(νℓ^4)|) and ℓ>80νϵ^-1+ν^-2 .
Also, the domination on [-H_1,H_1] and the empirical density condition are respectively guaranteed by definition of E_8 at (<ref>) and E_10 at (<ref>).
If E_8 does not occur (hence we temporarily lose the domination of η^- by η^+) but E_10 does, we proceed to what was referred to as the parachute coupling in Section <ref>, in order to recover this domination by time T. Precisely, conditionally on ℱ_s_3 and on E_1-7∩ E_8^c∩ E_10, we
couple (η^+(· -T),η^-(· +T)) as (η, η') in <ref>
with H=M-T-2ν s_3, t=T-s_3 and (ρ,ε)=(ρ,ϵ).
One checks indeed that the conditions of <ref> hold, since for L large enough (recalling (<ref>) and (<ref>)) we have that
H>4ν t, ν^8 t>1, ν t^1/4 >C_3ϵ^-2(1+|log^3(ν t) |) and t^1/4>⌊ (log L)^2 ⌋, so that on E_10 the condition on the empirical density holds.
The remaining cases are straightforward to specify. Conditionally on ℱ_s_3, on E_1-8∩ E_10^c, we
couple (η^+(· +Y^-_s_1+ℓ_g^20 ),η^-(· +Y^-_s_1-ℓ_g^20 )) as in <ref>
with H=2(ν+1)T-ℓ_g^20, t=T-s_3, and k=1.
Finally, conditionally on ℱ_s_3 and on E_1-7∩ E_8^c∩ E_10^c, during [s_3,T], we
let η^- and η^+ evolve independently with respective marginals ^η^-_s_3 and ^η^+_s_3.
We have now fully defined the measure ℚ and with it the triplet (η^+,η^-,U) (in the time interval [0,T]). The task is now to verify that with these choices, all of (<ref>)-(<ref>) hold. For concreteness we extend all three processes (η^+,η^-,U) independently at times t >T, using the Markov property at time T for η^±. These extensions will de facto play no role because all of (<ref>)-(<ref>) only concern matters up to time T.
Part II: Proof of (<ref>).
The fact that U has the desired law is immediate, see below (<ref>). We proceed to show that η^-∼𝐏^η_0^-. Since the construction of η^- only consists of successive couplings at times 0,s_0,s_1,s_2 and s_3 where the marginals have the desired distributions (recall <ref>, <ref>, <ref> and <ref>), by virtue of the Markov property (<ref>), it is enough to check that the events deciding which coupling to apply at time s_i are ℱ_s_i-measurable for i∈{0,1,2,3}, which we already did at (<ref>)-(<ref>), (<ref>), (<ref>), (<ref>)
and (<ref>). Therefore, η^-∼𝐏^η_0^-. In the same way we deduce that η^+∼𝐏^η_0^+.
Part III: Proof of (<ref>).
We now use (η^+,η^-,U) to construct explicit functions X^+=X^+(η^+,U) and X^-= X^-(η^-,U) with the correct marginal laws X^±∼ℙ^η_0^±. The case of X^- is easily dispensed with: we define X^- as in (<ref>) and (<ref>) with (η^-,U) instead of (η,U). Since we have already established that η^-∼𝐏^η_0^- as part of (<ref>), it follows using (<ref>) and Lemma <ref> that X^-∼ℙ^η_0^-.
As for X^+, we also define it via (<ref>) and (<ref>) using the construction specified around (<ref>), replacing (η,U) by (η^+,U^+) where U^+ is defined as follows: for all w∈𝕃 with π_2(w)≤ s_3-1, U^+_w=U_w, and for all w∈𝕃 with π_2(w)≥ s_3,
U^+_w=
U_w-(2ℓ_g^20,0), on E_1-8,
U_w, on E_1-8^c.
Recalling that E_1-8∈ℱ_s_3, and hence is independent of (U_w; w∈𝕃, π_2(w)≥ s_3), it follows that X^+∼ℙ^η_0^+ again by combining the established fact that η^+∼𝐏^η_0^+ and (<ref>). The rationale behind (<ref>) will become clear momentarily.
We show that X^± as defined above satisfy Y^-_T≥ X^-_T and Y^+_T≥ X^+_T ℚ-a.s. To this end, note first that on E_bad and by (<ref>), Y^-_T=T and Y^+_T=-T, hence ℚ-a.s. on E_bad we have using the trivial bounds X^±_T ∈ [-T,T] that X^-_T≤ Y^-_T and Y^+_T≤ X^+_T.
Second, on E_neutral and by (<ref>), under ℚ, the process (Y^-_t: 0≤ t≤ T) simply follows the environment (η^-,U) as per (<ref>) and (<ref>), and so does (X_t^-: 0≤ t≤ T) by above definition of X^-. Hence Y^-_T=X^-_T. Moreover, since E_neutral⊂ E_1-8^c by (<ref>), the process (X^+_t: 0≤ t≤ T) follows the environment (η^+,U) by (<ref>). By Lemma <ref> applied with (X,X)=(Y^-,X^+), (η,η̃)=(η^-,η^+) and K=[-T,T]× [0,T], recalling (<ref>), we get that X^+_T≥ Y^-_T. Using (<ref>), we finally obtain X^+_T≥ Y^+_T=Y^-_T=X^-_T ℚ-a.s. on E_neutral.
Third, on E_good⊂ E_1-6, the process (Y^-_t: 0≤ t≤ s_2) follows the environment (η^-,U) by (<ref>), (<ref>) and (<ref>), as is the case of X^-, so that X^-_s_2=Y^-_s_2. Then by (<ref>) we have that Y^-_s_3=Y^-_s_2+(s_3-s_2) which guarantees that X^-_s_3≤ Y^-_s_3. From time s_3 to time T, X^- and Y^- follow the same environment (η^-,U) by (<ref>) and thus, by Lemma <ref> with (X,X)=(Y^-,X^-), (η,η̃)=(η^-,η^-) and K=ℤ× [s_3,T], we have that X^-_T≤ Y^-_T.
Similarly for X^+ and Y^+, on E_good⊂ E_1-3, during [0,s_1] the process X^+ follows the environment (η^+,U) while Y^+ follows (η^-,U) by (<ref>), (<ref>). By (<ref>) we can apply Lemma <ref> with (X,X)=(Y^+,X^+), (η,η̃)=(η^-,η^+) and K=[-T,T]× [0,s_1] to deduce that Y^+_s_1≤ X^+_s_1. Then during [s_1,s_2], both Y^+ and X^+ follow (η^+,U) (recall (<ref>), and that E_good⊂ E_1-5). Thus Lemma <ref> with (X,X)=(Y^+,X^+), (η,η̃)=(η^+,η^+) and K=ℤ× [s_1,s_2] ensures that Y^+_s_2≤ X^+_s_2 ℚ-a.s. on E_good. Next, (<ref>), which holds on E_1-7⊃ E_good, implies that Y^+_s_3≤ X^+_s_3, given that X^+ only takes nearest-neighbour steps. Finally, since E_good⊂ E_1-9, from time s_3 to T, both Y^+ and X^+ follow the environment (η^+,U^+), where U^+ is as in (<ref>) (recall (<ref>); this explains the choice in (<ref>)). Applying Lemma <ref> with (X,X)=(Y^+,X^+), (η,η̃)=(η^+,η^+) and K=ℤ× [s_3,T] therefore yields Y^+_T≤ X^+_T on E_good. This concludes the proof of (<ref>).
Part IV: Proof of (<ref>).
By Lemma <ref>, we have that
𝔼^ℚ[Y^+_T-Y^-_T]=2ℓ_g^20ℚ(E_good)-2Tℚ(E_bad).
Recalling the definitions of E_good and E_bad from (<ref>) and (<ref>), as well as (<ref>) and (<ref>), it is thus enough to show that
(ℚ(E_good)=) ℚ(E_1-9)≥exp(-ℓ_g^22)
and
Sdef.=ℚ(E_1^c) +ℚ(E_1∩ E_2^c∩ E_2,bis^c) +ℚ(E_1-2∩ E_3^c)
+ℚ(E_1-3∩ E_4-5^c∩ E_4,bis^c)
+ℚ(E_1-5∩ E_6-7^c∩ E_6,bis^c) +ℚ(E_1-7∩ E_8-9^c) ≤ℚ(E_1-9)/(10T).
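Before turning to the proofs, note the bookkeeping which makes (<ref>) and (<ref>) sufficient: writing q=ℚ(E_good), the decomposition above gives 𝔼^ℚ[Y^+_T-Y^-_T]≥ 2ℓ_g^20 q - 2T· q/(10T) ≥ q ≥ exp(-ℓ_g^22), and ℓ_g^22=(log L)^22/1000 is eventually smaller than (log L)^1/20. A numerical sketch (Python; log L chosen, for illustration, just large enough that ℓ_g≥2):

```python
import math

log_L = 1e305                          # stands for log L (so that l_g >= 2 below)
l_g = math.floor(log_L ** (1 / 1000))  # here l_g = 2

# The E_bad correction costs at most q/5, leaving a gain of at least q:
assert 2 * l_g**20 - 2 / 10 >= 1

# Exponent comparison behind exp(-l_g^22) >= exp(-(log L)^{1/20}): 22/1000 < 1/20.
assert l_g**22 <= log_L ** (1 / 20)
```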
Proof of (<ref>). Recall E_1 from (<ref>). By (<ref>) and (<ref>), which is in force, we get
ℚ(E_1^c)≤ 20 exp (-ν s_0/4) ≤exp(-ℓ_g^10^5),
the last inequality being true for L large enough by (<ref>) and (<ref>).
Next, recalling (<ref>), that 4s_0>s_0+3ℓ_g by (<ref>) and that ϵ<1/100, and using that | Y^-_s_0|≤ s_0, we note that
E_2^c⊆ ⋃_ x∈ [-4s_0+1,4s_0],
⌊ℓ_g/2⌋≤ℓ'≤ℓ_g{η^-([x,x+ℓ'-1])≤ (ρ+ϵ/2)ℓ' <η^+([x,x+ℓ'-1])≤ (ρ+2ϵ)ℓ'}^c.
We note in passing that we cannot locate precisely Y^-_s_0 without (possibly heavily) conditioning the evolution of η^- on [0,s_0]. By <ref> applied with either (η_0,ρ,ε)=(η^-_0,ρ ,ϵ/20) or (η_0,ρ,ε)=(η^+_0,ρ+ϵ ,ϵ/20), both times with (ℓ, H,t)= (⌊log ^2 L⌋,(4ν+8)s_0,s_0) and ℓ' ranging from ⌊ℓ_g/2⌋ to ℓ_g we have by a union bound on x and ℓ':
ℚ(E_2^c)≤ (2× 8s_0×ℓ_g) 4 (4ν+8)s_0exp (-densitystableexpoϵ^2 ⌊ℓ_g/2⌋/400)≤ 1/4,
the last inequality holding for L large enough by (<ref>) and (<ref>). Note that we can indeed apply <ref> since (η^+_0,η^-_0) is (M,L)-balanced (see items (ii) and (iii) above Lemma <ref>), and M>(4ν+8)s_0> 4ν s_0 >densitystableϵ^-2⌊ (log L)^2⌋^2(1+|log^3(ν s_0)|) and ℓ_g≤√(s_0) for L large enough.
Next, we aim to derive a suitable deterministic lower bound on ℚ(E_3-4 |ℱ_s_0), on the event E_1-2, which is ℱ_s_0-measurable. We aim to apply <ref>, cf. (<ref>), and dealing with E_3 is straightforward, see (<ref>), but E_4 requires a small amount of work, cf. (<ref>) and (<ref>). To this effect, we first observe that under ℚ, with η, η' as defined in (<ref>), one has the inclusions
E_1-4(<ref>), (<ref>)⊃{η^+_s_1(Y_s_1^ - )≥ 1,η^-_s_1(Y_s_1^ - )=0}∩ E_1-3
⊃{η_ℓ(x)≥ 1,η'_ℓ(x)=0}∩{Y_s_1^– Y_s_0^-=x }∩ E_1-3,
where ℓ=ℓ_g=s_1-s_0 and x= ℓ mod 2 on account of (<ref>) and (<ref>). Now recalling that the evolution of Y^- on the time interval [s_0,s_1] follows (<ref>) on E_1-3, and in view of (<ref>), one readily deduces that the event {Y_s_1^– Y_s_0^-=x } is implied by the event E_1-3∩ F, where F refers to the joint occurrence of
{ U_(Y^-_s_0,s_0+ n) < p_∘} for even integer n satisfying 0 ≤ n ≤ℓ-1 and { U_(Y^-_s_0,s_0+ n) > p_∙} for odd integer n satisfying 0 ≤ n ≤ℓ-1.
(Observe indeed that, if ℓ is odd, whence x=1, there will be one more step to the right than to the left in the resulting trajectory for Y^+). Feeding this into (<ref>), applying a union bound, using (<ref>) and (<ref>) (which are in force on account of (<ref>)), it follows that on E_1-2,
ℚ(E_3-4 |ℱ_s_0)≥ℚ({η_ℓ(x)≥ 1,η'_ℓ(x)=0}∩ F |ℱ_s_0) -δ' ≥ 2δ(p_∘(1-p_∙))^ℓ -δ' ≥δ'
(see above <ref> regarding δ and δ');
in the penultimate step above, we have also used that, conditionally on ℱ_0, the events {η_ℓ(x)≥ 1,η'_ℓ(x)=0} and F are independent. Combining this with (<ref>) and (<ref>), we deduce that for L large enough, using (<ref>),
ℚ(E_1-4)≥ 3δ'/4- 20 e^-ℓ_g^10^5≥δ'/2.
Next, recalling the definition (<ref>) of E_5 and that E_1-4∈ℱ_s_1, by (<ref>) and the bound given in <ref>, we have that, ℚ-a.s. on E_1-4,
ℚ(E_5^c |ℱ_s_1)≤ 20exp(- ⌊ s_1/(s_2-s_1)⌋ (s_2-s_1)ν4 )≤exp(- ν4ℓ_G )≤exp(- ν8ℓ_g^10^6),
for all L large enough by (<ref>).
We will return to E_6 momentarily and first consider E_7. To compute the probability of E_7^c, we take a union bound over ℓ' such that ⌊ℓ_g^2/2⌋≤ℓ'≤ℓ_g^2, use the definition (<ref>) and the fact that (η_0^-,η_0^+) is (M,L)-balanced in order to apply <ref> twice, once for η^- and once for η^+, choosing in both cases (ℓ,H,t)=(⌊ (log L)^2⌋, 38(ν+1)ℓ_G, s_2) and, for η^-, with (η_0,ρ, ε)=(η^-_0,ρ ,ϵ/100 ) and, for η^+, with (η_0,ρ, ε)=(η^+_0,ρ+ϵ ,ϵ/100 ). Note that for L large enough by (<ref>) and (<ref>), we have indeed M>H>4ν t>densitystableℓ^2ε^-2(1+|log^3(ν t)|) and ℓ'≤ℓ_g^2≤√(t), as required in <ref>. We thus obtain that
ℚ(E_7^c)≤ 400(ν+1)ℓ_g^2ℓ_G exp(-densitystableexpoϵ^2⌊ℓ_g^22⌋/10^4)≤exp( -densitystableexpoϵ^210^5ℓ_g^2 ),
where the last inequality holds for L large enough (depending on ν, ρ and ϵ).
Putting (<ref>), (<ref>) and (<ref>) together and taking L (hence ℓ_g) large enough, it follows that
ℚ(E_1-5∩ E_7) ≥ℚ(E_1-5)-ℚ(E_7^c) ≥δ'/4.
Concerning E_6, we first observe that under ℚ, the field (U_w: s_1 ≤π_2(w) ≤ s_2) is independent from the σ-algebra generated by ℱ_s_1 and (η^±_t)_s_1≤ t≤ s_2; this can be seen by direct inspection of the coupling construction until time s_2, paying particular attention (with regards to the evolution of η^± during [s_1,s_2]) to (<ref>), (<ref>), (<ref>) (and (<ref>)), which all involve only either i) trivial couplings or ii) couplings relying on <ref>. In particular these couplings do not involve U at all. Using the previous observation and recalling (<ref>)-(<ref>), it follows that ℚ-a.s.,
ℚ(E_6 | ℱ_s_1, (η^±_t)_s_1≤ t≤ s_2)=(p_∙ -p_∘)( p_∘ (1-p_∙))^s_2-s_1-1≥ (p_∙ -p_∘)( p_∘ (1-p_∙))^2ℓ_g^20.
Next, observe that E_1-5 and E_7 are measurable with respect to the σ-algebra generated by ℱ_s_1 and (η^±_t)_s_1≤ t≤ s_2, owing to (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>). Combining this with (<ref>), and recalling the value of δ' from <ref>, we obtain that
ℚ(E_1-7) ≥( p_∘ (1-p_∙))^2ℓ_g^20δ'/4 ≥( p_∘ (1-p_∙))^3ℓ_g^20≥exp(-ℓ_g^21),
where, in the last two inequalities, we took L (hence ℓ_g) large enough.
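In log-form, the last inequality reads 3ℓ_g^20 log(1/(p_∘(1-p_∙))) ≤ ℓ_g^21, i.e. it holds as soon as ℓ_g ≥ 3log(1/(p_∘(1-p_∙))). A quick check with placeholder values of p_∘ and p_∙ (Python; illustrative):

```python
import math

p_circ, p_bullet = 0.3, 0.7                               # placeholder values
cost = math.log(1 / (p_circ * (1 - p_bullet)))            # per-step cost, ~2.41 here
print("inequality holds once l_g >=", 3 * cost)           # ~7.2 here
for l_g in range(math.ceil(3 * cost), 60):
    assert 3 * l_g**20 * cost <= l_g**21                  # i.e. 3*cost <= l_g
```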
Recalling E_8 from (<ref>) along with the coupling defined in (<ref>) and using that E_1-7∈ℱ_s_2, we can apply <ref>, and obtain that, ℚ-a.s. on E_1-7, for large enough L (owing to (<ref>) and (<ref>)),
ℚ(E_8^c | ℱ_s_2)≤ 2C_3× 3(ν+1)T(s_3-s_2)exp(-(2C_3(1+ν^-1))^-1ϵ^2 (s_3-s_2)^1/4)≤exp(-ℓ_g^4),
which implies that
ℚ(E_8^c | E_1-7)≤exp(-ℓ_g^4).
Next, we control the probability of E_10 defined at (<ref>). To do so, we take a union bound over ℓ'∈ [⌊⌊ (log L)^2⌋/2⌋,⌊ (log L)^2⌋] and, using that (η_0^-,η_0^+) is (M,L)-balanced, we apply <ref> twice for any fixed ℓ' in this interval: once with (η_0, ρ, ε)=(η^-_0,ρ,ϵ/20 ), once with (η_0, ρ, ε)=(η^+_0,ρ+ϵ,ϵ/20 ) and both times with (ℓ,H,t)=(⌊ (log L)^2⌋, M, s_3). Note once again that for L large enough by (<ref>) and (<ref>), we have M>H>4ν t>densitystableℓ^2ε^-2(1+|log^3(ν t)|) and ℓ'≤ℓ_g^2≤√(t), as required for <ref> to apply. This yields, applying a union bound over the values of ℓ', that for large enough L (recalling that M≤ 20(ν+1)L),
ℚ(E_10^c) ≤ 8M⌊ (log L)^2⌋exp(-densitystableexpoϵ^2 ⌊log^2 L⌋ /400)≤exp(-ℓ_g^1500).
Finally, recalling the coupling (<ref>) (together with (<ref>), (<ref>) and (<ref>)), we have by <ref>, more precisely by (<ref>), for large enough L,
ℚ(E_9^c | E_1-8∩ E_10)≤ 20Texp(-ν(T-s_3)/4)≤exp(-ℓ_g^10^5).
Putting together (<ref>), (<ref>), (<ref>) and (<ref>), we obtain (<ref>), as desired.
Proof of (<ref>). We bound individually the six terms comprising S on the left-hand side of (<ref>). The first of these is already controlled by (<ref>). Recall the coupling defined in (<ref>) together with the definitions of E_1, E_2 and E_2,bis in (<ref>), (<ref>) and (<ref>). Observe that by (<ref>) and <ref>, we have ℚ-a.s. on E_1∩ E_2^c that ℚ(E_2,bis^c| ℱ_s_0)≤ 20exp( -ν (T-s_0)/4 ). Hence for large enough L, by (<ref>) and (<ref>), this yields
ℚ(E_1∩ E_2^c∩ E_2,bis^c)≤exp(-ℓ_g^10^5).
Recall now the coupling (<ref>) and the definition (<ref>) of E_3. By (<ref>) which is in force, we have
ℚ(E_1-2∩ E_3^c)≤ 20exp(-νℓ_g ⌊ s_1/ℓ_g ⌋ /4 )≤exp(-ℓ_g^10^5),
for L large enough due to (<ref>) and (<ref>). Next, remark that
E_1-3∩ E_4-5^c∩ E_4,bis^c ⊆{E_1-4∩ E_5^c }∪{E_1-3∩ E_4^c∩ E_4,bis^c }.
On one hand, by (<ref>) and <ref> (recalling (<ref>)) we have
ℚ(E_1-4∩ E_5^c)≤ 20exp(-ν (s_2-s_1)⌊ s_1/(s_2-s_1)⌋/4).
On the other hand, by (<ref>) and <ref> (recalling (<ref>)), we have
ℚ(E_1-3∩ E_4^c∩ E_4,bis^c)≤ 20exp(-ν (T-s_1)/4).
Together, (<ref>), (<ref>) and (<ref>) yield that
ℚ(E_1-3∩ E_4-5^c∩ E_4,bis^c)≤exp(-ℓ_g^10^5),
for L large enough, again via (<ref>) and (<ref>). Similarly, by (<ref>), <ref> and (<ref>), for L large enough we have that
ℚ(E_1-5∩ E_6-7^c∩ E_6,bis^c)≤exp(-ℓ_g^10^5).
In view of (<ref>), putting together (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) yields that
S≤ℚ(E_1-7∩ E_8-9^c)+5exp(-ℓ_g^10^5).
By (<ref>), which has been already established, we know that 5exp(-ℓ_g^10^5)≤ℚ(E_1-9)/(20T) for L large enough by (<ref>). Hence, in order to conclude the proof, it is enough to show that
ℚ(E_1-7∩ E_8-9^c)≤ℚ(E_1-9)/(20T), or equivalently that
ℚ(E_8-9 | E_1-7)≥ 1-(20T)^-1.
Indeed,
ℚ(E_8-9 | E_1-7)= ℚ(E_8 | E_1-7)ℚ(E_9 | E_1-8)=ℚ(E_8 | E_1-7)ℚ(E_1-9 | E_1-8∩ E_10)ℚ(E_1-8∩ E_10)/ℚ(E_1-8)
≥ℚ(E_8 | E_1-7)ℚ(E_1-9 | E_1-8∩ E_10)(1- ℚ(E_10^c)/ℚ(E_1-9))
≥(1-exp(-ℓ_g^4))(1-exp(-ℓ_g^10^5))(1-exp(-ℓ_g^1500)exp(ℓ_g^22)),
by virtue of (<ref>), (<ref>), (<ref>) and (<ref>) in the last line. For large enough L (recalling (<ref>) and (<ref>)), this readily yields (<ref>) and concludes the proof of (<ref>).
Part V: Proof of (<ref>).
Recall the event E_restart from (<ref>). The proof of (<ref>) is relatively straightforward at this point, we just need to keep careful track of those events in the above construction that can force us out of the event E_restart; this is key to get the rapid decay in (<ref>). First note that E_neutral⊆ E_2,bis∪ E_4,bis∪ E_6,bis by (<ref>), so that by (<ref>), (<ref>), (<ref>) and (<ref>) (and using that |Y_T^±| ≤ T along with (<ref>)), E_restart holds on E_neutral for large enough L (tacitly assumed in the sequel), hence E_restart^c can only happen on E_good∪ E_bad. As we now explain, looking at the decomposition of E_good and E_bad at (<ref>) and (<ref>), and inspecting closely the construction η^± (and Y^±), especially in Part I of the proof, starting with the paragraph of (<ref>) until (<ref>), one notices that E_restart^c can in fact only happen:
(i) on all but the last event (i.e. all but E_1-7∩ E_8-9^c) defining E_bad at (<ref>), or
(ii) on E_1-7∩ E_10^c (see (<ref>)), or
(iii) on E_1-7∩ E_8^c∩ E_10 if the coupling <ref> applied at (<ref>) fails, i.e. if the event
E_11def.={η^+_T (· -T)|_[-M+T+4ν T,M-T-4ν T ]≽η^-_ T (· +T)|_[-M+T+4ν T,M-T-4ν T ]}
(cf. (<ref>)) does not occur, or
(iv) on E_1-8∩ E_9^c∩ E_10, or
(v) on E_1-10 if the coupling <ref> (more precisely (<ref>)) applied at (<ref>) fails, i.e. if the event
E_12def.={η^+_ T (· +Y^-_s_1+ℓ_g^20)|_I≽η^-_ T (· +Y^-_s_1-ℓ_g^20)|_I }
with I= [-M+T+6ν T,M-T-6ν T ] does not occur.
We now detail how the cases (i)-(v) arise. First note that, since E_restart⊂ (E_bad∪ E_good) as established above, and since E_good= E_1-9 by definition (see (<ref>)), after discarding item (i) from the above list it only remains to investigate matters on the event E_1-7, and the cases considered in items (ii)-(v) indeed form a partition of this event, save for the additional specifications (“if the coupling...”) in items (iii) and (v), which we now discuss. For item (iii), note indeed that if E_1-7∩ E_8^c∩ E_10⊆ E_bad holds (see (<ref>)) then Y^+_T=Y^-_T-2T by Lemma <ref>, and that since | Y^-_T|≤ T, if in addition E_11 holds then so does E_restart in view of (<ref>). Similarly, regarding item (v), we have by (<ref>) that E_1-10⊆ E_good so that if E_1-10 holds, Y^+_T=Y^-_T+2ℓ_g^20 by Lemma <ref>. Since | Y^-_T-(Y^-_s_1-ℓ_g^20)|≤ T-s_1+ℓ_g^20≤ T, if in addition E_12 occurs then E_restart occurs as well.
Combining items (i)-(v) above and recalling (<ref>) in the context of item (i), by a union bound we have that
ℚ(E_restart^c)≤ℚ(E_1^c) +ℚ(E_1∩ E_2^c∩ E_2,bis^c)
+ℚ(E_1-2∩ E_3^c) +ℚ(E_1-3∩ E_4-5^c∩ E_4,bis^c)
+ℚ(E_1-5∩ E_6-7^c∩ E_6,bis^c) + ℚ(E_10^c)
+ℚ(E_1-8∩ E_9^c∩ E_10)+ℚ(E_1-7∩ E_8^c∩ E_10∩ E_11^c)+ℚ(E_1-8∩ E_10∩ E_12^c).
By (<ref>), (<ref>), (<ref>), (<ref>), (<ref>),
(<ref>) and (<ref>), for large enough L we obtain that
ℚ(E_restart^c)≤ 7exp(-ℓ_g^1500)+ℚ(E_1-7∩ E_10∩ E_8^c∩ E_11^c)+ℚ(E_1-8∩ E_10∩ E_12^c).
As to the last two terms in (<ref>), using the coupling defined in (<ref>) and <ref>, we get
ℚ(E_11^c | E_1-7∩ E_8^c∩ E_10)≤ 2C_3 TMexp(-(2C_3(1+ν^-1))^-1ϵ^2(T-s_3)^1/4)
≤ 2C_3ℓ_g^10^6· 20(ν+1) L ·exp(-(2C_3(1+ν^-1))^-1ϵ^2ℓ_g^2· 10^5) ≤exp(-ℓ_g^10^5),
where we used (<ref>)-(<ref>), the fact that M≤ 20(ν +1) L and took L large enough.
Similarly, using the coupling defined in (<ref>) and <ref> (more precisely (<ref>)), we obtain that
ℚ(E_12^c | E_1-8∩ E_10)≤ 5· 2C_3ℓ_G^1/20 Mexp(- (2C_3(1+ν^-1))^-1ϵ^2 ℓ_G^1/100)≤exp(-ℓ_g^2· 10^3).
Finally, putting (<ref>), (<ref>) and (<ref>) together, and using (<ref>) with L large enough, leads to ℚ(E_restart^c)≤ L^-100.
This concludes the proof of (<ref>) and thus of Lemma <ref>, taking L_2 large enough so that (<ref>)-(<ref>) (which represent a finite number of constraints) hold.
In the coupling ℚ constructed in Lemma <ref>, U is a priori not independent from (η^+,η^-); for instance a synchronous evolution of η^+(·+2ℓ_g^20) and η^- after s_3 could indicate that E_6, which depends on U during [s_1,s_2], has happened.
§.§ Proof of Proposition <ref>
Let L≥ 1 be an integer satisfying L≥ L_2(ρ,ϵ,ν), where L_2 is given by Lemma <ref>. Let k:=⌊ L/T⌋, with T as defined in (<ref>). We start with a brief overview of the proof. To deduce (<ref>), we will couple two walks X^+∼ℙ^ρ+ϵ and X^-∼ℙ^ρ on the time interval [0,iT], 1≤ i ≤ k, recursively in i. The processes X^± will be specified in terms of associated environments η̂^+∼𝐏^ρ+ϵ, η̂^-∼𝐏^ρ, and an i.i.d. array U=(U_w)_w∈𝕃 using Lemma <ref> repeatedly, cf. (<ref>)-(<ref>). We will denote by ℚ the associated coupling measure defined below, which will also comprise auxiliary walks Y^± that are defined using the construction of Y^± in Section <ref>, iterated over i, and which allow us to keep control on the gap between X^+ and X^- via a combination of (<ref>)-(<ref>). The very possibility of iteration is guaranteed by the high-probability event E_restart in (<ref>).
We now proceed to make the above precise. For later reference we set M_i=20(ν+1) L-(8ν+3)iT, for i∈{0,…,k-1}. We further recall the filtration ℱ_t defined in (<ref>), which below is understood with hats added to all processes involved. Finally, for all i∈{0,…,k-1}, let
B_idef.=⋂_j=0^i { (η̂^+_jT(· +Y^+_jT),η̂^-_jT(· + Y^-_jT)) is (M_j,L)-balanced}
(see items (i)-(iii) above Lemma <ref> for notation).
By successive extensions, we will construct a coupling ℚ such that the following hold for all L ≥ L_2 (as supplied by Lemma <ref>) and i∈{0,…, k}:
(ℚ-1) The processes (η̂^+_t, η̂^-_t)_0≤ t≤ iT, (U_w)_w∈𝕃, π_2(w)≤ iT-1 (absent when i=0) and (X^±_t)_0≤ t≤ iT are defined under ℚ with the correct marginal laws. That is, η̂^+_0∼μ_ρ+ϵ, η̂^-_0∼μ_ρ and (η̂^±_t)_0< t≤ iT has the same law as the restriction to [0,iT] of η^± under 𝐏^η̂_0^±. Moreover, (U_w)_w∈𝕃, π_2(w)≤ iT-1 are i.i.d. uniform variables on [0,1], and given η̂_0^±, (X^±_t)_0≤ t≤ iT is ℱ_iT-measurable and has the law of the restriction to [0,iT] of X^± under ℙ^η̂_0^±.
(ℚ-2) The processes (Y^±_t)_0≤ t≤ iT are ℱ_iT-measurable and X^-_iT≤Y^-_iT and X^+_iT≥Y^+_iT hold ℚ-a.s.
(ℚ-3) Y^+_0= Y^-_0=0, and if i ≥1, with B_i as in (<ref>),
𝔼^ℚ[(Y^+_iT- Y^+_(i-1)T)-(Y^-_iT- Y^-_(i-1)T) | ℱ_(i-1)T] 1_B_i-1≥exp(-(log L)^1/20).
(ℚ-4) ℚ(B_0^c)=0 and if i ≥ 1, ℚ(B_i^c | B_i-1) ≤ L^-100.
For i=0, we simply couple under ℚ two configurations η̂^+_0∼μ_ρ+ϵ and η̂^-_0∼μ_ρ such that η̂^+_0(x)≥η̂^-_0(x) for all x∈ℤ almost surely, which we can do by (<ref>). We set X^+_0=X^-_0=Y^+_0=Y^-_0=0. Thus (ℚ-1) and (ℚ-2) are satisfied, and (ℚ-3) is trivial. Finally ℚ(B_0^c)=0 since η̂^±_0 are in particular (M_0,L)-balanced, whence (ℚ-4) holds.
Assume by induction that for some i∈{0,…, k-1}, we have constructed a coupling with the above properties. We now proceed to extend so as to have (-1)-(-4) with (i+1) in place of i.
We first specify matters on the event B_i^c. Conditionally on ℱ_iT, if B_i^c occurs, we let η̂^+_t and η̂^-_t evolve independently for iT< t≤ (i+1)T according to 𝐏^η̂^+_iT and 𝐏^η̂^-_iT respectively, and independently of this, we choose U_w as uniform random variables on [0,1] in an i.i.d. manner, for w such that iT≤π_2(w)≤ (i+1)T -1. On B_i^c, we further let Y^+_iT+t=Y^+_iT-t and Y^-_iT+t=Y^-_iT+t, for all 0 < t≤ T and (X^±_t)_iT≤ t≤ (i+1)T evolve as in (<ref>) and (<ref>) with (η^±,U) instead of (η,U). With these choices it is clear that the inequalities in (ℚ-2) hold on B_i^c, since for instance Y^+_(i+1)T≤Y^+_iT-T ≤ X^+_iT-T≤ X^+_(i+1)T ℚ-a.s., using the induction hypothesis and the fact that increments of X^+ are bounded from below by -1. The inequality X^-_(i+1)T≤Y^-_(i+1)T is derived similarly.
We now turn to the case that B_i occurs, which brings into play Lemma <ref>.
Conditionally on ℱ_iT and on the event B_i, we couple (x,t)↦η̂^+_iT+t(x+Y^+_iT) and (x,t)↦η̂^-_iT+t(x+Y^-_iT) for x∈ℤ and t∈[0,T], as well as ( U_w+(0,iT): w ∈𝕃, π_2(w)≤ T -1) following the coupling of (η^+,η^-, U) provided by Lemma <ref>, with the choice M=M_i. The requirement of (M,L)-balancedness of the initial condition needed for Lemma <ref> to apply is precisely provided by B_i, cf. (<ref>).
Combining (<ref>), the Markov property (<ref>) applied at time iT, and in view of the choices made on B_i^c, it readily follows that the processes
(η̂^+_t, η̂^-_t)_0≤ t≤ (i+1)T, (U_w)_w∈𝕃, π_2(w)≤ (i+1)T-1 thereby defined have the marginal laws prescribed in (ℚ-1). Moreover, by the above application of Lemma <ref>, the processes X^± and Y^± satisfying all of (<ref>)-(<ref>) are declared. Thus, setting
X^±_iT+t=X^±_iT+X^±_t, Y^±_iT+t=Y^±_iT+Y^±_t, for all 0 ≤ t ≤ T,
it readily follows, combining (<ref>) and the induction assumption on the law of
(η̂^±_t)_0≤ t≤ iT, (X^±_t)_0≤ t≤ iT, combined with the Markov property (<ref>) and that of the quenched law, that (X^±_t)_0≤ t≤ (i+1)T declared by (<ref>)
has the desired marginal law, thus completing the verification of
(ℚ-1) with (i+1) in place of i. Next, we show (ℚ-3) and (ℚ-4), before returning to (ℚ-2). Since Y^±_(i+1)T-Y^±_iT = Y^±_T by (<ref>), the inequality (<ref>) with (i+1) in place of i is an immediate consequence of (<ref>). Hence
(ℚ-3) holds. Finally, by construction of the coupling extension on the event B_i, which uses Lemma <ref>, and in view of (<ref>) and
(<ref>), the failure of B_i+1 on the event B_i amounts to the failure of E_restart in (<ref>), from which (ℚ-4)
follows with (i+1) in place of i.
It remains to show that (ℚ-2) holds with (i+1) in place of i. To this effect, we introduce two auxiliary processes (Z^±_t)_0 ≤ t ≤ (i+1)T, defined as
Z^±_t = Y^±_t, if 0 ≤ t ≤ iT,
Z^±_iT+t = Y^±_iT + X_t^±, if 0 < t ≤ T.
Combining the induction assumption (ℚ-2), the definition of Y^±_t and Z^±_t in (<ref>) and (<ref>) (the latter implying in particular that Z^±_iT= Y^±_iT), one readily deduces from property (<ref>) the ℚ-almost sure inequalities
Z^-_(i+1)T≤Y^-_(i+1)T and Y^+_(i+1)T≤Z^+_(i+1)T.
To deduce from this the analogous inequalities with X in place of Z, we apply Lemma <ref>, with (X, X)= (Z^+, X^+), η= η̃= η^+ (whence (<ref>) plainly holds), π_1(w')= Z^+_iT= Y^+_iT, π_1(w)= X^+_iT, π_2(w)=π_2(w') and K= ℤ×[iT, (i+1)T] to deduce that Z^+_(i+1)T≤X^+_(i+1)T holds ℚ-a.s. Note that the condition
π_1(w')≤π_1(w) necessary for Lemma <ref> to apply is in force by the induction hypothesis in (ℚ-2). Together with (<ref>), this yields one of the desired inequalities in (ℚ-2) with i+1 in place of i. The other one is obtained in a similar way using Lemma <ref>. This completes the proof of the induction step.
We now use the coupling ℚ, which satisfies
(ℚ-1)-(ℚ-4) for all 0 ≤ i ≤ k (and L ≥ L_2), to complete the proof of (<ref>).
To this end, we first extend the laws of X_t^± to all t >kT using the Markov property, by sampling X^±_kT+ · independently conditionally on ℱ_kT. In particular, recalling that k=⌊ L/T⌋, this implies that X_L^± is declared under ℚ. We thus proceed to derive a suitable lower bound on 𝔼^[X_L^+-X_L^-], which is well-defined, and from which
(<ref>) will follow.
Using that |X_n+1^± - X_n^±| ≤ 1 for any n ≥ 0, we obtain (with k=⌊ L/T⌋) that
𝔼^ℚ[X_L^+-X_L^-] + 2T ≥𝔼^ℚ[X_L^+-X_L^-] + 2(L-kT) ≥𝔼^ℚ[ X^+_kT-X^-_kT]
(ℚ-2)≥𝔼^ℚ[ Y^+_kT-Y^-_kT]
≥∑_1≤ i ≤ k𝔼^ℚ[ (Y^+_iT- Y^+_(i-1)T)-(Y^-_iT- Y^-_(i-1)T) ]
(ℚ-3)≥ ke^-(log L)^1/20 - 2T ∑_1 ≤ i ≤ kℚ(B_i-1^c) ≥L/2Te^-(log L)^1/20 - 2Lℚ(B_k-1^c),
for large enough L, where, in the second inequality of the second line, we have used that Y^±_0=0, see (ℚ-3), and in the last line, we have first used that the difference of increments is deterministically bounded from below
by -2T (see (<ref>) and Lemma <ref>), and for the last inequality that the events B_i^c are increasing in i, cf. (<ref>). We also used in various places that L-T≤ kT≤ L.
It remains to suitably estimate the probability ℚ(B_k-1^c) appearing in the last line of (<ref>), for which we use (ℚ-4) and a union bound. In order to do this, let us define the event
B=⋂_(y,j) {η̂^+_j([y, y+⌊(log L)^2⌋-1] ) ≥ (ρ+99ϵ/100) ⌊(log L)^2⌋}
∩{η̂^-_j([y , y+⌊(log L)^2⌋-1] ) ≤ (ρ+ϵ/100) ⌊(log L)^2⌋},
where the intersection is taken over (y,j) ∈ [-20(ν+1)L,20(ν+1)L]× [0,L].
Note that ℚ(B_0^c∩ B)=0 by our initial coupling of η̂^+_0 and η̂^-_0. Thus we have
ℚ(B_k-1^c)≤ℚ(B^c)+∑_i=1^k-1ℚ(B_i^c | B_i-1∩ B).
On one hand, (<ref>) yields (for L large enough):
max_1≤ i≤ k-1ℚ(B_i^c | B_i-1∩ B)≤ L^-100.
On the other hand, by <ref> applied with (η_0,ρ,ε,ℓ)=(η̂^+_j(· +y),ρ+ϵ,ϵ, ⌊ (log L)^2⌋) and with (η_0,ρ,ε,ℓ)=(η̂^-_j(· +y),ρ,ϵ, ⌊ (log L)^2⌋) for (y,j)∈ [-20(ν+1)L, 20(ν+1)L]× [0,L], and by a union bound over all such (y,j), we have
ℚ(B^c)≤ 4(40(ν+1)L+1)(L+1)exp(-densitydev⌊ (log L)^2⌋/10000) ≤ L^-100,
for L large enough. Injecting (<ref>) and (<ref>) into (<ref>), we obtain
ℚ(B_k-1^c)≤ L^-100+ (k-1)L^-100≤ L^-99
for L large enough. Using the above, together with (<ref>) and (<ref>), we have that
𝔼^ℚ[X_L^+-X_L^-] ≥L/2Te^-(log L)^1/20 - 2L^-98 -2T ≥ 3C_approx (log L)^100,
as soon as L is large enough (recall T from (<ref>)). Dividing by L and applying (<ref>) whilst observing that X_L^+ has the same law under ℚ as X_L under ℙ^ρ+ϵ, L and X_L^- has the same law under ℚ as X_L under ℙ^ρ, L, (<ref>) implies that
v_L(ρ+δ)-v_L(ρ)≥3C_approx L^-1 (log L)^100,
which concludes the proof of Proposition <ref>.
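The final comparison is pure bookkeeping: T is only poly-logarithmic, so L/(2T) is L^1-o(1), while exp(-(log L)^1/20)=L^-o(1); their product dwarfs (log L)^100 (and indeed any power of log L). In logarithms (Python; the value of log L is illustrative):

```python
import math

log_L = 1e6                                    # stands for log L
log_T = math.log(5) + 1000 * math.log(log_L)   # T = 5*floor((log L)^1000)
# log of (L/2T) * exp(-(log L)^{1/20}):
log_gain = log_L - math.log(2) - log_T - log_L ** (1 / 20)
log_target = 100 * math.log(log_L)             # log of (log L)^100
assert log_gain > log_target
print(log_gain, ">", log_target)
```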
As mentioned at (<ref>), we prove in fact a much stronger statement than Proposition <ref>, owing to (<ref>), namely that
v_L(ρ+δ)-v_L(ρ)≥ L^o(1)
where the o(1) denotes a negative quantity that goes to 0 as L→∞. This is however insufficient to imply directly Theorem <ref>, and the renormalisation in Section <ref> is still essential to improve (<ref>) to a right-hand side bounded away from 0 as L→∞.
[Necessity for couplings with quenched initial condition]
We explain here the main reason why we need quenched conditions for our couplings. In short, this is due to the lack of an invariant measure (or reasonable proxy thereof) from the point of view of the walk.
More precisely, abbreviating t=s_2-s_1, the only a-priori lower bound we have for the probability that X^ρ and X^ρ+ε drift away linearly from each other during [s_1,s_2] (corresponding to ℚ(E_6) at (<ref>)) is exp(-Ct ) for some large constant C by uniform ellipticity – we are in fact precisely trying to derive a better bound in this section.
But this gap is necessary to create a difference between X^ρ+ϵ and X^ρ on a time interval of length T. Hence, to accrue a significant gain in expectation of X^ρ+ϵ-X^ρ, we need to repeat this at least exp(Ct) times. Thus, during that time, X^ρ and X^ρ+ϵ could straddle an interval of width at least exp(Ct). The main issue is that we have no a priori information on their local environment (which would not be the case if we had access to an invariant measure and could estimate the speed of convergence to it).
Hence, if the coupling <ref>, which we use between s_1 and s_2 on an interval much narrower than exp(Ct), were only valid under the annealed product Bernoulli initial condition (which is a priori not what the walk sees), we could resort to the annealed-to-quenched trick (via Markov's inequality) and a union bound over exp(Ct) intervals to control the probability that the coupling fails. However, the failure probability at <ref> is exp(-C't^1/4) ≫exp(-Ct), hence we cannot obtain any non-trivial bound this way. Note that this does not depend on the choice of t. Furthermore, due to the diffusivity of the environment particles and large deviation considerations, it seems unlikely that one could improve the bound of <ref> beyond exp(-Ct^1/2).
This is why we resort to some quenched control, cf. items (i)-(iii) above Lemma <ref>, and also (<ref>). The empirical density was the most accessible and relevant statistic (in particular if the environment is conservative, as is the case of SEP). For similar reasons, we had to establish <ref> in a quenched setting, to ensure that whatever the distribution of the environment around the walker at time iT for some i≥ 1 (as long as it is balanced), there is still a uniformly low probability not to have the required empirical density at time s_1 (see (<ref>)) to perform the coupling of <ref>. Of course, the necessity for quenched couplings, encapsulated in <ref> and in particular <ref>, means that we have (more) work to do in order to verify this in specific instances, as we do for SEP in the next section.
§ EXCLUSION PROCESS AND COUPLINGS
We start by giving in <ref> a formal definition of the main environment η of interest in this article, the symmetric simple exclusion process (SEP), and first check that it fits the setup of <ref>, in particular, that the basic properties (<ref>)–(<ref>) listed in <ref> hold. The main result of this section, proved in <ref>, is to show that the SEP satisfies the conditions <ref>-<ref> stated in <ref>; see Proposition <ref> below. This implies that our main result, Theorem <ref>, applies in this case; cf. also Theorem <ref> and its proof in <ref>. We refer to Appendix <ref> for another environment η of interest which fits this framework.
§.§ Definition of SEP and basic properties
We fix a parameter ν>0, which will be constant throughout this section and often implicit in our notation. The (rate ν) symmetric simple exclusion process (SEP) is the Markov
process on the state space {0,1}^ℤ (tacitly viewed as a subset of Σ, cf. <ref>) with (pre-)generator
L f(η)=∑_x,y∈ℤ: |x-y|=1 1_{η(x)=1,η(y)=0}ν/2(f(η_xy)-f(η)),
for η∈{0,1}^ℤ and f in the domain of L, where η_xy is the configuration obtained from η by exchanging the states of x and y, i.e. such that η_xy(x)=η(y), η_xy(y)=η(x) and η_xy(z)=η(z) for all z∈ℤ∖{x,y}; see <cit.>. We denote by 𝐏_SEP^η_0 its canonical law with initial configuration η_0 and drop the subscript SEP whenever there is no risk of confusion. In words, (<ref>) entails that the vertices x such that η(x)=1, which can be seen as the locations of particles, evolve like continuous-time symmetric simple random walks on ℤ with rate ν that obey the exclusion rule; that is, particles are only allowed to jump onto empty locations.
It will often be useful to consider the interchange process on ℤ, with generator L defined as in (<ref>) but omitting the exclusion constraint {η(x)=1,η(y)=0}, which interchanges the state of neighbors x and y independently at rate ν/2. We will use the following specific construction of this process.
Let E={{x,x+1}: x ∈ℤ} denote the set of edges on ℤ and 𝐏 be a probability governing independent Poisson counting processes 𝒫_e of intensity ν/2 on ℝ_+ attached to every edge e ∈ E. For any given η_0∈{0,1}^ℤ, one defines (η_t)_t ≥ 0 under 𝐏 by exchanging the states of η at x and y every time the `clock rings' for 𝒫_e, where e={x,y}. This is well-defined up to a set of measure zero. Then for every η_0 ∈{0,1}^ℤ,
(η_t)_t ≥ 0 has the same law under 𝐏 and 𝐏_SEP^η_0.
This follows upon observing that exchanging the states of two neighboring sites has no effect when the exclusion constraint fails (i.e. when both sites are occupied or both are empty). For our purpose, these two processes are equivalent, but note that they differ when one distinguishes the particles of the system (for instance when studying the motion of a tagged particle).
A useful feature of this alternative description is the following. A particle trajectory of the interchange process is obtained by following the trajectory of a state x ∈ℤ such that η_0(x)=1 (a particle) under
𝐏. We won't define this formally, but roughly speaking, if e and e' are the two edges incident on x, one waits until the minimum of the first arrival times of these two processes (which is an exponential variable with parameter ν) and jumps across the corresponding edge. Then one repeats this procedure. In particular, it immediately follows that
for each x such that η_0(x)=1, the particle trajectory of x under 𝐏 follows the law of a continuous time simple random walk with jump rate ν.
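For readers who prefer a computational rendering, the following is a minimal simulation sketch of the graphical construction just described (in Python; the restriction to a finite segment, the function name and all parameter values are our own illustrative choices, not part of the construction above). It builds the interchange process from independent Poisson clocks on edges, and also checks the order-preserving property of running two ordered initial configurations with the same clocks, which is used repeatedly below.

```python
import random

def simulate_interchange(eta0, nu, t_max, seed):
    """Graphical construction of the interchange process on the finite
    segment {0, ..., L-1} (a truncation chosen purely for illustration;
    the process of interest lives on Z): each edge {x, x+1} carries an
    independent Poisson clock of rate nu/2, and when a clock rings the
    states at the two endpoints of its edge are exchanged.  Restricted
    to {0,1}-valued configurations this has the same law as the SEP."""
    rng = random.Random(seed)
    eta, L = list(eta0), len(eta0)
    rings = []
    for e in range(L - 1):                 # edge e = {e, e+1}
        s = 0.0
        while True:
            s += rng.expovariate(nu / 2.0)
            if s > t_max:
                break
            rings.append((s, e))
    rings.sort()                           # apply exchanges chronologically
    for _, e in rings:
        eta[e], eta[e + 1] = eta[e + 1], eta[e]
    return eta

if __name__ == "__main__":
    rng = random.Random(1)
    L, nu, t = 100, 1.0, 10.0
    eta0 = [1 if rng.random() < 0.5 else 0 for _ in range(L)]
    # remove some particles to obtain an initial condition eta0' <= eta0
    etap0 = [eta0[x] if rng.random() < 0.7 else 0 for x in range(L)]
    # same seed = same clocks: the monotone coupling discussed below
    eta_t = simulate_interchange(eta0, nu, t, seed=2)
    etap_t = simulate_interchange(etap0, nu, t, seed=2)
    assert sum(eta_t) == sum(eta0)         # exchanges conserve particles
    assert all(a <= b for a, b in zip(etap_t, eta_t))  # order preserved
    print("particles:", sum(eta_t), "- partial order preserved")
```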
Recalling the properties (<ref>)–(<ref>) from <ref>, we first record the following fact.
With
μ_ρ = ((1- ρ) δ_0 + ρδ_1 )^⊗ℤ, ρ∈ J def.= (0,1),
the measures (𝐏^η_0: η_0 ∈{0,1}^ℤ) with 𝐏^η_0=𝐏_SEP^η_0 and (μ_ρ: ρ∈ J) satisfy all of (<ref>)–(<ref>).
Property (<ref>) is classical, see <cit.> along with Example 3.1(d), p.21 of the same reference. So is (<ref>), i.e. the stationarity of the measure μ_ρ in (<ref>), see <cit.>. The required coupling (for two given initial configurations η_0' ≼η_0) needed to verify the quenched monotonicity asserted in (<ref>),i) is simply obtained by realizing the processes started from η_0' and η_0 under the auxiliary measure 𝐏, using the same Poisson processes (𝒫_e). In particular this yields a coupling over all possible initial configurations, including η_0 and η_0', and this coupling is seen to preserve the partial order η_0' ≼η_0 for all t>0. The monotonicity in (<ref>),ii) is classical. Lastly, upon observing that η_0([0,ℓ-1]) is a binomial random variable with parameters ℓ and ρ under 𝐏_ρ, property (<ref>) is obtained by combining (<ref>) and (<ref>), which are well-known large deviation estimates.
In anticipation of <ref>, we now collect a simple lemma to bound the linear deviations of an SEP particle, which we will routinely use in our couplings below.
Let Z=(Z_t)_t≥ 0 denote a simple random walk on ℤ with jump rate ν, starting from 0 at time 0, with law denoted by P. For all t>0 and k,a∈ℕ:
P(max_0≤ s≤ t| Z_s|≥ 2kν t+a)
≤ P(Z makes more than 2kν t+a jumps during [0,t])≤ e^-(2kν t+a)/8.
The first inequality is immediate since Z only performs nearest-neighbor jumps. As for the second one, remark that the number N of jumps performed by Z during [0,t] is a Poisson random variable with parameter ν t. Hence by (<ref>) applied with λ =ν t and x=(2k-1)ν t +a≥ (2kν t+a)/2> 0, we obtain that
P(N≥ 2kν t+a)≤exp( -((2k-1)ν t+a)^2/2(2kν t+a))≤exp (-(2kν t+a)/8),
and the conclusion follows.
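As a quick numerical sanity check of the bound just proved, one can compare the exact Poisson tail with the right-hand side of the display above; a small Python script (the parameter values are arbitrary illustrative choices):

```python
import math

def poisson_tail(lam, m):
    """P(Poisson(lam) >= m), via 1 - P(X <= m - 1) (numerically fine
    for the small parameters used here)."""
    term = math.exp(-lam)                  # pmf at 0
    cdf = term
    for k in range(1, m):
        term *= lam / k
        cdf += term
    return max(0.0, 1.0 - cdf)

if __name__ == "__main__":
    nu, t, k = 1.0, 20.0, 1                # arbitrary illustrative values
    for a in [0, 5, 10, 20]:
        m = int(2 * k * nu * t + a)        # threshold 2*k*nu*t + a
        lhs = poisson_tail(nu * t, m)
        rhs = math.exp(-(2 * k * nu * t + a) / 8.0)
        assert lhs <= rhs
        print(f"a={a}: P(N >= {m}) = {lhs:.3e} <= e^(-m/8) = {rhs:.3e}")
```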
§.§ Conditions <ref>–<ref> for SEP
We now proceed to verify that the conditions introduced in <ref> all hold for the exclusion process introduced in <ref>, as summarized in the next proposition. Its proof occupies the bulk of this section. These properties (above all, <ref> and <ref>) are of independent interest.
For (𝐏^η_0: η_0 ∈{0,1}^ℤ) with 𝐏^η_0 =𝐏^η_0_SEP, J = (0,1) (cf. (<ref>)), and with ν as appearing in (<ref>), all of <ref>, <ref>, <ref>, <ref> and <ref> hold.
This follows directly by combining Lemmas <ref>, <ref>, <ref>, <ref> and <ref> below.
We now proceed to investigate each of the relevant conditions separately. Throughout the remainder of this section, we work implicitly under the assumptions of Proposition <ref>. In particular, in stating that some property P `holds for SEP,' we mean precisely that P is verified for the choice (𝐏^η_0: η_0 ∈{0,1}^ℤ) with 𝐏^η_0 =𝐏^η_0_SEP for all ρ∈ J = (0,1), and with ν the rate parameter underlying the construction of SEP.
The condition <ref> holds for SEP, with no restriction on the choice of ℓ'≥ 1.
We give a brief overview of the proof. A key idea is to exploit the fact that SEP particles, although not independent, are in fact `more regularly' spread out than a bunch of independent random walks (starting from the same initial positions), which is due to the inherent negative association of the SEP. This fact is implicit in the bound (<ref>) below, which is borrowed from <cit.> (itself inspired from <cit.>), the proof of which uses in a crucial way an inequality due to Liggett, see <cit.>, encapsulating this property. With this observation, it is enough to argue that after time t ≫ℓ^2 (where ℓ is the precision mesh of the empirical density of the initial configuration, as in <ref>), random walks have diffused enough to forget their initial positions and average their density, which follows from classical heat kernel estimates.
We focus on proving the upper inequality (i.e. when each η_0(I)≤ (ρ+ε)ℓ), and comment where necessary on the minor adjustments needed to derive the other inequality in the course of the proof.
Let ρ,ε∈ (0,1), H,ℓ,t≥ 1 and η_0 be such that the conditions of <ref> hold. We will in fact show <ref> for an arbitrary value of ℓ'≥ 1 although the restriction to ℓ' ≤ℓ is sufficient for later purposes (note that the statement is empty if ℓ'>2(H-2ν t)). Thus, let ℓ'≥ 1 and ℐ denote the set of intervals of length ℓ' included in [-H+2ν t, H-2ν t], and define
η̅_t(I)= ∑_x∈ I𝐏^η_0(η_t(x)=1), I ∈ℐ,
the expected number of occupied sites of I at time t ≥ 0. Note that η̅_0(I)=η_0(I).
By Lemma 2.3 of <cit.> (see also Lemma 5.4 of <cit.>) one knows that for all t ≥ 0 and suitable densitystableexpo∈ (0,∞),
𝐏^η_0(η_t(I)≥η̅_t(I)+εℓ')≤exp(-densitystableexpoε^2ℓ')
(in fact the bound (<ref>) holds for any initial configuration η_0). Thus, if
max_I∈ℐη̅_t(I) ≤ (ρ+2 ε)ℓ',
under our assumptions on η_0 and t,
then (<ref>), a union bound over ℐ, and the upper bound on the maximum in (<ref>) yield that
𝐏^η_0(max_I∈ℐη_t(I)>(ρ+3ε)ℓ')≤ 2H exp(-densitystableexpoε^2ℓ').
A companion inequality to (<ref>) can be deduced in a similar way using the lower bound on the minimum in (<ref>) and exploiting symmetry, i.e. rewriting {η_t(I)< (ρ-3ε) ℓ'} = {ξ_t(I)> ((1-ρ)+3ε) ℓ'} where ξ_t=1-η_t, while observing that (ξ_t)_t ≥0 has law 𝐏^ξ_0, cf. (<ref>). The conclusion <ref> then follows.
Therefore, we are left with showing (<ref>). In view of (<ref>), it is enough to prove that for every x∈ [-H+2ν t, H- 2ν t], and under our assumptions on η_0 and t,
𝐏^η_0(η_t(x)=1) ≤ρ +2ε.
Let (Z_s)_s≥ 0 denote a continuous-time symmetric simple random walk on ℤ with jump rate ν, defined under an auxiliary probability P, and let
p_s(x,y)=P(Z_s=y | Z_0=x ), for x,y∈ℤ and s≥ 0 denote its transition probabilities, which are symmetric in x and y. Writing {η_t(x)=1} as the disjoint (due to the exclusion constraint) union over y ∈ℤ of the event that η_0(y)=1 and the particle starting at y is located at x at time t, it follows using (<ref>) that for all x∈ℤ,
𝐏^η_0(η_t(x)=1)= ∑_y∈ℤp_t(y,x) η_0(y)= ∑_y∈ Sp_t(x,y),
where S:={y∈ℤ: η_0(y)=1} and we used reversibility in the last step. Using the rewrite (<ref>), we first show the upper bound in (<ref>). Let us define c_t=C:c-t√(ν tlog(ν t)), where the constant C:c-t>0 will be chosen below. Recalling that x∈ [-H+2ν t, H-2ν t], we cover the sites of [x-c_t ,x+c_t] ⊆ [-H,H] using intervals I_1, … , I_q all contained in this interval and each containing ℓ sites, with q=⌈ (2c_t +1)/ℓ⌉. We may assume that all but the last interval I_q are disjoint. For later reference, we note that | I_r∩ S|≤ (ϱ +ε)ℓ for all 1≤ r≤ q by assumption on η_0. By (<ref>), we have that
𝐏^η_0(η_t(x)=1)≤∑_y∈ℤ∖ [x-c_t,x+c_t] p_t(x,y)+∑_r=1^q∑_y∈ S∩ I_rp_t(x,y)
We will look at the two terms on the right-hand side separately. Using (<ref>), one knows that with N denoting the number of jumps of a continuous time random walk with jump rate ν until time t>0, one has ℙ(A)≥ 1-Ce^-ν t/C with A def.= { 2ν t/3≤ N≤ 4ν t/3}. Combining this with the deviation estimate (<ref>) yields that
∑_y∈ℤ∖ [x-c_t,x+c_t] p_t(x,y)≤ℙ(A^c)+2∑_k=⌈ 2ν t/3⌉^⌊4ν t/3⌋ℙ(N=k) ∑_y'≥ c_tℙ( Bin(k,1/2)=(y'+k)/2)
≤ Ce^-ν t/C+2∑_k=⌈ 2ν t/3⌉^⌊ 4ν t/3⌋ℙ(N=k) exp( -c_t^2/ 16 ν t)
≤ Ce^-ν t/C+2 exp( -C:c-t^2log(ν t)/16)
≤1/ν t≤ε/3,
where we choose C:c-t≥√(11), use that ν t≥densitystable, thus choosing densitystable large enough, and use the conditions from <ref> for the last inequality.
Next, we want to deal with the points in the interval [x-c_t,x+c_t]. Recall that a continuous-time simple random walk with rate ν at time t has the same law as a rate 1 continuous-time simple random walk at time ν t. Hence, by <cit.>, for all x,y,z such that y,z∈ [x-c_t,x+c_t] and |z-y|≤ℓ, using that c_t<ν t/2 for densitystable large enough, with C changing from line to line,
p_t(x,y)/p_t(x,z)≤exp( (|x-z|^2-|x-y|^2)/2ν t+C(1/√(ν t)+(|x-y|^3+|x-z|^3)/(ν t)^2) )
≤exp( Cℓ√(log(ν t))/√(ν t)+C(1+log^3/2(ν t))/√(ν t)) ≤exp( Cε/√(densitystable))≤ 1+ε/3,
where we used that 4ν t>densitystableℓ^2ε^-2log^3(ν t) and ℓ≥1, from <ref>, and chose densitystable large enough. Using <cit.> again, we also have that, as soon as densitystable is large enough, using the conditions from <ref>,
p_t(x,z)≤1/√(ν t)≤ε/3ℓ.
Now, recalling that the intervals I_r are disjoint for 1≤ r≤ q-1 and that they all have cardinality ℓ, using (<ref>) and (<ref>), we have that
∑_r=1^q∑_y∈ S∩ I_rp_t(x,y)
≤∑_r=1^q-1∑_y∈ S∩ I_rp_t(x,y) + ∑_y∈ S∩ I_qp_t(x,y)
≤(1+ε/3)∑_r=1^q-1∑_y∈ S∩ I_rmin_z∈ I_rp_t(x,z) + ∑_y∈ S∩ I_qε/(3ℓ)≤(1+ε/3)(ρ+ε)∑_r=1^q-1|I_r|min_z∈ I_rp_t(x,z) + ε |I_q|/(3ℓ)
≤(1+ε/3)(ρ+ε)∑_r=1^q-1∑_y∈ I_rp_t(x,y) + ε/3≤(1+ε/3)(ρ+ε) + ε/3.
Putting together (<ref>), (<ref>) and (<ref>), and using that ρ+ε≤ 1, we obtain
𝐏^η_0(η_t(x)=1)≤ε/3+(1+ε/3)(ρ+ε) + ε/3≤ρ +2ε.
This yields the upper bound in (<ref>), and the upper inequality in <ref> follows.
For the lower bound, which requires changing (<ref>) to ρ-2ε≤𝐏^η_0(η_t(x)=1), one uses the estimate 𝐏^η_0(η_t(x)=1)≥∑_r=1^q-1∑_y∈ S∩ I_rp_t(x,y) instead of (<ref>) and proceeds similarly as in (<ref>).
The condition <ref> holds for SEP.
Recall the construction in (<ref>) of the SEP using 𝐏. Setting ℚ=𝐏, this yields a natural coupling of η and η' with marginal laws 𝐏^η_0 and 𝐏^η'_0, respectively, for any choice of initial distribution η_0 and η'_0. In words, the coupling ℚ identifies η and η' as interchange processes and uses the same Poisson processes (𝒫_e) on the edges of ℤ. Now if η_0 |_[-H, H]≽η_0' |_[-H, H], then under ℚ, we claim that
{∃ x∈ [-H+2kν t ,H-2kν t ], s∈ [0,t] , η_s(x)<η'_s(x) }⊆⋃_y: η'_0(y)=1, | y| >HE(y),
where E(y) is the event that the particle of η'_0 at y enters the interval [-H+2kν t ,H-2kν t ] before time t.
By (<ref>) with a=| y| -H, we have
ℚ(E(y))≤exp (-(2kν t+| y| -H)/8).
Applying a union bound to (<ref>) and feeding (<ref>) thus yields that the probability of the complement of the event appearing on the left-hand side of (<ref>) is bounded from above by
2 ∑_y=H^+∞ℚ(E(y)) ≤ 2 ∑_y'=0^+∞exp (-y'/8-kν t/4) ≤ 20exp(-kν t/4).
[Locality in <ref>]
For later reference, we record the following locality property of the coupling ℚ constructed in the course of proving Lemma <ref>. Let E_H ⊂ E denote the edges having both endpoints in [-H,H]. Then the above argument continues to work for any specification of clock processes 𝒫_e, 𝒫_e' for e ∉ E_H for η and η', respectively, so long as (𝒫_e)_e ∈ E and (𝒫_e')_e ∈ E end up having the correct law. This observation will be important when several couplings are `concatenated,' as in the proof of Lemma <ref> below (see also Figure <ref>).
The condition <ref> holds for SEP.
We first give a brief overview of the argument. Lemma <ref> corresponds to a quenched version of <cit.> by Baldasso and Teixeira, in which η_0 and η'_0 are sampled under 𝐏^ρ+ε and 𝐏^ρ respectively. We mostly follow their argument. Since fixing the initial environments induces some changes, and because we will use a variation of this proof to show <ref> for the SEP (Lemma <ref>), we detail below the coupling of η and η', seen as interchange processes. In doing so we also clarify an essential aspect of this coupling; see in particular (<ref>)-(<ref>) below.
In a nutshell, the idea is to pair injectively each particle of η' with one of η at a relatively small distance (of order t^1/4), during a relatively long time (of order t^3/4), so that they perform independent random walks until they meet (which happens with large probability since t^3/4≫(t^1/4)^2), after which they coalesce, i.e. follow the same evolution. To make such a matching possible, we ensure that with large probability, η has more particles than η' on each interval of length roughly t^1/4. Within the present quenched framework, this property is now obtained by means of <ref>, which has already been proved; see Lemma <ref>. Since the probability that at least one particle of η' does not get paired, i.e. does not meet its match, is relatively high (only polynomially small in t), we repeat this coupling; see also Figure <ref>.
Let ρ∈ J, ε∈ (0,1), and H,t≥ 1 and η_0,η'_0 satisfy the conditions appearing in <ref>. Recall that
ℓ=⌊ t^1/4⌋ and let τ=ℓ^3.
We abbreviate I_s= [-H+2ν s, H-2ν s] for s ≥ 0 in the sequel.
Let ℚ= 𝐏⊗𝐏', where 𝐏' is a copy of 𝐏, with 𝐏, 𝐏' governing the independent processes (𝒫_e), (𝒫_e'), respectively, cf. above (<ref>). We will define a coupling of (η,η') under ℚ (to be precise, a suitable extension of ℚ carrying additional independent randomness), inductively in i over the time interval (iτ, (i+1)τ], for 0 ≤ i < ℓ.
Suppose (η_[0, iτ], η_[0, iτ]' ) have been declared under ℚ for some 0 ≤ i < ℓ, with the correct marginal laws for η_[0, iτ] and η_[0, iτ]' (the case i=0 of this induction assumption holds trivially). We start by controlling the empirical densities of η and η' at time iτ, which are already defined under ℚ. The original proof of <cit.> uses the stationarity of 𝐏^ρ and 𝐏^ρ+ε and (<ref>), while we resort to <ref> (cf. (<ref>) below). Let
G_1,idef.={ for all I with I ⊂ I_iτ of length ⌊ℓ/2 ⌋≤ |I| ≤ℓ,
one has that η_iτ(I)≥ (ρ+ε/2)|I| ≥η'_iτ(I)}.
Observe that G_1,0 is automatically satisfied by assumption on η_0 and η_0'.
We proceed to define ( η_i τ+t)_0 < t ≤τ and ( η'_i τ+t)_0 < t ≤τ under ℚ.
If G_1,i does not occur, we sample (η'_t+ i τ)_0 < t ≤τ (and in fact for all t>0) as in (<ref>) using the processes (𝒫_e') after time iτ, and similarly for (η_t+ i τ)_0 < t ≤τ using (𝒫_e) instead. Thus in this case η' evolves independently from η from time iτ on.
If on the other hand G_1,i occurs, we proceed as follows.
Let Pai_s ⊂ [-H,H] denote the set
Pai_s= { x ∈ I_s: η_s(x)= η_s'(x)=1 },
so that Pai_i τ is measurable relative to (η_i τ, η_i τ'). We refer to Pai_s as the set of paired particles (at time s). Let Π_s = { x ∈ I_s: η_s (x)=1} and Π_s' = { x ∈ I_s: η_s '(x)=1}. Observe that Pai_s⊂Π_s'. Our goal is to reduce the size of their difference as s= i τ for i=1,2,… and eventually achieve equality when i=ℓ.
To this effect, we first define a matching, i.e. an injective map ψ_i: Π_iτ' →Π_iτ, still measurable relative to (η_i τ, η_i τ'), as follows. The map ψ_i acts as the identity map on Pai_i τ, a subset of both Π_iτ' and Π_iτ, cf. (<ref>). For each x∈Π_iτ' ∖Pai_i τ, ψ_i(x) is a point in Π_iτ∖Pai_i τ at distance at most ℓ from x. As we now briefly explain, owing to the occurrence of G_1,i, this can be achieved in such a way that ψ_i is injective. To see this, first note that one can write I_iτ as a disjoint union of intervals of length |I| ranging in ⌊ℓ/2 ⌋≤ |I| ≤ℓ, as follows. One covers I_iτ with contiguous intervals of length ℓ starting at one boundary, leaving a remaining interval I_r at the other boundary of length less than ℓ. If I_r has length at least ⌊ℓ/2 ⌋, one simply adds it; otherwise, unless I_r is empty, one cuts the penultimate interval into two halves of length at least ⌊ℓ/2 ⌋ each and merges I_r with the last of them. By construction, any of the disjoint intervals I thereby obtained has length ⌊ℓ/2 ⌋≤ |I| ≤ℓ as required, and thus on the event G_1,i, see (<ref>), one knows that η_iτ(I) ≥η'_iτ(I). Since the I's are disjoint and their union is I_iτ, it follows that we can pair injectively each particle of Π_iτ' ∖Pai_i τ with a particle of Π_iτ∖Pai_i τ within the same interval. We now fix any such matching ψ_i and call any two particles (x,ψ_i(x)) ∈Π_iτ' ×Π_iτ matched. We note that |x-ψ_i(x)| ≤ℓ by construction; see the sketch below for a schematic rendering of this partition-and-matching step.
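The following makes the partition-and-matching step concrete (a schematic Python rendering under our own conventions: configurations are 0/1 lists indexed by sites, the interval and ℓ are toy values, and the helper names are hypothetical, not part of the construction above):

```python
def partition_blocks(a, b, ell):
    """Split the integer interval [a, b] into contiguous blocks whose
    lengths all lie between ell // 2 and ell, following the recipe of
    the text (assumes b - a + 1 >= ell // 2)."""
    n = b - a + 1
    q, r = divmod(n, ell)
    blocks = [(a + i * ell, a + (i + 1) * ell - 1) for i in range(q)]
    if r == 0:
        return blocks
    if r >= ell // 2 or q == 0:
        blocks.append((a + q * ell, b))    # leftover block is long enough
    else:
        s, e = blocks.pop()                # halve the last full block and
        mid = s + (e - s) // 2             # merge the leftover into the
        blocks.append((s, mid))            # second half
        blocks.append((mid + 1, b))
    return blocks

def build_matching(eta, etap, a, b, ell):
    """Injective matching psi of the eta'-particles in [a, b] to
    eta-particles at distance <= ell, block by block; it exists on the
    event G_{1,i} (eta has at least as many particles as eta' on every
    block)."""
    psi = {}
    for (s, e) in partition_blocks(a, b, ell):
        unmatched = [x for x in range(s, e + 1)
                     if eta[x] == 1 and etap[x] == 0]
        for x in range(s, e + 1):
            if etap[x] != 1:
                continue
            if eta[x] == 1:
                psi[x] = x                 # already paired: psi = identity
            else:
                assert unmatched, "G_{1,i} fails on this block"
                psi[x] = unmatched.pop()
    assert len(set(psi.values())) == len(psi)        # injectivity
    assert all(abs(x - y) <= ell for x, y in psi.items())  # locality
    return psi

if __name__ == "__main__":
    eta = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    etap = [1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0]
    print(build_matching(eta, etap, 0, len(eta) - 1, ell=4))
```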
The evolution for ( η_i τ+s)_0 < s ≤τ and ( η'_i τ+s)_0 < s ≤τ under ℚ (and on G_1,i) is now prescribed as follows. Both ( η_i τ+s)_0 < s ≤τ and ( η'_i τ+s)_0 < s ≤τ will be realized as interchange processes as in (<ref>), thus it is sufficient to specify the relevant (Poisson) clock processes attached to each edge of ℤ.
Let E_H denote the set of edges of ℤ having both endpoints in [-H,H]. For e ∉ E_H, ( η_i τ+s)_0 < s ≤τ and ( η'_i τ+s)_0 < s ≤τ simply use the clocks of 𝒫_e∘θ_iτ and 𝒫_e'∘θ_iτ, respectively, where θ_s denotes the canonical time-shift of the process by s. It remains to specify the clock processes for e ∈ E_H.
Let 𝒫̄_e def.= 𝒫_e+ 𝒫_e' denote the superposition of the two Poisson processes attached to e. All clock processes attached to edges e ∈ E_H will be defined via suitable thinning of 𝒫̄_e.
First, one orders chronologically all arrivals for the processes 𝒫̄_e∘θ_iτ = ( 𝒫̄_e(s+ i τ))_s ≥ 0 as e ∈ E_H varies (there are countably many such times and they are a.s. different so this is well-defined on a set of full measure). Let σ_0=0 and σ_1,σ_2 etc. denote the chronologically ordered times thereby obtained. By suitable extension, ℚ is assumed to carry a family { X_n: n ≥ 0 } of i.i.d. Bernoulli variables with ℚ(X_n=1)= 1- ℚ(X_n=0)= 1/2. We regard X_n as the label attached to σ_n. Let ℛ_e be the thinned process obtained from 𝒫̄_e by only retaining arrivals with label X_n=1. Then,
η_· + i τ uses the clock process ℛ_e for each e ∈ E_H (and 𝒫_e for each e ∉ E_H).
By elementary properties of Poisson processes and applying (<ref>), it follows that (η_s+ i τ)_0< s ≤τ has the correct conditional law given η_i τ; indeed, by construction, to each edge e of ℤ one has associated independent Poisson processes having the correct intensity.
The definition of ( η'_i τ+s)_0 < s ≤τ is analogous to (<ref>), and the clock process ℛ_e' for e ∈ E_H underlying the definition of ( η'_i τ+s)_0 < s ≤τ is specified as follows. For a particle x ∈Π_iτ' (resp. Π_iτ), let γ_i; ·'(x) (resp. γ_i;·(x)) denote its evolution
under η'_i τ+· (resp. η_i τ+·). Proceeding chronologically starting at n=1, one chooses whether the clock σ_n is retained or not according to the following rule. With e={x,y}∈ E_H denoting the edge of σ_n,
if for some z ∈Π_iτ', γ'_i;σ_n-1(z)= γ_i;σ_n-1(ψ_i(z)) ∈{x,y},
then σ_n is retained iff X_n=1, otherwise iff X_n=0;
here we think of right-continuous trajectories, so γ'_i;σ_n-1(z) is the position of the particle z after the (n-1)-th jump; in fact one could replace each occurrence of σ_n-1 in (<ref>) by an arbitrary time s with σ_n-1≤ s< σ_n, since there is no jump between those times. In words, at time σ_n-1, one inspects whether at least one endpoint of e carries (the evolutions up to that time of) two matched particles, in which case the clock σ_n is retained if it has label 1 only. If no endpoint of e carries matched particles, the clock is retained if it has label 0. The process ( η'_i τ+s)_0 < s ≤τ then simply uses, on each edge e ∈ E_H, the clock process ℛ_e' consisting of the arrivals σ_n < τ that are retained according to (<ref>). We will now argue that
given η'_i τ, the process ( η'_i τ+s)_0<s≤τ has law 𝐏^η'_i τ under ℚ.
To see this, one simply notes using a straightforward induction argument that the conditional law of ξ_n = 1{the n-th arrival in (𝒫̄_e)_e ∈ E_H is retained} given ξ_1,…,ξ_n-1, σ_1,…,σ_n, X_1,…, X_n-1 and ( η_i τ+s,η'_i τ+s )_0 < s ≤σ_n-1 is that of a Bernoulli-1/2 random variable. From this and the thinning property for Poisson processes it readily follows that ℛ' has the right law.
Overall we have now defined a coupling of (η,η') until time ℓτ≤ t. In case ℓτ < t we simply use the same process (𝒫_e) to define the evolution of both η and η' in the remaining time interval (ℓτ, t]. The Markov property (<ref>), (<ref>) and (<ref>) ensure that η,η' indeed have the desired marginals during [0,t].
In view of (<ref>), this immediately yields that
{η_t | _[-H+4ν t, H-4ν t]≽η_t' | _[-H+4ν t, H- 4ν t]}^c ⊂{ (Π_t'∖Pai_t) ∩ [-H+4ν t, H- 4ν t] ≠∅}.
The key of the above construction is that the latter event forces one of three possible unlikely scenarios. Namely, as we explain below, one has that
{ (Π_t'∖Pai_t) ∩ [-H+4ν t, H- 4ν t] ≠∅}⊆ B_1 ∪ B_2 ∪ B_3,
where B_1=⋃_i=1^ℓ-1 G_1,i^c,
B_2 =⋃_s=0^ t-1 {[ one particle of η'_s(ℤ∖ I_s) ends up; in [-H+4 ν t, H-4ν t] at time t ]},
B_3 = B_1^c ∩{[ ∃ x ∈Π_0' s.t. γ_iτ'(x) ∈ I_iτ and; inf_ s ∈ [0, τ] Z_s^i(x)>0 for all 0 ≤ i < ℓ ]};
here γ_·'(x) refers to the evolution of particle x under η' and with x_i= γ_iτ'(x), one sets Z_s^i(x)= |γ_i;s'(x_i)- γ_i;s(ψ_i(x_i))|. In words Z_s^i follows the evolution of the difference between x_i, which is not paired at time iτ since Z_0^i(x) ≠ 0, and its match ψ_i(x_i), which is well-defined on the event B_1^c. Thus B_3 refers to the event that some particle x ∈Π_0' is found in I_iτ at time iτ for all i and never meets its match during the time interval (iτ, (i+1)τ].
We now explain (<ref>). To this effect we first observe that (<ref>) and (<ref>) ensure that two matched particles at some stage i follow the same evolution once they meet (and thus belong to Pai_s for all later times s) as long as they stay in I_s. Therefore, on the event B_1^c ∩ B_2^c, on which (due to occurrence of B_2^c) no unpaired η'-particle at time t can arise by drifting in from the side, meaning that such a particle cannot be seen in η'_s(ℤ∖ I_s) at any time 0 ≤ s <t,
the set (Π_t'∖Pai_t) ∩ [-H+4ν t, H- 4ν t] being non-empty requires at least one particle from Π_0' to never meet its match at any of the stages 1 ≤ i ≤ℓ (matching happens at all stages due to occurrence of B_1^c). That is, B_3 occurs, and (<ref>) follows.
To finish the proof, we now bound the (bad) events appearing in (<ref>) separately. In view of (<ref>), we apply <ref> (which now holds on account of Lemma <ref>) to η' (resp. η) at time iτ, with (ρ+ε/8, ε/8) (resp. (ρ+7ε/8, ε/8)) instead of (ρ,ε), with (H,ℓ, iτ ) instead of (H,ℓ,t) and with ℓ' ranging from ⌊ℓ/2⌋ to ℓ, which fulfils the conditions of <ref> if SEPcoupling is large enough so that H> 4ν t and
min_1≤ i ≤ℓ-1 4ν iτ≥ 4ντ > densitystableℓ^2ε^-2(1+|log^3(ν t)|)≥max_1≤ i ≤ℓ-1densitystableℓ^2ε^-2(1+|log^3(ν iτ)|) .
Recalling that ℓ=⌊ t^1/4⌋ and summing over the possible values of ℓ', this gives that
ℚ(B_1)≤∑_i=1^ℓ-1ℚ(G_1,i^c)≤ 4t^1/2Hexp( -densitystableexpoε^2 2^-7 (ℓ-1)).
We deal with ℚ(B_2) by applying <ref>, which is in force on account of Lemma <ref>.
Noticing that the event indexed by s entering the definition of B_2 in (<ref>) implies that at least one particle of η'_s(ℤ∖ I_s) ends up in [-H+2ν s+2ν t, H- 2ν s-2ν t ] before time s+ t, we get using (<ref>) with k=1, η_0=η_s ≡ 0, and I_s playing the role of [-H,H] that
ℚ(B_2)
≤∑_0 ≤ s < tℚ(
[ a particle of η'_s(ℤ∖ I_s) ends up in; [-H+2ν(s+ t), H- 2ν(s+t) ]; before time s +t ])
≤ 20texp(-ν t/4).
Finally, owing to our coupling in (<ref>), (<ref>), conditionally on (η_u, η'_u), u ≤ iτ, the process Z_s^i(x) is a one-dimensional continuous-time random walk with rate 2ν started at a point in [0,ℓ] (owing to the separation of x_i and its match ψ_i(x_i)) with absorption at 0. If (Z_s)_s≥ 0 is such a random walk and ℙ_k denotes the probability for this walk starting at k∈ [0,ℓ], we have by invariance by translation and the reflection principle:
max_0≤ k≤ℓℙ_k(min_0≤ s≤τZ_s>0) = ℙ_ℓ(min_0≤ s≤τZ_s>0)= ℙ_0(min_0≤ s≤τZ_s>-ℓ)
≤ 2ℙ_0(max_0≤ s≤τ| Z_s|<ℓ)≤ 2ℙ(Z_τ∈ [-ℓ,ℓ]).
Using again <cit.> and taking SEPcoupling large enough (so that in particular 2ντ≥ 2| x | for any x∈ [-ℓ,ℓ]), we have thus for some universal constant C>0 (changing from one expression to the next):
max_0≤ k≤ℓℙ_k(min_0≤ s≤τZ_s>0)≤2ℓ +1/√(4πντ)exp(C(ν^-1τ^-1+ℓ^3ν^-2τ^-2))≤Cℓ/√(ντ)≤ Cν^-1/2t^-1/8.
Recalling that t>ν^-8 and taking a union bound over x ∈Π_0', the previous estimate applied with the Markov property for (η,η') yields that, as long as SEPcoupling is large enough:
ℚ(B_3)≤ 2H(Ct^-1/16)^ℓ.
Putting together (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we obtain that
ℚ( η'_t | _[-H+4ν t, H-4ν t]≼η_t | _[-H+4ν t, H- 4ν t])
≥ 1 -8t^1/2Hexp( -densitystableexpoε^22^-7(ℓ-1) ) -20texp(- ν t/4)-2H(Ct^-1/16)^ℓ,
which is larger than 1- SEPcoupling2 tHexp(-SEPcoupling2^-1ν/(ν +1)ε^2t^1/4 ) as required by (<ref>)
provided SEPcoupling and SEPcoupling2 are chosen large enough.
[Locality in <ref>]
Similarly as in Remark <ref>, which exhibits an analogous property for the coupling inherent to <ref>, the coupling ℚ yielding property <ref> constructed in the proof of Lemma <ref> can be performed for any specification of clock processes (𝒫_e, 𝒫_e')_e ∉ E_H used to define η,η', so long as the marginal laws of (𝒫_e)_e ∈ E and (𝒫_e')_e ∈ E are that of independent Poisson processes of intensity ν/2. This can be seen by inspection of the proof: the only `non-trivial' joint distribution concerns (𝒫_e, 𝒫_e')_e ∈ E_H, which are obtained by suitable thinning from (𝒫̄_e)_e∈ E_H, see in particular the discussion around (<ref>) and (<ref>).
Combining the couplings supplied by Lemmas <ref> and <ref> multiple times, which will be permitted owing to Remarks <ref> and <ref>, yields the following result.
The condition <ref> holds for SEP.
Let ρ,ε∈ (0,1), H_1,H_2,t,ℓ≥ 1 and η_0,η'_0 ∈{0,1}^ℤ be such that the assumptions of <ref> hold. In particular, note that these entail that H_2 > H_1. Define
t_1= ℓ^4, t_2= t_1+ℓ^19.
We proceed in two steps and refer to Figure <ref> for visual aid. In the first step, we apply simultaneously the couplings of Lemma <ref> on [-H_1,H_1] and of Lemma <ref> on [-H_2,H_2]∖ [-H_1,H_1], in the time-interval [0,t_1] (in doing so we shall explain how this preserves the marginals of η and η'). As a result, we get that η_t_1(x)≥η'_t_1(x) for all x∈ [-H_2+2 ν t_1 , H_2-2 ν t_1], except possibly around two intervals around -H_1 and H_1, of width O( ν t_1).
In the second step, we couple the particles of η' on these intervals with "additional" particles of η_t_1∖η'_t_1 on [-H_2,H_2] (using that the empirical density of η is slightly larger than that of η' on [-H_2,H_2] by <ref>), in a manner similar to that used in the proof of Lemma <ref>. This ensures that with large enough probability, all these particles of η' get covered by particles of η within time t_2-t_1, without affecting the coupling of the previous step on account of the Markov property. Finally, during the time interval [t_2,t] we use again the natural coupling of Lemma <ref>, and conclude by showing (<ref>) and (<ref>). We now proceed to make this precise.
Step 1: we construct a coupling ℚ_1 of the evolutions of η and η' during the time-interval [0,t_1] such that if we define the (good) events
G_1={∀ x∈ [-H_1+2 ν t_1, H_1-2 ν t_1], ∀ s∈ [0,t_1]: η_s(x)≥η'_s(x) },
G_2={∀ x ∈ [-H_2+2 ν t_1, -H_1-2 ν t_1-1]∪ [H_1+2 ν t_1+1, H_2-2 ν t_1] : η_t_1(x)≥η'_t_1(x) },
then
ℚ_1(G_1∩ G_2)≥ 1 - 3SEPcoupling2 t_1H_2exp(- SEPcoupling2^-1ν/(ν+1)ε^2ℓ).
The coupling ℚ_1 is defined as follows. Let (𝒫_e)_e ∈ E be a family of i.i.d. Poisson processes on ℝ_+ of intensity ν/2. These processes are used to describe the exchange times for η (seen as an interchange process as in (<ref>)), during the time-interval [0,t_1]. We now define the exchange times (𝒫_e')_e ∈ E to be used for η' during [0,t_1] as follows. For an edge e having at least one endpoint outside [-H_2,H_2] or at least one endpoint inside [-H_1, H_1], set 𝒫_e'= 𝒫_e. It remains to specify 𝒫_e' for e with both endpoints in [-H_2,H_2] but outside [-H_1,H_1]. This set splits into two disjoint intervals E_±, which are both dealt with separately and in exactly the same manner. Thus restricting our attention to E_+, one couples (𝒫_e')_e ∈ E_+ and (𝒫_e)_e ∈ E_+ in exactly the same manner as in the proof of Lemma <ref>, with the interval I_+, defined as the set of all endpoints of edges in E_+, playing the role of [-H,H]. The fact that Lemma <ref> applies even though the processes 𝒫_e and 𝒫_e' have been specified for certain edges e ∉ E_+ is owed to Remark <ref>. Define I_- similarly and perform the same coupling of (𝒫_e')_e ∈ E_- and (𝒫_e)_e ∈ E_-.
Since the sets of edges E_-, E_+ and E ∖ (E_- ∪ E_+) are disjoint, it readily follows that (𝒫_e)_e ∈ E is an i.i.d. family of Poisson processes on ℝ_+ of intensity ν/2. This ensures that under ℚ_1, η∼𝐏^η_0 and η'∼𝐏^η'_0.
Now, by our assumptions on ρ, ε, H,ℓ,t_1,η_0 and η'_0, the above construction of ℚ_1 together with Remark <ref> ensure that Lemma <ref> applies on the interval [-H_1, H_1] with t=t_1 and k=1 (recall to this effect that the relevant coupling for which <ref> is shown to hold is simply ℚ=𝐏, see above (<ref>), and that this coupling is also local in the sense of Remark <ref>). This yields that
ℚ_1(G_1) ≥ 1 -20 exp (- νt_1/4).
Second, our assumptions (taking compatible>SEPcoupling) and the above construction of ℚ also allow us to apply Lemma <ref> on E_- and on E_+ instead of [-H,H], with the same values of ρ, ε and ℓ, and with t=t_1 (in particular, ν^8t_1=(ν^2ℓ)^4>1). We obtain that
ℚ_1(G_2) ≥ 1 - 2SEPcoupling2 t_1H_2exp(- SEPcoupling2^-1ν/(ν+1)ε^2ℓ).
Combining (<ref>) and (<ref>) yields (<ref>), since 20exp (- νt_1/4)≤SEPcoupling2 t_1H_2exp(- SEPcoupling2^-1ν/(ν+1)ε^2ℓ) if compatible is chosen large enough. This concludes Step 1.
Step 2: We now extend ℚ_1 to a coupling ℚ_2 up to time t, using a slight variation of the coupling in the proof of Lemma <ref> in the time-interval [t_1, t_2].
For definiteness of ℚ_2, on the complement of G_1∩ G_2, use (𝒫_e)_e ∈ E for the exchange times of both η and η' during the time interval [t_1,t]. Focusing now on the case where G_1 ∩ G_2 occurs, the aim of this step is to show that we can have
ℚ_2(η'_t_2|_[-H_2+4ν t_2, H_2-4ν t_2]≼η_t_2|_[-H_2+4ν t_2, H_2-4ν t_2] | G_1∩ G_2)≥ 1 - H_2exp( - ν/(ν+1)ε^2ℓ^4).
Fix any realization of η_t_1, η'_t_1 such that G_1∩ G_2 holds.
Define Pai_0 (cf. (<ref>)) as the set of vertices in I_0 def.=[-H_2+2 ν t_1, H_2-2 ν t_1] containing both a particle of η_t_1 and of η'_t_1, and think of these particles as being paired. Let U_0 (resp. U'_0) be the set of unpaired particles of η_t_1 (resp. η'_t_1) in I_0. By definition of G_1 and G_2, the particles of U'_0 must be in [-H_1-2 ν t_1, -H_1+2 ν t_1]∪ [H_1-2 ν t_1, H_1+2 ν t_1] when the event G_1∩ G_2 occurs; cf. Figure <ref>. Hence, on G_1∩ G_2 we have that
| U'_0|≤ 8 ν t_1.
Denote by B_1,0^c the event that on every interval of length between ℓ^5/2 and ℓ^5 included in I_0, η_t_1 has at least εℓ^5/10≥ 8 ν t_1 unmatched particles (recall that ℓ>80ν/ε by assumption). On B_1,0, use (𝒫_e)_e ∈ E as exchange time process to define both η and η' during [t_1,t] (cf. the discussion leading to (<ref>)).
Henceforth, assume that B_1,0^c ∩ G_1 ∩ G_2 occurs. Following the line of argument in the paragraph after (<ref>), one matches injectively each particle of U_0' with a particle of U_0 at distance at most ℓ^5. In the present context this is possible owing to (<ref>) and occurrence of B_1,0^c. Then one couples the evolutions of η and η', first during [t_1, t_1+ℓ^15] as done in the proof of Lemma <ref> during the time interval [0, τ]. Then iteratively at times t_1+iℓ^15 for 1≤ i ≤ℓ^4-1, one performs on I_i= [-H_2+2 ν(t_1+iℓ^15), H_2-2 ν(t_1+iℓ^15)] the same coupling as the one performed at times iτ there, on the event B_1,i^c that for any interval I⊆ I_i of length ranging between ℓ^5/2 and ℓ^5, one has η_t_1+iℓ^15(I)≥η'_t_1+iℓ^15(I). On B_1,i, use (𝒫_e)_e ∈ E for the exchange times of both η and η' during the time interval [t_1+iℓ^15,t].
Overall, this yields a coupling of (η_·∧ t_2, η_·∧ t_2') with the correct marginal law (as in the proof of Lemma <ref>). Finally
one extends this coupling during [t_2, t] on the event G_1 ∩ G_2 ∩⋂_i=0^ℓ^4-1B_1,i^c by using the same exchange times for η and η'. It follows that η∼𝐏^η_0 and η'∼𝐏^η'_0 under ℚ_2.
In much the same way as in (<ref>), it follows from the above construction that the complement of the event on the left-hand side of (<ref>) implies that at least one particle of η' in the interval [-H_2+2 ν t_2, H_2-2 ν t_2] is unpaired at time t_2, which in turn (cf. (<ref>)-(<ref>)) implies the occurrence of
( ⋃_i=0^ℓ^4-1B_1,i) ∪ B_2 ∪ B_3
where
B_2= ⋃_s=0^ℓ^19-1{[ a particle of η'_t_1+ s(ℤ∖ [-H_2+2 ν(t_1+s), H_2-2 ν(t_1+s)]); ends up in η'_t_2([-H_2+4 ν t_2, H_2-4 ν t_2]) ]}
B_3={ at time t_2, one particle from η'_t_1([-H_2+2 ν t_1, H_2-2 ν t_1])
has been in I_i for all 0≤ i < ℓ^4 and remains unpaired}.
We now mimic (<ref>), (<ref>) and (<ref>) to handle ∑_i=0^ℓ^4-1ℚ_2(B_1,i), ℚ_2(B_2) and ℚ_2(B_3) respectively. In detail, for the first term, we apply <ref> with (H,t)=(H_2,iℓ^15) for 1≤ i ≤ℓ^4-1, and ℓ' ranging from ℓ^5/2 to ℓ^5, noting that
H_2>4ν iℓ^15≥ 4νℓ^15≥densitystableℓ^19/2ε^-2(1+|log^3(νℓ^19)|)≥densitystable(iℓ^15)^1/2ε^-2(1+|log^3(ν iℓ^15)|).
For the third inequality, remark that νℓ> densitystableε^-2(1+|log^3(νℓ^4) |) (taking compatible>densitystable), hence it is enough to show that ℓ^9/2(1+|log^3(νℓ^4)|)≥ 1+|log^3(νℓ^19)|. But 1+|log^3(νℓ^19)|≤ 1+4|log^3(νℓ^4)|+60logℓ (since (a+b)^3≤ 4a^3+4b^3 for a,b≥ 0). Taking compatible large enough so that νℓ, and thus ℓ, is large enough (recall that ℓ>80ν/ε>ν), we have 1+4|log^3(νℓ^4)|+60logℓ≤ 1+4|log^3(νℓ^4)|+ ℓ≤ℓ^9/2(1+|log^3(νℓ^4)|) as desired.
For the second term ℚ(B_2), we use <ref>, and for the third term ℚ(B_3), we note that a continuous-time random walk with rate 2ν started in [0,ℓ^5] will hit 0 before time ℓ^15 with probability at least 1-Cℓ^5/(νℓ^15)^1/2≥ 1-Cℓ^-2 (note that νℓ≥compatible>1 if we take compatible>1).
We obtain that
ℚ_2(∀ x∈ [-H_2+4 ν t_2, H_2-4 ν t_2], η_ t_2(x) ≥η'_ t_2(x) )
≥ 1-∑_i=0^ℓ^4-1ℚ_2(B_1,i) - ℚ_2(B_2)-ℚ_2(B_3)
≥ 1- 4ℓ^9 H_2 exp(- densitystableexpoε^22^-7(ℓ^5-1) ) - 20 ℓ^19exp (- νℓ^19 /4) - 2H_2 (Cℓ^-2 )^ℓ^4
≥ 1 - H_2exp( -ν/(ν+1)ε^2ℓ^4),
if compatible in <ref> is chosen large enough. This yields (<ref>).
Let ℚdef.=ℚ_2. It remains to establish (<ref>) and (<ref>).
Owing to the way the coupling ℚ is defined during the intervals [0,t_1],[t_1,t_2] and [t_2,t], ℚ has the following property: in [-H_1,H_1], any particle of η' that is covered at some time s<t by a particle of η will be covered by this particle until time t, or until it leaves [-H_1,H_1]. Moreover, by assumption every particle of η'_0([-H_1,H_1]) is covered by a particle of η_0. Therefore,
{∀ s∈ [0,t], η_s|_[-H_1+4νt, H_1-4νt]≽η'_s|_[-H_1+4νt, H_1-4νt]}^ c
⊆⋃_s=1^t {a particle of η'_s(ℤ∖ [-H_1,H_1])
enters [-H_1+4νt,H_1-4νt ] before or at time t}.
By Lemma <ref> applied with (t,k,a)=(t-s,2,4ν t+x) for every s≤ t and x≥ 0 such that η'_s(± (H_1+x))=1, a union bound over all particles appearing in the event on the right-hand side of (<ref>) leads to a bound on the ℚ-probability of the left-hand side by 2t∑_x≥ 0exp(-(4ν t+x) /8)≤ 20texp (-ν t/4), and (<ref>) follows.
As for (<ref>), first note that by (<ref>) and (<ref>), and for compatible large enough, we have that
ℚ(η'_ t_2|_[-H_2+4ν t_2, H_2-4ν t_2]≼η_ t_2|_[-H_2+4ν t_2, H_2-4ν t_2])≥ 1 - 4SEPcoupling2 t_1 H_2exp(-SEPcoupling2^-1ν/(ν+1)ε^2ℓ).
Since we use, during [t_2,t], the same natural coupling as in the proof of Lemma <ref>, letting B_4 denote the event that a particle of η'_t_2(ℤ∖ [-H_2+4ν t_2, H_2-4ν t_2]) ends up in [-H_2+6ν t, H_2-6ν t]⊆ [ -H_2+4ν t_2+2ν t, H_2-4ν t_2-2ν t] in time at most t, we obtain, provided compatible is large enough and abbreviating ξ= SEPcoupling2^-1ν/(ν+1)ε^2ℓ, that
ℚ(η'_ t|_[-H_2+6ν t, H_2-6ν t]≼η_ t|_[-H_2+6ν t, H_2-6ν t])
≥ 1 - 4SEPcoupling2 t_1 H_2e^-ξ-ℚ(B_4)
≥ 1 - 4SEPcoupling2 t_1 H_2e^-ξ-20e^-ν t/4≥ 1 - 5SEPcoupling2 t_1 H_2e^-ξ.
This shows (<ref>) (recalling that t_1=ℓ^4 by (<ref>)), and concludes the proof.
The final result which feeds into the proof of Proposition <ref> is the following.
The condition <ref> holds for SEP.
Let ℓ≥ 1, ρ∈ (0,1) and η_0, η'_0 be such that the conditions of <ref> hold. In particular, these imply the existence of x_0∈ [0, ℓ] such that η_0(x_0)=1 and η'_0(x_0)=0. Let ℚ = 𝐏 as above (<ref>), by which (η,η') are coupled as interchange processes using the same exchange times (𝒫_e)_e∈ E. As observed in (<ref>), under ℚ the particle of η_0 starting at x_0 moves like a simple random walk Z=(Z_t)_t≥ 0 with jump rate ν, and we have η'_t(Z_t)=0 for all t≥ 0 by construction of 𝐏. Hence, ℚ(η_ℓ( x )>0 , η'_ℓ( x )=0) ≥ℚ(Z_ℓ= x) and therefore, in order to deduce (<ref>) it is enough to argue that
ℚ(Z_ℓ= x)≥(ν/(2e^ν))^6(ρ+1)ℓ, x=0,1.
Indeed, let Z^L_ℓ (resp. Z^R_ℓ) denote the number of jumps of Z to the left (resp. right) during the time interval [0,ℓ]. These two variables are independent and distributed as Poi(νℓ/2), which entails that, for x=0,1, and x_0 ≥ x,
ℚ(Z_ℓ= x)≥ℚ(Z^L_ℓ=x_0-x)ℚ(Z^R_ℓ=0)≥(ℓν/2)^x_0-xe^-ℓν/(x_0 -x)!≥(ν/2)^x_0-x e^-ℓν
≥(ν/(2e^ν))^x_0-xe^-(ℓ-(x_0-x))ν,
using that (x_0-x)! ≤ x_0^x_0-x≤ℓ^x_0-x in the third step.
Since 0≤ x_0-x≤ℓ and ν≤ 2e^ν, (<ref>) follows for all x_0 ≥ x (and x=0,1), and it is easy to see that the bound remains true in the remaining case, i.e. when x_0=0 = 1-x (now forcing Z^L_ℓ=0 and Z^R_ℓ=1 instead, which by symmetry of Z^L_ℓ and Z^R_ℓ yields the same bound as when x_0=x+1). Overall this yields (<ref>).
Finally, since the marginal of the coupling ℚ for (η,η') is that of Lemma <ref>, we get the first inequality of (<ref>) from <ref> with t=ℓ. The second inequality follows from our condition on k (using that ρ≤ 1).
§ CONCENTRATION ESTIMATES
We collect here a few classical facts on the concentration of Poisson and binomial distributions, which are repeatedly used to control the probability that the environment has an abnormal empirical density (the PCRW case corresponds to Poisson distributions, and the SEP to binomial distributions).
Let λ,x >0, and X∼Poisson(λ). Then
ℙ(X≥λ+x)≤exp(-x^2/2(λ+x)).
If x∈ [0,λ], then
ℙ(X≤λ-x)≤exp(-x^2/2(λ+x)).
Let m∈ℕ, p∈(0,1) and X∼Bin(m,p). Then for all q ∈ (0,1-p), we have
ℙ(X≥ (p+q)m)≤exp (-mq^2/3)
and for all q∈ (0,p):
ℙ(X≤ (p-q)m)≤exp (-mq^2/2).
We get (<ref>) (resp. (<ref>)) from Theorem 4.4 (resp. Theorem 4.5) of <cit.> with μ=mp and δ=q/p≥ q in both cases. We now turn to (<ref>).
By a classical application (and optimization) of Chernoff's bound (see Theorem 5.4 of <cit.>), one shows that
ℙ(X≥λ+x)≤e^-λ(eλ)^λ+x/(λ+x)^λ+x=exp(-λ - (λ+x)(log(λ +x)-logλ -1)).
Hence for (<ref>), it remains to show that
λ + (λ+x)(log(λ +x)-logλ -1)≥x^2/2(λ+x).
Setting u=1+x/λ and multiplying both sides by u/λ, this amounts to showing that the map f:[1,∞) →ℝ defined by
f(u)= u+u^2(log u -1) - (u-1)^2/2
remains non-negative. Indeed, f(1)=0 and f'(u)=1+2u(log u -1) +u-(u-1)=2(1+u(log u -1)). One checks that log u -1≥ -1/u for all u≥ 1 (with equality iff u=1) so that f' is nonnegative on [1,∞), and this concludes the proof of (<ref>). One proves (<ref>) via the same method.
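A quick numerical check of the two facts used about f (namely f(1)=0 and f'≥ 0 on [1,∞)) can be carried out on a grid; a minimal Python script:

```python
import math

def f(u):
    """The auxiliary map from the proof:
    f(u) = u + u^2 (log u - 1) - (u - 1)^2 / 2."""
    return u + u * u * (math.log(u) - 1.0) - (u - 1.0) ** 2 / 2.0

def fprime(u):
    """f'(u) = 2 (1 + u (log u - 1)), nonnegative since log u - 1 >= -1/u."""
    return 2.0 * (1.0 + u * (math.log(u) - 1.0))

if __name__ == "__main__":
    grid = [1.0 + 0.01 * i for i in range(2000)]   # u in [1, 21)
    assert all(f(u) >= -1e-12 and fprime(u) >= -1e-12 for u in grid)
    print("f and f' nonnegative on the grid; f(1) =", f(1.0))
```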
§ THE PCRW ENVIRONMENT
In this Appendix, we consider another environment, the Poisson Cloud of Random Walks (PCRW) previously considered in <cit.> among others; we refer to the introduction for a more complete list of references. For this environment, the parameter ρ∈ (0,∞) governs the intensity of walks entering the picture. We show below, see Lemma <ref> and Proposition <ref>, that the PCRW also fits the setup of <ref> and
satisfies <ref>-<ref>. Hence, this environment yields another example to which the conclusions of our main result, Theorem <ref>, apply. The proof is virtually the same as that of Theorem <ref> given in <ref>, upon noting that the relevant law of large numbers (<ref>) in this context was also shown in <cit.>, yielding the existence of the speed v(ρ) at all but at most two values ρ=ρ_± (as for SEP).
The PCRW is defined below using random walks evolving in discrete time, following the practice of <cit.> and other previous works, but the following results could easily be adapted to random walks in continuous time (with exponential holding times of mean one).
§.§ Definition of the PCRW
The PCRW is a stochastic process η= (η_t(x); x ∈ℤ, t ∈ℝ_+) with state space Σ=ℤ_+^ℤ defined as follows: for any given initial configuration η_0∈Σ and every x∈ℤ, place η_0(x) particles at x. Then, let all the particles follow independent discrete-time lazy simple random walks, i.e. at each integer time, any given particle stays put with probability 1/2, or jumps to its left or right neighbour, each with probability 1/4.
For t≥ 0 and x∈ℤ, let η_t(x) be the number of particles located at x. We denote by ^η_0_PCRW the canonical law of this environment with initial state η_0, and frequently abbreviate ^η_0_=^η_0_PCRW below.
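To fix ideas, here is a minimal simulation sketch of the PCRW (in Python; the truncation of μ_ρ to a finite window, the function names and all parameter values are our own illustrative choices):

```python
import math
import random

def sample_poisson(lam, rng):
    """Poisson sampler by inversion of the cumulative distribution."""
    n, term = 0, math.exp(-lam)
    acc, u = term, rng.random()
    while u > acc:
        n += 1
        term *= lam / n
        acc += term
    return n

def pcrw_step(eta, rng):
    """One time unit of the PCRW: every particle independently stays put
    with probability 1/2 and jumps to either neighbour with probability
    1/4 each.  Configurations are dicts {site: particle count}."""
    nxt = {}
    for x, n in eta.items():
        for _ in range(n):
            u = rng.random()
            y = x - 1 if u < 0.25 else (x + 1 if u < 0.5 else x)
            nxt[y] = nxt.get(y, 0) + 1
    return nxt

if __name__ == "__main__":
    rng = random.Random(0)
    rho, L = 2.0, 200                      # illustrative values
    # Poisson(rho) initial condition on a finite window (a truncation of
    # the product measure mu_rho, which lives on all of Z)
    eta = {x: sample_poisson(rho, rng) for x in range(L)}
    eta = {x: n for x, n in eta.items() if n > 0}
    total = sum(eta.values())
    for _ in range(10):
        eta = pcrw_step(eta, rng)
    assert sum(eta.values()) == total      # particle number is conserved
    print(total, "particles, conserved over 10 steps")
```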
§.§ Properties of the PCRW
We proceed to show that the PCRW environment has the desired features, i.e. that the properties (<ref>)-(<ref>) listed in <ref> as well as the conditions <ref>-<ref> appearing in <ref> all hold. The parameter ρ indexing the stationary measures will naturally vary in (0,∞).
We will in practice always consider a bounded open interval J= (K^-1,K) for arbitrary K > 1 below. The constants densitydev, …, SEPcoupling appearing as part of the conditions we aim to verify will henceforth be allowed to tacitly depend on K. Note that this is inconsequential for the purposes of deriving monotonicity of v(·) on (0,∞) (cf. (<ref>)) since this is a local property: to check monotonicity at ρ one simply picks K large enough such that ρ∈(K^-1,K) =J.
For the remainder of this appendix, let K>1 be arbitrary and Jdef.= (K^-1,K). We start by verifying properties (<ref>)-(<ref>).
With
μ_ρ = Poi(ρ)^⊗ℤ, ρ∈ (0,+∞),
the measures (𝐏^η_0: η_0 ∈Σ) with 𝐏^η_0=𝐏_PCRW^η_0 and (μ_ρ: ρ∈ J) satisfy all of (<ref>)-(<ref>).
Fix ρ >0.
Property (<ref>) is classical, and follows readily from the time-homogeneity, translation invariance and axial symmetry of the lazy simple random walk.
Property (<ref>) is also standard: if η_0∼μ_ρ, by suitably thinning the Poisson process one can realize η_0 by decomposing η_0(x)=l(x)+c(x)+r(x) for all x∈ℤ, where l(x) (resp. c(x), r(x)) is the number of particles starting from x that make their first move to the left (resp. stay put, and make their first move to the right), with l(x),r(x)∼Poi(ρ/4), c(x)∼Poi(ρ/2) and the family of variables (l(x),c(x),r(x))_x∈ℤ is independent. From this one infers that η_1∼μ_ρ, as η_1(x)=l(x+1)+c(x)+r(x-1) ∼Poi(ρ) for all x∈ℤ, and the variables η_1(x) are independent as x varies.
As for property (<ref>), it holds with the following natural coupling (which straightforwardly yields the correct marginal laws for η and η'): if η'_0(x)≤η_0(x) for all x∈ℤ, then one matches injectively each particle of η'_0 to a particle of η_0 located at the same position. The coupling imposes that matched particles follow the same trajectory, and the remaining particles of η_0 (if any) follow independent lazy simple random walks, independently of the matched particles.
Finally, Property (<ref>) is a consequence of the fact that under 𝐏^ρ, for every finite interval I and every time t ≥0, η_t(I)∼Poi(ρ| I|), and combining with the tail estimates (<ref>)-(<ref>).
We now establish the conditions <ref>-<ref>.
For (𝐏^η_0: η_0 ∈Σ) with 𝐏^η_0 =𝐏^η_0_PCRW, ρ∈ J and with ν=1, all of <ref>, <ref>, <ref>, <ref> and <ref> hold.
The proof of Proposition <ref> is given in <ref> below. We start with a coupling result (which for instance readily implies <ref> as shall be seen), similar in spirit to Lemma B.3 of <cit.>, stating that the evolution of the PCRW with a sufficiently regular deterministic initial condition can be approximated by a product of independent Poisson variables. This is of independent interest (and lurks in the background of various more elaborate coupling constructions employed in <ref>).
There exist positive and finite constants c:diffusive, [c]EGrestesmallerthanepsilon and [c]coupling-1 such that the following holds. Let ρ∈ (0,K), ε∈ (0,(K-ρ) ∧ρ∧ 1 ) and H,ℓ,t∈ℕ be such that c:diffusiveℓ^2< t < H/2 and (ρ +ε) (t^-1log t)^1/2ℓ< EGrestesmallerthanepsilonε. There exists a coupling of (η^ρ-ε , η_t, η^ρ+ε) with η^ρ±ε∼μ_ρ±ε (and η_t sampled under 𝐏^η_0) such that, if η_0∈Σ is such that for any interval I⊆ [0,H] with | I|=ℓ,
(ρ - ε/2) ℓ≤η_0(I) (resp. η_0(I) ≤ (ρ + ε/2) ℓ),
then with G={η^ρ-ε|_[t, H-t]≼η_t|_[t, H-t]} (resp. G={η_t|_[t, H-t]≼η^ρ+ε|_[t, H-t]}),
ℚ(G) ≥ 1 - H exp(-coupling-1 (ρ+ε)^-1ε^2√(t)).
Moreover, the coupling ℚ is local in the sense that (η^ρ-ε , η_t, η^ρ+ε) |_[t,H-t] depends on the initial condition η_0 through η_0(x), x ∈ [0,H], alone.
We now prepare the ground for the proof of Proposition <ref>. Let us abbreviate I_t=[t,H-t]. We use the framework of soft local times from Appendix A of <cit.> (the latter following Section 4 of <cit.>), which we extend to fit our needs. We define a coupling ℚ as follows.
Let Λ (defined under ℚ) be a Poisson point process on ℤ×ℝ_+ with intensity 1⊗λ, where 1 stands for the counting measure on ℤ and λ is the Lebesgue measure on ℝ_+. For each z∈ I_t, set
η^ρ±ε(z)def.=Λ({z}× (0,ρ±ε]).
For z∈ℤ∖ I_t, let independently η^ρ±ε(z)∼Poi(ρ±ε). This indeed yields the correct marginal distributions μ_ρ±ε in view of (<ref>).
As for η_t, given any initial configuration η_0 ∈Σ, let (x_i)_i≥ 1 denote an arbitrary ordering of the positions of the (finitely many) particles of η_0([0,H]) (counted with multiplicity, hence the sequence (x_i)_i≥ 1 is not necessarily injective). Define for i≥ 1 and z∈ [-t,H+t] g_i(z):=q_t(x_i,z), with (q_n)_n ∈ℕ denoting the discrete-time heat kernel for the lazy simple random walk. Let ξ_1:=sup{u≥ 0 : Λ(⋃_z∈ℤ{z}× (0,ug_1(z)])=0} and for i≥ 2, define recursively
ξ_idef.=sup{u≥ 0: Λ(⋃_z∈ℤ{z}× (0,ξ_1g_1(z)+… + ξ_i-1g_i-1(z)+ug_i(z)])=i-1},
see Figure 5 of <cit.>.
Note that since each g_i has a finite support (included in [x_i-t,x_i+t]), the ξ_i's are well-defined. In fact, by Propositions A.1-A.2 of <cit.>, the variables (ξ_i)_i≥ 1 are i.i.d. Exp(1). For all z ∈ [-t,H+t], we define the soft local time
G_η_0 (z)def.=∑_i≥ 1ξ_i q_t(x_i,z),
with ξ_i as in (<ref>). With these definitions, it follows, denoting
h(z)def.=Λ({z}× (0,G_η_0(z)]),
that the family (h(z))_z∈ [-t,H+t] is distributed as η̃_t, defined as η_t under 𝐏^η_0 restricted to the particles of η_0([0,H]). To see this, note that ℚ is such that, if u is the ℤ-coordinate of the point of Λ seen when determining ξ_i at (<ref>), then the particle x_i of η_0 moves to u by time t. Remark indeed that by (<ref>) and the spatial Markov property for Poisson point processes, the choice of u is made with probability proportional to g_i(·)=q_t(x_i,·) and independently of what happened in the first i-1 steps. In particular, (h(z))_z∈ I_t is distributed as η_t|_I_t, since all the particles of η_t|_I_t perform at most one step per unit of time, and must have been in [0,H] at time 0. Finally, independently of all this, let all particles of η_0(ℤ∖ [0,H]) follow independent lazy random walks.
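The soft local time mechanism above can be rendered algorithmically: one raises the curve G by multiples of the current kernel g_i until it hits the first unused point of Λ, which determines both ξ_i and the position of the i-th particle. The following is a toy Python sketch of this procedure (the uniform kernels, the truncation level and all names are our own illustrative choices; in the text g_i(z)=q_t(x_i,z)):

```python
import math
import random

def soft_local_time_sample(kernels, rng, level=50.0):
    """Soft local time sampling: given probability kernels g_1, ..., g_n
    on a common finite set of sites, use a single Poisson point process
    of unit intensity on sites x (0, level] to produce positions y_i
    distributed according to g_i, together with the soft local time
    G(z) = sum_i xi_i g_i(z); the xi_i come out i.i.d. Exp(1)."""
    sites = sorted({z for g in kernels for z in g})
    points = {z: [] for z in sites}            # [height, used] per site
    for z in sites:
        u = 0.0
        while True:
            u += rng.expovariate(1.0)
            if u > level:
                break
            points[z].append([u, False])
    G = {z: 0.0 for z in sites}
    positions = []
    for g in kernels:
        best, arg = math.inf, None
        for z, gz in g.items():
            if gz <= 0.0:
                continue
            for pt in points[z]:               # heights sorted increasingly
                if pt[1] or pt[0] <= G[z]:
                    continue                   # at or below the curve: used
                cand = (pt[0] - G[z]) / gz
                if cand < best:
                    best, arg = cand, (z, pt)
                break                          # only the lowest point matters
        assert arg is not None, "increase `level`"
        z_star, pt = arg
        pt[1] = True                           # point hit by the raised curve
        for w, gw in g.items():                # raise the curve: G += xi * g
            G[w] += best * gw
        positions.append(z_star)
    return positions, G

if __name__ == "__main__":
    rng = random.Random(3)
    g1 = {z: 0.2 for z in range(0, 5)}         # toy kernels: uniform windows
    g2 = {z: 0.2 for z in range(2, 7)}
    positions, _ = soft_local_time_sample([g1, g2], rng)
    print("sampled positions:", positions)
```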
Overall ℚ indeed defines a coupling of
(η^ρ-ε , η_t, η^ρ+ε) with the required marginal law, and the desired locality (see below (<ref>)) follows immediately from the previous construction. Moreover, (<ref>) and (<ref>) imply that under ℚ, for all t ∈ℕ and η_0∈Σ,
{η^ρ-ε|_I_t≼η_t|_I_t}⊆{∀ z∈ I_t: ρ-ε≤ G_η_0 (z) },
{η_t|_I_t≼η^ρ+ε|_I_t}⊆{∀ z∈ I_t: G_η_0 (z)≤ρ+ε}.
It is now clear from (<ref>) that the desired high-probability domination in (<ref>) hinges on a suitable control of the soft local time G_η_0 defined by (<ref>). To this effect we first isolate the following first moment estimate.
Under the assumptions of Proposition <ref>, there exists EGheatkernel such that for all z ∈ I_t,
(ρ - ε/2) (1- EGheatkernelℓ√(log t/t)) ≤𝔼^ℚ[G_η_0 (z)], (resp. 𝔼^ℚ[G_η_0 (z)] ≤ (ρ + ε/2) (1+ EGheatkernelℓ√(log t/t))).
Let c_t=√(t log t) and fix z∈ I_t=[t,H-t].
Cover [z-⌊ c_t⌋,z+⌊ c_t⌋] by a family (I_i)_1≤ i≤⌈ (2⌊ c_t⌋+1)/ℓ⌉ of intervals of length ℓ, all disjoint except possibly I_1 and I_2. We focus on the upper bound on 𝔼^ℚ[G_η_0 (z)] in (<ref>) (the lower bound is derived in a similar fashion).
By assumption in (<ref>), we have that η_0(I_i) ≤ (ρ + ε/2) ℓ for all i, hence, recalling that ξ_i in (<ref>) has unit mean,
𝔼^ℚ[G_η_0 (z)] = ∑_i≥ 1 q_t(x_i,z) ≤∑_i=1^⌈ (2⌊ c_t⌋+1)/ℓ⌉ (ρ + ε/2) |I_i| max_x ∈ I_i q_t(z,x) +2∑_x≥ z+c_t-1 q_t(z,x).
We start by dealing with the last term above. By Azuma's inequality, we have that
∑_x≥ z+c_t-1 q_t(z,x)≤exp( -(c_t-1)^2/2t)≤exp( -log t/2+√(log t/t))≤C(ρ+ε/2)/√(t),
for some constant C>0, depending on K.
Let us now handle the first term in the right-hand side of (<ref>). We start by noting that
∑_i=1^⌈ (2⌊ c_t⌋+1)/ℓ⌉ (ρ + ε/2) |I_i| max_x ∈ I_i q_t(z,x)
≤ (ρ + ε/2) ∑_i=1^⌈ (2⌊ c_t⌋+1)/ℓ⌉∑_y ∈ I_i q_t(z,y) +(ρ + ε/2)∑_i=1^⌈ (2⌊ c_t⌋+1)/ℓ⌉∑_y ∈ I_imax_x ∈ I_i (q_t(z,x)-q_t(z,y))
By <cit.>, we have that
max_y∈ℤq_t(z,y) ≤Ct^-1/2.
Recalling that at most I_1 and I_2 may overlap, the above implies that
∑_i=1^⌈ (2⌊ c_t⌋+1)/ℓ⌉∑_y ∈ I_i q_t(z,y) ≤ 1+∑_y∈ I_1q_t(z,y)≤ 1+Cℓ/√(t).
It remains to deal with the last term in (<ref>). By standard heat kernel estimates (see for instance <cit.> and <cit.>) and a computation similar to (<ref>), combined with a large deviation estimate on the number N_t of non-zero steps performed by the lazy random walk up to time t (using Azuma's inequality for instance), for all x,y∈ [z-c_t,z+c_t] with |x-y|≤ℓ and first assuming that both |x-z| and |y-z| are even, leaving C be a universal constant changing from line to line, we have that
q_t(z,x) ≤∑_n=⌊ t/4-c_t⌋^⌈ t/4+c_t ⌉ℙ(N_t=2n) q_2n(z,y)×exp( Cℓ c_t/t) + ℙ(|N_t-t/2|>2c_t)
≤ q_t(z,y)×(1+Cℓ√(log t/t)) + 2exp(-2log t),
where we have to choose c:diffusive (and hence t) large enough in the assumptions of Proposition <ref>.
The case where both |x-z| and |y-z| are odd is treated similarly, considering 2n+1 instead of 2n. If |x-z| is even and |y-z| is odd, note that for all n such that ⌊ t/4-c_t⌋≤ n≤⌈ t/4+c_t ⌉,
ℙ(N_t=2n)=(2n+1)/(t-2n)·ℙ(N_t=2n+1)≤(1+C√(log t/t))·ℙ(N_t=2n+1),
where a corresponding lower bound holds in order to treat the case where |x-z| is odd and |y-z| is even.
Hence the result in (<ref>) holds regardless of the parity of |x-z| and |y-z|.
Using (<ref>), we obtain that
∑_i=1^⌈ (2⌊ c_t⌋+1)/ℓ⌉∑_y ∈ I_imax_x ∈ I_i (q_t(z,x)-q_t(z,y))
≤∑_i=1^⌈ (2⌊ c_t⌋+1)/ℓ⌉∑_y ∈ I_i Cℓ√(log t/t) q_t(z,y) + 1/t
≤ Cℓ√(log t/t)+∑_y∈ I_1q_t(z,y) ≤ Cℓ√(log t/t),
where we used (<ref>) and the fact that 1≤ℓ^2<t/c:diffusive, chose c:diffusive (and hence t) large enough, and let the value of C change from one line to the next.
Substituting (<ref>) and (<ref>) into (<ref>) and feeding the resulting estimate, together with (<ref>), into (<ref>) yields that
𝔼^ℚ[G_η_0 (z)] ≤(ρ + ε/2) ( 1+ Cℓ/√(t) +Cℓ√(log t/t) +C/√(t))≤(ρ + ε/2) ( 1 +Cℓ√(log t/t)),
and the conclusion follows.
We are now ready to give the short proof of Proposition <ref>, which combines the above ingredients.
We use the coupling ℚ defined atop Lemma <ref> and show that G_η_0 concentrates in order to exploit (<ref>). To this end, first note that for all θ < (1/2)min_i≥ 1 q_t(x_i,z)^-1,
𝔼^ℚ[ e^θ G_η_0 (z)]= ∏_i ≥ 1 1/(1-θ q_t(x_i,z)).
Observe that only a finite number of factors may differ from 1 (which requires q_t(x_i,z)>0, hence | x_i-z|≤ t).
By (<ref>), the probability ℚ(G^c) with G={η_t|_[t, H-t]≼η^ρ+ε|_[t, H-t]} is thus bounded from above by
ℚ[∃ z∈ I_t : G_η_0 (z) > ρ + ε ] ≤ H sup_z ∈ I_tℚ[ G_η_0 (z) > ρ + ε ]
≤ H sup_z exp{- θ(ρ + ε - ∑_i≥ 1 q_t(x_i,z) ) + ∑_i≥ 1θ^2q_t(x_i,z)^2}≤ H exp{ -(ρ+ ε - ∑ q)^2/4∑ q^2},
using (<ref>), the exponential Markov inequality and the inequality log(1-x)≥-x - x^2 for |x|< 1/2 in the second step and optimizing over θ in the third, and abbreviating ∑ q^α= ∑_i≥ 1q_t(x_i,z)^α. Using (<ref>) and then (<ref>), we have
∑_i≥ 1q_t(x_i,z)^2≤C/√(t)∑_i≥ 1q_t(x_i,z) = C/√(t)𝔼^ℚ[G_η_0 (z)]
≤C/√(t)(K+1+ EGheatkernel (ρ+ε)ℓ√(log t/t))≤C/√(t)((K+1) + EGheatkernelEGrestesmallerthanepsilonε)≤C_K/√(t),
where we used the assumptions of Proposition <ref>, chose EGrestesmallerthanepsilon small enough (depending on EGheatkernel), and let the value of C change in the last inequality (depending on K≥ 1).
Moreover,
K+1≥ρ+ ε - ∑_i≥ 1q_t(x_i,z)≥ε/2-EGheatkernel(ρ+ε)ℓ√(log t/t)≥ε/2-EGheatkernelEGrestesmallerthanepsilonε≥ε/4,
provided that EGrestesmallerthanepsilon is small enough (depending on EGheatkernel).
Substituting the two displays above into (<ref>) provides the asserted upper bound on ℚ(G^c) in (<ref>). For the other choice of G in (<ref>), we bound, using that log (1+x)≥ x-x^2/2 for all x≥ 0,
ℚ[∃ z∈ I_t : G_η_0 (z) < ρ - ε ] ≤ Hsup_z ∈ I_tℚ[ G_η_0 (z) < ρ - ε ]
≤ H sup_z exp{- θ(ε -ρ + ∑_i≥ 1q_t(x_i,z) ) + 1/2∑_i≥ 1θ^2q_t(x_i,z)^2}≤ H exp{ -(ε -ρ+ ∑ q)^2/2∑ q^2},
optimizing again over θ in the last step. We conclude in the same way as for the upper bound.
§.§ Proof of Proposition <ref>
The proof of Proposition <ref> follows immediately by combining Lemmas <ref>-<ref> below, each of which focuses on one specific property among <ref>, <ref>, <ref>, <ref> and <ref>, which are proved in this order. Recall that J=(K^-1,K) for some K >1 and that constants may implicitly depend on K.
Condition <ref> (with ν=1) holds for PCRW.
Let ρ,ε, H,ℓ,t and η_0 be such that the conditions of <ref> hold. It is straightforward to check that these imply the conditions for applying Proposition <ref> with (2H,2ε) instead of (H,ε), provided that densitystable is large enough w.r.t. c:diffusive, EGrestesmallerthanepsilon and K. Hence, we can now use Proposition <ref> with (2H,2ε) to show <ref>.
Let 1 ≤ℓ' ≤ℓ. By (<ref>) (with an appropriate coupling ℚ under which η∼𝐏^η_0 and η^ρ±ε∼μ_ρ±ε, and translating [0,2H] to [-H,H] by means of (<ref>)), we have that
𝐏^η_0([ for all I' ⊂ [-H+2 t,H-2 t]; of length ℓ': ±( η_t(I') -ρℓ' ) ≤ 3εℓ' ])≥ 1 - 2 exp(-coupling-1 (ρ+ε)^-1ε^2√(t))-p_±
where
p_±def.=μ_ρ±ε([ there exists an interval I' of length ℓ' included in; [-H+2 t,H-2 t] so that: | η^ρ±ε(I') -(ρ±ε) ℓ' | ≥ 2εℓ' ]).
By (<ref>) (which holds on account of Lemma <ref>) and a union bound over all intervals I'⊆ [-H+2 t,H-2 t] of length ℓ', we have that
p_±≤ 2H exp(-densitydevε^2ℓ'). Combining this and (<ref>), noting that coupling-1(ρ+ε)^-1√(t)≥densitystableexpoℓ≥densitystableexpoℓ' if we choose densitystableexpo small enough w.r.t. K and coupling-1, yields <ref>.
For every H,t≥ 0, and all η_0,η'_0 ∈Σ such that η_0 |_[0, H]≽η_0' |_[0, H], there exists a coupling ℚ of η,η' with respective marginals 𝐏^η_0 and 𝐏^η'_0 such that
ℚ(∀ s∈ [0,t], η_s|_[t, H-t]≽η'_s|_[t, H-t])=1.
Therefore, condition <ref> holds for PCRW with ν=1.
Clearly, (<ref>) implies <ref>, up to changing H to 2H and translating [0,2H] to [-H,H] (using (<ref>), as established in Lemma <ref>). We now show (<ref>).
Let H,t≥ 0 and η_0,η'_0∈Σ be as above.
Couple η and η' by matching injectively each particle of η'_0(x) to a particle of η_0(x), for all x∈ [0,H], and by imposing that matched particles follow the same trajectory (and by letting all other particles follow independent lazy random walks).
Since particles can make at most one move (to a neighbouring position) per unit of time due to the discrete-time nature of the walks, no particle of η'_0 outside of [0,H] can land in [t,H-t] before or at time t. Thus the event in (<ref>) holds with probability 1.
Let ρ∈ (K^-1,K), ε∈ (0,(K-ρ) ∧ρ∧ 1), and H,ℓ,t∈ℕ be such that c:diffusiveℓ^2<t<H/2 and (ρ +3/2ε) (t^-1log t)^1/2ℓ< EGrestesmallerthanepsilonε/4. Let η_0,η'_0∈Σ be such that for every interval I⊆ [0,H ] of length ℓ, we have η_0(I)≥ (ρ+3ε/4)ℓ and η'_0(I)≤ (ρ+ε/4)ℓ. Then there exists a coupling ℚ of η and η' such that
ℚ( η'_t | _[t, H-t]≼η_t | _[t, H-t])≥ 1- 4H exp(-coupling-1(ρ+ε)^-1ε^2√(t)/4).
Consequently, <ref> with ν=1 holds for PCRW. Moreover, ℚ is local in that (η'_t, η_t) |_[t,H-t] depends on the initial conditions (η_0,η_0') through η_0(x), η_0'(x), x ∈ [0,H], alone.
We first show how (<ref>) implies <ref>. Let ρ,ε,H,ℓ,t,η_0 and η'_0 satisfy the assumptions of <ref> (in particular, ℓ=⌊ t^1/4⌋). Then they also satisfy the assumptions of Lemma <ref> (with 2H instead of H), upon taking SEPcoupling large enough in <ref>. By (<ref>) applied to [-H,H] instead of [0,2H] (again using translation invariance, see (<ref>), established in Lemma <ref>), (<ref>) holds with SEPcoupling2=max(4,coupling-1^-1K/2) since ρ+ε≤ K.
We now proceed to the proof of (<ref>). Let ρ,ε,H,ℓ,t,η_0 and η'_0 satisfy the assumptions of Lemma <ref>. Then we can apply Proposition <ref> to η_0 with (ρ+ε,ε/2) instead of (ρ,ε), and the same values of H,ℓ,t. Similarly, we can apply it to η'_0 with (ρ-ε,ε/2) instead of (ρ,ε). This entails the existence of two couplings ℚ^1 of (η_t,η^ρ+ε/2) and ℚ^2 of (η^ρ+ε/2,η'_t), where η^ρ+ε/2∼μ_ρ+ε/2, such that, abbreviating I_t=[t,H-t],
ℚ^2(η'_t|_I_t ≼ η^ρ+ε/2|_I_t) ∧ ℚ^1( η^ρ+ε/2|_I_t ≼ η_t|_I_t) ≥ 1 - H exp(-c_3/4 (ρ+ε/2)^-1ε^2√(t)).
Applying <cit.> with (X,Y)=(η_t,η^ρ+ε/2) and (Y',Z)= (η^ρ+ε/2,η'_t), one can `chain' ℚ^1 and ℚ^2, i.e. one obtains a coupling ℚ of (η_t|_I_t,η^ρ+ε/2|_I_t, η'_t|_I_t) such that the pair (η_t|_I_t,η^ρ+ε/2|_I_t) has the same (marginal) law as under ℚ^1 and (η^ρ+ε/2|_I_t, η'_t|_I_t) has the same law as under ℚ^2; explicitly, a possible choice is
ℚ(η_t|_I_t=μ_A, η'_t|_I_t=μ_B, η^ρ+ε/2|_I_t=μ_C)
= ℚ^1(η_t|_I_t=μ_A | η^ρ+ε/2|_I_t=μ_C) · ℚ^2(η'_t|_I_t=μ_B | η^ρ+ε/2|_I_t=μ_C) · ℚ^1(η^ρ+ε/2|_I_t=μ_C),
with μ_A, μ_B, μ_C ranging over point measures on I_t. On account of <cit.> applied with the choices ε_1 = 1 - ℚ^2(η'_t|_I_t ≼ η^ρ+ε/2|_I_t) and ε_2 = 1 - ℚ^1(η^ρ+ε/2|_I_t ≼ η_t|_I_t), ℚ has the property that
ℚ(η'_t|_I_t≼η_t|_I_t) ≥ 1 -ε_1-ε_2. In view of (<ref>), (<ref>) follows. The asserted locality of ℚ is inherited from ℚ^i, i=1,2, due to Proposition <ref> (used to define ℚ^i).
Condition <ref> with ν=1 holds for PCRW.
Let ρ∈ (K^-1,K), ε∈ (0,1), H_1,H_2,t,ℓ∈ℕ, and η_0,η'_0∈Σ be such that the conditions of <ref> hold. We proceed by a two-step coupling similar to the one in Lemma <ref> and first give a short overview of both steps; cf. also Fig. <ref>. In the first step, we couple η and η' during the time interval [0, t_1], with t_1:=ℓ^4, using the coupling of Lemma <ref> on [-H_2,H_2]∖ [-H_1,H_1] and the coupling given in Lemma <ref>, making sure that these couplings can be simultaneously performed on disjoint intervals. As a result, we get that η_t_1(x)≥η'_t_1(x) for all x∈ [-H_2+t_1, H_2-t_1], except possibly within two intervals around -H_1 and H_1, of width O(t_1).
In the second step, during the time interval [t_1,t], we couple the particles of η' on these intervals with "additional" particles of η_t_1∖η'_t_1 on [-H_2,H_2] (using that the empirical density of η' is slightly larger than that of η on [-H_2,H_2] by <ref>), using Lemma <ref>. This ensures that with large enough probability, all these particles of η' get covered by particles of η within time t-t_1, without affecting the coupling of the previous step by the Markov property.
Step 1: choosing c_9 large enough (in a manner depending on K, c_1 and c_2), as we now briefly explain, the conditions of Lemma <ref> hold for η_0,η'_0, with (H,ℓ,t)=(H_2-H_1-1,ℓ,t_1), up to translating [0,H] into either of the intervals [-H_2,-H_1-1] or [H_1+1,H_2]. Indeed, the choice t_1=ℓ^4 and the assumptions in <ref> yield that
(ρ+3ε/2)(t_1^-1log t_1)^1/2ℓ ≤ (K+2)ℓ^-1/2 ≤ (K+2)ε/√(c_9) ≤ c_2ε/4,
where we choose c_9 large enough depending on c_2 and K.
During the time interval [0,t_1], we apply Lemma <ref>, which we now know is in force, simultaneously on [-H_2,-H_1-1] and [H_1+1,H_2]. This is possible owing to the locality property stated as part of Lemma <ref> (see below (<ref>)), since the couplings involved rely independently on the particles of η_0([-H_2,-H_1-1]) and η'_0([-H_2,-H_1-1]), and those of η_0([H_1+1,H_2]) and η'_0([H_1+1,H_2]) respectively. Moreover, by suitable extension of this coupling we can also couple, during the interval [0,t_1], the particles of η_0([-H_1,H_1]) and η'_0([-H_1,H_1]) in the following way: match injectively each particle of η'_0(x) to a particle of η_0(x), for all x∈ [-H_1,H_1], and impose that matched particles follow the same trajectory (note that this is precisely the coupling underlying the statement of Lemma <ref>).
From the construction in the previous paragraph, the four groups of particles η_0(ℤ∖ [-H_2,H_2]), η_0([-H_2,-H_1-1]), η_0([-H_1,H_1]) and η_0([H_1+1,H_2]) evolve independently under ℚ, and within each group the particles themselves follow independent lazy simple random walks. Hence under ℚ, during [0,t_1], we have indeed η∼𝐏^η_0, and by a similar argument η'∼𝐏^η'_0.
Now consider the events 𝒜_1 = {∀ x∈ [-H_1+t_1,H_1-t_1], ∀ s∈ [0,t_1], η_s(x)≥η'_s(x)} and 𝒜_2 = {∀ x∈ [-H_2+t_1,-H_1-1-t_1]∪ [H_1+1+t_1,H_2-t_1], η_t_1(x)≥η'_t_1(x)}, declared under ℚ. By Lemmas <ref> and <ref>, respectively, we have that
ℚ(𝒜_1)=1 and ℚ(𝒜_2) ≥ 1 - 8(H_2-H_1)exp(-c_3(ρ+ε)^-1ε^2ℓ^2/4).
Then, define (still under ℚ) two further events
𝒜_3 = {for all intervals I⊆ [-H_2+2t_1,H_2-2t_1] of length ℓ^2: η'_t_1(I) ≤ (ρ+2ε/5)ℓ^2}
𝒜_4 = {for all intervals I⊆ [-H_2+2t_1,H_2-2t_1] of length ℓ^2: η_t_1(I) ≥ (ρ+3ε/5)ℓ^2}.
We apply condition <ref>, which holds by Lemma <ref> to η' with (ρ,ε,ℓ,ℓ',H,t)=(ρ+ε/5,ε/20,ℓ,ℓ^2,2H_2,t_1), and to η with (ρ,ε,ℓ,ℓ',H,t)=(ρ+3ε/4,ε/20,ℓ,ℓ^2,2H_2,t_1). A straightforward computation proves that the necessary conditions are implied by the assumptions in <ref>. This yields
ℚ(𝒜_3 ∩𝒜_4) ≥ 1 - 16H_2exp(-c_6ε^2ℓ^2/400).
Step 2: since 𝒜_1 holds with full ℚ-measure, we can, as in Lemma <ref>, pair injectively each particle of η'_t_1([-H_1+t_1,H_1-t_1]) to one of η_t_1 on the same site, and impose by suitable extension of ℚ that paired particles follow the same trajectory during [t_1,t], all pairs being together independent. We obtain in this way that
ℚ(∀ s∈ [0,t], η_s|_[-H_1+t, H_1-t] ≽ η'_s|_[-H_1+t, H_1-t]) = 1.
To complete the construction of ℚ it remains to describe the trajectories of the particles of η'_t_1(ℤ∖ [-H_1+t_1,H_1-t_1]) and η_t_1(ℤ∖ [-H_1+t_1,H_1-t_1]), and those of η_t_1([-H_1+t_1,H_1-t_1]) that were not paired. On (𝒜_1 ∩𝒜_2∩𝒜_3 ∩𝒜_4)^c, let all these particles follow independent lazy simple random walks during [t_1,t], independently from the particles paired at (<ref>). On (𝒜_1 ∩𝒜_2∩𝒜_3 ∩𝒜_4) ⊆ (𝒜_1∩𝒜_2), note that η_t_1(x)≥η'_t_1(x) for all x∈ [-H_2+t_1, H_2-t_1]∖ I_1, where
I_1=[-H_1-t_1,-H_1+t_1-1] ∪ [H_1-t_1+1,H_1+t_1].
We pair injectively each particle of η'_t_1([-H_2+t_1, H_2-t_1]∖ I_1) to one of η_t_1 on the same site, and impose by suitable extension of ℚ that paired particles follow the same trajectory during [t_1,t], all pairs being together independent.
For convenience, we introduce
η̃_t_1(x) = η_t_1(x) - η'_t_1(x)𝟙_{x ∈ [-H_2+t_1, H_2-t_1]∖ I_1}
η̃'_t_1(x) = η'_t_1(x)𝟙_{x ∈ I_1},
which denote the numbers of particles of η_t_1 and η'_t_1 at position x∈ℤ whose trajectories have not yet been described. It thus remains to cover the particles of η̃' by those of η̃ by time t. Let us first explain the reasoning behind this covering. Note that the two intervals making up I_1 escape our couplings during [0,t_1], so that we can only guarantee that the empirical density of η̃'_t_1 is lower than ρ+2ε/5, see 𝒜_3. Fortunately, 𝒜_3 and 𝒜_4 ensure that the empirical density of η̃_t_1 is at least ε/5 on [-H_2+t_1, H_2-t_1]∖ I_1, which is much wider than I_1. We thus apply Lemma <ref> with a mesh much larger than |I_1| = 4t_1 in order to 'dilute' the particles of η̃'_t_1. By making this coupling independent of the other particles of η,η' previously paired, we will ensure that both η and η' have the correct PCRW marginals.
We now formalise this coupling. By choosing c_9 large enough, one can check, via a straightforward computation, that the assumptions in <ref> imply the necessary conditions to apply Lemma <ref> to η̃ and η̃' on [-H_2+2t_1,H_2-2t_1] with (H,ℓ,t,ρ,ε) = (2H_2-4t_1,ℓ^5,t-t_1,ε/40,ε/50). Hence by Lemma <ref>, we can extend ℚ such that
the trajectories of the particles of η̃_t_1+· and
η̃'_t_1+· are independent of those paired at (<ref>),
and such that (using that ε < 1), on the event 𝒜_1 ∩𝒜_2∩𝒜_3 ∩𝒜_4,
ℚ(η_t|_[-H_2+2t,H_2-2t] ≽ η'_t|_[-H_2+2t,H_2-2t] | (η_s, η'_s)_s ∈ [0, t_1])
≥ 1 - 8H_2 exp(-c_3ε^2√(t-t_1)/10^4).
Let us check that Step 2 yields the marginals η∼𝐏^η_t_1 and η'∼𝐏^η'_t_1 during [t_1,t] (we have already seen in Step 1 that η∼𝐏^η_0 and η'∼𝐏^η'_0 during [0,t_1], so that the Markov property (<ref>) ensures that η∼𝐏^η_0 and η'∼𝐏^η'_0 during [0,t]). Remark that the events 𝒜_i, 1≤ i≤ 4, are measurable w.r.t. the evolution of η and η' until time t_1. On (𝒜_1 ∩𝒜_2∩𝒜_3 ∩𝒜_4)^c, by (<ref>) and the paragraph below (<ref>), it is clear that all particles of η_t_1 follow independent lazy simple random walks, and that the same is true for η'_t_1. On 𝒜_1 ∩𝒜_2∩𝒜_3 ∩𝒜_4, (<ref>) and Lemma <ref> ensure that this is also the case. Therefore, we have indeed that η∼𝐏^η_t_1 and η'∼𝐏^η'_t_1.
Finally, we explain how to derive <ref> from our construction. Note that (<ref>) immediately follows from (<ref>) (with full ℚ-probability). As for (<ref>), we have
Q def.= ℚ(η_t|_[-H_2+6t,H_2-6t] ≽ η'_t|_[-H_2+6t,H_2-6t])
≥ ℚ(η_t|_[-H_2+2t,H_2-2t] ≽ η'_t|_[-H_2+2t,H_2-2t])
by (<ref>) and (<ref>). Thus, by combining (<ref>), (<ref>), (<ref>) and (<ref>), we get
Q ≥ 1 - ℚ(𝒜_2^c∪𝒜_3^c∪𝒜_4^c) - 8H_2 exp(-c_3ε^2√(t-t_1)/10^4)
≥ 1 - 32H_2exp(-c_6ε^2ℓ/64)
≥ 1 - 5c_8ℓ^4H_2exp(-ε^2ℓ/(2c_8)),
choosing c_9 and c_8 large enough (w.r.t. K, c_6 and c_3).
This yields (<ref>) and concludes the proof, since (<ref>) is implied by Lemma <ref>.
Condition <ref> (with ν=1) holds for PCRW, with the pre-factor 48 in the constraint on k now replaced by 24(K+1).
The modification of the pre-factor appearing in the constraint on k is inconsequential for our arguments (and consistent with SEP where one can afford to choose K=1 since J=(0,1)). Indeed <ref> is only used at (<ref>), where this modified condition on k clearly holds for L large enough (K being fixed).
We adapt the proof of Lemma <ref>.
Let H,ℓ,k≥ 1, ρ∈ (0,K), ε∈ (0,K-ρ) and η_0, η'_0 be such that the conditions of <ref> hold. We can thus pair injectively each particle of η'_0([-H,H]) to one particle of η_0([-H,H]) located at the same position. Moreover, by assumption there is at least one particle of η_0([0,ℓ]) that is not paired. For s≥ 0, denote by Z_s the position of this particle at time s.
Let ℚ be a coupling of η and η' during [0,ℓ] such that paired particles perform the same lazy simple random walk (independently from all other pairs), and all other particles of η_0 and η'_0 follow independent lazy simple random walks (which yields the marginals η∼𝐏^η_0 and η'∼𝐏^η'_0). As in Lemma <ref>, we get that
ℚ(∀ s∈ [0,ℓ], η_s|_[-H+ℓ, H-ℓ] ≽ η'_s|_[-H+ℓ, H-ℓ]) = 1
and (<ref>) follows.
It remains to show (<ref>). Assume for convenience that ℓ is even (the case ℓ odd being treated in essentially the same way). Similarly as in the proof of Lemma <ref>, we get that
ℚ(η_ℓ( x )>0 , η'_ℓ( x )=0)≥ℚ( Z_ℓ=x , η'_ℓ(x)= 0),
for x=0,1. Note that by our construction, the two events {Z_ℓ=x } and {η'_ℓ(x)= 0} are independent. Clearly, we have that for both x=0,1,
ℚ(Z_ℓ=x) ≥ 4^-ℓ.
Note that there is no parity issue in (<ref>) because Z performs a lazy random walk.
Moreover, since η'_0([-3ℓ+1, 3ℓ])≤ 6(ρ+1)ℓ by assumption and since no particle outside [-3ℓ+1, 3ℓ] can reach 0 by time ℓ, it follows that at time ℓ-1, there are at most 6(ρ+1)ℓ particles of η'_0 in [-1,1], and each of them has probability at least 1/2 not to be at x ∈{0,1} at time ℓ. Hence
ℚ( η'_ℓ(x)= 0) ≥ 2^-6(ρ+1)ℓ.
Putting together (<ref>), (<ref>) and (<ref>), we get that the probability on the left of (<ref>) is bounded from below by (2e)^-6(ρ+1)ℓ, whence (<ref>).
Bootstrap percolation on rhombus tilings
S. Esnay, V. Lutfalla, G. Theyssier
§ ABSTRACT
2-bootstrap percolation on a graph is a diffusion process where a vertex gets infected whenever it has at least 2 infected neighbours, and then stays infected forever.
It has been much studied on the infinite grid for random Bernoulli initial configurations, starting from the seminal result of van Enter that establishes that the entire grid gets almost surely entirely infected for any non-trivial initial probability of infection.
In this paper, we generalize this result to any adjacency graph of any rhombus tiling of the plane, including aperiodic ones like Penrose tilings. We actually show almost sure infection of the entire graph for a larger class of measures than non-trivial Bernoulli ones.
Our proof strategy combines a geometric toolkit for infected clusters based on chain-convexity, and uniform probabilistic bounds on particular geometric patterns that play the role of 0-1 laws or ergodicity, which are not available in our setting due to the lack of symmetry of the graphs considered.
§ INTRODUCTION
Bootstrap percolation was first introduced by <cit.> as a simplified model of a magnetic system progressively losing its magnetic order[Due to initial non-magnetic impurities and competition between exchange interactions (which tend to align atomic spins) and crystal field interactions (which tend to force atoms into a singlet state, thereby suppressing magnetic moments).].
When abstracted away from this physical modeling, it can be seen as a simple growing or diffusion process on a graph: at each time step, any vertex with m or more infected neighbours becomes infected, and infected vertices remain infected forever.
As in classical percolation theory, this system is usually studied from a random initial condition (each vertex is initially infected with some probability p), and the main question is to determine whether the entire graph will be infected almost surely (depending on p).
This process was first studied on the two-dimensional grid, where the case m=2 is the most interesting one.
<cit.> proved in that case that whenever p>0, the entire graph is infected almost surely.
Then <cit.> and later <cit.> obtained sharp thresholds on p for finite grids as a function of their size.
Still on the grid (in dimension two or higher), these seminal results were considerably generalized to a large class of monotone cellular automata called U-bootstrap and introduced in <cit.> (see <cit.> for a detailed survey of this family and <cit.> for a taste of the most recent results).
On the other hand, bootstrap percolation was studied on many different graphs: finite ones like in <cit.>, tilings by regular polygons <cit.> where critical probabilities are trivial (0 or 1) like on the infinite grid, or hyperbolic lattices <cit.> or regular trees and Cayley graphs of non-amenable groups <cit.> where non-trivial critical probabilities were established.
A common aspect of these works (and of most of the literature, to our knowledge) is that the graphs considered are highly symmetric, which gives, on the one hand, powerful probabilistic tools for tackling the problem (0-1 laws, ergodicity, etc.) and, on the other hand, a nicer and more uniform geometry of infected clusters.
There is a general observation (or belief) in statistical mechanics that many features of studied systems only depend on a few parameters and should be independent of the details of the system.
This is often referred to as universality, but to our knowledge there is no rigorous mathematical definition of the concept. More modestly, and back to the precise context of bootstrap percolation, one could ask how robust the seminal result of <cit.> is when the underlying graph is modified. First, as mentioned above, there is a dramatic change of behavior when going from the grid to trees, Cayley graphs of non-amenable groups or hyperbolic graphs (non-trivial critical probabilities arise).
But what is special about the regular grid ℤ^2 among planar graphs that are quasi-isometric to it?
What is the importance of the symmetry (in particular translation invariance) that allows for concise proofs?
In this paper, we consider boostrap percolation on any adjacency graph of any rhombus tiling of the plane with finitely many tile types up to translation, including the regular grid but also highly non-symmetric ones like Penrose tilings.
Our main result is a generalization of the seminal result of <cit.> to this entire class of graphs, precisely: on the adjacency graph of any rhombus tiling, for any non-zero initial probability of infection, the entire graph will get infected almost surely.
Classical percolation was already studied on Penrose tilings in <cit.>, where the lack of translation invariance and symmetry is tackled by introducing an ergodic measure that averages over all possible Penrose tilings. Our approach is different and gives results for bootstrap percolation on any single tiling without averaging. We generalize van Enter's argument to this setting by combining a geometrical analysis of blocked infected clusters based on chain-convexity (such clusters are simply rectangles in the case of the grid) and uniform probability upper bounds on blocked-cluster events that bypass the lack of translation invariance (whereas, thanks to ergodicity, in the case of the grid it is enough to bound the probability of a blocked infected cluster containing the origin).
Contents of the paper. In section <ref>, we introduce bootstrap percolation cellular automata on any graph, the main problem, and some basic probabilistic notions and ingredients to be used later. In section <ref>, we introduce rhombus tilings and some of their geometric features. The two objects introduced separately meet in section <ref>, where we establish our main result in two steps: first, by a detailed analysis of infection clusters based on chain-convexity (sub-section <ref>), and then, by a probabilistic argument applied to specific geometric objects previously identified (sub-section <ref>).
In the appendices we present two examples that limit the generalization of our result to other graphs and percolation processes.
In Appendix <ref> we show that 2-neighbour percolation does not have a critical threshold (or 0-1) behaviour on the adjacency graphs of arbitrary quadrilateral tilings.
In Appendix <ref> we show that arbitrary percolation processes (here a variant of oriented bootstrap percolation) do not have a critical threshold behaviour on the adjacency graphs of arbitrary rhombus tilings.
§ BOOTSTRAP CELLULAR AUTOMATON AND PROBABILITY MEASURES
In this section, we introduce the bootstrap cellular automaton on an arbitrary abstract graph and present basic facts around probability measures (to be used later in Section <ref>).
Given a graph G=(V,E), a configuration is a map giving a value 0 or 1 to each vertex.
We denote by X = {0,1}^V the set of configurations.
If c∈ X and v∈ V, we use the notations c(v) and c_v interchangeably.
The set X can be endowed with the pro-discrete topology (product of the discrete topology over {0,1}), which is generated by the collection of cylinder sets for D⊆ V a finite domain and P:D→{0,1} a partial configuration of domain D:
[P] = {c∈ X |∀ t∈ D, c(t)=P(t)}.
X endowed with this topology is compact.
We will often use the notation shortcut q^D for q∈{0,1} and D⊆ V to denote the constant partial configuration equal to q on domain D.
Therefore [q^D] denotes the set of configurations in state q on domain D.
In this paper, we are interested in a dynamical process acting on configurations which is a particular cellular automaton often called bootstrap percolation <cit.>.
The m-neighbour contamination or m-bootstrap cellular automaton F:X→ X for some integer m is defined as follows:
F(c)_v =
1 if c_v=1 or |{v' | (v',v)∈ E and c_v'=1}|≥ m,
0 else,
for any configuration c∈ X and any vertex v∈ V.
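The map F is immediate to implement on a finite portion of any adjacency graph. The following Python sketch is ours and purely illustrative (the dictionary-based representation is an assumption, not notation from the paper): it computes one synchronous step of F and the limit configuration c^∞, which on a finite graph is reached after finitely many steps.

def bootstrap_step(c, adj, m=2):
    # One synchronous step of m-neighbour contamination:
    # a vertex becomes 1 if it already is, or has >= m neighbours at 1.
    return {v: 1 if c[v] == 1 or sum(c[u] for u in adj[v]) >= m else 0
            for v in c}

def limit_configuration(c, adj, m=2):
    # Iterate to the fixed point c_infinity (finite graphs only).
    while True:
        nxt = bootstrap_step(c, adj, m)
        if nxt == c:
            return c
        c = nxt

# Example: a 4-cycle with two opposite infected vertices invades entirely.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(limit_configuration({0: 1, 1: 0, 2: 1, 3: 0}, adj))  # all ones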
This cellular automaton is both freezing and monotone (see <cit.>), which means that:
* F(c)_v ≥ c_v for any c∈ X and v∈ V (freezing),
* F(c')_v ≥ F(c)_v for all v∈ V whenever c'_v ≥ c_v for all v∈ V (monotone).
The freezing property lets us see F as a growing process: starting from any initial configuration seen as a set of selected vertices of value 1, F can only make this set increase with time.
In particular, for any c∈ X and any v∈ V, the sequence (F^n(c)_v)_n is ultimately constant, with limit value c^∞_v. Equivalently, the sequence of configurations (F^n(c))_n always converges and its limit is precisely c^∞.
We will focus on the set of initial configurations that invade the entire graph during this growing process, formally:
I = {c∈ X |∀ v∈ V, ∃ n∈ℕ, F^n(c)_v = 1}.
Clearly not all configurations lie in I.
We want to understand how big I is.
Topologically, it is a G_δ set (an intersection over v∈ V of the sets ∪_n{c | F^n(c)_v=1}, which are open), but following motivations from statistical physics and percolation theory, we will study I through probability measures.
A Borel probability measure is uniquely determined by its value on cylinders (by Carathéodory-Fréchet extension theorem, see <cit.>).
We mainly consider Bernoulli measures μ_p which are product measures determined by a parameter p∈[0,1] giving the “probability of having a 1 at one vertex”, as follows:
μ_p([P]) = ∏_v∈ D, P(v)=1 p ×∏_v∈ D, P(v)=0(1-p).
The monotonicity property of F mentioned above implies that the measure of I under Bernoulli measures can only increase with p: μ_p(I) ≤ μ_p'(I) whenever p≤ p' (this can be shown by a straightforward coupling argument).
Since μ_0(I)=0 and μ_1(I)=1, there are two critical values of the parameter p to consider: p_c^0 = sup{p |μ_p(I)=0} and p_c^1 = inf{p |μ_p(I)=1}, which verify p_c^0 ≤ p_c^1.
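On finite portions of a graph one can probe these critical values empirically. The sketch below is ours and purely illustrative: the grid size, the free boundary condition and the sample count are arbitrary choices. It estimates the probability that a Bernoulli(p) configuration invades an n×n patch of the grid under 2-neighbour contamination.

import random

def invades(n, p, m=2, rng=random):
    # Bernoulli(p) initial configuration on an n x n grid patch (free boundary).
    g = [[1 if rng.random() < p else 0 for _ in range(n)] for _ in range(n)]
    changed = True
    while changed:                        # iterate F to its fixed point
        changed = False
        for i in range(n):
            for j in range(n):
                if g[i][j] == 0:
                    k = sum(g[x][y] for x, y in ((i-1,j),(i+1,j),(i,j-1),(i,j+1))
                            if 0 <= x < n and 0 <= y < n)
                    if k >= m:
                        g[i][j], changed = 1, True
    return all(all(row) for row in g)

trials = 200
print(sum(invades(40, 0.08) for _ in range(trials)) / trials)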
In many proofs, an event is shown to be of probability either 0 or 1 for a given measure μ using 0-1 laws.
One of them is the Kolmogorov 0-1 law; but it applies to tail events, which I is not: it might depend on a finite portion of the configuration.
Another of them is that, if the graph G has enough symmetry, Bernoulli measures follow a 0-1 law on Borel sets of configurations that preserve these symmetries.
More precisely, denoting Γ the group of automorphisms of graph G, if the action of Γ on G has an infinite orbit (intuitively meaning that infinitely many vertices of G are indistinguishable) and if G is locally finite, then, for any Bernoulli measure μ and any set X of configurations which is invariant under Γ (i.e., c∈ X and ϕ∈Γ implies c∘ϕ^-1∈ X), we have μ(X)∈{0,1} (see <cit.>).
Whatever the graph G, the set I is always invariant under Γ, simply because F(c)∘ϕ^-1 = F(c∘ϕ^-1) for any configuration c, and c∈ I is determined by a uniform constraint on all vertices: ∀ v∈ V, c^∞(v)=1.
Consequently, for this 0-1 law to hold on I, the only requirement is for the graph to have enough symmetry. It is the case in particular for Cayley graphs of infinite groups (typically ℤ^2), and is often expressed as the fact that the action of translations with a Bernoulli measure is ergodic <cit.>.
Following remark <ref>, in the case of a graph with enough symmetry, μ_p(I)∈{0,1} for all p, so that p_c^0 = p_c^1.
We will however consider graphs that have few or no symmetries, and there is a priori no reason to expect this 0-1 law to hold (see the counter-example in Appendix <ref> and Proposition <ref> therein).
Furthermore, this 0-1 law uses Bernoulli measures: percolation theory often focuses on these measures to study critical values of parameter p.
Here, we will actually consider a larger class of measures in Section <ref> that are not necessarily product measures but are sufficiently well-behaved.
Our goal here is not to look for abstract generality, but rather to make explicit and clear the requirements we use in the probabilistic arguments of our main result (Section <ref>).
There are three such requirements that we detail below: bounded correlations (Markov property), non-vanishing probability of patches of 1 of any fixed size, and positive correlation of upward-closed sets.
Fix some integer k∈. We say that a measure μ is k-Markov if, for any finite D⊆ V and any set X⊆ V which is at distance at least k from D in graph G and any patterns u:D→{0,1} and P:X→{0,1}, the following holds:
μ([u]∩ [P]) = μ([u])μ([P]).
In addition, we say a measure is non-vanishing if for all r there is some constant α>0 such that for all v∈ V, μ([1^B(v,r)])≥α where B(v,r) is the ball of radius r centered in v in graph G.
Finally, a set X of configurations is upward-closed if, whenever x∈ X and x_v≤ y_v for all v∈ V, then y∈ X.
A measure μ is called MNPC (Markov, Non-Vanishing, Positively Correlated) if it is k-Markov for some k, non-vanishing, and any pair X,Y of upward-closed Borel sets is positively correlated, i.e. μ(X∩ Y)≥μ(X)μ(Y).
Any non-trivial Bernoulli measure on a bounded-degree graph is MNPC.
Any such measure is clearly 1-Markov. It is non-vanishing on any graph of bounded degree because the cardinal of balls of radius r is uniformly bounded.
Finally, Bernoulli measures are positively correlated on upward-closed events, a fundamental fact often referred to as Harris inequality (see <cit.> for a complete proof).
One of the basic properties of MNPC measures is that imposing a large patch of 0s at a fixed position has a probability decreasing with the patch size, which can be uniformly exponentially upper-bounded.
If G is of bounded degree and μ is an MNPC measure, then there is some β with 0<β<1 such that, for any D⊆ V with |D|=n, it holds that μ([0^D])≤β^n.
Let k be such that μ is k-Markov, and α>0 be such that μ([1^{v}])≥α for all v (α exists because μ is non-vanishing).
If Δ is a bound on the degree of G then any ball of radius k in G has cardinality at most Δ^k.
Therefore, one can choose at least N=⌊n/Δ^k⌋ vertices v_1,…, v_N in D which are pairwise separated by distance at least k.
By the k-Markov property we get
μ([0^D])≤∏_1≤ i≤ Nμ([0^{v_i}])≤ (1-α)^N.
Taking β = (1-α)^1/(2Δ^k) proves the lemma.
§ RHOMBUS TILINGS
We call tiling, denoted by 𝒯, a countable set of tiles that covers the Euclidean plane ℝ^2 without overlap. That is, 𝒯 = {t_i, i∈ℕ} is a tiling when ⋃_i∈ℕ t_i = ℝ^2 and, for any i≠ j, the interiors of t_i and t_j are disjoint.
In all that follows, we consider the case of rhombus tilings, where there are finitely many tiles up to translation, all the tiles are rhombuses and the tiling is edge-to-edge, i.e., any two tiles either intersect at a single common vertex, along a full common edge, or not at all.
Throughout the article we use the famous example of Penrose rhombus tilings <cit.>, see Fig. <ref>.
In the case of edge-to-edge tilings, the condition that there are finitely many tiles up to translation is equivalent to finite local complexity (FLC) <cit.> that is: for any given size there are finitely many patches of that size.
We say that two tiles t and t' are adjacent, denoted by t ∼ t', when they share a full edge, that is, when there exists an edge e such that t ∩ t' = e.
This notion, also called edge-adjacency, is the natural notion of adjacency on edge-to-edge tilings.
However, we also define a weaker notion of vertex-adjacency.
We say that two tiles t and t' are vertex-adjacent, denoted by t ∼_v t', when they are distinct and share at least one vertex, i.e. t ∩ t' ≠∅ and t ≠ t'.
Given a tiling 𝒯 and a tile t∈𝒯, we define the neighbourhood of t, denoted by N(t), as the set of tiles which are adjacent to t, i.e., N(t) := {t'∈𝒯 | t ∼ t'}.
We also define the weaker vertex-neighbourhood of t, N_v(t) := {t'∈𝒯 | t ∼_v t'}.
We define similar notions for a set of tiles S, that is, N(S) := {t'∈𝒯∖ S |∃ t∈ S, t ∼ t'} and N_v(S) := {t'∈𝒯∖ S |∃ t∈ S, t ∼_v t'}. They are respectively the sets of tiles adjacent and vertex-adjacent to the set S.
Note that all these notions are not neighbourhoods in the topological sense, notably since they do not contain the original set. However, this denomination, which is ultimately about adjacent tiles, is widely used and understood in tilings theory.
We call patch a set of adjacent tiles in a tiling. We also sometimes consider vertex-patches, which are sets of vertex-adjacent tiles.
We consider only finite patches unless specified otherwise.
The key structure in rhombus tilings is the chain, which generalizes the rows and columns of the square grid.
Given a rhombus tiling 𝒯.
Given an edge direction e⃗, we say that two tiles t and t' are e⃗-adjacent, denoted by t ∼_e⃗ t', when they share an edge of direction e⃗.
Given an edge direction e⃗ and a normal vector n⃗ (orthogonal to e⃗), we say that a tile t' is e⃗-adjacent to t in direction n⃗ when t ∼_e⃗ t' and ⟨t⃗t⃗'⃗ | n⃗⟩ > 0, where t⃗t⃗'⃗ is the vector from the barycenter of t to the barycenter of t' and ⟨ | ⟩ denotes the scalar product.
We call chain of edge direction e⃗ a bi-infinite patch 𝒞 = (t_i)_i∈ℤ such that there exists n⃗ orthogonal to e⃗ such that for any i, t_i+1 is e⃗-adjacent to t_i in direction n⃗.
We call half-chain 𝒞^+ of edge direction e⃗, orientation n⃗ and starting tile t the infinite patch 𝒞^+ = (t_i)_i∈ℕ such that t_0 = t and for any i, t_i+1 is e⃗-adjacent to t_i in direction n⃗.
We call chain segment a finite connected subset of a chain. For a chain 𝒞 and i < j, we write 𝒞_i^j to denote the subset {t_i, …, t_j}.
We denote 𝒞 ≡ 𝒞' when two chains are identical up to a shift of index and/or multiplication of one's indices by -1.
Given a rhombus tiling 𝒯 and two chains 𝒞 of direction e⃗ and 𝒞' of direction e⃗'⃗.
The following hold:
* if 𝒞 ≢ 𝒞' then 𝒞 and 𝒞' intersect at most once;
* if 𝒞 ≢ 𝒞' and 𝒞∩𝒞' ≠∅ then the intersection tile t has edge directions e⃗ and e⃗'⃗;
* if e⃗ = ±e⃗'⃗ then either 𝒞 ≡ 𝒞' or 𝒞∩𝒞' = ∅.
Note however that, in an arbitrary rhombus tiling, non-intersecting chains might have different edge directions.
Let 𝒯 be a rhombus tiling with finitely many edge directions.
There exists a constant θ>0 such that any chain is θ-uniformly monotonous, that is, if 𝒞=(t_i)_i∈ℤ has direction e⃗, there exists a unit normal vector n⃗ orthogonal to e⃗ such that for any i∈ℤ, ⟨t⃗_⃗i⃗t⃗_⃗i⃗+⃗1⃗ | n⃗⟩ ≥ θ, where t⃗_⃗i⃗t⃗_⃗i⃗+⃗1⃗ is the vector from the barycenter of t_i to the barycenter of t_i+1.
Let 𝒯 be a rhombus tiling with d edge directions {e⃗_⃗0⃗, …, e⃗_⃗d⃗-⃗1⃗}.
Denote e⃗^⊥ the unit vector orthogonal to e⃗ in the positive orientation.
Let θ := min_0≤ i,j < d, i≠ j |⟨e⃗_⃗i⃗ | e⃗_⃗j⃗^⊥⟩|.
Any chain 𝒞⊂𝒯 is θ-uniformly monotonous.
Indeed, let 𝒞 be a chain of edge direction e⃗.
Let t and t' be two consecutive tiles in 𝒞.
There exist two directions e⃗_⃗i⃗ and e⃗_⃗j⃗ (possibly the same) such that t has edge directions e⃗ and e⃗_⃗i⃗ and t' has edge directions e⃗_⃗j⃗ and e⃗.
Without loss of generality (changing the choice of orientations of the directions), assume ⟨e⃗_⃗i⃗|e⃗^⊥⟩ > 0 and ⟨e⃗_⃗j⃗ | e⃗^⊥⟩ > 0.
Since t and t' are parallelograms and adjacent along an e⃗ edge, this means that t⃗t⃗'⃗ = 1/2(e⃗_⃗i⃗ + e⃗_⃗j⃗).
So ⟨t⃗t⃗'⃗ | e⃗^⊥⟩ = 1/2(⟨e⃗_⃗i⃗|e⃗^⊥⟩ + ⟨e⃗_⃗j⃗|e⃗^⊥⟩) ≥ θ.
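As a concrete illustration (ours, not part of the paper's argument): Penrose rhombus tilings have the five edge directions e⃗_k = (cos(2πk/5), sin(2πk/5)), and the constant θ of the proof above can be computed numerically.

import math

dirs = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
        for k in range(5)]

def perp(v):
    # unit vector orthogonal to v, positive orientation
    return (-v[1], v[0])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

theta = min(abs(dot(dirs[i], perp(dirs[j])))
            for i in range(5) for j in range(5) if i != j)
print(theta)  # sin(pi/5) ~ 0.5878: every Penrose chain advances by at least this much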
The keen reader might have already noticed that the combinatorial structure of chains only uses the fact that rhombuses have two pairs of opposite parallel edges, that is, they are parallelograms.
However for simplicity of redaction we chose to only mention rhombus tilings as any edge-to-edge parallelogram tiling is combinatorially equivalent to an edge-to-edge rhombus tiling, meaning they have the same adjacencies.
Indeed the transformation consisting in rescaling all edge directions so they have the same length transforms an edge-to-edge parallelogram tiling to a combinatorially equivalent edge-to-edge rhombus tiling.
Note however that, by transforming a tileset in this manner we might allow more tilings as collinear edge directions merge in this process.
§ CRITICAL PROBABILITY FOR 2-BOOTSTRAP PERCOLATION ON RHOMBUS TILINGS
The 1-bootstrap dynamics is not interesting on connected graphs (I contains all configurations except one). Moreover, on rhombus tilings, the 3-bootstrap dynamics always admits finite obstacles and therefore has a trivial critical probability (p_c^0=p_c^1=1): indeed, considering any vertex of the tiling and assuming that all tiles sharing this vertex are in state 0 (not infected), they will stay in state 0 forever, because each of them has at most 2 neighbours in state 1 at any moment.
In this section we investigate 2-neighbour contamination on rhombus tilings, and we prove almost sure invasion on any rhombus tiling with finitely many edge directions, for any MNPC measure μ.
In particular, it implies almost sure invasion for any non-trivial Bernoulli measure so that the critical percolation threshold of 2-neighbour contamination on any rhombus tiling with finitely many edge directions is p_c^0=p_c^1=0.
Formally, to fit the formalism of Section <ref>, we consider the adjacency graph of the rhombus tiling.
The adjacency graph, or dual graph, G_𝒯 of the rhombus tiling 𝒯 is G_𝒯 = (𝒯, E_𝒯) where (t,t')∈ E_𝒯 ⇔ t ∼ t'.
However, for simplicity, we identify 𝒯 and G_𝒯 in this section.
In Section <ref>, we study the geometry and combinatorics of clusters (stable contaminated patterns) and prove that any finite cluster is enclosed in a “wall” which is a polygon of a bounded number of chain segments, generalising the fact that in ℤ^2 clusters are enclosed in a rectangle that is in particular a polygon of 4 chain segments.
We also prove that infinite clusters are bounded by some chain path including an infinite half-chain.
In Section <ref> we use the geometrical and combinatorial results on clusters to prove that invasion is almost sure for any MNPC measure.
The key idea is counting the possible finite “walls” of length n around a tile and bounding their number by a polynomial in n.
From this we deduce, for any MNPC measure μ:
* a uniform bound λ<1 on the measure of the event E_t of a tile t being enclosed in a finite “wall”, and
* a uniform approximation of E_t by the tail events E_t,>n of the tile t being enclosed in a finite “wall” of length more than n.
Remarking that the head events E_t,≤ n of being enclosed in a wall of length at most n are independent for sufficiently far-away tiles for Markov measures, we obtain almost-independence of the far-away events E_t, from which we conclude almost sure invasion for any MNPC measure.
§.§ Geometrical and combinatorial elements
The first observation that we can make is that a chain of 0s can stop a contaminated cluster, see Fig. <ref>. Actually we prove that chains of 0s are the only possible “walls” for 2-neighbour contamination.
To formalize this we introduce the notion of chain-convexity.
A connected set of tiles S⊂𝒯 is called chain-convex when for any chain of tiles 𝒞=(χ_i)_i∈ℤ the following hold:
* {j∈ℤ | χ_j ∈ S} is an interval of ℤ;
* if there exist i<j such that χ_i and χ_j are vertex-neighbours of S (that is, {χ_i, χ_j}⊂ N_v(S)), then either 𝒞_i+1^j-1⊂ N_v(S) and no tile of 𝒞 is in S, or N_v(S)∩𝒞 = {χ_i, χ_j} and the tiles of 𝒞 that belong to S are exactly 𝒞_i+1^j-1.
Note that a chain-convex set of tiles, as it is connected and contains no hole due to item 1 of the definition, is simply connected.
If a tile χ_i from a chain is vertex-adjacent to a set S, then among χ_i+1 and χ_i-1, one of them is either vertex-adjacent to S too, or in S. Indeed, χ_i+1 shares half the vertices of χ_i with it, and χ_i-1 the other half; consequently one of them also shares a vertex with a tile of S, unless it is itself a tile of S.
This definition states that if 𝒞 is a chain and S is chain-convex, we have the following possibilities for the adjacency of 𝒞 and S:
* 𝒞 has no element that is vertex-adjacent to S (this also holds if 𝒞⊂ S);
* 𝒞 has exactly one element χ_i that is vertex-adjacent to S; then by the above remark either {χ_i+1, χ_i+2, …} or {χ_i-1, χ_i-2, …} is in S;
* 𝒞 has exactly two elements χ_i and χ_j that are vertex-adjacent to S (and in particular no other element can be adjacent to S); then χ_i and χ_j are adjacent to S and the intersection of 𝒞 with S is exactly the (possibly empty) set {χ_i+1, …, χ_j-1};
* 𝒞 has at least three elements that are vertex-adjacent to S (and notably not in S), and a finite number of them; then let χ_i and χ_j be the ones with smallest and largest indices. We cannot have the second case of point 2 of the definition above, therefore 𝒞 does not intersect S, all elements in {χ_i+1, …, χ_j-1} neighbour S, and no element of 𝒞 outside of 𝒞_i^j is adjacent or vertex-adjacent to S;
* 𝒞 has infinitely many elements that are vertex-adjacent to S. Then similarly 𝒞 does not intersect S, and either 𝒞 has a half-chain adjacent to S, or 𝒞 is adjacent to S in its entirety.
With the following two results, we prove that any vertex-connected set of tiles that is stable for 2-neighbour contamination is chain-convex, as we would want it to be. We start with a more technical lemma on vertex-connected stable sets of tiles.
Let S⊂𝒯 be a vertex-connected set of tiles that is stable for 2-neighbour contamination.
Let 𝒞⊂𝒯 be a chain of rhombuses of edge direction e⃗ such that there exist i<j∈ℤ with χ_i, χ_j ∈ N_v(S).
Denote v_i (resp. v_j) a vertex common to χ_i and S (resp. common to χ_j and S).
If no tile of 𝒞_i+1^j-1 is in S, and if v_i and v_j are in the same half-space of 𝒯∖𝒞
(that is, v_i and v_j are connected in S∩(𝒯∖𝒞), as in Fig. <ref>),
then the edge path 𝒫_𝒞 from v_i to v_j along the boundary of 𝒞 is also in the boundary of S.
Note that in the following proofs, 𝒯∖𝒞 and S∩(𝒯∖𝒞) are understood as sets of tiles.
Let S⊂𝒯 be a vertex-connected set of tiles that is stable for 2-neighbour contamination.
Let i<j∈ℤ and 𝒞=(χ_i)_i∈ℤ a chain of rhombuses of edge direction e⃗ satisfying the conditions above.
Notably, 𝒞_i+1^j-1 has no tile in S; and there are vertices v_i∈χ_i∩ S and v_j∈χ_j∩ S which are in the same half-space of 𝒯∖𝒞, and connected in S∩(𝒯∖𝒞) as depicted in Fig. <ref>.
We prove that all the rhombuses χ_k∈𝒞_i+1^j-1 are adjacent to S through a non-e⃗ edge.
More precisely, the edge path 𝒫_𝒞 from v_i to v_j along the boundary of 𝒞 (it exists as v_i and v_j are in the same half-space) is also in the boundary of S.
Denote 𝒫_S a path from v_j to v_i along the boundary of S such that S does not intersect the domain delimited by 𝒫_𝒞·𝒫_S, where · is the path concatenation. To obtain such a path, one can first take an arbitrary path p_0 along the boundary of S connecting v_j to v_i (v_j and v_i lie on the boundary of S by hypothesis, since χ_i, χ_j ∈ N_v(S)). If the (finite) domain D_0 delimited by 𝒫_𝒞· p_0 doesn't intersect S we are done; otherwise there must be a tile t∈ S∩ D_0 with a vertex on p_0. Considering the edge-connected component of S∩ D_0 containing t, we can define a new path p_1 along the boundary of S still connecting v_j to v_i but such that the domain D_1 delimited by 𝒫_𝒞· p_1 is included in D_0 but does not contain t.
Iterating this process a finite number of times (since S∩ D_0 is finite), we obtain the desired path 𝒫_S.
First, remark that if 𝒫_S is trivial, that is if v_i=v_j, then by uniform monotonicity of 𝒞 in direction e⃗^⊥ (Lemma <ref>), j=i+1 and so the conclusion holds vacuously with 𝒞_i+1^j-1 = ∅.
Now we assume that 𝒫_S is non-trivial, as depicted in Fig. <ref>.
Assume for contradiction that the finite patch of tiles D defined as the interior of the cycle 𝒫_𝒞·𝒫_S is non-empty.
We now construct a subset D'⊊ D that satisfies the same hypothesis and has to be non-empty. We consequently reach a contradiction by iterating this construction, which builds a strictly decreasing sequence of non-empty finite sets of tiles.
As D is non-empty and delimited in part by 𝒫_S, there exists a tile t∈ D∩ N(S). As S is stable for 2-neighbour contamination, S touches exactly one edge of t. So t has an edge direction e⃗_⃗t⃗ such that both e⃗_⃗t⃗-edges of t are not in 𝒫_S. Denote 𝒞' the chain of rhombuses of edge direction e⃗_⃗t⃗ passing through χ'_0:=t.
As 𝒫_𝒞·𝒫_S is a closed curve containing t, and 𝒞' is uniformly monotonous in direction e⃗_⃗t⃗^⊥, 𝒞' crosses it at least twice. As chains can cross at most once, the 𝒫_𝒞 part of the cycle cannot be crossed twice by 𝒞', and so 𝒞' crosses 𝒫_S at least once.
Therefore there exists t':=χ'_k such that 𝒞'_0^k⊂ D and χ'_k+1∈ S.
In particular, χ'_k is adjacent to S through an e⃗_⃗t⃗ edge.
Denote D' the subset of D delimited by the subpath 𝒫'_S between t and t' along the boundary of S and the path 𝒫_𝒞' between t and t' on the boundary of 𝒞', as depicted in Fig. <ref>.
D'⊊ D and D'≠∅, as otherwise t' would be adjacent to S through both an e⃗_⃗t⃗ edge and a non-e⃗_⃗t⃗ edge, contradicting the stability of S.
Additionally, the chain 𝒞' from t to t' satisfies the same hypothesis as the chain 𝒞 from χ_i to χ_j.
So we can repeat this decomposition process to reach a contradiction.
Therefore 𝒫_S=𝒫_𝒞 and all the tiles along the chain 𝒞 from v_i to v_j are adjacent to S through a non-e⃗ edge.
A direct consequence of that lemma is that if a chain 𝒞 (of direction e⃗) touches a connected set S of tiles that is stable for 2-neighbour contamination through two vertices v_i∈χ_i and v_j∈χ_j lying in two different half-spaces of 𝒯∖𝒞, then the whole portion of chain 𝒞_i+1^j-1 is in S.
This follows from the fact that v_i and v_j are connected in S by some edge path 𝒫, which crosses 𝒞 by Jordan's theorem.
Denote v_i_1,…, v_i_k the vertices of intersection of 𝒫 with 𝒞, including v_i and v_j, named from the smallest index i_1 to the biggest index i_k of the tiles in the chain that contain these vertices (up to a choice of indices if a given tile has several vertices involved in the process). Applying Lemma <ref> on the elementary intervals between consecutive intersection points, we obtain that all tiles of 𝒞_i_1+1^i_k-1 are adjacent to S through a non-e⃗ edge.
Additionally, if 𝒫 crosses 𝒞, then S also contains some tile χ_k'∈𝒞, with i_1 ≤ k' ≤ i_k. Then χ_k'+1 (or χ_k'-1 if k' = i_k) is adjacent to S both through χ_k' and a non-e⃗ edge. Hence χ_k'+1∈ S (or χ_k'-1) by stability of S for 2-neighbour contamination. With a finite number of steps, we deduce that the entire 𝒞_i_1+1^i_k-1 chain portion is in S. Notably, 𝒞_i+1^j-1 is in S.
Any vertex-connected set of tiles S⊂𝒯 that is stable for 2-neighbour contamination is chain-convex.
Let S⊂𝒯 be a vertex-connected set of tiles that is stable for 2-neighbour contamination.
We prove, using Lemma <ref>, that S is chain-convex, by checking all the items of that definition.
Let 𝒞 be a chain of rhombuses in 𝒯.
We prove that {j∈ℤ, χ_j∈ S} is an interval of ℤ.
Let χ_j∈ S and χ_j+k∈ S.
If χ_j and χ_j+k are not connected in S∩(𝒯∖𝒞), then as remarked in Remark <ref>, we have 𝒞_j+1^j+k-1⊂ S.
Otherwise, denote 𝒫 the edge path from a vertex v_j of χ_j to a vertex v_j+k of χ_j+k in the same half-space, such that v_j and v_j+k are connected in S∩(𝒯∖𝒞).
For contradiction, assume that 𝒞_j+1^j+k-1 is not entirely in S. This means it contains a sub-interval that is entirely outside S. Up to renaming, we keep the notations j and j+k for that sub-interval, taken as big as possible (so that we still have χ_j and χ_j+k in S).
By Lemma <ref>, if 𝒞_j+1^j+k-1 does not intersect S then 𝒫 is in the boundary of S. Moreover, χ_j+1∉ S. And yet, χ_j+1 has two edge-neighbours in S: its neighbour on the other side of the edge path 𝒫, and χ_j. This contradicts the stability of S for 2-neighbour contamination.
Consequently, 𝒞_j+1^j+k-1 is entirely in S and so {j∈ℤ, χ_j∈ S} is an interval of ℤ.
We now prove the second chain-convexity condition.
Let i<j be such that χ_i and χ_j are in N_v(S) (not in S).
Denote v_i (resp. v_j) the vertex of χ_i (resp. χ_j) touching S.
As explained in Remark <ref>, if v_i and v_j are not in the same half-space or not connected in S∩(𝒯∖𝒞), then 𝒞_i+1^j-1 is included in S.
We now assume that v_i and v_j are in the same half-space and connected in S∩(𝒯∖𝒞).
Denote 𝒫 the edge path from v_i to v_j along the boundary of 𝒞_i^j.
Up to decomposing 𝒞_i^j into smaller intervals whose endpoints are not in S, we assume that 𝒞_i+1^j-1 either is entirely in S or has no tile in common with S, while χ_i, χ_j∉ S.
In the first case we have 𝒞_i+1^j-1⊂𝒞∩ S and χ_i, χ_j∉ S. By the fact proved above that {j∈ℤ, χ_j∈ S} is an interval of ℤ, we obtain that S∩𝒞 = 𝒞_i+1^j-1. Additionally, the only elements of 𝒞 in N_v(S) are χ_i and χ_j. Indeed, if it were not the case, then there would exist k (assume by symmetry k>j) such that χ_k∈ N_v(S), and then by applying Lemma <ref> we obtain that the edge path 𝒫 from χ_j to χ_k is on the boundary of S. So χ_j would have two edge-neighbours in S: its neighbour on the other side of the edge path 𝒫, and χ_j-1. This would contradict the stability of S.
In the second case, 𝒞_i+1^j-1 has no tile in common with S. By applying Lemma <ref>, we obtain that the edge path 𝒫 from v_i to v_j is on the boundary of S and so 𝒞_i+1^j-1⊂ N_v(S). In that case we also have that no tile of 𝒞 is in S. Indeed, if there existed χ_k∈ S (assume by symmetry that k>j), then taking the smallest such k, we have χ_k-1∈ N_v(S), and the edge path 𝒫' from v_j to v_k (a common vertex of χ_k and χ_k-1) is in the boundary of S. And so once again χ_k-1 has two neighbours in S (χ_k and some t∈ S through 𝒫'), contradicting stability.
Overall, we have proved that S is chain-convex.
We call fortress (or finite obstacle) a non-empty finite patch R that resists outside contamination.
That is, the configuration c_R where tiles in R are 0s and all other tiles are 1s is stable for 2-neighbour contamination. See Fig. <ref> for an example of a fortress for 2-neighbour contamination with quadrilateral tiles.
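On any finite adjacency structure the fortress condition is straightforward to test: c_R is stable if and only if no tile of R has two or more edge-neighbours outside R. A minimal Python sketch (ours; adj is an assumed adjacency map, not notation from the paper):

def is_fortress(R, adj):
    # c_R (0s on R, 1s elsewhere) is stable for 2-neighbour contamination
    # iff every tile of R has at most one edge-neighbour outside R
    return len(R) > 0 and all(sum(1 for u in adj[t] if u not in R) < 2 for t in R)

On the square grid, for instance, no finite non-empty R passes this test, in line with the lemma below.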
Straightforwardly, the existence of fortresses in a structure is incompatible with a critical percolation threshold less than 1.
Our first step to prove the existence of a non-trivial critical percolation threshold is to use Lemma <ref> to prove the absence of such fortresses.
In an edge-to-edge rhombus tiling there is no fortress (finite obstacle) for 2-neighbour contamination.
Let R be a fortress and S be its complement.
By definition, S is stable for 2-neighbour contamination.
Let 𝒞 be a chain that intersects R. As R is finite, 𝒞 enters and leaves R.
That is, there exist i<j such that χ_i,χ_j∈ S and χ_i+1,χ_j-1∈ R.
Up to restricting to a sub-segment of the chain, we assume that 𝒞_i+1^j-1⊂ R.
Denote v_i and v'_i (resp. v_j and v'_j) the vertices of χ_i∩χ_i+1 (resp. χ_j-1∩χ_j), such that v_i and v_j are in the same half-space delimited by 𝒞.
Denoting 𝒫 (resp. 𝒫') the edge path from v_i to v_j (resp. from v'_i to v'_j) along 𝒞 and applying Lemma <ref> to both, we obtain that the tiles of 𝒞_i+1^j-1 are edge-adjacent to S through both 𝒫 and 𝒫', contradicting the stability of the configuration c_R.
In what follows, we define some nice notions of polygons and paths in the rhombus tiling, then use Lemma <ref> and the results above to prove that any chain-convex patch has such a well-behaved boundary.
We say that P=(p_i)_0≤ i<m is a chain polygon in 𝒯 when:
* the p_i are distinct tiles,
* consecutive p_i are edge-adjacent, that is: for any i, p_i+1 mod m ∈ N(p_i),
* the cycle has no chords, that is: for any i,j such that j≠ i±1 mod m, p_j∉ N(p_i).
Note that any chain polygon P=(p_i)_0≤ i<m can be partitioned into k chain segments for some integer k, that is: there exist chains 𝒞_1,…,𝒞_k and indices i_1,…,i_k-1 such that {p_0, …, p_i_1}⊂𝒞_1, {p_i_1, …, p_i_2}⊂𝒞_2, …, {p_i_k-1, …, p_m-1, p_0}⊂𝒞_k. Indeed, one can always choose m chain segments made of only two tiles each.
A chain polygon is called a chain n-gon if it can be partitioned into at most n chain segments.
We say that P = (p_i)_i∈ℤ is a simple path in 𝒯 when the p_i are distinct tiles, consecutive p_i are edge-adjacent and there are no chords.
We call chain polypath a simple path (p_i)_i∈ℤ that can be partitioned into k half-chains and chain segments for some integer k.
A chain polypath is called an n-polypath if it can be partitioned into at most 2 half-chains and at most n-2 chain segments.
Let S⊊𝒯 be a chain-convex set of tiles, and d be the number of edge directions in the tiling.
If S is finite, then the exterior tile boundary of S is a chain 2d-gon.
If S is infinite, then the exterior tile boundary of S contains an infinite chain 2d-polypath. More precisely, the exterior tile boundary of S is the disjoint union of one or more chain polypaths whose total number of chain segments is at most 2d.
Let S be a finite chain-convex set of tiles.
First note that we can partition N_v(S) (which is finite, as S is) as a chain polygon of chains that do not intersect S. Indeed, any two consecutive tiles in N_v(S) (when going around S in a given rotational order) are edge-adjacent, and the chain connecting them does not intersect S (by definition of chain-convexity). So grouping the tiles of N_v(S) by common edge direction, we get finitely many segments of chains.
Actually, we have at most 2d chain segments in N_v(S), because for each edge direction e⃗ there are at most two chain segments of edge direction e⃗ in N_v(S).
This follows from the uniform monotonicity of chains, see Lemma <ref>, and the fact that chains of the same edge direction cannot cross.
Indeed assume that for some edge direction e⃗, there are three distinct chains 𝒞, 𝒞' and 𝒞'' that do not intersect S but have tiles in the vertex-neighbourhood of S. Up to renaming, assume that 𝒞' is between 𝒞 and 𝒞'' as in Fig. <ref>. By uniform monotonicity, 𝒞' either crosses S (which is a contradiction) or 𝒞 or 𝒞'' (which is also a contradiction).
Let S be an infinite chain-convex set of tiles. Let P be a connected component of N_v(S); we prove that P is an infinite chain 2d-polypath.
This proof is decomposed into two parts: first we prove that P is infinite, and second we prove that it intersects non-trivially (meaning for at least two consecutive tiles) at most two chains for each chain direction.
As P is a connected component of N_v(S) and S is infinite and chain-convex, each tile t∈ P has two edge-neighbours in P, either on two opposite edges (the tiles are along a chain) if t shares an edge with S, or on two adjacent edges if t only shares a vertex with S.
This implies that if P is finite, then it is a chain polygon. This polygon induces a finite interior and an infinite exterior. As S is infinite it cannot be the interior, and hence P together with its interior forms a fortress (finite obstacle) for 2-neighbour contamination, which contradicts Lemma <ref>. So P is infinite.
As detailed in the first part of the proof, each chain that has two consecutive tiles in N_v(S) does not intersect S.
This implies, using the same proof as above, that for each chain direction there are at most two segments (or half-chains) of chain of that direction in P.
This implies that P is an infinite chain 2d-polypath (recall that a 2d-polypath has at most 2d components).
Note that in the case of infinite S, S might be an infinite strip between two non-intersecting chains of rhombuses. In that case, N_v(S) has exactly 2 connected components, which are both infinite chains.
Any finite patch of tiles P⊂𝒯 that is stable for 2-neighbour contamination is delimited by a chain polygon of at most 2d sides consisting entirely of 0s.
The exterior boundary of any infinite simply-connected set of tiles P⊊𝒯 that is stable for 2-neighbour contamination contains an infinite half-chain consisting entirely of 0s.
§.§ Probabilistic arguments
In this subsection, we fix a rhombus tiling T and an MNPC measure μ.
Let HC be the set of configurations that contain at least a half-chain of zeros, i.e.,
HC := {x∈ X |∃ a half-chain 𝒞=(χ_i)_i∈ℕ, ∀ i≥ 0, x_χ_i=0}.
μ(HC) = 0.
For a fixed half-chain 𝒞=(χ_i)_i∈ℕ, the set HC_𝒞 := {x∈ X |∀ i≥ 0, x_χ_i=0} has measure 0 because
HC_𝒞 = ⋂_i∈ℕ [0^χ_i]
and μ(⋂_0≤ i≤ n [0^χ_i]) ≤ β^n by Lemma <ref>, where 0<β<1.
Since there are only countably many half-chains (a half-chain is uniquely identified by an initial tile, a direction and an orientation), the union bound gives μ(HC)=0 because
HC = ⋃_𝒞 half-chain HC_𝒞.
Let B be the set of configurations that do not invade but do not contain any half-chain of zeros, i.e.,
B := I^∁∩ HC^∁
Recall that we call chain 2d-gon a chain polygon with at most 2d sides.
In the finitely blocking configurations B, every 1 is enclosed in a closed chain 2d-gon of 0s, i.e.,
B ⊂ {x∈ X |∀ v∈ T, [x_v=1 ⇒∃ closed chain 2d-gon of 0s around v]}
This derives from the definition of B together with Corollary <ref>.
Consider some configuration c∈ B. As the 2-neighbour contamination F is monotone and freezing, there exists a limit configuration c^∞ = lim_n→∞F^n(c), and since c∈ B⊂ I^∁, c^∞ is not the 1-uniform configuration, i.e., c^∞≠ 1^T.
Recall that because F is freezing, we have that for any tile t∈ T, c_t ≤ c^∞_t.
So we also have c^∞∈ B.
As c^∞ is stable for F and not the 1-uniform configuration, any 1 in c^∞ is in a stable contaminated cluster.
Corollary <ref> states that any stable contaminated cluster is either infinite or enclosed in a chain 2d-gon of zeros.
The case of an infinite contaminated cluster P⊊ T is impossible, as the exterior boundary of P would contain an infinite half-chain of zeros, which contradicts c^∞∈ B.
So any tile t is either in state 0 in c, in which case it is enclosed in the trivial chain 2d-gon of zeros that is itself; or in state 1 in c, in which case it is also in state 1 in c^∞ and therefore belongs to some finite stable cluster and is enclosed in a non-trivial chain 2d-gon of zeros.
Let E_t be the set of configurations x such that tile t is enclosed in a closed chain 2d-gon of 0s, including the configurations where x_t = 0, as t is then enclosed in a trivial 2d-gon.
Let E_t,≤ n be the set of configurations x such that t is enclosed in a closed chain 2d-gon of 0s of length at most n, and E_t,>n be E_t∖ E_t,≤ n.
By definition, for any t and any n, we have E_t,≤ n⊆ E_t and μ(E_t,≤ n)≤μ(E_t).
We will actually show that μ(E_t) is uniformly bounded away from 1.
The first step is the following polynomial bound on the number of 2d-gons.
Given a rhombus tiling T with finitely many edge directions,
there exists a polynomial Q such that for any tile t, the number of closed convex chain 2d-gons of length n around t is at most Q(n).
Let us first remark that a chain 2d-gon of length n is in particular a 2d-partition of [n].
Remark also that, when walking along a convex chain 2d-gon with exactly 2d sides in clockwise orientation, the order of the chain edge directions is fixed. If the 2d-gon has fewer than 2d sides, some directions are absent, but the order of the chain edge directions respects the fixed order.
This implies that, given a tile t and a starting point p_0, there are at most as many convex chain 2d-gons of length n around t starting at p_0 as there are 2d-partitions of [n]. In particular there are fewer than n^2d such chain 2d-gons.
Now remark that since p_0 is the starting point of a chain 2d-gon of length n that encloses t, the adjacency distance d_T(p_0,t) is less than n.
As the tiling T has finitely many tiles up to translation, there is a maximal diameter of tiles D and a minimal area of tiles A. So the Euclidean distance from p_0 to t is at most D(n-1), and there are at most π D^2n^2/A such starting points p_0 in T.
Overall there are at most Q(n) := (π D^2/A)n^2d+2 convex chain 2d-gons of length n around t.
This polynomial depends only on the tileset, that is, the set of tiles in T up to translation. In particular Q(n) is of degree 2d+2 where d is the number of edge directions in T, and the leading coefficient depends on the shapes of the tiles (minimal area and maximal diameter).
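Since β<1 and Q is a polynomial, the tail ∑_n≥ N Q(n)β^n appearing in the next proof can be made smaller than 1 by taking N large. The following Python sketch is ours and purely illustrative: it uses the arbitrary values d=2 and β=0.9, and drops the constant factor of Q, to find such an N numerically.

def tail(N, beta=0.9, deg=6, tol=1e-12):
    # sum_{n >= N} n^deg * beta^n, truncated once terms fall below tol
    s, n = 0.0, N
    while True:
        term = (n ** deg) * beta ** n
        s += term
        if term < tol:
            return s
        n += 1

N = 1
while tail(N) >= 1:
    N += 1
print(N, tail(N))  # first N whose tail is below 1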
There exists λ<1 such that for any tile t∈ T, μ(E_t)≤λ < 1.
This lemma is a consequence of Lemma <ref>. The key element is that there are polynomially many convex 2d-gons of length n, but having 0s on a whole 2d-gon has an exponential probabilistic cost.
Consider the series ∑_n≥ 0 Q(n)β^n, where 0<β<1 is the constant given by Lemma <ref> for the measure μ. As β<1 and Q is a polynomial, the series converges. This implies that there exists N∈ℕ such that ∑_n≥ NQ(n)β^n < 1. Denote λ_0 := ∑_n≥ NQ(n)β^n.
Now consider a tile t∈ T and the patch P=B(t,N) made of all tiles around t up to distance N.
Since μ is non-vanishing, there is some constant α>0 depending on N but not on t such that μ([1^P])≥α.
Claim: μ([1^P] ∩ E_t^∁) ≥ α(1-λ_0).
Denote E_P the set of configurations where P is enclosed in a convex chain 2d-gon of zeros (not intersecting P), and E_P,=n the subset where it is enclosed in a convex chain 2d-gon of length exactly n.
Remark that [1^P]∩ E_t^∁ = [1^P]∩ E_P^∁, since any chain 2d-gon of zeros around t in a configuration of [1^P] necessarily avoids and encloses P.
Remark also that if n<N, E_P,=n = ∅, and for n≥ N we have μ(E_P,=n) ≤ Q(n)β^n by the union bound, as there are at most Q(n) convex 2d-gons of length n around P and E_P,=n is the union over all these 2d-gons 𝒫 of the events [0^𝒫], each of which has measure less than β^n by Lemma <ref>.
With E_P = ⋃_n∈ℕ E_P,=n we have μ(E_P) ≤ λ_0 = ∑_n≥ N Q(n)β^n.
Both [1^P] and E_P^∁ are upward-closed events, so by hypothesis on μ they are positively correlated. Therefore we have μ([1^P]∩ E_t^∁) ≥ α(1-λ_0), which proves the claim.
From the claim we deduce that μ(E_t^∁) ≥ α(1-λ_0).
Denote λ := 1 - α(1-λ_0).
We have μ(E_t)≤λ < 1.
For any ϵ>0,
there exists n such that for any t∈ T we have
μ(E_t,>n) ≤ ϵ.
The proof is similar to the proof of Lemma <ref> and relies on the fact that there are polynomially many convex chain 2d-gons of length n.
Denote E'_t,>n the set of configurations where tile t is enclosed in a convex chain 2d-gon of zeros of length more than n (but possibly also in one of length at most n).
By definition we have E_t,>n⊂ E'_t,>n.
Additionally, E'_t,>n is the union over all convex chain 2d-gons of length more than n of the event “being all 0s”, so, denoting β the constant from Lemma <ref>, we have μ(E'_t,>n) ≤ ∑_k>n Q(k)β^k.
As the series converges, there exists n such that ∑_k>n Q(k)β^k<ϵ.
We then have μ(E_t,>n)≤μ(E'_t,>n) ≤ ϵ as expected.
μ(B) = 0
We prove that for any m∈ℕ and any ϵ > 0, we have μ(B)≤λ^m + mϵ, with λ<1 the constant of Lemma <ref>.
Take ϵ > 0.
Take m∈ℕ.
By Lemma <ref>, there exists n such that for any tile t,
μ(E_t,>n) ≤ ϵ.
Take m tiles t_0,…, t_m-1 sufficiently far away from each other, in such a way that no two chain 2d-gons of length n enclosing two distinct tiles can be at distance k or less, where k is such that μ is k-Markov.
This means that the events E_t_i,≤ n are independent and therefore we have μ(⋂_0≤ i <mE_t_i,≤ n) = ∏_0≤ i <mμ(E_t_i,≤ n).
We have
⊆⋂_0≤ i < mt_i
due to the fact that for any configuration x in B, for any i ∈{ 0, …, m-1 }, either x_t_i = 1 and thus x belongs to E_t_i due to Lemma <ref>, or x_t_i=0 and t_i is enclosed in the trivial 2d-gon of 0s of itself.
Therefore, we have:
B ⊆⋂_0≤ i < m E_t_i⊆⋂_0≤ i <m( E_t_i,≤ n∪(E_t_i∖ E_t_i,≤ n)) ⊆⋂_0≤ i <m E_t_i, ≤ n∪⋃_0≤ i <m E_t_i,>n
From this we get:
μ(B) ≤μ(⋂_0≤ i <m E_t_i,≤ n) + μ( ⋃_0≤ i <m E_t_i,>n)
≤μ(⋂_0≤ i <m E_t_i,≤ n) + ∑_0≤ i < mμ(E_t_i,>n)
≤μ(⋂_0≤ i <m E_t_i,≤ n) + mϵ
≤∏_0≤ i <mμ(E_t_i,≤ n) + mϵ
≤∏_0≤ i < mμ(E_t_i) + mϵ
≤λ^m + mϵ
Since this holds for any m∈ℕ and any ϵ>0, we get μ(B)=0.
We call the proof technique above the polka dot technique, since the E_t_i,≤ ns focus on the surroundings of tiles far away from each other, and therefore with independent behavior, just as polka dots on our tiling.
Let T be a rhombus tiling with finitely many edge directions.
Let F be the 2-neighbour contamination cellular automaton on configurations {0,1}^T.
Let I be the invasion set, that is, I:={ c ∈{0,1}^T | F^n(c) → 1^T}.
Let μ be a measure on {0,1}^T satisfying our standing hypotheses (non-vanishing, k-Markov, and such that upward-closed events are positively correlated).
We have
μ(I) = 1.
Recall in particular that non-trivial Bernoulli measures satisfy these hypotheses, so this theorem holds for non-trivial Bernoulli measures.
Denote 𝒞 the set of configurations containing an infinite half-chain of 0s, and recall that B := I^∁∩𝒞^∁. Since μ(𝒞)=0 by Lemma <ref>, we have μ(𝒞^∁)=1. Therefore μ(I^∁) = μ(I^∁∩𝒞^∁) = μ(B).
By Lemma <ref>, μ(B)=0, so μ(I^∁) = 0 and μ(I)=1.
§ OPEN QUESTIONS
This work opens more questions than it closes. We classify these open questions into three classes.
As mentioned in Section <ref>, the classical 0-1 laws do not apply for dynamical or bootstrap percolation on rhombus tilings.
In full generality, for an arbitrary rhombus tiling and an arbitrary percolation process, the 0-1 law does not hold; see for example Appendix <ref>.
However, we conjecture that if the rhombus tiling is sufficiently regular then the 0-1 law holds for the invasion event of percolation processes.
Let G=(V,E) be the adjacency graph of a uniformly repetitive (or uniformly recurrent) rhombus tiling.
Let F be a percolation process (monotone and freezing) on {0,1}^V.
Let I⊂{0,1}^V be the set of invading configurations for F.
Let μ be a Bernoulli measure.
μ(I)∈{0,1}.
On ℤ^2, percolation processes have been studied in great generality, and though the exact critical threshold can often not be determined exactly, a trichotomy between trivial percolation threshold (p_c∈{0,1}) and non-trivial percolation threshold (p_c>0) can be determined from the stable directions of the percolation process <cit.>.
We similarly conjecture that the non-triviality of the critical threshold on sufficiently regular rhombus tilings is determined by the stable directions.
We consider here the class of multigrid dual tilings <cit.>, which are the canonical case of cut-and-project rhombus tilings: those are very regular, and in particular chains of rhombuses are almost straight (each chain of rhombuses is included in a tube of direction e⃗^⊥, where e⃗ is the edge direction of the chain). In particular the classical Penrose rhombus tilings and Ammann-Beenker rhombus tilings are multigrid dual tilings.
[Stable directions and critical probability on rhombus tilings]
Let T be a multigrid dual tiling and F a percolation process on {0,1}^T.
Do the stable directions of F on T determine the trichotomy between trivial percolation threshold and non-trivial percolation threshold?
The main result of the present article is that 2-neighbour percolation invades almost surely for any non-trivial Bernoulli measure on any rhombus tiling, or equivalently on the adjacency graph of any rhombus tiling.
The first key element of the proof is the absence of fortresses; the second is a counting argument on the possible finite walls of 0s, obtained by bounding the number of wall directions.
It can easily be seen that for arbitrary 4-regular graphs, even planar graphs, if the graph has exponential growth (for example the Cayley graph of the free group on 2 generators) then the counting argument fails.
However, if the graph is quasi-isometric to ℤ^2, the counting argument may hold without the strong structure of rhombus tilings.
[Almost sure percolation for 2-neighbour percolation on 4-regular graphs QI to ℤ^2 without fortress]
Let G=(V,E) be a 4-regular graph that is quasi-isometric to ℤ^2.
Let I⊂{0,1}^V be the set of invading configurations for 2-neighbour percolation on G.
Let μ be a non trivial Bernoulli measure on {0,1}^V.
If G contains no fortress (finite obstacle) for 2-neighbour contamination, is it the case that μ(I)=1?
2-neighbour percolation is very classically studied on ℤ^2 and has been studied on trees and non-amenable groups <cit.>. We aim to generalise this approach to other 2-generator groups such as Baumslag–Solitar groups.
[Cayley graph]
Can we classify 2-generator finitely presented groups with regard to the critical probability of 2-neighbour percolation?
An auxiliary question that arose from our first exploration of percolation on Cayley graph is related to mixed degree graphs.
Consider for example G a single sheet of the Cayley graph of BS(1,2).
G is planar and contains vertices of degree 4 and 3, each vertex of degree 3 being adjacent to only vertices of degree 4.
In a 2-neighbour percolation on G, vertices of degree 3 are “hard” to contaminate as they require 2 out of 3 neighbours to be contaminated.
It appears that G has a critical probability of percolation of 1, that is for any non-trivial Bernoulli measure μ, μ(I)=0.
This is due to the uncountable number of vertical chains (each chain branching in two at each tile), linked to the fact that G is not quasi-isometric to ℤ^2.
However we may ask a similar question as above for mixed degree graphs that are quasi-isometric to ℤ^2.
[Mixed degree]
Let G=(V,E) be a mixed 3–4 degree graph quasi-isometric to ℤ^2 and such that degree-3 vertices are only adjacent to degree-4 vertices.
Let I⊂{0,1}^V be the set of invading configurations for 2-neighbour percolation on G.
Let μ be a non trivial Bernoulli measure on {0,1}^V.
If G contains no fortress (finite obstacle) for 2-neighbour contamination, is it the case that μ(I)=1?
Note also that for the example of G being a single sheet of the Cayley graph of BS(1,2), there exists a second approach which consists in studying percolation not on vertices but on elementary cycles, or equivalently on the dual graph G^*.
In other words, we can consider G either as a graph on which we study percolation, or as a tiling by elementary cycles for which we study the percolation on the adjacency or dual graph.
§ THE NON-0-1 PERCOLATION ON A QUADRILATERAL TILING
Let P_f be the fortress consisting of 4 trapezoids and a small square forming a big square, presented in Fig. <ref>.
Let 𝒯_f be the square grid where the origin square has been replaced by P_f.
For a configuration c∈{0,1}^𝒯_f, we write c|_P_f=0 (resp. c|_P_f=1) when all the tiles of the fortress P_f are in state 0 (resp. 1), and c|_P_f∉{0,1} when at least one fortress tile is in state 0 and at least one is in state 1. For other tiles we denote c(i,j) with (i,j)≠ (0,0) the state of the (i,j) tile in the induced grid.
We write F the cellular automaton of 2-neighbour contamination on both 𝒯_f and ℤ^2.
Let 𝔉 be the set of configurations in {0,1}^𝒯_f where at least one tile of the fortress P_f is in state 1.
Let I be the set of invading configurations.
Let μ be the Bernoulli measure of parameter p.
μ(I) = μ(𝔉) = 1-(1-p)^5
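Before outlining the proof, the contamination dynamics itself is easy to observe numerically. The following minimal Python sketch (our own illustration: a finite window of the plain square grid with free boundary, ignoring the fortress) iterates F to a fixed point from a Bernoulli(p) initial configuration and reports whether the window is fully invaded:

import numpy as np

rng = np.random.default_rng(0)

def contaminate(c):
    # One step of 2-neighbour contamination: a 0-cell becomes 1 when at
    # least two of its four (von Neumann) neighbours are 1; 1s are frozen.
    n = np.zeros_like(c)
    n[1:, :] += c[:-1, :]
    n[:-1, :] += c[1:, :]
    n[:, 1:] += c[:, :-1]
    n[:, :-1] += c[:, 1:]
    return ((c == 1) | (n >= 2)).astype(int)

def invades(p, size=200):
    # Iterate F to a fixed point (the process is monotone, so this terminates).
    c = (rng.random((size, size)) < p).astype(int)
    while True:
        c_next = contaminate(c)
        if np.array_equal(c_next, c):
            return bool(c.all())
        c = c_next

print([invades(p) for p in (0.02, 0.05, 0.10)])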
We give here an outline of the proof which is adapted from the strategy on rhombus tilings detailed in Section <ref>.
Denote G_∘ the square grid with a single hole on the origin, that is G_∘ := ℤ^2∖{(0,0)}.
Denote F the 2-neighbour contamination also on G_∘, and I_∘ the set of invading configurations on {0,1}^G_∘.
For any non-trivial Bernoulli measure μ on {0,1}^G_∘, μ(I_∘)=1.
In what follows, we sketch the proof that the behavior of the square grid with a hole can be likened to that of the full square grid ℤ^2, and reuse our results on rhombus tilings to prove the almost-sure invasion.
Given a stable connected subset P of G_∘, we denote P̃ its induced ℤ^2 subset by P̃:= P∪{(0,0)} if P contains at least three of {(1,0), (-1,0),(0,1), (0,-1)}, and P̃:= P otherwise.
We say that P is regular if |P∩{(1,0), (-1,0), (0,1), (0,-1)}| ≠ 2 and singular otherwise.
If P is singular, then exactly two of {(1,0), (-1,0), (0,1), (0,-1)} are in P. Denote H_1 and H_2 the two half-planes among {(i,j)| i>0}, {(i,j) | i<0}, {(i,j) | j>0}, {(i,j) | j<0} containing this two points.
Then denote P̃_1:= P ∩ H_1 and P̃_2:= P∩ H_2. Remark that, because P̃ is stable for 2-neighbour contamination in ℤ^2 outside of position (0,0), we obtain that P̃=P̃_1∪P̃_2.
Let P be a connected subset of G_∘ that is stable for 2-neighbour contamination.
If P is regular, then P̃ is stable for 2-neighbour contamination in ℤ^2.
If P is singular, then both P̃_1 and P̃_2 are stable for 2-neighbour contamination in ℤ^2.
Remark that Lemma <ref> holds in G_∘, though in G_∘ the horizontal line (resp. vertical line) going through 0 is actually two disjoint half-chains.
The conclusion follows.
From this remark on P̃, we obtain immediately the following corollary on the shape of stable clusters.
Let P be a subset of G_∘ that is stable for 2-neighbour contamination.
If P is finite, then P̃ is enclosed in either a rectangle or a L-shaped hexagon.
If P is infinite, then P̃'s boundary contains an infinite half-chain.
Here we call L-shaped hexagon the union of two rectangles of the form ([0,n_1]× [0,m_1]) ∪ ([0,n_2]× [0,m_2]), see Fig. <ref>.
As the number of “sides” of a “wall” around a finite contaminated cluster is bounded, we obtain a polynomial upper bound on the number of possible walls of a given length, as stated below.
There is a polynomial Q such that the number of possible walls of length n around any given cell t is at most Q(n).
With these intermediate statements, we can now apply the proof strategy from Section <ref>, as outlined below.
We can now apply the same techniques as for the rhombus case.
The set of configurations containing at least an infinite half-line or half-column in state 0 is of measure 0.
Define E_t as the set of configurations where the tile t is enclosed in a finite wall.
Denote 𝒞 the set of configurations of the previous lemma (those containing an infinite half-line or half-column in state 0), and denote B := I_∘^∁∩𝒞^∁.
We have B ⊂⋂_t∈ G_∘ E_t.
Now we have a uniform bound λ<1 on the measure of E_t, and a uniform approximation on E_t,>n.
We apply the polka dot technique of Lemma <ref> to obtain μ(B)=0, which implies μ(I_∘^∁)=0 and μ(I_∘)=1.
This result implies that, on 𝒯_f, almost surely all non-fortress cells eventually get contaminated.
As a consequence, full contamination happens almost surely as long as the fortress is initially disarmed (meaning it initially contains at least one contaminated cell); indeed, a disarmed fortress gets contaminated once it is surrounded.
Note that for a measure satisfying our hypotheses, we similarly have μ(I)=μ(𝔉) by a very similar proof.
Note that the proof outlined here also works for any single finite rectangular (or sufficiently well behaved) hole (or fortress) in a square grid.
§ A PERCOLATION PROCESS ON RHOMBUS TILINGS WITHOUT 0-1-LAW
In this appendix we quickly show an example of a non-uniformly-repetitive rhombus tiling 𝒯 and a specific percolation process F such that 𝒯 contains a single fortress for F, leading to a non-0-1 probability of percolation for any parameter 0<p<1.
Here we consider rhombus tilings with 5 edge directions ζ_i for i=0...4 (as for Penrose tilings).
In this setting we consider the specific tiling of Fig. <ref>.
On this tiling, we have ten possible adjacency directions for tiles, which are ±ζ_i^⊥ for i=0...4.
We say that t' is adjacent to t in direction a if t∩ t' ≡ a^⊥ and the vector t⃗t⃗'⃗ from the barycenter of t to that of t' has positive scalar product with a.
We call partially directed bootstrap percolation F_A the percolation process F where a tile gets contaminated if it has at least two contaminated neighbours in directions A ⊂{±ζ_i^⊥| 0≤ i < 5}.
We consider the specific case where A_3 = {ζ_0^⊥, ζ_1^⊥, ζ_2^⊥,±ζ_3^⊥, ±ζ_4^⊥}
and the percolation process F_3 associated with A_3, that is:
F_3(c)_t =
1 if c_t=1, or if t has at least two neighbours t' and t” in
directions from A_3 with c_t'=1 ∧ c_t”=1;
0 else.
This means a given tile gets contaminated if two of its neighbours were contaminated at the previous step, not counting neighbours in directions {-ζ_0^⊥, -ζ_1^⊥, -ζ_2^⊥}.
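In executable form, the local rule can be rendered as follows (a minimal Python sketch; the direction labels "+z0", ..., "-z4", standing for ±ζ_i^⊥, are our own naming convention and not from the original):

# Directions from A3: all five "positive" directions, plus -z3 and -z4.
A3 = {f"+z{i}" for i in range(5)} | {"-z3", "-z4"}

def f3_update(state, neighbours):
    # `state` is the tile's current state (0 or 1); `neighbours` maps an
    # adjacency-direction label to the state of the neighbour in that
    # direction.  The tile becomes 1 when at least two neighbours in
    # directions from A3 are 1; contaminated tiles stay contaminated.
    if state == 1:
        return 1
    live = sum(s for d, s in neighbours.items() if d in A3)
    return 1 if live >= 2 else 0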
One can check that the only fortress for F_3 in 𝒯 is the “cube” at the center of Figure <ref>, which lies at the crossing of the ζ_0, ζ_1 and ζ_2 lines (any other finite set of tiles S contains a tile with two neighbours outside S in directions from A_3).
Note that outside this singular band of tiles, in the white zones, F_3 behaves as 2-neighbour percolation on ℤ×ℕ, as only directions ζ_3 and ζ_4 are present; so almost surely the two half-planes outside the band get invaded. This means that the right part of the band almost surely gets contaminated.
Indeed, if the outside half planes are fully 1s and the grey chains on the right each contain a contaminated cell (say at positions n_1 and n_2, with position 0 being the closest to the central “cube”) then both grey chains get contaminated from 0 to that contaminated position. Consequently the band gets fully contaminated from the fortress to the position min(n_1,n_2).
Since for every N, there almost surely exist n_1,n_2≥ N such that the grey cell of the top chain at position n_1 and the grey cell of the bottom chain at position n_2 are contaminated, it follows that the whole right band almost surely gets contaminated.
So if the fortress has been disabled, the entirety of the left part of the band gets contaminated; but if the fortress has not been disabled, then the full invasion does not occur.
Overall the probability of invasion is exactly the probability that the fortress initially contains a contaminated tile; this probability is not in {0,1} for non-trivial Bernoulli distributions.
|
http://arxiv.org/abs/2409.02597v1 | 20240904102805 | Rate-Adaptive Generative Semantic Communication Using Conditional Diffusion Models | [
"Pujing Yang",
"Guangyi Zhang",
"Yunlong Cai"
] | eess.SP | [
"eess.SP"
] |
Rate-Adaptive Generative Semantic Communication Using Conditional Diffusion Models
Pujing Yang, Student Member, IEEE, Guangyi Zhang, Student Member, IEEE, and Yunlong Cai, Senior
Member, IEEE
P. Yang, G. Zhang, and Y. Cai are with the College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China (e-mail: [email protected]; [email protected]; [email protected]).
September 9, 2024
======================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Recent advances in deep learning-based joint source-channel coding (DJSCC) have shown promise for end-to-end semantic image transmission.
However, most existing schemes primarily focus on optimizing pixel-wise metrics, which often fail to align with human perception, leading to lower perceptual quality.
In this letter, we propose a novel generative DJSCC approach using conditional diffusion models to enhance the perceptual quality of transmitted images.
Specifically, by utilizing entropy models, we effectively manage transmission bandwidth based on the estimated entropy of transmitted symbols. These symbols are then used at the receiver as conditional information to guide a conditional diffusion decoder in image reconstruction.
Our model is built upon the emerging advanced mamba-like linear attention (MLLA) skeleton, which excels in image processing tasks while also offering fast inference speed.
Besides, we introduce a multi-stage training strategy to ensure the stability and improve the overall performance of the model.
Simulation results demonstrate that our proposed method significantly outperforms existing approaches in terms of perceptual quality.
Semantic communications, conditional diffusion models, joint source-channel coding, image transmission.
§ INTRODUCTION
The rapid development of sixth-generation (6G) communication systems has driven the rise of various smart applications, such as Virtual Reality (VR) and the Internet of Everything (IoE) <cit.>. These services demand enhanced communication efficiency to manage the massive inflow of data traffic. In this context, semantic communications have emerged as a new paradigm, attracting significant attention. Unlike traditional transmission systems that rely on separate source and channel coding, semantic communications focus on accurately transmitting the underlying semantic information of digital data. This approach integrates source and channel coding for joint optimization, a technique known as joint source-channel coding (JSCC).
Recently, the integration of deep learning into wireless communication system designs has gained traction, driven by the exceptional information processing capabilities of various deep learning models. In this context, deep learning-based JSCC (DJSCC) has stimulated significant interest, particularly through the use of autoencoders (AEs) and their variants, such as variational AEs (VAEs). DJSCC maps input images into low-dimensional symbol vectors for transmission. A pioneering work in this area is DeepJSCC<cit.>, which is built on an AE-based framework that allows for joint optimization and mitigates the cliff-edge effect commonly observed in traditional separation-based schemes. Moreover, the authors in <cit.> introduced a Transformer-based framework that enhances image fidelity by incorporating channel feedback to adaptively reconstruct images under varying wireless conditions. Inspired by entropy model-based source coding <cit.>, the authors in <cit.> proposed a rate-adaptive JSCC system, where the number of transmitted symbols is determined by the entropy estimated by the entropy models. This work was further evolved in <cit.> into a digital version, enabling practical application in modern digital systems.
Despite the superior performance of current DJSCC transmission systems, they are typically optimized for mean-square error (MSE) distortion metrics such as peak signal-to-noise ratio (PSNR), which assesses pixel-level similarity between reconstructed and source images. However, it is increasingly recognized that high pixel-level similarity does not necessarily indicate high perceptual quality, which reflects how humans perceive the image's realism and visual appeal <cit.>. To improve the perceptual quality of transmitted images, the authors in <cit.> introduced adversarial and perceptual losses into DJSCC. Moreover, <cit.> developed a dual-path framework to extract both pixel-level information and perceptual information. Alternatively, <cit.> combined DJSCC with the denoising diffusion probabilistic model (DDPM) <cit.>, a powerful generative model that creates images from Gaussian noise. In <cit.>, the reconstructed image is split into two components: range-space and null-space. The range-space that captures the primary structure is transmitted via DJSCC, while the null-space, refining details, is generated at the receiver using diffusion models.
Another approach, presented in <cit.>, leveraged diffusion models by first performing an initial reconstruction with DJSCC, followed by a neural network to guide the denoising process. Although <cit.> and <cit.> have achieved performance improvements, the decoding process at the receiver is time-consuming, as denoising requires hundreds of steps. Additionally, the training is performed module by module without joint optimization, potentially leading to performance loss. Besides, they typically support fixed-rate transmission, lacking the ability to determine optimal bandwidth for each image.
In this letter, we introduce a novel framework called conditional diffusion models-based generative DJSCC (CDM-JSCC) for wireless image transmission.
Unlike previous methods that rely on DDPM noise prediction (referred to as ϵ-prediction)<cit.>, our method utilizes 𝒳-prediction <cit.> to directly predict the source image.
𝒳-prediction offers comparable performance to ϵ-prediction but with significantly fewer steps, as its objective function resembles an autoencoder loss, enabling data reconstruction in a single iteration.
Moreover, we optimize transmission bandwidth by utilizing entropy models to estimate the entropy of transmitted symbols. At the receiver, the transmitted symbols are utilized as conditional information to guide the conditional diffusion decoder to reconstruct images.
Recently, the mamba-like linear attention (MLLA) technique has demonstrated superior performance in image processing tasks, benefiting from parallel computation and fast inference <cit.>. Building on this, we design our JSCC encoder and decoder with MLLA as the core architecture.
Furthermore, we propose a multi-stage training strategy to achieve joint optimization, resulting in substantial enhancements in perceptual quality.
§ PROPOSED CDM-JSCC FRAMEWORK
In this section, we propose the framework of the proposed CDM-JSCC to realize rate-adaptive image transmission. To provide a comprehensive understanding, we begin by giving an overview of CDM-JSCC and then proceed to detail the conditional diffusion models-based decoder.
§.§ Rate-Adaptive JSCC
As illustrated in Fig. <ref>, the source image, represented
by a pixel intensity vector x_0 ∈ℝ^n, is first processed by a nonlinear transform coding encoder g_e, producing a low-dimensional latent representation z = g_e(x; θ_g), where θ_g encompasses the trainable parameters.
Inspired by <cit.>, we estimate the distribution of z̃, i.e., the quantized version of z, by utilizing hyperprior entropy models. Specifically, we introduce an additional latent feature y = h_e(z; θ_h), serving as side information to capture the dependencies among the elements in z̃, where h_e represents the parametric analysis transform, and θ_h denotes its trainable parameters.
Each z̃_i in z̃ is variationally modeled as a Gaussian distribution with the standard deviation σ_i and mean μ_i predicted based on the quantized ỹ as:
p_z̃ | ỹ(z̃ | ỹ)= ∏_i (𝒩(μ̃_i, σ̃_i^2) * 𝒰(-1/2, 1/2)) (z̃_i),
where (μ̃, σ̃)=h_s(z̃; ϕ_h), h_s is the parametric synthesis transform, ϕ_h denotes its trainable parameters, and * represents the convolutional operation. Moreover, we convolve each element with a standard uniform density 𝒰(-1/2, 1/2) to enable a better match of the prior to the distribution of z̃ <cit.>.
In this way, we are able to control the channel bandwidth ratio (CBR) based on the estimated entropy of z̃. Specifically, z comprises multiple embedding vectors z_i of length C. The JSCC encoder f_e adjusts the length of each vector to k_i=Q(-βlog p_z̃_i | ỹ(z̃_i | ỹ)), where Q denotes the quantization operation, β controls the relation between the prior p_ẑ_i | ỹ(ẑ_i | ỹ), and the expected length of the corresponding symbol vector s_i. In particular, when the entropy of z̃_i is high, a higher CBR is adopted, resulting in a larger k_i. This process is expressed as s = f_e(z, p_z̃ | ỹ(z̃ | ỹ); θ_f) ∈ℂ^k, where s denotes the channel input symbols, k is the number of transmitted symbols, and θ_f denotes the trainable parameters. Additionally, to meet the energy constraints of real-world communication systems, we ensure that s satisfies an average power constraint before transmission.
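A minimal sketch of this rate-adaptation step might look as follows (the base-2 logarithm, the clamp bound k_max, and the modeling of the quantizer Q by rounding are our assumptions, not details taken from the actual implementation):

import torch

def symbol_lengths(likelihoods, beta, k_max):
    # `likelihoods` has shape (L, C): entropy-model likelihoods for each of
    # the L embedding vectors z_i of length C.  The length k_i is the
    # quantized value of -beta * log p(z_i | y), clamped to [0, k_max].
    neg_log_p = -torch.log2(likelihoods).sum(dim=1)
    return torch.round(beta * neg_log_p).long().clamp(0, k_max)

def mask_symbols(s, k):
    # Keep only the first k_i entries of each symbol vector s_i; the masked
    # tail is simply not transmitted.
    L, C = s.shape
    idx = torch.arange(C, device=s.device).expand(L, C)
    return torch.where(idx < k.unsqueeze(1), s, torch.zeros_like(s))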
Then, s is transmitted through a noisy wireless channel, which is denoted by η. As the additive white Gaussian noise (AWGN) channel is adopted in our work, this process follows ŝ≜η(s) =s + n, where n∼𝒞𝒩(0,σ^2 I_k × k) is a complex Gaussian vector with variance σ^2.
At the receiver, ŝ is first decoded to obtain the reconstructed latent ẑ = f_d(ŝ; ϕ_f), where f_d represents the JSCC decoder, and ϕ_f denotes its trainable parameters.
The receiver utilizes ẑ as an additional “content” latent to guide the conditional diffusion decoder g_d to reconstruct images x̂_0 = g_d(x̂_N, ẑ; ϕ_g), where x̂_N is the randomly sampled Gaussian source and ϕ_g represents the trainable parameters, as shown in Fig. <ref>. The details of the conditional diffusion decoder will be provided in the following section.
§.§ Conditional Diffusion Decoder
We build our decoder on conditional diffusion models for their significant success in generative tasks.
The core idea of diffusion models is to transform an image x_0 into a Gaussian distribution by progressively adding noise to it, referred to as the forward process q, resulting in a sequence of increasingly noisy versions x_1, x_2,..., x_N. Then, the reverse process p_θ generates high-quality samples by reversing this process.
The two Markov processes at step n can be respectively described as follows:
q(x_n|x_n-1) = 𝒩(x_n; √(1-β_n)x_n-1, β_n I),
p_θ(x_n-1 | x_n) = 𝒩(x_n-1; μ_θ(x_n,n), Σ_θ(x_n, n)),
where the variance β_n is held constant as hyperparameters, the reverse process mean μ_θ(x_n, n) is parameterized by a neural network, and the variance Σ_θ(x_n, n) is always set to β_n I.
The diffusion models are typically trained to predict the accumulated noise ϵ that perturbs x_0 to x_n, a process known as ϵ-prediction, with the loss function being:
ℒ(θ, x_0)=𝔼_x_0, n,ϵ||ϵ - ϵ_θ(x_n, n)||^2,
where n ∼Uniform(1, ..., N), ϵ∼𝒩(0, I), x_n = √(α̅_n)x_0 + √(1 - α̅_n)ϵ, and α̅_n=∏_i=1^n (1-β_i).
In our conditional diffusion decoder, ẑ is taken as the condition for the reverse process. Consequently, the forward process remains unchanged as (<ref>), and the reverse process is replaced by:
p_θ(x_n-1 | x_n, ẑ) = N(x_n-1; μ_θ(x_n, ẑ, n), β_n I).
Analogous to (<ref>), we modify the training objective as follows:
ℒ(θ, x_0)
= 𝔼_n, ϵ||x_0 - 𝒳_θ(x_n, ẑ, n/N)||^2.
First, we employ 𝒳-prediction <cit.> to predict the source image instead of using ϵ-prediction. 𝒳-prediction requires only a few denoising steps yet achieves performance comparable to ϵ-prediction, which involves hundreds of denoising steps. This is because the optimization objective resembles an autoencoder loss, allowing the model to reconstruct the source image in a single iteration. Moreover, we replace the step n with a normalized step n/N, enabling the use of a smaller N during testing compared to training, thereby accelerating the decoding process.
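A training step implementing this objective could be sketched as follows, assuming image batches of shape (b, c, h, w), a precomputed schedule tensor alpha_bar with alpha_bar[n-1] = α̅_n, and a model callable taking (x_n, ẑ, n/N); all names here are illustrative rather than taken from the actual implementation:

import torch

def x_prediction_loss(model, x0, z_hat, alpha_bar, N):
    # Draw a random step n, noise x0 to x_n, and regress the model output
    # directly onto x0 (X-prediction), with the step index normalized by N.
    b = x0.shape[0]
    n = torch.randint(1, N + 1, (b,), device=x0.device)
    a = alpha_bar[n - 1].view(b, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_n = a.sqrt() * x0 + (1.0 - a).sqrt() * eps
    x0_pred = model(x_n, z_hat, n.float() / N)
    return ((x0 - x0_pred) ** 2).mean()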
During inference, images are generated using ancestral sampling with Langevin dynamics as follows:
x_n-1 = √(α̅_n-1)𝒳_θ(x_n, ẑ, n/N) + √(1-α̅_n-1)ϵ_θ(x_n, ẑ, n/N),
ϵ_θ(x_n, ẑ, n/N)=(x_n-√(α̅_n)𝒳_θ(x_n, ẑ, n/N))/√(1-α̅_n).
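The corresponding sampling loop might then be sketched as below (a deterministic variant of the update; the schedule tensor and the model interface are the assumptions carried over from the training sketch):

import torch

@torch.no_grad()
def sample(model, z_hat, shape, alpha_bar, N):
    # Start from Gaussian noise x_N and apply the ancestral updates down to x_0.
    x = torch.randn(shape)
    for n in range(N, 0, -1):
        a_n = alpha_bar[n - 1]
        t = torch.full((shape[0],), n / N)
        x0_pred = model(x, z_hat, t)                       # X-prediction
        eps_pred = (x - a_n.sqrt() * x0_pred) / (1.0 - a_n).sqrt()
        a_prev = alpha_bar[n - 2] if n > 1 else torch.tensor(1.0)
        x = a_prev.sqrt() * x0_pred + (1.0 - a_prev).sqrt() * eps_pred
    return x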
§ TRAINING STRATEGY AND MODEL ARCHITECTURE
In this section, we first present the architecture of CDM-JSCC. Then, we analyze the proposed loss function. Finally, we propose a multi-stage training algorithm to ensure the stability and improve the overall performance.
§.§ Model Architecture
As shown in Fig. <ref>, the proposed model consists of a transmitter, a wireless channel, and a receiver. We adopt the AWGN channel in our work, which is incorporated as a non-trainable layer in the architecture.
§.§.§ Transmitter
The transmitter consists of an entropy encoder and a rate-adaptive JSCC encoder. The entropy encoder comprises a hyperprior model to effectively capture
spatial dependencies in the latent representation <cit.>.
The encoder g_e consists of 4 residual blocks followed by 4 down-sampling convolution layers connected sequentially. The hyperprior encoder is composed of a down-sampling convolution layer followed by a ReLU activation function, while the hyperprior decoder is composed of an up-sampling convolution layer and a ReLU activation function.
Then, z is input to the JSCC encoder, where it is compressed into a variable-length latent representation based on the estimated entropy p(z̃|ỹ). Specifically, elements are selected and masked sequentially in a one-dimensional checkerboard pattern, which captures global features more effectively than random selection or linear sequencing. In addition, our proposed model is built on the advanced MLLA skeleton, known for its exceptional performance in image processing tasks and fast inference speed.
The JSCC encoder consists of 3 MLLA blocks and multiple multi-layer perceptrons (MLPs).
§.§.§ Receiver
The receiver consists of a rate-adaptive JSCC decoder and a conditional diffusion decoder. The JSCC decoder inverts the operations performed by the JSCC encoder. Then, ẑ is input to the diffusion decoder as conditional information to guide the denoising process. The diffusion denoising unit is built on a U-Net architecture, featuring skip connections that allow the output from each down-sampling unit to be directly passed to the corresponding up-sampling unit. The architecture includes 6 down-sampling units and 6 up-sampling units, with a mid-block in between. Each down-sampling unit, up-sampling unit, and mid-block is composed of residual blocks and linear attention mechanisms. In addition, ẑ is encoded into features of different scales by 4 layers of hierarchical encoders. These features are concatenated with the output of the preceding down-sampling unit and embed time information as input for the next down-sampling unit. During testing, the output x̂_n-1 is passed through the diffusion denoising unit iteratively until x̂_0 is produced.
§.§ Loss Function
In a typical end-to-end JSCC system, the optimization focuses solely on the MSE between the source image and the reconstructed image, with the loss function defined as:
ℒ_D =
𝔼_x_0||x_0-x̂_0||^2,
which aligns with the training objective of our conditional diffusion models as defined in (<ref>), with the exception that step n and noise ϵ are randomly sampled.
Moreover, for an entropy model-based rate-adaptive JSCC system, the loss function is typically defined as:
ℒ_RD= 𝒟+λℛ
= 𝔼_x_0||x_0-x̂_0||^2_JSCC distortion + 𝔼_x_0||x_0-x̅_0||^2_compression distortion
+ λ𝔼_x_0[- log p(z̃|ỹ)- log p(ỹ)]_rate,
where 𝔼_x_0||x_0-x̅_0||^2 is the compression distortion between the source image x_0 and the compressed image x̅_0, 𝔼_x_0[- log p(z̃|ỹ)- log p(ỹ)] represents the compression rate estimated by entropy models, and 𝔼_x_0||x_0-x̂_0||^2 denotes the typical JSCC loss term.
To improve the perceptual quality of reconstructed images, we introduce an additional perceptual loss, expressed as follows:
ℒ_P =
𝔼_x_0[d_p(x_0 , x̂_0)] + 𝔼_x_0[d_p(x_0,x̅_0)],
where d_p(·, ·) represents the perceptual loss term. Here we adopt the widely used learned perceptual image patch similarity (LPIPS) loss based on VGGNet <cit.>. Therefore, the objective for our model can be expressed as:
ℒ =
(1 - η)(𝔼_x_0||x_0-x̂_0||^2_JSCC distortion + 𝔼_x_0||x_0-x̅_0||^2_compression distortion)
+ η (𝔼_x_0[d_p(x_0,x̂_0)]_JSCC perceptual loss + 𝔼_x_0[d_p(x_0,x̅_0)]_compression perceptual loss)
+ λ𝔼_x_0[- log p(z̃|ỹ)- log p(ỹ)]_rate,
where η∈ [0,1] balances the trade-off between MSE distortion and the perceptual loss term, while λ adjusts the trade-off between the image quality and transmission rate.
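A sketch of this objective, using the publicly available lpips package for the VGG-based perceptual term, could read as follows (inputs are assumed to be scaled to [-1, 1], and rate_bits to be the entropy-model estimate -log p(z̃|ỹ) - log p(ỹ) averaged over the batch; these conventions are our assumptions):

import torch
import lpips

lpips_vgg = lpips.LPIPS(net="vgg")  # VGG-based LPIPS metric

def total_loss(x0, x_jscc, x_comp, rate_bits, eta, lam):
    # x_jscc: JSCC reconstruction; x_comp: compression-only reconstruction.
    mse = ((x0 - x_jscc) ** 2).mean() + ((x0 - x_comp) ** 2).mean()
    perc = lpips_vgg(x0, x_jscc).mean() + lpips_vgg(x0, x_comp).mean()
    return (1.0 - eta) * mse + eta * perc + lam * rate_bits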
§.§ Training Strategy
To ensure training stability and enhance overall performance, we propose a multi-stage training strategy. Initially, we train each module individually to reduce complexity and facilitate convergence. Once all modules have been trained separately, we fine-tune the entire model to optimize performance. The complete multi-stage training process is detailed in Algorithm <ref>.
§ SIMULATION RESULTS
In this section, we perform simulations to evaluate the performance of the proposed CDM-JSCC.
§.§ Simulation Settings
§.§.§ Basic Settings
We evaluate our CDM-JSCC on the Kodak image dataset, which consists of 24 images, each with a resolution of 512 × 768 × 3. During testing, we employ only 30 denoising steps at the receiver, ensuring an efficient and fast decoding process. All experiments are conducted using PyTorch. For the training process, we use the Adam optimizer for stochastic gradient descent, starting with a learning rate of 1 × 10^-4, which is reduced after several epochs. The training dataset comprises 50,000 randomly sampled images from the ImageNet dataset, which are randomly cropped to 256 × 256. The batch size is set to 4. We set λ=0.0512 and η=0.5 for the experiments.
§.§.§ Benchmarks
For benchmarks, we consider both deep learning-based methods and traditional separation-based schemes, incorporating optimization strategies for both MSE distortion and perceptual loss. The benchmarks are detailed as follows:
* “BPG+LDPC": This method employs BPG for source coding and LDPC for channel coding, followed by quadrature amplitude modulation (QAM).
* “BPG+capacity": In this approach, we use an ideal capacity-achieving channel coding in conjunction with BPG and QAM.
* “DeepJSCC": As a representative learning-based method, we include the classic DeepJSCC optimized for MSE distortion <cit.>.
* “GAN-JSCC": This method optimizes both MSE distortion and perceptual loss <cit.>.
§.§.§ Considered Metrics
We evaluate the performance of our method along with the benchmarks using the typical pixel-wise metric PSNR as well as the perceptual metrics LPIPS <cit.> and Fréchet Inception Distance (FID). PSNR calculates pixel-wise MSE distortion. LPIPS measures the l2 distance between two latent embeddings extracted by a pre-trained network <cit.>. FID calculates the distance between the distributions of source images and reconstructed images in the feature space using the Fréchet distance <cit.>.
§.§ Performance Comparison
We evaluate the performance of our proposed model on the AWGN channel, a representative and widely adopted channel model in this research community. In all subsequent experiments, the model is trained on a single SNR value and tested on the same SNR. In addition, by carefully adjusting the hyper-parameters, we keep the CBR fixed at 1/48.
Fig. <ref> presents a comparative analysis of our proposed CDM-JSCC against the benchmarks across various SNRs. Notably, our model outperforms the others across all perceptual quality metrics, including LPIPS and FID. This enhancement is attributed to the proposed loss function and the powerful generative diffusion models. However, it is important to note that our scheme shows some performance degradation in the PSNR metric compared to the MSE-optimized DeepJSCC and traditional separation-based schemes. This is because generative models tend to generate realistic and clear images and may sometimes overlook pixel-wise fidelity. Despite this, our model still outperforms GAN-JSCC in the PSNR metric. Unlike GAN-JSCC, our model does not plateau as SNR increases, maintaining the potential to transmit clear, realistic, and high-fidelity images at higher SNR levels.
§ CONCLUSION
In this letter, we proposed a framework of conditional diffusion models-based generative DJSCC for image transmission. Moreover, we employed 𝒳-prediction with a few denoising steps to accelerate the decoding process. Furthermore, we effectively managed the transmission bandwidth based on the estimated entropy of the transmitted symbols. Besides, we proposed a multi-stage training strategy to ensure the stability of the training process. Simulation results demonstrated that the proposed method can significantly surpass existing methods in terms of perceptual quality.
IEEEtran
|
http://arxiv.org/abs/2409.02701v1 | 20240904133812 | Ground state of the gauge invariant Dicke model: condensation of the photons in non-classical states | [
"N. Q. San",
"O. D. Skoromnik",
"A. P. Ulyanenkov",
"A. U. Leonau",
"I. D. Feranchuk"
] | quant-ph | [
"quant-ph"
] |
[Corresponding author: ][email protected]
School of Engineering and Technology - Hue University, Hue, Vietnam
Currently without university affiliation
Atomicus GmbH Amalienbadstr. 41C, 76227 Karlsruhe, Germany
Deutsches Elektronen-Synchrotron DESY, Hamburg 22607, Germany
Atomicus GmbH Amalienbadstr. 41C, 76227 Karlsruhe, Germany
§ ABSTRACT
We investigate the ground state of two physically motivated
modifications of the Dicke model. The first modification corresponds
to particles whose phase space contains only two states, for
example, particles with spin 1/2 or artificially created qubits. The
second modification describes two-level systems that arise as a
result of truncating the full Hilbert space of atoms to two levels
that are in resonance with the electromagnetic field and are
described by the gauge-invariant Dicke model. We demonstrate that
the behavior of these systems is qualitatively distinct in both
cases. In particular, in the first scenario, a phase transition into
the state with a non-zero amplitude of the classical field is
possible, while in the second case, the so-called order parameter
η = ⟨â⟩ of the field's phase transition into a coherent state
with photon condensation is zero. At the same time, the average number of
photons n̅ = ⟨â^†â⟩ ≠ 0, and the collective excitation in
the system manifests a non-classical "squeezed" state of the
field. We analyze the observable characteristics of both systems in
a wide range of variation of their parameters.
Ground state of the gauge invariant Dicke model: condensation of the photons in non-classical states
I. D. Feranchuk
September 9, 2024
=====================================================================================================
§ INTRODUCTION
The Dicke Model (DM) <cit.> describes the interaction
between light and matter and was introduced to describe coherent
processes in a dense gas induced by the electromagnetic field of a
resonator <cit.>. An important peculiarity of this model
is the occurrence of a phase transition into the superradiant state,
corresponding to the excitation of a macroscopic number of photons in
the resonator. This means that the mean field amplitude⟨â|≠⟩0<cit.>.
Another property of the DM is the existence of a phase
transition in a state of thermodynamic equilibrium of this system, which has drawn considerable attention starting from
works <cit.>. It was shown that in the zero
temperature limit, when the system is in its ground state, the phase
transition corresponds to the emergence of the superradiant phase,
which is associated with the formation of a photon condensate
<cit.>.
However, it was demonstrated<cit.>
that when the DM is used to describe the real atoms, i.e. atoms whose
spectrum contains many levels, it is necessary to account for the
diamagnetic contribution in the Hamiltonian from the interaction
between the atoms and the field. This leads to the emergence of a quadratic term in the
model Hamiltonian involving field operators and prohibits the
thermodynamic phase transition in this system, as proven by the no-go theorem <cit.>. Similar statements have also been established for a phase
transition in the ground state of the system leading to the formation of a
photon condensate. It is important to stress that proofs of the corresponding no-go theorems are based on
transforming the Hamiltonian of the DM into a gauge-invariant
form, with the subsequent truncation of the Hilbert space to only two
levels <cit.>.
At the same time, when using the DM to describe the real physical two-level systems -
qubits - a phase transition is allowed
<cit.>. In particular, for systems with a large finite number of qubits (N ≫1) this problem has been
investigated numerically, and the characteristics of the
phase transition have been identified <cit.>. It corresponds to the
appearance of a classical field component in the system, characterized
by a non-zero order parameterη= ⟨â|≠⟩0. Recently, such a
state in the system withN=1has been observed experimentally
<cit.>.
These results expose an ambiguity in the use of the DM: is it meant to describe simplified atomic systems whose many levels have been truncated to two, or genuine two-state
systems, such as qubits, which naturally contain only two levels?
The aim of the present work is to investigate numerically the ground state properties of the gauge-invariant Dicke model (GIDM) <cit.> accounting for the multi-level energy spectrum of atoms, and compare them with the corresponding results emerging from the DM. The paper is organized as follows: in Sec. <ref> we introduce the Hamiltonian of the DM and GIDM and describe the algorithm for numerical calculation of the ground state energies and wave functions of these systems. In Sec. <ref> other characteristics of the ground states are calculated and the possibility of phase transition is analyzed. In Sec. <ref> we discuss the obtained results.
§ HAMILTONIANS OF DM AND GIDM
Let's consider two models used to describe the resonant interaction of
radiation with atoms in a resonator. The first model is the standard
DM with the Hamiltonian corresponding to a system
consisting ofNqubits interacting with a single-mode quantum field <cit.>:
Ĥ_D = â^†â + ΔĴ_z + 2λ/√(N)(â + â^†) Ĵ_x + λ^2,
where the field frequency ω is taken as the energy unit; Δ is the transition frequency between the qubit levels; â, â^† are the annihilation and creation operators of the quantum field with frequency ω = 1; Ĵ_z and Ĵ_x denote the projections of the total
angular momentum J = N/2; λ is the coupling constant between the
qubits and the field. Here we use the system of units ħ = c = 1.
The coupling constant can be defined as follows:
λ = d √(2πρ),
where d is the dipole moment of the qubit, and ρ is the density
of qubits in the resonator.
The second model is the GIDM corresponding to a system
consisting ofNatoms, whose energy spectrum is truncated to only two levels, interacting with a single-mode quantum field <cit.>:
Ĥ_C = â^†â + Δ{Ĵ_z cosh[ 2f (â - â^†) ]
+ i Ĵ_x sinh[ 2f (â - â^†) ] } .
We note that canonical transformations are made to match the
parameters in the operator from <cit.> with
the form of HamiltonianĤ_D:
â→ i â; â^†→ -i â^†; Ĵ_y →Ĵ_x,
with the coupling constant f having another normalization
f = λ/√(N).
In addition, the term λ^2 is introduced in (<ref>) to
provide the same energy reference level as in the operator (<ref>)
<cit.>.
In order to solve the Schrödinger equation for (<ref>) and (<ref>) numerically, we use the following expansion for the eigenvectors of the system's states:
|Ψ⟩ = ∑_n=0^N_tr∑_M=-N/2^N/2C_nM|n⟩χ_J,M,
where |n⟩ is the Fock state of the field, N_tr is the largest photon number taken into account; χ_J,M are the eigenfunctions of the angular momentum operators:
Ĵ^2 χ_J,M = J(J+1)χ_J,M; Ĵ_z χ_J,M = Mχ_J,M.
Using |n⟩ and χ_J,M as the basis vectors, we derive the following expressions for the matrix elements of the Hamiltonian (<ref>):
(H_D)_kn^MM' = (n δ_nk + Δ M + λ^2 ) δ_MM' +
λ/√(N)(√(n)δ_n-1,k +√(n+1)δ_n+1,k)×
√((J+M)(J-M+1))(δ_M-1,M' +δ_M+1,M'),
and the Hamiltonian (<ref>)<cit.>:
H_kn^MM' = n δ_knδ_MM' + Δ/2 S_kn[ (-1)^n + (-1)^k/2 Mδ_MM'
- i (-1)^n - (-1)^k/2√((J+M)(J-M+1))×
(δ_M-1,M' +δ_M+1,M') ],
S_kn(f) = (-1)^n √(n!/k!) (2f)^k-nL_n^k-n(4f^2) e^-2f^2;
k ≥ n; S_kn= S_nk,
where L_n^k (x) are the generalized Laguerre polynomials.
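For reference, the diagonalization behind these results can be sketched in a few lines of Python. The sketch below (our own illustration, with arbitrary parameter values) assembles the DM Hamiltonian (<ref>) directly from the field and angular-momentum operators rather than from the explicit matrix elements (<ref>); the GIDM matrix (<ref>) can be filled in analogously from S_kn:

import numpy as np

def dicke_ground_state(N, delta, lam, n_tr):
    # Basis |n> x |J, M> with J = N/2 and photon numbers n <= n_tr.
    J = N / 2
    M = np.arange(-J, J + 1)
    dim_f, dim_s = n_tr + 1, len(M)

    a = np.diag(np.sqrt(np.arange(1, dim_f)), k=1)      # photon annihilation
    num = a.T @ a                                        # photon number a^+ a

    # J_+ |J,M> = sqrt(J(J+1) - M(M+1)) |J,M+1>; J_x = (J_+ + J_-)/2
    jp = np.diag(np.sqrt(J * (J + 1) - M[:-1] * (M[:-1] + 1)), k=-1)
    jx = (jp + jp.T) / 2
    jz = np.diag(M)

    I_f, I_s = np.eye(dim_f), np.eye(dim_s)
    H = (np.kron(num, I_s) + delta * np.kron(I_f, jz)
         + 2 * lam / np.sqrt(N) * np.kron(a + a.T, jx)
         + lam**2 * np.kron(I_f, I_s))

    E, V = np.linalg.eigh(H)
    return E[0], V[:, 0].reshape(dim_f, dim_s)           # E_0 and C[n, M]

E0, C = dicke_ground_state(N=4, delta=1.0, lam=1.0, n_tr=40)
print(2 * E0 / 4)  # normalized ground-state energy epsilon(N, lambda)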
§ OBSERVABLE CHARACTERISTICS
In Fig. <ref> we show the results of calculating the normalized values of the ground state energies
ϵ (N,λ) = 2 E_0(N,λ)/N
by numerical diagonalization of matrices (<ref>) and
(<ref>). It is remarkable that the results of Fig. <ref>a coincide with the results of Fig. 1 in <cit.>. Let us also stress that for the GIDM sufficiently fast convergence with increasing photon threshold value N_tr is observed, as shown in Fig. <ref>b.
Fig. <ref> presents the results of calculating the normalized
ground state energy of the system
as a
function of the coupling constant and the number of qubits/atoms in the
resonator for both models. It is remarkable that Fig. <ref>b does not provide any evidence of a phase transition taking place within the GIDM.
We note that for the GIDM andN ≫1the main contribution to
the energy is determined by the following term in the Hamiltonian
(<ref>):
H_1 ∼ N e^-2f^2,
which allows one to derive an approximate analytical expression for the
energy for this particular case:
ϵ (N,f)≈ϵ (N_1,f_1) + O (1/N);
f_1 = √(f^2 - 1/2lnN/N_1), N>N_1; f^2 > 1/2lnN/N_1.
Fig. <ref> demonstrates sufficiently high accuracy of the
relation (<ref>), allowing one to determine the ground state
energy for any value ofN ≫1by analytical continuation.
Let us investigate some observable characteristics of the system
in its ground state. We start with the average amplitude
of the field:
η = ⟨Ψ|â|Ψ⟩ = ∑_n∑_M √(n) C^*_n-1,M C_nM.
As mentioned above, the non-perturbative proof of the relation η = 0 for the ground
state of the system with Hamiltonian (<ref>) was provided in
<cit.>. When solving the corresponding Schrödinger equation numerically, we obtain the same result because the matrix elements (<ref>) with fixed M values are non-zero only for field states of the same parity. Therefore, in the state vector (<ref>) the index n changes with step 2 and, as a consequence, the product of the coefficients C^*_n-1,M C_nM = 0. At the same time, the matrix elements of the
operator (<ref>) do not possess this property, and a general value η ≠ 0 can be obtained. This result is illustrated by Fig. <ref>.
However, the average number of photons in the ground state of the
system within both models is different from zero and determined by the
equation:
α = n̅ = ⟨Ψ|â^†â|Ψ⟩ = ∑_n∑_M n |C_nM|^2.
Fig. <ref> illustrates the dependence of α on
the coupling constant with various atom/qubit numbers for both models.
It is remarkable that the systems possess qualitatively different
statistics of photons in the condensed state. This is illustrated in
Fig. <ref>, where we show the probabilities of the zero-photon|c_0|^2, single-photon|c_1|^2, and multi-photon|c_mult|^2 = ∑_n=2^N_tr |c_n|^2states:
|c_n|^2 = ∑_M |C_nM|^2.
These quantities play an important role for the generator of
nonclassical states of the electromagnetic
field<cit.>.
The resulting condensate consists of photons in a nonclassical
"squeezed" state. Fig. <ref> illustrates the normalized average
squared fluctuations of the coordinate and momentum of the field
oscillators, and the 'squeezing' parameter r, calculated as:
σ_x = √(⟨x̂^2⟩ - ⟨x̂⟩^2), σ_p = √(⟨p̂^2⟩ - ⟨p̂⟩^2);
⟨x̂^2⟩ = 1/2⟨(â + â^†)^2⟩ = 1/2∑_n∑_M [√(n(n-1))C^*_n-2,M + (2n+1)C^*_n,M +√((n+1)(n+2))C^*_n+2,M]C_nM;
⟨p̂^2⟩ = - 1/2⟨(â - â^†)^2⟩ = - 1/2∑_n∑_M
[√(n(n-1))C^*_n-2,M - (2n+1)C^*_n,M +√((n+1)(n+2))C^*_n+2,M]C_nM;
α - η^2 = sinh^2 r.
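Given the coefficient matrix C_nM of the ground state from such a diagonalization, all of the above observables reduce to short sums; a minimal sketch (assuming a real eigenvector, as returned by a Hermitian eigensolver):

import numpy as np

def observables(C):
    # C[n, M]: ground-state coefficients over |n> x |J, M>.
    n = np.arange(C.shape[0])
    p_n = (np.abs(C) ** 2).sum(axis=1)                   # photon statistics |c_n|^2
    alpha = (n * p_n).sum()                              # mean photon number
    eta = np.sum(np.sqrt(n[1:])[:, None] * np.conj(C[:-1]) * C[1:]).real
    r = np.arcsinh(np.sqrt(max(alpha - eta**2, 0.0)))    # squeezing parameter
    return eta, alpha, r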
Fig. <ref> illustrates the phase space of the photon condensate for the DM and GIDM. Fig. <ref> shows the population of the photon states at various coupling constants for both models.
Let us stress that, along with the appearance of the photon condensate in the system, there
is a transition of a part of the atoms to the excited state due to the
interaction with the electromagnetic field in the resonator. The
resulting population of the excited state of the atoms can be calculated as:
ξ = ⟨(1 + σ̂_z)/2⟩ = (1/N)∑_i ⟨(1+σ̂_zi)/2⟩ =
1/2 + (1/N)⟨Ĵ_z⟩ = 1/2 + (1/N)∑_n∑_M M C^*_nM C_nM
and the corresponding dependence of this value on the coupling constant is shown in Fig. <ref>.
Another important characteristic of the system is the entanglement Π(f,N) between the atoms/qubits and the field <cit.>:
Π(f,N) = - Σ_M p_M log_2 p_M; p_M = Σ_n |C_nM|^2,
which is illustrated in Fig. <ref>.
§ CONCLUSION
In our work we investigated numerically the possibility of a phase transition within the Dicke model (DM) and the gauge-invariant Dicke model (GIDM). This transition was expected to arise from the interaction of an ensemble of two-level systems (atoms/qubits) with the electromagnetic field in a resonator. The obtained results resolve the contradictions between claims of a phase transition in the DM <cit.> and the no-go theorems <cit.> (and citations
therein). We conclude that for a system consisting of natural two-level objects (for example, particles with spin 1/2 in a magnetic field or artificially created qubits) the phase transition is possible, which validates the use of the DM Hamiltonian for its description. However, the DM becomes invalid when used for modeling the interaction of a system of multi-level objects with the electromagnetic field involving resonant transitions. In such cases, no phase transition occurs and the GIDM should be used instead. Nevertheless, in both cases, the collective
excitation of the field in the system represents a photon condensate. However, in the case of the DM it includes coherent
condensate. However, in case of the DM it includes the coherent
quasiclassical states, whereas for the GIDM it consists of nonclassical
"squeezed" states. The computed characteristics of these states are
essential for various applications of QED in the resonators.
§ ACKNOWLEDGEMENTS
AL acknowledges support from DESY (Hamburg, Germany), a member of the Helmholtz Association HGF.
abbrv |
http://arxiv.org/abs/2409.02496v1 | 20240904074632 | Self-consistent approach to the dynamics of excitation energy transfer in multichromophoric systems | [
"Veljko Janković",
"Tomáš Mančal"
] | physics.chem-ph | [
"physics.chem-ph",
"cond-mat.other",
"quant-ph"
] |
Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade, Serbia
Faculty of Mathematics and Physics, Charles University, Ke Karlovu 5, 121 16 Prague 2, Czech Republic
§ ABSTRACT
Computationally tractable and reliable, albeit approximate, methods for studying exciton transport in molecular aggregates immersed in structured bosonic environments have been actively developed.
Going beyond the lowest-order (Born) approximation for the memory kernel of the generalized quantum master equation typically results in complicated and possibly divergent expressions.
Starting from the memory kernel in the Born approximation, and recognizing the quantum master equation as the Dyson equation of the Green's functions theory, we formulate the self-consistent Born approximation to resum the memory-kernel perturbation series in powers of the exciton–environment interaction.
Our formulation is in the Liouville space and frequency domain, and handles arbitrary exciton–environment spectral densities.
In a molecular dimer coupled to an overdamped oscillator environment, we conclude that the self-consistent cycle significantly improves the Born-approximation energy-transfer dynamics.
The dynamics in the self-consistent Born approximation agree well with solutions of hierarchical equations of motion over a wide range of parameters, including the most challenging regimes of strong exciton–environment interactions, slow environments, and low temperatures.
This is rationalized by analytical considerations of coherence-dephasing dynamics in the pure-dephasing model.
We find that the self-consistent Born approximation is good (poor) at describing energy transfer modulated by an underdamped vibration resonant (off-resonant) with the exciton energy gap.
Nevertheless, it reasonably describes exciton dynamics in the seven-site model of the Fenna–Matthews–Olson complex in a realistic environment comprising both an overdamped continuum and underdamped vibrations.
Self-consistent approach to the dynamics of excitation energy transfer in multichromophoric systems
Tomáš Mančal
September 9, 2024
===================================================================================================
§ INTRODUCTION
The light absorption and thus initiated excitation energy transfer (EET) in molecular aggregates constitute the first steps of solar-energy conversion in both natural <cit.> and artificial <cit.> systems.
The EET takes place in a complex dynamic spatio–temporal landscape stemming from the competition of interactions promoting exciton delocalization (resonance coupling between molecules) and localization (static and dynamic disorder). <cit.>
As the energy scales of these counteracting interactions are typically comparable to one another, <cit.> theoretical descriptions of EET dynamics are quite challenging.
When approached from the perspective of the theory of open quantum systems, <cit.> the challenge transforms into describing non-Markovian quantum dynamics of excitons interacting with their environment.
As standard theories (such as Redfield <cit.> and Förster <cit.> theories) do not meet this challenge, <cit.> various numerically exact methods have been developed.
Two most common foundations of these are (i) the Feynman–Vernon influence functional theory, <cit.> and (ii) the Nakajima–Zwanzig [time-convolution (TC)] generalized quantum master equation (GQME). <cit.>
The approaches rooted in (i) include the hierarchical equations of motion (HEOM) <cit.> and a host of path-integral and process-tensor-based methods. <cit.>
Each of these approaches develops a different representation of the so-called exact reduced evolution superoperator (or the dynamical map) 𝒰(t), which becomes its central object.
On the other hand, the main aim of the approaches originating from (ii) is to evaluate the exact memory-kernel superoperator 𝒦(t). <cit.>
All the above-referenced approaches are in general computationally intensive.
Their applications to realistic EET models, which feature a larger number of chromophores and/or structured spectral densities (SDs) of the exciton–environment interaction extracted from experiments or atomistic simulations, <cit.> can thus be impractical.
A need for computationally less demanding and reliable, although approximate, approaches to study EET dynamics in realistic multichromophoric models cannot be overemphasized.
Recent years have witnessed the emergence of many such approaches, among others quantum–classical methods, <cit.> and related quantum-chemical approaches to nonadiabatic dynamics (e.g., the surface hopping). <cit.>
One can also devise approximations to 𝒰 and 𝒦 starting from the formally exact expressions of (i) and (ii), respectively.
For example, the former give rise to cumulant-expansion-based approaches, <cit.> while the latter result in various approximations to the exact memory kernel. <cit.>
The authors of Ref. PhysRevE.86.011915 noted that already the second-order TC (TC2) GQME, which retains only the lowest-order (second-order) term 𝒦^(2)(t) in the expansion of 𝒦 in powers of the exciton–environment interaction, can qualitatively reproduce the main features of EET dynamics in multichromophoric systems.
The most obvious improvement over the TC2 GQME, also known as the (second) Born approximation (BA), <cit.> is to retain some of the higher-order contributions to 𝒦.
Such a route generally leads to involved expressions that might not always be convergent, and that can be practically evaluated only for the simplest models, where they show some improvements over the original theory. <cit.>
While already the lowest-order kernel of polaron-transformed QMEs <cit.> can significantly improve over TC2 GQME, decent results can also be obtained by partially resumming the perturbation expansion for 𝒦 using only low-order terms as the input.
Partial resummations are most commonly performed in the frequency domain, and one usually applies Padé or Landau–Zener resummation schemes to 𝒦^(2)(ω) and 𝒦^(4)(ω). <cit.>
Another possibility, the self-consistent resummation schemes strongly rooted in condensed-matter physics, <cit.> has received even less attention in this context.
Successful self-consistent improvements over the TC2 GQME have been reported in the context of quantum theory of electronic transport through molecular junctions. <cit.>
These approaches have mainly considered purely electronic quantum transport, when the role of the bath is played by electronic leads.
Their applications to the quantum transport in the presence of both electronic leads and molecular vibrations are much more recent and scarcer. <cit.>
A proposal for a self-consistent description of the dynamics of a general open quantum system has appeared only very recently. <cit.>
It leans on a diagrammatic representation of the perturbation series for 𝒰(t), which has indeed appeared in the context of open quantum dynamics, see, e.g., Refs. PhysRevA.41.6676,ChemPhys.347.185,JChemPhys.153.244122.
The novelty of the approach by Scarlatella and Schirò <cit.> lies in the subsequent formulation of the perturbation series for 𝒦(t), which is related to 𝒰(t) via the TC GQME.
In the context of the theory of Green's functions in quantum many-body systems, <cit.> the memory kernel 𝒦(t) is analogous to the self-energy, whereas the TC GQME plays the role of the Dyson equation.
Considering the zero-temperature dynamics of the spin–boson model, Scarlatella and Schirò <cit.> find that performing the self-consistent cycle starting from the BA memory kernel 𝒦^(2) produces promising results.
However, they consider only the zero temperature and SDs commonly used when studying the spin–boson model in its narrowest sense. <cit.>
Moreover, their time-domain formulation of the self-consistent Born approximation (SCBA) involves solving an integro-differential equation in each iteration of the cycle.
This study explores the applicability of the self-consistent memory-kernel resummation scheme introduced in Ref. scarlatella2023selfconsistent to EET dynamics in multichromophoric aggregates.
For the sake of completeness, we first reconsider the theoretical developments of Ref. scarlatella2023selfconsistent, and make them technically simpler by working in the Liouville space and replacing the above-mentioned integro-differential equation in the time domain by a matrix equation in the frequency domain.
In a molecular dimer coupled to an overdamped phonon environment, we find that the self-consistent cycle greatly improves the original BA (TC2 GQME) over a wide range of dimer parameters.
This success of the SCBA is rationalized by considering the analytically tractable example of coherence dephasing in the pure-dephasing model.
Remarkably, we find that the SCBA remains reliable even in the generally difficult regimes of strong exciton–environment interactions and/or slow environments, when multiply environmentally assisted processes dominate the dynamics, <cit.> as well as at low temperatures.
Considering exciton dynamics modulated by a single underdamped vibrational mode, we conclude that the SCBA delivers good results only when the vibrational energy is resonant with the exciton–energy gap.
Upon including overdamped phonon continuum on top of a number of underdamped modes, we find that the SCBA delivers decent results for EET dynamics in the seven-site model of the FMO complex interacting with the realistic environment extracted from atomistic simulations.
The paper is organized as follows.
Section <ref> introduces our theoretical framework, whose applicability to EET dynamics is assessed in Sec. <ref>.
Section <ref> summarizes our main findings and discusses prospects for future work.
§ THEORETICAL CONSIDERATIONS
§.§ Model
We model EET dynamics in an aggregate composed of N chromophores using the Frenkel–Holstein Hamiltonian
H=H_S+H_B+H_S-B.
We take into account only the ground and one excited state on each chromophore, so that the purely electronic part H_S reads as
H_S=∑_nε_n|n⟩⟨ n|+∑_m≠ nJ_mn|m⟩⟨ n|.
Here, |n⟩ is the collective singly excited state residing on chromophore n (all the other chromophores are unexcited), ε_n is the energy of the vertical transition from the ground state to the excited state on chromophore n, while J_mn is the resonance (typically dipole–dipole) coupling between chromophores m and n.
The aggregate is in contact with the environment modeled as collections of mutually independent harmonic oscillators associated with each chromophore:
H_B=∑_nξω_nξb_nξ^† b_nξ.
Here, ω_nξ is the frequency of oscillator ξ on chromophore n, while bosonic operators b_nξ^† (b_nξ) create (annihilate) the corresponding oscillation quantum and obey [b_nξ,b_mξ']=0,[b_nξ,b_mξ'^†]=δ_nmδ_ξξ'.
The exciton–environment interaction is of the Holstein type, i.e., local and linear in oscillator displacements:
H_S-B=∑_nξg_nξV_n(b_nξ+b_nξ^†),
where V_n=|n⟩⟨ n|.
Its strength, determined by the interaction constants g_nξ, is more conveniently described using the reorganization energy on chromophore n
λ_n=∑_ξg_nξ^2/ω_nξ=∫_-∞^+∞dω/2π𝒥_n(ω)/ω,
where the SD of the exciton–environment interaction
𝒥_n(ω)=π∑_ξ g_nξ^2[δ(ω-ω_nξ)-δ(ω+ω_nξ)]
is typically a continuous function of ω.
We focus on exciton dynamics starting from the factorized initial condition
W(0)=ρ(0)ρ_B^eq,
where ρ(0) is the excitonic reduced density matrix (RDM) at the initial instant t=0, while
ρ_B^eq=e^-β H_B/Tr_B e^-β H_B
represents the state of the environment (equilibrium at temperature T=β^-1) with no excitons present.
Within the Condon approximation, <cit.> this choice of the initial condition leads to the dynamics that can be probed in ultrafast nonlinear spectroscopies. <cit.>
As our formalism deals with the Green's function, we can still reconstruct the dynamics under arbitrary excitation condition as long as the light–matter interaction is weak and the Condon approximation is valid. <cit.>
The only environmental quantity influencing the reduced excitonic dynamics starting from Eq. (<ref>) is the displacement autocorrelation function <cit.>
C_n(t) =∑_ξ g_nξ^2Tr_B{[b_nξ(t)+b_nξ^†(t)](b_nξ+b_nξ^†)ρ_B^eq}
=∫_-∞^+∞dω/πe^-iω t𝒥_n(ω)/1-e^-βω.
The time dependence in Eq. (<ref>) is taken with respect to H_B, i.e., b_nξ(t)=e^-iω_nξtb_nξ.
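Equation (<ref>) is straightforward to evaluate numerically. The following minimal Python sketch does so by direct quadrature on a symmetric frequency grid, taking for concreteness an overdamped-Brownian-oscillator SD of the form used later in Eq. (<ref>); the function names, default parameters (in cm^-1, with ħ=k_B=1), and grid choices are ours.

import numpy as np

CM_TO_RAD_PER_PS = 2.0 * np.pi * 2.99792458e-2  # 1 cm^-1 expressed in rad/ps

def spectral_density(w, lam=100.0, gam=53.08):
    # overdamped-Brownian-oscillator form, cf. Eq. (J_ph); odd in w by construction
    return 2.0 * lam * w * gam / (w ** 2 + gam ** 2)

def bath_correlation(t_ps, temp=208.5, wmax=4000.0, nw=40000):
    # C(t) = int dw/pi e^{-i w t} J(w) / (1 - e^{-w/T}); temp = 208.5 cm^-1 ~ 300 K
    dw = wmax / nw
    w = (np.arange(-nw, nw) + 0.5) * dw  # half-integer grid skips the removable w = 0 point
    weight = spectral_density(w) / (1.0 - np.exp(-w / temp)) * dw / np.pi
    ts = np.atleast_1d(np.asarray(t_ps, dtype=float))
    return np.array([(np.exp(-1j * CM_TO_RAD_PER_PS * t * w) * weight).sum()
                     for t in ts])

# example: C(t) over the first picosecond
# C = bath_correlation(np.linspace(0.0, 1.0, 101))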
§.§ Real-time diagrammatic representation: Green's superoperator and self-energy superoperator
The assumption embodied in Eq. (<ref>) permits us to define the exact reduced evolution superoperator (dynamical map) 𝒰(t) by
|ρ(t)=𝒰(t)|ρ(0).
Equation (<ref>) is formulated in the Liouville space, <cit.> in which the excitonic RDM at t>0, represented by the operator ρ(t) in the Hilbert space, becomes the N^2-component vector |ρ(t).
The reduced evolution superoperator is then a tetradic quantity comprising N^4 entries e'_2e'_1|𝒰(t)|e_2e_1, where {|e⟩} is an arbitrary basis in the single-exciton manifold.
𝒰(t) is formally expressed as (see, e.g., Ref. PhysRevB.82.235307 and references therein)
𝒰(t)=e^-iℒ_StTr_B{𝒯_te^-i∫_0^t ds ℒ_S-B^(I)(s)ρ_B^eq},
where the Liouvillian ℒ_a associated with the term H_a (a=S,B,S-B) in Eq. (<ref>) is defined by its action on an arbitrary Lioville-space vector |O corresponding to the Hilbert-space operator O, ℒ_a|O↔ [H_a,O], while the interaction-picture counterpart of ℒ_S-B is
ℒ_S-B^(I)(t)=e^i(ℒ_S+ℒ_B)tℒ_S-Be^-i(ℒ_S+ℒ_B)t.
The time-ordering sign 𝒯_t imposes the chronological order (latest to the left) among the Liouvillians in the expansion of Eq. (<ref>) in powers of the exciton–environment interaction.
The Liouville-space approach adopted here is somewhat different from standard real-time approaches to the RDM of a particle in oscillator environment, <cit.> which consider the forward and backward Hilbert-space evolution operators separately.
Dealing with superoperators, we simplify the formalism, as we consider only the forward evolution in the Liouville space. <cit.>
The average in Eq. (<ref>) can be performed term-by-term using the Wick's theorem, <cit.> and only even-order powers in ℒ_S-B remain.
The order 2k (k≥ 1) consists of (2k-1)!! terms.
Further manipulations usually proceed in two different manners.
The first possibility is to observe that, because of the 𝒯_t sign, all the terms appearing in any given order are mutually identical. <cit.>
Retaining the 𝒯_t sign, one obtains the well-known Feynman–Vernon expression <cit.>
𝒰(t)=e^-iℒ_St𝒯_te^-Φ(t),
with
Φ(t)=∑_n∫_0^t ds_2∫_0^s_2ds_1 V_n^(I)(s_2)^×
[C_n^r(s_2-s_1)V_n^(I)(s_1)^×+iC_n^i(s_2-s_1)V_n^(I)(s_1)^∘].
In Eq. (<ref>), the superoperators V^× and V^∘ are defined by the correspondences V^×|O↔[V,O] and V^∘|O↔{V,O} (anticommutator), respectively.
Equations (<ref>) and (<ref>) serve as the starting point for HEOM and path integral-based approaches.
The possibility we opt for here is to make the time ordering explicit in each term, in which case the terms in any given order appear as different.
Further developments are facilitated by considering the (retarded) evolution superoperator [the (retarded) Green's function or Green's superoperator] <cit.>
𝒢(t)=-iθ(t)𝒰(t)
instead of 𝒰(t).
Each term in order 2k of the expansion of 𝒢(t) can be represented by a diagram [Fig. <ref>(d)] comprising a total of 2k+2 (2 terminal and 2k internal) instants
s_2k+1=t≥ s_2k≥…≥ s_1≥ 0=s_0,
and k chromophore indices n_k,…,n_1 associated with k pairs selected from the set {s_2k,…,s_1} by the application of the Wick's theorem.
The consecutive instants s_j+1 and s_j (0≤ j≤ 2k) of a diagram are connected by a straight line directed from s_j to s_j+1 [Fig. <ref>(d)] and representing the (retarded) free-exciton Green's superoperator [Fig. <ref>(b)]
𝒢_S(t)=-iθ(t)e^-iℒ_St.
The directed double-line represents 𝒢(t) [Fig. <ref>(a)].
Dashed circumferences connect instants s_l and s_j (s_l≥ s_j) that are paired by the Wick's theorem, see Fig. <ref>(c).
These instants are accompanied by the same chromophore index n_jl [see Eq. (<ref>)] and the following superoperators [cf. Eq. (<ref>)]:
s_l↔ V_n_jl^×,
s_j↔ C_n_jl^r(s_l-s_j)V_n_jl^×+iC_n_jl^i(s_l-s_j)V_n_jl^∘.
Having placed all the superoperators in a chronologically ordered string (reading diagrams from left to right), one performs integrations ∫_0^t ds_2k…∫_0^s_2ds_1 over 2k internal instants, and summations over k independent chromophore indices.
Any diagram is either reducible or irreducible, depending on whether it can be cut in two by cutting a free Green's superoperator line or not. <cit.>
Examples of irreducible diagrams in Fig. <ref>(d) are diagrams (2), (4.2), and (4.3), while the diagram (4.1) is reducible.
Amputating the external lines corresponding to 𝒢_S(s_1) and 𝒢_S(t-s_2k) of irreducible diagrams, one obtains the diagrammatic representation of the so-called (retarded) self-energy superoperator, see Fig. <ref>(f).
The self-energy superoperator thus consists of all amputated diagrams that cannot be cut in two by cutting a free Green's superoperator line.
One can then derive the following Dyson equation: <cit.>
𝒢(t)=𝒢_S(t)
+∫_-∞^+∞ ds_2∫_-∞^+∞ds_1 𝒢_S(t-s_2)Σ(s_2-s_1)𝒢(s_1),
whose diagrammatic representation is shown in Fig. <ref>(e).
All the superoperators in Eq. (<ref>) are retarded, and the integrals in Eq. (<ref>) can run from -∞ to +∞, which facilitates the transition to the frequency domain.
Rewriting Eq. (<ref>) as an equation for 𝒰(t), inserting the result thus obtained into Eq. (<ref>), and differentiating with respect to t, one obtains the GQME
∂_t|ρ(t)=-iℒ_S|ρ(t)-∫_0^t ds 𝒦(t-s)|ρ(s),
where the relation between the memory kernel 𝒦 and the retarded self-energy Σ is
Σ(t)=-iθ(t)𝒦(t).
Equation (<ref>) differs from the standard GQME only by the representation of the memory kernel.
While the memory kernel is commonly expressed in terms of projection superoperators, here, we provide its representation as a perturbation expansion in the exciton–environment interaction.
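To illustrate how Eq. (<ref>) is integrated in practice once ℒ_S and 𝒦(t) are available as matrices on a time grid, a minimal first-order scheme reads as follows; the rectangle-rule discretization and all names are our own choices, and a higher-order Volterra integrator would normally be preferred.

import numpy as np

def propagate_gqme(L_S, K, rho0, dt, nsteps):
    # Euler step for d|rho)/dt = -i L_S |rho) - int_0^t ds K(t-s) |rho(s)), Eq. (GQME)
    # L_S : (d, d) free Liouvillian; K : (nsteps + 1, d, d) kernel sampled at m * dt
    rho = np.empty((nsteps + 1, len(rho0)), dtype=complex)
    rho[0] = rho0
    for m in range(nsteps):
        mem = dt * sum(K[m - j] @ rho[j] for j in range(m + 1))  # rectangle rule
        rho[m + 1] = rho[m] + dt * (-1j * (L_S @ rho[m]) - mem)
    return rho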
§.§ Frequency-domain description
Resummation techniques usually require transferring to the frequency space.
This is most conveniently done starting from Eq. (<ref>) and forming the dynamical equation for 𝒢(t)
∂_t𝒢(t)=-iδ(t)-iℒ_S𝒢(t)-i∫_-∞^+∞ds Σ(t-s)𝒢(s).
With the standard definition of the frequency-dependent quantities
𝒢(t)=∫_-∞^+∞dω/2π e^-iω t𝒢(ω),
Eq. (<ref>) becomes the following algebraic equation:
𝒢(ω)=[ω-ℒ_S-Σ(ω)]^-1.
The Hermitian property of the RDM implies that 𝒢(ω) satisfies
e'_2 e'_1|𝒢(ω)|e_2e_1=- e'_1e'_2|𝒢(-ω)|e_1e_2^*.
This property shows that it is sufficient to compute 𝒢(ω) only for ω≥ 0, while the values for negative frequencies follow from this symmetry property.
We additionally note that the same symmetry property also characterizes the inverse Green's superoperator 𝒢^-1(ω), as well as the self-energy superoperator Σ(ω), which now carries the dimension of energy.
The frequency-domain diagrammatic representations of 𝒢(ω) and Σ(ω) appear the same as the time-domain representations in Figs. <ref>(d) and <ref>(f), respectively.
The general rules for translating diagrams into formulae can be inferred from the discussion in Sec. <ref>.
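In the Liouville space, Eq. (<ref>) amounts to one N^2× N^2 matrix inversion per frequency point. A minimal sketch, assuming row-major vectorization of operators [so that AOB↔(A⊗ B^T)|O] and with names of our choosing, reads:

import numpy as np

def commutator_superop(A):
    # A^x : |O) -> |[A, O])
    I = np.eye(A.shape[0])
    return np.kron(A, I) - np.kron(I, A.T)

def anticommutator_superop(A):
    # A^o : |O) -> |{A, O})
    I = np.eye(A.shape[0])
    return np.kron(A, I) + np.kron(I, A.T)

def greens_superop(w, L_S, Sigma_w, eta=1.0):
    # G(w) = [w + i eta - L_S - Sigma(w)]^{-1}, the frequency-domain Dyson equation
    d = L_S.shape[0]
    return np.linalg.inv((w + 1j * eta) * np.eye(d) - L_S - Sigma_w)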
§.§ Born and Redfield approximations
The lowest, second-order approximation to Σ is known as the (second) BA. <cit.>
The corresponding self-energy superoperator, shown in Fig. <ref>(a1), reads as (see also Refs. JChemPhys.132.194111,PhysRevA.96.032105)
Σ_BA(t)=∑_nV_n^×𝒢_S(t)[C_n^r(t)V_n^×+iC_n^i(t)V_n^∘],
while its frequency-domain counterpart is
Σ_BA(ω)=∑_n∫_-∞^+∞dν/2π𝒥_n(ω-ν)
{coth(β(ω-ν)/2)V_n^×𝒢_S(ν)V_n^×+V_n^×𝒢_S(ν)V_n^∘}.
Upon inserting 𝒦_BA(t)=iΣ_BA(t) into Eq. (<ref>), we obtain the well-known TC2 GQME <cit.>
∂_tρ(t)=-i[H_S,ρ(t)]
-∑_n[V_n,∫_0^t ds C_n(t-s)e^-iH_S(t-s)V_nρ(s)e^iH_S(t-s)]
+∑_n[V_n,∫_0^t ds C_n(t-s)^*e^-iH_S(t-s)ρ(s)V_ne^iH_S(t-s)].
For delta-correlated noise characterized by the dephasing rate Γ, C_n(t)=Γδ(t), Eq. (<ref>) assumes the Lindblad form
∂_tρ(t)= -i[H_S,ρ(t)]
-Γ∑_n(1/2{V_n^2,ρ(t)}-V_nρ(t)V_n).
We will use this result in Sec. <ref>.
The time-independent and nonsecular Redfield theory (see, e.g., Sec. 3.8.2 of Ref. May-Kuhn-book or Sec. 11.2 of Ref. Valkunas-Abramavicius-Mancal-book)
∂_t|ρ(t)=-iℒ_S|ρ(t)-ℛ|ρ(t),
is obtained by inserting Eq. (<ref>) into Eq. (<ref>), and then performing the Markov and adiabatic approximations.
These approximations result in the delta-like self-energy in the time domain, Σ_Red(t)=Σ_Redδ(t), the frequency-independent self-energy Σ_Red(ω)=Σ_Red, and the Redfield tensor ℛ=iΣ_Red [see Eq. (<ref>)], where
Σ_Red=∫_-∞^+∞ds Σ_BA(s) e^iℒ_Ss.
The Redfield theory is formulated in the exciton basis {|x⟩} defined through H_S|x⟩=ω_x|x⟩.
Equation (<ref>) then implies that the Redfield-tensor matrix elements read
x'_2x'_1|ℛ|x_2x_1=i x'_2x'_1|Σ_BA(ω_x_2-ω_x_1)|x_2x_1.
For delta-correlated noise, C_n(t)=Γδ(t), the Redfield equation [Eq. (<ref>)] also assumes the Lindblad form [Eq. (<ref>)]. <cit.>
§.§ Self-consistent Born approximation
In this work, we improve upon the BA by replacing the free-exciton Green's superoperator 𝒢_S in Fig. <ref>(a1) or Eq. (<ref>) with the interacting Green's superoperator 𝒢, see Fig. <ref>(b1).
The resultant equation for the self-energy superoperator
Σ(ω)=∑_n∫_-∞^+∞dν/2π𝒥_n(ω-ν)
{coth(β(ω-ν)/2)V_n^×𝒢(ν)V_n^×+V_n^×𝒢(ν)V_n^∘}
is to be solved together with the Dyson equation [Eq. (<ref>)] in a self-consistent loop.
Namely, one starts from the free-exciton case, Σ^(0)(ω)=0, when Eq. (<ref>) gives the free-exciton Green's function (η→+0)
𝒢^(0)(ω)=𝒢_S(ω)=[ω+iη-ℒ_S]^-1.
𝒢^(0)(ω) is then inserted into Eq. (<ref>) to yield Σ^(1)(ω)=Σ_BA(ω), which is then inserted into Eq. (<ref>) to yield 𝒢^(1)(ω), etc.
The above-described procedure is repeated until the difference between two consecutive iterations Σ^(k)(ω) and Σ^(k-1)(ω) for the self-energy (k≥ 1) becomes smaller than a prescribed numerical tolerance (see Sec. <ref> for more details).
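A compact, unoptimized sketch of this self-consistent loop is given below; the stopping criterion anticipates the relative measure δΣ^(k) introduced in Sec. <ref>, the direct double loop over the frequency grid is our naive O(N_ω^2) choice, and the combination J(x)coth(x/2T) must be supplied with its finite x→0 limit built in (cf. the remark on the ν=ω point in Sec. <ref>).

import numpy as np

def scba_self_energy(H, V_list, J_of, cothJ_of, wgrid, eta=1.0, tol=1e-3, max_iter=100):
    # Self-consistent solution of the Dyson equation and the SCBA self-energy.
    # H : (N, N) exciton Hamiltonian (cm^-1); V_list : projectors |n><n|
    # J_of(x) : spectral density; cothJ_of(x) : J(x) coth(x / (2 T)),
    # with the finite x -> 0 limit (4 lam T / gam for the OBO SD) built in
    N = H.shape[0]
    I = np.eye(N)
    L_S = np.kron(H, I) - np.kron(I, H.T)
    Vx = [np.kron(V, I) - np.kron(I, V.T) for V in V_list]
    Vo = [np.kron(V, I) + np.kron(I, V.T) for V in V_list]
    d, dw = N * N, wgrid[1] - wgrid[0]
    Sigma = np.zeros((len(wgrid), d, d), dtype=complex)
    for _ in range(max_iter):
        G = np.array([np.linalg.inv((w + 1j * eta) * np.eye(d) - L_S - S)
                      for w, S in zip(wgrid, Sigma)])
        # building blocks V^x G(nu) V^x and V^x G(nu) V^o, summed over chromophores
        A = sum(np.einsum('ab,nbc,cd->nad', X, G, X) for X in Vx)
        B = sum(np.einsum('ab,nbc,cd->nad', X, G, O) for X, O in zip(Vx, Vo))
        Sigma_new = np.empty_like(Sigma)
        for i, w in enumerate(wgrid):
            x = w - wgrid
            Sigma_new[i] = dw / (2.0 * np.pi) * (
                np.tensordot(cothJ_of(x), A, axes=(0, 0))
                + np.tensordot(J_of(x), B, axes=(0, 0)))
        # relative change between iterations; equals 2 on the first pass
        delta = np.max(2.0 * np.abs(Sigma_new - Sigma)
                       / (np.abs(Sigma_new + Sigma) + 1e-30))
        Sigma = Sigma_new
        if delta < tol:
            break
    return Sigma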
Using either BA or SCBA for the self-energy superoperator, we do perform a partial resummation of the perturbation series for 𝒢 [Fig. <ref>(d)].
The resulting 𝒢_(SC)BA contains contributions from all orders in the exciton–environment interaction constant.
Nevertheless, the diagrammatic content of the SCBA is much richer than that of the BA, compare Fig. <ref>(b2) to <ref>(a2), suggesting that the SCBA is reliable in a much wider parameter range than the BA.
Still, the SCBA retains only a very limited subset of all possible diagrams appearing in Fig. <ref>(d), <cit.> and its reliability is to be carefully checked.
§ RESULTS
§.§ Technical details
Equation (<ref>) features an infinitesimally small frequency η, which ensures the causality, i.e., 𝒢_S(t)=0 for t<0.
We always shift ω→ω+iη on the right-hand side of the Dyson equation [Eq. (<ref>)], which, apart from the causality, ensures that the matrix inversion in Eq. (<ref>) is numerically stable.
However, the numerical Fourier transformation of 𝒢(ω) thus obtained produces the exponentially damped Green's superoperator 𝒢_η(t)=e^-η t𝒢(t).
The results for the true Green's superoperator 𝒢(t)=e^η t𝒢_η(t) are thus the most reliable for t≪η^-1.
In all our computations, we set η=1 cm^-1, meaning that our results for exciton dynamics are bound to be reliable for t≪ 5 ps.
While we find that our results in the real-time domain are free of finite-η effects on time scales beyond η^-1, we always show only the initial 2–3 ps of exciton dynamics.
Generally, 𝒢(ω) slowly decays towards zero as |ω|→+∞.
The dominant component of the high-frequency tail of 𝒢(ω) can be inferred by performing one partial integration of 𝒢(ω)=∫_0^+∞dt e^i(ω+iη)t𝒢(t), which results in
𝒢(ω)=1/(ω+iη)+𝒪(ω^-2), |ω|→+∞.
Deriving Eq. (<ref>), we use 𝒢(t=0)=-i1, where 1 denotes the unit operator in the Liouville space.
The strongly pronounced high-frequency tail of 𝒢(ω) means that we have to consider many values of ω in order for the discrete Fourier transform to produce decent results in the time domain.
This is to be avoided as computing 𝒢(ω) and Σ(ω) involves inversion of an N^2× N^2 matrix [Eq. (<ref>)] and numerical integration [Eq. (<ref>)], respectively.
Defining 𝒯(ω)=1/(ω+iη) and 𝒢^nt(ω)=𝒢(ω)-𝒯(ω), <cit.> the discrete (numerical) Fourier transformation of the non-tailed part 𝒢^nt(ω) produces 𝒢^nt(t), while the Fourier transformation of the high-frequency tail 𝒯(ω) can be performed analytically to yield 𝒯(t)=-iθ(t)e^-η t1.
Finally,
𝒢(t)=-iθ(t)1+e^η t𝒢^nt(t).
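The tail subtraction described by Eqs. (<ref>)–(<ref>) then takes only a few lines (a sketch with our naming; the frequency integral is approximated by a plain Riemann sum, t≥0 is assumed, and tgrid and wgrid must be supplied in mutually consistent units):

import numpy as np

def greens_superop_time(wgrid, G_w, tgrid, eta=1.0):
    # subtract T(w) = 1/(w + i eta), transform numerically, add T(t) back
    d = G_w.shape[-1]
    dw = wgrid[1] - wgrid[0]
    tail = 1.0 / (wgrid + 1j * eta)
    G_nt = G_w - tail[:, None, None] * np.eye(d)        # non-tailed part
    phase = np.exp(-1j * np.outer(tgrid, wgrid)) * dw / (2.0 * np.pi)
    G_nt_t = np.tensordot(phase, G_nt, axes=(1, 0))     # G^nt(t)
    # G(t) = -i theta(t) 1 + e^{eta t} G^nt(t)
    return -1j * np.eye(d)[None] + np.exp(eta * tgrid)[:, None, None] * G_nt_t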
As we assume local exciton–environment interaction [Eq. (<ref>)], 𝒢(ω) and Σ(ω) are most conveniently represented in the site basis {|n⟩}.
In all the examples to be discussed, we assume for simplicity that the environments of individual chromophores are identical.
The integration in Eq. (<ref>) expressing Σ in terms of 𝒢 is performed using the ordinary trapezoidal rule.
Because of the symmetry property in Eq. (<ref>), we perform numerical integration of Eq. (<ref>) from 0 to ω_max with frequency step Δω.
We take care to correctly treat the contribution around ν=ω, which is finite for Ohmic SDs.
We stop the self-consistent cycle once we achieve δΣ^(k)≤ε_tol, where (k≥ 1)
δΣ^(k)=max_{n'_2n'_1,n_2n_1}|2[ n'_2n'_1|Σ^(k)(ω)-Σ^(k-1)(ω)|n_2n_1]/[ n'_2n'_1|Σ^(k)(ω)+Σ^(k-1)(ω)|n_2n_1]|,
while ε_tol is the desired (relative) accuracy.
We summarize the values of adjustable numerical parameters (η,ω_max,Δω,ε_tol) involved in our computations in Table <ref>.
Figures <ref>(a) and <ref>(b) respectively show δΣ^(k) as a function k when the SCBA is used on the dimer in the overdamped phonon continuum [the examples analyzed in Figs. <ref>(a)–<ref>(d)] and on the seven-site FMO model in the realistic environment [the examples analyzed in Figs. <ref>(b1) and <ref>(b2)].
Notably, the convergence of the self-consistent algorithm is achieved in a couple of tens of steps, even in a multichromophoric system immersed in a structured environment.
After its initial increase with k, starting from the value of 2 [see the text above Eq. (<ref>)], δΣ^(k) decreases in a power-law fashion for sufficiently large k.
In both BA and SCBA, the trace of the RDM is preserved because of the outermost commutator in self-energies in Eqs. (<ref>) and (<ref>).
The RDM positivity is a much subtler issue, but we observe that, whenever SCBA improves over the BA, the results of both approximations conform to the positivity requirement on the time scales analyzed.
As we are primarily interested in examining the reliability of the BA and SCBA, we use
|ρ(0)=1/N∑_n_2n_1|n_2n_1
as the initial condition in all numerical computations.
Apart from its simplicity, this initial condition provides a fairer assessment of the approximation performance than the widely used initial condition |n_0n_0, in which the exciton is placed at chromophore n_0.
In more detail, exciton dynamics in an arbitrary basis {|e⟩} is
e_2e_1|ρ(t)=∑_n_2n_1,n'_2n'_1⟨ e_2|n_2⟩⟨ n_1|e_1⟩ n_2n_1|𝒢(t)|n'_2n'_1 n'_2n'_1|ρ(0).
If the exciton is initially placed at chromophore n_0, its subsequent dynamics is determined by only N^2 matrix elements n_2n_1|𝒢(t)|n_0n_0 of 𝒢 out of the total of N^4 elements.
Quite generally, <cit.> the quality of approximate dynamics is different for different matrix elements of 𝒢, that is, for different starting chromophores n_0.
Inserting the initial condition of Eq. (<ref>) into Eq. (<ref>), we can assess the overall approximation performance, which is effectively "averaged" over different matrix elements of 𝒢.
§.§ Reliability of the SCBA: Asymmetric dimer
The advantages and shortcomings of the SCBA are most transparently identified on the simplest model relevant for EET, the molecular dimer.
In the site basis {|1_n⟩,|2_n⟩}, the exciton Hamiltonian H_S is represented by the matrix
H_S=[ Δε J; J 0 ],
where Δε is the site-energy gap, while J is the resonance coupling.
The exciton state of lower (higher) energy is denoted as |1_x⟩ (|2_x⟩).
In Secs. <ref>–<ref>, we consider exciton dynamics in the featureless phonon environment described by the overdamped Brownian oscillator (OBO) SD <cit.>
𝒥_ph(ω)=2λ_phωγ_ph/ω^2+γ_ph^2,
where λ_ph is the reorganization energy, while γ_ph^-1 determines the environment-reorganization time scale.
Table <ref> summarizes the default values of model parameters, which are broadly representative of photosynthetic aggregates.
The performance of our approximations upon varying these parameters is studied in Secs. <ref> and <ref>.
In Sec. <ref>, we assume the exciton interacts with a single underdamped vibrational mode, so that the interaction can be modeled by the underdamped Brownian oscillator SD <cit.>
𝒥_vib(ω)=S_0ω_0[ωγ_0/(ω-ω_0)^2+γ_0^2+ωγ_0/(ω+ω_0)^2+γ_0^2],
where ω_0 is the vibrational frequency, S_0 the Huang–Rhys factor, while γ_0 is the relaxation rate.
Quite generally, γ_0≪ω_0 and S_0≪ 1.
The values used in benchmarks are taken from Ref. JChemPhys.157.095103 and listed in Sec. <ref>.
§.§.§ Overdamped phonon continuum. Variations in excitonic parameters
In this section, we fix the exciton–environment interaction parameters λ_ph and γ_ph, and study the quality of SCBA and BA for different excitonic parameters J and Δε.
Figure <ref> provides an overall assessment of the performance of BA [Fig. <ref>(a)] and SCBA [Fig. <ref>(b)], and of the improvement of the SCBA over the BA introduced by the self-consistent cycle [Fig. <ref>(c)].
A convenient performance measure is the trace distance between the approximate [ρ_(SC)BA(t)] and numerically exact [ρ_HEOM(t)] RDM.
As the trace distance is time-dependent, while we also limit ourselves to the short-time dynamics (see Sec. <ref>), we quantify the performance of our approximations using the maximum trace distance over the time window [0,t_max]:
𝒟((SC)BA|HEOM)=max_{0≤ t≤ t_max}(1/2)∑_k=1^2 |r_k^(SC)BA-HEOM(t)|,
where r_k^(SC)BA-HEOM(t) is the k-th eigenvalue of the operator ρ_(SC)BA(t)-ρ_HEOM(t).
We set t_max=2 ps in Eq. (<ref>).
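For reference, Eq. (<ref>) reduces to a few lines of code once the approximate and exact RDM trajectories are stored on a common time grid (a sketch; the array conventions are ours):

import numpy as np

def max_trace_distance(rho_approx, rho_exact):
    # rho_* : (nt, 2, 2) Hermitian RDM trajectories sampled on [0, t_max]
    eigs = np.linalg.eigvalsh(rho_approx - rho_exact)  # the r_k(t) of Eq. (D)
    return 0.5 * np.abs(eigs).sum(axis=-1).max()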
The colorbar ranges in Figs. <ref>(a) and <ref>(b) suggest that the SCBA is generally a better approximation to the exact dynamics than the BA.
To quantify the improvement of the SCBA over the BA, in Fig. <ref>(c) we plot the ratio 𝒟(BA|HEOM)/𝒟(SCBA|HEOM), which we find to be greater than or equal to unity for all the pairs (J,Δε) examined.
The larger the ratio, the more pronounced the improvement of the SCBA over the BA.
Figures <ref>(a) and <ref>(b) show that the reliability of both BA and SCBA generally improves with increasing J and/or decreasing Δε.
This suggests that the approximations are best suited for relatively delocalized excitons, when the mixing angle θ∈[0,π/4], defined as tan(2θ)=2J/Δε, significantly deviates from zero.
Still, for Δε=0, when excitons are perfectly delocalized, the quality of both BA and SCBA increases with increasing J, i.e., decreasing λ_ph/J.
The impact of the exciton–environment interaction on the approximation reliability will be analyzed in detail in Sec. <ref>.
Figure <ref>(c) reveals that the improvement of SCBA over BA is the most pronounced in the region of moderate resonance coupling 50 cm^-1≲ J≲ 150 cm^-1 and large site-energy gap Δε≳ 200 cm^-1.
The improvement is also appreciable for small resonance coupling, irrespective of the value of Δε, when both BA and SCBA perform relatively poorly, see Figs. <ref>(a) and <ref>(b).
§.§.§ Overdamped phonon continuum. Analytical insights into the pure-dephasing model
For J=0, the model reduces to the pure-dephasing model, in which there is no population dynamics, but only dephasing of the initially present interexciton coherences.
The superoperators entering Eq. (<ref>) are then time independent, and combining Eqs. (<ref>)–(<ref>) we readily obtain the following exact expression for the reduced evolution superoperator:
n'_2n'_1|𝒢(t)|n_2n_1=-iθ(t)δ_n'_2n_2δ_n'_1n_1{δ_n_2n_1+(1-δ_n_2n_1)e^-iε_n_2n_1te^-2g^r(t)},
where g^r(t)=∫_0^t ds_2∫_0^s_2ds_1 C^r(s_1) is the real part of the lineshape function, whereas ε_n_2n_1=ε_n_2-ε_n_1 (for the dimer, ε_n_2n_1=±Δε).
The derivation of Eq. (<ref>), in which only g^r appears, crucially relies on our assumption that individual-chromophore environments are identical.
While the exact coherence dynamics, which follows from Eqs. (<ref>), (<ref>), and (<ref>), can be recovered from the time-convolutionless second-order QME, <cit.> the corresponding BA and SCBA results remain only approximations to the exact solution, as both involve an explicit convolution in the time domain.
Still, relevant analytical insights concerning the (SC)BA can be obtained for the pure-dephasing model in the high-temperature limit 2π T≫γ_ph.
In Appendix <ref>, we derive that the matrix elements of the exact self-energy superoperator n_2n_1|Σ(ω)|n_2n_1 for n_2≠ n_1 have the following continued-fraction expansion (CFE):
n_2n_1|Σ(ω)|n_2n_1=
4λ_phT/[ω-ε_n_2n_1+iγ_ph-2· 4λ_phT/[ω-ε_n_2n_1+2iγ_ph-3· 4λ_phT/[ω-ε_n_2n_1+3iγ_ph-⋯]]].
Truncating the CFE in the first layer, we obtain the BA result (n_2≠ n_1)
n_2n_1|Σ_BA(ω)|n_2n_1=4λ_phT/ω-ε_n_2n_1+iγ_ph.
The same result follows from Eq. (<ref>) upon approximating coth(β(ω-ν)/2)≈2T/(ω-ν), as appropriate in the high-temperature limit, and performing a contour integration by closing the contour in the upper half-plane.
In the pure-dephasing model, we can use Eqs. (<ref>) and (<ref>) to obtain the matrix elements of the self-energy superoperator in the Redfield theory
n_2n_1|Σ_Red|n_2n_1=-i4λ_phT/γ_ph.
Appendix <ref> also demonstrates that the CFE of the SCBA self-energy reads
n_2n_1|Σ_SCBA(ω)|n_2n_1=4λ_phT/[ω-ε_n_2n_1+iγ_ph-4λ_phT/[ω-ε_n_2n_1+2iγ_ph-4λ_phT/[ω-ε_n_2n_1+3iγ_ph-⋯]]].
Comparing Eq. (<ref>) to Eq. (<ref>), we find that the CFE of the SCBA result can be obtained from the exact result by changing all the coefficients multiplying 4λ_phT in the CFE numerators to unity.
While this could suggest that the SCBA is, in general, a poor approximation to the exact solution, <cit.> we note that the broadening factors are the same at each CFE denominator in both the SCBA and exact self-energy.
One can then expect that the SCBA is superior to both the BA and the Redfield theory at reproducing the timescale of coherence dephasing.
This expectation is confirmed in Fig. <ref>(a) comparing different approximations to the coherence dynamics in a symmetric (Δε=0) pure-dephasing dimer.
The Redfield theory predicts an excessively fast coherence dephasing whose analytical form reads as [see Appendix <ref> and the circles in Fig. <ref>(a)]
1_n2_n|ρ_Red(t)=(1/2)exp(-4λ_phT t/γ_ph).
On the contrary, the BA predicts an excessively slow coherence dephasing that can be reasonably described by [see Appendix <ref> and the down-triangles in Fig. <ref>(a)]
1_n2_n|ρ_BA(t)≈(1/2)cos(2√(λ_phT) t)e^-γ_ph t/2.
The exact coherence-dephasing timescale is in between the results of the Redfield theory and BA, while the corresponding exact dynamics can be reasonably approximated by [see Appendix <ref> and the up-triangles in Fig. <ref>(a)]
1_n2_n|ρ(t)≈(1/2)exp(-2λ_phT t^2).
The SCBA indeed reproduces the correct order of magnitude of the dephasing timescale, although it displays oscillations similar to that predicted by Eq. (<ref>).
The performance of different approximations can also be inferred from (the imaginary part of) the corresponding self-energy profile presented in Fig. <ref>(a).
The self-energy within the Redfield theory [Eq. (<ref>)] does not bear even a qualitative resemblance to the exact result.
The BA, SCBA, and the exact result all display a peak centered around ω=0.
The BA peak is the narrowest and highest and has a Lorentzian shape whose full width at half-maximum is determined by γ_ph only, see Eq. (<ref>).
The exact peak [Eq. (<ref>)] is much broader than the BA peak, while the width of the SCBA peak [Eq. (<ref>)] is somewhat smaller than, yet comparable to, the width of the exact peak.
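The BA, SCBA, and exact self-energies entering Fig. <ref>(a) can be reproduced directly from the CFEs above by a bottom-up evaluation truncated at a sufficiently large depth; the cutoff below is our choice and should be increased until convergence.

import numpy as np

def sigma_cfe(w, eps, lam, T, gam, depth=400, exact=True):
    # bottom-up evaluation of the continued fractions: numerators k * 4 lam T
    # reproduce the exact expression, numerators 4 lam T the SCBA one
    val = 0.0j
    for k in range(depth, 0, -1):
        num = (k if exact else 1) * 4.0 * lam * T
        val = num / (w - eps + 1j * k * gam - val)
    return val

def sigma_ba(w, eps, lam, T, gam):
    # first layer only, the BA self-energy
    return 4.0 * lam * T / (w - eps + 1j * gam)

Plotting the negative imaginary parts of these three functions over ω for the parameters of Fig. <ref>(a) reproduces the peak heights and widths discussed above.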
§.§.§ Overdamped phonon continuum. Variations in exciton–environment interaction parameters and temperature
Here, we fix exciton parameters Δε and J, and vary λ_ph and γ_ph to examine the reliability of BA [Fig. <ref>(a)] and SCBA [Fig. <ref>(b)], as well as the improvement over the BA brought about by the self-consistent cycle [Fig. <ref>(c)].
Overall, we find that the quality of both BA and SCBA improves with decreasing the reorganization energy and/or shortening the environment-reorganization timescale, see Figs. <ref>(a) and <ref>(b).
Figure <ref>(c) shows that the SCBA is always better at approximating the exact dynamics than the BA.
Within the range of values of λ_ph and γ_ph^-1 typically used in models of photosynthetic EET (25 cm^-1≲λ_ph≲ 250 cm^-1 and 50 fs≲γ_ph^-1≲ 200 fs), <cit.> the self-consistent cycle significantly improves BA results, see the dashed-line rectangle in Fig. <ref>(c).
Notably, for very fast environments (γ_ph^-1≲ 10 fs), we find that the quality of BA and SCBA is virtually the same, i.e., 𝒟(SCBA|HEOM)≈𝒟(BA|HEOM), for all reorganization energies examined.
To understand this, we observe that when γ_ph→∞ and T→∞ so that γ_ph/T→ 0, our model reduces to the well-known Haken–Strobl–Reineker white-noise model, <cit.> within which C_n(t)=Γδ(t), with Γ=2λ_phT/γ_ph.
Inserting this C_n(t) into Eq. (<ref>) for the exact evolution superoperator 𝒰(t), we conclude that the exact equation of motion for ρ(t) coincides with the Lindblad-like Eq. (<ref>).
To reach this conclusion, we use that the white-noise assumption renders the time-ordering sign upon differentiating Eq. (<ref>) effective only on the superoperator e^-Φ(t) and ineffective on the superoperator ∂_tΦ(t).
Our discussion in Sec. <ref> then implies that the BA, and even the Redfield theory, become exact in the white-noise limit.
The exactness of the BA implies that the self-consistent cycle cannot improve on BA any further, meaning that the SCBA result is also exact in the white-noise limit.
Figure <ref>(c) shows that the improvement of the SCBA over the BA is the most pronounced for sluggish environments or large reorganization energies.
Both observations are ultimately rooted in the richer diagrammatic content of the SCBA compared to that of the BA, see Fig. <ref>, and are remarkable as computing exciton dynamics modulated by strong exciton–environment interactions and/or slow environments has to take into account higher-order environmentally assisted processes. <cit.>
Similar challenges are encountered when studying exciton dynamics at low temperatures.
Additional insights into the quality of different approximations can be gained from Fig. <ref>, which compares approximate and numerically exact exciton dynamics for the default values of model parameters [Figs. <ref>(a1) and <ref>(a2)], slow environment [γ_ph=5 cm^-1, Figs. <ref>(b1) and <ref>(b2)], strong exciton–environment interaction [λ_ph=200 cm^-1, Figs. <ref>(c1) and <ref>(c2)], and low temperature [T=77 K, Figs. <ref>(d1) and <ref>(d2)].
The overall performance of approximate methods is essentially as discussed in Sec. <ref>.
The Redfield theory shows pronounced deviations from the exact result already on shortest time scales, whereas the BA and SCBA reproduce the very initial stages (t≲ 50 fs) of the exact dynamics quite well.
While the subsequent dynamics within the BA generally exhibits oscillatory features that are damped relatively slowly, cf. Fig. <ref>(a), the SCBA dynamics of exciton populations and the interexciton coherence follows the corresponding HEOM results very reasonably.
The SCBA approximates the true exciton-population dynamics in both realistically slow [Fig. <ref>(a1)] and excessively slow [Fig. <ref>(b1)] environments quite well, both on subpicosecond and somewhat longer time scales.
On the other hand, Figs. <ref>(a2) and <ref>(b2), as well as the insets of Figs. <ref>(a1) and <ref>(b1), suggest that the SCBA is not that good at reproducing the true interexciton-coherence dynamics.
Still, it does capture the correct long-time behavior (the imaginary part of the interexciton coherence tends to zero, and the real part tends to a non-zero value).
For strong interactions and at low temperatures, the subpicosecond dynamics of exciton populations within the SCBA is quite close to the true dynamics, see Figs. <ref>(c1) and <ref>(d1), while some quantitative differences between them appear on longer time scales (these are more pronounced for the higher-energy exciton state).
On the other hand, the SCBA dynamics of the interexciton coherence agrees well with the corresponding numerically exact dynamics, especially at lower temperatures, see Figs. <ref>(c2) and <ref>(d2) and the insets of Figs. <ref>(c1) and <ref>(d1).
Finally, the convergence of the self-consistent algorithm [Fig. <ref>(a)] generally slows down with increasing the interaction energy or the memory time scale, while its pace appears to be relatively weakly affected by temperature variations.
§.§.§ Overdamped phonon continuum. Self-energy superoperator
We formulate our approximate approaches in the frequency domain, with the self-energy (memory-kernel) superoperator as their central quantity, and Figs. <ref>(a)–<ref>(d) discuss the reflections of the above-summarized time-domain observations on the frequency domain.
We choose the slow-environment regime analyzed in Fig. <ref>(b) and concentrate on the matrix elements 2_x2_x|Σ(ω)|2_x2_x and 1_x2_x|Σ(ω)|2_x2_x describing respectively the population flux out of the higher-energy exciton state and the population-to-coherence transfer from that state.
The results in Figs. <ref>(a)–<ref>(d) are obtained setting the artificial-broadening parameter to η=1 cm^-1, and we have checked that varying η over the range [0.5,5] cm^-1 does not qualitatively (and to a large extent quantitatively) affect the results presented here.
Within the BA, the imaginary parts of both self-energy matrix elements display very narrow peaks centered around the exciton energy gap (at ω=±Δε_X), see Figs. <ref>(a) and <ref>(c), which is compatible with the oscillatory features of the BA in Fig. <ref>(b).
On the other hand, the peaks of the numerically exact profiles are much wider, and their centers are somewhat shifted from Δε_X.
While the SCBA profiles overall reasonably reproduce the numerically exact ones both in terms of peak positions and shapes, we note the tendency of the SCBA towards somewhat excessive peak shifting and narrowing.
The relative advantage of the SCBA over the BA is the most obvious for the population-to-coherence transfer, when the BA completely misses the peak appearing somewhat below Δε_X in both HEOM and SCBA profiles, see Fig. <ref>(c).
In Figs. <ref>(e) and <ref>(f), we compare the BA and SCBA results for the self-energy superoperator in the time domain to the corresponding numerically exact result.
As discussed in Sec. <ref>, these time-domain results do not depend on η.
The general tendencies observed in the exciton dynamics in Fig. <ref>(b) are also seen on the level of the time-dependent memory kernel.
Namely, the BA result for the time-dependent memory kernel displays pronounced weakly damped oscillatory features, while the SCBA reproduces the exact result reasonably well.
The quantitative agreement between the SCBA and HEOM results for the matrix element connected to population-to-population transfer is somewhat better than in the case of population-to-coherence transfer, compare Fig. <ref>(e) to Fig. <ref>(f).
We finally note that obtaining the memory-kernel superoperator in the time domain is generally difficult. <cit.>
On the contrary, our time-domain results are readily obtained by the numerical Fourier transformation of the corresponding frequency-domain self-energies.
§.§.§ A single underdamped vibrational mode
Here, using the values of J,Δε, and T summarized in Table <ref>, we benchmark the (SC)BA when the SD of the exciton–environment interaction is modeled using Eq. (<ref>).
We set γ_0=3 cm^-1, corresponding to the relaxation timescale of γ_0^-1=1.77 ps.
Keeping in mind that the interaction with an individual intrachromophore mode is, in general, relatively weak (S_0∼ 0.01, irrespective of the vibrational frequency ω_0), <cit.> one could expect that already the BA recovers the numerically exact exciton dynamics.
Figures <ref>(a) and <ref>(b) reveal that this is indeed the case when the vibrational mode is not resonant with the excitonic energy gap (ω_0≠Δε_X).
The true dynamics then exhibits weakly damped oscillations in both exciton populations and interexciton coherence, which reflect the relatively slow and inefficient interchromophore population transfer discussed in the literature. <cit.>
While the BA reproduces the oscillatory behavior fairly well on the time scales we focus on, the SCBA produces an excessively fast oscillation damping and an overall incorrect dynamics of exciton populations.
One may then expect that the general tendency of the self-consistent cycle towards fast equilibration of the excitonic subsystem could render the SCBA a reliable approximation when the vibrational frequency is nearly resonant with the exciton-energy gap (ω_0≈Δε_X).
At resonance, the existing results show that the damping of the interexciton coherence and the concomitant interchromophore population transfer are particularly fast. <cit.>
Figures <ref>(a) and <ref>(b) show that the SCBA is indeed sufficiently good at reproducing this rapid equilibration of both exciton populations and coherences.
In the off-resonant case, the exciton populations and interexciton coherence predominantly oscillate at frequencies |ω_0-Δε_X| and Δε_X, respectively, see the most pronounced features of the spectra in Figs. <ref>(c) and <ref>(d).
The spectra in Figs. <ref>(c), <ref>(d), <ref>(c), and <ref>(d) originate from our frequency-domain computations, so that they are broadened with parameter η.
The incorrect SCBA population dynamics in Fig. <ref>(a) is reflected on the frequency domain as the spurious low-frequency feature in Fig. <ref>(c).
In the resonant case, the dynamics of interexciton coherence displays beats most probably stemming from oscillations at two similar frequencies, see the most intensive features in Fig. <ref>(d).
In contrast to the off-resonant case, the relation between the beating frequency or the frequency of population oscillations in Fig. <ref>(c) to the inherent energy scales of the problem (Δε,J,ω_0) is not obvious.
To establish such a relation, we note that even the weak exciton–vibration interaction can induce appreciable mixing of the vibrational and excitonic levels, and thus render the description in terms of vibronic-exciton states more appropriate.
To discuss our observations in terms of these hybrid states, we assume, for simplicity, that the vibrational mode is undamped, i.e., γ_0=0.
Then, the Hamiltonian is analogous to the Jaynes–Cummings Hamiltonian of quantum optics. <cit.>
When S_0≪ 1, it is sufficient to consider at most a single vibrational quantum, <cit.> and in Appendix <ref> we conclude that the first two states above the lowest-lying vibronic-exciton state |1_x,v_0=0⟩ are linear combinations of the states |1_x,v_0=1⟩ and |2_x,v_0=0⟩.
In the nearly resonant case (Δε_X≈ω_0), the energies corresponding to the oscillatory features in exciton populations
Ω_pop^res≈ω_0√(2S_0)|sin(2θ)|,
and the interexciton coherence
Ω_coh,±^res≈Δε_X±ω_0√(S_0/2)|sin(2θ)|
are proportional to √(S_0) and agree reasonably well with the corresponding numerically exact and SCBA results, see Figs. <ref>(c) and <ref>(d).
Interestingly, the frequency of the beats in the interexciton-coherence dynamics virtually coincides with the frequency of population oscillations, i.e., Ω_pop^res≈Ω_coh,+^res-Ω_coh,-^res.
In the off-resonant case Δε_X≠ω_0, we find that the frequency of population oscillations is shifted from |Δε_X-ω_0| by an amount proportional to S_0, i.e.,
Ω_pop^off-|Δε_X-ω_0|≈ S_0sin^2(2θ)ω_0^2/|Δε_X-ω_0|,
which agrees very well with the HEOM result, and not so well with BA and SCBA results, see Fig. <ref>(c).
The frequency shift of the most pronounced component of exciton-coherence oscillations from Δε_X is also linear in S_0,
Ω_coh,-^off-Δε_X≈-1/2S_0sin^2(2θ)ω_0^2/|Δε_X-ω_0|,
in good agreement with HEOM and BA results in Fig. <ref>(d).
We mention that the much less intensive feature of the coherence-oscillation spectrum appearing around the vibration energy is shifted from ω_0 by
Ω_coh,+^off-ω_0≈1/2S_0sin^2(2θ)ω_0^2/|Δε_X-ω_0|.
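These frequencies follow from (Δε,J,ω_0,S_0) alone; a short helper (ours) evaluating the unapproximated expressions of Appendix <ref>, from which Eqs. (<ref>)–(<ref>) are obtained as limiting cases, reads:

import numpy as np

def vibronic_frequencies(d_eps, J, w0, S0):
    # tan(2 theta) = 2 J / d_eps; exciton gap d_eps_X = sqrt(d_eps^2 + 4 J^2)
    two_theta = np.arctan2(2.0 * J, d_eps)
    d_eps_X = np.hypot(d_eps, 2.0 * J)
    s2 = np.sin(two_theta) ** 2
    om_pop = np.sqrt((d_eps_X - w0) ** 2 + 2.0 * S0 * w0 ** 2 * s2)   # Omega_pop
    om_coh = 0.5 * (d_eps_X + w0 + np.array([+1.0, -1.0]) * om_pop)   # Omega_coh,+/-
    return om_pop, om_coh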
§.§ Reliability of the SCBA: Seven-site model of the FMO complex
In Figs. <ref> and <ref>, we benchmark the SCBA and BA on the widely studied seven-site model of the FMO complex found in green sulphur bacteria.
Detailed benchmarks of our approximations for chromophore populations (Fig. <ref>) and interchromophore coherences (Fig. <ref>) against numerically exact results are possible for the OBO SD [Eq. (<ref>)].
In Figs. <ref>(a1), <ref>(b1), <ref>(a1), and <ref>(b1), we use λ_ph=35 cm^-1 and γ_ph^-1=50 fs (γ_ph=106.2 cm^-1).
To explore the viability of our methodology, we also apply it to the model using the structured SD emerging from atomistic simulations performed in Ref. JPhysChemLett.7.3171.
As we are not aware of any numerically exact results for the dynamics modulated by this structured bath, Figs. <ref>(a2), <ref>(b2), <ref>(a2), and <ref>(b2) compare SCBA and BA results.
Details on the excitonic Hamiltonian H_S and the structured SD used are summarized in Appendix <ref>.
Using the OBO SD, already the BA provides a good approximation to the subpicosecond dynamics of chromophore populations [Figs. <ref>(a1) and <ref>(b1)], in agreement with findings of Ref. PhysRevE.86.011915.
Nevertheless, on a picosecond timescale, the BA overestimates (underestimates) populations of low-energy (high-energy) states, and this effect becomes more pronounced with decreasing the temperature, compare Fig. <ref>(a1) to Fig. <ref>(b1).
While the SCBA suffers from the same deficiency, its predictions for chromophore populations are systematically closer to the numerically exact results throughout the time window examined.
At higher temperatures, Figs. <ref>(a1) and <ref>(a1) suggest that the SCBA is better at approximating population dynamics than coherence dynamics.
On the other hand, Figs. <ref>(b1) and <ref>(b1) suggest that the reverse is true at lower temperatures.
Figures <ref>(a1) and <ref>(b1) do not show BA results, which, at all temperatures, display oscillatory features lasting much longer than the numerically exact method predicts.
Comparing panels (a2) and (a1) [(b2) and (b1)] in Figs. <ref> and <ref>, we conclude that our approximations deliver reasonable results for exciton dynamics in the structured environment, as it is overall similar to the dynamics in the featureless environment.
The problems with longer-time population dynamics of extremal-energy states are exacerbated in the structured environment at lower temperatures, when SCBA (and also BA) predicts nonphysical (greater than 1 or negative) populations of such states, see Fig. <ref>(b2).
A possible origin of these problems can be understood from our analysis of the dynamics modulated by an underdamped vibrational mode, see Sec. <ref>.
There, we find that the resonance condition between the exciton-energy gap and the vibrational energy quantum is crucial to the success of the SCBA.
Here, exciton-energy gaps fluctuate due to low-frequency phonon modes, their fluctuations becoming more pronounced with increasing temperature.
Therefore, at higher temperatures, satisfying resonance conditions between exciton-energy gaps and weakly damped vibrational modes is more probable than at lower temperatures.
In other words, the SCBA is expected to work better at higher temperatures.
The SCBA coherence dynamics in Figs. <ref>(a2) and <ref>(b2) is physically sensible, while the lifetime of interchromophore coherences in the structured environment is somewhat shorter than in the featureless environment at both temperatures examined.
§ SUMMARY AND OUTLOOK
We have developed and benchmarked the self-consistent Born approximation for studying the dynamics of EET through a multichromophoric aggregate linearly interacting with a bosonic environment.
We start from the lowest-order approximation for the memory kernel of the GQME, and improve it in the self-consistent cycle based on the GQME represented in the Liouville space.
We find that the SCBA reproduces the exact exciton dynamics modulated by an overdamped phonon continuum very well, even in the generally difficult regimes of strong exciton–environment interaction, slow environmental reorganization, and low temperature.
This success of the SCBA can be understood from the analytically tractable example—coherence-dephasing dynamics in the pure-dephasing model.
We conclude that the reliability of the SCBA when the dynamics is modulated by an underdamped vibrational mode leans on the resonance between the mode frequency and the exciton-energy gap.
Nevertheless, in a structured environment comprising both an overdamped phonon continuum and a number of underdamped vibrational modes, the SCBA reasonably describes exciton dynamics through the seven-site model of the FMO complex.
Importantly, our method does not introduce any assumptions on the form of the exciton–environment SD, making it a strong candidate for studying exciton dynamics modulated by structured environments whose properties are extracted from experiments or atomistic simulations.
Moreover, the method can, in principle, be systematically improved by performing the self-consistent cycle starting from a higher-order approximation for the memory kernel.
The next member of the family of self-consistent approximations thus obtained is the so-called one-crossing approximation, in which the starting memory kernel is the sum of the first two diagrams in Fig. <ref>(f).
The final memory kernel then contains all diagrams in which the lines representing the environmental assistance cross at most once.
However, the one-crossing approximation is computationally much more demanding than the SCBA, as each iteration involves a double integral over frequency.
The practical applicability of the one-crossing approximation is thus determined by the balance between its computational requirements and the improvements it offers over the SCBA.
Finally, our Liouville-space frequency-domain formulation of the SCBA suggests that it might be used as a computationally efficient and reasonably accurate approach to compute experimentally accessible nonlinear response functions.
To test such a possibility, one should generalize the present formulation so that memory kernels in different excited-state sectors can be computed.
§ AUTHOR CONTRIBUTIONS
V.J. developed the method and performed all analytical derivations.
V.J. and T.M. numerically implemented the method and conceived the examples on which the method was tested.
V.J. performed all numerical computations and prepared the initial version of the manuscript.
Both authors contributed to the final version of the manuscript.
Work in Prague is funded by the Czech Science Foundation (GAČR), Grant No. 22-26376S.
Work in Belgrade is supported by the Institute of Physics Belgrade, through the grant by the Ministry of Science, Technological Development, and Innovation of the Republic of Serbia.
Numerical computations were performed on the PARADOX-IV supercomputing facility in the Scientific Computing Laboratory, National Center of Excellence for the Study of Complex Systems, Institute of Physics Belgrade.
§ COHERENCE DEPHASING IN THE PURE-DEPHASING MODEL (OBO SD AND HIGH-TEMPERATURE LIMIT)
We first show that the CFE of the exact Σ(ω) is given by Eq. (<ref>).
In the high-temperature limit, the real part of the lineshape function can be approximated as <cit.>
g^r(t)=(2λ_phT/γ_ph^2)(e^-γ_ph t+γ_ph t-1).
Inserting Eq. (<ref>) into Eq. (<ref>) and performing the Fourier transformation of the latter gives (n_2≠ n_1)
n_2n_1|𝒢(ω)|n_2n_1=e^Λ_ph∑_k=0^+∞(-Λ_ph)^k/k!·1/(ω-ε_n_2n_1+iΛ_phγ_ph+ikγ_ph),
where we introduce the dimensionless parameter Λ_ph=4λ_phT/γ_ph^2.
It is known that the zero-temperature absorption lineshape (the excitation-addition spectral function) of a two-level system whose energy gap ε_eg between the ground and excited state is modulated by an undamped vibrational mode (frequency ω_0 and HR factor S_0) is proportional to the negative imaginary part of the retarded Green's function (see, e.g., Ch. 8 of Ref. Mukamel-book or Ch. 4 of Ref. Mahanbook):
G^R(ω)=e^-S_0∑_k=0^+∞S_0^k/k!·1/(ω-ε_eg+S_0ω_0-kω_0+iη).
The CFE of Eq. (<ref>) reads as <cit.>
G^R(ω)=1/[ω-ε_eg-(S_0ω_0)^2/[ω-ε_eg-ω_0-2(S_0ω_0)^2/[ω-ε_eg-2ω_0-⋯]]].
The CFE of n_2n_1|𝒢(ω)|n_2n_1 is then obtained by substituting ω-ε_eg→ω-ε_n_2n_1, S_0→-Λ_ph, and ω_0→-iγ_ph in Eq. (<ref>), which follows from comparing Eq. (<ref>) with Eq. (<ref>).
Keeping in mind that n_2n_1|Σ(ω)|n_2n_1=ω-ε_n_2n_1-[ n_2n_1|𝒢(ω)|n_2n_1]^-1, see Eq. (<ref>), one immediately obtains Eq. (<ref>).
We now argue that the CFE of the SCBA self-energy is given by Eq. (<ref>).
Our starting point is the BA propagator in the frequency domain (n_2≠ n_1)
n_2n_1|𝒢_BA(ω)|n_2n_1=1/[ω-ε_n_2n_1-4λ_phT/(ω-ε_n_2n_1+iγ_ph)],
which is obtained by inserting the BA self-energy [Eq. (<ref>)] into the Dyson equation [Eq. (<ref>)].
The BA propagator is then inserted into Eq. (<ref>) to obtain the following self-energy Σ^(2) in the next iteration:
n_2n_1|Σ^(2)(ω)|n_2n_1=∫_-∞^+∞dν/2π·4λ_phγ_phT/[(ω-ν)^2+γ_ph^2]· n_2n_1|𝒢_BA(ν)|n_2n_1.
Here, we have approximated coth(β(ω-ν)/2)≈2T/(ω-ν).
The last integral is solved by integrating along a contour that is closed in the upper half-plane to ensure causality.
As the poles of the BA propagator are in the lower half-plane by construction, the only pole of the integrand in the upper half-plane is at ν=ω+iγ_ph.
Evaluating the corresponding residue, we obtain
n_2n_1|Σ^(2)(ω)|n_2n_1=4λ_phT/[ω-ε_n_2n_1+iγ_ph-4λ_phT/(ω-ε_n_2n_1+2iγ_ph)].
Repeating the above-described procedure ad infinitum, we obtain Eq. (<ref>).
We end this section by presenting analytical results for the matrix elements of the Green's superoperator that determine the dynamics of coherence dephasing within the BA and the Redfield theory, see Eqs. (<ref>) and (<ref>).
Equation (<ref>) implies that the BA propagator in the time domain can be expressed as
n_2n_1|𝒢_BA(t)|n_2n_1=e^-iε_n_2n_1t∫_-∞^+∞dΩ/2π·e^-iΩ t(Ω+iγ_ph)/[(Ω-Ω_+)(Ω-Ω_-)],
where
Ω_±=γ_ph/2[±√(16λ_phT/γ_ph^2-1)-i]≈± 2√(λ_phT)-iγ_ph/2
are the roots of the quadratic equation Ω^2+iγ_phΩ-4λ_phT=0, both of which lie in the lower half-plane.
We use 2π T/γ_ph≫ 1 to obtain Ω_± in the high-temperature limit.
The integral in Eq. (<ref>) is solved using the contour integration with the final result
n_2n_1|𝒢_BA(t)|n_2n_1=-iθ(t)[Ω_++iγ_ph/Ω_+-Ω_-e^-iΩ_+t-Ω_-+iγ_ph/Ω_+-Ω_-e^-iΩ_-t].
Retaining the contributions that are the most dominant in the high-temperature limit 2π T/γ_ph≫ 1, Eq. (<ref>) can be further simplified to
n_2n_1|𝒢_BA(t)|n_2n_1=-iθ(t)e^-iε_n_2n_1tcos(2√(λ_phT) t)e^-γ_pht/2.
Equation (<ref>) implies that the Redfield propagator in the time domain can be expressed as
n_2n_1|𝒢_Red(t)|n_2n_1 =e^-iε_n_2n_1t∫_-∞^+∞dΩ/2π·e^-iΩ t/(Ω+i4λ_phT/γ_ph)
=-iθ(t)e^-iε_n_2n_1texp(-4λ_phT t/γ_ph).
The approximation to the exact coherence-dephasing dynamics embodied in Eq. (<ref>) is obtained by inserting the short-time approximation g^r(t)≈λ_phTt^2 to the lineshape function [Eq. (<ref>)] into Eq. (<ref>).
§ VIBRONIC-EXCITON MODEL
We consider the model dimer, Eq. (<ref>), in which the excitons interact with an undamped vibrational mode of frequency ω_0 and HR factor S_0 [we set γ_0=0 in Eq. (<ref>)].
We assume that S_0≪ 1, so that the reorganization energy S_0ω_0 is much smaller than all the other energy scales in the problem (Δε,J).
It is known that the center-of-mass motion of the intrachromophore vibrations, described by B_+=(b_1+b_2)/√(2), does not affect the single-exciton dynamics. <cit.>
The energies of the exciton states, as well as transitions between them, are then modulated by the relative motion of intrachromophore vibrations, which is described by B_-=(b_1-b_2)/√(2).
The single-exciton dynamics is governed by the Hamiltonian H=H_X+H_B_-+H_X-B_-, where
H_X=∑_k=1^2|k_x⟩[ε_k_x+(-1)^kcos(2θ)ω_0√(S_0/2)(B_-^†+B_-)]⟨ k_x|,
H_B_-=ω_0 B_-^† B_-,
H_X-B_-=-sin(2θ)ω_0√(S_0/2)(|2_x⟩⟨ 1_x|+|1_x⟩⟨ 2_x|)(B_-^†+B_-).
The energy of the bare exciton state k=1,2 is ε_k_x=Δε/2[1+(-1)^k√(1+tan^2(2θ))].
The dynamic modulation of the exciton energy can be taken into account exactly by transferring to the polaron frame, H̃=UHU^†, using the polaron transformation
U=∑_k=1^2|k_x⟩⟨ k_x|e^(-1)^kS_θ, S_θ=cos(2θ)√(S_0/2)(B_-^†-B_-).
The bare exciton energies ε_k_x are then shifted to ε̃_k_x=ε_k_x-(S_0ω_0/2)cos^2(2θ), while the eigenstates |k_x,v_0⟩ of the transformed Hamiltonian H̃_X+H_B_- are enumerated by the exciton number k and the number v_0=0,1,… of excited vibrational quanta.
In more detail,
|k_x,v_0⟩=|k_x⟩(B_-^†)^v_0/√(v_0!)|∅⟩, ε_k,v_0=ε_k_x-(S_0ω_0/2)cos^2(2θ)+v_0ω_0.
As we assume that S_0≪ 1, the leading contribution (proportional to √(S_0)) to the transformed interaction term H̃_X-B_- in powers of S_0 is identical to H_X-B_-.
To proceed further, we additionally perform the rotating-wave approximation, which amounts to
H̃_X-B_-≈ -ω_0√(S_0/2)(|2_x⟩⟨ 1_x| B_-+|1_x⟩⟨ 2_x|B_-^†).
We now limit ourselves to the subspace containing at most a single vibrational excitation, where we find that H̃_X-B_- mixes the states |1_x,v_0=1⟩ and |2_x,v_0=0⟩.
In the resonant case ω_0≈Δε_X, these two states are nearly degenerate, and H_X-B_- lifts this degeneracy.
While there is no degeneracy in the off-resonant case ω_0≠Δε_X, the vibronic mixing still affects the energy difference between these two states.
The energies of vibronically mixed states measured with respect to the energy of the lowest-lying state |1_x,v_0=0⟩ then read as
E_v_0=1,±-E_v_0=0=1/2[Δε_X+ω_0±√((Δε_X-ω_0)^2+2S_0ω_0^2sin^2(2θ))].
The exciton populations then exhibit oscillatory features of frequency
Ω_pop=E_v_0=1,+-E_v_0=1,-=√((Δε_X-ω_0)^2+2S_0ω_0^2sin^2(2θ)),
while the oscillations in interexciton coherence have the frequencies
Ω_coh,±=E_v_0=1,±-E_v_0=0.
In the nearly resonant case (Δε_X-ω_0≈ 0), the frequencies of oscillatory features in exciton-population and interexciton-coherence dynamics are given by Eqs. (<ref>) and (<ref>), respectively.
To be consistent with the simplifications made up to now, our considerations in the off-resonant case have to keep only the lowest-order term in small S_0, so that
Ω_pop^off≈|Δε_X-ω_0|[1+S_0sin^2(2θ)ω_0^2/(Δε_X-ω_0)^2],
Ω_coh,±^off=1/2[Δε_X+ω_0±|Δε_X-ω_0|± S_0sin^2(2θ)ω_0^2/|Δε_X-ω_0|].
Equations (<ref>), (<ref>), and (<ref>) are then readily obtained from Eqs. (<ref>) and (<ref>).
§ COMPUTATIONS ON THE SEVEN-SITE MODEL OF THE FMO COMPLEX
The excitonic Hamiltonian H_S is taken from Ref. BiophysJ.91.2778, and the values of interchromophore couplings and average chromophore energies are summarized in Table <ref>.
We take the structured SD of the exciton–environment interaction from the spreadsheet appearing in the Supporting Information to Ref. JPhysChemLett.7.3171.
More specifically, we use the data from the sheet that reports the total SD (comprising both the interchromophore and intrachromophore contributions) for individual BChls in one of the FMO subunits.
In our computations, we assume that the SDs for all BChls are identical, and thus use the arithmetic average of the SD data for BChl1,…,BChl7.
The SD thus obtained is plotted in Fig. <ref>.
The frequency grid is equidistant, with spacing Δω=0.53 cm^-1, which we find sufficiently fine for the numerical integration of Eq. (<ref>) using the ordinary trapezoidal rule.
The SD is Ohmic, and we obtain its linear behavior around ω=0 by fitting the points (0,0),(Δω,𝒥(Δω)),…,(5Δω,𝒥(5Δω)) to a straight line.
As the SD is available up to ω_avail=2650.8 cm^-1, our numerical computations use ω_max=2ω_avail, and assume 𝒥(ω)=0 for ω_avail<ω<ω_max.
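A sketch of this preprocessing is given below; the file name and column layout are hypothetical, and only the averaging over BChls, the zero-intercept linear fit over the first five points, and the extension to ω_max=2ω_avail follow the procedure described above.

import numpy as np

def prepare_fmo_sd(filename="fmo_total_sd.txt", n_fit=5):
    # hypothetical layout: first column the frequency grid (cm^-1),
    # remaining seven columns the total SDs of BChl1...BChl7
    data = np.loadtxt(filename)
    w, sd = data[:, 0], data[:, 1:].mean(axis=1)        # arithmetic average over BChls
    # zero-intercept straight-line fit through (0, 0), (dw, J(dw)), ..., (n_fit dw, J(n_fit dw))
    w_fit = np.concatenate(([0.0], w[:n_fit]))
    sd_fit = np.concatenate(([0.0], sd[:n_fit]))
    slope = (w_fit @ sd_fit) / (w_fit @ w_fit)          # least squares through the origin
    # extend to w_max = 2 * w_avail, with J(w) = 0 for w_avail < w < w_max
    dw = w[1] - w[0]
    w_ext = np.arange(1, 2 * len(w) + 1) * dw
    sd_ext = np.concatenate((sd, np.zeros(len(w))))
    return w_ext, sd_ext, slope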
While we take the excitonic and the exciton–environment interaction parameters from different studies and different (seven-site <cit.> and eight-site <cit.>) FMO models, we emphasize that our main goal is to examine the applicability of our approximate methods to a relatively realistic model of a multichromophoric complex.
The results in Figs. <ref> and <ref> indeed suggest that our approximate methods can provide decent results on exciton dynamics in realistic models.
http://arxiv.org/abs/2409.02199v1 | 20240903180848 | Microwave-free imaging magnetometry with nitrogen-vacancy centers in nanodiamonds at near-zero field | ["Saravanan Sengottuvel", "Omkar Dhungel", "Mariusz Mrózek", "Arne Wickenbrock", "Dmitry Budker", "Wojciech Gawlik", "Adam M. Wojciechowski"] | quant-ph | ["quant-ph"]
These authors contributed equally to this work
Jagiellonian University, Doctoral School of Exact and Natural Sciences, Łojasiewicza 11, 30-348 Kraków, Poland; [email protected]
Institute of Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland
These authors contributed equally to this work
Helmholtz-Institut Mainz, 55099 Mainz, Germany
Johannes Gutenberg-Universität Mainz, 55128 Mainz, Germany
Institute of Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland
Helmholtz-Institut Mainz, 55099 Mainz, Germany
Johannes Gutenberg-Universität Mainz, 55128 Mainz, Germany
GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291 Darmstadt, Germany
Helmholtz-Institut Mainz, 55099 Mainz, Germany
Johannes Gutenberg-Universität Mainz, 55128 Mainz, Germany
GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291 Darmstadt, Germany
Department of Physics, University of California, Berkeley, California 94720-7300, USA
Institute of Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland
[email protected]
Institute of Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland
§ ABSTRACT
Magnetometry using Nitrogen-Vacancy (NV) color centers in diamond predominantly relies on microwave spectroscopy. However, microwaves may hinder certain studies involving biological systems or thin conductive samples. This work demonstrates a wide-field, microwave-free imaging magnetometer utilizing NV centers in nanodiamonds by exploiting the cross-relaxation feature near zero magnetic field under ambient conditions without applying microwaves. For this purpose, we measure the center shift, contrast, and linewidth of the zero-field cross-relaxation in 140 nm nanodiamonds drop-cast on a current-carrying conductive pattern while scanning a background magnetic field, achieving a sensitivity of 4.5 μT/√(Hz). Our work allows for applying the NV zero-field feature in nanodiamonds for magnetic field sensing in the zero- and low-field regimes and highlights the potential for microwave-free all-optical wide-field magnetometry based on nanodiamonds.
Microwave-free imaging magnetometry with nitrogen-vacancy centers in nanodiamonds at near-zero field
Adam M. Wojciechowski
September 9, 2024
====================================================================================================
§ INTRODUCTION
The application of negatively charged nitrogen-vacancy (NV^-) color centers in diamond <cit.> has become the focal point for sensing physical parameters such as temperature, pressure, magnetic and electric fields at the nanoscale, even under challenging environmental conditions <cit.>. NV centers have demonstrated remarkable sensitivities in recent years, reaching 0.5 pT/√(Hz) for ensembles at room temperature <cit.>. The use of NV centers for magnetometry allows the exploration of magnetic fields generated by various samples, including biological systems, magnetic materials, and current-carrying wires <cit.>. NV magnetometry leverages the magnetically sensitive microwave transitions via Optically Detected Magnetic Resonance (ODMR), which relies on detecting the spin-dependent fluorescence while applying microwaves to the optically pumped NV centers.
Many applications of magnetic measurement using NV centers are based on microwave-induced ODMR. However, these techniques can be challenging when studying systems where external magnetic fields and high-power microwaves could interfere with the sample, where it is difficult to implement such control, or where microwaves are invasive, such as in the case of high-transition-temperature (T_c) superconductors, samples for zero- to ultra-low-field nuclear magnetic resonance (ZULF NMR) measurements, thin conductive materials, and biological samples <cit.>.
To address these problems, microwave-free protocols have been developed. They exploit energy-level crossings of NV centers at different non-zero magnetic fields, notably the level anti-crossing within the triplet ground state at 102.4 mT, as well as the NV-P1 crossing at 51.2 mT <cit.>. However, in addition to the nonzero-field crossings, the electronic sublevels of the NV center also cross at zero B-field <cit.>. Each level crossing reflects a degeneracy in the system that makes it very prone to perturbations that may mix the quantum states and alter their populations, a phenomenon known as cross-relaxation <cit.>.
Alternatively, various zero-field magnetic measurement strategies using NV centers have been proposed <cit.>, which involve microwaves. However, these techniques can be challenging for studying systems where high-power microwaves and a high external magnetic field might interfere with the sample of interest. To address this, an NV magnetometry scheme at near zero fields without microwave radiation has been proposed <cit.>. This technique involves measuring the magnetic field by observing cross-relaxation features in the fluorescence spectrum of NV centers at near zero B-fields. Let us assume that the measured magnetic field is the sum of an unknown constant field and a scanned field. In this scenario, the zero-field cross-relaxation feature position depends on the constant field and can be used to infer the unknown magnetic field.
This work focuses on the implementation of a practical NV magnetometer utilizing the zero-field cross-relaxation feature. Specifically, we present a microwave-free magnetometer that leverages the cross-relaxation feature at zero-field to map magnetic fields above a current-carrying cross-pattern coated with NV nanodiamonds. The use of nanodiamonds facilitates imaging on arbitrarily shaped surfaces. Our findings highlight the potential of zero-field magnetometry for real-world applications.
§ ZERO-FIELD MICROWAVE-FREE NV MAGNETOMETRY
Numerous studies have been carried out recently to understand NV cross-relaxation characteristics near zero field. Although not all aspects of longitudinal relaxation (T_1) in dense ensembles are fully understood <cit.>, one hypothesis involves "fluctuator" defects, which depolarize nearby NVs, leading to depolarization of the entire sample via dipolar interactions <cit.>. There is also a relaxation mechanism associated with local electric fields and the interaction between NV centers of the same and of different orientations, which is particularly important in high-density samples <cit.>. The zero-field cross-relaxation feature and additional features observed in the presence of a bias field have been systematically examined as a function of NV density, magnetic field orientation, and temperature <cit.>. Similar characteristics have also been identified in nanodiamonds and investigated as a function of nanodiamond size and light intensity. Specifically, the authors of Ref. <cit.> investigated how variations in NV density, orientation, nanodiamond size, and light intensity influence the observed characteristics, providing significant insight into the interplay of these factors.
Magnetometry with nanodiamonds offers advantages over other NV-diamond sensors. Conventional NV magnetometry often involves the use of bulk diamond or thin NV layers or employs scanning diamond tips to reduce the standoff distance to the sample, typically down to several nanometers for AFM-type sensors, to achieve high sensitivity and spatial resolution when mapping magnetic fields <cit.>. However, these methods have limitations as they require a smooth sample surface and proximity of the sample to the measurement diamond. In contrast, the small size of nanodiamonds allows them to be bonded to irregular material surfaces, including fiber tips and living cells <cit.>.
In this study, we used the "salt and pepper" approach <cit.> with randomly oriented nanodiamonds containing NV centers to create a cost-effective and straightforward wide-field magnetometer operating without microwaves for near-zero-field measurements. The sensitivity of such NV-ND magnetometers can reach below the nT/√(Hz) level <cit.>, with typical per-pixel sensitivities of imaging devices of a few μT/√(Hz) <cit.>. The biocompatibility of NDs allows for their use in biomedical applications, such as magnetic resonance imaging (MRI) of the brain and other organs <cit.>. Furthermore, the small size of NDs enables nanoscale spatial resolution, making them ideal for imaging the magnetic fields of biological structures at the cellular and subcellular levels <cit.>. The utilization of nanodiamonds presents a balanced compromise between sensitivity, resolution, and manufacturing costs. Another significant advantage of our method is the implementation of wide-field magnetic imaging without microwaves, eliminating all microwave-related effects, enabling real-time parallel mapping of the magnetic field across a large field of view, and reducing measurement time.
§ EXPERIMENTAL SETUP
The diamonds used in this work are commercially available 140-nm carboxylated fluorescent nanodiamonds from Adamas Nanotechnologies. A suspension of these diamonds in deionized (DI) water is deposited onto a transparent polyethylene terephthalate (PET) substrate, which has a thickness of 0.11 mm, using the drop-casting technique. The sample is then allowed to air-dry for 15 to 30 minutes. On the reverse side of the PET substrate, a copper cross-pattern with 65 μm wide wires is printed and subsequently fixed onto a printed circuit board (PCB). This cross-pattern comprises four individual arms (labeled 1 to 4 in Fig. <ref>(b)) that intersect at the center. The pattern is connected to a power source, allowing the current to be driven through any specific path (e.g., 1-2, 1-4) across the pattern.
For the NV magnetometry described here, a home-built wide-field fluorescence microscope was used, as shown in Fig. <ref>(a). The nanodiamond sample was illuminated with 60-70 mW of 532 nm green light from an LED (Thorlabs M530L4). The light beam was collimated with an aspheric condenser lens (focal length 50 mm) and guided with a dichroic mirror to the back focal plane of a 40x Olympus microscope objective with a numerical aperture of 0.65. The pump beam focused by the objective illuminates the nanodiamonds. The red fluorescence emitted by the NV centers is collected using the same microscope objective and imaged with a Basler camera equipped with a 12-bit Sony CMOS sensor. In our imaging system, the camera field of view (FOV) is 2448 × 2048 pixels, which corresponds to an area of 384 × 321 μm² on the nanodiamond layer with 0.15 × 0.15 μm² per pixel. A square DC current-carrying coil (N = 20 turns, side length R_coil = 50 mm) creates a bias field perpendicular to the NV imaging plane.
The NV magnetometer was calibrated using a test magnetic field. The magnetic field of the current-carrying coil was cross-checked by measuring the Zeeman splitting along the quantization axis ({111} crystallographic directions) of one of the NV orientations in a (100)-cut thin monocrystalline diamond sample. The magnetic field value estimated from the NV ODMR splitting is consistent with the applied test magnetic field calculated from the coil current I and its geometry.
§ RESULTS AND DISCUSSIONS
§.§ Imaging of static magnetic fields using nanodiamonds
In this section, we describe the methodology used to image and map the magnetic field generated by the cross-pattern for various current intensities and paths. As illustrated in Fig. <ref>(d), both the laser power and the DC current remain constant throughout the measurement. The background magnetic field applied perpendicular to the plane of the ND layer is systematically scanned in steps from -4.0 to +4.0 mT, with the corresponding images captured sequentially.
Before the current measurements, we experimentally verified the presence of the zero-field feature in the NDs in the absence of current in the cross. Next, a non-zero current was applied to the cross-pattern along two specific paths: i) from 3 to 4 and ii) from 1 to 4. We took each measurement several times and averaged the resulting images to improve the signal-to-noise ratio (SNR). For a given current path in the cross, we scan the magnetic field and record the fluorescence image of the ND layer. From the acquired image data, we then extract the zero-field cross-relaxation spectrum for each pixel of the camera. To reduce the relative noise of each pixel and the amount of data to be processed, we binned 16×16 pixels, thus reducing the total pixel number by a factor of 256. The extracted spectrum is then fitted with the Gaussian function (<ref>) to estimate the shift of the center field (Δ B), the full width at half maximum (w), and the contrast (C) of the zero-field cross-relaxation feature.
f(x; Δ B, w) = y_0 + C/(w√(2π)) · e^{-(x - Δ B)^2/(2w^2)} .
Here, the parameters y_0, C, Δ B, and w are, respectively, the offset, contrast, center, and width of the distribution. The additional magnetic field generated by the cross-pattern, with components both parallel and perpendicular to the scanned field, affects the parameters Δ B, C, and w of the zero-field feature. The shift Δ B is equal to the strength of the B-field generated above the cross-pattern. Quantifying these parameters spatially provides information about the field generated by the cross-pattern.
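To make the per-pixel fitting step concrete, a minimal sketch is shown below, assuming scipy; the data are synthetic stand-ins for one binned pixel spectrum, and the sign convention (negative C, since the cross-relaxation feature is a fluorescence dip) is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def zero_field_feature(x, y0, C, dB, w):
    # Gaussian model of Eq. (1); x, dB, and w are in mT.
    return y0 + C / (w * np.sqrt(2.0 * np.pi)) * np.exp(-(x - dB) ** 2 / (2.0 * w ** 2))

B = np.linspace(-4.0, 4.0, 161)                    # scanned background field (mT)
F = zero_field_feature(B, 1.0, -0.05, 0.4, 2.0)    # synthetic dip for one pixel
F += np.random.normal(0.0, 1e-3, B.size)           # stand-in for camera noise

popt, pcov = curve_fit(zero_field_feature, B, F, p0=[1.0, -0.02, 0.0, 2.0])
y0, C, dB, w = popt   # dB maps the local field; C and w report contrast/broadening
```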
A Gaussian function is then fitted to the spectrum extracted from each pixel to obtain its Δ B, C, and w values. Using these estimated values, we create Δ B, C, and w maps for the entire field of view (FOV) of the camera under two different current paths at 0 A, 0.3 A, and 0.5 A, as shown in Fig. <ref>.
When no current (0 A) flows through the conductive cross-pattern, the Δ B map shows no visible trace of the pattern, as expected. However, when the current is non-zero (0.3 A and 0.5 A), the arms of the cross-pattern (the central white region) through which the current flows become visible. Notably, the shift Δ B directly above the cross-pattern is zero regardless of whether the current path runs from 3 to 4 or from 1 to 4, because the magnetic field generated there by the conductive cross-pattern is perpendicular to the scanned field and does not affect it.
The strength of the magnetic field decreases with distance from the center of the wire, which is observable in the shift maps. The color scheme of the center-field shift map indicates whether the magnetic field generated by the wire is parallel (red) or anti-parallel (blue) to the scanned field. The wire thickness estimated from the map agrees with the actual dimensions. The colored squares on the left and right sides of the pattern indicate pixel locations whose corresponding spectra are shown to the right of the maps in Fig. <ref>. The spectra extracted from the binned image data visualize the center shift when a current is applied; the peak shift increases with higher current and reverses sign when the current direction is reversed.
The contrast and width maps provide information complementary to the shift maps. At 0 A, with no current flowing through the conductive cross-pattern, we observe a maximum zero-field-feature contrast of around 1% and a minimum width of 2 mT for nanodiamonds throughout the FOV. As the current increases, the overall contrast of the zero-field feature decreases and its width increases, as evident from the colormaps at 0.3 A and 0.5 A. This occurs because the magnetic field from the cross-pattern shifts the electron spin splittings of the randomly oriented NV centers out of mutual resonance, suppressing cross-relaxation through magnetic dipole interactions with neighboring NV centers and thus leading to higher emission. At 0 A, unlike in the Δ B maps, we observe a trace of the conductive cross-pattern in the contrast and width maps, though it is less pronounced in the latter. This trace is due to the reflective nature of the copper structure.
Additionally, Fig. <ref> shows a non-uniformity of contrast and width around the cross-pattern, with the largest values at the center of the maps decreasing toward their edges. This suggests the presence of a non-uniform NV layer and nanodiamond clusters, possibly resulting from the aggregation of nanodiamonds during deposition on the PET surface. "Oasis-like" structures are also seen in the width map, which may be due to the fitting function failing to model the data accurately.
As noted above, the field generated by the current-carrying pattern directly above the wires is perpendicular to the scanned background field, so it does not shift the center of the zero-field feature there; instead, it broadens the feature and reduces its contrast. The contrast and width maps therefore offer information complementary to the center-shift maps about this perpendicular-field broadening.
Figure <ref> shows the map of the zero-field parameters and cross-sectional plots of the parameters for an applied current of 0.5 A. The Δ B profile, obtained by selecting one row of the image data perpendicular to the current-carrying wire, shows that the maximum (minimum) shift occurs where the field of the wire is parallel (anti-parallel) to the scanned background field and decreases with distance from the center. The shift is zero at the center of the wire, where the contrast (C) is minimal and the width (w) is maximal.
Our measurements show that the changes in Δ B, C, and w are proportional to the current applied to the wire in the relevant distance ranges. Figure <ref> shows that the fit parameters depend linearly on the current flowing in the cross pattern for the center shift, the width, and the contrast above 0.1 A.
§.§ Numerical simulations
To cross-validate our microwave-free magnetometry scheme, we performed a numerical simulation. Numerical integration was performed using the open-source Python package Magpylib <cit.>. The simulated B-field distributions for two different current paths are shown in Fig. <ref>. The current-induced fields are derived directly from the Biot-Savart law. The magnetic field distribution map in the xy plane was calculated 0.11 mm along the z-axis from the cross-structure to account for the thickness of the substrate. As can be seen, there is good agreement between the simulation and our measurements.
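The same field maps can also be reproduced without a dedicated package by discretizing the Biot-Savart integral directly; the numpy sketch below does this for a polyline current path, with a placeholder L-shaped geometry and the 0.11 mm standoff from the text.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def biot_savart(points, wire, current):
    """B-field (T) at `points` (M,3) from a polyline `wire` (N,3) carrying `current` (A)."""
    dl = np.diff(wire, axis=0)                  # directed segment vectors (N-1,3)
    mid = 0.5 * (wire[:-1] + wire[1:])          # segment midpoints
    r = points[:, None, :] - mid[None, :, :]    # separation vectors (M,N-1,3)
    r3 = np.linalg.norm(r, axis=-1, keepdims=True) ** 3
    dB = MU0 * current / (4.0 * np.pi) * np.cross(dl[None, :, :], r) / r3
    return dB.sum(axis=1)

# Hypothetical path "3 -> 4" as an L-shaped polyline in the z = 0 plane (metres),
# evaluated 0.11 mm above the pattern, as in the text.
wire = np.array([[0.0, -5e-3, 0.0], [0.0, 0.0, 0.0], [5e-3, 0.0, 0.0]])
xs = np.linspace(-1e-3, 1e-3, 101)
pts = np.stack([xs, np.zeros_like(xs), np.full_like(xs, 0.11e-3)], axis=1)
Bz = biot_savart(pts, wire, current=0.5)[:, 2]  # out-of-plane component (T)
```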
§.§ Single-pixel magnetic field sensitivity
The photon shot-noise limited magnetic field sensitivity of an NV ensemble magnetometer (similarly to that of a continuous-wave ODMR imaging protocol) is given by <cit.>:
δB = P_F · (h/(gμ_B)) · Γ/(C√(I_0))
Here, Γ is the FWHM resonance linewidth, C is the contrast, P_F ≈ 0.70 for a Gaussian-shaped resonance peak, and I_0 is the photon detection rate, defined as the number of photon counts per pixel per second, taking into account the sensor quantum efficiency. Equation <ref> can be applied similarly to estimate the NV magnetic field sensitivity using the zero-field feature, since the feature width and contrast primarily depend on the number of NV centers in the nanodiamonds. Due to the random orientation of NV centers in NDs and the variation in the number of nanodiamonds per pixel, we estimate that the mean per-pixel sensitivity of our ND magnetometer is approximately 4.5 μT/√(Hz) per μm² (≈ 9 × 10^6 counts per second for an effective sample area of 0.15 μm²).
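A direct transcription of Eq. (2) is sketched below in SI units; the example values (a 2 mT-wide feature converted to a frequency linewidth via gμ_B/h, 1% contrast, and 9 × 10^6 counts per second) are illustrative placeholders and are not meant to reproduce the quoted per-pixel figure exactly.

```python
import numpy as np

H = 6.62607015e-34       # Planck constant (J*s)
MU_B = 9.2740100783e-24  # Bohr magneton (J/T)
G = 2.003                # NV electron g-factor
P_F = 0.70               # Gaussian line-shape factor, as in the text

def shot_noise_limited_sensitivity(gamma_hz, contrast, rate):
    """Eq. (2): photon shot-noise limited sensitivity, in T/sqrt(Hz)."""
    return P_F * (H / (G * MU_B)) * gamma_hz / (contrast * np.sqrt(rate))

gamma_hz = (G * MU_B / H) * 2e-3      # 2 mT field linewidth -> ~56 MHz
print(shot_noise_limited_sensitivity(gamma_hz, 0.01, 9e6))  # T/sqrt(Hz)
```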
The estimated sensitivity of our current imaging instrument corresponds to the field sensed by a 140 nm nanodiamond sitting 110 μm above the cross-pattern. In our experiment, this sensitivity was sufficient to detect B-fields (< 1 mT) when applying a few mA of current. The sensitivity can be further improved by reducing the stand-off distance between the NV layer and the current source, which would allow the detection of smaller magnetic fields and increase the precision of source localization.
§ CONCLUSIONS
In this study, we have shown that the zero-field cross-relaxation feature can be effectively used for imaging and mapping magnetic field distributions. As a proof-of-concept application, we successfully visualized the magnetic field produced by a current-carrying wire pattern using NV ensembles in nanodiamonds.
In this work, we characterized three key parameters of the zero-field feature: center, width, and contrast, for different current intensities. Additionally, we validated the experimentally observed magnetic field distribution of the cross-pattern through numerical simulations. Notably, the feature exhibits a broader linewidth (≈ 2.0 mT) and a lower contrast of approximately 1-2% in NDs compared to some single-crystal diamond samples with high NV concentrations (≈ 0.2 mT linewidth and ≈ 4% contrast <cit.>).
The fact that the zero-field technique requires neither microwaves nor precise orientation of a known magnetic field makes it unique among NV magnetometers. It may find applications in biosensing, since the absence of microwave fields is a significant advantage when studying cells, and nanodiamonds are already widely used as trackers for fluorescence microscopy. It could also enable real-time magnetometry in living cells without having to repeatedly calibrate the diamond axes using ODMR or other microwave-based techniques. Thanks to the use of nanodiamond coatings, the method is well suited to imaging arbitrarily shaped surfaces.
The measured sensitivity of the zero-field technique is comparable to that obtained with the continuous-wave ODMR (cw-ODMR) and GSLAC techniques. We stress that the zero-field microwave-free technique is not yet optimal, since the protocol used here is analogous to cw-ODMR, which is not the most sensitive DC magnetometry scheme. We aim to further improve the sensitivity of our magnetometer by optimizing the experimental setup; whether it can also be improved through material engineering of the nanodiamonds remains an open question.
§ ACKNOWLEDGEMENTS
This work was supported by the European Commission’s Horizon Europe Framework Program under the Research and Innovation Action MUQUABIS GA no. 101070546, by the German Federal Ministry of Education and Research (BMBF) within the Quantumtechnologien program (DIAQNOS,
project no. 13N16455) and by the German DFG, Project SFB 1552 “Defekte und Defektkontrolle in weicher Materie”. The study was carried out using research infrastructure purchased with the funds of the European Union in the framework of the Smart Growth Operational Programme, Measure 4.2; Grant No. POIR.04.02.00-00-D001/20, “ATOMIN 2.0 - ATOMic scale science for the INnovative economy and was also funded by the National Science Centre, Poland grant number 2020/39/I/ST3/02322.
|
http://arxiv.org/abs/2409.03623v1 | 20240905153453 | A proof of a conjecture of Erdős and Gyárfás on monochromatic path covers | [
"Alexey Pokrovskiy",
"Leo Versteegen",
"Ella Williams"
] | math.CO | [
"math.CO"
] |
A proof of a conjecture of Erdős and Gyárfás on monochromatic path covers
Alexey Pokrovskiy, Leo Versteegen, and Ella Williams (Department of Mathematics, University College London, Gower Street, WC1E 6BT, UK)
==================================================================================================================================================
§ ABSTRACT
In 1995, Erdős and Gyárfás proved that in every 2-edge-coloured complete graph on n vertices, there exists a collection of 2√(n) monochromatic paths, all of the same colour, which cover the entire vertex set. They conjectured that it is possible to replace 2√(n) by √(n). We prove this to be true for all sufficiently large n.
§ INTRODUCTION
In <cit.>, Gerencsér and Gyárfás observed the following fact with a very short and elegant proof.
The vertex set of any 2-edge-coloured complete graph K_n can be covered by two monochromatic paths.
Importantly, the two monochromatic paths do not have to be of the same colour. In his survey <cit.>, Gyárfás mentions that when he first told Erdős about this result, Erdős misunderstood this detail, which led the two of them to investigate the smallest number of monochromatic paths of the same colour that cover all vertices of a 2-edge-coloured complete graph. In <cit.>, they proved the following.
In every 2-edge-coloured complete graph on n vertices, there exists a collection of 2√(n) monochromatic paths, all of the same colour, which cover the entire vertex set.
The bound 2√(n) in <Ref> is best possible up to a constant by the following construction. Suppose √(n)∈ℕ and partition V(K_n) into disjoint sets A and B of order n-√(n)+1 and √(n)-1 respectively. Colour the edges contained in A blue, and all other edges red. Each red path covers at most √(n) vertices in A since it must alternate between A and B, so we need at least ⌈ |A|/√(n)⌉ = √(n) red paths to cover the entirety of V(K_n). Every collection of blue paths covering V(K_n) must include all individual vertices in B as distinct paths and needs another blue path to cover A, thus has size at least |B|+1 = √(n). If n is not a perfect square, one can use the same construction with | B|=⌊√(n)⌋-1, so that any cover requires at least ⌊√(n)⌋ paths of the same colour.
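The counting behind this construction can be checked mechanically; the short Python sketch below (assuming √(n) is an integer) evaluates both cover sizes described above.

```python
import math

def extremal_bound(n):
    """Paths forced by the A/B construction above; requires sqrt(n) integer."""
    s = math.isqrt(n)
    assert s * s == n
    size_A, size_B = n - s + 1, s - 1
    red = math.ceil(size_A / s)   # each red path meets A in at most sqrt(n) vertices
    blue = size_B + 1             # every B-vertex alone, plus one path inside A
    return min(red, blue)         # any same-colour cover needs at least this many

print(extremal_bound(10_000))     # -> 100 == sqrt(10_000)
```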
Erdős and Gyárfás conjectured (see <cit.>) that this construction is the colouring that requires the most paths for a cover, meaning that the bound in their theorem could be improved to √(n). In this paper, we confirm their conjecture for all sufficiently large n.
There exists n_0 ∈ ℕ such that for all n>n_0 and all 2-edge-colourings of E(K_n), there exists a collection of at most √(n) monochromatic paths of the same colour that cover the vertices of K_n.
The constant n_0 that we find is large, but with some additional technical effort, one can show that for all n ∈ ℕ, √(n)+10 monochromatic paths of the same colour are sufficient to cover K_n.
Several related problems of this type have also been studied, looking for monochromatic structures that either cover or partition the vertex set within an edge-coloured graph.
For example, Lehel's conjecture, proven in full by Bessy and Thomassé <cit.>, strengthens <Ref>. It states that the vertices of a 2-edge-coloured K_n can be covered by two vertex-disjoint monochromatic cycles (of course, they must be allowed to have different colours). A famous generalisation of Lehel's conjecture due to Gyárfás <cit.>, stating that an r-edge-coloured complete graph can be covered by r monochromatic cycles, remains wide open. We refer the reader to the survey <cit.> for partial results and further related problems.
Finally, as an interpolation between <Ref> and <Ref>, Eugster and Mousset <cit.> investigated the minimal number of monochromatic paths needed to cover the r-edge-coloured K_n if at most s≤ r different colours are allowed to be used for the paths in total.
§ PRELIMINARIES
We call a collection 𝒫 of paths in a red-blue edge-coloured graph G a red path cover if every vertex in V(G) is covered by some path in 𝒫, and every P ∈𝒫 contains only red edges. We define a blue path cover similarly.
The length of a path is the number of edges it contains, and importantly we allow paths to have length zero, so that a path can be a single vertex.
Note that paths need not be disjoint, unlike those required for some definitions of the path covering number of a graph.
We will use a bipartite Ramsey-type result for paths determined by Gyárfás and Lehel in <cit.> (see Theorem 3 and Remark 1).
If k and ℓ are distinct natural numbers, then every red-blue edge-coloured complete bipartite graph K_{⌈(k+ℓ)/2⌉, ⌈(k+ℓ)/2⌉} contains a red path of length k or a blue path of length ℓ.
Suppose we have found a long monochromatic path P in a red-blue edge-coloured K_n; without loss of generality, we may assume P is blue.
We may consider the bipartite graph between V(P) and a subset of V(K_n) ∖ V(P), consisting only of the red edges. If there is a long red path, say R, in this bipartition, then we obtain a large set V(P)∩ V(R) covered by both a red path and a blue path, which allows us some flexibility for covering the remainder of the vertices.
A similar idea may be used if we can find few red paths that cover many vertices in the bipartition. In the following two lemmas we consider certain degree properties within bipartite graphs that allow us to find few paths covering many vertices.
Let G be a bipartite graph on vertex set X∪ Y with the property that every vertex in Y has degree at least (| X| +| Y|)/2. Then G contains a path covering 2| Y| vertices.
Let P=x_1y_1… x_ky_k be a path starting in X and ending in Y and suppose that k<| Y|. Fix y_k+1∈ Y∖{y_1,…,y_k}. We have
| N(y_k)∩ N(y_{k+1})| ≥ (| X| +| Y|)/2 - (| X| - (| X|+ | Y|)/2) = | Y| > k.
Therefore, y_k and y_k+1 have a common neighbour x_k+1∈ X∖{x_1,…,x_k}, and we can extend P via y_kx_k+1y_k+1.
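In algorithmic terms, this proof is a greedy procedure; the Python sketch below mirrors it, assuming `adj` maps each vertex of Y to its neighbour set in X (the degree hypothesis of the lemma guarantees that every `next(...)` call succeeds).

```python
def cover_path(Y, adj):
    """Greedily build the alternating path x1 y1 x2 y2 ... covering all of Y."""
    path, used, prev = [], set(), None
    for y in Y:
        # pick an unused common neighbour of the previous y (if any) and y
        cands = adj[y] if prev is None else (adj[prev] & adj[y])
        x = next(iter(cands - used))  # exists by the degree assumption
        path += [x, y]
        used.add(x)
        prev = y
    return path
```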
Let m ∈ ℕ and let G be a bipartite graph on vertex set X∪ Y with | X|≥| Y|+2m, and such that every vertex in Y has degree at least | X| -m. Then G contains a collection of at most ⌊| X| /| Y|⌋ paths whose union covers all but | Y| + 2m vertices in X and all vertices in Y.
We prove the statement by induction on |X|, always assuming that | X|≥| Y | + 2m. Since
| X| - m ≥ (| X|+| Y|)/2,
we may apply <Ref>, to obtain a path P covering 2| Y| vertices in G. In particular, P covers all vertices in Y. Let X'=X∖ V(P), and note that in the induced subgraph G'=G[X'∪ Y], every vertex in Y has degree at least | X'|-m. If | X'|≤| Y|+2m, then {P} is a collection of paths as desired. Otherwise, we may apply the induction hypothesis to G[X'∪ Y] to obtain a collection of paths P_1,…,P_k with k≤⌊| X'| /| Y|⌋=⌊| X| /| Y|⌋-1 that cover all but | Y| +2m vertices in X'. Adding P to this collection yields the claim.
To get the exact bound in <Ref>, we also require a version of <Ref> which lets us cover all vertices in X when many vertices in Y are adjacent to all vertices in X.
Let G be a non-empty bipartite graph on vertex set X∪ Y with sets X_0={x∈ X: d(x)=| Y|}, X_1=X∖ X_0, Y_0={y∈ Y:d(y)=| X|}, and Y_1=Y∖ Y_0 such that
* | X|> | Y|, and
* X_1=Y_1=∅ or | X_0|/| Y_1|>2| X_1| /| Y_0|.
Then G contains a collection of at most ⌈| X|/(| Y| +1)⌉ paths that cover all vertices in G.
We prove the statement again by induction on |X|. It follows from (i) and (ii) that | X_0|≥| Y_1|+1, and therefore, G contains a path P with 2| Y_1|+1 vertices, alternating between X_0 and Y_1, such that P starts at x_0 ∈ X_0 and also ends in X_0. In particular P covers all vertices in Y_1.
Furthermore, since every vertex in Y_0 has neighbourhood X, and | X| > | Y|, G contains a path Q with 2min{| Y_0|, | X_1|} vertices, alternating between Y_0 and X∖ V(P), that starts at y_0∈ Y_0 and covers min{| Y_0|, | X_1|} vertices in X_1. Consider the concatenated path R=Px_0y_0Q and the graph G'=G[X'∪ Y] where X'=X∖ V(R).
Defining the sets X_0', X_1', Y_0', and Y_1' for G' analogously to how X_0, X_1, Y_0, and Y_1 were defined for G, we observe that Y_1'⊂ Y_1 and X_1'⊂ X_1∖ V(Q), while Y_0⊂ Y_0'. If we assume that X_1'=∅ or Y_1'=∅, then it follows that G' is complete bipartite and it is trivially possible to cover X∖ V(R) with ⌈| X'|/(| Y| +1)⌉ =⌈| X|/(| Y| +1)⌉-1 paths, so that nothing is left to show. Suppose therefore that X_1',Y_1'≠∅; in particular, we must have | X_1| >| Y_0|, so that | V(Q)∩ X_1| =| Y_0|. Using this, we see that
| X_0'|/| Y_1'|≥| X_0∖ V(P)|/| Y_1|≥| X_0|/| Y_1|-2>2(| X_1|/| Y_0| - 1 )= 2| X_1∖ V(Q)|/| Y_0|>2| X_1'|/| Y_0'|.
If | X'|> | Y|, we may use the induction hypothesis to obtain a family 𝒫 of at most ⌈| X|/(| Y| +1)⌉-1 paths which cover all of X' and Y, and together with R, these paths cover all of G.
If | X'|≤| Y| on the other hand, then we must have | Y_0'| >| X_1'|, so that it is possible to cover all of X_1' with a single path Q' of length 2| X_1'|. The remaining graph G'-V(Q') is complete bipartite, so that we may find a path P' that covers all of X_0' in it. Concatenating P' and Q' suitably, we obtain a second path R' such that R and R' cover all of G.
§ PROOF OF <REF>
Let n ∈ ℕ. In what follows, without loss of generality, we assume the vertex set of K_n is given by [n]. Let 𝒞_n be the set of all red-blue colourings of E(K_n). For χ∈𝒞_n, denote by f(n,χ) the smallest number of monochromatic paths of the same colour that cover [n]. Furthermore, let f(n)=max_χ∈𝒞_n f(n,χ).
In order to prove <ref>, we follow an inductive strategy: we first prove a weaker upper bound on f(n) for all n ∈ ℕ, given in <ref>, which can then be bootstrapped to prove our main theorem. Thus, in each of the lemmas in this section, we will assume an upper bound on f(m) for all m<n that will relate to our inductive hypothesis.
We first introduce a lemma which tells us that under a colouring χ∈𝒞_n, if there exists a large set S that is covered by both a collection of few red paths and a collection of few blue paths, then we can obtain a strong upper bound on f(n,χ).
Let C_1≥ C_2≥ 0 and let n∈ ℕ be such that f(m)<√(m)+C_1 for all m<n. Let further k∈ ℕ and χ∈𝒞_n be such that there exist red paths P_1,…, P_k⊂ K_n, blue paths Q_1,…,Q_k⊂ K_n, and a set S⊂ [n] satisfying S⊂ (V(P_1)∪…∪ V(P_k))∩ (V(Q_1)∪…∪ V(Q_k)). If
√(n-| S|)+C_1+k≤√(n)+C_2,
then f(n,χ)≤√(n)+C_2. In particular, if there is a red path P and a blue path Q such that S⊂ V(P)∩ V(Q), and
| S|≥ 2(C_1-C_2+1)√(n),
then f(n,χ)≤√(n)+C_2.
Suppose there exist red paths P_1,…, P_k⊂ K_n, blue paths Q_1,…,Q_k⊂ K_n, and a set S⊂ [n] satisfying S⊂ (V(P_1)∪…∪ V(P_k))∩ (V(Q_1)∪…∪ V(Q_k)).
Let H be the 2-edge-coloured graph obtained by deleting S from K_n under the colouring χ. Then H has m=n-|S| vertices, and by definition of f(m), there exists a collection of at most f(m) monochromatic paths of the same colour which cover V(H). Adding {P_1,…,P_k} or {Q_1,…,Q_k} to this collection according to whether these paths are red or blue respectively yields a collection of at most f(m)+k monochromatic paths of the same colour covering V(H) ∪ S = [n] in K_n. By our assumptions, we have
f(n,χ) ≤ f(m) + k < √(m) + C_1 +k = √(n-| S|)+C_1+k≤√(n)+C_2,
as desired.
For the second part of the statement, observe that if there is a red path P and a blue path Q such that S⊂ V(P)∩ V(Q) for some S ⊂ [n] with | S|≥ 2(C_1-C_2+1)√(n), then
√(n-|S|) ≤ √(n-2(C_1 -C_2+1)√(n)) ≤ √(n) - (C_1 -C_2+1).
Rearranging gives
√(n-| S|)+C_1+1≤√(n)+C_2,
and so taking k=1, we deduce that f(n,χ)≤√(n)+C_2.
We now show that if χ∈_n is such that there exists no small monochromatic path cover, then there must exist some long monochromatic path, of colour γ say, such that for every vertex y outside of the path, there are few vertices x on the path such that χ(xy)=γ.
Let C_1≥ C_2≥ 0 and let n>8^4(C_1-C_2+1)^4 be such that f(m)<√(m)+C_1 for all m<n. If χ∈𝒞_n is such that f(n,χ)>√(n)+C_2, then there exists a monochromatic path P such that the size of Y= [n]∖ V(P) is at most √(n)+8(C_1-C_2+1)n^{1/4}. Furthermore, if γ is the colour of the edges of P, then for every y∈ Y, the size of {x∈ V(P): χ(xy)=γ} is at most 2(C_1-C_2+1)√(n).
Let Δ = C_1-C_2 and let χ∈𝒞_n be such that f(n,χ)>√(n)+C_2. Without loss of generality, there exists a blue path Q which covers ⌊ n/2⌋ vertices. Let W be a subset of [n]∖ V(Q) of size ⌊ n/2⌋ and consider the longest red path R in the complete bipartite graph between V(Q) and W. The set V(R)∩ V(Q) is covered both by a single red and a single blue path so that by <Ref>, we may assume that | V(R)∩ V(Q)|<2(Δ+1)√(n). However, since R is a path in a bipartite graph between V(Q) and W, at least ⌊| V(R)| /2⌋ vertices of R must lie in V(Q), and thus, it follows that | V(R)|<4(Δ + 1)√(n). By <Ref>, there exists a blue path of length at least n-2(Δ+1)√(n)-1 between V(Q) and W.
Let P be the longest blue path in K_n, denote its vertex set by X, and let Y=[n]∖ X. Consider y∈ Y and let B denote the set of blue neighbours of y in P. Since P cannot be extended, B does not contain any endpoint or any two consecutive vertices of P. Furthermore, if we direct P arbitrarily, the predecessors of B form a red clique. Indeed, if P=v_1v_2… v_k, 1≤ i<j≤ k, and v_iv_j, v_{i+1}y, and v_{j+1}y were all blue, then v_1… v_iv_jv_{j-1}… v_{i+1}yv_{j+1}… v_k would be a longer blue path, contradicting the maximality of P. However, we may argue as before that V(P) cannot contain a red path of length 2(Δ+1)√(n), which means that y has at most 2(Δ+1)√(n) blue neighbours in X.
In other words, taking m=2(Δ+1)√(n), each vertex in Y has at least | X| -m red neighbours in X. What is more, we know that | X|≥ n-2(Δ+1)√(n)-1 and hence | Y|≤ 2(Δ+1)√(n)+1. Thus, as n>100(Δ+1)^2,
| X|≥ n-2(Δ+1)√(n)-1 > 10(Δ+1)√(n)-2(Δ+1)√(n)-1>6(Δ+1)√(n)+1≥| Y | + 2m.
It now follows from <Ref> that the bipartite graph given by the red edges between X and Y contains a collection of at most ⌊| X|/| Y|⌋ <n/| Y| paths which cover all but |Y|+ 2m<6(Δ+1)√(n)+1 vertices in X and all vertices of Y.
However, all vertices in X that are covered by this collection of paths are also covered by the single blue path P, which means that by <Ref>,
√(n)+C_2 < √(6(Δ+1)√(n)+1)+C_1 +n/| Y| ≤ 3(Δ+1)n^{1/4} +C_1+n/| Y|,
which we can simplify to
√(n) < n/| Y| + 4(Δ+1)n^{1/4},
which implies that
| Y | < n/(√(n)-4(Δ+1)n^{1/4}) = √(n)/(1-4(Δ+1)/n^{1/4}).
Since we have n^{1/4}≥ 8(Δ+1) by assumption and 1/(1-x)<1+2x for x∈ [0,1/2], we get that
| Y | < √(n)+8(Δ+1)n^{1/4},
as desired.
If there exists a monochromatic path in K_n containing almost all vertices, we can afford to cover the remaining vertices individually or in pairs. The following lemma formalises this.
Let n∈ ℕ and χ∈𝒞_n, and suppose P is a blue path in K_n. Let Y=[n]∖ V(P) and write Y_0 for the set of vertices in Y that are not contained in a blue edge with any vertex in V(P).
Then we have
f(n,χ) ≤ 1+⌈| Y∖ Y_0|/2⌉ + | Y_0| < 2+ | Y|/2 + | Y_0|/2.
Let Y_1 = Y∖ Y_0 and note that for any two vertices y_1,y_2∈ Y_1, there exists a blue path between y_1 and y_2 whose interior vertices lie on P. Thus, by partitioning the vertices in Y_1 arbitrarily into pairs, we may cover all of them with at most ⌈| Y_1|/2 ⌉ blue paths. Adding a single-vertex path for each y∈ Y_0 and P to this collection, we obtain a blue path cover of K_n of size at most 1+⌈| Y_1|/2⌉+| Y_0|.
Since any monochromatic path cover of [n] must have size at least f(n,χ), this yields
f(n,χ) ≤ 1+⌈| Y_1|/2⌉+ | Y_0| < 2 + |Y_1|/2 + |Y_0| =2 + |Y|-|Y_0|/2 + |Y_0| = 2+ | Y|/2 + | Y_0|/2.
We now prove a weak version of <Ref>, which will bootstrap to prove <Ref> in full.
For all n∈ ℕ and C=160000, f(n)<√(n)+C.
We prove the claim by induction on n, noting that it is trivial for n≤ C. Suppose therefore that n>C and that f(m)<√(m)+C for all m<n, and consider a colouring χ∈𝒞_n. Applying <Ref> with C=C_1=C_2, we obtain without loss of generality a blue path P such that the size of Y=[n]∖ V(P) is at most √(n)+8n^{1/4}<3√(n)/2 and such that every y∈ Y has at least | X| - 2√(n) red neighbours in X=V(P). By <Ref>, if | Y_0|≤√(n)/2 or | Y|≤√(n), then f(n,χ)≤√(n)+2 and there is nothing left to show. Let us therefore assume that | Y_0| >√(n)/2 and | Y| > √(n).
On the other hand, we have that | X |≥ n-3√(n)/2≥ 3√(n)/2+2m for m=2√(n). Therefore, by applying <Ref>, we see that there must exist a collection of at most √(n) red paths that cover all vertices in Y and all but at most 6√(n) vertices in X. Let X'⊂ X denote the vertices that are not covered by these paths and recall that all edges between Y_0 and X, in particular those between Y_0 and X', are red. Since | Y_0|>√(n)/2 and | X'| <6√(n) we can cover all of X' with at most 12 red paths between X' and Y_0, which completes the proof.
We are now ready to prove <Ref>.
Let C=160000, n_0 = C^10 and n>n_0. Define α_n = 9C/n^{1/4}, and suppose χ∈𝒞_n is such that f(n,χ)>√(n). By <Ref>, we have f(m)<√(m) +C for every m ∈ ℕ. Applying <Ref> with C_1=C and C_2=0, we see that without loss of generality, there exists a blue path P such that the size of Y=[n]∖ V(P) is at most (1+α_n)√(n) and with the additional property that every vertex in Y has less than 2C√(n) blue neighbours in X =V(P). By <Ref>, if | Y_0|≤ (1-2α_n)√(n) or | Y|≤√(n)-1, then f(n,χ)≤√(n), a contradiction. Let us therefore assume that | Y_0| >(1-2α_n)√(n) and | Y| > √(n)-1.
We define H as the bipartite graph H on vertex set X∪ Y whose edges are the red edges between X and Y under χ. Let Y_1 = Y ∖ Y_0, X_0 = {x∈ X: d_H(x)=|Y|}, and X_1 = X∖ X_0. We have |Y_1|=|Y|-|Y_0| <(1+α_n)√(n)-(1-2α_n)√(n) = 27Cn^1/4. By choice of P, we know that |{x∈ X: χ(xy) = blue}| < 2C√(n) for every y∈ Y, which implies that | X_1 | < 2C√(n)|Y_1|< 54C^2n^3/4.
Now, either Y_1 = ∅ = X_1, or, since n is sufficiently large, we have
|X_0|/|Y_1|= |X|-|X_1|/|Y_1|>n-(1+α_n)n^1/2-54C^2n^3/4/27C n^1/4>n^1/2>108C^2n^3/4/(1-2α_n) n^1/2>2|X_1|/|Y_0|.
Either way, we may apply <Ref> to obtain a collection of at most ⌈ |X|/(|Y|+1) ⌉ paths in H covering all vertices in H. This corresponds to a red path cover in K_n, which implies that f(n,χ)≤⌈ |X|/(|Y|+1) ⌉. Since |Y|>√(n) -1, it follows that
f(n,χ) ≤ ⌈|X|/(|Y|+1)⌉ = ⌈(n+1)/(|Y|+1)⌉ - 1 ≤ ⌈√(n+1)⌉ - 1 ≤ √(n).
Thus we have a contradiction, proving the theorem.
§ ACKNOWLEDGEMENTS
The second author is supported by the London Mathematical Society and the Heilbronn Institute for Mathematical Research through an Early Career Fellowship. The third author is supported by the Martingale Foundation.
|
http://arxiv.org/abs/2409.03141v1 | 20240905003623 | Towards Autonomous Cybersecurity: An Intelligent AutoML Framework for Autonomous Intrusion Detection | [
"Li Yang",
"Abdallah Shami"
] | cs.LG | [
"cs.LG",
"cs.CR",
"cs.NI",
"68T01, 90C31",
"I.2.1; I.2.6; C.2.0"
] |
§ ABSTRACT
The rapid evolution of mobile networks from 5G to 6G has necessitated the development of autonomous network management systems, such as Zero-Touch Networks (ZTNs). However, the increased complexity and automation of these networks have also escalated cybersecurity risks. Existing Intrusion Detection Systems (IDSs) leveraging traditional Machine Learning (ML) techniques have shown effectiveness in mitigating these risks, but they often require extensive manual effort and expert knowledge. To address these challenges, this paper proposes an Automated Machine Learning (AutoML)-based autonomous IDS framework towards achieving autonomous cybersecurity for next-generation networks. To achieve autonomous intrusion detection, the proposed AutoML framework automates all critical procedures of the data analytics pipeline, including data pre-processing, feature engineering, model selection, hyperparameter tuning, and model ensemble. Specifically, it utilizes a Tabular Variational Auto-Encoder (TVAE) method for automated data balancing, tree-based ML models for automated feature selection and base model learning, Bayesian Optimization (BO) for hyperparameter optimization, and a novel Optimized Confidence-based Stacking Ensemble (OCSE) method for automated model ensemble. The proposed AutoML-based IDS was evaluated on two public benchmark network security datasets, CICIDS2017 and 5G-NIDD, and demonstrated improved performance compared to state-of-the-art cybersecurity methods. This research marks a significant step towards fully autonomous cybersecurity in next-generation networks, potentially revolutionizing network security applications.
Towards Autonomous Cybersecurity: An Intelligent AutoML Framework for Autonomous Intrusion Detection
Li Yang
Abdallah Shami
September 9, 2024
========================================
§ INTRODUCTION
The progression of mobile networks has played a pivotal role in the digital revolution, with each generation bringing forth new technologies and capabilities. The fifth-generation (5G) networks have significantly enhanced mobile broadband and enabled massive machine-type communications with ultra-reliable low latency <cit.>. 5G networks leverages abstraction and virtualization techniques, such as Software-Defined Networking (SDN), Network Function Virtualization (NFV), and Network Slicing (NS), to provide flexible, efficient, and automated network management and services <cit.>.
For the evolution from 5G to the sixth-generation (6G) networks, network automation has become a necessity to meet the unprecedented demands of future network applications. 6G networks are expected to leverage Artificial Intelligence (AI), Machine Learning (ML), and automation techniques to provide functional modules and operational services, leading to self-organizing and autonomous networks <cit.>. Researchers have dedicated extensive efforts to developing network automation architectures, including Intent-Based Network Management (IBN), Self-Organizing Network Management (SON), and Autonomic Network Management (ANM) <cit.>. Recently, Zero-Touch Networks (ZTNs) were proposed by the European Telecommunications Standards Institute (ETSI) as a fully autonomous network management architecture with minimal human involvement <cit.>. Network automation solutions, including ZTNs, can effectively decrease network operational costs, enhance resource utilization efficiency, and mitigate the risks associated with human errors.
On the other hand, as network and service management requires a trustworthy and reliable system, cybersecurity has become a critical component of next-generation networks. Modern networks are vulnerable to various cyber-attacks, such as Denial of Service (DoS), sniffing/eavesdropping, spoofing, web attacks, and botnets <cit.>. These threats can lead to severe consequences, including financial losses, disruption of critical services, compromise of sensitive information, and reputational damage <cit.>. Therefore, effective cybersecurity measures should be developed to enhance the security of modern networks, while autonomous cybersecurity solutions are essential for safeguarding future networks with high automation requirements.
AI/ML techniques are widely used in network applications to develop data-driven cybersecurity mechanisms such as Intrusion Detection Systems (IDSs) and anomaly detection systems, which can analyze network traffic patterns and identify anomalies or cyber-attacks <cit.>. AI/ML models have shown effectiveness in network data analytics and IDS development, due to their capability to large volumes of network data, identify complex patterns, and adapt to evolving threats. ML-based IDSs can detect malicious attacks and predict potential threats based on historical data, thereby triggering countermeasures or response mechanisms to safeguard against the detected attacks <cit.>.
To ensure robust cybersecurity in next-generation networks, such as ZTNs, it is crucial to incorporate self-management functionalities that address security concerns, such as self-configuration, self-monitoring, self-healing, self-protection, and self-optimization <cit.>. To meet these requirements, autonomous cybersecurity solutions, such as autonomous IDSs, should be developed to automatically monitor network activities, detect network anomalies, and identify potential attacks.
Automated ML (AutoML) techniques, which are developed to automate the design and implementation of ML models, are promising solutions to realize network automation for ZTNs or future networks <cit.>. AutoML techniques offer the advantage of automating laborious and repetitive tasks involved in the ML and data analytics pipeline, such as data pre-processing, feature engineering, model selection, and hyperparameter tuning <cit.>. This automation can effectively reduce human effort, minimize the occurrence of human errors, and alleviate the need for extensive expert knowledge. In the cybersecurity domain, autonomous IDSs can be developed using AutoML techniques by automatically designing, tuning, and optimizing ML models that can effectively detect cyber-attacks and achieve self-monitoring and self-protection.
Therefore, this paper proposes an AutoML-based autonomous IDS framework to automatically detect malicious cyber-attacks for safeguarding 5G and potential 6G networks. The proposed AutoML framework enables the automation of critical procedures of the ML/data analytics pipeline for intrusion detection. Specifically, it consists of the following components:
* an Automated Data Pre-processing (AutoDP) component that focuses on automated data balancing using the Tabular Variational Auto-Encoder (TVAE) <cit.> method to address class-imbalance issues and improve data quality;
* an Automated Feature Engineering (AutoFE) component that automatically selects the most relevant features based on their average importance scores calculated using the Gini index and entropy metrics;
* an automated base model learning and selection component that automatically trains six tree-based machine learning models, namely Decision Tree (DT) <cit.>, Random Forest (RF) <cit.>, Extra Trees (ET) <cit.>, Extreme Gradient Boosting (XGBoost) <cit.>, Light Gradient Boosting Machine (LightGBM) <cit.>, and Categorical Boosting (CatBoost) <cit.>, and selects the top three best-performing models among them;
* a Hyper-Parameter Optimization (HPO) component that automatically tunes and optimizes the hyperparameters of the selected ML models using Bayesian Optimization with Tree-structured Parzen Estimator (BO-TPE) <cit.> to obtain optimized base models;
* an automated model ensemble component that employs the proposed novel Optimized Confidence-based Stacking Ensemble (OCSE) method to generate the meta-learner for final intrusion detection.
Overall, the proposed AutoML-based IDS can automatically process network data and generate optimized ML models capable of detecting various types of cyber-attacks to safeguard current and future networks.
This paper presents the following key contributions:
* It proposes a novel and comprehensive AutoML framework that enables fully autonomous intrusion detection in next-generation networks, holding the potential to achieve fully autonomous cybersecurity. (Code for this paper is publicly available at: <https://github.com/Western-OC2-Lab/AutonomousCyber-AutoML-based-Autonomous-Intrusion-Detection-System>.)
* It proposes a novel automated data balancing method based on TVAE and class distribution exploration.
* It proposes a novel ensemble learning method, OCSE, which extends the traditional stacking ensemble method by incorporating confidence values of classes and the BO-TPE method for model optimization.
* It assesses the proposed AutoML-based IDS model using two public benchmark network security datasets, CICIDS2017 <cit.> and 5G-NIDD <cit.> datasets, which contain state-of-the-art cyber-attack scenarios.
* It compares the performance of the proposed AutoML-based IDS model with state-of-the-art methods.
To the best of our knowledge, no previous research has proposed such a comprehensive autonomous IDS model that leverages AutoML to automate all essential network data analytics procedures, ensuring efficient and automatic detection of diverse cyber-attacks for safeguarding 5G and next-generation networks.
The paper is structured as follows: Section 2 introduces the related work using AI/ML and AutoML-based methods for developing IDSs and cybersecurity mechanisms. Section 3 presents a detailed description of the proposed AutoML-based IDS framework, including AutoDP, AutoFE, automated base model selection, HPO, and automated model ensemble. Section 4 presents and discusses the experimental results of evaluating the proposed framework on benchmark network datasets. Finally, Section 5 summarizes the paper.
§ RELATED WORK
AI/ML models have been extensively applied in recent years to the development of IDSs for modern networks. This related work section aims to provide an overview of the critical studies that have contributed to the development and advancement of IDSs using ML and AutoML models for future networks.
§.§ AI/ML-based IDSs
Research on developing IDSs using AI/ML models has gained significant attention and importance, as threat hunting and cyber-attack detection are critical components of cybersecurity systems for modern networks.
Traditional AI/ML algorithms have demonstrated their effectiveness in intrusion detection, especially tree-based algorithms such as DT and RF. Sharafaldin et al. <cit.> created the benchmark network security dataset, CICIDS2017, and observed that the DT and RF algorithms outperformed the other compared ML models on this dataset. Maseer et al. <cit.> proposed a ML-based benchmarking Anomaly-based IDS (AIDS) approach that develops ten typical supervised and unsupervised ML models and evaluates their performance on the CICIDS2017 dataset. The experimental results illustrate that the DT and K-Nearest Neighbor (KNN) based AIDS models perform the best on the CICIDS2017 dataset among the evaluated ML models. Yang et al. <cit.> proposed a Multi-Tiered Hybrid IDS (MTH-IDS) framework for intrusion detection in vehicular networks. It incorporates both supervised learning algorithms (DT, RF, ET, and XGBoost) and unsupervised learning methods (k-means) to detect multiple types of cyber-attacks. They evaluated their framework on the CAN-intrusion dataset and the CICIDS2017 dataset to emphasize the model's effectiveness.
The utilization of Deep Learning (DL) methods in the development of IDSs has become prevalent due to their effectiveness in handling high-dimensional network traffic data. Agrafiotis et al. <cit.> proposed the embeddings and Fully-Connected network (Embeddings & FC) model to detect malware traffic in 5G networks. This IDS model employs the Long Short-Term Memory Autoencoders (LSTM-AE) to transform packets into embeddings and uses the Fully-Connected (FC) network model to identify attacks. The Embeddings & FC IDS demonstrates improved accuracy when applied to the 5G-NIDD dataset, a dedicated dataset for 5G networks. Tayfour et al. <cit.> proposed a DL-LSTM method supported by Software-Defined Networking (SDN) to detect cyber-attacks in the Internet of Things (IoT) and 5G networks. The DL-LSTM model achieved high accuracy on the CICIDS2017 dataset, demonstrating the effectiveness of deep learning in network intrusion detection. He et al. <cit.> proposed a Pyramid Depthwise Separable Convolution neural network-based IDS (PyDSC-IDS) for network intrusion detection. The PyDSC-IDS model uses Pyramid convolution (PyConv) to extract features from data and Depthwise Separable Convolution (DSC) to reduce model complexity. Compared with other DL models, PyDSC-IDS achieves higher detection accuracy with only a small increase in complexity on the NSL-KDD, UNSW-NB15, and CICDIDS2017 datasets.
Due to the robustness of tree-based ML algorithms in handling large-scale, high-dimensional, and non-linear network data, they are utilized as base models in the proposed framework for intrusion detection. While DL models offer powerful data analysis capabilities, they often come with higher computational complexity compared to traditional ML algorithms. To mitigate the impact of this challenge, the proposed framework employs the TVAE method, a DL model, only for synthesizing samples for minority classes. This approach proves to be more efficient than using it for intrusion detection, as processing minority class samples is significantly faster than dealing with the entire large dataset. Furthermore, the development of traditional ML/DL models for intrusion detection poses several critical challenges, such as manual effort, human bias & errors, and expertise requirements. These challenges underscore the importance of automating AI/ML models and developing autonomous IDSs.
§.§ AutoML-based IDSs
AutoML techniques are promising solutions to develop autonomous IDSs by automating the tedious procedures in the data analytics/ML pipeline. While AutoML is a relatively new research area in IDS development, several recent works have already employed AutoML techniques to create autonomous IDSs for modern networks. Yang et al. <cit.> provided a comprehensive discussion on the general and specific procedures of applying AutoML techniques to IoT data analytics and conducted a case study to employ AutoML for IoT intrusion detection tasks. Khan et al. <cit.> proposed an Optimized Ensemble IDS (OE-IDS) for intrusion detection in network environments. It automates the hyperparameter tuning process of four supervised ML algorithms and uses them to develop an ensemble model based on a soft-voting method. The OE-IDS model achieved better accuracy and F1-scores than most other compared traditional ML models on the CICIDS2017 and UNSW-NB15 datasets. Elmasry et al. <cit.> proposed a double PSO and DL-based IDS for network intrusion detection. It involves the Particle Swarm Optimization (PSO) method to select features and tune hyperparameters of three DL methods: Deep Neural Networks (DNN), LSTM, and Deep Belief Networks (DBN). This IDS model outperforms other compared methods in terms of accuracy and detection rate on the CICIDS2017 dataset. Singh et al. <cit.> proposed AutoML-ID, an AutoML-based IDS designed for Wireless Sensor Networks (WSNs). The AutoML-ID approach focuses on simple automated ML model selection and hyperparameter optimization using Bayesian Optimization (BO). The model AutoML-ID was tested on a public IDS dataset, Intrusion-Data-WSN, and achieved better performance than traditional ML models.
The existing literature has demonstrated the advantages of AutoML-based IDSs in improving performance and reducing human effort in intrusion detection and cybersecurity applications. However, many current AutoML-based IDS models focus only on automated model selection and hyperparameter optimization, leaving significant potential for improvement in other crucial stages of the AutoML pipeline. In our AutoML framework, we develop techniques to automate every critical step of the data analytics pipeline: the TVAE-based automated data balancing method in the AutoDP process to handle class imbalance, the tree-based averaging method in the AutoFE process to reduce noise and data complexity, automated model selection and BO-based hyperparameter optimization of tree-based ML algorithms to obtain optimized base models, and the proposed OCSE method for automated model ensemble to further enhance performance. Table <ref> summarizes and compares the contributions of the existing literature introduced in Section <ref>.
Overall, this paper presents a generic, comprehensive, and fully automated AutoML framework for future networks with high automation requirements.
§ PROPOSED FRAMEWORK
§.§ System Overview
The objective of this study is to develop an autonomous IDS model capable of detecting various cyber-attacks to safeguard 5G and potential 6G networks. The overall framework of the proposed AutoML-based IDS is demonstrated in Fig. <ref>, which comprises five stages: AutoDP, AutoFE, automated base model learning and selection, HPO, and automated model ensemble. During the initial stage, AutoDP, the input network traffic data undergoes pre-processing, where the proposed automated data balancing method identifies and addresses class-imbalance issues through the TVAE model to improve data quality. In the AutoFE stage, the most relevant features are automatically selected based on their importance scores calculated by the Gini index and entropy metrics using tree-based algorithms. This AutoFE process reduces data complexity and improves the generalization ability of the IDS model by minimizing noisy and redundant features. Subsequently, during the automated base model learning and selection stage, six tree-based ML algorithms (i.e., DT, RF, ET, XGBoost, LightGBM, and CatBoost) are trained and evaluated on the training set, and the top three best-performing models are automatically selected as the base models for further processing. In the HPO stage, the three selected ML models are further optimized through automated hyperparameter tuning or HPO using the BO-TPE method. In the automated model ensemble stage, the confidence values of all classes generated from the three optimized base models are integrated using the proposed OCSE model to obtain the final ensemble IDS model for final intrusion detection.
Overall, this comprehensive AutoML-based framework enables the integration of advanced AutoML techniques across multiple stages, collectively enhancing the detection capabilities and robustness of the proposed autonomous IDS model against various cyber threats for safeguarding next-generation networks.
§.§ Automated Data Pre-Processing (AutoDP)
Data pre-processing is an essential stage in the ML and data analytics pipeline, since it directly influences the quality of input data and, consequently, the performance of ML models <cit.>. However, data pre-processing procedures can be tedious and time-consuming, often requiring massive human effort and expert knowledge. To address these challenges, Automated Data Pre-processing (AutoDP) has emerged as a critical component of AutoML that aims to automatically identify and address data quality issues in datasets, thereby ensuring that ML models can learn meaningful patterns from high-quality data <cit.>.
In the proposed AutoML framework, the AutoDP component focuses on automated data balancing, a crucial aspect of data pre-processing that addresses class imbalance issues. Class imbalance is a common data quality issue in network data analytics and intrusion detection problems, as cyber-attacks or anomalies usually occur less frequently than benign or normal events, leading to a significant imbalance in class distribution. Class imbalance issues often bias ML models in IDS development, leading them to prioritize the detection of normal samples and common attacks while neglecting less common but critical threats <cit.>.
Data balancing techniques are designed to address class imbalance issues and can be classified into under-sampling and over-sampling techniques. Under-sampling methods alter the class distribution by eliminating instances from the majority classes to balance data, which can result in the loss of critical patterns of normal network activities <cit.>. On the other hand, over-sampling methods balance data by creating synthetic samples for the minority classes, which may slightly increase model training time but often outperform under-sampling methods. Therefore, over-sampling methods are considered in this research.
Over-sampling methods can be broadly categorized into random and informed over-sampling methods <cit.>. Random over-sampling simply replicates samples from the minority classes, while informed methods aim to generate higher-quality samples to improve the data balancing performance. In the proposed framework, the Tabular Variational Auto-Encoder (TVAE) <cit.> method is used as an informed over-sampling method to synthesize minority class samples to balance tabular network data. TVAE is a DL model that extends the capabilities of the traditional autoencoder by incorporating probabilistic modeling and adapting to tabular data <cit.>. TVAEs incorporate an encoder to project the input data x into a probability distribution in the latent space z, denoted as q(z|x), and a decoder to reconstruct the data generation process as a probability distribution p(x|z). The objective function of TVAE is to maximize the Evidence Lower BOund (ELBO) on the log-likelihood of the data, denoted by <cit.>:
ELBO = 𝔼[log p(x|z)] - D_KL(q(z|x) || p(z))
where D_KL is the Kullback-Leibler (KL) divergence, a measure of the difference between two probability distributions, and 𝔼 denotes the expectation.
To automate the data balancing procedure, the proposed AutoDP method consists of two procedures: automated class-imbalance detection and automated data synthesis. In the automated class-imbalance detection procedure, the system calculates three metrics in the training set: the number of classes, the number of samples in each class, and the average number of samples per class. Based on these three metrics, the system will identify minority classes that have fewer than a certain threshold, defined as half of the average number of samples per class in the training set, in the proposed framework. If there are minority classes indicating class imbalance, the system will automatically perform automated data synthesis by synthesizing minority class samples using the TVAE method until the number of samples in each minority class reaches the threshold (half of the average number of samples). The details of the proposed TVAE-based automated data balancing method are provided in Algorithm <ref>. Finally, a balanced training dataset is automatically obtained using the proposed AutoDP method, which involves generating high-quality minority class samples that better represent patterns of less common attacks, thereby assisting in effective IDS model development.
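To make this procedure concrete, the sketch below shows one way the balancing step could be implemented with the SDV library used in our experiments (Section <ref>); the function name, DataFrame layout, and label_col argument are illustrative assumptions rather than the exact implementation.

```python
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import TVAESynthesizer

def balance_with_tvae(train_df: pd.DataFrame, label_col: str) -> pd.DataFrame:
    """Detect minority classes and oversample them with TVAE."""
    counts = train_df[label_col].value_counts()
    threshold = counts.mean() / 2               # half the average samples per class
    minority_classes = counts[counts < threshold].index

    synthetic_parts = []
    for cls in minority_classes:
        subset = train_df[train_df[label_col] == cls]
        metadata = SingleTableMetadata()
        metadata.detect_from_dataframe(subset)  # infer column types automatically
        synthesizer = TVAESynthesizer(metadata)
        synthesizer.fit(subset)                 # learn the minority-class distribution
        n_needed = int(threshold) - len(subset) # top up to the threshold
        if n_needed > 0:
            synthetic_parts.append(synthesizer.sample(num_rows=n_needed))

    return pd.concat([train_df, *synthetic_parts], ignore_index=True)
```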
§.§ Automated Feature Engineering (AutoFE)
After the AutoDP stage, Automated Feature Engineering (AutoFE) is another crucial component of the proposed AutoML framework. Feature Engineering (FE) involves extracting and selecting the most informative and relevant features from a dataset, as the original features are often suboptimal for specific datasets <cit.>. This process enhances the performance of ML models. AutoFE aims to automate the traditional FE process, minimizing human effort on FE tasks. In the proposed framework, AutoFE focuses on the Feature Selection (FS) process, aiming to identify and select the most relevant features to construct a highly efficient and accurate ML model. The proposed Automated FS (AutoFS) method is designed based on the feature importance scores generated by the tree-based algorithms used in the automated model learning process.
In tree-based algorithms, features that yield large reductions in Gini impurity or entropy during DT construction are assigned higher importance scores, as they drive the node-splitting process. Gini impurity and entropy are two common metrics for measuring the impurity of nodes in DTs for classification problems, the category to which intrusion detection belongs <cit.>. The Gini index quantifies the impurity of a node by evaluating the probability of misclassifying a randomly selected element within that node. For a multi-class problem with K classes, it is calculated as follows <cit.>:
Gini(p_1, p_2, …, p_K)=1-∑_i=1^K p_i^2
where p_1, p_2, …, p_K denote the proportions of classes 1, 2, …, K.
Entropy is another impurity measure that calculates how much information is required to identify the class of a randomly selected element within a node <cit.>. The entropy for a multi-class problem with K classes can be denoted by:
Entropy(p_1, p_2, …, p_K)=-∑_i=1^K p_i log _2(p_i)
In the proposed AutoFE framework, the FS process is automated by leveraging the power of tree-based models, as they can automatically calculate the feature importance scores during their training process. The specific procedures of this AutoFS process are as follows:
* Train the six tree-based ML models (DT, RF, ET, XGBoost, LightGBM, and CatBoost) on the training set and evaluate their performance.
* Obtain the feature importance scores generated from the top three best-performing models.
* Calculate the average relative importance score for each feature across the top three best-performing models.
* Rank the features from highest to lowest based on these average importance scores.
* Select features from the top of this ranked list, accumulating their importance scores until the sum reaches a predefined threshold, α (default value is 0.9).
* Generate the updated training and test sets using the newly generated feature set that comprises the selected features.
Through this AutoFE process, the most relevant features are selected such that their cumulative relative importance reaches 90%, while the remaining features, which jointly account for less than 10% of the total importance, are discarded, effectively reducing noise and computational complexity. The cumulative feature importance threshold, α, can be tuned using the optimization method presented in Section <ref> to customize it for specific tasks or problems. The proposed AutoFE process helps to simplify the model, reduce the risk of overfitting, improve computational efficiency, and increase model interpretability.
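A minimal sketch of this selection rule is shown below. It assumes the three top models expose Scikit-Learn-style feature_importances_ attributes and that feature names are available; all variable names are illustrative.

```python
import numpy as np
import pandas as pd

def auto_select_features(top_models, feature_names, alpha=0.9):
    """Rank features by average relative importance and keep the smallest
    prefix whose cumulative importance reaches alpha."""
    per_model = [
        pd.Series(m.feature_importances_, index=feature_names)
        for m in top_models
    ]
    avg = pd.concat(per_model, axis=1).mean(axis=1)
    ranked = (avg / avg.sum()).sort_values(ascending=False)
    # first position where the cumulative share reaches alpha
    n_keep = int(np.searchsorted(ranked.cumsum().values, alpha)) + 1
    return ranked.index[:n_keep].tolist()

# selected = auto_select_features([rf, xgb, lgbm], X_train.columns)
# X_train, X_test = X_train[selected], X_test[selected]
```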
§.§ Automated Base Model Learning and Selection
After the AutoDP and AutoFE procedures, the improved network traffic datasets are learned by supervised ML algorithms to train ML-based IDSs that can detect various types of cyber-attacks. Six tree-based ML models, i.e., DT, RF, ET, XGBoost, LightGBM, and CatBoost, are built as base models to perform the initial intrusion detection.
DT <cit.> is a fundamental ML algorithm that makes predictions by learning decision rules inferred from the input features. The decision rules are formed in a tree structure, where each internal node represents a test on a feature, each branch denotes a test outcome, and each leaf node contains a class label <cit.>. The DT algorithm recursively partitions data by selecting the best splitting rule, creating child nodes based on the chosen criterion, and repeating this process until a stopping condition is met.
RF <cit.> is an ensemble learning model constructed on multiple DT models. RF works by generating a set of DTs from randomly selected subsets of the training set and then aggregates the votes from the base DTs to decide the final result based on the majority voting rule.
ET <cit.> is another tree-based ensemble method constructed by combining multiple DTs. However, it randomizes both features and cut-point choices to construct completely randomized trees. As the splitting points in ETs are chosen randomly, the constructed trees in ETs are more diverse and less prone to over-fitting than in RF.
XGBoost <cit.> is an ensemble model built on the speed and performance of the Gradient-Boosted Decision Trees (GBDT) model. XGBoost distinguishes itself from traditional gradient boosting methods by incorporating a regularization term into the objective function, which effectively controls the model's complexity, smooths the final weights, and mitigates overfitting <cit.>. Additionally, XGBoost uses a second-order Taylor expansion to estimate the loss function, enabling an accurate model update and fast convergence.
LightGBM <cit.> is another improved version of the GBDT model with enhanced model performance and efficiency. Similar to other tree-based algorithms, LightGBM is constructed on an ensemble of DTs, but it introduces two advanced techniques, Gradient-based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB) <cit.>. GOSS is a down-sampling approach that keeps instances with large gradients and randomly samples instances with small gradients to save model training time and memory. EFB groups mutually exclusive features into bundles as single features, reducing feature space dimensionality and improving training efficiency. By utilizing GOSS and EFB, LightGBM significantly reduces data size without losing critical information, preserving accuracy in the learning process and reducing computational cost.
CatBoost <cit.> is another GBDT-based algorithm that is particularly effective for datasets with categorical features. CatBoost distinguishes itself from traditional GBDT algorithms through three major innovations: symmetric trees, ordered boosting, and native feature support. Symmetric trees ensure that all the DTs in the model are symmetric, which simplifies the model and reduces the risk of overfitting. Ordered boosting is a novel boosting scheme that prevents overfitting on small-sized datasets. Native feature support allows CatBoost to handle categorical features directly, without manual encoding, which enhances model performance.
The primary reasons for choosing these six tree-based algorithms as candidate base models are as follows:
* RF, ET, XGBoost, LightGBM, and CatBoost are all ensemble models that combine multiple base DTs to improve model performance and robustness, and DT can serve as the baseline model for comparison.
* These methods are proficient at handling non-linear and high-dimensional data to which 5G network data belongs.
* They support parallel computation, which can significantly improve the training efficiency on large network datasets.
* These tree-based ML algorithms offer the advantage of automatically calculating feature importance in their training process, which assists in efficient feature selection process in the proposed AutoFE method.
* These tree-based models incorporate randomness during their tree construction process, which enables the proposed framework to build a robust ensemble model with diverse base models and increased generalizability.
After training and evaluating the performance of these six tree-based models on the training set, the three best-performing models based on their cross-validated F1-scores are automatically selected as the three base models to construct the ensemble model discussed in Section <ref>.
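The selection step itself reduces to a short cross-validation loop, sketched below. It assumes pre-processed X_train and y_train from the earlier stages and uses macro-averaged F1 as an example scoring choice; the exact averaging scheme and default settings may differ from our implementation.

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

candidates = {
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
    "ET": ExtraTreesClassifier(),
    "XGBoost": XGBClassifier(),
    "LightGBM": LGBMClassifier(),
    "CatBoost": CatBoostClassifier(verbose=0),
}

# five-fold cross-validated F1-score for each candidate base model
scores = {
    name: cross_val_score(model, X_train, y_train,
                          cv=5, scoring="f1_macro").mean()
    for name, model in candidates.items()
}

# keep the three best-performing models for HPO and the ensemble stage
base_model_names = sorted(scores, key=scores.get, reverse=True)[:3]
```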
§.§ Hyper-Parameter Optimization (HPO)
Tuning hyper-parameters is a crucial step in deploying an effective ML model for a particular problem or dataset. The hyper-parameters of an ML model determine its architecture and have a direct effect on the model's performance. The process of using optimization methods to automatically tune these hyper-parameters is known as Hyper-Parameter Optimization (HPO). In HPO or AutoML tasks, given the hyperparameter search space X, the goal is to find the optimal hyper-parameter configuration x^* that minimizes the objective function f(x) <cit.>:
x^* = arg min_x∈ X f(x)
In the proposed AutoML framework, the important hyperparameters of the six tree-based algorithms are optimized during the HPO process. Utilizing terminology from the Scikit-Learn library, key hyperparameters for the Decision Tree (DT) include `max_depth', which sets the maximum tree depth; `min_samples_split', specifying the minimum number of samples required to split a node; and `min_samples_leaf', defining the minimum number of samples required at a leaf node. The `criterion' hyperparameter allows selection between Gini impurity and entropy to measure splitting quality. As RF and ET are ensemble models built using DTs, they inherit these four critical hyperparameters from DT. Additionally, RF and ET include the `n_estimators' hyperparameter, which determines the number of trees in the ensemble and significantly influences model performance and efficiency.
The number of base trees and maximum tree depth are two crucial hyperparameters shared by XGBoost, LightGBM, and CatBoost. Additionally, since these three algorithms are gradient-boosting models, the learning rate is another critical hyperparameter that significantly impacts their learning speed and overall performance.
Among the various optimization methods for HPO tasks, Bayesian Optimization (BO) methods have proven to be efficient <cit.>. BO leverages a posterior distribution, known as the surrogate, to describe the function under optimization. As more observations are made, the posterior distribution improves, sharpening the distinction between promising regions of the parameter space that are worth exploring and unpromising ones. Therefore, BO methods can determine future hyper-parameter evaluations based on the results of previous evaluations and avoid unnecessary model assessments <cit.>.
The Tree-structured Parzen Estimator (TPE) is a common surrogate for BO to model the evaluated configurations <cit.>. BO with TPE, or BO-TPE, can handle a tree-structured hyper-parameter search space using Parzen estimators, also known as kernel density estimators (KDEs) <cit.>. In BO-TPE, the hyper-parameter configuration space D is split into the better group D^(l) and the worse group D^(g) based on a top quantile y^*. The KDEs are estimated using a kernel function whose bandwidth adapts to the provided dataset. The density functions of the configurations, modeled by TPE, can be denoted by <cit.>:
p(x | y, D) =
    p(x | y, D^(l)),  if y ≤ y^*
    p(x | y, D^(g)),  if y > y^*
The ratio between the two probability density functions is utilized to determine the new configurations for evaluation, facilitating the gradual identification of optimal configurations. BO-TPE is selected as the HPO method to tune and optimize the hyperparameters of the ML models in the proposed framework for the following reasons <cit.> <cit.>:
* BO-TPE is effective for handling high-dimensional variables with multiple types, rendering it suitable for the tree-based ML methods utilized in the proposed framework, which involve numerous hyperparameters.
* BO-TPE can handle tree-structured search spaces, enabling flexible and complex hyperparameter optimization, making it well-suited for the tree-based ML models employed in the proposed framework.
* Unlike other HPO methods, like grid search, which treats each hyperparameter configuration independently and causes many unnecessary evaluations, BO-TPE enables more efficient HPO by exploring promising regions and determining new hyperparameter configurations based on previous evaluation results.
* BO-TPE has a low time complexity of O(n log n), where n is the number of hyperparameter configurations, which is much lower than that of other HPO methods, such as grid search, with a time complexity of O(n^k) <cit.>.
By automatically tuning the hyperparameters of the three best-performing base ML models using BO-TPE, three optimized ML models with improved intrusion detection effectiveness are obtained for further analysis.
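As an illustration, the sketch below tunes the three shared gradient-boosting hyperparameters of LightGBM with BO-TPE via the Hyperopt library used in our implementation; the search-space bounds and evaluation budget are examples rather than the exact ranges in Table <ref>.

```python
from hyperopt import fmin, tpe, hp, Trials
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

space = {
    "n_estimators": hp.quniform("n_estimators", 50, 500, 50),
    "max_depth": hp.quniform("max_depth", 3, 15, 1),
    "learning_rate": hp.loguniform("learning_rate", -5, 0),  # ~0.007 .. 1.0
}

def objective(params):
    model = LGBMClassifier(
        n_estimators=int(params["n_estimators"]),
        max_depth=int(params["max_depth"]),
        learning_rate=params["learning_rate"],
    )
    f1 = cross_val_score(model, X_train, y_train,
                         cv=5, scoring="f1_macro").mean()
    return -f1  # fmin minimizes, so negate the F1-score

best = fmin(fn=objective, space=space,
            algo=tpe.suggest, max_evals=50, trials=Trials())
```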
§.§ Automated Model Ensemble
After selecting the top three best-performing tree-based models and optimizing their hyperparameters using BO-TPE, the three optimized models are utilized as base models to construct an ensemble model for further performance enhancement. Ensemble learning is an advanced technology that combines the prediction outcomes of multiple individual ML models to make final predictions <cit.>. Ensemble learning aims to improve model performance and generalizability by leveraging the collective knowledge of multiple models.
In the last stage of the proposed AutoML framework, a novel Optimized Confidence-based Stacking Ensemble (OCSE) method is proposed to construct the final ensemble model by extending the traditional stacking ensemble strategy. Stacking is a widely-used ensemble learning method that comprises two layers of models. The first layer of stacking contains multiple trained base learners, and their output labels serve as the input for training a robust meta-learner in the second layer <cit.>.
Compared with the traditional stacking method, the proposed OCSE method introduces two additional strategies: confidence inputs and optimization. Firstly, the three optimized base ML models provide a probability distribution over the target classes for each sample, which indicates the confidence of the models’ prediction for this sample. The confidence values output by the base models are used as input to the meta-learner in the second layer of the proposed OCSE model.
Secondly, the best-performing base model from the six tree-based models, based on the cross-validated F1-scores, is selected to construct the meta-learner. Its hyperparameters are optimized by BO-TPE to obtain the optimized meta-learner, following the same HPO process used for the base models, as described in Section <ref>. The specifications of the proposed OCSE method and the entire AutoML framework are illustrated in Algorithm <ref>.
The computational complexity of the proposed OCSE model is O(ncmh), where n is the number of samples, c is the number of unique classes, m is the number of base models, and h is the number of hyperparameter configurations of the meta-learner. In the proposed framework, c, m, and h are all relatively small numbers.
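The sketch below captures the core of OCSE: the per-class confidence vectors of the base models are concatenated into meta-features for the meta-learner (here, the tuned copy of the best base model type). Using out-of-fold probabilities for the meta-training set is an assumption made to avoid label leakage; the exact procedure may differ.

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_predict

def confidence_features(base_models, X, y=None, training=False):
    """Stack per-class probabilities of all base models column-wise."""
    blocks = []
    for model in base_models:
        if training:
            # out-of-fold confidences for the meta-training set
            proba = cross_val_predict(clone(model), X, y,
                                      cv=5, method="predict_proba")
            model.fit(X, y)            # refit on the full training set
        else:
            proba = model.predict_proba(X)
        blocks.append(proba)
    return np.hstack(blocks)           # (n_samples, n_models * n_classes)

meta_X_train = confidence_features(base_models, X_train, y_train, training=True)
meta_learner.fit(meta_X_train, y_train)  # meta-learner tuned by BO-TPE
y_pred = meta_learner.predict(confidence_features(base_models, X_test))
```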
Compared with other ensemble techniques, the proposed OCSE method presents the following advantages:
* Utilization of Confidence: Unlike many existing ensemble techniques, such as bagging, boosting, and traditional stacking, which solely rely on the predicted labels to construct the ensemble model, the proposed OCSE method utilizes the confidence of all classes as input features, which provides more comprehensive information about the certainty of base model's predictions, resulting in more informed and robust ensemble predictions.
* Automated and Optimized Models: The proposed OCSE method automatically selects the best-performing base model as the second-layer meta-learner and tunes its hyperparameters, resulting in an optimized final learner capable of achieving the optimal overall performance. The automation process also reduces the need for manual effort and saves time in model development.
* Flexibility: OCSE is a flexible method in which both the base models and the meta-learner can be replaced with other ML algorithms to adapt to a wide range of tasks.
Overall, with the use of the novel OCSE method and all the other critical components described in this section, the proposed AutoML-based IDS framework can automatically generate an optimized ensemble model for effective and robust intrusion detection, serving as a key component for autonomous cybersecurity solutions.
§ PERFORMANCE EVALUATION
§.§ Experimental Setup
The proposed AutoML-based IDS framework was developed in Python by extending the Scikit-Learn <cit.>, Xgboost <cit.>, Lightgbm <cit.>, Catboost <cit.>, Synthetic Data Vault (SDV) <cit.>, and Hyperopt <cit.> libraries. The experiments were performed on a Dell Precision 3630 computer equipped with an i7-8700 processor and 16 GB of memory, which served as the server machine in 5G and potential 6G networks.
To evaluate the proposed AutoML-based IDS framework, two public benchmark network traffic datasets, namely CICIDS2017 <cit.> and 5G-NIDD <cit.>, are utilized in the experiments. The CICIDS2017 dataset is one of the most comprehensive public cybersecurity datasets, created by simulating a real-world network environment, and it involves six primary types of attacks: DoS, botnets, brute force, infiltration, port scan, and web attacks <cit.>. The diverse attack scenarios and comprehensive feature set of the CICIDS2017 dataset make it suitable for network security applications. The 5G-NIDD dataset is a state-of-the-art network security dataset released in December 2022 <cit.>. This dataset was generated by capturing network traffic within a 5G testbed under diverse DoS and port scan cyber-attacks. The 5G-NIDD dataset is particularly suitable for our research, as it specifically targets 5G networks and enables the detection of new and sophisticated cyber-attacks.
To develop and evaluate the proposed AutoML-based IDS model, both cross-validation and hold-out validation methods are used in the experiments. The model training and optimization process utilizes five-fold cross-validation to automatically generate the optimized ensemble model, while the final model produced by the proposed AutoML framework is evaluated using an unseen test set, which is split from an 80%/20% hold-out validation during the initial stage of data pre-processing.
Due to the inherent class imbalance issues in network intrusion detection datasets, four model performance metrics—accuracy, precision, recall, and F1-scores—are considered collectively in the experiments. The F1-score is utilized as the primary performance metric in the performance-based automated model selection and tuning process of the proposed AutoML framework, as it offers a balanced view of anomaly detection results by calculating the harmonic mean of recall and precision. Additionally, the model execution time, involving the training and inference time of the final OCSE model, is utilized to assess the model's efficiency.
§.§ Experimental Results and Discussion
As described in Section <ref>, the proposed AutoML-based IDS comprises five critical stages: AutoDP, AutoFE, automated base model learning and selection, HPO, and automated model ensemble. Initially, the proposed TVAE-based automated data balancing method applied in the AutoDP stage automatically balances the distributions of two datasets, CICIDS2017 and 5G-NIDD, to prevent model bias. During the AutoFE stage, important features are selected based on the average importance scores obtained from the three best-performing ML models, where the accumulative importance score reaches the threshold α=90%. These selected features, along with their relative importance scores for the CICIDS2017 and the 5G-NIDD datasets, are illustrated in Figs. <ref> and <ref>, respectively. Subsequently, the three top-performing ML models—RF, XGBoost, and LightGBM—are optimized by tuning their hyperparameters using BO-TPE. The hyperparameters tuned, their search spaces, and the optimal values obtained for these hyperparameters for both datasets are detailed in Table <ref>. Finally, the three optimized base ML models are integrated using the proposed OCSE model to automate the model ensemble, improving the decision-making effectiveness in intrusion detection.
The performance of the proposed AutoML-OCSE model and several state-of-the-art methods in the literature is provided in Table <ref> for the CICIDS2017 dataset and Table <ref> for the 5G-NIDD dataset. The performance is evaluated based on the metrics: accuracy, precision, recall, F1-score, training time, and average test time per sample.
Firstly, as shown in Table <ref>, results on the CICIDS2017 dataset indicate that RF, XGBoost, and LightGBM perform better than the other three base models, DT, ET, and CatBoost. Hence, these three ML models are selected as the base models of the proposed AutoML-OCSE framework.
After optimizing the hyperparameters of the three selected base models (as detailed in Table <ref>) and integrating their outputs using the proposed OCSE ensemble method, the final AutoML-OCSE model outperforms all the compared methods in the literature <cit.> <cit.> <cit.> <cit.> - <cit.>. The proposed method achieves the highest metrics on the CICIDS2017 dataset, with accuracy, precision, recall, and F1-score of 99.806%, 99.806%, 99.806%, and 99.804%, respectively.
Furthermore, the average test time per sample of the proposed AutoML-OCSE method is the fastest among the compared methods <cit.> <cit.> <cit.> <cit.> - <cit.>, highlighting its efficiency on network traffic datasets. Compared with state-of-the-art methods on the CICIDS2017 dataset, AutoML-OCSE demonstrates notable improvements in both accuracy and inference efficiency. These enhancements are attributed primarily to the AutoDP, AutoFE, HPO, and automated model ensemble procedures, which collectively enhance data quality, optimize machine learning models, and reduce feature and model complexity.
Similarly, as indicated in Table <ref>, the proposed AutoML-OCSE method outperforms all other compared methods <cit.> <cit.> - <cit.> <cit.> <cit.> on the 5G-NIDD dataset, achieving the highest accuracy, precision, recall, and F1 score, all at 99.956%. In terms of average test time per sample, the AutoML-OCSE method matches the fastest time set by the DT method, demonstrating an exceptional balance between performance and efficiency.
Regarding the intrusion detection performance of the proposed AutoML-OCSE on the CICIDS2017 and 5G-NIDD datasets, while its accuracy and F1-score are only slightly higher than those of the best-performing base ML models, this is primarily attributed to the simplicity of these datasets, where many base ML models can achieve over 99% accuracy and F1-score. In real-world scenarios, where the complexity and variability of network traffic datasets are usually higher than those of public benchmarks, the proposed AutoML-based IDS is expected to demonstrate more significant improvements. This is because every component of the proposed framework, including AutoDP, AutoFE, automated model selection, HPO, and automated model ensemble, contributes to the overall intrusion detection performance. Additionally, considering the error rate reduction, the proposed AutoML-OCSE method achieves significant decreases of approximately 15.65% and 24.13% in the error rates on the CICIDS2017 and 5G-NIDD datasets, respectively, calculated as (99.806% - 99.770%) / (100% - 99.770%) and (99.956% - 99.942%) / (100% - 99.942%).
Furthermore, without the proposed automated model selection method, researchers might resort to selecting ML models either randomly or based on personal experience, which may not always lead to choosing the best-performing base ML model. This limitation further highlights the significant potential for improvement offered by the proposed AutoML method, which can automatically select, optimize, and integrate the best-performing ML models. Consequently, the proposed AutoML-based IDS can achieve substantial improvements over traditional cybersecurity methods through its autonomous cybersecurity strategies.
On the other hand, although the AutoML-OCSE model takes longer to train than certain base models, such as DT, ET, and LightGBM, its training time is still shorter than that of some other models, like CatBoost. This efficiency is due in part to its AutoFE process, which reduces data dimensionality and model complexity. Moreover, the improvement in accuracy, precision, recall, and F1-score justifies the slightly increased training time. Furthermore, the AutoML-OCSE model achieves the fastest average inference time per sample by constructing a stacking ensemble model based on confidence values rather than the original high-dimensional dataset, making it highly suitable for real-time network data analytics and intrusion detection applications. In network applications, low inference time is often more crucial than low training time, as model training typically occurs on cloud servers with ample computational resources, while model predictions are performed on edge or local devices with limited computational capabilities in many scenarios.
Overall, the performance results demonstrate the effectiveness and efficiency of the proposed AutoML-OCSE method. It integrates the strengths of various base ML models, automates tedious ML tasks through AutoML, and achieves high detection performance via an optimized ensemble strategy. Therefore, the proposed AutoML-OCSE method and the AutoML framework can serve as powerful autonomous cybersecurity solutions for intrusion detection in 5G and potential 6G networks.
§ CONCLUSION
The advent of 5G and the impending transition to 6G networks have underscored the importance of ZTNs in achieving network automation. However, the increased connectivity and complexity of these networks have also escalated cybersecurity risks, making the development of effective and autonomous cybersecurity mechanisms a critical necessity. In this paper, we propose an AutoML-based Intrusion Detection System (IDS) to achieve autonomous cybersecurity for future networks. By introducing AutoDP, AutoFE, automated base model learning and selection, HPO, and automated model ensemble components, the proposed AutoML-based IDS can automatically generate an optimized ensemble model for accurate intrusion detection. This paper also proposes a novel TVAE-based automated data balancing method and a novel OCSE model to improve the AutoML procedures. In the experiments, the proposed AutoML-based IDS achieved high F1-scores of 99.804% and 99.956% on two public benchmark network security datasets: the CICIDS2017 and 5G-NIDD datasets. This illustrates the effectiveness of the proposed framework in achieving autonomous cybersecurity. In future work, the IDS framework will be extended to involve automated model updating using continual learning and drift-adaptive methods in dynamic networking environments.
Reimagining Data Visualization to Address Sustainability Goals

NARGES MAHYAR

September 9, 2024

§ INTRODUCTION
Visualization has demonstrated immense potential in effectively communicating climate change data to experts, scientists, and policymakers. Given the complexity and interconnected nature of climate change issues, engaging the broader community to collaborate with policymakers and government officials is crucial. However, public engagement with climate change remains low. The pressing question now is: how can we reimagine visualization to effectively communicate this intricate and complex societal challenge to the public?
To envision the future of information visualization for addressing sustainability challenges, we must first look into the past, its history and origins. Understanding the goals and objectives that have shaped the field over time provides valuable insights into its evolution and the foundational principles that guide its development.
This historical perspective helps to identify areas where there have been limitations or opportunities for growth. By reflecting on these insights, we can reimagine the future of information visualization, considering what needs to be changed or expanded upon to meet new challenges and opportunities in our increasingly data-driven world, especially in the face of sustainability challenges.
Information visualization is inherently interdisciplinary, drawing principles from fields such as statistics, visual communication, graphic design, and cognitive science. This interdisciplinary nature has led to a diverse array of definitions and perspectives. In the following sections, I present notable definitions arranged chronologically to illustrate the evolution of the field, as well as the variety of perspectives and approaches in visualization.
Edward Tufte, in <cit.>, defines information visualization as a means to visually represent data with the goal of revealing complex information clearly and efficiently. He emphasizes achieving graphical excellence in information visualization. According to Tufte, graphical excellence involves “the well-designed presentation of interesting data—a matter of substance, statistics, and design” <cit.>.
Card et al. define visualization as “the use of computer-supported, interactive visual representations of data to amplify cognition” <cit.>.
Stephen Few, in <cit.>, defines data visualization as the use of visual representations to explore, comprehend, and communicate data. According to Few, data visualization serves three fundamental goals suited to their respective tasks: exploration for discovery, sensemaking for understanding, and communication for informing decisions. He emphasizes the effectiveness of visual representation in conveying information and highlights vision as the dominant human sense for processing data.
David McCandless, in <cit.>, views visualization as an art form that transforms data into compelling narratives.
Tamara Munzner, in <cit.>, defines visualization as “computer-based visualization systems provide visual representations of datasets designed to help people carry out tasks more effectively.” She emphasizes that visualization is particularly valuable when it complements human capabilities rather than aiming to replace them with automated decision-making processes.
Andy Kirk, in <cit.>, defines data visualization as “the representation and presentation of data to facilitate understanding.” He explains three stages of understanding: perceiving, interpreting, and comprehending.
Colin Ware, in <cit.>, discusses how visual representations leverage human perceptual abilities to recognize patterns and trends. He states that “one of the greatest benefits of data visualization is the sheer quantity of information that can be rapidly interpreted if it is presented well.”
These diverse definitions reinforce the multifaceted nature of information visualization, encompassing principles from design, cognition, perception, and HCI, and highlighting its role in making data accessible and interpretable.
While these definitions cover a broad range of goals for information visualization, including discovering new insights, understanding and making sense of data, and communicating data and knowledge through graphical means, the majority of work in the field has focused predominantly on investigating and harnessing analytical aspects of visualization <cit.>. Consequently, less attention has been paid to other societal aspects that consider the lifecycle of a visualization, including its creation, usage by people, and the context in which it is applied.
Drawing on insights gained from over a decade of research in designing and developing visualization for public participation and community engagement using theories of public participation and citizen sourcing <cit.>, I discuss the dimensions in data visualization that can lead to democratizing access to information and fostering increased public engagement. I begin by reviewing recent critical perspectives on visualization that challenge conventional assumptions and delve into broader aspects of the field. Subsequently, I highlight dimensions that require further development or greater emphasis to effectively engage communities with complex data, such as climate change and sustainability information.
By exploring the intersection of data visualization with public participation and participatory frameworks, this work aims to highlight how visual representations of complex information can engage communities, empower them to influence policies, and amplify their voices in decision-making processes.
This work calls for developing new theories and approaches for designing visualizations that prioritize community engagement, and for conducting experiments that interrogate previous assumptions.
Drawing on a body of interdisciplinary research spanning public participation, citizen sourcing, and learning and communication theories, the design and development of these visualizations should be methodically implemented and evaluated with target audiences in mind. Future approaches should prioritize clear communication with emotional resonance, stimulate attention, and bridge gaps in understanding to promote inclusivity and facilitate informed dialogue on critical issues such as sustainability and climate change. By emphasizing these aspects, visualizations can not only present data effectively but also resonate emotionally with audiences, driving engagement and prompting actionable outcomes.
§ EMERGING THEORIES AND CRITICAL DEVELOPMENTS IN VISUALIZATION
More recently, researchers have begun to explore alternative perspectives on visualization, examining the intentions behind designing visualization and its broader implications. These studies offer new perspectives to understanding visualization processes and practices, including ethical dimensions <cit.>, feminist perspectives <cit.>, critical theory <cit.>, and rhetorical approaches <cit.>.
In a recent book <cit.>, Alberto Cairo writes a foreword called “the dawn of a philosophy of visualization,” in which he argues that just as artifacts such as the clock, the compass, or the map have transformed science, society, and culture, data visualization transforms how we perceive and interact with reality. Cairo invites visualization researchers to critically examine data visualization, exploring its origins and intentions, and to work towards developing a philosophy of data visualization. Data Visualization in Society is a notable example of such an effort, offering a deeper inquiry into the role of visualization in society and politics and its potential to bridge or deepen societal divides.
This book has inspired much research, including our work on the political aspects of text visualization in the context of civics <cit.>. However, if we examine the citations of this book, we find that it has been applied mostly in other disciplines such as journalism, communication, education, and politics. There is a need for more such critical work to be incorporated into the core body of visualization research.
In Data Feminism, D'Ignazio and Klein present a compelling framework that combines data science and intersectional feminist theory to address issues of power and justice <cit.>. The book is structured around seven key principles: examining power, challenging power, elevating emotion and embodiment, rethinking binaries and hierarchies, embracing pluralism, considering context, and making labor visible. These principles guide readers in understanding how power dynamics influence data collection, analysis, and interpretation, and how these processes can be redesigned to promote equity and justice. They advocate for a critical approach to data that recognizes existing social hierarchies and biases and works to dismantle them. For example, they highlight how the male/female binary in data classification can be expanded to challenge other hierarchical systems.
This book has swiftly influenced various disciplines, including visualization. For instance, we have utilized concepts from this book to lay the foundation for integrating para data into visualization <cit.>, to advocate for including novices in the field and critically examine how power dynamics shape the definition of valuable and valid knowledge or skills <cit.>, and to emphasize that data is neither neutral nor objective, which is extremely important in political contexts such as civic decision-making <cit.>.
While these works have paved the way for new and critical perspectives in data visualization, we need more research that critically investigates which aspects, goals, and methods are most effective in engaging a global audience with climate change data. There are a wide range of open questions to consider:
What adjustments are required in our visualization design, methods, processes, and evaluation when engagement becomes the primary goal of data visualization? How can we engage, inform, and inspire such a diverse audience not only to understand the urgency of climate data but also to take meaningful action? How can embedding emotion and embodiment enhance the communication of complex data to the general public? How can we make data more relatable, personal, and impactful? How do we empower audiences with varying levels of data, language, and visual literacy to understand the data and enhance their analytical reasoning? Should we explore new media? Should we incorporate multimodal interactions? Should we integrate art, engage with the various senses such as smell and sound in new ways? What is the role of sonification, music and theatrical performances in this context?
Viegas and Wattenberg <cit.> emphasized that “an ideal visualization should not only communicate clearly but also stimulate viewer engagement and attention,” which is particularly relevant in this context. However, recent research by Mark reveals a significant decline in average attention spans on screens, dropping from around two and a half minutes in 2004 to approximately 75 seconds <cit.>. According to Mark, people can now maintain focus on one screen for an average of only 47 seconds. Therefore, the pressing question remains: in an era of shrinking attention spans, how do we capture and sustain engagement?
Furthermore, how can we empower stakeholders across diverse sectors to grasp climate data and effectively address climate-related challenges?
How can visualization not only drive progress in climate mitigation but also lay the foundation for a more resilient and sustainable world?
§ FOSTERING COMMUNITY ENGAGEMENT: DIMENSIONS AND THEORETICAL DEVELOPMENT
Designing technology solutions for sociotechnical problems presents significant challenges due to the interconnected nature of these issues and the complexity arising from diverse stakeholder needs, values, objectives, knowledge levels, and technical skills <cit.>. In the literature of Computer-Supported Cooperative Work (CSCW), it is well-known that transitioning from supporting a single user to multiple users adds layers of complexity <cit.>. While visualization design has a strong foundation in supporting groups in CSCW research, an important distinction lies in designing for heterogeneous groups with diverse goals and objectives, rather than homogeneous groups of collaborators with shared interests and similar skills.
When designing for communities, particularly in the context of sociotechnical problems, the challenge lies in accommodating varying levels of knowledge, technical backgrounds, visual and data literacy, objectives, values, available time, attention, and more. Complicating this space further are inherent conflicts, such as the NIMBY (Not In My Backyard) problem in urban planning or misinformation and disbelief in climate change. Addressing these challenges necessitates nuanced approaches that promote inclusivity, mitigate conflicts, and enhance community engagement through effective communication and visual strategies.
Therefore, there is a crucial need for engaging, accessible, and relatable data visualization for people with various capabilities. Despite advancements in the field of visualization, current solutions often fall short, lacking inclusivity and accessibility for novices <cit.>.
In the following sections, I discuss and propose new dimensions that require further development, expanding upon the open questions that I raised in Section 2.
§.§ Transitioning from Analytics-Heavy to Engagement-Savvy
The prevailing emphasis on the analytical aspect of data visualization often overshadows other crucial dimensions such as communication, engagement, emotional resonance, and community empowerment <cit.>. While the analytical aspect is vital, it is equally important to foster connections with people, to draw their attention and to make data relatable and actionable for them. By focusing solely on analysis, we risk alienating the very people who could benefit most from the insights data provides.
Additionally, it is imperative to consider how factors such as educational background, political affiliation, and personal experience shape attitudes and trust in data visualization, particularly among underrepresented populations<cit.>.
Incorporating emotional engagement and community-building elements can transform data visualization into a powerful tool for education, advocacy, and collective action, especially on pressing issues such as climate change and public health. It is time to broaden our approach to data visualization, recognizing that its power lies not just in what it reveals, but in how it connects and mobilizes people.
Many open questions remain, including: How can we measure the impact of engagement with data visualizations on public understanding and behavior change, given its inherently subjective nature?
What are the best practices for designing visualizations that cater to diverse audiences with varying levels of data, visual and language literacy?
How can community feedback be effectively integrated into the design process of visualizations to enhance relevance and engagement? and
What role do emotions and personal relevance play in the effectiveness of data visualizations, and how can these elements be integrated into design practices?
While we introduced an interdisciplinary approach by borrowing Bloom's Taxonomy from the field of education to more meaningfully measure engagement <cit.>, future studies are needed to develop frameworks for identifying factors that lead to disengagement, as well as methods and processes for evaluating and refining engagement practices in data visualization.
§.§ Democratizing Data & Broadening Participation
Effective visualization strategies should aim to democratize access to information by making complex data accessible and understandable to a broad audience <cit.>. This involves not only designing comprehensible and navigable visualizations and visual interfaces but also incorporating storytelling and narrative techniques to convey data-driven insights effectively. By empowering communities with the tools and knowledge to interpret and act upon data, visualization can foster informed decision-making and active participation in addressing societal challenges <cit.>.
Advancing the field of data visualization requires a commitment to inclusivity and accessibility. It is notable that broadening participation includes not only novices and the general public, but also domain experts and decision makers. Our research shows that even these professionals can feel like visualization novices due to a lack of visual analytics expertise <cit.>. They often request simple visualization techniques and interactive modes that allow them to operate the system independently, without needing to delegate data analysis to analysts and data scientists.
Visualization needs to address the diverse needs and capabilities of various stakeholders by embracing new theoretical frameworks and methodologies. By doing so, we can create visualizations that engage and empower stakeholders, enhance public understanding, and drive positive social change.
Future work should explore questions such as: What are the most effective strategies for designing visualizations that are both comprehensible and engaging to diverse audiences? In "Of Course It's Political" <cit.>, we proposed five new dimensions to help designers and researchers consider their design choices more diligently. However, the paper was just the beginning, sparking a new philosophy. Further research is needed to design and evaluate these techniques for more concrete answers. Other questions include:
How can storytelling and narrative techniques be optimized to convey complex data in a way that resonates with different demographic groups?
What are the barriers to data literacy among different communities, and how can visualization design help overcome these barriers? and
How can participatory design processes be implemented to ensure that visualizations meet the needs of all stakeholders, especially marginalized groups?
§.§ Promoting Emotional Resonance & Inclusive Dialogue
Effective communication of environmental data requires consideration of the audience's emotional responses. Visualizations that evoke emotion or a sense of urgency can be more impactful than those that present data neutrally. For instance, interactive tools that allow users to see the future impacts of climate change on their own neighborhoods can elicit stronger emotional responses and drive action <cit.>. Effective ways to communicate climate change data to the public include leveraging public art and augmented reality. Public art, with its engaging impact and universal accessibility, helps raise awareness and foster public dialogue.
For instance, RisingEMOTIONS is a data physicalization and public art project that we installed in East Boston in 2020 <cit.>. Our goal was to engage communities affected by sea-level rise in planning adaptation strategies and increase their involvement in these crucial processes.
This installation visually represents local projected flood levels and public emotions towards the threat of sea-level rise. Placing it in front of the East Boston Public Library, a prominent community hub, ensured high visibility and accessibility to all community members. The project included an opening ceremony and remained on display for two weeks.
Its open, strategic placement played a vital role in engaging community members.
Additionally, the installation was designed to be approachable, allowing viewers to engage effortlessly regardless of prior knowledge about public art. The community's engagement with our project demonstrated the potential for public art to create interest and raise awareness of climate change.
Augmented reality (AR) enhances public understanding and engagement with climate change impacts by overlaying digital information onto the real world.
Leveraging AR technology, we conducted the first-of-its-kind Communal eXtended Reality (CXR) study aimed at providing immersive experiences to help people visualize the potential impacts of floods and encourage proactive action <cit.>. We engaged 74 community members by inviting them to ride a local shuttle bus, utilizing public transportation as a communal gathering space. Equipped with VR Head-Mounted Displays (HMDs), participants explored their neighborhood's past and future, physically traversing the island to comprehend the effects of climate change over time. Our study revealed several significant advantages of CXR. Firstly, its immersive and embodied nature made climate change more tangible and immediate to participants. Secondly, its situational elements brought the reality of climate impacts closer to their daily surroundings. Thirdly, the communal setting of the experience emphasized community resilience and responses. A striking observation from this work was how the community aspect transformed negative emotions, such as hopelessness, into actionable plans, ultimately leading to the development of a resiliency plan for the neighborhood. This is particularly important because research has shown that the shocking and negative emotions elicited by confronting the harsh realities of climate change often discourage community members from further engagement with the subject.
Questions to further explore include:
How can visualizations effectively balance accuracy in depicting climate change data with the emotional engagement needed to prompt action without overwhelming or disengaging audiences?
What are the most effective design strategies for creating public art installations that foster inclusive dialogue and community engagement around climate change impacts?
How can augmented reality (AR) applications be optimized to enhance public understanding of climate change impacts while ensuring accessibility and usability across diverse audiences? and
What are the long-term effects of engaging communities through emotionally resonant visualizations, such as public art and AR, on their attitudes and behaviors towards climate change?
§.§ Drawing from Other Fields to Build New Theories
Drawing on the fields of citizen sourcing, crowdsourcing, participatory design, and asset-based design provides valuable insights into enhancing data visualization for societal challenges. Citizen sourcing emphasizes engaging communities in data collection and decision-making processes, ensuring local knowledge and priorities are integrated into sustainability initiatives. Crowdsourcing extends this concept by leveraging collective intelligence to address complex problems through collaborative data analysis and visualization. Participatory design emphasizes the active involvement of stakeholders in the design process, ensuring that visualizations are relevant, accessible, and actionable for diverse audiences.
Platforms that allow community members to contribute data and visualize their findings foster greater public engagement and ownership of sustainability initiatives <cit.>. For instance, CommunityCrit's micro-activity workflow proved successful in providing a complementary avenue and a new opportunity for community members, including those who are not usually engaged (e.g., families with kids, working professionals), to provide meaningful feedback on urban design in a short amount of time <cit.>. When deployed in San Diego, CommunityCrit surpassed the current community engagement approach by gathering 352 comments within four weeks of deployment.
Another example is visualizing crowd-sourced data on local wildlife sightings to track biodiversity and inform conservation efforts. These grassroots visualizations not only democratize data, but also enhance the credibility and reach of sustainability campaigns <cit.>.
A further example is Chemicals in the Creek, a community-based situated data physicalization to enhance community engagement with open government data <cit.>.
Asset-based design approaches represent a crucial method for communicating complex climate change data to the public <cit.>. This approach shifts focus to community strengths and resources, utilizing local assets to promote sustainable development and resilience planning. Integrating principles, theories, and frameworks from the aforementioned fields into data visualization practices can result in more inclusive, effective, and empowering visualizations. Such approaches not only enhance communication and understanding but also foster a sense of ownership and collective responsibility toward sustainability challenges.
§ CONCLUSION
In conclusion, reimagining the field of information visualization to address sustainability goals requires a multifaceted approach that prioritizes community engagement. By integrating emotional resonance, inclusivity, and participatory frameworks, we can create visualizations that not only present data but also connect with broad audiences on a deeper level. This connection fosters informed dialogue, empowers communities to influence policies, and drives collective action. Additionally, understanding the epistemology of data visualization is crucial. This involves recognizing how information is constructed, processed, selected, validated, and communicated through visual means, which shapes our perception and interaction with data. By reflecting on the historical and philosophical dimensions of visualization, we can better address its limitations and opportunities for further expansions. Emphasizing the design and development of visualizations with target audiences in mind, and drawing insights from interdisciplinary research, will ensure that visualization is effective in bridging gaps, promoting understanding, and facilitating meaningful engagement. Ultimately, this modern, integrated perspective on data visualization can serve as a powerful catalyst for social empowerment and positive change, paving the way for a more sustainable and resilient future.
|
http://arxiv.org/abs/2409.02553v1 | 20240904091843 | ResiLogic: Leveraging Composability and Diversity to Design Fault and Intrusion Resilient Chips | [
"Ahmad T. Sheikh",
"Ali Shoker",
"Suhaib A. Fahmy",
"Paulo Esteves-Verissimo"
] | cs.CR | [
"cs.CR",
"cs.AR"
] |
ResiLogic: Leveraging Composability and Diversity to Design Fault and Intrusion Resilient Chips
Ahmad T. Sheikh, Ali Shoker, Suhaib A. Fahmy and Paulo Esteves-Verissimo
CEMSE Division, King Abdullah University of Science and Technology (KAUST)
Thuwal 23955-6900, Kingdom of Saudi Arabia
{ahmad.sheikh, ali.shoker, suhaib.fahmy, paulo.verissimo}@kaust.edu.sa
September 9, 2024
=============================================================================================================================================================================================================================================================================
§ ABSTRACT
A long-standing challenge is the design of chips resilient to faults and glitches. Both fine-grained gate diversity and coarse-grained modular redundancy have been used in the past. However, these approaches have not been well-studied under other threat models where some stakeholders in the supply chain are untrusted. Increasing digital sovereignty tensions raise concerns regarding the use of foreign off-the-shelf tools and IPs, or off-sourcing fabrication, driving research into the design of resilient chips under this threat model. This paper addresses a threat model considering three attacks pertinent to resilience: distribution, zonal, and compound attacks. To mitigate these attacks, we introduce the ResiLogic framework, which exploits Diversity by Composability: constructing diverse circuits composed of smaller diverse ones by design. This gives the designer the capability to create diverse circuits at design time without requiring extra redundancy in space or cost. Using this approach at different levels of granularity is shown to improve the resilience of circuit designs against the three considered attacks by a factor of five.
Additionally, we show how E-Graphs can be utilized to generate diverse circuits under given rewrite rules.
Chip Resilience, Hardware Diversity, TMR, Fault and Intrusion Tolerance (FIT), E-Graphs
§ INTRODUCTION
Integrated Circuit (IC) development is a pressing challenge in the modern digital ecosystem. The huge supply chain and toolchain across the development pipeline involve a large number of known and unknown stakeholders and dependencies. This has compounded development complexity, resulting in a higher likelihood of unintentional faults, glitches, bugs, etc. <cit.>, as well as intentional intrusions with malicious intent <cit.>. While resilience to unintentional benign faults has been well studied and mitigated in the literature, mainly using hardening and redundancy <cit.>, the case of resilience against malicious actors (e.g., vendors and foundries) has often been ignored.
With nation states aligning their priorities to develop local semiconductor manufacturing, technological excellence, and bridge recent global chip shortages <cit.>, the quest for new resilience methods to withstand untrusted stakeholders is regaining momentum.
Our work is the first that studies chip design resiliency assuming a threat model in which vendors and silicon fabs may be untrusted.
Typical chip design can be more productive when the designer relies on available off-the-shelf tools and libraries from different vendors and then sends the design to a silicon fab for manufacturing. The premise is that the vendors and fab are trusted, both ethically and technically. With the recent digital sovereignty tensions, the former is not guaranteed and hardly measurable, while the latter has been shown to be unrealistic with such a complex development process, incurring too many dependencies on third party tools, libraries, and sometime open source resources <cit.>. Our analysis shows that a design is highly prone to three potential attacks of relevance to resiliency (as outlined in Section <ref>): Distribution attacks (supply chain trojans and backdoors), Zonal attacks (using electromagnetic or Laser beam interference), and Compound attacks (simultaneous Distribution and Zonal attacks).
Redundancy in space is a fundamental resilience approach that is used to mask faults and intrusions <cit.>. Two main directions for redundancy at different granularity are studied in state-of-the-art chip design. The first is fine-grained, where redundancy is applied at the logic gate level <cit.>. This approach aims at diversifying the logic circuit design through adding various basic logic gates, e.g., AND and Inverter. While this approach has been shown to be effective, e.g., increasing design resilience up to 90% under (benign) faults <cit.>, it is designed to be incorporated in the vendor's IP libraries and tools, where the designer has little control. The designer must either create libraries (costly and unproductive) or live at the mercy of the vendor and fab (who may collude). The second approach is coarse-grained, following the Triple Modular Redundancy <cit.> (TMR) style replication that requires three identical copies of a circuit followed by a majority voter to mask the faulty one. However, this works well as long as a minority of circuits are non-faulty and assuming the independence of failures, i.e., the absence of Common Mode Failures (CMF, henceforth); this is often considered a hard assumption <cit.>.
To demonstrate this, we conducted an experiment that shows, in Figure <ref>, that the average failure probability of TMR (red curve) is around 70%, even when used together with the aforementioned fine-grained design approach <cit.> (see more details in Section <ref>).
In this paper, we bridge this gap by introducing ResiLogic, a framework for building resilient diversified integrated circuits.
To do this, ResiLogic combines coarse-grained (TMR-style) replication with fine-grained gate-level redundancy that uses the concept of diversity by composability. This concept leverages the properties of higher-level composable logic circuits, i.e., circuits whose modular structure is composed of smaller linked circuits.
Since various circuits are composable by nature, e.g., N-bit Adder, N-bit Multiplier, Vector Unit, Systolic Array, etc. <cit.>, this makes the approach feasible for many modern applications (see Section <ref> for details).
We also propose the generation of diverse circuits using E-Graphs <cit.>, leveraging the powerful equality-saturation technique to produce rewrite-driven, functionally equivalent but structurally different artifacts.
The rest of the paper is organized as follows. Section <ref> introduces the threat and fault models. Section <ref> introduces the notion of Diversity by Composability that is a basis for the framework in Section <ref>. Section <ref> discusses E-Graph based diversity generation method, evaluation results are presented in Section <ref>, followed by related works and concluding remarks in Sections <ref> and <ref>, respectively.
§ THREAT AND FAULT MODELS
§.§ Actors and Adversaries
We consider a typical design process of three main actors in the ecosystem: designer, vendor, and fab. The designer may use several products from several vendors (e.g., many tools and implementations) to develop a resilient chip design, which is sent to the fab for fabrication. The roles and trust models of the actors are as follows:
* Designer: the designer of the chip and the main user of . They are assumed to be a trusted entity, and a target/victim of acting adversaries.
* Vendor: the provider of pre-designed prerequisites like design libraries, implementations, or toolchains. Not all vendors are trusted entities.
* Fab: the post-design fabrication foundry that uses silicon technology to implement the design as an integrated circuit. Not all fabs are trusted entities.
§.§ Intrusions and Faults
During the design and manufacture process, the design is prone to unintentional faults and intentional intrusions, leading to some functional circuit failure. Faults can be induced at any stage, even by the designer, while intrusions are assumed to be performed through attacking the circuit by the vendor or the fab, who can also collude. Intrusions and faults, and their corresponding failures, manifest as stuck-at faults <cit.> in the binary domain. In this model, the injected fault/intrusion on a logic gate results in a stuck-at logic 1 or logic 0. Figure <ref> depicts a schematic of intrusion/fault injection between (a) two gates G1 and G2, leading to (b) stuck-at-1 or (c) stuck-at-0 faults. This can be replicated between any two gates in the design, as illustrated by the sketch below.
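To make this fault model concrete, the following minimal Python sketch (our own illustration, not the paper's fault-injection engine) forces a stuck-at value on the wire between two gates and compares the faulty output against the golden one:

```python
# Hypothetical sketch: stuck-at fault on the wire between gates G1 and G2.
from itertools import product

def g1(a, b):          # G1: AND gate
    return a & b

def g2(c, d):          # G2: OR gate
    return c | d

def circuit(a, b, d, fault=None):
    """Evaluate G2(G1(a, b), d); `fault` forces the G1->G2 wire to 0 or 1."""
    wire = g1(a, b)
    if fault is not None:          # stuck-at-0 or stuck-at-1 on the wire
        wire = fault
    return g2(wire, d)

# Compare golden vs. faulty behaviour over all input vectors.
for a, b, d in product([0, 1], repeat=3):
    golden = circuit(a, b, d)
    for sa in (0, 1):
        if circuit(a, b, d, fault=sa) != golden:
            print(f"inputs={a}{b}{d}: stuck-at-{sa} flips the output")
```

Whether a given stuck-at value propagates to the output depends on the other inputs, which is exactly the logical masking effect exploited later by gate-level diversity.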
§.§ Threats and Attacks
We consider the threat model where the goal of the adversaries, i.e., vendor and fab, is to break the integrity of the design or circuit. The aim is to subvert the output either permanently or occasionally.
We define three types of potential integrity attacks, i.e., Distribution, Zonal, and Compound attacks, defined as follows:
§.§.§ Distribution Attack
It is typically faster and cheaper for the designer to use off-the-shelf tools, pre-designed modules, compilers, synthesizers, etc, to create a new design. Any malicious vendor in this chain can induce intrusions, trojans, or backdoors <cit.>.
The Distribution attack (or fault) is a supply-chain attack where a fault or intrusion can be induced at any step prior to or after design <cit.>. This attack targets any circuit module, channel, or zone.
§.§.§ Zonal Attack
The Zonal attack is a directed attack on the design layout. As shown in Figure <ref>, the attack can target a chip in many ways, among them: (1) using an electric bus to overheat an edge <cit.>, (2) using an Electromagnetic field to subvert another edge <cit.>, or (3) using an optical laser beam to target a specific zone after fabrication <cit.>.
§.§.§ Compound Attack
The Compound attack is a more complex attack where Distribution and Zonal attacks are performed simultaneously. In this case, these attacks could be coordinated, e.g., collusion between vendor and fab, or uncoordinated. Although the former is more directed, both scenarios are devastating to the circuit, as we show in the simulated experiments.
§ DIVERSITY BY COMPOSABILITY
In this section, we provide the motivations and design decisions behind .
§.§ The Quest for Resilience
Designing for resilience often requires redundancy in space or time <cit.>.
Redundancy in time entails repeating computation using the same hardware.
This is effective against transient failures, but not permanent faults and longer term intrusions.
Hence, redundancy in space through replicated hardware, with convergence to a final output is often used.
This final output can incorporate the replicated outputs or choose between them, depending on application.
Since most applications require determinism, reconciling outputs by majority voting is the most common approach in the literature <cit.>.
This is usually on top of Double- or Triple-Modular-Redundancy (TMR).
Clearly, majority voting only works if the replicated logic units do not fail together due to Common Mode Failures (CMF) <cit.>.
Consequently, using diverse replicas—that are functionally equivalent but structurally different—is one way to enhance this approach.
Without diversity, modular redundancy exhibits limited resilience (e.g., only against transient failures), at a high hardware cost.
We elaborate further on this in Section <ref>.
§.§ Diversity Levels
For integrated circuits, we identify three possible levels of diversity: Gate-level, Module-level, and Artifact-level diversity.
We are interested in providing the designer with a mechanism to build resilient circuits, following a bottom-up approach, where diversity at the gate level is utilized to create larger diverse artifacts by composition.
§.§.§ Gate-level diversity
The lowest level of diversity we consider, targets designing diverse primitive logic modules composed of basic logic gates, e.g., AND, OR.
Diversity at this level can be achieved using different combinations of logic gates while maintaining the same functionality. This can be done manually, but preferably using an automated algorithm, as we do in this work, inspired by <cit.>. As an example, Fig. <ref> shows two diverse implementations of the same boolean function. Both circuits produce the same true output for any input vector; however, if a fault is injected at the same point, the two circuits exhibit different behaviors at the final output.
§.§.§ Module-level diversity
This diversity level aims at building a module artifact as a composition of diverse modules with some dependency between them.
The designer makes use of off-the-shelf modules, diversified at the gate level, to create higher-order Composable Module Artifacts (CMAs) through additional diversity at this layer.
We formally define a CMA as follows:
Consider an arithmetic logical artifact L with a deterministic specification S. An implementation A of L is an ordered set of modules M_i defined as:
A = (M_1, M_2, ..., M_N) ∈Π = {M_i × M_j × ... × M_N}
where Π is a set of logical circuit modules. A is a Composable Module Artifact (CMA) iff it respects the specification S given a defined connectivity matrix Γ.
Γ_N×N =
[ V_0,0     V_0,1     ⋯   V_0,N-1
  V_1,0     V_1,1     ⋯   V_1,N-1
  ⋮         ⋮         ⋱   ⋮
  V_N-1,0   V_N-1,1   ⋯   V_N-1,N-1 ]

where V_i,j = 1 if module M_i is connected to M_j, and V_i,j = 0 otherwise.
An implementation A uses N modules to conform to the specification S. The connectivity matrix (Γ) of size N × N is the adjacency matrix of a directed acyclic graph (DAG) that defines how modules are connected with each other.
For example, to construct an N-bit Ripple-Carry Adder (RCA), the connectivity matrix would simply be an identity matrix (I), as shown in Eqn. <ref>.
Similarly, an N-bit multiplier would require N^2 AND gates and 2N-1 adders (either half adders or full adders).
Such CMAs can be applied to a wide range of designs such as N-bit Adders/Subtracters, N-bit Multipliers, and N-bit Dividers, to mention a few.
Even modern accelerator architectures are modular and can be implemented as CMAs, e.g. SHA-256 hashing, Vector units, or Machine Learning Systolic Array accelerators <cit.>.
To demonstrate the concept we discuss two widely used designs: N-bit Adder and 4-bit Multiplier.
N-bit Ripple-Carry Adder: Figure <ref> shows an N-bit Ripple-Carry Adder (RCA) composed of smaller modules which can be diverse or homogeneous structures.
The adder is composed of 4-bit full adder modules.
The basic resolution of a module can be decided by the designer.
The corresponding connectivity matrix in Equation <ref> highlights how the modules are connected.
Adder (Γ_N×N):
            M_1   M_2   ⋯   M_N
  M_0     [  1     0    ⋯    0
  M_1        0     1    ⋯    0
  ⋮          ⋮     ⋮    ⋱    ⋮
  M_N-1      0     0    ⋯    1  ]
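As an illustration of this composition, the following hedged Python sketch (a simplified behavioural model, not the actual gate-level implementation) composes a 16-bit adder CMA from four 4-bit modules wired along the identity connectivity matrix; structurally diverse module variants would simply be swapped into the `modules` tuple:

```python
# Illustrative sketch of a CMA: a 16-bit ripple-carry adder composed of four
# 4-bit modules. The identity Gamma encodes the carry chain M_i -> M_{i+1}.
def add4(a, b, cin):
    """A 4-bit full-adder module; diverse variants share this interface."""
    s = a + b + cin
    return s & 0xF, s >> 4            # (4-bit sum, carry-out)

def cma_adder(a, b, modules):
    """Compose N 4-bit modules into a 4N-bit adder (Gamma = identity chain)."""
    carry, result = 0, 0
    for i, module in enumerate(modules):
        nibble_a = (a >> (4 * i)) & 0xF
        nibble_b = (b >> (4 * i)) & 0xF
        s, carry = module(nibble_a, nibble_b, carry)
        result |= s << (4 * i)
    return result + (carry << 16)

modules = [add4] * 4                  # homogeneous CMA; swap in diverse variants
assert cma_adder(40000, 30000, modules) == 70000
```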
4-bit Array Combinational Multiplier: The 4-bit combinational multiplier in Fig. <ref> is composed of various Half-Adders (HA) and Full-Adders (FAs) which sum partial products.
The corresponding connectivity matrix is in Equation <ref>, Mul (Γ_N × N):
            HA_1  HA_2  HA_3  FA_1  FA_2  FA_3  FA_4  FA_5  FA_6  FA_7
  HA_0    [  1     0     0     0     0     0     0     0     0     0
  HA_1       0     0     1     0     0     0     0     0     0     0
  HA_2       0     0     0     0     0     0     0     1     1     0
  HA_3       0     0     0     0     0     0     0     1     0     0
  FA_0       1     0     0     0     0     1     0     0     0     0
  FA_1       0     1     0     0     0     1     0     0     0     0
  FA_2       0     1     0     0     0     0     1     0     0     0
  FA_3       0     0     1     0     0     0     0     1     0     0
  FA_4       0     0     0     0     0     0     0     0     1     1
  FA_5       0     0     0     0     0     0     0     0     1     0
  FA_6       0     0     0     0     0     0     0     0     0     1  ]
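The following hedged Python sketch (again a behavioural model under our own simplifying assumptions, not the paper's netlists) composes a 4-bit array multiplier from HA/FA modules: AND gates form the partial products, and FA modules ripple-sum the shifted rows as encoded by the connectivity matrix above.

```python
# Illustrative sketch: 4x4 array multiplier CMA built from HA/FA modules.
def ha(a, b):
    return a ^ b, a & b                      # (sum, carry)

def fa(a, b, cin):
    s1, c1 = ha(a, b)
    s2, c2 = ha(s1, cin)
    return s2, c1 | c2

def add_n(a_bits, b_bits):
    """Ripple-carry add two equal-length bit lists using FA modules."""
    out, c = [], 0
    for a, b in zip(a_bits, b_bits):
        s, c = fa(a, b, c)
        out.append(s)
    return out

def mul4(x, y):
    """AND partial products, summed row by row through the FA chain."""
    acc = [0] * 8
    for shift in range(4):
        row = [0] * shift + [((x >> i) & 1) & ((y >> shift) & 1) for i in range(4)]
        row += [0] * (8 - len(row))
        acc = add_n(acc, row)
    return sum(b << i for i, b in enumerate(acc))

assert mul4(9, 13) == 117
```

Any HA/FA position in this grid can be replaced by a structurally diverse variant without changing the multiplier's function.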
Systolic Arrays: Systolic arrays<cit.> are a class of parallel computing architectures designed to efficiently execute matrix computations, which are fundamental to many scientific and engineering applications <cit.>.
Based on Systolic architecture, Google proposed Tensor Processing Units (TPU) <cit.>, a custom ASIC to accelerate DNN applications.
A typical Systolic array design consists of a 2-Dimensional grid of Processing Elements (PE), a controller and the on-chip memory/buffer.
Leveraging the design of diverse adders and multipliers, diverse PEs can be designed to enhance the resilience of systolic arrays.
§.§.§ Artifact-level diversity
This third level of diversity makes uses of multiple replicas of diverse CMAs running simultaneously, the outputs of which are computed using a majority voter. This is similar to TMR with an important difference: common mode failures are reduced significantly, since diverse CMAs of diverse modules can be used by composition. Visual examples are shown in Figure <ref>. Our evaluation in Section <ref> shows that this level is key to thwart Distribution and Compound attacks in particular.
Furthermore, due to the coarse granularity, the placement of these replicas can be directed to leverage some resilience to Zonal attacks. Combining placement with diversity at different levels boosts the resilience of the system against all considered attacks: Distribution, Zonal, and Compound, as we convey in the evaluations in Section <ref>.
§ E-GRAPH BASED DIVERSITY GENERATION
An e-graph or equivalence graph is a data structure that compactly represents equivalence relations in the form of Abstract Syntax Trees (AST) <cit.>. Apart from theorem proving <cit.>, it has been used in the optimization of large datapath and arithmetic circuits <cit.> and technology aware synthesis <cit.>.
Transformations in an e-graph consist of two main steps: 1) rewriting and 2) extraction. In this work we rely on the Yosys/ABC logic synthesis <cit.> flow to synthesize a benchmark circuit into an equivalent format that can be fed to the e-graph tool for design space exploration.
Fig. <ref> highlights the application of the e-graph rewrite operation to a simple AST representing the operation (x × 2) >> 1 in Fig. <ref>. Since (x × 2) is equivalent to (x << 1), applying this rule leads to the equivalent graph shown in Fig. <ref>, with equivalent operations grouped in the same e-class (red dotted box). Rewrite rules are successively applied until the graph is saturated and no more rules can be applied.
Extraction refers to selecting an e-node (black dotted box) from each e-class based on a certain cost function.
In this work we extract a number of equivalent relations to create a pool of functionally equivalent but structurally diverse implementations. For example, from Fig. <ref>, two relations can be extracted: 1) (x × 2) >> 1 and 2) (x << 1) >> 1.
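The following toy Python sketch (our own illustration, not the actual e-graph library, which implements full equality saturation over e-classes) shows how applying such rewrite rules to an AST enumerates functionally equivalent variants like the two relations above:

```python
# Toy rewrite-driven diversity: enumerate equivalent expression trees.
# Patterns here are literal (no variable matching), which suffices for a demo.
RULES = [
    (("mul", "x", 2), ("shl", "x", 1)),      # x * 2  ->  x << 1
    (("shl", "x", 1), ("mul", "x", 2)),      # and back
]

def rewrite_all(expr):
    """Yield expr plus every variant reachable by one rule application."""
    yield expr
    for lhs, rhs in RULES:
        if expr == lhs:
            yield rhs
    if isinstance(expr, tuple):
        op, *args = expr
        for i, a in enumerate(args):
            for v in rewrite_all(a):
                if v != a:
                    yield (op, *args[:i], v, *args[i + 1:])

root = ("shr", ("mul", "x", 2), 1)           # (x * 2) >> 1
print(sorted(set(rewrite_all(root))))
# [('shr', ('mul', 'x', 2), 1), ('shr', ('shl', 'x', 1), 1)]
```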
§.§ E-Graph Flow
Fig. <ref> shows the e-graph flow used to generate diverse implementations of an input circuit represented in Verilog format. The Verilog file is synthesized and converted into equation format, which represents the circuit using AND, OR, and NOT operators. Next, the rewrite operation is applied until saturation, and finally a pool of diverse implementations is extracted. Further, the HOPE <cit.> parallel fault simulation tool is utilized to quantify the fault coverage of the pooled netlists, and further pruning can be applied to select the candidates that demonstrate low fault coverage, i.e., high resiliency.
Table <ref> highlights the rewrite rules applied in the implementation. The rules are expanded rather than optimized/minimized to achieve the objective of enhanced resilience. This approach can be extended to generate N-version programs based on the rules.
§ THE RESILOGIC FRAMEWORK
This section presents the ResiLogic framework that is used by the designer to implement resilient designs based on the diversity concepts introduced in Section <ref>. The framework defines a mechanism to construct circuits while: (1) maintaining deterministic behaviour across the entire design despite replication and diversity, and (2) ensuring resiliency to the attacks outlined in Section <ref>.
In ResiLogic, the ultimate goal is to produce a resilient circuit corresponding to a CMA, e.g., a 16-bit Adder, following Definition <ref>. The process follows three main phases summarized in Algo. <ref>, with the details discussed next.
§.§ Building Diverse Modules via Gate-diversity
This phase aims at identifying the different candidate module implementations m_i for use in a CMA implementation A = (m_1, m_2, ..., m_N); e.g., m_i := 4-bit adder in our example. The aim is to generate diverse module implementations m_x^t, m_x^z ∈ M_x such that out(m_x^t) = out(m_x^z), where out is the output function, but the structures differ in at least one logic gate, i.e., m_x^t ≠ m_x^z.
In the case of a homogeneous CMA, a single module type is used across the entire tuple.
§.§ Intra-diversity: building a CMA out of Diverse Modules
This phase leverages the generated diverse modules M_i to build a resilient diverse CMA A_i=(M_i, M_j, ...,M_N). The designer starts with the standard implementation of the CMA using any generated module implementations in the previous phase.
A conservative option is to use the most resilient CMA implementation A = (m_1, m_2, ..., m_N).
In general, this depends on the policy considering the resilience, cost, space, and time tradeoffs.
Diverse CMAs can be generated either by changing a single module implementation at index i or multiple modules in CMA A_i.
§.§ Inter-diversity: building a replicated artifact
This phase uses TMR-style replication of the diverse CMAs.
The designer starts by selecting one CMA implementation and then replicates it into three replicas whose outputs are connected to a majority voter.
(One can generalize the replication to different numbers.)
The designer then iterates over the three tuples to replace the CMA implementations with diverse homogeneous or heterogeneous ones.
The latter is preferable as this utilizes Inter-diversity between CMAs to defend against Distribution and compound attacks by reducing common mode failures.
To defend against Zonal attacks, we leverage the coarse-grained nature of this CMA replication to define a resilience-aware placement in the chip design.
This is possible since we can enforce it at this level to avoid fab compiler optimisations that may remove the designed redundancy or optimize routing or layout space.
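The inter-diversity phase can be pictured with the following hedged sketch, which builds on the `add4`/`cma_adder` model introduced earlier; the three variants are placeholders for structurally diverse module netlists, not the paper's actual implementations.

```python
# Sketch of TMR-style replication over diverse CMAs with a majority voter.
from collections import Counter

variant_a = variant_b = variant_c = add4   # placeholders for diverse modules

def majority(outputs):
    """Return the majority value, or None if no majority exists (detected)."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count >= 2 else None

def tmr_adder(a, b):
    replicas = [
        lambda x, y: cma_adder(x, y, [variant_a] * 4),
        lambda x, y: cma_adder(x, y, [variant_b] * 4),
        lambda x, y: cma_adder(x, y, [variant_c] * 4),
    ]
    return majority([r(a, b) for r in replicas])

assert tmr_adder(1234, 5678) == 6912
```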
§.§ Configurations
The above multi-level diversity is expressed through configurations that combine the Intra- and Inter-diversity.
We define these configurations as follows.
We denote the Intra-diversity of a CMA by α.
This refers to the number of different modules used in the CMA composition.
Since different CMAs in a replicated artifact can have different Intra-diversity α, we model these as a tuple 𝐈 whose dimension |𝐈| is the replication factor. In this case, each index I_i refers to the Intra-diversity value for the implementation A_i of replica i.
The β_𝐈 factor for a given tuple 𝐈, describes the Inter-diversity among the replica CMAs i.e., how diverse they are with respect to each other.
β_𝐈 is computed by taking the symmetric difference among replicated CMAs, as shown in Eqn. <ref>.
To compute β_𝐈 for |𝐈|>2, Eqn. <ref> can be applied iteratively.
β_𝐈 = (A_i ∖ A_{i+1}) ∪ (A_{i+1} ∖ A_i)
Fig. <ref> shows a few example configurations for different values of 𝐈 and β_𝐈. The terms CMA and replica will be used interchangeably. In Fig. <ref>, a single replica is created with a single module M_1, i.e., 𝐈 = (1). Fig. <ref> shows a single replica consisting of a set of four different modules {M1,M2,M3,M4}, i.e., 𝐈=(4) and |𝐈|=1. Fig. <ref> shows a case where |𝐈|=2 with 𝐈=(4,4), i.e., two replicas are created with four diverse modules each; however, they are the same as each other, hence β_𝐈=0. Fig. <ref> demonstrates a case where four distinct modules are used to create two replicas (𝐈=(4,4)) with two common modules M1 and M4; the inter-diversity (β_𝐈), computed using Eqn. <ref>, is 4.
Lastly, Fig. <ref> consists of two replicas created with configuration 𝐈=(4,4), β_𝐈=8, which implies each replica is composed of four different modules and there are no common modules between them; therefore, they are said to be totally diverse.
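For clarity, the α and β_𝐈 metrics of such configurations can be computed directly as set operations, as in this short sketch (module names are illustrative):

```python
# Intra-diversity alpha and inter-diversity beta (symmetric difference).
def alpha(replica):
    return len(set(replica))

def beta(r1, r2):
    return len(set(r1) ^ set(r2))            # symmetric difference, Eqn. above

R1 = ["M1", "M2", "M3", "M4"]
R2 = ["M5", "M6", "M7", "M1"]                # only M1 in common
print(alpha(R1), alpha(R2), beta(R1, R2))    # 4 4 6
```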
§ CMF COMPUTE ENGINE
We implemented a CMF Compute Engine to use as our evaluation testbed.
We plan to make this tool (implemented in Perl and Verilog) publicly available for the research community.
To the best of our knowledge, two previous works focus on how to quantify diversity in order to build fault-tolerant redundant systems. Mitra et al. <cit.> proposed fault injection at the gate level, computing whether both designs produce the same type of erroneous output, resulting in a CMF. This method is based on simulation and provides a reliability measure of the design in the case of random faults.
However, to capture true CMFs, identical faults must be injected in redundant designs followed by fault simulation.
The work in <cit.> accelerates the diversity metric computation by analyzing circuit paths of gate-level netlists. Since the aim of the proposed tool is to measure the rate of CMF among redundant designs, a variant of the approach proposed in <cit.> is employed to cater for the need to inject identical faults.
§.§ Architecture
Fig. <ref> (Appendix <ref>) shows the block diagram of the proposed common mode failure computing engine. It fetches a set of diverse replicas from the library generated by the method discussed in Section <ref>.
No fault is injected in RR, which serves as a reference circuit producing the golden output.
The number and sites of faults to be injected depends on the selected fault campaign as discussed in Section <ref>.
Eqn. <ref> highlights the fact that for the CMF output to be true, all outputs from the replicas must be equal to each other and different from the reference output RO:
CMF ≡ (O_i = O_j ∀ i ≠ j) ∧ (O_1 ≠ RO)
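In code, this condition reads as follows (an illustrative sketch of the check, not the engine's actual implementation):

```python
# A common-mode failure: all replica outputs agree with each other
# yet differ from the golden reference output RO.
def is_cmf(outputs, reference):
    return all(o == outputs[0] for o in outputs) and outputs[0] != reference

assert is_cmf([3, 3, 3], reference=5) is True
assert is_cmf([3, 4, 3], reference=5) is False   # replicas disagree: detectable
```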
§.§ Simulating Attacks
To simulate Distribution faults, faults are injected at the same location in the common modules among replicas. Evaluation is done by injecting faults across all possible locations in common modules. For example, consider Fig. <ref>, the common modules are M1 and M4 among the two replicas: faults will be injected at the same locations of M1 and M4 in these replicas. However, the total fault injection combinations will be the cross product of the number of gates in the respective common modules i.e., |M1|×|M4|.
In the Zonal fault model, faults are injected at the common zones of the redundant replicas. The zone granularity is a module. In Fig. <ref>, replicas are divided into four zones as each one of them has four modules. This approach emulates the faults that could be manifested due to external attacks on the same spatial zones of the replicas.
The total number of zonal fault-injection combinations is: ∑_{z ∈ zones} ∏_{i ∈ replicas} |M_i,z|
Where |M_i,z| denotes the total number of gates in a module of replica i in zone z. The zonal fault injection campaign is more compute intensive in comparison to the distribution faults as it takes into account combinations of all possible locations in the replicas.
The distribution and zonal attacks are combined in the Compound attack model by injecting faults in common modules among replicas and then iterating through zones for zonal fault injection.
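A sketch of how a distribution-fault campaign enumerates identical injection sites in the common modules follows; the gate counts here are assumed purely for illustration:

```python
# Distribution campaign over common modules M1 and M4: identical fault sites
# in both replicas, giving |M1| x |M4| site pairs (times the stuck-at value).
from itertools import product

M1_gates, M4_gates = range(12), range(9)     # assumed gate counts
campaign = list(product(M1_gates, M4_gates, (0, 1)))
print(len(campaign))                          # 12 * 9 * 2 fault scenarios
```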
§ RESULTS & DISCUSSIONS
§.§ Experimental Settings
Our experimental settings make use of the CMF Compute Engine evaluation test-bed. The three attacks Distribution, Zonal, and Compound are simulated as described in the previous section. Intrusion/Fault simulations are randomly generated for 100K test vectors.
We evaluate (1) various ResiLogic configurations (variable α and β) compared to (2) TMR as a state-of-the-art replication method and (3) no replication as a baseline (No-TMR). The focus here is on creating diverse N-bit adders.
We do not consider other CMAs due to space limitations and given that the N-bit adder is a good representative example.
For simulations and evaluations, a 16-bit Ripple-Carry Adder (RCA) artifact is considered. However, the proposed approach can be extended to any value of N.
The diverse adder modules are generated as per <cit.>, and a subset of 10 CMA adder implementations with different diversity ratios is handpicked (but used fairly across all experiments). Note that 10 versions were enough to generate hundreds of permutations and millions of intrusion/fault injections, visiting the entire graph of each. The selected modules, functionally equivalent but structurally different, along with their gate count and failure probability (P_f), are shown in Table <ref>. They are also classified into three resilience levels, which depend upon the respective failure probabilities of the modules.
The selected modules have varying fault tolerance, area and logic level.
P_f is computed by using the HOPE fault simulator <cit.>.
From Table <ref> it is also evident that, as the number of gates increases, P_f decreases.
This is due to improvement in logical masking of the modules.
§.§ Resilience under Distribution Attack/Fault
We first discuss the general patterns of failure probability for ResiLogic against TMR (without composition diversity) as the state-of-the-art solution and No-TMR (no replication) as baselines. We observe in Figure <ref> that all configurations of ResiLogic are significantly more resilient than the baselines. In particular, as more inter-diversity is induced across replicas, ResiLogic's failure probability ranges between 0.6 and 0.1, while both TMR and No-TMR (whose lines are separated for visibility) exhibit around 0.7. This shows that TMR is not effective without diversity, as it fails similarly to No-TMR due to common mode failures.
We then discuss Intra- and Inter-diversity separately, showing that both are important and complementary. We omit No-TMR for clarity, knowing that its resilience is similar to TMR across all Distribution attack results. Fig. <ref> conveys the resilience for fixed β, while the intra-diversity α varies between 2 and 4. The results consistently show that diversity by composition improves the resilience of ResiLogic over TMR by roughly 10% with each diverse module added as α increases. The improvement is significant for higher β. In the best case (f), where β=6, i.e., different modules are used in different replicas, the failure probability is lower than 0.1. These results are consistent with the case where α is fixed, whereas the inter-diversity β varies in Figure <ref>. With few diverse modules per replica, (a) α=2, the resilience improves as long as more diverse modules are used in different replicas. The improvement is most significant in the most diverse configuration, α=4. The interesting observation in varying α and β is that the designer can tune the level of diversity based on the available versions, cost, and sensitivity of the application.
§.§ Module placement and alignment across replicas
An important property of composition in ResiLogic is the position of common modules across replicas, which plays an important role in overall resilience. We explain this in Fig. <ref>, where two replicas R1 and R2 are constructed such that R1={M1, M2, M3, M4} and R2={M5, M6, M7, M1}. M1 is the common module, and the placement distance between the locations of M1 in the two replicas is 3, i.e., M1's location in R1 and R2 is 0 and 3, respectively. From Fig. <ref>, it can be observed that increasing the alignment distance of a common module across two replicas improves resilience, because varying the common module location in replicas has a direct impact on fault controllability and observability conditions, resulting in reduced CMFs.
§.§ Resilience under Zonal Attacks
We now evaluate ResiLogic under Zonal attacks. We pick two simulated scenarios: a horizontal attack (e.g., an electromagnetic field applied as in Figure <ref>), which affects an entire replica and its modules, and a vertical attack, which affects one module of each replica. Since No-TMR has no replication, all modules of the CMA are attacked. As expected, under the horizontal zonal attack an entire CMA replica is attacked in TMR and in the (4,8) (full diversity) configuration. The impact of the attack is 100% masked by the majority voting. The No-TMR case, however, fails drastically, with failure probability close to 0.9. This reinforces the fact that TMR is useful in some cases even without modular diversity under Zonal attacks. The advantage is, however, not present in the case of a vertical zonal attack, since all replicas in TMR and ResiLogic are hit by the attack, which cannot be masked by the voter. As expected, the failure probability in this case is only affected by the gate-level diversity of the modules used.
§.§ Resilience under Compound Attacks
We finally consider the case of Compound attack where the Zonal and Distribution attacks are simultaneously applied.
We fix α=4 while varying the value of β in Fig. <ref>. For the case of α=1, β=0, all modules are victims of the Distribution attack, which has a huge impact on TMR and No-TMR, as we have seen in Figure <ref>. For the case where replicas are composed of diverse modules and there are no common modules among them, a module is picked at random for the injection of the Distribution attack.
ResiLogic can sustain the Compound attack with failure probability down to 0.1, compared to TMR and No-TMR, which fail most of the time.
§.§ Impact on Area, Power and Delay
To determine the impact of the proposed technique on area, power, and delay, the replicas are generated with specific resilience according to the classification levels mentioned in Table <ref>. The area, delay, and logic levels are calculated using the Berkeley-ABC synthesis tool <cit.> for the MCNC library. The MCNC library is slightly modified to include only 2-input NAND/NOR and XOR/XNOR gates along with an inverter. This is done to keep the library highly correlated with the designs in Table <ref> and also to restrict the map tool's optimizations. The area is computed by adding the fanins of all gates in a circuit. The dynamic power consumption of a digital circuit is determined using the Berkeley-SIS tool <cit.> for the MCNC library operating at a supply voltage of 5V and a frequency of 20 MHz.
Algorithm <ref> is applied by fixing the values of α and β to {1,0}, {1,8} and {2,0} respectively. We generated 100 CMAs corresponding to each resilience level as shown in Table <ref> and results are averaged. Table <ref> provides valuable insights into the trade-offs between area, power, and delay across different redundancy configurations. It can be observed that with increasing resilience level, area, power, and delay also increase. This is due to the fact that the resilience of modules M_i is increased by injecting the functionally redundant logic leading to an increase in area and power. The introduction of redundant paths or logic can also affect the timing characteristics of a circuit. While some forms of redundancy, like parallel processing units, might aim to reduce processing time, others might introduce additional stages of logic that data must travel through, thus increasing the delay.
Delay is a function of the critical path–the longest path between any input and output–of a circuit.
TMR is a special case of (α,β)={1,0}, where all replicas are identical; it consumes the maximum area in comparison to the other configurations. In the case of low module resilience for (α,β)={1,0} & {2,8}, the area decreases from 344 to 324, but power consumption and delay increase. This is due to the introduction of XOR/XNOR gates in the redundant logic, which consume more power and have greater delay compared to NAND/NOR gates. Similarly, for the cases of medium and high resilience, the corresponding area, power, and delay increase for both cases of (α,β). However, the area for the case of medium resilience decreased from 379 to 374.
§.§ Diversity and Resilience with E-Graphs
The effectiveness of the E-Graph-based approach to generate diverse and resilient circuits is evaluated on the benchmark circuits shown in Table <ref>. The benchmarks are a mix of circuits from the LGSynth <cit.>, ITC <cit.>, and ISCAS85 <cit.> benchmark suites. Additionally, an open-source divider from OpenCores is also considered for evaluation. The tool in <cit.> is also utilized to generate two multipliers (3x3 and 5x5). One benchmark corresponds to the Multiply-Accumulate operation unit of a typical systolic array.
The rewrite rules (Table <ref>) and extraction methods of E-Graphs are implemented to interface with the library.
Fig. <ref> highlights the resilience range of the chosen benchmark circuits. For several benchmarks, the tool is able to generate diverse replicas with varying resilience (P_f) levels. Similarly, Fig. <ref> shows the spread of area across benchmark circuits. We are able to generate varying area profiles for some benchmarks, whereas for the rest the areas of the generated diverse replicas remain almost constant.
We observed that the resilience and corresponding area profiles are sensitive to the rewrite rules, which can be controlled by modifying them in Table <ref>. New rules added or existing ones modified can lead to different implementations and area/resilience profiles. ResiLogic can leverage these diverse replicas to create larger diverse artifacts. It should be noted that for the benchmarks in Table <ref>, we did not perform the distribution and zonal attack simulations.
Impact of Synthesis Tools on Diversity Generation. The inherent nature of Electronic Design Automation (EDA) synthesis tools is to optimize circuits to meet area, power, and delay constraints. To achieve resilience, additional gates have to be added that do not change the function of the circuit but increase area and delay. Since an E-Graph works on the AST of a given language, it relies on a parser and lexer for text processing. Any redundancy in the resulting E-Graphs of diverse replicas will either 1) be removed by the parser, or 2) be removed by the synthesis tools. To avoid these optimizations, one way is to: 1) add a different tag to each redundant gate, 2) declare all redundant gates as primary inputs to the circuit, 3) perform synthesis, which will now not remove the redundancy, and 4) remove the redundant gates from the primary inputs list. Following these steps, it is observed that very high resilience can be achieved, but at the cost of exorbitant area (cf. Appendix <ref>). It would be interesting to investigate the impact of E-Graphs to generate diverse replicas at different levels of digital circuit abstraction.
§ RELATED WORK
The resilience of a system is significantly improved by applying replication followed by a voter for majority consensus. Traditional approaches to avoid CMF in TMR/DMR have been to implement diverse systems at higher levels of abstraction <cit.>. Faults can be intentional or accidental and have predictable manifestations in digital circuits.
State of the art literature classifies faults/vulnerabilities into four categories: 1) hardware trojans or malicious implants, 2) backdoor through test/debug interfaces, 3) accidental/unintentional vulnerabilities <cit.>, and 4) external faults <cit.>.
To combat low level faults, resilience through design diversity must be applied at the gate level.
Cartesian Genetic Programming (CGP) <cit.> has been proposed to generate diverse isofunctional structures to improve resilience at the gate level of a design <cit.>. Similarly, redundant designs can be spatially separated to provide resilience against external faults in TMR systems <cit.>. ResiLogic builds on this concept of diversity, but proposes a novel approach to combine small diverse structures to create larger diverse artifacts.
Faults based on external attacks can be due to various sources.
Power supplies can be tampered with to alter uniform behavior, resulting in erroneous results.
Similarly, power spiking can cause a processor to not only misinterpret an instruction but also induce memory faults <cit.>.
Clock glitches, through a deviated clock signal, can be induced to cause execution of subsequent instructions before retirement of previous instructions, i.e., modification of the program counter (PC) at irregular intervals <cit.>.
This type of attack is considered to be the simplest as low-end FPGAs can be used to inject clock glitches <cit.>.
Electronic devices function properly in a certain temperature range, however, when exposed to extreme temperatures, the data inside memories can be corrupted.
These attacks are easier to set up but their effect cannot be localized.
Optical attacks are performed by exposing the device to a focused laser beam or strong photo flash.
This attack can be targeted at either side of the chip and can be confined to a certain area to set or reset memory bits or switch transistor states <cit.>.
Similarly, electromagnetic (EM) interference can corrupt or modify memory contents by inducing eddy currents, resulting in a single or multiple bit faults <cit.>.
Both optical and electromagnetic attack fits our definition of zonal attacks in this paper.
Attacks based on malicious implants can be inserted into a design at various stages, such as by an untrusted CAD designer, by a malicious tool, or at the foundry <cit.>.
Furthermore, hardware verification is a time-consuming and costly task, making it challenging to detect such trojans at scale <cit.>.
Malicious implants can be activated by an attacker using a carefully crafted strategy <cit.>.
Previous works have discussed hardware trojan activation using a combination of software-hardware <cit.> and triggers based on time, input sequence and traffic patterns <cit.>.
We model these Distribution Attacks by injecting fault(s) at specific location(s) in the design and investigate which input patterns are able to activate them.
§ CONCLUSIONS
Existing approaches to resilient chip design have mostly been studied under benign fault models where no malicious actors exist <cit.>. ResiLogic addresses that gap, motivated by recent digital sovereignty concerns <cit.>.
Productive chip businesses focus on design, while making use of off-the-shelf tools and libraries from different vendors after which fabrication is outsourced. If vendors and fabs are not trusted, the design can be vulnerable to several attacks, among them three we introduced: Distribution, Zonal, and Compound.
To mitigate these attacks, we introduced the notion of Diversity by Composability that enables the building of resilient circuits out of smaller diverse modules.
Together with state-of-the-art gate-level diversity and modular redundancy, we show that resilience to the aforementioned attacks is significantly improved, by a factor of five against the considered attacks, while the designer retains control over diversity. Additionally, we showed how E-Graphs can be utilized to generate diverse replicas using a language-independent tool.
§ APPENDIX: PROPOSITIONS AND PROOFS
There exists an N-bit Adder implementation that is a homogeneous CMA.
Consider the N-bit Ripple-Carry Adder (RCA) in Figure <ref>. The RCA is composed of a tuple of N modules (M_N,...,M_1) where M_i=N-bit-FA ∀ i ∈ [1,N]. Therefore all modules M_i are equivalent, i.e., homogeneous.
In addition, the RCA is defined by the Identity dependency matrix of these deterministic modules, in Eq. <ref>, that by design produces the same output(s) for the same input(s).
There exists an N-bit Multiplier implementation that is a heterogeneous CMA.
Consider the N-bit Array Combinational Multiplier (ACM) <cit.>, similar to the 4-bit one in Figure <ref>. The ACM is composed of a tuple of L=2N-1 modules (M_L,...,M_1) where M_i = N-bit FA or N-bit HA ∀ i ∈ [1,L]. The same functional modules M_i can be implemented as homogeneous or diverse structures, i.e., heterogeneous. In addition, the ACM is defined by the deterministic dependency matrix of these deterministic modules, e.g., Mul in Eq. <ref>, that by design produces the same output(s) for the same input(s).
(A Diverse CMA is deterministic)
A CMA A_i=(M_i, M_j, ...,M_N) whose module implementations m_k^t belong to a diverse module M_k is deterministic.
Assume the contrary, that there exist at least two diverse CMA implementations P=(p_1^a, p_2^b, ..., p_N^z) and Q=(q_1^l, q_2^t, ..., q_N^k) with at least one tuple location x having (p_x^e≠ q_x^e), p_x^e, q_x^e∈ M_x (M_x being a diverse module), and such that out(P)≠ out(Q) (nondeterministic). Since all module implementations p_y^f and q_y^g belong to a diverse module M_y, then it is possible to replace each p_y^f with q_y^g at every index y in CMA P. This generates exactly equivalent tuple implementations P=Q. Since we assumed out(P)≠ out(Q), then out(P)≠ out(P). This means having one exact implementation of CMA that is not deterministic, which contradicts the CMA definition in Eq. <ref>.
Therefore, a diverse CMA is deterministic.
§ APPENDIX: IMPACT OF SYNTHESIS ON DIVERSITY
Fig. <ref> illustrates the impact of synthesis on digital circuits (discussed in Section <ref>). The example circuit (Fig. <ref>) consists of 7 logic gates with P_f=1. Applying E-Graph rewrite and extract operations with optimizations, i.e., without tagging each gate, the resulting area after synthesis is 16 gates with a corresponding P_f=0.84, as shown in Fig. <ref>. However, if each gate is tagged, i.e., no optimization is applied, synthesis of the circuit results in 25 logic gates with P_f=0.74, as shown in Fig. <ref>. This simple example highlights that the area increases by more than 3x if no optimization is applied. For larger circuits, we observed that the resulting area becomes exorbitantly high (with a corresponding significant reduction in P_f), making the final design unfeasible for practical applications.
§ APPENDIX: CMF COMPUTE ENGINE ARCHITECTURE
Fig. <ref> highlights the flow of common mode failure computing engine.
|
http://arxiv.org/abs/2409.03359v1 | 20240905090531 | Relaxation rate of ModMax-de Sitter black holes perturbed by massless neutral scalar fields | [
"Haryanto M. Siahaan"
] | gr-qc | [
"gr-qc"
] | |
http://arxiv.org/abs/2409.03478v1 | 20240905123813 | LLM-based event abstraction and integration for IoT-sourced logs | [
"Mohsen Shirali",
"Mohammadreza Fani Sani",
"Zahra Ahmadi",
"Estefania Serral"
] | cs.DB | [
"cs.DB",
"cs.ET",
"cs.LG",
"68M14",
"I.2.1; H.4.0"
] |
LLM-based event abstraction and integration for IoT-sourced logs
Mohsen Shirali, Mohammadreza Fani Sani, Zahra Ahmadi and Estefania Serral
September 9, 2024
==================================================================================
Corresponding author: Mohsen Shirali ()
§ ABSTRACT
The continuous flow of data collected by Internet of Things (IoT) devices has revolutionised our ability to understand and interact with the world across various applications. However, this data must be prepared and transformed into event data before analysis can begin. In this paper, we shed light on the potential of leveraging Large Language Models (LLMs) in event abstraction and integration. Our approach aims to create event records from raw sensor readings and merge the logs from multiple IoT sources into a single event log suitable for further Process Mining applications. We demonstrate the capabilities of LLMs in event abstraction considering a case study for an IoT application in elderly care and longitudinal health monitoring. The results show, on average, an accuracy of 90% in detecting high-level activities. These results highlight LLMs' promising potential in addressing event abstraction and integration challenges, effectively bridging the existing gap.
Keywords. Internet of Things · Large Language Models · Event Abstraction · Log Integration · Multi-modality.
§ INTRODUCTION
The IoT technology has the potential to create a new "cyber-physical" world, where "things" can directly operate, act, and influence the physical world <cit.>. Smart devices have seamlessly integrated into everyday life, and low-cost sensors are embedded in various applications, from smart homes to smart cities and industrial IoT in smart factories.
To extract meaningful insights from raw IoT data, advanced data mining techniques are required. However, raw sensor-level data is unstructured and non-informative, making it unsuitable for data mining. Raw data must be transformed into event logs to make the information discoverable and comprehensible for analysis <cit.>. Without proper pre-processing, the analysis foundation is compromised, creating an abstraction gap <cit.>.
Looking in particular at Process Mining (PM), one of the data analysis disciplines, it requires event logs where each event represents a single activity at a particular time <cit.>. However, in complex settings, systems often lack process-centric data, rendering the data unsuitable for immediate analysis. Likewise, in smart homes, events are often logged at the sensor-trigger level, which is too granular for meaningful pattern mining, complicating the application of PM techniques <cit.>.
Additionally, the required data often resides in various databases and/or generated by multiple information systems or sources. Data from multiple sources must be integrated to provide a comprehensive view, facilitating the understanding of system behaviour and enhancing analysis capabilities from various angles.
Preparing event logs is typically manual, requiring domain knowledge to group correlated tasks into meaningful activities. This manual effort can be inconsistent and error-prone, especially for complex processes involving multiple data sources <cit.>. Moreover, integrating heterogeneous logs introduces challenges like format inconsistencies, timestamp misalignment, and data quality issues <cit.>. Besides, event data may have mixed granularity and contextual information, complicating the application of data analysis techniques <cit.>.
In this way, pre-processing techniques are vital to build meaningful event logs before any actual analysis, like PM, can begin <cit.> and resources have to be allocated for this purpose.
A study in <cit.> revealed that the laborious effort required for data preparation, or the lack of proper expertise, can cause PM projects to be called off.
Log abstraction and integration are two essential data preparation or pre-processing tasks often used in data-driven analytics, to enhance result interpretability <cit.>.
Furthermore, log abstraction involves summarising or grouping low-level data elements into higher-level event-based representations, aligning with the executed activities in a process.
This simplifies the analysis and assigns meaning to sets of raw data records, allowing them to be interpreted as a single event <cit.>.
Log integration, in turn, merges data from multiple sources/systems to create a unified view for analysis <cit.>.
Existing event abstraction techniques are either unsupervised or supervised <cit.>. Unsupervised methods rely on control-flow similarities between low-level event types without requiring input on targeted high-level activities <cit.>. Conversely, supervised methods require extensive input on high-level activities, demanding significant manual effort and domain knowledge <cit.>.
These two extremes necessitate a balance to support users in abstraction, addressing the challenges of manual effort, domain knowledge, and consistency.
Motivated by these challenges, we propose using Large Language Models (LLMs) to automate raw data pre-processing and event log generation, minimising the need for user background knowledge and effort. LLMs, trained on extensive data, are ideal for this task, capable of processing textual information and performing natural language understanding and generation tasks <cit.>. Our approach aims to utilise LLMs to analyse IoT sensor logs, generate meaningful event records, and merge logs from multiple sources, facilitating Process Mining applications.
We explore how an LLM, given a prompt with event label samples and minimal user input, can abstract sensor readings into event logs. In the prompt, we outline the task, clarify the role of the LLM, describe the inputs it will process, and specify the expected outputs; a minimal sketch of this step is given below. Hence, the contributions of our work are as follows: first, we use LLMs to automate the detection and labelling of low-level sensor data to create abstracted events; second, we develop a method for abstracting and generating events based on data streams, making the proposed approach suitable for online applications; third, information from multiple sources is merged into a unified event log to enhance analytics by providing a comprehensive view of the processes.
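The sketch below illustrates this prompting step; the prompt wording, the `call_llm` wrapper, and the label set are our assumptions for illustration, not the paper's exact setup.

```python
# Hypothetical prompting sketch: abstracting raw sensor readings into events.
import json

PROMPT = """You are an event-abstraction assistant. Given raw sensor readings
(timestamp, sensor, value), group them into high-level activity events.
Allowed activity labels: {labels}.
Return a JSON list of records with keys "activity", "start" and "end".
Sensor readings:
{readings}"""

def abstract_events(readings, labels, call_llm):
    """`call_llm` is any text-in/text-out LLM client supplied by the user."""
    prompt = PROMPT.format(labels=", ".join(labels), readings="\n".join(readings))
    return json.loads(call_llm(prompt))          # assumes the LLM returns JSON

readings = ["07:02 PIR_kitchen ON", "07:03 fridge_door OPEN", "07:15 stove_gas ON"]
# events = abstract_events(readings, ["Preparing meal", "Sleeping"], call_llm)
```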
Our work aims to streamline the process of event log generation, making it more efficient and accessible, and ultimately improving the utility of IoT-sourced sensor data in data-driven analytical applications.
The rest of this article is organised as follows:
Section <ref> reviews the existing event abstraction and integration methods and describes the multi-source IoT dataset which is used for evaluation. Section <ref> describes the proposed LLM-based event log abstraction and integration approach.
Then, in Section <ref>, the results and performance of the proposed approach are evaluated and a discussion on the usefulness of utilising LLMs besides IoT systems is provided in Section <ref>.
Finally, Section <ref> concludes the article's findings.
§ BACKGROUND KNOWLEDGE
§.§ Existing works for event abstraction and integration
The journey from raw data to event logs suitable for PM has been extensively studied in works like <cit.> and <cit.>.
For example, van Zelst et al. <cit.> and Fahland et al. <cit.> have used techniques like log abstraction to group events into higher-level activities for simplified visualisation and analysis.
Additionally, a hierarchical framework for event abstraction based on notion of activity instances was proposed in <cit.>. Similarly, Senderovich et al. <cit.> suggested a knowledge-driven approach to transform raw sensor data into standardised event logs.
The integration of Complex Event Processing (CEP) and BPM, as highlighted by Soffer et al., has gained attention for creating more efficient systems, especially for IoT applications.
Mangler et al. <cit.> also address the challenges of connecting process data with IoT data and propose a semi-automatic framework for transforming low-level IoT sensor data into higher-level process events.
Furthermore, Di Federico and Burattin <cit.> introduce CvAMoS, a method to identify recurring sequences of activities and consider their context of execution for event abstraction.
Additionally, <cit.> introduces the concept of activity signatures for deriving knowledge about activity executions and training supervised learning models to detect higher-level process activity executions for larger datasets.
Moreover, it is particularly useful (or even crucial in many use cases) to build a ground truth based on domain knowledge and user expertise in the early stages of IoT data analysis, which can later facilitate automated activity detection <cit.>. For instance, in some IoT logs, human expertise can help interpret sensor-level data and map it to activity-level events using samples, without predefined descriptions. This approach, used in activity recognition, maps sensor readings such as opening the refrigerator door to activities like "preparing meal" <cit.>.
In addition, system designers can translate sensor readings into events using predefined rules. For example, <cit.> and <cit.> used sensor locations to create event logs for discovering behaviour models and mobility patterns with PM techniques. Also, studies like <cit.> consider sensor types and attached objects to create event logs for performed activities of daily living (ADLs).
To overcome the required domain knowledge, <cit.> applies a method to learn event representations and automatically suggest related event groups for abstraction. This approach allows users to select meaningful event groups without initial input. Additionally, an unsupervised method in <cit.> continuously transforms user interactions into event streams for downstream analysis.
Another unsupervised learning technique is also proposed in <cit.> to discretize sensor data into identifiable activities, and refine their sequences by deducing sub-models to identify concurrency and interleaving within the data in smart home scenario.
Furthermore, recent studies have explored using LLMs for event abstraction like <cit.> that grouped tasks into high-level activities and assigned labels to them based on their similarity.
In terms of log integration methods to address data heterogeneity, in <cit.>, logs from different organisational departments were combined for end-to-end process analysis. Also, in <cit.>, a time-based heuristics miner is used to discover high-level and low-level process models in parallel from multi-source logs, integrating them with Petri net refinement.
§.§ Multi-source IoT Dataset
Our study aimed to assess the effectiveness and accuracy of using LLMs on IoT-sourced datasets to create event logs suitable for Process Mining.
We applied our proposed LLM to sensor logs from an IoT dataset collected in an Ambient Assisted Living scenario.
This dataset contains 146 days of data, capturing the daily activities of a 60-year-old woman living independently in an apartment <cit.>. The data was collected using ambient sensors installed in the house, a wristband, and a smartphone.
The ambient sensors included PIR sensors placed on furniture and appliances, a power usage sensor indicating TV usage, contact sensors for detecting door openings and closings, and a gas detection sensor for identifying cooking activity.
These sensors were positioned in six different areas of the house, as shown in Figure <ref>, and recorded event data with timestamps, sensor names, and sensor values (e.g., on/off states).
The participant also wore a Xiaomi Mi Band-3 wristband to collect sleep-related information such as sleep time, duration, and quality.
Additionally, smartphone usage data, including timestamps and the names of used applications, were collected using an application.
The ambient sensors, such as the TV sensor and kitchen appliance sensors, are triggered when the participant performs specific activities.
As a result, their measurements are at a low level of abstraction representing the presence of the subject near the appliances, the open or closed state of doors, and the use of the stove.
By knowing the timestamp of their triggered states, we can discover the corresponding activities to express the exact start and end times and the duration of the conducted activity.
These activities are commonly known as instrumental Activities of Daily Living (iADLs), i.e., activities performed using specific instruments, and they have been investigated in several behaviour modelling and analysis studies and projects, like <cit.>.
With the entire sequence of captured records for a specific duration, we can infer events related to the person’s presence in different areas of the house, the performed activities, and the sequence of their movements.
For this purpose, an event abstraction step is needed to elevate the abstraction level of sensor raw readings into recognisable activity events in an event log.
The events related to smartphone usage should also be converted and inserted into the log, while the names of applications can be replaced with their type or with a general label such as `Using Smartphone' to preserve the participants' privacy.
Furthermore, the sleep-related information collected by the wristband will complete the preferred event log.
In the current case study, the event log includes location events, such as Bedroom, Bathroom, and Kitchen, and activity events, such as sleeping, meal preparation, praying, and watching TV.
A snapshot of sensor logs and the required event log, including the information of all modalities (the ground truth version), is depicted in Figure <ref>.
We fed these sensor logs obtained from devices' reports in our multi-modal IoT dataset into the proposed LLM model, asking it to create a PM-friendly event log.
The resulting event log was then compared with a ground truth event log, generated through a pre-processing step that applied multiple event detection rules determined by domain experts, followed by manual inspection to correct any falsely detected events, as described and used in <cit.>.
§ EVENT LOG ABSTRACTION AND INTEGRATION BY LLMS
As previously stated, PM techniques rely on having an event log as an input, and without it, gaining insights is not possible.
While many scientific works assume the availability of the event log, the procedure of pre-processing raw data, detecting events, abstracting them into activities, and generating an event log can be time-consuming in real scenarios. Abstracting events from sensor data, particularly when dealing with streams of input data and potential concurrent events and activities recorded by a combination of sensors, can be quite challenging <cit.>.
In this section, we explain how we can use Generative AI and LLMs to (1) abstract the events from sensor data to activities, and (2) generate an event log from different resources.
§.§ Abstraction of events into activities
We propose to use LLMs to detect activities from sensor data. In this regard, we give the LLM some basic explanations about the sensors, the possible activity labels, and a few examples, and ask it to provide a label for the detected change in sensor status.
In other words, we call the LLM whenever a sensor value changes.
The changes in the binary values of the sensors have the highest significance regarding process-level events, as stated in <cit.>.
Thus, each sensor log entry should denote a change in the status of sensors.
Therefore, we can treat this task as a classification problem in which the LLM is the classifier and the activity labels are the classes.
The number of classes is the number of desired activities plus one, since sensor changes may occur without any corresponding activity due to noise, irrelevant sensors, etc.; a blank class is defined to address these situations.
For this task, we utilised GPT-4 as the model and applied few-shot learning <cit.> and chain of thought <cit.> techniques.
In the prompt template, we incorporated placeholders for the label of activities, the sensor explanation, and the output format[The sources and examples for our implemented LLM with more detailed information are publicly available at https://github.com/mfanisani/LLM4IoT].
In general, the prompt can be modified if we have more information based on the rules that we have in business.
For instance, the outputs of all sensors in our dataset are binary values (i.e., 0 and 1), and we take advantage of this to save tokens and reduce cost by passing the aggregated sensor information to the LLM in binary format (the number of tokens consumed per prompt is one of the limitations of using LLMs). Hence, the LLM must recognise which value corresponds to which sensor based on the order of the binary digits in the sensor state. Some examples of SensorStates are given in Figure <ref>.
We provide the SensorStates for the current and previous state in our prompt to simplify the task for the LLM. The LLM will first identify what has changed in these two values and then provide the activity name.
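A minimal sketch of this classification call is given below, assuming an OpenAI-style chat-completion client; the sensor list, activity labels, and helper names are illustrative placeholders rather than our exact prompt, which is available in the repository linked above.

from openai import OpenAI

client = OpenAI()

# Illustrative label set: each activity has a start/end label plus "None".
ACTIVITY_LABELS = ["Watching TV start", "Watching TV end",
                   "Cooking start", "Cooking end", "None"]

PROMPT_TEMPLATE = """You label smart-home sensor changes as activities.
Sensors (in bit order): [TV power, stove gas, kitchen PIR, bedroom PIR].
Possible labels: {labels}.
Example: previous=0000, current=1000 -> Watching TV start
Example: previous=1000, current=0000 -> Watching TV end
Previous state: {prev}
Current state: {curr}
Think step by step about which bit changed, then answer with one label."""

def classify_sensor_change(prev_state: str, curr_state: str) -> str:
    """Call the LLM once per detected sensor-state change."""
    prompt = PROMPT_TEMPLATE.format(labels=", ".join(ACTIVITY_LABELS),
                                    prev=prev_state, curr=curr_state)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()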
§.§ Integration and event log generation
We can have several information sources for one event log.
Note that for each activity we have two events; the first one corresponds to the start of the activity and the second one indicates that the activity has ended.
In real world, we may have more events for each activity considering the complete life cycle of activities <cit.>.
We propose to use LLM to gather information from different resources and combine them into one event log.
However, we have some limitations:
* We cannot give the whole information to LLM as we have a limitation on the number of tokens,
* We do not have a caseID[caseID is one of the main columns in any event log that represents the process instance.] as all the information related in our dataset is related to one process instance.
To overcome the above limitations, we considered date as caseID. This will allow us to analyse the process of activities that have been done each day.
Therefore, our data will include the date as the case ID, the activity, start time, and end time.
However, it is possible that an activity starts on one day and ends on the next, e.g., sleeping from 2022-11-23 22:00 to 2022-11-24 07:00.
In this case, we provide the LLM with the information for activities that span two days and ask it to trim the event at midnight and adjust the event log for each day accordingly.
To handle this, we introduce another column with binary values indicating whether the activity is completed on the same date or not. If the activity crosses over to the next day, we will not include the end time for that next day.
While we assumed activities span at most two consecutive days to mitigate the constraint on the number of tokens per LLM call, it's important to recognise that real-world scenarios may not adhere to this assumption and may have longer execution times.
Our prompt template includes possible activities, two-shot examples, the information for the date of interest and the following day, and the format of the desired output.
In this way, we can increase the performance of the event log generation task, as we call the LLM only once per day.
Note that the output does not contain the header of event log columns.
We use a basic script to collect the output information generated by LLM for different dates and compile it into a single event log.
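A minimal sketch of such a compilation script is given below, assuming the LLM returns one CSV-formatted block per date without a header row; the column names follow the event log described above and are otherwise illustrative.

import csv
import io

COLUMNS = ["case_id", "activity", "start_time", "end_time", "same_day"]

def compile_event_log(daily_outputs: dict, path: str) -> None:
    """Merge per-date LLM outputs (CSV text without headers) into one log."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)  # the header is added here, not by the LLM
        for date in sorted(daily_outputs):
            for row in csv.reader(io.StringIO(daily_outputs[date])):
                if row:  # skip blank lines in the model output
                    writer.writerow(row)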
The proposed framework for event log abstraction and integration can be used on different sensors or data logs with different prompts and on various LLMs. The samples for the appropriate labels are given to the model and it returns the expected output for the stream of data inputs immediately.
Therefore, the framework is also able to deal with streams of data sources and the output of this layer is sufficient for the PM tools without any further involvement of the users and interventions.
§ EVALUATION
It is important to assess the effectiveness of LLMs in event abstraction by analysing the results and evaluating the model's accuracy in generating event records and assigning the appropriate labels. This section presents the results of proposed LLM-based event abstraction on the real-world IoT-sourced dataset for our case study.
We have conducted the evaluation in two steps. First, the abstracted events for each data source are examined and compared with the intended labels to measure the accuracy of the LLM model in identifying and abstracting the raw sensor records for each data source. This comparison will highlight the LLM model's potential in dealing with different IoT sources.
In the next step, the outcome of merging three logs from ambient, smartphone and wristband into the final integrated PM-friendly event log is cross-checked with the ground truth event log.
§.§ Event abstraction and integration results
The accuracy results of the LLM in the abstraction of events for each of the three modalities in our dataset are provided in Table <ref>.
For ambient sensors, we only consider activities that occur at least 50 times. The 21 labels in this source correspond to 10 activities, each with a start and an end, plus one label that corresponds to no activity.
For smartphones and wristbands, we have 13 and 3 labels, respectively, indicating the type of application used or day/night sleep.
For each activity, we have start, continue and end labels. Similar to ambient sensors, we have a label for no activity.
The accuracy for the ambient sensor data is much lower than for the other sources, as the numbers of related sensors and labels are higher.
For all three sources, the lowest accuracy is for the None label, which corresponds to no activity. In several cases, the model predicted an activity (e.g., Watching TV start/end) instead of None.
We randomly checked some of the wrongly assigned labels and in almost all of them, the reasoning part produced by LLM was wrong.
§.§ Evaluation of LLM-generated event log
The evaluation of the LLM-based event log in terms of the accuracy of the recognised events is crucial to assess the success of LLMs in the abstraction and integration task.
This evaluation involves comparing two event logs that contain various events with their respective start and end times.
A snapshot of the event log generated using LLM is presented in Table <ref>.
Therefore, the matching of events labels, events orders, and the duration and time precedence of events are the important factors to be considered for evaluation.
We have utilised the number of correctly generated events, the ratio of perfectly aligned dates, and the Edit Distance Alignment (EDA) <cit.> metric to measure the success rate of our proposed LLM-based approach.
In our experiment, we provided the GPT4-0613 model with sensor states and their original labels, tasking it to generate an event log. Notably, we used the ground truth labels, not those predicted by LLM. The ground truth dataset contains 17,165 events, while the LLM-generated event log has 15,420 events. Among these generated events, 12,741 were correctly detected, with 11,365 having matching start and end activities.
In addition, LLM is able to generate all the unique activities that are defined for it.
In our analysis, we observed that self-loops—consecutive events with the same label—play a significant role. After removing these self-loops, we found that the generated event log contains 11,248 events, while the ground truth dataset has 11,431 events. Interestingly, LLM sometimes aggregates self-loops, particularly when the user interacts with her smartphone, where the interactivity times between self-loops are negligible.
To show how the generated event log is aligned with the ground truth event log, we applied EDA between similar dates in both event logs using the technique proposed in <cit.>.
This metric measures how similar the sequences of events are, where a value closer to one indicates higher similarity.
The results are presented in Table <ref> and indicate that the event log generated by LLM captures similar behaviour compared to the ground truth event log but it aggregated the self-loops.
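For illustration, a sketch of one plausible formulation of such an alignment score is given below, assuming EDA is computed as one minus the Levenshtein distance between the two event label sequences, normalised by the longer sequence length; the exact definition in the cited work may differ.

def eda_score(generated: list, ground_truth: list) -> float:
    """Normalised edit-distance alignment; 1.0 means identical sequences."""
    m, n = len(generated), len(ground_truth)
    if max(m, n) == 0:
        return 1.0
    # Classic dynamic-programming Levenshtein distance over label sequences.
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i
    for j in range(n + 1):
        dist[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if generated[i - 1] == ground_truth[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,          # deletion
                             dist[i][j - 1] + 1,          # insertion
                             dist[i - 1][j - 1] + cost)   # substitution
    return 1.0 - dist[m][n] / max(m, n)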
§ DISCUSSION
The results of this case study indicate that utilising LLMs can significantly alleviate the challenges associated with event log abstraction and integration for IoT-sourced logs. LLMs can bridge the gap between data collection and the demand for event logs at appropriate abstraction levels for data analysis techniques, including Process Mining. While traditional methods for event abstraction and integration are often manual, error-prone, and time-consuming, LLMs can automate and speed up these tasks while reducing human errors. They can swiftly pre-process large volumes of raw sensor data, identify and abstract events with high accuracy, and label them uniformly, thus freeing up human resources for more strategic tasks.
However, it’s important to acknowledge that the output of our proposed solution may not always be highly reliable. Therefore, we recommend incorporating human oversight in applications that require a high level of reliability.
Key advantages of using LLMs in event log generation and integration for IoT systems include:
* Automation and efficiency. LLMs can automate the pre-processing of raw data and make the process more efficient by reducing manual effort and errors.
* Reduced need for domain knowledge. Expertise is crucial in supervising the abstraction process, and interpreting IoT logs requires substantial domain knowledge to map low-level sensor readings to high-level activities. LLMs, trained on diverse datasets, can understand and abstract sensor data into meaningful events without deep domain-specific knowledge. This allows more users with limited background knowledge to generate valuable insights and makes IoT data analytics more feasible.
* Handling multi-modality. IoT systems often collect data from multiple sources, resulting in mixed granularity and inconsistencies in data formats and characteristics. LLM-based models can understand and integrate heterogeneous data inputs, by handling this complexity. These models can align data from various sensors, ensuring a cohesive and consistent event log, particularly for scenarios involving time misalignment and varied data structures.
* Real-time processing. LLMs are capable of handling streams of data inputs, making them suitable for online applications. Initially, in our study, we triggered LLM calls whenever there was a change in sensor values. However, this approach can be computationally expensive and slow in certain applications. To address this limitation, we propose using batching—calling the LLM fewer times by grouping input data— similar to our approach for event log generation. This means that LLMs can process incoming data in real-time, handling sensor records one by one without relying on maintaining extensive change histories, and continuously updating the event logs with new information. This ensures that the event logs are always current, facilitating timely analysis and decision-making.
* Performance optimisation with prompt engineering and user feedback. Designing prompts that clearly describe the task, input data, and expected output, can significantly enhance the quality of the LLM's output.
Moreover, users' and domain experts involvement in the process remains crucial for fine-tuning the LLM's outputs and improving accuracy over time. A continuous feedback loop between LLMs and users helps in refining the model, adapting to evolving data patterns, and ensuring that the system meets the specific needs of the application. This collaboration between humans and intelligent systems ensures that the benefits of LLMs are fully realised while maintaining high standards of accuracy and reliability.
In the end, LLMs offer a powerful solution to the challenges of event log abstraction and integration in IoT systems. By automating the manual, repetitive, and error-prone data preparation tasks, LLMs streamline the process, reduce the need for extensive domain knowledge, and handle the complexities of multi-modality and mixed granularity in data. Additionally, with proper prompt engineering and user feedback mechanisms, LLMs can provide accurate and up-to-date event logs, enhancing the overall efficiency and effectiveness of IoT data analytics. Hence, integrating LLMs with IoT systems can revolutionise the way we process and analyse sensor data, unlocking new potential for real-time insights and decision-making.
§ CONCLUSION AND FUTURE WORKS
The rapid advancement of IoT technologies promises valuable data-driven insights and enhanced operational efficiency. However, a gap exists between the raw IoT data and the processed data needed for analysis, hindering the full potential of IoT technologies. In this study, LLMs are utilised to aid users in pre-processing raw data, converting it into event logs containing all the essential fields for event data analytics techniques, like Process Mining. The results underscore LLMs' potential in addressing event log abstraction and integration for IoT-generated logs. Despite these promising outcomes, the use of LLMs in this field is in its early stages, with room for improvement. Future research could focus on applying LLMs to text-based raw data, improving prompt engineering techniques, and fine-tuning LLMs to further enhance their performance and usefulness in data preparation.
§.§.§ Acknowledgement.
Mohsen Shirali is now affiliated with UCLouvain in Belgium under grant Win4Collective number 2310088. The work of Zahra Ahmadi and Estefanía Serral was supported by the Flemish Fund for Scientific Research (FWO) with grant number G0B6922N.
|
http://arxiv.org/abs/2409.03114v1 | 20240904223902 | Developing, Analyzing, and Evaluating Self-Drive Algorithms Using Drive-by-Wire Electric Vehicles | [
"Beñat Froemming-Aldanondo",
"Tatiana Rastoskueva",
"Michael Evans",
"Marcial Machado",
"Anna Vadella",
"Rickey Johnson",
"Luis Escamilla",
"Milan Jostes",
"Devson Butani",
"Ryan Kaddis",
"Chan-Jin Chung",
"Joshua Siegel"
] | cs.RO | [
"cs.RO",
"cs.CV"
] |
Developing, Analyzing, and Evaluating Self-Drive Algorithms Using Drive-by-Wire Electric Vehicles
=================================================================================================
§ ABSTRACT
Reliable lane-following algorithms are essential for safe and effective autonomous driving. This project was primarily focused on developing and evaluating different lane-following programs to find the most reliable algorithm for a Vehicle to Everything (V2X) project. The algorithms were first tested on a simulator and then with real vehicles equipped with a drive-by-wire system using ROS (Robot Operating System). Their performance was assessed through reliability, comfort, speed, and adaptability metrics. The results show that the two most reliable approaches detect both lane lines and use unsupervised learning to separate them. These approaches proved to be robust in various driving scenarios, making them suitable candidates for integration into the V2X project.
§ INTRODUCTION
In this paper, five lane-detection approaches are presented that were proven to work on real drive-by-wire vehicles. This project aimed to identify the most reliable one for use in a Vehicle-to-Everything (V2X) project <cit.>, which focused on developing a low-cost Roadside Unit (RSU) for Adaptive Intersection Control of Autonomous Electric Vehicles. V2X is a communication technology that enables vehicles to interact with each other (V2V), with infrastructure (V2I), with pedestrians (V2P), and with other road users. It aims to improve road safety and traffic efficiency by providing real-time information that facilitates proactive responses to different driving scenarios <cit.>.
To have a successful V2X communication system, autonomous vehicles must have some basic and essential components such as effective lane-following, which requires precise detection and tracking of lane markings to maintain a proper positioning on the road. The 5 algorithms presented in this paper are the following: Largest White Contour, Lane Line Approximation using Least Square Regression, Linear Lane Search with K-Means, Lane Line Discrimination using DBSCAN, and DeepLSD Lane Detection. Their performance was compared based on reliability, comfort, speed, and adaptability. Reliability was measured by the consistency of successful laps, comfort by the smoothness of the vehicle's movements, speed by lap time, and adaptability by the algorithms' performance under varying conditions. This paper also details lane following, the architecture used, real-life implementation challenges, and presents an experimental analysis to identify the best algorithm for the V2X project.
The remainder of this paper is organized as follows: Section II reviews related work; Section III outlines the methodology and resources; Section IV describes the software architecture with four nodes; Section V covers preprocessing; Section VI details five lane detection algorithms; Section VII discusses vehicle control algorithms; Section VIII presents experimental results; and Section IX concludes with future research directions. The project code is available at https://github.com/benatfroemming/REU-2024-Lane-Following.
§ RELATED WORK
Due to the complexities and cost of using real-scale autonomous vehicles, most research on lane-following has not been tested outside of a virtual environment. Much of this research does not effectively simulate lane following, and instead only focuses on lane detection using a single image or video. Hence, there is no reliable way to prove the efficiency and safety of those algorithms. The performance of an algorithm can be completely different in a real environment where there are a lot more factors to consider, such as the kinematics of the vehicle.
In the previous editions of the Self-Drive Research Experience for Undergraduates (REU) funded by the National Science Foundation (NSF) at Lawrence Technological University (LTU), several algorithms were developed and tested on real electric vehicles <cit.>. Traditional computer vision techniques were used for lane detection using OpenCV. This paper builds on those initial approaches, showing continued progress by introducing machine learning techniques.
§ MATERIAL AND METHODS
§.§ Simulation and Real Environment
The lane-following algorithms were tested on two different simulators: Simple-Sim <cit.> and Gazelle-Sim. Simple-Sim is a 2D simulator that allows for the control of a single robot in custom environments using ROS. Gazelle-Sim is an expanded version of the original, allowing for the simultaneous operation of multiple robots.
After testing the algorithms within the simulator, they were then modified to work on real-life drive-by-wire vehicles. The real-world test course is in parking lot H, located at Lawrence Technological University in Southfield, Michigan, USA. This circular test course simulates real-world adverse conditions such as potholes, sharp curves, faded and narrow lane markings, cracks, and extraneous lines. It also introduces unpredictable external challenges like tree shadows, sun glare, and puddles. An aerial view of the course is shown in Fig. 1.
§.§ Vehicle Specifications
The vehicles used for testing were ACTors (Autonomous Campus Transport) 1 and 2. Refer to Fig. 2. These vehicles were built on the Polaris Gem e2 platform, provided by MOBIS, and then modified by the Lawrence Technological University Intelligent Ground Vehicle Competition (IGVC) team. Each ACTor is equipped with essential self-driving hardware, including a Dataspeed Drive-by-Wire kit, an HDR camera for lane-following, 2D and 3D LiDAR sensors, and two Swift Piksi GPS modules. Additionally, both vehicles feature a Netgear router, power inverter, and a removable computer for networking and programming the Drive-by-Wire system using ROS (Robot Operating System). The Polaris Gem e2 boasts a top speed of 25 miles per hour and a range of approximately 30 miles.
§ ARCHITECTURE AND DYNAMIC RECONFIGURE
All the lane-following algorithms share the same architecture to maintain simplicity and modularity. It was designed to facilitate easy switching and tuning of the algorithms and environments within the vehicle or simulator. This is accomplished by using arguments within the launch file and dynamic reconfigure. The dynamic reconfigure GUI allows the user to activate the vehicle, adjust the vehicle's speed, tune the algorithms' parameters, and modify the steering sensitivity. The ROS-based architecture consists of 4 nodes: Preprocessor, Lane Detector, Vehicle Controller, and Vehicle Node.
§ PREPROCESSING
The Preprocessor node is in charge of enhancing the raw frontal image of the vehicle to extract insightful data about the road markers. It does this by applying a series of different effects using the OpenCV library. The first of these effects is Median Blur, which is applied to remove noise while preserving edges. This creates a blur effect in the image and highlights white regions. The image is then converted from the RGB color space, which has three channels, to a single-dimensional color space of gray shades. The new image retains the white pixels and makes it easier to find the correct threshold for identifying them. After that, the white regions are masked out. Finally, by cropping and removing the top part of the image, the region of interest (ROI) is obtained. The ROI only looks at the road and avoids other extraneous noises like buildings, trees, and the sky.
These steps are common to all algorithms, although some may require more sophisticated processing as explained in the next section. They can be disabled or tuned using Dynamic Reconfigure. For example, the upper and lower white threshold values when creating the mask. Finally, the lane-detection node receives the modified image and follows a specific set of steps tailored to each algorithm.
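A minimal sketch of this pipeline is given below, assuming a BGR frame from the vehicle camera; the threshold values and ROI fraction are placeholders that would be tuned through dynamic reconfigure.

import cv2

def preprocess(frame, white_lo=180, white_hi=255, roi_frac=0.5):
    blurred = cv2.medianBlur(frame, 5)               # remove noise, keep edges
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY) # three channels to one
    _, mask = cv2.threshold(gray, white_lo, white_hi, cv2.THRESH_BINARY)
    h = mask.shape[0]
    return mask[int(h * roi_frac):, :]               # keep the lower part: the road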
§ LANE DETECTION ALGORITHMS
§.§.§ Largest White Contour
The first approach used for lane detection involves a basic line-following algorithm with an offset and is used to teach and test the basic concepts behind using the ACTor Vehicles <cit.>. Hence, it is a good starting point to add improvements. The basic algorithm looks for the largest contiguous collection of white pixels, or "contour", in the preprocessed image. OpenCV is used to obtain a list of all white contours by calling the find contours function. Then, by iterating through them and computing their areas, the largest one can be identified. Next, the spatial moments are computed to find the largest contour's centroid. Finally, a static offset is added to the centroid in the x-axis to approximate the center of the lane as seen in Fig. 3 (on the left of the line if driving in the right lane).
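The basic step can be sketched as follows on the preprocessed mask; the pixel offset is a placeholder tuned to the lane width.

import cv2

def lane_center_from_contour(mask, offset_px=-150):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)     # largest white region
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    cx = int(m["m10"] / m["m00"]) + offset_px        # shift left of the right line
    cy = int(m["m01"] / m["m00"])
    return cx, cy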
While this approach is reliable in ideal conditions, it encounters challenges on the real course. Cracks and poor lane markers can disrupt the largest contour, leading to incorrect marker identification. To address this, in the improved version, dilation is applied to enlarge white object areas, making them more pronounced. Applying Gaussian Blur also helps connect broken lines. Additionally, on the enhanced version, only contours on the right side of the image are considered to avoid detecting the left line. The downside of this approach is that it assumes a continuous white line exists on the right side of the lane, which may not always be the case in real-world scenarios. Moreover, the paper later demonstrates that following a single line from a lane is less reliable compared to tracking both lines.
§.§.§ Lane Line Approximation using Least Square Regression (LSRL)
The second lane detection approach identifies both lane lines and computes approximate least squares regression lines for each. To improve line detection, particularly for curves, it takes advantage of a bird's-eye view perspective <cit.>. This transformation makes the image appear as if it were taken from above, causing the lines to appear parallel rather than converging due to depth. Refer to Fig. 4. Additionally, curves look straighter but slightly inclined, allowing the use of a first-degree polynomial to represent the lines.
After applying this perspective transform, the preprocessing requires additional steps that were not used in the first approach. Canny filtering is used on the white mask image to remove noise and only keep edges. The edges are found from gradient changes in pixels, non-maximum suppression, and thresholding <cit.>. The next step involves the Hough Lines Transform, a feature extraction technique used to detect straight lines <cit.>. All the straight edges of the Canny edges are extracted and stored as a list of start and end points. The points are then filtered by the minimum length and the slope of the corresponding lines. In most cases, it is safe to remove lines that are nearly horizontal as they are likely not part of the lane lines. If not enough points are obtained this way, another option is to draw the filtered lines in white on a black canvas, extract white pixels, and downsample them uniformly. The image is then divided into two sections: left and right. Points on the left part are classified as part of the left lane line, while points on the right side are classified as part of the right lane line. For each side, regression lines are computed, which are subsequently used to determine the lane's midpoint. The least squares regression line is given by y = mx + b where the slope m and y-intercept b are calculated as seen in (1).
m = \frac{n \sum xy - \sum x \sum y}{n \sum x^2 - (\sum x)^2} \quad \text{(Slope)}
b = \frac{\sum y - m \sum x}{n} \quad \text{(Intercept)}
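A sketch of the per-side fit, directly instantiating (1) over the candidate points obtained from the Canny and Hough filtering described above, could look as follows.

import numpy as np

def fit_lsrl(points):
    """points: N x 2 array of (x, y) pixels from one side; returns (m, b)."""
    x, y = np.asarray(points, float).T
    n = len(x)
    m = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x ** 2).sum() - x.sum() ** 2)
    b = (y.sum() - m * x.sum()) / n
    return m, b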
To enhance reliability, additional methods are used. One method reduces noise by live cropping on curves, which minimizes white points along the lane edges for better detection. Another method uses an adaptive variable to divide the image, distinguishing between left and right lines during turns. It's useful when the lines are not strictly positioned to the left and right of the horizontal midpoint. Furthermore, if a line is not detected, the algorithm assumes the line is at the image's edge.
This algorithm can detect both lane lines simultaneously, providing a more accurate lane representation than single-line algorithms. It performs well on curves, even if one line isn't detected. However, the least squares regression can be sensitive to outliers, and despite noise reduction, residual points might still cause deviations from the lane.
§.§.§ Linear Lane Search with K-Means
The third approach focuses on finding lines using a single horizontal line of the image. In particular, it focuses on the central horizontal row only, finding all of the white pixels. In the most ideal case, two groups of white pixels are detected, one for each lane line. Instead of finding the mean of the detected points, it is a better approximation to apply K-Means with K = 2 to find the centroid of each group, and then find an average. K-means clustering initializes K cluster centers (centroids) and iteratively assigns each data point to the nearest centroid <cit.>. The centroids are updated to the mean of the assigned points, and this process repeats until the centroids converge and no longer change significantly. This method is implemented using the Scikit-Learn Python library.
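A minimal sketch of this row search is given below; it omits the centroid-proximity and cluster-size checks discussed next, and the white threshold is a placeholder.

import numpy as np
from sklearn.cluster import KMeans

def lane_center_kmeans(mask):
    row = mask[mask.shape[0] // 2, :]               # central horizontal row
    xs = np.flatnonzero(row > 200).reshape(-1, 1)   # columns of white pixels
    if len(xs) < 2:
        return None
    km = KMeans(n_clusters=2, n_init=10).fit(xs)
    left, right = sorted(km.cluster_centers_.ravel())
    return (left + right) / 2.0                     # midpoint between the lines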
The approach works well when both lane lines are continuous. However, it struggles when the middle line is discontinuous, potentially detecting only one or no lines. This issue can be identified by the proximity and size of the K-Means centroids, which should be close together or have a small number of cluster points. To address this, the two most recent centroids are stored as an approximation of lane positions. Refer to Fig. 5.
The algorithm is computationally fast, enabling quick processing speeds and rapid reactions. It also provides stability, as defects in individual images do not cause immediate incorrect responses. However, the main issue is extraneous white noise in the searched row, which can affect K-Means clustering results. This can be mitigated by using noise reduction techniques like histograms and thresholds.
§.§.§ Lane Line Discrimination using DBSCAN
The fourth approach developed for lane detection uses an algorithm called DBSCAN to identify and distinguish both lane lines. Similar to the LSRL algorithm, it obtains a list of points through Canny edge detection and Hough Lines Transform.
In the next step, an unsupervised learning method is employed to filter out noise and differentiate between the lane lines. Specifically, DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is used, an algorithm also available in the Scikit-Learn Python library. DBSCAN clusters points that are densely packed together while identifying points in low-density regions as outliers. The algorithm selects a point and searches for other points within a specified radius, epsilon. If the number of points within this area exceeds a certain threshold (minPts), the point is classified as a "core point," and a new cluster is formed. Refer to Fig. 6. This process is then repeated to expand the clusters <cit.>. Density-based algorithms are effective for lane line detection because they can handle well-separated lines as seen in Fig. 7. They can also connect discontinuous segments of the center line, provided the epsilon value is appropriately set to avoid exceeding the minimum distance between lane lines.
Once all of the clusters are computed, the focus is shifted to the two clusters directly in front of the vehicle. The algorithm sorts the clusters according to their y-axis values, where the y-axis increases from top to bottom in the image. It then identifies the two largest clusters nearest to the bottom of the image and checks if they are substantial enough to be lane lines by ensuring they contain a sufficient number of points. Subsequently, the centroids of these two clusters are computed and averaged to determine the lane's center. Averaging all the unclustered points was avoided because a line with more points could disproportionately influence the centroid. Therefore, clustering is essential for accurate lane detection.
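A sketch of this clustering and centroid-selection step is given below; the epsilon, minPts, and minimum-cluster-size values are placeholders that depend on image scale.

import numpy as np
from sklearn.cluster import DBSCAN

def lane_center_dbscan(points, eps=30.0, min_samples=10, min_cluster=25):
    pts = np.asarray(points, float)                   # (x, y) Hough line points
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    clusters = [pts[labels == k] for k in set(labels) if k != -1]  # drop noise
    clusters = [c for c in clusters if len(c) >= min_cluster]
    # y grows downward, so the largest y is closest to the vehicle
    clusters.sort(key=lambda c: c[:, 1].max(), reverse=True)
    if len(clusters) < 2:
        return None
    centroids = [c.mean(axis=0) for c in clusters[:2]]
    return tuple(np.mean(centroids, axis=0))          # (cx, cy) lane center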
The effectiveness of clustering in this approach depends on choosing the right epsilon value. In a raw frontal image, increasing distance can make lane lines appear closer, potentially merging them into one cluster if epsilon is too large. Transforming the image to a bird's-eye view, as used in the LSRL algorithm, and adjusting epsilon by extending shorter Hough lines can help, while overextension risks overlap on curves. Cropping the image's top can also reduce depth issues. DBSCAN separates lane lines more effectively than simple vertical splits or slope-based methods, which struggle with curved lanes.
§.§.§ DeepLSD Lane Detection
The final approach developed is based on supervised learning. The rise of deep learning has significantly impacted the vehicle industry, enabling the automation of various tasks <cit.>. Instead of building a deep learning model from scratch, which is a difficult task without the proper resources, we utilized a pre-trained model called DeepLSD. By combining deep learning with the precision of handcrafted detectors, it excels at extracting line segments from real-world images <cit.>. Using such models can streamline the lane line detection process compared to traditional computer vision algorithms.
DeepLSD performed well under ideal conditions but struggled with identifying lines on the test course. The model detected noise, such as cracks and potholes, and had difficulty with curved lines. Applying masking and blurring techniques from previous algorithms improved white-line identification. To address curve detection, horizontal lines were drawn into the image, converting the curved lines into short straight segments for better analysis. An example is shown in Fig. 8.
Once the lane lines are detected, they are filtered by length and slope, and one of the previous approaches can be used to separate the lanes, such as DBSCAN. When testing this system on the vehicles' laptops, inference took approximately 0.15 seconds on average. The device used for testing was an MSI Gaming Laptop with an Intel 8-Core i7-11800H processor, 16GB of RAM, a 512GB SSD, and a GeForce RTX 3050 Ti 4GB graphics card. Since the camera within the vehicle publishes images quicker than the model can process, some images are ignored, and the processing happens less frequently. Additionally, the inference is run in parallel to allow the publishing of motion commands to the vehicle at a consistent 50 Hz, which is necessary for the DBW system’s heartbeat. If the vehicle does not receive this heartbeat signal, the system shuts down because it assumes unsafe driving conditions. The coordinates of the centroid between the lane lines, determined through inference, are stored globally and updated once the model finishes processing. Meanwhile, these coordinates are published to the vehicle during each callback.
§ LANE FOLLOWING ALGORITHMS
After detecting the lane lines, the next step is to center the vehicle within the lanes and translate this information into motion. This is done in the vehicle controller node. In this section, two different methods are presented for lane following. The first approach uses ROS Twist messages, which control the vehicle's motion with linear speed along the x-axis (measured in meters per second) and angular velocity, or yaw rate, along the z-axis (measured in radians per second). These values are published to the vehicle's vel_cmd topic. As input, the algorithm only needs the offset between the center of the image (denoted as midx) and the center of the lane (denoted as cx) computed by the lane detection algorithms.
While this approach works well at slower speeds, it fails when going faster due to the difficulty of establishing a proportional relationship between angular speed, linear speed, and cx. Yaw rate control, due to its role in rotational dynamics, is less intuitive and leads to difficulties in keeping the vehicle centered, resulting in an uncomfortable experience for the passenger.
To address these issues, an alternative control method is introduced for the Ackermann steering vehicle <cit.>. Instead of relying on yaw rate, this method controls the vehicle's steering and pedals directly, providing a more intuitive control experience. The Dataspeed drive-by-wire system in the ACTor vehicles uses custom ROS messages for this purpose. Actuator Messages (SteeringCmd) handle steering, while Unified Control Messages (UlcCmd) manage the brake and gas pedals.
A key improvement with this method is incorporating the y-offset (denoted as cy) of the computed lane center. Along with cx, midx, and the image height, a turning angle relative to the y-axis is calculated. This turning angle is then converted into a steering angle and transmitted via the SteeringCmd message. Before any movement is initiated, an enable message is sent to activate the vehicle's motion. These updates contribute to smoother driving and better lane centering.
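As a rough sketch, the turning angle can be computed as the angle between the vertical axis and the ray from the bottom-centre of the image to the lane centre (cx, cy); the gain that maps it to a steering-wheel command is a placeholder, as the exact conversion used on the ACTor vehicles is not detailed here.

import math

def steering_angle(cx, cy, midx, img_height, gain=1.0):
    dx = cx - midx                    # lateral offset of the lane centre
    dy = img_height - cy              # forward distance in pixels
    return gain * math.atan2(dx, dy)  # radians, published via SteeringCmd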
§ EXPERIMENT AND RESULTS
The performance of the five algorithms was evaluated on the Lot H course. The objective was to complete five consecutive laps on both the inner and outer lanes, driving on the right side. DBW Native Commands were used for lane following. In the outer lane, which is 97.54 meters long, we set a fixed speed of 2m/s, while in the inner lane of 78.67 meters, the speed was set to 1.5m/s. Across all algorithms, these target speeds were identified as being the most reliable and comfortable options during the testing phases. If an algorithm failed to successfully complete all five laps, it was repeated with a reduced speed if necessary. The experiment was conducted over several days, with weather conditions ranging from sunny to cloudy. This variability demonstrated each algorithm's adaptability to different environmental conditions.
All five algorithms were able to complete the set goal. The average speed and time showed minimal variation since the set speed was consistent for the algorithms. Therefore, the number of attempts is a more indicative measure of reliability. The only algorithm that successfully completed all 10 laps on the first attempt was the DBSCAN-based approach. In the inner lane, there is a sharp turn with a radius of just 4 meters where the lane-following algorithms fail more frequently.
As speed increased, issues with jerkiness and acceleration occurred during turns. To address this, we measured linear and angular momentum using GPS and an Inertial Measurement Unit (IMU). The IMU, equipped with accelerometers and gyroscopes, provides crucial data on the vehicle's acceleration and rotational rate, essential for navigation and stability control. We focused on the angular-z component, indicating angular momentum. During test runs, GPS coordinates (latitude and longitude) were recorded using Rosbags, with 30-second segments—approximately one lap—extracted for analysis. Linear momentum was calculated by determining the distance between consecutive GPS points using the Haversine formula <cit.>. Velocities were then computed by dividing these distances by the time differences between timestamps, as shown in Equation (2), and accelerations, as seen in Fig. 11, were obtained by differentiating velocity with respect to time, as seen in Equation (3).
v_i = \frac{d_i}{t_i - t_{i-1}} (2)
a_i = \frac{v_{i+1} - v_i}{t_{i+1} - t_i} (3)
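A sketch of this momentum analysis is given below, assuming a list of (timestamp, latitude, longitude) fixes extracted from the rosbags; it follows Equations (2) and (3) directly.

import math

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance in meters between two GPS fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def velocities_accelerations(fixes):
    """fixes: list of (timestamp_s, lat_deg, lon_deg) in time order."""
    v, t_v = [], []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        v.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))   # Eq. (2)
        t_v.append(t1)
    a = [(v1 - v0) / (t1 - t0)                                   # Eq. (3)
         for (v0, t0), (v1, t1) in zip(zip(v, t_v), zip(v[1:], t_v[1:]))]
    return v, a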
The results seen in Fig. 11 show that the K-Means and DBSCAN-based lane detection algorithms are the best at keeping a steady speed. Sharp turns, inclined slopes, potholes, and the algorithms themselves can cause variations from the target speed. The other three algorithms show more instability with larger spikes in their momentum.
In Fig. 12, the angular momentum plots show the completion of a lap with four turns. This can be seen from the four peaks. Smoother peaks indicate better comfort, as seen in the DBSCAN plot. Low peaks under zero indicate that the algorithm overcorrected and had to recover back, as seen in the DeepLSD plot, likely due to computational latency. In the LSRL plot, the IMU readings show sharp spikes at various points along each curve, followed by extended periods of remaining at zero. This may be due to the algorithm's inability to anticipate turns effectively, caused by its reliance on a bird's eye view. This view might not capture the road's curvature accurately, leading the vehicle to go straight more often and make sharper, more abrupt corrections, even on slight curves.
§ CONCLUSION AND FUTURE WORK
In conclusion, based on reliability, comfort, speed, and adaptability metrics, Linear Lane Search with K-Means and Lane Line Discrimination using DBSCAN performed the best out of the five tested algorithms. They both share the common approach of using unsupervised learning to distinguish between the two lane lines, enabling the vehicle to be centered between them. These two algorithms were subsequently integrated with adaptive speed control algorithms from the aforementioned V2X project to create a demonstration and gather additional data. They were able to achieve maximum speeds of 3.5m/s in the outer lane and 2.5m/s in the inner lane.
In the future, we aim to conduct further research into deep learning-based approaches.
|
http://arxiv.org/abs/2409.03583v1 | 20240905143743 | Text-Guided Mixup Towards Long-Tailed Image Categorization | [
"Richard Franklin",
"Jiawei Yao",
"Deyang Zhong",
"Qi Qian",
"Juhua Hu"
] | cs.CV | [
"cs.CV"
] |
Text-Guided Mixup Towards Long-Tailed Image Categorization
==========================================================
§ ABSTRACT
In many real-world applications, the frequency distribution of class labels for training data can exhibit a long-tailed distribution, which challenges traditional approaches of training deep neural networks that require large amounts of balanced data. Gathering and labeling data to balance out the class label distribution can be both costly and time-consuming. Many existing solutions that enable ensemble learning, re-balancing strategies, or fine-tuning applied to deep neural networks are limited by the inherent problem of few class samples across a subset of classes. Recently, vision-language models like CLIP have proven effective for zero-shot or few-shot learning by capturing the similarity between vision and language features for image and text pairs. Considering that large pre-trained vision-language models may contain valuable side textual information for minor classes, we propose to leverage text supervision to tackle the challenge of long-tailed learning. Concretely, we propose a novel text-guided mixup technique that takes advantage of the semantic relations between classes recognized by the pre-trained text encoder to help alleviate the long-tailed problem. Our empirical study on benchmark long-tailed tasks demonstrates the effectiveness of our proposal with a theoretical guarantee. Our code is available at https://github.com/rsamf/text-guided-mixuphttps://github.com/rsamf/text-guided-mixup.
§ INTRODUCTION
In recent years, deep learning has made state-of-the-art advancements in computer vision tasks such as image categorization, object detection, and semantic segmentation <cit.>. Deep learning models are highly dependent on large-scale and balanced training data, but real-world data are typically class-imbalanced <cit.>. When training data is abundant for a subset of classes (i.e., head classes) but scarce for the other (i.e., tail classes), the distribution of the data is said to be long-tailed <cit.>. Taking image categorization as an example, deep neural networks (DNNs) aim to minimize the empirical risk on the training data by incrementally adjusting the learnable parameters. However, given a long-tailed training data, this happens more on the head-class instances that appear more frequently, augmenting the model's performance bias towards head classes but reducing the model's generalization performance on tail classes <cit.>.
Long-tailed learning proves to be a significantly challenging task as addressed by many previous studies <cit.>. Intuitively, under-sampling the head classes and over-sampling the tail classes is a reasonable technique. Although class-level re-sampling or re-weighting can help balance out the data distribution and mitigate the model's performance bias on head classes, these techniques can cause the model's overfitting on tail classes and/or degenerate the performance on head classes <cit.>. There is evidently more success in module improvement techniques <cit.>, especially those that use ensemble learning <cit.>. There are a number of additional techniques <cit.> that aim to mitigate the long-tailed problem such as class-level re-margining <cit.>, data augmentation <cit.>, and transfer learning <cit.>. However, these methods are still limited by the scarce information found among tail classes.
Figure: The decision boundary of `tiger' stretches towards that of `leopard' and `cat' and away from `bicycle' as text-guided mixup allows semantically similar classes to be mixed more frequently.
Recently, vision-language models such as CLIP <cit.> and ALIGN <cit.> have demonstrated good performance in zero-shot classification and few-shot learning <cit.>. These models are trained on large-scale data containing image-text pairs that elicit connections between text and image embeddings. By capturing the contrastive locality of image and text features, vision-language models can generalize well to unseen categories, which makes them a potential information source for tail classes in long-tailed learning. However, existing multi-modal works <cit.> are limited by the general domain knowledge of CLIP's pre-trained text encoder and must continue linguistic training on the downstream task.
In this work, we propose to leverage the frozen CLIP text encoder to obtain prompt embedding as additional supervision for long-tailed learning in vision tasks.
Considering the observation that semantic relationships between class names (e.g., `tiger' and `cat') correlate with the locality of their visual features in vision-language models, we can utilize semantically similar classes to assist the generalization among tail classes (e.g., the head class `cat' can assist the tail class `tiger', as shown in Fig. <ref>). However, the intra-class variance of the tail class can still be ignored. Therefore, we further propose a novel text-guided mixup strategy, named local feature mixup (LFM), to shift the label towards tail classes, so as to alleviate the long-tailed problem. The main contributions of this work are summarized as follows.
* We leverage the frozen CLIP text encoder to enhance the performance of long-tailed visual recognition tasks.
* We construct a novel mixup technique that takes advantage of the text encoder to boost the performance of tail classes with a theoretical guarantee.
* Our extensive experiments on several benchmark long-tailed data demonstrate the effectiveness of our proposal.
§ RELATED WORK
In long-tailed visual recognition, numerous methods have been proposed to boost the performance of tail classes <cit.>. Module improvement methods including ensembling have shown recent success <cit.>. In mixture of experts, TADE <cit.> and SHIKE <cit.> output an aggregation of multiple expert modules, where each expert in TADE strives to perform well in a different training distribution, and each expert in SHIKE focuses on modeling a different depth of image features. Although ensembling can boost performance, these methods are still limited by the scarce information found among instances of the tail classes.
Moreover, class re-balancing such as class-level re-sampling <cit.>, re-weighting <cit.> (e.g. Balanced Cross Entropy <cit.>), and re-margining (e.g., LDAM <cit.>) can adjust the model's attention to classes with a lower sample rate. However, class-balanced sampling or re-weighting can lead to overfitting of the tail classes, under-represent the intra-class variance of the head classes <cit.>, and thus decrease the model's overall performance <cit.>. Alternatively, it can be effective to train a model with meta sampling <cit.>, in which the optimal sample rate per class is estimated by applying a learnable parameter for each class label. Using this method can slightly avoid the overfitting of tail classes, but finding the optimal parameter or trade-off between class labels for multi-class classification is difficult.
Another instance of success is found through pre-training vision transformers <cit.> in an autoencoder setup <cit.>. Once the encoder is sufficiently trained, it feeds into a classification layer that is trained using a balanced binary cross-entropy loss <cit.>. However, these methods still lack sufficient performance on the set of tail classes, as it is an inherent challenge to train deep neural networks for classes with small sample rates. Recently, pre-trained vision-language models like Contrastive Language-Vision Pre-training (CLIP) <cit.> have demonstrated strong zero-shot performance. CLIP embodies multi-modal learning through unsupervised training of image-caption pairs available on the wild web to capture the contrastive locality of image and text features. This makes CLIP more adaptable to new tasks, so that it can be leveraged to make zero-shot predictions, that is, generalize to unseen categories. Thereafter, a pre-trained vision-language model can be further fine-tuned on a downstream task in few-shot learning <cit.> or long-tailed learning (e.g., RAC <cit.>, VL-LTR <cit.>, LPT <cit.>, TeS <cit.>, and VPT <cit.>). However, most of them have been focusing more on the text encoder. For example, VL-LTR <cit.> requires manually retrieving text descriptions of each class from the Internet to augment the text data in preparation for linguistic training, which is resource-expensive, so we instead freeze the text encoder.
§ THE PROPOSED METHOD
Given long-tailed training data D = {(x_i, y_i)}, where x_i is an image associated with its target class y_i∈{1, …, C}, we construct a set of text snippets T, where each T_k describes the class label k∈{1, …, C}. For example, the text snippet describing the class name "dog" is a tokenized sequence generated from the string "a photo of a dog".
We feed images and text snippets T to the image and text encoders, respectively, pre-trained by CLIP <cit.> as shown in Fig. <ref>, which we denote ℱ_I and ℱ_T, respectively. Both encoders output feature vectors of size d. We denote the output of the text encoder as f_T = ℱ_T(T) and let f_T_k denote the feature vector for class k, which does not change during the long-tailed learning. To better separate the tail-class feature embeddings from those of the head classes, following <cit.>, we append a learnable fully connected layer W_I ∈ℝ^d× d to ℱ_I. Thereafter, we can extract the feature vector for each image x_i as f_I = W_Iℱ_I(x_i). Additionally, we normalize both f_T_k and f_I to unit norm.
After obtaining f_I and f_T, image classification is performed as shown in Fig. <ref> by computing the cosine similarity between f_I and f_T_k for all k, and finally, the predicted class label, ŷ, for each image is computed as ŷ = arg max_{k ∈{1, …, C}} f_I · f_T_k.
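To make the pipeline concrete, here is a minimal sketch of this classification rule built on OpenAI's open-source clip package (the toy class names and helper names are our assumptions, not the authors' released code):

import torch
import clip  # OpenAI's CLIP package (assumed available)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["dog", "cat", "frog"]  # toy label set for illustration
T = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

with torch.no_grad():  # the text encoder stays frozen throughout
    f_T = model.encode_text(T).float()
    f_T = f_T / f_T.norm(dim=-1, keepdim=True)

d = f_T.shape[-1]  # 512 for ViT-B/32
W_I = torch.nn.Linear(d, d, bias=False).to(device)  # learnable layer appended to F_I
torch.nn.init.eye_(W_I.weight)  # initialized to the identity, as described next

def predict(images: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity classification: argmax_k f_I . f_T_k."""
    f_I = W_I(model.encode_image(images).float())
    f_I = f_I / f_I.norm(dim=-1, keepdim=True)
    return (f_I @ f_T.t()).argmax(dim=-1)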
Thereafter, we adopt a decoupled training approach as suggested by <cit.> to learn better embeddings for tail classes compared to joint training. In stage 1, we train ℱ_I and freeze W_I, and in stage 2, we freeze ℱ_I and train W_I. At the beginning, W_I is initialized as the identity matrix, so that initially f_I = ℱ_I(x). However, by directly minimizing the empirical risk on training data with a long-tailed distribution, both ℱ_I and W_I can still be biased toward the head classes. Therefore, we propose a novel text-guided mixup technique.
Local Feature Mixup
A statistical measure of class imbalance in a dataset can be defined as the imbalance factor γ = n_1 / n_C, where n_k is the number of examples in class k, the classes are ordered so that n_1≥ n_2≥ ...≥ n_C, and typically n_1≫ n_C. Our main goal is to increase the accuracy on few-shot classes (i.e., those with low n_k) while not attenuating the model's accuracy on many-shot classes (i.e., those with high n_k). We strive to boost the few-shot accuracy by making two assumptions about the data. First, we assume that classes with low n_k are underrepresented because a few examples may not fully express the complete diversity (or variance) of their associated class. For example, one cat can look different from another in features such as size, eye color, and the color/pattern of its fur. When limited to observing a few examples of cats, it is difficult for DNNs to grasp the full range of features that a cat can express. Therefore, we assume that every tail class has a larger intra-class variance than what can be learned from long-tailed data.
Secondly, because both CLIP's image and text encoders map their respective inputs to d-dimensional feature vectors, every class can be represented by a certain region of the feature space ℝ^d. The pre-trained text encoder already has an understanding of the local relationships between words. For example, the words “frog” and “toad” are close in the language model feature space, since they have similar meanings. Part of our learning objective is to closely align the outputs of our image encoder with the outputs of the pre-trained language model. That is, if we feed an image of a frog and an image of a toad to our image encoder, their extracted feature vectors should be close in proximity, as in the text feature space. Therefore, we also assume that if two classes have similar meanings (i.e., they are nearby in the text encoder's feature space), these two classes also share a subset of visual features and thus should also be nearby within the image encoder's feature space. In the following construction of local feature mixup, we incorporate these two critical ideas separately, through local sampling and label shift.
Local Sampling
Existing mixup strategies often randomly sample y_i and y_j uniformly across the training data <cit.>. However, we aim to choose pairs that are semantically related, as supervised by the pre-trained text encoder. First, we sample an instance from class y_i uniformly across the training data, so that p(y = y_i) = n_i / ∑_{k=1}^{C} n_k. Then, we sample another instance from class y_j with probability p_ls(y = y_j | y_i) given by Eqn. (<ref>).
p_ls(y = y_j | y_i) =
  exp( f_T_i · f_T_j / τ ) / ∑_{k=1}^{C} exp( f_T_i · f_T_k / τ )   if i ≠ j,
  0   otherwise,
where the hyperparameter τ > 0 controls the temperature scaling of the softmax. A lower τ increases the likelihood that similar class pairs are chosen for mixup, but too low a temperature can lead to oversampling of the nearest classes. We set τ = 0.05 for most experiments. Using this strategy, we aim to extend the variance of minority-class samples towards neighboring classes, as our assumption is that semantically similar classes share a subset of visual features, as depicted in Fig. <ref>.
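As a concrete sketch, this sampling distribution can be built once from the frozen, normalized text features f_T of shape (C, d); here the rows are renormalized over j ≠ i, a minor simplification of the equation above, and the helper names are ours:

import torch

def local_sampling_probs(f_T: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """Row i holds p_ls(y_j | y_i): a softmax over scaled text-feature
    similarities, with the diagonal masked so that j != i."""
    sim = (f_T @ f_T.t()) / tau          # (C, C) pairwise similarities
    sim.fill_diagonal_(float("-inf"))    # forbid j == i
    return torch.softmax(sim, dim=-1)

def sample_partner(i: int, probs: torch.Tensor) -> int:
    """Draw the class of the second mixup image given anchor class i."""
    return int(torch.multinomial(probs[i], 1))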
Label Shift
Then, we perform mixup by mixing images x_i and x_j sampled through our above local sampling method. With mixing factors λ_x, λ_y ∈ [0,1], we propose
x̃^LFM = λ_x x_i + (1-λ_x)x_j
ỹ^LFM = λ_y y_i + (1-λ_y)y_j
where y_i, y_j are one-hot vectors and the factor λ_x is drawn at random from a beta distribution. More importantly, we generate λ_y by
λ_y = clamp( λ_x - αn_i - n_j/n_i + n_j, 0, 1 )
where hyperparameter α≥ 0 adjusts the intensity of label shift and the resulting value is clamped between 0 and 1. In order to expand the margin for tail classes, we shift the decision boundary away from tail classes and towards head classes according to the difference of n_i and n_j. For example, if n_i > n_j (i.e., class y_i has more samples than class y_j), we shift the target to be more in favor of class y_j, thus increasing the model's margin on the class with fewer samples. Algorithms are summarized in the supplementary and we provide a theoretical guarantee for our proposal as follows, while the overall framework is illustrated in Fig. <ref>.
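The sampling and mixing steps above translate into a short routine; a minimal sketch (ours; the Beta parameters are an assumption, since only "a beta distribution" is specified above):

import torch

def local_feature_mixup(x_i, x_j, y_i, y_j, n_i, n_j, alpha=1.0):
    """Mix one sampled pair: lambda_x ~ Beta for the inputs, and a label
    weight lambda_y shifted toward the class with fewer samples."""
    lam_x = torch.distributions.Beta(1.0, 1.0).sample()
    lam_y = torch.clamp(lam_x - alpha * (n_i - n_j) / (n_i + n_j), 0.0, 1.0)
    x_mix = lam_x * x_i + (1.0 - lam_x) * x_j
    y_mix = lam_y * y_i + (1.0 - lam_y) * y_j  # y_i, y_j are one-hot vectors
    return x_mix, y_mix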
Letting p=n_i/(n_i+n_j), λ_y can be obtained by balancing the distribution between x_i and x_j:
λ_y = arg min_{λ∈[0,1]} (λ - λ_x)^2/2 + α R(λ)
where R(λ) = (λ-1/2)^2 - (λ - p)^2.
Remark The former term constrains the label weight to stay close to the input mixing weight, while the latter term is a balance regularization incorporating the prior distribution p between the two examples. Minimizing the regularization pushes λ away from the imbalanced initial distribution toward a balanced one. When p=1/2, the solution degenerates to the standard mixup weight. Indeed, setting the derivative of the objective to zero gives (λ - λ_x) + α(2p - 1) = 0, i.e., λ = λ_x - α(2p - 1) = λ_x - α(n_i - n_j)/(n_i + n_j); projecting onto [0,1] recovers the clamp in Eqn. (<ref>).
§ EXPERIMENTS
To demonstrate the proposed LFM method, following the common practice in long-tailed learning, we use publicly available long-tailed datasets, that is, CIFAR10-LT and CIFAR100-LT <cit.>, ImageNet-LT <cit.>, and Places-LT <cit.>.
Experiment Setup
For CIFAR10/100-LT, we fine-tune CLIP with a single GPU, and for ImageNet-LT and Places-LT, we fine-tune CLIP with three GPUs. Each GPU is an Nvidia GeForce RTX 2080 Ti with 11GB of memory. During training, each GPU receives a batch size of 32, so for ImageNet-LT and Places-LT the effective batch size is 96. Training is performed with a fixed seed to allow for reproducibility. The hyperparameters chosen for LFM are fixed (i.e., α = 1, τ = 0.05) for all experiments on CIFAR10-LT, CIFAR100-LT, and ImageNet-LT, while they are adjusted on Places-LT as α = 1.25, τ = 1.00 in stage 1 and α = 1.50, τ = 1.00 in stage 2, due to the imbalance severity as explained in the next section. We picked low learning rates to avoid the risk of catastrophic forgetting and of losing CLIP's zero-shot performance advantage.
The detailed hyperparameters used can be found in the supplementary. CLIP's default text prompt template is “a photo of a {CLASS}”. For all experiments, we utilize the default text prompt template provided.
A model's performance is not necessarily stable across all classes, each with different sample counts, so it is important that we quantify the performance of our model in subdivisions relative to every n_k. Across all datasets, we subdivide the resulting model's accuracy into four categories, namely many-shot, medium-shot, few-shot, and overall following <cit.>. Many-shot classes have n_k > 100, medium-shot classes have 20 ≤ n_k ≤ 100, and few-shot classes have n_k < 20. For each performance category, we report the top-1 accuracy of our model against the balanced validation set for each subdivision of our chosen datasets.
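These subdivisions translate into a small evaluation helper; a sketch (ours, with hypothetical argument names):

from collections import defaultdict

def shot_category(n_k: int) -> str:
    """Map a class's training sample count to its evaluation subdivision."""
    if n_k > 100:
        return "many"
    return "medium" if n_k >= 20 else "few"

def per_shot_accuracy(y_true, y_pred, counts):
    """Top-1 accuracy per subdivision on the balanced validation set;
    counts[k] is the training sample count of class k."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        cat = shot_category(counts[t])
        totals[cat] += 1
        hits[cat] += int(t == p)
    return {c: hits[c] / totals[c] for c in totals}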
We compare our proposed method with vision-focused baseline methods and strategies that perform well in tackling the long-tailed problem. We also fine-tune the competitive image encoder (i.e., ViT-B/32) with different existing losses as baselines, i.e., Cross Entropy (CE), Balanced Cross Entropy (BalCE) <cit.>, Focal <cit.>, Label Distribution Aware Margin (LDAM) <cit.>, and Margin Metric Softmax (MMS) <cit.>. All losses except CE have been shown to be helpful for the class imbalance problem. In summary, we compare with the following baselines based on the pre-trained CLIP <cit.>: 1) Zero-shot: The pre-trained image and text encoders by CLIP <cit.> are directly used for prediction on the balanced test data, in which ViT-B/32 is adopted; 2) CE: Fine-tuned ViT-B/32 using the cross entropy loss; 3) BalCE: Fine-tuned ViT-B/32 using the balanced loss <cit.>; 4) Focal: Fine-tuned ViT-B/32 using the Focal loss <cit.>; 5) LDAM: Fine-tuned ViT-B/32 using the loss in LDAM <cit.>; and 6) MMS: Fine-tuned ViT-B/32 with MMS <cit.>.
CIFAR10/100-LT
As in the literature, we can create CIFAR10-LT and CIFAR100-LT by taking a subset of the original balanced CIFAR10 and CIFAR100 datasets <cit.>, and the imbalance factor γ is variable. We experiment with multiple imbalance factors in {10, 50, 100}.
First, on CIFAR100 with the imbalance factor of 100, we compare all methods based on CLIP. For our proposal, we set ViT-B/32 as the backbone and apply LFM with two different losses, i.e., cross entropy and MMS <cit.>, the best-performing loss in the literature. The comparison results against all baselines are summarized in Table <ref>. Based on the zero-shot performance, we can observe that the pre-trained CLIP can help balance the performance across categories, which demonstrates the effectiveness of a pre-trained vision-language model in alleviating the class imbalance issue. Then, by fine-tuning the pre-trained image encoder, the overall accuracy can be improved. However, due to the severe imbalance, the performance of the tail classes is still lacking even when balanced losses are utilized. Our proposal helps improve the accuracy in all categories, and LFM combined with a loss well-suited for CLIP further improves the performance.
Then, we compare the fine-tuned CLIP models (ViT-B/32 is adopted), including our proposal, with multiple existing state-of-the-art long-tailed learning methods in Table <ref> under different imbalance factors. We can observe that by fine-tuning the pre-trained CLIP image encoder, the performance is significantly improved in all scenarios. Moreover, the state-of-the-art imbalance loss MMS <cit.> is very helpful, while our proposal can further significantly improve the performance in most cases. This further demonstrates the value of alleviating the class imbalance problem using a pre-trained vision-language model and the effectiveness of LFM. It should be noted that methods using the ResNet50 backbone perform worse in general, and thus ResNet50 is not adopted in the following experiments.
ImageNet-LT and Places-LT
We construct ImageNet-LT <cit.> by forming a subset of the ImageNet 2014 dataset <cit.>. The resulting imbalance ratio of ImageNet-LT is 256. As shown in Table <ref>, compared to existing methods, our method yields better performance, especially on few-shot accuracy (i.e., for tail classes), by both rebalancing and leveraging semantic similarities of classes. Observing that the LFM + MMS performance for minor classes falls behind LFM + CE, we hypothesize that MMS's sole focus on exercising semantic similarities, ignoring class sample frequencies, may overfit the many-shot classes. A technique that focuses on only one of the two may be problematic for tasks where semantic similarities happen to exist more frequently among the many-shot classes.
In addition, we conduct experiments on Places-LT <cit.> using LFM with CE and MMS. Places-LT is a long-tailed subset of the original dataset Places2 <cit.>. It is a dataset for scene classification containing 365 classes, and it suffers from extreme imbalance (γ = 996). To account for its imbalance severity, we adjust the local feature mixup hyperparameters to be highly in favor of the minority classes. We increase the value of τ, so that the probability distribution constructed by local sampling is more balanced. Additionally, we increase the value of α, so that the label is shifted more heavily toward the tail classes, as shown in the supplementary.
Table <ref> summarizes the results. The benefit from the pre-trained model by CLIP can be observed from the zero-shot performance on tail classes, which further demonstrates the advantage of the text supervision from CLIP. However, fine-tuning using our proposal is necessary to improve the performance. It should also be noted that due to the severe imbalance factor of this data, our proposal with CE is expected to be less effective compared to that with MMS <cit.>. LFM with MMS shows significantly better performance compared to state-of-the-arts, especially on medium-shot and few-shot classes, and demonstrates strong performance on many-shot classes as well. This further demonstrates the effectiveness of our proposed method on the long-tailed problem.
Effect of Mixup Techniques
To demonstrate the proposed LFM, we also compare it with the standard Mixup <cit.> and Remix <cit.> on CIFAR100-LT. Specifically, we fine-tune CLIP with the same hyperparameters and decoupled stages, using different mixup techniques. Each model is trained using cross entropy loss with the ViT-B/16 backbone. Remix is a mixup method that addresses the class-imbalance issue, and it makes a trade-off between many-shot and few-shot performances. For example, compared to the standard mixup, Remix can help improve the performance on few-shot classes but sacrifice the performance on many-shot classes. However, Remix ignores the semantic relationship between each class pair that CLIP can be used for. Our proposal shows significantly better performance in Table <ref>.
Visualization
To demonstrate our assumption on the local semantic relationship, we illustrate the geometric effect of fine-tuning CLIP with our proposal in Fig. <ref>. We demonstrate the effect by revealing the contrastive locality of image feature vector outputs, where the input comprises a set of randomly sampled images from 10 chosen classes {apple, pear, …, motorcycle} in CIFAR100. Our illustration contains the following 5 pairs of semantically related categories from CIFAR100: (apple, pear), (lobster, crab), (snake, worm), (bed, couch), and (bicycle, motorcycle). The legend contains the name of the class and the sample count in parentheses. We choose these pairs to show that their semantic relations are aligned with their visual relations in terms of contrastive locality, as perceived by the image encoding layers.
With the ViT-B/32 vision encoder and fully connected layer, we obtain a 512-dimensional feature vector for each image. To reduce the high-dimensional feature vectors to three dimensions for human readability, we convert them using t-SNE trained for 1000 iterations with the seed set to 1. At zero-shot, we can observe that semantically related classes are located nearby (e.g., `apple' vs. `pear' and `bed' vs. `couch'), although some are poorly clustered (e.g., `lobster' vs. `crab'). This confirms our assumption that a pre-trained vision-language model can align semantically related classes together. However, the separation between non-related classes is not clear in the pre-trained model. By fine-tuning the image encoder with cross entropy loss, the separation between non-related classes becomes clear, thanks to the help of head-class training data. However, we can observe that tail-class instances largely overlap with semantically related head-class instances (e.g., `lobster' vs. `crab'). Fortunately, by incorporating our proposal of LFM, tail-class instances are pushed slightly away from their semantically related head-class instances without sacrificing the clear boundaries between non-related classes, which further demonstrates our proposal.
§ CONCLUSION
Considering CLIP's ability to generalize to unseen categories, we leverage a fixed text encoder to enhance the performance of image categorization over long-tailed training distributions.
We enable the accuracy boost with the construction of a novel mixup technique that takes advantage of the semantic relationships between classes by probabilistic sampling based on their locality in the text encoder's feature space and by slightly shifting the label towards tail classes. Our extensive experiments on several benchmark long-tailed training datasets demonstrate the effectiveness of our proposal in alleviating the class imbalance issue with an efficient strategy that incorporates a fixed text encoder. Local feature mixup can be easily applied not only to vision-language backbones but also to non-multi-modal methods (i.e., vision-only architectures), which will be studied in our future work. However, both LFM and vision-language image classification are limited by the domain knowledge of the text encoder. Without further training, pre-trained CLIP performs poorly on domain-specific tasks, as suggested by <cit.>, due to its generic knowledge.
Our method relies on the ability of the text encoder to capture pairwise semantic similarities among the class names present in the dataset, which works well for common domains such as CIFAR and ImageNet but not for the biological names present in iNaturalist <cit.>; addressing this is left for future work.
§ ACKNOWLEDGEMENT
Yao and Hu's research is supported in part by NSF (IIS-2104270) and Advata Gift Funding. Zhong's research is supported in part by the Carwein-Andrews Graduate Fellowship and Advata Gift Funding. All opinions, findings, conclusions and recommendations in this paper are those of the author and do not necessarily reflect the views of the funding agencies.
§ APPENDIX
§.§ Effect of Local Sampling
As discussed in the main paper, at each training step, local sampling feeds the model an image pair that holds semantically-related images, where the semantic relation is determined by the text encoder. In constructing the pair, the label of the first image is determined by
p(y = y_i) = n_i / ∑_{k=1}^{C} n_k
which is to uniformly sample an image without replacement from the dataset. However, the label of the second image is determined by
p_ls(y = y_j | y_i) =
  exp( f_T_i · f_T_j / τ ) / ∑_{k=1}^{C} exp( f_T_i · f_T_k / τ )   if i ≠ j,
  0   otherwise,
which ignores the sample count of the class labels. Because the second draw ignores sample counts, the number of times the model sees minority classes increases, effectively balancing the data distribution through resampling. To quantify this resampling, we compare the class distribution before and after local sampling as follows. Let Y be the random variable describing the class of an instance produced by local sampling, with y ∈{y_i, y_j}, and let y_i denote the event that the first draw yields class y and y_j the event that the second draw yields class y. The probability that the model observes an image with class label y can then be calculated as
p(Y=y) = p(y_i) + (1 - p(y_i)) p(y_j)
       = p(y_i) + (1 - p(y_i)) ∑_{k=1, k≠i}^{C} p(y_j | y_k) p(y_k).
Using Eqns. <ref> and <ref>, p(Y) can be evaluated for all y, and we illustrate the resulting p(y) for every dataset in Figs. <ref>-<ref>. Additionally, we indicate the new imbalance factor as γ'. We can observe that the imbalance severity and the magnitude of the long-tailed distribution are substantially reduced, which demonstrates the effectiveness of our local sampling method.
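Combining the two draws above, the per-class exposure and the resampled imbalance factor can be computed directly; a sketch (array names are ours):

import numpy as np

def effective_distribution(P_ls, n):
    """Per-class probability of appearing in a local-sampling pair.
    P_ls[k, j] = p_ls(y_j | y_k) with a zero diagonal; n[k] is the count of class k."""
    p_i = n / n.sum()            # first draw: frequency-weighted
    p_j = P_ls.T @ p_i           # second draw: marginal over anchor classes
    return p_i + (1.0 - p_i) * p_j

def new_imbalance_factor(p):
    """gamma' computed on the exposure probabilities."""
    return float(p.max() / p.min())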
§.§ Comparison between Textual Similarity and Visual Categorization
To further confirm our assumption that semantically related classes are visually related, we make a comparison between class label textual similarities and CLIP's zero-shot performance. Fig. <ref> shows a comparison between our semantic probability distribution p_ls and a confusion matrix of CLIP's zero-shot classification performance using CIFAR100's validation set. It can be observed that p_ls is correlated with the performance of zero-shot classification. By observing the blue cells in the confusion matrix, we see that the model more frequently struggles to find a decision boundary between related classes. When we sample with p_ls, we expect that we are sharing information with related classes more frequently and thus establish a decision boundary more optimally positioned for inference on the balanced validation data.
§.§ Algorithms and Training Configurations
In this section, we summarize the algorithms for LocalSample, Mix, and the entire training process, where the effect of the proposed mixup technique on the decision boundary between nearby head and tail classes is illustrated in Fig. <ref>. Upon acceptance of this paper, we will also publicly release the code.
During the training, we use the hyperparameters and other training properties listed in Table <ref>. Most experiments have the same setup, but some minor adjustments are made largely due to differences in class label distributions. Under the circumstances of heavy class imbalance, we can simply raise the values of α and τ, which we do for Places-LT <cit.>. Detailed information for each dataset is provided in Table <ref>. The original dataset imbalance is summarized by the imbalance factor γ.
§.§ Additional Ablation Studies
Besides the ablation study conducted in the main paper, we also conducted the following ablation studies.
§.§.§ Effect of α
We study the effect of the intensity with which we shift the training label assigned to each mixup, which we control with α. The α value directly affects the positioning of the model's decision boundaries between class pairs, and we can expect lower values to extend the boundary of many-shot classes and higher values to extend the boundary of few-shot classes. In this study, we vary α over the range [0, 2] on CIFAR100-LT with an imbalance factor of 100 using CLIP's ResNet50 backbone with the same configuration settings. From Fig. <ref>, we can easily observe that increasing α slowly degrades the performance of many-shot classes while improving the performance of the others, especially that of the few-shot classes, as expected. The result also reveals that setting α to 1 works best for all accuracies.
§.§.§ Effect of τ
To study the effect of different temperature settings for p_ls, we run multiple experiments with τ={.002, .01, .05, .25, 1.25, 31.25}. At lower values, we increase the probability that nearby class samples (i,j) are paired together. At higher values, the probability of two nearby class samples becoming paired is mitigated, and the class sampling becomes more balanced. We run our experiments on CIFAR100-LT <cit.> with an imbalance factor of 100 using CLIP's ViT-B/16 backbone with the same configuration settings.
Fig. <ref> reveals that when we increase τ from a small value, all classes can benefit from LFM by mixing semantically related samples, while performance plateaus beyond τ=0.05. Therefore, τ=0.05 is adopted in the rest of our experiments.
|
http://arxiv.org/abs/2409.03263v1 | 20240905061225 | A typology of activities over a century of urban growth | [
"Julie Gravier",
"Marc Barthelemy"
] | physics.soc-ph | [
"physics.soc-ph"
] |
Centre de Recherches Historiques, EHESS (CNRS/EHESS) 54 Avenue de Raspail, 75006 Paris, France
Centre d'Analyse et de Mathématique Sociale (CNRS/EHESS) 54 Avenue de Raspail, 75006 Paris, France
Université Paris-Saclay, CNRS, CEA, Institut de Physique Théorique, 91191, Gif-sur-Yvette, France
^*[email protected]
§ ABSTRACT
Contemporary literature on the dynamics of economic activities in growing cities has mainly focused on time frames of a few years or decades. Using a new geo-historical database constructed from historical directories with about 1 million entries, we present a comprehensive analysis of the dynamics of activities in a major city, Paris, over almost a century (1829-1907). Our analysis suggests that the activities that accompany city growth can be classified in different categories according to their dynamics and their scaling with population: (i) linear for everyday needs of residents (food stores, clothing retailers, health care practitioners), (ii) sublinear for public services (legal, administrative, educational), and (iii) superlinear for the city's specific features (passing fads, specialization, timely needs). The dynamics of these activities is, in addition, very sensitive to historical perturbations such as large-scale public works or political conflicts. These results shed light on the evolution of activities, a crucial component of growing cities.
A typology of activities over a century of urban growth
Julie Gravier and Marc Barthelemy^*
September 9, 2024
=======================================================
§ INTRODUCTION
The science of cities has benefited in recent years from the availability of massive amounts of data about various aspects of these systems <cit.>, such as mobility, segregation, CO_2 emissions, etc. The growth dynamics is one of the most important aspects, as it results from the evolution of infrastructures, the population distribution and economic activity. A large number of studies have discussed the growth of cities and their main drivers <cit.>, highlighting the importance of innovation <cit.>, amenities, agglomeration economies and human capital. Political institutions, the degree of democratization and technological advances also naturally play a key role <cit.>. Shocks are also very important and determine the fate of cities. These shocks can be structural changes or related to new industries; they affect the urban landscape and can eventually have a large impact on interurban migration <cit.>.
Most of the available datasets for this type of study have, however, a very limited time frame (typically a few years or decades) and mainly concern the contemporary period <cit.>. Other studies that consider historical periods, such as US counties between 1840 and 1990 <cit.>, focus on the simpler quantity that is population, and other aspects such as the location and number of different activities are rarely considered on a historical scale. The construction of a science of cities however needs a long time span, and a quantitative approach to the historical evolution of urban systems is key for our understanding of these systems. Cross-sectional studies are available, such as in <cit.>, but an analysis of individual cities over a long time is very demanding in terms of dataset construction. This might however change as we witness a spectacular increase in the digitization of historical sources, which opens the way to a broader view of the evolution of urban systems.
Existing studies consist essentially of the digitization of old maps, allowing scientists to characterize the evolution of the road system <cit.>, a critical component in cities. Very few other aspects were considered, with the exception of urban forms as in <cit.>. However, it is usually advocated that economic activities lie at the heart of a city, allowing for its specialization and diversification through time <cit.>. An important question is then to understand and to identify the economic activities that accompany the growth of a large city. Such data is however extremely difficult to obtain for historical periods, and here we take advantage of a new dataset recently released <cit.>, constructed from the city directories that consist of large lists of individuals, merchants or prominent inhabitants, businesses, organizations and institutions. We report the analysis of such a dataset obtained for the city of Paris in the period 1829-1907, which gives access to a total of about 1 million entries of economic activities with their address, regularly updated in this period (see details about the dataset construction in the Methods section). This 79-year period allows us to take a historical perspective on city growth and activity evolution, but also to question the impact of large perturbations that took place in this period, such as Haussmann's works that happened between 1853-1870 and modified in depth the structure of the city <cit.>, or political conflicts such as the Franco-Prussian war and the Commune `revolution' (1870-1871).
During the evolution of a city we observe the emergence of different activities and their variation as the city grows. The idea here is then to characterize the different activities and to exhibit the existence of categories of activities that have different functions. A natural tool for the analysis of the evolution of activities in a city is scaling <cit.>. It has indeed been observed that a total quantity Y measured for a city of population P varies as Y ∼ P^β, with β>0. In other words, the quantity per capita Y/P varies as P^(β-1), which allows us to identify three different categories. First, for β=1, the quantity per capita is constant. This typically corresponds to quantities that are relative to human needs (such as water consumption). In the other cases, the quantity per capita depends on the city size. For β>1, there is a `positive' impact of the city: the larger it is, the larger the quantity Y/P. In contrast, the case β<1 usually corresponds to economies of scale. These scalings are usually measured `cross-sectionally' at a given time for different cities with different values of their population. The number of studies considering history and evolution through time is increasing (see Table <ref>), but the majority concerns cross-sectional scaling analyses for a given period, and we identified only two historical studies of cross-sectional scaling over time: one focusing on election results in the US between 1948 and 2016 <cit.>, the other on the morphological structure of US cities between 1900 and 2015 <cit.>. For contemporary cities, studies are mainly based on cross-sectional scaling over time (computed over various cities at a given time and also for different dates), and it is not yet clear how this relates to temporal scaling (computed over time for a given city) <cit.>. Indeed, analyzing cross-sectional data differs from studying the temporal scaling of individual cities. The former primarily reflects spatial dynamics, enabling insights into the characteristics of different cities at specific time points, while the latter, focusing on the temporal dynamics, offers a view into how individual cities evolve over time. Here, we won't address this general problem, but will focus on the measured value of β as a way to categorize a wide range of urban activities and to characterize the evolution of a city. The objective is then to analyse how the number of entries for a given activity grows when the population varies.
§ RESULTS
§.§ Overview of Paris Activities
The dataset studied here encompasses about 1 million entries documenting the primary activities within Paris. Each entry corresponds to an individual having an activity (and less frequently to an organization), together with an address. These entries display a consistent growth pattern, expanding from 23,000 in 1829 to 88,000 in 1907. In order to compare historical activities of Paris during the 19th century with contemporary cities, we have used 21 contemporary groups, drawing inspiration from the North American Industry Classification System (NAICS) developed by the U.S. Census Bureau (more details in the Methods section).
We show the evolution in time of a selection of activity categories in Fig.<ref>.
We observe that there is an overall increase in activity categories, with the exception of the decline in the public administration sector at the end of the period, resulting from a bias in the dataset (see details in the Materials and Methods section). The two most significant sectors correspond to food stores and clothing-related activities. The number of food stores increased from 4,000 in 1829 to 20,000 in 1907, and for clothing from approximately 3,000 entries in 1829 to 10,000 in 1907. Manufacturing appears as the third most important sector with over 10,000 entries by 1907. The Gini coefficient (which characterizes the heterogeneity) computed on the number of entries N_a for the 21 categories is relatively stable and fluctuates between 0.44 and 0.52, showing that the global inequality of activity sizes is stable over time (SI, Fig. S2). Despite this stability, the growth rate of N_a differs largely between categories. Notably, there is a tenfold increase in the number of trade agents and brokers between 1829 and 1907, along with an eight-fold increase in engineering and more than fivefold for finance and insurance individuals. Additionally, administrative and legal individuals, along with educational ones, display a modest increase of 1.75 and 3 times, respectively.
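For reference, the Gini coefficient quoted here is the standard one computed on the vector of category sizes; a minimal sketch (our own helper, not the paper's code):

import numpy as np

def gini(sizes) -> float:
    """Gini coefficient of the N_a values across the activity categories."""
    x = np.sort(np.asarray(sizes, dtype=float))
    n = x.size
    return float((2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum()))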
§.§ Scaling of Activities and Population over Time
The population of Paris increased strongly, from 785k in 1831 to more than 2.75M in 1907 <cit.>, raising the question of how activities in the city correspondingly grew. The very sharp growth in the population of the municipality of Paris is closely linked to the process of urban sprawl that led to the redrawing of the city's administrative boundaries in 1860 (see Methods). A natural tool <cit.> to investigate this question is to analyze how the number of entries N_a(t) for a given activity `a' scales with the population P(t) for all times in our dataset. We expect in general a power law of the form
N_a(t) ∼ P(t)^β
where the exponent β characterizes the growth rate of the activity `a'. We measure this exponent for different categories of activities and obtain the results shown in Fig.<ref>.
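In practice, such a temporal exponent can be estimated by ordinary least squares on logarithms; a minimal sketch (the endpoints below are taken from the counts quoted above, while the intermediate values are purely illustrative):

import numpy as np

def fit_beta(N_a, P):
    """OLS fit of log N_a(t) = beta * log P(t) + c over the observation years."""
    X = np.vstack([np.log(P), np.ones(len(P))]).T
    beta, c = np.linalg.lstsq(X, np.log(N_a), rcond=None)[0]
    return beta, c

P   = np.array([785e3, 1.2e6, 1.9e6, 2.75e6])  # Paris population (middle values invented)
N_a = np.array([4.0e3, 8.0e3, 1.4e4, 2.0e4])   # e.g. food stores (middle values invented)
print(fit_beta(N_a, P))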
We observe that these scaling exponents are highly variable, ranging from β≈ 0.21 for the administrative and legal category to β≈ 2.52 for activities related to agriculture. A value of β∼ 1 indicates that an activity grows linearly with population during the period. We observe such a linear relation for building and car rental categories (β≈ 1.03 ± 0.15) and non-food retailers (except for clothing and home furniture sales). Other activities display a value of β slightly below 1, such as clothing and home supplies manufacturers-retailers (respectively β≈ 0.84 and 0.85), or health practitioners (β≈ 0.83), indicating a growth rate slightly lower than that of the population. We also observe β values slightly larger than one for food stores (β≈ 1.17 ± 0.11), although a linear behavior cannot be excluded. We note that food stores described in the directories display highly variable values of β depending on the type of food (see Fig. S4 and S5 in the SI). Grocers grow linearly with population (β≈ 0.95 ± 0.05), whereas we observe for seed or cheese retailers an almost constant behavior (β≈ 0.04 and 0.07, respectively).
Besides these activities that scale approximately with the population, we observe activities with scaling exponents very different from 1, for example confectioners (β≈ 0.59), wine retailers (β≈ 1.48) and creamers (β≈ 2.38). Furthermore, activity categories related to politico-institutional services, such as administrative and legal individuals, public administration or educational individuals, grow sublinearly with the population, with 0.21 < β < 0.51. In contrast, engineering individuals (β≈ 1.36), trade agents and brokers (β≈ 1.53), and full-service restaurants (β≈ 2.15) experienced more rapid growth rates than Paris' population during the period.
A wide range of empirical studies on urban scaling analyse data on systems of cities by comparing different cities at a given time. Among these studies, Pumain et al. <cit.> compare β values for various economic and social activities within contemporary city systems in both the US and France (in 2000 and 1999). Certain exponents identified in their study, such as finance and insurance, and wholesale trade, exhibit similarities to the scaling observed here for Paris during the 19th century (see Table <ref>). It seems that in these cases, the value of β is governed by principles independent of the period. Moreover, in the Brazilian system of cities in 2000, the wholesale trade sector had a β value of 1.18 <cit.>. For other activities, the β values differ significantly. This is particularly true for the cases of education, art, construction and restaurants, where values can be larger or smaller than 1 depending on the measure, and whose evolution depends strongly on the period considered. It should be noted that β values can vary in space and time, as in the case of educational services: indeed, we observe both a variation in space (β for the French case is very different from that of the US) and in time (β for Paris in the 19th century is very different from that of France today). This suggests that the total number of workers in education services depends strongly on other considerations such as political decisions. In addition, while some values are similar between the US and French systems, as in the construction sector, they may be different in other emerging economies. Indeed, the β value of this sector is about 1.08 in China (2000) and 0.98 ten years later <cit.>, while it is 1.13 (2000) and 1.32 (2010) in Brazil (due to the housing boom and population concentration in larger cities <cit.>).
Despite these fluctuations in time and from one country to another, these values allow us to distinguish the role of different activities during the growth of a city. We thus propose the following typology for the dynamics of urban activities. For β<1, we find all institutional services whose number is in general dictated by optimization principles based on accessibility <cit.>. Typically, minimizing the total time expenditure for public facilities leads to a power-law behavior with β<1. The underlying assumption is that social structures evolve in such a way as to minimize the time spent in their operation (see Methods for an analytical discussion). It is important to note that entries in directories relating to institutional services are not limited to establishments (e.g. schools for educational services), but also include individuals (e.g. teachers), as presented in SI Table S1. In this respect, optimization is also driven by other factors in addition to accessibility. Urban activities with β≈ 1 are intrinsically linked to population and to a fixed number per capita. The number of these activities is mainly driven by the population's needs (typically the case for grocers). These two activity types (β<1 and β≈ 1) correspond essentially to basic needs and can be expected for other growing modern cities. This is in contrast with the last type of activities, displaying β>1, whose number is mainly dictated by specific needs, which in general vary according to the period considered. Some activities depend on transformations in the economic production system, like engineering services during the industrial revolution, while others depend on transformations in consumption habits. This is typically the case here for wine retailers (owing to the increasing popularity of wine consumption in the 19th century <cit.>) or creamers (milk consumption increased during this period <cit.>). These large β values reveal a specific phase of the development of the city: urban sprawl (e.g. the number of individuals in charge of feeding cows for milk production and trade increases because proximity to consumers was vital before pasteurization methods were invented), or a specialization (such as restaurants in Paris).
§.§ Dynamics of Activities
In addition to the scaling, we also investigate the dynamics of activities. An interesting visualization tool is the rank clock <cit.> for activity categories, which we show in Fig.<ref> for the period 1829-1907, and we observe different types of dynamics. First (Fig.<ref>,a), there are activities, such as food stores or others such as cosmetics, beauty supplies or perfume, that maintain their rank over 79 years. More generally, all activities connected to everyday needs of residents (food stores, clothing manufacturers/retailers, or health practitioners, etc.) have a rank that remains stable during the period under study. Second, there are activities whose rank decreased (i.e. became more important) such as wholesale trade, trade agents and brokers, restaurants and engineering individuals (Fig.<ref>,b,c). The latter experienced a particularly sharp increase, moving from the 15th position in 1829 to the 6th position in 1907. Finally, there are activities whose rank increased, such as administrative and legal, educational, public administration, construction and public works, and publishing industries ones. Their number (see Fig.<ref>), however, remained stable over time, which is consistent with the fact that it is governed by optimal considerations.
§.§ Effect of large perturbations
Paris during the 19th century experienced large perturbations. One of the most important happened under the guidance of Baron Haussmann <cit.>. The social, political, and urbanistic importance and impact of Haussmann's renovation is particularly significant. Essentially, until the middle of the 19th century, central Paris had a medieval structure composed of many small and crowded streets, creating congestion and, according to some contemporaries, probably health problems. In 1852, Napoleon III commissioned Haussmann to modernize Paris by building safer streets, large avenues connected to the new train stations, central or symbolic squares (such as the famous place de l’Étoile, place de la Nation and place du Panthéon), improving the traffic flow and, last but not least, the circulation of army troops. Haussmann also built modern housing with uniform building heights, new water supply and sewer systems, new bridges, etc. These works lasted between 1853 and 1870.
The second important historical event is the Paris Commune, a revolutionary government that seized power in Paris in 1871. It emerged after the defeat of France in the Franco-Prussian war of 1870-1871 (with the siege of Paris by Prussian forces from September until the capitulation of the city), and in opposition to the new government formed by the monarchist-majority National Assembly in 1871. The Paris Commune government lasted from March to May, while regular government troops once again surrounded Paris, until the `Communards' were defeated in a severe crackdown.
An interesting question is then whether we can observe these two different types of events in the statistics of activities. Indeed, Haussmann's project was an extensive urban planning operation lasting 17 years, while the Paris Commune was a political event lasting just a few months. We thus compute the average slope for a given activity (computed over the period 1829-1907), and the average relative deviation D_a around this slope computed over the Haussmann period (1855-1875), during the Paris Commune (1864-1871), and after the Paris Commune (1871-1875). More precisely, if we denote by S_a^loc the local slope for the activity `a' (equal to (N_a(1875)-N_a(1855))/20 for the Haussmann period, for example), and by S_a the average slope for activity `a', the quantity D_a is given by
D_a = (S_a^loc - S_a)/|S_a|
The results for different activities are shown in the Table <ref> and display large impacts on some activities (see also SI Fig. S6 and S7).
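The deviation is straightforward to compute once the counts N_a(t) are tabulated; a sketch (ours; the average slope is approximated here by the endpoint slope over 1829-1907):

def relative_deviation(N, t0, t1, T0=1829, T1=1907):
    """D_a = (S_a^loc - S_a) / |S_a| for a mapping N: year -> number of entries."""
    s_loc = (N[t1] - N[t0]) / (t1 - t0)   # e.g. Haussmann period: t0=1855, t1=1875
    s_avg = (N[T1] - N[T0]) / (T1 - T0)   # endpoint proxy for the 1829-1907 slope
    return (s_loc - s_avg) / abs(s_avg)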
During the Haussmann period, most activities were positively affected, with an increase in their numbers. In particular, construction works and building rental were impacted, as could be expected for such a large-scale restructuring of the city and its infrastructures. Indeed, during the beginning of Haussmann's renovation we observe the growth of new companies in the construction sector (D_a ≈ 3.34 in 1855-1864 and -0.21 in 1864-1875), while furnished hotels (building rental) used as transitional housing for new immigrants to Paris were mainly impacted during the 1860s (D_a ≈ 0.83 in 1855-1860; 2.87 in 1860-1864), probably as a consequence of the housing price inflation of over 50% in the 1850s in the city center <cit.>. From this perspective, the global impact of this large-scale planning operation was positive. It is very different in the case of the Paris Commune, which had a globally negative impact on most activities, as we could expect for such a troubled period. The impact is not significant when looking at the total number of entries in city directories (N=93.7k in 1864, 101.8k in 1871 and 107.5k in 1875), but it is significant for the most frequent activities in the city analyzed here (respectively N=67.9k, 56k and 80.8k). Surprisingly enough, the period after this event witnessed a large-scale reorganization with many activities whose number exploded. From this point of view, three main factors probably explain this evolution: 1) temporary closures of companies; 2) temporary transformations in the types of activity carried out by companies listed before, during and after 1871; 3) changes in the ways in which individuals in 1871, especially elites, presented themselves in the directory (e.g., administrative and legal individuals, or people living off their income as owners or annuitants). Despite the political and social shock of the two sieges of Paris in 1870-1871, the transformations of activities are likely to be short-lived rather than long-lasting. A return to regular activity can be observed from 1875, in terms of the number of activities by category (see SI Fig. S6).
§ DISCUSSION
We showed, in the case of Paris during the 19th century, that not all activities are equivalent and that they can be grouped in categories having different dynamics and scaling. In particular, it seems that some activities naturally accompany the growth of the city and can be considered as intrinsic to its development. These activities essentially answer the basic needs of the residents, and the type of scaling with population of the number of these activities depends essentially on their underlying logic: it seems indeed intuitive that the number of food stores scales more or less linearly with population, while administrative or educational services follow an optimization logic (and scale sublinearly). All these activities constitute the core of the city, while other activities determine its specific features. These quickly developing activities usually respond to some passing fad or to a need of the specific period and the phase of the city's development.
The dynamics of activities in the city over time should also be considered from the perspective that Paris is a city within a system of cities <cit.>, and is also the largest city in the French urban system. In this perspective, the evolutionary theory for interpreting urban scaling laws <cit.> can shed some light on our results. Apart from human-related needs that are always characterized by β≈ 1, we can discuss the typology in terms of innovation cycles. When the exponents are larger than 1, the corresponding activities are in a first adoption stage of innovations in the urban system, and are usually concentrated in the largest cities, while exponents below 1 correspond to more mature activities <cit.>. We thus expect the typology of these activities to be valid for all growing cities, with differences lying essentially in the second group of activities (β>1) that characterize the city's specific features in an innovation phase or specialization. These results shed new light on the development of large cities by highlighting a typology of activities and the different roles they play in urban growth.
§.§ Data availability
The dataset of the population censuses of Paris at the scale of districts between 1801 and 1911 is openly accessible with the documentation on Nakala platform of the CNRS Research Infrastructure Huma-Num https://doi.org/10.34847/nkl.e173c93pDOI:10.34847/nkl.e173c93p. The dataset of Paris directories entries with NAICS inspired categories between 1829 and 1907 specifically constructed and used for this paper is openly accessible on Zenodo platform https://doi.org/10.5281/zenodo.8388101DOI:10.5281/zenodo.8388101.
§.§ Code availability
The open repository https://doi.org/10.5281/zenodo.8388101DOI:10.5281/zenodo.8388101 contains the code to create the figures and tables of both the main text and the SI.
§.§ Acknowledgements
We thank N. Abadie, S. Baciocchi, P. Cristofoli, B. Duménieu, E. Carlinet, J. Chazalon and J. Perret, who set up the processing chain enabling the extraction and enrichment of Paris directories, and without whom this work would not have been possible. This research was funded by ANR SoDUCo Program, grant number ANR-18-CE38-0013 (JG and MB).
§.§ Author Contributions Statement
JG and MB designed the study, JG worked on the data and analyzed it. JG and MB wrote
the paper.
§.§ Competing Interests Statement
None.
§ MATERIALS AND METHODS
§.§ Paris Directories between 1799 and 1914
City directories became an international publishing phenomenon during the 19th century in Europe and the USA. They consist of large lists of individuals, merchants or prominent inhabitants, businesses, organizations and institutions, each with description and address. Published at a fast pace, often yearly, they provide massive, fine-grained and highly valuable geohistorical information for in-depth interdisciplinary studies of social and spatial aspects of cities. Analyzing the content of these directories over time and with a high temporal frequency requires a considerable amount of manual transcription, structuring, geolocation and data linking work. To overcome this difficulty, the SoDUCo ANR Program has proposed an automatic pipeline to extract, semantically annotate, geocode and analyze the contents of 141 directories of Paris published between 1799 and 1914. First, each page is processed using image segmentation to extract its layout and detect directory entries, i.e. regions containing a triple composed of a person (natural or legal), an activity and an address. OCR is applied to each entry to get its textual content, which is then semantically enriched using a deep-learning based Named Entity Recognition approach <cit.>. Finally, each address is assigned a geographic position in the city using a geocoding process. Spatial enrichment leverages historical maps of Paris so the geocoding takes into account the changes in the numbering and streets over the century, including Haussmann’s renovation <cit.>. More than 10M entries currently compose the 141 digitally transcribed and enriched directories <cit.>.
Paris directories are commercial editions, involving competition between publishers, buy-outs over time and periods of editorial monopoly. Five main phases have been revealed through a systematic chronological inventory of directories. The years 1780-1793 are those of the origins. This phase was followed by the advent of the Almanach du commerce (1798-1815). Competition was fierce, and publications abounded until 1856, when the Firmin Didot brothers bought the Bottin publishing company. The period 1857-1890 was thus marked by the hegemony of the Didot-Bottin collection. A new period of competition began in 1891.
The city of Paris is conceived by the editors in terms of its socio-economic functioning. The main issue is to list individuals so that others can learn about their activities and addresses, and thus potentially interact with them (through visits or epistolary exchanges). By establishing these lists, the aim is to connect individuals in the city and from the city with others who are part of wider networks; even including, if necessary but very rarely, individuals who live or perform their activities outside the administrative limits of the city but belong to socio-economic networks of Paris (as in the case of Batignolles before 1860, see SI section 6, Fig. S10). In this sense, directories are homogeneous across space. Moreover, activities listed in the directories are limited to those associated with specific addresses, implying a focus on activities linked to the physical structure of the city. Consequently, informal activities are not listed in the directories, and the data captured therein mirrors the evolution of the city's urban fabric. In this sense, the coverage of the city's outskirts can be less extensive than that of the center, since it is also on some of these fringes that informal activities are more numerous.
The lists in directories follow three main organizations: by name, by profession, and by street, in order to simplify searches for readers of the time. Payment is not required to be listed in the directories. However, the varying length of activity descriptions in the professional lists, and the integration of promotional vignettes towards the end of the 19th century, testify to the payment of advertising by companies. In contrast, the alphabetical lists studied in this paper are made up of much shorter, standardized activity descriptions.
§.§ Selection of Directories and Main Activities between 1829 and 1907
The fierce competition between editors from the mid-1810s onwards led them to enrich the listings by extending their social and economic coverage, and to continually update their editions. Indeed, until 1817, alphabetical lists of Paris directories exclusively documented prominent inhabitants. Subsequently, a diverse range of individuals was incorporated into the alphabetical lists, enabling the study of the dynamics of diversified urban activities. However, the statistical distributions of the number of entries per page in alphabetical lists show significant variations for each of the directories published before 1829, resulting from noise in the segmentation phase of the digitized directory images. Hence, we only considered data from 1829 onwards.
Knowing that the directories capture activities related to the urban physical space, we accordingly selected directories with a time step of around 5 years. Indeed, changes in urban structure do not occur at an annual rate. The choice of a particular year over another was determined through an evaluation of the extraction and enrichment quality of the digital transcriptions of the directories.
The character strings of the activities in the data are rather noisy, due to OCR errors and to the way activities are described in the sources (use of abbreviations and/or added information). We thus applied the soundex phonetic index to smooth out the noise and group together similar activities. Over 7,000 different activities are listed in the directories studied. The rank-size distributions of the activities show that a few activities group together a large number of persons, such as wine retailers, with 10,000 entries in 1885, while a large number of activities group together a small number of persons (see SI Fig. S8, S9). We selected the 175 most frequent activities from the slightly less than 1.4M entries in the 16 directories studied, which account for 71% of all entries, i.e. over 990,000 entries.
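For illustration, the classic (English) Soundex encoding is sketched below; the exact phonetic index applied to the French strings may differ, and the helper is ours:

def soundex(word: str) -> str:
    """Standard Soundex code (one letter plus three digits)."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = "".join(ch for ch in word.lower() if ch.isalpha())
    if not word:
        return ""
    out, prev = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "hw":  # h and w do not break a run of identical codes
            prev = code
    return (out + "000")[:4]

# e.g. soundex("boulanger") == soundex("boulangier"): OCR variants collapse together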
§.§ Activity Categories Inspired of the U.S. Census Bureau's NAICS
The 175 most frequent activities are grouped in categories inspired by the 2022 North American Industry Classification System (https://www.census.gov/naics/NAICS). The latter is a standard, and facilitates comparisons with present-day cities. The social and economic organization of cities during the 19th century and nowadays is not strictly equivalent, so we could not systematically apply NAICS categories to past activities described in directories. As an example, places of manufacturing and retail are regularly not distinct, requiring their association in some of the categories proposed (see SI Tab. S1). We studied the definitions of each activity description with four editions of the dictionary of the French Academy (Dictionnaire de l'Académie française from 1798 to 1935) to better understand what each of them covers and to evaluate the semantic stability or instability of the words used in directories. This methodological step was necessary to accurately relate the activity descriptions in the directories to the NAICS-inspired category typology.
The soundex index is not always effective when OCR errors in the data are significant at the beginning of the character strings of the activities, and it sometimes groups very different activities that sound alike. We thus performed manual editing of the data to reduce the noise from incorrect automatic matches between activities in the Paris directories and activity categories. Some variations in the number of entries are significant, as in the case of food stores, where total entries were 250,000 before the manual revision and 175,000 after (see SI Fig. S9).
§.§ The specific cases of `No activity, living of income' and `Public administration'
The category `No activity, living of income' displays a highly exceptional pattern, characterized by a significant decline of N_a in both 1839 and 1907 (see SI Fig. S6). It reflects editorial choices made by the companies that published the directories. Indeed, this category encompasses individuals who, in 1839, were classified by the publisher Lamy not as owners, but rather as electors or eligible individuals, within the context of a tax-based voting system – a tax paid by approximately 1.2% of Parisians at that time and about 0.65% of the French population in 1842 <cit.>. Furthermore, starting from 1903, the publishers Didot-Bottin undertook the creation of a distinct directory dedicated to prominent inhabitants (the Bottin mondain). As a result, all 2,780 individuals listed as owners or annuitants in the 1901 directory are absent from the 1907 edition. This editorial shift also explains the decrease of N_a in the public administration category (Fig.<ref>). Specifically, 1,240 individuals were removed between the 1901 and 1907 directories, highlighting that 78% of the individuals listed in this category were prominent inhabitants of Paris (e.g. deputies, ministers, advisors at the Court of cassation). In contrast, certain other activities, such as finance and insurance individuals or health practitioners, demonstrated a consistent presence over time, suggesting that bankers, placement agents, doctors, etc. were not part of the highest social strata of society according to the publishers of the time.
§.§ Relationship between the administrative delineation of Paris and the delineation of urban activities
The question of Paris' delineation is an important one, as it generally affects scaling results <cit.>. Paris directories are not based on the municipality's administrative boundaries, but reflect the economic activities of the urban area. In this sense, the spatial spread of geolocalized entries over the century reflects urban sprawl – mainly from the 1870s. Population censuses, on the other hand, are carried out within the districts of the municipality of Paris; in this sense, the delineation is administrative. In addition, the political and fiscal law of June 16, 1859 entailed the enlargement of the municipality of Paris by merging and redrawing the boundaries of neighbouring municipalities <cit.>. The areas covered by the activities contained in the directories are, however, overwhelmingly included within the city's administrative boundaries over time (see SI section 6, Fig. S10).
§.§ Optimal number of public facilities
Elaborating on an argument proposed by Stephan <cit.>, Um et al. <cit.> minimize the total time expenditure to explain the scaling with population. We assume that the city has a surface area A and contains N facilities.
The total time expenditure can be written as
T = hN + cPd^γ,
where d = √(A/N). In the first term on the r.h.s., the quantity h represents the number of person-hours per establishment, so that hN is the total person-hour expenditure for the N facilities. The second term corresponds to the time needed for the population P to reach a facility, which for a typical distance d and an average velocity v is of the form d/v. The dependence on distance could, however, be more complex, and we generalize this expression by writing d^γ, where γ is an unknown exponent.
Minimizing T with respect to N, i.e. solving dT/dN = h - (γ/2)cPA^(γ/2)N^(-(γ+2)/2) = 0, then gives
N ∼ A^(γ/(γ+2)) P^(2/(γ+2)),
leading to an exponent β = 2/(γ+2). The usual assumption γ = 1 leads to the standard β = 2/3 as discussed in <cit.>.
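The first-order condition and the resulting exponent can be checked symbolically; a minimal sympy sketch (the symbol names are ours):

    import sympy as sp

    h, c, P, A, N, g = sp.symbols('h c P A N gamma', positive=True)

    T = h*N + c*P*(A/N)**(g/2)                     # total time, with d = sqrt(A/N)
    Nstar = (g*c*P*A**(g/2) / (2*h))**(2/(g + 2))  # claimed minimizer

    # dT/dN vanishes at N = Nstar, and Nstar ~ A**(g/(g+2)) * P**(2/(g+2)),
    # i.e. beta = 2/(g+2); g = 1 gives the standard beta = 2/3
    assert sp.simplify(sp.diff(T, N).subs(N, Nstar)) == 0
    print(sp.powsimp(Nstar.subs(g, 1), force=True))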
[1] Michael Batty, The New Science of Cities (MIT Press, 2013).
[2] Marc Barthelemy, The Structure and Dynamics of Cities (Cambridge University Press, 2016).
[3] Luís M. A. Bettencourt, Introduction to Urban Science: Evidence and Theory of Cities as Complex Systems (MIT Press, 2021).
[4] Gilles Duranton and Diego Puga, The growth of cities, Handbook of Economic Growth 2, 781–853 (2014).
[5] Denise Pumain, Fabien Paulus, Céline Vacchiani-Marcuzzo, and José Lobo, An evolutionary theory for interpreting urban scaling laws, CyberGeo: European Journal of Geography (2006), doi:10.4000/cybergeo.2519.
[6] J. Vernon Henderson and Hyoung Gun Wang, Urbanization and city growth: The role of institutions, Regional Science and Urban Economics 37, 283–313 (2007).
[7] Vincent Verbavatz and Marc Barthelemy, The growth equation of cities, Nature 587, 397–401 (2020).
[8] Patricia E. Beeson, David N. DeJong, and Werner Troesken, Population growth in U.S. counties, 1840–1990, Regional Science and Urban Economics 31, 669–699 (2001).
[9] Scott G. Ortman, Andrew H. F. Cabaniss, Jennie O. Sturm, and Luis M. A. Bettencourt, The pre-history of urban scaling, PLOS ONE (2014), doi:10.1371/journal.pone.0087902.
[10] Scott G. Ortman, José Lobo, and Michael E. Smith, Cities: Complexity, theory and history, PLOS ONE (2020), doi:10.1371/journal.pone.0243621.
[11] Jose Lobo, Luis M. A. Bettencourt, Michael E. Smith, and Scott Ortman, Settlement scaling theory: Bridging the study of ancient and contemporary urban systems, Urban Studies 57, 731–747 (2020).
[12] Julien Perret, Maurizio Gribaudi, and Marc Barthelemy, Roads and cities of 18th century France, Scientific Data 2, 1–7 (2015).
[13] Hanae El Gouj, Christian Rincón-Acosta, and Claire Lagesse, Urban morphogenesis analysis based on geohistorical road data, Applied Network Science 7, 6 (2022).
[14] Anne Bretagnolle, François Delisle, Hélène Mathian, and Gabriel Vatin, Urbanization of the United States over two centuries: an approach based on a long-term database (1790–2010), International Journal of Geographical Information Science 29, 850–867 (2015).
[15] Keith Burghardt, Johannes H. Uhl, Kristina Lerman, and Stefan Leyk, Analyzing urban scaling laws in the United States over 115 years, arXiv:2209.10852 (2023).
[16] Nestor P. Rodriguez and Joe R. Feagin, Urban specialization in the world-system: An investigation of historical cases, Urban Affairs Quarterly 22, 187–220 (1986).
[17] GeoHistoricalData, Annuaires historiques parisiens, 1798-1914. Extraction structurée et géolocalisée à l'adresse des listes nominatives par ordre alphabétique et par activité dans les volumes numérisés (2023), doi:10.34847/nkl.98eem49t.
[18] Marc Barthelemy, Patricia Bordin, Henri Berestycki, and Maurizio Gribaudi, Self-organization versus top-down planning in the evolution of a city, Scientific Reports 3, 2153 (2013).
[19] Denise Pumain, Scaling laws and urban systems, Santa Fe Institute Working Paper 002 (2004).
[20] Luís M. A. Bettencourt, José Lobo, Dirk Helbing, Christian Kühnert, and Geoffrey B. West, Growth, innovation, scaling, and the pace of life in cities, Proceedings of the National Academy of Sciences 104, 7301–7306 (2007).
[21] Eszter Bokányi, Zoltán Szállási, and Gábor Vattay, Universal scaling laws in metro area election results, PLOS ONE 13 (2018), doi:10.1371/journal.pone.0192913.
[22] Jules Depersin and Marc Barthelemy, From global scaling to the dynamics of individual cities, Proceedings of the National Academy of Sciences 115, 2317–2322 (2018).
[23] Marc Keuschnigg, Scaling trajectories of cities, Proceedings of the National Academy of Sciences 116, 13759–13761 (2019).
[24] Luís M. A. Bettencourt, Vicky Chuqiao Yang, José Lobo, Christopher P. Kempes, Diego Rybski, and Marcus J. Hamilton, The interpretation of urban scaling analysis in time, Journal of The Royal Society Interface 17, 20190846 (2020).
[25] Cosmo Antonio Ignazzi, Scaling laws, economic growth, education and crime: Evidence from Brazil, Espace géographique 43, 324–337 (2014).
[26] Gilles Condé, Mutations du système de villes belges, Cybergeo: European Journal of Geography (2015), doi:10.4000/cybergeo.26691.
[27] Fabien Paulus and Céline Vacchiani-Marcuzzo, Knowledge economy and competitiveness: Economic trajectories of French cities since the 1960s, in Knowledge-creating Milieus in Europe: Firms, Cities, Territories, edited by Augusto Cusinato and Andreas Philippopoulos-Mihalopoulos (Springer, Berlin, Heidelberg, 2016), pp. 157–170.
[28] Joao Meirelles, Camilo Rodrigues Neto, Fernando Fagundes Ferreira, Fabiano Lemes Ribeiro, and Claudia Rebeca Binder, Evolution of urban scaling: Evidence from Brazil, PLOS ONE 13 (2018), doi:10.1371/journal.pone.0204574.
[29] Ana Goncalves and Tiago Domingos, Scaling laws and electricity consumption in cities: a sectoral view, International Journal of Sustainable Energy Planning and Management 2, 19–32 (2014).
[30] Gang Xu, Zhengzi Zhou, Limin Jiao, Ting Dong, and Ruiqi Li, Cross-sectional urban scaling fails in predicting temporal growth of cities, arXiv:1910.06732 (2019).
[31] Inho Hong, Morgan R. Frank, Iyad Rahwan, Woo-Sung Jung, and Hyejin Youn, The universal pathway to innovative urban economies, Science Advances 6 (2020), doi:10.1126/sciadv.aba4934.
[32] Fabiano L. Ribeiro, Joao Meirelles, Vinicius M. Netto, Camilo Rodrigues Neto, and Andrea Baronchelli, On the relation between transversal and longitudinal scaling in cities, PLOS ONE 15, e0233003 (2020).
[33] Gang Xu, Zhibang Xu, Yanyan Gu, Weiqian Lei, Yupiao Pan, Jie Liu, and Limin Jiao, Scaling laws in intra-urban systems and over time at the district level in Shanghai, China, Physica A: Statistical Mechanics and its Applications 560, 125162 (2020).
[34] Olivier Finance and Elfie Swerts, Scaling laws in urban geography. Linkages with urban theories, challenges and limitations, in Theories and Models of Urbanization: Geography, Economics and Computing Sciences, Lecture Notes in Morphogenesis, edited by Denise Pumain (Springer International Publishing, Cham, 2020), pp. 67–96.
[35] Marion Maisonobe, Regional distribution of research: The spatial polarization in question, in Handbook Bibliometrics, edited by Rafael Ball (De Gruyter Saur, 2020), pp. 377–396.
[36] David Emanuel Andersson, Ake E. Andersson, Björn Harsman, and Xiyi Yang, The geography of science in 12 European countries: a NUTS2-level analysis, Scientometrics 124, 1099–1125 (2020).
[37] Martin Arvidsson, Niclas Lovsjö, and Marc Keuschnigg, Urban scaling laws arise from within-city inequalities, Nature Human Behaviour 7, 365–374 (2023).
[38] Weiqian Lei, Limin Jiao, Gang Xu, and Zhengzi Zhou, Urban scaling in rapidly urbanising China, Urban Studies 59, 1889–1908 (2022).
[39] António Curado, Bruno Damásio, Sara Encarnação, Cristian Candia, and Flávio L. Pinheiro, Scaling behavior of public procurement activity, PLOS ONE 16 (2021), doi:10.1371/journal.pone.0260806.
[40] Deborah Strumsky, Jose Lobo, and Charlotta Mellander, As different as night and day: Scaling analysis of Swedish urban areas and regional labor markets, Environment and Planning B: Urban Analytics and City Science 48, 231–247 (2021).
[41] Scott G. Ortman, Kaitlyn E. Davis, José Lobo, Michael E. Smith, Luis M. A. Bettencourt, and Aaron Trumbo, Settlement scaling and economic change in the Central Andes, Journal of Archaeological Science 73, 94–106 (2016).
[42] Rudolf Cesaretti, José Lobo, Luis M. A. Bettencourt, Scott G. Ortman, and Michael E. Smith, Population-area relationship for medieval European cities, PLOS ONE (2016), doi:10.1371/journal.pone.0162678.
[43] Scott G. Ortman, Michael E. Smith, José Lobo, and Luís M. A. Bettencourt, Why archaeology is necessary for a theory of urbanization, Journal of Urban Archaeology 1, 151–167 (2020).
[44] J. W. Hanson, S. G. Ortman, and J. Lobo, Urbanism and the division of labour in the Roman Empire, Journal of The Royal Society Interface 14, 20170367 (2017).
[45] John W. Hanson, Scott G. Ortman, Luís M. A. Bettencourt, and Liam C. Mazur, Urban form, infrastructure and spatial organisation in the Roman Empire, Antiquity 93, 702–718 (2019).
[46] Mark Altaweel and Alessio Palmisano, Urban and transport scaling: Northern Mesopotamia in the Late Chalcolithic and Bronze Age, Journal of Archaeological Method and Theory 26, 943–966 (2019).
[47] Rudolf Cesaretti, José Lobo, Luis M. A. Bettencourt, and Michael E. Smith, Increasing returns to scale in the towns of early Tudor England, Historical Methods: A Journal of Quantitative and Interdisciplinary History 53, 147–165 (2020).
[48] Michael E. Smith, Scott G. Ortman, José Lobo, Claire E. Ebert, Amy E. Thompson, Keith M. Prufer, Rodrigo Liendo Stuardo, and Robert M. Rosenswig, The low-density urban systems of the Classic Period Maya and Izapa: Insights from settlement scaling theory, Latin American Antiquity 32, 120–137 (2021).
[49] Pascal Cristofoli and Julie Gravier, Populations of Paris districts (1801–1911) (2023), doi:10.34847/nkl.e173c93p.
[50] Cosmo Antonio Ignazzi, Coevolution in the Brazilian system of cities, PhD thesis, Paris (2015).
[51] Jaegon Um, Seung-Woo Son, Sung-Ik Lee, Hawoong Jeong, and Beom Jun Kim, Scaling laws between population and facility densities, Proceedings of the National Academy of Sciences 106, 14236–14240 (2009).
[52] G. Edward Stephan, Territorial division: The least-time constraint behind the formation of subnational boundaries, Science 196, 523–524 (1977).
[53] Sabir M. Gusein-Zade, Bunge's problem in central place theory and its generalizations, Geographical Analysis 14, 246–252 (1982).
[54] Michael T. Gastner and Mark E. J. Newman, Optimal design of spatial distribution networks, Physical Review E 74, 016117 (2006).
[55] Didier Nourrisson, 7. Sociabilités du vin, in Une histoire du vin, Pour l'histoire (Perrin, Paris, 2017), pp. 197–240.
[56] A. Husson, La consommation du lait à Paris, Journal de la Société de statistique de Paris 16, 17–23 (1875).
[57] Michael Batty, Rank clocks, Nature 444, 592–596 (2006).
[58] David P. Jordan, Transforming Paris: The Life and Labors of Baron Haussmann (University of Chicago Press, Chicago, 1995).
[59] Philippe Panerai, Jean Castex, Jean-Charles Depaule, and Olga Vitale Samuels, Urban Forms: The Death and Life of the Urban Block, edited by Ivor Samuels (Architectural Press, Oxford, Boston, 2004).
[60] Alain Faure, Spéculation et société : les grands travaux à Paris au XIXe siècle, Histoire, économie & société 23e année, 433–448 (2004).
[61] Brian J. L. Berry, Cities as systems within systems of cities, Papers of the Regional Science Association 13, 146–163 (1964).
[62] Denise Pumain, From networks of cities to systems of cities, in Handbook of Cities and Networks (Edward Elgar Publishing, 2021), pp. 16–40.
[63] N. Abadie, E. Carlinet, J. Chazalon, and B. Duménieu, A benchmark of named entity recognition approaches in historical documents: Application to 19th century French directories, in Document Analysis Systems, Lecture Notes in Computer Science, edited by Seiichi Uchida, Elisa Barney, and Véronique Eglin (Springer International Publishing, Cham, 2022), pp. 445–460.
[64] Rémi Cura, Bertrand Dumenieu, Nathalie Abadie, Benoit Costes, Julien Perret, and Maurizio Gribaudi, Historical collaborative geocoding, ISPRS International Journal of Geo-Information 7, 1–29 (2018).
[65] André-Jean Tudesq, 1. Les listes électorales de la Monarchie censitaire, Annales 13, 277–288 (1958).
[66] Clémentine Cottineau, Erez Hatna, Elsa Arcaute, and Michael Batty, Diverse cities or the systematic paradox of urban scaling laws, Computers, Environment and Urban Systems 63, 80–94 (2016).
[67] Nathalie Montel, L'agrandissement de Paris en 1860 : un projet controversé, in Agrandir Paris (1860–1970), Histoire contemporaine, edited by Florence Bourillon and Annie Fourcaut (Éditions de la Sorbonne, Paris, 2012), pp. 99–111.
|
http://arxiv.org/abs/2409.02068v1 | 20240903171545 | Mixed tensor invariants of Lie color algebra | [
"Santosha Pattanayak",
"Preena Samuel"
] | math.RT | [
"math.RT",
"17B10, 17B35, 17B65, 14M30"
] |
MSC subject classification 2020: 13A50, 15A72, 16R30, 17B10, 17B70, 17B75.
Keywords: Picture invariants, Invariant theory, Lie color algebra, G-graded Lie algebra.
Mixed tensor invariants of Lie color algebra
Santosha Pattanayak, Preena Samuel
September 9, 2024
============================================
Department of Mathematics and Statistics, IIT Kanpur,
Kanpur, India
[email protected], [email protected]
§ ABSTRACT
In this paper, we consider the mixed tensor space of a G-graded vector space where G is a finite abelian group. We obtain a spanning set of invariants of the associated symmetric algebra under the action of a color analogue of the general linear group which we refer to as the general linear color group. As a consequence, we obtain a generating set for the polynomial invariants, under the simultaneous action of the general linear color group, on color analogues of several copies of matrices. We show that in this special case, this is the set of trace monomials, which coincides with the set of generators given by Berele in <cit.>.
§ INTRODUCTION
Lie color algebras were introduced as ‘generalized Lie algebras’ in 1960 by Ree <cit.>, and are also called color Lie superalgebras (see <cit.>). Since then, this kind of algebra has been an object of constant interest in mathematics, and is also remarkable for the important role it plays in theoretical physics, especially in conformal field theory and supersymmetries. Lie color algebras are closely related to Lie superalgebras: any Lie superalgebra is a Lie color algebra defined by the simplest nontrivial abelian group ℤ_2, while any Lie color algebra defined by a finitely generated abelian group admits a natural Lie superalgebra structure. Unlike those of Lie algebras and Lie superalgebras, the structure and representation theory of Lie color algebras are far from well understood, and there is no general classification result on simple Lie color algebras. Some recent work on their representation theory and related graded ring theory can be found in <cit.>, <cit.>, <cit.>.
In <cit.>, Procesi studied tuples (A_1, ⋯, A_k) of endomorphisms of a finite-dimensional vector space up to simultaneous conjugation by studying the corresponding ring of invariants. He showed that for an algebraically closed field F of characteristic zero the algebra of invariants F[(A_i)_jl]^GL_r(F) is generated by traces of monomials in A_1, ⋯, A_k.
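This invariance is easy to check numerically; in the sketch below, the matrices, the word, and the conjugating matrix are arbitrary choices made only for the demonstration:

    import numpy as np

    rng = np.random.default_rng(0)
    r, k = 3, 2
    A = [rng.normal(size=(r, r)) for _ in range(k)]   # a k-tuple of endomorphisms
    g = rng.normal(size=(r, r))                       # generically invertible
    gi = np.linalg.inv(g)

    def trace_monomial(mats, word):
        M = np.eye(r)
        for i in word:                                # the monomial A_{i_1} ... A_{i_m}
            M = M @ mats[i]
        return np.trace(M)

    word = [0, 1, 1, 0]
    conj = [g @ Ai @ gi for Ai in A]                  # simultaneous conjugation
    assert np.isclose(trace_monomial(A, word), trace_monomial(conj, word))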
The main tool used in Procesi's work to describe the invariants is the Schur–Weyl duality. This tool was also used in a similar way to study algebraic structures more complicated than a vector space equipped with endomorphisms. In <cit.>, Datt, Kodiyalam and Sunder applied this machinery to the study of finite-dimensional complex semisimple Hopf algebras. They were able to obtain a complete set of invariants for separating isomorphism classes of complex semisimple Hopf algebras of a given dimension. This they accomplished by giving an explicit spanning set for the invariant ring of the mixed tensor space; its elements are called “picture invariants". The picture invariants obtained in the case of complex semisimple Hopf algebras were also obtained by using techniques from Geometric Invariant Theory in <cit.>. In <cit.> these picture invariants were used to describe a finite collection of rational functions in the structure constants of a Lie algebra, which form a complete set of invariants for the isomorphism classes of complex semisimple Lie algebras of a given dimension.
More recently, the invariant ring of the mixed tensor spaces was used to extend results of <cit.> to separate isomorphism classes of complex Hopf algebras of small dimensions <cit.>. The invariant ring of the mixed tensor space arises naturally in obtaining invariants for any such isomorphism classes. Here, we look at the G-graded analogue of this invariant ring. We define “graded picture invariants" which Λ_ϵ-linearly span this ring.
In <cit.>, Fischman and Montgomery proved an analogue of the double centralizer theorem for coquasitriangular Hopf algebras and as a consequence they proved the double centralizer theorem for the general linear Lie color
algebra 𝔤𝔩_ϵ. In <cit.>, Moon reproves the double centralizer theorem for 𝔤𝔩_ϵ by relating its centralizer algebra to that of the general linear Lie superalgebra. For a G-graded vector space V, we set U:=V ⊗_ℂΛ_ϵ.
The general linear color group is defined as the group of invertible elements in the set of all degree preserving Λ_ϵ-linear maps on U. We denote this group by GL_ϵ(U). Using the results of <cit.>, Berele obtained a graded analogue of the Schur–Weyl
duality between the general linear color group and the symmetric group in <cit.>. This result is then used to arrive at a generating set for the GL_ϵ(U)-polynomial invariants on color analogues of multiple copies of matrices, thereby extending some of the results of <cit.> to this setting.
In this paper, we extend Berele's results to the action of the general linear color group on the mixed tensor space ⊕_i=1^sU_b_i^t_i, where t_i, b_i are in ℕ∪{0} for all i=1,…,s. The case of invariants of the color analogues of d copies of matrices, as described in <cit.>, may be seen as a special case of the above by taking t_i=1=b_i for all i=1,…,s. We show in Theorem <ref> that the ring of invariants of the G-graded symmetric algebra S((⊕_i=1^sU_b_i^t_i)^*) is generated by certain special invariants which are analogues of the “picture invariants" in <cit.>. We then show in Theorem <ref> that in the special case of t_i=1=b_i for all i=1,…,s, this agrees with the results of <cit.>. Viewing a superspace as a special case of a G-graded space, the invariants that we obtain here coincide with those obtained in <cit.>.
To define polynomials on a G-graded vector space we use the notion of Λ_ϵ-valued polynomials over U, as introduced in <cit.>. This is done in <ref>. This description of polynomials over U is a suitable alternative to Berele's notion of a polynomial as defined in <cit.>, since we are interested in mixed tensor spaces which do not have a natural identification with matrices. In the special case of t_i=1=b_i for i=1,…,s, however, this notion of polynomials agrees with that of <cit.>. We have used this notion also in <cit.> to arrive at extensions of Berele's results in the supersetting. For defining this notion of a polynomial, we consider the Λ_ϵ-module of maps to Λ_ϵ from the graded component of U corresponding to the identity element of G. This module is denoted as ℱ(U_0,Λ_ϵ). Then, using a G-graded analogue of the restitution map from the r-fold tensor space of U^* to ℱ(U_0, Λ_ϵ), we call the image under this map the space of homogeneous polynomials of degree r. The G-graded algebra of polynomials on U is then taken to be the direct sum of these spaces of homogeneous polynomials. We prove that this algebra is isomorphic to the symmetric algebra of U^*, under the restitution map, analogous to loc. cit.. In this paper, we use the above notion of Λ_ϵ-valued polynomials on ⊕_i=1^sU_b_i^t_i and obtain a generating set for the polynomial invariants of a G-graded mixed tensor space. We then show that in the special case when the mixed tensor space corresponds to several copies of the endomorphism space of U, the graded picture invariants are just trace monomials as given in <cit.>. This may be regarded as the G-graded analogue of Procesi's result in <cit.>.
We now give an outline of the paper. In section 2 we review preliminaries of G-graded vector spaces and Lie color algebras, and we also recall the G-graded analogue of the Schur-Weyl duality. We also introduce in this section the notion of the general linear color group, which is denoted as GL_ϵ(U) in <cit.>, and recall the Schur-Weyl duality for it. In section 3 we introduce the notion of graded picture invariants and prove that these span the space of invariants of the symmetric algebra of the dual of the mixed tensor space. Using this result we give a spanning set for the polynomial invariants of the mixed tensor space and thereby show that the trace monomials span the polynomial invariants for the action of the general linear color group on color analogues of several copies of matrices.
Notation: Throughout this paper we work over the field of complex numbers ℂ. All modules and algebras are defined over ℂ and in addition all the modules are of finite dimension. We write ℤ_2={0̅,1̅} and use its standard field structure. For a finite abelian group G, we denote the identity element of G by ∘, while we reserve the symbols 0,1 to denote the usual complex numbers that they represent.
§ PRELIMINARIES
§.§
Definition: Let G be a finite abelian group. A map ϵ: G × G →ℂ∖{0} is called a skew-symmetric bicharacter on G if the following identities hold, for all f, g, h ∈ G.
(1) ϵ(f, g+h)=ϵ(f,g) ϵ(f,h),
(2) ϵ(g+h,f)=ϵ(g,f)ϵ(h,f),
(3) ϵ(g,h)ϵ(h,g)=1.
From the definition of ϵ it follows that ϵ(g,∘)=ϵ(∘,g)=1 and ϵ(g,g)=± 1, where ∘ denotes the identity element of G.
For a bicharacter ϵ of G, we set G_0̅:={g ∈ G: ϵ(g,g)=1} and G_1̅=G ∖ G_0̅. Then there exists a group homomorphism θ:G →ℤ_2 such that θ(g)=0̅ for g ∈ G_0̅ and θ(g)=1̅ for g ∈ G_1̅. Moreover, G_0̅ is a subgroup of G of index 1 or 2. It easily follows that if g ∈ G_1̅ then -g ∈ G_1̅.
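As a concrete check, a minimal sketch assuming the simplest nontrivial choice G=ℤ_2 with ϵ(g,h)=(-1)^(gh) verifies the three axioms and recovers the splitting G = G_0̅ ∪ G_1̅ numerically:

    import itertools

    n = 2                               # G = Z_2

    def eps(g, h):
        return (-1) ** (g * h)          # a skew-symmetric bicharacter on Z_2

    for f, g, h in itertools.product(range(n), repeat=3):
        assert eps(f, (g + h) % n) == eps(f, g) * eps(f, h)   # axiom (1)
        assert eps((g + h) % n, f) == eps(g, f) * eps(h, f)   # axiom (2)
        assert eps(g, h) * eps(h, g) == 1                     # axiom (3)

    G0 = [g for g in range(n) if eps(g, g) == 1]    # -> [0]
    G1 = [g for g in range(n) if eps(g, g) == -1]   # -> [1]
    print("G_0 =", G0, " G_1 =", G1)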
Definition: For a finite abelian group G, a G-graded vector space is a vector space V together with a decomposition into a direct sum of the form V=⊕_g ∈ G V_g, where each V_g is a vector space. For a given g ∈ G the elements of V_g are then called homogeneous elements of degree g and we write |v| to denote the degree of v.
Any finite dimensional G-graded vector space
V=⊕_g ∈ G V_g can be given a ℤ_2-grading via V=V_G_0̅⊕ V_G_1̅, where V_G_0̅=⊕_g ∈ G_0̅V_g and V_G_1̅=⊕_g ∈ G_1̅V_g.
Definition: Fix a pair (G,ϵ), where G is a finite abelian group and ϵ is a skew-symmetric bicharacter on G. A Lie color algebra L=⊕_g ∈ GL_g associated to (G, ϵ) is a G-graded ℂ-vector space with a graded bilinear map [-,-]:L × L → L satisfying
1. [L_g,L_h] ⊂ L_g+h for every g,h ∈ G,
2. [x, y]=-ϵ(x, y)[y, x], and
3. ϵ(z, x)[x,[y, z]] + ϵ(x, y)[y,[z, x]] + ϵ(y, z)[z,[x, y]] = 0 for all homogeneous elements x,y,z ∈ L, where ϵ(x,y) stands for ϵ(|x|,|y|).
§.§
Given a G-graded vector space V=⊕_g ∈ G V_g, the ℂ-endomorphisms of V, End_ℂ(V) is also G-graded, where
End_ℂ(V)_g={f: V → V: f(V_h) ⊂ V_g+h, for all h ∈ G}
The general linear Lie color algebra, 𝔤𝔩_ϵ(V) is defined to be End_ℂ(V) with the Lie
bracket given by [x,y]_ϵ=xy-ϵ(x,y) yx.
The k-fold tensor product V^⊗ k of V is also G-graded; V^⊗ k=⊕_g ∈ G (V^⊗ k)_g, where (V^⊗ k)_g=⊕_g=g_1+⋯ +g_kV_g_1⊗⋯⊗ V_g_k.
We have an action of the symmetric group S_k on V^⊗ k as follows:
The group S_k is generated by the transpositions s_1, s_2, ⋯ s_k-1, where s_i=(i,i+1). Then
s_i.(v_1 ⊗ v_2 ⊗⋯⊗ v_k)=ϵ(g,h) (v_1 ⊗ v_2 ⊗⋯⊗ v_i+1⊗ v_i ⊗⋯⊗ v_k),
where v_i ∈ V_g and v_i+1∈ V_h.
More generally, for σ∈ S_k and v=v_1 ⊗ v_2 ⊗⋯⊗ v_k where each v_i is a homogeneous element of V,
σ.v=γ(v, σ^-1)(v_σ^-1(1)⊗ v_σ^-1(2)⊗⋯⊗ v_σ^-1(k)),
where γ(v, σ)=∏_(i,j) ∈ Inv(σ)ϵ(|v_i|,|v_j|), with Inv(σ)={(i,j): i < j and σ(i) > σ(j)}.
We then extend the action to V^⊗ k by linearity. We then have the following lemma.
For two permutations σ, τ∈ S_k we have γ(v,στ)=γ(σ^-1v,τ)γ(v,σ).
Let w_i=v_σ(i) for all i. Then w_τ(i)=v_στ(i) for all i.
The action of σ on v is given by σ.v=γ(v, σ^-1)(v_σ^-1(1)⊗ v_σ^-1(2)⊗⋯⊗ v_σ^-1(k))
We have τ^-1σ^-1.v=γ(v, στ)(v_στ(1)⊗ v_στ(2)⊗⋯⊗ v_στ(k)).
On the other hand τ^-1σ^-1.v=γ(v, σ)τ^-1.(v_σ(1)⊗ v_σ(2)⊗⋯⊗ v_σ(k))=γ(v, σ)τ^-1.w
=γ(v, σ) γ(w, τ)(w_τ(1)⊗ w_τ(2)⊗⋯⊗ w_τ(k))
=γ(v, σ) γ(σ^-1v, τ)(v_στ(1)⊗ v_στ(2)⊗⋯⊗ v_στ(k))
So we have the required identity.
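That these formulas define a genuine action of S_k, i.e. that the generators s_i satisfy the Coxeter relations so the assignment extends multiplicatively, can also be checked numerically; the sketch below assumes grades in ℤ_2 with ϵ(g,h)=(-1)^(gh) and tracks a pure tensor as a pair (coefficient, tuple of grades):

    import random

    random.seed(0)

    def eps(a, b):                  # skew-symmetric bicharacter on Z_2
        return -1 if a and b else 1

    def s(i, state):                # the generator s_i = (i, i+1), 0-indexed
        coeff, grades = state
        g = list(grades)
        coeff *= eps(g[i], g[i + 1])
        g[i], g[i + 1] = g[i + 1], g[i]
        return coeff, tuple(g)

    for _ in range(100):
        state = (1, tuple(random.randint(0, 1) for _ in range(4)))
        for i in range(3):
            assert s(i, s(i, state)) == state                 # s_i^2 = 1
        for i in range(2):                                    # braid relation
            assert s(i, s(i + 1, s(i, state))) == s(i + 1, s(i, s(i + 1, state)))
    print("the generators satisfy the Coxeter relations of S_4")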
By fixing a set of homogeneous vectors v_1, ⋯ ,v_k of V such that |v_i|=a_i for all i, we set I to be the tuple (a_1,…, a_k) in G^k. The symmetric group S_k acts on G^k via σ.(a_1,…, a_k):=(a_σ^-1(1),… , a_σ^-1(k)) for σ∈ S_k.
Define γ(I,σ):=γ(v_1 ⊗⋯⊗ v_k,σ). Then the above relation may be rephrased as γ(σ^-1 I,τ)γ(I,σ)=γ(I,στ).
We denote by Φ the resulting homomorphism: Φ: ℂ[S_k] → End_ℂ(V^⊗ k).
On the other hand the general linear Lie color algebra, 𝔤𝔩_ϵ(V) acts on V^⊗ k by twisted derivation:
x.(v_1 ⊗ v_2 ⊗⋯⊗ v_k)=∑_i=1^k (∏_j <iϵ(α, g_j)) v_1 ⊗ v_2 ⊗⋯⊗ x.v_i ⊗⋯⊗ v_k,
where x ∈𝔤𝔩_ϵ(V)_α and each v_j ∈ V_g_j.
We denote by Ψ the resulting homomorphism: Ψ: 𝔤𝔩_ϵ(V) → End_ℂ(V^⊗ k).
The group G also act on V^⊗ k by
g(v_1 ⊗ v_2 ⊗⋯⊗ v_k)=∏_iϵ(g, g_i)(v_1 ⊗ v_2 ⊗⋯⊗ v_k),
where g ∈ G and each v_i ∈ V_g_i.
We denote by η the resulting homomorphism: η: ℂ[G] → End_ℂ(V^⊗ k). We then have the double centralizer theorem for the general linear Lie color algebra.
Let A=Φ(ℂ[S_k]) and let B be the subalgebra of End_ℂ(V^⊗ k) generated by η(ℂ[G]) and Ψ(𝔤𝔩_ϵ(V)). Then A and B are centralizers of each other.
§.§
Fix a pair (G,ϵ), where G is a finite abelian group and ϵ is a skew-symmetric bicharacter on G. We define a graded algebra Λ_ϵ which generalizes the infinite Grassmann algebra in the super setting. Let X=∪_g ∈ GX_g be a G-graded set, where each X_g is countably infinite. Let Λ be the free ℂ-algebra generated by X and we define Λ_ϵ:=Λ/I, where I is the ideal generated by the elements xy-ϵ(g,h)yx, for all g,h ∈ G and for all x ∈ X_g and y ∈ X_h. Then Λ_ϵ is also G-graded.
If X={x_1, x_2, ⋯}, then the set {x_i_1x_i_2⋯ x_i_r: i_1 ≤ i_2 ≤⋯≤ i_r} is a basis of Λ_ϵ. For a fixed linear ordering of the elements of X, we set Λ_ϵ(N) to be the linear span of the basis vectors {x_i_1x_i_2⋯ x_i_r:1≤ i_1 ≤ i_2 ≤⋯≤ i_r≤ N}, where N ∈ℤ_≥ 0. We then have the following filtration:
Λ_ϵ(N) ⊂Λ_ϵ(N+1), for all N ≥ 0.
§.§
Given a G-graded vector space V=⊕_g ∈ G V_g, we set U:= V ⊗_ℂΛ_ϵ. Then U is also a G-graded Λ_ϵ-bimodule: U=⊕_g ∈ GU_g, where U_g=⊕_g_1+g_2=g V_g_1⊗ (Λ_ϵ)_g_2; x.u=ϵ(g,h)ux where x and u are of degrees g and h respectively.
Let End_Λ_ϵ(U):={T ∈ End_ℂ(U): T(ux)=T(u)x for u ∈ U, x ∈Λ_ϵ}. Then End_Λ_ϵ(U) is also G-graded in a natural way via
End_Λ_ϵ(U)_g={T ∈ End_Λ_ϵ(U): T(U_h) ⊆ U_g+h, for all h ∈ G}.
There is a natural embedding V↪ U given by v↦ v⊗ 1. Any ℂ-linear map T:V → V extends uniquely to an element of End_Λ_ϵ(U) and so we have End_Λ_ϵ(U) ≅ End_ℂ(V) ⊗_ℂΛ_ϵ.
More generally, for two G-graded vector spaces V and V', set U:=V ⊗_ℂΛ_ϵ and U':= V' ⊗_ℂΛ_ϵ; we denote by Hom_Λ_ϵ(U,U') the set of ℂ-linear maps from U to U' which commute with the right action of Λ_ϵ. Then Hom_ℂ(V, V') ⊗_ℂΛ_ϵ≅ Hom_Λ_ϵ(U,U'). In particular, if we denote by U^* the G-graded space Hom_Λ_ϵ(U,Λ_ϵ), then U^*≅ V^* ⊗_ℂΛ_ϵ.
With notation as above, it may be easily seen that U ⊗_Λ_ϵ U'≅ (V ⊗_ℂ V') ⊗_ℂΛ_ϵ via the map defined on homogeneous elements by v ⊗λ⊗ v' ⊗λ'↦ v ⊗ v' ⊗ϵ(|λ|,|v'|)λλ'. Here |λ|, |v'| denote the G-degrees of λ and v' respectively.
There is a pairing between U and U^* given by u ⊗α↦ϵ(|α|,|u|)α(u), where u∈ U and α∈ U^*. This will be called the evaluation map and is denoted by ev. More generally, let V_1,…, V_k be G-graded vector spaces and let W_i:=V_i ⊗_ℂΛ_ϵ. Let W=W_1⊕⋯⊕ W_k and let τ be the permutation which takes (1,2,⋯, k,k+1, ⋯ ,2k) to (τ(1), τ(2), ⋯ ,τ(k),τ(k+1), ⋯ ,τ(2k))=(1,3,5, ⋯ ,2k-1, 2,4,6, ⋯ ,2k). Then the group S_2k acts naturally on the 2k-fold tensor product (W⊕ W^*)^⊗ 2k. Under this action the element τ∈ S_2k induces an isomorphism between the subspaces W_1^* ⊗⋯⊗ W_k^* ⊗ W_1 ⊗⋯⊗ W_k and W_1^* ⊗ W_1 ⊗⋯⊗ W_k^* ⊗ W_k of (W⊕ W^*)^⊗ 2k. This isomorphism followed by the map W_1^* ⊗ W_1 ⊗⋯⊗ W_k^* ⊗ W_k→Λ_ϵ given by α_1 ⊗ w_1 ⊗⋯⊗α_k ⊗ w_k↦∏_iα_i(w_i) is called the evaluation map and is denoted by ev. The non-degeneracy of the pairing ev:W_1^* ⊗⋯⊗ W_k^* ⊗ W_1 ⊗⋯⊗ W_k→Λ_ϵ comes from noticing that this map is obtained by extending scalars to Λ_ϵ of a non-degenerate pairing over ℂ. The isomorphism induced by this non-degenerate pairing will be denoted by ι:W_1^* ⊗⋯⊗ W_k^* → (W_1 ⊗⋯⊗ W_k)^*.
§.§ Symmetric algebra on a graded vector space
Let G be a finite abelian group and ϵ is a skew-symmetric bicharacter on G. Let V=⊕_g ∈ G V_g be a G-graded vector space over ℂ. The tensor algebra on V is T(V)=⊕_k ≥ 0 V^⊗ k.
The symmetric algebra on V is defined to be S(V)=T(V)/I(V), where I(V) is the ideal of T(V) generated by elements of the form v ⊗ w-ϵ(g,h)w ⊗ v, where v and w are homogeneous elements of degrees g and h respectively. Note that the ideal I(V) is both ℤ-graded as well as G-graded.
We note that if we write V=⊕_g ∈ G V_g=V_G_0̅⊕ V_G_1̅, where V_G_0̅=⊕_g ∈ G_0̅ V_g and V_G_1̅=⊕_g ∈ G_1̅ V_g then since ϵ(v,v)=-1 for v ∈ V_G_1̅ we have v ⊗ v=0 in S(V).
We define the d-th symmetric power of V, written S^d(V) to be the image of V^⊗ d in S(V). Since I(V) is both ℤ as well as G-graded, we have S(V)=⊕_d ≥ 0S^d(V) and each S^d(V) is G-graded. If V_G_1̅=V then S(V) is denoted as ⋀(V) and it is called the exterior algebra of V.
We then have the following lemma which will be used in the proof of the main theorem.
(1) Given any map f:V → W, between two G-graded vector spaces, there is a unique map of ℂ-algebras S(f):S(V) → S(W) carrying V to W.
(2) For a G-graded vector space V, we have S(V ⊗_ℂΛ_ϵ)=S(V) ⊗_ℂΛ_ϵ.
(3) For two G-graded vector spaces V and W we have S(V ⊕ W)=S(V) ⊗ S(W).
(4) If V is a G-graded vector space and {v_1, v_2, ⋯ ,v_n} is a basis of V consisting of homogeneous elements, then the set of all monomials of the form v_1^i_1⋯ v_n^i_n such that ∑_j=1^n i_j=d, with 0≤ i_j≤ 1 whenever v_j ∈ V_G_1̅, forms a basis of S^d(V).
(1) The map f induces a map from T(V) to T(W) carrying the ideal of relations in T(V) to the ideal of relations in T(W). So we have an induced map of ℂ algebras S(f):S(V) → S(W), which is unique by the construction.
(2) It is clear that the assertion holds at the level of tensor algebras, that is, T(V ⊗_ℂΛ_ϵ)=T(V) ⊗_ℂΛ_ϵ. The algebra S(V) ⊗_ℂΛ_ϵ is obtained by factoring out from T(V) ⊗_ℂΛ_ϵ the ideal generated by the elements (x ⊗ y -ϵ(x,y)y ⊗ x) ⊗ 1. Under the above identification, the element (x ⊗ y -ϵ(x,y)y ⊗ x) ⊗ 1 corresponds to (x ⊗ 1) ⊗ (y ⊗ 1) -ϵ(x,y)(y ⊗ 1) ⊗ (x ⊗ 1), and these elements generate the ideal of relations in T(V ⊗_ℂΛ_ϵ).
(3) The proof is straightforward.
(4) We write V=⊕_g ∈ G V_g=V_G_0̅⊕ V_G_1̅, where V_G_0̅=⊕_g ∈ G_0̅ V_g and V_G_1̅=⊕_g ∈ G_1̅ V_g. Then S(V)=S(V_G_0̅) ⊗⋀(V_G_1̅), where S(V_G_0̅) and ⋀(V_G_1̅) are the symmetric and exterior algebras on V_G_0̅ and V_G_1̅ respectively. We then have the required result.
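Part (4) gives, in particular, a count of basis monomials: choosing k of the n odd-type generators (each with exponent at most 1) together with a degree-(d-k) monomial in the m even-type generators yields dim S^d(V) = ∑_k C(n,k) C(m+d-k-1, d-k). The brute-force sketch below checks this restatement (the closed formula is ours, derived from the lemma, not quoted from it):

    from itertools import combinations_with_replacement
    from math import comb

    def dim_sym(m, n, d):
        # choose k odd generators (exponent <= 1), then a degree d-k monomial
        # in the m even generators
        return sum(comb(n, k) * comb(m + (d - k) - 1, d - k)
                   for k in range(min(n, d) + 1))

    def brute_force(m, n, d):
        # generators 0..m-1 are even, m..m+n-1 are odd; odd squares vanish
        return sum(all(mono.count(i) <= 1 for i in range(m, m + n))
                   for mono in combinations_with_replacement(range(m + n), d))

    for m, n, d in [(2, 2, 3), (3, 1, 4), (1, 3, 2)]:
        assert dim_sym(m, n, d) == brute_force(m, n, d)
    print("monomial count matches the basis description in part (4)")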
§.§
For V=⊕_g ∈ G V_g and U=V ⊗_ℂΛ_ϵ, we set
M_ϵ(U):={T: U → U: T(U_g) ⊆ U_g, for all g ∈ G}.
With the identification, End_Λ_ϵ(U) ≅ U ⊗_Λ_ϵ U^*, M_ϵ(U) is spanned by all u ⊗ f, where u and f are homogeneous of opposite degree, i.e., |u|=-|f|. We define the trace function from M_ϵ(U) to (Λ_ϵ)_G_0 by tr(u ⊗ f)=ϵ(|u|, |f|)f(u) with the above identification.
By choosing a basis {v_1, ⋯ ,v_r, v_r+1⋯ ,v_n} of V, where the first r of them are in V_G_0̅ and the last n-r are in V_G_1̅ and identifying M_ϵ(U) with the space of matrices we get that
tr(A)=∑_i=1^ra_ii-∑_i=r+1^na_ii.
It satisfies tr(AB)=tr(BA) for A,B ∈ M_ϵ(U) (see Lemma 3.7 of <cit.>).
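A minimal numerical sketch of these two facts, assuming the ℤ_2-graded picture in which elements of M_ϵ(U) are block diagonal (an r× r even block and an (n-r)×(n-r) odd block; for a general G the same picture holds with one block per g∈ G):

    import numpy as np

    n, r = 5, 3                      # n basis vectors, the first r of even type
    rng = np.random.default_rng(1)

    def color_trace(A):
        return np.trace(A[:r, :r]) - np.trace(A[r:, r:])

    def random_degree_preserving():
        A = np.zeros((n, n))
        A[:r, :r] = rng.normal(size=(r, r))            # even-even block
        A[r:, r:] = rng.normal(size=(n - r, n - r))    # odd-odd block
        return A

    A, B = random_degree_preserving(), random_degree_preserving()
    assert np.isclose(color_trace(A @ B), color_trace(B @ A))
    print("tr(AB) = tr(BA) holds on M_eps(U)")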
Definition: The general linear color group GL_ϵ(U) is defined to be the group of invertible elements in M_ϵ(U).
The group GL_ϵ(U) acts on the k-fold tensor product U^⊗ k diagonally:
T.(u_1 ⊗⋯⊗ u_k)=(Tu_1 ⊗⋯⊗ Tu_k).
We denote by Ψ the resulting homomorphism: Ψ: GL_ϵ(U) → End_Λ_ϵ(U^⊗ k).
The action of the symmetric group on V^⊗ k defined in <ref> can be extended to an Λ_ϵ-linear action on U^⊗ k as follows:
For σ∈ S_k and u=u_1 ⊗ u_2 ⊗⋯⊗ u_k ∈ U^⊗ k where each u_i is a homogeneous element of U,
σ.u=γ(u, σ^-1)(u_σ^-1(1)⊗ u_σ^-1(2)⊗⋯⊗ u_σ^-1(k)),
where γ(u, σ)=∏_(i,j) ∈ Inv(σ)ϵ(|u_i|,|u_j|), with Inv(σ)={(i,j): i < j and σ(i) > σ(j)}.
As U=V ⊗Λ_ϵ is a Λ_ϵ-bimodule, U^⊗ k is also a Λ_ϵ-bimodule. The action of S_k and Λ_ϵ on U^⊗ k commute with each other. So the S_k action on U^⊗ k extends to an action of the group algebra Λ_ϵ[S_k].
We denote by Φ the resulting homomorphism: Φ: Λ_ϵ[S_k] → End_Λ_ϵ(U^⊗ k).
Then in <cit.>, Berele proved a version of Schur-Weyl duality for the general linear color group.
Let A be the subalgebra of End_Λ_ϵ(U^⊗ k) generated by Ψ(GL_ϵ(U)). Then the centralizer of A in End_Λ_ϵ(U^⊗ k) is Φ(Λ_ϵ[S_k]).
§ MIXED TENSOR SPACES
For a G-graded vector space V over ℂ, one can define the mixed tensor space as a direct sum of the form ⊕_i=1^s(V^⊗ b_i⊗ (V^*)^⊗ t_i) for an s∈ℕ and t_i,b_i∈ℕ∪{0}. For simplicity of notation we denote V^⊗ b_i⊗ (V^*)^⊗ t_i by V_b_i^t_i. Since each summand in the mixed tensor space has a G-grading, there is a natural G-grading inherited by their direct sum ⊕_i=1^s V_b_i^t_i. Taking U=V ⊗_ℂΛ_ϵ and U^*=Hom_Λ_ϵ(U,Λ_ϵ)≅ V^* ⊗_ℂΛ_ϵ, the mixed tensor space over Λ_ϵ is the G-graded space ⊕_i=1^s(U^⊗ b_i⊗_Λ_ϵ(U^*)^⊗ t_i) for an s∈ℕ and t_i,b_i∈ℕ∪{0}. It may be seen that (⊕_i=1^s V_b_i^t_i) ⊗_ℂΛ_ϵ≅⊕_i=1^s(U^⊗ b_i⊗_Λ_ϵ(U^*)^⊗ t_i). We shall denote this mixed tensor space over Λ_ϵ by W.
Let m= dim V_G_0̅ and n= dim V_G_1̅. Fixing an ordering for the elements of G_0̅ and G_1̅, we list the elements of G as {∘,g_1,g_2,…} with the elements in G_0̅ appearing first. We then arrange the basis vectors of V, {e_i}_i=1^m+n, such that the first m vectors are from V_G_0̅ and the rest are from V_G_1̅; further, they are ordered linearly so that i<j implies |e_i| appears before |e_j| under the fixed ordering of elements of G. Here we use the notation |v|=h for v∈ V_h. Let {e_i^*}_i=1^m+n be the dual basis corresponding to {e_i}_i=1^m+n. We denote the images in U and U^* of the above basis elements under the embeddings V↪ U and V^*↪ U^* also by the same notation. The mixed tensor space W then has a basis given in terms of the above bases of U and U^*.
Denote the element dual to the basis vector e_l_1⊗⋯⊗ e_l_b_i⊗ e^*_u_1⊗⋯⊗ e^*_u_t_i∈ U_b_i^t_i by T(i)_l_1… l_b_i^u_1… u_t_i. We denote the corresponding element in W^* also by the same notation. Notice that T(i)_l_1… l_b_i^u_1… u_t_i defines a linear map on W via the projection p_i:W→ U_b_i^t_i, i.e., T(i)_l_1… l_b_i^u_1… u_t_i(v_1,…,v_s)=T(i)_l_1… l_b_i^u_1… u_t_i(v_i) for (v_1,…,v_s)∈ W. The G-degree of the element T(i)_l_1… l_b_i^u_1… u_t_i∈ W^* is given by ∑_j=1^b_ih_l_j-∑_j=1^t_ih_u_j, where e_l_j∈ V_h_l_j and e^*_u_j∈ (V_h_u_j)^*. The set of all T(i)_l_1… l_b_i^u_1… u_t_i, where i=1,… , s and the u_j, l_j are from {1,⋯, m+n}, forms a Λ_ϵ-basis of W^*.
§.§ Symmetric algebra of the mixed tensor space
Let S(W^*) be the symmetric algebra of W^*. We denote by ϖ:T(W^*)→ S(W^*) the natural map from the tensor algebra of W^* to S(W^*). We know that S(W^*) has a ℤ-grading given by ⊕_r∈ℕ∪{0}S^r(W^*), where S^r(W^*) is the image under ϖ of T^r(W^*). The restriction of ϖ to T^r(W^*) is denoted as ϖ_r. By Lemma <ref>(2), S(W^*) can be identified with S(⊕_i=1^s(V_b_i^t_i)^*) ⊗_ℂΛ_ϵ; further, by Lemma <ref>(4) and the remarks in <ref>, this in turn is identified with [S((⊕_i=1^s(V_b_i^t_i)^*)_G_0̅) ⊗⋀((⊕_i=1^s(V_b_i^t_i)^*)_G_1̅)] ⊗_ℂΛ_ϵ. Using the relations among the T(i)_l_1… l_b_i^u_1… u_t_i, i=1,… , s, u_j, l_j∈{1,⋯, m+n}, which are given by the ideal I(W^*) of T(W^*), the latter identification yields a Λ_ϵ-basis for S(W^*) consisting of monomials in the T(i)_l_1… l_b_i^u_1… u_t_i, where the variables T(i)_l_1… l_b_i^u_1… u_t_i are arranged in order such that basis vectors of (⊕_i=1^s(V_b_i^t_i)^*)_g_l come before those of (⊕_i=1^s(V_b_i^t_i)^*)_g_k whenever l<k, and among them arranged from i=1,… ,s; the degree of each variable T(i)_l_1… l_b_i^u_1… u_t_i∈ (⊕_i=1^s(V_b_i^t_i)^*)_G_1̅ in such a monomial is either 0 or 1. The monomials among the above whose total degree is r form a basis for S^r(W^*) over Λ_ϵ.
For each r∈ℕ and an s-tuple (m_1,…, m_s) such that m_1+⋯+ m_s=r, the tensor space T^m_1((U_b_1^t_1)^*) ⊗⋯⊗ T^m_s((U_b_s^t_s)^*) is realised as a subspace of T^r(W^*). The image of the restriction of ϖ_r to this subspace is denoted as S^m_1,…,m_s(W^*). If the multidegree of the element T(i)_l_1… l_b_i^u_1… u_t_i∈ S(W^*) is set to be the s-tuple (0,…,0,1,0,…,0), with 1 in exactly the i-th position, then the monomials in the above listed basis of S^r(W^*) with multidegree (m_1,…,m_s) form a basis of S^m_1,…, m_s(W^*). The space ⊗_i=1^sS^m_i((U_b_i^t_i)^*) can be identified with S^m_1,…, m_s(W^*) under the map ϕ_1 ⊗⋯⊗ϕ_s↦ϕ_1⋯ϕ_s. It is easy to see that the following diagram commutes, by checking it does on the basis elements of T^m_1((U_b_1^t_1)^*) ⊗⋯⊗ T^m_s((U_b_s^t_s)^*):
⊗_i=1^s((U_b_i^t_i)^*)^⊗ m_i  --(ϖ_m_1⊗⋯⊗ϖ_m_s)-->  ⊗_i=1^sS^m_i((U_b_i^t_i)^*) ≅ S^(m_1,…, m_s)(⊕_i=1^s(U_b_i^t_i)^*)
        | (inclusion)                                               | (inclusion)
        v                                                           v
T^r(W^*)  --ϖ_r-->  S^r(W^*)
For (m_1,…, m_s), an s-tuple of non-negative integers such that ∑_i m_i=m, the subspace T^m_1((U_b_1^t_1)^*) ⊗⋯⊗ T^m_s((U_b_s^t_s)^*) of T^m(W^*) contains the direct sum of the subspaces
T^r_1(((U_b_1^t_1)^*)_0) ⊗ T^n_1(((U_b_1^t_1)^*)_1) ⊗⋯⊗ T^r_s(((U_b_s^t_s)^*)_0) ⊗ T^n_s(((U_b_s^t_s)^*)_1)
for (r_1,…, r_s), (n_1,…, n_s) non-negative integers such that r_j+n_j=m_j for each j=1,…, s.
Note that in the above diagram, all the maps are GL_ϵ(U)-equivariant; indeed, the GL_ϵ(U)-action on ((U_b_i^t_i)^*)^⊗ m_i (resp., (W^*)^⊗ r) commutes with the S_m_i-action (resp., S_r-action), and for M=W or M=U_b_i^t_i the space S^r(M^*) can be identified under ϖ_r with the GL_ϵ(U)-invariant subspace e(r)((M^*)^⊗ r), where e(r):=1/r!∑_σ∈ S_rσ. Thus the GL_ϵ(U)-action descends to S^r(M^*), thereby making the map ϖ_r a GL_ϵ(U)-equivariant map with respect to this action.
§.§ Main Result
Keeping notations as above, let ∑_i m_ib_i=N and ∑_i m_it_i=N'. We have the following sequence of GL_ϵ(U)-equivariant isomorphisms:
(U^⊗ (∑_i m_ib_i)⊗ (U^*)^⊗(∑_i m_it_i))^*→(⊗_i=1^s(U_b_i^t_i)^⊗ m_i)^*→⊗_i=1^s((U_b_i^t_i)^*)^⊗ m_i
The first isomorphism is induced on the duals by the permutation action of μ∈ S_N+N' on the subspace U^⊗ N⊗ (U^*)^⊗ N' of (U⊕ U^*)^⊗ (N+N'), where μ takes (1,…, N+N') to (1,… b_1, N+1, … , N+t_1, b_1+1, … 2b_1, ⋯ ). This isomorphism is GL_ϵ(U)-equivariant since the symmetric group action on (U⊕ U^*)^⊗ (N+N') commutes with the GL_ϵ(U)-action induced on it.
Let k=∑_i m_i and W=W_1⊕⋯⊕ W_k, where W_i=U_b_i^t_i. Then the group S_2k acts naturally on the 2k-fold tensor product (W⊕ W^*)^⊗ 2k. The second isomorphism in the above sequence is the inverse of ι as described in <ref> between W_1^* ⊗⋯⊗ W_k^* and (W_1 ⊗⋯⊗ W_k)^*. More explicitly, the isomorphism is given on the dual basis by
T_w_1 ⊗⋯⊗ w_∑_i m_i↦ p_ϵ(w,w) T(1)_w_1⊗⋯⊗ T(s)_w_∑_i m_i,
where w_i∈ W_i is a basis vector and T(i)_w_i is the dual vector in W_i^*; here for w_i=e_l_1⊗⋯⊗ e_l_b_i⊗ e^*_u_1⊗⋯⊗ e^*_u_t_i∈ U_b_i^t_i the notation T(i)_w_i represents the linear map T(i)_l_1… l_b_i^u_1… u_t_i in our earlier notation. The value of p_ϵ(w,w) for w=(w_1,…, w_k), where w_i∈ V_|w_i|, is given by ∏_i=1^k(∏_i≤ j≤ kϵ(|w_i|,|w_j|)). This isomorphism is GL_ϵ(U)-equivariant since the symmetric group action on (W⊕ W^*)^⊗ 2k commutes with the GL_ϵ(U)-action induced on it.
By a standard argument, the space U^⊗ N⊗ (U^*)^⊗ N' has GL_ϵ(U)-invariants if and only if N=N', and so does its dual, (U^⊗ N⊗ (U^*)^⊗ N')^*. When N=N', the latter space can be identified with End_Λ_ϵ(U^⊗ N) via the non-degenerate pairing End_Λ_ϵ(U^⊗ N) ⊗ (U^⊗ N⊗ (U^*)^⊗ N)→Λ_ϵ given by ⟨ A, v ⊗ f^*⟩:= ev(A(v) ⊗ f^*). The latter is ev(ν.τ. (A(v) ⊗ f^*)), where τ∈ S_2N is as in <ref> (with k replaced by N) and ν∈ S_2N is the permutation (1 2)(3 4)⋯ (2N-1 2N). This gives a GL_ϵ(U)-equivariant isomorphism
Θ: End_Λ_ϵ(U^⊗ N)→ (U^⊗ N⊗ (U^*)^⊗ N)^*.
§.§.§ Graded picture invariants for colored spaces
Given an s-tuple (m_1,…,m_s) of non-negative integers such that ∑_k=1^s m_kt_k=∑_k=1^s m_k b_k=N and a σ∈ S_N we associate the polynomial φ_σ∈ S(W^*) given by
∑_r_1,…, r_N∈{1,…, m+n} p_σ(r_1,…, r_N, m_1,…, m_s)∏_i=1^s∏_j=1^m_iT(i)_r'_(∑_p<im_pb_p+(j-1)b_i+1)⋯ r'_(∑_p<im_pb_p+jb_i)^r_(∑_p<im_pt_p+(j-1)t_i+1)⋯ r_(∑_p<im_pt_p+jt_i)
where r'_j:=r_σ(j) and where, for an N-tuple I=(r_1,…,r_N) of elements from {1,…, m+n} and M=(m_1,…, m_s)∈ (ℕ∪{0})^s such that ∑_k m_kt_k=∑_k m_k b_k=N, the scalar p_σ(I,M) is given by
γ(μ^-1.(I,σ I^*),(ντσ̂μ)^-1) p_ϵ(w,w),
where (r_1,…, r_N)^*:=(r_1^*,…, r_N^*), indicating that these are the indices corresponding to the basis vectors e_r_1^*,…,e_r_N^* in the dual space U^*; μ, τ, ν are as defined above; and for σ∈ S_N we define σ̂∈ S_2N by
σ̂(i) = σ(i) for i≤ N, and
σ̂(i) = i for i > N.
The vector w:=w_1 ⊗ w_2 ⊗⋯⊗ w_∑_i=1^sm_i, where each w_∑_p<im_p+j:=e_r_∑_p<im_pb_p+(j-1)b_i+1⊗⋯⊗ e_r_∑_p<im_pb_p+jb_i⊗ e_r_σ^-1(∑_p<im_pt_p+(j-1)t_i+1)^*⊗⋯⊗ e_r_σ^-1(∑_p<im_pt_p+jt_i)^* for i=1,…,s and j=1,… ,m_i.
The polynomials φ_σ as defined above[In (<ref>), the monomials should be considered modulo the anti-commutativity relations in S(W^*). See Remark <ref> above.] are called the graded picture invariants.
With the above notation, the elements φ_σ linearly span S(W^*)^GL_ϵ(U).
As was seen earlier in this section, the space (U^⊗ N⊗ (U^*)^⊗ N')^* has GL_ϵ(U)-invariants if and only if N=N'. By Theorem <ref>, we know that the GL_ϵ(U)-invariants of End_Λ_ϵ(U^⊗ N) are spanned over Λ_ϵ by the image of S_N. So, via the isomorphism Θ, we get invariants on (U^⊗ N⊗ (U^*)^⊗ N)^*. Let (m_1,…,m_s) be an s-tuple of non-negative integers such that ∑_im_it_i=∑_i m_ib_i=N, and let σ∈ S_N. Going by the isomorphisms in (<ref>) and projecting Θ(σ) onto ⊗_i=1^sS^m_i((U_b_i^t_i)^*) via ϖ_m_1⊗⋯⊗ϖ_m_s, we arrive, under the natural identification of ⊗_i=1^sS^m_i((U_b_i^t_i)^*) with S^m_1,…,m_s(W^*), at the invariants φ_σ defined above. Since S(W^*) is the direct sum ⊕_r∈ℤ_≥ 0S^r(W^*), each summand of which in turn is a direct sum of the S^m_1,…, m_s(W^*) as (m_1,…,m_s) varies over s-tuples of non-negative integers such that ∑_im_i=r, we get the required result.
§.§ Restitution map and the polynomial ring on W_ø
For a G-graded vector space V, let M:=V ⊗_ℂΛ_ϵ and let M_ø denote the graded component in M corresponding to the identity element ø∈ G. In this section we define the polynomial ring over M_ø and the `restitution map' from the space of multilinear maps on W to this polynomial ring. For this, let us consider the Λ_ϵ-module of all functions from M to Λ_ϵ, denoted by ℱ(M,Λ_ϵ). Let F^r:T^r(M^*)→ℱ(M,Λ_ϵ) be given by F^r(α)(v)=ev(α⊗ v ⊗⋯⊗ v), where ev:T^r(M^*) ⊗ T^r(M)→Λ_ϵ is as defined in <ref>.
In other words, the following diagram commutes:
⊗_i=1^sS^m_i((U_b_i^t_i)^*)  --(F^m_1⊗⋯⊗F^m_s)-->  ⊗_i=1^sℱ((U_b_i^t_i)_ø,Λ_ϵ)
        ^ ϖ_m_1⊗⋯⊗ϖ_m_s                                   | (multiplication of the component functions)
        |                                                  v
⊗_i=1^s((U_b_i^t_i)^*)^⊗ m_i → S^(m_1,…, m_s)(⊕_i=1^s(U_b_i^t_i)^*) ⊆ S^r(W^*)  --F^r-->  ℱ(W_ø,Λ_ϵ)
The symmetric group S_r acts on T^r(M) as described in <ref>. We then have the following analogue of <cit.>
For σ=(i i+1)∈ S_r, F^r(σ.α)(v)=ϵ(|α_i|,|α_i+1|)ϵ(|v|,|α_i|-|α_i+1|)F^r(α)(v). In particular, F^r(α)(v)=0 for α∈ I(M^*) and v∈ M_∘.
The lemma follows from a simple calculation using the identity <ref>(1) satisfied by ϵ.
Let ℱ(M_ø,Λ_ϵ) be the Λ_ϵ-module of all functions from M_ø to Λ_ϵ.
Let F^∙:=⊕_r=0^∞ F^r:T(M^*)→ℱ(M_ø,Λ_ϵ). As a consequence of the above lemma, we note that this map factors through S(M^*). We continue to denote the induced map from S(M^*)→ℱ(M_ø,Λ_ϵ) also by F^∙, and its restriction to S^r(M^*) as F^r.
The next result allows us to define polynomials on M_o via the restitution map. For the purpose of the proof, we fix a basis v_1,… ,v_k of V ordered such that |v_i|∈ G comes before |v_j|∈ G whenever i<j. (Here the elements of G are ordered as given in the beginning of <ref>). Let ϕ_1,…,ϕ_k be dual to the above basis. Denote the bases of M and M^* corresponding to the above bases also by the same notation.
Let 𝒫^r(M_ø) be the image of F^r in ℱ(M_ø,Λ_ϵ). The space of polynomials on M_ø, denoted 𝒫(M_ø), is the image of F^∙:=⊕_r=0^∞ F^r:S(M^*)→ℱ(M_ø,Λ_ϵ). The following proposition is a G-graded analogue of <cit.>.
The map F^∙ is an isomorphism from S(M^*)→𝒫(M_ø).
The surjectivity of the map F^∙: S(M^*)→𝒫(M_ø) is just a consequence of the definition of 𝒫(M_ø). To obtain the injectivity, we show the injectivity of each F^r. For this we consider an f in the kernel of F^r. The symmetric algebra S(M^*) has a basis given by monomials in the ϕ_i; the monomials whose total degree is r form a basis of S^r(M^*). Expressing f in terms of this basis, we have
f=∑_r_1+⋯+r_k=rλ_r_1,…,r_kϕ_1^r_1⋯ϕ_k^r_k.
Let v=∑_iλ_iv_i∈ M_ø for some λ_i∈Λ_ϵ. Since v∈ M_ø, |λ_i|=-|v_i| for all i. Evaluating F^r(f) at v, we get
F^r(f)(v)=∑_r_1+⋯ +r_k=rλ_r_1,…,r_kλ_1^r_1⋯λ_k^r_k.
Choose N>0 such that λ_r_1,…,r_k∈Λ_ϵ(N) for all indices r_1,…,r_k such that ∑_i r_i=r. We now inductively choose λ_j for j=1,…,k such that λ_j∈ (Λ_ϵ(N_j)∖Λ_ϵ(N_j-1))∩ X_-|v_j| for some suitable N_j>N_j-1; set N_0=N. For this choice of scalars λ_i, noting that the individual terms in the summation are distinct basis vectors of Λ_ϵ, we deduce that λ_r_1,…,r_k=0 for all indices, i.e., f=0.
The space 𝒫^r(M_ø) is called the space of polynomials of degree r on M_ø. For f∈ S^r(M^*) and w∈ M_ø, we note that F^r(f)(w)=ev(f̃⊗ w^⊗ r) where w^⊗ r=w ⊗ w ⊗⋯⊗ w (r times) and f̃∈ T^r(M^*) is any lift of f, i.e., ϖ_r(f̃)=f (recall, ϖ_r: T^r(M^*)→ S^r(M^*) is the natural quotient map).
§.§ Polynomial invariants of W_ø
Let W be the mixed tensor space as defined in the beginning of this section. Then the graded component of W corresponding to the identity element ø∈ G is W_ø=⊕_i=1^s(U_b_i^t_i)_ø. As described above, we obtain the restitution map F^r:S^r(W^*)→ℱ(W_ø,Λ_ϵ).
For a tuple (m_1,…,m_s) such that ∑_i=1^sm_i=r,
let T_σ∈ (⊗_i=1^s(U_b_i^t_i)^⊗ m_i)^* be the linear map corresponding to σ∈ S_N obtained in <ref>. The graded picture invariants defined in <ref> are the images in S^m_1,…, m_s(W^*) of these linear maps T_σ, σ∈ S_N. Then we have,
The graded picture invariant φ_σ∈ S^(m_1,…,m_s)(W^*) maps under restitution F^r to the element of ℱ(W_ø,Λ_ϵ) given by u↦ T_σ(u_1^⊗ m_1⊗⋯⊗ u_s^⊗ m_s) where u=(u_1,…,u_s)∈ W_ø.
As noted earlier in <ref>, the isomorphism ι:T^r(W^*)→ (T^r(W))^* is obtained from the non-degenerate pairing T^r(W^*)⊗ T^r(W)→Λ_ϵ given by the evaluation map. Therefore, for any ϕ∈ T^r(W^*) we get a linear map ι(ϕ) on T^r(W) and the evaluation ι(ϕ)(w_1⊗⋯⊗ w_r) is given by ev(ϕ⊗ w_1⊗⋯⊗ w_r). In particular, F^r(ϕ)(u)=ι(ϕ)(u⊗⋯⊗u).
The linear maps on U_b_i^t_i are regarded as linear maps on W via the projection p_i: W→ U_b_i^t_i. Denote the induced map on the dual spaces as p_i^*:U_b_i^t_i^*→ W^*.
For a tuple (m_1,…,m_s) such that ∑_i=1^sm_i=r, T^m_1(U_b_1^t_1^*)⊗⋯⊗ T^m_s(U_b_s^t_s^*) is a subspace of T^r(W^*), via ⊗_i p_i^*^⊗ m_i. Similarly, ⊗_i=1^s(U_b_i^t_i)^⊗ m_i is a direct summand of the tensor space T^r(W), so the projection map pr: T^r(W)↠(⊗_i=1^s(U_b_i^t_i)^⊗ m_i) induces an injective map (⊗_i=1^s(U_b_i^t_i)^⊗ m_i)^*↪ T^r(W)^*
given by ψ↦ [(w_1⊗⋯⊗ w_r)↦ψ∘pr(w_1⊗⋯⊗ w_r)] for ψ∈(⊗_i=1^s(U_b_i^t_i)^⊗ m_i)^* and w_1⊗⋯⊗ w_r∈ T^r(W). The non-degenerate pairing above, restricted to the subspace T^m_1(U_b_1^t_1^*)⊗⋯⊗ T^m_s(U_b_s^t_s^*)⊗ T^m_1(U_b_1^t_1)⊗⋯⊗ T^m_s(U_b_s^t_s) via the maps described above, is a non-degenerate pairing and induces the isomorphism in (<ref>). For r=∑_i m_i, f_1⊗⋯⊗f_r∈⊗_i=1^s(U_b_i^t_i^*)^⊗ m_i and u∈ W_ø, we have
ι∘ (⊗_ip_i^*^⊗ m_i)(f_1⊗⋯⊗f_r)(u⊗⋯⊗u)=ev(p_1^*(f_1)⊗⋯⊗ p_s^*(f_r)⊗u⊗⋯⊗u).
(Here the scaling factor involved is 1 since u∈ W_ø.)
On the other hand, the isomorphism in (<ref>) takes
f_1⊗⋯⊗f_r to the linear map on T^r(W) given by w_1⊗⋯⊗ w_r ↦ ev(f_1⊗⋯⊗f_r⊗ pr(w_1⊗⋯⊗ w_r)). This linear map on ⊗_i=1^s(U_b_i^t_i)^⊗ m_i is also denoted by ι(f_1⊗⋯⊗f_r). When w_i=u∈ W_ø for all i, the latter equals ev(f_1⊗⋯⊗f_r⊗ u_1^⊗ m_1⊗⋯⊗ u_s^⊗ m_s).
This in turn evaluates to the right hand side of the above equation.
In the construction of φ_σ in Theorem <ref>, the map T_σ∈(⊗_i=1^s(U_b_i^t_i)^⊗ m_i)^* maps to φ_σ∈ S^r(W^*). Let φ̃_σ∈ T^m_1(U_b_1^t_1^*)⊗⋯⊗ T^m_s(U_b_s^t_s^*) be such that ι(φ̃_σ)=T_σ and ϖ_r(⊗_ip_i^*^⊗ m_i(φ̃_σ))=φ_σ. By the above discussion, we have F^r(φ_σ)(u)=ev(⊗_ip_i^*^⊗ m_i(φ̃_σ)⊗u⊗⋯⊗u)=ev(φ̃_σ⊗ u_1^⊗ m_1⊗⋯⊗ u_s^⊗ m_s) where u=(u_1,…,u_s)∈ W_ø. As ι(φ̃_σ)=T_σ, the latter is T_σ(u_1^⊗ m_1⊗⋯⊗ u_s^⊗ m_s), as required.
§.§.§ Graded picture invariants in terms of traces
We now restrict to the case when W=(U_1^1)^s. Under the identification U_1^1≅ End_ (U), we define a product operation on U U^* as (vα). (wβ)=vα(w)β making the identification an isomorphism of G-graded algebras. Consider the trace function on U_1^1 given by tr(vα)=ϵ(|v|,|α|)α(v). With this notation, one may define the trace monomial tr_σ for a permutation σ∈ S_N as tr_σ(v_1ϕ_1⋯ v_Nϕ_N)=tr(v_i_1ϕ_i_1.v_i_2ϕ_i_2.⋯ .v_i_rϕ_i_r)tr(v_j_1ϕ_j_1.v_j_2ϕ_j_2.⋯ .v_j_tϕ_j_t)⋯
where σ^-1=(i_1 i_2 … i_r)(j_1 j_2⋯ j_t)⋯. This definition is dependent on the permutation σ and its cycle decomposition as well. However, if we restrict to v_1ϕ_1⋯ v_Nϕ_N coming from W_∘ then the definition is independent of the cycle decomposition of the permutation. The proof of the following is analogous to that of Lemma 3.8 of <cit.>:
(see <cit.>)
For a σ∈ S_N such that σ^-1=(i_1 i_2 … i_r)(j_1 j_2⋯ j_t)⋯∈ S_N, the _ϵ(U)-invariant map T_σ (as in Proposition <ref>) corresponds to the trace monomials tr_σ up to a scalar. Further, both the maps agree when restricted to the degree 0 part, ((U_1^1)_ø)^ N.
The action of _ϵ(U) on S(W^*) induces an action on 𝒫(W_ø) for which the isomorphism F^∙ is _ϵ(U)-equivariant; the invariants in 𝒫(W_ø) under this action are called the invariant polynomials on W_ø. We recover Theorem 5.6 of <cit.> as follows.
The invariant polynomials for the simultaneous action of _ϵ(U) on ⊕_i=1^s(U_1^1)_ø are spanned by the trace monomials tr_σ given by
tr_σ(A_1,…, A_s):= tr(A_f(i_1)⋯ A_f(i_r)) tr(A_f(i_r+1)⋯ A_f(i_t))⋯
where (A_1,…, A_s)∈⊕_i=1^s(U_1^1)_ø, σ=(i_1 … i_r)(i_r+1… i_t)⋯∈ S_n, and f:{1,…, n}→{1,…, s} is a map, as n varies over the positive integers.
The invariants in 𝒫(W_ø) are the image of S(W^*)^_ϵ(U), which in turn is spanned by the φ_σ by Theorem <ref>. Proposition <ref> followed by Lemma <ref> then gives the required result.
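To make the trace monomials concrete, the following Python sketch evaluates tr_σ(A_1,…,A_s) as a product of one trace factor per cycle of σ. It is illustrative only: the cycle data for σ and the index map f are arbitrary choices, and we work in the purely even case, where the ϵ-trace reduces to the ordinary matrix trace.

import numpy as np

def trace_monomial(cycles, f, mats):
    """Evaluate tr_sigma(A_1,...,A_s): one ordinary trace factor
    tr(A_{f(i_1)} ... A_{f(i_r)}) per cycle (i_1 ... i_r) of sigma.
    `cycles` lists the cycles of sigma as tuples of indices in {1,...,n},
    `f` maps those indices to {1,...,s}, and mats[k-1] is A_k."""
    value = 1.0
    for cycle in cycles:
        prod = np.eye(mats[0].shape[0])
        for i in cycle:
            prod = prod @ mats[f[i] - 1]
        value *= np.trace(prod)
    return value

# Example: sigma = (1 2)(3) in S_3 with f(1)=1, f(2)=2, f(3)=1, s = 2.
rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
print(trace_monomial([(1, 2), (3,)], {1: 1, 2: 2, 3: 1}, [A1, A2]))
# equals tr(A1 @ A2) * tr(A1)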
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
No data was used for the research described in the article.
References

[Berele] A. Berele: Trace identities and ℤ/2ℤ-graded invariants, Trans. Amer. Math. Soc. 309, no. 2, 581-589 (1988).
[Berele2] A. Berele: Invariant theory and trace identities associated with Lie color algebras, J. Algebra 310, 194-206 (2007).
[BMPZ] Y.A. Bahturin, A.A. Mikhalev, V.M. Petrogradsky, M.V. Zaicev: Infinite Dimensional Lie Superalgebras, De Gruyter, Berlin (1992).
[CSO] X.W. Chen, S.D. Silvestrov, F. Van Oystaeyen: Representations and cocycle twists of color Lie algebras, Algebr. Represent. Theor. 9, 633-650 (2006).
[dks] S. Datt, V. Kodiyalam, V.S. Sunder: Complete invariants for complex semisimple Hopf algebras, Math. Res. Lett. 10, no. 5-6, 571-586 (2003).
[FM] D. Fischman, S. Montgomery: A Schur double centralizer theorem for cotriangular Hopf algebras and generalized Lie algebras, J. Algebra 168, 594-614 (1994).
[KR] V. Kodiyalam, K.N. Raghavan: Picture invariants and the isomorphism problem for complex semisimple Lie algebras, Contemporary Mathematics, Vol. 390, 127-136 (2005).
[LZ] G.I. Lehrer, R.B. Zhang: The first fundamental theorem of invariant theory for the orthosymplectic supergroup, Comm. Math. Phys. 349, no. 2, 661-702 (2017).
[M] E. Meir: Semisimple Hopf algebras via geometric invariant theory, Adv. Math. 311, 61-90 (2017).
[Moon] D. Moon: The centralizer algebras of Lie color algebras, Comm. Algebra 27(7), 3233-3261 (1999).
[PS] S. Pattanayak, P. Samuel: Graded picture invariants and polynomial invariants for mixed tensor superspaces, J. Algebra 661, 595-621 (2025).
[P] C. Procesi: The invariant theory of n×n matrices, Adv. Math. 19, no. 3, 306-381 (1976).
[R] R. Ree: Generalized Lie elements, Canad. J. Math. 12, 439-502 (1960).
[Samuel] P. Samuel: Separating invariants for Hopf algebras of small dimensions, Math. Res. Lett. 27, no. 2, 551-563 (2020).
[SZZ] Y. Su, K. Zhao, L. Zhu: Simple Lie color algebras of Weyl type, Isr. J. Math. 137, 109-123 (2003).
[Z] K. Zhao: Simple Lie color algebras from graded associative algebras, J. Algebra 269(2), 439-455 (2003).
|
http://arxiv.org/abs/2409.02745v1 | 20240904142434 | Adaptive Formation Learning Control for Cooperative AUVs under Complete Uncertainty | [
"Emadodin Jandaghi",
"Mingxi Zhou",
"Paolo Stegagno",
"Chengzhi Yuan"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Adaptive Formation Learning Control for Cooperative AUVs under Complete Uncertainty

Emadodin Jandaghi, Mingxi Zhou, Paolo Stegagno, Chengzhi Yuan

Received 2024; accepted 2024
================================
§ ABSTRACT
This paper introduces an innovative two-layer control framework for Autonomous Underwater Vehicles (AUVs), tailored to master completely uncertain nonlinear system dynamics in the presence of challenging hydrodynamic forces and torques. Distinguishing itself from prior studies, which only regarded certain matrix coefficients as unknown while assuming the mass matrix to be known, this research significantly progresses by treating all system dynamics, including the mass matrix, as unknown. This advancement renders the controller fully independent of the robot’s configuration and the diverse environmental conditions encountered. The proposed framework is universally applicable across varied environmental conditions that impact AUVs, featuring a first-layer cooperative estimator and a second-layer decentralized deterministic learning (DDL) controller. This architecture not only supports robust operation under diverse underwater scenarios but also adeptly manages environmental effects such as changes in water viscosity and flow, which influence the AUV's effective mass and damping dynamics. The first-layer cooperative estimator fosters seamless inter-agent communication by sharing crucial system estimates without the reliance on global information, while the second-layer DDL controller utilizes local feedback to precisely adjust each AUV’s trajectory, ensuring accurate formation control and dynamic adaptability. Furthermore, the capability for local learning and knowledge storage through radial basis function neural networks (RBF NNs) is pivotal, allowing AUVs to efficiently reapply learned dynamics following system restarts. Comprehensive simulations validate the effectiveness of our groundbreaking framework, highlighting its role as a major enhancement in the field of distributed adaptive control systems for AUVs. This approach not only bolsters operational flexibility and resilience but is also essential for navigating the unpredictable dynamics of marine environments.
§ KEYWORDS:
Environment-independent Controller, Autonomous Underwater Vehicles (AUV), Dynamic learning, formation learning control, Multi-agent systems, Neural network control, Adaptive control, Robotics.
§ INTRODUCTION
Robotics and autonomous systems have a wide range of applications, spanning from manufacturing and surgical procedures to exploration in challenging environments <cit.>. However, controlling robots in challenging environments, such as space and underwater, presents significant difficulties due to the unpredictable dynamics in these settings. In the context of underwater exploration, Autonomous Underwater Vehicles (AUVs) have emerged as indispensable tools. These vehicles are not only cost-effective and reliable but also versatile in adapting to the dynamic and often harsh conditions of underwater environments <cit.>. Unlocking the mysteries of marine environments hinges on the effective use of AUVs, making advancements in their control and operation critical.
As the complexity of tasks assigned to AUVs increases, there is a growing need to enhance their operational capabilities. This includes developing sophisticated formation control strategies that allow multiple AUVs to operate in coordination, mimicking collaborative behaviors seen in natural swarms such as fish schools or bird flocks. These strategies are essential for ensuring efficient, stable, and precise operations in varied underwater tasks, ranging from pipeline inspections to seafloor mapping <cit.>. Multi-AUV formation control presents significant challenges due to the intricate nonlinear dynamics and complex interaction behaviors among the AUVs, as well as the uncertain and dynamic nature of underwater environments. Despite these difficulties, the problem has garnered substantial interest from the marine technology and control engineering communities, driven by the extensive applications of AUVs in modern ocean industries, making effective multi-AUV formation control increasingly critical <cit.>.
Historically, formation control research has predominantly utilized the behavioral approach <cit.>, which segmented the overall formation control design problem into multiple subproblems. Each vehicle's control action is then determined by a weighted average of solutions to these subproblems, facilitating decentralized implementation. However, defining the appropriate weighting parameters often presents challenges. Another method, the leader-following approach <cit.>, selected one vehicle as the leader while the rest function as followers. The leader pursues a specified path, and the followers maintain a pre-defined geometric relationship with the leader, enabling control over the formation behavior by designing specific motion behaviors for the leader. Additionally, the virtual structure approach <cit.> modified this concept by not assigning a physical leader but instead creating a virtual structure to set the desired formation pattern. This method combines the benefits of the leader-following approach with greater flexibility, as the dynamics of the virtual leader can be tailored for more adaptable formation control designs.
Despite extensive literature in the field, much existing research assumes homogeneous dynamics and certain system parameters for all AUV agents, which is unrealistic in unpredictable underwater environments. Factors such as buoyancy, drag, and varying water viscosity significantly alter system dynamics and behavior. Additionally, AUVs may change shape during tasks like underwater sampling or when equipped with robotic arms, further complicating control. Typically, designing multi-AUV formation control involves planning desired formation paths and developing tracking controllers for each AUV. However, accurately tracking these paths is challenging due to the complex nonlinear dynamics of AUVs, especially when precise models are unavailable. Implementing a fully distributed and decentralized formation control system is also difficult, as centralized control designs become exceedingly complex with larger AUV groups.
In previous work, such as <cit.>, controllers relied on the assumption of a known mass matrix, which is not practical in real-world applications. Controllers dependent on known system parameters can fail when varying external environmental conditions alter the internal forces acting on the vehicle. The solution lies in developing environment-independent controllers that do not rely on any specific system dynamical parameters, allowing the robot to follow desired trajectories based on reference signals without any knowledge of the system's dynamical parameters.
This paper achieves a significant advancement by considering all system dynamics parameters as unknown, enabling a universal application across all AUVs, regardless of their operating environments. This universality is crucial for adapting to environmental variations such as water flow, which increases the AUV’s effective mass via the added mass phenomenon and affects the vehicle's inertia. Additionally, buoyancy forces that vary with depth, along with hydrodynamic forces and torques—stemming from water flow variations, the AUV’s unique shape, its appendages, and drag forces due to water viscosity—significantly impact the damping matrix in the AUV's dynamics. By addressing these challenges, the proposed framework ensures robust and reliable operation across a spectrum of challenging scenarios.
The framework's control architecture is divided into a first-layer cooperative estimator and a second-layer decentralized deterministic learning (DDL) controller. The first-layer observer is pivotal in enhancing inter-agent communication by sharing crucial system estimates, operating independently of any global information. Concurrently, the second-layer DDL controller utilizes local feedback to finely adjust each AUV’s trajectory, ensuring resilient operation under dynamic conditions heavily influenced by hydrodynamic forces and torques while treating the system uncertainty as completely unknown. This dual-layer setup not only facilitates adaptation to uncertain AUV dynamics but also leverages radial basis function neural networks (RBF NNs) for precise local learning and effective knowledge storage. Such capabilities enable AUVs to efficiently reapply previously learned dynamics following any system restarts. This framework improves operational efficiency and advances the field of autonomous underwater vehicle control by laying a robust foundation for future enhancements in distributed adaptive control systems and fostering collaborative intelligence among multi-agent networks in marine environments. Extensive simulations have underscored the effectiveness of our framework, demonstrating its substantial potential to elevate the adaptability and resilience of AUV systems under demanding conditions.
The rest of the paper is organized as follows: Section <ref> provides an initial overview of graph theory, Radial Basis Function (RBF) Neural Networks, and the problem statement. The design of the distributed cooperative estimator and the decentralized deterministic learning controller are discussed in Section <ref>. The formation adaptive control and formation control using pre-learned dynamics are explored in Section <ref> and Section <ref>, respectively. Simulation studies are presented in Section <ref>, and Section <ref> concludes the paper.
§ PRELIMINARIES AND PROBLEM STATEMENT
§.§ Notation and Graph Theory
Denoting the set of real numbers as ℝ, we define ℝ^m × n as the set of m × n real matrices, and ℝ^n as the set of n × 1 real vectors.
The identity matrix is symbolized as 𝐈.
The vector with all elements being 1 in an n-dimensional space is represented as 1_n. The sets 𝕊_+^n and 𝕊_-^n+ stand for real symmetric n × n and positive definite matrices, respectively. A block diagonal matrix with matrices X_1, X_2, …, X_p on its main diagonal is denoted by diag{X_1, X_2, …, X_p}.
A ⊗ B signifies the Kronecker product of matrices A and B.
For a matrix A, A⃗ is the vectorization of A by stacking its columns on top of each other. For a series of column vectors x_1, …, x_n, col{x_1, …, x_n} represents a column vector formed by stacking them together.
Given two integers k_1 and k_2 with k_1 < k_2, 𝐈[k_1, k_2] = {k_1, k_1 + 1, …, k_2}. For a vector x ∈ℝ^n, its norm is defined as |x| := (x^T x)^1/2. For a square matrix A, λ_i(A) denotes its i-th eigenvalue, while λ_min(A) and λ_max(A) represent its minimum and maximum eigenvalues, respectively.
A directed graph G = (V, E) comprises nodes in the set V = {1, 2, …, N} and edges in E ⊆ V × V. An edge from node i to node j is represented as (i, j), with i as the parent node and j as the child node; node i is then termed a neighbor of node j. Let N_i denote the subset of V consisting of the neighbors of node i. A sequence of edges in G, (i_1, i_2), (i_2, i_3), …, (i_k, i_k+1), is called a path from node i_1 to node i_k+1; node i_k+1 is then reachable from node i_1. A directed tree is a graph where each node, except for a root node, has exactly one parent, and the root node is reachable from all other nodes. A directed graph G contains a directed spanning tree if at least one node can reach all other nodes. The weighted adjacency matrix of G is a non-negative matrix A = [a_ij] ∈ℝ^N × N, where a_ii = 0 and a_ij > 0 if and only if (j, i) ∈ E. The Laplacian of G is denoted as L = [l_ij] ∈ℝ^N × N, where l_ii = ∑_j=1^N a_ij and l_ij = -a_ij if i ≠ j.
It is established that L has at least one eigenvalue at the origin, and all nonzero eigenvalues of L have positive real parts.
From <cit.>, L has one zero eigenvalue and remaining eigenvalues with positive real parts if and only if G has a directed spanning tree.
§.§ Radial Basis Function NNs
The RBF neural networks can be described as f_nn(Z)=∑_i = 1^Nw_i s_i(Z)=W^TS(Z), where Z∈Ω_Z⊆ℝ^q is the input vector and W=[w_1,…,w_N]^T∈ℝ^N is the weight vector <cit.>. N indicates the number of NN nodes, S(Z)=[s_1(||Z-μ_1||),…,s_N(||Z - μ_N||)]^T, where each s_i(·) is an RBF and the μ_i (i = 1,…,N) are distinct points in the state space.
The Gaussian function s_i (||Z - μ_i||)=exp[-(Z-μ_i)^T(Z-μ_i)/η_i^2] is generally used for the RBF, where μ_i=[μ_i1,μ_i2,…,μ_iq]^T is the center and η_i is the width of the receptive field. The Gaussian function is a localized radial basis function in the sense that s_i (||Z - μ_i||)→ 0 as ||Z||→∞.
Moreover, for any bounded trajectory Z(t) within the compact set Ω_Z, f(Z) can be approximated using a limited number of neurons located in a local region along the trajectory: f(Z) = W^*T_ζ S_ζ(Z) + ϵ_ζ, where ϵ_ζ is the approximation error, with ϵ_ζ = O(ϵ) = O(ϵ^*), S_ζ(Z) = [s_j_1(Z), …, s_j_ζ(Z)]^T ∈ℝ^N_ζ, W^*_ζ = [w^*_j_1, …, w^*_j_ζ]^T ∈ℝ^N_ζ, N_ζ < N, and the indices j_1, …, j_ζ are defined by |s_j_i(Z_p)| > ι (ι > 0 a small positive constant) for some Z_p ∈ Z(t). This holds as long as ||Z(t) - μ_j_i|| < ϵ for some t > 0. The following lemma regarding the persistent excitation (PE) condition for RBF NNs is recalled from <cit.>.
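As an illustration of this construction, the following Python sketch builds the Gaussian regressor S(Z) on a regular lattice of centers and evaluates the network output W^T S(Z); the lattice bounds, width, and node count are illustrative assumptions rather than values prescribed by the analysis.

import numpy as np

def rbf_regressor(Z, centers, width):
    """Gaussian regressor S(Z): s_i(Z) = exp(-||Z - mu_i||^2 / width^2)."""
    diff = centers - Z                       # shape (N, q)
    return np.exp(-np.sum(diff**2, axis=1) / width**2)

# Illustrative lattice of N = 25 centers covering Omega_Z = [-1, 1]^2.
grid = np.linspace(-1.0, 1.0, 5)
centers = np.array([[x, y] for x in grid for y in grid])
width = 0.5
W = np.zeros(len(centers))                   # weights, adapted online later

Z = np.array([0.2, -0.4])
f_nn = W @ rbf_regressor(Z, centers, width)  # network output W^T S(Z)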
Consider any continuous recurrent trajectory[A recurrent trajectory represents a large set of periodic and periodic-like trajectories generated from linear/nonlinear dynamical systems. A detailed characterization of recurrent trajectories can be found in <cit.>.] Z(t) : [0, ∞) →ℝ^q. Z(t) remains in a bounded compact set Ω_Z ⊂ℝ^q. Then for an RBF Neural Network (NN) W^T S(Z) with centers placed on a regular lattice (large enough to cover the compact set Ω_Z), the regressor subvector S_ζ(Z) consisting of RBFs with centers located in a small neighborhood of Z(t) is persistently exciting.
§.§ Problem Statement
A multi-agent system comprising N AUVs with heterogeneous nonlinear uncertain dynamics is considered. The dynamics of each AUV can be expressed as <cit.>:
η̇_i = J_i(η_i)ν_i,
M_iν̇_i + C_i(ν_i)ν_i + D_i(ν_i)ν_i + g_i(η_i) + Δ_i(χ_i) = τ_i.
In this study, the subscript i ∈ I[1, N] identifies each AUV within the multi-agent system. For every i ∈ I[1, N], the vector η_i = [x_i, y_i, ψ_i]^T ∈ℝ^3 represents the i-th AUV's position coordinates and heading in the global coordinate frame, while ν_i = [u_i, v_i, r_i]^T ∈ℝ^3 denotes its linear velocities and angular rate of heading relative to a body-fixed frame. The positive definite inertial matrix M_i = M_i^T ∈𝕊_3^+, Coriolis and centripetal matrix C_i(ν_i)∈ℝ^3 × 3, and damping matrix D_i(ν_i)∈ℝ^3 × 3 characterize the AUV's dynamic response to motion. The vector g_i(η_i)∈ℝ^3 × 1 accounts for the restoring forces and moments due to gravity and buoyancy. The term Δ_i(χ_i) ∈ℝ^3 × 1, with χ_i := col{η_i, ν_i}, describes the vector of generalized deterministic unmodeled uncertain dynamics for each AUV.
The vector τ_i ∈ℝ^3 represents the control inputs for each AUV. The associated rotation matrix J_i(η_i) is given by:
J_i(η_i) = [ cos(ψ_i) sin(ψ_i) 0; -sin(ψ_i) cos(ψ_i) 0; 0 0 1 ],
Unlike previous work <cit.>, which assumed known values for the AUV's inertia matrix, this study treats not only the matrix coefficients C_i(ν_i), D_i(ν_i), g_i(η_i), and Δ_i(χ_i) but also the inertia matrix itself as completely unknown and uncertain. When all system parameters are considered unknown, the controller does not depend on any particular parameter values, making it universally applicable to any AUV regardless of shape, weight, and environmental conditions, including variations in water flow or depth that typically affect the damping and inertia forces during operation.
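For simulation purposes, the model above can be stepped forward as in the following Python sketch; the matrix M and the callables C, D, g, and Δ are placeholders for the unknown dynamics — the controller never uses them, and they appear here only to generate the plant motion. J follows the rotation matrix defined above.

import numpy as np

def J(eta):
    """Rotation matrix J_i(eta_i) of the horizontal-plane kinematics."""
    c, s = np.cos(eta[2]), np.sin(eta[2])
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def auv_step(eta, nu, tau, dt, M, C, D, g, Delta):
    """One Euler step of the i-th AUV model; M is the (unknown) inertia
    matrix, and C, D, g, Delta are placeholder callables for the unknown
    Coriolis, damping, restoring, and unmodeled terms."""
    chi = np.concatenate([eta, nu])
    eta_dot = J(eta) @ nu
    nu_dot = np.linalg.solve(M, tau - C(nu) @ nu - D(nu) @ nu
                             - g(eta) - Delta(chi))
    return eta + dt * eta_dot, nu + dt * nu_dot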
In the context of leader-following formation tracking control, the following virtual leader dynamics generates the tracking reference signals:
χ̇_0 = A_0χ_0
With “0” marking the leader node, the leader state is χ_0 := col{η_0, ν_0} with η_0 ∈ℝ^3 and ν_0 ∈ℝ^3, and A_0 ∈ℝ^6 × 6 is a constant matrix available only to the leader’s neighboring AUV agents.
Considering the system dynamics of multiple AUVs (<ref>) along with the leader dynamics (<ref>), we establish a non-negative matrix A = [a_ij], where i, j ∈ I[0, N] such that for each i ∈ I[1, N], a_i0 > 0 if and only if agent i has access to the reference signals η_0 and ν_0. All remaining elements of A are arbitrary non-negative values, such that a_ii = 0 for all i. Correspondingly, we establish G = (V, E) as a directed graph derived from A, where V = {0, 1, …, N} designates node 0 as the leader, and the remaining nodes correspond to the N AUV agents. We proceed under the following assumptions:
All the eigenvalues of A_0 in the leader’s dynamics (<ref>) are located on the imaginary axis.
The directed graph G contains a directed spanning tree with the node 0 as its root.
Assumption <ref> is crucial for ensuring that the leader dynamics produce stable, meaningful reference trajectories for formation control. It ensures that all states of the leader, represented by χ_0 = col{η_0, ν_0}, remain within the bounds of a compact set Ω_0 ⊂ℝ^6 for all t ≥ 0. The trajectory of the system, starting from χ_0(0) and denoted by ϕ_0(χ_0(0)), is a periodic signal. This periodicity is essential for maintaining the persistent excitation (PE) condition, which is pivotal for achieving parameter convergence in distributed adaptive (DA) control systems. Modifications to the eigenvalue constraints on A_0 mentioned in Assumption <ref> may be considered when focusing primarily on formation tracking control performance, as discussed later.
Additionally, Assumption <ref> reveals key insights into the structure of the Laplacian matrix L of the network graph G. Let Ψ be an N × N non-negative diagonal matrix where each i-th diagonal element is a_i0 for i ∈ I[1, N]. The Laplacian L is formulated as:
L = [ ∑_j=1^N a_0j -[a_01, …, a_0N]; -Ψ1_N H ],
where a_0j > 0 if (j, 0) ∈ E and a_0j = 0 otherwise. This results in 𝐇1_N = Ψ1_N since L1_N+1 = 0. As cited in <cit.>, all nonzero eigenvalues of 𝐇, if present, exhibit positive real parts, confirming 𝐇 as nonsingular under Assumption <ref>.
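The block structure of L and the identity 𝐇1_N = Ψ1_N can be checked numerically, as in the following Python sketch for an illustrative chain topology (leader 0 → agent 1 → agent 2 → agent 3); the adjacency weights are arbitrary assumptions.

import numpy as np

N = 3
A = np.zeros((N + 1, N + 1))         # A[i, j] = a_ij, nodes 0..N
A[1, 0] = 1.0                        # only agent 1 sees the leader
A[2, 1] = 1.0
A[3, 2] = 1.0

L = np.diag(A.sum(axis=1)) - A       # graph Laplacian of G
H = L[1:, 1:]                        # follower block of L as in (4)
Psi = np.diag(A[1:, 0])              # pinning gains a_i0

assert np.allclose(H @ np.ones(N), Psi @ np.ones(N))   # H 1_N = Psi 1_N
print(np.linalg.eigvals(H))          # all eigenvalues have positive real parts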
In the context of a multi-AUV system (<ref>) integrated with virtual leader dynamics (<ref>) and operating within a directed network topology G, the aim is to develop a distributed NN learning control protocol that leverages only local information. The specific goals are twofold:
1) Formation Control: Each of the N AUV agents will adhere to a predetermined formation pattern relative to the leader, maintaining a specified distance from the leader's position η_0.
2) Decentralized Learning: The nonlinear uncertain dynamics of each AUV will be identified and learned autonomously during the formation control process. The insights gained from this learning process will be utilized to enhance the stability and performance of the formation control system.
The leader dynamics described in Equation (<ref>) are designed as a neutrally stable LTI system. This design choice facilitates the generation of sinusoidal reference trajectories at various frequencies which is essential for effective formation tracking control. This approach to leader dynamics is prevalent in the literature on multiagent leader-following distributed control systems like <cit.> and <cit.>.
It is important to emphasize that the formulation assumes formation control is required only within the horizontal plane, suitable for AUVs operating at a constant depth, and that the vertical dynamics of the 6-DOF AUV system, as detailed in <cit.>, are entirely decoupled from the horizontal dynamics.
As shown in Fig. <ref>, a two-layer hierarchical design approach is proposed to address the aforementioned challenges. The first layer, the Cooperative Estimator, enables information exchange among neighboring agents. The second layer, the Decentralized Deterministic Learning (DDL) controller, processes only local data from each individual AUV. The subsequent two sections discuss the development of the DA observer in the first layer and the formulation of the DDL control strategy, respectively.
§ TWO-LAYER DISTRIBUTED CONTROLLER ARCHITECTURE
§.§ First Layer: Cooperative Estimator
In the context of leader-following formation control, not all AUV agents may have direct access to the leader’s information, including tracking reference signals (χ_0) and the system matrix (A_0). This necessitates collaborative interactions among the AUV agents to estimate the leader’s information effectively. Drawing on principles from multiagent consensus and graph theories <cit.>, we propose to develop a distributed adaptive observer for the AUV systems as:
χ̇̂̇_i(t) = A_i(t)χ̂_i(t) + β_i1∑_j=0^N a_ij (χ̂_j(t) - χ̂_i(t)), ∀ i ∈ I[1, N]
The observer states for each i-th AUV, denoted by χ̂_i = [η̂_i, ν̂_i]^T ∈ℝ^6, aim to estimate the leader's state, χ_0 = [η_0, ν_0]^T ∈ℝ^6. As t →∞, these estimates are expected to converge, such that η̂_i approaches η_0 and ν̂_i approaches ν_0, representing the leader's position and velocity, respectively. The time-varying parameters A_0(t) for each observer are dynamically adjusted using a cooperative adaptation law:
Â̇_i(t) = β_i2∑_j=0^N a_ij (Â_j(t) - Â_i(t)), ∀ i ∈ I[1, N]
Here, Â_i ∈ℝ^6×6 estimates the leader's system matrix A_0 ∈ℝ^6×6. The constants β_i1 and β_i2 are positive design parameters.
Each AUV agent in the group is equipped with an observer configured as specified in equations (<ref>) and (<ref>), comprising two state variables, χ_i and A_i. For each i ∈ I[1, N], χ_i estimates the virtual leader’s state χ_0, while A_i estimates the leader’s system matrix A_0. The real-time data necessary for operating the i-th observer includes: (1) the estimated state χ̂_j and matrix Â_̂ĵ, obtained from the j-th AUV itself, and (2) the estimated states χ_j and matrices A_j for all j ∈ N_i, obtains from the j-th AUV’s neighbors. Note that in equations (<ref>) and (<ref>), if j ∉ N_i, then a_ij = 0, indicating that the i-th observer does not utilize information from the j-th AUV agent. This configuration ensures that the proposed distributed observer can be implemented in each local AUV agent using only locally estimated data from the agent itself and its immediate neighbors, without the need for global information such as the size of the AUV group or the network interconnection topology.
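A minimal Euler-discretized sketch of the two observer updates is given below in Python; the neighbor sets, gains β_i1 and β_i2, and step size are illustrative assumptions, and node 0 simply holds the true leader data for its neighbors to read.

import numpy as np

def observer_step(chi_hat, A_hat, neighbors, a, i, beta1, beta2, dt):
    """Euler step of the first-layer observer for agent i. chi_hat[j] and
    A_hat[j] hold node j's current estimates (node 0 is the leader, whose
    entries are the true chi_0 and A_0); a[i, j] are adjacency weights."""
    chi_dot = A_hat[i] @ chi_hat[i]
    A_dot = np.zeros_like(A_hat[i])
    for j in neighbors[i]:               # j in N_i, including 0 if a_i0 > 0
        chi_dot += beta1 * a[i, j] * (chi_hat[j] - chi_hat[i])
        A_dot += beta2 * a[i, j] * (A_hat[j] - A_hat[i])
    return chi_hat[i] + dt * chi_dot, A_hat[i] + dt * A_dot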
By defining the estimation error for the state and the system matrix for agent i as χ̃_i = χ̂_i - χ_0 and Ã_i = Â_i - A_0, respectively, we derive the error dynamics:
χ̇̃̇_i(t) = Â_i(t)χ̂_i(t) - A_0χ_0(t) + β_i1∑_j=0^N a_ij (χ̂_j(t) - χ̂_i(t))
= Â_i(t)χ̂_i(t) - A_0χ̂_i(t) + A_0χ̂_i(t) - A_0χ_0(t) + β_i1∑_j=0^N a_ij (χ̂_j(t) - χ_0(t) + χ_0(t) - χ̂_i(t))
= A_0χ̃_i(t) + Ã_i(t)χ̃_i(t) + Ã_i(t)χ_0(t) + β_i1∑_j=0^N a_ij (χ̃_j(t) - χ̃_i(t))
Ã̇_i(t) = β_i2∑_j=0^N a_ij (Ã_j(t) - Ã_i(t)), ∀ i ∈ I[1, N].
Define the collective error states and adaptation matrices: χ̃ = col{χ̃_1, …, χ̃_N} for the state errors,
à = col{Ã_1, …, Ã_N} for the adaptive parameter errors,
Ã_b = diag{Ã_1, …, Ã_N} representing the block diagonal of adaptive parameters,
B_β_1 = diag{β_11, …, β_N1} and
B_β_2 = diag{β_12, …, β_N2} for the diagonal matrices of design constants.
With these definitions, the network-wide error dynamics can be expressed as:
χ̇̃̇(t) = ((I_N ⊗ A_0) - B_β_1(H ⊗ I_2n)) χ̃(t) + (Ã_b(t) ⊗ I_2n) χ̃(t) + Ã_b(t) (1_N ⊗χ_0(t)),
Ã̇(t) = -B_β_2(H ⊗ I_n) Ã(t).
Consider the error system equations (<ref>). Under Assumptions <ref> and <ref>, and given that β_1, β_2 > 0, it follows that for all i ∈ I[1, N] and for any initial conditions χ_0(0), χ_i(0), A_i(0), the error dynamics of the adaptive parameters and the states will converge to zero exponentially. Specifically, lim_t →∞Ã_i(t) = 0 and lim_t →∞χ̃_i(t) = 0.
This convergence is facilitated by the independent adaptation of each agent's parameters within their respective error dynamics, represented by the block diagonal structure of Ã_b and control gains B_β_1 and B_β_2. These matrices ensure that each agent's parameter updates are governed by local interactions and error feedback, consistent with the decentralized control framework.
Proof: We begin by examining the estimation error dynamics for Ã as presented in equation (<ref>). These can be rewritten in vectorized form:
Ã̇⃗̇(t) = -β_2(I_6⊗ (ℋ⊗ I_6)) Ã⃗(t)
Under assumption <ref>, all eigenvalues of ℋ possess positive real parts according to <cit.>. Consequently, for any positive β_2 > 0, the matrix -β_2(I_6 ⊗ (ℋ⊗ I_6)) is guaranteed to be Hurwitz, which implies the exponential stability of system (<ref>). Hence, it follows that lim_t →∞Ã⃗(t) = 0 exponentially, leading to lim_t →∞Ã_i(t) = 0 exponentially for all i ∈ I[1, N].
Next, we analyze the error dynamics for χ̃ in equation (<ref>). Based on the previous discussion, lim_t →∞Ã_b(t) = 0 exponentially, so the terms (Ã_b(t) ⊗ I_2n)χ̃(t) and Ã_b(t)(1_N ⊗χ_0(t)) decay to zero exponentially. Based on <cit.>, if the system defined by
χ̇̃̇(t) = ((I_N⊗ A_0) - β_1(ℋ⊗ I_6)) χ̃(t)
is exponentially stable, then lim_t →∞χ̃(t) = 0 exponentially. With assumption <ref>, knowing that all eigenvalues of A_0 have zero real parts, and since ℋ is nonsingular with all eigenvalues in the open right-half plane, system (<ref>) is exponentially stable for any positive β_1 > 0. Consequently, this ensures that lim_t →∞χ̃(t) = 0, i.e., lim_t →∞χ̃_i(t) = 0 exponentially for all i ∈ I[1, N].
Now, each individual agent can accurately estimate both the state and the system matrix of the leader through cooperative observer estimation (<ref>) and (<ref>). This information will be utilized in the DDL controller design for each agent's second layer, which will be discussed in the following subsection.
§.§ Second Layer: Decentralized Deterministic Learning Controller
To fulfill the overall formation learning control objectives, in this section, we develop the DDL control law for the multi-AUV system defined in (<ref>). We use d^*_i to denote the desired distance between the position of the i-th AUV agent η_i and the virtual leader's position η_0. Then, the formation control problem is framed as a position tracking control task, where each local AUV agent's position η_i is required to track the reference signal η_d,i := η_0 + d^*_i.
Moreover, due to the inaccessibility of the leader’s state information χ_0 to all AUV agents, the tracking reference signal η̂_d,i := η̂_0,i + d^*_i is employed in place of η_d,i. As established in theorem <ref>, η̂_d,i is generated autonomously by each local agent and converges exponentially to η_d,i. This ensures that the DDL controller is feasible and the formation control objectives are achievable for all i ∈ I[1, N] using η̂_d,i.
To design a DDL control law that simultaneously addresses formation tracking control and precise learning of the AUVs' completely unknown nonlinear dynamics, we integrate the renowned backstepping adaptive control design method outlined in <cit.> with techniques from <cit.> and <cit.> for deterministic learning using RBF NNs. Specifically, for the i-th AUV agent described in system (<ref>), we define the position tracking error as z_1,i = η_i - η̂_d,i for all i ∈𝐈[1,N]. Considering J_i(η_i) J_i^T(η_i) = I for all i ∈𝐈[1,N], we obtain:
ż_1,i = J_i(η_i)ν_i - η̇̂̇_i, ∀ i∈𝐈[1,N].
To frame the problem in a more tractable form, we regard ν_i as a virtual control input with α_i as its desired value, and define the corresponding error:
z_2,i = ν_i - α_i,
α_i = J_i^T(η_i) (-K_1,iz_1,i + η̇̂̇_i), ∀ i ∈𝐈[1, N].
A positive definite gain matrix K_1,i∈𝕊_3^+ is used for tuning the performance. Substituting ν_i = z_2,i + α_i into ((<ref>)) yields:
ż_1,i = J_i(η_i)z_2,i - K_1,iz_1,i, ∀ i ∈𝐈[1, N].
Now we derive the first derivatives of the virtual control input and the desired control input as follows:
ż_2,i = ν̇_i - α̇_i
= M_i^-1(-C_i(ν_i)ν_i - D_i(ν_i)ν_i - g_i(η_i) - Δ_i(χ_i) + τ_i) - α̇_i,
α̇_i = J̇_i^T(η_i)(-K_1,iz_1,i + η̇̂̇_i) + J_i^T(η_i)(K_1,iη̇̂̇_i - K_1,iJ_i(η_i)ν_i + η̈̂̈_i). ∀ i ∈𝐈[1, N],
As previously discussed, unlike earlier research that only identified the matrix coefficients C_i(ν_i), D_i(ν_i), g_i(η_i), and Δ_i(χ_i) as unknown system nonlinearities while assuming the mass matrix M_i to be known, this work advances significantly by also considering M_i as unknown. Consequently, all system dynamic parameters are treated as completely unknown, making the controller fully independent of the robot's configuration—such as its dimensions, mass, or any appendages—and the uncertain environmental conditions it encounters, like depth, water flow, and viscosity. This independence is critical as it ensures that the controller does not rely on predefined assumptions about the dynamics, aligning with the main goal of this research. To address these challenges, we define a unique nonlinear function F_i(Z_i) that encapsulates all nonlinear uncertainties as follows:
F_i(Z_i) = M_iα̇_i + C_i(ν_i)ν_i + D_i(ν_i)ν_i + g_i(η_i) + Δ_i(χ_i),
where F_i(Z_i) = [f_1,i(Z_i), f_2,i(Z_i), f_3,i(Z_i)]^T and Z_i = col{η_i, ν_i}∈Ω_Z_i⊂ℝ^6, with Ω_Z_i being a bounded compact set.
We then employ the following RBF NNs to approximate f_k,i for all i ∈𝐈[1, N] and k ∈𝐈[1, 3]:
f_k,i(Z_i) = W_k,i^*TS_ki(Z_i) + ϵ _k,i(Z_i),
where W_k,i^* is the ideal constant NN weight vector and ϵ_k,i(Z_i) is the approximation error, which satisfies |ϵ_k,i(Z_i)| ≤ϵ^*_k,i for some ϵ^*_k,i > 0, for all i ∈𝐈[1, N] and k ∈𝐈[1, 3]. This error can be made arbitrarily small given a sufficient number of neurons in the network.
A self-adaptation law is designed to estimate the unknown W_k,i^* online with Ŵ_k,i, and the DDL feedback control law is constructed as follows:
τ _i = -J_i^T(η _i)z_1,i - K_2,iz_2,i + Ŵ_i^TS_i^F(Z_i)
To approximate the unknown nonlinear function vector F_i(Z_i) in (<ref>) along the trajectory Z_i within the compact set Ω_Z_i, we use:
Ŵ_i^T S_i^F(Z_i) =
[ Ŵ_1,i^T S_1,i(Z_i); Ŵ_2,i^T S_2,i(Z_i); Ŵ_3,i^T S_3,i(Z_i) ].
Also, K_2,i∈𝕊_3^+ is a feedback gain matrix that can be tuned to achieve the desired performance. From (<ref>) and (<ref>) we have:
M_iν̇_i + C_i(ν_i)ν_i + D_i(ν_i)ν_i + g_i(η_i) + Δ_i(χ_i) = τ _i = -J_i^T(η _i)z_1,i - K_2,iz_2,i + Ŵ_i^TS_i^F(Z_i)
Subtracting F_i(Z_i) = W_i^*TS_i^F(Z_i) + ϵ_i(Z_i) from both sides and considering Equations (<ref>) and (<ref>), with W̃_k,i := Ŵ_k,i - W_k,i^*, we obtain:
ż_2,i = M_i^-1 (-J_i^T(η _i)z_1,i - K_2,iz_2,i + W̃_i^TS_i^F(Z_i) - ϵ_i(Z_i))
For updating Ŵ_k,i online, a robust self-adaptation law is constructed using the σ-modification technique <cit.> as follows:
Ŵ̇_k,i = -Γ_k,i (S_k,i(Z_i)z_2k,i + σ _kiŴ_k,i)
where z_2,i = [z_21,i, z_22,i, z_23,i]^T, Γ_k,i = Γ_k,i^T > 0, and σ_k,i > 0 are free parameters to be designed for all i ∈𝐈[1, N] and k ∈𝐈[1, 3].
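Putting the pieces together, the following Python sketch implements one step of the DDL law: the backstepping errors z_1,i and z_2,i, the control input τ_i, and the σ-modification weight update. It reuses J and the regressor from the earlier sketches; taking Γ_k,i as a common scalar gain and a single lattice shared by the three NNs are simplifying assumptions made only to keep the sketch short.

import numpy as np

def ddl_control(eta, nu, eta_d_hat, eta_d_hat_dot, W_hat, S, K1, K2,
                Gamma, sigma, dt):
    """One step of the second-layer DDL law for agent i. W_hat stacks the
    three weight vectors row-wise, shape (3, N_nodes); S = S(Z_i) is the
    regressor evaluated at Z_i = col{eta_i, nu_i}."""
    z1 = eta - eta_d_hat                              # position tracking error
    alpha = J(eta).T @ (-K1 @ z1 + eta_d_hat_dot)     # virtual control
    z2 = nu - alpha                                   # velocity tracking error
    tau = -J(eta).T @ z1 - K2 @ z2 + W_hat @ S        # DDL control input
    # sigma-modified adaptation: dW_k/dt = -Gamma (S z_{2k} + sigma W_k)
    W_hat_new = W_hat - dt * Gamma * (np.outer(z2, S) + sigma * W_hat)
    return tau, z1, z2, W_hat_new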
Integrating equations (<ref>), (<ref>) and (<ref>) yields the following closed-loop system:
{ż_1,i = -K_1,iz_1,i + J_i(η_i)z_2,i,
ż_2,i = M_i^-1(-J_i^T(η_i)z_1,i - K_2,iz_2,i + W̃_i^TS_i^F(Z_i) - ϵ_i(Z_i)),
Ẇ̃̇_k,i = -Γ_k,i(S_k,i(Z_i)z_2,k,i + σ_k,iŴ_k,i),
.
where, for all i ∈𝐈[1, N] and k ∈𝐈[1, 3], W̃_i^T S_i(Z_i) = [W̃_1,i^T S_1,i(Z_i), W̃_2,i^T S_2,i(Z_i), W̃_3,i^T S_3,i(Z_i)]^T, and ϵ_i(Z_i) = [ϵ_1,i(Z_i), ϵ_2,i(Z_i), ϵ_3,i(Z_i)]^T.
Unlike the first-layer DA observer design, the second-layer control law is fully decentralized for each local agent. It utilizes only the local agent's information for feedback control, including χ_i, χ̂_i, and W_k,i, without involving any information exchange among neighboring AUVs.
The following theorem summarizes the stability and tracking control performance results of the overall system:
Consider the local closed-loop system (<ref>). For each i ∈𝐈[1, N], if there exists a sufficiently large compact set Ω_Z_i such that Z_i ∈Ω_Z_i for all t ≥ 0, then for any bounded initial conditions, we have:
1) All signals in the closed-loop system remain uniformly ultimately bounded (UUB).
2) The position tracking error η_i - η_d,i converges exponentially to a small neighborhood around zero in some finite time T_i > 0, provided the design parameters are chosen with sufficiently large λ_min(K_1,i) > 0 and λ_min(K_2,i) > 2λ_min(K_1,i) > 0, and sufficiently small σ_k,i > 0 for all i ∈𝐈[1, N] and k ∈𝐈[1, 3].
Proof:
1) Consider the following Lyapunov function candidate for the closed-loop system (<ref>):
V_i = 1/2z_1,i^T z_1,i + 1/2z_2,i^T M_i z_2,i + 1/2∑_k=1^3W̃_k,i^T Γ_k,i^-1W̃_k,i.
Evaluating the derivative of V_i along the trajectory of (<ref>) for all i ∈𝐈[1, N] yields:
V̇_i = z_1,i^T(-K_1,iz_1,i + J_i(η_i)z_2,i)
+ z_2,i^T(-J_i^T(η_i)z_1,i - K_2,iz_2,i + W̃_k,i^TS_k,i(Z_i) - ϵ_k,i(Z_i)))
- ∑_k=1^3W̃_k,i^T (S_k,i(Z_i)z_2k,i + σ_k,iŴ_k,i)
= -z_1,i^T K_1,i z_1,i - z_2,i^T K_2,i z_2,i - z_2,i^T ϵ_k,i(Z_i)
- ∑_k=1^3σ_k,iW̃_k,i^T Ŵ_k,i, ∀ i ∈𝐈[1, N].
Choose K_2,i = K_1,i + K_22,i such that K_1,i, K_22,i∈𝕊_3^+. Using the completion of squares, we have:
-σ_k,iW̃_k,i^T Ŵ_k,i ≤ -σ_k,i‖W̃_k,i‖^2/2 + σ_k,i‖W_k,i^*‖^2/2,
-z_2,i^T K_22,i z_2,i - z_2,i^T ϵ_i(Z_i) ≤ϵ_i^T(Z_i) ϵ_i(Z_i)/4 λ_min(K_22,i)≤‖ϵ_i^*‖^2/4 λ_min(K_22,i).
where ϵ_i^* = [ϵ_1,i^*, ϵ_2,i^*, ϵ_3,i^*]^T. Then, we obtain:
V̇_i ≤ -z_1,i^T K_1,i z_1,i - z_2,i^T K_1,i z_2,i + ‖ϵ_i^*‖^2/4 λ_min(K_22,i)
+ ∑_k=1^3 ( -σ_k,i‖W̃_k,i‖^2/2 + σ_k,i‖W_k,i^*‖^2/2 ).
It follows that V̇_i is negative whenever:
‖z_1,i‖ > ‖ϵ_i^*‖/2√(λ_min(K_1,i) λ_min(K_22,i)) + ∑_k=1^3( √(σ_k,i/2 λ_min(K_1,i))‖W_k,i^*‖),
‖z_2,i‖ > ‖ϵ_i^*‖/2√(λ_min(K_1,i) λ_min(K_22,i)) + ∑_k=1^3( √(σ_k,i/2 λ_min(K_1,i))‖W_k,i^*‖), or
‖W̃_k,i‖ > ‖ϵ_i^*‖/2√(σ_k,iλ_min(K_22,i)) + ∑_k=1^3‖W_k,i^*‖:= W̃_k,i^*,
for all i ∈𝐈[1, N] and k ∈𝐈[1, 3]. This establishes the uniformly ultimately bounded (UUB) behavior of the signals z_1,i, z_2,i, and W̃_k,i for all i ∈𝐈[1, N] and k ∈𝐈[1, 3]. As a result, it can be easily verified that since η̂_d,i = η̂_0,i + d^*_i with η̂_0,i bounded (according to theorem <ref> and assumption <ref>), η_i = z_1,i + η̂_d,i is bounded for all i ∈𝐈[1, N]. Similarly, the boundedness of ν_i = z_2,i + α_i can be confirmed by the fact that α_i in (<ref>) is bounded.
In addition, Ŵ_k,i = W̃_k,i + W_k,i^* is also bounded for all i ∈𝐈[1, N] and k ∈𝐈[1, 3] because of the boundedness of W̃_k,i and W_k,i^*. Moreover, in light of (<ref>), α̇_i is bounded as all the terms on the right-hand side of (<ref>) are bounded. This leads to the boundedness of the control signal τ_i in (<ref>) since the Gaussian function vector S_i^F(Z_i) is guaranteed to be bounded for any Z_i. As such, all the signals in the closed-loop system remain UUB, which completes the proof of the first part.
2) For the second part, it will be shown that η_i will converge arbitrarily close to η_di in some finite time T_i > 0 for all i ∈𝐈[1, N]. To this end, we consider the following Lyapunov function candidate for the dynamics of z_1,i and z_2,i in (<ref>):
V_z,i = 1/2z_1,i^T z_1,i + 1/2z_2,i^T M_i z_2,i, ∀ i ∈𝐈[1, N].
The derivative of V_z,i is:
V̇_z,i = z_1,i^T (-K_1,iz_1,i + J_i(η_i)z_2,i) + z_2,i^T (-J_i^T(η_i)z_1,i - K_2,iz_2,i + W̃_k,i^T S_k,i(Z_i) - ϵ_i(Z_i))
= -z_1,i^T K_1,i z_1,i - z_2,i^T K_2,i z_2,i + z_2,i^T W̃_k,i^T S_k,i(Z_i) - z_2,i^T ϵ_k,i(Z_i), ∀ i ∈𝐈[1, N].
Similar to the proof of part one, we let K_2,i = K_1,i + 2K_22,i with K_1,i, K_22,i∈𝕊_3^+. According to <cit.>, the Gaussian RBF NN regressor S_i^F(Z_i) is bounded, i.e., ‖S_i^F(Z_i)‖≤ s_i^* for any Z_i and for all i ∈𝐈[1, N] with some positive number s_i^* > 0. Through completion of squares, we have:
-z_2,i^T K_22,i z_2,i + z_2,i^T W̃_i^T S_i^F(Z_i) ≤‖W̃_i^*‖^2 s_i^*2/4 λ_min(K_22,i),
-z_2,i^T K_22,i z_2,i - z_2,i^T ϵ_i(Z_i) ≤‖ϵ_i^*‖^2/4 λ_min(K_22,i).
where W̃_i^* = [W̃_1,i^*, W̃_2,i^*, W̃_3,i^*]^T. This leads to:
V̇_z,i ≤ -z_1,i^T K_1,i z_1,i - z_2,i^T K_1,i z_2,i + δ_i
≤ -2λ_min(K_1,i)(1/2z_1,i^T z_1,i) - 2λ_min(K_1,i)/λ_max(M_i)(1/2z_2,i^T M_i z_2,i) + δ_i
≤ -ρ_i V_z,i + δ_i, ∀ i ∈𝐈[1, N],
where ρ_i = min{2λ_min(K_1,i), 2λ_min(K_1,i)/λ_max(M_i)} and δ_i = (‖W̃_i^*‖^2 s_i^*2/4 λ_min(K_22,i)) + (‖ϵ_i^*‖^2/4 λ_min(K_22,i)), ∀ i ∈𝐈[1, N]. Solving the inequality (<ref>) yields:
0 ≤ V_z,i(t) ≤ V_z,i(0)exp(-ρ_i t) + δ_i/ρ_i
which together with (<ref>) implies that:
min{1, λ_min(M_i)}1/2(‖z_1,i‖^2 + ‖z_2,i‖^2) ≤ V_z,i(0)exp(-ρ_i t) + δ_i/ρ_i,
∀ t ≥ 0, i ∈𝐈[1, N].
Also,
‖z_1,i‖^2 + ‖z_2,i‖^2 ≤2/min{1, λ_min(M_i)} V_z,i(0) exp(-ρ_i t)
+ 2 δ_i/ρ_i min{1, λ_min(M_i)}.
Consequently, it is straightforward that given δ̅_i > √(2δ_i/(ρ_i min{1, λ_min(M_i)})), there exists a finite time T_i > 0 for all i ∈𝐈[1, N] such that for all t ≥ T_i, both z_1,i and z_2,i satisfy ‖z_1,i(t)‖≤δ̅_i and ‖z_2,i(t)‖≤δ̅_i, ∀ i ∈𝐈[1, N], where δ̅_i can be made arbitrarily small by choosing sufficiently large λ_min(K_1,i) > 0 and λ_min(K_2,i) > 2 λ_min(K_1,i) > 0 for all i ∈𝐈[1, N]. This ends the proof.
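The ultimate bound δ̅_i can be evaluated numerically for candidate gains, as in the following Python sketch; all numbers are illustrative assumptions, used only to show how enlarging λ_min(K_1,i) and λ_min(K_22,i) shrinks the tracking bound.

import numpy as np

# Illustrative numbers only: eigenvalue bounds of the gains and of M_i,
# and the lumped constants entering delta_i.
lam_K1, lam_K22 = 5.0, 10.0          # lambda_min of K_1,i and K_22,i
lam_M_min, lam_M_max = 0.5, 2.0      # lambda_min/max of M_i
W_tilde_star, s_star, eps_star = 1.0, 1.0, 0.1

rho = min(2 * lam_K1, 2 * lam_K1 / lam_M_max)
delta = (W_tilde_star**2 * s_star**2 + eps_star**2) / (4 * lam_K22)
bound = np.sqrt(2 * delta / (rho * min(1.0, lam_M_min)))
print(rho, delta, bound)  # larger K_1,i and K_22,i give a smaller bound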
By integrating the outcomes of theorems 1 and 2, the following theorem is established, which can be presented without additional proof:
Considering the multi-AUV system (<ref>) and the virtual leader dynamics (<ref>) with the network communication topology G, and under assumptions <ref> and <ref>, objective 1) of Problem 1 (i.e., η_i converges to η_0 + d_i^* exponentially for all i ∈𝐈[1, N]) can be achieved by using the cooperative observer (<ref>), (<ref>) and the DDL control law (<ref>) and (<ref>) with all the design parameters satisfying the requirements in theorems <ref> and <ref>, respectively.
With the proposed two-layer formation learning control architecture, inter-agent information exchange occurs solely in the first-layer DA observation. Only the observer's estimated information, and not the physical plant state information, needs to be shared among neighboring agents. Additionally, since no global information is required for the design of each local AUV control system, the proposed formation learning control protocol can be designed and implemented in a fully distributed manner.
It is important to note that the eigenvalue constraints on A_0 in <ref> are not needed for cooperative observer estimation (as detailed in the section (<ref>) or for achieving formation tracking control performance (as discussed in this section). This indicates that formation tracking control can be attained for general reference trajectories, including both periodic paths and straight lines, provided they are bounded. However, these constraints will become necessary in the next section to ensure the accurate learning capability of the proposed method.
§ ACCURATE LEARNING FROM FORMATION CONTROL
It is necessary to demonstrate the convergence of the RBF NN weights in Equations (<ref>) and (<ref>) to their optimal values for accurate learning and identification. The main result of this section is summarized in the following theorem.
Consider the local closed-loop system (<ref>) with assumptions <ref> and <ref>. For each i ∈𝐈[1, N], if there exists a sufficiently large compact set Ω_Z_i such that Z_i ∈Ω_Z_i for all t ≥ 0, then for any bounded initial conditions and Ŵ_k,i(0) = 0 ∀ i ∈𝐈[1, N], k ∈𝐈[1, 3], the local estimated neural weights Ŵ_ζ,k,i converge to small neighborhoods of their optimal values W_ζ,k,i^* along the periodic reference tracking orbit ϕ_ζ,i(Z_i(t))|_t ≥ T_i (denoting the orbit of the NN input signal Z_i(t) starting from time T_i). This leads to locally accurate approximations of the nonlinear uncertain dynamics f_k,i(Z_i) ∀ k ∈𝐈[1, 3] in (<ref>) being obtained by Ŵ_k,i^T S_k,i(Z_i), as well as by W̅_k,i^T S_k,i(Z_i), for all i ∈𝐈[1, N], k ∈𝐈[1, 3], where
W̅_k,i = mean_t∈ [t_a,i, t_b,i]Ŵ_k,i(t)
Where [t_a,i, t_b,i] (t_b,i > t_a,i > T_i) represents a time segment after the transient process.
Proof: From theorem <ref>, we have shown that for all i ∈𝐈[1, N], η_i will closely track the periodic signal η_d,i = η_0 + d_i^* in finite time T_i. In addition, (<ref>) implies that ν_i will also closely track the signal J_i^T(η_i)η̇̂_d,i, since both z_1,i and z_2,i converge to a small neighborhood around zero according to <ref>. Moreover, since η̇̂_d,i converges to η̇_0 according to <ref>, and J_i(η_i) is a bounded rotation matrix, ν_i will also be a periodic signal after finite time T_i, because η̇_0 is periodic under <ref>.
Consequently, since the RBF NN input Z_i(t) = col{η_i, ν_i} becomes a periodic signal for all t ≤ T_i, the PE condition of some internal closed-loop signals, i.e., the RBF NN regression subvector S_ζ,k,i(Z_i) (∀ t ≥ T_i), is satisfied according to Lemma <ref>. It should be noted that the periodicity of Z_i(t) leads to the PE of the regression subvector S_ζ,k,i(Z_i), but not necessarily the PE of the whole regression vector S_k,i(Z_i). Thus, we term this as a partial PE condition, and we will show the convergence of the associated local estimated neural weights W_ζ,k,i→ W_ζ,k,i^*, rather than W_k,i→ W_k,i^*.
Thus, to prove accurate convergence of local neural weights W_ζ,k,i associated with the regression subvector S_ζ,k,i(Z_i) under the satisfaction of the partial PE condition, we first rewrite the closed-loop dynamics of z_1,i and z_2,i along the periodic tracking orbit ϕ_ζ,i(Z_i(t))|_t ≥ T_i by using the localization property of the Gaussian RBF NN:
ż_1,i = -K_1,i z_1,i + J_i(η_i) z_2,i,
ż_2,i = M_i^-1( -W_ζ,i^*T S_ζ,i^F(Z_i) - ϵ_ζ,i - J_i^T(η_i) z_1,i - K_2,i z_2,i + Ŵ_ζ,i^T S_ζ,i^F(Z_i) + Ŵ_ζ̅,i^T S_ζ̅,i^F(Z_i) )
= M_i^-1 ( -J_i^T(η_i) z_1,i - K_2,i z_2,i + W̃_ζ,i^T S_ζ,i^F(Z_i) - ϵ'_ζ,i ).
where F_i(Z_i) = W_ζ,i^*T S_ζ,i^F(Z_i) + ϵ_ζ,i with W_ζ,i^*T S_ζ,i^F(Z_i) = [W_ζ,1,i^*T S_ζ,1,i(Z_i), W_ζ,2,i^*T S_ζ,2,i(Z_i), W_ζ,3,i^*T S_ζ,3,i(Z_i)]^T and ϵ_ζ,i = [ϵ_ζ,1,i, ϵ_ζ,2,i, ϵ_ζ,3,i]^T being the approximation error. Additionally, W_ζ,i^T S_ζ,i^F(Z_i) + W_ζ̅,i^T S_ζ̅,i^F(Z_i) = W_i^T S_i^F(Z_i) with subscripts ζ and ζ̅ denoting the regions close to and far away from the periodic trajectory ϕ_ζ,i(Z_i(t))|_t ≥ T_i, respectively. According to <cit.>, W_ζ̅,i^T S_ζ̅,i^F(Z_i) is small, and the NN local approximation error ϵ'_ζ,i = ϵ_ζ,i - W_ζ̅,i^T S_ζ̅,i^F(Z_i) with ϵ'_ζ,i = O(ϵ_ζ,i) is also a small number. Thus, the overall closed-loop adaptive learning system can be described by:
[ [ ż_1,i; ż_2,i; Ẇ̃̇_ζ,1,i; Ẇ̃̇_ζ,2,i; Ẇ̃̇_ζ,3,i ]] = [ [ [ -K_1,i J_i(η_i); -M_i^-1 J_i^T(η_i) -M_i^-1 K_2,i ] Ξ_i; [ 0 -Γ_ζ,1,i S_ζ,1,i(Z_i); 0 -Γ_ζ,2,i S_ζ,2,i(Z_i); 0 -Γ_ζ,3,i S_ζ,3,i(Z_i) ] 0 ]] ×[ [ z_1,i; z_2,i; W̃_ζ,1,i; W̃_ζ,2,i; W̃_ζ,3,i ]] + [ [ 0; -ϵ'_ζ,i; -σ_i,1Γ_ζ,1,iŴ_ζ,1,i; -σ_i,2Γ_ζ,2,iŴ_ζ,2,i; -σ_i,3Γ_ζ,3,iŴ_ζ,3,i ]]
and
[ Ẇ̃̇_ζ̅,i,1; Ẇ̃̇_ζ̅,i,2; Ẇ̃̇_ζ̅,i,3 ] = [ -Γ_ζ̅,1,i( S_ζ̅,1,i(Z_i) z_2,i + σ_i,1Ŵ_ζ̅,1,i); -Γ_ζ̅,2,i( S_ζ̅,2,i(Z_i) z_2,i + σ_i,2Ŵ_ζ̅,2,i); -Γ_ζ̅,3,i( S_ζ̅,3,i(Z_i) z_2,i + σ_i,3Ŵ_ζ̅,3,i) ].
where
Ξ_i = [ 0; M_i^-1[ S_ζ,1,i^T(Z_i) 0 0; 0 S_ζ,2,i^T(Z_i) 0; 0 0 S_ζ,3,i^T(Z_i) ] ].
for all i ∈𝐈[1, N]. The exponential stability property of the nominal part of subsystem (<ref>) has been well-studied in <cit.>, <cit.>, and <cit.>, where it is stated that PE of S_ζ,k,i(Z_i) will guarantee exponential convergence of (z_1,i, z_2,i, W̃_ζ,k,i) to zero for all i ∈𝐈[1, N] and k ∈𝐈[1, 3]. Based on this, since ϵ'_ζ,i = O(ϵ_ζ,i) = O(ϵ_i), and σ_k,iΓ_ζ,k,iŴ_ζ,k,i can be made small by choosing sufficiently small σ_k,i for all i ∈𝐈[1, N], k ∈𝐈[1, 3], both the state error signals (z_1,i, z_2,i) and the local parameter error signals W̃_ζ,k,i (∀ i ∈𝐈[1, N], k ∈𝐈[1, 3]) will converge exponentially to small neighborhoods of zero, with the sizes of the neighborhoods determined by the RBF NN ideal approximation error ϵ_i as in (<ref>) and σ_k,iΓ_ζ,k,iŴ_ζ,k,i.
The convergence of W_ζ,k,i→ W_ζ,k,i^* implies that along the periodic trajectory ϕ_ζ,i(Z_i(t))|_t ≥ T_i, we have
f_k,i(Z_i) = W_ζ,k,i^*T S_ζ,k,i(Z_i) + ϵ_ζ,k,i
= Ŵ_ζ,k,i^T S_ζ,k,i(Z_i) - W̃_ζ,k,i^T S_ζ,k,i(Z_i) + ϵ_ζ,k,i
= Ŵ_ζ,k,i^T S_ζ,k,i(Z_i) + ϵ_ζ_1,k,i
= W̅_ζ,k,i^T S_ζ,k,i(Z_i) + ϵ_ζ_2,k,i,
where for all i ∈𝐈[1, N], k ∈𝐈[1, 3], ϵ_ζ_1,k,i = ϵ_ζ,k,i - W̃_ζ,k,i^T S_ζ,k,i(Z_i) = O(ϵ_ζ,i) due to the convergence of W̃_ζ,k,i→ 0. The last equality is obtained according to the definition of (<ref>) with W̅_ζ,k,i being the corresponding subvector of W̅_k,i along the periodic trajectory ϕ_ζ,i(Z_i(t))|_t ≥ T_i, and ϵ_ζ_2,k,i being an approximation error using W̅_ζ,k,i^T S_ζ,k,i(Z_i). Apparently, after the transient process, we will have ϵ_ζ_2,k,i = O(ϵ_ζ_1,k,i), ∀ i ∈𝐈[1, N], k ∈𝐈[1, 3].
Conversely, for the neurons whose centers are distant from the trajectory ϕ_ζ,i(Z_i(t))|_t ≥ T_i, the values of S_ζ̅,k,i(Z_i) will be very small due to the localization property of Gaussian RBF NNs. From the adaptation law (17) with W^k_i(0) = 0, it can be observed that these small values of S_ζ̅,k,i(Z_i) will only minimally activate the adaptation of the associated neural weights W_ζ̅,k,i. As a result, both W_ζ̅,k,i and W_ζ̅,k,i^T S_ζ̅,k,i(Z_i), as well as W̅_ζ̅,k,i and W̅_ζ̅,k,i^T S_ζ̅,k,i(Z_i), will remain very small for all i ∈𝐈[1, N], k ∈𝐈[1, 3] along the periodic trajectory ϕ_ζ,i(Z_i(t))|_t ≥ T_i.
This indicates that the entire RBF NN W_k,i^T S_k,i(Z_i) and W̅_k,i^T S_k,i(Z_i) can be used to accurately approximate the unknown function f_k,i(Z_i) locally along the periodic trajectory ϕ_ζ,i(Z_i(t))|_t ≥ T_i, meaning that
f_k,i(Z_i) = Ŵ_ζ,k,i^T S_ζ,k,i(Z_i) + ϵ_ζ_1,k,i = Ŵ_k,i^T S_k,i(Z_i) + ϵ_1,k,i
= W̅_ζ,k,i^T S_ζ,k,i(Z_i) + ϵ_ζ_2,k,i = W̅_k,i^T S_k,i(Z_i) + ϵ_2,k,i.
with the approximation accuracy level of ϵ_1,k,i = ϵ_ζ_1,k,i - W_ζ̅,k,i^T S_ζ̅,k,i(Z_i) = O(ϵ_ζ_1,k,i) = O(ϵ_k,i) and ϵ_2,k,i = ϵ_ζ_2,k,i - W̅_ζ̅,k,i^T S_ζ̅,k,i(Z_i) = O(ϵ_ζ_2,k,i) = O(ϵ_k,i) for all i ∈𝐈[1, N], k ∈𝐈[1, 3]. This ends the proof.
The key idea in the proof of theorem <ref> is inspired by <cit.>. For more detailed analysis on the learning performance, including quantitative analysis on the learning accuracy levels ϵ_1,i,k and ϵ_2,i,k as well as the learning speed, please refer to <cit.>. Furthermore, the AUV nonlinear dynamics (<ref>) to be identified do not contain any time-varying random disturbances. This is important to ensure accurate identification/learning performance under the deterministic learning framework. To understand the effects of time-varying external disturbances on deterministic learning performance, interested readers are referred to <cit.> for more details.
Based on (<ref>), to obtain the constant RBF NN weights W̅_k,i for all i ∈𝐈[1, N], k ∈𝐈[1, 3], one needs to implement the formation learning control law (<ref>), (<ref>) first. Then, according to Theorem 4, after a finite-time transient process, the RBF NN weights W_k,i will converge to constant steady-state values. Thus, one can select a time segment [t_a,i, t_b,i] with t_b,i > t_a,i > T_i for all i ∈𝐈[1, N] to record and store the RBF NN weights W_k,i(t) for t ∈ [t_a,i, t_b,i]. Finally, based on these recorded data, W̅_k,i can be calculated off-line using (<ref>).
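The off-line averaging step amounts to recording Ŵ_k,i(t) over a post-transient window and taking its mean, as in the following Python sketch (a hypothetical logging layout is assumed; the log itself would come from running the DDL update online):

import numpy as np

def constant_weights(W_hat_log, t_log, t_a, t_b):
    """Compute W_bar_i: the mean of the recorded weights over [t_a, t_b].
    W_hat_log has shape (T_steps, 3, N_nodes); t_log has shape (T_steps,),
    with t_a > T_i chosen after the transient has settled."""
    mask = (t_log >= t_a) & (t_log <= t_b)
    return W_hat_log[mask].mean(axis=0)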
It is shown in theorem <ref> that locally accurate learning of each individual AUV's nonlinear uncertain dynamics can be achieved using localized RBF NNs along the periodic trajectory ϕ_ζ,i(Z_i(t))|_t ≥ T_i. The learned knowledge can be further represented and stored in a time-invariant fashion using constant RBF NNs, i.e., W̅_k,i^T S_k,i(Z_i) for all i ∈𝐈[1, N], k ∈𝐈[1, 3]. In contrast to many existing techniques (e.g., <cit.> and <cit.>), this is the first time, to the authors’ best knowledge, that locally accurate identification and knowledge representation using constant RBF NNs are accomplished and rigorously analyzed for multi-AUV formation control under complete uncertain dynamics.
§ FORMATION CONTROL WITH PRE-LEARNED DYNAMICS
In this section, we will further address objective 2 of problem <ref>, which involves achieving formation control without readapting to the AUV's nonlinear uncertain dynamics. To this end, consider the multiple AUV systems (<ref>) and the virtual leader dynamics (<ref>). We employ the estimator observer (<ref>), (<ref>) to cooperatively estimate the leader's state information. Instead of using the DDL feedback control law (<ref>), and self-adaptation law (<ref>), we introduce the following constant RBF NN controller, which does not require online adaptation of the NN weights:
τ_i = -J_i^T(η_i)z_1,i - K_2,iz_2,i + W̅_i^TS_i^F(Z_i)
where W̅_i^TS_i^F(Z_i) = [W̅_1,i^TS_1,i(Z_i), W̅_2,i^TS_2,i(Z_i), W̅_3,i^TS_3,i(Z_i)]^T is obtained from (<ref>). The term W̅_k,i^T S_k,i(Z_i) represents the locally accurate RBF NN approximation of the nonlinear uncertain function f_k,i(Z_i) along the trajectory ϕ_ζ,i(Z_i(t))|_t ≥ T_i, and the associated constant neural weights W̅_k,i are obtained from the formation learning control process as discussed in remark <ref>.
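For illustration, a minimal sketch of evaluating this constant-weight control law is given below. The Gaussian form S_j(Z) = exp(-‖Z - c_j‖²/γ²) and all function and variable names are assumptions of this sketch, not the paper's exact implementation.

```python
import numpy as np

def gaussian_rbf(Z, centers, gamma):
    """Localized regressor S(Z) with S_j(Z) = exp(-||Z - c_j||^2 / gamma^2)."""
    d2 = ((np.asarray(Z)[None, :] - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / gamma**2)

def constant_nn_control(z1, z2, J, K2, W_bar, Z, centers, gamma):
    """tau_i = -J_i^T(eta_i) z_{1,i} - K_{2,i} z_{2,i} + W-bar_i^T S_i(Z_i).

    W_bar : (n_neurons, 3) array, one column of constant weights per DOF k.
    """
    S = gaussian_rbf(Z, centers, gamma)       # (n_neurons,)
    return -J.T @ z1 - K2 @ z2 + W_bar.T @ S  # (3,) control input
```

Because the weights are fixed, this controller involves only one regressor evaluation and two matrix-vector products per time step, which is the computational saving emphasized below.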
Consider the multi-AUV system (<ref>) and the virtual leader dynamics (<ref>) with the network communication topology G. Under assumptions <ref> and <ref>, the formation control performance (i.e., η_i converges to η_0 + d^*_i exponentially with the same η_0 and d^*_i defined in theorem <ref> for all i ∈𝐈[1, N]) can be achieved by using the DA observer (<ref>), (<ref>) and the constant RBF NN control law (<ref>) with the constant NN weights obtained from (<ref>).
Proof: The closed-loop system for each local AUV agent can be established by combining the controller (<ref>) with the AUV dynamics (<ref>):
ż_1,i= -K_1,iz_1,i + J_i(η _i)z_2,i
ż_2,i= M_i^-1(-J_i^T(η _i)z_1,i - K_2,iz_2,i + W̅_i^TS_i^F(Z_i) - F_i(Z_i))
= M_i^-1(-J_i^T(η _i)z_1,i - K_2,iz_2,i - ϵ _2,i), ∀ i∈𝐈 [1,N ]
where ϵ_2,i = [ϵ_21,i, ϵ_22,i, ϵ_23,i]^T. Consider the Lyapunov function candidate V_z,i = 1/2 z_1,i^T z_1,i + 1/2 z_2,i^T M_i z_2,i, whose derivative along the trajectories of the closed-loop system is given by:
V̇_z,i = z_1,i^T (-K_1,i z_1,i + J_i(η_i) z_2,i) + z_2,i^T (-J_i^T(η_i) z_1,i - K_2,i z_2,i - ϵ_2,i)
= -z_1,i^T K_1,i z_1,i - z_2,i^T K_2,i z_2,i - z_2,i^T ϵ_2,i.
Selecting K_2,i = K_1,i + K_22,i where K_1,i, K_22,i∈𝕊_3^+, we can utilize the method of completing squares to obtain:
-z_2,i^T K_22,i z_2,i - z_2,i^T ϵ_2,i≤(ϵ_2,i^2/4 λ(K_22,i)) ≤(ϵ^*_2,i^2/4 λ(K_22,i)),
which implies that:
V̇_z,i≤ -z_1,i^T K_1,i z_1,i - z_2,i^T K_1,i z_2,i + (ϵ^*_2,i^2/4 λ(K_22,i)) ≤ -ρ_i V_z,i + δ_i, ∀ i∈𝐈 [1,N ]
where ρ_i = min{2 λ(K_1,i), (2 λ(K_1,i) / λ(M_i))} and δ_i = (ϵ^*_2,i^2 / 4 λ(K_22,i)).
Using similar reasoning to that in the proof of theorem <ref>, it is evident from the derived inequality that all signals within the closed-loop system remain bounded. Additionally, η_i - η^d_i will converge to a small neighborhood around zero within a finite period. The magnitude of this neighborhood can be minimized by appropriately choosing large values for λ(K_1,i) > 0 and λ(K_2,i) > λ(K_1,i) across all i ∈𝐈[1, N]. In line with theorem <ref>, under assumptions <ref> and <ref>, the implementation of the DA observer (<ref>), (<ref>) facilitates the exponential convergence of η_i towards η_0. Together, these results ensure that η_i rapidly aligns with η_d,i = η_0 + d^*_i, achieving the objectives set out for formation control.
Building on the locally accurate learning outcomes discussed in Section <ref>, the newly developed distributed control protocol comprising (<ref>), (<ref>), and (<ref>) facilitates stable formation control across a repeated formation pattern. Unlike the formation learning control approach outlined in Section <ref>, which involves (<ref>), (<ref>) coupled with (<ref>) and (<ref>), the current method eliminates the need for online RBF NN adaptation for all AUV agents. This significantly reduces the computational demands, thereby enhancing the practicality of implementing the proposed distributed RBF NN formation control protocol. This innovation marks a significant advancement over many existing techniques in the field.
§ SIMULATION
We consider a multi-AUV heterogeneous system composed of 5 AUVs for the simulation. The dynamics of these AUVs are described by the system (<ref>). The system parameters for each AUV are specified as follows:
M_i = [ m_11,i 0 0; 0 m_22,i m_23,i; 0 m_23,i m_33,i ],
C_i = [ 0 0 -m_22,i v_i - m_23,i r_i; 0 0 -m_11,i u_i; m_22,i v_i + m_23,i r_i -m_11,i u_i 0 ],
D_i = [ d_11,i (ν_i) 0 0; 0 d_22,i (ν_i) d_23,i (ν_i); 0 d_32,i (ν_i) d_33,i (ν_i) ], g_i = 0,
Δ_i = [ Δ_1,i (χ_i); Δ_2,i (χ_i); Δ_3,i (χ_i) ], ∀ i ∈𝐈 [1, 5]
where the mass and damping matrix components for each AUV i are defined as:
m_11,i = m_i - X_u̇,i, m_22,i = m_i - Y_v̇,i,
m_23,i = m_i x_g,i - Y_ṙ,i, m_33,i = I_z,i - N_ṙ,i,
d_11,i = -(X_u,i + X_uu,iu_i), d_22,i = -(Y_v,i + Y_vv,iv_i + Y_rv,ir_i),
d_23,i = -(Y_r,i + Y_vr,iv_i + Y_rr,ir_i), d_32,i = -(N_v,i + N_vv,iv_i + N_rv,ir_i),
d_33,i = -(N_r,i + N_vr,iv_i + N_rr,ir_i).
According to the notation in <cit.> and <cit.>, the coefficients {X(·), Y(·), N(·)} are hydrodynamic parameters. The associated system parameters are borrowed from <cit.> (with slight modifications for the different AUV agents) for simulation purposes and are listed in Table <ref>. For all i ∈𝐈[1,5], we set x_g,i = 0.05 and Y_ṙ,i = Y_rv,i = Y_vr,i = Y_rr,i = N_rv,i = N_rr,i = N_vv,i = N_vr,i = N_r,i = 0.
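For reference, the sketch below assembles M_i, C_i(ν_i), and D_i(ν_i) from the component definitions above; the dictionary-based parameter passing is an assumption made purely for illustration.

```python
import numpy as np

def auv_matrices(p, nu):
    """Assemble M, C(nu), D(nu) for one AUV from the definitions above.

    p  : dict of hydrodynamic parameters (keys mirror the notation above,
         e.g. p["X_udot"] stands for X_{udot,i}); nu = [u, v, r].
    """
    u, v, r = nu
    m11 = p["m"] - p["X_udot"]
    m22 = p["m"] - p["Y_vdot"]
    m23 = p["m"] * p["x_g"] - p["Y_rdot"]
    m33 = p["I_z"] - p["N_rdot"]
    M = np.array([[m11, 0.0, 0.0],
                  [0.0, m22, m23],
                  [0.0, m23, m33]])
    C = np.array([[0.0, 0.0, -m22 * v - m23 * r],
                  [0.0, 0.0, -m11 * u],
                  [m22 * v + m23 * r, -m11 * u, 0.0]])
    d11 = -(p["X_u"] + p["X_uu"] * u)
    d22 = -(p["Y_v"] + p["Y_vv"] * v + p["Y_rv"] * r)
    d23 = -(p["Y_r"] + p["Y_vr"] * v + p["Y_rr"] * r)
    d32 = -(p["N_v"] + p["N_vv"] * v + p["N_rv"] * r)
    d33 = -(p["N_r"] + p["N_vr"] * v + p["N_rr"] * r)
    D = np.array([[d11, 0.0, 0.0],
                  [0.0, d22, d23],
                  [0.0, d32, d33]])
    return M, C, D
```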
Model uncertainties are given by:
Δ_1 = 0, Δ_2 = [ 0.2u_2^2+0.3v_2 -0.95 0.33r_2 ]^T
Δ _3= [ -0.58+cos (v_3 ) 0.23r_3^3 0.74u_3^2 ]^T
Δ _4= [ -0.31 0 0.38u_4^2 + v_4^3 ]^T
Δ _5= [ sin (v_5 ) cos (u_5 + r_5 ) -0.65 ]^T.
Fig. <ref> illustrates the communication topology and the spanning tree where agent 0 is the virtual leader and is considered as the root, in accordance with assumption <ref>. The desired formation pattern requires each AUV, η_i, to track a periodic signal generated by the virtual leader η_0. The dynamics of the leader are defined as follows:
[ η̇_0; ν̇_0 ] = [ 0 [ 1 0 0; 0 -1 0; 0 0 1 ]; [ -1 0 0; 0 1 0; 0 0 -1 ] 0 ][ η_0; ν_0 ],
[ η_0(0); ν_0(0) ] = [ 0 80 0 80 0 80 ]^T.
The initial conditions and system matrix are structured to ensure all eigenvalues of A_0 lie on the imaginary axis, thus satisfying Assumption <ref>. The reference trajectory for η_0 is defined as [80sin(t), 80cos(t), 80sin(t)]^T. The predefined offsets d^*_i, which determine the relative positions of the AUVs to the leader, are specified as follows:
d^*_1 = [0, 0, 0]^T, d^*_2 = [10, -10, 0]^T, d^*_3 = [10, 10, 0]^T, d^*_4 = [-10, 10, 0]^T, d^*_5 = [-10, -10, 0]^T.
Each AUV tracks its respective position in the formation by adjusting its location to η_i = η_0 + d^*_i.
§.§ DDL Formation Learning Control Simulation
The estimated virtual leader's state, derived from the cooperative estimator in the first layer (see equations (<ref>) and (<ref>)), is utilized to estimate each agent's complete uncertain dynamics within the DDL controller (second layer) using equations (<ref>) and (<ref>). The uncertain nonlinear functions F_i(Z_i) for each agent are approximated using RBF NNs, as described in equation (<ref>). Specifically, for each agent i ∈{1, …, 5}, the nonlinear uncertain functions F_i(Z_i), dependent on ν_i, are modeled. The input to the NN, Z_i = [u_i, v_i, r_i]^T, allows the construction of Gaussian RBF NNs, represented by W^T_k,i S_k,i(Z_i), utilizing 4096 neurons arranged in a 16 × 16 × 16 grid. The centers of these neurons are evenly distributed over the state space [-100, 100] × [-100, 100] × [-100, 100], and each has a width γ_k,i = 60, for all i ∈{1, …, 5} and k ∈{1, 2, 3}.
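The regressor construction described above can be sketched as follows; the exact Gaussian normalization, here exp(-‖Z - c_j‖²/γ²), is an assumption of this illustration.

```python
import numpy as np

# 16 x 16 x 16 = 4096 Gaussian RBF centers, evenly spaced over [-100, 100]^3,
# each with width gamma = 60, as specified above.
axis = np.linspace(-100.0, 100.0, 16)
centers = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"),
                   axis=-1).reshape(-1, 3)             # (4096, 3)

def S(Z, gamma=60.0):
    """Regressor vector S(Z) for the NN input Z = [u, v, r]."""
    d2 = ((np.asarray(Z)[None, :] - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / gamma**2)

# Scalar NN output of channel k for constant weights W_bar_k: W_bar_k @ S(Z).
```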
The observer and controller parameters are chosen as β_1 = β_2 = 5, and the diagonal matrices K_1,i = 800 * diag{1.2, 1, 1} and K_2,i = 1200 * diag{1.2, 1, 1}, with Γ_k,i = 10 and σ_k,i = 0.0001 for all i ∈{1, …, 5} and k ∈{1, 2, 3}. The initial conditions for the agents are set as η_1(0) = [30, 60, 0]^T, η_2(0) = [40, 70, 0]^T, η_3(0) = [50, 80, 0]^T, η_4(0) = [10, 70, 0]^T, and η_5(0) = [10, 50, 0]^T. Zero initial conditions are assumed for all the distributed observer states (χ^i_0, A^i_0) and the DDL controller states W^k_i for all i ∈{1, …, 5} and k ∈{1, 2, 3}. Time-domain simulation is carried out using the DDL formation learning control laws as specified in equations (<ref>) and (<ref>), along with equations (<ref>) and (<ref>).
Figure <ref> displays the simulation results of the cooperative estimator (first layer) for all five agents. It illustrates how each agent's estimated states, η̂_i, converge perfectly to the leader's state η_0 through (<ref>) and (<ref>).
[Figure: distributed-observer estimates of the leader's state for each AUV — panels (a) x̂_i → x_0 (m), (b) ŷ_i → y_0 (m), (c) ψ̂_i → ψ_0 (deg). Caption: Distributed observer for all three states of each AUV.]
Fig. <ref> presents the position tracking control responses of all agents. Figs. <ref>a, <ref>b, and <ref>c illustrate the tracking performance of the AUVs along the x-axis, y-axis, and vehicle heading, respectively, demonstrating effective tracking of the leader's position signal. While the first AUV exactly tracks the leader's states, agents 2 through 5 are shown to successfully follow agent 1, maintaining prescribed distances and alignment along the x and y axes, and matching the same heading angle. These results underscore the robustness of the real-time tracking control system, which enforces the predefined formation pattern initially depicted in Fig. <ref>. Additionally, Fig. <ref> highlights the real-time control performance for all agents, showcasing the effectiveness of the tracking strategy in maintaining the formation pattern.
[Figure: position tracking control for all three states of each AUV — panels (a) x_i → x_0 (m), (b) y_i → y_0 (m), (c) ψ_i → ψ_0 (deg).]
The sum of the absolute values of the neural network (NN) weights in Fig. <ref>, along with the NN weights depicted in Fig. <ref>, demonstrates that the function approximation from the second layer is achieved accurately. This is evidenced by the convergence of all neural network weights to their optimal values during the learning process, which is in alignment with Theorem <ref>.
Fig. <ref> presents the neural network approximation results for the unknown system dynamics F_3(Z_3) (as defined in (<ref>)) for the third AUV, using a Radial Basis Function (RBF) Neural Network. The approximations are plotted for both W^T_k,3 S_k,3(Z_3) and W̅^T_k,3 S_k,3(Z_3) for all k ∈ I[1,3]. The results confirm that locally accurate approximations of the AUV's nonlinear dynamics are successfully achieved. Moreover, this learned knowledge about the dynamics is effectively stored and represented using localized constant RBF NNs.
[Figure: neural-network weight convergence for the third AUV (i = 3) — panels (a) Ŵ_1,3, (b) Ŵ_2,3, (c) Ŵ_3,3.]
[Figure: function approximation for k = 1, 2, 3 for the 3rd AUV (i = 3) — comparison of W̅_k,3^TS_k,3(Z_3), W_k,3^TS_k,3(Z_3), and f_k,3(Z_3).]
§.§ Simulation for Formation Control with Pre-learned Dynamics
To evaluate the distributed control performance of the multi-AUV system, we implemented the pre-learned distributed formation control law. This strategy integrates the estimator observer (<ref>) and (<ref>), this time coupled with the constant RBF NN controller (<ref>). We employed the virtual leader dynamics described in (<ref>) to generate consistent position tracking reference signals, as previously discussed in Section <ref>. To ensure a fair comparison, identical initial conditions, control gains, and inputs were used across all simulations. Fig. <ref> illustrates the comparison of the tracking control results from (<ref>) and (<ref>) with the results using the pre-trained weights W̅ in (<ref>).
[Figure: position tracking control using the pretrained weights W̅ — panels (a) x_i → x_0 (m), (b) y_i → y_0 (m), (c) ψ_i → ψ_0 (deg).]
The control experiments and simulation results presented demonstrate that the constant RBF NN control law (<ref>) can achieve satisfactory tracking control performance comparable to that of the adaptive control laws (<ref>) and (<ref>), but with notably less computational demand. The elimination of online recalculations or readaptations of the NN weights under this control strategy significantly reduces the computational load. This reduction is particularly advantageous in scenarios involving extensive neural networks with a large number of neurons, thereby conserving system energy and enhancing operational efficiency in real-time applications.
For the design of this framework, all system dynamics are assumed to be unknown, which allows its application to any AUV, regardless of environmental conditions. This universality includes adaptation to variations such as water flow, which can increase the AUV's effective mass through the added mass phenomenon, as well as influence the AUV's inertia. Additionally, buoyancy force, which varies with depth, impacts the inertia. The lift force, dependent on varying water flows or currents and the AUV's shape or appendages, along with the drag force from water viscosity, significantly influences the damping matrix in the AUV's dynamics. These factors ensure the framework's robustness across diverse operational scenarios, allowing effective operation across a range of underwater conditions including those affected by hydrodynamic forces and torques.
The controller's design enables it to maintain stability and performance in dynamic and often unpredictable underwater environments. The stored weights from the training process enable the AUVs to recall and apply learned dynamics efficiently, even after the systems are shut down and restarted. This feature ensures that AUVs equipped with our control system can quickly resume operations with optimal control settings, regardless of environmental changes or previous operational states. Our approach not only demonstrates a significant advancement in the field of autonomous underwater vehicle control but also establishes a foundation for future enhancements that could further minimize energy consumption and maximize the adaptability and resilience of AUV systems in challenging marine environments.
§ CONCLUSION
In conclusion, this paper has introduced a novel two-layer control framework designed for Autonomous Underwater Vehicles (AUVs), aimed at universal applicability across various AUV configurations and environmental conditions. This framework assumes all system dynamics to be unknown, thereby enabling the controller to operate independently of specific dynamic parameters and effectively handle any environmental challenges, including hydrodynamic forces and torques. The framework consists of a first-layer distributed observer estimator that captures the leader's dynamics using information from adjacent agents, and a second-layer decentralized deterministic learning controller. Each AUV utilizes the estimated signals from the first layer to determine the desired trajectory, simultaneously training its own dynamics using Radial Basis Function Neural Networks (RBF NNs). This innovative approach not only sustains stability and performance in dynamic and unpredictable environments but also allows AUVs to efficiently utilize previously learned dynamics after system restarts, facilitating rapid resumption of optimal operations. The robustness and versatility of this framework have been rigorously confirmed through comprehensive simulations, demonstrating its potential to significantly enhance the adaptability and resilience of AUV systems. By embracing total uncertainty in system dynamics, this framework establishes a new benchmark in autonomous underwater vehicle control and lays a solid groundwork for future developments aimed at minimizing energy use and maximizing system flexibility.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
§ AUTHOR CONTRIBUTIONS
Emadodin Jandaghi and Chengzhi Yuan led the research and conceptualized the study. Emadodin Jandaghi developed the methodology, conducted the derivation, design, implementation of the method, and simulations, and wrote the manuscript. Chengzhi Yuan, Mingxi Zhou, and Paolo Stegagno contributed to data analysis and provided insights on the control and oceanographic aspects of the research. Chengzhi Yuan supervised the project, assisted with securing resources, and reviewed the manuscript. All authors reviewed and approved the final version of the manuscript.
§ FUNDING
This work is supported in part by the National Science Foundation under Grant CMMI-1952862 and CMMI-2154901.
|
http://arxiv.org/abs/2409.02183v1 | 20240903180005 | Thermodynamic properties of the macroscopically degenerate tetramer-dimer phase of the spin-1/2 Heisenberg model on the diamond-decorated square lattice | ["Katarina Karlova", "Andreas Honecker", "Nils Caci", "Stefan Wessel", "Jozef Strecka", "Taras Verkholyak"] | cond-mat.str-el | ["cond-mat.str-el", "cond-mat.stat-mech", "quant-ph"] |
[email protected]
Laboratoire de Physique Théorique et
Modélisation, CNRS UMR 8089, CY Cergy Paris Université, Cergy-Pontoise, France
Laboratoire de Physique Théorique et
Modélisation, CNRS UMR 8089, CY Cergy Paris Université, Cergy-Pontoise, France
Laboratoire Kastler Brossel, Collège de France, CNRS, École Normale Supérieure - Université PSL, Sorbonne Université, 75005 Paris, France
Institute for Theoretical Solid State Physics, RWTH Aachen University,
Otto-Blumenthal-Str. 26, 52074 Aachen, Germany
Institute of Physics, Faculty of Science, P. J. Šafárik University, Park Angelinum 9, 04001 Košice, Slovakia
Institute for Condensed Matter Physics, NASU, Svientsitskii Street 1, 790 11, L'viv, Ukraine
§ ABSTRACT
The spin-1/2 Heisenberg antiferromagnet on the diamond-decorated square lattice in the presence of a magnetic field displays various quantum phases including the Lieb-Mattis ferrimagnetic, dimer-tetramer, monomer-dimer, and spin-canted phases, in addition to the trivial fully saturated state. Thermodynamic properties of this model are investigated using several complementary analytical and numerical methods such as exact diagonalization up to the systems of 40 spins, an effective monomer-dimer description, sign-problem-free quantum Monte Carlo simulations for up to 180 spins, and a decoupling approximation. Our particular attention is focused on the parameter region favoring the dimer-tetramer phase. This ground state can be represented by a classical hard-dimer model on the square lattice and retains a macroscopic degeneracy even under a magnetic field. However, the description of the low-temperature thermodynamics close to the boundary between the macroscopically degenerate dimer-tetramer and the non-degenerate monomer-dimer phases requires an extended classical monomer-dimer lattice-gas model. Anomalous thermodynamic properties emerging in the vicinity of the dimer-tetramer phase are studied in detail. Under the adiabatic demagnetization we detect an enhanced magnetocaloric effect promoting an efficient cooling to absolute zero temperature, provided that the system reaches the dimer-tetramer ground state at zero field.
Thermodynamic properties of the macroscopically degenerate tetramer-dimer phase of the spin-1/2 Heisenberg model on the diamond-decorated square lattice

Katarina Karlova, Andreas Honecker, Nils Caci, Stefan Wessel, Jozef Strecka, Taras Verkholyak

September 2, 2024
§ INTRODUCTION
Over the past decades, significant attention has been dedicated to the investigation of low-dimensional quantum magnets situated on geometrically frustrated lattices <cit.>. The hallmark of classical highly frustrated magnets is a
macroscopically degenerate ground state <cit.>.
Quantum fluctuations instead
possess the remarkable capability to manipulate and potentially eliminate such degeneracies through a phenomenon known as the quantum order-by-disorder mechanism <cit.>.
Quantum spin systems thus often exhibit ordered ground states such as long-range antiferromagnetism <cit.> or valence bond solids <cit.>. However, the interplay of frustration and quantum fluctuations can occasionally also give rise to the emergence of exotic quantum disordered states, most prominently in quantum spin liquids <cit.>. Characterized by long-range entanglement, quantum spin liquids indeed exhibit a plethora of intriguing phenomena, including the fractionalization of excitations and the emergence of topological order <cit.>. Nevertheless, it is also feasible for frustrated quantum magnets to stabilize quantum disordered phases that retain a peculiar macroscopic degeneracy in the absence of quantum spin-liquid characteristics.
Frustrated quantum spin systems constitute a challenge for theory, for example because the infamous sign problem <cit.>
precludes straightforward and efficient quantum Monte Carlo simulations.
For this reason, tailor-made models such as the Rokhsar-Kivelson <cit.> and Kitaev <cit.> models have been proposed in order to allow for obtaining an exact solution.
However, geometric frustration can sometimes also be beneficial rather than detrimental: if kinetic energy is significantly or even completely suppressed by destructive quantum interference, the quantum problem may be mapped to an effective classical dimer model <cit.> or at least well approximated by a quantum dimer model <cit.>, thus rendering a controlled approximation if not an exact treatment of the low-energy physics possible.
Examples of such phenomena include but are not limited to
models with exact dimer ground states in one <cit.> and two dimensions <cit.>,
the one-third plateau in the kagome quantum antiferromagnet <cit.>,
and localized magnons appearing just below the saturation field of highly frustrated antiferromagnets
<cit.>.
In this respect, the Heisenberg model on the diamond-decorated square lattice, sketched in Fig. <ref>, serves as an illustrative example of the interplay between frustration, quantum fluctuations, and emergent phenomena.
Previous investigations into the ground-state characteristics of this model
in the absence of a magnetic field
demonstrate the presence of a Lieb-Mattis ferrimagnetic phase, a monomer-dimer phase, and a dimer-tetramer phase <cit.>. The latter two phases exhibit a macroscopic ground-state degeneracy in the absence of a magnetic field and thus deserve particular attention.
The dimer-tetramer ground state retains its macroscopic degeneracy even in the presence of a finite magnetic field (in contrast to the monomer-dimer state), highlighting its exceptional character <cit.>. This robust degeneracy can furthermore be related to the classical hard-dimer model on the square lattice, emphasizing its connection to well-known models in condensed-matter physics <cit.>.
Hence, the dimer-tetramer phase, characterized by its short-range ordering and robust macroscopic degeneracy over an extended parameter regime, offers an intriguing perspective on frustrated quantum magnetism.
In this paper, we therefore examine the thermodynamic and magnetocaloric properties of the dimer-tetramer phase of the spin-1/2 Heisenberg model on the diamond-decorated square lattice, including a finite magnetic field.
Indeed, we will show that this phase presents an ideal candidate for applications such as magnetic cooling during the process of adiabatic demagnetization <cit.>.
The further organization of this paper is as follows: the Hamiltonian and ground-state phase diagram of the spin-1/2 Heisenberg model on the diamond-decorated square lattice are reviewed in Sec. <ref>. Our analytical and numerical approaches will be introduced in Sec. <ref>. Results are then presented in Sec. <ref> and a final summary is provided in Sec. <ref>.
§ HEISENBERG MODEL ON THE DIAMOND-DECORATED SQUARE LATTICE
In the following, we consider the spin-1/2 Heisenberg model on the diamond-decorated square lattice depicted in Fig. <ref>, and defined through the Hamiltionian
H = J_1 ∑_i=1^N [ S_i,1·(S_i,2+S_i,3+S_i,4+S_i,5 + S_i-x̂,2+S_i-x̂,3+S_i-ŷ,4+S_i-ŷ,5) ]
+ J_2 ∑_i=1^N (S_i,2·S_i,3 + S_i,4·S_i,5) - h∑_i=1^N∑_μ=1^5 S_i,μ^z .
The spin-1/2 operator S_i,μ= (S_i,μ^x, S_i,μ^y, S_i,μ^z) is assigned to the μth spin of the ith unit cell. The indices i-x̂ and i-ŷ refer to the unit cells immediately to the left and below the ith unit cell, respectively. We consider a finite lattice consisting of N unit cells, containing N_s = 5N spins, and we apply periodic boundary conditions. Typically, we utilize square lattices with lattice sizes L_x, L_y, and the total number of unit cells N=L_xL_y. The two distinct exchange interactions J_1 and J_2 are depicted in Fig. <ref> by thin and thick lines, respectively. The magnetic-field (h) term represents the standard Zeeman coupling of the spin degrees of freedom to an external magnetic field.
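For very small clusters, this Hamiltonian can be assembled directly as a sparse matrix, as sketched below. This naive builder is only meant to make the coupling geometry explicit: it does not exploit the conserved dimer spins used for the larger N_s = 30 and 40 systems discussed later, and it is practical only up to roughly N_s = 20 spins (a 2 × 2 lattice; use L_x, L_y ≥ 2 so that the periodic J_1 bonds are not doubled).

```python
import scipy.sparse as sp

SX = sp.csr_matrix([[0.0, 0.5], [0.5, 0.0]])
SY = sp.csr_matrix([[0.0, -0.5j], [0.5j, 0.0]])
SZ = sp.csr_matrix([[0.5, 0.0], [0.0, -0.5]])
I2 = sp.identity(2, format="csr")

def embed(ops, n):
    """Kron product placing the 2x2 operators in `ops` (dict site -> op)."""
    out = sp.identity(1, format="csr")
    for site in range(n):
        out = sp.kron(out, ops.get(site, I2), format="csr")
    return out

def hamiltonian(Lx, Ly, J1, J2, h):
    """Sparse H of the Hamiltonian above; site index 5*cell + mu, where
    mu = 0 is the monomer S_{i,1} and mu = 1..4 are S_{i,2}..S_{i,5}."""
    n = 5 * Lx * Ly
    idx = lambda ix, iy, mu: 5 * ((ix % Lx) * Ly + (iy % Ly)) + mu
    bonds = []
    for ix in range(Lx):
        for iy in range(Ly):
            m = idx(ix, iy, 0)
            for mu in (1, 2, 3, 4):              # J1 bonds inside the cell
                bonds.append((J1, m, idx(ix, iy, mu)))
            bonds += [(J1, m, idx(ix - 1, iy, 1)), (J1, m, idx(ix - 1, iy, 2)),
                      (J1, m, idx(ix, iy - 1, 3)), (J1, m, idx(ix, iy - 1, 4))]
            bonds += [(J2, idx(ix, iy, 1), idx(ix, iy, 2)),   # intra-dimer
                      (J2, idx(ix, iy, 3), idx(ix, iy, 4))]
    H = sp.csr_matrix((2**n, 2**n), dtype=complex)
    for J, a, b in bonds:
        H = H + J * sum(embed({a: S, b: S}, n) for S in (SX, SY, SZ))
    for a in range(n):                            # Zeeman term
        H = H - h * embed({a: SZ}, n)
    return H
```

In the DT regime (e.g., J_2 = 1.7 J_1 at h = 0), the lowest eigenvalue returned by scipy.sparse.linalg.eigsh(H, k=1, which='SA') can be checked against the exact DT energy -N(J_1+J_2) quoted below.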
Our investigations center on the thermodynamic properties of the dimer-tetramer phase. However, we start by providing a brief overview of the ground-state phase diagram and the characteristics of various phases occurring in the low-field regime. The ground-state phase diagram of the spin-1/2 Heisenberg model on the diamond-decorated square lattice is depicted in Fig. <ref> and was extensively described previously <cit.>. It exhibits five phases in the parameter plane spanned by the interaction ratio J_2/J_1 and the reduced magnetic field h/J_1. In addition to a saturated paramagnetic (PM) state and the spin-canted (SC) phase, the phase diagram features three distinct phases: the Lieb-Mattis (LM) ferrimagnetic, the dimer-tetramer (DT), and the monomer-dimer (MD) phases. In the LM phase the Heisenberg dimers form triplets and the overall state is reminiscent of classical ferrimagnetic order.
The MD and DT phases exhibit fragmentation. In both cases, the overall wave function can be expressed as a product of individual fragments <cit.>. In the MD phase, singlet dimers (depicted in Fig. <ref> as yellow ovals) coexist with free monomeric spins in zero field (h=0) and fully polarized monomeric spins in nonzero field.
More specifically, the ground state can be presented as follows:
|MD⟩={[ ∏_i=1^N|σ⟩_i,1⊗∏_d=1^2N|s⟩_d, σ∈{↑,↓}, h=0; ∏_i=1^N|↑⟩_i,1⊗∏_d=1^2N|s⟩_d, h> 0, ].
where |s⟩_d=(|↑⟩|↓⟩-|↓⟩|↑⟩)/√(2) represents the dimer-singlet state.
The DT phase is particularly intriguing because it comprises a macroscopic number of states that contain both singlet dimers and singlet tetramers.
The singlet tetramers contain in turn triplet dimers, as depicted in Fig. <ref> by small and large yellow ovals, respectively. Furthermore, each singlet tetramer is surrounded by singlet dimers within the DT ground-state manifold.
As a result, overall these DT states are singlets and are of the following form:
|DT⟩ = ∏'_d |t⟩_d ∏_d'≠ d |s⟩_d',
|t⟩_d = 1/√(3)(|↑⟩_i,1|↓↑⟩_d|↓⟩_i',1+|↓⟩_i,1|↑↓⟩_d|↑⟩_i',1)
-1/2(|↑⟩_i,1|↑↓⟩_d|↓⟩_i',1 + |↑⟩_i,1|↓↓⟩_d|↑⟩_i',1.
+.|↓⟩_i,1|↑↑⟩_d|↓⟩_i',1+ |↓⟩_i,1|↓↑⟩_d|↑⟩_i',1) ,
where ∏'_d denotes the product over a subset of dimers that are not nearest neighbors.
This phase exhibits a macroscopically high degeneracy that is related to the hard-dimer model on the square lattice <cit.>.
We will be especially interested in the regime near the quantum phase transition line between the MD and DT phases. This transition line is given by the condition
h = 2J_1 - J_2, if 1.487 J_1≲ J_2 ≤ 2J_1,
and here, the model is expected to exhibit an enhanced magnetocaloric effect, as we explain further below.
We note that the finite-temperature properties in the vicinity of the MD-LM transition are well captured by a
Ising-Heisenberg variant of the model <cit.>. By contrast, the DT phase (which is the focus of the present investigation) owes its existence to quantum fluctuations on the monomeric spins and is thus absent in the Ising-Heisenberg variant.
§ METHODS
The overall
thermodynamic properties and thermal phase transitions for various phases of this model were examined previously <cit.>. Here, we focus on the macroscopically degenerate DT phase, which is investigated using exact diagonalization, sign-problem-free quantum Monte Carlo simulations, an effective monomer-dimer (EMD) description, as well as a spin-star decoupling approximation.
We also show some density-matrix-renormalization group (DMRG) ground-state
results, but no new data has been produced for the present purposes such
that we refer to Ref. <cit.> for technical details on these DMRG
calculations.
§.§ Effective monomer-dimer (EMD) model
In order to formulate the effective model at the boundary between the MD and DT phases, we
recall some notation <cit.>.
The energy of singlet dimers of the MD state (<ref>) can be calculated as follows:
E^(0)_MD = 2Nε_sd = -3N/2J_2,
where ε_sd=-3/4J_2 is the energy of one singlet dimer.
To obtain the energy of the MD state at nonzero fields, we need to add the contribution of the Zeeman term for the monomer spins to Eq. (<ref>):
E_MD = E^(0)_MD - Nh/2.
The DT phase of the states (<ref>) consists of N/2 singlet tetramers and 3N/2 singlet dimers.
Thus, its energy is
E_ DT = N(3/2ε_sd + 1/2ε_st)
= - N (J_2 + J_1),
where ε_st=-2J_1+1/4J_2 is the energy of the singlet tetramer.
The condition E_MD = E_ DT yields the phase-transition line given by Eq. (<ref>).
The ground state is macroscopically degenerate on the phase-transition line. It contains all possible configurations of two states of the decorated diamonds: (i) singlet dimers, and (ii) singlet tetramers containing a dimer triplet,
subject to the condition that the singlet tetramers cannot share edges.
The singlet-dimer-to-singlet-tetramer excitation is given by
the gap
Δ_H,V = ε_st - ε_sd = J_2^H,V - 2J_1.
Here we have introduced a distinction between horizontal (H) and vertical dimers (V)
with two different interaction constants J_2^H and J_2^V
in anticipation of a transfer-matrix approach that will treat the two spatial directions differently.
It should be noted that the two monomer spins in a diamond do not contribute to the Zeeman term in case they are included in the singlet tetramer.
As a result, the effective low-temperature model can be considered as a monomer-dimer problem on the square lattice <cit.>, where dimers can be located on the bonds of a square lattice connecting nearest-neighbor vertices (which corresponds to the singlet tetramer state of the diamond-decorated square lattice) and each vertex may host no more than one dimer (i.e., different singlet tetramers cannot have common spins). This correspondence is illustrated in Figs. <ref>(a), (b).
Defining the energy with respect to the dimer singlets, the partition function of such a model can be written as follows:
Z = e^-β E^(0)_ MD∑_C x^H y^V z^M,
where ∑_C denotes the sum over all possible configurations of dimers on the square lattice;
M, H, V are the numbers of monomers and horizontal and vertical dimers (which obey the restriction 2H+2V+M = N);
x=e^-βΔ_H, y=e^-βΔ_V, z=2cosh(β h/2) are the respective activities,
β = 1/T is the inverse temperature (where we set k_B=1).
The monomer activity z corresponds to the partition function of a single monomer spin in an external field.
Recall that we have introduced the anisotropic excitations Δ_H and Δ_V for the horizontal and vertical dimers in order to identify the corresponding contributions in Eq. (<ref>) explicitly, but fix Δ_H=Δ_V (J_2^H=J_2^V=J_2) in all final calculations.
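As an illustration of the combinatorics behind the partition function above, the sum ∑_C x^H y^V z^M can be evaluated by brute force on small lattices; the sketch below uses open boundaries for simplicity, which is an assumption of this illustration.

```python
from itertools import product

def monomer_dimer_Z(Lx, Ly, x, y, z):
    """Brute-force Sum_C x^H y^V z^M on an open Lx x Ly square lattice.

    H, V = numbers of horizontal/vertical dimers, M = number of monomers
    (uncovered vertices); feasible only for small lattices.
    """
    bonds = []
    for i, j in product(range(Lx), range(Ly)):
        if i + 1 < Lx:
            bonds.append(((i, j), (i + 1, j), "H"))
        if j + 1 < Ly:
            bonds.append(((i, j), (i, j + 1), "V"))
    n_sites = Lx * Ly

    def rec(b, covered, H, V):
        if b == len(bonds):
            return x**H * y**V * z**(n_sites - 2 * (H + V))
        total = rec(b + 1, covered, H, V)              # bond b left empty
        a, c, kind = bonds[b]
        if a not in covered and c not in covered:      # place a dimer on b
            total += rec(b + 1, covered | {a, c},
                         H + (kind == "H"), V + (kind == "V"))
        return total

    return rec(0, frozenset(), 0, 0)

# Check: monomer_dimer_Z(4, 4, 1.0, 1.0, 0.0) -> 36.0, the number of
# close-packed dimer coverings of the open 4x4 lattice.
```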
After introducing α=x/y, γ=z/y^1/2, we represent the partition function as
Z = e^-β E^(0)_ MDy^N/2∑_Cα^H γ^M,
where α=e^-β(Δ_H-Δ_V), γ=2cosh(β h/2)e^βΔ_V/2.
One can prove without solving the model that the free energy of the monomer-dimer system is analytic such that
this model cannot exhibit any finite-temperature phase transition <cit.>.
In order to calculate the partition function, we resort to the transfer-matrix formalism <cit.> (see also Refs. <cit.>). The transfer matrix acts between two neighboring rows of vertical bonds. Vertical dimers are identified with spins up (↑), and the absence of dimers on vertical bonds corresponds to spin down (↓) (see Fig. <ref>(c)). The transfer matrix in operator form contains three contributions, V=V_3 V_2 V_1, with
V_1 = ∏_i=1^L_xσ^x_i,
V_2 = exp(γ∑_i=1^L_xσ^-_i),
V_3 = exp(α∑_i=1^L_xσ^-_iσ^-_i+1),
where σ^x_i and σ^±_i = σ^x_i ± iσ^y_i are the corresponding Pauli matrices.
Here, V_1 counts all possible configurations of closely packed vertical dimers, where σ^x_i fixes the condition that the configurations with two spins up in the vertical direction, i.e., two neighboring dimers, are forbidden.
The second term V_2 describes the creation of a monomer instead of a vertical dimer upon the action of the σ^-_i operator.
In a similar manner, V_3 corresponds to the creation of horizontal dimers by the annihilation of pairs of neighboring vertical dimers.
It is easy to check that V is Hermitian, V = V^†, since V_1 V^†_i = V_i V_1 for i=2,3 and V_2 commutes with V_3.
For the algebraic calculations it is more convenient to consider V^2,
V^2 = V_3 V_2 V_1 V_3 V_2 V_1 = V_3 V_2 V^†_3 V^†_2,
where V^†_2 =exp(γ∑_i=1^L_xσ^+_i),
V^†_3 =exp(α∑_i=1^L_xσ^+_iσ^+_i+1).
The partition function can then be written as
Z = e^-β E^(0)_ MD y^N/2 Tr V^L_y.
An exact solution of this problem is available only for the pure dimer model (γ=0) <cit.>,
where one can establish an equivalence to a generalized XY model <cit.>.
Then, the transfer matrix is of the form V=V_3 V^†_3, and can be diagonalized within the Jordan-Wigner fermionization scheme <cit.>. In the isotropic case, α=1, one obtains the entropy per unit cell of the closely-packed dimer model,
s_d≈ 0.29156 <cit.>.
Here, we consider the EMD model where all the terms (<ref>) in the transfer matrix are taken into account. Therefore, we proceed with a numerical calculation of the maximal eigenvalue Λ_max of the transfer matrix.
Using the Lanczos procedure <cit.>, we are able to find Λ_max for systems with a number of unit cells in the horizontal direction L_x up to 16.
In the limit L_y→∞, only the maximal eigenvalue Λ_max remains essential compared to the others, thus giving us the
free energy per unit cell as follows:
f = -1/Nβln Z = f_0 -1/β L_xlnΛ_max,
f_0
= -3/4 J_2^H - 1/4 J_2^V - J_1.
All other thermodynamic quantities are finally obtained by numerical differentiation.
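A compact numerical sketch of this procedure is given below. Building V as a dense 2^L_x × 2^L_x matrix restricts the sketch to small L_x, and the periodic wrap of the σ^-_iσ^-_i+1 term in V_3 is an assumption of this illustration.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh

def free_energy(T, h, J1, J2H, J2V, Lx):
    """EMD free energy per unit cell, f = f0 - (T/Lx) ln Lambda_max."""
    beta = 1.0 / T
    dH, dV = J2H - 2.0 * J1, J2V - 2.0 * J1        # gaps Delta_H, Delta_V
    alpha = np.exp(-beta * (dH - dV))
    gamma = 2.0 * np.cosh(beta * h / 2.0) * np.exp(beta * dV / 2.0)

    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sm = np.array([[0.0, 0.0], [1.0, 0.0]])        # sigma^-
    def site_op(op, i):                            # op at site i, identity else
        out = np.array([[1.0]])
        for j in range(Lx):
            out = np.kron(out, op if j == i else np.eye(2))
        return out

    V1 = site_op(sx, 0)
    for i in range(1, Lx):
        V1 = V1 @ site_op(sx, i)
    Sm = sum(site_op(sm, i) for i in range(Lx))
    Smm = sum(site_op(sm, i) @ site_op(sm, (i + 1) % Lx) for i in range(Lx))
    V = expm(alpha * Smm) @ expm(gamma * Sm) @ V1  # V3 V2 V1, Hermitian

    lam = eigsh(csr_matrix(V), k=1, which="LA")[0][0]
    f0 = -0.75 * J2H - 0.25 * J2V - J1
    return f0 - T / Lx * np.log(lam)

# Entropy per unit cell by numerical differentiation, s = -df/dT, e.g.:
# dT = 1e-4; s = -(free_energy(0.05+dT, 0, 1, 1.7, 1.7, 8)
#                  - free_energy(0.05-dT, 0, 1, 1.7, 1.7, 8)) / (2*dT)
```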
§.§ Spin-star decoupling approximation
The spin-star decoupling approximation is developed from the idea of decomposing the total Hamiltonian of the spin-1/2 Heisenberg model on the diamond-decorated square lattice (<ref>) into a sum of non-commuting cluster Hamiltonians H^0=∑_j=1^N H_j^0 ascribed to individual spin-star clusters. As a starting point of our calculation we will therefore consider the Hamiltonian H_j^0 of a single spin-star cluster, which is composed of one monomeric spin and its four neighboring dimers. The considered spin-star cluster is schematically illustrated on the left-hand-side of Fig. <ref> including simplified notation and is mathematically defined as follows
H_j^0 = J_1S_0·(S_1+S_2+S_3+S_4+S_5+S_6+S_7+S_8)
+ J_2/2(S_1·S_2+S_3·S_4+S_5·S_6+S_7·S_8).
The inclusion of the factor of 1/2 in the second term ensures that the intra-dimer interaction J_2 will not be double-counted if one performs a summation over the spin-star Hamiltonians H_j^0 at the end of our calculation. Note furthermore that we have omitted the Zeeman term because this term can be trivially added at the end of the calculation as it only shifts the respective eigenenergies thanks to the total Hamiltonian and the Zeeman term commuting with each other. Within the spin-star cluster decoupling, the total energy of the spin-1/2 Heisenberg diamond-decorated square lattice will be obtained as the sum of eigenenergies of the spin stars given by the cluster Hamiltonian (<ref>) upon simply disregarding the non-commutative nature of the cluster Hamiltonians [ H_i^0, H_j^0]≠ 0 for neighbors i and j.
Owing to the local conservation of the total spin on each dimer, it is convenient to express the cluster Hamiltonian (<ref>) in terms of composite spin operators of four dimers denoted as S_D_i = S_2i-1 + S_2i (i=1, …,4), as well as the total spin of the dimers S_D = ∑_i=1^4S_D_i
and the total spin of the spin star denoted as S_t=S_0+S_D. Consequently, the eigenenergies of the spin-star cluster Hamiltonian (<ref>) can be obtained from the eigenenergies of a simpler composite spin star illustrated on the right-hand-side of Fig. <ref>, which depend only on the total quantum spin number S_t, the composite quantum spin numbers of the individual dimers S_D_i, and the total quantum spin number of all four dimers S_D,
E_S_t, S_D, S_D_i^0 = J_1/2[S_t(S_t+1)-3/4- S_D
(S_D+1)]
- 3/2J_2+J_2/4∑_i=1^4 S_D_i (S_D_i + 1).
We note
that each composite spin dimer can be either in a singlet state characterized by S_D_i=0 or a triplet state with S_D_i=1, which means that the spin star can have from zero up to four triplets. Assuming antiferromagnetic couplings J_1>0 between the central monomeric spin S_0 and the four neighboring spin dimers S_D_i, the energetically most favorable states of the spin star in each sector correspond to the highest
multiplicity of the dimer spins S_D^ max = ∑_i S_D_i, given by the eigenenergies E_S_t, S_D^ max^0 with
E_1/2,0^0 = -3/2J_2,
E_1/2,1^0 = -J_1-J_2,
E_3/2,2^0 = -3/2J_1-J_2/2,
E_5/2,3^0 = -2J_1,
E_7/2,4^0 = -5/2J_1+J_2/2.
The total number of triplets within this set of eigenstates of the spin star thus coincides with the sum of the composite quantum spin numbers S_D.
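As a quick consistency check, the five eigenenergies listed above follow from the general expression by direct substitution; the short script below, using exact rational arithmetic, is purely illustrative.

```python
from fractions import Fraction as F

def E0(J1, J2, St, SD, SDis):
    """E^0_{S_t,S_D,S_Di} evaluated from the expression above."""
    return (F(1, 2) * J1 * (St * (St + 1) - F(3, 4) - SD * (SD + 1))
            - F(3, 2) * J2 + F(1, 4) * J2 * sum(s * (s + 1) for s in SDis))

J1, J2 = F(1), F(17, 10)                  # e.g. the ratio J2/J1 = 1.7
for SD in range(5):                       # SD dimer triplets, rest singlets
    St = abs(SD - F(1, 2))                # (1/2,0), (1/2,1), (3/2,2), ...
    SDis = [1] * SD + [0] * (4 - SD)
    print(f"S_t={St}, S_D={SD}:  E^0 = {E0(J1, J2, St, SD, SDis)} J1")
```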
The total energy of the spin-1/2 Heisenberg model on the diamond-decorated square lattice can be then determined by summing the energies of individual spin stars, whereby the states of the dimers can be effectively described within a quasi-particle formalism when assigning the occupation number n_i=1 to a dimer-triplet state and n_i=0 to a dimer-singlet state, respectively. By considering only the five lowest-energy eigenstates of the spin star, ranging from zero up to four triplet-dimer states as given by Eq. (<ref>), the eigenenergies can be expressed in terms of the occupation numbers of the dimer-triplet states,
E_n_1,…, n_4^0 =-3/8J_1-3/2J_2+J_2/2∑_i=1^4n_i+J_1/2∏_i=1^4(1-n_i)
+J_1/2[(∑_i=1^4 n_i-1/2)(∑_i=1^4 n_i+1/2)
-∑_i=1^4 n_i (∑_i=1^4 n_i+1)],
which can be further simplified into the following compact form
E_n_1, …, n_4^0= - 1/2(J_1+3J_2)+1/2(J_2-J_1)∑_i=1^4 n_i
+ J_1/2∏_i=1^4(1-n_i).
The last term represents the correction to the energy of free monomeric spins surrounded by four adjacent dimer singlets.
The eigenenergy E_n_1, …, n_4^0 corresponds to the spin multiplet with the total quantum spin number
S_t= ∑_i=1^4 n_i+ ∏_i=1^4 (1-n_i)-1/2.
Summing the eigenenergies of the spin stars and adding the respective Zeeman term yields the following formula for the overall energy of the spin-1/2 Heisenberg model on the diamond-decorated square lattice
E_T({n_i}) = E_T^0({n_i})-h S_T^z, where E_T^0({n_i}) represents the respective zero-field energy depending on the set of all occupation numbers { n_i}:
E_T^0({n_i}) = -N/2(J_1+3J_2)+(J_2-J_1)∑_i=1^2Nn_i
+(J_1/2)∑_j=1^N ∏_i∈□_j (1-n_j,i).
The last term in Eq. (<ref>) provides a correction to the energy when all four dimers surrounding the jth monomeric spin are in a singlet state. This situation is depicted in Fig. <ref>, where the central monomeric spin is surrounded by four occupation numbers assigned to composite spins of adjacent dimers. If all four composite spins of the dimers are in the singlet state, the correction term accounts for this specific configuration, ensuring that the energy is accurately represented under this specific condition. In Eq. (<ref>), the notation i∈□_j indicates that the index i runs over the four sites of a square plaquette surrounding the central monomeric spin, as depicted in Fig. <ref>. Finally, S_T^z is the z-component of the total spin S_T of the whole system, which can be expressed in terms of the occupation numbers of the dimer-triplet states
S_T= -N/2 +∑_i=1^2Nn_i+∑_j=1^N ∏_i∈□_j (1-n_j,i).
The first term accounts for the contribution of the monomeric spins, while the second term accounts for the contribution of the dimeric spins. Hence, the second term involves a summation over the occupation numbers assigned to the dimer-triplet states and thus corresponds to the total number of triplets N_ trip.. The summation in the third term is a correction stemming from all free
monomeric spins. In fact, the third term involves a projection operator for the monomeric spin surrounded by four dimer-singlet
states and thus corresponds to the total number of free monomeric spins N_ free.
The free energy of the spin-1/2 Heisenberg model on the diamond-decorated square lattice within the spin-star decoupling approximation can then be calculated according to the formula:
f_ star = -1/Nβln∑_{n_i}∑_S_T^z=-S_T^S_Texp[-β E_T^0({n_i})+β h S_T^z].
We emphasize that the formula (<ref>) for the total quantum spin number S_T tacitly assumes that all free monomeric spins are fully polarized by the magnetic field. Therefore, in the zero-field limit the formula (<ref>) for the free energy does not take into account the degeneracy from the free monomeric spins (i.e., monomeric spins surrounded by four singlet dimers), which have a significant role in determining the zero-point entropy and low-temperature thermodynamics in the absence of a magnetic field.
To improve the zero-field estimate of the free energy within the spin-star decoupling approximation, we therefore modified the formula (<ref>) for the free energy by accounting for the respective degeneracy factor of the free monomeric spins
f_ star^0 = -1/Nβln∑_{n_i}∑_S_T^z=-S_T^0^S_T^02^N_ freeexp[-β E_T^0({n_i})],
where N_ free=∑_j=1^N ∏_i∈ (1-n_j,i) denotes the total number of the free monomeric spins and the overall spin multiplicity S_T^0 in zero field can be expressed as
S_T^0=N_ free-N/2 + ∑_i=1^2Nn_i.
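Both formulas can be evaluated by direct enumeration on small tori; the sketch below implements the finite-field expression for an L × L cluster (L = 2, i.e., 2^8 occupation patterns), with the plaquette bookkeeping encoded through np.roll. The cluster size and array layout are assumptions of this illustration.

```python
import numpy as np
from itertools import product
from scipy.special import logsumexp

def f_star(T, h, J1, J2, L=2):
    """Finite-field spin-star free energy per unit cell by enumerating all
    2^(2N) dimer-triplet occupations on an L x L torus (N = L*L cells)."""
    beta, N = 1.0 / T, L * L
    logw = []
    for occ in product([0, 1], repeat=2 * N):
        nH = np.array(occ[:N]).reshape(L, L)   # '23' dimer of each cell
        nV = np.array(occ[N:]).reshape(L, L)   # '45' dimer of each cell
        # the monomer of cell (i,j) is free iff its four surrounding dimers
        # (own '23' and '45' dimer, '23' dimer of the left cell, '45' dimer
        # of the cell below) are all singlets -- the plaquette product above
        free = ((1 - nH) * (1 - np.roll(nH, 1, axis=0)) *
                (1 - nV) * (1 - np.roll(nV, 1, axis=1)))
        n_t, n_f = nH.sum() + nV.sum(), free.sum()
        E0 = -N / 2 * (J1 + 3 * J2) + (J2 - J1) * n_t + J1 / 2 * n_f
        ST = -N / 2 + n_t + n_f                # total spin of the multiplet
        Sz = np.arange(-ST, ST + 1)            # Zeeman-split levels
        logw.extend(-beta * (E0 - h * Sz))
    return -logsumexp(logw) / (beta * N)

# e.g. f_star(0.02, 0.1, 1.0, 1.7) approaches the DT energy -(J1+J2) = -2.7;
# the improved zero-field formula would instead weight each configuration
# by an extra degeneracy factor 2^(N_free).
```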
The spin-star decoupling approximation recovers the transition fields around the DT phase in the ground-state phase diagram (see red broken lines in Fig. <ref>(b)). By setting all n_i=0 in Eq. (<ref>), we actually get the exact value for the ground-state energy of the MD phase (<ref>). Requiring that there is a single triplet dimer in each spin-star configuration, we instead obtain the exact energy of the DT phase (<ref>).
Calculating the ground-state energy of the LM phase (all n_i=1) within the spin-star approach, E_ LM=-N/2(5J_1 - J_2 + 3h), yields the transition field between the LM and DT phases as h=J_2 - J_1, which is very close to the DMRG results shown in Fig. <ref>(b) (cf. the red broken and blue solid lines in Fig. <ref>(b)).
Recall that on the phase boundary any configuration that contains at least one triplet in the spin-star cluster has the same energy. This leads to an additional macroscopic degeneracy (higher than the degeneracy of the DT phase). This artifact of the decoupling approximation influences the low-temperature behavior of the entropy discussed in Sec. <ref>.
The decoupling approximation based on the spin-star clusters was developed under two fundamental assumptions, (i) ignoring the non-commutative nature of the cluster Hamiltonians (<ref>) and (ii) subsequently considering only the five energetically most favorable states of the spin star given by Eq. (<ref>). Despite these simplifications, the spin-star cluster decoupling offers a reasonable approximation at sufficiently low temperatures whenever the system is driven towards the fragmented MD or DT phases, whereas the collective nature of the LM ferrimagnetic phase is
not rigorously accounted for within this treatment. Although we have attempted to improve the precision of the spin-star approximation in zero magnetic field by considering the degeneracies pertinent to free monomeric spins, this calculation procedure should be regarded as much more reliable and precise in
the presence of a magnetic field due to Zeeman splitting of the relevant spin multiplets. It indeed turns out that the quantitative agreement between the results derived from the spin-star approximation and a numerical treatment of the full model progressively extends to higher temperatures as the magnetic field increases,
see Sec. <ref>.
§.§ Exact diagonalization
The full N_s=30 spectra were obtained previously <cit.> and can be re-evaluated for the present purposes.
However, here we push exact diagonalization (ED) further to the N_s=40 system sketched in
Fig. <ref>. We have followed the same strategy as in Ref. <cit.>, exploited conservation of the total spin
on each dimer and thus passed to an effective Lieb lattice, represented by the yellow spheres in the background of Fig. <ref>.
For the present N_s=40 system, computer enumeration then yields 191 topologically nonequivalent arrangements of spin triplets and singlets
on the dimers with degeneracies ranging from 1 (for all dimers in the triplet state) to 1152.
We were then able to compute the full spectra in the sectors with up
to N_ trip.=8 triplets. This includes in particular the sector with N_ trip.=4 triplets that contains the dimer-tetramer phase for the N_s=40 geometry.
For the sectors with 9 triplets or more, we further diagonalized high-S_z sectors completely and obtained low-energy levels
using the Lanczos procedure <cit.> combined with the strategy of Ref. <cit.>. Specifically, in the Lieb-Mattis regime that corresponds to 16 triplets in the N_s=40 system,
we were only able to obtain full spectra for S^z ≥ 13 and had to contend ourselves with low-lying levels for S^z ≤ 12,
i.e., the ferrimagnetic state with magnetization 3/5 and below.
Furthermore, we reconstructed the spin multiplets using S^z classification and spin inversion symmetry in the S^z=0 sector;
the latter permits us to avoid diagonalizations in the S^z=1 sector, which would usually have the highest dimension.
All in all, we were able to recover an entropy of 36.082 ln(2), i.e., 90.2% of the total entropy of the N_s=40 system.
We note that going to the N_s=40 system is particularly important in the vicinity of the Lieb-Mattis phase.
Specifically, the lowest state in the sector with N_ trip.>N_s/10 triplets turns out to be in the sector with
spin S=N_ trip.-N_s/10 and approaches the Lieb-Mattis limit S=3N_s/10 for N_ trip.=2N_s/5.
At J_2=0, we find a ground-state energy of -2.46115 J_1 per unit cell in the S=12, N_ trip.=16 sector for N_s =40.
The finite-size error, when measured against the estimates -2.46083 J_1 <cit.> and -2.46000 J_1 per unit cell <cit.>, respectively, is about an order of magnitude smaller than that of the N_s=30 value -2.46809 J_1. This indicates that the
N_s=40 system can be considered significantly closer to the thermodynamic limit than the N_s=30 one, even if we increased
the number of spins only by a third. This large change can be attributed to the N_s=30 system still having additional symmetries that arise from the periodic
boundary conditions imposed on this small system whereas such particular symmetries are absent in the N_s=40 geometry,
rendering the latter geometry significantly more generic.
As another indicator, we mention that the ground-state entropy per site is s≈0.0880 and 0.0795 for N_s=30 and 40, respectively,
compared to s≈0.0583 in the thermodynamic limit <cit.> (the corresponding ground-state degeneracy is 14 and 24, respectively).
Figure <ref> shows the gaps to the lowest excitation above the DT ground-state for the N_s=40 system (N_ trip.=4).
For J_2 ≳ 1.5 J_1,
the gap to the N_ trip.=3 excitation coincides with the DT-MD phase transition in the phase diagram Fig. <ref>, as in fact
in this case states with any number N_ trip.≤ N_s/10 collapse.
On the other hand, the field-induced DT-LM transition in Fig. <ref> is a true first-order transition. Thus, the closing of the N_ trip.=5 gap in Fig. <ref> is preempted by a direct transition to a N_ trip.=16 configuration, reproduced from Fig. <ref> by the dashed line in
Fig. <ref>. As a consequence, the maximum of the gap is found for J_2/J_1 ≈ 1.4
whereas the maximal extension of the DT phase in a magnetic field occurs for J_2/J_1 ≈ 1.5.
Furthermore, one observes an N_ trip.=6 bound state coming down in energy when approaching the DT-LM transition at h=0. The nature of the DT-LM transition at h=0 is less clear to decide, at least based on the N_s=40 ED data, i.e., one does observe higher-N_ trip. / higher-S states coming down in energy, but it is not clear if they actually collapse in the thermodynamic limit, or if the DT-LM transition remains a conventional first-order transition at h=0.
§.§ Quantum Monte Carlo
In addition to ED, we can also use QMC in order to access thermodynamic properties of the spin-1/2 Heisenberg antiferromagnet on the diamond-decorated square lattice. In particular, this approach allows us to obtain unbiased results for system sizes that extend beyond those accessible to ED. The QMC method that we used for our investigation has already been employed in our previous finite-field study of the spin-1/2 Heisenberg antiferromagnet on the diamond-decorated square lattice <cit.> and therefore we summarize it only briefly.
More specifically, our QMC approach is based on the stochastic series expansion (SSE)
method with directed loop updates <cit.>. In order to avoid
the sign problem, i.e., an exponential drop of the statistical accuracy of the QMC simulations at low temperatures and large system sizes <cit.> due to the presence of geometric frustration, we avoid working in the conventional local spin-S^z basis for the model at hand.
Instead, we eliminate the sign problem upon using appropriate basis states after decomposing the Hamiltonian into separate terms based on dimers (or trimers) <cit.>.
For certain so-called fully frustrated models
the sign problem can indeed be completely eliminated, cf. Refs. <cit.> for the spin-dimer and Ref. <cit.> for the spin-trimer basis, respectively.
In case of the diamond-decorated square lattice a finite value of the coupling J_2 leads to geometric frustration.
In order to avoid the associated sign problem, we follow Ref. <cit.> and treat all J_2-dimer spins in the spin-dimer basis, but keep using the local S^z-basis for the monomer spins. Using this combined five-site cluster-basis, we can simulate our model without a sign-problem within the SSE framework, based on the abstract operator loop update introduced in Ref. <cit.>. We refer to Ref. <cit.> for further details on this QMC approach.
Here, we performed QMC simulations for systems with
up to 6×6 unit cells.
§ RESULTS
§.§ Dimer triplet density, entropy, and specific heat above the DT phase
Let us start our discussion by examining the dimer triplet density, entropy, and specific heat of the spin-1/2 Heisenberg model on the diamond-decorated square lattice in zero magnetic field. We will compare results from the different methods
introduced in Sec. <ref>. For the spin-star decoupling approximation we will use the zero-field formula Eq. (<ref>) for the free energy.
Figure <ref> presents results for the interaction ratio J_2/J_1=1.7.
At zero temperature, the dimer-triplet density is 0.25, reflecting the nature of the DT state, where 1/4 of the dimers are in the dimer-triplet state within the singlet tetramers and 3/4 of the dimers remain in the dimer-singlet state. Because creating an additional dimer-singlet state costs less energy than creating an additional dimer-triplet state for the interaction ratio J_2/J_1=1.7, the dimer-triplet density first decreases at
low temperatures, terminating in a shallow local minimum. With further increase of temperature, the dimer-triplet density tends towards the asymptotic value 0.75 reached in the limit of infinite temperature due to the three-fold degeneracy of the triplet state in zero magnetic field.
A comparison of the dimer-triplet density obtained for N_s=30 and N_s=40 spins from exact diagonalization (ED) reveals
small
finite-size effects. ED for N_s=40 spins provides an interpolation between the spin-star decoupling approximation and the EMD model at low temperatures and QMC at high temperatures.
The excitation from a dimer-singlet state to a dimer-triplet state is neglected within the EMD model and the dimer-triplet density should accordingly always decrease with increasing temperature. On the other hand, the local minimum observed in the dimer-triplet density arises from the competition of the lower excitation energy for the dimer-singlet state and the higher degeneracy of the triplet state.
In fact, the lowest states not taken into account by the EMD model are at energies slightly above
0.6J_1 for J_2=1.7J_1 and these are responsible for the triplet density increasing again with increasing temperature after going through a minimum. However, the spin-star decoupling approximation takes only some of these into account (as reflected also by the lower entropy s compared to ED and QMC, see Fig. <ref>(b)) and approximates their energies, thus explaining the deviation from N_s=40 ED and QMC, which are in good agreement for T/J_1 ≳ 0.2.
Figure <ref>(b) shows
the entropy per spin s
as a function of temperature at zero magnetic field for the interaction ratio J_2/J_1=1.7. The zero-temperature entropy of the spin-1/2 Heisenberg model on the diamond-decorated square lattice in the parameter regime of the DT phase is given by s=(1/N_s)lnΩ_N_s, where Ω_N_s represents the degeneracy of the DT phase for a system with N_s spins; for instance,
Ω_30=14, Ω_40=24, as already stated in the ED context in Sec. <ref>.
Figure <ref>(b) shows that the results obtained from the spin-star decoupling approximation for N_s=40 are in perfect agreement with ED data for the same system size at low enough temperature T/J_1 ≲ 0.1. Similar accuracy can be expected at low temperatures for the EMD model. Notably, the residual entropy of the DT phase decreases with increasing system size, but it still remains nonzero even in the thermodynamic limit N_s →∞. From the EMD model with system size 16×∞, one can for instance infer a residual entropy s≈0.0587, which is already quite close to the known value for the hard-dimer model on the square lattice <cit.>.
Figure <ref>(c) displays
the specific heat per spin c as a function of temperature for the same value of the interaction ratio. A notable feature
is the presence of a low-temperature peak or shoulder superimposed on the high-temperature maximum of the specific heat. As anticipated, the spin-star and EMD approaches fail to capture the behavior of the specific heat at higher temperatures as they neglect high-energy excitations above the DT phase.
On the other hand, these approaches accurately account for low-lying excitations which leads to a close agreement between the ED data for N_s=40 and the analytical results obtained from the spin-star decoupling approximation at low enough temperatures. A comparison of the peak height in the specific heat obtained from ED and QMC reveals relatively large finite-size effects at low temperatures. The EMD model provides further insight into the behavior of the specific heat at low temperatures, which effectively bridges the gap between the available QMC data (T/J_1>0.1) and extends it further towards lower temperatures. Figure <ref>(c) shows that thermal activation of the specific heat indeed
occurs at progressively lower temperatures as the system size increases such that the EMD model provides the best approximation to the thermodynamic limit at low temperatures.
We now turn to a discussion of the dimer-triplet density, entropy, and specific heat in zero magnetic field for the case J_2/J_1=1.5 plotted in Fig. <ref>.
Here the numerical data for the dimer-triplet density
exhibits only a shallow minimum, which is consistent with the gap for one triplet less being only slightly smaller than that for an additional triplet at J_2/J_1=1.5, compare Fig. <ref>.
By contrast, the spin-star decoupling approximation fails to account for this minimum. The reason is that, within the spin-star decoupling approximation, the excitation energies from the DT phase to singlets and triplets are equal for this interaction ratio.
Consequently, the spin-star approach cannot explain the thermal population of singlet states for the interaction ratio J_2/J_1≤1.5. Comparing the results obtained from QMC simulations for a system size of 6 × 6 (180 spins in total) with ED results for 40 spins reveals nearly perfect agreement, indicating negligible finite-size effects.
However, the discrepancy between ED results for 30 spins and 40 spins at intermediate temperatures (T/J_1>0.1) may stem from artificial symmetries imposed for the system size of 30 spins by considering periodic boundary conditions, which are absent for the system size of 40 spins.
The entropy and specific heat
for the interaction ratio J_2/J_1=1.5 in zero magnetic field are shown in Fig. <ref>(b) and <ref>(c), respectively, and exhibit qualitatively similar behavior to that for the interaction ratio J_2/J_1=1.7. In the high-temperature regime, small finite-size effects are observed.
While the high-temperature peak of the specific heat shifts slightly towards higher values with increasing system size, the low-temperature peak is conversely suppressed towards lower values.
The results obtained from the spin-star decoupling approximation for 40 spins are slightly overestimated compared to ED data for N_s=40 at low temperatures T/J_1 ≲ 0.1. The discrepancy at higher temperatures arises due to the presence of low-energy excitations to states that share the same energy as the Lieb-Mattis ferrimagnetic state at zero magnetic field (compare the discussion in Sec. <ref>); these are not accurately accounted for by the spin-star decoupling approximation, which simply ignores the non-commutativity of the spin-star Hamiltonians. The QMC data are consistent with the intriguing double-peak temperature dependence of the zero-field specific heat with low- and high-temperature peaks located around T/J_1≈ 0.12 and 0.6, respectively.
With a yet lower value of the interaction ratio J_2/J_1, the analytical approaches provided by the spin-star and EMD models neglect more and more important low-energy excitations. For the interaction ratio J_2/J_1=1.3, both of these approaches only provide an accurate description
at very low temperatures as evidenced by the temperature dependencies of the dimer-triplet density, entropy, and specific heat depicted in Fig. <ref>. This limitation arises from the failure of these analytical approaches to accurately capture the
degeneracy of the LM phase in zero magnetic field with possible values of the z-component of the total spin S_T^z = -S_T, -S_T+1, ..., S_T, and the low-energy excitations to these states are not properly accounted for either.
The dimer-triplet density
monotonically increases without any local minimum for the interaction ratio J_2/J_1=1.3, as evidenced by the ED data for N_s=40 spins. This behavior is due to the
lowest excitation in this parameter regime arising from an additional dimer-triplet,
compare Fig. <ref>.
The EMD approach neglects the lowest excitations and thus only gets the T=0 limit right, but fails to capture the leading low-temperature asymptotics. By contrast, the star approach does account for these excitations, even if only in an approximate fashion, and thus captures at least qualitatively the low-temperature behavior of the triplet density.
While the high-temperature peak of the specific heat shown in Fig. <ref>(c) exhibits a slight variation with increasing system size, the behavior of the low-temperature peak is less straightforward. For a system size of N_s=30, the peak is lower than for size N_s=40, but the specific heat obtained from QMC for a system size of 6×6 unit cells (N_s=180) shows the lowest peak. This indicates a more complex dependency on system size.
§.§ Magnetization, entropy, and specific heat under magnetic field
We proceed to a discussion of the most intriguing findings
under finite magnetic fields.
Now we use the finite-field formula Eq. (<ref>) for the free energy
within the spin-star decoupling approximation.
The field dependencies of magnetization, entropy, and specific heat are illustrated for the interaction ratio J_2/J_1=1.7 in Fig. <ref> for two different temperatures T/J_1=0.01 and 0.1.
The magnetization curve at the interaction ratio J_2/J_1=1.7 exhibits magnetization plateaux at m=0 and 1/5 (full magnetization corresponding to 1/2) with the characteristics of the DT and MD phases, respectively. These can be almost perfectly described by the analytical spin-star or EMD approaches including the most important low-energy excitations above these ground states.
However, another transition from the 1/5-plateau to the 3/5-plateau corresponding to the Lieb-Mattis ferrimagnetic phase cannot be described at all by the EMD model and only qualitatively by the spin-star decoupling approximation. This finding relates to the fact that the ground-state energy of the Lieb-Mattis phase is determined within the spin-star decoupling approximation with a relative error of approximately 4.3% (the ground-state energy of the LM phase within DMRG and spin-star decoupling methods in zero field are E_ LM^ DMRG=-2.46J_1+2J_2 <cit.>
and E_ LM^ STAR =-2.50J_1+2J_2, respectively, compare also the related discussion in
Sec. <ref>).
It can be observed from Fig. <ref>(b) that the entropy starts from its nonzero residual value, which is progressively converging with increasing system size to the residual entropy of the dimer model on the square lattice per unit cell s_d≈ 0.29156 (per spin ≈ 0.058312) <cit.>. Contrary to this, one detects a sizable entropy gain reaching the specific value s=0.13256
at the transition field between the DT and MD phases, which is almost independent of the system size as corroborated by ED, the EMD model, as well as the spin-star decoupling approximation. Similar qualitative agreement between results of all three aforementioned methods is observed in the peculiar asymmetric double-peak structure of the specific heat, which can be detected in the vicinity of the magnetic-field-driven phase transition between DT and MD phases (see the inset of Fig. <ref>(c)). The lower peak height observed below the relevant transition field can be attributed to the macroscopic degeneracy of the DT ground state, which is not lifted by the magnetic field in contrast to the MD phase that becomes non-degenerate at any nonzero magnetic field.
The results obtained from the spin-star decoupling approximation are in very good agreement with the ED data even for magnetic fields higher than h/J_1≳ 1.0, up to the end of the Lieb-Mattis ferrimagnetic phase, and even at moderately high temperatures T/J_1=0.1 still constitute a good approximation of the field dependence of the entropy and specific heat, as depicted in Fig. <ref>(b) and (c).
Figure <ref>(a) shows the magnetization curves
for the interaction ratio J_2/J_1=1.5.
There is a distinct magnetization jump from the DT phase to the MD phase at zero temperature, as shown in the inset of Fig. <ref>(a). However, the 1/5-plateau is sufficiently small to be smeared out at finite temperatures. Consequently, there is a sharp rise in magnetization from the 0-plateau to the 3/5-plateau even at temperatures as low as T/J_1=0.01, which becomes smoother with increasing temperature. Due to the presence of the MD phase within a narrow range of magnetic-field strengths, the EMD model is in excellent agreement with the ED data up to a magnetic field of approximately h/J_1≈ 0.54 at a very low temperature T/J_1=0.01 though it fails to account for further increases in magnetization with increasing magnetic-field strength. On the other hand, the spin-star decoupling approximation is still capable of qualitatively reproducing the ED data even at higher magnetic fields. The comparison of ED data for N_s=40 spins with QMC data for 36 unit cells (i.e., N_s=180 spins) indicates negligible finite-size effect in this parameter region.
The results for the low-temperature entropy exhibit a relevant discrepancy between the ED and spin-star decoupling approximation only in the close vicinity of the transition field h/J_1=0.5 (see Fig. <ref>(b)). The excessive macroscopic degeneracy predicted by the spin-star decoupling approximation relates to a triple phase coexistence point of the LM, DT, and MD phases, which is located exactly at the magnetic field h/J_1=0.5 for J_2/J_1=1.5.
Recall that the triple point obtained within the spin-star approach is slightly shifted towards a lower value of the magnetic field with respect to its true coordinates (see Fig. <ref>(b) and the discussion in Sec. <ref>). At a higher temperature T/J_1=0.1 the discrepancy between ED and the spin-star decoupling approximation is significantly reduced, because all relevant low-lying excited states are thermally activated. Contrary to this, we see the opposite effect for the EMD model with much lower entropy, because high-energy excitations are missing within this description. A similar effect can also be observed in the field dependence of the specific heat (
Fig. <ref>(c)). While the EMD model reproduces the ED data at low temperatures, the spin-star decoupling approximation correctly describes the double peak of the specific heat also at a higher temperature of T/J_1=0.1. At this higher
temperature, the double-peak structure of the specific heat is also confirmed by the QMC simulations
for the bigger system of 6×6 unit cells.
Last but not least, magnetization, entropy, and specific heat as a function of temperature are depicted in Fig. <ref> for the interaction ratio J_2/J_1=1.3. As discussed in Sec. <ref>, for the interaction ratio J_2/J_1≲ 1.4, the lowest excitation above the DT phase corresponds to the creation of an additional triplet (see Fig. <ref>). For this reason, the EMD model is unable to describe the finite-temperature properties
for the interaction ratio J_2/J_1=1.3.
On the other hand, our second analytical approach derived from the spin-star decoupling approximation provides a quite reliable description of all quantities above the transition field h/J_1=0.3.
The most important artifacts of the spin-star decoupling approximation are just below the transition field
and at T/J_1=0.01
where it yields a spike in the entropy s (Fig. <ref>(b)) and a second maximum
in the specific heat c (Fig. <ref>(c)) while such pronounced maxima are absent in the ED results.
The reason for this difference is that the field-induced DT-LM transition is a true first-order transition in the full model
such that excitations are scattered over a certain energy range below the transition
while the spin-star decoupling approximation treats these as a degenerate manifold that comes down at the transition field.
Conversely, a temperature T/J_1=0.1 is sufficiently high to thermally excite these low-lying excitations
in the full model such that at this higher temperature the N_s=40 ED results also exhibit a pronounced maximum
below the transition field, both in the entropy (Fig. <ref>(b)) and the specific heat (Fig. <ref>(c)).
The latter behavior is confirmed by our QMC simulations on the bigger system of 6× 6 unit cells.
§.§ Enhanced magnetocaloric effect due to the macroscopically degenerate DT phase
Finally, we discuss the magnetocaloric effect with a focus on the transition region between the macroscopically
degenerate DT phase and the MD state (that is non-degenerate in a finite field).
Figure <ref> shows
density plots of the entropy in the magnetic field versus temperature plane
for the interaction ratio J_2/J_1=1.7.
The superimposed constant-entropy curves correspond to the variation of the temperature T during an adiabatic
(de)magnetization process.
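For orientation, such isentropes can be traced from any method that provides the entropy on a (T,h) grid. The helper below is a hypothetical sketch, not the authors' code, and assumes that the entropy increases monotonically with temperature at fixed field.

```python
import numpy as np

def isentrope(temps, fields, entropy, s0):
    """Trace T(h) along the constant-entropy curve s(T, h) = s0.

    `entropy` has shape (len(temps), len(fields)) and is assumed to
    increase monotonically with T at fixed h (np.interp requires this).
    """
    curve = []
    for j, h in enumerate(fields):
        col = entropy[:, j]
        if s0 < col[0]:
            # Entropy content below the residual entropy at this field:
            # the isentrope stays pinned at T = 0.
            curve.append((h, 0.0))
        else:
            curve.append((h, float(np.interp(s0, col, temps))))
    return curve
```

Whenever the target entropy lies below the residual entropy at a given field, the isentrope is pinned to T=0, which is exactly the behavior of the DT phase discussed below.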
First, we discuss the ED results for a system size of N_s=40 spins shown in Fig. <ref>(a).
An enhanced magnetocaloric effect can clearly be seen in the vicinity of the magnetic field h/J_1=0.3, where the temperature sharply decreases during an adiabatic demagnetization process, whereas an inverse magnetocaloric effect associated with an increase in temperature with further reduction of the magnetic field can be mostly observed below this value of the magnetic field.
The specific value of the magnetic field h/J_1=0.3 corresponds to the magnetic-field-driven phase transition from the MD phase to the DT phase for the interaction ratio J_2/J_1=1.7. Recall also that the residual entropy of the DT phase (per spin) for a system size of N_s=40 is equal to s=(ln 24)/40≈ 0.079 and, hence, the system cools down to absolute zero temperature at h/J_1=0.3, whenever the entropy at the beginning of the adiabatic demagnetization is tuned below this value. Under this condition, the temperature stays at zero below the transition field and there is no consecutive increase in temperature upon further demagnetization,
as Fig. <ref>(a) demonstrates by the absence of isentropes below the magnetic field value h/J_1≲ 0.3 (i.e., below the cyan curve). This asymmetry
of the entropy around the transition field is caused by the special character of the DT phase, which preserves its macroscopic degeneracy
at nonzero magnetic fields. From this point of view, the spin-1/2 Heisenberg model on the diamond-decorated square lattice offers promising prospects for achieving ultra-low temperatures during adiabatic demagnetization.
For comparison, Fig. <ref>(b) presents the density plot of the entropy, for the same set of parameters and the same system size N_s=40, as obtained from Eq. (<ref>) within the spin-star decoupling approximation.
Remarkably, the spin-star approach is capable of predicting the aforementioned asymmetry of the entropy, and the relevant density plot reveals qualitatively the same features of the magnetocaloric effect as obtained from ED.
Moreover, the enhanced magnetocaloric effect and cooling to absolute zero temperature should persist even in the thermodynamic limit due to the macroscopic degeneracy of the DT phase.
The EMD model indeed shows
that the asymmetric behavior of the isentropes as a function of magnetic field and temperature is preserved
for a system size comprising of 16 elementary units in one spatial direction and infinite in the other direction, see
Fig. <ref>(c). For this system size, the residual entropy of the DT phase is s ≈ 0.0587, which is consistent with the macroscopic degeneracy of the hard-dimer model on the square lattice
<cit.>.
Hence, the enhanced magnetocaloric effect accompanied with cooling to zero temperature is detected for h/J_1<0.3 whenever the entropy is tuned below its zero-point value s ≲ 0.0587 at the beginning of an adiabatic demagnetization process (i.e., all curves lying below the cyan curve).
§ CONCLUSION
In this study, we have investigated the thermodynamic properties of the spin-1/2 Heisenberg model on the diamond-decorated square lattice, particularly focusing on the macroscopically degenerate DT phase. Using a combination of exact diagonalization, sign-problem-free quantum Monte Carlo simulations, an effective monomer-dimer model, and a spin-star decoupling approximation, we have provided a comprehensive analysis of this model's behavior under varying interaction ratios and external magnetic fields.
Our results confirm that the DT phase exhibits unique thermodynamic properties due to its macroscopic degeneracy, which is linked to the classical hard-dimer model on the square lattice. The boundary between the MD and DT phases is well described by the effective monomer-dimer model, highlighting the significance of localized excitations in determining the low-temperature behavior of this system.
A particularly interesting
result of our study is the enhanced magnetocaloric effect observed upon approaching the DT phase. This effect is particularly pronounced near the transition
field between the MD and the DT phase, where the temperature decreases sharply during adiabatic demagnetization. This phenomenon arises due to the macroscopic degeneracy of the DT phase, which allows the system to cool to absolute zero temperature as long as the entropy content is set below the residual entropy value s ≈ 0.0583 derived from the effective hard-dimer model on the square lattice in the thermodynamic limit. This renders the DT phase a promising candidate for achieving ultra-low temperatures
by magnetic refrigeration.
The third law of thermodynamics <cit.> requires this macroscopic degeneracy to be lifted
in an experimental realization, for example by a lattice distortion that breaks the conservation of the total
spin of a dimer.
Our conclusions will nevertheless remain relevant
as long as the resultant splitting of the ground-state manifold remains at an energy scale that is below the temperature scale of interest.
The challenge is more to find a suitable compound to begin with. In this respect, it may be interesting
to consider the diamond-decorated honeycomb lattice since in this case there are known compounds that exhibit
the corresponding crystal structure, see for example Refs. <cit.>.
Our findings
demonstrate the usefulness of effective classical statistical physics models for the low-energy and low-temperature
behavior of a highly frustrated quantum spin system and
shed light on the rich phase diagram and thermodynamic phenomena of the spin-1/2 Heisenberg model on the diamond-decorated square lattice.
Furthermore, they suggest potential applications of such highly frustrated quantum spin systems in magnetic cooling technologies.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 945380. We acknowledge support under the Štefánik+ program for Slovak-France bilateral cooperation under contract SK-FR-22-0011/49880PG.
Computing time for the QMC simulations was provided by the IT Center at RWTH Aachen University,
part of the ED computations were carried out on the “osaka” cluster at the Centre de Calcul (CDC) of CY Cergy Paris Université.
T.V. is supported through the EURIZON project (#3025 “Frustrated quantum spin models to explain the properties of magnets over wide temperature range”), which is funded by the European Union under grant agreement No. 871072. J.S. acknowledges financial support provided by Slovak Research and Development Agency under the contract No. APVV-20-0150. N.C. acknowledges support by the ANR through grant
LODIS (ANR-21-CE30-0033).
|
http://arxiv.org/abs/2409.03643v1 | 20240905160121 | CDM: A Reliable Metric for Fair and Accurate Formula Recognition Evaluation | [
"Bin Wang",
"Fan Wu",
"Linke Ouyang",
"Zhuangcheng Gu",
"Rui Zhang",
"Renqiu Xia",
"Bo Zhang",
"Conghui He"
] | cs.CV | [
"cs.CV",
"cs.CL"
] |
CDM: A Reliable Metric for Fair and Accurate Formula Recognition Evaluation
===========================================================================
§ ABSTRACT
Formula recognition presents significant challenges due to the complicated structure and varied notation of mathematical expressions. Despite continuous advancements in formula recognition models, the evaluation metrics employed by these models, such as BLEU and Edit Distance, still exhibit notable limitations. They overlook the fact that the same formula has diverse representations and are highly sensitive to the distribution of training data, thereby causing unfairness in formula recognition evaluation. To this end, we propose a Character Detection Matching (CDM) metric, ensuring evaluation objectivity by designing an image-level rather than LaTeX-level metric score. Specifically, CDM renders both the model-predicted LaTeX and the ground-truth LaTeX formulas into image-formatted formulas, then employs visual feature extraction and localization techniques for precise character-level matching, incorporating spatial position information. Such a spatially-aware character-matching method offers a more accurate and equitable evaluation compared with previous BLEU and Edit Distance metrics that rely solely on text-based character matching. Experimentally, we evaluated various formula recognition models using CDM, BLEU, and ExpRate metrics. The results demonstrate that CDM aligns more closely with human evaluation standards and provides a fairer comparison across different models by eliminating discrepancies caused by diverse formula representations.
§ INTRODUCTION
Mathematical formula recognition is pivotal in document analysis as it directly influences the scientific rigor and accuracy of the content. Unlike standard optical character recognition (OCR), formula recognition presents unique challenges. Formulas often encompass multi-level symbols, subscripts, fractions, and other complex structures, requiring models to comprehend spatial and structural relationships rather than just linear, sequential text. Additionally, formulas exhibit representational diversity, meaning that the same formula can be expressed in multiple valid ways.
In recent years, significant advancements in formula recognition <cit.> have been primarily driven by deep learning. Deep learning models, especially those leveraging the Transformer architecture and large-scale pretraining strategies, have demonstrated superior performance in specific scenarios. Notably, commercial formula recognition software like Mathpix and the recently proposed UniMERNet <cit.> model have achieved impressive results in diverse real-world settings. Despite these advancements, the existing evaluation metrics for formula recognition have notable shortcomings. Commonly-used metrics such as BLEU and Edit Distance primarily rely on text-based character matching, which introduces several limitations as follows:
(1) Low Metric Reliability. BLEU and Edit Distance are reliable for evaluating text-level similarity. However, the diversity in formula representations makes these text-level evaluation metrics inadequate for precisely reflecting formula recognition quality. For example, as shown in <Ref> (Case 1), a model's prediction might render an image identical to the ground truth formula, yet due to the variations in formula expression styles, the evaluation results obtained using ExpRate <cit.>, BLEU <cit.>, and Edit Distance <cit.> may be somewhat misleading.
(2) Unfair Model Comparison. Current metrics can be biased by the distribution of training and testing data. If a model's training data distribution differs significantly from the test data, it can adversely affect the evaluation metrics. As illustrated in <Ref> (Case 1 and Case 2), a model may produce a correct prediction but score poorly due to representational differences from the ground truth, while an incorrect prediction might score higher if its representation aligns more closely with the test data distribution.
(3) Lack of Intuitive Scoring. There can be a significant discrepancy between BLEU scores and human perception. For instance, in <Ref> (Case 3), a model's prediction contains many errors, yet the BLEU score is as high as 0.907, which does not align with human judgment.
To address these issues, we propose a novel evaluation metric for formula recognition: Character Detection Matching (CDM). The proposed CDM regards formula recognition evaluation as an image-based object detection task, converting both the predicted LaTeX and the ground-truth LaTeX formulas into image-formatted formulas and treating each character as an independent target. This approach overcomes the challenges posed by the diverse representations of formulas and aligns more closely with human subjective evaluation standards. CDM offers the following advantages: 1) Accuracy and Reliability. By calculating metrics in the image space, CDM eliminates issues caused by different valid representations of the same formula, directly reflecting recognition accuracy and aligning more closely with human intuitive perception. 2) Fairness. CDM removes the high dependency on consistent data distributions between training and evaluation, allowing for a fair comparison of different models based on their true recognition capabilities. Our contributions can be summarized as follows:
* We perform a detailed analysis of the existing formula recognition evaluation methods, highlighting the issues and unreliability of ExpRate and BLEU metrics.
* We introduce a novel evaluation metric, CDM, which assesses formula recognition quality by performing visual character matching between rendered images of predicted and ground-truth formulas, providing an intuitive and fair evaluation standard.
* We validate CDM's effectiveness through extensive experiments on various mainstream models and datasets, demonstrating its superiority over traditional metrics like BLEU in assessing formula recognition performance.
§ RELATED WORK
§.§ Formula Recognition Algorithms
Initially, researchers employ specific grammar rules to represent the spatial structure of formulas, including graph grammars <cit.>, relational grammars <cit.>, and probabilistic grammars <cit.>. Besides, the CROHME competitions <cit.> have promoted the development of handwritten formula recognition by incorporating deep learning algorithms. Key contributions include a neural encoder-decoder model with coarse-to-fine attention <cit.>, a tree-structured decoder <cit.>, and the Counting-Aware Network <cit.>, which integrates a weakly-supervised counting module. The ABM network <cit.> employs mutual distillation and an Attention Aggregation Module, while a transformer-based decoder <cit.> simplifies model architecture. The Syntax-Aware Network (SAN) <cit.> models recognition as a tree traversal process, significantly improving accuracy for complex expressions. Overall, these models employ ExpRate for formula evaluation.
In document information extraction <cit.>, Donut <cit.> directly converts input documents into structured outputs without using traditional OCR tools. Texify <cit.> and UniMERNet <cit.> are designed using Donut <cit.>, utilizing more diverse datasets and data augmentation operations. Nougat <cit.> is designed to convert PDF documents from screenshot to Markdown format, making the document content (e.g. table and formula) easier to edit. These methods use BLEU and Edit Distance metrics for formula evaluation.
§.§ Formula Recognition Evaluation Metrics
BLEU was initially proposed for machine translation tasks; it matches N-grams (sequences of N words) between the generated and the reference texts and applies a brevity penalty factor to produce the final BLEU score <cit.>:
BLEU = BP ·exp( ∑_n=1^N w_n log p_n ),
where BP is the brevity penalty factor, and p_n is the N-gram match result, with n ranging from 1 to 4.
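For concreteness, a minimal sentence-level implementation of this formula with uniform weights w_n = 1/N is sketched below; tokenization and the smoothing schemes of standard toolkits such as nltk or sacrebleu are deliberately omitted.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU of the equation above with uniform weights.

    `candidate` and `reference` are token lists (e.g., LaTeX tokens).
    """
    if not candidate:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0  # this unsmoothed sketch collapses to 0 if any p_n is 0
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)  # brevity penalty BP
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```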
Edit Distance is another commonly used metric to assess the similarity between the generated and the reference texts. It measures the number of insertions, deletions, or substitutions needed to transform one text into another, with a smaller Edit Distance indicating higher similarity <cit.>.
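One standard dynamic-programming implementation, together with the normalized variant commonly reported for recognition tasks, reads as follows (a sketch for token sequences):

```python
def edit_distance(a, b):
    """Levenshtein distance between sequences a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def normalized_edit_distance(a, b):
    """Edit distance normalized to [0, 1]; 0 means identical sequences."""
    return edit_distance(a, b) / max(len(a), len(b), 1)
```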
ExpRate refers to the proportion of samples where the texts are exactly matched out of the total number of samples. Compared to BLEU and Edit Distance, ExpRate is coarser and more stringent in evaluation <cit.>.
The above three metrics can effectively evaluate the textual differences between ground truth and reference, making them suitable for tasks requiring strict matches. BLEU and Edit Distance, in particular, provide a finer evaluation of text recognition capabilities compared to ExpRate, making them widely used in extensive text recognition tasks such as document recognition <cit.>. These metrics are also applied to formula recognition, with most open-source models, such as Pix2Tex <cit.> and Texify, adopting them for evaluation and comparison.
In addition to text-based metrics, image edit distance has been explored to measure the accuracy of predicted formulas <cit.>. Image processing metrics like MSE (Mean Squared Error) and SSIM <cit.> have also been considered. Structuring Chart-oriented Representation Metric (SCRM) <cit.> is designed to comprehensively evaluate the information represented by structured triplet representations. However, these metrics are better suited for natural images. For document images such as formula images, even slight character misalignments can result in significant penalties, making these metrics less suitable for formula recognition.
§ LIMITATIONS OF CURRENT METRICS
Although ExpRate, BLEU, and Edit Distance are widely used in formula evaluation tasks, they exhibit significant limitations in accurately reflecting formula recognition performance, particularly in scenarios where there are domain gaps between training and testing data distributions. The main reason is that a single formula can have multiple valid LaTeX representations, making the Ground Truth (GT) LaTeX non-unique, which introduces inherent flaws for the formula evaluation.
As illustrated in Case 1 of <Ref>, the formula (x+y)+z=x+(y+z) corresponds to the GT annotation . When the model's prediction is , the prediction is correct because the rendered formula image matches the GT image, despite different LaTeX syntax. Theoretically, the ExpRate/BLEU/Edit Distance results should be 1/1/0, indicating a correct instance. However, in practice, ExpRate is 0, BLEU is 0.449, and Edit Distance is 0.571, failing to accurately assess the formula's quality.
The aforementioned issues make it challenging to objectively evaluate the performance of different formula recognition models. For instance, as illustrated in Case 2 of <Ref>,
one character is misrecognized as . The prediction is incorrect, and the ExpRate, BLEU, and Edit Distance metrics reflect this error. However, when compared to Case 1 where the model prediction is correct, the BLEU and Edit Distance metrics for the incorrect prediction in Case 2 are better than those for the correct prediction in Case 1.
A LaTeX regularization method, which abstracts LaTeX code into a tree structure and standardizes elements, addresses LaTeX syntax diversity <cit.>. Pix2tex <cit.>, Texify <cit.>, and UniMERNet <cit.> use such a regularization method as a preprocessing step before evaluation, which can solve part of the syntax inconsistency issue. For instance, x^b_a, x_a^b, and x^{b}_{a} all compile to the same rendered result x^b_a. Directly calculating BLEU scores would not correctly assess the model's prediction quality. Regularized code unifies these into a consistent format, such as always adding curly braces and arranging superscripts before subscripts, contributing to the fairness of subsequent metric calculations. However, regularization does not solve all LaTeX syntax diversity issues. Some symbols have multiple representations, such as \le and \leq, both representing ≤. Exhaustively listing these representations is challenging due to the huge LaTeX symbol library and many additional symbols provided by extension packages (e.g., amsmath, amssymb).
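As a toy illustration, a flat substitution table can canonicalize a handful of known aliases, but for the reasons just given it can never be exhaustive; the table below is a hypothetical example, not the rule set of the cited regularization method.

```python
# Hypothetical alias table; a real regularizer parses the LaTeX into a
# tree and also normalizes braces and script order, which a flat
# token-level table cannot do.
ALIASES = {r"\le": r"\leq", r"\ge": r"\geq", r"\to": r"\rightarrow"}

def normalize_tokens(tokens):
    """Map visually identical token variants onto one canonical form."""
    return [ALIASES.get(token, token) for token in tokens]
```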
Overall, while regularization mitigates some issues, it does not fully address the inherent limitations of current metrics in evaluating formula recognition performance. This highlights the need for a more robust and comprehensive evaluation metric that can accurately reflect the quality of formula recognition across diverse representations.
§ CHARACTER DETECTION MATCHING
Due to the diversity of LaTeX expressions, text-based character-matching methods are unreliable for formula recognition evaluation. The basic idea of CDM is to compare images rendered from the LaTeX text. If the image rendered from the predicted LaTeX source code matches the image rendered from the ground truth LaTeX source code, the formula is considered entirely correct. However, directly comparing the pixel values of the original and predicted formulas is not ideal. Any error or extra/missing character in the prediction can cause subsequent characters to be mismatched. Additionally, two similar formulas might have different layouts, with one being a single-line formula and the other a multi-line formula due to line breaks. Therefore, a more robust algorithm is needed to calculate the match between the predicted result and the ground truth image.
To this end, we propose a metric that incorporates a bipartite matching step for element-level matching in images, providing a more intuitive assessment. As shown in <Ref>, the algorithm consists of four stages as follows.
§.§ Element Localization
First, the bounding boxes (bboxes) of each individual element in the rendered image are extracted, followed by the subsequent steps.
LaTeX Source Normalization. LaTeX source codes of both the ground truth and predicted formulas are normalized, breaking them down into individual tokens such as . Composite elements are decomposed into individual characters, e.g., is decomposed into .
Element Region Localization. Each token of the normalized LaTeX source code is rendered in color: the element e to be localized is rendered in one color, while all other elements e̅ are rendered in another. The fully rendered formula is then binarized to extract the bounding box of the highlighted element, and this process is repeated until all elements are accurately localized.
§.§ Element Region Matching
In this stage, a bipartite matching method pairs the predicted elements with the corresponding ground truth elements. Based on the element localization, two sets are obtained for each formula: one for the ground truth independent elements y and one for the predicted independent elements ŷ. The number of independent elements in each set is N_y and N_ŷ, respectively, with N = min(N_y, N_ŷ) being the number of elements in the smaller set.
To measure the similarity between y and ŷ, we match elements in the two sets by identifying the corresponding ground truth element for each predicted element. We use the bipartite matching Hungarian algorithm <cit.>, as described in DETR <cit.>, to find a permutation σ̂ that minimizes the total matching cost:
σ̂ = min_σ∈ S_N∑_i=1^N L_match(y_i, ŷ_σ(i)),
L_match=W_t× L_t + W_p × L_p + W_o × L_o,
where the matching cost L_match is defined as a weighted sum of three components as introduced as follows:
* Token Matching Cost L_t: This component measures whether the tokens corresponding to two bounding boxes are the same. If they are identical, the cost is 0; if they are different, the cost is 1. For tokens that render identically but are different, such as , , and , the cost is 0.05, which can be formulated as follows:
L_t =
0, if t_i = t̂_σ(i);
0.05, if t_i≈t̂_σ(i) ;
1, otherwise;
where ≈ denotes tokens that differ but render identically.
* Positional Proximity Cost L_p: This component measures the proximity of the two bounding boxes' positions using the L1 norm of their coordinates, which can be formulated as follows:
L_p = 1/D_b×b_i - b̂_σ(i)_1 ,
where b = [x_1, y_1, x_2, y_2], and D_b is the dimension of the bounding box coordinates.
* Order Similarity Cost L_o: This measures the similarity of the token order in the original LaTeX source (an approximation of reading order). The order is normalized to the range [0, 1], and the L1 norm can be calculated as follows:
L_o = 1/D_o×o_i - ô_σ(i)_1 ,
where this calculation is similar to L_p, with D_o=1.
Overall, the weights W_t, W_p, W_o are used to balance the contributions of the three components. By employing this comprehensive matching strategy, we ensure a more accurate and robust evaluation of the correspondence between the predicted and ground truth elements, thereby improving the overall assessment of formula recognition quality.
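A minimal sketch of this matching step, using the Hungarian solver shipped with SciPy, is given below. The element encoding and the numerical weights are placeholder assumptions, since only the structure of the cost is specified above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_elements(gt, pred, w_t=1.0, w_p=1.0, w_o=1.0):
    """Minimum-cost bipartite matching of formula elements.

    Each element is a dict with keys 'token', 'bbox' = (x1, y1, x2, y2)
    in normalized coordinates, and 'order' in [0, 1]; the weights are
    illustrative, and the look-alike token table (cost 0.05) is omitted.
    """
    cost = np.zeros((len(gt), len(pred)))
    for i, g in enumerate(gt):
        for j, p in enumerate(pred):
            l_t = 0.0 if g["token"] == p["token"] else 1.0
            l_p = np.abs(np.subtract(g["bbox"], p["bbox"])).sum() / 4.0
            l_o = abs(g["order"] - p["order"])
            cost[i, j] = w_t * l_t + w_p * l_p + w_o * l_o
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return list(zip(rows.tolist(), cols.tolist()))
```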
§.§ Invalid Match Elimination
After pairing the individual elements of the predicted result with the ground truth using the Hungarian matching algorithm, we need to verify these pairs and eliminate invalid matches. This process involves two steps:
Token Consistency Check. Check whether the elements in each matched pair are consistent in terms of characters. If they are inconsistent, discard the match.
Position Relationship Consistency Check. The relative positions of elements in mathematical formulas are crucial. For instance, in the expressions 2^3 and 3^2, bipartite matching might pair 2 with 2 and 3 with 3, but their meanings and visual representations are entirely different. Thus, we need to check the consistency of the positional relationships within the matched pairs. We treat each element in the matched pair as a bounding box and analyze their relative positions. Specifically, we assume an affine transformation between the ground truth and predicted elements:
b̂_σ(i) = 𝐀(b_i),
where 𝐀 is the affine transformation matrix. To identify inconsistent match pairs, we detect pairs that do not conform to this transformation relationship. We employ the RANSAC algorithm <cit.> for this purpose. RANSAC can determine the optimal transformation matrix 𝐀 in the presence of noise. Given that formulas are usually horizontally arranged during rendering, we fix the rotation angle in the transformation matrix to 0, considering only translation and scaling. This approach not only improves the convergence speed of the RANSAC algorithm but also enhances the final matching accuracy.
To account for line-breaking effects in formulas, we perform multiple rounds of RANSAC iterations to ensure that as many matched pairs as possible conform to the transformation relationship. After several iterations, matched pairs that still do not conform to the transformation relationship are considered incorrect and are eliminated.
The above two steps effectively eliminate invalid match pairs, ensuring more accurate final matching results.
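The restricted RANSAC fit (rotation fixed to zero, per-axis scale plus translation) can be sketched as follows; this is a schematic stand-in that omits the multiple line-break-aware rounds described above.

```python
import numpy as np

def ransac_scale_translation(src, dst, n_iters=500, tol=0.05, seed=0):
    """RANSAC fit of dst ~ scale * src + shift; returns an inlier mask.

    `src` and `dst` are (N, 2) arrays of centers of matched pairs with
    N >= 2; pairs outside the best model are candidates for elimination.
    """
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(src), size=2, replace=False)  # minimal sample
        d_src, d_dst = src[j] - src[i], dst[j] - dst[i]
        scale = np.divide(d_dst, d_src, out=np.ones(2), where=d_src != 0)
        shift = dst[i] - scale * src[i]
        inliers = (np.abs(scale * src + shift - dst) < tol).all(axis=1)
        if inliers.sum() > best.sum():
            best = inliers
    return best
```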
§.§ Metric Calculation
We use the F1-Score as the default metric for evaluating CDM (Character Detection Metric), defined as:
CDM = 2 × TP/2 × TP + FP + FN,
where TP denotes true positives, FP denotes false positives, and FN denotes false negatives.
To further evaluate the accuracy of formula recognition, we introduce the ExpRate@CDM metric, defined as:
ExpRate@CDM = ∑_i=1^N𝕀(CDM_i = 1)/N,
where 𝕀 is the indicator function that equals 1 if CDM_i = 1 and 0 otherwise, and N is the total number of formulas. This metric represents the proportion of formulas for which the model's prediction results are perfectly matched. Essentially, ExpRate@CDM serves as a precise version of the ExpRate metric specifically for formula recognition.
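Once matched (TP), spurious (FP), and missed (FN) characters have been counted, both quantities reduce to a few lines of code (the convention for empty formulas is an assumption of this sketch):

```python
def cdm(tp, fp, fn):
    """Per-formula CDM, i.e., the F1 score over matched characters."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0  # treat empty formulas as exact

def exprate_at_cdm(scores):
    """Fraction of formulas recognized perfectly (CDM == 1)."""
    return sum(score == 1.0 for score in scores) / len(scores)
```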
§ EXPERIMENTS
§.§ Models and Data
We validate the CDM metric by evaluating several mainstream formula recognition models using both subjective impressions and objective metrics. The models include open-source UniMERNet <cit.>, Texify <cit.>, Pix2tex <cit.>, and the commercial Mathpix API, all tested on the UniMER-Test dataset. Besides, we evaluate document-level models, such as the open-source Nougat <cit.> and the commercial GPT-4o <cit.>. Vary <cit.> and StrucTexTv3 <cit.> are excluded as they are currently unavailable.
UniMER-Test Dataset. The dataset includes 23,757 formula samples, categorized into Simple Printed Expressions (SPE), Complex Printed Expressions (CPE), Screenshot Expressions (SCE), and Handwritten Expressions (HWE). We use these categories to conduct the model evaluation.
Tiny-Doc-Math Dataset. To evaluate document-level recognition, we construct the Tiny-Doc-Math dataset, consisting of arXiv papers in mathematics and computer science, published after June 2024, to ensure that they are not in the training data of the compared models. We obtain LaTeX code and corresponding PDFs, match displayed equations using regular expressions, and manually verify them. Overall, the dataset includes 12 PDFs, totaling 196 pages and 437 formulas.
This validation set includes both formula-level and document-level evaluations:
* Formula-level: Using single rendered formula images as input, we evaluate Mathpix, Pix2Tex, and UniMERNet. These models accept cropped formula images as input, and we compare the model outputs with the ground truth to compute relevant metrics.
* Document-level: Using PDFs or images as input, we evaluate Nougat, GPT-4o, and Mathpix, which can convert entire PDF pages into Markdown format. We match the displayed equations in the model outputs using regular expressions and compare them with the ground truth LaTeX formulas to compute relevant metrics.
§.§ Credibility Assessment of CDM
§.§.§ Rendering Success Rate
The CDM metric relies on the successful rendering of formula images. For models that fail to render images, we assign a CDM score of 0, as rendering failures indicate that the predicted LaTeX code lacks critical elements. The rendering success rates on the UniMER-Test dataset for Pix2tex, Texify, UniMERNet, and Mathpix are 86.17%, 94.97%, 97.62%, and 98.95%, respectively, ensuring the applicability and reliability of the CDM metric.
§.§.§ User Preference Evaluation
We analyze the distribution of CDM scores for four models on the UniMER-Test dataset. As shown in <Ref>, Mathpix and UniMERNet perform well in terms of CDM scores. We conduct a detailed analysis of the Pix2Tex model by randomly selecting samples from different score ranges to evaluate if the prediction quality corresponds to the CDM scores. The analysis in <Ref> shows that the CDM scores effectively reflect formula quality, with higher scores indicating fewer errors.
To verify the consistency between the CDM metric and human evaluation, we conduct a large-scale experiment. We select 1,008 CDM scores from Pix2Tex predictions, ensuring a balanced score distribution. We design an annotation interface displaying a ground truth label and the corresponding predicted LaTeX rendered image. Annotators choose between ScoreA, ScoreB, Both (credible), and Neither (credible). ScoreA and ScoreB correspond to the BLEU and CDM scores, respectively, but their order is randomized.
The results in <Ref> show that 64% of participants prefer the CDM metric, and 32% consider both metrics good. This indicates a 96% consistency between the CDM metric and human evaluation, demonstrating its reliability. <Ref> compares the number of cases where BLEU or CDM is preferred across different score ranges, showing that CDM consistently outperforms BLEU.
§.§.§ Objective Stability Assessment
To evaluate the impact of formula writing styles on the CDM and BLEU metrics, we randomly select 50 formulas with LaTeX source code and rewrite each formula five times using GPT-4, generating 250 additional formulas. We manually verify these formulas to ensure their rendered results are identical to the original 50 formulas. Using the initial LaTeX source code as the ground truth, we analyze the score distribution of the BLEU and CDM metrics. As shown in <Ref>, the CDM metric is unaffected by style changes, with all samples scoring 1. In contrast, the BLEU metric's scores are dispersed, making it unsuitable for formula evaluation. The CDM metric remains robust and reliable despite formatting changes.
§.§ Evaluation of Mainstream Models
We conduct a detailed evaluation of mainstream models using both the CDM and BLEU metrics. Note that all BLEU scores in this paper have been normalized <cit.>. However, as discussed in the limitation section, normalization operations cannot address all issues, which will be evident in the following experiments.
§.§.§ UniMER-Test Evaluation
As shown in <Ref>, the evaluation results of the four models on the UniMER-Test dataset indicate that the quality ranges from low to high as follows: Pix2Tex, Texify, Mathpix, and UniMERNet, based on both BLEU and CDM metrics. ExpRate@CDM clearly shows the proportion of completely correct predictions for each model, indicating that the text character-based ExpRate is unreliable.
From the results in <Ref>, it appears that the trends of the BLEU and CDM metrics are consistent. To verify the reliability of using the BLEU metric for model comparison, we further present evaluation results on the UniMER-Test subsets. As shown in <Ref>, we observe two notable anomalies: Firstly, in the SCE subset, when comparing the quality of the Mathpix and UniMERNet models, the BLEU and CDM metrics provide opposite conclusions. A detailed review of the UniMERNet paper reveals that the SCE subset was annotated based on Mathpix and then manually corrected. This means that the expression style of the SCE formulas is more consistent with Mathpix. Consequently, even though the CDM metric indicates that UniMERNet has better actual model quality, the BLEU metric, influenced by the expression style, suggests that Mathpix is superior. Secondly, for the Pix2Tex model, the BLEU metric is very low on the HWE and SCE subsets but performs well on the SPE and CPE subsets. This discrepancy arises because the Pix2Tex training set includes a large number of printed formulas from arXiv and lacks data in the HWE and SCE styles.
These anomalies clearly illustrate the limitations of the BLEU metric in evaluating the quality of formula recognition models. In contrast, the CDM metric proposed in this paper is fair and intuitive.
§.§.§ Tiny-Doc-Math Evaluation
The evaluation results of Tiny-Doc-Math are shown in <Ref>. For cropped formula inputs (formula-level), all four models perform reasonably well, with CDM scores above 0.7. Notably, the current leading multimodal large model GPT-4o has the highest BLEU score among the four models but the lowest CDM score. This discrepancy indicates that the BLEU metric may not be reliable, suggesting that the formula recognition accuracy of GPT-4o still has room for improvement, lagging behind traditional SOTA models. Additionally, although Mathpix has the highest CDM score, only 18.53% of the formulas are completely accurate. Manual verification revealed that many formulas are missing commas or periods at the end.
When the input is document-level screenshots, the models output the recognition results for the entire document (not just the formulas). Evaluation is conducted by matching the recognized block formulas. In this scenario, it can be observed that the accuracy of GPT-4o further decreases. In contrast, Mathpix and Nougat perform better, but even the document multimodal large model Nougat only achieves a CDM score of 0.7852. This indicates that there is still significant room for improvement in document-level recognition models. Mathpix remains the best performer, with a fully correct formula rate of 57.89%. The accuracy of document-level recognition is crucial for advanced document understanding tasks like scientific knowledge Q&A, and CDM provides an excellent standard for selecting formula models and offers direction for improving formula recognition.
§ CONCLUSION
In this paper, we introduced Character Detection Matching (CDM), a novel evaluation metric for formula recognition. CDM addresses the shortcomings of the existing metrics by utilizing spatial character matching, overcoming issues with diverse formula representations. Comprehensive evaluations on different models and datasets demonstrate CDM's superiority in precisely reflecting recognition quality. CDM provides a fairer and more intuitive assessment, highlighting current evaluation metric issues and paving the way for future research and improvements in the field.
§ APPENDIX
§ USER PREFERENCE EVALUATION ANALYSIS
To provide a more intuitive and clear analysis of the credibility of CDM, we supplement the content in Section 5.2 with a detailed examination of user preferences for CDM and BLEU metrics under different conditions.
To assess the reliability of CDM, we design an annotation interface as shown in <Ref>. Given the ground truth rendered image and the model's predicted rendered image for various samples, annotators are asked to assign an appropriate score. Score A and Score B correspond to the BLEU and CDM scores of the prediction results, but the order is randomized so that users do not know which score corresponds to which metric. Users make their choice based on their intuitive judgment from four options.
A total of 1008 samples are scored, and the results are categorized into four scenarios. We provide a detailed and clear analysis of user preferences for CDM and BLEU metrics in each scenario, as illustrated in <Ref>:
CDM is better (64%):
In this scenario, examples include Case 1 and Case 2. In Case 1, the prediction result is 100% correct, with a CDM score of 1 and a BLEU score of 0. Users directly chose the CDM score. In Case 2, the prediction result is mostly correct, but the BLEU score is significantly lower than expected, leading users to prefer the CDM score.
Both scores are equally good (32%):
Examples in this scenario include cases 3 and 4, where the CDM and BLEU scores are relatively close, both reflecting the proportion of model prediction errors in an accurate and intuitive manner.
BLEU is better (3%):
In Case 5, due to different token representations of "BF", BLEU detects inconsistencies, while CDM considers BF and 𝔅𝔉 as the same token.
Neither score is good (1%):
In Case 6, although the two formulas contain different tokens, and , they render similar images (ℰ and ε). Both CDM and BLEU fail in this case.
CDM is reliable in 96% of cases. The remaining 4% are due to LaTeX issues, which will be optimized in future versions, with minimal impact on the overall evaluation.
§.§ LaTeX Rendering and Syntax Errors
CDM relies on normalizing LaTeX source code and rendering images. Therefore, code that cannot be rendered or contains syntax errors (which cannot be normalized) will result in computation failures. For example, the expression is a failure case due to a missing , leading to rendering failure. For these cases, CDM assigns a score of 0. Although CDM cannot directly handle them, this approach is reasonable and aligns well with human perception.
The number of LaTeX rendering and syntax errors depends on the quality of the model's prediction. Among the four models, Pix2tex, Texify, Mathpix, and UniMERNet, the proportion of LaTeX rendering and syntax errors in the predicted results on the UniMER-Test is 13.83%, 5.03%, 2.38%, and 1.05%, respectively.
§.§ Rendering Types Affecting Token Consistency
CDM defines characters without considering rendering styles. However, different rendering styles can produce visually distinct results, potentially causing different tokens to render into nearly identical characters (<Ref> Case 6), or the same tokens to render into different characters (<Ref> Case 5). Similar situations include and , and , whose rendering effects are G, 𝒢, 𝒳, 𝔛, respectively. This inconsistency can confuse the token consistency check, leading to errors in the computed metric.
§ IN-DEPTH METHODOLOGY FOR EVALUATING TINY-DOC-MATH
§.§ Construction of Tiny-Doc-Math Dataset
The evaluation dataset is constructed primarily from arXiv papers in the fields of mathematics and computer science, published after June 2024. We manually select a batch of these papers and download the LaTeX source code and corresponding PDFs. Using regular expressions, we match the formulas displayed from the LaTeX source. After individual formula rendering and manual verification, the Tiny-Doc-Math validation set is built, comprising 12 papers, 196 pages, and a total of 437 formulas.
§.§ Formula-Level Evaluation Methodology
Once the evaluation dataset is constructed, we extract mathematical formulas from the LaTeX source code. Since LaTeX sources may contain custom commands and comments from authors, we apply a series of preprocessing steps to ensure accurate extraction. First, we remove comments from the LaTeX source using regular expressions (including , , and ). Next, we convert aliases defined by commands such as , , , , , and to their original forms to ensure successful formula rendering. We then remove content before to avoid matching irrelevant information. After preprocessing, we extract displayed mathematical formulas from the LaTeX source using a series of regular expressions, as shown in <Ref>(a). For each paper, the matched mathematical formulas are written to a text file, one formula per line.
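The patterns below are hypothetical stand-ins for the regular expressions referred to above (the exact set is given in the referenced figure); they illustrate how displayed math can be pulled from a preprocessed LaTeX source.

```python
import re

# Hypothetical displayed-math patterns; the authors' exact expressions
# for ground truth and per-model outputs differ, as noted in the text.
DISPLAYED_MATH = [
    re.compile(r"\\begin\{(equation|align|gather)\*?\}(.*?)\\end\{\1\*?\}", re.S),
    re.compile(r"\\\[(.*?)\\\]", re.S),
    re.compile(r"\$\$(.*?)\$\$", re.S),
]

def extract_displayed_formulas(tex_source):
    formulas = []
    for pattern in DISPLAYED_MATH:
        for match in pattern.finditer(tex_source):
            formulas.append(match.groups()[-1].strip())
    return formulas
```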
We render the extracted GT mathematical formulas to obtain formula-level GT images, which are then used as inputs for Mathpix, UniMerNet, pix2tex, and GPT-4o to generate corresponding predictions. Finally, we compute metrics such as BLEU and CDM after matching the predictions with the GTs.
§.§ Document-Level Evaluation Methodology
We convert PDF pages to images and use these images as inputs for Mathpix and GPT-4o to generate corresponding predictions, while Nougat takes the whole PDF as input. After obtaining the document-level predictions, we used extraction algorithms to extract displayed formulas from the predictions, and match them with the GT formulas obtained in the previous section to compute BLEU and CDM metrics.
Due to the different syntax formats of the outputs from different models, we use different regular expressions to extract formulas for each model, as shown in <Ref>(b), (c), and (d). Similarly, for each PDF, the matched mathematical formulas from each model's predictions are written to a text file, one formula per line.
§.§ Matching and Metric Computation
After obtaining the GTs and predicted mathematical formulas, we match the GTs with the predicted formulas line by line to compute the final CDM metric. Given the high accuracy of displayed formula predictions, we use edit distance as the metric for matching formulas. To account for different math delimiters used by different models (e.g., vs. ), we remove all math delimiters before matching, focusing solely on the content. Labels and tags are also removed from the formulas.
The matching process consists of two rounds. In the first round, we set a low edit distance threshold for precise matching. This means that only predictions with a high similarity to the ground truth formula will be matched. We iterate through the GT formulas, calculating the edit distance with all predicted results. The prediction with the minimum edit distance is recorded as matched only if that minimum is below the threshold. If not, we skip the line and mark both the GT and the prediction as unmatched. In the second round, we set a higher threshold to account for those matching cases where the edit distance might be large. We iterate through the unmatched GT formulas, calculate the edit distance with the remaining unmatched predicted formulas, and record matches if the distance is below the threshold. If any predicted formulas remain unmatched after the two rounds, we mark them as incorrect or redundant predictions and append them to the end of the matched results.
Through practical implementation, we find that setting the first-round threshold to 0.4 and the second-round threshold to 0.8 provides the most reasonable matching. Although extreme cases might occur where the rendered results are identical but fail to match due to large edit distances, these instances are not common and have been manually corrected.
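Schematically, the two-round procedure reads as follows, reusing the normalized_edit_distance helper sketched earlier; delimiter, label, and tag stripping are omitted.

```python
def two_round_match(gts, preds, thresholds=(0.4, 0.8)):
    """Greedy two-round GT/prediction matching on normalized edit distance.

    Returns a dict {gt_index: pred_index} and the list of prediction
    indices left unmatched (incorrect or redundant predictions).
    """
    matches, used = {}, set()
    for threshold in thresholds:
        for i, gt in enumerate(gts):
            if i in matches:
                continue
            candidates = [(normalized_edit_distance(gt, pred), j)
                          for j, pred in enumerate(preds) if j not in used]
            if not candidates:
                continue
            distance, j = min(candidates)
            if distance <= threshold:
                matches[i] = j
                used.add(j)
    leftovers = [j for j in range(len(preds)) if j not in used]
    return matches, leftovers
```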
After matching the GTs and predicted formulas, we compute metrics such as BLEU and CDM.
§.§ Result Discussion
As shown in <Ref>, GPT-4o's document-level predictions exhibited a significant number of CDM scores between 0.6 and 0.9, primarily due to hallucination phenomena in large models. For example, as shown in <Ref>(a), GPT-4o generates structurally similar but content-irrelevant results. Additionally, as shown in <Ref>(b), GPT-4o's predictions often lack standardized formatting, i.e., frequently generating formulas without math delimiters, leading to extraction and rendering failures and resulting in many CDM=0 cases. For Mathpix, although the CDM between the document level and formula level is close, the proportion of CDM=1 predictions at the formula level is significantly lower. This is mainly due to the lack of commas in Mathpix's single formula predictions, as shown in <Ref>(c). Nougat's predictions often contain syntax errors, as shown in <Ref>(d), leading to rendering failures and CDM=0 cases. Moreover, Nougat's predictions sometimes leave several pages in the middle of the PDF with no prediction results, resulting in missing formulas in the final output.
§ EFFICIENT DATA SELECTION FOR FORMULA RECOGNITION
Current formula recognition methods often overlook the importance of sample selection during training. We demonstrate that by utilizing the CDM metric for training data selection, it is possible to achieve performance comparable to using the entire dataset while only utilizing less than 20% of the data. We conduct the following experiment: First, we randomly split the UniMER-1M dataset into ten equal parts. We then train the model using 10%, 20%, up to 100% of the data and observe the model's performance with varying amounts of training data. As shown by the blue points in <Ref>, the model's performance generally improves as the amount of training data increases. Notably, with just 10% (106,179 samples) of the data, the model achieves satisfactory performance, accurately predicting most formulas. This suggests that the remaining 90% of the data may be largely redundant for training purposes.
To further investigate, we perform two rounds of hard case data selection. First, we use the model trained on 10% of the data to identify samples with CDM ≠ 1 from the remaining 90%. We find 76,026 such samples, which is less than 8% of the remaining data, indicating that over 90% of the formulas can be accurately predicted. Combining these with the initial 10% random data, we have a total of 182,205 samples (17.16% of the UniMER-1M dataset). As shown in <Ref>, the model trained on this combined dataset performs comparably to the model trained on the full dataset, except for a slight underperformance on the SCE subset.
Next, we use this model to further select hard cases from the remaining data, identifying an additional 9,734 samples, representing about 1% of the remaining data. This brings the total to 191,939 samples (18.08% of the full dataset). The performance of this model shows a slight improvement over the previous round, achieving results comparable to or even exceeding those of the model trained on the full dataset across various subsets.
This experiment demonstrates the effectiveness of using CDM for hard case selection in formula recognition. Training based on hard case mining can serve as an efficient method to enhance model performance. This approach allows for the expansion of training data by selecting only the necessary samples, eliminating the need to use the entire dataset. Future formula recognition datasets can be expanded using this method, focusing on the most challenging samples to improve model accuracy and efficiency.
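A minimal sketch of this selection loop is given below; train, predict and cdm_score are placeholders for the actual training and evaluation pipeline, and the sample layout (image/latex fields) is an assumption.

```python
# Sketch of CDM-based hard-case mining, not the exact experimental code.
import random

def mine_hard_cases(samples, seed_frac=0.1, rounds=2):
    random.shuffle(samples)
    n_seed = int(seed_frac * len(samples))
    selected, pool = samples[:n_seed], samples[n_seed:]
    model = train(selected)                       # baseline on a random 10%
    for _ in range(rounds):
        # keep only samples the current model does not predict perfectly
        hard = [s for s in pool
                if cdm_score(predict(model, s["image"]), s["latex"]) < 1.0]
        selected += hard
        pool = [s for s in pool if s not in hard]
        model = train(selected)                   # retrain on seed + hard cases
    return model, selected
```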
§ EVALUATION METHOD BASED ON IMAGE DIFFERENCES
Previous work <cit.> mentions using image-based difference methods for evaluating formula recognition results, but a thorough analysis of the limitations of this approach is needed. To further assess the effectiveness of these methods, we conduct experiments using both image edit distance (Editdist) and Mean Squared Error (MSE) of image differences. As shown in <Ref>, Case 1 demonstrates that when the model's prediction is correct and the rendered output perfectly matches the ground truth (GT), both EditDist and MSE are zero, indicating an accurate formula. However, in Case 2, where the prediction misses the character α, the image-based difference method flags all subsequent positions as mismatched, even though only one character is missing. A more severe example is illustrated in Case 3, where the predicted formula content is correct but an extra newline character is predicted, leading to a significant image difference. In this case, both EditDist and MSE are non-zero and fail to reflect the error accurately. This highlights the necessity of the proposed CDM metric.
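A toy example makes the shift problem concrete; the cell-based encoding below is our own simplification of a rendered formula line.

```python
# Dropping one leading glyph shifts every subsequent character cell, so a
# cell/pixel-wise difference flags the whole line for a one-symbol error.
import numpy as np

gt = np.arange(10)                       # ten distinct glyphs in ten cells
pred = np.append(np.arange(1, 10), 0)    # first glyph dropped, rest shifted left
print(np.mean(gt != pred))               # 1.0: every cell now mismatches
```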
Some mathematical models for flagellar activation mechanisms

François Alouges, Irene Anello, Antonio DeSimone, Aline Lefebvre-Lepot, Jessie Levillain

September 9, 2024
=====================================================
§ ABSTRACT
Several micro-swimmers in nature use active slender appendages such as cilia or flagella to swim inside fluids. The characteristic periodic motion of these cilia and flagella allows them to swim in viscous fluids at low Reynolds numbers. In order to generate this motion, some form of internal activity along the appendage is necessary.
This activity is created by an internal structure extending all along the flagellum, called the axoneme. It is composed of nine pairs of microtubules arranged on the surface of a cylinder. Protein macro-molecules, called molecular motors, are located between microtubule pairs and create local sliding between two neighboring pairs. The motors' work is powered by a chemical component called ATP.
A “two-row model” is introduced: a simplified model of the axoneme with two motor rows in total, using a system of partial differential equations to describe the motors' behavior. The main theorems of this paper prove that this model is well-posed, in the sense that its solutions exist and are unique, and that the system undergoes a Hopf bifurcation depending on the quantity of ATP available to the motors. Near the instability, the oscillation amplitude can also be computed.
The two-row model is then extended to a system with an arbitrary number N of motor rows, and
all models are also studied through numerical simulations, in which one can observe periodic oscillations
past a certain ATP value, without any additional form of regulation in the system.
§ INTRODUCTION
Cilia and flagella are active
slender organelles employed by
eukaryotic cells to swim through viscous fluid <cit.>. They exploit hydrodynamic friction to induce
self-propulsion and show a characteristic non-reciprocal periodic motion in their dynamics <cit.>. This enables them to move without relying on inertial forces, i.e.,
at a low Reynolds number <cit.>.
Pioneering work by Taylor <cit.>, Gray and Hancock <cit.>, and Machin <cit.> established that actuation at one end is not sufficient to produce a traveling wave of bending similar to the one observed experimentally, and that some form of internal activity along the flagellum is necessary to sustain the oscillations.
Numerical simulations with distributed activity were then conducted by Brokaw <cit.>, matching parameter values in order to reproduce a flagellum's beating pattern <cit.>.
This internal activity is present in all cilia and flagella due to their common internal structure, the axoneme. The axoneme consists of nine pairs of protofilaments called microtubules. It is an active structure due to the molecular motors located between neighboring microtubule pairs. These molecular motors are attached to one microtubule pair and walk along the adjacent pair, creating local sliding between contiguous microtubule pairs.
This mechanism, which is fueled by the conversion of chemical energy (ATP) into mechanical work, creates bending along the filament <cit.>.
The three-dimensional structure of the axoneme can be modeled as two planar filaments, with motors attached to the top filament exerting force on the fixed bottom filament (<cit.>,<cit.>,<cit.>). Since we are interested in the emergence of mesoscopic order in the axoneme from the uncoordinated activity of many independent molecular motors, we consider a portion of the filaments of arc-length of the order of 10 ℓ, where ℓ is a sub-micron length scale defined by the periodic microtubule structure (specifically, the typical inter-dynein spacing is 24 nm, see <cit.>). Over this length scale, the filaments can be considered as rigid.
This complex three-dimensional structure was first modeled in <cit.> as a pair of filaments that bends in a plane, with a single row of motors in between; throughout this paper we call this system the one-row model. This kind of simplified model of the axoneme, further discussed in <cit.>, greatly eases the study of the dynamics of the system while still exhibiting the typical oscillatory behaviour of cilia and flagella. In particular, it makes it possible to couple explicitly the dynamics of active filaments with the stochastic equations that regulate the molecular activity of the dyneins, while keeping the governing equations at a manageable level of complexity.
In this work, we extend the one-row model by introducing a second row of molecular motors, as in Figure <ref>(b). We call this new framework, which symmetrizes the previous model, the two-row model. It is a simplified version of the axoneme, forming a cylinder with only two microtubule pairs linked by two motor rows, as shown in Figure <ref>(a), and it can be seen as a first step towards the real structure, which we study via an N-row model in section <ref>, where N corresponds to the number of motor rows (see Figure <ref>). Since we are interested in the microscopic description of the axoneme, we assume rigid filaments in all the N-row models.
Our focus is on the motor activity, and we use the stochastic model presented in <cit.> and <cit.>, in which motors are attached to one of two filaments and walk on the other one. This model has no additional components regulating sliding, and nevertheless shows that dyneins can synchronize and thus auto-regulate themselves to create a form of alternating sliding on both sides of the axoneme. This matches, in particular, the curvature regulation hypothesis called steady dynein loading <cit.>, in which axonemal curvature is created without any inhibition mechanism transmitted along the system.
Other curvature regulation models, which use additional mechanical signaling to provide some form of inhibition, have been proposed in the literature <cit.>.
To study the two-row model, we generalize the stochastic model proposed in <cit.>. Each motor is anchored to one of two filament pairs, pair 1 or pair 2, and has two identical heads, say head A and head B, each of them being either bound or unbound to the opposite filament. The motor has two different chemical states: state 1, where head A is bound and head B is unbound, or state 2, where the opposite holds. In this model, both heads cannot be unbound (or bound) at the same time. Between the pair of filaments, passive elastic and viscous elements resist the motion; they are modeled by the positive constants k and η, respectively.
Each state i = 1, 2 is defined by its potential energy W_i(ξ) at position ξ∈ℝ. The transition rates, ω_i(ξ)= ω_ij(ξ) with i,j = 1,2 and i ≠ j, represent the probability per unit time for a motor to switch between state i to state j. Moreover, since the filament is a periodic structure, W_i and ω_i are periodic functions with period ℓ.
As ATP provides the energy for the motor to change states, the transition rates depend on its concentration Ω. A detailed explanation of this is available in <cit.>.
In the limit of an infinite system of motors, we introduce P_i(ξ,t) as the probability density that a motor anchored to tubule pair i is in state 1 at position ξ and time t.
To obtain the probability density that a motor anchored to tubule pair i is in state 2, one must compute 1/ℓ - P_i.
The shift between the moving and the non-moving filament is measured by x(t), with velocity v(t) = d/dtx(t).
Let P(ξ, t) = P_1(ξ,t) and Q(ξ,t) = P_2(ξ + x(t),t).
Then, we obtain (see section <ref>) the following system of equations, for ξ∈ [0, ℓ] and t>0:
{[ ∂ P/∂ t(ξ,t) + v(t) ∂ P/∂ξ(ξ, t) = -( ω_1(ξ) +ω_2(ξ)) P(ξ, t) + ω_2(ξ)/ℓ,; ∂ Q/∂ t(ξ,t) - v(t) ∂ Q/∂ξ(ξ,t) = -( ω_1(ξ) +ω_2(ξ)) Q(ξ,t) + ω_2(ξ)/ℓ,; v(t) = 1/2η( ∫_0^ℓ (P(ξ,t)- Q(ξ,t))∂_ξΔ W (ξ)dξ - 2kx(t) ),; P(0,t) = P(ℓ,t), Q(0,t) = Q(ℓ,t), ].
where ω_i = ω_i(ξ; Ω) and Δ W (ξ) = W_2(ξ) - W_1(ξ) are ℓ-periodic.
As one can see in Figure <ref>(b), pair 0 and pair 2 are the same and they are both fixed.
The goal of this paper is twofold. On the one hand, we rigorously prove an existence and uniqueness result for the solution of (<ref>), and we show the existence of a Hopf bifurcation for that same system when reasonable formulas are taken for the potentials W_i and transition rates ω_i, depending on the ATP concentration via Ω.
On the other hand, we provide the reader with numerical simulations of (<ref>), where an upwind scheme is employed to solve both transport equations. The numerical method is then extended to the more realistic N-row model.
Before stating the theoretical results, we point out that we assume (see <cit.>),
ω_1(ξ; Ω) +ω_2(ξ; Ω) = a_0(Ω),
where a_0(Ω)>0. Let us define C_#^1([0,ℓ]) as the set of restrictions to [0,ℓ] of C^1, periodic functions over ℝ. The first theorem then reads as follows:
Let us fix ℓ>0. Assume ω_1(ξ;Ω) and ω_2(ξ;Ω) as in (<ref>). Moreover, assume ω_2 and Δ W to be at least C_#^1([0,ℓ]).
If the initial data P(ξ, 0) and Q(ξ, 0) are C_#^1([0,ℓ]) and x(0)=0, then the system of equations (<ref>), admits a unique solution P, Q ∈ C^1([0,ℓ]×ℝ_+), with ξ↦ P(ξ,·) and ξ↦ Q(ξ,·) in C_#^1([0,ℓ]), and x ∈ C^1(ℝ_+).
For the second result we choose Δ W (ξ) = U cos(2 πξ/ℓ), as in <cit.>. Moreover, since the two motor heads are identical, ω_2(ξ) = ω_1(ξ + ℓ/2). Taking the transition rates' periodicity into account, we use the following Fourier expansion
ω_2(ξ; Ω) = a_0(Ω)/2 + ∑_n = 2k + 1, k ≥ 0( a_n(Ω) cos2n πξ/ℓ + b_n(Ω)sin2n πξ/ℓ),
with a_n(Ω) and b_n(Ω) real coefficients.
In addition to the hypotheses above, assume that a_0(Ω)=a_0^0Ω^α_0, a_1(Ω)=a_1^0Ω ^α_1, b_1(Ω)=b_1^0Ω^α_1 for a_1^0,b_1^0 ∈ℝ and α_0,α_1,a_0^0, Ω∈ℝ_+. Furthermore, let us define
τ (Ω) := -1/4(2a_0^0Ω^α_0+ ζℓ/π+ 2 λa_1^0/a_0^0 ℓΩ^α_1-α_0),
with ζ = 2 π k / ηℓ and λ = 2 π^2 U / ηℓ.
Suppose there exists Ω_0 ∈ℝ_+ such that τ(Ω_0)=0 and τ'(Ω_0)>0, then the solutions P(ξ,t;Ω), Q(ξ,t;Ω) and x(t;Ω) of the system (<ref>) show a super-critical Hopf bifurcation in time near the bifurcation value Ω_0 and near the fixed point
(P_eq(ξ;Ω),Q_eq(ξ;Ω),x_eq(Ω)) = (ω_2(ξ;Ω)/a_0(Ω)ℓ,ω_2(ξ;Ω)/a_0(Ω)ℓ,0).
In particular, the fixed point is asymptotically stable for Ω < Ω_0, and unstable for Ω > Ω_0. Moreover there exists an asymptotically stable periodic orbit for Ω > Ω_0 with radius
ρ = √(-(Ω-Ω_0)τ'(Ω_0)/τ̃) +o(√(Ω-Ω_0)),
where τ̃ = - 3 πζ/4 ℓ(π a_0^0Ω_0^α_0 + ℓζ/π a_0^0Ω_0^α_0 + 2 ℓζ), τ̃<0.
The methods introduced to prove the theorems can also be used to demonstrate similar results for the one-row model, and might prove useful to consider the general N-row structure.
The next section presents the two-row model and proves Theorems <ref> and <ref>, with numerical simulations confirming these results. The final section explores new questions by extending the model to an N-row structure.
§ NUMERICAL SIMULATIONS OF THE ONE-ROW MODEL
In this section, we describe the numerical scheme used throughout the whole paper. It is used here to compute the solution to the one-row model, but can easily be extended to N-row models, for N>1.
We also show that well-known results for the 1-row model are recovered using our numerical scheme.
§.§ One-Row model
In the one-row model, the motors are attached to the top filament only, and they exert a force on the bottom, non-moving filament.
We introduce densities P^i(ξ)≥ 0 with i= 1,2 which give the probability to find a motor at position ξ in state i. These densities are not independent but obey P^1(ξ) + P^2(ξ) = 1/ℓ, reflecting the fact that a motor is always either in state 1 or in state 2. The top filament is at position x(t), with velocity v(t) = d/dtx(t).
The one-row model, given in <cit.>, expresses this motion as a transport equation on P=P^1, namely:
{[ ∂_t P(ξ,t) + v(t)∂_ξ P(ξ,t) = -( ω_1 +ω_2) P(ξ,t) + ω_2/ℓ; v(t) = 1/η(∫_0^ℓ dξ P(ξ,t)∂_ξΔ W(ξ) - kx(t)), ].
where ω_i = ω_i(ξ; Ω) and Δ W (ξ) = W_2(ξ) - W_1(ξ).
This system has a non-moving, stationary solution:
P_eq(ξ; Ω) = ω_2/ℓ(ω_1+ω_2), v_eq = 0, x_eq = 1/k∫_0^ℓdξ P_eq(ξ; Ω)∂_ξΔ W(ξ).
Depending on the chosen expressions for transitions rates ω_i and potentials W_i, a threshold value Ω=Ω_0 may exist, above which this solution becomes unstable and the system oscillates.
§.§ Numerical method
For numerical simulations of the whole 1-row PDE system (<ref>), we write a first-order upwind scheme for the density P where, after each step, we update the velocity v. Let us take Δ x such that ℓ = J Δ x for J ∈ℕ^*, and Δ t the time step. We define P_j^n := P(j Δ x, n Δ t), for n≥ 1 and 1 ≤ j ≤ J. Being ℓ-periodic, we may extend P_j^n for all j ∈ℤ by setting P_j+J^n=P_j^n for all j ∈ℤ and all n ∈ℕ. At a fixed time step t= n Δ t, n≥ 1, the velocity v^n of the central tubule pair being known, we compute, for j ∈ℤ
{[ S^n_j = -a_0 P^n_j + ω_2(jΔ x)/ℓ, ; P^n+1_j = ( 1 - v^nΔ t/Δ x) P^n_j + v^nΔ t/Δ x P^n_j-1 + Δ t S^n_j if v^n >0,; P^n+1_j = ( 1 + v^nΔ t/Δ x) P^n_j - v^nΔ t/Δ x P^n_j+1 + Δ t S^n_j if v^n <0. ].
and (x^n+1, v^n+1) are
{[ η v^n+1 = Δ x ∑_j=1^J (P^n+1_j ∂_ξΔ W(jΔ x)) - kx^n,; x^n+1 = x^n + v^n+1Δ t. ].
Notice that, in (<ref>), only one of the two updates is used for all j. Moreover, due to the stability of the upwind scheme, we need to check at each iteration that | v^n Δ t/Δ x| <1.
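For the reader's convenience, here is a minimal NumPy transcription of the scheme (<ref>, <ref>); it is our own sketch, in which omega2, dW_prime and P_init are placeholders for the model functions of Table 1, and the ℓ-periodic extension is handled by np.roll.

```python
import numpy as np

def simulate_one_row(ell, J, dt, n_steps, a0, omega2, dW_prime, eta, k, P_init):
    """First-order upwind scheme for the one-row model, periodic BCs via np.roll."""
    dx = ell / J
    xi = dx * np.arange(J)
    P = P_init(xi).astype(float)
    x = v = 0.0
    for _ in range(n_steps):
        assert abs(v) * dt / dx < 1.0, "CFL condition violated"
        c = v * dt / dx
        S = -a0 * P + omega2(xi) / ell                        # source term S^n_j
        if v >= 0:
            P = (1 - c) * P + c * np.roll(P, 1) + dt * S      # backward difference
        else:
            P = (1 + c) * P - c * np.roll(P, -1) + dt * S     # forward difference
        v = (dx * np.sum(P * dW_prime(xi)) - k * x) / eta     # velocity update
        x += v * dt                                           # then position
    return xi, P, x, v
```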
§.§ Numerical results
All simulations are carried out defining the model functions that satisfy the hypotheses from theorem <ref>: Δ W(ξ) = U cos (2 πξ / ℓ), a_0(Ω) =a_0^0 √(Ω), a_1(Ω)=b_1(Ω) =a_1^0Ω and a_n(Ω)=b_n(Ω) = 0, if n>1, with α_0 = 1/2 and α_1=1. All parameter values used for all simulations (unless specified otherwise) are shown in Table 1, matching those of F. Jülicher <cit.>. We have taken for k_BT its value at 36^∘C (human body temperature). We compute a_1^0 such that the threshold value of the system remains Ω_0, which is given.
These choices will be justified in section <ref>.
As written above, in <cit.>, the transition rates were taken symmetric to reflect the symmetry of the three-dimensional body. Namely, the ω_i are symmetric in the sense of <cit.> if |ω_i(ξ+ℓ/2)|=|ω_i(-ξ+ℓ/2)|.
To point out what happens if the system is not symmetric we choose b_1 ≠ 0, as in <cit.>.
The system needs a specific ATP concentration Ω_0 to start moving. This can be observed in Figure <ref>. Here, we plot the graphs of P(ξ,t) in the periodicity cell [0,ℓ] for the space variable ξ.
When Ω<Ω_0, the system quickly goes to its equilibrium state. The filament does not move and maintains a non-zero displacement, x_0 ≠ 0, as shown in Figures <ref>(a), <ref>(c) and <ref>(e). In this regime, the quantity P(ξ,t), which is the probability of finding a motor with the right head attached to the filament and the left head detached from it, is constant in time. Lastly, due to our choices for the transition rates, the natural equilibrium configuration of the motors' heads is not independent of the position in space.
As soon as the system with elastic links attains enough chemical energy (namely, Ω > Ω_0), the position of the filament, together with the motor density P, oscillate around their equilibrium positions, as in Figures <ref>(d) and <ref>(f).
Notice that, since the transition rates are not symmetric, the equilibrium displacement for the one-row model is different from zero (Figures <ref>(c) and <ref>(d)).
It is also known that the dynamical behavior of the filament depends on the presence of elastic elements, modeled through the elastic force -kx(t) with k ≠ 0. If enough chemical ATP is given to the system, namely Ω > Ω_0, then the filament is expected to deviate from its equilibrium position. If no elastic elements are present, i.e. k = 0, then the motors walk and the filament slides, with v(t) ≠ 0, in one arbitrary direction. In this case, | x | keeps growing until the two filaments are no longer connected to one another (through the motors). Instead, if k ≠ 0, the filament starts to oscillate.
§ THE TWO-ROW MODEL
§.§ Motivation
In this section we focus on the two-row model, from both theoretical and numerical perspectives.
The two-row model serves the purpose of symmetrizing the one-row model.
In the one-row model <cit.>, even without any external force, the moving filament experiences a non-zero equilibrium displacement x(t)=x_eq, which disrupts axonemal symmetry and causes bending at rest. Jülicher and Camalet mentioned this issue in <cit.>, solving it by choosing symmetrical potentials, as in our case, and symmetrical transition rates, namely |ω_i(ξ+ℓ/2)|=|ω_i(-ξ+ℓ/2)|, with ξ∈ [0,ℓ]. Instead, in this work, we model non-symmetrical transition rates and introduce a second row of molecular motors, producing a zero equilibrium x_eq=0 at rest.
The two-row model is a simplified version of the axoneme that acts like its two-dimensional projection (<cit.>), forming a cylinder with only two microtubule pairs linked by two motor rows, as shown in Figure <ref>(a). Moreover, this model is a first step to the N-row model presented in section <ref>, where N corresponds to the quantity of motor rows.
Despite lacking additional sliding regulation components, the model demonstrates that dyneins can synchronize and self-regulate to create alternating sliding on both sides of the axoneme. This matches, in particular, with the curvature regulation hypothesis called steady dynein loading <cit.> in which axonemal curvature is created without any inhibition mechanism transmitted along the system.
§.§ Model structure
We now derive in detail the system (<ref>), which is depicted in Figure <ref>(b). We consider three pairs of filaments, numbered from 0 to 2. The pairs 0 and 2 are identical and fixed, while the central pair 1 may move. We call x(t) its displacement, and v(t)= d/dtx(t) its velocity. We define P_1(ξ,t) and P_2(ξ,t) as the probabilities for the motors to be in state one at position ξ and time t, when attached to the pair 1 (bottom row) or to the pair 2 (top row), respectively. We assume that the sum of the transition rates is uniform, as in (<ref>), and obtain two transport equations for ξ∈ [0, ℓ] and t>0:
{[ ∂ P_1/∂ t(ξ,t) + v(t) ∂ P_1/∂ξ(ξ,t) = -a_0(Ω) P_1(ξ,t) + ω_2(ξ; Ω)/ℓ,; ∂ P_2/∂ t(ξ,t) = -a_0(Ω)P_2(ξ,t) + ω_2(ξ-x(t); Ω)/ℓ. ].
Additionally, the force balance equation on the central filament reads:
f_ext-2η v - 2kx +f_mot = 0,
where f_ext is the external force applied on the central filament, and f_mot is the active force exerted by the motors, which is given by
f_mot(t) = ∫_0^ℓ (P_1(ξ,t)∂_ξΔ W (ξ) - P_2(ξ,t)∂_ξΔ W (ξ - x(t)))dξ.
Here, k x and η v represent the elastic and viscous resistances, respectively.
For the purpose of our work, we do not consider any external forcing, which means
f_ext = 0.
Let P(ξ, t) = P_1(ξ,t) and Q(ξ,t) = P_2(ξ + x(t),t). Then, (<ref>) becomes
{[ ∂ P/∂ t(ξ,t) + v(t) ∂ P/∂ξ(ξ,t) = -a_0(Ω) P(ξ,t) + ω_2(ξ)/ℓ,; ∂ Q/∂ t(ξ,t) - v(t) ∂ Q/∂ξ(ξ,t) = -a_0(Ω) Q(ξ,t) + ω_2(ξ)/ℓ. ].
The periodicity of P_1 and P_2, enables us to rewrite the motor force as
f_mot(t) = ∫_0^ℓ (P(ξ,t)- Q(ξ,t))∂_ξΔ W (ξ)dξ ,
and obtain, from (<ref>),
v(t) = ẋ(t) = 1/2η( ∫_0^ℓ (P(ξ,t)- Q(ξ,t))∂_ξΔ W (ξ)dξ - 2kx(t) ).
Finally, we recover the system of equations (<ref>), which will then be studied both theoretically and numerically, before generalizing it to an arbitrary number of layers N.
§.§ Proof of Theorem <ref> (existence and uniqueness)
The proof follows a classical scheme in two steps: we first show local existence of the solution in time, and then extend it to ℝ_+.
Step 1: local existence.
Let T>0 be a time to be chosen afterwards and define the map ψ: C^0([0,T]) → C^0([0,T]) as ψ[x(·)](t) = ∫_0^t F[x(·)](s) ds, where
F[x(·)](s) = 1/2η∫_0^ℓ (P_x(ξ,s)- Q_x(ξ,s))∂_ξΔ W (ξ)dξ - k/η x(s),
and the functions P_x(ξ,t) and Q_x(ξ,t) are defined as the solutions of (<ref>) and v(t) = ẋ(t). Notice that P_x and Q_x are explicitly given in their integral form by
P_x(ξ,t)= e^-a_0 tP (ξ - x(t)+ x(0), 0) +e^-a_0 t/ℓ∫_0^t e^a_0 sω_2(ξ + x(s)-x(t)) ds,
and
Q_x(ξ,t)= e^-a_0 tQ (ξ +x(t)-x(0), 0) +e^-a_0 t/ℓ∫_0^t e^a_0 sω_2(ξ - x(s)+x(t)) ds,
which remain well-defined when x is only in C^0([0,T]).
Notice that (x̃, P_x̃, Q_x̃) is a solution of the system (<ref>, <ref>) if and only if x̃(t) is a fixed point of ψ, i.e. ψ[x̃(·)] = x̃(·). In order to proceed, we prove that ψ: C^0([0,T]) → C^0([0,T]) is a strict contraction for T sufficiently small.
Let us take two functions x_1 and x_2 in C^0([0,T]), with x_1(0)=x_2(0)=0 and with the corresponding P_x_1, Q_x_1 and P_x_2, Q_x_2 defined
by the integral formulations (<ref>, <ref>). The initial conditions are identical P_x_1(ξ, 0) = P_x_2(ξ, 0)=P(ξ) and Q_x_1(ξ, 0) = Q_x_2(ξ, 0)=Q(ξ). We want to estimate the quantity
ψ[x_1(·)](t)-ψ[x_2(·)](t) = ∫_0^t F[x_1(·)](s)-F[x_2(·)](s) ds.
Using (<ref>), we get
F[x_1(·)](s) -F[x_2(·)](s) = 1/2η( ∫_0^ℓ(P_x_1(ξ, s) - P_x_2(ξ, s)
- ( Q_x_1(ξ, s) - Q_x_2(ξ, s) ))∂_ξΔ W (ξ)dξ)- k/η (x_1(s)-x_2(s)).
Now, we have from (<ref>)
P_x_1(ξ,t) - P_x_2(ξ,t)= e^-a_0 t(P_x_1 (ξ - x_1(t), 0) - P_x_2 (ξ - x_2(t), 0))
+e^-a_0 t/ℓ∫_0^t e^a_0 s(ω_2(ξ + x_1(s)-x_1(t))-ω_2(ξ + x_2(s)-x_2(t)) ds.
We have the estimate
‖ P_x_1 - P_x_2‖_L_t^∞ L_ξ^∞
≤‖∂_ξ P‖_L_ξ^∞‖ x_1-x_2‖_L_t^∞
+1/ℓ‖∫_0^t e^a_0 (s-t)‖∂_ξω_2‖_L_ξ^∞ | x_1(s)-x_1(t)- x_2(s)+x_2(t)| ds ‖_L_t^∞
≤( ‖∂_ξ P‖_L_ξ^∞ + 2/a_0 ℓ‖∂_ξω_2‖_L_ξ^∞) ‖ x_1-x_2‖_L_t^∞.
The same goes for the difference Q_x_1(ξ,t) - Q_x_2(ξ,t), and we deduce
‖ P_x_1 - P_x_2- Q_x_1 + Q_x_2‖_L_t^∞ L_ξ^∞≤ C_1 ‖ x_1-x_2‖_L_t^∞,
with C_1 = 4 max{‖∂_ξ Q‖_L_ξ^∞,‖∂_ξ P‖_L_ξ^∞,2/a_0 ℓ‖∂_ξω_2‖_L_ξ^∞}.
We now deduce from (<ref>)
‖ F[x_1(·)]-F[x_2(·)]‖_L_t^∞≤( C_1ℓ/2η‖∂_ξΔ W‖_L_ξ^∞+ k/η) ‖ x_1-x_2‖_L_t^∞,
and obtain
‖ψ[x_1(·)]-ψ[x_2(·)]‖_L_t^∞≤ C_2 T ‖ x_1-x_2‖_L_t^∞,
with C_2 = C_1 ℓ/2η‖∂_ξΔ W‖_L_ξ^∞ +k/η.
Taking T = 1/ 2C_2, this proves that ψ is a contraction, as claimed. Then, there exists a unique fixed point x̃ (·) ∈ C^0([0,T]) which satisfies (<ref>).
Step 2: global solutions.
The previous construction can be extended as long as P_x and Q_x remain bounded in C^1([0,ℓ]) with respect to ξ and in L^∞(ℝ_+) with respect to t, as shown by the formulas for C_1 and C_2. But from (<ref>, <ref>) we have
‖ P_x(·, t) ‖_C_ξ^1≤ e^-a_0t‖ P ‖_C_ξ^1 + (1-e^-a_0t)1/a_0 ℓ‖ω_2‖_C_ξ^1,
which shows that
‖ P_x ‖_C_ξ^1, L_t^∞≤max( ‖ P ‖_C_ξ^1,1/a_0 ℓ‖ω_2‖_C_ξ^1),
and the same goes for Q_x.
We thus obtain that there is a unique global solution (x, P_x, Q_x) to (<ref>, <ref>) for all time t≥ 0.
In fact, x∈ C^1(ℝ_+) and P_x, Q_x ∈ C_#^1([0,ℓ] ×ℝ_+), with ξ↦ P(ξ,·) and ξ↦ Q(ξ,·) in C_#^1([0,ℓ]), as shown by the following bootstrap argument:
From equations (<ref>, <ref>), we know that P_x and Q_x are continuous in time, which enables us to deduce that F[x(·)] ∈ C^0(ℝ_+). Therefore, ψ[x(·)] ∈ C^1(ℝ_+). But, since x = ψ[x(·)], we deduce that x ∈ C^1(ℝ_+). Re-using equations (<ref>, <ref>), we may then deduce that P_x and Q_x are in fact C^1 in time (and they actually have the minimal regularity of the initial conditions P, Q and of ω_2).
§.§ Proof of Theorem <ref> (Hopf bifurcation)
The functions ξ↦ P(ξ,t) and ξ↦ Q(ξ,t) are periodic with period ℓ, and they can then be expanded in Fourier series
{[ P(ξ,t) = p_0(t)/2 + ∑_n>0( p^c_n(t) cos2n πξ/ℓ + p^s_n(t)sin2n πξ/ℓ),; Q(ξ,t) = q_0(t)/2+ ∑_n>0( q^c_n (t)cos2n πξ/ℓ + q^s_n(t)sin2n πξ/ℓ). ].
We insert this expansion and the one for the transition rates (<ref>) into the system (<ref>). By matching same order terms, we get an infinite number of ordinary differential equations for the coefficients of P and Q. Namely, for n = 0 we get two decoupled equations for p_0 and q_0:
ṗ_̇0̇(t) = - a_0(Ω) p_0(t) + a_0(Ω)/ 2 ℓ, q̇_̇0̇(t) = - a_0(Ω) q_0(t) + a_0(Ω)/ 2ℓ.
For n ≠ 0 we obtain:
{[ ṗ^̇ċ_̇ṅ(t) + 2 π n /ℓ v(t) p^s_n(t)= - a_0(Ω) p^c_n(t) + a_n(Ω)/ ℓ ,; ṗ^̇ṡ_̇ṅ(t) - 2 π n /ℓ v(t) p^c_n(t)= - a_0(Ω) p^s_n(t) + b_n(Ω)/ ℓ ,; q̇^̇ċ_̇ṅ(t) - 2 π n /ℓ v(t) q^s_n(t)= - a_0(Ω) q^c_n(t) + a_n(Ω)/ ℓ ,; q̇^̇ṡ_̇ṅ(t) + 2 π n /ℓ v(t) q^c_n(t)= - a_0(Ω) q^s_n(t) + b_n(Ω)/ ℓ , ].
together with the force balance equation
2 ηẋ(t) + 2 k x(t) + π U (p_1^s-q_1^s) = 0 .
Note that the coupling between the probabilities evolution and the force equilibrium equation takes place only for the first order coefficients. We are going to prove the existence of a Hopf bifurcation by treating order n=0 first, then n=1 and lastly n>1. Combining all three results will complete the proof.
Step 1. Zeroth order coefficients. It is clear that (<ref>) gives
p_0(t)=e^-a_0t(p_0(0)- 1/2ℓ) + 1/2ℓ , q_0(t)=e^-a_0t(q_0(0)- 1/2ℓ) + 1/2ℓ ,
and both converge to 1/2ℓ exponentially fast.
Step 2. First order coefficients.
We now prove the onset of oscillatory patterns for the first order coefficients p_1^c,s(t), q_1^c,s(t), and for the position x(t).
From (<ref>) and (<ref>) we obtain a first-order five-dimensional ODE:
{[ ṗ^̇ċ_̇1̇ = - a_0(Ω) p_1^c + (ζ x + λ/2 (p_1^s-q_1^s))p^s_1 + a_1(Ω)/ℓ,; ṗ^̇ṡ_̇1̇ = - a_0(Ω) p_1^s - (ζ x + λ/2 (p_1^s-q_1^s)) p^c_1 + b_1(Ω)/ℓ,; q̇^̇ċ_̇1̇= - a_0(Ω) q_1^c - (ζ x + λ/2 (p_1^s-q_1^s)) q^s_1 + a_1(Ω)/ℓ,; q̇^̇ṡ_̇1̇ = - a_0(Ω) q_1^s + (ζ x + λ/2 (p_1^s-q_1^s)) q^c_1 + b_1(Ω)/ℓ,; ẋ = -ℓ/2 π( ζ x + λ/2 (p_1^s-q_1^s)) , ].
where ζ = 2 π k / ηℓ and λ = 2 π^2 U / ηℓ.
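Before turning to the linear analysis, we note that this five-dimensional system is straightforward to integrate numerically; the sketch below is our own, using SciPy's solve_ivp, and all parameter values passed to it are placeholders rather than Table 1 values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def first_order_rhs(t, u, a0, a1, b1, ell, zeta, lam):
    """Right-hand side of the system for (p_1^c, p_1^s, q_1^c, q_1^s, x)."""
    p1c, p1s, q1c, q1s, x = u
    g = zeta * x + 0.5 * lam * (p1s - q1s)   # shared coupling term
    return [-a0 * p1c + g * p1s + a1 / ell,
            -a0 * p1s - g * p1c + b1 / ell,
            -a0 * q1c - g * q1s + a1 / ell,
            -a0 * q1s + g * q1c + b1 / ell,
            -ell / (2.0 * np.pi) * g]

# usage sketch (u0 and parameters are placeholders):
# sol = solve_ivp(first_order_rhs, (0.0, 50.0), u0,
#                 args=(a0, a1, b1, ell, zeta, lam), dense_output=True)
```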
We linearize the system around its equilibrium point p_eq(Ω) = (p_1,eq^c(Ω),p_1,eq^s(Ω),q_1,eq^c(Ω),q_1,eq^s(Ω), x_eq(Ω)), where
p_1,eq^c = q_1,eq^c = a_1(Ω) / a_0(Ω) ℓ = a_1^0 /a_0^0 ℓΩ^α_1-α_0 , p_1,eq^s = q_1,eq^s = b_1(Ω) / a_0(Ω) ℓ = b_1^0/a_0^0 ℓΩ^α_1-α_0 , x_eq=0 .
We observe that the Jacobian matrix has five eigenvalues: three of them are equal to
-a_0(Ω)<0 while the other two are of the form τ(Ω) ±1/2 √(π)√(-2 ζℓ a_0(Ω) + π τ^2(Ω)) , where τ(Ω) is given by (<ref>).
Since, by hypothesis, there exists a real and positive Ω=Ω_0 such that τ(Ω_0)=0, then -2 ζℓ a_0(Ω_0) + π τ^2(Ω_0)<0, and we deduce that the pair of complex eigenvalues can be written as τ(Ω) ± i ω(Ω) where
ω(Ω) := -1/2 √(π)√(2 ζℓ a_0(Ω) - π τ^2(Ω)),
and they cross the imaginary axis at Ω=Ω_0.
In the following proposition, we are going to study the non-linear behavior of the vector field solution to (<ref>) by exploiting the center manifold theorem; the orbit structure near the fixed point and Ω_0 is determined by the restriction of the non linear system to the center manifold. In particular, system (<ref>) restricted to the center manifold will show a super-critical Hopf bifurcation near p_eq(Ω_0) and Ω_0.
Under the hypotheses of Theorem <ref>, the non-linear system (<ref>) has a supercritical Hopf bifurcation near (p_eq(Ω_0),Ω_0).
Change of variables.
In the first part of the proof we restrict the dynamical system to the center manifold. To compute the latter, we bring the system to a more suitable formulation.
Let us first transform the fixed point of (<ref>) to the origin. We define the new variables as
δ p_1^c = p_1^c - p_1,eq^c, δ p_1^s = p_1^s - p_1,eq^s, δ q_1^c = q_1^c - q_1,eq^c, δ q_1^s = q_1^s - q_1,eq^s.
We then use the following linear and invertible change of variables
[ r = a_1/b_1δ p_1^c + δ p_1^s, s = a_1/b_1δ q_1^c +δ q_1^s, z = 1/2(δ p_1^s + δ q_1^s), y = 1/2(δ p_1^s - δ q_1^s), ]
and shorten the notation by taking X=(r,s,z)^T, and Y=(y,x)^T to transform the system (<ref>) into
d/dt([ X; Y ]) = 𝕄(Ω) ([ X; Y ]) + ([ G(X,Y); F(X,Y) ]),
where
𝕄(Ω)=(
[ 𝔹(Ω) 0; 0 𝔸(Ω) ])
is a block diagonal matrix. Since we assumed that a_0(Ω)=a_0^0Ω^α_0, a_1(Ω)=a_1^0Ω ^α_1, b_1(Ω)=b_1^0Ω^α_1 for a_1^0 ∈ℝ and α_0,α_1,a_0^0, Ω∈ℝ_+, then for the linear part we have
𝔹(Ω)=-a_0^0 Ω^α_0Id_3,3 , 𝔸(Ω)= (
[ -a_0^0 Ω^α_0 - λ/ℓa_1^0/a_0^0Ω^α_1-α_0 - a_1^0/a_0^0ζ/ℓΩ^α_1-α_0; ; -λℓ/2 π -ζℓ/2 π; ]),
and the non linear part is defined as
G(X,Y)=(ζ x+λ y)([ a_1^0/b_1^0 (y+z)+ b_1^0/a_1^0(-r+y+z); a_1^0/b_1^0 (y-z)+ b_1^0/a_1^0 (s+y-z); -b_1^0/a_1^0 (r-s-2 y)/2 ]) ,
and
F(X,Y)=(ζ x+λ y)([ -b_1^0/a_1^0(r+s-2 z)/2; 0 ]).
Computation of the center manifold.
The center manifold can then be computed by using standard techniques (see <cit.>, Chapter 20, Section 2). We start by rewriting the system in such a way that Ω_0 is moved to the origin through the change of variable δΩ = Ω - Ω_0. As it is classical, we treat δΩ as a variable of the system. This means that we add the equation δ̇Ω̇=0 to the dynamical system and that the non linear part of the system includes all the products δΩ r, δΩ s, δΩ z etc.
Since the terms in the matrix 𝕄 are nonlinear in Ω, we expand them as
(Ω_0 + δΩ)^α = Ω_0^α + c(α) δΩ + O(δΩ ^2),
where c(α) = αΩ_0^α-1.
We insert the expansion (<ref>) into the system (<ref>), neglecting terms of order two in δΩ and getting
d/dt([ X; Y; δΩ ]) = 𝕄(Ω_0)([ X; Y; 0 ]) + ([ g(X,Y,δΩ); f(X,Y,δΩ); 0 ]),
with
g(X,Y,δΩ)= -a_0^0c(α_0)δΩ X +G(X,Y), so that the full nonlinear term (g,f)^T reads componentwise
([ - a_0^0c(α_0) δΩ r + (ζ x+λ y) ( (a_1^0)^2 (y+z)+(b_1^0)^2 (-r+y+z)/a_1^0b_1^0); -a_0^0 c(α_0) δΩ s +(ζ x+λ y) ((a_1^0)^2 (y-z)+(b_1^0)^2 (s+y-z)/a_1^0b_1^0); -a_0^0c(α_0) δΩ z -b_1^0 (r-s-2 y)/2 a_1^0 (ζ x+λ y); -a_0^0 c(α_0) δΩ y- λ/ℓa_1^0/a_0^0c(α_1-α_0) δΩ y -ζ/ℓ c(α_1-α_0) δΩ x -b_1^0(r+s-2 z)/2 a_1^0(ζ x+λ y); 0 ]).
We call
𝔹_0:=(
[ -a_0^0 Ω_0^α_0 0 0; 0 -a_0^0 Ω_0^α_0 0; 0 0 -a_0^0 Ω_0^α_0; ]), 𝔸_0:= (
[ -a_0^0 Ω_0^α_0 - λ/ℓa_1^0/a_0^0Ω_0^α_1-α_0 - a_1^0/a_0^0ζ/ℓΩ_0^α_1-α_0; ; -λℓ/2 π -ζℓ/2 π; ]),
the two sub-matrices of 𝕄.
Explicitly, the non linear part is split into
g(r,s,z, y,x,δΩ):=
([ - a_0^0c(α_0) δΩ r + (ζ x+λ y) ( (a_1^0)^2 (y+z)+(b_1^0)^2 (-r+y+z)/a_1^0b_1^0); -a_0^0 c(α_0) δΩ s +(ζ x+λ y) ((a_1^0)^2 (y-z)+(b_1^0)^2 (s+y-z)/a_1^0b_1^0); -a_0^0 c(α_0) δΩ z -b_1^0 (r-s-2 y)/2 a_1^0 (ζ x+λ y); ])
and
f(X,Y,δΩ) = -([ (a_0^0 c(α_0) +λ/ℓa_1^0/a_0^0 c(α_1-α_0))δΩ y +ζ/ℓ c(α_1-α_0)δΩ x; ; 0 ])+ F(X,Y).
We can define a center manifold as
W^c(0) = {(X,Y,δΩ): X= 𝐡(Y,δΩ),
|Y|<ε, |δΩ |<ε̃, 𝐡(0,0)=0, ∇𝐡(0,0)=0},
for ε and ε̃ sufficiently small and 𝐡=(h_1,h_2,h_3) smooth enough.
In order for 𝐡 to be the center manifold for the system (<ref>), the graph of 𝐡(Y,δΩ) has to be invariant under the dynamics generated by (<ref>). Hence, by plugging 𝐡 into the system and computing its derivative, we get the following quasilinear differential equation
∇_Y𝐡·(𝔸(Ω_0)Y+ f(𝐡,Y,δΩ))=𝔹(Ω_0)𝐡 + g(𝐡,Y,δΩ).
The solution 𝐡(Y,δΩ) of (<ref>) can be approximated with a power series expansion up to any desired degree of accuracy. In our case we expand them up to order two defining
h_i(Y,δΩ):= a_i1y^2+a_i2 y x+a_i3 y δΩ +a_i4 x^2+a_i5 x δΩ +a_i6δΩ ^2 + O(Y^3,δΩ ^3).
To completely determine (locally) the center manifold, we have to compute the coefficients a_ij knowing that the functions defined in (<ref>) must solve (<ref>). For the detailed computations, we refer the reader to Appendix <ref>.
Restricted to the center manifold, the original system (<ref>) has the following form:
d/dtY = 𝔸(Ω) Y + f(𝐡(Y),Y)+O(Y^3,δΩ ^3) .
A complete formula for f(𝐡(Y),Y)) in terms of Y=(y,x)^T follows from our computations and reads
f( 𝐡(Y),Y)=
([ 2 π a_0^0ℓ (λ y+ζ x) (πa_1^0Ω_0^α_1-α_0(λ ^2 y^2+ζ ^2 x^2)+ζ a_0^0ℓ (π a_0^0Ω_0^α_0-ζℓ) y x)/(2 πa_1^0λΩ_0^α_1-α_0+ζ a_0^0ℓ^2) (2 πa_1^0λΩ_0^α_1-α_0+a_0^0 ℓ (π a_0^0Ω_0^α_0-ζℓ)); 0 ]).
Observe that τ(Ω) ± i ω(Ω) is the pair of conjugated eigenvalues of 𝔸(Ω), and therefore we are within the hypotheses of the Hopf bifurcation theorem for a two-dimensional system.
Normal form.
The next step is to bring system (<ref>) into its normal form, from which we deduce the type of Hopf bifurcation that the system is attaining.
In order to proceed, we apply a further change of coordinates, such that 𝔸(Ω) is transformed to its real Jordan form. Namely, we define the transformation matrix
ℙ=[ P_11 P_12; 1 0 ],
where
P_11=π/λℓ(a_1^0λΩ_0^α_1-α_0/a_0^0 ℓ+a_0^0 Ω_0^α_0)-ζ/2 λ
and
P_12 = -1/λℓ√(πζℓ(a_0^0 Ω_0^α_0-a_1^0λΩ_0^α_1-α_0/a_0^0 ℓ)- (πa_1^0λΩ_0^α_1-α_0/a_0^0 ℓ+π a_0^0Ω_0^α_0)^2-ζ ^2 ℓ^2/4).
The matrix ℙ defines the new coordinates Y= ℙỸ thanks to which equation (<ref>) can be expressed in its normal form
d/dtỸ = [ τ(Ω) -ω(Ω); ω(Ω) τ(Ω) ]Ỹ + ℙ^-1f(𝐡(ℙỸ),ℙỸ)+O(Y^3,δΩ ^3).
To compute key properties of the system, such as the amplitude of the limit cycles, we change coordinates to the polar ones Ỹ^T= ρ (sin(θ),cos(θ)), and get
{[ ρ̇(t) = τ'(Ω_0) δΩρ(t)+ τ̃ρ^3(t) + O(δΩ^2ρ,δΩρ^3),; θ̇(t) =ω(Ω_0) + ω'(Ω_0) δΩ + ω̃ρ^2(t) +O(δΩ^2,δΩρ^2), ].
which is the normal form of (<ref>) in polar coordinates around Ω_0.
The non linear part ℙ^-1f(𝐡(ℙỸ),ℙỸ) determines the constants τ̃ and ω̃. The first one is involved in the expression for the limit cycle amplitude, and we compute it using a well-known formula (see <cit.>, Chapter 20, Section 2). We obtain
τ̃ = - 3 πζ/4 ℓ(π a_0^0Ω_0^α_0 + ℓζ/π a_0^0Ω_0^α_0 + 2 ℓζ).
Additionally, it is straightforward to compute
τ'(Ω_0) = -1/2(a_1^0λ(α_1-α_0)Ω_0^α_1-α_0-1/a_0^0 ℓ + α_0 a_0^0 Ω_0^α_0-1).
Since τ̃ is negative and τ'(Ω_0) is positive by hypothesis, the reduced system (<ref>), and hence the whole system (<ref>), shows a supercritical Hopf bifurcation near the bifurcation parameter Ω_0.
In particular, for Ω sufficiently near and greater than Ω_0 there exists an asymptotically stable periodic orbit with radius as in (<ref>).
This finishes the proof of Proposition <ref>.
Step 3. Higher order terms. Lastly, we are going to prove that, after large times, the solutions of (<ref>) p_n^c,s(t) and q_n^c,s(t), with n>1 are periodic with the same period as x(t). We remark again that, since the dynamics of x (<ref>) is independent of p_n^c, p_n^s, q_n^c, q_n^s for n > 1, we may assume that x(t) is given and periodic, and split the system into pairs of coupled equations as stated in the following Proposition.
For each n > 1, consider the following pair of systems, of two equations each:
{[ ṗ^̇ċ_̇ṅ(t) + 2 π n/ℓẋ p^s_n(t)= - a_0(Ω) p^c_n(t) + a_n(Ω)/ ℓ,; ṗ^̇ṡ_̇ṅ(t) - 2 π n/ℓẋ p^c_n(t)= - a_0(Ω) p^s_n(t) + b_n(Ω)/ ℓ. ].
and
{[ q̇^c_n(t) - 2 π n/ℓẋ q^s_n(t)= - a_0(Ω) q^c_n(t) + a_n(Ω)/ ℓ,; q̇^s_n(t) + 2 π n/ℓẋ q^c_n(t)= - a_0(Ω) q^s_n(t) + b_n(Ω)/ ℓ. ].
Suppose that, for t ≥ 0, the function t ↦ x(t) is periodic of period T. Then, the solutions to the systems (<ref>) and (<ref>) may have two behaviors: they either converge in time to periodic solutions of the same period T if n is odd, or they go to zero for large times when n is even.
We introduce z_n = p_n^c + ip_n^s.
The first two equations of (<ref>) then become:
ż_n + (a_0(Ω) -2inπ/ℓẋ)z_n = c_n
where c_n = (a_n(Ω) + ib_n(Ω))/ℓ is constant. We thus deduce the following expression for z_n(t)
z_n(t) = c_n ∫_0^te^-∫_u^t(a_0(Ω) - iv̅(s))dsdu + z_n(0)e^-∫_0^t (a_0(Ω)-iv̅(s))ds ,
where v̅ = 2nπ/ℓẋ.
Let us first notice that the second term, z_n(0)e^-∫_0^t (a_0(Ω)-iv̅(s))ds goes to zero when t goes to infinity. This, together with the fact that a_n=b_n=0 if n is even (see (<ref>)), gives c_n=0 and the solutions p_n^c,s go to zero for large times when n is even.
Let us now focus on the case where n is odd. We want to prove that, when v̅ is periodic of period T, z_n converges towards a T-periodic solution after a transitory regime. We again notice that the second term of
(<ref>) converges to 0, and the result will follow by studying only the first term.
We thus denote by z̃_n(t)= ∫_0^te^-∫_u^t(a_0 - iv̅(s))dsdu, and compute
[ z̃_n(T+t)- z̃_n(t) = ∫_0^t+Te^-∫_u^t+T(a_0 - iv̅(s))dsdu - ∫_0^te^-∫_u^t(a_0 - iv̅(s))dsdu; = ∫_0^Te^-∫_u^t+T(a_0 - iv̅(s))dsdu + ∫_T^t+Te^-∫_u^t+T(a_0 - iv̅(s))dsdu; -∫_0^te^-∫_u^t(a_0 - iv̅(s))dsdu . ]
Using the fact that v̅ is T-periodic, the last two terms cancel and we obtain
z̃_n(T+t)- z̃_n(t) = ∫_0^Te^-∫_u^t+T(a_0 - iv̅(s))dsdu
= e^-(t+T)a_0∫_0^Te^ua_0 + ∫_u^t+Tiv̅(s)dsdu
which goes to 0 when t goes to infinity.
We therefore proved that z_n converges to a T-periodic function.
The same result holds for (q^c_n,q^s_n) solution to (<ref>).
Finally, if we have initial conditions that are close to the equilibrium point of system (<ref>), and an amount of ATP Ω close to Ω_0, we know exactly how the solution of the system evolves in time. The zeroth order coefficients of (<ref>) quickly approach the constant 1/2ℓ, the first order coefficients start to oscillate in a limit cycle, and the higher order terms either go to zero or follow the same patterns as p_1^c,s and q_1^c,s with the same period of oscillation. Globally, the solution of (<ref>) shows a supercritical Hopf bifurcation in time, close enough to the parameter Ω_0.
We remark that, up until the computation of the center manifold, the coefficients a_0, a_1 and b_1 may remain general: there is no need for the power laws to obtain three negative eigenvalues and two complex eigenvalues in system (<ref>). To facilitate the application of center manifold techniques, we have opted to explicitly express the dependence of a_0, a_1, and b_1 on Ω. The selection of power laws is somewhat intuitive, as it extends previous findings (<cit.>,<cit.>) and proves to be advantageous for our analysis.
It is worth commenting on the role of b_1 in the dynamics of the two-row system, since we introduced it to point out that x_eq for the two-row model is always zero, while for the one-row model this is not the case.
First notice that b_1^0 does not appear in 𝕄. Moreover, after the restriction to the center manifold, the non linear part f(𝐡(Y),Y) is independent of b_1^0 too, at least up to third order expansion. Since ẋ(t) only depends on Y, the effect of b_1(Ω) on the displacement solution and its velocity is negligible.
We also remark that the two systems (<ref>) and (<ref>) depend on x(t) and therefore do not depend on b_1(Ω), but rather on a_0(Ω), on a_n( Ω) with n>0 odd, and on b_n(Ω) with n>1 odd. On the other hand, we expect the probabilities P and Q to be sensitive to b_1(Ω), through their first order coefficients.
We point out that both Theorems <ref> and <ref> can be applied to system (<ref>), leading to the existence of an oscillatory solution for the one-row model, and completing the work done in <cit.>.
The Hopf bifurcation theorem gives a formal proof of the results carried out in <cit.>, together with new insights on the non linear dynamics, which have often been studied by exploiting nonlinear systems theory, considering the displacement x(t) as input and the external force f_ext(t) as output.
§.§ Numerical method
We now describe the numerical scheme used throughout the remainder of the paper.
It is used here to compute the solution to the two-row model (<ref>), but can easily be extended to N-row models, for N>2 or restricted to the 1-row model.
We write a first-order upwind scheme for the densities P and Q where, after each step, we update the velocity v. Let us take Δ x such that ℓ = J Δ x for J ∈ℕ^*, and Δ t the time step. We define P_j^n := P(j Δ x, n Δ t) and Q_j^n := Q(j Δ x, n Δ t) for n≥ 1 and 1 ≤ j ≤ J. Being ℓ-periodic, we may extend P_j^n for all j ∈ℤ by setting P_j+J^n=P_j^n for all j ∈ℤ and all n ∈ℕ; the same can be done for Q_j^n. At a fixed time step t= n Δ t, n≥ 1, the velocity v^n of the central tubule pair being known, we compute, for j ∈ℤ
{[ S^n_j = -a_0 P^n_j + ω_2(jΔ x)/ℓ, ; ; T^n_j = -a_0 Q^n_j + ω_2(jΔ x)/ℓ, ; ; P^n+1_j = {[ ( 1 - v^nΔ t/Δ x) P^n_j + v^nΔ t/Δ x P^n_j-1 + Δ t S^n_j, if v^n >0,; ( 1 + v^nΔ t/Δ x) P^n_j - v^nΔ t/Δ x P^n_j+1 + Δ t S^n_j, if v^n <0, ].; ; Q^n+1_j = {[ ( 1 - v^nΔ t/Δ x) Q^n_j + v^nΔ t/Δ x Q^n_j+1 + Δ t T^n_j, if v^n >0,; ( 1 + v^nΔ t/Δ x) Q^n_j - v^nΔ t/Δ x Q^n_j-1 + Δ t T^n_j, if v^n <0, ]. ].
and (x^n+1, v^n+1) are
{[ 2η v^n+1 = Δ x ∑_j=1^J ((P^n+1_j - Q^n+1_j)∂_ξΔ W(jΔ x)) - 2kx^n,; x^n+1 = x^n + v^n+1Δ t. ].
Notice that, in (<ref>), only one of the two updates is used for all j. Moreover, due to the stability of the upwind scheme, we need to check at each iteration that | v^n Δ t/Δ x| <1.
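For completeness, a compact sketch of one time step of this two-row scheme is given below; it is our own NumPy transcription, in which np.roll implements the ℓ-periodic extension and all parameter values are assumed to come from Table 1.

```python
import numpy as np

def two_row_step(P, Q, x, v, dx, dt, ell, a0, omega2_grid, dWp_grid, eta, k):
    """One upwind step for the two-row model on an ell-periodic grid."""
    c = v * dt / dx
    assert abs(c) < 1.0, "CFL condition violated"
    S = -a0 * P + omega2_grid / ell
    T = -a0 * Q + omega2_grid / ell
    if v >= 0:  # P is advected with speed +v, Q with speed -v
        P = (1 - c) * P + c * np.roll(P, 1) + dt * S
        Q = (1 - c) * Q + c * np.roll(Q, -1) + dt * T
    else:
        P = (1 + c) * P - c * np.roll(P, -1) + dt * S
        Q = (1 + c) * Q - c * np.roll(Q, 1) + dt * T
    # force balance: 2*eta*v = integral((P - Q) dW') - 2*k*x
    v = (dx * np.sum((P - Q) * dWp_grid) - 2.0 * k * x) / (2.0 * eta)
    x = x + v * dt
    return P, Q, x, v
```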
§.§.§ Conditions on α_0 and α_1
In order to implement the numerical simulations, we have to characterize ω_1 and ω_2 by fixing their coefficients. We first set the values for a_0^0 and Ω_0. For simplicity, we impose that a_n=b_n=0 for n>1 since, as observed in Theorem <ref>, they have no significant influence over the system's oscillations. We now want to find some a_1^0, α_0, α_1, that verify τ(Ω_0)=0 from equation (<ref>) and τ'(Ω_0)>0. Starting with α_0 and α_1, we search for sufficient conditions such that τ'(Ω_0)>0. This inequality reads
1/2a_0^0α_0 Ω_0^α_0-1 - (α_0 -α_1)λ a_1^0/2a_0^0ℓΩ_0^α_1-α_0-1 < 0,
Substituting a_1^0 into it we get
a_0^0(2 α_0-α_1) Ω_0^α_0 +(α_0 -α_1) ℓζ/2 π < 0.
The condition (<ref>) is fulfilled with different choices for α_0, α_1. A particularly simple solution that we will use hereafter is α_0 = 1/2 and α_1 = 1.
Then, to recover a_1^0=a_1^0(Ω_0), we solve τ(Ω_0)=0, from equation (<ref>), which leads to
a_1^0= - a_0^0 ℓ/2 πλΩ_0^α_0-α_1(2a_0^0Ω_0^α_0π + ℓζ).
Finally, using the parameter values presented in Table 1, we compute a_1^0 at the instability and get a_1^0(Ω_0) = -56.4588 s^-1.
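As a quick sanity check, the value of a_1^0(Ω_0) can be reproduced from the formula above as follows; the sketch is our own, and the numerical inputs must be taken from Table 1 (not repeated here).

```python
import numpy as np

def a1_at_instability(a00, ell, U, eta, k, Omega0, alpha0=0.5, alpha1=1.0):
    """a_1^0 such that tau(Omega_0) = 0, with zeta and lambda as in the text."""
    zeta = 2.0 * np.pi * k / (eta * ell)
    lam = 2.0 * np.pi**2 * U / (eta * ell)
    return (-a00 * ell / (2.0 * np.pi * lam) * Omega0**(alpha0 - alpha1)
            * (2.0 * a00 * Omega0**alpha0 * np.pi + ell * zeta))
```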
§.§ Numerical simulations
Motivated by the previous computations, all simulations are carried out defining Δ W(ξ) = U cos (2 πξ / ℓ), a_0(Ω) =a_0^0 √(Ω), a_1(Ω)=b_1(Ω) =a_1^0Ω and a_n(Ω)=b_n(Ω) = 0, if n>1, with α_0 = 1/2 and α_1=1. It is fundamental to notice that the choice b_1 ≠ 0 makes the transition rates non-symmetric, as in <cit.>.
All parameters values used for all simulations are shown in Table 1, matching F. Jülicher's <cit.>. We have taken for k_BT the value at 36^∘C (human body temperature).
As shown in Figure <ref>, the behavior of the two-row model with respect to the ATP concentration is comparable with the one for the one-row model.
When the ATP concentration Ω is lower than Ω_0, the central pair does not move from its zero equilibrium position. On the contrary, when the ATP concentration is high enough, we observe an oscillatory displacement between microtubules and in the probabilities.
The main difference between the one-row model and the two-row one concerns the equilibrium position around which the solution x(t) oscillates, as expected. This is clearly illustrated in Figure <ref>(b), where the displacement takes place around zero. In this case, the equilibrium position does not create any curvature, meaning it has the potential to bend equally in both directions.
The simulations for the one-row model were carried out using the same upwind numerical scheme. In this case there is only one probability density P(ξ,t), linked to the only moving filament, whose displacement is defined as x(t). We used the same notation as the literature; see <cit.> for example.
§.§.§ Illustrating Theorem <ref>
In the following section, we are going to test the results of Theorem <ref> against numerical simulations of the original partial differential equations system (<ref>) and its first order approximation as the ordinary differential equations system (<ref>).
All constants are chosen as described in Table 1 and in Section <ref>. The parameter δ indicates the relative distance from the instability Ω_0; therefore, the point where the simulations are performed is Ω_0(1+δ).
Motivated by the previous computations, we numerically solve the ODE (<ref>) and the PDE (<ref>) for Ω = Ω_0(1+δ) with δ > 0, focusing on the amplitude of oscillations for the displacement.
To determine the numerical amplitude for both the ODE and the PDE, we calculate the difference between the maximum and minimum values of the solution after some time t̅, when the numerical solution has reached its limit cycle. This difference is then divided by two.
We then compare the numerical amplitude with the theoretical amplitude defined by (<ref>). For our choice of parameters and neglecting higher-order terms, the theoretical amplitude simplifies to:
ρ(Ω)= √(δℓ^2/6 π^2(π a_0^0Ω_0^1/2 + 2 ℓζ/π a_0^0Ω_0^1/2 + ℓζ)).
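For reference, the following small utilities (our own) implement the truncated formula above and the (max - min)/2 estimator used for the numerical amplitude; x_samples is assumed to contain the displacement sampled after the transient time t̅.

```python
import numpy as np

def theoretical_amplitude(delta, ell, a00, Omega0, zeta):
    """Truncated limit-cycle amplitude near the instability."""
    s = np.pi * a00 * np.sqrt(Omega0)
    return np.sqrt(delta * ell**2 / (6.0 * np.pi**2)
                   * (s + 2.0 * ell * zeta) / (s + ell * zeta))

def numerical_amplitude(x_samples):
    """(max - min)/2 over samples taken once the limit cycle is reached."""
    return 0.5 * (np.max(x_samples) - np.min(x_samples))
```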
In Figure <ref>, simulations only start after δ=0.05. This is related to the fact that the closer we are to Ω_0, the slower the solution enters its limit cycle. For this reason, we should have taken a time t̅=t(δ) after which to evaluate the amplitude of x(t); in that case, when δ goes to zero, t̅ (δ)→∞. In the simulations this was not reasonable, so we chose a time t̅ independent of δ, but large enough so that the solution enters its steady-state regime for all δs.
In Figure <ref>(a), we observe the amplitude of x(t) computed with the PDE, the ODE, and the analytical result (<ref>). As expected, the latter loses its predictive power the further we move away from the instability; consequently, the relative error between the truncated formula (<ref>) and the amplitude measured from the PDE increases, as we can see in Figure <ref>(b). This is due to the contribution of the non linear terms, which play a role in determining the amplitude of oscillations even when the system is close to Ω_0.
§.§.§ Comment on the amplitude
The expansion (<ref>) of the amplitude ρ can also be adapted to the ODE formulation of the one-row model. As shown in Figure <ref>, the two-row model does not only center the equilibrium point for the oscillations around zero, but also influences important physical quantities such as the amplitude of oscillations.
§ TOWARDS MODELING THE AXONEME: N-LAYER MODEL WITH FIXED EXTREMITIES
Starting from two rows of molecular motors, we extend the model from section <ref> to N ≥ 2 rows of molecular motors, without any explicit form of inhibition in the system.
The system is then composed of N+1 microtubule doublets on the outside, arranged in a circle, and no central pair. All filaments have the same polarity, meaning that between two microtubule pairs, motors move towards the base.
To take the bridge of an axoneme into account (as shown in Figure <ref>) in our N-layer model, filament pairs 0 and N are considered to be the same pair. They have no shift between each other, as shown in the split view Figure <ref>.
Moreover, all microtubule pairs are assumed inextensible, rigid and at a constant distance from each other. They are held together by elastic and viscous elements that do not permit infinite sliding between them.
We look at molecular motors in a periodicity cell of an N-axoneme, of length ℓ.
§.§ Full mathematical model
For i ∈{ 0, …, N }, we denote by X_i the horizontal shift of the i-th tubule, as shown in Figure <ref>.
We measure displacement with respect to the 0-th filament: X_0 = X_N = 0. For 1 ≤ i ≤ N, we introduce Δ_i = X_i - X_i-1, the relative displacement between filaments i and i-1. It follows that ∑_i=1^NΔ_i(t) =0. The shifting speed of the i-th filament is defined by V_i = Ẋ_i - Ẋ_i-1 = Δ̇_i, and we have
∑_i=1^N V_i = ∑_i=1^NΔ̇_i(t)=0.
Once again, the variable ξ∈ [0, ℓ] represents the local variable along one pair, as in the two-row model. We denote by Q_i(ξ, t) the density of molecular motors at position ξ and time t attached to the i-th filament and who are in state 1, walking on the (i-1)-th filament.
As for the two-row model, in section <ref>, (Q_i, Δ_i) are solution to, for ξ∈ [0, ℓ] and t>0
{∂_t Q_j(ξ,t) = -(ω_1(ξ - Δ_j(t)) + ω_2(ξ - Δ_j(t))) Q_j(ξ, t)
+ ω_2(ξ - Δ_j(t))/ℓ, for j ∈{ 1, …, N },
η (V_ i(t)-V_i+1(t)) = ∫_0^ℓ (Q_i(ξ,t)∂_ξΔ W(ξ) - Q_i+1(ξ,t)∂_ξΔ W(ξ - Δ_i(t))) dξ
- k(Δ_i(t) - Δ_i+1(t)), for i ∈{ 1, …, N-1 }.
.
Here, the first equation is a transport equation for the motors' probability density; the shifting speed is measured relative to the current filament and the one above it. We obtain the second equation in system (<ref>) by writing the force balance on the i-th filament with the motors and springs around it. We then have N-1 force-balance equations that determine the filaments' motion, in which we impose the external forces to be zero.
Following again section <ref>, for j ∈{ 1, …, N }, t>0 and ξ∈ [0, ℓ], we define P_j as P_j(ξ,t) = Q_j(ξ + Δ_j(t),t).
We then observe that ∂_t P_j(ξ,t) = V_j(t) ∂_ξ Q_j(ξ+Δ_j(t),t) + ∂_t Q_j(ξ+Δ_j(t),t), and we finally obtain the system at any t>0 and ξ∈ [0, ℓ]
{∂_t P_j + V_j ∂_ξ P_j = -(ω_1 + ω_2) P_j + ω_2/ℓ, for j ∈{ 1, …, N },
η (V_ i-V_i+1) = ∫_0^ℓ (P_i - P_i+1) ∂_ξΔ W - k(Δ_i - Δ_i+1), for i ∈{ 1, …, N-1 },
∑_i=1^NΔ̇_i(t)=0 and Ẋ_N = 0.
.
Like in previous models, the equilibrium state is, for all 1 ≤ i ≤ N-1 and 1 ≤ j ≤ N
{[ V_i = 0,; X_i = 0,; P_j = P^0 = ω_2/ℓ (ω_1 + ω_2). ].
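At each time step of a simulation, the N-1 force balances in (<ref>) together with the closure ∑_i V_i = ∑_i Δ̇_i = 0 determine all shifting speeds. A minimal sketch of this solve is given below; it is our own illustration, with F the assembled vector of right-hand sides (motor force minus elastic term) of the force-balance equations.

```python
import numpy as np

def solve_velocities(F, eta):
    """Recover (V_1, ..., V_N) from eta*(V_i - V_{i+1}) = F_i and sum(V) = 0."""
    D = np.asarray(F) / eta                        # V_i - V_{i+1} = F_i / eta
    drops = np.concatenate(([0.0], np.cumsum(D)))  # drops[i-1] = V_1 - V_i
    V1 = drops.mean()                              # enforces sum(V) = 0
    return V1 - drops                              # array (V_1, ..., V_N)
```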
§.§ Theoretical considerations
In this situation we want to investigate the existence of a Hopf bifurcation in time, as it was done with the two-row model. To do so, we consider a linearized system arising from the N-row model (<ref>),
in order to have a general idea of the system's dynamics, as in (<ref>). We expand in Fourier series P_1, …, P_N and treat order by order the Fourier coefficients of the probabilities.
For the first order coefficients, we obtain a 3N-1 dimensional ODE system with unknowns:
p_j^c,s, j=1,…,N, X_k, k=1,…,N-1.
We then perform a linear change of variables, similar to (<ref>), in order to write the Jacobian of the system as a block matrix. The first block is an (N+1)-dimensional diagonal matrix with real and negative entries -a_0(Ω). The rest of the linearized system is defined by the following equations
([ q̇_k; ẇ_k; ])=
[ - 2 π^2/ℓη U p_e - a_0 - 2 π/ℓη k p_e; - π U /η - k / η; ]([ q_k; w_k; ])
for k=1,…,N-1, where p_e = a_1/(a_0 ℓ), q_k = p_k^s - p_k+1^s for k=1,…,N-1, and
([ w_1; ⋮; w_N-1 ]) =
[ 2 -1 ; -1 2 -1 ; ⋱ ; -1 2 -1; -1 2 ]([ X_1; ⋮; X_N-1 ]).
It follows that the Jacobian has N+1 real and negative eigenvalues -a_0(Ω), and N-1 identical pairs of complex and conjugated ones μ_k(Ω) for k=1,…, N-1 which come from (<ref>). We observe that μ_k(Ω)=τ(Ω) + i ω(Ω), with τ and ω defined as in Theorem <ref> by equations (<ref>) and (<ref>).
Then, if there exists Ω=Ω_0 such that τ(Ω_0)=0 and τ'(Ω_0)>0, the eigenvalues cross the imaginary axis all at the same bifurcation parameter. Thus, the linear part of the complete dynamical system suggests a bifurcation in the dynamics at Ω_0. We will not investigate the theoretical aspects of this model further in this paper, as the center manifold theorem used in the previous section does not apply directly to such systems. However, as shown in the next section, numerical simulations still suggest a potential pattern in the oscillations and hint that there is indeed a bifurcation.
§.§ Numerical results
In this section, the N-layer model is tested for N=8, to match the 8 motor rows of an axoneme with a bridge and nine microtubule doublets, as in Figure <ref>. We use the same parameter values as for the previous systems, as specified in Table 1.
The initial conditions are such that P_i(t=0) = P^0 for all i, and we destabilize the X_i's randomly in [-x_0, x_0], where x_0 = 0.01 nm.
As the axoneme generates curvature along the flagellum, we look for a regime in which the system alternates between shifting directions in an organized manner. We initially destabilize each half of the system in opposite directions:
{[ X_i = ix_0, if 1 ≤ i ≤⌊ N/2 ⌋,; X_i = (N-i)x_0, if ⌊ N/2 ⌋ +1 ≤ i ≤ N-1,; X_N = 0, ].
where x_0 = 0.01 nm.
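For reference, these initial shifts can be constructed as follows (a minimal Python sketch; x_0 = 0.01 nm matches the simulations, and the random variant mirrors the perturbation described above):

```python
import numpy as np

def tent_initial_shifts(N, x0=0.01):
    """'Tent' initial tubule shifts: X_i = i*x0 on the lower half,
    (N-i)*x0 on the upper half, and X_N = 0 (lengths in nm)."""
    X = np.zeros(N)
    for i in range(1, N):                 # pairs 1 .. N-1; X_N stays 0
        X[i - 1] = i * x0 if i <= N // 2 else (N - i) * x0
    return X

def random_initial_shifts(N, x0=0.01, seed=None):
    """Random shifts in [-x0, x0], as used to probe other phase groupings."""
    X = np.random.default_rng(seed).uniform(-x0, x0, N)
    X[-1] = 0.0                           # keep the last pair fixed
    return X

print(tent_initial_shifts(8))             # N = 8, as in the simulations
```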
Using the same notation as before, i.e. Ω = (1+δ) Ω_0, we measure the distance to the bifurcation point Ω_0 by δ.
Figure <ref>(a) shows the absence of oscillations before instability (Ω = 0.9Ω_0), exactly as expected. Even though the system is put out of equilibrium by the initial conditions, since some tubule shifts are nonzero, they all quickly go back to zero and stop moving.
In Figure <ref>(b), we look at the system past the instability point, for Ω = 1.1 Ω_0. Here, the system clearly reaches a steady-state oscillating regime after a short amount of time. All oscillations also remain centered at zero. As expected from the linear study in section <ref>, all Δ_is have the same amplitude and oscillation frequency, but their phase difference varies depending on initial conditions. In the particular case shown in Figure <ref>(b), one group of tubule pairs is synchronized with pair 7, and the other group with pair 8. In fact, when looking at all layers separately, one can see that odd layers (respectively even layers) oscillate in sync. The other possible outcome when running the simulation with random initial conditions is groups {1, 2, 7, 8} and {3, 4, 5, 6} oscillating together, as shown in Figure <ref>(c).
This influence of the initial conditions is understandable since there is no external force taking the whole axonemal structure and filament into account, resulting in limited coupling between layers at this level. The layers thus tend to keep their original phase difference.
Then, using another set of initial conditions, where exactly one half of the system is pushed in one direction, and the other half in the other one, one can observe that the phase difference is maintained through time and each layer in one half of the axoneme oscillates in sync with the other ones from the same half, as shown in Figure <ref>. These initial conditions are denoted as "middle" in the next paragraphs, and implemented as
{[ X_i = ix_0, if 1 ≤ i ≤⌊ N/2 ⌋,; X_i = (N-i)x_0, if ⌊ N/2 ⌋ +1 ≤ i ≤ N-1,; X_N = 0. ].
The synchronization shown in Figure <ref>(c) was predicted by Howard et al. <cit.>, and proves the importance of having a bridge around which the system alternates between positive and negative displacement.
In the general case, since there is no theory behind the bifurcation of this system of 3N-1 equations, formally understanding the coupling between phase difference and initial conditions remains an open question.
A similar problem has recently been studied numerically for Kuramoto oscillators <cit.>. The Kuramoto model is fairly different from ours, as each oscillator has its own velocity, whereas all of our layers share one common velocity value. Our system of equations thus cannot easily be reduced to a Kuramoto model, but the behaviors of the two systems in terms of synchronization and dependence on initial conditions are very much alike. Some first steps, modelling only individual motors along a single row, have been presented in <cit.>.
§.§.§ Is there an optimal number of layers?
Figure <ref> shows the oscillation amplitude as a function of the number of layers. For both the relative and the absolute shift, the maximum amplitude is reached for N between 8 and 12. In particular, the relative shift is clearly maximal for N=8, i.e. for a nine-doublet axoneme. Since the relative shift is directly responsible for curvature, this suggests that the number of motor rows in a real axoneme is close to the optimal value for maximizing bending.
§.§.§ Influence of b_1 on the oscillations
In this subsection, the N-layer model is simulated with a different transition rate. We impose b_1(Ω) = 0, i.e. ω_2(ξ; Ω) = a_0(Ω)/2 + a_1(Ω) cos2 πξ/ℓ.
Figure <ref> shows tubule oscillation with this new ω_2, after the instability point, with all other parameters unchanged. As expected from the amplitude estimation in the proof of Theorem <ref>, the coefficient b_1 has no influence on the oscillation amplitude. The oscillation frequency remains unchanged as well. However, every tubule is slightly out of phase with respect to its neighbor. This time, all oscillations are centered at zero, but there are three groups of Δs oscillating together instead of two.
It remains to identify physically meaningful parameters for which the two halves of the N-axoneme clearly oscillate in opposite directions, with a π/2 phase lag and centered at zero.
§.§ Changing potentials
In this section, we highlight the influence of our choice for the potentials W_i and transition rates ω_i. To do so, let us consider once again the one-row model.
In Jülicher's first models <cit.>, transition rates are directly derived from chemical considerations, by looking at reaction speeds in ATP hydrolysis. In the case of a two-headed motor, the reactions can be written as:
{[ M_1 + ATP ⇌ M_2 + ADP + P (rates k_1, k_-1),; M_1 + ADP + P ⇌ M_2 + ATP (rates k_2, k_-2), ].
which leads to the following reaction speed:
d[M_1]/dt = -k_1 [M_1] + k_-1 [M_2] - k_2 [M_1] + k_-2 [M_2]
= -δ(x) e^(W_1(x) + μ_ATP)/k_B T [M_1] + δ(x) e^(W_2(x) + μ_ADP + μ_P)/k_B T [M_2]
- δ(x+ℓ/2) e^(W_1(x) + μ_ADP + μ_P)/k_B T [M_1] + δ(x+ℓ/2) e^(W_2(x) + μ_ATP)/k_B T [M_2]
= -ω_1 [M_1] + ω_2 [M_2],
where
{[ ω_1(ξ) = [δ(ξ) e^μ_ATP/k_B T + δ(ξ+ℓ/2)] e^W_1(ξ)/k_B T,; ω_2(ξ) = [δ(ξ) + δ(ξ+ℓ/2) e^μ_ATP/k_B T] e^W_2(ξ)/k_B T, ].
taking the reference chemical potential μ_ADP + μ_P = 0, so that the rates depend directly and explicitly on both the ATP chemical potential and the physical potentials.
Here, δ is a Gaussian centered around the minimum of W_1, which helps favor transition at a specific point in the periodicity cell.
For W_1, we chose an arbitrary periodic potential that is highly nonlinear in ξ, so as to ensure it is not symmetric around ℓ/2:
{[ W_1(ξ) = U cos(2 π/ℓ (0.6 ξ^3 + 0.4ξ)),; W_2(ξ) = W_1(ξ+ℓ/2). ].
We also introduce, as previously done by <cit.>, the local deviation from chemical equilibrium Ω(ξ), which models the dependence of the system on the ATP concentration:
Ω(ξ) = ω_1(ξ)/ω_2(ξ) - exp[(W_1(ξ)-W_2(ξ))/ k_B T],
where k_B is the Boltzmann constant and T is the temperature of the system.
We then use the amplitude of this function as our control parameter, defining Ω := sup_ξ∈ [0,ℓ]| Ω(ξ)|.
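The following Python sketch evaluates Ω(ξ) and its amplitude for the potential above. All parameter values (ℓ, U, k_B T, μ_ATP, and the width of the Gaussian δ) are illustrative, dimensionless placeholders; note that setting μ_ATP = 0 recovers detailed balance, for which Ω(ξ) vanishes identically.

```python
import numpy as np

ell, U, kBT, mu_ATP = 1.0, 1.0, 1.0, 2.0   # illustrative, dimensionless

def W1(xi):
    # periodic potential, nonlinear in xi so it is not symmetric about ell/2
    return U * np.cos(2 * np.pi / ell * (0.6 * xi**3 + 0.4 * xi))

def W2(xi):
    return W1((xi + ell / 2) % ell)

def delta(xi, width=0.05):
    """Gaussian localization factor centered on the minimum of W1."""
    grid = np.linspace(0.0, ell, 2001)
    xi_min = grid[np.argmin(W1(grid))]
    d = (xi - xi_min + ell / 2) % ell - ell / 2   # periodic distance
    return np.exp(-0.5 * (d / width) ** 2)

def omega1(xi):
    return (delta(xi) * np.exp(mu_ATP / kBT)
            + delta((xi + ell / 2) % ell)) * np.exp(W1(xi) / kBT)

def omega2(xi):
    return (delta(xi)
            + delta((xi + ell / 2) % ell) * np.exp(mu_ATP / kBT)) * np.exp(W2(xi) / kBT)

def local_deviation(xi):
    """Omega(xi): local departure from detailed balance."""
    return omega1(xi) / omega2(xi) - np.exp((W1(xi) - W2(xi)) / kBT)

xi = np.linspace(0.0, ell, 4001)
print("Omega = sup |Omega(xi)| ~", np.abs(local_deviation(xi)).max())
```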
In practice, we will only work with Δ W and its usual assumptions, as well as the ones we originally made on ω_2 in the introduction. Moreover, using both our previous conditions and the chemical expressions (<ref>), we arrive at a series of equations on both Ω and α, which could eventually prove that there exists a solution satisfying all our assumptions. The system is explained in further detail in <ref>.
We can thus clearly observe in Figure <ref> that this choice influences the solution: the displacement x becomes highly nonlinear, with a much longer period in time.
§ CONCLUSION AND OUTLOOK
In this paper, we presented a new model for the axoneme, the cytoskeleton of cilia and flagella, focusing on the presence of spontaneous oscillations and the potential synchronization of microtubule doublets under varying ATP concentrations. We call this model the N-row model, where N represents the number of rows of motors walking between two microtubule doublets that are arranged in a circular configuration. Molecular motors are put into motion by a chemical reaction involving ATP, thus inducing microtubule sliding. This specific structural arrangement results in all microtubule doublets having a zero equilibrium position. The dynamics are governed by N transport equations, coming from a stochastic two-state model, and N-1 force balance equations, collectively forming a PDE system with 2N-1 equations. We analyzed this model using both theoretical methods and numerical simulations.
Our first studies focused on the N=2 configuration, revealing the existence and uniqueness of the displacement solution. Moreover, under sufficient chemical energy, the middle microtubule doublet exhibits spontaneous oscillations, corresponding to the existence of a super-critical Hopf bifurcation in the dynamical system. Theoretical results are corroborated by numerical simulations. Subsequently, we conducted a numerical analysis of the N-row model, focusing particularly on the case N=8. Here too, we observe that the system starts to oscillate once the ATP concentration surpasses a critical threshold. Remarkably, under appropriate initial conditions, we note the emergence of two out-of-phase groups among the microtubules, oscillating around the zero position with opposite shifting directions.
The first result on our model can be appreciated starting from the N=2 case. The cylindrical structure answers the problem of the axonemal symmetrization posed in <cit.>, previously solved by choosing symmetric transition rates. Instead, in this paper, asymmetric transition rates have been used throughout. More generally, even in the N-row model, we observed that the equilibrium displacement with zero external force is zero, independently of the potentials and the transition rates. This means that the desired symmetry of the axoneme is always preserved: at equilibrium, with no external forces, each microtubule has the same role and there is no initial shifting in the structure. We underline that this is made possible in our model without imposing specific symmetry for the potentials.
From a more theoretical point of view, both Theorem <ref> (well-posedness) and Theorem <ref> (existence of a Hopf bifurcation) give a complete mathematical description on the two-row model. The same results can be proven for the one-row model as well, formalizing the work done in <cit.>.
With the N-row model with fixed extremes, we set up a framework that is useful to gain insight into the microscopic structure of the axoneme. In particular, we comment on the specific case N=8, which is the closest representation of the real axoneme in terms of the number of microtubule doublets. With the aid of the numerical scheme presented in the two-row model section, we observed the onset of oscillations at a critical value of ATP. To perturb the system we tried several random initial conditions. We then noticed that imposing an opposite shift on each half of the model immediately led to robust and realistic patterns, potentially leading to planar flagellar beating patterns, as discussed in <cit.>.
A natural extension to our work would be to take into account the presence of the central microtubule pair and, therefore, of the radial spokes coming from outer doublets towards the axonemal center. Moreover, this model can be thought of as a building block that, coupled with mechanics of an elastic flagellum, gives the feedback that generates oscillatory patterns following in the footsteps of <cit.> and <cit.> for the planar case and of <cit.> for the three dimensional case.
§ FUNDING
This work was supported by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH.
§ CENTER MANIFOLD COMPUTATION
In practice, equation (<ref>) is a system of three equations. With the first equation below we aim to find the coefficients a_1j, j=1,…,6:
(2 a_11 y + a_12 x + a_13δΩ ) ( -a_0^0 Ω_0^α_0 y - λ/ℓa_1^0/a_0^0Ω_0^α_1-α_0 y - a_1^0/a_0^0ζ/ℓΩ_0^α_1-α_0 x
- a_0^0 c(α_0)δΩ y - λ/ℓa_1^0/a_0^0 c(α_1-α_0)δΩ y - ζ/ℓ c(α_1-α_0)δΩ x
- b_1^0(r+s-2 z)/2 a_1^0(ζ x+λ y) )
+ (a_12 y + 2 a_14 x + a_15δΩ ) (-βℓ/2 π y -αℓ/2 π x)
+ a_0^0Ω_0^α_0 h_1 + a_0^0 c(α_0)δΩ h_1
- (ζ x+λ y) ((a_1^0)^2 (y+z)+(b_1^0)^2 (-r+y+z))/a_1^0b_1^0 = 0.
Since we only need terms x, y and δΩ up to order two, we get rid of all the third order terms, getting
(2 a_11 y + a_12 x + a_13δΩ ) ( -a_0^0 Ω_0^α_0y - λ/ℓa_1^0/a_0^0Ω_0^α_1-α_0y - a_1^0/a_0^0ζ/ℓΩ_0^α_1-α_0x )
+ (a_12 y + 2 a_14 x + a_15δΩ ) (-βℓ/2 π y -αℓ/2 π x)
+ a_0^0 Ω_0^α_0 h_1 - (ζ x+λ y) ( (a_1^0)^2 y+(b_1^0)^2 y/a_1^0b_1^0) = 0.
We then proceed by matching the terms in x, y and δΩ of the same order, and solve the resulting system of equations for the a_1j. Notice that we could have kept only the zero-order term in the Taylor expansion, since all the higher-order terms disappear in this approximation.
We do the same for the second and third equation, solving for the power series of h_2 and h_3.
For i=1,2, we obtain
a_i1= -2 π ^2 λ ^2a_0^0ℓ((a_1^0)^2+(b_1^0)^2) Ω_0^α_1-α_0/b_1^0(2 πa_1^0λΩ_0^α_1-α_0+ζ a_0^0ℓ^2) (2 πa_1^0λΩ_0^α_1-α_0+a_0^0 ℓ(π a_0^0Ω_0^α_0-ζℓ))
a_i2= 2 πζ (a_0^0)^2 ℓ^2 ((a_1^0)^2+(b_1^0)^2) (ζ l-π a_0^0Ω_0^α_0)/a_1^0b_1^0(2 πa_1^0λΩ_0^α_1-α_0+ζ a_0^0ℓ^2) (2 πa_1^0λΩ_0^α_1-α_0+a_0^0 ℓ(π a_0^0Ω_0^α_0-ζℓ))
a_i3= 0, a_i4= -2 π ^2 ζ ^2 a_0^0ℓ((a_1^0)^2+(b_1^0)^2) Ω_0^α_1-α_0/b_1^0(2 πa_1^0λΩ_0^α_1-α_0+ζ a_0^0ℓ^2) (2 πa_1^0λΩ_0^α_1-α_0+a_0^0 ℓ(π a_0^0Ω_0^α_0-ζℓ))
a_i5= 0, a_i6= 0.
For i=3 we get
a_31= -2 π ^2 λ ^2a_0^0 b_1^0ℓΩ_0^α_1-α_0/(2 πa_1^0λΩ_0^α_1-α_0+ζ a_0^0ℓ^2) (2 πa_1^0λΩ_0^α_1-α_0+a_0^0 ℓ(π a_0^0Ω_0^α_0-ζℓ))
a_32= 2 πζ (a_0^0)^2 b_1^0ℓ^2 (π a_0^0Ω_0^α_0-ζℓ)/a_1^0(2 πa_1^0λΩ_0^α_1-α_0+ζ a_0^0ℓ^2) (2 πa_1^0λΩ_0^α_1-α_0+a_0^0 ℓ(π a_0^0Ω_0^α_0-ζℓ))
a_33= 0, a_34= -2 π ^2 ζ ^2a_0^0 b_1^0ℓΩ_0^α_1-α_0/(2 πa_1^0λΩ_0^α_1-α_0+ζ a_0^0ℓ^2) (2 πa_1^0λΩ_0^α_1-α_0+ a_0^0 ℓ(π a_0^0Ω_0^α_0-ζℓ))
a_35= 0, a_36= 0.
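The order-matching step above is mechanical and well suited to a computer algebra system. The following sympy sketch illustrates the procedure on a toy functional equation with the same quadratic-ansatz structure as h_1; the linearization entries A, B, C, D and the forcing term are placeholders, not the coefficients of the actual system.

```python
import sympy as sp

x, y, dO = sp.symbols('x y dOmega')
a = sp.symbols('a1:7')        # a1 .. a6: quadratic-ansatz coefficients

# Quadratic ansatz with the same monomials as h_1 in this appendix:
h = a[0]*y**2 + a[1]*x*y + a[2]*y*dO + a[3]*x**2 + a[4]*x*dO + a[5]*dO**2

# Toy invariance-type equation; A, B, C, D stand in for the true
# linearization entries of the Fourier-mode system.
A, B, C, D = 1, 2, 3, 4
residual = sp.expand(sp.diff(h, x)*(A*x + B*y)
                     + sp.diff(h, y)*(C*x + D*y) + h - x*y)

# Match the coefficient of every monomial of degree <= 2 and solve:
eqs = sp.Poly(residual, x, y, dO).coeffs()
print(sp.solve(eqs, a, dict=True))
```

The six coefficient equations are linear in the a_j, so the system solves uniquely, exactly as in the computation that produces the expressions above.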
§ REFERENCES
BAYANI2023
A. Bayani, S. Jafari, H. Azarnoush, F. Nazarimehr, S. Boccaletti, and M. Perc.
Explosive synchronization dependence on initial conditions: The minimal kuramoto model.
Chaos Soliton. Fract., 169:113243, 2023.
bayly_steady_2016
P. V. Bayly and S. K. Dutcher.
Steady dynein forces induce flutter instability and propagating waves in mathematical models of flagella.
J. R. Soc. Interface, 13(123):20160523, 2016.
brokaw_flagellar_1972
C. J. Brokaw.
Flagellar Movement: A Sliding Filament Model: An explanation is suggested for the spontaneous propagation of bending waves by flagella.
Science, 178(4060):455–462, 1972.
brokaw_bending_1999
C. J. Brokaw.
Bending patterns of ATP-reactivated sea urchin sperm flagella following high salt extraction for removal of outer dynein arms.
Cell. Motil. Cytoskel., 42(2):125–133, 1999.
brokaw_computer_2014
C. J. Brokaw.
Computer simulation of flagellar movement X: Doublet pair splitting and bend propagation modeled using stochastic dynein kinetics.
Cytoskeleton, 71(4):273–284, 2014.
camalet2000generic
S. Camalet and F. Jülicher.
Generic aspects of axonemal beating.
New J. Phys., 2(1):324, 2000.
gadelha2023
J. Cass and H. Bloomfield-Gadêlha.
The reaction-diffusion basis of animated patterns in eukaryotic flagella.
Nat. Commun., 14, 09 2023.
Costantini_2024
G. Costantini and A. Puglisi.
Thermodynamic precision of a chain of motors: the difference between phase and noise correlation.
J. Stat. Mech.: Theory Exp., 2024(2):024003, feb 2024.
Gaffney2011
E. Gaffney, H. Gadêlha, D. Smith, J. Blake, and J. Kirkman-Brown.
Mammalian sperm motility: Observation and theory.
Annu. Rev. Fluid Mech., 43(1):501–528, 2011.
Gaffney2021
E. A. Gaffney, K. Ishimoto, and B. J. Walker.
Modelling motility: The mathematics of spermatozoa.
Front. Cell Dev. Biol., 9, 2021.
GrayHancock1955
J. Gray and G. J. Hancock.
The propulsion of sea-urchin spermatozoa.
J. Exp. Biol., 32, 1955.
guerin2011dynamical
T. Guérin, J. Prost, and J.-F. Joanny.
Dynamical behavior of molecular motor assemblies in the rigid and crossbridge models.
Eur. Phys. J. E, 34:1–21, 2011.
Howard2022
J. Howard, A. Chasteen, X. Ouyang, V. F. Geyer, and P. Sartori.
Predicting the locations of force-generating dyneins in beating cilia and flagella.
Front. Cell Dev. Biol., 10, 2022.
hu_finite_2018
T. Hu and P. V. Bayly.
Finite element models of flagella with sliding radial spokes and interdoublet links exhibit propagating waves under steady dynein loading.
Cytoskeleton, 75(5):185–200, 2018.
Julicher1997Modeling
F. Jülicher, A. Ajdari, and J. Prost.
Modeling molecular motors.
Rev. Mod. Phys., 69:1269–1282, 1997.
Julicher1995
F. Jülicher and J. Prost.
Cooperative molecular motors.
Phys. Rev. Lett., 75:2618–2621, 1995.
Julicher1997
F. Jülicher and J. Prost.
Spontaneous oscillations of collective molecular motors.
Phys. Rev. Lett., 78:4510–4513, 1997.
julicher_molecular_1998
F. Jülicher and J. Prost.
Molecular Motors: From Individual to Collective Behavior.
Prog. Theor. Phys. Supp., 130:9–16, 1998.
lindemann_many_2021
C. B. Lindemann and K. A. Lesich.
The many modes of flagellar and ciliary beating: Insights from a physical analysis.
Cytoskeleton, 78(2):36–51, 2021.
machin_wave_1958
K. E. Machin.
Wave Propagation along Flagella.
J. Exp. Biol., 35(4):796–806, 1958.
oriola2017nonlinear
D. Oriola, H. Gadêlha, and J. Casademunt.
Nonlinear amplitude dynamics in flagellar beating.
R. Soc. Open Sci., 4(3):160698, 2017.
Purcell
E. M. Purcell.
Life at low Reynolds number.
Am. J. Phys., 45(1):3–11, 01 1977.
sartori2016twist
P. Sartori, V. F. Geyer, J. Howard, and F. Jülicher.
Curvature regulation of the ciliary beat through axonemal twist.
Phys. Rev. E, 94:042426, Oct 2016.
Sartori2016
P. Sartori, V. F. Geyer, A. Scholich, F. Jülicher, and J. Howard.
Dynamic curvature regulation accounts for the symmetric and asymmetric beats of Chlamydomonas flagella.
eLife, 5:e13258, 2016.
Summers
K. E. Summers and I. R. Gibbons.
Adenosine triphosphate-induced sliding of tubules in trypsin-treated flagella of sea-urchin sperm.
Proc. Nat. Acad. Sci. USA, 68(12):3092–3096, 1971.
Taylor1951
G. I. Taylor.
Analysis of the swimming of microscopic organisms.
Proc. R. Soc. Lond. A, 209:447–461, 1951.
Velho2021
M. F. Velho Rodrigues, M. Lisicki, and E. Lauga.
The bank of swimming organisms at the micron scale (boso-micro).
PLoS One, 16(6):1–80, 06 2021.
Walker2020
B. J. Walker, S. Phuyal, K. Ishimoto, C.-K. Tung, and E. A. Gaffney.
Computer-assisted beat-pattern analysis and the flagellar waveforms of bovine spermatozoa.
R. Soc. Open Sci., 7(6), 2020.
Wiggins
S. Wiggins.
Introduction to Applied Nonlinear Dynamical Systems and Chaos.
Springer, Texts in Applied Mathematics, 2003.
woodhams_generation_nodate
L. G. Woodhams, Y. Shen, and P. V. Bayly.
Generation of ciliary beating by steady dynein activity: the effects of inter-filament coupling in multi-filament models.
J. R. Soc. Interface, 19(192):20220264, 2022.
Yagi1995
T. Yagi and R. Kamiya.
Novel mode of hyper-oscillation in the paralyzed axoneme of a chlamydomonas mutant lacking the central-pair microtubules.
Cell Motil., 31(3):207–214, 1995.
|
http://arxiv.org/abs/2409.02158v2 | 20240903180000 | Uniform Modeling of Observed Kilonovae: Implications for Diversity and the Progenitors of Merger-Driven Long Gamma-Ray Bursts | [
"J. C. Rastinejad",
"W. Fong",
"C. D. Kilpatrick",
"M. Nicholl",
"B. D. Metzger"
] | astro-ph.HE | [
"astro-ph.HE"
] |
Jillian Rastinejad
[email protected]
Uniform Modeling of Kilonovae
Rastinejad et al.
0000-0002-9267-6213]J. C. Rastinejad
0000-0002-7374-935X]W. Fong
0000-0002-5740-7747]C. D. Kilpatrick
0000-0002-2555-3192]M. Nicholl
Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast, BT7 1NN, UK
0000-0002-4670-7509]B. D. Metzger
Department of Physics and Columbia Astrophysics Laboratory, Columbia University, Pupin Hall, New York, NY 10027, USA
Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY 10010, USA
§ ABSTRACT
We present uniform modeling of eight kilonovae, five following short gamma-ray bursts (GRBs; including GRB 170817A) and three following long GRBs. We model their broadband afterglows to determine the relative contributions of afterglow and kilonova emission. We fit the kilonovae using a three-component model in MOSFiT that accounts for ejecta geometry, and report population median ejecta masses for the total, blue (κ_ B = 0.5 cm^2 g^-1), purple (κ_ P = 3 cm^2 g^-1), and red (κ_ R = 10 cm^2 g^-1) components. The kilonova of GW 170817 is near the sample median in most derived properties, while the sample as a whole indicates great diversity.
We investigate trends between the ejecta masses and the isotropic-equivalent and beaming-corrected γ-ray energies (E_γ, iso, E_γ), as well as rest-frame durations (T_ 90, rest).
We find long GRB kilonovae have higher median red ejecta masses (M_ ej, R≳ 0.05 M_⊙) compared to on-axis short GRB kilonovae (M_ ej, R≲ 0.02 M_⊙). We also observe a weak scaling between the total and red ejecta masses with E_γ, iso and E_γ, though a larger sample is needed to establish a significant correlation. These findings imply a connection between merger-driven long GRBs and larger tidal dynamical ejecta masses, which may indicate that their progenitors are asymmetric compact object binaries. We produce representative kilonova light curves and find that the planned depths and cadences of the Rubin and Roman Observatory surveys will be sufficient for order-of-magnitude constraints on M_ ej, B (and, for Roman, M_ ej, P and M_ ej, R) of future kilonovae at z ≲ 0.1.
§ INTRODUCTION
On August 17, 2017, the nearly coincident detection of a binary neutron star (BNS) merger through gravitational waves (GWs, GW 170817; ) and a short gamma-ray burst (GRB 170817A; ) confirmed the long-theorized connection between these “multi-messenger” signals. Adding to this long-awaited breakthrough, an optical-near-IR counterpart (“AT 2017gfo”) to the GW and GRB signals was observed <cit.> and strongly resembled theoretical predictions for a kilonova, a thermal transient powered by the radioactive decay of elements beyond the iron-peak formed through rapid neutron capture nucleosynthesis (“r-process”; e.g., ). Indeed, the counterpart's rapid temporal evolution pointed to a relatively low ejecta mass (≲ 0.1 M_⊙), expected for neutron star mergers, while its reddening on ∼day timescales indicated a high (compared to that of iron-group elements) ejecta opacity produced by newly-created heavy elements (e.g., ). Modeling of AT 2017gfo demonstrated that the light curve was well-explained with a two-component model, in which each component is parameterized by a different ejecta mass, velocity, and opacity (e.g., ), providing evidence for multiple emission sources from the merger.
The first definitive kilonova opened a host of new questions and predictions that may only be studied with a wider population of observed events. Rapid progress on the theoretical end of kilonova studies has resulted in models that incorporate additional physics, including updated neutrino schemes and viewing angle dependencies (e.g., ), and a mapping of progenitors, remnants, and novel emission mechanisms to predicted light curve properties (e.g., ). In parallel, GW observations of diverse BNS and NSBH progenitors (e.g., ) and recent kilonovae discoveries following LGRBs <cit.> further motivate expectations for observed kilonova diversity. Looking forward, constraints on light curve diversity are critical for kilonova search strategies with next-generation facilities such as the Rubin Observatory and the Roman Space Telescope (Roman; ).
Despite progress on the theoretical end, constraints on kilonova ejecta parameters from observations other than AT 2017gfo remain limited due to the low rates of detected neutron star mergers (e.g., ). Kilonovae are also relatively faint and fleeting transients, rendering them difficult to observe at distances beyond z≈ 0.1 without rapid response and highly-sensitive telescopes.
Several kilonovae following GRBs have been discovered contemporaneously and in archival data (e.g., ) and, when combined with deep upper limits, reveal a span of ≈100 in optical luminosity (e.g., ). However, the majority of previous fits to observed light curves have been performed by multiple codes with varying assumptions (e.g., ). Thus, direct comparisons of derived parameters (e.g., mass, velocity, composition) between individual events are inadvisable.
Previous uniform modeling of GRB kilonovae used a single-component model <cit.>, prohibiting a search for diversity among emission components. Further, a uniform analysis has not been performed since 2019 <cit.>, thus excluding three recent events (GRBs 200522A, 211211A and 230307A; ). Surprisingly, two of these kilonovae followed long-duration GRBs. The progenitors and/or emission mechanisms driving their longer γ-ray signals remain unknown, with several observationally untested theories (e.g., ). Beyond significantly expanding the sample size, these new events motivate an updated, multi-component modeling endeavor to explore kilonova diversity and compare γ-ray and kilonova properties across the short and long GRB populations.
Here, we perform uniform, multi-component modeling of a sample of eight kilonovae discovered over 2005–2023. We provide compiled multi-wavelength light curves for future fitting efforts with additional models. In Section <ref> we describe our sample selection and provide details for each event. In Section <ref> we detail our modeling of the synchrotron afterglow component to extricate the kilonova emission. In Section <ref> we describe our kilonova modeling procedure and report our results. In Section <ref> we discuss the implications of our results for future work. We assume a cosmology of H_0 = 69.6 km s^-1 Mpc^-1, Ω_M = 0.286, Ω_vac = 0.714 <cit.> and report magnitudes in the AB system throughout this work.
§ OBSERVATIONS
§.§ Sample Selection
We begin with the list of claimed kilonovae compiled in previous literature and covering the full sample of short GRBs discovered in the post-Swift era <cit.>. These include GRBs 050709 <cit.>, 050724A <cit.>, 060614 <cit.>, 070714B <cit.>, 070809 <cit.>, 130603B <cit.>, 150101B <cit.>, 160821B <cit.>, 170817A (GW 170817) and 200522A <cit.>. Though GRBs 080503 and 150424A have been claimed as kilonova candidates <cit.>, we do not include these events as their redshifts, and thus their kilonova luminosities, remain uncertain <cit.>. In addition, we also consider the more recent kilonova candidates GRB 211211A <cit.> and GRB 230307A <cit.>, both of which were identified following recent merger-driven long GRBs[Though we note there are alternate explanations for the red excess following these events (e.g., <cit.>).]
We limit our sample to events with sufficient observations (typically, multiple X-ray detections, several optical-near-IR detections past 2 days and at least one radio observation) for robust afterglow and multi-component kilonova modeling. To evaluate the availability of X-ray and radio observations, we use the catalog of <cit.>, supplemented with data from the Gamma-ray Circular Notices (GCNs) and the literature <cit.>. We restrict our sample to events with two or more optical-near-IR observations past 2 days for two reasons. First, this is the timescale on which we roughly expect the kilonova to significantly contribute to the observed flux. Second, multiple observations allow us to assess fading, which in turn constrains the ejecta mass and velocity, and/or color, giving insight to the relative contribution from different components (e.g., ). To evaluate available optical and near-IR data for each burst we consider data collected and published in short GRB kilonova compilations <cit.> for bursts prior to 2021 and the literature for those from 2021-2023. We do not include GRBs 050724A, 070714B and 150101B in our sample as they each have only a single optical detection past 2 days. We further remove GRB 070809 from our sample as it was not observed in the radio and no optical data is available past ≈ 1.5 days.
We are left with eight events that meet our criteria: GRBs 050709, 060614, 130603B, 160821B, 170817A (GW 170817), 200522A, 211211A, and 230307A. The redshift range of these events is relatively low for short GRBs (z = 0.008 - 0.554; ), but greater than the expected horizon for GW-detected compact object mergers in O4 and O5 <cit.>. We describe each burst and the dataset used in our analysis in the following sections. We list the main properties of each burst in Table <ref>, including the γ-ray durations, GRB fluence (f_γ), isotropic-equivalent γ-ray energy (E_γ, iso; calculated using the method of ), and galaxy type <cit.>. For Swift GRBs we collect values for f_γ (15-350 keV) and the durations over which 50% and 90% of the gamma-ray fluence was detected (T_50 and T_90) from the Swift-BAT catalog[<https://swift.gsfc.nasa.gov/results/batgrbcat/>] <cit.>. For GRBs detected with other satellites we collect values from the GCNs and literature <cit.>. In the following sub-sections, we summarize the burst discovery for each event in our sample and the source of data used in our analysis.
Table 1. GRB Sample & Properties
GRB | z | A_V, MW^1 (mag) | γ-ray Tel. | T_ 90 (s) | T_ 50 (s) | f_γ (10^-6 erg cm^-2) | E_γ, iso, 52 (10^52 erg) | Host Type^2
050709 0.161 0.030 HETE-II 0.07 - 1.0 0.031 SF
060614 0.125 0.059 Swift 109 ±3 43.2 ±0.8 28.0 ±4 0.51 ±0.07 SF
130603B 0.356 0.063 Swift 0.18 ±0.02 0.06 ±0.004 1.8 ±0.1 0.29 ±0.02 SF
160821B 0.162 0.123 Fermi ≈1 - 1.7 ±0.2 0.011 ±0.001 SF
Swift 0.48 ±0.07 0.28 ±0.05 0.17 ±0.03 0.005 ±0.001 -
170817A 0.008 0.338 Fermi^* 2.0 ±0.5^* - 28 ±2^* 0.0006 ±0.00004^* Q
200522A 0.554 0.071 Swift 0.62 ±0.08 0.38 ±0.05 0.20 ±0.04 0.081 ±0.02 SF
211211A 0.076 0.048 Fermi 34.3 ±0.6 - 507 ±10 0.68 ±0.01 SF
Swift 50.7 ±0.9 21.2 ±0.2 255 ±4 1.72 ±0.03 -
230307A 0.065 0.239 Fermi ≈35 - 2950 2.9 SF
^1Milky Way extinction values taken from <cit.>.
^2Host galaxy type (star-forming, SF, or quiescent, Q) taken from the uniformly modeled sample of <cit.> with the exception of GRB 2303037A for which we use the analysis of <cit.>.
^*We include this information for context on GRB 170817A. However, as the event's γ-ray emission source is debated and likely distinct from typical on-axis events (e.g., ), we do not use this value in our analysis.
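For reference, the E_γ, iso values in Table 1 follow from the fluence and redshift via E_γ, iso = 4π d_L^2 f_γ/(1+z). A minimal Python sketch with the cosmology adopted in this work is below; the band-to-band k-correction applied in the full calculation is omitted here.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=69.6, Om0=0.286)   # cosmology of Section 1

def e_gamma_iso(fluence, z):
    """Isotropic-equivalent gamma-ray energy [erg].

    fluence : observed gamma-ray fluence in erg cm^-2 (no k-correction,
        unlike the full method used for Table 1).
    """
    d_L = cosmo.luminosity_distance(z).to(u.cm)
    return (4 * np.pi * d_L**2 * fluence * u.erg / u.cm**2 / (1 + z)).to(u.erg)

# Example: GRB 211211A with the Fermi fluence from Table 1
print(e_gamma_iso(507e-6, 0.076))   # ~6.8e51 erg, matching the table
```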
§.§ GRB 050709
GRB 050709 was detected by the High Energy Transient Explorer II (HETE-II; ) at 22:36:37 UT on 2005 July 9, with a T_90=0.07 s <cit.>. Follow-up observations with the Chandra X-ray Observatory (Chandra) revealed a new X-ray source within the HETE-II localization <cit.>. Follow-up observations by the Swift X-Ray Telescope (XRT; ) and Chandra confirmed this counterpart as fading and thus, likely related to GRB 050709 <cit.>. At the position of the X-ray source, an optical counterpart was detected, embedded in a star-forming galaxy at z=0.161 <cit.>. The Very Large Array (VLA) observed the position of the X-ray counterpart over four epochs, but did not detect any significant emission <cit.>.
Imaging with the Hubble Space Telescope (HST) revealed long-lived emission in the F814W band, which has been attributed to a kilonova by numerous groups in the literature <cit.>. We combine the X-ray, radio <cit.> and optical-near-IR <cit.> datasets for our analysis. We present all observations used in our analysis in Table <ref>.
§.§ GRB 060614
GRB 060614 was discovered by the Swift Burst Alert Telescope (BAT; ) on 2006 June 14 at 12:43:48.5 UT with a T_90 = 109 ± 3 s <cit.>. Prompt follow-up by Swift-XRT and the Swift Ultra-Violet Optical Telescope (UVOT; ) revealed bright counterparts to the burst <cit.>. The early ultraviolet through optical afterglow spectral energy distribution (SED) provides evidence for low line-of-sight extinction and a z < 1.3 origin <cit.>. Subsequent follow-up by ground-based observatories revealed an optical counterpart with a spectroscopic redshift of z=0.125 <cit.> on the outskirts of a star-forming galaxy at the same distance. The optical counterpart was monitored to late times with HST <cit.>.
Numerous deep imaging observations and spectra, extending out to ≈ 65 days, failed to reveal the expected counterpart to a long-duration GRB, a SN Ic-BL <cit.>. This fact, combined with its intriguing gamma-ray properties, motivated a later analysis showing that the optical light curve reddened at later times, suggesting instead the presence of a kilonova <cit.>.
For our analysis, we use the XRT and UVOT light curves (though we do not use UVOT detections past 10 days as the exposures are on ≳ day timescales; ), ATCA radio upper limits <cit.> and combine the optical datasets in the literature <cit.>.
§.§ GRB 130603B
GRB 130603B was detected by Swift-BAT and the Konus-Wind Observatory <cit.> on 2013 June 13 at 15:49:14 UT with T_90 = 0.18 ± 0.02 s <cit.>. Prompt Swift-XRT observations revealed an X-ray counterpart <cit.>. Rapid optical follow-up revealed the optical afterglow within the X-ray localization <cit.>. Spectroscopy of the afterglow identified a GRB origin of z=0.356 <cit.>. Additional multi-wavelength follow-up detected a radio counterpart and a well-sampled optical and X-ray afterglow (e.g., ). Later observations with HST revealed that the optical counterpart had significantly reddened, resulting in a bright near-IR detection and deep limits in the optical bands. This represented the first bona-fide claim of an r-process-enriched kilonova <cit.>.
The kilonova detection of GRB 130603B has been modeled by numerous groups in the literature, though with only a single detection, precise ejecta parameters remain uncertain (e.g., ). We employ the multi-wavelength dataset compiled in <cit.> and combine the optical through near-IR datasets published in the literature <cit.>.
§.§ GRB 160821B
GRB 160821B was detected by the Fermi Space Telescope Gamma-ray Burst Monitor (GBM; ) and the Swift-BAT on 2016 August 21 at 22:29:13 UT. It was classified as a short GRB with T_90 = 0.48 ± 0.07 s (Swift; ) and T_90≈ 1 s (Fermi; ). Swift-XRT quickly localized a counterpart <cit.>. UVOT observed the location but did not detect a counterpart <cit.>. Prompt follow-up identified optical and radio counterparts on the outskirts of a bright galaxy at z=0.1616 <cit.>, motivating further multi-color follow-up by HST and ground observatories (e.g., ).
The afterglow and kilonova of GRB 160821B have been analyzed by multiple groups in the literature (e.g., ), and show evidence for an early reverse shock (e.g., ) and a bluer kilonova relative to AT 2017gfo (e.g., ). For our analysis, we collect Swift and XMM-Newton <cit.> and VLA radio observations <cit.> from the literature. We combine the optical-NIR datasets <cit.> and report where we draw specific measurements from in Table <ref>.
§.§ GRB 170817A/GW 170817
GRB 170817A was detected by the Fermi-GBM and INTernational Gamma-ray Astrophysics Laboratory (INTEGRAL; ) on 2017 August 17 at 12:41:06 UT, 1.7 s after the LVK-detected BNS merger GW 170817 <cit.>. The gamma-ray duration was T_90 = 2.0 ± 0.5 s (50–300 keV; Fermi; ). An optical counterpart (AT 2017gfo) was first found δ t = 0.452 days following the event discovery <cit.>. The community observed the transient in exquisite temporal and color detail, revealing a fast-fading, reddening source with spectroscopic evidence of the radioactive decay of lanthanide elements (e.g., ).
We utilize the optical-NIR dataset compiled in with observations from the literature <cit.> extending out to ≈ 25 days. Specifically, we employ the set of observations used in their analysis, as there are known inconsistencies amongst the full dataset <cit.>. We do not collect multi-wavelength data to model this component as it is well-established that the off-axis afterglow did not contaminate observations on the timescales of the kilonova (δ t ≲ 30 days; e.g., ).
§.§ GRB 200522A
GRB 200522A was discovered on 2020 May 22 at 11:41:34 UT by Swift-BAT with T_90 = 0.62 ± 0.08 s <cit.>. A prompt XRT counterpart location was reported <cit.>, within which a catalogued galaxy with photometric redshift z_ phot = 0.4 ± 0.1 was noted <cit.>. Subsequent observations identified a radio counterpart <cit.> and secured the host redshift to z = 0.554 <cit.>. Follow-up by HST and ground observatories uncovered a fading optical-near-IR counterpart embedded in the host galaxy <cit.>.
We draw the Swift-XRT observations from the United Kingdom Swift Science Data Centre (UKSSDC; ) and incorporate Chandra and VLA observations <cit.>. We combine the optical-near-IR data in the literature <cit.>.
§.§ GRB 211211A
GRB 211211A was detected on 11 December 2021 at 13:09:59 UT by Swift-BAT, Fermi-GBM, INTEGRAL, and the CALET Gamma-ray Burst Monitor <cit.>. It was reported as a bright, long-duration GRB with T_90 = 50.7 ± 0.9 s (Swift) and T_90 = 34.3 ± 0.6 s (Fermi; ). Swift-XRT and UVOT promptly identified counterparts to the burst <cit.>. Notably, the detection of the afterglow in the UVW2 filter limits the event origin to z<1.4 <cit.>. An early optical counterpart was discovered proximate to galaxy SDSS J140910.47+275320.8 <cit.>. Later spectroscopy revealed a featureless afterglow <cit.> and a galaxy redshift of z=0.0763, rendering it one of the most nearby GRBs observed across all durations <cit.>.
Motivated by the low redshift of the putative host galaxy, the counterpart was followed in the optical and near-IR. These observations revealed a fast-fading, red transient with similar luminosities and behavior to AT 2017gfo <cit.>. Later (δ t ≈2-3 weeks), deep optical upper limits revealed no sign of a supernova counterpart to a luminosity lower than that of any known GRB-SN <cit.>, further motivating an interpretation of the red excess as a kilonova. The kilonova of GRB 211211A has been modeled in the literature by numerous groups (e.g., ). For our analysis, we use Swift-XRT observations from UKSSDC, XMM-Newton upper limits <cit.> and a VLA radio observation <cit.>. We use the Swift-UVOT and optical-near-IR datasets from , and add early optical data from .
§.§ GRB 230307A
GRB 230307A was detected on 7 March 2023 15:44:06.67 UT by Fermi-GBM with a duration of ≈ 35 s <cit.>. The burst was also observed by GECAM, the InterPlanetary Network (IPN), AGILE, AstroSAT, and GRBAlpha <cit.>, and was quickly noted as the second-brightest GRB seen by Fermi to date <cit.>. The ULTRACAM instrument mounted on the 3.5 m New Technology Telescope (NTT) and Swift-XRT undertook wide-field searches for an optical and X-ray counterpart, respectively, discovering coincident candidates at δ t = 1.4-1.7 days <cit.>. The counterpart was offset 30.2” (38.9 kpc) from a bright spiral galaxy confirmed at z=0.0646 <cit.>, and an extensive multi-wavelength follow-up campaign was initiated.
Despite GRB 230307A's nominal long duration, its high-energy properties, including its spectral lag and X-ray flux decay, provided evidence for a compact object merger origin similar to GRB 211211A. Near-IR follow-up from ground-based observatories and JWST revealed a late-time red excess, light curve shape, and spectral features expected for a kilonova (e.g., ). We combine the multi-wavelength datasets presented in the literature, which includes Chandra, Swift, and XMM X-ray observations, broad coverage by ground observatories, late-time HST and JWST observations, and ATCA and AMI-LA radio observations <cit.> for our analysis.
§ AFTERGLOW MODELING
In addition to a radioactive decay-powered kilonova, BNS mergers are expected to launch a relativistic jet whose interaction with the surrounding medium produces broadband synchrotron emission, or the “afterglow” (e.g., ). The afterglow flux is a potential contaminant for modeling kilonovae, especially for on-axis events in which the afterglow often dominates the total luminosity at early times, motivating us to model the afterglows of each event in our sample, with the exception of GW 170817 (see Section <ref>). Our afterglow model uses the formulae of <cit.> and methods described in <cit.> to describe synchrotron emission from a forward shock (FS), produced by the interaction of the GRB's collimated jet and the surrounding medium, incorporating the effects of Inverse Compton cooling <cit.>.
The parameters fit in this afterglow model are the jet isotropic-equivalent energy (E_ K,iso), the circumburst density of the surrounding medium (n_0), the input electron distribution power law index (p), the fraction of energy deposited into non-thermal relativistic electrons (ϵ_ e), the line-of-sight extinction in B-band (A_ B; fixed to 0 in our fits except for GRB 130603B, see below), and the time of the jet break (t_ jet), which is directly related to the jet opening angle <cit.>. For each event we assume a radially-homogeneous ISM-like environment (k=0), as this is expected in the local environments for most NS mergers. We fix the energy deposited in magnetic fields (ϵ_ B) to 0.01, the median for short GRBs <cit.>, as the model fails to converge to a reasonable solution when leaving this parameter free. We fit the afterglow model using the Markov Chain Monte Carlo (MCMC) package <cit.>, enforcing a minimum 10% uncertainty on all detections to capture realistic measurement errors. We run each fit using 128 walkers for 5000 iterations and discard the first 10% of steps as burn-in. For each event, we employ the redshifts and extinction values listed in Table <ref> and use values from the literature as starting parameters.
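A compact sketch of this fitting setup is shown below. The forward-shock spectra of the actual fits are replaced by a placeholder smoothly-broken power law (absorbing E_ K,iso, n_0 and ϵ_ e into a single normalization), so only the sampler configuration — 128 walkers, 5000 steps, 10% burn-in, and the 10% error floor — mirrors the text; the synthetic data are purely illustrative.

```python
import numpy as np
import emcee

def afterglow_flux(theta, t, nu):
    """Placeholder smoothly-broken power law standing in for the full
    forward-shock model (pre-/post-jet-break decay, spectral slope (p-1)/2)."""
    log_norm, p, log_tjet = theta
    alpha1, alpha2 = 3 * (p - 1) / 4, p
    tb = 10 ** log_tjet
    return (10 ** log_norm * (nu / 1e17) ** (-(p - 1) / 2)
            * (t / tb) ** (-alpha1) / (1 + (t / tb) ** (alpha2 - alpha1)))

def log_posterior(theta, t, nu, f, ferr):
    log_norm, p, log_tjet = theta
    if not (-6 < log_norm < 2 and 2.0 < p < 3.0 and -1 < log_tjet < 2):
        return -np.inf                       # flat priors
    model = afterglow_flux(theta, t, nu)
    return -0.5 * np.sum(((f - model) / ferr) ** 2)

rng = np.random.default_rng(42)
t = np.geomspace(0.1, 30.0, 15)              # days
nu = np.full_like(t, 2.4e17)                 # ~1 keV X-ray band [Hz]
truth = (-3.0, 2.3, 0.7)
f_true = afterglow_flux(truth, t, nu)
reported_err = 0.05 * f_true                 # nominal measurement errors
ferr = np.maximum(reported_err, 0.10 * f_true)   # minimum 10% uncertainty
f_obs = f_true + rng.normal(0.0, ferr)

nwalkers, ndim, nsteps = 128, 3, 5000        # settings quoted in the text
p0 = np.array(truth) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
                                args=(t, nu, f_obs, ferr))
sampler.run_mcmc(p0, nsteps)
chain = sampler.get_chain(discard=nsteps // 10, flat=True)   # 10% burn-in
lo, med, hi = np.percentile(chain[:, 1], [16, 50, 84])
print(f"p = {med:.2f} (+{hi - med:.2f}/-{med - lo:.2f})")
```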
As the kilonova and afterglow both contribute flux in the optical-to near-IR wavelengths, disentangling emission between the two can be difficult, especially at early times. In particular, observations of AT 2017gfo demonstrated that kilonovae may have early (δ t ≲ 1 day) blue emission due to large quantities of fast-moving lanthanide-poor ejecta or, potentially, additional energy sources such as free neutron decay (e.g., ) or shock-heating (e.g., ). Thus, we attempt to remain agnostic to the precise afterglow contribution in the optical and mask data in the range 10^13 - 10^16 Hz (effectively, fitting only the radio and X-ray observations) in our fits to GRBs 160821B, 200522A, 211211A and 230307A. For GRB 230307A, we find this method significantly underestimates the flux of TESS optical observations at 0.01 ≲δ t ≲ 0.2 days (Figure <ref>; “Model 1”) while a fit including the TESS observations provides a worse fit to the X-ray and radio data (“Model 2”). We perform our analysis on the results from both models due to the uncertainty in the emission source. For GRB 160821B, we exclude the radio detection at δ t = 0.17 days as it is likely the result of an early reverse shock <cit.>, and incompatible with the standard FS model. We do not expect the reverse shock to significantly contaminate the optical flux on the timescales of observations we use to model the kilonova (δ t > 0.95 days).
For the remaining three events we include some early optical data in our fit either due to sparse X-ray and radio detections (GRBs 050709, 060614) or high line-of-sight extinction that will significantly affect our estimation of the afterglow flux contribution (GRB 130603B; ). While it is possible that some kilonova flux may be contributing in these optical detections, previous works have shown that these points can be explained with an afterglow model only (e.g., ). Specifically, for GRB 050709, we include two early (δ t < 2.4 days) R-band optical observations. For GRB 060614, we include early RIJK-band data (δ t = 0.7 - 2 days) in our afterglow fit. We exclude X-ray data of GRB 060614 prior to δ t = 0.5 days as numerous analyses favor an energy injection scenario to explain the afterglow plateau observed at δ t ≲ 0.5 day that is incompatible with the standard afterglow model (Figure <ref>; e.g., ). We do not expect the exclusion of the energy injection episode in GRB 060614 to affect our kilonova modeling as the FS is expected to dominate on the timescales and in the filters we use to model the kilonovae. Finally, for GRB 130603B we include early (δ t ≲ day) optical-near-IR detections to measure the line-of-sight dust, A_B, which we propagate to the output models we use for subtraction. We also exclude the final XMM-Newton observation of GRB 130603B as it is known to be contaminated by an unrelated X-ray source <cit.>. We denote which observations were used in the afterglow fitting in Table <ref>.
For all events, we visually inspect the optical-near-IR afterglow models to ensure they are not brighter than measured values beyond the uncertainties. In general, we find that our derived afterglow physical parameters are consistent with those in the literature, though in some cases we find inconsistent values (typically, in the degenerate parameters E_ K,iso and n_0). Variations are likely the result of discrepancies between modeling codes or the fact that we primarily use X-ray and radio data only; however the inferred afterglow parameters are not used in any subsequent analysis[With the exception of the jet break in GRB 230307A, which is incorporated in Section <ref>.], so the specific values are not important for this work. We create model light curves at the same rest-frame wavelengths as the optical-near-IR observations for 1000 random draws from the full posterior of each event. From these 1000 draws we calculate the median and 68% credible flux range, using the 68% flux range as our uncertainty on the afterglow model. We show our fits to the multi-wavelength data, along with their uncertainties, in Figure <ref>. Following interpolation of the afterglow light curves to the δ t of each observation, we subtract the median afterglow model flux from each observed flux, producing an “afterglow-subtracted” light curve. We combine the data uncertainty and model uncertainty at the time of the observation in quadrature. We present our afterglow-subtracted values and errors in Table <ref>.
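The subtraction step can be summarized in a few lines. The sketch below assumes model light curves evaluated from posterior draws on a common time grid (the function names and the log-log interpolation choice are ours, not from a specific package):

```python
import numpy as np

def subtract_afterglow(t_obs, f_obs, err_obs, t_model, model_draws):
    """Subtract the median afterglow model from each observation.

    model_draws : array of shape (n_draws, len(t_model)), model fluxes
        from posterior draws. Returns the residual (kilonova) flux and an
        uncertainty combining measurement error and the half-width of the
        68% model range in quadrature.
    """
    med = np.median(model_draws, axis=0)
    lo, hi = np.percentile(model_draws, [16, 84], axis=0)
    # interpolate in log-log space, since light curves are near power laws
    at_obs = lambda y: 10 ** np.interp(np.log10(t_obs),
                                       np.log10(t_model), np.log10(y))
    kn_flux = f_obs - at_obs(med)
    kn_err = np.hypot(err_obs, 0.5 * (at_obs(hi) - at_obs(lo)))
    return kn_flux, kn_err

# Toy usage: power-law afterglow draws, two epochs of photometry
t_model = np.geomspace(0.1, 50.0, 100)
amps = np.random.default_rng(1).normal(1.0, 0.1, 1000)
draws = amps[:, None] * t_model[None, :] ** -1.1
print(subtract_afterglow(np.array([1.0, 5.0]), np.array([1.2, 0.40]),
                         np.array([0.05, 0.04]), t_model, draws))
```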
§ KILONOVA MODELING
Table 2. Kilonova Model Parameters & Priors
Parameter^† | Units | Prior | Min. | Max.
M_ ej (B, R, P) M_⊙ Log-Uniform 0.001 0.5
V_ ej (B, R, P) c Uniform 0.03 0.3
T_ floor (B, R, P) K Log-Uniform 1000^* 4000
cos(θ_ open) – Uniform 0.5 0.866
σ – Log-Uniform 0.001 100
^†“B”, “P”, and “R” refer to the blue, purple and red ejecta components, as described in Section <ref>. Each components' parameters are modeled separately with different κ values but have the same priors, minima, and maxima.
^*For GRB 230307A only we allow this minimum to extend to T_ floor, P = 800 K and T_ floor, R = 440 K due to the temporal and wavelength coverage of the kilonova (Section <ref>).
§.§ Description of Kilonova Model
We employ the Python-based Modular Open Source Fitter for Transients (MOSFiT) code <cit.> to fit kilonova models to the afterglow-subtracted data, for which the kilonova contribution has nominally been isolated. From this modeling, we derive physical parameters describing the observed kilonova emission, which we parameterize in terms of ejecta mass (M_ ej), velocity (V_ ej) and temperature cooling floor (T_ floor). The latter is the temperature below which the photosphere recedes into the ejecta. Hence, T_ floor represents an effective emission temperature for the optically-thin nebular phase to follow. We elect to use MOSFiT as its modular design affords us the flexibility to build new modules, freeze or adjust parameters and their priors, add constraints, and test several samplers without a high computational cost. In addition, MOSFiT is a well-tested method that has been used to fit or determine upper ejecta mass limits of several past kilonovae (e.g., ). All of the results presented in this section were performed with nested sampling, as implemented in the fitting routine of <cit.>.
We begin with MOSFiT's existing three-component (blue, purple and red; see below) kilonova model <cit.>, which assumes analytic forms for the radioactive heating rate <cit.> and the thermalization efficiency <cit.>, and uses the <cit.> formalism to calculate bolometric light curves. Here, each of the three components is parameterized by a constant “grey” ejecta opacity (κ), the value of which correlates with the lanthanide or electron fraction of a portion of the total ejecta. Kilonova models comprised of two or three components were found to provide a better fit to the well-sampled AT 2017gfo compared to single-component models <cit.>. They are also physically motivated by simulations that predict multiple ejecta mechanisms with distinct elemental compositions prior to and following the NS merger (e.g., ).
For our three-component model we employ κ_ R = 10 cm^2 g^-1, κ_ P = 3 cm^2 g^-1, and κ_ B = 0.5 cm^2 g^-1 for the “red”, “purple” and “blue” components, respectively <cit.>. We expect these components to roughly map to the red (lanthanide-rich or Y_ e≲ 0.2) tidal dynamical ejecta (e.g., ), the purple (moderately lanthanide-rich or 0.2 ≲ Y_ e≲ 0.3) disk wind ejecta (e.g., ), and the blue (lanthanide-poor or Y_ e≳ 0.3) dynamical ejecta shocked at the NS contact interface and ejected near the poles (e.g., ) or ejected in a magnetized wind from the neutron star remnant prior to black hole formation (e.g., ). For each component in our model, we measure M_ ej, V_ ej and T_ floor (Table <ref>).
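To make the structure of this model concrete, the sketch below computes the bolometric light curve of a single grey-opacity component via the one-zone (Arnett-type) diffusion integral, with a Korobkin-style radioactive heating rate (∝ t^-1.3 at late times) and a Barnes-style thermalization efficiency with representative coefficients. It is a simplified stand-in, not the model itself: the full model sums three such components, computes band magnitudes through a photospheric prescription, and adds the geometry described next.

```python
import numpy as np

DAY, C, MSUN = 86400.0, 2.998e10, 1.989e33   # cgs

def kilonova_lbol(t_day, m_ej=0.02, v_ej=0.15, kappa=3.0):
    """Bolometric light curve [erg/s] of one grey-opacity component.

    Valid while (t/t_d)^2 stays moderate; for very late times the
    exponentials below should be rescaled to avoid overflow.
    """
    m, v = m_ej * MSUN, v_ej * C
    t_d = np.sqrt(2 * kappa * m / (13.8 * C * v))    # diffusion time [s]

    def heating(ts):   # erg/s/g, Korobkin-style (t0 = 1.3 s, sigma = 0.11 s)
        return 4e18 * (0.5 - np.arctan((ts - 1.3) / 0.11) / np.pi) ** 1.3

    def therm_eff(ts):  # Barnes-style analytic fit, fiducial coefficients
        a, b, d = 0.56, 0.17, 0.74
        td = ts / DAY
        arg = 2 * b * td ** d
        return 0.36 * (np.exp(-a * td) + np.log1p(arg) / arg)

    t = np.atleast_1d(np.asarray(t_day, dtype=float)) * DAY
    lum = np.empty(t.size)
    for i, ti in enumerate(t):               # Arnett diffusion integral
        tp = np.linspace(1.0, ti, 4000)
        integrand = (heating(tp) * therm_eff(tp) * m
                     * np.exp((tp / t_d) ** 2) * tp)
        integral = 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(tp))
        lum[i] = (2 / t_d**2) * np.exp(-(ti / t_d) ** 2) * integral
    return lum

print(kilonova_lbol(np.array([0.5, 1, 2, 4, 8, 16])))  # peaks near ~1e41
```

Evaluating one such component per opacity (κ_ B, κ_ P, κ_ R) and summing the geometry-weighted outputs gives the shape of the three-component light curves fit here.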
The geometry of the mechanism producing each ejecta component and the viewing angle of the observer are known to significantly impact the observed light curve (e.g., ), and thus any parameter inference. Of particular relevance to this work, the assumption of an isotropic kilonova will likely introduce a bias in estimating the mass of material ejected along the line of sight. Thus, under this assumption, GRB events observed along the jet axis likely have blue and red ejecta components that are overestimated and underestimated, respectively. Instead, here we account for the geometry of the ejecta by modifying the original spherical kilonova model <cit.> to an aspherical model, wherein a half-opening angle (θ_ open) defines a conical boundary between red ejecta, confined to the equatorial region, and the blue and purple ejecta, modeled in the direction of the poles. We use the half-opening angle prescription of <cit.>, as implemented in MOSFiT by <cit.>. The viewing angle (θ_ obs; defined relative to the axis of the GRB jet) of each event in our sample is well-established, either as relatively pole-on due to the detection of a cosmological GRB or measured through high-precision astrometry in the case of AT 2017gfo <cit.>. Thus, we fix θ_ obs = 22° (the central value in the range given) for AT 2017gfo <cit.> and θ_ obs = 0° for all other events in our sample. We allow the kilonova ejecta half-opening angle to be a free parameter and include an additional parameter (σ) to account for white noise in the likelihood function. These, in addition to M_ ej, V_ ej and T_ floor for each of the three components, comprise the 11 parameters measured for each kilonova.
We acknowledge that the number of free parameters exceeds the number of data points for some events in our sample. We perform several test fits fixing each component's T_ floor and θ_ open, finding consistent masses compared to runs where these parameters are left free. In the end, we opt to keep these parameters free in order to marginalize over their uncertainties when measuring masses. In these cases we find very broad posteriors on the masses (e.g., Figure <ref> and Table <ref>), but are still able to constrain them relative to the broad uniform prior.
Table 3. Kilonova Model Posteriors
GRB | cos(θ_ open) | M_ ej, B (M_⊙) | M_ ej, P (M_⊙) | M_ ej, R (M_⊙) | V_ ej, B (c) | V_ ej, P (c) | V_ ej, R (c) | T_ floor, B (K) | T_ floor, P (K) | T_ floor, R (K)
050709 0.68^+0.13_-0.12 0.003^+0.004_-0.002 0.029^+0.010_-0.009 0.015^+0.072_-0.012 0.18^+0.08_-0.09 0.07^+0.05_-0.03 0.16^+0.09_-0.09 2040^+1183_-775 2807^+833_-1322 1904^+1146_-662
060614 0.73^+0.10_-0.18 0.014^+0.013_-0.010 0.029^+0.034_-0.027 0.146^+0.072_-0.120 0.19^+0.06_-0.07 0.15^+0.09_-0.09 0.22^+0.05_-0.07 2546^+1059_-1157 2743^+675_-1280 3030^+513_-786
130603B 0.68^+0.13_-0.12 0.006^+0.017_-0.004 0.075^+0.063_-0.037 0.023^+0.160_-0.020 0.14^+0.11_-0.08 0.11^+0.12_-0.05 0.17^+0.09_-0.09 1990^+1185_-737 2085^+1145_-793 1982^+1197_-711
160821B 0.63^+0.14_-0.09 0.003^+0.001_-0.001 0.011^+0.002_-0.002 0.011^+0.021_-0.009 0.14^+0.06_-0.05 0.12^+0.06_-0.03 0.15^+0.09_-0.08 2038^+1201_-759 3792^+156_-1067 2069^+1226_-769
170817A 0.86^+0.00_-0.00 0.004^+0.000_-0.000 0.019^+0.001_-0.001 0.052^+0.003_-0.002 0.15^+0.01_-0.01 0.15^+0.01_-0.01 0.20^+0.01_-0.01 1745^+239_-261 3152^+30_-29 1004^+6_-3
200522A 0.67^+0.13_-0.12 0.046^+0.009_-0.009 0.020^+0.068_-0.018 0.019^+0.124_-0.016 0.23^+0.04_-0.06 0.18^+0.08_-0.09 0.16^+0.09_-0.09 1969^+1183_-725 1938^+1173_-694 2043^+1150_-756
211211A 0.85^+0.01_-0.02 0.007^+0.000_-0.001 0.010^+0.001_-0.002 0.130^+0.051_-0.054 0.28^+0.01_-0.02 0.27^+0.02_-0.04 0.27^+0.02_-0.04 1800^+549_-427 2048^+463_-554 1817^+304_-623
230307A^† 0.82^+0.03_-0.10 0.012^+0.001_-0.001 0.025^+0.003_-0.003 0.054^+0.019_-0.022 0.20^+0.02_-0.02 0.09^+0.01_-0.01 0.20^+0.05_-0.06 1655^+229_-250 830^+42_-22 464^+46_-18
230307A^‡ 0.54^+0.09_-0.03 0.008^+0.001_-0.001 0.037^+0.004_-0.005 0.003^+0.009_-0.002 0.19^+0.03_-0.02 0.11^+0.01_-0.01 0.19^+0.07_-0.09 3718^+201_-357 831^+45_-23 1626^+1635_-1033
“B”, “P”, and “R” subscripts refer to the blue, purple and red ejecta components, as described in Section <ref>.
^†Afterglow model 1 fit to X-ray and radio data only.
^‡Afterglow model 2 fit to X-ray, radio and TESS data.
We list the parameters and priors used in our kilonova model in Table <ref>. For M_ ej and V_ ej we elect to use the widest priors possible that correspond to physical values based on simulations (e.g., ). For T_ floor we use the range 1000-4000 K following the reasoning of <cit.> based on previous fits to AT 2017gfo with MOSFiT <cit.>. For GRB 230307A, JWST detections at δ t ≈ 29 and 61 days likely occur at epochs when nebular emission is either significantly contributing or dominating the observed flux. As kilonova nebular emission remains a challenge to properly model, we do not include the observations at δ t ≈ 61 days in our kilonova fit. To accommodate the observations at δ t ≈ 29 days and the mid-IR coverage of the JWST detections, we allow the T_ floor, P and T_ floor, R prior ranges to extend down to 800 and 440 K, respectively. We base our choice of T_ floor, R = 440 K on the blackbody fit to synthetic nebular lanthanide-rich kilonova spectra and Spitzer detections of AT 2017gfo <cit.>. We choose an intermediate value of T_ floor, P = 800 K as we expect the blackbody temperature of moderately lanthanide-rich ejecta to fall between that of lanthanide-poor and lanthanide-rich ejecta.
We fit the eight afterglow-subtracted optical-near-IR light curves at δ t > 0.5 days (Table <ref>) with the three-component kilonova model presented above. We do not include data prior to 0.5 days in our kilonova fits as these timescales may be affected by emission mechanisms beyond radioactive decay, including energy injection in the afterglow (GRB 060614; ), central engine activity (e.g., GRB 211211A; ), or shock cooling (e.g., ). For our fits, we include only data points whose combined statistical and systematic (from afterglow fitting; Section <ref>) errors are < 0.5 mag and treat all observations with combined errors > 0.5 mag as upper limits. In general, this results in detections prior to ≈1 day being treated as upper limits (e.g., Figure <ref> and Table <ref>). For GRB 050709, we employ a threshold of < 0.6 mag, as we find the inclusion of the two additional points provides a significantly better fit. We present the median and 68% confidence range for each parameter measured in our kilonova modeling in Table <ref>.
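The detection/upper-limit split used in these fits amounts to a simple cut on the combined uncertainty, sketched below (array values are illustrative):

```python
import numpy as np

def select_for_fit(mag, stat_err, model_err, threshold=0.5):
    """Split afterglow-subtracted photometry into fit detections and
    upper limits: combined (quadrature) error < 0.5 mag -> detection."""
    total_err = np.hypot(stat_err, model_err)
    det = total_err < threshold
    return det, total_err

mag = np.array([21.3, 22.8, 24.1, 25.0])
stat = np.array([0.10, 0.15, 0.30, 0.40])
model = np.array([0.05, 0.20, 0.45, 0.60])
det, err = select_for_fit(mag, stat, model)
print("fit as detections:", mag[det], "| treated as limits:", mag[~det])
```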
§.§ Results and Observed M_ ej Diversity
In Figure <ref> we plot median and 68% confidence range model light curves, constructed from 900 random draws of the full posterior of each GRB (with the exception of GRB 170817A for which we use 50 random draws due to computational constraints). Overall, we find that the models provide reasonable fits to the data and follow the predicted behavior of kilonovae: rapid decay, especially in bluer bands, and reddening over time. As expected, events with better observational coverage correspond to tighter constraints on the light curves. For AT 2017gfo, we find that our model provides a good fit to the optical data and the majority of the near-IR observations, but overpredicts the late-time (δ t ≳ 20 day) K-band observations, a feature also noted in previous fits to this event <cit.>. We discuss comparisons to previous fits further in Section <ref>.
For GRB 230307A we find that our choice of afterglow model significantly affects the shape of the model optical light curve at later times, reflected in the disparate best-fit values found for M_ ej, R and T_ floor, R. The fit to data subtracted with afterglow Model 1 (Figures <ref> and <ref>; “KN Fit 1”) follows a typical fading behavior at δ t ≳ 10 days. In contrast, the fit to data subtracted with afterglow Model 2 (“KN Fit 2”; in particular the JWST F070W detection at ≈29 days, which is an upper limit in KN Fit 1) produces a flattening in the optical decay past δ t ≳ 10 days. To produce this shape, KN Fit 2 requires higher values of T_ floor, B and T_ floor, R than KN Fit 1 to explain the emission at later times (Table <ref>). We favor KN Fit 1 in our later analysis (though we show both for completeness), for several reasons. First, we prefer the afterglow model that provides a better fit to the X-ray and radio data than the early TESS observations (afterglow Model 1; Figure <ref>) as there are a number of proposed emission mechanisms, including a reverse shock, shock cooling and free-neutron decay <cit.>, that may explain the early TESS excess but could not account for a late-time radio and X-ray excess. Second, the true TESS bandpass is wider than the nominal I_c-band reported in the literature (e.g., ), and may also explain excess emission relative to the model. Third, KN Fit 1 results in more physically realistic values for T_ floor, R, as they are similar to the blackbody temperature that approximates kilonova nebular emission at the wavelengths where the nebular observations occur (e.g., 440 K; ). Finally, fits to both afterglow-subtracted datasets without JWST observations are more consistent with KN Fit 1.
In Figure <ref> we show the median and 68% confidence range for M_ ej and V_ ej for each component and the total ejecta. In keeping with previous works (e.g., ), our fits place the tightest constraints on the blue and purple component parameters and the coarsest constraints on the red component. This is in large part due to the traditional use of more sensitive optical telescopes for afterglow searches, especially for the kilonova candidates detected prior to GW 170817. Notably, our fits to events with just two or three detections and deep upper limits (GRBs 050709 and 200522A) place order-of-magnitude or tighter constraints on M_ ej, B and M_ ej, P. In Section <ref> we analyze and compare the kilonova properties of long and short GRBs.
Focusing on the blue component, our analysis finds that all events except one prefer M_ ej, B≤ 0.01 M_⊙ (Table <ref>). We also observe a general trend of increasing M_ ej, B with V_ ej, B (Figure <ref>), though the large error bars on several events prohibit a firm conclusion. Taking 1000 draws from the posterior of each event (and discarding GRB 230307A KN Fit 2), we calculate a median of M_ ej, B = 0.006_-0.004^+0.015 M_⊙ (68% confidence; Figure <ref>).
The blue kilonova emission may be attributed either to dynamical ejecta heated at the contact surface between the NSs and ejected along the axis of the jet (e.g., ), or to post-merger disk ejecta experiencing neutrino irradiation from a NS remnant, which lowers the lanthanide richness of the ejecta (e.g., ).
Notably, GRB 200522A is a significant outlier in M_ ej, B (Figure <ref>) with M_ ej, B = 0.046 ± 0.009 M_⊙, consistent with past findings <cit.>. We slightly favor a disk-wind source (rather than a shock-heated dynamical source) to explain the majority of GRB 200522A's larger M_ ej, B as BNS merger simulations that measure shock-heated dynamical ejecta do not produce M_ ej, B≳ 0.01 M_⊙, even when spanning a range in NS masses, mass ratios and two NS equations of state <cit.>. In contrast, simulations measuring a disk-wind mass in the case of a long-lived NS remnant produce M_ ej, B≈ 0.03 M_⊙ (e.g., ). The extreme luminosity of GRB 200522A's kilonova has previously been explained with the creation of a magnetar remnant, which may provide an additional blue emission source <cit.>.
Turning to the purple component, we find a population median of M_ ej, P = 0.020_-0.010^+0.034 M_⊙ (68% confidence; Figure <ref>). This range is broadly consistent with expectations for disk component masses (e.g., ). We find that several events (GRBs 050709, 130603B and 230307A) prefer lower ejecta velocities compared to those found for the blue or red components. This trend supports the purple component's source as a disk wind, which is likely ejected at slower speeds compared to the dynamical red and blue components (V_ ej≈ 0.01 - 0.1c; e.g., ). We find that GRB 211211A is an outlier on the higher end in V_ ej, P. As our fit to this event finds high velocities for all three components, we posit this is likely a reflection of the fast-decaying UV-optical emission observed at δ t ≈ 0.5 - 2 days, which previous fits have explained with a shock cooling model (e.g., ).
Finally, for the red component, we find a population median of M_ ej, R = 0.051_-0.045^+0.100 M_⊙ (68% confidence; Figure <ref>). Amongst the three components, this is the highest median and the widest range in ejecta masses, though we note there are poor constraints on this value for GRBs 050709, 130603B and 200522A. The red kilonova component can be ascribed to the neutron-rich ejecta tidally stripped from the NS surfaces as the compact objects slowly inspiral (e.g., ). Generally, it is expected that the red dynamical ejecta mass will increase with larger asymmetry in the progenitor masses (particularly for NSBH events; e.g., ) or higher spins (e.g., ). Notably, the three highest M_ ej, R median values are found for the three LGRB events, GRBs 060614, 211211A and 230307A. We further discuss the implications of this finding and other trends with GRB properties in Section <ref>.
Across the sample, we derive a median total ejecta mass of M_ ej, tot = 0.085_-0.040^+0.110 M_⊙ (68% confidence). In every component and the total, the ejecta mass of GW 170817/AT 2017gfo falls comfortably within the 68% credible range found for all events. This indicates that AT 2017gfo may be considered a “representative” kilonova (Figure <ref>) compared to the seven kilonovae analyzed here. In contrast, the kilonova of GRB 160821B falls below the median ejecta masses found for all GRBs, rendering it a critical point in probing kilonova diversity.
§.§ Constraints on M_ ej from Additional Short GRB Observations
Next, we briefly explore the possibility that our kilonova sample is observationally biased towards more luminous events. As luminosity roughly scales with M_ ej (e.g., ), a missing population of low-luminosity kilonovae would translate to a population with lower M_ ej than those reported here.
To evaluate this, we employ observations of seven short GRBs with upper limits and afterglow detections that are less luminous than AT 2017gfo when matched in rest-frame time and band at their known redshifts <cit.>. These bursts[We remove GRB 060201 from the sample as its host association is inconclusive (e.g., ).] are GRBs 050509B <cit.>, 080905A <cit.>, 090515 <cit.>, 100206 <cit.>, 130822A <cit.>, 150120A <cit.> and 160624A <cit.>. For this analysis, we assume all detections are dominated by afterglow flux and treat them as upper limits.
In the observed frame for each event (determined with the redshift catalog of ), we generate three sets of one-component kilonova light curves parameterized by the blue, purple, and red opacities (Section <ref>). We fix T_ floor = 1000 K and use the corresponding component's median velocity found in Section <ref>. For each component, we produce kilonova models log-spaced in M_ ej over the range M_ ej = 0.001 - 0.5 M_⊙. We compare our GRB observations to the set of kilonova models, and record the highest M_ ej in each component allowed by the upper limits for each event. In Figure <ref> we plot constraining (< 0.5 M_⊙) upper limits on the respective component ejecta masses. We note that our use of one-component models translates to a conservative upper limit as flux from other components is not taken into account in our procedure.
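Schematically, each constraint is a scan over the log-spaced mass grid, recording the largest M_ej whose light curve violates none of the limits. A sketch under those assumptions (kilonova_mag stands in for the one-component model evaluator; it is not a real API):

import numpy as np

masses = np.logspace(np.log10(1e-3), np.log10(0.5), 50)  # M_ej grid in M_sun

def max_allowed_mass(limits, kilonova_mag):
    # `limits` is a list of (t_obs, band, mag_limit) tuples; kilonova_mag(m, t, band)
    # returns the model AB magnitude for ejecta mass m at fixed kappa, V_ej, T_floor.
    # A mass is allowed when its model is fainter (numerically larger magnitude)
    # than every upper limit.
    allowed = [m for m in masses
               if all(kilonova_mag(m, t, band) > lim for t, band, lim in limits)]
    return max(allowed) if allowed else None  # None: even 0.001 M_sun is excluded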
Similar to previous findings, short GRB observations are most constraining of M_ ej, B (e.g., Section <ref>; ), and current deep upper limits are inadequate for placing meaningful constraints on M_ ej, P and M_ ej, R. In the blue component, we find one event less massive than AT 2017gfo (GRB 080905A; M_ ej, B < 0.002 M_⊙) and an additional two events with upper limits below the median value found in Section <ref> (GRBs 050509B and 130822A; M_ ej, B < 0.005 M_⊙). Overall, the existing upper limits span the range of blue ejecta masses, so it is difficult to conclude at present whether the population without detected kilonovae also has lower ejecta masses.
§.§ Comparison to Previous Kilonova Fits
Comparing our results in Table <ref> to those from previous fits, we find that they are generally consistent with the literature. The absolute differences vary from event to event, an expected outcome given the range of kilonova modeling codes used, which we discuss further here.
For AT 2017gfo, our fit produces a larger M_ ej, R and a smaller M_ ej, B and M_ ej, P compared to a previous fit <cit.>, though a similar value for the total ejecta mass is reached (M_ ej, tot≈ 0.07 M_⊙). We ascribe this difference in the relative component masses to our addition of the geometry prescription (Section <ref>; ), which is expected to increase the amount of red mass relative to the blue for viewing angles less than θ_ open (an effect also observed in previous work, though with a larger θ_ obs and a different asymmetry prescription). <cit.> also model AT 2017gfo with the same modeling code, incorporating constraints from GW observations, and find comparable M_ ej, B and M_ ej, P. Their analysis finds a smaller M_ ej, R (≈ 0.001 M_⊙), which we attribute to their constraints on the tidal dynamical ejecta based on the BNS mass ratio and chirp mass <cit.>. Compared to two-component (red and blue components only) fits to AT 2017gfo, we obtain M_ ej, B,R values that are within the range but on the upper end of those measured in the literature (e.g., ).
Of the remaining kilonovae in our sample, the majority have existing literature measurements of M_ ej and V_ ej. However, previous fits to these events use a variety of models and methods to measure the kilonova parameters, so exact comparisons are not advisable. For GRB 060614, a previous estimate found M_ ej, tot≈ 0.1 M_⊙, on the lower end of our estimate <cit.>. For GRB 130603B, previous fits find a wide span of M_ ej, tot = 0.01 - 0.1 M_⊙ <cit.>, broadly consistent with our results. For GRB 160821B, <cit.> constrain a lanthanide-rich mass to < 0.006 M_⊙ and a lanthanide-poor mass to ≈ 0.01 M_⊙. For the same burst, <cit.> find a dynamical ejecta mass of (1.0 ± 0.6) × 10^-3 M_⊙ and a post-merger ejecta mass of (1.0 ± 0.6) × 10^-2 M_⊙. Within errors, our results are consistent with both findings but are on the upper end of the given ranges.
GRB 211211A was previously fit with a three-component model that included a shock heating prescription, finding M_ ej, B = 0.01 ± 0.001 M_⊙, M_ ej, P = 0.01 ± 0.02 M_⊙ and M_ ej, R = 0.02^+0.02_-0.01 M_⊙ <cit.>. Consistent results were found when performing joint afterglow and kilonova modeling <cit.>. Within errors, the blue and purple component masses are consistent with our results, but we find a larger median M_ ej, R by a factor of two. Additional M_ ej, tot estimates of this kilonova span 0.02 - 0.1 M_⊙ <cit.>. For GRB 230307A, values of M_ ej, tot of 0.05^+0.15_-0.05 M_⊙ <cit.> and ≈ 0.08 M_⊙ <cit.> are found in the literature. Taken together, our results are generally consistent with those found in the literature, with median values on the upper end of previous estimates, in line with our comparisons to AT 2017gfo modeling.
§.§ Caveats to Kilonova Model
We find that in several cases the error bars on M_ ej (particularly for the blue component) are unrealistically small. While a powerful tool for inferring physical properties, our fitting code makes a series of simplifying assumptions that likely result in an underestimation of the true errors, similar to the conclusion made for modeling of tidal disruption events <cit.>.
In particular, the assumption of a constant grey opacity may significantly affect the model posteriors, as M_ ej, V_ ej and κ are degenerate parameters in predicting the shape of the light curve. Notably, κ may vary up to an order of magnitude on the timescales of our observations (0.1 ≲δ t ≲ 30 days) as the ejecta temperature and density directly impact the elements' ionization states (e.g., ). We quantify the minimum systematic error introduced by our assumed opacities by running fits of AT 2017gfo in which each component's opacity is a free parameter whilst holding the other two components' κ values constant and keeping the same prior ranges listed in Table <ref>. We then determine the difference in derived component M_ ej relative to the masses inferred in Section <ref> with fiducial, fixed opacities. Specifically, we explore the effects of the range κ_ R = 5-30 cm^2 g^-1 (compared to κ_ R = 10 cm^2 g^-1 in our Section <ref> fits), κ_ P = 1-5 cm^2 g^-1 (κ_ P = 3 cm^2 g^-1), and κ_ B = 0.2 - 2.5 cm^2 g^-1 (κ_ B = 0.5 cm^2 g^-1). We use a minimum of κ_ B = 0.2 cm^2 g^-1 as it corresponds to the minimum value dictated by Thompson scattering of ionized elements <cit.>. We base the remaining ranges on calculations for kilonova opacities at δ t ≈ 1 day <cit.>.
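The degeneracy itself can be anticipated from the standard order-of-magnitude diffusion timescale for homologously expanding ejecta, t_peak ∼ [3 κ M_ej / (4 π c v)]^1/2 (a textbook scaling, not the radiative transfer implemented in our fitting code): the peak time is set by the product κ M_ej, so doubling κ can be compensated by halving M_ej. A quick numerical illustration:

import numpy as np

M_SUN, C_CGS, DAY = 1.989e33, 2.998e10, 86400.0  # g, cm s^-1, s

def t_peak_days(m_ej_msun, v_over_c, kappa):
    # Order-of-magnitude photon diffusion peak time for kilonova ejecta,
    # t_peak = sqrt(3 * kappa * M_ej / (4 * pi * c * v)), converted to days.
    m, v = m_ej_msun * M_SUN, v_over_c * C_CGS
    return np.sqrt(3.0 * kappa * m / (4.0 * np.pi * C_CGS * v)) / DAY

# Identical ~5 day peaks for (kappa = 3, M_ej = 0.02) and (kappa = 6, M_ej = 0.01),
# since t_peak depends on kappa and M_ej only through their product:
print(t_peak_days(0.02, 0.15, 3.0), t_peak_days(0.01, 0.15, 6.0))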
Based on the three fits with free κ values, we find the median errors on the component masses are Δ M_ ej, R = (-0.04, +0.0008) M_⊙, Δ M_ ej, P = (-0.004, +0.02) M_⊙ and Δ M_ ej, B = (-6.5 × 10^-5, +0.0006) M_⊙. For the total mass, fitting with κ as a free parameter results in only modestly lower total ejecta masses, with a median shift of Δ M_ ej, tot = -0.001 M_⊙. We find that the fits varying κ_ P introduce the most significant source of uncertainty. This result can naturally be explained as values of κ_ P more similar to κ_ B or κ_ R will divert a portion of the luminosity typically explained by M_ ej, B or M_ ej, R to M_ ej, P. This results in a different ratio between the component masses but an overall similar M_ ej, tot. We conclude that our choice of opacity (within the range of values explored here) does not significantly impact our ejecta mass results, and we do not include these uncertainties when comparing between our uniform fits.
We caution that, in addition to the effect explored above, the kilonova models used in this analysis do not account for the effect of jet-ejecta interaction, shock cooling, central engine activity, or magnetic fields, all of which may play a large role in determining ejecta masses (e.g., ). In addition, several pieces of kilonova physics are still not well understood even in state-of-the-art simulations, such as the uncertainty in wavelength-dependent opacities, nuclear heating rate, and thermalization efficiencies <cit.>. Further, joint analysis of the afterglow and kilonova may result in additional uncertainty in the derived kilonova parameters. To account for this point, we exclude data at <0.5 days in our kilonova fits, the timescale on which the afterglow is most likely to dominate, and propagate the uncertainties in our afterglow model to the data passed to the kilonova model (Sections <ref> and <ref>).
In light of these uncertainties, we emphasize that the aim of this work is to perform uniform modeling on a sample of kilonovae, allowing for an exploration of diversity and correlations with γ-ray and environment properties. For all objects in our sample besides AT 2017gfo, the datasets are relatively sparse (e.g., Section <ref> and Figure <ref>), limiting our ability to investigate each uncertainty listed above. We provide all observations (Table <ref>), including afterglow-subtracted photometry (Table <ref>), for the community to model with other existing, or future, codes.
§ DISCUSSION
§.§ Comparison to γ-ray Properties and Implications for Long/Short Progenitors
Motivated by the unknown progenitor properties and/or mechanism driving merger-origin long GRBs, we next examine any trends between the kilonova ejecta and γ-ray properties. Historically, one factor in explaining the divide in γ-ray duration between BNS and stellar progenitors is their order-of-magnitude difference in the mass reservoir surrounding the central compact object. The significantly higher masses and larger physical size of long GRB progenitor stars lead to a longer timescale for accretion, translating to a longer-lived jet (e.g., ). To produce a long-lived GRB jet from a neutron star merger, previous works have posited the progenitors are a white dwarf-neutron star binary <cit.> or an NSBH (e.g., ), or that the merger remnant is a magnetar (e.g., ).
Here, we explore the theory that a long-lived, massive (≈ 0.2 M_⊙) accretion disk, a product of an asymmetric binary merger, is capable of powering longer-lived and more energetic GRBs <cit.>. The merger of an asymmetric binary, whether it is a BNS or an NSBH with a favorable mass ratio (Q ≈ 3-5; e.g., ), will produce a greater amount of lanthanide-rich tidal dynamical ejecta compared to a symmetric binary (e.g., ). Within our modeling framework, this translates to an expected trend between M_ ej, R and the duration and/or energy of the GRB.
To investigate these possible trends, we compare the kilonova ejecta masses (Table <ref>) with the values of E_γ, iso (described in Section <ref>), the beaming-corrected E_γ, and T_ 90, rest (converted to the rest frame using the respective redshifts; Table <ref>). To calculate E_γ, we gather jet opening angle measurements (θ_j) from the literature <cit.>. All events in our sample have measured θ_j values from X-ray observations, with the exception of GRB 050709, for which a lower limit is reported <cit.>. For GRB 230307A we use 5000 random draws from our afterglow Model 1 (Section <ref>), as it is fit to the combined X-ray light curves from the literature, thereby providing a tighter constraint on the jet break than previous analyses <cit.>. Our analysis finds θ_j = 3.95^+1.72_-0.65 deg. We calculate the beaming-corrected energy, given by,
E_γ = [1- cos(θ_j)] × E_γ, iso .
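For instance, evaluating this expression at our median θ_j ≈ 3.95 deg for GRB 230307A gives a beaming factor of 1 - cos(3.95 deg) ≈ 2.4 × 10^-3, i.e., E_γ sits roughly three orders of magnitude below E_γ, iso (our quoted values propagate the full θ_j posterior rather than this point estimate):

import numpy as np

def beaming_corrected(e_gamma_iso, theta_j_deg):
    # E_gamma = [1 - cos(theta_j)] * E_gamma_iso
    return (1.0 - np.cos(np.radians(theta_j_deg))) * e_gamma_iso

print(beaming_corrected(1.0, 3.95))  # ~2.4e-3: the beaming factor itself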
In Figure <ref>, we show the results of this analysis. We observe that M_ ej, B (with the exception of the outlier GRB 200522A; Section <ref>), M_ ej, R and M_ ej, tot generally increase with higher values of E_γ, iso and E_γ. We further observe that M_ ej, R increases with longer T_90. The trends with E_γ, iso, E_γ, and T_ 90 are most apparent with M_ ej, R (Figure <ref>, third column), though the error bars for several short GRB masses and θ_j preclude a firm conclusion. We do not observe any apparent trends between M_ ej, P (Figure <ref>, second column) and γ-ray properties.
We briefly test the statistical significance of any trends between the ejecta masses and the γ-ray properties. We apply the Pearson correlation coefficient (r-score) test to the datasets in each panel of Figure <ref>. We caution that this test is agnostic to a model, and thus does not probe all underlying physical motivations. We randomly draw values from the ejecta mass posteriors and calculate r- and p-scores with the E_γ, iso, E_γ, and T_ 90 values for 1000 iterations, producing distributions of 1000 r- and p-values. We then determine the fraction of random draws that imply a significant correlation between the ejecta mass and γ-ray properties using a threshold of p< 0.05. We do not find that a significant fraction of the p-scores favor a correlation between the ejecta masses and any γ-ray properties. The strongest correlation is between M_ ej, R and T_ 90, where 32% of p-scores indicate a significant correlation.
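This resampling test is straightforward to reproduce: draw ejecta masses from each event's posterior, correlate them against the fixed γ-ray property, and tally how often p < 0.05. A sketch (the posterior arrays and property vector are placeholders for our measured values):

import numpy as np
from scipy.stats import pearsonr

def significant_fraction(posteriors, gamma_prop, n_iter=1000, alpha=0.05):
    # posteriors: list of 1-D M_ej posterior arrays, one per GRB;
    # gamma_prop: the matching E_iso, E_gamma, or T_90 values (same order).
    rng = np.random.default_rng(42)
    n_sig = 0
    for _ in range(n_iter):
        masses = [rng.choice(post) for post in posteriors]  # one draw per event
        _, p_val = pearsonr(masses, gamma_prop)
        n_sig += p_val < alpha
    return n_sig / n_iter  # fraction of draws with a significant correlation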
Though we do not find any statistically significant correlations, we observe that GRBs with T_90, rest≳ 2 s have higher median red ejecta masses (M_ ej, R≳ 0.05 M_⊙) compared to typical short GRBs (M_ ej, R≲ 0.02 M_⊙). We observe a similar pattern with M_ ej, B and E_γ (with the exception of GRB 200522A), though this component may also originate in the post-merger disk wind (Section <ref>). The implication that M_ ej, B seems to trend more strongly with γ-ray properties compared to M_ ej, P may also suggest that long GRB mergers produce relatively blue disk winds, perhaps due to energy injection from the GRB. At present, our results could indicate asymmetric binaries as the progenitors of merger-driven long GRBs. However, to establish any statistically significant correlations, a larger population of joint GRB-kilonova detections with well-constrained ejecta masses is necessary.
Finally, we note that while asymmetric binaries are uncommon in the observed population of BNS systems (e.g., ), LVK observations have revealed an asymmetric BNS (GW190425; ) and an NSBH (GW230529; ) merger. These detections may point to asymmetric binaries being common in the Universe, with potential rates able to explain the increasing (but highly uncertain) rate of long GRBs from mergers.
§.§ Kilonova Contribution to Universal r-Process
At present, kilonovae are the only observationally-confirmed source of r-process production in the Universe. Indirect observational evidence may favor the existence of a second, “faster” heavy element nucleosynthesis channel. Specifically, to explain observations of r-process-enhanced metal-poor stars in the Milky Way and dwarf galaxies (e.g., ), a significant fraction of NS mergers are required to have short delay times between enrichment and star formation (e.g., ). Simulations have demonstrated that collapsars and magneto-hydrodynamical (jet-driven) supernovae could be this second channel (e.g., ). At present, observations of these candidates are limited and those that exist do not support r-process enrichment <cit.>. Here, we explore whether the average r-process yield from kilonovae, computed using the median ejecta masses derived in Section <ref> along with current NS merger rates, is capable of producing the estimated r-process abundance in the Milky Way. In this analysis, we assume the kilonovae in our sample are created by BNS mergers only, though we note in Section <ref> that the kilonovae following long GRBs are favored to arise from asymmetric mergers that may be NSBH events. We make this assumption for simplicity, as only a small fraction of NSBH events are expected to produce kilonovae and the rate of NSBH events is subdominant compared to the rate of BNS mergers (e.g., ).
To calculate the r-process enrichment in the Milky Way from NS mergers, we employ the equation from <cit.>:
M_r ∼ 17,000 M_⊙ [ℛ_ BNS / (500 Gpc^-3 yr^-1)] [m̅_ ej / (0.03 M_⊙)] [τ_ gal / (1.3 × 10^10 yr)],
where ℛ_ BNS is the rate of NS mergers, m̅_ ej is the average kilonova ejecta mass, and τ_ gal is the age of the Milky Way, which we fix to 1.3 × 10^10 yr. We employ the BNS merger rate interval calculated from the Gravitational Wave Transient Catalog 3 (GWTC-3), ℛ_ BNS = 10 - 1700 Gpc^-3 yr^-1 <cit.>. For m̅_ ej we employ the median for the eight kilonovae in our sample, M_ ej, tot = 0.085^+0.108_-0.044 M_⊙, including the uncertainties due to the opacity (Section <ref>). Taking these values together, we find a wide range of M_r≈ 500 - 400,000 M_⊙ for BNS mergers, mostly driven by the large uncertainty in the BNS merger rate.
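Evaluating the scaling relation at the endpoints of the GWTC-3 rate interval, together with the 68% range on our sample median m̅_ej, reproduces the quoted span (a back-of-the-envelope check rather than a new calculation):

def m_r_process(rate_bns, m_ej_bar=0.085, tau_gal=1.3e10):
    # Milky Way r-process mass in M_sun from the scaling relation above;
    # rate_bns in Gpc^-3 yr^-1, m_ej_bar in M_sun, tau_gal in yr.
    return 17_000 * (rate_bns / 500.0) * (m_ej_bar / 0.03) * (tau_gal / 1.3e10)

print(m_r_process(10, m_ej_bar=0.085 - 0.044))    # ~5e2 M_sun
print(m_r_process(1700, m_ej_bar=0.085 + 0.108))  # ~4e5 M_sun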
This range for M_r encompasses the estimate of the total r-process mass in the Milky Way, M_r, MW≈ 23,000 M_⊙ <cit.>, and, on the lower end, leaves room for the existence of a second r-process source. As discussed in Sections <ref>-<ref>, our m̅_ ej value is likely to be an overestimate of the true value for two reasons. First, less luminous, and thus less massive, kilonovae are likely to have been missed due to observational biases (Section <ref>). Second, in keeping with fits to AT 2017gfo, our model derives ejecta mass values on the upper end of the ranges from previous fits (Section <ref>). However, we anticipate that the uncertainty in m̅_ ej is subdominant compared to the uncertainty in the BNS merger rate, as our values are generally comparable with literature values, where they exist (Section <ref>), and likely do not vary beyond an order of magnitude.
The specific star formation rate of the host galaxy is a dominant factor in governing what fraction of ejected r-process elements enrich later generations of stars (Nugent et al., in prep.). Notably, in comparison to the quiescent host galaxy of GW 170817/AT 2017gfo <cit.>, the hosts of the kilonovae in our on-axis GRB sample are all star-forming (; Table <ref>). Though losses may still be significant for these latter events, it is more likely that they enriched their galaxies with heavy elements compared to AT 2017gfo.
§.§ Future Kilonova Observations
We next consider the implications of our sample and model light curves for future wide-field kilonova searches, either triggered on a GRB or GW event or untriggered (e.g., ). For these searches, an understanding of the span in kilonova light curve diversity is critical to the vetting process, which has yet to be enabled with observations beyond AT 2017gfo.
For each of the seven on-axis GRB events in our sample, we create 900 light curves in the observed-frame rizJHK-bands. Each light curve is based on a random draw from the posterior (we use KN Fit 1 for GRB 230307A, given the reasoning in Section <ref>) and matched to the approximate rest-frame band in Figure <ref>. We create 50 light curves for GRB 170817A/AT 2017gfo due to computational constraints and iterate over these 18 times. Due to the tight posteriors on this event, we do not expect this process to affect our conclusions. Across the random draws, we calculate the median, 68% and 90% credible regions in luminosity space for the sample of kilonovae. In Figure <ref> we show the median and kilonova luminosity range in each filter. Taken together, our results indicate that future kilonovae will span ≳ one order of magnitude in luminosity.
As we are motivated by both targeted and untargeted kilonova searches in large surveys, we also plot the single-image 5σ depths for the Rubin Observatory <cit.>, the WINTER J-band limiting magnitude <cit.> and the “Wide Tier” (RZ-band) and “Deep Tier” (JH-band) limiting magnitudes for the Roman High Latitude Time Domain Survey (HLTDS; ), shifted to z=0.1. We caution that our results are based mostly on on-axis events, and the bolometric luminosity of kilonovae may vary by up to a factor of ∼ 10 with viewing angle <cit.>. In the riz-bands, the Rubin Observatory will be sensitive to the full and upper end of the 68% credible range of kilonovae at z=0.1 out to δ t ≈ 3 and ≈ 7 days, respectively. As shown with our analysis of GRB 200522A in Section <ref>, a single epoch of simultaneous rest-frame optical observations on these timescales is sufficient for order-of-magnitude constraints on M_ ej, B; such constraints are therefore possible for kilonovae observed by Rubin at a single epoch. We acknowledge that large outstanding challenges exist in distinguishing these rare events from other transients, especially with just one or two epochs of observations, and encourage further development of automatic vetting tools.
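The survey-depth comparison follows from shifting absolute model magnitudes to z = 0.1 with the distance modulus; a minimal sketch with astropy (we adopt the Planck18 cosmology here purely for illustration, and neglect K-corrections by matching each filter to the approximate rest-frame band, as in our light-curve construction above):

from astropy.cosmology import Planck18

z = 0.1
mu = Planck18.distmod(z).value  # distance modulus, ~38.4 mag at z = 0.1

def apparent_mag(abs_mag, distmod=mu):
    # Apparent magnitude at z = 0.1 of a kilonova with the given absolute magnitude.
    return abs_mag + distmod

# A kilonova peaking near M ~ -16 appears at m ~ 22.4, brighter than typical
# single-image survey depths of ~24 mag for the first few days after merger.
print(apparent_mag(-16.0))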
Roman's HLTDS will be a powerful tool for measuring the properties of low-redshift kilonovae if events can be identified. Indeed, Roman is capable of detecting z=0.1 kilonovae in the optical and near-IR out to δ t ≈ 2 and 3 weeks, respectively. If the Roman HLTDS observations occur at a five-day cadence, Roman is poised to observe z=0.1 kilonovae over ≈2-3 epochs in the optical and ≈3-5 epochs in the near-IR. Two to three epochs are sufficient for obtaining better than order-of-magnitude constraints on M_ ej-V_ ej, as we have demonstrated in Section <ref>. Finally, WINTER will be sensitive to the most luminous kilonovae, but not the full 68% credible range, at z=0.1.
In addition to wide-field searches in the nearby Universe, well-localized Swift GRBs continue to be a promising method to detect kilonovae. These events come with their own set of challenges, including higher redshifts, bright afterglows and low rates. Further, the more coarsely localized detections of GRBs by the Fermi Space Telescope, the Space Variable Objects Monitor (SVOM; ), the InterPlanetary Network (IPN; e.g., ) and other γ-ray telescopes offer a second route to finding kilonovae using “targeted” wide-field surveys. However, for all GRB kilonovae, the higher average redshifts render follow-up with large-aperture ground-based telescopes and space-based observatories critical. Indeed, seven of the eight kilonovae in our sample had key HST observations that were essential to detections on ≳ week timescales. Looking to the future, JWST can obtain similar observations for events out to z ≈ 1 (e.g., ). For lower-redshift events JWST has the power to capture the kilonova spectral energy distribution (SED) in the nebular phase, an important element in refining the mapping from observations to M_ ej.
§ CONCLUSIONS
We have compiled and collated the multi-wavelength light curves of eight kilonovae from the GCNs and the literature. Five of these events follow short GRBs, while three events follow long GRBs, allowing us to explore trends between γ-ray and kilonova ejecta properties. We uniformly model the afterglows of the seven events with on-axis GRBs, producing “afterglow-subtracted” light curves. We fit the afterglow-subtracted light curves with a three-component kilonova model that accounts for geometric viewing effects. Our models provide reasonable fits to the data, and we compare our posteriors to those in the literature. Our major conclusions are as follows:
* Our fits unveil a wide span in derived kilonova properties, namely M_ ej and V_ ej, implying that the progenitors and/or remnants of these mergers are also diverse. We determine that the luminous kilonova of GRB 200522A has a significantly larger M_ ej, B compared to the rest of the sample (or its luminosity may be boosted by non-radioactive heating such as a magnetar; ), while the kilonova of GRB 160821B is the least massive of the total sample.
* While well-sampled events provide the tightest constraints, we also find value in kilonovae with a single color measurement, particularly if their colors are unique (e.g., GRBs 130603B and 200522A).
* We discuss the main uncertainties in our modeling (Section <ref>) and compare our results to previous fits with other modeling codes. We emphasize that all observations used in this work, including our “afterglow-subtracted” light curves, are provided for future modeling endeavors in Tables <ref> and <ref>.
* We demonstrate that GW 170817/AT 2017gfo is a “representative” kilonova in each component's M_ ej and V_ ej (Table <ref>), when compared to the seven kilonovae in our sample and deep upper limits from the literature. Given this, our estimate of the total Milky Way r-process mass produced by kilonovae does not change significantly when using the median ejecta mass of our sample compared to previous estimates made for AT 2017gfo.
* We explore trends between our derived ejecta masses and E_γ, iso, beaming-corrected E_γ and T_90. Overall, we do not find any statistically significant correlations but observe that long GRB kilonovae have larger median M_ ej, R compared to short GRB kilonovae. We hypothesize that this is indicative of an asymmetric binary merger origin for longer-lived GRBs. A larger sample of well-studied kilonovae following short and long GRBs will be critical to confirming this hypothesis.
* We produce median, 68% and 90% confidence range light curves in a variety of bands based on the posteriors of the eight events in our sample. Comparing these light curves to the expected depths of upcoming surveys, we anticipate that Rubin and Roman will be sensitive to the majority of the kilonova luminosity range for z≲0.1 and are capable of order-of-magnitude mass constraints.
Here, we have shown that the existing sample of kilonovae, the majority of which are detected at a fixed viewing angle, demonstrates diversity and trends with γ-ray properties. Widening the sample of these events requires dedicated strategies for observational pointings (e.g., ) and kilonova candidate vetting (e.g., ) that take into account the full diversity of compact binary merger EM counterparts and environmental properties. The advent of next-generation GW detectors, deep wide-field surveys, and new γ-ray instruments, when combined with dedicated search strategies, opens the doors for unprecedented exploration into the physics of compact binary mergers, jets, and kilonovae.
§ ACKNOWLEDGEMENTS
The authors thank Ore Gottlieb, Genevieve Schroeder, Anya Nugent, Tanmoy Laskar, Igor Andreoni, Sylvia Biscoveneau, Nick Kaaz, Ben Margalit and Michael Fausnaugh for helpful conversations regarding this manuscript.
J.C.R. acknowledges support from the Northwestern Presidential Fellowship. The Fong Group at Northwestern acknowledges support by the National Science Foundation under grant Nos. AST-2206494, AST-2308182, and CAREER grant No. AST-2047919. W.F. gratefully acknowledges support by the David and Lucile Packard Foundation, the Alfred P. Sloan Foundation, and the Research Corporation for Science Advancement through Cottrell Scholar Award 28284.
Afterglow-subtracted Observations

GRB | δ t (days) | Telescope | Instrument | Filter | Subtracted Mag. (AB mag) | Uncertainty (AB mag)
050709 1.24 Swope-40 Swope i > 20.50
1.42 Danish DFOSC R > 23.19
2.39 Danish DFOSC R > 23.94
2.46 VLT FORS2 I > 23.70
2.46 VLT FORS2 V > 24.40
2.47 VLT FORS2 R 25.07 0.58
4.36 VLT FORS1 V > 25.02
4.37 VLT FORS1 I > 24.55
5.60 Subaru K > 23.95
5.60 HST ACS F814W 25.61 0.54
8.33 VLT FORS1 I > 23.95
9.80 HST ACS F814W 26.21 0.41
10.48 VLT FORS1 R > 25.21
10.49 VLT FORS1 V > 25.22
18.60 HST ACS F814W > 27.81
060614 0.65 CTIO ANDICAM J > 19.11
0.65 CTIO ANDICAM I > 19.35
0.66 Swift UVOT UVW1 > 20.68
0.66 Swift UVOT U > 20.31
0.66 Swift UVOT B > 19.77
0.66 Swift UVOT V > 19.79
0.66 Swift UVOT UVW2 > 21.36
0.67 Swift UVOT UVM2 > 20.44
0.67 Danish DFOSC R > 19.71
0.74 Danish DFOSC R > 19.86
0.75 NTT SofI K > 19.08
0.76 NTT SofI K > 19.11
0.77 NTT SofI J > 19.42
0.79 Danish DFOSC R > 19.90
0.84 Danish DFOSC R > 19.99
0.86 Swift UVOT U > 20.35
0.86 Swift UVOT UVW1 > 20.86
0.86 Swift UVOT B > 20.22
0.86 Swift UVOT V > 19.76
0.87 Swift UVOT UVW2 > 21.48
0.87 Swift UVOT UVM2 > 20.73
0.90 Danish DFOSC R > 20.12
0.90 Danish DFOSC R > 20.14
0.91 Danish DFOSC R > 20.06
0.91 Danish DFOSC R > 20.18
1.06 Swift UVOT U > 20.63
1.06 Swift UVOT UVW1 > 21.19
1.06 Swift UVOT B > 20.35
1.07 Swift UVOT V > 20.29
1.07 Swift UVOT UVW2 > 22.25
1.08 Swift UVOT UVM2 > 21.47
1.26 Swift UVOT UVW1 > 21.96
1.26 Swift UVOT U > 21.74
1.27 Swift UVOT B > 20.64
1.27 Swift UVOT UVW2 > 22.23
1.27 Swift UVOT V > 20.29
1.28 Swift UVOT UVM2 > 22.16
1.34 Watcher R > 20.95
1.48 Watcher R > 21.01
1.50 Swift UVOT U > 22.68
1.50 Swift UVOT B > 22.37
1.50 Swift UVOT V > 21.41
1.52 Swift UVOT UVW1 > 22.75
1.54 Swift UVOT UVM2 > 23.20
1.59 Swift UVOT UVW2 > 22.98
1.68 NTT SofI K > 20.03
1.72 VLT FORS1 V > 21.40
1.73 VLT FORS1 R > 21.24
1.73 VLT FORS1 I > 21.18
1.74 Swift UVOT B > 21.63
1.82 Danish DFOSC R > 21.71
1.87 VLT FORS1 R > 21.49
2.34 Watcher R > 21.53
2.83 VLT FORS1 V > 22.90
2.84 VLT FORS1 R > 22.65
2.85 VLT FORS1 I > 22.42
3.50 Swift UVOT B > 22.57
3.50 Swift UVOT U > 23.22
3.51 Swift UVOT V > 21.69
3.53 Swift UVOT UVW1 > 24.22
3.56 Swift UVOT UVW2 > 24.47
3.56 Swift UVOT UVM2 > 24.42
3.86 VLT FORS1 I > 22.93
3.86 VLT FORS1 V > 23.66
3.87 VLT FORS1 R > 23.35
4.84 VLT FORS1 R 25.52 0.49
6.74 VLT FORS1 R 25.75 0.38
7.83 VLT FORS1 V > 24.87
7.84 VLT FORS1 I 25.27 0.49
10.18 Swift UVOT V > 22.09
10.54 Swift UVOT B > 22.97
10.81 VLT FORS1 R > 25.75
13.57 HST WFPC2 F814W 25.56 0.12
13.97 VLT FORS1 F606W 27.45 0.48
14.77 VLT FORS1 R > 26.56
19.68 VLT FORS1 R > 26.51
23.80 VLT FORS1 V > 24.82
23.80 VLT FORS1 R > 24.81
23.81 VLT FORS1 I > 24.35
130603B 0.01 Swift UVOT V > 17.21
0.09 Swift UVOT V > 18.63
0.10 Swift UVOT B > 19.69
0.25 NOT MOSCA r > 20.34
0.25 NOT MOSCA z' > 19.89
0.27 WHT ACAM i > 20.25
0.28 CAHA DLR-MKIII V > 20.67
0.29 GTC OSIRIS r > 20.49
0.30 WHT ACAM g > 20.86
0.33 Gemini-S GMOS g > 21.05
0.34 Magellan/Baade IMACS r > 20.76
0.38 Gemini-S GMOS i > 20.56
0.60 UKIRT WFCAM K > 20.95
0.60 Gemini-N GMOS z' > 21.38
0.61 Gemini-N GMOS i > 21.66
0.61 UKIRT WFCAM J > 21.22
0.62 Gemini-N GMOS g > 22.35
0.62 Gemini-N GMOS r > 21.94
1.30 Gemini-S GMOS r > 24.40
1.30 Gemini-S GMOS r > 23.93
1.30 Gemini-S GMOS i > 23.88
1.59 Gemini-N GMOS g > 24.66
1.60 Gemini-N GMOS r > 24.79
1.61 UKIRT WFCAM J > 22.25
1.61 Gemini-N GMOS i > 24.09
1.62 Gemini-N GMOS z' > 23.41
2.32 VLT HAWK-I J > 23.40
3.26 GTC OSIRIS r > 24.29
4.26 GTC OSIRIS r > 24.69
7.30 VLT HAWK-I J > 23.24
8.23 TNG DOLoRes r > 22.93
8.25 TNS DOLoRES i > 24.00
8.41 Magellan/Baade IMACS r > 24.08
9.41 HST WFC3 F160W 25.68 0.21
160821B 0.95 GTC OSIRIS g 24.52 0.26
0.96 GTC CIRCE H > 23.85
1.06 WHT ACAM r 24.21 0.17
1.08 WHT ACAM z 23.97 0.31
1.94 GTC CIRCE H > 23.82
1.95 NOT ALFOSC r 25.38 0.21
1.96 GTC CIRCE J > 24.03
1.99 NOT ALFOSC z 24.19 0.28
2.02 GTC OSIRIS g > 25.75
2.03 GTC OSIRIS r 25.35 0.20
2.04 GTC OSIRIS z 24.70 0.32
2.04 GTC OSIRIS i 24.95 0.18
3.64 GTC OSIRIS F606W 26.80 0.33
3.71 HST WFC3 F160W 24.70 0.16
3.76 HST WFC3 F110W 25.05 0.30
3.98 GTC OSIRIS g 26.80 0.44
4.00 GTC OSIRIS i > 25.78
4.30 Keck MOSFIRE K > 24.04
4.99 GTC OSIRIS r > 26.21
6.98 GTC OSIRIS g > 27.05
7.50 Keck MOSFIRE K > 24.02
8.40 Keck MOSFIRE K > 23.85
9.97 GTC OSIRIS i > 25.68
10.00 GTC OSIRIS g > 25.85
10.01 GTC OSIRIS F606W > 26.22
10.40 GTC OSIRIS F606W > 27.82
10.46 HST WFC3 F160W 27.14 0.48
10.53 HST WFC3 F110W 27.19 0.43
23.18 HST WFC3 F110W > 27.34
23.20 GTC OSIRIS F606W > 27.42
23.23 HST WFC3 F160W > 27.22
200522A 0.67 LCO Sinistro I > 20.39
0.67 LCO Sinistro R > 22.82
2.12 Gemini GMOS r > 22.20
3.12 Gemini GMOS r > 25.90
3.52 HST WFC3 F125W 24.73 0.31
3.66 HST WFC3 F160W 24.84 0.35
16.38 HST WFC3 F125W > 27.50
211211A 0.63 CAHA CAFOS r 20.88 0.30
0.64 Swift UVOT g 21.32 0.29
0.66 Swift UVOT UVW1 > 23.23
0.66 Swift UVOT U > 21.38
0.68 CAHA CAFOS i 20.89 0.33
0.69 Swift UVOT g 21.13 0.19
0.69 NOT ALFOSC r 20.94 0.25
0.70 NOT ALFOSC i 21.05 0.37
0.70 LCO Sinistro R 21.16 0.34
0.72 Swift UVOT UVW2 > 23.84
0.73 Swift UVOT V > 19.20
0.78 Swift UVOT B > 20.50
0.85 Swift UVOT UVW1 23.57 0.49
0.86 Swift UVOT U > 21.96
0.92 Swift UVOT V > 19.19
0.92 Swift UVOT UVW2 > 23.96
1.12 Swift UVOT B > 20.72
1.19 Swift UVOT U > 22.94
1.22 Swift UVOT UVW1 > 23.49
1.40 GMG r > 22.00
1.41 DOT 4Kx4K R 22.69 0.25
1.42 DOT 4Kx4K I 22.22 0.27
1.43 GIT r > 21.19
1.43 Swift UVOT Vc 23.28 0.37
1.68 CAHA CAFOS i 22.68 0.24
2.56 Zeiss-1000 R > 23.10
2.68 CAHA CAFOS i > 24.59
4.07 Gemini NIRI K 22.45 0.16
4.42 DOT 4Kx4K R > 23.91
4.70 TNG NICS H > 21.90
5.10 Gemini NIRI K 22.43 0.19
5.11 Gemini GMOS i 26.30 0.49
5.96 MMT MMIRS J 24.23 0.44
6.08 Gemini GMOS i > 25.52
6.94 MMT MMIRS K 23.46 0.36
7.98 MMT MMIRS K 23.80 0.36
9.92 MMT MMIRS K > 22.11
230307A^† 0.75 KMTNet R 21.11 0.37
0.99 Swift UVOT U > 21.10
1.12 KMTNet R 21.14 0.24
1.16 Swift UVOT wh > 22.25
1.20 PRIME Y 20.88 0.34
1.20 PRIME J 21.11 0.45
1.20 PRIME z > 20.50
1.20 PRIME H 20.48 0.36
1.25 Swift UVOT wh > 22.29
1.43 VLT ULTRACAM g > 20.70
1.43 VLT ULTRACAM r 20.87 0.23
1.43 VLT ULTRACAM u' > 19.70
1.61 Swift UVOT wh > 22.60
1.82 KMTNet R 21.66 0.22
1.82 KMTNet I 21.44 0.25
2.13 KMTNet R 22.09 0.27
2.14 KMTNet I 22.02 0.33
2.36 SOAR z 21.81 0.30
2.37 VLT VST r 22.11 0.34
2.38 CTIO R > 22.46
2.39 CTIO I 22.39 0.48
2.41 VLT ULTRACAM g 22.68 0.47
2.41 VLT ULTRACAM i 21.95 0.25
2.41 VLT ULTRACAM u' > 21.20
2.43 Swift UVOT wh > 22.00
2.77 KMTNet R 22.89 0.39
3.12 KMTNet R 23.04 0.38
3.36 SOAR z > 22.63
3.39 VLT ULTRACAM g > 22.60
3.39 VLT ULTRACAM i 21.62 0.25
3.39 VLT ULTRACAM u' > 20.80
4.12 KMTNet R > 23.30
4.76 KMTNet R > 23.65
4.89 Swift UVOT wh > 23.60
5.12 KMTNet R > 23.61
6.13 KMTNet R 24.09 0.38
6.42 VLT FORS2 z 23.58 0.34
6.42 VLT FORS2 B > 26.10
7.40 XSH K > 22.02
7.40 XSH r > 24.90
8.34 Gemini GMOS z > 24.32
10.34 Gemini FLAMINGOS-2 K 22.63 0.21
10.36 VLT FORS2 I > 24.00
11.36 VLT FORS2 I > 25.20
11.42 Gemini FLAMINGOS-2 K 22.34 0.18
15.30 Gemini FLAMINGOS-2 J > 23.50
15.40 Gemini GMOS r > 24.96
15.45 Gemini FLAMINGOS-2 K > 22.10
17.30 Gemini FLAMINGOS-2 J > 23.00
17.38 VLT FORS2 R > 25.20
19.37 VLT FORS2 R > 25.80
19.38 VLT HAWK-I K > 23.40
28.83 JWST NIRCam F115W > 28.50
28.83 JWST NIRCam F277W 26.69 0.38
28.86 JWST NIRCam F356W 25.65 0.18
28.86 JWST NIRCam F150W > 28.11
28.89 JWST NIRCam F070W > 28.97
28.89 JWST NIRCam F444W 24.74 0.10
Afterglow-subtracted observations are obtained using the methods described in Sections <ref> and <ref>.
^†Afterglow model 1 fit to X-ray and radio data only.
Observations are corrected for Galactic extinction. Observations of GRB 130603B are also corrected for line-of-sight extinction measured from the early optical afterglow.
§ APPENDIX
Multi-wavelength Observations of GRB Afterglows and Kilonovae

GRB | z | Tel./Instrum. | δ t (days) | Band | Magnitude^† (AB mag) | Flux (μJy) | f_ AG^† | Ref.
050709 0.161 Swift/XRT 1.62 0.3-10 keV 0.00019 ±0.00014 * 1
Swift/XRT 2.43 0.3-10 keV < 0.00015 * 1
Chandra/ACIS 2.55 0.3-10 keV 0.00011 ±0.00002 * 1
Swift/XRT 3.24 0.3-10 keV < 0.00073 * 1
Swift/XRT 4.28 0.3-10 keV < 0.00059 * 1
Chandra/ACIS 16.1 0.3-10 keV 0.000011 ±0.00002 * 1
Swope-40/Swope 1.24 i' > 20.5 < 23 1
Danish/DFOSC 1.42 R 23.2 ±0.1 1.9 ±0.099 * 2
Danish/DFOSC 2.39 R 23.9 ±0.2 0.96 ±0.21 * 2
VLT/FORS2 2.46 I > 23.7 < 1.2 3
VLT/FORS2 2.46 V 24.4 ±0.1 0.63 ±0.058 3
VLT/FORS2 2.47 R 24.0 ±0.1 0.88 ±0.056 3
VLT/FORS1 4.36 V > 25.0 < 0.36 3
VLT/FORS1 4.37 I > 24.6 < 0.55 3
Subaru 5.6 K 23.9 ±0.7 0.95 ±0.61 1
HST/ACS 5.6 F814W 25.1 ±0.1 0.34 ±0.0062 1
VLT/FORS1 8.33 I > 23.9 < 0.95 3
HST/ACS 9.8 F814W 25.8 ±0.1 0.17 ±0.0077 1
VLT/FORS1 10.5 R > 25.2 < 0.3 3
VLT/FORS1 10.5 V > 25.2 < 0.3 3
HST/ACS 18.6 F814W 27.8 ±0.3 0.027 ±0.0068 1
HST/ACS 34.7 F814W > 28.1 < 0.021 1
VLA 1.6 8.5 GHz < 115 * 1
VLA 2.5 8.5 GHz < 114 * 1
VLA 4.5 8.5 GHz < 74 * 1
VLA 7.5 8.5 GHz < 40 * 1
060614 0.125 Swift/XRT 0.053 0.3-10 keV 0.757 ±0.162 4
Swift/XRT 0.054 0.3-10 keV 0.701 ±0.158 4
Swift/XRT 0.056 0.3-10 keV 0.549 ±0.124 4
Swift/XRT 0.058 0.3-10 keV 0.801 ±0.18 4
Swift/XRT 0.06 0.3-10 keV 0.913 ±0.209 4
Swift/XRT 0.062 0.3-10 keV 0.751 ±0.17 4
Swift/XRT 0.064 0.3-10 keV 0.923 ±0.207 4
Swift/XRT 0.065 0.3-10 keV 0.834 ±0.188 4
Swift/XRT 0.066 0.3-10 keV 0.744 ±0.167 4
Swift/XRT 0.068 0.3-10 keV 0.84 ±0.19 4
Swift/XRT 0.069 0.3-10 keV 0.89 ±0.2 4
Swift/XRT 0.071 0.3-10 keV 0.753 ±0.17 4
Swift/XRT 0.072 0.3-10 keV 0.756 ±0.171 4
Swift/XRT 0.074 0.3-10 keV 0.607 ±0.139 4
Swift/XRT 0.075 0.3-10 keV 0.954 ±0.221 4
Swift/XRT 0.119 0.3-10 keV 1.11 ±0.239 4
Swift/XRT 0.121 0.3-10 keV 0.695 ±0.159 4
Swift/XRT 0.123 0.3-10 keV 0.839 ±0.189 4
Swift/XRT 0.126 0.3-10 keV 0.713 ±0.161 4
Swift/XRT 0.128 0.3-10 keV 0.724 ±0.163 4
Swift/XRT 0.13 0.3-10 keV 0.737 ±0.167 4
Swift/XRT 0.133 0.3-10 keV 0.808 ±0.184 4
Swift/XRT 0.136 0.3-10 keV 1.082 ±0.244 4
Swift/XRT 0.138 0.3-10 keV 0.942 ±0.214 4
Swift/XRT 0.14 0.3-10 keV 0.96 ±0.216 4
Swift/XRT 0.141 0.3-10 keV 1.04 ±0.229 4
Swift/XRT 0.143 0.3-10 keV 2.096 ±0.469 4
Swift/XRT 0.144 0.3-10 keV 0.917 ±0.158 4
Swift/XRT 0.186 0.3-10 keV 0.773 ±0.173 4
Swift/XRT 0.188 0.3-10 keV 0.581 ±0.131 4
Swift/XRT 0.189 0.3-10 keV 1.003 ±0.215 4
Swift/XRT 0.191 0.3-10 keV 0.684 ±0.154 4
Swift/XRT 0.192 0.3-10 keV 0.594 ±0.134 4
Swift/XRT 0.194 0.3-10 keV 0.89 ±0.195 4
Swift/XRT 0.195 0.3-10 keV 0.814 ±0.183 4
Swift/XRT 0.197 0.3-10 keV 0.958 ±0.215 4
Swift/XRT 0.198 0.3-10 keV 0.688 ±0.156 4
Swift/XRT 0.2 0.3-10 keV 0.763 ±0.172 4
Swift/XRT 0.201 0.3-10 keV 0.831 ±0.187 4
Swift/XRT 0.203 0.3-10 keV 0.681 ±0.15 4
Swift/XRT 0.204 0.3-10 keV 0.743 ±0.168 4
Swift/XRT 0.207 0.3-10 keV 0.924 ±0.204 4
Swift/XRT 0.209 0.3-10 keV 0.825 ±0.185 4
Swift/XRT 0.211 0.3-10 keV 0.759 ±0.17 4
Swift/XRT 0.213 0.3-10 keV 0.633 ±0.123 4
Swift/XRT 0.254 0.3-10 keV 0.568 ±0.129 4
Swift/XRT 0.256 0.3-10 keV 0.664 ±0.15 4
Swift/XRT 0.258 0.3-10 keV 0.707 ±0.16 4
Swift/XRT 0.26 0.3-10 keV 0.518 ±0.118 4
Swift/XRT 0.262 0.3-10 keV 0.921 ±0.207 4
Swift/XRT 0.264 0.3-10 keV 0.849 ±0.192 4
Swift/XRT 0.266 0.3-10 keV 0.546 ±0.124 4
Swift/XRT 0.268 0.3-10 keV 0.72 ±0.162 4
Swift/XRT 0.27 0.3-10 keV 0.746 ±0.168 4
Swift/XRT 0.273 0.3-10 keV 0.963 ±0.212 4
Swift/XRT 0.275 0.3-10 keV 0.551 ±0.125 4
Swift/XRT 0.277 0.3-10 keV 0.603 ±0.137 4
Swift/XRT 0.28 0.3-10 keV 0.544 ±0.125 4
Swift/XRT 0.32 0.3-10 keV 0.691 ±0.156 4
Swift/XRT 0.322 0.3-10 keV 0.813 ±0.183 4
Swift/XRT 0.323 0.3-10 keV 0.754 ±0.169 4
Swift/XRT 0.324 0.3-10 keV 0.834 ±0.188 4
Swift/XRT 0.326 0.3-10 keV 0.8 ±0.18 4
Swift/XRT 0.327 0.3-10 keV 0.805 ±0.181 4
Swift/XRT 0.328 0.3-10 keV 0.917 ±0.206 4
Swift/XRT 0.33 0.3-10 keV 1.001 ±0.225 4
Swift/XRT 0.331 0.3-10 keV 0.874 ±0.196 4
Swift/XRT 0.332 0.3-10 keV 1.098 ±0.247 4
Swift/XRT 0.333 0.3-10 keV 0.92 ±0.178 4
Swift/XRT 0.473 0.3-10 keV 0.71 ±0.16 4
Swift/XRT 0.477 0.3-10 keV 0.557 ±0.094 4
Swift/XRT 0.524 0.3-10 keV 0.5 ±0.113 * 4
Swift/XRT 0.525 0.3-10 keV 0.74 ±0.167 * 4
Swift/XRT 0.527 0.3-10 keV 0.538 ±0.122 * 4
Swift/XRT 0.529 0.3-10 keV 0.564 ±0.124 * 4
Swift/XRT 0.531 0.3-10 keV 0.655 ±0.128 * 4
Swift/XRT 0.59 0.3-10 keV 0.527 ±0.118 * 4
Swift/XRT 0.593 0.3-10 keV 0.348 ±0.092 * 4
Swift/XRT 0.594 0.3-10 keV 0.504 ±0.113 * 4
Swift/XRT 0.597 0.3-10 keV 0.481 ±0.087 * 4
Swift/XRT 0.658 0.3-10 keV 0.349 ±0.091 * 4
Swift/XRT 0.66 0.3-10 keV 0.423 ±0.096 * 4
Swift/XRT 0.664 0.3-10 keV 0.256 ±0.054 * 4
Swift/XRT 0.725 0.3-10 keV 0.236 ±0.062 * 4
Swift/XRT 0.728 0.3-10 keV 0.287 ±0.072 * 4
Swift/XRT 0.731 0.3-10 keV 0.382 ±0.079 * 4
Swift/XRT 0.792 0.3-10 keV 0.28 ±0.071 * 4
Swift/XRT 0.796 0.3-10 keV 0.342 ±0.061 * 4
Swift/XRT 0.859 0.3-10 keV 0.212 ±0.055 * 4
Swift/XRT 0.863 0.3-10 keV 0.265 ±0.07 * 4
Swift/XRT 0.866 0.3-10 keV 0.288 ±0.073 * 4
Swift/XRT 0.926 0.3-10 keV 0.182 ±0.048 * 4
Swift/XRT 0.93 0.3-10 keV 0.293 ±0.076 * 4
Swift/XRT 0.932 0.3-10 keV 0.271 ±0.071 * 4
Swift/XRT 0.935 0.3-10 keV 0.338 ±0.085 * 4
Swift/XRT 0.993 0.3-10 keV 0.267 ±0.07 * 4
Swift/XRT 0.996 0.3-10 keV 0.17 ±0.045 * 4
Swift/XRT 1.001 0.3-10 keV 0.202 ±0.047 * 4
Swift/XRT 1.059 0.3-10 keV 0.27 ±0.071 * 4
Swift/XRT 1.062 0.3-10 keV 0.239 ±0.062 * 4
Swift/XRT 1.067 0.3-10 keV 0.17 ±0.045 * 4
Swift/XRT 1.072 0.3-10 keV 0.211 ±0.039 * 4
Swift/XRT 1.128 0.3-10 keV 0.248 ±0.065 * 4
Swift/XRT 1.131 0.3-10 keV 0.188 ±0.049 * 4
Swift/XRT 1.135 0.3-10 keV 0.2 ±0.052 * 4
Swift/XRT 1.141 0.3-10 keV 0.14 ±0.033 * 4
Swift/XRT 1.194 0.3-10 keV 0.201 ±0.052 * 4
Swift/XRT 1.199 0.3-10 keV 0.174 ±0.045 * 4
Swift/XRT 1.203 0.3-10 keV 0.209 ±0.054 * 4
Swift/XRT 1.208 0.3-10 keV 0.208 ±0.044 * 4
Swift/XRT 1.261 0.3-10 keV 0.142 ±0.037 * 4
Swift/XRT 1.267 0.3-10 keV 0.169 ±0.044 * 4
Swift/XRT 1.272 0.3-10 keV 0.188 ±0.036 * 4
Swift/XRT 1.329 0.3-10 keV 0.144 ±0.038 * 4
Swift/XRT 1.334 0.3-10 keV 0.232 ±0.06 * 4
Swift/XRT 1.34 0.3-10 keV 0.127 ±0.027 * 4
Swift/XRT 1.396 0.3-10 keV 0.143 ±0.037 * 4
Swift/XRT 1.405 0.3-10 keV 0.1 ±0.021 * 4
Swift/XRT 1.471 0.3-10 keV 0.107 ±0.018 * 4
Swift/XRT 1.534 0.3-10 keV 0.085 ±0.022 * 4
Swift/XRT 1.542 0.3-10 keV 0.123 ±0.026 * 4
Swift/XRT 1.597 0.3-10 keV 0.099 ±0.026 * 4
Swift/XRT 1.607 0.3-10 keV 0.11 ±0.023 * 4
Swift/XRT 1.666 0.3-10 keV 0.072 ±0.019 * 4
Swift/XRT 1.676 0.3-10 keV 0.096 ±0.024 * 4
Swift/XRT 1.733 0.3-10 keV 0.088 ±0.023 * 4
Swift/XRT 1.765 0.3-10 keV 0.101 ±0.017 * 4
Swift/XRT 1.936 0.3-10 keV 0.078 ±0.018 * 4
Swift/XRT 2.004 0.3-10 keV 0.09 ±0.019 * 4
Swift/XRT 2.071 0.3-10 keV 0.059 ±0.014 * 4
Swift/XRT 2.14 0.3-10 keV 0.054 ±0.012 * 4
Swift/XRT 2.207 0.3-10 keV 0.08 ±0.015 * 4
Swift/XRT 2.276 0.3-10 keV 0.046 ±0.012 * 4
Swift/XRT 2.343 0.3-10 keV 0.04 ±0.011 * 4
Swift/XRT 2.447 0.3-10 keV 0.052 ±0.009 * 4
Swift/XRT 2.57 0.3-10 keV 0.036 ±0.007 * 4
Swift/XRT 3.343 0.3-10 keV 0.028 ±0.011 * 4
SSO 0.0179 R 20.5 ±0.3 23 ±7.1 6
SSO 0.0221 R 20.2 ±0.2 31 ±6.3 6
SSO 0.0262 R 20.2 ±0.2 31 ±6.3 6
SSO 0.0303 R 20.2 ±0.2 31 ±6.3 6
SSO 0.0713 R 19.3 ±0.1 69 ±6.3 6
SSO 0.119 R 19.4 ±0.1 63 ±6.3 6
SSO 0.161 R 19.3 ±0.1 69 ±6.3 6
SSO 0.197 R 19.2 ±0.1 76 ±7 6
SSO 0.243 R 19.0 ±0.1 91 ±8.4 6
Swift/UVOT 0.00177 wh 18.4 ±0.1 160 ±16 5
Swift/UVOT 0.0035 V > 19.7 < 48 5
Swift/UVOT 0.0514 UVM2 > 17.7 < 300 5
Swift/UVOT 0.0548 UVW1 18.5 ±0.3 150 ±41 5
Swift/UVOT 0.0572 U 18.8 ±0.2 110 ±16 5
Swift/UVOT 0.0595 B 19.8 ±0.2 43 ±8.3 5
Swift/UVOT 0.0619 wh 18.2 ±0.1 180 ±10 5
Swift/UVOT 0.0643 UVW2 18.3 ±0.2 180 ±38 5
Swift/UVOT 0.0666 V 19.3 ±0.3 67 ±17 5
Swift/UVOT 0.069 UVM2 17.8 ±0.2 280 ±46 5
Swift/UVOT 0.0714 UVW1 18.3 ±0.2 170 ±39 5
Swift/UVOT 0.0735 U 18.6 ±0.2 140 ±24 5
Swift/UVOT 0.12 B 19.5 ±0.1 60 ±7.1 5
Swift/UVOT 0.123 B 19.5 ±0.1 57 ±6.8 5
Swift/UVOT 0.127 B 19.6 ±0.1 54 ±6 5
Swift/UVOT 0.131 wh 18.3 ±0.1 170 ±11 5
Swift/UVOT 0.134 wh 18.4 ±0.1 160 ±7.5 5
Swift/UVOT 0.138 wh 18.2 ±0.1 180 ±13 5
Swift/UVOT 0.142 UVW2 18.1 ±0.5 210 ±92 5
Swift/UVOT 0.187 V 19.5 ±0.2 60 ±14 5
Swift/UVOT 0.19 V 19.2 ±0.2 75 ±14 5
Swift/UVOT 0.194 V 19.3 ±0.2 68 ±14 5
Swift/UVOT 0.201 UVM2 18.6 ±0.8 130 ±96 5
Swift/UVOT 0.21 UVW1 18.4 ±0.3 150 ±38 5
Swift/UVOT 0.254 U 18.6 ±0.1 140 ±15 5
Swift/UVOT 0.257 U 18.5 ±0.1 150 ±16 5
Swift/UVOT 0.261 U 18.5 ±0.1 140 ±14 5
Swift/UVOT 0.264 B 19.5 ±0.1 55 ±7.1 5
Swift/UVOT 0.268 B 19.4 ±0.1 65 ±7.7 5
Swift/UVOT 0.271 B 19.5 ±0.2 58 ±8.4 5
Swift/UVOT 0.275 wh 18.2 ±0.1 180 ±22 5
Swift/UVOT 0.278 wh 18.3 ±0.1 180 ±18 5
Swift/UVOT 0.281 wh 18.3 ±0.2 170 ±35 5
Watcher 0.289 R 19.1 ±0.2 81 ±13 7
Watcher 0.309 R 18.8 ±0.1 110 ±11
Watcher 0.331 R 19.5 ±0.3 55 ±15 11
Swift/UVOT 0.331 U 18.3 ±0.1 170 ±15 5
Swift/UVOT 0.324 UVW1 18.4 ±0.1 160 ±18 5
Swift/UVOT 0.334 U 18.4 ±0.2 150 ±29 5
Watcher 0.367 R 19.2 ±0.1 77 ±7.1 11
Watcher 0.387 R 19.2 ±0.1 78 ±7.9 11
Watcher 0.406 R 19.2 ±0.1 72 ±7.3 11
Swift/UVOT 0.473 UVW1 19.0 ±0.3 90 ±21 5
Swift/UVOT 0.473 U 18.8 ±0.2 110 ±18 5
Swift/UVOT 0.474 B 19.7 ±0.2 47 ±8.1 5
Swift/UVOT 0.479 UVW2 18.7 ±0.5 120 ±49 5
Swift/UVOT 0.478 V 19.2 ±0.3 74 ±19 5
Swift/UVOT 0.481 UVM2 18.6 ±0.5 130 ±62 5
CTIO/ANDICAM 0.646 I 19.4 ±0.1 66 ±6.1 8
CTIO/ANDICAM 0.646 J 19.1 ±0.1 82 ±7.6 8
Swift/UVOT 0.659 UVW1 19.2 ±0.3 78 ±19 5
Swift/UVOT 0.659 U 19.3 ±0.2 70 ±13 5
Swift/UVOT 0.66 B 19.9 ±0.2 40 ±8 5
Swift/UVOT 0.664 UVW2 19.6 ±0.3 51 ±13 5
Swift/UVOT 0.664 V > 19.8 < 44 5
Swift/UVOT 0.667 UVM2 18.8 ±0.3 110 ±35 5
D1.5m+DFOSC 0.673 R 19.7 ±0.1 50 ±0.46 9
D1.5m+DFOSC 0.741 R 19.8 ±0.1 43 ±0.4 * 9
NTT/SofI 0.754 K 19.1 ±0.1 85 ±5.4 * 10
NTT/SofI 0.76 K 19.1 ±0.1 82 ±6 * 10
NTT/SofI 0.769 J 19.4 ±0.1 62 ±1.1 * 10
D1.5m/DFOSC 0.793 R 19.9 ±0.1 42 ±0.38 * 9
D1.5m/DFOSC 0.84 R 19.9 ±0.1 38 ±0.35 * 9
Swift/UVOT 0.86 UVW1 19.4 ±0.2 66 ±13 5
Swift/UVOT 0.86 U 19.3 ±0.2 67 ±12 5
Swift/UVOT 0.861 B 20.4 ±0.3 26 ±6.5 5
Swift/UVOT 0.866 UVW2 19.8 ±0.6 46 ±25 5
Swift/UVOT 0.865 V 19.8 ±0.3 45 ±14 5
Swift/UVOT 0.868 UVM2 19.0 ±0.4 88 ±30 5
D1.5m/DFOSC 0.9 R 20.1 ±0.1 34 ±0.31 * 9
D1.5m/DFOSC 0.904 R 20.1 ±0.1 33 ±0.31 * 9
D1.5m/DFOSC 0.907 R 20.0 ±0.1 36 ±0.33 * 9
D1.5m/DFOSC 0.911 R 20.1 ±0.1 32 ±0.3 * 9
Swift/UVOT 1.06 UVW1 19.7 ±0.2 49 ±10 5
Swift/UVOT 1.06 U 19.6 ±0.2 52 ±8.6 5
Swift/UVOT 1.06 B 20.5 ±0.2 23 ±4.9 5
Swift/UVOT 1.07 UVW2 20.5 ±0.4 22 ±8.9 5
Swift/UVOT 1.07 V > 20.3 < 28 5
Swift/UVOT 1.08 UVM2 19.8 ±0.6 44 ±24 5
Swift/UVOT 1.26 UVW1 20.4 ±0.3 24 ±7 5
Swift/UVOT 1.26 U 20.7 ±0.3 19 ±6 5
Swift/UVOT 1.27 B 20.8 ±0.3 18 ±4.4 5
Swift/UVOT 1.27 UVW2 20.5 ±0.4 23 ±8.4 5
Swift/UVOT 1.27 V > 20.3 < 28 5
Swift/UVOT 1.28 UVM2 20.5 ±0.7 24 ±16 5
Watcher 1.34 R 20.9 ±0.2 16 ±3.1 * 7
Watcher 1.48 R 21.0 ±0.3 15 ±3.6 * 7
Swift/UVOT 1.5 U 21.7 ±0.3 7.9 ±1.9 5
Swift/UVOT 1.5 B > 22.5 < 3.6 5
Swift/UVOT 1.5 V 21.4 ±0.3 9.8 ±3.1 5
Swift/UVOT 1.52 UVW1 21.2 ±0.2 12 ±1.7 5
Swift/UVOT 1.54 UVM2 21.5 ±0.2 9 ±1.7 5
Swift/UVOT 1.59 UVW2 21.2 ±0.2 11 ±2 5
NTT/SofI 1.68 K 20.0 ±0.1 35 ±2.9 * 10
VLT/FORS1 1.72 V 21.4 ±0.1 10 ±0.28 11
VLT/FORS1 1.73 R 21.2 ±0.1 12 ±0.21 * 11
VLT/FORS1 1.73 I 21.2 ±0.1 12 ±0.45 * 11
VLT/FORS1 1.74 B 21.6 ±0.1 8.1 ±0.15
D1.5m/DFOSC 1.82 R 21.7 ±0.1 7.9 ±0.43 * 9
VLT/FORS1 1.87 R 21.5 ±0.1 9.2 ±0.25 * 11
Watcher 2.34 R 21.5 ±0.9 9.3 ±7.3 7
VLT/FORS1 2.84 V 22.9 ±0.1 2.5 ±0.16 11
VLT/FORS1 2.84 R 22.6 ±0.1 3.2 ±0.15 11
VLT/FORS1 2.85 I 22.4 ±0.2 3.9 ±0.61 11
Swift/UVOT 3.5 U 22.2 ±0.3 4.8 ±1.5 5
Swift/UVOT 3.5 B > 22.7 < 3 5
Swift/UVOT 3.51 V > 21.7 < 7.6 5
Swift/UVOT 3.53 UVW1 22.7 ±0.4 3 ±0.99 5
Swift/UVOT 3.56 UVW2 22.7 ±0.3 2.9 ±0.83 5
Swift/UVOT 3.56 UVM2 22.7 ±0.3 2.9 ±0.89 5
VLT/FORS1 3.86 I 22.9 ±0.1 2.4 ±0.22 11
VLT/FORS1 3.86 V 23.7 ±0.1 1.2 ±0.1 11
VLT/FORS1 3.87 R 23.4 ±0.1 1.7 ±0.061 11
VLT/FORS1 4.84 R 23.8 ±0.1 1.1 ±0.063 11
VLT/FORS1 6.74 R 24.5 ±0.1 0.58 ±0.048 11
VLT/FORS1 7.83 V 24.9 ±0.2 0.41 ±0.094 11
VLT/FORS1 7.84 I 24.4 ±0.2 0.61 ±0.11 11
Swift/UVOT 10.2 V > 22.1 < 5.2 5
Swift/UVOT 10.5 B > 23.1 < 2.1 5
VLT/FORS1 10.8 R 25.8 ±0.3 0.18 ±0.043 11
HST/WFPC2 13.6 F814W 25.2 ±0.1 0.3 ±0.022 11
HST/WFPC2 14.0 F606W 26.5 ±0.2 0.095 ±0.014 11
VLT/FORS1 14.8 R 26.6 ±0.3 0.086 ±0.025 11
VLT/FORS1 19.7 R > 26.5 < 0.09 11
VLT/FORS1 23.8 V > 24.8 < 0.43 11
VLT/FORS1 23.8 R > 24.8 < 0.43 11
VLT/FORS1 23.8 I > 24.4 < 0.66 11
HST/WFPC2 31.1 F814W > 27.8 < 0.029 11
HST/WFPC2 31.8 F606W > 28.1 < 0.021 11
VLT/FORS1 32.8 R > 25.6 < 0.21 11
VLT/FORS1 44.7 R > 26.5 < 0.09 11
HST/WFPC2 45.0 F814W > 27.9 < 0.026 11
ATCA 9.96 4.8 GHz < 900 * 12
ATCA 9.99 8.6 GHz < 900 * 12
130603B 0.356 Swift/XRT 0.00156 0.3-10 keV 11.139 ±2.487 * 4
Swift/XRT 0.0032 0.3-10 keV 7.933 ±1.876 * 4
Swift/XRT 0.00481 0.3-10 keV 8.041 ±1.87 * 4
Swift/XRT 0.00668 0.3-10 keV 4.448 ±1.038 * 4
Swift/XRT 0.0079 0.3-10 keV 6.006 ±1.385 * 4
Swift/XRT 0.0087 0.3-10 keV 3.733 ±0.73 * 4
Swift/XRT 0.0462 0.3-10 keV 2.06 ±0.451 * 4
Swift/XRT 0.047 0.3-10 keV 1.23 ±0.276 * 4
Swift/XRT 0.0479 0.3-10 keV 1.781 ±0.401 * 4
Swift/XRT 0.0486 0.3-10 keV 2.029 ±0.458 * 4
Swift/XRT 0.0495 0.3-10 keV 1.195 ±0.269 * 4
Swift/XRT 0.0504 0.3-10 keV 2.036 ±0.457 * 4
Swift/XRT 0.0511 0.3-10 keV 1.655 ±0.372 * 4
Swift/XRT 0.052 0.3-10 keV 1.448 ±0.326 * 4
Swift/XRT 0.0527 0.3-10 keV 2.251 ±0.506 * 4
Swift/XRT 0.0534 0.3-10 keV 1.397 ±0.313 * 4
Swift/XRT 0.0543 0.3-10 keV 1.534 ±0.345 * 4
Swift/XRT 0.0551 0.3-10 keV 1.485 ±0.334 * 4
Swift/XRT 0.0563 0.3-10 keV 1.177 ±0.264 * 4
Swift/XRT 0.057 0.3-10 keV 1.763 ±0.396 * 4
Swift/XRT 0.0579 0.3-10 keV 1.214 ±0.273 * 4
Swift/XRT 0.0589 0.3-10 keV 1.319 ±0.296 * 4
Swift/XRT 0.0598 0.3-10 keV 2.397 ±0.537 * 4
Swift/XRT 0.0611 0.3-10 keV 0.752 ±0.17 * 4
Swift/XRT 0.0626 0.3-10 keV 0.991 ±0.223 * 4
Swift/XRT 0.0635 0.3-10 keV 1.401 ±0.315 * 4
Swift/XRT 0.0645 0.3-10 keV 1.804 ±0.406 * 4
Swift/XRT 0.0663 0.3-10 keV 1.175 ±0.266 * 4
Swift/XRT 0.0677 0.3-10 keV 0.969 ±0.219 * 4
Swift/XRT 0.0691 0.3-10 keV 1.173 ±0.264 * 4
Swift/XRT 0.0703 0.3-10 keV 0.789 ±0.178 * 4
Swift/XRT 0.0718 0.3-10 keV 1.479 ±0.332 * 4
Swift/XRT 0.0727 0.3-10 keV 1.233 ±0.277 * 4
Swift/XRT 0.0739 0.3-10 keV 1.194 ±0.262 * 4
Swift/XRT 0.075 0.3-10 keV 0.883 ±0.194 * 4
Swift/XRT 0.114 0.3-10 keV 0.893 ±0.2 * 4
Swift/XRT 0.116 0.3-10 keV 0.503 ±0.114 * 4
Swift/XRT 0.118 0.3-10 keV 0.558 ±0.123 * 4
Swift/XRT 0.136 0.3-10 keV 0.336 ±0.088 * 4
Swift/XRT 0.139 0.3-10 keV 0.493 ±0.111 * 4
Swift/XRT 0.141 0.3-10 keV 0.643 ±0.125 * 4
Swift/XRT 0.182 0.3-10 keV 0.259 ±0.069 * 4
Swift/XRT 0.191 0.3-10 keV 0.262 ±0.045 * 4
Swift/XRT 0.249 0.3-10 keV 0.21 ±0.055 * 4
Swift/XRT 0.254 0.3-10 keV 0.216 ±0.048 * 4
Swift/XRT 0.403 0.3-10 keV 0.055 ±0.015 * 4
Swift/XRT 0.461 0.3-10 keV 0.071 ±0.017 * 4
Swift/XRT 0.556 0.3-10 keV 0.038 ±0.009 * 4
Swift/XRT 1.27 0.3-10 keV 0.004 ±0.001 * 4
XMM/EPIC 2.69 0.3-10 keV 0.002^+0.00054_-0.00049 * 13
XMM/EPIC 6.96 0.3-10 keV 0.00062^+0.00033_-0.00037 13
Swift/UVOT 0.01 V > 18.1 < 200 * 14
Swift/UVOT 0.09 V > 19.6 < 54 * 14
Swift/UVOT 0.1 B 20.8 ±0.3 17 ±4.5 * 14
NOT/MOSCA 0.25 r 21.1 ±0.1 13 ±0.23 * 14
WHT/ACAM 0.25 z 20.4 ±0.1 25 ±1.4 * 14
WHT/ACAM 0.27 i 20.9 ±0.1 16 ±0.87 * 14
CAHA/DLR-MKIII 0.28 V 21.6 ±0.1 8.3 ±0.73 * 14
GTC/OSIRIS 0.29 r 21.3 ±0.1 11 ±0.2 * 14
WHT/ACAM 0.3 g 21.9 ±0.1 6.3 ±0.34 * 14
Gemini-S/GMOS 0.33 g 22.1 ±0.1 5.3 ±0.19 * 14, 15
Magellan/Baade/IMACS 0.34 r 21.6 ±0.1 8.6 ±0.14 * 16
Gemini-S/GMOS 0.38 i 21.2 ±0.1 12 ±1.2 * 15
Gemini-N/GMOS 0.6 z 21.9 ±0.1 6.5 ±0.18 * 15
UKIRT/WFCAM 0.6 K 21.1 ±0.1 14 ±1.3 * 14, 17
Gemini-N/GMOS 0.61 i 22.3 ±0.1 4.5 ±0.12 * 14, 17
UKIRT/WFCAM 0.61 J 21.5 ±0.2 9.3 ±1.3 * 14, 17
Gemini-N/GMOS 0.62 g 23.4 ±0.1 1.6 ±0.06 * 14, 17
Gemini-N/GMOS 0.62 r 22.7 ±0.1 2.9 ±0.08 * 14, 17
Gemini-S/GMOS 1.3 r > 25.2 < 0.3 14, 17
Gemini-S/GMOS 1.3 i > 24.5 < 0.58 14, 17
Magellan/Baade/IMACS 1.3 r > 24.7 < 0.46 16
Gemini-N/GMOS 1.59 g > 25.7 < 0.19 14, 17
Gemini-N/GMOS 1.6 r 25.6 ±0.3 0.21 ±0.05 14, 17
Gemini-N/GMOS 1.61 i > 24.7 < 0.48 14, 17
UKIRT/WFCAM 1.61 J > 22.5 < 3.6 14, 17
Gemini-N/GMOS 1.62 z > 23.9 < 1 14, 17
VLT/HAWK-I 2.32 J > 23.7 < 1.2 14, 17
GTC/OSIRIS 3.26 r > 25.1 < 0.33 14, 17
GTC/OSIRIS 4.26 r > 25.5 < 0.23 14, 17
VLT/HAWK-I 7.3 J > 23.5 < 1.4 14, 17
TNG/DOLoRes 8.23 r > 23.7 < 1.2 14
TNG/DOLoRes 8.25 i > 24.6 < 0.52 14
Magellan/Baade/IMACS 8.41 r > 24.9 < 0.4 13
HST/ACS 9.37 F606W > 27.7 < 0.03 16, 17
HST/WFC3 9.41 F160W 25.8 ±0.2 0.17 ±0.028 16, 17
VLA 0.37 4.9 GHz 120 ±14 * 13
VLA 0.37 6.7 GHz 120 ±9.1 * 13
VLA 1.43 4.9 GHz < 57.0 * 13
VLA 1.43 6.7 GHz 65 ±15 * 13
VLA 1.44 21.8 GHz < 50.0 * 13
VLA 4.32 4.9 GHz < 51.0 * 13
VLA 4.32 6.7 GHz < 26.0 * 13
VLA 84.3 4.9 GHz < 69.0 * 13
VLA 84.3 6.7 GHz < 34.0 * 13
160821B 0.1616 Swift/XRT 0.057 0.3-10 keV 0.15^+0.08_-0.06 * 18
Swift/XRT 0.063 0.3-10 keV 0.05^+0.03_-0.02 * 18
Swift/XRT 0.07 0.3-10 keV 0.05 ±0.02 * 18
Swift/XRT 0.126 0.3-10 keV 0.08^+0.04_-0.03 * 18
Swift/XRT 0.13 0.3-10 keV 0.07^+0.04_-0.03 * 18
Swift/XRT 0.136 0.3-10 keV 0.037^+0.018_-0.014 * 18
Swift/XRT 0.195 0.3-10 keV 0.035 ±0.019 * 18
Swift/XRT 0.285 0.3-10 keV 0.025^+0.014_-0.01 * 18
Swift/XRT 0.327 0.3-10 keV 0.05 ±0.02 * 18
Swift/XRT 0.34 0.3-10 keV 0.029 ±0.011 * 18
Swift/XRT 0.419 0.3-10 keV 0.014 ±0.05 * 18
Swift/XRT 1.02 0.3-10 keV 0.0036 ±0.02 * 18
Swift/XRT 2.33 0.3-10 keV < 0.0046 * 18
XMM/EPIC 3.94 0.3-10 keV 0.0023 ±0.0003 * 18
XMM/EPIC 9.98 0.3-10 keV 0.00029 ±0.00015 * 18
Swift/XRT 15.2 0.3-10 keV < 0.001 * 18
Swift/UVOT 0.002 wh > 21.9 < 6.3 18
Swift/UVOT 0.004 wh > 21.4 < 10 18
NOT/AlFOSC 0.05 r 22.6 ±0.1 3.4 ±0.28 19
NOT/AlFOSC 0.07 r 22.5 ±0.1 3.6 ±0.2 19
GTC/OSIRIS 0.08 r 22.5 ±0.1 3.5 ±0.097 19
GTC/OSIRIS 0.08 i 22.4 ±0.1 4.1 ±0.11 19
GTC/OSIRIS 0.08 z 22.4 ±0.1 4 ±0.074 19
TNG/DOLoRes 0.95 g 24.0 ±0.2 0.9 ±0.13 19
GTC/CIRCE 0.96 H 23.8 ±0.3 1.1 ±0.34 19
WHT/ACAM 1.06 r 23.8 ±0.1 1.1 ±0.1 19
WHT/ACAM 1.08 z 23.6 ±0.2 1.3 ±0.24 19
GTC/CIRCE 1.94 H > 23.8 < 1.1 18
NOT/ALFOSC 1.95 r 24.8 ±0.1 0.44 ±0.04 19
GTC/CIRCE 1.96 J > 24.0 < 0.91 18
NOT/ALFOSC 1.99 z 23.9 ±0.2 1 ±0.18 19
GTC/OSIRIS 2.02 g 25.6 ±0.2 0.21 ±0.038 19
GTC/OSIRIS 2.03 r 24.8 ±0.1 0.44 ±0.04 19
GTC/OSIRIS 2.04 z 24.3 ±0.2 0.69 ±0.13 19
GTC/OSIRIS 2.04 i 24.5 ±0.1 0.58 ±0.053 19
HST/WFC3 3.64 F606W 25.9 ±0.1 0.16 ±0.015 19
HST/WFC3 3.71 F160W 24.4 ±0.1 0.63 ±0.058 19
HST/WFC3 3.76 F110W 24.7 ±0.2 0.48 ±0.088 19
GTC/OSIRIS 3.98 g 26.0 ±0.2 0.14 ±0.027 19
GTC/OSIRIS 4.0 i 25.7 ±0.4 0.19 ±0.07 19
Keck/MOSFIRE 4.3 Ks 24.0 ±0.4 0.88 ±0.3 20
GTC/OSIRIS 4.99 r 26.1 ±0.3 0.13 ±0.036 19
GTC/OSIRIS 6.98 g 26.9 ±0.2 0.063 ±0.012 19
Keck/MOSFIRE 7.5 Ks > 24.0 < 0.9 20
Keck/MOSFIRE 8.4 Ks > 23.9 < 1 20
GTC/OSIRIS 9.97 i > 25.6 < 0.21 19
GTC/OSIRIS 10.0 g > 25.7 < 0.19 18
HST/WFC3 10.0 F606W > 26.1 < 0.13 19
HST/WFC3 10.4 F606W 27.7 ±0.1 0.03 ±0.0028 19
HST/WFC3 10.5 F160W 26.6 ±0.2 0.083 ±0.015 19
HST/WFC3 10.5 F110W 26.7 ±0.2 0.076 ±0.014 19
HST/WFC3 23.2 F110W > 27.3 < 0.044 19
HST/WFC3 23.2 F606W > 27.3 < 0.044 19
HST/WFC3 23.2 F160W > 27.2 < 0.048 19
VLA 0.17 5.0 GHz 40 ±8.9 21
VLA 1.12 5.0 GHz < 16.5 * 21
VLA 10.1 9.8 GHz 16 ±4 * 21
VLA 17.1 9.8 GHz < 33.0 * 21
200522A 0.5536 Chandra/ACIS 5.6 0.3-10 keV 0.00094 ±0.00029 * 21
Chandra/ACIS 23.9 0.3-10 keV < 0.00022 * 21
Swift/XRT 0.0059 0.3-10 keV 0.337 ±0.081 * 21
Swift/XRT 0.048 0.3-10 keV 0.133 ±0.036 * 21
Swift/XRT 0.0558 0.3-10 keV 0.144 ±0.028 * 21
Swift/XRT 0.157 0.3-10 keV 0.031 ±0.009 * 21
Swift/XRT 0.644 0.3-10 keV 0.012 ±0.003 * 21
Swift/XRT 2.74 0.3-10 keV < 0.0058 * 21
LCO/Sinistro 0.67 R > 22.8 < 2.7 21
LCO/Sinistro 0.67 I > 20.4 < 25 21
Gemini/GMOS 2.12 r > 22.3 < 4.4 22
Gemini/GMOS 3.12 r 26.0 ±0.4 0.14 ±0.053 22
HST/WFC3 3.52 F125W 24.5 ±0.1 0.55 ±0.076 21
HST/WFC3 3.66 F160W 24.6 ±0.1 0.51 ±0.071 21
HST/WFC3 16.4 F125W > 27.5 < 0.036 21
VLA 0.234 6.0 GHz 33 ±8.2 * 21
VLA 2.19 6.0 GHz 27 ±7.2 * 21
VLA 2.19 9.8 GHz < 23.7 * 21
VLA 6.15 6.0 GHz < 18.6 * 21
VLA 11.1 6.0 GHz < 14.07 * 21
211211A 0.0763 Swift/XRT 0.0408 0.3-10 keV 3.776 ±0.835 * 4
Swift/XRT 0.0417 0.3-10 keV 4.293 ±0.946 * 4
Swift/XRT 0.0423 0.3-10 keV 6.942 ±1.562 * 4
Swift/XRT 0.0428 0.3-10 keV 4.013 ±0.912 * 4
Swift/XRT 0.0434 0.3-10 keV 9.155 ±1.924 * 4
Swift/XRT 0.044 0.3-10 keV 4.497 ±0.986 * 4
Swift/XRT 0.0446 0.3-10 keV 5.312 ±1.166 * 4
Swift/XRT 0.0454 0.3-10 keV 4.094 ±0.921 * 4
Swift/XRT 0.0459 0.3-10 keV 5.489 ±1.212 * 4
Swift/XRT 0.0466 0.3-10 keV 4.36 ±0.982 * 4
Swift/XRT 0.0472 0.3-10 keV 4.549 ±1.028 * 4
Swift/XRT 0.0481 0.3-10 keV 3.669 ±0.827 * 4
Swift/XRT 0.0487 0.3-10 keV 5.866 ±1.291 * 4
Swift/XRT 0.0493 0.3-10 keV 4.483 ±1.011 * 4
Swift/XRT 0.05 0.3-10 keV 5.576 ±1.229 * 4
Swift/XRT 0.0507 0.3-10 keV 4.415 ±0.979 * 4
Swift/XRT 0.0512 0.3-10 keV 5.058 ±1.141 * 4
Swift/XRT 0.0519 0.3-10 keV 5.11 ±1.148 * 4
Swift/XRT 0.0525 0.3-10 keV 7.009 ±1.574 * 4
Swift/XRT 0.0529 0.3-10 keV 6.135 ±1.348 * 4
Swift/XRT 0.0534 0.3-10 keV 8.18 ±1.834 * 4
Swift/XRT 0.0538 0.3-10 keV 6.687 ±1.5 * 4
Swift/XRT 0.0543 0.3-10 keV 3.79 ±0.859 * 4
Swift/XRT 0.0551 0.3-10 keV 5.29 ±1.186 * 4
Swift/XRT 0.0557 0.3-10 keV 5.954 ±1.341 * 4
Swift/XRT 0.0563 0.3-10 keV 4.819 ±1.084 * 4
Swift/XRT 0.0573 0.3-10 keV 4.415 ±0.715 * 4
Swift/XRT 0.185 0.3-10 keV 0.996 ±0.219 * 4
Swift/XRT 0.186 0.3-10 keV 1.24 ±0.279 * 4
Swift/XRT 0.187 0.3-10 keV 1.115 ±0.252 * 4
Swift/XRT 0.189 0.3-10 keV 1.319 ±0.297 * 4
Swift/XRT 0.191 0.3-10 keV 1.154 ±0.26 * 4
Swift/XRT 0.192 0.3-10 keV 1.009 ±0.228 * 4
Swift/XRT 0.193 0.3-10 keV 1.199 ±0.271 * 4
Swift/XRT 0.194 0.3-10 keV 0.738 ±0.167 * 4
Swift/XRT 0.195 0.3-10 keV 1.208 ±0.272 * 4
Swift/XRT 0.196 0.3-10 keV 1.195 ±0.271 * 4
Swift/XRT 0.198 0.3-10 keV 1.023 ±0.23 * 4
Swift/XRT 0.199 0.3-10 keV 0.791 ±0.178 * 4
Swift/XRT 0.201 0.3-10 keV 0.93 ±0.177 * 4
Swift/XRT 0.252 0.3-10 keV 0.643 ±0.142 * 4
Swift/XRT 0.253 0.3-10 keV 0.792 ±0.179 * 4
Swift/XRT 0.254 0.3-10 keV 0.953 ±0.214 * 4
Swift/XRT 0.256 0.3-10 keV 0.564 ±0.127 * 4
Swift/XRT 0.654 0.3-10 keV 0.141 ±0.037 * 4
Swift/XRT 0.66 0.3-10 keV 0.136 ±0.033 * 4
Swift/XRT 0.721 0.3-10 keV 0.095 ±0.02 * 4
Swift/XRT 0.783 0.3-10 keV 0.115 ±0.03 * 4
Swift/XRT 0.788 0.3-10 keV 0.159 ±0.037 * 4
Swift/XRT 0.87 0.3-10 keV 0.065 ±0.012 * 4
Swift/XRT 1.14 0.3-10 keV 0.026 ±0.006 * 4
Swift/XRT 1.94 0.3-10 keV 0.005 ±0.002 * 4
XMM/EPIC 9.6 0.3-10 keV < 0.0041 * 23
XMM/EPIC 51.0 0.3-10 keV < 0.0021 * 23
MITSuME 0.27 g' 20.3 ±0.2 27 ±4.9 24
MITSuME 0.27 Rc 20.3 ±0.1 29 ±2.6 24
MITSuME 0.27 Ic 20.4 ±0.3 26 ±7.1 24
DOT/4Kx4K 0.42 R 20.2 ±0.1 30 ±4.2 25
DOT/4Kx4K 0.42 I 20.2 ±0.2 30 ±5.8 25
DOT/4Kx4K 0.42 V 20.3 ±0.1 28 ±3.1 25
NEXT 0.43 r 20.2 ±0.1 29 ±1.9 26
DOT/4Kx4K 0.43 B 20.5 ±0.2 22 ±3.9 25
DOT/4Kx4K 0.43 U 20.8 ±0.2 17 ±2.5 25
NEXT 0.45 z 19.9 ±0.3 41 ±11 26
HCT 0.46 R 20.3 ±0.1 29 ±3.4 27
CAHA/CAFOS 0.63 r 20.7 ±0.1 19 ±1.5 25
CAHA/CAFOS 0.64 g 21.2 ±0.1 12 ±0.92 25
CAHA/CAFOS 0.68 i 20.8 ±0.1 18 ±1.3 28
NOT/ALFOSC 0.69 g 21.0 ±0.1 14 ±0.53 28
NOT/ALFOSC 0.69 r 20.8 ±0.1 17 ±0.79 28
NOT/ALFOSC 0.7 i 20.9 ±0.1 16 ±0.88 28
LCO/Sinistro 0.7 R 21.0 ±0.1 14 ±1.2 29
GMG 1.4 r > 22.0 < 6 30
DOT/4Kx4K 1.41 R 22.5 ±0.1 3.5 ±0.29 25
DOT/4Kx4K 1.42 I 22.1 ±0.1 5.2 ±0.72 25
DOT/4Kx4K 1.43 V 23.1 ±0.2 2.1 ±0.37 25
GIT 1.43 r' > 21.1 < 13 27
CAHA/CAFOS 1.68 i 22.6 ±0.1 3.4 ±0.41 28
Zeiss-1000 2.56 Rc > 23.1 < 2.2 31
CAHA/CAFOS 2.68 i 24.6 ±0.3 0.54 ±0.17 28
Gemini/NIRI 4.07 K 22.4 ±0.1 3.9 ±0.51 28
DOT/4Kx4K 4.42 R > 23.9 < 1 32
TNG/NICS 4.7 H > 21.9 < 6.4 23
Gemini/NIRI 5.1 K 22.4 ±0.2 4 ±0.62 28
Gemini/GMOS 5.11 i 26.0 ±0.3 0.14 ±0.039 28
MMT/MMIRS 5.96 J 24.2 ±0.3 0.78 ±0.25 28
Gemini/GMOS 6.08 i > 25.5 < 0.23 28
MMT/MMIRS 6.94 K 23.4 ±0.3 1.5 ±0.42 28
MMT/MMIRS 7.98 K 23.8 ±0.3 1.1 ±0.31 28
MMT/MMIRS 9.92 K > 22.1 < 5.2 28
NOT/ALFOSC 17.6 i > 24.7 < 0.49 28
CAHA/CAFOS 19.6 i > 24.1 < 0.8 28
MMT/Binospec 46.9 g > 24.7 < 0.47 28
MMT/Binospec 47.0 r > 24.5 < 0.59 28
MMT/Binospec 47.0 z > 23.9 < 0.98 28
Gemini/GMOS 55.0 i > 26.8 < 0.071 28
GTC/EMIR 66.0 Ks > 22.0 < 5.8 28
LBT/LUCI 88.8 Ks > 24.6 < 0.52 28
MMT/MMIRS 97.8 K > 24.3 < 0.68 28
HST/WFC3/IR 122.0 F140W > 27.2 < 0.048 28
HST/ACS/WFC 124.0 F606W > 27.8 < 0.029 28
VLA 6.27 6.0 GHz < 9.6 * 28
230307A 0.065 Swift/XRT 1.12 0.3-10 keV 0.12^+0.05_-0.04 * 33
Swift/XRT 1.19 0.3-10 keV 0.057 ±0.014 * 33
Swift/XRT 1.71 0.3-10 keV 0.029^+0.009_-0.008 * 33
Swift/XRT 4.71 0.3-10 keV 0.012^+0.005_-0.004 * 33
Swift/XRT 9.59 0.3-10 keV < 0.0071 * 33
XMM/EPIC 13.5 0.3-10 keV 0.00118^+0.00017_-0.00018 * 33
Chandra/ACIS 25.3 0.3-10 keV 0.00034 ±0.00017 * 34
XMM/EPIC 37.0 0.3-10 keV 0.00014^+0.00006_-0.0004 * 34
TESS 0.0178 Ic 18.0 ±0.1 230 ±15 33
TESS 0.0445 Ic 17.7 ±0.1 310 ±17 33
TESS 0.0665 Ic 17.6 ±0.1 320 ±18 33
TESS 0.0873 Ic 17.7 ±0.1 310 ±17 33
TESS 0.107 Ic 17.5 ±0.1 370 ±21 33
TESS 0.132 Ic 18.1 ±0.1 220 ±14 33
TESS 0.164 Ic 18.0 ±0.1 240 ±15 33
TESS 0.195 Ic 18.1 ±0.1 210 ±15 33
TESS 0.228 Ic 18.1 ±0.1 200 ±13 33
TESS 0.264 Ic 18.3 ±0.1 180 ±13 33
RASA36 0.435 r 19.2 ±0.6 74 ±41 33
KMTNet 0.752 R 20.6 ±0.1 21 ±0.98 33
Swift/UVOT 0.99 U > 21.1 < 13 34
KMTNet 1.12 R 20.7 ±0.1 19 ±0.87 33
Swift/UVOT 1.16 wh 22.2 ±0.2 4.6 ±0.96 34
PRIME 1.2 H 20.2 ±0.1 31 ±4.3 33
PRIME 1.2 Y 20.5 ±0.1 23 ±2.9 33
PRIME 1.2 z > 20.4 < 26 33
PRIME 1.2 J 20.7 ±0.1 20 ±2 33
Swift/UVOT 1.25 wh 22.3 ±0.3 4.4 ±1.4 34
VLT/ULTRACAM 1.43 r 20.7 ±0.1 19 ±2.6 34
VLT/ULTRACAM 1.43 g > 20.7 < 19 34
VLT/ULTRACAM 1.43 u > 19.7 < 48 34
Swift/UVOT 1.61 wh > 22.6 < 3.3 34
KMTNet 1.82 I 21.1 ±0.1 14 ±1.3 33
KMTNet 1.82 R 21.2 ±0.1 12 ±0.54 33
KMTNet 2.13 R 21.6 ±0.1 8.4 ±0.39 33
KMTNet 2.14 I 21.6 ±0.1 8.7 ±0.8 33
SOAR 2.36 z' 21.4 ±0.1 9.9 ±1.1 33
VLT/VST 2.37 r 21.8 ±0.2 6.7 ±1.2 34
CTIO 2.38 R 22.3 ±0.1 4.5 ±0.25 33
CTIO 2.39 I 21.9 ±0.2 6.6 ±1.2 33
VLT/ULTRACAM 2.41 i 21.7 ±0.1 7.7 ±0.64 34
VLT/ULTRACAM 2.41 g 22.4 ±0.3 4.2 ±0.99 34
VLT/ULTRACAM 2.41 u > 21.2 < 12 34
Swift/UVOT 2.43 wh > 22.0 < 5.8 34
KMTNet 2.77 R 22.3 ±0.1 4.5 ±0.25 33
KMTNet 3.12 R 22.4 ±0.1 3.9 ±0.22 33
SOAR 3.36 z' 22.5 ±0.2 3.6 ±0.76 33
VLT/ULTRACAM 3.39 i 21.5 ±0.2 9.3 ±1.5 34
VLT/ULTRACAM 3.39 g > 22.6 < 3.3 34
VLT/ULTRACAM 3.39 u > 20.8 < 17 34
KMTNet 4.12 R 23.1 ±0.1 2.1 ±0.13 33
KMTNet 4.76 R 23.4 ±0.1 1.5 ±0.11 33
Swift/UVOT 4.89 wh > 23.6 < 1.3 34
KMTNet 5.12 R 23.4 ±0.1 1.6 ±0.12 33
KMTNet 6.13 R 23.5 ±0.1 1.5 ±0.11 33
VLT/FORS2 6.42 z 23.2 ±0.1 1.8 ±0.19 34
VLT/FORS2 6.42 B > 26.1 < 0.13 34
Gemini/GMOS 7.4 r 24.7 ±0.1 0.48 ±0.044 33
XSH 7.4 K 22.0 ±0.6 5.8 ±3.3 33
Gemini/GMOS 8.34 z 24.2 ±0.2 0.76 ±0.14 33
Gemini/FLAMINGOS-2 10.3 K 22.5 ±0.1 3.6 ±0.5 34
VLT/FORS2 10.4 I > 24.0 < 0.91 34
VLT/FORS2 11.4 I > 25.2 < 0.3 34
Gemini/FLAMINGOS-2 11.4 K 22.3 ±0.1 4.5 ±0.62 34
Gemini/FLAMINGOS-2 15.3 J > 23.4 < 1.5 33
Gemini/GMOS 15.4 r > 24.8 < 0.46 33
Gemini/FLAMINGOS-2 15.4 K > 22.1 < 5.2 34
Gemini/FLAMINGOS-2 17.3 J > 22.9 < 2.4 33
VLT/FORS2 17.4 R > 25.2 < 0.3 34
VLT/FORS2 19.4 R > 25.8 < 0.17 34
VLT/HAWK-I 19.4 K > 23.4 < 1.6 34
JWST/NIRCam 28.8 F115W 28.5 ±0.1 0.014 ±0.00093 34
JWST/NIRCam 28.8 F277W 26.2 ±0.1 0.12 ±0.0011 34
JWST/NIRCam 28.9 F150W 28.1 ±0.1 0.021 ±0.0023 34
JWST/NIRCam 28.9 F356W 25.4 ±0.1 0.25 ±0.0023 34
JWST/NIRCam 28.9 F070W 29.0 ±0.2 0.0094 ±0.0017 34
JWST/NIRCam 28.9 F444W 24.6 ±0.1 0.52 ±0.0047 34
JWST/NIRCam 61.5 F115W 29.8 ±0.3 0.0044 ±0.0013 34
JWST/NIRCam 61.5 F444W 27.0 ±0.1 0.059 ±0.0022 34
JWST/NIRCam 61.5 F150W 29.2 ±0.2 0.0073 ±0.0011 34
JWST/NIRCam 61.5 F277W 28.3 ±0.1 0.017 ±0.0019 34
ATCA 4.46 5.5 GHz < 84 * 34
ATCA 4.46 9.0 GHz 92 ±22 * 34
MeerKAT 6.64 1.3 GHz < 390 * 34
MeerKAT 6.75 3.1 GHz < 140 * 34
ATCA 10.7 16.7 GHz < 114 * 34
ATCA 10.7 21.2 GHz < 165 * 34
ATCA 10.7 5.5 GHz 92 ±36 * 34
ATCA 10.7 9.0 GHz 83 ±26 * 34
MeerKAT 15.7 1.3 GHz < 350 * 34
MeerKAT 16.0 3.1 GHz < 120 * 34
ATCA 25.6 16.7 GHz < 81 * 34
ATCA 25.6 21.2 GHz < 219 * 34
ATCA 25.6 5.5 GHz < 63 * 34
ATCA 25.6 9.0 GHz < 63 * 34
MeerKAT 40.9 3.1 GHz < 93 * 34
Observations are not corrected for Galactic or local extinction.
† Starred observations were employed in our afterglow analysis.
References: (1) , (2) , (3) , (4) UKSSDC <cit.>, (5) , (6) , (7) , (8) , (9) , (10) , (11) , (12) , (13) , (14) , (15) , (16) , (17) , (18) , (19) , (20) , (21) , (22) , (23) , (24) , (25) , (26) , (27) , (28) , (29) , (30) , (31) , (32) , (33) , (34) .
Absorbing state transitions with long-range annihilation

NICHOLAS O'DEA, SAYAK BHATTACHARJEE
Department of Physics, Stanford University, Stanford, CA 94305, USA
and
SARANG GOPALAKRISHNAN
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
and
VEDIKA KHEMANI
Department of Physics, Stanford University, Stanford, CA 94305, USA

Received 16 July 2024; accepted 04 September 2024
========================================================

§ ABSTRACT
We introduce a family of classical stochastic processes describing diffusive particles undergoing branching and long-range annihilation in the presence of a parity constraint. The probability for a pair-annihilation event decays as a power-law in the distance between particles, with a tunable exponent. Such long-range processes arise naturally in various classical settings, such as chemical reactions involving reagents with long-range electromagnetic interactions. They also increasingly play a role in the study of quantum dynamics, in which certain quantum protocols can be mapped to classical stochastic processes with long-range interactions: for example, state preparation or error correction processes aim to prepare ordered ground states, which requires removing point-like excitations in pairs via non-local feedback operations conditioned on a global set of measurement outcomes. We analytically and numerically describe features of absorbing phases and phase transitions in this family of classical models as pairwise annihilation is performed at larger and larger distances. Notably, we find that the two canonical absorbing-state universality classes – directed-percolation and parity-conserving – are endpoints of a line of universality classes with continuously interpolating critical exponents.
Introduction–
Absorbing state transitions from a fluctuating active phase to an inactive absorbing state are widespread in non-equilibrium classical settings, from chemical reactions to epidemic spreading <cit.>. These transitions also play a growing role in quantum interactive dynamics in which a classical observer monitors a quantum system and performs feedback conditioned on measurement outcomes <cit.>.
A canonical task in quantum information theory is to prepare a desired many-body entangled state, such as a topologically ordered ground state of the 2D toric code <cit.> or a Schrodinger-cat ground state of the 1D Ising model. A typical “low-entropy" configuration around the desired state often looks like a dilute gas of point-like excitations (such as anyons in the 2D toric code or domain-walls in a 1D Ising model). In the simplest cases, these excitations may only be annihilated or created in pairs, and they may be separated freely without energy penalty. Canonical measurement-based state-preparation protocols make measurements everywhere in the system to deduce the locations of excitations, and apply non-local “string-like" recovery operations to remove excitations in pairs <cit.>. This process can be mapped to a classical stochastic process describing the dynamics of excitations <cit.>. However, crucially, the long-range nature of feedback means that pairs of particles can effectively instantaneously annihilate over long distances.
Long-range interactions are also natural in classical models showing absorbing state transitions, in settings ranging from chemical reactions involving charged particles, to predator-prey models incorporating movement patterns of different species (see Ref. <cit.> for a review discussing long-range hopping and branching, and Refs. <cit.> for recent work on biased hopping). However, models with long-range annihilation have largely been neglected in the classical absorbing state literature. An exception is Ref. <cit.> by Park and Deem, which, however, does not consider branching processes and does not study the transition out of the absorbing phase.
To address this hole in the literature, we introduce a family of stochastic reaction-diffusion processes describing diffusive particles undergoing branching and long-range annihilation in the presence of a parity constraint which enforces that particles are created and annihilated in pairs (Fig. <ref>(a)-(c)). By tuning a long-range exponent κ, we analytically and numerically probe how the features of the absorbing phase and the transition change as pairwise annihilation is successfully performed at larger and larger distances[We note that a key distinction between these absorbing state models and conventional descriptions of quantum error correction is that a noisy quantum system allows defects to be created in pairs, so that the defect-free (empty) state is not an absorbing state. This is discussed further in <cit.>, where we also discuss various other classes of quantum models that do display the same universality classes as the classical models we study.].
Our model uncovers a continuum of unusual universality classes. Two paradigmatic universality classes of absorbing state transitions are directed percolation (DP) and parity-conserving (PC), respectively corresponding to local models without and with a parity constraint. Having a parity constraint or not naively seems like a binary question, making DP and PC discrete and ostensibly unconnected universality classes. However, our model realizes a line of universality classes with exponents that continuously interpolate between DP and PC despite a global parity constraint (Fig. <ref>d)), greatly enriching the set of known critical phenomena in non-equilibrium phase transitions.
In long-range equilibrium and non-equilibrium systems, continuously varying exponents are usually associated with an approach to mean-field physics under sufficiently-long-range interactions (e.g. section 3 of Ref. <cit.>). However, we find in numerics on a chain that the transition in the long-range limit of our model corresponds to (1+1)d DP, not mean-field DP. Below, we do not interpret the long-range interactions as changing the dimensionality of the system but rather as weakening the parity constraint to no longer have local consequences.
Our numerics are for one dimension, but we discuss scaling arguments that should be valid in the absorbing phase in all dimensions. We probe the transition numerically, and we outline a field theory program to understand its properties. We derive the field theory describing our model in the Supplementary Material <cit.> and discuss some of its interesting features in the main text. Though field theories usually have actions that are polynomial in the fields, we have a nontrivial exponential term in our action. We show that the resulting mean-field theory in the absorbing phase crucially relies on this exponential term and matches the predictions of our simple scaling arguments in d ≥ 2.
Model and phase diagram–
We study a classical stochastic process describing a system of particles undergoing diffusion, branching and annihilation subject to a parity constraint which conserves the number of particles modulo two. Our model consists of a mixture of deterministic and stochastic steps, illustrated in Fig. <ref>(a)-(c) for a system in one spatial dimension. Each time step begins by selecting a particle at random, illustrated by the green outline. With probability p_a, we continue to the annihilation stage in (a), where the particle's distance r to the nearest excitation is calculated and with probability 1/r^κ the two particles are annihilated. Thus, p_a sets the rate at which pairing is attempted, while κ controls the nonlocality of the annihilation. Because we only attempt to annihilate a particle with the nearest particle rather than attempting pairwise annihilation with all other particles, the thermodynamic limit is already well-defined without any system-size dependent rescaling of the attempted annihilation rate (e.g. as in Ref. <cit.>), even for the extreme long-range case of κ=0.
If, with probability 1-p_a, annihilation is not selected, then (b) with probability p_b the particle branches (produces two offspring on two different adjacent sites) or (c) with probability 1-p_b, it attempts to hop to a neighboring site. A hard-core constraint is enforced by aborting the attempt to branch or hop if more than one particle would occupy the same site. At the end of each time step, in a system with N particles, the time increments by 1/N to model these processes occurring in parallel; the more particles, the more of these processes happen during a given time interval. Note that a system with no particles is an absorbing state without dynamics; this state can be reached but never left. In our numerics, we work in one spatial dimension and fix p_b = 0.2, while we explore the phase diagram shown in Fig. <ref>d) with parameters p_a and κ. Throughout, we consider initial states with an even number of particles so that in principle the absorbing state can be reached; in particular, we choose even-length chains and initialize in the fully occupied state.
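The following is a minimal sketch (in Python) of one such serial update step; the names p_a, p_b, and kappa follow the notation above, while implementation details such as the O(N) nearest-particle search are our own simplifications rather than the optimized code behind the figures.

```python
import random

def step(occ, L, p_a, p_b, kappa):
    """One serial update on a ring of L sites; occ is the set of occupied
    sites (the particles). Returns the time increment 1/N."""
    N = len(occ)
    if N == 0:
        return 0.0  # empty absorbing state: no dynamics
    i = random.choice(tuple(occ))
    if random.random() < p_a:
        if N > 1:
            # nearest other particle on the ring and its distance r >= 1
            j = min((k for k in occ if k != i),
                    key=lambda k: min((k - i) % L, (i - k) % L))
            r = min((j - i) % L, (i - j) % L)
            if random.random() < 1.0 / r**kappa:
                occ.discard(i); occ.discard(j)  # long-range pair annihilation
    elif random.random() < p_b:
        # branching: offspring on the two adjacent sites, aborted if blocked
        left, right = (i - 1) % L, (i + 1) % L
        if left not in occ and right not in occ:
            occ.add(left); occ.add(right)
    else:
        # unbiased hop to a neighboring site, aborted if occupied (hard core)
        target = (i + random.choice((-1, 1))) % L
        if target not in occ:
            occ.discard(i); occ.add(target)
    return 1.0 / N
```

Iterating this step from the fully occupied state and averaging len(occ)/L over realizations yields the density curves n(t) analyzed below.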
When the annihilation rate p_a is low, the density of particles n(t) is asymptotically constant up to a time growing exponentially in system size. This “active" phase is denoted by “I" in Fig. <ref>d). When p_a is sufficiently large, the density of particles decays to zero in a time at most polynomial in system size. The functional form of the decay n(t) ∼ t^{-α} in these “absorbing" phases is set by κ (see phases II, III, IV in Fig. <ref>d)). Between the active and absorbing phases is a critical line of absorbing state transitions where the density decays as n(t) ∼ t^{-δ}; we denote the critical decay exponent δ to differentiate it from the absorbing phase decay exponent α. We summarize our results for the phase diagram in Fig. <ref>d), and provide a detailed exposition below.
Absorbing phase analytics and numerics–
The properties of the absorbing phases are controlled by the pure annihilation fixed point and hence can be understood through scaling arguments in the limit of no branching.
Though our main focus in this work is the case of d=1, we also consider the characteristics of the absorbing phase in higher dimensions.
We note that section 3.1 of Ref. <cit.> discusses a related model with both long-range annihilation and diffusion, and in particular notes an analogous phase to our PC absorbing phase II in Fig. <ref>. However, that section restricts to κ > min(2,d) (and the paper restricts to κ > d more generally), and so does not obtain our regimes III and IV.
Furthermore, the existence and properties of the absorbing state transition (our next section) require branching and are not considered in Ref. <cit.>.
Our model also has an emergent cutoff on the range of the interaction. By only pairing a particle to the particle closest to it, we avoid diverging annihilation rates in the thermodynamic limit even when the nonlocality of the annihilation is maximal. Together, branching and the natural cutoff give us access to a novel regime of the phase diagram; this otherwise hidden part of the phase diagram contains the novel phase transitions that interpolate all the way between the PC and DP universality classes.
First, consider the short-range limit of κ = ∞ where particles must be neighboring in order for annihilation to proceed. Annihilation is then diffusion-limited, as particles must effectively collide in order to annihilate. In this limit, there is a standard set of arguments for the decay of n(t) in terms of the number of unique sites visited by a random walker in a time t. The number of unique sites visited controls the number of particles a given particle can encounter as it diffuses and hence the number of annihilation events (see chapter 13 of Ref. <cit.> for more details, and Ref. <cit.> for alternative arguments using field theory).
In particular, in d dimensions
n(t) ∼ t^{-d/2}    d < 2
t^{-1} log(t)    d = 2
t^{-1}    d > 2
On the other hand, for κ < ∞, there will be a steady background decay of particles from the long-range annihilation. Suppose κ is small so that the particle annihilation rate is dominated by events where the particles are not necessarily close. In d dimensions, the typical interparticle distance will scale on the order of r ∼ n(t)^{-1/d} and hence the typical probability for a pairing to result in annihilation will scale as n(t)^{κ/d}. The number of annihilation events per unit time will scale as the number of particles times this probability, giving an effective rate equation ṅ(t) ∼ -n(t)^{1+κ/d}. Solving this equation gives
n(t) ∼ t^{-d/κ}    κ > 0
e^{-ct}    κ = 0
With diffusion turned off, this result (for κ>0) has been rigorously proven in Ref. <cit.>, and further discussed in a renormalization group context in Ref. <cit.> (for κ > d).
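For completeness, the κ > 0 case follows by separating variables: integrating dn/n^{1+κ/d} = -c dt gives
n(t) = [ n(0)^{-κ/d} + (κ/d) c t ]^{-d/κ} ∼ t^{-d/κ} at late times,
and taking κ → 0 at fixed c in this expression recovers the exponential decay e^{-ct}.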
We now make the simple assumption that the process (short-range annihilation induced by diffusion vs. long-range annihilation between distant particles) which leads to a faster decay is the one that (asymptotically) dominates the dynamics. Comparing Eqs. <ref> and <ref>, we get in d=1:
n(t) ∼ t^{-1/2}    κ ≥ 2 (II)
t^{-1/κ}    0 < κ ≤ 2 (III)
e^{-ct}    κ = 0 (IV)
The parenthetical labels (II/III/IV) are shown in the corresponding regions of the phase diagram in Fig. <ref>d).
We numerically confirm this picture in Fig. <ref> via linear fits of log(n(t)) vs log(t). There is good agreement except for a small, systematic discrepancy around κ ≈ 2.6. We believe this discrepancy is caused by asymptotically subleading contributions from the background long-range annihilation in this otherwise diffusion-limited regime, see <cit.>. For example, κ = 3 in the inset shows an ultimately-transient slower decay at early times.
Similarly, for d ≥ 2,
n(t) ∼ t^{-1}    κ ≥ d
t^{-d/κ}    0 < κ ≤ d
e^{-ct}    κ = 0
We save the numerical verification of this prediction in d ≥ 2 for future work. To summarize, for κ > max(2,d), the model is effectively short-ranged and the absorbing phase is diffusion-limited, while for κ < max(2,d), absorption occurs faster and is dominated by long-range annihilation. In particular, in d=1, the decays in the absorbing phases at κ ≥ 2 and κ = 0 are consistent with the PC and DP absorbing phases of n(t) ∼ t^{-1/2} and n(t) ∼ e^{-ct} respectively.
We can interpret these results in terms of the parity constraint. Conventionally, the DP and PC universality classes are distinguished by the absence or presence of a discrete parity symmetry. In our model, the global parity constraint holds exactly, but it no longer has consequences locally when κ = 0. In particular, looking at just a local subsystem, long-range annihilation will violate the parity of the number of particles in the subsystem even as it preserves the global parity. This allows a model with a global parity constraint to be described by symmetry-free DP rather than PC physics. Thus we expect that the κ = 0 model is described by DP, including at the phase transition. We confirm this numerically in the next section.
Transition numerics–
In Fig. <ref>, we find that the critical decay exponent δ is consistent with DP's δ ≈ 0.159 at κ = 0 and PC's δ ≈ 0.285 at κ ≥ 2, and the exponents for 0 < κ < 2 appear to interpolate monotonically between the two, indicating a line of critical fixed points with continuously varying exponents. From our numerics, we cannot rule out that the critical exponents at the transition begin to change from their PC values at a κ_c slightly different from 2. However, our numerics are consistent with the idea that the κ at which the phase behavior changes (κ = 2 from the previous section) is the same κ at which the critical behavior changes.
In the inset of Fig. <ref> we show an example of our method of extracting the critical point p_a^c and the exponent δ from the dynamics. At large system sizes and for p_a close to p_a^c, the dynamics will appear to follow the algebraic decay of the critical model. However, when p_a is detuned to be above or below the transition, the dynamics at late times will respectively drift to the different asymptotic behaviors of the absorbing and active phases. By calculating a “running" δ(t) ≡ log_{10}[n(t) / n(10t)], we see a horizontal separatrix between the absorbing and active phases; the p_a and δ(t) (after finite-time transients) of the separatrix give our estimates for p_a^c and δ.
See Ref. <cit.> for a succinct discussion of this and other numerical probes of absorbing state critical behavior and chapter 4 of Ref. <cit.> for a general discussion. Note that both DP and PC have a large z critical exponent (and so should the interpolating universality classes), meaning that systematic late-time finite-size effects controlled by t/L^z should only appear at much later times than we probe.
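A minimal sketch of this running-exponent diagnostic, assuming arrays of times and ensemble-averaged densities from the simulation (the log-log interpolation scheme is our own choice):

```python
import numpy as np

def running_delta(t, n):
    """Running decay exponent delta(t) = log10[n(t) / n(10 t)].

    t, n: 1d arrays of times and ensemble-averaged densities. We
    interpolate log n against log t to evaluate n at 10*t; a curve that
    stays horizontal at late times signals the critical point, and its
    height estimates delta.
    """
    logt, logn = np.log10(t), np.log10(n)
    mask = logt + 1.0 <= logt[-1]                 # need data out to 10*t
    logn_at_10t = np.interp(logt[mask] + 1.0, logt, logn)
    return t[mask], logn[mask] - logn_at_10t
```

Scanning the annihilation rate p_a, the curve that remains flat at the latest times locates p_a^c, and its plateau value gives the estimate of δ.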
In the following sections, we briefly outline a field theory program to analytically estimate the critical exponents and critical dimensions of these newly uncovered universality classes.
Field theory action–
Cardy and Tauber gave a systematic field theory treatment of critical phenomena and phase behavior in models with local, pairwise annihilation processes <cit.> (see also the review <cit.>). The starting point is the derivation of the field theory directly from the stochastic process <cit.> by treating the real-time stochastic master equation as an imaginary-time Schrodinger equation. The new degrees of freedom are complex fields ϕ(x,t) and ϕ̄(x,t); statistical expectation values of n(x,t) can be computed via expectation values of ϕ(x,t) in the field theory (e.g. 𝔼[n(x,t)] = ⟨ϕ(x,t)⟩, 𝔼[n^2(x,t)] = ⟨ϕ^2(x,t) + ϕ(x,t)⟩).
Approximations like derivative expansions and the continuum limit for field theories near criticality simplify the field theory while maintaining the universality of the phase transition and phases, allowing for renormalization group calculations of critical exponents. However, the field theory keeps a few peculiar signatures of its origins as a classical stochastic process: conservation of probability ultimately forces the action to vanish whenever ϕ̄ = 1, whatever the value of ϕ; processes changing the number of particles make the theory explicitly non-Hermitian, which manifests as terms in the action carrying an unbalanced number of ϕ and ϕ̄ fields. It is useful to identify ϕ(x,t) with n(x,t) at the mean-field level and to build intuition for ϕ(x,t) from n(x,t), but this identification cannot be made rigorously beyond the mean-field level <cit.>.
Using a method pioneered in <cit.>, we extend their results to derive a field theory for our nonlocal annihilation process in <cit.>. Our action density ℒ includes standard diffusion ϕ̄(∂_t - D∇^2)ϕ and branching B(1-ϕ̄^2)ϕ̄ϕ terms, but it also contains an annihilation term that is exponentially suppressed in particle density:
ℒ(𝐱,t) ∋ A ∫ (d^d𝐲/|𝐱-𝐲|^κ) [ (1-ϕ̄(𝐱,t)ϕ̄(𝐲,t)) ϕ(𝐱,t)ϕ(𝐲,t)
exp( -∫_{r=0}^{r=|𝐱-𝐲|} d^d𝐫 ϕ̄(𝐱+𝐫,t)ϕ(𝐱+𝐫,t) ) ]
The exponential suppression comes directly from the constraint that a particle can only be paired with the closest particle <cit.>; heuristically, the exponential term is accounting for the low likelihood that two far away particles have no particles in between them.
We believe this exponential factor plays a fundamental role in both the critical and absorbing phase behavior at small . This is straightforward to see at the level of the saddle-point approximation or Euler-Lagrange equations, which we will refer to as mean-field theory.
Mean-field theory–
The Euler-Lagrange equations of this action, under assumptions and identifications noted in <cit.>, reduce to
ṅ(𝐱,t) = D ∇^2 n(𝐱,t) + 2B n(𝐱,t) - A n(𝐱,t) [
∫ d^d𝐲 (n(𝐲,t)/|𝐱-𝐲|^κ) ( e^{-∫_{r=0}^{|𝐱-𝐲|} d^d𝐫 n(𝐱+𝐫,t)} + 𝐱 ↔ 𝐲 ) ]
We will further assume a uniform initial condition; this equation is translationally invariant, and so n(𝐱,t) will then be independent of 𝐱 at all times. Introducing 𝐳 = 𝐱-𝐲 and the volume V_d of a d-dimensional unit ball, the dynamics further simplify to
ṅ(t) = 2B n(t) - 2A n(t)^2 ∫ d^d𝐳 z^{-κ} e^{-V_d z^d n(t)}
For κ ≤ d, the integral needs the infrared regulator provided by the decaying exponential, which forces the resulting integral to depend strongly on n. When κ > d, there is no longer an infrared divergence, but there is an ultraviolet divergence; this small-z divergence in the integral is regulated by the lattice spacing, and so ∫ d^d𝐳 z^{-κ} e^{-V_d z^d n(t)} ∼ const + O(n(t)). Together, these give (with a non-universal constant c depending on d, κ, and the lattice spacing a)
ṅ(t) ∼ 2B n(t) - cA n(t)^{1+κ/d}    κ < d
n(t)^2 log(1/n(t))    κ = d
n(t)^2    κ > d
To reiterate, at the level of mean-field theory, the exponential in the annihilation term prevents the infrared divergence of the long-range annihilation double-integral at small κ by introducing a density-dependent cutoff length.
Note that mean field theory only holds in sufficiently high dimensions, if at all. Indeed, our mean field theory does not predict an absorbing phase for any κ > 0 at any positive branching B; a positive coefficient on the dominant term n(t) forces a non-vanishing steady state n(t) ∼ const. This was argued (in the case of κ = ∞) to be an artifact of the infinite-dimensional nature of mean field theory <cit.>; a special upper critical dimension d_c was identified below which the absorbing phase is indeed stable. For κ = ∞, d_c was found to be 4/3 at one-loop order <cit.>; on the other hand, our mean-field equations at κ = 0 show a transition at nonzero branching and are hence consistent with d_c = ∞. We thus anticipate that d_c will be a function of κ.
Despite the omission of the absorbing phase at nonzero branching for κ > 0, the mean-field predictions for the absorbing phase – i.e. solving Eq. <ref> when B=0 – match those of our scaling arguments when d ≥ 2 in Eq. <ref> (up to logarithmic corrections when κ = d).
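As a consistency check, the uniform mean-field equation can be integrated numerically. The sketch below uses our own quadrature and short-distance regularization choices (which are not fixed by the continuum theory) and recovers the predicted t^{-d/κ} absorbing-phase decay in d = 2:

```python
import math
import numpy as np
from scipy.integrate import quad, solve_ivp

def mean_field_rhs(t, y, B, A, kappa, d, a=1.0):
    """dn/dt = 2B n - 2A n^2 * Int_a^inf S_d z^(d-1-kappa) exp(-V_d z^d n) dz,
    where a is a short-distance cutoff standing in for the lattice spacing."""
    n = max(y[0], 1e-300)
    V_d = math.pi**(d / 2) / math.gamma(d / 2 + 1)   # volume of the unit d-ball
    S_d = d * V_d                                    # area of the unit sphere
    I, _ = quad(lambda z: S_d * z**(d - 1 - kappa) * math.exp(-V_d * z**d * n),
                a, math.inf, limit=200)
    return [2 * B * n - 2 * A * n**2 * I]

# absorbing phase (B = 0) in d = 2 with kappa = 1: expect n(t) ~ t^(-d/kappa)
sol = solve_ivp(mean_field_rhs, (1e-2, 1e4), [0.5], args=(0.0, 1.0, 1.0, 2),
                t_eval=np.geomspace(1e-2, 1e4, 60), rtol=1e-8, atol=1e-12)
print(np.polyfit(np.log(sol.t[-20:]), np.log(sol.y[0][-20:]), 1)[0])  # ~ -2
```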
Discussion–
Our work mapped out the phase diagram of diffusive particles undergoing branching and long-range annihilation, and we uncovered a new line of universality classes connecting the naively unrelated DP and PC classes.
We characterized the phases and phase transition by how the density of the particles decays with time starting from a completely occupied state. Other probes of critical exponents are associated with survival time and growth of small seeds of particles in otherwise empty systems; we leave these complementary probes to future work.
We also derived a field theory describing the unusual transitions and absorbing phases and showed that the resulting mean field theory reproduces our simple scaling arguments for absorbing phase behavior in d ≥ 2. We view this agreement in high dimensions as signaling that field theory is an appropriate tool for understanding properties of the absorbing phase and the transition. Further studies of this unusual field theory is interesting in its own regard. However, to study low dimensions and go beyond mean-field theory, it is necessary to consider renormalization.
We note that both perturbative RG (once appropriate counterterms are identified for the exponential factor) in the style of Cardy and Tauber <cit.>, and non-perturbative RG in the style of section 3.4.2 of the review <cit.> appear promising.
Finally, we note that our study was inspired by considering quantum error correction processes which often require completely non-local feedback operations (κ =0) in various canonical cases. Despite this broad similarity, there are also crucial differences between descriptions of error correction and absorbing state transitions <cit.>, and it would be interesting to explore how the different universality classes of absorbing state transitions as a function of κ may inform error correction protocols with varying degrees of non-locality in feedback.
Acknowledgements– We are grateful to Su-Chan Park for useful discussions about scaling arguments and mean field theory, Alan Morningstar for prior collaboration on related work, and Pavel Nosov and Samuel Alipour-fard for discussions about field theory.
This work was supported by the Office of Naval Research Young Investigator Program (ONR YIP) under Award Number N00014-24-1-2098 (S.B and V.K.).
V.K. also acknowledges support from the Alfred P. Sloan Foundation through a Sloan Research Fellowship and the Packard Foundation through a Packard Fellowship in Science and Engineering. N.O.D. acknowledges support
from the ARCS Foundation for ARCS Scholar funding.
Numerical simulations were performed on Stanford Research Computing Center's Sherlock cluster. Formulating the models was supported in part by the US Department of Energy Co-design Center for Quantum Advantage (C2QA)
under contract number DE-SC0012704 (S.G.).
§ FIELD THEORETIC ANALYSIS
§.§ Derivation of the field theory
In this appendix, we provide explicit steps for the derivation of the field theory described in the main text. We follow the treatment in Refs. <cit.> and <cit.>; much of this derivation is standard, but we find it worthwhile to spell out relevant details. The main new contribution in this appendix is our treatment of the “only annihilate with the nearest particle" constraint on long-range annihilation, for which we adapt results of Ref. <cit.>. The results of Ref. <cit.> were originally intended to realize hard-core constraints, and we use them both for their original purpose and in realizing our annihilation constraint.
The starting point is a master equation that describes the reaction-diffusion system under consideration; in particular, we consider diffusion, branching and long-range annihilation processes with rates denoted by D, B and A respectively. We will define the master equation in d spatial dimensions on a hypercubic lattice for concreteness. For the stochastic process in the main text, D_0 = (1-p_a)(1-p_b)/(2d), B_0 = (1-p_a) p_b and A_0 = p_a.
We define an occupation vector 𝐧 where n_i is the number of particles at site i. We also define unit vectors 𝐞_i for ease in describing changes in 𝐧. As an example, if the number of particles on site i increases by 1, then 𝐧→𝐧 + 𝐞_i. Sometimes it is useful to directly refer to the position vector of site i, which we will denote as 𝐫_i, such as in the power-law decay of the annihilation rate 1/|𝐫_i - 𝐫_j|^κ.
When indexing sums, we will use the notation ∑_j ∼ i to mean the sum over the set of j such that j is a nearest neighbor to i. ∑_j,k ∼ i will be the sum over the set of j and k such that j ≠ k and j and k are both nearest neighbors of i. ∑_j ≠ i will denote the sum over the set of j with j ≠ i.
The function P(𝐦;t) will denote the probability that the state of the system is represented by the occupation vector 𝐦 at time t. The master equation for the reaction-diffusion process can be decomposed into parts representing diffusion, branching, and annihilation
∂_t P(𝐧;t)=P_diff+P_branch+P_ann
with
P_diff:= D_0[∑_i ∑_j ∼ i[(n_i+1)P(𝐧 +𝐞_i - 𝐞_j;t )δ_n_j,1-n_iP(𝐧;t)δ_n_j,0]]
P_branch := (B_0/(2d(2d-1)))[∑_i∑_j,k ∼ i n_i[P(𝐧 - 𝐞_j - 𝐞_k; t)δ_n_j,1δ_n_k,1- P(𝐧;t)δ_n_j,0δ_n_k,0]]
P_ann:= A_0 ∑_i∑_j ≠ i1/|𝐫_i-𝐫_j|^κ[(n_i+1)(n_j+1)P(𝐧+𝐞_i + 𝐞_j; t)-n_i n_jP(𝐧;t)]{∏_k: |𝐫_k - 𝐫_i|<|𝐫_j - 𝐫_i|δ_n_k,0}
We briefly explain the above construction below. Each of these expressions P_diff, P_branch and P_ann is the difference of two terms corresponding to the rate that the configuration 𝐧 is entered minus the rate that the configuration 𝐧 is left. The coefficients like (n_i+1) or n_i and the like are combinatorial factors that account for choosing arbitrarily any of the particles at that site to diffuse, branch or annihilate. The diffusion and branching rates are normalized by 2d and 2d(2d-1), the number of neighbors of a site and of ordered pairs of distinct neighbors, respectively, to account for the fact that diffusion can cause hopping of the particle i to any of its 2d neighbors and branching can birth particles on any of the pairs of sites neighboring i.
The Kronecker deltas multiplying the rates in P_diff and P_branch implement a hard-core constraint for the particles, restricting to at most one particle at a site. In particular, an initial configuration that respects this constraint will continue to respect this constraint under the dynamics. We also note that we do not expect changes in universality from loosening the hard-core constraint. (More precisely, we don't expect changes as long as the on-site annihilation rate is <∞. See Ref. <cit.> for a brief discussion of certain processes without hard-core constraints always being absorbing when the onsite annihilation is infinite.)
The product of Kronecker deltas in the curly brackets in P_ann implements the nearest-particle constraint, ensuring that there are no particles closer to i than j when annihilating the pair; this is the factor that gives rise to the exponential term in the field theory. As noted in the main text, we do expect this term to affect the physics, particularly at small κ, since it is responsible for regulating the long-range annihilation.
The next step in deriving the field theory in the Doi-Peliti formalism <cit.> is to write an equivalent bosonic Hamiltonian whose imaginary-time quantum dynamics will exactly correspond to the real time dynamics described by the stochastic master equation. Occupation numbers n_i can be interpreted as n_i many bosonic particles at site i, and the corresponding state vector 𝐧 can be represented as an appropriate Fock state. We introduce bosonic operators b̂_i, b̂_i^† obeying the usual algebra [b̂_i, b̂_j] = [b̂_i^†, b̂_j^†] = 0 and [b̂_i, b̂_j^†] = δ_ij. We introduce the states |𝐧⟩ := ⊗_i (b̂_i^†)^{n_i}|0⟩ (this choice of normalization is standard and aids in the calculations), on which the single-site operators act as b̂|n⟩ = n|n-1⟩ and b̂^†|n⟩ = |n+1⟩.
We can then define a wavefunction corresponding to the vector of probabilities
|Ψ(t)⟩:= ∑_𝐧 P(𝐧;t)|𝐧⟩
Note that this state is not normalized according to ⟨Ψ(t) | Ψ(t)⟩ = 1; it is instead normalized according to the rule that the probabilities sum to one, which corresponds to the normalization
⟨ 0 | e^∑_i b̂_i |Ψ(t) ⟩ = 1
Expectation values in the stochastic process then become matrix elements of operators in the Hilbert space
𝔼[f(n(t))] = ⟨ 0 | e^∑_i b̂_i f(n̂) |Ψ(t) ⟩
The time evolution of |Ψ(t) ⟩ takes the form of Schrodinger equation in imaginary time t of the form
-∂_t |Ψ(t)⟩ = Ĥ |Ψ(t)⟩
for a bosonic Hamiltonian Ĥ computed below. We emphasize again that the imaginary time dynamics with a quantum Hamiltonian actually corresponds to the real time dynamics of the classical stochastic process.
For the master equation in Eqs. <ref> and <ref>, the corresponding quantum Hamiltonian is
Ĥ = Ĥ_diff + Ĥ_branch + Ĥ_ann
with
Ĥ_diff := -D_0 ∑_i ∑_{j∼i} (b̂_j^† b̂_i - b̂_i^† b̂_i) δ_{n̂_j,0}
Ĥ_branch := -(B_0/(2d(2d-1))) ∑_i ∑_{j,k∼i} b̂_i^† b̂_i (b̂_j^† b̂_k^† - 1) δ_{n̂_j,0} δ_{n̂_k,0}
Ĥ_ann := -A_0 ∑_i ∑_{j≠i} (1/|𝐫_i-𝐫_j|^κ) [b̂_i b̂_j - b̂_i^† b̂_i b̂_j^† b̂_j] {∏_{k: |𝐫_k-𝐫_i|<|𝐫_j-𝐫_i|} δ_{n̂_k,0}}.
The bosonic Hamiltonians are intuitive—diffusion manifests as hopping from site i to j, branching manifests by creation operators given a particle is present at site i, and annihilation manifests as annihilation operators at site i and j. The branching and annihilating operators are manifestly non-Hermitian, implying the irreversibility of the processes. The second term in each Hamiltonian (i.e. the bosonic density operators acting as onsite energies or density-density interactions) follow from the rates of leaving configurations in the master equation; these are effectively enforcing conservation of probability. The exclusion constraints can be explicitly represented in terms of the operator n̂ by δ_n̂_i,m=∫_-π^π (du/2π) e^i u (n̂_i-m) <cit.>.
Note that the equality of statistical expectation values with matrix elements given in Eq. <ref> can be re-written as
𝔼[f(n(t))] = ⟨ 0 | e^∑_i b̂_i f(n̂) e^-Ĥt|Ψ(0) ⟩
By inserting resolutions of the identity made of coherent states, we can transform this expression into a coherent state path integral. There are subtleties, especially with time ordering and boundary conditions, and we direct the interested reader to section 3.3 of Ref. <cit.> and references therein for a detailed discussion of the coherent state path integral in this context.
We consider coherent states labeled by a vector of complex numbers ϕ and denoted by |ϕ⟩. These states are defined as eigenstates of the annihilation operator, b̂_i |ϕ⟩ = ϕ_i |ϕ⟩, and constructed as |ϕ⟩ = e^{∑_i ϕ_i b̂_i^†}|0⟩. For this choice of normalization, the resolution of the identity is
𝟙 = ∫(∏_j (dϕ_j dϕ̄_j / 2πi)) e^{-∑_j ϕ̄_j ϕ_j} |ϕ⟩⟨ϕ|.
The integral is to be interpreted as follows. We can decompose ϕ into its real and imaginary parts ϕ= R + i I. The integral in Eq. <ref> becomes
𝟙 = ∫(∏_j (dR_j dI_j / π)) e^{-∑_j (R_j^2 + I_j^2)} |R + iI⟩⟨R + iI|.
where all integrals run from -∞ to ∞.
As noted, we can compute a path integral by inserting resolutions of the identity into time-slices of Eq. <ref>. e^-Ĥ t will become a weighting e^-S with S a functional of ϕ(t) and e^-S understood perturbatively <cit.>. It is nice to write H in a normal-ordered form, as this allows the b's of the Hamiltonian to become the ϕ's of one time slice and the b^†'s to be the ϕ^*'s of the next time slice. Besides a time derivative term and boundary terms, S can then be written down by computing ⟨ϕ|H| ϕ⟩/⟨ϕ|ϕ⟩:
S[ϕ] = boundary terms + ∫_0^{t_f} dt [ ⟨ϕ|H|ϕ⟩/⟨ϕ|ϕ⟩ + ∑_i ϕ̄_i ∂_t ϕ_i ]
where ϕ is a function of the integration variable t. It is nice to use t as the dummy integration variable; when calculating observables at time t, t_f will then be t in a slight abuse of notation. As noted, we direct the interested reader to section 3.3 of Ref. <cit.> and references therein for more details.
The only non-trivial normal-ordering is of the exclusion constraint operators δ_n̂_i,0 and δ_n̂_i,1. The normal ordering can be computed <cit.> using the integral representation mentioned before. For our purposes, we only need the following identity, which holds for integers m≥ 0,
⟨ϕ|δ_{n̂,m}|ϕ⟩ / ⟨ϕ|ϕ⟩ = (1/m!)(ϕ̄ϕ)^m e^{-ϕ̄ϕ}
from which related identities can be derived straightforwardly, including ⟨ϕ|b̂^† δ_{n̂,m}|ϕ⟩/⟨ϕ|ϕ⟩ = ϕ^* ⟨ϕ|δ_{n̂,m}|ϕ⟩/⟨ϕ|ϕ⟩ or ⟨ϕ|b̂ δ_{n̂,m}|ϕ⟩/⟨ϕ|ϕ⟩ = ⟨ϕ|δ_{n̂,m-1} b̂|ϕ⟩/⟨ϕ|ϕ⟩ = ϕ ⟨ϕ|δ_{n̂,m-1}|ϕ⟩/⟨ϕ|ϕ⟩. Note that ⟨ϕ|δ_{n̂,m}|ϕ⟩/⟨ϕ|ϕ⟩ = 0 for m < 0.
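These identities are easy to verify numerically in a truncated single-site Fock space; in the sketch below, the field value and cutoff are arbitrary choices:

```python
import math
import numpy as np

phi, cutoff = 0.7 + 0.3j, 60   # coherent-state amplitude, Fock-space cutoff
# |phi> = exp(phi b^dag)|0>, so the coefficients on normalized Fock states
# are phi^n / sqrt(n!)
coeff = np.array([phi**n / math.sqrt(math.factorial(n)) for n in range(cutoff)])
norm = np.vdot(coeff, coeff).real                  # <phi|phi> = e^{|phi|^2}
for m in range(4):
    lhs = abs(coeff[m])**2 / norm                  # <phi|delta_{n,m}|phi>/<phi|phi>
    rhs = abs(phi)**(2 * m) * math.exp(-abs(phi)**2) / math.factorial(m)
    print(m, lhs, rhs)                             # agree to truncation error
```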
For diffusion:
⟨ϕ|Ĥ_diff|ϕ⟩/⟨ϕ|ϕ⟩ = -D_0 ∑_i ∑_{j∼i} (ϕ̄_j ϕ_i - ϕ̄_i ϕ_i) e^{-ϕ̄_j ϕ_j}
= -D_0 ∑_i ∑_{j∼i} (ϕ̄_j ϕ_i - ϕ̄_i ϕ_i) + subleading
Near the critical point and in the absorbing phase, we can Taylor-expand the exponential. The contributions labeled subleading in Eq. <ref> will ultimately be subleading in the RG sense after taking the continuum limit, as they will contain too many fields and derivatives.
Similarly, for annihilation:
⟨ϕ|Ĥ_ann|ϕ⟩/⟨ϕ|ϕ⟩ = -A_0 ∑_i ∑_{j≠i} (1/|𝐫_i-𝐫_j|^κ) [ϕ_i ϕ_j - ϕ̄_i ϕ_i ϕ̄_j ϕ_j] e^{-∑_{k: |𝐫_k-𝐫_i|<|𝐫_j-𝐫_i|} ϕ̄_k ϕ_k}
Note that we do not attempt to Taylor expand the argument of the exponential. The reason is that although each individual _kϕ_k may be small close to the transition and in the absorbing phase, their sum might be considerable if sites i and j are far apart. We will indeed find that this is the case when we consider the saddlepoint equations.
For branching:
⟨ϕ|Ĥ_branch|ϕ⟩/⟨ϕ|ϕ⟩ = -(B_0/(2d(2d-1))) ∑_i ∑_{j,k∼i} ϕ̄_i ϕ_i (ϕ̄_j ϕ̄_k - 1) e^{-ϕ̄_j ϕ_j - ϕ̄_k ϕ_k}
= -(B_0/(2d(2d-1))) ∑_i ∑_{j,k∼i} ϕ̄_i ϕ_i ( (ϕ̄_i + (ϕ̄_j - ϕ̄_i))(ϕ̄_i + (ϕ̄_k - ϕ̄_i)) - 1 ) e^{-ϕ̄_j ϕ_j - ϕ̄_k ϕ_k}
= -(B_0/(2d(2d-1))) ∑_i ∑_{j,k∼i} ϕ̄_i ϕ_i (ϕ̄_i^2 - 1) + subleading
= -B_0 ∑_i ϕ̄_i ϕ_i (ϕ̄_i^2 - 1) + subleading
For branching, almost all of the terms labeled subleading are indeed subleading in the RG sense, as they contain products of many fields and/or derivatives. However, one of the terms coming from the linear order of the Taylor expansion of the exponential is ϕ̄_i ϕ_i ϕ̄_j ϕ_j with j and i nearest neighbors, which is comparable to the analogous term in the annihilation process. We neglect this contribution for simplicity, as we view it as effectively already contained in the annihilation process. Additionally, when considering the saddlepoint equations, such a contribution will enter at O(n^2).
As noted above, these expectation values will ultimately yield the action S of the field theory, up to a time derivative term and boundary terms. f(n̂) will give rise to fields multiplying e^-S in the path integral. Writing f(n̂) explicitly in terms of b̂ and b̂^† in a normal-ordered form, the b̂^† acting to the left on ⟨ 0 | e^∑_i b̂_i will be just 1, leaving only products of b̂_i terms. After inserting resolutions of the identity to construct the path integral, b̂_i →ϕ_i(t). |Ψ(0)⟩ and ⟨ 0 | e^∑_i b̂_i will give boundary conditions of the path integral; the ⟨ 0 | e^∑_i b̂_i boundary condition is especially straightforward, as this is in fact the coherent state ⟨1|.
For example, following this procedure, 𝔼[n_i(t)] = ⟨ϕ_i(t) ⟩ and 𝔼[(n_i(t))^2] = ⟨ (ϕ_i(t))^2 + ϕ_i(t) ⟩, where ⟨⟩ means expectation value with respect to the above-mentioned action and boundary conditions. If we refrain from Taylor expanding the exponentials implementing the hard-core constraint and then dropping subleading terms, the coherent state path integral at this stage will still reproduce the statistical expectation values. However, we indeed drop terms and will subsequently take a continuous space limit. These approximations will maintain universality but will no longer allow for exactly reproducing statistical expectation values.
We will focus on the action in the following, and we will not further discuss boundary conditions. Taking the continuous space limit of Eq. <ref> using Eqs. <ref>, <ref>, and <ref> (with the choice ϕ_i(t) → a^d ϕ(𝐱,t) and ϕ̄_i(t) → ϕ̄(𝐱,t) for lattice spacing a), we obtain the action
S := ∫_{t_1}^{t_2} dt [ ∫ d^d𝐱 [ϕ̄ ∂_t ϕ + ℋ_local(ϕ̄; ϕ)] + ∫ d^d𝐱 d^d𝐲 ℋ_non-local(ϕ̄; ϕ) ]
ℋ_local := -D ϕ̄ ∇^2 ϕ + B(1-ϕ̄^2)ϕ̄ϕ
ℋ_non-local := -A_κ(|𝐱-𝐲|)(1-ϕ̄(𝐱,t)ϕ̄(𝐲,t))ϕ(𝐱,t)ϕ(𝐲,t) exp(-∫_{r=0}^{r=|𝐱-𝐲|} d^d𝐫 ϕ̄(𝐱+𝐫,t)ϕ(𝐱+𝐫,t))
where A_κ(|𝐱-𝐲|) := A/|𝐱-𝐲|^κ. Eqs. <ref> and <ref> describe the field theory referenced in the main text. D, B, A are equal to D_0, B_0, A_0 up to factors of the lattice spacing. We will generically take t_1 = 0 and t_2 = t.
Before concluding this subsection, we highlight certain salient features of this field theory. (1) We have obtained a non-relativistic field theory (since the kinetic term is given by ϕ̄(∂_t - D∇^2)ϕ) for a complex scalar field ϕ. (2) The branching term contributes a mass B for the fields and can be appended to the Gaussian part of the action. The branching term also contributes an interaction term of the form ϕ̄^3 ϕ; the annihilation term contributes entirely to interaction terms, of which the key one for annihilation is ϕ(𝐱,t)ϕ(𝐲,t), while the other is a non-local density-density interaction, and both of these terms are weighted by the nearest-particle exponential factor. We also note that the field theory is non-local only in space, and remains local in time. (3) The lack of particle number conservation manifests in the non-Hermitian nature of the field theory and lack of U(1) symmetry. The explicit parity conservation of the dynamics corresponds to the ℤ_2 symmetry ϕ ↦ -ϕ and ϕ̄ ↦ -ϕ̄. (4) The scaling dimensions of the coupling constants can be straightforwardly computed using [length] = -1 and [time] = -2 so that [momentum] = 1; under renormalization, there will be anomalous dimensions ascribed to fluctuation effects over mean-field level.
§.§ Euler-Lagrange minimization
In this subsection, we provide additional details about the mean-field analysis described in the main text. The mean-field rate equation is obtained by Euler-Lagrange minimization of the action derived in the previous subsection for the fields ϕ and ϕ̄. The Euler-Lagrange equations we obtain are as follows,
[∂_t - D∇^2 + B - 3Bϕ̄^2(𝐱,t) + ∫ d^d𝐳 A_κ(|𝐱-𝐳|) ϕ̄(𝐳,t)ϕ(𝐳,t)(ℰ(𝐳,𝐱;t) + ℰ(𝐱,𝐳;t))] ϕ(𝐱,t) -
[∫ d^d𝐳_1 d^d𝐳_2 A_κ(|𝐳_1-𝐳_2|)(1-ϕ̄(𝐳_1,t)ϕ̄(𝐳_2,t))ϕ(𝐳_1,t)ϕ(𝐳_2,t) ℰ(𝐳_1,𝐳_2;t) 1_{0 ≤ |𝐳_1-𝐱| ≤ |𝐳_1-𝐳_2|}] ϕ(𝐱,t) = 0
[-(∂_t + D∇^2) + B - Bϕ̄^2(𝐱,t) - ∫ d^d𝐳 A_κ(|𝐱-𝐳|)(1-ϕ̄(𝐱,t)ϕ̄(𝐳,t))ϕ(𝐳,t)(ℰ(𝐳,𝐱;t) + ℰ(𝐱,𝐳;t))] ϕ̄(𝐱,t) +
[∫ d^d𝐳_1 d^d𝐳_2 A_κ(|𝐳_1-𝐳_2|)(1-ϕ̄(𝐳_1,t)ϕ̄(𝐳_2,t))ϕ(𝐳_1,t)ϕ(𝐳_2,t) ℰ(𝐳_1,𝐳_2;t) 1_{0 ≤ |𝐳_1-𝐱| ≤ |𝐳_1-𝐳_2|}] ϕ̄(𝐱,t) = 0
where ℰ(𝐱,𝐲;t) := exp(-∫_{r=0}^{r=|𝐱-𝐲|} d^d𝐫 ϕ̄(𝐱+𝐫,t)ϕ(𝐱+𝐫,t)) is a shorthand notation for the exponential factor implementing the nearest-particle constraint and 1_{x ∈ support} is the indicator function on the real line. While the Euler-Lagrange equations look complicated, a remarkable simplification is obtained by noting that ϕ̄ = 1 is a solution of Eq. <ref> (this occurs generically <cit.>). Using this solution in Eq. <ref>, and identifying the mean-field ϕ with the local density n, we obtain the rate equation,
ṅ(𝐱,t) = D ∇^2 n(𝐱,t) + 2B n(𝐱,t) - A n(𝐱,t) ∫ d^d𝐲 (n(𝐲,t)/|𝐱-𝐲|^κ) ( e^{-∫_{r=0}^{|𝐱-𝐲|} d^d𝐫 n(𝐱+𝐫,t)} + e^{-∫_{r=0}^{|𝐱-𝐲|} d^d𝐫 n(𝐲+𝐫,t)} )
This is the rate equation discussed in the main text. We made several approximations in the field theory; explicitly including the exponentials from the hard-core constraints (particularly on the branching term, see discussion below Eq. <ref>) will give corrections of size at most O(n^2).
§ SUBLEADING CORRECTIONS IN THE DIFFUSION-DOMINATED ABSORBING PHASE
In the main text, we discuss the behavior of the absorbing phase decay exponent α (n(t) ∼ t^{-α}) as a function of κ. One of our predictions in d=1 is that for every κ ≥ 2, α = 1/2. This is borne out in our numerics except for a small discrepancy that is most pronounced when κ ≈ 2.6. In this brief appendix, we discuss this discrepancy in more detail, and why we believe it comes from finite-time transients in the decay.
There are several ways to describe the density decay n(t) ∼ t^-1/2 of a diffusion-limited pure annihilation process in d=1. As noted in chapter 13 of Ref. <cit.>, one method is to use an effective rate equation of the form ṅ(t) ∼ -n(t)^3 which, despite being deterministic and having no diffusion term, yields the correct exponent of the decay. We will combine this effective rate equation with the contribution n(t)^κ+1 to the decay from the long-range annihilation, giving
ṅ(t) ∼ -c_d n(t)^3 - c_l n(t)^{κ+1}
where c_d and c_l are κ-dependent constants multiplying the terms corresponding to diffusion and long-range annihilation.
We emphasize that this effective rate equation is heuristic and likely will not capture the quantitative forms of the subleading asymptotics, but we expect this equation will successfully describe their qualitative features.
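A quick way to visualize the resulting transients is to integrate this equation numerically; the sketch below uses illustrative coefficients c_d = c_l = 1 (not fitted to our data) and tracks the running exponent, which overshoots 1/2 for the longest times when κ lies between 2 and 3:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, kappa, c_d=1.0, c_l=1.0):
    # heuristic two-term rate equation for the diffusion-dominated phase
    n = y[0]
    return [-c_d * n**3 - c_l * n**(kappa + 1)]

t = np.geomspace(1e-1, 1e6, 200)
for kappa in (2.0, 2.6, 3.0):
    sol = solve_ivp(rhs, (t[0], t[-1]), [1.0], args=(kappa,),
                    t_eval=t, rtol=1e-10, atol=1e-14)
    alpha = -np.gradient(np.log(sol.y[0]), np.log(sol.t))  # running exponent
    print(kappa, alpha[len(alpha) // 2], alpha[-1])        # drifts back to 1/2
```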
For κ ≥ 2, the long-range term n(t)^{κ+1} is subleading relative to the leading diffusive term n(t)^3. It is useful to consider different regimes of κ ≥ 2.
When κ is sufficiently large, the long-range term rapidly vanishes relative to the diffusive term and will not have an effect on the dynamics outside of a short-lived transient. This would explain why κ≥ 3 does not show a noticeable discrepancy besides early times (the κ=3 data are shown in the inset of Fig. <ref>).
When κ is close to 2, κ = 2 + ϵ with ϵ very small, the long-range term is comparable in magnitude to the diffusive term for a time that diverges as ϵ→ 0. However, despite being long-lived, the subleading term is so similar to n(t)^3 for small ϵ that it then cannot noticeably change the functional form of the decay. This would explain why we do not see a discrepancy for κ=2.
To summarize, it is only for κ close but not too close to 2 that the subleading term will lead to a long-lived transient that is also sufficiently different from the asymptotic decay to be noticeable. This explains why the discrepancy goes to zero at κ=2 and κ≥ 3; the subleading term needs to strike the right balance.
Eq. <ref> would also predict subleading transients for κ < 2. However, it is likely that the correct effective rate equation will have a rather suppressed diffusive term n(t)^3. The reason is that in the long-range-dominated regime of κ < 2, long-range annihilation will with very high probability destroy far-away pairs of particles before they can diffuse near each other, removing much of the diffusive contribution to annihilation. This should be contrasted to what happens in the diffusion-limited regime. Although particles coming close to one another from diffusion is what sets the asymptotic decay rate, long-range annihilation events will still occur. They will be subleading, rather than both subleading and suppressed.
§ RELATED MODELS FROM RANDOM CIRCUITS WITH FEEDBACK
We discuss a few ways for random quantum circuits to realize models in the same universality class as the one studied in the main text. Mapping quantum circuits to classical stochastic processes is a standard technique; for a review, see <cit.> and references therein. Appendix I of Ref. <cit.> is also useful for a review of mappings in cases involving measurement and feedback, and we summarize the main idea here. Averaging over block-Haar gates gives rise to a stochastic process described by block-diagonal stochastic matrices satisfying detailed balance, while feedback based on measurement outcomes can violate detailed balance and induce interesting non-equilibrium behavior like absorbing state transitions.
The average over random circuit realizations can be realized as a channel acting on the averaged density matrix of the system. As noted in Ref. <cit.>, for appropriate choices of random gates and feedback, the diagonal and off-diagonal of the density matrix decouple. The diagonal of the density matrix can be viewed as a 2^L vector of probabilities that is time-evolved by a stochastic matrix. This time-evolution is equivalent to the stochastic evolution of a particle system on L sites, allowing the dynamics of certain observables to be described by a simple classical Markov process. In the case of a direct realization of the stochastic process on the sites, the time evolution of all diagonal observables are encoded in the classical process.
We can consider cases where the particles of the stochastic process don't live directly on the sites; for example, they might correspond to domain walls living halfway between sites in a 1d system, or anyons living on the centers of plaquettes in a 2d system. The domain walls would then be probed through σ^z_i σ^z_i+1 measurements and the anyons through plaquette measurements. However, for simplicity, we will walk through the simpler special case where the particles of the stochastic process live directly on sites. The main difference in the examples with domain walls and anyons is a redundancy of the classical description, where a given description in terms of classical particles may correspond to a handful of different quantum density matrices. Despite this redundancy, appropriate local observables can still be extracted from the classical stochastic process, such as ℤ_2-symmetric diagonal operators in the case of a 1d system with domain walls.
We will consider a realization of our 1d process directly on a lattice of qubits. All operations on the quantum system will be in the computational basis generated by |↑⟩ and |↓⟩, while the corresponding operations on the related classical process will be described in terms of the action on bitstrings generated by 0 and 1 corresponding respectively to |↑⟩⟨↑| and |↓⟩⟨↓|. That is, spin-down on a single site will ultimately correspond to a particle in the classical stochastic process occupying that site, while spin-up corresponds to an empty site.
A two-site particle-number-conserving block-diagonal Haar matrix U_h gives rise to a hopping term in the classical process. In particular, averaging U_h ⊗ U^*_h gives rise to a matrix that is block-diagonal between the diagonals and off-diagonals of the density matrix. Specifically, there is a block corresponding to |↑↑⟩⟨↑↑|,|↑↓⟩⟨↑↓|,|↓↑⟩⟨↓↑|,|↓↓⟩⟨↓↓|, which we identify with the bitstring basis {00, 01, 10, 11}. In this basis, the block takes the form
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1/2 & 1/2 & 0 \\ 0 & 1/2 & 1/2 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
corresponding to unbiased hopping between 01 and 10.
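This block can be checked numerically: averaging the classical transition probabilities |U_{cd,ab}|^2 over many random number-conserving gates reproduces the matrix above. A minimal sketch, with the Haar block sampled via the standard QR construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    # QR-based construction of a Haar-random d x d unitary (Mezzadri's recipe)
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def block_gate():
    # particle-number-conserving two-site gate in the basis {00, 01, 10, 11}
    u = np.zeros((4, 4), dtype=complex)
    u[0, 0] = np.exp(1j * rng.uniform(0, 2 * np.pi))
    u[3, 3] = np.exp(1j * rng.uniform(0, 2 * np.pi))
    u[1:3, 1:3] = haar_unitary(2)
    return u

# classical transition probabilities P[cd, ab] = |U[cd, ab]|^2, Haar-averaged
P = np.mean([np.abs(block_gate())**2 for _ in range(20000)], axis=0)
print(np.round(P, 2))  # converges to the block matrix above, with 1/2's in the middle
```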
Similarly, a two-by-two block in a three-site Haar matrix between |↑↓↑⟩ and |↓↓↓⟩ gives rise to processes taking 010 ↔ 111, corresponding to branching and local annihilation. Alternatively, if there are relevant non-unitary processes, the processes 010 ↔ 111 could occur with unequal probabilities. In our model, we assume this to be the case, taking all of our annihilation to come from feedback conditioned on measurements rather than additionally from local unitary processes. However, we do not find significant differences between models where 010 ↔ 111 occurs locally with equal probabilities and models where only 010 → 111 occurs (modulo the feedback steps in these models).
Feedback allows for nonlocal annihilation processes. In our model, measurements are assumed to be taken everywhere so that all defect positions are known. A particle is selected at random for its dynamics, including whether or not to pair it with its closest particle and attempt annihilation. Alternatively, using the ingredients described above, we could have constructed models with parallel update rules, involving parallelized feedback and brickwork circuits of unitary gates to induce hopping and branching.
Our serial model has some differences relative to the parallel models; the parallel models may be more natural, given that defects are paired all at once in minimum-weight perfect-matching decoders. We chose our serial model for our numerics because we found that it has milder finite-size and finite-time effects, making it easier to illustrate our results.
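For concreteness, the following sketch implements one step of such a serial update on a ring. The move probabilities and the assumed acceptance probability min(1, r^{-κ}) for annihilating the closest pair at separation r are illustrative placeholders rather than the exact rates used in our numerics.

```python
import random

def serial_step(occ, kappa, L, p_branch=0.1):
    """One serial update on a ring of L sites; occ is the set of occupied sites.
    Rates and the r**(-kappa) acceptance rule are illustrative choices."""
    if not occ:
        return occ  # absorbing state reached
    x = random.choice(sorted(occ))
    move = random.random()
    if move < p_branch:
        # branching 010 -> 111 if both neighbors are empty
        l, r = (x - 1) % L, (x + 1) % L
        if l not in occ and r not in occ:
            occ |= {l, r}
    elif move < 0.5:
        # unbiased hop to an empty neighbor
        y = (x + random.choice([-1, 1])) % L
        if y not in occ:
            occ.remove(x)
            occ.add(y)
    else:
        # pair with the closest particle and attempt long-range annihilation
        others = occ - {x}
        if others:
            y = min(others, key=lambda z: min((z - x) % L, (x - z) % L))
            r = min((y - x) % L, (x - y) % L)
            if random.random() < min(1.0, r ** (-kappa)):
                occ -= {x, y}
    return occ
```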
We also found that some choices of parallel feedback could induce long-range attraction between the defects, even in the ostensibly short-range limit of κ = ∞. The idea is that if defects are always paired to minimize the sum total of the weights between pairs, there are certain processes occurring at nonzero probability that effectively induce biased random walks. For example, consider the following hypothetical process where two far-away particles both branch and then undergo the annihilation feedback step:
00000100000100000
00001110001110000
00000010001000000
Because the feedback minimizes the sum of the distances, the two particles at the edge of each cluster of 1s closest to the opposite cluster always pair. Though this pair won't annihilate, this pairing forces the pairing and annihilation of the remaining two pairs of adjacent particles in the clusters, leaving just those “inner” two 1s at the final time step. This effectively induces a biased motion of the 1s towards each other; while there are other possible processes or orderings of processes that could occur, there is a net bias even when taking those into account. Biased walks that favor attraction between particles can be relevant for the universality class of the transition <cit.>. Given our interest in interpolating between a long-range directed percolation and a short-range parity conserving limit, we also chose our serial process to avoid these biased walks. However, we believe it would be interesting in future work to compare the bias induced from the above parallel process to bias induced in other ways in the literature.
In this appendix, we have outlined several ingredients in quantum circuits with measurements and feedback that give rise to models similar to the one numerically probed in the main text. While many of these models fall in the same universality class as the model in the main text, there are some that may fall outside, particularly those with biased random walks attracting particles towards one another. We note that there are further variations on measurement and feedback, including using “flags” both for measurement outcomes and for error locations; these introduce further classical degrees of freedom which may themselves undergo absorbing state transitions.
§.§ Differences from quantum error correction
While our “attempt annihilation only with the closest particle" rule was inspired by quantum error correction, we note some key differences between our absorbing state transition and thresholds in the literature.
In particular, our absorbing state dynamics do not keep track of whether a “logical error" occurred (the primary object of interest in quantum error correction). Furthermore, in quantum error correction, errors may occur everywhere, but our dynamics instead only allow the spawning of new defects via local particle branching. This restriction allows for the existence of an absorbing state.
There are similarities, in that the proliferation of defects is what can induce a logical error when attempting to decode in quantum error correction, but the two problems are ultimately concerned with different quantities.
We do note that the two can be intimately tied together, particularly when classical flags are used. In Ref. <cit.>, the classical flags can reach an absorbing state. Decoding operations are conditional on the classical flags, and the absorbing state for the classical flags halts all these decoding operations. This effectively forces the logical failure rate to be O(1) in the classical flag absorbing phase, tying the logical information and absorbing state transitions together.
|
http://arxiv.org/abs/2409.02368v1 | 20240904013837 | Pluralistic Salient Object Detection | [
"Xuelu Feng",
"Yunsheng Li",
"Dongdong Chen",
"Chunming Qiao",
"Junsong Yuan",
"Lu Yuan",
"Gang Hua"
] | cs.CV | [
"cs.CV"
] |
Pluralistic Salient Object Detection
Xuelu Feng, Yunsheng Li, Dongdong Chen, Chunming Qiao, Fellow, IEEE, Junsong Yuan, Fellow, IEEE, Lu Yuan, and Gang Hua, Fellow, IEEE
Xuelu Feng, Chunming Qiao, Junsong Yuan are with the Department of Computer Science and Engineering, University at Buffalo, USA (e-mail: [email protected]; [email protected]; [email protected]).
Yunsheng Li, Dongdong Chen, Lu Yuan are with Microsoft GenAI, USA (e-mail: [email protected]; [email protected]; [email protected])
Gang Hua is with Dolby Laboratories, USA (e-mail: [email protected]).
September 9, 2024
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
We introduce pluralistic salient object detection (PSOD), a novel task aimed at generating multiple plausible salient segmentation results for a given input image. Unlike conventional SOD methods that produce a single segmentation mask for salient objects, this new setting recognizes the inherent complexity of real-world images, comprising multiple objects, and the ambiguity in defining salient objects due to different user intentions. To study this task, we present two new SOD datasets “DUTS-MM” and “DUTS-MQ”, along with newly designed evaluation metrics. DUTS-MM builds upon the DUTS dataset but enriches the ground-truth mask annotations in three aspects: 1) improving the mask quality, especially for boundaries and fine-grained structures; 2) alleviating the annotation inconsistency issue; and 3) providing multiple ground-truth masks for images with saliency ambiguity. DUTS-MQ consists of approximately 100K image-mask pairs with human-annotated preference scores, enabling the learning of real human preferences in measuring mask quality. Building upon these two datasets, we propose a simple yet effective pluralistic SOD baseline based on a Mixture-of-Experts (MOE) design. Equipped with two prediction heads, it simultaneously predicts multiple masks using different query prompts and predicts human preference scores for each mask candidate. Extensive experiments and analyses underscore the significance of our proposed datasets and affirm the effectiveness of our PSOD framework.
Pluralistic, Salient Object Detection, Mask Quality
§ INTRODUCTION
Salient object detection (SOD) is a classical vision task that seeks to automatically segment salient objects within a given input image. However, due to the inherent complexity of real-world images and varying user intentions, ambiguities often arise in defining salient objects. For instance, as shown in Fig. <ref>, when confronted with two or three objects in an image, segmentation becomes ambiguous. Objects on a dining table, such as food, plates and cups, may be perceived differently based on user intentions. For example, an individual desiring a drink may focus on the cup, while someone hungry may prioritize the food or even include its utensil. Conversely, an individual simultaneously thirsty and hungry may consider everything on the table as salient. This variability aligns with diverse user preferences, particularly in downstream SOD applications.
However, the inherent ambiguity in salient object detection (SOD) is overlooked by most existing datasets and methods, which treat SOD as a task with a singular solution. Under this task definition, existing datasets exhibit an ambiguity issue whenever they are labelled by multiple annotators with diverse intentions. Using the most widely utilized DUTS dataset <cit.> as an example, each image is associated with only a single ground-truth mask, despite a notable proportion of images featuring inherent ambiguity. As shown in Fig. <ref>, the annotation inconsistency issue exists across many images. Such annotation inconsistency will also inevitably affect the performance of existing SOD methods, which are designed to predict a single mask, as it introduces an ambiguous supervision signal. One typical illustrative example is shown in the top part of Fig. <ref>. Here we want to emphasize that the inherent ambiguity of SOD is the root cause of the aforementioned annotation inconsistency issue and the subsequent learning ambiguity. Merely adopting an excessively strict and consistent annotation policy during dataset construction cannot fundamentally address the ambiguity issue arising from diverse user intentions.
In this paper, we recognize the inherent ambiguity issue in SOD and propose pluralistic salient object detection (PSOD), shifting the task from single-mask prediction to enabling the generation of multiple salient mask candidates. To facilitate this, we introduce two new SOD datasets, “DUTS-MM” and “DUTS-MQ”. DUTS-MM utilizes the same images as DUTS <cit.> but provides new, enhanced annotations. Specifically, compared to the original DUTS annotations, DUTS-MM improves in three key aspects: 1) enhancing mask quality, particularly for boundaries and fine-grained structures (e.g., hair and bicycle wheels); 2) addressing annotation ambiguity by providing multiple ground-truth masks for relevant images instead of a single one; 3) alleviating the annotation inconsistency issue existing in DUTS by adopting a more consistent annotation policy and permitting the annotation of multiple masks.
DUTS-MQ comprises about 100K image-mask pairs with human-annotated preference scores. Notably, the mask quality in this dataset exhibits a diverse distribution, encompassing both low-quality and high-quality masks. This dataset enables us to train a mask quality predictor, capable of predicting human preference without requiring knowledge of the ground-truth mask. This is very useful in real user scenarios, where it can automatically select high-quality masks. Moreover, considering that existing segmentation metrics like mIoU may not always align with human preference, we find the learned mask quality predictor can potentially serve as an additional quality evaluator for existing SOD methods, showing their alignment with real human feedback as well.
Building upon the above two datasets, we further propose a simple yet effective end-to-end training baseline for pluralistic salient object detection, which accomplishes two tasks within one model: outputting multiple mask candidates and predicting human preference scores for each mask candidate. Because these two tasks have different input/output formats and need to learn different representations, we find that adopting two different input embedding tails and prediction heads while sharing the same backbone does not work well. Therefore, we introduce a mixture-of-experts (MOE) design and use different expert modules for each task in the shared backbone, which effectively addresses the interference between the two tasks. As shown at the bottom of Fig. <ref>, our inference process first generates multiple mask outputs. These masks pass through our MQP module in a batch, resulting in a preference score for each. The score can be leveraged as guidance to decide whether the corresponding mask should be kept. Through comprehensive experimental results and analysis, we showcase the significant value of our newly introduced datasets and the efficacy of our PSOD baseline. Our key contributions can be summarized as follows:
* We introduce a novel task “PSOD”, the first to address the inherent ambiguity in salient object detection, offering a fresh perspective and a redefined objective.
* We present two large-scale datasets, DUTS-MM and DUTS-MQ, curated for PSOD. These two datasets will be made publicly available, fostering further research along this direction.
* We present one simple yet effective end-to-end baseline for PSOD, capable of predicting not only multiple output mask candidates but also human preference scores. To our knowledge, this is also the first method that evaluates SOD quality in alignment with real human feedback.
§ RELATED WORK
Over recent years, deep learning based methods <cit.> have achieved remarkable success in salient object detection. A crucial avenue for performance enhancement involves the incorporation of diverse network structures, such as fully convolutional networks <cit.>, Complex-valued Networks <cit.>, and Transformers <cit.>. Additional directions for improvement encompass edge enhancement <cit.>, feature extraction, and refinement <cit.>, lightweight model design <cit.>, as well as novel supervision strategies and loss functions <cit.>.
Along with this methodological development, many SOD datasets have been introduced as well, including ECSSD <cit.>, PASCAL-S <cit.>, DUT-OMRON <cit.>, HKU-IS <cit.> and SOD <cit.>. Early datasets like SOD <cit.> and PASCAL-S <cit.> are relatively small in scale, covering only 300 and 850 images, respectively. ECSSD <cit.> offers 1,000 images, notable for their semantically rich yet intricate structures, accompanied by complex backgrounds. Recently, the most widely used dataset is DUTS <cit.>, boasting a larger scale with 10,553 training images and 5,019 test images sourced from ImageNet. Serving as the lifeblood and testbed, the evolution of these datasets has made a substantial contribution to the development of SOD techniques. All of the above datasets and methodologies predominantly framed SOD as a single-mask prediction problem, overlooking its inherent ambiguity and diversity. In contrast, this paper introduces a distinct task definition along with two new datasets that permit the generation of multiple masks and the prediction of human preference.
Another SOD setting closely related to our proposed PSOD is salient object subitizing <cit.>, which predicts the number of salient object instances and potentially the saliency rank of each instance. The final saliency mask is then generated by merging one or multiple predicted object instance masks. However, this setting necessitates more costly instance-level object annotation and a more intricate network design, as it requires the model to differentiate between object instances. The SOD task itself, by contrast, does not require such instance discrimination. Additionally, as segmenting high-quality object instance boundaries is challenging, stitching multiple instance masks in this setting will result in unwanted holes/seams in the final mask. Furthermore, this setting struggles to handle salient regions belonging to stuff categories (e.g., a tree in front of a blurred background), which are difficult to define as instances. Compared to salient object subitizing, our PSOD setting is notably simpler and avoids unnecessary instance differentiation, and is thus naturally capable of handling salient regions without a clear instance definition.
§ METHODS
Recognizing the inherent ambiguity of SOD, we design a simple yet effective baseline network “PSOD-MQP” for our newly proposed task setting. The overall framework is shown in Fig. <ref>. In order to predict human-preferred pluralistic saliency masks, it contains two sub-modules, PSOD and the mask quality predictor (MQP). PSOD is used to predict multiple masks that cover different potential saliency maps, while MQP predicts the human preference score for each saliency mask from the image-mask pair, without knowing the ground-truth mask. Considering that PSOD and MQP have different inputs (image vs. image-mask pair) and require learning different characteristics, we find that naively combining these two tasks within one model by using independent prediction heads and beginning embedding layers while sharing the same backbone does not work well. To address the learning interference, we incorporate the Mixture-of-Experts (MoE) design into our transformer-based backbone as MoEBlocks. This is achieved by substituting the feed-forward network (FFN) layer with an expert-switch layer, where two task-specific FFNs are trained for the two tasks respectively. In this paper, we simply adopt DaViT <cit.> as the backbone by default; other more advanced backbones <cit.> can also work well. We replace the FFN layer in the original channel attention block and spatial attention block with a switch which applies the task-specific FFN for PSOD or MQP. In Fig. <ref>, we denote the PSOD expert and MQP expert as “P-FFN” and “Q-FFN”, which are activated respectively when training the corresponding task. Besides the expert design, we also use different beginning embedding layers and prediction heads for the two tasks, which will be further elaborated below.
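A minimal PyTorch sketch of the expert-switch idea is given below; the module names and dimensions are illustrative, and a standard multi-head attention layer stands in for DaViT's channel/spatial attention blocks.

```python
import torch.nn as nn

class ExpertSwitchFFN(nn.Module):
    """FFN layer with one expert per task; only the selected expert runs."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        make_ffn = lambda: nn.Sequential(nn.Linear(dim, hidden_dim), nn.GELU(),
                                         nn.Linear(hidden_dim, dim))
        self.experts = nn.ModuleDict({"psod": make_ffn(), "mqp": make_ffn()})

    def forward(self, x, task):
        return self.experts[task](x)  # task is "psod" or "mqp"

class MoEBlock(nn.Module):
    """Attention block whose FFN is replaced by the task-switched experts."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = ExpertSwitchFFN(dim, 4 * dim)

    def forward(self, x, task):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.ffn(self.norm2(x), task)
```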
§.§ PSOD Sub-Module
Since current SOD methods predominantly focus on predicting a single saliency mask, they lack direct applicability for pluralistic salient object detection, which requires identifying multiple potential salient objects within an image. In this paper, inspired by the latest work SAM <cit.>, we design a prompt-driven mask decoder with multiple learnable output tokens as the PSOD head.
In detail, given the input image, we first use one embedding layer to convert it into visual tokens. Then such tokens will be fed into the backbone to get the multi-scale features, where each expert-switch layer in the backbone MoEBlocks will select the P-FFN to process the incoming features. To aggregate the multi-scale features, we utilize a modified version of the FPN proposed in PanopticFPN <cit.> for segmentation tasks, by excluding the last upsampling layer. The output feature of the FPN is 1/4 of the input resolution and will be fed into the PSOD head to predict multiple mask candidates by being modulated by different learnable tokens via mutual cross-attention. It should be noted that we omit the IoU prediction branch in the SAM decoder <cit.> and use our proposed Mask Quality Predictor module to select the high-quality masks.
§.§ MQP Sub-Module
To predict the human preference score, we take an image and a mask candidate as the input, concatenate them along the channel dimension, and then use another embedding layer to convert them into visual tokens. Similarly, these visual tokens will be fed into the shared backbone while selecting the Q-FFN in each MoEBlock. The output feature is finally fed into the MQP prediction head to predict the preference score. In this paper, we simply design the MQP prediction head as a stack of self-attention layers, one global average pooling layer and one MLP layer followed by a sigmoid. In real-world applications, users can leverage the MQP head to meet their specific requirements. For instance, if they prefer more diverse results, they can set a low score threshold, allowing the MQP to output more results. Conversely, if users prioritize mask quality, they can set a higher threshold, below which the corresponding masks are removed. In summary, the key benefit of MQP is that it offers a flexible solution that allows users to strike a balance between the quantity and quality of the results. This flexibility sets our method apart from other fixed-mask approaches, emphasizing the benefit of integrating MQP into the whole PSOD framework.
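In code, this user-facing filtering reduces to thresholding the predicted preference scores; in the sketch below, psod_model and mqp_model are placeholders for the two sub-modules:

```python
def pluralistic_predict(image, psod_model, mqp_model, threshold=0.5):
    masks = psod_model(image)                      # (n, H, W) mask candidates
    scores = [mqp_model(image, m) for m in masks]  # preference score per mask
    # lower threshold -> more diverse results; higher -> higher-quality masks
    return [(m, s) for m, s in zip(masks, scores) if s >= threshold]
```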
§.§ Losses
To train our proposed model, we jointly train these two tasks and adopt different loss objectives accordingly. For PSOD task, we utilize the Cross-Entropy (CE) loss rather than the focal loss <cit.> used in SAM, since in our experiments we observe that focal loss is less stable. The possible reason is that, due to the existence of soft values in the ground-truth mask, the theoretical optimal point of focal loss is no longer at p=t, where p,t is the prediction and ground-truth mask value, respectively. The Dice loss <cit.> is kept and combined with the CE loss as follows:
ℒ_mask= λℒ_ce + ℒ_dice,
where the value of λ is set to 2.5 in our implementation. In addition, since multiple masks are generated, similar to SAM, we only backpropagate the mask that has the minimum loss with respect to the ground-truth.
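A sketch of this mask objective with the minimum-loss selection rule (λ = 2.5 as above) is given below; the soft Dice implementation and tensor shapes are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, gt, eps=1.0):
    # soft Dice loss per mask candidate
    inter = (pred * gt).sum(dim=(-2, -1))
    union = pred.sum(dim=(-2, -1)) + gt.sum(dim=(-2, -1))
    return 1 - (2 * inter + eps) / (union + eps)

def psod_mask_loss(logits, gt, lam=2.5):
    """logits: (n, H, W) mask candidates; gt: (H, W) one sampled ground-truth.
    Only the candidate with the minimum loss is backpropagated."""
    gt = gt.expand_as(logits)
    ce = F.binary_cross_entropy_with_logits(
        logits, gt, reduction="none").mean(dim=(-2, -1))
    losses = lam * ce + dice_loss(torch.sigmoid(logits), gt)
    return losses.min()
```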
For the MQP task, we treat it as a regression task, as our experiments empirically show improved performance and training stability compared to formulating it as a classification problem. One more benefit is that the model can naturally predict a continuous preference score. By default, we simply use the Mean Squared Error (MSE) loss as our objective function, aiming to guide the model towards predicting preference scores that closely align with human-annotated scores.
§ DATASET
To support our PSOD setting, we further propose two new datasets, DUTS-MM and DUTS-MQ. DUTS-MM reuses the images from the DUTS <cit.> dataset, but provides multiple possible saliency masks for images containing ambiguity in defining salient objects, rather than just one mask like DUTS. DUTS-MQ provides massive image-mask pairs with human-annotated preference scores under a diverse quality distribution. Before introducing our newly proposed datasets, we first analyze some annotation issues within DUTS.
§.§ Annotation Issues within DUTS
By examining the annotations within DUTS, we identify another two main issues: 1) coarse annotation quality; 2) annotation inconsistency. More specifically, the coarse annotation issue manifests in two aspects. First, some salient objects with thin structures are not annotated accurately. For example, in the first row of Fig. <ref> (a), the wheel spokes and fan blades are not well segmented in the ground-truth mask, and the dog's hair is also delineated coarsely. Second, the issue of missing parts or introducing unrelated parts is widespread in DUTS. As shown in the second row of Fig. <ref> (a), the guitar strap, which should be integral to the guitar, is missing. We observe that such coarse annotation issue occurs in both the training and test sets. Consequently, models trained on this dataset often fail in segmenting salient objects with thin structures, and this shortcoming is not evident from the final test performance.
Besides coarse annotations, DUTS also contains a considerable portion of images with inconsistent annotations. This inconsistency may stem from multiple human annotators participating in the annotation process, each with varying intentions or interpretations of saliency, especially in ambiguous cases. This actually further echoes the inherent ambiguity within SOD. As illustrated in Fig. <ref> (b), the reflections of animals on water are perceived as salient objects, capturing attention with their mirrored beauty. However, in other instances, only the animals themselves are considered as the salient object. Due to the existence of such conflicted data annotation, the model is more likely to produce predictions with artifacts that exhibit low visual quality. Similar to the coarse annotation issue, inconsistent ground-truth in the test set can result in inaccurate test performance. To overcome these issues in the DUTS dataset, we conduct a comprehensive reannotation, providing finer masks and multiple ground-truth masks for images with ambiguity, which significantly alleviates the annotation inconsistency issue. We will discuss this further in the following sections.
§.§ DUTS-MM Dataset
DUTS-MM relabels the images in the DUTS dataset that exhibit the above two issues, providing multiple masks for images with inherent ambiguity. During the labeling process, we instruct annotators to provide masks that are as detailed as possible, especially around the boundary areas and thin structures of salient objects, to address the coarse annotation issue in DUTS. As for the inconsistent annotation, we observe that the primary cause is the ambiguous definition of a salient object. For instance, as shown in the left of the third row in Fig. <ref> (b), both the individual and the chair are collectively perceived as salient objects, yet in the right sample the salience is attributed exclusively to the person. Therefore, we ask annotators to provide multiple ground-truth masks for such images to encompass all possible combinations of salient objects. For images without ambiguity, we still annotate only one ground-truth mask. In Fig. <ref> (b), we show some typical cases that contain ambiguity, including interaction between humans and objects, reflection, and multiple objects in front of sky or blurred backgrounds.
Detailed statistics of the number of ground-truth masks per image in our dataset are illustrated in Fig. <ref>. Notably, approximately 23% and 34% of images contain more than one ground-truth mask in the training and test sets, respectively. Among these, a substantial proportion of images have three ground-truth masks. It is worth noting that the images in the DUTS dataset are meticulously selected to include salient objects. Through empirical observation, we find that three ground-truth masks suffice to cover all possible saliency masks for each of these images.
§.§ DUTS-MQ Dataset
Trained with the proposed DUTS-MM dataset, our PSOD model is able to return multiple predicted masks per image. However, not all of them are of high quality, so it would be very useful to automatically select the high-quality ones for users in real products. Since the ground-truth mask is unknown during testing, designing a quality predictor that can assess mask quality without a ground-truth reference becomes necessary.
To this end, we propose DUTS-MQ to support the training of such a mask quality predictor. Each sample in DUTS-MQ is a triplet {ℐ, 𝒫, 𝒬}, consisting of an input image ℐ, a predicted mask 𝒫, and a human-annotated preference score 𝒬.
In detail, to build DUTS-MQ, we carefully select images from the ImageNet dataset to maintain a similar distribution to DUTS, while avoiding identical images (to prevent the generation of perfect masks by models trained on DUTS). The image selection process involves computing the similarity between images in ImageNet and those in DUTS-TR based on features from the CLIP <cit.> image encoder. We retain the images with similarities ranked second and third for each reference input from DUTS-TR, and then further filter them by manually excluding those without clear salient objects.
For each selected image, we use two state-of-the-art SOD models (TE3 and TE5 proposed in TRACER <cit.>) and our PSOD model (to be introduced in the following section) trained on DUTS-MM to generate different masks. Finally, we obtain 19,703 images, with each image containing 5 distinct masks.
In annotating mask quality, we instruct annotators to rely on their visual preferences rather than using mIoU as the criterion. This aims to simulate user feedback in real-world application scenarios. For each image-mask pair, annotators are asked to assign one of four distinct levels, ranging from 1 to 4, where 1 and 4 represent the least satisfactory mask and the ideal mask without any errors, respectively. Among the annotated image-mask pairs, we uniformly sample 5,564 pairs as the validation set for determining training errors, while the remaining pairs form the training set. Some typical annotation examples are shown in Fig. <ref>. It can be seen that some masks assigned level 1, despite having high mIoU values, leave a poor impression on humans due to annoying artifacts, while some masks missing a large part (e.g., the elephant in the second row) are still preferred. Note that both predicted masks with artifacts are typical examples of the salient-ambiguity cases we discussed earlier, likely caused by inconsistent annotations. This observation highlights the necessity of addressing the inherent ambiguity and annotation inconsistency in SOD. The detailed statistics of DUTS-MQ are shown in Table <ref>. It reveals that masks with quality levels of 1 and 2 account for more than 65% of the entire dataset, while only 16% of the samples (annotated as level 4) are considered “perfect” by the human annotators, indicating substantial room for improvement in current SOD models.
Note that, when annotating the above two datasets, two annotators cross-check each sample: one annotates the mask/score and the other reviews the annotation result to guarantee quality. If the reviewer is uncertain about some hard cases, a third senior supervisor is involved and corrects the annotation labels if needed.
§.§ Evaluation Metrics
Since multiple prediction masks and ground-truth masks exist in the PSOD setting, we can no longer rely on conventional one-one matching metrics such as mIoU, which are employed in previous SOD studies. In this paper, we redefine the average precision AP, average recall AR and F_1 score to evaluate the model for the PSOD task. Specifically, given K predicted masks (M̂_i_1,...,M̂_i_K) and J GT masks (M_i_1,..., M_i_J) for the ith image, the precision is defined as:
Prec_i = 1/K∑_k=1^Kℳ(M̂_i_k,M_i_j^*),
where ℳ(·) denotes the average of the mean F-measure <cit.> and the S-measure <cit.>, and M_i_j^* = arg max_j ∈{1,…,J}ℳ(M̂_i_k, M_i_j) is the ground-truth mask that best matches the predicted mask M̂_i_k. Similarly, the recall is defined as:
Rec_i = 1/J∑_j=1^Jℳ(M̂_i_k^*,M_i_j),
where M̂_i_k^* = arg max_k ∈{1,…,K}ℳ(M̂_i_k, M_i_j) is the predicted mask closest to the ground-truth M_i_j. Finally, we calculate the average precision (AP) and average recall (AR) over the N images, and the F_1 score is the harmonic mean of AP and AR.
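These metrics reduce to row- and column-wise maxima of a pairwise score matrix. In the sketch below, the callable score stands in for ℳ(·), i.e., the average of the mean F-measure and S-measure:

```python
import numpy as np

def psod_metrics(pred_masks, gt_masks, score):
    """pred_masks: K predicted masks; gt_masks: J ground-truth masks;
    score: callable implementing the match measure M(., .)."""
    S = np.array([[score(p, g) for g in gt_masks] for p in pred_masks])  # K x J
    prec = S.max(axis=1).mean()  # each prediction vs. its best-matching GT
    rec = S.max(axis=0).mean()   # each GT vs. its closest prediction
    f1 = 2 * prec * rec / (prec + rec)
    return prec, rec, f1
```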
§ EXPERIMENTS
§.§ Implementation Details.
We employ the ImageNet <cit.> pretrained DaViT-Tiny <cit.> as the backbone by default. For PSOD, the FPN neck and mask decoder are initialized randomly. We scale inputs to the resolution of 512 × 512 for both training and inference phases. The output channel of the FPN features is set to 256 while the mask decoder is designed to generate 5 masks per image. Aligning with PSOD sub-module, inputs (concatenation of an image and a predicted mask) of MQP are scaled to 512 × 512 as well. We utilize the Adam optimizer with beta1 and beta2 values of 0.5 and 0.999, respectively. The initial learning rate is set at 0.0001. For PSOD, it follows a cosine learning rate scheduler decaying to 0, whereas for MQP, it is decreased by a factor of 10 if the validation set loss plateaus over five consecutive epochs. The model is trained for 50 epochs with batch size 4/8 (PSOD/MQP) on 4 RTX 2080Ti GPUs. For MQP training, we normalize the ground-truth scores by mapping the quality level {1, 2, 3, 4} to {0, 0.33, 0.67, 1.0} respectively. During the inference stage, we initially execute a forward pass to generate multiple saliency maps, followed by a second forward pass specifically dedicated to assessing the quality of each generated mask.
§.§ Main Experiment Results
Quantitative Comparison
In this section, we evaluate our PSOD model on the proposed DUTS-MM dataset and compare our results with five state-of-the-art SOD methods, namely MINet <cit.>, VST <cit.>, TRACER <cit.>, EDN <cit.>, and SelfReformer <cit.>. In addition, we also compare with two SOTA high-resolution SOD methods (PGNet <cit.> and InSPyReNet <cit.>) and the only open-sourced and runnable subitizing method, RSDNet <cit.>. We keep the original experimental settings of these methods. All methods are retrained and evaluated on DUTS-MM. Besides, we retrain the two high-resolution SOD methods with the same resolution (512 × 512) and mask quality, so there is no difference in training mask annotation quality. The only difference is that PSOD is supervised by multiple GTs at each iteration while the others are learned with one of the GT masks (one GT mask is sampled at each iteration, and multiple GT masks are regarded as multiple training samples) due to their architectural constraints. The main results are displayed in Table <ref>. The results of our model are derived by evaluating the masks generated and filtered by the PSOD module and MQP module, respectively. For mask filtering, we establish a threshold for the mask quality predictor, below which the corresponding mask is deemed `poor' and excluded from evaluation. By adjusting this threshold, we obtain and report the optimal F_1 score along with its corresponding AP and AR in Table <ref>.
As observed, our PSOD network + MQP framework significantly surpasses existing methods, improving AP, AR, and F_1 by margins of 1.1%, 6.3%, and 3.7% over the best-performing method, respectively. Compared with the high-resolution SOD methods, PSOD still shows a clear superiority in terms of AR and F_1. Moreover, our method clearly beats the subitizing-based SOD method.
To more effectively show the superiority of our framework, we construct a precision-recall curve in Fig. <ref>. The curves are generated by adjusting the threshold for the MQP from 0 to 0.9 in increments of 0.1. Here only the curve obtained by the PSOD model with 5 output masks (n=5 in Fig. <ref>) is discussed. As observed, our curve lies to the right of all other state-of-the-art methods, indicating superior performance. Furthermore, the AP range spans from 0.866 to 0.904, while the AR range extends from 0.924 to 0.860. This demonstrates that by incorporating the MQP, our system can more effectively balance precision and recall to meet the requirements of various users, whereas other methods produce fixed masks. In other words, in real-world applications, users can adjust the threshold based on their preferences to obtain more tailored results.
To further verify the generalizability of our model across different datasets, we conducted a comprehensive comparison with the top-performing state-of-the-art methods, TRACER and VSCode-T <cit.>, using four additional benchmark datasets: DUT-OMRON, ECSSD, HKU-IS, and PASCAL-S. The results, as presented in Table <ref>, demonstrate that our method outperforms the others on most of these benchmarks across prevalent evaluation metrics. This indicates the robustness and effectiveness of our approach in diverse scenarios.
Qualitative Comparison
In addition to the quantitative results, Fig. <ref> shows the qualitative results of our PSOD model and 5 other methods on three representative images from the DUTS-MM test set. We display 3 output masks selected by the MQP per image. We can see from the figure that our method is capable of generating different saliency maps. For each map, our model locates the salient regions well, making the predicted saliency map closer to the ground-truth than other methods. We also visualize the output scores of the mask quality predictor for multiple predicted masks of PSOD in Fig. <ref> (similar to Fig. <ref>, only the masks with top-3 MQP scores are shown). By setting a proper quality threshold, our mask quality predictor can automatically filter out low-quality masks for users.
§.§ Ablation Study
Impact of Model Size
We compare multiple backbone encoders with varying model sizes, ranging from DaViT-Tiny (28.3 Mb) to DaViT-Base (87.9 Mb). The results are presented in Table <ref>. It can be seen that increasing the model size results in only minor performance improvements, while incurring more computational cost. Therefore, we utilize DaViT-Tiny throughout the paper by default to maintain efficiency. If we further increase the dataset scale and complexity, larger models may help more.
Performance gain from backbone?
In Table <ref>, we substitute the backbone networks in TRACER (EfficientNet), EDN (ResNet50), and SelfReformer (PVT) with our consistent backbone DaViT-Tiny, allowing for a direct comparison of performance across different methods using the same backbone. Despite this uniform modification, our PSOD method continues to exhibit superior performance, highlighting the robustness and effectiveness of our approach. Specifically, our method's AP score is higher by 0.052, its AR score is higher by 0.094, and its F_1 score is higher by 0.073. These results underscore the strength of our PSOD approach, as it consistently delivers higher accuracy, recall, and F_1 scores, even when the same backbone network is employed, demonstrating its superior performance across all evaluated metrics.
Number of output masks of PSOD
By using more output tokens as prompts, our PSOD framework can output more mask candidates. Here we conduct an ablation study on the number of output masks generated by the proposed PSOD. In Table <ref> and Fig. <ref>, we show the results with 3–6 output masks. We observe that generating more masks yields slightly better results as n increases from 3 to 5, while the performance drops at n=6. As adding an output token introduces only a small computational overhead relative to the total cost, we opt for 5 output masks for PSOD.
MQP vs. SAM IoU Predictor
In this paper, we employ an additional mask quality predictor module to autonomously assess the quality of predicted masks. Notably, SAM <cit.> integrates an IoU predictor. To show the superiority of MQP over the IoU predictor in reflecting real human preference, we train a variant of PSOD with an additional IoU predictor branch. For quantitative comparison, we randomly select 500 images from the test set. For each image, we choose a pair of predicted masks with visually discernible quality differences from the output of PSOD. We then ask human annotators to assign a score of 1 to the superior mask and 0 to the inferior one, establishing a ground-truth (GT). The output from both MQP and IoU prediction scores is subsequently employed to rank each mask pair and assess consistency with human feedback. On average, MQP achieves an accuracy of 94.2%, surpassing SAM's IoU predictor by 12.4% (94.2% vs. 81.8%). One visual comparison example is provided in Fig. <ref>. It is evident that MQP is more sensitive to visual flaws in the masks, assigning significantly lower scores, whereas SAM's IoU predictor focuses more on IoU, which cannot accurately reflect the human preference.
MQP vs. Current Metrics
To validate the efficacy of our MQP sub-module in assessing mask quality that aligns with human preference, we conduct a thorough comparison between preference scores from MQP and existing established SOD evaluation metrics. Specifically, we compare MQP against five widely used metrics, i.e., mean absolute error (MAE) <cit.>, max F-measure (F_max) <cit.>, mean F-measure (F_avg) <cit.>, mean E-measure (E_ξ) <cit.> and S-measure (S_m) <cit.>. Apart from MAE where smaller is better, the other four metrics are better when larger. We conduct this experiment under the same setting when comparing MQP with SAM's IoU predictor. The alignment accuracy with human feedback is presented in Fig. <ref>. It shows that our MQP sub-module clearly outperforms existing SOD evaluation metrics, demonstrating closer alignment with the real human feedback.
Enhancing performance and human preference alignment using MQP
MQP enables automatic ranking not only for PSOD but also for existing SOD methods in real products. To show this capability, we use five existing SOTA SOD methods to generate SOD masks for 500 randomly chosen images from DUTS-MM, and then use MQP to choose the best SOD mask for each image without knowing the GT SOD mask. This automatic selection boosts the F_1 evaluation metric to 0.87, which is better than the best-performing method (F_1: 0.86). More importantly, the selected masks align with human preference much better (alignment acc: 77.5%) than mask selection using existing evaluation metrics (F_β^avg) that require GT masks (alignment acc: 57.0%), which are not available in real products.
PSOD vs. SAM We conduct an experiment to fine-tune EfficientSAM-S <cit.> on our dataset, given the huge model size of SAM. In detail, we employ a full-image bounding box to prompt the model. The results can be seen in Table <ref>. Interestingly, merely fine-tuning SAM does not yield superior performance. This might be because SAM is not specifically designed and pre-trained for the SOD task.
§ CONCLUSION
This paper pioneers the exploration of inherent ambiguity in Salient Object Detection (SOD) by introducing a novel perspective: redefining SOD as a pluralistic salient object detection (PSOD) task, generating multiple mask candidates per image. To facilitate the training of PSOD, we introduce two new large-scale datasets, DUTS-MM and DUTS-MQ. Different from existing SOD datasets, which offer only one ground-truth mask per image, DUTS-MM is tailored to supply multiple saliency masks for images containing ambiguity in defining salient objects. Additionally, we enhance annotation quality and mitigate issues of annotation inconsistency within the existing SOD dataset. Notably, DUTS-MQ is the first dataset designed to provide real human visual preference scores for evaluating mask quality, without the need for knowledge about the ground-truth mask. Building on these two new datasets, we propose a simple PSOD baseline capable of predicting multiple masks in SOD and human preference scores for each mask, which can be used to automatically estimate the mask quality in alignment with real human feedback. We believe our paper will provide a fresh perspective to the research community and inspire more great works along this direction.
|
http://arxiv.org/abs/2409.03660v1 | 20240905161358 | Poincaré and Sobolev inequalities with variable exponents and log-Holder continuity only at the boundary | [
"David Cruz-Uribe",
"Fernando López-Garcí a",
"Ignacio Ojea"
] | math.AP | [
"math.AP",
"math.CA",
"Primary: 46E35, Secondary: 26D10, 42B35"
] |
§ ABSTRACT
We prove Sobolev-Poincaré and Poincaré inequalities in variable Lebesgue spaces L^p(·)(Ω), with Ω⊂ℝ^n a bounded John domain, with weaker regularity assumptions on the exponent than have been used previously. In particular, we require p(·) to satisfy a new boundary log-Hölder condition that imposes some logarithmic decay on the oscillation of p(·) towards the boundary of the domain. Some control over the interior oscillation of p(·) is also needed, but it is given by a very general condition that allows p(·) to be discontinuous at every point of Ω. Our results follow from a local-to-global argument based on the continuity of certain Hardy type operators. We provide examples that show that our boundary log-Hölder condition is essentially necessary for our main results. The same examples are adapted to show that this condition is not sufficient for other related inequalities. Finally, we give an application to a Neumann problem for a degenerate p(·)-Laplacian.
2020 Mathematics Subject Classification: Primary 46E35; Secondary 26D10, 42B35.
The first author is partially supported by a Simons Foundation
Travel Support for Mathematicians Grant and by NSF Grant DMS-2349550.
The third author is partially supported by ANPCyT under grand PICT 2018-3017, by CONICET under grant PIP112201130100184CO and by Universidad de Buenos Aires under grant 20020170100056BA.
§ INTRODUCTION
In this paper we prove Poincaré and Sobolev inequalities in the variable Lebesgue spaces L^p(·)(Ω), with weaker regularity assumptions on the exponent function than have been used previously. To put our results in context, we briefly survey some earlier results in the classical Lebesgue spaces. For a more detailed history of these results, with proofs, see <cit.>.
A Sobolev-Poincaré inequality is an inequality of the form
f-f_Ø_L^q(Ø)≤ C∇ f_L^p(Ø),
where Ø⊂^n is a domain, f is locally Lipschitz on Ø, and f_Ø=1/|Ø|∫_Ø f dx. Some assumptions need to be made on the domain Ø, and on the exponents p and q, for this inequality to hold. For 1≤ p<n, define p^*=np/n-p. If Ω is a Lipschitz domain and 1<p<n, (<ref>) was proved for q=p^* by Sobolev <cit.>, and extended to the case p=1 and q=1^* by Gagliardo <cit.> and Nirenberg <cit.>. The inequality for q<p^* follows from the critical case q=p^* by Hölder's inequality. Inequality (<ref>) was extended to bounded John domains by Martio <cit.> when 1<p<n and by Bojarski <cit.> when p=1. Moreover, Buckley and Koskela <cit.> showed that John domains are the largest class of bounded domains for which the Sobolev-Poincaré inequality holds with p<n and q=p^*.
If p<n, then as p→ n, p^*→∞, but the constant on the right-hand side of (<ref>) blows up.
Inequality (<ref>) with p=n does not hold for q=∞, but for p≥ n it holds for every 1<q<∞. This case follows from the Sobolev embedding theorem; see <cit.>.
Inequality (<ref>) is a particular case of a weighted inequality of the form
f - f_Ω_L^q(Ø)≤ Cd^1-α∇ f_L^p(Ø),
where d(x) =dist(x,∂Ø), α∈[0,1], pα<n, and 1<p≤ q≤np/n-pα. Inequality (<ref>) was first proved on bounded John domains by Hurri-Syrjänen <cit.>, and later extended with additional weights by Drelichman and Durán <cit.>. An important special case of this inequality is when q=p and α=1; this case is usually referred to as an improved Poincaré inequality.
A Sobolev inequality is an inequality of the form
f_L^q(Ø)≤ C∇ f_L^p(Ø),
where f is assumed to be a Lipschitz function with compact support contained in Ω. By Hölder's inequality and the triangle inequality, the Poincaré inequality (<ref>) implies the Sobolev inequality. However, one key difference is that since this inequality is for functions of compact support, their behavior near the boundary, and so the geometry of the set Ω, does not play an important role.
Sobolev and Sobolev-Poincaré inequalities have also been proved on the variable Lebesgue spaces. Intuitively, given an exponent function : Ω→ [1,∞), the space (Ω) consists of all measurable functions f such that
∫_Ω |f(x)|^p(x) dx < ∞.
For a precise definition and further details, see Section <ref> below and <cit.>.
Harjulehto and Hästö proved that if Ω is a John domain, and the exponent p(·) is such that either p_-(Ω)<n and p_+(Ω) ≤ p_-(Ω)^*, or p_-(Ω)≥ n and p_+(Ω)<∞, then
f-f_Ω_L^p(·)(Ω)≤ C ∇ f_L^p(·)(Ω).
As a consequence of this result they showed that if Ω is a bounded, convex domain and p(·) is uniformly continuous on Ω, then (<ref>) holds.
In <cit.> Diening, et al. proved that given a John domain Ω, if the exponent p(·) satisfies 1<p_-(Ω) ≤ p_+(Ω) <n and is log-Hölder continuous, that is,
|p(x)-p(y)| ≤C_0/-log(|x-y|), x, y ∈Ω, |x-y|<12,
and if q(·) = p^*(·), then for all locally Lipschitz functions f,
f-f_Ω_L^q(·)(Ω)≤ C ∇ f_L^p(·)(Ω).
This result has been generalized to Hörmander vector fields by Li, Lu, and Tang <cit.>.
Sobolev inequalities have also been considered by a number of authors. Kováčik and Rákosník <cit.> proved that
f_L^q(·)(Ω)≤ C ∇ f_L^p(·)(Ω)
where p(·) is uniformly continuous on Ω, p_+<n, and q(x)=p^*(x)-ε for some ε>0. Edmunds and
Rákosník <cit.> proved that (<ref>) holds with q(·) = p^*(·), provided that Ω has Lipschitz boundary and p(·) is Lipschitz, later extending this to p(·) Hölder continuous.
In <cit.> the first author and Fiorenza proved that (<ref>) holds with q(·) = p^*(·) provided that p(·) is log-Hölder continuous on Ω. (Though not stated there, their argument also shows that (<ref>) holds with q(·) = p^*(·) if Ω is convex.) A different proof was given by Diening, et al. <cit.> It is worth noting that in <cit.> the hypotheses on p(·) are actually somewhat weaker than log-Hölder continuity: they instead assume that the Hardy-Littlewood maximal operator is bounded on
L^(p^*(·)/n')'(Ø). This hypothesis is a consequence of the proof, which uses the theory of Rubio de Francia extrapolation on variable Lebesgue spaces. For a discussion of how this hypothesis differs from assuming log-Hölder continuity, see <cit.>. A very different version of Sobolev's inequality was proved by Mercaldo, et al. <cit.>. They showed that if p(·) takes on two distinct values 1<p_1<p_2≤ 2, and the sets where it takes on these values have Lipschitz boundaries, then (<ref>) holds with q(·) = p^*(·).
Variable Lebesgue space versions of the weighted Poincaré inequality (<ref>) have not been explicitly proved in the literature. However, arguing as in the proof of <cit.>, it is possible to prove the following result. Let Ω be a bounded John domain and fix 0≤α <1. If the exponent p(·) satisfies 1<p_-(Ω) ≤ p_+(Ω) <n/α and is log-Hölder continuous, and q(·) ≤ np(·)/(n-α p(·)), then
f - f_Ω_L^q(·)(Ω)≤ Cd^1-α∇ f_L^p(·)(Ø).
The proof uses Rubio de Francia extrapolation, starting from the weighted versions of this inequality proved in <cit.>. Details are left to the reader. We note that, instead of assuming log-Hölder continuity, it is also possible to state the hypotheses of this result in terms of the boundedness of the maximal operator.
In all of the above results, it was necessary to assume some additional continuity or control on the oscillation of the exponent function p(·). For the original Poincaré inequality proved by Harjulehto and Hästö they either required a strong restriction on the global oscillation of the exponent function, or uniform continuity. Similarly, the Sobolev inequality of Mercaldo, et al. required a very particular form for the exponent function. In the later results for Sobolev and Sobolev-Poincaré inequalities, the exponent was required to satisfy p_+<n and to be log-Hölder continuous, or such that the Hardy-Littlewood maximal operator is bounded on a particular space that arises in the underlying extrapolation argument, a condition which, particularly in applications, is very close to log-Hölder continuity. In <cit.> the authors asked about weaker regularity assumptions for proving Poincaré and Sobolev inequalities.
In this paper we show that we can considerably weaken the previous regularity conditions and still prove the weighted Poincaré inequality (<ref>). We will require three conditions. First, we replace log-Hölder continuity with a condition that essentially implies that p(·) is log-Hölder continuous at the boundary. Given x∈Ω and τ≥ 1 we define d(x)=d(x,∂Ω) and B_x,τ = B(x,τ d(x)).
Given Ω⊂ℝ^n, let p(·) ∈𝒫(Ω) and τ≥ 1. We say that p(·) satisfies the ∂ LH_0^τ condition if there is a constant C_0, that may depend on τ, such that:
p_+(B_x,τ) - p_-(B_x,τ) ≤C_0/-log(τ d(x)),
for all x∈Ω with τ d(x)≤ 1/2. In that case, we write p(·) ∈∂ LH_0^τ(Ω).
Below, we will show that with modest additional conditions on ∂Ω, an exponent p(·) ∈∂ LH_0^τ(Ω) can be extended to a function that is log-Hölder continuous on ∂Ω. See Section <ref>.
Second, we require a weak continuity property in the interior of Ω.
Given a domain Ø⊂ℝ^n, ε>0, and f:Ø→ℝ, we say that the function f is ε-continuous at a point x∈Ø if there exists δ>0 such that
|f(y)-f(x)|<ε for every y ∈ B(x,δ).
If f is ε-continuous at every x∈Ø, we say that f is ε-continuous on Ø.
Finally, we say that f is uniformly ε-continuous if the same δ>0 can be taken for every x∈Ω.
The notion of ε-continuity given in Definition <ref> is well-known: it arises, for example, in the proof that a bounded function is Riemann integrable if and only if its set of discontinuities has Lebesgue measure 0. See Apostol <cit.>, or Convertito and the first author <cit.>.
It is immediate that if f is continuous on Ω, then it is ε-continuous for every ε>0. However, ε-continuous functions may be discontinuous: for example, any step function on ℝ whose discontinuities have a jump smaller than ε. In particular, there exist ε-continuous exponent functions for which the maximal operator is unbounded on L^p(·)(Ω): see <cit.>. If f is ε-continuous on Ω, then by a standard compactness argument, f is uniformly 2ε-continuous on Ω. We leave the details to the reader.
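To illustrate how these two conditions interact, the following simple example (ours, included only for illustration) gives an exponent that is uniformly ε-continuous and satisfies the ∂ LH_0^1 condition, yet is discontinuous at an interior point:

```latex
% On \Omega=(0,1), take
\[
  p(x) = 2 + \tfrac{\varepsilon}{2}\,\chi_{(1/2,1)}(x).
\]
% The jump at x=1/2 has size \varepsilon/2 < \varepsilon, so p(\cdot) is
% uniformly \varepsilon-continuous but discontinuous. Near the boundary,
% p(\cdot) is locally constant: if d(x)<1/4, then B_{x,1} misses the jump and
% p_+(B_{x,1}) - p_-(B_{x,1}) = 0, while for 1/4 \le d(x) \le 1/2 the
% oscillation is at most \varepsilon/2 \le C_0/(-\log d(x)) once
% C_0 \ge (\varepsilon/2)\log 4. Hence p(\cdot)\in\partial LH_0^{1}(\Omega).
```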
Finally, we need to restrict the relationship between p(·) and q(·) in a natural way. We will assume that p(·) ∈𝒫(Ω) satisfies 1<p_-(Ø)≤ p_+(Ø)<∞, and that q(·) ∈𝒫(Ω) is defined by
1/p(x) - α/n = 1/q(x)
where α satisfies
0≤α < 1 if p_+(Ø)<n
0≤α< n/p_+(Ø) if p_+(Ø)≥ n.
Note that with this definition our results include the case p_+>n.
Let Ø⊂ℝ^n be a bounded John domain. Suppose p(·) ∈𝒫(Ω) is such that 1<p_-(Ø)≤ p_+(Ø)<∞ and p(·) ∈∂ LH_0^τ_K(Ω), where the constant τ_K≥ 1 depends on the John domain constants of Ω. Fix α as in (<ref>) and define q(·) by (<ref>). Suppose also that 1/p(·) is uniformly σ/n-continuous for some σ<1-α. Then there is a constant C such that for every f∈ W^1,p(·)(Ω),
f-f_Ω_L^q(·)(Ω)≤ C d^1-α∇ f_L^p(·)(Ω),
where d(x)= dist(x,∂Ω).
As a consequence of Theorem <ref> we get a Sobolev inequality with the same exponents but without assuming log-Hölder continuity at the boundary.
Let Ø⊂ℝ^n be a bounded domain. Suppose p(·) ∈𝒫(Ω) is such that 1<p_-(Ø)≤ p_+(Ø)<∞. Fix α as in (<ref>) and define q(·) by (<ref>). Suppose also that 1/p(·) is uniformly σ/n-continuous for some σ<1-α. Then there is a constant C such that for every f∈ W_0^1,p(·)(Ω),
f_L^q(·)(Ω)≤ C ∇ f_L^p(·)(Ω).
One seeming drawback to Theorems <ref> and <ref> is that we are not able to prove the inequality for q(·) = p^*(·), but only for q(·) < p^*(·).
We will discuss the technical reason for the restriction that 0≤α<1 in (<ref>) below: see Remark <ref>.
Broadly speaking, however, the problem lies in the local inequality on cubes. Our arguments in the interior of Ω are based on the constant exponent Sobolev-Poincaré inequality, and so require only weak regularity assumptions on the exponents. However, this approach has a gap that prevents us from reaching the critical case, and this gap cannot be easily closed. We construct an example of a uniformly continuous exponent that is not log-Hölder continuous such that the Sobolev and Sobolev-Poincaré inequalities fail to hold with q(·) = p^*(·): see Section <ref>. This gives a positive answer to <cit.> and raises the question of determining the domains Ω and continuity conditions stronger than uniform continuity but weaker than log-Hölder continuity for which these inequalities hold.
We also note that while Theorem <ref> applies to much more general domains and exponent functions than the result of Mercaldo, et al., we are not actually able to recapture their result. In their result the exponent is 1/2-continuous, and (as we will see below in Corollary <ref>) we need to assume something stronger than 1/n-continuity. It is an interesting question as to whether and to what extent their results on the p(·)-Laplacian can be generalized using our results.
The basic idea in our proof of Theorem <ref> is to cover the domain Ω with cubes from a Whitney decomposition and prove a local Sobolev-Poincaré inequality on each cube. Different techniques are required for cubes in the interior and those close to the boundary. We then use the fact that Ω is a John domain to apply a local-to-global argument to patch together the local estimates. We prove Theorem <ref> using the well-known fact that a Sobolev-Poincaré inequality implies a Sobolev inequality, provided that one can estimate the average term f_Ω appropriately. The main tool we need for this argument is an extension theorem for ε-continuous exponent functions that allows us to avoid assuming that the domain is a John domain and that we have the log-Hölder condition at the boundary. It would be interesting to give a direct proof that does not pass through the Sobolev-Poincaré inequality.
The remainder of this paper is organized as follows. In Section <ref> we gather some preliminary results about variable Lebesgue and Sobolev spaces, and prove some basic properties of the ∂LH_0^τ(Ω) condition. In Section <ref> we introduce a tree structure defined using the Whitney decomposition of a domain, and prove a boundedness result for a Hardy-type operator, A_Γ, defined with respect to this tree structure. In Section <ref> we introduce two more Hardy-type operators, T_{p(·)} and T_{p(·)}^α, where the averages over the Whitney cubes are formed using the variable Lebesgue space norm, and explore the role played by the ∂LH_0^τ(Ω) condition. We note that the operator T_{p(·)} is gotten by taking the parameter α=0 in T_{p(·)}^α, but we have chosen to treat them separately, despite some repetition. We have done so since a careful analysis of the difference between the proofs shows exactly where the restriction (<ref>) on α in Theorem <ref> comes from. In Section <ref> we prove a decomposition theorem for functions on a John domain. This decomposition is central to our ability to extend Sobolev-Poincaré inequalities defined on cubes to a John domain.
In Section <ref> we prove Theorem <ref>. In order to do so we first prove a number of Sobolev-Poincaré inequalities on cubes. Some of these results are known, but we give all the details as we need to keep very careful track of the constants. We also give two corollaries that hold if we restrict the global oscillation of p(·): see Theorems <ref> and <ref>. In Section <ref> we prove Theorem <ref>; as we noted above, the heart of the proof is an extension theorem for ε-continuous functions; our proof is adapted from the proofs of <cit.> and <cit.>.
In Section <ref> we consider the question of the necessity of the boundary log-Hölder condition. We cannot prove that it is necessary in general, but we show that it is very close to necessary: we construct a continuous exponent that is not in ∂LH_0^τ(Ω) for which inequality (<ref>) fails to hold. This is analogous to the situation for the boundedness of the Hardy-Littlewood maximal operator: log-Hölder continuity is not necessary, but examples show that this is the weakest continuity condition which universally guarantees boundedness. Our construction is very general, and so we adapt it to give additional examples. First we construct a uniformly continuous exponent that is not log-Hölder continuous such that the Sobolev and Sobolev-Poincaré inequalities fail. We then modify this example to show that the Korn inequality need not be true on L^{p(·)}(Ω) if we do not assume p(·) is log-Hölder continuous in the interior. For this same example we show that the divergence equation cannot be solved in L^{p(·)}(Ω).
In Section <ref> we show that with modestly stronger assumptions on the domain, if an exponent function satisfies the ∂LH_0^τ(Ω) condition, then it extends in a natural way to a function defined on ∂Ω that is log-Hölder continuous on the boundary. This further reinforces referring to ∂LH_0^τ(Ω) as boundary log-Hölder continuity. Finally, in Section <ref> we apply our results to a problem in degenerate elliptic PDEs. We use our improved Poincaré inequalities and a result due to the first author, Penrod, and Rodney <cit.> to give solutions to a Neumann-type problem for a degenerate p(·)-Laplacian.
Throughout this paper, all notation is standard or will be defined as needed. By n we will always mean the dimension of the underlying space, ℝ^n. Constants will be denoted by C, c, etc., and may change in value from line to line. Given two quantities A and B, if A≤cB for some c>0, then we will write A≲B. If A≲B and B≲A, we will write A≈B.
§ VARIABLE LEBESGUE SPACES AND THE BOUNDARY LOG-HÖLDER CONDITION
We begin by defining the variable Lebesgue space L^{p(·)}(Ω). For complete information, see <cit.>.
Given a domain Ω⊂ℝ^n, let 𝒫(Ω) be the set of all Lebesgue measurable functions p(·):Ω→[1,∞]. Given any measurable set E⊂Ω, define
p_-(E) = ess inf_{x∈E} p(x), p_+(E) = ess sup_{x∈E} p(x).
For brevity we will write p_- = p_-(Ω) and p_+ = p_+(Ω). Let Ω_∞ = {x∈Ω : p(x)=∞} and Ω_0 = Ω∖Ω_∞. Define the space L^{p(·)}(Ω) to be the collection of all measurable functions f such that for some λ>0,
ρ_{p(·)}(f/λ) = ∫_{Ω_0}(|f(x)|/λ)^{p(x)} dx + λ^{-1}‖f‖_{L^∞(Ω_∞)} < ∞.
This becomes a Banach function space when equipped with the Luxemburg norm
‖f‖_{L^{p(·)}(Ω)} = inf{λ>0 : ρ_{p(·)}(f/λ) ≤ 1}.
We will be working with functions in the variable Sobolev spaces. Define W^{1,p(·)}(Ω) to be the space of all functions f∈W^{1,1}_{loc}(Ω) (that is, locally integrable functions whose weak derivatives exist and are locally integrable) such that f, |∇f| ∈ L^{p(·)}(Ω). Define W^{1,p(·)}_0(Ω) to be the closure of the Lipschitz functions of compact support in W^{1,p(·)}(Ω) with respect to the norm ‖f‖_{W^{1,p(·)}(Ω)} = ‖f‖_{L^{p(·)}(Ω)} + ‖∇f‖_{L^{p(·)}(Ω)}.
Given p(·)∈𝒫(Ω), define p'(·)∈𝒫(Ω), the dual exponent function, pointwise by
1/p(x) + 1/p'(x) = 1,
with the convention that 1/∞ = 0. We then have an equivalent expression for the norm, referred to as the associate norm:
‖f‖_{L^{p(·)}(Ω)} ≈ sup_{‖g‖_{L^{p'(·)}(Ω)}≤1} ∫_Ω f(x)g(x) dx.
We also have a version of Hölder's inequality: given f∈L^{p(·)}(Ω) and g∈L^{p'(·)}(Ω),
∫_Ω |f(x)g(x)| dx ≤ C‖f‖_{L^{p(·)}(Ω)}‖g‖_{L^{p'(·)}(Ω)}.
If Ω is a bounded domain, then we have the following general embedding theorem.
Let Ω be a bounded domain. Given p(·), q(·)∈𝒫(Ω), if p(x)≤q(x) for almost every x∈Ω, then L^{q(·)}(Ω)⊂L^{p(·)}(Ω) and ‖f‖_{L^{p(·)}(Ω)} ≤ (1+|Ω|)‖f‖_{L^{q(·)}(Ω)}.
We now prove some properties of the boundary log-Hölder condition, Definition <ref>.
We first note that the bound τd(x)≤1/2 in this definition is somewhat arbitrary: the intention is that this condition holds for x very close to ∂Ω. In particular, if p_+<∞ and 0<a<1, we get an equivalent condition if we assume τd(x)≤a. In this case we also get that the classes are nested: if τ_1<τ_2, then ∂LH_0^{τ_2}(Ω)⊂∂LH_0^{τ_1}(Ω). To see this, we will show that (<ref>) holds for τ_1 when τ_1 d(x) ≤ (τ_1/τ_2)^2 < 1. In this case, τ_2 d(x) ≤ τ_1/τ_2 < 1, so we may assume (<ref>) holds for τ_2. But then we have that
p_+(B_{x,τ_1}) - p_-(B_{x,τ_1}) ≤ p_+(B_{x,τ_2}) - p_-(B_{x,τ_2}) ≤ C_{τ_2}/(-log(τ_2 d(x))) = C_{τ_2}/(-log(τ_2/τ_1) - log(τ_1 d(x))) ≤ 2C_{τ_2}/(-log(τ_1 d(x))).
Given p(·)∈∂LH_0^τ(Ω), if p_-(B_{x,τ}) ≥ γ > 1 for every x∈Ω, then p'(·)∈∂LH_0^τ(Ω).
Given a set E⊂Ω, we have that
p'_+(E) = ess sup_{x∈E} p'(x) = (p_-(E))', p'_-(E) = ess inf_{x∈E} p'(x) = (p_+(E))'.
Therefore,
p'_+(B_{x,τ}) - p'_-(B_{x,τ}) = (p_-(B_{x,τ}))' - (p_+(B_{x,τ}))' = (p_+(B_{x,τ}) - p_-(B_{x,τ}))/((p_-(B_{x,τ})-1)(p_+(B_{x,τ})-1)) ≤ γ^{-2}(p_+(B_{x,τ}) - p_-(B_{x,τ})) ≤ γ^{-2} C_0/(-log(τd(x))).
As we shall see, to prove our main results we will only need p(·)∈∂LH_0^τ(Ω) for a certain τ depending on the domain. However, this particular value of τ may be unknown, so we also give the following more restrictive definition:
We say that p(·) is uniformly log-Hölder continuous at the boundary, and write p(·)∈∂LH_0(Ω), if p(·)∈∂LH_0^τ(Ω) for every τ≥1 with constant C_0 independent of τ.
For our work below on John domains we could actually assume the weaker condition that p(·)∈∂LH_0^τ(Ω) for all τ≥1 without the constant being independent of τ. We make this definition to establish a stronger class that may be needed for future work.
It is easy to check that LH_0(Ω)⊂∂ LH_0(Ω). The following example shows that there are exponents in ∂ LH_0(Ω) that do not belong to LH_0(Ω).
Let Ω=B(0,1)⊂ℝ^2 be the unit disc. Let A_1 and A_2 be a partition of Ω into Lebesgue measurable sets (i.e., A_1∪A_2=Ω and A_1∩A_2=∅). Let also 1≤p_-<p_+<∞. We define p(·):Ω→ℝ as
p(x) = p_- if x∈A_1,
p(x) = p_- + (p_+-p_-)log(2)/(-log(d(x)/2)) if x∈A_2.
Then, it is easy to check that p(·)∈∂LH_0(Ω), and it is discontinuous at every x∈Ω that belongs to the intersection of the closures of A_1 and A_2. Moreover, A_1 and A_2 can be chosen so that p(·) is discontinuous at every point in the domain.
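To verify the first claim, fix τ≥1 and x∈Ω with τd(x)≤1/2. For every y∈B_x,τ∩Ω we have d(y) ≤ d(x)+|x-y| ≤ 2τd(x), so d(y)/2 ≤ τd(x) ≤ 1/2 and hence -log(d(y)/2) ≥ -log(τd(x)) > 0. Since p ≡ p_- on A_1, it follows that
p_+(B_x,τ) - p_-(B_x,τ) ≤ (p_+-p_-)log(2)/(-log(τd(x))),
so the defining inequality for ∂LH_0^τ(Ω) holds with C_0 = (p_+-p_-)log 2, which is independent of τ.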
The following lemma gives an equivalent form of property ∂ LH_0^τ. It is an analogue to Lemma <cit.> (see also <cit.>).
Let Ω⊂ℝ^n be a domain, τ≥1, and p(·):Ω→ℝ with 1≤p_-≤p_+<∞. Then the following statements are equivalent:
(a) p(·)∈∂LH_0^τ(Ω);
(b) |B_{x,τ}|^{-(p(y)-p_-(B_{x,τ}))} ≤ C for every x∈Ω and almost every y∈B_{x,τ};
(c) |B_{x,τ}|^{-(p_+(B_{x,τ})-p_-(B_{x,τ}))} ≤ C for every x∈Ω.
We first show that (a) implies (b). Fix x∈Ω and suppose first that τ d(x)>1/2. Then
|B_x,τ|=c_n (τ d(x))^n ≥ c_n/2^n. Therefore, since for almost every y ∈ B_x,τ, p(y)≤ p_+(B_x,τ)≤ p_+,
|B_{x,τ}|^{-(p(y)-p_-(B_{x,τ}))} ≤ (1+2^n/c_n)^{p_+-p_-} = C.
On the other hand, if τ d(x)≤ 1/2, then
log(|B_{x,τ}|^{p_-(B_{x,τ})-p(y)}) = (p(y)-p_-(B_{x,τ}))log(|B_{x,τ}|^{-1})
≤ [C_0/(-log(τd(x)))]·log(|B_{x,τ}|^{-1}) = [C_0/(-log(τd(x)))]·log(c_n^{-1}(τd(x))^{-n}) ≤ C·C_0,
where C depends only on the dimension n.
To prove that (b) implies (a), note that for every x∈Ω and almost every y∈B_{x,τ}:
(1/(τd(x)))^{p(y)-p_-(B_{x,τ})} = (τd(x))^{p_-(B_{x,τ})-p(y)} ≤ C|B_{x,τ}|^{-(p(y)-p_-(B_{x,τ}))/n} ≤ C,
where C depends on τ, n and p_+-p_-. If we take the logarithm, we obtain
(p(y)-p_-(B_{x,τ}))log(1/(τd(x))) ≤ log C,
which is equivalent to the ∂LH_0^τ(Ω) condition.
Since p(y)≤ p_+(B_x,τ) for almost every y ∈ B_x,τ, (c) implies (b). To prove the converse, it suffices to note that there exists a sequence {y_k}⊂ B_x,τ such that (b) holds and p(y_k) → p_+(B_x,τ). If we pass to the limit, we get (c).
Let us recall that a Whitney decomposition of Ω is a collection {Q_t}_{t∈Γ} of closed dyadic cubes, whose interiors are pairwise disjoint, which satisfies
* Ω = ⋃_{t∈Γ} Q_t,
* diam(Q_t) ≤ d(Q_t,∂Ω) ≤ 4 diam(Q_t),
* (1/4)diam(Q_s) ≤ diam(Q_t) ≤ 4 diam(Q_s), if Q_s∩Q_t≠∅.
Let Q⊂Ω be a Whitney cube with side length equal to 2^{-k}. Then, if p(·)∈∂LH_0^τ(Ω) for some τ≥1,
p_+(Q) - p_-(Q) ≤ C/k.
To see this, let x_Q be the center of Q; then
d(x_Q) ≤ diam(Q) + d(Q,∂Ω) ≤ 5 diam(Q) = 5√(n)2^{-k}.
Hence, for any τ≥ 1, Q⊂ B_x_Q,1⊂ B_x_Q,τ, and so
p_+(Q) -p_-(Q) ≤ p_+(B_x_Q,1)-p_-(B_x_Q,1) ≤C_0/-log(d(x_Q))≤C_0/-log(5√(n)2^-k)≤C/k.
§ A HARDY-TYPE OPERATOR ON VARIABLE LEBESGUE SPACES
Our proof of Theorem <ref> is based on a local-to-global argument that extends the validity of the inequalities from Whitney cubes to the entire domain Ω in ℝ^n. We decompose Ω into a collection of Whitney cubes {Q_t}_{t∈Γ} and identify two cubes as adjacent if they intersect each other in an (n-1)-dimensional face. It is often helpful to view this discretization of the domain as a graph whose vertices are the Whitney cubes (technically, we consider a small expansion of the cubes) and two cubes are connected by an edge if they are adjacent in the sense given above. We can then define a rooted spanning tree on this graph, where the root is simply a distinguished cube. Usually, we take one of the largest Whitney cubes as the root, but it could be any other cube. This tree structure on the Whitney cubes encodes the geometry of the domain, which is fundamental for this analysis.
To apply this perspective, we recall some definitions and prove some basic results.
A tree is a graph (V,E), where V is the set of vertices and E the set of edges, satisfying that it is connected and has no cycles. A tree is said to be rooted if one vertex is designated as the root. In a rooted tree (V,E), it is possible to define a partial order “≼" in V as follows: s≼ t if and only if the unique path connecting t to the root a passes through s. We write t ≽ s if s≼ t.
The parent t_p of a vertex t is the vertex connected to t by an edge on the path to the root. It can be seen that each t∈ V different from the root has a unique parent, but several elements (children) in V could have the same parent. Note that two vertices are connected by an edge (adjacent vertices) if one is the parent of the other. For simplicity, we say that a set of indices Γ has a tree structure if Γ is the set of vertices of a rooted tree (Γ,E). Also, if the partial order “≼" in Γ is a total order (i.e. each element in Γ has no more than one child), we say that Γ is a chain, or has a chain structure.
For convenience, let us introduce the following notation:
Γ^* = Γ∖{a}.
Let Ω⊂ℝ^n be a bounded domain. We say that an open covering {U_t}_{t∈Γ} is a tree-covering of Ω if it also satisfies the properties:
* χ_Ω(x)≤∑_t∈Γχ_U_t(x)≤ C_1 χ_Ω(x), for almost every x∈Ω, where C_1≥ 1.
* The set of indices Γ has the structure of a rooted tree.
* There is a collection {B_t}_t≠ a of pairwise disjoint open sets such that B_t⊆ U_t∩ U_t_p, and there is a constant C_2 such that: |U_t|/|B_t|≤ C_2 for every t∈Γ.
Let Ω be a bounded domain and {Q_t}_{t∈Γ} a Whitney decomposition of it. If we take U_t to be a small expansion of the interior of Q_t, for example U_t = (17/16)int(Q_t), it is clear that we can define a rooted tree structure on Γ such that two vertices s and t are adjacent along the tree only if Q_t and Q_s share an (n-1)-dimensional face. Hence, every bounded domain admits a tree covering composed of expanded Whitney cubes.
This tree covering is not unique, so care should be taken in order to select a tree-covering that contains meaningful information about the geometry of the domain. For example, it is known that the quasi-hyperbolic distance between two cubes in a Whitney decomposition is comparable with the shorter chain of Whitney cubes connecting them (see <cit.>). Hence, we can take the open covering {U_t}_t∈Γ and apply an inductive argument to define a tree-covering such that the number of Whitney cubes in each chain to the root is minimal. This tree structure contains some geometric information in terms of the quasi-hyperbolic distance.
For each t we define W_t, the shadow of U_t, to be the set
W_t = ⋃_s≽ t U_s.
Let Ω⊂ℝ^n be a bounded domain with a tree-covering {U_t}_{t∈Γ}. We define the following Hardy-type operator on the shadows W_t:
A_Γ f(x) = ∑_{t∈Γ^*} (χ_{B_t}(x)/|W_t|) ∫_{W_t} |f(y)| dy.
Let Ω⊂ℝ^n be a bounded domain with a tree-covering {U_t}_{t∈Γ}. Suppose p(·)∈𝒫(Ω) is such that 1<p_-≤p_+<∞, and there is a constant C such that
|W_t|^{-(p(y)-p_-(W_t))} ≤ C
for every t∈Γ and almost every y∈W_t. Then the operator A_Γ defined in (<ref>) is bounded from L^{p(·)}(Ω) to itself.
By homogeneity, it suffices to prove that there is a constant C=C(p(·),Ω) such that
∫_Ω A_Γ f(x)^{p(x)} dx ≤ C
for any f∈L^{p(·)}(Ω) with ‖f‖_{L^{p(·)}(Ω)}=1. Fix such a function f.
Since the sets in the collection {B_t}_{t≠a} are pairwise disjoint we have that
∫_Ω A_Γ f(x)^{p(x)} dx = ∫_Ω ∑_{t∈Γ^*} (χ_{B_t}(x)/|W_t|^{p(x)}) (∫_{W_t}|f(y)| dy)^{p(x)} dx
≤ ∫_Ω ∑_{t∈Γ^*} (χ_{B_t}(x)/|W_t|^{p(x)}) (∫_{W_t}|f(y)| dy + 1)^{p(x)} dx
≤ ∫_Ω ∑_{t∈Γ^*} (χ_{B_t}(x)/|W_t|^{p(x)}) (∫_{W_t}(|f(y)|+1)^{p(y)/p_-(W_t)} dy)^{p(x)} dx.
Since ‖f‖_{L^{p(·)}(Ω)}=1 and p_+<∞ we have that
∫_{W_t}(|f(y)|+1)^{p(y)/p_-(W_t)} dy ≤ ∫_Ω(|f(y)|+1)^{p(y)/p_-(W_t)} dy
≤ ∫_Ω(|f(y)|+1)^{p(y)} dy
≤ 2^{p_+}(∫_Ω|f(y)|^{p(y)} dy + |Ω|)
≤ 2^{p_+}(1+|Ω|).
Also, by (<ref>) there is a constant C=C(Ω,p(·)) such that
1/|W_t|^{p(x)} ≤ C/|W_t|^{p_-(W_t)}
for all t∈Γ and almost every x∈W_t. Thus, if we combine the above estimates, we get
∫_Ω A_Γ f(x)^{p(x)} dx ≤ C∫_Ω ∑_{t∈Γ^*} (χ_{B_t}(x)/|W_t|^{p_-(W_t)}) (∫_{W_t}(|f(y)|+1)^{p(y)/p_-(W_t)} dy)^{p_-(W_t)} dx.
We want to replace p_-(W_t) by p_- in the previous inequality. If p_t := p_-(W_t)/p_- = 1, then this is immediate. Otherwise, if p_t>1, we apply the classical Hölder inequality in the last integral with exponent p_t to get
∫_Ω A_Γ f(x)^{p(x)} dx ≤ C∫_Ω ∑_{t∈Γ^*} (χ_{B_t}(x)/|W_t|^{p_-}) (∫_{W_t}(|f(y)|+1)^{p(y)/p_-} dy)^{p_-} dx.
Finally, again using that the sets {B_t}_{t≠a} are pairwise disjoint and the fact that the operator A_Γ is bounded on L^{p_-}(Ω) if p_->1 (proved by the second author in <cit.>), we get
∫_Ω A_Γ f(x)^{p(x)} dx ≤ C∫_Ω (∑_{t∈Γ^*} (χ_{B_t}(x)/|W_t|) ∫_{W_t}(|f(y)|+1)^{p(y)/p_-} dy)^{p_-} dx
= C∫_Ω A_Γ((|f|+1)^{p(·)/p_-})(x)^{p_-} dx
≤ C∫_Ω (|f(x)|+1)^{p(x)} dx
≤ C(Ω,p(·)).
This completes the proof.
§ VARIABLE EXPONENT HARDY-TYPE OPERATORS AND LOG-HÖLDER CONTINUITY AT THE BOUNDARY
In this section we prove several results necessary for our local-to-global argument. Before doing so, however, we need to explore the relationship between the (Ω) condition and some other averaging-type conditions on the exponent functions. Our starting point is the observation that many results which are obviously true in classical Lebesgue spaces can fail in the variable exponent setting. A very relevant example of this is the so called K_0 condition, introduced by Kopaliani <cit.>, which states that
sup_B |B|^{-1}‖χ_B‖_{p(·)}‖χ_B‖_{p'(·)} < ∞,
where the supremum is taken over all balls contained in a certain domain. If p(·) is constant, the argument of the supremum is exactly 1 for every ball. On the contrary, if p(·) is not constant, the boundedness of the supremum is not guaranteed: a simple example is given by p(x)=2+χ_Q(x), where Q is any cube. (See <cit.>.) Thus, this condition sometimes needs to be imposed on p(·) as a hypothesis.
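To get a feeling for why this example fails, consider a ball B of radius r<1 centered at a point of ∂Q, so that |B∩Q| ≈ |B∖Q| ≈ |B|. Then p(·) takes the value 3 on B∩Q and 2 on B∖Q, while p'(·) takes the values 3/2 and 2 respectively, and a direct computation with the Luxemburg norm gives ‖χ_B‖_{p(·)} ≈ |B|^{1/3} and ‖χ_B‖_{p'(·)} ≈ |B|^{1/2}. Hence,
|B|^{-1}‖χ_B‖_{p(·)}‖χ_B‖_{p'(·)} ≈ |B|^{-1/6} → ∞
as r→0.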
In this section we consider several auxiliary results of this kind, which are crucial in the sequel. These or similar results are known in the literature under the assumption that the exponent satisfies the global LH_0(Ω) condition. However, here we only assume that p(·)∈∂LH_0^τ(Ω) for some τ≥1; this is sufficient as we will restrict the analysis to cubes/balls with radius proportional to the distance to the boundary.
Throughout this section, let Ω be a bounded domain and {U_t}_{t∈Γ} a tree-covering of Ω, like the one given by Remark <ref>. Recall that U_t is an expansion of a Whitney cube Q_t, U_t = (17/16)int Q_t. Hence, if we let x_t be the center of U_t, then for any τ≥1 we have that
U_t ⊂ B(x_t,τd(x_t)) and |U_t| ∼ |B(x_t,τd(x_t))|,
where the implicit constants only depend on n and τ. Combined with Lemma <ref>, this implies that if p(·)∈∂LH_0^τ(Ω), then
|U_t|^{-(p(y)-p_-(U_t))} ≤ C
for every t∈Γ and almost every y∈U_t. We will make extensive use of these properties. In particular, we will show that properties that hold for balls close to the boundary, as in the ∂LH_0^τ(Ω) condition, also hold for the cubes U_t in a tree-covering. Depending on the circumstances, we will emphasize either balls or cubes.
Let Ω⊂ℝ^n be a bounded domain with a tree-covering {U_t}_{t∈Γ} as the one given by Remark <ref>. Fix p(·)∈𝒫(Ω), 1≤p_-≤p_+<∞. We define the operator
T_{p(·)}f(x) = ∑_{t∈Γ} (‖fχ_{U_t}‖_{p(·)}/‖χ_{U_t}‖_{p(·)}) χ_{U_t}(x).
If p(·)∈∂LH_0^τ(Ω) for some τ≥1, then T_{p(·)}:L^{p(·)}(Ω)→L^{p(·)}(Ω) is bounded.
By Fatou's lemma in the scale of variable Lebesgue spaces (see <cit.>), we may assume without loss of generality that f∈L^∞(Ω) has compact support and is non-negative. Moreover, a homogeneity argument allows us to assume ‖f‖_{L^{p(·)}(Ω)}=1. Under these assumptions it is enough to find a constant C such that
∫_Ω T_{p(·)}f(x)^{p(x)} dx ≤ C.
Since the sets {U_t}_{t∈Γ} have finite overlap, we have that
∫_Ω T_{p(·)}f(x)^{p(x)} dx ≤ C∑_{t∈Γ}∫_{U_t}[‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{p(·)}]^{p(x)} dx.
First consider the case
‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{p(·)} ≥ 1.
This implies ‖χ_{U_t}‖_{p(·)} ≤ ‖χ_{U_t}f‖_{p(·)} ≤ ‖f‖_{p(·)} = 1. Hence, <cit.> yields
‖χ_{U_t}‖_{p(·)} ≥ |U_t|^{1/p_-(U_t)} and ‖χ_{U_t}f‖_{p(·)}^{p_+(U_t)} ≤ ∫_{U_t}f(y)^{p(y)} dy.
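Both estimates are instances of the standard comparison between the norm and the modular: if ‖h‖_{p(·)}≤1, then ‖h‖_{p(·)}^{p_+(U_t)} ≤ ρ_{p(·)}(h) ≤ ‖h‖_{p(·)}^{p_-(U_t)} for h supported in U_t. The first estimate follows by taking h=χ_{U_t}, whose modular equals |U_t|; the second by taking h=χ_{U_t}f.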
If we combine these estimates and (<ref>), we get
[‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{p(·)}]^{p(x)} ≤ ‖χ_{U_t}f‖_{p(·)}^{p_+(U_t)}/‖χ_{U_t}‖_{p(·)}^{p_+(U_t)} ≤ |U_t|^{-p_+(U_t)/p_-(U_t)}∫_{U_t}f(y)^{p(y)} dy
≤ C|U_t|^{-1}∫_{U_t}f(y)^{p(y)} dy ≤ C(1 + |U_t|^{-1}∫_{U_t}f(y)^{p(y)} dy).
On the other hand, if
‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{p(·)} ≤ 1,
then it is immediate that the same inequality holds. Therefore, we have that
∫_Ω T_{p(·)}f(x)^{p(x)} dx ≤ C∑_{t∈Γ}∫_{U_t}(1 + |U_t|^{-1}∫_{U_t}f(y)^{p(y)} dy) dx
≤ C∑_{t∈Γ}(|U_t| + ∫_{U_t}f(y)^{p(y)} dy)
≤ C(|Ω|+1).
In Definition <ref> we generalized log-Hölder continuity to balls close to the boundary. Here, we generalize the K_0 condition in the same way.
Given p(·)∈𝒫(Ω), we say that p(·) satisfies the ∂K_0^τ(Ω) condition for some τ≥1 if
sup_{x∈Ω} |B|^{-1}‖χ_B‖_{p(·)}‖χ_B‖_{p'(·)} < ∞,
where B=B_x,τ.
Moreover, we say that p(·) satisfies the ∂K_0(Ω) condition if p(·)∈∂K_0^τ(Ω) for every τ≥1 with a constant independent of τ.
Given a tree covering {U_t}_{t∈Γ}, we can define a similar condition,
sup_{t∈Γ} |U_t|^{-1}‖χ_{U_t}‖_{p(·)}‖χ_{U_t}‖_{p'(·)} < ∞.
It follows from (<ref>) that if p(·)∈∂K_0^τ(Ω), then (<ref>) holds. The K_0 condition is necessary and sufficient for averaging operators defined on balls in a domain to be bounded; (<ref>) is the same characterization for averaging operators defined on the U_t.
Given a ball B, define the operator A_B by
A_B f(y) = (χ_B(y)/|B|)∫_B f(z) dz.
For any p(·)∈𝒫(Ω) and τ≥1, p(·)∈∂K_0^τ(Ω) if and only if the operators A_B:L^{p(·)}(Ω)→L^{p(·)}(Ω) are uniformly bounded for every B=B_{x,τ}, x∈Ω. Similarly, given a tree-covering {U_t}_{t∈Γ}, if we define the operators
A_{U_t}f(y) = (χ_{U_t}(y)/|U_t|)∫_{U_t}f(z) dz,
then the A_{U_t}, t∈Γ, are uniformly bounded if and only if (<ref>) holds.
The proof is identical to the proof of <cit.>, replacing Q_0 with B_x,τ or U_t.
Given p(·)∈𝒫(Ω) and τ≥1, if p(·)∈∂LH_0^τ(Ω), then the operators A_B:L^{p(·)}(Ω)→L^{p(·)}(Ω) are uniformly bounded for every B=B_{x,τ}, x∈Ω. Similarly, given a tree-covering {U_t}_{t∈Γ}, the operators A_{U_t}, t∈Γ, are uniformly bounded.
We prove this for the operators A_B; the proof for A_{U_t} is the same, using (<ref>).
Without loss of generality we can assume that f is non-negative, and by a homogeneity argument we can assume that ‖f‖_{p(·)}=1. In this case it is enough to show that
∫_Ω |A_B f(y)|^{p(y)} dy ≤ C.
It follows from the ∂LH_0^τ(Ω) condition that
∫_Ω |A_B f(y)|^{p(y)} dy = ∫_B ((1/|B|)∫_B f(x) dx)^{p(y)} dy
≤ ∫_B |B|^{-p(y)}(∫_B (f(z)+1) dz)^{p(y)} dy
≤ C∫_B |B|^{-p_-(B)}(∫_B (f(z)+1)^{p(z)/p_-(B)} dz)^{p(y)} dy = I.
Since ‖f‖_{p(·)}=1, we have that
∫_B (f(z)+1)^{p(z)/p_-(B)} dz ≤ ∫_B (f(z)+1)^{p(z)} dz
≤ 2^{p_+(Ω)}∫_B f(z)^{p(z)} dz + 2^{p_+(Ω)}|B|
≤ 2^{p_+(Ω)}(1+|Ω|);
hence, the integral is uniformly bounded above. Therefore, we can lower the exponent p(y) to p_-(B). Taking this into account and applying the classical Hölder inequality with exponent p_-(B), we obtain
I ≤ C∫_B |B|^{-p_-(B)}(∫_B (f(z)+1)^{p(z)/p_-(B)} dz)^{p_-(B)} dy
≤ C|B|^{1-p_-(B)}(∫_B (f(z)+1)^{p(z)} dz)|B|^{p_-(B)/p'_-(B)} ≤ C(p(·),Ω).
This concludes the proof.
If p(·)∈∂ LH^τ_0(Ω), then p(·)∈∂ K^τ_0(Ω).
This follows immediately from Lemma <ref> and Lemma <ref>.
The following property is again immediate in the constant exponent case. In variable Lebesgue spaces, or more generally in Banach function spaces, this property, when applied to an arbitrary collection of sets with bounded overlap, is sometimes referred to as Property G. See <cit.> for details and references.
Let Ω⊂ℝ^n be a bounded domain and {U_t}_{t∈Γ} a tree-covering, like the one given by Remark <ref>. If p(·)∈𝒫(Ω), with 1<p_-≤p_+<∞, satisfies p(·)∈∂LH_0^τ(Ω) for some τ≥1, then for every f∈L^{p(·)}(Ω) and g∈L^{p'(·)}(Ω) we have that
∑_{t∈Γ}‖χ_{U_t}f‖_{p(·)}‖χ_{U_t}g‖_{p'(·)} ≤ C‖f‖_{p(·)}‖g‖_{p'(·)}.
Using (<ref>) we have that
∑_{t∈Γ}‖χ_{U_t}f‖_{p(·)}‖χ_{U_t}g‖_{p'(·)} ≤ C∑_{t∈Γ}|U_t|(‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{p(·)})(‖χ_{U_t}g‖_{p'(·)}/‖χ_{U_t}‖_{p'(·)}) =: I.
Since the sets {U_t}_{t∈Γ} have finite overlap, we have that
∑_{t∈Γ}χ_{U_t}(x)(‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{p(·)})(‖χ_{U_t}g‖_{p'(·)}/‖χ_{U_t}‖_{p'(·)}) ≤ C T_{p(·)}f(x)·T_{p'(·)}g(x).
If we integrate this estimate over Ω, apply Hölder's inequality (<ref>), and then apply Lemma <ref>, we get
I ≤ C∫_Ω T_{p(·)}f(x)·T_{p'(·)}g(x) dx
≤ C‖T_{p(·)}f‖_{p(·)}‖T_{p'(·)}g‖_{p'(·)} ≤ C‖f‖_{p(·)}‖g‖_{p'(·)}.
This completes the proof.
In Lemma <ref> we proved (<ref>) using the continuity of the operator T_{p(·)}, which holds thanks to the hypothesis p(·)∈∂LH_0^τ(Ω) for some τ≥1. It is interesting to notice that the reverse implication is also true. For if (<ref>) holds, then by (<ref>), Hölder's inequality (<ref>), and (<ref>), we get
‖T_{p(·)}f‖_{p(·)} ≤ C sup_{g:‖g‖_{p'(·)}≤1} ∫_Ω ∑_{t∈Γ} χ_{U_t}(x)(‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{p(·)})|g(x)| dx
≤ C sup_{g:‖g‖_{p'(·)}≤1} ∑_{t∈Γ}(‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{p(·)})∫_{U_t}|g(x)| dx
≤ C sup_{g:‖g‖_{p'(·)}≤1} ∑_{t∈Γ}‖χ_{U_t}f‖_{p(·)}‖χ_{U_t}g‖_{p'(·)}
≤ C sup_{g:‖g‖_{p'(·)}≤1} ‖f‖_{p(·)}‖g‖_{p'(·)}
≤ C‖f‖_{p(·)}.
Finally we prove the following lemma, which resembles <cit.>.
Given p(·)∈𝒫(Ω), suppose 1<p_-≤p_+<∞ and p(·)∈∂LH_0^τ(Ω). If {U_t}_{t∈Γ} is a tree-covering of Ω, like the one given by Remark <ref>, and
1/p_{U_t} = (1/|U_t|)∫_{U_t} 1/p(x) dx,
then for every t∈Γ,
‖χ_{U_t}‖_{p(·)} ∼ |U_t|^{1/p_{U_t}},
where the implicit constants are independent of t∈Γ.
Recall that, as in Remark <ref>, each set U_t is a cube.
The inequality
|U_t|^{1/p_{U_t}} ≤ 2‖χ_{U_t}‖_{p(·)}
was proved in <cit.> for every p(·)∈𝒫(Ω) with 1<p_-≤p_+<∞ and for every cube Q⊂Ω.
To prove the reverse inequality, we apply a duality argument. By (<ref>), there exists g∈L^{p'(·)}(Ω), ‖g‖_{p'(·)}=1, such that
‖χ_{U_t}‖_{p(·)}/|U_t|^{1/p_{U_t}} ≤ C(1/|U_t|^{1/p_{U_t}})∫_Ω χ_{U_t}(x)g(x) dx = C|U_t|^{1/p'_{U_t}}(1/|U_t|)∫_{U_t}g(x) dx.
But by inequality (<ref>) applied to p'(·) we have that
C|U_t|^{1/p'_{U_t}}(1/|U_t|)∫_{U_t}g(x) dx ≤ C‖χ_{U_t}‖_{p'(·)}(1/|U_t|)∫_{U_t}g(x) dx
= C‖A_{U_t}g‖_{p'(·)}
≤ C‖g‖_{p'(·)}
= C.
Here A_{U_t} is the operator defined in Lemma <ref>. The last inequality follows since, by Lemma <ref>, p'(·)∈∂LH_0^τ(Ω), and so the A_{U_t} are uniformly bounded on L^{p'(·)}(Ω).
In order to prove Sobolev-Poincaré inequalities, we will need off-diagonal versions of some of the previous results. We will state these results in more generality than we need to prove the results in Section <ref>. We do so partly because of the intrinsic interest of these results, but also to highlight where the more restrictive hypotheses are needed.
One natural set of assumptions would be to fix a value α, 0<α<n, let p(·)∈𝒫(Ω) satisfy 1≤p_-≤p_+<n/α, and define q(·)∈𝒫(Ω) pointwise by the identity
1/p(x) - 1/q(x) = α/n.
However, we would like to avoid the restriction that p_+<n/α. To do so, we will assume that p(·), q(·)∈𝒫(Ω) satisfy the following:
1≤p_-≤p_+<∞, 1≤q_-≤q_+<∞,
and that there exists 0<β<α such that
β/n = 1/p(x) - 1/q(x) ≤ α/n.
If we argue as we did in the proof of Lemma <ref>, if p(·)∈∂LH_0^τ(Ω) for some τ≥1, then (<ref>) and the first equality in (<ref>) imply that q(·)∈∂LH_0^τ(Ω).
Given any p(·)∈𝒫(Ω) such that p_+<∞, and given any 0<α<n, there exist q(·)∈𝒫(Ω) and 0<β≤α such that (<ref>) holds. If p_+<n/α, let β=α and use the first equality in (<ref>) to define q(·). If n/α≤p_+<∞, fix 0<β<α such that p_+<n/β and again define q(·) using (<ref>).
We first consider an off-diagonal operator T^α_{p(·)}, similar to the one defined in Lemma <ref>.
Let Ω⊂ℝ^n be a bounded domain with a tree-covering {U_t}_{t∈Γ}, like the one given by Remark <ref>. Fix α, 0<α<n, and p(·), q(·)∈𝒫(Ω) that satisfy (<ref>) and (<ref>) for some 0<β<α. Define the operator
T^α_{p(·)}f(x) = ∑_{t∈Γ}|U_t|^{α/n}(‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{p(·)})χ_{U_t}(x).
If p(·)∈∂LH_0^τ(Ω) for some τ≥1, then T^α_{p(·)}:L^{p(·)}(Ω)→L^{q(·)}(Ω) is bounded.
As in the proof of Lemma <ref>, it will suffice to prove that for f∈L^{p(·)}(Ω) such that f≥0 and ‖f‖_{p(·)}=1, there is a constant C such that
∫_Ω T^α_{p(·)}f(x)^{q(x)} dx ≤ C.
By Corollary <ref>, since p(·)∈∂LH_0^τ(Ω) we have that
|U_t|^{α/n}/‖χ_{U_t}‖_{p(·)} ≈ |U_t|^{α/n}/|U_t|^{1/p_{U_t}} = |U_t|^{-1/q_{U_t}}|U_t|^{α/n - 1/p_{U_t} + 1/q_{U_t}}.
If we integrate (<ref>) over U_t, we see that
α/n - 1/p_U_t + 1/q_U_t≥ 0.
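Indeed, averaging the pointwise inequality 1/p(x) - 1/q(x) ≤ α/n over U_t gives 1/p_{U_t} - 1/q_{U_t} ≤ α/n, which is exactly the displayed estimate.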
Hence, again by Corollary <ref>, since q(·)∈∂LH_0^τ(Ω) by Remark <ref>,
|U_t|^{-1/q_{U_t}}|U_t|^{α/n - 1/p_{U_t} + 1/q_{U_t}} ≲ (1/‖χ_{U_t}‖_{q(·)})|Ω|^{α/n - 1/p_- + 1/q_+}.
If we combine these two inequalities, using q_+<∞, we have that
∫_Ω T^α_{p(·)}f(x)^{q(x)} dx = ∑_{t∈Γ}∫_{U_t}[(‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{p(·)})|U_t|^{α/n}]^{q(x)} dx ≤ C^{q_+}∑_{t∈Γ}∫_{U_t}[‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{q(·)}]^{q(x)} dx;
the constant C depends on p(·), q(·), α, n, and |Ω|.
The argument continues as in the proof of Lemma <ref>. Since ‖f‖_{p(·)}=1, if the expression in square brackets is bigger than 1, then ‖χ_{U_t}‖_{q(·)}≤1, so by <cit.> and the fact that p_+(U_t)≤q_+(U_t) (which follows from (<ref>)) we have that
[‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{q(·)}]^{q(x)} ≤ [‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{q(·)}]^{q_+(U_t)}
≤ ‖χ_{U_t}f‖_{p(·)}^{p_+(U_t)}/‖χ_{U_t}‖_{q(·)}^{q_+(U_t)} ≤ C(∫_{U_t}f(y)^{p(y)} dy)/|U_t|^{q_+(U_t)/q_-(U_t)} ≤ C|U_t|^{-1}∫_{U_t}f(y)^{p(y)} dy,
where in the last step we used Lemma <ref> and the fact that q(·)∈∂LH_0^τ(Ω). Hence, in any case we have that
[‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{q(·)}]^{q(x)} ≤ 1 + C|U_t|^{-1}∫_{U_t}f(y)^{p(y)} dy,
and consequently,
∫_Ω T^α_{p(·)}f(x)^{q(x)} dx ≤ C∑_{t∈Γ}∫_{U_t}[1 + C|U_t|^{-1}∫_{U_t}f(y)^{p(y)} dy] dx
≤ C∑_{t∈Γ}(|U_t| + C∫_{U_t}f(y)^{p(y)} dy) ≤ C + C|Ω|,
which completes the proof.
The following is an off-diagonal version of the ∂ K_0^τ condition. For all cubes or balls in a given domain it was introduced by the first author and Roberts <cit.>.
Let Ω⊂ℝ^n be a bounded domain with a tree-covering {U_t}_{t∈Γ}, like the one given by Remark <ref>. Fix α, 0<α<n, and p(·), q(·)∈𝒫(Ω) that satisfy (<ref>) and (<ref>) for some 0<β≤α. Suppose p(·)∈∂LH_0^τ(Ω) for some τ≥1. Then the following estimates hold:
sup_{t∈Γ} |U_t|^{-1+α/n}‖χ_{U_t}‖_{q(·)}‖χ_{U_t}‖_{p'(·)} < ∞,
sup_{t∈Γ} |U_t|^{-1-β/n}‖χ_{U_t}‖_{q'(·)}‖χ_{U_t}‖_{p(·)} < ∞.
As will be clear from the way the proof is written, for (<ref>) to be true, instead of the first equality in (<ref>), it suffices to assume that β/n≤1/p(x)-1/q(x).
We first prove (<ref>). Fix t∈Γ; then by Lemma <ref> and Remark <ref>, q(·), p'(·)∈∂LH_0^τ(Ω). Hence, by Lemma <ref>, we get
|U_t|^{-1+α/n}‖χ_{U_t}‖_{q(·)}‖χ_{U_t}‖_{p'(·)} ≈ |U_t|^{-1+α/n}|U_t|^{1/q_{U_t}}|U_t|^{1/p'_{U_t}}
= |U_t|^{-1+α/n}|U_t|^{1/q_{U_t}}|U_t|^{1-1/p_{U_t}}
= |U_t|^{α/n + 1/q_{U_t} - 1/p_{U_t}} ≤ |Ω|^{α/n + 1/q_- - 1/p_+}.
The last inequality follows from (<ref>).
The implicit constants are independent of t, so if we take the supremum, we get the desired estimate.
The proof of (<ref>) is nearly identical. Since q'(·), p(·)∈∂LH_0^τ(Ω), for any t∈Γ we have, as before,
|U_t|^{-1-β/n}‖χ_{U_t}‖_{q'(·)}‖χ_{U_t}‖_{p(·)} ≈ |U_t|^{-1-β/n}|U_t|^{1/q'_{U_t}}|U_t|^{1/p_{U_t}}
= |U_t|^{-β/n - 1/q_{U_t} + 1/p_{U_t}} ≤ |Ω|^{-β/n - 1/q_+ + 1/p_-};
the last inequality follows from the inequality we get if we integrate the lower estimate in (<ref>) as we did to find (<ref>).
Finally, we prove an off-diagonal analog of Lemma <ref>. It is only in this result, which requires (<ref>), that we were required to introduce the parameter β. In Section <ref> below, we will choose the parameter α so that we have β=α when applying this result.
Let Ω⊂ℝ^n be a bounded domain and {U_t}_{t∈Γ} a tree-covering, like the one given by Remark <ref>. Fix α, 0<α<n, and p(·), q(·)∈𝒫(Ω) that satisfy (<ref>) and (<ref>) for some 0<β≤α. Suppose that p(·)∈∂LH_0^τ(Ω) for some τ≥1. Then for every f∈L^{p(·)}(Ω) and g∈L^{q'(·)}(Ω), the following inequality holds:
∑_{t∈Γ}‖χ_{U_t}f‖_{p(·)}‖χ_{U_t}g‖_{q'(·)} ≤ C‖f‖_{p(·)}‖g‖_{q'(·)}.
The proof is very similar to the proof of Lemma <ref>, so we omit some details. By inequality (<ref>) and since the cubes {U_t}_{t∈Γ} have finite overlap, we have that
∑_{t∈Γ}‖χ_{U_t}f‖_{p(·)}‖χ_{U_t}g‖_{q'(·)} ≤ C∑_{t∈Γ}|U_t|^{1+β/n}(‖χ_{U_t}f‖_{p(·)}/‖χ_{U_t}‖_{p(·)})(‖χ_{U_t}g‖_{q'(·)}/‖χ_{U_t}‖_{q'(·)})
≤ C∫_Ω T^β_{p(·)}f(x)·T_{q'(·)}g(x) dx
≤ C‖T^β_{p(·)}f‖_{q(·)}‖T_{q'(·)}g‖_{q'(·)}
≤ C‖f‖_{p(·)}‖g‖_{q'(·)};
the last two inequalities follow from Hölder's inequality (<ref>), and from Lemmas <ref> and <ref>.
§ A DECOMPOSITION OF FUNCTIONS FOR JOHN DOMAINS
In this section we prove a decomposition theorem which in our local-to-global argument will let us extend our results from cubes to John domains. We begin by recalling the definition of John domains. They were introduced by Fritz John in <cit.>. They include domains with a fractal boundary, such as the interior of the Koch snowflake, domains with inner cusps and even with cuts, etc. However, they have properties similar to more regular domains with respect to Sobolev-Poincaré type inequalities. It was shown by Bojarski <cit.> that Sobolev-Poincaré inequalities hold on John domains without weights. Later, Chua <cit.> showed that a weighted Sobolev-Poincaré inequality holds, where the same weight appears on the left and right hand sides. By contrast, the classical Sobolev-Poincaré inequality does not hold on domains with external cusps or, more generally, with boundary of type Hölder-α. For these domains, weighted versions of the inequalities can be derived, with a weight that compensates the singularities of the boundary.
A bounded domain Ω in ℝ^n is a John domain with parameter λ>1 if there exists a point x_0∈Ω such that, given any y∈Ω, there exists a rectifiable curve parameterized by arc length γ:[0,ℓ]→Ω, with γ(0)=y, γ(ℓ)=x_0, and λd(γ(t),∂Ω) ≥ t.
The following result was proved by the second author in <cit.> and gives a characterization of John domains in terms of tree-coverings.
A bounded domain Ω⊂ℝ^n is a John domain if and only if, given a Whitney decomposition {Q_t}_{t∈Γ} of Ω, there exists a tree structure for the set of indices Γ satisfying the conditions in Remark <ref> and a constant K>1 such that
Q_s ⊆ KQ_t
for any s,t∈Γ with s≽t. In other words, the shadow W_t of Q_t is contained in KQ_t.
Let Ω⊂ℝ^n be a bounded John domain and {Q_t}_{t∈Γ} a Whitney decomposition of Ω. Let x_t denote the center of Q_t. Then by Proposition <ref>, there is a constant τ_K depending on K such that W_t⊂B_{x_t,τ_K}. Moreover, |B_{x_t,τ_K}| ∼ |W_t| with constants depending on n and K. Hence, for John domains, the characterization of log-Hölder continuity at the boundary in Lemma <ref> implies that p(·)∈∂LH_0^{τ_K}(Ω) if and only if for every t∈Γ and almost every y∈W_t,
|W_t|^{-(p(y)-p_-(W_t))} ≤ C.
In fact, this estimate is the main assumption on p(·) that we actually need in the sequel.
Hereafter, when we consider a tree-covering of a John domain, we assume that the tree-covering is taken as in Proposition <ref> and that τ_K is the constant from Remark <ref>. As an immediate consequence, we get the following corollary to Theorem <ref>.
Let Ω⊂ℝ^n be a bounded John domain with a tree-covering {U_t}_{t∈Γ}. If q(·)∈𝒫(Ω) satisfies the ∂LH_0^{τ_K}(Ω) condition and 1<q_-≤q_+<∞, then the operator A_Γ defined in (<ref>) is bounded from L^{q(·)}(Ω) to itself.
Our main result in this section is the following decomposition theorem. We begin with a definition.
Given a bounded domain Ω⊂ℝ^n, let {U_t}_{t∈Γ} be a tree-covering of Ω as in Remark <ref>. Given g∈L^1(Ω) with ∫_Ω g dx = 0, we say that a collection of functions {g_t}_{t∈Γ}
in L^1(Ω) is a decomposition of g subordinate to {U_t}_{t∈Γ} if the following properties are satisfied:
* g = ∑_{t∈Γ} g_t;
* supp(g_t) ⊂ U_t;
* ∫_{U_t} g_t dx = 0 for all t∈Γ.
Given a bounded John domain Ω⊂^n, let {U_t}_t∈Γ be a tree-covering of Ø as given by Proposition <ref>. Fix ∈∂ LH_0^τ_K(Ø), with 1<q^-≤ q^+<∞. Then for every g∈ L^q(·)(Ω) such that ∫_Ø g =0, there exists a decomposition {g_t}_t∈Γ of g subordinate to {U_t}_t∈Γ with the additional property that
∑_t∈Γχ_U_tχ_U_t g_t_q(·)/χ_U_t_q(·)_q(·)≤ C g_q(·).
Fix g∈L^{q(·)}(Ω). Let {ϕ_t}_{t∈Γ} be a partition of unity subordinate to {U_t}_{t∈Γ}: i.e., supp(ϕ_t)⊂U_t, 0≤ϕ_t(x)≤1, and ∑_t ϕ_t(x)=1 for all x∈Ω. We first define an initial decomposition of g by f_t = gϕ_t. The collection {f_t}_{t∈Γ} satisfies properties (1) and (2) in Definition <ref>, but not necessarily (3). Hence, we modify these functions as follows.
For each s∈Γ, s≠a, define
h_s(x) := (χ_{B_s}(x)/|B_s|)∫_{W_s}∑_{t≽s}f_t(y) dy.
Note that supp(h_s)⊂B_s and
∫ h_s(x) dx = ∫_{W_s}∑_{t≽s}f_t(y) dy.
Now define, for t≠ a,
g_t(x) := f_t(x) + (∑_s:s_p = th_s(x)) - h_t(x),
and
g_a(x) := f_a(x) + ∑_s:s_p = ah_s(x).
Recall that s_p denotes the parent of s in the tree Γ.
Note that the summations in these definitions are finite since they are indexed over the children of t (or a). It is easy to check that {g_t}_t∈Γ satisfies all the properties of Definition <ref>. Therefore, we only need to prove (<ref>).
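Before doing so, let us record the routine verification. Property (2) is clear, since supp(f_t)⊂U_t, supp(h_t)⊂B_t⊂U_t, and supp(h_s)⊂B_s⊂U_s∩U_t for every child s of t. Property (1) holds because the correction terms telescope:
∑_{t∈Γ}g_t = ∑_{t∈Γ}f_t + ∑_{s≠a}h_s - ∑_{t≠a}h_t = g.
For property (3), note that ∫h_s dx = ∑_{r≽s}∫f_r dx, so for t≠a,
∫g_t dx = ∫f_t dx + ∑_{s:s_p=t}∑_{r≽s}∫f_r dx - ∑_{r≽t}∫f_r dx = ∫f_t dx + (∑_{r≽t}∫f_r dx - ∫f_t dx) - ∑_{r≽t}∫f_r dx = 0,
while ∫g_a dx = ∑_{r∈Γ}∫f_r dx = ∫_Ω g dx = 0.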
Observe that for any s,t∈Γ, s≠a,
|h_s(x)| ≤ (χ_{B_s}(x)/|B_s|)∫_{W_s}|g(y)| dy ≤ (|W_s|/|B_s|)(χ_{B_s}(x)/|W_s|)∫_{W_s}|g(y)| dy,
and
|f_t(x)| ≤ |g(x)|χ_{U_t}(x).
Moreover, by Proposition <ref>, we have that |W_s|≤C|B_s|, where the constant C depends only on n and the constant in (<ref>). Therefore,
|g_t(x)| ≤ C(χ_{U_t}(x)|g(x)| + ∑_{s:s_p=t}(χ_{B_s}(x)/|W_s|)∫_{W_s}|g(y)| dy + (χ_{B_t}(x)/|W_t|)∫_{W_t}|g(y)| dy),
and so we can estimate as follows:
‖∑_{t∈Γ}χ_{U_t}(‖χ_{U_t}g_t‖_{q(·)}/‖χ_{U_t}‖_{q(·)})‖_{q(·)} ≤
C{ ‖∑_{t∈Γ}χ_{U_t}(‖gχ_{U_t}‖_{q(·)}/‖χ_{U_t}‖_{q(·)})‖_{q(·)}
+ ‖∑_{t∈Γ}χ_{U_t}(‖∑_{s:s_p=t}(χ_{B_s}/|W_s|)∫_{W_s}|g|‖_{q(·)}/‖χ_{U_t}‖_{q(·)})‖_{q(·)}
+ ‖∑_{t∈Γ}χ_{U_t}(‖(χ_{B_t}/|W_t|)∫_{W_t}|g|‖_{q(·)}/‖χ_{U_t}‖_{q(·)})‖_{q(·)} }
= C{I_1 + I_2 + I_3}.
By Lemma <ref>,
I_1 = ‖T_{q(·)}g‖_{q(·)} ≤ C‖g‖_{q(·)};
moreover, Lemma <ref> and Theorem <ref> show that
I_2 + I_3 ≤ C‖T_{q(·)}(A_Γ g)‖_{q(·)} ≤ C‖A_Γ g‖_{q(·)} ≤ C‖g‖_{q(·)}.
This completes the proof.
In Theorem <ref> we only used that Ω is a John domain to show that |W_t|/|B_t|≤C for any t∈Γ, t≠a. If this estimate does not hold, then A_Γ(g) would be replaced by the operator that maps g to
∑_{t∈Γ^*}(|W_t|/|B_t|)(χ_{B_t}(x)/|W_t|)∫_{W_t}|g(y)| dy.
For irregular domains, the factor |W_t|/|B_t| becomes a weight that can often be expressed in terms of a power of the distance to the boundary. In this context, the decomposition can be obtained provided that the operator A_Γ is bounded in weighted spaces. See <cit.> for an example in the classical Lebesgue spaces over Hölder-α domains. We will consider this problem in a subsequent work.
§ IMPROVED SOBOLEV-POINCARÉ INEQUALITIES
In this section we prove Theorem <ref> and related theorems. To state these results, throughout this section
we will assume that p(·)∈𝒫(Ω) satisfies 1<p_-(Ω)≤p_+(Ω)<∞, and that q(·)∈𝒫(Ω) is defined by (<ref>) and (<ref>). For the convenience of the reader we repeat these definitions here:
<ref> 1/p(x) - α/n = 1/q(x),
where α satisfies
<ref> 0≤α<1 if p_+(Ω)<n,
0≤α<n/p_+(Ω) if p_+(Ω)≥n.
The restriction that 0≤α<1 is discussed in Remark <ref>.
To prove our main result on John domains using a local-to-global argument, we first need to prove variable exponent Sobolev-Poincaré inequalities on cubes, which we will do using embedding theorems and the constant exponent Sobolev-Poincaré inequality. We first recall the statement of this result; for a proof, see <cit.> or Bojarski <cit.>.
Let D⊂ℝ^n be a bounded John domain with parameter λ from Definition <ref>. Fix 1≤p<∞. If p<n and p≤q≤p^*, then for every f∈W^{1,p}(D),
‖f-f_D‖_{L^q(D)} ≤ C(n,p,λ)|D|^{1/n+1/q-1/p}‖∇f‖_{L^p(D)}.
If p≥n and q<∞, then for every f∈W^{1,p}(D),
‖f-f_D‖_{L^q(D)} ≤ C(n,q,λ)|D|^{1/n+1/q-1/p}‖∇f‖_{L^p(D)}.
For our proof below we need better control over the constants in (<ref>) and (<ref>), since in the variable exponent setting these constants will depend on the local values of p(·) and q(·). The critical situation is when p_-<n<p_+, since in that case there may be cubes U_t on which p_-(U_t)<n is very close to n, and the constant in (<ref>) tends to infinity as p approaches n and q approaches p^*. There are similar concerns if p_->n but p_- is close to n.
In the latter case, we can choose a uniform constant in (<ref>), as the following remark shows.
In the case p≥n the constant can be taken to depend only on q̅ for any q̅≥q. Indeed, applying the Hölder inequality and (<ref>) we obtain
(∫_Q |f(x)-f_Q|^q dx)^{1/q} ≤ |Q|^{1/q-1/q̅}(∫_Q |f(x)-f_Q|^{q̅} dx)^{1/q̅}
≤ C(n,q̅)|Q|^{1/q-1/q̅}|Q|^{1/n+1/q̅-1/p}(∫_Q |∇f(x)|^p dx)^{1/p}
= C(n,q̅)|Q|^{1/n+1/q-1/p}(∫_Q |∇f(x)|^p dx)^{1/p}.
In the critical case when p<n, we can give a quantitative estimate on the constant in (<ref>). The proof of the following result is implicit in <cit.>; here we give the details to make explicit the resulting constants.
Let Q⊂ℝ^n be a cube. If 1≤p<n and p≤q≤p^*, then
‖f-f_Q‖_{L^q(Q)} ≤ C(n)q|Q|^{1/n+1/q-1/p}‖∇f‖_{L^p(Q)}.
First observe that by Hölder's inequality we have that
‖f-f_Q‖_{L^q(Q)} ≤ 2 inf_{b∈ℝ}(∫_Q |f(x)-b|^q dx)^{1/q};
hence, it is enough to prove (<ref>) with a constant b in place of f_Q.
We will take b to be the median of f on Q, i.e., a possibly non-unique b such that
|{x∈ Q: f(x)≥ b}|≥ |Q|/2 and |{x∈ Q: f(x)≤ b}|≥ |Q|/2.
By <cit.>, the inequality
‖f-b‖_{L^q(Q)} ≤ C_2‖∇f‖_{L^p(Q)}
is equivalent to
sup_{t>0} |{x∈Q : |f(x)-b|>t}| t^q ≤ C_1(∫_Q |∇f(x)|^p dx)^{q/p}.
Moreover, if we define
I_1^Q g(x) = ∫_Q g(z)|x-z|^{1-n} dz,
then by <cit.> we have that
|g(x)-g_Q| ≤ C I_1^Q(|∇g|)(x). Finally, by <cit.>,
sup_{t>0} |{x∈Q : I_1^Q g(x)>t}| t^{n/(n-1)} ≤ C(n)(∫_Q |g(x)| dx)^{n/(n-1)}.
If we combine these three estimates, we get
(∫_Q |f(x)-b|^{n/(n-1)} dx)^{(n-1)/n} ≤ C(n)∫_Q |∇f(x)| dx.
This is inequality (<ref>) when p=1 and q=1^*.
In <cit.> inequality (<ref>) for 1<p<n and q=p^* is derived from the case p=1. We reproduce their argument to keep track of the constant. Fix b∈ℝ as in (<ref>) and let Q_+={x∈Q : f(x)≥b} and Q_-={x∈Q : f(x)≤b}. Let γ=p(n-1)/(n-p) and define
g(x) =
|f(x)-b|^γ, x∈ Q_+,
-|f(x)-b|^γ, x∈ Q_-.
Then |g|^{n/(n-1)} = |f-b|^{p^*}, and g satisfies (<ref>) with b=0. Therefore, applying inequality (<ref>) to g and then Hölder's inequality, we get
(∫_Q |f(x)-b|^{p^*} dx)^{(n-1)/n} = (∫_Q |g(x)|^{n/(n-1)} dx)^{(n-1)/n}
≤ C(n)∫_Q |∇g(x)| dx
≤ C(n)∫_Q γ|f(x)-b|^{γ-1}|∇f(x)| dx
≤ C(n)γ∫_Q |f(x)-b|^{n(p-1)/(n-p)}|∇f(x)| dx
≤ C(n)(p(n-1)/(n-p))(∫_Q |f(x)-b|^{p^*} dx)^{1/p'}(∫_Q |∇f(x)|^p dx)^{1/p}.
If we rearrange terms, we get
‖f-b‖_{L^{p^*}(Q)} ≤ C(n)(p(n-1)/(n-p))‖∇f‖_{L^p(Q)} ≤ C(n)p^*‖∇f‖_{L^p(Q)}.
Finally, to prove inequality (<ref>) with q<p^* we follow the argument in <cit.>. Fix s such that s^*=q (or s=1 if q<1^*); then by inequality (<ref>) with exponents s^* and s and Hölder's inequality,
(∫_Q |f(x)-f_Q|^q dx)^{1/q} ≤ C(n)(s(n-1)/(n-s))(∫_Q |∇f(x)|^s dx)^{1/s}
≤ C(n)(s(n-1)/(n-s))(∫_Q |∇f(x)|^p dx)^{1/p}|Q|^{1/s-1/p} ≤ C(n)s^*|Q|^{1/n+1/q-1/p}‖∇f‖_{L^p(Q)}.
This completes the proof.
The following result was proved in <cit.> when q(·)=p(·), and our proof is adapted from theirs. This result is the key for translating the classical Sobolev-Poincaré inequality to a variable exponent setting, with minimal hypotheses on the exponents.
Given a cube Q⊂ℝ^n, let p(·)∈𝒫(Q) and suppose 1≤p_-(Q)≤p_+(Q)<∞. Define q(·) by (<ref>). If either p_-(Q)<n and p_-(Q)≤q_+(Q)≤(p_-(Q))^*, or p_-(Q)≥n and q_+(Q)<∞, then
‖f-f_Q‖_{L^{q(·)}(Q)} ≤ C(n,q_+(Q))(1+|Q|)^2|Q|^{1/n+1/q_+(Q)-1/p_-(Q)}‖∇f‖_{L^{p(·)}(Q)}.
Moreover, if p_-(Q)<n, then C(n,q_+(Q)) = C(n)q_+(Q).
We first consider the case p_-(Q)<n. By Lemma <ref> (applied twice) and by Lemma <ref>, we have that
‖f-f_Q‖_{q(·)} ≤ (1+|Q|)‖f-f_Q‖_{L^{q_+(Q)}(Q)} ≤ C(n)q_+(Q)(1+|Q|)|Q|^{1/n+1/q_+(Q)-1/p_-(Q)}‖∇f‖_{L^{p_-(Q)}(Q)}
≤ C(n)q_+(Q)(1+|Q|)^2|Q|^{1/n+1/q_+(Q)-1/p_-(Q)}‖∇f‖_{p(·)}.
The proof for p_-(Q)≥n is the same, but using (<ref>) instead of Lemma <ref>.
Our goal is to apply this inequality to the cubes of a tree-covering of a larger John domain. We can do so directly when the oscillation of p(·) on a cube is under control.
Let Ω⊂ℝ^n be a bounded John domain with a tree covering {U_t}_{t∈Γ}. Given p(·)∈𝒫(Ω), suppose 1≤p_-(Ω)≤p_+(Ω)<∞ and p(·)∈∂LH_0^{τ_K}(Ω). Define q(·)∈𝒫(Ω) by (<ref>). If U∈{U_t}_{t∈Γ} is such that p_-(U)<n and p_-(U)≤q_+(U)≤p_-(U)^*, or p_-(U)≥n, then
‖f-f_U‖_{L^{q(·)}(U)} ≤ C(Ω,n,C_0,q_+(U))|U|^{(1-α)/n}‖∇f‖_{L^{p(·)}(U)}.
The constant C_0 is from Definition <ref> and depends on p(·). In the first case, the constant can be taken to be C(Ω,n,C_0)q_+(U).
This is an immediate consequence of Lemma <ref>. By inequality (<ref>) and Lemma <ref>, and since |U|≤|Ω|, we have that
‖f-f_U‖_{q(·)} ≤ C(n,q_+(U))(1+|U|)^2|U|^{1/n+1/q_+(U)-1/p_-(U)}‖∇f‖_{p(·)}
= C(n,q_+(U))(1+|U|)^2|U|^{(1-α)/n + 1/p_+(U) - 1/p_-(U)}‖∇f‖_{p(·)}
≤ C(Ω,n,C_0,q_+(U))|U|^{(1-α)/n}‖∇f‖_{p(·)}.
We do not want to assume, however, that the restriction on the oscillation of p(·) in Lemma <ref> holds. Indeed, we want to consider exponents such that p_-(U)<n but q_+(U)>p_-(U)^*. To handle this situation, we prove an extension result, which shows that a Sobolev-Poincaré inequality can be obtained for a domain U if it holds for every set in a finite partition of U. This result is similar to <cit.>; however, rather than work in the full generality of this theorem, we concentrate on the particular case where U is an element of a tree-decomposition of a larger John domain Ω. This allows us to obtain a sharper estimate on the constant.
Let Ω⊂ℝ^n be a bounded John domain with a tree-covering {U_t}_{t∈Γ}. Suppose p(·)∈𝒫(Ω) satisfies 1≤p_-(Ω)≤p_+(Ω)<∞ and p(·)∈∂LH_0^{τ_K}(Ω). Define q(·)∈𝒫(Ω) by (<ref>). For a fixed cube U∈{U_t}_{t∈Γ}, suppose that there exist cubes G_i, i=1,…,M, such that G_i⊂U for every i, U=∪_{i=1}^M G_i, and either p_-(G_i)<n and p_-(G_i)≤q_+(G_i)≤(p_-(G_i))^*, or p_-(G_i)≥n. Thus Lemma <ref> holds on each G_i; let C_i be the constant for this inequality on G_i. Then there exists a constant C>0 such that for every f∈W^{1,p(·)}(U),
‖f-f_U‖_{L^{q(·)}(U)} ≤ C|U|^{(1-α)/n}‖∇f‖_{L^{p(·)}(U)}.
The constant C depends on n, M, the ratios |U|/|G_i|, and max_i{C_i}.
We first apply the triangle inequality to get
‖(f-f_U)χ_U‖_{q(·)} ≤ ∑_{i=1}^M ‖χ_{G_i}(f-f_U)‖_{q(·)}
≤ ∑_{i=1}^M ‖χ_{G_i}(f-f_{G_i})‖_{q(·)} + ∑_{i=1}^M ‖χ_{G_i}(f_U-f_{G_i})‖_{q(·)}.
To estimate the first term, we apply Lemma <ref> and use the fact that 1/n+1/q_+(G_i)-1/p_-(G_i) ≥ 0, that U is bounded, and that 1/q_+(U)-1/p_-(U) ≤ 1/q_+(G_i)-1/p_-(G_i):
‖χ_{G_i}(f-f_{G_i})‖_{q(·)} ≤ C_i|G_i|^{1/n+1/q_+(G_i)-1/p_-(G_i)}‖χ_{G_i}∇f‖_{p(·)}
≤ C_i|U|^{1/n+1/q_+(G_i)-1/p_-(G_i)}‖∇f‖_{p(·)}
≤ C_i|U|^{1/n+1/q_+(U)-1/p_-(U)}‖∇f‖_{p(·)};
by (<ref>) and Lemma <ref>, this is bounded by
C_i|U|^{(1-α)/n + 1/p_+(U)-1/p_-(U)}‖∇f‖_{p(·)} ≤ C_i|U|^{(1-α)/n}‖∇f‖_{p(·)}.
To estimate the second term, we first note that
‖χ_{G_i}(f_U-f_{G_i})‖_{q(·)} = ‖χ_{G_i}‖_{q(·)}|f_U-f_{G_i}| ≤ ‖χ_U‖_{q(·)}|f_U-f_{G_i}|.
By the classical Poincaré inequality in L^1(U) (see, for example, <cit.>), we have that
|f_U-f_{G_i}| ≤ |G_i|^{-1}∫_{G_i}|f(x)-f_U| dx
≤ |G_i|^{-1}∫_U |f(x)-f_U| dx
≤ C|G_i|^{-1}diam(U)∫_U |∇f(x)| dx.
By (<ref>) and since diam(U) ∼ |U|^{1/n}, this is bounded by
C|G_i|^{-1}|U|^{1/n}‖χ_U‖_{p'(·)}‖∇f‖_{p(·)}.
If we combine this with the previous estimate and apply inequality (<ref>), we get
‖χ_{G_i}(f_U-f_{G_i})‖_{q(·)} ≤ C|G_i|^{-1}|U|^{1/n}‖χ_U‖_{q(·)}‖χ_U‖_{p'(·)}‖∇f‖_{p(·)}
≤ C|G_i|^{-1}|U|^{1/n}|U|^{1-α/n}‖∇f‖_{p(·)}
= C(|U|/|G_i|)|U|^{(1-α)/n}‖∇f‖_{p(·)}.
This completes the proof.
In order to apply Lemma <ref>, we need to partition each U_t into smaller sets G_i such that one of the conditions in Lemma <ref> holds on G_i. To do so, we will assume that the local oscillation of p(·) is under control. In particular, we will assume that 1/p(·) is σ/n-continuous for some σ<1-α.
Let Ω⊂ℝ^n be a bounded John domain with a tree-covering {U_t}_{t∈Γ}. Suppose p(·)∈𝒫(Ω) satisfies 1<p_-(Ω)≤p_+(Ω)<∞ and p(·)∈∂LH_0^{τ_K}(Ω).
Define q(·)∈𝒫(Ω) by (<ref>). If 1/p(·) is uniformly σ/n-continuous for some 0<σ<1-α, then there is a constant C, independent of t, such that
‖f-f_{U_t}‖_{L^{q(·)}(U_t)} ≤ C|U_t|^{(1-α)/n}‖∇f‖_{L^{p(·)}(U_t)}
for every f∈W^{1,p(·)}(U_t) and every t∈Γ. The constant C depends on n, p_+(Ω), α and σ. In particular, it goes to infinity as α tends to 1 or as σ tends to 1-α.
Fix δ>0 as in the uniform σ/n-continuity condition for 1/p(·).
We first consider the cubes U_t such that diam(U_t)<δ. If p_-(U_t)≥ n, then by Lemma <ref> and Remark <ref>, we have that inequality (<ref>) holds with a constant that depends only on Ω, n and q_+(Ω). On the other hand, if p_-(U_t)<n, then U_t is contained in the ball B(x_t,δ/2), where x_t is the center of U_t. Therefore, given any two points x, y ∈ U_t, y ∈ B(x,δ), and so
| 1/p(x) - 1/p(y)| < σ/n.
If we now argue as in the proof of Lemma <ref>, we get that
1/p_-(U_t) - 1/q_+(U_t) = 1/p_-(U_t) - 1/p_+(U_t) + α/n ≤ (σ+α)/n < 1/n;
hence, p_-(U_t) ≤ q_+(U_t) < (p_-(U_t))^*. Again, Lemma <ref> gives inequality (<ref>), but now with a constant of the form C(n)q_+(U_t). We need to show that q_+(U_t) is uniformly bounded. But by inequality (<ref>),
1/q_+(U_t) ≥ 1/p_-(U_t) - (σ+α)/n = (n-(σ+α)p_-(U_t))/(np_-(U_t)).
Since we have assumed that p_-(U_t)<n, we get that
q_+(U_t) ≤ n/(1-σ-α),
which gives the desired upper bound.
Note that in this case, the constant may blow up if σ tends to 1-α or if α tends to 1.
Now suppose diam(U_t)≥δ; for this case we will apply Lemma <ref>.
Partition U_t into M_t cubes {G_i}_{i=1}^{M_t} with δ/2<diam(G_i)<δ. The number of cubes is uniformly bounded, since
M_t ≲ |U_t|/δ^n ≤ |Ω|/δ^n.
If p_-(G_i)≥n, then G_i satisfies the hypotheses of Lemma <ref>. If p_-(G_i)<n, we can repeat the previous oscillation estimate for G_i instead of U_t to get that p_-(G_i)≤q_+(G_i)≤(p_-(G_i))^*. So again G_i satisfies the hypotheses of Lemma <ref>. Therefore, we can apply this lemma to U_t; this gives us (<ref>) with a constant that depends on the number of cubes M_t, which is bounded, and on the ratio |U_t|/|G_i|, which is also bounded by |Ω|/δ^n. The constant also depends on the largest constant C_i for the Sobolev-Poincaré inequality on G_i. To estimate these we can argue as we did for the cubes satisfying diam(U_t)<δ to conclude that they are uniformly bounded. This gives the desired result.
The following two lemmas are corollaries of the proof of Lemma <ref>; we show that with additional restrictions on the oscillation of p(·) the hypotheses can be weakened.
Let Ω, p(·), and q(·) be as in Lemma <ref>, with the additional restriction p_+(Ω)<n, but only assuming that 1/p(·) is (1-α)/n-continuous. Then (<ref>) holds with a constant that depends on n, Ω and p_+(Ω), but not on α (or equivalently, on q(·)).
Since p_-(U_t)<n for every t∈Γ, we can apply Lemma <ref> on every U_t; this yields a local constant of the form C(n)q_+(U_t). Arguing as in the proof of Lemma <ref> with our weaker continuity assumption, we have q_+(U_t)≤ p_-(U_t)^* ≤ p_+(Ω)^*<∞, and so the local constants are uniformly bounded.
Let Ω, p(·), and q(·) be as in Lemma <ref>, with the additional restriction p_-(Ω)≥n, but not assuming 1/p(·) is ε-continuous for any ε>0. Then (<ref>) holds with a constant that depends on n, Ω and q_+(Ω).
Since p_-(U_t)≥n for every t∈Γ we can again apply Lemma <ref> on every U_t, without assuming any additional regularity on p(·). The local constants are of the form C(n,q_+(U_t)), but by Remark <ref> we can replace q_+(U_t) by q_+(Ω).
In our definition of α in (<ref>) above, we required that 0≤α<1 when p_+(Ω)<n. Ideally, we would like to take α=1 in Lemmas <ref>–<ref>; however, this is not possible unless p(·) is constant. This is a consequence of the fact that our proof uses Lemma <ref>. Indeed, if we fix a cube Q and take α=1, we have that
0 ≤ 1/p_-(Q) - 1/p_+(Q) = 1/p_-(Q) - 1/q_+(Q) - 1/n = 1/p_-(Q)^* - 1/q_+(Q).
In other words, if we assume that q_+(Q) ≤ p_-(Q)^*, then this inequality forces p_+(Q)=p_-(Q), so that p(·) is constant.
The advantage of Lemma <ref> is that it imposes minimal regularity conditions on p(·).
The Sobolev-Poincaré inequality is true in the critical case q(·)=p^*(·) if we assume 1/p(·) is log-Hölder continuous and p_+(Ω)<n: see, for example, <cit.> and <cit.>. An interesting open question is whether this inequality is true on cubes assuming, for example, that p(·) is continuous. If that were the case, then the proof of Lemma <ref> would still work assuming continuity of p(·), and Theorem <ref> below would still hold. This question is related to a similar open question for the Sobolev inequality: see <cit.>.
We can now prove our main result; for the convenience of the reader we repeat its statement.
<ref>:
Let Ω⊂ℝ^n be a bounded John domain. Suppose p(·)∈𝒫(Ω) is such that 1<p_-(Ω)≤p_+(Ω)<∞ and p(·)∈∂LH_0^{τ_K}(Ω). Fix α as in (<ref>) and define q(·) by (<ref>). Suppose also that 1/p(·) is uniformly σ/n-continuous for some σ<1-α. Then there is a constant C such that for every f∈W^{1,p(·)}(Ω),
‖f-f_Ω‖_{q(·)} ≤ C‖d^{1-α}∇f‖_{p(·)},
where d(x)=d(x,∂Ω).
We want to stress that the hypotheses on p(·) in Theorem <ref> allow the exponent to be discontinuous. In fact, it can be discontinuous at every point of the domain, as long as the jumps of 1/p(·) are smaller than σ/n and the oscillation of p(·) decays towards the boundary according to the ∂LH_0^{τ_K}(Ω) condition.
We will prove this result using a local-to-global argument and the local Sobolev-Poincaré inequality in Lemma <ref>. Let {U_t}_{t∈Γ} be a tree covering of Ω as in Proposition <ref>. By (<ref>), there exists g∈L^{q'(·)}(Ω), ‖g‖_{q'(·)}≤1, such that
‖f-f_Ω‖_{q(·)} ≤ C∫_Ω (f(x)-f_Ω)g(x) dx = C∫_Ω (f(x)-f_Ω)(g(x)-g_Ω) dx;
if we now decompose g-g_Ω into {g_t}_{t∈Γ} using Theorem <ref>, we get, since ∫_{U_t}g_t dx = 0, that this equals
C∑_{t∈Γ}∫_{U_t}(f(x)-f_Ω)g_t(x) dx = C∑_{t∈Γ}∫_{U_t}(f(x)-f_{U_t})g_t(x) dx;
we now apply (<ref>) and Lemmas <ref> and <ref> with β=α to get that this is bounded by
C∑_{t∈Γ}‖χ_{U_t}(f-f_{U_t})‖_{q(·)}‖χ_{U_t}g_t‖_{q'(·)}
≤ C∑_{t∈Γ}|U_t|^{(1-α)/n}‖χ_{U_t}∇f‖_{p(·)}‖χ_{U_t}∑_{r∈Γ}χ_{U_r}(‖χ_{U_r}g_r‖_{q'(·)}/‖χ_{U_r}‖_{q'(·)})‖_{q'(·)}
= C∑_{t∈Γ}ℓ(U_t)^{1-α}‖χ_{U_t}∇f‖_{p(·)}‖χ_{U_t}∑_{r∈Γ}χ_{U_r}(‖χ_{U_r}g_r‖_{q'(·)}/‖χ_{U_r}‖_{q'(·)})‖_{q'(·)}
≤ C∑_{t∈Γ}‖χ_{U_t}d^{1-α}∇f‖_{p(·)}‖χ_{U_t}∑_{r∈Γ}χ_{U_r}(‖χ_{U_r}g_r‖_{q'(·)}/‖χ_{U_r}‖_{q'(·)})‖_{q'(·)}
≤ C‖d^{1-α}∇f‖_{p(·)}‖∑_{r∈Γ}χ_{U_r}(‖χ_{U_r}g_r‖_{q'(·)}/‖χ_{U_r}‖_{q'(·)})‖_{q'(·)};
finally, we apply the bound from Theorem <ref> to get that this is at most
C‖d^{1-α}∇f‖_{p(·)}‖g‖_{q'(·)} ≤ C‖d^{1-α}∇f‖_{p(·)}.
Let Ω⊂ℝ^n be a bounded John domain, and let p(·)∈𝒫(Ω) satisfy n≤p_-(Ω)≤p_+(Ω)<∞ and p(·)∈∂LH_0^{τ_K}(Ω). Let also q(·)∈𝒫(Ω) be defined by (<ref>) for some 0<β<n/p_-(Ω). Then there is a constant C such that the following Sobolev-Poincaré inequality holds for every f∈W^{1,p(·)}(Ω):
‖f-f_Ω‖_{q(·)} ≤ C‖d^{1-β}∇f‖_{p(·)}.
The proof is exactly the same as the one for Theorem <ref>. The only difference is that here we apply the local inequalities given by Lemma <ref> instead of Lemma <ref>.
Theorem <ref> gives a family of improved Sobolev-Poincaré inequalities. We highlight the case when α=0 as a separate corollary.
Let Ω⊂ℝ^n be a bounded John domain. Suppose p(·)∈𝒫(Ω) is such that 1<p_-(Ω)≤p_+(Ω)<∞, p(·)∈∂LH_0^{τ_K}(Ω), and 1/p(·) is uniformly σ/n-continuous for some σ<1. Then for every f∈W^{1,p(·)}(Ω),
‖f-f_Ω‖_{p(·)} ≤ C‖d∇f‖_{p(·)},
where d(x)=d(x,∂Ω).
If we impose further restrictions on the oscillation of p(·), we can weaken some of the other hypotheses in Theorem <ref>.
Let Ω, p(·), α and q(·) be as in Theorem <ref>, but also assume that p_+(Ω)<n and 1/p(·) is (1-α)/n-continuous. Then inequality (<ref>) holds.
The proof is the same as the proof of Theorem <ref>, but we use Lemma <ref> instead of Lemma <ref>.
Let Ω, p(·), α and q(·) be as in Theorem <ref>, but also assume that p_-(Ω)≥n, and do not assume that 1/p(·) is ε-continuous for any ε>0. Then inequality (<ref>) holds.
Again, the proof is the same as the proof of Theorem <ref>, but we use Lemma <ref> instead of Lemma <ref>.
§ THE SOBOLEV INEQUALITY WITH ROUGH EXPONENTS
In this section we prove Theorem <ref>. For the convenience of the reader we restate it here.
<ref>:
Let Ω⊂ℝ^n be a bounded domain. Suppose p(·)∈𝒫(Ω) is such that 1<p_-(Ω)≤p_+(Ω)<∞. Fix α as in (<ref>) and define q(·) by (<ref>). Suppose also that 1/p(·) is uniformly σ/n-continuous for some σ<1-α. Then there is a constant C such that for every f∈W_0^{1,p(·)}(Ω),
‖f‖_{L^{q(·)}(Ω)} ≤ C‖∇f‖_{L^{p(·)}(Ω)}.
The heart of the proof is an extension theorem for exponent functions that are ε-continuous, which lets us extend the exponent from an arbitrary bounded domain to a John domain on which the hypotheses of Theorem <ref> hold. We prove this extension result in three lemmas. The first lemma lets us extend the exponent to the boundary of Ω.
Given a bounded domain Ω, let f:Ω→ℝ be uniformly ε_0-continuous on Ω for some ε_0>0. Then f can be extended to a function on Ω̄ that is uniformly ε-continuous for any ε>ε_0. Further, f_-(Ω̄)=f_-(Ω) and f_+(Ω̄)=f_+(Ω).
Let δ_0>0 be such that if x,y∈Ω satisfy |x-y|<δ_0, then |f(x)-f(y)|<ε_0. Since Ω̄ is closed and bounded, it can be covered by a finite collection of balls B(x_i,δ_0/3), 1≤i≤N, where x_i∈Ω̄. Suppose x_i∈Ω̄∖Ω. Then it is a limit point of Ω, and so there exists a point x_i'∈Ω such that |x_i-x_i'|<δ_0/6. But then B(x_i,δ_0/3)⊂B(x_i',δ_0/2). Therefore, we may assume that Ω̄ is covered by N balls B(x_i,δ_0/2), where each x_i∈Ω.
Fix a point x∈Ω̄∖Ω. Then x∈B(x_i,δ_0/2) for some i, and so there exists a sequence of points y_k∈B(x_i,δ_0/2)∩Ω such that y_k→x as k→∞. Then we have that |f(x_i)-f(y_k)|<ε_0, so the sequence {f(y_k)} is bounded. Therefore, we can define
f(x) = liminf_{k→∞} f(y_k).
Hereafter, by passing to a subsequence we will assume without loss of generality that f(y_k)→ f(x).
It is immediate from the construction that f_-(Ω̄)=f_-(Ω) and f_+(Ω̄)=f_+(Ω). To show ε-continuity, fix ε>ε_0 and let δ=δ_0/2. First, fix x∈Ω and suppose y∈Ω̄ satisfies |x-y|<δ. If y∈Ω, then it is immediate that |f(x)-f(y)|<ε_0<ε. Now suppose y∈Ω̄∖Ω. Then for some i, y∈B(x_i,δ_0/2), and there is a sequence {y_k} in this ball such that f(y_k)→f(y). But then we have that
|f(x)-f(y)| ≤ |f(x)-f(y_k)|+|f(y_k)-f(y)| < ε_0 + |f(y_k)-f(y)|.
If we take the limit as k→∞, we get that |f(x)-f(y)|≤ε_0<ε.
Now suppose x∈Ω̄∖Ω and fix y∈Ω̄∖Ω such that |x-y|<δ. Then we have x∈B(x_i,δ_0/2) and a sequence {y_k} in this ball such that f(y_k)→f(x); we also have that y∈B(x_j,δ_0/2) and a sequence {z_k} in this ball such that f(z_k)→f(y). But then we have that for k sufficiently large
|y_k-z_k| ≤ |x-y_k| + |x-y| + |y-z_k| < δ_0/4 + δ_0/2 + δ_0/4 = δ_0,
and so,
|f(x)-f(y)|
≤ |f(x)- f(y_k)| + |f(y_k) - f(z_k)|+|f(z_k)-f(y)|
≤ |f(x)- f(y_k)| + ε_0 + |f(z_k)-f(y)|.
If we take the limit as k→∞, we get |f(x)-f(y)|≤ε_0<ε.
Finally, we need to consider the case x∈Ω̄∖Ω and y∈Ω; this is similar to but simpler than the previous case and we omit the details. This completes the proof.
In the proof of Lemma <ref>, the value of f(x) that we chose is not unique, and we could choose any value between the limit infimum and the limit supremum, or we could choose any other sequence {y_k}.
The second lemma lets us extend the exponent function to an ε-continuous function on ^n∖Ω. A more general result, replacing balls by other sets, is possible using the same ideas; here we prove what is necessary for our proof of Theorem <ref>. Details of generalizations are left to the interested reader.
Let Ω be a bounded domain, and suppose that f:Ω̄→ℝ is uniformly ε-continuous for some ε>0. Let B be a ball containing Ω̄. Then there exists an extension of f to 3B that is uniformly ε-continuous, continuous on 3B∖Ω̄, and such that f_-(Ω̄)=f_-(3B) and f_+(Ω̄)=f_+(3B).
We will follow the proof of the extension theorem in Stein <cit.>. For the convenience of the reader we will adopt the same notation. Let {Q_k} be the Whitney decomposition of ℝ^n∖Ω̄: that is, cubes with disjoint interiors such that
diam(Q_k) ≤ d(Q_k,Ω̄) ≤ 4 diam(Q_k).
Let Q_k^*=(9/8)Q_k, and let {q_k^*} be a C^∞ partition of unity such that supp(q_k^*)⊂Q_k^*. These cubes are such that given any x∈ℝ^n∖Ω̄ there exist at most N cubes such that x∈Q_k^*. Since Ω̄ is compact, there exists p_k∈Ω̄ such that d(Q_k,p_k)=d(Q_k,Ω̄).
For all x∈3B∖Ω̄, define
f(x) = ∑_k f(p_k)q_k^*(x).
Since ∑_k q_k^*(x)=1, it is immediate from this definition that f_-(Ω̄)=f_-(3B) and f_+(Ω̄)=f_+(3B). Since the cubes Q_k^* have finite overlap, this sum contains a finite number of non-zero terms at each x, and so f is continuous on 3B∖Ω̄.
We will show that with this definition, f is uniformly ε-continuous on 3B.
Since f is uniformly ε-continuous on Ω̄, fix δ>0 such that if x,y∈Ω̄ satisfy |x-y|<δ, then |f(x)-f(y)|<ε. Let
K_0 = {x∈3B : d(x,Ω̄) ≥ δ/8},
K = {x∈3B : d(x,Ω̄) ≥ δ/4}.
Since K_0 is compact and f is continuous at each x∈K_0, f is uniformly continuous on K_0. Therefore, there exists δ_ext<δ/8 such that if x∈K and y∈3B is such that |x-y|<δ_ext, then |f(x)-f(y)|<ε.
Now fix x∈3B∖Ω̄ such that d(x,Ω̄)<δ/4, and fix y∈Ω̄ such that |x-y|<δ/4. Then, since ∑_k q_k^*(x)=1,
|f(x)-f(y)| ≤ ∑_k |f(p_k)-f(y)|q_k^*(x).
Fix k such that x∈Q_k^*. Then, since d(x,Q_k) ≤ (1/8)diam(Q_k),
d(Q_k,p_k) = d(Q_k,Ω̄) ≤ d(Ω̄,x) + d(x,Q_k) < δ/4 + (1/8)diam(Q_k) ≤ δ/4 + (1/8)d(Q_k,p_k).
Therefore, rearranging terms we see that d(Q_k,p_k) < (2/7)δ.
Furthermore, we have that
|x-p_k|
≤(Q_k,p_k)+ (Q_k^*)
= (Q_k,p_k) + 98(Q_k)
≤178(Q_k,p_k) < 1728δ.
Hence, |y-p_k|≤ |x-y|+|x-p_k|< 14δ + 1728δ< δ. Therefore, |f(p_k)-f(y)|<ε for every k, and so we have that |f(x)-f(y)|<ε.
Finally, we need to show that if x ∈Ω̄, then f is ε-continuous at x. But this follows by the previous argument, exchanging the roles of x and y. Therefore, we have shown that f is ε-continuous on 3B.
The final lemma shows how the extension can be modified to satisfy the ∂LH_0^τ condition on the larger set.
Given a bounded domain Ω and p(·)∈𝒫(Ω), suppose 1≤ p_-≤ p_+ <∞ and 1/p(·) is uniformly ε_0-continuous for some ε_0>0. Fix a ball B such that Ω⊂ B. Then there exists an extension of p(·) to 3B such that p_-(3B)=p_-(Ω), p_+(3B) =p_+(Ω), and 1/p(·) is uniformly ε-continuous on 3B for any ε>(p_+(Ω)^2/p_-(Ω)^2)ε_0. Moreover, for any τ>1, p(·)∈∂LH_0^τ(3B).
We will prove that if p(·) is uniformly ε_0-continuous on Ω, then it has an extension to 3B that is uniformly ε-continuous for any ε>ε_0 and satisfies p_-(3B)=p_-(Ω), p_+(3B)=p_+(Ω). The continuity estimate for 1/p(·) follows immediately from this and the fact that for any x, y in the domain of p(·),
| p(x)-p(y)| ≤ p_+^2 |1/p(x)-1/p(y)|, |1/p(x)-1/p(y)| ≤ (1/p_-^2) | p(x)-p(y)|.
On Ω, define f(x)=p(x)-p_-. Then by Lemmas <ref> and <ref>, for any ε>ε_0, we can extend f to a uniformly ε-continuous function on 3B such that f_-(3B) = 0, f_+(3B)=p_+-p_-, and f is continuous on 3B∖Ω̄. Let ψ be a Lipschitz cut-off function such that 0≤ψ(x)≤ 1, ψ(x)=1 if x∈ 2B, and ψ(x)=0 if x∈ℝ^n∖ 3B. Then, arguing as we did above, since fψ is uniformly continuous on 3B∖ 1.5B, we must have that fψ is uniformly ε-continuous on 3B∖ 2B, and so uniformly ε-continuous on 3B.
Now define the extension of p(·) to be p(x)=f(x)ψ(x)+p_-. The ε-continuity and oscillation bounds for p(·) follow at once from the corresponding properties of f. Finally, if we fix x∈ 3B∖ 2B such that τ d(x) < min{ r(B),1/2}, then, since p(·) is Lipschitz on B_x,τ, we have that (<ref>) holds. On the other hand, if 1/2>τ d(x)>r(B), this condition always holds with a suitably large constant. Thus, we have that p(·)∈∂LH_0^τ(3B).
We can now prove our main result.
By Lemma <ref>, we can extend p(·) to an exponent function on the ball 3B (which is a John domain) that satisfies all the hypotheses of Theorem <ref>. Therefore, we have that for all f∈ W_0^{1,p(·)}(Ω) ⊂ W^{1,p(·)}(3B),
‖f-f_{3B}‖_{L^{q(·)}(3B)} ≤ C ‖d^{1-α}∇ f‖_{L^{p(·)}(3B)} ≤ C diam(Ω)^{1-α} ‖∇ f‖_{L^{p(·)}(3B)}.
By the triangle inequality and since supp(f)⊂Ω̄, we can rewrite this as
‖f‖_{L^{q(·)}(Ω)} ≤ C ‖∇ f‖_{L^{p(·)}(Ω)} + ‖ |f_{3B}|χ_{3B}‖_{L^{q(·)}(3B)}.
But by the classical Sobolev inequality in L^1(Ω) and by (<ref>),
|f_{3B}| ≤ |3B|^{-1}∫_Ω |f| dx ≤ C|3B|^{-1}∫_Ω |∇ f| dx ≤ C|3B|^{-1} ‖χ_Ω‖_{L^{p'(·)}(Ω)} ‖∇ f‖_{L^{p(·)}(Ω)}.
If we combine this with the previous inequality, we get the desired result.
§ NECESSITY OF THE LOG-HÖLDER CONTINUITY CONDITIONS
In this section we consider the necessity of the ∂LH_0^τ(Ω) and LH_0(Ω) conditions for Sobolev-Poincaré and Sobolev inequalities in the variable Lebesgue spaces. We do not prove that they are necessary in general; rather, we construct examples to show that they are the weakest continuity conditions which can be assumed to prove these inequalities in general. These results are similar to the classic example of Pick and Růžička <cit.> showing that log-Hölder continuity is necessary for the Hardy-Littlewood maximal operator to be bounded on L^{p(·)}(Ω). Indeed, our construction should be compared to theirs. It also contains ideas similar to the ones used in <cit.>, where the irregularity appears on the domain instead of the exponent.
We first show that the boundary condition ∂LH_0^τ(Ω) is necessary for the validity of the improved Sobolev-Poincaré inequality (<ref>),
‖f-f_Ω‖_{L^{q(·)}(Ω)} ≤ C ‖d^{1-α}∇ f‖_{L^{p(·)}(Ω)},
for all 0 ≤α≤ 1.
Indeed, given a bounded domain Ω⊂ℝ^n, we construct an exponent function that is uniformly continuous on Ω but does not verify ∂LH_0^τ(Ω) for any τ≥ 1. Then, we construct an associated sequence of test functions for which inequality (<ref>) holds with a constant that blows up.
Second, we use the same uniformly continuous exponent (with some minor modifications) to show that the boundary condition ∂LH_0^τ(Ω) is not sufficient for the validity of the Sobolev-Poincaré inequality obtained for α=1 and q(·)=p^*(·) in (<ref>), that is,
‖f-f_Ω‖_{L^{p^*(·)}(Ω)} ≤ C ‖∇ f‖_{L^{p(·)}(Ω)}.
Our example shows that this case, which was not considered in Theorem <ref>, requires hypotheses that control the variation of p(·) inside of the domain, such as the classical log-Hölder condition. This example also shows that the Sobolev inequality (<ref>),
‖f‖_{L^{p^*(·)}(Ω)} ≤ C ‖∇ f‖_{L^{p(·)}(Ω)},
also fails when q(·)=p^*(·). As we noted above, this gives a positive answer to a question in <cit.>.
Finally, we adapt our test functions to prove that the boundary condition ∂LH_0^τ(Ω) is also not sufficient for the validity of the Korn inequality, or for the solution of the divergence equation.
§.§ Constructing the exponent function
Let Ω⊂^n be a bounded domain (not necessarily a John domain). Fix sequences {c_k}_k=1^∞⊂Ω and {r_k}_k=1^∞ such that {B(c_k,7r_k)}_k=1^∞ is a collection of pairwise disjoint balls contained in Ω. In each ball B(c_k,7r_k) define the balls A_k^1=B(a_k,r_k), and the annuli A_k^2=B(a_k,2r_k)∖ B(a_k,r_k) and A_k^3=B(a_k,3r_k)∖ B(a_k,2r_k). Analogously, we define the ball B_k^1=B(b_k,r_k) and the annuli B_k^2 and B_k^3 in such a way that A_k^3∩ B_k^3=∅. Let D_k be the complement in B(c_k,7r_k) of the union of these balls and washers. See Figure <ref>.
Observe that since Ω has a finite measure and the balls B(c_k,7r_k) are pairwise disjoint, then lim_k→∞ r_k=0. In addition, we also assume that 0<r_k<1.
Now, in order to define the variable exponent p(·)∈𝒫(Ω), let p_0>1 and let {p_k}_{k=0}^∞ be a sequence that converges to p_0, with p_k≥ p_0 for all k∈ℕ. Hence, p(·) equals p_0 on Ω∖⋃_{k=1}^∞ B(c_k,7r_k), and on each of the balls B(c_k,7r_k) it is piece-wise constant or radial. More precisely, let
p(x) = {[ p_0 in D_k∪ A_k^3 ∪ B_k^3; p_k in A_k^1∪ B_k^1; ((|x-a_k|-r_k)/r_k) p_0 + ((2r_k-|x-a_k|)/r_k) p_k in A_k^2; ((|x-b_k|-r_k)/r_k) p_0 + ((2r_k-|x-b_k|)/r_k) p_k in B_k^2. ].
Then p(·)∈𝒫(Ω) and satisfies
1<p_0=p_-(Ω)≤ p_+(Ω)= ‖{p_k}‖_{ℓ^∞} <∞.
In addition, using that {p_k}_{k=0}^∞ converges to p_0, it follows by a straightforward estimate that p(·) is uniformly continuous on Ω. Fix ε>0; then there exists k_0 such that |p_k-p_0|< ε for any k> k_0. For k≤ k_0, since p(·) is uniformly continuous on each ball B(c_k,7r_k), there exist δ_k such that |p(x)-p(y)|<ε for any x, y ∈ B(c_k,7r_k) such that |x-y|<δ_k. Therefore, if we set
δ:=min{δ_1,⋯,δ_k_0,r_1,⋯,r_k_0},
then we have that |p(x)-p(y)|<ε for any x, y ∈Ω such that |x-y|<δ.
Finally, we choose a sequence {p_k} such that the exponent p(·) satisfies all the assumptions made above but (<ref>) fails to be uniformly bounded along the balls {B(c_k,7r_k)}_{k=1}^∞. We do so by showing the failure of the equivalent property in Lemma <ref>. Define
p_k := p_0 + 1/|log(r_k)|^{1/2}.
Hence, given B=B(c_k,7r_k), we have
|B|^{1/p_+(B)-1/p_-(B)} = |B|^{1/p_k-1/p_0} ≃ r_k^{n(p_0-p_k)/(p_0p_k)} = exp( -(n/(p_0p_k)) · (1/|log(r_k)|^{1/2}) · log(r_k) ) ≥ C exp( (n/(p_0‖{p_k}‖_{ℓ^∞})) · |log(r_k)|^{1/2} ),
which tends to infinity.
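For intuition, this divergence is easy to verify numerically. The following sketch is ours and not part of the argument; the dimension n and the base exponent p_0 are illustrative placeholders:

```python
import numpy as np

# Check that |B|^(1/p_+(B) - 1/p_-(B)) blows up along the balls B(c_k, 7 r_k)
# when p_k = p_0 + 1/|log r_k|^(1/2); n and p_0 are placeholder values.
n, p0 = 2, 1.5
for rk in [1e-2, 1e-4, 1e-8, 1e-16]:
    pk = p0 + 1.0 / np.sqrt(abs(np.log(rk)))
    val = rk ** (n * (p0 - pk) / (p0 * pk))  # = exp(n |log rk|^(1/2) / (p0 pk))
    print(f"r_k = {rk:.0e}: ratio ~ {val:.2f}")
```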
§.§ Necessity of the ∂LH_0^τ(Ω) condition for Theorem <ref>
We now show that the boundary log-Hölder condition ∂LH_0^τ(Ω) is necessary for the improved Sobolev-Poincaré inequality (<ref>) to hold. We start with a bounded domain Ω⊂ℝ^n and the uniformly continuous exponent p(·)∈𝒫(Ω) defined above in (<ref>). Further, we also assume that the collection of balls {B(c_k,7r_k)}_{k=1}^∞ approaches the boundary. More precisely, we assume that d(c_k,∂Ω)=7r_k. Then, from (<ref>) and Lemma <ref>, we have that p(·) does not satisfy ∂LH_0^τ(Ω) for τ=1. Since these conditions are nested, it does not satisfy it for any τ≥ 1.
We now define our test functions to correspond to this geometry. For each k∈ℕ, define f_k:Ω→ℝ so that each function f_k has support in B(c_k,7r_k) and has mean value zero:
f_k(x) = {[ r_k in A_k^1 ∪ A_k^2; 3r_k-|x-a_k| in A_k^3; -r_k in B_k^1 ∪ B_k^2; |x-b_k|-3r_k in B_k^3; 0 in D_k. ].
Clearly, each f_k is a bounded Lipschitz function, and so f_k ∈ W^{1,p(·)}(Ω).
Now, fix α as in (<ref>) and define q(·) by (<ref>). Let q_k be such that 1/q_k = 1/p_k-α/n.
Now suppose that the improved Sobolev-Poincaré inequality is valid on W^{1,p(·)}(Ω) for the exponents p(·) and q(·). But, if we let B=B(c_k,7r_k), then we have that
‖f_k‖_{L^{q(·)}(Ω)} / ‖d^{1-α}∇ f_k‖_{L^{p(·)}(Ω)} = ‖f_k‖_{L^{q(·)}(Ω)} / ‖d^{1-α}∇ f_k‖_{L^{p(·)}(A_k^3∪ B_k^3)} ≥ ‖f_k‖_{L^{q(·)}(A_k^1)} / ‖d^{1-α}∇ f_k‖_{L^{p(·)}(A_k^3∪ B_k^3)} ≃ ‖f_k‖_{L^{q(·)}(A_k^1)} / ( r_k^{1-α} ‖∇ f_k‖_{L^{p(·)}(A_k^3∪ B_k^3)} ) = r_k |A_k^1|^{1/q_k} / ( r_k^{1-α}(2|A_k^3|)^{1/p_0} ) ≃ r_k^{α+n/q_k-n/p_0} = r_k^{n/p_k-n/p_0} ≃ |B|^{1/p_+(B)-1/p_-(B)}.
By (<ref>) we have that the right-hand term tends to infinity as k→∞. This contradicts our assumption that the Sobolev-Poincaré inequality holds. This shows that boundary condition ∂LH_0^τ(Ω) is (in some sense) a necessary condition for inequality (<ref>) to hold.
If B∩ B(c_k,7r_k)∖ D_k ≠∅ for one or more k∈ℕ, we fix k so that p_k is the largest value among the balls it intersects. Notice that B might intersect B(c_k,7r_k)∖ D_k for infinitely many k. However, since the collection of exponents p_k converges to p_0 from above, there is a maximum value for this intersection. Also, the radius r of B is at least r_k/4, and
p_-(B(c_k,7r_k)) ≤ p_-(B) ≤ p_+(B) ≤ p_+(B(c_k,7r_k)).
It follows that for any ball B=B(x,d(x,∂Ω)),
|B|^1/p_+(B)-1/p_-(B)≤ C.
Therefore, by Lemma <ref>, ∈∂ LH_0^1(Ω).
§.§ Insufficiency of the ∂LH_0^τ(Ω) condition for the Sobolev-Poincaré inequality
We now modify the previous construction to show that the ∂LH_0^τ(Ω) condition is not sufficient to prove (<ref>).
This example shows that it is also necessary to impose a control on the regularity of p(·) inside the domain. For this example, we will modify p(·) and the f_k by assuming that the balls {B(c_k,7r_k)}_{k=1}^∞ do not approach the boundary. More precisely, we assume that the distance from each ball B(c_k,7r_k) to ∂Ω is larger than some μ>0, independent of k. It then follows immediately that p(·) satisfies the boundary condition ∂LH_0^τ(Ω), for any τ≥ 1. It does not, however, satisfy the LH_0(Ω) condition, since this condition is equivalent to the quantity |B|^{1/p_+(B)-1/p_-(B)} being uniformly bounded for all balls B⊂Ω. (See <cit.>; this result is stated for Ω=ℝ^n but the same proof works in general.)
We can now argue as before. On each ball B=B(c_k,7r_k) we have that
‖f_k‖_{L^{p^*(·)}(Ω)} / ‖∇ f_k‖_{L^{p(·)}(Ω)} = ‖f_k‖_{L^{p^*(·)}(Ω)} / ‖∇ f_k‖_{L^{p(·)}(A_k^3∪ B_k^3)} ≥ ‖f_k‖_{L^{p^*(·)}(A_k^1)} / ‖∇ f_k‖_{L^{p(·)}(A_k^3∪ B_k^3)} = r_k |A_k^1|^{1/p^*_k} / (2|A_k^3|)^{1/p_0} ≃ r_k^{n/p_k-n/p_0} ≃ |B|^{1/p_+(B)-1/p_-(B)}.
Again, by (<ref>) the right-hand term is unbounded as k→∞.
§.§ Insufficiency of the ∂LH_0^τ(Ω) condition for other inequalities
It is well-known that in the constant exponent case the Sobolev-Poincaré inequality is related to (and, in fact, in many cases equivalent to) the Korn inequality, the conformal Korn inequality, the Fefferman-Stein inequality on bounded domains, and the solvability of the divergence equation. These inequalities have been studied in the variable exponent setting assuming the classical log-Hölder condition. Here we show that the ∂LH_0^τ(Ω) condition is not sufficient for the Korn inequality to hold.
We first recall this inequality. Given a bounded domain Ω⊂ℝ^n, with n ≥ 2 and 1 < p < ∞, the constant exponent Korn inequality says that there exists a constant C such that
‖∇𝐮‖_{L^p(Ω)^{n × n}} ≤ C ‖ε(𝐮)‖_{L^p(Ω)^{n × n}}
for any vector field 𝐮 in W^{1,p}(Ω)^n with ∫_Ω (∇𝐮-ε(𝐮)) dx = 0. By ∇𝐮 we denote the differential matrix of 𝐮 and by ε(𝐮) its symmetric part,
ε_{ij}(𝐮) = (1/2)( ∂ u_i/∂ x_j + ∂ u_j/∂ x_i ).
This inequality plays a fundamental role in the analysis of the linear elasticity equations, where 𝐮 represents a displacement field of an elastic body. We refer to <cit.> for a detailed description. The Korn inequality is also valid on spaces with variable exponent assuming that p(·) verifies the log-Hölder condition on the domain. (See <cit.> for an equivalent version of (<ref>).)
We now construct our counter-example. Define p(·) as in the previous example so that it satisfies the log-Hölder condition only on the boundary. On each ball B=B(c_k, 7r_k), define the vector field 𝐮:Ω→ℝ^n by
𝐮(x) = {[ S· (x-a_k) in A_k^1 ∪ A_k^2,; ϕ(x) S· (x-a_k) in A_k^3,; -S· (x-b_k) in B_k^1 ∪ B_k^2,; -ϕ(x) S· (x-b_k) in B_k^3,; 0 in D_k, ].
where ϕ(x)=3-|x-a_k|/r_k on A_k^3 (and analogously with b_k in place of a_k on B_k^3), and S∈ℝ^{n× n} is the skew-symmetric matrix with entries S_12=-1, S_21=1, and zero in the rest of the entries. The vector field 𝐮 depends on k but we omit it in the notation for simplicity. Then we have that
‖∇𝐮‖_{L^{p(·)}(Ω)} / ‖ε(𝐮)‖_{L^{p(·)}(Ω)} = ‖∇𝐮‖_{L^{p(·)}(Ω)} / ‖ε(𝐮)‖_{L^{p(·)}(A_k^3∪ B_k^3)} ≥ ‖∇𝐮‖_{L^{p(·)}(A_k^1)} / ‖ε(𝐮)‖_{L^{p(·)}(A_k^3∪ B_k^3)} ≃ |A_k^1|^{1/p_k} / (2|A_k^3|)^{1/p_0} ≃ r_k^{n/p_k-n/p_0} ≃ |B|^{1/p_+(B)-1/p_-(B)},
and again the right-hand side is unbounded.
§.§ Insufficiency of the ∂LH_0^τ(Ω) condition for the solvability of the divergence equation
Another problem related to the Korn and Sobolev-Poincaré inequalities is the solvability of the divergence equation. In the constant exponent case, the existence of a solution for this differential equation has been widely studied by a number of authors under different geometric assumptions on the domain. Given a bounded domain and an exponent 1<q<∞, we say that the divergence equation is solvable if there exists a constant C such that, for any function f∈ L^q(Ω) with vanishing mean value, there is a solution v∈ W^{1,q}_0(Ω)^n of the divergence equation div(v)=f with the L^q(Ω) regularity estimate
‖∇ v‖_{L^q(Ω)} ≤ C ‖f‖_{L^q(Ω)}.
Bogovskii <cit.> constructed an explicit representation of the solution v on star-shaped domains with respect to a ball using singular integral operators. Later, this representation was generalized to the class of John domains in <cit.>. Then, using the theory of Calderón-Zygmund operators and the boundedness of the Hardy-Littlewood maximal operator (e.g., assuming that p(·)∈ LH_0(Ω)), this explicit solution on John domains was generalized to variable Lebesgue spaces in <cit.>.
It is also well-known that the solvability of the divergence equation implies the Korn inequality under very general assumptions. In our case, it only uses the norm equivalence (<ref>), and so this argument extends to the variable exponent setting. Therefore, if p(·) is the exponent above for which the Korn inequality fails, the divergence equation is not solvable on L^{p(·)}(Ω). We refer to <cit.> and <cit.> for the implication to the Korn inequality.
§ LOG-HÖLDER CONTINUITY ON THE BOUNDARY
In this section we show that the ∂LH_0^τ(Ω) condition implies that the exponent function can be extended to a log-Hölder continuous function on ∂Ω. We cannot prove this for all John domains: we need to impose some regularity on the boundary. More precisely, we will assume the following.
A bounded domain Ω in ℝ^n is a John domain up to the boundary (or, simply, a boundary John domain) with parameter λ>1 if there exists a point x_0∈Ω such that, given any y∈Ω, there exists a rectifiable curve parameterized by arc length γ : [0,ℓ] →Ω, with γ(0)=y, γ(ℓ)=x_0, and λ dist(γ(t),∂Ω) ≥ t.
This property holds, for example, if Ω has a Lipschitz boundary, but it also holds for a much larger class of domains: for instance, the so-called semi-uniform domains. For a precise definition and the relationship of these domains to John domains, see Aikawa and Hirata <cit.>. It would be interesting to give an explicit example of a John domain that is not a boundary John domain.
Given a boundary John domain Ω with John parameter λ>1, let p(·)∈𝒫(Ω) be such that 1≤ p_-≤ p_+<∞ and p(·)∈∂LH_0^τ(Ω) for some τ>2λ. Then p(·) can be uniquely extended to a function on Ω̄ such that p(·) is log-Hölder continuous on ∂Ω. More precisely, given x, y ∈∂Ω with |x-y|<1/2,
|p(x)-p(y)| ≤ C_0/(-log(|x-y|)).
Fix a point x∈∂Ω=Ω̄∖Ω. Let {x_k} be a sequence of points in Ω such that x_k → x as k→∞ and |x_k-x| ≤λ d(x_k). Since Ω is a boundary John domain, such a sequence of points always exists: choose the points x_k to lie on the curve γ connecting x_0 to x. (Note that the length of the curve connecting x to x_k is always longer than |x-x_k|.) We will refer to such sequences as nontangential approach sequences.
We claim that the sequence {p(x_k)} is Cauchy. To see this, fix j, k ∈ℕ; without loss of generality we may assume that d(x_j)≤ d(x_k). Then
|x_j-x_k| ≤ |x_j-x| + |x_k-x| ≤λ( d(x_j)+ d(x_k) ) ≤ 2λ d(x_k).
It is immediate that x_j ∈ B_x_k,2λ⊂ B_x_k,τ. Hence, by the ∂LH_0^τ(Ω) condition,
|p(x_j)-p(x_k)| ≤ p_+(B_x_k,τ) - p_-(B_x_k,τ) ≤ C_0/(-log(τ d(x_k))).
Thus, as j, k →∞, |p(x_j)-p(x_k)|→ 0.
Since {p(x_k)} is Cauchy, it converges; denote this limit by p(x). This limit is unique in the sense that given any other nontangential approach sequence {y_k} in Ω converging to x, the same argument shows that
|p(x_k)-p(y_k)| ≤ C_0 max{ 1/(-log(τ d(x_k))), 1/(-log(τ d(y_k))) },
and the right-hand side tends to 0 as k→∞. This defines our extension of p(·) to Ω̄; note that by our definition we have p_-(Ω̄)=p_-(Ω) and p_+(Ω̄)=p_+(Ω).
We will now prove that p(·) is log-Hölder continuous on ∂Ω. It will suffice to prove that (<ref>) holds for x, y∈∂Ω such that
|x-y| < (τ-2λ)/(4τ) < 1/2.
Since p_+<∞, it always holds for |x-y|≥ (τ-2λ)/(4τ) with a sufficiently large constant C_0.
Fix such x, y ∈∂Ω. Let x_k be a nontangential approach sequence converging to x. Since we may choose the points x_k to lie on the curve connecting x_0 to x, we may assume that there exists k_0≥ 1 such that
|x-y| = d(x_k_0) (τ - 2λ)/2.
Further, by passing to a subsequence (that includes x_k_0) we may assume that d(x_k) decreases to 0. Let {y_k} be a nontangential approach sequence converging to y. By passing to a subsequence, we may assume that for all k≥ k_0,
|y_k-y| ≤ d(x_k_0) (τ - 2λ)/2.
We can now estimate as follows: for all k≥ k_0,
|x_k - x_k_0| ≤ |x_k - x| + |x_k_0-x| ≤λ d(x_k) + λ d(x_k_0) < τ d(x_k_0).
Similarly,
|y_k - x_k_0| ≤ |y_k - y|+ |y-x|+|x-x_k_0| < d(x_k_0)(τ - 2λ) + λ d(x_k_0) ≤ τ d(x_k_0).
Hence, x_k, y_k ∈ B_x_k_0,τ, and so by the ∂LH_0^τ(Ω) condition and our choice of k_0,
|p(x_k) - p(y_k)| ≤ p_+(B_x_k_0,τ) - p_-(B_x_k_0,τ) ≤ C_0/(-log(τ d(x_k_0))) = C_0/( -log( (2τ/(τ-2λ)) |x-y| ) ) ≤ D_0/(-log(|x-y|));
the last inequality holds since the function x↦ (-log(x))^{-1} is convex and 2τ/(τ-2λ)>1. Since this is true for all k≥ k_0, if we pass to the limit we get (<ref>). This completes the proof.
§ AN APPLICATION TO PDES
In this section we give an application of our results to elliptic PDEs. In <cit.>, the first author, Penrod and Rodney studied a Neumann-type problem for a degenerate p(·)-Laplacian. The basic operator is the p(·)-Laplacian: given an exponent function p(·), let
Δ_{p(·)} u = -div( |∇ u|^{p(x)-2}∇ u ).
This operator arises in the calculus of variations as an example of nonstandard growth conditions, and has been studied by a number of authors: see <cit.> and the extensive references they contain. In <cit.> they considered the degenerate version of this operator,
Lu = - div(|√(Q)∇ u|^{p(x)-2} Q∇ u),
where Q is an n× n, positive semi-definite, self-adjoint, measurable matrix function. These operators have also been studied, though nowhere nearly as extensively: see, for instance, <cit.>.
Here we consider a particular version of their result.
Let Ω be a bounded, open domain in ℝ^n, and let Q be defined on an open neighborhood of Ω. They showed that if 1<p_-≤ p_+<∞ and |√(Q(·))|_op∈ L^∞(Ω), then the existence of the Poincaré inequality
‖f-f_Ω‖_{L^{p(·)}(Ω)} ≤ C ‖√(Q)∇ f‖_{L^{p(·)}(Ω)}
is equivalent to the existence of a weak solution to
div(|√(Q)∇ u|^{p(x)-2} Q∇ u) = |f|^{p(x)-2}f in Ω,
n^T · Q ∇ u = 0 on ∂Ω,
where n is the outward unit normal vector of ∂Ω.
Further, they showed that solutions must satisfy the L^{p(·)}(Ω) regularity condition
‖u‖_{L^{p(·)}(Ω)} ≤ C_1 ‖f‖_{L^{p(·)}(Ω)}^{(r_*-1)/(p_*-1)},
where p_* and r_* are defined by
p_* = p_+ if ‖u‖_{L^{p(·)}(Ω)} < 1, and p_* = p_- if ‖u‖_{L^{p(·)}(Ω)} ≥ 1,
and
r_* = p_+ if ‖f‖_{L^{p(·)}(Ω)} ≥ 1, and r_* = p_- if ‖f‖_{L^{p(·)}(Ω)} < 1.
If we combine this with our Sobolev-Poincaré inequalities, in particular with Corollary <ref> and Theorem <ref>, we immediately get the following result.
Let Ω be a bounded John domain. Suppose p(·)∈𝒫(Ω) is such that p(·)∈∂LH_0^τ_K(Ω) and either
* 1<p_-≤ p_+ < ∞ and 1/p(·) is σ/n-continuous for any σ<1, or
* n≤ p_-≤ p_+ < ∞ (and we make no assumptions on the interior regularity of p(·)).
Then the Neumann-type problem (<ref>) has a weak solution in Ω that satisfies the regularity condition (<ref>).
Because the matrix Q is allowed to be degenerate, the definition of a weak solution to (<ref>) is somewhat technical, though it reduces to the classical definition when Q is the identity matrix (i.e., the operator is the p(·)-Laplacian). Also, this definition allows for domains Ω whose boundaries are rough and on which the normal is not well-defined. We refer the reader to <cit.> for more information.
plain
|
http://arxiv.org/abs/2409.03005v1 | 20240904180110 | PIETRA: Physics-Informed Evidential Learning for Traversing Out-of-Distribution Terrain | ["Xiaoyi Cai", "James Queeney", "Tong Xu", "Aniket Datar", "Chenhui Pan", "Max Miller", "Ashton Flather", "Philip R. Osteen", "Nicholas Roy", "Xuesu Xiao", "Jonathan P. How"] | cs.RO | ["cs.RO", "cs.LG", "cs.SY", "eess.SY"] |
§ ABSTRACT
Self-supervised learning is a powerful approach for developing traversability models for off-road navigation, but these models often struggle with inputs unseen during training. Existing methods utilize techniques like evidential deep learning to quantify model uncertainty, helping to identify and avoid out-of-distribution terrain. However, always avoiding out-of-distribution terrain can be overly conservative, e.g., when novel terrain can be effectively analyzed using a physics-based model. To overcome this challenge, we introduce Physics-Informed Evidential Traversability (PIETRA), a self-supervised learning framework that integrates physics priors directly into the mathematical formulation of evidential neural networks and introduces physics knowledge implicitly through an uncertainty-aware, physics-informed training loss. Our evidential network seamlessly transitions between learned and physics-based predictions for out-of-distribution inputs. Additionally, the physics-informed loss regularizes the learned model, ensuring better alignment with the physics model. Extensive simulations and hardware experiments demonstrate that PIETRA improves both learning accuracy and navigation performance in environments with significant distribution shifts.
Supplementary Video: <https://youtu.be/OTnNZ96oJRk>
§ INTRODUCTION
Recent advancements in perception and mobility have accelerated the deployment of autonomous robots in challenging real-world environments such as office spaces, construction sites, forests, deserts and Mars <cit.>, where both geometric and semantic comprehension of the terrain is crucial for reliable navigation. In these settings, self-supervised traversability learning has emerged as a powerful tool to train neural networks (NNs) to predict terrain models from navigation data without manual labeling <cit.>, where the learned representations can work directly with model-based motion planners, providing better interpretability and flexibility compared to fully learned navigation policies. However, the lack of abundant and diverse training data limits the reliability of learned traversability models in novel environments, such as the situation considered in this work (see Fig. <ref>). This is a well-known issue in the learning literature that arises from out-of-distribution (OOD) inputs at test time due to the distribution shift between training and test data <cit.>.
Many recent works mitigate the risk of encountering OOD scenarios by quantifying the epistemic uncertainty, which is the model uncertainty due to distribution shift <cit.>. For example, OOD terrain can be detected by learning a density estimator for the training data to identify test input with low density <cit.>, or a terrain auto-encoder for detecting poorly reconstructed terrain <cit.>. While avoiding OOD terrain has been shown to improve mission success rate in our prior work <cit.>, doing so can be too conservative.
To this end, we propose Physics-Informed Evidential TRAversability (PIETRA), a self-supervised learning framework that seamlessly combines learning-based and physics-based traversability analysis methods such that the downstream planner relies on the learned model for in-distribution (ID) terrain and the physics-based model for OOD terrain. Improving upon our prior work <cit.>, we exploit the mathematical structure of evidential learning to embed a physics-based prior that automatically gets invoked when epistemic uncertainty is high. Moreover, we propose an uncertainty-aware physics-informed loss function inspired by existing works to regularize the learned model to improve generalization <cit.>.
In summary, the contributions of this work are threefold:
* An evidential traversability learning framework with an explicit physics-based prior that gets invoked when encountering OOD features at test time.
* An uncertainty-aware physics-inspired loss function that implicitly injects physics knowledge at training time to improve learning accuracy for both ID and OOD features.
* Extensive simulation and hardware experiments showing that our approach improves both the learning accuracy and the downstream navigation performance in test environments with significant distribution shift.
§ RELATED WORK
The field of traversability analysis studies how to infer suitability of terrain for navigation (see survey <cit.>).
Compared to hand-crafting planning costs based on terrain features, directly learning traversability models from data requires less manual labeling and results in a more accurate assessment of vehicle-terrain interaction. Based on navigation data, visual and/or geometric features of the visited terrain can be used to train a traversability predictor based on estimated traversability values <cit.>. Building upon this basic idea, visual and visual-inertial representation learning <cit.>, self-training with pseudo-labels for unvisited terrain <cit.>, temporal fusion of robot states and sensor measurements <cit.>, and data augmentation via vision foundation models <cit.> can all improve learning accuracy. Alternatively, when a hand-crafted traversability model such as <cit.> is available, it can be used to provide supervisory signals for training NNs that are faster <cit.>.
One concern is that, because learning-based methods rely on likely limited real-world data, the learned models may not generalize to environments unseen during training.
While our work also uses self-supervised learning to obtain a traversability model, we incorporate physics knowledge into the model to improve generalization to OOD environments.
OOD detection is well studied and closely related to uncertainty quantification (see surveys <cit.>). At a high level, input features that are not well-represented in the training data can lead to high epistemic uncertainty, which can be estimated with techniques such as Bayesian dropout <cit.>, model ensembles <cit.>, and evidential methods <cit.>. Epistemic uncertainty can also be estimated via a terrain auto-encoder for detecting high reconstruction error <cit.>, a density estimator fit to the training data distribution <cit.>, or Gaussian Process regression <cit.>.
Our prior work <cit.> adopts the evidential method proposed by <cit.> to efficiently identify OOD terrain and shows that avoiding OOD terrain during planning can improve mission success rate. However, always avoiding OOD terrain can be too conservative.
To address this limitation, this new work exploits the mathematical formulation of evidential learning to explicitly embed a physics prior that is activated when encountering OOD features.
While informative priors have recently been combined with evidential learning in other problem settings, such as the use of a rule-based prior for trajectory prediction in autonomous driving <cit.>, our work focuses on off-road navigation and additionally introduces a physics-informed loss to further improve OOD generalization.
Incorporating physics and expert knowledge for navigating challenging terrain is crucial for both performance and safety, which can be achieved explicitly or implicitly (see survey <cit.>). For example, explicit safety constraints based on terrain geometry and robot states can be imposed during planning <cit.>.
Physics laws can be explicitly incorporated into NNs via differentiable physics engines or neuro-symbolic methods <cit.>.
In addition, custom models can be directly used as priors in an evidential framework <cit.>.
In contrast to previously mentioned explicit approaches, physics knowledge can also be infused into NN models implicitly by learning to reduce prediction errors with respect to both the training data and the physics model <cit.>.
The work described herein uses both explicit and implicit methods to infuse physics knowledge into the learned traversability model. As will be shown in Sec. <ref>, compared to NNs trained with a physics-based loss that suffer from distribution shifts in the far-OOD regime, our method gracefully falls back to the explicit physics prior. Furthermore, compared to an evidential network that only has a physics prior, our method uses a physics-based loss to further improve generalization.
§ PROBLEM FORMULATION
We consider the problem of motion planning over uneven terrain for a ground vehicle. This section introduces the robot model with uncertain traversability parameters caused by rough terrain and sensing uncertainty in Sec. <ref> and the risk-aware planning formulation in Sec. <ref>. Compared to our prior work <cit.> that only considers the linear and angular traction, this new work additionally accounts for the roll and pitch angles important for navigating uneven terrain.
§.§ Dynamical Models with Traversability Parameters
Consider the discrete time system:
𝐱_t+1 = F(𝐱_t, 𝐮_t, ψ_t),
where 𝐱_t∈𝒳⊆ℝ^n is the state vector such as the position and heading of the ground robot, 𝐮_t∈ℝ^m is the control input, and ψ_t∈Ψ⊆ℝ^r is the parameter vector that captures traversability of the terrain.
In this work, we focus on the bicycle model which is applicable for Ackermann-steering robots used in our simulation and hardware experiments:
[ p_t+1^x; p_t+1^y; θ_t+1 ] =
[ p_t^x; p_t^y; θ_t ] + Δ·[ ψ_1, t· v_t ·cos(θ_t); ψ_1, t· v_t ·sin(θ_t); ψ_2, t· v_t ·tan (δ_t)/L ],
where 𝐱_t=[p_t^x, p_t^y, θ_t] contains the X, Y positions and yaw angle; 𝐮_t=[v_t, δ_t] contains the commanded speed and steering angle; 0≤ψ_1,t, ψ_2,t≤1 are the linear and angular traction; Δ>0 is the time interval; and L>0 is the wheelbase. We additionally consider the absolute roll and pitch angles of the robot ψ_3,t, ψ_4, t≥ 0 that do not appear in (<ref>). Therefore, the traversability parameter vector is ψ_t=[ψ_1,t, ψ_2, t, ψ_3,t, ψ_4, t].
Intuitively, traction captures the “slip” or the ratio between achieved and commanded velocities, which is important for fast navigation, and the roll and pitch values are important for rollover prevention.
For rough terrain with vegetation, the traversability values are often unknown but can be empirically learned. Additionally, due to the noisy nature of the empirical data, we model the traversability ψ_t as random variables and mitigate the risk of encountering poor traversability during planning.
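For concreteness, one step of the traction-scaled bicycle model can be sketched as follows (our illustration; the state layout and the wheelbase value are placeholder assumptions, not taken from the implementation):

```python
import numpy as np

def bicycle_step(x, u, psi, dt=0.1, wheelbase=0.3):
    """One step of the traction-scaled bicycle model: psi = (psi1, psi2)
    are the linear and angular traction multiplying the commanded rates."""
    px, py, th = x
    v, delta = u
    psi1, psi2 = psi
    return np.array([px + dt * psi1 * v * np.cos(th),
                     py + dt * psi1 * v * np.sin(th),
                     th + dt * psi2 * v * np.tan(delta) / wheelbase])
```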
§.§ CVaR-Based Risk-Aware Navigation
Given the initial state 𝐱_0 and maximum roll and pitch angles ψ_3^max, ψ_4^max≥ 0, we want to find a control sequence 𝐮_0:T-1 that minimizes the time to reach the goal using the objective proposed in <cit.>. We use Conditional Value at Risk (CVaR, as visualized in Fig. <ref>) to quantify the risk of obtaining low traction and large roll and pitch angles. We achieve risk-aware planning by simulating the state trajectory using the left-tail CVaR_α^← of traction and imposing the maximum attitude constraints over the right-tail CVaR_α^→ of roll and pitch angles:
min_𝐮_0:T-1 C(𝐱_0:T)
s.t. 𝐱_t+1 = F(𝐱_t, 𝐮_t, ψ̅_t)
ψ̅_3, t≤ψ_3^max, ψ̅_4,t≤ψ_4^max
ψ̅_t = [ CVaR_α^←[ψ_1,t]; CVaR_α^←[ψ_2,t]; CVaR_α^→[ψ_3,t]; CVaR_α^→[ψ_4,t] ], [ ψ_1,t; ψ_2,t; ψ_3,t; ψ_4,t ]∼ p(𝐨_t)
𝐨_t is the terrain feature at 𝐱_t
∀ t∈{0,…,T-1},
where ψ̅_t contains the worst-case expected traversability values, α∈(0,1] is the risk tolerance, and p(𝐨_t) is the traversability distribution after observing the terrain feature at state 𝐱_t.
We use Model Predictive Path Integral control (MPPI <cit.>) to solve (<ref>–<ref>) because MPPI is gradient-free and parallelizable on GPU. Note that a similar formulation of (<ref>–<ref>) has been shown by <cit.> to outperform methods that assume no slip and state-of-the-art methods such as <cit.>, but this new work introduces additional constraints over roll and pitch.
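To make the tail statistics concrete, the sketch below (ours; the discretization convention is an assumption) computes left- and right-tail CVaR from a discretized PMF, which is the form in which the planner consumes the predicted traversability distributions:

```python
import numpy as np

def cvar_left(values, pmf, alpha):
    """Expected value of the worst (lowest) alpha probability mass, e.g.
    pessimistic traction; `values` are bin centers and `pmf` sums to one."""
    values, pmf = np.asarray(values, float), np.asarray(pmf, float)
    order = np.argsort(values)
    v, p = values[order], pmf[order]
    cum = np.cumsum(p)
    mass = np.clip(alpha - (cum - p), 0.0, p)  # probability taken per bin
    return float(np.sum(mass * v) / alpha)

def cvar_right(values, pmf, alpha):
    """Expected value of the worst (highest) alpha mass, e.g. roll or pitch."""
    return -cvar_left(-np.asarray(values, float), pmf, alpha)
```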
§ PHYSICS-INFORMED EVIDENTIAL LEARNING
In this section, we present the proposed method PIETRA shown in Fig. <ref>. At a high level, for each traversability parameter, PIETRA outputs a categorical distribution over discretized traversability values to capture aleatoric uncertainty (inherent, irreducible data uncertainty). Moreover, PIETRA uses a normalizing flow <cit.> to estimate densities of latent features as proxies for epistemic uncertainty (model uncertainty due to distribution shift).
PIETRA improves upon our prior work EVORA <cit.> by having an explicit physics prior that is invoked when encountering OOD inputs (Sec. <ref>) and an uncertainty-aware physics-informed loss that implicitly injects physics knowledge to further improve model accuracy in OOD terrain (Sec. <ref>). Lastly, we design custom physics priors used in our experiments in Sec. <ref> and discuss the implementation details in Sec. <ref>.
§.§ Dirichlet Distribution with Physics Prior
The Dirichlet distribution q=Dir(β) with concentration parameters β=[β_1,…,β_B]∈ℝ^B_{>0} is a hierarchical distribution over categorical distributions Cat(𝐩), where 𝐩∈ℝ^B_{≥0} is a normalized probability mass function (PMF) over B>0 discretized traversability values, i.e., ∑_{b=1}^B p_b = 1. The parameters 𝐩 of the lower-level categorical distribution Cat(𝐩) are sampled from the higher-level Dirichlet distribution, i.e., 𝐩∼Dir(β). The mean (also called the expected PMF) of the Dirichlet distribution is given by 𝔼_{𝐩∼ q}[ 𝐩] = β/∑_{b=1}^B β_b.
The sum n=∑_{b=1}^B β_b reflects how concentrated the Dirichlet distribution is around its mean and corresponds to the “total evidence” of a data point observed during training.
Given an input feature 𝐨 and a physics model p^phys that maps its input to a traversability PMF with the evidence n^phys>0, the NN performs an input-dependent posterior update:
β_ϕ, λ^𝐨 = n^phys p^phys(𝐨) + n_λ^𝐨 p_ϕ(𝐨),
n_λ^𝐨 = N p_λ(𝐳_𝐨),
where the posterior Dirichlet distribution q_ϕ, λ^𝐨 = Dir(β_ϕ, λ^𝐨) depends on the physics prior and its evidence, the predicted traversability PMF p_ϕ(𝐨), and the predicted evidence n_λ^𝐨 that is proportional to the density p_λ(𝐳_𝐨) for the latent feature 𝐳_𝐨 weighted by a fixed certainty budget N>0.
The posterior Dirichlet distribution leads to the expected traversability PMF:
𝐩^𝐨_ϕ, λ = ( n^phys p^phys(𝐨) + n_λ^𝐨 p_ϕ(𝐨) ) / ( n^phys + n_λ^𝐨 ),
which is used by the risk-aware planner (<ref>–<ref>).
The prior evidence is set to a small number (e.g., n^phys=B) such that the predicted evidence n_λ^𝐨 is much larger than n^phys for ID features.
As a result, the learned traversability PMF p_ϕ(𝐨) is used for planning on ID terrain.
However, the new evidence n^𝐨_λ diminishes if the features are OOD, so the physics model p^phys(𝐨) is used for planning instead. The graceful transition between learned and physics-based traversability estimates eliminates the need to avoid OOD terrain, as is required in our prior work EVORA <cit.> that uses an uninformative prior.
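The update is compact enough to state in code. The following sketch is ours, with a placeholder certainty budget N; it shows how the expected PMF interpolates between the learned and physics-based predictions:

```python
import numpy as np

def expected_pmf(p_phys, p_net, latent_density, N=1e4, n_phys=None):
    """Evidence-weighted blend of the physics prior and the learned PMF.
    n_phys defaults to the number of bins B (a small prior evidence)."""
    p_phys, p_net = np.asarray(p_phys), np.asarray(p_net)
    n_phys = len(p_phys) if n_phys is None else n_phys
    n_net = N * latent_density          # predicted evidence, small when OOD
    beta = n_phys * p_phys + n_net * p_net
    return beta / beta.sum()            # -> p_net in-distribution, p_phys far OOD
```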
§.§ Uncertainty-Aware Physics-Informed Loss
In addition to explicitly embedding the physics prior in the posterior update (<ref>), physics knowledge can also be implicitly infused into the NN by adding a physics-based loss term during training. In contrast to existing methods <cit.> that are not uncertainty-aware, we adapt the physics-informed loss to the context of evidential learning.
We design the training loss based on the squared Earth Mover's distance (EMD^2) <cit.>, which is a better measure of error compared to the KL divergence that treats prediction errors independently across discrete bins. EMD^2 has a closed-form expression after dropping the constant multiplicative factor:
EMD^2(𝐩, 𝐲) := ‖ 𝒞(𝐩) - 𝒞(𝐲) ‖^2,
where 𝒞:ℝ^B→ℝ^B is the cumulative sum operator, and 𝐩 and 𝐲 are the predicted and target PMFs.
To train an evidential NN, our prior work proposes the uncertainty-aware (U) loss, which is the expected EMD^2 under the predicted Dirichlet q (see the closed form in <cit.>):
L^U(q, 𝐲) := 𝔼_{𝐩∼ q}[ EMD^2(𝐩, 𝐲) ].
This uncertainty-aware loss has been shown to improve both learning accuracy and navigation performance compared to using (<ref>) or cross-entropy-based losses for training.
To implicitly infuse physics knowledge, we propose the uncertainty-aware physics-informed (UPI) loss, defined as the expectation of the weighted sum of data loss and physics loss given the predicted Dirichlet distribution:
L^UPI(q, 𝐲) := 𝔼_{𝐩∼ q}[ EMD^2(𝐩, 𝐲) + κ·EMD^2(𝐩, 𝐩^phys) ] = L^U(q, 𝐲) + κ· L^U(q, 𝐩^phys),
where 𝐩^phys=p^phys(𝐨) is the PMF predicted by the physics model given some feature 𝐨, and κ≥0 is a hyperparameter. Intuitively, the physics-based term ensures that NN predictions stay close to the physics prior, thus improving generalization in near-OOD regimes. As our ablation study in Sec. <ref> later shows, both the UPI loss (<ref>) and the physics prior (<ref>) are important for achieving the best learning performance.
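A schematic implementation helps fix ideas. The sketch below (ours) evaluates EMD^2 via cumulative sums and combines the data and physics terms; for brevity it applies the loss to a single predicted PMF rather than using the closed-form expectation under the Dirichlet:

```python
import torch

def emd2(p, y):
    """Squared Earth Mover's distance between PMFs (last dim = bins)."""
    return ((torch.cumsum(p, -1) - torch.cumsum(y, -1)) ** 2).sum(-1)

def upi_loss(p_pred, y_onehot, p_phys, kappa=0.1):
    """UPI-style objective: data term plus physics-consistency term."""
    return (emd2(p_pred, y_onehot) + kappa * emd2(p_pred, p_phys)).mean()
```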
§.§ Uncertainty-Aware Physics Prior
We now present our custom physics priors for uneven terrain with two semantic types: the rigid dirt terrain and the soft vegetation terrain. Note that alternative prior designs could be used provided they are based on physics principles that hold in both ID and OOD scenarios.
The traction priors are inferred from the terrain properties under the wheels (also called the “footprint”) assuming the robot has 0^∘ roll and pitch.
Over dirt terrain, as the tire traction depends on friction that reduces with greater slopes, we design a simple slope-based model that predicts lower traction for more sloped terrain.
In our hardware experiment, taller vegetation deforms less and slows down the robot more, so we propose a traction model that predicts lower traction for taller vegetation.
To factor in the uncertainty due to outcomes from the four wheels, our physics-based traction models for the traversability parameter ψ'∈{ψ_1, ψ_2} are
𝐩^ψ'_dirt = (1/4) ∑_{i=1}^4 1^ψ'_dirt( clip( (s^max - s_i)/s^max, 0, 1) ),
𝐩^ψ'_veg = (1/4) ∑_{i=1}^4 1^ψ'_veg( clip( (h^max - h_i)/h^max, 0, 1) ),
where s_i, h_i are the terrain slope and the height of vegetation under the wheel i, as shown in Fig. <ref>(a-b); the clip function restricts the traction values between 0 and 1; and the operator 1^ψ'_s maps a scalar to a PMF where the estimated traversability value for ψ' has probability 1, for the semantic type s∈𝒮 := {dirt, veg}.
Intuitively, the priors merge the estimated traction outcomes from the four wheels into a common PMF. Note that the terrain slope is estimated in the direction of the robot's heading, and we use the absolute slope to produce low traction values for both uphill and downhill terrain.
The roll and pitch angles are estimated based on the terrain height under the wheels using trigonometry. The physics model for the traversability parameter ψ'∈{ψ_3, ψ_4} and the semantic type s∈𝒮 is
𝐩^ψ'_s = (1/2) ∑_{(i,j) ∈𝒲^ψ'} 1_s^ψ'( arctan( (h_i-h_j)/d_i,j ) ),
where d_i,j is the distance between the wheels i,j. Roll and pitch priors use different wheel index pairs defined in 𝒲^ψ_3={(1,4), (2,3)} and 𝒲^ψ_4={(1,2), (4,3)} to account for uncertain terrain contact points as shown in Fig. <ref>(c-d).
To handle different semantic types within the footprint feature 𝐨, the proposed physics prior for the traversability parameter ψ'∈{ψ_1, ..., ψ_4} combines the prior predictions based on different semantic types via:
p^phys, ψ'(𝐨) = w^unif𝐩^unif + (1-w^unif)∑_s∈𝒮 r_s 𝐩^ψ'_s,
where r_s is the ratio of the semantic types within the footprint, 𝐩^ψ'_s is the physics prior based on (<ref>–<ref>), 𝐩^unif is a uniform PMF, and w^unif∈[0,1] determines how much uniform PMF is incorporated into the final physics prior to account for inaccurate prior predictions, as shown in Fig. <ref>(e). Note that (<ref>) is used as the physics prior in (<ref>).
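The priors are cheap to evaluate. A sketch (ours, with hypothetical helper names and bin conventions) of the dirt traction prior and the semantic mixing reads:

```python
import numpy as np

def one_hot_pmf(value, edges):
    """Place unit mass in the bin of `edges` containing the scalar `value`."""
    pmf = np.zeros(len(edges) - 1)
    idx = np.clip(np.digitize(value, edges) - 1, 0, len(pmf) - 1)
    pmf[idx] = 1.0
    return pmf

def dirt_traction_prior(wheel_slopes, s_max, edges):
    """Average the per-wheel one-hot PMFs of the slope-based traction model."""
    vals = np.clip((s_max - np.abs(wheel_slopes)) / s_max, 0.0, 1.0)
    return np.mean([one_hot_pmf(v, edges) for v in vals], axis=0)

def mix_priors(priors, ratios, w_unif=0.2):
    """Combine per-semantic priors by footprint ratio, blended with uniform."""
    mix = sum(r * p for r, p in zip(ratios, priors))
    return w_unif / len(mix) + (1.0 - w_unif) * mix
```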
§.§ Implementation Details
The traversability predictor p_ϕ and the normalizing flow p_λ are trained jointly using the proposed UPI loss (<ref>) on an empirically collected dataset {(𝐨, )_k}_k=1^K where K>0 and every estimated traversability parameter in is converted to a target PMF 𝐲 using one-hot encoding. In practice, we assume independence of the traversability parameters and learn their distributions separately. For simplicity, we use a multi-layer perceptron (MLP) as the shared encoder to process the flattened and concatenated semantic and elevation patches of the terrain. The shared encoder is followed by separate fully connected decoders with soft-max outputs for predictions of each traversability parameter. To reduce computation, we train a single flow network using the outputs from the shared encoder. As the accuracy of physics priors differs across traversability parameters, we use a standalone MLP to map the shared encoder's output to scalars between 0 and 1 to downscale the predicted evidence in the posterior update (<ref>) for each traversability parameter.
In contrast to our prior work <cit.>, this work uses yaw-aligned features because roll, pitch and traction are directional over uneven terrain. At deployment time, feature patches are obtained in a sliding-window fashion with a fixed number of yaw angles. The predicted traversability distributions are then converted to CVaR values for look-up during planning.
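In outline, the architecture described above can be sketched as follows (ours; the layer sizes are placeholders, and the normalizing flow on the shared latent is omitted):

```python
import torch.nn as nn

class TraversabilityNet(nn.Module):
    """Shared MLP encoder, per-parameter soft-max decoders, and a head that
    downscales the predicted evidence for each traversability parameter."""
    def __init__(self, in_dim, latent=64, bins=20, n_params=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent), nn.ReLU())
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(latent, bins), nn.Softmax(dim=-1))
            for _ in range(n_params))
        self.evidence_scale = nn.Sequential(nn.Linear(latent, n_params),
                                            nn.Sigmoid())

    def forward(self, feat):
        z = self.encoder(feat)                    # shared latent (fed to the flow)
        pmfs = [dec(z) for dec in self.decoders]  # one PMF per parameter
        return z, pmfs, self.evidence_scale(z)
```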
§ SIMULATION RESULTS
We use the Chrono Engine <cit.> to simulate navigation over rocky terrain and collect benchmark data. When compared against several baselines with or without physics knowledge, PIETRA achieves the best prediction accuracy (Sec. <ref>). Furthermore, we conduct an ablation study to validate the proposed improvements (Sec. <ref>). When used for navigation, PIETRA leads to the best success rate and time to goal (Sec. <ref>). While only the dirt terrain is simulated, the real-world experiments have both dirt and vegetation (Sec. <ref>).
§.§ Simulation Setup
An overview of the simulation environment is shown in Fig. <ref>. We generate 30 distinct synthetic elevation maps based on the real-world rock testbed proposed in <cit.>, which are split evenly for training, validation and testing. Every map has sides of 50 m and we use an Ackermann-steering robot of dimension 3.4 m by 1.8 m to collect 20 navigation trials in each map. For simplicity, the robot executes sinusoidal steering and 2 m/s speed commands, and the episode ends if the robot rolls over, gets stuck, or reaches the map boundaries.
To emphasize the testing of OOD generalization, the test elevation maps are scaled more than the training and validation maps, inducing significant distribution shift as visualized in Fig. <ref>, where we use the standard deviations of the elevation features to measure terrain unevenness.
To clearly mark the ID/OOD boundary, we consider a terrain feature as ID (or OOD) if its unevenness falls below (or above) the 50th percentile of the training dataset. We only use the ID data for training and validation, but the accuracy of trained NNs is evaluated on the entire test dataset. Note that the test maps are also used for evaluating navigation performance.
§.§ Learning Benchmark
The proposed method PIETRA is compared to several state-of-the-art methods, including our prior work EVORA <cit.>, an encoder-decoder NN trained with the EMD^2 loss (Vanilla), and an encoder-decoder NN trained with the physics-informed loss (PI) adapted from <cit.> to use the EMD^2 loss.
Based on the training data and vehicle characteristics, we tune the max slopes (<ref>) for linear and angular traction to correspond to 30^∘ and 15^∘ with w^unif=0.2 for mixing in the uniform PMF (<ref>).
Hyperparameter sweeps are conducted over learning rates in {1e-3, 1e-4,1e-5} for the Adam optimizer, and weights for the physics loss term in {0.1, 0.5, 1.0}. To encourage smoothness, we penalize Dirichlet entropy <cit.> with weights in {1e-3, 1e-4,1e-5} for evidential NNs. We identify the best parameter set for each method based on the average validation errors over 5 training seeds and report the mean and standard deviations of the prediction errors over the seeds.
The overall, ID and OOD test errors are reported in Table <ref>, where we also include the uniform prior and the proposed physics prior.
The takeaway is that PIETRA achieves the best overall, ID and OOD performance. To gain more intuition, the test errors are binned based on terrain unevenness in Fig. <ref>. Notably, while PI outperforms Vanilla and performs similarly to PIETRA on ID features, the improvement thanks to the physics-informed loss degrades as features become more OOD. On the other hand, PIETRA and EVORA fall back to their corresponding priors (physics prior and uniform prior) due to decreasing predicted evidence (<ref>).
§.§ Ablation Study
An ablation study for the proposed improvements with respect to EVORA is summarized in Table <ref>. For completeness, we include a variant where EVORA uses the physics model for OOD features with latent densities lower than the densities observed during training. The takeaway is that both the physics prior and physics-informed loss are important for achieving the best overall accuracy. While the UPI loss (<ref>) alone leads to the best ID accuracy, it offers limited improvement on OOD accuracy.
Interestingly, EVORA equipped with a physics prior achieves better OOD accuracy than OOD-based switching, verifying the benefit of the explicitly embedded physics prior.
§.§ Navigation Benchmark
We deploy the NNs from Sec. <ref> trained with the best hyperparameters in the test environments. As discussed in <cit.>, EVORA detects OOD terrain that is avoided during planning via auxiliary costs. We consider 10 test maps and 10 start-goal pairs that are 40 m apart as visualized in Fig. <ref>. The robot aims to reach the goal without exceeding 30^∘ of roll and pitch angles or getting immobilized. The robot has a maximum steering angle of 30^∘ and a maximum speed of 1 m/s to ensure stable crawling. We further consider 3 risk tolerances α in {0.4, 0.6, 0.8} for the planner and report the success rate and the time to goal of successful trials.
The navigation results are shown in Fig. <ref>, which shows that PIETRA achieves the best success rate and time to goal. In comparison, EVORA is too conservative by always avoiding OOD terrain. While PI outperforms Vanilla, it performs worse than the physics prior due to poor OOD generalization.
Note that the test maps are challenging to navigate, so the best average success rate is low, but the results still clearly demonstrate the advantages of the proposed method.
§ HARDWARE EXPERIMENTS
While the simulation results in Sec. <ref> have shown improved learning and navigation performance achieved by PIETRA with respect to the state-of-the-art, this section presents further evidence of PIETRA's practical utility via real-world experiments on a computationally constrained platform in the presence of multiple semantic types in the environment.
§.§ Data Collection, Training and Deployment
The indoor 9.6 m by 8 m arena contains turf and fake bushes to mimic outdoor vegetation. In addition, the dirt semantic type includes the concrete floor and a skateboard ramp.
A 0.33 m by 0.25 m RC car carries a RealSense D455 depth camera, and a computer with an Intel Core i7 CPU and Nvidia RTX 2060 GPU. The robot runs onboard traversability prediction, motion planning, and elevation mapping with 0.1 m resolution, but Vicon is used for estimating the pose and velocity of the robot. The robot identifies vegetation by extracting green image pixels instead of using a standalone NN classifier to conserve GPU resources.
Traversability models are trained on 10 min of manual driving data over flat concrete floor and turf, making the tall bushes and skateboard ramp OOD at test time. For the vegetation and dirt traction priors, we empirically tune the maximum dirt slope s^max (<ref>) to correspond to 30^∘ and the maximum vegetation height h^max (<ref>) to be 0.2 m which is slightly greater than the 0.15 m wheel diameter.
To prevent damaging the hardware, we do not consider PI and Vanilla because their predictions for OOD terrain are unreliable. Therefore, we only consider PIETRA, the physics prior, and EVORA that avoids OOD terrain. All models are trained with the learning rate of 1e-4, entropy weight of 1e-4 and physics loss weight of 0.1 when applicable. We set a fixed risk tolerance of α=0.5 for the risk-aware planner. Note that MPPI runs at 10 Hz on GPU with a 5 s planning horizon and 1024 rollouts, and the generation of CVaR maps runs at 5 Hz with the map dimension of 81× 96 with 13 yaw discretizations.
§.§ Navigation Results
The robot is tasked to navigate to a fixed goal without violating roll and pitch constraints of 30^∘ and 45^∘ respectively, while deciding whether to drive on vegetation, climb over the ramp, or stay on the concrete floor. We vary the elevation and density of bushes and repeat each method 20 times. Some snapshots of the start of the missions are visualized in Fig. <ref> for scenarios with short and tall vegetation. The success rates and the time to goal of the successful trials are reported in Fig. <ref>, showing that PIETRA achieves the best performance. PIETRA violates the roll constraint once because the robot drives off the down-ramp prematurely. EVORA has 3 out-of-bound failures because the robot frequently takes wide turns to avoid OOD terrain and does not have time to stop within the boundaries. The physics prior leads to 2 out-of-bound failures because it misses the goal region slightly due to over-optimistic traction estimates. The physics prior also leads to 2 roll violations due to driving off the down-ramp prematurely.
Representative trials in Fig. <ref> show that PIETRA goes over vegetation when the bushes are short, but it goes over the ramp when the bushes are tall. In comparison, the large OOD regions force EVORA to take wide turns. Interestingly, the robot using the physics prior favors the concrete floor. To explain this phenomenon, Fig. <ref> shows the predicted CVaR of linear traction for each method, suggesting that the physics prior is over-confident about the traction of the concrete floor. This further shows that PIETRA combines the best of EVORA and physics prior in that it relies on the learned model in ID terrain, but falls back to the physics model in OOD terrain.
§ CONCLUSION AND FUTURE WORK
We presented PIETRA, a self-supervised traversability learning method that incorporates physics knowledge explicitly via the custom physics prior and implicitly via the physics-informed training loss. Extensive simulations and hardware experiments show that PIETRA improves learning accuracy and navigation performance under significant distribution shifts.
One of the limitations of PIETRA is its large memory footprint due to yaw dependency and fine discretization, so improvements are needed to use PIETRA for higher-dimensional systems with more complex state dependencies.
There are several directions that could be pursued to further improve the OOD generalization, such as data augmentation <cit.>, differentiable physics simulators <cit.> and neuro-symbolic networks <cit.>.
|
http://arxiv.org/abs/2409.03711v1 | 20240905171755 | Relaxation times for disoriented isospin condensates in high energy heavy ion collisions | ["Olivia Chabowski", "Joseph I. Kapusta", "Mayank Singh"] | hep-ph | ["hep-ph", "nucl-th"] | |
http://arxiv.org/abs/2409.02982v1 | 20240904180000 | Topological recursion for hyperbolic string field theory | ["Atakan Hilmi Fırat", "Nico Valdes-Meller"] | hep-th | ["hep-th", "math.GT"] |
MIT-CTP-5753
September 9, 2024

Topological recursion for hyperbolic string field theory

Atakan Hilmi Fırat^1,2 and Nico Valdes-Meller^1

^1 Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge MA 02139, USA

^2 Center for Quantum Mathematics and Physics (QMAP), Department of Physics & Astronomy, University of California, Davis, CA 95616, USA

[email protected], [email protected]
Abstract

We derive an analog of Mirzakhani's recursion relation for hyperbolic string vertices and investigate its implications for closed string field theory. Central to our construction are systolic volumes: the Weil-Petersson volumes of regions in moduli spaces of Riemann surfaces whose elements have systoles L ≥ 0. These volumes can be shown to satisfy a recursion relation through a modification of Mirzakhani's recursion as long as L ≤ 2 sinh^-1 1. Applying the pants decomposition of Riemann surfaces to off-shell string amplitudes, we promote this recursion to hyperbolic string field theory and demonstrate the higher order vertices are determined by the cubic vertex iteratively for any background. Such structure implies the solutions of closed string field theory obey a quadratic integral equation. We illustrate the utility of our approach in an example of a stubbed scalar theory.
§ INTRODUCTION
Despite its three decade history, covariant closed string field theory (CSFT) still remains notoriously impenetrable, refer to <cit.> for reviews. This can be attributed to the seemingly arbitrary structure of its elementary interactions that have been reverse-engineered from string amplitudes. Even though CSFT provides a complete and rigorous definition for closed string perturbation theory, its construction is also responsible for why almost all of its nonperturbative features remain inaccessible—for example its nonperturbative vacua. Among them the most important one is arguably the hypothetical tachyon vacuum of the critical bosonic CSFT <cit.>. Its construction is expected to provide insights on the nonperturbative nature of string theory and its background-independent formulation.
Therefore obtaining any nonperturbative information from CSFT (or the theory and/or principles that govern CSFT perturbatively) appears to require seeking finer structures within its current formulation—beyond those demanded by perturbative quantum field theory, such as amplitude factorization, to tame the infinitely many, nonlocal interactions between infinitely many target space fields. The main purpose of this work is to begin unearthing some of these structures and investigate their implications.
This objective immediately brings our attention to string vertices encoding the elementary interactions of CSFT, which is arguably the most nontrivial ingredient of CSFT. Roughly speaking, these are the subsets of appropriate moduli spaces of Riemann surfaces that satisfy the consistency condition known as the geometric master equation
∂𝒱 + (1/2) {𝒱, 𝒱} + ħΔ𝒱 = 0 ,
see section <ref> for details. This equation encodes the perturbative consistency of CSFT in the sense of Batalin-Vilkovisky (BV) formalism <cit.>. A particular solution to (<ref>) is equivalent to a particular choice of field parametrizations in CSFT <cit.>. Although any allowed field parametrization can be used for a given theory, having one that most efficiently parametrizes the interactions and contains additional structure would be the superior choice.
Hyperbolic string vertices have the potential to provide such a canonical choice for all-order investigations in CSFT <cit.>. The overall objective of this paper is to demonstrate the existence of a topological recursion among the elementary interactions of hyperbolic CSFT and initiate the investigation of its consequences for the nonperturbative structure of CSFT and its solutions.
Briefly stated, we show there is an analog of Mirzakhani's recursion for the Weil-Petersson (WP) volumes of moduli spaces <cit.> for the hyperbolic vertices ⟨𝒱_g,n (L_i) | = ⟨𝒱_g,n (L_1, ⋯, L_n) | of hyperbolic CSFT, which are defined using hyperbolic surfaces of genus g with n punctures whose local coordinates are determined up to a phase by grafting semi-infinite flat cylinders to geodesic borders of lengths |L_1|, ⋯, |L_n| <cit.>. It takes the following form (see (<ref>) and figure <ref>)
|L_1| ·⟨𝒱_g,n (L_i) | = ∑_i=2^n ∫_-∞^∞ d ℓ [ ⟨ℜ (L_1, L_i, ℓ) | ⊗⟨𝒱_g, n-1 (-ℓ, 𝐋∖{L_i }) | ] |ω^-1⟩
+ (1/2) ∫_-∞^∞ d ℓ_1 ∫_-∞^∞ d ℓ_2 [ ⟨𝔇 (L_1, ℓ_1, ℓ_2) | ⊗( ⟨𝒱_g-1,n+1 (-ℓ_1, -ℓ_2, 𝐋 )| + ∑_stable⟨𝒱_g_1,n_1 (-ℓ_1, 𝐋_1 ) | ⊗⟨𝒱_g_2,n_2 (-ℓ_2, 𝐋_2 ) | ) ] | ω^-1⟩_1 | ω^-1⟩_2 ,
where
𝐋 = {L_2, ⋯, L_n } , 𝐋_1∪𝐋_2 = 𝐋 , 𝐋_1∩𝐋_2 = ∅ ,
and the “stable” in (<ref>), and henceforth, denotes the sum over all non-negative integers g_1,g_2,n_1,n_2 and partitions 𝐋_1, 𝐋_2 that satisfy
g_1 + g_2 = g , n_1 + n_2 = n + 1 , 2g_1 - 2 + n_1 > 0 , 2g_2 - 2 + n_2 > 0 ,
for 2g -2 + n > 1 and n ≥ 1.[We comment on the cases g=1, n=1 and g≥ 2, n = 0 in section <ref>.] Here | ω^-1⟩ is the Poisson bivector <cit.> that twist-sews the entries of states whose length is integrated. Its subscript denotes the sewed entry. The relation (<ref>) is a recursion in the negative Euler characteristic -χ_g,n = 2g-2+n of the underlying surfaces, hence we refer to it as a topological recursion in this particular sense.
The amplitudes ⟨𝒱_g,n (L_i) | are defined using the hyperbolic surfaces directly when L_i >0. However, forming the recursion also requires incorporating the cases when L_i are negative in the sense that they differ by the application of the 𝔅 ghost operators that result from changing the length of the border
⟨𝒱_g,n (L_1, ⋯, L_n) | =
⟨𝒱_g,n (|L_1|, ⋯, |L_n|) | (𝔅^⊗ θ(-L_1)⊗⋯⊗𝔅^⊗ θ(-L_n)) ,
where θ(x) is the Heaviside step function: it is 1 whenever x ≥ 0, 0 otherwise. These cases are recursively determined by (<ref>) as well. We particularly highlight that the base case for the recursion is given by the cubic vertex
⟨𝒱_0,3 (L_1, L_2, L_3) | = ⟨Σ_0,3( |L_1|, |L_2| , |L_3| )|
( 𝔅^⊗ θ(-L_1)⊗𝔅^⊗ θ(-L_2)⊗𝔅^⊗ θ(-L_3)) ,
where ⟨Σ_0,3( L_1, L_2 ,L_3)| is the generalized hyperbolic three-vertex of <cit.>, see appendix <ref>. As an example, a single application of 𝔅 to ⟨Σ_0,3( L_1, L_2 ,L_3)| is given by (see (<ref>))
⟨Σ_0,3( L_1, L_2 , L_3 )| ( 𝔅⊗𝕀⊗𝕀) = ⟨Σ_0,3( L_1, L_2 , L_3 )| [ (1/2π) ( (1/ρ_1) (∂ρ_1/∂λ_1) ( b_0 + b̄_0 ) + ( λ_1 (λ_2^2 - λ_3^2)/((1+ λ_1^2)^2 ρ_1) ) ( b_1 + b̄_1 ) + ⋯ ) ⊗𝕀⊗𝕀 ] .
Here L_i = 2 πλ_i > 0 and ρ_1 = ρ_1(L_1, L_2, L_3) is the mapping radius associated with the first puncture (<ref>). Observe that the 𝔅 insertion produces a dependence on all the border lengths L_1, L_2, L_3. One can similarly consider the case of multiple 𝔅 insertions.
The recursion (<ref>) also contains the string kernels
⟨ℜ(L_1,L_2,L_3) | = R̄_|L_1| |L_2| |L_3| ⟨𝒱_0,3(L_1,L_2,L_3)| ,
⟨𝔇(L_1,L_2,L_3) | = D̄_|L_1| |L_2| |L_3| ⟨𝒱_0,3(L_1,L_2,L_3)| ,
where R̄_L_1 L_2 L_3 , D̄_L_1 L_2 L_3 are the functions
R̄_L_1 L_2 L_3 = R_L_1 L_2 L_3 - L_1 θ(L - L_3) ,
D̄_L_1 L_2 L_3 = D_L_1 L_2 L_3 - R_L_1 L_2 L_3 θ(L - L_2) - R_L_1 L_3 L_2 θ(L - L_3) + L_1 θ(L - L_2) θ(L - L_3) .
The kernels depend on the threshold length L, which is bounded by
0 < L ≤ 2 sinh^-1 1 ≈ 1.76 ,
for the recursion (<ref>) to hold true. Finally, the functions R_L_1 L_2 L_3 and D_L_1 L_2 L_3 above are the well-known functions that appear in Mirzakhani's recursion for the WP volumes of moduli spaces of Riemann surfaces <cit.> (also see <cit.>)
R_L_1 L_2 L_3 = L_1 - log[ ( cosh(L_2/2) + cosh((L_1 + L_3)/2) ) / ( cosh(L_2/2) + cosh((L_1 - L_3)/2) ) ] ,
D_L_1 L_2 L_3 = 2 log[ ( exp(L_1/2) + exp((L_2 + L_3)/2) ) / ( exp(-L_1/2) + exp((L_2 + L_3)/2) ) ] .
The central idea behind (<ref>) is to excise certain pants from surfaces that make up ⟨𝒱_g,n (L_i) | (see figure <ref>) to factorize it in terms of lower-order vertices, then perform the moduli integration over the lengths and twists of the excised seams. We need to make sure that each surface in ⟨𝒱_g,n (L_i) | is counted once in the moduli integration after the factorization, and this introduces the measure factors R̄_L_1 L_2 L_3, D̄_L_1 L_2 L_3 in the integrand (<ref>). The excisions of pants from the surface further require the insertion of 𝔅, see section <ref>.
A careful reader may notice the similarity between (<ref>) and the recursion proposed by Ishibashi <cit.>. Despite the structural similarity, there are a few crucial differences between our constructions. First, Ishibashi's off-shell amplitudes are unconventional from the perspective of covariant CSFT. They do not factorize through flat propagators. Instead, the simple closed geodesics of hyperbolic surfaces play the role of the propagator and the degenerations occur when the lengths of simple closed geodesics shrink to zero. This obscures its connection to ordinary CSFT, but it provides a straightforward structure for writing a recursion relation for off-shell amplitudes, as the moduli integration is performed over the entire moduli space.
On the other hand, ⟨𝒱_g,n (L_i) | in (<ref>) are obtained by performing the moduli integration only over “systolic subsets” and the recursion is formed exclusively among them. A priori, it isn't clear that such a recursion has a right to exist—it requires a nontrivial modification of Mirzakhani's method where the systolic subsets are related to each other rather than the entire moduli spaces. But thanks to the “twisting procedure” of <cit.> and the collar lemma <cit.>, this modification is in fact possible, see section <ref> for details. Combining the twisting procedure and the factorization analysis of amplitudes, we derive a recursion relation (<ref>) without giving up the connection to CSFT.
In order to see the impact of this modification, first recall the hyperbolic CSFT BV master action
S_L[Ψ] = S_0,2 [Ψ] + ∑_g,n (1/n!) ħ^g κ^2g-2+n ⟨𝒱_g,n (L, ⋯, L) | Ψ⟩^⊗ n ,
when 0 < L ≤ 2 sinh^-1 1 <cit.>. Here S_0,2[Ψ] is the free part of the action and the sum over the non-negative integers g,n above, and henceforth, denotes the sum over 2g-2+n > 0. The recursion (<ref>) then strikingly shows the different order terms in the hyperbolic CSFT action are related to each other and they are entirely determined by the cubic data (<ref>). This manifestly demonstrates the cubic nature of CSFT in the sense of topological recursion. We point out this result is consistent with the famous no-go theorem for having a cubic level-matched covariant CSFT <cit.>: we don't give such a formulation here, only express a relation between different terms in the action (<ref>).
A similar observation has already been made using the relation between hyperbolic vertices and the classical conformal bootstrap <cit.> by one of the authors. However the equation (<ref>) here actually makes this cubic nature even more apparent.[In fact these features are possibly related to each other, see the discussion on conformal blocks in <cit.>.] We highlight that (<ref>), and its consequences, are background-independent and apply to any bosonic closed string background.
The recursion (<ref>) is stronger than the BV structure of CSFT. There, the focus is on how the Feynman diagrams with propagators are related to the elementary vertices—as stated in terms of the geometric master equation (<ref>). In (<ref>), on the other hand, the elementary vertices themselves relate to each other. Something akin to this feature can be artificially created by deforming a polynomial theory with stubs <cit.>, for which the interactions are obtained through homotopy transfer of the seed theory using a “partial propagator” and the resulting (infinitely many) vertices are related to each other iteratively. This is the primary reason why the stubbed theories are theoretically tractable despite their non-polynomiality. The expectation is that the topological recursion (<ref>) would help to achieve similar feats in CSFT.
In order to demonstrate the power of this extra structure we examine its implication for the classical solutions to CSFT (<ref>). Let us call the stationary point of the classical CSFT action Ψ_∗, keeping its L dependence implicit, and introduce the string field Φ(L_1)
Φ(L_1) = ∑_n=3^∞ (κ^n-2/(n-1)!) ( 𝕀⊗⟨𝒱_0,n (L_1, L, ⋯, L) | ) ( | ω^-1⟩ ⊗ | Ψ_∗⟩^⊗ (n-1)) ,
associated with the solution Ψ_∗. Observe that we have
Φ(L_1 = L) = - Q_B Ψ_∗ ,
by the equation of motion, so Φ(L_1) should be thought of as a certain off-shell generalization of the BRST operator Q_B acting on the solution. It is parameterized by L_1. The recursion (<ref>) implies the following quadratic integral equation for this string field (see (<ref>))
0 = Φ (L_1) - (κ/2) A_L_1 L L ( β Φ (L) , β Φ (L) ) - κ ∫_-∞^∞ d ℓ B_L_1 L ℓ ( β Φ (L) , Φ(ℓ) ) - (κ/2) ∫_-∞^∞ d ℓ_1 ∫_-∞^∞ d ℓ_2 C_L_1 ℓ_1 ℓ_2 ( Φ(ℓ_1), Φ(ℓ_2) ) ,
where the 2-products A,B,C are constructed using (<ref>). Here we fixed the gauge by
β(L) Ψ_∗ = 0 ,
see (<ref>). The detailed derivation of this equation is given in section <ref>.
The solutions Φ(L_1) to (<ref>) (if they exist) are related to the honest CSFT solutions through (<ref>) and they can be used to obtain the cohomology around them. Even though we haven't made any attempt towards realizing this here, we have investigated the counterpart problem in the stubbed cubic scalar field theory as a proof of principle. We construct the nonperturbative vacuum of <cit.> and read the mass of the linear excitations around it entirely using the analog of (<ref>) with some guesswork. In particular we haven't performed a resummation while doing so. This is encouraging for future attempts of solving CSFT with (<ref>): resumming the CSFT action would have been a hopeless endeavor with current technology.
The outline of the paper is as follows. In section <ref> we review hyperbolic geometry and CSFT. In section <ref> we compute the WP volumes of systolic subsets à la Mirzakhani, after reviewing her recursive method for computing WP volumes of the entire moduli spaces. This provides a toy model for the actual case of interest, which is the recursion for hyperbolic string vertices. We derive it in section <ref>. We provide more insight into our results by obtaining an analogous recursion relation for the stubbed cubic scalar theory of <cit.> in section <ref> and investigate its implications. This section is mostly self-contained and should be accessible to readers who are not familiar with CSFT. The solutions of the stubbed theory are shown to obey a quadratic integral equation in section <ref>. We then show, using similar reasoning, that the solutions to CSFT obey a similar equation in section <ref>. We conclude the paper and discuss future prospects in section <ref>.
In appendix <ref> we report the local coordinates for the generalized hyperbolic three-vertex of <cit.>. Finally we compute certain systolic volumes using the recursion and the bootstrap methods of <cit.> in appendix <ref> and <ref> respectively.
§ PRELIMINARIES
In this section we review hyperbolic geometry and the construction of CSFT. For a mathematically-oriented introduction to hyperbolic geometry refer to <cit.> and for the basics of CSFT there are many excellent reviews <cit.>. We follow <cit.> for our discussion of hyperbolic vertices.
§.§ Hyperbolic surfaces and Teichmüller spaces
Let S_g,n be an orientable closed two-dimensional marked surface of genus g with n borders that has a negative Euler characteristic[The marking is an orientation-preserving diffeomorphism f : S_g,n→ S_g,n and two markings are equivalent if they are isotopic to each other. The surface together with the isotopy class of the marking [f] is a marked surface.]
-χ_g,n≡ 2g-2+n > 0 .
By the uniformization theorem, such a surface admits metrics with constant negative Gaussian curvature K = -1 with geodesic borders of length L_i ≥ 0 for i=1, ⋯, n. These are hyperbolic metrics with geodesic borders. We denote the space of all such distinct metrics on S_g,n by 𝒯_g,n(L_i) and call it Teichmüller space. This is same as the set of marked (hyperbolic) Riemann surfaces of genus g with n borders. The border is a hyperbolic cusp when its associated length vanishes.
The ordinary moduli spaces of Riemann surfaces ℳ_g,n(L_i) are obtained by forgetting the marking on the surface S_g,n, i.e., by taking the quotient[We always consider ℳ_g,n (L_i) to be compactified in the sense of Deligne-Mumford. This also requires using the so-called augmented Teichmüller space <cit.>, however we are not going to be explicit about this.]
ℳ_g,n (L_i) = 𝒯_g,n(L_i) / MCG(S_g,n) ,
MCG(S_g,n) = Diff^+(S_g,n) / Diff_0^+(S_g,n) ,
where MCG(S_g,n) is the mapping class group of the surface S_g,n. Here Diff^+(S_g,n) is the set of orientation-preserving diffeomorphisms and the 0 subscript denotes those isotopic to the identity. These diffeomorphisms are restricted to act as the identity on the boundary ∂ S_g,n. In other words MCG(S_g,n) is the group of large diffeomorphisms of S_g,n. The action of MCG(S_g,n) on 𝒯_g,n(L_i) produces the hyperbolic structures on S_g,n that are the same up to marking.
The Teichmüller space is somewhat easier to work with relative to the moduli space ℳ_g,n (L_i) since it is simply-connected <cit.>. That is, 𝒯_g,n(L_i) is the universal cover of ℳ_g,n(L_i) and there is a covering map
π : 𝒯_g,n(L_i) →ℳ_g,n(L_i) .
In order to better appreciate this feature we remind the reader that marked Riemann surfaces admit pants decompositions, which are obtained by cutting the surface along a set of 3g-3+n disjoint simple closed geodesics <cit.>. Pants decompositions are far from unique. There are infinitely many such decompositions and they are related to each other by large diffeomorphisms.
Given a pants decomposition, however, any marked surface can be constructed uniquely through specifying the tuple
(ℓ_i, τ_i) ∈ℝ_+^3g-3+n×ℝ^3g-3+n ,
where ℓ_i are the lengths of the seams of the pants and τ_i are the relative twists between them. This provides a particularly nice set of coordinates for Teichmüller space known as Fenchel-Nielsen coordinates. They descend to the moduli spaces by (<ref>), but only locally due to the large diffeomorphisms. It is then apparent that the dimensions of Teichmüller and moduli spaces are given by
_ℂ𝒯_g,n(L_i) = _ℂℳ_g,n(L_i) = 3g - 3 + n ≡ d_g,n .
Note that we present the complex dimensions as these spaces have complex structures <cit.>.
Teichmüller space further admits a Kähler metric known as the Weil-Petersson (WP) metric. The associated symplectic form is the WP form. Its mathematically precise definition is not important for our purposes, but one of its important facets is that the WP form is MCG-invariant so that there is a globally well-defined symplectic form ω_WP over the moduli space ℳ_g,n(L_i). This is reflected in Wolpert's magic formula <cit.>
π^∗ ω_WP = ∑_i=1^3g -3 + n d ℓ_i ∧ d τ_i ,
that presents the pullback of ω_WP under (<ref>) in Fenchel-Nielsen coordinates (ℓ_i, τ_i). Here the “magic” refers to the fact that the right-hand side holds for the Fenchel-Nielsen coordinates associated with any pants decomposition (i.e., its invariance under the action of the MCG). Using ω_WP, it is possible to construct the WP volume form vol_WP on the moduli space ℳ_g,n(L_i)
vol_WP = ω_WP^3g-3+n / (3g-3+n)! , π^∗ vol_WP = ⋀_i=1^3g -3 + n d ℓ_i ∧ d τ_i ,
and compute integrals over moduli spaces.
We are not going to delve into the proofs of these statements. However, we highlight one of the common ingredients that goes into their proofs: the collar lemma <cit.>. This lemma essentially states that two sufficiently short closed geodesics on a hyperbolic surface cannot intersect each other. More precisely, defining a collar 𝒞_γ around a simple closed geodesic γ∈Σ_g,n(L_i) by
𝒞_γ = { p ∈Σ_g,n(L_i) | dist(p, γ) ≤ w/2 , sinh(w/2) sinh(γ/2) = 1 } ,
the collar lemma states that the collars are isometric to hyperbolic cylinders and are pairwise-disjoint for a pairwise-disjoint set of simple closed geodesics. The lengths of the geodesics are always denoted by the same letter and “dist” stands for the distance measured by the hyperbolic metric. One can analogously define half-collars around the geodesic borders and they will be pairwise-disjoint from other (half-)collars. Here w is the width of the collar and it increases as γ decreases.
The last observation, together with the collar lemma, necessitates the lengths of a simple closed geodesic γ and a (not necessarily simple) closed geodesic δ with γ∩δ≠∅ to obey
sinh(γ/2) sinh(δ/2) > 1 .
Defining the maximum threshold length
L_∗ = 2 sinh^-1 1 ,
it is apparent that a simple closed geodesic γ with γ≤ L_∗ cannot intersect with a closed (but not necessarily simple) geodesic δ with δ≤ L_∗.
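As a quick numerical aside (an illustrative sketch with toy lengths, not part of the construction itself), the collar width determined by the relation above and the saturation of the intersection bound at L_∗ can be tabulated directly:

```python
# Collar widths and the threshold length L* = 2 asinh(1); a minimal
# numerical illustration with toy geodesic lengths.
import math

def collar_width(l):
    """Width w of the embedded collar around a simple closed geodesic of
    length l, obtained from sinh(w/2) * sinh(l/2) = 1."""
    return 2.0 * math.asinh(1.0 / math.sinh(l / 2.0))

L_star = 2.0 * math.asinh(1.0)
print(f"L* = {L_star:.4f}")                  # ~ 1.7627

for l in (0.5, 1.0, L_star):
    print(f"l = {l:.4f} -> w(l) = {collar_width(l):.4f}")

# Two intersecting closed geodesics satisfy sinh(g/2) sinh(d/2) > 1; at
# g = d = L* the product equals exactly 1, so two closed geodesics of
# length <= L* can never intersect.
print(math.sinh(L_star / 2.0) ** 2)          # -> 1.0
```

Note that w(L_∗) = L_∗: at the threshold length the collar is exactly as wide as the geodesic is long.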
§.§ String vertices
Now we return our attention to the construction of CSFT. The central geometric ingredient is string vertices. They can be succinctly expressed as the formal sum
𝒱 = ∑_g,nħ^g κ^2g-2 + n 𝒱_g,n .
Here the sum runs over non-negative integers g,n with 2g -2 +n > 0 and ħ and κ (string coupling) are formal variables. The objects 𝒱_g,n are (6g - 6 + 2n)-real-dimensional singular chains of the bundle 𝒫_g,n→ℳ_g,n, where ℳ_g,n≡ℳ_g,n(L_i = 0). This bundle encodes a choice of local coordinates without specified global phases around the punctures.
The perturbative consistency of CSFT requires 𝒱 to satisfy the geometric master equation
∂𝒱 = - (1/2) {𝒱 , 𝒱} - ħΔ𝒱 ,
as stated in the introduction. Expanding in ħ, κ, this equation can be seen as a homological recursion relation among 𝒱_g,n. Here ∂ is the boundary operator on the chains, while the anti-bracket {·, ·} and the Laplacian Δ are the following multilinear operations:
* The chain {𝒱_g_1, n_1, 𝒱_g_2, n_2} is the collection of all surfaces constructed by twist-sewing a puncture in a surface belonging to 𝒱_g_1, n_1 to a puncture in a surface belonging to 𝒱_g_2, n_2 by
w_1 w_2 = exp(i θ) where 0 ≤θ < 2 π .
Here w_i are the local coordinates around the sewed punctures. The resulting surfaces are of genus g_1+g_2 and have n_1 + n_2 - 2 punctures.
* The chain Δ𝒱_g,n is the collection of all surfaces constructed by twist-sewing two punctures of the same surface in 𝒱_g,n. The resulting surfaces are of genus g+1 and have n - 2 punctures.
These operations, together with an appropriate notion of dot product on the chains over the moduli space of disjoint Riemann surfaces with a choice of local coordinates around their punctures, form a differential-graded Batalin-Vilkovisky (BV) algebra <cit.>.
Any explicit construction of CSFT requires a solution to (<ref>). In this paper we are concerned with the one provided by hyperbolic geometry <cit.>. The primary idea behind its construction is as follows. We first imagine bordered Riemann surfaces endowed with hyperbolic metrics whose borders are geodesics of length L_i and consider
𝒱^L_g,n(L_i) =
{ Σ∈ℳ_g,n(L_i) | sys(Σ) ≥ L }⊆ℳ_g,n(L_i)
.
Here sys(Σ) is the systole of the surface Σ—the length of the shortest non-contractible closed geodesic non-homotopic to any border. We denote these subsets as the systolic subsets.
We remark that the set 𝒱^L_g,n(L_i) may be empty. For example, the maximum value of the systole in ℳ_2,0 is 2 cosh^-1 ( 1 + √(2)) ≈ 3.06 and it is realized by the Bolza surface <cit.>. Any choice of L greater than this value leads to an empty set. It is clear that taking the threshold length L sufficiently small always leads to a non-empty 𝒱^L_g,n(L_i) since 𝒱^L_g,n(L_i) covers the entire moduli space ℳ_g,n(L_i) as L → 0. In particular the systolic subsets are always non-empty when L ≤ L_∗.
The systolic subsets are used to construct hyperbolic string vertices via grafting semi-infinite flat cylinders to each geodesic border of a surface in 𝒱^L_g,n(L_i)[We use tilde to distinguish systolic subsets before and after grafting. We are not going to make this distinction moving forward unless stated otherwise.]
𝒱̃^L_g,n(L_i) = gr'_∞( 𝒱^L_g,n(L_i) ) where gr'_∞ : ℳ_g,n(L_i) →𝒫_g,n .
The grafting map gr'_∞ naturally endows a bordered surface with local coordinates. In fact this map is a homeomorphism <cit.> and 𝒱^L_g,n is a piece of a section over the bundle 𝒫_g,n→ℳ_g,n as a result.
It can be shown that
𝒱^L = ∑_g,nħ^g κ^2g-2 + n 𝒱^L_g,n (L_i = L) ,
solves (<ref>) when L ≤ L_∗ = 2 sinh^-1 1 by using the collar lemma <cit.>. This essentially follows from noticing that 𝒱^L_g,n consists of surfaces where the length of at least one simple closed geodesic is equal to L and these surfaces are in 1-1 correspondence with those constructed using {·, ·} and Δ. The crucial insight behind this proof was using the corollary of the collar lemma stated in (<ref>), which introduces a nontrivial constraint on the threshold length L. Refer to <cit.> for more details.
§.§ Differential forms over 𝒫_g,n→ℳ_g,n
String vertices in bosonic CSFT are the chains over which the moduli integration is performed while the integrand is constructed using a 2d matter CFT of central charge c=26 together with the bc ghost system whose central charge is c = -26. We call their combined Hilbert space ℋ and consider its subspace ℋ̂ whose elements are level-matched
b_0^- |Ψ⟩ = ( b_0 - b_0) |Ψ⟩ = 0 ,
L_0^- |Ψ⟩ = ( L_0 - L_0) |Ψ⟩ = 0 .
Here b_0 and L_0 are the zero modes of the b-ghost and stress-energy tensor respectively. In this subsection we often follow the discussion in <cit.>.
The natural objects that can be integrated over the singular p-chains are differential p-forms. This requires us to introduce a suitable notion of p-forms over the bundles 𝒫_g,n→ℳ_g,n. There are primarily two ingredients that go into their construction: the surface states and the b-ghost insertions.
The surface states ⟨Σ_g,n| are the elements of the dual Hilbert space (ℋ^∗)^⊗ n that encode the instructions for the CFT correlator over a given Riemann surface Σ_g,n. That is
⟨Σ_g,n | Ψ_1 ⊗⋯⊗Ψ_n
=
⟨Ψ_1(w_1 = 0) ⋯Ψ_n(w_n = 0) ⟩_Σ_g,n ,
for any Ψ_1 , ⋯ , Ψ_n ∈ℋ. Notice ⟨Σ_g,n | contains the local coordinate data around each puncture. They have the intrinsic ghost number 6g-6 and have even statistics. Because of this, it is sometimes useful to imagine ⟨Σ_g,n (L_i) | encodes the CFT path integral over bordered surfaces and Ψ_i are inserted through grafting to provide boundary conditions for them in the context of hyperbolic vertices. We usually keep the border length dependence of the hyperbolic string amplitudes to distinguish them from the generic ones.
Often times we apply the surface states to an element
Ψ_1 ⊗⋯⊗Ψ_n ∈ℋ^⊗ n ,
where Ψ_i is understood to be grafted to the i-th border. However, the order of the borders in the surface states may come permuted in our expressions, i.e., ⟨Σ_g,n(L_σ_i) | for σ∈ S_n is a possibility. In this case we have
⟨Σ_g,n(L_σ_i) | Ψ_1 ⊗⋯⊗Ψ_n
= ϵ(σ) ⟨Σ_g,n(L_σ_i) | Ψ_σ_1⊗⋯⊗Ψ_σ_n ,
from (<ref>). Here ϵ(σ) is the Koszul sign of the permutation σ after commuting string fields. This amounts to replacing ⊗→∧, however we keep the tensor product notation.
Now for the b-ghost insertions, recall that the defining property of the p-forms is that they produce scalars when they are applied to p vectors and are antisymmetric under exchange of these vectors. The surface states don't have this structure—we need to act on them with appropriate b-ghosts so that we construct the relevant (ℋ^∗)^⊗ n-valued p-forms over 𝒫_g,n→ℳ_g,n. So introduce
B_p (V_1, ⋯ V_p) = b (v_1) ⋯ b (v_p) ,
where V_1, ⋯, V_p ∈ T 𝒫_g,n are vector fields over the bundle 𝒫_g,n→ℳ_g,n while
b(v_q) = ∑_i = 1^n [ ∮ dw_i v_q^(i) (w_i) b (w_i) + ∮ dw̄_i v̄_q^(i) (w̄_i) b̄ (w̄_i) ] , q = 1, ⋯, p ,
are given in terms of b-ghosts insertions around the punctures. Here
v_q ∈ T Σ_g,n ,
are the so-called Schiffer vector fields associated with the vector V_q(Σ_g,n) ∈ T_Σ_g,n𝒫_g,n and the expressions v_q^(i) (w_i) are their presentations in the local coordinates w_i around the punctures. We take
∮ dz/z = ∮ dz̄/z̄ = 1 ,
for a contour oriented counterclockwise around z=0. Notice the object B_p is antisymmetric under exchanging V_i's by the anticommutation of the b-ghosts.
The (generalized) Schiffer vector fields are constructed as follows.[Strictly speaking Schiffer vectors arise only due to the change of local coordinates around the punctures while keeping the rest of the transition functions of the surface fixed. So it is better to call the objects v described here generalized Schiffer vectors. Unless stated otherwise, we simply call them Schiffer vectors as well.] Suppose that we have two coordinates z_i and z_j on the surface with a holomorphic transition map
z_i = f_i j (z_j) ,
where z_i and z_j can be uniformizing coordinates and/or local coordinates around punctures. Further suppose the vector V(Σ_g,n) ∈ T_Σ_g,n𝒫_g,n is associated with some infinitesimal deformation of the transition map corresponding a change of the moduli of the surface. We keep the coordinate z_i = f_i j (z_j) fixed while taking f_ij→ f^ϵ_ij and z_j → z_j^ϵ for this deformation. Then
z_i = f^ϵ_ij(z^ϵ_j) ⟹ z^ϵ_j = (f^ϵ_ij)^-1 (z_i) = (f^ϵ_ij)^-1 (f_i j (z_j) ) = z_j + ϵ v^(i) (z_j) .
Here v^(i) (z_j) defines the Schiffer vector field on the surface associated with V. It is regular on the intersection of coordinate patches z_i, z_j but it can develop singularities away from this region. We further have
f_ij^ϵ (z_j) = f_ij^ϵ (z^ϵ_j - ϵ v^(i) (z_j) ) = f_ij^ϵ (z^ϵ_j ) - ϵ (∂ f_ij (z_j)/∂ z_j) v^(i) (z_j) = z_i - ϵ v^(i) (z_i) ,
v^(i) (z_i) = (∂ f_ij (z_j)/∂ z_j) v^(i) (z_j) ,
after pushing the vector forward to z_i coordinates. Here we abuse the notation and use the arguments of the Schiffer vectors to implicitly remind us the coordinates in which they are expressed. The superscript on v^(i) (z_j) informs which patch is held fixed.
Given the local coordinates t^a on 𝒫_g,n we have the dependence f_ij (z_j) = f_ij(z_j ; t^a). The variation associated with t^a → t^a + δ t^a (which is also associated with the vectors V_a) is then given by
f_ij (z_j; t^a + δ t^a) = f_ij (z_j; t^a) + δ t^a ∂ f_ij (z_j; t^a)/∂ t^a = z_i + δ t^a ∂ f_ij (z_j; t^a)/∂ t^a .
Using (<ref>) we find
v^(i)_a (z_i) = - ∂ f_ij (z_j; t^a)/∂ t^a = - ∂ f_ij (f_i j^-1 (z_i); t^a)/∂ t^a ,
v^(i)_a (z_j) = - ( ∂ f_ij (z_j; t^a)/∂ z_j )^-1 ∂ f_ij (z_j; t^a)/∂ t^a .
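To make these formulas concrete, here is a small symbolic computation for a hypothetical one-parameter family of transition maps f_ij(z_j; t) = z_j/(1 - t z_j) (a toy choice for illustration only, not a map appearing in our construction):

```python
# Toy symbolic check of the Schiffer vector formula
# v(z_j) = -(df/dz_j)^(-1) df/dt for a hypothetical transition map.
import sympy as sp

z, t = sp.symbols('z t')
f = z / (1 - t * z)                 # toy family, f(z; 0) = z

v = sp.simplify(-sp.diff(f, z) ** -1 * sp.diff(f, t))
print(v)                            # -> -z**2
```

The factors of (1 - tz)^{-2} cancel between the two derivatives, leaving the vector v(z) = -z^2 for every value of t.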
From here it is natural to take Schiffer vectors as 1-forms over 𝒫_g,n and express the operator-valued p-form over 𝒫_g,n (<ref>) as
B_p = b(v_ a_1) d t^a_1∧⋯∧ b(v_a_p) d t^a_p ,
in the coordinates t^a. The b-ghost insertions here can be chosen to act either on coordinates z_i or z_j using the first or second expressions in (<ref>)
b(v_a_q) = [ ∮ d u v^(i)_a_q (u) b (u) + ∮ d ū v̄^(i)_a_q (ū) b̄ (ū) ] , u = z_i, z_j ,
since this is conformally invariant. We point out the Schiffer vectors v^(i) (u) and v̄^(i) (ū) above may not be complex conjugates of each other.
Given these ingredients, we define
⟨Ω^(p)_g,n | = (- 2π i)^- d_g,n⟨Σ_g,n | B_p .
The prefactor is inserted to generate the correct factorization of the amplitudes. It carries an intrinsic ghost number 6g-6-p and has the statistics determined by (-1)^p. The object ⟨Ω^(p)_g,n | is indeed a (ℋ^∗)^⊗ n-valued p-form over the bundle 𝒫_g,n→ℳ_g,n due to the antisymmetry of exchanging b-ghosts. We call ⟨Ω^(6g-6+2n)_g,n | = ⟨Ω_g,n | the string measure. This is the correct integrand for the moduli integrations in string amplitudes.
§.§ Closed string field theory action
Given p-forms over 𝒫_g,n→ℳ_g,n we are now ready to construct the CSFT action and discuss its perturbative consistency <cit.>. The free part of it is given by
S_0,2 [Ψ] = (1/2) ⟨Ψ | c_0^- Q_B | Ψ⟩ ,
where c_0^- = ( c_0 - c_0 ) /2 is the zero mode of the c-ghosts, Q_B is the BRST operator of matter + ghost CFT, and ⟨· | ·⟩ is the ordinary BPZ inner product
⟨Σ_0,2 | Ψ_1 ⊗Ψ_2
= ⟨Ψ_1|Ψ_2 ⟩ =
⟨Ψ_1(z = 0) Ψ_2(z̄ = 0) ⟩ where z̄(z) = 1/z .
Above we have defined ⟨Σ_0,2 | encoding the BPZ product, which carries intrinsic ghost number -6 and is even. In a given basis ϕ_α of ℋ it can be expressed as
⟨Σ_0,2 | = ∑_α ⟨ϕ_α | ⊗⟨ϕ_α^c | ,
| Σ_0,2⟩ = ∑_α | ϕ_α⟩⊗ | ϕ_α^c ⟩ ,
and the conjugate states ϕ_α^c ∈ℋ are given by
⟨ϕ_α^c | ϕ_β⟩ =
(-1)^ϕ_α⟨ϕ_α | ϕ_β^c ⟩ =
δ_αβ ,
where (-1)^ϕ_α denotes the statistics of the state ϕ_α. Notice the conjugate state has the same statistics as ϕ_α and it has ghost number 6-gh(ϕ_α). The associated partitions of the identity operator 𝕀 acting on ℋ are
𝕀 = ∑_α | ϕ_α⟩⟨ϕ_α^c |
= ∑_α (-1)^ϕ_α | ϕ_α^c ⟩⟨ϕ_α | .
By further introducing the symplectic form <cit.>
⟨ω | = ⟨Σ_0,2 | 𝕀⊗ c_0^- ∈ (ℋ^∗)^⊗ 2 ,
the quadratic part of the action (<ref>) can be expressed as
S_0,2 [Ψ] = (1/2) ⟨ω| Ψ⊗ Q_B Ψ ,
since Q_B is cyclic under the symplectic form and the string field Ψ is even in the CSFT master action. The object ⟨ω| has odd statistics and carries intrinsic ghost number -5.
Relatedly, we also introduce the Poisson bivector
|ω^-1⟩ = 𝕀⊗ b_0^- δ(L_0^-) | Σ_0,2⟩
=
∑_α (-1)^ϕ_α | ϕ_α⟩ ⊗ b_0^- δ(L_0^-) | ϕ_α^c ⟩ ,
which inverts the symplectic form in the following sense <cit.>
( ⟨ω | ⊗𝕀) ( 𝕀⊗ |ω^-1⟩)
= b_0^- δ(L_0^-) c_0^- .
Note that b_0^- δ(L_0^-) c_0^- is the projector onto ℋ̂ so it restricts to the identity 𝕀 there. That is,
b_0^- δ(L_0^-) c_0^- | Ψ⟩ = | Ψ⟩ ,
⟨Ψ | c_0^- δ(L_0^-) b_0^- = ⟨Ψ | .
for every Ψ∈ℋ. Furthermore the Poisson bivector is annihilated by b_0^- and L_0^- at both entries, hence
|ω^-1⟩∈ℋ̂^⊗ 2 .
Like ⟨ω|, it has odd statistics and carries intrinsic ghost number 5. The Poisson bivector essentially implements the entirety of the twist-sewing (<ref>), thanks to the combination of these properties <cit.>.
Finally, it is also useful to introduce a (odd, ghost number 5) bivector that implements the sewing with a particular value of twist τ
| 𝔖 (τ) ⟩ ≡ [ 𝕀⊗ b_0^- exp( 2 π i τ L_0^- / ℓ) ] | Σ_0,2⟩ = ∑_α (-1)^ϕ_α | ϕ_α⟩⊗ b_0^- exp( 2 π i τ L_0^- / ℓ) | ϕ_α^c ⟩∈ℋ^⊗ 2 .
Note that this bivector does not belong to the level-matched space ℋ̂^⊗ 2. Only upon integrating over all twists do we obtain the Poisson bivector
| ω^-1⟩ = ∫_0^ℓ (d τ/ℓ) | 𝔖 (τ) ⟩∈ℋ̂^⊗ 2 ,
so that we restrict to the level-matched states.
We now include the interactions to the action (<ref>). These are defined by integrating the string measure over the string vertices
⟨𝒱_g,n | = ∫_𝒱_g,n⟨Ω_g,n | =
(- 2π i )^-d_g,n∫_𝒱_g,n⟨Σ_g,n | B_6g-6+2n .
We denote the resulting bras by the same letter as the integration chain. Combining them for all g,n produces the interacting CSFT action
S[Ψ] = S_0,2[Ψ] + ∑_g,n (1/n!) ħ^g κ^2g-2+n ⟨𝒱_g,n | Ψ⟩^⊗ n .
Observe that each term in the action is finite individually since the moduli integration (<ref>) doesn't get contributions from the degenerating surfaces. The action S[Ψ] can be shown to be BV quantizable as a result of string vertices 𝒱 solving the geometric master equation (<ref>), refer to <cit.> for details.
A somewhat convenient way of writing the action S[Ψ] is
S[Ψ] = S_0,2[Ψ] + ∑_g,n (1/n!) ħ^g κ^2g-2+n ⟨ω | Ψ⊗ L_g,n-1(Ψ^n-1 ) ,
for which we have defined the string products L_g,n-1 : ℋ^⊗ (n-1)→ℋ
L_g,n-1 = ( 𝕀⊗⟨𝒱_g,n | ) ( | ω^-1⟩⊗𝕀^⊗ (n-1)) .
Using them and their cyclicity under the symplectic form, the classical equation of motion and the gauge transformations can be expressed as
Q_B Ψ + ∑_n=2^∞ (κ^n-1/n!) L_0,n(Ψ^n) = 0 ,
δ_ΛΨ = Q_B Λ + ∑_n=2^∞ (κ^n-1/(n-1)!) L_0,n (Λ, Ψ^n-1) .
Here Ψ and Λ should be restricted to have ghost numbers 2 and 1 respectively. The latter is also taken to have odd statistics.
We see CSFT is simply a result of reverse engineering string amplitudes as far as generic string vertices 𝒱 are concerned. However, as we have argued before, it is natural to anticipate that a smart choice of string vertices can manifest extra information about the theory. This situation already presents itself for open strings: Witten's vertex is superior relative to other string vertices since it makes the theory's underlying associative algebra manifest, and the analytic solution becomes accessible as a result—even though any other choice would have accomplished the BV consistency.
As we shall argue, using hyperbolic vertices 𝒱^L (<ref>) for the action (<ref>) manifests a peculiar recursive structure among its elementary interactions. This structure is topological in nature and somewhat resembles the recursive structure of the stubbed theories <cit.>. On the other hand, some of its features are quite distinct due to the presence of closed Riemann surfaces and their modularity properties.
§ THE RECURSION FOR THE VOLUMES OF SYSTOLIC SUBSETS
Before we discuss the recursion in hyperbolic CSFT it is beneficial to investigate a simpler analog to communicate some of the central ideas behind its construction. In this section, after reviewing Mirzakhani's method for computing the WP volumes of the moduli spaces of bordered Riemann surfaces ℳ_g,n(L_i) <cit.>, we form a recursion among the WP volumes of the systolic subsets 𝒱_g,n^L(L_i) for L ≤ L_∗ using the twisting procedure developed in <cit.>. The twisting allows one to restrict the considerations to the vertex regions of CSFT, which is one of the most crucial ingredients for deriving the recursion for hyperbolic vertices in the next section.
§.§ Mirzakhani's recursion for the volumes of ℳ_g,n(L_i)
We begin by reviewing Mirzakhani's method for recursively computing the WP volumes of the moduli spaces ℳ_g,n(L_i) in order to set the stage for our discussion <cit.>. The central idea behind her method is to replace the original volume integration with another integration over a suitable covering space—at the expense of introducing measure factors in the integrand that compensate the overcounting when working on the covering space.
Let us demonstrate this procedure by considering a toy example provided in <cit.>. Imagine we have a circle S^1 described by the interval [0,1] with identified endpoints. We would like to evaluate
I = ∫_0^1 f(x) dx ,
where f(x) is a periodic integrable function f(x) = f(x + 1) on S^1. This integral can be replaced with another integral over the universal cover of the circle S^1, the real line ℝ, by noticing the partition of unity
1 = ∑_k ∈ℤ sin^2 ( π (x-k) ) / ( π^2 (x-k)^2 ) ,
and inserting it into the integrand of I
I = ∫_0^1 1 · f(x) dx = ∫_0^1 ( ∑_k ∈ℤ sin^2 ( π (x-k) ) / ( π^2 (x-k)^2 ) ) f(x) dx
= ∫_0^1 ∑_k ∈ℤ sin^2 ( π (x-k) ) / ( π^2 (x-k)^2 ) f(x-k) dx
= ∑_k ∈ℤ ∫_0^1 sin^2 ( π (x-k) ) / ( π^2 (x-k)^2 ) f(x-k) dx
= ∫_-∞^∞ sin^2 ( π x) / ( π^2 x^2 ) f(x) dx .
Above we have used the periodicity of f in the second line and commuted the sum out of the integral in the third line. Then we replaced the sum and integral with a single integration over ℝ using the translation property of the integrand. For f(x)=1 we plot the integrand of the last integral of (<ref>) in figure (<ref>). As one can see, not only [0,1] contributes to the integral, but there are contributions from all of its images in ℝ accompanied by a suitable measure factor. Summing all of them adds up to integrating 1 over [0,1].
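This unrolling is easy to verify numerically. The sketch below (assuming f(x) = 1) checks both the partition of unity and the final unrolled integral:

```python
# Numerical check of the unrolling identity for f(x) = 1: the kernel
# sin^2(pi x)/(pi x)^2 integrates to 1 over R, and its translates sum
# to 1 pointwise. numpy's sinc(x) equals sin(pi x)/(pi x).
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: np.sinc(x) ** 2, -np.inf, np.inf, limit=300)
print(val)        # -> 1.000..., matching int_0^1 1 dx

# partition of unity at a sample point; the truncated tail decays like
# 1/k^2, so the sum equals 1 up to ~1e-4
x = 0.37
print(sum(np.sinc(x - k) ** 2 for k in range(-3000, 3001)))
```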
We can formalize this result. Consider a space M and its covering space N with the projection π: N → M. Imagine the top form ε on M and its pullback π^∗ε on N. Also imagine a function h on N. This can be pushed forward to M by
π_∗ h (x) = ∑_y ∈π^-1{ x } h(y) ,
assuming the sum converges. Then we see
∫_Mπ_∗ h ·ε = ∫_N h ·π^∗ε ,
following the reasoning in (<ref>). There, for instance, M=S^1, N = ℝ, ε = dx and h(x) = sin^2(π x) f(x) / (π^2 x^2). We note the nontrivial step of this method is coming up with the function h in the cover N from a seed function π_∗ h on the original manifold M, i.e., identifying the appropriate partition of unity like in (<ref>).
Remarkably, Mirzakhani found a way to calculate the volumes of ℳ_g,n(L_i) by identifying the appropriate partitions of unity via the hyperbolic geometry of pair of pants <cit.> and using (<ref>) recursively, refer to appendix D of <cit.> for a derivation accessible to a physicist. Before we illustrate this procedure let us introduce the functions which we call the Mirzakhani kernels
D_L_1 L_2 L_3 = 2 log[ ( exp(L_1/2) + exp((L_2 + L_3)/2) ) / ( exp(-L_1/2) + exp((L_2 + L_3)/2) ) ] ,
T_L_1 L_2 L_3 = log[ ( cosh(L_3/2) + cosh((L_1 + L_2)/2) ) / ( cosh(L_3/2) + cosh((L_1 - L_2)/2) ) ] ,
R_L_1 L_2 L_3 = L_1 - log[ ( cosh(L_2/2) + cosh((L_1 + L_3)/2) ) / ( cosh(L_2/2) + cosh((L_1 - L_3)/2) ) ] .
They have the symmetry properties
D_L_1 L_2 L_3 = D_L_1 L_3 L_2 ,
T_L_1 L_2 L_3 = T_L_2 L_1 L_3 ,
D_L_1 L_2 L_3 + T_L_1 L_2 L_3 = L_1 - T_L_1 L_3 L_2
= R_L_1 L_2 L_3
and satisfy
∂ D_L_1 L_2 L_3/∂ L_1 = H_L_2+L_3, L_1 ,
∂ R_L_1 L_2 L_3/∂ L_1 = (1/2) (H_L_3, L_1+L_2 + H_L_3, L_1-L_2 ) ,
where
H_L_1,L_2 = [ 1 + exp((L_1+L_2)/2) ]^-1 + [ 1 + exp((L_1-L_2)/2) ]^-1 .
We can express these kernels compactly as
D_L_1 L_2 L_3 = F_L_1 - L_2 - L_3 - F_-L_1 - L_2 - L_3 ,
R_L_1 L_2 L_3 = (1/2) ( F_L_1 + L_2 - L_3 + F_L_1 - L_2 - L_3 - F_-L_1 + L_2 - L_3 - F_-L_1 - L_2 - L_3 ) ,
by introducing
F_L = 2 log[ 1 + exp(L/2) ] .
Some of these identities are going to be useful later.
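Since these kernels reappear throughout, it is convenient to have them in code. The sketch below implements D, T, R via F and spot-checks the identities above at a random point (a numerical illustration rather than a proof):

```python
# The Mirzakhani kernels and their identities, checked numerically.
import numpy as np

def F(L):
    return 2.0 * np.log(1.0 + np.exp(L / 2.0))

def D(L1, L2, L3):
    return F(L1 - L2 - L3) - F(-L1 - L2 - L3)

def T(L1, L2, L3):
    return np.log((np.cosh(L3 / 2) + np.cosh((L1 + L2) / 2)) /
                  (np.cosh(L3 / 2) + np.cosh((L1 - L2) / 2)))

def R(L1, L2, L3):
    return 0.5 * (F(L1 + L2 - L3) + F(L1 - L2 - L3)
                  - F(-L1 + L2 - L3) - F(-L1 - L2 - L3))

rng = np.random.default_rng(0)
L1, L2, L3 = rng.uniform(0.1, 3.0, size=3)
assert np.isclose(D(L1, L2, L3), D(L1, L3, L2))            # D symmetry
assert np.isclose(T(L1, L2, L3), T(L2, L1, L3))            # T symmetry
assert np.isclose(D(L1, L2, L3) + T(L1, L2, L3), R(L1, L2, L3))
assert np.isclose(L1 - T(L1, L3, L2), R(L1, L2, L3))
print("kernel identities verified at a random point")
```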
In terms of the kernels D and R, the relevant partition of the length L_1 for a Riemann surface with n borders of length L_i is given by the Mirzakhani–McShane identity <cit.>
L_1 = ∑_{γ, δ}∈𝒫_1 D_L_1γδ
+∑_i = 2^n ∑_γ∈𝒫_i R_L_1L_i γ .
Here the sets 𝒫_i are certain collections of simple geodesics. For i=1, it is the set of pairs of simple closed internal geodesics that bound a pair of pants with the first border L_1, and for i = 2, ⋯, n, it is the set of simple closed internal geodesics that bound a pair of pants with the first and i-th borders L_1, L_i. We often imagine these pants are “excised” from the surface as shown in figure <ref> to interpret various different terms in the expressions below.
The volumes of the moduli spaces ℳ_g,n(L_i) are given by the integrals
V ℳ_g,n(L_i) = ∫_ℳ_g,n(L_i) vol_WP = ∫_ℳ_g,n(L_i) ω_WP^3g-3+n / (3g-3+n)! ,
for which we take V ℳ_0,3(L_i)= 1 to set the units. This definition is implicitly taken to be divided by 2 for (g,n) = (1,1) due to the presence of a nontrivial ℤ_2 symmetry <cit.>. This simplifies the expressions below.
It is possible to derive the following recursion relation by substituting the partition of identity (<ref>) to the integrand of (<ref>) for 2g -2 + n > 1 and n ≥ 1 to obtain
L_1 · V ℳ_g,n (L_i) = ∑_i=2^n ∫_0^∞ℓ d ℓ R_L_1L_iℓ V ℳ_g,n-1(ℓ, 𝐋∖{L_i })
+ (1/2) ∫_0^∞ℓ_1 d ℓ_1 ∫_0^∞ℓ_2 d ℓ_2 D_L_1 ℓ_1 ℓ_2 [ V ℳ_g-1, n+1 (ℓ_1, ℓ_2, 𝐋) + ∑_stable V ℳ_g_1, n_1 (ℓ_1, 𝐋_1) · V ℳ_g_2, n_2 (ℓ_2, 𝐋_2) ] ,
by “unrolling” the integral over the relevant fibers in the covering space, see <cit.> for mathematical details. Here
𝐋 = {L_2, ⋯, L_n } , 𝐋_1 ∪𝐋_2 = 𝐋 , 𝐋_1 ∩𝐋_2 = ∅ ,
and the “stable” in (<ref>) denotes the sum over all g_1,g_2,n_1,n_2,𝐋_1, 𝐋_2 that satisfy
g_1 + g_2 = g , n_1 + n_2 = n + 1 , 2g_1 - 2 + n_1 > 0 , 2g_2 - 2 + n_2 > 0 .
The pictorial representations of various terms in the relation (<ref>) along with their associated pants excisions are given in figure <ref>. The first few of the volumes are given in table <ref>. Observe that it is possible to ignore the first term in the second line of (<ref>) (i.e., nonseparating D term) as long as surfaces of genus zero are concerned.
We note the formula (<ref>) doesn't apply to 1-bordered tori directly and this case requires special treatment. Nevertheless, it can be still found using the partition of unity (<ref>)
V ℳ_1,1 (L_1) = (1/2) (1/L_1) ∫_0^∞ℓ d ℓ D_L_1 ℓℓ · V ℳ_0,3(L_1, ℓ, ℓ) = π^2/12 + L_1^2/48 ,
and together with Vℳ_0,3(L_i) = 1, the recursion (<ref>) can be used to find the volumes for the rest of the moduli spaces of surfaces with at least one border. We highlight the presence of the extra 1/2 in front of this equation: this comes from the ℤ_2 symmetry of one-bordered tori.
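The value of V ℳ_1,1(L_1) is easy to confirm by quadrature. The sketch below uses a toy border length L_1 = 2 and a finite cutoff in place of the infinite upper limit (the tail of the integrand is exponentially suppressed):

```python
# Quadrature check: (1/2)(1/L1) int_0^inf l D(L1, l, l) dl reproduces
# pi^2/12 + L1^2/48 for the one-bordered torus.
import numpy as np
from scipy.integrate import quad

def D(L1, L2, L3):
    return 2.0 * np.log((np.exp(L1 / 2) + np.exp((L2 + L3) / 2)) /
                        (np.exp(-L1 / 2) + np.exp((L2 + L3) / 2)))

L1 = 2.0
val, _ = quad(lambda l: l * D(L1, l, l), 0.0, 50.0)
print(0.5 * val / L1)                    # -> 0.9058...
print(np.pi ** 2 / 12 + L1 ** 2 / 48)    # -> 0.9058...
```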
Finally, we point out the formula, known as the dilaton equation,
(2g-2+n) V ℳ_g,n(L_i) = (1/2 π i) V ℳ_g,n+1' (L_i, 2 π i) ,
can be used for the volumes of the moduli spaces of surfaces without borders <cit.>. Here the prime stands for the derivative with respect to the border length L_n+1.
§.§ The recursion for the volumes of 𝒱_g,n^L(L_i)
In this subsection we generalize Mirzakhani's recursion discussed in the previous subsection to the recursion for the WP volumes of the systolic subsets 𝒱_g,n^L(L_i). Only the cases with L_i = L are relevant for CSFT, but as we shall see, it will be necessary to consider L_i ≠ L to construct the desired recursion. We always take L ≤ L_∗≡ 2 sinh^-1 1 for reasons that are going to be apparent soon, unless stated otherwise.
The volumes of the systolic subsets are given by
V 𝒱_g,n^L(L_i) = ∫_𝒱_g,n^L(L_i) vol_WP
=∫_ℳ_g,n(L_i) 1_𝒱_g,n^L(L_i) ·vol_WP .
Here 1_𝒱_g,n^L(L_i) is the indicator function for the region 𝒱_g,n^L(L_i), that is
1_𝒱_g,n^L(L_i) (Σ) =
0 for sys(Σ) < L
1 for sys(Σ) ≥ L
.
The volume V 𝒱_g,n^L(L_i) can be understood as integrating the indicator function over the moduli space with respect to the WP volume form vol_WP. The definition (<ref>) is implicitly divided by 2 for (g,n) = (1,1) like before.
It is beneficial to have a "regularized" representation for the indicator function 1_𝒱_g,n^L(L_i) by considering its slight generalization <cit.>. Let S(Σ) denote the set of simple closed "short" geodesics γ of Σ∈ℳ_g,n(L_i) with γ < L. For L ≤ L_∗ the curves in the set S(Σ) cannot intersect by the collar lemma (<ref>), hence
|S(Σ)| ≤ 3g - 3 + n < ∞ ,
and the following holds
Ω_t(Σ) ≡ (1+t)^|S(Σ)| = ∑_k=0^|S(Σ)| \binom{|S(Σ)|}{k} t^k = ∑_μ⊆ S(Σ) t^|μ| = ∑_c ∈ M(Σ) ∏_γ∈π_0(c) t θ(L - γ ) ,
for t ∈ℝ, as each sum and product is finite. Here μ⊆ S(Σ) is a collection of short geodesics on Σ, and the second equality follows from the binomial theorem. The collection μ can be empty.
We also define M(Σ) to be the set of primitive multicurves on the surface Σ. A multicurve c is defined as the homotopy class of one-dimensional submanifolds on Σ whose components are not homotopic to borders. Primitive means that no two disjoint components of c ∈ M(Σ) are homotopic to each other. A primitive multicurve can be thought of as the union of pairwise-disjoint simple closed geodesics on the surface Σ. From this definition (<ref>) follows directly after employing the step functions. Finally π_0(c) stands for the set of components of the multicurve c.
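The finite combinatorics behind the identity above can be checked directly by enumerating subsets. In the sketch below S(Σ) is a hypothetical list of short-geodesic lengths (toy values):

```python
# Finite check of Omega_t = (1+t)^{|S|} = sum over subsets mu of t^{|mu|};
# at t = -1 it collapses to the indicator function of sys(Sigma) >= L.
from itertools import combinations

def omega(t, S):
    return sum(t ** len(mu)
               for k in range(len(S) + 1)
               for mu in combinations(S, k))

S = [0.4, 0.9, 1.3]            # toy short-geodesic lengths, all < L
print(omega(0.5, S), (1 + 0.5) ** len(S))   # 3.375  3.375
print(omega(-1.0, S))    # 0.0: a short geodesic exists, surface excluded
print(omega(-1.0, []))   # 1.0: empty S, i.e. sys(Sigma) >= L
```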
The equation (<ref>) then shows
1_𝒱_g,n^L(L_i) (Σ) = lim_t → -1Ω_t(Σ)
= Ω_-1(Σ)
= ∑_c ∈ M(Σ) ∏_γ∈π_0(c) (-1) θ(L - γ ) ,
since the product and sum are finite. The indicator function, and its regularized version, are MCG-invariant because they only depend on the set S(Σ). This is also manifest in (<ref>) since the terms in the sum map to each other under large diffeomorphisms. Based on this analysis the volumes (<ref>) take the form
V 𝒱_g,n^L(L_i)
= ∫_ℳ_g,n(L_i) Ω_-1·vol_WP
= ∫_ℳ_g,n(L_i) ∑_c ∈ M(Σ) ∏_γ∈π_0(c) (-1) θ(L - γ ) ·vol_WP .
As before, we need to find an appropriate partition of the function Ω_-1(Σ) (<ref>) and apply (<ref>). However we now have a somewhat complicated integrand that involves a sum over the multicurves, rather than a mere constant, and using the Mirzakhani-McShane identity (<ref>) wouldn't be sufficient as it is. We can handle this situation by classifying the multicurves c appropriately and refining (<ref>) with respect to this classification. This would lead to the desired recursion à la Mirzakhani.
Let us investigate this in more detail. We can think of the multicurves c as decomposition of the surface Σ into a disjoint collection of connected surfaces. Having done this, let us isolate the connected surface Σ_1 which contains the border L_1. Then we have the following categorization of multicurves:
* Those having a component γ that bounds a pair of pants with the border L_1 and L_i. The surface Σ_1 = P_γ i is a pair of pants in this case.
* Those having two components γ and δ that together with the border L_1 bound a pair of pants. The surface Σ_1 = P_γδ is a pair of pants in this case as well.
* Those such that Σ_1 is not a pair of pants.
We remind that each multicurve c may contain any number of components on the remaining surface Σ∖Σ_1. These distinct situations are shown in figure <ref>.
Now we evaluate the sum over M(Σ) in (<ref>) for each of these cases. For the first case we find the contribution to the sum over the multicurves is
Ω_-1(Σ)
⊃ - ∑_i=2^n ∑_γ∈𝒫_iθ(L-γ) ∑_c∈ M(Σ∖ P_γ i)∏_δ∈π_0(c) (-1) θ(L-δ)
=-∑_i=2^n ∑_γ∈𝒫_iθ(L - γ) Ω_-1(Σ∖ P_γ i) ,
after singling out the geodesic γ∈𝒫_i that bounds Σ_1=P_γ i in the decomposition (<ref>). Similarly we find the contribution
Ω_-1(Σ) ⊃∑_{γ,δ}∈𝒫_1θ(L- γ) θ(L- δ) Ω_-1(Σ∖ P_γδ) ,
for the second case, after singling both γ and δ. Note that the surface Σ∖ P_γδ can be disjoint in this case. For those cases we implicitly consider the function Ω_-1(Σ∖ P_γδ) to be the product of Ω_-1 associated with these two disjoint surfaces.
The third case is somewhat more involved. We begin by rewriting their total contribution as
Ω_-1(Σ) ⊃1/L_1∑_c∈ M'(Σ) L_1 ∏_γ∈π_0(c) (-1) θ(L-γ) ,
where M'(Σ) is the set of primitive multicurves that excludes the components bounding a pair of pants with the border L_1 whose contributions have already been considered in (<ref>) and (<ref>). This form allows us to use the Mirzakhani–McShane identity (<ref>) for
L_1 =
∑_{γ,δ}∈𝒫_1^(1) D_L_1 γδ +
∑_i=2^m_1∑_γ∈𝒫_i^(1) R_L_1 L_i γ +
∑_i=m_1+1^m ∑_γ∈𝒫_i^(1) R_L_1 L_i γ ,
in the summand (<ref>). This partition is associated with the surface Σ_1 determined by c ∈ M' (Σ) and the superscript on the set 𝒫_i^(1) is there to remind us that we only consider the relevant pair of pants over the surface Σ_1, which is assumed to have m borders. We split the contribution of the R-terms into two: the first m_1 of the borders are assumed to be borders of Σ, while the rest are the components of the relevant multicurve c, see the second and third surfaces in figure <ref>.
After this substitution we commute the sums over 𝒫_i^(1) in (<ref>) with the sum over M'(Σ) in (<ref>). Note that this is allowed since M'(Σ) is a finite set. To do this, we point out that there are three independent cases for a simple closed geodesic γ to bound a pant with the border L_1 in Σ_1:
* The geodesic γ bounds a pant P_γδ with the border L_1 and some other internal simple closed geodesic δ of Σ_1. These are not components of the multicurve c ∈ M'(Σ) by design.
* The geodesic γ bounds a pant P_γ i with the border L_1 and L_i of Σ_1, which is also a border of Σ. This border is not a component of a multicurve c ∈ M'(Σ) by design.
* The geodesic γ bounds a pant P_γδ with the border L_1 and δ of Σ_1, which is not a border of Σ. This border is a component of c ∈ M'(Σ) by design.
These can be associated with the three sums in (<ref>), also refer to figure <ref>. Note that the geodesic γ above is not a component of a multicurve by the construction. These cases yield the total contributions
Ω_-1(Σ) ⊃ (1/L_1) ∑_{γ,δ}∈𝒫_1 D_L_1 γδ Ω_-1(Σ∖ P_γδ) ,
Ω_-1(Σ) ⊃ (1/L_1) ∑_i=2^n ∑_γ∈𝒫_i R_L_1 L_i γ Ω_-1(Σ∖ P_γ i) ,
Ω_-1(Σ) ⊃ - (1/L_1) ∑_{γ, δ}∈𝒫_1 [ R_L_1 γδ θ(L-γ) + R_L_1 δγ θ(L-δ) ] Ω_-1(Σ∖ P_γδ) ,
respectively after summing over the multicurves c ∈ M'(Σ) following the reasoning around (<ref>). Notice the symmetrization over γ and δ in the last equation after the sum is performed. This is due to {γ, δ}∈𝒫_1 being an unordered set, while both of the situations where only one of them belongs to c ∈ M'(Σ) contributes.
Combining the exhaustive contributions (<ref>), (<ref>), and (<ref>), we find the desired partition
L_1 ·Ω_-1(Σ) = ∑_i=2^n ∑_γ∈𝒫_i R̄_L_1 L_i γ Ω_-1(Σ∖ P_γ i) + ∑_{γ, δ}∈𝒫_1 D̄_L_1γδ Ω_-1(Σ∖ P_γδ) ,
for which we defined the twisted Mirzakhani kernels
R̄_L_1 L_2 L_3 = R_L_1 L_2 L_3 - L_1 θ(L - L_3) ,
D̄_L_1 L_2 L_3 = D_L_1 L_2 L_3 - R_L_1 L_2 L_3 θ(L - L_2 ) - R_L_1 L_3 L_2 θ(L - L_3 ) + L_1 θ(L - L_2 ) θ(L - L_3 ) .
We highlight that the symmetries in (<ref>) are also satisfied and taking L→ 0 reduces them to the Mirzakhani kernels. Similarly the partition (<ref>) reduces to (<ref>) in this limit.
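For later convenience, the twisted kernels can also be put in code (a sketch; the convention θ(0) = 1 matches the step function used in the text, and the printed checks confirm the L → 0 limit and the L_2 ↔ L_3 symmetry of D̄):

```python
# Twisted Mirzakhani kernels; F, D, R as defined before.
import numpy as np

def F(x): return 2.0 * np.log(1.0 + np.exp(x / 2.0))
def D(L1, L2, L3): return F(L1 - L2 - L3) - F(-L1 - L2 - L3)
def R(L1, L2, L3):
    return 0.5 * (F(L1 + L2 - L3) + F(L1 - L2 - L3)
                  - F(-L1 + L2 - L3) - F(-L1 - L2 - L3))

theta = lambda x: np.heaviside(x, 1.0)      # theta(0) = 1

def R_bar(L1, L2, L3, L):
    return R(L1, L2, L3) - L1 * theta(L - L3)

def D_bar(L1, L2, L3, L):
    return (D(L1, L2, L3)
            - R(L1, L2, L3) * theta(L - L2)
            - R(L1, L3, L2) * theta(L - L3)
            + L1 * theta(L - L2) * theta(L - L3))

print(np.isclose(R_bar(2.0, 1.0, 1.5, 0.0), R(2.0, 1.0, 1.5)))           # True
print(np.isclose(D_bar(2.0, 1.0, 1.5, 1.2), D_bar(2.0, 1.5, 1.0, 1.2)))  # True
```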
Upon inserting the partition (<ref>) to (<ref>) and exactly repeating Mirzakhani's analysis in <cit.> we conclude for 2g -2 + n > 1 and n ≥ 1 the systolic volumes satisfy a recursion relation among themselves
L_1 · V 𝒱^L_g,n (L_i) = ∑_i=2^n ∫_0^∞ℓ d ℓ R̄_L_1L_iℓ V 𝒱^L_g,n-1(ℓ, 𝐋∖{L_i })
+ (1/2) ∫_0^∞ℓ_1 d ℓ_1 ∫_0^∞ℓ_2 d ℓ_2 D̄_L_1 ℓ_1 ℓ_2 [ V 𝒱^L_g-1, n+1 (ℓ_1, ℓ_2, 𝐋) + ∑_stable V 𝒱^L_g_1, n_1 (ℓ_1, 𝐋_1) · V 𝒱^L_g_2, n_2 (ℓ_2, 𝐋_2) ] ,
whenever L ≤ L_∗. Like before, we take V 𝒱^L_0,3 (L_i) = V ℳ_0,3 (L_i) = 1 to set the units. We are going to discuss the case (g,n)=(1,1) shortly.
The first few volumes V 𝒱^L_g,n (L_i) are listed in table <ref>. The results of this systolic recursion can be shown to be independent of the choice of the border L_1 following the arguments for the counterpart statement for (<ref>) <cit.>. The dilaton equation (<ref>) applies to the systolic volumes as well and can be used to find the systolic volumes for the surfaces without borders. We remark in passing that this particular twisting procedure has been used in <cit.> to establish that hyperbolic vertices (<ref>) satisfy the geometric master equation (<ref>), providing an alternative to the argument of <cit.>.
Take note that using step functions θ(L-L_i) wasn't essential for the twisting procedure itself. For instance, we could have formed a recursion similar to (<ref>) for the integrals of the functions <cit.>
ζ (Σ) = ∑_c ∈ M(Σ) ∏_γ∈π_0(c) f(γ) ,
provided f is a sufficiently well-behaved real integrable function. In the twisted Mirzakhani kernels (<ref>) this simply amounts to replacing -θ(L-γ) with f(γ). In particular we can use a suitable family of functions f(γ) that limits to -θ(L-γ), which may be particularly helpful for evaluating (<ref>) efficiently.
In order to get an intuition for the systolic volumes it is instructive to evaluate (<ref>) for the first nontrivial case, (g,n)=(0,4),
V 𝒱^L_0,4 (L_i) = (1/L_1) ∑_i=2^4 ∫_0^∞ℓ d ℓ R̄_L_1L_iℓ = V ℳ_0,4 (L_i) - ∑_i=2^4 ∫_0^L ℓ d ℓ = V ℳ_0,4 (L_i) - (3/2) L^2 = 2 π^2 + (1/2) ∑_i=1^4 L_i^2 - (3/2) L^2 .
Note that for all L_i ≥ 0
V 𝒱^L_0,4 (L_i) ≥ 2 π^2 - (3/2) L^2 > 0 when 0 ≤ L ≤ L_∗ = 2 sinh^-1 1 ,
which shows this systolic volume is always positive in the allowed regime, as it should be.
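This closed form is straightforward to confirm against the recursion integral itself. In the sketch below the border lengths are toy values and the step-function piece of R̄ is integrated in closed form:

```python
# Quadrature check of V V^L_{0,4}(L_i) = 2 pi^2 + (1/2) sum_i L_i^2
# - (3/2) L^2; a finite cutoff replaces the infinite upper limit.
import numpy as np
from scipy.integrate import quad

def F(x): return 2.0 * np.log(1.0 + np.exp(x / 2.0))
def R(L1, L2, L3):
    return 0.5 * (F(L1 + L2 - L3) + F(L1 - L2 - L3)
                  - F(-L1 + L2 - L3) - F(-L1 - L2 - L3))

L = 1.0                           # threshold length, below L* ~ 1.76
Ls = [2.0, 1.0, 1.5, 0.5]         # toy border lengths L_1, ..., L_4
L1 = Ls[0]

total = 0.0
for Li in Ls[1:]:
    val, _ = quad(lambda l: l * R(L1, Li, l), 0.0, 60.0)
    total += val - L1 * L ** 2 / 2.0     # the -L1*theta(L - l) piece
print(total / L1)                                           # ~ 21.989
print(2 * np.pi ** 2 + 0.5 * sum(x ** 2 for x in Ls) - 1.5 * L ** 2)
```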
Let us illustrate another way to perform the same computation. Recall that we exclude surfaces whose systoles are smaller than L from the systolic subset 𝒱^L_0,4 (L_i). This means we need to subtract the WP volume associated with the region
0 < ℓ≤ L ≤ L_∗ , 0 ≤τ < ℓ ,
from the the WP volume V ℳ_0,4 (L_i) as part of evaluating 𝒱^L_0,4 (L_i). Here (ℓ, τ) are the Fenchel-Nielsen coordinates determined by a particular way to split a four-bordered sphere into two pairs of pants with an internal simple closed geodesic ℓ.
The volume of this excluded region is L^2/2 since the MCG acts freely there, i.e., it always maps surfaces outside of (<ref>)—there isn't any multiple counting of surfaces. Recall that the effect of MCG is to exchange geodesics on the surface with possible twists. The restriction on the twist τ has already eliminated the possibility of the Dehn twists. Further, two short geodesics never intersect on the surface by the collar lemma (<ref>), so short geodesics always get exchanged with the longer ones, i.e., the MCG takes us outside of the region (<ref>). For the additional multiplication by 3, notice the decomposition (<ref>) is just one way to split a four-bordered sphere: there are two other distinct channels to perform a similar decomposition. So one should subtract 3L^2/2 from the total volume to obtain (<ref>). We emphasize again that these regions can't intersect by the collar lemma and there was no oversubtraction in our results.
Now we turn our attention to (g,n)=(1,1) required to run (<ref>). This case requires special attention like (<ref>). The partition of Ω_-1 for a one-bordered torus Σ is
L_1 ·Ω_-1(Σ) = ∑_γ∈𝒫_1 [ D_L_1 γγ - L_1 θ(L-γ) ] Ω_-1(Σ∖ P_γγ) .
This can be argued similarly to (<ref>) after observing there is only one geodesic γ that has been cut, which is either part of a primitive multicurve (the second term) or isn't (the first term), refer to figure <ref>. Using this partition we then find
V 𝒱^L_1,1 (L_1) = (1/2) (1/L_1) ∫_0^∞ℓ d ℓ [ D_L_1 ℓℓ - L_1 θ(L- ℓ) ] · V 𝒱^L_0,3(L_1, ℓ, ℓ) = V ℳ_1,1(L_1) - (1/4) L^2 .
This result can be alternatively argued from the geometric perspective similar to V 𝒱^L_0,4 (L_i). We also have
V 𝒱^L_1,1 (L_1) ≥ π^2/12 - (1/4) L^2 > 0 when 0 ≤ L ≤ L_∗ = 2 sinh^-1 1 ,
for all L_1 ≥ 0 and this volume is positive in the allowed range.
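The analogous quadrature check for the one-bordered torus (again with toy values of L and L_1, and a finite cutoff for the integral) reproduces V ℳ_1,1(L_1) - L^2/4:

```python
# Quadrature check of V V^L_{1,1}(L1) = pi^2/12 + L1^2/48 - L^2/4.
import numpy as np
from scipy.integrate import quad

def D(L1, L2, L3):
    return 2.0 * np.log((np.exp(L1 / 2) + np.exp((L2 + L3) / 2)) /
                        (np.exp(-L1 / 2) + np.exp((L2 + L3) / 2)))

L, L1 = 1.0, 2.0
val, _ = quad(lambda l: l * D(L1, l, l), 0.0, 50.0)
print(0.5 * (val - L1 * L ** 2 / 2.0) / L1)          # -> 0.6558...
print(np.pi ** 2 / 12 + L1 ** 2 / 48 - L ** 2 / 4)   # -> 0.6558...
```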
For the higher-order cases the systolic volumes are somewhat more complicated. We subtract the volumes of the regions with short geodesics from the total volume of the moduli space; however, the MCG action is not necessarily free in these subtracted regions, and the evaluation of their volumes requires suitable weighting of the integrand by Mirzakhani kernels as a result. The computations for some sample systolic volumes are given in appendix <ref>.
We can further crosscheck the results for V 𝒱_1,1(L_1) and V 𝒱_0,4(L_i) by numerically integrating the WP metric derived using the Polyakov conjecture <cit.>. The dependence of V 𝒱^L_1,1(L_1) on L for L_1 = 0 and L_1 = π/2, along with their fits to the quadratic polynomial
f(L) = c_1 + c_2 L^2 ,
are shown in figure <ref> for instance. It is apparent that we obtain consistent results. For the details of this computation refer to appendix <ref>.
As a final remark in this section, we highlight the systolic recursion (<ref>) simplifies upon restricting to genus 0 surfaces. Like in Mirzakhani's recursion, we don't need to consider the first term in the second line of (<ref>). This means for n > 3
L_1 · V 𝒱^L_0,n (L_i) = ∑_i=2^n ∫_0^∞ℓ d ℓ R̄_L_1L_iℓ V 𝒱^L_0,n-1(ℓ, 𝐋∖{L_i })
+ (1/2) ∫_0^∞ℓ_1 d ℓ_1 ∫_0^∞ℓ_2 d ℓ_2 D̄_L_1 ℓ_1 ℓ_2 ∑_stable V 𝒱^L_0, n_1 (ℓ_1, 𝐋_1) · V 𝒱^L_0, n_2 (ℓ_2, 𝐋_2) .
This again corresponds to considering only the first and second types of pants excisions in figure <ref>.
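As a proof of concept, the genus-zero recursion can be iterated numerically. The sketch below evaluates V 𝒱^L_0,5(L_i) from the closed form for V 𝒱^L_0,4 obtained above (assuming that closed form for all border lengths appearing in the integrand; the infinite integrals are replaced by finite cutoffs and the step discontinuities are left to the adaptive quadrature, so this is a low-precision estimate rather than a production implementation):

```python
# Minimal driver for the genus-zero systolic recursion at n = 5.
import numpy as np
from scipy.integrate import quad, dblquad

def F(x): return 2.0 * np.log(1.0 + np.exp(x / 2.0))
def R(L1, L2, L3):
    return 0.5 * (F(L1 + L2 - L3) + F(L1 - L2 - L3)
                  - F(-L1 + L2 - L3) - F(-L1 - L2 - L3))
def D(L1, L2, L3): return F(L1 - L2 - L3) - F(-L1 - L2 - L3)

th = lambda x: 1.0 if x >= 0 else 0.0          # Heaviside, th(0) = 1
def R_bar(L1, L2, L3, L): return R(L1, L2, L3) - L1 * th(L - L3)
def D_bar(L1, L2, L3, L):
    return (D(L1, L2, L3) - R(L1, L2, L3) * th(L - L2)
            - R(L1, L3, L2) * th(L - L3) + L1 * th(L - L2) * th(L - L3))

def V04(L, borders):                           # closed form derived above
    return 2 * np.pi ** 2 + 0.5 * sum(x ** 2 for x in borders) - 1.5 * L ** 2

def V05(L, Ls, cut=40.0):
    L1, rest = Ls[0], Ls[1:]
    pts = [L] if 0.0 < L < cut else None
    total = 0.0
    # R-term: excise a pair of pants containing the borders L_1 and L_i
    for i, Li in enumerate(rest):
        others = [x for j, x in enumerate(rest) if j != i]
        val, _ = quad(lambda l: l * R_bar(L1, Li, l, L) * V04(L, [l] + others),
                      0.0, cut, points=pts, limit=200)
        total += val
    # D-term: six ordered 2+2 splittings of the remaining four borders,
    # each factor V V^L_{0,3} = 1, with the overall 1/2 from the recursion
    val, _ = dblquad(lambda l2, l1: l1 * l2 * D_bar(L1, l1, l2, L),
                     0.0, cut, 0.0, cut)
    total += 0.5 * 6 * val
    return total / L1

Ls = [2.0, 1.0, 1.5, 0.5, 1.0]
print(V05(0.0, Ls))    # L -> 0 limit: reproduces V M_{0,5}(L_i)
print(V05(1.0, Ls))    # systolic volume at threshold L = 1
```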
Remember it is possible to take 0 < L ≤∞ for string vertices in the classical CSFT <cit.>. It is then natural to ask whether the condition on the threshold length, L ≤ L_∗, can also be eliminated for (<ref>). However, this doesn't follow from our construction. Clearly we can't take 0 < L ≤∞ for the general border lengths: taking L too large while keeping L_i small would eventually make systolic volumes negative, see table <ref>. So the bound for them should persist in general. Although we can argue it can be relaxed to
0 < L ≤ 2 L_∗ = 4 sinh^-11 ≈ 3.53 ,
since two intersecting closed geodesics always traverse each other at least twice. We point out that (<ref>) remains positive with this weaker bound, but (<ref>) doesn't apply anymore. It is still within the realm of possibility that (<ref>) holds when the border lengths L_i scale with L while some of them are taken equal, however. We comment on this possibility more in the discussion section <ref>.
§ THE RECURSION FOR HYPERBOLIC STRING VERTICES
We now turn our attention to deriving the recursion relation satisfied by the elementary vertices of hyperbolic CSFT. We remind that they are given by integrating the string measure over the systolic subsets[Strictly speaking, only the amplitudes ⟨𝒱_g,n(L_i) | with L_i = L for 0<L≤ L_∗ form string vertices. However we also denote the cases with generic L_i and L as string vertices and/or elementary interactions for brevity.]
⟨𝒱_g,n(L_i) | =
∫_𝒱^L_g,n (L_i) ⟨Ω_g,n (L_i) |
= ∫_ℳ_g,n (L_i)1_𝒱_g,n^L(L_i) ·⟨Ω_g,n (L_i) |
and the resulting recursion would be among ⟨𝒱_g,n (L_i) | ∈ (ℋ^∗)^⊗ n. It is useful to introduce
⟨ω_g,n (L_i) | = 1_𝒱_g,n^L(L_i) ·⟨Ω_g,n (L_i) | ,
in this section. The indicator function 1_𝒱_g,n^L(L_i) is already given in (<ref>).
The integrations (<ref>) above take place in the moduli space of Riemann surfaces with geodesic borders of length L_i, ℳ_g,n (L_i). In the context of CSFT, on the other hand, it is often considered that such integrations take place in the sections of moduli space of punctured Riemann surfaces endowed with suitable local coordinates around the punctures 𝒫_g,n. In hyperbolic CSFT, however, they are equivalent since both the string measure (and the local coordinate data it contains) and the regions of integration are defined by the bordered Riemann surfaces through grafting. More precisely, gr_∞' ℳ_g,n (L_i) is a section of 𝒫_g,n since gr_∞' is a homeomorphism <cit.>. We are performing the integration over this section of 𝒫_g,n.
§.§ b-ghost insertions for the Fenchel-Nielsen deformations
In our arguments we are going to need to excise pairs of pants from the surface states. For this we ought to understand the Schiffer vectors corresponding to the Fenchel-Nielsen deformations. So consider a Riemann surface Σ_g,n(L_i) together with its pants decomposition. The lengths ℓ_i and the twists τ_i of the seams of the pants define local coordinates over the moduli space (<ref>). Let us denote the Schiffer vectors corresponding to the coordinate vectors / τ_i and / ℓ_i by u_i and v_i respectively. In order to derive their expression, and subsequently the associated b-ghost insertions, we need to find the transition functions between two coordinate patches for pants after twisting and increasing the length of the seams. Similar analysis has been considered with different levels of detail in <cit.>, however we are going to be more explicit.
Let z_L, z_R be the coordinate patches on the left and right pants L, R adjacent to the i-th seam, of length ℓ_i, to which a semi-infinite flat cylinder is grafted. They are related to each other by (observe figure <ref>)
w_L(z_L; L^(L)_j) w_R(z_R; L^(R)_j) = e^2 π iτ_i / ℓ_i
z_L = w_L^-1( e^2 π iτ_i / ℓ_i w_R(z_R ; L^(R)_j) ; L^(L)_j
) = f_LR (z_R ; L^(L)_j, L^(R)_j)
,
where w_L, w_R are the local coordinates for the generalized hyperbolic three-string vertex around the puncture at z=0, whose expression is given in appendix <ref>. The formula above is just a consequence of performing the sewing fixture (<ref>) for the i-th seam. We importantly highlight that the definition of coordinates depends on the lengths of the borders of the left (L^(L)_j) and right (L^(R)_j) pairs of pants with j=1,2,3. However it doesn't depend on the relative twist τ_i between them. We take L^(L)_1 = L^(R)_1 = ℓ_i.[If the surface is a one-bordered torus, we take z = z_L = z_R and w_R= w_R(z) to be the local coordinate around z=1 instead. We also specify the border lengths by L^(L)_j = L^(R)_j= (ℓ_i, ℓ_i, L_3).]
The aim now is to find the Schiffer vectors v_i and u_i and their associated b-ghost insertions based on (<ref>). We begin by investigating the twist deformation τ_i →τ_i + δτ_i while keeping all other coordinates the same. Using (<ref>) we simply find
u_i(w_R) = - (2 π i/ℓ_i) w_R .
The corresponding b-ghost insertion according to (<ref>) is then given by
b(u_i) = - (2 π i/ℓ_i) [
∮ dw_R w_R b(w_R) - ∮ dw̄_R w̄_R b̄(w̄_R)
] = - (2 π i/ℓ_i) b_0^- ,
applied to the states inserted in the local coordinates w_R around the puncture z_R=0. It has the same expression in w_L by L ↔ R symmetry. This is a well-known result from the ordinary factorization analysis <cit.>.
Now we return our attention to the variation ℓ_i →ℓ_i + δℓ_i. This is slightly more involved than the twist deformations since the definitions of the local coordinates themselves depend on the pair of pants. Nonetheless we can express the associated Schiffer vector v_i as a sum of the following vectors using (<ref>)
v_i = v^(L)_i(w_L) + v^(R)_i(w_R) + (2 π i τ_i/ℓ_i^2) w_R ,
where
v^(L)_i (w_L) = - ∂ w_L (z_L(w_L) ; L^(L)_j ) / ∂ℓ_i .
This holds similarly for L ↔ R and their anti-holomorphic counterparts.
We can understand the reasoning behind v_i (<ref>) as follows. When we vary the length of the seam ℓ_i, the local coordinate w_L (w_R), as a function of z_L (z_R) changes, see (<ref>). Keeping the coordinates w_L, w_R and their identification (<ref>) fixed, the vector v^(L)_i (v^(R)_i) is then simply one part of the Schiffer vector v_i. Clearly, the parts associated with L and R add up since we consider infinitesimal changes. Finally we have another part associated with changing ℓ_i in the identification (<ref>). However, this produces a b_0^- insertion like in (<ref>), which would vanish upon multiplying the twist's insertion b(u_i) (<ref>). These together produce the Schiffer vector v_i.
The nonvanishing part of the b-ghost insertion for ℓ_i →ℓ_i + δℓ_i is then given by
b(v_i) = b(v^(L)_i) + b(v^(R)_i) ,
where
b(v^(L)_i)
= - ∮ d w_L ( ∂ w_L (z_L (w_L) ; L^(L)_j) / ∂ℓ_i ) b(w_L)
- ∮ d w̄_L ( ∂ w̄_L (z̄_L (w̄_L) ; L^(L)_j) / ∂ℓ_i ) b̄(w̄_L)
= (1/2π) [
(1/ρ_1) (∂ρ_1/∂λ_1) ( b_0 + b̄_0 )
+ ( λ_1 ( λ_2^2 - λ_3^2) / ((1+ λ_1^2)^2 ρ_1) )
( b_1 + b̄_1 )
+ ⋯] ,
for which we take
2 πλ_j = L_j^(L) ,
2 πλ_1 = ℓ_i ,
to simplify the presentation. Here ρ_1 = ρ_1 (λ_j) is the mapping radius associated with the local coordinate around the puncture z_L =0, see (<ref>). This b-ghost insertion (<ref>) acts on the left pair of pants but similar considerations apply after exchanging L ↔ R for the right one.
The same b-ghost insertion applies to changing the border length of any pair of pants. This has already been presented in (<ref>), which we report here again for L_i > 0
⟨Σ_0,3( L_1, L_2 , L_3 )| ( 𝔅⊗𝕀⊗𝕀)
=
⟨Σ_0,3( L_1, L_2 , L_3 )|
[ (1/2π) (
(1/ρ_1) (∂ρ_1/∂λ_1) ( b_0 + b̄_0 )
+ ( λ_1 (λ_2^2 - λ_3^2) / ((1+ λ_1^2)^2 ρ_1) )
( b_1 + b̄_1 )
+ ⋯) ⊗𝕀⊗𝕀]
.
In these general situations we denote b(v_i) by 𝔅. There may be multiple applications of 𝔅 to the surface states if we consider deforming the lengths of multiple borders simultaneously. In such cases the 1,2,3 labels in 𝔅 should be permuted accordingly and 𝔅 should act on the border whose length is deformed.
We emphasize strongly that the 𝔅 insertion depends on all of the border lengths of a given pair of pants—not just on the border whose length has shifted. This is quite unorthodox, especially when contrasted with the ordinary factorization analysis for off-shell amplitudes <cit.>. For the latter the associated b-ghost insertion b_0^- b_0^+ doesn't depend on the details of the surface it acts on whatsoever. It takes a universal form. The b-ghost insertion due to the Fenchel-Nielsen deformation 𝔅 b_0^-, on the other hand, naturally depends on the particular pants decomposition.
One may worry that such dependence may obstruct factorizing generic off-shell amplitudes by requiring “global” knowledge of the surface. However, in the view of (<ref>) and (<ref>), we can decompose the b-ghost insertions into two disjoint parts where each part exclusively depends on either the left or right pant—but not both of them simultaneously. So after cutting the surface along the i-th seam any dependence on the opposite pants disappears. While they still depend on the pants decomposition itself, the parts of the b-ghost insertions can be separated in a way that the knowledge of the opposite part of the surface is irrelevant at least. This will be sufficient for our purposes.
§.§ The recursion for hyperbolic vertices
In the previous subsection we have derived the b-ghost insertions associated with the Fenchel-Nielsen deformations (<ref>). Using such insertions we can factorize the string measure ⟨Ω_g,n (L_i) | according to the different types of pants excisions shown in figure <ref>. They take the form (twists are taken respect to the L pants):
* R-term for the i-th border:
⟨Ω_g,n(L_i)| = d ℓ∧ (d τ/ℓ) ∑_ϵ = ± [
⟨𝒱_0,3(L_1, L_i, ϵℓ) |
⊗⟨Ω_g, n - 1(- ϵℓ, 𝐋∖{ L_i }) |
] | 𝔖 (τ) ⟩ .
* Nonseparating D-term:
⟨Ω_g,n(L_i)| = d ℓ_1 ∧ d ℓ_2 ∧ (d τ_1/ℓ_1) ∧ (d τ_2/ℓ_2)
∑_ϵ_1,ϵ_2 = ± [
⟨𝒱_0,3(L_1, ϵ_1 ℓ_1, ϵ_2 ℓ_2) |
⊗⟨Ω_g-1, n + 1(- ϵ_1 ℓ_1, - ϵ_2 ℓ_2, 𝐋 ) |
] | 𝔖 (τ_1) ⟩_1 | 𝔖 (τ_2) ⟩_2 .
* Separating D-term:
⟨Ω_g,n(L_i)| = d ℓ_1 ∧ d ℓ_2 ∧ (d τ_1/ℓ_1) ∧ (d τ_2/ℓ_2)
∑_ϵ_1,ϵ_2 = ±[
⟨𝒱_0,3(L_1, ϵ_1 ℓ_1, ϵ_2 ℓ_2) |
⊗∑_stable⟨Ω_g_1,n_1(-ϵ_1 ℓ_1, 𝐋_1) | ⊗⟨Ω_g_2,n_2(-ϵ_2 ℓ_2, 𝐋_2) |
] | 𝔖 (τ_1) ⟩_1 | 𝔖 (τ_2) ⟩_2 .
These results can be established following a similar analysis to <cit.> while being mindful about various signs, for example (<ref>). We point out this decomposition only holds locally on ℳ_g,n (L_i). Above we have adopted the conventions stated in (<ref>) and (<ref>)
⟨𝒱_0,3 ( L_1, L_2, L_3) | ≡⟨Σ_0,3( |L_1|, |L_2| , |L_3| )|
( 𝔅^⊗ θ(-L_1)⊗𝔅^⊗ θ(-L_2)⊗𝔅^⊗ θ(-L_3)) ,
⟨Ω_g,n ( L_1, ⋯, L_n) | ≡⟨Ω_g,n (|L_1|, ⋯, |L_n|) | (𝔅^⊗ θ(-L_1)⊗⋯⊗𝔅^⊗ θ(-L_n)) ,
of having negative lengths. Since the 𝔅 insertion either acts on the left or right of the seam (but not both at the same time), having a negative length interpretation is quite natural in this context and is associated with the orientation of the pants with respect to each other.
Observe that we insert the non-level-matched bivector | 𝔖 (τ) ⟩ (<ref>) to the entries of the bras in (<ref>), which are supposed to act only on the level-matched states (<ref>). This means that factorizations above come with global phase ambiguities. As we shall see shortly, however, this ambiguity is going to disappear for the final expression.[The same ambiguity also presents itself in the ordinary factorization analysis of the off-shell amplitudes with the string propagator before summing over the twists and disappear only afterward. We observe an analogous behavior. For the recent attempts of relaxing level-matching condition (<ref>) in CSFT, see <cit.>.]
Now, we can promote (<ref>) to the partition of the hyperbolic string measure (<ref>) as
|L_1| ·⟨ω_g,n (L_i) | =
∑_i=2^n ∑_γ∈𝒫_i∑_ϵ_γ = ± d γ∧ (d τ_γ/γ) [ ⟨ℜ( L_1, L_i, ϵ_γγ) | ⊗⟨ω_g, n - 1(- ϵ_γγ, 𝐋∖{ L_i }) |
] | 𝔖(τ_γ) ⟩_γ
+
∑_{γ, δ}∈𝒫_1∑_ϵ_γ, ϵ_δ = ±
d γ∧ d δ∧ (d τ_γ/γ) ∧ (d τ_δ/δ) [
⟨𝔇( L_1, ϵ_γγ, ϵ_δδ) |
⊗(
⟨ω_g-1, n + 1(- ϵ_γγ, - ϵ_δδ, 𝐋 ) |
+∑_stable⟨ω_g_1,n_1(-ϵ_γγ, 𝐋_1) | ⊗⟨ω_g_2,n_2(-ϵ_δδ, 𝐋_2) |
)
] | 𝔖(τ_γ) ⟩_γ | 𝔖(τ_δ) ⟩_δ ,
in light of (<ref>). Above we have collected the cubic parts into the string kernels (<ref>)
⟨ℜ(L_1,L_2,L_3) | = R_|L_1| |L_2| |L_3| ⟨𝒱_0,3(L_1,L_2,L_3)| ,
⟨𝔇(L_1,L_2,L_3) | = D_|L_1| |L_2| |L_3| ⟨𝒱_0,3(L_1,L_2,L_3)| .
There is an implicit dependence on the threshold length L in these objects.
Upon integrating (<ref>) over the moduli space ℳ_g,n(L_i) and repeating the unrolling trick (<ref>) for evaluating such integrals, we obtain the desired recursion relation for 2g-2+n > 1 and n ≥ 1 (refer to figure <ref>)
|L_1| ·⟨𝒱_g,n (L_i) | = ∑_i=2^n ∫_-∞^∞ d ℓ [ ⟨ℜ (L_1, L_i, ℓ) | ⊗ ⟨𝒱_g, n-1 (-ℓ, 𝐋∖{L_i }) |
] |ω^-1⟩
+ (1/2) ∫_-∞^∞ d ℓ_1 ∫_-∞^∞ d ℓ_2 [⟨𝔇 (L_1, ℓ_1, ℓ_2) | ⊗ (⟨𝒱_g-1,n+1 (-ℓ_1, -ℓ_2, 𝐋 )|
+
∑_stable⟨𝒱_g_1,n_1 (-ℓ_1, 𝐋_1 ) | ⊗ ⟨𝒱_g_2,n_2 (-ℓ_2, 𝐋_2 ) |
)
] | ω^-1⟩_1 | ω^-1⟩_2 ,
given that the string amplitudes should be independent of the particular marking on the surface. Above we have evaluated the twist integrals, taken to be running from 0 to ℓ's, using (<ref>). After such integration the global phase ambiguity mentioned below (<ref>) disappears since only the level-matched states appear in the decomposition. This is the main result of this paper. Similar logic can be used to establish the following identity for the one-bordered torus (see (<ref>))
|L_1| ·⟨𝒱_1,1 (L_1) | = (1/2) ∫_-∞^∞ d ℓ (
D_|L_1| |ℓ| |ℓ| - | L_1| ·θ(L - |ℓ|)
) ⟨𝒱_0,3(L_1, ℓ,-ℓ) |
(𝕀⊗ | ω^-1⟩) .
We again included 1/2 due to the nontrivial ℤ_2 symmetry.
A few remarks are in order here. We first highlight that (<ref>) relates hyperbolic vertices with each other iteratively in the negative Euler characteristic -χ_g,n = 2g - 2 +n. This makes it a topological recursion. Furthermore, it contains more information on the nature of vertices compared to the ordinary loop L_∞ relations satisfied by CSFT interactions <cit.>. The latter follow from the constraint satisfied by the boundaries of string vertices ∂𝒱 as a consequence of the geometric master equation—they don't provide any information on the behavior of the interior of 𝒱 whatsoever. On the other hand (<ref>) also relates the interiors.
As mentioned earlier, a similar recursive formulation has been carried out by Ishibashi <cit.>. Despite its inspiring features, such as its connection with the Fokker-Planck formalism, Ishibashi's off-shell amplitudes don't factorize according to the prescription provided by covariant CSFT for which the Feynman diagrams contain flat cylinders. This obstructs their field theory interpretation and degenerating behavior of the amplitudes. Our recursion overcomes these particular issues by relating ⟨𝒱_g,n (L_i) |, for which the string measure is integrated over the systolic subsets 𝒱^L_g,n(L_i).
Precisely speaking, the difference mentioned above follows from the distinct choices of kernels for (<ref>)—contrast with the equation (3.20) in <cit.>. The impact of this modification particularly presents itself when the lengths of the seams become smaller than the threshold L ≤ L_∗. For example
R(L_1, L_2, ℓ) = L_1 -
( sinh(L_1/2) / ( cosh(L_1/2) + cosh(L_2/2) ) ) ℓ + 𝒪 (ℓ^2) ,
while in the twisted version the leading term L_1 disappears and the expansion holds for 0 ≤ℓ≤ L (<ref>). Similar expressions can be worked out for D(L_1, L_2, L_3).
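As a quick sanity check, this linearization can be compared against the exact kernel numerically. Below is a minimal Python sketch using the explicit Mirzakhani kernel that also appears in the sample computations of appendix <ref>; the border lengths are illustrative values, not distinguished choices:

```python
# Sanity check of the small-ℓ expansion of the Mirzakhani kernel R(L1, L2, ℓ).
import numpy as np

def R(L1, L2, ell):
    c = np.cosh(L2 / 2)
    return L1 - np.log((c + np.cosh((L1 + ell) / 2)) /
                       (c + np.cosh((L1 - ell) / 2)))

L1, L2 = 1.3, 0.7                         # illustrative border lengths
slope = -np.sinh(L1 / 2) / (np.cosh(L1 / 2) + np.cosh(L2 / 2))
for ell in [1e-1, 1e-2, 1e-3]:
    # the two columns should agree up to O(ell^2)
    print(ell, R(L1, L2, ell) - L1, slope * ell)
```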
Thanks to the twist (<ref>), we subtract the Feynman region contributions while maintaining the recursive structure, resulting in a recursion for hyperbolic CSFT. However these subtracted contributions are not ordinary Feynman diagrams that contain flat propagators: rather they are hyperbolic amplitudes as well. As an example, consider (g,n)=(0,4) in (<ref>). From the expansion (<ref>) the subtracted term is
∫_-L^L d ℓ [ ⟨𝒱_0,3 (L_1, L_2, ℓ) |
⊗⟨𝒱_0,3 (-ℓ, L_3, L_4) | ] | ω^-1⟩ ,
for one of the channels. This is the hyperbolic contribution from this channel's associated Feynman region. Upon subtracting the other channel's contributions one is left with purely the vertex region as desired.[Even though the final result is finite, this is not necessarily the case for the individual subtracted terms, due to the |ℓ| → 0 regime. They may need analytic continuation.] The form of the subtraction for the higher-order vertices would be more complicated in a similar way to systolic volumes.
The recursion relation (<ref>) works for any bordered hyperbolic surface, but it can also be used for obtaining the vacuum vertices ⟨𝒱_g,0 | ∈ℋ^⊗ 0 for g ≥ 2 using the dilaton theorem <cit.>. The dilaton theorem in the context of hyperbolic CSFT states
⟨𝒱_g,n+1 (L_i, L_n+1 = 2 π i) |
( 𝕀⊗⋯⊗𝕀_n times ⊗ | D ⟩)
= (2g-2+n) ⟨𝒱_g,n (L_i) | ,
where
|D ⟩ = ( c_1 c_{-1} - c̄_1 c̄_{-1} ) | 0 ⟩ ,
is the ghost-dilation. Taking n=0 in (<ref>) and using (<ref>) for ⟨ A_g,1 (2 π i) | D ⟩ leads to the desired recursion.
Observe, however, D is not a (0,0)-primary and (<ref>) implicitly contains a specific prescription for its insertions. In (<ref>) the ghost-dilaton is grafted to the border of formal length L_k+1 = 2 π i.[This corresponds to the border degenerating to a cone point with the opening angle 2 π and becoming a regular point on a surface with one less border <cit.>.] For this, the choice of local coordinates is made according to section (2.2) of <cit.> using the regular hyperbolic metric on the surface. Since the other borders are geodesics, there are no contributions from the integrals of c_0^-, see <cit.>.
Clearly (<ref>) can be restricted to apply for only genus 0 surfaces like in (<ref>). We report this case for the completeness (n>3)
|L_1| ·⟨𝒱_0,n (L_i) | = ∑_i=2^n ∫_-∞^∞ d ℓ [ ⟨ℜ (L_1, L_i, ℓ) | ⊗ ⟨𝒱_0,n-1 (-ℓ, 𝐋∖{L_i }) |
] |ω^-1⟩
+ 1 2∫_-∞^∞ d ℓ_1 ∫_-∞^∞ d ℓ_2 [⟨𝔇 (L_1, ℓ_1, ℓ_2) | ⊗ ∑_stable⟨𝒱_0,n_1 (-ℓ_1, 𝐋_1 ) | ⊗ ⟨𝒱_0,n_2 (-ℓ_2, 𝐋_2 ) |
] | ω^-1⟩_1 | ω^-1⟩_2 .
We finally highlight that (<ref>) holds for every quantum hyperbolic CSFT, given that the former works for L ≤ L_∗, which is the regime for which the quantum hyperbolic vertices 𝒱^L obey the geometric master equation (<ref>) <cit.>. The condition on L can be relaxed slightly for (<ref>), but it can't be entirely disposed of as in (<ref>). But recall that classical CSFT is consistent for any choice of L and it is in principle possible to relate different choices of L through field redefinitions. An open question is the fate of (<ref>) for generic classical CSFT.
In a wider context, it is possible to modify string vertices 𝒱 while maintaining that they solve the relevant geometric master equation, which is equivalent to performing field redefinitions in CSFT <cit.>. Under generic field redefinitions the form of the recursion (<ref>) or (<ref>) gets highly obstructed. Nevertheless, its observable consequences should remain the same. This relates back to our emphasis on using the “correct” string vertices: we would like to manifest these special structures as much as possible in order to ease the extraction of physics.
§.§ Recursion as a differential constraint
An alternative and more succinct way to encode (<ref>) and (<ref>) is through a second-order differential constraint on an appropriate generating function. So imagine the string field
Ψ(ℓ) ∈ℋ⊗ℝ , Ψ (ℓ) = Ψ (-ℓ) ,
that has a dependence on a real parameter and introduce the generating function Z[Ψ (ℓ)] and its associated free energy W[Ψ (ℓ)] as a functional of Ψ (ℓ)[This partition function is formal since the integral may diverge around ℓ = 0.]
Z[Ψ (ℓ)] ≡exp[ W[Ψ (ℓ)]]
≡exp[
∑_g,n 1 n! ħ^g-1 κ^2g-2+n∫_-∞^∞ d ℓ_1 ⋯∫_-∞^∞ d ℓ_n ⟨𝒱_g,n (ℓ_i) | ( Ψ (ℓ_1) ⊗⋯⊗Ψ (ℓ_n) )
] .
We always consider Ψ (ℓ) to have even statistics (but arbitrary ghost number) in order to interpret Z[Ψ (ℓ)] as a “partition function”. This is in the same vein as taking the string field even in the BV master action (<ref>). As we shall see this will be sufficient for our purposes.
We further introduce the 2-products A,B,C: ℋ^⊗ 2→ℋ
A_L_1 L_2 L_3 =
[ 𝕀⊗⟨𝒱_0,3 (L_1, L_2, L_3) | ]
[ | ω^-1⟩⊗𝕀⊗𝕀] ,
B_L_1 L_2 L_3 =
[ 𝕀⊗⟨ℜ (L_1, L_2, L_3) | ]
[ | ω^-1⟩⊗𝕀⊗𝕀] ,
C_L_1 L_2 L_3 =
[ 𝕀⊗⟨𝔇 (L_1, L_2, L_3) | ]
[ | ω^-1⟩⊗𝕀⊗𝕀] ,
and the 0-product D: ℋ^⊗ 0→ℋ
D_L_1 =
[ 𝕀⊗⟨𝒱_1,1 (L_1) | ] | ω^-1⟩ .
We point out these products depend on the length of the borders so they are not graded-symmetric. However A,B are symmetric if we also exchange L_2 ↔ L_3 thanks to the symmetry (<ref>). When L_1 = L_2 = L_3 = L, the products A and D become part of the ordinary L_∞ products of CSFT, see (<ref>).
The final object we introduce is “differentiation” with respect to the level-matched string fields
δΨ'(ℓ') δΨ (ℓ) ≡δ | Ψ'(ℓ') ⟩δ( ⟨Ψ (ℓ) | c_0^- ) = δδ( ⟨Ψ (ℓ) | c_0^- ) ( 𝕀⊗⟨Ψ'(ℓ') | ) | Σ_0,2⟩
= δδ( ⟨Ψ (ℓ) | c_0^- ) ( 𝕀⊗⟨Ψ'(ℓ') | c_0^- δ(L_0^-) b_0^- ) | Σ_0,2⟩
=( 𝕀⊗δ [ Ψ'(ℓ') - Ψ (ℓ)] ) | ω^-1⟩∈ℋ^⊗ 2 ,
where we have used in order, (<ref>); (<ref>); (<ref>) and the fact that c_0^- doesn't support a cohomology. Notice the right-hand side contains the delta functional in the space ℋ⊗ℝ. This derivative obeys the Leibniz rule. The effect of acting the derivative on (<ref>) is
δ Z δΨ (ℓ) = δ W δΨ(ℓ) · Z
where
δ W δΨ(ℓ ) =
∑_g,n ħ^g-1 κ^2g-2+n (n-1)! ∫_-∞^∞ d ℓ_2 ⋯∫_-∞^∞ d ℓ_n ( 𝕀⊗⟨𝒱_g,n (ℓ, ℓ_2, ⋯ℓ_n) )
(| ω^-1⟩⊗Ψ (ℓ_2) ⊗⋯⊗Ψ (ℓ_n)
) ,
after applying the Leibniz rule and symmetry of the vertices. Note that taking this derivative produces a state in ℋ.
One can similarly find the second derivative
δ^2 Z δΨ' (ℓ') δΨ (ℓ)
= [
δ^2 W δΨ'(ℓ') δΨ(ℓ) + δ W δΨ'(ℓ') δ W δΨ(ℓ) ] · Z ,
where
δ^2 W δΨ'(ℓ') δΨ(ℓ) =
∑_g,n ħ^g-1 κ^2g-2+n (n-2)! ∫_-∞^∞ d ℓ_3 ⋯∫_-∞^∞ d ℓ_n
( ⟨𝒱_g,n (ℓ', ℓ, ℓ_3, ⋯ℓ_n) | )
| ω^-1⟩_ℓ' | ω^-1⟩_ℓ ( Ψ (ℓ_3) ⊗⋯⊗Ψ (ℓ_n)
) .
This object belongs to ℋ^⊗ 2. Observe how the Poisson bivectors act here; we adopt this notation for lack of a better one.
Upon defining the string field | Ψ (ℓ) ⟩-valued operator
𝒟_Ψ(L_1)≡[ ħ δδΨ (L_1) - κ 2 ∫_-∞^∞ d ℓ_1 ∫_-∞^∞ d ℓ_2
A_L_1 ℓ_1 ℓ_2 (Ψ(ℓ_1), Ψ(ℓ_2))
- ħκ∫_-∞^∞ d ℓ_1 ∫_-∞^∞ d ℓ_2
B_L_1 ℓ_1 ℓ_2(Ψ(ℓ_1), δδΨ(ℓ_2))
- ħ^2 κ 2∫_-∞^∞ d ℓ_1 ∫_-∞^∞ d ℓ_2
C_L_1 ℓ_1 ℓ_2(δ^2 δΨ(ℓ_1) δΨ(ℓ_2))
- ħκ D_L_1] ,
the differential constraint reads
∀ Ψ(L_1) ∈ℋ⊗ℝ 𝒟_Ψ(L_1)· Z[Ψ(ℓ)] =0 .
It is not difficult to establish that this is equivalent to the recursion (<ref>) using the derivatives above. Observe that (<ref>) admits the trivial solution Z = 1, on top of those provided by the hyperbolic amplitudes. This case corresponds to having no interactions in CSFT, which trivially realizes (<ref>).
It is important to point out the uncanny resemblance between (<ref>)-(<ref>) and the differential operators that form quantum Airy structures and the unique function that is annihilated by them <cit.>. This is not coincidental: quantum Airy structures can be used to encode topological recursion as differential equations and we have established the latter already. However the relation between quantum Airy structures and CSFT can be more than what meets the eye initially. For example, turning the logic on its head, one may imagine CSFT as something that is constructed by solving (<ref>) perturbatively. We are going to comment on these points at the end of the paper.
We can also state the recursion restricted to genus 0 surfaces (<ref>) as a first-order differential constraint. This can be simply obtained by taking the classical ħ→ 0 limit of (<ref>)
0 = δ W δΨ (L_1) - κ 2∫_-∞^∞ d ℓ_1 ∫_-∞^∞ d ℓ_2
A_L_1 ℓ_1 ℓ_2(Ψ(ℓ_1), Ψ(ℓ_2) )
- κ∫_-∞^∞ d ℓ_1 ∫_-∞^∞ d ℓ_2
B_L_1 ℓ_1 ℓ_2(Ψ(ℓ_1), δ W δΨ(ℓ_2))
- κ 2∫_-∞^∞ d ℓ_1 ∫_-∞^∞ d ℓ_2
C_L_1 ℓ_1 ℓ_2(δ W δΨ(ℓ_1) , δ W δΨ(ℓ_2) ) ,
for all Ψ (L_1) ∈ℋ⊗ℝ. It is further possible to restrict the string fields to be ghost number 2 in this equation. This particular form is going to be useful in section <ref>.
§ COMPARISON WITH A STUBBED THEORY
Before we proceed to investigating the implications of the recursion relation described in the previous section, let us show that similar structures can be exhibited in theories with stubs <cit.> and investigate their consequences to get an intuition for the recursion (<ref>). We work with the classical stubbed cubic scalar field theory of <cit.> for simplicity and adopt its conventions unless stated otherwise. This section is mostly self-contained.
§.§ The stubbed cubic scalar field theory
We consider the real scalar field theory in D dimensions
I[ϕ] = ∫ d^D x ( - (1/2) ∂_μϕ ∂^μϕ - V(ϕ) ) ,
with the cubic potential
V(ϕ) = - (μ^2/2!) ϕ^2 + (κ/3!) ϕ^3 .
Here μ^2 > 0 is the mass parameter and κ is the coupling constant. This potential has an unstable perturbative vacuum at ϕ = 0 and the stable nonperturbative tachyon vacuum ϕ = ϕ_∗ residing at
ϕ_∗ = 2 μ^2/κ ,
V(ϕ_∗) = - (2/3) (μ^6/κ^2) .
The squared mass of the linear fluctuations around this vacuum are V”(ϕ_∗) = μ^2 > 0.
We would like to deform this theory by stubs <cit.>. Focusing on the zero-momentum sector of the theory for simplicity, i.e., spacetime-independent configurations, we find the potential of the theory deforms to
V(ϕ; Λ) = - (1/2!) μ^2 ϕ^2
+ (1/3!) κ (e^{Λμ^2/2} ϕ)^3
- (1/4!) 3 κ^2 ( (e^{Λμ^2} - 1)/μ^2 ) (e^{Λμ^2/2} ϕ)^4
+ ⋯
= - (1/2!) μ^2 ϕ^2
+ ∑_n=3^∞ (1/n!) b_{n-2} κ^{n-2} ( (1 - e^{Λμ^2})/μ^2 )^{n-3} (e^{Λμ^2/2} ϕ)^n ,
after including stubs of length Λ / 2. The positive integers b_n are the number of unordered rooted full binary trees with n+1 labeled leaves, whose formula is given by a double factorial
b_n = (2 n - 1)!! = (2n)!/(2^n n!) = 1 × 3 × 5 ×⋯× (2 n - 1) ,
for n ≥ 1 and it is 1 when n=0. The first few terms are
b_n = 1, 1,3,15, 105, ⋯ .
These numbers come from the combinatorics of the Feynman diagrams. Stubs instruct us to treat the Feynman diagrams whose propagator's proper time is shorter than Λ as an elementary interaction. That means we need to include each topologically distinct Feynman diagram with these “partial propagators” in the potential. Our conventions in (<ref>) and (<ref>) suggest that we consider labeled Feynman diagrams.[In other words we perform the homotopy transfer with the partial propagator in the context of the homotopy Lie algebras instead of their associative counterpart like in <cit.>.]
We point out the form of this stubbed potential is slightly different from <cit.>. Nevertheless they can be related to each other after setting
κ^here = 2 g^there ,
compare (<ref>) with equation (2.2) of <cit.>, since
(1/n!) b_{n-2} 2^{n-2}
= (1/n!) ( (2n - 4)! / (2^{n-2} (n-2)!) ) 2^{n-2}
= (1/n) ( (2n-4)! / ((n-1)! (n-2)!) ) = (1/n) C_{n-2} ,
where C_n are the Catalan numbers. Plugging this into (<ref>) one indeed obtains the stubbed potential of <cit.>, see equation (2.24). Taking this slightly different convention would help us to draw parallels with the topological recursion presented in the previous section more directly.
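These combinatorial statements are straightforward to verify directly; a minimal Python check:

```python
# Check the double factorial values of b_n, the identity b_{n+1} = (2n+1) b_n,
# and the Catalan relation (1/n!) b_{n-2} 2^{n-2} = C_{n-2}/n quoted above.
from fractions import Fraction
from math import comb, factorial

def b(n):
    # number of unordered rooted full binary trees with n+1 labeled leaves
    return factorial(2 * n) // (2**n * factorial(n))   # = (2n-1)!!

def catalan(n):
    return comb(2 * n, n) // (n + 1)

assert [b(n) for n in range(5)] == [1, 1, 3, 15, 105]
assert all(b(n + 1) == (2 * n + 1) * b(n) for n in range(10))
assert all(Fraction(b(n - 2) * 2**(n - 2), factorial(n)) == Fraction(catalan(n - 2), n)
           for n in range(3, 12))
print("b_n identities verified")
```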
Resumming the expansion (<ref>) gives
V(ϕ; Λ) = - (μ^2/2) ϕ^2 ( e^{Λμ^2} f_3(x_3) - 1 )/( e^{Λμ^2} - 1 ) ,
where
f_3(x) = ( 6 x - 1 + (1-4x)^{3/2} )/( 6x^2 ) ,
x_3 = - (κ/2) ( (e^{Λμ^2} - 1)/μ^2 ) e^{Λμ^2/2} ϕ ,
and the nonperturbative vacuum is shifted exponentially far away
ϕ_∗(Λ) = (2 μ^2/κ) exp( Λμ^2/2 ) ,
in the stubbed theory while the depth of the potential remains the same. We also point out the expansion (<ref>) in ϕ has the radius of convergence
r(Λ) = (μ^2/2κ) e^{-3Λμ^2/2}/(1 - e^{-Λμ^2}) ,
due to the fractional power in (<ref>).
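The resummation, the exponentially shifted vacuum, and the unchanged depth can all be verified symbolically. The following sympy sketch does so for illustrative parameter values (μ = 1, κ = 1/2, Λ = 3/10 are assumptions, not distinguished choices):

```python
# Verify the resummed potential against the stub expansion, the critical
# point φ_*(Λ), and the invariant depth -2μ^6/(3κ^2). A minimal sketch.
import sympy as sp

phi = sp.symbols('phi')
mu, kap, Lam = 1.0, 0.5, 0.3                       # illustrative values
E, Eh = sp.exp(Lam * mu**2), sp.exp(Lam * mu**2 / 2)

def b(n):                                          # (2n-1)!! with b(0) = 1
    return sp.factorial(2 * n) / (2**n * sp.factorial(n))

N = 8                                              # truncation order
V_series = -mu**2 / 2 * phi**2 + sum(
    b(n - 2) / sp.factorial(n) * kap**(n - 2)
    * ((1 - E) / mu**2)**(n - 3) * (Eh * phi)**n for n in range(3, N + 1))

f3 = lambda x: (6 * x - 1 + (1 - 4 * x)**sp.Rational(3, 2)) / (6 * x**2)
x3 = -kap / 2 * (E - 1) / mu**2 * Eh * phi
V_closed = -mu**2 / 2 * phi**2 * (E * f3(x3) - 1) / (E - 1)

# the two forms agree order by order up to the truncation
diff = sp.expand(sp.series(V_closed - V_series, phi, 0, N + 1).removeO())
assert all(abs(float(diff.coeff(phi, k))) < 1e-6 for k in range(N + 1))

phi_star = 2 * mu**2 / kap * Eh
assert abs(float(sp.diff(V_closed, phi).subs(phi, phi_star))) < 1e-9
assert abs(float(V_closed.subs(phi, phi_star)) + 2 * mu**6 / (3 * kap**2)) < 1e-9
print("resummation, critical point, and depth all check out")
```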
§.§ Topological recursion for the stubs
Now we would like to obtain the nonperturbative solution (<ref>) of the stubbed theory using an alternative perspective. From the form of the series in (<ref>) and the identity
b_n+1 = (2n+1) b_n ,
it shouldn't be surprising that there is a recursion among the elementary vertices of the stubbed theory. Introducing
W_n(s_i) =
b_{n-2} ( (1 - e^{Λμ^2})/μ^2 )^{n-3} ∏_i=1^n exp( s_i μ^2/2 ) ,
s_i ≥ 0 ,
one can form the topological recursion
W_n(s_i) =∑_i=2^n ∫_0^∞ dt r_s_1 s_i t W_n-1( t, 𝐬∖{ s_i } )
+
1 2 ∫_0^∞ dt_1 ∫_0^∞ dt_2 ∑_stable d_s_1 t_1 t_2
W_n_1 (t_1, 𝐬_1) W_n_2 (t_2, 𝐬_2) ,
for n > 3, where the stub kernels are given by
r_s_1 s_2 s_3 = (-1 + θ(s_3 - Λ) ) a_s_1 s_2 s_3
d_s_1 s_2 s_3 = (1- θ(s_2 - Λ)- θ(s_3 - Λ) + θ(s_2 - Λ) θ(s_3 - Λ) ) a_s_1 s_2 s_3 ,
and
a_s_1 s_2 s_3 = W_3(s_1, s_2, s_3) = ∏_i=1^3 exp( s_i μ^2/2 ) .
Notice the functions W_n are totally symmetric in their arguments, whereas the stub kernels r and d have the expected counterparts of the symmetry properties (<ref>). Comparing with the twisted kernels (<ref>) we can roughly identify Λ∼ 1/L.
The expression (<ref>) can be derived by the recursive structure of the labeled binary trees in figure <ref>. From the labeled binary trees we read the identity
b_n = (1/2) ∑_i=1^n \binom{n+1}{i} b_{i-1} b_{n-i}
= ∑_i=2^n+2 b_{n-1} + (1/2) ∑_stable b_n_1 b_n_2 ,
which can be also derived using (<ref>). Together with (<ref>), it is possible to include the rest of the terms in (<ref>) through integrals like in (<ref>). We run the bounds of integration from 0 to ∞, which requires introducing step functions as in (<ref>). So we see the stubbed scalar theory obeys a (classical) topological recursion (<ref>) with the twisted kernels (<ref>).[Here we considered the “classical” recursion for simplicity, however similar considerations can also be applied to the “quantum” recursion after one considers cubic graphs instead of trees.]
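The binomial identity seeding this recursion can itself be checked directly; a minimal sketch:

```python
# Check the labeled-binary-tree identity b_n = (1/2) Σ_i C(n+1, i) b_{i-1} b_{n-i}.
from math import comb, factorial

def b(n):
    return factorial(2 * n) // (2**n * factorial(n))   # (2n-1)!!

for n in range(1, 12):
    rhs = sum(comb(n + 1, i) * b(i - 1) * b(n - i) for i in range(1, n + 1)) // 2
    assert rhs == b(n)
print("tree identity holds up to n = 11")
```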
It is useful to look at two limiting cases of (<ref>). First, we see that Λ = 0 trivializes the recursion, W_n(s_i) = 0 for n > 3 and W_3(s_i) = 1, which corresponds to having the polynomial and manifestly local formulation. Such a limit doesn't exist in the stringy analog (<ref>) since there is an upper bound on L ∼ 1/Λ (<ref>). This makes sense: the ordinary CSFT can't admit a polynomial and local formulation <cit.>. On the opposite end, Λ→∞, we see (<ref>) becomes well-defined upon analytic continuation. The stringy analog of this case is Ishibashi's recursion <cit.>.[Here we considered the “classical” recursion for simplicity, however similar considerations can also be applied to the “quantum” recursion after one considers cubic graphs instead of trees.]
We can similarly cast the recursion (<ref>) in the form of a second-order differential constraint. Introducing the following functionals of ϕ(s)
z[ϕ(s)] ≡exp( w[ϕ(s)]) ≡exp( ∑_n=3^∞κ^n-2 n!∫_0^∞ d s_1 ⋯∫_0^∞ ds_n
W_n(s_1, ⋯, s_n) ϕ(s_1) ⋯ ϕ(s_n)
) ,
the relation (<ref>) can be expressed compactly as
0 = δ w δϕ(t_1) - κ 2 ∫_0^∞ dt_2 ∫_0^∞ dt_3 a_t_1 t_2 t_3 ϕ(t_2) ϕ(t_3)
- κ ∫_0^∞ dt_2 ∫_0^∞ dt_3 r_t_1 t_2 t_3 ϕ(t_2) δ w δϕ(t_3)
- κ 2 ∫_0^∞ dt_2 ∫_0^∞ dt_3 d_t_1 t_2 t_3( δ w δϕ(t_2))
( δ w δϕ(t_3)) .
This is the analog of (<ref>). Note that we didn't need to place any constraint like (<ref>) on the value of Λ in contrast to its stringy counterpart—we can have any stub length. Note
V(ϕ; Λ ) = - 1 2! μ^2 ϕ^2 + w [ϕ(s) = ϕ δ(s - Λ)] ,
see (<ref>) and (<ref>).
We remark that (<ref>) was a consequence of the geometric interpretation of stubs in terms of suitable binary trees. If we had used a different way of integrating out UV modes, which is choosing a different field parametrization, this geometric presentation would have been highly obstructed. This is the avatar of what we have discussed for the stringy case at the end of subsection <ref>. But again, the implications of the recursion are supposed to stay the same.
§.§ The implications
Let us now investigate the consequences of (<ref>). First notice the functional derivative of the free energy w[ϕ(s)] evaluates to
δ w δϕ(t_1) =
∑_n=3^∞ κ^n-2 (n-1)! ∫_0^∞ d s_1 ⋯∫_0^∞ ds_n W_n(t_1, 𝐬) ϕ(s_2) ⋯ ϕ(s_n) ,
after using the symmetry property of W_n. In particular consider taking ϕ(s_i) = ϕ_∗(Λ) δ(s_i - Λ), where ϕ_∗(Λ) is a solution to the stubbed theory of stub length Λ/2. We have
φ(t_1; Λ)
≡δ w δϕ(t_1; Λ)|_ϕ→ϕ_∗
= ∑_n=3^∞κ^n-2 (n-1)! W_n(t_1, Λ) ϕ_∗(Λ)^n-1 ,
where we introduced φ(t_1) for convenience. The argument Λ indicates that all remaining entries are set equal to Λ. We often suppress the Λ dependence to simplify the presentation of the expressions.
Importantly, when t_1= Λ (<ref>) evaluates to (see (<ref>))
φ(t_1 = Λ)
=∑_n=3^∞κ^n-2 (n-1)! W_n(s_i = Λ) ϕ_∗^n-1
= ∂ V ∂ϕ|_ϕ = ϕ_∗ + μ^2 ϕ_∗
= μ^2 ϕ_∗ ,
since ϕ_∗ is a solution and it extremizes the potential (<ref>) by construction. Therefore one can think of φ(t_1) as a function that captures the solution when t_1 = Λ, but generalizes it otherwise. The definition of this function is schematically shown in figure <ref>.
Taking ϕ(s_i) = ϕ_∗ δ(s_i - Λ) =( φ(Λ) / μ^2 ) δ(s_i - Λ) in (<ref>) then implies
0 = φ(t_1)
- κ 2 μ^4 a_t_1 ΛΛ φ(Λ)^2
- κμ^2 ∫_0^∞ dt_3 r_t_1 Λ t_3 φ(Λ) φ(t_3)
-κ 2 ∫_0^∞ dt_2 ∫_0^∞ dt_3 d_t_1 t_2 t_3 φ(t_2) φ(t_3) ,
and we obtain a quadratic integral equation for φ(t_1)
0 = φ(t_1)
- (κ/2μ^4) exp( t_1 μ^2/2 + Λμ^2 ) φ(Λ)^2
+ (κ/μ^2) exp( t_1 μ^2/2 + Λμ^2/2 )
φ(Λ) [ ∫_0^Λ dt' exp( t' μ^2/2 ) φ(t') ]
- (κ/2) exp( t_1 μ^2/2 ) [ ∫_0^Λ dt' exp( t' μ^2/2 ) φ(t') ]^2
,
after substituting the stub kernels (<ref>). The schematic representation of this quadratic integral equation is shown in figure <ref>.
Observe that setting t_1 = Λ = 0 above produces V'(ϕ) = 0, see (<ref>). This suggests (<ref>) can be understood as a resummation of the equation of motion V'(ϕ; Λ) = 0 of the stubbed theory. We can iteratively insert φ(t_1) into itself and expand in κ to find
0 = φ(t_1)
- (κ/2) exp( t_1 μ^2/2 + Λμ^2 ) ϕ_∗^2
- (κ^2/2) exp( t_1 μ^2/2 + 3 Λμ^2/2 ) ( (1- e^{Λμ^2})/μ^2 ) ϕ_∗^3
+ ⋯ .
Upon taking t_1 = Λ and using (<ref>) one indeed obtains V'(ϕ; Λ) = 0 in the expanded form, see (<ref>). The argument here can be generalized to higher orders in κ.
In fact, upon close inspection we see
φ(t_1) = (2 μ^4/κ) exp( t_1 μ^2/2 ) ,
satisfies (<ref>) and we again obtain the solution (<ref>) for the stubbed theory
ϕ_∗ = (1/μ^2) φ(t_1 = Λ)
= (2 μ^2/κ) exp( Λμ^2/2 ) .
This demonstrates that the differential constraints encoding the topological recursion (<ref>) in a theory can be used to find its solutions. Of course, we have found φ(t_1) by guesswork, and it can be quite challenging to do so in more complicated situations. We highlight that resumming the potential V(ϕ; Λ) (<ref>) as in <cit.> wasn't necessary to obtain this solution: we were able to access the region beyond the radius of convergence ϕ > r (Λ) (<ref>) directly.
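As a consistency check, the residual of the quadratic integral equation can be evaluated numerically for this profile. A minimal sketch with illustrative parameter values:

```python
# Check that φ(t) = (2μ^4/κ) exp(t μ^2/2) solves the quadratic integral equation.
import numpy as np
from scipy.integrate import quad

mu2, kap, Lam = 1.0, 0.5, 0.7            # μ^2, κ, Λ (illustrative)
phi = lambda t: (2 * mu2**2 / kap) * np.exp(t * mu2 / 2)

J, _ = quad(lambda t: np.exp(t * mu2 / 2) * phi(t), 0, Lam)

def residual(t1):
    return (phi(t1)
            - kap / (2 * mu2**2) * np.exp(t1 * mu2 / 2 + Lam * mu2) * phi(Lam)**2
            + kap / mu2 * np.exp(t1 * mu2 / 2 + Lam * mu2 / 2) * phi(Lam) * J
            - kap / 2 * np.exp(t1 * mu2 / 2) * J**2)

for t1 in [0.0, 0.3, 1.0, 2.5]:
    assert abs(residual(t1)) < 1e-8 * phi(t1)
print("quadratic integral equation satisfied at sampled t1")
```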
This approach, however, does not give a novel method for finding the energy difference between the solutions since this calculation still has to involve the resummed potential. On the other hand, it is possible to find the mass of the linear fluctuations around the vacuum (<ref>) based on the recursion alone. In order to do that we vary φ→φ + δφ in (<ref>) for which we find
- (1/μ^2) M^2 (t_1) δφ(t_1) =
δφ(t_1)
- 2 exp( t_1 μ^2/2 + Λμ^2/2 ) δφ(Λ)
+ 2 μ^2 exp( t_1 μ^2/2 ) ∫_0^Λ dt' exp( t' μ^2/2 ) δφ(t') ,
using (<ref>). We suggestively named this variation M^2 (t_1) δφ(t_1) /μ^2 given that (<ref>) was essentially the resummed V'(ϕ; Λ) = 0. Its variation can be understood as the resummed version of -V”(ϕ; Λ) and it should encode the spectrum of the linear fluctuations around ϕ = ϕ_∗. The overall sign follows from the choice of signs in (<ref>) and the division by μ^2 is to get the correct units for M^2. We have δφ(Λ) = μ^2 δϕ for the genuine fluctuations.
Indeed, by taking
δφ(t_1) = exp( t_1 μ^2/2 ) ϵ ,
with infinitesimal ϵ without dependence on t_1, we see
- (1/μ^2) M^2 δφ(Λ) = - δφ(Λ) ⟹ M^2 δϕ = μ^2 δϕ ,
so the squared mass of the fluctuations around the nonperturbative vacuum is M^2 = μ^2 > 0—as it should be. We guessed the particular form of (<ref>) for δφ(t_1) as a function of t_1 inspired by (<ref>), but the linearity of (<ref>) in δφ(t_1) justifies it a posteriori in any case. We again emphasize that we didn't need to resum the potential to derive (<ref>). In fact, obtaining this result from the resummed form (<ref>) would have been quite intricate due to the complicated higher derivative structure of the stubbed theory.
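The linearized statement can be checked the same way; a minimal sketch with the same illustrative parameters:

```python
# Check that δφ(t) = ε exp(t μ^2/2) reproduces M^2 = μ^2 in the variation above.
import numpy as np
from scipy.integrate import quad

mu2, Lam, eps = 1.0, 0.7, 1e-3
dphi = lambda t: eps * np.exp(t * mu2 / 2)

def rhs(t1):   # the right-hand side equals -(1/μ^2) M^2(t_1) δφ(t_1)
    I, _ = quad(lambda t: np.exp(t * mu2 / 2) * dphi(t), 0, Lam)
    return (dphi(t1)
            - 2 * np.exp(t1 * mu2 / 2 + Lam * mu2 / 2) * dphi(Lam)
            + 2 * mu2 * np.exp(t1 * mu2 / 2) * I)

for t1 in [0.0, Lam, 1.3]:
    M2 = -mu2 * rhs(t1) / dphi(t1)
    assert abs(M2 - mu2) < 1e-8
print("M^2 = mu^2 at all sampled t1")
```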
Given that the integral equation (<ref>) is a consequence of the differential constraint and topological recursion, it is natural to wonder whether there is an analog of the integral equation in hyperbolic CSFT as a result of (<ref>). Indeed, we are going to see there is such an equation in the upcoming section.
§ QUADRATIC INTEGRAL EQUATION FOR HYPERBOLIC CSFT
Now we apply the reasoning from the last section to derive a quadratic integral equation for hyperbolic CSFT following from (<ref>). We adapt the conventions from before and we are going to be brief in our exposition since the manipulations are very similar to the previous section.
We begin with the derivative of W[Ψ(ℓ)]. Upon substituting
Ψ(ℓ_i) = Ψ_∗ δ(ℓ_i - L) ,
into (<ref>) we get
Φ (L_1)
≡δ W δΨ(L_1)|_Ψ→Ψ_∗ = ∑_n=3^∞κ^n-2 (n-1)! ( 𝕀⊗⟨𝒱_0,n (L_1, 𝐋) | ) ( | ω^-1⟩⊗ | Ψ_∗⟩^⊗ (n-1))
= ∑_n=2^∞κ^n-1 n ! L_0,n( Ψ_∗^n; L_1 ) .
Here Ψ_∗ = Ψ_∗(L) is a critical point of the classical CSFT action and we defined the string field Φ (L_1) = Φ (L_1; L) accordingly. These string fields are even and assumed to have ghost number 2.
We also introduce a new set of (graded-symmetric) string products L_g,n-1(L_1): ℋ^⊗ (n-1)→ℋ
L_g, n-1 (L_1) =
( 𝕀⊗⟨𝒱_g,n (L_1, 𝐋) | ) ( | ω^-1⟩⊗𝕀^⊗ (n-1)) ,
that generalize the ordinary string products L_g,n (<ref>) such that the border length of the output is L_1 instead of L. They are equivalent when L_1 = L. Note that we have
Φ (L_1 = L) = - Q_B Ψ_∗ ,
by the equation of motion. We therefore see Φ (L_1 = L) is related to Q_B acting on the solutions. This definition can also be schematically understood like in figure <ref>.
We remind the reader there is a gauge redundancy for the solutions Ψ_∗ (<ref>), which indicates there should be a redundancy in the choice of Φ (L_1) as well. We demand this is given by
δ_Λ Φ (L_1) =
∑_n=2^∞κ^n-1 (n-1)! L_0,n( δ_ΛΨ_∗, Ψ_∗^n-1; L_1 ) ,
so that the definition (<ref>) is invariant under (<ref>). Here δ_ΛΨ_∗ is given by (<ref>). Upon taking L_1 = L and using the L_∞ relations <cit.> this gauge transformation simplifies further and can be seen to be consistent with (<ref>).
We need to fix this gauge symmetry in order to invert (<ref>) and write down an integral equation. We do this by assuming a Grassmann-odd operator β (L_1) = β(L_1; L), possibly to be constructed with b-ghost modes and depending on L_1, satisfying[We take β = β(L_1) so that it indicates β is to be used for fixing the gauge redundancy (<ref>) for all L_1.]
{ Q_B, β(L_1 = L) } = -P ,
where P is the projector to the complement of the cohomology of Q_B. We impose the gauge
β(L_1 = L) Ψ_∗ = 0 ,
on the CSFT solutions. As a result the equation (<ref>) gets inverted to
β Φ (L_1 = L) = Ψ_∗ .
Note that we have β Φ (L_1) ≠ 0 and { Q_B, β(L_1) }≠ - 1 for L_1 ≠ L in general. We have taken P → 1 above and ignored the subtleties associated with the on-shell modes.
Taking (<ref>) in the differential constraint (<ref>) then implies
0 = Φ (L_1)
- κ 2
A_L_1 L L ( β Φ (L) , β Φ (L) )
- κ ∫_-∞^∞ d ℓ
B_L_1 L ℓ( β Φ (L), Φ(ℓ))
- κ 2 ∫_-∞^∞ d ℓ_1 ∫_-∞^∞ d ℓ_2
C_L_1 ℓ_1 ℓ_2(Φ(ℓ_1), Φ(ℓ_2) ) ,
where the 2-products A,B,C are already defined in (<ref>). The similarity to (<ref>) is apparent, also observe the structure in figure <ref>. This is the quadratic integral equation and it makes the cubic nature of CSFT manifest.
Like earlier, iteratively substituting Φ (L_1) into this equation, taking L_1 = L, and expanding in κ we obtain the equation of motion for CSFT (<ref>) using (<ref>). A similar phenomenon occurs for the stubs (<ref>) and it is associated with resummation as discussed. So we again interpret (<ref>) as the resummation of the CSFT equation of motion. Having such a form is reassuring: trying to resum the action (<ref>) as in (<ref>) would have been quite a daunting task, bordering on the impossible.
Being able to recast the CSFT equation of motion to a quadratic integral equation (<ref>) is encouraging. It is explicitly and exclusively given in terms of the generalized hyperbolic three-vertex of <cit.> and the twisted Mirzakhani kernels (<ref>). It may be viable to attempt to solve this integral equation in the future after identifying a convenient gauge-fixing operator β(L_1). Furthermore, it should be possible to read the spectrum of the linear excitations (i.e., cohomology) around the solution by varying (<ref>) like in (<ref>) and repeating a similar analysis.[On the other hand we don't have a new way to compute the on-shell value of the action as before. This quantity vanishes up to boundary terms in CSFT <cit.>.] However, we expect the form of solutions and their variations would be more complicated than the stub counterparts (<ref>) and (<ref>). We comment on this approach more in the next section.
§ DISCUSSION
In this paper:
* We identified a background-independent topological recursion relation satisfied by the elementary vertices of hyperbolic closed string field theory, refer to (<ref>) or (<ref>). This explicitly demonstrates closed string field theory has an underlying cubic structure and makes its features manifest. The moduli integrations are simplified considerably. The recursion relation satisfied among the volumes of the systolic subsets (<ref>) provides the essential ingredients for its closed string field theory counterpart.
* We showed a suitable generalization of the classical solutions to closed string field theory obey a quadratic integral equation (<ref>) as a consequence of the aforementioned topological recursion. We discussed how this equation can be used to construct analytic solutions in principle.
There are numerous interesting directions left to be investigated in future work. We list the ones that we find the most appealing and exciting:
* Developing a systematic approach for solving the quadratic integral equation (<ref>) is the natural next step. Even though this equation appears to contain sufficient information to construct solutions based on our investigations in the stubbed scalar theory, it is far from clear how to obtain them beyond guessing judiciously. Identifying a convenient gauge choice (<ref>) and the smallest subspace for which a solution to (<ref>) exists are possibly among the first prerequisites for making any further progress. Understanding the underlying algebraic structure to our reformulation may also help.
* Even if a solution is constructed, it is somewhat unclear how to probe its physics. For example, it has been established that the on-shell action vanishes up to boundary terms in closed string field theory <cit.>, thanks to the dilaton theorem <cit.>. We don't have a novel way to test this. However, investigating the spectrum around the background may still be doable with a correct approach. One may try to establish (or rule out) the closed string version of Sen's third conjecture <cit.> for the vanishing cohomology around the tachyon vacuum for instance.
* The form of the differential constraint encoding the topological recursion (<ref>) is suggestive. We have already pointed out its possible interpretation as a quantum Airy structure <cit.>. It would be interesting to investigate this further to see whether a deeper principle and/or theory is lurking behind here and how it relates to the nonperturbative structure of closed string field theory. There may be connections to the ideas of <cit.> for example.
* It may also be useful to illuminate how our approach compares to the themes in the relation between matrix models <cit.> and Jackiw-Teitelboim gravity <cit.>. Here, the initial point of investigation would be understanding and clarifying the matrix model interpretation of the systolic volumes and their recursion (<ref>).
* We have seen the string fields are naturally endowed with an additional positive real parameter: the length of the geodesic border L_i (<ref>). A similar feature also appears in the lightcone SFT <cit.>, where the lightcone momentum k_- relates to the border length. It may be useful to understand how similar their roles are in the interactions.
* Physics following from the recursion shouldn't change under field redefinition even though a particular form of the relations will be no longer manifest. Nevertheless, it is still rather counter-intuitive that our derivation falls short of establishing a recursion for all classical hyperbolic vertices—especially for the polyhedral vertices based on Strebel quadratic differentials <cit.>. Such a recursion may still exist considering the work <cit.> and it may be possible to find a generalization to our argument to cover these scenarios as well. Relatedly, it is desirable to establish a recursion relation for other hyperbolic theories <cit.> and/or their supersymmetric versions <cit.>.
We hope these results help construct closed string field theory solutions some day.
§ ACKNOWLEDGMENTS
We are grateful to Scott Collier, Harold Erbin, Ted Erler, Daniel Harlow, David Kolchmeyer, Raji Mamade, and Edward Mazenc for many enlightening discussions; Barton Zwiebach for his comments on the early draft and discussions; and Nobuyuki Ishibashi for the correspondence. NVM in particular thanks Daniel Harlow for the freedom and encouragement to explore.
The work of AHF is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics of U.S. Department of Energy under grant Contract Number DE-SC0012567, DE-SC0009999, and the funds from the University of California. NVM is supported by the Hertz Foundation and the MIT Dean of Science Fellowship.
§ HYPERBOLIC THREE-STRING VERTEX
In this appendix we summarize the local coordinates for the generalized hyperbolic string vertex ⟨Σ_0,3 (L_1, L_2, L_3)| for L_1,L_2,L_3 ≥ 0. The reader can refer to <cit.> for the details of the derivation using the connection between hyperbolic geometry on the three-bordered sphere and the hypergeometric equation. We also comment on various branch choices in this appendix.
We begin with a three-punctured sphere whose punctures are placed at z=0,1,∞ and there are coordinate patches
D_i = { z ∈ℂ | 0 < |w_i(z; L_1, L_2, L_3)| ≤ 1 } ,
around them such that the surface
Σ_0,3(L_1, L_2, L_3) = ℂ∖⋃_i=1^3 D_i ,
endows a regular hyperbolic metric with geodesic borders of length L_i = 2 πλ_i. The local coordinate maps w_1(z; L_1, L_2, L_3) in (<ref>) are given by
w_1(z; L_1, L_2, L_3) =
z (1-z)^-λ_2 / λ_1ρ(L_1, L_2, L_3 ) [
_2F_1( 1+ i λ_1 - i λ_2 + i λ_3 2,
1+ i λ_1 - i λ_2 - i λ_3 2;
1 + i λ_1; z )
_2F_1( 1- i λ_1 + i λ_2 - i λ_3 2,
1- i λ_1 + i λ_2 + i λ_3 2;
1 - i λ_1; z )
]^1 / i λ_1
= 1 ρ_1 [ z +
1 + λ_1^2 + λ_2^2 - λ_3^2 2 (1+λ_1^2) z^2 + ⋯] ,
where the expression for the mapping radius ρ_1 = ρ (L_1, L_2, L_3 ) is
ρ (L_1, L_2, L_3 ) = e^{-π/(2 λ_1)} [
Γ(1-i λ_1)^2 Γ(1+i λ_1)^2 γ( (1 + iλ_1 + i λ_2 + i λ_3)/2 )
γ( (1 + iλ_1 - i λ_2 + i λ_3)/2 )
γ( (1 - iλ_1 - i λ_2 + i λ_3)/2 )
γ( (1 - iλ_1 + i λ_2 + i λ_3)/2 ) ]^{i/(2 λ_1)} .
Here _2F_1(a,b;c;z) is the hypergeometric function and Γ(x) is the gamma function. The latter also goes into the definition of
γ(x) = Γ(x) Γ(1-x) .
Take note the mapping radius ρ_1 diverges as ∼ e^- 1 / L_1 as L_1 → 0. In the inverted form, the expansion (<ref>) is given by
z(w_1; L_1, L_2, L_3) =
ρ_1 w
- 1 + λ_1^2 + λ_2^2 - λ_3^2 2 (1+λ_1^2)
(ρ_1 w )^2
+ ⋯ .
Unfortunately a closed-form expression for this inverse function is not known except for L_i → 0. The expressions for the local coordinates w_2 and w_3 around z=1 and z=∞ are given by
w_2(z; L_1, L_2, L_3) = w_1 (1-z ; L_2, L_1, L_3) ,
w_3(z; L_1, L_2, L_3) = w_1 ( 1 z ; L_3, L_2, L_1 ) ,
up to possible global phases.
It is important to be mindful about the various branch choices for the imaginary exponents in the mapping radius (<ref>). This has been partially investigated in <cit.>, however we briefly comment on them again here. First notice there is an overall ambiguity in the mapping radii
ρ(L_1,L_2, L_3) ≃ exp( n π/λ_1 ) ρ(L_1,L_2, L_3) ,
n ∈ℤ ,
as a result of the imaginary exponent. If one would like to work in a particular branch for the local coordinates then one has to find the correct integer n = n (L_1,L_2,L_3). It has been argued that the correct choice is n=0 when L_1=L_2 = L_3, together with the choice of the principal branch <cit.>. However there may be branch crossings for generic border lengths. Since the mapping radius should be continuous in L_i the integer n should be adjusted accordingly when a branch cut is crossed. In the expressions (<ref>) we assume such adjustments are implicitly present.
§ SAMPLE COMPUTATIONS FOR THE SYSTOLIC VOLUMES
In this appendix we perform sample computations for the systolic volumes using the recursion (<ref>). We have already computed (g,n)=(0,4),(1,1) in the main text, see (<ref>) and (<ref>). Using them we can compute the systolic volumes for the surfaces with χ_g,n = -3 and n ≥ 1:
* Five-bordered sphere g=0, n=5. Writing the recursion (<ref>) explicitly we have
L_1 · V 𝒱^L_0,5 (L_i) =
∑_i=2^5 ∫_0^∞ℓ d ℓ R_L_1L_iℓ V 𝒱^L_0,4(ℓ, 𝐋∖{L_i })
+ 3∫_0^∞ℓ_1 d ℓ_1 ∫_0^∞ℓ_2 d ℓ_2 D_L_1 ℓ_1 ℓ_2 ,
after taking V 𝒱^L_0, 3 (L_i) =1. This can be expressed as
L_1 · V 𝒱^L_0,5 (L_i) = L_1 · V ℳ_0,5 (L_i)
- 3 2 L^2 ∑_i=2^5 ∫_0^∞ℓ d ℓ R_L_1L_iℓ
- L_1 ∑_i=2^5 ∫_0^L ℓ d ℓ 𝒱^L_0,4(ℓ, 𝐋∖{L_i })
- 6 ∫_0^L ℓ_1 d ℓ_1 ∫_0^∞ℓ_2 d ℓ_2 R_L_1 ℓ_1 ℓ_2
+ 3 L_1 ∫_0^L ℓ_1 d ℓ_1 ∫_0^L ℓ_2 d ℓ_2 ,
using the twisted Mirzakhani kernels (<ref>). Focus on the integral
I_L_1 L_i≡∫_0^∞ℓ d ℓ R_L_1 L_i ℓ
= ∫_0^∞ℓ d ℓ[
L_1 -
log( ( cosh(L_i/2) + cosh((L_1 + ℓ)/2) ) / ( cosh(L_i/2) + cosh((L_1 - ℓ)/2) ) )
] .
Clearly this integral converges. It can be evaluated by taking the derivative of both sides with respect to L_1 and using the identity (<ref>)
∂ I_L_1 L_i/∂ L_1 = (1/2) ∫_0^∞ℓ d ℓ∑_ϵ_1, ϵ_i = ±[ 1 + exp( (ℓ + ϵ_1 L_1+ ϵ_i L_i)/2 ) ]^{-1}
= (1/2)( L_1^2 + L_i^2) + 2 π^2/3
⟹ I_L_1 L_i = (1/6) L_1^3 + (1/2) L_1 L_i^2 + (2 π^2/3) L_1 .
The integration constant is fixed by noticing I_{L_1=0, L_i} = 0, see (<ref>); this closed form is also checked numerically in the sketch at the end of this appendix. Then we have
∫_0^L ℓ_1 d ℓ_1 I_L_1 ℓ_1 =
1 12 L_1^3 L^2 + 1 8L_1 L^4 + π^2 3 L_1 L^2 ,
and
∫_0^L ℓ d ℓ 𝒱^L_0,4(ℓ, 𝐋∖{L_i })
= ∫_0^L ℓ d ℓ[
2 π^2 +1 2ℓ^2 + 1 2∑_j=2
j ≠ i^5 L_j^2 - 3 2 L^2
]
= π^2 L^2 + 1 4 L^2 ∑_j=2
j ≠ i^5 L_j^2 - 5 8 L^4 .
Combining these integrals we obtain
L_1 · V 𝒱^L_0,5 (L_i) = L_1 · V ℳ_0,5 (L_i)
- 3 2 L^2 ∑_i=2^5 [ 1 6 L_1^3 + 1 2 L_i^2 L_1 + 2 π^2 3 L_1]
- L_1 ∑_i=2^5 [
π^2 L^2 + 1 4 L^2 ∑_j=2
j ≠ i^5 L_j^2 - 5 8 L^4
]
- 6 [ 1 12 L_1^3 L^2 + 1 8L_1 L^4 + π^2 3 L_1 L^2 ]
+ 3 4 L_1 L^4
= L_1 · V ℳ_0,5 (L_i) - L_1^3 L^2 - 3 4 L_1 L^2 ∑_i=2^5 L_i^2 - 4 π^2 L_1 L^2
- 4 π^2 L_1 L^2
- 3 4 L_1 L^2 ∑_i= 2^5 L_i^2 + 5 2 L_1 L^4
- 1 2 L_1^3 L^2 - 3 4 L_1 L^4 - 2π^2 L_1 L^2 + 3 4 L_1 L^4
= L_1 · V ℳ_0,5 (L_i) - 3 2L_1^3 L^2 - 3 2 L_1 L^2 ∑_i=2^5 L_i^2 - 10π^2 L_1 L^2 + 5 2 L_1 L^4 ,
and the final result is
V 𝒱^L_0,5 (L_i) =
V ℳ_0,5 (L_i) - L^2 [ (3/2) ∑_i=1^5 L_i^2 + 10 π^2 ]
+ (5/2) L^4 .
This expression is symmetric under permutations of L_i, as it should be. Observe that the term subtracted from V ℳ_0,5 (L_i) to get V 𝒱^L_0,5 (L_i) is quite nontrivial, unlike (<ref>) and (<ref>).
* Two-bordered torus g=1, n=2. We begin by writing (<ref>) specific to this case
L_1 · V 𝒱^L_1,2 (L_i) =
∫_0^∞ℓ d ℓ R_L_1L_2ℓ V 𝒱^L_1,1(L_1 )
+ 1 2∫_0^∞ℓ_1 d ℓ_1 ∫_0^∞ℓ_2 d ℓ_2 D_L_1 ℓ_1 ℓ_2 .
after using V 𝒱^L(ℓ_1, ℓ_2,L_i) = 0. The twisted Mirzakhani kernels (<ref>) produce
L_1 · V 𝒱^L_1,2 (L_i) =
L_1 · V ℳ_1,2 (L_i)
- 1 4 L^2 ∫_0^∞ℓ dℓ R_L_1 L_2 ℓ
- L_1 ∫_0^L ℓ d ℓ V 𝒱^L_1,1(ℓ)
-∫_0^L ℓ_1 d ℓ_1 ∫_0^∞ℓ_2 d ℓ_2
R_L_1 ℓ_1 ℓ_2
+ 1 2 L_1 ∫_0^L ℓ_1 d ℓ_1 ∫_0^L ℓ_2 d ℓ_2
.
This is almost the same as the case considered above, the only difference being that we have to use V 𝒱^L_1,1(ℓ) = π^2/12 + ℓ^2/48 - L^2 /4 (<ref>) now. We perform the integral
∫_0^L ℓ d ℓ V 𝒱^L_1,1(ℓ) =
∫_0^L ℓ d ℓ[ π^2 12 +ℓ^2 48 -L^2 4]
= π^2 24 L^2 - 23 192 L^4 ,
and obtain
L_1 · V 𝒱^L_1,2 (L_i) =
L_1 · V ℳ_1,2 (L_i)
- 1 4 L^2 [ 1 6 L_1^3 + 1 2 L_1 L_2^2 + 2 π^2 3 L_1]
- L_1 [π^2 24 L^2 - 23 192 L^4 ]
- [1 12 L_1^3 L^2 + 1 8L_1 L^4 + π^2 3 L_1 L^2]
+ 1 8 L_1 L^4
= L_1 · V ℳ_1,2 (L_i) - 1 24 L_1^3 L^2 - 1 8 L_1 L_2^2 L^2 - π^2 6 L_1 L^2
- π^2 24 L_1 L^2 + 23 192L_1 L^4
- 1 12 L_1^3 L^2 - 1 8 L_1 L^4 - π^2 3 L_1 L^2 + 1 8 L_1 L^4
= L_1 · V ℳ_1,2 (L_i) - 1 8 L_1 L^2 ∑_i=1^2 L_i^2 + 23 192 L_1 L^4
- 13 π^2 24 L_1 L^2 .
Then the final result is
V 𝒱^L_1,2 (L_i) = V ℳ_1,2 (L_i) -
L^2 [ (1/8) ∑_i=1^2 L_i^2 + 13 π^2/24 ]
+ (23/192) L^4 .
This is also invariant under exchanging L_1 ↔ L_2. Defining
x ≡ L_1^2 + L_2^2 ,
we can alternatively express (<ref>) as
V 𝒱^L_1,2 (x) = (1/192) (x + 4 π^2) (x + 12 π^2) - (1/8) L^2 x
- (13 π^2/24) L^2
+ (23/192) L^4 ,
which can be plotted as a function of the threshold length L and x, see figure <ref>. Take note that the quantity is positive for all 0 ≤ L ≤ 2 sinh^-1 1.
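Both the closed form for I_{L_1 L_i} used in the first item and the positivity noted above can be spot-checked numerically. A minimal Python sketch with illustrative border lengths:

```python
# Check the closed form of I_{L1 Li} and the positivity of V𝒱^L_{1,2}(x).
import numpy as np
from scipy.integrate import quad

def R(L1, Li, ell):   # Mirzakhani kernel R_{L1 Li ℓ} as above
    c = np.cosh(Li / 2)
    return L1 - np.log((c + np.cosh((L1 + ell) / 2)) /
                       (c + np.cosh((L1 - ell) / 2)))

for L1, Li in [(0.5, 0.3), (1.0, 2.0), (2.5, 0.0)]:
    # cut the integral off at ℓ = 200: the integrand decays like ℓ e^{-ℓ/2}
    I_num, _ = quad(lambda l: l * R(L1, Li, l), 0, 200, limit=200)
    I_closed = L1**3 / 6 + L1 * Li**2 / 2 + 2 * np.pi**2 / 3 * L1
    assert abs(I_num - I_closed) < 1e-6

# positivity of V𝒱^L_{1,2}(x) on 0 ≤ L ≤ 2 sinh^{-1}(1), sampled on a grid
Ls = np.linspace(0, 2 * np.arcsinh(1.0), 201)
xs = np.linspace(0, 100, 201)
L, x = np.meshgrid(Ls, xs)
V = ((x + 4 * np.pi**2) * (x + 12 * np.pi**2) / 192
     - L**2 * x / 8 - 13 * np.pi**2 / 24 * L**2 + 23 * L**4 / 192)
print("minimum over the sampled grid:", V.min())   # comes out positive
```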
§ NUMERICAL EVALUATION OF V 𝒱^L_1,1(L_1)
Following <cit.>, it is possible to work out the WP geometry of ℳ_0,4(L_i) and ℳ_1,1(L_1) explicitly using classical conformal blocks <cit.>. Despite the fact that this procedure doesn't have a mathematically rigorous basis in general, there is overwhelming evidence from the aforementioned works that it holds true. We can use this approach to numerically compute the volumes V 𝒱^L_1,1(L_1) and provide numerical evidence for the systolic volumes (<ref>). The reader can refer to <cit.> for a deeper exposition.
We begin by reminding the WP metric on the moduli space ℳ_1,1(L_1), parameterized by the moduli τ∈ℍ / PSL(2,ℤ), is given by
g^(1,1)_ττ̄(L_1) = - 4 π ∂_τ ∂_τ̄ S_HJ^(1,1) (τ, τ̄; L_1) .
Here the Kähler potential S_HJ^(1,1) (τ, τ̄; L_1) is a suitably-regularized on-shell Liouville action that can be found as an expansion in q = e^{2 π i τ}, see equation (3.25) of <cit.>. The Kähler form associated with the metric g^(1,1)(L_1) is related to the WP form on ℳ_1,1(L_1). We can use this to evaluate V 𝒱^L_1,1(L_1) by performing the integral
V 𝒱^L_1,1(L_1) = (i/4) ∫_𝒱^L_1,1(L_1) g_ττ̄(L_1) d τ∧ d τ̄ .
Note the factor of 2 difference between the conventions for the volumes here and in <cit.>. This is a result of our conventions explained below (<ref>).
The integral for L=0 has been numerically evaluated in <cit.> and a good agreement with the analytic results has been observed. Here we repeat an analogous computation after placing a cutoff on the moduli space to find the volumes of the systolic subsets. This first requires us to find the demarcation curve ∂𝒱^2 πλ_1,1(L_1), as a function of the moduli τ, that separates the systolic subset from the region whose volume we subtract. This curve is
ρ(2 πλ, 2 πλ_1, -2 πλ)^λ = |
exp( ∂ f_λ'^λ (q)/∂λ' )
|_λ' = λ , where
q = e^{2 π i τ} ,
and the left-hand side of (<ref>) is given by the classical torus conformal blocks <cit.>
f_λ'^λ (q) = (λ'^2/4) log q + ( (1+λ^2)^2/(8 (1+ λ'^2)) ) q + 𝒪(q^2) ,
and the right-hand side is given by the mapping radius (<ref>).
Now it is possible to approximate the integral (<ref>) numerically, for which we use Monte-Carlo (MC) integration and consider the representative cases L_1 = 0, π/2. We approximate the curve (<ref>) by keeping only the log q term, as the shape of the curve is observed to remain almost the same when the higher order terms in q are included. To this order ∂𝒱^{2 πλ}_1,1(L_1) is simply described by the line
Im τ = (1/π) log ρ(2πλ, 2πλ_1, -2πλ) .
In our evaluation, we sampled points between this line and Im τ = 40 with the restriction -1/2 ≤ Re τ≤ 1/2 and evaluated the volume of this region. We then subtracted it from the total volume of the moduli space V ℳ_1,1(L_1) to find V 𝒱^L_1,1(L_1). We repeated the MC integration five times for each value of L. The results are already shown in figure <ref> and they are consistent with the analytical result (<ref>). We can also repeat a similar numerical evaluation to argue V 𝒱^L_0,4(L_i) is given by (<ref>) after constructing the WP metric on the four-bordered sphere <cit.>.
We emphasize the restriction L ≤ L_∗ is necessary to cover the region ℳ_1,1 (L_1) ∖𝒱^L_1,1 (L_1) once and only once. This can be seen from setting L=L_∗. The symmetric punctured-torus (L_1 = 0, τ =i) saturates the systolic condition in this case <cit.> and any increase in the threshold length L would overcount the region ℳ_1,1 (L_1) ∖𝒱^L_1,1 (L_1) as a result <cit.>. This implies the formula (<ref>) no longer holds and the form of the subtraction term should be modified from L^2/4 appropriately. Hence the condition L ≤ L_∗ is necessary. Having L_1 > 0 doesn't lead to a stricter condition. Notice this argument doesn't inform whether L ≤ L_∗ is sufficient or not. This follows from the collar lemma instead.
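For concreteness, a schematic skeleton of this MC evaluation is sketched below. The metric density used here is a placeholder assumption: the actual g_ττ̄ must be assembled from the truncated q-expansion of the Kähler potential S_HJ before any quantitative comparison is made.

```python
# Schematic Monte-Carlo sketch of the subtracted-volume computation above.
import numpy as np

rng = np.random.default_rng(0)

def g_density(x, y):
    # stand-in for g_{ττ̄}(τ, τ̄; L1) with τ = x + iy; replace with the
    # truncated q-series of -4π ∂_τ ∂_τ̄ S_HJ before drawing conclusions
    return 1.0 / y**2

def subtracted_volume(rho, y_max=40.0, n=10**6):
    # region between the line Im τ = (1/π) log ρ and Im τ = y_max, with
    # -1/2 ≤ Re τ ≤ 1/2; the (i/4) dτ∧dτ̄ measure equals (1/2) dx dy
    y_min = np.log(rho) / np.pi
    x = rng.uniform(-0.5, 0.5, n)
    y = rng.uniform(y_min, y_max, n)
    vals = 0.5 * g_density(x, y)
    area = y_max - y_min
    return area * vals.mean(), area * vals.std() / np.sqrt(n)

vol, err = subtracted_volume(rho=np.exp(2.0))   # illustrative mapping radius
print(f"subtracted volume ≈ {vol:.6f} ± {err:.6f}")
# V𝒱^L_{1,1}(L1) then follows by subtracting this from V ℳ_{1,1}(L1).
```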
Zwiebach:1992ie
B. Zwiebach, “Closed string field theory: Quantum action and the B-V master
equation,” Nucl. Phys. B 390 (1993) 33–152,
http://www.arXiv.org/abs/hep-th/9206084 hep-th/9206084.
Sen:2024nfd
A. Sen and B. Zwiebach, “String Field Theory: A Review,”http://www.arXiv.org/abs/2405.19421 2405.19421.
deLacroix:2017lif
C. de Lacroix, H. Erbin, S. P. Kashyap, A. Sen, and M. Verma, “Closed
Superstring Field Theory and its Applications,” Int. J. Mod. Phys. A 32 (2017), no. 28n29, 1730021,
http://www.arXiv.org/abs/1703.06410 1703.06410.
Erler:2019loq
T. Erler, “Four Lectures on Closed String Field Theory,” Phys. Rept. 851 (2020) 1–36, http://www.arXiv.org/abs/1905.06785
1905.06785.
Erbin:2021smf
H. Erbin, String Field Theory: A Modern Introduction, vol. 980 of
Lecture Notes in Physics.
3, 2021.
Maccaferri:2023vns
C. Maccaferri, “String Field Theory,” 8, 2023.
http://www.arXiv.org/abs/2308.00875 2308.00875.
Belopolsky:1994sk
A. Belopolsky and B. Zwiebach, “Off-shell closed string amplitudes: Towards a
computation of the tachyon potential,” Nucl. Phys. B 442 (1995)
494–532, http://www.arXiv.org/abs/hep-th/9409015
hep-th/9409015.
Belopolsky:1994bj
A. Belopolsky, “Effective Tachyonic potential in closed string field
theory,” Nucl. Phys. B 448 (1995) 245–276,
http://www.arXiv.org/abs/hep-th/9412106 hep-th/9412106.
Yang:2005rx
H. Yang and B. Zwiebach, “A Closed string tachyon vacuum?,” JHEP
09 (2005) 054, http://www.arXiv.org/abs/hep-th/0506077
hep-th/0506077.
Moeller:2004yy
N. Moeller, “Closed bosonic string field theory at quartic order,”
JHEP 11 (2004) 018,
http://www.arXiv.org/abs/hep-th/0408067 hep-th/0408067.
Moeller:2006cw
N. Moeller, “Closed Bosonic String Field Theory at Quintic Order:
Five-Tachyon Contact Term and Dilaton Theorem,” JHEP 03 (2007)
043, http://www.arXiv.org/abs/hep-th/0609209 hep-th/0609209.
Moeller:2007mu
N. Moeller, “Closed Bosonic String Field Theory at Quintic Order. II.
Marginal Deformations and Effective Potential,” JHEP 09 (2007)
118, http://www.arXiv.org/abs/0705.2102 0705.2102.
Moeller:2006cv
N. Moeller and H. Yang, “The Nonperturbative closed string tachyon vacuum to
high level,” JHEP 04 (2007) 009,
http://www.arXiv.org/abs/hep-th/0609208 hep-th/0609208.
Batalin:1981jr
I. A. Batalin and G. A. Vilkovisky, “Gauge Algebra and Quantization,”
Phys. Lett. B 102 (1981) 27–31.
Schwarz:1992nx
A. S. Schwarz, “Geometry of Batalin-Vilkovisky quantization,” Commun.
Math. Phys. 155 (1993) 249–260,
http://www.arXiv.org/abs/hep-th/9205088 hep-th/9205088.
Henneaux:1992ig
M. Henneaux and C. Teitelboim, Quantization of gauge systems.
|
http://arxiv.org/abs/2409.03040v1 | 20240904191912 | A method for site-specifically tethering the enzyme urease to DNA origami with sustained activity | [
"Ian Murphy",
"Keren Bobilev",
"Daichi Hayakawa",
"Eden Ikonen",
"Thomas E. Videbæk",
"Shibani Dalal",
"Wylie W. Ahmed",
"Jennifer L. Ross",
"W. Benjamin Rogers"
] | physics.bio-ph | [
"physics.bio-ph"
] |
A method for site-specifically tethering the enzyme urease to DNA origami with sustained activity
Ian Murphy1,
Keren Bobilev1,
Daichi Hayakawa1,
Eden Ikonen1,
Thomas E. Videbæk1,
Shibani Dalal1,
Wylie W. Ahmed2,3,4,5,
Jennifer L. Ross6,
W. Benjamin Rogers1
1 Martin A. Fisher School of Physics, Brandeis University, Waltham, MA 02453 USA
2 Laboratoire de Physique Théorique (LPT), Université Paul Sabatier, 31400 Toulouse, France
3 Molecular, Cellular and Developmental biology unit (MCD), Centre de Biologie Intégrative (CBI), 31400 Toulouse, France
4 Centre National de la Recherche Scientifique (CNRS), Université Paul Sabatier, 31400 Toulouse, France
5 Department of Physics, California State University, Fullerton, CA 92831 USA
6 Department of Physics, Syracuse University, Syracuse, NY 13244 USA
§ ABSTRACT
Attaching enzymes to nanostructures has proven useful to the study of enzyme functionality under controlled conditions and has led to new technologies. Often, the utility and interest of enzyme-tethered nanostructures lie in how the enzymatic activity is affected by how the enzymes are arranged in space. Therefore, being able to conjugate enzymes to nanostructures while preserving the enzymatic activity is essential. In this paper, we present a method to conjugate single-stranded DNA to the enzyme urease while maintaining enzymatic activity. We show evidence of successful conjugation and quantify the variables that affect the conjugation yield. We also show that the enzymatic activity is unchanged after conjugation compared to the enzyme in its native state. Finally, we demonstrate the tethering of urease to nanostructures made using DNA origami with high site-specificity. Decorating nanostructures with enzymatically-active urease may prove to be useful in studying, or even utilizing, the functionality of urease in disciplines ranging from biotechnology to soft-matter physics. The techniques we present in this paper will enable researchers across these fields to modify enzymes without disrupting their functionality, thus allowing for more insightful studies into their behavior and utility.
§ INTRODUCTION
Enzymes are proteins of great interest owing to their ability to catalyze chemical reactions. Without the presence of a catalyst, many biological reactions spontaneously occur with half-times ranging from seconds to millions of years <cit.>. Enzymes accelerate chemical reactions by orders of magnitude <cit.> thanks to their substrate-specific active sites, which have evolved to efficiently target unique molecular conformations <cit.>. Various industries—including pharmaceutical, chemical, and agricultural—use enzymes to decrease production times and costs <cit.>. Thus, understanding the function of enzymes is necessary for making use of their potential applications in industrial and academic settings.
Attaching enzymes to synthetic nanostructures has emerged as a promising approach to studying enzyme biophysics and developing new technologies, such as diagnostics. For example, attaching enzymes to magnetic, nanoscale particles has enabled researchers to develop new diagnostic tools for early indication of cancerous cells <cit.>. Conjugating enzymes to DNA-based nanometer- and micrometer-scale particles has led to the discovery of enhanced diffusion of the conjugate in the presence of the enzyme's substrate, opening new pathways to making enzyme-powered active colloids <cit.>. Finally, DNA origami <cit.>, a molecular engineering technology capable of precisely controlling the locations of different enzymes with respect to one another <cit.>, has led to a better understanding of coupled enzymatic reactions <cit.> and the roles of motor proteins, like dynein and kinesin, in intracellular transport <cit.>.
The above applications hinge on the ability to site-specifically functionalize nanostructures or macromolecular complexes with active enzymes. However, general, easy-to-follow protocols for conjugating enzymes to nanostructures are limited. Furthermore, the available synthesis approaches obscure the importance of the various details of the protocol; the choices of crosslinker-to-enzyme ratio, the solution conditions, and the purification steps used therein are not often investigated or mentioned. Since many applications make use of the enzymatic activity of the enzyme, it is critical to retain the enzymatic activity throughout the conjugation procedure, an aspect that is only occasionally discussed <cit.>.
In this paper, we present a method to conjugate single-stranded DNA to the enzyme urease while maintaining enzymatic activity, and use the resultant method to bind the DNA-labeled urease to DNA origami with high site-specificity. The method uses a two-step protocol to attach single-stranded DNA to cysteines on the surface of urease using the thiol-maleimide reaction and click chemistry (Fig <ref>A,B). We show proof of a successful DNA-urease conjugation and investigate different factors affecting the conjugation yield. We find that the ratio of crosslinker to enzyme should be comparable to the number of sites on the enzyme complex to maximize yield. Contrary to many other protocols, we also show that the presence of a common reducing agent during the crosslinker incubation can significantly reduce the conjugation yield. Additionally, we find that the presence of salt during the final DNA conjugation step is necessary to achieve appreciable yields, owing to its role in screening the electrostatic interactions between the protein and the DNA. We conclude by demonstrating the ability to bind urease to a six-helix bundle made via DNA origami at user-specified locations with nanometer precision (Fig <ref>C), confirmed with transmission electron microscopy. We believe that this experimental system holds promise for elucidating the fundamental behavior and utility of enzymes, and we hope that this paper serves as a useful guide for those hoping to utilize this approach in their field of study.
§ MATERIALS AND METHODS
§.§ Conjugation of single-stranded DNA to urease
In brief, our synthesis scheme uses a commercially available heterobifunctional crosslinker, azido-PEG3-maleimide (Vector Laboratories, CCT-AZ107), to connect synthetic single-stranded DNA to surface-exposed cysteine (or thiol) groups on the enzyme jack bean urease (Fig <ref>A,B). We use urease due to its ubiquity in enzyme studies, its relative durability, and its high turnover rate <cit.>. To conjugate single-stranded DNA to urease, we begin by combining urease with azido-PEG3-maleimide. The maleimide functional group on the crosslinker reacts with the cysteine amino acid side chain on the surface of the urease, forming a thiosuccinimide linkage. We then add single-stranded DNA modified with a dibenzocyclooctyne (DBCO) molecule (DBCO-DNA) to undergo a strain-promoted click reaction with the azide functional group on the other end of the crosslinker <cit.>. Refer to Fig <ref>B for a diagram of the conjugation scheme. We verify the conjugation using polyacrylamide gel electrophoresis and quantify the enzymatic activity using colorimetry.
The first step of the conjugation is to reduce the thiol groups on the surface of the urease. We dissolve powdered jack bean urease (Canavalia ensiformis (TCI)) to 55 µM in 1xPBS buffer at pH 6.7. We add 20 mM of reducing agent tris(2-carboxyethyl)phosphine (TCEP) at pH 7 to the urease at a 10-fold molar ratio of TCEP per urease hexamer, briefly vortex, and allow the solution to incubate in the dark for 1 h. Meanwhile, we add 500 µL of 1xPBS at pH 6.7 to a 0.5 mL Amicon 100-kDa filter and centrifuge the filter at 9,400 xg for 8 minutes at room temperature (Fisherbrand accuSpin Micro 17) to equilibrate the column. We use 100-kDa filters because they allow the TCEP (286 Da) to pass through but not the urease hexamers (550 kDa). After the TCEP has incubated with the urease, we add the urease to the prewashed filter and wash it three times with fresh 1xPBS at pH 6.7 each time. Washing the TCEP out of the urease solution is essential for ensuring good conjugation with our crosslinker, as we will show later.
We then react the TCEP-reduced urease with the crosslinker. We start by measuring the urease concentration after washing using a NanoDrop 2000c spectrophotometer (Thermofisher) with the A280 function, dilute it back to 55 µM in 1xPBS at pH 6.7, and move it into a 4-mL flat-bottom glass vial with a stir bar. While gently stirring, we slowly add azido-PEG3-maleimide in DMSO at a 5-fold crosslinker-to-urease-hexamer ratio, with final concentrations of 249 µM crosslinker and 50 µM urease. We cover the tube in foil and allow it to gently stir for 2 h at room temperature.
Next, we remove unreacted azido-PEG3-maleimide and conjugate the single-stranded DNA to the urease-azide. After two hours, we run the solution through a 0.5-mL, 40-kDa spin desalting column (Thermo Scientific PIA57759) to remove the unbound crosslinker. We collect the washed azide-urease solution and mix it with custom single-stranded DNA (IDT, 5’-TTTTTAACCATTCTCTTCCT-3’, DBCO-modified on the 5' and Cy5-modified on the 3') at a 10-fold DNA to urease hexamer ratio in 1xPBS with a final NaCl concentration of 637 mM (1xPBS has 137 mM NaCl, and we add an additional 500 mM). We incubate this solution overnight in a 40^∘C rotating incubator (Roto-Therm; Benchmark Scientific). After the DNA incubation, we wash the solutions in 0.5-mL Amicon 100-kDa MWCO filters, similar to the TCEP reduction step, to remove unbound DNA (7 kDa), which passes through the filters.
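To make the mixing arithmetic of the crosslinker step concrete, the following minimal sketch runs through the dilution calculation in Python. The urease concentration and the 5-fold molar ratio are taken from the protocol above; the reaction volume and the crosslinker stock concentration are hypothetical placeholders and should be replaced by the values of your own preparation.

```python
# Dilution arithmetic for the crosslinker step, as a quick sanity check.
urease_uM = 55.0       # urease hexamer after the TCEP wash (from the protocol)
reaction_uL = 500.0    # hypothetical starting volume
ratio = 5.0            # crosslinker : urease hexamer (molar, from the protocol)
stock_uM = 2750.0      # hypothetical 2.75 mM crosslinker stock in DMSO

urease_nmol = urease_uM * reaction_uL * 1e-3          # nmol of urease hexamer
crosslinker_nmol = ratio * urease_nmol                # nmol of crosslinker needed
stock_uL = crosslinker_nmol / (stock_uM * 1e-3)       # volume of stock to add

total_uL = reaction_uL + stock_uL
print(f"add {stock_uL:.0f} uL of stock")                                 # 50 uL
print(f"final urease: {urease_nmol / total_uL * 1e3:.0f} uM")            # 50 uM
print(f"final crosslinker: {crosslinker_nmol / total_uL * 1e3:.0f} uM")  # 250 uM
```

With these placeholder values, the script reproduces final concentrations close to the 249 µM crosslinker and 50 µM urease quoted in the protocol above.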
§.§ Native polyacrylamide gel electrophoresis
We verify the success of our conjugation using native polyacrylamide gel electrophoresis (PAGE). Invitrogen NuPAGE 3–8% Tris-Acetate 1.0 mm mini protein gels were purchased and the manufacturer's instructions were followed to perform PAGE. The running buffer is a 1X solution of Tris-Glycine diluted from a 10X stock (240 mM Tris Base, 1.9 M glycine). The samples are mixed as-is after the conjugation protocol in a 3:1 ratio with 4X loading dye (0.5 M Tris-Cl pH 6.8, 40% glycerol, 0.1% (wt/vol) bromophenol blue). We load approximately 5 mg/mL of each sample into the wells. For reference, we load 3 µL of NativeMark Unstained Protein Standard (Invitrogen). We run the gel at 150 V for 2.5 h at 4^∘C. We remove the gel and place it in a Typhoon FLA 9500 laser scanner (GE), then scan it for fluorescence of the Cy5 dye, which is conjugated to DNA. We move the gel into a gel box with Coomassie stain (0.1% (wt/vol) Coomassie R-250, 30% methanol, 5% acetic acid). The gel stains for at least 1 h, after which time we pour off the Coomassie stain and fill the box with a destaining solution (20% methanol, 10% acetic acid). Once the protein bands are clearly visible, we scan the gel in a ChemiDoc MP Imaging System (Bio-Rad).
§.§ SDS polyacrylamide gel electrophoresis
We use SDS PAGE to complement the results from our native PAGE experiments, ensuring that protein shape does not impact the results. We prepare a 4% stacking gel on top of an 8% denaturing, resolving gel between two gel plates. We load approximately 3 mg/mL of each sample into the wells. For reference, we load 3 µL of PageRuler Plus Prestained Protein Ladder (ThermoFisher Scientific). We run the gel at 50 V for 30 minutes at room temperature, then increase the voltage to 110 V and run for another hour. When the dye front runs out of the gel, we remove the gel and place it in the above laser scanner, then scan it for fluorescence in the Cy5 range. We move the gel into a gel box with Coomassie stain. The gel stains for at least 1 h, after which time we pour off the Coomassie stain and fill the box with a destaining solution. Once the protein bands are clearly visible, we scan the gel in a ChemiDoc MP Imaging System.
§.§ DNA origami-urease conjugation and negative-stain transmission electron microscopy
To further verify the conjugation of DNA to urease and to visualize the conjugation of urease to DNA origami, we use transmission electron microscopy (TEM). We mix DNA-conjugated urease with DNA origami at up to a 10-fold molar ratio, and dilute the solution to a final DNA origami concentration of 1 nM. We incubate the samples on glow-discharged FCF400-Cu TEM grids (Electron Microscopy Sciences) for 60 seconds at room temperature. We then stain the grids with 2% uranyl formate solution at 20 mM NaOH for 30 seconds. To image the samples, we use an FEI Morgagni TEM at 80 kV with a Nanosprint5 CMOS camera (AMT).
§.§ Enzyme activity assays
We use a colorimetric assay to measure the enzymatic activity after conjugation. To assess the enzymatic activity of urease, we use phenol red (Sigma-Aldrich 114529) as a means of colorimetric analysis. Urease hydrolyzes urea to produce weak base ammonia as one of the products. Ammonia increases the pH of the solution, and phenol red changes color from marigold yellow to vivid cerise between pH 6.8 and 8.2 (see Fig <ref>). Thus, the activity of urease can be quantified by measuring the change in absorbance of the solution at 560 nm <cit.>. We prepare samples at a final concentration of 28 µM phenol red and 5 mM urea in a 1xPBS buffer at pH 6.7. We add urease to these solutions, either native urease or DNA-modified urease, to a final concentration of 5 nM. We reserve native urease from the conjugation protocol prior to any modifications. For reference, we include blanks with the same phenol red and urease concentrations without urea. Immediately after the addition of urease, we seal the plate with a transparent plate sealer (Thermo Scientific 1424518), and we move the plate to an EPOCH2 microplate reader (BioTek) that takes absorbance measurements of each sample at 560 nm every 60 seconds for up to 90 minutes at 28^∘C.
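One simple way to reduce these traces to a comparable activity number is to fit the early, approximately linear part of the blank-subtracted A560 curve. The sketch below illustrates this with made-up readings (one per minute) and is not tied to the plate reader software.

```python
import numpy as np

# Minimal sketch: estimate an initial rate from a blank-subtracted A560 trace.
# The arrays below are hypothetical placeholders; in practice they come from
# the plate reader export.
t_min = np.arange(0, 10)                         # first 10 minutes
a560 = np.array([0.02, 0.05, 0.08, 0.11, 0.14,
                 0.17, 0.20, 0.23, 0.26, 0.29])  # hypothetical readings

# Linear fit over the early, approximately linear regime
slope, intercept = np.polyfit(t_min, a560, 1)
print(f"initial rate: {slope:.3f} absorbance units / min")
```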
§ RESULTS AND DISCUSSION
§.§ Conjugation of single-stranded DNA to urease
§.§.§ Verification of conjugation using PAGE
To determine the success of conjugating single-stranded DNA to the surface of urease, we employ native and denaturing PAGE. We run native urease, azide-modified urease, and DNA-modified urease samples from the same conjugation protocol, as well as a protein ladder. The DNA is labeled on its 3’-end with a Cy5 fluorescent dye in order to visualize the DNA in these gels.
We infer the success of our conjugation, as well as the number of conjugated DNA molecules, from the band position and intensity. Figure <ref>A shows a scan of the native PAGE gel of our samples for one experimental condition. The primary protein bands of the native and azide-modified urease samples run similarly to the 480-kDa band in the ladder (Fig <ref>A “N”, “A”). This molecular weight is consistent with the previous observations of native urease with molecular weights between 480 kDa and 590 kDa that are indicative of a native hexameric form <cit.>. There is a slight shift up in the azide-modified urease band compared to the native urease band, which may be due to the non-negligible molecular weight of the crosslinker (369.37 Da). The primary protein band of the DNA-modified urease runs above the 750 kDa band in the ladder (Fig <ref>A “D”). We attribute this apparent increase in the molecular weight of the protein to the conjugated DNA, which itself has a molecular weight of about 7 kDa. It is also worth noting that the shape and charge of molecules affect how they run through the gel, making it difficult to predict how the DNA-enzyme conjugate will run compared to globular proteins, for example.
To confirm the conjugation, we also scan the same gel for Cy5 fluorescence, shown as the DNA scan in Fig <ref>A. The DNA-modified urease band shows a strong fluorescent signal, while the native and azide-modified urease bands show no signal. We thus conclude that we have successfully conjugated single-stranded DNA to urease.
Given that the tertiary and quaternary structures of proteins can change and influence native PAGE results, we also run these samples in a denaturing gel that breaks urease oligomers down to their monomeric form and linearizes them. The only reason the protein band would shift between samples in a denaturing gel is due to the attachment of another molecule since the amino-acid sequence is the same for all samples. Figure <ref>B shows the protein and DNA scans for an SDS PAGE experiment. Again, there is a shift in the bands of the DNA-modified sample compared to the bands in the native and azide-modified samples. There is also a clear fluorescent signal in the largest, most prominent shifted band in the Cy5 channel, indicating that DNA is conjugated to the proteins in that sample. By eliminating protein shape as a variable in how the gels run, these results confirm the covalent conjugation of DNA to urease.
In addition to confirming the conjugation of DNA to urease, we use PAGE gels to quantify the influence of some of the most important variables affecting the final yield. In particular, we test the effects of the molar ratio of crosslinker to urease in the first step, the amount of salt in the final reaction step, and the presence of a reducing agent in the crosslinker incubation. The amount of DNA bound to the urease is inferred by comparing how far the protein bands of different samples run in the gel, as well as the ratio of the intensity of the conjugate bands from the protein and DNA dyes.
§.§.§ Effect of crosslinker molar ratio on conjugation yield
The molar ratio of crosslinker to urease has a large effect on the success of conjugation, with too much or too little crosslinker negatively impacting the final yield. We consider the molar ratio of the crosslinker compared to the urease hexamer. Figure <ref>C shows intensity scans of a native PAGE gel for urease samples incubated with various molar ratios of crosslinker during the conjugation protocol. As expected, native urease, azide-modified urease, and urease mixed with pure DMSO in place of crosslinker (0x) all have comparatively similar molecular weights, since no DNA is introduced or able to bind to these enzymes. As the molar ratio increases from 1x to 10x, there is a steady increase in the molecular weight of the primary protein band. We attribute this observation to an increasing amount of crosslinker, and thus DNA, binding to the urease. Above a molar ratio of 10x, the band position appears to saturate or even decrease.
We also consider the signal intensities within the PAGE bands and what they can tell us about the conjugation yield. Figure <ref>D shows a plot of the ratios of the integrated densities of the DNA signal to the protein signal within the main band of each molar-ratio condition, normalized to the ratio at 1x. We see that these relative signals follow the same trend as the running distances, with an increasing shift in the band position corresponding to an increasing relative signal of DNA to protein. Moreover, we observe that the relative DNA to protein signal increases linearly with increasing molar ratio up to 4x.
Our result indicating an optimal crosslinker to urease hexamer molar ratio of about 10x is consistent with a rough estimate of the number of accessible thiol groups. Based on its structure, we estimate that a urease hexamer has roughly 12 potential thiol groups on its surface for labeling <cit.>. This number is comparable to the molar ratios at which we observe a plateau or even dip in the shifts of the DNA-labeled protein bands. Therefore, given the linear dependence of the relative DNA signal and the optimal molar ratio of 10x, we suspect that the yield of the thiol-maleimide reaction in the first step of our conjugation scheme is quite high.
§.§.§ Increasing salt concentration improves the conjugation yield
We demonstrate that increasing the salt concentration in the final DNA incubation reaction increases the conjugation efficiency of the second step of our scheme. Figure <ref>A shows the protein bands for various urease samples at 137 mM, 187 mM, and 637 mM NaCl in the final solution. When no DBCO-DNA is added to the final reactions, there is no difference in molecular weight between the samples with different NaCl concentrations. However, when DBCO-modified DNA is added, there is a clear increase in the molecular weight of the samples as the NaCl concentration increases. The plot in Fig <ref>B shows the ratios of the DNA signal to the protein signal within the main band of each salt condition, normalized to the ratio at 137 mM NaCl. The conjugation yield increases by a factor of roughly 1.6, going from 137 mM NaCl to 187 mM NaCl, and by approximately three-fold from 137 mM NaCl to 637 mM NaCl. Therefore, we conclude that the NaCl concentration plays an important role in determining the success and yield of the DNA-to-urease conjugation step.
We attribute the increase in conjugation efficiency to the salt-dependent screening of the negative charges on the DNA and the enzyme. Both DNA and urease are negatively charged at neutral pH, given that urease’s isoelectric point is about 5.1 <cit.>. We hypothesize that the addition of salt screens the electrostatic repulsion between the two negatively charged molecules, enabling them to come closer together and thus resulting in more conjugation events. Other protocols conjugating DNA to urease do not include additional salt and are mostly done in 1xPBS, which has a NaCl concentration of 137 mM and a KCl concentration of 2.7 mM <cit.>. Therefore, compared to existing protocols in the literature, our findings suggest that the conjugation efficiency can be improved roughly three-fold by using monovalent salt concentrations above 600 mM.
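A back-of-the-envelope estimate supports this picture: for a 1:1 electrolyte in water at room temperature, the Debye screening length is approximately 0.304 nm divided by the square root of the ionic strength in molar units. The short script below evaluates this for the three NaCl concentrations used here, neglecting the minor contribution of the other buffer ions.

```python
import math

# Approximate Debye screening lengths for the three NaCl conditions, using
# lambda_D [nm] ~ 0.304 / sqrt(I [M]) for a 1:1 electrolyte in water at 25 C.
for nacl_mM in (137, 187, 637):
    ionic_strength_M = nacl_mM / 1000.0
    debye_nm = 0.304 / math.sqrt(ionic_strength_M)
    print(f"{nacl_mM} mM NaCl -> Debye length ~ {debye_nm:.2f} nm")
# 137 mM -> ~0.82 nm, 187 mM -> ~0.70 nm, 637 mM -> ~0.38 nm
```

Raising the NaCl concentration from 137 mM to 637 mM thus roughly halves the screening length, consistent with the hypothesis that reduced electrostatic repulsion between the DNA and the urease improves the coupling yield.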
§.§.§ The presence of TCEP significantly reduces crosslinking efficiency
Finally, we report a subtle but important detail in the protocol to achieve higher yield: The presence of the reducing agent tris(2-carboxyethyl)phosphine (TCEP) significantly impedes the conjugation of DNA to urease. We incubate urease with TCEP prior to incubation with the crosslinker because it cleaves any disulfide bonds at the urease surface, thus freeing up the thiol groups for conjugation with the maleimide-end of the crosslinker. After this incubation, washing TCEP out of the solution with 1xPBS using a molecular weight cutoff filter dramatically improves the conjugation success. Using PAGE gel analysis, we observe that the urease from the reaction with TCEP removed (-TCEP) is of a higher molecular weight than native urease and urease from the reaction with TCEP not removed (+TCEP) (see Fig <ref>C). Indeed, the yield increases roughly four-fold when we remove TCEP compared to when we do not remove TCEP, as determined by the relative DNA signals.
We attribute the importance of removing TCEP to the fact that the presence of TCEP may reduce the ability of the maleimide-functionalized crosslinker to react with the thiol groups on the urease. This result is consistent with a previous report demonstrating that TCEP can react with maleimide groups to form non-reactive byproducts <cit.>. It is also consistent with a report detailing how the presence of TCEP significantly reduced the conjugation efficiency of maleimide labeling <cit.>. However, many publications do not mention that this consideration should be made <cit.>, and some online protocols <cit.> state that it is unimportant to remove TCEP after reducing the disulfide bonds, which is inconsistent with our observations. Our experiments clearly show that removing TCEP is a necessary step in achieving successful urease-DNA conjugation using a maleimide functional group.
§.§ Enzyme-activity assay
We verify that the enzymes remain active after DNA conjugation using phenol red. The absorbance of phenol red at 560 nm increases as the pH of the solution increases from 6.8 to 8.2. When urease catalyzes the decomposition of urea, it increases the pH of its solution by creating ammonia. Therefore, an increase in the absorbance at 560 nm of solutions containing urease, urea, and phenol red indicates enzymatic activity <cit.> (Fig <ref>). Each dataset is blank-subtracted using controls of the same urease and phenol red concentrations with no urea present.
We find that the conjugation of DNA to urease does not change the enzymatic activity and that the activity is largely independent of the DNA labeling efficiency. Figure <ref> shows the absorbance change of native urease (NU) and DNA-modified ureases incubated with varying molar ratios of crosslinker. There is no clear trend in the activity as the molar ratio changes. These results indicate that the activity of urease does not depend on crosslinker molar ratios between 1x and 4x, and that the activity is preserved compared to native urease.
Our results are at odds with other published experiments that demonstrate a decrease in activity for some enzymes as the number of labeled crosslinkers per protein increases <cit.>. We speculate that we may not see this trend using our protocol for a few reasons. First, the prior experiments did not use urease (instead using glucose oxidase, horseradish peroxidase, lactate dehydrogenase, alkaline phosphatase, and glucose-6-phosphate dehydrogenase), and the impact of labeling on the activity may depend sensitively on the details of the active site and where labeling occurs relative to it. Second, the specific chemical group used for conjugation may also influence the resultant activity. For example, our method targets thiol groups on the protein surface, while the previous study targets amine groups using the crosslinker succinimidyl 3-(2-pyridyldithio)propionate (SPDP). For these reasons, we argue that it is important to directly confirm the activity of any new combination of enzyme and crosslinker, and no single conjugation protocol is likely to be a magic bullet.
§.§ Conjugating urease to DNA origami
Because the ultimate motivation for developing this synthesis scheme is to utilize DNA hybridization to make enzyme-tethered DNA nanostructures, we finally confirm that our DNA-enzyme constructs can be site-selectively bound to DNA origami.
We design and fold the DNA origami into a six-helix bundle <cit.>. The resulting rod-like structure is folded from an 8064-nucleotide circular DNA scaffold and has a diameter of approximately 8 nm and a length of around 450 nm. Figure <ref>A shows a diagram of the six-helix bundle, highlighting the DNA organization within the structure. Five thymines extend from the ends of the helices and prevent base-pair stacking between six-helix bundles. For conjugating urease to this DNA origami, we extrude single-stranded DNA from the structure at specific locations. We design the strands to come out at roughly 85-nm intervals at six locations along the full length of the six-helix bundle. The strands, with a binding domain of sequence 5'-AGGAAGAGAATGGTT-3', are complementary to the strand conjugated to the urease. Five thymines precede the binding domain toward the 5'–end to act as a spacer between the handle and the DNA origami structure. By mixing urease with the complementary DNA strand attached in a 10:1 ratio of urease to DNA origami, the two join together via DNA hybridization.
Using transmission electron microscopy (TEM), we see direct visual evidence of enzymes bound to DNA origami. Figure <ref>B shows TEM micrographs of DNA-origami rods with DNA-modified urease bound to them at regular intervals along the length of the rod. The bottom image in the series shows urease bound at each end of the DNA origami. Of the 305 DNA origami rods we imaged, we found urease bound to 180 of them, a binding efficiency of approximately 60%. For all DNA origami with a bound urease, no more than one urease was bound to a single structure and we did not detect any off-target bindings. These observations are consistent with the fact that each DNA origami was folded to possess at most one of six possible anchor strands, except for the last scenario in Fig <ref>B which allows for two bindings.
We speculate that the binding efficiency may be hampered by binding-site inactivation or slow kinetics. While we wash the unreacted DNA oligomers out of the solution at the end of the conjugation (as outlined in the first section of the Methods), it is possible that some free DNA strands remain and can compete to bind to the DNA origami. It is also possible that the binding of urease-DNA to DNA origami is slow and that we do not wait long enough for all of the potential binding sites to be occupied at the concentrations we use for incubation. The samples are mixed with a final DNA origami concentration close to 1 nM and a final urease concentration close to 10 nM. Calculation of the dissociation constant for our DNA binding domain at room temperature, 5 mM MgCl_2, and 50 mM NaCl yields approximately 1 fM <cit.>. Only at concentrations below roughly 1 fM would we expect these strands to dissociate appreciably, so our mixing concentrations, which exceed the dissociation constant by six orders of magnitude, are high enough. We allow our combined samples to incubate for 45 minutes prior to negative-staining; therefore, if the binding is slow, increasing the incubation time may improve the binding efficiency.
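This reasoning can be checked quantitatively by treating the hybridization as simple bimolecular binding A + B <-> AB and solving the equilibrium exactly. The sketch below uses the concentrations and dissociation constant quoted above and deliberately ignores kinetics and competing strands.

```python
import math

# Equilibrium fraction of DNA origami carrying a hybridized urease-DNA strand,
# from the exact solution of the binding quadratic for A + B <-> AB.
Kd = 1e-15   # ~1 fM, the dissociation constant quoted above (M)
A0 = 1e-9    # total DNA origami (M)
B0 = 1e-8    # total urease-DNA (M)

s = A0 + B0 + Kd
AB = (s - math.sqrt(s * s - 4.0 * A0 * B0)) / 2.0
print(f"fraction of origami bound: {AB / A0:.6f}")   # essentially 1, i.e. ~100%
```

At equilibrium essentially all origami should carry a urease, so the observed 60% efficiency indeed points to kinetics or inactivated binding sites rather than thermodynamics.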
§ CONCLUSION
In summary, we developed a protocol to conjugate urease to DNA origami nanostructures with high site-specificity while maintaining enzymatic activity. Enzyme-bound nanostructures offer an interesting way to examine and utilize the unique behaviors of enzymes, thus understanding how to modify enzymes without significantly reducing their activity is essential. To the best of our knowledge, no comprehensive study has investigated the optimal conditions for labeling urease with single-stranded DNA while maintaining its activity. In this paper, we have presented a detailed protocol describing how to perform this conjugation, and how to verify its success. We observe that the crosslinker azide-PEG3-maleimide offers us a way to modify urease with an azide which can be reacted with DBCO-modified DNA to form DNA-labeled urease. We found that a ten-fold molar ratio of crosslinker to urease hexamer results in the highest labeling yield. We also found that the presence of a popular reducing agent can impede conjugation of the crosslinker, contrary to some conjugation protocols. We showed that increasing the monovalent salt concentration during the DNA reaction step can increase conjugation yield. Critically, our conjugation protocol does not reduce the enzymatic activity of the urease compared to its native state. We are thus able to bind enzymatically-active urease to the surface of DNA origami nanostructures. Thanks to the programmability of DNA origami, we can bind urease to its surface with great site-specificity. We believe that our thorough examination of these factors will enable researchers to utilize this experimental system, and hope that this protocol offers guidance in this regard.
§ ACKNOWLEDGMENTS
We thank Zachary Curtis from the Bisson Lab for assistance with operating the plate reader. We thank Bryan Gworek for assistance in debugging the early stages of the project. TEM samples were prepared and imaged at the Brandeis Electron Microscopy Facility.
Funding: This work was funded in-part by NSF Enzyme-Powered, Programmable Active Matter DMR-2004400 to WBR, DMR-2004566 to WWA, DMR-2004417 to JLR, and with support from the Brandeis Biological Materials Facility, which is funded through NSF Brandeis MRSEC DMR-2011846. WBR also acknowledges funding from the Human Frontier Science Program (RGP0029).
§ REFERENCES
snider2000rate
Snider MJ, Wolfenden R.
The rate of spontaneous decarboxylation of amino acids.
Journal of the American Chemical Society. 2000;122(46):11507–11508.
radzicka1995proficient
Radzicka A, Wolfenden R.
A proficient enzyme.
Science. 1995;267(5194):90–93.
zhang2005enzymes
Zhang X, Houk K.
Why enzymes are proficient catalysts: beyond the Pauling paradigm.
Accounts of Chemical Research. 2005;38(5):379–385.
choi2015industrial
Choi JM, Han SS, Kim HS.
Industrial applications of enzyme biocatalysis: Current status and future aspects.
Biotechnology Advances. 2015;33(7):1443–1454.
li2012technology
Li S, Yang X, Yang S, Zhu M, Wang X.
Technology prospecting on enzymes: application, marketing and engineering.
Computational and Structural Biotechnology Journal. 2012;2(3):e201209017.
liu2013achieve
Liu L, Yang H, Shin Hd, Chen RR, Li J, Du G, et al.
How to achieve high-level expression of microbial enzymes: strategies and perspectives.
Bioengineered. 2013;4(4):212–223.
wang2020ph
Wang D, Zhao X, Wei Y, Xue W, Xu Z.
A pH-responsive colorimetric detection of human telomerase RNA based on a three-dimensional DNA amplifier.
Analytica Chimica Acta. 2020;1111:67–74.
patino2024synthetic
Patiño Padial T, Del Grosso E, Gentile S, Baranda Pellejero L, Mestre R, Paffen LJ, et al.
Synthetic DNA-based Swimmers Driven by Enzyme Catalysis.
Journal of the American Chemical Society. 2024;146(18):12664–12671.
ma2015enzyme
Ma X, Jannasch A, Albrecht U-R, Hahn K, Miguel-López A, Schaffer E, Sánchez S.
Enzyme-powered hollow mesoporous Janus nanomotors.
Nano Letters. 2015;15(10):7043–7050.
rothemund2006folding
Rothemund PW.
Folding DNA to create nanoscale shapes and patterns.
Nature. 2006;440(7082):297–302.
Douglas2009May
Douglas SM, Dietz H, Liedl T, Högberg B, Graf F, Shih WM.
Self-assembly of DNA into nanoscale three-dimensional shapes.
Nature. 2009;459(7245):414–418.
doi:10.1038/nature08016.
Dietz2009Aug
Dietz H, Douglas SM, Shih WM.
Folding DNA into Twisted and Curved Nanoscale Shapes.
Science. 2009;325(5941):725–730.
doi:10.1126/science.1174251.
castro2011primer
Castro CE, Kilchherr F, Kim DN, Shiao EL, Wauer T, Wortmann P, et al.
A primer to scaffolded DNA origami.
Nature Methods. 2011;8(3):221–229.
Gerling2015Mar
Gerling T, Wagenbauer KF, Neuner AM, Dietz H.
Dynamic DNA devices and assemblies formed by shape-complementary, non–base pairing 3D components.
Science. 2015;347(6229):1446–1452.
doi:10.1126/science.aaa5372.
baker2018dimensions
Baker MA, Tuckwell AJ, Berengut JF, Bath J, Benn F, Duff AP, et al.
Dimensions and global twist of single-layer DNA origami measured by small-angle X-ray scattering.
ACS Nano. 2018;12(6):5791–5799.
sigl2021programmable
Sigl C, Willner EM, Engelen W, Kretzmann JA, Sachenbacher K, Liedl A, et al.
Programmable icosahedral shell system for virus trapping.
Nature Materials. 2021;20(9):1281–1289.
hayakawa2022geometrically
Hayakawa D, Videbæk TE, Hall DM, Fang H, Sigl C, Feigl E, et al.
Geometrically programmed self-limited assembly of tubules using DNA origami colloids.
Proceedings of the National Academy of Sciences. 2022;119(43):e2207902119.
funke2016placing
Funke JJ, Dietz H.
Placing molecules with Bohr radius resolution using DNA origami.
Nature Nanotechnology. 2016;11(1):47–52.
Schnitzbauer2017Jun
Schnitzbauer J, Strauss MT, Schlichthaerle T, Schueder F, Jungmann R.
Super-resolution microscopy with DNA-PAINT.
Nature Protocols. 2017;12(6):1198–1228.
doi:10.1038/nprot.2017.024.
kahn2022cascaded
Kahn JS, Xiong Y, Huang J, Gang O.
Cascaded enzyme reactions over a three-dimensional, wireframe DNA origami scaffold.
Jacs Au. 2022;2(2):357–366.
fu2016assembly
Fu J, Yang YR, Dhakal S, Zhao Z, Liu M, Zhang T, et al.
Assembly of multienzyme complexes on DNA nanostructures.
Nature Protocols. 2016;11(11):2243–2273.
muller2008dna
Müller J, Niemeyer CM.
DNA-directed assembly of artificial multienzyme complexes.
Biochemical and Biophysical Research Communications. 2008;377(1):62–67.
sun2017real
Sun L, Gao Y, Xu Y, Chao J, Liu H, Wang L, et al.
Real-time imaging of single-molecule enzyme cascade using a DNA origami raft.
Journal of the American Chemical Society. 2017;139(48):17525–17532.
derr2012tug
Derr ND, Goodman BS, Jungmann R, Leschziner AE, Shih WM, Reck-Peterson SL.
Tug-of-war in motor protein ensembles revealed with a programmable DNA origami scaffold.
Science. 2012;338(6107):662–665.
goodman2014engineering
Goodman BS, Reck-Peterson SL.
Engineering defined motor ensembles with DNA origami.
Methods in Enzymology. 2014;540:169–188.
balasubramanian2010crystal
Balasubramanian A, Ponnuraj K.
Crystal structure of the first plant urease from jack bean: 83 years of journey from its first crystal to molecular structure.
Journal of Molecular Biology. 2010;400(3):274–283.
eeftens2015copper
Eeftens JM, van der Torre J, Burnham DR, Dekker C.
Copper-free click chemistry for attachment of biomolecules in magnetic tweezers.
BMC Biophysics. 2015;8:1–7.
okyay2013high
Okyay TO, Rodrigues DF.
High throughput colorimetric assay for rapid urease activity quantification.
Journal of Microbiological Methods. 2013;95(3):324–326.
sumner1938molecular
Sumner JB, Gralen N, Eriksson-Quensel IB.
The molecular weight of urease.
Journal of Biological Chemistry. 1938;125(1):37–44.
dixon1980jack
Dixon NE, Hinds JA, Fihelly AK, Gazzola C, Winzor DJ, Blakeley RL, et al.
Jack bean urease (EC 3.5. 1.5). IV. The molecular size and the mechanism of inhibition by hydroxamic acids. Spectrophotometric titration of enzymes with reversible inhibitors.
Canadian Journal of Biochemistry. 1980;58(12):1323–1334.
sumner1929isoelectric
Sumner JB, Hand DB.
The isoelectric point of crystalline urease1.
Journal of the American Chemical Society. 1929;51(4):1255–1260.
chang2017detection
Chang D, Tram K, Li B, Feng Q, Shen Z, Lee CH, et al.
Detection of DNA amplicons of polymerase chain reaction using litmus test.
Scientific Reports. 2017;7(1):3110.
kantner2016characterization
Kantner T, Watts AG.
Characterization of reactions between water-soluble trialkylphosphines and thiol alkylating reagents: implications for protein-conjugation reactions.
Bioconjugate Chemistry. 2016;27(10):2400–2406.
getz1999comparison
Getz EB, Xiao M, Chakrabarty T, Cooke R, Selvin PR.
A comparison between the sulfhydryl reductants tris (2-carboxyethyl) phosphine and dithiothreitol for use in protein biochemistry.
Analytical Biochemistry. 1999;273(1):73–80.
scales2006fluorescent
Scales CW, Convertine AJ, McCormick CL.
Fluorescent labeling of RAFT-generated poly (N-isopropylacrylamide) via a facile maleimide- thiol coupling reaction.
Biomacromolecules. 2006;7(5):1389–1392.
cumnock2013trisulfide
Cumnock K, Tully T, Cornell C, Hutchinson M, Gorrell J, Skidmore K, et al.
Trisulfide modification impacts the reduction step in antibody–drug conjugation process.
Bioconjugate Chemistry. 2013;24(7):1154–1160.
liu2014intracellular
Liu P, Cai Z, Kang JW, Boyle AJ, Adams J, Lu Y, et al.
Intracellular routing in breast cancer cells of streptavidin-conjugated trastuzumab Fab fragments linked to biotinylated doxorubicin-functionalized metal chelating polymers.
Biomacromolecules. 2014;15(3):715–725.
fisherprotocol
Thiol-Reactive Probe Labeling Protocol;.
<https://www.thermofisher.com/us/en/home/references/protocols/cell-and-tissue-analysis/labeling-chemistry-protocols/thiol-reactive-probe-labeling-protocol.html>.
mathieu2005six
Mathieu F, Liao S, Kopatsch J, Wang T, Mao C, Seeman NC.
Six-helix bundles designed from DNA.
Nano Letters. 2005;5(4):661–665.
zadeh2011nupack
Zadeh JN, Steenberg CD, Bois JS, Wolfe BR, Pierce MB, Khan AR, et al.
NUPACK: Analysis and design of nucleic acid systems.
Journal of Computational Chemistry. 2011;32(1):170–173.
poppleton2022nanobase
Poppleton E, Mallya A, Dey S, Joseph J, Šulc P.
Nanobase.org: a repository for DNA and RNA nanostructures.
Nucleic Acids Research. 2022;50(D1):D246–D252.
wagenbauer_how_2017
Wagenbauer KF, Engelhardt FA, Stahl E, Hechtl VK, Stömmer P, Seebacher F, et al.
How we make DNA origami.
ChemBioChem. 2017;18(19):1873–1885.
§ SUPPORTING INFORMATION
§.§ DNA origami techniques
*Designing DNA origami
The full design for the six-helix bundle is available on Nanobase as structure 249 <cit.>.
*Folding DNA origami
Each DNA origami particle is folded by mixing 50 nM of p8064 scaffold DNA, which has 8064 nucleotides (Tilibit), with 200 nM each of the staple strands in folding buffer and annealing the mixture through a temperature ramp starting at 65^∘C for 15 minutes, followed by a ramp from 54 to 51^∘C at -1^∘C per hour. Our folding buffer contains 5 mM Tris Base, 1 mM EDTA, 5 mM NaCl, and 5 mM MgCl_2. We use a Tetrad (Bio-Rad) thermocycler for annealing the solutions.
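For readers who script their thermocyclers, the annealing schedule can be expressed as a simple step list. The sketch below encodes the ramp described above and assumes nothing about a specific instrument.

```python
# Thermocycler-style step list for the annealing ramp above:
# 65 C for 15 min, then 54 C down to 51 C at -1 C per hour.
steps = [(65.0, 15)]                           # (temperature C, minutes)
steps += [(T, 60) for T in range(54, 50, -1)]  # 54, 53, 52, 51 C, 1 h each
for T, minutes in steps:
    print(f"hold {T} C for {minutes} min")
```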
*Agarose gel electrophoresis
To assess the outcome of folding, we perform agarose gel electrophoresis. Gel electrophoresis requires the preparation of the gel and the buffer. The gel is prepared by heating a solution of 1.5% w/w agarose, 0.5x TBE to boiling in a microwave. The solution is cooled to 60^∘C. At this point, we add MgCl_2 solution and SYBR-safe (Invitrogen) to adjust the concentration of the gel to 5.5 mM MgCl_2 and 0.5x SYBR-safe. The solution is then quickly cast into an Owl B2 gel cast, and further cooled to room temperature. The buffer solution contains 0.5x TBE and 5.5 mM MgCl_2. Agarose gel electrophoresis is performed at 90 V for 1.5 hours at 4^∘C. The gel is then scanned with a Typhoon FLA 9500 laser scanner.
*Gel purification and resuspension
After folding, DNA origami particles are purified to remove all excess staples and misfolded aggregates using gel purification. The DNA origami are run through an agarose gel (now at a 1xSYBR-safe concentration for visualization) prepared using a custom gel comb, which can hold around 4 mL of solution per gel. We use a black light box (Hall Productions BL1012) to identify the gel band containing the folded DNA origami. The folded origami band is then extracted using a razor blade and cut into pieces. We place the gel pieces into a Freeze 'N Squeeze spin column (Bio-Rad), freeze it in a –80^∘C freezer for 30 minutes, thaw at room temperature, and then spin the solution down for 5 minutes at 13,000 xg.
Next, we concentrate the solution through ultrafiltration <cit.>. First, a 0.5-mL Amicon 100 kDA ultrafiltration spin column is equilibrated by centrifuging down 0.5 mL of the folding buffer at 5,000 xg for 7 minutes. Then, the DNA origami solution is added up to 0.5 mL and centrifuged at 14,000 xg for 15 minutes. Finally, we flip the filter upside down into a new Amicon tube and spin down the solution at 1,000 xg for 2 minutes. The concentration of the DNA origami is measured using a Nanodrop (Thermofisher), assuming that the solution consists only of well-folded particles that are each 8064 base pairs.
*Negative stain TEM
We first prepare a solution of uranyl formate (UFo). Millipore water is boiled to deoxygenate it and then mixed with uranyl formate powder to create a 2% w/w UFo solution. The solution is covered with aluminum foil to avoid light exposure, then vortexed vigorously for 20 minutes. The solution is filtered using a 0.2 µm filter. The solution is divided into 0.2-mL aliquots, which are stored in a –80^∘C freezer until further use.
Prior to each negative-stain TEM experiment, a 0.2-mL aliquot of UFo is taken out from the freezer to thaw at room temperature. We add 4 µL of 1 M NaOH to precipitate the UFo and vortex the solution vigorously for 15 seconds. The solution is centrifuged at 4^∘C and 16,000 xg for 8 minutes. We extract 170 µL of the supernatant for staining and discard the rest.
The EM samples are prepared using FCF400-Cu grids. We glow discharge the grid prior to use at –20 mA for 30 seconds at 0.1 mbar, using a Quorum Emitech K100X glow discharger. We place 4 µL of the sample on the grid for 1 minute to allow adsorption of the sample to the grid. During this time 5 µL and 18 µL droplets of UFo solution are placed on a piece of parafilm. After the adsorption period, the remaining sample solution is blotted on a Whatman filter paper. We then touch the carbon side of the grid to the 5 µL drop and blot it away immediately to wash away any buffer solution from the grid. This step is followed by picking up the 18 µL UFo drop onto the carbon side of the grid and letting it rest for 30 seconds to deposit the stain. The UFo solution is then blotted to remove excess fluid. Grids are dried for a minimum of 15 minutes before insertion into the TEM.
We image the grids using an FEI Morgagni TEM operated at 80 kV with a Nanosprint5 CMOS camera, acquiring images at magnifications between x8,000 and x28,000.
|
http://arxiv.org/abs/2409.03104v1 | 20240904220356 | Delving into the Phenomenology of Very Special Relativity: From Subatomic Particles to Binary Stars | [
"Alessandro Santoni"
] | hep-ph | [
"hep-ph",
"gr-qc",
"hep-th"
] |
|
http://arxiv.org/abs/2409.03041v1 | 20240904192411 | Nonlinear Monolithic Two-Level Schwarz Methods for the Navier-Stokes Equations | [
"Axel Klawonn",
"Martin Lanser"
] | math.NA | [
"math.NA",
"cs.CE",
"cs.NA",
"65F08, 65F10, 65H10, 65N22, 65N55, 76-10"
] |
Nonlinear Monolithic Two-Level Schwarz Methods for the Navier-Stokes Equations
Nonlinear Two-Level Schwarz Methods for Navier-Stokes
Axel Klawonn^1,2, Martin Lanser^1,2 ^1Department of Mathematics and Computer Science, Division of Mathematics, University of Cologne, Weyertal 86-90, 50931 Cologne, Germany, [email protected], [email protected], url: <https://www.numerik.uni-koeln.de> ^2Center for Data and Simulation Science, University of Cologne, Germany, url: <https://www.cds.uni-koeln.de>
Axel Klawonn (ORCID: 0000-0003-4765-7387), Martin Lanser (ORCID: 0000-0002-4232-9395)
Nonlinear domain decomposition methods became popular in recent years since they can improve the nonlinear convergence behavior of Newton's method significantly for many complex problems. In this article, a nonlinear two-level Schwarz approach is considered and, for the first time, equipped with monolithic GDSW (Generalized Dryja-Smith-Widlund) coarse basis functions for the Navier-Stokes equations. Results for lid-driven cavity problems with high Reynolds numbers are presented and compared with classical global Newton's method equipped with a linear Schwarz preconditioner. Different options, for example, local pressure corrections on the subdomains and the recycling of coarse basis functions, are discussed in the nonlinear Schwarz approach for the first time.
§ INTRODUCTION AND NONLINEAR PROBLEMS
In recent years, nonlinear preconditioning techniques have become popular for improving the convergence speed of nonlinear solvers for complex partial differential equations. Many of these nonlinear preconditioners are based on the ideas and basic principles of linear domain decomposition methods (DDMs) and are consequently denoted as nonlinear DDMs. In nonlinear DDMs, e.g., <cit.>, the discretized nonlinear problem is restricted to smaller local problems on subdomains, and the local solutions are recombined into a global one in a Newton-like iteration, which ideally converges faster than classical Newton's method applied to the original problem. As in the linear case, it is often beneficial to include a (nonlinear) coarse space for robustness and convergence improvement, which leads to two-level nonlinear DDMs. In this article, we consider some variants of the nonlinear two-level Schwarz approaches first introduced in <cit.>; older two-level nonlinear Schwarz approaches for less general coarse spaces can be found in <cit.>. Here, for the first time, we combine two-level nonlinear Schwarz methods with a monolithic RGDSW (Reduced Generalized Dryja-Smith-Widlund) coarse space for Navier-Stokes problems (see <cit.>) and apply them to the well-known lid-driven cavity flow test problem at high Reynolds numbers.
We compare our solvers with classical Newton-Krylov-Schwarz approaches equipped with a simple globalization, namely a backtracking approach.
Lid-driven Cavity Problem
As stated above, we consider the classical lid-driven cavity flow problem modeled by the stationary, dimensionless, and incompressible Navier-Stokes equations in two dimensions:
-1/Re Δ v + (v ·∇)v + ∇ p = 0   in Ω,
div(v) = 0   in Ω,
v = v_0   on ∂Ω,
p = 0   in (x,y)=(0,0).
Here, v denotes the velocity, p the pressure, and Re the Reynolds number. In the case of the lid-driven cavity problem, Ω is the unit square and v_0=[1,0] prescribes the Dirichlet boundary condition on the lid, that is, the upper boundary of Ω where y is equal to one. On all other parts of the boundary ∂Ω we have v_0=[0,0]. We usually use v^(0)|_∂Ω=v_0, v^(0)=0 within Ω, and p^(0)=0 as the initial value for Newton's method, regardless of whether we use a classical Newton-Krylov-Schwarz approach or a modern nonlinear Schwarz method. Therefore, since v^(0) already fulfills all boundary constraints, we can enforce a zero Dirichlet boundary condition on ∂Ω for the velocity of the Newton update in each Newton iteration. Note that for the pressure we enforce a Dirichlet boundary condition in a single corner, that is, in (x,y) = (0,0), to obtain a unique solution.
§ NONLINEAR TWO-LEVEL SCHWARZ METHODS
Let us briefly describe the two-level nonlinear Schwarz methods considered in our experiments, which were first introduced in <cit.> and extended in <cit.> for more general coarse spaces. We assume that we are given a nonlinear problem
F(u)=0
which originates from a finite element discretization of a given nonlinear partial differential equation. In the following, we define nonlinearly left-preconditioned systems of the form
ℱ(u) := G(F(u)) = 0,
where the left preconditioner G is given implicitly by a domain decomposition approach. The nonlinearly preconditioned problem is then solved with Newton's method instead of solving the original formulation from <ref> directly. The goal of this approach is to accelerate the nonlinear convergence of Newton's method by a proper choice of G.
We first consider a decomposition of the computational domain Ω⊂ℝ^d, d=2,3, into N nonoverlapping subdomains Ω_i', i=1,...,N, such that Ω = ⋃_i=1^N Ω_i'.
By adding m rows of finite elements around the boundary of each subdomain, we obtain an overlapping domain decomposition Ω = ⋃_i=1^N Ω_i with overlap δ =m · h, where h is the typical diameter of a finite element.
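In an implementation, this overlap can be generated from the mesh connectivity. The following minimal Python sketch grows a subdomain's node set by m layers using a node-adjacency map; this is a node-based simplification of adding element layers, and the names adjacency and subdomain_nodes are illustrative assumptions, not part of the original code.

def add_overlap(adjacency, subdomain_nodes, m):
    # Grow the nonoverlapping subdomain by m layers of neighboring nodes,
    # approximating the element-based overlap delta = m * h.
    nodes = set(subdomain_nodes)
    for _ in range(m):
        nodes |= {j for i in nodes for j in adjacency[i]}
    return sorted(nodes)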
For the first level of the nonlinear Schwarz preconditioner, we define local nonlinear corrections T_i(u) by solving
R_i F(u- P_i T_i(u))=0, i=1,...,N,
where R_i: V → V_i is the restriction from the global finite element space V to the local space V_i belonging to the overlapping subdomain Ω_i, i=1,...,N. Here, P_i: V_i → V, i=1,...,N, is a corresponding prolongation operator; we always use the symmetric choice of P_i := R_i^T throughout this article.
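To make the role of these local solves concrete, the following minimal Python sketch computes T_i(u) with an inner Newton iteration for dense matrices; the callables F and dF (returning F(u) and the Jacobian DF(u)) and the stopping parameters are assumptions for illustration, not part of the original implementation.

import numpy as np

def local_correction(F, dF, u, R_i, rtol=1e-3, max_it=20):
    # Solve R_i F(u - P_i t) = 0 for t = T_i(u) with the symmetric
    # choice P_i = R_i^T, using an inner Newton iteration.
    P_i = R_i.T
    t = np.zeros(R_i.shape[0])            # correction lives in the local space V_i
    r = R_i @ F(u)
    r0 = np.linalg.norm(r)
    for _ in range(max_it):
        if np.linalg.norm(r) <= rtol * r0:    # early stopping on relative residual
            break
        # Jacobian of t -> R_i F(u - P_i t) is -R_i DF(u - P_i t) P_i
        J_loc = -(R_i @ dF(u - P_i @ t) @ P_i)
        t -= np.linalg.solve(J_loc, r)
        r = R_i @ F(u - P_i @ t)
    return t

The same routine applied with R_0 in place of R_i yields the coarse correction defined below.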
For the definition of the second and coarse level, we assume that we have a coarse space V_0 and a prolongation operator P_0: V_0 → V. In practice, we usually compute the operator P_0, which is a column-wise collection of the coarse basis functions discretized in V, and afterwards define
V_0 := { v_0 = P_0^T v | v ∈ V }.
Let us note that we are usually interested in coarse spaces which can be built without having a coarse discretization of Ω. Nonetheless, if coarse finite elements defining V_0 exist, the corresponding operators R_0 and P_0 can be built by simply interpolating the finite element shape functions from V_0 to the fine space V. Details on the construction of P_0 for the Navier-Stokes equations without having a coarse discretization are given in <ref>. With R_0 := P_0^T, we can define a nonlinear coarse correction T_0(u) by
R_0 F(u- P_0 T_0(u))=0.
Now we have all ingredients to define new left-preconditioned nonlinear operators and we start with an additive variant. To obtain an additive nonlinear two-level Schwarz method, the nonlinear problem
ℱ_a(u) := ∑_i=1^N P_i T_i(u) + P_0 T_0(u) = 0
is solved using Newton's method. We refer to this method as two-level ASPEN (Additive Schwarz Preconditioned Exact Newton) based on the one-level approaches from <cit.>, or, even simpler, as additive nonlinear two-level Schwarz method. Similarly, a hybrid nonlinear two-level Schwarz method can be obtained by solving
ℱ_h(u) := ∑_i=1^N P_i T_i(u-P_0 T_0(u)) + P_0 T_0(u) = 0
with Newton's method. Here, the coupling between the levels is multiplicative while the subdomains are still coupled additively against each other. For more details and further nonlinear two-level variants, we refer to <cit.>.
Let us remark that in each Newton iteration a linear system with the Jacobian Dℱ_X(u^(k)) has to be solved iteratively with, for example, GMRES (Generalized Minimal Residual), where X ∈{a,h }. Fortunately, Dℱ_X(u^(k)) already has the favorable structure of a linear two-level Schwarz preconditioner, either additive or hybrid, times the Jacobian DF(·). Therefore, no additional linear preconditioner is necessary to solve the linearized systems efficiently. For details on the exact shape and the different linearization points of the local and the coarse part of the preconditioner, we refer to <cit.>. For experts, let us remark that we always use the exact Jacobian and not the approximation used, for example, in the well-known ASPIN approach <cit.>. All in all, nonlinear two-level Schwarz algorithms have a special structure. First, there is an outer Newton loop where in each iteration a linearized system with the matrix D ℱ_X(u^(k)) is solved iteratively using GMRES. Second, to compute the residual ℱ_X(u^(k)) and the Jacobian D ℱ_X(u^(k)), all corrections T_i(u^(k)), i=0,...,N, have to be computed in each outer Newton iteration, which is done by solving <ref> and <ref> with Newton's method. This immediately leads to inner Newton iterations, which are denoted as local inner loops in the case of the local corrections and as the coarse inner loop in the case of the coarse correction. Let us note that all local inner loops can easily be run in parallel, but the coarse inner loop can only be carried out in parallel in the case of an additive coupling between the levels. In the hybrid variant, the coarse loop is carried out before the subdomain corrections are computed. We provide a brief algorithmic overview of two-level nonlinear Schwarz methods in <ref>.
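To summarize this structure, a compact sketch of one outer iteration of the additive variant is given below, reusing the local_correction routine sketched earlier. For simplicity it evaluates all blocks of the preconditioner at the common point u^(k) (the exact ASPEN Jacobian uses the different linearization points mentioned above), and the dense solves stand in for GMRES.

import numpy as np

def nonlinear_schwarz_step(F, dF, u, restrictions, R_0):
    # One outer Newton step on F_a(u) = sum_i P_i T_i(u) + P_0 T_0(u) = 0.
    ops = list(restrictions) + [R_0]
    # Inner loops: local corrections (parallelizable) and the coarse correction.
    residual = sum(R.T @ local_correction(F, dF, u, R) for R in ops)
    # D F_a(u) has the structure of a linear two-level additive Schwarz
    # preconditioner applied to DF(u).
    K = dF(u)
    M = sum(R.T @ np.linalg.inv(R @ K @ R.T) @ R for R in ops)
    du = np.linalg.solve(M @ K, -residual)   # GMRES in an actual implementation
    return u + du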
To get a good estimate of the performance of our nonlinear Schwarz methods, we compare against classical Newton-Krylov-Schwarz, where Newton's method is applied to solve F(u)=0 directly and all linear systems are solved using GMRES and a linear two-level Schwarz preconditioner, either hybrid or additive.
In our comparison, we always use the same coarse spaces for Newton-Krylov-Schwarz and nonlinear Schwarz.
§ COARSE SPACES
For our Navier-Stokes problems we use the monolithic basis functions introduced in <cit.>, which are based on the GDSW coarse spaces; see <cit.> for scalar elliptic problems.
We only provide a brief description here. Let us first assume that we have a symmetric and positive definite matrix K which is obtained from the discretization of a scalar linear partial differential equation, for example, a linear diffusion equation with sufficient boundary constraints of Dirichlet type. Thus, we have a single degree of freedom in each node of the finite element mesh and, for the moment, do not have to distinguish between nodes and degrees of freedom. We partition K into interface and interior nodes
K = [ K_II  K_IΓ; K_ΓI  K_ΓΓ ],
where the interface Γ contains all nodes on the boundaries of the nonoverlapping subdomains Ω_i', i=1,...,N, except the nodes belonging to a part of ∂Ω where a Dirichlet boundary condition is imposed. The set I contains all remaining nodes. Furthermore, we partition the interface Γ into n_c overlapping or nonoverlapping patches Γ_j, j=1,...,n_c, and define associated functions or, more precisely, vectors Φ_Γ^j, j=1,...,n_c, which are defined on Γ but are only nonzero on the corresponding patch Γ_j, j=1,...,n_c. Additionally, we construct Φ_Γ^j such that we obtain a partition of unity ∑_j=1^n_cΦ_Γ^j = 1 on the complete interface.
Using the discrete energy minimizing extension
Φ_I^j := - K_II^-1 K_I ΓΦ_Γ^j
we define the coarse basis function belonging to the patch Γ_j, j=1,...,n_c, by
Φ^j := [ Φ_I^j; Φ_Γ^j ] and by Φ := [ Φ^1,...,Φ^n_c ]
the matrix which contains all coarse basis functions. Let us remark that P_0 := Φ gives us the prolongation from the coarse space V_0 to the fine space V and that we have a partition of unity
∑_j=1^n_cΦ^j = 1
on Ω with 1 being a vector of ones, since K is symmetric positive definite.
Using this concept, different partitions into patches Γ_j and different definitions of Φ_Γ^j are possible. Partitioning Γ into n_e edges and n_v vertices in two dimensions and defining Φ_Γ^j, j=1,...,n_c, with n_c=n_v+n_e to be one on the corresponding edge or vertex leads to the classical GDSW coarse space. Alternatively, we can define one patch Γ_j, j=1,...,n_v, for each vertex, consisting of the vertex itself and all adjacent edges. Then, Φ_Γ^j can be defined to be one in the vertex and to linearly decrease to zero along the adjacent edges. This gives a smaller coarse space of size n_v but also fulfills all necessary properties. This coarse space is called MsFEM-D coarse space in <cit.> and can also be interpreted as one instance of the class of reduced GDSW (RGDSW) coarse spaces; see <cit.> for more details on RGDSW. In the present article, we denote this coarse space as RGDSW coarse space of type A. An alternative approach to define Φ_Γ^j is to set it to 1 in the vertex and to 0.5 on the adjacent edges. Let us note that if an edge ends at the boundary ∂Ω we set it to 1 instead of 0.5. This coarse space can be implemented in an algebraic way without information about the geometry of the subdomains. We refer to this as RGDSW type B coarse space.
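The construction of the coarse basis itself reduces to one solve with K_II. The following sketch illustrates the discrete energy-minimizing extension for dense matrices, where interior and interface are integer index sets and each column of Phi_Gamma holds one interface function Φ_Γ^j (for example, with value 1 at a vertex and 0.5 on the adjacent edges for the type B variant); all names are chosen for illustration.

import numpy as np

def gdsw_coarse_basis(K, interior, interface, Phi_Gamma):
    # Energy-minimizing extension: Phi_I = -K_II^{-1} K_IGamma Phi_Gamma.
    K_II = K[np.ix_(interior, interior)]
    K_IG = K[np.ix_(interior, interface)]
    Phi = np.zeros((K.shape[0], Phi_Gamma.shape[1]))
    Phi[interface, :] = Phi_Gamma
    Phi[interior, :] = -np.linalg.solve(K_II, K_IG @ Phi_Gamma)
    return Phi                         # P_0 := Phi, R_0 := Phi^T

In a parallel code, K_II is block diagonal with one block per nonoverlapping subdomain interior, so the extension decouples into independent local solves.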
In general, for a nonlinear problem, one can use the Jacobian K := DF(u^(k)) in the k-th linearization u^(k) of Newton's method to compute the coarse basis functions. We consider two approaches: 1) compute Φ once using K:= DF(u^(0)) in the initial value u^(0) and then recycle Φ in all further steps or 2) recompute Φ with K := DF(u^(k)) at the beginning of each Newton step. Both versions can be used either for Newton-Krylov-Schwarz or nonlinear two-level Schwarz, where Φ can be recomputed at the beginning of each outer iteration.
Considering the Navier-Stokes equations in two dimensions, we have three degrees of freedom for each node, that is, two velocities v=(v_x,v_y) and the pressure p. To build a partition of unity on the interface, we now define three basis functions Φ_Γ^j,i, i=1,2,3, j=1,...,n_c, for each patch Γ_j, j=1,...,n_c; one for each degree of freedom (velocities and pressure). Sorting the degrees of freedom appropriately, that is, first all x-velocities, then all y-velocities, and finally all pressures, we simply have
Φ_Γ^j,1=[ Φ_Γ^j; 0; 0 ], Φ_Γ^j,2=[ 0; Φ_Γ^j; 0 ], and Φ_Γ^j,3=[ 0; 0; Φ_Γ^j ]
and, of course,
∑_i=1^3 ∑_j=1^n_cΦ_Γ^j,i =1
on the interface. For the extension to the interior degrees of freedom, the complete block matrix
K = DF(u^(k)) = [ A(u^(k)) B^T; B 0 ]
from the linearization of the discretized Navier-Stokes equations is used, which is clearly a monolithic approach to compute the coarse basis. Here, as usual, only the velocity part A depends on the current velocities and pressures u^(k) = (v^(k),p^(k)). We note that, since K is no longer a symmetric positive definite matrix, the theory does not carry over and we cannot show that the coarse basis functions Φ form a partition of unity on the interior degrees of freedom. Nonetheless, in the case of Newton-Krylov-Schwarz, this approach has already shown good performance; see <cit.>. Let us remark that the resulting Φ can also be written as
Φ = [ Φ_vv Φ_vp; Φ_pv Φ_pp ],
where the first column contains all basis functions belonging to the velocities v and the second column contains all basis functions belonging to the pressure variables. In our computations, we use a modification by deleting the coupling blocks, that is,
Φ = [ Φ_vv 0; 0 Φ_pp ].
These modified coarse basis functions yield better results in the case of nonlinear Schwarz methods and have already been used in <cit.>. For the definition of the patches Γ_j and corresponding scalar basis functions Φ_Γ^j we use both RGDSW type approaches suggested above.
To further improve the linear convergence, we additionally enforce a zero pressure average for the subdomain corrections in the linear solves. Following <cit.>, we define the local projection
P_i := I_p - a_i^T (a_i a_i^T)^-1 a_i, i=1,...,N,
on the pressure variables of the subdomain Ω_i, i=1,...,N, where a_i is a vector of ones, with one entry for each local pressure variable. We then define the local projection
𝒫_i := [ I_v 0; 0 P_i ]
for each subdomain Ω_i, i=1,...,N, which is the identity for all local velocities but enforces a zero average in the pressure.
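Since a_i is a vector of ones, applying P_i simply removes the average of the local pressure values; a one-line sketch (with an illustrative function name) makes this explicit.

import numpy as np

def project_zero_pressure_average(p_loc):
    # P_i = I - a^T (a a^T)^{-1} a with a = (1,...,1) subtracts the mean,
    # enforcing a zero pressure average on the subdomain.
    return p_loc - np.mean(p_loc)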
Usually, we prefer to use restricted versions of nonlinear Schwarz. In the restricted variants, the prolongations P_i, i=1,...,N, which are used to recombine the nonlinear local corrections, are replaced by weighted counterparts (again denoted by P_i for simplicity) that fulfill a partition of unity property on Ω, that is, ∑_i=1^N P_i R_i = I.
In our numerical results, due to space limitations, we only consider the hybrid two-level restricted nonlinear Schwarz approach with or without local pressure projections, that is, we either solve
ℱ_h(u) := ∑_i=1^N P_i 𝒫_i T_i(u-P_0 T_0(u)) + P_0 T_0(u) = 0
or
ℱ_h(u) := ∑_i=1^N P_i T_i(u-P_0 T_0(u)) + P_0 T_0(u) = 0
with Newton's method. Let us remark that the local projections have successfully been used to reduce the number of GMRES iterations in the case of Newton-Krylov-Schwarz but now, in the case of nonlinear Schwarz, 𝒫_i actually appears in the nonlinear formulation and might influence the convergence of Newton's method.
Let us give some details on the initial value of Newton's method. We decided to use an initial value which fulfills all Dirichlet boundary constraints such that we can use homogeneous boundary conditions in all linear solves. We thus choose v^(0)|_∂Ω=v_0 and v^(0)=0 elsewhere. We use p^(0)=0 for the pressure. Since we now have zero Dirichlet boundary conditions for the velocity, we only choose coarse basis functions in the vertices which are not part of the boundary, for both the velocity and the pressure.
To summarize, we vary three different aspects in the numerical tests: 1) the coarse space using two different types of RGDSW, 2) using or not using local pressure projections P_i, and 3) recycling the coarse basis functions Φ or recomputing them in each Newton iteration.
§ NUMERICAL RESULTS FOR THE LID-DRIVEN CAVITY FLOW PROBLEM
As already stated, we consider the classical lid-driven cavity problem; see <ref> for details. As a classical approach, we use Newton-Krylov-Schwarz with a linear hybrid restricted two-level Schwarz preconditioner. We always use the monolithic RGDSW basis functions described above to build the second level using either type A or B to define them on the interface. The linear preconditioner is not varied since it will not affect the nonlinear convergence in case of Newton-Krylov-Schwarz. We choose only the best possible option which is, in this specific case, recycling the coarse basis functions Φ and using the local pressure projections 𝒫_i, i=1,...,N. As a nonlinear Schwarz approach, we use the hybrid restricted two-level Schwarz approach with or without local pressure projection, that is, we consider one of the residual formulations <ref> or <ref> and solve with Newton's method. We can optionally also use recycling of the coarse basis functions in nonlinear Schwarz methods and also compare both initial values.
We stop the nonlinear iteration when the relative residual ||F(u^(k))||_2/||F(u^(0))||_2 is smaller than 10^-6. All inner local and coarse nonlinear iterations in nonlinear Schwarz are stopped early when a reduction of the relative residual of at least three orders of magnitude is reached. Furthermore, all iterative solves with GMRES are stopped when the relative residual is below 10^-8. We test for different Reynolds numbers Re up to 2500.
To increase the robustness of Newton's method, we use a simple backtracking approach for inexact Newton methods suggested in <cit.>. We use this approach for the local and coarse inner loops in nonlinear Schwarz as well as for all classical Newton-Krylov-Schwarz approaches. The step size cannot get smaller than 0.01 in the line search, which gave the best results for our specific problem. We always use 256 subdomains, since using fewer subdomains leads to very small coarse spaces which have no positive effect in the case of nonlinear two-level Schwarz methods. Using more subdomains is too time-consuming with our MATLAB implementation. We use 128 finite elements (Taylor-Hood elements) to discretize each nonoverlapping subdomain and an overlap of δ=3h. As a consequence, the original global problem has 148 739 degrees of freedom in total. In <ref> and <ref> we present the obtained results for the different combinations.
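The backtracking rule can be sketched in a few lines; the sufficient-decrease constant c is an assumption for illustration, while the minimal step size of 0.01 is the value used in our runs.

import numpy as np

def backtracking_step(F, u, du, alpha_min=0.01, c=1e-4):
    # Halve the step length until the residual norm decreases sufficiently,
    # but never let the step size drop below alpha_min.
    r0 = np.linalg.norm(F(u))
    alpha = 1.0
    while alpha > alpha_min and np.linalg.norm(F(u + alpha * du)) > (1.0 - c * alpha) * r0:
        alpha *= 0.5
    return u + max(alpha, alpha_min) * du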
In general, we can observe that nonlinear Schwarz methods can improve the nonlinear convergence, especially for high Reynolds numbers, where the classical Newton approach does not converge within 20 iterations. For small Reynolds numbers the global Newton method is, as expected, the best choice, since it provides comparably fast convergence without the additional work introduced by the inner loops. The best setup in our tests was to always recycle the coarse basis functions and to use the local pressure projections 𝒫_i. The latter improve the linear convergence without deteriorating the fast convergence of the nonlinear solver. Comparing both RGDSW coarse spaces, the nonlinear convergence is very similar but slightly better for type B coarse spaces up to a Reynolds number of 2000. Additionally, the inner coarse loop converges a bit faster. On the other hand, the type A coarse space leads to a better linear convergence. For the case with Re=2500, nonlinear Schwarz with type A coarse basis functions diverges, caused by a divergent inner coarse iteration. To summarize, type B RGDSW coarse basis functions proved to be more robust in our experiments. Additionally, the type B coarse spaces can be built without geometry information on the subdomains, while for the type A coarse basis functions the coordinates of the nodes have to be known. For a more detailed picture of the convergence behavior, we provide <ref>, including the most promising variants for Reynolds numbers of 1000 and 1500.
Furthermore, we provide a visualization of the coarse corrections T_0 for a Reynolds number of 500 and a decomposition into 64 subdomains in <ref>. Although the convergence speed is nearly identical for both coarse spaces, it can be seen that different coarse corrections are computed and that, in general, the convergence strongly depends on the choice of the coarse basis.
Finally, we show the Newton iterates for the case with Re=1500 for Newton-Krylov-Schwarz and nonlinear Schwarz to compare the convergence behavior; see <ref>. Already after one iteration, nonlinear Schwarz visibly converges towards the solution, while Newton-Krylov-Schwarz tends to form more vortices and clearly diverges, although a backtracking approach is used to control the step lengths.
The authors acknowledge the financial support by the German Federal Ministry of Education and Research (BMBF) for the project Stroemungsraum within the exascale computing programme SCALEXA.
|
http://arxiv.org/abs/2409.03339v1 | 20240905083357 | Four-order power reduction in nanoscale electron-nuclear double resonance with a nitrogen-vacancy center in diamond | [
"Zhiyi Hu",
"Fengjian Jiang",
"Jingyan He",
"Yulin Dai",
"Ya Wang",
"Nanyang Xu",
"Jiangfeng Du"
] | quant-ph | [
"quant-ph"
] |
§ ABSTRACT
Detecting nuclear spins using single Nitrogen-Vacancy (NV) centers is of particular importance in nano-scale science and engineering, but often suffers from the heating effect of the microwave fields used for spin manipulation, especially under high magnetic fields. Here, we realize an energy-efficient nano-scale nuclear-spin detection using a phase-modulation electron-nuclear double resonance scheme. The microwave field can be reduced to 1/250 of previous requirements and the corresponding power is over four orders of magnitude lower. Meanwhile, the microwave-induced broadening of the spectroscopic line-width is significantly suppressed and we achieve a nuclear-spin spectrum with a resolution down to 2.1 kHz under a magnetic field of 1840 Gs. The spectral resolution can be further improved by upgrading the experimental control precision. This scheme can also be used to sense microwave fields and can be extended to a wide range of applications in the future.
Nuclear magnetic resonance (NMR) spectroscopy is one of the most important analytical techniques and is widely used in physics, chemistry, biology and medical sciences <cit.>. It resolves the chemical specificity of elements from the resonance frequency of nuclear spins in order to obtain molecular structures or images of materials non-destructively. Conventional NMR, which often suffers from low detection efficiency, relies primarily on a large number of sample molecules for signal accumulation <cit.>. Many efforts have been made to improve the sensitivity, for example hyper-polarization approaches <cit.>, magnetic resonance force microscopy <cit.> and optically-detected NMR <cit.>. Recently, the Nitrogen-Vacancy (NV) center in diamond has shown its outstanding capability as a nano-scale sensor for nuclear-spin detection with a significantly improved sensitivity even at the single-molecule level <cit.>.
In the field of NMR, the resonance frequency of nuclear spins is determined by the Larmor frequency and the local environment, i.e., the neighboring nuclear spins and the surrounding electron distribution, which generate an additional field and change the resonance frequency. The relative difference between the resonance frequency and the Larmor frequency, also called the chemical shift, reveals rich knowledge of the host molecule <cit.>. Therefore, a high spectral resolution is necessary for NMR spectroscopy applications. Generally, the resolution is improved by increasing the external magnetic field, which enlarges the chemical shift. Besides, a high magnetic field also brings crucial benefits to the decoherence time and polarization of the sample spins.
NV-based NMR often utilizes the standard pulsed dynamical decoupling (pulsed-DD) sequences <cit.>, such as XY-N <cit.> or Carr-Purcell-Meiboom-Gill-N <cit.>, as band-pass noise filters that detect the nuclear spins by matching their resonance frequency, as well as their extensions with a quantum (classical) memory to realize high (arbitrary)-resolution signal detection <cit.>. However, pulsed-DD tends to work only under low magnetic fields because the maximal band-pass frequency (limited by the square root of the pulse power) has a practical upper limit and cannot keep increasing with the magnetic field. An alternative scheme under high magnetic fields is the continuous-wave dynamical decoupling (CWDD) sequence <cit.>. It also requires continuous microwave driving matching a resonance frequency that scales linearly with the magnetic field. However, the microwave power has practical limitations and induces various issues, including power broadening of the line-width and sample heating <cit.>. A nanoscale NMR spectroscopy that largely reduces the power requirements would therefore greatly benefit high-field NMR and related applications, including bio-sensing <cit.>.
Recently, the phase-modulation (PM) technique has been introduced into NV-based quantum sensing, where it works as a mixer to down-convert high-frequency microwave fields to a proper range <cit.>. Also, a theoretical method combining the PM and CWDD schemes has been proposed to reduce the power requirement for nuclear-spin detection under high magnetic fields <cit.>. In this work, we experimentally examine the PM-based CWDD scheme and realize the new scheme to detect in vivo nuclear spins under a field of up to 3015 Gs. The microwave power is about four orders of magnitude lower than in the standard DD schemes. We show that the power-induced broadening of the spectrum is also suppressed, where four weakly-coupled ^13C spins are distinguished from the spin bath with a minimal line-width of 2.1 kHz under 1840 Gs. Besides, we also contribute to the theoretical scheme by extending the resonant condition to both sidebands of the mixer, enabling its future applications in zero-field situations. This scheme can be easily extended to the sensing of in vitro nuclear spins or classical magnetic fields in general.
The NV center in diamond has emerged as an excellent solid-state platform for quantum information processing <cit.>. The center consists of a nitrogen atom with a neighboring vacancy, and its negatively charged state NV^- forms a spin triplet in the orbital ground state. The electron spin with nearby nuclear spins forms a hybrid system that has been used as a quantum register <cit.> for computation <cit.> and simulation <cit.> tasks, or as interacting nodes of a quantum network <cit.>. The NV center is also used as a nano-scale sensor to detect magnetic fields <cit.>, electric fields <cit.>, temperature <cit.> or other spins <cit.>. In this setting, the Hamiltonian of the system is often formulated as
H_0/2π = D S_z^2 - γ_e B_z S_z - ∑_j γ_n^(j) B_z I_z^(j) + ∑_j S_z 𝐀^(j)·𝐈^(j)
where D ≈ 2870 MHz is the zero-field splitting, S_i (I_i^(j)) the electron (j-th nuclear) spin operator, B_z the static magnetic field applied along the NV axis, γ_e (γ_n^(j)) the gyromagnetic ratio of the electron (j-th nuclear) spin and 𝐀^(j)=(A^(j)_z x, A^(j)_z y, A^(j)_z z) the interaction vector between the electron and the nuclear spin. In our experiment, only the subspace of |m_s=0⟩ and |m_s=1⟩ of the electron spin is utilized. A resonant microwave field with frequency ω is applied to drive the transition between |0⟩ and |1⟩ with a controlling Hamiltonian H_c=√(2)Ωcos(ω t+ϕ)S^(0,1)_x, where Ω is the Rabi frequency, ϕ the phase and S^(0,1)_x=(|0⟩⟨ 1|+| 1⟩⟨ 0|)/√(2) the transition operator. In the following discussion, we consider a simple system that contains only the NV electron spin and a single ^13C nuclear spin (the index j is dropped in the following description).
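For concreteness, the two-spin Hamiltonian restricted to the {|0⟩, |1⟩} ⊗ {|↑⟩, |↓⟩} space can be assembled as in the following sketch; the gyromagnetic-ratio magnitudes (in MHz/G) are illustrative textbook values, A_zy is set to zero without loss of generality since only A_⊥ enters, and the sign conventions follow the expression above.

import numpy as np

Sz = np.diag([0.0, 1.0])                       # S_z restricted to m_s = 0, 1
Iz = 0.5 * np.diag([1.0, -1.0])                # spin-1/2 nuclear operators
Ix = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def H0(Bz, Azz, Azx, D=2870.0, g_e=2.8025, g_n=1.0705e-3):
    # H_0/2pi in MHz for the NV electron spin coupled to one 13C nuclear
    # spin; Bz in Gauss, hyperfine components Azz and Azx in MHz.
    return (D * np.kron(Sz @ Sz, I2)
            - g_e * Bz * np.kron(Sz, I2)
            - g_n * Bz * np.kron(I2, Iz)
            + np.kron(Sz, Azz * Iz + Azx * Ix))

Diagonalizing H0 gives the nuclear transition frequencies in the two electron-spin manifolds that enter the resonance conditions discussed below.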
Conventionally, the pulsed-DD scheme (e.g., XY-N) utilizes a basic operation sequence like τ-π-2τ-π-τ, where τ is the interval time and the π pulse flips the electron spin between |0⟩ and |1⟩. During this process, the extra phase generated by the nuclear spin accumulates on the electron spin if its precessing speed (2γ_n B_z+A_zz) matches τ. This protocol requires the microwave to be as strong as possible to ensure that the π pulse is much shorter than τ. In the CWDD scheme (Fig. <ref>d), the electron spin is instead prepared in one of the dressed states |±⟩≡(|0⟩± |1⟩)/√(2) and experiences a spin-locking process driven by another microwave. During this process, the spin stays in the dressed state, except for a global phase, if no perturbation from the nuclear spin exists. Otherwise, when the electron-spin Rabi frequency Ω matches the energy gap between the nuclear-spin states, i.e., the Hartmann-Hahn (HH) condition is satisfied, the double resonance (DR) happens between the states |+↑⟩ and |-↓⟩, where |↑⟩ and |↓⟩ are the nuclear-spin eigenstates. In the above protocols, the microwave field (power) scales linearly (quadratically) with the magnetic field B_z. As shown in Fig. <ref>f, this makes them extremely hard to apply under high magnetic fields.
Recently, new schemes based on HHDR have been proposed where the phase or amplitude of the microwave field is modulated in a regular <cit.> or arbitrary <cit.> way to compensate for the mismatch between the Rabi frequency and the nuclear-spin energy gap. To elaborate on this new protocol, we first explain the original HHDR and show its schematic in Fig. <ref>b. In the electron-spin m_s=0 subspace, the hyperfine interaction has no effect and the nuclear spin precesses under the magnetic field B_z with frequency γ_nB_z only. But in the m_s=1 subspace (Fig. <ref>c) it generates an extra precession A_∥ along the z axis and an effective field A_⊥/γ_n in the x-y plane acting on the nuclear spin, where A_∥=A_zz and A_⊥=√(A_zx^2 + A_zy^2). As a result, when the electron spin rotates along the x-axis during the spin-locking, the nuclear spin experiences a z-axis precession with average frequency γ_n B_z-1/2 A_∥ and an effective microwave field in the x-y plane with frequency Ω. Once the HH condition is satisfied, i.e., Ω=γ_n B_z-1/2 A_∥, the nuclear-spin resonance happens and the electron-spin state is changed by the hyperfine interaction as well.
In the phase-modulated (PM-) HHDR scheme (Fig. <ref>e), the driving field is replaced with
H_c= √(2)Ω_+ S^(0,1)_x cos (ω t)+√(2)Ω_- S^(0,1)_x cos (ω t-ϕ),
where the phase ϕ switches periodically between the values 0 and π with a modulation frequency ν. The total effect of this modulation is to switch the electron-spin rotation speed between (Ω_++Ω_-) and (Ω_+-Ω_-) at the same frequency ν. In the experiment, we set Ω^' =Ω_+=Ω_- to ensure the best signal contrast (see Supporting Information).
The effective magnetic field on the nuclear spin is also modulated, generating two sidebands with frequencies Ω^'+ν and Ω^'-ν. The resonant condition in the PM-HHDR scheme is then changed to
|Ω^'±ν|=|γ_n B_z-A_∥/2|.
and the signal at the resonance ν is described by cos^2( A_⊥ J_1(4 Ω^' / πν) t_f/ 4 ), where Ω^' is the effective Rabi frequency, t_f the integration time, and J_1 the Bessel function of the first kind. This new condition makes it possible to significantly reduce the microwave power needed to drive the electron spin into double resonance under high magnetic fields (Fig. <ref>f). Note that J_1 takes a relatively small value in the experiment, which weakens the effective coupling between the sensor and target spins and thus reduces the sensitivity of this scheme. It is worth pointing out that the theoretical proposal <cit.> only considers the single-side resonant situation Ω^' + ν=γ_n B_z-A_∥/2, which differs from Eq. <ref> here (see Supporting Information).
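A short numerical sketch of these expressions, with illustrative parameter values (frequencies in MHz, unit conventions as in the formula above), shows the two expected dips.

import numpy as np
from scipy.special import j1                  # Bessel function of the first kind, J_1

g_n = 1.0705e-3                               # 13C gyromagnetic ratio, MHz/G (approx.)

def pm_hhdr_resonances(Bz=1840.0, Omega=0.1, A_par=0.0):
    # Modulation frequencies nu with |Omega' +/- nu| = |g_n Bz - A_par/2|;
    # the two dips are symmetric about the target frequency.
    target = abs(g_n * Bz - 0.5 * A_par)
    return target - Omega, target + Omega

def on_resonance_signal(nu, Omega=0.1, A_perp=0.05, t_f=100.0):
    # Contrast cos^2(A_perp J_1(4 Omega'/(pi nu)) t_f / 4) at resonance.
    return np.cos(A_perp * j1(4.0 * Omega / (np.pi * nu)) * t_f / 4.0) ** 2

Since the dissipated microwave power scales with the square of the drive amplitude, reducing the field to 1/250 of the XY-N requirement lowers the power by a factor of roughly 6×10^4, consistent with the four-order reduction stated in the abstract.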
Here, we utilize single NV centers (NV1 and NV2) in diamond to detect intrinsic ^13C nuclear spins at 1840 Gs as an example. The PM-HHDR spectrum is obtained by sweeping the modulation frequency ν with Ω^' fixed. The result is shown in Fig. <ref>, where we can observe two resonance dips symmetric about γ_nB_z, i.e., the Larmor frequency of the ^13C nuclear spin. Additionally, with decreasing Ω^' the resonance positions move towards the Larmor frequency. The relation between the resonance position and Ω^' is highlighted in Fig. <ref> and matches well with the condition defined in Eq. <ref>. Most importantly, the effective Rabi frequency Ω^' is reduced to around 1/20 (1/250) of the microwave field used in conventional HHDR (XY-N), where the modulation frequency ν is chosen to be much larger than Ω^' (see Supporting Information). Note that A_∥ is averaged over the different ^13C nuclear spins and is therefore not explicitly visible in the relation.
Since the resolution of the HHDR spectrum is mainly determined by the noise in the microwave amplitude, low-power control can also improve the spectrum in principle <cit.>. However, this also weakens the decoupling efficiency of the CWDD sequence that protects the electron spin against (z-axis) magnetic fluctuations. In order to improve the performance, a single-spin lock-in detection <cit.> is realized to monitor the electron-spin resonance in real time, together with a PID-based temperature control of the setup within ±5 mK.
Meanwhile, we optimize the experimental parameters, and finally the single resonance dip is split into five narrow lines, as shown in Fig. <ref>a (the upper line). By fitting the spectrum, four weakly-coupled individual ^13C nuclear spins and a spin-bath signal are assigned to the dips. The calculated coupling parameters are listed in Tab. <ref>. Because of the reduction in microwave noise, a minimal (average) line-width of 2.1 (3.5) kHz is achieved. For comparison, we realize conventional HHDR spectroscopy under the same field in Fig. <ref>a (the lower line). Since the microwave amplitude is around 20 times higher than that used in PM-HHDR, the line-width is over 100 kHz and none of the weakly-coupled nuclear spins can be observed (see Supporting Information).
To verify this result, we perform a standard XY-32 experiment on the same NV center. As shown in Fig. <ref>b (the upper line), a spectrum with five distinguishable dips is observed under a field near 500 Gs, and the minimal (average) line-width in the spectrum is 2.3 (3.0) kHz. The extracted coupling information (also shown in Tab. <ref>) matches well with the result from PM-HHDR. The experiment under a higher magnetic field (the same field as PM-HHDR) is also realized, where the spectral lines overlap and no splitting can be observed anymore (the lower line in Fig. <ref>b). For even higher magnetic fields, the scheme still works well and the results are shown in Fig. <ref>. Due to the maximal sampling frequency of the microwave pulse generator, the frequency-modulation step size increases to 2 kHz and we have to boost the Rabi frequency Ω^' to broaden the line-width accordingly (see Supporting Information).
Here we demonstrate a nano-scale nuclear-spin detection spectroscopy under high magnetic fields and achieve a spectral line-width down to 3.5 kHz at about 1840 Gs. The microwave power is about two orders of magnitude below previous double resonance schemes, with a detection bandwidth of over 100 kHz (see Supporting Information). The performance is mainly limited by the minimal configuration precision of the modulation frequency ν in our case. Highly resolved pulse control based on time-delay lines could further improve the line-width to around 10 Hz.
In general, the ultimate limit is defined by the rotating-frame relaxation time T_1^ρ of the electron spin; T_1^ρ is measured to be over a millisecond in the experiment, so the spectral line-width could possibly be pushed down to the sub-kHz level in the future. Although our work is focused on electron-nuclear hyperfine interactions, this method can also be applied to more general spin-based quantum sensing fields, for example, the detection of electron-electron spin interactions <cit.>, chemical shift analysis <cit.> and spin manipulation in two-dimensional materials <cit.>. Finally, we have extended the resonant condition to both sidebands around the real resonance <cit.>, which could potentially be utilized in the ultra-low or zero-field cases, i.e., to up-convert the required Rabi frequency to a proper range in the future <cit.>.
The authors thank Bing Chen and Ying Dong for helpful discussion. This work was supported by the National Natural Science Foundation of China (Grant Nos. 92265114, 92265204), the Fundamental Research Funds for the Central Universities (Grant No. 226-2023-00139),the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302200).
* Supporting Information: Details of the setup and experimental scheme, and analysis of the performance. (PDF).
[Aue et al.(1976)Aue, Bartholdi, and Ernst]ref1
Aue, W. P.; Bartholdi, E.; Ernst, R. R.; Two-dimensional spectroscopy
Application to nuclear magnetic resonance. The Journal of Chemical
Physics 1976, 64, 2229–2246
[Oschkinat et al.(1988)Oschkinat, Griesinger, Kraulis,
Sørensen, Ernst, Gronenborn, and Clore]ref2
Oschkinat, H.; Griesinger, C.; Kraulis, P. J.; Sørensen, O. W.; Ernst, R. R.;
Gronenborn, A. M.; Clore, G. M. Three-dimensional NMR spectroscopy of a
protein in solution. Nature 1988, 332, 374–376
[Wüthrich(2001)]ref3
Wüthrich, K. The way to NMR structures of proteins. Nature Structural
Biology 2001, 8, 923–925
[Bucher et al.(2020)Bucher, Glenn, Park, Lukin, and
Walsworth]ref47
Bucher, D. B.; Glenn, D. R.; Park, H.; Lukin, M. D.; Walsworth, R. L.
Hyperpolarization-Enhanced NMR Spectroscopy with Femtomole Sensitivity Using
Quantum Defects in Diamond. Physical Review X 2020,
10, 021053
[Pfender et al.(2019)Pfender, Wang, Sumiya, Onoda, Yang,
Dasari, Neumann, Pan, Isoya, Liu, and Wrachtrup]ref48
Pfender, M.; Wang, P.; Sumiya, H.; Onoda, S.; Yang, W.; Dasari, D. B. R.;
Neumann, P.; Pan, X.-Y.; Isoya, J.; Liu, R.-B.; Wrachtrup, J. High-resolution
spectroscopy of single nuclear spins via sequential weak measurements.
Nature Communications 2019, 10, 594
[Lovchinsky et al.(2017)Lovchinsky, Sanchez-Yamagishi, Urbach,
Choi, Fang, Andersen, Watanabe, Taniguchi, Bylinskii, Kaxiras, Kim, Park, and
Lukin]ref6
Lovchinsky, I.; Sanchez-Yamagishi, J. D.; Urbach, E. K.; Choi, S.; Fang, S.;
Andersen, T. I.; Watanabe, K.; Taniguchi, T.; Bylinskii, A.; Kaxiras, E.;
Kim, P.; Park, H.; Lukin, M. D. Magnetic resonance spectroscopy of an
atomically thin material using a single-spin qubit. Science
2017, 355, 503–507
[Shi et al.(2014)Shi, Kong, Wang, Kong, Zhao, Liu, and
Du]ref57
Shi, F.; Kong, X.; Wang, P.; Kong, F.; Zhao, N.; Liu, R.B.; Du, J. Sensing and
atomic-scale structure analysis of single nuclear-spin clusters in diamond.
Nature Physics 2014, 10, 21–25
[Fratila and Velders(2011)Fratila, and Velders]ref78
Fratila, R. M.; Velders, A. H. Small-Volume Nuclear Magnetic Resonance
Spectroscopy. Annual Review of Analytical Chemistry 2011,
4, 227–249
[Fernéndez-Acebal et al.(2018)Fernéndez-Acebal, Rosolio,
Scheuer, Müller, Müller, Schmitt, McGuinness, Schwarz, Chen, Retzker,
Naydenov, Jelezko, and Plenio]hyperpolarization1
Fernéndez-Acebal, P.; Rosolio, O.; Scheuer, J.; Müller, C.; Müller, S.;
Schmitt, S.; McGuinness, L. P.; Schwarz, I.; Chen, Q.; Retzker, A.;
Naydenov, B.; Jelezko, F.; Plenio, M. B. Toward Hyperpolarization of Oil
Molecules via Single Nitrogen Vacancy Centers in Diamond. Nano Letters
2018, 18, 1882–1887
[King et al.(2015)King, Jeong, Vassiliou, Shin, Page, Avalos,
Wang, and Pines]hyperpolarization
King, J. P.; Jeong, K.; Vassiliou, C. C.; Shin, C. S.; Page, R. H.;
Avalos, C. E.; Wang, H.-J.; Pines, A. Room-temperature in situ nuclear spin
hyperpolarization from optically pumped nitrogen vacancy centres in diamond.
Nature Communications 2015, 6, 8965
[Abrams et al.(2014)Abrams, Trusheim, Englund, Shattuck, and
Meriles]hyperpolarization2
Abrams, D.; Trusheim, M. E.; Englund, D. R.; Shattuck, M. D.; Meriles, C. A.
Dynamic Nuclear Spin Polarization of Liquids and Gases in Contact with
Nanostructured Diamond. Nano Letters 2014, 14,
2471–2478
[Degen et al.(2009)Degen, Poggio, Mamin, Rettner, and
Rugar]mrfm
Degen, C. L.; Poggio, M.; Mamin, H. J.; Rettner, C. T.; Rugar, D. Nanoscale
magnetic resonance imaging. Proceedings of the National Academy of
Sciences 2009, 106, 1313–1317
[Balasubramanian et al.(2008)Balasubramanian, Chan, Kolesov,
Al-Hmoud, Tisler, Shin, Kim, Wojcik, Hemmer, Krueger, Hanke, Leitenstorfer,
Bratschitsch, Jelezko, and Wrachtrup]odnmr
Balasubramanian, G.; Chan, I. Y.; Kolesov, R.; Al-Hmoud, M.; Tisler, J.;
Shin, C.; Kim, C.; Wojcik, A.; Hemmer, P. R.; Krueger, A.; Hanke, T.;
Leitenstorfer, A.; Bratschitsch, R.; Jelezko, F.; Wrachtrup, J. Nanoscale
imaging magnetometry with diamond spins under ambient conditions.
Nature 2008, 455, 648–651
[Maze et al.(2008)Maze, Stanwix, Hodges, Hong, Taylor,
Cappellaro, Jiang, Dutt, Togan, Zibrov, Yacoby, Walsworth, and Lukin]odnmr1
Maze, J. R.; Stanwix, P. L.; Hodges, J. S.; Hong, S.; Taylor, J. M.;
Cappellaro, P.; Jiang, L.; Dutt, M. V. G.; Togan, E.; Zibrov, A. S.;
Yacoby, A.; Walsworth, R. L.; Lukin, M. D. Nanoscale magnetic sensing with an
individual electronic spin in diamond. Nature 2008,
455, 644–647
[Doherty et al.(2013)Doherty, Manson, Delaney, Jelezko,
Wrachtrup, and Hollenberg]ref61
Doherty, M. W.; Manson, N. B.; Delaney, P.; Jelezko, F.; Wrachtrup, J.;
Hollenberg, L. C. L. The nitrogen-vacancy colour centre in diamond.
Physics Reports 2013, 528, 1–45
[Xie et al.(2021)Xie, Zhao, Kong, Ma, Wang, Ye, Yu, Yang, Xu,
Wang, Wang, Shi, and Du]ref38
Xie, T.; Zhao, Z.; Kong, X.; Ma, W.; Wang, M.; Ye, X.; Yu, P.; Yang, Z.;
Xu, S.; Wang, P.; Wang, Y.; Shi, F.; Du, J. Beating the standard quantum
limit under ambient conditions with solid-state spins. Science
advances 2021, 7, 9204
[Holger Försterling(2010)]ref24
Holger Försterling, F. Spin dynamics: Basics of Nuclear Magnetic Resonance,
Second Edition. Medical Physics 2010, 37,
406–407
[Kong et al.(2015)Kong, Stark, Du, McGuinness, and
Jelezko]ref54
Kong, X.; Stark, A.; Du, J.; McGuinness, L.; Jelezko, F. Towards Chemical
Structure Resolution with Nanoscale Nuclear Magnetic Resonance Spectroscopy.
Physical Review Applied 2015, 4, 024004
[de Lange et al.(2010)de Lange, Wang, Risté, Dobrovitski, and
Hanson]ref20
de Lange, G.; Wang, Z. H.; Risté, D.; Dobrovitski, V. V.; Hanson, R. Universal
Dynamical Decoupling of a Single Solid-State Spin from a Spin Bath.
Science 2010, 330, 60–63
[Hegde et al.(2020)Hegde, Zhang, and Suter]ref46
Hegde, S. S.; Zhang, J.; Suter, D. Efficient Quantum Gates for Individual
Nuclear Spin Qubits by Indirect Control. Physical Review Letters
2020, 124, 220501
[Lang et al.(2019)Lang, Madhavan, Tetienne, Broadway, Hall,
Teraji, Monteiro, Stacey, and Hollenberg]ref49
Lang, J. E.; Madhavan, T.; Tetienne, J. P.; Broadway, D. A.; Hall, L. T.;
Teraji, T.; Monteiro, T. S.; Stacey, A.; Hollenberg, L. C. L. Nonvanishing
effect of detuning errors in dynamical-decoupling-based quantum sensing
experiments. Physical Review A 2019, 99, 012110
[Zhao et al.(2012)Zhao, Honert, Schmid, Klas, Isoya, Markham,
Twitchen, Jelezko, Liu, Fedder, and Wrachtrup]ref63
Zhao, N.; Honert, J.; Schmid, B.; Klas, M.; Isoya, J.; Markham, M.;
Twitchen, D.; Jelezko, F.; Liu, R.-B.; Fedder, H.; Wrachtrup, J. Sensing
single remote nuclear spins. Nature Nanotechnology 2012,
7, 657–662
[Taminiau et al.(2012)Taminiau, Wagenaar, van der Sar, Jelezko,
Dobrovitski, and Hanson]ref17
Taminiau, T. H.; Wagenaar, J. J. T.; van der Sar, T.; Jelezko, F.;
Dobrovitski, V. V.; Hanson, R. Detection and Control of Individual Nuclear
Spins Using a Weakly Coupled Electron Spin. Physical Review Letters
2012, 109, 137602
[Casanova et al.(2015)Casanova, Wang, Haase, and Plenio]ref55
Casanova, J.; Wang, Z. Y.; Haase, J. F.; Plenio, M. B. Robust dynamical
decoupling sequences for individual-nuclear-spin addressing. Physical
Review A 2015, 92, 042304
[Carr and Purcell(1954)Carr, and Purcell]ref18
Carr, H. Y.; Purcell, E. M. Effects of Diffusion on Free Precession in Nuclear
Magnetic Resonance Experiments. Physical Review 1954,
94, 630–638
[Meiboom and Gill(1958)Meiboom, and Gill]ref19
Meiboom, S.; Gill, D. Modified Spin Echo Method for Measuring Nuclear
Relaxation Times. Review of Scientific Instruments 1958,
29, 688–691
[Liu et al.(2017)Liu, Xing, Ma, Wang, Li, Po, Zhang, Fan, Liu,
and Pan]ref52
Liu, G.-Q.; Xing, J.; Ma, W.-L.; Wang, P.; Li, C.-H.; Po, H. C.; Zhang, Y.-R.;
Fan, H.; Liu, R.-B.; Pan, X.-Y. Single-Shot Readout of a Nuclear Spin Weakly
Coupled to a Nitrogen-Vacancy Center at Room Temperature. Physical
Review Letters 2017, 118, 150504
[Boss et al.(2017)Boss, Cujia, Zopes, and
Degen]arb_resolution
Boss, J. M.; Cujia, K. S.; Zopes, J.; Degen, C. L. Quantum sensing with
arbitrary frequency resolution. Science 2017, 356,
837–840
[Schmitt et al.(2017)Schmitt, Gefen, Stürner, Unden, Wolff,
Müller, Scheuer, Naydenov, Markham, Pezzagna, Meijer, Schwarz, Plenio,
Retzker, McGuinness, and Jelezko]RN222
Schmitt, S. et al. Submillihertz magnetic spectroscopy performed with
a nanoscale quantum sensor. Science 2017, 356,
832–837
[Cai et al.(2012)Cai, Naydenov, Pfeiffer, McGuinness, Jahnke,
Jelezko, Plenio, and Retzker]ref74
Cai, J. M.; Naydenov, B.; Pfeiffer, R.; McGuinness, L. P.; Jahnke, K. D.;
Jelezko, F.; Plenio, M. B.; Retzker, A. Robust dynamical decoupling with
concatenated continuous driving. New Journal of Physics 2012,
14, 113023
[Zhou et al.(2014)Zhou, Huang, Zhang, Wang, Tan, Xu, Shi, Rong,
Ashhab, and Du]zhouPRL124
Zhou, J.; Huang, P.; Zhang, Q.; Wang, Z.; Tan, T.; Xu, X.; Shi, F.; Rong, X.;
Ashhab, S.; Du, J. Observation of Time-Domain Rabi Oscillations in the
Landau-Zener Regime with a Single Electronic Spin. Physical Review
Letters 2014, 112, 010503
[Li et al.(2020)Li, Kong, Zhao, Cheng, Qin, Wang, Zhang, Wang,
Wang, Shi, and Du]kongPRL112
Li, R.; Kong, F.; Zhao, P.; Cheng, Z.; Qin, Z.; Wang, M.; Zhang, Q.; Wang, P.;
Wang, Y.; Shi, F.; Du, J. Nanoscale Electrometry Based on a
Magnetic-Field-Resistant Spin Sensor. Physical Review Letters
2020, 124, 247701
[London et al.(2013)London, Scheuer, Cai, Schwarz, Retzker,
Plenio, Katagiri, Teraji, Koizumi, Isoya, Fischer, McGuinness, Naydenov, and
Jelezko]ref25
London, P.; Scheuer, J.; Cai, J. M.; Schwarz, I.; Retzker, A.; Plenio, M. B.;
Katagiri, M.; Teraji, T.; Koizumi, S.; Isoya, J.; Fischer, R.;
McGuinness, L. P.; Naydenov, B.; Jelezko, F. Detecting and Polarizing Nuclear
Spins with Double Resonance on a Single Electron Spin. Physical Review
Letters 2013, 111, 067601
[Kucsko et al.(2013)Kucsko, Maurer, Yao, Kubo, Noh, Lo, Park,
and Lukin]ref79
Kucsko, G.; Maurer, P. C.; Yao, N. Y.; Kubo, M.; Noh, H. J.; Lo, P. K.;
Park, H.; Lukin, M. D. Nanometre-scale thermometry in a living cell.
Nature 2013, 500, 54–58
[McGuinness et al.(2011)McGuinness, Yan, Stacey, Simpson, Hall,
Maclaurin, Prawer, Mulvaney, Wrachtrup, Caruso, Scholten, and
Hollenberg]ref80
McGuinness, L. P.; Yan, Y.; Stacey, A.; Simpson, D. A.; Hall, L. T.;
Maclaurin, D.; Prawer, S.; Mulvaney, P.; Wrachtrup, J.; Caruso, F.;
Scholten, R. E.; Hollenberg, L. C. L. Quantum measurement and orientation
tracking of fluorescent nanodiamonds inside living cells. Nature
Nanotechnology 2011, 6, 358–363
[Le Sage et al.(2013)Le Sage, Arai, Glenn, DeVience, Pham,
Rahn-Lee, Lukin, Yacoby, Komeili, and Walsworth]ref81
Le Sage, D.; Arai, K.; Glenn, D. R.; DeVience, S. J.; Pham, L. M.;
Rahn-Lee, L.; Lukin, M. D.; Yacoby, A.; Komeili, A.; Walsworth, R. L. Optical
magnetic imaging of living cells. Nature 2013, 496,
486–489
[Wang et al.(2022)Wang, Kong, Zhao, Huang, Yu, Wang, Shi, and
Du]mixer1
Wang, Z.; Kong, F.; Zhao, P.; Huang, Z.; Yu, P.; Wang, Y.; Shi, F.; Du, J.
Picotesla magnetometry of microwave fields with diamond sensors.
Science Advances 2022, 8, eabq8158
[Wang et al.(2022)Wang, Liu, Schloss, Alsid, Braje, and
Cappellaro]mixer2
Wang, G.; Liu, Y.-X.; Schloss, J. M.; Alsid, S. T.; Braje, D. A.;
Cappellaro, P. Sensing of Arbitrary-Frequency Fields Using a Quantum Mixer.
Phys. Rev. X 2022, 12, 021061
[Casanova et al.(2019)Casanova, Torrontegui, Plenio,
García-Ripoll, and Solano]ref27
Casanova, J.; Torrontegui, E.; Plenio, M.; García-Ripoll, J.; Solano, E.
Modulated Continuous Wave Control for Energy-Efficient Electron-Nuclear Spin
Coupling. Physical Review Letters 2019, 122,
010407
[Aharon et al.(2019)Aharon, Schwartz, and Retzker]power_limit
Aharon, N.; Schwartz, I.; Retzker, A. Quantum Control and Sensing of Nuclear
Spins by Electron Spins under Power Limitations. Physical Review
Letters 2019, 122, 120403
[Ge et al.(2022)Ge, Chen, Wang, Zhou, Zheng, Yang, Qian, and
Xu]omega
Ge, F.; Chen, B.; Wang, Y.; Zhou, F.; Zheng, R.; Yang, X.; Qian, P.; Xu, N. A
wideband balun-based microwave device for quantum information processing with
nitrogen vacancy centers in diamond. Journal of Lightwave Technology
2022, 40, 7572–7577
[Gruber et al.(1997)Gruber, Dräbenstedt, Tietz, Ludovic,
Wrachtrup, and Borczyskowski]ref34
Gruber, A.; Dräbenstedt, A.; Tietz, C.; Ludovic, F.; Wrachtrup, J.;
Borczyskowski, C. Scanning Confocal Optical Microscopy and Magnetic Resonance
on Single Defect Centers. Science 1997, 276,
2012–2014
[Staudacher et al.(2013)Staudacher, Shi, Pezzagna, Meijer, Du,
Meriles, Reinhard, and Wrachtrup]ref5
Staudacher, T.; Shi, F.; Pezzagna, S.; Meijer, J.; Du, J.; Meriles, C. A.;
Reinhard, F.; Wrachtrup, J. Nuclear Magnetic Resonance Spectroscopy on a
(5-Nanometer)3 Sample Volume. Science 2013,
339, 561–563
[Childress et al.(2006)Childress, Dutt, Taylor, Zibrov,
Jelezko, Wrachtrup, Hemmer, and Lukin]ref36
Childress, L.; Dutt, M. V. G.; Taylor, J. M.; Zibrov, A. S.; Jelezko, F.;
Wrachtrup, J.; Hemmer, P. R.; Lukin, M. D. Coherent Dynamics of Coupled
Electron and Nuclear Spin Qubits in Diamond. Science 2006,
314, 281–285
[Dutt et al.(2007)Dutt, Childress, Jiang, Togan, Maze, Jelezko,
Zibrov, Hemmer, and Lukin]register1
Dutt, M. V. G.; Childress, L.; Jiang, L.; Togan, E.; Maze, J.; Jelezko, F.;
Zibrov, A. S.; Hemmer, P. R.; Lukin, M. D. Quantum Register Based on
Individual Electronic and Nuclear Spin Qubits in Diamond. Science
2007, 316, 1312–1316
[Fuchs et al.(2011)Fuchs, Burkard, Klimov, and
Awschalom]register2
Fuchs, G. D.; Burkard, G.; Klimov, P. V.; Awschalom, D. D. A quantum memory
intrinsic to single nitrogen-vacancy centres in diamond. Nature
Physics 2011, 7, 789–793
[Zhang et al.(2021)Zhang, Guo, Ji, Wang, Yin, Kong, Lin, Yin,
Shi, Wang, and Du]register3
Zhang, Q.; Guo, Y.; Ji, W.; Wang, M.; Yin, J.; Kong, F.; Lin, Y.; Yin, C.;
Shi, F.; Wang, Y.; Du, J. High-fidelity single-shot readout of single
electron spin in diamond with spin-to-charge conversion. Nature
Communications 2021, 12, 1529
[Zu et al.(2014)Zu, Wang, He, Zhang, Dai, Wang, and
Duan]computation1
Zu, C.; Wang, W. B.; He, L.; Zhang, W. G.; Dai, C. Y.; Wang, F.; Duan, L. M.
Experimental realization of universal geometric quantum gates with
solid-state spins. Nature 2014, 514, 72–75
[van der Sar et al.(2012)van der Sar, Wang, Blok, Bernien,
Taminiau, Toyli, Lidar, Awschalom, Hanson, and Dobrovitski]computation2
van der Sar, T.; Wang, Z. H.; Blok, M. S.; Bernien, H.; Taminiau, T. H.;
Toyli, D. M.; Lidar, D. A.; Awschalom, D. D.; Hanson, R.; Dobrovitski, V. V.
Decoherence-protected quantum gates for a hybrid solid-state spin register.
Nature 2012, 484, 82–86
[Taminiau et al.(2014)Taminiau, Cramer, van der Sar,
Dobrovitski, and Hanson]computation3
Taminiau, T. H.; Cramer, J.; van der Sar, T.; Dobrovitski, V. V.; Hanson, R.
Universal control and error correction in multi-qubit spin registers in
diamond. Nature Nanotechnology 2014, 9, 171–176
[Shi et al.(2010)Shi, Rong, Xu, Wang, Wu, Chong, Peng,
Kniepert, Schoenfeld, Harneit, Feng, and Du]computation
Shi, F.; Rong, X.; Xu, N.; Wang, Y.; Wu, J.; Chong, B.; Peng, X.; Kniepert, J.;
Schoenfeld, R.-S.; Harneit, W.; Feng, M.; Du, J. Room-Temperature
Implementation of the Deutsch-Jozsa Algorithm with a Single Electronic Spin
in Diamond. Physical Review Letters 2010, 105,
040504
[Childress et al.(2006)Childress, Taylor, Sørensen, and
Lukin]computation5
Childress, L.; Taylor, J. M.; Sørensen, A. S.; Lukin, M. D. Fault-Tolerant
Quantum Communication Based on Solid-State Photon Emitters. Physical
Review Letters 2006, 96, 070504
[Cai et al.(2013)Cai, Retzker, Jelezko, and Plenio]simulation
Cai, J.; Retzker, A.; Jelezko, F.; Plenio, M. B. A large-scale quantum
simulator on a diamond surface at room temperature. Nature Physics
2013, 9, 168–173
[Ji et al.(2020)Ji, Zhang, Wang, Zhang, Guo, Chai, Rong, Shi,
Liu, Wang, and Du]simulation1
Ji, W.; Zhang, L.; Wang, M.; Zhang, L.; Guo, Y.; Chai, Z.; Rong, X.; Shi, F.;
Liu, X.; Wang, Y.; Du, J. Quantum Simulation for Three-Dimensional Chiral
Topological Insulator. Physical Review Letters 2020,
125, 020504
[Reiserer et al.(2016)Reiserer, Kalb, Blok, van Bemmelen,
Taminiau, Hanson, Twitchen, and Markham]network1
Reiserer, A.; Kalb, N.; Blok, M. S.; van Bemmelen, K. J. M.; Taminiau, T. H.;
Hanson, R.; Twitchen, D. J.; Markham, M. Robust Quantum-Network Memory Using
Decoherence-Protected Subspaces of Nuclear Spins. Physical Review X
2016, 6, 021040
[Hastie et al.(2017)Hastie, Zandonatti, Kleinfelter, Heinrich,
Rowland, Chandran, Branco, Robinson, Garry, and Saphire]network2
Hastie, K. M.; Zandonatti, M. A.; Kleinfelter, L. M.; Heinrich, M. L.;
Rowland, M. M.; Chandran, K.; Branco, L. M.; Robinson, J. E.; Garry, R. F.;
Saphire, E. O. Structural basis for antibody-mediated neutralization of Lassa
virus. Science 2017, 356, 923–928
[Humphreys et al.(2018)Humphreys, Kalb, Morits, Schouten,
Vermeulen, Twitchen, Markham, and Hanson]network3
Humphreys, P. C.; Kalb, N.; Morits, J. P. J.; Schouten, R. N.; Vermeulen, R.
F. L.; Twitchen, D. J.; Markham, M.; Hanson, R. Deterministic delivery of
remote entanglement on a quantum network. Nature 2018,
558, 268–273
[Pompili et al.(2021)Pompili, Hermans, Baier, Beukers,
Humphreys, Schouten, Vermeulen, Tiggelman, Martins, Dirkse, Wehner, and
Hanson]network4
Pompili, M.; Hermans, S. L. N.; Baier, S.; Beukers, H. K. C.; Humphreys, P. C.;
Schouten, R. N.; Vermeulen, R. F. L.; Tiggelman, M. J.; Martins, L. d. S.;
Dirkse, B.; Wehner, S.; Hanson, R. Realization of a multinode quantum network
of remote solid-state qubits. Science 2021, 372,
259–264
[Casola et al.(2018)Casola, van der Sar, and Yacoby]ref7
Casola, F.; van der Sar, T.; Yacoby, A. Probing condensed matter physics with
magnetometry based on nitrogen-vacancy centres in diamond. Nature
Reviews Materials 2018, 3, 17088
[Schmitt et al.(2017)Schmitt, Gefen, Stürner, Unden, Wolff,
Müller, Scheuer, Naydenov, Markham, Pezzagna, Meijer, Schwarz, Plenio,
Retzker, McGuinness, and Jelezko]ref8
Schmitt, S. et al. Submillihertz magnetic spectroscopy performed with
a nanoscale quantum sensor. Science 2017, 356,
832–837
[Mamin et al.(2013)Mamin, Kim, Sherwood, Rettner, Ohno,
Awschalom, and Rugar]ref58
Mamin, H. J.; Kim, M.; Sherwood, M. H.; Rettner, C. T.; Ohno, K.;
Awschalom, D. D.; Rugar, D. Nanoscale Nuclear Magnetic Resonance with a
Nitrogen-Vacancy Spin Sensor. Science 2013, 339,
557–560
[Chen et al.(2020)Chen, Hou, Ge, Zhang, Ji, Li, Qian, Wang, Xu,
and Du]lab2
Chen, B.; Hou, X.; Ge, F.; Zhang, X.; Ji, Y.; Li, H.; Qian, P.; Wang, Y.;
Xu, N.; Du, J. Calibration-Free Vector Magnetometry Using Nitrogen-Vacancy
Center in Diamond Integrated with Optical Vortex Beam. Nano Letters
2020, 20, 8267–8272
[Bian et al.(2021)Bian, Zheng, Zeng, Chen, Stöhr, Denisenko,
Yang, Wrachtrup, and Jiang]ref9
Bian, K.; Zheng, W.; Zeng, X.; Chen, X.; Stöhr, R.; Denisenko, A.; Yang, S.;
Wrachtrup, J.; Jiang, Y. Nanoscale electric-field imaging based on a quantum
sensor and its charge-state control under ambient condition. Nature
Communications 2021, 12, 2457
[Doherty et al.(2014)Doherty, Struzhkin, Simpson, McGuinness,
Meng, Stacey, Karle, Hemley, Manson, Hollenberg, and Prawer]ref10
Doherty, M. W.; Struzhkin, V. V.; Simpson, D. A.; McGuinness, L. P.; Meng, Y.;
Stacey, A.; Karle, T. J.; Hemley, R. J.; Manson, N. B.; Hollenberg, L. C.;
|
http://arxiv.org/abs/2409.03589v1 | 20240905144559 | Simplified EPFL GaN HEMT Model | [
"Farzan Jazaeri",
"Majid Shalchian",
"Ashkhen Yesayan",
"Amin Rassekh",
"Anurag Mangla",
"Bertrand Parvais",
"Jean-Michel Sallese"
] | physics.app-ph | [
"physics.app-ph"
] |
Simplified EPFL GaN HEMT Model
This project is funded by the Swiss National Science Foundation - project 200021 213116.
Farzan Jazaeri, Majid Shalchian, Ashkhen Yesayan, Amin Rassekh, Anurag Mangla,
Bertrand Parvais, and Jean-Michel Sallese
Farzan Jazaeri, Ashkhen Yesayan, and Jean-Michel Sallese are with the Electron Device Modeling and Technology Laboratory (EDLAB) of the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland (e-mail: [email protected]). Majid Shalchian is with the Department of Electrical Engineering, Amirkabir University of Technology. Amin Rassekh is with InCize, Louvain-la-Neuve, Belgium. Anurag Mangla is an alumnus of EPFL.
Bertrand Parvais is affiliated with IMEC in Leuven and holds a position as a Guest Professor at Vrije Universiteit Brussels, Belgium.
September 9, 2024
==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
This paper introduces a simplified and design-oriented version of the EPFL HEMT model <cit.>, focusing on the normalized transconductance-to-current characteristic (Gm/I_D). Relying on this figure of merit, the link between GaN HEMT modeling and technology offers a comprehensive understanding of the device behavior. Validation is achieved through measured transfer characteristics of GaN HEMTs fabricated at IMEC over a broad range of biases.
This simplified approach should enable a simple and effective circuit design methodology with AlGaN/GaN HEMT heterostructures.
§ INTRODUCTION
High Electron Mobility Transistors (HEMTs) have garnered interest due to their exceptional electron transport capabilities, boosted by a quantum well, which enables high-speed and high-power applications in the microwave and millimeter-wave domains.
Despite these advantages, existing compact models and traditional design methods for HEMTs rely on outdated interpretations of their behavior, which limits hand calculations in a preliminary design phase. To this purpose, we introduce the concept of the inversion coefficient, initially introduced for MOSFETs <cit.>, combined with the important G_m/I_D figure of merit. This approach leads to a comprehensive set of analytical expressions based on the charge-voltage relationships outlined in <cit.>, valid in all regions of operation. As for the MOSFET, we aim to provide a simple and effective design-oriented modeling framework.
The simplified model relies on only six model parameters:
the slope factor n_q, the threshold voltage V_T0, the specific current I_sp, the velocity saturation coefficient λ_c, the mobility reduction coefficient θ, and the charge threshold voltage V_T. This leads to a general modeling and design methodology for HEMTs, since the model inherits the features of the regular silicon MOSFET but with new physical quantities, i.e., a kind of generalized MOSFET. The design strategy adopted in CMOS circuits can then be seamlessly applied to the design of circuits based on HEMTs.
§ DESIGN-ORIENTED MODELING IN HEMTS
To extract the analog design parameters used to predict the electrical characteristics of HEMTs, we propose to simplify the EPFL HEMT model by making use of the G_m/I_D methodology. The model relies on semiconductor physics, making it a valuable tool for optimizing GaN circuit designs with few model parameters.
§.§ Charge-Voltage Dependence
The typical structure of a HEMT is shown in Fig. 1. It consists of a large-bandgap AlGaN semiconductor, playing the role of the confinement barrier, combined with a smaller-bandgap GaN semiconductor.
For simplicity, we propose to reuse the explicit and continuous charge-based relationship developed in <cit.>, which we recall hereafter:
U_Tln[exp(n_ch/(DoS_2D U_T))-1]+qn_ch/(C_b n_q) +u_1∛(n_ch^2)=ψ_p-V,
where V= U_Tln(N_A/N_V)+E_g/q+V_ch, u_1=Γ/q, and C_b=-ε_1/x_1, in which x_1 is the AlGaN layer thickness, U_T is the thermal voltage (k_BT/q), and E_g is the band gap of GaN; the temperature dependency of E_g is described in <cit.>:
E_g(T)=E_g(0)-5.08× 10^-4 T^2/(996-T),
where E_g(0) = 3.28 eV for the zinc blende crystal structure, and the other symbols are defined in Table <ref>.
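As a quick numerical check of the temperature dependence above, the band gap can be evaluated directly; the short sketch below is ours (function name and the 300 K test point are illustrative) and is valid for T < 996 K.

```python
def gan_bandgap_eV(T, Eg0=3.28):
    """Band gap of GaN versus temperature (eV); T in kelvin, T < 996 K."""
    return Eg0 - 5.08e-4 * T**2 / (996.0 - T)

print(gan_bandgap_eV(300.0))  # ~3.214 eV at room temperature
```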
As initially introduced in <cit.> for MOSFETs, then adopted in HEMTs <cit.>, the pinch-off surface potential ψ_p is defined as the surface potential ψ_s when n_ch=0
Q_ch=-qn_ch=C_b(V_GB+V_A+ψ_s+γ√(ψ_s) ),
leading to
ψ_p=V_GB-V_A-γ^2(√((V_GB-V_A)/γ^2+1/4)-1/2)
where the body factor γ and the voltage offset V_A are given by
γ=-x_1/ε_1√(2ε_2N_Aq),
V_A=-Δ E_C/q+ϕ_B,eff-qN_Dx_1^2/(2ε_1).
The effective Schottky barrier height ϕ_B,eff accounts for polarization effects. Its value is given by ϕ_B_eff=ϕ_B+x_1σ_pol/ε_1 where σ_pol =σ_pol(GaN)- σ_pol(AlGaN). The terms σ_pol(GaN) and σ_pol(AlGaN) are the polarization-induced charge surface densities localized at the GaN-AlGaN interface. The slope factor n_q is defined as the derivative of n_ch versus ψ_s, obtained from relation <ref>, evaluated at ψ_p (<cit.>), leading to
n_q=1+γ/(2√(ψ_p)). This slope factor is part of the linearization scheme that links the mobile charge density to the surface potential (<cit.>, <cit.>):
Q_ch=-n_q C_b (ψ_p-ψ_s),
where C_b stands for an equivalent gate-stack capacitance. We can also use the first-order term of the Taylor series of (<ref>) with respect to V_GB to get an approximate value for ψ_p, ψ_p≈ (V_GB-V_A)/n_q (<ref>). Replacing this value in (<ref>) leads to the following normalized charge-based expression
ln[exp(α q_ch)-1]+2q_ch + β∛(q_ch^2)=(V_GB-V_T0)/(n_qU_T),
where α =-Q_sp/(qDoS_2DU_T), β=(u_1/U_T)∛((-Q_sp/q)^2), Q_sp is the specific charge density defined as Q_sp = -2n_qC_bU_T, and V_T0 is the threshold voltage defined as
V_T0=V_A+n_qU_Tln(N_A/N_V)+n_qE_g/q+n_qV_ch.
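As a numerical illustration, the normalized charge equation above can be solved for q_ch at a given gate voltage with a scalar root finder, since its left-hand side increases monotonically with q_ch. The sketch below is ours; the parameter values (α, β, n_q, V_T0) are illustrative placeholders, not extracted device values.

```python
import numpy as np
from scipy.optimize import brentq

UT = 0.02585  # thermal voltage at 300 K (V)

def q_ch(VGB, VT0, nq, alpha, beta):
    """Normalized channel charge from the normalized charge equation:
    ln[exp(alpha*q) - 1] + 2*q + beta*q**(2/3) = (VGB - VT0)/(nq*UT)."""
    rhs = (VGB - VT0) / (nq * UT)
    f = lambda q: np.log(np.expm1(alpha * q)) + 2.0 * q + beta * q**(2.0 / 3.0) - rhs
    return brentq(f, 1e-12, 1e4)  # the left-hand side is monotonically increasing

# Illustrative placeholder parameters (not extracted values):
print(q_ch(VGB=-1.0, VT0=-2.0, nq=1.3, alpha=2.0, beta=1.5))
```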
§.§ Drain to Source Current Derivation in HEMTs
The drain current expression of the EPFL HEMT model is given by I_ds=I_sp× i where i is the normalized current and I_sp is the so-called specific current defined as I_sp = 2n_qμ_0 C_bU^2_T W/L_G. Here, i is obtained by
i =(q^2_s+q_s)-(q^2_d+q_d)=i_f-i_r,
where q_s =-q n_ch,s/Q_sp and q_d=-q n_ch,d/Q_sp are the normalized charge densities at source and drain, and i_f and i_r are defined as the forward and reverse normalized currents. Each of them is related to the normalized charge densities q_s and q_d by i_f,r =q_f,r^2+q_f,r.
These expressions make the HEMT model close to that of the MOSFET <cit.> and motivate their use for circuit design.
§.§ Velocity Saturation and Mobility Reduction
Reducing the transistor length requires including additional effects, such as mobility reduction, velocity saturation, and velocity overshoot.
The effective mobility μ_eff = v_drift/E_y links the drift velocity to the longitudinal and vertical electric fields. In AlGaN/GaN devices, the electron drift velocity v_drift saturates at v_sat when the longitudinal electric field E_y exceeds a critical value E_crit, which itself depends on the vertical electric field. Normalizing v_drift to v_sat and E_y to E_crit leads to a normalized effective mobility u_eff <cit.>
u_eff=(v_drift/v_sat)/(E_y/E_crit)=μ_eff/μ_x=v/e
where e stands for the normalized electric field (e = E_y/E_crit) and v for the normalized velocity (v = v_drift/v_sat). The mobility μ_x is defined as μ_x= v_sat/E_crit and includes the effect of the vertical electric field, since E_crit also depends on the vertical field E_x.
The effective mobility normalized to the low-field mobility μ_0 is rewritten as
u=μ_eff/μ_0=μ_eff/μ_xμ_x/μ_0=u_eff× u_0,
where u_0 is given by
u_0=μ_x/μ_0=[1+θ(V_GB-V_T)]^-1,
Note that in (<ref>), V_T is the charge threshold voltage, which is not to be confused with the extrapolated threshold voltage V_T0, and θ corresponds to the mobility reduction coefficient <cit.>. As already discussed in <cit.>, the drain current in a HEMT is given by the drift-diffusion equation
I_DS=μ_eff W [Q_chE_y+U_TdQ_ch/dy].
Normalizing the longitudinal electrical field E(y), the charge density, and the current to E_crit, Q_sp, and I_sp, respectively, the current can be written in normalized form as
i=-u (2q_che/λ_c+dq_ch/dξ),
where ξ=y/L_G, e=E_y/E_crit, and λ_c, called the velocity saturation parameter, is defined as λ_c =2U_T/(E_critL_G).
In a device operating in the velocity-saturated regime, the channel charge density at the drain approaches a saturated value q_d,sat, which remains almost constant over the velocity-saturated region, so that dq_ch/dξ =0. Relying on current continuity, using the piecewise-linear velocity-field model (i.e. |e|u_eff=1 for |e|≥ 1 and u_eff=1 for |e|<1), and integrating (<ref>) over the velocity-saturated part of the channel close to the drain, the normalized drain current of a velocity-saturated device is given by i_d,sat=2u_0q_d,sat/λ_c. On the other hand, relying on current continuity along the channel, the drain current can be obtained from (<ref>) as <cit.>
i_d,sat=u_0(q^2_s+q_s-q^2_d,sat-q_d,sat)=IC.
For the sake of simplicity, we first propose to neglect the impact of the mobility reduction due to the vertical electric field, assuming u_0= 1. Later on, this non-ideal effect will be merged into the solution of the input characteristic. Introducing the inversion coefficient IC to quantify the channel inversion level of a HEMT as IC= I_D,sat/I_sp =i_d,sat, the normalized source charge density in the saturation regime for a velocity-saturated device is obtained from (<ref>):
q_s=-qn_ch|_s/Q_sp=1/2√(λ^2_cIC^2+2λ_cIC+4IC+1)-1/2.
It is worth mentioning that, to some extent, the term 'inversion' and the use of IC to identify such a level of inversion in HEMT devices may seem inappropriate. However, to make the HEMT look like a generalized silicon MOSFET, we can still classify the operation regions of a HEMT as weak inversion (WI) for IC ≤ 0.1, moderate inversion (MI) for 0.1 < IC ≤ 10, and strong inversion (SI) for IC > 10, even though the channel does not result from an inversion process.
Inserting relation (<ref>) into (<ref>) while imposing V_ch=0 leads to a relationship between IC and V_GB in saturation mode. The relations (<ref>) and (<ref>) give an explicit solution of V_GB for a given IC. The general current–voltage relationship for a HEMT in saturation is independent of technological parameters and makes use of normalized variables. In the presence of mobility reduction due to the vertical field, IC can be replaced by u_0IC. Then using (<ref>), the large- and small-signal characteristics over a wide range of IC, from weak to strong 'inversion', can be fully captured by only six design parameters i.e. V_T0, n_q, I_sp, λ_c, V_T, and θ.
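The IC-to-V_GB mapping described above lends itself to a direct implementation: the saturation relation gives q_s from IC, and substituting q_s into the normalized charge equation with V_ch = 0 returns V_GB explicitly. The sketch below is ours and reuses the illustrative placeholder parameters of the previous snippet.

```python
import numpy as np

UT = 0.02585  # thermal voltage at 300 K (V)

def qs_from_IC(IC, lam_c):
    """Normalized source charge of a velocity-saturated device in saturation."""
    return 0.5 * np.sqrt((lam_c * IC)**2 + 2.0 * lam_c * IC + 4.0 * IC + 1.0) - 0.5

def VGB_from_IC(IC, lam_c, VT0, nq, alpha, beta):
    """Explicit V_GB for a given IC (V_ch = 0), by back-substitution of q_s."""
    qs = qs_from_IC(IC, lam_c)
    return VT0 + nq * UT * (np.log(np.expm1(alpha * qs)) + 2.0 * qs
                            + beta * qs**(2.0 / 3.0))

# Sweep IC from weak to strong 'inversion' to trace the transfer characteristic:
for IC in (0.01, 1.0, 100.0):
    print(IC, VGB_from_IC(IC, lam_c=0.3, VT0=-2.0, nq=1.3, alpha=2.0, beta=1.5))
```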
On the other hand, differentiating (<ref>), the input transconductance in a HEMT is obtained
g_m=∂ I_DS/∂ V_GB=I_sp[(2q_s+1)∂ q_s/∂ V_GB-(2q_d+1)∂ q_d/∂ V_GB]
where ∂ q_s/∂ V_GB and ∂ q_d/∂ V_GB are obtained from (<ref>) and are expressed by ∂ q_s/∂ V_GB≈∂ q_d/∂ V_GB≈ 1/(2n_qU_T). Therefore, an approximate value of g_m is given by
g_m=I_sp(q_s-q_d)/(n_qU_T).
Knowing g_m, the best way to interpret IC is through the transconductance efficiency, i.e. g_m/I_D versus IC, a key figure of merit widely used in CMOS circuit design:
g_mn_qU_T/I_DS =(q_s-q_d)/i=1/(q_s+q_d+1)
=2/(√((1+λ_cIC)^2+4IC)+1+λ_cIC).
Considering the relationship between the transconductance efficiency and IC allows a proper evaluation of the design trade-offs among gain, analog performance, and transistor dimensions. This ratio reaches its maximum in weak 'inversion' (g_mn_qU_T/I_DS = 1), where the drain current depends exponentially on V_GB. Moving toward strong inversion, the efficiency rolls off and approaches g_mn_qU_T/I_DS≈ 1/(λ_cIC) for large values of IC.
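The efficiency expression above is cheap to evaluate and makes both asymptotes explicit; the short sketch below is ours, with illustrative λ_c values, and can be used to generate normalized efficiency curves such as those of Fig. 2.

```python
import numpy as np

def gm_eff(IC, lam_c):
    """Normalized transconductance efficiency g_m*n_q*U_T/I_DS."""
    return 2.0 / (np.sqrt((1.0 + lam_c * IC)**2 + 4.0 * IC) + 1.0 + lam_c * IC)

IC = np.logspace(-2, 2, 5)
print(gm_eff(IC, lam_c=0.0))  # long channel: -> 1 in WI, -> 1/sqrt(IC) in SI
print(gm_eff(IC, lam_c=0.3))  # short channel: -> 1/(lam_c*IC) at large IC
```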
Once the g_m/I ratio is established, determining the optimal width-to-length ratio (W/L) of the transistor becomes straightforward.
§ PARAMETER EXTRACTION PROCEDURE
This section describes a straightforward and precise method for parameter extraction for the simplified design-oriented EPFL HEMT model. This method necessitates six design parameters alongside some physical constants.
The initial step involves extracting n_q from the weak-inversion plateau of I_DS/(g_mU_T). It is worth noting that the ratio I_DS/(g_mn_qU_T) reaches its minimum value of 1 in weak inversion, leading to n_q=I_DS/(g_mU_T) there. This minimum occurs when the drain current dependence is exponential with respect to V_GB.
Consequently, the slope factor can be extracted experimentally from the plot of I_DS/(g_mU_T) versus I_DS in weak inversion. Similarly, the specific current I_sp can be determined from the same plot: adjusting I_sp so that the asymptotic curvature of I_DS/(g_mU_T) matches the experimental data allows I_sp to be extracted.
Moving forward, λ_c is adjusted until the experimental data intersects the 1/(λ_cIC) line. Subsequently, the parameter n_q is refined to ensure that both the g_mn_qU_T/I_DS curve and the n_q curve intersect their asymptotic lines simultaneously.
Then, V_T and V_T0 are utilized to fit the I_DS-V_GB curve near the threshold and the rising part of g_m. Finally, θ and other variables are fine-tuned to fit the I_DS-V_GB curve at higher V_GB and the falling part of the g_m curve.
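The first two extraction steps can be automated from a measured (V_GB, I_DS, g_m) sweep. The sketch below is our illustrative reading of the procedure: the weak-inversion mask, the plateau estimate, and the asymptote-crossing heuristic for I_sp are simplifications, not the exact fitting used for Table II.

```python
import numpy as np

def extract_nq_Isp(ID, gm, UT=0.02585):
    """Estimate n_q from the weak-inversion plateau of ID/(gm*UT), then get a
    first guess of I_sp from the efficiency curve (illustrative heuristic).
    ID and gm are assumed sorted by increasing drain current."""
    r = ID / (gm * UT)
    wi = ID <= np.percentile(ID, 10)   # crude weak-inversion selection
    nq = np.median(r[wi])              # plateau value: n_q
    eff = gm * nq * UT / ID            # g_m*n_q*U_T/I_DS, ~1 in weak inversion
    # The WI/SI asymptotes of eff cross near IC = 1, i.e. near ID = I_sp;
    # here that crossing is crudely approximated by the eff = 0.6 level.
    Isp = np.interp(0.6, eff[::-1], ID[::-1])
    return nq, Isp
```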
§ RESULTS AND DISCUSSION
The model has been validated experimentally on GaN-on-Si HEMTs fabricated at IMEC following <cit.>. The epitaxial stack, grown on 200 mm HR Si wafers by MOCVD, is composed of: a 5 nm in-situ SiN passivation layer, a 15 nm AlGaN barrier, a 1 nm AlN spacer, and a 300 nm GaN channel, on top of a 1 μm C-doped back barrier and 1 μm transition layers.
In Figure 2, the slope factors (n_q) are derived from the plateau of I_D/(G_m·U_T) versus I_D in weak inversion (WI) for both long-channel (L_G = 3.0 μ m), shown in Fig. 2.a, and short-channel (L_G = 200 nm), depicted in Fig. 2.e.
This essential extraction process captures the behavior of the drain current and transconductance, offering profound insights into device performance across different channel lengths.
Additionally, the normalized transconductance efficiency, G_mnU_T /I_D, is plotted versus the inversion coefficient IC.
Figures 2.b and 2.f serve to corroborate the robustness of the G_m/I_D design methodology. In particular, from the intersection of the asymptotes in weak and strong 'inversion' (dashed lines in Figure 2), the key specific current parameter I_sp is obtained for both short and long channel devices.
Figures 2c and 2g present the I_D versus V_G characteristics for both long and short channel GaN HEMTs, depicted in both linear and logarithmic scales. These plots demonstrate that the simplified EPFL model accurately captures the input transfer behaviour across different channel lengths. Whether in the linear or logarithmic scale, the model accurately reflects the device characteristics, underscoring its efficacy in modelling both long and short channel devices.
Finally, Figures 2d and 2h show a detailed comparison of the transconductance (G_m) against V_G for both long and short channel GaN HEMTs. The remarkable fidelity with which the simplified EPFL model captures both G_m and its peak values across varying gate voltages is confirmed. This robust agreement between model predictions and experimental data highlights the model's capability to accurately represent the transconductance behavior of GaN HEMTs. The extracted parameters for both long and short channel GaN HEMTs are listed in Table II.
This validation underscores the simplified model's effectiveness in describing HEMTs with only a few parameters and enables the design and optimization of GaN-based devices in a more traditional way.
§ CONCLUSION
This paper introduces a simplified and design-oriented adaptation of the EPFL HEMT model, with a specific focus on the normalized transconductance-to-current characteristic and IC. The research delves into GaN HEMT technology and modeling, aiming to provide a precise model for the electrical behavior of these devices using only a few parameters.
Validation is conducted by comparing measured transfer characteristics of GaN HEMTs at room temperature across a broad range of IC values. This study provides a valuable guide for the effective design of circuits employing HEMTs.
1
HEMT
F. Jazaeri and J.-M. Sallese, “Charge-based EPFL HEMT Model,” IEEE
Transactions on Electron Devices, vol. 66, no. 3, pp. 1218–1229, 2019.
EKV1
Christian C. Enz, Eric A. Vittoz, Charge-Based MOS Transistor
Modeling: The EKV Model for Low-Power and RF IC Design. Wiley, 2006.
tran
F. Jazaeri, M. Shalchian, and J.-M. Sallese, “Transcapacitances in EPFL HEMT
Model,” IEEE Transactions on Electron Devices, vol. 67, no. 2, pp.
758–762, 2020.
9137643
M. Allaei, M. Shalchian, and F. Jazaeri, “Modeling of Short-Channel Effects
in GaN HEMTs,” IEEE Transactions on Electron Devices, vol. 67,
no. 8, pp. 3088–3094, 2020.
jazaeri2017free
F. Jazaeri, A. Pezzotta, and C. Enz, “Free Carrier Mobility Extraction in
FETs,” IEEE Transactions on Electron Devices, vol. 64, no. 12, pp.
5279–5283, 2017.
EG
Michael E. Levinshtein (Editor), Sergey L. Rumyantsev (Editor), Michael S.
Shur (Editor), Properties of Advanced Semiconductor Materials: GaN,
AlN, InN, BN, SiC, SiGe. John Wiley and Sons, Inc., New York, 2001.
linearization
J.-M. Sallese, M. Bucher, F. Krummenacher, and P. Fazan, “Inversion Charge
Linearization in MOSFET Modeling and Rigorous Derivation of the EKV Compact
Model,” Solid-State Electronics, vol. 47, no. 4, pp. 677–683, 2003.
Mangla
A. Mangla, “Modeling Nanoscale Quasi-Ballistic MOS Transistors: A Circuit
Design Perspective,” 2014.
8993582
U. Peralagu, A. Alian, Putcha et al., “CMOS-compatible GaN-based Devices
on 200mm-Si for RF Applications: Integration and Performance,” pp.
17.2.1–17.2.4, 2019.
|
http://arxiv.org/abs/2409.02468v1 | 20240904063312 | Axion Minicluster Halo Limits from Wide Binary Disruption | [
"Zihang Wang",
"Yu Gao"
] | hep-ph | [
"hep-ph",
"astro-ph.CO"
] | |
http://arxiv.org/abs/2409.03433v1 | 20240905112557 | An innovation-based cycle-slip, multipath estimation, detection and mitigation method for tightly coupled GNSS/INS/Vision navigation in urban areas | [
"Bo Xu",
"Shoujian Zhang",
"Jingrong Wang",
"Jiancheng Li"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
An innovation-based cycle-slip, multipath estimation, detection and mitigation method for tightly coupled GNSS/INS/Vision navigation in urban areas
Bo Xu, Shoujian Zhang, Jingrong Wang, Jiancheng Li
Bo Xu and Shoujian Zhang are with School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China; Corresponding author: Shoujian Zhang, Email: [email protected]
Jingrong Wang is with the GNSS research center, Wuhan University, Wuhan 430079, China;
Jiancheng Li is with the School of Geodesy and Geomatics, Hubei Luojia Laboratory, Wuhan University, Wuhan 430079, China;
September 9, 2024
========================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Precise, consistent, and reliable positioning is crucial for a multitude of uses. In order to achieve high precision global positioning services, multi-sensor fusion techniques, such as the Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS)/Vision integration system,
combine the strengths of various sensors. This technique is essential for localization in complex environments and has been widely used in the mass market.
However, frequent signal deterioration and blockage in urban environments exacerbate the degradation of GNSS positioning and negatively impact the performance of the multi-sensor integration system. For GNSS pseudorange and carrier phase observations in urban environments, we propose an innovation-based cycle slip/multipath estimation, detection, and mitigation (I-EDM) method to reduce the influence of multipath effects and cycle slips on positioning caused by obstructions. The method obtains the innovations of GNSS observations with the cluster analysis method. Then the innovations are used to detect the cycle slips and multipath. Compared with the residual-based method, the innovation-based method avoids the residual overfitting caused by the least square method, resulting in better detection of outliers within the GNSS observations. Vehicle tests carried out in urban settings verify the proposed approach. Experimental results indicate that accuracies of 0.23 m, 0.11 m, and 0.31 m in the east, north, and up components can be achieved by the GNSS/INS/Vision tightly coupled system with the I-EDM method, a maximum improvement of 21.6% when compared with the residual-based EDM (R-EDM) method.
multi-sensor fusion; tightly coupled integration; cycle slip detection; multipath mitigation; innovation-based method.
§ INTRODUCTION
Autonomous driving has captured significant attention in both research and practical applications, where GNSS serves as the primary positioning technology to provide continuous, accurate, and stable positioning results <cit.>. Despite the worldwide prevalence and extensive utility of GNSS, its signals are susceptible to disruptions such as multipath effects and signal loss in complex urban settings, resulting in decreased positioning accuracy <cit.>. Consequently, the utilization of multi-sensors provides a more comprehensive and robust positioning capability, which becomes the optimal choice for pose estimation in urban scenarios <cit.>.
Integrating different sensors, such as INS and cameras, with GNSS can fully utilize the locally accurate characteristics of INS/Visual Inertial Odometry (VIO) and the global drift-free characteristics of GNSS. Fusing GNSS/INS has been widely studied. Significant improvements in accuracy, continuity, and reliability of localization are carefully analyzed and evaluated <cit.>. However, in an integrated system where GNSS serves as the cornerstone for providing global positioning, the positioning accuracy is compromised or even worsens due to the rapid error accumulation of the microelectromechanical system inertial measurement unit (MEMS-IMU) when GNSS observations are affected by multipath effects or lost tracking <cit.>.
To mitigate the drifts of MEMS-IMU, the low-cost visual camera is applied in GNSS/INS integration system to offer redundant measurements. The multi-sensor fusion extended Kalman filter (MSF-EKF) framework proposed by Lynen et al. <cit.> allows for the seamless loose sensor-feed integration of GNSS, INS, and visual sensors. A semi-tightly coupled framework of multi-GNSS/INS/Vision based on graph optimization is proposed by Li et al. <cit.>. The method produces consistent and accurate global positioning outputs and operates well in GNSS-challenged environments. A tightly coupled GNSS/INS/Vision system with open-source code is proposed by Cao et al. <cit.>; However, the method only incorporates Doppler shift and code pseudorange measurements. To enhance the navigation accuracy during GNSS outages, Liao et al. <cit.> propose a tightly coupled framework to make full use of measurements from real-time kinematic (RTK)/INS/Vision. The method improves the accuracy and robustness of the system under conditions where GNSS is unavailable. However, these methods regard GNSS observations as equal weight, without considering the impact of outliers on the system, thereby failing to make the most of the GNSS observations.
Effective GNSS measurement quality control is key to ensuring that the estimator is not affected by outliers in urban situations. A practical quality control technique is to assign weights according to the quality of the signals. The common GNSS weighting strategies include: signal to noise ratio (SNR) model <cit.>, satellite elevation angle model <cit.>, SNR and satellite elevation angle hybrid model <cit.> et al. However, computing the appropriate weights for GNSS observations in urban situations is challenging. Leveraging sky plots to divide the satellites into line-of-sight (LOS) and non-line-of-sight (NLOS), and decreasing the weights of NLOS can increase the positioning accuracy in urban settings <cit.>. Nevertheless, this approach is limited to adjusting weights at the satellite levels, which means it can not modify the weights corresponding to different frequencies from the same satellite. An additional technique for measurement quality control is fault detection and exclusion (FDE) <cit.>. A multiple-fault GNSS FDE technique is proposed by Sun et al. <cit.> for an integrated GNSS/IMU system. A parallel GNSS FDE technique for tightly coupled GNSS/INS/Vision integration through factor graph optimization is presented by Jiang et al. <cit.>. However, they only perform quality control on the pseudorange observations and do not consider the carrier phase observations.
Even though significant progress has been made in the area of positioning in challenging situations, GNSS quality control still requires extensive attention. In our earlier work, a unified cycle slip, multipath estimation, detection, and mitigation (EDM) method was proposed <cit.>, in which we utilized a clustering method to separate the cycle slips and multipath from the carrier phase observations, aided by the predicted VIO positioning. However, this methodology entails repeatedly estimating the parameters using the least square method, followed by cycle slip detection and multipath modeling with the observation residuals. However, the residuals are inevitably absorbed into the estimated parameters (e.g. positioning and receiver clock) with the least square method, leading to inaccurate outlier culling and cycle slip detection. Meanwhile, the method relies on the predicted VIO position to preprocess the GNSS observations. When GNSS signals are frequently blocked, setting the uncertainty of the predicted position becomes challenging. This contribution proposes an innovation-based EDM method. The innovations are applied to identify the cycle slips and multipath in the GNSS observations. To obtain the innovations, the inter-frequency bias (IFB) is estimated with the clustering method, and the receiver clock is obtained using the satellite with the highest elevation angle. Furthermore, considering the persistent impact of multipath on ambiguities, we mark the ambiguity as a cycle slip when the accumulated multipath exceeds a certain threshold, to mitigate the influence of the multipath effect on localization. Finally, we conduct a detailed analysis of the effectiveness of the innovation-based EDM method and the residual-based EDM method.
Our main contributions are summarized as follows:
* We proposed an innovation-based EDM method for the pseudorange and carrier phase observations in the tightly coupled GNSS/MEMS/Vision system.
* We compare and analyze the residual-based (R-EDM) and innovation-based (I-EDM) methods in terms of cycle slip detection, multipath estimation, positioning accuracy, and computational efficiency in the tightly coupled multi-sensor fusion system.
* Extensive road vehicular experimental evaluations are conducted in the urban area to evaluate the performance of our method. The experimental results demonstrate the superior performance of our GNSS observation preprocessing method in complex urban settings.
The introduction is followed by a description of the RTK/MEMS-IMU/Vision integration approach and our proposed innovation-based EDM method. Then, the vehicle-borne experimental setup and processing strategies are described in detail. The experimental outcomes of different GNSS observation weighting schemes in typical urban settings are examined, and the effectiveness of the innovation-based EDM and residual-based EDM approaches is contrasted. The conclusions are given in the end.
§ METHODS
This section begins with an overview of the tightly coupled RTK/MEMS/Vision system. The error models of all the associated senors are next presented, followed by the time update and measurement update model in the tightly coupled filter. Lastly, we provide the specifics of our proposed innovation-based EDM algorithm.
§.§ System Overview
Fig.<ref> shows the system architecture of the tightly coupled RTK/MEMS/Vision system. Based on the Multi-State Constraint Kalman Filter (MSCKF), the pseudorange and carrier phase observations from GNSS, the raw data of MEMS-IMU, and the images from the stereo camera are fused. The INS mechanization is used to carry out the state propagation once the system has been initialized. The predicted state variables will also help with feature matching and tracking in the image processing, as well as the preprocessing of outlier culling and cycle slip detection in GNSS processing. The state variables and associated covariance in the estimator will be augmented upon receiving new visual or GNSS observations. Once the new observations are deemed to be available, the corresponding state variables and the covariance will be updated in the measurement update. After completing the filter, the state variables are ultimately fixed through ambiguity resolution.
§.§ INS Error Model
The measurements of the IMU consist of the biased, noisy angular velocity and linear acceleration of the vehicle platform. Given the measurement noise level of the low-cost IMU, the Coriolis and centrifugal terms caused by Earth's rotation are neglected in the IMU formulation. Therefore, the kinematic model of the IMU's error state can be written as <cit.>:
δ𝐩̇^n = δ𝐯^n
δ𝐯̇^n = -𝐑^n_b(𝐚-𝐛_a)^∧δθ-𝐑^n_bδ𝐛_a-𝐑_b^n 𝐧_a
δθ̇ = -(ω - 𝐛_w)^∧δθ-δ𝐛_w-𝐧_w
δ𝐛̇_w = 𝐧_b_w
δ𝐛̇_a = 𝐧_b_a
where b and n are the IMU body frame and navigation frame (East-North-Up frame), respectively. δ𝐩̇^n, δ𝐯̇^n, δθ̇, δ𝐛̇_w and δ𝐛̇_a represent the derivatives of the position, velocity, attitude, gyroscope bias and accelerometer bias errors in the navigation frame, respectively. 𝐑^n_b indicates the rotation from the body frame to the navigation frame. 𝐚 and ω are the acceleration and angular velocity measurements, respectively. The vectors 𝐧_a and 𝐧_w represent the Gaussian noise of the accelerometer and gyroscope measurements, while 𝐧_b_a and 𝐧_b_w are the random walk rates of the accelerometer and gyroscope measurement biases. 𝐚^∧ is the skew-symmetric matrix of 𝐚. Therefore, the INS error state vector is written as:
δ𝐱_ins = [δθ δ𝐯^n δ𝐩^n δ𝐛_w δ𝐛_a]^⊤
§.§ Visual Observation Model
Taking into account that the stereo camera observes a visual point feature f^j, the related visual observation measurement 𝐳_i^j can be written as follows:
𝐳_i^j = [u_i,1^j, v_i,1^j, u_i,2^j, v_i,2^j]^⊤ = [1/z_j^C_i,1 𝐈_2×2, 0_2×2; 0_2×2, 1/z_j^C_i,2 𝐈_2×2] [x_j^C_i,1, y_j^C_i,1, x_j^C_i,2, y_j^C_i,2]^⊤ + 𝐧^j_i
where [u_i, k^j, v_i, k^j]^⊤, k ∈{1,2} indicate the feature observations of the left and right cameras on their normalized projective plane. 𝐧^j_i indicates the visual measurement noise. [x_j^C_i, n, y_j^C_i, n, z_j^C_i, n]^⊤, n ∈{1,2} are the visual landmarks in the camera frame, which can be calculated as follows:
[x_j^C_i,1, y_j^C_i,1, z_j^C_i,1]^⊤ = (𝐑^n_C_i,1)^⊤(𝐩^n_j - 𝐩^n_C_i,1)
[x_j^C_i,2, y_j^C_i,2, z_j^C_i,2]^⊤ = 𝐑^C_i,2_C_i,1(𝐩^C_i,1_j - 𝐩^C_i,1_C_i,2)
where 𝐑^C_i,2_C_i,1 and 𝐑^n_C_i,1 denote the rotation matrix that goes from the left camera frame to the right camera frame and navigation frame, respectively. The location of the left camera with regard to the navigation frame is 𝐩^n_C_i, 1. 𝐩^C_i,2_C_i, 1 is the position of the left camera frame with respect to the right camera frame. 𝐩_j^n and 𝐩_j^C_i,1 represent the visual landmarks' position in the navigation frame and left camera frame, respectively.
In order to construct the visual reprojection residuals between relative camera poses, we use the algorithm that is proposed by <cit.>. The following is the description of the visual state vector:
δ𝐱_vis = [δθ^n_C_1 δp^n_C_1 δθ^n_C_2 δp^n_C_2 ⋯ δθ^n_C_k δp^n_C_k]^⊤
where δθ^n_C_i and δp^n_C_i represent the error state of the left camera's rotation and position at various time stamps, respectively. k indicates how many camera poses there are in the sliding window overall.
The following is the expression for the projection residual of the visual measurement:
𝐫_vis = 𝐳_vis - 𝐳̂_vis = 𝐇_visδ𝐱_vis + 𝐧_vis
where 𝐳_vis and 𝐳̂_vis are the observation and reprojection visual measurements, respectively, and 𝐇_vis is the Jacobian of the relevant camera states, which are provided in <cit.>.
§.§ Between-station Single-difference Multi-GNSS Observation Model
From the undifferenced pseudorange and carrier phase models, we will derive the between-station single-difference GNSS model. The undifferenced pseudorange P^s_r,i and carrier phase L_r,i^s model is first given as follows:
P^s_r,i = ρ+c(t_r-t^s)+T_r^s+I_r,i^s+e_r,i^s,σ^2_P^s_r,i
L_r,i^s =ρ+c(t_r-t^s)+T_r^s-I_r,i^s+ λ_i · N_r,i^s+ϵ_r,i^s, σ^2_L_r,i^s
where the satellite and receiver are denoted by s and r, respectively. The carrier frequency is i=1,2,3. The speed of light is c. The pseudorange and carrier phase observations are denoted by P_r,i^s and L^s_r,i. The distance between the receiver and the satellite is ρ. The receiver clock and satellite clock are t_r and t^s. The tropospheric and ionospheric delays on the i frequency are T^s_r and I^s_r,i. The wavelength of the carrier phase is λ_i. The float ambiguity at frequency i is denoted by N^s_r,i. The noise of the pseudorange and carrier phase measurement is e^s_r,i and ϵ_r,i^s, with the variance of σ^2_P^s_r,i and σ^2_L_r,i^s, respectively.
The GNSS observations from the base station help to significantly improve the positioning accuracy of the rover station. Satellite orbit and clock biases, ionospheric and tropospheric delays in (<ref>) are eliminated by performing single-difference between the base station b and rover station r when the baseline is less than 10km. The following is how we get the single-difference measurement model:
Δ P^s_r,i = Δρ+c Δ t_r + Δ e_r,i^s,Δσ^2_P^s_r,i
Δ L_r,i^s =Δρ+c Δ t_r + λ_i ·Δ N_r,i^s+ Δϵ_r,i^s, Δσ^2_L_r,i^s
where Δ(·) represents the single-difference operator.
We also introduce the IFB <cit.> for the RTK model with multi-system and multi-frequency observations. Subsequently, the state vector of single-difference RTK can be expressed as follows:
δ𝐱_rtk = [δΔ𝐩^n_r δΔt^G_r δΔ𝐈𝐅𝐁_r,i δΔ𝐍_r,i^s]^⊤
where δΔ𝐩^n_r indicates the error state of the baseline from a base station to a rover station. δΔt^G_r is the error state of the single-difference GPS receiver clock, i.e., the datum receiver clock. δΔ𝐈𝐅𝐁_r,i is the error state of the single-difference IFB. And δΔ𝐍_r,i^s represents the error state of the single-difference ambiguities.
It is worth noting that we estimate the single-difference float ambiguity in (<ref>). After the measurement update at each epoch, we will fix the between-satellite between-station ambiguities to integers, if possible, then the fixed solutions will be obtained. In this study, the integer least-square estimation of the float ambiguities is searched using the least-square ambiguity decorrelation adjustment (LAMBDA) method <cit.>. Although fixing ambiguities can enhance the positioning accuracy of RTK, if the float ambiguities are excessively erroneous, they tend to result in incorrect fixes, leading to fixed positioning solutions worse than the float solutions. Therefore, reliable initial float ambiguities are crucial for high-precision RTK positioning.
§.§ Tightly Coupled Filter Model for RTK/MEMS/Vision
The MSCKF is employed to estimate the real-time state variables with single-difference observations of GNSS, raw observations of MEMS IMU, and the stereo camera. To realize the tightly coupled estimation, all the corresponding state variables must be held in one estimator, which is different from loosely coupled filter <cit.> or semi-tightly coupled filter <cit.>. The state variables in the tightly coupled filter are defined as:
δ𝐱= [δ𝐱_ins δ𝐱_vis δΔ t_r^G δΔ𝐈𝐅𝐁_r,i δΔ𝐍_r,i^s_δ𝐱_rtk]^⊤
The Kalman filter's time updates and measurement updates are used to estimate the optimal state variables. In the time updates, the state prediction and error state covariance are propagated. The INS states are propagated forward by INS mechanization. The RTK-related state variables and the camera poses in the sliding window are regarded as constant without process noises. Regarding the error state covariance, the error state variables are propagated using a continuous system model, which is provided by:
[δ𝐱̇_ins; δ𝐱̇_vis; δ𝐱̇_rtk] = diag(𝐅_ins, 0, 0) [δ𝐱_ins; δ𝐱_vis; δ𝐱_rtk] + [𝐧_ins; 0; 𝐧_rtk]
where the continuous-time state transition matrix of INS is represented by 𝐅_ins. The process noise of INS is 𝐧_ins, including the Gaussian noise of the accelerometer and gyroscope. And 𝐧_rtk is the process noise of RTK, containing the Gaussian noise of the receiver clock.
The discrete form of the covariance propagation based on (<ref>) can be written as:
𝐏_k = Φ_k, k-1𝐏_k-1Φ_k, k-1^⊤ + 𝐐_k-1
where Φ_k,k-1 is the discrete-time state transition matrix and 𝐐_k-1 is the discrete-time noise covariance matrix. The covariance matrix and state variables are augmented each time a new visual or RTK observation is captured. RTK augmentation is simple: the optimal value and covariance are carried over from the previous epoch if the state variables from that epoch are present in the current epoch, as seen in Fig. <ref>. For the newly added state variables in the current epoch, the optimal error state value is set to 0, and the variance is set as Gaussian noise. The INS mechanization is used to initialize the camera pose for each newly added image, and the augmented covariance can be written as follows:
𝐏'_k = [𝐈_15+γ+6k; 𝐇] 𝐏_k [𝐈_15+γ+6k; 𝐇]^⊤
with 𝐇 = [𝐑^b_c^⊤, 0, 0, 0, 0, 0_3×(γ+6k); -𝐑_b^n(𝐩^b_c)^∧, 0, 𝐈, 0, 0, 0_3×(γ+6k)]. γ and k are the number of RTK-related parameters and camera poses, respectively. 𝐑_c^b and 𝐩_c^b are the offline-calibrated extrinsic parameters that connect the IMU and camera <cit.>.
The tightly coupled measurement update can be represented as follows when the GNSS or visual measurements are available:
[𝐫_vis; 𝐫_P; 𝐫_L] = [𝐳_vis - 𝐳̂_vis; Δ𝐏^s_r,i - Δ𝐏̂^s_ins,i; Δ𝐋^s_r,i - Δ𝐋̂^s_ins,i] = [𝐇_vis; 𝐇_Δ𝐏^s_r,i; 𝐇_Δ𝐋^s_r,i] [δ𝐱_ins; δ𝐱_vis; δΔ t^G_r; δΔ𝐈𝐅𝐁_r,i; δΔ𝐍_r,i^s] + [𝐧_vis; Δ𝐞^s_r,i; Δϵ^s_r,i]
where Δ𝐋̂^s_ins,i and Δ𝐏̂^s_ins,i represent the INS-predict single-difference GNSS carrier phase measurement and the pseudorange measurement, respectively. 𝐇_vis, 𝐇_Δ𝐏^s_r,i and 𝐇_Δ𝐋^s_r,i represent the Jacobian matrix of vision representation error, pseudorange error and carrier phase error, respectively. Since the INS central position 𝐩_ins^n does not overlap with the GNSS receiver antenna reference point 𝐩_rtk^n, we also consider the lever-arm correction 𝐥^b in our study.
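Once the residuals and Jacobians above are stacked, the update is a standard Kalman step. The sketch below is ours, with additive error-state injection shown for brevity; attitude errors would be injected through a proper retraction rather than simple addition.

```python
import numpy as np

def measurement_update(x, P, r, H, R):
    """EKF update with stacked visual/pseudorange/carrier-phase residuals r."""
    S = H @ P @ H.T + R                # innovation covariance
    K = np.linalg.solve(S, H @ P).T    # Kalman gain K = P H^T S^{-1}
    x = x + K @ r                      # additive injection (simplified)
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return x, P
```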
§.§ Innovation-based EDM Method
Based on the hybrid model of SNR and satellite elevation angle <cit.>, we propose an innovation-based EDM method for GNSS pseudorange and carrier phase observations, which avoids residual overfitting with the least square method. As illustrated in Fig.<ref> and Fig.<ref>, the pseudorange and carrier phase data are processed independently by the innovation-based EDM approach. For pseudorange processing, we assume all the pseudorange observations share the same receiver clock Δt_r, and the observations from different frequencies and systems are modeled with IFB. There are three steps in the pseudorange processing algorithm:
1)
IFB initialization: at the initial epoch, the IFB needs to be computed first. We choose the code observations on the B1I frequency of the BDS system as the reference observations, which are the most numerous among all the observations. The DBSCAN cluster analysis method <cit.> is utilized to obtain the stable mean value m_P_B1I. Then the observations from the other frequencies are clustered after calibration with m_P_B1I. The mean IFBs of the different frequencies are computed from the largest cluster in the clustering results:
ΔIFB_i = 1/n∑^n_i=1(Δ P^s_r,i - m_P_B1I)
2)
Outlier culling: we assume the observations of satellites with the highest elevation angle are almost not affected by the multipath effect. Then the receiver clock Δt_r is computed with these observations after calibrating the IFB. After obtaining ΔIFB_i and Δt_r, innovations of different observations, i.e. the prefit-residuals, can be computed by:
ℰ_Δ P^s_r,i = Δ P^s_r,i - ΔIFB_i - Δ t_r
Finally, the weights of observations whose innovations are greater than 1 m are decreased from w to 10^-3w (see the sketch after this list). The reason for flagging pseudorange innovations greater than 1 m as outliers is that the standard deviation of the pseudorange is set to 0.3 m in our research; therefore, any innovation greater than about 3 times the standard deviation needs to be culled.
3)
IFB update: we compute the m_P_B1I and update the mean IFBs of different frequencies as (<ref>) in each epoch to obtain the robust estimation of IFB.
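A compact sketch of the three steps is given below. It is our illustrative reading of the procedure: dP holds single-difference pseudoranges with the INS-predicted geometric range already removed, the DBSCAN eps/min_samples settings are placeholders, and the 1 m gate follows the 3-sigma reasoning above.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def largest_cluster_mean(v, eps=0.5, min_samples=4):
    """Robust mean via DBSCAN: average the largest cluster.
    eps/min_samples are placeholder settings; assumes one cluster exists."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(v.reshape(-1, 1)).labels_
    best = np.bincount(labels[labels >= 0]).argmax()
    return v[labels == best].mean()

def pseudorange_edm(dP, freq, w, ref="B1I", hi_elev=0, gate=1.0):
    """Sketch of the three-step pipeline: IFB init/update, innovations, culling.
    dP: single-difference pseudoranges minus the INS-predicted range (m)."""
    m_ref = largest_cluster_mean(dP[freq == ref])            # step 1/3: datum
    ifb = {f: np.mean(dP[freq == f] - m_ref) for f in np.unique(freq)}
    dt_r = dP[hi_elev] - ifb[freq[hi_elev]]  # clock from highest-elevation sat
    innov = dP - np.array([ifb[f] for f in freq]) - dt_r     # step 2: innovations
    w = np.where(np.abs(innov) > gate, 1e-3 * w, w)          # down-weight outliers
    return innov, w, ifb
```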
For carrier phase observations, the algorithm is simplified due to the adoption of between-station between-epoch double-difference carrier phase observations Δ∇ L^s_r,i.
In order to determine the innovations of the observations, we further assume that all the carrier phase observations share the same receiver clock, with the median value serving as the datum. Next, using the following criteria <cit.>, the observations are classified as good, multipath-affected, or cycle-slip-affected.
f(ℰ_Δ∇ L^s_r,i)={
good, |ℰ_Δ∇ L^s_r,i| < δ_th
multipath, δ_th < |ℰ_Δ∇ L^s_r,i| < 3 ·δ_th
cycle slip, |ℰ_Δ∇ L^s_r,i| > 3 ·δ_th.
δ_th is set to 0.05 m in our experiments. It is worth noting that biased ambiguities will still affect the system's positioning performance if the multipath is absorbed by the ambiguities during the Kalman filter update. The multipath is therefore accumulated, and the observation is flagged as a cycle slip once the accumulated value exceeds 0.2 m, which mitigates the multipath effect on the ambiguity.
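The classification rule and the ambiguity safeguard translate directly into a small state machine. The sketch below is ours; the 0.05 m and 0.2 m thresholds are those quoted in the text.

```python
def classify_phase(innov, acc_mp, th=0.05, mp_limit=0.2):
    """Label one double-difference phase innovation as good / multipath /
    cycle slip, accumulating multipath so biased ambiguities get reset."""
    a = abs(innov)
    if a < th:
        return "good", acc_mp
    if a < 3.0 * th:
        acc_mp += innov
        if abs(acc_mp) > mp_limit:   # persistent multipath: force a reset
            return "cycle slip", 0.0
        return "multipath", acc_mp
    return "cycle slip", 0.0
```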
§ EXPERIMENTS
Road vehicular experiments in urban regions are carried out in Wuhan, China, to assess the performance of the tightly coupled RTK/IMU/Vision system with the proposed I-EDM method. As seen in Fig. <ref>, the experimental vehicle was outfitted with two FLIR BFS-U3-31S4C-C cameras, a low-cost ADIS-16470 MEMS IMU, a tactical-grade NovAtel SPAN-ISA-100C IMU, a time synchronization board and a Septentrio mosaic-X5 mini GNSS receiver with a NovAtel GNSS-850 antenna. The vehicle-borne mobile system aboard the vehicle was used to collect the experimental data. Tab.<ref> contains the specification information of the consumer-grade and tactical-grade IMUs.
Furthermore, a single Septentrio PolaRx5 GNSS receiver was installed a mere 5km away from the rover station, which functions as the base station with exact coordinates. The single-difference GNSS measurements between the base and rover stations are produced. We used the commercial Inertial Explorer (IE) 8.9 software <cit.> to acquire the accurate smoothed solutions of the tightly coupled multi-GNSS post-processing kinematic (PPK) and tactical-grade IMU integration, with which the experimental results are verified. When
compared to MEMS-IMU, the tactical-grade IMU can sustain a particular level of pose output
accuracy for a comparatively extended period of time without the need for external corrective data, ensuring a trustworthy comparison in the assessment that follows.
The vehicular dataset, including both open-sky and urban settings, was collected between 18:04:06 and 18:41:19 on September 3, 2023, and is used for a comprehensive evaluation. Fig. <ref> displays the number of available satellites and the position dilution of precision (PDOP).
§.§ Performance of Different Weighting Strategies
In urban canyon scenarios, the GNSS observations are susceptible to multipath and outliers, consequently decreasing the positioning performance of the GNSS/INS/Vision integration system. Therefore, choosing appropriate weights is essential for GNSS observations. We compared the positioning performance of the current mainstream weighting methods for GNSS pseudorange observations, including the signal-to-noise ratio (SNR) model <cit.>, the 1/sin^2θ satellite elevation angle model <cit.>, the SNR and satellite elevation angle hybrid model <cit.>, and the proposed I-EDM method in the RTK/INS/Vision integration system. The trajectory errors in the east, north, and up directions for the SNR, satellite elevation angle, hybrid, and innovation-based methods are displayed in Fig. <ref>. The root mean square error (RMSE) of the trajectory error is computed and displayed in Tab.<ref>. We can observe that the proposed I-EDM method achieves the highest positioning accuracy of 0.31 m. This is because our method effectively models the multipath effects in the GNSS signals by computing the innovations of the pseudorange and carrier phase observations. Therefore, the positioning accuracy is significantly improved by eliminating the impact of the multipath effects. On the contrary, the other three methods do not account for the significant multipath effect caused by obstructions in urban situations, so their positioning accuracy only reaches the meter level. Comparing the hybrid method with the SNR and satellite elevation angle methods, we can see that the positioning performance of the hybrid method is better, achieving on average 28.1% and 38.4% improvements in the 3D direction, respectively. Moreover, the performance of the SNR method is better than that of the satellite elevation angle method. The SNR method allows reasonable weights to be assigned to observations of different frequencies and types from the same satellite, whereas the satellite elevation angle method only assigns weights at the satellite level. Consequently, the SNR method provides more accurate modeling of observations at the signal level, resulting in higher positioning precision than the satellite elevation angle method. Furthermore, the hybrid method combines the advantages of both methods, achieving better positioning accuracy.
§.§ Comparison of R-EDM and I-EDM Method
We conduct experiments to evaluate the performance of R-EDM and I-EDM in terms of cycle slip detection, multipath estimation, positioning accuracy, and running time consumption. The aim is to provide a detailed analysis of the differences between the residual-based EDM method and the innovation-based EDM method in urban environments.
§.§.§ Cycle Slip Detection Results
We compared the performance of the cycle slip detection of R-EDM and I-EDM methods. The reference trajectory generated from the tactical-grade IMU is employed to obtain the real cycle slips and multipath <cit.>. For fairness, the same criteria described in equation (<ref>) are utilized to access the cycle slips and multipath of R-EDM, I-EDM, and real values. The false detection rate of cycle slip is displayed in Fig. <ref>. The false detection rate means that the cycle slips exist within the real data, yet the method fails to detect them. This omission leads to incorrect ambiguity calculation, which is harmful to navigation performance, especially in urban experiments. The periods when cycle slip detection errors occur essentially coincide with intervals of high PDOP values, as shown in Fig.<ref>. This correlation demonstrates that multipath effects and signal loss in GNSS observations often result in cycle slips. More specifically, R-EDM and I-EDM have average error rates of 0.07% and 0.03%, respectively. This demonstrates that in the residual-based method, the cycle slips are partially absorbed into the estimated parameters such as IFB and clock biases when solving least squares, making it challenging to detect the minor cycle slips within the GNSS observations. Conversely, the innovation-based method only leverages the raw GNSS observations to detect the cycle slips, which are more sensitive to minor cycle slips.
§.§.§ Multipath Estimation Results
We also compared the performance of the I-EDM and R-EDM methods for the multipath estimation of pseudorange and carrier phase observations. In Fig.<ref>, we plot the probability distribution histogram of the pseudorange multipath obtained by the I-EDM and R-EDM methods. The proportion of multipath exceeding ±1 m detected by I-EDM is greater than that of R-EDM, at 15.70% and 11.87%, respectively. This indicates that the innovation-based method can mitigate the impact of the multipath effect in pseudorange observations on the positioning system by modeling the multipath effect effectively. The cumulative distribution of the multipath in carrier phase observations acquired by I-EDM, R-EDM, and the actual values is displayed in Fig.<ref>. It can be observed that the multipath estimated by I-EDM is smaller than that of R-EDM. However, due to the between-station between-epoch double-difference operator on carrier phase observations, the RMSEs of the multipath estimated by I-EDM and R-EDM are similar, at 0.011 m and 0.009 m, respectively.
§.§.§ Positioning Results
The positioning accuracy of different preprocessing methods is evaluated in the tightly coupled RTK/MEMS/Vision mode. Since the R-EDM method relies on prior position constraints, we set standard deviations of 0.1, 0.2, 0.3, and 0.4 m for the predicted VIO position and analyzed their impact on the R-EDM method. Tab. <ref> displays the RMSE of the estimated trajectory errors. The position errors calculated with different uncertainties of the prior position show a maximum difference of 6 cm in the vertical direction. This indicates that the performance of the R-EDM method is influenced by the uncertainty of the prior position constraints. When GNSS loses tracking frequently, setting the uncertainty of the prior position estimated by IMU and vision observations becomes challenging. The I-EDM method, on the other hand, avoids both the problem of setting the uncertainty of prior position constraints and the deficient estimation of outliers. The trajectory error of the I-EDM method decreases by 21.57% compared with the R-EDM-0.1 method. Fig.<ref> shows the trajectory errors for epochs 37300-37800, which suffer from severe multipath effects, and presents the differences between the methods more clearly. The benefits of the innovation-based method are further validated.
§.§.§ Running Time Results
We counted the time consumption of the pseudorange processing and carrier phase processing modules in the R-EDM and I-EDM algorithms. The computation cost (in milliseconds) of the different modules is shown in Table <ref>. In the R-EDM method, employing cluster analysis avoids searching for outliers in the observations with an iterative method, thereby decreasing the algorithm's time consumption. However, the algorithm still introduces the least square method to compute the residuals of the pseudorange and carrier phase observations, which consumes most of the time. In the I-EDM method, we only compute the innovations of the pseudorange and carrier phase observations. Compared with the R-EDM method, the pseudorange processing, carrier phase processing, and total processing times decreased by 46.5%, 51.5%, and 49.9%, respectively.
§ CONCLUSION
The utilization of the GNSS/INS/Vision integration system provides a more comprehensive and robust positioning capability. However, multipath and cycle slips brought on by obstructions deteriorate GNSS location performance in urban areas, which further impairs multi-sensor integration positioning. For GNSS pseudorange and carrier phase measurements, this work develops an innovation-based cycle slip, multipath estimation, detection, and mitigation (I-EDM) technique. The cluster analysis method is utilized to extract the innovations from GNSS measurements, which are subsequently employed to identify cycle slips and multipath.
The experimental results show that the proposed strategy effectively reduces the impact of outliers within observations in urban test scenarios. Our proposed method significantly enhances the positioning accuracy when compared to the signal-to-noise ratio (SNR) model, 1/sinθ^2 satellite elevation angle model, and the hybrid model. Moreover, a detailed comparison between the performance of residual-based and innovation-based EDM methods is conducted. Compared with the residual-based EDM method, the innovation-based EDM method obtains a decrease in average error rates of cycle detection by 57.1%, a 21.6% improvement in positioning accuracy, and a 49.9% reduction in running time consumption. These results show the significant improvements achieved by the proposed innovation-based EDM method over the residual-based approach.
§ ACKNOWLEDGMENTS
Upon reasonable request to the corresponding author, the experimental data used in this research is available.
This work is sponsored in part by The National Key Research and Development Program of China (2021YFB2501100), the National Natural Science Foundation of China (Major Program, Grant No. 42192533), and the Fellowship of China National Postdoctoral Program for Innovative
Talents (Grant No. BX20200251).
Bo Xu received the B.E. degree in the School of Land Science and Technology from China University of Geosciences, Beijing, China, in 2014 and 2018, and the M.E. degree in the School of Geodesy and Geomatics from Wuhan University, Wuhan, China, in 2018 and 2021, respectively, where, he is currently pursuing the Ph.D degree with the School of Geodesy and Geomatics. His research interests include GNSS precise positioning, visual SLAM, visual inertial odometry (VIO) and multi-sensor fusion algorithm.
Shoujian Zhang received the B.Eng. and Ph.D. degrees in the School of Geodesy and Geomatics from Wuhan University, Wuhan, China, in 2004 and 2009, respectively. He is an associate professor at the School of Geodesy and Geomatics, Wuhan University. His research interests are GNSS data processing, multiple sensor data fusion algorithms and applications.
Jingrong Wang received the B.E. degree in the School of Land Science and Technology from China University of Geosciences, Beijing, China, in 2014 and 2018, and the M.E. degree in the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS) from Wuhan University, Wuhan, China, in 2018 and 2020, respectively, where he is currently pursuing the Ph.D. degree with the GNSS Research Center. His research interests include GNSS precise positioning, visual inertial odometry (VIO) and multi-sensor fusion algorithms.
Jiancheng Li obtained his Ph.D. degree from Wuhan University in 1993 and is a professor in geodesy at Wuhan University, Wuhan, China. He was elected an academician of the Chinese Academy of Engineering in 2011. His main research interests are the Earth's gravity field and its engineering applications.
arXiv:2409.02246v1 | 2024-09-03 | Multi-Agent Reinforcement Learning for Joint Police Patrol and Dispatch
Matthew Repasky, He Wang, Yao Xie | cs.LG (primary), math.OC
H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology
Multi-Agent Reinforcement Learning for Joint Police Patrol and Dispatch
Matthew Repasky, He Wang, Yao Xie
========================================================================
§ ABSTRACT
Police patrol units need to split their time between performing preventive patrol and being dispatched to serve emergency incidents. In the existing literature, patrol and dispatch decisions are often studied separately. We consider joint optimization of these two decisions to improve police operations efficiency and reduce response time to emergency calls.
Methodology/results: We propose a novel method for jointly optimizing multi-agent patrol and dispatch to learn policies yielding rapid response times. Our method treats each patroller as an independent Q-learner (agent) with a shared deep Q-network that represents the state-action values. The dispatching decisions are chosen from combinatorial action spaces using mixed-integer programming and value function approximation. We demonstrate that this heterogeneous multi-agent reinforcement learning approach is capable of learning joint policies that outperform those optimized for patrol or dispatch alone.
Managerial Implications: Policies jointly optimized for patrol and dispatch can lead to more effective service while targeting demonstrably flexible objectives, such as those encouraging efficiency and equity in response.
§ INTRODUCTION
The timeliness of police response to emergency calls plays a critical role in maintaining public safety and preventing crimes.
However, recent data showed that police response times in major U.S. cities are getting longer, partly due to staffing shortages <cit.>. In this paper, we study how to improve police patrol operations and reduce emergency call response times by multi-agent reinforcement learning, where each patrol unit (e.g., a police patrol car) is treated as an agent.
Coordinating the patrol of multiple agents in a shared environment has been a task of interest in operations research and machine learning for many years. Approaches ranging from extensions of the traveling salesman problem <cit.> to deep reinforcement learning (RL) for multi-agent robot patrol <cit.> have been applied. Many challenges are associated with multi-agent patrol, including large action spaces that grow exponentially with the number of patrollers, coordination/cooperation of agents, and state representation.
Some prior works evaluate patrol policies using criteria that capture “coverage,” such that all locations are visited frequently <cit.>. Instead, in this work, we evaluate patrol policies with respect to a separate but related task: multi-agent dispatch.
The police dispatching problem has traditionally been considered using multi-server priority queues <cit.>. The task of a dispatcher is to select a patroller to dispatch to the scene of an incident. Recent efforts have considered the effects of dispatch policies given positioning and/or routing of patrollers in some space (e.g., a grid or a graph) <cit.>. Oftentimes, the response times to incidents must meet some key performance indicators <cit.>, which has also been considered in the context of dispatch <cit.>.
The positioning of patrollers is key to effective dispatch, yet most dispatch works consider heuristic patrol policies. Early works considering multi-agent patrol (without dispatch) coordinated individual agent patrol decisions using heuristics such as neighboring node idleness <cit.>. Others considered extensions of the single-patroller traveling salesman problem to multiple agents <cit.>. Multi-agent reinforcement learning (MARL) for multi-agent patrolling has also been considered for many years <cit.>; e.g., node idleness can be minimized by independent learners with cooperative rewards. Early RL approaches considered individual patrollers who only had information about their local environment but could communicate with others about intended actions. More recently, multi-agent patrol policies have been learned using Bayesian learning <cit.> and deep RL <cit.>. A common theme is distributed policy optimization with varying degrees of communication between agents, where the goal is to optimize with respect to idleness or frequency of visits to each location.
In addition to a growing need for efficient police operations in light of staffing shortages and limited resources in recent years <cit.>, fairness and equity in operations such as patrol and dispatch have become an area of increased interest, and the benefits of such considerations are plentiful. For instance, <cit.> shows that police officers who are perceived to be fair are more likely to be met with respect and regarded as legitimate by their community. Moreover, opinions within the US justice system echo the sentiment that fairness in law enforcement is crucial for public cooperation and support <cit.>. The importance of this issue is highlighted by many police departments across the US, which often post statements and procedures regarding fairness in their operations <cit.>. Algorithmic considerations of fairness in police operations typically focus on providing equity in service to multiple groups <cit.>.
To our knowledge, ours is the first work to optimize policies for patrol and dispatch jointly. Our contributions are as follows.
* Jointly-optimized policies for patrol and dispatch: Prior dispatch optimization strategies typically assume heuristic patrol policies. Similarly, prior multi-agent patrolling works do not jointly optimize separate decision tasks such as dispatch. We outline a procedure to optimize policies for patrol and dispatch jointly, outperforming benchmark policies optimized for patrol or dispatch alone in addition to heuristic policies such as priority queue dispatch. We demonstrate that the superior performance of our method is robust to a range of simulated environments of varying sizes and dynamics and with different objectives.
* Novel optimization approach: We use distributed MARL with parameter sharing to exploit similarities between patrollers while jointly optimizing a dispatcher agent whose action space has combinatorial cardinality. We extend recent works in combinatorial RL to systems with stochastic transitions. We incorporate them into our MARL framework via a coordinate-descent-style alternating optimization of patrol and dispatch policies.
* Fairness and real service systems: We demonstrate that our approach is flexible to various reward definitions, such as those that encourage equitable policies. Policies that explicitly encode equity in the reward definition yield more balanced responses to groups of different sizes. We also apply our approach to scenarios based on real calls-for-service data.
§.§ Related works
RL is a machine learning approach for training an agent to make sequential decisions in a stochastic environment <cit.>. Deep RL uses neural networks, such as deep Q networks (DQNs), to learn policies <cit.>. Making decisions for multiple agents is a central issue in MARL, as the number of possible actions is exponential in the number of agents. Independent Q-learning (IQL) <cit.> addresses this by decomposing the problem into multiple single-agent RL problems. Each agent operates in the same environment, but there are no guarantees regarding cooperation and interference; nonetheless, the naive extension of DQNs to the IQL framework has been successful in some cases <cit.>. To address cooperation and efficiency in distributed MARL, parameter sharing in deep RL has been employed <cit.>. Greater degrees of parameter sharing are often most effective between agents with similar value spaces and reward functions.
Multi-agent patrol has been investigated from the perspective of operations research <cit.>, Bayesian learning <cit.>, and (deep) MARL <cit.>. Many of these works pose the problem as a discrete-time, discrete-location graph traversal <cit.>. Past works recognize the inherent complexity of making a centralized decision to patrol many agents and decompose the problem via distributed optimization <cit.>. Similarly, our approach to learning multi-agent patrol policies is distributed while addressing issues arising from the decentralized decision-making for multiple cooperative agents. Additionally, we consider this problem in conjunction with a separate but connected dispatch task, which is not thoroughly addressed in prior work.
Dispatch of multiple patrollers and resources is typically considered from the queueing or ranking systems perspective. <cit.> investigate such a setting using static and dynamic priority queues, connecting to the queuing literature in developing dispatch policies and indicating the static queues yield better performance. For instance, <cit.> construct a ranking system based on quantitative incident characteristics, such as arrival time, priority, and distance to the nearest patroller. Similarly, <cit.> formulates the dispatch decision process by constructing multiple queues, considering incident categories and severity, amongst other factors such as routing of units. Crucially, prior works regarding dispatch in service systems neglect to optimize multi-agent patrol policies, typically assuming basic heuristics.
A typical notion of fairness in machine learning applications suggests that the outcome of an algorithm should be independent of the group to which it is applied <cit.>. In the context of service operations such as police patrol and dispatch, such groups can be defined by socioeconomic or racial status. For example, fairness has been studied in the police districting problem to improve efficiency while mitigating racial disparity in defining patrol districts <cit.>. Parallel efforts have sought to alleviate the disproportionate minority contact that can arise due to hot spot policing <cit.>. Such works find that simply neglecting to take sensitive features (e.g., race or location) into account is ineffective at achieving fairness <cit.>. Therefore, these attributes are taken into account in decision-making problems to encode fairness explicitly, often leading to a minimal tradeoff in efficiency.
§ BACKGROUND
This section provides the necessary background information regarding reinforcement learning (RL) and multi-agent reinforcement learning (MARL) as applied to service systems such as police patrol and dispatch.
§.§ Single-Agent Reinforcement Learning
An infinite-horizon discounted Markov Decision Process (MDP) can be represented by a tuple ⟨𝒮, 𝒜, ℙ, R, γ⟩ with state space 𝒮, action space 𝒜, transition dynamics ℙ:𝒮×𝒜→Δ(𝒮), reward function R:𝒮×𝒜×𝒮→ℝ, and discount γ∈[0,1). The goal of RL <cit.> is to learn a policy π:𝒮→𝒜 maximizing cumulative discounted reward, captured by the value function V^π:𝒮→ℝ:
V^π(s_0) = 𝔼[∑_t≥0γ^t R(s_t, π(s_t), s_t+1) ],
beginning in state s_0 and following a_t ∼π(s_t), where s_t is the state in period t. A policy that maximizes the value function V^π(s_0) above is called an optimal policy and is denoted by π^⋆. Let V^⋆ be the optimal value function.
It is often more convenient to consider the state-action value (Q-value) in RL:
Q^π(s_0, a_0) = 𝔼[ R(s_0, a_0, s_1) + ∑_t≥1γ^t R(s_t, π(s_t), s_t+1) ].
The state value V^π(s) and the state-action value Q^π(s, a) are connected by V^π(s) = Q^π(s,π(s)). Let Q^⋆ be the optimal Q-value.
The optimal policy π^⋆ takes greedy actions with respect to Q^⋆: π^⋆(s) = argmax_a∈𝒜 Q^⋆(s, a).
Since V^⋆=max_a Q^⋆(s,a), the Bellman optimality equation reveals:
Q^⋆(s, a) = 𝔼[ R(s, a, s') + γ V^⋆(s') ].
§.§.§ Deep Q-Learning.
Q-learning is a model-free RL technique to learn the optimal state-action value function by exploiting the Bellman optimality condition <cit.>. The optimal Q-function, approximated by a function parameterized by θ, can be learned given transitions (s_t, a_t, s_t+1, r_t)∈𝒟, where r_t := R(s_t, a_t, s_t+1) and 𝒟 is sampled from the MDP. Parameter θ can then be learned by minimizing the least-squares value iteration (LSVI) objective <cit.>:
θ̂ = argmin_θ( ∑_(s_t, a_t, s_t+1, r_t)∈𝒟_ℓ⊂𝒟( r_t + γ max_a Q^θ̃(s_t+1, a) - Q^θ(s_t, a_t) )^2 ).
A deep Q-network (DQN) <cit.> Q^θ: ℝ^d →ℝ^|𝒜| is often used as the approximator, taking s∈𝒮⊂ℝ^d and outputting Q^θ(s,a) values for each action. Target networks parameterized by θ̃, a copy of θ cloned every L updates, are often used to stabilize optimization.
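To make the update above concrete, the following is a minimal sketch of one LSVI/DQN step in PyTorch. The network width, learning rate, and batch construction are illustrative choices rather than values from this paper; the target network parameterized by θ̃ is cloned from θ and, as described above, would be re-cloned every L updates.

```python
import copy
import torch
import torch.nn as nn

class DQN(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, s):
        return self.net(s)  # Q^theta(s, a) for every discrete action a

def lsvi_update(q, q_target, opt, batch, gamma=0.9):
    s, a, r, s_next = batch
    with torch.no_grad():  # bootstrapped target r + gamma * max_a Q^tilde
        target = r + gamma * q_target(s_next).max(dim=1).values
    pred = q(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q^theta(s_t, a_t)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

q = DQN(state_dim=10, n_actions=5)
q_target = copy.deepcopy(q)  # frozen copy, re-cloned every L updates
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
batch = (torch.randn(32, 10), torch.randint(0, 5, (32,)),
         torch.randn(32), torch.randn(32, 10))
lsvi_update(q, q_target, opt, batch)
```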
§.§.§ Value Function Approximation.
One might instead want to approximate V^π(s) given policy π. Assume tuples (s_t,r̃_t)∈ℰ are observed, where r̃_t := ∑_t'≥ tγ^t'-t R(s_t',π(s_t'), s_t'+1) is the discounted sum of reward following state s_t under policy π. Value V^π(s) can be approximated by a function V^ϕ parameterized by ϕ to minimize the following objective:
ϕ̂ = argmin_ϕ( ∑_(s_t, r̃_t)∈ℰ( V^ϕ(s_t) - r̃_t )^2 ).
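As an illustration, the regression targets r̃_t in the objective above can be computed from a single on-policy episode with one backward pass over the rewards; the reward sequence in this sketch is made up, and the resulting (s_t, r̃_t) pairs are what V^ϕ is fit to.

```python
import numpy as np

def discounted_tails(rewards, gamma=0.9):
    """r_tilde[t] = sum over t' >= t of gamma^(t'-t) * r[t'], backwards pass."""
    tails = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        tails[t] = running
    return tails

rewards = np.array([-3.0, -1.0, 0.0, -2.0])   # illustrative episode rewards
print(discounted_tails(rewards))              # regression targets for V^phi
```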
§.§.§ Reinforcement Learning with Combinatorial Action Space
If |𝒜| is too large to enumerate (e.g., selection amongst combinatorially-many sets of pairings), Q-learning is not applicable since it must enumerate all action values. Let x_ij=1 if i is paired to j, with x_ij=0 otherwise. Define d_ij the reward for this pairing. Since the optimal action maximizes the current reward plus the next state value <cit.>:
π(s) = argmax_{x_ij} ∑_i ∑_j d_ij x_ij + V^π(s'),
subject to problem-specific constraints. If s' can be computed given the action and V^π is approximated as in Section <ref>, actions are selected by solving a mixed-integer program (MIP).
§.§ Multi-Agent Reinforcement Learning
Given agents i=1,…,N, centralized MARL formulates decision-making as an MDP <cit.>; one agent learns a policy π mapping states to a joint action in 𝒜_1×⋯×𝒜_N. IQL addresses the exponential growth of this joint action space by learning N functions Q^θ_i <cit.>. Agent i operates within the single-agent MDP represented by the tuple ⟨𝒮, 𝒜_i, ℙ_i, R_i, γ_i ⟩. When the agents are similar, parameter sharing can accelerate learning, acting as an intermediate between centralized MARL and IQL <cit.>.
MARL can also be described using a Markov game (MG) <cit.> formulation, defined by a tuple ⟨ N, 𝒮, {𝒮_i}_i=1^N, {𝒜_i}_i=1^N, ℙ, {R_i}_i=1^N, γ⟩, including a universal state space 𝒮, “perspectives" {𝒮_i}_i=1^N, joint action space 𝒜 = 𝒜_1×⋯×𝒜_N, transition dynamics ℙ:𝒮×𝒜→Δ(𝒮), reward functions R_i:𝒮×𝒜×𝒮→ℝ, and discount γ. The goal is to maximize the expected discounted sum of future reward for each agent Ṽ^π_i(s_0) = 𝔼[ ∑_t≥0γ^t R_i(s_t,π(s_t), s_t+1) ], where s_0∈𝒮 is an initial universal state and π=(π_1,…,π_N) is a joint policy. Similarly:
Q̃^π_i(s_0, a_0) = 𝔼[ R_i(s_0, a_0, s_1) + ∑_t≥1γ^t R_i(s_t, π(s_t), s_t+1) ].
Each Ṽ^π_i is maximized over agent i's own policy while the other agents' policies are held fixed, such that
π_i = argmax_π_i' Ṽ^π'_i where π'=(π_1, …, π_i',…, π_N).
IQL can be employed to optimize each π_i. Moreover, if {𝒮_i}_i=1^N, {𝒜_i}_i=1^N, and {R_i}_i=1^N are similar among all agents, parameter sharing can be employed to learn the policies.
§.§ Police Operations
Police jurisdictions are typically hierarchically divided. For example, the Atlanta Police Department (APD) divides the city of Atlanta into six zones, each of which is divided into beats <cit.>. One patroller is usually assigned to each beat. When calls-for-service arrive at the dispatcher, an officer in the same zone as the incident is dispatched to the scene; a patroller can be dispatched to a call outside their assigned beat. This joint problem of beat-based patrol and dispatch is further discussed and formalized in Section <ref>.
§ PROBLEM FORMULATION
Consider patrollers i=1,…,N making decisions in time t=1,2,…. Joint patrol and dispatch is formalized as an MDP with a central agent and then is decomposed into an MG of N+1 agents.
§.§ The Patrol Problem
Patrollers navigate a connected, undirected graph G=(V,E) <cit.> with nodes/locations V connected by edges E which are traversed in a single iteration; each v∈ V is connected to itself. Each patroller i is assigned to a node subset (beat) V_i, an example is visualized in Figure <ref>. The patrol problem involves travel decisions for each patroller. For patrollers not busy responding to incidents (Sections <ref> and <ref>), a patrol policy directs each patroller to a neighboring node in V_i.
§.§ The Dispatch Problem
Incidents of categories k=1,…,K arrive with rates λ_k following distributions q_k(v) supported on V, requiring t_ scene∼ Exp(β_k) on-scene time to manage. The dispatch problem involves matching free patrollers (not currently dispatched) to idle incidents (not yet been assigned). A dispatch policy selects among all possible groups of pairings, including leaving incidents idle or patrollers free. A dispatched patroller travels along the shortest path towards the incident, whose number of edges is equivalent to the travel time t_ travel in iterations. If a patroller arrives at iteration t, it is finished managing the incident starting at iteration t+t_ scene.
§.§ Joint Patrol & Dispatch
The service time for patroller i to manage incident j is t_ service(i,j) = t_ idle(j) + t_ travel(i, j) + t_ scene(j), where t_ idle(j) is the time spent in the queue before dispatch. When patroller i is dispatched, it must travel toward the incident or remain at the scene. When outside its beat, it must return once free. Define the response time, reflecting the time incident j waited before patroller i arrived:
t_ response(i,j) = t_ idle(j) + t_ travel(i, j).
Joint patrol and dispatch can be formulated as an MDP to learn joint policy π.
§.§.§ Joint State Space.
Given patroller positions v_1,…,v_N∈ V and busy statuses u_1,…,u_N∈ℕ (travel and on-scene time), define v := [(v_1,u_1),…,(v_N,u_N)]. At dispatch, patrollers observe the random on-scene service time. Given locations of M idle incidents w_1,…,w_M∈ V and idle times p_1,…,p_M∈ℕ, define w := [(w_1,p_1),…,(w_M,p_M)]. The queue has a fixed idle-incident capacity M; when full and a new incident arrives, the longest-waiting incident is replaced. The joint state space is 𝒮 = {s = [ v⊕ w]^T}, where ⊕ refers to vector concatenation.
§.§.§ Joint Action Space.
For a free patroller, patrol action set 𝒜^( patrol)_i(v_i,t, u_i,t) contains movements to neighboring vertices in V_i. A busy patroller moves towards an incident or stays at the scene, moving back towards its beats afterward. After patrol actions, a dispatch action is selected from the dispatch action set 𝒜^( dispatch)( v'_t, w'_t). Note, v'_t and w'_t refer to the updated v_t and w_t according to the patrol actions and the incident arrivals. The joint action space is
𝒜( v_t, w_t) = 𝒜^ (patrol)_1(v_1,t,u_1,t)×⋯×𝒜^ (patrol)_N(v_N,t,u_N,t)×𝒜^( dispatch)( v'_t, w'_t).
The patrol action space has exponential cardinality in N, and the dispatch action space has combinatorial cardinality.
§.§.§ Response-Based Reward & Objective.
The reward function is the negative sum of response times and a penalty for incidents removed from a capacitated queue:
R(t) = - ( ∑_j dispatched in iteration t t_ response(i, j) + ∑_j removed in iteration tα· t_ idle(j) ).
Penalty weight α is a hyper-parameter (α=2 in this work). Given s_0∈𝒮,
π^⋆ (s_0) = argmax_π∈Π 𝔼[ ∑_t≥0γ^t R(t) | s_t ∼ℙ(s_t-1,π(s_t-1)) ∀ t>0 ].
§.§.§ Markov Game Decomposition.
Patrol and dispatch can be decomposed into an MG with N+1 agents defined by tuple:
⟨ N+1, 𝒮, {𝒮^(patrol)_i}_i=1^N∪𝒮^(dispatch), {𝒜^(patrol)_i}_i=1^N∪𝒜^(dispatch), ℙ, R, γ⟩.
Patrol agent i corresponds to 𝒜^(patrol)_i(v_i,t,u_i,t) and the dispatch agent corresponds to 𝒜^( dispatch)( v'_t, w'_t). Similar but distinct state spaces {𝒮^(patrol)_i}_i=1^N are defined so that each patrol agent observes its own perspective, swapping the location and busy status to the first index: v^(i):= [(v_i,u_i),…,(v_i-1,u_i-1),(v_1,u_1),(v_i+1,u_i+1),…,(v_N,u_N)]. This yields 𝒮^(patrol)_i = {s = [ v^(i)⊕ w]^T}. The dispatch agent observes joint state space 𝒮^(dispatch) = 𝒮. Reward R(t) (<ref>) is shared. The goal is to learn patrol policies π^(patrol)_i and dispatch policy π^(dispatch) to minimize the response time while avoiding incidents being removed from the queue.
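For concreteness, the per-agent perspective state v^(i) can be built by swapping patroller i's (location, status) pair into the first slot before flattening; this is a minimal sketch, and the flat floating-point encoding of the state is an assumption made here for illustration.

```python
import numpy as np

def perspective_state(v, w, i):
    """Build patroller i's view: swap its (v_i, u_i) pair to the front,
    keep the other patrollers in place, then append the incident queue w."""
    v = list(v)
    v[0], v[i] = v[i], v[0]
    pairs = v + list(w)
    return np.array([x for pair in pairs for x in pair], dtype=float)

# toy example: 3 patrollers and an incident queue of capacity 2
v = [(4, 0), (7, 2), (1, 0)]       # (node, busy status) per patroller
w = [(5, 3), (0, 0)]               # (node, idle time) per queued incident
s2 = perspective_state(v, w, i=2)  # patroller 2 sees itself first
```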
§ METHODOLOGY
Parameter sharing in deep Q-learning is applied to learn patrol policies for each patroller (Section <ref>), and incidents are assigned to patrollers using a MIP and value function approximation (Section <ref>). These are combined to learn joint policies for patrol and dispatch (Section <ref>).
§.§ MARL with Parameter Sharing for Patrolling
The MG patrol policies (Section <ref>) are learned using MARL. Recall, in Section <ref>, the state spaces {𝒮^(patrol)_i}_i=1^N are defined to contain the positions and statuses of all patrollers and incidents on the graph, while the action spaces {𝒜^(patrol)_i}_i=1^N contain movements along the graph edges. Furthermore, all patrol agents share a reward R. Based upon prior MARL works indicating that similar agents benefit from parameter sharing <cit.>, this similarity in the state spaces, action spaces, and shared reward suggests that parameter sharing can be applied.
Moreover, since the state spaces and action spaces are nearly identical, we employ an extreme version of parameter sharing in learning the patrol policies. That is, a single neural network Q^θ (parameterized by θ) learns a state-action value function for all agents. Our assumption is that the universal approximation power of a “large enough” neural network has the capacity to jointly learn all patrol policies, which is validated by the performance of the learned patrol policies in the experiments of Section <ref>. We find that DQNs of relatively few layers can learn these policies. Patrol policies thus select actions according to π_i^θ(s_i) = argmax_a Q^θ(s_i,a). Mini-batches of transitions (s_i,t, a_i,t, r_t, s_i,t+1) from each agent's perspective are collected via ϵ-greedy simulation to update Q^θ using LSVI updates.
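The following sketch shows how a single shared network can serve all patrollers at decision time: every agent's perspective state is scored in one batch, and each free patroller acts ϵ-greedily among its own feasible moves. Masking infeasible moves with -∞ is an implementation convenience assumed here, not a detail taken from the paper.

```python
import numpy as np

def shared_patrol_actions(q_fn, perspectives, feasible_masks, eps=0.1,
                          rng=np.random.default_rng(0)):
    """q_fn: callable mapping an (N, d) batch of perspective states to an
    (N, |A_patrol|) array of Q-values from the single shared network."""
    qvals = q_fn(np.stack(perspectives))
    actions = []
    for i, mask in enumerate(feasible_masks):
        if rng.random() < eps:                      # explore
            actions.append(int(rng.choice(np.flatnonzero(mask))))
        else:                                       # greedy w.r.t. Q^theta
            actions.append(int(np.argmax(np.where(mask, qvals[i], -np.inf))))
    return actions

# toy call: 2 patrollers, 5 candidate moves, some moves infeasible
dummy_q = lambda batch: np.random.default_rng(1).normal(size=(len(batch), 5))
masks = [np.array([1, 1, 0, 0, 1], bool), np.array([0, 1, 1, 1, 0], bool)]
print(shared_patrol_actions(dummy_q, [np.zeros(8), np.ones(8)], masks))
```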
§.§ Policy Iteration for Dispatching
A variation of the MIP action selection strategy outlined in Section <ref> is employed for dispatch. Let x_ij=1 if patroller i is assigned to incident j, otherwise let x_ij=0. The first term of reward (<ref>) can be computed given x_ij: ∑_i,j t_ response(i,j)· x_ij. Unlike <cit.>, we cannot directly compute the entire reward, nor can we obtain the transition state deterministically due to incident queue “overflow” resulting from stochastic call arrival dynamics. This factor, together with the difference in patroller positions due to the patrol policy, affects the state transition.
Given V^(dispatch) (or an approximation V^ϕ), the next-state value can be approximated. Define the value delta for patroller i as
δ V^(dispatch)_i(s) := 𝔼[ V^(dispatch)(patroller i assigned) - V^(dispatch)(patroller i not assigned) ]
and for incident j as
δ V^(dispatch)_j(s) := 𝔼[ V^(dispatch)(incident j assigned) - V^(dispatch)(incident j not assigned) ],
where the expectation is taken under the transition dynamics ℙ conditioned upon the policies for patrol and dispatch. These represent beginning in state s and considering the expected change in value for assigning patroller i to some idle incident, with no other assignments made. Given approximators δ V^ϕ_1_i for δ V^(dispatch)_i and δ V^ϕ_2_j for δ V^(dispatch)_j, dispatch is formulated as the MIP:
π^ϕ(s) = argmin_x ∑_i=1^N ∑_j=1^M ( t_ response(i,j) - δ V^ϕ_1_i(s) - δ V^ϕ_2_j(s) ) x_ij
s.t. ∑_j=1^M x_ij≤𝕀_{patroller i is free} ∀ i∈{1,…,N}
∑_i=1^N x_ij≤𝕀_{incident j is idle} ∀ j∈{1,…,M}
x_ij∈{0,1} ∀ i∈{1,…,N},j∈{1,…,M},
where 𝕀_{event} is an indicator for {event}. Notably, given δ V^ϕ_1_i and δ V^ϕ_2_j, this can be computed deterministically given x_ij and the initial state s.
δ V^ϕ_1_i and δ V^ϕ_2_j can be learned by first approximating V^(dispatch) using V^ϕ (Section <ref>). Using V^ϕ, mappings are generated from states to expectations (<ref>)-(<ref>). Approximators δ V^ϕ_1_i and δ V^ϕ_2_j are trained to map states to value deltas. We utilize a policy iteration scheme as in <cit.>; we iterate between updating V^ϕ and updating δ V^ϕ_1_i and δ V^ϕ_2_j using V^ϕ. In practice, one network is learned for patrollers, δ V^ϕ_1, and one network for incidents, δ V^ϕ_2, which have output dimension equal to the number of patrollers and the size of the incident queue, respectively.
§.§ Joint Patrol & Dispatch Policy
The joint policy for patrol and dispatch combines the separate patrol and dispatch policies by optimizing their parameters alternately. As described in Section <ref>, the parameters of the patrol policies are frozen when learning those of the dispatch policy, and vice versa. This alternating procedure is empirically found to converge to effective policies that outperform those optimized for patrol or dispatch alone (see Figure <ref> and Section <ref>).
Given jointly-optimized policies for patrol and dispatch, the policy is deployed as follows: patroller i observes a state and moves according to π_i^θ. Incidents are sampled, and the dispatch agent matches patrollers to incidents according to π^ϕ. R(t) and joint state s_t+1 are finally computed. The dispatcher observes the state as influenced by the patrol policy in the first phase, with reward and subsequent state observed by the patrollers being influenced by the dispatch policy. The assumption that the coordination of the patrollers and the dispatcher is enhanced by a joint optimization is validated by the improved performance of the jointly-optimized policies in our experiments.
§.§.§ Joint Policy Optimization
We alternately optimize the parameters defining the patrol and dispatch policies. One phase involves Q-learning of the shared patroller network Q^θ, while the other involves updates to the incident value networks V^ϕ, δ V_i^ϕ_1, and δ V_j^ϕ_2, repeating to convergence. In each outer loop iteration, a number of inner loop updates are made to the dispatcher, followed by a number of inner loop updates to the patrollers. The steps of the n_ inner,ϕ inner loop updates to the dispatcher are:
(1) collect transitions from on-policy simulation,
(2) conduct n_ epoch,ϕ epochs of training for V^ϕ,
(3) generate “delta transitions” for each delta network using V^ϕ, and
(4) conduct n_ epoch,ϕ_1=n_ epoch,ϕ_2 epochs of training on each network δ V^ϕ_1 and δ V^ϕ_2.
The steps of the n_ inner,θ inner loop updates to the patrollers are:
(1) collect transitions from ϵ-greedy on-policy simulation and
(2) conduct n_ epoch,θ epochs of training for Q^θ.
This is repeated for n_ outer outer loop iterations. We find that performance gains from learning dispatch policies are greater than those from patrol policies, so we begin optimization with a “warm-start” to the dispatch policy consisting of n_ warm dispatch inner loop iterations. For the experiments of Section <ref>, all networks are multi-layer perceptrons (MLPs) using rectified linear unit (ReLU) activation. Dispatch inner loops train using n_ dispatch transitions (per inner loop iteration), while patrol inner loops train using n_ patrol transitions. All transitions are split into an 80%/20% train/validation split. The discount factor γ=0.9 in all settings.
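The schedule just described can be summarized in pseudocode form; the two inner-loop routines below are placeholders standing in for the training steps enumerated above, not functions from the authors' code.

```python
def dispatch_inner_loop():
    # (1) on-policy transitions -> (2) n_epoch,phi epochs on V^phi
    # -> (3) build "delta transitions" with V^phi
    # -> (4) epochs on the delta-V networks (patrollers and incidents)
    pass

def patrol_inner_loop():
    # (1) eps-greedy transitions -> (2) n_epoch,theta epochs on shared Q^theta
    pass

def joint_optimization(n_outer=4, n_warm=20, n_inner_phi=5, n_inner_theta=5):
    for _ in range(n_warm):            # warm-start the dispatcher
        dispatch_inner_loop()
    for _ in range(n_outer):           # coordinate-descent-style alternation
        for _ in range(n_inner_phi):   # patrol parameters theta frozen
            dispatch_inner_loop()
        for _ in range(n_inner_theta): # dispatch parameters phi frozen
            patrol_inner_loop()

joint_optimization()
```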
§ EXPERIMENTS
The experiments compare joint optimization of patrol and dispatch to policies optimized for patrolling or dispatch alone. In Section <ref>, the goal is to minimize the reward as defined in (<ref>). Then, in Section <ref>, the reward is altered to demonstrate learning of more equitable policies. Finally, a real data example is outlined in Section <ref> based on a region in the City of Atlanta. See Appendix <ref> for a discussion of model selection in each experiment.
Baselines & Comparison.
The three baselines include patrol-only optimization, dispatch-only optimization, and a fully heuristic policy. For policies without optimized patrol, patrollers move randomly in their assigned regions. For policies without optimized dispatch, dispatch follows a first-come, first-served priority queue whereby incident class 2 precedes incident class 1, and the closest free patroller is sent to the scene. To compare policies, we evaluate incident response over n_ episode episodes, each consisting of 5,000 iterations of on-policy simulation. The characteristics of the response time distributions are compared in addition to the number of incident queue overflows per episode.
§.§ Efficient Response Experiments
Simulated Settings.
Consider two simulated settings on the same graph consisting of two 7×7 beats adjacent on one edge. Two categories of incident arrive with rates λ_1 and λ_2, uniform distributions q_1 and q_2, and average service times β_1=1 and β_2=3. The incident queue size is 3. In the High Call Volume setting, λ_1=0.15 and λ_2=0.075. In the Low Call Volume setting, λ_1=0.075 and λ_2=0.05.
Modeling & Optimization.
Joint optimization is conducted using n_ outer=4 with n_ inner,ϕ=5 and n_ inner,θ=5, beginning with n_ warm=20. Each dispatch inner loop trains for n_ epochs,ϕ=n_ epochs,ϕ_1=n_ epochs,ϕ_2=25 epochs each for V^ϕ, δ V^ϕ_1, and δ V^ϕ_2. This is conducted over n_ dispatch=1,000 simulated transitions per iteration with batch size 100 and learning rate 10^-3. Each patrol inner loop fixes ϵ=1 (fully random patrol) and trains for n_ epochs,θ=1 epoch for Q^θ, which is conducted over n_ patrol=1.25 million simulated transitions with batch size 50 and learning rate 10^-5. V^ϕ, δ V^ϕ_1, and δ V^ϕ_2 are MLPs of 1 hidden layer with 128 nodes, while Q^θ is an MLP of 2 hidden layers with 512 nodes each. Validation loss is visualized in Figure <ref> for the high call volume setting. The dispatch training dynamics are noisy but still lead to an effective policy. The patrol-only optimization uses n_ inner,θ=20 updates to a patrol Q^θ network. The dispatch-only optimization uses n_ inner,ϕ=50 updates to dispatch networks. All other patrol- and dispatch-only optimization parameters are identical to those of the joint optimization.
Results.
Response time distributions are generated using n_ episodes=100. The high call volume experiment results are visualized in Table <ref> and Figure <ref>. The jointly-optimized policy has the fastest response and loses few incidents. This and the dispatch-only optimized policy have similar response distributions. Response times are visualized by category in Figure <ref>(b)-(c); the jointly-optimized policy is best in category 1 with diminished performance in category 2. This could indicate that dispatch optimization yields policies that prefer incidents that take less time to manage (since β_1<β_2), resulting in lower average response times. Table <ref> and Figure <ref> show the low call volume results. The jointly-optimized policy responds more quickly, and all policies lose a similarly small number of calls except for the dispatch-only optimized policy. In this case, the jointly-optimized policy has a similar response time to both categories.
§.§ Equity-based Reward
Simulated Setting.
Figure <ref>(a) visualizes a graph of 119 nodes divided into two beats of 58 and 61 nodes. One incident category arrives with rate λ=0.2 and has average service time β=3. In Figure <ref>(b) and (c), it is revealed that the non-uniform incident distribution induces two groups. The smaller of these groups, Group 1, consists of 20 nodes and has incident probability ten times that of the larger group, Group 2, which consists of 99 nodes.
Fairness.
An equity-based reward can be constructed to yield different penalties for each group:
R̃(t) = - ( ∑_j dispatched in iteration tρ_j t_ response(i, j) + ∑_j removed in iteration tρ_jα· t_ idle(j) ),
where ρ_j is the weight of the penalty incurred for the group in which j occurred. Additional metrics can be introduced to quantify fairness. The distribution of response times can be compared between the two groups; connecting to the independent notion of fairness <cit.>, response efficiency should be independent of the group. Similarly, a fair policy could be expected to cover each group for a fraction of iterations equal to the portion of the graph the group represents.
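The group-weighted reward R̃(t) is straightforward to implement; in this sketch, incidents are represented as small dictionaries whose field names are assumptions made for illustration, with α=2 as in the efficiency experiments.

```python
def equity_reward(dispatched, removed, rho, alpha=2.0):
    r = -sum(rho[j["group"]] * j["response_time"] for j in dispatched)
    r -= sum(rho[j["group"]] * alpha * j["idle_time"] for j in removed)
    return r

rho = {"small": 1.0, "large": 0.5}   # the "fair" weighting of this section
r = equity_reward(
    dispatched=[{"group": "small", "response_time": 4.0}],
    removed=[{"group": "large", "idle_time": 3.0}],
    rho=rho)   # -> -(1.0*4.0) - (0.5*2.0*3.0) = -7.0
```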
Modeling & Optimization.
The networks V^ϕ, δ V^ϕ_1, δ V^ϕ_2, and Q^θ are structured and optimized exactly as in Section <ref>. Jointly-optimized policies trained using equitable reward (<ref>) with different weights are compared. Specifically, the weight of the small group is fixed at ρ_ small=1, and the weight of the large group is ρ_ large∈{0.5, 1.0} representing weighted and unweighted policies, respectively.
Results.
Response characteristics are compared using n_ episodes=100. The results are visualized in Table <ref> and Figure <ref>. A discrepancy in average response time between the two groups is observed in all settings, but it is particularly pronounced in the “unfair” optimized setting (ρ_ large=1.0). This discrepancy is about an iteration smaller for the “fair” policy (ρ_ large=0.5), which is also an improvement over the heuristic. Moreover, the fair policy improves upon the overall response time of the heuristic policy while achieving a better balance in response. Coverage is compared using n_ episodes=25 simulations, and the results are reported in the final column of Table <ref>. The unfair and heuristic policies do a poor job of balancing coverage, while the fair policy does much better at covering the larger group. Appendix <ref> outlines model selection for this section; Figure <ref> reveals that the unfair (ρ_ large=1.0) policy has a large discrepancy in average response throughout training, which is not the case for the fair policy. Both policies can experience similar coverage ratios, but this can come at the cost of a balanced or effective response.
§.§ Southwest Atlanta Beats
Environment Setting.
The graph environment in this experiment is based upon a 3-beat region in Southwest Atlanta, which is discretized into 154 rectangular partitions representing nodes on the spatial graph. See Figure <ref>(a); the patches are selected based on APD patroller trajectory data discretized into 1-minute intervals. The patches are constructed such that observed trajectories take approximately 1 iteration (i.e., 1 minute) to travel between adjacent patches. In Figure <ref>(b), the resultant graph is shown, which is divided into three beats with one patroller per beat. Finally, as shown in Figure <ref>(c), incidents are spatially distributed according to a distribution estimated from the ∼90k APD calls-for-service from January 1, 2009 to January 8, 2012. The temporal intensity is λ=0.25 per iteration, and the average service time is β=5 iterations.
Real Policy.
An additional baseline is developed for this setting based on limited APD patroller trajectory data. That is, APD patroller trajectories over a 2-day interval, divided into 1-minute time intervals, are tracked along the graph structure. Transitions to adjacent nodes are counted, and the resultant policy ranks potential actions according to the frequency with which each edge was traversed in the real trajectory data. This is denoted the “Real Policy” and reflects some information about how the police patrol in practice. This policy also uses a first-come, first-served queue for dispatch.
Modeling & Optimization.
Joint optimization is conducted using n_ outer=8 with n_ inner,ϕ=10 and n_ inner,θ=5 with 0 warm-start iterations. The dispatch networks are trained using n_ dispatch=10,000 simulated transitions per inner loop iteration, while all other optimization parameters for patrol and dispatch are identical to those described in Section <ref>. For comparison, patrol-only optimization uses n_ inner, θ=40 and dispatch-only optimization uses n_ inner,ϕ=80. As before, the patrol- and dispatch-only optimization parameters are identical to those of the joint optimization.
Results.
In Figure <ref> and Table <ref>, the response time distributions for each policy are represented over n_ episodes=100 episodes. While the jointly-optimized policy has the fastest response on average, the overall response time distribution characteristics are similar to those of the dispatch-only optimized policy. However, as shown in Figure <ref>(b), the jointly-optimized policy loses substantially fewer incidents per episode than the other policies, indicating that it is the most effective overall policy. Moreover, the jointly-optimized policy leads to faster response and fewer incidents lost than the realistic policy. This suggests that (joint) policy optimization can lead to substantial improvements when applied to real-world environments.
§ CONCLUSION
We have outlined a novel procedure for the joint optimization policies for patrol and dispatch using MARL. The patrol policies are learned using deep Q-learning with parameter sharing, and the dispatch policies are learned using a policy iteration scheme based on MIP action selection. These policies are combined in a joint optimization procedure that follows a coordinate-descent-style scheme, iteratively updating patrol policies given fixed dispatch policy and vice versa. Within this framework, we extend previous works addressing combinatorial action selection in RL via an MIP approach, where our novel procedure facilitates action selection in settings with stochastic state transitions. We comprehensively demonstrate that policies learned using this joint optimization approach outperform those optimized for patrol or dispatch alone, in addition to heuristics such as random patrol and priority-queue-based dispatch.
The experiments of Section <ref> reveal that the joint optimization procedure is robust to settings with a range of incident arrival dynamics. In the high call volume setting, the jointly-optimized policy performs similarly to the dispatch-only optimized policy. In the low call volume setting, the jointly-optimized policy performs similarly to the patrol-only optimized policy. The jointly-optimized policy is the only policy to perform effectively in both settings. In Section <ref>, a modified reward is used to obtain more fair policies than those trained “greedily”. The results reveal that our procedure is flexible to the reward definition and can generate policies that yield more balanced responses and coverage across groups. While there is still much room for improvement according to the equity-based metrics, this represents a promising direction for further research within the framework of joint policy optimization. Finally, Section <ref> suggests that the outlined method applies to real-world scenarios by applying joint policy optimization to a setting based on Southwest Atlanta. A comparison to a realistic policy based on real patroller trajectory data reveals significant improvements in response time distribution characteristics due to policy optimization; particularly, jointly-optimized policies for patrol and dispatch are most effective.
Future work can incorporate additional incident-level information in the decision-making process. For instance, the experiment of Section <ref> indicates that the encoding of group attributes in the reward can lead to a more equitable response. As with prior works considering fairness in police patrol and dispatch, incidents can also include information regarding socioeconomic and racial status, going beyond the geographic location-based fairness demonstrated in this work. Algorithmically, future research can also focus on the incorporation of fairness into the learning procedure itself, drawing inspiration from prior work on fairness in learning for contextual bandits <cit.> and RL <cit.>. Such studies require that certain equity constraints are satisfied throughout the learning process, which could be relevant to online learning of policies for patrol and dispatch.
§ ACKNOWLEDGEMENT
This work is partially supported by an NSF CAREER CCF-1650913, NSF DMS-2134037, CMMI-2015787, CMMI-2112533, DMS-1938106, DMS-1830210, and the Coca-Cola Foundation.
§ MODEL SELECTION
§.§ Model Selection in Section <ref>
In the experiments of Section <ref>, the desired policy is that which responds to incidents most efficiently. As such, the average response time is tracked over n_ episodes=100 of on-policy validation simulation over the course of optimization for each of the three optimized policies in each setting. This is depicted for the high call volume setting in Figure <ref>(a), which demonstrates that the policies corresponding to the minimum average response time are the jointly-optimized policy after iteration 22, the patrol-only optimized policy after iteration 15, and the dispatch-only optimized policy after iteration 9. Similarly, in the low call volume setting depicted in Figure <ref>(b), the selected policies are the jointly-optimized policy after iteration 35, the patrol-only optimized policy after iteration 19, and the dispatch-only optimized policy after iteration 7.
§.§ Model Selection in Section <ref>
In the experiment of Section <ref>, the goal is to find a policy that minimizes the difference in average response time between the two groups while balancing coverage. The characteristics of the “unfair” (ρ_ large=1.0) policy over the course of training are depicted in Figure <ref>(a), where coverage is computed using n_ episodes=25 on-policy validation simulations in each iteration, and response time characteristics are computed over n_ episodes=100 simulations (the same is true of the “fair” policy in Figure <ref>(b)). With the goal of this section in mind, the selected unfair policy corresponds to iteration 66, which has a minimal average difference in response time and coverage comparable to the heuristic policy. The fair policy is chosen similarly: the goal is to balance fair response and fair coverage while still yielding an “effective” policy. With this in mind, while iteration 23 corresponds to the most balanced average response in the top panel of Figure <ref>(b), the policy is not “effective” since there is a very large number of incident queue overflows (bottom panel). Therefore, the fair (ρ_ large=0.5) policy used in Section <ref> corresponds to the optimized policy after iteration 36, which improves upon the balancing of average response time (second panel) while maintaining few overflows of the incident queue (bottom panel) and yielding a better coverage ratio (top panel).
§.§ Model Selection in Section <ref>
Just as in Section <ref> and Appendix <ref>, in the experiment of Section <ref>, the policy is selected to minimize the average response time. Therefore, this metric is tracked over n_ episodes=100 on-policy simulated episodes throughout the optimization of each policy. This is visualized in Figure <ref>, indicating that the optimal policies are those after iteration 110 for the jointly-optimized policy, after iteration 33 for the patrol-only optimized policy, and after iteration 77 for the dispatch-only optimized policy.
arXiv:2409.02407v1 | 2024-09-04 | Multiferroicity, Magnetoelectricity, and Piezoelectricity in Two-Dimensional Janus VSBrI Monolayers
Qiuyue Ma, Busheng Wang, Guochun Yang, Yong Liu | cond-mat.mtrl-sci (primary), cond-mat.mes-hall
[email protected]
State Key Laboratory of Metastable Materials Science and Technology & Key Laboratory for Microstructural Material Physics of Hebei Province, School of Science, Yanshan University, Qinhuangdao 066004, China
§ ABSTRACT
Two-dimensional (2D) multiferroic materials that combine intrinsic ferromagnetism and ferroelectricity exhibit significant potential for applications in highly integrated magnetoelectric and multifunctional spintronic devices. Through first-principles calculations, we identify the Janus VSBrI monolayer as a promising multiferroic semiconductor material possessing both ferromagnetism and ferroelectricity. Specifically, the VSBrI monolayer shows a large in-plane magnetic anisotropy energy (MAE) of 460 μeV per V atom, a significant intrinsic in-plane spontaneous ferroelectric polarization of 1.20 × 10^-10 C/m, and a high energy barrier of 168 meV between the two ferroelectric states. Our findings reveal that the energy differences among different magnetic states correlate notably with polarization, hinting at sizable magnetoelectric coupling within VSBrI. Interestingly, we find that the stability of the ferroelectric phase can be enhanced by the application of biaxial tensile strain. The calculated in-plane piezoelectric coefficient d_11 reaches 29.01 pm/V and the out-of-plane piezoelectric coefficient d_32 reaches 1.60 pm/V, both significantly larger than those of most known 2D materials, which is highly desirable for practical applications in piezoelectric devices. These intriguing properties make the Janus VSBrI monolayer a promising candidate for 2D multifunctional spintronic devices, with significant potential to advance next-generation technology and inform the design of related electronic devices.
Multiferroicity, Magnetoelectricity, and Piezoelectricity in Two-Dimensional Janus VSBrI Monolayers
Qiuyue Ma, Busheng Wang, Guochun Yang, Yong Liu
September 9, 2024
===================================================================================================
§ INTRODUCTION
Two-dimensional (2D) nanomaterials have garnered considerable interest due to their unique physical properties and atomically thin structures, especially after the successful isolation of graphene <cit.>. The advent of 2D multifunctional materials, integrating multiple advantageous properties, significantly enhances device performance and broadens the horizons for advanced optoelectronic and spintronic technologies <cit.>. The compelling nature of these multifunctional materials stems from the complex interplay between charge, spin, and lattice degrees of freedom, making them highly attractive for advanced research and technological applications <cit.>. Multiferroics are multifunctional materials that exhibit more than one ferroic ordering within a single phase, such as ferroelectricity, ferromagnetism, ferroelasticity, or ferrotoroidicity. Encouraged by the recent success of 2D ferroelectric (FE) <cit.> and ferromagnetic (FM) <cit.> materials, 2D magnetoelectric materials with coupled FM and FE orders have gained significant attention for their superior physical characteristics and potential in magnetoelectric applications <cit.>. Multiferroics with both FE and FM orders are of great importance due to the interactions between magnetic and electric polarizations, which can be highly beneficial for achieving ultra-high-speed reading and writing in data storage <cit.>. Unfortunately, 2D magnetoelectric multiferroics are rare in nature, and their design has proven unexpectedly difficult <cit.>. This scarcity arises primarily from the inherent conflict between ferroelectricity and magnetism, exemplified by the “d_0 rule” of ferroelectricity in perovskites <cit.>. Generally, the generation of atomic magnetic moments requires a partially filled d orbital, while ferroelectricity demands that the d orbital be empty. This inherent contradiction significantly limits the development of single-phase magnetoelectric materials. In recent years, tremendous efforts have been devoted to understanding and searching for magnetoelectric multiferroics <cit.>.
2D Janus monolayers represent an attractive new class of 2D materials with mirror asymmetry, which has provided impetus to many other fields ranging from separation membranes to 2D electronics. Graphone <cit.>, also known as single-sided hydrogenated graphene, was created by removing half of the hydrogen from graphene and was the first proposed 2D Janus material. Since the discovery of graphone, 2D Janus materials have attracted significant interest <cit.>. Traditionally, 2D Janus materials are synthesized through bottom-up approaches, such as selective chemical functionalization of free-standing 2D materials or substitution of a surface-layer element in the layered structures of transition metal dichalcogenides (TMDs) <cit.>. 2D Janus materials show novel physical and chemical properties, including the Rashba effect <cit.>, valley spin splitting <cit.>, catalytic performance <cit.>, and piezoelectric polarization <cit.>. The TMD monolayers MXY (M = Mo, W; X ≠ Y = S, Se, Te) exhibit large Rashba spin splitting and valley spin splitting, potentially making a significant contribution to semiconductor spintronics and valleytronics <cit.>. In the realm of Janus TMDs, the Janus MoSSe monolayer, as initially synthesized by Lu et al., is derived from a monolayer of MoS_2 wherein selenium atoms have been substituted for the sulfur atoms in the top layer <cit.>. Previous studies have suggested its promising potential as an efficient photocatalyst for solar-driven water splitting across a broad spectrum due to its anticipated low carrier recombination rate <cit.>. In addition, the Janus VSSe monolayer featuring piezoelectricity, ferroelasticity, and strong valley polarization is a highly promising material for potential applications in nanoelectronics, optoelectronics, and valleytronics <cit.>. The successful synthesis and extraordinary properties of 2D Janus materials will promote the development of electronic and spintronic devices.
The piezoelectric effect is a fundamental electromechanical interaction found in semiconductors with crystal structures lacking inversion symmetry <cit.>, and it is particularly common in Janus structures. Significant advancements in 2D piezoelectric materials have created unprecedented opportunities for fascinating physics and provided a potential platform for multifunctional electronic devices <cit.>. Experimentally, the successfully synthesized Janus MoSSe monolayer exhibits vertical dipoles and a piezoelectric effect, suggesting potential for future applications <cit.>. Theoretically, the piezoelectric properties of various other 2D Janus materials have also been reported, such as Janus group-III chalcogenide monolayers <cit.>, ZnBrI <cit.>, FeGeN_3 <cit.>, and Bi_2X_2Y (X = S, Se, Te; Y = S, Se, Te; X ≠ Y) <cit.> monolayers, enriching the potential applications of 2D piezoelectric materials. However, a major issue with 2D piezoelectric materials is the absence or weakness of out-of-plane piezoelectricity. Although many piezoelectric materials display strong in-plane piezoelectric responses, those with significant out-of-plane piezoelectric responses are rare. Therefore, the search for 2D materials with large out-of-plane piezoelectric responses has become crucial and challenging.
In this work, based on first-principles calculations, we propose a 2D multiferroic material, the Janus VSBrI monolayer, which exhibits intrinsic ferroelectric and ferromagnetic properties. The identified Janus VSBrI monolayer shows semiconducting behavior with a bandgap of 0.76 eV. It possesses strong magnetic anisotropy (460 μeV per V atom) with an in-plane easy magnetic axis and an estimated Curie temperature (T_C) of 83 K. The in-plane spontaneous ferroelectric polarization is calculated as 1.20 × 10^-10 C/m. An energy barrier of 168 meV between two ferroelectric phases with opposite electronic polarizations indicates that the ferroelectricity is thermodynamically stable at room temperature and can be switched by applying a moderate external electric field <cit.>. Moreover, the calculation of the magnetoelectric coupling effect in the Janus VSBrI monolayer reveals that energy differences between various magnetic states vary significantly with polarization, confirming the presence of strong magnetoelectric coupling. The stability of the ferroelectric phase in the VSBrI monolayer is high and can be further enhanced by biaxial tensile strain. The Janus VSBrI monolayer not only exhibits large in-plane piezoelectricity (d_11) but also demonstrates large out-of-plane piezoelectricity (d_32), an uncommon but highly desirable trait in 2D materials. Our findings establish a promising platform for the advancement of spintronics and multiferroic electronic devices.
§ METHODS
The structural optimization and property calculations are performed within the framework of density functional theory (DFT) <cit.> using a plane-wave basis set as implemented in the Vienna ab initio Simulation Package (VASP) <cit.>. The projector-augmented wave (PAW) potential with the generalized gradient approximation of Perdew-Burke-Ernzerhof (GGA-PBE) is used as the exchange-correlation functional <cit.>. Considering the strong electronic correlation of the 3d electrons, an effective Hubbard U (U_eff = 2 eV) is added for the V-3d orbitals <cit.>. The plane-wave cutoff energy is set to 450 eV, and a Γ-centered 15 × 13 × 1 Monkhorst-Pack grid is employed to sample the first Brillouin zone <cit.>. A vacuum layer of at least 15 Å along the z direction is added to avoid unexpected interactions between periodic images. The structure undergoes full optimization until the Hellmann-Feynman forces acting on each atom are below 0.01 eV/Å, while ensuring that the total energy difference between consecutive steps remains less than 10^-6 eV. The thermal stability of a 4 × 4 × 1 supercell is confirmed through ab initio molecular dynamics (AIMD) simulations <cit.>. Phonon spectrum calculations are conducted self-consistently using density functional perturbation theory (DFPT) <cit.> implemented in the PHONOPY code <cit.>. The ferroelectric polarization is calculated using the standard Berry phase method <cit.>. The elastic stiffness tensor C_ij and piezoelectric stress tensor e_ij are calculated by using the strain-stress relationship (SSR) and the DFPT method, respectively. The 2D elastic coefficients C_ij^2D and piezoelectric stress coefficients e_ij^2D have been renormalized by C_ij^2D = L_zC_ij^3D and e_ij^2D = L_ze_ij^3D, where L_z is the length of the unit cell along the z direction.
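For reference, the main settings stated above can be collected in a VASP-style input; this is an illustrative sketch rather than the authors' actual input files. The tag names follow standard VASP conventions, the species order (V, S, Br, I) is assumed from the structure, and values not stated in the text (e.g., smearing) are omitted rather than guessed.

```python
incar = {
    "ENCUT": 450,        # plane-wave cutoff (eV)
    "EDIFF": 1e-6,       # SCF energy convergence (eV)
    "EDIFFG": -0.01,     # ionic relaxation: max force 0.01 eV/Angstrom
    "ISPIN": 2,          # spin-polarized (FM/AFM configurations)
    "LDAU": True,        # DFT+U for the V 3d states
    "LDAUTYPE": 2,       # Dudarev scheme: only U_eff = U - J enters
    "LDAUL": [2, -1, -1, -1],       # d-channel on V; no correction on S/Br/I
    "LDAUU": [2.0, 0.0, 0.0, 0.0],  # U_eff = 2 eV on V 3d
    "LDAUJ": [0.0, 0.0, 0.0, 0.0],
}
kpoints = (15, 13, 1)    # Gamma-centered Monkhorst-Pack grid
vacuum_min = 15.0        # Angstrom of vacuum along z
```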
§ RESULTS AND DISCUSSION
Figures <ref>(a) and (b) show the top and side views of the Janus VSBrI monolayer, derived from the ground-state structure of the parent VSI_2 monolayer <cit.> by replacing the top layer of halogen atoms I with the more electronegative halogen atoms Br. The Janus nature of VSBrI arises from the presence of two outer sublayers consisting of nonequivalent halogen atoms sandwiching the central V atoms, resulting in a lower symmetry (Pm) in comparison to its parent VSI_2 monolayer (Pmm2). As shown in Fig. <ref>(c), the electron localization function (ELF) illustrates the ionic characteristics of the V-S, V-I, and V-Br bonds, and Bader charge analysis indicates that S, Br, and I atoms gain electrons, while V atoms lose electrons. Four magnetic configurations of V atoms (one FM state and three AFM states) with a 2 × 2 × 1 supercell are considered to determine the magnetic ground state of the Janus VSBrI monolayer [Fig. <ref>(d)]. By comparing the energies of the different magnetic configurations, it is determined that the magnetic ground state of the Janus VSBrI monolayer is FM, with energy differences of 23.18, 25.67, and 19.25 meV/f.u. relative to the three AFM states. The spatial distribution of the spin-polarized electron density reveals that the significant FM localized magnetic moment (1 μ_B per unit cell) is primarily contributed by the V atom. The optimized lattice parameters for the FM unit cell are a = 3.85 Å and b = 4.66 Å (see Table <ref>). The V ions are located at the center of the S_2Br_2I_2 octahedron and shift towards the S ions on one side, resulting in different bond lengths for V-S1 (2.11 Å) and V-S2 (2.55 Å), which breaks the spatial inversion symmetry along the b-axis and leads to a polar structure. For the parent VSI_2 monolayer, the bond lengths of V-S1 and V-S2 are 2.10 Å and 2.49 Å, respectively, which are shorter than those in the Janus VSBrI monolayer. Consequently, the lattice parameter b of the Janus VSBrI monolayer is larger than that of VSI_2 (b = 4.60 Å). The V-halogen bond length depends on the monolayer's other halogen. The V-I bond length in VSBrI decreases to 2.60 Å upon introduction of the more electronegative Br atoms, shorter than the 2.80 Å V-I bond length in VSI_2. The I/Br-V-S1 and I/Br-V-S2 bond angles are larger and smaller, respectively, than the I-V-S1 (96.8^∘) and I-V-S2 (83.2^∘) bond angles of the VSI_2 monolayer, resulting in the lattice parameter a of the Janus VSBrI monolayer being smaller than that of the parent structure VSI_2 (a = 3.99 Å).
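As a rough illustration of how such energy differences constrain the magnetic couplings, one can map them onto a rectangular-lattice Heisenberg model H = -Σ_<ij> J_ij S_i·S_j with |S|^2 absorbed into J. The assignment of AFM1/AFM2 to stripe orderings along a and b is an assumption made here for the sketch, since the text labels the states only as AFM1-3, so the extracted couplings are indicative only.

```python
# Energy differences E(AFM) - E(FM) in meV per formula unit, from the text.
dE = {"AFM1": 23.18, "AFM2": 25.67, "AFM3": 19.25}

# Per-site energies for nearest-neighbor couplings Ja (along a), Jb (along b):
#   E_FM         = E0 - Ja - Jb
#   stripe-a     = E0 + Ja - Jb   ->  dE = 2*Ja
#   stripe-b     = E0 - Ja + Jb   ->  dE = 2*Jb
#   checkerboard = E0 + Ja + Jb   ->  dE = 2*(Ja + Jb)
Ja = dE["AFM1"] / 2.0   # assuming AFM1 is the stripe order along a
Jb = dE["AFM2"] / 2.0   # assuming AFM2 is the stripe order along b
print(Ja, Jb)           # positive values -> ferromagnetic exchange (meV)
# If AFM3 were the checkerboard order, this nearest-neighbor model would
# predict dE = 2*(Ja + Jb) = 48.85 meV, well above the quoted 19.25 meV,
# so couplings beyond nearest neighbors (or a different assignment of the
# AFM states) would have to matter here.
```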
To assess the structural integrity of the Janus VSBrI monolayer, we calculated its cohesive energy, defined as E_coh = (E_V + E_S + E_Br + E_I - E_VSBrI)/4, where E_V, E_S, E_Br, E_I, and E_VSBrI represent the total energies of a single V atom, a single S atom, a single Br atom, a single I atom, and the VSBrI monolayer, respectively; the divisor 4 is the number of atoms per unit cell. With a cohesive energy of 3.46 eV/atom, the Janus VSBrI monolayer demonstrates a level of stability comparable to that of phosphorene (3.48 eV/atom) <cit.>. As shown in Fig. <ref>(e), the absence of imaginary frequencies in the phonon dispersion confirms the dynamic stability of the Janus VSBrI monolayer. Its good thermal stability is evidenced by the small energy fluctuation and slight structural deformation observed in the AIMD simulation conducted at a constant room temperature of 300 K [see Fig. S1 of the Supplemental Material]. The calculated elastic constants (C_11 = 37.45 N/m, C_12 = 7.49 N/m, C_22 = 31.48 N/m, and C_66 = 9.96 N/m) satisfy the Born-Huang criteria for mechanical stability <cit.>: C_11 > 0, C_66 > 0, C_11C_22 - C_12^2 > 0, indicating that the Janus VSBrI monolayer is mechanically stable. Besides stability, material strength and anisotropy are important for practical applications. Therefore, we calculated the angular-dependent elastic properties, the Young's modulus Y_2D(θ) and Poisson's ratio ν_2D(θ), using the following two formulas <cit.>:
Y_2D(θ) = C_11C_22 - C_12^2/C_11sin^4 θ + C_22cos^4 θ + (B - 2C_12)sin^2 θcos^2 θ
ν_2D(θ) = C_12(sin^4 θ + cos^4 θ) - (C_11 + C_22 - B)sin^2 θcos^2 θ/C_11sin^4 θ + C_22cos^4 θ + (B - 2C_12)sin^2 θcos^2 θ
where θ is the angle measured from the x direction (0^∘), with the y direction at 90^∘, and B = (C_11C_22 - C_12^2)/C_66. The results show that the Young's modulus and Poisson's ratio exhibit obviously anisotropic characters [Fig. <ref>]. The Young's modulus peaks along the x and y directions, with values of 35.67 N/m and 29.98 N/m, respectively. It is noteworthy that the Young's modulus of VSBrI is lower than that of many other 2D materials, suggesting it can be easily tuned by strain, which makes the Janus VSBrI monolayer highly suitable for novel flexible piezotronic and electronic applications. In addition, the Poisson's ratio is 0.24 and 0.20 along the x and y directions, respectively.
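As a quick consistency check, the short Python sketch below evaluates Y_2D(θ) and ν_2D(θ) from the elastic constants quoted above and reproduces the directional values reported in the text (35.67 and 29.98 N/m; 0.24 and 0.20). The script is illustrative only and not part of the original workflow; the sign convention of the Poisson-ratio numerator is the one that recovers the quoted positive values.

```python
import numpy as np

# Elastic constants of the Janus VSBrI monolayer (N/m), from the text.
C11, C12, C22, C66 = 37.45, 7.49, 31.48, 9.96
delta = C11 * C22 - C12**2
B = delta / C66

def young_2d(theta):
    """Angular-dependent 2D Young's modulus Y_2D(theta) in N/m."""
    s2, c2 = np.sin(theta)**2, np.cos(theta)**2
    return delta / (C11 * s2**2 + C22 * c2**2 + (B - 2 * C12) * s2 * c2)

def poisson_2d(theta):
    """Angular-dependent 2D Poisson's ratio nu_2D(theta)."""
    s2, c2 = np.sin(theta)**2, np.cos(theta)**2
    num = C12 * (s2**2 + c2**2) - (C11 + C22 - B) * s2 * c2
    return num / (C11 * s2**2 + C22 * c2**2 + (B - 2 * C12) * s2 * c2)

for deg in (0.0, 90.0):
    th = np.deg2rad(deg)
    print(f"theta = {deg:5.1f} deg: Y = {young_2d(th):.2f} N/m, nu = {poisson_2d(th):.2f}")
# Expected: Y ~ 35.67 and 29.98 N/m; nu ~ 0.24 and 0.20 along x and y.
```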
The electronic properties of the Janus VSBrI monolayer were investigated by calculating its electronic band structure and density of states (DOS). As illustrated in Fig. <ref>(a), the VSBrI monolayer is identified as a semiconductor with a band gap of 0.76 eV, as calculated using the PBE + U functional. The band gap of the Janus VSBrI monolayer is larger than that of the parent VSI_2 monolayer (0.63 eV) <cit.>. The valence band maximum and the conduction band minimum are located at the X and Y high-symmetry points, respectively, indicating a characteristic indirect band gap (0.76 eV) for the Janus VSBrI monolayer. It is important to note that the hybrid Heyd-Scuseria-Ernzerhof (HSE06) functional yields a band structure similar to that of the PBE + U functional; however, the band gap values (1.58 eV in the spin-up channel and 3.21 eV in the spin-down channel) are larger than those of the PBE + U functional (0.78 eV in spin-up and 1.83 eV in spin-down) [see Fig. <ref>(b)], as the PBE functional often underestimates the band gap. The DOS of the Janus VSBrI monolayer is shown in Fig. S2 of the Supplemental Material, which indicates that the S and I ions contribute to the valence band maximum, while the conduction band minimum is primarily influenced by the V and S ions. Additionally, near the Fermi level, the spin-up states are primarily contributed by the electronic states of the V and I ions.
The magnetic anisotropy energy (MAE) describes the magnetic anisotropy and determines the ability to resist disruptions of the magnetic ordering caused by external magnetic fields or thermal agitation. The larger the MAE, the better the system resists thermal disturbance and maintains long-range magnetic ordering. The relative energies associated with different magnetization directions indicate that the Janus VSBrI monolayer favors in-plane magnetization (x-axis) [Fig. <ref>(c)]. The calculated MAE of the Janus VSBrI monolayer is 460 μeV/V, which is comparable to that of the Mn_2PSb monolayer (450 μeV/Mn) <cit.> and larger than those of Cr_1.5Br_1.5I (356 μeV/Cr) <cit.>, VP (101.4 μeV/V), and VAs (262.6 μeV/V) <cit.>. We then determined the magnetic coupling parameters (J) of the Janus VSBrI monolayer using the classical Heisenberg model, with the Hamiltonian defined as:
H = - ∑_< i,j > J_ijS_i·S_j - A∑_i(S_i^z)^2
where S and A represent the spin quantum number (|S|=1/2) and the MAE, respectively. The magnetic coupling parameters of the nearest neighbors J_1, next-nearest neighbors J_2, and third-nearest neighbors J_3 are calculated via:
E(FM) = E_0- (4J_1+4J_2+8J_3)S^2
E(AFM1) = E_0 - (4J_1-4J_2-8J_3)S^2
E(AFM2) = E_0 - (-4J_1+4J_2-8J_3)S^2
E(AFM3) = E_0 - (-4J_1-4J_2+8J_3)S^2
The calculated J_1, J_2, and J_3 are 21.77, 16.79, and 14.82 meV, respectively. Since the J values are positive, the magnetic configuration of the Janus VSBrI monolayer favors an FM arrangement. With the above magnetic coupling parameters and MAE, the T_C of the Janus VSBrI monolayer is estimated using Monte Carlo simulations based on the Heisenberg model. Figure <ref>(d) shows the magnetic moment and specific heat of the V atoms as a function of temperature. The calculated T_C of the Janus VSBrI monolayer is 83 K, which is higher than those of magnetoelectric multiferroics such as the VOClBr (43 K) <cit.>, NbSCl_2 (25 K) <cit.>, and VOF_2 (15 K) <cit.> monolayers.
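For reference, the four configuration energies above form a small linear system in (J_1, J_2, J_3) once the reference energy E_0 is eliminated by subtracting E(FM); a minimal Python sketch of the solve is given below. The example energy differences are hypothetical per-supercell values (in meV), chosen only so that the solver returns the J_1, J_2, and J_3 quoted above; the normalization convention (per supercell versus per formula unit) must be matched to the actual DFT output.

```python
import numpy as np

def exchange_constants(E_FM, E_AFM1, E_AFM2, E_AFM3, S=0.5):
    """Solve for (J1, J2, J3) from the four Heisenberg configuration energies.

    E(FM)   = E0 - ( 4J1 + 4J2 + 8J3) S^2
    E(AFM1) = E0 - ( 4J1 - 4J2 - 8J3) S^2
    E(AFM2) = E0 - (-4J1 + 4J2 - 8J3) S^2
    E(AFM3) = E0 - (-4J1 - 4J2 + 8J3) S^2
    Subtracting E(FM) from each AFM energy eliminates E0.
    """
    d = np.array([E_AFM1 - E_FM, E_AFM2 - E_FM, E_AFM3 - E_FM])
    A = np.array([[0.0, 8.0, 16.0],          # E(AFM1)-E(FM) = (8J2 + 16J3) S^2
                  [8.0, 0.0, 16.0],          # E(AFM2)-E(FM) = (8J1 + 16J3) S^2
                  [8.0, 8.0, 0.0]]) * S**2   # E(AFM3)-E(FM) = (8J1 + 8J2) S^2
    return np.linalg.solve(A, d)

# Hypothetical supercell energy differences (meV), chosen to reproduce the
# J values quoted in the text (21.77, 16.79, 14.82 meV).
J1, J2, J3 = exchange_constants(0.0, 92.86, 102.82, 77.12)
print(f"J1 = {J1:.2f}, J2 = {J2:.2f}, J3 = {J3:.2f} meV")
```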
Next, the ferroelectric properties of the Janus VSBrI monolayer are studied in detail. Due to the anisotropic octahedral crystal field in the VSBrI monolayer, the cation 3d orbitals split into four energy levels: the d_z^2, d_x^2-y^2, d_xz/d_yz, and d_xy orbitals, respectively [Fig. <ref>(a)]. The V^4+ ion has a 3d^1 electronic configuration, with the single unpaired d electron occupying the lowest-energy d_xy orbital, which is distributed in the plane perpendicular to the V-S chain, as illustrated by the charge density plot in Fig. <ref>(b). Typically, magnetism arises from the partially filled d orbitals of transition-metal cations. However, it is widely accepted that this phenomenon hinders the emergence of ferroelectricity, a principle known as the d_0 rule in multiferroics <cit.>. Here, although V^4+ is not d_0, it behaves like d_0 along the z direction (the b-axis) since the d_yz and d_xz orbitals are empty. This unique anisotropic orbital ordering is responsible for the violation of the d_0 rule, resulting in the emergence of proper ferroelectricity, referred to as the "anisotropic d_1 rule" <cit.>. In the Janus VSBrI monolayer, the displacement of the V ions from the middle of the octahedron disrupts inversion symmetry, leading to the generation of a reversible in-plane electric polarization along the b-axis. The calculated in-plane spontaneous ferroelectric polarization (P_s) is 1.20 × 10^-10 C/m, which is larger than those of the CrNCl (0.60 × 10^-10 C/m) <cit.> and SnTe (0.22 × 10^-10 C/m) <cit.> monolayers. Figure <ref>(c) shows the energy versus the structural parameter h for the Janus VSBrI monolayer, which resembles a double-well potential curve. Here h is defined as the difference between the two V-S bond lengths along the b-axis: h = 0.45 and -0.45 Å represent the two equivalent stable FE phases with opposite electric polarizations, and h = 0 represents the centrosymmetric paraelectric (PE) phase. For the Janus VSBrI monolayer, the calculated energy barrier (E_G) for FE switching is 169 meV/f.u., which is higher than that of the parent VSI_2 monolayer (140 meV/f.u.), indicating that its FE phase is relatively more stable. We then simulated an in-plane ferroelectric polarization switching pathway through the antiferroelectric (AFE) phase for the Janus VSBrI monolayer [Fig. <ref>(d)]. The energy barrier (ΔE) of the FE-AFE-FE pathway decreases significantly, to 88 meV per formula unit. Despite the close energies of the AFE and FE states, the barrier between them is significantly larger than the thermal energy at room temperature (25 meV), ensuring that the FE phase remains stable and does not transition to the AFE phase due to thermal vibrations at room temperature. The magnetoelectric coupling effect of the Janus VSBrI monolayer was preliminarily assessed by observing the magnetic response to the variation of the ferroelectric polarization. We displaced the V atom along the V-S chain away from the ground state to change the polarization monotonically. The energy dependence of the three magnetic configurations on polarization was calculated using the ferromagnetic phase as the energy reference [Fig. S3 of the Supplemental Material]. The results indicate that the energy differences between the different magnetic states vary significantly with polarization, highlighting the influence of polarization on the magnetic properties. Therefore, it becomes possible for the Janus VSBrI monolayer to exhibit a notable magnetoelectric coupling.
Biaxial strain engineering is an effective approach to tuning the properties of 2D materials. As shown in Fig. <ref>, we investigated the impact of biaxial tensile strain ranging from 0% to 6% on the magnetic, electronic, and ferroelectric properties of the Janus VSBrI monolayer. The energy differences between the three AFM configurations and the FM configuration (ΔE = E_AFM - E_FM) were calculated at varying percentages of biaxial strain [Fig. <ref>(a)]. The positive energy differences indicate that no magnetic transition results from the application of biaxial tensile strain. Therefore, it can be concluded that the Janus VSBrI monolayer exhibits strong ferromagnetic robustness under biaxial strain from 0% to 6%. As shown in Fig. <ref>(b), with increasing biaxial tensile strain, J_1 exhibits a monotonically increasing trend, indicating that the ferromagnetic coupling strength between nearest-neighbor V ions gradually strengthens. Both J_2 and J_3 decrease under biaxial tensile strain, suggesting that the strain weakens the ferromagnetic coupling between the next-nearest-neighbor and third-nearest-neighbor V ions. Since the number of J_3 bonds is twice that of J_1 and J_2, J_3 predominantly determines the change in T_C. Figure S4 of the Supplemental Material shows the variation of T_C and MAE under biaxial strain ranging from 0% to 6%. Despite the gradual decrease in T_C with increasing biaxial tensile strain, it remains higher than the T_C of CrI_3 (45 K) <cit.>. The MAE initially decreases and then increases with increasing biaxial tensile strain. Calculations reveal that when a 6% biaxial tensile strain is applied, the MAE value along the z-axis is the lowest, causing the magnetic easy axis to shift from the x-axis to the z-axis. We also calculated the electronic band structures of the Janus VSBrI monolayer under biaxial tensile strain [Fig. S5 of the Supplemental Material]. The Janus VSBrI monolayer maintains its semiconducting behavior within the range of 0 to 6% biaxial tensile strain, and the spin gap gradually increases with tensile strain, reaching 0.92 eV at 6% strain. As illustrated in Fig. <ref>(c), the polar displacement and spontaneous ferroelectric polarization were evaluated under the application of biaxial tensile strain. The polar displacement is defined as the variation in the V ion's fractional coordinates along the polar axis (b-axis) during the transition from the PE state to the FE state. The results show that the polar displacement increases monotonically with the application of biaxial tensile strain. Due to the direct relationship between polar displacement and spontaneous ferroelectric polarization, the trend of the ferroelectric polarization under biaxial tensile strain is similar to that of the polar displacement. The spontaneous ferroelectric polarization reaches its maximum value of 1.43 × 10^-10 C/m under 6% biaxial tensile strain. Additionally, the activation energy barriers E_G and ΔE for electric polarization switching can be effectively adjusted by applying biaxial tensile strain [Fig. <ref>(d)]. The E_G values are nearly twice the ΔE values as the biaxial strain changes from 0% to 6%, leading to the conclusion that the path through the AFE phase consistently exhibits the minimum activation energy, irrespective of the strain magnitude. The E_G and ΔE values increase with biaxial tensile strain, ranging from 168 to 378 meV/f.u. and 89 to 194 meV/f.u., respectively.
It is known that high energy barriers ensure the stability of the ferroelectric phase. By studying the effect of biaxial tensile strain on the ferroelectric properties, it was found that the stability of the ferroelectric phase in the Janus VSBrI monolayer is further enhanced with increasing biaxial tensile strain.
As shown in Fig. S6 of the Supplemental Material, we calculated the planar-averaged electrostatic potential energy variation along the z direction for the VSI_2 and VSBrI monolayers. Due to horizontal mirror symmetry, the electrostatic potential energy difference and intrinsic polar field of the VSI_2 monolayer are zero. Because the electronegativity of the halogen atom Br (2.96) is larger than that of the halogen atom I (2.66), an out-of-plane polarization arises in the Janus VSBrI monolayer, with a value of 0.04 × 10^-10 C/m. The existence of out-of-plane polarization results in an electrostatic potential energy difference and an intrinsic polar field in the Janus VSBrI monolayer. The electrostatic potential energy difference between the two sides is 0.44 eV, which can be interpreted as a surface-dependent work function. As illustrated in Fig. S6(b) of the Supplemental Material, we determined the strength of the intrinsic polar field from the slope of the curve, approximately 1.60 eV/Å, which is comparable to the reported value for the 2D piezoelectric material Janus Si_2SeTe (1.79 eV/Å) <cit.>. An intrinsic polar field may give rise to piezoelectricity in 2D materials, suggesting that the Janus VSBrI monolayer possesses potential piezoelectricity. Accordingly, the piezoelectric response of VSBrI was then estimated. In noncentrosymmetric materials, the piezoelectric effect leads to the generation of electric dipole moments and electric charge in response to applied mechanical strain or stress. The 2D piezoelectric effect is described by the piezoelectric stress coefficients e_ij and the piezoelectric strain coefficients d_ij. The relaxed piezoelectric tensors (e_ij and d_ij) are obtained as the sum of ionic and electronic contributions:
e_ij = ∂P_i/∂ε_j = e_ij^elc + e_ij^ion
d_ij = ∂P_i/∂σ_j = d_ij^elc + d_ij^ion
where P_i, ε_j, and σ_j represent the polarization, strain, and stress components, respectively. The piezoelectric strain coefficients d_ij can be derived from the piezoelectric stress coefficients e_ij and the elastic stiffness coefficients C_ij:
d_11 = e_11C_22 - e_12C_12/C_11C_22 - C_12^2
d_12 = e_12C_11 - e_11C_12/C_11C_22 - C_12^2
d_31 = e_31C_22 - e_32C_12/C_11C_22 - C_12^2
d_32 = e_32C_11 - e_31C_12/C_11C_22 - C_12^2
The calculated in-plane piezoelectric stress coefficients e_11 and e_12 are 11.10 × 10^-10 C/m and 3.17 × 10^-10 C/m, respectively, while the out-of-plane piezoelectric stress coefficients e_31 and e_32 are 0.40 × 10^-10 C/m and 0.56 × 10^-10 C/m, respectively. According to equations (8)-(11), the calculated in-plane piezoelectric strain coefficients d_11 and d_12 are 29.01 pm/V and 3.17 pm/V, respectively. The d_11 value for the VSBrI monolayer is much larger than that of the well-known piezoelectric material MoS_2 (3.7 pm/V) <cit.>. In contrast to VSI_2, whose horizontal mirror symmetry restricts it to in-plane piezoelectricity, the absence of this symmetry in VSBrI results in two distinctive out-of-plane piezoelectric responses characterized by d_31 and d_32. The d_31 (0.75 pm/V) of the Janus VSBrI monolayer is much larger than that of the Janus group-III chalcogenide monolayers (0.07-0.46 pm/V) <cit.> and larger than that of the Janus VOFI monolayer (-0.68 pm/V) <cit.>, which shares the same space group as VSBrI. Interestingly, d_32 is more than twice d_31, with a value of 1.60 pm/V, which is larger than in many known 2D materials, making VSBrI highly desirable for multifunctional piezoelectric devices.
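The conversion in equations (8)-(11) is a simple linear map; a minimal Python sketch applying it to the values quoted above is given below. With e_ij in 10^-10 C/m and C_ij in N/m, the ratio is in 10^-10 m/V, i.e., 100 pm/V, and the script reproduces the reported d_11, d_12, d_31, and d_32.

```python
# Piezoelectric stress coefficients (in 1e-10 C/m) and elastic constants
# (in N/m) for the Janus VSBrI monolayer, as quoted in the text.
e11, e12 = 11.10, 3.17
e31, e32 = 0.40, 0.56
C11, C12, C22 = 37.45, 7.49, 31.48

delta = C11 * C22 - C12**2          # common denominator of Eqs. (8)-(11)
to_pm_per_V = 100.0                 # (1e-10 C/m)/(N/m) = 1e-10 m/V = 100 pm/V

d11 = (e11 * C22 - e12 * C12) / delta * to_pm_per_V
d12 = (e12 * C11 - e11 * C12) / delta * to_pm_per_V
d31 = (e31 * C22 - e32 * C12) / delta * to_pm_per_V
d32 = (e32 * C11 - e31 * C12) / delta * to_pm_per_V
print(d11, d12, d31, d32)           # ~29.01, 3.17, 0.75, 1.60 pm/V
```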
Considering that the valence electron of the Janus VSBrI monolayer comes from a 3d orbital, the Hubbard U correction was reassessed using the GGA+U method (U_eff = 0 ∼ 2 eV) to re-check its magnetic, ferroelectric, structural, and electronic properties. The energy differences between the FM and three AFM states across the U_eff range of 0 to 2 eV are listed in Table SI of the Supplemental Material. The results indicate that the energy of the FM configuration is always lower than that of the three AFM configurations, regardless of the U_eff value, suggesting that the ground state of the Janus VSBrI monolayer remains FM. As shown in Fig. S7 of the Supplemental Material, the polar displacement decreases as U_eff increases, indicating that the FE state approaches the PE state, resulting in a reduction of the polarization value. The calculations indicate that the spontaneous ferroelectric polarization decreases from 1.33 × 10^-10 C/m to 1.20 × 10^-10 C/m. The physical reason for the diminishing ferroelectricity is that the increased U_eff weakens the d-p orbital hybridization between V and S, thereby reducing the driving force of its proper ferroelectricity. As shown in Fig. S8 of the Supplemental Material, the lattice constants a and b do not change significantly as the U_eff value increases. The band structures of the Janus VSBrI monolayer under different U_eff values show no obvious alterations, and its semiconductor characteristics remain unaltered [Fig. S9 of the Supplemental Material]. Therefore, it can be confidently concluded that the ground state of the Janus VSBrI monolayer remains both FM and FE, unaffected by the choice of the Hubbard U in the DFT calculations, and the selected U_eff value has little effect on the structural and electronic properties.
§ CONCLUSION
In summary, we predict a 2D intrinsic semiconducting Janus VSBrI monolayer with a band gap of 0.76 eV. The Janus VSBrI monolayer exhibits ferroelasticity, ferroelectricity, and strong piezoelectric behavior. Monte Carlo simulations based on the Heisenberg model show that the magnetic T_C reaches 83 K. The VSBrI monolayer possesses a large MAE (460 μeV/V) with an in-plane easy magnetization direction. Additionally, the calculated in-plane spontaneous ferroelectric polarization along the b-axis is about 1.20 × 10^-10 C/m. The in-plane ferroelectric switching pathways involve a PE intermediate phase with an energy barrier of 169 meV/f.u. and an AFE intermediate phase with an energy barrier of 88 meV/f.u. Interestingly, the VSBrI monolayer exhibits a notable magnetoelectric coupling effect, driven by the substantial variation of the energy differences among the magnetic states with polarization. We find that the stability of the ferroelectric phase is further enhanced under increasing biaxial tensile strain. The predicted in-plane d_11 (29.01 pm/V) and out-of-plane d_32 (1.60 pm/V) are higher than or comparable with those of many known 2D materials, making VSBrI highly desirable for ultrathin piezoelectric devices. Our work provides a promising platform for exploring multiferroic and piezoelectric materials, which is important for the development of multifunctional spintronic devices.
§ ACKNOWLEDGEMENT
This work was supported by the Natural Science Foundation of China under Grant No. 22372142, the Innovation Capability Improvement Project of Hebei Province (22567605H), the Natural Science Foundation of Hebei Province of China (No. B2021203030), and the Science and Technology Project of Hebei Education Department (No. JZX2023020). The numerical calculations in this paper have been done on the supercomputing system in the High Performance Computing Center of Yanshan University.
§ DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§ REFERENCES
99
1K-Science-2004
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov,
Electric field effect in atomically thin carbon films,
https://www.science.org/doi/abs/10.1126/science.1102896Science 306 666-669 (2004).
2S.-Rev.-2007
S. Das Sarma, S. Adam, E. H. Hwang, and E. Rossi,
Electronic transport in two-dimensional graphene,
https://link.aps.org/doi/10.1103/RevModPhys.83.407Rev. Mod. Phys. 83 407 (2011).
3K-Nature-2005
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov,
Two-dimensional gas of massless Dirac fermions in graphene,
https://doi.org/10.1038/nature04233Nature 438 197-200 (2005).
4E-Phys.-2007
E. H. Hwang, S. Adam, and S. D. Sarma,
Carrier transport in two-dimensional graphene layers,
https://link.aps.org/doi/10.1103/PhysRevLett.98.186806Phys. Rev. Lett. 98 186806 (2007).
Z-Nano-2021
Z. Liu, L. Deng, and B. Peng,
Ferromagnetic and ferroelectric two-dimensional materials for memory application,
https://doi.org/10.1007/s12274-020-2860-3Nano Res. 14 1802-1813 (2021).
H-Nat.-2018
H. Wang, Z. R. Liu, H. Y. Yoong, T. R. Paudel, J. X. Xiao, R. Guo, W. N. Lin, P. Yang, J. Wang, G. M. Chow, T. Venkatesan, E. Y. Tsymbal, H. Tian, and J. S. Chen,
Direct observation of room-temperature out-of-plane ferroelectricity and tunneling electroresistance at the two-dimensional limit,
https://doi.org/10.1038/s41467-018-05662-yNat. Commun. 9 3319 (2018).
C-Natl-2019
C. Lu, M. Wu, L. Lin, and J. M. Liu,
Single-phase multiferroics: new materials, phenomena, and physics,
https://doi.org/10.1093/nsr/nwz091Natl. Sci. Rev. 6 653-668 (2019).
W-Nature-2006
W. Eerenstein, N. D. Mathur, and J. F. Scott,
Multiferroic and magnetoelectric materials,
https://doi.org/10.1038/nature05023Nature 442 759-765 (2006).
S-Adv-2015
S. Dong, J. M. Liu, S. W. Cheong, and Z. Ren,
Multiferroic materials and magnetoelectric physics: Symmetry, entanglement, excitation, and topology,
https://doi.org/10.1080/00018732.2015.1114338Adv. Phys. 64 519-626 (2015).
K.-Science-2016
K. Chang, J. Liu, H. Lin, N. Wang, K. Zhao, A. Zhang, F. Jin, Y. Zhong, X. Hu, W. Duan et al.,
Discovery of robust in-plane ferroelectricity in atomic-thick SnTe,
https://www.science.org/doi/abs/10.1126/science.aad8609Science 353 274-278 (2016).
J.-Phys. Rev. Lett.-2018
J. Xiao, H. Zhu, Y. Wang, W. Feng, Y. Hu, A. Dasgupta, Y. Han, Y. Wang, D. A. Muller, L. W. Martin et al.,
Intrinsic two-dimensional ferroelectricity with dipole locking,
https://link.aps.org/doi/10.1103/PhysRevLett.120.227601Phys. Rev. Lett. 120 227601 (2018).
Z. Fei-Nature-2018
Z. Fei, W. Zhao, T. A. Palomaki, B. Sun, M. K. Miller, Z. Zhao, J. Yan, X. Xu, and D. H. Cobden,
Ferroelectric switching of a two-dimensional metal,
https://doi.org/10.1038/s41586-018-0336-3Nature 560 336-339 (2018).
K.-Nano-2021
K. Lee, A. H. Dismukes, E. J. Telford, R. A. Wiscons, J. Wang, X. Xu, C. Nuckolls, C. R. Dean, X. Roy, and X. Zhu,
Magnetic order and symmetry in the 2D semiconductor CrSBr,
https://doi.org/10.1021/acs.nanolett.1c00219Nano Lett. 21 3511-3517 (2021).
Y.-Nature-2018
Y. Deng, Y. Yu, Y. Song, J. Zhang, N. Z. Wang, Z. Sun, Y. Yi, Y. Z. Wu, S. Wu, J. Zhu et al.,
Gate-tunable room-temperature ferromagnetism in two-dimensional Fe_3GeTe_2,
https://doi.org/10.1038/s41586-018-0626-9Nature 563 94-99 (2018).
C.H-Phys-2018
C. Huang, Y. Du, H. Wu, H. Xiang, K. Deng, and E. Kan,
Prediction of intrinsic ferromagnetic ferroelectricity in a transition-metal halide monolayer,
https://link.aps.org/doi/10.1103/PhysRevLett.120.147601Phys. Rev. Lett. 120 147601 (2018).
S.-Nat-2007
S. W. Cheong and M. Mostovoy,
Multiferroics: a magnetic twist for ferroelectricity,
https://doi.org/10.1038/nmat1804Nat. Mater. 6 13-20 (2007).
S-Natl-2019
S. Dong, H. Xiang and E. Dagotto,
Magnetoelectricity in multiferroics: a theoretical perspective,
https://doi.org/10.1093/nsr/nwz023Natl. Sci. Rev. 6 629-641 (2019).
X-Mod-2014
X. Huang and S. Dong,
Ferroelectric control of magnetism and transport in oxide heterostructures,
https://doi.org/10.1142/S0217984914300105Mod. Phys. Lett. B 28 1430010 (2014).
R-Nat-2007
R. Ramesh and N. A. Spaldin,
Multiferroics: Progress and prospects in thin films,
https://doi.org/10.1038/nmat1805Nat. Mater. 6 21-29 (2007).
N-Science-2005
N. A. Spaldin and M. Fiebig,
The renaissance of magnetoelectric multiferroics,
https://www.science.org/doi/abs/10.1126/science.1113357Science 309 391-392 (2005).
N-J. Phys. Chem. B-2000
N. A. Hill,
Why are there so few magnetic ferroelectrics?,
https://doi.org/10.1021/jp000114xJ. Phys. Chem. B 104 6694-6709 (2000).
K.-Adv. Phys.-2009
K. F. Wang, J. M. Liu, and Z. F. Ren,
Multiferroicity: the coupling between magnetic and polarization orders,
https://doi.org/10.1080/00018730902920554Adv. Phys. 58 321-448 (2009).
J.-Nature-2010
J. H. Lee, L. Fang, E. Vlahos, X. Ke, Y. W. Jung, L. F. Kourkoutis, J. W. Kim, P. J. Ryan, T. Heeg, M. Roeckerath et al.,
A strong ferroelectric ferromagnet created by means of spin-lattice coupling,
https://doi.org/10.1038/nature09331Nature 466 954-958 (2010).
J-Nano-2009
J. Zhou, Q. Wang, Q. Sun, X. S. Chen, Y. Kawazoe, P. Jena,
Ferromagnetism in semihydrogenated graphene sheet,
https://doi.org/10.1021/nl9020733Nano Lett. 9 3867-3870 (2009).
Y-Appl.-2017
Y. Guo, S. Zhou, Y. Z Bai, J. J. Zhao,
Enhanced piezoelectric effect in Janus group-III chalcogenide monolayers,
https://doi.org/10.1063/1.4981877Appl. Phys. Lett. 110 163102 (2017).
W.-ACS Appl-2018
W. Z. Chen, X. H. Hou, X. Q. Shi, H. Pan,
Two-dimensional Janus transition metal oxides and chalcogenides: multifunctional properties for photocatalysts, electronics, and energy conversion,
https://doi.org/10.1021/acsami.8b13248ACS Appl. Mater. Interfaces 10 35289-35295 (2018).
Y. Yang-Superlattice-2019
Y. Yang, Y. Zhang, H. Ye, Z. Yu, Y. Liu, B. Su, and W. Xu,
Structural and electronic properties of 2H phase Janus transition metal dichalcogenide bilayers,
https://www.sciencedirect.com/science/article/pii/S0749603619305993Superlattices Microstruct 131 8-14 (2019).
X. -ACS-2021
X. Wan, E. Chen, J. Yao, M. Gao, X. Miao, S. Wang, Y. Gu, S. Xiao, R. Zhan, K. Chen, Z. Chen, X. Zeng, X. Gu, and J. Xu,
Synthesis and characterization of metallic Janus MoSH monolayer,
https://doi.org/10.1021/acsnano.1c08531ACS Nano 15 20319-20331 (2021).
Z.-Angew.-2010
Z. Zheng, C. T. Nottbohm, A. Turchanin, H. Muzik, A. Beyer, M. Heilemann, M. Sauer, and A. Glzhuser,
Janus nanomembranes: a generic platform for chemistry in two dimensions,
https://onlinelibrary.wiley.com/doi/abs/10.1002/anie.201004053Angew. Chem., Int. Ed. 49, 8493-8497 (2010).
A. -Nat.-2017
A. Y. Lu, H. Zhu, J. Xiao, C. P. Chuu, Y. Han, M. H. Chiu, C. C. Cheng, C. W. Yang, K. H. Wei, Y. Yang, et al,
Janus monolayers of transition metal dichalcogenides,
https://doi.org/10.1038/nnano.2017.100Nat. Nanotechnol. 12, 744-749 (2017).
T.-Phys.-2018
T. Hu, F. Jia, G. Zhao, J. Wu, A. Stroppa, and W. Ren,
Intrinsic and anisotropic Rashba spin splitting in Janus transition-metal dichalcogenide monolayers,
https://link.aps.org/doi/10.1103/PhysRevB.97.235404Phys. Rev. B 97 235404 (2018).
S.-Phys. -2020
S. Li, W. Wu, X. Feng, S. Guan, W. Feng, Y. Yao, and S. A. Yang,
Valley-dependent properties of monolayer MoSi_2N_4, WSi_2N_4, and MoSi_2As_4,
https://link.aps.org/doi/10.1103/PhysRevB.102.235435Phys. Rev. B 102 235435 (2020).
J.-Phys. -2024
J. Zhao, Y. Qi, C. Yao, and H. Zeng,
Tunable valley-spin splitting in a Janus XMSiN_2 monolayer (X = S, Se; M = Mo, Cr) and giant valley polarization via vanadium doping,
https://link.aps.org/doi/10.1103/PhysRevB.109.035408Phys. Rev. B 109 035408 (2024).
D.-Nano-2018
D. Er, H. Ye, N. C. Frey, H. Kumar, J. Lou, and V. B. Shenoy,
Prediction of enhanced catalytic activity for hydrogen evolution reaction in Janus transition metal dichalcogenides,
https://doi.org/10.1021/acs.nanolett.8b01335Nano Lett. 18 3943-3949 (2018).
W.-J.-2018
W. J. Yin, B. Wen, G. Z. Nie, X. L. Wei, and L. M. Liu,
Tunable dipole and carrier mobility for a few layer Janus MoSSe structure,
http://dx.doi.org/10.1039/C7TC05225AJ. Mater. Chem. C 6, 1693-1700 (2018).
L.-ACS-2017
L. Dong, J. Lou, V.B. Shenoy,
Large in-plane and vertical piezoelectricity in Janus transition metal dichalchogenides,
https://doi.org/10.1021/acsnano.7b03313ACS Nano 11, 8242-8248 (2017).
J.-J.-2021
J. Qiu, H. Li, X. Chen, B. Zhu, H. Guo, F. Zhang, Z. Ding, L. Lang, J. Yu, and J. Bao,
Piezoelectricity of Janus Sb_2Se_2Te monolayers: A first-principles study,
https://doi.org/10.1063/5.0039605J. Appl. Phys. 129, 125109 (2021).
Q.-F-Phys.-2017
Q. F. Yao, J. Cai, W. Y. Tong, S. J. Gong, J. Q. Wang, X. Wan, C. G. Duan, and J. H. Chu,
Manipulation of the large Rashba spin splitting in polar two-dimensional transition-metal dichalcogenides,
https://link.aps.org/doi/10.1103/PhysRevB.95.165401Phys. Rev. B 95 165401 (2017).
A.-Nat.-2017
A. Y. Lu, H. Zhu, J. Xiao, C. P. Chuu, Y. Han, M. H. Chiu, C. C. Cheng, C. W. Yang, K. H. Wei, Y. Yang, Y. Wang, D. Sokaras, D. Nordlund, P. Yang, D. A. Muller, M. Y. Chou, X. Zhang, and L. J. Li,
Janus monolayers of transition metal dichalcogenides,
https://doi.org/10.1038/nnano.2017.100Nat. Nanotechnol. 12, 744-749 (2017).
X.-J. Mater-2018
X. Ma, X. Wu, H. Wang, and Y. Wang,
A Janus MoSSe monolayer: a potential wide solar-spectrum water-splitting photocatalyst with a low carrier recombination rate,
http://dx.doi.org/10.1039/C7TA10015AJ. Mater. Chem. A 6 2295-2301 (2018).
CZhang-Nano-2019
C. Zhang, Y. Nie, S. Sanvito, and A. Du,
First-principles prediction of a room-temperature ferromagnetic Janus VSSe monolayer with piezoelectricity, ferroelasticity, and large valley polarization,
https://doi.org/10.1021/acs.nanolett.8b05050Nano Lett. 19 1366-1370 (2019).
W.-Nat.-2016
W. Wu and Z. L. Wang,
Piezotronics and piezo-phototronics for adaptive electronics and optoelectronics,
https://doi.org/10.1038/natrevmats.2016.31Nat. Rev. Mater. 1, 16031 (2016).
C.-J. Am-2020
C. Shi, J. J. Ma, J. Y. Jiang, M. M. Hua, Q. Xu, H. Yu, Y. Zhang, and H. Y. Ye,
Large piezoelectric response in hybrid rare-earth double perovskite relaxor ferroelectrics,
https://doi.org/10.1021/jacs.0c00480J. Am. Chem. Soc. 142, 9634-9641 (2020).
P.-Mater. -2018
P. Lin, C. Pan, and Z. L. Wang,
Two-dimensional nanomaterials for novel piezotronics and piezophototronics,
https://www.sciencedirect.com/science/article/pii/S2588842018301494Mater. Today Nano 4 17-31 (2018).
L.-Chem.-2022
L. Wang, Z. Lin, Y. Du, J. Qiu, X. Chen, and J. Yu,
The piezoelectricity of 2D Janus ZnBrI: Multiscale prediction,
https://www.sciencedirect.com/science/article/pii/S0009261422001737Chem. Phys. Lett. 794, 139506 (2022).
Z. -Appl.-2024
Z. Wang, X. Yan, Y. Liu, and G. Yang,
Piezoelectric response and ferromagnetic order in 2D Janus FeGeN_3,
https://doi.org/10.1063/5.0196548Appl. Phys. Lett. 124, 122409 (2024).
J.-Mater.-2021
J. Bao, J. Qiu, and X. Liu,
Large in-plane piezoelectricity of Janus Bi_2X_2Y (X = S, Se, Te; Y = S, Se, Te; X ≠ Y) monolayers with polyatomic thickness,
https://www.sciencedirect.com/science/article/pii/S0167577X21005747Mater. Lett. 296, 129878 (2021).
S.-Nanoscale-2018
S. H. Zhang and B. G. Liu,
A controllable robust multiferroic GaTeCl monolayer with colossal 2D ferroelectricity and desirable multifunctionality,
http://dx.doi.org/10.1039/C7NR09588KNanoscale 10, 5990-5996 (2018).
R.-Phys.-2016
R. Fei, W. Kang, and L. Yang,
Ferroelectricity and phase transitions in monolayer group-IV monochalcogenides,
https://link.aps.org/doi/10.1103/PhysRevLett.117.097601Phys. Rev. Lett. 117, 097601 (2016).
P.-Phys. Rev.-1964
P. Hohenberg and W. Kohn,
Inhomogeneous electron gas,
https://link.aps.org/doi/10.1103/PhysRev.136.B864Phys. Rev. 136, B864 (1964).
W-Phys. Rev.-1965
W. Kohn and L. J. Sham,
Self-consistent equations including exchange and correlation effects,
https://link.aps.org/doi/10.1103/PhysRev.140.A1133Phys. Rev. 140, A1133 (1965).
G-J. Non-Cryst. Solids-1995
G. Kresse,
Ab initio molecular dynamics for liquid metals,
https://www.sciencedirect.com/science/article/pii/002230939500355XJ. Non-Cryst. Solids 192-193, 222-229 (1995).
G. Kresse-Computational Materials Science-1996
G. Kresse and J. Furthmüller,
Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set,
https://www.sciencedirect.com/science/article/pii/0927025696000080Comput. Mater. Sci. 6, 15-50 (1996).
G-Phys. Rev. B-1999
G. Kresse and D. Joubert,
From ultrasoft pseudopotentials to the projector augmented-wave method,
https://link.aps.org/doi/10.1103/PhysRevB.59.1758Phys. Rev. B 59, 1758 (1999).
J. P-Phys. Rev. Lett.-1996
J. P. Perdew, K. Burke, and M. Ernzerhof,
Generalized gradient approximation made simple,
https://link.aps.org/doi/10.1103/PhysRevLett.77.3865Phys. Rev. Lett. 77, 3865 (1996).
X.-Nanoscale-2022
X. Zhou, Z. Wang, H. Zhu, Z. Liu, Y. Hou, D. Guo, and D. Zhong,
Epitaxial growth and electronic properties of an antiferromagnetic semiconducting VI_2 monolayer,
http://dx.doi.org/10.1039/D2NR02367ANanoscale 14 10559-10565 (2022).
H-Phys. Rev. B-1976
H. J. Monkhorst and J. D. Pack,
Special points for Brillouin-zone integrations,
https://link.aps.org/doi/10.1103/PhysRevB.13.5188Phys. Rev. B 13, 5188-5192 (1976).
v-J. Chem. Phys.-1992
G. J. Martyna, M. L. Klein, and M. Tuckerman,
Nosé-Hoover chains: The canonical ensemble via continuous dynamics,
https://doi.org/10.1063/1.463940J. Chem. Phys. 97, 2635-2643 (1992).
S.-Rev. Mod. Phys.-2001
S. Baroni, S. de Gironcoli, A. Dal Corso, and P. Giannozzi,
Phonons and related crystal properties from density-functional perturbation theory,
https://link.aps.org/doi/10.1103/RevModPhys.73.515Rev. Mod. Phys. 73, 515 (2001).
A-Phys. Rev. B-2008
A. Togo, F. Oba, and I. Tanaka,
First-principles calculations of the ferroelastic transition between rutile-type and CaCl_2-type SiO_2 at high pressures,
https://link.aps.org/doi/10.1103/PhysRevB.78.134106Phys. Rev. B 78, 134106 (2008).
R-Phys. Rev. B-1993
R. D. King-Smith and D. Vanderbilt,
Theory of polarization of crystalline solids,
https://link.aps.org/doi/10.1103/PhysRevB.47.1651Phys. Rev. B 47, 1651-1654 (1993).
D.-Appl.-2023
D. Li, P. Liu, R. He, Y. Bai, C. Liu, B. Wang, and G. Jia,
Intrinsic multiferroicity and magnetoelectric coupling in VSI_2 monolayer,
https://doi.org/10.1063/5.0155960Appl. Phys. Lett. 123, 052902 (2023).
J.-Phys.-2014
J. Guan, Z. Zhu, and D. Tománek,
Phase coexistence and metal-insulator transition in few-layer phosphorene: A computational study,
https://link.aps.org/doi/10.1103/PhysRevLett.113.046804Phys. Rev. Lett. 113, 046804 (2014).
M. -Oxford-1996
M. Born and K. Huang,
Dynamical theory of crystal lattices,
https://doi.org/10.1093/oso/9780192670083.001.0001Oxford University Press (1996).
E-Phys. Rev. B-2012
E. Cadelano and L. Colombo,
Effect of hydrogen coverage on the Young's modulus of graphene,
https://link.aps.org/doi/10.1103/PhysRevB.85.245434Phys. Rev. B 85, 245434 (2012).
E-Phys. Rev. B-2010
E. Cadelano, P. L. Palla, S. Giordano and L. Colombo,
Elastic properties of hydrogenated graphene,
https://link.aps.org/doi/10.1103/PhysRevB.82.235414Phys. Rev. B 82, 235414 (2010).
Q.-Appl.-2022
Q. Ma, W. Wan, Y. Li, and Y. Liu,
First principles study of 2D half-metallic ferromagnetism in Janus Mn_2XSb (X = As, P) monolayers,
https://doi.org/10.1063/5.0076332Appl. Phys. Lett. 120, 112402 (2022).
S.-Appl.-2021
S. D. Guo, X. S. Guo, X. X. Cai, W. Q. Mu, and W. C. Ren,
Intrinsic piezoelectric ferromagnetism with large out-of-plane piezoelectric response in Janus monolayer CrBr_1.5I_1.5,
https://doi.org/10.1063/5.0055014Appl. Phys. Lett. 129, 214301 (2021).
X.-Phys.-2021
X. Cheng, S. Xu, F. Jia, G. Zhao, M. Hu, W. Wu, and W. Ren,
Intrinsic ferromagnetism with high curie temperature and strong anisotropy in a ferroelastic VX monolayer (X = P,As),
https://link.aps.org/doi/10.1103/PhysRevB.104.104417Phys. Rev. B 104, 104417 (2021).
A.-J. Phys-2023
A. Mahajan and S. Bhowmick,
Magnetoelectric multiferroic Janus monolayers VOXY (X/Y = F, Cl, Br, or I, and X ≠ Y) with in-plane ferroelectricity and out-of-plane piezoelectricity,
https://doi.org/10.1021/acs.jpcc.3c03100J. Phys. Chem. C 127, 11407-11418 (2023).
Y-J.-2024
Y. Li, H. Bai, Z. Yu, C. T. Kwok, and H. Pan,
Multiferroicity in 2D MSX_2 (M = Nb and Zr; X = Cl, Br, and I),
http://dx.doi.org/10.1039/D4TC00463AJ. Mater. Chem. C 12, 6131-6139 (2024).
H.-Phys.-2020
H. P. You, N. Ding, J. Chen, and S. Dong,
Prediction of two-dimensional ferromagnetic ferroelectric VOF_2 monolayer,
http://dx.doi.org/10.1039/D0CP04208KPhys. Chem. Chem. Phys. 22, 24109-24115 (2020).
N.-Phys. Rev. B-2020
N. Ding, J. Chen, S. Dong, and A. Stroppa,
Ferroelectricity and ferromagnetism in a VOI_2 monolayer: Role of the Dzyaloshinskii-Moriya interaction,
https://link.aps.org/doi/10.1103/PhysRevB.102.165129Phys. Rev. B 102 165129 (2020).
H. -Appl.-2023
H. Sun, Z. Qu, A. Li, Y. Wan, F. Wu, C. Huang, and E. Kan,
Prediction of tunable room-temperature ferromagnetism, ferroelectricity, and ferroelasticity in a CrNCl monolayer,
https://doi.org/10.1063/5.0157258Appl. Phys. Lett. 123, 042901 (2023).
S-Rev.-2021
S. Barraza-Lopez, B. M. Fregoso, J. W. Villanova, S. S. P. Parkin, and K. Chang,
Colloquium: Physical properties of group-IV monochalcogenide monolayers,
https://link.aps.org/doi/10.1103/RevModPhys.93.011001Rev. Mod. Phys. 93, 011001 (2021).
B.-Nature-2017
B. Huang, G. Clark, E. Navarro-Moratalla, D.R. Klein, R. Cheng, K.L. Seyler, D. Zhong, E. Schmidgall, M.A. McGuire, D.H. Cobden, W. Yao, D. Xiao, P. Jarillo-Herrero, X. Xu,
Layer-dependent ferromagnetism in a van der Waals crystal down to the monolayer limit,
https://doi.org/10.1038/nature22391Nature 546 270-273 (2017).
S.-Phys. -2023
S. D. Guo, X. K. Feng, Y. T. Zhu, G. Wang, and S. A. Yang,
Two-dimensional Janus Si dichalcogenides: a first-principles study,
http://dx.doi.org/10.1039/D2CP04536BPhys. Chem. Chem. Phys. 25, 2274-2281 (2023).
|
http://arxiv.org/abs/2409.02473v1 | 20240904064555 | A Variable Power Surface Error Function backstepping based Dynamic Surface Control of Non-Lower Triangular Nonlinear Systems | [
"Abdulrazaq Nafiu Abubakar",
"Ali Nasir",
"Md Muzakkir Quamar"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
a]Abdulrazaq Nafiu Abubakar
a,b,c]Ali Nasir*
a]Md Muzakkir Quamar
[a]Control and Instrumentation Engineering Department, KFUPM, Dhahran 31261, SAUDI Arabia
[b]Interdisciplinary Research Center for Intelligent Manufacturing and Robotics, KFUPM, Dhahran 31261, SAUDI Arabia
[c]Interdisciplinary Research Center for Aviation and Space Exploration (guest affiliate), KFUPM, Dhahran 31261, SAUDI Arabia
*[email protected]
§ ABSTRACT
A control design for error reduction in the tracking control of a class of non-lower triangular nonlinear systems is presented by combining techniques of the Variable Power Surface Error Function (VPSEF), backstepping, and dynamic surface control. At each step of the design, a surface error is obtained, and based on its magnitude, the VPSEF technique decides which surface error is to be used. The backstepping-based virtual and actual control laws are then designed to stabilize the corresponding subsystems. To address the issue of circular structure, a first-order low-pass filter is used to process the virtual control signal at each intermediate stage of the recursive design. The stability analysis of the closed-loop system demonstrates that all signals are semi-globally uniformly ultimately bounded. Moreover, by suitably using the switching strategy of the control input based on the VPSEF technique, it is possible to ensure that the steady-state tracking error converges to a neighborhood of zero of arbitrarily small size. The effectiveness of the proposed concept has been verified using two different simulated demonstrations.
Error correction, Surface error, Backstepping, Dynamic surface control (DSC), Non-lower triangular nonlinear systems, Variable Power Surface Error Function (VPSEF)
§ INTRODUCTION
The backstepping method is a frequently used strategy for designing controllers and analyzing stability in nonlinear systems that have a strictly triangular structure <cit.>, <cit.>, <cit.>. Nevertheless, the system architecture of triangular nonlinear systems may be affected by many uncertainties, such as modeling inaccuracies, unidentified parameters, time delays, disturbances, unknown faults, input and state delays, and so on <cit.>,<cit.>,<cit.>,<cit.>,<cit.>,<cit.>. Several adaptive control design approaches were developed specifically for lower triangular nonlinear systems with linearly parameterized uncertainty using backstepping techniques <cit.>,<cit.>,<cit.>, while online-approximation-based adaptive backstepping control approaches were presented for specific classes of lower triangular systems with nonlinearly parameterized uncertainty, for example, neural-network-based adaptive control <cit.> and adaptive fuzzy control <cit.>. Traditional backstepping-based control design approaches suffer from a complex controller design. In order to simplify this design, several approaches, such as dynamic surface control (DSC) <cit.>, <cit.>, <cit.>, the minimal learning parameter method <cit.>, <cit.>, <cit.>, and online single neural network approximation <cit.>, <cit.>, were integrated into the framework of backstepping-based control design.
Although there has been much research on the control of lower triangular nonlinear systems using backstepping design, there are comparatively few research findings on the control of nonlinear systems with non-lower triangular structures. A significant challenge in designing controls for non-lower triangular nonlinear systems is the circular structure of the controller, which arises from the structural characteristics of this class of nonlinear systems and from the differentiation of virtual control laws in the backstepping design stage. Indeed, in certain lower triangular nonlinear systems with a pure-feedback structure, the issue of circular structure also arises in backstepping-based control design. The research most relevant to the backstepping approach as a solver of the circularity problem includes <cit.>, which introduces a tracking control design technique suited for nonlinear systems with good synthesis and filtering to deal with the drawbacks of slow rates and circular structure. The authors in <cit.> enhance the trajectory-control capacity of spherical rolling robots by applying the backstepping approach to a ball-pendulum system, with the performance of the controller shown via simulations. The study in <cit.> examines output feedback approaches for globally stable integrators and systems that include uncertainties, and also considers the use of delayed static observers and the addition of backstepping techniques to expand the range of control applications. A control design strategy based on adaptive backstepping was presented in <cit.> for a class of uncertain affine pure-feedback nonlinear systems, in which a DSC approach was used to resolve the issue of circular structure in the design. Nevertheless, although dynamic surface control alleviates the circular structure, tracking errors tend to destabilize the system at the initial stage with high overshoot, owing to the initial magnitude of the error, thus reducing the performance of the controller. These errors tend to decrease as the controller becomes more adaptive. In this situation, the control performance is good, but the error changes with time. Therefore, there is a need to control the magnitude of the error.
In this work, the problem of tracking-error reduction for a class of non-lower triangular nonlinear systems is considered. A Variable Power Surface Error Function (VPSEF) backstepping based DSC design method is developed for an nth-order non-lower triangular nonlinear system. At each intermediate step i of the recursive design, a switching full-state-feedback virtual control law is first designed to stabilize the corresponding subsystem, i=1,2,3,…,n-1. This control law switches based on the magnitude of the error, i.e., whether the error is large or small at time t. Consequently, the surface error used in the control law differs according to the magnitude of the error. By using a first-order low-pass filter to process the switching virtual control signal and allowing the filtered output to be used in the next design stage, the issue of circular structure can be efficiently resolved. Finally, in the last step n, a full-state-feedback actual switching control is designed to stabilize the final subsystem. To demonstrate the reliability of the suggested method, a stability analysis of the studied system is presented in a subsection below. The closed-loop system signals are uniformly ultimately bounded, indicating that the system tracking error can converge to a small neighborhood of zero by carefully reducing the error through the switched strategy and by selecting the parameters.
Motivated by the literature and the discussion above, this article studies error reduction in trajectory tracking for a special class of nonlinear systems. First, a switching virtual controller is designed for the ith subsystem, with the surface error of the backstepping technique switched based on the magnitude of the error at each time instant t. We then apply a first-order low-pass filter to process the virtual switching control signal, and finally the full-state-feedback actual switching control is designed to stabilize the last subsystem. To the best of our knowledge, this is the first time a switching control based on VPSEF backstepping based DSC is proposed for the tracking control of a non-lower triangular nonlinear system. The main contributions of this paper are enumerated as follows:
* Contrary to the backstepping methods in <cit.>, <cit.>, <cit.>, <cit.>, we use a switching control strategy based on the VPSEF technique to eliminate the effect of large and small surface errors on the tracking performance.
* Compared with the control methods in <cit.>, <cit.> and <cit.>, the switched VPSEF backstepping based DSC shows superior trajectory tracking with respect to the tracking error and changes in the reference signal.
* The proposed control strategy guarantees stability and fast, finite-time convergence to the desired signals.
The paper is outlined as follows. The nonlinear dynamic model of the non-lower triangular nonlinear system is given in Section <ref>. The proposed switching VPSEF backstepping based DSC controller is designed in Section <ref>. Stability is proved in Section <ref>. Simulation results are presented in Section <ref>. Some concluding remarks are provided in Section <ref>.
§ MATHEMATICAL MODELLING
The following class of non-lower triangular nonlinear systems has been extensively studied in the literature <cit.>, <cit.>, <cit.>. The state-space dynamic equations describing this class of nonlinear systems are
ẋ_i = f_i^*(x_1, x_2, …, x_n), i=1,2, …, n-1
ẋ_n = g_n(x_1, x_2, …, x_n) u + f_n(x_1, x_2, …, x_n)
y = x_1    (1)
In the above equation, x_i ∈ R are the state variables of the system, i=1,2, …, n; u ∈ R and y ∈ R are the control input signal and the output of the system, respectively; f_i^*(·) and f_n(·) are known smooth nonlinear functions, i=1,2, …, n; g_n(·) is a continuous function satisfying g_n(·) ≠ 0.
The control objective for (1) is to design a controller for the system such that the output y tracks a reference signal y_r and all signals in the closed-loop system are uniformly ultimately bounded.
System (1) has a non-lower triangular structure, which means that the usual backstepping-based design approaches are not applicable. This study aims to design a switching control approach for this class of systems using an error-reduction DSC-based methodology.
The signs of the terms ∂ f_i^*(·) / ∂ x_i+1 are known, i=1,2, …, n-1. For simplicity, it is assumed that ∂ f_i^*(·) / ∂ x_i+1≥ 0.
The reference signal y_r is sufficiently smooth, and y_r, ẏ_r, and ÿ_r are bounded.
§ SWITCHING BACKSTEPPING BASED DSC CONTROL DESIGN
The objective of our control is to provide a switching control scheme that minimizes errors using backstepping based DSC techniques, ensuring semi-globally uniformly ultimately bounded behavior of the closed-loop system. As in conventional backstepping-based DSC approaches, the recursive design process consists of n steps, here with two different controllers depending on the magnitude of the error. At each intermediate step i, a virtual switching control law β_i+1 is implemented to stabilize the i-th subsystem, where i ranges from 1 to n-1. In order to avoid cyclic dependence of control laws in subsequent steps, a first-order low-pass filter is used to process the virtual control signal β_i+1, and the resulting filter output is fed into the subsequent subsystem. In the final step, an actual control law is formulated to stabilize the n-th subsystem. The aforementioned process results in a controller consisting of n-1 virtual control laws, n-1 first-order filters, and an actual control law. In order to implement the design of the switching control technique, the following definition is presented.
Variable Power Surface Error Function (VPSEF)
Within the framework of backstepping-based dynamic surface control, we present the notion of a Variable Power Surface Error Function (VPSEF), denoted ψ_i, designed to adjust the influence of the surface error on the control effectiveness depending on its magnitude. The VPSEF is represented by the following equations:
ψ_1 = (x_1 - y_r)^p/q
ψ_i = (x_i - a_i)^p/q, i = 2, …, n    (2)
where x_1 represents the current state variable, y_r denotes the reference signal, and a_i denotes the filtered virtual control introduced later; the parameters p and q determine the behavior of the function, with p>q for larger errors and p<q for smaller errors.
This formulation enables the control response to be adjusted adaptively based on the magnitude of the error, allowing precise control actions customized to different levels of deviation from the reference trajectory. The choice of the p and q parameters allows the VPSEF to be tuned to the specific dynamics and performance needs of a system.
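A minimal Python sketch of the VPSEF is given below. The exponent pairs, the default threshold, and the signed-power convention sign(e)|e|^p/q (adopted here so that fractional powers of negative errors remain real-valued and sign-preserving) are implementation assumptions rather than prescriptions of the design.

```python
import numpy as np

def vpsef(error, threshold=0.1, p_hi=5, q_hi=3, p_lo=3, q_lo=5):
    """Variable Power Surface Error Function, Eqs. (2) and (4).

    The large-error branch uses p > q and the small-error branch uses
    p < q, switched on the magnitude of the error. The signed power
    sign(e) * |e|**(p/q) keeps the map real-valued and sign-preserving
    for negative errors (an implementation assumption).
    """
    if abs(error) > threshold:
        p, q = p_hi, q_hi   # large error: p > q
    else:
        p, q = p_lo, q_lo   # small error: p < q
    return np.sign(error) * abs(error) ** (p / q)
```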
To enhance the clarity and conciseness of the design process, system (1) is first reformulated in the following form:
ẋ_i = x_i+1 + f_i(x_1, x_2, …, x_n), i=1,2, …, n-1
ẋ_n = g_n(x_1, x_2, …, x_n) u + f_n(x_1, x_2, …, x_n)
y = x_1    (3)
where f_i(x_1, x_2, …, x_n)=f_i^*(x_1, x_2, …, x_n)-x_i+1 are known smooth nonlinear functions, i=1,2, …, n-1.
The detailed methodology for controller design is provided in the following.
Step 1: Consider the surface tracking error ψ_1=x_1-y_r (i.e., the initial surface error). The VPSEF is applied at each iteration step by checking the switching condition below:
ψ_i = (x_i - y_r)^p/q with p > q, for |ψ_i| > Threshold,
ψ_i = (x_i - y_r)^p/q with p < q, for |ψ_i| ≤ Threshold,    (4)
where Threshold > 0.
The derivative of ψ_1 can then be obtained as
ψ̇_1 = ẋ_1 - ẏ_r
    = x_2 + f_1(x_1, x_2, …, x_n) - ẏ_r    (5)
If we choose x_2 as the control signal for subsystem (5), we can design a virtual control that will stabilize the subsystem in the following manner.
β_2 = -k_1ψ_1 - f_1(x_1, x_2, …, x_n) + ẏ_r    (6)
where k_1>0 is a control parameter and ψ_1 is the switching surface error obtained as in equation (4). We therefore obtain different values of β_2 depending on the magnitude of the error, and thus a dual virtual control.
To address the issue of circular structure, a DSC approach is used to manipulate the virtual control signal. This involves the implementation of a first-order low-pass filter, as shown below.
σ_2ȧ_2 + a_2 = β_2    (7)
The variables β_2 and a_2 represent the input and output of the filter, respectively. Additionally, σ_2 is a positive filtering parameter.
The primary focus of the control is to ensure that the tracking error ψ_1 converges to a small neighborhood of zero. Hence, if x_2 could be assigned the value -k_1ψ_1-f_1(x_1, x_2, …, x_n)+ẏ_r, subsystem (5) would reduce to ψ̇_1=-k_1ψ_1, whose tracking error ψ_1 approaches zero asymptotically. Nevertheless, x_2 is a system state variable and is not subject to design. Thus, a switching virtual control law β_2 is implemented, with the aim of ensuring the convergence of x_2-β_2 to a neighborhood of zero.
Since the switching virtual control law β_2 incorporates x_n, the derivative of β_2 will contain the actual control input u. If β_2 were sent directly to the x_2-subsystem, the subsequent design of the switching virtual control law β_3 would depend upon the actual control u in the next stage of the design. Consequently, the issue of cyclical dependence of control laws would emerge, making it challenging to implement the controller in practice. Hence, the DSC methodology is employed to resolve the issue. By passing the output a_2 of the filter to the following step (step 2), the switching virtual control law β_3 will not depend directly on u.
Furthermore, the states x_1, x_2, …, x_n are included in β_2, which implies that the derivative of β_2 will include the nonlinear functions f_i(x_1, x_2, …, x_n), i=1,2, …, n. If β_2 were incorporated directly into the next subsystem, it would result in a complicated design of the switching virtual control law. The DSC approach is used to send the filter output a_2 to the next subsystem, and the derivative of a_2 can be obtained by the algebraic operation ȧ_2=(β_2-a_2) / σ_2. Consequently, the complexity of the design is reduced.
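Numerically, the filter (7) can be advanced with a forward-Euler step, as in the sketch below; the step size dt is an assumption of the discretization and should be chosen small relative to σ.

```python
def filter_step(a, beta, sigma, dt):
    """One forward-Euler step of the first-order filter sigma*da/dt + a = beta.

    Returns the updated output a and the derivative a_dot = (beta - a)/sigma,
    the latter being exactly the quantity fed to the next design step.
    """
    a_dot = (beta - a) / sigma
    return a + dt * a_dot, a_dot
```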
Step 2: Consider the second error surface ψ_2=x_2-a_2, with the VPSEF applied as in equation (4). The derivative of ψ_2 can be calculated as:
ψ̇_2 = ẋ_2 - ȧ_2
    = x_3 + f_2(x_1, x_2, …, x_n) - ȧ_2    (8)
If we choose x_3 as the control input for subsystem (8), we can create a switching virtual control law to stabilize the subsystem in the following manner:
β_3 = -k_2ψ_2 - f_2(x_1, x_2, …, x_n) + ȧ_2    (9)
where k_2>0 is a control parameter and ψ_2 is the switching surface error obtained as in equation (4). We therefore obtain different values of β_3 depending on the magnitude of the error, and thus a dual virtual control. A first-order low-pass filter is introduced to process β_3 as follows:
σ_3ȧ_3 + a_3 = β_3    (10)
The filtering parameter σ_3 must be greater than zero. As in step 1, the output a_3 of the filter will be used in the subsequent stage of the design.
Step i (i=3, …, n-1): Consider the error surface ψ_i=x_i-a_i. The derivative of ψ_i can be calculated as
ψ̇_i = ẋ_i - ȧ_i
    = x_i+1 + f_i(x_1, x_2, …, x_n) - ȧ_i    (11)
If we use x_i+1 as the control input for subsystem (11), we can build a switching virtual control law to stabilize the subsystem in the following manner:
β_i+1 = -k_iψ_i - f_i(x_1, x_2, …, x_n) + ȧ_i    (12)
where k_i>0 is a control parameter and ψ_i is the switching surface error obtained as in equation (4). We therefore obtain different values of β_i+1 depending on the magnitude of the error, and thus a dual virtual control. A first-order low-pass filter is implemented to process β_i+1 as follows:
σ_i+1ȧ_i+1 + a_i+1 = β_i+1    (13)
The variable σ_i+1 represents a positive filtering parameter. As in step 1, the output a_i+1 of the filter will be used in the subsequent stage of the design.
Step n: The final error surface is represented by the equation ψ_n=x_n-a_n. The derivative of ψ_n may be calculated as
ψ̇_n = ẋ_n - ȧ_n
    = g_n(x_1, x_2, …, x_n) u + f_n(x_1, x_2, …, x_n) - ȧ_n    (14)
In order to provide stability for subsystem (14) above, an actual switching control is formulated below:
u = 1/g_n(x_1, x_2, …, x_n)(-k_nψ_n - f_n(x_1, x_2, …, x_n) + ȧ_n)    (15)
where k_n is a positive control parameter.
The aforementioned procedure yields the overall designed controller, summarized as Algorithm 1.
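Since the Algorithm 1 float is not reproduced here, the Python sketch below summarizes the recursive design for a generic n-th order system, reusing the vpsef and filter_step sketches given above. The interface (lists of functions f_i, gains k_i, and filter constants σ_i supplied by the user) is an assumption made for illustration.

```python
def vpsef_dsc_step(x, a, yr, yr_dot, f, g_n, k, sigma, dt):
    """One sampling instant of the VPSEF backstepping based DSC law.

    x     : current state (x_1, ..., x_n)
    a     : filter outputs (a_2, ..., a_n) from the previous instant
    f     : list of the functions f_1, ..., f_n of the full state, Eq. (3)
    g_n   : input-gain function of the full state
    k     : positive gains k_1, ..., k_n; sigma: filter constants for a_2..a_n
    Returns the actual control u and the updated filter outputs.
    """
    n = len(x)
    a_new = list(a)
    # Step 1: first surface error and virtual law, Eqs. (4) and (6).
    psi = vpsef(x[0] - yr)
    beta = -k[0] * psi - f[0](x) + yr_dot
    for i in range(1, n):
        # Filter the previous virtual law, Eqs. (7)/(10)/(13).
        a_new[i - 1], a_dot = filter_step(a[i - 1], beta, sigma[i - 1], dt)
        psi = vpsef(x[i] - a_new[i - 1])
        if i < n - 1:
            beta = -k[i] * psi - f[i](x) + a_dot            # virtual law, Eq. (12)
        else:
            u = (-k[i] * psi - f[i](x) + a_dot) / g_n(x)    # actual law, Eq. (15)
    return u, a_new
```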
§ STABILITY ANALYSIS
Given the nonlinear system (1) with the controller designed as in equation (15), consider the error dynamics corresponding to the final state of the system, with the final surface error
ψ_n = (x_n - a_n)^p/q    (17)
where a_n is the desired trajectory for x_n.
Let the error be e = x_n - a_n. With the control (15) applied, the error dynamics are given as
ẋ_n - ȧ_n = -k_n(x_n - a_n),
which simplifies to
ė = -k_n e
(Strictly, substituting ψ_n = e^p/q into ψ̇_n = -k_nψ_n yields ė = -(q/p)k_n e; since q/p > 0, this only rescales the decay rate, so k_n is kept below as the effective gain.)
This is a linear differential equation with solution
e(t)=e(0)e^-k_nt
As t → ∞, e(t) → 0 if k_n > 0, which shows that x_n → a_n. Since x_n converges to a_n, it is important to understand the impact of this on the whole system, particularly on x_n-1.
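The exponential decay argument can also be verified symbolically; the following optional check (assuming SymPy is available) reproduces e(t) = e(0)e^-k_n t and its limit:

```python
import sympy as sp

t, k = sp.symbols("t k", positive=True)
e0 = sp.Symbol("e0")
e = sp.Function("e")

# Solve the error dynamics de/dt = -k*e with e(0) = e0.
sol = sp.dsolve(sp.Eq(e(t).diff(t), -k * e(t)), e(t), ics={e(0): e0})
print(sol)                            # Eq(e(t), e0*exp(-k*t))
print(sp.limit(sol.rhs, t, sp.oo))    # 0, so x_n -> a_n whenever k > 0
```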
Consider the dynamics for the equation of x_n-1 as,
ẋ_n-1 =f^*_n-1(x_1, x_2, …, x_n)
As x_n→ a_n we will substitute x_n=a_n into f^*_n-1
ẋ_n-1 =f^*_n-1(x_1, x_2, …, a_n)
f^*_n-1 is designed such that if x_n = a_n, then x_n-1 → a_n-1, and this propagates throughout the system. Therefore, it can be shown recursively that each state x_i converges towards its desired trajectory a_i for i = n-1, n-2, …, 1.
Overall, the switching VPSEF backstepping-based DSC approach developed in the previous section ensures the stability of each subsystem step by step, as the controller guarantees that x_n → a_n. Furthermore, at stage n-1, given that x_n → a_n, the designed a_n-1 ensures that x_n-1 → a_n-1. Finally, each x_i is assured to converge to a_i, thus stabilizing the entire system.
§ SIMULATION RESULTS
In this section, we validate the efficacy of the proposed VPSEF backstepping-based DSC control method. Consider the third-order non-lower-triangular nonlinear system shown below.
ẋ_1 =x_1^2+x_2^3+x_3
ẋ_2 =x_1^2 x_2+x_3^5
ẋ_3 =u+x_1 x_2 x_3^2
y =x_1
The control goal for the system above is to make the output y(t) track y_r = sin(t).
Following the control design approach outlined in Algorithm 1, a VPSEF backstepping-based DSC controller is designed with a switching threshold of 0.1:
β_2 = -k_1 ψ_1 - (x_1^2 + x_2^3 + x_3 - x_2) + cos t    (16)
β_3 =-k_2(ψ_2)-(x_1^2 x_2+x_3^5-x_3)+ȧ_2
u =-k_3(ψ_3)-x_1 x_2 x_3^2+ȧ_3
where a_2 and a_3 represent the outputs of the first-order filters σ_2 ȧ_2 + a_2 = β_2 and σ_3 ȧ_3 + a_3 = β_3, respectively. The selected control parameters for the simulation are k_1 = 3, k_2 = 3, and k_3 = 3. The initial states of the system are x_1(0) = 0, x_2(0) = -1, x_3(0) = 1, and the initial states of the filters are a_2(0) = 0, a_3(0) = 0.
In the process of implementing β_3, ȧ_2 is obtained by ȧ_2=(β_2-a_2) / σ_2. In order to prevent an excessively large signal of β_3 during the early phase of operation of the control system, the filtering parameter σ_2 is selected as σ_2=exp (-t)+0.05. By using this approach, the output of the filter may closely track the input of the filter over time, ensuring that the value of ȧ_2 remains within an acceptable range. We use a similar strategy for the implementation of u, and the selection of the filtering parameter is determined as σ_3=exp (-t)+0.05.
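Putting the pieces together, the following sketch simulates the closed-loop example with the stated gains, initial states, and time-varying filter parameter. It reuses the hypothetical vpsef stand-in and the filter_step helper sketched earlier; the explicit Euler integrator and step size are our own choices, and with the x_3^5 term a small step (or a stiffness-aware integrator) is advisable.

```python
import numpy as np

def simulate(T=10.0, dt=1e-4, k=(3.0, 3.0, 3.0), thr=0.1):
    x1, x2, x3 = 0.0, -1.0, 1.0      # x_1(0), x_2(0), x_3(0)
    a2, a3 = 0.0, 0.0                # filter states a_2(0), a_3(0)
    ts, ys = [], []
    for n in range(int(T / dt)):
        t = n * dt
        sigma = np.exp(-t) + 0.05    # sigma_2 = sigma_3 = exp(-t) + 0.05
        yr, yr_dot = np.sin(t), np.cos(t)
        psi1 = vpsef(x1 - yr, thr)
        beta2 = -k[0] * psi1 - (x1**2 + x2**3 + x3 - x2) + yr_dot  # eq. (16)
        a2, a2_dot = filter_step(a2, beta2, sigma, dt)
        psi2 = vpsef(x2 - a2, thr)
        beta3 = -k[1] * psi2 - (x1**2 * x2 + x3**5 - x3) + a2_dot
        a3, a3_dot = filter_step(a3, beta3, sigma, dt)
        psi3 = vpsef(x3 - a3, thr)
        u = -k[2] * psi3 - x1 * x2 * x3**2 + a3_dot
        # Plant dynamics of the third-order non-lower-triangular system.
        dx1 = x1**2 + x2**3 + x3
        dx2 = x1**2 * x2 + x3**5
        dx3 = u + x1 * x2 * x3**2
        x1, x2, x3 = x1 + dt * dx1, x2 + dt * dx2, x3 + dt * dx3
        ts.append(t); ys.append(x1)
    return np.array(ts), np.array(ys)   # output y = x_1 vs. reference sin(t)
```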
The figures below illustrate the simulation results for this scenario. It is evident that all signals exhibit uniform ultimate boundedness. Figure 1 demonstrates the system's ability to rapidly and accurately track the reference signal y_r with the system output y. The settling time is less than 1 second, and the maximum steady-state tracking error is less than 1%. This is due to the error-correction mechanism, whereby the initial error magnitudes are reduced, preventing the system from overshooting initially. The control input signal and the system states are shown in Figures 2 and 3, respectively. Figures 5 and 6 depict the input and output signals of the two first-order low-pass filters, respectively. Figure 4 clearly shows the impact of the proposed technique, whereby the tracking error is less than 1%. The stability of the system and excellent tracking performance are achieved with the controller designed in this work. Figure 7 depicts the large reduction in the error of tracking the reference y_r achieved by the proposed control algorithm compared to the control technique presented in <cit.>.
To further validate the effectiveness of the suggested control mechanism given in (16), a second reference signal y_r = 1 - e^-t is considered; the simulation results are given in Figures 8 to 13. The effective achievement of stability and tracking performance of the closed-loop system is clearly seen: these figures show the system output, control input, system states, tracking error, and the first-order filter signals, respectively, for the second reference using the proposed control design.
§ CONCLUSIONS
This article introduces a new technique called Variable Power Surface Error Function (VPSEF) backstepping-based Dynamic Surface Control for non-lower-triangular nonlinear systems. The approach is supported by strong theoretical proofs. The control law switches based on the magnitude of the error in order to mitigate the high impact of these errors. The designed control technique not only ensures accurate tracking of the reference within a prescribed settling time, but also actively reduces oscillations in the tracking error and intermediate errors. Furthermore, the use of DSC in this approach effectively eliminates the issue associated with circular design. The stability analysis shows that all signals in the closed-loop control system are uniformly ultimately bounded, and the tracking error converges to a small vicinity of zero. Two simulations were conducted using different reference signals to illustrate the enhanced trajectory tracking achieved by the suggested technique; they showed a reduction in tracking error across changes in the reference signal, highlighting the effectiveness of the proposed approach.
Declarations
Ethical Approval
Not applicable
Funding
No funding was received for this work.
|
http://arxiv.org/abs/2409.03211v1 | 20240905031546 | Project Severe Weather Archive of the Philippines (SWAP). Part 1: Establishing a Baseline Climatology for Severe Weather across the Philippine Archipelago | [
"Generich H. Capuli"
] | physics.ao-ph | [
"physics.ao-ph"
] |
§ ABSTRACT
Because of rudimentary reporting methods and a general lack of documentation, the creation of a severe weather database for the Philippines has been a difficult yet relevant target for climatological purposes and historical interest. Previous online documentation of severe weather, i.e. tornado, waterspout, and hail events, has often been sparse, inconsistent, or is now defunct. Many individual countries or continents maintain severe weather information through either government-sponsored or independent organizations. In this case, Project SWAP is intended to be a collaborative exercise, with clear data attribution and open avenues for augmentation, and the creation of a common data model to store the severe weather event information will assist in maintaining and updating the database in the Philippines. For this work, we document the methods necessary for creating the SWAP database, provide a broader climatological analysis of spatio-temporal patterns in severe weather occurrence within the Philippine context, and outline potential use cases for the data. We also highlight its key limitations and emphasize the need for further standardization of such documentation.
§ INTRODUCTION
Convective storms such as thunderstorms are capable of producing large hail (yelo), extreme rainfall (labis na pag-ulan), and tornadoes/waterspouts (buhawi/ipo-ipo). They can be destructive to man-made structures and can lead to loss of life, especially in urban areas with high population, high building density, and variable building quality. This is an important challenge for meteorologists, as such forecasts require knowledge of the environment of the storms, which can be obtained from radiosonde measurements or numerical weather prediction models. However, tallying the impacts of each severe weather event (SWE) that occurred poses yet another challenge due to the problems associated with non-meteorological effects in data collection <cit.>, which is why the approach to data acquisition needs to be standardized to reduce potential error sources <cit.>.
Archives of SWE occurrence, mostly of tornadoes, are currently spread among individual countries and regional research groups, with varying quality, standards, and temporal coverage. Thus, it is difficult to draw conclusions about the worldwide spatial or temporal distribution of tornado frequency or strength. Efforts to estimate a global climatology have been made, notably by Fujita <cit.> and Goliger & Milford <cit.>, but relied on fragmented and sometimes contradictory sources. The more recent European Severe Weather Database (ESWD), published by the European Severe Storms Laboratory <cit.>, has been used to estimate tornado climatologies for Europe <cit.>. This dataset is now well integrated into the global archive of Maas et al. <cit.>, the Tornado Archive (TA), with over 100,000 tornadoes tallied.
The production of an official, organized database of extreme weather reports is essential to identify the nature of these phenomena <cit.>. An inventory of events is the first step to identify threats, and it is valuable information for planning and insurance matters <cit.>. For this reason, several countries and regions in the world have generated severe weather and tornado databases, including China <cit.>, Italy <cit.>, Portugal <cit.>, the Czech Republic <cit.>, the British Isles <cit.>, and the USA <cit.>, among others. These databases are mainly supported and maintained by their national agencies, such as national meteorological services <cit.> and, in some other cases, as part of academic projects <cit.>.
Similarly, extensive climatologies have been conducted on the incidence of waterspouts in Europe and North America <cit.>. In Catalonia, Spain, the information obtained from social networks was essential to improve the climatology of tornadoes and waterspouts. It led to a better understanding of the frequency and distribution of waterspouts and severe weather <cit.>. The relatively high incidence of waterspouts around the Florida Peninsula led to an attempt to predict these events by applying a statistical model that considers a significant number of variables <cit.>. The regional predictions of the model on the probability of waterspout incidence were better than some of the applied indices. In summary, research on waterspouts has become relevant in the last decade.
Furthermore, due to the significant importance of hail climatology research, in situ measurements at weather stations, rawinsonde measurements, and several kinds of remote sensing data, such as radar and satellite, have been used to analyze the temporal and spatial distributions of hail across different continents and countries of the world <cit.>. There are also several studies of the climatology of hail days <cit.>, hail frequency and size <cit.>, large hail <cit.>, and the potentially influential factors of hailstorms <cit.> in China in recent decades, collected and provided by the National Meteorological Information Center.
Besides these professional observation datasets, social databases have also been used to demonstrate the characteristics of hail distributions, such as news reports, disaster claims from insurance companies, records of the agriculture and housing industries, and statistical yearbooks. Tuovinen et al. <cit.> investigated severe hail in Finland by collecting newspaper, storm spotter, and eyewitness reports over 70 years. The ESWD is a database of SWEs crowdsourced since 2002 <cit.>. The Tornado and Storm Research Organization (TORRO) maintains a website where volunteers upload severe weather reports such as tornadoes, lightning, and damaging hailstorms on the island of Britain <cit.>. However, damage quantification and characterization are more problematic due to the meteorological context in which each case occurs, e.g. events occurring during a severe storm. Such issues need to be considered to establish the limitations of a baseline climatology within the country based only on documentary evidence.
This paper establishes the first simple database of SWEs and a baseline climatology, including tornadoes, waterspouts, and hailstorms, based on data compiled from hemerographic sources and personal communications, which contributes to the knowledge of climate types in the Philippines through Project Severe Weather Archive of the Philippines (SWAP). Now in its 2nd Data Release (labelled SWAP DR2), the compiled data are used to investigate the distribution of severe weather occurrence and record-keeping, as well as the interdependence between the two. We present the compiled database as-is, acknowledging the known biases therein. As the true severe weather climatology is fundamentally intertwined with the historical context surrounding its observation, we analyze trends in our database with both in mind.
§ METHODOLOGY
§.§ Data Sources: Tornado, Hail, and Waterspout database
The documentary information was collected from official and non-official sources. The first type of data contains reports from the Department of Science and Technology-Philippine Atmospheric, Geophysical, Astronomical Services Administration (DOST-PAGASA)[Available: <https://www.pagasa.dost.gov.ph/>], National Disaster Risk Reduction and Management Council (NDRRMC)[Available: <https://ndrrmc.gov.ph/>], Department of Social Welfare and Development and their Disaster Response Operations Management, Information and Communication sector (DSWD-DROMIC)[Available: <https://dromic.dswd.gov.ph/>], and local civil protection units distributed throughout the country. These are the official sources that were also included in the archive.
Information not included in any major available dataset was obtained from other reliable sources, as indicated in De Coning and Adam (2000), and was used both to add new tornadoes and to fill in missing data for existing ones. Doing so is quite time-consuming, as a vast number of different sources are involved, and the effort to find and parse them is continuing. The non-official sources include eyewitness reports, media news/newspapers[Most of these newspapers are available at the National Library of the Philippines (NLP) upon request. The same goes for other official sources, such as those from DOST-PAGASA, whose documents are very difficult to find on the internet.], and information from social media platforms (e.g. Twitter, Facebook, and YouTube). Such documentary data were exhaustively evaluated to identify fake events. The updated database includes, as far as possible, information about the type of event, date, hour, location (latitude-longitude coordinates), tags (e.g. tornadoes, hail, elevated, high-based, etc.), photographs and videos through the documentation column, and other relevant information. To regulate the process, every tornado in the database was given a mandatory "Source" attribute, which refers either to one of our major datasets or to another source (usually a web link). We also designate whether these sources are still functioning (Active), preserved somewhere else such as a library (Preserved), or no longer accessible or maintained (Inactive).
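To illustrate the common data model, the record layout described above can be expressed as a small schema. The class and field names below are hypothetical (the actual SWAP DR2 column names on Zenodo may differ); only the attributes themselves are taken from the text.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SWEReport:
    event_type: str                      # "tornado", "hail", or "waterspout"
    date: str                            # e.g. "2016-08-14"
    local_time: Optional[str]            # hour of occurrence, if known
    latitude: float                      # single-point coordinates
    longitude: float
    tags: List[str] = field(default_factory=list)  # e.g. ["elevated", "high-based"]
    documentation: Optional[str] = None  # links to photographs/videos
    source: str = ""                     # mandatory "Source" attribute
    source_status: str = "Active"        # "Active", "Preserved", or "Inactive"
```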
The official record of damage to public and residential infrastructure is limited to only a few tornadoes, e.g. the Manila City and Bacolor, Pampanga tornado events. Despite the scarcity of information about the damage caused by each tornado or waterspout in the database, a classification using, for instance, the Enhanced Fujita (EF) Scale was carefully considered, to be updated upon further review. Although there are inconsistencies in the details provided by eyewitnesses and media news, in addition to the lack of instrumentation for immediate verification, the structural differences between buildings in the Philippines and those considered in the EF scale, the limited capacity to conduct field investigations, and the nonexistence of a meteorological observer network, we provide initial ratings for these events by examining texts and the damage caused, as shown in photographic/videographic evidence.
§.§ Spatial Analysis and Smoothing
Although Project SWAP, as it currently stands, includes more than five hundred SWEs, our database is necessarily incomplete, and likely unrepresentative despite covering 1969-Present, owing to events that were never documented. A spatial smoother was employed to account for shortcomings in our sampling and to provide a preliminary climatological estimate of severe weather occurrence. A non-parametric multivariate kernel density estimation (KDE) was employed to identify and examine hazard patterns/hotspots on an 11 km × 11 km horizontal grid using a kernel function <cit.>. The general equation for a multivariate KDE in d ≥ 2 dimensions, given a multivariate data set, is expressed as;
f̂(x) = 1/nh^d∑_i=1^n K {1/h(x-x_i) }
Where K is the kernel function, in this study, a symmetrical Gaussian kernel was chosen shown by;
K(x) = 1/√(2π) e^-x^2/2
Meanwhile, cross validation was utilized to identify the most optimal bandwidth (h)[Also known as the smoothing parameter. The resultant probability density function (PDF) is sensitive to the chosen bandwidth. One can also use Scott's or Silverman's estimation methods.], the other variable in Equation <ref>, by exhaustively searching over candidate values, assuming a fixed metric and kernel function, i.e. Euclidean distance and a Gaussian kernel. This was conducted through k-fold cross validation with 100 iterations over 10000 candidate bandwidths for each of our data arrays, though a very large grid can be time-consuming to search through. The GridSearchCV (GSCV) utility offers such capability, taking 3 compulsory parameters (the estimator, the grid of parameters, and the number of cross-validation folds k), and is widely used in machine learning for hyperparameter optimization. Alternatively, h may be determined using an adaptive bandwidth, with different h values based on the local density of the observations under consideration <cit.>.
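A minimal sketch of this bandwidth search with scikit-learn is shown below; the variable coords is assumed to hold the event coordinates as an (n, 2) array, and the candidate range is illustrative (with 10000 candidates the exhaustive search is slow).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

def fit_kde(coords, k=10, n_candidates=10_000):
    # Exhaustive k-fold cross-validated search for the bandwidth h of a
    # Gaussian-kernel KDE; scoring uses the estimator's default
    # total log-likelihood.
    grid = GridSearchCV(
        KernelDensity(kernel="gaussian"),
        {"bandwidth": np.linspace(0.01, 5.0, n_candidates)},
        cv=k,
    )
    grid.fit(coords)
    return grid.best_estimator_  # KDE refit on all data with the optimal h
```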
§ DATA DISCUSSION
§.§ Limitation of Project SWAP
a. Number of SWEs. Many SWEs go unreported or unrecorded, especially in areas or time periods with less infrastructure or lower population density <cit.>. This is reflected, for example, prior to the 2010s and in the years 2010 and 2016, where only 2 and 8 cases, respectively, were tallied in the archive. It is unclear why SWEs from 2010 are so difficult to trace, but the lack of cases in 2016 was likely due to the August 14 Manila Tornado that struck the Metro: it became a standout event, while the other SWEs of that year were overshadowed by it.
The more general increase throughout the time period is due to a combination of better access to more recent tornado records and population expansion that allowed more tornadoes to be observed. Similar patterns are present in most other datasets, e.g. the ESWD <cit.>, but they are unlikely to be physical trends; instead they reflect societal growth over time. Under-reporting of the total tornado number can also occur when tornado families are judged to be individual tornadoes <cit.>.
In historical reconstructions of tornado, hail, and waterspout events, where the best available sources are newspaper reports written by journalists rather than meteorologists, severe wind events such as microbursts can be misidentified as tornadoes. Reports of tornadic activity are also sometimes sensationalized <cit.>. Still, the under-reporting effects previously discussed are generally of much larger magnitude, meaning that any observed tornado count for a given region, especially in the pre-modern era, is very likely an underestimate.
b. Tornado and Waterspout Intensity, and Hailstone Sizes. Most of the waterspout events within Project SWAP have occurred on open waters without any damage to property. Thus, was straightforwardly rated as EF0. A tiny fraction of the waterspout events were rated as EF1 which impacted several homes made of light construction materials and shanties.
The difficult part is initially rating the more than 300 tornadoes included in Project SWAP. Around half of all tornadoes were initially rated EF1 by this project, given that most structures around the country fall under the one- or two-family residence, low-rise (1-4 story) building, and hardwood/softwood tree damage indicators. A more comprehensive survey and assessment is needed to update and rectify these initial ratings.
Following the TA's guidelines <cit.>, tornadoes for which an intensity rating is unassigned are given EFU (EF-Unknown), under which the other half of the tornado reports in this project fall. This occurs when a tornado is confirmed (e.g., visually), but no damage is found <cit.>. In other cases, tornadoes may be rated EFU because no damage assessment was conducted by professional surveyors, and in some databases, e.g. León-Cruz et al. <cit.> and the National Institute of Water and Atmospheric Research <cit.>, tornadoes in this category are the majority. NCEI/SPC, until 2016, rated tornadoes that did not cause observable damage as EF0.
Other avenues for providing damage estimates for tornadoes are either adopting the International Fujita (IF) Scale developed by the ESSL <cit.> or developing a damage scale based on structures typical of the Philippines. Recently, the use of the IF scale (preliminary ver.) was demonstrated and evaluated by Pucik et al. <cit.>, who applied it to a violent tornado case in Czechia on 24 June 2021. The feedback from the surveyors has been used to improve the IF scale, providing a more coherent, global framework for rating tornado and convective wind damage. Whether our country's national weather bureau adopts this scale or develops a new system, depending on factors such as the unique structural characteristics found in the Philippines, the availability of data for calibration, and the practical applicability of the scale in local contexts, is another subject that requires attention.
Hail sizes were also estimated based on the photographic and videographic evidence presented and included in this project. Most authors have indicated that stones of severe size have diameters greater than 2 cm <cit.> or 2.5 cm <cit.>, while other authors have established the threshold at which hail becomes "large" at 3 cm <cit.>. A category called "significant hail" was also introduced for cases in which the severity has been catastrophic and in which the diameter has reached at least 5 cm <cit.>.
Therefore, with support from recent research and findings, Project SWAP adopts a simple hail scale that uses the estimated diameter to classify the hail type as:
Small hail: ≥ 1 cm
Severe hail: ≥ 2 cm
Large hail: ≥ 3 cm
Significant hail (Sig. Hail): ≥ 5 cm
If a hail event was reported but without any traceable photographic and/or videographic evidence, we consider and label it as 'Undefined'; a minimal coding of this scale is sketched below. Table <ref> depicts the counts of SWEs, including tornadoes, waterspouts, and hail reports, with respect to the scale used in this project.
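The scale maps directly onto a small helper function; the encoding below is illustrative (the 'Sub-hail' label for stones below the 1 cm threshold is ours):

```python
def classify_hail(diameter_cm):
    # Encode the simple SWAP hail scale; None marks reports with no
    # traceable photographic/videographic evidence.
    if diameter_cm is None:
        return "Undefined"
    if diameter_cm >= 5:
        return "Significant hail"
    if diameter_cm >= 3:
        return "Large hail"
    if diameter_cm >= 2:
        return "Severe hail"
    if diameter_cm >= 1:
        return "Small hail"
    return "Sub-hail"
```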
c. Location and Coordinates. Path widths are difficult to measure and rather unreliable at high degrees of precision, for example exhibiting unnatural, unexplained shifts over time. Path length is more easily measured, but in older and even new records it can suffer from underestimates, through incomplete surveying of the entire track, or overestimates, through judgment of tornado families as individual tornadoes <cit.>. The only thorough tornado track yet generated in the country was for the August 14, 2016 EF1 Manila Tornado, produced by stitching together videographic evidence <cit.>.
Some databases recorded all tornado paths as single point coordinates (e.g., Mexico, Argentina, China), while others included paths of two or more points (e.g., Europe, Canada, Japan). Most of these archives had a mix of both, with the proportions of each depending on the quality of observations. Furthermore, a few datasets, such as those for Argentina and South Africa, record only the town in which a tornado occurred, meaning that the coordinates listed are simply those of the town center.
That being said, we adopted single-point coordinates for this archive, resulting in some level of bias. For well-documented cases, e.g. the Manila Tornado, the coordinates were placed on a well-known structure it impacted, such as a hospital, convenience store, or school, within the area of interest. More difficult cases were tallied only at the level of the town in which a tornado occurred and was seen. With no single objective procedure for resolving these biases, we present the database as-is and recommend careful consideration of individual data-collection methodologies in future analyses using it. We hope that tornado paths/swaths for most, if not all, tornado cases will be included in future data releases and versions of this archive.
d. Sounding Profiles. In the context of a tornado, hail, and waterspout database, sounding profiles (Skew-T and hodograph) provide critical insights into the environmental conditions leading up to and during SWEs.
By analyzing the vertical structure of the atmosphere at various layers, sounding profiles help identify key factors such as instability, wind shear, moisture content, and temperature gradients, which are essential for understanding the potential for severe weather development. Inclusion of these profiles in the archive allows researchers and meteorologists to understand atmospheric conditions with the occurrence of tornadoes, hail, and waterspouts, enhancing predictive capabilities and stimulate this research area.
These sounding profiles were queried and analyzed through SounderPy of Gillet <cit.>, either from an observational standpoint or from reanalysis analogs, mainly ERA5, or both. Files will have the standard csv format and a special cm1 input format for Cloud Model 1 simulations, and will be available alongside SWAP DR2.
§.§ Temporal Distribution of Severe Weather Activity
In this research, 596 reports have been collected as of SWAP DR2, increasing the documented activity considerably. The current climatology thus considers a total of 326 tornadoes, 133 hail events, and 137 waterspouts distributed over 500 severe weather days. SWAP DR2 currently runs from 1969 to the present, with the period 1969-1999 labelled 'Pre-2000s' due to the lack of available data (a total of 17 tornadoes were documented and indexed there).
The annual distribution of severe weather activity in the Philippines is shown in Fig. 1. The documented events average 5, 15, 25, and 41 events yr^-1 for the periods 2000-2010, 2000-2020, 2010-2020, and 2014-2024, respectively. Reports of tornadoes have increased since 2009, as have waterspouts since 2019. The highest tornadic activity was reported in 2022, with a total of 60 events (37 tornadoes and 23 waterspouts), while the highest total activity is in the current year, 2024 (36 tornadoes, 18 hail reports, 38 waterspouts). The low number of events reported in 2010 and 2016 is attributed to the initially applied methods used in the updated database and the low density of reliable information sources. The entire tornado and waterspout dataset shows a maximum of 3 tornadoes on the same day.
In recent years there has been an upward trend in tornado activity records in US and European databases <cit.>. This apparent increase in activity could be associated with the addition of more official databases and higher public interest in events with high social impact <cit.>. The same goes for the Philippines: we infer that such behavior is associated with enhanced social attention to these natural phenomena, the extensive use of social media networks, and increased access to internet services.
The highest severe weather activity is recorded in the warm spring and summer period, which starts around March-April with roughly even counts at first, after which the severe weather season gets underway (Fig. 2a). The prolonged severe weather season, which includes tornadoes, hail events, and waterspouts, extends from May to August (Fig. 2a), winding down by September. During summer (June, July, and August, JJA), the number of documented tornadoes oscillates around 50 mo^-1 (Fig. 2a), while hail events decline throughout this period. In this season, the arrival of tropical waves and the onset of southwesterly winds over the Philippines <cit.> favor moisture transport from the oceans to the continental areas. The quality of low-level moisture leading to instability and the ambient wind profile, i.e. wind shear, are the primary ingredients for robust convective storms to initiate, capable of producing severe weather hazards in this period <cit.>.
Furthermore, the maximum registers of waterspouts also occur in the JJA period, with August the highest (Fig. 2a). A lag is evident when comparing the occurrence of typical tornadoes and waterspouts. Interestingly, the active season of waterspouts coincides with the seasonal occurrence of tropical cyclones <cit.>. In this sense, a relationship can be established between the tropical cyclone and waterspout seasons. Although such an association is known in other parts of the world <cit.>, it has not previously been recognized in the Philippine context. Further research is needed to understand in depth the implications of tropical cyclone activity for waterspouts in the country and, generally, the timing of these SWEs relative to other pre-existing weather systems.
Some tornadoes and waterspouts were documented in the cold winter period of December, January, and February (DJF) and in autumn (September, October, and November, SON). These events have been classified as cold tornadoes <cit.> and could be associated with a different dynamic than the conditions of the warm tornado season. Seasonally, the number of documented SWEs between November and February is reduced (an average of 11 SWEs yr^-1, with averages of 8 tornadoes yr^-1, 4 hail events yr^-1, and 2 waterspouts yr^-1 over that span) and can be related to stability conditions derived from the cold air masses of the Northeast Monsoon and reduced moisture fluxes onto the continent.
In summary, the severe weather season kicks off in spring (March, April, and May, MAM), reaches its peak in summer (JJA) with reported hail decreasing, winds down in autumn (SON), and finishes in winter (DJF) (Fig. 2b). There is an evident difference between tornadoes, commonly reported in spring and summer, and waterspouts, which have their active period between summer and autumn. We infer that waterspouts are notably influenced by tropical cyclone activity, principally along the Pacific coast. Tornadoes, on the other hand, are more related to instability conditions derived from the passage of easterly waves, moisture advection, a favorable wind profile, and late cold-front activity.
The diurnal distribution (Fig. 3) shows that these phenomena are usually reported between 14:00 and 15:59 local time (06:00-07:59 UTC), followed by 16:00-17:59 local time (08:00-09:59 UTC). This behavior is associated with daytime heating and the development of air-mass thunderstorms <cit.>, necessary for convection to persist, hailstones to develop along the updrafts, and tornadogenesis. The diurnal distribution of severe weather activity reported in the Philippines matches the findings of several climatologies <cit.> and the rainfall climatology of Banares et al. <cit.>.
§.§ Spatial Distribution of Severe Weather Activity
The spatial distribution of SWEs is a crucial aspect of risk management associated with this phenomenon. Geographical characteristics and meteorological conditions are features to be considered in every baseline climatology for SWEs.
Figure <ref>a shows the geographic distribution of all documented SWEs across the archipelago. The highest tornadic and hail activity is registered in the Greater Metro Manila (GMM) area, encompassing Southern Luzon, the National Capital Region (NCR), and Region III (Central Luzon), while scattered waterspout events were seen along the coast of Southern Luzon and even on Laguna Lake, notably the quadruple spouts of May 2020 at the height of the pandemic. A large swath of waterspout events, coinciding with a few tornadic events, was recorded in Regions VI and VII (Western and Central Visayas). Lastly, another area with a significant number of recorded tornadoes is evident along BARMM (Bangsamoro Autonomous Region in Muslim Mindanao) and Region XII (SOCCSKSARGEN), with North and South Cotabato, Sultan Kudarat, and Maguindanao as a favored corridor for severe weather activity, making it second in tornadic activity. Notably, a regional tornado outbreak occurred in South Cotabato on the afternoon of September 30, 2009, impacting several municipalities, including Koronadal City.
There is a clear association between population density and documented tornadoes (Fig. <ref>a,b). More than 50% of the Philippine population is located where the SWEs are recorded. Region IV-A, NCR, and Region III account for 30%[Based on the 2020 Census of Population and Housing by the https://psa.gov.ph/content/highlights-philippine-population-2020-census-population-and-housing-2020-cphPhilippine Statistics Authority.] of it, providing a hint as to why 48% of the documented SWEs were located on the Luzon landmass (Fig. <ref>). The population-density effect on tornado detection is well known <cit.>. In addition to the population factor, improvements in internet coverage and smartphone availability explain the increase in the number of reported tornadoes in recent years. This situation implies that the actual number of cases has been underestimated. Even though the aforementioned regions from the Visayas down to Mindanao may have topography and environmental characteristics similar to those observed throughout the GMM, the numbers of tornadoes, hail events, and waterspouts documented there are lower, as seen in Figures <ref>.
Fig. 4c shows the climatological mean flash-rate density observed by the Lightning Imaging Sensor (LIS) aboard the Tropical Rainfall Measuring Mission satellite <cit.>. Some studies have documented relationships among large hail, thunderstorms, and tornadoes, all of them associated with severe convective storms <cit.>. The flash-rate climatology presents a pattern most similar to the SWE distribution across the GMM, but not for the SMOr and the SMOc. The convective processes are intensified over those regions, influenced by upscale growth forming mesoscale convective systems <cit.>[However, MCSs have also been reported along Mindanao's favorable corridor for tornadic storms; see Lagare et al. <cit.>.] and the Southwest Monsoon <cit.>. These spatial variations could result from differences in data sources: while the tornado and waterspout information comes from reports by inhabitants, the flash data come from satellite-mounted lightning sensors.
Fig. <ref>a,b,c displays a kernel density estimation made with a Python package, using an 11 km search radius on the 11 km × 11 km (121 km²) grid. A clear and well-defined hotspot for all hazard threats, with a bimodal distribution, is anchored in the GMM, with the greatest likelihood of severe weather in the upper portion of the Metro Manila-Pampanga area and another in Pangasinan, while another hail hotspot is denoted along the Benguet region (Fig. <ref>a,b). A total of 137 tornado and 105 hail reports have been documented over this area, i.e. ≥40% and ≥70%, respectively, of the totals within SWAP DR2. Some of the notable events include the first recorded tornado in the Philippines, in June 1968, by Grazulis <cit.>; a potential EF2 tornado that tore through parts of Bulakan, Bulacan in August 1998; the well-documented EF1 Manila Tornado of August 2016; the quadruplet waterspouts on Laguna Lake in May 2020; a significant 8 cm hail in Norzagaray, Bulacan in August 2021; and yet more Pampanga tornadoes that impacted the towns of Magalang and Arayat in June 2023 and May of this year, respectively.
This specific hotspot in Luzon is surrounded by complex terrain features: to its west lies the Zambales Mountain Range (ZMR) and to its east the Sierra Madre Mountain Range (SMMR). Previous research shows that for landfalling tropical cyclones (LFTCs), the complex terrain is crucial for magnifying rainfall across Luzon through orographic effects <cit.>. Although these studies focused on TCs, such geographic features may also be conducive to building instability favorable for the occurrence of severe weather in the area on both TC and non-TC days.[This is most likely the case during SWM periods, as southwesterly (veering) winds blow over the ZMR. Due to conservation of potential vorticity, lee-side cyclogenesis and its associated meso-low occur on the lee side of the ZMR, which seems to be the 'spark' for the Manila Tornado case as studied by Capuli <cit.>, and likewise for other SWM-period/JJA/warm-season tornadoes along the hotspot zone in Luzon. On the other hand, prevailing easterly wind conditions during the MAM season and monsoon breaks that can potentially lead to severe convective storms remain understudied.]
In the Visayas regions, two small tornado hotspots were discovered, overlaid by a strong signal of waterspout events across parts of Regions VI and VII, as shown in Figure <ref>a,c (also depicted and discussed earlier). In particular, the Panay Islands, the Negros provinces, and much of Central Visayas show a notable signature of SWEs centered on waterspouts. Two potential EF2 tornadoes were recorded in this area, both in Cebu in 2013: one in Minglanilla and the other in Lapu-Lapu City. Meanwhile, more than 100 waterspout events were documented in this area as well, mostly rated with confidence as EF0. A significant portion of the waterspout reports correspond to short durations (maximum 20 min). However, given the extraordinary number of documented waterspout events in this area of the Visayas compared to other severe weather hotspots in the archipelago, an extensive analysis is required to understand how an environment surrounded mostly by bodies of water is suitable for both tornadic and non-tornadic development.[Most of these Visayas waterspouts rotate anti-cyclonically, while a few cases rotate cyclonically, based on documented videos linked in SWAP DR2.]
In Mindanao, the data on tornado hotspots highlight a significant area of increased tornadic activity (≥100 tornadoes), particularly in South Cotabato extending towards North Cotabato, with a weaker influence noted in Northern Mindanao. This region stands out due to its frequent occurrence of tornadic storms, contributing to its classification as a tornado hotspot. Interestingly, this increased tornado activity is situated between two major mountain ranges: the Pantaron Mountain Range to the east and the Highlands of Tiruray to the west. This geographical positioning may play a critical role in the development and intensification of tornadic storms in the area. The mountains could influence local wind patterns, moisture distribution, and the stability of the atmosphere, creating favorable conditions for tornado formation. This interplay between topography and weather patterns may partially explain why this region experiences higher-than-average tornadic activity compared to other areas in Mindanao.
Additionally, the region has a historical record of significant tornado events, including three potential EF2 tornadoes. Notably, a killer tornado struck Zamboanga del Sur in June 1990, resulting in the deaths of up to 30 people in Manukan, despite being outside the main hotspot zone. Other destructive tornadoes were recorded in Lantapan, Bukidnon in July 1991, in a regional tornado outbreak in South Cotabato in September 2009, and in Pikit, North Cotabato in August 2015. The severity and impact of these tornadoes further underscore the vulnerability of the region to SWEs, especially within the identified hotspot zone. Further studies could explore the precise meteorological mechanisms at play, enhancing the understanding of tornadogenesis in this complex terrain.[As also noted for the severe weather hotspot in Luzon.]
Figure <ref> shows the monthly spatial distribution of all tornadoes, hail events, and waterspouts across the archipelago within SWAP DR2. The winter period (DJF) shows few, scattered tornado cases across the entire archipelago. At the beginning of the active season in March, the number of cases increases, with hail activity picking up in Northern Luzon and tornado activity within the Panay Islands, Maguindanao, and South Cotabato. By April, a dominant hail-event pattern is observed across the Luzon landmass, while the tornado season has a head start in the Mindanao influence zone. In May, there are relatively high concentrations of tornadoes and hail events along the bimodal hotspots in Luzon, a mixed batch of severe weather along Regions VI and VII, and scattered severe weather activity within Mindanao as well.
At the initial peak in June, severe weather activity is already spread out across the country, with tornado and hail reports centered on Pangasinan and across the GMM area. The waterspout and tornado season is also underway in the Western and Central Visayas, as well as in Mindanao. This pattern is maintained throughout July and August across the country, with increased waterspout activity along the Visayas. In September, the severe weather season in Luzon nears its end with a decrease in severe weather activity (fewer tornado cases) and shifts south, with notable clusters around the Visayas and Mindanao hotspots. October registered some hailstorm activity across NCR and Baguio due to monsoon breaks shifting the wind pattern to easterlies and the initial arrival of the Northeast Monsoon. Meanwhile, the Visayas and Mindanao severe weather season is still ongoing along the Negros provinces and Cebu, and along the Mindanao tornado hotspot depicted earlier. By November, severe weather activity caps off in Luzon as the Northeast Monsoon starts to advance over the landmass, and is about to end in the Visayas with fewer registered SWEs; Mindanao tornado activity, surprisingly, remains quite active. By December, severe weather activity flattens across the archipelago, with few documented cases within that month.
§ CONCLUSION AND RECOMMENDATION
This paper has presented the first baseline climatology of SWEs, encompassing tornadoes, hail, and waterspouts, in the Philippines. Project SWAP and its DR2 consist of previous literature and recent documentary datasets, with a temporal coverage of 56 yr (1968-2024). The archive has many potential uses: its reach allows for nationwide estimates of severe weather climatology, including intercomparisons of severe weather observation and documentation methodologies. We hope that the digitized dataset will open possibilities for broader climatological studies. More importantly, this project and paper can serve as a foundational piece for aspiring Filipino researchers trying to get a grasp of meteorology and of SWEs such as tornadoes in the Philippine context. Still, any analyses using this archive will require careful consideration of the biases therein, many of which we have discussed.
On the analysis side, the increase in the number of documented SWEs is attributed to the rise in public awareness of these natural phenomena, the growth in the use of social networks (e.g. Facebook, Twitter, and YouTube), and improvements in information technology (e.g. internet access). Such an apparent increase in tornadic, hail, and waterspout activity in the Philippines could be associated with natural variability as well. For example, previous studies have shown the influence of the El Niño-Southern Oscillation (ENSO) and the Madden-Julian Oscillation (MJO) on convection processes and precipitation <cit.>. However, it is not yet possible to determine the influence of such oscillations on tornado formation in the Philippines, as there is insufficient evidence and a relatively short tornado data series.
The monthly distribution shows that the most active phase of the severe weather season begins in April. This first activity fits with the most active tornado phase in the USA <cit.>. In May, SWEs have been documented across the country, in particular in lower Region I and Region III, encompassing the GMM area and Southern Luzon, with early termination between September and October. This may indicate that this portion of the country is subject to atmospheric processes similar to those that lead to supercell formation and tornadogenesis, especially during Southwest Monsoon periods and the start of the TC season in the western Pacific. However, easterly wind setups during April and May may provide the same hazardous meteorological conditions. Several studies have documented the characteristics and role of tropical activity in rainfall in the Philippines; however, the impact of TCs on tornadogenesis and severe weather initiation remains understudied in the Philippine context.
Meanwhile, down south in the Western and Central Visayas, waterspouts, along with some tornadoes in the Panay Islands, dominate the severe weather activity. The increase in waterspout and tornado activity starts in May, followed by scattered clumps of activity across the island groups of the Visayas until it slowly winds down by November. Mindanao, particularly BARMM and Region XII, on the other hand, had a head start in severe weather, mostly tornadic events, from April to November, with the severe weather season capping off by December across the country. The environments in these locations generally conform to the Philippine climate types developed by Coronas <cit.> and Kintanar <cit.>, but their convective and kinematic setups remain to be studied.
Project SWAP and its DR2 show that nearly 50% of cases are documented in the Pangasinan-GMM influence area. This fact may involve two critical issues. First, there is a clear relationship with population density: demographic effects on tornado documentation have been previously reported <cit.>, and the Philippines is no exception. Second, this spatial pattern could also be related to topographic features. Most of these hotspots, in Luzon and even in Mindanao, are located between two mountainous areas. Topographic effects have also been reported along Luzon <cit.>, but seldom, if ever, along the Mindanao hotspot zone. However, a previous study of an MCS <cit.> may give insight into the role of complex terrain along that hotspot and into the increased tornadic activity in the area of interest. A complete, thorough mesoscale analysis of the convective and wind-profile settings of the severe weather hotspots we identified in the Philippine context is proposed.
What is more striking is that the locations of these severe weather hotspots align well with the climate types, specifically Climate Types 1 and 3. Briefly, Climate Type 1, and to some extent Type 3, is characterized by a pronounced wet season influenced by the southwesterlies from May to October and a dry season from November to April caused by the prevailing north Pacific easterlies, with maximum rainfall from June to September <cit.>. However, the timing of the wet season can vary depending on the onset of the southwesterly flow over the country, tied to the Asian summer monsoon, which still aids the initiation of convective activity over the western coast of the Philippines during the monsoon period <cit.>.
Now, this begs the question: what are the convective and kinematic mechanisms within these climate types, beyond rainfall and temperature, that make them favorable environments for severe weather? What makes the clock tick, capable of producing these SWEs? As a proposed Part 2 of this project, we will explore and establish another baseline climatology, this time centered on hazardous convective weather setups across the archipelago.
§ ACKNOWLEDGMENTS
We are very grateful to the Philippine population for reporting the occurrence of tornadoes in their communities. We also appreciate the valuable comments of the three anonymous reviewers and the editor, which helped to improve this manuscript. This work received no funding; it was made possible through extensive and exhaustive effort, and dedication and commitment to advancing our understanding of severe weather phenomena were indispensable. We would also like to extend our appreciation to the government agencies, acting as primary sources, for providing essential data that contributed significantly to this work. Finally, we are thankful to our families and loved ones for their unwavering support throughout the project.
§.§ Author Contributions
G. H. Capuli is the project leader of Project SWAP. He conceptualized and leads the initiative, designed and conducted the experiments/analysis, and wrote the manuscript.
§.§ Conflicts of Interest
The author declares that they have no competing interests.
§.§ Data Availability
Project SWAP and its SWAP DR2 are available and distributed on https://zenodo.org/doi/10.5281/zenodo.11236890Zenodo. Sounding profiles will be available soon on the aforementioned Zenodo page. The Digital Elevation Model (DEM) is from SRTM15Plus, distributed and available on https://portal.opentopography.org/datasetMetadata?otCollectionID=OT.122019.4326.1OpenTopography. The population density is available through https://hub.worldpop.org/geodata/summary?id=43216World Population Hub. Finally, the lightning data from the TRMM are accessible through https://cmr.earthdata.nasa.gov/search/concepts/C1979883245-GHRC_DAAC.htmlNASA Earth Data. All of these data are distributed under Creative Commons CC-BY 4.0. Proper attribution is required for these datasets.
This paper has made use of the following Python packages: , , , , , , ,
|
http://arxiv.org/abs/2409.02780v1 | 20240904145700 | How empty are the voids? | [
"Anton N. Baushev"
] | astro-ph.CO | [
"astro-ph.CO"
] |
A.N. Baushev ([email protected])
Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Moscow Region, Russia
The Department of Physics, Helsinki University, Helsinki, Finland
§ ABSTRACT
We find an analytical solution for the minimal matter density of a void, i.e. its central density. It turns out that the voids are not so empty: most voids have a central underdensity Δ_c ∼ -50% (which means that the matter density in their centers is only two times lower than in the Universe on average). For small voids (of radius R_0 ≃ 5-10 Mpc), the underdensity can be significantly greater, but the number of voids decreases rapidly as |Δ_c| increases beyond 50%, and voids with Δ_c < -80% are practically absent. The large voids (R_0 ≥ 40 Mpc) always have |Δ_c| < 50%.
Keywords: void; cosmology; large scale structure of the Universe; dark matter
PACS: 98.65.Dx; 98.65.-r; 98.80.-k; 98.62.Py
How empty are the voids?
September 9, 2024
========================
§ INTRODUCTION
Voids are vast (∼ 10-100 Mpc) areas of the large-scale structure of the Universe that contain no galaxy clusters and almost no bright galaxies. The voids are elliptical and occupy a significant part of the Universe's volume. It is still not quite clear what the ratio is between the matter density inside voids and that of the Universe on average. On the one hand, the density of bright galaxies is tens of times lower in voids than in the Universe on average, which implies that the voids are really quite empty. On the other hand, one can easily observe only the stellar component of sufficiently bright galaxies in the voids. There are reasons to believe that structure formation in the voids is suppressed <cit.>, and a significant quantity of matter there forms a massive diffuse component. The voids may contain an almost arbitrary quantity of dark matter (hereafter DM), both as a homogeneous component and as dark halos. Baryonic matter in the form of dwarf galaxies, hot gas, ultradiffuse galaxies (UDGs), etc. is also hardly detectable in the voids. Therefore, the real emptiness of the voids is hard to measure.
N-body simulations support the opinion that the voids are very underdense: they suggest that the matter density in the void center is almost ten times lower than that of the Universe on average (see the Discussion for details). However, the simulations may suffer from numerical effects, and it would be very useful to check them by analytical calculation. The analytical investigation of void formation has a long history; here we mention only a few remarkable works from the very extensive literature: <cit.>. As a rule, calculations were carried out under the assumption of spherical symmetry of the system; in addition, in early works the cosmological constant was ignored for obvious reasons. Spherical symmetry allows one to find a solution for any initial conditions. However, the result depends on the shape of the initial perturbation, which is unknown. In addition, spherical symmetry is obviously violated when the voids enter the nonlinear regime and begin to "collide" with each other, forming a flat wall between them.
We solve a much simpler problem in this paper. We want to estimate the minimal density in a void, the density at its center. As we show, the density profile of voids is very flat at the center, and thus the region with the density close to the minimal one may occupy a significant part of the void volume. Contrary to the full density profile, the minimal void density can be found analytically, and it depends only on the amplitude of the initial perturbation, which can be found assuming that the primary perturbations were Gaussian. We use the analytical method offered in <cit.>, correcting a minor calculation error that was made there and generalizing the result to the case of a non-spherical void.
The structure of the paper is as follows: in the second section we calculate the central density of a void in the spherically symmetric case; in the third we consider the case of a non-spherical void and show that the central density should be the same as in the spherically symmetric case; in the Discussion section we estimate the underdensities of real voids and discuss them.
§ CALCULATIONS
Now we find the central density of a spherically symmetric void. First of all, we need to set the initial conditions. We choose a redshift z_1 deep in the matter-dominated stage of the Universe, such that we may already neglect the radiation term but the perturbation (which later transforms into the void) is still small, say z_1 = 10. We denote the values of variables at this moment by the subscript '1'. Let the characteristic size of the perturbation at this moment be λ_1.
The future void contains a lot of substructure, and therefore the real density profile of the underdensity at z=z_1 is quite complex. However, the substructures do not affect the void formation much, and we may get rid of them by 'averaging' the velocity and density profiles (for instance, with the help of a Gaussian filter). Hereafter we consider this 'softened' profile instead of the real one.
We denote the scale factor of the Universe by a(t), and the Hubble parameter by H ≡ da/(a dt). The redshift is related to a by the trivial equation z+1 = a_0/a. The subscript '0' corresponds to the present-day values of the quantities. We choose the point of the density minimum in the center of the void as the origin of coordinates and surround it by a sphere of radius R_1 ≪ λ_1, which expands at the same rate as the Universe: R = R_1 a/a_1. Since the size of the perturbation λ changes likewise, the ratio R/λ remains constant. We need to underline that the sphere is not quite comoving with the substance: the void center expands slightly faster than the Universe on average, and therefore matter crosses the sphere outwards.
The outer regions of the void do not influence the matter inside R, since a spherically symmetric layer does not create a gravitational field inside itself (Birkhoff's theorem). Thus, though a void is always surrounded by huge masses of matter, they can hardly create a strong gravitational field at the void center. As we show below, this conclusion is very probably valid even for a non-spherical void. The only significant perturbations inside R may be created by other nearby elements of the cosmic web, outside of the void and its walls (for instance, by the walls of a nearby void), since these structures are distributed asymmetrically with respect to the void that we consider. However, these tidal perturbations are small on the scale of R (they are ∼ (R/λ)^3), and we neglect them.
The matter density inside R, q_m, does not depend on coordinates (since r=0 is an extremum of ρ(r)), and the matter velocity v⃗∝r⃗ (because of the spherical symmetry). Moreover, the only component that has cosmologically significant (p∼ρ) pressure after z_1 is the dark energy, but its pressure is everywhere constant. So the system has no pressure gradients.
We conclude that the evolution of the Universe inside R may be described by the usual Friedmann equation <cit.>, though its parameters should slightly differ from those of the undisturbed Universe. We denote the critical, matter, dark energy, and curvature densities of the undisturbed Universe by ρ_c, ρ_m, ρ_Λ, ρ_a, and those of the 'universe' inside R by q_c, q_m, q_Λ, q_a, respectively. We recall that the curvature density is ρ_a≡ 3c^2 y/(8π G a^2), where y=-1, 1, 0 for a closed, open, and flat Friedmann universe, respectively. We denote the scale factor of the 'universe' inside R by b(t), and its Hubble constant by Q≡db/(b dt). The critical densities are
ρ_c=3H^2/(8π G); q_c=3Q^2/(8π G)
We assume the standard ΛCDM cosmological model, i.e., the dark energy is a non-zero cosmological constant (ρ_Λ=q_Λ= const), and the Universe is flat (ρ_a, but not q_a!, is equal to zero). Since we consider the Universe at z≤ 10, we neglect the radiation term, and therefore Ω_m+Ω_Λ≡ρ_m/ρ_c+ρ_Λ/ρ_c=1. The present-day ratios are Ω_m,0=0.315, Ω_Λ,0=0.685, H_0=73 (km/s) Mpc^-1 <cit.>. It is convenient to introduce h so that H_0=h· 100 (km/s) Mpc^-1; thus h=0.73.
Since the Universe is flat, a(t) is defined up to an arbitrary factor (b(t) is the radius of the 3-space of the 'universe' inside R, and so it is defined uniquely). Thus we may always choose a_1=b_1. The Friedmann equations for the Universe and the central region of the void may be written as
H^2=8π G/3[ρ_Λ+ρ_m,1(a_1/a)^3],
Q^2=8π G/3[q_Λ+q_m,1(b_1/b)^3+q_a,1(b_1/b)^2].
The values of ρ_Λ, q_Λ, and ρ_m,1 can be easily expressed through Ω_m,0 and ρ_c,0=3H_0^2/(8π G):
ρ_m,1=(a_0/a_1)^3 ρ_c,0Ω_m,0; ρ_Λ = q_Λ=ρ_c,0 (1-Ω_m,0).
At z=z_1 the future void is only a shallow underdensity |δ_1|=|Δρ_m/ρ_m,1|≪ 1. The density contrast on the matter-dominated stage is proportional to a(t) <cit.>, and it is more convenient to characterize the underdensity by
ℵ≡ (z+1) δ=(a_0/a) δ; ℵ= (a_0/a_1) δ_1,
since ℵ is constant on the matter-dominated stage, and it does not depend on the choice of z_1. Thus
q_m,1=(1-δ_1)(a_0/a_1)^3 ρ_c,0Ω_m,0,
q_m=(1-δ(t))(a_0/a)^3 ρ_c,0Ω_m,0
As we have already mentioned, the sphere R is not quite comoving with the substance: the void center expands slightly faster than the Universe on average, and therefore the matter crosses the sphere outwards with some speed v(t). The speed of the sphere R with respect to the origin of coordinates at z=z_1 is equal to H_1 R_1, while the matter speed is equal to Q_1 R_1. Thus
Q_1 R_1= H_1 R_1 +v_1.
The speed can be easily calculated from continuity considerations. The flux of matter through the sphere R at z=z_1 is (we substitute eqn. (<ref>) for q_m,1)
dM/dt=-4π R^2_1 v_1 q_m,1= -4π R^2_1 v_1 (1-δ_1)(a_0/a_1)^3 ρ_c,0Ω_m,0.
On the other hand,
M(t)=4/3π R^3_1 (a/a_1)^3 q_m=4/3π R^3_1 (1-δ(t))(a_0/a_1)^3 ρ_c,0Ω_m,0.
Here we substitute eqn. (<ref>) for q_m. Only the multiplier (1-δ(t)) in the obtained equation depends on time. We may calculate its time derivative with the help of eqn. (<ref>): since ℵ is constant on the matter-dominated stage, dℵ=0, and from (<ref>) it follows that
a(t) dδ(t)=δ(t) da(t); dδ(t)/dt=(δ(t)/a)(da/dt)=δ(t) H
Then we obtain from (<ref>):
dM/dt=δ(t) H(t) 4/3π R^3_1 (a_0/a_1)^3 ρ_c,0Ω_m,0.
Equating equations (<ref>) and (<ref>) at z=z_1, we obtain
v_1=1/3δ_1 R_1 H_1
As we can see, v_1/R_1≪ H_1, i.e., the velocity perturbation is small with respect to the Hubble expansion. Then we may rewrite (<ref>) as Q_1 = (1+δ_1/3) H_1. Squaring it and neglecting the small δ^2_1 term, we obtain Q^2_1 = (1+2δ_1/3) H^2_1. Substituting here equations (<ref>) and (<ref>) at z=z_1, we get:
(1+2/3δ_1)(ρ_Λ+ρ_m,1)=q_Λ+q_m,1+q_a,1.
In accordance with (<ref>) and (<ref>), ρ_Λ=q_Λ, q_m,1=(1-δ_1)ρ_m,1, and we may convert (<ref>) into
q_a,1=2/3δ_1ρ_Λ+5/3δ_1 ρ_m,1.
However, ρ_Λ is ∼ 3 orders of magnitude smaller than ρ_m,1, and we may neglect the first term. Thus
q_a,1=5/3δ_1 ρ_m,1=5/3δ_1 (a_0/a_1)^3 ρ_c,0Ω_m,0=5/3ℵ(a_0/a_1)^2 ρ_c,0Ω_m,0.
We used (<ref>) in the last transformation. Now we substitute equations (<ref>), (<ref>), and (<ref>) for q_Λ, q_m,1, and q_a,1 into (<ref>). We cancel b_1 and use equation (<ref>) in the form 8π Gρ_c,0=3H_0^2:
Q^2=H^2_0[Ω_Λ,0+Ω_m,0(a_0/b)^3+5/3ℵ Ω_m,0(a_0/b)^2].
We also neglected the small term δ_1 ((1-δ_1)≃ 1) in equation (<ref>) for q_m,1. This equation[In work <cit.>, a minor mathematical error was made: the numerical coefficient before the last term under the root in expression (<ref>) for Q was 3 instead of the correct value 5/3, as in this work.] for Q(t) defines the evolution of the region inside R. In order to finish our calculation, we use a well-known trick <cit.>: the ages of the Universe and of the 'universe' inside R may be found from their Hubble constants:
t_U=∫ dt=∫_0^a_0 da/(aH), t_R=∫_0^b_0 db/(bQ),
where H(a) and Q(b) are given by equations (<ref>) and (<ref>), respectively. But, of course, t_U should be equal to t_R.
It is essential that b_0 > a_0: the void center expands much more strongly than the Universe on average. Let us denote k=b_0/a_0. It is apparent that the central density of a void is q_m,0=ρ_m,0/k^3. As is customary, we will characterize voids by their present-day central underdensity
Δ_c≡(q_m,0-ρ_m,0)/ρ_m,0=1/k^3-1
For voids Δ_c is always negative. Thus, our problem is reduced to the determination of k. We need to calculate both the integrals in (<ref>) and equate them. The first integral may be calculated analytically
t_U=2/(3H_0√(1-Ω_m,0)) arcsinh√((1-Ω_m,0)/Ω_m,0)
The second one may be transformed into (we use equation (<ref>) for H_0)
t_R=1/H_0∫^k_0 √(x) dx/√(x^3 Ω_Λ,0+Ω_m,0(1+5/3ℵ x)).
Equating the age of the Universe calculated in the two different ways, t_U=t_R, we obtain the final equation that binds the 'factor of additional expansion' of the void, k, with the amplitude of the initial perturbation, ℵ, where Ω_m,0 is the only parameter (we take into account that Ω_m,0+Ω_Λ,0=1):
2/(3√(1-Ω_m,0)) arcsinh√((1-Ω_m,0)/Ω_m,0)=∫^k_0 √(x) dx/√(x^3+Ω_m,0(1+5/3ℵ x - x^3)).
This equation (together with equation (<ref>)) solves the problem of the void central density in the spherically-symmetric case. It does not require the final void to be in the linear regime: it is valid for arbitrarily strong final perturbations. As it should be, equation (<ref>) does not depend on the choice of the initial moment, which we denoted by '1': the central density is totally defined by the amplitude of the initial perturbation, ℵ, and Ω_m,0.
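As an illustration, the final equation is straightforward to solve numerically. The sketch below is ours, added for illustration only; it is not part of the original derivation, and all names and parameter choices are our own. It root-finds the 'factor of additional expansion' k for a given amplitude ℵ and converts it to the central underdensity Δ_c = 1/k^3 - 1; following the convention above, δ (and hence ℵ) is the positive depth of the underdensity:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

OM = 0.315  # present-day matter density parameter used in the text

def t_universe():
    # analytic age of the flat LCDM Universe in units of 1/H_0:
    # t_U * H_0 = 2/(3*sqrt(1-OM)) * arcsinh(sqrt((1-OM)/OM))
    return 2.0 / (3.0 * np.sqrt(1.0 - OM)) * np.arcsinh(np.sqrt((1.0 - OM) / OM))

def t_void(k, aleph):
    # age of the 'universe' inside the void in units of 1/H_0
    f = lambda x: np.sqrt(x) / np.sqrt(
        x**3 + OM * (1.0 + 5.0 / 3.0 * aleph * x - x**3))
    val, _ = quad(f, 0.0, k)
    return val

def solve_k(aleph):
    # find k such that the two ages coincide, t_R = t_U
    return brentq(lambda k: t_void(k, aleph) - t_universe(), 1.0, 50.0)

for aleph in (0.5, 1.0, 1.5):
    k = solve_k(aleph)
    print(f"aleph={aleph:.1f}  k={k:.3f}  Delta_c={1.0 / k**3 - 1.0:+.1%}")

For ℵ = 0 the integral reduces to the analytic age of the Universe, so k = 1 and Δ_c = 0, which is a convenient consistency check of the sketch.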
§ THE CASE OF AN ELLIPSOIDAL VOID
Thus, the central density of a void may be found analytically in the spherically-symmetric case. However, this case is degenerate, and it would be more reasonable to consider the general case of a small initial perturbation having an ellipsoidal shape. Will the central density (<ref>) be a reasonable estimate in this case?
First of all, the statement that the outer regions of the void and the huge masses of matter surrounding it can hardly create a strong gravitational field in the void center is very probably correct for a general ellipsoidal void as well. Indeed, the density profile of an ellipsoidal void looks like (we take into account that the average density of the Universe is equal to the critical one):
ρ=ρ_c - ϱ(x^2/e^2_1+y^2/e^2_2+z^2/e^2_3).
This profile may be divided into a set of thin homoeoids of constant density (a homoeoid is a region bounded by two concentric, similar ellipsoids). A homoeoid homogeneously filled with matter does not create a gravitational field inside itself, and so it does not differ from spherical layers in this sense. Thus, if we could neglect the influence of the outer layers in the case of a spherical void, we may do the same in the case of an ellipsoidal one.
Second, let us consider how the asymmetry of distribution (<ref>) evolves with time. In the case of an overdensity the answer is well known and can be based on a simple qualitative consideration <cit.>. While the perturbation remains linear, its form changes little <cit.>. Apparently, this result is valid for an underdensity as well. As an ellipsoidal overdensity reaches the nonlinear regime, its asymmetry rapidly grows. Indeed, in this case the second term in (<ref>) is positive. Let us suppose that e_1 is the shortest length in (<ref>), i.e., the ellipsoid is compressed the most along the x axis. Then the gravitational acceleration towards the center is also the largest along this axis. On the nonlinear stage the entire velocity field of the structure is determined by the gravitational field of the forming structure. Thus, not only is the gravitational attraction the strongest along the shortest axis of the ellipsoid, but the velocity and acceleration of the contraction are also the highest. As a result, the asymmetry rapidly grows with time, and finally the ellipsoid transforms into a Zeldovich pancake <cit.>.
The same qualitative consideration makes it easy to understand that the situation is opposite in the case of a void: the asymmetry in the center decreases with time. Indeed, in this case the second term in (<ref>) is negative: the underdensity creates a sort of 'gravitational repulsion' with respect to the case of the undisturbed Universe. The repulsion is the strongest along the shortest axis of the ellipsoid: as a result, the ellipsoid expands faster in this direction, and the asymmetry decreases with time. Thus, even in the case of an ellipsoidal void its central part tends to spherical symmetry (e_1=e_2=e_3). The process of spherization is relatively slow while the density contrast of the void is small, but it gets very fast on the nonlinear stage of the void formation.
Third, let us consider the central density evolution in the case of an ellipsoidal void. The evolution is determined by the continuity equation
ρ̇/ρ= -∇·v⃗.
Thus the density evolution is totally determined by ∇·v⃗, and we need to find how it depends on the ellipsoid parameters. The density (<ref>) in the void center depends only on time, since this is an extremum point. The gravitational potential near the void center looks like <cit.>
ϕ=ϕ_u + 1/2(ϕ_xx x^2+ϕ_yy y^2+ϕ_zz z^2),
where ϕ_u is the gravitational potential of the undisturbed Universe. Of course, the v⃗ created by ϕ_u coincides with that in the spherically-symmetric case. Now we need to find the v⃗ created by the second term in (<ref>), which is actually the perturbation of the velocity field created by the void. The coefficients ϕ_xx, ϕ_yy, ϕ_zz are determined by the central density and the ratios between e_1, e_2, and e_3 <cit.> and depend only on time. It follows from Δϕ = 4π G ρ that
ϕ_xx +ϕ_yy +ϕ_zz = 4π G ϱ(0)= 4π G (ρ-ρ_c).
Thus, though each coefficient ϕ_ii depends on the void asymmetry, their sum depends only on the density.
On the stage when the perturbation is linear, we may easily calculate the velocity field created by it. For instance, if we consider the x component, dv_x/dt=xϕ_xx(t). At some previous moment of time, τ,
dv_x/dτ=xa(τ)/a(t)ϕ_xx(τ).
Here we neglect the difference between the scale factors inside and outside the void, which is a reasonable approximation on the linear stage: the particle gets some additional velocity in the perturbation, but the time is too short to shift it significantly with respect to the undisturbed Universe. This approximation fails only when the perturbation gets nonlinear <cit.>. From (<ref>) we have
v_i=i∫_0^t a(τ)/a(t)ϕ_ii(τ) dτ,
where i=x,y,z. Now we may calculate the velocity divergence:
dv_x/dx+dv_y/dy+dv_z/dz=∫_0^t a(τ)/a(t)(ϕ_xx+ϕ_yy+ϕ_zz)dτ,
and with the help of equation (<ref>) we finally obtain
∇·v⃗=4π G∫_0^t a(τ)/a(t)(ρ(τ)-ρ_c(τ))dτ.
As we can see from equation (<ref>), the velocity distribution is anisotropic in the case of an ellipsoidal perturbation: the coefficients ϕ_xx, ϕ_yy, ϕ_zz are essentially different. The velocity components are not completely independent, however: they are implicitly bound by (<ref>), and as a result the velocity divergence (<ref>) does not depend on the void ellipticity at all. Thus, equation (<ref>) shows that the central density evolution of an ellipsoidal void is exactly the same as in the spherically-symmetric case, at least on the linear stage of the void formation.
To summarize this section: the statement that the outer regions of the void can hardly create a strong gravitational field in the void center is very probably correct for a general ellipsoidal void as well. At the linear stage the central density evolution of an ellipsoidal void coincides with that of a spherically-symmetric one with the same initial central density. The nonlinear stage requires further consideration, but the ellipticity itself rapidly decreases on this stage, making the density profile spherically symmetric. Thus, equations (<ref>, <ref>) for the central density in the spherically-symmetric case give at least a very good estimate of the central density even for an elliptical void.
§ DISCUSSION
Figure <ref> presents relationship (<ref>) between the perturbation amplitude ℵ and the 'factor of additional expansion' of the void, k; figure <ref> shows the dependence of the central underdensity Δ_c on ℵ. The solid lines represent the relationships corresponding to the real Universe with Ω_m,0=0.315. The dashed lines correspond to the case when the universe has the same H_0 and no curvature term, as the real Universe, but contains no dark energy (i.e., Ω_m,0=1). As we can see, the presence of dark energy suppresses the void formation: the voids in the hypothetical universe without dark energy expand more strongly and have a lower central density.
This is not surprising: the presence of dark energy suppresses the growth of cosmological perturbations, both overdensities and underdensities. This effect may be taken into account by multiplying the perturbation amplitudes by the corrective factor g(z) (see, for instance, <cit.> for details). At the matter-dominated stage g(z)=1, but even now, when the dark energy dominates (Ω_Λ,0=0.685), g(0)≡ g_0≃ 0.7879, i.e., the influence of the dark energy on the structures is moderate.
As equation (<ref>) shows, the deeper the initial perturbation is, the larger k is, i.e., the deepest regions of the initial underdensity corresponding to the future void expand the most. This leads to the characteristic void density profile: it is very flat in the center (i.e., the matter density is almost constant in the central part of the void), but grows rapidly towards the edge of the void. For instance, N-body simulations suggest the following profile <cit.>:
ρ(r)/ρ_M,0=A_0+A_3(r/R_V)^3.
The authors obtained the best fit A_0=0.13± 0.01, A_3=0.70± 0.03 for voids of R_V=8 h^-1 Mpc. In reality the central part may be even flatter <cit.>. Equations (<ref>, <ref>) define a one-to-one correspondence between the amplitude ℵ of the perturbation that later forms the void and the void central underdensity Δ_c. One should distinguish Δ_c from the average void underdensity Δ_av, which is also used in the literature. Of course, |Δ_c| ≥ |Δ_av|. Moreover, Δ_av is much less well defined: it strongly depends on how the void boundary is determined. The density profile is very steep there, and if we slightly shift the border (by changing the void criterion), Δ_av changes significantly, while Δ_c remains the same.
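As a simple numerical illustration of the difference between the two measures, one can volume-average the fitted profile above (a quick check of ours, not taken from the cited work):

from scipy.integrate import quad

A0, A3 = 0.13, 0.70  # best-fit values quoted above
# volume average of rho(r)/rho_M,0 over the void, 0 <= r/R_V <= 1
mean, _ = quad(lambda x: (A0 + A3 * x**3) * 3.0 * x**2, 0.0, 1.0)
print(f"Delta_c  = {A0 - 1.0:+.0%}")    # central underdensity of the fit
print(f"Delta_av = {mean - 1.0:+.0%}")  # average underdensity of the fit

For the quoted fit this gives Δ_c = -87% but Δ_av = -52%, which illustrates how strongly the two measures may differ.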
In order to compare our results with observations, we need to estimate the values of ℵ corresponding to voids. Following observational results <cit.>, we assume Gaussianity of the primordial perturbations. In particular, this means that on the linear stage perturbations with the same |ℵ| and size are equally probable, regardless of whether they are underdensities or overdensities. Let us consider an overdensity and an underdensity of the same size and |ℵ| at the linear stage. The underdensity now forms a void of radius R_0 and central underdensity Δ_c, which is related to |ℵ| by equations (<ref>, <ref>). The overdensity collapses into a galaxy cluster of mass
M_0≃4/3π R^3_0 ρ_m,0
at z=z_col, when its amplitude Δρ_m/ρ_m≃ 1, i.e.,
g(z_col)|ℵ|/(z_col+1)=1.
For instance, a galaxy cluster of mass 2· 10^14 M_⊙ approximately corresponds to a void of radius R_0≃ 11 Mpc. There is a one-to-one correspondence between the central underdensity of the void, Δ_c, and the amplitude |ℵ| of the initial perturbation. Equation (<ref>) binds |ℵ| and z_col of the cluster. Since we suppose that the void and the cluster had the same amplitude of the initial perturbation |ℵ|, there is also a one-to-one correspondence between Δ_c and z_col, which is represented in figure <ref>.
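This correspondence can be traced numerically. The sketch below is our own and assumes the standard linear growth factor of flat ΛCDM for g(z), normalized so that g → 1 deep in the matter-dominated stage, which matches the definition used above:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

OM = 0.315

def E(a):
    # dimensionless Hubble rate H(a)/H_0 of flat LCDM
    return np.sqrt(OM / a**3 + 1.0 - OM)

def g(z):
    # suppression factor g = D(a)/a, where D is the standard linear growth
    # factor normalized so that D(a) -> a deep in matter domination
    a = 1.0 / (1.0 + z)
    integ, _ = quad(lambda x: 1.0 / (x * E(x))**3, 0.0, a)
    return 2.5 * OM * E(a) * integ / a

def z_collapse(aleph):
    # solve g(z)*aleph/(z+1) = 1; valid if the perturbation collapses
    # by z = 0, i.e. aleph > 1/g(0)
    return brentq(lambda z: g(z) * aleph / (z + 1.0) - 1.0, 0.0, 30.0)

print(f"g_0 = {g(0.0):.4f}")  # ~0.788 for OM=0.315, cf. g_0 ≃ 0.7879 quoted above
for aleph in (2.0, 3.0, 4.0):
    print(f"aleph={aleph:.1f}  z_col={z_collapse(aleph):.2f}")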
Thus, if we consider voids and galaxy clusters formed from primordial perturbations of the same size (i.e., the void radius and the cluster mass are bound by (<ref>)) and the same |ℵ| (i.e., their Δ_c and z_col are related as shown in Fig. <ref>), their number densities in the Universe should be equal because of the perturbation Gaussianity. Therefore, instead of measuring Δ_c of the voids of some given radius R_0 (which is difficult) we may observe z_col of the corresponding galaxy clusters.
Observations <cit.> show that no clusters of present-day mass exceeding 3· 10^14 M_⊙ form before z=1.75. This means that there are no (or extremely rare) voids of radius R_0≥ 12.5 Mpc with |Δ_c| > 75%. The first clusters of lower mass may occur at z<2.5 <cit.>, which corresponds to Δ_c = -80%. Large voids (R_0≥ 40 Mpc) correspond to clusters of mass > 10^16 M_⊙, which do not exist, because the perturbations of this mass are still linear. Fig. <ref> shows that the central underdensities of the supervoids are Δ_c > -55%: they are also still linear.
Let us emphasize that in the previous paragraph we estimated the maximum possible underdensities of voids. Indeed, only the first clusters with present-day mass 3· 10^14 M_⊙ appear at z=1.75, while most of the clusters of this mass collapse much later. By analogy, there may be only rare (one in a hundred) voids of radius R_0≥ 12.5 Mpc having Δ_c ≃ -75%, but most of the voids of this radius have a significantly higher central density.
In order to estimate not the maximum possible, but the average |Δ_c| of the voids of a given radius, we use the Gaussianity of the primordial perturbations again. Let us randomly choose a comoving sphere of radius L/((z+1)h) (so L/h is the present-day radius of the sphere) deep in the matter-dominated stage, when the perturbations corresponding to voids were linear. The Gaussianity of the perturbations means that the probability density of the average density contrast Δρ_m/ρ_m inside L being equal to δ_L has the Gaussian form
p(δ_L)=1/σ_L √(2π)exp(-δ^2_L/2σ^2_L).
The value of σ_L is proportional to g(z)/(z+1). In view of the foregoing, we assume that the distribution of ℵ looks like
f(ℵ)=1/σ_8 √(2π)exp(-ℵ^2/2σ^2_8),
where σ_8=0.811 <cit.>. Thus, we assume that the distribution of ℵ coincides with that of relatively small perturbations (R_0=L_0/h=8h^-1 Mpc ≃ 11 Mpc). This size roughly corresponds to small voids, while the largest voids exceed R_0=70 Mpc <cit.>. In effect, we suppose that σ_L= const=σ_8. In fact the value of σ decreases with the object size, and thus our approach somewhat overestimates ℵ and the emptiness of large voids.
When the perturbations reach the nonlinear stage, the voids expand significantly faster than the Universe on average. As we have discussed, a void has two regions: a large density plateau in the center, where the matter density is almost constant and equal to q_m,0, and the void walls with a very steep density profile. The plateau part expands k^3(ℵ) times more than the Universe on average. Combining this with (<ref>), we find that the volume V(Δ_c) occupied by the regions with matter density lower than the one corresponding to Δ_c is proportional to
V(Δ_c(ℵ))∝k^3(ℵ)/σ_8 √(2π)exp(-ℵ^2/2σ^2_8).
By expressing ℵ in terms of Δ_c using expressions (<ref>) and (<ref>), we can integrate (<ref>) and obtain the fraction of the Universe volume occupied by regions emptier than Δ_c as a function of Δ_c. This dependence is presented in Fig. <ref>.
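For illustration, this integration can be sketched as follows (our own crude estimate; it reuses solve_k from the first code sketch, and the way the volumes of overdense regions are weighted in the normalization is our simplifying assumption):

import numpy as np
from scipy.integrate import quad

SIGMA8 = 0.811

def f_aleph(aleph):
    # Gaussian distribution of the initial amplitudes, as above
    return np.exp(-aleph**2 / (2.0 * SIGMA8**2)) / (SIGMA8 * np.sqrt(2.0 * np.pi))

def weight(aleph):
    # plateau regions expand k^3 times more than the Universe on average;
    # overdense regions (aleph < 0) are crudely assumed to keep their volume
    return solve_k(aleph)**3 if aleph > 0 else 1.0  # solve_k from the first sketch

total, _ = quad(lambda x: weight(x) * f_aleph(x), -5.0 * SIGMA8, 5.0 * SIGMA8)
for aleph_thr in (1.0, 1.5, 2.0):
    tail, _ = quad(lambda x: weight(x) * f_aleph(x), aleph_thr, 5.0 * SIGMA8)
    dc = 1.0 / solve_k(aleph_thr)**3 - 1.0
    print(f"volume fraction emptier than Delta_c={dc:+.0%}: {tail / total:.3f}")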
It is reasonable to expect that the plateau regions of voids occupy at least 1/3 of the Universe volume, which corresponds to Δ_c ≃ -52%. This is probably the most reasonable estimate of the average central underdensity Δ_c of the voids. According to figure <ref>, regions with Δ_c ≃ -75% and Δ_c ≃ -80% occupy ∼ 1.5% and ∼ 0.1% of the Universe volume, respectively. These fractions are quite consistent with the above consideration that only one in a hundred small voids may have Δ_c ≃ -(75-80)%: this is a very rare exception, not the rule.
Let us sum it up. Most of the voids have Δ_c ∼ -50%; there are no voids with Δ_c < -80%. For small voids (R_0≃ 5-10 Mpc), the underdensity can be significantly greater than |Δ_c| = 50%, but the number of voids decreases rapidly with increasing |Δ_c|, and voids with |Δ_c| > 80% are practically absent. The largest voids (R_0> 40 Mpc) always have |Δ_c| < 50%: they correspond to overdensities of mass 2· 10^16 M_⊙, which exceeds the mass of the richest galaxy clusters. Overdensities of this length scale are still linear in the modern Universe, so the underdensities also could not yet have entered the nonlinear regime. Finally, the voids with R_0= 15-50 Mpc have intermediate values of their parameters: their average Δ_c ∼ -50%, the number of voids decreases rapidly with increasing |Δ_c|, and the upper limit of |Δ_c| depends on the void radius (larger voids are less underdense) and is always smaller than |Δ_c| =75%.
Our estimates seriously contradict the void densities obtained in N-body simulations. For instance, equation (<ref>) is obtained by fitting N-body simulations and suggests an average Δ_c = -87% (A_0=0.13) even for small voids with R_0=8h^-1 Mpc ≃ 11 Mpc. Recent simulations also suggest very underdense voids. <cit.> find that voids occupy ∼ 83% of the Universe volume, and their average underdensity is Δ_av = -65%. The fact that voids occupy 83% of the Universe in the simulations makes it clear that we are talking about the average, not the central density of voids. The relationship between Δ_av and Δ_c depends on the model, but if we use the generally accepted (<ref>), then Δ_av = -65% corresponds to Δ_c = -90%. Thus, simulations consistently give an average of Δ_c = -(85-90)% for voids, while our analytical calculation gives the estimate Δ_c ≃ -50%.
This contradiction is difficult to attribute to the simplifying assumptions we made in our analytical calculation. Section <ref> shows that the void ellipticity very probably does not affect the central void density at all. We also show that the tidal influence on the void center is moderate. Moreover, tidal perturbations created by an external body (at least, to the first approximation) lead to a deformation of the dust cloud on which they act, but not to a change in its volume. Thus, there is no reason to believe that the simplifying assumptions made in our calculations may be responsible for the discrepancy. The N-body simulations are known to suffer from some significant numerical effects <cit.>, and this could be the reason why the simulations significantly overestimate the emptiness of the voids.
The overestimation of the void emptiness by the N-body simulations is evident from the result of simulations <cit.> that the voids with average underdensity Δ_av = -65% occupy ∼ 83% of the Universe volume. Comparing it with equation (<ref>), we may see that this situation is possible only if σ_L≫ 1: all the underdense regions are already deeply nonlinear (Δ_av = -65%), they have already expanded much more than the Universe on average and occupy 83% of the Universe instead of 50% in the linear regime. However, the voids are large, and they correspond to L ≳ 8 Mpc. Since σ_L decreases with L, for voids σ_L ≲σ_8≃ 0.811<1, and the regions with Δ_av = -65% may occupy only ∼ 10% of the Universe volume, not 83% (see figure <ref>).
As equation (<ref>) shows, the "Hubble constant" inside a void, Q, differs from that of the Universe, H, and depends on |ℵ|. It is curious to check whether we may determine the void underdensity by measuring Q. Substituting k=b_0/a_0 into (<ref>), we obtain the relationship binding the present-day ratio Q_0/H_0 and ℵ. With the help of equations (<ref>, <ref>) we may express Q_0/H_0 through Δ_c; the dependence is reproduced in figure <ref>. As we can see, the "Hubble constant" inside voids indeed always exceeds H. However, the prospects for measuring Δ_c by observing Q_0 seem bleak: as Δ_c changes from 0 to -100%, Q_0 increases only by ∼ 25%. Thus, Q_0 is not very sensitive to Δ_c, and it is difficult to extract the value of Δ_c from observations of Q_0 inside the void.
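This statement is easy to check numerically (again a sketch of ours, reusing solve_k from the first code example):

import numpy as np

OM = 0.315

def Q0_over_H0(k, aleph):
    # the equation for Q^2 evaluated today, with a_0/b_0 = 1/k
    return np.sqrt((1.0 - OM) + OM / k**3 + (5.0 / 3.0) * aleph * OM / k**2)

for aleph in (0.5, 1.0, 2.0):
    k = solve_k(aleph)  # solve_k as defined in the first sketch
    print(f"Delta_c={1.0 / k**3 - 1.0:+.0%}  Q0/H0={Q0_over_H0(k, aleph):.3f}")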
Finally, a question arises: if the voids contain so much matter, why do they look so empty? Indeed, the density of bright galaxies in the voids is tens of times lower than in the Universe on average, while the density of matter in most voids, as we have seen, is only two to three times lower than the average one. The reason for this discrepancy is probably quite simple. We saw that the interior of the void can be considered as a part of a Friedmann space, but its parameters differ from those of our Universe. Firstly, the moment of equality of the matter and dark energy densities occurs somewhat earlier there, and so the deviation of g_0 from unity is noticeably larger. For this reason alone, the growth of cosmological perturbations in the void is suppressed significantly more strongly than in the Universe on average. Secondly, the "universe" inside the void corresponds to the open Friedmann model: the three-dimensional space there has Lobachevsky geometry. The open universe expands faster than the flat one, and this also suppresses the perturbation growth.
The problem of the weaker perturbation growth inside voids certainly requires additional study. Note, however, that both of the above-mentioned suppression mechanisms are activated comparatively late, when the amplitude of the void-forming underdensity becomes noticeable (z<5). Thus, large structures in the voids, like large galaxies and galaxy clusters, are strongly suppressed. But low-mass structures, such as dwarf galaxies or dark matter clumps, collapse at z>10 and therefore form in the void in almost the same way as in the rest of the Universe: their present-day number density there is only k^3∼ 2-3 times lower than the average in the Universe; it follows the matter density. Thus, voids contain a significant amount of dark and baryonic matter, but it forms only low-mass, faintly luminous objects and a massive diffuse component, and is therefore not visible.
We would like to thank the Academy of Finland (mobility grant 1341541) for the financial support of this work.
§ REFERENCES

[Baushev(2015)] Baushev, A.N., 2015. The real and apparent convergence of N-body simulations of the dark matter structures: Is the Navarro-Frenk-White profile real? Astroparticle Physics 62, 47–53. doi:10.1016/j.astropartphys.2014.07.012, arXiv:1312.0314.
[Baushev(2019)] Baushev, A.N., 2019. The radius, at which a galaxy group stops the Hubble stream, and the group mass: an exact analytical solution. MNRAS 490, L38–L41. doi:10.1093/mnrasl/slz143, arXiv:1907.08716.
[Baushev(2020)] Baushev, A.N., 2020. Hubble stream near a massive object: The exact analytical solution for the spherically-symmetric case. Phys. Rev. D 102, 083529. doi:10.1103/PhysRevD.102.083529, arXiv:2004.01427.
[Baushev(2021)] Baushev, A.N., 2021. The central region of a void: an analytical solution. MNRAS 504, L56–L60. doi:10.1093/mnrasl/slab036, arXiv:2104.01359.
[Baushev and Barkov(2018)] Baushev, A.N., Barkov, M.V., 2018. Why does Einasto profile index n ≃ 6 occur so frequently? JCAP 2018(03), 034. doi:10.1088/1475-7516/2018/03/034, arXiv:1705.05302.
[Baushev et al.(2017)] Baushev, A.N., del Valle, L., Campusano, L.E., Escala, A., Muñoz, R.R., Palma, G.A., 2017. Cusps in the center of galaxies: a real conflict with observations or a numerical artefact of cosmological simulations? JCAP 2017(05), 042. doi:10.1088/1475-7516/2017/05/042, arXiv:1606.02835.
[Baushev and Pilipenko(2020)] Baushev, A.N., Pilipenko, S.V., 2020. The central cusps in dark matter halos: Fact or fiction? Physics of the Dark Universe 30, 100679. doi:10.1016/j.dark.2020.100679, arXiv:1808.03088.
[Bertschinger(1985)] Bertschinger, E., 1985. The self-similar evolution of holes in an Einstein-de Sitter universe. ApJS 58, 1–37. doi:10.1086/191027.
[Blumenthal et al.(1992)] Blumenthal, G.R., da Costa, L.N., Goldwirth, D.S., Lecar, M., Piran, T., 1992. The Largest Possible Voids. ApJ 388, 234. doi:10.1086/171147.
[Bocquet et al.(2019)] Bocquet, S., Dietrich, J.P., Schrabback, T., Bleem, L.E., Klein, M., et al., 2019. Cluster Cosmology Constraints from the 2500 deg^2 SPT-SZ Survey: Inclusion of Weak Gravitational Lensing Data from Magellan and the Hubble Space Telescope. ApJ 878, 55. doi:10.3847/1538-4357/ab1f10, arXiv:1812.01679.
[van den Bosch et al.(2018)] van den Bosch, F.C., Ogiya, G., Hahn, O., Burkert, A., 2018. Disruption of Dark Matter Substructure: Fact or Fiction? MNRAS 474, 3043–3066. doi:10.1093/mnras/stx2956, arXiv:1711.05276.
[Curtis et al.(2024)] Curtis, O., McDonough, B., Brainerd, T.G., 2024. Properties of Voids and Void Galaxies in the TNG300 Simulation. ApJ 962, 58. doi:10.3847/1538-4357/ad18b4, arXiv:2401.02322.
[Gorbunov and Rubakov(2011)] Gorbunov, D.S., Rubakov, V.A., 2011. Introduction to the theory of the early universe.
[Kirshner et al.(1987)] Kirshner, R.P., Oemler, A., Jr., Schechter, P.L., Shectman, S.A., 1987. A Survey of the Bootes Void. ApJ 314, 493. doi:10.1086/165080.
[Kiyota et al.(2024)] Kiyota, T., Ando, M., Tanaka, M., Finoguenov, A., Shariar Ali, S., Coupon, J., Desprez, G., Gwyn, S., Sawicki, M., Shimakawa, R., 2024. Cluster candidates with massive quiescent galaxies at z∼2. arXiv e-prints, arXiv:2406.02849. doi:10.48550/arXiv.2406.02849.
[Lavaux and Wandelt(2012)] Lavaux, G., Wandelt, B.D., 2012. Precision Cosmography with Stacked Voids. ApJ 754, 109. doi:10.1088/0004-637X/754/2/109, arXiv:1110.0345.
[Sheth and van de Weygaert(2004)] Sheth, R.K., van de Weygaert, R., 2004. A hierarchy of voids: much ado about nothing. MNRAS 350, 517–538. doi:10.1111/j.1365-2966.2004.07661.x, arXiv:astro-ph/0311260.
[Tolman(1934)] Tolman, R.C., 1934. Effect of Inhomogeneity on Cosmological Models. Proceedings of the National Academy of Science 20, 169–176. doi:10.1073/pnas.20.3.169.
[van de Weygaert and van Kampen(1993)] van de Weygaert, R., van Kampen, E., 1993. Voids in Gravitational Instability Scenarios - Part One - Global Density and Velocity Fields in an Einstein-de Sitter Universe. MNRAS 263, 481. doi:10.1093/mnras/263.2.481.
[White and Silk(1979)] White, S.D.M., Silk, J., 1979. The growth of aspherical structure in the universe: Is the Local Supercluster an unusual system? ApJ 231, 1–9. doi:10.1086/157156.
[Workman et al.(2022)] Workman, R.L., et al. (Particle Data Group), 2022. Review of Particle Physics. PTEP 2022, 083C01. doi:10.1093/ptep/ptac097.
[Zeldovich and Novikov(1983)] Zeldovich, I.B., Novikov, I.D., 1983. Relativistic astrophysics. Vol.2: The structure and evolution of the universe.
|
http://arxiv.org/abs/2409.03192v1 | 20240905023207 | PEPL: Precision-Enhanced Pseudo-Labeling for Fine-Grained Image Classification in Semi-Supervised Learning | [
"Bowen Tian",
"Songning Lai",
"Lujundong Li",
"Zhihao Shuai",
"Runwei Guan",
"Tian Wu",
"Yutao Yue"
] | cs.CV | [
"cs.CV"
] |
PEPL: Precision-Enhanced Pseudo-Labeling for Fine-Grained Image Classification in Semi-Supervised Learning
1st Bowen Tian^* (The Hong Kong University of Science and Technology (Guangzhou) (HKUST(GZ)); Deep Interdisciplinary Intelligence Lab (DI^2 Lab), Guangzhou, China) [email protected]
2nd Songning Lai^* (HKUST(GZ); DI^2 Lab, Guangzhou, China) [email protected]
3rd Lujundong Li (HKUST(GZ), Guangzhou, China) [email protected]
4th Zhihao Shuai (HKUST(GZ), Guangzhou, China) [email protected]
5th Runwei Guan (HKUST(GZ); Institute of Deep Perception Technology, JITRI; University of Liverpool) [email protected]
6th Tian Wu (Nanchang University, Nanchang, China) [email protected]
7th Yutao Yue^† (HKUST(GZ); Institute of Deep Perception Technology, JITRI; DI^2 Lab, Guangzhou, China) [email protected]

^* The first two authors contributed equally to this work. ^† Corresponding author: Yutao Yue ([email protected]).
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Fine-grained image classification has witnessed significant advancements with the advent of deep learning and computer vision technologies. However, the scarcity of detailed annotations remains a major challenge, especially in scenarios where obtaining high-quality labeled data is costly or time-consuming. To address this limitation, we introduce the Precision-Enhanced Pseudo-Labeling (PEPL) approach specifically designed for fine-grained image classification within a semi-supervised learning framework. Our method leverages the abundance of unlabeled data by generating high-quality pseudo-labels that are progressively refined through two key phases: initial pseudo-label generation and semantic-mixed pseudo-label generation. These phases utilize Class Activation Maps (CAMs) to accurately estimate the semantic content and generate refined labels that capture the essential details necessary for fine-grained classification. By focusing on semantic-level information, our approach effectively addresses the limitations of standard data augmentation and image-mixing techniques in preserving critical fine-grained features. We achieve state-of-the-art performance on benchmark datasets, demonstrating significant improvements over existing semi-supervised strategies, with notable boosts in accuracy and robustness. Our code has been open-sourced at https://github.com/TianSuya/SemiFG.
Fine-grained Image Classification, Semi-Supervised Learning, Label Mixing
§ INTRODUCTION
Fine-grained image classification <cit.>, which involves distinguishing between visually similar classes, plays a crucial role in various applications such as species identification, product categorization, and medical diagnostics. Despite the remarkable success of deep learning in computer vision <cit.>, achieving high accuracy in fine-grained classification remains challenging due to the scarcity of labeled data and the subtlety of distinguishing features <cit.>.
The limited availability of labeled data, particularly in fine-grained domains, hinders the development of robust models. To mitigate this issue, semi-supervised learning (SSL) <cit.> techniques have been proposed to leverage large amounts of unlabeled data alongside a small labeled dataset. SSL methods, including pseudo-labeling <cit.> and consistency regularization <cit.>, have shown promise in improving model performance with limited supervision. However, existing SSL approaches face significant challenges when applied to fine-grained image classification. Standard data augmentation techniques <cit.> <cit.> can disrupt critical visual cues and destroy fine-grained image features, and image region mixing may overlook the fine details essential for accurate classification <cit.>.
To address these challenges, we present a novel Precision-Enhanced Pseudo-Labeling (PEPL) approach tailored for fine-grained image classification. PEPL leverages CAMs <cit.> to generate high-quality pseudo-labels that capture the essential details necessary for fine-grained classification. Specifically, our method consists of two key phases: Initial Pseudo-Label Generation and Semantic-Mixed Pseudo-Label Generation. These phases utilize CAMs <cit.> to accurately estimate the semantic content <cit.> and generate refined labels. By focusing on semantic-level information, our approach effectively addresses the limitations of standard data augmentation and image-mixing techniques in preserving critical fine-grained features. We conducted extensive experiments on two commonly used fine-grained classification datasets, and the results show that our method far exceeds the most advanced and representative semi-supervised methods <cit.>: it achieves a 13% improvement in accuracy over the fully supervised model on the CUB_200_2011 dataset using 20% labeled data, and results similar to supervised learning using only 30% labeled data.
The key contributions of our work are as follows:
(i) We propose the Precision-Enhanced Pseudo-Labeling (PEPL) approach specifically designed for fine-grained image classification.
(ii) Our method generates high-quality pseudo-labels using CAMs, which are progressively refined to enhance the precision of the pseudo-labels.
(iii) We demonstrate significant improvements in performance on benchmark datasets, outperforming existing semi-supervised strategies and achieving state-of-the-art accuracy.
§ METHODS
§.§ Stage I: Initial Pseudo-Label Generation
Inspired by the concept of FreeMatch <cit.>, our approach relies on the adaptive selection of confidence thresholds, which are dynamically adjusted based on the model's predictive performance on unlabelled data. Rather than adopting a static approach, we holistically evaluate the model's predictions across all classes at each iteration.
First, we holistically consider all categories to determine the overall prediction level for the current iteration; this collective consideration ensures that the final thresholds are not only category-specific but also responsive to the evolving model performance. After each round of predictions, we apply the following equation to the class outputs of every unlabelled sample:
τ_t =
1/C, if t = 0,
βτ_t-1+(1-β)·(1/(μ B))∑_b=1^μ B max(q_b), otherwise
where C represents the total number of categories, β is a pre-set hyperparameter that controls the EMA momentum, μ B is the batch size of the unlabeled data (B is the batch size of the labeled data and μ is a preset multiplicative factor), q_b represents the model's predicted class distribution for the b-th unlabeled sample, and τ_t represents the global threshold at step t.
To address the issue of class imbalance in the model's predictive capability, we compute an individual model prediction threshold for each class using the following formula:
p̂_t(c) =
1/C, if t = 0,
βp̂_t-1(c)+(1-β)·(1/(μ B))∑_b=1^μ B q_b(c), otherwise
where c represents the current category number.
After obtaining the individual prediction thresholds for each class, we combine the overall threshold and the class-specific thresholds to determine the confidence selection threshold for each class at the current moment:
τ_t(c) = MaxNorm(p̂_t(c))·τ_t = p̂_t(c)/max{p̂_t(c'):c'∈[C]}·τ_t
where p̂_t = [p̂_t(1),p̂_t(2), ⋯, p̂_t(C)]. This integration of both global and class-wise thresholds allows us to strike a balance between the general performance of the model and the unique characteristics of each class. By doing so, we can effectively select confident predictions for each class, enhancing the reliability of the pseudo-labels in the semi-supervised learning process.
With the thresholds calculated for each class, we can now employ them in the initial generation of pseudo-labels:
𝕀(max(q_b) > τ_t(argmax(q_b)))
where 𝕀(·) represents the indicator function, which is 1 when the condition is met and 0 otherwise. This step assigns provisional labels to unlabelled samples based on their highest predicted probabilities using derived thresholds. This creates a set of pseudo-labels reflecting the model's confidence, which will be refined in subsequent training iterations of the semi-supervised learning algorithm.
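A compact implementation sketch of this stage is given below. It is ours, written in PyTorch directly from the equations above rather than from any released code, and all names are our own:

import torch

class AdaptiveThreshold:
    # FreeMatch-style self-adaptive thresholds (Stage I of PEPL)

    def __init__(self, num_classes, beta=0.999):
        self.beta = beta
        self.tau = 1.0 / num_classes                            # global threshold
        self.p = torch.full((num_classes,), 1.0 / num_classes)  # per-class EMA

    @torch.no_grad()
    def update(self, probs):
        # probs: (mu*B, C) softmax outputs on the unlabeled batch
        conf, _ = probs.max(dim=1)
        self.tau = self.beta * self.tau + (1.0 - self.beta) * conf.mean().item()
        self.p = self.beta * self.p + (1.0 - self.beta) * probs.mean(dim=0)

    @torch.no_grad()
    def select(self, probs):
        # class-wise threshold tau_t(c) = MaxNorm(p_t(c)) * tau_t
        tau_c = self.p / self.p.max() * self.tau
        conf, pseudo = probs.max(dim=1)
        mask = conf > tau_c[pseudo]   # I(max(q_b) > tau_t(argmax(q_b)))
        return pseudo, mask

Here pseudo[mask] would be kept as the initial pseudo-labels of the batch.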
§.§ Stage II: Hybrid Semantic Pseudo-Label Generation
The utility of pseudo-labels alone in enhancing model performance is somewhat limited. To better exploit the potential of unlabeled images, building on the pseudo-labels generated in Stage I, we randomly blend images and then estimate the semantic information contained in the mixed images, creating hybrid semantic pseudo-labels for them. To quantify the semantic composition of the mixed images, we need to measure the semantic correlation between each original image's pixels and its label. An effective way to achieve this is through Class Activation Maps (CAMs), which reveal how image regions relate to semantic classes. We initially employ an attention mechanism <cit.> to compute the class activation map of the input image. Let the feature map at the l-th layer be represented as:
F_l(I_i) ∈𝐑^c× h × w
where I_i represents the input image, and c, h, w represent the number of channels, the height, and the width of the feature map, respectively. We can match the activation map from the l-th layer to the size of the input image using an upsampling operation:
CAM(I_i) = ϕ(∑_t=0^d w_y_i^l,t F_l^t(I_i))
where CAM(·) represents an activation map of the same size as the input image, ϕ(·) represents the upsampling operation, t indexes the channels of the feature map F_l, and w_y_i^l,t is the classifier weight connecting channel t to class y_i. Next, we normalize CAM(I_i) to obtain a map whose values sum to 1:
S(I_i)=CAM(I_i)/sum(CAM(I_i))
where S(·) represents the normalized activation map. This step rescales the CAM associated with the image I_i such that the sum of all its values equals unity. When blending images, we infer the label of the mixed image based on the semantic proportions of each component of the original images:
ρ_a = 1 - sum(M_λ^a⊙ S(I_a)),
ρ_b = sum(M_λ^b⊙ S(I_b))
The process of estimating the label for the blended image involves considering the relative semantic contributions of the individual parts from the original images. For each blended image, we estimate the proportions ρ_a and ρ_b of the semantic pseudo-labels a and b respectively. M_λ^a represents the part of input a that is removed, and M_λ^b represents the part of input b that is blended into input a. We derive ρ_a by subtracting the removed portion from 1, and ρ_b by estimating the semantic proportion of the blended part. These proportions reflect the combined semantic content of the blended image.
For each batch of unlabeled data, we first generate preliminary pseudo-labels. Then, we randomly combine these pseudo-labelled samples to create mixed instances along with their corresponding hybrid semantic pseudo-labels. These hybrid labels are used to iteratively refine and optimize the model during training.
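The following sketch (ours) shows how the normalized CAMs and a binary blending mask could be combined to produce the mixed images and the proportions ρ_a and ρ_b. The exact mask-generation policy (e.g., CutMix-style rectangles) is not specified here and is our assumption:

import torch
import torch.nn.functional as F

def normalized_cam(cam):
    # S(I) = CAM(I) / sum(CAM(I)); cam: (B, H, W), clamped to be non-negative
    cam = cam.clamp(min=0)
    return cam / cam.flatten(1).sum(dim=1).clamp(min=1e-8).view(-1, 1, 1)

def semantic_mix(img_a, img_b, cam_a, cam_b, mask):
    # img_*: (B, 3, H, W); cam_*: (B, h, w) for the corresponding pseudo-label;
    # mask: float (B, H, W), 1 where pixels of img_b replace pixels of img_a
    size = img_a.shape[-2:]
    s_a = normalized_cam(F.interpolate(cam_a.unsqueeze(1), size=size,
                                       mode="bilinear",
                                       align_corners=False).squeeze(1))
    s_b = normalized_cam(F.interpolate(cam_b.unsqueeze(1), size=size,
                                       mode="bilinear",
                                       align_corners=False).squeeze(1))
    mixed = img_a * (1.0 - mask).unsqueeze(1) + img_b * mask.unsqueeze(1)
    rho_a = 1.0 - (mask * s_a).flatten(1).sum(dim=1)  # semantics of a that remain
    rho_b = (mask * s_b).flatten(1).sum(dim=1)        # semantics of b pasted in
    return mixed, rho_a, rho_b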
§.§ Loss Function for the Whole Framework
We can divide the overall loss function into a supervised loss ℒ_sup and an unsupervised loss ℒ_unsup, computed as follows:
ℒ_sup = ℋ(p_m(x_i|θ),y_i),
ℒ_unsup = ℋ(p_m(x_a|θ),y_a) ·ρ_a + ℋ(p_m(x_b|θ),y_b) ·ρ_b,
ℒ_total = γℒ_sup + λℒ_unsup
where p_m represents the predicted output for input x_i when the parameter is θ, and ℋ(·) represents the cross-entropy loss function. y_a and y_b represent two semantic pseudo-labels generated by the steps above. γ and λ represent the weights of supervised and unsupervised losses, respectively.
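Putting the pieces together, the total objective could be assembled as follows (a sketch of ours that follows the equations above; it is not the authors' released implementation):

import torch.nn.functional as F

def pepl_loss(logits_lab, y, logits_mix, y_a, y_b, rho_a, rho_b,
              gamma=1.0, lam=1.0):
    # supervised cross-entropy on the labeled batch
    l_sup = F.cross_entropy(logits_lab, y)
    # unsupervised part: cross-entropy against both pseudo-labels of the mixed
    # images, weighted per sample by the estimated semantic proportions
    ce_a = F.cross_entropy(logits_mix, y_a, reduction="none")
    ce_b = F.cross_entropy(logits_mix, y_b, reduction="none")
    l_unsup = (rho_a * ce_a + rho_b * ce_b).mean()
    return gamma * l_sup + lam * l_unsup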
§ EXPERIMENT AND RESULT
§.§ Setup
Datasets. To evaluate the effectiveness of PEPL, we conducted experiments on two standard fine-grained classification datasets: CUB_200_2011 <cit.> and Stanford Cars <cit.>. The first dataset, introduced by Caltech in 2010, comprises 11,788 images across 200 bird species, with 5,994 images for training and 5,794 for testing. This dataset is widely used as a benchmark for fine-grained classification and recognition research. Stanford Cars, released by Stanford AI Lab in 2013, includes 16,185 images of 196 car models, with 8,144 images for training and 8,041 for testing. The dataset is designed for fine-grained classification tasks and categorizes cars by brand, model, and year.
Settings. We conducted experiments on a single NVIDIA A800 80G GPU. A ResNet50 pre-trained on ImageNet served as the base classification model. Training ran for 200 epochs, with a batch size of 16 for labeled data. For unlabeled data, the batch size was set to 112 (μ = 7). The initial learning rate was 0.01, decreasing by a factor of 0.1 every 80 epochs. After reaching 0.0001, a cosine annealing scheduler was applied to gradually reduce the learning rate to 0 over the last 40 epochs. The hyperparameter β for pseudo-label generation in Stage I was set to 0.999 to ensure a stable growth trend. Both the loss weights (γ and λ) for supervised and unsupervised learning were set to 1.
§.§ Results and Analysis
Evaluation Metric. We chose multi-classification accuracy as our evaluation metric, defined below:
Accuracy = (TP+TN)/ALL
where TP+TN represents the number of correctly classified samples, and ALL represents the total number of samples.
Performance. The main experimental results are summarized in Table <ref>. We compared our method with the classic semi-supervised learning approaches Pi-Model <cit.> and Pseudo-Label <cit.>, as well as the state-of-the-art methods FlexMatch <cit.> and FreeMatch <cit.>, under scenarios with 10%, 20%, and 30% of the total data labeled, and also when all labeled data were used. We also compared with purely supervised learning (Supervised-Only). The perturbation method of Pi-Model and the strong augmentation methods of FlexMatch and FreeMatch all used RandAugment <cit.>. The classification accuracy on the two datasets clearly demonstrates that our proposed PEPL method consistently outperforms other semi-supervised learning methods under different label proportions. Using just 30% of the labels, our method achieves results comparable to supervised training with 100% of the labeled data. With 10% and 20% of the labeled data, our method outperforms state-of-the-art semi-supervised methods by approximately 8%, and improves accuracy by about 10% to 13% compared to purely supervised training. These results fully demonstrate the effectiveness of the proposed PEPL semi-supervised learning framework in enhancing fine-grained classification performance across different datasets.
Ablation Study. To further validate the effectiveness of the semantically mixed pseudo-labels introduced by PEPL, we compared it with direct mixing that generates pseudo-labels without semantic mixing on the CUB dataset. As shown in Table <ref> and combined with Table <ref>, we find that while direct mixing without semantic mixing still achieves some improvement compared to purely supervised learning, adding semantic mixing results in an additional performance gain of about 4% to 9%. This fully demonstrates the rationale behind introducing semantically mixed pseudo-labels in PEPL.
Case Study. To more intuitively demonstrate the superiority of the PEPL method, we exported models trained with the FreeMatch method and the PEPL method on 30% labeled data. We calculated class attention maps using the output of the last convolutional layer and visualized them. As shown in Figure <ref>, the class attention maps of the PEPL method focus more on areas where the current class may have fine-grained differences from other classes (such as car logos and rearview mirrors). This intuitively indicates that the PEPL method can better enhance the model's perception of fine-grained features.
§ CONCLUSION
In this paper, we introduced the PEPL method, which effectively addresses the challenges faced by semi-supervised learning methods in the domain of fine-grained image classification. By leveraging CAMs to generate high-quality pseudo-labels, PEPL overcomes the limitations of standard data augmentation and image-mixing techniques. The simplicity and effectiveness of PEPL make it a valuable addition to the toolkit of researchers and practitioners working on fine-grained classification, alleviating the exceptionally severe label-scarcity problem. Its flexibility and strong performance position PEPL as a method that can significantly advance the state of the art in semi-supervised learning and inspire further research into innovative approaches for fine-grained image classification.
§ REFERENCES

[1] Yafei Wang and Zepeng Wang, "A survey of recent work on fine-grained image classification techniques," Journal of Visual Communication and Image Representation, vol. 59, pp. 210–214, 2019.
[2] Yao Rong, Wenjia Xu, Zeynep Akata, and Enkelejda Kasneci, "Human attention in fine-grained classification," arXiv preprint arXiv:2111.01628, 2021.
[3] Peiqin Zhuang, Yali Wang, and Yu Qiao, "Learning attentive pairwise interaction for fine-grained classification," in Proceedings of the AAAI Conference on Artificial Intelligence, 2020, vol. 34, pp. 13130–13137.
[4] Mohammed Abdullahi, Olaide Nathaniel Oyelade, Armand Florentin Donfack Kana, Mustapha Aminu Bagiwa, Fatimah Binta Abdullahi, Sahalu Balarabe Junaidu, Ibrahim Iliyasu, Ajayi Ore-ofe, and Haruna Chiroma, "A systematic literature review of visual feature learning: deep learning techniques, applications, challenges and future directions," Multimedia Tools and Applications, pp. 1–58, 2024.
[5] Md Eshmam Rayed, SM Sajibul Islam, Sadia Islam Niha, Jamin Rahman Jim, Md Mohsin Kabir, and MF Mridha, "Deep learning for medical image segmentation: State-of-the-art advancements and challenges," Informatics in Medicine Unlocked, p. 101504, 2024.
[6] Songning Lai, Xifeng Hu, Haoxuan Xu, Zhaoxia Ren, and Zhi Liu, "Multimodal sentiment analysis: A survey," Displays, p. 102563, 2023.
[7] Jingcai Guo, Zhijie Rao, Song Guo, Jingren Zhou, and Dacheng Tao, "Fine-grained zero-shot learning: Advances, challenges, and prospects," arXiv preprint arXiv:2401.17766, 2024.
[8] Yves Grandvalet and Yoshua Bengio, "Semi-supervised learning by entropy minimization," NeurIPS, vol. 17, 2004.
[9] Fan Yang, Kai Wu, Shuyi Zhang, Guannan Jiang, Yong Liu, Feng Zheng, Wei Zhang, Chengjie Wang, and Long Zeng, "Class-aware contrastive semi-supervised learning," in CVPR, 2022, pp. 14421–14430.
[10] Changyu Zeng, Wei Wang, Anh Nguyen, and Yutao Yue, "Self-supervised learning for point cloud data: A survey," Expert Systems with Applications, p. 121354, 2023.
[11] Dong-Hyun Lee et al., "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks."
[12] Samuli Laine and Timo Aila, "Temporal ensembling for semi-supervised learning," arXiv preprint arXiv:1610.02242, 2016.
[13] Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le, "Randaugment: Practical automated data augmentation with a reduced search space," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020, pp. 702–703.
[14] Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le, "Autoaugment: Learning augmentation policies from data," 2019.
[15] Jong-Chyi Su, Zezhou Cheng, and Subhransu Maji, "A realistic evaluation of semi-supervised learning for fine-grained classification," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 12966–12975.
[16] Peng-Tao Jiang, Chang-Bin Zhang, Qibin Hou, Ming-Ming Cheng, and Yunchao Wei, "Layercam: Exploring hierarchical class activation maps for localization," IEEE Transactions on Image Processing, vol. 30, pp. 5875–5888, 2021.
[17] Mohammed Bany Muhammad and Mohammed Yeasin, "Eigen-cam: Class activation map using principal components," in 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020, pp. 1–7.
[18] Hanwei Zhang, Felipe Torres, Ronan Sicre, Yannis Avrithis, and Stephane Ayache, "Opti-cam: Optimizing saliency maps for interpretability," Computer Vision and Image Understanding, p. 104101, 2024.
[19] Zhaozheng Chen, Tan Wang, Xiongwei Wu, Xian-Sheng Hua, Hanwang Zhang, and Qianru Sun, "Class re-activation maps for weakly-supervised semantic segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 969–978.
[20] Xiangli Yang, Zixing Song, Irwin King, and Zenglin Xu, "A survey on deep semi-supervised learning," IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 9, pp. 8934–8954, 2022.
[21] Yassine Ouali, Céline Hudelot, and Myriam Tami, "An overview of deep semi-supervised learning," arXiv preprint arXiv:2006.05278, 2020.
[22] Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, Zhen Wu, Jindong Wang, Marios Savvides, Takahiro Shinozaki, Bhiksha Raj, et al., "Freematch: Self-adaptive thresholding for semi-supervised learning," arXiv preprint, 2022.
[23] Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, and Xiaoou Tang, "Residual attention network for image classification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3156–3164.
[24] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, "The Caltech-UCSD Birds-200-2011 dataset," Tech. Rep. CNS-TR-2011-001, California Institute of Technology, 2011.
[25] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei, "3d object representations for fine-grained categorization," in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2013, pp. 554–561.
[26] Dong-Hyun Lee, "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks," ICML 2013 Workshop: Challenges in Representation Learning (WREPL), 07 2013.
[27] Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki, "Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling," NeurIPS, vol. 34, pp. 18408–18419, 2021.
|
http://arxiv.org/abs/2409.03557v1 | 20240905141749 | Patterns of the $V_2$-polynomial of knots | [
"Stavros Garoufalidis",
"Shana Yunsheng Li"
] | math.GT | [
"math.GT",
"hep-th"
] |
Patterns of the V_2-polynomial of knots
International Center for Mathematics, Department of Mathematics
Southern University of Science and Technology
Shenzhen, China <http://people.mpim-bonn.mpg.de/stavros>[email protected]
Department of Mathematics
University of Illinois
Urbana, IL, USA <https://li-yunsheng.github.io>[email protected]
Key words and phrases:
Knots, Jones polynomial, V_n-polynomial, Nichols algebras, R-matrices,
Yang–Baxter equation, knot polynomials, knot genus, Conway mutations, Khovanov
Homology, Heegaard Floer Homology, homologically thin and thick knots, tight and
loose knots.
§ ABSTRACT
Recently, Kashaev and the first author defined a sequence V_n of 2-variable
knot polynomials with integer coefficients, coming from the R-matrix of a rank
2 Nichols algebra, the first polynomial having been identified with the Links–Gould
polynomial. In this note we present the results of the computation of the
V_n polynomials for n=1,2,3,4 and discover applications and emerging patterns,
including unexpected Conway mutations that seem undetected by the V_n-polynomials
as well as by Heegaard Floer Homology and Knot Floer Homology.
Stavros Garoufalidis
Shana Yunsheng Li
5 September 2024
=====================
§ INTRODUCTION
§.§ A sequence of 2-variable knot polynomials
Recently, Rinat Kashaev and the first author defined multivariable polynomials
of knots using Nichols algebras <cit.> with automorphisms. In our paper,
we focus on the sequence of 2-variable polynomial invariants V_n(t,q) of knots
which come by applying a general construction of <cit.> to one of the
simplest Nichols f-algebras of rank 2 of diagonal type.
This algebra depends on one variable q that determines the braiding and two
variables (t_1,t_2) that determine the automorphism type, and when q is not
a root of unity and (t_1,t_2,q) satisfy the relation t_1t_2q^n=1 for some
positive integer n>0, then the Nichols algebra has a right Yetter–Drinfel'd
f-module Y_n of dimension 4n <cit.> and an explicit
R-matrix T_n. Taking this as a black box, and parametrizing the three variables
(t_1,t_2,q) satisfying t_1t_2q^n=1 in terms of two variables (t,q) by (t_1,t_2)=(1/(q^n/2 t), t/q^n/2), it was shown in <cit.> that Y_n
comes equipped with an R-matrix T_n which leads to a matrix-valued knot invariant
K ↦ J_T_n(K) ∈End(Y_n)
as well as to a scalar-valued invariant
V_K,n(t,q)∈ℤ[t^± 1,q^± 1/2] given by the (1,1)-entry of
J_T_n(K).
It was advocated in <cit.> that the sequence V_n of 2-variable knot
polynomials has similarities and differences with the sequence of the Jones
polynomial of a knot and its parallels, otherwise known as the colored Jones
polynomial.
The polynomial invariant V_n satisfies
*
the symmetry
V_K,n(t,q)=V_K,n(t^-1,q), V_K̅,n(t,q)=V_K,n(t,q^-1)
where the first equality comes from the involution exchanging t_1 and t_2,
and in the second one K̅ denotes the mirror image of K,
*
conjecturally, the specialization
V_K,n(q,q)=1, V_K,n(t,1) =Δ_K(t)^2
where Δ_K(t) ∈[t^± 1/2] is the symmetrized
(i.e., Δ_K(t)=Δ_K(t^-1)), normalized (i.e., Δ_K(1)=1)
Alexander polynomial,
*
conjecturally, the relation V_1 = LG with the Links–Gould
invariant LG <cit.>,
*
and conjecturally, the genus bound
deg_t V_K,n≤ 4 g(K)
where the Seifert genus g(K) is the smallest genus of a spanning surface of a knot.
Here, by t-degree of a Laurent polynomial of t we mean the difference between
the highest and the lowest power of t.
The paper <cit.> stimulated a lot of subsequent work.
The relation V_1 = LG is now known <cit.>, and consequently
the specialization (<ref>) and the genus bounds (<ref>) hold for
n=1 since they hold for the Links–Gould invariant <cit.>.
On the other hand, it is known that the Links–Gould invariant does not detect
Conway mutation, whereas the genus does, and hence the genus bounds (<ref>)
cannot be sharp for n=1.
More recently, an expression of V_2 in terms of V_1 polynomial of a knot
and its (2,1)-parallel was obtained <cit.>, and this proves both the
specialization (<ref>) and the genus bound (<ref>) for n=2.
But of all those properties of V_2, the last inequality is the most intriguing.
Based on some experiments with few knots and 12 and 13 crossings, it was observed
in <cit.> that in all computed cases, the inequality (<ref>) is
in fact an equality. Is this an accident for knots with low numbers of
crossings, or a new phenomenon? To decide one way or another, one needs an efficient
way to compute the V_n-polynomials of knots, do so, and sieve the data.
This is exactly what we did, and it led to the results of our paper.
§.§ Is the genus bound an equality for V_2?
Since we are talking about tables of knots and their invariants, we will be
using the naming of the HTW table of knots up to 16 crossings <cit.>
imported in <cit.> and also in
<cit.>.
The inequality (<ref>) combined with the specialization (<ref>)
for n=2 implies that
2 deg_t Δ_K(t) ≤ deg_t V_K,2(t,q) ≤ 4 g(K) .
On the other hand,
deg_t Δ_K(t) ≤ 2 g(K)
with equality if and only if a knot is Alexander-tight, otherwise Alexander-loose.
In our paper we abbreviate these two classes simply with tight/loose, similar to what
people do in Heegaard Floer Homology and Khovanov Homology where they talk about
HFK-thin/thick or Kh-thin/thick knots, but then they drop the HFK or Kh once
the context is clear. Likewise, we use the term thin in our paper to mean HFK and
Kh-thin. Note that alternating knots are tight <cit.>.
What's more, quasi-alternating knots (a class that includes all alternating knots)
introduced by Ozsváth–Szabó in <cit.> are HFK and
Khovanov-thin <cit.>, and hence tight.
Combining the above two inequalities, it follows that
the inequality (<ref>) is in fact an equality for V_2 and all tight knots.
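For concreteness, the combination can be spelled out in one line: a tight knot satisfies deg_t Δ_K(t) = 2 g(K), hence
4 g(K) = 2 deg_t Δ_K(t) ≤ deg_t V_K,2(t,q) ≤ 4 g(K) ,
and all inequalities collapse to equalities.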
Note next that there are no loose knots with ≤ 10 crossings. Moreover, the number
of loose knots with ≤ 16 crossings is given in Table (<ref>).
Incidentally, the list of loose knots was compiled by computing
the Alexander polynomial, and also the HFK (and in particular, the Seifert genus
of a knot).
Among the loose knots are the ones with trivial Alexander polynomial,
which are in some sense extreme.
(up to mirror) with 11 crossings is
11n34^∗, 11n42^∗, 11n45, 11n67,
11n73, 11n97, 11n152
where the asterisque indicates that the knot has trivial Alexander polynomial, and
the pair (11n34,11n42) is the famous Kinoshita–Terasaka and Conway pair of
mutant knots. Their genus is given by 3, 2, 3, 2, 3, 2, 3, the t-degree of the
V_1-polynomial is 6, 6, 8, 6, 8, 6, 8 and the t-degree of the V_2-polynomial
is 12, 8, 12, 8, 12, 8, 12, confirming the equality in (<ref>) for n=2.
Table <ref> summarizes the knots for which the V_n-polynomial was computed.
For n=2, we computed its values for all loose knots with at most 15 crossings
and all trivial Alexander knots with at most 16 crossings. In all cases, we found that
the inequality (<ref>) is an equality for n=2. Combined with the
specialization and the genus bounds for V_2, this implies the following.
Equality holds in (<ref>) for n=2 and for all knots with at most 15 crossings
and all trivial Alexander polynomial knots with 16 crossings.
The relation between V_2 and V_1 discussed in Section <ref> below,
combined with the fact that V_1 = LG implies that the map
K ↦ V_K,2(e^Nħ,e^ħ) ∈ℚ[N][[ħ]]
is a Vassiliev power series invariant of knots. Hence, if (<ref>) is an
equality for n=2, it follows that Vassiliev invariants determine the Seifert genus
of a knot. A celebrated method to detect the genus of the knot is Heegaard Floer
Homology <cit.>. A second (conjectural) method to compute the genus of a knot uses
hyperbolic geometry, and more specifically the degree of the twisted torsion
polynomial τ_K,3(t) of a hyperbolic knot (twisted with the
adjoint representation of the geometric representation of a hyperbolic knot); see
<cit.>. Curiously, the Conjecture 1.7
of <cit.> was verified for all hyperbolic knots with
at most 15 crossings.
§.§ V_2-trivial Conway mutations
A question that we discuss next is how strong the new V_2 polynomial is in separating
knots. Given the values of the polynomial for knots up to 14 crossings, we searched
for repetitions, taking into account mirror image, which changes V_2(t,q) to
V_2(t,q^-1). Here, we came across a new surprise. The V_2 polynomial separates
knots with at most 11 crossings, but fails to separate 12 crossing knots, and
the three pairs that we found are
(12n364, 12n365)
(12n421, 12n422)
(12n553, 12n556) .
We tried the V_1 polynomial on them and it failed to
separate them, and we tried the V_3-polynomial, which also failed to separate them. Yet,
the genus inequality (<ref>) was an equality for n=2, which meant that these
3 pairs have equal genus (in each pair). We checked their HFK
and their Khovanov Homology, and a bit to our surprise, each was equal in each pair.
Looking at these 3 pairs more closely, we realized that they are in fact
Conway mutants. A table of mutant knots with at most 15 crossings is given in
Stoimenow <cit.>. As was pointed out to us by N. Dunfield, one
can separate the knots in these pairs using the homology of their 5-fold covers,
or the certified isometry signature of the complete hyperbolic structure.
Having found these unexpected pairs of Conway mutant knots, we tried knots with
13 crossings, where we now found 25 more pairs with exactly the same properties as
above, given in Table (<ref>). We then searched knots with
14 crossings, where we now found 189 pairs and 1 triple (up to mirror image)
with the same properties as above. The V_2-equivalence classes of knots of size
more than 1 with at most 12, 13 and 14 crossings are given in Tables (<ref>),
(<ref>) and (<ref>), respectively.
Summarizing, we obtain the following.
The V_2 polynomial separates knots with at most 14 crossings except for the
pairs and triples in Tables (<ref>), (<ref>) and
(<ref>)–(<ref>). The knots in those tuples with at most
14 crossings have
*
equal V_1, V_2, V_3 and (for at most 13 crossings) V_4
polynomials,[with 2 exceptional pairs (14n2423, 14n5868) and
(14n5822, 14n5852) that have different V_3-polynomials]
*
equal HFK and equal Khovanov Homology,
*
and they are Conway mutant knots, in particular they have equal volumes, trace field,
colored Jones polynomials, ADO polynomials and HOMFLY polynomial.
For lack of a better name, let us say that two knots are V_2-equivalent
if they have equal V_2-polynomial. This notion is similar to the almost-mutant
knots of <cit.>.
Are V_2-equivalent knots always Conway mutant? Do they have
equal HFK and equal Khovanov Homology? And why?
We can give a partial answer to this question as follows. The observed
V_2-equivalence classes come in 3 flavors
1: tight + thin, 2: tight + thick,
3: loose + thick .
The HFK homology of an HFK-thin (resp., Kh-thin) knot is determined by its
Alexander polynomial (resp., by the Jones polynomial and the signature). Since
mutation does not change the Alexander polynomial, nor the Jones polynomial, nor the
signature, it follows that mutant thin knots have equal HFK and equal Khovanov
Homology. This answers the above question for class 1.
As mentioned in the introduction, quasi-alternating knots are tight + thin.
On the other hand, our tables (given in the Appendix) give concrete examples
of tight + thick or loose + thick knots. Several methods of constructing knots
with equal HFK and equal Khovanov Homology, are discussed in detail in
Hedden–Watson, <cit.>, but we do not know how to apply these
constructions to generate our examples.
It is also worth noticing that all tight + thin knots listed in the appendix
are Kh'-thin, as defined in <cit.>.
§.§ Independence of the V_2 and the 2-loop polynomials
The colored Jones polynomial can be decomposed into loop invariants, starting
from the 0-loop part, which is the inverse Alexander polynomial, and then going up the loops.
In fact, the 2-loop invariant of the Kontsevich integral of a knot is essentially
a 2-variable polynomial invariant, and its image under the 𝔰𝔩_2(ℂ)-weight
system is a 1-variable polynomial computed efficiently by Bar-Natan and van der Veen
<cit.>.
One may ask whether the V_2-polynomial is determined by the 2-loop invariant Z_2
of the Kontsevich integral. It is known that the map K ↦ Z_2(Wh(K))
is a degree 2 Vassiliev invariant of knots <cit.>, where
Wh(K) denotes the Whitehead doubling of a 0-framed knot with a positive
clasp. Since the vector space of degree 2 Vassiliev invariants is 1-dimensional
generated by a_2(K)=Δ_K”(1), it follows that
Z_2(Wh(K)) = c a_2(K) for a universal nonzero constant c.
For the trefoil and its mirror image, we have a_2(3_1)=a_2(3̅_1)=2, which
implies that
Z_2(Wh(3_1)) = Z_2(Wh(3̅_1)) .
On the other hand, V_Wh(3_1),2(t,q) ≠ V_Wh(3̅_1),2(t,q),
the exact values given in Equation (<ref>) in the Appendix. This implies the
following.
The V_2-polynomial is not determined by the 2-loop part of the Kontsevich integral.
Based on limited computations available, it was observed in <cit.> that
Khovanov Homology alone, or HFK alone, or the colored Jones polynomial alone
do not determine V_2.
§.§ A relation between V_1 and V_2
In a sense, the sequence of V_n-polynomials is similar to the sequence of the
Jones polynomial of a knot <cit.> and its (n,1)-parallels. In fact, it
follows from the axioms of the TQFT that the Jones polynomial of a parallel of a
knot is a linear combination (with coefficients that are independent of the knot) of
colored Jones polynomials, colored by the irreducible representations of
𝔰𝔩_2(ℂ); see <cit.>, and vice-versa.
It was recently conjectured in <cit.> that the V_n-polynomials of a knot
are also linear combinations of the V_1-polynomial of a knot and its (n',1)-cables
for n' ≤ n, and a proof for n=2 was given there. We illustrate this relation
here, and at the same time give a consistency check between the coefficients of the
relation computed by the spectral decomposition of R-matrices and by
representation theory in <cit.> with the computer program that computes V_1
and V_2. The following relation holds for the unknot,
3_1, 4_1, 6_1, 6_2, 6_3, 7_7, 8_3, 8_4 and their mirrors (in total,
14 knots)
V_K,2(t^2,q^2) = c_2,0(t,q) V_K(2,1),1(t,q)
+ c_2,-1(t,q) V_K,1(t^2 q^-1,q)
+ c_2,1(t,q) V_K,1(t^2 q,q) ,
where
c_2,-1(t,q) = t(t^2 q^2 -1)/q(1+q^2)(t^2-1),
c_2,1(t,q) = t^2-q^2/qt(1+q^2)(t^2-1),
c_2,0(t,q) = (q+t)(1+qt)/(1+q^2)t
satisfy the symmetry c_2,1(t,q) = c_2,-1(t^-1,q),
c_2,0(t,q)=c_2,0(t^-1,q).
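As a quick sanity check, the t ↔ t^-1 symmetry of these coefficients can be verified symbolically. The following snippet (an illustrative check of the displayed formulas, not part of the original computation) uses sympy:

```python
import sympy as sp

t, q = sp.symbols('t q')

c2m1 = t*(t**2*q**2 - 1) / (q*(1 + q**2)*(t**2 - 1))  # c_{2,-1}(t,q)
c21 = (t**2 - q**2) / (q*t*(1 + q**2)*(t**2 - 1))     # c_{2,1}(t,q)
c20 = (q + t)*(1 + q*t) / ((1 + q**2)*t)              # c_{2,0}(t,q)

# c_{2,1}(t,q) = c_{2,-1}(t^{-1},q) and c_{2,0}(t,q) = c_{2,0}(t^{-1},q)
assert sp.simplify(c2m1.subs(t, 1/t) - c21) == 0
assert sp.simplify(c20.subs(t, 1/t) - c20) == 0
```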
Some values of (<ref>) are given in the appendix.
§.§ The V_2-polynomial of torus knots with two strings
From the general setting, it follows that if β and γ are elements of
a braid group of a fixed number of strands and K_n denotes the link obtained
by the closure of β^n γ, then K_n is a knot if n lies in an arithmetic
progression and the sequence V_1(K_n)(t,q) is holonomic and satisfies a linear
recursion relation with coefficients in ℤ[t^± 1,q^± 1] coming from the
minimal
polynomial of the square of the R-matrix. This can be computed explicitly and
leads to the answer. The above holds locally, if we replace a tangle in a planar
projection of a knot by β^n γ, and holds for any of the polynomial
invariants that we discuss in this paper.
We illustrate this by giving a recursion relation for the values of
V_1 and V_2 for T(2,2b+1)-torus knots for an integer b.
The minimal polynomial of the square of the R-matrix of V_1 is
(-1 + x) (-t^2 + q^2 x) (-1 + q^2 t^2 x) =
-t^2 + (q^2 + t^2 + q^2 t^4) x -(q^2 + q^4 t^2 + q^2 t^4) x^2 + q^4 t^2 x^3 .
It follows that f_b(t,q) = V_T(2,2b+1),1(t,q) satisfies the recursion relation
-t^2 f_b(t,q) + (q^2 + t^2 + q^2 t^4) f_b+1(t,q)
-(q^2 + q^4 t^2 + q^2 t^4) f_b+2(t,q) + q^4 t^2 f_b+3(t,q) =0
for b ∈ with initial conditions
f_-1(t,q)=1, f_0(t,q)=1, f_1(t,q) =
1 + (q^-1+q^-3) u + q^2 u^2
where
u=t+t^-1-q-q^-1 .
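Since the recursion and its initial conditions determine f_b completely, it is straightforward to iterate it symbolically. The following sympy sketch (our own illustration; variable names are arbitrary) computes f_b for b ≥ 2 and reads off the t-degree, which by (<ref>) should equal 4b:

```python
import sympy as sp

t, q = sp.symbols('t q')
u = t + 1/t - q - 1/q

# initial conditions as stated above: f_{-1} = f_0 = 1
f = {-1: sp.Integer(1), 0: sp.Integer(1),
     1: sp.expand(1 + (q**-1 + q**-3)*u + q**2*u**2)}

def t_span(expr):
    """t-degree: difference between highest and lowest power of t."""
    num, _ = sp.fraction(sp.cancel(sp.expand(expr)))
    p = sp.Poly(num, t)
    return p.degree() - min(m[0] for m in p.monoms())

for b in range(2, 6):
    # solve the third-order recursion for f_b in terms of f_{b-1}, f_{b-2}, f_{b-3}
    f[b] = sp.cancel((t**2*f[b-3]
                      - (q**2 + t**2 + q**2*t**4)*f[b-2]
                      + (q**2 + q**4*t**2 + q**2*t**4)*f[b-1]) / (q**4*t**2))
    print(b, t_span(f[b]))  # expect 4b = 4*genus(T(2,2b+1))
```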
This and the t ↔ t^-1 symmetry of V_1 implies that
f_b(t,q) = q^-2b (t^2b+t^-2b) + (lower order terms)
for b ≥ 0, thus deg_t(f_b(t,q))=4b=4 ·genus(T(2,2b+1)) for
b>0. It follows that inequality (<ref>) for n=1 is an equality
for b ≥ 0. Since the mirror image of T(2,2b+1) is T(2,-2b-1), it follows that
f_b(t,q^-1)=f_-b-1(t,q), which then concludes that inequality (<ref>)
for n=1 is an equality for all 2-string torus knots.
Likewise, the minimal polynomial of the square of the R-matrix of V_2 is
(-1 + x) (-t^2 + q^2 x) (-1 + q^3 x) (-t^2 + q^4 x) (-1 + q^2 t^2 x) (-1 + q^4 t^2 x)
which translates into a 6th order linear recursion relation for
g_b(t,q) = V_T(2,2b+1),2(t,q) with initial conditions
g_-1(t,q) = g_0(t,q^-1) = 1
g_-2(t,q) = g_1(t,q^-1) =
1 + (q + 2 q^3 - q^4 + q^5 - q^6) u + (q^2 + q^4 - q^5) u^2
g_-3(t,q) = g_2(t,q^-1) =
1 + (2 q + 3 q^3 - q^4 + 3 q^5 - q^6 + 2 q^7 - q^8 + q^9 - 2 q^10 +
q^11 - q^12) u
+ (4 q^2 + 7 q^4 - 3 q^5 + 10 q^6 - 6 q^7 +
6 q^8 - 7 q^9 + 3 q^10 - 3 q^11) u^2
+ (3 q^3 + 6 q^5 - 3 q^6 + 6 q^7 - 6 q^8 + 3 q^9 - 3 q^10) u^3
+ (q^4 + q^6 - q^7 + q^8 - q^9) u^4
with u as in (<ref>). As in the case of V_1, from the above recursion
one deduces that inequality (<ref>) for n=2 is in fact an equality
for all 2-string torus knots.
We have performed the analogous calculation for the case of the V_3 and V_4
polynomials, and the conclusion is that inequality (<ref>) for n=1, …, 4
is in fact an equality for all 2-string torus knots.
§.§ Positivity of the V_1 and V_2-polynomials of
alternating knots?
The next topic that we discuss is a curious positivity observation for the
coefficients of the V_1 and V_2 polynomials of alternating knots.
Recall the number of alternating knots with at most 15 crossings
(up to mirror image) given in Table <ref> and taken from <cit.>.
After computing the V_1 and V_2 polynomials in the following range of knots,
we observed the following.
For all alternating knots with ≤ 15 crossings, we have
V_1(t,-q) ∈ℤ_≥ 0[t^± 1,q^± 1] .
The same conclusion holds for V_2 for all alternating knots with ≤ 14
crossings.
The above positivity fails for V_3(t,-q) and V_4(t,-q) already both for the
3_1 and the 4_1 knots.
Is this an accident of knots with low number of crossings or a hint of a
relation of V_1 and V_2 with some categorification theory?
§ COMPUTING THE V_N-POLYNOMIALS
A priori, the polynomial invariant of long knots based on an R-matrix on a
d-dimensional vector space is a state sum of d^{2c} terms, where c is the
number of crossings of a planar projection of the knot, and in the case of
the V_n polynomials, d=4n. Even though the summand is sparse, a direct
computation of the V_2 polynomial for knots with 8 crossings is unfeasible.
A key observation is that every polynomial invariant of long knots of <cit.>
is given as a state sum and has a natural local tangle version due to the fact
that oriented crossings are allowed to be oriented up/downwards but also sideways.
The locality property of this polynomial is very important for its efficient
computation, an idea that has been highlighted time and again in the work of Bar-Natan and
van der Veen.
Given a planar diagram of a knot, the computation is assembled from the following
parts:
*
Convert the planar diagram into the planar diagram of a corresponding long knot,
with up-pointing crossings, and record the rotation number for each arc;
*
Associate each crossing with the pre-computed R-matrices. Since the crossings
are up-pointing, there are only two kinds of crossings (positive and negative);
*
Tensor contract the crossings until the polynomial is obtained, with respect to a
specific order that simplifies the computation. The contraction is done according
to the rotation numbers of arcs, which, together with the pre-computed curls,
determine the signs of indeterminate terms.
Part (1) is done by directly calling the function from <cit.>.
The part most critical to the speed of computation is to determine the order to which
the contraction is performed. Before explaining our method of determining the order,
we briefly review some standard terminologies of tensor contraction.
An n-tensor is a tuple indexed by a set of the form
{ 1,⋯,m_1 }×⋯×{ 1,⋯,m_n },
where, for k∈{ 1,⋯,n }, the integer m_k is the dimension
of leg k of the tensor.
For an n-tensor T, the entry corresponding to an element
{ i_1,⋯,i_n } in the index set is denoted as T_i_1,⋯ ,i_n.
Given two tensors T and T', if T has a leg k whose dimension m_k is the
same as that of a leg k' of T', we can contract T and T' along the pair of
legs k and k' to a new tensor T”, defined by
T”_i_1,⋯,î_k,⋯,i_n,j_1,⋯,ĵ_k',⋯,j_n' := ∑_{i_k = j_k' ∈{ 1,⋯,m_k }} T_i_1,⋯,i_k,⋯,i_n T'_j_1,⋯,j_k',⋯,j_n',
where the hats indicate that the indices under them are deleted. Similarly we can
define contractions of contracting multiple pairs of legs and/or more than two tensors
at once. By definition, assuming each multiplication and addition is of the same
complexity (and ignoring the fact that we are adding and multiplying polynomials
with integer coefficients in two variables, rather than integers), the time
complexity of a contraction is described by
(product of dimensions of all legs of tensors involved) / (product of dimensions of all pairs of legs contracted).
For example, the time complexity of contracting two 3-tensors along two pairs of
legs into a 2-tensor, where all legs are of dimension d, is
d^3· d^3/(d· d) = d^4.
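For illustration, a single contraction of this kind is exactly one einsum call; the toy numpy snippet below (dimensions chosen arbitrarily) contracts two 3-tensors along two pairs of legs at cost d^4:

```python
import numpy as np

d = 8  # leg dimension; e.g. 4n = 8 for the V_2-polynomial
T1 = np.random.rand(d, d, d)
T2 = np.random.rand(d, d, d)

# legs 0,1 of T1 are contracted with legs 0,1 of T2:
# d^3 * d^3 / (d * d) = d^4 multiply-adds, leaving a 2-tensor
T3 = np.einsum('abi,abj->ij', T1, T2)
print(T3.shape)  # (8, 8)
```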
In our case, the tensors come from the R-matrices, which are indexed as 4-tensors
with 4n-dimensional legs for the V_n-polynomial. When associating the crossings
with R-matrices in part (2), the four arcs of the crossings are also associated to
the four legs of the corresponding 4-tensors according to a carefully arranged order.
Tensor contraction of crossings in part (3) means to contract the associated tensors
along their common legs (i.e. legs associated to same arcs) up to the sign provided
by the curls and rotation numbers. After all arcs, except the entrance and exit arcs,
have been contracted, we obtain a 2-tensor which is a 4n× 4n diagonal matrix
with identical diagonal entries which give the desired V_n-polynomial. This allows
us to reduce the legs associated to the entrance and exit arcs to only one dimension
before computing the contractions.
When all dimensions of legs are equal, taking the logarithm with the dimension as
the base to (<ref>) gives the following description of the time complexity
log_dim(time complexity) = #{legs of tensors involved}
- #{pairs of legs contracted},
where # S stands for the cardinality of a set S.
Therefore, to reduce the time complexity, we need to reduce the number of legs of
tensors involved and increase the number of pairs of legs contracted per contraction.
For our case, this means that we contract all contractible legs of two tensors at a
time, and find an order of contractions so that the right hand side of (<ref>)
is as small as possible. The latter is the most difficult, and is well-known to be
NP-hard for general graphs (instead of only the planar diagrams of knots) in the field
of tensor networks. Due to the less complex nature of planar diagrams of knots, we do
not need to go as far as NP-hard, while being able to reduce the time complexity to a
satisfactory level with some techniques. One approach to this is to prioritize
contracting bigons as described in <cit.>, but we are taking a different
approach here, described as follows (a code sketch is given after the list):
*
Input: a planar diagram of a long knot, with rotation numbers obtained
and R-matrices associated;
*
While there are arcs other than the entrance and exit arcs remaining:
*
Find all pairs of crossings with common arcs (crossings may appear multiple
times in different pairs);
*
Evaluate the time complexity of contracting each pair of crossings in (a) along
all their common arcs according to (<ref>) (the entrance and exit arcs
do not count);
*
Contract the pair with the minimal time complexity (if there are multiple pairs
with the minimal complexity, choose an arbitrary one).
*
Output: Only the entrance and exit arcs remain and the desired V_n-polynomial
is obtained.
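A minimal sketch of this greedy loop is given below, in Python, with a hypothetical representation in which each crossing is a key mapped to the set of arc labels on its legs; the sign bookkeeping from rotation numbers and the actual R-matrix arithmetic are omitted:

```python
from itertools import combinations

def greedy_contraction_order(legs):
    """legs: dict mapping each crossing/tensor name to the set of arc labels
    on its legs; arcs appearing in only one tensor (e.g. the entrance and
    exit arcs) simply remain open.  Returns the pairwise contractions in
    greedy order, with their log_dim complexities."""
    legs = {k: set(v) for k, v in legs.items()}
    order = []
    while True:
        pairs = [(a, b) for a, b in combinations(legs, 2) if legs[a] & legs[b]]
        if not pairs:
            break
        # log_dim complexity of contracting a with b along all common arcs,
        # per (<ref>): #{legs of tensors involved} - #{pairs of legs contracted}
        def cost(pair):
            a, b = pair
            return len(legs[a]) + len(legs[b]) - len(legs[a] & legs[b])
        a, b = min(pairs, key=cost)
        order.append((a, b, cost((a, b))))
        merged = (legs[a] | legs[b]) - (legs[a] & legs[b])  # open legs remain
        del legs[a], legs[b]
        legs[a + '*' + b] = merged
    return order

# toy 3-crossing example; 'e1' and 'e2' play the role of the open end arcs
print(greedy_contraction_order({'c1': {'e1', 'a', 'b'},
                                'c2': {'a', 'b', 'c', 'd'},
                                'c3': {'c', 'd', 'e2'}}))
```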
In practice, the time complexity depends on which arc is split when the knot is
converted into a long knot, so before actually performing the computations of
R-matrices, we execute the above algorithm with each possible long knot as the
input, contract formally without computing the R-matrices, record the time
complexities in each step (c) and return the maximal time complexity recorded as
the estimated complexity for the input long knot. After all these, we perform the
actual computation on the long knot with the minimal estimated complexity.
For example, it takes a few minutes to compute the V_2-polynomial of the loose
knots with 11 crossings from (<ref>).
§.§ Acknowledgements
The authors wish to thank Dror Bar-Natan, Nathan Dunfield, Rinat Kashaev,
Ben-Michael Kohli, Ciprian Manolescu, Mingde Ren and Roland van der Veen
for useful conversations.
§ V_2-EQUIVALENCE CLASSES OF KNOTS WITH AT MOST 14 CROSSINGS
In this section we list the V_2-equivalence classes (up to mirror image)
of knots with at most 14 crossings. As perhaps expected, the equivalence classes
involve knots with the same number of crossings. Overline means mirror image.
The counts are given in Table <ref>.
Below, we indicate the flavor of each equivalence class (defined in
Equation (<ref>)) by the corresponding number in the superscript.
There are 218 tuples in total, 172 being tight + thin, 30 tight + thick,
and 16 loose + thick.
First, we give the 3 pairs with 12 crossings.
(12n364, 12n365)^1
(12n421, 12n422)^1
(12n553, 12n556)^1
Next, we give the 25 pairs of knots with 13 crossings.
(13a141, 13a142)^1
(13a199, 13a204)^1
(13a906, 13a916)^1
(13a1114, 13a1143)^1
(13a1126, 13a1163)^1
(13a1813, 13a1831)^1
(13a1991, 13a2021)^1
(13a1995, 13a2006)^1
(13a2720, 13a2727)^1
(13a2802, 13a2808)^1
(13n370, 13n373)^1
(13n372, 13n375)^3
(13n404, 13n416)^1
(13n406, 13n418)^1
(13n534, 13n549)^1
(13n536, 13n551)^3
(13n875, 13n950)^1
(13n1129, 13n1130)^1
(13n1653, 13n1683)^3
(13n1655, 13n1685)^2
(13n1894, 13n2099)^1
(13n2185, 13n2229)^1
(13n2205, 13n2250)^3
(13n2933, 13n2956)^1
(13n2937, 13n2955)^1
Next, we give the 189 pairs and one triple of knots with 14 crossings.
(14a34, 14a35)^1
(14a96, 14a103)^1
(14a454, 14a458)^1
(14a518, 14a592)^1
(14a533, 14a550)^1
(14a608, 14a617)^1
(14a675, 14a734)^1
(14a718, 14a736)^1
(14a989, 14a1017)^1
(14a1047, 14a1170)^1
(14a1268, 14a1362)^1
(14a1445, 14a1449)^1
(14a1522, 14a1532)^1
(14a1767, 14a1860)^1
(14a2205, 14a2215)^1
(14a2253, 14a2256)^1
(14a2609, 14a2618)^1
(14a3400, 14a3433)^1
(14a3403, 14a3432)^1
(14a3407, 14a3436)^1
(14a3409, 14a3438)^1
(14a3419, 14a3439)^1
(14a4041, 14a4998)^1
(14a4147, 14a4939)^1
(14a4901, 14a5698)^1
(14a6467, 14a6614)^1
(14a6614, 14a6467)^1
(14a7193, 14a7216)^1
(14a7196, 14a7269)^1
(14a7200, 14a7249)^1
(14a7207, 14a7272)^1
(14a7210, 14a7275)^1
(14a7215, 14a7698)^1
(14a7219, 14a7260)^1
(14a7258, 14a7263)^1
(14a7264, 14a7274)^1
(14a7446, 14a7477)^1
(14a7527, 14a7598)^1
(14a8017, 14a8107)^1
(14a8096, 14a8115)^1
(14a9707, 14a9711)^1
(14a10116, 14a10142)^1
(14a10142, 14a10116)^1
(14a10405, 14a10410)^1
(14a10407, 14a10434)^1
(14a10411, 14a10453)^1
(14a10414, 14a10417)^1
(14a10415, 14a10435)^1
(14a10439, 14a10456)^1
(14a10853, 14a11544)^1
(14a11793, 14a12325)^1
(14a12335, 14a12344)^1
(14a12816, 14a12833)^1
(14a12868, 14a12876)^1
(14a13431, 14a13433)^1
(14a13473, 14a13475)^1
(14n179, 14n182)^1
(14n181, 14n184)^3
(14n213, 14n226)^1
(14n215, 14n228)^1
(14n364, 14n386)^1
(14n366, 14n388)^3
(14n733, 14n810)^1
(14n1366, 14n1393)^1
(14n1370, 14n1395)^2
(14n1374, 14n1397)^1
(14n1378, 14n1399)^1
(14n1380, 14n1401)^3
(14n1692, 14n1704)^1
(14n1697, 14n1945)^1
(14n1699, 14n1947)^2
(14n1701, 14n1949)^1
(14n1752, 14n1753)^1
(14n1762, 14n1839)^2
(14n1764, 14n1841)^2
(14n1766, 14n1843)^2
(14n1768, 14n1775)^1
(14n1770, 14n1835)^1
(14n1772, 14n1837)^1
(14n2007, 14n2032)^2
(14n2148, 14n2372)^1
(14n2150, 14n2374)^3
(14n2295, 14n2376)^1
(14n2297, 14n2378)^1
(14n2299, 14n2380)^2
(14n2423, 14n5868)^2
(14n3418, 14n3823)^1
(14n3422, 14n3825)^3
(14n3444, 14n3836)^1
(14n3448, 14n3829)^1
(14n3450, 14n3831)^2
(14n4577, 14n4583)^2
(14n4579, 14n4585)^2
(14n4657, 14n4665)^1
(14n4925, 14n5085)^1
(14n4930, 14n4931)^1
(14n4933, 14n5087)^2
(14n4938, 14n4939)^1
(14n5822, 14n5852)^3
(14n5854, 14n5862)^1
(14n7506, 14n7559)^1
(14n7566, 14n7673)^1
(14n7575, 14n7675)^2
(14n7577, 14n7677)^2
(14n7586, 14n7678)^1
(14n7593, 14n7680)^1
(14n7597, 14n7682)^2
(14n7599, 14n7684)^2
(14n7603, 14n7686)^1
(14n7617, 14n7628)^2
(14n7636, 14n7688)^1
(14n7638, 14n7690)^1
(14n8225, 14n10806)^2
(14n8291, 14n8293)^1
(14n8648, 14n8649)^1
(14n8650, 14n8651)^1
(14n8696, 14n8697)^1
(14n9075, 14n9076)^1
(14n9139, 14n9140)^1
(14n9142, 14n9143)^1
(14n9395, 14n9396)^1
(14n9398, 14n9399)^1
(14n9455, 14n9456)^1
(14n9458, 14n9459)^1
(14n9686, 14n9687)^1
(14n10002, 14n11740)^3
(14n10503, 14n11641)^3
(14n11679, 14n11981)^3
(14n14122, 14n14288)^1
(14n14130, 14n14216)^1
(14n14134, 14n14215)^2
(14n14136, 14n14219)^3
(14n14148, 14n14150)^1
(14n14149, 14n14151)^1
(14n14154, 14n14328)^1
(14n14156, 14n14157)^1
(14n14158, 14n14159)^1
(14n14162, 14n14169)^1
(14n14177, 14n14330)^3
(14n14196, 14n14333)^1
(14n14204, 14n14341)^1
(14n14208, 14n15068)^2
(14n14210, 14n14225)^1
(14n14214, 14n14131)^2
(14n14223, 14n14211)^2
(14n14227, 14n14335)^1
(14n14313, 14n14319)^3
(14n14322, 14n14332)^2
(14n14504, 14n14506)^1
(14n14508, 14n14502)^1
(14n14511, 14n14513)^1
(14n14516, 14n14509)^1
(14n14590, 14n14663)^1
(14n14655, 14n14685)^1
(14n14687, 14n15694)^1
(14n14780, 14n14893)^2
(14n14787, 14n14895)^1
(14n14793, 14n14897)^1
(14n14808, 14n14804)^1
(14n14924, 14n14923)^1
(14n14926, 14n14925)^1
(14n14931, 14n14930)^1
(14n15024, 14n15022)^1
(14n15059, 14n15058)^1
(14n15063, 14n15062)^1
(14n15066, 14n15065)^1
(14n15084, 14n15083)^1
(14n15103, 14n15102)^1
(14n15106, 14n15105)^1
(14n15173, 14n15172)^1
(14n15180, 14n15179)^1
(14n15202, 14n15201)^1
(14n15205, 14n15204)^1
(14n15208, 14n15207)^1
(14n15228, 14n15227)^1
(14n15232, 14n15231)^1
(14n15258, 14n15257)^1
(14n15727, 14n15756)^1
(14n15729, 14n15758)^2
(14n17934, 14n17940)^1
(14n17938, 14n17936)^2
(14n17942, 14n17946)^1
(14n17949, 14n17960)^2
(14n17986, 14n18013)^1
(14n17997, 14n18017)^1
(14n18005, 14n18012)^2
(14n18146, 14n18144)^1
(14n18208, 14n18207)^1
(14n19744, 14n19758)^2
(14n14212, 14n14213, 14n14222)^1
The knots in Tables (<ref>), (<ref>),
(<ref>)–(<ref>) are Conway mutant, have equal HFK and
Equation (<ref>) is an equality for n=2. The knots in
Tables (<ref>), (<ref>) have equal V_1, V_2, V_3 and V_4
polynomials and the ones in Tables (<ref>)–(<ref>) have equal
V_1, V_2 and V_3 polynomials with the two exceptions as in the footnote
of Proposition <ref>.
All but at most 12 of the tight + thin knots listed above are quasi-alternating knots.
We examined them with a computer search, extending further the table
of quasi-alternating knots with ≤ 12 crossings of Jablan <cit.>.
All but the following 12 tight + thin knots were confirmed to be
quasi-alternating:
14n2378, 14n3448, 14n4925, 14n5085, 14n5862, 14n5854,
14n7559, 14n7506, 14n14151, 14n14149, 14n14169, 14n14162 .
§ VALUES FOR WHITEHEAD DOUBLES AND (2,1)-PARALLELS
The values of V_Wh(K),2(t,q) for the first three nontrivial knots
are given as follows, where u=t+t^-1-q-q^-1 is as in (<ref>).
V_Wh(3_1),2(t,q) =
1 + (-2 - 2 q^-2 + 2 q^-1 + 4 q - 4 q^2 + 4 q^3 - 2 q^4 - 2 q^7 + 2 q^8 -
2 q^10 + 2 q^11 + 2 q^15 - 2 q^16 + 2 q^17
- 2 q^18
- 2 q^20 +
4 q^22 - 2 q^23) u + (2 + 2 q^-2 - 2 q^-1 - 4 q + 2 q^2 - 4 q^3 +
2 q^4 + 4 q^5 - 2 q^6 + 4 q^7
- 4 q^8 + 4 q^9 - 6 q^10
+ 2 q^13 -
2 q^14 + 2 q^15 + 2 q^18 - 2 q^19 + 2 q^20 - 4 q^21 + 2 q^22) u^2,
V_Wh(3̅_1),2(t,q) =
1 + (-2 q^-26 + 4 q^-25 - 2 q^-23 - 2 q^-21 + 2 q^-20 - 2 q^-19
+ 2 q^-18 + 2 q^-14 - 2 q^-13 + 2 q^-11
- 2 q^-10 + 2 q^-9
- 4 q^-8 + 4 q^-7 - 6 q^-6 + 4 q^-5 - 4 q^-4 + 4 q^-3) u
+ (- 2 q^-25 + 4 q^-24 - 2 q^-23
+ 2 q^-22 - 2 q^-21 - 2 q^-18
+ 2 q^-17 - 2 q^-16 + 6 q^-13 - 4 q^-12 + 4 q^-11 - 4 q^-10
+ 2 q^-9 - 2 q^-8
- 2 q^-7 + 4 q^-6 - 4 q^-5
+ 2 q^-4 - 2 q^-3 + 2 q^-2) u^2
and for fun,
V_Wh(4_1),2(t,q) =
1 + (-14 - 2 q^-18 + 4 q^-17 + 2 q^-16 - 6 q^-15 - 4 q^-14
+ 6 q^-13
+ 6 q^-12 - 4 q^-11 - 8 q^-10 + 4 q^-9
+ 8 q^-8 - 4 q^-7
- 10 q^-6 + 16 q^-5 - 16 q^-4 + 20 q^-3 - 22 q^-2 + 18 q^-1
+ 8 q + 2 q^2 - 4 q^3 - 6 q^4 + 8 q^5
+ 4 q^6 - 8 q^7 - 4 q^8 + 6 q^9
+ 6 q^10 - 4 q^11 - 6 q^12 + 2 q^13 + 4 q^14 - 2 q^15) u
+ (24 - 2 q^-17 + 4 q^-16
- 2 q^-14 - 4 q^-13 + 4 q^-12
+ 2 q^-11 + 4 q^-10 - 16 q^-9 + 10 q^-8 + 8 q^-7 - 10 q^-6
- 4 q^-5 + 20 q^-4
- 28 q^-3 + 28 q^-2 - 28 q^-1 - 16 q +
2 q^2 + 12 q^3 - 8 q^4 - 10 q^5 + 16 q^6 - 4 q^7 - 2 q^8 -
4 q^9 + 4 q^10 + 2 q^11
- 4 q^13 + 2 q^14) u^2
with u as in (<ref>).
§ VALUES FOR (2,1)-PARALLELS OF KNOTS
We now give values of the V_K,1, V_K(2,1),1 and V_K,2 for some sample
knots to explicitly confirm Equation (<ref>).
V_3_1,1(t,q) =
1 + (q + q^3) u + q^2 u^2,
V_3_1,2(t,q) =
1 + (q + 2 q^3 - q^4 + q^5 - q^6) u + (q^2 + q^4 - q^5) u^2,
V_3_1(2,1),1(t,q) =
1 + (q^-3 + 2 q + 3 q^3 + 2 q^5 + q^7 - q^13) u + (3 + 3 q^-2 +
6 q^2 + 4 q^4 + 4 q^6 + 2 q^8 - 2 q^10 - 2 q^12) u^2
+ (3 q^-1 + 3 q + q^3 + q^5 + q^7 - q^11) u^3 + u^4
V_4_1,1(t,q) =
1 + (- q^-1 - q) u + u^2,
V_4_1,2(t,q) =
1 + (2 - q^-3 + q^-2 - 2 q^-1 - 2 q + q^2 - q^3) u
+ (1 + q^-2 - q^-1 - q + q^2) u^2,
V_4_1(2,1),1(t,q) =
1 + (- q^-7 + q^-5 - 3 q^-3 - 3 q^-1 - q^3 - q^7) u
+ (2 + 2 q^-6 + 5 q^-4 + q^-2 + 2 q^4 + 2 q^6) u^2
+ (q^-5 + 3 q^-3 + 3 q^-1 + q^5) u^3 + q^-2 u^4
V_6_1,1(t,q) =
1 + (- q^-1 - 2 q - q^3) u + (3 + q^2) u^2,
V_6_1,2(t,q) =
1 + (1 - q^-3 - 2 q^-1 - 2 q + 2 q^2 - 2 q^3 + 2 q^4 - 2 q^5 + q^6 -
q^7) u
+ (5 + 3 q^-2 - 2 q^-1 - 5 q + 3 q^2 - 3 q^3 + 3 q^4 - q^5 + q^6) u^2,
V_6_1(2,1),1(t,q) =
1 + (- q^-7 + 2 q^-5 - 5 q^-3 - 8 q^-1 - 2 q - q^11 - q^15) u
+ (15 + 6 q^-6 + 17 q^-4 + 16 q^-2 - 2 q^2 - 4 q^4 + 4 q^8
+ 4 q^10 +
2 q^12 + 2 q^14) u^2
+ (3 q^-5 + 10 q^-3 + 15 q^-1 + 3 q - 2 q^3 + 2 q^9 + q^13) u^3
+ (1 + 3 q^-2) u^4
V_6_2,1(t,q) =
1 + (- q^-1 - q^5) u + (1 - q^2 - q^4) u^2 + (q + q^3) u^3 + q^2 u^4,
V_6_2,2(t,q) =
1 + (1 - q^-3 + q^-2 - 2 q^-1 - q + q^2 - q^5 + q^6 - 2 q^7 + 2 q^8 - q^9) u
+ (-1 + q^-2 - q^-1 + 2 q - 4 q^2 + 5 q^3
- 6 q^4 + 5 q^5 - 3 q^6 + 2 q^7 - q^8) u^2
+ (-1 + q^-1 + 3 q - 3 q^2 + 4 q^3 - 4 q^4 + 3 q^5 - 2 q^6 + q^7) u^3
+ (1 - q + 2 q^2 - 2 q^3 + 2 q^4 - 2 q^5 + q^6) u^4,
V_6_2(2,1),1(t,q) =
1 + (q^-9 - 3 q^-7 + q^-3 - 6 q^-1 + 4 q^3 - 2 q^5 - 2 q^7 - q^13 +
q^17 - q^19) u
+ (7 q^-8 - q^-6 - 5 q^-4 - q^-2 - 8 q^2
- 12 q^4 - q^6 + q^8 + 2 q^10 + 2 q^12 - 2 q^18) u^2
+ (21 q^-7 + 20 q^-5 + 6 q^-3 + 10 q^-1 + 18 q + 14 q^3 + 18 q^5
+ 9 q^7
- 5 q^9 + 2 q^11 - q^13 + 5 q^15 + 3 q^17) u^3
+ (64 + 35 q^-6 + 62 q^-4 + 64 q^-2 + 63 q^2 + 41 q^4 + 22 q^6
- 12 q^8
- 16 q^10 + 16 q^14 + 12 q^16) u^4
+ (35 q^-5 + 73 q^-3 + 74 q^-1 + 51 q + 29 q^3 + 18 q^5 - 13 q^9
- 11 q^11 + 11 q^13
+ 13 q^15) u^5
+ (25 + 21 q^-4 + 39 q^-2 + 6 q^2 + 7 q^4 - 6 q^10 + 6 q^14) u^6
+ (7 q^-3 + 8 q^-1 + q^3 - q^11 + q^13) u^7 + q^-2 u^8
and finally,
V_8_4,1(t,q) =
1 + (- q^-3 - 2 q^-1 - q - q^3 - q^5) u
+ (3 + q^-2 + q^4) u^2
+ (q^-1 + 6 q + 5 q^3) u^3 + (1 + 3 q^2) u^4,
V_8_4,2(t,q) =
1 + (2 - q^-7 + q^-6 - 2 q^-5 + q^-4 - 2 q^-3 + q^-2
- 3 q^-1 - 2 q + 3 q^2 - 2 q^3 + q^4 - 2 q^5 - q^7 + q^8 - q^9) u
+ (8 + q^-6 - q^-5 + 3 q^-4 - 2 q^-3 + 3 q^-2 - 5 q^-1
- 7 q + 8 q^2 - 10 q^3 + 8 q^4 - 5 q^5 + 6 q^6 - 3 q^7 + q^8) u^2
+ (-20 + q^-5 - q^-4 + 7 q^-3 - 7 q^-2 + 16 q^-1 + 24 q
- 27 q^2 + 27 q^3 - 22 q^4 + 18 q^5 - 9 q^6 + 5 q^7) u^3
+ (6 + q^-4 - q^-3 + 4 q^-2 - 4 q^-1 - 8 q + 8 q^2 - 7 q^3 + 7 q^4
- 5 q^5 + 3 q^6) u^4,
V_8_4(2,1),1(t,q) =
1 + (-q^-15 - 5 q^-7 - 3 q^-3 - 11 q^-1 + 2 q + 6 q^3 - 6 q^5 -
4 q^7 + 2 q^9 - 2 q^11 - q^13 - q^19) u
+ (14 + 2 q^-14
+ 2 q^-12 + 11 q^-10 + 24 q^-8 - 2 q^-6
- 9 q^-4 + 14 q^-2 - 6 q^2 - 11 q^4 + 17 q^8 + 8 q^10 + 8 q^12
+ 2 q^14 - 2 q^16
+ 2 q^18) u^2
+ (5 q^-13 + 8 q^-11 + 51 q^-9 + 127 q^-7 + 124 q^-5 + 89 q^-3
+ 63 q^-1 + 56 q + 46 q^3 + 49 q^5 + 52 q^7
+ 27 q^9 + 41 q^11
+ 27 q^13 + 22 q^15 + 21 q^17) u^3
+ (141 + 12 q^-12 + 28 q^-10 + 107 q^-8 + 259 q^-6 + 330 q^-4
+ 255 q^-2 + 84 q^2 + 47 q^4 + 90 q^6 + 44 q^8 + 44 q^10 + 60 q^12
+ 60 q^14 + 44 q^16) u^4
+ (13 q^-11 + 24 q^-9 + 89 q^-7
+ 237 q^-5 + 309 q^-3 + 215 q^-1
+ 70 q + 5 q^3 + 39 q^5 + 41 q^7 + 20 q^9 + 21 q^11 + 44 q^13 + 41 q^15) u^5
+ (59
+ 6 q^-10 + 6 q^-8 + 39 q^-6 + 114 q^-4 + 132 q^-2
- 11 q^2 + 9 q^4 + 6 q^6 + 12 q^8 + 6 q^12 + 18 q^14) u^6
+ (q^-9 + 10 q^-5 + 28 q^-3 + 23 q^-1 - 2 q + q^5 + 2 q^9 - 2 q^11
+ 3 q^13) u^7 + (q^-4 + 3 q^-2) u^8
Keep in mind that the genus of the (2,1)-parallel of K is twice the genus of
K, and that the knots 3_1, 4_1, 6_1, 6_2 and 8_4 have genus
1, 1, 1, 2, 2, hence we expect (and we find) the V_1-polynomial of their
(2,1)-parallels to have u-degree 4, 4, 4, 8, 8, confirming the equality
in (<ref>) for the (2,1)-parallels of 3_1, 4_1, 6_1, 6_2 and 8_4.
|
http://arxiv.org/abs/2409.02562v1 | 20240904092924 | Interacting Multiple Model-based Joint Homography Matrix and Multiple Object State Estimation | [
"Paul Johannes Claasen",
"Johan Pieter de Villiers"
] | cs.CV | [
"cs.CV"
] |
Paul Johannes Claasen (corresponding author), [email protected]
Johan Pieter de Villiers, [email protected]
Department of Electrical, Electronic and Computer Engineering, University of Pretoria, Lynnwood Road, Hatfield, Pretoria, 0028, South Africa
§ ABSTRACT
A novel MOT algorithm, IMM Joint Homography State Estimation (IMM-JHSE), is proposed. By jointly modelling the camera projection matrix as part of track state vectors, IMM-JHSE removes the explicit influence of camera motion compensation techniques on predicted track position states, which was prevalent in previous approaches. Expanding upon this, static and dynamic camera motion models are combined through the use of an IMM filter. A simple bounding box motion model is used to predict bounding box positions to incorporate image plane information. In addition to applying an IMM to camera motion, a non-standard IMM approach is applied where bounding-box-based BIoU scores are mixed with ground-plane-based Mahalanobis distances in an IMM-like fashion to perform association only. Finally, IMM-JHSE makes use of dynamic process and measurement noise estimation techniques. IMM-JHSE improves upon related techniques on the DanceTrack and KITTI-car datasets, increasing HOTA by 2.64 and 2.11, respectively, while offering competitive performance on the MOT17, MOT20 and KITTI-pedestrian datasets.
multi-object tracking tracking by detection camera calibration camera motion compensation homography estimation
§ INTRODUCTION
The increasing reliability of object detectors <cit.> has encouraged research on the topic of tracking objects in the image space, i.e. where objects are tracked in pixel coordinates. Most recent methods have focused on using bounding box information with or without appearance embeddings <cit.>. Yet, it is possible in some cases to obtain an estimate of the location of a bounding box within the ground plane of the imaged 3D space <cit.>. The ground plane is a coordinate system that represents a 2D approximation of the earth's surface in front of the camera, as viewed from above. Leveraging such information may improve association performance. Occlusion and camera motion effects may be mitigated if the target dynamics within the ground plane are reliably estimated and the homographic projection of the ground plane to the image plane is decoupled from the target's motion. This is illustrated in Figure <ref>, which depicts two tracks that undergo heavy occlusion in the image plane at time t. Although the red track is occluded by the blue track, their locations on the ground plane are still easily separable. Furthermore, camera motion does not influence target association, i.e. additional camera motion compensation is not required if an accurate projection matrix (homography) can be determined at any time step.
This paper expands upon previous multiple object tracking (MOT) methods. In particular, it makes the following contributions:
* Modelling limitations of a previous method <cit.> are addressed; see Section <ref>. Briefly, target motion is decoupled from the homography matrix state.
* The homography matrix is included in the target state vector. Thus, it is estimated jointly with the target position and velocity.
* A static camera motion model (which assumes that the homography matrix does not change over time) is combined with a dynamic camera motion model (which accounts for camera-motion-induced changes in the homography) with an interacting multiple model (IMM) filter.
* In addition to applying the IMM to camera motion as mentioned in the point directly above, the proposed method dynamically mixes ground-plane- and image-plane-based association scores, i.e. Mahalanobis distance and buffered intersection over union (BIoU) respectively, for the purpose of association only.
* Dynamic measurement noise estimation is used to estimate the noise associated with a particular track's measured bounding box, and dynamic process noise estimation is used to estimate the noise associated with each camera motion model.
* The proposed method outperforms related methods on the DanceTrack <cit.> and KITTI-car <cit.> datasets while offering competitive performance on the MOT17 <cit.>, MOT20 <cit.>, and KITTI-pedestrian <cit.> datasets.
The remainder of the paper is structured as follows. Section <ref> examines previous work on the topic of image-based tracking. Section <ref> provides more detail on motion models and camera motion compensation in the literature. Section <ref> gives a detailed overview of the proposed method. Section <ref> provides the experimental setup and results of tests on validation data. Section <ref> reports the results on the MOT17, MOT20, DanceTrack and KITTI test datasets. Finally, Section <ref> concludes the findings of this paper and suggests topics of focus for future work.
§ PREVIOUS WORK
This section provides an overview of previous approaches to the MOT problem in order to provide context for the contributions presented in this paper.
§.§ Image-based Tracking
Tracking the motion of objects from a video has been of interest as early as 1998 when Isard and Blake <cit.> developed the condensation algorithm for tracking objects represented by a curve or sets of curves with learned motion models. Their method propagates conditional density distributions similar to the approach of particle filtering. However, tracking multiple objects simultaneously with their method may become infeasible, especially for complex multi-modal posterior densities where large samples or particles may be required to obtain accurate estimates. Besides, multi-object tracking requires solving the association problem – deciding which observations belong to which tracked object – which becomes increasingly difficult when the tracked objects are near one another, partially or fully occluding each other. Multiple object tracking (MOT) methods presented in the literature focus more on the association problem than on detection or individual target tracking.
Given a set of object detections, Li et al. <cit.> model the association problem with a cost-flow network, with the solution obtained by a minimum-cost flow algorithm. They also explicitly model occluded observations, which is achieved by simply comparing differences in detection locations and scale to predetermined thresholds. While the approach is reported to be real-time and yields favourable results on the Context-Aware Vision using Image-based Active Recognition (CAVIAR) <cit.> dataset, object motion is not explicitly modelled. As a result, the method may perform poorly when object motion is highly dynamic. Berclaz et al. <cit.> similarly model object trajectories with a flow model consisting of a discretised 2D spatial grid (representing the image area) at each time step. They use the K-shortest path algorithm to solve their linear programming formulation of the association problem. Motion is modelled by only allowing an occupied node at the previous time step to transition to a specific neighbourhood around that node in the current time step. As with <cit.>, this is an unsatisfactory solution in situations where objects are highly dynamic.
With the increasing reliability and popularity of several object detectors <cit.>, modern methods have primarily made use of bounding box information. Bewley et al. <cit.> emphasise the importance of detection quality and use the Hungarian algorithm to perform data association based on the intersection-over-union (IoU) of detected bounding boxes with those predicted by Kalman-filtered tracks (specifically, predicted bounding boxes are constructed from the predicted bounding box centre x- and y-coordinates, height and aspect ratio). Their method has been influential in promoting the use of Kalman filters to estimate object motion distributions and/or the Hungarian algorithm (or other linear assignment algorithms) to perform association: these have become the status quo in <cit.>. Taking the minimalist approach of the original SORT algorithm one step further, Bergmann et al. <cit.> rely purely on the regression head of their detector network (or “Tracktor”) to perform tracking. However, they show that performance improves with camera motion compensation and appearance-feature-based re-identification, suggesting that explicit motion models (and appearance information) can still be beneficial. Supporting this idea, Khurana et al. <cit.> use motion models that consider monocular depth estimates to maintain occluded tracks. SORT is improved upon in <cit.> by integrating appearance information, resulting in DeepSORT. They train a convolutional neural network (CNN) to discriminate between pedestrians and keep each track's normalised appearance embeddings of the last 100 frames. Track association is performed by finding the minimum cosine distance of the current detected appearance embedding with the history of embeddings for each of the previous tracks. Additionally, they use motion information by obtaining the Mahalanobis distance between detections and predicted Kalman filter states, consisting of bounding box centre positions, aspect ratios, heights and their corresponding velocities. In SiMIlarity LEarning for Occlusion-Aware Multiple Object Tracking (SMILEtrack), Hsiang et al. <cit.> make use of a Siamese-based network which incorporates elements inspired by vision transformers (specifically, self-attention computed on image patches) to extract appearance features from detected bounding boxes. However, it does not achieve a convincing improvement in higher-order tracking accuracy (HOTA) <cit.> compared to ByteTrack, despite its superior computational speed. Leveraging multiple-camera traffic surveillance systems, another method explicitly models inter-vehicle occlusion using reconstructed-cuboid projections instead of relying on similarity learning <cit.>. It is noted that the use of camera motion compensation has become ubiquitous, with applications in <cit.>.
Whereas previous methods separately perform detection and appearance embedding extraction, Wang et al. <cit.> design a model which performs both of these simultaneously. They report that similar results can be achieved compared to state-of-the-art SDE (separate detection and embedding) methods at reduced computational cost. Zhang et al. <cit.> show that such multi-task networks must be designed carefully since the tasks require different features and feature dimensions and that the anchors used for object detection can introduce ambiguity in the re-identification features. They introduce a carefully designed anchor-free model to simultaneously perform object detection and appearance embedding extraction, which outperforms the state-of-the-art methods in tracking metrics and frame rate.
Shifting the focus from appearance embeddings, Zhang et al. <cit.> note that it is beneficial to perform association in two steps in their method called ByteTrack: the first step takes only high-confidence detections into account, while the remaining unmatched detections, as well as the lower-confidence detections, are associated in a second association step. They note that using the IoU is essential in the second association step since other features may become unreliable for low-confidence detections, which may be occluded and/or blurred. Still, re-identification or other features may be used in the first step. By altering the Kalman filter state vector, Aharon et al. <cit.> improve upon ByteTrack. Instead of tracking the aspect ratio of bounding boxes, they perform better by directly tracking bounding box widths and heights. Similar to <cit.>, an exponential moving average mechanism is used to update the appearance states of tracklets – the features of which are obtained by a re-identification network from the FastReID library <cit.>. Specifically, the bags of tricks <cit.> stronger baseline with a ResNeSt50 <cit.> backbone is used. Accordingly, they name their method BoT-SORT. Although tracking width and height may improve performance, Cao et al. <cit.> note that previous motion models are estimation-centric, allowing for rapid error accumulation over time, especially when no observations are available, and motion is non-linear. This implies that previous approaches are more sensitive to the noise in state estimation than the noise in measurements. They propose observation-centric SORT (OC-SORT), which incorporates a re-update step when an object is re-identified, during which the Kalman filter is updated with virtual detections generated based on the last-known and currently detected bounding box state. In addition, they use a novel association criterion that considers the change in motion direction that each new detection would induce, which is termed observation-centric momentum (OCM). Finally, Yang et al. <cit.> buffer object bounding boxes to expand the matching space of subsequent detections in their Cascaded Buffered IoU (C-BIoU) tracker. A cascaded matching approach is used such that small buffers are used in the first association step, and larger buffers are used in the second step. Despite its simplicity, their method outperforms DeepSORT, SORT, ByteTrack and OC-SORT.
§.§ Tracking Involving Homography
Various methods have used homography between multiple camera views to increase the robustness of the tracking of sports players. Since a particular player may be occluded in one camera view but not in another, using multiple cameras increases the robustness of a player tracking system to player occlusions. Examples of methods which make use of inter-camera homographies may be found in <cit.>. However, few approaches consider the homography between a single camera view and the playing field. Notably, Hayet et al. <cit.> incrementally update the homography between the camera view and an imaged soccer field and explicitly use it to track player locations on the playing field with a Kalman filter. Recently, Maglo et al. <cit.> compute the homography to display player positions, but the tracking is performed without the aid of the homography. Instead, they rely heavily on appearance features extracted by a model fine-tuned offline after tracklet generation (on the corresponding bounding boxes). As a result, their method is not suitable for online operation. Most recently, Yi et al. <cit.> manually annotated the homographic projections between the image and ground planes of various datasets. They then perform tracking and association in the ground plane. However, as will be shown in detail in Section <ref>, camera motion explicitly influences target position in their method – which this paper regards as a modelling error.
§ BACKGROUND: CAMERA MOTION COMPENSATION AND MOTION MODELS
The majority of modern approaches to MOT model target motion in the image plane <cit.>. Particularly, the near-constant velocity model is usually employed:
x^I_t=x^I_t-1+ẋ^I_t-1Δ t,
where x^I_t represents the centre x- or y-coordinate, width or height (in pixels) of the target's bounding box (these are jointly estimated and form part of the same state vector) at time t. The superscript I indicates that the state vector x^I_t is represented in the image plane. The individual state components will subsequently be denoted by x^I,x_t, x^I,y_t, x^I,w_t, x^I,h_t, respectively. The velocity of a state component is denoted by ẋ, and Δ t represents the sampling period — in this case, it is equal to the reciprocal of the video frame rate. The additive noise term has been omitted for brevity.
The current state is conditioned on the previous state, with measurements in the form of detected bounding boxes being directly related to the state itself. This model is graphically represented in Figure <ref>,
and assumes that the camera remains stationary. As a result, several approaches <cit.> make use of camera motion compensation after prediction with (<ref>). Specifically, a matrix
𝐀_t = [[ 𝐑^A_t∈ℛ^2× 2 𝐭_t∈ℛ^2× 1; 0^1× 2 1 ]]
is estimated such that
[ x^I,x'_t; x^I,y'_t; 1 ] = 𝐀_t[ x^I,x_t; x^I,y_t; 1 ]
and
[ x^I,w'_t; x^I,h'_t; ] = 𝐑^A_t[ x^I,w_t; x^I,h_t; ],
where the prime superscript denotes the camera-motion-compensated state component. 𝐀_t is estimated with the camera calibration module of the OpenCV <cit.> library. Most previous approaches make use of optical flow features and the RANSAC <cit.> algorithm, although <cit.> makes use of the enhanced correlation coefficient (ECC) optimisation method.
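As an illustration of how such an affine matrix may be estimated from optical flow features and RANSAC, consider the following sketch; the OpenCV calls are standard, but the feature count and quality parameters are assumed values, not those of any cited tracker.

```python
import cv2
import numpy as np

def estimate_affine(prev_gray, curr_gray):
    """Estimate A_t between consecutive grayscale frames via sparse
    optical flow and a RANSAC-fitted affine transform."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=10)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    M, _ = cv2.estimateAffine2D(pts[good], nxt[good], method=cv2.RANSAC)
    # M is the 2x3 block [R^A_t | t_t]; append the last row to form A_t.
    return np.vstack([M, [0.0, 0.0, 1.0]])
```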
UCMCTrack <cit.> instead models target motion in the ground plane:
x^G_t=x^G_t-1+ẋ^G_t-1Δ t + w^G,
where x^G_t represents the target's x- or y-coordinate, and the superscript G indicates that the state is represented in the ground plane. These components will subsequently be denoted by x^G,x_t and x^G,y_t, respectively. The additive noise term w^G is sampled from a zero-mean multivariate normal distribution with covariance 𝐐^G,x (i.e. the process noise covariance matrix) determined by <cit.>:
𝐐^G,x=𝐆diag(σ_x,σ_y)𝐆^T,
where
𝐆=[[ Δ t^2/2 0; Δ t 0; 0 Δ t^2/2; 0 Δ t ]],
and σ_x and σ_y are process noise compensation factors along the x- and y-axes, respectively. The superscript x in 𝐐^G,x differentiates this noise term from those associated with the homography components in (<ref>), (<ref>).
UCMCTrack manually annotates the homographic projection relating the ground plane to the starting frame of each video on which they evaluate their method. Let 𝐇^L represent this initial projection, then
[ x^I,x_t; x^I,y_t + 0.5 × x^I,h_t; 1 ] = norm(𝐇^L[ x^G,x_t; x^G,y_t; 1 ]),
where norm([ x y z ]^⊤) = [ x/z y/z 1 ]^⊤. For details regarding the theory of the pinhole camera model employed here, the reader is directed to <cit.>. The left-hand side of (<ref>) will subsequently be referred to as the bottom-centre bounding box coordinates. As in <cit.>, these coordinates are selected to represent the projection of an object's ground plane position since it is where the object is expected to touch the ground (e.g. if the object is a person, this coordinate represents where their feet touch the ground).
UCMCTrack does not consider how the projection matrix 𝐇^L evolves over time. Because of this, existing tracks' positions must be corrected to compensate for camera motion. In particular, UCMCTrack makes use of a post-prediction camera motion compensation step. First, the bottom-centre bounding box location is obtained by applying (<ref>) to the predicted ground plane coordinates. The approximate centre x- and y-coordinates of the bounding box in the image plane are then obtained by subtracting half of the last-known height from the y-coordinate of this projection. This estimate is then used in (<ref>) to approximate the x- and y-coordinates of the target bounding box in the current frame after camera motion compensation (CMC) has been applied. CMC is also applied to the width and height of the last-known bounding box (i.e. the bounding box which has been associated with the particular track most recently) as in (<ref>). Finally, the inverse of (<ref>) is used to obtain the camera-motion-compensated ground plane coordinates. Ignoring the effect of bounding box width and height, this update step can roughly but concisely be described by
𝐱^G'_t=norm((𝐇^L)^-1𝐀_tnorm(𝐇^L𝐱^G_t)).
One might interpret (<ref>) to imply that the target's position in the ground plane is in some way dependent on camera motion, which contradicts the knowledge that a target's movement does not depend on camera motion. Target and camera motion models are treated independently in the proposed solution to address this.
§ THE PROPOSED SOLUTION
Let 𝐇^G_t represent the projection matrix as it evolves over time. Its evolution may be described by <cit.>
𝐇^G_t = 𝐀_t𝐇^G_t-1 + 𝐐̃_t^G,H,
with 𝐇^G_0=𝐇^L, and 𝐐̃_t^G,H is an additive noise term which is estimated as described in Section <ref>. The superscript 𝐇 differentiates this noise term from that of the ground plane position components, and the tilde differentiates it from the static motion model introduced in (<ref>). Equation <ref> describes how the homographic projection changes as a function of the previous projection and the estimated camera motion. Importantly, this is independent of, and does not influence, target motion.
A graphical model which combines Equations <ref> and <ref> with the measurement model in (<ref>) is shown in Figure <ref>. Here, x^I_t represents the bottom-centre bounding box position of the target detection. According to Bayesian network theory, the target and camera projection dynamics are independent while the target bounding box is unobserved, but they become dependent when conditioned on x_t^I <cit.>. Thus, the temporal evolution of the ground plane and homography states are independent but are related through a common measurement in the image plane. This model may allow for a more accurate estimation of a target's ground plane motion since its ground plane position prediction is not directly influenced by camera motion — only the position of its projection in the image plane.
The authors of <cit.> note that in scenes with relatively low camera motion, their method performs better without the camera motion compensation step in (<ref>). This may be because the estimated affine matrix <ref> is inaccurate, and noise is not considered in (<ref>). To remedy this, the method proposed in this paper includes an additive noise term in (<ref>) (see Sections <ref> and <ref>). Additionally, to better model the evolution of 𝐇^G_t when there is no camera motion, the following motion model is included:
𝐇^G_t = 𝐇^G_t-1 + 𝐐̅_𝐭^𝐆,𝐇,
where 𝐐̅_𝐭^𝐆,𝐇 is an additive noise term, where the bar differentiates it from that in (<ref>). For each track, two filters are run in parallel. One of these makes use of (<ref>), and the other makes use of (<ref>). The states of these filters are fused in an IMM filter.
Furthermore, it has been shown that making use of buffered intersection over union (BIoU) during association can increase performance in scenes where either the camera or the tracks are highly dynamic <cit.>. Moreover, there may be times when the tracks leave the ground plane (for example, jumps or flips in the DanceTrack dataset <cit.>), causing their measured ground plane positions to fluctuate wildly. During these times, it may be better to perform association with an image-based association measure such as the BIoU. Even if tracks do not leave the ground plane, they may be occluded, which could cause partial detections that manifest similar behaviour to when tracks leave the ground plane. Nevertheless, considering bounding box dimensions during association increases performance, as shown in Section <ref>. Therefore, the proposed method makes use of the simple bounding box motion model proposed in <cit.>. To perform association, the ground plane and image-based association scores are taken into account proportionally according to the predicted model probability <cit.>:
μ_t | t-1^i=P(m_t^i|𝐱^I_1: t-1)=∑_j=1^r p_j iμ_t-1^j,
where μ_t-1^j denotes the probability that the target motion matches model m^j at time t-1, p_ji denotes the probability of a transition from model m^j to model m^i, r denotes the number of models under consideration and 𝐱^I_1: t-1 represents all past observations. This is explained further in Section <ref>.
Finally, the proposed method makes use of dynamic measurement and process noise estimation, which significantly improves performance (Section <ref>).
§.§ State Definition and Motion Models
The ground plane state of a track includes its position and velocity along both ground plane axes. Furthermore, this paper proposes including the elements of the projection matrix that relate the track's ground plane position to its position in the image plane in the state vector. Thus, the ground plane state is defined as
𝐱 = [ x^G,x; ẋ^G,x; x^G,y; ẋ^G,y; 𝐇^G,1; 𝐇^G,2; 𝐇^G,3 ],
where 𝐇^G,1, 𝐇^G,2 and 𝐇^G,3 are the first, second and third columns of the homography matrix, respectively.
Note that although each track is treated as having its own unique projection matrix, in reality, there is a single projection matrix that relates the image plane to the ground plane. Examining Figure <ref>, this would imply that all of the current tracks' ground plane positions become dependent upon receiving an image plane measurement. However, this work handles each track independently to simplify track management.
The proposed method makes use of the constant velocity model for the ground plane dimensions as in (<ref>). For the homography state elements, both the dynamics models of (<ref>) and (<ref>) are combined with an IMM filter. The reader is referred to <cit.> for details regarding IMM filters. The process noise covariance matrix corresponding to the homography state elements is determined as Section <ref> explains.
When a track's position is predicted without a measurement update, it is said to be coasting, and it is in the coasted state. Otherwise, it is confirmed. In the image plane, the bounding box prediction is independent of the state vector in (<ref>) while the track is confirmed. This paper makes use of the simple motion model presented in <cit.>, which averages bounding box velocities over a buffer of previous measurements and adds this average to the previous bounding box measurement:
𝐱^M_t|t-1=𝐱^M_t-1 + 1/(n-1)∑_i=t-n+1^t-1(𝐱^M_i-𝐱^M_i-1),
where n denotes the size of the buffer, which stores the bounding box measurements 𝐱^M_t-n:t-1 previously associated with the particular track. The superscript M indicates a measurement, except in 𝐱^M_t|t-1, which refers to the predicted measurement at time t given the measurements from t-n to t-1. Each measurement is stored as [ x^I,l x^I,t x^I,r x^I,b ], where the components denote the left x-, top y-, right x- and bottom y-coordinates of the corresponding bounding box, respectively. The maximum size of the bounding box measurement buffer is fixed to n=5. When fewer than two measurements are in the buffer, the last-known bounding box is used as the prediction.
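A sketch of this buffered prediction, with the buffer size fixed to n = 5 as stated above (class and method names are illustrative):

```python
import numpy as np
from collections import deque

class BoxBuffer:
    """Buffered bounding-box predictor (Eq. above); boxes are stored as
    [left, top, right, bottom] and the buffer size is fixed to n = 5."""
    def __init__(self, n=5):
        self.buf = deque(maxlen=n)

    def update(self, box):
        self.buf.append(np.asarray(box, dtype=float))

    def predict(self):
        if len(self.buf) < 2:
            return self.buf[-1]                         # last-known box
        boxes = np.stack(self.buf)
        mean_vel = np.diff(boxes, axis=0).mean(axis=0)  # (1/(n-1)) Σ (x_i - x_{i-1})
        return boxes[-1] + mean_vel
```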
When a track is coasted, the predicted bounding box is coupled to the predicted ground plane position. Specifically, the bottom-centre coordinates of the predicted bounding box are set to the projection of the predicted ground plane position in the image plane. The width and height of the predicted bounding box are determined by (<ref>), where the last-confirmed bounding box width and height are propagated through time. This is illustrated in Figure <ref>, where a track is successfully re-identified after a period of occlusion due to this technique. When a measurement is associated with a track after a period of being coasted, the previous buffer of (<ref>) is cleared and contains only the newly associated detection.
§.§ Measurement Model
Let x̅^I,x and x̅^I,y denote the bottom-centre x- and y- bounding box coordinates in the image plane as obtained in (<ref>). A track's ground plane coordinates x^G,x and x^G,y are related to these image coordinates through its homographic projection state elements:
[ x̅^I,x_t; x̅^I,y_t; 1 ] = norm(𝐇^G_t[ x^G,x_t; x^G,y_t; 1 ]) + 𝐯_t = h(𝐱) + 𝐯_t.
Here 𝐯_t is a zero-mean measurement noise term whose covariance 𝐑_t is dynamically estimated as described in Section <ref>, but initially set to
𝐑_0=[[ (σ_m x^I,w_0)^2 0; 0 (σ_m x^I,h_0)^2 ]]
as in <cit.>, where σ_m=0.05.
Let the vector 𝐛 represent the unnormalised projection of the ground plane position:
𝐛 = [ b_1; b_2; b_3 ] = 𝐇^G_t[ x^G,x_t; x^G,y_t; 1 ]
and
𝐇^G_t=[ h_1 h_2 h_3; h_4 h_5 h_6; h_7 h_8 h_9 ].
Following a similar derivation to that presented in <cit.>, the Jacobian matrix of (<ref>) with respect to the ground plane coordinates is
∂(x^I,x_g_t, x^I,y_g_t)/∂(x^G,x_t, x^G,y_t) = γ[ h_1-b_1/b_3h_7 h_2-b_1/b_3h_8; h_4-b_2/b_3h_7 h_5-b_2/b_3h_8 ],
where γ = b_3^-1.
Furthermore, the Jacobian matrix with respect to the homography matrix elements is:
∂(x^I,x_g_t, x^I,y_g_t)/∂(h_1,h_4,h_7,h_2,h_5,h_8,h_3,h_6) =
γ[ x^G,x_t 0 -b_1/b_3x^G,x_t x^G,y_t 0 -b_1/b_3x^G,y_t 1 0; 0 x^G,x_t -b_2/b_3x^G,x_t 0 x^G,y_t -b_2/b_3x^G,y_t 0 1 ].
Note that in practice, 𝐇^G_t is normalised with respect to h_9 such that h_9=1. Therefore, the partial derivative with respect to h_9 is 0.
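The two Jacobians may be implemented together; the sketch below follows the element ordering given above and treats h_9 as fixed to 1.

```python
import numpy as np

def measurement_jacobians(H, x, y):
    """Jacobians of the projected bottom-centre point (Eqs. above) w.r.t. the
    ground-plane position and the homography elements, ordered
    (h1, h4, h7, h2, h5, h8, h3, h6), with h9 fixed to 1."""
    b1, b2, b3 = H @ np.array([x, y, 1.0])
    g = 1.0 / b3
    (h1, h2, _), (h4, h5, _), (h7, h8, _) = H
    J_pos = g * np.array([[h1 - (b1 / b3) * h7, h2 - (b1 / b3) * h8],
                          [h4 - (b2 / b3) * h7, h5 - (b2 / b3) * h8]])
    J_hom = g * np.array([[x, 0.0, -(b1 / b3) * x, y, 0.0, -(b1 / b3) * y, 1.0, 0.0],
                          [0.0, x, -(b2 / b3) * x, 0.0, y, -(b2 / b3) * y, 0.0, 1.0]])
    return J_pos, J_hom
```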
§.§ Dynamic Noise Estimation
Instead of attempting to determine a static process noise covariance matrix for the dynamics models in (<ref>) and (<ref>), the proposed method obtains dynamic estimates of these parameters. The same is performed for the measurement noise covariance matrix associated with the measurement model (<ref>) – this is shown to increase performance in Table <ref>.
From <cit.>, consider a generalised representation of the process noise covariance matrix 𝐐_t, which can represent either the static motion model of (<ref>), or the dynamic motion model of (<ref>). This matrix is the expected value of the outer product of the state process noise vector 𝐰_t:
𝐐_t=E[𝐰_t𝐰_t^T].
Similarly, the measurement noise covariance matrix is the outer product of the state measurement noise vector 𝐯_t:
𝐑_t=E[𝐯_t𝐯_t^T].
When a measurement update is performed, the measurement noise covariance matrix for the current time step is estimated by <cit.>:
E[𝐯_t𝐯_t^T]=ε_t ε_t^T + 𝐉_t𝐏^-_t𝐉_t^T,
where 𝐉_t is the Jacobian matrix with respect to the predicted state components obtained from (<ref>), (<ref>). 𝐏^-_t is the a priori state covariance matrix. Let 𝐱̅^M_t denote the measured bottom-centre x- and y- bounding box coordinate vector, then ε_t=[𝐱̅^M_t-h(𝐱^+_t)] is the residual between the measurement and the updated state vector projected into the image plane. This paper makes use of a moving window to average noise covariance estimates such that
𝐑_t=1/m∑_i=t-m+1^tE[𝐯_i𝐯_i^T],
where m=5. Note that the Kalman filter update step requires an estimate of 𝐑_t, yet 𝐑_t depends on the residual of the updated state vector in (<ref>). To make use of as much information as possible at the current time step, a “dummy” Kalman filter update is first performed to obtain the quantity in (<ref>). The actual update step is then performed with 𝐑_t obtained from (<ref>).
After 𝐑_t is obtained from (<ref>), the resulting Kalman gain 𝐊_t is used to obtain an estimate of the process noise covariance matrix for the current time step <cit.>:
E[𝐰_t𝐰_t^T]=E[𝐊_t(𝐝_t 𝐝_t^T) 𝐊_t^T]=𝐊_t E[𝐝_t 𝐝_t^T] 𝐊_t^T,
where 𝐝_t=[𝐱̅^M_t-h(𝐱^-_t)] is the measurement innovation and 𝐱^-_t is the predicted a priori state estimate. Similar to (<ref>), a moving window is used to average process noise covariance estimates obtained from (<ref>). The process noise covariance matrix for the homography state elements, which will be used in the subsequent prediction at time step t + 1, is obtained by
𝐐^G,H_t=1/m∑_i=t-m+1^tE[𝐰_i𝐰_i^T].
Only the matrix elements which correspond to homography state elements are extracted from (<ref>), such that the final process covariance matrix for the entire state is 𝐐_t=diag(𝐐^G,x,𝐐^G,H_t). Here, 𝐐^G,H_t refers either to 𝐐̃^G,H_t or to 𝐐̅^G,H_t, depending on the filter motion model in question.
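A compact sketch of this windowed estimation scheme, with m = 5 as above (names are illustrative):

```python
import numpy as np
from collections import deque

class WindowedNoise:
    """Moving-window noise estimation (Eqs. above), window m = 5."""
    def __init__(self, m=5):
        self.R_hist, self.Q_hist = deque(maxlen=m), deque(maxlen=m)

    def estimate_R(self, eps, J, P_prior):
        """eps: residual from the 'dummy' update; J: measurement Jacobian."""
        self.R_hist.append(np.outer(eps, eps) + J @ P_prior @ J.T)
        return np.mean(self.R_hist, axis=0)

    def estimate_Q(self, K, d):
        """K: Kalman gain; d: innovation (measurement minus prediction)."""
        self.Q_hist.append(K @ np.outer(d, d) @ K.T)
        return np.mean(self.Q_hist, axis=0)
```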
§.§ Association and Track Management
This section describes the association algorithm and how the BIoU and Mahalanobis distances are combined to form robust association scores between tracks and candidate detections. The Hungarian algorithm (specifically, the Jonker-Volgenant variation <cit.>) uses these scores in cascaded matching stages to obtain track-detection associations. There are three association stages in total, with association score thresholds α_1, α_2 and α_3, respectively. The values of the hyperparameters introduced in this section are obtained as described in Section <ref>.
§.§.§ Track Management
Before the association algorithm is described, it is necessary to define the track states that determine by which branch of the algorithm a specific track will be considered. When a track is initialised, it is in the tentative state. From the tentative state, a track becomes confirmed if it is associated with a detection for two consecutive time steps after initialisation; otherwise, it is deleted if it is not associated with any detection for two consecutive time steps. If a confirmed track's position is predicted without a measurement update (i.e. it is not associated with any detection), it is coasting and is in the coasted state. If a measurement is associated with a coasted track, its state changes again to confirmed. If a coasted track is not associated with a measurement within Ω time steps, it is deleted.
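These rules amount to a small state machine; the rough sketch below paraphrases them with hit/miss counters, which is an approximation of the text above rather than the exact implementation.

```python
from enum import Enum, auto

class TrackState(Enum):
    TENTATIVE = auto()
    CONFIRMED = auto()
    COASTED = auto()

def step(state, associated, consec_hits, consec_misses, coast_steps, omega):
    """Return the next track state, or None to signal deletion."""
    if state is TrackState.TENTATIVE:
        if consec_hits >= 2:
            return TrackState.CONFIRMED
        return None if consec_misses >= 2 else TrackState.TENTATIVE
    if state is TrackState.CONFIRMED:
        return TrackState.CONFIRMED if associated else TrackState.COASTED
    # COASTED: re-confirm on association, delete after omega coasted steps.
    if associated:
        return TrackState.CONFIRMED
    return None if coast_steps >= omega else TrackState.COASTED
```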
§.§.§ The Association Algorithm
Figure <ref> depicts the association algorithm graphically. At the start of a time step, the detections for the corresponding video frame are obtained, and all existing tracks' positions are predicted by their ground- and image-plane filters. From here, confirmed and coasted tracks are considered by two cascaded association stages. Before the first stage, detections are separated into high- and low-confidence detections by the confidence thresholds d_high and d_low, as introduced in <cit.>. Specifically, detections with confidence greater than or equal to d_high are considered high-confidence, and those with confidence less than d_high but greater than or equal to d_low, are considered low-confidence. Detections with confidences below d_low are discarded.
During the first association stage, high-confidence detections are associated with the confirmed and coasted tracks, with the association score
P(D)×BIoU(𝐱^M_t, 𝐱^M_t|t-1)× d_conf,
where d_conf refers to the detection confidence value output by the detector, and D is the normalised Mahalanobis distance as in <cit.>:
D=𝐝_t^T𝐒_t^-1𝐝_t + ln |𝐒_t|,
where 𝐒_t=𝐉_t𝐏_t^-𝐉_t^T + 𝐑_t-1, |𝐒_t| denotes the determinant of 𝐒_t, ln is the natural logarithm and 𝐑_t-1 is the measurement noise covariance estimate (<ref>) obtained in the previous update step. For reasons that will be explained subsequently, D is converted to a probability by
P(D)=1 - CDF(D),
where CDF is the cumulative distribution function of the chi-squared distribution, which is defined to have 24 degrees of freedom (determined experimentally). As a result, P(D) is in the range [0, 1]. Finally, BIoU(𝐱^M_t, 𝐱^M_t|t-1) denotes the BIoU between a given detection 𝐱^M_t and the bounding box predicted by (<ref>). The BIoU is calculated as in <cit.>, where the measured and predicted bounding box widths and heights are each scaled by 2b+1; in this paper, only a single buffer scale parameter is used.
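For illustration, the stage-one score may be assembled as below; SciPy's chi-squared CDF provides P(D), and the buffer scale b = 0.3 is an assumed value (the actual value is found by the hyperparameter search described in the Experiments section).

```python
import numpy as np
from scipy.stats import chi2

def p_of_d(d, S, dof=24):
    """P(D) = 1 - CDF(D), with D the normalised Mahalanobis distance (Eq. above)."""
    D = float(d @ np.linalg.solve(S, d)) + np.log(np.linalg.det(S))
    return 1.0 - chi2.cdf(D, df=dof)

def biou(a, b, buf=0.3):
    """Buffered IoU: widths and heights scaled by 2b+1 about each box centre.
    Boxes are [left, top, right, bottom]; buf = 0.3 is an assumed value."""
    def expand(box):
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        w = (box[2] - box[0]) * (2 * buf + 1)
        h = (box[3] - box[1]) * (2 * buf + 1)
        return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
    ax0, ay0, ax1, ay1 = expand(a)
    bx0, by0, bx1, by1 = expand(b)
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

def stage_one_score(d, S, det_box, pred_box, d_conf):
    """Stage-one association score: P(D) x BIoU x detection confidence."""
    return p_of_d(d, S) * biou(det_box, pred_box) * d_conf
```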
After the first association stage, the remaining unassociated detections are combined with the low-confidence detections. These are associated with the remaining unassociated confirmed and coasted tracks, with the association score
(μ^I_t|t-1BIoU(𝐱^M_t, 𝐱^M_t|t-1)+μ^G_t|t-1P(D))d_conf,
where μ^I_t|t-1 and μ^G_t|t-1 refer to the predicted model probabilities of the image and ground plane filters, respectively. These weighting factors are determined by (<ref>). From (<ref>), it should be clear that D is converted to probability P(D) to allow the Mahalanobis distance to be mixed with the BIoU on the same scale.
This association stage manifests the filter architecture depicted in Figure <ref>. While the ground plane filters are mixed explicitly with an IMM filter in the manner established in the literature <cit.>, the image and ground plane filters are handled independently. Transition probabilities between these filters are only defined to use the predicted model probability (<ref>) during association. Each model probability is initialised as μ^I_0=μ^G_0=0.5. When a measurement is associated with a track, these model probabilities are updated <cit.>:
μ^i_t|t=μ^i_t|t-1Λ^i_t/∑_jμ^j_t|t-1Λ^j_t,
where Λ^i_t is the likelihood of the associated measurement given model i∈{I,G}. Specifically, the proposed method uses Λ^I_t=BIoU(𝐱^M_t, 𝐱^M_t|t-1) and Λ^G_t=P(D). This is another reason for converting D to a probability: so that the likelihoods Λ^I_t and Λ^G_t are on the same scale.
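A sketch of the model-probability prediction and update follows; the 0.9 self-transition probabilities are assumptions (the actual values p_II and p_GG are optimised as described in the Experiments section).

```python
import numpy as np

P_trans = np.array([[0.9, 0.1],
                    [0.1, 0.9]])   # rows: from model j; columns: to model i
mu = np.array([0.5, 0.5])          # initial mu^I_0 = mu^G_0 = 0.5

def predict_probs(mu_prev):
    """mu_{t|t-1}^i = sum_j p_ji mu_{t-1}^j (the weighting used in the score above)."""
    return P_trans.T @ mu_prev

def update_probs(mu_pred, lam_I, lam_G):
    """Posterior model probabilities with Lambda^I = BIoU and Lambda^G = P(D)."""
    w = mu_pred * np.array([lam_I, lam_G])
    return w / w.sum()
```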
The second association stage helps to avoid tracks from coasting when their detections in the image plane diverge from their expected value as predicted by the ground plane filter. This may occur due to partial occlusion or irregular, rapid object motion. It may also occur when ground plane filter state estimates are inaccurate, notably when a track is re-identified after a period during which it was coasted <cit.>. In these cases, the predicted model probability μ^G_t|t-1 is expected to be close to zero, while the predicted model probability μ^I_t|t-1 is expected to be close to one. Thus, the association score (<ref>) will be dominated by the BIoU and the track may still be associated despite low P(D). Note that <cit.> introduced a re-update step to mitigate the effect of state estimation noise and that the second association stage proposed here could be considered an alternative to their solution.
After the second association stage, the remaining unassociated tracks are placed in the coasted state. Of the unassociated detections, those with detection confidence greater than d_high are used in the third association stage to be associated with the tentative tracks. The third association stage also makes use of the association score in (<ref>). The unassociated detections that remain after this stage are used to initialise new tracks.
§ EXPERIMENTS
This section describes the experimental procedure used to find optimal hyperparameters for IMM-JHSE and reports the results of experiments on validation data. The datasets considered during these experiments are the MOT17 <cit.> and DanceTrack <cit.> datasets. Results on the test sets are reported in Section <ref> and include the MOT20 <cit.> and KITTI <cit.> datasets.
For all experiments and test set results, the camera calibration matrices provided in <cit.> are used – except for the KITTI dataset, which provides its own. Furthermore, the estimates of 𝐀_t (<ref>) provided by <cit.> are used for the MOT17 and MOT20 datasets, but for the DanceTrack and KITTI datasets optical flow features and the RANSAC algorithm are used to estimate 𝐀_t with the OpenCV library.
For IMM-JHSE, the following hyperparameters need to be specified:
σ_x, σ_y, α_1, α_2, α_3, Ω, b, d_high, d_low, as well as the transition probabilities illustrated in Figure <ref>. For the transition probabilities, it is only necessary to specify the self-transition probabilities p_s,s, p_d,d, p_G,G and p_I,I, since the other transition probabilities can be derived from them.
This paper uses the pattern search algorithm <cit.> to find optimum values for the hyperparameters. For each experiment described subsequently, the initial values are set as follows: σ_x=5, σ_y=5, α_1=α_2=α_3=0.5, Ω=30, b=0, d_high=0.6, d_low=0.5, p_s,s=p_d,d=p_G,G=p_I,I=0.9, and the number of pattern search iterations is restricted to 200. The objective function selected for optimisation is HOTA for the MOT17, MOT20 and KITTI datasets and association accuracy (AssA) for the DanceTrack dataset. Other evaluation metrics include IDF1 and multiple object tracking accuracy (MOTA). The reader is referred to <cit.> for details regarding these metrics. In all cases, evaluation is performed with the TrackEval library <cit.>. For the DanceTrack, KITTI and MOT20 datasets, the combined objective function of the entire dataset is optimised, while for the MOT17 dataset, the objective function is optimised per video.
Table <ref> reports the results of an ablation study on the DanceTrack validation dataset. The second column refers to the use of the dynamic measurement noise estimation method described in Section <ref>. When dynamic measurement noise is not used (as indicated with X), (<ref>) is used to provide an estimate of the measurement noise at each time step. When the dynamic estimation method is not used, HOTA decreases from 63.81 to 60.80. Thus, the dynamic noise estimation method contributes significantly to the success of IMM-JHSE. The third column refers to the use of an IMM filter. Without an IMM filter, IMM-JHSE is restricted to the single camera motion model of (<ref>). In this case, HOTA decreases to 62.04 – showing that the mixture with the static camera motion model (<ref>) allows for more accurate state propagation. The final column refers to the use of the image plane filter and the BIoU. Without these, all association scores are calculated by P(D)d_conf, and HOTA decreases to 63.03. Thus, the use of bounding box information in conjunction with the ground plane state vector proves beneficial.
In Table <ref>, the way bounding box information is considered is varied. For “IMM-like”, the association score for stages 2 and 3 remains as in (<ref>). For “Multiplicative”, all stages use the association score in (<ref>). The “IMM-like” association scores achieve the best performance across all metrics for both MOT17 and DanceTrack. Furthermore, the benefit of using the “IMM-like” association score is illustrated in Figure <ref>, where track identities are maintained while the performers jump, and the ground plane position estimates cannot be relied upon. Thus, the proposed association score in (<ref>) is justified.
Table <ref> shows that for some datasets, such as the DanceTrack dataset, it may be beneficial to optimise for AssA instead of HOTA. Conversely, optimising for AssA for the MOT17 dataset drastically reduces HOTA, IDF1 and MOTA. A possible explanation is that partial detections (e.g. slightly occluded tracks) are correctly associated. As a result, AssA increases, but localisation accuracy may decrease since the associated detection is not well aligned with the ground truth. This hints that specific datasets (such as MOT17) could benefit significantly from increased detector localisation accuracy. Furthermore, note that all metrics significantly improve for the DanceTrack dataset when the objective function is optimised per video instead of optimising the combined objective function over the entire dataset. Nevertheless, for the DanceTrack, MOT20 and KITTI datasets, the parameters optimised on the combined objective function are used for testing in the subsequent section. This is to avoid estimating an association between validation and testing videos. However, for the MOT17 dataset, this association is clear and manageable since it consists of fewer videos. Thus, for MOT17, the parameters optimised per video are used.
§ RESULTS
This section presents the results of IMM-JHSE evaluated on the DanceTrack, MOT17, MOT20 and KITTI test datasets. The results for the DanceTrack test set are shown in Table <ref>, where IMM-JHSE significantly outperforms related methods in terms of HOTA, IDF1 and AssA, using the same detections. Specifically, compared to the runner-up (UCMCTrack+), HOTA increases by 2.64, IDF1 by 6.72 and AssA by 4.11. Following the ablation study presented in Table <ref>, this is attributed to IMM-JHSE's dynamic noise estimation, its ground plane IMM filter, which accounts for two camera motion models, and its specific use of BIoU during association. Note that MOTA is the only metric that has not improved over previous methods. It is possible that this is because tentative tracks are removed from the result file if they are never confirmed, thus increasing the number of false negative detections and reducing the MOTA. It is also possible that partial detections are associated by IMM-JHSE which do not have corresponding ground truth detections because the object is, for example, mostly occluded. This would increase the number of false positive detections, which could also decrease MOTA. The last row of Table <ref> shows the results of FusionTrack <cit.> on the DanceTrack test set. FusionTrack performs joint detection and tracking with an attention-based decoder and also uses appearance features generated from an encoder-like component. Despite this, IMM-JHSE outperforms FusionTrack in terms of HOTA.
IMM-JHSE achieves the results reported in Table <ref> on the MOT17 test dataset. While IMM-JHSE does not achieve the best result for any metric, it places itself firmly in between UCMCTrack and UCMCTrack+ for all metrics – outperforming all other methods (besides UCMCTrack+) in HOTA, IDF1 and AssA. As will be seen in Tables <ref> and <ref>, IMM-JHSE seems to perform poorly with tracks which take up relatively little of the image plane area, particularly if a multitude of such tracks are in close proximity to one another. In other words, the merits of IMM-JHSE are more evident when tracking relatively large (in terms of bounding box area) objects that move irregularly or with high acceleration or velocity in the image plane rather than in highly cluttered situations. This may be due to the second association stage, which is designed to account for ambiguous detections that may diverge from that expected by the predicted ground plane state. Yet, if multiple tracks are close to one another and all have ambiguous detections, it is conceivable that they may be confused with one another since they are likely to have high BIoU scores with detections that belong to other tracks.
Since MOT20 deals with even more cluttered environments than MOT17, this reasoning seems all but confirmed in Table <ref>, where IMM-JHSE achieves the worst results for every metric except AssA on the MOT20 test dataset. Another possibility is that the optimised parameters overfit the validation sets, reducing performance on the test sets. Indeed, the pattern search algorithm seems prone to getting stuck in local minima based on close observation during optimisation iterations.
For the KITTI test dataset, IMM-JHSE once again shows the poorest results for the pedestrian class in all metrics except AssA in Table <ref>. However, it outperforms the HOTA and AssA scores of other methods in the car class – with HOTA increasing by 2.11 over UCMCTrack and AssA by 5.09.
§ CONCLUSION
This work introduced IMM-JHSE, a novel MOT algorithm. The method makes use of an IMM ground plane filter, which combines static and dynamic camera motion models. Crucially, object motion is decoupled from these motion models. Furthermore, image plane information is incorporated through a simple bounding box motion filter and the BIoU score. IMM-JHSE dynamically switches between and mixes BIoU and ground plane-based Mahalanobis distance in an IMM-like fashion to perform association. Finally, dynamic process and measurement noise estimation are used. All of these components contribute to the overall performance of the method.
IMM-JHSE improves significantly upon similar methods on the DanceTrack and KITTI-car datasets while offering competitive performance on the MOT17, MOT20 and KITTI-pedestrian datasets. Future work may improve its performance on data with high track densities.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Paul Claasen: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data Curation, Writing - Original Draft, Visualization. Pieter de Villiers: Writing - Review & Editing, Supervision, Project administration, Funding acquisition.
§ DATA AVAILABILITY
The used datasets are already publicly available.
§ ACKNOWLEDGEMENTS
This work was supported by the MultiChoice Chair in Machine Learning and the MultiChoice Group.
Thermal Emittance Isolation by Cathode Retraction

Benjamin Sims, John W. Lewellen, Xu Ting, Sergey V. Baryshev

arXiv:2409.03499 [physics.acc-ph], 5 September 2024
Benjamin Sims
[email protected]
Department of Electrical and Computer Engineering, Michigan State University, MI 48824, USA
Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
Facility for Rare Isotope Beams, Michigan State University, East Lansing, MI 48824, USA

John W. Lewellen
[email protected]
Accelerator Operations and Technology Division, Los Alamos National Laboratory, NM 87545, USA

Xu Ting
[email protected]
Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
Facility for Rare Isotope Beams, Michigan State University, East Lansing, MI 48824, USA

Sergey V. Baryshev
[email protected]
Department of Electrical and Computer Engineering, Michigan State University, MI 48824, USA
Department of Chemical Engineering and Material Science, Michigan State University, MI 48824, USA
§ ABSTRACT
In this work, a combination of cathode retraction and two-slit emittance measurement technique is proposed as an advanced means to individually modify emittance growth components, specifically, rf injector fringe fields, to isolate and directly measure the thermal emittance, the fundamental beam emittance metric for an electron beam. A case study of the LCLS-II-HE Low Emittance Injector (LEI), a state-of-the-art superconducting radiofrequency (SRF) gun, designed for LCLS-II HE upgrade is used to showcase the power of the two-slit technique. Particularly, it is demonstrated that generating a high resolution phase-space distribution map, dominated by the intrinsic emittance of the electron bunch, is possible. This result goes beyond the normal single-parameter distribution characterizations (e.g. RMS emittance and Twiss parameters) provided by the solenoid scan.
One key feature making this technique work (and, in the end, practically useful) is the ability to retract the cathode, as it compensates for radiofrequency (rf) defocusing. It is demonstrated how cathode retraction can serve as an additional optimisation tool for tailoring the routine performance of the photoinjector. We posit that a variable-position cathode may be a useful method for optimizing photoinjector performance across multiple parameter regimes.
Thermal Emittance Isolation by Cathode Retraction
Benjamin Sims, John W. Lewellen, Xu Ting, Sergey V. Baryshev
=================================================
§ INTRODUCTION
Superconducting radiofrequency (SRF) injectors are emerging as powerful sources for applications requiring high-average-current and high-brightness electron beams. Use cases include tools of scientific discovery (e.g. as beam sources for ultrafast electron diffraction and microscopy systems, and X-ray free electron laser injectors), as well as medical and industrial applications requiring high duty factors and average beam power <cit.>. While several metrics, such as beam brightness, have been established to characterize the quality of electron beams used for scientific applications, a parameter termed emittance is a central figure of merit for many applications. Roughly analogous to the M-squared figure-of-merit used for laser beams, the transverse emittance of an electron beam provides a characterization of the transverse phase-space density of the beam. An ideal electron beam would have an emittance limited by the Fermi exclusion principle. In practice, the emittance of a beam is determined by a number of factors, including contributions from space charge forces within the beam, nonlinear radiofrequency (rf) and static electromagnetic fields with which the beam interacts, and intrinsic emittance arising from the conditions during the beam's initial emission from a solid-state cathode <cit.>.
Because in legacy systems beam emittance growth has been dominated by the space charge and nonlinear fields, efficient methods have been found to manage and minimize (or compensate) them <cit.>. Current beam source designs are approaching the point where intrinsic emittance can become a limiting factor in beam quality. Since the intrinsic emittance depends on the cathode material used, it needs to be measured experimentally as a benchmark of the emittance floor, i.e. setting the best attainable case. Techniques and apparatus such as the momentatron <cit.> have been developed to do this under "laboratory" conditions. However, experimentally measuring the intrinsic emittance in an operational beam source can be difficult due to the contributions from space charge and nonlinear external fields <cit.>. Given the relatively fragile nature of high-quantum-efficiency photocathodes, the ability to characterize in detail the evolution of cathode performance over time in an operational setting is increasingly essential.
The solenoid scan technique <cit.> has been a workhorse for measuring emittance because solenoid scans are fast, most if not all of the equipment is already present in most high-brightness photoinjector beamlines, and the technique itself is straightforward to implement. However, measuring emittance using solenoids has limitations associated with aberrations inherent to the solenoidal field, typically leading to potentially significant intrinsic emittance overestimation <cit.>. As usually implemented, solenoid scans provide only a single cumulative (or whole-bunch) emittance measurement, i.e. without providing information about the detailed transverse phase-space distribution <cit.>. To extract RMS (whole-beam) parameters, additional assumptions may be required as the solenoid scan technique alters the impact of space charge during the measurement process, most apparently when the beam is focused to a waist, giving rise to uncertainties when estimating the beam parameters <cit.>. For very low emittance beams, resolution of the beam imaging system employed may also prove to be a limiting factor.
Such issues aside, a density map of the beam's transverse phase space would be a preferable measurement as it could contain important details about individual emittance contributions, as well as cathode emission characteristics, that could lead to better informed injector R&D and performance tuning. Tomographic techniques <cit.> have been used to develop phase-space maps, but these techniques have generally been applied at higher beam energies, such that beam has evolved significantly since its initial formation. While the starting phase-space distribution can be attributed to the cathode properties (e.g. intrinsic emittance and quantum efficiency) convolved with the drive laser, emittance growth over time can be attributed to space charge, acceleration and propagation through nonlinear fields, etc. Tomographic measurements performed at the photoinjector using a solenoid are also still subject to the effects mentioned above, e.g. spherical aberration.
In this context, an arguably better way to measure the intrinsic emittance immediately after a photoinjector is by using the two-slit technique <cit.>. Given that phase space maps can be directly obtained, it opens up an opportunity to quantify additional lingering effects of space charge and rf emittance components, which can be further manipulated by an effective and practical means such as cathode retraction with respect to the injector back wall <cit.>. As the goal is to measure the intrinsic emittance, ideally to obtain a two-dimensional phase-space density map rather than a single RMS value, a slit method is preferable as long as it can provide the required resolution within a feasible measurement time. Contrary to the rapid solenoid scan, the two-slit measurement comes at a cost in terms of time, where the duration of the measurement and errors resulting from drifts and jitter are major, and coupled, limiting factors <cit.>. Indeed, reducing the intrinsic measurement error requires higher resolution scans which, consequently, increase the duration of the scan. Results from such long duration scans could be subject to system drifts or instabilities, such as the limited lifetime of high quantum efficiency (QE) photocathodes. Nevertheless, the detailed phase-space distribution provides a substantially improved ability to identify unexpected behaviors via direct inspection of the phase-space distribution, in addition to providing the ability to calculate the emittance from the measured distribution. Therefore, finding optimal ways to perform two-slit measurements deserves further effort.
In the present work, using an example of the low emittance SRF injector being developed for LCLS-II HE upgrade <cit.>, we computationally demonstrate a novel approach to intrinsic emittance measurement that utilizes i) cathode retraction to compensate for rf-induced emittance growth, and ii) a 2-slit emittance measurement system to obtain a phase-space map. Practically speaking, the proposed diagnostic two-slit beamline could serve to characterize high brightness bunches and provide a means to characterize high-brightness photocathodes in an operating photoinjector.
§ DEFINITIONS OF EMITTANCE AND ERROR
§.§ Two Slit Emittance Measurement
A two-slit emittance measurement works by measuring the intensity of "bunchlets," located at position x and angle x' with widths δ x and δ x' respectively, within the beam bunch; the ensemble of measurements forms a current density map of the transverse phase space ρ (x,x'). (Strictly, we are measuring a projection of the full 6-d phase space distribution of the bunch, ρ (x,x',y,y',p,t), onto a single plane.) The apparatus consists of two plates with transverse slits (simply referred to as "slits" hereafter) located along the axis of beam propagation. A detector is located downstream of the second slit; the detector can be an optical screen, a Faraday cup, etc. The slit plates are made thick enough to absorb beam particles not entering the slit, but thin enough so as not to significantly collimate the beam.
The transverse position of the first, or upstream, slit X_1 sets the position of the bunchlet center, that is x=X_1, and the difference in the center positions of the upstream and downstream slits, X_1 and X_2 respectively, divided by the distance between the slits, L, determines the angle x' as <cit.>
x'= X_2 - X_1/L
The number of transmitted beam particles at each pair of slit locations X_1 and X_2 (or the corresponding location in the beam's phase space x and x') describes a data bin. These bins can be combined together into a density map representing the bunch phase space ρ (x,x'). The resolution of the phase space map is determined by the slit widths (W_1 and W_2) and longitudinal separation of the slits L as
Δ x= W_1
Δ x'= W_2/L
This indicates that small slits and large separation L between slits will provide higher resolution, e.g. more bins across a given phase-space distribution. The bin size is analogous to pixel size in a conventional imaging system.
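In code, the mapping from slit settings to phase-space bins is direct; the slit widths and separation below are those used in the GPT case study later in this paper (W_1 = 99 μm, W_2 = 198 μm, L = 1 m).

```python
# Two-slit geometry; values taken from the case-study beamline below.
W1, W2, L = 99e-6, 198e-6, 1.0      # slit widths and separation, metres

dx, dxp = W1, W2 / L                 # bin sizes Δx and Δx' (Eqs. above)

def phase_space_bin(X1, X2):
    """Map a pair of slit centre positions to the bin centre (x, x')."""
    return X1, (X2 - X1) / L
```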
Two-slit measurements of the intrinsic emittance require the management of space charge and rf induced emittance growth, such that the emittance of the beam is dominated by the intrinsic emittance. Space charge-induced growth can be minimized by using low-intensity laser pulses, i.e., by conducting measurements at low bunch charge.
Additionally, by sweeping laser parameters, specifically the intensity, we can assess the impact of space charge, effectively characterizing its contribution to the overall emittance.
The intrinsic emittance can be increased by increasing the primary laser spot size, making the intrinsic emittance easier to measure. This, however, is not without a cost as the increase in radius also increases the emittance contribution from nonlinear fields, as well as (all other effects equal) increasing the size of the beam spot at the first slit. This effect can be addressed with the use of cathode retraction. By retracting the cathode, a transverse focusing field near the cathode is introduced by the aperture edges. Such lensing effects helps mitigate the rf defocusing of the beam as it exits the rf gun, allowing for the use of a larger laser spot on the cathode. There should exist an optimal location where focusing provided by cathode retraction and rf defocusing effects from the rf field, in the body and at the exit of the gun, effectively cancel each other, providing a net minimization of beam divergence due to rf effects. Hence, rf emittance contribution to the total emittance is minimized. This approach also results in a smaller spot at the first slit, with a smaller far-field divergence angle, which is beneficial for the two-slit method. We note that while a solenoid located immediately downstream of the gun cavity, e.g. in the typical location for emittance compensation, can provide a small beam at the first slit, it cannot provide the same benefits in terms of minimizing nonlinear field contributions as the method of cathode retraction can.
§.§ Binning Error
Edge bins, those located near the edges of the beam in phase-space, represent a source of error similar to that encountered in particle-in-cell space-charge calculations.
All particles within a bin are typically assigned to a single (x, x') determined by the slit locations. For reasonable bin sizes, bins with many particles, and beams with relatively slow variations across the phase-space distribution, binning is a reasonable approach for phase space mapping. However, edge bins near the boundaries of the distribution may collect a relatively small number of particles (or even none, in the low-bunch-charge case), and the center of the bin may not correspond well to the actual average position of the particles within the bin. Thus, the error of the two-slit measurement can be directly correlated to the number of edge bins <cit.>, with the error proportional to both the resolution and the maximum x and x' measured. Subtracting this binning error, Eq. <ref>, from the measured emittance yields the approximate emittance without the error attributable to binning effects.
ε_er = n_edge·Δ x ·Δ x' /2 ·π.
The emittance measured by any two-slit scan can be approximated by <cit.>
ε = n ·Δ x ·Δ x' /π.
When combined with an additional metric for calculating the error in the measured emittance, this yields
ε_er = 2/π· (x_max·Δ x' - x'_max·Δ x ),
Together, this set of equations establishes the basis for the informed design of an experimental beamline in which slit-based phase-space measurements can be performed with minimized error.
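As a sketch of how these estimates may be computed from a measured occupancy map, consider the following; the 4-neighbour definition of an edge bin is an assumption, as the text does not specify one.

```python
import numpy as np

def emittance_with_binning_error(rho, dx, dxp):
    """Approximate emittance and binning error (Eqs. above) from a count
    map rho[i, j] of transmitted particles per (x, x') bin."""
    occ = rho > 0
    eps = occ.sum() * dx * dxp / np.pi          # Eq. for the measured emittance
    pad = np.pad(occ, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    n_edge = int((occ & ~interior).sum())       # occupied bins with an empty neighbour
    eps_err = n_edge * dx * dxp / (2 * np.pi)   # Eq. for the binning error
    return eps, eps_err
```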
§.§ Basic Emittance Concepts
Projected transverse emittance can be calculated using either velocity or momentum phase space. As many of the measured phase spaces are non-elliptical, momentum space was chosen to help account for any irregularities that would be overlooked in velocity space <cit.>. The conversion from the measured velocity space to momentum space is done by multiplying the measured x' by the Lorentz γ-factor of the particle. The statistical root-mean-square (RMS) emittance, computed using 100% of the particles, was used:
S_11= ⟨ x · x ⟩,
S_12= ⟨ x · x' ⟩,
S_22= ⟨ x' · x' ⟩,
ε = √(S_11· S_22 - S_12^2 ).
The unnormalized emittance (which scales with the beam's γ-factor), rather than the normalized emittance (nominally invariant under acceleration), was used, as it allows for direct comparison between the calculated emittance and that expected given the parameters of the cathode.
§.§ Intrinsic Emittance and MTE
The intrinsic emittance of a bunch is determined by the initial spot size of the bunch and the mean transverse energy (MTE) of electrons emitted from the cathode. The MTE is determined by several factors including the photocathode material and illumination wavelength, temperature, and surface roughness. In the presented simulations, the cathode MTE is chosen in accord with the requirements of the LCLS-II-HE low-emittance injector <cit.>.
The intrinsic emittance of the beam can be calculated as:
ε_int=σ_xi√(2 · MTE/m_e c^2),
σ_xi = R_i/2,
ε_int=R_i/2√(2 · MTE/m_e c^2),
MTE= m_e · c^2 /2·(2 ·ε_int/R_i)^2 ,
where, assuming a uniform emission current density, R_i is the emission spot radius. These equations allow for the calculation of the expected intrinsic emittance for a given MTE and emission spot radius, and thus the contribution of the MTE to the total emittance, which can then be compared against the simulated slit measurement.
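Worked through with the case-study numbers used later in this paper (1 mm emission radius, 200 meV MTE), these relations give an expected intrinsic emittance of roughly 0.44 μm·rad, and map the simulated measurement of 0.475 μrad reported in the Discussion back to about 230 meV:

```python
import numpy as np

MEC2 = 0.511e6   # electron rest energy m_e c^2, in eV

def intrinsic_emittance(R_i, mte):
    """Eq. above: R_i in metres, MTE in eV; returns emittance in m·rad."""
    return (R_i / 2.0) * np.sqrt(2.0 * mte / MEC2)

def mte_from_emittance(eps, R_i):
    """Inverse relation, Eq. above."""
    return (MEC2 / 2.0) * (2.0 * eps / R_i) ** 2

print(intrinsic_emittance(1e-3, 0.200))    # ~4.4e-7 m·rad for 200 meV, R_i = 1 mm
print(mte_from_emittance(0.475e-6, 1e-3))  # ~0.23 eV, as reported in the Discussion
```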
§ CASE STUDY: LOW EMITTANCE INJECTOR FOR LCLS-II-HE UPGRADE
§.§ Case study setup
The LCLS-II-HE low emittance injector (LEI) is a state-of-the-art high-gradient SRF injector design <cit.>. The LEI is intended to enable extending the LCLS-II-HE's useful photon energy to 20 keV without additional cryomodules (e.g. increasing the beam energy past the LCLS-II-HE goal of 8 GeV) <cit.> by providing a significantly lower-emittance beam at 100 MeV than the current LCLS-II injector. The LEI begins with a 1.8-MeV SRF photoinjector, being developed by a collaboration between SLAC, MSU/FRIB, Argonne and HZDR. The need for low emittance bunch production motivates this case study, which focuses on establishing a useful and versatile photocathode testing beamline for the LEI. A robust two-slit emittance measurement optimized for the LEI SRF gun was considered. Requirements for any such system under consideration include compatibility and integrability with the current LEI gun-to-linac beamline design, and the ability to measure photocathode MTEs below 200 meV (e.g. suitable for cathodes proposed for the LEI) <cit.>. In situ measurement of photocathode MTE, and of its evolution, would then help attain the best overall performance of the LEI. The design of the SRF gun allows for manipulation of the cathode stalk, in particular variation of the longitudinal position of the cathode surface relative to the gun "nosecone" surface <cit.>. Utilization of this feature is key to the MTE measurement process as proposed here: the cathode-region fields, as modified by shifting the cathode longitudinal position, enable a low-error measurement.
The work was performed utilizing Superfish <cit.>, General Particle Tracer (GPT) <cit.>, and a proprietary sequencer. Superfish was used to generate rf field maps of the LEI SRF gun with the cathode surface located at different longitudinal positions; several examples can be seen in Fig. <ref>. The field maps were imported into GPT to simulate, visualize and quantify the two-slit measurement. The sequencer was used to automate the transition from Superfish to GPT and for quantifiable data collection and post-processing.
§.§ Cathode retraction
Under nominal operating conditions for the LEI (e.g. 100-pC bunch charge), the SRF gun cathode is flush with the gun's nosecone surface. This configuration produces a diverging beam, which is compensated by the gun solenoid as part of the traditional emittance-compensation process. However, in the proposed measurement technique, cathode retraction is used along with reduced-charge bunches to compensate for the divergence of the bunch.
This case study was conducted with the cathode retracted from –0.15 cm to –0.22 cm with respect to its flush (0 cm retraction) reference position, Fig. <ref>. In this range, a minimum was identified where the smallest bunch transverse size was observed on a screen located 1 m downstream from the injector exit plane; we define the corresponding cathode location as the optimal cathode retraction for intrinsic emittance measurement. The variation in spot size with cathode position is due to the emergence of the radial focusing fields that can be seen in Fig. <ref>. In accord with Eq. <ref>, the minimum spot size, corresponding to a retraction of –0.197 cm, is a desirable condition for measuring the intrinsic beam emittance because the systematic error can be minimized. Fig. <ref> contrasts two bunch transverse phase spaces corresponding to a cathode retraction of zero, and to the optimal position for intrinsic emittance measurement. A bunch emitted with the cathode in its nominal (flush) location is strongly diverging, with a "narrow" phase space as shown in Fig.<ref>a. Attempting to measure this distribution produces a larger error in accordance with Eqs. <ref> and <ref>, as it has a large x and x' spread in addition to a high number of edge bins.
Alternatively, when the cathode is retracted, the phase space at the screen is quite different, Fig. <ref>. Retracting the cathode has two benefits when measuring MTE. First, it is seen that the extent of the phase space with the cathode at –0.197 cm is about an order of magnitude smaller in both x and x' than with the cathode in the nominal position. While the actual phase-space area in both cases is nominally the same, the potential measurement resolution is better and the number of edge bins is smaller, allowing for more accurate measurements. Second, the dominant factor in the bunch's transverse phase space is no longer the radial defocusing term from the gun's rf field. Instead, the divergence is dominated by the MTE, with vanishing contribution from the rf fields. Note that, while we describe the beam location as at a "focus," e.g. smallest obtainable spot size, as a matter of convenience, the beam is not in fact at a waist but is diverging. Thus, it meets one of the criteria for the two-slit measurement technique, e.g. that the location of the first slit be downstream of a beam waist.
§.§ GPT beamline modeling
GPT simulations were performed with slits located at z=1 m and z=2 m, for a separation L=1 m. This set-up (Fig. <ref>) is sufficient to allow the beamlet passed by the first slit to diverge appropriately. The largest spot size at each screen determined the slit widths for all simulations, as the parameters were, in analogy to a physical measurement with fixed-width slits, not modified over the course of the simulated measurement. The up- and downstream slits were sized as W_1=99 μm and W_2=198 μm respectively, to provide 101 bins across the beam spot at both longitudinal locations. This was done to accommodate the spot size produced with the cathode retracted to –0.15 cm, resulting in a spot size at z=1 m of 0.5 cm and a spot size at z=2 m of 1 cm. As the bunch is larger at the second screen, a larger slit size is needed to fully map it in the same number of steps as at the first slit location. In an experimental setting, such slits can be readily fabricated. With a nominal beam energy of 1.8 MeV, if made of tungsten the slits would need to be at least 1 mm thick to completely stop the beam. The angular acceptance of the first slit would therefore be approximately atan(0.1 mm / 1 mm) = 5.7 deg, and approximately 11.3 deg for the second slit. The angular resolution provided by the second slit is (0.2 mm / 1 m) = 0.01 deg, and the anticipated divergence of the beam as a whole, when the cathode is in the retracted location, is on the order of 1 deg. Thus, we would not expect the angular resolution of the measurement to be limited by the upstream slit acting as a collimator.
In GPT, the bunch was generated as a uniform spot with a radius of 1 mm and a Gaussian temporal profile with a σ of 5.67 ps, clipped at ± 3 σ. The MTE was set to 200 meV. Space-charge calculations were not included in the simulation: a low-intensity laser pulse yielding a bunch charge of around 1 pC would have a negligible space-charge contribution to the emittance while providing sufficient charge to be captured by a Faraday-cup-like device <cit.>, and including space charge would significantly increase the time required to perform the simulations. Particles transmitted through both slits were counted at each pair of slit locations. This process was repeated for 10 different cathode retracted positions, between –0.15 cm and –0.22 cm, as seen in Fig. <ref>.
Both the cathode-region focusing and exit-region defocusing are manifestations of the same phenomenon, e.g. a spatially varying longitudinal field gives rise to radial fields. A longitudinal field increasing in magnitude (as in the case in the near-cathode region when the cathode is retracted) gives rise to a radially focusing field; while a longitudinal field decreasing in magnitude (as is the case along the majority of the axis within the gun) leads to a radially defocusing field <cit.>.
For the same emission radius and launch phase, the retracted cathode produces a much smaller bunch radius of 3 mm, as compared to the un-retracted simulation with a bunch radius of 30 mm, both at z= 3 m. The retracted cathode simulation generates a beam waist approximately 0.4 m downstream from the cathode. In contrast, with the cathode at its nominal position, there is a virtual beam waist several cm upstream of the cathode, and the far-field divergence angle is approximately an order of magnitude larger. The somewhat peculiar shape of the bunch's edge in phase space is attributed to nonlinear components in the near-cathode radial rf fields; but the overall divergence is dominated by the intrinsic emittance of the beam, not the rf fields (regardless of linearity). The cathode retraction thus provides a phase space distribution which can be more accurately measured by a two-slit scan to yield an MTE.
The difference in the number of edge bins between distributions from an ideally retracted vs. partly retracted cathode is drastic, as illustrated in Figs. <ref> and <ref>. There is an approximately 3-fold difference in the number of edge bins, indicating a similar drop in the measurement error. (We note that when the cathode is flush, the resulting strongly diverging distribution is effectively all contained in the edge bins.) Together these effects – fewer edge bins, and smaller extents in phase space – allow one to perform a measurement of the intrinsic emittance with lower systematic error.
§ DISCUSSION
The results of the case study show that the minimal measured emittance was found with the cathode retracted to –0.197 cm, Fig. <ref>. At this location, the emittance was calculated as 0.475 μrad, corresponding to an MTE of 230 meV, 30 meV higher than would be expected solely from the cathode intrinsic emittance, as marked by the black line in Fig. <ref>. This suggests the cathode retraction method can provide a reasonable measure of cathode MTE when the cathode is installed in an operational high-brightness photoinjector. However, the measurement error of +15% is still substantial. When the phase space analysis is performed using the resolution-based emittance with consideration of the binning error as discussed above, Fig. <ref> shows nearly perfect agreement between the RMS emittance and the resolution-based emittance once the binning error is subtracted. The calculation of the binning error also indicates that it is lowest at –0.197 cm due to the size of the bunch in phase space. Comparisons between Fig. <ref> (particle-based phase space) and Fig. <ref> (bin-based phase space) show excellent agreement between the two; tracking individual particles in GPT provides a more precise map of the phase space but is not feasible in a practical experimental system.
The results highlight two particularly useful, intertwined insights for a practical measurement. First, cathode retraction can be used to generate a beam in which the divergence is dominated by the cathode MTE; this is the same foundation upon which the operation of instruments such as the Momentratron relies. This condition, in turn, leads to a lower binning error in the two-slit measurement and, ultimately, to sub-20% measurement errors for the cathode MTE. In terms of practical implementation and experimental setup, the optimal cathode location for an intrinsic emittance measurement is found simply by minimizing the spot size of the beam at the first slit location. This provides a fast way to identify the optimal cathode position for the measurement. Second, by effectively compensating for rf defocusing within the remainder of the gun, cathode retraction allows the MTE to become the dominant term impacting the measurement process. Together these effects are sufficient for measuring the MTE of the cathode within a reasonably small margin of error. Analysis of the systematic error contributions provides an understanding of why these effects decrease the measurement error. Indeed, calculating the resolution-based emittance and binning error provides not only a useful double check of an RMS emittance measurement but also explains why an emittance measurement will have lower error with a smaller, more slowly diverging beam than with one that is strongly diverging, e.g. due to rf fields. By analyzing the binning error equation, Eq. (<ref>), we observe that bunches with larger extents in phase space, all else equal, will have larger measurement errors than bunches with the same phase-space area but smaller extents. Understanding the error contributions of different phase space shapes provides a critical tool in determining the utility of a two-slit measurement.
§ CONCLUSION AND OUTLOOK
This work introduces an improved two-slit emittance measurement methodology that combines several techniques to measure the intrinsic emittance of a beam, and thus the MTE of the photocathode from which it was emitted. These techniques have been simulated to show their effectiveness at isolating the intrinsic emittance. As is often the case, the approach to simulating a system differs from using that system in practice. The limitations of a two-slit measurement can be broadly categorized as duration-based and error-based. The case study discussed here provides ways to address both of these limitations so that a practical system can be proposed.
The duration limitation can be addressed in part by utilizing a simple approach to determine the preferred cathode position for an emittance measurement, as only the diameter of the beam needs to be measured to set the cathode position. This eliminates the need to make slit-scan measurements at multiple cathode locations. Careful design of the slit system, e.g. determining the optimal number of bins, will help to minimize the time required for a single scan while maintaining the desired resolution. While a thorough analysis of minimizing measurement time is beyond the scope of this paper, we note several possible directions. Increased bunch charge can decrease measurement time (e.g. by accumulating the same statistics, in terms of charge per bin, more quickly), albeit at the expense of increased space-charge contributions to the beam emittance; this can be explored further in simulation. Substituting fast beam steering (via magnetic or electrostatic deflectors) for physical slit motion <cit.> could significantly decrease the measurement time; the steering fields do not perturb the phase space and thus do not corrupt the measurement. Incorporation of intelligent algorithms into the measurement process, e.g. to identify likely regions of no current density, can reduce the total number of locations to be sampled in phase space.
The error limitation of two-slit measurements cannot be avoided, as binning of the particles will occur based on the size of the slits used. However, as shown here, it is possible to generate phase space distributions for which the binning error is minimized and an RMS measurement of emittance is not strongly affected by it. The phase space effects shown here are attributed to the dynamics at play in cathode retraction and work ideally for measuring the intrinsic emittance. In general, the resolution-based emittance and binning error estimates may provide useful corrections to the RMS emittance calculated from the measured phase-space map.
§ ACKNOWLEDGMENTS
The work by Benjamin Sims was supported by the U.S. Department of Energy Office of Science, High Energy Physics under Cooperative Agreement Award No. DE-SC0018362. The work by Sergey Baryshev was supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award No. DE-SC0020429. The work was supported by the U.S. Department of Energy Office of Science, High Energy Physics under Cooperative Agreement Award No. DE-SC0018362, and the US Department of Energy under Contract DE-AC02-76SF00515.
To Appear in Network and Distributed System Security (NDSS) Symposium 2025
Understanding Data Importance in Machine Learning Attacks:
Does Valuable Data Pose Greater Harm?
Rui Wen, Michael Backes, Yang Zhang (Corresponding author)
CISPA Helmholtz Center for Information Security
================================================================================================================
§ ABSTRACT
Machine learning has revolutionized numerous domains, playing a crucial role in driving advancements and enabling data-centric processes.
The significance of data in training models and shaping their performance cannot be overstated.
Recent research has highlighted the heterogeneous impact of individual data samples, particularly the presence of valuable data that significantly contributes to the utility and effectiveness of machine learning models.
However, a critical question remains unanswered: are these valuable data samples more vulnerable to machine learning attacks?
In this work, we investigate the relationship between data importance and machine learning attacks by analyzing five distinct attack types.
Our findings reveal notable insights.
For example, we observe that high importance data samples exhibit increased vulnerability in certain attacks, such as membership inference and model stealing.
By analyzing the linkage between membership inference vulnerability and data importance, we demonstrate that sample characteristics can be integrated into membership metrics by introducing sample-specific criteria, therefore enhancing the membership inference performance.
These findings emphasize the urgent need for innovative defense mechanisms that strike a balance between maximizing utility and safeguarding valuable data against potential exploitation.
§ INTRODUCTION
Machine learning has emerged as an indispensable tool across numerous domains, revolutionizing industries and empowering data-driven decision-making processes.
Central to the essence of machine learning is the pivotal role played by data, serving as the bedrock for training models and exerting a profound influence on their performance and predictive accuracy.
Concurrently, the crucial role of data in model training also exposes it as a noteworthy source of vulnerabilities.
Recent research has shed light on the heterogeneous impact of individual data samples, highlighting the presence of certain data that exhibit a heightened influence on the utility and overall effectiveness of machine learning models <cit.>.
Understanding this variability is important for two main reasons.
First, knowing how individual data samples affect model performance is key to improving machine learning explainability, offering new insights into model behavior, and enhancing interpretability <cit.>.
Second, this knowledge can guide data trading practices, where the importance of data is a significant factor <cit.>.
However, the influence of such diverse data on model leakage and security remains largely unexplored.
Existing research predominantly concentrates on the models themselves; for example, studies <cit.> suggest that overfitted models are more prone to membership inference attacks.
Nevertheless, even within the same model, distinct data samples exhibit varying vulnerabilities to attacks.
This prompts a crucial question: do these valuable data samples also exhibit an increased vulnerability to a spectrum of machine learning attacks?
Understanding the differential vulnerability of data samples has significant practical implications.
In medical diagnostics, for example, patient records with rare but highly indicative symptoms are considered high importance samples.
Assessing whether these records are more prone to attacks is crucial, as breaches could lead to discrimination, higher insurance premiums, or other serious consequences for individuals.
In this paper, our focus lies in investigating the relationship between data importance and machine learning attacks.
Our primary objective is to thoroughly investigate whether valuable data samples, which contribute significantly to the utility of machine learning models, are exposed to an elevated risk of exploitation by malicious actors.
To achieve our objectives, we focus on five distinct types of attacks, encompassing both training-time and testing-time attacks.
The training-time attack we consider is the backdoor attack <cit.>, while the testing-time attacks consist of membership inference attack <cit.>, model stealing attack <cit.>, attribute inference attack <cit.>, and data reconstruction attack <cit.>.
For each of these attacks, we thoroughly analyze the behavior and impact on both high importance and low importance data samples, aiming to uncover any discernible differences.
Main Findings
Our research has yielded significant findings that shed light on the heightened vulnerability of valuable data samples to privacy attacks.
Specifically, our key findings are as follows:
* Membership Inference Attack: High importance data samples exhibit a higher vulnerability compared to low importance samples, particularly in the low false-positive rate region.
For instance, in the CIFAR10 dataset, at a false positive rate (FPR) of 1%, the true positive rate (TPR) of high importance data is 10.2× greater than that of low importance samples.
* Privacy Onion Effect: The concept of the privacy onion effect <cit.> can be extended to the distribution of data importance.
Specifically, previously considered unimportant samples gain significance when the dataset removes the important samples.
* Model Stealing Attack: High importance samples demonstrate greater efficiency in stealing models when the target model is trained on the same distribution as the query distribution.
However, we empirically demonstrate that the importance does not transfer between different tasks.
* Backdoor Attack: Poisoning high importance data enhances the efficiency of the poisoning process, particularly when the size of the poison is small.
On the other hand, the influence on clean accuracy does not yield a definitive conclusion, as poisoning either type of data has a limited impact on clean accuracy.
* Attribute Inference and Data Reconstruction Attacks: We observe no significant distinction between high and low importance data in these attacks.
Our research provides empirical evidence establishing a correlation between data importance and vulnerabilities across diverse attack scenarios.
This introduces a novel perspective for analyzing sample-specific vulnerabilities, enriching our understanding of the security implications within the realm of machine learning.
Beyond theoretical insights, our study showcases practical applications of these findings, illustrating how they can be utilized to devise more potent attacks.
On one hand, these findings can be utilized in a passive manner.
For example, we empirically demonstrate that membership inference attacks can be improved by introducing sample-specific criteria based on sample importance.
Additionally, adjusting the poisoning strategy according to sample importance proves to enhance the efficacy of backdoor attacks, particularly with a reduced poisoning rate.
More interestingly, we can actively modify samples to alter their importance, which subsequently impacts both attack and defense performance.
For example, recognizing that high importance samples are more vulnerable to membership inference attacks, attackers could increase the importance of targeted samples to heighten their vulnerability.
This approach follows exactly the same idea as the attack accepted at CCS’22 <cit.>, effectively demonstrating how we can “reinvent” state-of-the-art attacks guided by findings in our work.
In summary, our work represents a pioneering step in systematically understanding the vulnerabilities of the machine learning ecosystem through the lens of data.
These findings serve as a resounding call to action, urging researchers and practitioners to develop innovative defenses that strike a delicate balance between maximizing utility and safeguarding valuable data against malicious exploitation.
§ BACKGROUND
§.§ Machine Learning Models
Machine learning algorithms aim to construct models that effectively predict outputs based on given inputs.
These models are typically represented by a parameterized function denoted as f_θ: 𝒳→𝒴, where 𝒳 represents the input space and 𝒴 represents the output space encompassing all possible predictions.
The process of determining optimal parameter values θ involves minimizing an objective function using gradient descent.
Specifically, the objective is to minimize the classification loss
𝔼_(x,y)[ℒ(f_θ(x),y)]
where (x,y)∈𝒳×𝒴 denotes samples from the training dataset used to train the target model.
This optimization process guides the model towards achieving optimal performance by iteratively adjusting the parameters.
§.§ Data Importance
The investigation of individual training sample importance in machine learning (ML) is a fundamental and intricate problem with broad implications, especially in data valuation.
Understanding the importance of a single training sample within a learning task profoundly impacts data assessment, allocation of resources, and the quality of ML models.
Leave-one-out (LOO) method has long been regarded as an intuitive approach for assessing the importance of data samples.
Formally, let D and D_val represent the training set and the validation set, and 𝒜 denote the learning algorithm.
U_𝒜, D_val denotes the validation accuracy of the model trained on D using 𝒜.
The importance of a target sample z can be quantified as the difference in utility before and after incorporating the target sample into the training set, expressed as:
v_loo(z) ∝ U_𝒜, D_val(D) - U_𝒜, D_val(D∖{z})
Nevertheless, evaluating the importance of all N samples in the training set necessitates retraining the model N times, which is computationally prohibitive.
To address this limitation, Koh and Liang <cit.> proposed influence functions as an approximation method, significantly reducing the computational cost from O(Np^2+p^3) to O(Np), where p represents the number of model parameters.
Despite the effectiveness of LOO, Ghorbani and Zou <cit.> have raised concerns about its ability to capture complex interactions between subsets of data.
They argue that the Shapley value provides a more comprehensive framework for measuring data importance.
The Shapley value, originally proposed by Shapley <cit.>, assigns an importance value to each sample z in the training set using the following formulation:
ν_shap(z) ∝1/N∑_S⊆ D∖{z}\binom{N-1}{|S|}^{-1}[U_𝒜, D_val(S∪{z})-U_𝒜, D_val(S)]
To simplify the interpretation of the Shapley value assigned to each sample, one can conceptualize it as the contribution to accuracy in typical scenarios.
For instance, in a hypothetical scenario with 100 samples and a model achieving 90% accuracy, a valuable sample may contribute 2% accuracy, while a less valuable sample may only contribute 0.1%.
Consequently, the importance value assigned to a valuable sample is 0.02, whereas for a less valuable sample, it is 0.001.
Samples with an importance of 0 signify no contribution to the model's accuracy, while values below 0 suggest a detrimental impact, possibly due to incorrect labels or samples lying outside the distribution.
The Shapley value takes into account the contributions of all possible subsets of the training set, offering a more holistic assessment of data importance.
However, the accurate computation of the Shapley value based on the defined formula necessitates training 𝒪(2^N) machine learning models, rendering it impractical for complex datasets.
As a result, existing methods employ approximate algorithms to estimate the Shapley value.
For instance, Ghorbani and Zou <cit.> introduced two Monte Carlo-based approaches for Shapley value approximation.
To expedite evaluation time and enable analysis of large datasets, Jia et al. <cit.> utilized the K-nearest neighbors (KNN) algorithm to approximate the target learning algorithm, reducing the time complexity to 𝒪(Nlog N).
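As a rough illustration of why KNN-Shapley scales so favorably, the sketch below implements the closed-form recursion of Jia et al. for a single validation point; averaging the returned values over a validation set yields the per-sample importance. Variable names and the Euclidean distance metric are our own choices, so this should be read as a minimal sketch rather than the reference implementation.

```python
import numpy as np

def knn_shapley_single(X_train, y_train, x_val, y_val, K=5):
    """Exact Shapley values of all N training points for the KNN utility,
    w.r.t. a single validation point; the cost is dominated by the sort."""
    N = len(X_train)
    order = np.argsort(np.linalg.norm(X_train - x_val, axis=1))
    match = (y_train[order] == y_val).astype(float)   # 1 if label agrees

    s = np.zeros(N)
    s[N - 1] = match[N - 1] / N                       # farthest neighbor
    for i in range(N - 2, -1, -1):                    # recurse inward
        s[i] = s[i + 1] + (match[i] - match[i + 1]) / K \
               * min(K, i + 1) / (i + 1)

    shapley = np.zeros(N)
    shapley[order] = s                                # back to input order
    return shapley
```

Averaging `knn_shapley_single` over all validation points gives one importance value per training sample, which is the quantity used throughout this paper.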
§ EVALUATION SETUP
In this work, we deploy KNN-Shapley <cit.> to assess the importance of samples in the training set, which takes a dataset as input and assigns an importance value to each sample in the dataset.
This decision is justified by two main considerations.
Firstly, from the perspective of utility, traditional data attribution methods struggle to account for the complex interactions within data subsets.
Previous research, as discussed by Gupta and Zou <cit.>, highlights this limitation.
Consequently, we adopt Shapley value-based approaches for a more accurate assessment of data importance.
We further examine the efficacy of two non-Shapley-based measurement techniques: Leave-one-out (LOO) and the advanced data attribution method, Trak[Trak quantifies the influence of each sample on specific test samples within a dataset.
To adhere to the established definition of data importance, we calculate the average influence exerted by each sample across the entire test dataset as the importance of each sample.] <cit.>.
As evidenced in <ref>, when comparing the performance of models trained with 5000 samples of varying significance, the accuracy discrepancy is less than 7% for these methods.
In contrast, the KNN-Shapley method identifies samples that yield an accuracy difference exceeding 20%, thereby demonstrating its superior capability in accurately quantifying importance; we defer the details to append:effective_knn.
Secondly, regarding scalability, most measurement methods are highly computationally inefficient.
For instance, employing Leave-one-out to calculate importance values for CIFAR10 necessitates over 80 hours on 8×A100 GPUs.
Shapley value-based methods are generally more demanding.
In Jia et al.'s work <cit.>, the authors provide a runtime demonstration (<ref> for ease of reference) showing that existing measurement methods, except for KNN-Shapley, do not scale efficiently to large datasets, even as CIFAR10.
Furthermore, the comprehensive evaluations conducted by prior studies <cit.> consistently underscore the effectiveness and accuracy of KNN-Shapley.
Therefore, considering both utility and scalability, KNN-Shapley emerges as the sole feasible method for conducting experiments.
Datasets
Our evaluation encompasses three widely-used benchmark datasets, namely CIFAR10 <cit.>, CelebA <cit.>,
and TinyImageNet <cit.>.
CIFAR10 comprises a collection of 60,000 colored images evenly distributed across ten classes, representing common objects encountered in everyday life, including airplanes, birds, and dogs.
CelebA is a large-scale face dataset that encompasses over 40 annotated binary attributes.
To ensure balance in our analysis, we follow previous works <cit.> that select the three most balanced attributes (Heavy Makeup, Mouth Slightly Open, and Smiling) to create an 8-class (2^3) classification task.
Note that our findings are not dependent on this specific attribute selection, as validated in append:attributeselection.
Additionally, our evaluation incorporates TinyImageNet, which constitutes a subset of the ImageNet dataset.
It encompasses 200 distinct object classes, each with 500 training images.
We further validate the generalizability of our conclusions across different modalities, with detailed information deferred to <ref>.
§.§ Learning Characteristic
In order to gain a deep understanding of the disparities between high and low importance data, we delve into the learning characteristics, such as loss, associated with these samples.
To the best of our knowledge, our study represents the first endeavor to investigate the learning characteristics of samples with varying degrees of importance, diverging from the conventional focus solely on their contribution to the final performance.
To quantify these learning characteristics, we initially train a model using the complete dataset, comprising both high and low importance data.
Subsequently, we compute the loss for each individual data point and explore the correlation between the loss and its corresponding importance value.
In <ref>, we present a visual representation of the relationship between loss and importance value.
The x-axis represents the importance order of a sample in the dataset, with 1 denoting the lowest importance and 50000 representing the most valuable data.
Initially, it may seem that there is no discernible pattern between loss and importance value, as both low and high importance samples can exhibit either low or high loss.
However, upon further analysis, we statistically observe that higher importance samples tend to demonstrate lower loss, as depicted in <ref>, <ref>, and <ref>.
To arrive at this conclusion, we divide the samples into 200 bins based on their importance value.
For instance, the lowest 1 to 250 samples are categorized into bin one, 251 to 500 are allocated to bin two, and so forth.
For each bin, we calculate the sum of the losses and plot these 200 data points to generate the final curve.
Despite some fluctuations observed in the curve, it is evident that valuable samples tend to exhibit lower loss.
This finding aligns with our expectations, as lower loss signifies greater representativeness, thereby facilitating easier learning and enhancing their contribution to the overall utility of the model.
Having established the effectiveness of importance assignment and gained preliminary insights into the learning characteristics, we proceed to conduct representative machine learning attacks to investigate the impact of data importance in such attacks.
Our experimental investigations are carried out using the ResNet18 architecture, and in <ref>, we demonstrate the generalizability of our conclusions to different architectures.
§ MEMBERSHIP INFERENCE ATTACK
Membership Inference Attack (MIA) <cit.> is a prominent privacy attack utilized to determine whether a specific data sample belongs to a training dataset.
This attack is widely employed to assess the privacy of training data due to its simplicity and broad applicability.
In the attack scenario, the adversary 𝒜 is granted access to a target model and is tasked with determining the membership status of a given data sample (x, y).
Formally, the membership inference attack can be defined as a security game, referred to as Membership Inference Security Game, which is described as follows:
The game proceeds between a challenger 𝒞 and an adversary 𝒜:
* The challenger samples a training dataset D ←𝔻 and trains a model f_θ←𝒯(D) on the dataset D.
* The challenger flips a bit b, and if b=0, samples a fresh challenge point from the distribution (x, y) ←𝔻 (such that (x, y) ∉ D).
Otherwise, the challenger selects a point from the training set (x, y) ← D.
* The challenger sends (x, y) to the adversary.
* The adversary gets query access to the distribution 𝔻 and to the model f_θ, and outputs a bit b̂←𝒜^𝔻, f(x, y).
* Output 1 if b̂=b, and 0 otherwise.
The adversary 𝒜 is provided with auxiliary information about the data distribution 𝔻.
This allows the adversary to sample a shadow dataset from the same or a similar distribution, which is a common assumption in the existing literature.
The attack accuracy for the adversary is defined as follows:
Acc = Pr_x,y,f,b[𝒜^𝔻,f(x, y) = b].
To assess the privacy leakage caused by membership inference attacks (MIAs), we employ two metrics commonly used in prior research, focusing on both worst-case and average-case performance:
* (Log-scale) ROC Analysis <cit.>, which focuses on the true-positive rate at low false-positive rates, effectively capturing the worst-case privacy vulnerabilities of machine learning models.
* Membership Advantage <cit.>, defined as
Adv=2×(Acc-0.5).
This metric represents the advantage over random guessing, multiplied by 2, providing an average-case measure to gain an overview of the attack's efficacy.
In this work, we investigate four specific membership inference attacks.
For the CIFAR10 and CelebA tasks, a training set of 50,000 samples is employed, while for the TinyImageNet task, we utilize a training set of 100,000 samples to construct the target model.
To assess the membership status of samples, we first adopt a methodology based on previous research <cit.> that considers the distance to the decision boundary as a reflection of membership status.
Specifically, they claim that samples located near the decision boundary are more likely to be non-members, whereas samples positioned in the central region of the decision area are more likely to be members.
We calculate the distance to the decision boundary for all samples in the training dataset.
Specifically, for each sample, we iteratively perturb it using Projected Gradient Descent (PGD) with a small step size until it is classified into a different class.
Subsequently, we compute the distance between the perturbed sample and its original counterpart.
In this analysis, the distance is measured using the ℓ_∞ norm, and we find consistent results across different norms such as ℓ_1 and ℓ_2, as evidenced by the corresponding findings presented in appendix_mia.
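A minimal sketch of this distance computation is given below, assuming a PyTorch classifier with inputs in [0, 1]; the step size and iteration cap are illustrative assumptions, and the actual attack follows the cited works rather than this exact routine.

```python
import torch
import torch.nn.functional as F

def boundary_distance(model, x, y, step=1e-3, max_iter=1000):
    """Perturb x with small signed-gradient steps until the predicted
    class changes, then report the L-inf distance to the original.
    `x` is a single input of shape (1, C, H, W); `y` its label tensor."""
    model.eval()
    x_adv = x.clone().detach()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step * grad.sign()).clamp(0, 1).detach()
        with torch.no_grad():
            if (model(x_adv).argmax(dim=1) != y).item():
                break                         # crossed the decision boundary
    return (x_adv - x).abs().max().item()     # L-inf distance
```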
Our initial visualization focuses on examining the distances of samples with different importance values.
Similar to the observation made regarding the distribution of loss values in <ref>, no direct relationship is discernible between the importance value and the distance to the decision boundary.
Notably, samples with similar importance values may exhibit substantial differences in their distances to the decision boundary.
In contrast, we further analyze the statistical characteristics of these samples, as performed in <ref> and present the “group distance” in <ref>, <ref>, and <ref>.
The results reveal that low importance samples are statistically closer to the decision boundary, which aligns with the previous conclusion that low importance samples tend to have higher loss compared to high importance samples.
We follow the same procedure to derive the distance for samples in the testing dataset and launch the membership inference attack based on the distance.
We identify 10,000 samples with the highest importance value as the high group, and an equivalent number of samples with the lowest value as the low group.
The resulting ROC curves, depicted in <ref>, are presented on a logarithmic scale to compare the performance between these two groups.
From the figures, we observe significant differences in the behavior of high importance samples and low importance samples, particularly in the low false-positive rate area.
Specifically, for the CIFAR10 dataset, high importance samples demonstrate a true-positive rate (TPR) 10.2× higher than low importance samples at a low false-positive rate of 1%.
For the TinyImageNet dataset, the difference is even more pronounced, with high importance samples exhibiting a TPR that is 27.9× higher than that of low importance samples at the same false-positive rate.
These observations provide compelling and empirical evidence supporting the notion that high importance samples are considerably more vulnerable to membership inference attacks, which satisfies our expectation as the importance of data samples can be regarded as the proxy of memorization <cit.>.
These findings thus pose a significant and tangible threat to the safeguarding of high importance data privacy.
On the other hand, these findings may also prompt researchers to consider adopting strategic sampling methods for more effective privacy auditing <cit.>.
We further validate the generalizability of this finding across various attack methodologies by conducting experiments with three additional metric-based attacks: prediction confidence-based attack <cit.>, entropy-based attack <cit.>, and modified prediction entropy-based attack <cit.>.
The first two attacks were enhanced by introducing class-dependent thresholds, as demonstrated by Song and Mittal <cit.>.
By grouping the samples based on their importance values in intervals of 10,000 samples (equivalent to the size of the testing dataset), we conducted the aforementioned attacks on these subsets.
The membership advantage achieved for each subset is illustrated in <ref>.
Notably, a clear monotonic increase in attack advantage is observed as the importance value increases, establishing a positive correlation between the importance value and the susceptibility of membership inference.
This empirical trend aligns with our expectations.
As evidenced by the metrics in <ref> and <ref>, samples with lower importance inherently present greater learning challenges compared to their higher-importance counterparts.
Even post-learning, these samples exhibit worse membership metrics compared to those of higher importance.
This circumstance renders them challenging to distinguish from non-member samples, especially when certain non-member samples manifest a lower learning difficulty and consequently exhibit better metrics compared to the more challenging member samples.
Drawing from this insight, one potential strategy to enhance the efficiency of membership inference attacks is to compare each sample with others of comparable difficulty.
A pragmatic approach to actualize this entails introducing sample-specific criteria.
Rather than employing a uniform threshold across the entire testing dataset, such criteria should intricately correlate with the sample's characteristics, with their designated importance level serving as a robust quantitative index to reflect this alignment.
In this study, we initiate an exploration into the feasibility of such an approach.
To seamlessly integrate our method into the existing metric-based framework, we introduce a sample-specific threshold in a consistent manner: while maintaining a uniform threshold for the dataset, we modify the membership metrics by incorporating an importance-related term:
𝙲𝚊𝚕𝚒𝙼𝚎𝚖(x) = 𝙾𝚛𝚒𝙼𝚎𝚖(x) + k×𝚂𝚑𝚊𝚙𝚕𝚎𝚢(x)
Here, 𝙾𝚛𝚒𝙼𝚎𝚖(x) denotes the conventional membership metric, including elements such as confidence, entropy, and modified entropy.
The term 𝚂𝚑𝚊𝚙𝚕𝚎𝚢(x) signifies the importance value attributed to the specific sample x, and 𝙲𝚊𝚕𝚒𝙼𝚎𝚖(x) represents the recalibrated membership metric.
As a proof of concept, we empirically determine the hyperparameter k in an exploratory manner, adjusting its magnitude until optimal performance is attained.
The experimental outcomes, depicted in <ref>, illustrate that the incorporation of importance calibration notably enhances the efficacy of metric-based attacks.
Nevertheless, it is pertinent to acknowledge that identifying the optimal hyperparameter and devising more refined methods for integrating importance values warrant further investigation.
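As a concrete illustration, the calibration can be wrapped around any existing metric-based attack as follows (a sketch with our own names; the sign and magnitude of k are tuned on shadow data as described above):

```python
import numpy as np

def calibrated_membership(ori_mem, shapley, k, threshold):
    """Metric-based attack with a sample-specific criterion:
    CaliMem(x) = OriMem(x) + k * Shapley(x).
    `ori_mem` is any conventional membership metric (confidence,
    entropy, modified entropy, ...)."""
    cali_mem = np.asarray(ori_mem) + k * np.asarray(shapley)
    return cali_mem >= threshold    # True -> predicted member
```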
We also emphasize that this improvement does not necessitate additional requirements compared to standard attacks; specifically, the adversary does not need full access to the training dataset to obtain importance values.
Although importance values cannot be calculated for single samples, most membership inference attacks assume access to a shadow dataset.
We validate the feasibility of our approach using a shadow dataset.
Specifically, we randomly select 1,000 samples from the CIFAR10 dataset and first calculate their importance values using the whole CIFAR10 dataset as the ground truth.
Then we assume the adversary can only access a shadow dataset containing 10,000 samples, and calculate the importance value for each sample using only the shadow dataset.
We found a correlation coefficient of 0.957 between these values, indicating that using the shadow dataset could provide a good approximation.
§.§ Privacy Onion Effect
Carlini et al. <cit.> have identified the onion effect of memorization, which refers to the phenomenon wherein “removing the layer of outlier points that are most vulnerable to a privacy attack exposes a new layer of previously-safe points to the same attack.”
Their research demonstrates this effect by removing samples that are at the highest risk of being compromised through membership inference, resulting in formerly safe samples becoming vulnerable to the attack.
Building upon the insights from the preceding section, our empirical findings confirm a positive correlation between membership inference vulnerability and data importance.
This prompts an intriguing question: Does this effect reflect in the importance values assigned to the data?
Put differently, when high importance samples are removed from a dataset, do previously designated low importance samples gain significance?
To avoid ambiguity, it is imperative to understand that the term “importance” in this context is not subjective or relative.
The removal of high importance data points does not inherently increase the importance of those initially deemed low importance.
Furthermore, it is conceivable for a dataset to exclusively consist of low importance samples.
This clarification is indispensable; otherwise, the studied question may seem trivial.
Our results indicate that removing important samples indeed makes samples previously considered unimportant gain importance.
Specifically, upon removing 10,000 data points with the highest importance scores, we recalculated the importance for the remaining samples.
As depicted in <ref>, this removal led to a noticeable redistribution in data point importance, with previously low important data points now being assigned greater significance.
However, a note of caution is warranted in interpreting this result.
The removal of a substantial number of samples (10,000 in this context) might introduce a baseline drift.
Therefore, attributing the observed importance augmentation solely to the exclusion of important samples might be premature.
To further validate our findings, we executed controlled experiments wherein we systematically excluded either the most or least significant data points.
This approach mitigates potential biases stemming from dataset size discrepancies.
To emphasize the impact of these exclusions, we quantified the importance value discrepancies for data points ranked between 10,000 and 20,000 in descending order of importance in the original dataset, as they remained for both removal procedures.
Our results, as visualized in <ref>, <ref>, and <ref>, underscore the pronounced disparities in how the exclusion of high importance versus low importance data samples influences the remaining dataset's importance distribution.
Using CIFAR10 as an illustrative case, removing the most significant data points caused 99.14% of the remaining data points to be reevaluated as more important.
In contrast, removing the least significant data points led to a 45.88% decrease in importance for the affected data points.
These findings robustly support our hypothesis that data points previously deemed of lesser importance assume greater significance when high importance data points are excluded, and such a conclusion cannot be attributed to dataset size variation.
§.§ Actively Modify Sample Importance
As discussed in the previous section, altering the dataset can influence the importance of samples.
Given the observed linkage between membership vulnerability and importance value, an interesting question arises: can we actively use our findings to design more advanced attacks by modifying the importance of target samples?
However, directly altering sample importance is challenging due to the absence of a standardized method or framework.
In this section, we explore an ad-hoc approach aimed at increasing the importance of target samples.
Specifically, we select a set of target samples and duplicate each one multiple times with consistently incorrect labels.
This strategy intuitively heightens the influence of these samples by causing the model to consider them as “outliers” due to the prevalence of incorrect duplicates.
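A sketch of this duplication strategy is given below, assuming image arrays with integer labels; the choice of the wrong label is arbitrary but held consistent across the copies of each target:

```python
import numpy as np

def amplify_importance(images, labels, target_idx, copies=16, n_classes=10,
                       rng=np.random.default_rng(0)):
    """Duplicate each target sample `copies` times with a consistently
    wrong label, nudging the model to treat the original as an outlier
    and thereby raising its importance."""
    dup_x, dup_y = [], []
    for i in target_idx:
        wrong = (labels[i] + rng.integers(1, n_classes)) % n_classes
        dup_x.append(np.repeat(images[i:i + 1], copies, axis=0))
        dup_y.append(np.full(copies, wrong))
    return (np.concatenate([images] + dup_x),
            np.concatenate([labels] + dup_y))
```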
We tested this idea by duplicating 50 target samples 16 times and reassessing their importance values.
As shown in <ref>, the duplicated samples generally exhibited an increase in importance.
Specifically, 45 of the 50 target samples experienced an increase in importance, with an average increase of 53.02%, while the importance of the unaltered samples remained nearly constant.
This approach is exactly the membership poisoning attack proposed by Tramèr et al. <cit.>, where they comprehensively demonstrated the method's efficiency.
This indicates that increasing the importance of samples can be a practical approach to enhance their vulnerability.
By revisiting their technique from a data importance perspective, we highlight that actively modifying sample importance could be a promising strategy for developing sophisticated attack techniques or formulating robust defenses.
§.§ Data Augmentation
In previous discussions, it has been demonstrated that manipulating data samples can alter their importance values and, consequently, their susceptibility to attacks.
Given that data augmentation is the most widely used method for data manipulation, it would be interesting to ask: does data augmentation affect the importance of a sample?
In our study, we examined the impact of four data augmentation techniques—ColorJitter, Grayscale, HorizontalFlip, and VerticalFlip—under two specific scenarios.
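The four techniques correspond to standard torchvision transforms; the parameter values below are illustrative assumptions rather than the exact settings used:

```python
from torchvision import transforms

# The four augmentations studied, applied to the selected 1,000 samples
augmentations = {
    "ColorJitter": transforms.ColorJitter(brightness=0.5, hue=0.3),
    "Grayscale": transforms.Grayscale(num_output_channels=3),
    "HorizontalFlip": transforms.RandomHorizontalFlip(p=1.0),
    "VerticalFlip": transforms.RandomVerticalFlip(p=1.0),
}
```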
In both scenarios, we selected 1,000 samples for augmentation while leaving the rest of the dataset unaltered:
Augmented Versions Only
In this scenario, we aimed to investigate how data augmentation impacts the importance of the augmented samples and the unaltered samples in the dataset.
Specifically, we replaced 1,000 samples with their augmented versions, recalculated their importance values, and compared these values to the original.
As illustrated in <ref>, we found that the augmented versions had variable effects on importance: some samples gained higher importance, while others lost it.
Overall, a slight majority of the samples experienced a decrease in importance following augmentation.
However, the augmented samples had a negligible impact on the remaining non-augmented samples.
Original and Augmented Versions
In this scenario, we examined the effect of having both augmented and original versions of certain samples in the dataset.
We added 1,000 augmented samples to the original dataset and recalculated the importance values for the original samples.
To control for dataset size, we also considered a baseline case where the same 1,000 samples were duplicated.
As shown in <ref>, the presence of both original and augmented samples had minimal impact on the importance of the original samples, with no significant differences compared to simple duplication.
We acknowledge that more complex augmentation techniques, such as those using generative models, may have different effects.
The exploration of these complex augmentation techniques remains an avenue for future research.
Takeaways
Our findings highlight the vulnerability of high importance samples to membership inference attacks.
Significant differences were observed in the behavior of high importance and low importance samples, particularly in the low false-positive rate region, where high importance samples exhibited substantially higher true-positive rates.
This emphasizes the necessity of addressing the privacy risks associated with high importance samples and implementing effective safeguards.
Simultaneously, it encourages researchers to explore strategic sampling methods to enhance the effectiveness of privacy audits.
The observation also suggests a potential enhancement to membership inference attacks through the introduction of sample-specific criteria.
We empirically validate the practicality of using importance values to calibrate membership metrics, thereby enhancing attack efficiency.
Moreover, our findings reveal the “privacy onion effect” within the sample importance distribution, where previously overlooked samples gain importance when key samples are removed.
Furthermore, by revisiting an advanced membership poisoning attack from the perspective of data importance, we suggest that actively manipulating sample importance can be a potent strategy for developing sophisticated cybersecurity measures, both offensive and defensive, but finding general manipulating methods needs further investigation.
§ MODEL STEALING
Model stealing attack <cit.> differs from membership inference attack as it aims to compromise the confidentiality of the model itself rather than exploiting privacy information about training samples.
This type of attack does not have information about the target model's architecture or parameters but seeks to create a surrogate model that emulates the functionality of the target model.
Such attacks can be employed by adversaries for various purposes, including monetary gains or as a preliminary step for subsequent attacks <cit.>.
The workflow of a model stealing attack is visualized in <ref>.
The adversary samples data from a specific distribution 𝔻 and simultaneously queries the target and surrogate models.
To ensure similarity between the surrogate and target models, the adversary optimizes the surrogate model to produce similar outputs 𝒮(x) as the target outputs 𝒯(x).
While the attack approach is straightforward, selecting an appropriate query data distribution poses a challenge, as it directly impacts the stolen accuracy and query efficiency.
Recent research has explored efficient and data-free methods for launching these attacks <cit.>, yet the question of selecting high-quality samples when the target task is known remains intriguing.
In this work, we focus on the primary scenario where the adversary can query the target model to obtain corresponding posteriors, while having knowledge of the target task.
Specifically, the adversary could query the model with a dataset from the same or a similar distribution.
This scenario has practical applications, such as creating a surrogate model to facilitate further attacks or to save on labeling costs.
We limit our discussion to this primary scenario and do not delve into more advanced model stealing techniques that focus on reducing the dataset assumption, as our interest lies in understanding how different data interact with the model stealing process.
Our goal is to investigate whether query samples with different importance values exhibit varying efficiency in stealing models.
We explore two settings in our experiments.
First, we launch the attack using query data from the same distribution as the target model trained on.
For example, if the target model is trained on CIFAR10, we employ CIFAR10 data to query the model.
The second scenario involves using data from different distributions, specifically CelebA and TinyImageNet, to query the CIFAR10 model.
We choose accuracy and query budget as the metrics to evaluate the success of the attack; achieving higher accuracy with a smaller query budget indicates better attack performance.
§.§ Same Distribution Query
Three target models were trained using a standard training procedure, resulting in testing accuracies of 95.15% for CIFAR10, 79.05% for CelebA, and 65.01% for TinyImageNet.
After training the target models, our attack solely interacts with the target models through their outputs without accessing or reading their parameters.
To initiate the attack, we establish a query budget ranging from 100 to 10,000.
Once the query budget is determined, we prioritize collecting high importance data until the budget is exhausted, and the same principle applies to the collection of low importance data.
The attack results are illustrated in <ref>, highlighting the superior efficiency of high importance samples in the model stealing process.
For instance, when the query budget is set to 1000, high importance data steal a CIFAR10 model with 53.77% accuracy, which is 1.6× higher than the model stolen by low importance data (33.29%).
This trend holds true for the other two datasets as well.
Taking TinyImageNet as an example, when the query budget is 1000, high importance data yield a model accuracy of 19.25%, whereas low importance data only result in a model accuracy of 9.25%, exhibiting a notable 2.1-fold disparity.
One plausible explanation for this difference may arise from variations in class balance, given that the query sets are chosen based on sample importance.
It is conceivable that the low importance query set may lack samples from certain classes, thereby resulting in suboptimal performance.
Prior research has suggested that a more balanced data distribution could potentially improve model stealing performance <cit.>.
However, upon examining the data distribution for both high and low significance samples, we observed no significant disparities.
For example, in the case of CIFAR10, the entropy values for the top-10,000 high importance and low importance distributions were 3.282 and 3.245, respectively.
Even when considering 1000 samples, the corresponding entropy values were 3.161 (high importance) and 3.229 (low importance).
For context, a perfectly uniform distribution has an entropy of 3.322.
This indicates that both the high and low importance subsets closely approximate a uniform distribution.
Such findings reinforce our assertion that high importance data can indeed augment model stealing performance, mitigating concerns related to distributional biases.
§.§ Different Distribution Query
The previous section highlighted the enhanced efficiency of high importance samples in stealing models trained on the same task.
However, it remains uncertain whether this efficiency persists when the target model is trained using a different dataset or task, and whether importance values can be transferred across tasks.
To investigate, we conducted experiments involving query data that differed from the distribution used to train the target model.
Specifically, we employed a CIFAR10 model as the target model and queried it with the CelebA and TinyImageNet datasets.
Interestingly, as depicted in <ref>, we observed that the advantage of high importance samples disappeared in this cross-task scenario.
When the query budget was consistent, the stolen accuracy for both high and low importance samples was comparable.
This suggests that samples deemed important for one task may not transfer effectively to arbitrary tasks.
Takeaways
Our findings demonstrate that high importance samples exhibit greater efficiency in stealing models when the target model is trained on the same distribution as the query distribution.
Importantly, this enhanced efficiency cannot be solely attributed to distribution bias.
This suggests that adversaries, when aware of the target task, can employ high importance samples to optimize attack performance with a reduced query budget.
However, this conclusion does not hold when the target task differs from the query distribution.
Consequently, this implies that selecting a group of high importance samples as a “universal” query set for efficient model stealing attacks, regardless of the target task, is not feasible.
§ BACKDOOR ATTACK
Backdoor attack <cit.> is a training-time attack that involves actively interfering with the training process to manipulate the resulting model.
Its primary objective is to introduce malicious behavior into the model, making it behave like a benign model for normal inputs.
However, when a specific trigger is detected, the backdoored model intentionally misclassifies the input to a predetermined class.
This type of attack can have severe consequences, such as compromising the integrity and reliability of the model, leading to potential security breaches, data manipulation, or unauthorized access to sensitive information.
Despite the severe consequences that a backdoor attack may cause, the attack itself is relatively easy to achieve by poisoning the training dataset, thereby posing an even stronger threat.
For instance, a straightforward attack approach called BadNets <cit.> adds a fixed trigger to a portion of the training dataset, resulting in a perfect attack where almost all triggered samples are misclassified into the target class, while the accuracy on the original task remains largely unaffected.
In the context of backdoor attacks, the poison rate plays a critical role as it directly influences the effectiveness and concealment of the attack.
A higher poison rate can lead to an increased attack success rate, but it also raises the risk of detection since a large number of samples need to be modified.
Conversely, a lower poison rate may offer better concealment, but it may not achieve optimal attack performance.
Additionally, there are situations where the adversary can only control a small set of samples, making it impossible to poison a large number of samples to achieve the attack.
Consequently, the problem of backdooring a model with a limited poison rate becomes an interesting and challenging research question.
In this section, we conduct empirical investigations to explore whether poisoning samples with different importance levels influences the attack performance under the same poison rate.
We utilize two metrics to evaluate the attack performance:
* Accuracy.
This metric assesses the deviation of the backdoored model from the clean model.
We measure the performance of the backdoored model on the clean dataset, and a successful attack should result in accuracy close to that of the clean model, making it difficult to detect.
* Attack Success Rate (ASR).
This metric evaluates the functionality of the backdoored model and is measured on the triggered dataset.
A desirable backdoored model should exhibit a high ASR, indicating its ability to misclassify all triggered samples into the target label.
By analyzing these metrics, we aim to gain insights into the influence of data importance on the attack performance and further understand the trade-offs between attack effectiveness and concealment in the context of backdoor attacks.
In this part, we adopt the same approach as BadNets to backdoor the model, with hyperparameter details provided in append:bd_hyperparameter.
Additionally, we validate the generalizability of our conclusion across five other backdoor attacks—Blend <cit.>, SSBA <cit.>, LF <cit.>, SIG <cit.>, and CTRL <cit.>—which utilize various trigger patterns or target different learning paradigms, as discussed in append:more_backdoor.
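For concreteness, importance-guided BadNets-style poisoning can be sketched as below, assuming images stored as H×W×C uint8 arrays; the white corner patch and its size are illustrative stand-ins for the actual trigger:

```python
import numpy as np

def poison_by_importance(images, labels, shapley, budget, target_class,
                         patch=3, high=True):
    """Stamp a fixed trigger on the `budget` most (or least) important
    samples and relabel them to the target class (BadNets-style)."""
    order = np.argsort(shapley)                 # ascending importance
    idx = order[-budget:] if high else order[:budget]
    images, labels = images.copy(), labels.copy()
    images[idx, -patch:, -patch:, :] = 255      # white patch, bottom-right
    labels[idx] = target_class
    return images, labels
```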
We present the visualized attack success rate in <ref>.
As depicted in the figure, there is a noticeable increase in the attack success rate as the number of poisons increases.
Concurrently, we observe significant differences between poisoning high importance samples and low importance samples.
Specifically, poisoning an equal number of high importance samples proves to be more effective in increasing the attack success rate compared to poisoning low importance samples.
This phenomenon becomes more pronounced when the poisoning rate is small.
For instance, in the case of CIFAR10, with a poisoning size of 50, poisoning high importance data results in a model with an ASR of 54.42%, whereas poisoning low importance data only achieves 37.74%, indicating a 1.44× advantage.
Similar trends can be observed across the other two datasets.
However, we also find that when the poisoning rate is large, the difference is not significant.
We believe this is due to the trade-off between the importance advantage and the attack upper bound.
As the number of poisons increases, the advantage of using high importance data becomes more evident.
However, achieving the optimal ASR for the backdoor attack does not require a large amount of data.
Generally, poisoning approximately 10% of the dataset is sufficient.
Therefore, as the number of poisons increases, the gap between high importance and low importance samples is reduced.
Nevertheless, it is still observed that poisoning high importance samples requires fewer poisoned samples to achieve the optimal ASR.
In scenarios where adversaries have limited access to data, determining the true importance of samples can be challenging, which impacts the feasibility of selectively poisoning high importance samples.
In this case, we empirically demonstrate that calculating the importance value using just a fraction of the training set can provide a good approximation of the true importance.
For example, with just 2% of the CIFAR10 data available, the computed importance values correlate strongly with those derived from the entire dataset, achieving a correlation coefficient of 0.811 ± 0.016.
The accuracy of these approximations improves with more data: with 5% of the data, the correlation coefficient rises to 0.899 ± 0.006, and with 20% of the data, it exceeds 0.96.
These results demonstrate that even with limited data access, it is feasible to closely estimate the importance of samples, facilitating effective attack planning under realistic constraints.
Additionally, our investigation into the impact on clean accuracy reveals no significant trends suggesting that poisoning samples of differing importance levels affects clean accuracy.
In both scenarios, the influence on clean accuracy remains below 2%, indicating the concealment of the backdoor attack.
Due to space constraints, detailed results are deferred to append:bd_acc.
Takeaways
Our experimental results demonstrate that poisoning high importance samples enhances the efficiency of the poisoning process, particularly when the poisoning rate is small.
This insight offers valuable guidance for developing attack strategies aimed at compromising models with restricted data accessibility.
Beyond refining trigger patterns for effective injections, prioritizing the poisoning of high importance samples emerges as a promising approach.
On the other hand, the influence on clean accuracy does not yield a definitive conclusion, as poisoning either type of data has a limited impact on clean accuracy.
§ ATTRIBUTE INFERENCE ATTACK
Attribute inference attack is a privacy attack that aims to infer sensitive attributes that are not directly related to the original task of a machine learning model.
For instance, a model trained to predict age from profile photos may unintentionally learn to predict race as well <cit.>.
This type of attack has significant implications for privacy and fairness, as the inadvertent leakage of sensitive attributes can have far-reaching consequences, including the violation of privacy rights, potential discrimination, and the undermining of trust in machine learning systems.
In this work, we focus on a commonly considered attack scenario as depicted in <ref>, where the adversary exploits the embeddings of a target sample obtained from the target model to predict its sensitive attributes [We acknowledge that there exists a separate line of research on attribute inference attacks targeting tabular data <cit.>, which primarily aims to reconstruct missing attribute values in original records.
Given that these attacks employ different technical methodologies and pursue distinct objectives, our conclusions may not necessarily apply to such work.]
To perform attribute inference, the adversary is assumed to possess auxiliary information about the training dataset and collects a shadow dataset from a similar distribution.
They train a shadow model to mimic the behavior of the target model and use the embeddings and sensitive attributes to train an attack classifier.
In this section, we investigate the impact of data importance on the CelebA dataset, which contains several attributes that can be inferred.
We categorize the samples into five groups, each comprising 10,000 samples, based on their importance values ranging from low to high.
Following this categorization, we train five models using these groups as target models.
To perform the attack, we utilize 10,000 samples, disjoint from the 50,000 training samples, to train a shadow model.
This shadow model is employed to generate datasets for training the attack model, where the inputs are embeddings, and the associated sensitive attributes serve as labels.
We train a two-layer fully connected network as the attack model, which is then utilized to infer the sensitive attribute from the embeddings.
To evaluate the attack performance, we utilize relative accuracy as the metric, comparing the accuracy against a random guessing baseline that varies for different attributes due to the uneven distribution of the CelebA dataset.
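A minimal sketch of this attack pipeline is given below; the embedding dimension, network widths, and the synthetic shadow data are assumptions for illustration, and in the real attack the trained classifier is applied to embeddings extracted from the target model rather than the shadow data it was trained on.

```python
import torch
import torch.nn as nn

emb_dim, n_attr = 512, 2                      # assumed embedding size / binary attribute
attack_model = nn.Sequential(                 # the two-layer attack classifier
    nn.Linear(emb_dim, 128), nn.ReLU(),
    nn.Linear(128, n_attr),
)
opt = torch.optim.Adam(attack_model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for shadow-model embeddings and their sensitive attributes.
shadow_emb = torch.randn(10_000, emb_dim)
shadow_attr = torch.randint(0, n_attr, (10_000,))

for _ in range(10):                           # a few passes over the shadow data
    for i in range(0, len(shadow_emb), 256):
        e, a = shadow_emb[i:i + 256], shadow_attr[i:i + 256]
        opt.zero_grad()
        loss_fn(attack_model(e), a).backward()
        opt.step()

# Relative accuracy: improvement over the majority-class (random-guess) baseline.
with torch.no_grad():
    pred = attack_model(shadow_emb).argmax(dim=1)
acc = (pred == shadow_attr).float().mean().item()
baseline = shadow_attr.bincount().max().item() / len(shadow_attr)
print(acc - baseline)
```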
The experimental results, as shown in <ref>, reveal no significant connection between data importance and the success of attribute inference attacks.
For instance, the “Arched Eyebrows” attribute is easily inferred for high importance samples, while the “High Cheekbones” attribute can be inferred only for low importance samples.
Furthermore, the vulnerability to attribute inference for the “Mouth Slightly Open” attribute is most prominent among samples with middle importance values.
These results demonstrate that there is no significant correlation between attribute inference attacks and data importance.
One possible explanation for these results is that the importance value of data samples may vary depending on the prediction task.
In other words, the significance of certain features or attributes may differ across different prediction tasks.
For example, while whiskers may be an important feature for predicting gender, they may hold less importance when predicting income.
We further validate our conjecture by visualizing the correlation among importance values assigned to different attributes in app:attr_corr.
It indicates that a sample's elevated importance on one attribute may not align with its importance on another attribute.
Takeaways
Our findings indicate that there is not a straightforward correlation between the significance of data samples and the performance of an attack, aligning with our initial hypothesis.
A pivotal insight from this section demonstrates that the importance of data samples is context-dependent and can vary based on the specific task at hand, which resonates with our earlier discovery in <ref>.
§ DATA RECONSTRUCTION ATTACK
Data reconstruction attack <cit.> refers to recovering the target dataset with limited access to the target model, with the aid of additional knowledge possessed by the adversary.
While data reconstruction attacks share similarities with membership inference attacks, there are significant differences that make data reconstruction a stronger attack.
Specifically, membership inference operates at the sample level, determining the membership status of individual samples.
In contrast, data reconstruction is a dataset-level attack aimed at extracting the entire training dataset.
This distinction necessitates different technical approaches for data reconstruction.
In this work, we employ two data reconstruction attacks, namely DeepInversion <cit.> and Revealer <cit.>, to investigate the influence of data importance on the reconstruction process.
These attacks are based on the optimization of input samples, as illustrated in <ref>.
Specifically, given a target class y, both methods initialize a sample x and iteratively update it to maximize the likelihood or probability of belonging to that class while keeping the model parameters fixed.
This optimization process is guided by the following loss function:
min_x ℒ(f_θ(x), y)
DeepInversion leverages statistical information encoded in the batch normalization layer to enhance the quality of reconstructions, while Revealer employs a Generative Adversarial Network (GAN) to generate high-quality reconstructions.
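Stripped of these method-specific components, both attacks share the optimization skeleton sketched below; the step count, learning rate, and input shape are illustrative assumptions rather than the exact settings used.

```python
import torch
import torch.nn.functional as F

def reconstruct(model, target_class, steps=2000, lr=0.1, shape=(1, 3, 32, 32)):
    # Optimize the input while the model parameters stay fixed.
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    y = torch.tensor([target_class])
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)   # min_x L(f_theta(x), y)
        # DeepInversion additionally adds image priors and a penalty matching
        # feature statistics to the batch-norm running stats; Revealer instead
        # optimizes the latent input of a GAN generator.
        loss.backward()
        opt.step()
    return x.detach()
```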
To investigate the impact of data importance on the performance of data reconstruction attacks, we partition the samples into groups of 10,000 based on their importance values, ranging from low importance to high importance.
Subsequently, we train one target model for each sample group, resulting in a total of five models for CIFAR10 and CelebA datasets, and ten models for TinyImageNet.
We then apply two reconstruction attacks on each of them.
We leverage Fréchet Inception Distance (FID) to measure the similarity between reconstructed samples and the training samples, given its established utility in evaluating the quality of generated distributions <cit.>.
A smaller FID denotes better reconstruction quality.
For each target model, we generate 10,000 reconstructions, matching the size of the training dataset.
Subsequently, we calculate the FID score, quantifying the discrepancy between the reconstructions and the corresponding training dataset.
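For reference, given Inception features extracted from the two image sets, the FID reduces to the Fréchet distance between the Gaussians fitted to them; a minimal sketch is shown below (in practice a standard library implementation operating on raw images is typically used).

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_real, feat_gen):
    # Fréchet distance between Gaussians fitted to the two feature sets.
    mu_r, mu_g = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_g = np.cov(feat_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g).real       # matrix square root
    diff = mu_r - mu_g
    return diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean)
```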
The findings presented in <ref> suggest that there is no significant distinction between high and low importance data samples in terms of data reconstruction.
Taking CIFAR10 as an example, DeepInversion exhibits a maximum deviation of only 13.02% from the mean value, indicating consistent performance.
Similar results are observed with Revealer, where the maximum deviation is merely 5.35% compared to the mean value.
Moreover, this consistent performance extends to more complex datasets.
For instance, in the case of CelebA dataset, the maximum deviation is less than 8.27%, while for TinyImageNet, the deviation is less than 4.01%.
These findings suggest that the reconstruction performance remains steady regardless of the importance level of the data samples.
§ TRANSFERABILITY STUDY
In order to fortify the generalizability of our conclusions, this section investigates the transferability of our findings across various model architectures and data modalities.
For the vision modality, we conducted experiments employing two distinct model architectures, namely MobileNetV2 <cit.> and ResNet50 <cit.>.
To evaluate the transferability to diverse data modalities, we introduced the tabular dataset Purchase-100 <cit.>, consisting of 600 binary features for classifying 100 classes.
We focus on the tabular modality considering that existing Shapley methods predominantly support vision and tabular modalities.
We utilize a Multilayer Perceptron (MLP) to process the Purchase task, aligning with established practices in prior research <cit.>.
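A sketch of such an MLP is shown below; the hidden widths and activations are assumptions for illustration rather than the exact configuration used in our experiments.

```python
import torch.nn as nn

# Illustrative MLP for Purchase-100: 600 binary features -> 100 classes.
purchase_mlp = nn.Sequential(
    nn.Linear(600, 512), nn.Tanh(),
    nn.Linear(512, 128), nn.Tanh(),
    nn.Linear(128, 100),
)
```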
Our experimental results consistently support our conclusions, irrespective of the model architecture and data modality.
For example, in <ref>, the three left figures depict a consistent relationship between the advantage of membership inference attacks and the importance of data across all three model architectures.
This trend is also evident when performing the attack based on the distance to the boundary (see results in append:transfer).
Additionally, <ref> illustrates that this conclusion holds for the tabular modality.
Although slight fluctuations are observed in the low importance area, the overall picture demonstrates a consistent relationship between importance and membership vulnerability, aligning with the conclusions drawn from the vision modality.
Furthermore, this conclusion extends to other attack types, such as model stealing and backdoor attacks.
Due to space constraints, we defer the results to append:transfer.
These findings reaffirm the consistent impact of data importance across different attack scenarios, underscoring the generalizability of our observations.
§ LIMITATIONS AND FUTURE WORK
While our research provides valuable insights into the relationship between data importance and vulnerability to specific attacks, several limitations exist that warrant further investigation.
Our study focuses on a specific set of attacks.
Although these are important, they may not cover the entire spectrum of potential threats.
Other types of attacks could exhibit different relationships between data importance and vulnerability, and understanding how these various attacks interact with data importance remains an open area for exploration.
Extending our findings to Large Language Models (LLMs) presents substantial challenges despite their promising advancements.
The primary obstacle is the computational cost associated with calculating importance values.
To manage this burden, current methods often resort to computationally lighter algorithms like KNN for classification tasks.
However, it is unclear whether similar computationally efficient approaches can be adapted to approximate autoregressive models, especially since LLMs exhibit unique emergent characteristics when scaled beyond certain thresholds.
Additionally, the considerably larger datasets typical of LLMs further complicate the feasibility of extending these methods.
Furthermore, our research does not examine more complex augmentation techniques, such as those utilizing generative models.
Future work should investigate whether these advanced techniques affect data importance and vulnerability differently.
Additionally, exploring whether there exists a generalizable method to manipulate data importance across various augmentation techniques would be invaluable.
To foster further research and collaboration, we have open-sourced our evaluation framework, available at <https://github.com/TrustAIRLab/importance-in-mlattacks>. This will enable other researchers to examine whether the observed data discrepancies hold for new types of attacks, thereby benefiting the broader community.
§ CONCLUSION
In this paper, our research systematically studies the vulnerability of heterogeneous data when confronted with machine learning attacks.
Our findings underscore a heightened susceptibility of high importance data samples to privacy attacks, including membership inference attacks and model stealing attacks.
Our findings also carry practical implications, inspiring researchers to design more efficient attacks.
For example, we empirically showcase the potential enhancement of membership inference attacks through the incorporation of sample-specific criteria based on importance values.
Additionally, we demonstrate that our findings can be strategically employed to guide the creation of more advanced attacks through the active manipulation of sample importance.
§ ACKNOWLEDGEMENTS
We thank all anonymous reviewers for their constructive comments.
This work is partially funded by the European Health and Digital Executive Agency (HADEA) within the project “Understanding the individual host response against Hepatitis D Virus to develop a personalized approach for the management of hepatitis D” (DSolve, grant agreement number 101057917) and the BMBF with the project “Repräsentative, synthetische Gesundheitsdaten mit starken Privatsphärengarantien” (PriSyn, 16KISAO29K).
§ CELEBA ATTRIBUTE SELECTION
The CelebA dataset contains 40 binary attributes, which are not directly suitable for a multi-class classification task.
Therefore, we follow previous works <cit.> that select the three most balanced attributes (Heavy Makeup, Mouth Slightly Open, and Smiling) to create an 8-class (2^3) classification task.
To validate that our findings are not dependent on this specific attribute selection, we conducted the same experiments using another randomly selected set of attributes (High Cheekbones, Arched Eyebrows, and Wearing Lipstick).
We evaluated the performance on membership inference attacks, model stealing, and backdoor attacks.
The results are depicted in <ref>, confirming that our findings are consistent across these different attribute sets.
For example, <ref> demonstrates that samples with higher importance are more vulnerable to membership inference attacks; this is also reflected in the worst-case evaluation illustrated in <ref>.
The conclusion also holds for the backdoor and model stealing attacks; specifically, with a 1,500-query budget, high importance samples can steal a surrogate model with 48% higher accuracy than that stolen by low importance samples.
§ MEASUREMENT HYPERPARAMETER
In implementing the KNN-Shapley method, we set the hyperparameter k=6, following the suggestion in the original paper <cit.>.
Further experimentation with k=7 and k=8 indicated that performance remains largely consistent across these settings.
Specifically, the correlation between importance values calculated with k=6 and k=7 is 0.9988, and between k=6 and k=8 it is 0.9972, demonstrating the robustness of our results with respect to this hyperparameter.
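Reusing the `knn_shapley` sketch and the synthetic data from the backdoor analysis above, this robustness check amounts to the following (the correlations on synthetic data are illustrative, not the reported values):

```python
v6 = knn_shapley(x, y, x_val, y_val, k=6)
v7 = knn_shapley(x, y, x_val, y_val, k=7)
v8 = knn_shapley(x, y, x_val, y_val, k=8)
print(np.corrcoef(v6, v7)[0, 1], np.corrcoef(v6, v8)[0, 1])
```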
§ IMPORTANCE DISTRIBUTION
<ref> shows the importance distributions for the CelebA and TinyImageNet dataset.
§ EFFECTIVENESS OF KNN-SHAPLEY
We validate the efficacy of the KNN-Shapley method to ensure its accurate assignment of importance value to individual samples.
We apply the KNN-Shapley approach to three distinct datasets and compute the importance value for each sample.
To gain insight into the contribution of different samples to the model's utility, we visualize the distribution of importance values for the CIFAR10 dataset in <ref>.
The observed distribution aligns with our expectations, as most samples exhibit similar contributions, while certain samples significantly influence the model's behavior.
We corroborate this finding with the other two datasets, and include the corresponding visualizations in append:importance_dis.
Subsequently, we empirically validate whether samples with high importance values and those with low importance values demonstrate distinct training performance.
To achieve this, we sort the samples based on their importance values and form two sets: one comprising samples with the highest values and the other consisting of samples with the lowest values.
We employ these two sets to train two separate models and evaluate their performance on the testing dataset.
We vary the size of these two sets from 50 to 5000 and plot the corresponding testing accuracy in <ref>, <ref>, and <ref>.
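The selection protocol can be sketched as follows, where `importance` holds the per-sample KNN-Shapley values and `train_and_eval` is a hypothetical helper that trains a fresh model on the given indices and returns test accuracy:

```python
import numpy as np

order = np.argsort(importance)              # ascending importance
for m in (50, 500, 2000, 5000):
    acc_low = train_and_eval(order[:m])     # m least valuable samples
    acc_high = train_and_eval(order[-m:])   # m most valuable samples
    print(m, acc_low, acc_high)
```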
The figures clearly demonstrate that samples with varying importance values exhibit significant differences in training performance.
Specifically, considering the CIFAR10 dataset trained with 2000 samples, the model trained with high importance samples achieves a testing accuracy that is 1.6× higher compared to the model trained with low importance samples.
Moreover, in the case of TinyImageNet, the disparity is even more pronounced.
When the training set comprises 5000 samples, the model trained with valuable data attains a testing accuracy that is 4.4× higher than that of the model trained with low importance samples.
These experimental findings provide strong evidence supporting the effectiveness of KNN-Shapley.
§ MEMBERSHIP INFERENCE ATTACK
<ref> depicts the distance to the boundary for samples with different importance values, measured by two different norms.
<ref> represents the log-scale ROC curves for attacks conducted based on the distance to the boundary.
§ CORRELATION BETWEEN DIFFERENT ATTRIBUTES
We visualize the correlation among importance values assigned to different attributes in <ref>.
§ HYPERPARAMETERS FOR BACKDOOR ATTACKS
For the three datasets evaluated in our study—CIFAR10, CelebA, and TinyImagenet—a consistent modification was applied to each image: a black square was positioned at the bottom left corner.
The dimension of this black square, or backdoor trigger, varied according to the image sizes of the respective datasets to maintain proportional consistency.
Specifically, for the CIFAR10 dataset, with an image resolution of 32 × 32, the trigger was sized at 2 × 2.
In the case of CelebA, which features larger images with dimensions of 178 × 218, the trigger's size was increased to 8 × 8.
Lastly, for images from TinyImagenet, which are 64 × 64 pixels, a 5 × 5 square was used as the trigger.
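A sketch of this trigger-stamping step is given below; the mapping keys follow the image widths above, while the exact row/column indexing convention and the [0, 1] value range are assumptions of the sketch.

```python
import torch

SIDE = {32: 2, 64: 5, 178: 8}   # image width -> trigger side, per the text

def stamp_trigger(img: torch.Tensor) -> torch.Tensor:
    # img: (C, H, W) in [0, 1]; place a black square in the bottom-left corner.
    side = SIDE[img.shape[-1]]
    out = img.clone()
    out[:, -side:, :side] = 0.0  # last `side` rows, first `side` columns
    return out
```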
§ CLEAN ACCURACY PERFORMANCE
The effect of poisoning high and low importance samples on clean accuracy is depicted in <ref>.
§ MORE BACKDOOR ATTACKS
To assess whether the observation that poisoning high importance data samples enhances backdoor attack effectiveness applies to various trigger patterns, we broadened our study to include five additional backdoor methods.
These methods comprise Blend <cit.>, which incorporates triggers covering the entirety of the input; SSBA <cit.>, characterized by sample-specific and invisible triggers; LF <cit.>, which utilizes triggers of low frequency; SIG <cit.>, a method without label poisoning; and CTRL <cit.>, which targets contrastive learning.
We utilized the BackdoorBench tool <cit.> to conduct our experiments, adhering to all default implementation settings with the sole modification being the selection process for poisoning samples.
The results, depicted in <ref>, consistently demonstrate that the poisoning of high importance samples significantly improves the efficacy of the backdoor attacks across more complex trigger patterns, thus underscoring the robust generalizability of our conclusions.
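As one concrete contrast to the patch trigger used in the main experiments, the Blend attack superimposes a full-image pattern onto the input; conceptually it reduces to the one-liner below, where the blending ratio alpha is an illustrative value.

```python
def blend_trigger(img, pattern, alpha=0.2):
    # Blend-style trigger: mix a full-image pattern into the input.
    return (1 - alpha) * img + alpha * pattern
```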
§ TRANSFERABILITY STUDY
<ref> and <ref> demonstrate the transferability of our findings to model stealing and backdoor attacks.
<ref> showcases Log-scale ROC curves for three distinct architectures utilizing three different norms.
The experiments are conducted on the CIFAR10 target dataset.
§ RELATED WORK
In addition to the attack approach investigated in our work, several methods exist for privacy and security attacks against machine learning models.
In the following section, we provide a brief overview of these approaches.
§.§ Membership Inference Attack
Membership Inference Attacks (MIA) <cit.> have emerged as a significant threat to privacy in the context of machine learning models.
These attacks aim to reveal the membership status of a target sample, i.e., whether the sample was part of the training dataset or not, thereby directly breaching privacy.
The seminal work by Shokri et al.<cit.> introduced MIA against machine learning models, wherein multiple shadow models were trained to mimic the behavior of the target model.
This attack originally required access to data from the same distribution as the training dataset.
However, Salem et al.<cit.> relaxed this assumption by demonstrating the effectiveness of using only a single shadow model, substantially reducing the computational cost involved.
Subsequent research <cit.> has explored more challenging settings for MIA.
In these scenarios, the adversary only has access to hard-label predictions from the target model.
Li and Zhang <cit.> proposed a method that approximates the distance between the target sample and its decision boundary using adversarial examples, enabling the attacker to make decisions based on this distance.
Recent advancements in MIA have focused on enhancing attack performance.
Carlini et al.<cit.> leveraged the discrepancy between models trained with and without the target sample to improve attack effectiveness.
Liu et al.<cit.> demonstrated the utility of loss trajectory analysis in MIA.
Furthermore, Tramèr et al. <cit.> highlighted the potential of data poisoning, showing that even with access to a small fraction of the training dataset, the attacker can significantly boost the performance of membership inference attacks.
§.§ Model Stealing Attack
Model stealing attacks <cit.> aim to extract information from a victim model and construct a local surrogate model.
This attack was initially proposed by Tramèr et al.<cit.>, assuming that the adversary has access to a surrogate dataset for stealing the model.
Orekondy et al. further advanced this approach by developing a reinforcement learning-based framework that optimizes query time and effectiveness <cit.>.
Recent research has focused on the more stringent data-free setting, where adversaries lack access to any data.
In this context, Kariyappa et al. <cit.> propose MAZE, which employs a generative model to generate synthetic data samples for launching the attack.
The generator is trained to maximize disagreement between the victim model and the clone model, requiring the gradients from the victim model.
To approximate these gradients with only black-box access, zeroth-order gradient estimation techniques are adopted.
Truong et al. <cit.> present a similar approach, where they replace the loss function from Kullback-Leibler (KL) divergence to ℓ_1 norm loss for training the student model.
In contrast to the previous attacks that generate “hard” queries that differ in predictions between the victim and clone models, Sanyal et al. <cit.> adopt a different strategy by generating “diverse” queries that spread predictions across different classes.
§.§ Backdoor Attack
Backdoor attacks <cit.> are training-time attacks that introduce malicious behavior into the model, making it behave like a benign model for normal inputs, while intentionally misclassifying the input to a predetermined class when the trigger appears.
The seminal work by Gu et al.<cit.> introduced the concept of the backdoor attack on machine learning models.
Building upon this, Liu et al.<cit.> proposed an advanced backdooring technique that incorporates enhanced triggers and relies on fewer assumptions.
However, these attacks were limited to injecting static triggers, making them susceptible to detection.
Salem et al.<cit.> integrated generative models to perform dynamic backdoor attacks, where the trigger is not fixed thus increasing the difficulty of detection.
Nguyen and Tran<cit.> further extended this concept to design an input-aware attack.
Most existing attacks in this domain are based on poisoning attacks <cit.>, which involve poisoning the training dataset.
In contrast, Bagdasaryan and Shmatikov <cit.> propose a distinct attack target in the scenario where the learning algorithm itself is poisoned, presenting an alternative approach in this field of study.
§.§ Data Reconstruction Attack
Data reconstruction attacks <cit.> aim to recover the target dataset with limited access to the target model, with the aid of additional knowledge possessed by the adversary.
In the realm of data reconstruction attacks, existing approaches can be broadly classified into three categories: optimization-based attacks, training-based attacks, and analysis-based attacks.
Optimization-based attacks, first introduced by Fredrikson et al.<cit.>, represent the majority of existing reconstruction attacks.
These attacks employ an iterative optimization process to reconstruct the training dataset, with the objective of obtaining a high likelihood score for the desired class.
Notably, the integration of generative models by Zhang et al.<cit.> has contributed to improving the quality of reconstruction.
Building on this line of research, several studies have explored diverse architectural choices <cit.> and loss functions <cit.> to further enhance reconstruction performance.
Conversely, training-based attacks <cit.> regard the target model as an encoder and train a corresponding decoder network to reconstruct inputs based on the model's outputs.
Recently, Haim et al. <cit.> presented a theoretical demonstration that, under specific assumptions, the training data can be completely recovered, leading to a new attack approach.
|
http://arxiv.org/abs/2409.03219v1 | 20240905033354 | Content Moderation by LLM: From Accuracy to Legitimacy | [
"Tao Huang"
] | cs.CY | [
"cs.CY",
"cs.AI",
"cs.ET",
"cs.HC",
"cs.LG"
] |
§ ABSTRACT
One trending application of LLM (large language model) is to use it for content moderation in online platforms. Most current studies on this application have focused on the metric of accuracy – the extent to which LLM makes correct decisions about content. This article argues that accuracy is insufficient and misleading, because it fails to grasp the distinction between easy cases and hard cases as well as the inevitable trade-offs in achieving higher accuracy. Closer examination reveals that content moderation is a constitutive part of platform governance, the key of which is to gain and enhance legitimacy. Instead of making moderation decisions correct, the chief goal of LLM is to make them legitimate. In this regard, this article proposes a paradigm shift from the single benchmark of accuracy towards a legitimacy-based framework of evaluating the performance of LLM moderators. The framework suggests that for easy cases, the key is to ensure accuracy, speed and transparency, while for hard cases, what matters is reasoned justification and user participation. Examined under this framework, LLM's real potential in moderation is not accuracy improvement. Rather, LLM can better contribute in four other aspects: to conduct screening of hard cases from easy cases, to provide quality explanations for moderation decisions, to assist human reviewers in getting more contextual information, and to facilitate user participation in a more interactive way. Using normative theories from law and social sciences to critically assess the new technological application, this article seeks to redefine LLM's role in content moderation and redirect relevant research in this field.
§ INTRODUCTION
LLM (large language model) can be used in a wide range of scenarios. One emergent area of its application is content moderation. Within a span of less than two years, such topic has generated considerable scholarly discussion (see <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>). Most studies so far have deemed LLM promising in this application field and the technology has been hailed as “the greatest change to the dynamics of content moderation in at least a decade” (<cit.>).
Even though researchers have explored LLM's contribution in different types or aspects of moderation tasks, one conclusion shared by most of them is that the new tool, through appropriate design and use, could be more accurate in detecting and classifying violating content than its predecessors. In other words, the existing scholarship on LLM moderation has centered upon the metric of accuracy – the extent to which LLM makes correct decisions of moderation. LLM's superiority in accuracy derives from its technical structure and pretraining process, which enable it to better understand complex contexts as well as to adapt to changing circumstances. Such accuracy bonus drives the advocacy of many researchers that the new technique should be encouraged for wider use in managing content (<cit.>).
This article argues, however, that accuracy is far from enough in situating and evaluating LLM’s role in content moderation. On the one hand, it is impossible to make all decisions right. Content moderation involves inevitable tradeoffs and balances among various rights and interests. On global platforms, in particular, people from different cultural backgrounds may reasonably differ on how the balance or tradeoff should be made. Like judicial adjudication by courts, moderation by online platforms also faces hard cases. On the other hand, accuracy is not the only goal of a moderation system. Content moderation is not an isolated process, but a constitutive component of platforms’ governance scheme. The central purpose of such a scheme is to provide legitimacy to the private exercise of governance power, which, unlike state governance, has neither constitutional authorization nor democratic approval. To justify their power of delimiting people’s fundamental freedoms online, platforms must maintain a governance scheme that helps them gain legitimacy. As a vital component of governance, content moderation must also serve this central goal, rather than the narrow focus of improving accuracy.
This article proposes a paradigm shift from the accuracy discourse to a more comprehensive analytical framework based on legitimacy. The centrality of legitimacy derives from dual needs of protecting people’s fundamental rights and curbing platforms’ formidable power. The concept of legitimacy examines how the practice of moderation could be normatively justified and accepted. Answering the inquiry of how the new technology of LLM can contribute to content moderation is possible only after it has been positioned into the holistic structure of platform governance, the central concern of which is legitimacy. That being said, the right question to be asked is not whether and how LLM can help reach more accurate moderation decisions, but whether and how it can enhance the legitimacy of the moderation and governance system of online platforms.
Legitimacy is a broad concept that consists of far more factors than accuracy, such as transparency, speed, rationale of decisions and procedural fairness (<cit.>; <cit.>). It contains both substantive and procedural aspects. The framework proposed by this article makes a distinction between easy cases and hard cases, and uses different legitimacy metrics to measure the moderation of the two categories. It does so for two reasons. First, people hold different expectations toward easy cases and hard cases, which, as a result, entail different normative implications for the legitimacy of the governance scheme. Second, the numbers and moderation costs of the two categories are disparate, so it is advisable for them to be managed by different governance strategies.
For the easy cases, which are those containing clear answers and little controversies, moderating them should be accurate, fast, and transparent. When straightforward answers are available, the moderation system should deliver those answers in an efficient and open manner, ensuring the health and vividity of the online discourse. For hard cases, which involve complex fact contexts or difficult value compromises, requiring fast and correct decisions is unrealistic, since there may hardly be any agreement on what decision is correct. Rather, the legitimacy of deciding hard cases depends upon the substantive quality of the explanation (reason-giving) for the decisions, as well the procedural justice of the decision process (especially the participation of users). Easy cases fit with the administrative model which based on statistical and probabilistic management (<cit.>), while hard cases should be dealt by the juridical approach, using procedures similar to those of the courts.
Examining LLM under this framework reveals that the most promising strength of this tool is not to increase the accuracy of moderation, but to conduct pre-screening of hard cases from easy cases, to provide quality explanations for moderation decisions, to assist human reviewers in getting more contextual information, and to facilitate user participation in a more interactive way. LLM’s accuracy advantage is significantly discounted by its increased latency and cost in moderating easy cases, as well as the diminished relevance of accuracy in hard cases. Instead, LLM can play valuable roles in various other aspects that promote the legitimacy of platform governance.
The accuracy discourse is parochial, misleading, and counter-productive in the sense that it has led substantial academic and industrial resources to an enterprise that produces little return. Achieving higher accuracy is not the best application scenario here; researchers and developers interested in LLM moderation should consider reorienting their focus of efforts. Locating LLM within the legitimacy framework helps such redirection by aligning the technological project of progressive improvement with the socio-legal concerns of normativity and justification. It allows us to rethink LLM’s real potential in content moderation and platform governance, as well as what improvements can be made to further realize its potential.
This article aims to make three contributions. First, after reviewing the current literature on LLM’s capability in achieving accurate moderation and the possible reasons behind (parts 2 and 3), I argue that accuracy is a poor metric for measuring the performance of LLM and guiding its design in content moderation tasks (part 4). Second, I propose a framework that focuses on legitimacy, under which various substantive and procedural metrics will be used for assessing the performance of moderation (part 5). Third, positioning LLM under the framework reveals both promises and challenges (part 6), and they point directions for future work of research in technical and legal fields (part 7).
It should be noted that this article deals with moderation by LLM, not moderation of LLM. The former refers to the use of LLM as tools for moderating online communities and platforms; while the latter is the moderation of content produced or generated by LLM. Moderation of LLM is a component of governing LLM to prevent it from producing harmful messages. This area also attracts much scholarly attention (see <cit.>) but should not be confused with the research issue of this article.
§ THE ACCURACY DISCOURSE
LLM moderation is gaining momentum. A simple Google Scholar search shows that more than a thousand papers have discussed this topic in the past two years.[The search of keywords “LLM” and “content moderation” generated 1220 results in Google Scholar. The search was conducted on Aug 5, 2024.] Researchers and developers have growing interest in using this technology to moderate content on digital platforms and communities. Most existing studies on LLM moderation have focused on the metric of accuracy. Accuracy refers to the capability of making correct decisions. It is usually measured by the percentage of correct decisions (true positives plus true negatives) among the total number of cases.
There are also metrics that are akin to accuracy, such as recall and precision: some may stress the ability of identifying true positives and others may stress the ability of identifying true negatives (<cit.>, 21). And tradeoffs are sometimes inevitable between the two sides (<cit.>, 45). But the general goal of all AI moderation tools is to reduce the two types of errors and increase the ratio of correct ones. This article uses accuracy in this broad sense, referring to the capability of making correct (true) decisions and avoiding erroneous (false) ones.
Among the 1220 results in Google Scholar after searching “LLM” and “content moderation”, 844 of them contain “accuracy”, constituting roughly two thirds. Generally, studies to date reported positive appraisals regarding LLM’s accuracy performance in moderation tasks, such as identifying and classifying content. (<cit.>, 2) found that in most cases, ChatGPT’s zero-shot accuracy is higher than that of MTurk (crowd-workers of annotation on Amazon). (<cit.>, 9) attested that “LLMs can perform on par or better than traditional ML models in cyberbullying detection. (<cit.>) revealed that LLMs exhibit a high degree of accuracy in detecting fake news. (<cit.>) tested the fine-tuned LLMs’ performance in fact-checking and discovered that “LLMs can reliably assess the relationships between social media posts and verified claims, offering performance comparable to human evaluations”. Similar work of testing LLM’s ability in fact-checking has been done by (<cit.>). What’s more, (<cit.>) has used chain-of-thought reasoning to prompt the LLM and observed a superior rate of accuracy in detecting new waves of online hatred.
Using LLM to moderate online content not only attracts much scholarly attention, it has also generated heated discussion in the industry. One of the most successful LLM providers, OpenAI, has enthusiastically touted its product, GPT4, for conducting content moderation tasks (<cit.>; <cit.>). The chief reason for OpenAI’s enthusiasm is, again, GPT4’s impressive performance in accuracy. The major evidence that OpenAI offered to support its claim is a chart that lists GPT4’s F1 scores in various categories of content, as compared to human moderators (<cit.>). As a reminder, F1 score is calculated as the harmonic mean of precision and recall – both are mathematical metrics that measure the level of accuracy (<cit.>).
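For reference, the F1 score combines the two accuracy-related metrics symmetrically:

F1 = 2 × precision × recall / (precision + recall)

so that, for instance, a moderation model with precision 0.8 and recall 0.6 would score an F1 of roughly 0.69; a high F1 therefore requires both types of error to be kept low at the same time (the numbers here are purely illustrative).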
To be sure, not all authors are optimistic. Some also expressed caution and concern for the wide use of LLMs in content moderation, at least at the current stage (<cit.>). One concern, for example, is the uneven error rate of LLMs: one study has found that GPT-3.5 “is much more likely to create a false negative (86.9% of all errors) than a false positive (13.1% of all errors)” (<cit.>, 4). Another limit is LLMs’ capability beyond the moderation of bad content. Most of the current studies on the accuracy of LLM moderation focused on detecting and classifying bad content. But sometimes a community may need to enforce rules not on bad content, but about benign content that is not suitable to be posted in the specific community. One study shows that state-of-the-art LLMs may not be sufficiently capable of accurately moderating non-harmful content in these niched communities (<cit.>).
The list could go on and on, and new publications are burgeoning, exploring the potentials and limits of LLM’s accuracy performance in moderation. This reflects the dominance of using accuracy as the benchmark in the scholarship of LLM moderation. This article uses “accuracy discourse” to describe this academic inclination. Accuracy is surely important, and it is natural or intuitive to assess LLM’s performance in moderation through the metric of accuracy. In any case, everyone expects moderation decisions to be correct, and if the new technology can increase the ratio of correct decisions, then it would also be intuitive to advocate for further adoption of such technology in this domain.
However, the accuracy discourse is parochial and misleading. It attracts most scholarly attention and resources in this field, some of which should have been spent on other equally, if not more, important issues. This article will argue that accuracy is far from enough in evaluating the content moderation practices, and the most promising area for LLM’s application in moderation is not accuracy enhancement. This conclusion may sound surprising to some researchers, but it is strongly supported by the arguments below. Moving our interests and endeavors beyond the accuracy discourse would significantly enrich our research agenda and make our enterprise more productive. Before a full critique of the accuracy discourse in part 4, the next part investigates another important question about the accuracy discourse: what makes LLM different from its predecessors in moderating content, and what unique features of LLM may explain its accuracy superiority. The inquiry of next part helps unearth the rationale behind the optimistic tone of the accuracy discourse, as well as lays the foundation for the later parts of this article by providing necessary technical backgrounds of the new tool.
§ THE UNIQUE FEATURES OF LLM MODERATION
AI moderation tools have been widely used by online platforms and communities to manage content, as human moderation is impossible to scale and costly to operate. Several methods have been adopted for various circumstances, such as hashing (<cit.>, 4) and blacklisting (<cit.>, 41). Currently, the most widely used is the technique of machine learning (ML). Since LLM is one sub-category of ML, this article uses “traditional ML” to refer to the technique of ML before the emergence of LLM. To know what differences LLM could bring, we should first take a glimpse at the basic characteristics of the traditional ML method and its limits.
Traditional ML moderation contains several steps: 1) developers build a ML model with nodes or “neurons”, the connections among which have been assigned with different weights; 2) human reviewers or annotators read and annotate the content of a dataset; 3) the annotated dataset is then used to train the ML model, enabling the model to adjust its weights to fit with the pattern of the dataset (<cit.>); 4) after training, the model is utilized to review the real-life content (<cit.>, 38). This process can be continuous and iterative. If the ML model’s decisions were further reviewed by human moderators, the human decisions can be “fed” into the learning process of the model, facilitating its improvement (<cit.>, <cit.>). The key condition for this technique is human supervision, which determines the quality of the annotated datasets as well as the learning process (<cit.>, 2).
The traditional ML method for content moderation has several limitations: First, it heavily relies on manual annotation of the training dataset. Such practice generates substantial costs for subsidizing the labor (<cit.>; <cit.>, 2) and also introduces biases into the process (<cit.>). In addition, different human annotators may not agree on the decision of a piece of content, especially for highly subjective categories such as hate speech (<cit.>, 35, 10; <cit.>, 8; <cit.>, 79). Second, the traditional method lacks flexibility and adaptability. A model trained by one dataset can hardly perform equally well in other datasets, represented by different cultures, languages, or contexts (<cit.>, 78; <cit.>, 626). Traditional ML models cannot be easily adapted to changing circumstances because doing so would require re-annotation of the dataset and re-training of the model (<cit.>, 562). Such feature of domain specificity significantly limits the scope of application of these models (<cit.>, 8). Third, the traditional ML approach lacks explainability and transparency. Even though the ML method may perform well in identifying rule-violations of online content, it cannot provide explanations to the users (<cit.>, 84). Transparency of the process is also missing: we hardly know any meaningful details about how the decisions have been made by the algorithmic black-box (<cit.>, 38; <cit.>, 2).
Recently, LLM has emerged as a trending technology that shows impressive performance in multiple task settings. One area of its application is content moderation. Technically speaking, the major difference between traditional ML models and LLMs is that the former is target-trained with specific datasets, while the latter is pre-trained with massive corpus of online data. LLMs are based on deep learning (DL) techniques which can “extract pertinent features from textual data, eliminating the requirement of manual feature engineering – a typical practice in conventional ML approaches” (<cit.>, 34). In other words, LLMs are self-supervised (<cit.>, 1959), as they can automatically learn implicit language patterns from the immense body of online materials without human annotation and supervision. This key difference brings two distinctive features of LLM in the task of content moderation, which may explain its promising performance in the accuracy metric.
First, LLM has the potential of better understanding contexts and nuances. There are two reasons for this. One reason is the distinctive training process of LLM. The pretraining of LLM on a massive corpus of data exposes the models to a wide range of content from diverse sources (<cit.>, 16-17). The sources may contain billions of web documents, potentially covering most areas of knowledge that has been stored online (<cit.>, 1961). Such scale and diversity enable LLM to generalize across different domains, and to develop comprehensive understanding of common language use (<cit.>, 17, 22). One particular reason why LLM can better appreciate contexts than previous ML models is that the former has seen more variations of expressions, idiomatic phrases, and emerging language trends. Broader exposure empowers LLM with the capacity of capturing more contextual expressions, such as irony, sentiment, and sarcasm. Knowledge at such scope is extremely helpful for moderating expressions online, which are complex, nuanced, and constantly changing. By contrast, traditional ML methods rely on targeted datasets that are much narrower in scope.
Another reason why LLM excels in contextual reasoning lies in the transformer architecture used by most dominant LLMs (including the popular GPT) (<cit.>). Such architecture uses the mechanism of self-attention, which takes the whole content of user input into consideration, even when the distance between two words is remote (<cit.>, 1964). In other words, it weighs the importance of each word in a sentence in relation to every other word, rather than the neighboring words only (<cit.>). Such capability facilitates the model to capture long-range dependencies and contextual relationships within user input, rather than grasping meanings through its fragmented parts. This characteristic partly explains LLM’s superiority in holistic and contextual understanding.
Second, LLM is more adaptable and flexible than traditional ML models. LLM does not rely on manual labeling of datasets; rather, its pretraining is unsupervised (<cit.>, 1). Because the decision logic is decoupled from the model, LLM can quickly adapt to changes of policy and context without re-annotating the data and re-training the model (<cit.>, 562). Through pretraining, LLM already acquires contextual knowledge and understanding across a diverse range of areas; when new context emerges, LLMs can be adjusted to fit with the new context through prompting or fine-tuning (<cit.>, 3; <cit.>, 2). Freed from the continuous need to update and manually annotate quality datasets, LLM is especially capable of dealing with emergent crises or events. For example, in dealing with new waves of hatred online, platforms can easily adapt the LLM “by only updating the newly identified hate targets and derogatory terms in the prompts rather than the model” (<cit.>, 2).
These are the main promising aspects of LLM moderation that leads to the optimistic tone in the accuracy discourse. To be sure, LLM also contains risks and limitations, such as the high cost, the problem of hallucination, and the inevitable bias inherent in their pretraining.[See infra part 6.1] But the revolutionary technical features make the tool an eligible candidate of the next-generation technology for moderation. Unfortunately, however, current discussion and research focus almost exclusively on exploring LLM’s capacity of achieving high(er) accuracy in moderation. Such parochial vision limits the endeavor of discovering the full potential of LLM in the governance of online communities. Accuracy is surely an important metric, but as the following part shows, it is not the only one that matters; and in some cases, it may even be misleading and counter-productive to pursue accuracy as the top priority. Indeed, the technical features of LLM, such as its broad data exposure, generative functionality, and contextual adaptability, can contribute to many other aspects of moderation apart from accuracy.
§ WHY ACCURACY IS NOT ENOUGH
The accuracy discourse is parochial and misleading. Taking accuracy as our primary or even exclusive metric of evaluating and designing LLM moderation tools misunderstands the function of accuracy and its position in the governance system.
There are four cases against the accuracy discourse. First, it is impossible to achieve perfect accuracy in practice, and trying to do so is dangerous. Second, accuracy has both individual and systematic aspects, but the current discourse stresses the former while ignoring the latter. Third, the discourse fails to recognize the distinction between easy cases and hard cases, which should be treated differently in platform governance. And fourth, focusing exclusively on the substantive result of decisions overlooks other aspects of content moderation that are also critical for governing the platforms. This part will illustrate these four points in turn.
§.§ Can we make all decisions right?
The accuracy discourse, as reflected in most studies about LLM moderation so far, holds that the goal of designing and incorporating LLM into moderation is to achieve a high percentage of correct decisions – the higher, the better. If accuracy is the benchmark of evaluation, then the ideal state is that all moderation decisions will be made correctly; in other words, LLM can perfectly tell whether an online post violates the community rules. That is the aspiration that LLM developers should pursue.
Such discourse has one assumption behind it: that there exists, and we can identify, an objective or consensually endorsed standard of determining right decisions (true positives and true negatives) from errors (false positives and false negatives). That assumption, however, does not stand. Determining whether certain content violates the platform’s rules involves probing the facts, identifying the relevant rules, and making a choice in the face of value conflicts. It requires delicate assessment and shrewd judgment. Without an objective standard, the only metric for measuring algorithmic decisionmaking is the decision of humans. Actually, human moderators’ classification has been used as the “ground truth” that trains and evaluates AI tools (<cit.>, 46). Facebook has hired a separate reviewing panel (containing three reviewers) to evaluate the correctness of first-instance human moderators (<cit.>, 13). This is also the case for experiments that test LLM’s capability of moderation. During (<cit.>, 3)’s experiment, for example, they employed experienced human reviewers to resolve disputed cases.
The problem with this approach is that human moderators sometimes differ in moderating cases. “By definition, content moderation is always going to rely on judgment calls, and many of the judgment calls will end up in gray areas where lots of people’s opinions may differ greatly” <cit.>. One salient issue, as noted by many researchers in this field, is the intercoder disagreement, referring to the instance that different moderators cannot reach consensus on a case (<cit.>). In such circumstances, whose decision should be counted as “ground truth” or the standard of accuracy?
The existence of disagreement reveals that we cannot make every case right, because not every case can be decided by humans in a consensual and non-controversial way. In fact, “[n]o legal system guarantees a right to an accurate decision. Enforcement of speech rules has never been perfect, online or off” (<cit.>, 799). People reasonably disagree on many issues, especially issues regarding moral, religious, and other value judgments (<cit.>). Observation on legal adjudications shows that determinacy and consensus are not always available, otherwise there would be much less lawyers, appeals, and dissenting judicial opinions. Just take a look at the free speech cases decided by the U.S. Supreme Court: within the past 20 years, the court decided 43 cases on free speech; 26 of them contain dissenting opinions (about two thirds!) and 7 of them are decided with a divisive vote of 5:4.[The statistics here is up to the date of June 18, 2024.] The prevalent disagreement on legal cases illustrates that consensus on accuracy is an illusion.
The goal of perfect accuracy is not only impossible to achieve, it is also undesirable. Perfect accuracy means there will be, and must necessarily be, a single, uniform, and perfect standard for measuring accuracy. If there is not, we must design one. But doing so would greatly suppress the space of value contestation by imposing one set of values upon the whole platform. Consider the fact that most online platforms are global, with users coming from divergent cultural backgrounds (<cit.>). Striving for uniform and definite applications of rules may serve the goal of formal consistency, but sacrifices other vital principles such as value pluralism, open disagreement, and epistemic humility. This is unacceptable for governing global and open platforms.
§.§ Two levels of accuracy
Current studies on LLM moderation use accuracy at the individual case level, aiming to make correct decisions in each and every individual case. This perspective ignores that accuracy can also be measured at the holistic or system level. System accuracy is not the aggregate of the accuracy scores of individual cases. Rather, it refers to the general performance of the whole moderation system, including metrics like consistency, predictability, and the fairness of error distribution across different groups of users, different categories of content, and different periods of time. Two insights can be gained by taking the system level of accuracy into account.
First, the perspective from the system level further corroborates why perfect accuracy is not a worthwhile goal. One can reasonably expect the moderation of one case to be accurate. But things will get much harder if the number of cases rises to millions or billions, and the time for moderating each case is very limited (sometimes only seconds for a post) (<cit.>, 549). Moderation also incurs costs, no matter who does the job – human or AI. The factors of time, cost, and scale at the system level render moderation a managerial enterprise that must constantly make tradeoffs under limited resources. Errors are to be managed by considering costs, not to be avoided at all costs. That’s why people have widely accepted that it is “unrealistic to expect rules to be applied correctly in every case” (<cit.>, 766), and that perfect accuracy is “unaffordable” and “intolerably costly” (<cit.>, 185-6).
Second, viewed through the system lens, accuracy is not only a numerical metric, measuring the performance of moderation by percentages. Rather, accuracy is also a distributional metric. If, as the foregoing analysis suggests, errors are inevitable and perfect accuracy is illusory, then how to manage accuracy and how to distribute errors become crucial (<cit.>, 791). The right questions worth examination should include, for example, “[w]ho suffers from false positives and false negatives? ... [w]hich types of errors are known and tolerated, [and] how is risk distributed” (<cit.>). Accuracy at the individual case level is a poor indicator for these issues, since it ignores the distributional effect. Even though the total accuracy is sufficiently high, say 99.99%, it would still be worrisome if the tiny percentage of errors (0.01%) is mostly suffered by LGBTQ groups or racial minorities. To date, few works (one example is <cit.>) on the accuracy of LLM moderation have discussed the distribution of accuracy across different demographic groups. Future studies should devote more effort to this aspect.
§.§ Easy cases vs. hard cases
The foregoing analysis revealed several tensions within the practice of content moderation: that accuracy is an important metric for moderation but it is also parochial and misleading; that good governance requires making correct decisions but the scale and resource limits render such a task extremely challenging; that content moderation is individually similar to judicial adjudication but systematically more akin to administrative management. Evelyn Douek is right that perfect accuracy is impossible and “system thinking” is necessary for approaching content moderation (<cit.>). But she neglects the fact that accuracy and other metrics can coexist, as can the individualistic/judicial and the systematic/administrative modes of thinking (<cit.>, 343). This article argues that the two modes can be smoothly combined in platform governance. One crucial premise is the distinction between easy cases and hard cases. For easy cases, the systematic/administrative thinking will apply, and the measuring standard is the correct application of rules to cases in a time-sensitive and transparent way. For hard cases, the individualistic/judicial approach is more appropriate, the goal of which is less to reach correct outcomes than to afford justification and participation to the stakeholders.
Distinguishing between easy cases and hard cases is reasonable and even necessary for three reasons. First, such a distinction already exists in platforms’ content moderation practice. Meta has reported that “[i]n most cases, identification is a simple matter. The post either clearly violates our policies or it doesn’t. Other times, identification is more difficult” (<cit.>). This claim not only acknowledges the distinction between easy and hard cases but also discloses the fact that easy cases account for a dominant majority of all moderation cases, while hard cases are relatively few. Platforms constantly draw the line between easy and hard cases when they divide labor between AI moderators and human moderators (<cit.>, 12, 14). For example, Meta uses AI tools to “identify and remove a large amount of violating content—often, before anyone sees it”, and when “technology misses something or needs more input”, human reviewers will intervene to make their judgment calls (<cit.>). Such practice reveals that platforms must make, and already have made, the strategic choice of treating different cases differently.
Second, easy cases and hard cases have also been distinguished in legal systems, for similar reasons (see generally <cit.>; <cit.>). In most instances, we get highly determinate answers regarding what the law prescribes (<cit.>, 423): consider how we adjust our behavior according to the law, the quick and clear advice we receive from lawyers during consultation, and cases that are settled or finalized at the trial level (<cit.>, 412-3; <cit.>, 597; <cit.>, 225). Sometimes the result of a case is unclear or contested, so that value judgments are needed (<cit.>, 596): think about the cases that are appealed or those with dissenting opinions from the deciding judges. In some sense, the institutional design of litigation costs, hierarchies of courts, and judicial writing of dissents is made to distinguish between easy cases and hard cases,[Dissent is one signal of controversies among judges and can be used as an indicator of hard cases. However, the opposite is not true. A unanimous judicial decision without any dissenting opinions does not necessarily indicate that the case is easy. Such unanimity may reflect the court’s strategic choice of speaking with one voice. Take Brown v. Board of Education. See (<cit.>, 408).] so that limited judicial resources can be reasonably distributed, clear issues can be quickly resolved, and difficult ones can be sufficiently debated and contested.
Third, easy cases and hard cases have different social impacts and face different expectations from users. For easy cases, answers are relatively clear and obvious; what users and society expect is their resolution in a correct and convenient way. Substantive results are what matter most, so accuracy should be the dominant metric here. Hard cases, by contrast, are controversial and contested. They typically involve complex facts or contentious value judgments, and people do not, and may never, have consensus answers to those cases. Should images of women’s breasts and nipples be prohibited (<cit.>)? If there are exceptions, what exceptions should there be? If child nudity is generally banned but can be allowed in some cases due to, e.g., newsworthiness or awareness raising, how should the boundaries be drawn (<cit.>)? What about hate speech like Holocaust denial (<cit.>)? When reasonable people differ on the substantive results, what matters most is not who wins and who loses, but how the result has been reached. Participation and procedure may be more important than the contested correctness in these cases (<cit.>, 320-1).
The existence and salience of hard cases illustrate why accuracy should not be the overarching metric that applies across the board. Hard cases defy any clear standard for telling right decisions from wrong ones. For those cases, accuracy is not only impossible but also irrelevant, for in the face of disagreement, conformity with one party’s vision of justice should not be used to judge the fairness of the resolution.
The rights discourse mandates that all actions infringing basic rights (such as the right to free speech) should be justified in a manner akin to a fair trial. However, the scale of content moderation on digital platforms dictates that it is impractical to subject every moderation case to a judicial-style adjudication bound by complex procedural safeguards. In other words, “there is a tension between the evaluation of AI tools from a statistical perspective (how well are they performing overall) and their evaluation on a case-by-case basis, which is the predominant mode of evaluation from a fundamental rights perspective” (<cit.>, 10). That’s why Masnick has famously declared that “content moderation at scale is impossible to do well” <cit.>. “To do well” here means making decisions that are correct from the juridical perspective and in a case-by-case manner. If the juridical approach cannot be applied at scale, shall we abandon it? Is it savvy to discard the rights discourse and the juridical approach entirely and replace them with probabilistic and systematic management (<cit.>)?
The answer is no, because people still expect moderation by platforms to be done correctly and still use the rights discourse to evaluate the practice – how rights have been infringed by a moderation decision and whether such infringement is fair. Such discourse fills the news coverage of headline moderation cases. Sure, these cases are significantly outnumbered by the easy and invisible cases and cannot reflect the system design behind the scenes (<cit.>, 529-30, 559). But the public attention these cases draw demonstrates their predominant influence. These cases greatly shape the public’s view of platforms’ moderation and the users’ level of acceptance of their governance power. Unlike state institutions, private platforms cannot build the legitimacy of their power upon ex ante democratic authorization. Rather, their legitimacy relies mostly upon ex post public acceptance. Justification and contestation of contentious and influential cases is one crucial way of garnering public acceptance and gaining legitimacy. Even though the hard cases constitute only the tip of the giant moderation iceberg and cannot bring overall accountability to the moderation system (<cit.>, 530), these “high-profile content moderation controversies” act as paradigms that shape the public discourse and opinions about platform governance.
As both easy cases and hard cases matter for the platforms’ governance scheme, this article argues for combining the systematic management approach and the individualistic juridical approach. Due to the large scale and relatively uncontroversial nature of easy cases, it is appropriate to subject them to systematic and probabilistic management. By contrast, hard cases are contested in nature and few in number, making them the proper objects of judicial and adjudicatory processes.
§.§ Content moderation is a part of governance
The accuracy discourse views content moderation as an isolated process, one in which platform rules are mechanically applied to individual cases. Framed in this way, content moderation is like a syllogistic game. But content moderation is never simply the application of rules or the adjudication of disputes. Rather, it is, and must be taken as, a constitutive part of the governance system of platforms. Perceiving content moderation as a component of governance will highlight many metrics other than accuracy. “Because substantive outcomes will always be fundamentally contestable and contested, the task is not to arrive at ‘right’ answers” (<cit.>, 820). Rather, the principal goal of content moderation is to enhance the legitimacy of platform governance.
Descriptively, the personnel, budget, and tools for content moderation are all internal components of the governance structure of platforms (<cit.>). Most platforms are private companies, which have no legal duty to moderate content generated by users. National laws such as Section 230 in the U.S. have granted them broad immunity against liability arising from user-generated content.[47 U.S.C. § 230.] However, all platforms choose to moderate content voluntarily. The motives for doing so are mixed: preempting state regulation, promoting their public image, making their products more profitable, etc. But one common concern underlying these different motives is legitimacy (<cit.>, 104) – the concern of how platforms’ exercise of power can be justified and recognized (<cit.>, 387; <cit.>, 68; <cit.>, 666).
Normatively, legitimacy should be the central concept in evaluating the practices of content moderation and platform governance. This is due to two interrelated reasons: rights protection and power check. Moderation is the process of delineating the boundaries of free speech when it conflicts with other rights or interests. The exercise of many basic rights is dependent on private platforms (<cit.>, 1668). Public and human rights law prescribes that any state intervention or infringement on fundamental rights must be appropriately justified to be legitimate. The stakes are not lower when the infringing body is a private company. These private platforms exert tremendous power over people’s basic freedoms. As (<cit.>, 38) summarized, “the largest online platforms, such as Facebook and Google, exercise more power over our right to free expression than any court, king, or president ever has—in view of the very significant percentage of human discourse that occurs within the boundaries of these ‘walled gardens.’” The exercise of such power must be scrutinized with and measured by the lens of legitimacy. Indeed, scholars have advocated for “digital constitutionalism”, which aspires to ensure that platform powers are wielded legitimately (<cit.>, 2). The power of delimiting people’s fundamental rights must be legitimate for it to earn acceptance, obedience, and respect (<cit.>, 178).
As an essential and definitional component of online platforms (<cit.>, 21), content moderation must be viewed as a constitutive unit of governance, the goal of which is to legitimize the exercise of platforms’ power of defining people’s fundamental rights and shaping the contour of the online public square. The enterprise of legitimization entails increasing the public’s trust in and acceptance of the exercise of power. It includes, as the next part elaborates, both substantive and procedural aspects. Accuracy is not only insufficient but also misleading for measuring and evaluating LLM moderation, as it monopolizes attention that should be paid to other metrics. It makes the discussion parochial and ineffective. By contrast, a more suitable and comprehensive framework for evaluating and guiding content moderation must take the broader concept of legitimacy as its governing principle. The next part introduces such a framework.
§ A LEGITIMACY-BASED FRAMEWORK FOR CONTENT MODERATION
§.§ Introductory notes on the framework
This article is not the first endeavor to propose a framework for normatively guiding the content moderation practice. For instance, (<cit.>, 96) has devised a framework on the basis of two metrics, risk and accuracy. This framework fails to notice that risk is not the only factor distinguishing moderation cases and that accuracy is merely one of several elements that matter. (<cit.>, 1038) has compared the relative strengths and weaknesses of machine and human moderators, covering factors such as contextual adaptability, speed, and consistency. However, their work overlooks procedural legitimacy and the disparate implications of different cases. Based on the previous discussion, this part introduces a framework that distinguishes between easy and hard cases on the one hand, and covers both substantive and procedural aspects of legitimacy on the other.
Scholars have proposed various benchmarks or principles as indexes of legitimacy, such as transparency, democratic participation, and rule of law (<cit.>, 5; <cit.>, 2-4). Legitimacy has both substantive and procedural aspects (<cit.>, 379). Substantive legitimacy evaluates the content of decisions – whether they are correct, fair, or conforming to some higher values or principles (<cit.>, 676-7). Another crucial component of legitimacy is procedural justice, which refers to the approach, manner, and process in which rules are enacted and enforced <cit.>. Justice Marshall has succinctly summarized the two aspects as “the prevention of unjustified or mistaken deprivations and the promotion of participation and dialogue by affected individuals”[Marshall v. Jerrico, Inc., 446 U.S. 238, 242 (1980)]. This is also the case for content moderation. Users care about both the outcome of their cases and how they have been treated in the process. The public holds serious concern over how the platforms’ moderation system performs and over the extent to which it remains accountable.
Few existing studies of LLM moderation have drawn a distinction between easy cases and hard cases. One exception is (<cit.>, 2), which argues that LLM can automate moderation decisions on clear cases, so that human moderators can focus on borderline cases that require more expertise. This article further elaborates on this distinction and suggests different metrics for evaluating the legitimacy of the two groups separately. Before introducing the metrics, one practical issue needs to be addressed: how to separate them – how to screen the hard cases from the easy ones? This might be a challenging task since “it is not clear how to know in advance which areas are safely bulk and which are more controversial, as the landscape of controversy changes over time” (<cit.>, 3). Meta has described the hard cases as involving content that is “severe, viral, nuanced, novel and complex” (<cit.>). But these descriptions are too abstract to be manageable. Like the similar distinction in the legal system, distinguishing easy and hard cases in the moderation context is not a clear-cut practice (<cit.>, 88; <cit.>, 600). Instead of offering a bright line, I argue that for cases to be hard, the following conditions should be taken into account:
1) Complexity of facts and contexts. Disagreement about facts is one type of cause that makes a case hard (<cit.>, 93). Lack of a clear and comprehensive understanding of the contextual facts of a case may prevent judges or moderators from reaching a clear answer. The Facebook Oversight Board, for example, stated in the famous moderation decision about the suspension of Trump’s account that, because Facebook refused to provide detailed information about how the platform’s technical design amplified Trump’s inciteful posts, the Board could not fully assess whether the removal of the account was a necessary measure in that case (<cit.>). One pertinent factor here is the category of content. Some content generally requires less contextual knowledge, such as spam, child porn, and IP-infringing materials. Other categories of content may be highly contextual and culturally dependent, such as hate speech, which is more likely to make a case hard.
2) The vagueness of platform rules. Sometimes the moderation rules may be too vague to dictate a clear or singular result. Ambiguity of rules is more complicated than the non-existence of rules. When there are no rules applicable to a case, we can resort to the principle that what is not prohibited is permitted (<cit.>, 484-5). But when a rule is ambiguous in its applicability, or dictates multiple reasonable but potentially conflicting results (<cit.>, 94), moderators have to use other tools, sometimes judgment calls, to pick from those conflicting options.
3) The textual meaning of a rule contradicts its underlying purpose or value, or violates some established principles of morals (<cit.>, 415-6; Hutchison 1989, 560). Sometimes the application of a rule may lead to unconventional or controversial results (<cit.>, 488). This requires the decisionmaker to make a choice between adhering to the textual mandate and resorting to some external values or morals.
4) Plurality of rules. This arises when there are multiple rules applicable to a case (<cit.>, 94; <cit.>, 415). The Meta Oversight Board, for example, has three sources of rules for its judgment: Meta’s Community Standards, Meta’s values, and the International Human Rights Law (IHRL) (see <cit.>). Sometimes these different sources of rules may be in conflict. This happened in one case, in which the Board held that Facebook’s moderation decision conformed with its Community Standards, but that the decision should be overturned because it violated the Values of Facebook as well as the IHRL (<cit.>).
In reality, the division between easy and hard cases should not be a fixed line, but a fluid one, subject to changing circumstances. For example, during political and social crises, such as ethnic conflicts, national elections, and public health events, different response strategies can be used (<cit.>). One strategy is to lower the threshold of hard cases, letting more cases be classified as “hard” and then elevated to human or expert review. In fact, as will be elaborated in Part 6.2, the threshold can be set with the help of the LLM itself, by having the model attach a confidence score to its decisions.
Below, I introduce the framework based on the core benchmark of legitimacy and the distinction between easy and hard cases. As a general framework for evaluating content moderation, it is not limited to LLMs. Rather, the framework can be used to measure the performance of different moderators (e.g. traditional ML, LLM, ordinary human moderator, and expert moderator) against the legitimacy criterion. Identifying the suitable role of each type of moderator depends on its relative strengths and weaknesses as compared to other moderators.
§.§ Easy cases: making correct, fast, and transparent decisions
Easy cases are those with clear answers: either the post has violated platform rules or it has not, and the type of violation is obvious. Moderating such cases is the routine job of moderators. From the legitimacy perspective, moderating easy cases must serve two goals: for users, cases should be resolved in a correct and timely manner; for platforms, the communicative space must be regulated fairly, efficiently, and openly. These goals are not only achievable but also indispensable for gaining trust in the moderation practice. From these goals we can derive three legitimacy metrics for moderating easy cases:
1) Accuracy. Users expect cases to be decided accurately, and platforms likewise need correct moderation in order to gain public trust and to build amicable communities. Accuracy should be measured at both the individual-case level and the system level. The former refers to the aggregate percentage of correctly decided cases, while the latter requires the distribution of accuracy – or, viewed from the opposite angle, errors – to be fair across different groups of people, categories of content, and periods of time.
2) Speed. One salient difference between online moderation and offline speech adjudication is the velocity and virality of communications in the former context. In the online world, it is more appropriate to say “justice delayed is justice denied”. For cases with obvious answers of right or wrong, providing the right answers in a timely manner means delivering justice to the users. Speed is also crucial for the moderation system as a whole, since maintaining the health of the platform requires fast removal of harmful content.
3) Transparency. Transparency refers to the extent to which the moderation process is visible to the public. This metric ensures that users are informed about the moderation decisions as well as the internal moderation process (<cit.>, 1527). As an old principle of justice, transparency can be imposed at the individual-case level and the system level. The Santa Clara Principles, for instance, urge the platforms to not only disclose the numbers (statistics) of the moderation system but also provide users with individual notices about the moderation decisions.[The Santa Clara Principles, available at https://santaclaraprinciples.org/]
At both levels, the transparency requirement can be either loose or strict, subject to considerations of practical needs. At the individual level, the notice to relevant users can be a brief note that states which piece of content has been dealt with and which provision of the rules has been violated; the notice can also be much more detailed, including information about why the content violated the specific rule, how the violation has been identified, whether the moderation has been done by AI or human, what the technical makeup of the AI is, and what the decision process of the human moderator was (<cit.>, 1536, 1539). At the system level, likewise, the statistics disclosed can be concise or comprehensive. The degree of the transparency requirement should take into account the operational cost and potential burden it may impose on platforms. One particular concern is the effect on competition, since the costs of fulfilling transparency mandates may be unbearable for small and startup platforms, placing them at a greater disadvantage in the market.
§.§ Hard cases: enabling justification and participation
Hard cases defy clear and consensus answers. For the users, especially the losing party, to accept and respect the decisions in hard cases, the key is not the substantive result, but the process of reaching that result. On the one hand, the contentious decisions must be justified to be accepted; on the other hand, users and stakeholders must be fairly treated in the moderation process.
1) Justification. The more contentious the case, the more it needs to be justified. Jurists like Hart and Dworkin shared the view that for hard cases, adjudicators need to justify their decisions (<cit.>, 80). Feldman argued, in his proposal of establishing the Facebook Oversight Board, that “[when] there is no magic-bullet solution to balancing competing values … [t]he advantage enjoyed by real-life constitutional courts is that they openly address difficult cases, and so derive credit and legitimacy from being principled” (<cit.>, 102). The metric of justification mandates that explanations be provided to the relevant parties as well as the public. This echoes the principle of reason-giving in public law (<cit.>, 75). Explanation here is not the same thing as the metric of transparency in easy cases: while the latter refers to the disclosure of moderation details in a systematic, statistical, and holistic manner, the former evaluates the quality of explanation in individual cases. Explanation has been recognized as an important factor of legitimacy: for example, empirical findings show that “users who did receive some sort of explanation from moderators regarding their removal were less likely to have posts removed in the future” (<cit.>, 71). In addition, reading explanations of moderation decisions is an educational opportunity for users to learn and internalize the community norms <cit.>.
To be sure, explanations should not aim at gaining wide approval on the merits of the decisions. But refraining from the ambition of achieving consensus on substantive results does not mean that all substantive reasonings are equally persuasive and equally acceptable. Not all explanations can be qualified as justifications (<cit.>, 1288). Justificatory effect entails that the explanations provided must reach a certain level of substantive quality. For example, how the decisions address the facts and rules of the case as well as the context of the controversy, how the decisions respond to users’ concerns and the public expectation, and how the decisions approach pressing issues like the borrowing of human rights norms into the private moderation context are all important aspects for measuring the substantive legitimacy of the explanations.
2) Participation. If justification (reason-giving) earns substantive legitimacy for moderating hard cases, then participation secures the procedural side of legitimacy. That participation is a central tenet of procedural justice is “as old as the law” (<cit.>, 308). Scholars found that people would deem algorithms more acceptable when they are more informed about how the algorithms work and afforded more control over their work (<cit.>, 4, 20). This corresponds with the general finding that “[a]ffected individuals are more likely to perceive a decisional system as legitimate ‘when they play a meaningful role in the process’” (<cit.>, 1548).
Importing principles from public and constitutional law, the participation metric prescribes that users should have a fair say in the moderation process: this includes, e.g., the right to be informed or notified, the right to comment, and the right to appeal (<cit.>, 7-8; <cit.>, 391-2). The normative meaning of hard cases lies not in their perfect resolution, but in the process of debating and deliberating them in a public way – that is crucial and even constitutive for an open society (Gillespie 2020, 3-4). The procedural safeguards ensure that hard cases can be contested in an accessible and inclusive manner. Of course, these procedural requirements are also a matter of degree, subject to practical limits of cost and other considerations.
The defining feature of this analytical framework is that it makes three kinds of distinctions: among different types of cases, among different categories of moderators, and among different specific metrics. The last distinction is the result of making the first two. In this regard, to examine the performance of LLM under the framework, we need to ask the following questions: will LLM be used to moderate easy or hard cases, what are LLM’s strengths and limits on the relevant metrics, and compared to whom?
§ LLM MODERATION UNDER THE FRAMEWORK
The main argument supporting the accuracy discourse and its sanguine tone can be summarized as follows: because of the superior contextual understanding achieved during extensive pretraining, LLM is capable of making more accurate judgments in moderation tasks, many of which involve contexts and nuances. This part will argue that accuracy is not LLM’s major field of contribution to content moderation. For easy cases, replacing traditional ML models with LLMs may bring some accuracy bonus, but LLMs also generate additional cost and latency; and as easy cases do not involve much contextual complexity, traditional ML models already perform reasonably well in this category. For hard cases, accuracy is not an important metric, since the standard of what is accurate becomes blurred here. Rather, what matters is the quality of justification for the decisions as well as the procedural justice offered by the decisionmaking process.
§.§ Reassessing LLM’s accuracy capability
Examining LLM under the framework proposed by this article and comparing its performance with other types of moderators, we can see that LLM’s superior capability in accuracy comes with significant limitations and costs.
First, even though LLM has achieved impressive improvement in accuracy, it cannot take the place of human experts in moderating hard cases. It is true that LLM’s exposure to large corpora of language equips it with more contextual knowledge and more proficiency in recognizing linguistic variations. This capability is a competitive advantage compared to target-trained ML models, whose knowledge is limited to their specific training datasets. And some studies indicated that LLM’s accuracy score is higher than that of outsourced human moderators (<cit.>, 2). But LLM cannot, at least in the short run, reach the level of experts. According to OpenAI, LLM and ordinary human moderators (with light training) perform equally well in labeling accuracy; but both are outperformed by expert moderators that are well-trained (<cit.>). Expert moderators, with systematic training, sufficient decision time, and organizational and financial support from the platforms, are the best performers in accuracy – indeed, expert labeling has commonly been used as the “ground truth” for determining what is accurate (<cit.>, 13). To be sure, it is impractical for experts to moderate easy cases due to scale and budgetary limits. But the quality of their reasoning and justification makes experts the best choice for moderating hard cases. That’s why Meta has delegated some of its hard cases to a board of experts (Oversight Board), whose role can never be fully replaced by LLM.
Second, if LLM’s accuracy superiority cannot find its place in moderating hard cases, how about the easy cases? In the realm of easy cases, the current major moderators are traditional AI tools and ordinary human reviewers. Even though their accuracy capability has been surpassed by LLM, platforms still have to be cautious about replacing them with LLM.
One reason is speed. LLM generally spends more time generating decisions because of its larger scale of parameters. Ordinary human moderators are generally slower than machines but quicker than expert moderators, as they spend about 10 to 30 seconds on average moderating one piece of content (<cit.>, 32; <cit.>, 116). LLM moderation is quicker than human moderation, but takes much more time than its AI predecessors. It is reported that moderating one piece of content could take an LLM a few seconds (<cit.>, 9). High latency makes it unlikely for LLM to fully replace the role of traditional ML models in moderating easy cases, where speed is a key parameter. To be sure, LLM can supplement traditional ML in some cases, and technical improvement may increase LLM’s speed in the future. But speed is a concern that should not be ignored, especially in real-time moderation tasks.
The second concern is cost. LLM moderation is a costly practice (<cit.>; <cit.>, 10). On the one hand, many LLMs, especially their APIs, are proprietary and charge fees for use. On the other hand, fine-tuning the models for specific scenarios introduces extra costs, including computational resources, preparation of the fine-tuning dataset, and the required expertise (<cit.>, 212). Researchers reported that for moderation of highly contextual content like hate speech, the quality of the fine-tuning dataset significantly affects LLM’s accuracy performance (<cit.>) – high-quality annotation of the dataset necessarily invites additional spending. Resource intensiveness poses a substantial obstacle for platforms, especially small ones, to put such techniques to wide use. In fact, due to the law of diminishing returns, the small accuracy gain brought by LLMs may come at the price of substantial cost. That means, after a certain point, “the cost of reducing the marginal rate of error would become higher and higher... [and platforms would] ...invest enormous resources for an infinitesimal gain in accuracy” (<cit.>, 247).
Third, the buoyant mood toward LLM’s accuracy performance calls for more vigilance about the new technology’s inherent limits. One limit is bias. Like all other ML models, LLM is not immune from biases. Bias can come from “online users who produced the pretraining data, feedback from crowdworkers during Reinforcement Learning from Human Feedback (RLHF) process and potentially, the decisions made by the model developers themselves” (<cit.>, 1). One particular concern is the majoritarian bias. The most salient difference between LLM and traditional ML models is that the former is based on pre-training within much larger datasets. Larger training datasets can provide more diverse knowledge and more contextual understanding to the models. But the drawback is also obvious. The more training data has been fed into the model, the more it will align with the knowledge, perception, and opinion of the mainstream. Taking what the majority holds as the “truth”, LLM “may reinforce majority views and further marginalise minority perspectives” (<cit.>, 23). Because the training datasets of LLM contain more materials on the dominant cultures, languages, and positions (<cit.>, 23-24), its application carries the risk of entrenching the mainstream viewpoints and strengthening the existing power dynamics within the online discourse (<cit.>, 8056).
Another technical challenge for the state-of-the-art LLMs is the existence of hallucinations (<cit.>, 8056), which can cause inaccuracies and inconsistencies in moderation. Hallucinations are the nonsensical or fabricated answers produced by LLMs; they are not fully explainable: on many occasions, developers do not know why they emerge or how to prevent them (<cit.>). Even though LLM could moderate at a high level of accuracy in the aggregate, the inconsistencies and lack of explainability caused by hallucinations may hamper its performance on other metrics, including system accuracy and transparency.
In addition, current studies on LLM moderation either test the accuracy of LLMs in detecting certain categories of speech (such as hate speech or misinformation), or prompt the LLMs with a particular rule of content. In both settings, researchers give the LLMs a rule for a specific category of content, and the LLMs are asked to determine whether a post violates that rule. In reality, however, the content rules of a platform are very complex, covering many kinds of speech; and the first thing LLMs must do is to determine the appropriate rule applicable to a piece of content. In other words, all the content rules (tens of thousands of words at least, for a big platform[See “Facebook Community Standards”: https://transparency.meta.com/policies/community-standards/]) should be prompted as input to the LLMs. This may further exacerbate the latency of LLMs in moderation tasks. One study shows that once multiple policies have been prompted to LLMs, their classification accuracy decreases and the costs increase (<cit.>, 9-10).
The above analysis suggests that LLM’s diminishing advantage in accuracy compared to expert moderators makes it unsuitable for moderating hard cases. For easy cases, LLM does generate accuracy dividends for the moderation system, but advocacy for its application should be met with caution because of the increased latency and cost, as well as the inherent bias and hallucination in the models. If LLM should replace neither expert review in hard cases nor traditional AI and ordinary human moderators in easy cases, then its accuracy bonus would be much less useful than currently expected.
§.§ LLMs’ real potential in content moderation
If accuracy is not the major field of LLM’s contribution, then in what areas can LLM play a role? Recognizing LLM’s limited use for accuracy does not condemn it to a dim prospect in content moderation. As this part will argue, LLM can make significant contributions to content moderation and platform governance in at least four aspects: screening hard cases from easy cases, providing quality explanations for moderation decisions, assisting human reviewers in getting more contextual information, and facilitating user participation in a more interactive way.
1) Distinguishing easy and hard cases is crucial since they should be assigned different resources and strategies. LLM can help with this task of differentiation. One simple way is judging by disagreement. For example, human moderators and LLM can moderate the same content simultaneously; if the two disagree, the case is likely to be hard (<cit.>, 4). Another approach is to ask LLM to generate confidence scores for cases (<cit.>): if the confidence score is low, the model may not be certain about the result of the case – in this scenario, the case is probably hard. To facilitate LLM’s capacity for this task, we can fine-tune LLM with an annotated dataset containing easy and hard cases. Due to cost considerations, however, platforms may not choose to screen all cases through LLM. Rather, they can use LLM to screen only those cases which have been appealed by users, or those which have already been marked as uncertain by traditional ML models. Using LLM as a second screener can supplement the first reviewer with more contextual knowledge; and instead of replacing all the work of traditional ML models, using the two tools in a collaborative way is also more financially sustainable.
Researchers found that LLM exhibits satisfactory performance in pre-filtering content, that is, removing clearly non-violative content from the moderation queue and escalating clearly violative content for human review (<cit.>, 11). LLM can also self-evaluate its answers by measuring the level of confidence (<cit.>, 3). These findings corroborate LLM’s technical capacity for distinguishing hard cases from easy ones. In practice, LLM’s confidence score can be adjusted according to changing contexts such as the need to address emergencies or crises – another testimony to LLM’s adaptability. LLM can also provide rationales for its screening. This is especially important for the influential headline cases, since explaining why they have been escalated for further review is necessary to address public concerns.
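As a rough illustration of the two screening signals just described, the sketch below routes a case to the hard-case queue when either the model’s self-reported confidence falls below an adjustable threshold or the model disagrees with a first-pass decision. This is a minimal sketch: the llm_moderate function and the fields it returns are hypothetical stand-ins for whatever interface a platform actually uses, and the threshold value is likewise illustrative.

    def triage(content, first_pass_decision, llm_moderate, threshold=0.8):
        """Route a case to the easy or hard queue.

        llm_moderate is assumed to return a dict like
        {"decision": "remove" | "keep", "confidence": float in [0, 1]}
        -- a hypothetical interface, not a real vendor API.
        """
        verdict = llm_moderate(content)
        low_confidence = verdict["confidence"] < threshold
        disagreement = verdict["decision"] != first_pass_decision
        if low_confidence or disagreement:
            return "hard"  # escalate to expert/human review, with a rationale
        return "easy"      # handled by the systematic pipeline

During a crisis, raising the threshold (say, from 0.8 to 0.95) implements the strategy noted earlier of letting more cases be classified as hard.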
2) As a type of generative AI, LLM has one defining feature: the capacity to generate text. Unlike traditional ML models, which moderate content in an opaque way, LLM is iterative by design. Such a feature, supported by its extensive training, makes it capable of providing high-quality explanations for moderation decisions, enhancing the transparency of the process (<cit.>, 7-8). LLM can provide not only reasons for decisions to users but also possible ways of revising their posts to make them conform with the content policies. Such an informative process is dialogic and can even start before the content has been made public (<cit.>). (<cit.>, 5) found that the explanations provided by LLM, though not identical to human reasoning, look quite convincing to users. On some occasions, LLM explanations can be more comprehensive than those written by human annotators (<cit.>, 4).
LLM’s strength in explainability can be utilized not only in its own moderation process, but also to help justify the decisions made by other moderators, such as the traditional AI tools or the human reviewers. When researchers provide the LLM with moderation decisions and relevant content rules, it can generate quality explanations for the decisions (<cit.>). As previously argued, it may not be economically feasible for LLM to directly moderate easy cases; but it can help AI and human moderators generate contextual explanations for their decisions when needed. This could save substantial time for other moderators, enhancing efficiency for the whole system.
3) LLM can also assist other types of moderators by providing more contextual information for their reference. Due to extensive pre-training, LLM’s biggest advantage lies in its “comprehensive grounding knowledge, strong language understanding, and logical reasoning capabilities” (<cit.>, 2). The grounding knowledge, fed into the LLM during pretraining through vast text corpora, enables the model to acquire a basic understanding across different platforms, contexts, and communities. Such a capability for cross-domain generalization distinguishes LLM from traditional ML models, which rely on quality labeled data from specific platforms or contexts (<cit.>). Empowered by such comprehensive and diverse knowledge, LLM can be consulted in the moderation process on issues such as the factual background of a case, the cultural context, and the delicate value considerations involved. Such information can then help the other moderators make more accurate and judicious decisions.
Even though it would be too expensive to totally replace the traditional AI tools with LLM in moderating easy cases, LLM can still play a role in the systematic management of platforms by assisting the traditional methods. For example, LLM can monitor the algorithmic moderation system by offering statistical insights such as toxicity scores (<cit.>) as well as explanations of those metrics. LLM can also help the traditional AI methods by generating datasets for their training. This technique has been called data augmentation (<cit.>). Deploying such a capability of LLM can produce synthetic data, reducing the burden of collecting and labeling data for training the ML models (<cit.>, 5). These are instances where LLM could boost the functioning of other moderators with its informational competence.
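A schematic sketch of the data-augmentation idea just mentioned: an LLM is prompted to produce synthetic labeled examples, which are then added to the training set of a small conventional classifier. The generate function (a wrapper around an LLM), the label names, and the choice of a TF-IDF-plus-logistic-regression classifier are all illustrative assumptions, not a description of any deployed pipeline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def augment_and_train(real_texts, real_labels, generate, n_synthetic=200):
        """Train a lightweight moderation classifier on real plus synthetic data.

        generate(label) is a hypothetical LLM wrapper returning one synthetic
        post exemplifying the given label ('toxic' or 'benign').
        """
        texts, labels = list(real_texts), list(real_labels)
        for _ in range(n_synthetic):
            for label in ("toxic", "benign"):
                texts.append(generate(label))  # synthetic example from the LLM
                labels.append(label)
        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(texts)
        classifier = LogisticRegression(max_iter=1000).fit(X, labels)
        return vectorizer, classifier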
4) Putting transparency into the framework of governance, we can conclude that “[e]xplanation is not just about providing accurate information about how AI works, but fundamentally social and situated” (<cit.>, 102:18). Transparency is far more than the one-way disclosure of information: it is also a communicative process between the platform and the users (<cit.>, 1539). LLM can make significant contributions to improving such interactive design. Through its interactive API, LLM could not only educate users about the rules and norms of the platform, but also hear feedback from users to improve the rulemaking and moderation practices. Users could comment on whether the justification of a decision is persuasive enough and whether some other factors should be considered in making the decision (<cit.>, 3). Such an interactive route enhances users’ control and agency in the process (<cit.>, 9-10). The dialogic feature of LLM makes it suitable for conducting this task.
The conversational capability of LLM can help solicit user participation and feedback on various occasions, such as the revision of content rules and the appeal of moderation decisions. Researchers have devised tools that incorporate user feedback into the LLM explanations, and such feedback can then be used to further improve the models (<cit.>). Empowering users to have their voices heard would greatly enhance the legitimacy of the moderation practice as well as the governance scheme as a whole.
From the above illustration of LLM’s potential in content moderation, two concluding remarks can be drawn. First, identifying the suitable role of LLM in content moderation requires us to place this tool within the whole system of platform governance and compare it with other types of moderators. Each type of moderator has its strengths and limits. Traditional AI methods are quick and cost-effective, but they can hardly grasp contextual nuances or provide meaningful explanations. Outsourced human moderators are reasonably efficient and perform fairly well in understanding contexts, but they “are poorly paid, inadequately trained, and traumatized by exposure to the worst that the internet has to offer” (<cit.>). Expert reviewers are the most expensive and most time-consuming, but they are superb in accuracy and reason-giving – so they are suitable for moderating hard cases, especially those with the most social impact. Comparing LLM’s strengths and weaknesses with those of other moderators is necessary for finding a suitable position for LLM moderation.
Second, in order to enhance the legitimacy of platform governance under the framework, one crucial strategy is that different types of moderators should not only play to their own strengths, but also help and supplement one another. In other words, the four types of moderators should collaborate with and assist each other in the governance scheme. For example, expert moderators could train the ordinary moderators in the moderation practices (<cit.>, 20). Likewise, LLM can provide help to other moderators in various aspects, such as supplementing more contextual information, generating more comprehensive explanations, and offering avenues for soliciting user input. Utilizing LLM for content moderation is not an isolated effort of excavating the new technique’s potential in its own regard, but a holistic enterprise that takes the relational structure of governance into account.
§ TECHNICAL AND LEGAL IMPLICATIONS
Broadening our discourse on LLM moderation from accuracy to legitimacy is the first step. To better facilitate LLM’s contributory role in content moderation and platform governance, endeavors from both the technical and the legal/policy fields are needed. This part sketches some technical and legal issues that may arise from the paradigm shift. Due to limits of space, the discussion below is only introductory. Instead of offering firm conclusions or detailed suggestions, this part aims to point out directions for future work.
On the technical side, two urgent issues await exploration by researchers. One is bias and arbitrariness; the other is latency and cost. Bias and arbitrariness are, to some extent, inherent features of ML models, because the probabilistic and statistical approach they rely upon necessarily contains randomness (<cit.>, 8, 17). Apart from the models themselves, arbitrariness and bias can also be introduced in the pretraining and fine-tuning processes. The datasets for pretraining, though large and diverse, cannot be comprehensive enough to cover all knowledge. Developers must select and curate the datasets. Thus, the datasets may be more influenced by some social viewpoints than others, as well as reflect the biases of the selectors. Besides, biases can be produced during fine-tuning because the process “might overgeneralize alleged patterns and…falsely associates those patterns with positive or negative labels” (<cit.>, 12).
One particular type of bias or arbitrariness is the phenomenon of predictive multiplicity, in which different ML models with similar accuracy performance may produce conflicting individual decisions (<cit.>). This is troublesome because, from the legitimacy perspective, people expect decisions about their rights to be consistent and predictable. However, seemingly innocuous choices about the seed and parameters of a model may affect its performance, generating different outcomes in individual cases (<cit.>, 2). What’s worse, researchers found that arbitrary decisions are not only rampant in LLMs, but also “unequally distributed across different demographic groups” (<cit.>, 3), displaying a significant ideological divide in their answers (<cit.>, 1-2). To address this issue, technical developers could further explore the relationship between model selection and the statistical distribution of outcomes, as well as investigate better ways of mitigating biases and inconsistencies in LLM moderation. One direction is to make better use of prompt engineering to elicit more consistent decisions from the model (<cit.>, 15).
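Predictive multiplicity can be made tangible with a small experiment: train several identically configured models that differ only in their random seed and count how often they disagree on individual items. The sketch below does this with a generic scikit-learn classifier; it is a toy probe of the effect, assuming a vectorized dataset X, y, and is not a measurement of any particular LLM.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def ambiguity_rate(X, y, n_models=10):
        """Fraction of test items on which seed-varied models disagree."""
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        predictions = np.stack([
            RandomForestClassifier(random_state=seed).fit(X_tr, y_tr).predict(X_te)
            for seed in range(n_models)
        ])
        # An item is ambiguous if at least one model disagrees with the first.
        ambiguous = (predictions != predictions[0]).any(axis=0)
        return ambiguous.mean()

Similar aggregate accuracy across the seed-varied models is compatible with a non-trivial ambiguity rate – precisely the arbitrariness, and the unequal distribution of it, that the text warns about.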
Another drawback of the state-of-the-art LLMs is their latency and cost. This not only bars LLM from wide adoption, but also creates disadvantages for small platforms. One way of reducing the costs of LLM moderation is to develop smaller models from the large ones: such a method can reduce computation costs while retaining capabilities similar to those of the large model (<cit.>, 32). To be sure, the drawbacks of latency and cost stem from the current status of the technology, and one may be confident that future progress will overcome them (<cit.>). In such a fast-evolving field, “[w]hat may be a limitation today may be improved tomorrow” (<cit.>, 1944). In any case, lowering the expense and response time of LLM is definitely one of the top priorities for the technical field.
On the legal/policy side, at least three issues need further research. First, legal regulations must strike a balance between the needs of moderation by LLM and moderation of LLM. To mitigate the harmful content generated by LLM, moderation of its output is needed. One effective approach is to conduct curation during LLM’s pretraining process (<cit.>); another is to add safety filters to LLM’s API (<cit.>, 24). The curation and filtering are used to prevent malicious content from being fed into the model. But they would also compromise the model’s capability of conducting moderation tasks, because a lack of exposure to harmful content makes LLM less effective in recognizing and classifying these types of content. That means there is a tension between promoting safety and preserving the integrity of the training data (<cit.>). If laws impose strict requirements upon the health and safety of LLM, then its moderation ability will likely decline. Lawmakers and policymakers must strike a delicate balance between the two competing goals.
Second, even though LLM exhibits impressive performance in explaining decisions, the explanations generated by it are not always reliable. Sometimes, LLM may be uncertain about its reasoning, expressing confusion like a human (<cit.>, 2). And like the previous technical tools, LLM also carries the risk of cloaking errors and biases with techno-objectivity. When LLM provides seemingly convincing but actually misleading explanations for its decisions (<cit.>, 3), such explanations cannot be counted as valid justification. To address this issue, regulators could consider either mandating a right of users to contest LLM explanations or requiring more transparency about the development of the model to facilitate easier identification of false and misleading explanations.
Third, incorporating LLM into the system of platform governance demands corresponding regulatory and public oversight. Remember that LLM is itself a tool of power and a site for control. When it is used for governing online spaces, it becomes more than urgent to limit and oversee such formidable power. LLM contains bias, generates hallucinations, and is vulnerable to manipulation (<cit.>). How can we ensure that the LLM moderator, as a component of platforms’ governance scheme, is accountable to public ends and serves legitimate goals? This is a question that regulators should not wait to tackle. Liability law is one tool, for it can provide strong incentives. The specific rules of LLM liability should take into account the specific application scenarios and their stakes for legitimacy. The aim is to incentivize LLM to be used in a responsible way that enhances, rather than threatens, the legitimacy of platform governance. Another tool is transparency rules. There are various choices that have to be made in LLM moderation: the selection of models, the selection of pretraining datasets, as well as the selection of fine-tuning datasets and the fine-tuning method. Each choice has substantial impact. Disclosing meaningful details and rationales of those choices would facilitate public oversight of LLM moderation.
§ CONCLUSION
Using LLM for content moderation is an exciting field to work on. Generative AI and LLM constitute one of the most revolutionary technical breakthroughs of recent years – and their applications have reshaped many industries. Realizing LLM’s full potential in moderation tasks depends upon locating this technology within the governance structure of online platforms and communities. The currently dominant discourse on LLM moderation focuses on accuracy – how to deploy the tool to increase the ratio of correct decisions. However, this article argues that accuracy is not the best area of contribution by LLM. LLM’s accuracy advantage over traditional AI and ordinary human moderators is largely offset by its weaknesses in cost and latency. Rather, LLM can make meaningful contributions in other aspects, such as distinguishing hard cases from easy ones and providing interactive channels for user participation. Moving from accuracy to legitimacy, we can get a clearer picture of LLM's role in moderation and governance.
Content moderation is an important application of generative AI and LLM. It is a recent instance of using technological advances to improve our social lives. The critical analysis offered by this article affirms the necessity of combining technical explorations with normative inquiries from the socio-legal perspective. If the objective of LLM moderation is to assist online platforms to better govern their communicative space, then the research effort should not be fixated upon accuracy alone. The literature from law and the social sciences, such as the studies on platform governance, the division between easy and hard cases, and the conceptualization of legitimacy, supplies valuable insights to the research field of LLM moderation. Such an interdisciplinary approach is indispensable for future studies of technologies and their impact on human society.
|
http://arxiv.org/abs/2409.02731v1 | 20240904140444 | Ventilated noise-insulating metamaterials inspired by sonic black holes | [
"Farid Bikmukhametov",
"Lana Glazko",
"Yaroslav Muravev",
"Dmitrii Pozdeev",
"Evgeni Vasiliev",
"Sergey Krasikov",
"Mariia Krasikova"
] | physics.app-ph | [
"physics.app-ph"
] |
These authors contributed equally.
School of Physics and Engineering, ITMO University, Saint Petersburg 197101, Russia
These authors contributed equally.
School of Physics and Engineering, ITMO University, Saint Petersburg 197101, Russia
These authors contributed equally.
School of Physics and Engineering, ITMO University, Saint Petersburg 197101, Russia
These authors contributed equally.
School of Physics and Engineering, ITMO University, Saint Petersburg 197101, Russia
These authors contributed equally.
School of Physics and Engineering, ITMO University, Saint Petersburg 197101, Russia
[email protected]
School of Physics and Engineering, ITMO University, Saint Petersburg 197101, Russia
School of Physics and Engineering, ITMO University, Saint Petersburg 197101, Russia
§ ABSTRACT
Acoustic black holes represent a special class of metastructures allowing efficient absorption based on the slow sound principle. The decrease of the wave speed is associated with the spatial variation of acoustic impedance, while the absorption properties are linked to thermoviscous losses induced by the local resonances of the structure.
While most of the developments in the field of sonic black holes are dedicated to one-dimensional structures, the current study is concerned with their two-dimensional counterparts. It is shown that the change of dimensionality results in a change of the noise insulation mechanism, which relies on the opening of band-gaps rather than thermoviscous losses. The formation of band-gaps is associated with the strong coupling between the resonators constituting the considered structures. It is shown numerically and experimentally that the structure is characterized by broad stop-bands in transmission spectra, while air flow propagation is still allowed. In particular, a realistic application scenario is considered, in which the acoustic noise and the air flow are generated by a fan embedded into a ventilation duct.
The obtained results pave the way towards the development of next-level ventilated metamaterials for efficient noise control.
Ventilated noise-insulating metamaterials inspired by sonic black holes
Mariia Krasikova
September 9, 2024
=======================================================================
§ INTRODUCTION
The development of low-frequency passive sound insulation systems still represents an unsolved challenge. Recent studies in the field of acoustic metamaterials <cit.> open a route towards the development of low-weight sub-wavelength structures allowing not only efficient noise suppression <cit.>, but also air ventilation <cit.>. A special class of metastructures is represented by the so-called acoustic black holes (ABH), in which the phase velocity is reduced to zero due to the spatial variation of acoustic impedance <cit.>. In this sense, the incoming wave is trapped inside the structure, in analogy with the corresponding cosmological objects.
First implemented for the suppression of vibrations <cit.>, the concept of ABH was successfully translated to airborne sound, where the structures typically represent a set of plates characterized by gradually changing sizes and distances between them <cit.>.
The absorption mechanism of ABH in this case relies on resonant states inducing an increase of thermoviscous losses <cit.>. However, in realistic systems the critical coupling condition is rather difficult to fulfill, and additional porous inserts might be required <cit.>.
In order to distinguish between the two different types of structures, ABHs for sound-carrying fluids are usually referred to as sonic black holes (SBH), while the term vibrational black holes (VBH) is utilized for bending waves <cit.>.
Typically, SBH are utilized as absorbing termination ends of circular waveguides, utilizing the ring-based geometry developed in Ref. <cit.>, or ventilated duct silencers <cit.>. More complicated geometries may also include arrays of discs <cit.> or combination of rings with other structures, such as lattices <cit.>.
The circular geometry implies that the structure effectively is one-dimensional. The increase of the dimensionality might be associated with the periodicity of the structure in the direction perpindicular to the direction of the incident wave propagation. Wile periodic arranges of VBH and SBH were suggested for suppression of vibrations <cit.>, two-dimensional versions of SBH are still under-represented (with the rare exceptions like in Ref. <cit.>).
However, it might be speculated that the change of dimensionality might might allow to improve performance of the noise-insulating systems via the use of band-gaps.
Hence, the work is dedicated to the development of ventilated meta-atoms for efficient noise mitigation in rectangular ducts. Considered meta-atoms represent a set of rectangular plates forming a set of strongly coupled resonators. Noise-suppression properties of the meta-atoms are described in terms of band-structures of the associated infinitely periodic 2D systems. It is shown that wide stop-bands in the transmission spectra originate from the coupling between the resonances formed by the plates. As the consequence, the properties of the considered meta-atoms are not directly defined by the geometric profile of the structure. The obtained experimental results indicate that the broad stop-band covers nearly the whole range from 1000 to 2850 Hz. At the same time, the structure is ventilated and the meta-atom reduces the air flow speed by only 25%. These findings might be useful for the further development of metastructures for passive noise insulation in ventilated ducts.
§ MATERIALS AND METHODS
§.§ System description
The considered meta-atoms consist of rectangular plates having different widths and arranged at unequal distances from each other (see Fig. <ref>). Similar structures, frequently investigated as sonic black holes, are usually characterized by axial symmetry, meaning that they are effectively one-dimensional, contrary to the considered case. When the meta-atom is placed inside a rectangular duct with sound-hard walls, it can be considered as an effectively two-dimensional structure, periodic along the directions perpendicular to the direction of the incident wave propagation. The approximation is valid as long as the field distribution is mirror-symmetric with respect to the direction of the wave propagation, i.e., the x-axis. Similarly, the system is effectively two-dimensional as long as the field distribution along the z-axis is uniform. Mirror symmetry also implies that instead of the whole structure, only its half can be utilized without loss of generality. The verification of this statement, as well as a detailed description of the design procedure, is given in the Supplementary Information.
§.§ Numerical Calculations
Numerical calculations are performed in COMSOL Multiphysics. The transmission-spectra calculations presented in the main text are obtained using the “thermoviscous acoustics, frequency domain” physics. The incident wave is implemented as a background pressure field with the amplitude p_0 = 1 Pa. Both ends of the waveguide are supplemented by perfectly matched layers, imitating the absorbing termination ends of the experimental setup. All walls of the resonators are considered to be sound hard. Still, viscous boundary layers are taken into account, with the thickness of the boundary layers taken to be
d_visc = d_visc, 0√(f_0/f),
where d_visc, 0 = 0.22 mm is the boundary layer thickness at the frequency f_0 = 100 Hz. Correspondingly, the mesh is supplemented by the “boundary layers” feature.
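For reference, the frequency scaling of the boundary-layer thickness used in the mesh setup can be reproduced in a few lines of Python. This is an illustrative sketch using the constants quoted above, not part of the COMSOL workflow itself:

```python
import numpy as np

def d_visc(f, d0=0.22e-3, f0=100.0):
    """Viscous boundary-layer thickness d_visc = d0 * sqrt(f0 / f).

    d0 = 0.22 mm is the thickness at the reference frequency f0 = 100 Hz,
    as quoted above; f may be a scalar or an array of frequencies in Hz.
    """
    f = np.asarray(f, dtype=float)
    return d0 * np.sqrt(f0 / f)

# Example: layer thickness across the studied band 100-3000 Hz
freqs = np.array([100.0, 500.0, 1000.0, 2000.0, 3000.0])
print(d_visc(freqs))  # thickness in meters, decreasing with frequency
```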
Transmission coefficients given in decibels are obtained as
T = 20 log_10 P_t/P_b,
where P_t and P_b are the amplitudes of the total and background pressure fields, respectively, calculated via the integration of the pressure over the area behind the structure [see Fig. <ref>(a)]:
P_t,b = 1/A∫_A |p_t,b|^2 dA.
Note that transmission spectra are calculated for the halved unit cell, contrary to the case of the eigenmodes analysis.
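A minimal post-processing sketch of the two expressions above is given below, assuming the complex total and background pressure fields have been exported from the solver on sample points with known area elements dA (the function and variable names are ours, not COMSOL's):

```python
import numpy as np

def averaged_amplitude(p, dA):
    """Discrete version of P = (1/A) * integral(|p|^2 dA) over the area."""
    return np.sum(np.abs(p) ** 2 * dA) / np.sum(dA)

def transmission_dB(p_t, p_b, dA):
    """T = 20 log10(P_t / P_b), following the definitions above literally."""
    P_t = averaged_amplitude(p_t, dA)
    P_b = averaged_amplitude(p_b, dA)
    return 20.0 * np.log10(P_t / P_b)
```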
§.§ Optimization
The optimization procedure is based on a simple genetic algorithm, similar to the one implemented in Ref. <cit.>. In particular, the goal of the algorithm is to find geometric parameters of the structure for which the value of the cost function 𝒞 is minimized. The considered cost function is a linear combination of the transmission coefficient averaged over the whole spectrum, ⟨ T ⟩, the transmission coefficient averaged over a specified part of the spectrum, ⟨ T ⟩_[f_1,f_2], and its standard deviation σ_[f_1,f_2]:
𝒞 = a_1 ⟨ T ⟩ + a_2 ( ⟨ T ⟩_[f_1,f_2] + σ_[f_1,f_2]).
All calculations are performed for the spectral range 100 - 3000 Hz, while ⟨ T ⟩_[f_1,f_2] is defined within the target range 300 - 2000 Hz. Such a combination of parameters ensures that the stop-band occurs in the desired range and that the transmission spectrum in this case is close to flat. At the same time, the consideration of ⟨ T ⟩ ensures that there are no undesired resonances outside of the target range, which might be crucial for practical applications. The procedure is performed for different values of the coefficients a_1,2. Namely, the structure discussed in the main text is obtained with a_1 = 0.1 and a_2 = 0.9.
For the optimization procedure, the size of the structure is limited to 10 plates. Each plate is characterized by a width, which can take a value between 0 and 55 mm (the width of the waveguide is fixed at 60 mm). Similarly, the thickness of each plate and the distance between the plates can take values from 1 to 10 mm.
The population size is considered to be 20, such that the initial population is generated randomly. The selection procedure is implemented using the roulette wheel approach with the selection pressure 0.1. For the crossover operation, single point, double point, or uniform crossover operators are utilized, such that the particular operator can be selected with the probability 0.2, 0.2, and 0.6, respectively. The mutation is based on the normal distribution with the standard deviation 0.1, and the mutation rate is fixed at 0.1.
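A schematic Python sketch of the cost function and the mutation and selection operators is given below. The mapping of a genome to a transmission spectrum requires the FEM solver and is only stubbed out here, and the exact roulette-wheel weighting under the quoted selection pressure is our assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Genome: (width, thickness, gap) per plate, 10 plates, bounds as in the text (mm).
BOUNDS = np.array([[0.0, 55.0], [1.0, 10.0], [1.0, 10.0]] * 10)

def cost(T, freqs, a1=0.1, a2=0.9, band=(300.0, 2000.0)):
    """C = a1*<T> + a2*(<T>_[f1,f2] + sigma_[f1,f2]) over the target band."""
    m = (freqs >= band[0]) & (freqs <= band[1])
    return a1 * T.mean() + a2 * (T[m].mean() + T[m].std())

def roulette_pick(costs, pressure=0.1):
    """Roulette-wheel selection favoring low-cost individuals (our weighting)."""
    w = np.exp(-pressure * (costs - costs.min()))
    return rng.choice(len(costs), p=w / w.sum())

def mutate(genome, rate=0.1, std=0.1):
    """Normally distributed mutation applied per gene with the quoted rate."""
    hit = rng.random(genome.size) < rate
    span = BOUNDS[:, 1] - BOUNDS[:, 0]
    out = genome + hit * rng.normal(0.0, std, genome.size) * span
    return np.clip(out, BOUNDS[:, 0], BOUNDS[:, 1])

# transmission_spectrum(genome) -> (T, freqs) would call the FEM solver (not shown).
```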
§.§ Measurements
Experimental measurements are performed in a rectangular waveguide with a height of 60 mm, an adjustable width, and a length of 1350 mm. The walls and the bottom of the waveguide are made of 15 mm thick aluminium, and the lid is made of plexiglass with a thickness of 6 mm. Sound waves are generated by a loudspeaker (Visaton BF 45/4) located at one end of the waveguide, and the recording of the transmitted field is done with a shotgun microphone (RODE NTG4) placed at the other end of the waveguide. To avoid reflections at the end of the waveguide, both the loudspeaker and the microphone are embedded into porous inserts made of melamine foam. The length of the inserts is 150 mm. The schematics of the setup and its photo are shown in Figs. <ref>(b) and <ref>(d).
Generation and recording of the signals are controlled via custom software based on the sounddevice python module <cit.>. Both the loudspeaker and the microphone are connected to a USB audio interface (Roland Rubix22), with the loudspeaker connected via an amplifier based on the Yamaha YDA138-E microchip. The generated signals are chirps with a duration of 10 s and a sampling frequency of 44100 Hz. For each measurement the signal was generated and recorded 6 times and then averaged in the frequency domain. The transmission spectrum in this case is defined as
T = 20 log_10 p_tr/p_ref,
where p_tr is the pressure amplitude of the transmitted wave and p_ref is the reference pressure amplitude of the wave in the absence of the structure. The same procedure is performed for the measurements with noise generated by a fan, i.e., the loudspeaker is substituted by a hair dryer (STINGRay ST-HD801A). The measured signal-to-noise ratios for both the loudspeaker and the hair dryer are presented in the Supplementary Information.
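The custom measurement software is not reproduced here, but its core loop can be sketched with the sounddevice module mentioned above. Device selection, calibration, and synchronization details are omitted, and the function names are ours:

```python
import numpy as np
import sounddevice as sd
from scipy.signal import chirp

FS, DUR, REPS = 44100, 10.0, 6
t = np.arange(int(FS * DUR)) / FS
excitation = chirp(t, f0=100.0, t1=DUR, f1=3000.0).astype(np.float32)

def averaged_spectrum(signal, reps=REPS):
    """Play the chirp `reps` times, record, and average |FFT| over repetitions."""
    spectra = []
    for _ in range(reps):
        rec = sd.playrec(signal, samplerate=FS, channels=1)
        sd.wait()  # block until playback and recording finish
        spectra.append(np.abs(np.fft.rfft(rec[:, 0])))
    return np.mean(spectra, axis=0)

# T = 20 log10(p_tr / p_ref): one run without the sample (reference), one with it.
# p_ref = averaged_spectrum(excitation)   # empty waveguide
# p_tr  = averaged_spectrum(excitation)   # structure in place
# T_dB  = 20 * np.log10(p_tr / p_ref)
```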
Ventilation measurements are performed in the same waveguide. The airflow is generated by the hair dryer, the same as for the noise measurements [see Fig. <ref>(c)]. The speed of the flow is measured with the anemometer (ADA AeroTemp 30) located at the distance 55 cm from the structure, such that the distance between the hairdryer and the structure is also about 55 cm.
The samples are manufactured by 3D printing using a PLA filament. To avoid gaps between the lid and the structure, the tops of the plates were supplemented by a thin layer of porous material [see Fig. <ref>(e)].
§ RESULTS
§.§ “Spruce” Meta-Atom
The considerations start from the structure in which parameters of the plates change gradually, such that the coordinates of the plates along the x-axis are defined as
x_n = 5(n+1) + n(n-1)/2,
where n is the number of a plate, and the corresponding semi-widths are
w_n = 115 x_n/255.
The thickness of all plates is fixed at d = 2 mm. In the subsequent text, such a meta-atom will be referred to as a “spruce”, due to its resemblance to schematic pictures of the corresponding tree.
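The geometry defined by the two expressions above is easy to tabulate. The sketch below assumes ten plates for illustration (the maximum used in the optimization); the actual plate count of the sample is set by the unit-cell width a_x:

```python
import numpy as np

def spruce_geometry(n_plates=10):
    """Plate positions x_n and semi-widths w_n (in mm) of the 'spruce' meta-atom.

    x_n = 5(n+1) + n(n-1)/2 and w_n = 115*x_n/255 for n = 1..n_plates;
    the plate thickness is fixed at d = 2 mm.
    """
    n = np.arange(1, n_plates + 1)
    x = 5 * (n + 1) + n * (n - 1) / 2
    w = 115 * x / 255
    return x, w

x, w = spruce_geometry()
print(np.round(x, 1))  # plate coordinates along the x-axis
print(np.round(w, 1))  # corresponding semi-widths
```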
One-dimensional structures with similar parameters were considered in Ref. <cit.>, where it was shown that attenuation inside the structure is associated with the resonances of the cavities formed by the plates. Since in the considered two-dimensional case the structure is periodic, it is reasonable to start from the consideration of its band structure. The corresponding unit cell [see Figs. <ref>(a) and <ref>(b)] is characterized by the width a_x and the height a_y, which are taken to be 250 and 240 mm, respectively. According to the band structure shown in Fig. <ref>(d), the system is characterized by a set of band-gaps, which are practically merged in the regions 750 – 1450 Hz and 2200 – 2800 Hz, as the modes within these ranges are flat. The field distributions shown in Fig. <ref>(f) indicate that these modes correspond to the resonances in the cavities formed by the plates. Practically, the meta-atom can be considered as a set of resonators strongly coupled to each other. This coupling is the origin of the flat-band formation and the opening of band-gaps, as discussed in the Supplementary Material.
It should be expected that when the equivalent finite-size structure is considered, the transmission spectrum is characterized by stop-bands corresponding to the band-gaps of the infinite structure. Indeed, as shown in Fig. <ref>(e), the experimentally measured transmission drops below -40 dB in the corresponding regions 900 – 1450 Hz and 2450 – 2850 Hz. At the same time, not all band-gaps are manifested as stop-bands. The reason for that is the small thickness of the structure, which is just a single unit cell. When the thickness increases, more stop-bands can be observed (see Supplementary Information). In particular, the experimentally measured transmission spectrum of a system having a thickness of two meta-atoms [see Fig. <ref>(e)] is characterized by a small dip within the range 300 – 450 Hz, where a band-gap is located. In addition, the two other stop-bands are somewhat wider and deeper than for structures consisting of a single meta-atom. It should also be noted that the experimental results do not quite match the numerical ones, which is associated with the fact that in the real system the finite height of the samples and the presence of the lid and bottom of the waveguide result in additional thermoviscous losses that are not taken into account in the calculations. In addition, the geometric parameters of the samples might differ from the ones used in the numerical calculations due to the limited accuracy of the utilized 3D printer. Still, the qualitative behavior of the measured and calculated spectra is the same. In addition, an inverted structure is considered, in which the meta-atom is flipped inside the waveguide, i.e., the largest plate is located closer to the loudspeaker and the smallest one closer to the microphone. The resulting transmission spectrum [see Fig. <ref>(e)] is practically the same as for the initial orientation of the meta-atom, additionally indicating that the properties of the structure are determined by the resonances rather than by a gradual change of the admittance. Hence, the structure works as a reflector, not an absorber.
Still, an effect similar to the acoustic black hole can be observed for the case when the width of the plates is equalized [see Fig. <ref>(a)]. In this case the band structure is characterized by two band-gaps, but the most interesting observation is the fact that the tilt of the low-frequency eigenmodes decreases with increasing frequency until the modes become flat [see Fig. <ref>(b)]. These flat bands correspond to the lower boundary of the first band-gap. The same situation repeats for the modes between the first and the second band-gaps. As the inclination of the modes indicates the group velocity, defined as v_g = ∂ω/∂ k, it can be stated that the group velocity decreases with increasing frequency until reaching zero, followed by the opening of a band-gap. This is a direct analogy of the conventional ABH effect, in which the phase velocity decreases with the propagation coordinate. In this sense, the considered structure can be regarded as a reciprocal ABH. The “spruce” meta-atom in this case is practically a deformed version of such a structure allowing the formation of a larger number of band-gaps, which is the aim of the work. While the further exploration of the effect lies beyond the scope of the work, some additional details are provided in the Supplementary Information.
§.§ Optimized meta-atom
While the “spruce” meta-atom demonstrates a noticeable reduction of the transmission coefficient in the expected spectral regions, it might be argued that the geometric parameters of the structure are not optimized and the performance might be improved. The particular aim is to merge the stop-bands and simultaneously decrease the size of the structure. Following the analysis of coupled resonators and eigenmodes of periodic systems (see Supplementary Information), it might be expected that the optimized structure should consist of several nearly equivalent cavities characterized by a large width, as this allows one to increase the width of the band-gaps and shift them towards low frequencies.
Using the optimization procedure described in Materials and Methods, the meta-atom shown in Fig. <ref>(a) is obtained. It consists of just 5 plates forming four cavities inside a unit cell and two additional cavities formed by the plates from the neighboring unit cells (along the x-axis). The geometric profile in this case is flat, meaning that all plates have the same width, contrary to the “spruce” meta-atom and typical geometries of SBHs. There is also no gradual change of the cavity sizes, as it is not an important requirement for the adjustment of the coupling between the resonators. The band structure of the infinitely periodic system consisting of such meta-atoms is characterized by broad band-gaps covering almost the whole range from 600 to 2800 Hz [see Fig. <ref>(c)]. Again, the modes correspond to the resonances of the cavities [see Fig. <ref>(e)], which can be characterized by both symmetric and anti-symmetric distributions. It should also be mentioned that the resonators interact with their neighbors along the y-axis as well, which also affects the results (see Supplementary Information). Notably, such an effect is unavailable in one-dimensional SBHs, which distinguishes these two types of systems.
The corresponding transmission spectrum of the finite-size structure is characterized by a broad stop-band spectrally coinciding with the band-gaps of the infinitely periodic structure [see Figs. <ref>(b) and <ref>(d)]. Again, there are qualitative differences between the numerical and experimental results caused by the limitations of the experimental setup, but the transmission within the stop-band is still mostly below -40 dB, while the stop-band itself is much larger than for the case of the “spruce” meta-atom.
§.§ Ventilation
The geometry of the structure implies the presence of a gap between the meta-atom and one of the waveguide walls. Therefore, it might be expected that an airflow propagating through the waveguide will not be fully blocked by the meta-atom. Of particular interest is the case of noise control in ventilation ducts, in which both acoustic noise and air flow propagate simultaneously. As discussed in Materials and Methods, to test the structure under such conditions a fan is integrated into one of the waveguide termination ends. Obviously, both transmission and ventilation depend on the size of the gap between the meta-atom and the wall of the waveguide [see Figs. <ref>(a) and <ref>(c)]. It might be expected that an increase of the gap results in a corresponding increase of the air flow speed behind the structure and, at the same time, in an increase of the transmission coefficient.
Indeed, as shown in Fig. <ref>(b), the averaged transmission coefficient increases when the gap becomes larger. For the case of the “spruce” meta-atom, the averaged transmission coefficient changes almost linearly, such that the difference between the lowest and the largest value is just about 5 dB. For the optimized meta-atom the situation is a bit more complicated, and the dependence of the transmission coefficient on the gap size resembles a logarithmic function. The change in the values is rather drastic, about 15 dB. Notably, the optimized structure demonstrates much better noise-insulating performance at low values of Δ w than the “spruce”, which was the aim of the optimization procedure. When the gap size increases beyond 10 mm, the “spruce” structure demonstrates better performance, indicating that the optimization procedure should in fact be performed separately for each value of Δ w.
The ventilation properties of both meta-atoms are quite similar [see Fig. <ref>(d)]. As expected, the ratio of the air flow speed U to the reference flow speed U_0 (i.e., the speed without the structure) increases with the gap between the structure and the waveguide. The performance of the “spruce” meta-atom is slightly better than that of the optimized one, which might be associated with its geometry implying a gradual decrease of the effective channel width. In any case, ventilation above 50% is achieved when the value of Δ w exceeds 15 mm. For Δ w = 25 mm the ratio U/U_0 is nearly 0.75, meaning that only 25% of the air flow is blocked. The transmission spectrum in this case is still characterized by stop-bands [see Fig. <ref>(d)] occurring within the range 1000–2800 Hz. Contrary to the previous measurements with the loudspeaker, the transmission spectrum for the case of the hair dryer is noisy, but the stop-bands are still clearly distinguishable. Therefore, a compromise between noise-insulation and ventilation properties can be found for the realistic application scenario.
§ CONCLUSION
To conclude, the work is dedicated to the investigation of 2D structures having geometries similar to those of 1D sonic black holes. It is shown that the change of dimensionality practically ruins the black-hole effect, and the decrease in transmission is associated with the opening of band-gaps, such that the structure works as a reflector rather than an absorber. The formation of band-gaps is associated with the coupling between the resonators constituting the structure. As a consequence, the geometric profile in the 2D case is not the key factor, and only the size of the cavities matters. Still, an analog of a sonic black hole effect is numerically observed, in which the group velocity gradually decreases with frequency until reaching zero, though this effect is not utilized in the work. As the main result, it is shown that for the realistic scenario of a ventilation duct, in which both the airflow and acoustic noise propagate simultaneously, it is possible to find a compromise between ventilation properties and noise insulation.
In particular, it is shown that the reduction of the air flow speed might be about 25% while the broad stop-band is observed within the spectral range 1000–2850 Hz.
These results might be useful for the further development of ventilated acoustic metamaterials and systems for passive noise insulation, especially in ventilation ducts.
§ AUTHOR CONTRIBUTIONS
FB, LG, YM, DP, and EV contributed equally to this work.
LG and SK provided the numerical calculations. FB, LG, YM, DP, EV and MK performed the experimental measurements.
SK proposed the idea and supervised the project. MK acquired the funding and co-supervised the project.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§ ACKNOWLEDGEMENTS
The authors thank Aleksandra Pavliuk for the help with the experimental measurements and Mikhail Kuzmin for fabrication of the samples. The authors also thank Anton Melnikov for fruitful discussions and useful comments.
This work is supported by the Russian Science Foundation (project 24-21-00275).
Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 76100, Israel
Department of Physics, University of Washington, Seattle, WA 98195-1560, USA
§ ABSTRACT
We propose an efficient protocol for preparing low energy states of arbitrary fermionic Hamiltonians on a noisy bosonic quantum simulator. This procedure involves performing adiabatic cooling by coupling the target system with a simulated bath. The bath is periodically monitored in order to extract entropy from the system. By fermionizing the simulated target system and the bath together, we allow individual fermionic excitations of the system to coherently hop to the bath sites. In this way, we achieve a cooling rate linearly proportional to the density of these excitations, despite the fact that they are non-local in terms of the bosonic degrees of freedom of the hardware. In particular, we show that certain topological phases, such as the chiral (non-Abelian) phase of the Kitaev honeycomb model can be prepared efficiently using our protocol. We find that our protocol performs favorably in the presence of noise, making it suitable for execution on near-term quantum devices.
Efficiently preparing chiral states via fermionic cooling on bosonic quantum hardware
Gilad Kishony, Mark S. Rudner, and Erez Berg
=====================================================================================
§ INTRODUCTION
Simulating quantum many-body systems, both in and out of equilibrium, is one of the most promising applications of near-term quantum computers <cit.>. In such tasks, quantum computers offer a vast advantage over their classical counterparts, by virtue of their ability to store the many-body wave functions and perform unitary gates
that implement physical time evolution according to arbitrary Hamiltonians, at a polynomial cost in the system size. Such simulations may have practical utility in many fields, such as materials science, high-energy physics, and quantum chemistry. Preparation of a low energy state of a many-body Hamiltonian is an important subroutine for these simulations. Doing this efficiently for arbitrary Hamiltonians is a highly non-trivial task, and is one of the major challenges in the field.
Of particular interest is the case where the ground state of the Hamiltonian is topologically non-trivial. Predicting the occurrence of topological states, e.g., in quantum spin systems or interacting electrons
in flat Chern bands, is notoriously difficult. In addition, such topological states are particularly challenging to prepare on quantum computers using conventional methods since, by definition, they cannot be connected to a product state by a finite-depth spatially-local unitary circuit <cit.>. These unitary circuit algorithms include variational methods <cit.>, adiabatic processes <cit.>, and effective imaginary time evolution <cit.>. In addition, protocols for cooling the system by coupling it to a simulated thermal bath have been proposed <cit.>. All these methods are expected to
suffer a parametric reduction in performance
when preparing topological states, requiring
the circuit depth to increase with the system size to achieve a given accuracy.
A large class
of topologically non-trivial states can be prepared efficiently using dynamic circuits with
measurements and classical feedback <cit.>. The principle is illustrated by
the example of the toric code model <cit.>: the Hamiltonian is “frustration–free,”
i.e., it consists of a sum of commuting terms that can be measured
simultaneously. After the measurement, the system can be brought into
its ground state by applying a unitary transformation designed to
remove the excitations that were detected by the measurement. This
method is not applicable for generic Hamiltonians away from the frustration-free
limit.
Moreover, certain topological
states, such as chiral states,
do not have a frustration-free description.
In this study, we tackle the problem of preparing chiral topological states, focusing on the well-known example of the chiral gapped phase of
the Kitaev spin liquid (KSL) model on the honeycomb lattice, which hosts non-Abelian excitations <cit.>.
The KSL model can be mapped into a system of emergent Majorana fermions coupled to a static ℤ_2 gauge field.
The preparation scheme consists of two steps.
In the first step, the system is prepared in the flux-free sector by measuring the flux operators through all plaquettes, and applying unitary operators that correct the fluxes that are detected <cit.>.
The second step is to prepare the fermionic ground state, whose Chern number is non-zero. The key idea is to introduce auxiliary “bath” degrees of freedom that simulate an effective fermionic bath, coupled to the system's emergent fermions.
Both species of fermions are coupled to the same ℤ_2 gauge field, allowing single-fermion hopping between them.
We provide an explicit encoding of the system and bath fermions into the physical bosonic qubits, generalizing the one-dimensional construction of Ref. <cit.> to two dimensions.
The bath fermions are then used to extract energy and entropy from the system, by performing simulated cyclic adiabatic cooling <cit.> followed by measurements of the auxiliary qubits. The steps of the protocol are illustrated in Fig. <ref>.
Our protocol prepares the ground state of the chiral phase parametrically more efficiently than other methods, such as adiabatic preparation <cit.> or `naive' adiabatic cooling using a simulated bosonic bath <cit.>. These methods require a quantum circuit whose depth grows at least polynomially with the system size. In contrast, within our method, the energy density decreases exponentially with the number of cycles performed, and thus the total depth required to achieve a given accuracy in the total ground state energy grows only logarithmically with the system size. In the presence of decoherence, our protocol reaches a steady state whose energy density is proportional to the noise rate, which is parametrically lower than bosonic cooling protocols (where the energy density of the steady state scales as the square root of the noise rate <cit.>).
The protocol presented here can be used to efficiently prepare the ground state of a generic (interacting) fermionic model in 2D. This is done by artificially introducing a ℤ_2 gauge field similar to the physical gauge field of the KSL model in order to facilitate the fermionization scheme for the system and the bath. The presence of an effective fermionic bath allows to remove single fermion excitations, while keeping the coupling between the system and auxiliary qubits spatially local.
This paper is organized as follows. In Sec. <ref> we introduce our algorithm for preparing the ground state of the KSL model.
In Sec. <ref> we analyze the performance of the protocol both with and without noise via numerical simulations.
Finally, in Sec. <ref> we
generalize our protocol for arbitrary interacting fermionic systems.
§ PREPARING A CHIRAL SPIN LIQUID
§.§ Model
In this section we explain how
to prepare the ground state of the KSL model in the phase where the emergent fermions carry a non-trivial Chern number,
as efficiently as in a topologically trivial phase. The same principle can be applied to arbitrary interacting fermionic Hamiltonians, as we discuss below.
The KSL model consists of spin-1/2 degrees of freedom on a honeycomb lattice with strongly anisotropic exchange interactions <cit.>. The Hamiltonian is written as:
H_KSL= -∑_α∈{x,y,z}𝒥_α∑_⟨𝐈,𝐉⟩∈α-bondsσ_𝐈^ασ_𝐉^α
-κ∑_⟨⟨𝐉,𝐊,𝐋⟩⟩σ_𝐉^xσ_𝐊^yσ_𝐋^z.
Here, 𝐉 = (i,j,s) denotes the sites of the honeycomb lattice, where i ∈{1,…,N_x} and j ∈{1,…,N_y} label the unit cell and s∈{A,B} labels the two sublattices,
and σ_𝐉^α denotes the α=x,y,z Pauli matrix at site 𝐉.
The bonds of the lattice are partitioned into three sets – x, y, and z bonds –according to their geometric orientations (see Fig. <ref>).
We denote the corresponding Kitaev anisotropic exchange couplings by 𝒥_x, 𝒥_y, and 𝒥_z, respectively.
The κ term is a three-spin interaction acting on three neighboring sites denoted by ⟨⟨𝐉,𝐊,𝐋⟩⟩,
such that both 𝐉 and 𝐋 are nearest neighbors of site 𝐊.
The κ term breaks time reversal symmetry, and opens a gap in the bulk
that drives the system into the chiral non-Abelian phase <cit.>.
In order to facilitate an efficient protocol to prepare the ground state of (<ref>), we introduce a modified KSL Hamiltonian, H_KSL, that includes both a system (σ) and an auxiliary (τ) spin at every site of the honeycomb lattice. The auxiliary spins are used to cool the system down to its ground state.
The Hamiltonian is illustrated in Fig. <ref>, and is composed of a sum of two terms:
H(t)=H_KSL+H_ c(t),
where H_KSL is the modified KSL Hamiltonian:
H_KSL= -∑_α∈{x,y,z}𝒥_α∑_⟨𝐈,𝐉⟩∈α-bondsσ_𝐈^ασ_𝐉^ατ_𝐈^zτ_𝐉^z
-κ∑_⟨⟨𝐉,𝐊,𝐋⟩⟩σ_𝐉^xτ_𝐉^zσ_𝐊^yτ_𝐊^zσ_𝐋^z.
The τ^z_𝐉 operators all commute with H_KSL. Clearly, the modified Hamiltonian is identical to H_KSL in Eq. (<ref>) in the sector τ^z_𝐉=+1. In fact, the two Hamiltonians are related by a unitary transformation for any state of the τ spins with even parity, ∏_𝐉τ^z_𝐉=1.
To see this,
we note that
all of the unitary operators
𝒰_𝐈,𝐉 = τ^x_𝐈σ^α_𝐈σ^α_𝐉τ^x_𝐉, for ⟨𝐈,𝐉⟩∈α-bonds, commute with H_KSL and anticommute individually with τ^z_𝐈 and τ^z_𝐉. The transformation from a sector with a given {τ^z_𝐉} of even parity to the sector {τ^z_𝐉=1} can be constructed by annihilating pairs of τ^z_𝐊,τ^z_𝐋=-1 using a product of the unitaries 𝒰_𝐈,𝐉 along a path connecting the points 𝐊, 𝐋. Thus, preparing the ground state of H_KSL in any known even-parity sector of {τ^z_𝐉} is equivalent to preparing the ground state of the original KSL Hamiltonian, Eq. (<ref>).
The “control” Hamiltonian H_c(t), which is used to cool into the ground state of H_KSL, is chosen as a time-dependent effective Zeeman field acting on τ_𝐉 with x̂ and ẑ components g_𝐉(t) and B_𝐉(t), respectively,
H_c(t)=-∑_𝐉B_𝐉(t)τ_𝐉^z-∑_𝐉g_𝐉(t)τ_𝐉^x.
The time dependence of B_𝐉(t) and g_𝐉(t) and the protocol used for cooling are described below.
§.§ Fermionization
It is useful to fermionize the spins σ and τ by introducing a set of six Majorana operators {c_𝐉^x,c_𝐉^y,c_𝐉^z,b_𝐉^x,b_𝐉^y,b_𝐉^z} per site, subject to the constraint
c_𝐉^xc_𝐉^yc_𝐉^zb_𝐉^xb_𝐉^yb_𝐉^z=i.
The spin operators are related to the fermionic ones by:
σ_𝐉^α =-i/2∑_β,γε^αβγb_𝐉^βb_𝐉^γ, τ_𝐉^α =-i/2∑_β,γε^αβγc_𝐉^βc_𝐉^γ,
where α,β,γ∈{x,y,z} and ε^αβγ is the totally antisymmetric
tensor. This transformation of the spins to fermions is analogous to the one used by Kitaev <cit.>. Note, however, that we have fermionized the σ and τ spins together, and as a result, there is a single constraint per site containing both a σ and a τ spin.
For later use, we note that, using Eqs. (<ref>) and (<ref>), we can write
τ_𝐉^zσ_𝐉^α=-ic_𝐉^zb_𝐉^α.
Using this mapping, we express
Hamiltonian (<ref>) as
H(t)= ∑_α∈{x,y,z}𝒥_α∑_⟨𝐈,𝐉⟩∈α-bondsu_𝐈,𝐉ic_𝐈^zc_𝐉^z
-iκ∑_⟨⟨𝐉,𝐊,𝐋⟩⟩u_𝐉,𝐋u_𝐋,𝐊c_𝐉^zc_𝐊^z
-∑_𝐉B_𝐉(t)ic_𝐉^yc_𝐉^x-∑_𝐉g_𝐉(t)ic_𝐉^zc_𝐉^y,
where we identify the set of conserved ℤ_2 gauge fields
{u_𝐈,𝐉=ib_𝐈^αb_𝐉^α|⟨𝐈,𝐉⟩∈α-bonds,α∈{x,y,z}}.
The gauge-invariant flux W_i,j of the ℤ_2 gauge field is defined on each hexagonal plaquette that corresponds to the unit cell (i,j),
as the product of u_𝐈,𝐉 on its edges. In terms of the spin degrees of freedom, this can be written as
W_i,j = σ^x_(i,j,B)σ^z_(i+1,j,A)σ^y_(i+1,j,B)
×σ^x_(i+1,j+1,A)σ^z_(i,j+1,B)σ^y_(i,j+1,A).
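As a bookkeeping aid, the six-spin product above can be encoded as a list of (site, Pauli) pairs. This is a minimal sketch of the operator layout for the flux W_i,j, not an implementation of the measurement itself:

```python
def plaquette_pauli_string(i, j):
    """Sites and Pauli labels of the flux operator W_{i,j}.

    Each entry is ((i, j, sublattice), pauli) acting on the sigma spins,
    following the six-spin product quoted above.
    """
    return [((i,     j,     "B"), "X"),
            ((i + 1, j,     "A"), "Z"),
            ((i + 1, j,     "B"), "Y"),
            ((i + 1, j + 1, "A"), "X"),
            ((i,     j + 1, "B"), "Z"),
            ((i,     j + 1, "A"), "Y")]
```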
The fermionic Hamiltonian (<ref>) with g_𝐉=0, B_𝐉=0 is identical to the Hamiltonian found by fermionizing the original KSL model <cit.>, with {c_𝐉^z} playing the role of the “system fermions.”
The remaining {c^x_𝐉} and {c^y_𝐉} operators describe the fermionic modes of the bath, while the {b^α_𝐉} operators introduced above Eq. (<ref>) only participate through their roles in setting the values of the conserved ℤ_2 gauge fields {u_𝐈,𝐉}.
The term multiplied by g_𝐉 in Eq. (<ref>) allows fermions to hop between the system and the bath, while the term multiplied by B_𝐉 acts on the bath fermions alone.
Our aim is to prepare the system fermions {c_𝐉^z} in the ground state of the Hamiltonian (<ref>) with g_𝐉=0 and no ℤ_2 fluxes, i.e., W_i,j=1 (equivalent to setting u_𝐈,𝐉=1, up to a ℤ_2 gauge transformation).
§.§ Preparation Protocol
We now present the protocol to prepare the ground state of H_KSL, starting from an arbitrary initial state. Similarly to Refs. <cit.>, the protocol consists of cycles which are repeated until convergence to the ground state is achieved.
Since the protocol is intended to be implemented on bosonic qubits, in this section we return to the description in terms of the σ and τ spins.
In each cycle, the σ and τ spins are evolved unitarily for a time period T, with a time-dependent Hamiltonian (<ref>) designed to decrease the expectation value of H_KSL. The unitary evolution is followed by a projective measurement of the τ spins in the z basis. The measurement outcomes are used to determine the Hamiltonian of the next cycle.
In the beginning of the nth cycle, the τ spins are assumed to be in an eigenstate of the τ^z_𝐉 operators with known eigenvalues, which we denote by τ_𝐉(t_n)=±1. Before the first cycle, the system is initialized in a state where τ^z_𝐉=+1 for all 𝐉. We then perform the following operations:
1. The σ spins are brought to a flux-free state, W_i,j = 1. This can be done by measuring all the flux operators (which are mutually commuting), and annihilating the detected fluxes in pairs by applying appropriate unitary operators <cit.> (see Appendix <ref> for details).
2. The system is evolved from time t_n to t_n+1 = t_n + T with the Hamiltonian H(t-t_n), where H(t) is given in Eq. (<ref>).
Specifically, we set the effective Zeeman fields to B_𝐉(t) = B(t-t_n)τ_𝐉(t_n) and the “system-bath couplings” to g_𝐉(t) = g(t-t_n), with the functions g(t), B(t) as shown in Fig. <ref> and given explicitly in Appendix <ref>.
3. The τ^z_𝐉 operators are measured, giving new values {τ_𝐉(t_n+1)}. A new cycle then begins from step 2.
Importantly, the flux operators W_i,j commute with H(t).
Ideally, they therefore retain their initial values, W_i,j=1. In the presence of decoherence on a real quantum simulator, W_i,j may flip during the cycle. This may require returning to step 1 between cycles, where the flux operators are measured again and unitary operators are applied to correct flipped fluxes, as discussed in Sec. <ref> and Appendix <ref>.
The idea behind this protocol is that, in analogy to adiabatic demagnetization, the coupling between the σ and τ spins cools the system towards the ground state of H_KSL.
The τ spins are initially polarized in a direction parallel to the effective Zeeman fields acting on them.
Adiabatically sweeping the Zeeman fields downward while the coupling g is non-zero tends to transfer energy into the τ spins, decreasing the expectation value of H_KSL. This picture guides the choice of the protocol parameters, B_0, g_1 and T: B_0 should be chosen to be large enough compared to the gap E_gap between the ground state and the lowest excited state of H_KSL, such that sweeping the Zeeman field during the protocol brings the excitations in the system to resonance with the τ spins. The maximum coupling g_1 controls the rate of the cooling. T is chosen such that the time evolution is adiabatic with respect to the system's gap, which sets the maximum allowed values of B_0 and g_1: these are chosen such that max(|B_0|, |g_1|)/T≪ E_gap^2.
The operation of the cooling protocol is particularly transparent in the fermionic representation, Eq. (<ref>). It is straightforward to check that in the beginning of each cycle, when g_𝐉=0, the bath fermions c^x and c^y are decoupled from the system fermions, c^z, and are initialized in the ground state of the bath Zeeman Hamiltonian. Sweeping |B_𝐉(t)| downwards induces level crossings between states of the system (c^z) and bath fermions, and when g_𝐉≠0, excitations are transferred from the system to the bath.
§.§ Performance analysis
The fact that the system and bath can be described as emergent fermions which are coupled to the same gauge field means that a single fermionic excitation can be transferred coherently to the bath. This dramatically accelerates the cooling process compared to simpler “bosonic” adiabatic cooling algorithms of the type described in Refs. <cit.>. In these protocols, only a gauge-neutral pair of fermionic excitations can be transferred from the system to the bath, slowing down the cooling process as the ground state is approached.
Specifically, the performance of our protocol can be understood from simple considerations. The argument
mirrors that given in Ref. <cit.> for “gauged cooling” of a one-dimensional quantum Ising model.
Since fermionic excitations in the system can independently hop into the bath, each excitation has a finite probability to be removed by the bath in each cycle.
The cooling rate is therefore proportional to the energy density of H_KSL.
Approximating the cooling process as a continuous time evolution, we obtain a rate equation for the energy density, ε(t):
ε̇(t) = -C [ε(t)-ε_s],
where C>0 is the cooling rate, and ε_s is the steady state energy density.
The quantities C and ε_s depend on the protocol parameters, such as T, B_0 and g_1, and in a realistic noisy quantum simulator, also on the noise rate; in the perfectly adiabatic, noiseless case, ε_s→ε_0, the ground state energy density. Solving Eq. (<ref>), we obtain ε(t) = ε_s + [ε(0)-ε_s]e^-C t.
In contrast, in a simple adiabatic cooling protocol where only pairs of fermionic excitations can be transferred to the bath, ε̇∝ -(ε-ε_s)^2, resulting in a power-law approach to the steady state, ε-ε_s ∼ 1/t.
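The two approaches to the steady state can be compared directly. The sketch below simply evaluates the closed-form solutions of the two rate equations, with illustrative parameter values:

```python
import numpy as np

def eps_fermionic(t, eps0, eps_s, C):
    """Solution of d(eps)/dt = -C (eps - eps_s): exponential approach."""
    return eps_s + (eps0 - eps_s) * np.exp(-C * t)

def eps_pairwise(t, eps0, eps_s, Cp):
    """Solution of d(eps)/dt = -Cp (eps - eps_s)^2: 1/t approach."""
    return eps_s + (eps0 - eps_s) / (1.0 + Cp * (eps0 - eps_s) * t)

t = np.linspace(0.0, 50.0, 6)
print(eps_fermionic(t, eps0=1.0, eps_s=0.0, C=0.5))   # decays exponentially
print(eps_pairwise(t, eps0=1.0, eps_s=0.0, Cp=0.5))   # decays only as 1/t
```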
The fact that the fermionic Hamiltonian (<ref>) is quadratic allows us to analyze the cooling dynamics in a basis in which the excitations are decoupled. Moreover, within the flux-free sector, and assuming that the system is prepared initially such that τ^z_𝐉=1 and hence the signs of the B_𝐉's are spatially uniform, the Hamiltonian can be brought into a translationally invariant form by a unitary transformation.
This allows an explicit analysis of the performance of the protocol in momentum space, which we present in Appendix <ref>. The analysis verifies the exponential convergence to the steady state with the number of cycles performed, as anticipated above. Moreover, we find that in the ideal (noiseless) case, the steady state energy ε_s depends only on the adiabaticity of the protocol with respect to the system's gap, E_gap, whereas the cooling rate C depends also on the the adiabaticity with respect to the avoided crossing gaps encountered during the time evolution with H(t). In particular, in the limit where T is large compared to 1/E_gap, the ground state can be approached with an exponential accuracy.
While these results are demonstrated for the solvable model H_KSL, which can be mapped to free fermions, we expect them to hold more generally. For example, the performance is not expected to be parametrically affected if quartic interactions between the system fermions are added to Eq. (<ref>). The application of our protocol to general interacting fermionic Hamiltonians is discussed further in Sec. <ref>.
§ NUMERICAL SIMULATIONS
§.§ Performance without noise
We simulate the fermion cooling algorithm numerically with and without noise using the efficient procedure described in Appendix <ref> which relies on the model being one of free fermions. In all simulations we choose 𝒥_x=𝒥_y=𝒥_z=𝒥=1, κ=1, in the chiral phase,
and use protocol parameters B_0=7, g_1=0.5. Periodic boundary conditions are used unless otherwise stated.
Setting periodic boundary conditions allows us to simulate this translation invariant model in k-space point by point. We cool a system of size 85×85 unit cells in the absence of noise starting from a completely random state. Here, it is assumed that the system has been initialized to the flux-free sector prior to the action of the cooling protocol, which leaves the gauge fields fixed.
Time evolution is performed
in each momentum sector using an ordinary differential equation solver with a relative tolerance of 10^-6 such that the deviation of the steady state from the ground state is expected to derive predominantly from diabatic transitions.
In the ground state, the Chern number of the c^z_ fermions is equal to 1. To probe the convergence to the ground state, we define the quantity ν^l:
ν^l=1/2π i∫R^l[∂_k_xR^l,∂_k_yR^l]dk_xdk_y,
where R^l is the single particle density matrix of either the system (l=sys) or the bath (l=bath) fermions. Specifically, R^sys_s,s'(k)=⟨ c_(k,s)^z† c_(k,s')^z⟩, with an analogous expression for R^bath.
Here, c_(k,s)^z is the Fourier transform of c_^z = c_(i, j ,s)^z on sublattice s with wavevector k.
In a (pure) Slater determinant state, the quantity ν^l becomes the integrated Chern density. We calculate this value by performing a discrete integral in momentum space.
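The discrete integral can be evaluated, for instance, with central finite differences on a periodic k-grid. A minimal numpy sketch, assuming R is sampled with shape (N_kx, N_ky, 2, 2) in a periodic gauge and the trace over sublattice indices is implied:

```python
import numpy as np

def integrated_chern_density(R, dkx, dky):
    """Discrete nu = (1/2 pi i) * integral of tr(R [d_kx R, d_ky R]) dk.

    R: single-particle density matrix on a uniform, periodic k-grid,
    shape (Nx, Ny, 2, 2). Derivatives use central finite differences.
    """
    dRx = (np.roll(R, -1, axis=0) - np.roll(R, 1, axis=0)) / (2 * dkx)
    dRy = (np.roll(R, -1, axis=1) - np.roll(R, 1, axis=1)) / (2 * dky)
    comm = dRx @ dRy - dRy @ dRx                  # batched 2x2 commutator
    integrand = np.einsum("xyab,xyba->xy", R, comm)  # tr(R [dRx, dRy])
    nu = integrand.sum() * dkx * dky / (2j * np.pi)
    return nu  # real up to discretization error; take nu.real in practice
```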
Fig. <ref> shows ν^sys (solid lines) and ν^bath (dashed lines) at the end of a single cooling cycle before performing the measurement of the τ spins, as a function of the
sweep time, T. As the
sweep duration is increased, the Chern number of the system approaches 1 and that of the bath approaches -1, even after a single cycle. The fact that our protocol succeeds in preparing a fermionic
chiral ground state starting from a
topologically trivial state
by a finite depth local unitary circuit is due to
the chiral state possessing “invertible” topological order; i.e., stacking two systems with opposite Chern numbers yields a trivial state.
Thus, the topological state of the system's fermions and its inverse (in the bath)
can be prepared starting from a trivial state [Note, however, that overall, the spin system is in a non-invertible topologically ordered state, since the chiral phase of the KSL has fractional (non-Abelian) statistics.].
The inset of Fig. <ref> shows the energy density after a single cooling cycle as a function of the duration T. As the cycle duration is increased, the system approaches the ground state energy in a single cycle. The deviation from the ground state energy decreases exponentially with increasing T.
§.§ Performance with noise
In this section, we numerically test the performance of the proposed cooling algorithm in the presence of noise.
For simplicity, the noise is modeled as a uniform depolarizing channel acting on all σ and τ qubits
at some rate ζ of errors per cooling cycle per qubit. This noise is simulated stochastically as described in Appendix <ref>.
Importantly,
the fluxes of the gauge field may be excited by the action of noise on the system. In order to overcome this, one should periodically measure and fix these fluxes, while performing our proposed algorithm for cooling the excitations of the fermions {c_ J^z}. Fixing the observed fluxes can be done in an analogous way to the action of error correction in the surface code <cit.> by annihilating them in pairs, as explained in Appendix <ref>. When flux excitations are removed (by applying local spin operations), fermionic excitations may be created, and these are then cooled by subsequent cycles of the cooling protocol. Using the protocol, the energy density of the steady state should be proportional to the noise rate, although there is a finite density of flux excitations in the steady state.
We have simulated a KSL system of size 6× 6 unit cells, cooled in the presence of decoherence for multiple cycles. The results are presented in Fig. <ref>. The energy density is shown at the end of each cycle, with and without the active correction of the flux degrees of freedom.
In the simulation with flux correction, the fluxes are measured and removed at the end of each cycle. For simplicity, we have assumed that the measurements are ideal, and the flux correction operations are perfect.
In both cases the stochastic action of errors appears as local peaks in the energy density, after which the fermionic excitations produced are cooled. However, when errors which excite the flux degrees of freedom occur and are not corrected, further cycles of fermionic cooling are only able to reach the ground state within the new excited flux sector. After many cycles, a steady state is reached in which the flux degrees of freedom are in a maximally mixed state.
Figure <ref> shows the energy density of the steady state reached by cooling a system of size 10× 10 in the presence of noise while correcting the flux degrees of freedom at each cycle. The energy density is linear in the error rate, demonstrating that cooling of both fermionic and flux excitations is done at a rate proportional to their density.
The active correction of the flux degrees of freedom by measurement and feedback provides a natural tool for further error mitigation by post-selection. In cycles which require a large-weight Pauli correction in order to annihilate the monitored flux excitations, the error together with its correction likely results in the insertion of fermionic excitations, which must be removed by further cooling cycles. We denote the weight of the Pauli correction (defined as the number of gates applied to correct the fluxes) at cycle n by w_n, and calculate its exponential moving average
w̅_n = ∑_m≤ n w_m e^-n-m/n_0/∑_m≤ n e^-n-m/n_0,
where 1/n_0 is the exponential cooling rate in the absence of noise. We perform post selection in Fig. <ref> by accepting a fraction of the cycles with the smallest values of w̅_n, using n_0=3.
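A possible implementation of the moving average and of the post-selection mask is given below; the acceptance rule by quantile is our reading of "accepting a fraction of the cycles with the smallest values":

```python
import numpy as np

def ema_weights(w, n0=3.0):
    """wbar_n = sum_{m<=n} w_m e^{-(n-m)/n0} / sum_{m<=n} e^{-(n-m)/n0}."""
    w = np.asarray(w, dtype=float)
    wbar = np.empty_like(w)
    num = den = 0.0
    decay = np.exp(-1.0 / n0)
    for n, wn in enumerate(w):
        num = num * decay + wn   # running weighted sum of w_m
        den = den * decay + 1.0  # running sum of the weights
        wbar[n] = num / den
    return wbar

def post_select(w, fraction=0.5, n0=3.0):
    """Boolean mask accepting the `fraction` of cycles with the smallest wbar."""
    wbar = ema_weights(w, n0)
    return wbar <= np.quantile(wbar, fraction)
```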
Performing such post selection succeeds in reducing the expectation value of the energy density. In the limit of a very low accepted fraction of the data, the energy density is reduced approximately by a factor of two at low error rates. In this limit, all cycles in which σ spins are affected by errors are removed, but cycles in which τ spins are disturbed remain part of the data. Further post selection can be done by imposing a requirement on the outcomes of the measurements of the τ degrees of freedom from cycle to cycle <cit.>.
§ EXTENSION TO ARBITRARY FERMIONIC HAMILTONIANS
Our protocol for preparing the ground state of the KSL model can be generalized as a method for preparing the ground state of an arbitrary (interacting) fermionic Hamiltonian in 2D. Given some local target Hamiltonian acting on the system Majorana fermions c_𝐉^z, we introduce the gauge fields u_𝐈,𝐉 artificially and modify the target Hamiltonian such that the c_𝐉^z fermions become charged under the gauge field.
Namely, we modify each hopping term between site 𝐈 and site 𝐉 in the target Hamiltonian by the product of the gauge fields on a path connecting the two sites.
We then introduce the bath fermions c_𝐉^x,c_𝐉^y,
and use Eq. (<ref>) to map the fermionic Hamiltonian (including the hopping between the system and bath fermions, used for cooling the system) onto a bosonic local Hamiltonian acting on the σ and τ spins.
The ground state of the Hamiltonian is prepared according to the protocol above and this state is related to the ground state of the target Hamiltonian by the known gauge transformation as discussed in Sec. <ref>.
Generalizations to other problems, e.g., with complex spinful fermions or multiple orbitals per site, are straightforward. We also note that Hamiltonians with fermion number conservation can be implemented within our protocol; the fermion number changes during each cycle, but the system is cooled towards the ground state that has a well-defined fermion number set by a chemical potential term.
§ SUMMARY
In this work, we have presented an efficient protocol for preparing certain chiral topological states of matter on quantum simulators. Our prime example is the chiral phase of the Kitaev spin model on the honeycomb lattice. Our protocol can be similarly used to prepare invertible topological phases of fermions, such as Chern insulators and chiral topological superconductors. Importantly, our protocol does not require an a-priori knowledge of the ground state wavefunction, and should perform equally well in the presence of interactions (although the phases we prepare have a description in terms of non-interacting fermions). In the absence of interactions and decoherence, the ground state can be reached after a single cycle, up to adiabatic errors
(for the non-interacting case, a similar approach was proposed in Ref. <cit.>).
Our scheme utilizes a simultaneous fermionization of the target system and the cooling bath with which it is coupled, in order to allow the removal of single fermionic excitations from the system to the bath. Crucially, the protocol relies on repeated measurements of the bath spins that detects the fermionic excitations, and local classical feedback that removes them.
In practice, for running this algorithm on real quantum hardware, one should choose the time duration T of the unitary evolution and the number of Trotter steps, N_t, to optimize the trade-off between diabatic errors and Trotterization noise (which decrease upon increasing T and N_t) versus the noise deriving from environmental decoherence and low fidelity gates (which are typically proportional to the number of gates applied, and hence to N_t). The optimization can be done even without knowledge of the noise characteristics, by minimizing the measured energy of the steady state with respect to the protocol parameters, such as T and N_t.
Our protocol offers a parametric advantage compared to other recently-proposed methods to prepare chiral 2D states. Ref. <cit.> describes a method to prepare the chiral state of the Kitaev honeycomb model by adiabatic evolution. Due to the gap closing along the path, the excitation energy of the final state (or the ground state fidelity) scales polynomially with the duration of the time evolution. In contrast, in our protocol, no adiabatic path is required, and the excitation energy scales exponentially with the cycle duration. In Ref. <cit.>, a method for preparing chiral states using sequential quantum circuits is described; this method requires a circuit whose depth scales with the linear size of the system in order to reach a state with a given energy density. Ref. <cit.> provides an entanglement renormalization circuit which can prepare chiral states in a depth that scales logarithmically with system size, however this requires the use of long-range unitary gates. By comparison, within our method the required circuit depth is independent of the system size and local qubit connectivity in 2D is sufficient.
We emphasize that, while our method does not require knowing the ground state in advance, the performance of the protocol depends crucially on the topological phase to which the ground state belongs. Developing practical and efficient methods to prepare ground states in more complicated topological phases, such as fractional Chern insulators,
is an interesting outstanding challenge.
G.K. and E.B. were supported by CRC 183 of the Deutsche Forschungsgemeinschaft (subproject A01) and a research grant from
the Estate of Gerald Alexander. M.R. acknowledges the Brown Investigator Award, a program of
the Brown Science Foundation, the University of Washington College of Arts and Sciences, and the Kenneth K.
Young Memorial Professorship for support.
§ REMOVING FLUXES IN 2D
In order to reach the ground state of the 2D KSL, one should prepare the system in a flux-free state of the gauge field u and then proceed to cool the fermionic excitations c^z.
Fluxes can be removed by measuring the plaquette operators W_i,j which correspond to local 6-body spin operators shown in Eq. (<ref>).
The excitations W_i,j = -1 can be removed in pairs by products of local unitary operations. We note that the operator σ^α_𝐉 at a vertex of the honeycomb lattice shared between three hexagons anticommutes with two of the three flux operators on these hexagons,
while commuting with the third (as well as all the other plaquettes in the lattice). Therefore, applying such a unitary operator σ^α_𝐉 flips the signs of the two corresponding plaquettes, potentially also inserting fermionic excitations. Choosing some pairing of the measured excited plaquettes, one can construct a string of spin operators ∏_𝐉∈ Sσ^α(𝐉)_𝐉 connecting
each pair such that the product anti-commutes only with the
plaquette operators at the end points of the string.
Using the above approach, a finite density of flux excitations can be corrected using measurements and feed-forward unitaries (analogously to the error correction protocol in the surface code <cit.>). The composite error (the original error affecting the fluxes together with the correction) is a product of Pauli operators acting on the σ spins which commutes with all of the flux operators, leaving them unaffected. However, the action of this operator on the system excites a number of residual fermionic excitations proportional to its weight (the number of spins on which it acts). Therefore, assuming a low error rate resulting in a low-weight Pauli error with high probability, it is important to construct a correction with a low weight as well by annihilating the flux excitations in local pairs, using a minimum weight perfect matching decoder <cit.>. Otherwise, the density of residual fermionic excitations will scale with the system size.
The resulting fermionic excitations left after fusing the fluxes can be removed later by the coupling to the bath. Consequently, in the presence of noise, the combined algorithm (composed of a unitary sweep, measurement of the bath spins and the fluxes, and feed-forward correction of the fluxes) should reach a steady state whose energy density is proportional to the noise rate and is independent of system size. This is the algorithm used in presence of noise, whose results are shown in Figs. <ref> and <ref>.
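A minimal pairing sketch using networkx is given below: excited plaquettes are matched so as to minimize the total string length, with Manhattan distance as a proxy for the Pauli-correction weight. Periodic boundary conditions are ignored for brevity, and an even number of fluxes is assumed:

```python
import networkx as nx

def pair_fluxes(flux_positions):
    """Pair excited plaquettes (W = -1) to minimize total correction weight.

    Builds a complete graph over the flux positions [(i, j), ...] and takes
    a minimum-weight perfect matching (max matching on negated distances).
    """
    g = nx.Graph()
    for a, (ia, ja) in enumerate(flux_positions):
        for b, (ib, jb) in enumerate(flux_positions):
            if a < b:
                d = abs(ia - ib) + abs(ja - jb)  # Manhattan string length
                g.add_edge(a, b, weight=-d)      # negate: max <-> min distance
    matching = nx.max_weight_matching(g, maxcardinality=True)
    return [(flux_positions[a], flux_positions[b]) for a, b in matching]

print(pair_fluxes([(0, 0), (0, 1), (3, 3), (4, 3)]))
```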
§ SMOOTH EVOLUTION OF TIME DEPENDENT COUPLINGS
In order to approach the adiabatic limit exponentially with increasing cooling cycle duration T, we choose the time dependence of the couplings B and g to have finite derivatives at all orders. Explicitly, we use a smooth step function 𝒮(x) as used in Ref. <cit.>, defined as
𝒮(x) =
0, x ≤ 0,
1, x ≥ 1,
1/(1+e^[1/x+1/(x-1)]), otherwise.
Using this, we express B(t) and g(t) as
B(t)=B_0[1-𝒮(t/T)],
g(t)=g_1𝒮(t/t_1)[1-𝒮(t-t_2/T-t_2)].
These curves are shown in Fig. <ref> for t_1=1/4T, t_2=3/4T as chosen in our numerical simulations.
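For completeness, the ramps can be evaluated directly. In the sketch below, B_0=7 and g_1=0.5 are the values used in the simulations, and t_1, t_2 follow the choice quoted above:

```python
import numpy as np

def smooth_step(x):
    """C-infinity step: 0 for x<=0, 1 for x>=1, 1/(1+e^[1/x+1/(x-1)]) between."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.where(x >= 1.0, 1.0, 0.0)
    inside = (x > 0.0) & (x < 1.0)
    xi = x[inside]
    out[inside] = 1.0 / (1.0 + np.exp(1.0 / xi + 1.0 / (xi - 1.0)))
    return out

def B_field(t, T, B0=7.0):
    """Zeeman ramp B(t) = B0 [1 - S(t/T)]."""
    return B0 * (1.0 - smooth_step(t / T))

def coupling_g(t, T, g1=0.5):
    """System-bath ramp g(t) = g1 S(t/t1) [1 - S((t-t2)/(T-t2))]."""
    t1, t2 = T / 4.0, 3.0 * T / 4.0  # as chosen in the numerical simulations
    return g1 * smooth_step(t / t1) * (1.0 - smooth_step((t - t2) / (T - t2)))
```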
§ ANALYSIS IN MOMENTUM SPACE
In this section we study the cooling dynamics of the quadratic fermionic Hamiltonian (<ref>) in the flux-free sector with uniform τ^z_𝐉=1. We choose a gauge where u_𝐈,𝐉=1, for which the Hamiltonian is translationally invariant, and Fourier transform the fermionic Hamiltonian to find
H=∑_kΨ_k^† H_kΨ_k,
where
Ψ_k^†=[ c_(k,A)^z† c_(k,B)^z† c_(k,A)^y† c_(k,B)^y† c_(k,A)^x† c_(k,B)^x† ],
with the 6 × 6 time-dependent Hamiltonian describing the modes at each k given by
H_k(t)=([ h_k G(t) 0; G^†(t) 0 -iB(t)𝕀; 0 iB(t)𝕀 0 ]).
Here, 𝕀 is a 2× 2 unit matrix,
h_k is the 2 × 2 submatrix describing the “system” (c^z_(k,s)) fermions,
h_k =[ Δ(k) if(k); -if(-k) -Δ(k) ],
and the 2 × 2 matrix G that describes hopping between fermionic system and bath (c^y_(k,s)) is given by
G(t)= [ -ig(t)Γ_A 0; 0 -ig(t)Γ_B ],
where we allow the coupling to be sublattice dependent, g_(i,j,s)(t) = g(t-t_n)Γ_s, rather than spatially uniform as chosen in the main text.
The functions Δ(k) and f(k) in Eq. (<ref>) are given by
Δ(k)=2κ[sink_x-sink_y+sin(k_y-k_x)]
and f(k) = 𝒥_x+𝒥_ye^-ik_x+𝒥_ye^-ik_y.
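The 6×6 Bloch Hamiltonian is straightforward to assemble numerically. The sketch below is written at the isotropic point 𝒥_x=𝒥_y=𝒥_z=𝒥 (where f(k) reduces to 𝒥(1+e^{-ik_x}+e^{-ik_y})) with uniform coupling Γ_A=Γ_B=1; the parameter values are placeholders. It also checks the spectrum at k=(2π/3,-2π/3), where f(k)=0.

import numpy as np

def H_k(kx, ky, B, g, J=1.0, kappa=0.1):
    """6x6 Bloch Hamiltonian at momentum k, for Gamma_A = Gamma_B = 1."""
    Delta = 2.0 * kappa * (np.sin(kx) - np.sin(ky) + np.sin(ky - kx))
    f = J * (1.0 + np.exp(-1j * kx) + np.exp(-1j * ky))   # isotropic point
    hk = np.array([[Delta, 1j * f], [-1j * np.conj(f), -Delta]])
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    G = -1j * g * I2
    return np.block([[hk, G, Z2],
                     [G.conj().T, Z2, -1j * B * I2],
                     [Z2, 1j * B * I2, Z2]])

k = (2.0 * np.pi / 3.0, -2.0 * np.pi / 3.0)
print(np.linalg.eigvalsh(H_k(*k, B=0.5, g=0.05)))   # spectrum at the f(k)=0 point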
§.§ Adiabatic limit
For the cooling protocol to succeed, it is essential to ensure that for all k the off-diagonal terms G have a non-zero matrix element between the ground state of the target Hamiltonian, h_k, and the ground state of the Hamiltonian Bσ^y of one of the bath modes, c^x_(k,s),c^y_(k,s). Importantly, if the system-bath coupling
is nonvanishing for all k,
in the perfectly adiabatic limit the system reaches its ground state after a single cycle (see discussion in Ref. <cit.>).
The instantaneous spectrum of H_k in the topological phase with 𝒥_x=𝒥_y=𝒥_z and κ>0 (where the ground state of the system fermions c^z_(k,s) carries a Chern number of 1 when g=0) is shown at the point k=(2π/3,-2π/3) as a function of B in Fig. <ref>. At this point, we find f(k)=0 and h_k∝σ^z, such that
using only the A sublattice bath (taking Γ_B=0) does not allow cooling of fermionic excitations at this k point.
The existence of this gap closing is due to the non-zero Chern number of the target ground state. As k varies, the spinor corresponding to the system's ground state covers the entire Bloch sphere. Therefore, there must be a value of k for which the system's ground state is orthogonal to the spinor formed by acting with the first column of G on the bath's ground states. At this k point, the coupling to
the bath on the A sublattice alone cannot cool the system.
Using two bath sublattices is enough to overcome this problem. As long as the matrix G is not singular, the spinors obtained by acting with the columns of G on the system's ground state are not identical, and at least one of these spinors has a non-zero overlap with the system's ground state at each k. Note that, in contrast to the topological phase, in the trivial phase where the Chern number vanishes (obtained, e.g., when 𝒥_z ≫ 𝒥_x, 𝒥_y and κ=0), one bath sublattice would generically be sufficient (with appropriately chosen coupling).
§.§ Diabatic corrections
Deviations from the adiabatic limit have two main effects. First, the protocol
does not converge to its steady state after a single cycle, but rather
converges to the steady state exponentially with the number of cycles
performed.
Second, the steady state deviates from the ground state of the
system. Importantly, we shall now show that the deviation of the steady
state from the ground state due to diabatic errors is controlled by
the intrinsic energy gap of the system, and not by the avoided crossing
in the middle of the cycle (see Fig. <ref>). The mid-cycle avoided crossing gap controls
the rate of convergence to the steady state.
Returning to the case of Γ_A=Γ_B=1, the diabatic corrections are most straightforward to analyze after
a series of unitary transformations which map the problem at each k into an evolution of three spins.
First, we perform a k-dependent unitary transformation U_k = diag(V_k, V_k, V_k), acting identically on the three fermion flavours, where V_k is chosen such that V_k h_k V_k^† = E_kσ^y,
with E_k=√(|Δ(k)|^2+|f(k)|^2) being the spectrum of the KSL model.
This brings the Hamiltonian
(<ref>) into a purely antisymmetric form
H_k=U_kH_kU_k^†=([ E_kσ^y -ig(t)𝕀 0; ig(t)𝕀 0 -iB(t)𝕀; 0 iB(t)𝕀 0 ]).
Next, we write the corresponding vector of complex fermionic annihilation operators
Ψ_k=U_kΨ_k in terms of two vectors of Majorana
operators: Ψ_k=1/2(a_k+id_k), where a_k^T=(a_1,k,…,a_6,k)
with a_n,k=a_n,k^†, and similarly for d_k.
Using the antisymmetric property of H_k, the many-body
Hamiltonian decouples when written in terms of a_k, d_k:
H=1/4∑_k[a_k^TH_ka_k+d_k^TH_kd_k].
For a given k,
the parts of the Hamiltonian that
depend on a_k and d_k are independent and identical, and it is sufficient to analyze one of them.
§.§.§ Mapping to a three spin problem
The part that
depends on a_k
can be written in terms of three pseudospin operators,
defined
such that μ_k^z=ia_1,ka_2,k, η_k^z=ia_3,ka_5,k, χ_k^z=ia_4,ka_6,k.
The basis of the corresponding Hilbert space can be labelled according to the eigenvalues of μ_k^z,η_k^z,χ_k^z; we denote the basis states as |μ_k^z,η_k^z,χ_k^z ⟩.
The a_k-dependent part of the
Hamiltonian is written as
H_k^a(t)=-1/2E_kμ_k^z-1/2B(t)(η_k^z+χ_k^z)+1/2g(t)μ_k^x(η_k^x+χ_k^x).
We have thus mapped the problem into that of three coupled spins. The Hamiltonian of the d_k sector has
the same form, and can be treated similarly.
Note that the μ_k,η_k,χ_k spins are distinct from the physical ones, σ_J and τ_J.
§.§.§ Solution of the three spin problem
At g=0 the Hamiltonian is diagonalized in the orthonormal basis 1/√(2)(|± 1,1,-1⟩-|± 1,-1,1⟩), |± 1,1,1⟩, 1/√(2)(|± 1,1,-1⟩+|± 1,-1,1⟩), |± 1,-1,-1⟩ with energies given by ∓ E_k/2, ∓ E_k/2-B, ∓ E_k/2, ∓ E_k/2+B. At finite g, the first two states remain decoupled. The Hamiltonian in the complementary subspace written in this basis is given by
H=([ -E_k/2-B 0 0 √(2)g 0 0; 0 E_k/2-B √(2)g 0 0 0; 0 √(2)g -E_k/2 0 0 √(2)g; √(2)g 0 0 E_k/2 √(2)g 0; 0 0 0 √(2)g -E_k/2+B 0; 0 0 √(2)g 0 0 E_k/2+B ]),
written in the ordered basis |1,1,1⟩, |-1,1,1⟩, 1/√(2)(|1,1,-1⟩+|1,-1,1⟩), 1/√(2)(|-1,1,-1⟩+|-1,-1,1⟩), |1,-1,-1⟩, |-1,-1,-1⟩,
and its spectrum is shown as a function of B for g>0 (solid lines) and for g=0 (dotted lines) in Fig. <ref>b. Turning on a non-zero g couples 1/√(2)(|± 1,1,-1⟩+|± 1,-1,1⟩) to |∓ 1,1,1⟩ and |∓ 1,-1,-1⟩.
(Note that [μ_k^z η_k^z χ_k^z, H^a_k]=0: the even and odd parity sectors of total z pseudospin are decoupled.)
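A direct diagonalization of this 6×6 block reproduces the level diagram discussed below. The following sketch assumes the basis ordering written above; the values of E_k, B and g are placeholders.

import numpy as np

def H_three_spin(E, B, g):
    """Three-spin block in the ordered basis
    |1,1,1>, |-1,1,1>, sym(+1), sym(-1), |1,-1,-1>, |-1,-1,-1>."""
    H = np.diag([-E/2 - B, E/2 - B, -E/2, E/2, -E/2 + B, E/2 + B])
    for i, j in [(0, 3), (1, 2), (2, 5), (3, 4)]:   # sqrt(2) g couplings
        H[i, j] = H[j, i] = np.sqrt(2.0) * g
    return H

levels = np.array([np.linalg.eigvalsh(H_three_spin(E=1.0, B=B, g=0.1))
                   for B in np.linspace(2.0, 0.0, 201)])   # spectrum along the B ramp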
Each cooling cycle is initialized with the bath spins η_k, χ_k in their respective ground states, i.e., the system is initialized in some mixture of the states |± 1,1,1⟩. For g=0, the energies of the eigenstates |-1,1,1⟩ and 1/√(2)(| 1,1,-1⟩+| 1,-1,1⟩) cross at B=E_k, while the state |1,1,1⟩ remains spectrally separated from the only state it is directly coupled to, 1/√(2)(| -1,1,-1⟩+| -1,-1,1⟩). With a finite g, the crossing becomes an avoided crossing and in the adiabatic limit, the states |-1,1,1⟩ and 1/√(2)(| 1,1,-1⟩+| 1,-1,1⟩) are exchanged under the time evolution, while | 1,1,1 ⟩
evolves trivially.
Away from the adiabatic limit, we denote the diabatic transition probability for the | -1, 1, 1⟩, 1/√(2)(| 1,1,-1⟩+| 1,-1,1⟩) pair as P_1,k and the diabatic transition probability
between | 1,1,1⟩ and 1/√(2)(| -1,1,-1⟩+| -1,-1,1⟩) as P_2,k (see Fig. <ref>b).
For g(t)/E_k≪ 1 and assuming g(t) is constant for times where B(t) ≈ E_k, P_1,k is approximately given by the Landau-Zener formula, P_1,k≈ e^-c_1 Tg^2/B_0 for some constant c_1. The second transition probability,
P_2,k, is determined by the single particle gap of the target Hamiltonian, P_2,k≈ e^-c_2 TE_k^2/B_0≥ e^-c_2 TE_gap^2/B_0, where E_gap is the minimal gap. We neglect the possibility for transitions to the states |± 1,-1,-1⟩ as a second order effect, occurring with probability of the order of P_1,kP_2,k.
§.§.§ Analysis of the cooling dynamics
A single cycle of the cooling protocol, including the measurement of the ancilla spins, can be described as a map from the initial density matrix for the system (corresponding to the pseudospin μ_k) to the final one, i.e., the dynamics realize a quantum channel.
The protocol consists of applying this quantum channel to the system multiple times, until a steady state is reached.
We number the four states participating in the dynamics | 1,1,1⟩, 1/√(2)(| 1,1,-1⟩+| 1,-1,1⟩), | -1, 1, 1⟩, 1/√(2)(| -1,1,-1⟩+| -1,-1,1⟩) by 0,1,2,3 for brevity and denote the corresponding density matrix by
ρ=[ ρ_00 ρ_01 ρ_02 ρ_03; ρ_10 ρ_11 ρ_12 ρ_13; ρ_20 ρ_21 ρ_22 ρ_23; ρ_30 ρ_31 ρ_32 ρ_33 ].
The final step of the protocol, in which τ_J^z are measured and the signs of B_J are adjusted accordingly for the next cycle, is equivalent to a resetting operation taking η_k^z,χ_k^z→1 after the measurement. The action of this operation on the density matrix is given by
ρreset→[ ρ_00+ρ_11 0 ρ_02+ρ_13 0; 0 0 0 0; ρ_20+ρ_31 0 ρ_22+ρ_33 0; 0 0 0 0 ].
The unitary time evolution step acts as
ρunitary→𝒰ρ𝒰^†,
where
𝒰=([ √(P̄_2,k) 0 0 e^{iθ_1}√(P_2,k); 0 -√(P_1,k) e^{iθ_2}√(P̄_1,k) 0; 0 e^{iϕ_2}√(P̄_1,k) e^{i(θ_2+ϕ_2)}√(P_1,k) 0; -e^{iϕ_1}√(P_2,k) 0 0 e^{i(θ_1+ϕ_1)}√(P̄_2,k) ]),
where P_1,k and P_2,k are the transition probabilities between the different states (see Sec. <ref> and Fig. <ref>b), and we denote the complementary probabilities by P̄_n,k=1-P_n,k. Finally, under the full protocol cycle consisting of unitary evolution followed by measurement and reset,
[ ρ'_00; ρ'_02; ρ'_20; ρ'_22 ]=M[ ρ_00; ρ_02; ρ_20; ρ_22 ],
where
M=([ P̄_2,k 0 0 P̄_1,k; 0 √(P̄_2,kP_1,k) -√(P_2,kP̄_1,k) 0; 0 -√(P_2,kP̄_1,k) √(P̄_2,kP_1,k) 0; P_2,k 0 0 P_1,k ]).
Here, the complex phases e^{iθ_1}, e^{iθ_2}, e^{iϕ_1}, e^{iϕ_2} cancel. If we take P_2,k→0, then the steady state of the protocol is the desired ground state (μ_k^z=1), since there is then no probability of a transition out of the ground state.
In general, the steady state of these dynamics is given by
M[ P̄_1,k/(P̄_1,k+P_2,k); 0; 0; P_2,k/(P̄_1,k+P_2,k) ]=[ P̄_1,k/(P̄_1,k+P_2,k); 0; 0; P_2,k/(P̄_1,k+P_2,k) ];
the steady state is a mixture of the ground state (μ_k^z=1) and the excited state (μ_k^z=-1). The probability to be in the ground state is given by
P_steady,k = (1-P_1,k)/(1-P_1,k+P_2,k).
We denote the eigenvalues of the quantum channel by λ_n,k (where λ_1,k=1 corresponds to the steady state).
The eigenvalues, λ_2,k, λ_3,k, and λ_4,k
determine the rate in cycles at which the steady state is exponentially approached. These are given by
λ_2,k=√(1-P_2,k)√(P_1,k)+√(P_2,k)√(1-P_1,k),
λ_3,k=√(1-P_2,k)√(P_1,k)-√(P_2,k)√(1-P_1,k),
λ_4,k=P_1,k-P_2,k.
The largest among these (i.e., the eigenvalue closest to 1) is λ_2,k, and hence this eigenvalue controls the approach to the steady state: asymptotically, the deviation from the steady state scales with the number of cycles n as e^-n/n_0, where n_0 = 1/ln(1/λ_2,k)
[Note that, in general, λ_2,k≤ 1. In the special case P_1,k=1-P_2,k, λ_2,k=1 and the steady state becomes
degenerate. For any other value of P_1,k, the system converges to the steady state.].
Interestingly, Eq. (<ref>) shows that for P_2,k→0, the steady state is exactly the ground state,
independent of the value of P_1,k.
This is due to the fact that, for P_2,k=0, there is no way to escape from the ground state once it is reached. However, the rate of approach to the ground state does depend on P_1,k.
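The map M can be iterated directly to check both the steady state and the convergence rate. A minimal sketch follows; the values of P_1,k and P_2,k are arbitrary placeholders.

import numpy as np

def cycle_map(P1, P2):
    """One cooling cycle acting on (rho00, rho02, rho20, rho22)."""
    P1b, P2b = 1.0 - P1, 1.0 - P2
    return np.array([[P2b, 0.0, 0.0, P1b],
                     [0.0, np.sqrt(P2b * P1), -np.sqrt(P2 * P1b), 0.0],
                     [0.0, -np.sqrt(P2 * P1b), np.sqrt(P2b * P1), 0.0],
                     [P2, 0.0, 0.0, P1]])

P1, P2 = 0.3, 0.02
M = cycle_map(P1, P2)
rho = np.array([0.0, 0.0, 0.0, 1.0])          # start fully excited
for _ in range(200):
    rho = M @ rho
print(rho[0], (1.0 - P1) / (1.0 - P1 + P2))   # iterated vs closed-form ground-state weight
print(sorted(abs(np.linalg.eigvals(M))))      # channel eigenvalues |lambda_n,k|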
§ SINGLE-PARTICLE DENSITY MATRIX SIMULATION OF FREE FERMIONS
The free evolution of a quantum many-body system under a quadratic fermionic Hamiltonian, including measurements and resets of single sites and Pauli errors, can be simulated efficiently, with time and memory costs that scale polynomially with system size. Here, we briefly describe the technique used to simulate the noisy dynamics of the cooling process discussed in Sec. <ref>. The technique is the same as that used in Ref. <cit.>.
We keep track of the single particle density matrix R_𝐈,𝐉^α,β=⟨ ic_𝐈^αc_𝐉^β⟩ as well as the values of the ℤ_2 gauge field u_𝐈,𝐉=± 1 as they evolve under the noisy cooling dynamics.
This allows us to calculate any quadratic observables, and the energy in particular.
During free evolution under the Hamiltonian (<ref>) of the main text within a fixed sector of the gauge field u, the single particle density matrix evolves under a unitary matrix W(t) by conjugation, R(t)=W^†(t)R(0)W(t). This unitary is given by the time ordered exponentiation of the Hamiltonian in matrix form, and can be found numerically by Trotterization or using an ordinary differential equation solver. In Sec. <ref> we use second order Trotterization, making N_t steps per cooling cycle. Importantly, the unitary depends on the values of the gauge field which are looked up for its computation.
The resetting operation of the bath fermions at the end of each cycle ic_𝐉^yc_𝐉^x→ 1 is handled as in Ref. <cit.>. This amounts to setting all matrix elements of R_𝐈,𝐉^α,β in the two rows and the two columns corresponding to 𝐉,x and to 𝐉,y to zero, and then setting the 2×2 block at their intersection to the values found at ic_𝐉^yc_𝐉^x=1.
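In code, the reset acts on the antisymmetric matrix R as below. The index bookkeeping and the sign convention for ⟨ic^yc^x⟩ = 1 are assumptions of this sketch.

import numpy as np

def reset_bath_pair(R, iy, ix):
    """Project the bath pair onto i c^y_J c^x_J = 1 in the matrix R.

    iy, ix: global row/column indices of the c^y_J and c^x_J modes.
    """
    R = R.copy()
    R[[iy, ix], :] = 0.0          # erase all correlations of the reset modes
    R[:, [iy, ix]] = 0.0
    R[iy, ix] = 1.0               # <i c^y c^x> = 1 (sign convention assumed)
    R[ix, iy] = -1.0              # antisymmetry of R
    return R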
Depolarizing noise is implemented through stochastic application of Pauli errors before each Trotter step of the time evolution and for each spin ({σ_𝐉,τ_𝐉}) with probability of ζ/N_t. The Pauli τ_𝐉^α operators are bilinear in the c fermions (<ref>), such that they correspond to evolution under a quadratic fermionic Hamiltonian. Pauli errors σ_𝐉^α act on the b Majorana fermions, flipping the signs of two of the three gauge fields incident to site 𝐉.
Measuring the flux degrees of freedom within this framework simply requires reading off the product of the values of the ℤ_2 gauge fields around the corresponding plaquettes. These values are well defined at every point in time for each trajectory in these simulations, and their fluctuations can be obtained by averaging over multiple trajectories with stochastic noise. Correcting these fluxes in order to return to the flux-free sector, as described in Appendix <ref>, is done by applying a unitary correction given by a product of Pauli σ_𝐉^α operators.
|
http://arxiv.org/abs/2409.02764v1 | 20240904143938 | Homoclinic Chaos Unveiling Quorum Sensing Dynamics | [
"Mariana Harris",
"Pablo Aguirre",
"Víctor F. Breña-Medina"
] | math.DS | [
"math.DS",
"nlin.CD",
"q-bio.PE",
"34C23, 37G05, 37G15, 37C29, 37G20, 92B25"
] |
Homoclinic Chaos Unveiling Quorum Sensing Dynamics
Mariana Harris, Pablo Aguirre and Víctor F. Breña-Medina
September 4, 2024
================================================
§ ABSTRACT
Quorum sensing orchestrates bacterial communication, which is vital for bacteria's population behaviour. We propose a mathematical model that unveils chaotic dynamics within quorum sensing networks, challenging predictability.
The model considers the interaction between autoinducers (molecular signalling) and two subtypes of bacteria. We analyze the different dynamical scenarios to find parameter regimes for long-term steady-state behaviour, periodic oscillations, and even chaos. In the latter case, we find that the complicated dynamics can be explained by the presence of homoclinic Shilnikov bifurcations.
Quorum sensing modelling, Synchronisation, Shilnikov homoclinic bifurcation, chaos.
34C23, 37G05, 37G15, 37C29, 37G20, 92B25.
§ INTRODUCTION
Quorum sensing (QS) —also known as auto-induction— is a mechanism of regulation of gene expression which allows communication in a cell population <cit.>. In the case of bacteria, this process is a density-dependent behaviour consisting of cell-to-cell communication through the release of chemical signal molecules known as autoinducers. This communication enables a population to explore a given medium and alter its behaviour in response to its own fluctuations, which arise from changes in the number of species present in the community. Specifically, the basis of the QS mechanism is the production and release of autoinducers, whose concentration increases as a function of bacterial population density. When the population density exceeds a threshold, the bacteria detect the autoinducers, which then activate genes that switch on a behavioural trait. Autoinducers located in the external medium diffuse and bind with a specific protein within the bacteria. A protein-autoinducer complex is then formed that attaches to a region of the bacteria's DNA. This in turn regulates the production of the autoinducers, enhancing a specific density-regulated cell behaviour; see, for instance, <cit.>.
The QS phenomenon was first observed by Nealson and Hastings in 1970 in a bioluminescent bacterium called Vibrio fischeri <cit.>. This bacterium forms a mutually beneficial symbiotic relationship with some species of squid, such as the Hawaiian squid Euprymna scolopes. The V. fischeri live in the squid's light organ, which provides nourishment, allowing the bacteria to proliferate. When the bacteria reach a high population density, the genes involved in bioluminescence are expressed. The produced light allows the squid to mask its shadow and avoid predation <cit.>. Following the discovery of V. fischeri's density-dependent bioluminescent behaviour, other species of bacteria have been found to employ a QS mechanism; for instance, the bacterium Chromobacterium violaceum produces a purple pigment <cit.>. Biofilms also present challenges and opportunities in industrial settings, as they may cause blockages in specialised machinery <cit.>; related QS processes include gene expression <cit.>, disrupting QS signalling (a process known as quorum quenching) <cit.>, and inhibiting pathogen virulence <cit.>.
The QS phenomenon has been studied through deterministic as well as stochastic and hybrid mathematical modelling approaches, inspired by many biologically plausible perspectives. For instance, in <cit.> a plant-pathogen population controlling virulence on leaf surfaces is investigated by means of a continuous-time Markov process, where a linear birth process, a logistic death-migration population process, and an autocatalytic mechanism for the concentration of acyl homoserine lactone autoinducers are taken into account. Their findings include an inversely proportional relationship between the QS mechanism and autoinducer diffusion. Another interesting work can be seen in <cit.>; there, extracellular polymeric substance (EPS) production is modelled in a growing biofilm under various environmental conditions, which yields a reaction-diffusion system where the diffusion coefficient is density-dependent. The authors found that QS-induced EPS production permits a biofilm to switch from a colonisation state to a protection state, which is a crucial characteristic of QS. Further, in <cit.> an anomalous diffusion process is taken into account, which captures the delayed transport of substances as a consequence of transcriptional features. On the other hand, a deterministic gene regulatory network point of view is addressed in <cit.> by considering a QS pathway involving multiple feedforward and negative feedback loops, as well as transcription time delays, in a cancer drug-release scheme. A key result of these works is the appearance of Hopf bifurcations, secondary oscillatory-induced bifurcations, and a time-delay threshold, which together coordinate self-sustained oscillations. This small sample of approaches pays special attention to transcriptional time delays as well as to crucial on-and-off activating switches. Our focus is primarily directed towards the latter.
In this work we propose a simple model that captures the interaction between a bacterial population and its autoinducer concentration. In other words, we assume that the population of bacteria is locally activated and inhibited by the production of autoinducers.
In order to capture the key qualitative ingredients of QS mechanisms in bacteria, we follow a simplified approach. In so doing, we assume that the autoinducers bind to receptors that enhance the expression of a particular gene, and its production also directly depends on the bacteria population (e. g. <cit.>). When a bacterium responds to an external stimulus by increasing the amount of a cellular component, the process is up-regulated, and it is down-regulated otherwise. To put it another way, the dynamics of the bacteria are regulated by both, autoinducers as well as growth of the bacteria population itself. Hence, the population is considered to be composed of cell sub-populations in two different states: up-regulated and down-regulated bacteria, namely motile m and static s, respectively, and the concentration of autoinducers is given by q. As autoinducers are produced by both sub-populations at a rate r (see <cit.>), and the bacteria can either switch or remain in their category, we assume that:
* the growth rate of the motile bacteria population follows an autocatalytic process and is proportional to the probability that motile bacteria switch on-and-off in their current category. The response rate of motile bacteria to autoinducer concentration is then given by k_1 and is up-regulated by its own production and down-regulated by the production of static bacteria; that is, the dynamics follows an activator-inhibitor-like behaviour, where m acts as the activator and s as the inhibitor. We also assume that the media may be overpopulated, so γ is a saturation parameter;
* the production of static bacteria is assumed as a result of a cross-catalytic reaction of motile bacteria; this interaction may be taken as an indirect down-regulation process, which is modulated by k_2;
* the entire bacteria population is constantly produced at a rate (1+ε)α, where 0< ε <1 is the ratio between both bacteria sub-populations growth rates. The autoinducers, motile and static bacteria decay at rates μ_1, μ_2 and μ_3, respectively.
Putting everything together, we obtain that the dynamical local interaction is governed by the system
X:{[ q̇ = r(m+s)-μ_1q ,; ṁ = k_1qm^2/[s(1+γ m^2)]+α-μ_2m ,; ṡ = k_2qm^2+ϵα-μ_3s . ].
Essentially, model (<ref>) captures the dynamical behaviour of QS, namely, the concentration of autoinducers depends on the production by bacteria in both states. When this concentration lies beyond a certain threshold, bacteria are expected to react in a synchronised-like way, as has been reported in <cit.>, for instance. We have identified this feature by means of slowly varying a crucial parameter as can be seen further in section <ref>, and analyse the different dynamical scenarios of (<ref>) to find parameter regimes for steady state long term behaviour and the observed periodic oscillations. However, we also find chaos which can be explained by the presence of Shilnikov homoclinic bifurcations.
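For reference, system (<ref>) can be integrated directly. The sketch below uses SciPy, with μ_1=0.2 and μ_2=0.7 as quoted in the numerical section; the remaining rates and the initial condition are placeholder values for illustration only.

import numpy as np
from scipy.integrate import solve_ivp

r, k1, k2, gamma = 1.0, 1.0, 0.5, 0.1      # placeholder reaction rates
alpha, eps = 0.1, 0.5                      # placeholder production parameters
mu1, mu2, mu3 = 0.2, 0.7, 1.0              # mu1, mu2 as used later in the text

def qs_rhs(t, x):
    q, m, s = x
    dq = r * (m + s) - mu1 * q
    dm = k1 * q * m**2 / (s * (1.0 + gamma * m**2)) + alpha - mu2 * m
    ds = k2 * q * m**2 + eps * alpha - mu3 * s
    return [dq, dm, ds]

sol = solve_ivp(qs_rhs, (0.0, 200.0), [0.5, 0.5, 0.5], rtol=1e-8, atol=1e-10)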
The present manuscript is organised as follows: throughout section <ref> we prove that the model solutions are always non-negative for feasible initial conditions. In section <ref> we take into consideration a constant autoinducer concentration and study the local stability of a reduced system. This is followed in Section <ref> by a bifurcation analysis where the role of a key parameter triggers leading nonlinear events, namely Hopf and Bogdanov–Takens bifurcations. Normal forms for both dynamical phenomena are also computed. From there, a thorough numerical bifurcation analysis is performed in section <ref> in the full system, where conditions to the Shilnikov homoclinic chaos mechanism are found and numerically explored. Concluding remarks can be found in section <ref>.
§ NON-NEGATIVE SOLUTIONS FOR REALISTIC INITIAL CONDITIONS
Model (<ref>) is well-posed in the sense that every solution starting from a realistic (positive) initial condition remains non-negative. In what follows we see (<ref>) as a vector field defined in the set Ω:={(q,m,s)∈ℝ^3| q≥0, m≥0, s>0} and use the following standard notation <cit.> for its components: X(q,m,s)=X_1(q,m,s)∂/∂q+X_2(q,m,s)∂/∂m+X_3(q,m,s)∂/∂s.
Let us study the behaviour of X in ∂Ω to show that Ω is invariant.
Let us subdivide ∂Ω into the following coordinate planes: ∂Ω=ω_qm∪ω_qs∪ω_ms, where ω_qm={(q,m,s)∈ℝ^3| q≥0, m≥0, s=0}, ω_qs={(q,m,s)∈ℝ^3| q≥0, m=0, s>0}, and ω_ms={(q,m,s)∈ℝ^3| q=0, m≥0, s>0}.
The restriction of (<ref>) to the plane ω_qs is X(q,0,s)=(rs-μ_1 q)∂/∂q+α∂/∂m-(ϵα+μ_3 s)∂/∂s. Since α>0, the vector field (<ref>) on ω_qs points towards the interior of Ω. Similarly, the restriction of (<ref>) to the plane ω_ms is X(0,m,s)=r(m+s)∂/∂q+(α-μ_2 m)∂/∂m-(ϵα+μ_3 s)∂/∂s. Since r>0, it follows that the vector field X in ω_ms points towards the interior of Ω.
On the other hand, system (<ref>) is not defined in the plane ω_qm. In order to analyse the behaviour of (<ref>) near ω_qm, let us consider the time scaling t↦ st. In this way, (<ref>) becomes
X_0:{[ q̇ = r(m+s)s-μ_1qs,; ṁ = k_1qm^2/(1+γ m^2)+α s-μ_2ms,; ṡ = k_2qm^2s+ϵα s-μ_3s^2. ].
System (<ref>) is topologically equivalent to (<ref>) in Ω. Moreover, (<ref>) is well defined for s=0 and, hence, it can be continually extended to the boundary plane ω_qm⊂∂Ω. The restriction of (<ref>) to the plane ω_qm is X_0(q,m,0)=[k_1qm^2/(1+γ m^2)]∂/∂m. It follows that the (unique) solution of (<ref>) with initial condition (q(0),m(0),s(0))=(q_0,m_0,0)∈ω_qm is given by
{[ q(t) = q_0,; m(t) = [-1+k_1q_0m_0t+γ m_0^2+√(4γ m_0^2+(k_1tq_0m_0+γ m_0^2-1)^2)]/(2γ m_0),; s(t) = 0. ].
As a consequence, the set ω_qm is an invariant plane of (<ref>) which consists of a continuum of straight lines
parallel to the m-axis parameterized by (<ref>). Moreover, both axes q=0 and m=0 consist of a continuum of equilibria.
Let us now study the behaviour of orbits of (<ref>) near the invariant plane ω_qm. Let us search for the solution of (<ref>) with initial condition (q(0),m(0),s(0))=(q_0,m_0,δ)∈ int(Ω), with 0<δ≪1 sufficiently small. Specifically, consider a solution of the form:
{[ q(t) = q_0(t)+δ q_1(t)+O(δ^2),; m(t) = m_0(t)+δ m_1(t)+O(δ^2),; s(t) = s_0(t)+δ s_1(t)+O(δ^2), ].
where (q_0(t),m_0(t),s_0(t)) is the solution of (<ref>) in the limit as δ→0 and, hence, it is given by (<ref>). In this way, (<ref>) is expressed as
{[ q(t) = q_0+δ q_1(t)+O(δ^2),; m(t) = m_0(t)+δ m_1(t)+O(δ^2),; s(t) = δ s_1(t)+O(δ^2), ].
with m_0(t)= -1+k_1q_0m_0t+γ m_0^2+√(4γ m_0^2+(k_1tq_0m_0+γ m_0^2-1)^2)2γ m_0. It follows that the higher order terms of (<ref>) must satisfy the initial conditions (q_1(0),m_1(0),s_1(0))=(0,0,1) for n=1, and (q_n(0),m_n(0),s_n(0))=(0,0,0) for n≥2.
Substitution of (<ref>) into (<ref>) leads to the following differential equations for the O(δ) terms of q(t):
δq̇_1(t) = r(m_0(t)+δ m_1(t)+…+δ s_1(t)+…)(δ s_1(t)+…)-μ_1(q_0+δ q_1(t)+…)(δ s_1(t)+…),
= δ rm_0(t)s_1(t)-δμ_1 q_0s_1(t)+O(δ^2).
Hence,
q̇_1(t)=rm_0(t)s_1(t)-μ_1 q_0s_1(t), q_1(0)=0.
Similarly,
ṁ(t)=ṁ_0(t)+δṁ_1(t)+O(δ^2) = k_1q(t)m^2(t)/[1+γ m^2(t)]+α s(t)-μ_2 m(t)s(t).
For 0<δ≪1 sufficiently small we have:
ṁ_0(t)+δṁ_1(t)+O(δ^2) = k_1q_0m_0^2(t)/[1+γ m_0^2(t)]+δ( k_1q_1(t)m_0^2(t)/[1+γ m_0^2(t)] - 2γ k_1q_0m_0^3(t)m_1(t)/[1+γ m_0^2(t)]^2 + 2k_1q_0m_0(t)m_1(t)/[1+γ m_0^2(t)])
+ δα s_1(t)-δμ_2 m_0(t)s_1(t)+O(δ^2).
Thus,
ṁ_1(t)=k_1q_1(t)m_0^2(t)/[1+γ m_0^2(t)] - 2γ k_1q_0m_0^3(t)m_1(t)/[1+γ m_0^2(t)]^2 + 2k_1q_0m_0(t)m_1(t)/[1+γ m_0^2(t)] + α s_1(t)-μ_2 m_0(t)s_1(t), m_1(0)=0.
Likewise,
δṡ_1(t) = k_2(q_0+δ q_1(t)+…)(m_0(t)+δ m_1(t)+…)^2(δ s_1(t)+…)+ϵα(δ s_1(t)+…)
- μ_3(δ s_1(t)+…)^2 = δ k_2 q_0m_0^2(t)s_1(t)+δϵα s_1(t)+O(δ^2).
Hence,
ṡ_1(t)=(k_2 q_0m_0^2(t)+ϵα)s_1(t), s_1(0)=1.
The initial value problem (<ref>) defines any solution of (<ref>) starting at a distance 0<δ≪1 from the invariant plane ω_qm with accuracy O(1/δ). In particular,
since k_2 q_0m_0^2(t)+ϵα>0, it follows from (<ref>) that s_1(t) is an increasing function for every t>0. Hence, the component s(t) in (<ref>) is increasing near the plane s=0. Therefore, no trajectory of (<ref>) with initial condition in int(Ω) can reach the boundary plane ω_qm in (finite or infinite) positive time.
Since (<ref>) is topologically equivalent to (<ref>) in Ω, we conclude from the analysis in this section that every trajectory of the original system (<ref>) starting in Ω remains non-negative.
§ CONSTANT AUTOINDUCER CONCENTRATION
To shed light on the understanding of QS we now analyse the bacteria interaction dynamics under the assumption of a constant autoinducer concentration q_0 > 0. In so doing, we have η_1 = k_1q_0 and η_2 = k_2q_0 as the motile and static bacteria response rates respectively. From (<ref>), upon substituting the rescaled variables for the bacteria population u= (η_2/η_1)m and v= (μ_3η_2/η_1^2)s as well as τ = μ_3 t for time, and the new parameters
K = γ(η_1/η_2)^2 , b = μ_2/μ_3 , a= η_2α/(η_1μ_3) , e = μ_3ε/η_1,
we obtain the following 2D constant autoinducer system
{[ u̇ =f(u,v), f(u,v)=u^2/[v(1+Ku^2)]+a-bu,; v̇ = g(u,v), g(u,v)=u^2+ae-v. ].
Notice that key parameters arise so that:
* K holds a saturation role,
* b characterises the decaying bacteria rate, and
* a and e capture bacteria production-related roles as well as inversely and directly proportional dependence of the constant autoinducers' concentration and decaying rate variations, respectively. The latter shows that parameter product ae only depends on the bacteria response to autoinducers capacities and production of static subpopulation.
§.§ Number of steady-states
As the understanding of interactions between motile and static bacteria is the backbone of communication mechanism via autoinducers, we seek the circumstances affecting the way subpopulations coexist or go extinct. Hence, we first look for positive steady-states. Consider that nullclines of (<ref>) satisfy
v = u^2/[(1+Ku^2)(bu-a)] ,
v = u^2 +ae ,
for u,v>0. Notice that (<ref>) defines a convex parabola with vertex at (0,ae), and (<ref>) has an asymptote at u = a/b. For values of u< a/b we have v<0, which is biologically unfeasible. Now, the nullcline (<ref>) can be recast as a smooth function of u, which is positive for u>a/b and possesses critical points where v'(u)=0. In so doing, we get that -2a+b(u-Ku^3)=0 must be satisfied. This yields, as a consequence of Descartes' rule of signs, the existence of two positive roots such that a/b<u_1^⋆<u_2^⋆ as a,b,K>0, where the lower value corresponds to a local minimum and the greater one to a local maximum. Thus, from both expressions in (<ref>), we get that there are at most three positive steady-states. This is schematically illustrated in Fig. <ref> for a system defined later in (<ref>), which is equivalent to (<ref>) in the interior of the first quadrant and is well-defined along the axis v=0. These sketches provide suggestive evidence that two saddle-node bifurcations may occur as two steady-states are created by varying appropriate parameters; see panels (a) and (c), where the nullclines are tangent.
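Numerically, the positive steady states are the admissible roots of a quintic obtained by substituting (<ref>) into f(u,v)=0. A minimal sketch follows; the parameter values in the call are placeholders.

import numpy as np
from numpy.polynomial import Polynomial

def positive_steady_states(a, b, e, K):
    """Positive equilibria of the 2D system: roots of
    u^2 + (a - b u)(u^2 + a e)(1 + K u^2) = 0 with u > a/b, and v = u^2 + a e."""
    P = (Polynomial([0.0, 0.0, 1.0])
         + Polynomial([a, -b]) * Polynomial([a * e, 0.0, 1.0]) * Polynomial([1.0, 0.0, K]))
    us = [z.real for z in P.roots() if abs(z.imag) < 1e-10 and z.real > a / b]
    return [(u, u**2 + a * e) for u in sorted(us)]

print(positive_steady_states(a=0.5, b=2.0, e=1.0, K=0.1))   # at most three states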
§.§ Local stability of positive steady-states
We now pay attention to the local stability of positive steady-states (u_*,v_*) in (<ref>). The Jacobian matrix of (<ref>) at (u_*,v_*) is
𝐉 = ([ 2u_*/[v_*(1+Ku_*^2)^2]-b -u_*^2/[(1+Ku_*^2)v_*^2]; 2u_* -1; ]) ,
where, by setting 𝒥:=2u_*/[v_*(1+Ku_*^2)^2], the trace is given by tr(𝐉) = 𝒥-b-1, and the determinant by det(𝐉) = b-(1-u_*^2(1+Ku_*^2)/v_*)𝒥.
Notice that the entries J_12 and J_22 of (<ref>) are negative, while J_21 is positive. Hence, local stability features depend on whether the first entry J_11= 𝒥-b is greater or less than one, as follows from the Hartman–Grobman theorem <cit.>. This is summarised as follows:
Let 𝒥:=2u_*/[v_*(1+Ku_*^2)^2].
The local stability of a positive steady-state (u_*,v_*) of (<ref>) is as follows:
* It is locally asymptotically stable if 𝒥< b+1 and b>(1-u_*^2(1+Ku_*^2)/v_*)𝒥.
* It is a repeller if 𝒥> b+1 and b>(1-u_*^2(1+Ku_*^2)/v_*)𝒥.
* It is a saddle point if b<(1-u_*^2(1+Ku_*^2)/v_*)𝒥.
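This classification can be checked numerically at any computed equilibrium; a minimal sketch using the Jacobian (<ref>) is given below, and it can be composed with the steady-state finder above.

import numpy as np

def classify_steady_state(u, v, b, K):
    """Stability of a positive equilibrium from the Jacobian eigenvalues."""
    J = np.array([[2.0 * u / (v * (1.0 + K * u**2)**2) - b,
                   -u**2 / ((1.0 + K * u**2) * v**2)],
                  [2.0 * u, -1.0]])
    ev = np.linalg.eigvals(J)
    if np.all(ev.real < 0.0):
        return "locally asymptotically stable"
    if np.all(ev.real > 0.0):
        return "repeller"
    return "saddle"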
We now analyse the dynamical behaviour in the neighbourhood of the origin.
§.§ Dynamics near the origin
Despite vector field (<ref>) being defined in 𝒟 = {(u,v)∈ℝ^2 | u ≥ 0, v >0 }, a solution with v=0 corresponds to absence of static bacteria, which is a feasible biological scenario. In consequence, similarly to (<ref>) we consider a map t↦ t / v to get
Y_0:{[ u̇ = f̃(u,v) , f̃(u,v)=u^2/(1+Ku^2)+av-buv ,; v̇ = g̃(u,v) , g̃(u,v)=u^2v+aev-v^2 , ].
which is topologically equivalent to (<ref>) and can be continually extended to the boundary axis v=0.
The origin of (<ref>) is a repeller.
Since the eigenvalues of the Jacobian matrix at the origin are λ_1=0 and λ_2=ae>0, with associated eigenvectors v_1=(1,0)^t and v_2=(1/e,1)^t, respectively, it follows that there exists a centre manifold
W^c={(u,v): v=h(u):=a_2u^2+O(u^3), |u|<δ},
for δ>0 sufficiently small <cit.>. Since W^c is invariant one must have
g̃(u,h(u))≡ h'(u)f̃(u,h(u)),
for every (u,v)∈ W^c. It follows from this identity that a_2=0 and h(u)=O(u^3). Hence, the dynamics of (<ref>) restricted to W^c is equivalent to ẋ=x^2+O(x^3). Therefore, the origin of (<ref>) is a repeller.
§ BIFURCATION ANALYSIS IN THE 2D SYSTEM
We now perform a numerical exploration with MatCont <cit.> in order to identify the most relevant bifurcations when parameters e or b are varied slowly: Upon varying b, different decaying bacteria rate scenarios are explored; meanwhile, variations of e reveal the impact of having several response rates as well as different production rates. We focus our analysis on the way bacteria decaying rates coordinate the emergence of oscillations.
Figure <ref> depicts bifurcation diagrams of system (<ref>) for parameters e and b. In the left-hand panel, two saddle-node (SN) points are identified at e_1 and e_2 (see Fig. <ref>(a)), where the system possesses only one steady-state: unstable for e<e_1 and stable for e>e_2, respectively. On the other hand, in Fig. <ref>(b), two SN points are shown at b_1 and b_2 in addition to a Hopf point (denoted by H, for simplicity) at b_3. This Hopf bifurcation is supercritical, as the first numerically computed Lyapunov coefficient is negative. Hence, for b_1<b<b_3 a stable limit cycle arises and coexists with the stable equilibrium at the lower branch. This opens the door for the emergence of synchronising dynamics for b<b_3 under suitable initial conditions.
A two-parameter continuation from the SN points in Fig. <ref>(a) as parameters e and a are slowly varied reveals a cusp point (CP) and a Bogdanov–Takens point (BT); see Fig. <ref>.
In addition, a two-parameter continuation in Fig. <ref> in terms of both b and e starting from the H point shows a Bogdanov–Takens bifurcation as the organising centre of the dynamics.
Upon taking into account Proposition <ref>, the parameter space of interest for b and e can be divided into the following regions:
(I) Unstable Origin and Stable Steady-State: there is a stable steady-state and the origin is unstable; for values of b close to zero the positive steady-state is a stable node, and it becomes a stable focus for slightly larger values, which is a consequence of a transition from negative real eigenvalues to complex ones with negative real part.
(II) Oscillation Dynamics: The occurrence of positive stable periodic solutions shape synchronising dynamical features in the bacteria population; this feature suggests a coordinated communication among bacteria;
(III) Unstable Spiral, Saddle and Stable Node: there are three steady-states from which only one of them is stable. Moreover, notice that region III is subdivided by a bold dashed curve of homoclinic bifurcation emerging from the BT point: Limit cycles exist on the left region (adjacent to II);
(IV) Bistability: this region is composed of three steady-states: a stable node, a stable focus and a saddle.
Notice that Fig. <ref> also shows that the larger the value of e, the smaller the range of values of b that can sustain oscillations. In other words, as e surpasses a certain value no synchronising dynamics are possible, no matter the value of b. Parameter e plays a bacteria production role in the model, it is directly proportional to the fraction of static bacteria production rate and inversely proportional to the constant autoinducer concentration. This could indicate that the oscillatory dynamics of the population is dependent on the interaction between the decaying and production rates of the bacteria or the decaying rates and the autoinducer concentration. The latter scenario will be explored throughout section <ref>.
§.§ Hysteresis and synchronisation
Now, having found such key dynamical events, characterised by parameters b and e, we illustrate these transitions by considering a time-varying parameter b=b(t).
§.§.§ Linearly varying b
The model shows hysteresis as b is varied linearly forth and back. For instance, in Fig. <ref>(a) we fix e=1.0 and linearly vary b from b=1.0 up to b=3.0. As can be seen, the solution decays in an oscillatory fashion for values of b small enough; then, as b crosses the corresponding Hopf bifurcation value, the solution amplitude starts increasing, approaching a stable limit cycle; it finally approaches a stable steady-state after the homoclinic bifurcation. In Fig. <ref>(b), parameter b is traversed backwards, which yields a solution that remains at a stable node steady-state throughout the first two regions it crosses. As it reaches the Oscillation Region, the solution spikes and approaches a limit cycle. Finally, the solution begins to show decaying-amplitude oscillatory behaviour.
Notice that in Fig. <ref>(b), the solution's behaviour is rather different from the one observed when b increases (see Fig. <ref>(a)), which indicates the existence of hysteretic-like features in the system as b is varied forth and back. That is, the solution departs from and returns to the same initial state while visiting different dynamical regimes depending on the parameter's pathway. In other words, as solutions show either oscillatory or non-oscillatory properties with distinctive shape features, this observation may be interpreted as a robust quality of the interaction; biologically, such robustness may be understood as the ability to maintain functionality against external and internal perturbations; see, for instance, <cit.>. Varying b from b=1.0 to b=3.0 and backwards illustrates how the dynamics of the solution change for different decaying bacteria rate ratios at a constant value of e. When b is close to one, the decaying rates of motile and static bacteria are similar; as b grows, the difference between the decaying rates of the two sub-populations increases.
On the other hand, observe that in Fig. <ref>(c), parameter e=1.5 is kept fixed and b varies from b=1.0 up to b=1.5 and remains at this value as the solution converges to a periodic solution. This periodic behaviour shows that the concentration of motile bacteria slightly rises before the static bacteria concentration and also, its exponential decay is faster than the static population one.
§.§.§ Periodically varying b
The bifurcation diagram in Fig. <ref> indicates that parameter b plays a key role in the emergence of oscillatory behaviour. We therefore proceed to illustrate crucial consequences as b varies periodically back and forth between b=1.0 and b=3.0 over a given period of time for a fixed value of e. In doing so, we gain insight into the effect of bacteria decay rates that vary in time. We consider the interval t ∈ [0,2000] and a periodic decay rate parameter given by b(t)=A+Bsin(ω t + φ), where A and B determine the parameter bounds, and ω and φ are the frequency and phase at which b varies. Fig. <ref> shows three scenarios for b varying at different frequencies. These simulations suggest that, in order to observe meaningful dynamics, the value of ω must be small.
For small ω, the solution will indeed show oscillations with a significantly small amplitude, but as the value of ω increases, the solution will behave as a constant solution. In Fig. <ref> the solutions for ω = 0.010, 0.027 and 0.044 are depicted. For ω= 0.010, we see that as b starts to increase, the solution oscillates regularly.
As b(t) varies periodically the solutions alternate between periodic behaviour, oscillatory decay, and constant behaviour. That is, as b increases and the solution enters the oscillation region in Fig. <ref>, the amplitude of the oscillations increases and then remains constant. Notice that, as b starts to decrease, the solution becomes constant until it re-enters the oscillation region, when it spikes. This is characterised by the valley-like windows; see upper panel. When ω=0.027 and 0.044 we see that, as parameter b takes decreasing values, the solution spikes as b enters the oscillation region. However, as ω increases, the stable spiral behaviour lasts less time, until it is no longer present in the solution. These sudden spikes in the solution indicate excitatory dynamics in the system. The previous results convey that the dynamics of the 2D system (<ref>) is sensitive to a fast-varying parameter b, which suggests that the timescale at which the bacteria decay rates change has a rather direct impact on the possible emergence of synchronisation dynamics.
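The experiment of this subsection is reproduced by the following sketch; a, e, K and the initial condition are placeholder values, while A and B are chosen so that b(t) sweeps the interval [1,3].

import numpy as np
from scipy.integrate import solve_ivp

a, e, K = 0.5, 1.0, 0.1                       # placeholder parameters
A, Bamp, omega, phi = 2.0, 1.0, 0.010, 0.0    # b(t) oscillates between 1 and 3

def rhs(t, x):
    u, v = x
    b = A + Bamp * np.sin(omega * t + phi)
    du = u**2 / (v * (1.0 + K * u**2)) + a - b * u
    dv = u**2 + a * e - v
    return [du, dv]

sol = solve_ivp(rhs, (0.0, 2000.0), [1.0, 1.5], max_step=1.0, rtol=1e-8)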
§.§ Andronov–Hopf bifurcation in the 2D system
We now proceed to compute the normal form for the Hopf bifurcation, as it is crucial in shaping the dynamics for the distinguished parameter values of Table <ref>.
Let (U,V)∈ℝ^2_+ be the positive coordinates of an equilibrium of (<ref>) in the first quadrant. Then the set of equations f(U,V)=0 and g(U,V)=0
define implicitly a locally invertible transformation given by Ψ:Λ⟶ℝ^4_+,
(U,V,b,K)↦(a,e,b,K):= ( U(b(1+KU^2)V-U)/(V+KU^2V), (V-U^2)(V+KU^2V)/[U(b(1+KU^2)V-U)], b, K ),
in Λ:={(U,V,b,K)∈ℝ^4_+: b (1 + K U^2) V-U>0, V-U^2>0}.
Then the vector field (<ref>) in parameter space Λ has the form:
{[ x' = x^2/(y+Kx^2y) + U(b(1+KU^2)V-U)/(V+KU^2V) - bx ,; y' = x^2+V-U^2-y, ].
where, for convenience, we use notation (x,y) to name the state variables, and we denote x'=dx/dt, y'=dy/dt. System (<ref>) is C^∞-equivalent to (<ref>) in parameter space Λ. Moreover, the (positive) equilibrium coordinates appear now as the explicit parameters (U,V). In what follows in this section, we will give conditions such that (<ref>) undergoes a generic Hopf bifurcation at (U,V); see <cit.> for more details.
The Jacobian matrix of (<ref>) at (U,V) is
J(U,V)=([ -b+2U/[(1+KU^2)^2V] -U^2/[(1+KU^2)V^2]; 2U -1 ]).
The trace T and determinant D of J(U,V) are given by
T=T(U,V,b,K)=-1-b+2U/[(1+KU^2)^2V]
and
D=det[J(U,V)]= (V+KU^2V)^{-2}D̂, respectively, where
D̂=D̂(U,V,b,K)=2 U^3 + 2 K U^5 - 2 UV + b V^2 + 2 b K U^2 V^2 + b K^2 U^4 V^2.
Whenever T=0 and D>0, the eigenvalues of J(U,V) are purely imaginary and non-trivial. Moreover, since we have partial derivative T_b(U,V,b,K)=-1<0,
the Hopf bifurcation in (<ref>) is generically unfolded by parameter b. In particular, it follows from there that equation T(U,V,b,K)=0 implicitly defines the function b(U,V,K)=-1+2U/[(1+KU^2)^2V].
We now calculate the first Lyapunov quantity <cit.> in order to determine genericity conditions. We follow
the derivation in <cit.> and move the equilibrium (U,V) of the system (<ref>) to the origin via the translation x↦ x+U, y↦ y+V to obtain the equivalent system
{[ x' = U(b - U/(V+KU^2V)) - b(U+x) + (U+x)^2/[(V+y)(1+K(U+x)^2)],; y' = 2Ux + x^2 - y . ].
In particular, the Jacobian matrix of (<ref>) at the equilibrium (0,0) coincides with J(U,V) in (<ref>).
Upon substituting the function b(U,V,K) into J(U,V), we get
J_H(U,V)=([ 1 -U^2/[(1+KU^2)V^2]; 2U -1 ]),
with T_H:=tr[J_H(U,V)]≡0 and det[J_H(U,V)]=D_H:=D|_{T=0}=(2U^3-V^2-KU^2V^2)/[(1+KU^2)V^2].
If D_H>0, then 𝐯_1=(1/(2U),1)^t and 𝐯_2=(-w/(2U), 0)^t are the generalised eigenvectors of J_H(U,V), where w=√(D_H).
The change of coordinates (x,y)^t↦ [𝐯_1 𝐯_2] (x, y)^t, where t stands for transpose, allows us to express system (<ref>) with T_H=0 in the form
([ x'; y' ])=( [ 0 -w; w 0 ])([ x; y ])+([ P(x,y); Q(x,y) ]),
where
P(x,y)=x^2/(4U^2) - wxy/(2U^2) + w^2y^2/(4U^2),
and
[ Q(x,y) = (-16KU^7 - 8K^2U^9 - 2UV^2 + V^3 + 3KU^2V^3 + 3K^2U^4V^3 + K^3U^6V^3 + 8U^5(-1+KV) + 2U^3V(4+3KV))/(4U^2(V+KU^2V)^3w) x^2; + (-4KU^5 + 2UV - V^2 - 3KU^2V^2 - 3K^2U^4V^2 - K^3U^6V^2 - U^3(4+6KV))/(2U^2(1+KU^2)^3V^2) xy; + ((-2U + 6KU^3 + V + 3KU^2V + 3K^2U^4V + K^3U^6V)w)/(4U^2(1+KU^2)^3V) y^2; + (12K^2U^8 + 4K^3U^10 - 4KU^6(-3+KV) + V^2(1+2KV) + U^4(4-8KV-3K^2V^2) - 2U^2V(2+KV+K^2V^2))/(2U(V+KU^2V)^4w) x^3; + (4K^2U^6 - 2V(1+3KV) + 2KU^4(4+3KV) + U^2(4+4KV+6K^2V^2))/(2U(1+KU^2)^4V^3) x^2y; + ((1 - 2K(U^2-3V) - 3K^2(U^4+2U^2V))w)/(2U(1+KU^2)^4V^2) xy^2; + (K(-1+KU^2)w^2)/(U(1+KU^2)^4V) y^3 + O(||(x,y)||^4),; ]
in a Taylor expansion near (x,y)=(0,0).
System (<ref>), equations (<ref>) and w=√(D_H) allow us to use the derivation in <cit.> for the direct calculation of the first Lyapunov quantity L_1. In so doing, we obtain the following expression:
L_1= l_1/[64w^2(1+KU^2)^6U^4V^5],
where
[ l_1 = -32 K^3 U^14 + 16 K^4 U^13 V^2 w^2 + 2 U^2 V^3 (2 + 3 K V^2 (w-1) w) (1 + w^2); - 2 U V^4 (1 + w^2)^2 +
V^5 w ( -1+w - w^2 + w^3); -
4 U^3 V^3 (-1 + (1 + 6 K V) w^2 + 6 K V w^4)+
4 K^3 U^11 V^2 (16 w^2 + K (V + 7 V w^2)); +
U^4 V^2 (-24 K V (1 + w^2) - 8 (3 + w^2) +
15 K^2 V^3 w (-1 + w - w^2 + w^3)); +
4 U^6 V (12 + 9 K^2 V^2 (1 + w^2) + 4 K V (3 + w^2) +
5 K^3 V^4 w (-1 + w - w^2 + w^3)); +
K^2 U^12(-96 - 48 K V + K^4 V^5 w (-1 + w - w^2 + w^3)); +
6 K U^10(-16 - 8 K V + K^4 V^5 w (-1 + w - w^2 + w^3)); +
U^8 (-32 + 48 K V + 24 K^2 V^2 (3 + w^2)+
15 K^4 V^5 w (-1 + w - w^2 + w^3)); -
4 U^5 V^2 (-4 w^2 - 4 K V (1 + w^2) + 3 K^2 V^2 (-1 + w^4)); +
2 K^2 U^9 V^2 (48 w^2 + 8 K (V + 5 V w^2) +
3 K^2 V^2 (1 + 6 w^2 + 5 w^4)); +
8 K U^7 V^2 (8 w^2 + 3 K (V + 3 V w^2) +
K^2 V^2 (2 + 7 w^2 + 5 w^4)).; ]
Thus, we have obtained the following result.
Let (U,V,b,K)∈Λ be such that T_H=0, D_H>0 and l_1≠0.
Then (<ref>) undergoes a codimension-one Hopf bifurcation at the equilibrium (U,V). In particular, if l_1<0 (resp. l_1>0), the Hopf bifurcation is supercritical (resp. subcritical), and a stable (resp. unstable) limit cycle bifurcates from (U,V) under suitable parameter variation.
Whenever condition l_1≠0 in Theorem <ref> does not hold, the Hopf bifurcation of (<ref>) at (U,V) is degenerate. The actual codimension of this singularity —and the stability of further limit cycles that bifurcate— is determined by the sign of the so-called second Lyapunov quantity L_2; see, for instance, <cit.>. However, since the transformation (<ref>) is invertible in the parameter set Λ, numerical evidence suggests that l_1<0 for representative parameter values in Table <ref> and, hence, the bifurcating limit cycle is stable.
Furthermore, bifurcation theory ensures that the existence and stability of this stable periodic orbit persist for an open set of parameter values in the region T(U,V,b,K)>0, i.e., when the focus at (U,V) is unstable <cit.>.
§.§ Bogdanov–Takens bifurcation in the 2D system
We now give conditions such that our model undergoes a Bogdanov–Takens bifurcation under suitable parameter variation at a positive equilibrium. We prove the existence of a germ of a BT bifurcation and show that our system (under certain conditions) is locally topologically equivalent to a normal form of the BT bifurcation. We refer to <cit.> and the references therein for the derivation of the genericity and transversality conditions that need to be verified during this proof.
For the sake of clarity, it is convenient to state the dependence of the vector field (<ref>) on parameters b and K explicitly. Hence, throughout this section we denote X: ℝ^4_+⟶ℝ^2_+,
X(x,y;b,K) = ( x^2/[(1+Kx^2)y] + α - bx, x^2 + αϵ - y ),
where we use notation (x,y) for the state variables.
Also, let us denote the Jacobian matrix of X with respect to the variables (x,y) as
∂X/∂(x,y)(x,y;b,K)=(
[ -(-2x+by+2bKx^2y+bK^2x^4y)/[(1+Kx^2)^2y] -x^2/[(1+Kx^2)y^2]; 2x -1; ]
).
Step 1. We verify that the system may exhibit a singularity with a double zero eigenvalue and geometric multiplicity one.
Consider the set
[ Ω_BT = {(x,y)∈ℝ^2_+: -2 x^5 + y^3>0, 2 x^3 - y^2>0, -2 x^5 + y^3- x^3 y >0,; (x^2 - y)(2 x^5 + x^3 y - y^3)>0, 4 x^5 - x^3 y - 6 x^2 y^2 + 4 y^3≠0}. ]
Then there is a positive equilibrium (U,V)∈Ω_BT of (<ref>) and there are positive parameter values (b^*,K^*)=((-2U^5+V^3)/(2U^5),(2U^3-V^2)/(U^2V^2))∈ℝ^2_+ such that ∂X/∂(x,y)|_(U,V;b^*,K^*) is nilpotent.
In order to prove this Lemma, we look for solutions (x,y,b,K) of the algebraic system
x^2/[(1+Kx^2)y] + α - bx = 0 ,
x^2 + αε - y = 0,
tr[∂X/∂(x,y)(x,y;b,K)] = 0,
det[∂X/∂(x,y)(x,y;b,K)] = 0,
with (x,y)∈Ω_BT, b>0 and K>0.
Solving for b and K in (<ref>)-(<ref>) leads to
b=(-2x^5+y^3)/(2x^5)>0 and K=(2x^3-y^2)/(x^2y^2)>0.
Substitution of (b,K) in (<ref>)-(<ref>) and solving for (α,ε) leads to
α=(-2x^5+y^3-x^3y)/(2x^4) and ε=2x^4(x^2-y)/(2x^5+x^3y-y^3) .
In particular, notice that α>0 and ε>0 since (x,y)∈Ω_BT. The map (x,y)↦(α,ε) defined by (<ref>) has a Jacobian matrix given by
∂(α,ε)/∂(x,y)=(
[ -1+y/(2x^2)-2y^3/x^5 -(x^3-3y^2)/(2x^4); 2x^3(2x^7+5x^5y-x^3y^2-6x^2y^3+4y^4)/(2x^5+x^3y-y^3)^2 (-6x^9+6x^6y^2-4x^4y^3)/(2x^5+x^3y-y^3)^2; ]
) ,
whose determinant is
det[∂(α,ε)/∂(x,y)]=(4x^5-x^3y-6x^2y^2+4y^3)/[x(2x^5+x^3y-y^3)].
Therefore, since det[∂(α,ε)/∂(x,y)]≠0, the Inverse Function theorem ensures that system (<ref>) is locally invertible. Then, the solution of (<ref>) is given by (x,y,b,K)=(U,V,b^*,K^*)∈ℝ^4_+, where (U,V) is locally defined by the inverse map of (<ref>), and b^*=(-2U^5+V^3)/(2U^5) and K^*=(2U^3-V^2)/(U^2V^2).
Hence, at (b,K)=(b^*,K^*) the equilibrium (U,V) of the system (<ref>) has a Jacobian matrix given by
∂X/∂(x,y)(U,V;b^*,K^*)=(
[ 1 -1/(2U); 2U -1; ]
)
with a double zero eigenvalue. In particular, note that (<ref>) is not the null matrix.
The corresponding generalized eigenvectors of (<ref>) are given by
𝐯_1=(1/(2U),1)^t, and 𝐯_2=(1,-1+2U)^t .
It follows that (<ref>) is nilpotent and that the double zero eigenvalue has geometric multiplicity one.
Step 2.
The next goal is to state the following transversality condition of a Bogdanov–Takens bifurcation.
Let (U,V,b^*,K^*) be as in Lemma <ref> and
consider the map Ψ:ℝ^4→ℝ^4,
(x,y,b,K)↦(x^2/[(1+Kx^2)y] + α - bx, x^2 + αϵ - y, T, D),
where T and D are the trace and determinant of the matrix
∂X/∂(x,y)(x,y;b,K),
respectively. Then the map Ψ is regular at (x,y,b,K)=(U,V,b^∗,K^∗).
The 4×4 Jacobian matrix DΨ=DΨ(x,y;b,K) of the map Ψ is
DΨ=(
[ -b-2Kx^3/[(1+Kx^2)^2y]+2x/[(1+Kx^2)y] -x^2/[(1+Kx^2)y^2] -x -x^4/[(1+Kx^2)^2y]; 2x -1 0 0; (2-6Kx^2)/[(1+Kx^2)^3y] -2x/(y+Kx^2y)^2 -1 -4x^3/[(1+Kx^2)^3y]; (8Kx^4+2K^2x^6-2y+6x^2(1+Ky))/[(1+Kx^2)^3y^2] (-4x^3-4Kx^5+2xy)/[(1+Kx^2)^2y^3] 1 -2(x^5+Kx^7-2x^3y)/[(1+Kx^2)^3y^2]; ]
).
After some calculations, we have det[DΨ(x,y,b,K)]=[2x^5/(y+Kx^2y)^4]F(x,y,b,K) with
F(x,y,b,K)=-2Kx^5 - 9xy + by^2 + 2bKx^2y^2 + bK^2x^4y^2 + x^3(10+Ky).
In particular, straightforward substitution and algebraic simplification leads to
F(U,V,b^∗,K^∗)=-(2U/V^2)(4U^5-U^3V-6U^2V^2+4V^3)≠0, since (U,V)∈Ω_BT.
It follows that det[DΨ(U,V,b^∗,K^∗)]≠0,
which ensures that the map Ψ is regular at (x,y,b,K)=(U,V,b^∗,K^∗).
Step 3.
We now construct a change of coordinates to transform X(x,y;b,K) into a normal form of the Bogdanov–Takens bifurcation; we refer to <cit.> once again.
Let (U,V,b^*,K^*) be as in Lemma <ref> and consider the following auxiliary expressions:
G_1=8 U^10 - 2 U^8 V - 4 U^5 V^3 - 3 U^3 V^4 + 2 V^6 , G_2=2 U^5 + 3 U^3 V - 2 V^3 .
If G_1≠0 and G_2≠0, then there exists a smooth, invertible transformation of coordinates, an orientation-preserving time rescaling, and a reparametrization such that, in a sufficiently small neighbourhood of (x,y,b,K)=(U,V,b^∗,K^∗), system (<ref>) is topologically equivalent to a normal form of the codimension-two Bogdanov–Takens bifurcation.
Let us move the equilibrium (U,V)∈Ω_BT of (<ref>) to the origin via the translation x↦ x+U, y↦ y+V to obtain the equivalent system
Y:{[ x' = α + (x+U)(-b + (x+U)/[(1+K(x+U)^2)(y+V)]),; y' = αε + (x+U)^2 - y - V . ].
In particular, the Jacobian matrix of (<ref>) at the equilibrium (0,0) at the bifurcation point (b^*,K^*) coincides with ∂X/∂(x,y)(U,V;b^*,K^*) in (<ref>).
Let 𝐏=[𝐯_1, 𝐯_2] be the matrix whose columns are 𝐯_1 and 𝐯_2; see (<ref>). Next, consider the following change of coordinates:
([ u; v ]) = 𝐏^-1([ x; y ]).
Then, the vector field given by
𝐉=𝐏^-1∘ Y∘𝐏,
is 𝒞^∞-conjugated to Y in (<ref>).
Taking a Taylor expansion of 𝐉(u,v;b,K) with respect to (u,v) around (u,v)=(0,0) and evaluating at (b,K)=(b^∗,K^∗), one obtains
([ u̇; v̇ ]) = ([ 0 1; 0 0 ])([ u; v ]) + (1/2)([ a_20u^2+2a_11uv+a_02v^2+O(||(u,v)||^3); b_20u^2+2b_11uv+b_02v^2+O(||(u,v)||^3) ]),
where we have that: a_20=[1/(4U^10V)](8U^10 - 16U^11 + 4U^9V - 4U^5V^3 + 8U^6V^3 - 3U^3V^4 + 6U^4V^4 + 2V^6 - 4UV^6), b_20=[1/(4U^10V)](8U^10 - 2U^8V - 4U^5V^3 - 3U^3V^4 + 2V^6), and b_11= [1/(2U^9V)](-4U^9 + 8U^10 - 2U^8V + U^4V^3) - [1/(2U^9V)](4U^5V^3 + 3U^3V^4 - 2V^6).
If b_20≠0 and a_20+b_11≠0, then the theory of normal forms for bifurcations <cit.> ensures that our system fulfills the necessary genericity conditions to undergo a codimension two Bogdanov–Takens bifurcation.
In particular, condition G_1≠0 ensures that b_20≠0. Furthermore, after some algebraic manipulation one obtains a_20+b_11= -V^2G_2/(4U^10).
In summary, Lemmas <ref> and <ref>, and inequality G_1G_2≠0 ensure that the genericity and transversality conditions of a codimension two Bogdanov–Takens normal form are satisfied. Hence there exists a smooth, invertible transformation of coordinates, an orientation-preserving time rescaling, and a reparametrization such that, in a sufficiently small neighbourhood of (x,y,b,K)=(U,V,b^∗,K^∗), the system (<ref>) is topologically equivalent to one of the following normal forms of a Bogdanov–Takens bifurcation:
{[ ξ̇_1 = ξ_2,; ξ̇_2 = β_1+β_2ξ_2+ξ_1^2±ξ_1ξ_2, ].
The next theorem is a straightforward consequence from the findings in Lemmas <ref>, <ref> and <ref>, as can be seen in <cit.>, and it summarises the main result in this section.
Let (U,V,b^*,K^*) be as in Lemma <ref> and consider the quantities G_1 and G_2 defined in Lemma <ref>.
Then if (b,K)=(b^*,K^*), system (<ref>) undergoes a codimension-two Bogdanov–Takens bifurcation at (x,y)=(U,V).
§ BIFURCATION AND CHAOS IN THE 3D SYSTEM
In order to understand the global picture that emerges, we perform a bifurcation analysis of system (<ref>) with Auto <cit.> allowing both parameters k_2 and γ to slowly vary. The other parameters remain fixed at their typical values, except for μ_1=0.2 and μ_2=0.7. Fig. <ref> shows the bifurcation diagram in the (k_2,γ)-plane. There, curves of saddle-node (LP) and supercritical Hopf (labelled in this section as HB) bifurcation meet at a BT point. Also, a curve of Shilnikov homoclinic bifurcation (hom) emerges from the point BT. A codimension-two Belyakov point (B_c) on the curve hom marks the onset of chaotic dynamics. To the right of B_c, the homoclinic bifurcation is simple; and to the left of B_c, the homoclinic bifurcation is chaotic. More concretely, one can find horseshoe dynamics in return maps defined in a neighbourhood of the homoclinic orbit. The suspension of the Smale horseshoes forms a hyperbolic invariant chaotic set which contains countably many periodic orbits of saddle-type.
The horseshoe dynamics is robust under small parameter perturbations; hence, the chaotic dynamics persist if the homoclinic connection is broken; see <cit.>.
A second codimension-two point B_+, also called a Belyakov point, lies on hom for smaller values of both k_2 and γ. At the point B_+, the steady-state associated with the homoclinic orbit has repeated stable eigenvalues.
On the segment of hom to the right of B_+, the homoclinic orbit converges to a saddle-focus (i.e., it has a complex pair of stable eigenvalues); and on the segment to the left of B_+, the same equilibrium is a real saddle (i.e., the stable eigenvalues are real).
The bifurcation picture in Fig. <ref> is just a partial representation of the full complexity one may encounter in this region of parameter space. Indeed, the saddle periodic orbits may also undergo further bifurcations such as period-doubling and torus bifurcations <cit.>. Moreover, the presence of the chaotic Shilnikov homoclinic bifurcation and that of the Belyakov points B_c and B_+ imply a very complicated structure (not shown) of infinitely many saddle-node and period-doubling bifurcations of periodic orbits as well as of subsidiary n-homoclinic orbits <cit.>. Moreover, for each of these subsidiary n-homoclinic orbits, the system exhibits countably many horseshoes as in the original homoclinic scenario.
The Belyakov points B_c and B_+ in Fig. <ref> divide the curve hom in three segments. We now fix parameters (k_2,γ) at selected points on each of these segments —labelled as ♢, △, and ∘, respectively— and allow parameters μ_1 and μ_2 to vary. The resulting picture is shown in Fig. <ref> in the top row, while suitable enlargements are presented in the bottom row. While the resulting bifurcation scenarios shown in Fig. <ref> are slightly different to one another, the main ingredients organizing complicated dynamics (such as Belyakov points and homoclinic chaos) remain a common feature in each case. In particular, in the three cases, the bifurcation curves corresponding to the scenario of Fig. <ref> are those occurring for lower values of μ_2 in Fig. <ref>. The second curve LP (that which is present for larger values of μ_2) and associated bifurcation phenomena is not present in Fig. <ref>.
The Hopf bifurcation curve HB in Fig. <ref> is now separated into two segments by a degenerate Hopf point GH, from which a curve of saddle-node of periodic orbits (LPC) emerges.
The curve of homoclinic bifurcation hom now contains further codimension-two resonant (R) and Belyakov (B_-) points. While a single stable cycle bifurcates from B_- (and no extra bifurcations occur), the full picture near the points R includes period-doubling bifurcations and a curve of 2-homoclinic bifurcation (not shown) emanating from the codimension-two points. Of particular interest is the case in panel (c), where the curve hom terminates at a BT point located on the second curve LP. Also, a subcritical Hopf bifurcation emerges from this BT point. In particular, this HB curve lies very close to the saddle-node curve; the two curves meet at a zero-Hopf bifurcation point (ZH).
The exact dynamical features which appear in phase space for parameter values near a ZH point depend on higher order terms in a normal form approach, which is beyond the scope of this work. It suffices to say that this codimension-two point is often related to the emergence of invariant tori and chaotic invariant sets <cit.>.
For larger values of μ_2 and lower values of μ_1, another (non-chaotic) homoclinic bifurcation curve makes a sharp turn near the point ZH.
Finally, Fig. <ref> shows bifurcation diagrams in the (k_2,γ)-plane for μ_2=1 (panel (a)) and μ_2=1.4 (panel (b)). The resulting bifurcation diagrams are similar to that in Fig. <ref>, but for higher values of μ_2. In Fig. <ref> the HB curve has a degenerate Hopf point GH, from which a curve of saddle-node of cycles LPC emerges. All in all, upon comparing figures <ref> and <ref>, an increase of parameter μ_2 produces extra bifurcations that favour chaotic behaviour. Indeed, the hom curve in Fig. <ref> is now chaotic along its entire length. Also, in Fig. <ref>(b), an increase of μ_2 favours the emergence of a ZH point. However, the bifurcation sets are “pushed” towards lower values of k_2 and larger values of γ as μ_2 is increased; compare the scales of the variables in figures <ref> and <ref>.
§ DISCUSSION
Analysing the effect of autoinducers on bacterial populations is crucial due to their key role in the QS mechanism. We presented a model that demonstrates the interaction between autoinducers and two subtypes of bacteria. A thorough analysis of this model revealed parameter sets that lead to oscillation dynamics in both the constant autoinducer sub-model and the full three-component model.
The study was performed with a combination of both rigorous normal form analysis and numerical continuation methods.
Upon analysing the full system, we were able to understand how autoinducer concentrations interact with bacterial populations and influence bacterial communication triggered by these interactions.
The constant-autoinducer system (<ref>) shows that parameter e is inversely proportional to the autoinducer concentration q_0, indicating that changes in e can impact response and production rates in the population. Exploring additional parameters revealed that parameter b, related to bacterial decay rates, leads to a family of limit cycles via a Hopf bifurcation. By conducting two-parameter numerical continuation on parameters b and e, the (b,e)-plane was divided into four regions according to their dynamical events. It was observed that no synchronising behaviour occurs for high values of e, suggesting that oscillatory behaviour may not occur below a certain critical threshold autoinducer concentration; this observation agrees with QS features already recognised in, e.g., <cit.>.
Furthermore, as can be seen from Proposition <ref>, when 𝒥=b/(1-u_*^2/[v_*(1+Ku_*^2)]), we have tr(𝐉)=[(b+1)u_*^2 - v_*(1+Ku_*^2)]/[v_*(1+Ku_*^2)-u_*^2] and det(𝐉)=0, which suggests that other nonlinear events may trigger synchronising behaviour in the bacterial population, not only a Hopf bifurcation. To fully understand this implication and, in consequence, the impact of autoinducer dynamics on the population, we analysed the 3D system and identified k_2 and γ as key parameters for synchronising dynamics. Through Hopf bifurcations, self-sustained oscillations were observed to emerge. A two-parameter continuation analysis revealed that these parameters play a role in the emergence of additional oscillatory behaviour and robust synchronisation properties in the population.
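As a quick numerical check of the expressions above, one can evaluate tr(𝐉) and the critical value of 𝒥 at which det(𝐉)=0 directly from the steady-state coordinates. The following sketch assumes u=u_*, v=v_*, and the parameters K and b are given; it merely illustrates the quoted closed forms, and is not code used in the paper.

def trace_det_at_steady_state(u, v, K, b):
    # Closed forms quoted in the text for the constant-autoinducer system:
    # tr(J) and the critical coupling J* at which det(J) vanishes.
    w = v * (1.0 + K * u**2)              # recurring factor v_*(1 + K u_*^2)
    tr_J = ((b + 1.0) * u**2 - w) / (w - u**2)
    J_crit = b / (1.0 - u**2 / w)         # det(J) = 0 exactly at J = J_crit
    return tr_J, J_crit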
From section <ref>, it was observed that a Shilnikov homoclinic bifurcation branch originates from a Bogdanov–Takens point, leading to a Belyakov point B_c that separates simple and chaotic behaviour along the homoclinic curve. Another Belyakov point B_+ marks the boundary between real-saddle and saddle-focus steady states. By conducting a two-parameter continuation along three pivotal points on the homoclinic curve, varying parameters μ_1 and μ_2 (equivalent to parameter e in the constant-autoinducer system), intricate dynamics were revealed, resulting in resonant and additional Belyakov points. Additionally, a zero-Hopf point appears where a Hopf curve meets a nearby saddle-node curve. These findings support chaotic dynamics, as infinitely many saddle-node and period-doubling bifurcations of periodic orbits and subsidiary n-homoclinic orbits (accompanied by countably many horseshoes for all n∈ℕ) are born, as well as invariant tori and invariant chaotic sets.
The QS model examined in this study captures synchronised oscillatory dynamics in bacterial populations. However, to fully understand the global picture of the QS mechanism, migration dynamics and time delays should also be considered. The inclusion of a time delay is crucial as it influences the dynamics of autoinducer reception and emission, reflecting that bacteria do not react instantaneously to autoinducers (e.g. <cit.>). These aspects will be addressed in future research.
§ ACKNOWLEDGEMENTS
PA thanks Proyecto Interno UTFSM PI-LIR-24-04 and Proyecto Basal CMM-Universidad de Chile. VFBM thanks the financial support by Asociación Mexicana de Cultura A.C.
The Vela supernova remnant: the unique morphological features of jittering jets

Noam Soker and Dmitry Shishkin

Department of Physics, Technion, Haifa, 3200003, Israel; [email protected]; [email protected]
§ ABSTRACT
We identify an S-shaped main-jet axis in the Vela core-collapse supernova (CCSN) remnant (CCSNR) that we attribute to a pair of precessing jets, one of the tens of pairs of jets that, according to the jittering jets explosion mechanism (JJEM), exploded the progenitor of Vela. A main-jet axis is a symmetry axis that runs across the CCSNR and through its center. We identify the S-shaped main-jet axis by the high abundance of the ejecta elements oxygen, neon, and magnesium. We bring the number of identified pairs of clumps and ears in Vela to seven, two of which were shaped by the pair of precessing jets that formed the main-jet axis. The pairs and the main-jet axis form the point-symmetric wind-rose structure of Vela. The other five pairs of clumps/ears have no signatures near the center, only on two opposite sides of the CCSNR. We discuss different possible jet-less shaping mechanisms for such a point-symmetric morphology and dismiss them because they can explain neither the point-symmetric morphology of Vela, nor the S-shaped high-abundance ejecta pattern, nor the enormous energy required to shape the S-shaped structure. Our findings strongly support the JJEM and further severely challenge the neutrino-driven explosion mechanism.
§ INTRODUCTION
Recent studies discuss two alternative theoretical explosion mechanisms of core-collapse supernovae (CCSNe): the delayed neutrino-driven mechanism and the jittering jets explosion mechanism (JJEM; for the most recent review, see ). Recent studies of the neutrino-driven mechanism focus on three-dimensional simulations, starting with the pre-collapse stellar core and ending seconds after the revival of the stalled shock at ≃ 100 km from the newly born neutron star (NS; e.g., ). The magnetorotational explosion, which occurs when the progenitor core rotates rapidly and possesses strong magnetic fields (e.g., ), is included under the neutrino-driven mechanism, as that framework still attributes most CCSNe to neutrino driving and only rare cases to jet driving with a fixed axis.
On the other hand, recent studies of the JJEM focus on finding signatures of jittering jets in CCSN remnants (CCSNRs; e.g., ). Last year's findings of such signatures in several CCSNRs, together with other expected signatures of the JJEM, have led to a possible breakthrough in establishing the JJEM as the main, or even sole, explosion mechanism of CCSNe (review by ). Neutrino heating plays a role in the JJEM, but not the primary one. Namely, neutrino heating can help the launching of the jets from the intermittent accretion disks (or belts) around the newly born NS (but magnetic fields are also needed), and neutrinos can boost the jet energy after the jets are launched <cit.>.
In the JJEM, the star is exploded by pairs of jets with varying directions, launched by the newly born NS or, in some cases, a black hole (e.g., ). The source of the stochastic angular momentum of the gas that the NS accretes is the pre-collapse convective angular-momentum fluctuations in the core (e.g., ), amplified by instabilities above the newly born NS, mainly modes of the spiral standing accretion shock instability (SASI; e.g., , for the spiral SASI). Envelope convection can seed the angular-momentum fluctuations in electron-capture supernovae (), or if, for some reason, the accretion of core material does not explode the star and a black hole is formed ().
According to the JJEM, a black hole is formed when, because of a rapidly rotating pre-collapse core, the central newly born NS (before collapsing to a black hole) launches the exploding jets along a fixed axis, which implies an inefficient jet feedback mechanism (e.g., ). A large amount of accreted mass can result in a super-energetic CCSN (e.g., ). There is only a small jittering around the fixed axis.
In the JJEM, there are several to a few tens of jet-launching episodes. The stellar core material that did not collapse onto the newly born NS chokes most of these jets, acquires their energy, and explodes. From that time on, the explosion is similar in many aspects, but not all, to the neutrino-driven explosion mechanism; e.g., there are many instabilities, and the NS acquires a kick. However, the later jets expand more freely and can leave imprints on the ejecta; these jets are still part of the exploding jets and are not post-explosion jets. Each late jet pair can leave two opposite (relative to the center of the explosion) morphological features. If two or more pairs of jets are launched along different axes, the outcome is a point-symmetric morphology. The point-symmetric morphology of the Vela CCSNR is compatible with the JJEM (e.g., ).
In many cases, the two opposite jets in a pair will not be equal in their power and opening angle because of the short-lived launching accretion disk <cit.>.
The JJEM allows the last pair or two of jets to be long-lived and hence to have a sizeable morphological impact, forming a feature that runs from one side of the CCSNR, through the center, to the other side <cit.>. Such are the main-jet axis of SN 1987A (the keyhole structure <cit.>), the main-jet axis of SNR 0540-69.3 <cit.>, and the main-jet axis of the Cygnus Loop (the S-shaped hose <cit.>). In this study, we reveal the main-jet axis of the Vela SNR (Section <ref>) and argue that only jets can shape it. The point-symmetric morphology of the Vela CCSNR leaves only the JJEM as a viable explanation for its explosion and shaping. We add this finding to earlier findings for other CCSNRs and compare the JJEM with possible alternatives for these morphological features; we find that only the JJEM can account for all the morphological properties (Section <ref>).
We summarize this study in Section <ref>.
§ IDENTIFYING THE MAIN AXIS OF THE VELA CCSNR
§.§ The point-symmetric wind-rose of Vela
Earlier studies of Vela identified several clumps; these are clumps A-L in Figure <ref>. <cit.> marked clumps A-F, <cit.> added clump G and drew the line AG, <cit.> added more clump labels, and <cit.> identified the possible clump L. <cit.> argued that clumps K and G are counter to clump A and form a jet-like structure from the explosion process. <cit.> identified clump H2 from X-ray images by <cit.> and extended the point-symmetric wind-rose of Vela from <cit.> to include the symmetry lines (axes) AG, DE, FJ, and HH2.
The high Si abundance of clumps A <cit.>, G, and K <cit.> implies that they originate deep inside the core of Vela's progenitor. <cit.> found clump D to have an ONeMg overabundance, indicating its origin near the remnant's center, as <cit.> previously suggested. <cit.> took ears D and E to compose the main jet-axis of Vela and estimated the total energy of the two jets that inflated these ears to be ≈ 1% of the Vela explosion energy, a very low value. We identify a new main-jet axis (Section <ref>), which, together with the other jet pairs, can bring the total energy of the shaping jets to tens of percent of the explosion energy. According to the JJEM, the rest of the explosion energy is due to earlier jets that did not leave an imprint on the point-symmetric morphology because they exploded the core of the stellar progenitor.
Although clumps B2 and C2, which we define in Figure <ref>, are small, they are (i) prominent relative to their surroundings, and (ii) not smaller than clumps A and L and not much smaller than clump G, which were identified as clumps in the past. The last property is shared by clump L2, which we also identify in Figure <ref>. When we connect B2 to B, C2 to C, and L2 to L (three dashed lines in Figure <ref>), the three lines cross the center defined by the other five symmetry lines (up to the uncertainty in the centers of the clumps at the ends of the lines). In Section <ref>, we argue that pairs LL2 and HH2 belong to one pair of precessing jets.
Three crucial comments regarding the seven pairs of clumps composing Vela's point-symmetric wind-rose are in order here. (1) In the JJEM, the two jets in each pair are expected to be unequal in their opening angles and power because the intermittent accretion disk that launches the pair has no time to fully relax <cit.>. Therefore, the centers of the symmetry lines, marked by blue dots, might miss the lines' cross-point, and not all lines will cross at precisely the same point. (2) Not every pair of clumps necessarily marks the heads of two opposite jets. Dense clumps might form in compressed zones between jet-inflated bubbles. This is observed in the hot gas of clusters of galaxies, e.g., Abell 2597 <cit.>, and was found in hydrodynamical simulations of the early phase of the JJEM <cit.>. (3) The distance of the average center of the lines (blue asterisk) from the NS location (red asterisk), and the distances of the cross-points of the symmetry axes from the NS location, are similar to the distance the NS has moved from the explosion site to its present location. Considering the unequal jets in a pair (point 1), the center of the seven symmetry axes is sufficiently close to both the present location of the NS and its location at the explosion to be associated with the explosion. Namely, pairs of jets exploded the progenitor of the Vela SNR.
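The distance test in comment (3) is straightforward to reproduce numerically. The sketch below uses hypothetical clump coordinates (the true values must be read off the X-ray maps and are not listed in the text) to compute the midpoint of each symmetry line and its offset from the pulsar's position.

import numpy as np

# Hypothetical sky offsets (degrees) of opposite clump pairs and of the
# pulsar; placeholders only -- the actual values come from the X-ray image.
pairs = {
    "AG": ((-1.2,  0.8), ( 1.1, -0.9)),
    "DE": (( 0.9,  1.3), (-1.0, -1.2)),
    "FJ": ((-1.4, -0.3), ( 1.3,  0.4)),
}
ns_now = np.array([0.05, -0.10])   # present pulsar position (hypothetical)

def symmetry_centers(pairs):
    # Midpoint of each clump pair; for a perfect point symmetry all
    # midpoints coincide with the explosion site.
    return {k: (np.asarray(a) + np.asarray(b)) / 2 for k, (a, b) in pairs.items()}

centers = symmetry_centers(pairs)
mean_center = np.mean(list(centers.values()), axis=0)
print({k: float(np.linalg.norm(c - ns_now)) for k, c in centers.items()})
print("mean center offset from NS:", float(np.linalg.norm(mean_center - ns_now)))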
We turn to identify Vela's main-jet axis.
§.§ The main-jet axis of Vela
In this section, we use new results from the eROSITA X-ray telescope to identify Vela's main-jet axis. A main-jet axis is one that runs across the diameter of the SNR, i.e., it has structure inside the main SNR shell in addition to the SNR's outer zones. For example, in the most prominent pair of ears, ears (clumps) D and E have prominent structures in the outer zones of the SNR; however, near the center, there are no signatures related to the symmetry axis connecting them.
<cit.> performed an extensive X-ray study of the Vela SNR using the eROSITA DR1 data. Their results include abundance distributions in the SNR, as derived from a spectral model fitted to the X-ray data. They identify enhanced abundances (relative to solar ratios) of oxygen, neon, and magnesium along a zone extending from north to south through the remnant's center. In Figure <ref>, we present the distribution of the neon abundance. The region of enhanced neon abundance through the center has an S-shaped morphology (as do oxygen and magnesium in the maps they present). The S-shaped structure, which we identify as the main-jet axis and draw with a dotted line in Figure <ref>, includes clumps H and L2 on its northern end, and clump H2, with L further out, on its southern end. We attribute the pairs HH2 and LL2 to the same pair of jets, a precessing jet pair.
In Figure <ref>, we present the Vela SNR X-ray image separated into energy ranges: soft, 0.2-0.7 keV (upper panel); combined, including the highest range in which the SNR is still visible, 1.1-2.3 keV (middle panel); and 0.7-1.1 keV (lower panel). In the upper and lower panels of Figure <ref>, we draw two symmetric sides of an S-shaped axis (dashed lines), different from the one we draw in Figure <ref>. The difference between the two S-shaped axes conveys the uncertainty in the exact center of the north-south S-shaped structure.
We identify another structure that we attribute to the pair of precessing jets that formed the main-jet axis. The images in Figure <ref> reveal a sharp boundary between an inner elongated bright S-shaped structure and a fainter outer structure. We mark this boundary in the middle panel of Figure <ref>. In some segments of the closed boundary the jump in brightness is clear, and in some it is less so. This boundary surrounds the S-shaped main-jet axis. We suggest that the material enclosed by this boundary was shaped by the energetic jets of the main-jet axis.
In Figure <ref>, we draw the point-symmetric wind-rose from Figure <ref>, the dotted S-shaped line from Figure <ref>, and the dashed-blue S-shaped line and dashed-white boundary from Figure <ref> on the same X-ray image of Vela. The point-symmetric structure appears here in its full glory. We emphasize that the two S-shaped lines (dotted and dashed) are not two jet pairs but one precessing pair of jets with an uncertain location; this is the main-jet axis of Vela. The HH2 and LL2 axes do not represent separate jet pairs but rather clumps that belong to the same precessing jet pair.
Near the center, the direction of the S-shaped line that we draw with the dashed blue line in Figures <ref> and <ref> is parallel to the jets' direction of the pulsar of the Vela SNR, PSR B0833-45 (e.g., ). Namely, to the accuracy in determining the S-shaped line (e.g., compare the dashed and dotted S-shaped lines in Figure <ref>), near the center the S-shaped segment is more or less parallel to the spin of the pulsar. However, the symmetry axes HH2 and LL2 are almost perpendicular to the spin direction.
We also note that the pulsar wind nebula (PWN) covers a small region of the sky, only ≃ 8^'' across (e.g., ). Therefore, the shaping of the S-shaped structure cannot be due to the PWN.
§ OBSERVATIONAL EVIDENCE FOR POINT-SYMMETRIC EXPLODING JETS
According to the JJEM, energetic jets, which are expected to explode the star, can account for the properties of the point-symmetric structures of CCSNRs, including pairs of opposite clumps, filaments, ears, lobes, and nozzles.
Other processes can also contribute to the shaping of CCSNRs but cannot explain the majority of the observed point-symmetric morphologies. In Table <ref>, we list four shaping processes and whether or not they can account for specific morphologies of some CCSNRs. Although the interstellar medium (ISM) also influences the morphology of CCSNRs (e.g., for a very recent study), it cannot explain the basic morphological features we study here, in particular point symmetry. The ISM fares worse than the CSM in accounting for the CCSNR properties we are interested in. Only the JJEM can account for all the morphological structures we examine in the table.
We did not include shaping by a possible pulsar wind nebula because, in most of the CCSNRs we study, there is no indication of a pulsar wind nebula (e.g., SN 1987A). Also, in general, the power of a pulsar wind nebula is insufficient to explain the shaping of large structures along the polar directions. Instead, the pulsar wind nebula's shocked material fills the CCSNR volume.
According to the neutrino-driven explosion mechanism, post-explosion jets might shape CCSNRs, such as Cassiopeia A (e.g., ). According to Table <ref>, shaping by post-explosion jets can take place in some cases (only in some cases) if the axes of the jet pairs change between launching episodes and the jets are energetic. This raises the question of why earlier jets could not be launched as well; these early jets would then explode the star. In some cases, the post-explosion jets cannot account for the observed properties. Only exploding jets with varying directions can account for all observed morphologies.
Indeed, <cit.> discuss the formation of pairs of ears by fallback material (see also ), but comment that this outflow is not expected to be energetic.
<cit.> also simulated post-explosion jets, which can shape only the very inner zones of the ejecta; this also holds for jets launched by an NS companion (e.g., ). <cit.> show that very late jets, launched weeks after the explosion, can power a peak in the light curve. However, these jets cannot form large-scale point-symmetric morphologies.
The famous case of SNR W50 is observed to be shaped by precessing jets launched by the central binary system SS433. This is a different category of process that does not belong in this study because it involves a post-explosion active binary system.
We emphasize that all the processes we list can take place. In particular, we expect the interstellar medium (e.g., ), the CSM (e.g., ), and instabilities (e.g., ) to play a role in shaping the majority of CCSNe (the CSM might influence only older CCSNRs, as the CSM of SN 1987A has not yet affected the inner massive ejecta). Also, an NS natal kick is expected in the JJEM, much as in the neutrino-driven explosion mechanism, but avoiding small angles with the main-jet axis (e.g., ).
However, instabilities and the ejecta-CSM interaction smear and dilute the point-symmetric morphology rather than produce it.
Moreover, the hot ejecta itself expands and smears point-symmetric features; hot ejecta results from the explosion itself, from the decay of nickel (nickel bubbles; e.g., ), and from the reverse shock (e.g., ).
Because of these smearing processes, it is hard to identify point-symmetric morphological components in many cases. The NS kick, accompanied by asymmetrical mass ejection, adds to the smearing of the point symmetry. The main properties of instabilities, the ejecta-CSM interaction, and the NS kick in the JJEM (e.g., ) are similar to those in the neutrino-driven explosion mechanism (e.g., NS kick, ). The JJEM may have another process to impart a natal kick to the NS, the kick by early asymmetrical pair (kick-BEAP) mechanism <cit.>. The JJEM has jets that carry more energy than instabilities; hence, the jets can form point-symmetric morphological structures, while instabilities nonetheless somewhat smear the point symmetry.
Other features further support the JJEM.
Jittering in a plane.
The ejecta of Cassiopeia A is concentrated around one plane <cit.>. <cit.> speculated that the torus morphology of a tilted thick disc with multiple jets in Cassiopeia A, as found by observations (e.g., ), results from the tendency of jittering jets in the JJEM to share a plane, up to the fluctuations that change the jets' axes and that can even prevent planar jittering. <cit.> confirmed this speculation by identifying the point symmetry of Cassiopeia A. Because dense clumps, which are concentrated in this plane, are much brighter than their surroundings, the emission from this plane might be biased (e.g., ), implying the possible presence of massive ejecta also perpendicular to this plane. A planar jittering might also have shaped SNR 0540-69.3; this requires further study.
Main-jet axis. This is the case for the `keyhole' structure of SN 1987A, the jet axis of SNR 0540-69.3, and the S-shaped hose that forms the main jet-axis of the Cygnus Loop <cit.>. The explanation in the frame of the JJEM <cit.> is that at the end of the accretion process onto the newly born NS, the mass accretion rate decreases, so the time scale of the angular-momentum fluctuations increases, allowing long-lived jet-launching episodes, in particular the last one. This last pair of jets might form the main-jet axis. Note that this axis need not be along the pre-collapse rotation axis of the core if the pre-collapse rotation is slow.
An interaction with a CSM shapes the outskirts of an SNR; it cannot form a main-jet axis with imprints at the remnant's center, as in the Vela SNR (Section <ref>). Based on the above, we dismiss the suggestion of <cit.> that a CSM shaped the Vela SNR.
§ SUMMARY
The main result of this study is the identification of a main-jet axis in the Vela CCSNR. This is the S-shaped structure we draw in Figures <ref>-<ref>. We base this identification on the high abundance of ejecta material, namely O, Ne, and Mg, as the X-ray analysis of Vela by <cit.> reveals (Figure <ref>), and on the boundary of the X-ray-bright inner zone that we draw in Figures <ref> and <ref>.
In earlier studies (), we discussed the point-symmetric morphology of the Vela CCSNR and its formation in the JJEM based only on outer ears and clumps. Identifying the S-shaped, ejecta-rich main-jet axis has two critical implications. (1) The high abundance of O, Ne, and Mg implies that the S-shaped material was ejected during the explosion, as these metals come from the deep core. (2) The large volume influenced by the precessing jets that we take to have formed the S-shaped main-jet axis (the boundary drawn in Figures <ref> and <ref>) implies that these jets were very energetic. <cit.>, who considered only the pair DE, found that the energy in the jets that inflated these two ears adds up to only ≈ 1% of the Vela explosion energy. Now, with the other pairs and the two precessing jets that shaped the main-jet axis, the energy adds up to a much more significant fraction of the explosion energy. This is compatible with the JJEM, as the early jets in the explosion process, which supplied the rest of the explosion energy, did not leave marks on the morphology.
As discussed in Section <ref>, other shaping processes cannot explain the extension of the main-jet axis through the center, the ejecta abundance pattern, the large volume of the main-jet axis, or the point-symmetric wind-rose morphology (see Table <ref>).
We consider the point-symmetric morphologies of CCSNRs to pose the most severe challenge to the neutrino-driven explosion mechanism, as Table <ref> shows. These point-symmetric CCSNRs might even rule out the neutrino-driven explosion mechanism. We emphasize that neutrino heating does take place, but in boosting the energy of the jittering jets at launching and in their interaction with the inner core, rather than as the primary explosion process <cit.>. Studies have identified point-symmetric morphologies in ten CCSNRs, some with a clear point-symmetric wind-rose and some with more subtle point-symmetric morphologies; we expect this number to increase in 2025.
§ ACKNOWLEDGEMENTS
A grant from the Pazy Foundation supported this research.
[Akashi & Soker(2020)]AkashiSoker2020
Akashi, M., & Soker, N. 2020, , 901, 53, 10.3847/1538-4357/abad35
[Akashi & Soker(2021)]AkashiSoker2021
—. 2021, , 501, 4053, 10.1093/mnras/staa3897
[Akashi & Soker(2022)]AkashiSoker2022
—. 2022, , 930, 59, 10.3847/1538-4357/ac6102
[Andresen et al.(2019)Andresen, Müller, Janka, Summa, Gill, & Zanolin]Andresenetal2019
Andresen, H., Müller, E., Janka, H. T., et al. 2019, , 486, 2238, 10.1093/mnras/stz990
[Antoni & Quataert(2022)]AntoniQuataert2022
Antoni, A., & Quataert, E. 2022, , 511, 176, 10.1093/mnras/stab3776
[Antoni & Quataert(2023)]AntoniQuataert2023
—. 2023, , 525, 1229, 10.1093/mnras/stad2328
[Aschenbach et al.(1995)Aschenbach, Egger, & Trümper]Aschenbachetal1995
Aschenbach, B., Egger, R., & Trümper, J. 1995, , 373, 587, 10.1038/373587a0
[Bear et al.(2024)Bear, Shishkin, & Soker]BearShishkinSoker2024
Bear, E., Shishkin, D., & Soker, N. 2024, In preparation
[Bear & Soker(2023a)]BearSoker2023
Bear, E., & Soker, N. 2023a, Research Notes of the American Astronomical Society, 7, 266, 10.3847/2515-5172/ad1392
[Bear & Soker(2023b)]BearSoker2023RNAAS
—. 2023b, Research Notes of the American Astronomical Society, 7, 266, 10.3847/2515-5172/ad1392
[Bear & Soker(2024)]BearSoker2024
—. 2024, arXiv e-prints, arXiv:2403.07625, 10.48550/arXiv.2403.07625
[Burrows et al.(2024)Burrows, Wang, Vartanyan, & Coleman]Burrowsetal2024
Burrows, A., Wang, T., Vartanyan, D., & Coleman, M. S. B. 2024, , 963, 63, 10.3847/1538-4357/ad2353
[Chiotellis et al.(2021)Chiotellis, Boumis, & Spetsieri]Chiotellisetal2021
Chiotellis, A., Boumis, P., & Spetsieri, Z. T. 2021, , 502, 176, 10.1093/mnras/staa3573
[Chiotellis et al.(2024)Chiotellis, Zapartas, & Meyer]ChiotellisZapartasMeyer2024
Chiotellis, A., Zapartas, E., & Meyer, D. M. A. 2024, , 531, 5109, 10.1093/mnras/stae947
[DeLaney et al.(2010)DeLaney, Rudnick, Stage, Smith, Isensee, Rho, Allen, Gomez, Kozasa, Reach, Davis, & Houck]DeLaneyetal2010
DeLaney, T., Rudnick, L., Stage, M. D., et al. 2010, , 725, 2038, 10.1088/0004-637X/725/2/2038
[Dodson et al.(2003)Dodson, Legge, Reynolds, & McCulloch]Dodson_etal_2003
Dodson, R., Legge, D., Reynolds, J. E., & McCulloch, P. M. 2003, , 596, 1137, 10.1086/378089
[Fateeva et al.(2023)Fateeva, Levenfish, Ponomaryov, Petrov, & Fursov]Fateevaetal2023
Fateeva, S. S., Levenfish, K. P., Ponomaryov, G. A., Petrov, A. E., & Fursov, A. N. 2023, Astronomy Letters, 49, 56, 10.1134/S1063773723020020
[García et al.(2017)García, Suárez, Miceli, Bocchino, Combi, Orlando, & Sasaki]Garciaetal2017
García, F., Suárez, A. E., Miceli, M., et al. 2017, , 604, L5, 10.1051/0004-6361/201731418
[Gilkis & Soker(2014)]GilkisSoker2014
Gilkis, A., & Soker, N. 2014, , 439, 4011, 10.1093/mnras/stu257
[Gilkis & Soker(2016)]GilkisSoker2016
—. 2016, , 827, 40, 10.3847/0004-637X/827/1/40
[Gilkis et al.(2016)Gilkis, Soker, & Papish]Gilkisetal2016
Gilkis, A., Soker, N., & Papish, O. 2016, , 826, 178, 10.3847/0004-637X/826/2/178
[Grefenstette et al.(2017)Grefenstette, Fryer, Harrison, Boggs, DeLaney, Laming, Reynolds, Alexander, Barret, Christensen, Craig, Forster, Giommi, Hailey, Hornstrup, Kitaguchi, Koglin, Lopez, Mao, Madsen, Miyasaka, Mori, Perri, Pivovaroff, Puccetti, Rana, Stern, Westergaard, Wik, Zhang, & Zoglauer]Grefenstetteetal2017
Grefenstette, B. W., Fryer, C. L., Harrison, F. A., et al. 2017, , 834, 19, 10.3847/1538-4357/834/1/19
[Grichener & Soker(2017a)]GrichenerSoker2017ears
Grichener, A., & Soker, N. 2017a, , 468, 1226, 10.1093/mnras/stx534
[Grichener & Soker(2017b)]GrichenerSoker2017
—. 2017b, , 468, 1226, 10.1093/mnras/stx534
[Gvaramadze(2000)]Gvaramadze2000
Gvaramadze, V. 2000, , 274, 195, 10.1023/A:1026512410294
[Helfand et al.(2001)Helfand, Gotthelf, & Halpern]Helfandetal2001
Helfand, D. J., Gotthelf, E. V., & Halpern, J. P. 2001, , 556, 380, 10.1086/321533
[Hwang & Laming(2012)]HwangLaming2012
Hwang, U., & Laming, J. M. 2012, , 746, 130, 10.1088/0004-637X/746/2/130
[Janka & Kresse(2024)]JankaKresse2024
Janka, H. T., & Kresse, D. 2024, arXiv e-prints, arXiv:2401.13817, 10.48550/arXiv.2401.13817
[Janka et al.(2022)Janka, Wongwathanarat, & Kramer]Jankaetal2022spin
Janka, H.-T., Wongwathanarat, A., & Kramer, M. 2022, , 926, 9, 10.3847/1538-4357/ac403c
[Katsuda & Tsunemi(2005)]KatsudaTsunemi2005
Katsuda, S., & Tsunemi, H. 2005, , 57, 621, 10.1093/pasj/57.4.621
[Katsuda & Tsunemi(2006)]KatsudaTsunemi2006
—. 2006, , 642, 917, 10.1086/501434
[Kochanek(2022)]Kochanek2022
Kochanek, C. S. 2022, , 511, 3428, 10.1093/mnras/stac098
[Larsson et al.(2021)Larsson, Sollerman, Lyman, Spyromilio, Tenhu, Fransson, & Lundqvist]Larssonetal2021
Larsson, J., Sollerman, J., Lyman, J. D., et al. 2021, , 922, 265, 10.3847/1538-4357/ac2a41
[Lu et al.(2021)Lu, Yan, Wen, & Fang]LuYanetal2021
Lu, C.-Y., Yan, J.-W., Wen, L., & Fang, J. 2021, Research in Astronomy and Astrophysics, 21, 033, 10.1088/1674-4527/21/2/33
[Mayer et al.(2023)Mayer, Becker, Predehl, & Sasaki]Mayeretal2023
Mayer, M. G. F., Becker, W., Predehl, P., & Sasaki, M. 2023, , 676, A68, 10.1051/0004-6361/202346691
[Meyer et al.(2024a)Meyer, Meliani, Velázquez, Pohl, & Torres]MeyerMelianietal2024
Meyer, D. M. A., Meliani, Z., Velázquez, P. F., Pohl, M., & Torres, D. F. 2024a, , 527, 5514, 10.1093/mnras/stad3495
[Meyer et al.(2024b)Meyer, Velázquez, Pohl, Egberts, Petrov, Villagran, Torres, & Batzofin]MeyerDetal2024
Meyer, D. M. A., Velázquez, P. F., Pohl, M., et al. 2024b, , 687, A127, 10.1051/0004-6361/202449706
[Meyer et al.(2022)Meyer, Velázquez, Petruk, Chiotellis, Pohl, Camps-Fariña, Petrov, Reynoso, Toledo-Roy, Schneiter, Castellanos-Ramírez, & Esquivel]Meyeretal2022
Meyer, D. M. A., Velázquez, P. F., Petruk, O., et al. 2022, , 515, 594, 10.1093/mnras/stac1832
[Milisavljevic & Fesen(2013)]MilisavljevicFesen2013
Milisavljevic, D., & Fesen, R. A. 2013, , 772, 134, 10.1088/0004-637X/772/2/134
[Milisavljevic et al.(2024)Milisavljevic, Temim, De Looze, Dickinson, Laming, Fesen, Raymond, Arendt, Vink, Posselt, Pavlov, Fox, Pinarski, Subrayan, Schmidt, Blair, Rest, Patnaude, Koo, Rho, Orlando, Janka, Andrews, Barlow, Burrows, Chevalier, Clayton, Fransson, Fryer, Gomez, Kirchschlager, Lee, Matsuura, Niculescu-Duvaz, Pierel, Plucinsky, Priestley, Ravi, Sartorio, Schmidt, Shahbandeh, Slane, Smith, Sravan, Weil, Wesson, & Wheeler]Milisavljevicetal2024
Milisavljevic, D., Temim, T., De Looze, I., et al. 2024, , 965, L27, 10.3847/2041-8213/ad324b
[Morse et al.(2006)Morse, Smith, Blair, Kirshner, Winkler, & Hughes]Morseetal2006
Morse, J. A., Smith, N., Blair, W. P., et al. 2006, , 644, 188, 10.1086/503313
[Müller(2023)]Muller2023spin
Müller, B. 2023, , 526, 2880, 10.1093/mnras/stad2881
[Müller(2024)]Muler2024
—. 2024, arXiv e-prints, arXiv:2403.18952, 10.48550/arXiv.2403.18952
[Müller et al.(2024)Müller, Heger, & Powell]Mulleretal2024
Müller, B., Heger, A., & Powell, J. 2024, arXiv e-prints, arXiv:2407.08407, 10.48550/arXiv.2407.08407
[Nagakura et al.(2021)Nagakura, Burrows, Vartanyan, & Radice]Nagakuraetal2021
Nagakura, H., Burrows, A., Vartanyan, D., & Radice, D. 2021, , 500, 696, 10.1093/mnras/staa2691
[Nakamura et al.(2024)Nakamura, Takiwaki, Matsumoto, & Kotake]Nakamuraetal2024
Nakamura, K., Takiwaki, T., Matsumoto, J., & Kotake, K. 2024, arXiv e-prints, arXiv:2405.08367, 10.48550/arXiv.2405.08367
[Orlando et al.(2021)Orlando, Wongwathanarat, Janka, Miceli, Ono, Nagataki, Bocchino, & Peres]Orlandoetal2021
Orlando, S., Wongwathanarat, A., Janka, H. T., et al. 2021, , 645, A66, 10.1051/0004-6361/202039335
[Papish & Soker(2011)]PapishSoker2011
Papish, O., & Soker, N. 2011, , 416, 1697, 10.1111/j.1365-2966.2011.18671.x
[Papish & Soker(2014)]PapishSoker2014Planar
—. 2014, , 443, 664, 10.1093/mnras/stu1129
[Quataert et al.(2019)Quataert, Lecoanet, & Coughlin]Quataertetal2019
Quataert, E., Lecoanet, D., & Coughlin, E. R. 2019, , 485, L83, 10.1093/mnrasl/slz031
[Sankrit et al.(2003)Sankrit, Blair, & Raymond]Sankritetal2003
Sankrit, R., Blair, W. P., & Raymond, J. C. 2003, , 589, 242, 10.1086/374591
[Sapienza et al.(2021)Sapienza, Miceli, Peres, Bocchino, Orlando, Greco, Combi, García, & Sasaki]Sapienzaetal2021
Sapienza, V., Miceli, M., Peres, G., et al. 2021, , 649, A56, 10.1051/0004-6361/202140412
[Shibagaki et al.(2021)Shibagaki, Kuroda, Kotake, & Takiwaki]Shibagakietal2021
Shibagaki, S., Kuroda, T., Kotake, K., & Takiwaki, T. 2021, , 502, 3066, 10.1093/mnras/stab228
[Shibagaki et al.(2024)Shibagaki, Kuroda, Kotake, Takiwaki, & Fischer]Shibagakietal2024
Shibagaki, S., Kuroda, T., Kotake, K., Takiwaki, T., & Fischer, T. 2024, , 531, 3732, 10.1093/mnras/stae1361
[Shishkin et al.(2024)Shishkin, Kaye, & Soker]ShishkinKayeSoker2024
Shishkin, D., Kaye, R., & Soker, N. 2024, arXiv e-prints, arXiv:2408.11014
[Shishkin & Soker(2021)]ShishkinSoker2021
Shishkin, D., & Soker, N. 2021, , 508, L43, 10.1093/mnrasl/slab105
[Shishkin & Soker(2023)]ShishkinSoker2023
—. 2023, , 522, 438, 10.1093/mnras/stad889
[Shishkin & Soker(2024)]ShishkinSoker2024
—. 2024, In preparation
[Sofue(2024)]Sofue2024
Sofue, Y. 2024, arXiv e-prints, arXiv:2408.00260, 10.48550/arXiv.2408.00260
[Soker(2010)]Soker2010
Soker, N. 2010, , 401, 2793, 10.1111/j.1365-2966.2009.15862.x
[Soker(2020)]Soker2020RAA
—. 2020, Research in Astronomy and Astrophysics, 20, 024, 10.1088/1674-4527/20/2/24
[Soker(2022a)]Soker2022nu
—. 2022a, Research in Astronomy and Astrophysics, 22, 095007, 10.1088/1674-4527/ac7cbc
[Soker(2022b)]Soker2022Rev
—. 2022b, Research in Astronomy and Astrophysics, 22, 122003, 10.1088/1674-4527/ac9782
[Soker(2022c)]Soker2022SNR0540
—. 2022c, Research in Astronomy and Astrophysics, 22, 035019, 10.1088/1674-4527/ac49e6
[Soker(2023a)]Soker2023gap
—. 2023a, Research in Astronomy and Astrophysics, 23, 095020, 10.1088/1674-4527/ace9b3
[Soker(2023b)]Soker2023SNRclass
—. 2023b, Research in Astronomy and Astrophysics, 23, 115017, 10.1088/1674-4527/acf446
[Soker(2024a)]Soker2024Rev
—. 2024a, The Open Journal of Astrophysics, 7, 31, 10.33232/001c.117147
[Soker(2024b)]Soker2024key
—. 2024b, Research in Astronomy and Astrophysics, 24, 075006, 10.1088/1674-4527/ad4fc2
[Soker(2024c)]Soker2024NA1987A
—. 2024c, , 107, 102154, 10.1016/j.newast.2023.102154
[Soker(2024d)]Soker2024CF
—. 2024d, The Open Journal of Astrophysics, 7, 49, 10.33232/001c.120279
[Soker(2024e)]Soker2024CounterJet
—. 2024e, The Open Journal of Astrophysics, 7, 12, 10.21105/astro.2311.03286
[Soker(2024f)]Soker2024PNSN
—. 2024f, Galaxies, 12, 29, 10.3390/galaxies12030029
[Tremblay et al.(2018)Tremblay, Combes, Oonk, Russell, McDonald, Gaspari, Husemann, Nulsen, McNamara, Hamer, O'Dea, Baum, Davis, Donahue, Voit, Edge, Blanton, Bremer, Bulbul, Clarke, David, Edwards, Eggerman, Fabian, Forman, Jones, Kerman, Kraft, Li, Powell, Randall, Salomé, Simionescu, Su, Sun, Urry, Vantyghem, Wilkes, & ZuHone]Tremblayetal2018
Tremblay, G. R., Combes, F., Oonk, J. B. R., et al. 2018, , 865, 13, 10.3847/1538-4357/aad6dd
[van Baal et al.(2024)van Baal, Jerkstrand, Wongwathanarat, & Janka]vanBaaletal2024
van Baal, B. F. A., Jerkstrand, A., Wongwathanarat, A., & Janka, H.-T. 2024, , 10.1093/mnras/stae1603
[Velázquez et al.(2023)Velázquez, Meyer, Chiotellis, Cruz-Álvarez, Schneiter, Toledo-Roy, Reynoso, & Esquivel]Velazquezetal2023
Velázquez, P. F., Meyer, D. M. A., Chiotellis, A., et al. 2023, , 519, 5358, 10.1093/mnras/stad039
[Walk et al.(2020)Walk, Tamborra, Janka, Summa, & Kresse]Walketal2020
Walk, L., Tamborra, I., Janka, H.-T., Summa, A., & Kresse, D. 2020, , 101, 123013, 10.1103/PhysRevD.101.123013
[Wang et al.(2024)Wang, Shishkin, & Soker]WangShishkinSoker2024
Wang, N. Y. N., Shishkin, D., & Soker, N. 2024, arXiv e-prints, arXiv:2401.06652, 10.48550/arXiv.2401.06652
[Wang & Burrows(2024)]WangBurrows2024
Wang, T., & Burrows, A. 2024, , 969, 74, 10.3847/1538-4357/ad5009
[Willingale et al.(2003)Willingale, Bleeker, van der Heyden, & Kaastra]Willingaleetal2003
Willingale, R., Bleeker, J. A. M., van der Heyden, K. J., & Kaastra, J. S. 2003, , 398, 1021, 10.1051/0004-6361:20021554
[Wongwathanarat et al.(2013)Wongwathanarat, Janka, & Müller]Wongwathanaratetal2013kick
Wongwathanarat, A., Janka, H. T., & Müller, E. 2013, , 552, A126, 10.1051/0004-6361/201220636
[Wongwathanarat et al.(2015)Wongwathanarat, Müller, & Janka]Wongwathanaratetal2015
Wongwathanarat, A., Müller, E., & Janka, H. T. 2015, , 577, A48, 10.1051/0004-6361/201425025
[Wu & Zhang(2019)]Wuetal2019
Wu, D., & Zhang, M.-F. 2019, Research in Astronomy and Astrophysics, 19, 124, 10.1088/1674-4527/19/9/124
[Yan et al.(2020)Yan, Lu, Wen, Yu, & Fang]YanLuetal2020
Yan, J.-W., Lu, C.-Y., Wen, L., Yu, H., & Fang, J. 2020, Research in Astronomy and Astrophysics, 20, 154, 10.1088/1674-4527/20/9/154
[Zha et al.(2024)Zha, Müller, & Powell]ZhaMullerPowell2024
Zha, S., Müller, B., & Powell, J. 2024, arXiv e-prints, arXiv:2403.02072, 10.48550/arXiv.2403.02072
Alignment-Aware Model Extraction Attacks on Large Language Models
Zi Liang^† Qingqing Ye^† Yanyun Wang^♭ Sen Zhang^†
Yaxin Xiao^† Ronghua Li^† Jianliang Xu^ Haibo Hu^†
†: The Hong Kong Polytechnic University
♭: The Hong Kong University of Science and Technology (Guangzhou)
: Hong Kong Baptist University
September 9, 2024
=============================================================================================================================================================================================================================================================
§ ABSTRACT
Model extraction attacks (MEAs) on large language models (LLMs) have received increasing research attention lately.
Existing attack methods on LLMs inherit the extraction strategies designed for deep neural networks (DNNs) yet neglect the inconsistency between the training task of MEAs and that of LLMs' alignments. As such, they result in poor attack performance.
To tackle this issue, we present Locality Reinforced Distillation (LoRD), a novel model extraction attack algorithm specifically for LLMs. In particular, we design a policy-gradient-style training task, which utilizes victim models' responses as a signal to guide the crafting of preference for the local model.
Theoretical analysis has shown that i) LoRD's convergence procedure in MEAs is consistent with the alignments of LLMs, and ii) LoRD can reduce query complexity while mitigating watermark protection through exploration-based stealing.
Extensive experiments on domain-specific extractions demonstrate the superiority of our method by examining the extraction of various state-of-the-art commercial LLMs.
§ INTRODUCTION
In recent years, we have witnessed the remarkable success of large language models (LLMs) such as ChatGPT <cit.>, Gemini <cit.>, and Claude <cit.>, which are now widely employed in various consumer and industrial applications.
Despite their success, these models may suffer from model extraction attacks (MEAs) <cit.>, where their knowledge is at risk of being stolen by an adversary through a local model that learns from data collected from the victim model.
Besides some "open-source" LLMs (e.g., Alpaca <cit.>), which are trained on GPT-4's chat history, cases of commercial model theft among companies have also been reported recently <cit.>.
Under such a real-world threat, instead of focusing on MEAs against conventional DNNs, which have been extensively studied theoretically <cit.> and empirically <cit.>, a few recent works have turned to exploring model extraction algorithms and theory for LLMs.
For example, Wallace et al. <cit.> propose a monolingual-query-based imitation attack framework to steal machine translation knowledge from generative language models such as GPT-2. Li et al. <cit.> investigate threats of stealing code-related knowledge from LLMs.
However, these studies inherit MEA algorithms from traditional fields, such as computer vision <cit.>, and train the local model via supervised learning, e.g., maximum likelihood estimation (MLE) <cit.>, while neglecting the inconsistency of training tasks between MEAs and the alignments <cit.> of modern LLMs. As shown in Figure <ref>, modern LLMs typically employ alignment via reinforcement learning, which is missing in the local-model training of conventional MEAs. As a result, these attacks usually suffer from poor performance.
In this paper, we challenge the effectiveness of MLE in stealing a reinforcement-learning-aligned LLM, by analyzing its potential drawbacks as follows:
* Low query efficiency. Current LLM-oriented MEAs suffer from unacceptably large query counts because they must collect enough generated responses, which entails a complexity exponential in the number of generated tokens.
* Vulnerability against defenses. Learning directly from the victim model's responses can cause local models to inadvertently incorporate the watermarks <cit.> embedded in victim models. The residue of such watermarks makes the extraction process less stealthy and can even serve as provenance evidence of model theft.
Motivated by these limitations, we propose LoRD (Locality Reinforced Distillation), a query-efficient and watermark-resistant model extraction attack whose training paradigm is consistent with LLMs' alignments.
However, stealing LLMs via this paradigm is challenging. The main reason is that the core building block of LLMs' alignments, reinforcement learning with human feedback (RLHF) <cit.>, heavily relies on the feedback signal of human annotators, which is difficult to reproduce directly in MEAs.
To tackle this challenge, we develop a policy-gradient-style extraction procedure. This approach regards the locality direction between the generations of the local and victim models as an implicit reward signal, and can thus achieve human-feedback-free reinforcement learning for our extraction attack.
From the theoretical perspective, we show why existing MEAs using MLE and knowledge distillation (KD) are inconsistent with the optimization procedure of LLMs' alignments. Along the way, we also demonstrate why LoRD can achieve stronger watermark resistance and higher query efficiency.
Extensive experiments on five downstream NLP tasks and ten datasets demonstrate that it is feasible to steal a commercial LLM with 175 billion parameters using a pre-trained local model with only 8 billion parameters and just 100 queries. The resulting local model performs statistically similarly to the victim model on tasks not requiring extra knowledge (e.g., data-to-text), and only 0∼3 percentage points lower on tasks requiring it (e.g., translation and QA). This result poses an immediate threat of task-specific extraction on commercial LLMs.
To further draw the capability boundary of such a threat, we also illustrate the "spectrum" of difficulties and upper bounds for extracting LLMs.
To summarize, the contributions of our paper are as follows:
* New Perspective of LLM Alignment for MEAs. We present LoRD, a novel model extraction attack algorithm for LLMs. To the best of our knowledge, it is the first effective and realistic extraction algorithm that is compatible with the alignment procedure of LLMs.
* Theoretical Guarantee. We theoretically prove that the convergence procedure of LoRD in MEAs is consistent with the alignments of LLMs. Furthermore, we demonstrate that LoRD can reduce query complexity while mitigating watermark protection through exploration-based stealing.
* Systematical Evaluation. Extensive experiments on domain-specific extractions demonstrate that our proposed method outperforms current extraction strategies across different downstream NLP tasks.
§ BACKGROUND
In this section, we introduce the preliminaries of this paper. We first introduce policy gradient models, the basic paradigm of LLMs' alignments. Subsequently, we introduce the construction procedure of LLMs. Finally, we summarize and review previous research on model extraction attacks against language models.
§.§ Policy Gradient Models
Policy gradient models (PGMs) are commonly used in reinforcement learning (RL) to optimize an agent based on the actions it decides to take. Represented by TRPO <cit.> and PPO <cit.>, policy gradient models minimize the following objective function:
ℒ_pg,j=- 𝔼̂_j[p_j^r(θ)A_j],
where at each decision step j, p_j^r(θ)=π_θ(a_j|s_j)/π_θ_old(a_j|s_j) refers to the probability ratio defined by the optimized policy π_θ(a_j|s_j) and the initial policy π_θ_old(a_j|s_j), s_j denotes the state of the environment, a_j denotes the action decided by π_θ, and A_j is the de-biased reward (advantage) of a_j. A_j is estimated as the Q-value minus the V-value, i.e., A_j(s_j,a_j)=Q(s_j,a_j)-V(s_j).
Intuitively, the Q-value is the reward for taking action a_j in the given environment state s_j, which can be seen as the label for the policy's decision. The V-value is the estimated expected reward at s_j. Consequently, A_j denotes the surprise of taking action a_j.
To alleviate the "off-the-cliff" phenomenon, in which a single large, harmful gradient update arises from Equation <ref>, PGMs such as PPO and TRPO add regularization terms to avoid large gradients. Specifically, TRPO constrains the distributions of π_θ and π_θ_old with a KL divergence, and PPO wraps a "clip" function around p_j^r(θ) to constrain its bounds.
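For concreteness, a minimal PyTorch sketch of the clipped surrogate that PPO builds from Equation <ref> is given below; the function name and the clipping constant eps are our own choices for illustration, not code from a specific implementation.

import torch

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    # Clipped PPO surrogate: logp_new/logp_old are log pi_theta(a|s) and
    # log pi_theta_old(a|s); advantage is A_j = Q(s_j, a_j) - V(s_j).
    ratio = torch.exp(logp_new - logp_old)            # p_j^r(theta)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()      # maximise the surrogate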
§.§ Language Modeling
Supervised Training (SFT).
Given a pre-trained model with parameters θ, supervised training is essentially a maximum likelihood estimation (MLE) task <cit.>, which fine-tunes θ on the labeled dataset 𝒟_tr^s={(𝐱_i,𝐲_i)|i=1,2,...,N_trs} by minimizing the following objective function:
ℒ_mle= -∏_i^N_trsP_θ(𝐲_i|𝐱_i)=-∏_i^N_trs∏_j^NP_θ(y_i,j|𝐱_i,𝐲_i,<j),
where N denotes the sequence length of 𝐲_i, y_i,j
denotes the j-th token in 𝐲_i, and 𝐲_i,<j={y_i,0,...,y_i,j-1}. The logarithmic formula of
Equation <ref>
can also be seen as a joint cross-entropy loss function:
ℒ_ce=- ∑_i^N_trslogP_θ(𝐲_i|𝐱_i)=-∑_i^N_trs∑_j^NlogP_θ(y_i,j|𝐱_i,y_i,<j).
Equation <ref> is extensively utilized in LLMs' pre-training and fine-tuning procedures. For instance, it can be applied to instruction-following supervised fine-tuning (SFT) with a training set 𝒟_tr, wherein 𝐱_i encompasses the instruction and the task input, while 𝐲_i denotes the reference response.
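In practice, Equation <ref> is implemented as a token-level cross-entropy over the shifted next-token logits. Below is a generic PyTorch sketch (our own illustration; the masking convention via pad_id is an assumption).

import torch
import torch.nn.functional as F

def sft_loss(logits, labels, pad_id=-100):
    # Joint cross-entropy of Eq. <ref>: sum over positions j of
    # -log P_theta(y_j | x, y_<j). logits: (B, N, V) next-token scores
    # already shifted against labels (B, N).
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        ignore_index=pad_id,   # mask prompt/padding positions
    )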
However, aligning LLMs merely by SFT is unrealistic, as MLE aligns the model with the one-hot distribution of 𝐲, and drawing a sufficient variety of examples is challenging due to the "exponential explosion" of token sequences (see Section <ref> for more details). Moreover, providing standard answers for LLMs can sometimes be daunting for annotators, which further slows down, and may even degrade, alignment through direct training.
Therefore, instead of "learning from answers" as in Equation <ref>, learning from preferences has been proposed, which only requires annotators to select the better response from a pair of texts generated by LLMs. Reinforcement learning (RL) <cit.> has thus been introduced to train LLMs from preferences.
Aligning from Preferences.
Employing reinforcement learning in LLMs typically consists of three stages. First, annotators construct a preference dataset 𝒟^pref={(𝐱_i,𝐲_i^+,𝐲_i^-)} by chatting with LLMs and rating their responses, where 𝐲_i^+ and 𝐲_i^- denote the rated positive and negative responses for the dialogue context 𝐱_i, respectively. Then, a reward model R_θ_ϕ(𝐱,𝐲)→𝐫 is trained on 𝒟^pref to simulate the environment and predict the reward values of tokens in given texts. It is trained with a pair-wise loss,
ℒ_r=-∑_(𝐱,𝐲^+,𝐲^-) ∼𝒟^prefσ(R_θ_ϕ(𝐱,𝐲^+)-R_θ_ϕ(𝐱,𝐲^-)),
where σ(·) denotes the sigmoid function.
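A compact PyTorch sketch of this pair-wise objective is shown below; following common reward-model implementations, it optimizes the numerically stable log-sigmoid of the reward gap, which shares its maximizer with the σ(·) form written above.

import torch.nn.functional as F

def reward_pair_loss(r_pos, r_neg):
    # Pairwise loss on (x, y+, y-): push R(x, y+) above R(x, y-).
    # r_pos/r_neg: scalar rewards of the preferred / rejected responses.
    return -F.logsigmoid(r_pos - r_neg).mean()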
Based on the reward model R_θ_ϕ(𝐱,𝐲), we can finally train the language model P_θ by maximizing its reward, i.e.,
max_θ∑_𝐱∼𝒟_qR_θ_ϕ(𝐱,𝐲̂)-β𝔻_KL[P_θ(𝐲|𝐱)||P_θ_init(𝐲|𝐱)],
where 𝒟_q denotes the dataset of text inputs, 𝐲̂∼ P_θ(𝐲|𝐱) denotes a sequence sampled from the model being trained, and θ_init denotes the initial parameters of the model, e.g., the parameters after SFT. The Kullback-Leibler (KL) divergence term, β𝔻_KL[P_θ(𝐲|𝐱)||P_θ_init(𝐲|𝐱)], introduced by TRPO <cit.>, is incorporated to constrain the distribution shift of the generated texts 𝐲̂, where β is a hyperparameter.
Consequently, SFT, shown in Equation <ref>, fine-tunes the pre-trained model θ_pre into an aligned model θ_sft through MLE, and RLHF, outlined in Equation <ref>, further aligns θ_sft towards the target model θ_vic.
There are several alternatives to the standard RLHF approach. Lee et al. <cit.> propose reinforcement learning with AI feedback (RLAIF) as a means to diminish the annotation burden of preference assessment. Besides, some approaches, such as direct preference optimization (DPO) <cit.>, conceptualize the language model itself as the reward model and thus consolidate Equation <ref> and Equation <ref> into a unified supervised, preference-based training task. Since these alternatives change neither the primary target (maximizing rewards) nor the optimization strategy of LLMs' alignments, we consider only the standard formulation of alignment in our theoretical analysis, for simplicity.
§.§ Language Models Extraction
Studies on stealing language models originated with natural language understanding (NLU) models such as BERT <cit.>, and have recently evolved to generative language models, especially large language models.
Krishna et al. <cit.> highlight early recognition of model extraction threats in language models. By constructing text inputs via random vocabulary sampling, they successfully extract the weights of BERT-based APIs. Besides, Rafi et al. <cit.> investigate the feasibility of side-channel model extraction attacks, revealing that by analyzing extra signals from GPU kernels, one can accurately steal the model architecture and its parameters. Subsequent research <cit.> has thoroughly investigated the strategy of ensembling victim models to train a competitor model that surpasses its teachers.
The exploration of generative language model extraction is still in its infancy, with only a handful of studies thus far. Wallace et al. <cit.> investigate imitation attacks on natural language models. By designing monolingual query texts and collecting responses, they successfully extract the knowledge of a simulated machine translation model under black-box settings. This research shows that slight architectural differences do not impede extraction between language models. Li et al. <cit.> explore the potential risks of distilling the code-generation abilities of LLMs into smaller downstream models. Unlike previous research <cit.>, this is the first study that selects LLMs as targets. By collecting large-scale domain-specific samples, they fine-tune a 7-billion-parameter local pre-trained model and show the similarity between the victim and local models in both performance and adversarial samples. However, these two studies employ the MLE loss (Equation <ref>) as the MEA method, neither considering whether MLE is compatible with LLMs' training, especially the alignment procedure described in Section <ref>, nor addressing optimizations related to query efficiency and watermark resistance. Besides, the scope of these studies is limited to stealing specific knowledge in a few downstream domains, while most of the critical aspects of LLMs and the required extraction capabilities, such as query numbers and local model scales, remain unresolved.
In contrast to work on stealing LLMs, IP protection methods have received considerable attention recently. By sampling a stealthy but representative "green word set" from the vocabulary distribution, these methods <cit.> can remap generated words to synonyms or add "watermarked" tokens automatically, and thus effectively certify the output. Besides, strategies such as integrating backdoor embeddings into the representation <cit.> or manipulating the probabilities with crafted sinusoidal noises <cit.> have also been proposed. However, these approaches often presume stringent conditions regarding the victim and suspected models. This paper will further assess the effectiveness of LoRD and current MEAs in evading these black-box watermarking strategies.
§ THREAT MODEL
Adversary's Objective. The adversary's objective is to steal targeted knowledge from LLMs, considering both domain-specific knowledge and the general abilities LLMs acquire during alignment. Specifically, we select machine translation, reasoning, data-to-text, structured text generation, and summarization as the downstream domain-specific tasks. The adversary aims for a query-efficient MEA algorithm, since the numbers of input and generated tokens are counted as costs. Additionally, the MEA methods are expected to be watermark-resistant, i.e., to reduce the risk of exposing the unauthorized stealing.
Targeted Models. We select Llama3-70B, GPT-3.5-turbo, and GPT-4o as the victim models in this paper. Unlike previous works that only deployed simulated local victims (e.g., OPT <cit.>), our selections aim to expose the stealing threat on realistic AI services. Besides, our target models are specifically constrained to LLMs fine-tuned with alignment methods (e.g., RLHF) since they are not only state-of-the-art solutions now but also more valuable due to their human-based alignments.
Adversary's Capabilities. In accordance with the LLM-based AI service APIs, we identify two attack scenarios: black-box and grey-box attacks. In the black-box scenario, the adversary is allowed to obtain only textual responses, while all other information, such as the temperature, sampling strategies, and the hidden states of LLMs, is unseen and inaccessible. On the contrary, a grey-box attack allows the adversary to access the generation probability distribution of tokens. Notice that both MLE and our LoRD method operate under black-box settings, and we only adopt grey-box settings for some particular stealing methods, such as knowledge distillation <cit.>.
Besides, this paper posits that the adversary usually has worse training conditions than the victims. Specifically, query times and the scale of the local model available to the adversary are much smaller than the victims' training datasets and model parameters. This setting has been adopted in previous LLMs' extraction <cit.>. We call it a LaViSH (Large-Victim-Small-Heist) framework, which allows us to estimate the upper bound of MEA risks empirically. For adversaries with more substantial resources, they can train more powerful MEA-based LLMs by leveraging MEA algorithms under our LaViSH settings.
§ LORD: LOCALITY REINFORCED DISTILLATION
In this section, we delve into the details of our model extraction
framework, LoRD (Locality Reinforced
Distillation). Specifically, we first introduce the
extraction procedure in Section <ref>, and then derive
the loss function in Section <ref>.
§.§ Overview
As described in Algorithm <ref>, LoRD follows a reinforcement
learning paradigm, that is, it consists
of several periods, and in each period, the model will learn to
explore new responses and attempt to enhance the model
trained in the last period. However, different from LLMs' alignments,
the agent can neither obtain the reward from the reward model
directly, nor label positive and negative responses manually. This motivates us to develop a locality-reinforced distillation method to address this issue.
Illustrated by Figure <ref>, the model samples two
sentences randomly at period
t-1, which are denoted as 𝐲_t-1^+ and
𝐲_t-1^-, respectively. In a new period t, we first compute the changes of likelihoods
for these two sentences, among the old model P_θ_t-1 and
the current model P_θ_t. These changes of likelihoods, denoted
as Δ_t^+ and Δ_t^-, indicate whether a selected
sentence is locally isotropic (Δ >0) to the victim
model's response 𝐲_vic or not (Δ≤ 0), which
can be seen as the feedback signal for P_θ_t in the current optimization step. For convenience, we may swap 𝐲_t-1^+ with
𝐲_t-1^- to make sure that
Δ_t^+>Δ_t^- always holds. In this way, for pairs
(𝐱,𝐲_vic) we can take 𝐲_t-1^+ as a
locality neighborhood of 𝐲_vic and 𝐲_t-1^- as the
negative sample, all of which can be utilized in the training of P_θ_t. Figure <ref> illustrates this procedure.
Additionally, we take 𝐲_t-1^+ as the positive label under the current scope only when Δ^+ and P_θ_t(𝐲_t-1^+|𝐱) both exceed their respective fixed thresholds τ_1 and τ_2. If these conditions are not met, we will use 𝐲_vic as a substitute for 𝐲_t-1^+ to enable a cold start.
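To make the procedure concrete, the following is a minimal sketch of the pair-selection step of one LoRD period. It is our own reconstruction, not the authors' code: the helper names, the Hugging Face-style causal-LM interface (a model call returning `.logits`), and the externally supplied `sample_fn` are all assumptions.

```python
import torch

def seq_logprob(model, x_ids, y_ids):
    """log P(y | x): sum of the token log-probabilities of y under a causal LM."""
    ids = torch.cat([x_ids, y_ids]).unsqueeze(0)
    logits = model(ids).logits[0]
    logp = torch.log_softmax(logits, dim=-1)
    start = x_ids.numel()
    # The token at position i is predicted by the logits at position i-1.
    return logp[start - 1 : -1].gather(1, y_ids.unsqueeze(1)).sum()

def select_pair(local_model, prev_model, x_ids, y_vic_ids,
                sample_fn, tau1=0.8, tau2=-0.1):
    """Pick (y+, y-) for one query, with the cold-start fallback to y_vic."""
    y_a, y_b = sample_fn(prev_model, x_ids), sample_fn(prev_model, x_ids)
    # Likelihood changes between the current and the previous-period model.
    deltas = [seq_logprob(local_model, x_ids, y) - seq_logprob(prev_model, x_ids, y)
              for y in (y_a, y_b)]
    # Swap so that Delta^+ > Delta^- always holds.
    (y_pos, d_pos), (y_neg, _) = sorted(zip((y_a, y_b), deltas),
                                        key=lambda t: -t[1].item())
    # Cold start: use y_vic as y+ unless both thresholds are exceeded.
    if not (d_pos > tau1 and seq_logprob(local_model, x_ids, y_pos) > tau2):
        y_pos = y_vic_ids
    return y_pos, y_neg
```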
Based on 𝐲_vic, 𝐲_t-1^+, and
𝐲_t-1^-, we now derive the loss function of LoRD.
§.§ Derivation of Loss Functions
From Section <ref>, we know that the loss function of a policy gradient model can be expressed as an objective function to maximize the rewards of decisions (see Equation <ref>) and a regularization term to ensure the stability of training. Following this paradigm, the loss function of LoRD could be
ℒ_LoRD=ℒ_obj+ℒ_reg.
Objective function ℒ_obj.
Following the definition of R_θ_ϕ in Equation <ref>, we use the logarithmic proportion between the positive sample and
the negative sample as the de-biased reward, i.e.,
logP_θ_t(𝐲_t-1^+|𝐱)-logP_θ_t(𝐲_t-1^-|𝐱).
Subsequently, we take log[P_θ_t(𝐲_vic|𝐱)/P_θ_vic(𝐲_vic|𝐱)], the proportion of the victim model to the local model, as the ratio term to scale such a reward.
Therefore, the objective function can be formatted as
ℒ̅_obj =-∑_𝐱∈𝒟_q
|log[P_θ_vic(𝐲_vic|𝐱)/P_θ_t(𝐲_vic|𝐱)]|·log
[P_θ_t(𝐲_t-1^+|𝐱)/P_θ_t(𝐲_t-1^-|𝐱)]
=-∑_𝐱∈𝒟_q [logP_θ_t(𝐲_t-1^+|𝐱)-logP_θ_t(𝐲_t-1^-|𝐱)/|logP_θ_t(𝐲_vic|𝐱)-logP_θ_vic(𝐲_vic|𝐱)|],
where |·| indicates no gradient back propagation, t and 𝐱∼𝒟_q denote the training period and the sampled query text, respectively.
Since it is impractical to obtain the victim model's probability P_θ_vic(𝐲_vic|𝐱) under black-box settings, we simplify Equation <ref> by setting P_θ_vic(𝐲_vic|𝐱)=1, and remove P_θ_t(𝐲_vic|𝐱) to ensure the stability in the black-box extraction, i.e.,
ℒ_obj =-∑_𝐱∈𝒟_qlog
[P_θ_t(𝐲_t-1^+|𝐱)/P_θ_t(𝐲_t-1^-|𝐱)]
=-∑_𝐱∈𝒟_q [logP_θ_t(𝐲_t-1^+|𝐱)-logP_θ_t(𝐲_t-1^-|𝐱)].
Regularization loss ℒ_reg. Current studies <cit.> tend to constrain θ_t with the initial model's generating distribution P_θ_init(·|𝐱) in RLHF. However, this strategy requires loading an extra copy of the initial model's weights, which is memory-consuming. Also, the KL-divergence term 𝔻_KL(P_θ_t||P_θ_init) consists of P_θ_t(·|𝐱), which requires an extra exponential operation on logP_θ_t(·|𝐱) that slows down the forward computation.[logsoftmax is preferred in the implementation of deep learning frameworks <cit.>, as the exponential operation in softmax and the logarithmic operation in cross-entropy can be canceled out by each other.] Therefore, in MEAs, we put forward a regularization based on the victim model, where only the logarithmic part of the KL divergence is employed and only the dimension of the selected tokens is used, i.e.,
ℒ_reg =-∑_𝐱∈𝒟_q clip(log[P_θ_t(𝐲_vic|𝐱)/P_θ_t(𝐲_t-1^-|𝐱)])
=-∑_𝐱∈𝒟_q clip(logP_θ_t(𝐲_vic|𝐱)-logP_θ_t(𝐲_t-1^-|𝐱)).
In Equation <ref>, we utilize a clip(·) function to limit the value of the regularization term, as we expect the regularization term to be used only to avoid the off-the-cliff problem <cit.> in RL's convergence and the distribution shift <cit.> in LLMs' alignments.
Incorporating Equation <ref> with Equation <ref>, we can
reshape the loss function of LoRD as
ℒ_LoRD=ℒ_obj+ℒ_reg
=∑_𝐱∈𝒟_qlog[P_θ_t(𝐲_t-1^-|𝐱)/P_θ_t(𝐲_t-1^+|𝐱)]+clip(log[P_θ_t(𝐲_t-1^-|𝐱)/P_θ_t(𝐲_vic|𝐱)]).
Finally, we wrap ℒ_LoRD with a sigmoid function σ(·) to normalize the loss to the interval (0,1), which is
ℒ=𝔼 σ(log[P_θ_t(𝐲_t-1^-|𝐱)/P_θ_t(𝐲_t-1^+|𝐱)]+clip(log[P_θ_t(𝐲_t-1^-|𝐱)/P_θ_t(𝐲_vic|𝐱)])).
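A corresponding PyTorch-style sketch of this final loss, reusing `seq_logprob` from the sketch above, is given below. It is an assumption-laden illustration: the paper does not specify the bound used inside clip(·), so `clip_max` is a placeholder of ours.

```python
def lord_loss(model, x_ids, y_vic_ids, y_pos_ids, y_neg_ids, clip_max=2.0):
    """sigmoid( log[P(y-)/P(y+)] + clip(log[P(y-)/P(y_vic)]) ), to be minimized."""
    lp_pos = seq_logprob(model, x_ids, y_pos_ids)
    lp_neg = seq_logprob(model, x_ids, y_neg_ids)
    lp_vic = seq_logprob(model, x_ids, y_vic_ids)
    objective = lp_neg - lp_pos                             # log[P(y-)/P(y+)]
    regular = torch.clamp(lp_neg - lp_vic, max=clip_max)    # clipped log[P(y-)/P(y_vic)]
    return torch.sigmoid(objective + regular)
```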
§ THEORETICAL ANALYSIS
This section will compare LoRD with current model extraction methods from a theoretical perspective. We will first reveal the underlying inconsistency between the optimization procedure of LLMs, which typically involves RL-based alignments, and the previous model extraction approaches utilizing MLE and knowledge distillation (KD). Subsequently, we will demonstrate in theory the reasons why LoRD can achieve stronger watermark resistance and higher query efficiency than existing methods.
§.§ Consistency Analysis regarding Different Learning Tasks
As we described in Section <ref>, both existing methods and LoRD learn from the victim model's response 𝐲_vic and the corresponding probability distribution P_θ_vic(· |𝐱) ∈ℝ^V, where V denotes the vocabulary size. Therefore, we first investigate how the local model learns to emulate the distribution of the victim model, P_θ_vic(·|𝐱), under the following three stealing strategies.
Expected Distribution of MLE. We can first reshape the MLE loss into a special formation of Kullback-Leibler divergence with labels of one-hot distributions, that is,
ℒ_ce =-
∑_𝐱,𝐲∼𝒟_trlogP_θ(𝐲_vic|𝐱)
=∑_𝐱,𝐲∼𝒟_tr∑_j^N𝔻_KL[1_y_vic,j||P_θ(·|𝐱,𝐲_vic,<j)],
where 1_y_vic,j is a one-hot vector in which only 1_y_vic,j[y_vic,j]=1 and all the other elements are 0. Equation <ref> demonstrates that MLE learns to maximize the probability of 𝐲_vic,j, without explicit constraints on probabilities across other dimensions.
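In implementation terms, this KL form with one-hot targets is exactly the ordinary token-level cross-entropy on 𝐲_vic; a minimal sketch, using the same conventions as the earlier snippets:

```python
import torch.nn.functional as F

def mle_loss(model, x_ids, y_vic_ids):
    """Token-level cross-entropy against y_vic: the one-hot KL form of the MLE loss."""
    ids = torch.cat([x_ids, y_vic_ids]).unsqueeze(0)
    logits = model(ids).logits[0]
    start = x_ids.numel()
    # Logits at positions start-1 .. L-2 predict the tokens of y_vic.
    return F.cross_entropy(logits[start - 1 : -1], y_vic_ids)
```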
Expected Distribution of KD.
Following a previous work <cit.>, the objective function of KD is
ℒ_kd=𝔻_KL[P_θ_vic(· |𝐱)||P_θ(· |𝐱)]
+T^2·𝔻_KL[SM(P_θ_vic(·
|𝐱)/T)||SM(P_θ(· |𝐱)/T)],
where SM(·) represents the softmax function, and T>1 denotes the temperature used to smooth the targeted distribution P_θ_vic(· | 𝐱). As described in Equation <ref>, knowledge distillation aims to align P_θ(· |𝐱) with P_θ_vic(· |𝐱) in both the original and the smoothed probabilities across all dimensions, which is the most comprehensive objective among these methods.
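A sketch of this grey-box KD objective in PyTorch follows; it is our reconstruction, and the tensor shapes (per-step logits of shape `(seq_len, V)`) and the temperature value are assumptions:

```python
def kd_loss(local_logits, victim_logits, T=2.0):
    """Forward KL on the raw and temperature-smoothed distributions (grey-box:
    requires the victim's per-token probability distribution)."""
    p_vic = torch.softmax(victim_logits, dim=-1)
    kl_raw = F.kl_div(torch.log_softmax(local_logits, dim=-1), p_vic,
                      reduction="batchmean")
    p_vic_T = torch.softmax(victim_logits / T, dim=-1)
    kl_smooth = F.kl_div(torch.log_softmax(local_logits / T, dim=-1), p_vic_T,
                         reduction="batchmean")
    return kl_raw + (T ** 2) * kl_smooth
```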
Expected Distribution of Alignments.
Substituting Equation <ref> into Equation <ref>, we can merge the optimization target of LLMs' alignments as
min_θ* -∑_(𝐱,𝐲^+,𝐲^-) ∼𝒟^prefσ(
logP_θ*(𝐲^+|𝐱) /P_θ*(𝐲^-|𝐱)/P_θ_init(𝐲^+|𝐱)/P_θ_init(𝐲^-|𝐱))
⇒max_θ*∑_(𝐱,𝐲^+,𝐲^-) ∼𝒟^preflog P_θ*(𝐲^+|𝐱) - log
P_θ*(𝐲^-|𝐱),
where θ * denotes the expected parameters of the models as
P_θ*(𝐲|𝐱)=1/Z(x)P_θ_init(𝐲|𝐱)· e^1/βR_ϕ(𝐱,𝐲).
A more detailed derivation is given in Appendix <ref>. Substituting Equation <ref> into Equation <ref>, the expected distribution can be represented as 𝐫_i,j· P_θ_init(· | 𝐱), in which 𝐫_i,j indicates the wrapped distribution gain. This distortion aims to maximize the ratio P_θ (y^+_j|𝐱,𝐲^+_<j) / P_θ (y^-_j|𝐱,𝐲^-_<j), and leaves the probabilities in other dimensions directly unconstrained.
Expected Distribution of LoRD.
Similar to alignments, the expected converging procedure driven by the objective function ℒ_obj is also intended to maximize the ratio between positive and negative samples, i.e., P_θ_t(𝐲_t-1^+|𝐱)/P_θ_t(𝐲_t-1^-|𝐱). Meanwhile, the regularization term P_θ_t(𝐲_vic|𝐱)/P_θ_t(𝐲_t-1^-|𝐱) will guide the models to maximize the ratio between 𝐲_vic and 𝐲_t-1^-. As the “standard response” to be learned, 𝐲_vic can safely be viewed as a positive example. Therefore, we can derive that the optimization target of LoRD is consistent with RLHF's optimization, i.e., both encourage local models to maximize the probability proportion between positive and negative samples.
Similar to Equation <ref> in which the optimized model can be seen as the distortion of the original model P_θ_init, in LoRD the optimized model can be regarded as the distortion of the local model P_θ_0, with P_θ_t(·|𝐱)=𝐫_i,j^tP_θ_t-1(·|𝐱) at each step t, where the distortion term 𝐫_i,j^t is intended to jointly maximize P_θ_t(𝐲_t-1^+|𝐱)/P_θ_t(𝐲_t-1^-|𝐱) and P_θ_t(𝐲_vic|𝐱)/P_θ_t(𝐲_t-1^-|𝐱), while leaving the probabilities in other dimensions unconstrained directly.
Consistency Analysis of Four Objective Functions. Based on the analysis of the above four objective functions, we illustrate their convergence procedure, as exhibited in Figure <ref>. We reach the following proposition.
The learning procedure for LLMs' alignments is consistent with the stealing procedure of LoRD, i.e., they both attempt to maximize the difference between the probabilities of positive and negative samples. Conversely, they are inconsistent with either MLE or KD. In MLE, the objective is maximizing the label probability, while KD aims to minimize the distance among all dimensions.
Despite the inconsistency in their training procedures, the following theorem demonstrates that with enough samples, all these methods will reach the same distribution results.
Ideally, for any loss value of Equations <ref>, <ref>, <ref>, <ref>, or <ref> converging to 0, we have 𝐲^+≡𝐲_vic. Meanwhile, the local model's distribution P_θ(· |𝐱) will approach that of the victim model P_θ_vic(· |𝐱) under all three discussed MEA methods, including LoRD, MLE, and KD.
Theorem <ref> ensures that the local model will converge to the victim model regardless of the choice of MEA methods. Nonetheless, by Proposition <ref>, LoRD still has two benefits: a reduction of the query count (i.e., the size of the query set 𝒟_q), and the watermark resistance of the learned local model. We will elaborate on them in Section <ref>.
§.§ Comparative Analysis on Model Stealing
Query Efficiency. Let N_Q and N_R denote the sequence lengths of the query text and the response text, respectively. For MLE, the ideal number of queries to populate the entire text space is 𝒪(V^N_Q· V^N_R), where V represents the size of the vocabulary. In contrast, the ideal number of queries for LoRD can be reduced to 𝒪(V^N_Q· C), with C being a constant. This constant suggests that the model requires only a fixed and limited number of responses per query, as LoRD itself can automatically generate and explore V^N_R possibilities during training.
Based on the above analysis, a straightforward concern with employing MLE in LLM extraction is that, given the limited query times in real-world practice, it may suffer from incomplete learning, especially for text generation tasks. Consequently, the local model may tend to memorize some specific responses instead of achieving a broad understanding and generation ability. We call such a phenomenon preference overfitting (PO), which indicates that the local model is only effective on a limited set of explored samples, yet does not generalize well to unseen scenarios. In such cases, the local model usually exhibits a more “rugged” decision surface, which appears to overfit the preference sentences in 𝒟_tr, as shown in Figure <ref> (b). Figure <ref> provides the empirical visualization of this analysis.
Watermark Resistance.
Another limitation of prevalent objective functions, such as MLE
and KD, is their susceptibility to watermarks <cit.> in the
output contents, i.e., while stealing knowledge from LLMs via
responses 𝐲_vic, watermarks within them will also be
passively inherited by the local model. Consequently, the generated sentences of the local model may retain residual watermarks, which might be detected as evidence of stealing.
LoRD can mitigate the influences of watermarks
naturally, as it does not learn the likelihood of victim models' responses 𝐲_vic∼𝒟_tr directly, but relies on
𝐲_vic to determine positive and negative labels
from responses generated by the local model.
As depicted in Equation <ref>, LoRD guides the
local model to learn the likelihood of 𝐲_t-1^+ instead
of 𝐲_vic, which means that it will not be explicitly
influenced by watermarks contained in 𝐲_vic. However, the
regularization term ℒ_reg, as well as the replacement
𝐲_t-1^+←𝐲_vic for a cold start, will indeed
introduce watermarks from 𝐲_vic. To address this, we can
reshape Equation <ref> into a convex combination of the objective function and the regularization, i.e.,
ℒ= 𝔼 [(1-λ_1)·(logP_θ_t(𝐲_t-1^+|𝐱)-logP_θ_t(𝐲_t-1^-|𝐱))+λ_1· clip(logP_θ_t(𝐲_vic|𝐱)-logP_θ_t(𝐲_t-1^-|𝐱))],
where 0≤λ_1≤ 1 is the hyperparameter.
When λ_1 is small, the convergence of LoRD will
substantially focus on maximizing
P_θ_t(𝐲_t-1^+|𝐱)/P_θ_t(𝐲_t-1^-|𝐱),
with which the local model will exhibit a strong watermark resistance
ability.
When λ_1 increases, LoRD will tend to rely more on the guidance of 𝐲_vic, resulting in a higher risk of introducing watermarks. In the case of λ_1=1, the local model will converge to the victim model without any exploration and watermark resistance, which might suffer from the defense by watermarks.
From a global perspective, ℒ_obj represents the
exploration and the locality learning ability of LoRD,
which can mitigate the influences of watermarks. On the other hand, ℒ_reg ensures the stability of the training
procedure. Therefore, Equation <ref> characterizes a
trade-off via λ_1 between the stability and the diversity
during stealing, and Equation <ref> can be seen as a special case of Equation <ref> with λ_1=0.5.
§ EXPERIMENTS
§.§ Settings
Datasets. We evaluate MEAs on five mainstream natural language generation (NLG) tasks, including machine translation, text summarization, question answering, structured text generation, and data-to-text. We select ten representative datasets: WMT16 <cit.>, TLDR <cit.>, CNN Daily Mail <cit.>, Samsum <cit.>, WikiSQL <cit.>, Spider <cit.>, E2E-NLG <cit.>, CommonGen <cit.>, PIQA <cit.>, and TruthfulQA <cit.> as benchmarks for our domain-specific evaluation. These datasets cover most of the downstream tasks in natural language generation.
We compare not only the stealing efficacy of different MEA methods, but also the stealing difficulty across different downstream tasks. Table <ref> lists all datasets and backbones used in the paper.
Baselines. As described in Section <ref> and <ref>, we compare LoRD with two types of model extraction methods: maximum likelihood estimation (MLE) and knowledge distillation (KD). For MLE and LoRD, we conduct MEAs under pure black-box attack settings. For KD, the predicted distributions are used specifically under grey-box settings.
Metrics.
For text generation tasks, we evaluate extracted models with one semantic-level and two lexical-level metrics, BERTScore <cit.>, BLEU <cit.>, and Rouge-L <cit.>, all of which are commonly used in NLG evaluation. Regarding reasoning tasks (e.g., QA), we use Precision, Recall, Accuracy, and F1 score as their evaluation metrics.
Implementation Details.
We use Llama3-8B as the local model to learn the outputs generated by victim models.
We set the sequence length to vary from 128 to 4096 depending on the selected task, and the learning rate to 3× 10^-5. Our experiments run on 2 × 80GB Nvidia Tesla A100 GPUs. We execute each training five times and report the mean values and standard deviations in the following sections. For LoRD, we set τ_1 and τ_2 to 0.8 and -0.1, respectively. Besides, we set the period number N_t to 512, and use λ_1=0.5 as the default form of the loss function.
The victim model's response, 𝐲_vic, is generated by token sampling with a temperature of 1, the default setting for OpenAI APIs. The local model also uses token sampling, but with a temperature of 0.8 and Top-P probability clipping <cit.> at 0.98. We use this setting to enhance the stability of generation in local models. Note that we have not incorporated sampling strategies with their corresponding hyperparameters into the design of LoRD. We believe that MEAs considering sampling strategies could inspire more powerful MEA methods, and we leave these improvements for future work.
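For reference, the local model's decoding configuration described above can be expressed with the Hugging Face `transformers` API roughly as follows; the model identifier and the prompt are illustrative, not taken from the paper:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

inputs = tok("Translate to English: ...", return_tensors="pt")
# Token sampling with temperature 0.8 and Top-P clipping at 0.98, as in the paper.
out = model.generate(**inputs, do_sample=True, temperature=0.8,
                     top_p=0.98, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```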
§.§ Stealing Domain-Specific Knowledge
We first select GPT-3.5-turbo, a checkpoint of ChatGPT, as the basic victim
model. This is because its API provides probabilities of
candidate words when generating responses. We employ Llama3-8B <cit.>, a small LLM
with only a 4.5% fraction of parameters than the victim model as our
initial local model. Though this LaViSH setting contradicts previous
assumptions <cit.> in MEA that the copy model should usually be
“wider” or “larger” than the victim model to contain its
knowledge, we believe this setting is more applicable in real-world
scenarios <cit.>. Besides, the number of queries used in this section is
less than 100, a significant reduction compared to previous
studies <cit.>. This is because, in our experiments, copy models
can easily learn the knowledge with a few training samples
and then exhibit only slight improvements afterward. More discussions on
query times can be found in Section <ref>.
Fidelity and limits on stealing.
We first examine the fidelity and limits of a small LLM to steal commercial
LLMs. As shown in Table <ref>, we list the performance
of the victim model and the local model on three tasks, and provide
two MEA methods, local model fine-tuned with MLE (+MLE) and LoRD
(+LoRD), respectively.
In Table <ref>, cells highlighted in red indicate poorer outcomes compared to the victim model, whereas blue signifies results that are on par with or potentially superior to the victim model. The intensity of the red or blue color corresponds to the degree of underperformance or outperformance relative to the victim model.
We can see that the original performance of the
local model is significantly lower than that of the victim model, i.e., a 50%
decrease in BLEU-4 or a 10∼25 point decrease in Rouge-L. Once
we apply MEAs to the local model, its performance rapidly rises to
nearly the level of the victim model, with
gaps of 0∼40 points in BERTScore. These gaps are negligible (e.g., <1% in summarization) in
some tasks, but remain prominent in others such
as reasoning, structured text generation, and machine
translation. This phenomenon indicates that domain-specific model extractions can
effectively learn domain-specific abilities from victim models but may perform poorly if downstream tasks require extra knowledge, such as machine translation and QA.
Comparison among stealing methods. Tables
<ref>, <ref>, and <ref> compare the stealing efficacy of MLE and our
LoRD. The results consistently show that
LoRD significantly outperforms MLE under the same MEA
settings. Besides, for challenging tasks such as reasoning
and translation, LoRD exhibits much larger improvements, which
demonstrates that it can address the preference overfitting problem
discussed in Section <ref> and does enable the local model to learn
the task ability from victim models. However, we also observe that for some
tasks (e.g., summarization), LoRD shows no statistical difference from
MLE, probably because these tasks are relatively simple, where MLE alone has
already achieved results comparable to the victim models.
Tasks difficulties comparison. Based on previous analysis, we
observe that the performance and limitations of MEA depend on
the category of tasks. Additionally, sometimes datasets in the same task exhibit
significant differences in stealing. We
put forward two metrics to measure task difficulties: the fidelity that measures
extraction efficacy compared to victim models, and the
performance-up, which assesses the performance gain before
and after stealing for a given local model.
Formally, given a test
set 𝒟_te={(𝐱,𝐲)} and a corresponding
metric ℳ(hypothesis,reference), the fidelity (F) and performance-up (P) of the
local model θ_N_t can be defined as:
F=∑_𝐱,𝐲∈𝒟_teℳ(𝐲_N_t,𝐲)/∑_𝐱,𝐲∈𝒟_teℳ(𝐲_vic,𝐲), P=∑_𝐱,𝐲∈𝒟_teℳ(𝐲_N_t,𝐲)/∑_𝐱,𝐲∈𝒟_teℳ(𝐲_0,𝐲),
where 𝐲_N_t∼ P_θ_N_t(· |𝐱),
𝐲_0∼ P_θ_0(· |𝐱), and
𝐲_vic∼ P_θ_vic(· |𝐱) denote the
sampled responses from the trained local model (θ_N_t), the initial
local model (θ_0), and the victim model (θ_vic),
respectively. In Figure <ref>, we illustrate a “spectrum” of extracting
various downstream tasks based on these two metrics defined in Equation
<ref>. The figure can assist in recognizing and protecting commercial LLMs' knowledge.
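For concreteness, the two metrics can be computed from per-example responses as in the following sketch; the function names and the data layout (a list of (𝐱,𝐲) pairs plus aligned response lists) are our own assumptions:

```python
def fidelity_and_perfup(metric, test_set, y_trained, y_init, y_vic):
    """Compute fidelity F and performance-up P from per-example responses.

    metric(hypothesis, reference) -> float, e.g., Rouge-L or BLEU;
    test_set is a list of (x, y) pairs aligned with the response lists."""
    s_trained = sum(metric(h, y) for h, (_, y) in zip(y_trained, test_set))
    s_init = sum(metric(h, y) for h, (_, y) in zip(y_init, test_set))
    s_vic = sum(metric(h, y) for h, (_, y) in zip(y_vic, test_set))
    return s_trained / s_vic, s_trained / s_init  # (F, P)
```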
From Figure <ref>, we observe five tasks
forming the following three scenario groups and datasets coming from
the same tasks are mostly in the same group:
* High fidelity and high performance-up (HFHP): These
tasks are challenging for a pre-trained model but can be effectively
learned with the guidance of victim models. This group includes two tasks: data-to-text and structured text generation.
* High fidelity but low performance-up (HFLP): The
initial local model already achieves a comparable performance to
the victim model. QAs and summarization are in this group.
* Low fidelity but high performance-up (LFHP): While MEAs significantly improve the local model's performance, gaps between the local and victim models remain difficult to bridge with domain-specific extraction alone. Machine translation is a representative task whose reasons are explained in Section <ref>.
§ EMPIRICAL ANALYSIS
§.§ Resistance to Watermarks
Current LLM watermarking methods have been shown <cit.> to
be robust against commonly used erasing strategies (e.g., rephrasing),
making watermark removal a distinct challenge. In this section, we
validate the inherent resistance of LoRD to watermarks, suggesting
that LoRD is preliminarily resistant to text watermarking.
As described in Section <ref>, we highlight that
LoRD can extract the victim models' knowledge with two terms: the
straightforward likelihood learning term
logP_θ_t(𝐲_vic|𝐱)-logP_θ_t(𝐲_t-1^-|𝐱)
and the exploration term
logP_θ_t(𝐲_t-1^+|𝐱)-logP_θ_t(𝐲_t-1^-|𝐱),
where we can tune the hyperparameter λ_1 as shown in
Equation <ref> to trade off exploration against convergence speed. Typically, a lower λ_1 encourages the
model to conduct a slower but more diverse and localized exploration
from its own generated text 𝐲_t-1^+, potentially enhancing watermark resistance. In this subsection, we evaluate this analysis
empirically.
Watermarking Details.
Unlike previous experimental settings in
Section <ref>, here we cannot utilize commercial LLMs as
victim models due to the inability to control token sampling
inside LLMs. Instead, we employ Llama3-70B as the victim model and
watermark its outputs based on “green” tokens selection. Following
prior research <cit.>, we separate the predicted
vocabulary into a green word set and a red word set
, assigning them randomly with the seed derived from the hash of
generated tokens at the last generation step. Subsequently, we sample the next token
exclusively from the green set, determined by a certain probability.
In this way, given the hypothesis H_0 that texts are generated
without the knowledge of the green word set, we can estimate the
probability H_0 occurs (P-value) and the Z-score of it for these texts. A high P-value, among with a low Z-score, indicates stronger watermark resistance for MEA algorithms.
Result Analysis.
As depicted in Figure <ref>, we
evaluate the watermark resistance of both MLE and LoRD, and
demonstrate how LoRD's performance varies with different values of
λ_1. The Z-score of
LoRD increases consistently as λ_1 rises,
indicating that the “confidence” in rejecting the hypothesis, i.e.,
the risk of being suspected, grows with λ_1. This
finding coincides with the analysis in Section <ref>. Besides,
we observe that the P-values of LoRD are generally higher than those of MLE when λ_1 is below 0.8, indicating that LoRD typically exhibits stronger watermark resistance
than MLE in most situations. It is noteworthy that this enhanced
resistance does not appear to be a “tax” on MEA efficacy, as the
Rouge-L (F1) scores of LoRD consistently surpass those of MLE and do not exhibit
a significant negative correlation with their P-values.
§.§ Scaling Laws of LLMs
We then explore the capacities essential for stealing domain-specific
knowledge from LLMs. We first analyze the influence of
query times for the adversary, then compare the efficacy of
different sizes of the local model, and finally compare the
fidelity among different victim and local models.
§.§.§ Query Times
We first investigate the influence of query numbers on
MEAs. Specifically, we sample query examples randomly from the query
dataset, starting from 4, and incrementally increase the number until the
performance of the learned model stabilizes. Figure
<ref> illustrates the stealing efficacy of LoRD and MLE
on PiQA.
We observe that the scores of MLE and LoRD
consistently increase as the query number rises, showing that a
larger query number can improve stealing efficacy steadily until
reaching their empirical upper bounds. Additionally, LoRD typically obtains a
higher score than MLE with the same number of queries, and reaches
bottlenecks earlier, which can reduce the required query numbers by
87% compared to MLE. Moreover, in Figure <ref>, the
performance of LoRD exhibits a relatively lower standard variance than
MLE, indicating a more stable training procedure.
§.§.§ Scales of Local Models
As discussed in Section <ref>, we assume the adversary is stealing existing commercial LLMs with a small local model. This raises the question of selecting an appropriate interval of the local model's size. To address this concern, we illustrate the correlation between the local model's size and extraction efficacy on two machine translation tasks, Russian-to-English (ru-en) and German-to-English (de-en), as shown in Figure <ref>. Here, we employ seven OPT models <cit.> as local models, with parameters ranging from 125 million to 30 billion, to minimize the interruptions of factors other than model size.
Figure <ref> shows a sharp distinction between the two machine translation tasks. In the de-en task, the performance of the local model increases steadily with model size, while this trend is not evident in the ru-en task for model sizes smaller than 30 billion. Nevertheless, the performance of a 30 billion parameter learned local model in ru-en cannot even match that of a 1.3 billion parameter local model in the de-en task. This phenomenon suggests that for tasks requiring commonsense knowledge, such as machine translation, the local model should at least possess foundational knowledge of the task (e.g., pre-trained on Russian texts) to learn from victim models effectively. Besides, experiments in BERTScore (F1) show that sometimes LoRD may underperform MLE when the local model has fewer than 1 billion parameters, demonstrating that it is challenging to bootstrap LoRD's exploration with a very small local model. Judging from the rise of LoRD's curves, a model with 2.7 billion parameters appears sufficient to steal domain-specific knowledge from commercial LLMs.
§.§.§ Fidelity under Different Victim and Local Models
We then
evaluate the fidelity of extracting different victim models using
various pre-trained local models. Specifically, we select GPT-3.5,
GPT-4, and GPT-4o as victim models, and employ five state-of-the-art
open-source models, Phi-3 (3.8B), OPT (6.7B), Qwen-2 (7B), Mistral-V3
(7B), and Llama-3 (8B), as local models, as shown in Figure <ref>.
Horizontally, while GPT-4 exhibits a consistently lower extracted
fidelity compared to the other two victim models, vulnerabilities of
the three victim models are generally similar. Vertically, fidelity
of different local models can be significantly impacted by their
performance. For instance, OPT (6.7B) shows a noticeably lower score
compared to the other four models, which indicates that the initial
performance of the local model will affect the performance of
MEAs. Besides, Phi-3 (3.8B) achieves a comparable
fidelity to larger models like Llama-3 (8B), demonstrating that beyond 2.7 billion parameters the size of a local model does not influence the final fidelity in domain-specific stealing, which corroborates the observation in Section <ref>.
§ POTENTIAL DEFENSES
Query Detection. One approach to effectively prevent LoRD attacks is
detecting the distribution of query
texts. This is because LoRD, similar to current MEA algorithms, makes
no improvements to the query samples, indicating that it can be
detected by analyzing the statistical information of the adversary's
queries, such as the number of queries, the distribution of query
contents, and so on.
However, this defense is usually resource-consuming, as it requires
the LLM provider to store all query texts of each user. Besides, the
potential for false positives could adversely affect the
user experience.
More Powerful Watermarks. While we highlight the watermark
resistance of LoRD, watermarking remains one of the most effective
solutions to mitigate MEAs. For example, some model-level watermarks,
such as backdoor-based watermarking <cit.>, can effectively certify
the theft of DNNs. However, model-level watermarking on LLMs remains preliminary. Besides, this technique might not work when the
adversary only steals a subset of knowledge in
which no backdoor is embedded.
§ CONCLUSION
In this paper, we have focused on the extraction problem of commercial
large language models. We proposed LoRD, a practical and realistic
extraction algorithm which is consistent with the alignment procedure
of large language models. Our analysis proved that LoRD can reduce the
query time significantly and mitigate the certification of current
watermarks naturally, surpassing existing MEA algorithms' capabilities. Extensive experiments on domain-specific
stealing demonstrate the superiority of our method.
§ VISUALIZATION OF DISTRIBUTIONS
We also investigate the probability distributions in the generation
procedure among different extraction methods. Specifically, we
visualize these distributions for four models, the victim model
(GPT-3.5-turbo), the initial local model (llama3-8B), and the
learned local models with MLE and LoRD. As plotted in Figure
<ref>, each row in the subfigures refers to the
distribution when generating the i-th token, with each column
element indicating the probability predicted for the
corresponding token index.
We limit the visualization to no more than five token probabilities as currently only GPT-3.5-turbo provides the token prediction probabilities during generation, with a maximum of 5 candidate tokens <cit.>.
From Figure <ref>, we can see that both MLE and LoRD successfully redistribute the generation of the initial local model into a distribution similar to the victim model's, where probabilities, especially of Top-1 tokens, have been well inherited in the extraction. This phenomenon supports our analysis in Theorem <ref>. However, the distributions of MLE-extracted models are consistently sharper than LoRD's, which aligns with our analysis in Section <ref>, where we claim that MLE leads local models to overfit to the preferred sentences (i.e., Top-1 tokens), namely PO, and thus to disrupt the original distributions, yielding unusually low probabilities for other token indexes. This observation also explains why LoRD can resist watermarks, which typically reside in the Top-1 tokens.
To compare MLE and LoRD accurately, we quantify the entropy of these distributions, and compute the KL divergence (𝔻_KL) and the Spearman
correlation (Spear. Corr.) with respect to the victim and initial local
models. As shown in Table <ref>, while the MLE-extracted model
exhibits a lower KL divergence (i.e., higher distribution similarity) with
the victim model than LoRD's on the training dataset,
its KL divergence becomes comparable to LoRD's on the test
set. Meanwhile, its Spearman correlation significantly
decreases from 0.78 to 0.27, which shows that MLE cannot effectively
imitate the prediction behaviors of the victim model when encountering data
beyond the training dataset.
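These statistics can be computed per generation step from the Top-k probability rows, for example as in the sketch below (our own helper; the row-wise layout of `p_vic` and `p_local` is an assumption):

```python
import numpy as np
from scipy.stats import spearmanr, entropy

def compare_token_dists(p_vic, p_local, eps=1e-12):
    """Mean entropy, D_KL(vic || local), and Spearman correlation, computed
    per step over aligned Top-k token distributions (one row per step)."""
    p_vic = np.clip(np.asarray(p_vic), eps, None)
    p_local = np.clip(np.asarray(p_local), eps, None)
    H = np.mean([entropy(row) for row in p_local])
    kl = np.mean([entropy(a, b) for a, b in zip(p_vic, p_local)])
    rho = np.mean([spearmanr(a, b).correlation for a, b in zip(p_vic, p_local)])
    return H, kl, rho
```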
§ LIMITATIONS AND FUTURE WORKS
MEAs on Multi-modal Models. While this paper delves into MEAs
for large language models, it does not address the multi-modal nature of current commercial models <cit.> that integrate various forms of data such as text, images,
voice, and so on. The challenge of extending MEA algorithms to
accommodate these models, which requires extra consideration of the
unified representation of concepts, remains unexplored. Future work
could focus on developing MEA methodologies sensitive to multi-modal data nuances.
Capacities beyond LaViSH Settings. We utilize the LaViSH
setting to describe the model capacity of adversaries in our threat
model (see Section <ref>). However, sometimes, the adversary
might possess comparable or superior training resources to the
victims. Though this paper posits that our MEA algorithms and
theoretical analysis are still compatible with such conditions, we
concede that concrete experimental validation and results beyond
LaViSH settings are not presented here.
Lower-level Extractions.
This study evaluates MEAs at the performance level, i.e., it measures the
extraction effectiveness simply through task performance
metrics, or the similarity of learned distributions to the victim model.
This setting is justified, as performance metrics are
essential for evaluating task-related knowledge and the practical
application of LLMs. However, it does not consider the lower-level
similarities between the victim and local models.
Can we achieve neuron-level alignment in LLM MEAs? How does a LaViSH
setting hurt LLM MEAs? Is it feasible to extract a MoE
(Mixture-of-Experts) <cit.> victim model with a dense local model? These
questions are not addressed in this research.
§ PROOFS OF THEOREMS
§.§ The Deduction of Equation <ref>
From Equation <ref>, we can get that
max_θ∑_𝐱∼𝒟_qR_θ_ϕ(𝐱,𝐲̂)-β𝔻_KL[P_θ(𝐲|𝐱)||P_θ_init(𝐲|𝐱)]
⇒ max_θ∑_𝐱∼𝒟_q∑_𝐲∼ P_θ(·
|𝐱)R_θ_ϕ(𝐱,𝐲)-β [
logP_θ(𝐲|𝐱)-logP_θ_init(𝐲|𝐱)]
⇒ min_θ∑_𝐱∼𝒟_q∑_𝐲∼ P_θ(·
|𝐱)-1/βR_θ_ϕ(𝐱,𝐲)+
logP_θ(𝐲|𝐱)/P_θ_init(𝐲|𝐱)
⇒ min_θ∑_𝐱∼𝒟_q∑_𝐲∼ P_θ(· |𝐱)-log(exp(1/βR_θ_ϕ(𝐱,𝐲))) +
logP_θ(𝐲|𝐱)/P_θ_init(𝐲|𝐱)
⇒ min_θ∑_𝐱∼𝒟_q∑_𝐲∼ P_θ(·
|𝐱)logP_θ(𝐲|𝐱)/exp(1/βR_θ_ϕ(𝐱,𝐲))·
P_θ_init(𝐲|𝐱).
If we define a partition function Z(𝐱) with the formation of
Z(𝐱)=∑_𝐲P_init(𝐲|𝐱)exp(1/βR_θ_ϕ(𝐱,𝐲)),
we can reformat the optimization target as
min_θ∑_𝐱∼𝒟_q∑_𝐲∼ P_θ(·
|𝐱)logP_θ(𝐲|𝐱)/exp(1/βR_θ_ϕ(𝐱,𝐲))·
P_θ_init(𝐲|𝐱)
⇒ min_θ∑_𝐱∼𝒟_q∑_𝐲∼ P_θ(·
|𝐱)logZ(𝐱)· P_θ(𝐲|𝐱)/exp(1/βR_θ_ϕ(𝐱,𝐲))·
P_θ_init(𝐲|𝐱)
-logZ(𝐱).
If we mark
1/Z(𝐱)exp(1/βR_θ_ϕ(𝐱,𝐲))·
P_θ_init(𝐲|𝐱) as
P_θ*(𝐲|𝐱), then we have
min_θ∑_𝐱∼𝒟_q∑_𝐲∼ P_θ(·
|𝐱)logZ(𝐱)· P_θ(𝐲|𝐱)/exp(1/βR_θ_ϕ(𝐱,𝐲))·
P_θ_init(𝐲|𝐱)-logZ(𝐱)
⇒ min_θ∑_𝐱∼𝒟_q∑_𝐲∼ P_θ(·
|𝐱)log
P_θ(𝐲|𝐱)/P_θ*(𝐲|𝐱)-logZ(𝐱).
Because Z(𝐱) is independent of 𝐲, we can deduce that
min_θ∑_𝐱∼𝒟_q∑_𝐲∼ P_θ(·
|𝐱)log
P_θ(𝐲|𝐱)/P_θ*(𝐲|𝐱)-logZ(𝐱)
⇒ min_θ∑_𝐱∼𝒟_q[∑_𝐲∼ P_θ(·
|𝐱)log
P_θ(𝐲|𝐱)/P_θ*(𝐲|𝐱)]-logZ(𝐱)
⇒ min_θ∑_𝐱∼𝒟_q𝔻_KL[P_θ(𝐲|𝐱)||P_θ*(𝐲|𝐱)]-logZ(𝐱).
As we know that Z(𝐱) does not contain θ, the
above optimization target actually minimizes the KL-divergence
between the distribution of P_θ and P_θ*,
demonstrating that θ* is the optimal value of θ that satisfies
P_θ*(𝐲|𝐱)=1/Z(𝐱)exp(1/βR_θ_ϕ(𝐱,𝐲))·
P_θ_init(𝐲|𝐱).
Based on Equation <ref>, we can see that the optimal distribution
of θ is built upon P_θ_init with a distortion, as discussed in Section <ref>.
§.§ Proofs of Theorem <ref>
Guarantee of MLE. From Equation <ref>
we can obtain that when ℒ_ce decreases to 0, the KL divergence between
P_θ(·|𝐱) and P_θ_vic(·|𝐱)
decreases to 0, indicating that
P_θ(·|𝐱) equals P_θ_vic(·|𝐱).
Guarantee of KD. As we know,
𝔻_KL(p,q)≥ 0 for all p and q. Therefore,
if ℒ_kd shown in Equation <ref> equals 0, then both
𝔻_KL[P_θ(·|𝐱)||P_θ_vic(·|𝐱)]
and
𝔻_KL[SM(P_θ(·|𝐱)/T)||SM(P_θ_vic(·|𝐱)/T)]
equal to 0. For the latter one, we have
𝔻_KL[SM(P_θ_vic(·|𝐱)/T)||SM(P_θ(·|𝐱)/T)]
=𝔼_𝐲∼
P_θ_vic(·|𝐱)𝔼_y∈𝐲
logexp(P_θ(y|𝐱,𝐲_p)/T)/∑_y'∈𝐲exp(P_θ_vic(y|𝐱,𝐲_p)/T)/exp(P_θ_vic(y|𝐱,𝐲_p)/T)/∑_y'∈𝐲exp(P_θ(y|𝐱,𝐲_p)/T)
=𝔼_𝐲∼
P_θ_vic(·|𝐱)𝔼_y∈𝐲
logexp((P_θ(y|𝐱,𝐲_p)-P_θ_vic(y|𝐱,𝐲_p))/T)/∑_y'∈𝐲exp(P_θ_vic(y|𝐱,𝐲_p)/T)/∑_y'∈𝐲exp(P_θ(y|𝐱,𝐲_p)/T),
where we can observe that only when P_θ(·|𝐱) equals
P_θ_vic(·|𝐱) can this term reduce to
0. Integrating the analysis of these two terms, we can conclude that
ℒ_kd=0 means the local model's distribution
converges to that of the victim model.
Guarantee of LoRD.
When ℒ shown in Equation <ref> equals 0, the proportions
P_θ_t(𝐲_vic|𝐱)/P_θ_t(𝐲_t-1^-|𝐱)
and
P_θ_t(𝐲_t-1^+|𝐱)/P_θ_t(𝐲_t-1^-|𝐱)
must tend to +∞ (equivalently, their logarithms in the loss tend to -∞). As we know that i) in a distribution
∑P_θ_t(·|𝐱)=1 and ii) 𝐲^+_t-1
is a dynamic positive response generated at each period, we can deduce
that when ℒ=0 there must be
𝐲_vic=𝐲_t-1^+, i.e.,
P_θ_t(𝐲_vic|𝐱)=P_θ_t(𝐲_t-1^+|𝐱)=1
and P_θ_t(𝐲_t-1^-|𝐱)=0. Note that
this is merely a theoretical limit that cannot be reached, because
𝐲_t-1^- will not be sampled if its probability is 0,
and 𝐲_t-1^+ usually does not exhibit a significant distinction
from 𝐲_t-1^- when sampling.
|
http://arxiv.org/abs/2409.03315v1 | 20240905074042 | Possible bound states of Heavy Baryonium and Heavy Dibaryon systems | [
"Jing-Juan Qi",
"Zhen-Hua Zhang",
"Xin-Heng Guo",
"Zhen-Yang Wang"
] | hep-ph | [
"hep-ph",
"nucl-th"
] |
[email protected]
[email protected]
[email protected]
[email protected]
§ ABSTRACT
In this work, we systematically study the heavy baryonium and heavy dibaryon systems using the Bethe-Salpeter equation in the ladder and instantaneous approximations for the kernel. Our results indicate that all the heavy baryonium systems, specifically Λ_QΛ̅_Q, Ξ_QΞ̅_Q, Σ_QΣ̅_Q, Ξ'_QΞ̅'_Q, and Ω_QΩ̅_Q (Q=c, b), can form bound states. Among the heavy dibaryon systems, only the Ξ_QΞ_Q system with I=0 and the Σ_QΣ_Q systems with I=0 and I=1 can exist as bound states. Additionally, the Σ_QΣ̅_Q system with I=2 and the Σ_QΣ_Q system with I=1 are not deeply bound.
Possible bound states of Heavy Baryonium and Heavy Dibaryon systems
Zhen-Yang Wang (ORCID: 0000-0002-4074-7892)
Received 16 July 2024; accepted 04 September 2024
===================================================================
§ INTRODUCTION
Since the discovery of the X(3872) by the Belle Collaboration in 2003 <cit.>, numerous exotic states have been discovered in the charmed sector by various experiments, including BES3, BaBar, Belle, D0, ATLAS, and LHCb (see, e.g., Refs. <cit.> for recent reviews). A common feature of these exotic states is that their masses are mostly located near the threshold of two hadrons. For example, X(3872) and Z_c(3900) are near the DD̅^∗ threshold, T^+_cc is near the DD^∗ threshold, Z_cs(3985) is near the D^∗D̅_s and DD̅^∗_s thresholds, the P_c states are near the D̅^(∗)Σ_c thresholds, and X(6900) is near the χ_c0χ_c1 threshold. Therefore, these exotic hadrons are naturally considered as candidates for hadronic molecular states and believed to contain four or five quarks. Their exotic spectra and decay widths have made them popular and intriguing topics in both theoretical and experimental research, deepening our understanding of the nature of QCD.
Many heavy tetraquark and pentaquark states have already been discovered. Therefore, it is urgent to extend the research to heavy hexaquark states. The existence of the baryon-antibaryon (baryonium) and the baryon-baryon (dibaryon) molecular states has naturally become a significant research topic. In the light hexaquark sector, the deuteron is a well known molecular state composed of a proton and a neutron, with a binding energy of 2.225 MeV <cit.>. Recently, the BES3 experiment group reported the observation of a pp̅ bound state in the 3(π^+π^-) invariant mass spectrum <cit.>, which has been predicted by many theoretical works to favor decays into the final states with these pions <cit.>. In the charm sector, the Belle Collaboration observed Y(4630) (located 61 MeV above the Λ_cΛ̅_c threshold) in the e^+e^-→Λ_cΛ̅_c process in 2008 <cit.>, but no resonance structure was observed around 4.63 GeV by the BES3 Collaboration <cit.>. The nature of Y(4630) as a Λ_cΛ̅_c molecular state is highly debated in theory <cit.>. Compared with light baryon molecules, the larger masses of heavy baryons reduce the system's kinetic energy, facilitating the formation of molecules. Thus, the existence of heavy baryon molecules has attracted significant theoretical interest, and has been studied through various models such as the chiral constituent quark model <cit.>, the color flux-tube model <cit.>, the quark delocalization color screening model <cit.>, lattice QCD <cit.>, chiral effective field theory <cit.>, QCD sum rules <cit.>, the one-boson-exchange model <cit.>, and the quasipotential Bethe-Salpeter (BS) equation <cit.>.
In this work, we systematically investigate the existence of S-wave bound states composed of a heavy baryon and an antiheavy baryon or double heavy baryons in the BS equation approach within the ladder approximation and the instantaneous approximation for the kernel. Our model incorporates one free parameter, the cutoff parameter Λ, which is actually not entirely free as it governs the range of interaction and is directly related to the hadron size. Considering that the system may involve contributions from multiple exchange particles, with different interaction ranges, we reparametrize the cutoff parameter Λ as Λ = m + αΛ_QCD with m being the mass of the exchange particle. This approach allows for different cutoffs for various exchange particles through the varying parameter α which is of order unity.
This work is organized as follows. After the introduction,
we present the formalism in Sec. <ref>, which contains the
Lagrangians and the BS equations for the heavy baryonium and heavy dibaryon systems. In Sec. <ref>, we show the numerical results for the the heavy baryonium and heavy dibaryon systems. Finally, Sec. <ref> provides a brief summary and discussion. The isospin conventions and the wave functions for the charmed baryonium and charmed dibaryon systems are given in Appendix.
§ FORMALISM
To study whether S-wave bound states of heavy baryonium and dibaryon exist, we first construct the Lagrangians for heavy baryons and light mesons. Then the interaction kernels for the BS equations will be derived from the four-point Green's function with the relevant Lagrangians.
§.§ Effective chiral Lagrangians
A heavy baryon contains a heavy quark and two light quarks, which will be refered to as a diquark in the following. Each light quark is in a triplet representation of the flavor SU(3), thus the diquark can form either an antisymmetric antitriplet or a symmetric sextet. The diquark in the flavor-antisymmetric antitriplet has spin 0, and the diquark in the flavor-symmetric sextet has spin 1. Considering a ground state heavy baryon, the diquark combined with the heavy quark can form an antitriplet baryon with spin-1/2 (B^(Q)_3̅) and two sextet baryons with spin-1/2 (B^(Q)_6) and spin-3/2 (B^(Q)∗_6), respectively. The heavy baryon matrices are
B^(c)_3̅=(
[ 0 Λ_c^+ Ξ_c^+; -Λ_c^+ 0 Ξ_c^0; -Ξ_c^+ -Ξ_c^0 0; ]),
B^(c)_6=(
[ Σ_c^++ 1/√(2)Σ_c^+ 1/√(2)Ξ_c^'+; 1/√(2)Σ_c^+ Σ_c^0 1/√(2)Ξ_c^'0; 1/√(2)Ξ_c^'+ 1/√(2)Ξ_c^'0 Ω_c^0; ]),
B^(b)_3̅=(
[ 0 Λ_b^0 Ξ_b^0; -Λ_b^0 0 Ξ_b^-; -Ξ_b^0 -Ξ_b^- 0; ]),
B^(b)_6=(
[ Σ_b^+ 1/√(2)Σ_b^0 1/√(2)Ξ_b^'0; 1/√(2)Σ_b^0 Σ_b^- 1/√(2)Ξ_b^'-; 1/√(2)Ξ_b^'0 1/√(2)Ξ_b^'- Ω_b^-; ]),
and the matrices for B^(Q)∗_6 are similar to those for B^(Q)_6.
For convenience in performing chiral-loop calculations, the two sextet heavy baryons can be combined into a superfield,
S^μ= B_6^∗μ-1/√(3)(γ^μ+v^μ)γ_5B_6,
S̅^μ= B̅_6^∗μ-1/√(3)(γ^μ+v^μ)γ_5B̅_6,
where v^μ is the velocity of the heavy baryon.
Then the general chiral Lagrangian for heavy baryons is <cit.>
ℒ_B=ℒ_3̅+ℒ_S+ℒ_int,
with
ℒ_3̅= 1/2tr[B̅_3̅(i v· D)B_3̅]+iβ_Btr[B̅_3̅v^μ(𝒱_μ-ρ_μ)B_3̅]+ℓ_Btr[B̅_3̅σ B_3̅],
ℒ_S= -tr[S̅_α(i v· D-Δ_B)S^α]+3/2g_1(iv_κ)ϵ^μνλκtr[S̅_μ𝒜_ν S_λ]
+iβ_Str[S̅_μ v_α(𝒱^α-ρ^α)S^μ]+λ_Str[S̅_μ F^μνS_ν]+ℓ_Str[S̅_μσ S^μ],
ℒ_int = g_4tr[S̅_μ𝒜^μ B_3̅]+iλ_I ϵ^μνλκtr[S̅_ν F^μνS_ν]+H.c.,
where D_μ B=∂_μ B+𝒱_μ B+B𝒱_μ^T, Δ_B=M_6-M_3̅ is the mass difference between the sextet and the antitriplet, 𝒱_μ=1/2(ξ^†∂_μξ+ξ∂_μξ^†) and 𝒜_μ=1/2(ξ^†∂_μξ-ξ∂_μξ^†) are the vector and axial vector fields, respectively, F_μν=∂_μρ_ν-∂_νρ_μ+[ρ_μ,ρ_ν], ξ=exp[iP/f_π] and ρ=ig_V/√(2)V with
P=(
[ π^0/√(2)+η/√(6) π^+ K^+; π^- -π^0/√(2)+η/√(6) K^0; K^- K̅^0 -√(2/3)η; ]),
and
V=(
[ ω/√(2)+ρ^0/√(2) ρ^+ K^∗+; ρ^- ω/√(2)-ρ^0/√(2) K^∗0; K^∗- K̅^∗+ ϕ; ]),
being the pseudoscalar and vector matrices, respectively. The phases of the fields B̅_3̅, B̅^(∗)_6, 𝒱_μ^T, and 𝒜_μ^T can be fixed by the following charge conjugation convention:
B̅_3̅=𝒞B_3̅𝒞^-1, B̅^(∗)_6=𝒞B^(∗)_6𝒞^-1, 𝒱_μ^T=-𝒞𝒱_μ𝒞^-1, 𝒜_μ^T=𝒞𝒜_μ𝒞^-1.
After expanding the effective Lagrangians in Eqs.(<ref>)-(<ref>) to the leading order of the light meson field, we can obtain the following effective interactions needed for our work:
ℒ_B_3̅B_3̅V =iβ_Bg_V/2√(2m_B̅_3̅m_B_3̅)tr[B̅_3̅∂_μ V^μ B_3̅],
ℒ_B_3̅B_3̅σ =ℓ_B tr[B̅_3̅σ B_3̅],
ℒ_B_6B_6P =-g_1/4f_π√(m_B̅_6m_B_6)ϵ_μνλκtr[B̅_6γ^μγ^λ∂^κ∂^ν P B_6],
ℒ_B_6B_6V =-iβ_Sg_V/2√(2m_B̅_6m_B_6)tr[B̅_6∂_ν V^ν B_6]-iλ_Sg_V/3√(2)tr[B̅_6γ^μ(∂_μ V_ν-∂_ν V_μ)γ^ν B_6],
ℒ_B_6B_6σ =-ℓ_Str[B̅_6σ B_6],
where v is replaced by i∂/(2√(m_B̅m_B)) and the pion decay constant is f_π=132 MeV. The values of relevant coupling constants are listed in Table <ref> <cit.>.
§.§ The BS equation for the heavy baryonium system
In this section, we will discuss the general BS formalism for the heavy baryonium composed of a heavy baryon and an anti-heavy baryon. In this case, the BS wave function is defined as:
χ_P(x_1,x_2,P)_αβ =⟨0|Tψ_α(x_1)ψ̅_β(x_2)|P⟩,
=e^-iP X∫d^4p/(2π)^4e^-ip xχ_P(p)_αβ,
where α and β are spinor indices, ψ(x_1) and ψ̅(x_2) are
the field operators of the heavy baryon and the anti-heavy baryon, respectively, P (=Mv) is the total momentum of the heavy baryonium and v represents its velocity, X=λ_1x_1+λ_2x_2 and x=x_1-x_2 are the center-of-mass coordinate and the relative coordinate of the heavy baryonium, respectively, with λ_1(2)=m_1(2)/(m_1+m_2), where m_1 and m_2 are the masses of the heavy baryon and the anti-heavy baryon, respectively, and p is the relative momentum of the heavy baryonium. The momenta of the constituent particles can be expressed in terms of the relative momentum p and the total momentum P as p_1=λ_1P+p and p_2=λ_2P-p, respectively.
The BS equation for the heavy baryonium can be written as
χ_P(p)=S(p_1)∫d^4q/(2π)^4K̅(P,p,q)χ_P(q)S(-p_2),
where K̅(P,p,q) is the interaction kernel, which can be derived from the irreducible Feynman diagrams; S(p_1) and S(-p_2) are the propagators of the heavy baryon and the anti-heavy baryon, respectively. For convenience, we define p_l≡ v· p as the longitudinal projection of p along v, and p_t≡ p-p_l v as the transverse component with respect to v.
At the leading order of the 1/m_Q expansion, the propagators of the heavy baryon and the anti-heavy baryon can be expressed as:
S(p_1) =im_1(1+v)/2w_1(λ_1M+p_l-w_1+iϵ),
and
S(p_2)=im_2(1+v)/2w_2(λ_2M-p_l-w_2+iϵ),
where the energy w_1(2)=√(m^2_1(2)-p_t^2), and ϵ is an infinitesimal parameter.
Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), we obtain the following two constraint relations for the BS wave function χ_P(p):
vχ_P(p)=χ_P(p),
χ_P(p)v=-χ_P(p).
The S-wave heavy baryonium can have J^PC=0^-+ and J^PC=1^– states. With the constraints imposed by parity and Lorentz transformations, the BS wave functions can be expressed as the following:
χ_P(p)=γ_5f_1+γ_5vf_2+γ_5p_tf_3+(-i)σ_μνv^μ p_t^ν f_4,
and
χ_P^(r)(p)= [p_t^ρ g_1+γ_μ(v^μ p_t^ρ g_2+p_t^μ p_t^ρ g_3+g^μρg_4)+γ_5γ_μϵ^μραβp_tαv_β g_5
+σ_μν(p_t^μ v^ν p_t^ρ g_6+g^μρp_t^ν g_7+g^μρv^ν g_8)]ϵ_ρ^(r),
for the J^PC=0^-+ and J^PC=1^– S-wave heavy baryonia, respectively, where f_i (i=1,...,4) and g_j (j=1,...,8) are the Lorentz-scalar functions of p_t^2 and p_l, and ϵ^(r)_μ is the polarization vector of the vector heavy baryonium.
By applying the constraint relations (<ref>) and (<ref>), the BS wave functions for the S-wave pseudoscalar (J^PC=0^-+) and vector (J^PC=1^–) heavy baryonia can be simplified to the following forms, respectively:
χ_P(p)=(1+v)γ_5f_1,
and
χ^(r)_P(p)=(1+v)ϵ^(r)g_1.
§.§ The BS equation for the heavy dibaryon
For the heavy dibaryon bound states composed of double heavy baryons, the general
form of the BS equation in momentum space is:
χ_P(p)=S(p_1)∫d^4q/(2π)^4K̅(P,p,q)χ_P(q)S(p_2),
with the BS wave function for the heavy dibaryon being defined as:
χ_P(x_1,x_2,P)_αβ=⟨0|Tψ_α(x_1)ψ_β(x_2)|P⟩.
For convenience, we define a deformed BS wave function,
χ̃_P(p)_αβ=χ_P(p)_αγ𝒞_γβ^-1=(𝒞χ_P(p))_αβ,
where 𝒞 is the charge conjugation matrix.
With this deformed BS wave function, the BS equation (<ref>) can be written in a more conventional matrix form
χ̃_P(p)^T=S(p_1)∫d^4q/(2π)^4K̅(P,p,q)χ̃_P(q)^TS(-p_2),
where the superscript “T" represents the transpose of the spinor index.
From the BS equations (<ref>) and (<ref>), we see that the BS wave function χ_P(p) in Eq. (<ref>) for the heavy baryonium and the deformed BS wave function χ̃_P(p) in Eq. (<ref>) for the heavy dibaryon satisfy the same equation. And the deformed BS wave function χ̃_P(p) have the same forms as given in Eqs. (<ref>) and (<ref>).
To simplify the BS equations (<ref>) and (<ref>), we impose the so-called covariant instantaneous approximation in the kernel: p_l=q_l. In this approximation, the projection of the momentum of each constituent particle along the total momentum P is not changed, i.e., the energy exchanged between the constituent particles of the binding system is neglected. This approximation is appropriate since we consider the binding energy of heavy baryonium and heavy dibaryon bound states to be very small compared to the masses of heavy baryons. Under this approximation, the kernel in the BS equation is reduced to K̅(P,p_t,q_t), which will be used in the following calculations.
After some algebra, we find that the BS scalar wave functions f_1 and g_1 satisfy the same integral equation as follows (in the following we will use f uniformly):
f(p)=-m_1m_2/2w_1w_2(λ_1M+p_l-w_1+iϵ)(λ_2M-p_l-w_2+iϵ)∫d^4q/(2π)^4K̅(P,p_t,q_t)f(q).
We integrate both sides of the above equation with respect to p_l to obtain:
f̃(p_t)=-im_1m_2/w_1w_2(M-w_1-w_2)∫d^3q_t/(2π)^3K̅(P,p_t,q_t)f̃(q_t),
where we have defined f̃(p_t)=∫ dp_l f(p).
Based on the effective Lagrangians in Eq.(<ref>), the lowest-order interaction kernel can be derived as follows:
K̅_B_3̅B̅_3̅^V(P,p,q)= c_I(g_Vβ_B/2√(2m_B_3̅m_B̅_3̅))^2(p_1+q_1)_μ(p_2+q_2)_νΔ^μν_V(k),
K̅_B_3̅B̅_3̅^σ(P,p,q)= -c_Iℓ_B^2Δ_σ(k),
K̅_B_6B̅_6^P(P,p,q)= -c_I(g_1/4f_π√(m_B_6m_B̅_6))^2ϵ_μνλκϵ_αβτδγ^μγ^λγ^αγ^τ(p_1+q_1)^κ(p_2+q_2)^δ k^ν k^βΔ_P(k),
K̅_B_6B̅_6^V(P,p,q)= c_I[β_Sg_V/2√(2m_B_6m_B̅_6)(p_1+q_1)^μ g_μτ-λ_Sg_V/3√(2)γ^μγ^ν(k_μ g_ντ-k_ν g_μτ)]
×[β_Sg_V/2√(2m_B_6m_B̅_6)(p_2+q_2)^α g_ακ-λ_Sg_V/3√(2)γ^αγ^β(k_α g_βκ-k_β g_ακ)]Δ_V^τκ(k),
K̅_B_6B̅_6^σ(P,p,q)= -c_Iℓ_S^2Δ_σ(k),
where Δ^μν_V(k), Δ_P(k), and Δ_σ(k) are the propagators of the exchanged vector, pseudoscalar and σ mesons, respectively, k represents the momentum of the exchanged meson, and c_I is the isospin coefficient, given in Table <ref>. In our model, the BS wave function depends only on the isospin I but not on its
component I_3 because we consider only strong interactions that preserve the isospin symmetry.
To account for the structure and finite size effects of the interacting hadrons, it is necessary to introduce the form factor at the vertices. For t-channel vertices, we use the monopole form factor:
F_M(k^2)= Λ^2-m^2/Λ^2-k^2,
where m and Λ represent the mass and cutoff parameter of the exchanged meson, respectively. Since the heavy baryonium and heavy dibaryon systems can interact by exchanging multiple particles, different masses of exchanged particles correspond to different interaction ranges and, consequently, different cutoff parameters. Thus, we further reparameterize the cutoff Λ as Λ=m+αΛ_QCD with Λ_QCD = 220 MeV, where the parameter α is of order one. The value of α depends on the exchanged and external particles involved in the strong interaction vertex and cannot be obtained from first principles.
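For orientation, a minimal numerical sketch of this reparameterized form factor is given below (units in GeV; the function name is our own):

```python
LAMBDA_QCD = 0.220  # GeV

def monopole_form_factor(k2, m_ex, alpha):
    """F_M(k^2) = (Lam^2 - m^2) / (Lam^2 - k^2), with the reparameterized
    cutoff Lam = m + alpha * Lambda_QCD; k2 is the exchanged momentum squared."""
    lam = m_ex + alpha * LAMBDA_QCD
    return (lam**2 - m_ex**2) / (lam**2 - k2)
```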
§ NUMERICAL RESULTS
In the numerical calculations, we first present the masses of the relevant mesons and heavy baryons in Table <ref> <cit.>, which are essential for investigating whether heavy baryonium and heavy dibaryon systems can exist as bound states. In our model, we have two parameters, the cutoff Λ and the bounding energy E_b. The cutoff Λ is reparameterized as a variable, α, with Λ=m+αΛ_QCD. The parameter α is not a completely free parameter, since its varying range is related to the sizes of hadrons <cit.>. Based on the experience with deuteron, the parameter α is typically of order unity. The other parameter E_b (defined as E_b= m_1+m_2-M, where we consider the heavy baryonium and dibaryon systems as shallow bound states with E_b ranging from 0 to 50 MeV), is dependent on the value of the parameter α, and is therefore not absolutely determined. In this work, we allow the parameter α to vary over a wide range (0.3–8) to search for possible solutions in the heavy baryonium and dibaryon systems.
To solve the three-dimensional integral BS equation (<ref>), we first simplify it to a one-dimensional integral equation by carrying out the azimuthal integration. This one-dimensional integral BS equation is further discretised into a matrix eigenvalue equation by the Gaussian quadrature method. By solving the eigenvalue equation, we can find the possible bound states of the heavy baryonium and heavy dibaryon systems depending on the parameter α.
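As an illustration of this procedure, the following sketch (our own, not the authors' code) discretizes a generic one-dimensional kernel with Gauss-Legendre quadrature and scans trial masses for the eigenvalue-1 condition; the `kernel` interface, grid sizes, and tolerance are assumptions:

```python
import numpy as np

def solve_bs_bound_state(kernel, m1, m2, M_grid, n=64, q_max=2.0):
    """Solve a 1-D BS integral equation as a discretized eigenvalue problem.

    kernel(p, q, M) is assumed to already include the propagator factor and
    the azimuthal integration; a bound state exists at the trial mass M where
    the largest eigenvalue of the kernel matrix equals 1."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    q = 0.5 * q_max * (x + 1.0)                 # map nodes to [0, q_max] (GeV)
    wq = 0.5 * q_max * w
    for M in M_grid:                            # scan trial bound-state masses
        A = np.array([[kernel(pi, qj, M) * wj
                       for qj, wj in zip(q, wq)] for pi in q])
        lam = np.max(np.real(np.linalg.eigvals(A)))
        if abs(lam - 1.0) < 1e-3:               # eigenvalue condition (crude scan;
            return M, m1 + m2 - M               # bisection would refine this)
    return None
```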
§.§ The results of charmed baryonium and charmed dibaryon systems
The results for the possible bound states of the charmed baryonium and charmed dibaryon systems are shown in Figs. <ref> and <ref>, respectively. Our research indicates that all the charmed baryonium systems, specifically Λ_cΛ̅_c, Ξ_cΞ̅_c, Σ_cΣ̅_c, Ξ'_cΞ̅'_c, and Ω_cΩ̅_c, can exist as bound states. Among the charmed dibaryon systems, only the Ξ_cΞ_c system with isospin I=0 and the Σ_cΣ_c system with isospin I=0 and I=1 can exist as bound states.
For the Λ_cΛ̅_c (Λ_cΛ_c) system, since Λ_c is an isoscalar state, the interaction kernel arises from the exchanges of ω and σ mesons. Both ω and σ mesons induce attractive interaction in the Λ_cΛ̅_c system, allowing it to form a bound state in our model. The values of the parameter α along with the corresponding binding energy E_b are displayed in Fig. <ref>(a). It is also found that this system can exist as a bound state in various models <cit.>. However, the results of Refs. <cit.> show that the binding energy is sensitive to the cutoff Λ.
For the Λ_cΛ_c system, the interaction contributed by ω is repulsive and that from σ is attractive. Our result indicates that the Λ_cΛ_c system cannot form a bound state, which is consistent with the results in Refs. <cit.> that the Λ_cΛ_c system cannot be a bound state by itself. However, the coupling of the Λ_cΛ_c to the strongly attractive Σ^(∗)_cΣ^(∗)_c system may lead to a state below the Λ_cΛ_c threshold <cit.>. On the contrary, in Refs. <cit.>, it is pointed out that the single channel Λ_cΛ_c can form a bound state. This discrepancy arises from the fact that in these models, the attractive contribution from the σ meson is stronger than the repulsive contribution from the ω meson. However, in our model, we find that even considering only the contribution from the σ meson is not sufficient for the Λ_cΛ_c system to form a bound state.
The Ξ_c baryon contains a strange quark and has an isospin of 1/2. Therefore, the Ξ_cΞ̅_c (Ξ_c) system can have isospins of both I=0 and I=1 and the interaction kernel can arise from the exchanges of ρ, ω, ϕ, and σ. For the Ξ_cΞ̅_c system with I=0, the exchanges of ρ, ω, ϕ, and σ mesons all induce attractive interaction. The Ξ_cΞ̅_c system with I=0 can form a bound state with a binding energy in the range from 0 to 50 MeV when the parameter α ranges from 1.12 to 2.88, which is presented in Fig. <ref>(b). In the Ξ_cΞ̅_c system with I=1, the interaction magnitudes due to ρ and ω are the same, but ρ provides a repulsive contribution, thus their contributions almost cancel each other considering the similar masses of ρ and ω. Then the Ξ_cΞ̅_c system with I=1 is able to exist as a bound state with a binding energy in the range of 0 to 50 MeV when the parameter α ranges from 1.32 to 3.97. The relevant results are presented in Fig. <ref>(c). That the system Ξ_cΞ̅_c with I=0 and I=1 can exist as a bound state is also supported by Refs. <cit.>. It is worth mentioning that in the lattice QCD <cit.> and the chromomagnetic interaction model <cit.> it is found that the masses of the hidden-charm and hidden-strange hexaquarks are below the Ξ_cΞ̅_c threshold by 700-1000 MeV, which cannot be obtained within a reasonable range of the parameter α in our model.
For the Ξ_cΞ_c system, only the isospin I=0 configuration can exist as a bound state, as depicted in Fig. <ref>(a). In the Ξ_cΞ_c system with I=1, contributions from the vector mesons ρ, ω, and ϕ are repulsive, while that from the σ meson is attractive but insufficient to form a bound state in our model. However, the Ξ_cΞ_c system with I=1 could be a loosely bound state with a binding energy of only a few hundred keV within the one-boson-exchange model <cit.>, and it could be a deeply bound state within the quasipotential BS equation framework <cit.>. In our model, unlike the Λ_cΛ_c system, the Ξ_cΞ_c system can form a bound state when considering only the σ meson exchange, due to the greater mass of the Ξ_c compared with the Λ_c. However, given that the total contribution of the vector mesons in the I=1 Ξ_cΞ_c system is repulsive, it seems rather implausible that this system could form a bound state.
For the Σ_cΣ̅_c(Σ_c) system, the interaction kernels are induced by the exchange of the pseudoscalar mesons π and η, the vector mesons ρ and ω, and the scalar meson σ. In the Σ_cΣ̅_c system, our results indicate that the isospin states with I = 0, 1, and 2 can all exist as bound states, consistent with Refs. <cit.>. However, in our model, the Σ_cΣ̅_c system with I = 2 is a loosely bound state with the binding energy very sensitive to the parameter α compared with the isospin states I = 0 and 1, as shown in Figs. <ref>(d)-<ref>(f). This sensitivity is due to the contributions from ρ and ω mesons nearly canceling out, while the attraction from the π meson accounts for the long-range interaction. In contrast, Ref. <cit.> reports a binding energy of 149.66 MeV for this state with a cutoff Λ = 0.95.
In the Σ_cΣ_c system, the isospin states with I=0 and I=1 can form bound states, as presented in Figs. <ref>(b) and <ref>(c). Specifically, the Σ_cΣ_c system with I=1 is a loosely bound state, consistent with the one-boson-exchange model <cit.> and the chiral effective theory <cit.>. The binding energy E_b of the Σ_cΣ_c system with I=1 is very sensitive to the parameter α. For the I=2 configuration, in our model, the contributions from the pseudoscalar mesons π and η and the vector mesons ρ and ω are repulsive. Only the σ meson provides an attractive force, which is insufficient to form a bound state for the I=2 Σ_cΣ_c system. In contrast, Ref. <cit.> suggests that the Σ_cΣ_c system with I=2 can form a bound state.
For the Ξ'_cΞ̅'_c(Ξ'_c) system, only the charmed baryonium Ξ'_cΞ̅'_c system can exist as a bound state, with results presented in Figs. <ref>(g) and <ref>(h). The charmed dibaryon Ξ'_cΞ'_c system cannot exist as a bound state in our model, which is inconsistent with Refs. <cit.> and <cit.>. According to Ref. <cit.>, as the root mean square (rms) radius increases, the vector mesons that originally provided repulsive contributions become attractive, allowing the Ξ'_cΞ'_c system to exist as a loosely bound state. In our model, for the Ξ'_cΞ'_c system with I=0, the contributions from η, ρ, and σ are attractive, while those from π, ω, and ϕ are repulsive, and no bound state is found. For the Ξ'_cΞ'_c system with I=1, except for the scalar σ meson, which provides an attractive contribution, all other particles contribute repulsive forces, so that it cannot exist as a bound state in our model.
For the Ω_cΩ̅_c(Ω_c) system, only the Ω_cΩ̅_c can exist as a bound state. In the Ω_cΩ_c system, the ϕ and η provide repulsive contributions, and the attractive contribution from σ exchange alone is insufficient to form a bound state. However, Ref. <cit.> suggests that the Ω_cΩ_c system can exist as a loosely bound state, because the repulsive contributions of η and ϕ decrease rapidly as the rms radius increases, making the attractive contribution provided by σ greater than the repulsion.
§.§ The results of bottom baryonium and bottom dibaryon systems
The interaction kernels in the bottom sector are the same as those in the charm sector. Thus, similar to the charmed baryonium and charmed dibaryon systems, all the bottom baryonium systems, such as Λ_bΛ̅_b, Ξ_bΞ̅_b, Σ_bΣ̅_b, and Ω_bΩ̅_b, as well as the bottom dibaryon systems Ξ_bΞ_b with isospin I=0 and Σ_bΣ_b with isospin I=0 and I=1, can exist as bound states. The results for the parameter α and the corresponding binding energy E_b are displayed in Figs. <ref> and <ref>. Because of the much heavier reduced masses of hidden-bottom systems, it is easier to form bound states than in charmed systems. Therefore, for the same binding energy, the bottom region corresponds to a smaller parameter α.
Similar to the Σ_cΣ̅_c system with I=2 and the Σ_cΣ_c system with I=1 in the charm region, the I=2 Σ_bΣ̅_b and I=1 Σ_bΣ_b systems have binding energies that are very sensitive to the parameter α. These two systems are also unable to bind very deeply, with corresponding maximum binding energies of 70 MeV (α = 5.32) and 62 MeV (α = 4.28), respectively. To reasonably apply the instantaneous approximation in solving the BS equation (<ref>), we choose a maximum binding energy of 50 MeV. Therefore, results with binding energies larger than 50 MeV are not shown in Figs. <ref>, <ref>, <ref>, and <ref>.
§ SUMMARY AND DISCUSSION
In this work, we utilized the BS equation to systematically study whether heavy baryonium and heavy dibaryon systems can exist as bound states. Our research indicates that all the heavy baryonium systems, including Λ_QΛ̅_Q, Ξ_QΞ̅_Q, Σ_QΣ̅_Q, Ξ'_QΞ̅'_Q, and Ω_QΩ̅_Q (Q=c, b), can exist as bound states. Among the heavy dibaryon systems, only the Ξ_QΞ_Q system with isospin I=0 and the Σ_QΣ_Q systems with isospin I=0 and I=1 can exist as bound states. Additionally, we found that the Σ_QΣ̅_Q system with I=2 and the Σ_QΣ_Q system with I=1 cannot exist as very deeply bound states. Furthermore, the large mass of heavy baryons reduces the kinetic energy of the system, making it easier to form bound states. Therefore, as shown in Figs. <ref>, <ref>, <ref>, and <ref>, the parameter α required to form bound states in the bottom region is smaller than that in the charm region, implying that the binding in the bottom region is deeper than that in the charm region.
However, there is considerable debate among different models regarding whether heavy baryonium and heavy dibaryon systems can exist as bound states, especially for the heavy dibaryon systems. In our model, the contribution of the ω meson in the Λ_cΛ_c system is repulsive, and the attractive contribution of the σ meson is insufficient to form a bound state in the Λ_cΛ_c system. Nevertheless, in many other models <cit.>, the Λ_cΛ_c system can exist as a bound state. In Refs. <cit.>, the Λ_cΛ_c system cannot be a bound state by itself but it is shown that the coupling to the strongly attractive Σ_c^(∗)Σ_c^(∗) system may lead to a state below the Λ_cΛ_c threshold. In the one-boson-exchange model <cit.>, the Ξ'_cΞ'_c system and the Ω_cΩ_c system can exist as shallow bound states, while the I=2 Σ_cΣ̅_c system can exist as a deeply bound state. Therefore, the existence of these bound states requires further theoretical studies and experimental verification.
The charmed baryonium bound states can be studied via B decays and e^+e^- collisions at LHCb, RHIC, Belle II, and BESIII. With the upcoming BEPCII upgrade to 5.6 GeV by the end of 2024, as well as the completion of PANDA and the Super Tau-Charm Factory, detailed studies of charmed baryon-antibaryon bound states will become possible. Compared with the production of charmed baryon-antibaryon bound states, the production of charmed dibaryon bound states is significantly more challenging, although it can still occur at the LHC and RHIC. Charmed dibaryon bound states are highly stable because their constituent particles primarily decay through weak interactions, leading to long lifetimes (except for Σ_c, which primarily decays via the strong process Σ_c →Λ_c π). However, due to their larger masses, bound states in the bottom region are more difficult to produce than those in the charm region.
This work was supported by the National Natural Science Foundation of China (Project Nos. 12105149, 12405115, 12475096 and 12275024).
99
Belle:2003nnu
S. K. Choi et al. [Belle],
Phys. Rev. Lett. 91, 262001 (2003).
Hosaka:2016pey
A. Hosaka, T. Iijima, K. Miyabayashi, Y. Sakai and S. Yasui,
PTEP 2016, 062C01 (2016).
Meng:2022ozq
L. Meng, B. Wang, G. J. Wang and S. L. Zhu,
Phys. Rept. 1019, 1-149 (2023)
Chen:2022asf
H. X. Chen, W. Chen, X. Liu, Y. R. Liu and S. L. Zhu,
Rept. Prog. Phys. 86, 026201 (2023).
Brambilla:2019esw
N. Brambilla, S. Eidelman, C. Hanhart, A. Nefediev, C. P. Shen, C. E. Thomas, A. Vairo and C. Z. Yuan,
Phys. Rept. 873, 1-154 (2020).
Liu:2019zoy
Y. R. Liu, H. X. Chen, W. Chen, X. Liu and S. L. Zhu,
Prog. Part. Nucl. Phys. 107, 237-320 (2019).
Ali:2017jda
A. Ali, J. S. Lange and S. Stone,
Prog. Part. Nucl. Phys. 97, 123-198 (2017).
Guo:2017jvc
F. K. Guo, C. Hanhart, U. G. Meißner, Q. Wang, Q. Zhao and B. S. Zou,
Rev. Mod. Phys. 90, 015004 (2018),
[erratum: Rev. Mod. Phys. 94, 029901 (2022)].
Olsen:2017bmm
S. L. Olsen, T. Skwarnicki and D. Zieminska,
Rev. Mod. Phys. 90, 015003 (2018).
Lebed:2016hpi
R. F. Lebed, R. E. Mitchell and E. S. Swanson,
Prog. Part. Nucl. Phys. 93, 143-194 (2017).
Esposito:2016noz
A. Esposito, A. Pilloni and A. D. Polosa,
Phys. Rept. 668, 1-97 (2017).
Weinberg:1962hj
S. Weinberg,
Phys. Rev. 130, 776-783 (1963).
Weinberg:1963zza
S. Weinberg,
Phys. Rev. 131, 440-460 (1963).
Weinberg:1965zz
S. Weinberg,
Phys. Rev. 137, B672-B678 (1965).
BESIII:2023vvr
M. Ablikim et al. [BESIII],
Phys. Rev. Lett. 132, 151901 (2024).
Yan:2004xs
M. L. Yan, S. Li, B. Wu and B. Q. Ma,
Phys. Rev. D 72, 034027 (2005).
Ding:2005ew
G. J. Ding and M. L. Yan,
Phys. Rev. C 72, 015208 (2005).
Yang:2022kpm
Q. H. Yang, D. Guo and L. Y. Dai,
Phys. Rev. D 107, 034030 (2023).
Belle:2008xmh
G. Pakhlova et al. [Belle],
Phys. Rev. Lett. 101, 172001 (2008).
BESIII:2023rwv
M. Ablikim et al. [BESIII],
Phys. Rev. Lett. 131, 191901 (2023).
Cotugno:2009ys
G. Cotugno, R. Faccini, A. D. Polosa and C. Sabelli,
Phys. Rev. Lett. 104, 132005 (2010).
Guo:2010tk
F. K. Guo, J. Haidenbauer, C. Hanhart and U. G. Meissner,
Phys. Rev. D 82, 094008 (2010).
Simonov:2011jc
Y. A. Simonov,
Phys. Rev. D 85, 105025 (2012)
Liu:2021gva
Z. Liu, H. T. An, Z. W. Liu and X. Liu,
Phys. Rev. D 105, 034006 (2022).
Agaev:2022iha
S. S. Agaev, K. Azizi and H. Sundu,
Phys. Rev. D 106, 014025 (2022).
Song:2022yfr
L. Q. Song, D. Song, J. T. Zhu and J. He,
Phys. Lett. B 835, 137586 (2022).
Mei:2022msh
X. H. Mei, Z. Yu, M. Song, J. Y. Guo, G. Li and X. Luo,
Chin. Phys. C 47, 033104 (2023).
Salnikov:2023cug
S. G. Salnikov, A. E. Bondar and A. I. Milstein,
Nucl. Phys. A 1041, 122764 (2024).
Wang:2013exa
Z. G. Wang,
Eur. Phys. J. C 74, 2874 (2014).
Liu:2016sip
X. Liu, H. W. Ke, X. Liu and X. Q. Li,
Eur. Phys. J. C 76, 549 (2016).
Guo:2016iej
X. D. Guo, D. Y. Chen, H. W. Ke, X. Liu and X. Q. Li,
Phys. Rev. D 93, 054009 (2016).
Wang:2016fhj
Y. Y. Wang, Q. F. Lü, E. Wang and D. M. li,
Phys. Rev. D 94, 014025 (2016).
Dai:2017fwx
L. Y. Dai, J. Haidenbauer and U. G. Meißner,
Phys. Rev. D 96, 116001 (2017).
Sundu:2018toi
H. Sundu, S. S. Agaev and K. Azizi,
Phys. Rev. D 98, 054021 (2018).
Anwar:2018sol
M. N. Anwar, J. Ferretti and E. Santopinto,
Phys. Rev. D 98, 094015 (2018).
Cao:2019wwt
Q. F. Cao, H. R. Qi, Y. F. Wang and H. Q. Zheng,
Phys. Rev. D 100, 054040 (2019).
Carames:2015sya
T. F. Carames and A. Valcarce,
Phys. Rev. D 92, 034015 (2015).
Garcilazo:2020acl
H. Garcilazo and A. Valcarce,
Eur. Phys. J. C 80, 720 (2020).
Deng:2013aca
C. Deng, J. Ping, Y. Yang and F. Wang,
Phys. Rev. D 88, 074007 (2013).
Huang:2013rla
H. Huang, J. Ping and F. Wang,
Phys. Rev. C 89, 035201 (2014).
Junnarkar:2022yak
P. M. Junnarkar and N. Mathur,
Phys. Rev. D 106, 054511 (2022).
Chen:2022iil
K. Chen, B. L. Huang, B. Wang and S. L. Zhu,
[arXiv:2204.13316 [hep-ph]].
Lu:2017dvm
J. X. Lu, L. S. Geng and M. P. Valderrama,
Phys. Rev. D 99, 074026 (2019).
Oka:2013xxa
M. Oka,
Nucl. Phys. A 914, 447-453 (2013).
Chen:2013sba
Y. D. Chen, C. F. Qiao, P. N. Shen and Z. Q. Zeng,
Phys. Rev. D 88, 114007 (2013).
Chen:2011cta
Y. D. Chen and C. F. Qiao,
Phys. Rev. D 85, 034034 (2012).
Wang:2021pua
X. W. Wang and Z. G. Wang,
Adv. High Energy Phys. 2022, 6224597 (2022).
Wang:2021qmn
X. W. Wang, Z. G. Wang and G. l. Yu,
Eur. Phys. J. A 57, 275 (2021).
Wan:2019ake
B. D. Wan, L. Tang and C. F. Qiao,
Eur. Phys. J. C 80, 121 (2020).
Cheng:2022vgy
J. B. Cheng, D. x. Zheng, Z. Y. Lin and S. L. Zhu,
Phys. Rev. D 107, 054018 (2023).
Ling:2021asz
X. Z. Ling, M. Z. Liu and L. S. Geng,
Eur. Phys. J. C 81, 1090 (2021).
Chen:2017vai
R. Chen, A. Hosaka and X. Liu,
Phys. Rev. D 96, 116012 (2017).
Meguro:2011nr
W. Meguro, Y. R. Liu and M. Oka,
Phys. Lett. B 704, 547-550 (2011).
Lee:2011rka
N. Lee, Z. G. Luo, X. L. Chen and S. L. Zhu,
Phys. Rev. D 84, 014031 (2011).
Li:2012bt
N. Li and S. L. Zhu,
Phys. Rev. D 86, 014020 (2012).
Song:2022svi
D. Song, L. Q. Song, S. Y. Kong and J. He,
Phys. Rev. D 106, 074030 (2022).
Song:2023vtu
D. Song, S. Chen, S. Y. Kong and J. He,
Chin. Phys. C 47, 113102 (2023).
Kong:2023dwz
S. Y. Kong, J. T. Zhu and J. He,
Eur. Phys. J. C 83, 436 (2023).
Liu:2011xc
Y. R. Liu and M. Oka,
Phys. Rev. D 85, 014015 (2012).
Cheng:1993kp
H. Y. Cheng, C. Y. Cheung, G. L. Lin, Y. C. Lin, T. M. Yan and H. L. Yu,
Phys. Rev. D 49, 5857-5881 (1994)
[erratum: Phys. Rev. D 55, 5851-5852 (1997)].
ParticleDataGroup:2024prd
S. Navas et al. [Particle Data Group],
Phys. Rev. D 110, 030001 (2024).
Dong:2021juy
X. K. Dong, F. K. Guo and B. S. Zou,
Progr. Phys. 41, 65-93 (2021).
Gerasyuta:2011zx
S. M. Gerasyuta and E. E. Matskevich,
Int. J. Mod. Phys. E 21, 1250058 (2012).
Yu:2021lmb
Z. Yu, M. Song, J. Y. Guo, Y. Zhang and G. Li,
Phys. Rev. C 104, 035201 (2021).
Dong:2021bvy
X. K. Dong, F. K. Guo and B. S. Zou,
Commun. Theor. Phys. 73, 125201 (2021).
Liu:2022gxf
H. Liu, J. He, L. Liu, P. Sun, W. Wang, Y. B. Yang and Q. A. Zhang,
Sci. China Phys. Mech. Astron. 67, 211011 (2024).
§ THE FLAVOUR WAVE FUNCTIONS
For the isospin conventions, we use the following ones:
| u⟩=|1/2,1/2⟩, | d⟩=|1/2,-1/2⟩, |u̅⟩=|1/2,-1/2⟩, |d̅⟩=-|1/2,1/2⟩.
Then, we have
|Λ_c^+⟩ =|0,0⟩, |Λ_c^-⟩=-|0,0⟩, |Σ_c^++⟩=|1,1⟩, |Σ_c^–⟩=|1,-1⟩,
|Σ_c^+⟩ =|1,0⟩, |Σ_c^-⟩=-|1,0⟩, |Σ_c^0⟩=|1,-1⟩, |Σ̅_c^0⟩=|1,1⟩,
|Ξ_c^(')+⟩ =|1/2,1/2⟩, |Ξ_c^(')-⟩=|1/2,-1/2⟩, |Ξ_c^(')0⟩=|1/2,-1/2⟩, |Ξ̅_c^(')0⟩=-|1/2,1/2⟩,
|Ω_c^0⟩ =|0,0⟩, |Ω̅_c^0⟩=|0,0⟩.
The flavour wave functions of the charmed baryonium and charmed dibaryon systems can be constructed with the Clebsch-Gordan coefficients and the above conventions,
Ψ_Λ_cΛ̅_c^0,0 =-|Λ_cΛ̅_c⟩,
Ψ_Σ_cΣ̅_c^0,0 =1/√(3)|Σ_c^++Σ_c^–+Σ_c^+Σ_c^-+Σ_c^0Σ̅_c^0⟩,
Ψ_Σ_cΣ̅_c^1,0 =1/√(2)|Σ_c^++Σ_c^–-Σ_c^0Σ̅_c^0⟩,Ψ_Σ_cΣ̅_c^1,1=-1/√(2)|Σ_c^++Σ_c^-+Σ_c^+Σ̅_c^0⟩,Ψ_Σ_cΣ̅_c^1,-1=1/√(2)|Σ_c^+Σ_c^–+Σ_c^0Σ_c^-⟩,
Ψ_Σ_cΣ̅_c^2,0 =1/√(6)|Σ_c^++Σ_c^–-2Σ_c^+Σ_c^-+Σ_c^0Σ̅_c^0⟩,Ψ_Σ_cΣ̅_c^2,1=-1/√(2)|Σ_c^++Σ_c^--Σ_c^+Σ̅_c^0⟩,
Ψ_Σ_cΣ̅_c^2,2 =|Σ_c^++Σ̅_c^0⟩, Ψ_Σ_cΣ̅_c^2,-1=1/√(2)|Σ_c^+Σ_c^–-Σ_c^0Σ_c^-⟩,Ψ_Σ_cΣ̅_c^2,-2=|Σ_c^0Σ_c^–⟩,
Ψ_Ξ_c^(')Ξ̅_c^(')^0,0 =1/√(2)|Ξ_c^(')+Ξ̅_c^(')-+Ξ_c^(')0Ξ̅_c^(')0 ⟩,
Ψ_Ξ_c^(')Ξ̅_c^(')^1,0 =1/√(2)|Ξ_c^(')+Ξ_c^(')--Ξ_c^(')0Ξ̅_c^(')0⟩, Ψ_Ξ_c^(')Ξ̅_c^(')^1,1=-|Ξ_c^(')+Ξ̅_c^(')0⟩, Ψ_Ξ_c^(')Ξ̅_c^(')^1,-1=|Ξ_c^(')0Ξ_c^(')-⟩,
Ψ_Ω_cΩ̅_c^0,0 =|Ω_c^0Ω̅_c^0⟩,
and
Ψ_Λ_cΛ_c^0,0 =|Λ_cΛ_c⟩,
Ψ_Σ_cΣ_c^0,0 =1/√(3)|Σ_c^++Σ_c^0-Σ_c^+Σ_c^++Σ_c^0Σ_c^++⟩,
Ψ_Σ_cΣ_c^1,0 =1/√(2)|Σ_c^++Σ_c^0-Σ_c^0Σ_c^++⟩,Ψ_Σ_cΣ_c^1,1=1/√(2)|Σ_c^++Σ_c^+-Σ_c^+Σ_c^++⟩,Ψ_Σ_cΣ_c^1,-1=1/√(2)|Σ_c^+Σ_c^0-Σ_c^0Σ_c^+⟩,
Ψ_Σ_cΣ_c^2,0 =1/√(6)|Σ_c^++Σ_c^0+2Σ_c^+Σ_c^++Σ_c^0Σ_c^++⟩,Ψ_Σ_cΣ_c^2,1=1/√(2)|Σ_c^++Σ_c^++Σ_c^+Σ_c^++⟩,
Ψ_Σ_cΣ_c^2,2 =|Σ_c^++Σ_c^++⟩, Ψ_Σ_cΣ_c^2,-1=1/√(2)|Σ_c^+Σ_c^0+Σ_c^0Σ_c^+⟩,Ψ_Σ_cΣ_c^2,-2=|Σ_c^0Σ_c^0⟩,
Ψ_Ξ_c^(')Ξ_c^(')^0,0 =1/√(2)|Ξ_c^(')+Ξ_c^(')0-Ξ_c^(')0Ξ_c^(')+ ⟩,
Ψ_Ξ_c^(')Ξ_c^(')^1,0 =1/√(2)|Ξ_c^(')+Ξ_c^(')0+Ξ_c^(')0Ξ_c^(')+⟩, Ψ_Ξ_c^(')Ξ_c^(')^1,1=|Ξ_c^(')+Ξ_c^(')+⟩, Ψ_Ξ_c^(')Ξ_c^(')^1,-1=|Ξ_c^(')0Ξ_c^(')0⟩,
Ψ_Ω_cΩ_c^0,0 =|Ω_c^0Ω_c^0⟩.
The wave functions of the bottom baryonium and bottom dibaryon systems can be obtained analogously.
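As a cross-check, the isospin decompositions above can be generated mechanically from Clebsch-Gordan coefficients. The following sketch uses sympy to couple two isospin-1 states, reproducing, up to the phase conventions of the states defined above, the coefficients of the Σ_cΣ_c wave functions; the helper function is illustrative.

# Sketch: recover the Sigma_c Sigma_c isospin couplings from Clebsch-Gordan
# coefficients (overall phases depend on the conventions adopted above).
from sympy.physics.quantum.cg import CG

def couple(I, I3, i1=1, i2=1):
    # Decompose |I, I3> of two isospin-1 baryons into |i1, m1>|i2, m2> terms
    terms = []
    for m1 in range(-i1, i1 + 1):
        m2 = I3 - m1
        if abs(m2) <= i2:
            c = CG(i1, m1, i2, m2, I, I3).doit()
            if c != 0:
                terms.append((c, m1, m2))
    return terms

# Example: the I=2, I3=0 combination, cf. Psi^{2,0} for Sigma_c Sigma_c
for c, m1, m2 in couple(2, 0):
    print(c, "|1,%d>|1,%d>" % (m1, m2))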
|
http://arxiv.org/abs/2409.02519v1 | 20240904082743 | Language is Scary when Over-Analyzed: Unpacking Implied Misogynistic Reasoning with Argumentation Theory-Driven Prompts | ["Arianna Muti", "Federico Ruggeri", "Khalid Al-Khatib", "Alberto Barrón-Cedeño", "Tommaso Caselli"] | cs.CL | ["cs.CL", "cs.SI"] |
§ ABSTRACT
We propose misogyny detection as an Argumentative Reasoning task and we investigate the capacity of large language models (LLMs) to understand the implicit reasoning used to convey misogyny in both Italian and English. The central aim is to generate the missing reasoning link between a message and the implied meanings encoding the misogyny.
Our study uses argumentation theory as a foundation to form a collection of prompts in both zero-shot and few-shot settings. These prompts integrate different techniques, including chain-of-thought reasoning and augmented knowledge. Our findings show that LLMs fall short in reasoning about misogynistic comments and that they mostly rely on their implicit knowledge, derived from internalized common stereotypes about women, to generate implied assumptions, rather than on inductive reasoning.
§ INTRODUCTION
According to the 7^th Monitoring Round of the EU Code of Conduct on Countering Illegal Hate Speech Online,[<https://bit.ly/3yIRYWg>] social media platforms are slowing down in removing hateful content within 24 hours, with the rate dropping to 64% from 81% in 2021. The prevalence of hate speech phenomena has become a factor of polarization and pollution of the online sphere, creating hostile environments that perpetuate stereotypes and social injustice.
Previous work on hate speech detection from the NLP community has contributed to definitions <cit.>, datasets <cit.>, and systems <cit.>. However, most of these contributions have focused (more or less consciously) on explicit forms of hate. Recently, there has been an increasing interest in the study of implicit realization of hate speech phenomena <cit.>.
Implicit hate speech is more elusive, difficult to detect, and often hidden under apparently innocuous language or indirect references.
These subtleties present a significant challenge for automatic detection because they rely on underlying assumptions that are not explicitly stated. As illustrated in Figure <ref>, the model[<https://huggingface.co/tum-nlp/bert-hateXplain>] correctly marks the explicit message as hateful, but it fails with the implicit one.
To correctly spot the implicit message, the system would have to identify at least the implied assumptions that “women aren't as capable as men” and “women should be told what to do”.[Example and explanations extracted from <cit.>.]
In this contribution we investigate the abilities of large language models (LLMs) to correctly identify implicit hateful messages expressing misogyny in English and Italian.
In particular, we explore how prompts informed by Toulmin's Argumentation Theory <cit.>
are effective in reconstructing the warrant needed to make the content of the messages explicit and thus facilitate their identification as hateful messages <cit.>.
By prompting LLMs to generate such warrants, we further investigate whether the generated texts are comparable to those of human annotators, thus offering a fast and reliable solution to enrich hateful datasets with explanations and contributing to improving the generalization abilities of trained tools.
We summarize our contributions as follows:
* We present a novel formulation of implicit misogyny detection as an Argumentative Reasoning task, centered on reconstructing implicit assumptions in misogynous texts (<ref>).
* We introduce the first dataset for implicit misogyny detection in Italian (<ref>).[All data will be released via a Data Sharing Agreement.]
* We carry out an extensive set of experiments with two state-of-the-art instruction-tuned LLMs ( and ) on English and Italian datasets (<ref>).
* We conduct an in-depth qualitative analysis of the automatically generated implicit assumptions
against 300 human-generated ones (<ref>).
§ RELATED WORK
Hate speech detection is a widely studied research area, covering different targets and linguistic aspects.
We discuss literature work on implicit misogyny detection with particular attention to contributions in reconstructing implicit content.
Implicit Hate Speech Detection
Hate Speech Detection is a
popular research domain with more than 60 datasets covering distinct targets (e.g., women, LGBTIQ+ people, migrants)
and forms of hate (e.g., sexism, racism, misogyny, homophobia)
in 25 languages, according to the Hate Speech Dataset Catalogue.[<https://hatespeechdata.com>]
In its early stages, but still predominant nowadays, research in this domain focused on developing datasets for detecting explicit cues of hate speech, like messages containing slurs or swear words <cit.>.
However, hate speech is often implicit, characterized by the presence of code language phenomena such as sarcasm, irony, metaphors, circumlocutions, and obfuscated terms, among others <cit.>.
For this reason, implicit hate speech detection has progressively gained momentum in recent years, and several efforts have been put into the development of datasets for this purpose <cit.>.
A relevant feature of these datasets is the presence of implied statements in free-text format, which contributes to explaining the content of hate speech messages.
While the use of these statements has been shown to have a positive effect on classification performance <cit.>, few efforts have been put in automatically generating such implied assumptions <cit.>.
As <cit.> point out, current annotation schemes in this area present significant reasoning gaps between the claim and its implied meaning.
Moreover, no effort has been made to evaluate widely adopted LLMs on their reasoning capabilities required to generate high-quality implied assumptions.
To the best of our knowledge, our work is the first study to propose an empirical evaluation of LLMs for implicit misogyny detection and the generation of explanations for Italian and English.
Available datasets targeting misogyny in Italian <cit.> are highly biased toward explicit messages, with very few messages that qualify as implicit.
To fill this gap, we have developed the first Italian dataset for this task, ImplicIT-Mis.
In our work, we define misogyny as a property of social environments where women perceived as violating patriarchal norms are “kept down” through hostile or benevolent reactions coming from men, other women, and social structures <cit.>, going beyond the simplistic definition of misogyny as hate against women.
Implied Assumptions Generation
The implied assumptions instantiate statements that are presupposed by the implicit hate speech message. This can be seen as the elicitation of implicit knowledge, corresponding to new content semantically implied by the original message <cit.>.
Although limited, previous work on the generation of implied meanings —usually in the forms of explanations— has moved away from template-based methods <cit.> to the application of encoder-decoder or decoder-only models <cit.>. Generating explanations for implicit content poses
multiple challenges concerning the quality of the generated texts, whose primary goal is to be reasonable and informative.
Some approaches generate explanations by identifying pivotal concepts in texts and linking them through knowledge graphs <cit.>.
More recently, the underlying concepts are generated by directly querying LLMs <cit.>.
In this work, we follow the idea of using LLMs to identify the implied assumptions in the implicit messages, but rather than centering the reasoning process on identifying specific concepts,
we formulate the problem as an Argumentative Reasoning task and apply Toulmin's Argumentation Theory <cit.>.
§ MISOGYNY DETECTION AS ARGUMENTATIVE REASONING UNDERSTANDING
The elusiveness of implicit hate speech is due to its ambiguity.
Implicit messages could be understood as critiques, opinions, or statements (see Figure <ref>) rather than as hateful.
Hate, in this case, is expressed by assuming social biases, stereotypes, and prejudices against a specific target, women in the case of misogyny.
The identification of these assumptions requires access to the reasoning process behind arguments and opinions.
Argumentative Reasoning (AR) offers a solution. AR relies on the notion of an argumentative model or scheme, i.e., a formal representation of arguments into intrinsic components and their underlying relations. It aims at explicating an argument through the identification of its constituent components and relations <cit.>. For instance, Toulmin's AR model organizes arguments into fundamental elements such as claim, warrant, and reason. AR models have been successfully applied in many NLP tasks, from Argument Mining <cit.> to warrant and enthymeme reconstruction <cit.>, argumentative scheme inference <cit.>, and fallacy recognition <cit.>.
Grounded on previous work on AR in user-generated content <cit.>, we frame implicit misogyny detection as an AR task <cit.> based on Toulmin's theory <cit.>, with the aim of developing more robust detection tools by explicitly describing the underlying reasoning process in these messages. More formally, let c be the claim associated with a given message and W = {w_1, …, w_n} be a set of possible warrants, i.e., logical statement(s) that support c. The model must generate an associated warrant w and, based upon it, provide the requested classification: whether the message is misogynous or not.
Figure <ref> graphically represents the approach described above.
In this particular case, the generalization that women do not understand sport because it is stereotypically for men
is what distinguishes a personal attack from a case of misogyny.
While there have been efforts on evaluating LLMs in argumentative tasks, such as quality assessment <cit.>, component detection <cit.>, and argumentative linking <cit.>, the capability of LLMs for implicit argumentative reasoning has yet to be explored. To the best of our knowledge, our work is the first to assess LLMs on implicit misogyny through the lens of AR.
§ DATA
This section introduces the datasets used in our experiments: for Italian, the newly created ImplicIT-Mis corpus ( <ref>); for English, SBIC+, an extended version of the social bias inference corpus <cit.> enriched with misogynous texts from the Implicit Hate Corpus <cit.> ( <ref>).
§.§ The ImplicIT-Mis Corpus
ImplicIT-Mis is a new manually collected and curated dataset for implicit misogyny detection in Italian. It consists of 1,120 Facebook comments as direct replies to either women-related news articles or posts on public pages of communities known to tolerate misogyny.
An in-domain expert, who has been the target of misogyny, conducted the manual collection.
[The annotator is also an author of this paper.] This is in line with a participatory approach to NLP, where the communities primarily harmed by specific forms of content are included in the development of datasets addressing these phenomena <cit.>. For each comment, we keep its source (either a newspaper or a Facebook page) and its context of occurrence (the news article or the main post).
All instances in ImplicIT-Mis are misogynistic.
The collection period ran from November 2023 to January 2024.
We selected 15 Facebook pages of news outlets covering the whole political spectrum as well as different levels of public outreach (national vs. local audiences), and 8 community pages.
ImplicIT-Mis is organized around 104 source posts; 70% of the 1,120 messages are comments on news articles from two national newspapers (La Repubblica and Il Messaggero). The full overview is given in Appendix <ref>.
On average, each comment is 19 tokens long, with the longest having 392 tokens and the shortest only one. An exploration of the top-20 keywords, based on TF-IDF, indicates a lack of slurs or taboo words, confirming the quality of our corpus for implicit misogyny.
ImplicIT-Mis is enriched with one annotation layer targeting the implied assumptions, as defined in <ref>. A subset of 150 messages was annotated by three Italian native speakers who are master's students in NLP. Each annotator worked on 50 different messages. On average, annotators took 2 hours to complete the task. The annotation guidelines for the generation of the implied assumptions are in Appendix <ref>.
We evaluated the annotators' implied assumptions against those of an expert (a Master's student in gender studies and criminology). We used a subset of 75 sentences (25 from each annotator) and computed two metrics:
BLEU <cit.>
and BERTScore <cit.>.
These measures offer insight into how similar the human written implied assumptions are.
We have obtained a BLEU score of 0.437 and an F1-BERTScore of 0.685 by combining all annotations.
As the scores indicate, our pool of annotators tends to write the implied assumptions with different surface forms but similar semantic content, as suggested by the F1-BERTScore.
Although implied assumptions have to be inferred, and therefore, humans need to interpret the text, they tend to come to the same conclusions.
In the final version of the data, all manually generated implied assumptions have been retained as valid, meaning that for 150 messages, we have a total of 225 implied assumptions.
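The agreement computation can be reproduced along the following lines; the sacrebleu and bert-score packages are one possible choice rather than the authors' confirmed tooling, and the expert references below are invented for illustration (the first annotator string is taken from the guidelines examples in the appendix).

# Sketch: comparing annotators' implied assumptions against the expert's
# references with corpus BLEU and BERTScore (package choice is illustrative).
from sacrebleu import corpus_bleu
from bert_score import score

annotators = ["le donne sono meno qualificate degli uomini",
              "la donna in questione viene offesa in quanto grassa"]
expert = ["le donne sono meno competenti degli uomini",
          "la donna viene insultata per il suo peso"]

bleu = corpus_bleu(annotators, [expert])       # surface-form overlap
P, R, F1 = score(annotators, expert, lang="it")  # semantic similarity
print(bleu.score, float(F1.mean()))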
§.§ SBIC+
SBIC+ is a dataset of 2,409 messages for implicit misogyny in English, obtained by merging 2,344 messages from SBIC and 65 from the Implicit Hate Corpus (IHC).
The social bias inference corpus (SBIC) <cit.> consists of 150k structured annotations of social media posts for exploring the subtle ways in which language can reflect and perpetuate social biases and stereotypes. It covers over 34k implications about a thousand demographic groups. SBIC is primarily composed of social media posts collected from platforms like Reddit and Gab, as well as websites known for hosting extreme views, such as Stormfront.
The structured annotation approach implies that different annotation layers are available to annotators according to their answers. The annotation scheme is based on social science literature on pragmatics and politeness. We retain all messages whose annotation for the target group was “women” or “feminists” and were labelled as hateful. We further cleaned the data from instances labeled as targeting women but were actually targeting other categories, like gay males. We also filtered out all texts containing explicit identity-related slurs to keep only implicit instances. For each message, we also retained all associated “target stereotype” which correspond to the warrants.
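A sketch of this filtering over the released SBIC fields is given below; the column names follow the public SBIC release and should be treated as assumptions to check against the actual files, and the slur lexicon is elided with placeholders.

# Sketch: selecting implicit, women-targeting posts from the SBIC release
# (column names are assumptions to be verified against the released files).
import pandas as pd

SLUR_LEXICON = ["<slur-1>", "<slur-2>"]  # placeholder for an explicit-term lexicon

df = pd.read_csv("SBIC.v2.trn.csv")
mask = (
    df["targetMinority"].fillna("").str.contains("women|feminist", case=False)
    & (df["offensiveYN"].fillna(0) >= 0.5)                      # labelled hateful
    & ~df["post"].str.lower().str.contains("|".join(SLUR_LEXICON))  # drop explicit slurs
)
sbic_women = df.loc[mask, ["post", "targetStereotype"]]
print(len(sbic_women))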
The Implicit Hate Corpus (IHC) <cit.> contains 6.4k implicitly hateful tweets, annotated for the target (e.g., race, religion, gender). The corpus comprises messages extracted from online hate groups and their followers on Twitter.
Tweets were first annotated through crowdsourcing into explicit hate, implicit hate, or not hate. Subsequently, two rounds of expert annotators enriched all implicit messages with categories from a newly developed taxonomy of hate, for the target demographic group, and the associated implied statement (i.e., the warrant in our framework).
We have selected only tweets whose target demographic group was “women”.
§ EXPERIMENTAL SETUP
Our main goal is to evaluate the abilities of models to generate the implied assumptions for implicit misogynous messages.
By doing so, we can also probe the implicit knowledge of LLMs, for instance, about named entities or events mentioned in the texts. If these are not known to the model, it is impossible to understand the misogynistic nature of such texts.
Each batch of experiments aims to address two tasks: (i) the generation of the implied assumptions or warrants and (ii) the classification of the messages as misogynous or not.
Regarding (i), we experiment with two prompting strategies: instructing the model to reconstruct the implied assumptions (Assumption), or the implicit claim c and related warrants W (Toulmin).
We address these tasks both in a zero-shot and in a few-shot setting.
While implied assumptions are generally broader than warrants, warrants specifically bridge the reasoning gap between claims and evidence. In our prompts, implied assumptions and warrants appear quite similar. Nevertheless, the use of these terminologies may significantly impact the model's behavior due to its sensitivity to prompt phrasing; therefore, we experiment with both.
We experiment with two state-of-the-art LLMs: Llama3 and Mistral.
[Refer to <https://huggingface.co/meta-llama/Meta-Llama-3-8B> and <https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2>]
For both, we select their instruction-tuned version.
During preliminary experiments with 50 instances, we also tested Italian-specific LLMs, namely , , and . They were all unable to generate valid implied assumptions, so we discarded them.
We consider the following baselines: (i) finetuned encoder-based models; (ii) zero-shot classification with LLMs; and (iii) few-shot classification with LLMs without generating explanations.
The Llama3 series has several improvements over preceding versions, including a better tokenizer with a vocabulary of 128k tokens, extended training on 15T tokens, and grouped query attention for efficiency.
Around 5% of the pre-training data concerns more than 30 languages, including Italian.
All Llama3 models have undergone safety fine-tuning for safeguarding the generation process over harmful content. This could trigger instances of over-safety, with the model being unable to follow the instructions and thus failing to provide a valid answer for our task.
A competitive instruction-tuned model developed by MistralAI, which also uses grouped-query attention <cit.>.
In particular, the 7B version has been reported to obtain better performances when compared to and .
While details about the fine-tuning data are lacking, in our experiments, we observe that the model is responsive to Italian prompts.
The instruct-based versions of the models do not present any moderation mechanism. We thus expect this model to avoid over-safety and always return an implied statement and a classification value.
§.§ Prompting Techniques
Among recent prompting techniques, we selected Chain-of-Thought (CoT) and Knowledge Augmentation. CoT was chosen for its notable success in reasoning tasks <cit.>.
On the other hand, Knowledge Augmentation has been observed to reduce hallucinations and enhance contextual depth in model prompts, facilitating the generation of sophisticated outputs beneficial for tasks requiring substantial domain knowledge and nuanced reasoning <cit.>.
Both techniques align with our goal of generating implicit components of arguments (implicit warrants) and support the construction of encoded warrant blocks.
To the best of our knowledge, these techniques have not been used yet for a computational argumentation task, which makes them worth investigating. The full list of prompts can be found in Appendix <ref> and <ref>.
More in detail, CoT sequentially guides the model through a series of reasoning steps before arriving at a final answer or conclusion <cit.>.
By following this structured approach, CoT prompts allow the identification of how the model's reasoning process influences its conclusions. This capability is particularly useful for reconstructing warrants that underlie the model's interpretations in our specific task.
Knowledge-augmented prompting generates knowledge from an LLM and incorporates it as additional input for a task <cit.>. In our task, the generated knowledge serves as either the implied assumption or the warrant that we inject into the prompt to inform the classification.
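To make the two prompt families concrete, the sketch below shows how they can be assembled in zero- and few-shot form; the wording is illustrative, and the exact prompts used in the experiments are the ones listed in the appendices.

# Illustrative assembly of the two prompt families; the exact wording used
# in the experiments is reported in the appendices.

TOULMIN = ("Following Toulmin's model, identify the implicit claim of the "
           "message and the warrant(s), i.e., the logical statements that "
           "support the claim. Then answer: is the message misogynous?")
ASSUMPTION = ("State the implied assumption(s) underlying the message. "
              "Then answer: is the message misogynous?")

def build_prompt(message, instruction, examples=()):
    # Few-shot demonstrations are prepended as Message/Answer pairs
    shots = "\n\n".join("Message: %s\nAnswer: %s" % (m, a) for m, a in examples)
    parts = [instruction, shots, "Message: %s\nAnswer:" % message]
    return "\n\n".join(p for p in parts if p)

# Zero-shot, warrant-based variant
print(build_prompt("Women do not understand sport.", TOULMIN))

In the knowledge-augmented variant, the warrant or assumption generated in a first call would then be injected into a second classification prompt.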
§ RESULTS
We report two blocks of results. The first block focuses on the classification of the messages. Since both the Italian and the English datasets contain only positive instances, we report only the Recall. The classification task offers an indirect evaluation of the quality of the AR methods. The second block targets the generation of the implied assumptions/warrants. Considering the complexity and the pending issues related to the evaluation of automatically generated text <cit.>, we report the results using established automatic metrics (i.e., BERTScore and BLEU) as well as a manual validation on a subset of 300 messages (150 per language) ( <ref>). The overall evaluation procedure we have devised allows us to assess both the models' performance in detecting implicit misogyny and the alignment between LLMs and human annotators in generating reasoning-based explanations.
§.§ Decoder-based Models
All answers from LLMs have undergone post-processing to evaluate them properly. Two main post-processing heuristics concern the treatment of the “refusal to provide an answer” (including the refusal to generate the warrants) and the “need of more context”. We considered both cases as if the messages were marked as not misogynous. While Llama3 tends to refuse to answer, mostly due to its safeguard layer, Mistral has a tendency towards indecisive answers requiring more context. Llama3 always provides an answer when applied to the Italian data.
For completeness, Appendix <ref> includes the results considering these cases as correct.
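The two heuristics reduce to simple string checks over the decoded answers, along the following lines; the trigger phrases below are illustrative rather than the exact patterns used.

# Sketch of the post-processing heuristics: refusals and requests for more
# context are mapped to the negative class (trigger phrases are illustrative).
REFUSALS = ("i cannot", "i can't assist", "as an ai")
MORE_CONTEXT = ("more context", "without additional context")

def postprocess(answer):
    a = answer.lower()
    if any(t in a for t in REFUSALS + MORE_CONTEXT):
        return 0  # treated as not misogynous
    return 1 if "yes" in a else 0

print(postprocess("I cannot help with that request."))  # -> 0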
§.§ Classification Results
Table <ref> summarizes the results for the classification task. With few exceptions - mostly related to - LLMs generally perform better than finetuned models. All few-shot experiments outperform their zero-shot counterpart, and consistently performs better than . The best results are obtained by with few-shot and no generation of either the implied statements or the warrants. However, for Italian, the with the Toulmin warrant in few-shot achieves very competitive results (R=0.725). For English, on the other hand, the results are affected by the post-processing heuristics. Had we considered as correct the “refusal to answer cases”, the best score for English would have resulted in few-shot with implied assumption (R=0.913).
In all zero-shot settings, the prompt based on Toulmin's warrant outperforms the prompt based on implied assumptions. In the few-shot setting on ImplicIT-Mis, we observe a dramatic increase when switching from implied assumptions to Toulmin's warrant, with a performance gain of 24 points. On the contrary, for English, the warrant-based prompt falls behind.
§.§ Implied Assumptions and Warrants Generation
Table <ref> gives an overview of the evaluation using BERTScore and BLEU for the best models for English and Italian. While for SBIC+ every message has an associated explanation, for ImplicIT-Mis, only 150 messages present the implied assumptions.
When asked to elaborate on the implied assumption in both zero- and few-shot settings, the model does not always follow the instruction, generating a response in only 87 and 71 instances for Italian and English, respectively.
In all the other cases, the model just answers the final question of whether the text is misogynistic; therefore, we exclude these cases from the evaluation.
We also exclude all the results that do not reach at least a recall of 0.3 due to their low quality, as confirmed by manual inspection.
All BERTScores in English are around 0.81-0.83, showing highly similar content between the human-written texts and the answers generated by the models. Therefore, both the implied assumptions and the warrants are aligned with those written by humans. In Italian, the scores drop to 0.57-0.60. In terms of BLEU, the highest scores for English are produced by few-shot prompting with warrants, which shows an alignment with humans in terms of word choices. For Italian, the scores are much lower, probably because of many wrong translations and the lack of Italian references, which cause wrong inferences.
§.§ Manual Validation
We further validate the generated implied assumptions and warrants by manually exploring a subset of 300 messages, 150 per language.
For ImplicIT-Mis, we use the manually annotated instances, while we randomly extract 150 instances for SBIC+.
We focus only on the best models: few-shot warrants for ImplicIT-Mis and few-shot implied assumptions for SBIC+.
Overall, we find that 35% of the generated warrants for ImplicIT-Mis are correct and 32% lead to a correct classification of the messages. For SBIC+, the percentage of valid implied assumptions leading to a correct classification is 50%, while the percentage of correct implied assumptions leading to a wrong classification is 52%.
However, in Italian, all the correctly predicted examples were actually predicted for the wrong reasons, while in English this happened 37% of the time.
Therefore, we conclude that a correct explanation does not necessarily lead to a correct classification of misogyny; in the subsample we manually evaluated for Italian, this disconnect was always the case. This can be seen as evidence that the models rely on their internalized knowledge and spurious correlations to address the task and show no reasoning skills, since the Italian texts, being collected specifically for this task, require much more reasoning to be understood.
We design a taxonomy to regroup all errors for both models.
We identify seven kinds of common errors in warrant and implied assumption generation. Table <ref> provides some examples.
Notice that, although all error categories lead to wrong implied assumptions/warrants, we keep a general “wrong inference” as a valid category for all the cases that do not fall under any other category or for which there is no evident cause.
Sarcasm/Irony
This is a common error in English, due to the relatively high number of jokes in SBIC+.
In these cases, the LLMs fail to capture the sarcastic/ironic intended meaning of the message and go for a more literal interpretation.
Metaphorical and Figurative Language.
This category indicates a failure to interpret another level of non-literal meaning.
We have observed a much more frequent occurrence in Italian - also because many messages use figurative or metaphorical expressions. As observed by <cit.>, misogyny in Italian is highly metaphorical, especially with references to animals.
In Italian, not identifying metaphors could also be attributable to translation errors since metaphors are cultural-dependent.
This highlights the complexity of cross-lingual implicit HS detection, as also pointed out by <cit.>, since the translation of a term often does not carry the same implications as in the source language.
Wrong Translations.
This is a category of errors that applies only to Italian.
It comprises errors due to wrong translations of messages or to a lack of understanding of non-standard language, such as dialects and jargon expressions.
Opposite Intention. These errors could be considered an instance of LLM hallucinations <cit.>.
In these cases, the models completely misinterpret the message's content, resulting in generated implied assumptions that tend to support the message.
These errors occur in both languages, with a slight preference for Italian.
Denial of Misogyny.
This class of errors indicates a lack of connection between the generated implied assumptions and the answer of the model.
In other words, the generated text is logical and correct as it clearly identifies the misogynous nature of the message.
However, the model classifies the texts as non-misogynous.
Lack of Reference
This kind
of error has been observed only in Italian, mostly because very few English messages contain direct references to individuals.
The errors in these cases are due to the model failing to identify the mentioned individuals (and their associated characteristics) that are used to trigger the misogynous content.
In the example we report in Table <ref>, the model does not recognize (Moana) Pozzi as a famous porn actress, thus resulting in a warrant that fails to capture the insulting nature of the message (i.e., the target being promiscuous).
Wrong Inference
This is the largest class of errors in both languages.
We observe that wrong inferences are mainly driven by spurious correlations and the activation of implicit knowledge.
§.§ How Reasoning Impacts Classification
We further examine how reconstructed implicit assumptions relate to classification predictions for misogyny detection.
We inspect 50 instances for each language that are correctly classified by our best baseline (few-shot ) and that are subsequently misclassified when the model is asked to reason about them.
We observe the tendency to reduce a gender-related problem to the whole category of human beings, minimizing the misogynistic nature of the statement.
For instance, in the Italian text “Oggi sei felice anoressica bugiarda” [Today you are happy anorexic liar], the model responds that “Anorexia is a negative condition. 3. Lying is a negative behavior. No, the text is not misogynist. Anorexia is a condition that affects both men and women.”. Although it is true that anorexia affects both genders, in this case it targets a woman, which is clear in Italian from the gendered “anoressica” (instead of “anoressico”, which would be used for men).
Adding the statement that it affects both genders is detrimental to the classification.
§ CONCLUSION
We proposed the task of implicit misogyny detection under an Argumentative Reasoning perspective, since to understand implicit statements, one needs to reconstruct the missing link (the warrant) between the claim and the assumption.
Our work highlights the complexity of such a task, which paves the way for hate speech detection as a proxy task to probe the reasoning abilities of LLMs.
Our prompt-based experiments show that LLMs fail 68% and 50% of the time in generating implied assumptions in Italian and English, respectively. The poor relationship between wrongly generated explanations and correctly predicted classes shows the LLMs' over-reliance on their implicit knowledge and spurious correlations rather than on reasoning skills.
Our results are consistent with <cit.>: prompting strategies that rely on implicit knowledge in LLMs often generate an incorrect classification when the generated knowledge (implied assumptions/warrants) is wrong, due to lack of references, reasoning skills, or understanding of non-standard language.
Indeed, verifying the validity of the generated text before injecting it into the prompt, in a human-in-the-loop approach, would be a natural next step.
To conclude, our findings show that i) the performance of the classification task cannot be used as a proxy to guarantee the correctness of the implied assumption/warrant; ii) LLMs do not have the necessary reasoning abilities in order to understand highly implicit misogynistic statements. Therefore, models for hate-related natural language inference tasks should be improved.
One possible approach would be to inject external knowledge in the misogynous texts, in order to fill the gaps related to their lack of implicit knowledge. For instance, had the model known that Moana Pozzi was a porn actress, it would have probably inferred that when a person is compared to her, it is a derogatory way to address that woman.
§ LIMITATIONS
A limitation of our work is that we integrate all generated knowledge (implied assumptions/warrants) without evaluating it before using it to inform the classification task. This should be overcome with a human-in-the-loop approach that allows for the verification of the knowledge extracted by LLMs. We did not try to inject only the knowledge that led to a correct classification because of the low correlation between the generated implied statements and the class.
Another limitation is that, as far as Llama is concerned, many examples in English trigger the safeguard; therefore, the scores for Llama might not be realistic.
§ ETHICAL CONSIDERATIONS
Improving LLMs' abilities to understand the implied meaning of messages with sensitive content carries potential risks related to dual use. Although our work has focused on assessing LLMs' abilities in generating implied assumptions/warrants, we see both the benefits and the detrimental effects. On the one hand, improving LLMs' abilities to understand the implied meaning of sensitive messages can be used to improve the generation of counter-speech and the development of assistive tools for experts in this area. At the same time, the process can be inverted: malevolent agents can feed models with implied assumptions and generate hateful messages. We are aware of this issue, and we think our work offers the community an opportunity to understand limitations of LLMs that have a non-negligible societal impact. In addition to this, our work indicates the need to adopt different safeguard methods that are able to capture the core meaning of a message and are grounded in different cultures.
§ IMPLICIT-MIS SOURCES
Table <ref> shows statistics on the number of Facebook comments associated to each newspaper or Facebook community.
§ IMPLICIT-MIS ANNOTATION GUIDELINES FOR IMPLIED ASSUMPTIONS
Annotators were given the following guidelines for the identification of the implied statements in the ImplicIT-Mis dataset:
Read the following comments extracted from Facebook as a reaction to a news article or a post. The comments have been judged as being sexist or misogynistic by an expert. Your task is to provide an explanation of why the comment can be offensive towards women. The explanation must correspond to the implied assumption in the comment. To help you in this task, imagine that your explanations should complete the sentence: “This comment is sexist because it implies that …”.
Follow the examples.
Le dovrebbero dare 2000 volts implica che la donna in questione dovrebbe essere uccisa.
Non dovremmo abbassare i nostri standard solo per assumere più donne implica che le donne sono meno qualificate degli uomini.
Quella mangia kinder bueno non meritava di vincere X-Factor implica che la donna in questione viene offesa in quanto grassa.
Bau bau implica che la donna in questione viene chiamata cagna.
Translated version:
She should be given 2000 volts implied that the targeted woman should be killed.
We shouldn't lower our standards just to hire more women implies that women are less qualified than men.
That Kinder Bueno eater did not deserve to win X-Factor implies that the targeted woman is called 'fat'.
Wof wof implies that the targeted woman is called 'cagna', translated as female dog, which is an epithet for b**ch.
§ ENGLISH PROMPTS
§ ITALIAN PROMPTS
§ ADDITIONAL CLASSIFICATION RESULTS
Table <ref> reports the classification results when a refusal to answer (due to the model's safeguard being triggered by hateful content) is considered as a misogynous prediction.
In particular, Llama is the only affected model in our experiments.
|
http://arxiv.org/abs/2409.03382v1 | 20240905093537 | Strong Converse Inequalities for Bernstein Polynomials with Explicit Asymptotic Constants | ["José A. Adell", "Daniel Cárdenas-Morales"] | math.CA | ["math.CA", "cs.NA", "math.NA", "math.PR", "41A10, 41A25, 41A27, 41A36, 60E05"] |
§ ABSTRACT
We obtain strong converse inequalities for the Bernstein polynomials with explicit asymptotic constants. We give different estimation procedures in the central and non-central regions of [0,1]. The main ingredients in our approach are the following: representation of the derivatives of the Bernstein polynomials in terms of the Krawtchouk polynomials, estimates of different inverse moments of various random variables, sharp estimates of both absolute central moments of Bernstein polynomials and the total variation distance between binomial and Poisson distributions, and iterates of the Bernstein polynomials, together with their probabilistic representations.
Mathematics Subject Classification 41A10 · 41A25 · 41A27 · 41A36 · 60E05
§ INTRODUCTION AND STATEMENT OF THE MAIN RESULTS
The Bernstein polynomials represent the most paradigmatic example of positive linear operators. Recall that for a function f:[0,1]→ℝ and a natural number n∈ℕ, the nth Bernstein polynomial of f is defined as
B_nf(x)=∑_k=0^n f(k/n)\binom{n}{k}x^k(1-x)^{n-k}, x∈ [0,1].
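For later reference, the polynomial is straightforward to evaluate numerically; the sketch below computes B_nf(x) directly from the definition and checks it on f(t)=t^2, for which the closed form B_nf(x)=x^2+x(1-x)/n is known.

# Sketch: evaluate the nth Bernstein polynomial of f at x from the definition.
from math import comb

def bernstein(f, n, x):
    return sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

# Check against the closed form for f(t) = t^2: B_n f(x) = x^2 + x(1-x)/n
n, x = 50, 0.3
print(bernstein(lambda t: t * t, n, x), x ** 2 + x * (1 - x) / n)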
In the nineties of the last century, Ditzian and Ivanov <cit.> and Totik <cit.> characterized the rates of uniform convergence of sequences of positive linear operators L_nf towards the function f, as n tends to infinity, in terms of the so called second order Ditzian-Totik modulus of smoothness of f (cf. Ditzian and Totik <cit.>). In the case of Bernstein polynomials, it turns out that
K_1ω_2^φ(f;1/√(n))≤‖B_nf-f‖≤ K_2ω_2^φ(f;1/√(n)), f∈ C[0,1], n∈ℕ,
for some absolute positive constants K_1 and K_2. We recall here some necessary definitions and notations. As usual, we denote by C[0,1] the space of all real continuous functions defined on [0,1] endowed with the supremum norm ‖·‖, and by C^m[0,1] the subspace of all m-times continuously differentiable functions. Besides, f^(m) and f^m represent, respectively, the mth derivative of f, and f raised to the power of m. The second order central difference of f is given by
Δ_h^2f(x)=f(x+h)-2f(x)+f(x-h), h≥ 0, x± h∈ [0,1],
and the Ditzian-Totik modulus of smoothness of f with weight function
φ (x)=√(x(1-x)), 0≤ x≤ 1,
is defined as
ω_2^φ(f;δ )=sup{|Δ_hφ (x)^2f(x) | : 0≤ h≤δ, x± hφ (x)∈ [0,1] }, δ≥ 0.
The second inequality in (<ref>) is called a direct inequality. Different authors completed this inequality by showing specific values for the constant K_2. In this regard, Gavrea et al. <cit.> and Bustamante <cit.> provided K_2=3, whereas Păltănea <cit.> obtained K_2=2.5, which, to our knowledge, is the best result to date (see also <cit.> for an asymptotic result). It was also observed that if additional smoothness conditions on f are added, then a direct inequality may be valid for values of K_2 smaller than 2.5. In this respect, the authors recently proved <cit.> that for any f∈ C^2[0,1] and n∈ℕ, the following inequality holds true
|‖ B_nf-f‖-1/2ω_2^φ(f;1/√(n))|≤1/4n(ω_1(f^(2);1/3√(n))+1/4ω_2^φ(f^(2);1/√(n))),
where ω_1 (f^(2);· ) stands for the usual first modulus of continuity of f^(2). This inequality completes asymptotic results previously shown by Bustamante and Quesada <cit.> and Păltănea <cit.>.
The first inequality in (<ref>) is called strong converse inequality, according to the terminology given in Ditzian and Ivanov <cit.>, and it turns out to be the best possible one among different types of converse inequalities. It is a consequence of the works by Totik <cit.>, Knopp and Zhou <cit.>, and Sangüesa <cit.> for general sequences of positive linear operators.
Direct and converse inequalities in the L_p-norm, 1≤ p≤∞, have been widely considered in the literature (see, for instance, Totik <cit.>, Ditzian and Ivanov <cit.>, Chen and Ditzian <cit.>, Della Vecchia <cit.>, Guo and Qi <cit.>, Finta <cit.>, Gadjev <cit.>, and Bustamante <cit.>, among many others). The main tool to prove converse inequalities is the use of K-functionals, shown to be equivalent to the corresponding Ditzian-Totik modulus of smoothness. In some papers (cf. Knopp and Zhou <cit.> and Sangüesa <cit.>), the authors use an arbitrary number of iterates of the operators under consideration as the main tool. In any case, the proofs of converse inequalities are very involved and do not provide explicit constants in general.
This is the motivation of the present paper, whose main result reads as follows.
Let 𝔉 denote the set of all non-affine functions f∈ C[0,1]. Then,
5≤ lim sup_n→∞sup_f∈𝔉ω_2^φ(f;1/√(n))/‖ B_nf-f‖ < 75.
An analogous result to Theorem <ref> referring to a non-centered gamma-type operator was given in <cit.>. The upper constant there was 105, instead of 75. On the other hand, as seen in (<ref>), the estimate of the constant K_2 is improved when we restrict our attention to functions in C^2[0,1]. A similar property holds concerning strong converse inequalities, as the following result shows. In this regard, denote by ω_2(f;· ) the usual second modulus of continuity of f. Let 𝔊⊆𝔉 be the subset of those functions f such that
lim _δ→ 0ω_2(f;δ)/δ=0.
We have
lim sup_n→∞sup_f∈𝔊ω_2^φ(f;1/√(n))/‖ B_nf-f‖≤ 4+ √(2)(√(2)+1)/(1-0.99/√(3))log 4=15.0477…
Moreover, 𝔉∩ C^1[0,1]⊆𝔊.
To prove the aforementioned results, we follow an approach inspired by the work of Sangüesa <cit.>, using different estimation procedures in the central and non-central regions of [0,1]. In the central region, the starting point is a Taylor's formula of third order for the third iterate of the Bernstein polynomials. As specific tools in this region, we use sharp estimates for the first absolute central moments of the Bernstein polynomials, as well as for the total variation distance between binomial and Poisson distributions. In the non-central region, we start from a formula that involves the second derivatives of the iterates of the Bernstein polynomials, providing at the same time probabilistic representations of such iterates.
Two common tools applied in both regions should be emphasized. On the one hand, several expressions of the derivatives of the Bernstein polynomials, particularly that in terms of the orthogonal polynomials with respect to the binomial distribution, namely, the Krawtchouk polynomials. On the other hand, accurate estimates of various inverse moments involving different random variables.
The main auxiliary results to give estimates in the central (resp. non-central) region are Theorems <ref>, <ref>, and <ref> (resp. Theorems <ref> and <ref>). On the other hand, in some stages of the proofs of Theorems <ref> and <ref>, we use numerical computations performed with the software Mathematica 10.2 in order to obtain explicit constants. We finally mention that the problem of giving non-asymptotic estimates for K_1 remains open.
§ GENERAL AUXILIARY RESULTS
Let ℕ_0=ℕ∪{0}. From now on, whenever we write f, x, and n, we respectively assume that f∈ C[0,1], x∈ (0,1), and n∈ℕ is large enough.
Let (β_m)_m≥ 1 be a sequence of random variables such that β_m has the beta density
ρ _m(θ )=m(1-θ)^m-1, 0≤θ≤ 1.
These random variables allow us to write the remainder of Taylor's formula in closed form. In fact, if g∈ C^m[0,1], m∈ℕ, we have
g(y)=∑_j=0^m-1g^(j)(x)/j! (y-x)^j+(y-x)^m/m!𝔼g^(m)(x+(y-x)β_m), 0≤ y ≤ 1,
where 𝔼 stands for mathematical expectation.
We have
ω_2^φ(f;1/√(n))≤ 4‖ B_nf-f‖+(log 4/n)‖φ ^2(B_nf)^(2)‖.
Proof. Suppose that [x-φ (x)h,x+φ (x)h]⊆ [0,1], with 0≤ h≤ 1/√(n). This implies that
r(x):=φ (x)h/x∈ [0,1], s(x):=φ (x)h/1-x∈ [0,1].
We claim that
|Δ ^2_hφ (x)g(x)|≤ h^2‖φ ^2 g^(2)‖log 4, g∈ C^2[0,1].
Actually, we have from (<ref>)
|Δ ^2_hφ (x)g(x)| =h^2 φ ^2(x)/2|𝔼g^(2)(x-φ (x)hβ_2)+𝔼g^(2)(x+φ (x)hβ_2)| ≤ h^2‖φ ^2 g^(2)‖φ ^2(x)/2(𝔼1/φ^2(x-φ(x)hβ_2)+𝔼1/φ^2(x+φ(x)hβ_2)).
Since
1/φ^2(y)=1/y+1/1-y, y∈ (0,1),
we get from (<ref>)
φ^2(x)/2(𝔼1/φ^2(x-φ(x)hβ_2)+𝔼1/φ^2(x+φ(x)hβ_2))
=1-x/2𝔼(1/1-r(x)β_2+1/1+r(x)β_2)+x/2𝔼(1/1-s(x)β_2+1/1+s(x)β_2).
On the other hand, we have for any 0≤ r≤ 1
1/2𝔼(1/1-rβ_2+1/1+rβ_2)=𝔼1/1-(rβ_2)^2≤𝔼1/1-β_2^2=log 4,
where the last equality follows from (<ref>). This, together with (<ref>), (<ref>), and (<ref>), shows claim (<ref>).
Finally, observe that
|Δ^2_hφ(x)f(x)|≤ 4‖ B_nf-f‖+|Δ^2_hφ(x)B_nf(x)| .
Therefore, the result follows by applying claim (<ref>) to g=B_nf.
□
Let (U_k)_k≥ 1 be a sequence of independent identically distributed random variables having the uniform distribution on [0,1]. Define
S_n(x)=∑_k=1^n1_[0,x](U_k),
where 1_A stands for the indicator function of the set A. Clearly,
P(S_n(x)=k)=nkx^k(1-x)^n-k, k=0,1,… n.
We consider the sequence L_n of positive linear operators which associates to each function ϕ:[0,∞ )→ℝ the function
L_nϕ(x)=𝔼ϕ(S_n(x))=∑_k=0^nϕ(k)P(S_n(x)=k).
To obtain the derivatives of L_nϕ in closed form, consider first the mth forward differences of ϕ at step h≥ 0, i.e.,
Δ_h^mϕ(y)=∑_j=0^mmj(-1)^m-jϕ (y+hj), y≥ 0, m∈ℕ_0.
If ϕ∈ C^m[0,∞ ), we have the representation (cf. <cit.>)
Δ_h^mϕ(y)=h^m𝔼ϕ^(m)(y+h(U_1+⋯ +U_m)), y≥ 0, n∈ℕ,
where (U_k)_k≥ 1 is a sequence of independent copies of a random variable U having the uniform distribution on [0,1].
In second place, denote by K_m(x;y):=K_m(n,x;y), m=0,1,…,n, the orthogonal polynomials with respect to the probability measure defined in (<ref>), namely, the Krawtchouk polynomials. Such polynomials are explicitly defined (see, for instance, Chihara <cit.>) by
K_m(x;y)=∑_j=0^mn-ym-jyj(-x)^m-j(1-x)^j, m=0,1,…,n, y∈ℝ,
and satisfy the orthogonality property
𝔼K_r(x;S_n(x))K_m(x;S_n(x))=nmφ^2m(x)δ_r,m, r,m∈{0,1,… n},
where δ_r,m is the Kronecker delta. In particular,
K_2(x;y)=1/2((y-nx)^2-(1-2x)(y-nx)-nx(1-x)).
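As a numerical sanity check (ours; the parameters and tolerance are arbitrary choices), the orthogonality property above can be verified directly from the explicit formula:

```python
# Sketch (ours): verify E K_r(x;S_n(x)) K_m(x;S_n(x)) = C(n,m) phi^{2m}(x) delta_{r,m}.
import numpy as np
from scipy.special import comb
from scipy.stats import binom

def krawtchouk(m, n, x, y):
    j = np.arange(m + 1)
    return float(np.sum(comb(n - y, m - j) * comb(y, j) * (-x) ** (m - j) * (1 - x) ** j))

n, x = 20, 0.3
phi2 = x * (1 - x)
ks = np.arange(n + 1)
w = binom.pmf(ks, n, x)                              # law of S_n(x)
for r in range(4):
    for m in range(4):
        lhs = sum(w[y] * krawtchouk(r, n, x, y) * krawtchouk(m, n, x, y) for y in ks)
        rhs = comb(n, m) * phi2 ** m if r == m else 0.0
        assert abs(lhs - rhs) <= 1e-8 * max(1.0, abs(rhs))
```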
It was shown in <cit.> (see also Roos <cit.> and López-Blázquez and Salamanca <cit.>) that for m=0,1,…,n
(L_nϕ)^(m)(x)=(n)_m𝔼Δ_1^mϕ(S_n-m(x))=m!/φ^2m(x)𝔼ϕ(S_n(x))K_m(x;S_n(x)),
where (n)_m=n(n-1)⋯ (n-m+1).
As follows from (<ref>) and (<ref>), the nth Bernstein polynomial can be written in probabilistic terms as
B_nf(x)=𝔼f(S_n(x)/n).
From now on, we assume that the random variables S_n(x), (U_k)_k≥ 1, and (β_m)_m≥ 1, as defined in (<ref>), (<ref>), and (<ref>), respectively, are mutually independent.
Let m∈{1,2,…,n}. Then,
(B_nf)^(m)(x)=(n)_m𝔼Δ_1/n^mf(S_n-m(x)/n)=m!/φ^2m(x)𝔼f(S_n(x)/n)K_m(x;S_n(x)).
In addition, if f∈ C^m[0,1], then
(B_nf)^(m)(x)=(n)_m/n^m𝔼f^(m)(S_n-m(x)+U_1+⋯ +U_m/n).
Proof. Set ϕ(y)=f(y/n), 0≤ y≤ n. Observe that
Δ_1^mϕ(y)=Δ _1/n^mf(y/n), 0≤ y≤ n.
Since B_nf(x)=L_nϕ (x), this formula and (<ref>) imply identity (<ref>). Formula (<ref>) is an immediate consequence of (<ref>) and the first equality in (<ref>). The proof is complete.
□
We point out that the first equality in (<ref>) is well known (see, for instance, Abel et al. <cit.>). The right-hand side in (<ref>) is the probabilistic representation of the Bernstein-Kantorovich operators acting on f^(m) (see Acu et al. <cit.> and the references therein).
§ AN AUXILIARY RESULT FOR THE CENTRAL REGION
By X ℒ= Y, we mean that the random variables X and Y have the same law. It follows from (<ref>) that
S_n(1-x)ℒ=n-S_n(x).
On the other hand, to prove Theorem <ref> it is enough to show that
‖φ ^2(B_nf)^(2)‖≤ C_n‖ B_nf-f‖, f∈𝔉, for some absolute constant C_n, as follows from Theorem <ref>. We claim that it suffices to show that
sup_0<x≤ 1/2|φ ^2(x)(B_nf)^(2)(x)|≤ C_n sup_0<x≤ 1/2|B_nf(x)-f(x)|, f∈𝔉.
Actually, if f∈𝔉, we define f̃∈𝔉 as f̃(y)=f(1-y), y∈ [0,1]. By (<ref>), the second equality in (<ref>), and (<ref>), it can be checked that
B_nf(1-x)-f(1-x)=B_nf̃(x)-f̃(x), φ^2(1-x)(B_nf)^(2)(1-x)=φ^2(x)(B_nf̃)^(2)(x),
thus showing claim (<ref>). For this reason, we assume from now on that 0<x≤ 1/2. For the sake of simplicity, we also denote by ‖·‖ the sup-norm on the interval (0,1/2].
To show (<ref>), we distinguish if the supremum ‖φ^2(B_nf)^(2)‖ is attained in the central region [a_n,1/2] or in the non-central region (0,a_n), for a suitable 0<a_n≤ 1/2. The estimation procedures, inspired by the work of Sangüesa <cit.>, are quite different in both regions.
From now on, we will use many times the inequality 1-y≤ e^-y, y≥ 0, without explicitly mentioning it. The crucial quantity to estimate in the central region is the inverse moment
H_n(x)=φ(x)√(n)/n+2𝔼|S_n(x)-nx|/φ^2(S_n(x)+V/n+2),
where
V=U_1+U_2,
and U_1 and U_2 are the random variables defined in (<ref>). Recall that the strong law of large numbers and the central limit theorem for S_n(x) respectively state that
S_n(x)/n⟶ x, a.s., and S_n(x)-nx/φ(x)√(n)ℒ⟶ Z, as n→∞ ,
where Z is a standard normal random variable. Thus, we have for a fixed x∈ (0,1/2]
lim_n→∞H_n(x)=lim_n→∞𝔼|S_n(x)-nx|/φ(x)√(n)=𝔼|Z|=√(2/π)= 0.7978…
This means that any upper bound for H_n(x) cannot be better than √(2/π). In fact, H_n(x) takes bigger values when λ=nx stays at moderate values, that is, when S_n(x) is close to a Poisson random variable. In this regard, let N_λ be a random variable having the Poisson distribution with mean λ, i.e.,
P(N_λ=k)=e^-λλ^k/k!, k∈ℕ_0, λ≥ 0.
For any λ≥ 0, denote by
C(λ)=2log27/16ν (λ)+r(λ),
where
ν (λ)=√(λ)(2P(N_λ=⌈λ⌉)-P(N_λ=0))+1/√(λ)(2P(N_λ≤⌈λ⌉)-1-P(N_λ=0)),
⌈λ⌉ being the ceiling of λ, and
r(λ)=(log 4-2log27/16)λ^3/2e^-λ.
Numerical computations, carried out with the software Mathematica 10.2, show that
sup _λ≥ 0 C(λ)=0.9827 … < 0.99.
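The same maximization is easy to reproduce outside Mathematica; the grid search below (ours — range and resolution are arbitrary choices) implements C(λ) exactly as defined above:

```python
# Sketch (ours): grid search reproducing sup_lambda C(lambda) ~ 0.9827 < 0.99.
import numpy as np
from scipy.stats import poisson

log4, log2716 = np.log(4.0), np.log(27.0 / 16.0)

def C(lam):
    ceil = np.ceil(lam)
    nu = (np.sqrt(lam) * (2 * poisson.pmf(ceil, lam) - poisson.pmf(0, lam))
          + (2 * poisson.cdf(ceil, lam) - 1 - poisson.pmf(0, lam)) / np.sqrt(lam))
    r = (log4 - 2 * log2716) * lam ** 1.5 * np.exp(-lam)
    return 2 * log2716 * nu + r

grid = np.linspace(1e-4, 40.0, 200_000)
print(np.max(C(grid)))   # ~0.98, below the 0.99 used in the sequel
```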
Finally, let λ_0>0 be such that
√(2/π)+1/√(λ_0)≤ c:=0.8.
We state the main result of this section.
Suppose that n≥ 2λ_0. Then,
H_n:=sup_0<x≤ 1/2 H_n(x)≤ 0.99+2D(λ_0)/nlog27/16+n^3/2/2^n+1/2,
where D(λ_0) is the explicit constant defined in (<ref>), only depending upon λ_0.
To prove this result, some auxiliary lemmas will be needed.
Let V be as in (<ref>) and let h(y)=ylog y, y≥ 0. Then,
𝔼1/y+V=1/y+1∑_j=0^∞𝔼(V-1/y+1)^2j=Δ_1^2h(y).
In particular,
𝔼1/V=log 4, 𝔼1/1+V=log27/16.
Proof. By (<ref>), the random variable V is symmetric around 1 and therefore 𝔼(V-1)^2j+1=0, j∈ℕ_0. Hence,
𝔼1/y+V=1/y+1𝔼1/1+(V-1)/(y+1)=1/y+1∑_j=0^∞𝔼(V-1/y+1)^2j.
On the other hand, let y>0. Since h^(2)(y)=1/y, we have from (<ref>)
𝔼1/y+V=Δ_1^2h(y).
Letting y→ 0, this formula also holds for y=0. Finally, the second statement in Lemma <ref> follows by choosing y=0 and y=1 in (<ref>).
□
Using (<ref>), we rewrite the function H_n(x) defined in (<ref>) as
H_n(x)=φ(x)√(n)(𝔼|S_n(x)-nx|/S_n(x)+V+𝔼|S_n(x)-nx|/n-S_n(x)+2-V)
=φ(x)√(n)(𝔼|S_n(x)-nx|/S_n(x)+V+𝔼|S_n(1-x)-n(1-x)|/S_n(1-x)+V),
as follows from (<ref>), the fact that 2-Vℒ=V, and the independence between S_n(x) and V.
Observe that H_n(x) depends on inverse moments involving the random variables S_n(x) and V. In the following result, we estimate H_n(x) in terms of inverse moments only involving S_n(x).
Let H_n(x) be as in (<ref>). Then,
H_n(x)≤ 2log27/16(I_n(x)+I_n(1-x))+(log 4-2log27/16)(nx)^3/2(1-x)^n+1/2+n^3/2/2^n+1/2,
where
I_n(x)=φ (x)√(n) 𝔼|S_n(x)-nx|/S_n(x)+1.
Proof. Let us bound the first term on the right-hand side in (<ref>). Replace y by S_n(x) in (<ref>) and use the independence between S_n(x) and V to obtain
φ (x)√(n) 𝔼|S_n(x)-nx|/S_n(x)+V=φ (x)√(n)(𝔼|S_n(x)-nx|/S_n(x)+1+∑_j=1^∞𝔼(V-1)^2j𝔼|S_n(x)-nx|/(S_n(x)+1)^2j+1).
Let j≥ 1. By considering the events {S_n(x)=0} and {S_n(x)≥ 1}, we get
𝔼|S_n(x)-nx|/(S_n(x)+1)^2j+1≤ nxP(S_n(x)=0)+1/2^2j𝔼|S_n(x)-nx|/S_n(x)+11_{S_n(x)≥ 1}
=nxP(S_n(x)=0)+1/2^2j(𝔼|S_n(x)-nx|/S_n(x)+1-nxP(S_n(x)=0)).
This and Lemma <ref> imply that the sum in (<ref>) is bounded above by
(log 4-1)nxP(S_n(x)=0)+(2log27/16-1)(𝔼|S_n(x)-nx|/S_n(x)+1-nxP(S_n(x)=0)).
We therefore have from (<ref>) and (<ref>)
φ (x)√(n) 𝔼|S_n(x)-nx|/S_n(x)+V≤ 2log27/16I_n(x)+(log 4-2log27/16)(nx)^3/2(1-x)^n+1/2.
To bound the second term on the right-hand side in (<ref>), we follow the same procedure, replacing x by 1-x and noting that
(n(1-x))^3/2x^n+1/2≤n^3/2/2^n+1/2,
since, by assumption, 0<x≤ 1/2. This and (<ref>) show the result.
□
An exact expression for I_n(x), useful to perform computations, is given in the following result.
Let I_n(x) be as in (<ref>). Set λ=nx. Then,
I_n(x)=n(1-x)^3/2/n+1(√(λ)(2P(S_n(x)=⌈λ⌉)-P(S_n(x)=0))+1/√(λ)(2P(S_n(x)≤⌈λ⌉ )-1-P(S_n(x)=0))).
Proof. Since |y|=2y_+-y, where y_+=max (0,y), y∈ℝ, we can write
𝔼|S_n(x)-nx|/S_n(x)+1=2𝔼(S_n(x)-nx)_+/S_n(x)+1-𝔼S_n(x)-nx/S_n(x)+1.
Using the identity
1/k+1P(S_n(x)=k)=1/(n+1)xP(S_n+1(x)=k+1), k=0,1,… ,n,
we have
𝔼S_n(x)-nx/S_n(x)+1=∑ _k=0^nk-λ/k+1P(S_n(x)=k)=1-λ+1/(n+1)xP(S_n+1(x)≥ 1)
=-1-x/(n+1)x+λ+1/(n+1)xP(S_n+1(x)=0)=-1-x/(n+1)x(1-(λ+1)P(S_n(x)=0)).
Also, by (<ref>) and (<ref>), we get
𝔼(S_n(x)-nx)_+/S_n(x)+1=∑ _k=⌈λ⌉^nk-λ/k+1P(S_n(x)=k)=P(S_n(x)≥⌈λ⌉)-λ+1/(n+1)xP(S_n+1(x)≥⌈λ⌉ +1)
=P(S_n(x)≥⌈λ⌉ )-λ+1/(n+1)x(xP(S_n(x)≥⌈λ⌉)+(1-x)P(S_n(x)≥⌈λ⌉ +1))
=1-x/(n+1)x(λ P(S_n(x)≥⌈λ⌉)-(λ +1)P(S_n(x)≥⌈λ⌉ +1))
=1-x/(n+1)x(λ P(S_n(x)= ⌈λ⌉)-P(S_n(x)≥⌈λ⌉ +1)).
This, together with (<ref>), (<ref>), and some simple computations, shows the result.
□
We give two upper bounds for I_n(x), according to whether or not λ=nx is large. In the first case, we use the well known Stirling's approximation stating that
e^1/(2m+1)√(2π m)≤m!/m^me^m≤ e^1/(2m)√(2π m), m∈ℕ.
Observe that this implies that
P(S_n(m/n)=m)≤1/√(2π)√(n/m(n-m)), m=1,…, n-1.
For the second case, recall that the total variation distance between two ℕ_0-valued random variables X and Y is defined as
d_TV(X,Y)=sup _A⊆ℕ_0 |P(X∈ A)-P(Y∈ A)|=1/2∑_k=0^∞|P(X=k)-P(Y=k)|.
It was shown in <cit.> that
d_TV(S_n(x),N_λ)≤λ/n(√(2)/ 4+4/11(3λ+4)λ ^2/n), λ=nx, n≥ 10.
Other estimates can be found in Barbour and Hall <cit.> and Deheuvels et al. <cit.>.
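For illustration, the following sketch (ours) computes the exact total variation distance for a few pairs (n,x) and compares it with the bound above:

```python
# Sketch (ours): exact d_TV(S_n(x), N_lambda) versus the quoted bound, n >= 10.
import numpy as np
from scipy.stats import binom, poisson

def tv_binom_poisson(n, x):
    lam = n * x
    k = np.arange(n + 200)         # Poisson tail beyond the cutoff is negligible here
    return 0.5 * np.sum(np.abs(binom.pmf(k, n, x) - poisson.pmf(k, lam)))

def tv_bound(n, x):
    lam = n * x
    return lam / n * (np.sqrt(2) / 4 + 4 / 11 * (3 * lam + 4) * lam ** 2 / n)

for n, x in [(10, 0.1), (100, 0.05), (1000, 0.01)]:
    print(n, x, tv_binom_poisson(n, x), tv_bound(n, x))
```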
Let λ_0 be as in (<ref>) and assume that n≥ 2λ_0. If λ:=nx≥λ_0, then
I_n(x)≤ c(1-x),
where c is defined in (<ref>). If λ =nx < λ_0, then
I_n(x)≤ (1-x)(ν (λ)+D(λ_0)/n),
where ν (λ) is defined in (<ref>) and
D(λ_0)=3√(λ_0)(λ_0+1)(√(2)/4+2/11(3λ_0+4)λ_0).
Proof. Assume that λ≥λ_0. From Lemma <ref> and (<ref>), we see that
I_n(x)≤ (1-x)(2√(λ(n-λ)/n)P(S_n(x)=⌈λ⌉ )+1/√(λ_0)).
Differentiating with respect to x, it can be checked that
P(S_n(x)=⌈λ⌉ )≤ P(S_n(⌈λ⌉ /n)=⌈λ⌉ ).
We thus have from (<ref>)
2√(λ(n-λ)/n)P(S_n(x)=⌈λ⌉ )≤√(2/π)√(λ(n-λ)/⌈λ⌉ (n-⌈λ⌉ ))≤√(2/π),
where the last inequality follows from two facts: first n/2≥λ≥λ_0, since 0<x≤ 1/2, and second, the function h(λ)=λ (n-λ) increases for n≥ 2λ. Therefore, (<ref>) follows from (<ref>), (<ref>), and (<ref>).
Since λ_0>5, inequality (<ref>) is an immediate consequence of Lemma <ref> and (<ref>).
□
Proof of Theorem <ref>
Let C(λ) be as in (<ref>) and denote
C̄(λ)=2clog27/16+r(λ), λ≥ 0,
where c=0.8 and r(λ) is defined in (<ref>). Numerical computations show that
0.9792… =sup _λ≥ 0C̄(λ)<sup _λ≥ 0C(λ)=0.9827… <0.99.
Since n(1-x)≥ n/2≥λ_0, we have from (<ref>)
I_n(1-x)≤ cx.
Set λ =nx. We distinguish the following two cases.
Case λ≥λ_0. We have from (<ref>), (<ref>), and Lemma <ref>
H_n(x)≤ 2clog27/16+r(λ)+n^3/2/2^n+1/2=C̄(λ )+n^3/2/2^n+1/2.
Case λ < λ_0. We have from (<ref>), (<ref>), and Lemma <ref>
H_n(x)≤ 2log27/16(ν (λ)(1-x)+cx)+r(λ)+2log27/16D(λ_0)/n+n^3/2/2^n+1/2
≤max (C(λ),C̄(λ))+2log27/16D(λ_0)/n+n^3/2/2^n+1/2.
In view of (<ref>), the conclusion follows from (<ref>) and (<ref>).
□
§ ESTIMATES IN THE CENTRAL REGION
We define the central region as
R_n(a):={x∈ (0,1/2]: nφ ^2(x)>a },
where a>0 will be chosen later on. We make the following assumption
(H_1) ‖φ ^2(B_nf)^(2)‖ =sup{|φ ^2(x)(B_nf)^(2)(x)| : x∈ R_n(a)}.
We denote by B_n^k, k∈ℕ, the kth iterate of the operator B_n, as well as
g=B_nf, τ_m(x)=x+(S_n(x)/n-x)β_m, m∈ℕ.
Since
𝔼(S_n(x)/n-x)^2=φ^2(x)/n,
we use Taylor's formula (<ref>) replacing y by S_n(x)/n to obtain the following starting identity for the central region
B_n^3f(x)-B_n^2f(x)=B_n^2g(x)-B_ng(x)=𝔼B_ng(S_n(x)/n)-B_ng(x)
=φ^2(x)/2ng^(2)(x)+φ^2(x)/2n((B_ng)^(2)(x)-g^(2)(x))+1/6𝔼(B_ng)^(3)(τ_3(x))(S_n(x)/n-x)^3.
The second term on the right-hand side in (<ref>) is bounded as follows.
We have
1/2n‖φ^2((B_ng)^(2)-g^(2))‖≤1/√(2)‖ B_nf-f‖ .
Proof. By (<ref>) and the second equality in (<ref>), we have
φ ^2(x)/2n| (B_ng)^(2)(x)-g^(2)(x) | = φ ^2(x)/2n|( B_n^2f-B_nf
)^(2)(x) |
=1/nφ^2(x)|𝔼(B_nf-f)(S_n(x)/n)K_2(x;S_n(x))|≤1/nφ ^2(x)‖ B_nf-f‖𝔼| K_2(x;S_n(x)) | .
Therefore, the result follows from (<ref>) and Schwarz's inequality.
□
To estimate the last term in (<ref>), some auxiliary results will be needed.
Let H_n be as in Theorem <ref>. Then,
‖φ^3 (B_ng)^(3)‖≤√(n+1)H_n-2‖φ ^2g^(2)‖ .
Proof. Applying (<ref>) with m=2, we have
(B_ng)^(2)(x)=n-1/n𝔼g^(2)(S_n-2(x)+V/n),
where V is defined in (<ref>). We differentiate this formula using the second equality in (<ref>) and take into account that K_1(x;y)=y-nx to obtain
(B_ng)^(3)(x)=n-1/n1/φ ^2(x)𝔼g^(2)(S_n-2(x)+V/n)(S_n-2(x)-(n-2)x).
By (<ref>), this implies that
|φ ^3(x)(B_ng)^(3)(x)|≤n-1/n‖φ ^2g^(2)‖φ (x)𝔼(| S_n-2(x)-(n-2)x|/φ ^2(S_n-2(x)+V/n))=n-1/√(n-2)‖φ ^2g^(2)‖ H_n-2(x),
which shows the result, because n≥ 3.
□
Denote by
μ_k(x)=𝔼|S_n(x)/n-x| ^k, k∈ℕ,
the kth absolute central moment of the Bernstein polynomials. Explicit expressions for the first even central moments can be found, for instance, in <cit.>. Particularly,
μ _4(x)=3φ^4(x)/n^2+φ^2(x)/n^3(1-6φ^2(x))≤φ^4(x)/n^2(3+1/nφ^2(x)),
and
μ _6(x)=φ^2(x)/n^5(15φ^4(x)n^2+5φ^2(x)(5-26φ^2(x))n+1-30φ^2(x)(1-2x)^2)
≤φ^6(x)/n^3( 15+25/nφ^2(x)+1/(nφ^2(x))^2) .
We have
n√(n)/φ^3(x)(μ _3(x)+3/8μ_4(x)/φ^2(x)+3/16μ_5(x)/φ^4(x) +7/16μ_6(x)/φ^6(x)) ≤ K(nφ^2(x)),
where
K(s)=√(3+1/s)+3/8√(s)(3+1/s)+3/16s√((3+1/s)(15+25/s+1/s^2))
+7/16s√(s)(15+25/s+1/s^2), s>0.
Proof. By Schwarz's inequality, we see that
μ _3(x)=𝔼|S_n(x)/n-x||S_n(x)/n-x| ^2 ≤√(μ_2(x)μ _4(x)).
Similarly,
μ_5(x)≤√(μ_4(x)μ _6(x)).
Thus, the result follows from (<ref>), (<ref>), (<ref>), and some simple computations.
□
Let β _m be as in (<ref>), m∈ℕ. For any 0≤ z≤ 1, we have
𝔼φ^m(x)/φ^m(x+(z-x)β_m)≤ 1+m/2(m+1)|z-x|/φ^2(x)+m/4(m+1)|z-x|^2/φ^4(x)+m+4/4(m+1)|z-x|^3/φ^6(x).
Proof. We claim that
𝔼1/(1-rβ_m)^m/2≤ 1+m/2(m+1)r+m/4(m+1)r^2+m+4/4(m+1)r^3, 0≤ r≤ 1.
Indeed, we see from (<ref>) that
𝔼1/(1-β_m)^m/2=m∫ _0^1(1-θ)^m/2-1dθ =2.
Using the binomial expansion and the fact that 𝔼β_m=1/(m+1) and 𝔼β_m^2=2/((m+1)(m+2)), we have
𝔼1/(1-rβ_m)^m/2≤ 1+m/2(m+1)r+m/4(m+1)r^2+r^3∑_k=3^∞(-1)^k-m/2 k𝔼β_m^k
=1+m/2(m+1)r+m/4(m+1)r^2+r^3(-1-m/2(m+1)-m/4(m+1)+∑_k=0^∞(-1)^k-m/2 k𝔼β_m^k).
This shows claim (<ref>), since the series in (<ref>) equals 2, as follows from (<ref>).
Assume first that 0≤ z≤ x. As noted in Sangüesa <cit.>, a useful property of the weight function φ is
φ^2(rx)≥ rφ^2(x), 0≤ r≤ 1.
As a consequence,
φ^m(x+(z-x)β_m)=φ^m(x(1-x-z/xβ_m))≥(1-x-z/xβ_m)^m/2φ^m(x).
We therefore have from (<ref>)
𝔼φ^m(x)/φ^m(x+(z-x)β_m)≤𝔼1/(1-x-z/xβ_m)^m/2
≤ 1+m/2(m+1)|z-x|/φ^2(x)(1-x)+m/4(m+1)|z-x|^2/φ^4(x)(1-x)^2+m+4/4(m+1)|z-x|^3/φ^6(x)(1-x)^3,
thus showing the result in this case. If x≤ z≤ 1, the proof is similar by observing that φ^2(x)=φ^2(1-x) and
φ^2(x+(z-x)β_m)=φ^2(1-x-(z-x)β_m)=φ^2((1-x)(1-z-x/1-xβ_m)).
This completes the proof.
□
We are in a position to estimate the last term in (<ref>).
Let H_n and K(s) be as in Theorem <ref> and (<ref>), respectively. Then,
1/6|𝔼(B_ng)^(3)(τ_3(x))(S_n(x)/n-x)^3|≤√(n+1/n)H_n-2K(nφ^2(x))/3‖φ^2g^(2)‖/2n.
Proof. Fix z=S_n(x)/n. Applying Lemma <ref> and Lemma <ref> with m=3, we get
|𝔼(B_ng)^(3)(x+(z-x)β_3)|≤‖φ^3(B_ng)^(3)‖/φ^3(x)𝔼φ^3(x)/φ^3(x+(z-x)β _3)
≤√(n+1)H_n-2‖φ^2g^(2)‖/φ^3(x)(1+3/8|z-x|/φ^2(x)+3/16|z-x|^2/φ^4(x)+7/16|z-x|^3/φ^6(x)).
We multiply both sides of this inequality by |z-x|^3/6 and replace z by S_n(x)/n. Recalling (<ref>), we obtain
1/6|𝔼(B_ng)^(3)(τ_3(x))(S_n(x)/n-x)^3|
≤√(n+1/n)H_n-2/3‖φ^2g^(2)‖/2nn√(n)/φ^3(x)(μ_3(x)+3/8μ_4(x)/φ^2(x)+3/16μ_5(x)/φ^4(x)+7/16μ_6(x)/φ^6(x)).
Thus, the conclusion follows from Lemma <ref>.
□
We state the main result of this section.
Assume (H_1). Then,
(1-√(n+1/n)H_n-2K(a)/3)‖φ^2g^(2)‖/2n≤√(2)+1 /√(2)‖ B_nf-f ‖ .
Proof. Let x∈ R_n(a). Starting from (<ref>) and applying Lemma <ref>, we obtain
|φ^2(x)g^(2)(x)|/2n≤‖ B_nf-f‖+1/√(2)‖ B_nf-f‖ +1/6|𝔼(B_ng)^(3)(τ_3(x))(S_n(x)/n-x)^3|
≤√(2)+1/√(2)‖ B_nf-f‖ +√(n+1/n)H_n-2K(a)/3‖φ^2g^(2)‖/2n,
where the last inequality follows from Lemma <ref>. Thus, the result follows from assumption (H_1).
□
Theorem <ref> makes sense only when the term in front of ‖φ^2g^(2)‖/(2n) is positive. This, in turn, implies that the parameter a cannot be close to 0, because of the definition of K(a) given in (<ref>).
§ ESTIMATES IN THE NON-CENTRAL REGION
We define the non-central region as
R̃_n(a):={x∈ (0,1/2]: nφ ^2(x)<a }, a>0,
and make the alternative assumption
(H_2) ‖φ ^2g^(2)‖=‖φ ^2(B_nf)^(2)‖ =sup{|φ ^2(x)(B_nf)^(2)(x)| : x∈R̃_n(a)}.
In this section, we assume that x∈R̃_n(a). The starting point is the following identity involving the second derivatives of the iterates of the Bernstein polynomials
φ^2(x)g^(2)(x)=∑_k=0^mφ^2(x)( (B_n^k+1g)^(2)(x)-(B_n^kg)^(2) (x))+φ^2(x)(B_n^m+1g)^(2)(x), m∈ℕ.
We recall here the following probabilistic representation of the terms in (<ref>) given in Sangüesa <cit.>. Denote
W_n(θ)=S_n-2(θ)+V/n, 0≤θ≤ 1,
where V is defined in (<ref>). Let (S_n-2,i(θ), 0≤θ≤ 1)_i≥ 1 and (V_i)_i≥ 1 be sequences of independent copies of (S_n-2(θ), 0≤θ≤ 1) and V, respectively. Define
W_n,i(θ)=S_n-2,i(θ)+V_i/n, 0≤θ≤ 1, i∈ℕ.
Clearly, (W_n,i(θ), 0≤θ≤ 1)_i≥ 1 is a sequence of independent copies of (W_n(θ), 0≤θ≤ 1), as defined in (<ref>). We consider the subordinated stochastic processes (W_n^(m)(θ), 0≤θ≤ 1)_m≥ 1 inductively defined as follows
W_n^(1)(θ)=W_n,1(θ), W_n^(m+1)(θ)=W_n,m+1(W_n^(m)(θ)), 0≤θ≤ 1, m∈ℕ.
The following result was shown by Sangüesa <cit.>. We give here a short proof of it for the sake of completeness.
Let m∈ℕ_0. Then,
(B_n^m+1g)^(2)(x)=(n-1/n)^m+1𝔼g^(2)(W_n^(m+1)(x)).
Proof. For m=0, the result follows from (<ref>) and (<ref>). Let 0≤θ≤ 1. Suppose that
(B_n^mg)^(2)(θ)=(n-1/n)^m𝔼g^(2)(W_n^(m)(θ)),
for some m∈ℕ. Again by (<ref>) and (<ref>), we have
(B_n^m+1g)^(2)(θ)=(n-1/n)^m𝔼(B_ng)^(2)(W_n^(m)(θ))=(n-1/n)^m+1𝔼g^(2)( W_n,m+1(W_n^(m)(θ)) ),
=(n-1/n)^m+1𝔼g^(2)( W_n^(m+1)(θ) ),
thus completing the proof.
□
Observe that if x∈R̃_n(a), then
nx<b_n:=2a/1+√(1-4a/n), n>4a.
For this reason, we assume in this section that n>4a. The main quantity to be estimated is the inverse moment
1/n𝔼1/φ^2(W_n^(m)(x)), m∈ℕ.
To bound (<ref>), we will proceed by induction on m. In this respect, we define the iterates
α_0(θ)=θ, α_1(θ)=1-e^-θ, α_m+1(θ)=α_1(α_m(θ)), 0≤θ≤ 1, m∈ℕ.
Note that α_m(θ)→ 0, as m→∞, for any 0<θ≤ 1. However, the speed of convergence is very slow, since α_1^(1)(0)=1. The reason for these iterates comes from the following inequality, which follows from (<ref>):
𝔼e^-θ S_n-2(x)=(1-x(1-e^-θ))^n-2≤ e^-(n-2)xα_1(θ)≤ e^2b_n/ne^-nxα_1(θ), 0≤θ≤ 1.
Also, we have from (<ref>)
𝔼e^-θ V=(𝔼e^-θ U_1)^2=(1-e^-θ/θ)^2=(α_1(θ)/θ)^2, 0≤θ≤ 1.
For any m∈ℕ, we have
1/n𝔼1/φ^2(W_n^(m)(x)) ≤ e^2b_n/n(2log27/16∫_0^1(α_m-1(θ)/θ)^2e^-nxα_m-1(θ)dθ
+(log 4-2log27/16)α_m-1^2(1)e^-nxα_m-1(1)+ϵ _n),
where b_n is defined in (<ref>) and
ϵ_n=4/nlog27/16+e^-n/2.
Proof. We start with the case m=1. Using (<ref>), we have as in (<ref>)
1/n𝔼1/φ^2(W_n^(1)(x))=1/n𝔼1/φ^2(S_n-2(x)+V/n)=𝔼1/S_n-2(x)+V+𝔼1/n-2-S_n-2(x)+2-V
=𝔼1/S_n-2(x)+V+𝔼1/S_n-2(1-x)+V.
We claim that
𝔼1/S_n-2(x)+V≤ e^2b_n/n(2log27/16∫_0^1e^-nxθdθ +(log 4-2log27/16)e^-nx).
In fact, following the lines of the proof of Lemma <ref>, it can be seen that
𝔼1/S_n-2(x)+V≤ 2log27/16𝔼1/S_n-2(x)+1+(log 4-2log27/16)P(S_n-2(x)=0).
On the other hand, it follows from (<ref>) that
𝔼1/S_n-2(x)+1=1/(n-1)xP(S_n-1(x)≥ 1)=1/x∫_0^x(1-u)^n-2du
≤∫_0^1e^-(n-2)xθdθ≤ e^2b_n/n∫_0^1e^-nxθdθ,
where the last inequality follows from (<ref>). This and (<ref>) show claim (<ref>).
Finally, since 0<x≤ 1/2, we see from (<ref>) that S_n-2(1-x)≥ S_n-2(1/2). We thus have from (<ref>)
𝔼1/S_n-2(1-x)+V≤𝔼1/S_n-2(1/2)+V≤ e^2b_n/n(2log27/16∫_0^1e^-nθ /2dθ +(log 4-2log27/16)e^-n/2)
≤ e^2b_n/n(4/nlog27/16 +e^-n/2),
which, in conjunction with (<ref>) and (<ref>), shows the result for m=1.
For m>1, the proof follows by induction on m, taking into account (<ref>), (<ref>), and the fact that for any 0≤θ≤ 1 and j∈ℕ_0, we have
𝔼e^-nα_j(θ)W_n(x)=𝔼e^-α_j(θ)(S_n-2(x)+V)=𝔼e^-α_j(θ)S_n-2(x)𝔼e^-α_j(θ)V≤ e^2b_n/ne^-nα_j+1(θ)x(α_j+1(θ)/α_j(θ))^2,
as follows from (<ref>)-(<ref>). This completes the proof.
□
Denote by
J_n(m,a)=sup _x∈R̃_n(a)𝔼φ^2(x)/φ^2(W_n^(m)(x)), m∈ℕ.
Let m∈ℕ. Assume that b_n≤ 1/α_m-1(1). Then,
J_n(m,a)
≤ b_ne^2b_nm/n(2log27/16∫_0^1(α_m-1(θ)/θ)^2e^-b_nα_m-1(θ)dθ
+(log 4-2log27/16)α_m-1^2(1)e^-b_nα_m-1(1)+ϵ _n),
where b_n and ϵ_n are defined in (<ref>) and (<ref>), respectively.
Proof. Set λ=nx. The result readily follows from Lemma <ref> after observing the following. First, (<ref>) implies that λ <b_n. Second, the function h(λ)=λ e^-λα_m-1(1), λ≥ 0, increases in λ < b_n≤ 1/α_m-1(1).
□
The following is the main result of this section.
Let m∈ℕ and let b_n be as in (<ref>). Suppose that b_n≤ 1/α_i-1(1), for some 1≤ i ≤ m. Under assumption (H_2), we have
‖φ^2g^(2)‖/n(1-J_n(m+1,a))≤√(2)(i+∑_k=i^mJ_n(k,a))‖ B_nf-f‖ ,
where J_n(m,a) is defined in (<ref>).
Proof. Our starting point is identity (<ref>). Let k=0,1,… ,i-1. By the second equality in (<ref>), we have
|φ^2(x)((B_n^k+1g)^(2)(x)-(B_n^kg)^(2)(x) )| =
|φ^2(x)( B_n ( B_n^k+1f-B_n^kf ))^(2)(x) |
=2/φ^2(x)|𝔼(B_n^k+1f-B_n^kf)(S_n(x)/n)K_2(x;S_n(x))|
≤‖ B_nf-f‖2/φ^2(x)𝔼| K_2(x;S_n(x))|≤ n√(2)‖ B_nf-f‖ ,
where the last inequality follows from (<ref>) and Schwarz's inequality.
Let k=i,… ,m and set h=B_ng-g. By Lemma <ref> and (<ref>), we get
|φ^2(x)((B_n^k+1g)^(2)(x)-(B_n^kg)^(2)(x) )| =
|φ^2(x) (B_n^kh)^(2)(x) |
=(n-1/n)^k|φ^2(x)𝔼h^(2)(W_n^(k)(x))|
≤‖φ^2(B_ng-g)^(2)‖ J_n(k,a)
≤ n√(2)‖ B_nf-f‖ J_n(k,a),
where the last inequality follows from Lemma <ref>. Similarly,
|φ^2(x)(B_n^m+1g)^(2)(x)| =(n-1/n)^m+1|φ^2(x)𝔼g^(2)(W_n^(m+1)(x))|≤‖φ^2g^(2)‖ J_n(m+1,a).
In view of (<ref>) and (<ref>), the result follows from (<ref>) and assumption (H_2).
□
Obviously, Theorem <ref> makes sense only when J_n(m+1,a)<1. This can be achieved by choosing m large enough, as follows from the upper bound in Lemma <ref> and the fact that α _m(θ)→ 0, as m→∞, 0<θ≤ 1.
§ PROOF OF THEOREM <REF>
Upper bound.
Let a>0. From Theorem <ref>, we have
lim sup_n→∞sup_f∈𝔉ω_2^φ(f;1/√(n))/‖ B_nf-f‖≤ 4+log 4 lim sup_n→∞sup_f∈𝔉n^-1‖φ^2(B_nf)^(2)‖/‖ B_nf-f‖.
Assume (H_1). Applying Theorems <ref> and <ref>, the right-hand side in (<ref>) is bounded above by
4+√(2)(√(2)+1)/(1-0.99 K(a)/3)log 4 ,
where K(a) is defined in (<ref>).
Assume (H_2). Let m∈ℕ and k=1,… ,m. Define
J(k,a)=a(2 L_k(a)log27/16+(log 4-2log27/16)α_k-1^2(1)e^-aα_k-1(1)),
where
L_k(a)=∫_0^1(α_k-1(θ)/θ)^2e^-aα_k-1(θ)dθ.
By (<ref>) and Lemma <ref>, we see that
lim sup_n→∞J_n(k,a)≤ J(k,a).
Therefore, by Theorem <ref>, the right-hand side in (<ref>) can be bounded above by
4+√(2)(i+∑_k=i^mJ(k,a))/(1-J(m+1,a)) log 4,
where i∈{1,… ,m} is the first integer such that a<1/α_i-1(1).
After numerically experimenting with expressions (<ref>)-(<ref>), with the aid of Mathematica 10.2, we choose a=7.2, m=20, and i=13. Notice that this software system allows one to handle the involved iterates α _k(θ), k=0,… , m, defined in (<ref>), and to evaluate the integrals L_k(a) with the command NIntegrate. As a result, both expressions (<ref>) and (<ref>) turn out to be less than 74.8. This shows the upper bound in Theorem <ref>.
Lower bound.
For each n∈ℕ, define the function f_n∈ C[0,1] as follows
f_n(2-√(2)/n)=1, f_n(1/n)=-0.8, f_n(2/n)=-1, f_n(3/n)=0.04, f_n(2+√(2)/n)=1.
If y∈ [(2-√(2))/n,(2+√(2))/n], then f_n(y) is piecewise linear interpolating the values in (<ref>), whereas if y∈ [0,1]∖ [(2-√(2))/n,(2+√(2))/n], then f_n is constantly 1.
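Before proceeding, the construction can be checked numerically; the sketch below (ours, not part of the proof) evaluates ‖ B_nf_n-f_n‖ on a grid and should stay close to the value 0.8 established below.

```python
# Sketch (ours): numerical plausibility check that ||B_n f_n - f_n|| <~ 0.8.
import numpy as np
from scipy.stats import binom

def make_fn(n):                            # needs n >= 4 so the knots are increasing
    xs = np.array([0.0, (2 - np.sqrt(2)) / n, 1 / n, 2 / n, 3 / n, (2 + np.sqrt(2)) / n, 1.0])
    ys = np.array([1.0, 1.0, -0.8, -1.0, 0.04, 1.0, 1.0])
    return lambda t: np.interp(t, xs, ys)

def sup_error(n, grid_size=4000):
    f = make_fn(n)
    k = np.arange(n + 1)
    fk = f(k / n)
    xs = np.linspace(0.0, 1.0, grid_size)
    return max(abs(np.dot(fk, binom.pmf(k, n, x)) - f(x)) for x in xs)

for n in (50, 200, 1000):
    print(n, sup_error(n))
```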
Taking the weighted second order difference of f_n at the point 2/n, it can be checked that
lim _n→∞ω _2^φ(f_n;1/√(n))=4.
On the other hand, set λ=nx and denote g(λ)=f_n(λ /n), 0≤λ≤ n. It turns out that
B_nf_n(x)-f_n(x)=𝔼f_n(S_n(x)/n)-f_n(λ/n)=𝔼g(S_n(λ/n))-g(λ), 0≤λ≤ n.
Let λ _0>2+√(2) and assume that n≥λ _0 is large enough. Using (<ref>), (<ref>), and (<ref>), we have for λ > λ _0
|𝔼g(S_n(λ /n))-g(λ)|
=|𝔼(g(S_n(λ /n))-g(λ))1_{S_n(λ/n)≤ 2+√(2)}|≤ 2P(S_n(λ /n)≤ 3)
≤ 2P(S_n(λ _0 /n)≤ 3) ≤ 2P(N_λ_0≤ 3)+2d_TV(S_n(λ_0 /n),N_λ_0)≤ 2P(N_λ_0≤ 3)+2D(λ_0)/n.
Recalling (<ref>), denote by
G(λ)=𝔼g(N_λ) =P(N_λ=0)-0.8P(N_λ=1)
-P(N_λ=2)+0.04P(N_λ=3)+1-P(N_λ≤ 3), 0≤λ≤λ_0.
By (<ref>), (<ref>), and (<ref>), we have for λ≤λ _0
|𝔼g(S_n(λ /n))-g(λ)|≤|𝔼g(N_λ)-g(λ)| +|𝔼g(S_n(λ /n))-𝔼g(N_λ)|
≤| G(λ)-g(λ) |
+d_TV(S_n(λ /n),N_λ) ≤| G(λ)-g(λ) | +D(λ_0)/n.
Choose λ_0 and n large enough so that the upper bound in (<ref>) is less than 1/2, say. On the other hand, numerical computations, performed again with Mathematica 10.2, show that
sup _0≤λ≤λ _0| G(λ)-g(λ)|≤ 0.79 …
We therefore have from (<ref>), (<ref>), and (<ref>)
‖ B_nf_n-f_n‖≤ 0.8,
for large enough n. By (<ref>), this implies that
lim sup_n→∞sup _f∈𝔉ω_2^φ(f;1/√(n))/‖ B_nf-f‖≥ lim sup_n→∞ ω_2^φ(f_n;1/√(n))/‖ B_nf_n-f_n‖≥4/0.8=5.
This shows the lower bound in Theorem <ref>, and concludes the proof.
□
§ PROOF OF THEOREM <REF>
Let f∈𝔊. By the first equality in (<ref>), we have
|φ^2(x)(B_nf)^(2)(x)|≤ n^2φ^2(x)ω_2 ( f;1/n).
Let a>0. Define
N_1:=N_1(a,f)={n∈ℕ:‖φ^2 (B_nf)^(2)‖ is attained on R̃_n(a)}, N_2:=ℕ∖ N_1.
Suppose that N_1 is not finite. By (<ref>) and (<ref>), we see that
lim_n→∞, n∈ N_1‖φ^2 (B_nf)^(2)‖≤ a lim_n→∞, n∈ N_1 n ω_2 (f;1/n)=0.
We thus have from Theorem <ref>
lim sup_n→∞, n∈ N_1ω_2^φ(f;1/√(n))/‖ B_nf-f‖≤ 4+ log 4 lim sup_n→∞, n∈ N_1‖φ ^2(B_nf)^(2)‖/(n‖ B_nf-f‖)=4,
where we have used that f∈𝔉 together with the well known fact that ‖ B_nf-f‖ =o(1/n) if and only if f is affine.
Suppose that N_2 is not finite. By (<ref>), we have
lim sup_n→∞, n∈ N_2ω_2^φ(f;1/√(n))/‖ B_nf-f‖≤ 4+√(2)(√(2)+1)/(1-0.99 K(a)/3)log 4 .
Since a is arbitrary, and K(s)→√(3), as s→∞ (see (<ref>)), inequality (<ref>) follows from (<ref>) and (<ref>).
Finally, let f∈𝔉∩ C^1[0,1] and 0 <h ≤δ. We have from (<ref>)
| f(x+h)-2f(x)+f(x-h)| = h |𝔼(f^(1)(x+hβ _1)-f^(1)(x-hβ _1)) |≤ hω _1(f^(1);2h).
By (<ref>), this readily implies that f∈𝔊. This concludes the proof.
□
Funding: The first author is supported by Research Project DGA (E48_23R). The second author is supported by Junta de Andalucía (Research Group FQM-0178).
unsrtnat
99
abelleviatanrasa
Abel, U., Leviatan, D., Raşa, I.: On the q-monotonicity preservation of Durrmeyer-type operators. Mediterr. J. Math. (2021), 18:173.
acurasasteopoaie
Acu, A.M., Raşa, I., Şteopoaie, A.E.: Bernstein–Kantorovich operators, approximation and shape preserving properties. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A-Mat. (2024), 118:107.
adellanoz
Adell, J.A., Anoz, J.M.: Signed binomial approximation of binomial mixtures via differential calculus for linear operators. J. Statist. Plann. Inference 138 (2008), 3687–3695.
adellanozlekuona
Adell, J.A., Anoz, J.M., Lekuona, A.: Exact values and sharp estimates for the total variation distance between binomial and Poisson distributions. Adv. Appl. Prob. 40 (2008), 1033–1047.
rm2019
Adell, J.A., Cárdenas-Morales, D.: On the 10th central moment of the Bernstein polynomials. Results Math. (2019), 74:113.
rm2022
Adell, J.A., Cárdenas-Morales, D.: Asymptotic and non-asymptotic results in the approximation by Bernstein polynomials. Results Math. (2022), 77:166.
adelllekuona
Adell, J.A., Lekuona, A.: Binomial convolution and transformations of Appell polynomials. J. Math. Anal. Appl. 456 (2005), 16–33.
adellsanguesa
Adell, J.A., Sangüesa, C.: A strong converse inequality for gamma-type operators. Constr. Approx. 15 (1999), 537–551.
barbourhall
Barbour, A.D., Hall, P.: On the rate of Poisson convergence. Math. Proc. Cambridge Philos. Soc. 95(3) (1984), 473–480.
bustamante
Bustamante, J.: Estimates of positive linear operators in terms of second order moduli. J. Math. Anal. Appl. 345 (2008), 203–212.
bustamantee
Bustamante, J.: Baskakov-Kantorovich operators reproducing affine functions: inverse results. J. Numer. Anal. Approx. Theory 51(1) (2022), 67–82.
bustamantequesada
Bustamante, J., Quesada, J.M.: A property of Ditzian-Totik second order moduli. Appl. Math. Lett. 23 (2010), 576–580.
chenditzian
Chen, W., Ditzian, Z.: Strong converse inequality for Kantorovich polynomials. Constr. Approx. 10 (1994), 107–129.
chihara
Chihara, T.S.: An Introduction to Orthogonal Polynomials. Gordon and Breach, New York (1978).
deheuvelspfeifer
Deheuvels, P., Pfeifer, D., Puri, M.L.: A new semigroup technique in Poisson approximation. Semigroup Forum 38(2) (1989), 189–201.
dellavecchia
Della Vecchia, B.: Direct and converse results by rational operators. Constr. Approx. 12 (1996), 271–285.
ditzianivanov
Ditzian, Z., Ivanov, K.G.: Strong converse inequalities. J. Anal. Math. 61 (1993), 61–111.
ditziantotik
Ditzian, Z., Totik, V.: Moduli of Smoothness. Springer-Verlag, New York (1987).
finta
Finta, Z.: Direct and converse results for q-Bernstein operators. Proc. Edinb. Math. Soc. 52(2) (2009), 339–349.
gadjev
Gadjev, I.: Strong converse result for uniform approximation by Meyer-König and Zeller operator. J. Math. Anal. Appl. 428 (2015), 32–42.
gavreaetall
Gavrea, I., Gonska, H.H., Păltănea, R., Tachev, G.: General estimates for the Ditzian-Totik modulus. East J. Approx. 9(2) (2003), 175–194.
guoqi
Guo, SH., Qi, Q.: Strong converse inequalities for Baskakov operators. J. Aprox. Theory 124(2) (2003), 219–231.
knoopzhou1994
Knoop, H.B., Zhou, X.L.: The lower estimate for linear positive operators, II. Results Math. 25 (1994), 315–330.
knoopzhou1995
Knoop, H.B., Zhou, X.L.: The lower estimate for linear positive operators, I. Constr. Approx. 11 (1995), 53–66.
lopezsalamanca
López-Blázquez, F., Salamanca Miño, B.: Binomial approximation to hypergeometric probabilities. J. Statist. Plann. Inference 87 (2000), 21–29.
paltanea2004
Păltănea, R.: Approximation Theory Using Positive Linear Operators. Birkhäuser Boston, Inc., Boston, MA (2004).
paltanea2018
Păltănea, R.: Asymptotic constant in approximation of twice differentiable functions by a class of positive linear operators. Results Math. 73(2) (2018), paper no. 64, 10 pp.
roos
Roos, B.: Binomial approximation to the Poisson binomial distribution. The Krawtchouk expansion. Theory Probab. Appl. 45 (2000), 258–272.
sanguesa
Sangüesa, C.: Lower estimates for centered Bernstein-type operators. Constr. Approx. 18(1) (2002), 145–159.
totik
Totik, V.: Strong converse inequalities. J. Approx. Theory 76 (1994), 369–375.
totikk
Totik, V.: Approximation by Bernstein polynomials. Amer. J. Math. 116 (1994), 995–1018.
|
http://arxiv.org/abs/2409.03215v1 | 20240905032222 | xLAM: A Family of Large Action Models to Empower AI Agent Systems | [
"Jianguo Zhang",
"Tian Lan",
"Ming Zhu",
"Zuxin Liu",
"Thai Hoang",
"Shirley Kokane",
"Weiran Yao",
"Juntao Tan",
"Akshara Prabhakar",
"Haolin Chen",
"Zhiwei Liu",
"Yihao Feng",
"Tulika Awalgaonkar",
"Rithesh Murthy",
"Eric Hu",
"Zeyuan Chen",
"Ran Xu",
"Juan Carlos Niebles",
"Shelby Heinecke",
"Huan Wang",
"Silvio Savarese",
"Caiming Xiong"
] | cs.CL | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
Optimal Regularity for Fully Nonlinear Nonlocal Equations with Unbounded Source Terms
Disson S. dos Prazeres and Makson S. Santos
Received 16 July 2024; accepted 04 September 2024
=====================================================================================
§ ABSTRACT
Autonomous agents powered by large language models (LLMs) have attracted significant research interest. However, the open-source community faces many challenges in developing specialized models for agent tasks, driven by the scarcity of high-quality agent datasets and the absence of standard protocols in this area.
We introduce and publicly release xLAM, a series of large action models designed for AI agent tasks. The xLAM series includes five models with both dense and mixture-of-expert architectures, ranging from 1B to 8x22B parameters, trained using a scalable, flexible pipeline that unifies, augments, and synthesizes diverse datasets to enhance AI agents' generalizability and performance across varied environments.
Our experimental results demonstrate that xLAM consistently delivers exceptional performance across multiple agent ability benchmarks, notably securing the 1st position on the Berkeley Function-Calling Leaderboard, outperforming GPT-4, Claude-3, and many other models in terms of tool use. By releasing the xLAM series, we aim to advance the performance of open-source LLMs for autonomous AI agents, potentially accelerating progress and democratizing access to high-performance models for agent tasks.
Models: https://huggingface.co/collections/Salesforce/xlam-models-65f00e2a0a63bbcd1c2dade4huggingface.co/Salesforce/xLAM-models
GitHub: https://github.com/SalesforceAIResearch/xLAMgithub.com/SalesforceAIResearch/xLAM
§ INTRODUCTION
The field of autonomous agents has witnessed significant advancements in recent years, with large language models (LLMs) playing a crucial role in enhancing agent capabilities across diverse tasks. Researchers have made substantial progress in developing sophisticated frameworks <cit.> and specialized environments <cit.> to enhance agent capabilities, such as tool use <cit.> and web browsing <cit.>. Concurrently, comprehensive benchmarks like AgentBench <cit.>, ToolBench <cit.>, and AgentBoard <cit.> have been established to rigorously assess agent performance in reasoning, planning, and multi-turn interactions.
While proprietary LLMs developed by industry leaders have demonstrated competitive performance in various agent tasks <cit.>, the open-source community faces limited choices for specialized models in this domain. This scarcity stems from several challenges in adapting open-source LLMs to agent tasks, primarily due to the lack of comprehensive, high-quality datasets and the heterogeneity of existing data formats. These factors complicate the unification of diverse datasets and obstruct the learning of transferable knowledge across different agent tasks.
Recently, the agent research community has intensified efforts in open-source agent data processing and model training <cit.>.
However, these works still face challenges in managing complex environments and generalizing to new scenarios, primarily due to limitations in the collected agent data.
A major obstacle is the homogeneity of content and format in existing datasets, resulting in models that lack diversity across various tasks and struggle to adapt to new or slightly different data structures in practical applications. While previous efforts have attempted to design pipelines for unifying data, they typically cover only a few scenarios or lack flexibility in their unified formats. For instance, Lumos <cit.> primarily addresses question answering, web agents, and mathematical tasks involving planning and grounding; while AgentOhana <cit.>, despite encompassing a more diverse range of environments, lacks an extendable unified format to accommodate new environments.
Moreover, open-source datasets often suffer from quality issues, such as incorrect agent outputs, hallucinated actions, and repeated interaction turns within trajectories <cit.>. The lack of detailed analysis and understanding of agent data further complicates these challenges, hindering the development of robust and versatile open-source agent models. Addressing these challenges is crucial for advancing the field of open-source agent models and bridging the performance gap with proprietary LLMs in agent tasks.
In this work, we introduce and open-source xLAM, a series of powerful models with varying sizes. This diverse set is tailored for a variety of applications, with smaller models (1B and 7B) optimized for on-device deployment, while larger models (8x7B and 8x22B) are designed to tackle more challenging tasks. Alongside the model release, we offer several insights and lessons learned from our experience in agent model training:
* Data Processing: We highlight the importance of data unification and augmentation in enhancing dataset diversity and mitigating overfitting. Our developed dataset preprocess and augmentation pipeline significantly improves the generalizability of agent models across diverse environments.
* Data Synthesis: We showcase the impact of scalable, high-quality data synthesis on agent model performance. Our synthetic dataset enabled xLAM models to achieve 4 of the top 20 positions on the Berkeley Function Calling Leaderboard, including securing the top-1 spot (Fig. <ref>), with smaller models achieving performance comparable to much larger counterparts, showing great potential in this direction.
We evaluate the xLAM series on public agent benchmarks, demonstrating exceptional performance across various agent tasks. By open-sourcing these models, we aim to advance open-source agent models and provide valuable insights into data processing and synthesis techniques, addressing key challenges in developing competitive alternatives to proprietary models.
§ RELATED WORK
§.§ LLM Agents
Recent advancements in LLMs have significantly enhanced their utility in various agent tasks. Several innovative prompt techniques have been developed to improve performance, including Chain of Thought (COT) <cit.>, ReACT <cit.>, and Reflection <cit.>. Additionally, considerable efforts have been made to fine-tune open-sourced agent models for better capabilities <cit.>. These include enhancements in data collection and processing to facilitate effective agent learning <cit.>, covering a range from simple question answering to more complex scenarios like web interactions, tool operations, reasoning, and planning.
However, many of these agent frameworks still depend on proprietary models as their core engine to achieve optimal performance, revealing a substantial gap in the availability of high-quality open-source models for these tasks.
§.§ Agent Benchmarks
A variety of benchmarks have been established to assess the abilities of LLM agents across diverse scenarios <cit.>. Notably, AgentBench <cit.>, Mint-Bench <cit.>, and AgentBoard <cit.> encompass environments ranging from code generation and games to web interactions and reasoning tasks. ToolBench <cit.> specifically evaluates multi-turn reasoning and tool-usage abilities, while the Berkeley Function-Calling Leaderboard <cit.> broadly assesses models' capabilities in function calling across various contexts. These recent advancements in benchmarking have made the evaluation of agent models more accessible and standardized.
§ DATA PROCESSING PIPELINE
In this section, we discuss the data pipeline for training xLAM, including data unification, augmentation, quality verification, general instruction data synthesis, and preference data generation.
§.§ Data Unification
Existing agent datasets are collected from diverse environments and designed in various formats, introducing noise and complicating data augmentation and verification. Models like NexusRaven <cit.>, Gorilla-Openfunctions <cit.>, and AgentOhana <cit.> have demonstrated superior performance in function-calling, suggesting that a well-defined, universal format could significantly enhance model performance. By standardizing the format of existing data, we can reduce noise and facilitate easier data augmentation and quality verification, leading to a more efficient and robust framework for model training and evaluation. Furthermore, a standardized format ensures consistency, simplifies model training, and enhances the model's ability to generalize across various benchmarks.
Function-calling formats form the basis for how models understand and execute tasks, motivating us to design our unified data format in a function-calling style.
As illustrated in Figure <ref>,
the unified format consists of several modules: task instruction, available tools, format instruction, few-shot examples, query, and steps.
Specifically, the available tools define the agent's action space, and the format instruction specifies the output format the agent should follow when generating a response. In each step, the agent's output, the environment's feedback/execution results, and the user's follow-up input are organized into a dictionary.
It's quite common for there to be purely conversational interactions between users and agents that don't trigger any APIs or receive corresponding observations. In these instances, the related entry values would simply remain empty.
This unified format is compatible with various environments and tasks, making our data processing pipeline adaptable to different datasets and scalable to large amounts of data. Moreover, the modularized design allows for fine-grained data augmentation and quality verification, which are essential in improving agent data quality. For example, by unifying all the available tools and tool calls, we can easily inspect for hallucination and function-call errors, and apply various augmentation techniques.
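To make this concrete, below is an illustrative rendering of one trajectory in such a unified format; all field names and values are our own plausible guesses, not the exact released schema.

```python
# Illustrative example (field names/values are ours, not the official schema).
unified_example = {
    "task_instruction": "You are a helpful assistant with access to the tools below.",
    "available_tools": [{
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "parameters": {"location": {"type": "string", "required": True}},
    }],
    "format_instruction": "Respond with a JSON object containing thought and tool_calls.",
    "few_shot_examples": [],
    "query": "What is the weather in Palo Alto today?",
    "steps": [{
        "agent_output": {
            "thought": "The user wants today's weather; call the weather tool.",
            "tool_calls": [{"name": "get_weather", "arguments": {"location": "Palo Alto"}}],
        },
        "environment_feedback": {"temp_f": 72, "condition": "sunny"},
        "user_input": "",          # empty: this turn is tool use only
    }],
}
```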
§.§ Data Augmentation
Our data augmentation strategy focuses on improving the diversity of the data. It involves applying various transformations to the existing dataset, thereby generating new, synthetic data samples. The data unification step significantly simplifies the application of various augmentation techniques. A standardized data format ensures consistency and ease of implementation, allowing for more efficient augmentation processes. Specifically, the augmentation techniques we adopted can be categorized as prompt format augmentation and instruction-following augmentation.
Prompt Format Augmentation: Prompt format augmentation focuses on creating various prompt formats based on the structured, unified data format. The format augmentation can be further divided into two categories: 1) Order Shuffling. In the unified format, the available tools are provided in a list, and each tool contains the name, description, and parameters. To avoid model overfitting to the specific order of the tools, we randomly shuffle the tool list. Furthermore, we also shuffle the order of the name, description, parameters, and within the parameters to present the information in different ways. We do the same thing within the tool_calls in each step. Additionally, we also shuffle the order of different sections of the input, including task instruction, tools, format instruction, few-shot examples etc. 2) Concatenation Tokens. Each training data point is a pair of input and output sequences. To convert the structured unified format to the training prompt, we use special tokens to concatenate different sections into one sequence. We create several different special token styles, including "[START/END OF QUERY]", "<query></query>", and plain text.
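As a small illustration of the order-shuffling idea (our own sketch; the released pipeline may differ), one can permute both the tool list and the key order within each tool:

```python
import random

def shuffle_tool_list(tools, seed=0):
    """Copy `tools`, permuting tool order and each tool's key order."""
    rng = random.Random(seed)
    shuffled = []
    for tool in tools:
        keys = list(tool)
        rng.shuffle(keys)
        shuffled.append({k: tool[k] for k in keys})
    rng.shuffle(shuffled)
    return shuffled

tools = [{"name": "get_weather", "description": "...", "parameters": {}},
         {"name": "get_news", "description": "...", "parameters": {}}]
print(shuffle_tool_list(tools))   # same content, different serialization order
```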
Instruction-Following Augmentation:
Instruction-following augmentation focuses on adding diversity to the instructions in order to improve the model's instruction-following capability. It involves rephrasing existing instructions and adding new instructions, without introducing inaccuracies or inconsistencies. Therefore, verification of the new instructions is a crucial step for this type of augmentation. We employ two methods for instruction-following augmentation: 1) Task Instruction Rephrasing. We rephrase the task instructions using powerful LLMs to accommodate various input styles from users. To ensure the rephrased instructions still align with the original version, we verify them by prompting the LLMs with the rephrased instructions and checking whether the LLMs can still follow them and generate correct function calls. 2) Format Instruction-Following. In our unified format, the output format is a JSON string with thought and tool_calls. To avoid the model overfitting on the JSON format and to enable the model to follow various output formats upon different format instructions, we prepare 15 different output formats along with their corresponding format instructions and format converters. The output formats include JSON, XML, YAML, plain text, etc.
§.§ Data Quality Verification
To further understand of the data quality and to thoroughly investigate the sources of errors in the evaluation, we conduct a detailed analysis of the unified dataset. We identify a list of errors in the data using both rule-based and LLM-as-a-judge approaches.
Undefined Function Call:
In function-calling, a list of available functions is provided, and the model should generate a function_call using one of the given functions. However, we found that in many cases, the predicted function_call is not from the given list. We match the predicted function with the given functions by comparing the function names and the list of parameter names. When the function_call name does not match any given functions, we refer to it as Undefined Functions Invoked. When the function name matches but the argument list contains undefined arguments, we refer to it as Undefined Arguments Passed. We also take into consideration optional parameters.
Incorrect Argument Type:
Other than the error types mentioned above, we also observe that sometimes the model generates the correct argument value but in the wrong type. For example, when a parameter expects a list, the generated argument is a string version of that list. When executing the function call, errors occur due to the incorrect data type. We identify trajectories containing the incorrect argument type error by comparing the parameter type in the available tools with the actual argument type. We also found that most argument type errors can be fixed by converting the arguments to the correct parameter types.
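A minimal sketch of such rule-based checks (ours; the paper does not publish this code), flagging the error categories above and attempting the type repair just described:

```python
import ast

TYPE_MAP = {"string": str, "integer": int, "number": float, "boolean": bool,
            "list": list, "dict": dict}      # type names assumed, not the exact schema

def check_call(call, tools):
    """Return 'ok' or the first detected error category for one tool call."""
    tool = next((t for t in tools if t["name"] == call["name"]), None)
    if tool is None:
        return "undefined function invoked"
    params = tool["parameters"]
    for arg, value in call["arguments"].items():
        if arg not in params:
            return "undefined argument passed"
        expected = TYPE_MAP.get(params[arg]["type"], object)
        if not isinstance(value, expected):
            try:                               # repair, e.g. "[1, 2, 3]" -> [1, 2, 3]
                fixed = ast.literal_eval(value)
            except (ValueError, SyntaxError, TypeError):
                return "incorrect argument type"
            if not isinstance(fixed, expected):
                return "incorrect argument type"
            call["arguments"][arg] = fixed
    return "ok"
```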
Argument Hallucination: Upon examining the unified dataset from public sources, we discovered that tool calls frequently include argument values not present in the user query or prior steps. This issue arises because much of this data is generated by LLMs, which are prone to hallucination, a common problem in LLM-generated content.
We identified two types of hallucination: 1) the generated tool names or argument names do not appear in the provided tool and argument list; and 2) the argument values do not align with the user query or observations from previous steps. The first type of hallucination is straightforward to address by searching the generated tool call and argument names and matching them with the provided tool list, as they are all structured in JSON, making this process efficient. However, detecting the second type, where argument values are misaligned, is more challenging, as simple string matching is ineffective for complex queries and tasks. To tackle this, we use LLMs as judges to perform step-wise argument hallucination detection, detecting if there is a mismatch between the arguments and the intended query or prior observations.
Low-Quality Reasoning and Planning: We observe many data trajectories where the reasoning and planning steps are of low quality, which is a common issue in the outputs of many LLMs. To address this, we first filter out low-quality data using rule-based methods informed by heuristics, then prompt models like Mixtral-8x22b-Instruct-v0.1 <cit.> and DeepSeek-V2 <cit.> to evaluate both the overall trajectory and individual thought steps on the selected data. A portion of these rating results is then sampled and verified by humans. We also attempted to iterate on this process using specifically fine-tuned models.
§.§ Data Synthesis
Based on our findings in Sec. <ref>, we observe that most of these publicly available datasets have several limitations. First, these datasets are often static, synthesized by weak models, limited in scope, and, more importantly, not verified by execution.
Second, these datasets mainly focus on a single type of function-calling category, i.e., outputting a single function call based on the provided tools. However, real-world scenarios might consist of many other types of use cases, such as the parallel function-calling scenario <cit.>, where the user query contains multiple requests and the model should respond with concurrent function calls in parallel within a single response.
To address these two issues, we utilize a systematic data synthesis framework called APIGen <cit.>, which can generate verifiable datasets based on a collection of executable APIs.
The key idea is a multi-stage verification process to ensure the accuracy and quality of the generated data. This process includes format verification as introduced in Sec. <ref>, execution verification, and semantic verification, which collectively help to identify and filter out low-quality data points, such as those with hallucination issues or inaccurate argument parameter values.
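Schematically, the multi-stage filter behaves like the sketch below (our paraphrase); execute_fn and judge_fn stand in for the real API executor and the LLM judge, which are not specified at the code level here.

```python
import json

def parse_calls(answer_text):
    calls = json.loads(answer_text)            # stage 1: format verification
    if not isinstance(calls, list):
        raise ValueError("expected a JSON list of tool calls")
    return calls

def multi_stage_verify(sample, execute_fn, judge_fn):
    """Keep a generated sample only if all three stages pass."""
    try:
        calls = parse_calls(sample["answer"])
    except (json.JSONDecodeError, ValueError):
        return False
    results = []
    for call in calls:
        ok, result = execute_fn(call)          # stage 2: execution verification
        if not ok:
            return False
        results.append(result)
    return judge_fn(sample["query"], calls, results)   # stage 3: semantic verification
```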
We utilize 3,673 APIs across 21 categories from ToolBench <cit.> to generate a total of 60,000 high-quality data points. These samples are generated using two strong open-source language models: DeepSeek-V2-Chat <cit.> and Mixtral-8x22B-Inst <cit.>.
This synthesis framework greatly improves the robustness and applicability of the dataset, as the majority of low-quality data can be identified by the multi-stage verification process.
§.§ Data Mixture
For supervised fine-tuning (SFT), our dataset combines training samples from three main sources: cleaned and augmented agent datasets, a synthetic function-calling dataset, and general instruction-tuning datasets. These sources are used to train the general xLAM models.
Specifically, to enhance the general instruction capability of xLAM, we integrate diverse instruction-tuning datasets from DialogStudio <cit.> and Data Provenance <cit.>. We employ rule-based techniques to filter out low-quality data, such as repetitive words and turns, which are common and often produced by less powerful models. We also remove data with inappropriate content or responses, as well as data with non-commercial licenses. Additionally, we deduplicate examples with similar user queries and organize the data by domain or category. We then prompt Mixtral-8x22b-Instruct-v0.1 and DeepSeek-V2 to assess both the entire dialogue and individual system responses on the selected data. This instruction data comprises 20% to 30% of our training set. To further enhance model robustness, we preserve the original formats of the general instruction-tuning data.
To enhance the function-calling capability of xLAM-7b-fc-r and xLAM-1b-fc-r, we employ a targeted training approach, with 50% of their training data drawn from our high-quality synthetic function-calling dataset. The remaining 50% of the training data is sampled from other tasks within our training set.
For Direct Preference Optimization (DPO) <cit.>, we prompt less powerful models to generate and rate responses for selected data from each source, then sample a subset for human verification. After adjustments to models and prompts, we classify the selected responses as rejected samples.
§ MODEL TRAINING
§.§ Modeling
We use a supervised fine-tuning
(SFT) approach, further aligning model checkpoints with the DPO method, and leverage the robustness of our flexible data pipeline. Our training code is based on the HuggingFace Transformers and Accelerate libraries<cit.>, as well as PyTorch FSDP<cit.>. During training, the model undergoes multiple epochs, with datasets randomly shuffled each time. When using data parallelism across multiple devices, we diversify random seeds based on process IDs, ensuring balanced data distribution through partitioning, shuffling, and interleaving, thereby enhancing the robustness and reproducibility of our training process.
The fine-tuning of general xLAM models is conducted on Nvidia H100 GPUs. For SFT, we use a full fine-tuning framework that employs the fully sharded data parallel algorithm <cit.>. In the case of xLAM-8x22b-r, we integrate LoRA <cit.> to better preserve the model's original capacities and prevent catastrophic forgetting <cit.>. LoRA is also used for DPO alignment across all xLAM models. Additionally, we use a cosine learning rate scheduler with 100 warm-up steps to optimize performance.
The xLAM-FC models target various categories of function-calling agents, including simple, multiple, parallel, and parallel multiple. These categories are designed to enhance the models' performance in different scenarios. For instance, a simple query like retrieving the weather for a location (e.g., "What is the weather in Palo Alto today?") can be handled with a single function call. Multiple queries involve selecting the appropriate function from several APIs, while parallel queries require executing multiple function calls simultaneously. Additionally, the models are trained in relevance detection to ensure alignment between function calls, execution results, and query objectives.
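To make these categories concrete, the following (query, expected-call) pairs sketch the target output format; the function names and argument schemas are hypothetical, invented for illustration rather than drawn from the training data.

```python
examples = {
    "simple": {
        "query": "What is the weather in Palo Alto today?",
        "calls": [{"name": "get_weather",
                   "arguments": {"location": "Palo Alto"}}],
    },
    "multiple": {  # choose the right function among several APIs
        "query": "Convert 100 USD to EUR.",
        "calls": [{"name": "convert_currency",
                   "arguments": {"amount": 100, "from": "USD", "to": "EUR"}}],
    },
    "parallel": {  # several calls issued simultaneously
        "query": "What's the weather in Palo Alto and in Seattle?",
        "calls": [{"name": "get_weather",
                   "arguments": {"location": "Palo Alto"}},
                  {"name": "get_weather",
                   "arguments": {"location": "Seattle"}}],
    },
}
```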
§.§ xLAM Model Series
We introduce a series of agent models tailored for different use cases. Our flagship model series, xLAM, is built upon the Mixtral Instruct <cit.> models and aims to achieve balanced performance across a diverse range of agent tasks, from complex multi-turn interactions to function-calling applications. To ensure its versatility, xLAM is trained on uniformly sampled data from our training dataset as introduced in Sec. <ref>.
In addition to general xLAM models, we develop two specialized models for function-calling use cases, xLAM-7b-fc-r and xLAM-1b-fc-r, based on DeepSeek-Coder-7B-instruct-v1.5 and DeepSeek-Coder-1.3B-instruct, respectively <cit.>. The smaller model sizes offer increased accessibility, allowing users to easily host them on a single GPU to address various function-calling tasks, ranging from simple user queries to parallel concurrent requests.
By offering a suite of models with varying sizes and specializations, the xLAM series caters to a wide range of user needs and computational resources, making powerful agent capabilities more accessible and adaptable to real-world applications.
§ EXPERIMENTS
§.§ Benchmarks
After considering the stability of environments and research budget limitations, we evaluate the performance of models across four rigorous benchmarks: Webshop <cit.>, ToolQuery <cit.>, ToolBench <cit.>, and the Berkeley Function-Calling Benchmark <cit.>. Each benchmark is designed to assess different aspects of model capabilities under a variety of settings and constraints.
Webshop is an interactive web environment designed to mimic online shopping experiences, testing an agent's ability to navigate and assist in e-commerce tasks. Webshop comprises approximately 250 test cases.
ToolQuery evaluates an agent's skills in using tools to retrieve and process information across domains. ToolQuery features 60 test cases across three distinct settings: Weather, Movie, and Academia.
We use the testing configurations from AgentBoard <cit.> for both Webshop and ToolQuery. These configurations assess overall performance using the Success Rate and evaluate progressive performance across interactive turns with the Progress Rate, with Success Rate being the more critical metric.
We additionally evaluate on ToolQuery-Unified, which is essentially ToolQuery but requires an agent to ingest the task instruction and tools following the augmented prompt format described in <ref> and likewise solve the task following the unified format.
The purpose of testing agents in this setting is to assess their reasoning and tool-use abilities when evaluated on structured formats <cit.>.
ToolBench is developed for real-time evaluation of multi-turn reasoning and interactive capabilities via RapidAPI, and includes around 1,000 test cases. It uses Pass Rate as the metric, where the trajectory and final response are sent to GPT-4-0125-preview to determine whether the agent's final response successfully addresses the given user query. The evaluations cover both in-domain and out-of-domain settings, including unseen instructions with familiar tools, unseen tools within previously known categories, and entirely new categories of unseen tools.
Berkeley Function-Calling Leaderboard (BFCL) Benchmark <cit.> provides a comprehensive evaluation framework for assessing an agent's capability to reason about and execute function calls across a variety of programming languages and application domains. The benchmark comprises over 2,200 test cases, challenging models with complex scenarios such as parallel and multiple function calls in languages like Java, JavaScript, and Python. The evaluation metrics include Abstract Syntax Tree (AST) accuracy for non-executable test queries, executable accuracy by running APIs to obtain results, and a relevance detection score that measures the agent's ability to distinguish non-relevant queries and provided tools.
Importantly, our evaluation utilizes the most recent BFCL v2 version, as of the cutoff date 09/03/2024. The v2 version introduces live function calls and real-world scenarios contributed by users, addressing issues such as data contamination, bias, and fairness by leveraging user-provided data. This updated dataset better reflects real-world distributions, characterized by a higher demand for selecting among multiple functions and a reduced demand for parallel function calls. For instance, our analysis indicates that in the v2 benchmark, the average number of available functions has doubled, while the average number of function calls has been halved compared to the non-live v1 data. It is important to note that all our models were trained prior to the release of the BFCL v2 live data.
§.§ Experimental Results
§.§.§ Webshop and ToolQuery
Webshop. Table <ref> presents detailed comparisons of state-of-the-art language and agent models in the Webshop and ToolQuery environments, illustrating the robust and strong performance of the xLAM models. In the Webshop environment, xLAM-7b-r not only achieves the highest Success Rate at 0.414, surpassing other general LLMs like GPT-4-0125-preview, GPT-4o-2024-0523, and Claude2, but also outperforms specialized agent models such as AgentOhana-8x7b and Lemur-70b. This demonstrates xLAM models' superior ability to navigate and execute tasks in the web interaction environment effectively.
ToolQuery. In the more complex and previously unseen ToolQuery environment, xLAM-8x7b-r and xLAM-8x22b-r also demonstrate strong performance, as shown in Table <ref>, ranking second with a Success Rate of 0.683. This marks a significant improvement over the baseline performance of Mixtral-8x7b-inst and Mixtral-8x22b-inst, which achieve 0.167 and 0.400, respectively. Notably, all three xLAM models surpass the Mixtral-8x22B-Instruct model. Despite its large number of parameters and specialized tuning for advanced functionalities such as function calling, reasoning, and complex tool usage, Mixtral-8x22B-Instruct falls short of the xLAM models' performance. Furthermore, like other general LLMs, it lacks transparency regarding data collection, unification processes, and other critical details, in contrast to the openness of xLAM. These results demonstrate the efficacy of our proposed data unification and synthetic data pipeline.
ToolQuery-Unified. When the system prompt from ToolQuery is presented to the model in the unified format shown in Fig. <ref>, and the model is required to follow the provided format instructions to generate a structured output, we observe that xLAM models' performances are more consistent than those of GPT models, as shown in Table <ref>. While GPT-4o's performance degrades significantly, by 42% compared to ToolQuery, our best xLAM 8x22b model maintains comparable performance. This can be attributed to xLAM being trained on trajectories that adhere to the unified format, enabling it to perform consistently during inference. Concurrent research <cit.> observed a similar decline in performance on reasoning tasks when LLMs are constrained to produce output in specific formats. Deeper analysis indicates that the degradation stems not merely from incorrectly formatted output, but from a drop in the reasoning ability of the model itself.
§.§.§ ToolBench
Table <ref> presents the results on ToolBench, where xLAM models demonstrate impressive performance. They surpass both TooLlama-V2 and GPT-3.5-Turbo-0125 across all test settings. Moreover, xLAM models outperform AgentOhana-8x7b in scenarios involving unseen instructions and unseen tools, while achieving performance comparable to GPT-4-0125-preview in the unseen tools setting. These results show xLAM models' robust capabilities in multi-turn reasoning and complex tool usage, effectively handling both in-domain and out-of-domain tasks.
§.§.§ Berkeley Function-Calling Benchmark
Table <ref> presents the experimental results on the BFCL v2 benchmark (cutoff date 09/03/2024), which shows the exceptional performance of our xLAM model series in function-calling tasks. Notably, xLAM models secure four out of the top twenty positions, demonstrating the effectiveness of our data pipeline and training methodology across various model sizes.
Our flagship model, xLAM-8x22b-r, achieves the highest overall accuracy of 87.31%, surpassing all other models in the benchmark. This result validates the effectiveness of our data processing and model training pipeline in improving models' function-calling ability. Following closely, xLAM-8x7b-r ranks 6th, outperforming most prominent models including GPT-4o-mini and Claude-3.
The performance of our models demonstrates clear scaling with model size, a trend exemplified by xLAM-7b-r, which ranks 14th with an accuracy of 80.33%. This model outperforms several larger and more resource-intensive alternatives, including multiple GPT-4 and GPT-4o versions, highlighting the potential of small models in the agent area.
Perhaps most remarkably, our smallest model, xLAM-1b-fc-r, achieves a 32nd place ranking with an accuracy of 75.43%, surpassing much larger models like Claude-3-Opus (FC) and GPT-3.5-Turbo. This performance underscores the power of our data synthesis framework in producing high-quality, diverse datasets that enhance function-calling effectiveness even for smaller language models.
It is also worth noting that the BFCL v2 benchmark <cit.> includes a live dataset released after our model training date. These fresh data are collected from real-world user queries that were entirely unseen by our models. Nevertheless, our models exhibit strong generalization capabilities in handling these real-world use cases.
The consistently strong performance across our model series, ranging from 8x22 billion to 1 billion parameters, demonstrates the scalability and versatility of our approach. This scalability is particularly noteworthy, as it enables strong results from compact models suitable for resource-constrained environments to large-scale models for more demanding applications. Furthermore, the ability of our smaller models to compete with much larger alternatives suggests significant potential for efficient deployment in various real-world scenarios.
§.§ Ablation Study
[Figure: Ablation study for data augmentation and data quality verification (cleaning).]
We conduct an ablation study on the 7B models to measure the impact of various steps in our data pipeline.
Three datasets are prepared for this analysis: raw data, augmented data, and augmented + cleaned data. The raw data represents the dataset before data unification, while the other two datasets are post-unification.
Figure <ref> presents the evaluation results of models trained on these three datasets. The metrics used for this evaluation are G1_instruction from ToolBench and success_rate from both Webshop and ToolQuery. The results indicate that augmented data consistently outperforms raw data across all metrics, with improvements of 2.3% on ToolBench, 5.8% on Webshop, and 18.3% on ToolQuery. Furthermore, the addition of data cleaning leads to a substantial performance increase on ToolQuery, with a further improvement of 23.4%. The results highlight the effectiveness of data augmentation and cleaning processes in the data pipeline.
§ CONCLUSION
This paper introduces the xLAM series, a set of large action models for autonomous AI agents. Our models, ranging from 1B to 8x22B parameters, were trained with a scalable and flexible data pipeline that unifies, augments, and synthesizes diverse datasets.
Our evaluations show that xLAM models consistently perform exceptionally across various benchmarks.
The insights we learned from training these models highlight the importance of rigorous data processing and the potential of data synthesis in developing capable AI agents.
By releasing the xLAM series to the public, we aim to democratize access to high-performance models for agent tasks, thereby accelerating progress in the field.
|
http://arxiv.org/abs/2409.03267v1 | 20240905062429 | No Man is an Island: Towards Fully Automatic Programming by Code Search, Code Generation and Program Repair | [
"Quanjun Zhang",
"Chunrong Fang",
"Ye Shang",
"Tongke Zhang",
"Shengcheng Yu",
"Zhenyu Chen"
] | cs.SE | [
"cs.SE"
] |
0000-0002-2495-3805
[email protected]
0000-0002-9930-7111
[email protected]
0000-0002-2495-3805
[email protected]
0000-0002-2495-3805
[email protected]
0000-0002-2495-3805
[email protected]
[email protected]
0000-0002-9592-7022
State Key Laboratory for Novel Software Technology, Nanjing University
Nanjing
Jiangsu
China
§ ABSTRACT
Automatic programming attempts to minimize human intervention in the generation of executable code, and has been a long-standing challenge in the software engineering community.
To advance automatic programming, researchers are focusing on three primary directions:
(1) code search that reuses existing code snippets from external databases;
(2) code generation that produces new code snippets from natural language;
and (3) program repair that refines existing code snippets by fixing detected bugs.
Despite significant advancements, the effectiveness of state-of-the-art techniques is still limited, such as the usability of searched code and the correctness of generated code.
Motivated by the real-world programming process, where developers usually use various external tools to aid their coding processes, such as code search engines and code testing tools,
in this work, we propose , an automatic programming framework that leverages recent large language models (LLMs) to integrate the three research areas to address their inherent limitations.
Our insight is that the integration of three research areas can overcome their inherent limitations:
the code generator can benefit from the valuable information retrieved by the code searcher, while the code repairer can refine the quality of the generated code with external feedback.
In particular, our framework first leverages different code search strategies to retrieve similar code snippets, which are then used to further guide the code generation process of LLMs.
Our framework further validates the quality of generated code by compilers and test cases, and constructs repair prompts to query LLMs for generating correct patches.
We conduct preliminary experiments to demonstrate the potential of our framework, helping CodeLlama solve 267 programming problems, a correctness rate of 62.53%.
As a generic framework, can integrate various code search, generation, and repair tools, combining these three research areas together for the first time.
More importantly, it demonstrates the potential of using traditional SE tools to enhance the usability of LLMs in automatic programming.
CCS Concepts: Software and its engineering → Software testing and debugging
No Man is an Island: Towards Fully Automatic Programming by Code Search, Code Generation and Program Repair
Zhenyu Chen
September 5, 2024
===========================================================================================================
§ INTRODUCTION
Software automation has long been a vision of Software Engineering (SE), with one of the significant challenges being the task of automatic programming <cit.>.
Automatic programming attempts to translate high-level specifications (natural language and test cases) into correct source code without direct human intervention <cit.>.
It effectively reduces manual coding effort, improves efficiency, and minimizes programming errors, thus enabling a more robust software development pipeline.
Besides, it democratizes programming by making it accessible to individuals with varying levels of programming expertise.
This is particularly significant as software permeates various industries in modern society, allowing domain-specific experts, who may not be proficient in programming, to undertake programming tasks tailored to their specific needs, such as AI applications in Science <cit.>.
In the SE literature, to advance automatic programming effectively, researchers are focusing on three primary directions:
* Code Search.
This task <cit.> involves developing sophisticated algorithms to search and retrieve existing code snippets from vast databases or the internet.
Code search is able to accelerate development and promote best practices by enabling the reuse of code.
* Code Generation.
This task <cit.> explores the automatic creation of code based on high-level specifications or requirements. It harnesses advanced machine learning models and artificial intelligence to translate human intentions into functional programs.
* Program Repair.
This task <cit.> involves fixing bugs or vulnerabilities in existing code during the software maintenance phase.
Program repair can be seen as automatic code generation at a micro-scale by generating correct code from buggy code.
Over the last decade, considerable research efforts have been devoted to advancing the state-of-the-art in the three areas.
Despite being promising, existing research in these domains still suffers from several limitations.
First, prior studies <cit.> find that code snippets retrieved by code search techniques cannot be directly reused and require manual adaptation, consuming a significant amount of time.
Second, code generation techniques often struggle to produce syntactically and semantically correct code that can pass both the compiler and test cases.
Recent studies <cit.> show that even the latest Large Language Models (LLMs) still tend to generate code that contains errors and vulnerabilities.
Third, research on program repair <cit.> is mostly confined to semantic bugs introduced by developers, while overlooking the rapidly emerging field of auto-generated code <cit.>.
Therefore, addressing the aforementioned issues can help enhance the effectiveness and usability of code search, code generation, and program repair tools when applied in real-world automatic programming scenarios.
To that end, our insights come from the limitations of existing techniques:
* Limitation of Code Search.
Code search is an effective method for finding usable code from external codebases.
However, the retrieved code typically cannot be deployed directly due to several reasons, such as project context, software bugs, and library dependencies <cit.>.
Thus, developers need to adapt retrieved code to specific requirements, including adjusting variable names, optimizing performance or efficiency, fixing bugs or securities, and including necessary dependencies.
When applied in practice, although code search tools can significantly accelerate development by providing useful code snippets, developers will often expend considerable effort to customize and validate the retrieved code before integrating it into their projects.
To address this issue, a feasible direction is to refine the retrieved code to meet certain requirements automatically.
In this regard, program repair is promising to adapt the code retrieved by code search tools with minor modifications, as the retrieved code may already be very similar to the desired output.
* Limitation of Code Generation.
Code generation is the focus of LLMs in the SE community, and has achieved continuous progress.
However, LLMs are trained on vast datasets up to a certain cutoff point, making it difficult to acquire and update up-to-date knowledge.
Although fine-tuning remains a possible solution, it is impractical to frequently update LLMs with the latest information due to the vast number of model parameters and computational resources.
Thus, when generating code, LLMs may suffer from outdated knowledge and project-specific context.
Particularly, LLMs are unaware of new knowledge (such as libraries or frameworks) after the last training update, and fail to incorporate information about project-specific requirements, dependencies, or evolving codebases, thus limiting the effectiveness of code generation.
To address this issue, a viable approach is to dynamically search for valuable information to augment the code generator.
In this regard, code search can provide useful hints, which can guide LLMs to avoid invalid results during code generation.
* Limitation of Program Repair.
As a crucial phase of automatic programming, program repair has achieved significant progress in terms of the number of correctly-fixed bugs <cit.>.
However, existing repair research is mainly limited to fixing bugs detected by functional test cases from well-constructed benchmarks <cit.>.
Recently, LLMs have demonstrated impressive capabilities in automatically generating source code.
However, the reliability and quality of auto-generated code are usually imperfect <cit.>, making it difficult to deploy such code directly into projects.
In fact, LLMs may generate source code with syntax and semantic errors as the generation process is static without external validation tools, such as compilers.
This concern raises a significant question: can we automatically refine the code generated by LLMs to make it sufficiently trusted for integration into software systems?
Therefore, combining test-driven repair with LLMs can provide dynamic feedback, allowing LLMs to iteratively generate accurate code.
In this regard, program repair is promising to help LLMs perform self-debugging with test-driven feedback during the code generation phase.
Our analysis motivates us to leverage the complementary strengths of code search, code generation, and program repair techniques to achieve mutual improvement.
In real-world programming scenarios, as shown in Fig. <ref>, developers typically follow a three-step process.
First, they construct an appropriate search query (such as natural language descriptions) and use search engines (such as GitHub) to find similar code.
Second, they generate their own code by imitating the retrieved code instead of coding from scratch.
Third, they validate the generated code through a compiler to ensure it meets specifications, such as test cases.
Taking inspiration from developer practice, we can integrate the three research areas into the programming process: code search retrieves similar code for code generation, and program repair provides dynamic feedback to refine the generated code.
However, the key challenge lies in emulating the human developer's role to seamlessly connect the three steps: how to integrate retrieved code for generation, and how to refine generated code based on test feedback.
Fortunately, thanks to the powerful natural language and programming language understanding capabilities of LLMs, we can fully automate this process through prompt engineering.
Prior studies demonstrate that LLMs can perform code-related tasks in a manner similar to human conversation, and thus, we are motivated to leverage such capabilities of LLMs to connect code search, code generation, and program repair in a unified programming pipeline.
This Work.
We propose a framework , which leverages code searCh, code geneRation, and program repAir to push forward the boundaries of automatic progrAMming in the era of LLMs.
Our work is motivated by the potential to automatically emulate the common developer practice with the help of LLMs' powerful natural language understanding and programming language generation capabilities.
Particularly, given a programming requirement, follows three steps:
(1) code search: an information retrieval (IR) or deep learning (DL)-based technique searches for relevant code from an external database of previous code snippets that may fit the programming context.
(2) code generation: an LLM-based code generator synthesizes a ranked list of code candidates based on both the programming requirement and the retrieved external code knowledge.
(3) program repair: an LLM-based code repairer slightly refines the token sequence of generated code from the previous step by constructing dynamic prompts with test case feedback.
This framework attempts to integrate three well-known research domains that are often developed in isolation, so as to benefit the whole programming pipeline.
More importantly, the integration not only broadens the application scope of these three research areas but also boosts the capabilities of recent LLMs in resolving programming problems effectively.
Preliminary Results.
We conduct a preliminary experiment to evaluate the effectiveness of by implementing it with two retrieval strategies as the code searcher and an open-source LLM, CodeLlama with 7 billion parameters, as the code generator and program repairer.
The experimental results on the MBPP benchmark demonstrate that
(1) is able to help CodeLlama in solving programming problems significantly, generating 267 correct solutions with an improvement of 32.18%;
(2) the three phases positively contribute to the performance of : code search and program repair improve CodeLlama by 24.75% and 14.85%, respectively.
To evaluate the potential of in a more real-world programming scenario, we illustrate several case studies from the CoderEval benchmark by utilizing GitHub Search API to search for relevant code and a black-box LLM ChatGPT to generate and repair code.
To sum up, the contributions of this paper are as follows:
* New Dimension.
We open a new direction for integrating LLMs with traditional SE research areas for more powerful automatic programming.
To the best of our knowledge, this is the first work to reveal the potential of LLMs in bridging the gap between three long-standing yet separately studied research topics: code search, code generation, and program repair.
* Novel Framework.
We propose , a three-stage automatic programming framework built on top of LLMs with code search, code generation, and program repair.
is a conceptually generic framework that can be easily integrated with various LLMs, code search, generation and repair tools.
* Preliminary Evaluation.
We demonstrate 's ability to generate correct solutions for competitive programming problems.
Besides, we present case studies to indicate the potential of in real-world programming scenarios.
§ BACKGROUND & RELATED WORK
§.§ Large Language Models
In recent years, LLMs have attracted increasing attention from the industry and academia for their impressive capability of processing natural and programming languages <cit.>.
LLMs are generally based on the Transformer architecture that has two key components: the encoder and the decoder.
The former encodes the input into a vector representation for the model to understand the input, while the latter transforms the encoded representation to the output sequence.
Such LLMs have shown outstanding performance in code-related tasks, such as program repair and code generation <cit.>.
For example, ChatGPT <cit.> and GPT-4 <cit.> released by OpenAI are known for their ability of conducting dialogues with human beings.
They can take prompts in natural language and generate relevant code accordingly.
CodeLlama <cit.> is a family of open-source LLMs specifically trained for source code, and can solve programming problems in a zero-shot situation.
Details about the application of LLMs in SE can be found in recent survey papers <cit.>.
However, when programming, such LLMs typically struggle to learn the latest knowledge and interact with external coding tools.
In this work, we aim to improve the programming capabilities of off-the-shelf LLMs by integrating them with code search, code generation, and program repair techniques.
§.§ Code Search
A common action that developers take while programming is searching for existing code with similar requirements to reuse.
A variety of code search techniques have been proposed to facilitate the retrieval process, and they can be generally classified into two types: IR-based and DL-based <cit.>.
IR-based code search techniques usually involve indexing the codebases and using scoring algorithms to calculate the similarity between the query and the target code.
For example, Lucene <cit.> is a typical IR-based search engine whose default scoring algorithm is BM25, which considers the word frequency and the lengths of documents to rank candidates in the retrieval corpus.
DL-based techniques leverage deep learning models to encode code snippets into vectors, and retrieve similar code based on the cosine similarity between the vectors.
For example, GraphSearchNet <cit.> is a neural network framework based on bidirectional GGNN to map queries and source code by learning the structural information from them.
For a more comprehensive study on code search, please refer to the work of Sun <cit.>.
In this work, we implement with two simple yet effective code searchers: an IR-based and a DL-based strategy.
§.§ Code Generation
Code generation is a popular task that LLMs are applied to because of its great potential to improve the coding efficiency of developers.
For example, AceCoder <cit.> retrieves similar code and removes redundant retrieval results to boost the effectiveness of code generation.
SkCoder <cit.> simulates developers' coding behavior by constructing code sketches from the retrieved similar code and turning the sketch into complete code with an encoder-decoder model.
CodeAgent <cit.> proposes a novel repo-level code generation framework that integrates different programming tools including information retrieval tools with the purpose of gathering relevant resources so that LLMs can better understand the problems.
Please refer to the work of Jiang <cit.> for a more comprehensive survey.
In this work, we construct prompts augmented by the code searcher to query CodeLlama and ChatGPT as the code generator.
§.§ Program Repair
Program repair aims to automatically fix software bugs, thereby reducing the efforts for manual debugging <cit.>.
Existing repair techniques can be broadly categorized into traditional and learning-based ones.
Traditional program repair approaches include heuristic-based, constraint-based, and template-based techniques.
With recent advancements in DL, a variety of learning-based repair approaches have been proposed <cit.>.
Such learning-based techniques leverage Neural Machine Translation (NMT) models to understand the semantics of the bugs and transform them into the correct code.
For example, CoCoNut <cit.> utilizes a context-aware NMT architecture to represent the buggy source code and its surrounding context separately.
Recently, LLMs are increasingly being utilized for repair tasks <cit.>.
For example, Zhang <cit.> investigate the potential of fine-tuning LLMs in repairing security vulnerabilities.
Xia <cit.> evaluate the fixing capabilities of LLMs for Java single-hunk semantic bugs.
Detailed summarization of program repair studies can be found in recent work <cit.>.
However, unlike traditional repair techniques, LLMs' powerful natural language capabilities enable them to incorporate external runtime information, thus facilitating iterative patch generation <cit.>.
In this work, motivated by the self-critical capability of LLMs, we leverage execution feedback to integrate program repair into the programming process.
§ FRAMEWORK AND IMPLEMENTATION
§.§ Overview
takes a programming requirement and external codebases as inputs, and automatically returns source code that meets the requirements.
<ref> illustrates an overview of , which is divided into three phases: code search, code generation, and program repair.
* In the code search phase, given the input as a query, searches a database of source code to find similar code applied previously in a similar context.
This phase is motivated by the redundancy assumption <cit.> that the required code can be found or refactored from other projects with similar contexts.
In the motivating example in Fig. <ref>, given the programming requirement “” as query, retrieves a code snippet that addresses a similar programming problem from the codebase: “”.
* In the code generation phase, we first concatenate the retrieved code along with the original programming requirement to construct an augmented input.
We then query off-the-shelf LLMs, as mentioned in Section <ref>, to generate code.
Fig. <ref> illustrates the generation phase that produces a token sequence for “”, which is close to the intended code.
However, as LLMs generate code tokens from a probabilistic perspective without the ability to dynamically execute and validate the generated code, the generated code may still be imperfect.
For example, in Fig. <ref>, the generated code lacks a return statement, resulting in output values that do not match the expected results and failing to pass the test cases.
Although the generated code can be quite similar to the intended output, developers still need to spend manual efforts to inspect and modify the generated code.
* In the program repair phase, we attempt to capture the minor modifications and further refine the generated code with dynamic test feedback.
Our key insight is that, due to the powerful code generation capabilities of LLMs, the code returned in the last phase is already close to perfect, which can naturally be seen as a program repair task.
In particular, we utilize the self-critical ability of LLMs by compiling and executing the generated code, then returning the error information to LLMs to guide them in generating more accurate code.
In the case of Fig. <ref>, this step explicitly adds the missing return statement “”, resulting in the correct patch.
§.§ Code Search
In this phase, given a program specification that needs to be implemented as a query, retrieves relevant code snippets from external databases that are applied before in a similar code context.
utilizes two types of retrieval strategies: an IR-based retriever and a DL-based retriever, to consider the lexical and semantic similarity, respectively.
§.§.§ IR-based Retriever
employs the sparse strategy IR as the token-based retriever, to search for a code snippet that is similar to the query programming requirement based on lexical matching.
Let 𝒟 = {(r_i, c_i)}_{i=1}^{|𝒟|} be an external retrieval database consisting of |𝒟| previous code pairs, where r_i is the i-th program specification and c_i is its corresponding source code.
Given a query requirement q, tokenizes all requirements in the retrieval dataset 𝒟 and the query, and removes duplicate tokens for an efficient retrieval process.
then calculates the lexical similarity between q with all requirements in 𝒟 based on the Jaccard coefficient.
Jaccard is a widely-used similarity coefficient, to measure the similarity between two sparse vector representations based on their overlapping and unique tokens.
Formula <ref> defines the calculation of Jaccard similarity, where S(q) and S(r_i) are the sets of code tokens of two requirements q and r_i, respectively.
The value varies from 0% to 100%, and a higher value indicates a higher similarity.
In this work, considering that all requirements are natural language descriptions, we tokenize each requirement by whitespace instead of using a sub-word-level tokenizer.
Jaccard(q, r_i) = |S(q) ∩ S(r_i)| / |S(q) ∪ S(r_i)|
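A direct Python rendering of this similarity function (returning a value in [0, 1] rather than a percentage) might look as follows.

```python
def jaccard(query: str, requirement: str) -> float:
    """Jaccard similarity over whitespace tokens, per the formula above."""
    s_q, s_r = set(query.split()), set(requirement.split())
    if not (s_q or s_r):
        return 0.0
    return len(s_q & s_r) / len(s_q | s_r)
```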
§.§.§ DL-based Retriever
employs the pre-trained CodeBERT <cit.> as the embedding-based retriever to search for similar code snippets based on semantic similarity.
In particular, first splits all requirements in 𝒟 and the query q into a list of tokens and exploits CodeBERT to transform the tokenized tokens into vector representations.
CodeBERT prepends a special token of [CLS] into its tokenized sequence, and calculates the final layer hidden state of the [CLS] token as the contextual embedding.
then calculates the Cosine similarity between the embeddings of two requirements to measure their semantic relevance.
Cosine similarity is widely adopted in previous studies to measure the semantic relevance of two dense vectors.
Given two vectors, Cosine similarity is calculated based on the cosine of the angle between them, which is the dot product of the vectors divided by the product of their lengths.
Formula <ref> defines the calculation of Cosine similarity, where E(q) and E(r_i) denote the embeddings of two requirements q and r_i.
Cosine(q, r_i) = (E(q) · E(r_i)) / (‖E(q)‖ ‖E(r_i)‖)
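As an illustration, the following sketch computes such embeddings with the public microsoft/codebert-base checkpoint via HuggingFace Transformers; the exact embedding configuration used here may differ.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(text: str) -> torch.Tensor:
    """Final-layer hidden state of the [CLS] token as the embedding."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden[0, 0]  # [CLS] is the first token

def cosine(q: str, r: str) -> float:
    return torch.nn.functional.cosine_similarity(
        embed(q), embed(r), dim=0).item()
```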
Algorithm <ref> presents the detailed workflow of the search strategy in our work.
The algorithm starts by taking three inputs: a query representing the program specification to be implemented (), an external database containing code snippets (𝒟), and the number of similar code snippets to be retrieved (top-k).
The algorithm initializes an empty list called “similarityList” to store the similarity scores between the query and each code requirement in the database (Line <ref>).
For each code requirement in the database, the algorithm calculates the similarity score between the query and the code requirement using a function named calculateSimilarity (Line <ref>).
This function extracts tokens or embeddings from both the query and the code requirement, then calculates the similarity score using either Jaccard similarity for tokens or cosine similarity for embeddings (Line <ref>∼<ref>).
The resulting similarity score, along with the corresponding code requirement and code snippet, is appended to the “similarityList” (Line <ref>).
Once all similarity scores are calculated, the algorithm sorts “similarityList” in descending order based on the similarity scores using a function named customSort (Line <ref>).
This function employs a bubble sort algorithm to ensure the list is ordered from the highest to the lowest similarity score (Line <ref>∼<ref>).
After sorting, the algorithm extracts the top-k most similar code snippets from the sorted list and stores them in “retrievedResult” (Line <ref>).
Finally, the algorithm returns “retrievedResult” as the output, which contains the code snippets that are most similar to the given query (Line <ref>).
Through this systematic approach, the algorithm effectively retrieves the most relevant code snippets from an external database based on the provided query.
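A compact Python rendering of this workflow is sketched below; Python's built-in sort stands in for the pseudocode's bubble sort, and the similarity argument can be either of the two functions sketched above.

```python
def retrieve_top_k(query, database, k, similarity):
    """Return the k code snippets whose requirements are most similar to
    `query`. `database` is a list of (requirement, code) pairs and
    `similarity` is a function scoring two requirement strings."""
    scored = [(similarity(query, req), req, code) for req, code in database]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [(req, code) for _, req, code in scored[:k]]
```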
§.§ Code Generation
In the code generation phase, we leverage LLMs to generate code based on the programming requirement and the retrieved programming solutions.
Particularly, we build a prompt by composing (a) relevant code demonstrations, (b) the programming query, and (c) natural language instructions.
To select demonstrations, we take examples from the demonstration retriever component.
Then we select a task-specific template and combine these three elements to build the final prompt.
Fig. <ref> illustrates a prompt template for code generation.
The input prompt mainly contains two sections: code demonstrations, and the query.
Each code demonstration section consists of the programming requirement, its test cases, and the expected code.
The query section contains the natural language instruction beginning with in the template, followed by the test cases.
As it is an autocomplete task, the comment is used to signal the model to generate a correct code snippet.
Finally, the expected output for a given prompt is a multi-line code snippet passing the test cases.
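A minimal sketch of how such a prompt can be assembled is given below; the '#'-style markers and section ordering are illustrative placeholders, since the template's literal strings are not reproduced here.

```python
def build_generation_prompt(demonstrations, requirement, tests):
    """Assemble a few-shot code-generation prompt. `demonstrations` is a
    list of (requirement, tests, code) triples."""
    parts = []
    for demo_req, demo_tests, demo_code in demonstrations:
        parts.append(f"# Requirement: {demo_req}\n"
                     f"# Tests:\n{demo_tests}\n"
                     f"{demo_code}\n")
    parts.append(f"# Requirement: {requirement}\n"
                 f"# Tests:\n{tests}\n"
                 f"# Write a correct function that passes the tests:\n")
    return "\n".join(parts)
```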
§.§ Program Repair
After obtaining generated code snippets by LLMs, this phase attempts to refine them further by fixing syntax and semantic errors.
Particularly, we compile the programs and dynamically execute them against all available test cases.
The test cases provide valuable information about the correctness of the generated code, associated with the error messages if the code does not pass the tests.
For the failing functions, we input them to the LLMs again together with their requirements and error messages in an attempt to have them fixed.
To this end, we construct a dynamic prompt for LLMs in the program repair stage, as shown in Fig. <ref>.
The input prompt contains four sections: the programming requirement, its test cases, the generated code, and the failure information.
Each section is denoted by a natural language description that begins with the comment symbol in the template.
The query contains the context of test execution followed by the instruction.
This is essentially an auto-complete task where the query is an incomplete example used to prompt the model.
Finally, the expected output for a given prompt is a fixed version that addresses the reported failure.
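The following sketch illustrates this test-driven loop under simplifying assumptions: candidate solutions are plain Python scripts, tests are executable assertions, and the prompt markers are placeholders rather than the literal template.

```python
import os
import subprocess
import tempfile

def run_tests(code: str, tests: str, timeout: int = 10):
    """Execute a candidate solution together with its test assertions in a
    subprocess and return (passed, error_message)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + tests)
        path = f.name
    try:
        proc = subprocess.run(["python", path], capture_output=True,
                              text=True, timeout=timeout)
        return proc.returncode == 0, proc.stderr
    except subprocess.TimeoutExpired:
        return False, "execution timed out"
    finally:
        os.remove(path)

def build_repair_prompt(requirement, tests, code, error):
    """Fold the failure information back into a prompt for the LLM."""
    return (f"# Requirement: {requirement}\n"
            f"# Tests:\n{tests}\n"
            f"# Generated code:\n{code}\n"
            f"# The code failed with:\n# {error}\n"
            f"# Fix the code so that all tests pass:\n")
```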
§ PRELIMINARY RESULTS
We conduct two preliminary experiments to evaluate the performance of our search-generation-repair framework for automatic programming.
We first investigate the performance of in solving competitive programming problems.
We then explore the potential of in solving real-world programming problems.
§.§ Evaluation 1: The Effectiveness of Three Phases in Solving Competitive Programming Problems
Experimental Design.
In this RQ, we investigate the effectiveness of the code search, code generation, and program repair phases in our automatic programming framework.
We choose Mostly Basic Programming Problems (MBPP) released by Austin <cit.> as the benchmark.
It contains 974 easy Python programming problems, their test cases and code solutions obtained from crowdsourcing.
There is also a sanitized version of MBPP with 427 programming problems manually verified and extracted from the full dataset.
We focus on the sanitized MBPP and investigate how effective our framework is in solving problems in it.
We use the same dataset for code retrieval (except the query) since the coding styles within the same dataset are similar, which helps LLMs better learn what the target code looks like.
We choose CodeLlama as the researched LLM in this RQ and use it to generate Python code for 427 programming problems from the MBPP dataset. CodeLlama is a series of large language models for code generation, and we select the CodeLlama-7b-hf model, which is one of the foundation models with 7B parameters.
We conduct experiments under four programming scenarios to explore how each phase influences the correctness of the generated code:
* Only code generation.
We construct basic prompts only with programming problem descriptions and test cases, and query LLMs to generate Python functions to solve the problems.
* Code search + code generation.
When using LLMs to generate code, we not only provide basic information of the programming problems, but also additional functions with similar requirements that are retrieved from the same dataset.
* Code generation + program repair.
We generate code with the basic prompts, and then we use LLMs again to fix the incorrect code it generates.
* Code search + code generation + program repair ().
We combine all the phases by retrieving similar code, generating code according to the programming requirements as well as the similar code, and repairing the code that does not pass all the tests.
Results.
Fig. <ref> presents the number of problems that CodeLlama successfully solves under four different scenarios.
In Fig. <ref>, “search (IR)” and “search (EM)” denote the IR and embedding retrievers, respectively.
When CodeLlama directly performs the code generation task, 202 out of 427 generated solutions are correct, resulting in a correctness rate of 47.31%.
Besides, when integrating program repair, code search-IR, and code search-embedding into the programming process, 31, 28, and 50 additional correct solutions are generated, respectively.
Finally, CodeLlama achieves the best performance when code search, code generation and program repair are combined, producing 248 (IR retriever) and 251 (embedding retriever) correct solutions.
Relative to the 427 problems, this corresponds to correctness rates of 58.08% and 62.53%, respectively.
These results demonstrate that the three-phase pipeline—comprising searching, coding, and repairing—continuously enhances the programming capabilities of LLMs.
§.§ Evaluation 2: The Potential of Three Phases in Solving Real-world Programming Problems
Experimental Design.
In RQ1, we investigate the performance of in generating functions for competitive programming problems.
In this RQ, we further explore how effective our approach is in a real-world programming scenario.
We select CoderEval <cit.>. Compared to MBPP and other popular code generation benchmarks which only include standalone functions, CoderEval contains programming tasks extracted from real-world projects as well as separate platforms to execute them, so we can evaluate our automatic programming approach in a real-world scenario.
We simulate real programming scenarios by selecting three functions that need to be implemented from the dataset. First, we use the GitHub search engine to find similar code, then call ChatGPT to generate the code, providing feedback on the test results for corrections.
In the following, we present three real-world examples to illustrate the search-generation-repair capabilities of .
For all three examples, ChatGPT fails to directly generate correct code based solely on their specifications, i.e., docstrings.
However, successfully queries ChatGPT to produce the correct code for the first example with code search, the second example with program repair, and the third example with both code search and program repair.
Case 1.
Fig. <ref> illustrates an example that is not correctly generated by ChatGPT, but correct code can be produced by providing relevant information from code search.
This example attempts to identify whether a given string starts with a specified case-insensitive prefix, as shown in the blue part of Fig. <ref>.
ChatGPT first attempts to generate the solution directly, which ignores some boundary conditions and leads to a `NullPointerException` if either “str” or “prefix” is null.
then retrieves a similar solution that provides additional context and correct implementation details, with which ChatGPT properly handles “null” values and checks the length of the strings before performing the comparison, thus avoiding the “NullPointerException”.
Overall, with the guidance of retrieved code, LLMs produce higher-quality code that adheres to best practices and avoids common pitfalls.
Case 2.
Fig. <ref> illustrates an example that fails in the generation-only scenario but succeeds after program repair.
This example requires completing the logic to call “String.trim()” on each element in a given string array.
Similar to the generate-with-retrieval case, ChatGPT overlooks boundary conditions and fails the null-input test case.
When provided with the dynamic error message, ChatGPT fixes the bug related to missing boundary checks and passes all test cases.
Case 3.
Fig. <ref> illustrates an example that is generated correctly only with both code search and program repair.
This example involves implementing logic that returns true when the given value is “true”, returns false when the given value is “false”, and otherwise returns a specified default value.
ChatGPT first attempts to generate the solution while ignoring the possibility of a null input, thus causing a NullPointerException.
After retrieving the reference code, ChatGPT identifies the error and corrects the faulty logic.
However, it encounters a new issue where it fails to return the correct value in the “false” condition for the same reason.
Finally, equipped with program repair, ChatGPT generates the correct code and passes the test.
§ DISCUSSION AND FUTURE WORK
The primary technical innovation in this work is the introduction of a unified automatic programming paradigm that leverages advanced LLMs to integrate three long-explored research areas: code search, code generation, and program repair.
The preliminary experiments highlight the potential of in competitive programming problems and real-world programming scenarios.
Particularly, we demonstrate that (1) code search can help code generators produce more accurate code; (2) program repair serves as an effective post-processing step, even after retrieval-augmented code generation; (3) a unified programming pipeline, incorporating the above three phases together, is highly effective in generating code, especially when equipped with LLMs.
As a unified programming pipeline, we believe has significant potential for the SE community, and can be extended in the following aspects.
Deployment Scenarios.
It is promising to adapt to more programming scenarios during deployment.
First, there are some domain-specific areas that require developers to possess both programming skills and domain expert knowledge, such as hardware code <cit.>.
can fully automate such programming process by retrieving similar code within the domain and iteratively refining it, thereby reducing the programming barrier for developers.
Second, takes natural language descriptions as inputs currently, but it can be implemented with other query formats, such as test cases.
Considering that treats queries as tokens or embeddings without taking any specific code features into account, can be applied to the other input formats in a drop-in fashion.
For example, the potential of in the well-known test-driven development <cit.> is worth investigating, e.g., retrieving similar solutions based on test cases.
Third, is generic to other code-related tasks, such as test generation, code translation and program repair.
Technical Designs.
The effectiveness of can be further optimized by improving the quality of retrieved code and refining the prompt engineering for program repair.
First, employs a straightforward retriever that can be either token-based or embedding-based, depending on whether it focuses on syntax or semantics.
Given the rapid advancements in code search, optimizing these retrieval strategies for greater efficiency is essential.
Second, directly appends error messages to the generated code as prompts and then queries LLMs for repair.
In the future, more advanced prompt engineering techniques, such as chain-of-thought, can be utilized.
Evaluation Experiments.
In this study, preliminary experiments demonstrate the effectiveness of .
We only conduct case studies in Section <ref> due to the limitation of real-world programming datasets and retrieval corpora.
Large-scale evaluations with quantitative analysis are necessary in the future.
Besides, the opinions and experiences of developers are crucial for assessing the utility of such programming tools.
We plan to conduct user studies to evaluate in real-world programming scenarios.
Furthermore, the experiments can be extended in the future with more studied LLMs, benchmarks, and programming languages.
§ CONCLUSION
In this paper, we propose a novel automatic programming framework, , which leverages advanced large language models (LLMs) to integrate three well-established areas: code search, code generation, and program repair.
Preliminary experiments indicate the potential of our framework to enhance the problem-solving capabilities of existing LLMs in programming tasks.
Besides, our framework demonstrates the preliminary benefits of combining LLMs with traditional software engineering (SE) areas.
In the future, more advanced technologies, such as intelligent agents, can be employed to further integrate various SE techniques within the programming framework more effectively.
|
http://arxiv.org/abs/2409.03302v1 | 20240905071809 | Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems | [
"Freya Shah",
"Taylor L. Patti",
"Julius Berner",
"Bahareh Tolooshams",
"Jean Kossaifi",
"Anima Anandkumar"
] | quant-ph | [
"quant-ph",
"cs.LG"
] |
Department of Computing + Mathematical Sciences (CMS), California Institute of Technology (Caltech), Pasadena, CA 91125, USA
Ahmedabad University, Ahmedabad 380015, Gujarat, India
NVIDIA, Santa Clara, CA 95051, USA
Department of Computing + Mathematical Sciences (CMS), California Institute of Technology (Caltech), Pasadena, CA 91125, USA
Department of Computing + Mathematical Sciences (CMS), California Institute of Technology (Caltech), Pasadena, CA 91125, USA
NVIDIA, Santa Clara, CA 95051, USA
Department of Computing + Mathematical Sciences (CMS), California Institute of Technology (Caltech), Pasadena, CA 91125, USA
§ ABSTRACT
Fourier Neural Operators (FNOs) excel on tasks using functional data, such as those originating from partial differential equations. Such characteristics render them an effective approach for simulating the time evolution of quantum wavefunctions, which is a computationally challenging, yet coveted task for understanding quantum systems. In this manuscript, we use FNOs to model the evolution of random quantum spin systems, so chosen due to their representative quantum dynamics and minimal symmetry. We explore two distinct FNO architectures and examine their performance for learning and predicting time evolution using both random and low-energy input states. Additionally, we apply FNOs to a compact set of Hamiltonian observables (∼poly(n)) instead of the entire 2^n quantum wavefunction, which greatly reduces the size of our inputs and outputs and, consequently, the requisite dimensions of the resulting FNOs. Moreover, this Hamiltonian observable-based method demonstrates that FNOs can effectively distill information from high-dimensional spaces into lower-dimensional spaces. The extrapolation of Hamiltonian observables to times later than those used in training is of particular interest, as this stands to fundamentally increase the simulatability of quantum systems past both the coherence times of contemporary quantum architectures and the circuit-depths of tractable tensor networks.
Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems
Anima Anandkumar
Received 16 July 2024; accepted 04 September 2024
======================================================================
Simulating the dynamics of quantum systems has been a long-standing goal for the scientific community, underpinning Feynman's initial proposition of quantum computing <cit.>. Learning and predicting the behavior of intricate quantum spin systems presents a significant challenge due to their inherent superpolynomial time complexity <cit.>. Controllable quantum systems, such as quantum simulators or other quantum computers, represent a promising pathway for simulating complex quantum systems, as they share similar dynamics and large Hilbert spaces <cit.>. However, current quantum computing technologies face significant limitations <cit.>. In the present Noisy Intermediate-Scale Quantum (NISQ) era, quantum computers are restricted to a limited number of qubits and substantial error rates due to decoherence and operational imperfections <cit.>. These coherence and scalability issues constrain the capacity of quantum computers to simulate large and complicated spin systems, particularly over long timescales. As a result, achieving substantial and accurate results in the simulation of these systems remains elusive.
Advances in quantum modeling have introduced promising new approaches for quantum simulation, addressing limitations of traditional techniques. For instance, tensor methods like the Density Matrix Renormalization Group (DMRG) are capable of simulating larger quantum systems <cit.>. Tensor methods are highly effective for certain applications, such as studying ground-state properties in one-dimensional systems, but they encounter significant limitations when extended to systems with higher dimensions or greater levels of entanglement. Likewise, machine learning techniques based on neural networks, such as Neural-Network Quantum States (NQS) and Heisenberg Neural Networks (HENN), capture the essence of large quantum systems with a smaller-dimensional model. NQS provides a compact representation of many-body quantum states with artificial neural networks, capturing intrinsically nonlocal correlations and improving scalability over conventional approaches <cit.>, while HENN reconstructs time-dependent quantum Hamiltonians from local measurements, employing a physics-informed loss function based on the Heisenberg equation of motion and achieving high tomographic fidelity with sparse data <cit.>. However, such machine learning-based approaches have marked limitations in accuracy, particularly when they model large quantum systems and long evolution times.
Moreover, traditional neural networks struggle with generalizing across different discretizations, often requiring re-training or adjustments to maintain accuracy when applied to new discretization schemes. In contrast, Fourier Neural Operators (FNOs) provide a compelling alternative by learning operators between infinite-dimensional function spaces, leveraging their resolution-invariance <cit.>. This allows FNOs to be trained at lower resolutions and seamlessly perform evaluations at higher resolutions, a phenomenon known as zero-shot super-resolution. Moreover, FNOs maintain consistent error rates across varying resolutions and offer exceptional computational efficiency, performing orders of magnitude faster than traditional solvers for partial differential equations (PDEs) <cit.>. Recent studies demonstrate the effectiveness of FNOs in representing the S-matrix and solving fundamental quantum problems, such as the double-slit experiment and wave packet scattering <cit.>. However, the study primarily focuses on relatively simple quantum problems, which, while illustrative, have limited practical application.
Our approach: We explore whether FNOs can effectively learn the dynamics of quantum wavefunctions in quantum systems, such as quantum spin systems <cit.>. Given the limitations of contemporary quantum simulators, FNOs present a potential tool for studying quantum systems, learning their dynamics and, most interestingly, extrapolating those dynamics to timescales that are not possible either experimentally (due to coherence times) or computationally (due to, e.g., the growing rank of tensor networks with increased circuit-depth or time-evolution). As quantum dynamics are dictated by the Schrödinger equation, which is a PDE, the use of FNOs for learning the time-evolution operator is well motivated. We employ temporal data from actual quantum computations to train FNOs, enabling them to capture system dynamics, extrapolate future time intervals, and generate predictions for analogous inputs. This methodology has the potential to address the limitations imposed by quantum decoherence, which impedes large-scale and prolonged simulations.
The time-evolution of spin systems is particularly advantageous for exploring the application of FNOs, as they are fundamental models that encapsulate a wide array of complex phenomena inherent in quantum mechanics <cit.>. These simplified systems are instrumental in elucidating various quantum many-body effects, such as quantum phase transitions, gauge symmetries, and spin liquids, while also facilitating the discovery of novel, uncharacterized phenomena <cit.>. Notable examples of these models include the Ising model, the Heisenberg model, and the XY model. The large-scale simulation of these spin systems is of paramount importance due to their extensive applications in fields such as condensed matter physics, high-energy particle physics, and quantum gravity <cit.>.
In this manuscript, we numerically demonstrate the effectiveness of FNOs in learning the time evolution operator for the wavefunction of an 8-qubit Heisenberg 1D chain with random single-qubit driving. Two distinct FNO architectures are designed: the energy-domain architecture and the time-domain architecture. Moreover, we illustrate the model's ability to extrapolate dynamics to future time intervals with minimal error rates.
In addition to using these two architectures on the full quantum wavefunction, we also deploy the time-domain architecture using a mere polynomial number of Hamiltonian observables as inputs and outputs. We are motivated to study this compact architecture because the full wavefunction analyses have utility only at scales for which classical simulations can provide a ground-truth calculation. Conversely, the more compact Hamiltonian observable training procedure can be applied to data from quantum devices that exceed the capabilities of classical simulations, representing a powerful future goal. This could enable the study of quantum systems that are too large to simulate classically on timescales that are too long to carry out experimentally. Another advantage of using Hamiltonian observables as inputs is that it can push tensor-network simulation beyond the limits set by computationally tractable bond dimensions, which increase linearly with the number of qubits but grow exponentially with circuit or unitary depth. This is because an FNO can be trained on data from shallower tensor networks and then be used to extrapolate to longer timescales, whereas simulating the same system directly with a deeper tensor network would result in computationally prohibitive bond dimensions. To demonstrate the potential of this goal, we extrapolate the dynamics of these Hamiltonian observables into timescales longer than those provided by the training data. This includes extending the Hamiltonian observable time horizon up to twice the duration of the training period, with a relative error of just 6.46%.
In the time-domain architecture, we achieve a substantial 6.71× speedup with FNOs compared to exact unitary evolution for inferring dynamics at later times for 8 qubit systems with only a minimal fidelity reduction of 0.04%. We note that this speedup is likely to become more substantial at larger system sizes, as both exact unitary integration and approximate integration techniques become more computationally intensive. Additionally, we demonstrate FNO's capability for zero-shot super-resolution on 4 qubits by making predictions on a grid 10 times finer than that of the training interval. Due to its discretization-invariance, the FNO maintains exceptional accuracy, with error rates as low as 0.04% on the finer grid, while the U-Net shows a significantly increased error rate of 51.70% on the finer discretization. These results highlight FNO’s superior performance in both computational efficiency and accuracy, emphasizing its potential as a powerful tool for predicting quantum dynamics.
§ RESULTS
§.§ Preliminaries
In this section, we provide a brief overview of the quantum spin systems studied and outline the functionality of FNOs.
§.§.§ Spin System Model
Quantum spin systems are characterized by two-level particles organized in a specific geometry <cit.>. The Hamiltonian involves the interaction between neighboring quantum spins and external fields. We consider a spin 1/2 1D Heisenberg chain. The corresponding Hamiltonian is given by
H = ∑_i=1^n (J_zσ_i^zσ_i+1^z + J_xσ_i^xσ_i+1^x + J_yσ_i^yσ_i+1^y) + hσ_i^z,
where n represents the total number of qubits or atoms in the system. The Pauli matrix σ_i^a, where a ∈{x,y,z}, is defined as σ_i^a = I^⊗ i-1⊗σ^a ⊗ I^⊗ n-i. Here, I is the 2× 2 identity matrix and σ^a denotes the corresponding 2 × 2 Pauli matrix. The parameters J_x, J_y, and J_z are the coupling constants for two-qubit spin interactions, while h denotes the single-qubit driving field acting along the z-direction. We restrict interactions to nearest neighbors and apply periodic boundary conditions, such that σ_n+1^a≡σ_1^a. For our analysis, we use a Hamiltonian with randomly assigned coupling constants and a single-qubit driving field, where the values are uniformly distributed in the range from -2 to 2. Additionally, we consider the quantum Ising Hamiltonian defined as
H = ∑_i=1^n J_z(σ_i^zσ_i+1^z) + hσ_i^x.
This Hamiltonian also involves nearest-neighbor interactions and periodic boundary conditions, along with randomly assigned J_z and h.
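For concreteness, the following NumPy sketch builds such a random Heisenberg chain; reading "randomly assigned" as one draw of (J_x, J_y, J_z, h) per Hamiltonian is our assumption, and all function names are illustrative.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def pauli_at(n, i, s):
    """sigma_i^a = I^(i-1) (x) sigma^a (x) I^(n-i) as a 2^n x 2^n matrix."""
    mats = [id2] * n
    mats[i] = s
    return reduce(np.kron, mats)

def random_heisenberg(n, rng):
    """1D Heisenberg chain as above: periodic boundaries, couplings in [-2, 2]."""
    Jx, Jy, Jz, h = rng.uniform(-2, 2, size=4)   # assumed: one draw per Hamiltonian
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        j = (i + 1) % n                          # periodic: sigma_{n+1} = sigma_1
        H += Jz * pauli_at(n, i, sz) @ pauli_at(n, j, sz)
        H += Jx * pauli_at(n, i, sx) @ pauli_at(n, j, sx)
        H += Jy * pauli_at(n, i, sy) @ pauli_at(n, j, sy)
        H += h * pauli_at(n, i, sz)              # single-qubit z driving
    return H

H = random_heisenberg(4, np.random.default_rng(0))   # 16 x 16 Hermitian matrix
```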
§.§.§ Fourier Neural Operators
Neural Operators (NOs) are a class of machine learning models designed to learn mappings between infinite-dimensional function spaces, making them well-suited for a variety of applications, including ordinary differential equations (ODEs) and PDEs <cit.>. A key advantage of NOs is their resolution-agnostic nature; they can be trained on data at one resolution and generalize to different resolutions without requiring retraining. Among Neural Operators, FNOs represent a specific implementation where spectral convolutions are utilized to capture the underlying patterns in the data <cit.>.
As shown in Figure <ref> b), an FNO consists of L FNO blocks F_ℓ, a lifting layer Lift, and a projection layer Proj of the form
I Lift⟶ V_0 F_1⟶ V_1 F_2⟶ … F_L-1⟶ V_L-1 F_L⟶ V_L Proj⟶ O.
The lifting layer Lift maps the input data I with a given number of input channels to a latent function V_0 with a (typically) larger number of channels. Here, channels denote the dimensions of the co-domain of the functions V_ℓ, representing distinct components or features within the latent space. The latent space is an intermediate representation where the data is abstracted into a higher-level form, capturing essential patterns and relationships. The FNO operates within this latent space by applying spectral convolutions.
Specifically, the latent dimension representation V_ℓ+1 is defined as
V_ℓ+1(x⃗) =F_ℓ(V_ℓ)(x⃗) = σ (W_ℓV_ℓ(x⃗)+(K_ℓV_ℓ)(x⃗)),
where W_ℓ is a learnable affine-linear map applied across the channels of V_ℓ,
and σ is a non-linear activation function.
The spectral convolution K_ℓ can be defined as follows,
K_ℓV_ℓ = ℱ^-1(R_ℓ·ℱ(V_ℓ)),
where ℱ and ℱ^-1 denote the Fourier and inverse Fourier transforms, respectively. In the Fourier domain, higher frequency modes are truncated, leaving a fixed number of lower modes that are multiplied with learnable parameters R_ℓ. By combining the power of linear transformations, spectral convolutions, and nonlinear activation functions, the FNO can approximate highly non-linear operators <cit.>.
After processing through multiple FNO blocks F_ℓ, the projection layer Proj maps the latent representation to the output data O, which may have one or more channels, depending on the specific architecture used as described in Sections <ref> and <ref>.
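To make the block structure concrete, here is a minimal complex-valued PyTorch sketch of the spectral convolution K_ℓ and one block F_ℓ; the split-tanh activation and the initialization scale are illustrative choices, not necessarily those of the manuscript.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """K_l: Fourier transform, keep the lowest `modes` frequencies,
    multiply by learnable complex weights R_l, inverse transform."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.R = nn.Parameter(torch.randn(channels, channels, modes,
                                          dtype=torch.cfloat) / channels)

    def forward(self, v):                        # v: (batch, channels, grid)
        v_hat = torch.fft.fft(v, dim=-1)         # F(V_l)
        out_hat = torch.zeros_like(v_hat)
        out_hat[..., :self.modes] = torch.einsum(
            "bix,iox->box", v_hat[..., :self.modes], self.R)  # R_l . F(V_l)
        return torch.fft.ifft(out_hat, dim=-1)   # F^{-1}(...)

class FNOBlock(nn.Module):
    """F_l: sigma(W_l V_l + K_l V_l), with W_l a complex channel mixing."""
    def __init__(self, channels, modes):
        super().__init__()
        self.K = SpectralConv1d(channels, modes)
        self.W = nn.Parameter(torch.randn(channels, channels,
                                          dtype=torch.cfloat) / channels)

    def forward(self, v):
        z = torch.einsum("io,bix->box", self.W, v) + self.K(v)
        # One common complex activation: apply tanh to real/imaginary parts.
        return torch.tanh(z.real) + 1j * torch.tanh(z.imag)
```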
§.§ Learning Dynamics using Complete 2^n Wavefunction
In this approach, the full state space of a quantum wavefunction is used as input, with two distinct types of wavefunctions considered. The first type consists of complex-valued normalized wavefunctions for n particles, randomly generated from a uniform distribution between -1 and 1. For the second type, we uniformly distribute the wavefunction over low-energy states while setting the high-energy states to zero. This distinction is insightful, as in many physical applications, e.g., quantum chemistry <cit.>, often only low-energy components of wavefunctions are occupied. In the remainder of this manuscript, we refer to the first input type as “random input” and the latter as “low-energy input”. We develop two distinct FNO architectures for processing wavefunction inputs: the energy-domain and time-domain architectures, which are described in detail below. Notably, because quantum wavefunctions are complex quantities, we use a complex version of the FNO <cit.>.
§.§.§ Energy-domain Architecture
We first consider the energy-domain architecture, where the Fourier transform is applied to the basis states of the wavefunction. This architecture requires a careful ordering of the wavefunction, specifically by arranging the basis states in order of increasing energy levels, so that states with lower energies precede those with higher energies. The method is described in detail in Section <ref>. In the Fourier domain, the FNO truncates fast (high-frequency) energy transitions. In quantum physics, energy E and frequency ν are directly related by the Planck-Einstein relation E=hν, where h is Planck's constant. Therefore, performing a Fourier transform over energy is effectively the same as performing it over frequency.
In this architecture, the input comprises various training data of the quantum wavefunction at an initial time, as shown in Figure <ref> c). The input to the FNO is structured as follows
I=[Embed_state,S_0] ∈ℂ^2 × 2^n.
The term S_0 corresponds to the basis states S of a 2^n wavefunction, [ |ψ_1 ⟩,|ψ_2 ⟩, ⋯, |ψ_2^n⟩], at the initial time t=0. To enhance the learning process, we incorporate a position embedding
Embed_state(k)=k/2^n into the input channel dimension, where k is the index associated with each basis state.
This approach helps the model better understand the relationship between the states and their corresponding energy levels.
The output tensor is given by
O = S_T∈ℂ^2^n.
Thus, the output consists of the wavefunction evolved to t=T. In our experiments, we choose T=π as it represents a significant portion of the quasi-periodicity of the unitary evolution generated by the Hamiltonian H. The Fourier transform on the basis states S of the wavefunction is given by
1/2^n∑_k=0^2^n-1ψ_k e^-i k τ/ħ,
Here, ψ_k is the basis state, τ can be understood as a time-like variable, and ħ is the reduced Planck constant. By truncating the transformed modes, we effectively filter out the components where energy values change rapidly, thus eliminating fast transitions. By concentrating on the slow-transition dynamics, the FNO highlights the most relevant dynamics of the quantum system. While this architecture provides a physical interpretation as a Fourier transform along the wavefunction's energy space, it does not support further discretization of the time interval [0,T], thereby limiting its utility in scenarios requiring more fine-grained prediction of the time-evolution. We will address this issue with our time-domain architecture in Section <ref>.
We use fidelity as the primary metric to evaluate the performance of the FNO in predicting quantum wavefunction dynamics. Fidelity measures the similarity between two quantum states and is defined as,
F(ψ_true, ψ_pred) = | ⟨ψ_true | ψ_pred⟩|^2,
where ⟨ψ_true | ψ_pred⟩ is the inner product between the true and predicted wavefunctions. Additionally, we iteratively apply the model to its own predictions to forecast wavefunction dynamics over a period of up to t=10T. Although the model is trained on the time interval [0,T], this iterative approach allows it to predict wavefunction evolution well beyond the training data time-horizon.
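In code, the fidelity metric and the iterative extrapolation come down to a few lines; the `model` interface below (state at t ↦ state at t+T) and the renormalization step are assumptions for illustration.

```python
import torch

def fidelity(psi_true, psi_pred):
    """F = |<psi_true|psi_pred>|^2 for normalized state vectors."""
    return torch.abs(torch.vdot(psi_true, psi_pred)) ** 2

def rollout(model, psi0, steps):
    """Feed the model its own prediction to reach t = steps * T."""
    states = [psi0]
    for _ in range(steps):
        with torch.no_grad():
            psi = model(states[-1])
        # Renormalize: the learned map need not be exactly unitary.
        states.append(psi / torch.linalg.norm(psi))
    return states
```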
Experiments are conducted using 4 and 8 qubits with both random and low-energy input states. For low-energy states, training is conducted over various time intervals (VTI), such as [0,T], [T,2T], and [2T,3T], rather than relying on a single time interval [0,T]. Utilizing multiple time intervals is crucial to generalize to future time intervals for low-energy states. Specifically, when training on only a single time step, the model fails to adequately capture the dynamics necessary for accurate future time predictions, despite showing strong performance during training. This is seen in Figure <ref>, where the 8-qubit system with low-energy states has a mean fidelity of 0.7538 when predicting future times up to t=10T, despite achieving a fidelity of 0.9998 during training. This discrepancy underscores the importance of multi-step training for low-energy states, as these states initially occupy only low-energy configurations and gradually transition to higher-energy states over time. We also tested VTI when training models on random inputs. However, random inputs do not need to be trained across multiple intervals, as they already occupy a wide array of energy regions and can thus successfully extrapolate to later times with a fidelity greater than 94%. Figure <ref> demonstrates that the model achieves high fidelity for both input types and can accurately predict future states while maintaining consistent fidelity across these predictions.
§.§.§ Time-domain Architecture
An alternative architecture involves predicting the evolution of the wavefunction over a whole time interval instead of at a single time T. In practice, we need to discretize the time interval. However, with a discretization-agnostic model such as an FNO, we can train and predict using grids of arbitrary width Δ t. Instead of a single time t=0, we then use a time interval as input, i.e.,
I=[Embed_time,S_[0,3/2T]] ∈ℂ^ (2^n+2) × m ,
where m=3/2T / Δ t defines the number of equidistant time-steps in the interval [0,3/2T]. For training, we choose T=π and Δ t=π/10.
For the positional embedding Embed_time, we use the values of two sinusoidal functions applied to the time-grid in order to assist the FNO in learning temporal patterns. The output of the FNO is given as
O = S_[T, 5/2T]∈ℂ^2^n × m,
where S_[T, 5/2T] denotes the wavefunction evolved over the (discretized) time interval [T, 5/2T]; its sub-interval [T, 3/2T] overlaps with that of the input I in Eq. (<ref>).
This setup facilitates smoother and more accurate learning, as the FNO can leverage the temporal continuity and patterns in the evolved wavefunction data. Consequently, the output includes the future time interval [3/2T,5/2T], unseen during training. We use two different sets of previously described inputs, i.e., random wavefunctions and low-energy wavefunctions. While the basis states do not necessarily need to be ordered as in the previous architecture, we maintain the ordering for consistency.
We also perform extrapolated time prediction by applying the model on its predicted time intervals to predict unseen future time intervals, up to t=23/2T, as seen in Figure <ref>. The fidelity metrics for both the training intervals and the extrapolated times are also reported in Figure <ref>, demonstrating that even with substantial predictions into the future, the FNO achieves exceptionally high fidelities for systems of both 4 and 8 qubits. Additionally, we compare the performance of the FNO with that of a deep neural network, specifically a U-Net <cit.>. Our evaluation demonstrates that the FNO outperforms the U-Net, especially in the extrapolated time regime, as seen in Figure <ref>. Moreover, even at moderate system sizes (8 qubits), the FNO achieves a 6.71× speedup compared to the exact unitary-based method in predicting the dynamics of random inputs up to t=23/2T, with only a negligible fidelity reduction of 0.04%.
Additionally, we performed evaluations on the output time interval [T, 5/2T] using a finer grid with width Δ t = π/100 for a 4-qubit system with random inputs.
For the FNO, the resulting fidelity of 0.9999 was identical to that obtained on the coarser training discretization. While we can also apply the U-Net on the finer grid, it is not discretization-agnostic, leading to an error rate of 4.78% in the expected fidelity. To further demonstrate the FNO's capability to achieve zero-shot super-resolution, we used an input interval of [0, 3/2T] with T=5π and Δ t=π/2 for 4 qubits and evaluated the corresponding output interval on a finer grid with Δ t=π/20. The FNO successfully predicted the finely discretized output with an error rate of just 0.04%, whereas the U-Net demonstrated a significantly higher error rate of 51.70% compared to the coarse grid fidelity.
§.§ Learning Dynamics using Hamiltonian Observables
Hamiltonian observables refer to the individual terms (observables) from the time-evolution generator (Hamiltonian) of the quantum wavefunction. These observables are obtained using the wavefunction by calculating the expectation values for the individual observable terms. For example, in the Ising model, the observables could include terms like σ_z σ_z or σ_z while in the Heisenberg model, they could include σ_a σ_a (where a ∈{x,y,z}) and σ_z. Using observables from the Hamiltonian, instead of the full wavefunction, as inputs and outputs for the FNO greatly enhances its scalability, as there are approximately poly(n) terms in the former and 2^n terms in the latter. This compression would allow us to train and make predictions with quantum systems that are too large or long to simulate directly, e.g., by using data from large quantum devices or wide tensor networks, from which we could then infer times longer than the devices' coherence time or past the tensor network's tractable depth, respectively. However, predicting future dynamics based on these partial observables presents a significant challenge, particularly without explicit knowledge of the underlying Hamiltonian.
To evaluate the FNO's capabilities, we first considered the Ising Hamiltonian
as described in Eq. (<ref>), where J_z and h are randomly generated. Importantly, in this study, we focus on using random wavefunction states to calculate the expectation values of these observables, making the learning and prediction tasks for the FNO maximally general and, as a result, challenging.
For the observables, we exclusively utilize the time-domain architecture. In this setup, the input consists of predefined observables and their time-evolved intervals. We provide a detailed description of the specific observables used in Section <ref>. This input structure mirrors the previously described input I in Eq. (<ref>), but instead of wavefunction basis states S, it includes observable quantities. This setup presents a substantial challenge for the FNO, as it must learn to predict future dynamics based on only partial information about the quantum state.
We further extrapolate to the future time interval [5/2T, 7/2T], effectively doubling the dynamics captured within the output time interval [3/2T, 5/2T]. This feature is particularly advantageous for quantum computing, where the limited coherence time of error-corrected qubits imposes constraints on the length of computations. Additionally, the approach can ingest observable data from tensor-network methods and extrapolate it to timescales that would be computationally difficult for those methods due to high tensor-train rank. Given that these observables present challenges for the FNO to learn effectively, we extrapolate future times based on the ground truth of the predicted time interval. This approach helps minimize unnecessary errors, as the ground truth is typically known.
As seen in Figure <ref>, with a system of 8 qubits we predict future observables with a relative error of 6.46%, even when extending the prediction to double the train time horizon on ground truth.
§ DISCUSSIONS
We have presented two FNO architectures that are capable of not only learning the time-evolutions of quantum systems, but also of extrapolating these evolutions to later times. The energy-domain architecture is more compact, requiring fewer computational inputs, and hones in on key dynamics by prioritizing slow quantum transitions. The time-domain architecture is agnostic to the discretization of the time interval, making it highly versatile for obtaining the output state of quantum evolution at arbitrary times. Both methods demonstrate the ability of FNOs to not only carry out quantum state evolution, but to learn the underlying time-evolution operator itself, which constitutes a key accomplishment in the ubiquitous and challenging task of solving the Schrödinger equation.
Of foremost interest is the application of the time-domain FNO using only Hamiltonian observables as input and output. This difficult learning task requires that the FNO learn and conduct future inference on partial information, using only ∼poly(n) expectation values rather than the full 2^n-component wavefunction. Under this architecture, the FNO could feasibly be trained using measurements from noisy quantum devices or data from shallow tensor networks, then infer to later time-scales that are otherwise out of reach of contemporary techniques as they would require either more coherent quantum computers or intractably large tensor networks.
In subsequent research, such a hybrid implementation of our work should be carried out using a large quantum device and a classical FNO to extrapolate to longer times. As open quantum systems have distinct dynamics from their pure counterparts, FNOs should also be applied to noisy quantum states. Moreover, the impact of physical noise and system symmetry on the requisite FNO dimensions and training data size should be studied. As quantum noise, sampling errors, and symmetries can reduce the learning complexity of the quantum system, it is natural that we characterize the FNO in this capacity.
§ METHODS
We provide details on the wavefunction ordering protocol used in Section <ref>, the types of Pauli strings utilized in Section <ref>, and the specifics of the training data, and FNO configurations for Figures <ref>, <ref>, and <ref>.
The quantum wavefunction ψ of a system with n qubits is represented as a vector in a 2^n-dimensional Hilbert space. The basis states ϕ_i correspond to different qubit configurations, ordered by their binary representation. To reorder the wavefunction by energy levels, we arrange the basis states such that their associated energies E_i satisfy E_i ≤ E_i+1. The wavefunction is then expressed as,
ψ = ∑_i=1^2^n c_i ϕ_i.
Here ϕ_i are ordered according to increasing energy levels. The energy levels can be calculated using E_i = ⟨ϕ_i | H | ϕ_i ⟩, where H is the Hamiltonian of the system.
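In the computational basis, E_i = ⟨ϕ_i|H|ϕ_i⟩ is just the i-th diagonal element of H, so the reordering reduces to a sort; a sketch:

```python
import numpy as np

def reorder_by_energy(psi, H):
    """Permute amplitudes c_i so that basis states satisfy E_i <= E_{i+1}."""
    energies = np.real(np.diag(H))   # E_i = <phi_i|H|phi_i> in this basis
    order = np.argsort(energies)
    return psi[order], order
```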
In Section <ref>, for a system of 8 qubits, we use a set of 48 observables that includes all nearest-neighbor XX, YY, and ZZ interactions across all qubit pairs. Additionally, the set includes the single-qubit observables X, Y, and Z. We focus exclusively on quantities exceeding a certain threshold (e.g., greater than 10^-2) to avoid including less significant values that might inflate the relative error. Given that these observables are the expectation values of Pauli strings, we employ Mean Squared Error (MSE) and Mean Relative Error (MRE) as our loss metrics to evaluate the model’s performance.
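One way to realize such a set is sketched below; the 3n nearest-neighbor plus 3n single-qubit terms give exactly 48 observables for n = 8, though the precise set used in the experiments may differ.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
PAULI = {"X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def pauli_string(n, ops):
    """Operator for a Pauli string, e.g. ops = {0: 'X', 1: 'X'} on n qubits."""
    return reduce(np.kron, [PAULI[ops[i]] if i in ops else I2 for i in range(n)])

def hamiltonian_observables(n):
    """Nearest-neighbor aa terms (periodic) and single-qubit a terms, a in XYZ."""
    obs = []
    for a in "XYZ":
        for i in range(n):
            obs.append(pauli_string(n, {i: a, (i + 1) % n: a}))  # two-qubit
            obs.append(pauli_string(n, {i: a}))                   # single-qubit
    return obs                                    # 6n operators; 48 for n = 8

def expectations(psi, obs):
    """<psi|O|psi> for each observable (real up to numerical noise)."""
    return np.array([np.vdot(psi, O @ psi).real for O in obs])
```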
In Figure <ref>, an 8-qubit system is used with 4,000 training data points, 4 FNO blocks, and 128 modes retained in the Fourier integral operator after truncation. For low-energy states with VTI, each interval contains 5,000 training data points.
In Figure <ref>, the model is again trained with 4,000 data points, using 4 layers and retaining 7 modes in the FNO after truncation. The U-Net is also trained on the same amount of data.
In Figure <ref>, we use 18,000 training data points with 48 Pauli strings for the 8-qubit system, utilizing 4 layers and retaining 7 modes.
F.S. acknowledges support from the Caltech Summer Undergraduate Fellowship. J.B. acknowledges
support from the Wally Baer and Jeri Weiss Postdoctoral
Fellowship. A.A.'s work is supported in part by the Bren endowed chair, the ONR (MURI grant N00014-18-12624), and the AI2050 Senior Fellow Program at Schmidt Sciences.
|
http://arxiv.org/abs/2409.02997v1 | 20240904180008 | Entropy-Enhanced Fractional Quantum Anomalous Hall Effect | [
"Gal Shavit"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.str-el"
] |
Department of Physics and Institute for Quantum Information and Matter, California Institute of Technology,
Pasadena, California 91125, USA
Walter Burke Institute of Theoretical Physics, California Institute of Technology, Pasadena, California 91125, USA
§ ABSTRACT
Strongly interacting electrons in a topologically nontrivial band may form exotic phases of matter.
An especially intriguing example is the fractional quantum anomalous Hall phase, recently discovered in twisted transition metal dichalcogenides and in moiré graphene multilayers.
However, it has been shown to be destabilized in certain filling factors at sub-100 mK temperatures in pentalayer graphene, in favor of a novel integer quantum anomalous Hall phase [Z. Lu et al., https://arxiv.org/abs/2408.10203arXiv:2408.10203].
We propose that the culprit stabilizing the fractional phase at higher temperatures is its rich edge state structure.
Possessing a multiplicity of chiral modes on its edge, the fractional phase has lower free energy at higher temperatures due to the excess edge-mode entropy.
We make distinct predictions under this scenario, including the system-size dependency of the fractional phase entropic enhancement, and how the phase boundaries change as a function of temperature.
Entropy-Enhanced Fractional Quantum Anomalous Hall Effect
Gal Shavit
September 9, 2024
=========================================================
Introduction.—
The fractional quantum anomalous Hall effect <cit.> (FQAH) is the zero magnetic field analog of the celebrated fractional quantum Hall effect <cit.>.
Recent advances in moiré materials have enabled the experimental observation of this topological strongly-correlated phase of matter in twisted transition metal dichalcogenides <cit.>, and crystalline graphene/hBn moiré superlattices <cit.>.
A recent experiment <cit.> revealed an unexpected surprise: throughout much of the phase diagram in Ref. <cit.>, the observed FQAH phases at various filling factors are not the actual ground state of the system.
When the electronic temperature was lowered, a different phase superseded the FQAH.
This phase is characterized by vanishing longitudinal resistivity and quantized anomalous Hall response R_xy≈ h/e^2.
Moreover, it extends to a finite range of densities, with little regards to commensuration with the underlying moiré structure.
It has thus been dubbed the extended integer quantum anomalous Hall phase (EIQAH).
To understand the root cause of this phenomenon, a better understanding of both the FQAH phases and the EIQAH is required.
A recent proposal <cit.> has associated the EIQAH with a Wigner-crystal like phase formed on top of an integer quantum anomalous Hall state.
An attractive feature of this theory is that it accounts for some of the non-linear transport signatures in Ref. <cit.>.
However, the temperature-driven transition into the FQAH remains unresolved.
Ref. <cit.> posited that the “leftover” electrons in the EIQAH Anderson-localize due to disorder.
Heating the EIQAH delocalizes these electrons, therefore allowing interaction effects to take root, such that the system crosses over to a FQAH phase.
In this Letter, we propose a different theoretical explanation for the observed phenomenon, relying only on the topological distinction between the two competing phases, and on the mesoscopic character of the experimental devices.
Namely, excess gapless edge modes in the FQAH phase endow it with a higher entropy.
This naturally results in a phase boundary which favors the FQAH over the EIQAH as temperature increases, see Fig. <ref>.
It is rather suggestive that the temperature-stabilized FQAHs appear at filling factors where the analogous fractional quantum Hall phases have rich non-universal edge content <cit.>.
Whereas one naturally expects the EIQAH to host one chiral edge state, at a FQAH filling of, e.g., ∼ 2/3, one may encounter as many as four downstream/upstream modes <cit.> which carry heat and thus contribute to the overall edge entropy <cit.>.
We provide key predictions which may confirm or invalidate the proposed scenario.
Namely, under our assumptions the entropic effect is necessarily device-size-dependent, and should be virtually undetectable for a large enough sample.
Moreover, the temperature dependence of the phase boundary between the phases should follow T∝√(δ u), with δ u the ground state energy difference between the phases.
Finally, observation of the EIQAH-to-FQAH crossover at various values of electric displacement fields suggests that this δ u should be relatively insensitive to displacement field changes.
Theory.—
We aim to provide a phenomenological characterization of the competition between the extended quantum anomalous Hall phase (I) and the fractional quantum anomalous Hall phase (F), via a description of the free energy associated with these phases.
Assuming a temperature scale far below the respective transition temperatures of these phases to a metallic phase, we approximate the free energies as,
G_μ(x,T) = U_μ(x)-TS^ edge_μ,
with the index μ=I,F for the appropriate phase.
Here, x is a parameter tuning a phase transition between the integer and fractional phases.
In the experiment, this parameter is, e.g., an electric displacement field.
Let us denote by Ω and 𝙿 the area and perimeter of the device.
The entropy density of the phase μ originating in its linearly dispersing edge states is
s^ edge_μ =S^ edge_μ/𝙿=π^2k_B^2T/3ħ∑_i∈μ1/v_i,
where v_i are the edge mode velocities, and k_B is the Boltzmann constant.
It is then useful to define the potential energy density difference
δ u ≡(U_F-U_I)/Ω, and the velocity scale
v^* = [∑_i∈ F 1/v_i - ∑_j∈ I 1/v_j]^-1,
where we assume v^*>0, implying the fractional phase edge states carry excess entropy as compared to the integer phase.
If all velocities are equal to v̅, v^*=v̅/N_δ, with N_δ the difference in the number of edge states between F and I.
Notice that v^* is dominated by the lowest velocities in the system (and hence by modes carrying the highest density of states).
Furthermore, in the presence of multiple edge modes (as hypothesized for the quantum Hall analogs for the relevant filling factors in the experiment), inter-mode interactions tend to strongly renormalize the velocities <cit.>.
Generically, one or several of the renormalized velocities will be significantly lower than the bare ones.
We note it has been well established that non-linearities in the edge spectrum arise for FQH phases,
at energy scales which are a fraction of the FQH gap <cit.>.
These non-linearities soften the edge mode dispersion, give rise to an enhancement of the edge state density of states, and consequently lead to a higher entropy than the simplified description in Eq. (<ref>).
Assuming similar physics persist for the FQAH edge states, this will make the described entropy-driven effect even more pronounced.
The transition between the two phases, described by Eq. (<ref>), is then given by the condition,
k_B T = √(α_ geo. L ħ v^*)×√(δ u(x)),
where we have conveniently defined the geometrical constant
3Ω/(π^2𝙿)≡α_ geo.L, relating the surface-to-perimeter ratio of the device to its linear extent L (for, e.g., a square, α_ geo.=3/(4π^2)).
Taking into account realistic parameters from the experiment in Ref. <cit.>, one may arrive at a rough estimate of δ u given Eq. (<ref>) characterizes the transition.
Considering an electronic density of n_e≈ 0.5 10^12 cm^-2, the difference in energies between the fractional and extended integer phase δ u/(k_B n_e) is on the order of O(5 mk / electron).
Let us take a different perspective, considering specifically the out-of-plane electric displacement field as controlling the transition, x=D.
We may formulate the evolution of the phase boundary in the D-T plane as a Clausius-Clapeyron relation,
dD/dT=(S^ edge_F-S^ edge_I)/δμ_D,
where δμ_D/Ω = (∂δ u)/(∂ D)
is the difference of out-of-plane electric dipole moment between the fractional and integer phases.
Eq. (<ref>) then implies that the effect we describe here, stabilization of the phase with richer edge content by the edge entropy, is greatly enhanced if the energy difference between the phases is insensitive to changes in the displacement field.
As the relevant experimental regime lies in the high-field region with D≈1 V/nm, this is actually quite likely – the displacement field effects on the single-particle properties of the relevant band are nearly saturated.
This point is demonstrated in Fig. <ref>, which examines a key quantum geometrical property of the lowest-lying conduction band in pentalayer graphene, namely the trace of the Fubini-Study metric tr g.
This quantity is believed to play a key role in stabilizing the fractional quantum anomalous Hall phase in a given band <cit.>.
For simplicity, we omit the effect of the moiré-inducing hBN layer, since electrons are mostly polarized to the graphene layer far away from it in the relevant regime.
A change of 5% to the potential difference between the outermost graphene layers hardly has any effect, as seen by comparing Fig. <ref>a–b.
In Fig. <ref>c we plot the relative change in this quantity, which remains much smaller than the relative change in D throughout the majority of the enclosed Fermi volume at the relevant densities.
In the thermodynamic limit, L→∞, and the entropy-driven transition temperature diverges.
Referring again to Fig. <ref>, the phase boundary between the FQAH and EIQAH phases becomes nearly vertical below the transition temperature to some gapless phase.
Practically, one expects to “lose” the transition once a device exceeds a certain size L, for which the transition temperature exceeds the temperature at which the fractional phase forms, T>T_c.
This length scale is approximately
L≈(k_BT_c)^2/(α_ geo.ħ v^* δ u).
If the excess edge density of states (∝1/v^*) is large enough, or the potential energy difference δ u is rather small, the EIQAH-to-FQAH “melting” can be observed for realistic device sizes.
We note that in the presence of disorder, specifically of the kind that couples differently to the F and I phases, the impact of the edge-state entropy may be further enhanced.
Domain walls separating the topologically distinct phases contain protected edge modes which carry entropy at finite temperature.
A transition from a uniform I phase at low-temperatures to a domain-patterned phase at higher temperature, analogous to the so-called Chern mosaic <cit.>, is expected in such a scenario.
The interplay between disorder and domain wall entropy driven transitions has been thoroughly explored in Ref. <cit.>.
The signature this sort of transition would have on transport is not universal, would depend on the position of the domains relative to the contacts, and would thus be hard to reproduce.
Discussion.—
We have presented a scenario accounting for the emergence of FQAH upon heating of an EIQAH ground state.
The culprit in this scenario is the excess entropy of the FQAH edge states.
If the ground-state energy difference between the phases is sufficiently small, the system may gain energy by “partially melting” in its edges.
This is enabled by transitioning into the intermediate FQAH, which allows additional gapless excitations on its edge.
Clearly, the free energy gain on the edge is expected to be eclipsed by the bulk δ u>0 contribution for large enough systems, and thus the behavior of the phase boundary will be strongly system-size-dependent.
This is the first distinct prediction of our model, indicating that the movement of the phase boundary in the D–T plane should be inversely proportional to L, as inferred by Eq. (<ref>).
As the transition is dominated by edge contributions, our theory further predicts this effect should be enhanced (i.e., the FQAH more stable against the EIQAH) at filling fractions which host a larger abundance of edge states.
We cautiously speculate that this effect may explain why certain FQAHs disappear completely at low temperatures, whereas others survive for a finite range of displacement fields.
An exhaustive mapping of the hierarchy of the stabilized FQAHs at intermediate temperatures is necessary in order to make this point more concrete.
In addition to experimental predictions, our proposal raises issues to be addressed by the theory underlying the FQAH-EIQAH competition.
Namely, in the picture described above the energetic difference between the phase δ u is rather small.
Moreover, δ u should be relatively insensitive to changes of the displacement field around the experimentally relevant range, D≈ 1 V/nm.
These points may provide a valuable clue towards disentangling the two phases, as well as a testable hypothesis in their numerical simulations.
The predictive power of Eq. (<ref>), suggesting the phase boundary takes the shape T∝√(δ u(x)), is somewhat diminished by the lack of knowledge of the functional dependence of δ u(x).
Improved theoretical understanding of the phase competition will help resolve this issue and provide an ideal setting to test Eq. (<ref>).
Alternative explanations resulting in the same square-root behavior are possible,
as the nature of collective excitations in FQAH and EIQAH is far from understood.
These include the anomalous magneto-roton <cit.>, which is gapped, and presumably contributes very little entropy at low temperatures.
Phonon excitations due to spontaneous crystallization in the EIQAH are expected to be gapped due to the weak moiré potential <cit.>, and even if they are not (or the gap is very small), the phonons should give an entropic advantage to the EIQAH and not the FQAH.
We note that the observed FQAH itself may harbor an additional hidden translation-symmetry-breaking order <cit.>, yet the resulting phonons are also expected to be gapped for the same reason – the underlying moiré lattice.
In conclusion, the rich landscape of FQAH, EIQAH, anomalous Hall crystals, and their interplay in moiré flat-band systems is far from being understood.
The anomalous appearance of the exotic FQAH phase at higher temperatures is one of many observations calling to question our understanding of these materials.
Drawing inspiration from the presumably-similar physics of the fractional quantum Hall effect, we illustrate how this phenomenon can be related to the topological difference between the FQAH and the EIQAH ground states, regardless of their intricate collective excitation spectra.
GS acknowledges enlightening discussions with Gil Refael, as well as support from the Walter Burke Institute for Theoretical Physics at Caltech, and from the Yad Hanadiv Foundation through the Rothschild fellowship.
|
http://arxiv.org/abs/2409.02546v1 | 20240904090347 | Real-Time Dynamic Scale-Aware Fusion Detection Network: Take Road Damage Detection as an example | [
"Weichao Pan",
"Xu Wang",
"Wenqing Huan"
] | cs.CV | [
"cs.CV"
] |
Real-Time Dynamic Scale-Aware Fusion Detection Network: Take Road Damage Detection as an example
Weichao Pan^1, [email protected]
Xu Wang^1, [email protected]
Wenqing Huan^2, [email protected]
^1 School of Computer Science and Technology, Shandong Jianzhu University, Ganggou Street, Jinan, 250000, Shandong, China
^2 School of Science, Shandong Jianzhu University, Ganggou Street, Jinan, 250000, Shandong, China
Unmanned Aerial Vehicle (UAV)-based Road Damage Detection (RDD) is important for daily maintenance and safety in cities, especially in terms of significantly reducing labor costs. However, current UAV-based RDD research still faces many challenges. For example, damage with irregular sizes and directions, the masking of damage by the background, and the difficulty of distinguishing damage from the background significantly affect the ability of UAVs to detect road damage in daily inspection. To solve these problems and improve the performance of UAVs in real-time road damage detection, we design and propose three corresponding modules: a feature extraction module that flexibly adapts to shape and background; a module that fuses multi-scale perception and adapts to shape and background; and an efficient downsampling module. Based on these modules, we designed a multi-scale, adaptive road damage detection model with the ability to automatically remove background interference, called the Dynamic Scale-Aware Fusion Detection Model (RT-DSAFDet). Experimental results on the UAV-PDD2023 public dataset show that our model RT-DSAFDet achieves a mAP50 of 54.2%, which is 11.1% higher than that of YOLOv10-m, an efficient variant of the latest real-time object detection model YOLOv10, while the number of parameters is reduced to 1.8M and FLOPs to 4.6G, decreases of 88% and 93%, respectively. Furthermore, results on the large-scale general object detection public dataset MS COCO2017 also show the superiority of our model: its mAP50-95 is the same as that of YOLOv9-t, but with 0.5% higher mAP50, 10% fewer parameters, and 40% fewer FLOPs.
=====
§ INTRODUCTION
Road Damage Detection (RDD) <cit.> is a key technology area in intelligent transportation systems and urban infrastructure maintenance. Through automated detection, RDD can effectively identify and locate various types of damage on the road surface, such as cracks and potholes. This is essential for the daily maintenance and management of the city, because timely detection and repair of road damage can not only prolong the service life of the road, but also improve road safety and reduce the occurrence of traffic accidents.
Traditional road damage detection mainly relies on manual inspection; this method is not only time-consuming and labor-intensive, but also susceptible to the influence of human factors, resulting in omissions or misdetections. With the development of computer vision, machine learning, and drone technology <cit.>, automated road damage detection methods have received more and more attention and research. In particular, image-based and video-based detection techniques <cit.> are able to utilize deep learning models such as convolutional neural networks (CNNs) <cit.> to efficiently and accurately detect and classify road damage <cit.>.
In recent years, as the application of Unmanned Aerial Vehicles (UAVs) in various fields becomes more and more extensive, road damage detection based on UAVs <cit.> has gradually become a research hotspot. UAVs have the advantages of flexible flight capability and multi-angle imaging, which can quickly cover a large area of the road area, and significantly improve the detection efficiency. However, due to the diversity and complexity of road damage, such as the irregularity of damage size, shape, and direction, as well as the interference of environmental background, UAV still faces many challenges in the detection process. Therefore, researchers continue to explore new models and algorithms <cit.> to improve the accuracy and robustness of detection.
Object Detection <cit.> is one of the core tasks in computer vision, aiming at recognizing all targets in an image or video and determining their locations and categories. With the development of deep learning, especially the application of convolutional neural networks (CNNs), object detection algorithms have made significant progress in accuracy and efficiency. Classical methods such as the R-CNN family <cit.> achieve accurate detection through region extraction and classification, while single-stage detection algorithms such as YOLO <cit.> and SSD <cit.> achieve real-time performance through dense prediction. In recent years, lightweight <cit.> and Transformer <cit.> based detection models have further advanced this field, which are widely used in real-world scenarios such as autonomous driving, security surveillance and so on.
Although road damage detection (RDD) has made significant progress in the fields of computer vision and deep learning in recent years <cit.>, it still faces many challenges, such as the effective extraction and fusion of multi-scale features, the interference of complex backgrounds, and the need for real-time detection. To solve these problems, many researchers have proposed different methods and improvements. He et al. <cit.> proposed a road damage detector based on a local perceptual feature network to address the uncertainty in the proportion of the damaged area in RDD: the multiscale feature maps extracted by CSP-Darknet53 are mapped into a local perceptual feature network (LFS-Net), where multi-scale fusion generates local feature representations, and the resulting feature maps are fed into the detection head. Ning et al. <cit.> proposed an effective detection method for multiple types of pavement distress based on low-cost front-view video data, modifying the YOLOv7 framework with distributed displacement convolution, an efficient aggregation network structure, and an improved spatial feature pyramid structure (SPPCSPD) with the similarity-attention mechanism (SimAM) integrated into the model. Wang et al. <cit.> proposed a social media image dataset for object detection of disaster-induced road damage (SODR) and an ensemble learning method based on the attention mechanism of the YOLOv5 network. Zhu et al. <cit.>, addressing the problem of uniformly identifying multiple targets in daily road maintenance inspection, proposed a multi-target automatic detection method for UAV-assisted inspection, UM-YOLO, which incorporates the Efficient Multiscale Attention (EMA) module into the C2f module of the backbone <cit.>, as well as Bi-FPN in the neck, and uses the lightweight convolution GSConv for convolution operations. Zhang et al. <cit.> proposed a fast detection algorithm, FPDDN, for real-time road damage detection, which inherits a deformation transformer that improves irregular-defect detection, a lightweight D2f module, and the SFB downsampling module; these enhance the model's ability to extract global damage features and reduce the loss of small-scale defect information.
However, these methods still have some shortcomings. For example, multi-scale feature extraction and fusion may struggle to ensure stable and accurate detection when facing complex backgrounds or irregular damage shapes. In addition, in scenarios with high demand for real-time detection, the computational complexity and inference speed of the model still need to be further optimized to achieve efficient detection on resource-limited devices. To overcome these challenges, this study proposes an RDD model for UAVs that flexibly adapts to road damage at multiple scales and automatically removes background interference, called the Dynamic Scale-Aware Fusion Detection Model (RT-DSAFDet). Specifically, the contributions of this study are summarized as follows:
1. In this study, we designed a feature extraction module (Flexible Attention, FA module) that can flexibly adapt to changes in the shape and background of road damage, which effectively improves the detection stability and accuracy of the model in complex scenes.
2. A module that fuses multi-scale perception with adaptation to the shape and background of road damage (DSAF module) was designed. By fusing multi-scale features and adapting to the different shapes and backgrounds of road damage, the DSAF module significantly enhances the model's capability in multi-scale feature extraction and fusion, which further improves the detection performance.
3. In order to improve the computational efficiency of the model while maintaining the accuracy, an efficient downsampling module (Spatial Downsampling, SD module) is designed in this study, which dramatically reduces the number of parameters and the computational complexity of the model, making it more suitable for real-time detection needs.
4. In this study, a novel road damage detection model (RT-DSAFDet) is proposed and designed. Experimental results on the publicly available datasets UAV-PDD2023 and MS COCO2017 val show that the model outperforms current state-of-the-art real-time object detection models in terms of both accuracy and efficiency, demonstrating excellent performance and wide applicability.
§ METHOD
In this section, we provide a comprehensive overview of the proposed model, describing each module in the network and clarifying their respective functions. We first explain the overall model, and then detail each module and its structure, including the Flexible Attention (FA) module, the Dynamic Scale-Aware Fusion (DSAF) module, and the Spatial Downsampling (SD) module.
§.§ Overview
The structural diagram of the RT-DSAFDet model is divided into three main parts: Backbone, Multi-Scale Fusion, and Head, each of which carries out a different function; together they constitute the entire detection system.
In the Backbone section, the model first performs basic feature extraction on the input image through the CBS (Conv-BN-SiLU) module. This process utilizes classical convolutional operations to capture the low-level features of the image. Next, the extracted features are sequentially processed through multiple DSAF and SD modules. The DSAF module focuses on the dynamic fusion of multi-scale features to ensure that the model is able to adapt to road damage of different sizes and shapes, while the SD module downsamples the feature map to reduce its size, thus lowering the computational complexity and improving the efficiency of the model.
The core part of the model is Multi-Scale Fusion, whose main task is to fuse feature maps from Backbone at different scales. Through multiple UpSample and Concat operations, this part effectively combines the feature maps at each level, enhancing the model's multi-scale perception capability. In this process, the feature maps are continuously fused and optimized to ensure that the model can acquire rich and consistent feature information in the final detection stage. In order to maintain feature consistency and optimize performance, the Multi-Scale Fusion part again uses the DSAF and SD modules, which allows features to be further enhanced as they are passed and fused.
In the Head section, the model uses multiple detection heads to process the fused feature maps. Each detection head is responsible for different scales of feature maps, which ensures the accuracy and comprehensiveness of the detection results. Specifically, the detection head processes the feature maps through a series of convolutional, classification and regression layers to output information such as the category, location and bounding box of road damage.
The RT-DSAFDet model forms a powerful road damage detection system by combining preliminary feature extraction with Backbone, multi-scale feature fusion and optimization with Multi-Scale Fusion, and accurate detection with Head part. The system is particularly suitable for road damage detection tasks in complex scenarios, and is able to provide highly accurate detection results while maintaining high efficiency. The experimental results also verify the excellent performance of the model on various public datasets, proving its wide applicability in practical applications.
§.§ Flexible Attention Module
Fig. <ref> shows the structure of the Flexible Attention (FA) module and its internal processing flow. The FA module is designed to enhance the model's adaptability to shape and context, especially when dealing with complex road damage, and to allow flexible adjustment of attention to different features. A detailed description of each part of the module is given below:
The Flexible Attention (FA) module is a key component in the RT-DSAFDet model, specifically designed to enhance the model's adaptability when dealing with complex road damage. The module first performs an initial extraction of the input features through a 3 × 3 convolutional layer to enhance the spatial information. Next, the shape and size of the convolutional kernel are adaptively adjusted using Deformable Convolutional Network v2 (DCNv2) <cit.>, which enables the model to better capture road damage features of irregular shapes and different scales. Then, the module introduces the Triple Attention mechanism <cit.>, which calculates the attention weights in the channel, spatial, and orientation dimensions separately, further improving the model's focus on key features and helping to distinguish road damage from background noise. After that, a 1 × 1 convolutional layer integrates the multidimensional features, compressing the channel dimension to reduce computational complexity while retaining important feature information. Finally, the FA module sums the original input features with the processed features through a skip connection, which preserves the integrity of the input features while enhancing the richness and robustness of the feature representation. This design enables the FA module to provide more accurate and efficient road damage detection capabilities when dealing with complex shapes and background variations.
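A schematic PyTorch sketch of this pipeline is shown below; the channel-attention block stands in for the full Triple Attention mechanism, DCNv2 is realized with torchvision's modulated deformable convolution, and all channel counts are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class FlexibleAttention(nn.Module):
    """FA sketch: 3x3 conv -> DCNv2-style deformable conv -> attention
    -> 1x1 conv, with a residual skip connection."""
    def __init__(self, c):
        super().__init__()
        self.conv3 = nn.Sequential(nn.Conv2d(c, c, 3, padding=1),
                                   nn.BatchNorm2d(c), nn.SiLU())
        # DCNv2 needs per-position offsets (2 per tap) and masks (1 per tap);
        # in practice these convs are usually zero-initialized.
        self.offset = nn.Conv2d(c, 2 * 9, 3, padding=1)
        self.mask = nn.Conv2d(c, 9, 3, padding=1)
        self.dcn = DeformConv2d(c, c, 3, padding=1)
        # Simplified channel attention as a stand-in for Triple Attention.
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(c, c, 1), nn.Sigmoid())
        self.conv1 = nn.Conv2d(c, c, 1)

    def forward(self, x):
        y = self.conv3(x)
        y = self.dcn(y, self.offset(y), torch.sigmoid(self.mask(y)))
        y = y * self.attn(y)          # reweight features
        return x + self.conv1(y)      # skip connection preserves the input
```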
§.§ Dynamic Scale-Aware Fusion Module
The DSAF module (Dynamic Scale-Aware Fusion) is one of the core components in the RT-DSAFDet model, which is designed to significantly improve the model's detection capability in complex road damage scenarios by dynamically fusing multi-scale features. Fig. <ref> shows the internal processing flow of the module. This module is especially critical for dealing with road damage of different scales, shapes, and background complexities, as it ensures the accuracy of feature extraction and the completeness of details while maintaining high efficiency.
Specifically, the DSAF module first processes the input feature map through a 1×1 convolutional layer, adjusting the number of channels to unify the dimensionality of the input, while effectively reducing the computational complexity and ensuring the efficient use of computational resources. This operation also preserves the key information in the feature map, providing a good foundation for subsequent processing. Next, the processed feature map is partitioned into multiple sub-feature maps. This partitioning allows the module to process features independently at a more detailed scale, which improves the flexibility and accuracy of the model in processing complex scenes. Each sub-feature map is then recursively processed multiple times through the Flexible Attention (FA) module, which plays a crucial role in this process. It is able to adaptively adjust the feature maps at different scales and orientations, resulting in richer and more precise feature representations, especially when facing road damage with complex shapes and changing backgrounds. Eventually, these sub-feature maps, which have been processed several times, are reintegrated through the concatenation operation (Concat) to form a fused feature map that integrates the detailed information of each sub-feature map. This fused feature map not only retains all the important multi-scale information, but also enhances the overall representation of the features, enabling the model to better identify and localize road damage in subsequent detection tasks.
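Reusing the FlexibleAttention sketch above, the DSAF flow (1×1 reduction, split, repeated FA refinement, concatenation) might look as follows; the number of splits, the FA repetitions, and the final 1×1 fusion convolution are assumptions.

```python
import torch
import torch.nn as nn

class DSAF(nn.Module):
    """DSAF sketch: 1x1 conv, chunk the channels, refine each chunk with
    stacked FA blocks, then concatenate and fuse."""
    def __init__(self, c_in, c_out, splits=2, repeats=2):
        super().__init__()
        self.reduce = nn.Conv2d(c_in, c_out, 1)
        cs = c_out // splits
        self.branches = nn.ModuleList(
            nn.Sequential(*[FlexibleAttention(cs) for _ in range(repeats)])
            for _ in range(splits))          # FlexibleAttention: sketch above
        self.fuse = nn.Conv2d(c_out, c_out, 1)

    def forward(self, x):
        chunks = self.reduce(x).chunk(len(self.branches), dim=1)
        out = [branch(c) for branch, c in zip(self.branches, chunks)]
        return self.fuse(torch.cat(out, dim=1))
```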
§.§ Spatial Downsampling Module
The Spatial Downsampling (SD) module is an important component of the RT-DSAFDet model for efficient spatial downsampling. Its main function is to significantly reduce the size of the feature map through the downsampling operation, thereby reducing the computational complexity, while ensuring that key information is retained to guarantee the performance of the model in subsequent processing.
First, the SD module receives the input feature map and unfolds it into small local regions through the "Unfold" operation. This unfolding divides the feature map into smaller units, allowing the downsampling operation to flexibly handle features at different scales. Subsequently, through the "View" operation, the module rearranges these small units into shapes suitable for downsampling, which helps it preserve key local information when shrinking the feature map. Next, the SD module samples the feature map with two strategies, Max Pool and Avg Pool. Max pooling selects the maximum value in each local region and focuses on retaining the most salient features, ensuring that the model does not lose critical high-response features during downsampling; average pooling computes the mean of each local region, which helps retain the overall information and smooths the feature map. The pooled feature maps are then recombined through the "+" operation to form a feature map that incorporates multi-scale information. In the end, the feature map output by the SD module not only requires significantly less computation but also retains the key information of the input, providing an efficient and informative input for the subsequent detection module.
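Read literally, the Unfold/View/pooling sequence can be sketched in a few lines of PyTorch; the 2×2 patch size (stride-2 downsampling) and even input dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialDownsample(nn.Module):
    """Sketch of the SD module: unfold into 2x2 patches, then sum the max-pooled
    and average-pooled responses of each patch."""
    def forward(self, x):
        b, c, h, w = x.shape  # h, w assumed even
        # "Unfold": expose each 2x2 local region; "View": (B, C, 4, H/2 * W/2)
        patches = F.unfold(x, kernel_size=2, stride=2).view(b, c, 4, -1)
        pooled = patches.max(dim=2).values + patches.mean(dim=2)  # Max Pool + Avg Pool
        return pooled.view(b, c, h // 2, w // 2)

# smoke test: (1, 64, 40, 40) -> (1, 64, 20, 20)
print(SpatialDownsample()(torch.randn(1, 64, 40, 40)).shape)
```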
The operation of DPConv can be explained by the following formula:
1. Depthwise Convolution: Convolution is performed on each input channel separately.
Y^(k)(i,j) = ∑_m=1^M_k∑_n=1^N_kX^(k)(i+m,j+n)· K^(k)(m,n)
𝐗^(k) is the feature map of the k-th input channel, 𝐊^(k) is the convolution kernel corresponding to the k-th channel, and 𝐘^(k) is the output feature map after the depthwise convolution.
2. Pointwise Convolution: a 1×1 convolution is applied to the depthwise output to integrate information across all channels.
Z(i,j) = ∑_k=1^K Y^(k)(i,j) · W^(k)
K is the number of input channels, 𝐖^(k) is the weight of the pointwise convolution, and Z(i,j) is the output feature map after the pointwise convolution.
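These two formulas describe a standard depthwise separable convolution, which might be sketched as follows; the 3×3 kernel size is an assumed value, since the text does not fix it.

```python
import torch
import torch.nn as nn

class DPConv(nn.Module):
    """Depthwise separable convolution matching the two formulas above:
    a per-channel (depthwise) convolution followed by a 1x1 (pointwise) mix."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# smoke test: channels 64 -> 128, spatial size preserved
print(DPConv(64, 128)(torch.randn(1, 64, 40, 40)).shape)
```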
§ EXPERIMENTAL DETAILS
In this section, a brief overview of the experimental setup and related resources is presented: the experimental datasets, the experimental settings, and the evaluation metrics, in turn.
§.§ Datasets
1) UAV-PDD2023 dataset: The UAV-PDD2023 dataset <cit.> is a benchmark dataset specifically designed for road damage detection using unmanned aerial vehicles (UAVs). The dataset provides researchers and practitioners in the fields of computer vision and civil engineering with an important resource that can help in developing and evaluating machine learning models for detecting and categorizing various road damages.
2) MS COCO2017 (Microsoft Common Objects in Context 2017): The MS COCO2017 dataset <cit.> is one of the most widely used benchmark datasets in the field of computer vision, and is mainly used for the tasks of object detection, image segmentation, keypoint detection and image caption generation. The dataset is published by Microsoft to advance research and development in the field of vision, especially in object recognition and understanding in real-world scenarios.
§.§ Experimental Setup
The experiments were executed on Windows 11 with an NVIDIA GeForce RTX 4090 GPU. The deep learning framework was PyTorch 2.0.1 with CUDA 11.8, the compiler was Jupyter Notebook, and Python 3.8 was the programming language; all algorithms used in the comparative analyses were run with identical settings in the same computational environment. The image size was normalized to 640 × 640 × 3, the batch size was 8, the optimizer was SGD, the learning rate was set to 0.001, and the number of training epochs was 300.
§.§ Evaluation Metrics
In this study, four key metrics, namely precision, recall, mAP50, and mAP50-95 <cit.>, were used to evaluate the performance of the detection model.
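For completeness, the first two follow their standard definitions,

Precision = TP/(TP + FP), Recall = TP/(TP + FN)

where TP, FP, and FN count true positives, false positives, and false negatives. AP is the area under the precision-recall curve of a class; mAP50 averages AP over all classes at an IoU threshold of 0.5, while mAP50-95 averages AP over IoU thresholds from 0.50 to 0.95 in steps of 0.05.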
§ EXPERIMENTAL RESULTS AND DISCUSSION
In order to validate the superior performance of the RT-DSAFDet object detection model proposed in this paper, a series of experiments is conducted on the above datasets and evaluated with the metrics described above.
Firstly, this section introduces the current mainstream object detection models and conducts comparison experiments with the model RT-DSAFDet proposed in this paper to prove the superiority of the proposed model. Then, the results of the model proposed in this paper are evaluated, including the analysis of UAV-PDD2023 dataset comparison experimental results, UAV-PDD2023 dataset comparison experimental model recognition results. Finally, the validity of the module as well as the structure designed in this paper is verified by ablation experiments.
§.§ Comparative experiments
In order to validate the performance of the proposed model, we compared RT-DSAFDet with the YOLOv5 <cit.>, YOLOv8 <cit.>, YOLOv9 <cit.>, YOLOv10 <cit.>, and RT-DETR <cit.> models on the UAV-PDD2023 dataset. These experiments demonstrate the superior performance of the model: its mAP50 is 12.6%, 5.8%, 9.5%, 11.1%, and 8.6% higher than that of YOLOv5-m, YOLOv8-m, YOLOv9-t, YOLOv10-m, and RT-DETR-l x0.5, respectively.
1) Analysis of the results of the comparison experiments on the UAV-PDD2023 dataset: As shown in Table <ref>, RT-DSAFDet has a precision (P) of 68.7% and a recall (R) of 50.5%, the highest among all compared models.
The precision metric reflects the accuracy of the model when a prediction is a positive sample, while recall reflects the model's ability to capture all true positive samples. RT-DSAFDet's lead in both metrics demonstrates that, in complex road damage scenarios, it not only identifies damage accurately but also captures as much of the damage present as possible. This is particularly important for road maintenance and safety monitoring, ensuring that potential problems are fully detected. RT-DSAFDet achieves 54.2% on the mAP50 metric, meaning that its average detection accuracy is high and significantly outperforms that of other models under the looser IoU (intersection over union) threshold. For example, compared to YOLOv5-n (32.3%) and YOLOv10-n (31.1%), RT-DSAFDet's mAP50 is 21.9 and 23.1 points higher, respectively. More notably, RT-DSAFDet also achieves 27.9% on the more stringent mAP50-95 metric, indicating that it maintains high detection performance under different IoU thresholds and demonstrating robustness and adaptability to various complex scenarios. This is especially important for road damage detection, as different types of damage may exhibit different characteristics under different IoU thresholds. In terms of computational efficiency, RT-DSAFDet also performs well: it has only 1.8M parameters, 4.6G FLOPs, and a model size of only 4.0MB. These values indicate that RT-DSAFDet significantly reduces computational resource consumption while maintaining high accuracy. In contrast, YOLOv8-m has 25.8M parameters, 78.7G FLOPs, and a 52.1MB model size, all much higher than RT-DSAFDet, indicating that our model is more lightweight and better suited to resource-constrained environments such as embedded systems or mobile devices. This efficient design means RT-DSAFDet not only performs well in laboratory environments but can also be deployed efficiently in real applications.
2) Analysis of the recognition results of the comparative models on the UAV-PDD2023 dataset: As shown in Fig. <ref>, YOLOv5-n, YOLOv5-s, and YOLOv5-m show some differences in the recognition process. YOLOv5-n is relatively lightweight and thus lacks detection accuracy; minor damage (e.g., small cracks and minor pits) is prone to missed or false detections. YOLOv5-s and YOLOv5-m perform better, correctly detecting more features while reducing the number of false detections. However, these models still fall short in complex scenarios, for example showing label confusion in the presence of multiple road markings or interference from mixed objects. The YOLOv8 series improves on YOLOv5 to a certain extent: YOLOv8-n and YOLOv8-s show gains in detection accuracy, especially in distinguishing different damage types (e.g., longitudinal versus transverse cracks) more accurately. However, YOLOv8-n and YOLOv8-s still miss detections in some complex scenarios, while YOLOv8-m offers the best detection stability and accuracy of the series, accurately recognizing most road damage, though a few misdetections may remain for very small damage or complex backgrounds. The YOLOv9-t model improves detection accuracy and identifies various types of road damage well, maintaining a high detection rate even for low-contrast damage. YOLOv10-n and YOLOv10-s of the YOLOv10 series adapt well in some test scenarios, performing more stably in dense multi-object scenes, but their overall precision and recall remain slightly lower than those of YOLOv9-t. The RT-DETR-l x0.5 model improves somewhat over the YOLO series, with stronger detection in complex backgrounds, which can be attributed to its stronger feature extraction capability and multi-scale feature fusion. Nevertheless, for very small or low-contrast damage, RT-DETR-l x0.5 still misses a small number of detections. The RT-DSAFDet model performs well in all test scenarios. Compared with the other models, it detects all types of road damage more accurately, including small cracks and minor road breaks. Thanks to its multi-scale feature fusion and adaptive attention mechanism, RT-DSAFDet not only maintains high detection accuracy in complex backgrounds but also effectively avoids missed and false detections; its performance is especially outstanding in dense multi-object and complex scenes.
§.§ Generic Object Detection Experiments
Experimental results on the MS COCO2017 dataset show that the RT-DSAFDet model performs well on several key performance metrics; in particular, it achieves detection accuracy comparable to that of YOLOv9-t while maintaining low computational resource consumption. Specifically, RT-DSAFDet achieves 53.6% mAP50 and 38.3% mAP50-95 on the validation set, both on par with YOLOv9-t, showing the model's detection stability and accuracy under different IoU thresholds.
In addition, the computational efficiency of RT-DSAFDet is significantly better than other models. Its number of parameters is only 1.8M, FLOPs is 4.6G, and model size is 4.0MB, all of which indicate that RT-DSAFDet achieves a high degree of compactness and efficiency in its design. Despite the relatively small model size, RT-DSAFDet is still able to lead in detection accuracy, making it ideal for applications in resource-constrained environments such as mobile devices or embedded systems.
§.§ Ablation Experiments
In this ablation experiment, YOLOv8-n was used as the benchmark model, and the performance of the model was analyzed in detail by introducing the DSAF and SD modules.
The benchmark model has a precision (P) of 60.9%, a recall (R) of 42.2%, a mAP50 of 43.6%, and a mAP50-95 of 19.5% with no additional modules. Although this benchmark model already has some detection capability, it shows clear limitations when handling complex multi-scale features and complex background scenes, resulting in less-than-ideal precision and recall. With the introduction of the DSAF (Dynamic Scale-Aware Fusion) module, performance improves significantly: precision increases to 66.3%, recall improves to 54.9%, mAP50 rises to 58.3%, and mAP50-95 reaches 30.0%. Through dynamic fusion of multi-scale features, the DSAF module effectively strengthens the model's ability to capture and fuse information at different scales in complex scenes, significantly improving detection accuracy and robustness. When only the SD module is introduced, the model also improves on several metrics: although precision decreases slightly to 59.6%, recall improves to 46.7%, and mAP50 and mAP50-95 improve to 51.0% and 25.0%, respectively. The SD module reduces computational complexity through efficient spatial downsampling and optimizes large-scale feature processing while retaining critical information. When both the DSAF and SD modules are introduced, the model achieves its best overall results: precision improves to 68.7%, recall reaches 50.5%, mAP50 reaches 54.2%, and mAP50-95 reaches 27.9%. In addition, the number of parameters falls to 1.8M, FLOPs are reduced to 4.6G, and the model size shrinks to 4.0MB. These results indicate that combining the DSAF and SD modules greatly optimizes computational efficiency while improving detection performance, making the model suitable for resource-constrained environments while maintaining high precision.
§ CONCLUSION
In this paper, we propose the RT-DSAFDet model and validate its superior performance on road damage detection tasks by comparing it with several advanced object detection models. A series of experiments, in particular on the MS COCO2017 dataset, shows that RT-DSAFDet meets or exceeds current state-of-the-art models (e.g., YOLOv9-t) on key metrics such as mAP50 and mAP50-95, while significantly reducing the number of parameters and the computational complexity of the model. Although the proposed RT-DSAFDet model demonstrates excellent performance on road damage detection tasks, it still has some shortcomings. First, although the model balances accuracy and efficiency, missed or false detections remain possible in extremely complex scenarios, mainly because the current model is still limited when dealing with extremely small-scale features or extremely complex backgrounds. In addition, while RT-DSAFDet improves detection performance, it may need further optimization in scenarios with very high real-time requirements (e.g., real-time detection in video streams) to ensure stable performance at higher frame rates. Our future work will design new feature extraction and fusion techniques, especially for detecting extremely small-scale damage, to enhance the model's perception of features at different scales.
|
http://arxiv.org/abs/2409.02497v1 | 20240904074642 | A Learnable Color Correction Matrix for RAW Reconstruction | [
"Anqi Liu",
"Shiyi Mu",
"Shugong Xu"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
A Learnable Color Correction Matrix for RAW Reconstruction
Anqi Liu, Shiyi Mu, Shugong Xu
September 9, 2024
====================================================
§ ABSTRACT
Autonomous driving algorithms usually employ sRGB images as model input due to their compatibility with the human visual system. However, visually pleasing sRGB images are possibly sub-optimal for downstream tasks when compared to RAW images. The availability of RAW images is constrained by the difficulties in collecting real-world driving data and the associated challenges of annotation. To address this limitation and support research in RAW-domain driving perception, we design a novel and ultra-lightweight RAW reconstruction method. The proposed model introduces a learnable color correction matrix (CCM), which uses only a single convolutional layer to approximate the complex inverse image signal processor (ISP). Experimental results demonstrate that simulated RAW (simRAW) images generated by our method provide performance improvements equivalent to those produced by more complex inverse ISP methods when pretraining RAW-domain object detectors, which highlights the effectiveness and practicality of our approach.
§ INTRODUCTION
The construction of datasets is crucial for autonomous driving algorithms, as a large volume of data is needed to train models for recognizing and understanding road environments. Existing widely used real-world autonomous driving datasets, such as <cit.>, only provide sRGB images that are derived from RAW images through a camera image signal processor (ISP). Unlike sRGB images, RAW images offer physically meaningful and interpretable information due to their linear relationship between image intensity and the radiant energy incident on the camera sensor. This characteristic of RAW images has been effectively utilized in low-level tasks such as denoising <cit.>, deblurring <cit.>, HDR <cit.>, and super-resolution <cit.>. Additionally, RAW images have demonstrated advantages in high-level tasks such as object detection <cit.> and segmentation <cit.>.
Due to the suitability with the human visual system and the abundance and diversity of sRGB images, most autonomous driving algorithms use sRGB images as input. However, sRGB images may not always be the most suitable data type for computer learning compared to RAW images. As demonstrated by Tesla <cit.>, using 12-bit RAW photon count images as input to the network bypasses the complex process of manually tuning the ISP and improves algorithm processing speed. Existing research on RAW-domain object detection <cit.> shows that using RAW images as model input can lead to better performance.
However, the availability of RAW images is limited due to the constraints of data collection and the high cost of RAW image transmission, storage, and processing. Additionally, annotating RAW images is expensive and burdensome. As a result, only a few RAW datasets <cit.> of real-world driving scenarios are available. One solution to expand the availability of RAW images is to reconstruct them from existing sRGB images using inverse ISP algorithms. Traditional methods <cit.> gradually convert sRGB images to RAW images by simulating ISP system modules. Metadata-based methods <cit.> encode necessary metadata from RAW images into the inverse ISP model to achieve high-accuracy reconstruction. Learning-based methods <cit.> aim to approximate the mapping from sRGB images to RAW images using neural networks in an end-to-end pipeline. Most of these aforementioned methods <cit.> require paired sRGB and RAW images for training, which limits their application to converting existing sRGB datasets captured by unknown cameras into RAW format. To address this limitation, <cit.> propose an unpaired RAW reconstruction pipeline based on CycleGAN <cit.>, which can convert sRGB images to RAW images for any camera style, making it ideal for performing RAW reconstruction in the absence of sRGB and RAW image pairs.
In practical industrial applications, enterprises have accumulated extensive sRGB images and high-quality labels through long-term real-world road tests and simulation trials, which are used for iterative optimization of autonomous driving algorithms. These sRGB images are invaluable for RAW-domain driving perception tasks, as they can be converted into RAW format using inverse ISP algorithms. Additionally, the labels associated with these images can be directly reused for subsequent tasks, eliminating the need for costly relabeling. However, converting large volumes of sRGB images into RAW format is time-consuming and computationally expensive. To advance RAW-domain research, there is a pressing need for a time-efficient RAW reconstruction method with high reconstruction quality. To address this challenge, we propose a novel learning-based method for RAW reconstruction. Our proposed model is exceptionally simple and lightweight. Specifically, the complex inverse ISP system is replaced with a simple and learnable color correction matrix, implemented using just a single convolutional layer. Our contributions are summarized as follows:
* We propose a novel inverse ISP method for RAW reconstruction. Our proposed model consists of only one convolution layer to learn the mapping from sRGB images to RAW images.
* In the absence of real-world driving scene RAW datasets, our proposed model can rapidly expand the quantity of available RAW images, making it highly practical for advancing research in RAW-domain autonomous driving algorithms.
* Experimental results show that when used to pretrain a RAW-domain object detector, the simulated RAW (simRAW) images generated by our learnable color correction matrix achieve equivalent performance gains comparable to those generated by more complex inverse ISP methods. This demonstrates the feasibility and effectiveness of our proposed approach.
§ RELATED WORKS
RAW datasets of driving scenarios. To our knowledge, only a few RAW datasets are available as shown in Table <ref>. The PASCALRAW dataset <cit.> is collected using a Nikon D3200 DSLR camera and contains 4259 high-resolution 12-bit RAW images of daytime scenes. The ROD dataset <cit.> is collected using the Sony IMX490 sensor and contains 25k high-resolution 24-bit RAW images of both daytime and nighttime scenarios. The multiRAW dataset <cit.> is collected using five different cameras and contains 7469 high-bit RAW images from various times and locations (rural, tunnel, and urban areas), with corresponding sRGB images converted using the respective camera ISP are provided.
RAW Reconstruction. Traditional inverse ISP methods <cit.> typically involve non-learnable operations based on camera parameters, such as inverse tone mapping, inverse gamma transformation, inverse color correction, inverse white balance and mosaic. This paradigm requires manually setting the true internal camera parameters which are often proprietary and not publicly available. Additionally, they do not account for complex nonlinear ISP operations, leading to inaccurate reconstruction of sRGB and RAW images. Metadata-based methods <cit.> incorporate necessary metadata of RAW images in the network to guide RAW reconstruction process. Studies such as <cit.> focus on learning-based methods, where ISP operations are learned end-to-end by neural networks. For instance, CycleISP <cit.> uses cycle consistency to learn both the forward RAW-to-sRGB and reverse sRGB-to-RAW translations. InvISP <cit.> proposes an flow-based invertible ISP architecture using a single invertible neural network for both forward and inverse translations. ParamISP <cit.> presents a hybrid model that combines model-based and data-driven approaches, leveraging standard ISP operations in a learnable and interpretable manner. In this approach, camera parameters from EXIF data are converted into feature vectors to control the ISP network. Additionally, <cit.> introduces an unpaired RAW-to-sRGB ISP and sRGB-to-RAW inverse ISP based on CycleGAN <cit.>, which eliminates the need for paired data and is more accessible. While existing methods achieve high-quality reconstruction, there remains a need for a more concise and equally accurate inverse ISP method to enable high-speed RAW reconstruction for practical applications, as discussed in Sec <ref>.
RAW-domain Object Detection. Existing research on RAW-domain object detection <cit.> indicates that using RAW images as model input can yield better performance compared to sRGB images. For instance, <cit.> proposes a RAW adapter that is image-adaptive and jointly optimized with downstream detectors. <cit.> argues that ISPs should be optimized for specific tasks, suggesting that Bayer pattern RAW images can be fed directly into detectors after a simple module such as Yeo-Johnson transformation. <cit.> indicates that detector performance can be improved by using RAW images with gamma correction. In these approaches, an adjustment module is usually employed to modify the data distribution of RAW images and enhance feature visibility.
§ METHOD
To achieve high-speed RAW reconstruction, we propose an innovative and ultra-lightweight model called the Learnable Color Correction Matrix (LCCM). Unlike complex inverse ISP models <cit.>, LCCM utilizes only a single convolutional layer to approximate the mapping from sRGB images to RAW images. The parameters of the color correction matrix are learnable during training. Inspired by knowledge distillation <cit.>, we use the unpaired RAW reconstruction method Unpaired-CycleR2R <cit.> as the teacher model, while the LCCM serves as the student model. The main framework of our RAW reconstruction method is illustrated in Figure <ref>.
§.§ Unpaired RAW Reconstruction Teacher Model
Modern ISP systems in cameras typically consist of five major steps to convert RAW input into the corresponding sRGB output, as illustrated in Figure <ref> (a): demosaic f_dm, auto white balance f_wb, brightness adjustment f_ba, color correction f_cc, and gamma correction f_gc. The overall conversion performed by the camera ISP system can be defined as a composite function f_ISP composed of multiple invertible and tractable functions as follows <cit.>:
f_ISP=f_gc∘ f_cc∘ f_ba∘ f_wb∘ f_dm
y=f_ISP(x)
where x denotes a RAW image, y denotes an sRGB image, and ∘ denotes the function composition operation. The inverse ISP function g_InvISP that mirrors the ISP function f_ISP, can be expressed as <cit.>:
g_InvISP=g_d m∘ g_w b∘ g_b a∘ g_c c∘ g_g c
x=g_InvISP(y)
where g = f^-1 is assumed for simplicity.
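The composition order is easy to get wrong in code, so the following self-contained Python sketch pins it down with toy scalar stand-ins for the five stages; the gain and gamma values are arbitrary assumptions, not real camera parameters.

```python
from functools import reduce

def compose(*fs):
    # compose(f, g)(x) == f(g(x)): the rightmost function applies first,
    # matching the operator order in the equations above.
    return reduce(lambda f, g: (lambda x: f(g(x))), fs)

# Hypothetical per-stage functions acting on a scalar "image" value.
f_dm = lambda x: x               # demosaic (identity placeholder)
f_wb = lambda x: x * 1.8         # white-balance gain (assumed value)
f_ba = lambda x: x * 1.1         # brightness adjustment (assumed value)
f_cc = lambda x: x               # color correction (identity placeholder)
f_gc = lambda x: x ** (1 / 2.2)  # gamma

g_gc = lambda y: y ** 2.2        # inverse gamma, and so on for the other stages
g_cc, g_ba, g_wb, g_dm = (lambda y: y), (lambda y: y / 1.1), (lambda y: y / 1.8), (lambda y: y)

f_isp = compose(f_gc, f_cc, f_ba, f_wb, f_dm)      # y = f_ISP(x)
g_inv = compose(g_dm, g_wb, g_ba, g_cc, g_gc)      # x = g_InvISP(y)
print(g_inv(f_isp(0.25)))  # ~0.25: each g = f^{-1} recovers the RAW value
```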
As shown in Figure <ref> (a), the teacher model Unpaired-CycleR2R <cit.> is designed based on a reversible, modular paradigm. It employs unpaired sRGB and RAW images to train both an sRGB-to-RAW inverse ISP for RAW reconstruction and a RAW-to-sRGB ISP. This unsupervised learning approach based on unpaired samples is highly applicable and practical for converting existing sRGB images into RAW format.
§.§ Learnable Color Correction Matrix
In a camera ISP system, a 3x3 color correction matrix is utilized for color correction and adjustment, ensuring that the colors are restored to match the human visual system. Our proposed method replaces the complex inverse ISP with a single learnable color correction matrix to learn the mapping from sRGB images to RAW images. The color correction matrix C can be defined as below:
X=Y ∙[[ C_11 C_12 C_13; C_21 C_22 C_23; C_31 C_32 C_33 ]]
where X denotes a RAW image matrix, Y denotes an sRGB image matrix, and ∙ denotes the matrix dot product. The calculation performed by the convolutional layer can be represented by the following formula:
x_i j=∑_u=1^U∑_v=1^V w_u v y_i+u-1, j+v-1
where x denotes a RAW image, y denotes an sRGB image, w denotes the convolutional kernel weights, i and j denote the pixel position, and U and V denote the size of the convolutional kernel. A convolutional layer with a kernel size of 1, a stride of 1, 3 input and output channels, and no padding contains 9 weight parameters and 3 bias parameters. We innovatively use the 9 weight parameters of this convolutional layer to simulate the 9 entries of the color correction matrix. Thus, a color correction matrix with trainable parameters is created, allowing it to learn and approximate the mapping from sRGB images to RAW images. Additionally, the bias parameters are used to fine-tune the color correction matrix. We calculate the mean squared error between the simRAW images generated by the LCCM and those generated by the teacher model Unpaired-CycleR2R <cit.>. The loss function is formulated as follows:
L_M S E=1/n∑_i=1^n(x_simRAW-S -x_simRAW-T )^2
where x_simRAW-S denotes the simRAW images generated by our LCCM, x_simRAW-T denotes the simRAW images generated by the teacher model Unpaired-CycleR2R <cit.>, and i denotes each pixel.
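Since the whole student model reduces to one convolutional layer, it can be written in a few lines. The PyTorch sketch below mirrors the description above (1×1 kernel, stride 1, 3-in/3-out channels, no padding, MSE loss against teacher outputs, Adam at the stated learning rate); random tensors stand in for real sRGB/simRAW-T pairs.

```python
import torch
import torch.nn as nn

class LCCM(nn.Module):
    """Learnable Color Correction Matrix: a single 1x1 convolution whose nine
    weights play the role of the 3x3 CCM, plus three bias terms for fine-tuning."""
    def __init__(self):
        super().__init__()
        self.ccm = nn.Conv2d(3, 3, kernel_size=1, stride=1, padding=0, bias=True)

    def forward(self, srgb):
        return self.ccm(srgb)

# Distillation-style training step (toy tensors stand in for a real image pair).
model, crit = LCCM(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
srgb = torch.rand(1, 3, 224, 224)            # sRGB input (batch size 1, as in the paper)
simraw_teacher = torch.rand(1, 3, 224, 224)  # target from Unpaired-CycleR2R
for _ in range(5):
    opt.zero_grad()
    loss = crit(model(srgb), simraw_teacher)
    loss.backward()
    opt.step()
```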
§ EXPERIMENTS
§.§ Experiment Settings
Datasets. We evaluate our proposed method on the multiRAW dataset <cit.> and the BDD100K dataset <cit.>. The BDD100K dataset <cit.> is a large-scale dataset contains 100k sRGB images collected in diverse scenes (city streets, residential areas, and highways). The multiRAW dataset <cit.> contains driving scene images covering various conditions (daytime, nighttime, tunnels and rainy weather).
Training Details for RAW reconstruction. Since the teacher model Unpaired-CycleR2R <cit.> is capable of converting sRGB images to RAW images for any camera style, we utilized this open-source model to reconstruct sRGB images from the BDD100K dataset <cit.> into simRAW-T images. This process provides us with sRGB and RAW image pairs, which are then used to train the LCCM. Subsequently, simRAW-S images are generated using the student model LCCM. Since the mapping from sRGB images to RAW images is a pixel-level task, we train the model using the original size of input sRGB images with a batch size of 1. An Adam optimizer with a learning rate of 0.001 is used for 100 training epochs. The quality of the generated simRAW images is evaluated using PSNR and SSIM.
Training Details for RAW-domain object detection. The generated simRAW images can be used to train RAW-domain models for various tasks. We focus on high-level object detection task and select YOLOv3 <cit.> due to its widespread use. Our method can be easily extended to other object detectors. The RAW images are resized to a fixed resolution of 1920×1920. Demosaic and gamma correction are applied in the RAW image processing pipeline as outlined in <cit.>. Data augmentation techniques, including random scaling, cropping, and color jitter, are employed. We use an SGD optimizer with a batch size of 4, a learning rate of 0.02, momentum of 0.9, and weight decay of 10^-4. A linear learning rate adjustment strategy is implemented during training. We train the detector for 30 epochs following <cit.>. Our experiments consist of two stages: pretraining and fine-tuning. In the pretraining stage, the RAW-domain YOLOv3 detector is trained on the simRAW-T and simRAW-S datasets, respectively. In the fine-tuning stage, the pretrained model is used to fine-tune the RAW-domain YOLOv3 detector on the multiRAW dataset <cit.>.
§.§ Main Results of RAW Reconstruction
We randomly select 1000 sRGB images from the BDD100K <cit.> training set and their 1000 RAW counterparts from the simRAW-T images as paired training samples, with 100 of each type as testing samples. As shown in Table <ref>, the proposed LCCM has significantly lower parameter and computational complexity, allowing it to complete both training and inference efficiently and contribute to the rapid expansion of RAW image datasets. In terms of PSNR, LCCM surpasses the traditional non-learning-based method UP <cit.> by 4.03 dB, achieves nearly the same value as the learning-based method InvISP <cit.>, and trails CycleISP <cit.> by only 2.08 dB.
Figure <ref> illustrates that our proposed LCCM, composed of just a single convolutional layer, can effectively achieve RAW reconstruction under varying weather and lighting conditions.
Figure <ref> presents the histograms for each color channel of the sRGB images from BDD100K <cit.>, simRAW-S sRGB and simRAW-T sRGB images. It is noteworthy that the intensity distributions of simRAW-T sRGB and simRAW-S sRGB images are generally similar, demonstrating that our LCCM can successfully simulate the mapping from sRGB images to RAW images. However, it is observed that the peak distributions of the three color channels in simRAW-S sRGB images differ from those in simRAW-T sRGB images. We attribute this to the limitation of using a single linear convolutional layer, which cannot fully approximate the nonlinear complex inverse ISP.
While it is expected that a model using only one convolutional layer for RAW reconstruction may not outperform more complex networks, it is important to note that the RAW images generated by this simple network have comparable effects when used to pretrain RAW-domain detectors, as compared to complex inverse ISP methods. This will be further discussed in Sec <ref>.
§.§ Main Results of RAW-domain Object Detection
The purpose of this study is to explore an ultra-simple and highly efficient method for RAW reconstruction, aimed at expanding RAW image datasets to support research in RAW-domain perception tasks for autonomous driving. We focus on the performance of a RAW-domain detector pretrained using simRAW images to verify the feasibility of our proposed LCCM method. The quantitative experimental results are presented in Table <ref>.
First, we randomly initialize the RAW-domain YOLOv3 model and train it from scratch. Then, we pretrain the RAW-domain YOLOv3 detector using simRAW-T and simRAW-S images, respectively, to obtain a pretrained model. This pretrained model is then used to initialize the model parameters and fine-tune the RAW-domain YOLOv3 detector on the multiRAW dataset <cit.>.
As shown in Table <ref>, detection accuracy improves consistently across different camera styles. The results indicate that fine-tuning a simRAW-pretrained YOLOv3 detector outperforms training the model from scratch. Compared to the baseline without pretraining, using LCCM-generated simRAW-S images for pretraining results in an improvement of 5.8 points (iPhone XSmax), 4.1 points (huawei P30Pro) and 3.2 points (asi 294mcpro) in metric AP.
We emphasize that the performance gains obtained by pretraining with simRAW-T images and simRAW-S images are nearly identical. The experimental results reveal the feasibility and significant potential of our proposed LCCM, which is highly efficient, substantially reducing time and computational resources for RAW reconstruction.
§.§ Ablation Studies
The mapping from sRGB images to RAW images is a pixel-level task, so each high-resolution image provides a large number of pixel samples for training the learnable color correction matrix. In this section, we conduct experiments to investigate the impact of the number of training samples on the quality of the reconstructed RAW images. Our proposed LCCM is particularly notable for its ability to train the color correction matrix with only a few dozen pairs of sRGB and RAW images. As shown in Fig. <ref>, performance saturates at approximately 100 training samples, indicating that with just a small set of sRGB and RAW image pairs from a specific camera, LCCM can effectively learn the sRGB-to-RAW mapping. These results underscore the efficiency of our proposed LCCM.
§ CONCLUSION
In this paper, we present an innovative inverse ISP method for RAW reconstruction. Our approach leverages a learnable color correction matrix to approximate the mapping from sRGB images to RAW images. The proposed model is exceptionally simple and lightweight, consisting of only a single convolutional layer. Experimental results show that when used to pretrain a RAW-domain object detector, the simRAW images generated by our proposed model achieve performance gains equivalent to those generated by more complex inverse ISP models. It demonstrates the effectiveness and feasibility of our method. We hope that our work will contribute to the rapid expansion of RAW datasets to support research in autonomous driving tasks and inspire new insights into the design of efficient inverse ISP methods.
§ ACKNOWLEDGEMENTS
This work was supported in part by the National High Quality Program under Grant TC220H07D, in part by the National Key R&D Program of China under Grant 2022YFB2902002, in part by the Innovation Program of Shanghai Municipal Science and Technology Commission under Grant 20511106603, and in part by Foshan Science and Technology Innovation Team Project under Grant FS0AAKJ919-4402-0060.
|
http://arxiv.org/abs/2409.02584v1 | 20240904100642 | BMI Prediction from Handwritten English Characters Using a Convolutional Neural Network | [
"N. T. Diba",
"N. Akter",
"S. A. H. Chowdhury",
"J. E. Giti"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
BMI Prediction from Handwritten English Characters Using a Convolutional Neural Network
N. T. Diba1, N. Akter2, S. A. H. Chowdhury3
Dept. of Electronics & Telecommunication Engineering
Rajshahi University of Engineering & Technology
Rajshahi, Bangladesh
[email protected],
[email protected], [email protected]
J. E. Giti4
Dept. of Electrical & Electronic Engineering
Rajshahi University of Engineering & Technology
Rajshahi, Bangladesh
[email protected]
September 9, 2024
===============================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
A person’s Body Mass Index, or BMI, is the most
widely used parameter for assessing their health.
BMI is a crucial predictor of potential diseases that may arise at higher body fat levels because it is correlated with body fat.
Moreover, the BMI can be used to determine the nutritional status of an individual or a community.
Although deep learning models are used in several studies to estimate BMI from face photos and other data, no previous research has established a clear connection between deep learning techniques for handwriting analysis and BMI prediction.
This article addresses this research gap with a deep learning approach to estimating BMI from handwritten characters by developing a convolutional neural network (CNN). A dataset of lowercase English handwriting samples from 48 people is successfully captured for the BMI prediction task. The proposed CNN-based approach
reports a commendable accuracy of 99.92%.
Performance comparison with other popular CNN architectures reveals that AlexNet and InceptionV3 achieve the second and third-best performance, with the accuracy of 99.69% and 99.53%, respectively.
Body Mass Index (BMI), Convolutional Neural Network (CNN), Deep Learning, English Handwritten Character (EHC), Forensic Science.
§ INTRODUCTION
The body mass index, or BMI, is calculated from a person's height and weight, and it is the simplest indicator we have for defining overweight and obesity. Many people consider BMI to be a crucial measure of health.
An increased likelihood of cardiovascular disorders, including high blood pressure and diabetes, is shown with a progressive increase in BMI. Hence, an automated system for estimating BMI from face images or other body attributes would be very convenient for continuous monitoring of BMI to ensure physical well-being.
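For reference, BMI is computed from weight and height as

BMI = weight (kg) / height (m)^2

so, for example, a person 1.70 m tall weighing 65 kg has a BMI of about 22.5.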
Besides body attributes, handwriting can also be considered as a person-specific attribute with each individual’s writing style being distinct and personal. This paper demonstrates the automatic estimation of a person's BMI from handwritten data belonging to that individual.
In addition to health monitoring, BMI estimation from handwriting can play a vital role in digital forensics for crime scene analysis.
Deep learning models serve as the backbone of the automated BMI prediction system.
Convolutional neural network (CNN), a particular kind of deep learning model inspired by the human visual system, is utilized in this work. Through convolutional (conv) layers, CNNs identify intricate patterns and characteristics of handwritten character images to estimate the BMI value of a person.
The streamlined procedure of an automated BMI prediction system is shown in Fig. <ref>.
This paper aims to further explore the ability of CNNs
to determine BMI from a handwriting dataset. To accomplish this, we use a new
dataset called the English Handwritten Character (EHC)
dataset[ The dataset is available at
<https://rb.gy/wdxynb>.]
and design a benchmark CNN model[
The codes will be publicly accessible at
<https://rb.gy/xub8us>.] for this dataset.
Our contributions are summarized as follows:
* Images, along with labels such as student name, height, weight, and BMI information, are carefully collected to ensure the uniformity and quality of the dataset.
* Designing a custom CNN to estimate BMI values correctly from the images of the EHC dataset.
* Conducting ablation study of the proposed CNN to provide a better understanding of its learning process.
* Evaluating the results of the proposed
model based on performance metrics in comparison to the prior models.
To the best of the authors' knowledge, this is the first work that utilizes handwriting for BMI prediction. Despite being the first work on BMI estimation from a handwriting dataset, our CNN model outperforms state-of-the-art (SOTA) models in terms of prediction accuracy. The rest of the paper is organized as follows. Section II discusses existing related work and the research gap. In Section III, we describe the details of the EHC dataset collection. Section IV provides the methodology and the design procedure of the proposed CNN. Performance evaluation of our model along with a comparison with the existing models are shown in Section V. Finally, Section VI includes the concluding remarks as well as future works.
§ RELATED WORK
CNNs have gained a lot of interest in recent years for classifying a variety of handwriting datasets for handwritten character recognition (HCR) <cit.>, writer <cit.>, and gender identification <cit.>. We briefly review research works on each task. There are some well-known handwritten text datasets named IAM <cit.>, MNIST <cit.>, EMNIST <cit.>, and Kaggle alphabet dataset<cit.> to accomplish these tasks. All of these datasets are English handwriting datasets.
HCR:
Simard et al. <cit.> introduce a basic CNN model which afterward significantly improves HCR accuracy, offering SOTA performance <cit.>. To identify handwritten English characters, Narayan and Muthalagu <cit.> develop another CNN achieving a high real-time recognition accuracy of 97.59%.
The model implemented by Saqib et al. <cit.> outperformed prior SOTA models on the unbalanced EMNIST dataset with a peak accuracy of 99.563% while maintaining its lightweight design.
In contrast, SpinalNet <cit.> splits each layer into three splits to achieve a competitive performance compared to WaveMix <cit.>, the current SOTA HCR model on the EMNIST dataset. WaveMix blocks utilize multi-level two-dimensional discrete wavelet transform to provide scale-invariance, shift invariance, and sparseness of edges.
Writer identification: To tackle the challenge of the writer identification task, Rehman et al. <cit.> used a deep transfer learning model called AlexNet <cit.> to identify 1017 writers on the QUWI dataset <cit.> by extracting unique features from handwritten text that included pen pressure, different inks, and other information.
On the contrary, using words as features, Fiel and Sablatnig <cit.> trained and assessed a CNN model for writer verification, and depending on the particular words, they were able to get accuracy rates ranging from 77% to 96%. Similarly, WriterINet <cit.>, a two-stream CNN-based SOTA writer identification approach on various datasets utilizes word images to generate discriminative and deep features.
In a separate study, Dlamini and Zyl <cit.> employ Siamese CNN networks for verification of authors using the NIST-SD19 dataset <cit.>, concentrating on single letters rather than whole words and, achieve average accuracy of 80% on unseen test data. A recurrent neural network (RNN) based model named global-context residual RNN is presented by He and Schomaker <cit.> to effectively identify a writer using sequential handwritten data. In contrast, Obaidullah et al. <cit.> propose a stack ensemble network where the truncated transfer learning models are stacked along with a shallow CNN to perform ensemble learning for signature-based verification of a writer's identity. A combination of local and global features based on VLAD descriptor <cit.> is investigated by Liang et al. <cit.> in a deep learning framework for author identification.
Gender identification: Handwriting has been used as an indication to identify gender by treating the problem as binary classification. The majority of existing approaches have relied on a few common datasets, such as IAM on-line <cit.>, QUWI <cit.>, KHATT <cit.>, and MSHD <cit.>. Using handwriting data, Illouz et al. <cit.> created a CNN-based model for gender categorization. The model has four conv layers, a dense layer, and a softmax output layer. The classifier was evaluated using the HEBIU dataset, which includes 810 handwritten samples in Hebrew and English from 405 participants. The model's maximum accuracy of 82.89% is obtained when Hebrew samples were used for training and English samples for testing. Another CNN-based model for gender and handedness classification was presented by Morera et al. <cit.>. Two publicly accessible handwriting datasets (IAM and KHATT) were used to test this model.
The model achieved its highest accuracy, 80.72%, for gender classification on the IAM dataset.
From the above discussion, it is observed that no prior handwriting-related work has focused on estimating a person's BMI from handwritten data. However, several deep learning-based solutions for BMI prediction from body attributes exist in the literature.
Wen and Guo <cit.> carry out the first study to show how geometrical parameters, such as cheekbone-to-jaw width, width to upper facial height ratio, perimeter-to-area ratio, and eye size, can be used to automatically estimate BMI from facial photos. The MORPH-II dataset <cit.>, which is not publicly available, served as the basis for this work. However, VisualBMI <cit.>, Bollywood <cit.>, and VIP attributes <cit.> are the three major publicly accessible BMI-annotated facial picture datasets that are collected from social media.
For BMI inference over these three public datasets, Siddiqui et al.<cit.> apply VGG-19 <cit.>, ResNet-50 <cit.>, MobileNet-V2 <cit.>, and LightCNN-29 <cit.>. Among these models, ResNet-50 achieves a minimal mean absolute error of 1.04.
On the other hand, Yousaf et al. <cit.> apply region-aware global average pooling (GAP) on the extracted features obtained from different facial regions using their designed models, FaceNet and VGGFace. The use of the region-aware GAP enhances the performance by 22.4%, 3.3%, and 63.09% on VIP-attribute, VisualBMI, and Bollywood datasets, respectively, compared to using a standard GAP layer.
A new dataset called the Reddit-HWBMI dataset is introduced by Haritosh et al. <cit.> where Xception <cit.> is claimed as the best model for that dataset. Similarly, Jiang et al.<cit.> apply deep learning models to improve BMI prediction accuracy on the FIW-BMI and Morph-II datasets.
According to the above-mentioned investigations on BMI prediction and the authors' understanding, none of the previous works has estimated BMI from handwritten characters using a CNN. To address this research gap, we aim to estimate BMI values from individuals' handwritten English characters.
This objective is achieved by collecting the EHC dataset and
designing a SOTA model for the dataset by treating the BMI prediction problem as the classification problem.
§ DATASET COLLECTION AND DESCRIPTION
In this section, we provide details about the collection process and characteristics of the EHC dataset obtained from the students of RUET, a prestigious engineering University.
§.§ Dataset Collection
The EHC dataset was collected from 48 students.
The students were provided writing supplies, including A4-sized paper and pens, and asked to write the lowercase English alphabet inside the paper's illustrated box. For further use of the dataset, each writer's personal information, including the roll number, age, height, weight, and BMI, was written at the top of the A4-sized paper. To capture the natural variation in handwritten data, the students were requested to write each character three times in their own handwriting style, without any guidelines. This was done because writing samples of the same person at different times vary in the character's writing style, shape, size, and placement. In total, 48 A4-sized paper forms were collected, resulting in a dataset comprising 26×3 characters from 48 individuals.
Fig. <ref> illustrates some samples from this diverse handwritten English dataset.
§.§ Dataset Description
The collected dataset provides 3744 raw photos with a
variety of lowercase English letters.
These characters were written in a variety of sizes, and
styles, which represented the writers' varied writing
habits. The EHC dataset has undergone a comprehensive analysis of the handwritten characters that are being used to design the BMI prediction system.
This new dataset contains both challenging samples, where the characters might be written quickly or with slight distortions, as well as easier-to-read samples with clear and well-formed characters. Some students may have written certain
characters faster than others, which could introduce variations in the shape and quality of those characters. This aspect of the dataset reflects real-world scenarios where handwriting can vary due to the speed at which individuals write. As a result, this dataset offers the chance to evaluate the model’s accuracy in estimating BMI based on the quickly and slowly written lowercase English characters.
§ METHODOLOGY
CNN has revolutionized the field of computer vision and
has been widely used in various applications, including image
classification, object detection, and, in our case, BMI prediction. CNN excels at capturing local patterns and hierarchical representations in visual data, making them particularly suitable for
tasks like BMI prediction from lowercase English characters. Fig. <ref> shows the whole pipeline of the CNN-based BMI prediction approaches.
§.§ Data Preprocessing
The pre-processing step includes all the procedures needed to create a clear character picture so that the feature extraction stage works effectively.
Data preprocessing begins by scanning and segmenting the handwritten characters that are written on the A4-sized paper.
During segmentation, the scanned image of the A4-sized paper containing a sequence of alphabets is divided into sub-images, each representing a single isolated character.
However, scanned documents often contain contaminants such as dust, spots, dots, or lines, which are classified as noise. To improve estimation results significantly, it is necessary to remove the noise from the segmented images. This stage is vital as its successful execution enhances the accuracy of BMI determination and reduces misclassification. Then, we
resized the noise-free segmented images to a consistent size of 224×224 so that all images could be transformed into a uniform format for training.
§.§ Data Augmentation
By applying several transformations to the original data, new training samples were created through data augmentation, a widely used approach in machine learning, especially when dealing with limited datasets. For our EHC dataset, which contains only 3744 original image samples, data augmentation is vital to expand its volume. We applied six types of augmentation to these samples, resulting in a total of 26,208 photos: increases in brightness, contrast, and sharpness, as well as the addition of Gaussian noise, Gaussian blur, and jitter. Fig. <ref> displays some samples from the augmented EHC dataset.
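A minimal Python/PIL sketch of these six transforms is given below; the enhancement factors, noise level, blur radius, and jitter range are illustrative assumptions rather than the values used to build the dataset, and a grayscale ("L") scan and Pillow >= 9.1 are assumed.

```python
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

def augment_six(img: Image.Image) -> dict:
    """Return one augmented copy per transform (all parameter values assumed)."""
    out = {
        "brightness": ImageEnhance.Brightness(img).enhance(1.4),
        "contrast":   ImageEnhance.Contrast(img).enhance(1.4),
        "sharpness":  ImageEnhance.Sharpness(img).enhance(2.0),
        "blur":       img.filter(ImageFilter.GaussianBlur(radius=1.5)),
    }
    # Gaussian noise added in float space, then clipped back to 8-bit range
    arr = np.asarray(img).astype(np.float32)
    noisy = np.clip(arr + np.random.normal(0, 10, arr.shape), 0, 255)
    out["gauss_noise"] = Image.fromarray(noisy.astype(np.uint8))
    # "Jitter": a small random translation of the character inside the frame
    dx, dy = np.random.randint(-5, 6, size=2)
    out["jitter"] = img.transform(img.size, Image.Transform.AFFINE,
                                  (1, 0, dx, 0, 1, dy), fillcolor=255)
    return out
```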
§.§ Proposed Model
The proposed CNN architecture is designed as the classification model to quickly and accurately determine BMI from scanned handwritten isolated English character photos. It is created by stacking layers one after the other starting with the input layer and proceeding all through intermediate layers to the output layer.
Our base model is composed of two conv layers, each followed by a max-pooling layer. ReLU activation <cit.> is applied after each conv layer and the hidden layer, introducing non-linearity and enhancing the model’s ability to learn complex representations from higher level features. Fig. <ref> shows the interaction of input images with the proposed CNN architecture, designed to determine a writer's BMI from the handwritten character images.
§.§ Performance Metrics
In this study, the performance metrics used to assess the model are accuracy, precision, recall, and F1-score. Please note that these performance metrics reported throughout the paper are for the
testing dataset unless mentioned otherwise.
§.§ Implementation Details
Our model is trained from scratch using an optimizer with a learning rate of 0.0001 and batch size of 32. The BMI estimation system is implemented using the Python 3.7 programming language and Keras library with TensorFlow environment. The entire experiment is performed using GPU P100 on the Kaggle Cloud Platform, a data science competition platform. The training, validation, and testing datasets are divided into around 70:15:15
ratios.
The training dataset comprises 18432 photos, while the remaining 7776 images have been distributed equally into testing and validation.
§ RESULTS AND ANALYSIS
This section contains an ablation study of the proposed CNN and a comparison of our model with SOTA classification models. An ablation study provides insights into the significance of each layer’s contribution to the overall network’s effectiveness.
§.§ Ablation Study
An ablation study is conducted by varying the number of filters in the conv layers and the number of conv layers, max-pooling layers, dense layers, and dropouts of our base model. The results, summarized in Table <ref>, show that the last and fourth rows represent the base and best models, respectively. The accuracy of our base model is 80.73%, and we observed improved accuracy with slight changes to the base configuration. The best performance is achieved with a model consisting of three conv and max-pooling layers and three dense layers, each followed by 50% dropout to prevent overfitting. Our best model achieves an accuracy of 99.92% (shown in bold in Table <ref>).
The accuracy of 99.92% indicates that out of 3888 test images of the EHC dataset, BMI is predicted accurately for 3885 character images by our best model.
The remaining 3 images, where the model incorrectly estimates a BMI value either as belonging to a particular BMI class or not, contribute to the total false positive or false negative count, respectively.
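As a concrete illustration, the best configuration could be sketched in Keras as follows. The paper fixes only the counts of conv/pooling and dense layers, the ReLU activations, the 50% dropout, the 224×224 input, and the 0.0001 learning rate, so the filter widths, dense sizes, input channel count, optimizer choice (Adam), and the number of BMI classes are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 48  # hypothetical: one class per distinct BMI label in the cohort

model = keras.Sequential([
    layers.Input((224, 224, 1)),  # grayscale scans assumed
    layers.Conv2D(32, 3, activation="relu"),  layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),  layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"), layers.Dropout(0.5),
    layers.Dense(128, activation="relu"), layers.Dropout(0.5),
    layers.Dense(num_classes, activation="softmax"),  # BMI treated as classification
])
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```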
To tackle the overfitting issue, the proposed model is trained for 100 epochs with early stopping. Each epoch involves 576 steps, meaning that batches of 32 data points each are processed 576 times, totaling 18432 training samples.
The training and validation loss curves for our best model are shown in Fig. <ref>. As expected, the validation loss follows the training loss. Fig. <ref> also indicates that training stopped after approximately 49 epochs, suggesting that the training process converges around this point.
§.§ Performance Comparison with Related Works
We evaluate the performance of our model in comparison to several popular SOTA classification models. The objective of the comparison is to assess the model's ability to identify a person's BMI from handwritten characters of the EHC dataset. Based on this comparison, we discovered that our model performs better than the widely used classification models in terms of accuracy, precision, recall, and F1-score. The comparative findings are shown in Table <ref>.
Given the lack of previous research on BMI estimation from handwritten characters, the proposed CNN's simplicity and small size have been beneficial, especially in view of the small size of our dataset. Our model outperformed other popular classification networks, mostly owing to its ability to extract meaningful characteristics from a relatively small dataset.
Our model outperforms AlexNet <cit.>, InceptionV3 <cit.>, DenseNet121 <cit.>, ResNet50 <cit.>, EfficientNetb0 <cit.>, Xception <cit.>, and MobileNetV2 <cit.> models by 0.23%, 0.39%, 0.41%, 0.62%, 0.65%, 0.93% and 1.85%, respectively, in terms of relative improvement on the prediction accuracy.
§ CONCLUSION AND FUTURE WORK
This paper attempts to tackle a new task of estimating BMI from images of English handwritten characters.
Our CNN-based approach to tackle this task not only enhances the accuracy of BMI estimation but also surpasses the state-of-the-art solutions for BMI determination, setting a new benchmark in a field that has been scarcely addressed in the literature.
In the future, we intend to add new features to extend the functionality of our model. Future extensions include addressing the challenges posed by other scripts, such as Bengali, Arabic, Japanese, Chinese, Korean, and Finnish, as well as multilingual scripts, for BMI estimation. We also plan to create an Android app that takes a photo of handwriting and instantly estimates the BMI from it.
§ ACKNOWLEDGMENT
The authors would like to acknowledge the contribution of all participating students of Rajshahi University of Engineering & Technology (RUET), Bangladesh for providing their handwriting, height, weight, and BMI values.
The authors would also like to thank Rifa Tabassum Mim and Naila Noyari Islam of RUET for their help with the collection of the EHC dataset.
§ REFERENCES
li2023trocr
M. Li, T. Lv, J. Chen, L. Cui, Y. Lu, D. Florencio, C. Zhang, Z. Li, and F. Wei, “TrOCR: Transformer-based optical character recognition with pre-trained models,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 11, 2023, pp. 13 094–13 102.
khan2023empirical
T. F. Khan, W. Anwar, H. Arshad, and S. N. Abbas, “An Empirical Study on Authorship Verification for Low Resource Language using Hyper-Tuned CNN Approach,” IEEE Access, 2023.
youssef2013automated
A. E. Youssef, A. S. Ibrahim, and A. L. Abbott, “Automated gender identification for Arabic and English handwriting,” in 5th International Conference on Imaging for Crime Detection and Prevention (ICDP 2013), 2013, pp. 1–6.
marti2002iam
U.-V. Marti and H. Bunke, “The IAM-database: an English sentence database for offline handwriting recognition,” International Journal on Document Analysis and Recognition, vol. 5, pp. 39–46, 2002.
deng2012mnist
L. Deng, “The MNIST database of handwritten digit images for machine learning research [best of the web],” IEEE signal processing magazine, vol. 29, no. 6, pp. 141–142, 2012.
cohen2017emnist
G. Cohen, S. Afshar, J. Tapson, and A. Van Schaik, “EMNIST: Extending MNIST to handwritten letters,” in 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017, pp. 2921–2926.
kaggle
S. Patel. A-z handwritten alphabets in .csv format. Accessed on 26 February 2022. [Online]. Available: <https://www.kaggle.com/datasets/sachinpatel21/az-handwritten-alphabets-in-csv-format>
simard2003best
P. Simard, D. Steinkraus, and J. Platt, “Best practices for convolutional neural networks applied to visual document analysis,” in Proceedings of Seventh International Conference on Document Analysis and Recognition, 2003, pp. 958–963.
tabik2020mnist
S. Tabik, R. F. Alvear-Sandoval, M. M. Ruiz, J.-L. Sancho-Gómez, A. R. Figueiras-Vidal, and F. Herrera, “MNIST-NET10: A heterogeneous deep networks fusion based on the degree of certainty to reach 0.1% error rate. Ensembles overview and proposal,” Information Fusion, vol. 62, pp. 73–80, 2020.
narayan2021image
A. Narayan and R. Muthalagu, “Image character recognition using convolutional neural networks,” in 2021 Seventh International Conference on Bio Signals, Images, and Instrumentation (ICBSII). IEEE, 2021, pp. 1–5.
saqib2022convolutional
N. Saqib, K. F. Haque, V. P. Yanambaka, and A. Abdelgawad, “Convolutional-neural-network-based handwritten character recognition: an approach with massive multisource data,” Algorithms, vol. 15, no. 4, p. 129, 2022.
kabir2022spinalnet
H. D. Kabir, M. Abdar, A. Khosravi, S. M. J. Jalali, A. F. Atiya, S. Nahavandi, and D. Srinivasan, “SpinalNet: Deep neural network with gradual input,” IEEE Transactions on Artificial Intelligence, 2022.
jeevan2022wavemix
P. Jeevan, K. Viswanathan, A. Sethi et al., “Wavemix: A resource-efficient neural network for image analysis,” arXiv preprint arXiv:2205.14375, 2022.
rehman2019automatic
A. Rehman, S. Naz, M. I. Razzak, and I. A. Hameed, “Automatic visual features for writer identification: a deep learning approach,” IEEE access, vol. 7, pp. 17 149–17 157, 2019.
krizhevsky2012imagenet
A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Advances in neural information processing systems, vol. 25, 2012.
al2012quwi
S. Al Maadeed, W. Ayouby, A. Hassaine, and J. M. Aljaam, “QUWI: an Arabic and English handwriting dataset for offline writer identification,” in 2012 International Conference on Frontiers in Handwriting Recognition. IEEE, 2012, pp. 746–751.
fiel2015writer
S. Fiel and R. Sablatnig, “Writer identification and retrieval using a convolutional neural network,” in Computer Analysis of Images and Patterns: 16th International Conference, CAIP 2015, Valletta, Malta, September 2-4, 2015, Proceedings, Part II 16. Springer, 2015, pp. 26–37.
chahi2023writerinet
A. Chahi, Y. El Merabet, Y. Ruichek, and R. Touahni, “WriterINet: a multi-path deep CNN for offline text-independent writer identification,” International Journal on Document Analysis and Recognition (IJDAR), vol. 26, no. 2, pp. 89–107, 2023.
dlamini2019author
N. Dlamini and T. L. van Zyl, “Author identification from handwritten characters using Siamese CNN,” in 2019 International Multidisciplinary Information Technology and Engineering Conference (IMITEC). IEEE, 2019, pp. 1–6.
NIST-SD19-dataset
“NIST Handprinted Forms and Characters Database,” <https://www.nist.gov/srd/nist-special-database-19>, Accessed: 2024-06-24.
he2021gr
S. He and L. Schomaker, “GR-RNN: Global-context residual recurrent neural networks for writer identification,” Pattern Recognition, vol. 117, p. 107975, 2021.
obaidullah2022sen
S. M. Obaidullah, M. Ghosh, H. Mukherjee, K. Roy, and U. Pal, “SEN: Stack Ensemble Shallow Convolution Neural Network for Signature-based Writer Identification,” in 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022, pp. 1414–1420.
zhang2021vector
J. Zhang, Y. Cao, and Q. Wu, “Vector of locally and adaptively aggregated descriptors for image feature representation,” Pattern Recognition, vol. 116, p. 107952, 2021.
liang2023supervised
D. Liang, M. Wu, and Y. Hu, “Supervised Feature Learning for Offline Writer Identification Using VLAD and Double Power Normalization,” Computers, Materials & Continua, vol. 76, no. 1, 2023.
mahmoud2014khatt
S. A. Mahmoud, I. Ahmad, W. G. Al-Khatib, M. Alshayeb, M. T. Parvez, V. Märgner, and G. A. Fink, “KHATT: An open Arabic offline handwritten text database,” Pattern Recognition, vol. 47, no. 3, pp. 1096–1112, 2014.
djeddi2014lamis
C. Djeddi, A. Gattal, L. Souici-Meslati, I. Siddiqi, Y. Chibani, and H. El Abed, “LAMIS-MSHD: a multi-script offline handwriting database,” in 2014 14th International Conference on Frontiers in Handwriting Recognition.1em plus 0.5em minus 0.4emIEEE, 2014, pp. 93–97.
illouz2018handwriting
E. Illouz, E. David, and N. S. Netanyahu, “Handwriting-based gender classification using end-to-end deep neural networks,” in Artificial Neural Networks and Machine Learning–ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part III 27. Springer, 2018, pp. 613–621.
morera2018gender
Á. Morera, Á. Sánchez, J. F. Vélez, and A. B. Moreno, “Gender and handedness prediction from offline handwriting using convolutional neural networks,” Complexity, vol. 2018, no. 1, p. 3891624, 2018.
wen2013computational
L. Wen and G. Guo, “A computational approach to body mass index prediction from face images,” Image and Vision Computing, vol. 31, no. 5, pp. 392–400, 2013.
ricanek2006morph
K. Ricanek and T. Tesafaye, “Morph: A longitudinal image database of normal adult age-progression,” in 7th International Conference on Automatic Face and Gesture Recognition (FGR06). IEEE, 2006, pp. 341–345.
kocabey2017face
E. Kocabey, M. Camurcu, F. Ofli, Y. Aytar, J. Marin, A. Torralba, and I. Weber, “Face-to-BMI: Using computer vision to infer body mass index on social media,” in Proceedings of the International AAAI Conference on Web and Social Media, vol. 11, no. 1, 2017, pp. 572–575.
BD
A. Kumar. Bollywood dataset. [Online]. Available: <https://github.com/abhaymise/Face-to-height-weight-BMI-estimation-#Solution>
dantcheva2018show
A. Dantcheva, F. Bremond, and P. Bilinski, “Show me your face and I will tell you your height, weight and body mass index,” in 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018, pp. 3555–3560.
siddiqui2020based
H. Siddiqui, A. Rattani, D. R. Kisku, and T. Dean, “AI-based BMI inference from facial images: An application to weight monitoring,” in 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2020, pp. 1101–1105.
simonyan2014very
K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
he2016deep
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
howard2017mobilenets
A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.
wu2018light
X. Wu, R. He, Z. Sun, and T. Tan, “A light CNN for deep face representation with noisy labels,” IEEE transactions on information forensics and security, vol. 13, no. 11, pp. 2884–2896, 2018.
yousaf2021estimation
N. Yousaf, S. Hussein, and W. Sultani, “Estimation of BMI from facial images using semantic segmentation based region-aware pooling,” Computers in Biology and Medicine, vol. 133, p. 104392, 2021.
haritosh2019novel
A. Haritosh, A. Gupta, E. S. Chahal, A. Misra, and S. Chandra, “A novel method to estimate height, weight and body mass index from face images,” in 2019 Twelfth International Conference on Contemporary Computing (IC3).1em plus 0.5em minus 0.4emIEEE, 2019, pp. 1–6.
chollet2017xception
F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1251–1258.
jiang2019visual
M. Jiang, Y. Shang, and G. Guo, “On visual BMI analysis from facial images,” Image and Vision Computing, vol. 89, pp. 183–196, 2019.
Activation_Ref
A. D. Rasamoelina, F. Adjailia, and P. Sinčák, “A review of activation function for artificial neural network,” in Proceedings of the IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI), 2020, pp. 281–286.
rasamoelina2020review
A. D. Rasamoelina, F. Adjailia, and P. Sinčák, “A review of activation function for artificial neural network,” in 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI). IEEE, 2020, pp. 281–286.
szegedy2016rethinking
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2818–2826.
huang2017densely
G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700–4708.
tan2019efficientnet
M. Tan and Q. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in International Conference on Machine Learning. PMLR, 2019, pp. 6105–6114.
arXiv:2409.02884v1 [nlin.AO] (published 4 Sep 2024; categories: nlin.AO, physics.soc-ph, q-bio.PE)

Regulatory Functions from Cells to Society

Vicky Chuqiao Yang, Christopher P. Kempes, S. Redner, Geoffrey B. West, Hyejin Youn
§ ABSTRACT
Regulatory functions are essential in both socioeconomic and biological systems, from corporate managers to regulatory genes in genomes. Regulatory functions come with substantial costs but are often taken for granted. Here, we empirically examine regulatory costs across diverse systems—biological organisms (bacteria and eukaryotic genomes), human organizations (companies, federal agencies, universities), and decentralized entities (Wikipedia, cities)—using scaling analysis. We guide the empirical analysis with a conceptual model, which anticipates that the scaling of regulatory costs shifts with the system's internal interaction structure—well-mixed or modular. We find that diverse systems exhibit consistent scaling patterns—well-mixed systems exhibit superlinear scaling, while modular ones show sublinear or linear scaling. Further, we find that the socioeconomic systems containing more diverse occupational functions tend to have higher regulatory costs than expected from their size, confirming that the type of interaction also plays a role in regulatory costs. While many socioeconomic systems exhibit efficiencies of scale, regulatory costs in many social systems have grown disproportionately over time. Our finding suggests that the increasing complexity of functions may contribute to this trend. This cross-system comparison offers a framework for understanding regulatory costs and could guide future efforts to identify and mitigate regulatory inefficiencies.
§ INTRODUCTION
Regulatory functions and mechanisms are necessary and ubiquitous features across all biological and social complex adaptive systems, spanning scales from the cellular to societal level. Regardless of scale, organizational structure, or composition, these systems rely on dedicated components to regulate internal processes and mediate interactions among their constituent parts. For instance, in cells, some genes may be beneficial when expressed in isolation, but their expressed proteins can often interact adversely. Regulatory genes control the timing of gene expression, preventing these harmful interactions by ensuring that conflicting genes are not expressed simultaneously. Similarly, in companies, managers coordinate employee activities to prevent duplicated efforts and mitigate adverse interpersonal interactions. In broader societal contexts, legal professionals mediate interactions among citizens through legal processes, ensuring orderly conduct.
While necessary and ubiquitous, regulatory functions consume a significant amount of energy and resources. For example, regulatory genes in bacteria account for about 10% of the costs. In the US, 15% of workforce compensation is paid to managers <cit.>. In US universities, administrative spending is on par with instructional spending, reaching $122.3 billion in the 2014–15 school year, and has been cited as a key factor in the skyrocketing tuition of US universities <cit.>. In fact, the burden of administrative costs is a significant concern in many aspects of society, including higher education <cit.>, health care <cit.>, manufacturing <cit.>, and the transition to renewable technologies <cit.>. As a result, regulatory costs have emerged as one of the major societal challenges of the 21st century. Which aspects of regulatory function are set by fundamental requirements, and which are malleable through changes in structure, culture, or procedure? The answers to these larger questions require a better understanding of the mechanisms underlying regulatory functions across a wide range of systems.
Regulatory costs have been studied separately at the levels of organizations, societies, and biological organisms. In socioeconomic systems, the primary focus has been on best practices for management and regulation concerning profits and efficiency <cit.>, while in biology, the emphasis has been on understanding the genetic and metabolic mechanisms of regulation, such as the complex feedback loops and signaling pathways that maintain cellular homeostasis and respond to environmental changes. These biological functions and their costs appear to parallel the expenses of governance, law enforcement, and social coordination, which regulate and coordinate activities, monitor performance, and enforce rules within organizations <cit.>. As such, studies have identified several key determinants of regulatory costs, including size, measured by the number of employees <cit.>, internal structure, measured by the level of hierarchy and the size of sub-units <cit.>, and functional complexity <cit.>, often measured by the number of different tasks performed by individuals within the organization. Despite these valuable insights, a comprehensive and systematic understanding of the determinants of regulatory costs remains elusive.
In the study of biological organisms, we have a much clearer understanding of the fundamental and baseline requirements for regulatory function. A key assumption in the biological context is that the overall evolutionary process, involving vast numbers of species and timescales, leads to optimized regulatory costs. Thus, biological examples provide a case study for understanding essential or optimal regulatory functions. Furthermore, similar key determinants of size, internal structure, and complexity emerge from the biological literature. For instance, major transitions in biological architecture—such as the evolution from bacteria, which lack internal compartments and allow any gene to interact with any other gene, to eukaryotes, which have compartmentalized cells, and from single-cell to multicellular organisms—each introduce new and distinct forms of regulation <cit.>. At a more detailed level, all cellular functions, including regulatory genes, are known to scale with organism size <cit.>, <cit.>.
The identification of common key determinants—size, structure, and functional complexity—across both social and biological systems suggests the potential for a unifying and comparative perspective. Previous studies have successfully achieved this goal by using scaling analysis as a foundation for empirical analysis. By combining this with mechanistic models that explain scaling relationships, researchers have uncovered common mechanisms across a diverse range of systems, including those in physics, biology, ecology, as well as firms and cities <cit.>. These studies reveal key connections between size, function, and architecture, illustrating how fundamental principles can be applied across different types of systems.
Here, we first conceptualize what gives rise to regulatory costs across complex systems, grounded in the management of adverse interactions, by integrating the interplay of size, structure, and various kinds of costs. We compare and unify diverse systems along a spectrum from well-mixed to highly modular. At one end of the spectrum, well-mixed systems tend to be the simplest self-organized and agglomerated ones. At the other end, those with modular structures tend to be centrally planned or to have gone through several transitions in architecture. For example, bacteria are defined by a cellular environment where most expressed proteins in the liquid cytoplasm can diffuse and interact with any other expressed protein, posing unique regulatory challenges for governing protein co-expression. The growth in genome size for bacteria is the result of an emergent process of evolution. In contrast, companies are typically defined by hierarchical and modular structures that regulate interactions among individuals and are highly planned as companies grow in size. This pattern also applies to more complex organisms that have experienced multiple major evolutionary transitions in structure <cit.>. We then compile data on regulatory costs in biological and social systems, including regulatory genes in bacteria and eukaryote cells, managers in companies, governmental agencies, and universities, administrators on Wikipedia, and lawyers in cities. Using scaling analysis, we quantify how regulatory costs shift with entity size through scaling exponents, revealing shared governing processes and principles across different systems.
§ RESULTS
We define regulatory functions as entities whose primary role is to moderate, adjust, or coordinate the interactions among other entities. These include regulatory genes in cells, managers in companies, legal functions in cities, and administrators on Wikipedia. Examples of functions that do not fall into this category include primarily productive components, such as functional genes in cells and factory workers in a manufacturing plant, and primarily maintenance and repair functions, such as the janitorial staff of a university.
§.§ Conceptualizing the mechanisms for baseline regulatory costs across systems
In this section, we present a simple mathematical framework to illustrate how baseline requirements for regulation may be derived from the interactions of a few key mechanisms. The central premise is that regulation arises as a response to potential adverse interactions among system components <cit.>. A complex system brings together a large number of individual components—such as genes in cells or employees in companies—to achieve certain benefits, like metabolic energy for cells or revenue for companies. We denote the amount of these benefits as B. However, adverse interactions among these components can diminish these benefits. In cells, such interactions may occur when expressed proteins interact in ways that are futile or detrimental to cellular metabolism. In organizations, adverse interactions can manifest as duplicated efforts or interpersonal conflicts between individuals. When two components of a system have an adverse interaction, we identify three responses the system can employ, as illustrated in Fig. <ref>. First, the system can do nothing and tolerate the adverse interaction. Second, it can use a regulator to manage these interactions. In cells, it takes the form of carrying a regulatory gene, which makes sure the genes encoding two negatively interacting proteins are not expressed at the same time. In organizations, it can take the form of assigning a manager to coordinate tasks and prevent duplicated work between individuals or mediate interpersonal conflict between two employees. Third, the system can separate the two components by creating compartments. In cells, this involves developing internal architecture, such as mitochondria, to ensure that certain genes are only expressed in specific sub-portions of the cell. In organizations, this approach could involve structuring individuals into separate teams, units or departments, thereby modularizing their efforts.
Each of these three strategies carries a cost. We denote these three cost categories as the cost of adverse interactions, I, the cost associated with regulators in the system, R, and the cost associated with compartments, C. Regulators and compartments each reduce adverse interactions, but come with their own costs, such as the pay required to employ a manager or the energy dedicated to maintaining and expressing a regulatory gene. Compartments differ from regulators in that they use structural separation, such as organizational divisions or sub-cellular membranes, to isolate unnecessary interactions. It is important to note that the costs associated with regulators and compartments may interact. For instance, in companies, the establishment of a new compartment, like a new division, is typically accompanied by the appointment of a regulator, such as the division head.
We are interested in how these benefits and costs change with the system size, N, such as the number of employees in an organization or the number of genes in a genome. In particular, we specify the number of functional individuals, f, regulators, r, and the number of compartments, c, to arrive at the generic utility function
ℒ= B(f,r,c)-I(f,r,c)-R(f,r, c)-C(f,r, c).
The explicit expression of each term will depend on the system of interest. For example, in cells, the cost of compartments is related to the physical maintenance of these structures and, thus, is influenced by their surface area. In organizations, these costs may be linked to the implementation of codified processes and the coordination required between departments.
Equation <ref> enables the optimization of the utility function over the number of regulators, r, and the number of compartments, c, given functional forms for the four terms. While the precise quantitative forms of these four terms remain uncertain for specific systems, it is useful to define the simplest version of each term to understand the baseline optimization of ℒ and how r and c scale with system size.
In a well-mixed environment, the addition of a new functional individual—whether an employee or gene—comes with a probability μ of having a negative interaction with all existing functional members of the system. Denoting the number of functional individuals as f, edge counting gives the total number of negative interactions as ρ f(f-1), with ρ≡μ/2. Each of these negative interactions comes with a cost γ_1, and we can remove negative interactions either by adding regulators or placing individuals in compartments. We assume that individuals contained within compartments are removed from pairwise negative interactions with the rest of the system. If we place η individuals into a compartment, and the system contains c compartments, then the total number of negative interactions becomes ρ(f - η c)(f - η c -1). For a fixed compartment size, negative interactions within a compartment simply become a fixed cost, which we will combine with compartment costs later. Additionally, if each regulator can reduce θ negative interactions, then the total number of negative interactions becomes ρ(f - η c)(f - η c -1)-θ r leading to the final utility function,
ℒ= b f -γ_1[ρ(f - η c)(f - η c -1)-θ r] - γ_2 r - γ_3 c.
where γ_2 is the unit cost of a regulator and γ_3 is the unit cost of a compartment (including the cost of negative interactions and regulators within a compartment). Here we assume that there is a proportional cost of r and c and a linear increase in benefit associated with f such that R=γ_2 r, C=γ_3 c, and B=b f. All terms are measured in dollars or energy depending on the system of interest. It is also useful to note that the total size of the system, N, which is often what is measured, is the sum of functional and regulatory components, N=f+r.
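As a concrete reference, the utility function of Equation <ref> can be transcribed directly into code and explored by brute force; the snippet below is an illustrative sketch whose default parameter values are taken from the worked example later in this section, not a fitted model.

```python
# Sketch of the utility L = b*f - g1*[rho*(f - eta*c)*(f - eta*c - 1) - theta*r]
# - g2*r - g3*c, with unresolved negative interactions floored at zero.
# Default parameter values are illustrative assumptions.
def utility(f, r, c, b=1.0, rho=0.10, theta=10.0, eta=1.0, g1=1.0, g2=0.5, g3=40.1):
    bad_pairs = rho * (f - eta * c) * (f - eta * c - 1) - theta * r
    bad_pairs = max(bad_pairs, 0.0)          # regulators cannot make I negative
    return b * f - g1 * bad_pairs - g2 * r - g3 * c

def optimize(f, **params):
    """Brute-force search for the (r, c) pair that maximizes L at fixed f."""
    return max(((utility(f, r, c, **params), r, c)
                for r in range(0, 4 * f + 1)
                for c in range(0, f + 1)),
               key=lambda t: t[0])

# e.g., optimize(50) recovers r_opt ~ rho*f*(f-1)/theta with c = 0 for small f.
```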
Managing Costs with Only Regulators. We now optimize ℒ as a function of r and c. Later we will show that compartments are not beneficial below a certain size, so we begin by considering ℒ with c=0. From Equation <ref>, we see that regulators are only beneficial if γ_1 > γ_2/θ, that is, if the cost of regulating a negative interaction is less than the cost of the negative interaction itself.
The utility function is optimized when regulators mediate all adverse interactions, meaning I(f,r)=0. This gives the optimal number of regulators,
r_opt = f (f-1) ρ/θ∼f^2 ρ/θ
illustrating that r grows like f^2, adjusted by the ratio of the probability of negative interactions, ρ, to the number of interactions that one regulator can mitigate, θ. Given that r ∼ f^2, we can also write the total size in terms of r alone, where N = f + r = √(θ/ρ) r^1/2 + r. It is useful to note that we have two regimes here: N ∝ r^1/2 for small f (which is also small r given the relationships above) and N ∝ r in the large f limit.
The first case compares well to bacteria where every expressed gene can interact with every other gene. Here the observed scaling is r ∼ N^1.86 <cit.>, in rough agreement with this simple optimization where r∝ N^2.
For the second case, where N∝ r, there is a critical size at which regulators overwhelm the system causing ℒ=0. For the optimal number of regulators, where I=0, the utility function is
ℒ=b f -γ_2ρ f (f-1)/θ.
which equals zero at
f_r=1+b θ/γ_2ρ.
Here f_r is the maximum number of functional elements for a system with only regulators. This upper bound depends on the ratio of the benefit per functional element to an effective cost, the unit cost of a regulator times the likelihood of a negative interaction. The ratio of b θ to γ_2ρ could be quite large, given that the probability of negative interactions could be small and the cost of an average individual, including regulators, γ_2, should be small relative to the productive output, b, of an average individual. In addition, a regulator may be able to handle many interactions, so θ should be much larger than 1. For example, if ρ=0.10, γ_2=b/2, and θ=10, then f_r ≈ 200, implying that an organization that handles negative interactions with only regulators could reach a significant size.
Including Compartments. As we mentioned earlier, f_r defines the size at which regulators become the entire system, which implies the need for compartments. Optimizing the complete utility function of Equation <ref> with respect to c by setting ∂ℒ/∂ c = 0 yields the optimal number of compartments,
c_opt =1/2 η(2 f-1-γ_3/γ_1ρη)
This result shows that compartments are only advantageous for cells with a large enough number of functional genes since c_opt>0 requires
f_c>1/2(1+γ_3/γ_1ρη).
The critical value is set by the ratio of the unit cost of a compartment to the product of the probability of a negative interaction, the unit cost of a negative interaction, and the number of elements placed in a compartment. In this case, the unit cost of a compartment, denoted γ_3, should exceed the unit cost of a negative interaction. This is because each compartment may entail the cost of physical space, the regulators who maintain it, and coordination costs among compartments.
γ_3=γ_1η(ρ + 2 b θ/γ_2).
To give a sense of the size of the unit compartment costs, γ_3= 40 γ_1η leads to the same transition point of f_c≈ 200 for the values of b and ρ used above.
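These worked numbers are straightforward to verify; the snippet below, with b, γ_1, and η set to 1 as arbitrary units, reproduces the transition scale quoted above.

```python
# Check of the worked example: rho = 0.10, gamma_2 = b/2, theta = 10.
b, rho, theta = 1.0, 0.10, 10.0
g1, g2, eta = 1.0, 0.5 * b, 1.0              # g1 and eta set to 1 for illustration

f_r = 1 + b * theta / (g2 * rho)             # regulator-only size limit
g3 = g1 * eta * (rho + 2 * b * theta / g2)   # smooth-transition compartment unit cost
f_c = 0.5 * (1 + g3 / (g1 * rho * eta))      # minimum size where compartments pay off

print(f_r, g3 / (g1 * eta), f_c)             # -> 201.0 40.1 201.0
```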
Above the critical value f_c, the number of compartments scales like c∼ f, which also affects the scaling of the number of regulators. Given the optimal number of compartments, we can maximize the utility function by considering the point at which ℒ = b f, which occurs at
r_opt = [c γ_3 + γ_1ρ (f - η c)(f - η c - 1)] / (γ_1θ - γ_2)
which given the solution for c_opt becomes
r_opt = [4 γ_3 f - γ_1ηρ - 2 γ_3 - γ_3^2/(γ_1ηρ)] / [4 η (γ_1θ - γ_2)]
which for large f is approximated by
r_opt ≈ γ_3 f / [η (γ_1θ - γ_2)].
With compartments, these results show that r ∼ f and c ∼ f. Because of this linear scaling, the total size N = f + r is also linear, so r ∝ N and c ∝ N. In this regime, the quadratic burden of negative pairwise interactions is handled proportionally by compartments and regulatory genes. However, this only occurs after the transition at which compartments become inexpensive enough to be viable.
This result is supported by the observation that in unicellular eukaryotes, the number of regulatory genes grows as G^1.26 <cit.>, which is much closer to one than the scaling observed in bacteria. Unlike bacteria, unicellular eukaryotes have various internal spatial partitions, including the partitioning of genes between the nucleus and mitochondria. The transition from prokaryotes to eukaryotes illustrates how the internal structure of organisms, hierarchy, and partitioning can alter the requirements for regulation.
Also, it should be noted that the scaling of r_opt in Equation <ref> cannot continue indefinitely because eventually I=0. This limit is given by
r_max=1/4θ(γ_3^2/γ_1^2 η^2ρ-ρ).
The number of regulators, r, saturates to a constant and thus should exhibit effectively sublinear scaling over much of the range of sizes. Similarly, in the transition between regulators only and compartments, we expect r to scale more slowly than quadratically with f, especially if f_r ≠ f_c, which occurs for a wide variety of parameter combinations. In addition, it is important to note that the unit costs, γ_i, could vary with attributes of the system. For example, the unit costs of compartments in cells depend on the surface area of these compartments. Similarly, the rate and cost of negative interactions, ρ and γ_1 respectively, may change with system size. In Equation <ref>, such cost scaling would cause r_opt to scale faster or slower than f. Taking all of these considerations into account, we should expect real-world systems in the most general sense to follow
r ∼ r_0N^β,
where r_0 is the minimum cost for the smallest size (N=1), and β is the scaling exponent. The above framework gives us the common expectation of β = 2 for a compartment-free system, and β = 1 for a system with both regulators and compartments. We can interpret observed exponents that are closer to 2 as being well-mixed and compartment-free. Close to 1 indicates a system optimized on compartments with a mixed strategy. An exponent significantly larger than 1 and significantly smaller than 2 indicates the transition from regulators only to some compartmentalization. Exponents significantly less than 1 indicate the regime where compartments are expanding and regulators are becoming cumbersome and saturating, or where there are certain economies of scaling in the unit costs.
§.§ Empirical results
We collect data on regulatory components and system size in both biological and socioeconomic systems to examine how regulatory costs scale with size. In biological systems, we focus on bacterial and unicellular eukaryote cells, specifically collecting data on the number of regulatory genes in their genomes. Regulatory genes are those that determine the timing and environmental conditions for gene expression by producing proteins that bind to other genes. We use data reported by <cit.> for these biological systems. For socioeconomic systems, we gathered data on the number of lawyers in cities and the number of managers in Norwegian and Korean companies, US federal government agencies, and various types of US universities. We also synthesize results from another study measuring the number of administrators for Wikipedia pages <cit.>. For human systems, system size is measured by the population of the entity, such as the number of employees for companies and universities, and the number of editors for Wikipedia pages. The scaling exponents of these systems are estimated using Eq. <ref>, with β representing the scaling exponent. For detailed information on data sources and statistical methods used for estimation, see Data and Methods.
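For reference, a minimal sketch of the exponent estimation, assuming ordinary least squares on log-transformed data (the paper's exact estimator is described in Data and Methods), is:

```python
# Sketch of fitting r ~ r0 * N**beta by least squares on log-log axes;
# whether plain OLS or another estimator is used here is an assumption.
import numpy as np

def fit_scaling(N, r):
    """Return (r0, beta) from a log-log linear fit of r against N."""
    beta, log_r0 = np.polyfit(np.log(N), np.log(r), 1)
    return float(np.exp(log_r0)), float(beta)

# e.g., with N = employees and r = managers per Norwegian company,
# fit_scaling(N, r) should return beta close to the reported 0.91.
```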
Based on the mathematical model, we expect the scaling of regulatory functions to vary according to the system's structure. Bacteria, representing the most well-mixed end of the spectrum, operate like a “soup,” where any gene can interact with any other gene. Cities, while still relatively well-mixed, exhibit a lesser degree of mixing due to the bottom-up emergence of social network clustering. We anticipate that these systems will have higher, superlinear exponents. On the modular end of the spectrum are human organizations such as companies and universities, which predominantly feature strong departmental structures. We expect these entities to have scaling exponents close to linear.
In the data we gathered, regulatory costs scale with system size across diverse systems. Three examples are shown in Fig. <ref>: regulatory genes in bacteria, lawyers in cities, and managers in Norwegian companies. As predicted, the scaling exponents vary across system types, with regulatory genes in bacteria having the highest exponent at 1.84. The number of lawyers scales superlinearly with urban population, with an exponent of 1.28 (Fig. <ref>B). This is similar to prokaryotic regulatory genes but does not reach quadratic scaling due to constraints on individuals' social interaction capacity and the overall network structure of cities. Theories of urban scaling based on these constraints have successfully predicted similar exponents for many socioeconomic outputs driven by interactions <cit.>. In contrast to cities, human organizations—such as government agencies, companies, and universities—typically exhibit a high degree of hierarchical structure, leading us to expect different scaling behaviors. In these organizations, managers play a crucial role in coordinating efforts and mitigating conflicts among subordinates. The scaling exponent for managers in Norwegian companies is 0.91 (Fig. <ref>C).
The scaling exponents for all data we have gathered, including the number of managers in all sectors of US universities, Norwegian and Korean companies, and US federal government agencies, as well as those in biological systems, are summarized in Fig. <ref>. In all the modular systems, the number of managers scales sublinearly with small variation—from 0.94 for federal agencies (highest exponent) to 0.72 for Korean companies (lowest exponent). The sublinear scaling in the data suggests that the span of control, the number of subordinates per manager, increases with organization size. This finding aligns with previous studies of companies and public agencies in management science <cit.>, and is contrary to the popular belief that larger organizations are less efficient in coordination costs <cit.>. Even on Wikipedia, a supposedly decentralized system, the interaction network among editors has been shown to be highly modularized <cit.>. Interactions tend to cluster around specific topics of interest, allowing a smaller number of administrators to effectively manage issues that arise. As such, despite the decentralized nature of the system, administrators naturally oversee specific modules and thus scale sublinearly with the total number of contributors.
§.§ Function diversity associated with scaling deviations
We have formulated a conceptual framework to understand how size and structure influence the regulatory costs of systems and collected data to compare with this theory. The differences across the spectrum from well-mixed to modular systems indicate that regulation may arise from the number and type of interactions among individuals. We further investigate this idea by examining how the number of regulators is related to the functional diversity of a system.
Functional diversity reflects the range of tasks performed by the components of a system. As society advances, its technology becomes more complex, and individual roles and functions become more specialized and diversified. A classic and prominent example is car assembly, which now requires specialized metal compounds for catalytic converters, computer chips, and software to manage many aspects of automobile operation. None of these components existed a half-century ago. Such an increasingly complex manufacturing operation necessitates the coordination of a broader range of components, leading to a higher potential for adverse interactions that the system needs to manage.
Motivated by our theory, we predict that greater functional diversity is associated with higher regulatory costs. To test this prediction, we will quantify functional diversity across a range of systems and demonstrate that it is positively associated with the scaling residuals. In other words, systems with higher functional diversity exceed the expected number of regulators according to the scaling curve, while those with lower functional diversity fall short.
We quantify functional diversity in an organization by analyzing the distribution of occupations within it. For this analysis, we utilize individual-level occupation information for US federal government agencies. To ensure robustness, we also test our predictions on companies. Although occupation information is not available at the company level, we study companies aggregated into industries, for which we have detailed accounts of the distribution of occupations. We measure functions using the finest occupational categories available in these datasets. For more details on data sources and occupation definitions, see Data and Methods.
We measure functional diversity using normalized Shannon entropy (H), an information-theoretic measure that quantifies the predictability of a function given all functions in the system. This measure has been successfully utilized to quantify diversity in socioeconomic systems <cit.>. Mathematically, it is defined as:
H = -∑_i = 1^D p_i log p_i/log D
where p_i is the relative frequency of function (occupation) i, p_i = f_i/∑f_i, and f_i is the frequency of function i. The variable D is the number of distinct functions in the system. H is maximized when the abundance of functions follows a uniform distribution. The normalization by log D allows comparison across systems with different total numbers of functions.
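A direct implementation of this measure might look as follows; this is a sketch assuming occupation counts arrive as a simple array.

```python
# Normalized Shannon entropy H of an occupation-count vector (Eq. above).
import numpy as np

def normalized_entropy(counts):
    """H in [0, 1]; H = 1 when all D functions are equally abundant."""
    f = np.asarray(counts, dtype=float)
    p = f[f > 0] / f.sum()               # relative frequencies p_i, zeros dropped
    D = p.size                           # number of distinct functions present
    if D <= 1:
        return 0.0                       # a single function carries no diversity
    return float(-np.sum(p * np.log(p)) / np.log(D))
```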
We also compute the scaling residual <cit.> for each system, which quantifies the extent to which a system over- or under-performs relative to the scaling curve. The scaling residual for system i, ξ_i, is defined as, ξ_i = log r_i - log r(N_i), where r_i is the number of regulators for system i in the data, and r(N_i) is the regulators expected of its size according to Eq. <ref>.
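The residuals, and their association with diversity, can then be sketched as below, reusing fit_scaling and normalized_entropy from the earlier snippets; treating the reported correlations as Pearson correlations is an assumption here.

```python
# Scaling residuals xi_i = log r_i - log r(N_i), and their correlation with H.
# Pearson correlation is assumed for the reported statistic.
import numpy as np
from scipy.stats import pearsonr

def scaling_residuals(N, r):
    r0, beta = fit_scaling(N, r)                       # from the earlier sketch
    return np.log(r) - (np.log(r0) + beta * np.log(N))

# xi = scaling_residuals(N, r)
# corr, pval = pearsonr(H, xi)   # reported: 0.59 (industries), 0.20 (agencies)
```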
Figure <ref> illustrates the relationship between managers and employees, with each entity colored according to its normalized entropy, where higher values indicate greater functional diversity <cit.>. For both federal agencies and industries, entities with higher functional diversity tend to have more managers than expected for their size. The insets of Fig. <ref> display the correlation between scaling residuals and normalized entropy. The correlation is 0.59 for industries (p < 0.001) and 0.20 for federal agencies (p = 0.030). These results support our initial hypothesis that greater functional diversity is associated with greater regulatory costs due to greater coordination requirements.
§ DISCUSSION
We have proposed a conceptual framework for unifying the prediction of regulatory costs across biological and socioeconomic systems by examining the interactions of fundamental features, size and internal structure. We also conducted an empirical analysis of regulatory costs across these systems. Our findings indicate that the variation of regulatory functions with system size depends significantly on the system's structure. The exponents range from nearly 2 for regulatory genes in bacteria, which have a well-mixed internal structure, to sublinear scaling for managers in hierarchically organized human organizations, such as federal agencies and universities. By characterizing regulatory functions based on their underlying mechanisms rather than the system type, our work represents a first step toward a unified understanding of maintenance functions across diverse systems.
Our conceptual model is based on the simplest assumptions about cost factors, intended to lay the groundwork for more advanced theories in the future. These future models should incorporate more accurate cost estimates tailored to different system types through empirical measurement. For example, in hierarchical human organizations, the cost of regulators within compartments should also be factored into future models. In biological systems, more precise cost calculations should consider the frequency of gene expression, which plays a critical role in determining the energetic cost to the cell.
The greater scaling exponents of biological systems do not necessarily imply that biological systems are less efficient than socioeconomic ones. Creating compartmentalization and structure in biology is expensive—intricate physical structures need to be developed to separate the cell nucleus from the rest of the cell. Extending this idea, complex physical infrastructure also needs to be developed to enable organs in animals, where genes are expressed only in certain tissues. While bacteria need to carry a rapidly increasing number of regulatory genes as they increase in size, they save on the energetic cost of creating compartmentalization within the cell. In comparison, developing structure in social systems does not necessarily incur a physical cost. For example, a company's CEO can decide to create a new division without employing new physical separations between divisions, and the reorganization can be accomplished in a matter of weeks. Humans are also naturally creatures of groups with limited social capacity, making modularity a common characteristic of many social systems, even decentralized ones such as cities. Indeed, social interactions scale as N^1.2 according to phone network data <cit.>, which is far from the N^2 null prediction for a completely well-mixed group.
Furthermore, superlinear exponents, such as the nearly 2 scaling exponent observed in bacteria, impose a fundamental constraint on system size. With superlinear exponents, there is a maximum size beyond which all components would be regulatory. For bacteria to grow larger, they must fundamentally transform their organizational structure to one that leads to a lower scaling exponent. While the ≈ 2 exponent is inefficient for scaling up, it also has significant benefits. New genes can be easily added to the bacterial genome along with new regulation in a plug-and-play manner that provides remarkable flexibility to adapt to novel and changing environments. This insight may be transferred to small human organizations, such as start-up companies and local communities. These small organizations are similar to bacteria in the sense that new functions can be easily added with the caveat that everyone interacts directly with everyone else, leading both to a high degree of flexibility and unexpected conflicts. Our theory predicts this form of organization is limited by a predictable critical size. While this study did not collect data on start-up companies, it would be valuable for future research to analyze the regulatory costs and structure of start-up companies on a large scale and examine their transition to a more modular configuration.
Social systems, aided by the lower cost of compartmentalization, appear to gain an economy of scale in regulatory costs. However, many have the experience that larger organizations are more bureaucratic. These two observations do not necessarily contradict each other. The personal experience of regulation may reflect the experience of a non-regulatory employee complying with the structure and processes put in place by an organization. Future research should consider the cost of regulatory compliance in organizations and ask how it is traded off against regulators and compartments. It has also been noted that many regulatory costs have increased over time in many forms of organizations, such as universities <cit.>. Temporal and cross-sectional scaling behaviors differ in many socioeconomic systems; the difference can be due to changes in the output of the whole system regardless of size <cit.>. Future research should extend our cross-sectional data gathering and theoretical analysis to a temporal one, to address why regulatory costs have grown in many sectors over time.
While our study makes predictions based on structure and focuses on measuring regulatory costs across system types, we have not formally quantified the degree of modularity in the systems' architecture. Our work uses qualitative accounts of these systems. It is an area of important future work to perform a careful quantitative assessment of modularity across a wide range of system types. This would also involve gathering detailed interaction network data and quantifying the modularity of these networks.
Our work makes several contributions to the literature. While most management studies treat regulation as an independent variable, examining its effects on other factors, our research offers a distinct perspective by exploring what determines the level of regulation itself. We conduct a comprehensive empirical comparison across socioeconomic and biological systems, analyzing these systems based on their underlying mechanisms. For the first time, we compare regulatory costs across both biological and socioeconomic systems, identifying their commonalities and differences. Additionally, we introduce a conceptual framework that, despite lacking system-specific details, provides a foundation for understanding baseline regulatory requirements through an optimization process. We hope our work paves the way for a unified science of regulatory costs.
§ ACKNOWLEDGEMENTS
This research was supported by the National Science Foundation Grant Award Number EF–2133863. The authors thank Seoul Lee for helping collect and analyze South Korean company data.
§ DATA AND METHODS
§.§ Data
Bacteria and Unicellular eukaryotes.
Data on genome length and the number of regulatory genes are from van Nimwegen, E., "Scaling laws in the functional content of genomes," in Power Laws, Scale-Free Networks and Genome Biology, 236–253 (Springer, 2006).
Cities.
Data for occupations in cities were obtained from the Bureau of Labor Statistics in 2017. Data were downloaded from <https://www.bls.gov/oes/tables.htm> in May 2019. Lawyers are defined according to the 2010 Standard Occupational Classification (SOC) system. The boundaries of urban areas in this dataset use the Metropolitan Statistical Area established by the US Office of Management and Budget (OMB).
Federal agencies.
Data on US Federal agencies were obtained from FedScope in 2018, downloaded from <https://www.fedscope.opm.gov/employment_access.asp> in Oct 2019. The data provide individual-level information on all US federal government employees, including the federal agency they are employed under and their occupation classification. The dataset contains 125 cabinet-level departments and independent agencies. The smallest agencies in this dataset contain two employees each: the Commission for the Preservation of America's Heritage Abroad and the Northern Border Regional Commission. The largest agencies contain hundreds of thousands of employees—the largest two being the Department of Veterans Affairs, with 391,187 employees, and the Department of the Army, with 249,074 employees. The FedScope data is compiled through human resource software used by the US federal government. Note that not all federal agencies are reported in this data set; however, all agencies available through FedScope are used in our analysis. Supervisors are defined as individuals coded to have supervisor status in the FedScope dataset. The occupation classification in this dataset is according to the OPM's Handbook of Occupational Groups and Families, available at <https://www.opm.gov/policy-data-oversight/classification-qualifications/classifying-general-schedule-positions/occupationalhandbook.pdf>. The occupation categories used in our analysis are on the 4-digit level, the finest level in the dataset.
Norwegian Companies.
The data on Norwegian companies were purchased from Statistics Norway in May 2022. The data are from the year 2019. For confidentiality, the company-level data was aggregated by Statistics Norway according to the following procedure. The companies are first sorted by the number of employees from large to small. For every five companies, both the number of employees and the number of distinct functions are aggregated by taking the mean of the log-transformed variables, i.e., E[log y_i], where y_i is a variable for company i in that size bin. This data only includes companies with five or more employees for confidentiality reasons. Managers are classified according to the International Standard Classification of Occupations (ISCO-08), which is detailed at <https://www.ssb.no/en/klass/klassifikasjoner/7/versjon/33>. Note that this dataset only includes resident employees in Norway, and it does not include subsidiary companies in countries other than Norway. We filter for companies with a minimum of 50 employees.
Universities.
The data on US universities are obtained from the Integrated Postsecondary Education Data System (IPEDS). The data was accessed from <https://nces.ed.gov/ipeds/use-the-data>. The data used in our analysis are from the year 2016. The types of universities are classified based on the Carnegie 15 classification system, also available in this dataset. Bachelor-level and above institutions include universities classified as "Baccalaureate," "Master," or "Doctoral" universities in the Carnegie 15 classification system. Liberal Arts universities are coded as "Baccalaureate Colleges–Liberal Arts," Associate Colleges are coded as "Associates Colleges," and Doctoral universities are coded as "Doctoral/Research Universities–Extensive" and "Doctoral/Research Universities–Intensive" in Carnegie 15. In our analysis, we filter for universities with a minimum of 50 employees.
Industries.
The data on US industries are obtained from the Bureau of Labor Statistics in 2018, downloaded from <https://www.bls.gov/oes/tables.htm>. Managers and other occupations are defined according to the 2010 Standard Occupational Classification (SOC) system.
§.§ Estimation of scaling exponents
The scaling exponents are derived from doing a linear fit after taking the log of both axes. For two variables x and y, the exponent β and its confidence interval are obtained by regression: log y = βlog x + log c.
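For concreteness, a minimal sketch of this fit is given below; `sizes` and `regulators` are illustrative stand-ins (e.g., city populations and lawyer counts), not names from the original analysis.

```python
# Minimal sketch of the log-log scaling fit: log y = beta * log x + log c.
import numpy as np
from scipy.stats import linregress

def fit_scaling_exponent(sizes, regulators):
    """Return the scaling exponent beta and an approximate 95% CI half-width."""
    res = linregress(np.log10(sizes), np.log10(regulators))
    return res.slope, 1.96 * res.stderr
```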
| http://arxiv.org/abs/2409.02233v1 | 20240903190254 | A Physically Motivated Framework to Compare Merger Timescales of Isolated Low- and High-Mass Galaxy Pairs Across Cosmic Time | ["Katie Chamberlain", "Ekta Patel", "Gurtina Besla", "Paul Torrey", "Vicente Rodriguez-Gomez"] | astro-ph.GA | ["astro-ph.GA"] |
Merger Timescales in TNG100
Katie Chamberlain [0000-0001-8765-8670]
Ekta Patel [0000-0002-9820-1219], NASA Hubble Fellow
Gurtina Besla [0000-0003-0715-2173]
Paul Torrey [0000-0002-5653-0786]
Vicente Rodriguez-Gomez [0000-0002-9495-0079]
§ ABSTRACT
The merger timescales of isolated low-mass pairs (10^8 < M_* < 5×10^9 M_⊙) on cosmologically motivated orbits have not yet been studied in detail, though isolated high-mass pairs (5×10^9 < M_* < 10^11 M_⊙) have been studied extensively.
It is common to apply the same separation criteria and expected merger timescales of high-mass pairs to low-mass systems; however, it is unclear whether their merger timescales are similar, or whether they evolve similarly with redshift.
We use the Illustris TNG100 simulation to quantify the merger timescales of isolated low-mass and high-mass major pairs as a function of cosmic time, and explore how different selection criteria impact the mass and redshift dependence of merger timescales.
In particular, we present a physically motivated framework for selecting pairs via scaled separation criteria, wherein pair separations are scaled by the virial radius of the primary's FoF group halo (r_sep/R_vir < 1).
Applying these scaled separation criteria yields equivalent merger timescales for both mass scales at all redshifts.
Alternatively, static physical separation selections applied equivalently to all galaxy pairs at all redshifts lead to a difference in merger timescales of up to ∼1 Gyr between low- and high-mass pairs, particularly for separations <150 kpc.
As a result, applying the same merger timescales to physical separation-selected pairs will lead to a bias that systematically over-predicts low-mass galaxy merger rates.
§ INTRODUCTION
Galaxy merger rates provide an important pathway for tests of hierarchical assembly from ΛCDM theory, and are critical for understanding the formation and evolution of galaxies across time <cit.>.
In the era of future surveys with JWST, Rubin, and Roman, these tests will be extended to higher redshifts and lower mass scales than were previously accessible <cit.>.
At higher redshift (z≳1) and at lower masses (≲10^9.5 M_⊙), even isolated galaxies are highly disturbed <cit.>, making close pair fractions a critical alternative to morphological signatures for merger rate studies.
However, there is currently no framework for comparing merger rates for low-mass and high-mass galaxies as a function of time, and tensions between predictions from theory and observational studies exist in the literature at low masses <cit.>.
This study seeks to establish a framework to enable comparisons of merger rates across mass scales and redshift so that merger rates of low-mass galaxies can be used as tests of ΛCDM theory.
These results can also be used to interpret observations of galaxy morphologies and their hierarchical evolution, including the role of mergers in triggering starbursts, fueling active galactic nuclei (AGN), facilitating the formation of tidal features, etc., in low-mass galaxies <cit.>.
In our previous work, we show that the redshift evolution of pair fractions of isolated low-mass and high-mass pairs in Illustris TNG100 differ significantly, particularly at z<3 where pair fractions of low-mass pairs decrease up to 75% between z=3 and z=0, while high-mass pair fractions peak at z=0 <cit.>.
We suggested that differences in the merger timescales of low-mass and high-mass pairs might be the cause of the pair fraction evolution differences.
In our following analysis, we will investigate this question directly and answer whether merger timescales are responsible for these pair fraction differences.
Additionally, we found in <cit.> that recovering the pair fraction differences seen between the two mass scales is sensitive to the selection criteria adopted during pair selection.
Physical separation cuts applied equivalently to all pairs, with no mass or redshift dependence, eliminate the ability to distinguish the behavior of the pair fraction evolution of low-mass and high-mass pairs, even for separation cuts as large as <300 kpc.
Alternatively, when separation criteria vary with the mass and redshift of each system, in particular when the separation is scaled by the virial radius of the pair's FoF group halo, the ability to distinguish the underlying pair fraction behavior is recovered for scaled separations as low as r_sc < 0.5.
Employing scaled separation criteria then permits the equivalent comparison of pair fractions between low-mass and high-mass pairs, and robust comparisons across redshifts from z=0-4.
Since merger rate estimates are derived from close pair fractions and merger timescales, the results from our previous work imply that careful consideration of separation criteria is required for merger timescale and merger rate studies at different mass scales and redshifts as well.
In order to investigate the impact of pair selection criteria on merger timescales specifically, we will track the pairs selected in our previous work forwards and backwards in time to construct orbits for each pair.
We will then study the merger timescales of isolated low-mass and high-mass pairs for the same set of physical and scaled separation criteria used in that work to determine whether merger timescales of isolated pairs are robust to the selection criteria used to determine the pair samples.
In this paper, we aim to extend the framework for pair selection criteria presented in our previous work.
In Sec. <ref>, we detail our selection criteria for isolated low-mass and high-mass orbits in the Illustris TNG100 simulation.
We examine the redshift evolution of the number of pairs and merger fraction of our sample in Sec. <ref>.
In Sec. <ref>, we present our findings on the separation and redshift dependence of the merger timescales of low-mass and high-mass pairs, and study the impact of physical and scaled separation criteria for pair selection.
Finally, we discuss the implications of this work and provide suggestions for a self-consistent way of studying pairs across redshifts and mass scales in Sec. <ref>, and present our final conclusions in Sec. <ref>.
§ METHODOLOGY
The IllustrisTNG simulation suite <cit.> is a set of large volume dark-matter-only and full magnetohydrodynamical cosmological simulations consistent with the Planck 2015 cosmology <cit.>.
Following our previous work, we use TNG100-1 (hereafter TNG100), which is the highest resolution full-physics run of the (∼100 Mpc)^3 volume.
In particular, we utilize the group catalogs produced by the Subfind algorithm <cit.> and the merger tree catalogs generated by the SubLink algorithm <cit.>.
The catalogs consist of a set of 100 snapshots, ranging from z∼20 (snapshot 0) to z=0 (snapshot 99).
Our sample consists of major low-mass and high-mass pairs (stellar mass ratio 1/4 ≤ M_*2/M_*1≤ 1) that are isolated, but physically associated, as in <cit.>.
From this sample, we will determine the fraction of pairs at each snapshot that merge before z=0, and track the orbits of each pair to study their merger timescales from z=0-6.
§.§ Pair sample
We begin with an extension of the pair sample described in our previous work, which consists of a collection of isolated galaxy pairs at each snapshot in TNG100.
We collect our base sample at each redshift in the simulation.
A brief version of the selection routine is transcribed here for completeness, and we refer readers to Sec. 2 of our previous work for more detail and a discussion of the selection criteria choices.
At each snapshot, low-mass and high-mass pairs are chosen by first selecting the two most massive subhalos (by assigned stellar mass, using abundance matching, as described later in this subsection) from FoF groups with virial mass M_vir in the range:
M_vir = 8×10^10 - 5×10^11 M_⊙ (low-mass)
M_vir = 1×10^12 - 6.5×10^12 M_⊙ (high-mass).
Virial masses (and radii) are calculated according to the spherical collapse model of <cit.>, and are provided by the TNG group catalogs.
Requiring pairs to belong to the same FoF group ensures that the pairs are distant from other massive nearby systems that could perturb the dynamical state of the pair.
In addition, for each pair, we calculated the Hill radius of every FoF group with a higher mass within 10 Mpc of the pair, as a proxy to discern whether the pair lies outside the gravitational sphere of influence of these more massive groups.
We found that over 99% of our pairs at z=0 are more distant than two times the Hill Radius of each FoF group.
We require the subhalos that constitute a pair to meet a minimum subhalo mass criterion of
M_sub > 1×10^9 M_⊙
at the snapshot of consideration.
For TNG100, this ensures that subhalos are resolved into at least ∼100 particles, enough to robustly identify gravitationally bound subhalos in the FoF and Subfind catalogs.
For each subhalo in the FoF group that passes the minimum subhalo mass criterion, we utilize the merger tree catalogs to find the peak halo mass of each subhalo <cit.>.
As in our previous work, stellar masses are assigned to each subhalo in the FoF group using the abundance matching prescription of <cit.>.
Utilizing abundance matching to assign stellar masses allows us to circumvent simulation-specific stellar mass effects, and also results in a more straightforward process for applying our methodology to observational studies (see Sec. <ref> for more details).
The peak halo mass and current redshift are used to calculate the stellar mass of each subhalo via the abundance matching prescription M_* = f(M_peak, z), where z is the redshift of consideration.
In our previous work, the abundance matching prescription was sampled 1000 times for each subhalo to account for the spread in the Stellar Mass–Halo Mass (SMHM) relation.
For the present study, we use only the stellar masses given by the median of the abundance matching relation.
We expect that the spread of merger timescales from different orbital configurations, as well as the redshift spacing of the TNG100 snapshots, will dominate over uncertainties from the number of pairs in the catalog, which varied by ∼3% <cit.>.
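As an illustration of this assignment step, the sketch below computes median stellar masses with a double power-law SMHM relation. The functional form and z=0 parameters shown follow Moster et al. (2013) and are a stand-in assumption for demonstration; the exact prescription used here is the one cited above.

```python
# Illustrative sketch of the median SMHM assignment M_* = f(M_peak, z) at z=0.
# Double power-law form with Moster et al. (2013) z=0 parameters as a stand-in.
import numpy as np

def median_stellar_mass(m_peak, N=0.0351, log_M1=11.590, beta=1.376, gamma=0.608):
    """Median stellar mass [Msun] for a given peak halo mass [Msun]."""
    x = np.asarray(m_peak, dtype=float) / 10**log_M1
    return m_peak * 2.0 * N / (x**(-beta) + x**gamma)
```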
Primary subhalos are defined as the subhalo with the highest assigned stellar mass, M_*1, in the FoF group, and secondaries are defined as the second most massive subhalo, with stellar mass M_*2.
Our sample of major pairs then consists of all pairs of primary and secondary subhalos with
10^8 < M_*1 < 5×10^9 M_⊙ (low-mass)
5×10^9 < M_*1 < 10^11 M_⊙ (high-mass)
M_*2/M_*1 > 1/4.
A primary or secondary subhalo can only be a member of one single pair at a given snapshot, such that a collection of N pairs consists of N unique primaries and N unique secondaries.
A subhalo can belong to multiple different pairs at different redshifts.
For example, the primary of a pair that merges at z=2 can be selected with a different secondary at z=1, constituting a new pair.
More detail regarding the uniqueness of pairs and orbits is discussed in Sec. <ref>.
The base sample of pairs used for this analysis then consists of the set of all isolated low-mass and high-mass major pairs from each redshift of the TNG100 simulation.
§.§ Mergers
Here we describe how mergers are identified.
Note that this definition is specific to subhalo mergers and therefore results are not guaranteed to hold for galaxy mergers <cit.>, especially in cases where dark matter subhalos and galaxies have different centers of mass, as is the case in our own Galaxy <cit.>.
The primary and secondary subhalos of each pair have a unique SubhaloID in the merger tree catalogs, which contain information about every FoF group and subhalo at each redshift.
The catalogs track subhalos from one redshift to the next, and thus enable us to track subhalos both backwards and forwards in time from any given redshift.
We utilize the DescendantID field of the merger tree catalogs to determine which pairs from our base sample merge before the end of the simulation (at z=0) and when each merger occurs.
The DescendantID field provides the SubhaloID[Note that the SubhaloID is distinct from the SubfindID, and is unique for every subhalo in the merger trees.
The catalogs provide the associated SubfindID of each subhalo.] of the subhalo's descendant in the next (or one of the following) snapshots, if it has one.
In this analysis, a pair is classified as a merger if the primary and secondary subhalo share the same DescendantID at the same redshift.
This means that the primary and secondary subhalo have merged such that the subhalo finder can no longer distinguish them as two separate halos, therefore yielding a single descendant subhalo.
If a primary and secondary never share the same DescendantID at the same redshift, the pair is defined as a `non-merger.'
For each merging pair, we define the merger redshift as the redshift that immediately follows the first snapshot where the primary and secondary have the same DescendantID.
For example, if the primary and secondary have different DescendantIDs from z=6 to z=2, but have the same DescendantID at z=2, then the merger must take place between z=2 and z=1.9, which is the next redshift corresponding to a snapshot in the simulation.
In this example, we take z=1.9 to be the merger redshift.
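The bookkeeping for this merger test is straightforward; the sketch below assumes per-snapshot arrays of descendant IDs extracted from the merger trees, with illustrative variable names.

```python
# Sketch of the merger test: a pair merges at the first snapshot where both
# subhalos point to the same descendant; the merger redshift is that of the
# following snapshot. Inputs are assumed aligned per snapshot.
def merger_snapshot(snapshots, desc_id_primary, desc_id_secondary):
    """Return the first snapshot at which the pair shares a descendant ID,
    or None if the pair never merges before z=0 (a 'non-merger')."""
    for snap, d1, d2 in zip(snapshots, desc_id_primary, desc_id_secondary):
        if d1 == d2 and d1 != -1:   # -1 marks a subhalo with no descendant
            return snap
    return None
```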
§.§ Orbits
We extract orbits for all mergers and non-mergers in our base pair sample using the merger tree catalogs.
An orbit for a single pair is defined to be the physical separation between the primary and secondary subhalo as a function of redshift (or lookback time).
A given pair from the base sample at redshift z_n passes all of the selection criteria at z_n, and can additionally be followed backwards and forwards in time using the merger trees.
We track the positions of both the primary and secondary subhalo at each redshift and calculate the physical separations after accounting for the periodic boundary conditions of the simulation box.
In cases where the primary or secondary does not have a defined position at a given redshift, we do not compute a value for the separation at that redshift.[If a subhalo is very small, or is passing through a more massive subhalo, and is unable to reach the density contrast required to be identified as an independent structure by the algorithm, it will not have a defined position in the catalogs. The algorithm allows for subhalos to skip a single snapshot, and identifies the `skipped descendent' in the S_n+2 snapshot, so that the orbit can be evaluated before and after the skip occurs. See Sec. 3 in <cit.> for more details.]
In addition to the physical separation, we also calculate the scaled separation of a pair at each redshift. We use the definition of scaled separation
r_sc ≡ r_sep / R_vir
from <cit.>, where r_sep is the physical separation in kpc, and R_vir is the virial radius of the pair's FoF group.[Given by Group_R_TopHat200 in the group catalogs. If the secondary is not in the same FoF group as the primary, the virial radius used to calculate the scaled separation remains that of the primary's FoF group, which merging secondaries will eventually re-enter prior to merger.]
The virial radius of the pair's FoF group reasonably approximates the virial radius of a halo with a virial mass equal to the combined subhalo mass of the primary and secondary.
This “scaled separation" is, by construction, a function of mass and redshift.
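A minimal sketch of this calculation is given below, assuming positions have already been converted to physical units; the minimum-image wrap handles the periodic box.

```python
# Minimal sketch of the scaled separation r_sc = r_sep / R_vir.
import numpy as np

def scaled_separation(pos_primary, pos_secondary, r_vir, boxsize):
    dx = pos_primary - pos_secondary
    dx -= boxsize * np.round(dx / boxsize)     # periodic boundary conditions
    r_sep = np.sqrt(np.sum(dx**2, axis=-1))    # physical 3D separation
    return r_sep / r_vir
```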
Note that we choose not to interpolate the orbits to get a more precise determination of the merger timescale, as the spread on the merger timescales in our study will be dominated by the variation of orbital configurations rather than the uncertainty due to snapshot spacing. However, <cit.> recently showed that for studies needing more precise determination of, e.g., pericenter passages or merger times, their 6D interpolation scheme can improve timing uncertainties from ±80 Myr to ±3.3 Myr.
§.§.§ Defining Post-Infall
While the orbit of a pair may be calculated at very early times, we wish to constrain our orbital analysis to only physically associated pairs, and thus will not consider the orbit of a subhalo pair before they belong to the same FoF group.
Specifically, we will consider only the “post-infall" part of each pair's orbit.
We define the redshift of “first infall" as the redshift of the first snapshot where the primary and secondary have the same parent FoF halo, and post-infall as all following redshifts starting with the redshift of first infall. We define merger timescales to begin at the time of first infall and conclude at the time of merger redshift, as defined in Section <ref>.
Figure <ref> presents a set of example orbits, which shows the wide variety of orbit types that can be found in the TNG100 simulation for pairs that were originally selected from our base sample at z=1.5.
The top panel shows the orbits of five pairs that merge, where solid lines show the post-infall parts of the orbit, and dashed lines show the pre-infall portions of the orbit before the secondary and primary ever share a FoF group halo.
Even amongst galaxies of the same approximate stellar mass and stellar mass ratio, the spread of merger timescales can be very large, with merger timescales between ∼1-10 Gyr from first infall to merger (triangles).
For some pairs, the secondary subhalos can experience first infall, then after one pericentric passage can return to large distances where they are temporarily assigned a different FoF group halo than that of their primary.
In these cases, we use the full orbit beginning at first infall, including the segments of the orbit where the secondary is assigned to another FoF group temporarily, through to merger as described in Section <ref>.
This definition is most robust for considering the full interaction timescales of merging pairs.
The middle panel of Fig. <ref> (“Potential Future Mergers") shows the orbits for three pairs that are classified as non-mergers, since they do not merge before z=0, but which appear likely to merge within a few Gyr past the end of the simulation.
These orbits can have a variety of orbital periods, and the number of pericentric passages can vary significantly.
The two lighter blue orbits are very long period orbits with 1-2 pericenter passages in the past 10 Gyr, while the darker blue orbit has a much shorter period, with three close passages in just the past 2 Gyr.
The bottom panel shows the orbits of “Fly-by" interactions (non-mergers) that are unlikely to merge in the near future, if ever.
Note that we do not split our non-merger category into fly-bys and potential future mergers for any of our following analyses, and such distinctions were made only for the purposes of showing the diversity of selected orbits. We further discuss the role of non-mergers and fly-bys on our results in Sec. <ref>.
§.§.§ Uniqueness of orbits
Since we collect the orbit for each pair at all redshifts after infall, a singular orbit will be selected as many times as the number of redshifts (or snapshots) where that pair exists. However, it is only necessary to keep a single instance of any given orbit in our catalog to avoid skewing our data artificially to longer merger timescales.
To distinguish the collection of unique orbits, each pair is assigned a `pairkey' while constructing their orbit.
The pairkey is created by concatenating the earliest subhalo IDs of the primary and secondary subhalos from the merger tree catalogs, and is unique for each pair of halos.
After each pair is assigned a unique pairkey, we only keep one instance of an orbit per pair to avoid double/multi-counting in our orbit catalog.
Note that a single subhalo may be a member of many different pairs, but will only have one unique orbit per unique pair.
For example, the primary of a low-mass pair selected at z=3 that merges before z=2 may be selected via our selection criteria again at z=1 with a new secondary companion.
In this case, both orbits (of the original low-mass pair and the new pair which includes the primary subhalo from the previous merger) are retained in the orbit catalog.
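A sketch of this de-duplication is given below; the orbit records and their attribute names are hypothetical stand-ins for our catalog structure.

```python
# Sketch of the pairkey de-duplication: keep exactly one orbit per unique pair.
def pairkey(orbit):
    """Concatenate the earliest tree IDs of the two members into a unique key."""
    return f"{orbit.earliest_id_primary}_{orbit.earliest_id_secondary}"

def deduplicate(all_orbits):
    """Return one orbit per pairkey from orbits gathered at every snapshot."""
    unique = {}
    for orbit in all_orbits:
        unique.setdefault(pairkey(orbit), orbit)
    return list(unique.values())
```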
The total number of orbits, before removing the redundant orbits, is 71,429 for low-mass pairs and 20,824 for high-mass pairs.
However, after removing all redundant orbits, there remain 22,213 low-mass orbits and 3,029 high-mass orbits, each corresponding to a unique pair of subhalos. The collection of all unique orbits constitutes our orbit sample and is the dataset that will be used for the remainder of this analysis.
§ PAIR SAMPLE PROPERTIES
§.§ Number of Pairs
The total number of pairs at a given redshift is equal to the number of orbits at that redshift, including both merger and non-merger pairs.
A single, non-merging pair with first infall at z=3 will contribute to the number of pairs at all redshifts from z=0-3.
Figure <ref> shows the number of low-mass and high-mass pairs as a function of redshift from z=0-6.
Low-mass pairs (green solid line) are most numerous between z=1.25-2, while high-mass pairs (pink solid line) are most numerous between z=0-1.
The dashed lines show the number of unique pairs that merge prior to z=0 for each sample.
The number of pairs that merge decreases to zero at z=0 since many pairs that exist at low redshift will have merger timescales that span beyond the end of the simulation (i.e., z=0).
As subhalos experience first infall into a FoF group, the number of pairs increases.
On the other hand, mergers simultaneously lead to a decrease in the number of pairs.
The number of low-mass pairs increases from ∼600 pairs at z=6 to ∼6,600 at z=2.
The decrease in the number of pairs after z=2 means that the number of low-mass pairs that are merging at each redshift is larger than the rate at which pairs are added to the sample.
The number of high-mass pairs also increases until z=1, at which point it remains approximately constant from z=0-1 at ∼1000 pairs.
There is a slight decrease in the number of high mass pairs at the very lowest redshifts z<0.1, where the mergers begin to outnumber the new pairs being added at each redshift.
In Fig. 1 of our previous work, the number of low-mass pairs peaks (with ∼3,000 pairs) and begins to decrease at z∼2, and the number of high-mass pairs is approximately constant between z=0-1, peaking at ∼700 pairs.
We find the same behavior with our pair sample, with low-mass pairs peaking at z=2 and high mass pairs leveling off between z=0-1.
However, in this study, the number of pairs at a given redshift is higher than in our previous work, since the orbit catalog includes a unique orbit for every pair from the previous work.
For example, a pair that passes the selection criteria only at z=1 will only be counted as a pair at z=1 in the previous work, while, in this study, it may be counted at many more redshifts since we can follow the orbit forwards and backwards in time.
§.§ Merger Fraction
We calculate the merger fraction by dividing the number of pairs that merge (dashed lines in Fig. <ref>) by the total number of merging and non-merging orbits (solid lines) at a given redshift.
As before, an orbit with first infall at z=2 and merger at z=1 will be included in the merger fraction calculation for redshifts z=1-2.
Note, this definition of the merger fraction differs from that typically used in observational studies, where merger fractions are computed from close pair fractions with a correction term <cit.>.
Figure <ref> shows the fraction of isolated pairs of low-mass (green) and high-mass (pink) pairs that merge before the end of the simulation as a function of redshift.
At redshifts z>2, the merger fraction for low-mass and high-mass pairs is greater than 0.9.
The merger fraction for both mass ranges declines to zero at z=0, due to the very small fraction of pairs at low redshift (z<1) that have short enough merger timescales to merge before z=0.
The merger fraction of low-mass and high-mass pairs is the same from z=0-0.5 and z=2.5-6.
Between z=0.5-2.5, the high-mass merger fraction is larger than the low-mass merger fraction, and the knee of the merger fraction occurs at a lower redshift.
We discuss this difference in more detail in Sec. <ref>.
Note that the merger fraction defined here is a measure of the fraction of isolated pairs that merge before z=0. It is not, however, a measure of the fraction of all low-mass and high-mass pairs in all environments that will merge.
§ RESULTS: THE MASS AND REDSHIFT DEPENDENCE OF MERGER TIMESCALES
Using our sample of isolated low-mass and high-mass pairs in the TNG100 simulation, we calculate the merger timescales, or time until merger, for all of the merging pairs in the sample.
The merger timescale of a pair is defined to be the amount of time that elapses between the redshift at which pairs are selected and the merger redshift.
Only the merging pairs will be considered for the remainder of the analysis.
In Sec. <ref>, we explore how the merger timescale changes as a function of pair separation across redshifts for low-mass and high-mass pairs.
In Sec. <ref>, we investigate the median time until merger for all pairs as a function of redshift.
We will additionally examine the merger timescale's dependence on a variety of separation criteria.
Separation criteria are applied to pairs at each redshift independently, such that a pair in the 10-50 kpc bin at one redshift will not be part of the sample used at redshifts where its separation is >50 kpc.
§.§ Separation Dependence of Merger Timescales
We calculate the time until merger as a function of separation for low-mass and high-mass pairs at a variety of redshifts from z=0.5-6.
Binning the pairs by separation, we can study how the merger timescale changes for pairs selected at different points in their orbits.
We bin our pair sample by merger timescale, in bins of 0.5 Gyr, and by separation, in bins of 20 kpc, and only consider the pairs with separations >10 kpc.
Figure <ref> shows a heatmap distribution of pairs as a function of merger timescale vs. separation for low-mass (top) and high-mass (bottom) pairs at redshifts z=(0.5,1,1.5,2,3,4,5,6).
The colorbars to the right of each figure show the percentage of the population of pairs at that redshift that are in each bin.
The number of pairs at each redshift is printed in the bottom right corner of each panel.
The horizontal line in the first panel shows the time remaining in the simulation until z=0, above which no merging pairs exist.
Additionally, each panel with more than 50 pairs includes a linear fit to the data prior to binning.
We fit a line of the form y=C*x, where y is the time until merger, x is separation, and C is the slope that minimizes the mean squared error.
We do not include a fitting parameter for the intercept.
The slope of each linear fit is given in Table <ref>.
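The zero-intercept fit has a simple closed form, sketched below: the least-squares slope of y = Cx is C = Σxy / Σx².

```python
# Sketch of the zero-intercept fit t_merge = C * r_sep used in each panel;
# the slope minimizing the mean squared error is C = sum(x*y) / sum(x*x).
import numpy as np

def fit_through_origin(r_sep, t_merge):
    x = np.asarray(r_sep, dtype=float)
    y = np.asarray(t_merge, dtype=float)
    return np.sum(x * y) / np.sum(x * x)
```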
We find that the merger timescale for low-mass and high-mass pairs is positively correlated with the separation of the pair at a given redshift.
The slope of the linear fit increases with decreasing redshift, meaning that the merger timescale increases with decreasing redshift for a given separation.
For example, pairs at z=4 with separations between 50-100 kpc tend to merge in less time than pairs with the same separations at z=0.5.
For the same physical separation, the merger timescale of high-mass pairs is shorter than that of low-mass pairs at each redshift.
In addition, the spreads in the merger timescales and pair separations are smaller at higher redshift, for both low- and high-mass pairs.
For example, the spread of merger timescales for low-mass pairs at z=1 (4273 pairs) goes from 0-8 Gyr, while pairs at z=4 (2218 pairs) have merger timescales between 0-3 Gyr.
Likewise, the spread of separations at z=1 is 0-200 kpc, but at z=4 the spread is smaller, between 10-125 kpc.
The high-mass pairs also have a larger spread in the distribution of merger timescales and separations at low redshift than at higher redshift.
This is due to the growth of halos over time, and thus an increasing virial radius for FoF groups at lower redshifts.
Fig. <ref> can be used to estimate the merger timescale of an isolated pair at a given redshift.
For example, a low-mass pair at z=2 with r_sep ∼ 75 kpc will merge in 0-5 Gyr, with a most likely time to merger of around 2 Gyr.
On the other hand, a high-mass pair at z=2 with r_sep ∼ 75 kpc will merge in 0-2 Gyr, with a most likely merger timescale of around 0-1 Gyr.
§.§ Redshift Dependence of Merger Timescales
In this subsection, we study the merger timescales of low-mass and high-mass pairs as a function of redshift.
First, we consider the pair sample as a whole, and quantify the median merger timescale for all merging low- and high-mass pairs from z=0-6.
We then consider two sets of separation criteria to create separation-selected subsamples, as in our previous work, and study the impact of different selection criteria on the resulting merger timescales.
§.§.§ Full sample
We calculate the merger timescales for pairs at each redshift, and quantify the median and spread on the merger timescale as a function of redshift in Fig. <ref>. As stated in Section <ref>, merger timescales begin at the time of first infall and conclude at the merger redshift (see Section <ref>).
We include the full catalog of merging orbits at each redshift, only excluding separations <10 kpc to limit the impact of subhalos becoming indistinguishable in the Subfind catalogs.
Figure <ref> also shows the 1st and 3rd quartiles, indicated with shaded regions.
Low-mass merger timescales (green) and high-mass merger timescales (pink) are roughly equivalent at all redshifts, which implies that mergers do not proceed in fundamentally different ways at different mass scales.
We explore this point further in Sections <ref> and <ref>.
The median time until merger is ∼0.5-0.7 Gyr at z∼6, rises to a peak of 2.3-2.4 Gyr at z∼0.6, and then decreases to zero at z=0.
The abrupt decrease is artificial rather than a true physical feature of merger timescales.
As z→0, there is an increasingly small fraction of pairs that merge before z=0, as seen by the dashed lines in Fig <ref>.
These mergers must proceed on shorter timescales by definition, as they were selected as pairs that must merge prior to z=0 (i.e., a pair at z=0.1 that merges by z=0 can have a maximal merger timescale of 1.47 Gyr, whereas a pair at z=1 can have a maximal merger timescale of 8 Gyr).
This is shown by the dotted black line on the left of Fig. <ref>, which shows the lookback time of the simulation as a function of redshift.[The lookback time at a given redshift z_n is equivalent to the time elapsed between z_n and z=0. Thus, merging pairs cannot have merger timescales larger than the lookback time of the simulation at a given redshift.]
To understand the functional form of merger timescales between z=1-6, which is shown by the thin black line in Fig. <ref>, we consider the following derivation.
In principle, merger timescales for a body moving through a homogeneous field of collision-less matter can be analytically derived from the Chandrasekhar formula for dynamical friction <cit.>.[Specifically, this derivation considers the merger timescale to be the time that elapses between the secondary subhalo crossing into the virial radius of the primary subhalo and the secondary's coalescence with the primary.]
Departures from the idealized case introduce perturbations to that solution, the validity of which has been tested in cosmological hydrodynamic and N-body simulations <cit.>.
Such studies have found that the merger timescale in N-body hydrodynamic simulations is of the form
τ_merge = (A(Θ)/lnΛ) (M_1/M_2) τ_dyn,
where M_1 and M_2 are the primary and secondary subhalo masses, A(Θ) is a constant for a given orbital configuration, lnΛ is the Coulomb logarithm, taken to be lnΛ = ln(1+M_1/M_2), and τ_dyn is the dynamical timescale at the virial radius of the primary subhalo.
The dynamical timescale is related to the crossing time at the virial radius, and is given by
τ_dyn = R_vir/V_circ(R_vir),
where V_circ(R_vir) is the circular velocity at the virial radius of the primary subhalo.
Note that the dynamical time can thus be rewritten as
τ_ dyn = (Gρ_crit)^-1/2 = (3 H^2(z)/8π)^-1/2.
In our study, we keep the stellar and FoF group mass criteria fixed as a function of redshift, such that for a given pair, the merger timescale scales with redshift as
τ_merge∝τ_dyn∝ H(z)^-1,
assuming that there is no (or weak) redshift dependence of the distribution of orbital parameters.
Thus, the merger timescale is expected to scale with the Hubble time at a given redshift.
In Fig. <ref>, the black dashed line shows the Hubble time 1/H(z) multiplied by β=0.35.
We did not perform a fit for this multiplicative constant, since we are interested in investigating the behavior of the merger timescale as a function of redshift rather than the specific values of the merger timescale.
The redshift evolution of our median merger timescales is consistent with the scaling with 1/H(z) between z=∼1-6.[H(z) is calculated using the same cosmology as the TNG100 simulation, from <cit.>. Specifically, Ω_M=0.31 and Ω_Λ=0.69.]
We note that the 1/H(z) functional form deviates from the median results at low redshift, but is still within the errors. The deviation follows the trend of decreasing merger fractions for the respective pair samples illustrated in Figure <ref>. We expect that the deviation of the median results from 1/H(z) thus owes to insufficient time for all pairs to merge, rather than indicating that the true functional form should be shallower than 1/H(z).
It is notable that such a simple scaling holds for a complex simulation like TNG100 where subhalos are far from isotropic spherical distributions of mass.
It is particularly interesting as well that low-mass and high-mass pairs have the same merger timescales and follow the 1/H(z) redshift scaling.
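For reference, the scaling curve plotted in the figure can be reproduced as follows; this is a sketch that assumes astropy's built-in Planck 2015 cosmology as a close stand-in for the exact TNG100 parameter set, with β=0.35 as quoted above (not a fitted parameter).

```python
# Sketch of the tau_merge ∝ 1/H(z) curve: beta / H(z) in Gyr, z = 0-6.
import numpy as np
from astropy.cosmology import Planck15

z = np.linspace(0.0, 6.0, 200)
beta = 0.35                                          # multiplicative constant
tau_merge = beta / Planck15.H(z).to("1/Gyr").value   # median timescale [Gyr]
```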
§.§.§ Physical Separation Selected Pairs
Since pair samples are typically selected via separation criteria in both simulations and observations, we study the impact of different separation-based selection criteria on inferred merger timescales.
Separation-selected samples have a lower separation limit of 10 kpc.
This lower separation criterion is also commonly applied to observationally selected pairs in studies of merger fractions and merger rates <cit.>.
The first set of separation criteria selects only pairs at a given redshift that have physical 3D separations greater than 10 kpc and less than [50, 70, 100, 150, 200, 300] kpc, yielding six pair subsamples.
Pairs that are selected via the <50 kpc separation cut will be included in the <70 kpc subsample, and so on.
These separation criteria do not vary as a function of time or mass, and are applied equivalently to the low-mass and high-mass samples at all redshifts.
For example, an orbit in the separation bin <50 kpc at z=2 will not necessarily be in that same bin at other redshifts.
The top panel of Fig. <ref> shows the merger timescale versus redshift for each of the low-mass (green) and high-mass (pink) pair subsamples with separations less than the physical separation listed above each panel.
The same Hubble time redshift scaling from Sec. <ref> is shown by the thin black line in each panel.
We first note that the merger timescale peaks around redshift z∼0.5 for all low-mass and high-mass subsamples.
In addition, all subsamples show the same decline of mean merger time as z→0, as discussed in the previous section.
These traits are shown in Fig. <ref>, meaning they are features that are independent of separation criteria.
The subsamples with maximum separations of [50, 70, 100] kpc result in median merger timescales that are higher for low-mass pairs than for high-mass pairs at all redshifts.
The difference in the merger timescale is up to 0.8 Gyr longer for low-mass pairs than for high-mass pairs at the same redshift for the same separation cut.
The offset between the low-mass and high-mass merger timescales decreases for the largest separation cut of <300 kpc.
In the top rightmost panel, the median merger times converge for nearly all redshifts, similar to the merger timescales from the full sample shown in Fig. <ref>.
As the selection criteria increases (from left to right), each subsample contains a larger fraction of the full sample.
The merger timescale for both low-mass and high-mass pairs increases with an increasing maximum separation cut.
This follows directly from Sec. <ref>, where we found that the time until merger is positively correlated with increasing separation for all pairs.
Including a higher fraction of larger-separation systems in each subsample increases the median time until merger at all redshifts.
The median merger timescale for low-mass pairs does not change significantly for any separation cut above 150 kpc, which is larger than the average virial radius of low-mass systems at all redshifts.
On the other hand, high-mass merger timescales tend to increase as separation increases.
High-mass pairs tend to have higher separations than low-mass pairs, as shown in Sec. <ref>, due to the larger size of the high-mass subhalos.
§.§.§ Scaled Separation Selected Pairs
We calculate the merger timescales for six additional subsamples of pairs, applying separation criteria that scale with both redshift and mass.
The scaled separation, which we define in Section <ref>, is the separation of a pair divided by the virial radius of the pair's FoF group, R_vir.
Scaled separation criteria select the equivalent fraction of the full volume of each FoF group, regardless of mass or redshift, whereas physical separation criteria will select the same volume around each primary, with no account for the growth of dark matter halos or spatial distributions of satellites.
The scaled-separation-selected pairs have physical separations greater than 10 kpc and scaled separations less than [0.25, 0.5, 1, 1.5, 2, 2.5].
As in the previous section, the selection criteria are applied at each redshift independently.
As detailed in our previous work, the median virial radius for low-mass FoF groups at z=[0,1,2,3,4] is approximately [134, 85, 59, 43, 33] kpc, and for high-mass FoF groups is approximately [348, 206, 134, 97, 76] kpc.
Thus, choosing a scaled separation cut of r_sc < 1 at z=1 will select low-mass pairs with separations ≲85 kpc and high-mass pairs with separations ≲206 kpc.
The bottom panel of Fig. <ref> shows the median time until merger versus redshift for low-mass (green) and high-mass (pink) pair subsamples with scaled separations less than those listed at the top of each plot.
As in the top row, the thin black line in each panel shows the Hubble time scaling derived in Sec. <ref>.
Quite unlike the physical separation selected subsamples in the top panels, using a scaled separation criteria results in nearly identical median merger timescales for low-mass and high-mass pairs at all redshifts.
We find that the median time until merger at z=1 for each scaled separation cut is approximately [0.64, 0.94, 1.25, 1.55, 1.85, 1.96] Gyr, respectively, for both the low-mass and high-mass samples.
The median merger timescale starts off small at z>4, then increases to a peak around z∼0.6, before quickly decreasing to zero at z=0.
This redshift evolution is similar to that of the full sample in Fig. <ref>.
The median time until merger at a given redshift increases with the scaled separation cut.
The low-mass and high-mass merger timescales increase for separation criteria that include a larger fraction of the virial radius, and thus larger-separation pairs, but recover the behavior of the full sample by a separation cut of r_sc < 2.
In addition, the slope of the merger timescale from z=6→1 is steeper for high-mass pairs in the scaled separation subsamples than the physical separation subsamples.
This is especially noticeable in the first panel of the top and bottom rows, where the merger timescale of high-mass pairs is approximately flat for the physical separation cut <50 kpc.
§ DISCUSSION
In this section, we will explore the implications and broader impacts of our merger timescale analysis, and put our study in context with previous work.
In Sec. <ref>, we return to one of the key questions posed in our previous work: what are the underlying physical mechanisms that drive the difference in pair fraction evolution between low- and high-mass pairs?
In Sec. <ref>, we will discuss the non-merger portion of our orbit sample and the difference between the merger fractions in the low-mass and high-mass regimes.
Finally, in Sec. <ref>, we discuss the implications of our work for future observational and theoretical merger fraction and merger rate studies and provide suggestions for a self-consistent framework moving forward.
§.§ Connections to Pair Fractions
In our previous work <cit.>, we found that the pair fractions of isolated low-mass and high-mass pairs evolve very differently from z=0-2.5 (see their Fig. 3).
Low-mass major pair fractions are constant for z>2.2, but decrease by almost 70% from z=3 to z=0.
High-mass major pair fractions slowly increase from z=6 to z=0, with a more significant increase from z=0.25 to z=0.
In short, the redshift evolution of pair fractions is opposite for low-mass pairs (which have a decreasing pair fraction with decreasing redshift) and high-mass pairs (which have an increasing pair fraction with decreasing redshift) from z=3→0.
Our previous work suggested a hypothesis for the difference in the behavior of the pair fractions: that perhaps low-mass and high-mass pairs have different merger timescales, which could lead to a difference in pair fractions from high to low redshift.
However, in the present study, we have shown that the merger timescales of the full sample of low-mass and high-mass pairs are approximately equal at all redshifts, meaning that the opposing behavior of low-mass and high-mass pair fractions is not a result of different merger timescales.
Rather, we suspect that the build-up of larger structures due to hierarchical formation under ΛCDM leads to a reduced number of low-mass groups that enter the analysis at lower redshift.
Additionally through this study, we have found that a significant fraction of the pairs selected in our previous work do indeed merge (>90% of pairs for z>1.6), so that the rate at which mergers occur is likely larger than the rate at which isolated low-mass pairs form, particularly at low redshift, thus leading to the quickly declining pair fraction measured in our previous work.
§.§ Non-merger population
In our catalog of 22,213 low-mass pairs and 3,029 high-mass pairs, only 3700 (16.66%) low-mass pairs and 903 (29.71%) high-mass pairs do not merge by z=0 (i.e. are classified here as “non-mergers").
In Sec. <ref>, we found that the merger fraction of low-mass and high-mass pairs at z>3 is >0.95 and the merger fraction decreases substantially at z<3.
While the decline in the merger fraction at z≲1 is primarily due to the limited time that a pair will have to merge before the end of the simulation, we also find that between z∼0.6-2.5 the merger fraction of low-mass pairs is lower than that of high-mass pairs.
As in the previous subsection, one possible hypothesis that could explain the merger fraction differences is that low-mass pairs have shorter merger timescales than high-mass pairs, such that their mergers would occur at earlier times in the simulation leading to a decrease in the merger fraction at higher redshifts compared to high-mass pairs.
However, we have shown that the merger timescales of our full low-mass and high-mass pair sample evolve similarly for z=0-6, which means that the difference in the low- and high-mass merger fractions between z=0.5-2.5 is not a result of a difference in merger timescales.
We suspect that the cause of the difference between low- and high-mass merger fractions is not a difference in the merging population itself, but rather in the non-merger population and its evolution over time.
As we do not categorize our non-merging sample into “likely mergers" and “fly-by interactions" subsamples for this study (see Fig. <ref>), we cannot comment on how these two populations individually contribute to the difference in merger fractions.
However, one possible explanation could be that a larger fraction of infalling low-mass pairs at lower redshift results in fly-by interactions.
This could be the case if the velocity at which the secondary subhalo enters the primary's FoF group becomes increasingly large compared to the virial velocity[The virial velocity is the value of the circular velocity at the virial radius.], resulting in fewer bound systems.
§.§ Implications and Suggestions for Future Observational and Theoretical Studies of Pairs
While we are unable to make direct comparisons with previous observational studies of merger fractions and merger rates, as our study uses true physical separations from the simulation rather than projected separations, we can still draw meaningful conclusions about the application of our results to future studies of merger timescales and merger rates.
§.§.§ Pair Selection Criteria and Merger Rates in Illustris
Comparisons between studies of pair fractions, merger fractions, and merger rates, especially across cosmic time and different mass scales, are often challenged by the implementation of different selection criteria in each study.
In some observational studies, pair selection criteria are often set by the observational parameters of the survey itself.
For example, limiting magnitudes and completeness limits can dictate which range of stellar masses and stellar mass ratios are considered <cit.>. Additionally, specific separation criteria may be chosen to avoid fiber collisions in a spectroscopic survey <cit.>.
In theoretical studies, it is more straightforward to adopt several different pair selection criteria simultaneously for comparisons to specific observational studies <cit.>.
However, this then limits theoretical studies to a subset of pair selection criteria specific to their target comparison studies.
In recent years, some work has aimed to standardize pair selection criteria for future studies, in particular, to facilitate a fair comparison between theory and observations.
One such study is <cit.>, which created mock catalogs from the original Illustris-1 simulation to analyze the merger probability of galaxy pairs as a function of (both physical and projected) separation and relative velocity.
They find that selecting pairs with projected separations of 5 ≤ r_proj ≤ 50 kpc and projected relative velocities of v_proj ≤ 300 km s^-1 selects all pairs that have a greater than 30% chance of merging before z=0.
These selection criteria are calibrated to select a population of pairs that are fairly likely to merge.
Using their projected separation vs velocity analysis of merger probabilities, they also develop a weighting scheme to determine the likelihood of merger as a function of projected separation and velocity, which provides a way to more accurately calibrate merger fractions from close pair fractions.
While their study considers the impact of stellar mass and redshift on the probability of merger, their suggested pair selection criteria do not vary with mass or redshift, which we have shown is crucial for interpreting and comparing pair fractions and merger timescales.
<cit.> constructed galaxy merger trees to calculate the galaxy merger rate as a function of the stellar mass of the descendent subhalo and redshift in the original Illustris simulation.
They find that the galaxy merger rate increases with increasing redshift for descendant galaxy masses >10^10 M_⊙.
These findings are consistent with predictions of the galaxy merger rate from semi-empirical models <cit.>, and with merger rates derived from observational merger fraction measurements calibrated by the corrected observability timescales of <cit.>.
Additionally, <cit.> find that the major merger rate at z∼0.1 increases with increasing descendant subhalo mass.
While the redshift evolution of merger rates is broadly consistent with observations, they find significant discrepancies with major merger rates of galaxies with M_* < 10^10 M_⊙ from <cit.>, which used morphological merger signatures to estimate merger rates at low z.
As the <cit.> work is self-consistently carried out at all redshifts and descendant masses, <cit.> suggest that the discrepancy arises from significant uncertainty in the observability timescales of galaxies with lower mass (M_* < 10^10 M_⊙), since the timescales in <cit.> are calibrated using an extrapolation of gas fraction data that is only available for M_* > 10^10 M_⊙.
These results imply that the timescales used to convert observational merger fractions (determined either by weighted close pair fractions or by morphologically selected pairs) are sensitive to the mass ranges and redshifts at which they are valid, and attempting to make comparisons between theoretical predictions and observational measurements can lead to seemingly incompatible results.
Our work provides a unique framework that will allow for more robust comparisons of theoretical and observational samples.
§.§.§ Implementing Scaled Separation Pair Selection Criteria in Future Studies
In our previous work, we studied the impact of different separation criteria on the recovered pair fractions of low-mass and high-mass pairs, and found that features of the redshift evolution of the pair fractions can only be distinguished by employing scaled separation criteria <cit.>.
In the present work, we explore how this same set of selection criteria impacts the recovered merger timescales.
We find that the median merger timescales of all isolated low-mass and high-mass pairs evolve nearly identically between z=0-6.
All subsamples of our pair catalog from scaled separation cuts result in identical behavior of the low-mass and high-mass merger timescales as a function of redshift as well.
On the other hand, selecting pairs using a static physical separation criterion results in merger timescales that evolve differently between the two mass scales and can differ by up to 0.8 Gyr.
These results translate to important implications for observational studies of merger rates, which are typically determined using physical separation-selected close pairs and merger timescales.
Our work has shown that both of these quantities can vary significantly for different pair selection criteria and that physical separation criteria can eliminate the distinguishing features of pair fractions or the equivalence of merger timescales between low-mass and high-mass pairs.
We therefore promote the adoption of separation criteria that vary as a function of mass and redshift for future pair studies.
In particular, we point out the importance of utilizing pair selection criteria that permit fair comparisons of pair properties in observational studies (i.e. pair fractions, merger timescales, merger rates, etc.), particularly when the goal of such a study is to quantify and compare the redshift evolution of pair properties at different mass scales.
Specifically, we suggest employing separation criteria based on the scaled separation of each pair, given in Equation <ref>.
The application of a scaled separation as demonstrated here can be applied to observational studies.
Since estimates of a pair's stellar masses are needed to quantify the mass ratio of the pair (which is commonly computed for merger rate studies), no further observational information is needed to develop mass and redshift evolving separation cuts.
We provide a step by step example of this application as follows. First, compute the associated dark matter masses from the observed stellar masses of the pair using a SMHM relation <cit.>.
Next, calculate the virial radius of a dark matter halo with a virial mass equal to the sum of the dark matter masses of the pair.[We showed in our previous work that the virial radius of a halo with mass given by the combined dark matter halo masses of the primary and secondary recovers the virial radius of the FoF group with approximately 98% accuracy. The virial radius can be calculated from
R_vir = [3 M_vir/(4πΔ_cρ_c)]^1/3,
where ρ_c is the critical density of the Universe, and Δ_c is the overdensity constant <cit.>.]
Finally, determine the physical separation for each pair individually that corresponds to the scaled separation criteria adopted in the study.
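To make these steps concrete, the short Python sketch below evaluates the virial radius of the preceding equation for the combined dark matter mass of a pair and applies a scaled separation cut. All names, the density normalization, and the choice Δ_c = 200 are illustrative assumptions, not values taken from this work.

import numpy as np

def virial_radius_kpc(m_halo_msun, rho_c_msun_kpc3, delta_c=200.0):
    # R_vir = [3 M_vir / (4 pi Delta_c rho_c)]^(1/3), cf. the equation above
    return (3.0 * m_halo_msun
            / (4.0 * np.pi * delta_c * rho_c_msun_kpc3)) ** (1.0 / 3.0)

def passes_scaled_cut(sep_kpc, m_dm1_msun, m_dm2_msun, rho_c_msun_kpc3,
                      scaled_sep_max=1.0):
    # Step 2: virial radius of a halo with the pair's combined dark matter mass
    r_vir = virial_radius_kpc(m_dm1_msun + m_dm2_msun, rho_c_msun_kpc3)
    # Step 3: pair-specific physical separation implied by the scaled criterion
    return sep_kpc < scaled_sep_max * r_vir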
The process of computing the dark matter halo masses of the observed galaxies typically introduces systematic error to an observational study.
When converting from stellar mass to dark matter halo mass, the SMHM relationship can be sampled, i.e., by computing many realizations of the dark matter mass of each galaxy, in a similar fashion as in our previous work, to derive the associated spread introduced by the abundance matching process.
As a concluding note, we remind readers that our results are explicitly for isolated pairs, as outlined in Sec. <ref>.
One strength of this approach is that we have been able to study dynamics that are inherent to the pairs themselves, rather than those that are a product of the environment.
Low-mass and high-mass pair merger timescales in high density environments may evolve differently than in our findings.
We leave the extension of this analysis to more high-density environments as the focus of future work.
§ SUMMARY AND CONCLUSIONS
In this study, we construct a sample of the orbits of isolated low-mass (10^8 < M_*/M_⊙ < 5×10^9) and high-mass (5×10^9 < M_*/M_⊙ < 10^11) major pairs (stellar mass ratio > 1:4) from z=0-6 in the TNG100 simulation.
Orbits of pairs, i.e. the 3D physical separation between the pair as a function of time, are defined from the redshift at which the primary and secondary subhalo first share a common FoF group (i.e., `first infall') to either z=0 or the redshift at which the pair merges.
The sample consists of 22,213 unique low-mass orbits and 3,029 unique-high mass orbits, for which we quantify the merger fraction as a function of redshift.
We calculate the merger timescales of low-mass and high-mass major pairs as a function of separation and separately as a function of redshift.
Our goal is to identify the merger timescales of pairs in a cosmological framework and to compare the redshift evolution of the merger timescale between low-mass and high-mass pairs.
Additionally, motivated by our previous work on the pair fractions of low-mass and high-mass pairs in <cit.> where we showed that the evolution of the pair fraction is sensitive to the separation criteria used to select the pairs, we seek to determine the corresponding impact of various selection criteria on the merger timescales of pairs.
Specifically, we look at two sets of selection criteria, one which selects pairs via a static physical separation cut, and the other which selects pairs based on a separation cut that evolves with redshift and mass.
This is especially important for studies that seek to study pair fractions, merger fractions, merger timescales, and merger rates at different mass scales and/or as a function of redshift.
Our main conclusions are as follows:
* The merger fraction of physically associated low-mass and high-mass pairs is high (>0.9) at z≳1.5 (see Fig. <ref>). However, the merger fraction declines rapidly to zero as z→0, due to the finite length of the simulation, which artificially reduces the number of pairs with enough time left to merge at low redshifts.
* The merger timescale of a pair at a given redshift increases with increasing pair separation. Additionally, for a given physical separation, high-mass pairs have a shorter merger timescale than low-mass pairs at the same redshift. For example, low-mass pairs at z=1 with separations <100 kpc merge within 6 Gyr, while high-mass pairs in the same separation range merge within 3 Gyr (see Fig. <ref>).
* The median merger timescale peaks around z∼0.6 at ∼2.1-2.4 Gyr for both low-mass and high-mass pairs.
At redshifts z≳1, the median merger timescale for the full sample of low-mass and high-mass pairs that merge prior to z=0 declines with increasing redshift at a rate proportional to 1/H(z) (see Fig. <ref>).
At z≲1, the maximum merger timescale is constrained by the time remaining in the simulation for the pair to merge, which causes the median merger timescale to decline to zero at z=0.
The decrease in the merger time at low redshifts is artificial and not representative of the true merger timescales of pairs at low redshift.
* When pairs are selected via a scaled separation criterion, namely the pair separation scaled by the virial radius of the FoF group of the primary, low-mass and high-mass merger timescales are nearly identical at all redshifts. This holds for all scaled separation criteria considered (0.25, 0.5, 1, 1.5, 2, 3). At z=[0, 1, 2, 3, 4], a scaled separation of 1 corresponds to an average physical separation of [134, 85, 59, 43, 33] kpc for low-mass pairs and [348, 206, 134, 97, 76] kpc for high-mass pairs.
* When pairs are selected via a physical separation criterion (one that does not vary with redshift or with mass), the median merger timescales of low-mass and high-mass pairs differ by up to 0.8 Gyr. Thus, using the same merger timescale for low-mass and high-mass close pair samples selected via the same physical separation cuts will result in biased merger rate estimates.
Studies have found that merger rates of galaxy pairs vary with redshift and with the mass of the pair <cit.>.
In our previous work, we found that the pair fraction of isolated pairs in TNG100 likewise evolves with redshift and mass <cit.>.
In this paper, we show that the merger timescales of major pairs in TNG100 vary with redshift, but that low-mass and high-mass pairs have equal merger timescales at all redshifts from z=0-6 if the correct separation selection criteria is used to pick equivalent samples of pairs.
These works together give a comprehensive framework for inferring merger rates via pair fractions and merger timescales at redshifts from at least z=0-6, and for quantifying the differences between low-mass and high-mass pairs.
Indeed, separation selection criteria that scale with the mass and redshift of the target system are crucial for interpreting pair properties in a self-consistent way.
In the future, observatories such as the Rubin Observatory, the Roman Space Telescope, JWST, and future ELTs, will detect an abundance of low-mass pairs at a wide range of redshifts, and our theoretical framework for interpreting these observations will be more critical than ever.
§ ACKNOWLEDGEMENTS
K.C. and G.B. are supported by NSF CAREER award AST-1941096.
E.P. acknowledges financial support provided by NASA through the NASA Hubble Fellowship grant #HST-HF2-51540.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
P.T. acknowledges support from NSF-AST 2346977.
The IllustrisTNG simulations were undertaken with compute time awarded by the Gauss Centre for Supercomputing (GCS) under GCS Large-Scale Projects GCS-ILLU and GCS-DWAR on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS), as well as on the machines of the Max Planck Computing and Data Facility (MPCDF) in Garching, Germany.
This work is based upon High Performance Computing (HPC) resources supported by the University of Arizona TRIF, UITS, and Research, Innovation, and Impact (RII) and maintained by the UArizona Research Technologies department.
We respectfully acknowledge the University of Arizona is on the land and territories of Indigenous peoples. Today, Arizona is home to 22 federally recognized tribes, with Tucson being home to the O’odham and the Yaqui. Committed to diversity and inclusion, the University strives to build sustainable relationships with sovereign Native Nations and Indigenous communities through education offerings, partnerships, and community service.
|
http://arxiv.org/abs/2409.02511v1 | 20240904081824 | Remote Analysis of Femoroacetabular Impingement by a triade of label-free optical spectroscopy techniques | ["Martin Hohmann", "Lucas Kreiss", "Faramarz Dehghani", "Dongqin Ni", "Max Gmelch", "Oliver Friedrich", "Lorenz Büchler", "Michael Schmidt"] | physics.optics | ["physics.optics"] |
Remote Analysis of Femoroacetabular Impingement by a triade of label-free optical spectroscopy techniques
Martin Hohmann^1,2,*,***, Lucas Kreiss^2,3,**,***, Faramarz Dehghani^4,
Dongqin Ni^1,2, Max Gmelch^2, Oliver Friedrich ^2,3,
Lorenz Büchler^5,
Michael Schmidt^1,2
September 9, 2024
========================================================================================================================================================================
^1 Institute of Photonic Technologies (LPT), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Konrad-Zuse-Straße 3/5, 91052 Erlangen, Germany
^2 Erlangen Graduate School in Advanced Optical Technologies (SAOT), Paul-Gordan-Straße 6, 91052 Erlangen, Germany
^3 Institute of Medical Biotechnology (MBT), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Paul-Gordan-Straße 3, 91052 Erlangen, Germany
^4 Institut für Anatomie und Zellbiologie, Martin-Luther-Universität Halle-Wittenberg, Große Steinstraße 52, 06097 Halle (Saale), Germany
^5 Department for Orthopaedics and Traumatology, Tellstrasse 25, 5001 Aarau, Switzerland
[email protected], **[email protected]
*** Shared first authorship
§ ABSTRACT
This paper introduces the combination of laser-induced breakdown spectroscopy (LIBS), Raman spectroscopy (RS), and diffuse reflectance spectroscopy (DRS) in the field of biomedical research. Thereby, the results from RS and LIBS are combined with previous DRS results. Used together, these non-invasive optical methods offer a thorough analysis of the absorptive and scattering behaviour, elemental composition, and molecular bindings of tissue, without causing considerable harm or changes to the sample and without requiring markers. The study centres on applying this triad of techniques to tissues affected by Femoroacetabular Impingement (FAI), a condition characterised by reduced hip mobility due to developmental deformities of the hip joint. The research results in a biochemical pathway model of the red staining caused by FAI, whose origin was unknown until now. This demonstrates that the approach may have significant implications for numerous medical applications and may support the exploration of complex chemical and biochemical pathways.
§ INTRODUCTION
For numerous biomedical problems, it is extremely advantageous to be able to merge information from different underlying aspects. Hence, a combination of technologies is needed to achieve this. Ideally, all technologies ought to be easy to use and have the potential for in vivo applications. Furthermore, in many cases, high spatial resolution is beneficial as well. Laser-induced breakdown spectroscopy (LIBS), alongside Raman spectroscopy (RS) and diffuse reflectance spectroscopy (DRS) as a complementary method, could offer a potential solution for measuring many different aspects of the tissue under investigation. All techniques possess the benefits of remote optical technologies: they do not cause significant damage to the object and do not require extensive protective measures for the tissue or patient under investigation or for the examiner. In addition, the proposed combination does not require the use of biochemical labels or fixation techniques.
Here, we show the combination of LIBS, RS, and DRS for the first time on human samples. This enables the collection of elemental and biochemical information in combination with information on absorbers such as hemoglobin or melanin. The paper is structured as follows: First, information about LIBS is provided. Afterwards, RS is discussed. As we have already published the DRS results on FAI <cit.>, only the findings that are relevant for FAI are briefly summarized and discussed in the corresponding sections.
For a LIBS measurement, an initial high-power laser pulse ablates a microscopic portion of the sample, and a plasma is generated from the material under investigation. The recombination of elements within the plasma then generates emission of light that can be collected and analyzed by a spectrometer to measure the atomic and molecular <cit.> emission lines. Research on LIBS, especially its ability to measure elemental composition, has been drawing more and more attention during the last decades <cit.>. LIBS allows the measurement of elemental concentrations in solids, liquids, and gases <cit.>. Even absolute quantification is possible with a precision of a few parts per million <cit.>. In recent years, LIBS has become more commonly used for medical applications with extremely promising results <cit.>.
RS targets the vibrational and rotational energy bonds of molecules in the sample. It is a widely used method in many fields <cit.>, such as the assessment of food quality <cit.>, the analysis of carotenoids in biological matrices <cit.>, or in cancer research <cit.>. Normally, the Raman spectra of organic molecules are analysed via specific peaks <cit.>. For example, it could be shown that the concentration of amide I in human bone significantly increases with age compared to amide III <cit.>. In this study, the combination of LIBS and RS is applied to tissue affected by Femoroacetabular Impingement (FAI) <cit.>.
FAI denotes a reduced range of motion of the hip and early bony contact of the femur with the acetabulum during movement. In many cases, the cause of the reduced range of motion is a developmental deformity of the hip joint with an increased size of the femoral neck, a deep acetabulum, or a combination of both. It is also likely that symptomatic FAI is a risk factor for the later development of hip osteoarthritis <cit.>. State-of-the-art treatment for FAI is to restore normal hip anatomy to regain normal range of motion of the hip, to eliminate symptoms, and to prevent future development of osteoarthritis. In open surgery, the entire cartilage of the femoral head often appears normal immediately after dislocation; upon exposure to air, however, the FAI-affected cartilage becomes reddened <cit.>. In contrast, healthy cartilage, unaffected by FAI, remains whitish upon air contact. This region is of paramount clinical interest as it defines the region of osteophyte development in advanced osteoarthritis. However, the origin of the reddening is not known.
In this study, the capabilities of three chosen technologies, namely RS and LIBS combined with the DRS results from our previous study <cit.>, in investigating FAI-affected tissue are discussed, as shown in figure <ref>a. While the combination of LIBS and Raman has been performed for a long time <cit.>, their application in the medical field or for biochemical analysis is nearly nonexistent according to a 2021 review by Dhanada et al. <cit.>. In this study, results from LIBS and RS are combined with previous DRS results, as summarized in figure <ref>. This combination is used to investigate the differences between healthy and FAI-affected tissue. Based on these results, we propose a biochemical pathway that explains the appearance of red staining when FAI-affected tissue is exposed to air. The same approach of combining LIBS, RS, and DRS can be applied to many other biomedical problems in the future.
§ RESULTS
The results section consists of four parts. The first two present the outcomes for LIBS and Raman. The third part offers a discussion of the conclusions drawn from the first two sections combined with the pertinent findings from our prior DRS investigations. In the last part, a model for the molecular changes in tissue during FAI development is formulated.
§.§ Laser-Induced Breakdown Spectroscopy (LIBS)
This section shows the mean spectra and the summarised results. The results of the analysis for each individual peak are given in the appendices (table <ref> and <ref>). The mean spectra are shown in figure <ref>. The peaks used for data analysis are labelled. Generally, for LIBS, the majority of peaks arise from calcium owing to its large number of emission lines. All calcium peaks are anticipated to exhibit the same pattern; therefore, only seven representative peaks are chosen for data analysis.
Figure <ref> displays the contrast between healthy and FAI-affected tissue. The hard tissue of FAI-affected patients presents a higher concentration of calcium than that of healthy individuals. All calcium peaks exhibited this behaviour. This points to increased levels of hydroxyapatite, the Ca-based main constituent of hard tissue, in FAI-affected samples, in accordance with MRI measurements <cit.> demonstrating increased hydroxyapatite levels in FAI-affected tissue. Both findings support a higher amount of calcium. Taken together with the reduction of carbon and hydrogen, this suggests a decrease in organic material in the FAI-affected tissue. Oxygen tends to be decreased in FAI-affected tissue despite its large presence in hydroxyapatite, owing to the reduced organic compounds in the FAI-affected tissue. Four out of five peaks demonstrate a decrease in nitrogen in the FAI-affected tissue, providing additional support for this conclusion. However, this element yields conflicting results and thus only provides minor additional support for the reduction of organic compounds. Potassium levels follow the same pattern as organic matter and, since potassium is largely found in cells, it can be regarded as a metric for organic matter as well.
The significance for soft tissue is lower owing to the smaller number of measurements. FAI-affected soft tissues contain less carbon, calcium, and hydrogen. A smaller amount of carbon indicates less organic tissue, while the decrease of calcium suggests that the mineralisation of the hard tissue draws calcium from the surrounding soft tissue, as the calcium in hard tissue is used to grow bone. Hydrogen levels are significantly lower in FAI-affected tissue, which is consistent with the carbon peak and also indicates a reduction in organic tissue. All analyzed peaks show a significant difference, except the calcium peaks, which exhibit low significance; however, all three calcium peaks show the same trend. The remaining differences between FAI-affected and healthy tissue will not be discussed further, as they are negligible.
§.§ Raman Spectroscopy (RS)
Figure <ref> depicts the averaged Raman spectra after denoising and baseline subtraction. Regardless of the statistical significance of each peak, there is an apparent decrease in almost all peaks that represent organic substances, such as the C-C stretch or the pyranose ring, in FAI-affected tissue. Concurrently, the FAI-affected tissue shows an increase in inorganic peaks (ν_3 PO_4^3- and ν_1 CO_3^2-). These findings align with those obtained from the LIBS data, wherein a lower amount of organic matter is present. Furthermore, a decrease of cartilage cells in FAI-affected tissue is likely according to the literature <cit.>.
The increased levels of ν_3 PO_4^3- and ν_1 CO_3^2- in FAI-affected tissue are caused by the increased concentration of hydroxyapatite <cit.>. As an increase in the 1661 cm^-1 peak of amide I indicates a change in collagen quality caused by ageing <cit.>, hydration/dehydration <cit.>, or radiological damage <cit.>, this observation demonstrates that the tissue affected by FAI is damaged. Nonetheless, it lacks statistical significance. Furthermore, there is a decline in the amount of hydroxyproline in FAI-affected tissue.
Additionally, the peak of phenylalanine is decreased in FAI-affected tissue. Phenylalanine is frequently transformed into tyrosine, which, through 3,4-dihydroxyphenylalanine (DOPA), produces melanin via enzymatic oxidation. Nevertheless, the peaks related to tyrosine do not indicate any substantial differences. Further, both pyranose ring peaks are significantly reduced in FAI-affected tissue, much more strongly than other organic substances; the same is true for phenylalanine. Consequently, the reduction of these two peaks implies that the additional loss is attributable to other biochemical reactions.
§ DISCUSSION AND FAI MODEL
This section is divided into two parts and primarily summarizes the findings from the LIBS and RS sections in conjunction with our prior DRS results <cit.>. The first part examines the alteration of bone tissue due to FAI. By comparison with several other studies, it is shown that a comprehensive understanding can be achieved through the combination of DRS, Raman, and LIBS. The second part summarises the factors not accounted for by the bone model and combines them with established biochemical pathways to explain the red discoloration of FAI-affected tissue upon exposure to air.
According to figure <ref>a, alterations in FAI-affected tissue occur in three distinct categories. Firstly, elevated levels of amide I reveal bone damage, which is an outcome of frequent impingement. Secondly, reduced hydroxyproline for collagen stabilization can be observed. The hydroxylation of proline by prolyl hydroxylase or other electron-withdrawing reactions greatly enhances collagen's conformational stability <cit.>. This indicates that the body responds to the damage by generating new bone. Moreover, there is an increase in calcium in hard tissue and a decrease in soft tissue, while the opposite holds true for potassium. Osteocytes predominantly store potassium intracellularly <cit.>. As bones strengthen, they accumulate greater amounts of calcium from the surrounding soft tissue. Following fractures or microfractures, callus formation takes place, triggering a strong response from blood cells and osteoblasts. Consequently, this leads to the reinforcement or repair of the bone affected by FAI. As a third observation, there is an increase in the amount of PO_4^3- and CO_3^2- in the tissue affected by FAI, which also shows an increase of bone tissue. This may be attributed to the association between FAI and deposition of calcium hydroxyapatite at the osteochondral junction shown by MRI techniques <cit.>. Additionally, LIBS indicates a lower concentration of organic tissue in general. Thus, more or hardened bone is present. In summary, the bone damage, the body's attempt to repair it, and the resulting increased levels of inorganic tissue are evidently shown by our measurements.
The previous model elucidates the majority of the findings. However, the bone model alone cannot account for the red discoloration upon exposure to air, the strong reduction of phenylalanine, and the presence of pheomelanin in FAI-affected tissue, which was shown by our previous DRS study <cit.>. To account for these points, the model has to be expanded, as depicted in figure <ref>b. It is established that pheomelanin results from the enzymatic oxidation of tyrosine catalysed by tyrosinase, through L-DOPA, dopaquinone, and cysteinyl-DOPA <cit.>. First, tyrosinase catalyses the initial and rate-limiting step in the cascade of reactions that lead to melanin production from tyrosine <cit.>. More precisely, tyrosinase hydroxylates tyrosine to DOPA and catalyses the oxidation of DOPA to dopaquinone <cit.>. Furthermore, the oxidation of cysteinyl-DOPA can occur under the appropriate conditions <cit.>. Apart from this reaction chain, it is known that the growth of bone requires the presence of tyrosine. However, under typical conditions in the body, the concentration of tyrosine required for melanin synthesis is approximately 20 times higher than that necessary for bone growth <cit.>. As the measured amount of tyrosine remains constant, an increase in tyrosine levels is expected in the tissue affected by FAI. Upon sudden contact with air, the concentration of oxygen rises, which promotes the reaction of tyrosine to pheomelanin, and the excess tyrosine is then used to produce pheomelanin. As a result, the tyrosine present in the bone affected by FAI reacts suddenly and produces pheomelanin, changing the tissue's colour to red.
§ MATERIALS AND METHODS
§.§ Patients
The LIBS analysis was performed on 21 biopsies from 18 patients. One or two osteo-chondral samples from the anterior femoral head-neck junction were taken from each patient during open surgery for treatment of Cam Type FAI at the Orthopaedic Clinic of the University of Bern, Switzerland. The patients were given full information about the study. The patients then gave their informed consent to participate in the study. The study was conducted in accordance with the tenets of the Declaration of Helsinki and was approved by the Cantonal Ethics Commission of Bern (KEK - decision of 15 October 2015).
After the procedure, the samples were stored in a 4% formaldehyde solution (Roti-Histofix 4%, Carl Roth). Due to the coloring of the sample after contact with air, a clear identification of healthy and stained areas was easily possible; therefore, histology was not performed. Each sample contained the colored degeneration and an unaltered white border around it, which served as a healthy reference. From each sample, 5-10 measurement points were taken across the entire sample. It should also be noted that the LIBS measurements were acquired from bone as well as from cartilage tissue. An in-depth analysis of the LIBS data then allowed a clear distinction between hard and soft tissue, based on molecular composition. This LIBS analysis was further supported by a Raman investigation of three additional samples.
Figure <ref>b shows the measurement set-ups used in this study. The LIBS measurements were performed on different samples than the Raman measurements. For both modalities, the measurement points were selected randomly over the sample to minimise experimental bias.
§.§ Laser-Induced Breakdown Spectroscopy (LIBS)
A frequency-doubled Nd:YAG laser (Q-smart 450, Quantel Laser, Les Ulis Cedex, France) with a repetition rate of 10 Hz, a pulse duration of 5 ns, and a wavelength of 532 nm was used for the LIBS measurements. The mean pulse energy was 80 mJ. A lens with a focal length of 50 mm was utilised to focus the beam just beneath the surface of the tissue sample. A 3D translation stage was employed to move the measurement spot on the sample into the laser focus. To capture the plasma cone produced by each laser pulse, the open tip of a UV-enhanced 50 μ m optical fibre was placed above the sample. The opposite end of the fibre was linked to a high-precision spectrograph (Mechelle ME 5000 Echelle, Andor, Belfast, UK). Equipped with an ICCD camera (A-DH334T-18F-03 USB iStar ICCD detector, Andor, Belfast, UK), the spectrometer has a spectral resolution (λ / Δλ) of 6000 within the 200 - 840 nm spectral range. The laser was directly connected to the spectrograph to trigger the detector measurements with each laser pulse.
All experiments were performed with a gate delay between the laser pulse and the detector of 1 μ s and a gate width of 0.2 ms. In total, 21 samples from 18 patients were investigated with 6 to 10 spots per patient. 25 individual measurements were taken from each spot, with each measurement involving the ablation of a portion of the tissue. In this way, soft tissue is measured first and hard tissue later, as the laser ablation drills into the tissue. Every ten LIBS pulses create a crater of approximately 300 μ m in depth and diameter <cit.>. Due to the irregular shape of the sample, it is possible that the ablation was not within the tissue, resulting in plasma generation from the air. To remove such measurements, the nitrogen content is compared with the Ca^2+ content. Nitrogen content is known to be very high in air, while Ca^2+ content is low. The ratio of the peak intensity of Ca^2+ at 392 nm to N at 822 nm <cit.> was, therefore, used to filter out measurements in which the plasma had been generated in air. Spectra with a ratio exceeding 0.5 (n=412) were discarded for the subsequent analysis. The identified peaks were ultimately assigned to elements using the NIST Atomic Spectra Database <cit.>.
A comparable differentiation step was implemented in order to discern between hard and soft tissue areas. The spectra of hard and soft tissue differ significantly <cit.>; hence, a separate analysis was performed for these two tissue categories. The distinctive structure of the molecular matrix has a significant impact on the ablation process, resulting in differences in plasma quantity and temperature and, hence, in the intensity of the recorded peaks. Therefore, a comparable separation must also be conducted for the LIBS data. The classification was based on the Ca (I) to K (I) ratio recorded at 616 nm and 766 nm <cit.>, which is known to exhibit the most significant variations between cartilage and cortical bone <cit.>. Spectra with a ratio below 0.5 are classified as soft tissue (cartilage), while spectra with a ratio above 1.0 are classified as hard tissue (cortical bone) <cit.>. The spectra in between (n=976) were ignored to ensure that the remaining spectra are correctly classified.
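As a minimal illustration of these two ratio-based selection steps, the following Python sketch applies the quoted thresholds; the helper names and the 0.5 nm peak-search window are assumptions made for the example, not parameters of the actual analysis.

import numpy as np

def peak_height(wl_nm, spectrum, line_nm, window_nm=0.5):
    # Height of the strongest point within a small window around a line
    return spectrum[np.abs(wl_nm - line_nm) < window_nm].max()

def generated_in_air(wl_nm, spectrum):
    # Ca (392 nm) to N (822 nm) ratio; spectra exceeding 0.5 were discarded
    return peak_height(wl_nm, spectrum, 392.0) / peak_height(wl_nm, spectrum, 822.0) > 0.5

def tissue_class(wl_nm, spectrum):
    # Ca I (616 nm) to K I (766 nm) ratio separates cartilage from cortical bone
    ratio = peak_height(wl_nm, spectrum, 616.0) / peak_height(wl_nm, spectrum, 766.0)
    if ratio < 0.5:
        return "soft"       # cartilage
    if ratio > 1.0:
        return "hard"       # cortical bone
    return "ambiguous"      # such spectra were ignored in the analysis above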
As a result, 965 spectra were obtained from the white reference tissue, and 1222 from the FAI-affected tissue. Among these spectra, 885 and 1153 were hard tissue for the white reference and FAI-affected tissue, respectively. The remaining spectra comprised 80 soft tissue spectra from the white reference tissue and 69 from the FAI-affected tissue. There are far fewer soft tissue spectra because most laser pulses drilled deeper into the bone, and the cartilage was relatively thin in most cases.
The technique for processing and analysing the LIBS data is outlined in figure <ref>. After removing artefacts and classifying the spectra into hard bone and cartilage, the spectra were normalised using the L2 norm. The following step involved removing the offset via an asymmetric least squares (ALS) fit <cit.>. The chosen asymmetry weight was p_als=0.0001, and a regularisation parameter of λ_als=100 was employed. From the processed data, the peak values were subjected to the two-sided Mann-Whitney rank test for evaluation. As a large number of peaks were tested, a significance level of 0.005 was set to avoid significant results occurring by chance.
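The ALS offset removal used here (and for the Raman autofluorescence below) can be implemented with a standard asymmetrically weighted penalized smoother; the following sketch, with the parameters quoted above as defaults, is one possible realization rather than the exact code used in this work.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=100.0, p=1e-4, n_iter=10):
    # Asymmetric least-squares fit: a smoothness-penalized baseline with
    # weight p for points above the fit and 1 - p for points below it
    n = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(n, n - 2))
    w = np.ones(n)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, n, n)
        z = spsolve((W + lam * D @ D.T).tocsc(), w * y)
        w = p * (y > z) + (1.0 - p) * (y < z)
    return z  # estimated baseline; subtracting it yields the corrected signal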
§.§ Raman Spectroscopy (RS)
Due to the promising LIBS results, Raman measurements were added. Only samples from three patients were available. The experimental setup is presented schematically in Figure <ref>c. This setup is identical to the one we utilised in our previous study <cit.>, where we measured the bone and cartilage of FAI-affected tissue using DRS and Raman spectroscopy. For Raman spectroscopy, a diode laser (LASER-785-LAB-ADJ-S, Newport Corporation, Ocean Optics, Dunedin, USA) with a wavelength of 785 nm was used as the excitation source. The light is coupled into a fibre of a Raman-coupled fibre probe (RIP-RPB-785-SMA-SMA, Ocean Optics, Dunedin, USA) via an SMA 905 coupling connector. The back-reflected light was captured and measured with a fibre-based spectrometer (QE65000 Spectrometer, Ocean Optics, Dunedin, USA).
A total of 200 spectra were recorded, ranging from 788 to 940 nm or 52 to 2100 cm^-1, with 100 obtained from the tissue affected by FAI and 100 from the healthy reference tissue. For each spot, five spectra were acquired with an integration time of 20 s each. As the Raman signal of tissue displayed strong autofluorescence at an excitation wavelength of 785 nm, a correction procedure became necessary. Figure <ref> details the implemented algorithm. Initially, the raw spectra of all three spots were averaged and then cropped to the spectral range of 200 to 2000 cm^-1. A one-dimensional median filter with a width of three pixels was used to eliminate artefacts from the sensor readout. Noise reduction was performed using a Gaussian filter with a sigma of two pixels. Following this stage, the signal was normalised to its mean. The autofluorescence background was estimated through ALS, with p_als=0.001 as the asymmetric weight and a regularisation parameter of λ_als=40. Ultimately, the Raman signal was obtained by subtracting the autofluorescence signal.
From the processed signal, the maxima and minima were used to assign the Raman peaks. If the nearest maximum was positioned more than 10 cm^-1 away from the expected peak position, the peak was discarded. For the localized peaks, an additional ALS fit was conducted, using a weight of p_als=0.0001 and a regularization parameter of λ_als=0.1, to fit a function through the minima. The value of a peak was determined by calculating the area beneath the curve between the two neighbouring minima minus the additional ALS fit.
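A sketch of this peak quantification is given below; it assumes the processed signal on a wavenumber grid and reuses an ALS baseline through the minima as outlined above, with all array and function names chosen for illustration.

import numpy as np
from scipy.signal import argrelextrema

def peak_area(shift_cm, signal, expected_cm, baseline, tol_cm=10.0):
    minima = argrelextrema(signal, np.less)[0]
    maxima = argrelextrema(signal, np.greater)[0]
    # Assign the peak to the nearest local maximum; discard if too far off
    i_max = maxima[np.argmin(np.abs(shift_cm[maxima] - expected_cm))]
    if np.abs(shift_cm[i_max] - expected_cm) > tol_cm:
        return None
    # Integrate between the two neighbouring minima, minus the ALS fit
    left = minima[minima < i_max].max()
    right = minima[minima > i_max].min()
    sel = slice(left, right + 1)
    return np.trapz(signal[sel] - baseline[sel], shift_cm[sel])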
For the final analysis of the Raman data, the spectra were normalized using the formaldehyde peak at 911 cm^-1, as the formalin peaks at 542 cm^-1, 1046 cm^-1, 1239 cm^-1, and 1492 cm^-1 have considerable overlaps with other peaks. As the formaldehyde concentration was the same for all samples, these peaks are assumed to be independent of the sample under investigation. However, as the sample size varied, there may have been a slight difference in the final concentration, which can affect the Raman peaks of formaldehyde <cit.>. The Raman peaks may not always be sufficiently clear for data analysis; consequently, such measurements were eliminated by verifying the existence of the formaldehyde peak. By this criterion, the total number of measurements was reduced to 170. Of these, 97 were from FAI-affected and 73 from healthy tissue.
After normalisation, the distribution of selected peaks across all spectra was analysed. To achieve this, the statsmodels framework <cit.> was used to perform an analysis of variance (ANOVA). The three parameters considered for the analysis were FAI, sample, and position on the sample; the latter two were selected to estimate intra- and inter-patient variation. As only three patients were measured, potential significance is only assumed if the partial eta-squared of FAI (∂η^2_FAI) was at least similar to the partial eta-squared of the inter-patient variation (∂η^2_patients). Furthermore, as many peaks were examined, potential significance was only accepted if p<0.0001.
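A minimal version of this ANOVA, assuming a DataFrame df with illustrative columns peak, FAI, sample, and position, and computing partial eta-squared from Type-II sums of squares, could read:

import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def partial_eta_squared(df):
    model = smf.ols("peak ~ C(FAI) + C(sample) + C(position)", data=df).fit()
    table = anova_lm(model, typ=2)
    ss_res = table.loc["Residual", "sum_sq"]
    # partial eta^2 = SS_effect / (SS_effect + SS_residual)
    return {name: ss / (ss + ss_res)
            for name, ss in table["sum_sq"].drop("Residual").items()}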
§ CONCLUSION
The study demonstrates the significant potential of non-contact, all-optical, and label-free analysis of tissue samples. While our previous analysis of DRS data showed the presence of pheomelanin in FAI-affected samples <cit.>, our new RS and LIBS data provide an even more detailed insight into the biochemical tissue composition, which enabled interesting discoveries regarding the development of FAI. The unified analysis, covering absorptive properties, scattering behaviour, elemental composition, and molecular binding, permitted a prediction of the biochemical progression of the ailment.
It is also noteworthy that all results were obtained without markers, without direct contact with the sample, and without significantly damaging it. Therefore, the suggested combination of DRS, RS, and LIBS is versatile and could have significant implications for the investigation of biochemical pathways in many other applications. The non-contact, all-optical, and label-free approach is especially appealing for in vivo applications, e.g., via endoscopy or during laser surgery. Additionally, all the employed technologies can operate with micrometre resolution, enabling the generation of crucial insights at high spatial resolution.
Notwithstanding, we emphasize that this technique relies on substantial interdisciplinary knowledge. Our method can be viewed as a holistic approach resting on three main pillars: domain knowledge of the biochemical pathways, technological understanding of optical contrast and spectroscopy, and, finally, an in-depth statistical analysis.
§ ACKNOWLEDGEMENT
The authors gratefully acknowledge funding of the Erlangen Graduate School in Advanced Optical Technologies (SAOT) by the Bavarian State Ministry for Science and Art.
The authors gratefully acknowledge the support of the non-profit German Arthritis Society (Deutsche Arthrose-Hilfe e.V.) and its president Helmut H. Huberti (to MD by grant P319).
The authors would like to thank the German Research Foundation (DFG, Deutsche Forschungsgemeinschaft) for its support. This work was partly achieved in the context of the DFG project "Kalibrierungsfreie laserinduzierte Plasmaspektroskopie (LIBS) für die Analyse der elementaren Zusammensetzung von Gewebe" (project number 502911968).
§ CONFLICT OF INTEREST
The authors declare no conflict of interest.
§ DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§ DECLARATION OF GENERATIVE AI AND AI-ASSISTED TECHNOLOGIES
In the writing process during the preparation of this work, the authors used "DeepL Write" in order to improve language and readability. After using this tool, the authors reviewed and edited the content as necessary and they take full responsibility for the content of the publication.
§ APPENDIX
§.§ LIBS
§.§.§ Hard tissue
§.§.§ Soft tissue
§.§ Raman
|
http://arxiv.org/abs/2409.02338v1 | 20240903235544 | Distribution of local signs of modular forms and murmurations of Fourier coefficients | ["Kimball Martin"] | math.NT | ["math.NT"] |
|
http://arxiv.org/abs/2409.03622v1 | 20240905153421 | Photoelectron -- residual-ion entanglement in streaked shake-up ionization of helium | ["Hongyu Shi", "Uwe Thumm"] | physics.atom-ph | ["physics.atom-ph"] |
Department of Physics, Kansas State University,
Manhattan, Kansas 66506, USA
§ ABSTRACT
Streaked photoelectron emission spectra access the correlated dynamics of photoelectrons and residual target electrons with attosecond temporal resolution. We calculated ab initio single-ionization spectra for photoemission from helium atoms by co-linearly polarized ultrashort XUV and assisting few-femtosecond IR pulses.
Distinguishing direct and shake-up ionization resulting in ground-state and excited (n=2,3) residual ions, respectively, we examined the effects of the correlated photoemission dynamics on the photoelectron phase-accumulation as a function of the observable photoelectron detection direction and kinetic energy, and XUV – IR pulse delay. We tracked the dynamical evolution of the residual ion in relative streaked photoemission delays and found dominant contributions for shake-up emission from the residual ion – photoelectron interaction.
These are in very good and fair agreement, respectively, for n=2 and n=3 shake-up photoemission along the pulse-polarization directions, with previous experimental and theoretical investigations [M. Ossiander et al, Nature Phys 13, 280–285 (2017)] and reveal a strong photoemission-direction dependence for shake-up ionization due to the coupling between the photoelectron and evolving residual-ion charge distribution in the IR-laser field.
Photoelectron – residual-ion entanglement in streaked shake-up ionization of helium
Hongyu Shi and Uwe Thumm
September 9, 2024
===================================================================================
§ INTRODUCTION
Attosecond (1 as = 10^-18 s) streaking is an experimental technique for investigating the time-resolved ultrafast electronic dynamics in atoms <cit.>, molecules <cit.>, and solids <cit.>. Attosecond streaking experiments provided the first direct measurement of the oscillating electric field in intense light pulses and confirmed the generation of ultrashort sub-infrared (IR) cycle extreme ultraviolet (XUV) pulses with attosecond duration <cit.>. In a typical streaking experiment, an ultrashort XUV-pump pulse ionizes the target, emitting photoelectrons (PEs) into the field of a femtosecond infrared (IR) probe pulse (art3D). This allows the recording of PE energy (or momentum) spectra for a set range of time delays τ between the two pulses. The τ-dependent spectra reveal information about the photoemission dynamics with attosecond resolution in time as "streaking traces", i.e., as an oscillating PE-energy (or momentum) dependence on τ that retraces the temporal profile of the IR streaking pulse with a characteristic absolute photoemission time delay relative to the IR-laser carrier electric field. While absolute streaking delays cannot be measured directly for photoemission from gaseous targets <cit.>, dynamical information can be accessed in streaked PE spectra from relative streaking delays between the centers of energy of streaking traces that are related to photoemission from two energetically distinct electronic levels <cit.>.
Relative streaked photoemission time delays of 21± 5 as were measured between photoemission from the 2s and 2p orbitals of neon by Schultze et al. <cit.> in a milestone experiment that triggered an intense debate about the role of electronic correlation during the single ionization of an atomic target. This debate was fueled by several numerical models for streaked photoemission underestimating the measured relative photoemission time delay by approximately a factor of two, even though the calculations included electronic correlation at various levels of approximation <cit.>.
Accounting for electronic "shake-up" excitation during XUV photoemission from neon atoms, Isinger et al. <cit.> compared many-body perturbation theory calculations with measured energy-integrated sideband oscillations in interferometric photoemission spectra. Based on these RABITT (reconstruction of attosecond beating by interference of two-photon transitions) experiments, the authors suggested that the discrepancy between the relative streaking time delay of 21± 5 as obtained experimentally in Ref. <cit.> and in previous streaking calculations <cit.> might be due to shake-up processes. The authors of Ref. <cit.> worded this suggestion very carefully. They pointed out that two of their co-authors <cit.> could not explain the mismatch of streaking experiments and streaking calculations by numerically modeling RABITT spectra, employing a similar theoretical many-body-perturbation theory approach as Ref. <cit.> that includes electronic correlation. The authors provide good evidence, based on RABITT spectra, for the importance at the attosecond timescale of shake-up excitation during the photoemission of neon. However to the best of our knowledge and despite early theoretical testimony for the significance of dynamical electronic correlation on streaked photoemission <cit.>, the direct theoretical confirmation through streaking calculations of the 21± 5 as delay in Ref. <cit.> is still outstanding.
While the attosecond correlated ionization dynamics for multi-electron targets remains a subject of debate, remarkably, for prototypical helium, Ossiander et al. <cit.> found measured and calculated relative streaking time delays to agree at the (one) attosecond timescale. In their investigation, ground-state helium is singly ionized, and the dynamical response of the residual ion is analyzed by comparing direct ionization, resulting in ground-state (n=1) ions, with shake-up ionization to a coherent superposition of (n = 2, 3) sub-levels, writing the absolute streaking time delay as
t_s = t_EWS + t_CLC + t_DLC .
Calculating t_EWS and t_CLC with attosecond accuracy, the authors were able to isolate a temporal shift due to electronic correlation during n=2 and n=3 shake-up ionization of about six and several tens of attoseconds, respectively.
The Eisenbud-Wigner-Smith (EWS) contribution, t_EWS, represents the PE phase accumulation during the ionization of helium atoms in the XUV-pump pulse and includes the short-range part of the electron-electron interaction. It was calculated by direct numerical solution of the bound-continuum transition-matrix element of the two-electron system [cf., Eq. amp below] <cit.> using the exterior complex scaling (ECS) method <cit.>.
While this numerical approach is state-of-the-art for helium, its value for transparently revealing physical details of the correlated dynamics during single ionization of helium is limited.
The delay contributions t_CLC and t_DLC are due to the simultaneous interaction of the PE with the residual target charge distribution and the IR-laser electric field and are termed "Coulomb-laser coupling (CLC)" and "dipole-laser coupling (DLC)" delays.
The CLC term includes the combined effect of the monopole term of the residual charge distribution and the IR laser <cit.>.
The DLC term represents the effect of the PE interaction with the dipole-component of the charge distribution of the residual ion.
This term tends to be very small or negligible for direct ionization, but provides an important contribution for shake-up ionization, especially for large PE-detection angles relative to the XUV- and IR-pulse-polarization direction <cit.>.
Applying this scheme to more complex atoms,
such as experimentally convenient neon and argon gases <cit.>, is computationally challenging and requires concessions (most notably mean-field approximations), whose accuracy (at the one-attosecond timescale) still needs to be confirmed.
Understanding the physics behind the individual delay contributions in Eq. t_s for prototypical helium targets is crucial for the development and scrutiny of dependable attosecond streaking models for many-electron targets. It requires a highly differential study of streaked photoemission, going beyond the detection of PEs within a small solid angle about the IR- and XUV-pulse-polarization direction, in addition to the reliable modeling of the target electronic response to the incident XUV and IR pulses.
For interferometric photoemission from helium, PE-emission-angle-dependent spectra were measured by Heuser et al. <cit.>. These RABITT experiments detected a striking phase shift of the sideband yields for electron-detection angles larger than ≈ 50 degrees relative to the polarization direction, even in the direct emission channel (leaving the residual ion in the ground state).
A related RABITT study for helium targets was recently carried out by Fuchs et al. <cit.>, who measured the three-dimensional PE-momentum distribution in coincidence with the residual ions, in order to examine experimentally the dependence of RABITT time delays (i.e., relative phase shifts between sidebands in RABITT spectra) on the PE angular momentum and coupling of continuum states in the assisting IR-laser pulse.
Their measured delays of up to 12 as between outgoing s– and d–wave PEs, further emphasizes the need for highly differential (angle-resolved) data for the comprehensive analysis of (streaked or interferometric) time-resolved photoemission.
Revisiting the sensitivity of relative streaking delays on the correlated electric photoemission dynamics reported in Ref. <cit.> for direct and shake-up streaked photoemission from helium along the polarization direction of the XUV and IR pulse, we carried out ab initio calculations for angle-resolved direct and (n=2,3) shake-up photoemission.
In Secs. <ref> and <ref> we review our approach for
numerically propagating the time-dependent Schrödinger equation (TDSE) for helium exposed to external electromagnetic fields and outline the calculation of two–electron states satisfying single–ionization boundary conditions.
Sections <ref> and <ref> describe our numerical calculation of different contributions (EWS, CLC, and DLC) to the experimentally observable relative streaking delay, while Sec. <ref> briefly discusses noteworthy elements of our computational method, for which further details are given in the Appendices.
Section <ref> contains our results for and discussion of
streaked photoemission spectra and streaking delays for direct and shake-up ionization (Sec. <ref>),
the EWS delay contribution for XUV single ionization of helium (Sec. <ref>),
the CLC delay contribution (Sec. <ref>),
the DLC delay contribution in relation to the dipole moment of the residual charge distribution (Sec. <ref>), and a multipole analysis of the evolving residual charge distribution
and its back-action on the PE. Our summary and conclusions follow in Sec. <ref>.
Throughout this work we use atomic units unless indicated otherwise.
§ THEORY
§.§ Coupled radial equations
We describe the single ionization of ground-state para-helium atoms by ultrashort XUV pulses with and without assisting phase-coherent delayed (or advanced) IR-laser pulses by solving the TDSE in full dimensionality,
i ∂_tΨ = HΨ = [∑_i=1,2 (H_i + H_Fi ) + V_12] Ψ .
The Hamiltonian H consists of the single-electron Hamilton operators for the interaction of each electron with the atomic nucleus, H_i, the electronic correlation, V_12, and the electron—external-field-interaction terms, H_Fi, given by, respectively,
H_i = -∇_i^2/2 - 2/r_i ,
V_12 = 1/| r_2 - r_1| ,
and
H_Fi = ℰ(t) z_i (length gauge)
or H_Fi = -i A(t) ∂/∂z_i (velocity gauge) .
The electron position vectors,
r_i , i=1, 2, are chosen so that the atomic nucleus coincides with the origin of our coordinate system.
The external electromagnetic-field pulses are assumed to be linearly polarized along the quantization (z) axis with cosine-squared electric-field envelopes of pulse length T,
ℰ(t) = ℰ_XUV(t-τ) + ℰ_IR(t)
ℰ_j(t) = ℰ_j,0sin(ω_j t) cos(π t/T_j)^2 , j = XUV, IR,
and corresponding vector potential A(t) = -∫_-∞^t ℰ(t') dt'. The delay (advance) of the IR-pulse center with respect to the center of the XUV pulse is designated by
τ < 0 (τ >0).
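For orientation, the pulses and the vector potential defined above are easily tabulated numerically; the following sketch uses illustrative field parameters in atomic units (e.g., ω_IR ≈ 0.057 a.u. for an 800 nm probe), not the actual pulse parameters of this work.

import numpy as np
from scipy.integrate import cumulative_trapezoid

def pulse(t, e0, omega, T):
    # E_j(t) = E_j,0 sin(w_j t) cos^2(pi t / T_j) inside the pulse, 0 outside
    env = np.where(np.abs(t) <= T / 2.0, np.cos(np.pi * t / T) ** 2, 0.0)
    return e0 * np.sin(omega * t) * env

tau = -500.0                                  # XUV--IR delay (a.u. of time)
t = np.linspace(-4000.0, 4000.0, 200001)
E = pulse(t - tau, 1e-2, 2.1, 20.0) + pulse(t, 5e-4, 0.057, 1200.0)
A = -cumulative_trapezoid(E, t, initial=0.0)  # A(t) = -∫ E(t') dt'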
In our applications to photoionization by linearly polarized light,
the magnetic quantum number of the spinless two-electron system, M=0, is preserved and will be omitted below. The relevant set of quantum numbers needed to specify a partial wave, λ = {l_1,l_2,L}, thus comprises the individual angular-momentum quantum numbers l_i according to H_i , i=1, 2, and the total angular-momentum quantum number L.
Expanding the two-electron wave function
Ψ( r_1, r_2, t) =
𝒮 [1/(r_1 r_2)] ∑_λψ_λ(r_1, r_2, t) 𝒴_λ( r_1, r_2)
in terms of real-valued generalized spherical harmonics
𝒴_λ( r_1, r_2) =
∑_m_1⟨ l_1 m_1 l_2 -m_1 | L 0 ⟩ Y_l_1, m_1( r_1) Y_l_2, -m_1 ( r_2) ,
we express the TDSE TDSE as the set of coupled equations for the radial partial waves ψ_λ <cit.>
∑_λ' H_λ,λ'ψ_λ' = i ∂_tψ_λ ,
with (r_1,r_2)-dependent angular-coupling-matrix elements
H_λ,λ' = 𝒴_λH𝒴_λ' .
The angle brackets in Eq. genharmonics denote Clebsch–Gordan coefficients and 𝒮 the symmetrization operator for the two-electron spatial wave function of para-helium. Details about the matrix elements H_λ,λ' are given in Appendix <ref>.
§.§ Partial-wave analysis of entangled PE states
The exact PE states for single ionization of He, Ψ, are solutions of the time-independent Schrödinger equation,
H_0 Ψ = E Ψ ,
H_0 = H_1 + H_2 + V_12 ,
with energy eigenvalue E = -Z^2/n_1^2 + k_2^2/2. They are subject to incoming-wave boundary conditions, since the time-reversed ionization process starts with asymptotic states of specific kinetic energy, momentum, or angular momentum. Depending on the chosen boundary condition, Ψ can be expressed in different ways, e.g., as states with a given total angular momentum L, 𝒮|n_1,λ,k_2⟩, or as states, 𝒮|n_1,l_1,m_1, k_2⟩, that asymptotically merge into product states. We expand the total-angular-momentum states in the basis of generalized spherical harmonics [Eq. (<ref>)],
⟨ r_1, r_2|n_1,λ,k_2⟩
= 1/r_1 r_2∑_l'_1,l'_2ψ_λ',n_1,k_2(r_1, r_2)𝒴_λ'( r_1, r_2) ,
where λ' = {l_1',l_2',L}. The radial partial waves then satisfy the boundary condition <cit.>
ψ_λ',n_1,k_2(r_1, r_2) r_2→∞⟶ δ_l_1,l'_1δ_l_2,l'_2 r_1 R_n_1,l_1^(Z=2)(r_1) √(2/π)
sin[k_2 r_2 - π l_2/2 +1/k_2ln(2k_2 r_2) + σ_l_2 + δ_n_1^λ(k_2)] .
They depend on the radial wave function, R_n,l^(Z=2), of the residual ion in the bound state ψ_n,l,m^(Z=2)( r) and the energy-dependent Coulomb phase shift,
σ_l(η) = arg[Γ(l+1+iη)] ,
with Sommerfeld parameter η = -Z/k. The sin factor in Eq. bc derives from the asymptotic form of Coulomb function of the first kind, F_l(kr), and includes the additional energy-dependent phase δ_n^λ(k) due to the short-range electronic interaction during the emission process.
To provide the |l_2,m_2⟩ partial-wave contributions of emitted electrons with asymptotic momentum magnitude k_2, we decouple the outgoing spherical-wave part of |n_1,λ,k_2⟩ in Eq. EL with the unitary transformation
|n_1,l_1,m_1,l_2,k_2⟩ = |n_1,l_1,m_1,l_2,-m_1,k_2⟩
= ∑_L⟨ l_1 m_1 l_2 -m_1 | L 0 ⟩ e^-iδ_n_1^λ|n_1,λ,k_2⟩ ,
where the first equation indicates that we will drop the redundant quantum number m_2=-m_1 from now on.
This allows us to compose entangled two-electron states with asymptotic momentum vector k_2 by combining these partial waves into asymptotic plane-wave states,
|n_1, l_1, m_1, k_2⟩
= ∑_l_2^l_2/k_2^-σ_l_2 Y_l_2,-m_1^* ( k_2)|n_1,l_1,m_1,l_2,k_2⟩ ,
where l_1 relates to the bound electron and l_2 to the PE. To simplify the notation,
we drop subscripts and symmetrization operator from now on and refer to the symmetrized scattering states as
|n, l, m, k⟩ = 𝒮|n_1, l_1, m_1, k_2⟩ .
§.§ XUV only delays
Exposure of helium to just the XUV-pump pulse releases PE wave packets whose center is displaced relative to a fictitious free electron that starts moving at the atomic nucleus
with the constant final PE velocity at the instant when the center of the XUV pulse coincides with the atomic center. This "XUV-pulse-only" photoemission delay,
t_XUV( k) = t_EWS( k) + t_C(k,r) ,
consists of the EWS delay t_EWS and the Coulomb delay
t_C(k, r) = 1/k^3[1-ln(2kr)] .
The EWS delay corresponds to the phase accumulation during the absorption of XUV radiation and release of the PE <cit.>.
The Coulomb delay accounts for ongoing phase accumulations of the PE in the long-range -1/r Coulomb potential of the residual ion. It is identical for all atoms and does not converge as the PE propagates to the detector.
§.§.§ Dipole transitions
In first-order time-dependent perturbation theory, the photoemission yield and PE-phase information are calculated based on dipole-transition-matrix elements <cit.>. For photoemission by an XUV pulse ℰ_XUV(t) from the initial helium ground state, |i⟩, into asymptotic spherical waves sph_basis and plane waves with momentum k plane,
the dipole-matrix elements are, respectively,
M_s = ⟨ n, l_1, m, l_2, k | (z_1+z_2) | i ⟩ ,
M_p = ⟨ n, l, m, k | (z_1 + z_2) | i ⟩
and depend on the energy-dependent phase δ_n^λ.
For single-photon ionization from the helium ground state, dipole selection rules restrict accessible final-state quantum numbers to L=1, M=m_1+m_2=0 and odd l_1+l_2. Thus, for direct ionization the only accessible final configuration is (n,l_1,m,l_2)=(1,0,0,1). In contrast, for n=2 shake-up ionization, the five channels
(2,0,0,1), (2,1,0,0),
(2,1,0,2), (2,1,± 1,2)
can be reached.
Similarly, n=3 shake-up ionization allows the 13 final configurations
(3,0,0,1), (3,1,0,0), (3,1,0,2), (3,1,± 1,2),
(3,2,0,1), (3,2,± 1,1),
(3,2,0,3), (3,2,± 1,3), (3,2,± 2,3).
In general, the number of dipole-accessible final channels is 2n(n-1)+1.
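These channel counts follow directly from the stated selection rules: L=1 with odd l_1+l_2 forces l_2 = l_1 ± 1, and the Clebsch–Gordan coefficient requires |m| ≤ min(l_1,l_2). A few lines of Python, included here only as a consistency check, reproduce the enumeration:

def dipole_channels(n):
    # Final configurations (n, l1, m, l2) reachable by one-photon absorption
    channels = []
    for l1 in range(n):                 # bound electron of the residual ion
        for l2 in (l1 - 1, l1 + 1):     # L = 1 and odd l1 + l2
            if l2 < 0:
                continue
            for m in range(-min(l1, l2), min(l1, l2) + 1):
                channels.append((n, l1, m, l2))
    return channels

for n in (1, 2, 3):
    assert len(dipole_channels(n)) == 2 * n * (n - 1) + 1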
§.§.§ EWS delay
The EWS delay carries element-specific information on the electronic correlation dynamics during the photo-release. It can be interpreted as the group delay experienced by the PE wave packet relative to a hypothetical reference wave packet <cit.>.
It excludes the long-range Coulomb delay t_C and can be calculated numerically as the energy derivative of the phase of the dipole-matrix element for entangled spherical-partial-wave and plane-wave final PE states, respectively,
t_EWS^n,l_1,m,l_2(k) = ∂/∂E arg M_s ,
t_EWS^n,l,m( k) = ∂/∂E arg M_p .
As a measure for the net delay for direct or shake-up ionization to a specific sub-shell, we define the weighted average of Eq. tEWS:
t_EWS^n,l( k) = ∑_m |M_p|^2 t_EWS^n,l,m( k)/∑_m |M_p|^2 .
Similarly, we average over {l,m} to define t_EWS^n( k).
Instead of calculating matrix elements that yield the delays in first-order perturbation theory [cf., Eq. tEWS], we can retrieve absolute delays for ionization by the XUV-pulse directly from the (non-perturbative) two-electron wave function twoelwf, based on the numerically calculated radial partial-wave functions ψ_λ(r_1, r_2,t). This approach is particularly well suited for our applications in Sec. <ref>, which are based on the numerical propagation of partial-wave functions (cf., Sec. <ref>).
Projection of the numerically propagated wave function twoelwf on residual hydrogenic states ϕ_n,l,m^(Z=2)( r),
provides the PE probability density for direct (n=1) and shake-up (n>1) ionization
P_n,l,m( r,t)
= 2 | ∫ϕ_n,l,m^(Z=2)( r') Ψ( r', r, t) r'^2 dΩ' dr' |^2 ,
where the factor of 2 is due to the symmetrization of Ψ( r_1, r_2, t).
Assuming PEs are detected within the detector solid angle Ω_det centered around the direction k, the mean PE distances from the atomic nucleus are now given by integrating Ω over Ω_det as
r_c^n,l( k, t) = ∑_m∫_Ω_det r P_n,l,m( r,t) r^2 dΩ dr / ∑_m ∫_Ω_det P_n,l,m( r,t) r^2 dΩ dr .
Similarly, we average over {l,m} to define
r_c^n( k, t) = ∑_l,m∫_Ω_det r P_n,l,m( r,t) r^2 dΩ dr / ∑_l,m∫_Ω_det P_n,l,m( r,t) r^2 dΩ dr .
Omitting the integration over Ω corresponds to an infinitely high PE-detection resolution.
Fitting the averaged radial PE displacements at times after which
* the PE wave packet has separated from the charge distribution of the residual ion and
* the PE velocity ṙ_c^α( k, t) has converged to the asymptotic velocity ṙ_c^α( k, t_∞) within a preset tolerance
according to
r_c^α( k,t)
= ṙ_c^α( k,t_∞)
[(t-τ) - (t_EWS^α( k) + t_C(k_f, r_c))] ,
α = {n,l,m}, {n,l}, n ,
we find the EWS delays t_EWS^α relative to the center of the XUV-pulse electric field.
§.§ Streaked photoemission delay
§.§.§ Absolute and relative streaking time delays
The delay t_XUV is not directly measurable with current experimental techniques, but contributes to the observable photoemission delay in streaked PE spectroscopy, where a probe pulse with an adjustable delay relative to the XUV-pump pulse is introduced <cit.>. The probe pulse, usually in the IR spectral range, maps temporal shifts in the photoemission process to observable shifts in asymptotic PE momenta. Recorded as a function of the pump-probe delay τ, streaked PE spectra allow the extraction of relative streaking time delays Δ t_s by comparing the temporal offset between energetically separable oscillatory photoemission yields (streaking traces). This method for deriving relative streaked photoemission time delays has been applied to electron emission from different atoms <cit.>, different initial electronic levels in the same target <cit.>, and to identify temporal shifts between direct and shake-up photoemission <cit.>. Differences of absolute streaking delays are experimental observables.
Absolute streaking time delays t_s for streaked photoemission from a specific electronic state relate temporal photoemission shifts to the phase of the probe-pulse electric field ℰ_IR(t). Numerical modeling allows the calculation of absolute delays in two distinct schemes, as either "streaked spectral delays (SSDs)" or "PE wave-packet delays (PWDs)". We refer to SSDs as delays that are extracted from streaked PE spectra
by fitting the center of momentum of a given streaking trace
p_f(τ) = -aA_IR(τ + t_s) + b (a,b > 0)
to the IR vector potential A_IR(t) = -∫_-∞^t ℰ_IR(t') dt'. In contrast, we name delays that are extracted from the wave-packet displacement on a numerical grid using Eq. EWS_xuv as PWDs. The accuracy of the SSD t_s relies only on the accuracy of the XUV–IR-delay-dependent PE-momentum spectrum and does not require the calculation of phase derivatives. For peak intensities between 10^8-10^12 W/cm^2, streaking delays and their numerical fitting accuracy are largely independent of the IR-probe-pulse intensity <cit.>.
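Extracting an SSD therefore amounts to a three-parameter least-squares fit of Eq. fit. A self-contained sketch with a synthetic trace (the Gaussian-envelope, 800 nm-like pulse below is an illustrative stand-in, not the exact pulse used in the results section):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative IR vector potential (atomic units): Gaussian envelope, 800 nm
omega, sigma = 0.057, 52.7
A_IR = lambda t: 0.01 * np.exp(-t**2 / (2 * sigma**2)) * np.cos(omega * t)

tau = np.linspace(-110.0, 110.0, 40)          # 40 delays over ~2 IR periods
t_true = 4.0 / 24.189                         # assume a 4 as streaking delay
p_center = -0.9 * A_IR(tau + t_true) + 2.19   # synthetic center-of-momentum trace

def model(t, a, b, t_s):                      # Eq. (fit): p_f = -a A_IR(t+t_s) + b
    return -a * A_IR(t + t_s) + b

(a, b, t_s), _ = curve_fit(model, tau, p_center, p0=(1.0, 2.2, 0.0))
print(f"recovered t_s = {t_s * 24.189:.2f} as")   # -> 4.00 as
```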
According to Eq. t_s, the absolute SSD t_s is affected by τ-dependent momentum shifts induced by the probe pulse, in particular the Coulomb-laser and dipole-laser shifts, t_CLC and t_DLC, in addition to the probe-independent shift t_EWS.
For direct photoemission the short-range PWD contribution t_EWS to t_XUV in Eq. t_xuv merges into t_s when a sufficiently weak delayed IR probe pulse is added. In contrast, the long-range Coulomb delay t_C, being sensitive to the phase accumulation along the entire PE trajectory, is affected by the probe-laser field and transformed into the Coulomb-laser-coupling delay t_CLC <cit.>.
We have verified that t_CLC depends on the PE-energy and -detection direction. For emission along the XUV- and IR-pulse-polarization direction it is identical for direct and shake-up ionization.
§.§.§ Entangled residual Stark states
For shake-up ionization the probe-laser pulse causes an additional dipole-laser-coupling temporal shift t_DLC <cit.>, even if the probe-pulse intensity is kept sufficiently low to make the quasi-static Stark effect in the ground-state-helium entrance channel negligible (as is the case in typical streaking experiments). This delay contribution has no counterpart in t_XUV. In the exit channel the probe-laser mixes degenerate and nearly degenerate states of the residual ion, causing the formation of quasi-static Stark hybrid states with dipole moment d^n( k) as linear combinations of excited states with opposite parity.
The formation of the residual dipole affects the energy of the two-electron system in the probe-laser electric field and, in turn, the final PE-momentum and streaking delay <cit.>.
To describe the formation of quasi-static residual Stark states during shake-up ionization, we unitarily transform the scattering states |n,l,m, k⟩ plane to Stark scattering states with the same (n, m) quantum numbers and total energy -Z^2/n^2 + k^2/2,
|n,α,m, k⟩
= ∑_l C_α,l^n,m|n,l,m, k⟩ ,
where α denotes the Stark quantum numbers <cit.>. The expansion coefficients {C_α,l^n,m} for the n=2,3 shells of are listed in Tabs. <ref> to <ref> of Appendix <ref>.
After shake-up ionization by the XUV pulse, once the PE wave packet and residual charge distribution effectively no longer overlap, we represent
the residual ion in a given
n channel as a superposition of degenerate Stark states,
|ψ^(n)_He^+( k)⟩ = 𝒩∑_α,m c_n,α,m( k)|n,α,m⟩ ,
with normalization constant 𝒩 = (∑_α,m |c_n,α,m|^2)^-1/2. To obtain the expansion coefficients from the numerically provided accurate solution Ψ of the two-electron TDSE TDSE for XUV-only ionization, we project at a time t^⋆ = 5.5 fs after the center of the XUV pulse
onto the two-electron entangled scattering states ⟨n,α,m, k| in Eq. (<ref>),
c_n,α,m( k) =
⟨n,α,m, k|Ψ(t^⋆)⟩ .
For photoemission along the XUV-pulse-polarization direction (k = z), only
m=0 terms contribute in Eq. bound_Psi. Thus,
|ψ^(n)_He^+( k)⟩ is rotationally symmetric about the z axis and its dipole moment,
d^(n)( k) = -⟨ψ^(n)_He^+( k)| r |ψ^(n)_He^+( k)⟩ ,
is aligned with k. In contrast, for PE-detection directions k that are not aligned with z, the combination of terms with m≠0 results in electric dipole vectors vec_dipole that, generally, do not align with z.
A sufficiently weak, quasi-static IR-streaking electric field does not noticeably distort the electronic structure of the initial He atom and residual ion. Its interaction with the XUV-pulse-generated dipole yields the energy shift
Δ E_dip^(n)( k) = ℰ_IR(τ) ⟨ψ^(n)_He^+( k)|z|ψ^(n)_He^+( k)⟩
= - (∂A_IR(τ)/∂τ) d^(n)_z( k)
and corresponding PE momentum shift
Δ k^(n) ≈ -Δ E_dip^(n)( k)/k
= -(∂A_IR(τ)/∂τ) d^(n)_z( k)/k .
This equation builds on the assumption that for shake-up emission to a specific residual shell n, PE wave packets corresponding to different asymptotic states
{|n,α,m⟩} in Eq. bound_Psi have the same center of momentum
k = √(2(ω_XUV - I^(n)_p)).
Our numerical results in Sec. <ref> below validate this assumption by revealing that, for PE momenta k within the spectral range of the XUV pulse, the relative magnitudes of the coefficients c_n,α,m( k) keep their proportions (cf. sis2n).
At large PE–ion distances, the PE has accumulated the EWS phase shift,
and the PE-momentum shift in the oscillating IR field can be approximated as <cit.>,
k_f(τ) ≈ (k + Δ k^(n)) k̂
- A_IR(τ + t_EWS) .
Denoting the polar angle of k_f and k as θ_f and θ, respectively, and defining γ = θ_f-θ, projection of Eq. k_f onto k_f yields
k_f(τ) = k cosγ
- cosθ_f [A_IR(τ + t_EWS) + t_DLC ∂A_IR(τ + t_EWS)/∂τ]
≈ k cosγ - cosθ_f · A_IR(τ + t_EWS + t_DLC) ,
where
t_DLC( k) = (cosγ/cosθ_f) d_z^(n)( k)/k ≈ d_z^(n)( k)/(k cosθ) .
For the IR-pulse peak intensity considered in this work, γ is less than 1.6^∘, making it negligible in Eq. tDLC. The approximation in Eq. k_f_appr is supported by our typical delays t_DLC being less than 5% of the IR period.
We note that an alternative approximation is to treat A_IR(τ) as sinusoidal <cit.>, which results in
t_DLC( k) ≈ (1/ω_IR) tan^-1[ω_IR d_z^(n)( k)/(k cosθ)] .
§.§ Propagation algorithm
We solve the TDSE TDSE using a finite-element discrete-variable representation (FE-DVR) of the partial-wave radial wave functions ψ_λ(r_1, r_2, t) on a 2D adaptive rectangular numerical grid for the radial coordinates (r_1, r_2). For details of our implementation of the FE-DVR method we refer to Ref. <cit.>, references therein, and Appendices <ref> and <ref> of the present work.
§.§.§ Length gauge
Expressing the electron interactions with the XUV and IR pulse HFI in the length gauge, we time-propagate the radial TDSE H_couple using the split-operator Crank-Nicolson technique <cit.> with equidistant time steps Δ t,
ψ_λ(t+Δ t) = e^-iHΔ t ψ_λ(t)
= e^-iH_0Δ t/2 e^-iH_intΔ t e^-iH_0Δ t/2 ψ_λ(t) + O(Δ t^3) .
This scheme assumes the time-dependence of H to be negligible during each time step. In all applications in Sec. <ref> below, we ascertain numerical convergence by selecting time steps small enough that the numerical error accumulated per step is negligible.
The advantageous choice of H_0 = H_1 + H_2 allows the factorization of the propagation along the radial coordinates,
e^-iH_0Δ t/2 =
e^-iH_1Δ t/2 e^-iH_2Δ t/2,
since H_1 and H_2 commute.
In the length gauge, the interaction operator H_int = H_F1+H_F2+V_12 in position representation does not contain derivatives. It thus couples all partial waves separately at any given point of the 2D numerical grid (without the need for error-susceptible finite differencing). The split-operator method is therefore particularly suitable for length-gauge calculation.
For all exponentiations we use the Cayley form of the time-evolution operator <cit.>,
e^-iHΔ t ≈ (1 + (i/2) H Δ t)^-1 (1 - (i/2) H Δ t) .
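For a sparse Hamiltonian each such substep is a single linear solve. A minimal sketch (the random symmetric H is a stand-in for the actual partial-wave Hamiltonian), illustrating that the Cayley form conserves the norm to machine precision:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cayley_step(H, psi, dt):
    """psi(t+dt) = (1 + i H dt/2)^(-1) (1 - i H dt/2) psi(t), H sparse Hermitian."""
    I = sp.identity(H.shape[0], dtype=complex, format="csc")
    return spla.spsolve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ psi)

rng = np.random.default_rng(1)
A = sp.random(200, 200, density=0.05, random_state=1).toarray()
H = sp.csc_matrix((A + A.T) / 2)                  # Hermitian stand-in Hamiltonian
psi = rng.normal(size=200) + 1j * rng.normal(size=200)
psi /= np.linalg.norm(psi)
print(abs(np.linalg.norm(cayley_step(H, psi, dt=0.01)) - 1.0))   # ~1e-16
```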
§.§.§ Velocity gauge
For applications in Sec. <ref> below we employ both the length and velocity forms of H_Fi [cf., Eq. HFI], in order to assess the numerical convergence of our PE-momentum distribution and temporal shifts. While formally quantum mechanics is gauge invariant, approximations introduced in the modeling and the progression of numerical errors tend to affect the accuracy of length- and velocity-gauge calculations unequally. Their comparison is therefore a common tool for assessing the accuracy of numerical models. In particular, as pointed out by Cormier and Lambropoulos <cit.> for the numerical computation of photoemission spectra for strong-field ionization based on partial-wave expansions, the rate of convergence in length and velocity-gauge calculations with the number of included partial waves tends to be different. For above-threshold ionization of hydrogen atoms, the authors obtained converged spectra including 10 partial waves in velocity-gauge calculations, while requiring in excess of 200 partial waves in the length gauge. The physical reason for the very different rate of convergence is a more effective inclusion of the external-field-driven electronic quiver-motion in velocity-gauge final PE states in terms of the canonical PE momentum, k - A(t). In contrast, the same anisotropic dynamics requires a large number of partial waves in the length gauge, where the final-state description hinges on the (time-independent) kinematic PE momentum k.
While this convergence comparison appears to clearly favor the velocity gauge, the split-operator scheme outlined above in part offsets its advantageous inclusion of the canonical PE momentum due to the appearance of the differential operator ∂/∂z_i in Eq. HFI. In contrast to the length gauge and represented in spherical coordinates, the electron–external-field interaction H_Fi now involves the coupling of two-electron partial waves at adjacent grid points that are addressed in the finite-difference implementation of the ∂/∂z_i operator on the (r_1,r_2) grid. For this reason, we abandon the split-operator propagation scheme when working in the velocity gauge and, instead, use a Lanczos algorithm <cit.> with an adaptive number of Krylov basis states to keep the numerical error below a set threshold.
In a Krylov basis {Ψ, HΨ, …, H^K-1Ψ} with K states, where Ψ is the wave function at the current propagation time, H in Eq. TDSE is represented by a tridiagonal matrix
H =
[ α_0 β_1 ; β_1 α_1 β_2 ; β_2 α_2 ⋱ ; ⋱ ⋱ β_K-1; β_K-1 α_K-1 ] .
The truncation error in the Lanczos algorithm we use to diagonalize H can be estimated as <cit.>
ϵ_lanc^K≈Δ t^K/K!∏_i=1^K β_i .
We dynamically adjust K at each propagation time step Δ t to satisfy ϵ_lanc^K-1 < 10^-12, rather than using a fixed number of basis states as, e.g., Ref. <cit.>.
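A compact sketch of such an adaptively truncated Lanczos step (no reorthogonalization, which suffices for the small K reached here; `Hmul` applies the Hamiltonian to a vector):

```python
import numpy as np
from math import factorial

def lanczos_expm_step(Hmul, psi, dt, tol=1e-12, K_max=40):
    """Apply exp(-i H dt) to psi in a Krylov space whose dimension K grows
    until the estimate eps_K ~ (dt^K / K!) prod(beta_i) drops below tol."""
    norm0 = np.linalg.norm(psi)
    V, alpha, beta = [psi / norm0], [], []
    while True:
        w = Hmul(V[-1]) - (beta[-1] * V[-2] if beta else 0.0)
        alpha.append(np.vdot(V[-1], w).real)
        w = w - alpha[-1] * V[-1]
        beta.append(np.linalg.norm(w))
        K = len(alpha)
        if dt**K / factorial(K) * np.prod(beta) < tol or K == K_max:
            break
        V.append(w / beta[-1])
    # H in the Krylov basis is the tridiagonal matrix of Eq. above
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    evals, S = np.linalg.eigh(T)
    c = S @ (np.exp(-1j * evals * dt) * S[0])     # exp(-i T dt) applied to e_1
    return norm0 * (np.column_stack(V) @ c)
```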
§ RESULTS AND ANALYSIS
§.§ Streaking delays
Following <cit.>, we assume an IR streaking pulse with a Full-Width Half-Intensity Maximum (FWHIM) of 3 fs and a peak intensity of 4×10^11 W/cm^2. The XUV pulse has a photon energy of 90 eV, FWHIM of 200 as, and peak intensity of 10^12 W/cm^2.
We use 40 equally spaced XUV–IR-pulse delays for a delay range of 2 IR periods, calculate the center of PE momenta within an infinitely narrow detection cone, and fit to the IR-pulse vector potential to extract the streaking delays according to Eq. fit. Figure <ref> shows our corresponding calculated streaked photoemission spectrum. The streaking traces for direct and n=2 shake-up emission are clearly separated.
Absolute streaking delays t_s for direct and n=2,3 shake-up emission extracted from the spectrum in streaking are shown in ang_delay, together with delays for XUV photon energies of 100 and 110 eV. The dependence of t_s on the XUV-pulse photon energy and PE-detection angle is complex, owing to the three distinct terms in Eq. t_s and different (n,l,m) sub-channels. We therefore separately examine contributions to t_s in the following subsections. The most prominent feature, a rapid decrease of t_s with PE-detection angle, is primarily due to the 1/cosθ factor in the DLC term [cf. Eqs. t_s and tDLC].
§.§ XUV-only delays
Figure <ref> shows the EWS time delay, t_EWS, extracted from the PE wave packet according to Eqs. r_c_nl and EWS_xuv, for direct ionization (n=1) and all (n,l) shake-up sub-channels of the ionic L and M shells. These results are calculated for emission along the XUV-pulse polarization and XUV photon energies of 90, 100, 110, and 120 eV. Lines in between the small markers are added to guide the eye. For direct (black line) and 2s shake-up (red dashed line) emission, our results in tEWS (a) agree with the phase-derivative calculation of Pazourek et al. <cit.> according to Eqs. tEWS and tEWS_n1l1. For shake-up emission to 2p, our results still agree with Ref. <cit.> with attosecond accuracy and show small discrepancies only at the sub-attosecond level. Generally, in the absence of resonances, EWS delays tend to decrease in absolute value for increasing XUV photon energy, due to the decreasing time left for strong PE interactions. This trend is followed for direct, 2p-, and 3p-shake-up emission. At the sub-attosecond time scale, the onset of this expected decrease is not reached for 2s-shake-up emission for XUV photon energies up to 120 eV.
Figure <ref> extends the sub-shell-resolved EWS delays, shown in tEWS for emission along the XUV-polarization axis, to arbitrary PE-detection directions for XUV photon energies of 90, 100, and 110 eV.
For 3p and 3d shake-up emission this graph clearly shows the expected decrease in magnitude of the EWS delay for increasing XUV photon energy and a more pronounced dependence on the PE-detection direction in comparison to 2p shake-up emission. Noticeable is also the general increase of the delay magnitude with the principal and angular momentum quantum numbers of the residual shake-up state, consistent with the larger extent of the residual charge distributions.
Delays for 1s (direct), 2s, and 3s emission are angle independent, as shown in tEWS, and are not duplicated in this graph.
Averaging the EWS photoemission delays for n=2 and 3 according to Eqs. r_c_n and EWS_xuv results in the emission-angle-dependent absolute sub-shell averaged delays displayed in tEWS_ang, in addition to the angle-independent delays for direct XUV photoemission.
The angular dependence is most pronounced for shake-up emission at the lowest XUV photon energy (90 eV). In comparison with the sub-shell-resolved delays in tEWS_sub, the angular dependence of the sub-shell-averaged delays in tEWS_ang
tends to follow more closely the 3p than the 3d delays.
The less uniform angle dependence of the sub-shell-averaged delays for different photon energies, in particular, the crossing of n=3 delays in tEWS_ang,
is a result of the sub-shell and XUV-photon-energy- and angle-dependent emission probabilities P_n,l,m( r_2,t) employed as weights in Eq. r_c_n.
To keep the contribution of the EWS delay to the absolute XUV delay t_xuv in perspective, we note that for direct emission by 90 eV XUV pulses the non-convergent Coulomb delay amounts to t_C = -53.8,-57.0,-59.1 (as) at detector distances r_2=0.5,2,5 (m), respectively. The negative long-range Coulomb delay thus constitutes an advance that dominates the EWS delay.
§.§ The Coulomb-laser-coupling delay
§.§.§ Definition of CLC delays for hydrogen atoms
The CLC delay accounts for the long-range PE-phase accumulation. For emission along the XUV- and IR-pulse-polarization directions it does not noticeably depend on the target electronic structure and is approximately identical for H and He <cit.>. It is largely a classical quantity, found to agree to within ∼ 1 as between quantum and classical-trajectory Monte Carlo calculations <cit.>. We assume its long-range classical behavior to apply for any final PE-momentum direction k (i.e., for any PE detection angle θ) and calculate it ab initio, following Refs. <cit.>, according to
t_CLC( k) = t^H_100_s( k) - t^H_100_EWS(k) ,
based on the streaking [t^H_100_s( k)] and EWS delay
t^H_100_EWS(k) = ∂/∂E arg⟨C( k)|z|1,0,0⟩
= ∂σ_l=1/∂E |_E=k^2/2
for ground-state hydrogen with outgoing Coulomb waves
⟨ r|C( k)⟩ =
1/r√(2/π)∑_l,m[i^l/k e^-iσ_l Y_l,m^*( k̂)] F_l(η, kr) Y_l,m( r̂) .
We note that t^H_100_EWS is angle independent due to the symmetry of the hydrogen ground state.
Figure <ref> shows the CLC delay for hydrogen as a function of the PE-detection angle for final PE energies between 17.03 and 85.41 eV, as given by Eq. HCLC. As expected, t_CLC decreases with increasing XUV photon energy (or asymptotic PE kinetic energy).
§.§.§ Classical-trajectory CLC calculation
To investigate the angular dependence for PE emission to large angles in CLC, we extended the classical calculation for emission along the XUV-pulse-polarization direction in Ref. <cit.> and performed an angle-resolved classical-trajectory simulation. We assume instant absorption of a single XUV photon with energy E_XUV to provide PEs at aligned initial positions and momenta, r_0 ∝z and k_0 ∝z, respectively, with total energy
E_PE = E_XUV + E_100^(Z) = k_0^2/2 - Z/r_0 ,
where E_100^(Z) is the ground-state energy of a hydrogen-like atom. This equation gives k_0 as a function of r_0. We determine r_0 = 0.55 a.u., such that the classically calculated CLC delay, t_CLC^cl, matches the results in CLC at zero degrees, and apply this value for all energies considered in this figure.
Propagating classical PE trajectories starting at ( r_0, k_0), subject to the same IR field we use for the He streaking calculations, for a range of XUV–IR pulse delays τ, we obtain τ-dependent asymptotic PE momenta, i.e., classical streaking traces, that allow us to extract the classical streaking delay t_s^H_100,cl. Substitution into Eq. HCLC now yields delays t_CLC^cl that are independent of the PE detection direction, suggesting that the angle dependence of t_CLC in CLC is a quantum-mechanical effect.
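The essence of this classical simulation is sketched below for emission along the common polarization axis (θ = 0): a 1D trajectory in the -Z/z potential plus IR field, launched at r_0 = 0.55 a.u. with k_0 from Eq. (E_PE). The IR pulse parameters here are generic stand-ins; subtracting t_EWS^H_100 from the fitted delay, per Eq. HCLC, then gives t_CLC^cl.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

Z = 1.0                                    # residual charge seen by the PE
omega, E0, sigma = 0.057, 0.003, 52.7      # illustrative IR pulse (a.u.)
E_IR = lambda t: E0 * np.exp(-t**2 / (2 * sigma**2)) * np.cos(omega * t)

def A_IR(t):                               # A(t) = -int_{-inf}^{t} E(t') dt'
    ts = np.linspace(-8 * sigma, t, 2000)
    return -np.trapz(E_IR(ts), ts)

def p_final(tau, E_xuv=90.0 / 27.2114, r0=0.55):
    k0 = np.sqrt(2.0 * (E_xuv - Z**2 / 2.0) + 2.0 * Z / r0)   # Eq. (E_PE)
    rhs = lambda t, y: [y[1], -Z / y[0]**2 - E_IR(t)]          # Coulomb + IR force
    return solve_ivp(rhs, (tau, tau + 8000.0), [r0, k0],
                     rtol=1e-10, atol=1e-12).y[1, -1]

tau = np.linspace(-110.0, 110.0, 20)
trace = np.array([p_final(t) for t in tau])
model = lambda t, a, b, ts: np.array([-a * A_IR(x + ts) + b
                                      for x in np.atleast_1d(t)])
(a, b, t_s), _ = curve_fit(model, tau, trace, p0=(1.0, trace.mean(), 0.0))
print(f"classical streaking delay t_s = {t_s * 24.189:.1f} as")
```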
§.§ The dipole-laser-coupling delay
Exposure of the residual dipole vec_dipole to the electric field of the IR streaking pulse results in the energy shift
Δ E_dip = - d_z^(n)( k) ℰ_IR(τ)
in Eq. E-dip.
This shift affects the entangled PE–residual-ion system and results in the DLC contribution t_DLC to the streaking delay in Eq. t_s.
The effect of t_DLC for PE emission along the XUV-polarization direction is discussed in Ref. <cit.>. In this subsection we examine t_DLC( k) for arbitrary PE detection directions k.
Applying Eq. tDLC, we calculate the coefficients c_n,α,m( k) according to Eq. c2s.
For the XUV-pulse parameters considered in the present work, we find that the ratios of the magnitudes of the complex-valued coefficients c_n,α,m( k) remain approximately constant over the spectral profile of the PE wave packet. This allows us to evaluate these coefficients at the central PE momentum k = √(2(ω_XUV - I^(n)_p)). Figure <ref> illustrates this proportionate change for the example of n=2 shake-up emission.
The approximate DLC delays t_DLC for n=2 and n=3 shake-up ionization of para-helium and XUV pulses with photon energies of 90, 100, and 110 eV show a striking PE-detection-direction dependence at large angles that is easily traced to the 1/cosθ factor in the dipole model given by Eq. tDLC (tDLC_model).
Within this model, direct photoemission has no DLC delay, since d_z^(0)=0.
The sign change for shake-up emission at θ≈ 40^∘ indicates a change in the orientation of the residual dipole at larger angles.
We validate the DLC model results in tDLC_model by computing streaking, EWS, and CLC delays ab initio. The ab initio DLC delays shown in tDLC are obtained from Eqs. t_s and HCLC as
t_DLC( k) = t_s( k) - t^H_100_s( k)
-[t_EWS( k) - t^H_100_EWS( k)] ,
in good overall agreement with the model DLC delays for shake-up emission in tDLC_model. For most angles, this agreement is better than 10 as. It becomes worse at large angles near θ = 90^∘, possibly due to a combination of the model assumptions leading to Eq. tDLC and
small numerical signal-to-noise ratios (very small yields) at large detection angles. As seen in Tab. <ref>, for the special case of PE emission along the XUV-polarization direction (0^∘), our DLC-model delays are in good quantitative agreement with the correlation delays
derived from measured streaking delays in Fig. S10 of the supplementary material of Ref. <cit.>.
The DLC model does not explain the angle dependence of the ab initio DLC delay for direct emission shown in the upper graph of tDLC. In addition, at the as timescale, it is at variance with our ab initio calculated delays (Tab. <ref>).
A possible cause for this variation is that the approximate calculation of t_CLC for hydrogen according to Eq. HCLC does not accurately represent the CLC delay for helium.
On the other hand, accurately calculating the CLC delay for direct photoemission from helium as
t_CLC^He( k) = t^n=1_s( k) - t_EWS^n=1(k)
for the n=1 curves in tDLC would merely illustrate the difference of the two definitions, t_CLC^He - t_CLC, instead of a deviation from the physically intuitive DLC model tDLC.
§.§ Residual multipole analysis
During photoemission, the electronic probability density of the residual ion keeps evolving while interacting with the PE. After a sufficiently long time after the photo-release by the XUV pulse, the PE wave-function overlap with the charge distribution of the residual electron is small enough for the conditional electronic probability density,
P^He^+( r, t) = |Ψ( r, r_PE ,t)|^2 / ∫|Ψ( r', r_PE, t)|^2 d^3r' ,
to be associated with the residual ion's electronic charge density, where the PE is treated as a classical particle located at the maximum of the PE wave-packet probability density, r_PE.
The multipole expansion of the electronic potential energy V^He^+ corresponding to the charge density P^He^+, assumed to be confined within a volume of radius a < r_PE, results in
V^He^+( r,t) = ∑_l=0^∞∑_m=-l^l 1/r^l+1 C_l,m(t) Y_l,m( r) (r > a) ,
C_l,m(t) = -4π/(2l+1)∫_r⩽ a r^l Y_l,m^*( r̂) P^He^+( r, t) d^3r .
Using individual terms of this expansion as effective potentials in separate SAE calculations allows the quantitative comparison of the effects of given multipole orders on PE spectra and streaking time delays. Performing such simplified one-electron calculations for the parameters of the present work, we found dominant monopole and dipole interactions, while quadrupole and higher multipole orders are negligible.
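A quadrature sketch for the expansion coefficients of Eq. (multipole), assuming the conditional density P has been sampled on a regular spherical grid (note SciPy's argument order for sph_harm):

```python
import numpy as np
from scipy.special import sph_harm

def multipole_coeff(P, r, theta, phi, l, m, a):
    """C_{l,m} = -4 pi/(2l+1) * int_{r<=a} r^l Y_{l,m}^* P d^3r, evaluated by
    midpoint quadrature; P is sampled on the (r, theta, phi) grid, shape (nr, nt, np)."""
    R, T, F = np.meshgrid(r, theta, phi, indexing="ij")
    Y = sph_harm(m, l, F, T)      # SciPy convention: sph_harm(m, l, azimuth, polar)
    w = np.where(R <= a, R**l * np.conj(Y) * P * R**2 * np.sin(T), 0.0)
    dr, dth, dph = r[1] - r[0], theta[1] - theta[0], phi[1] - phi[0]
    return -4.0 * np.pi / (2 * l + 1) * w.sum() * dr * dth * dph
# For the cases discussed here, only the monopole (l=0) and dipole (l=1, m=0)
# coefficients are non-negligible.
```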
A few snapshots of the residual charge-density evolution are shown in multi_pole
for the ionization of para-helium in 90 eV, 10^12 W/cm^2 XUV pulses with a FWHIM of 200 as (no IR pulse is present).
We assumed emission along the XUV-polarization direction (more specifically, the +z axis), such that only multipole terms with m=0 are present. The color-coded graphs show residual charge distributions P^He^+( r, t) for increasing classical PE distances r_PE from the atomic nucleus. Residual probability densities for direct and shake-up emission are displayed in Figs. <ref> (a) and (c), respectively. The corresponding coefficients C_1,0(t) of the dipole term in the multipole expansion multipole for direct and shake-up emission are shown in Figs. <ref> (b) and (d), respectively.
The oscillation period of the contribution for direct emission in multi_pole (b) is about 88 as, corresponding to an energy of 47 eV, close to the energy difference between the undistorted K and L shells of He^+, 40.8 eV. We therefore attribute these small-amplitude oscillations to weak quantum beats of the ground state against transiently populated n=2 levels. The rapid decay of these beats as the PE moves away from the residual ion illustrates the noticeable presence of electronic entanglement, even for direct emission and PE distances way outside the residual charge distribution.
For shake-up ionization the composition and evolution of the residual charge cloud are more complex than for direct emission [multi_pole (d)]. Now the dipole term starts oscillating with a period of about 120 as (35 eV). 0.7 fs after the ionization the oscillation period changes to 530 as (7.8 eV). These two quantum-beat frequencies are close to undistorted K- to L-shell and L- to M-shell excitation energies of He^+, respectively. As expected, the oscillation amplitude for shake-up emission in multi_pole (d) is significantly larger than for direct emission in multi_pole (b). In addition, the oscillations for shake-up ionization persist long after those for direct emission have decayed. These findings are in agreement with an intuitively expected stronger correlation of the PE with electronically excited He^+.
§.§ The polarization effect of the IR field
In addition to the XUV photoionization dynamics generating polarized residual ions, the electric field of the IR streaking pulse contributes to the polarization of the residual ion and may thereby affect angle-dependent shake-up photoemission yields and streaking delays. In this section we investigate this effect and show that, for the assumed IR-pulse intensity in the present work, the effect of the residual-ion polarization by the streaking pulse can be neglected.
For hydrogen atoms this effect was investigated before by Baggesen and Madsen <cit.>, who pointed out that the polarization of a target atom by the streaking IR-pulse may influence the angle-dependence of streaked photoemission yields and delays. They showed that, for sufficiently strong IR streaking fields, the streaked PE asymptotic energy acquires an additional phase shift relative to
the IR vector potential A_IR(t), due to the fact that the dipole-laser coupling is proportional to the IR electric field ℰ_IR(t) and thus phase-shifted by 90^∘.
To quantify the distortion of the residual ion by the streaking field, we calculate its static polarizability <cit.>,
β_n, l, 0 = β_n, l, 0^b + β_n, l, 0^c
= 2∑_n'≠ n,l' |⟨n',l',0|z|n,l,0⟩|^2/(E_n'^(0)-E_n^(0))
+ 2∑_l'∫_0^∞ dk |⟨k,l',0|z|n,l,0⟩|^2/(k^2/2-E_n^(0)) ,
based on virtual excitations to hydrogenic bound states, {|n,l,m⟩}, (β_n, l, 0^b) and spherical partial-wave continuum states, |k,l',m⟩, (β_n, l, 0^c) for the nuclear charge Z=2. Our numerical values for the lowest four states of He^+ are in close agreement with the scaled theoretical values in Refs. <cit.> (Tab. <ref>). The literature values in Tab. <ref> are calculated for hydrogen atoms and multiplied by 1/16 to account for the scaling β_n,l,0∝ 1/Z^4 <cit.>.
For the IR-pulse peak intensity assumed in this work of 4× 10^11 W/cm^2, corresponding to a peak-electric-field strength of ℰ_IR,0 = 3.4×10^-3 a.u., the distortion of the electric dipole moment of the ground state is
Δ d_1,0,0,max = 2 ℰ_IR,0 β_1,0,0 = 1.9×10^-3 ,
where the factor of two derives from the maximal variation of the IR electric field over one IR oscillation period.
For the n=2 shell of He^+, the largest dipole polarizability is β_2,1,0 = 13.49, resulting in
Δ d_2,1,0,max = 2ℰ_IR,0β_2,1,0≈ 9.2×10^-2 .
The maximal IR-pulse-induced dipole in the n=2 shell is thus about 50 times larger than for the ground state, and the distortion of the ground state may be neglected in comparison with the distortion of excited states by the streaking IR field.
At the same time, the maximal IR-pulse-induced dipole in the n=2 shell in Eq. (<ref>) is about 16 times smaller than the dipole moment of 1.5 a.u. of the XUV-pulse-excited n=2 Stark states according to Tab. <ref> in Appendix <ref>. This is consistent with the numerical example
in Fig. <ref>, where the maximal dipole coefficient for n=2 shake-up emission exceeds the maximal dipole coefficient for direct emission by an order of magnitude. The distortion by the IR streaking pulse of the XUV-photoemission-produced residual dipole is thus a small correction.
This difference is less pronounced for n=3 shake-up excitation. Here, the XUV-photoemission-produced residual-dipole moment is 4.5 a.u. (cf., Tabs. <ref> and <ref> in Appendix <ref>), exceeding by a factor of 6.5 the IR-pulse-induced dipole distortion,
Δ d_3,1,0,max = 2ℰ_IR,0β_3,1,0≈ 0.69 .
These comparisons indicate that, for shake-up ionization into the K and L shells of He^+, the distortion of the XUV-produced dipole by the IR pulse is negligible, while for n=3 shake-up ionization further scrutiny is needed in this regard.
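The arithmetic behind these estimates is easily reproduced; a small sketch in atomic units (using the standard intensity-to-field conversion 1 a.u. = 3.51×10^16 W/cm^2):

```python
E0 = (4e11 / 3.51e16) ** 0.5      # peak IR field (a.u.) at 4e11 W/cm^2, ~3.4e-3
dd_1s  = 2 * E0 * 4.5 / 16        # hydrogen beta = 4.5 a.u., scaled by 1/Z^4
dd_2p0 = 2 * E0 * 13.49           # largest n=2 polarizability of He+
print(E0, dd_1s, dd_2p0)          # -> ~3.4e-3, ~1.9e-3, ~9.2e-2
print(1.5 / dd_2p0, 4.5 / 0.69)   # -> ~16 and ~6.5, the ratios quoted above
```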
We note that once the IR distortion of He^+ becomes relevant, the distinction of three separate delay contributions to the streaking delay in Eq. (<ref>) breaks down, and the EWS term needs to be modified to account for the influence of the IR pulse during photoemission for small XUV–IR pulse delays. In the present work, the fact that the three contributions in Eq. (<ref>) correctly add up to the streaking delay provides further evidence for the IR-pulse distortion of He^+ being negligible.
Furthermore, the classical trajectory analysis in Sec. <ref>, excluding the IR-pulse distortion of the residual ion, agrees for emission angles below ≈60^∘ with the quantum result and further supports the very minor role of the IR distortion of the residual ion in the present work.
This analysis of the distortion of the residual ion by the IR pulse may provide a reference for future streaking-pulse parameters that break the non-distorting dipole assumption. For a discussion of the IR-pulse distortion of helium in the entrance channel, we refer to Ref. <cit.>. The IR-pulse distortion of the neutral helium target affects direct and shake-up channels equally and thus tends to cancel in the calculation of relative streaking delays.
§ SUMMARY AND CONCLUSIONS
In this work we confirmed the results of previous theoretical and experimental investigations for streaked direct and shake-up photoemission from helium atoms along the IR- and XUV-pulse linear polarization. We performed ab initio and model calculations and extended these earlier investigations to arbitrary PE detection directions, scrutinizing EWS, CLC, and DLC contributions for direct and (n=2,3) shake-up ionization.
For large PE detection angles we found that CLC delays become angle dependent and identified this dependence as a purely quantum-mechanical effect that cannot be explained with classical-trajectory simulations. For shake-up ionization we found dominant DLC contributions to the streaking delay. Confirming a physically intuitive model for the DLC effect, we validated the model predictions for the DLC delay against ab initio calculations over a large range of PE detection angles.
Distinguishing the production of polarized residual ions due to the XUV-photoemission dynamics from possible contributions from its time-dependent polarization by the oscillating IR pulse, we estimated the IR-pulse-induced polarization contribution. The agreement of our model DLC calculations, which do not include the IR distortion of the XUV-pulse-generated dipole, with the DLC-delay contribution derived from ab initio calculations provides further evidence that the IR-pulse distortion of the residual dipole is negligible.
§ COUPLING-MATRIX ELEMENTS AND POISSON INTEGRAL
Each term in the Hamiltonian of Eq. TDSE can be represented as a band-diagonal matrix in an FE-DVR basis, and the computational cost for the propagation by one time step scales linearly with the total number of grid points.
As basis functions, we use polynomials defined by Gauss-Lobatto quadrature, scaled to fit each FE interval for the FE-DVR expansion of the radial partial-wave functions,
ψ_λ(r_1, r_2, t) = ∑_i,j c_i,jϕ_i(r_1)ϕ_j(r_2) .
These basis functions are localized on the grid points of the 2D numerical (r_1 and r_2) grid and approximately orthonormal,
ϕ_i(x_j) ∼δ_ij, ∫ϕ_i(x)ϕ_j(x) dx ≈δ_ij ,
where {x_j} designates the set of grid points for either one of the radial coordinates, r_1 and r_2.
The numerical efficiency of the FE-DVR method derives from these polynomials being adapted to and defined on the employed numerical grid.
The flexibility of the FE-DVR method stems from its applicability to any given (possibly unequally spaced) set of FE boundary points.
§.§ Electron-electron interaction
The electronic interaction term, V_12, in the Hamiltonian is expanded in generalized spherical harmonics,
V_12 = 4π∑_l=0^∞(-1)^l/√(2l + 1)r_ < ^l/r_ > ^l+1𝒴_l,l^0 ( r_1, r_2) .
For the angular part of the electronic coupling-matrix elements we have
⟨𝒴_λ|𝒴_l,l^0|𝒴_λ'⟩
= (2l+1)/(4π) δ_L,L'√((2l_1'+1)(2l_2'+1)(2L+1))
×( l l_1' l_1; 0 0 0 )( l l_2' l_2; 0 0 0 ){ l l_1' l_1; l l_2' l_2; 0 L L }
and calculate the radial matrix elements accurately using Poisson integrals <cit.>,
⟨ϕ_i ϕ_j| r_<^l/r_>^l+1 |ϕ_i'ϕ_j'⟩ = δ_i,i'δ_j,j'
×[ (2l+1)/(r_i r_j √(ω_i ω_j)) (2 T_l)^-1_i,j + r_i^l r_j^l/r_max^2l+1 ] ,
where
(T_l)_ij = ⟨ϕ_i|[-1/(2m) d^2/dr^2 + l(l+1)/(2mr^2) ]|ϕ_j⟩
= -1/(2m) ⟨ϕ_i|d^2/dr^2|ϕ_j⟩ + δ_i,j l(l+1)/(2mr_i^2) .
§.§ External-field interaction
The length-gauge external-field-interaction term H_Fi in the TDSE [Eq. TDSE] contributes to the matrix element in Eq. H_couple and is evaluated according to
⟨𝒴_λ|ℰ(t) z_i|𝒴_λ'⟩ = ℰ(t) r_i F_λ,λ'^(i) (i=1,2) ,
where the angular integrals are
F_λ,λ'^(i) = ⟨𝒴_λ|cosθ_i|𝒴_λ'⟩
= √(9(2l'_1+1)(2l'_2+1)(2L'+1)/(4π)^2) ( δ_i,1 l'_1 l_1; 0 0 0 )
×( δ_i,2 l'_2 l_2; 0 0 0 )( 1 L' L; 0 0 0 ){ δ_i,1 l'_1 l_1; δ_i,2 l'_2 l_2; 1 L' L } (i=1,2) ,
and the curly brackets denote the Wigner 9j symbols.
In the velocity gauge, H_Fi couples the angular function as
⟨𝒴_λ|-iA(t) ∂/∂z_i|𝒴_λ'⟩ = -iA(t) G_λ,λ'^(i) (i=1,2) ,
where
G^(i)_λ,λ' = F^(i)_λ,λ' ∂/∂r_i + g^(i)_λ,λ'/r_i
and
g_λ,λ'^(i) = -⟨𝒴_λ|cosθ_i + sinθ_i ∂/∂θ_i|𝒴_λ'⟩ .
In particular, the hydrogenic field-interaction matrices are <cit.>
g_λ,λ'^(1) =
δ_l_2, l_2'∑_m_1 ⟨l_1 m_1 l_2 -m_1|L 0⟩ ⟨l_1' m_1 l_2' -m_1|L' 0⟩
× (δ_l_1',l_1+1 l'_1 𝒞_l_1,m_1 - δ_l_1,l'_1+1 l_1 𝒞_l'_1,m_1)
and
g^(2)_λ,λ' =
δ_l_1, l_1'∑_m_2 ⟨l_1 -m_2 l_2 m_2|L 0⟩ ⟨l_1' -m_2 l_2' m_2|L' 0⟩
× (δ_l_2',l_2+1 l_2' 𝒞_l_2,m_2 - δ_l_2,l'_2+1 l_2 𝒞_l'_2,m_2) ,
with
𝒞_l,m = √( ((l+1)^2-m^2) / ((2l+1)(2l+3)) ) .
§ STARK STATES
The following tables give the matrix element of the unitary transformation between the asymptotic Stark states |n,α,m⟩ in Eq. Starkstate and Z=2 hydrogenic bound states |n,l,m⟩ within a given shell n.
The first column shows the eigenvalues of the electric dipole moment d_z^(n,α,m) for each Stark state.
These matrices are calculated by diagonalizing z in each degenerate subspace spanned by all states {|n,l,m⟩} for fixed n,m <cit.>. The subspaces with n=2, m=± 1 and n=3, m=± 2 are non-degenerate and have zero polarization with Stark quantum number α=0. The z component of the dipole in Eqs. (<ref>) and (<ref>) is calculated as
d^(n)_z( k) = -⟨ψ^(n)_He^+( k)|z|ψ^(n)_He^+( k)⟩
= -𝒩^2 ∑_α,m |c_n,α,m( k)|^2 d_z^(n,α,m) ,
where the coefficients c_n,α,m( k) are defined in Eq. c2s.
§ OPTIMIZATIONS
§.§ Selection rules
For para-helium Ψ( r_1, r_2) is symmetric with respect to electron exchange. The spherical harmonics satisfy the relation
𝒴_λ( r_1, r_2) =
(-1)^l_1+l_2-L𝒴_λ( r_2, r_1) ,
which implies
ψ_l_1, l_2^L(r_1, r_2,t) = (-1)^l_1 + l_2 - Lψ_l_2, l_1^L(r_2, r_1,t)
and eliminates about half of the partial waves in Eq. twoelwf, allowing us to keep only those with l_1 ⩽ l_2. This simplification is compromised for the rectangular (non-quadratic) grids we used, where r_1 and r_2 have different spatial ranges, and l_1 and l_2 require different maximal values for numerical convergence.
The symmetry properties of the Clebsch-Gordan and Wigner 9j coefficients imply that Eq. TDSEcouple couples only partial waves with the same parity of l_1+l_2-L. Since the initial ground state of para-helium (L=0, l_1=l_2) has even l_1+l_2-L, partial waves with odd l_1+l_2-L do not contribute.
§.§ Adaptive grid with optional sliding window
We propagate the radial wave function on an adaptive (r_1, r_2) grid, which we determine during the numerical calculation by adding a "detector region" outside the complex absorber potentials at the outer grid boundaries of r_1 and r_2 <cit.> (grid_layout). When the wave-function probability density in the detector region on either side reaches a threshold (typically 5×10^-9), the length of that side is automatically extended, keeping the existing grid points and basis polynomials unchanged. For single ionization, one radial coordinate has a fixed length and no detector region is needed along its grid direction. In tests we found that this technique can accelerate the numerical throughput for single-ionization calculations by a factor of 2 to 3.
§ ACKNOWLEDGMENTS
We thank Keegan Finger, who participated in the early stages of this work within the "Research Experiences for Undergraduates (REU)" program of the NSF, as well as Van-Hung Hoang and Aihua Liu for stimulating discussions.
This work was supported by NSF Grant No. 2110633 (Method development for the assessment of transient response in matter to, and electronic states in, arbitrary time-dependent fields)
and the Chemical Sciences, Geosciences, and Biosciences Division, Office of Basic Energy Sciences, Office of Science, US Department of Energy under Award DEFG02-86ER13491 (Attosecond electron dynamics).
|
http://arxiv.org/abs/2409.03034v1 | 20240904190813 | MDNF: Multi-Diffusion-Nets for Neural Fields on Meshes | [
"Avigail Cohen Rimon",
"Tal Shnitzer",
"Mirela Ben Chen"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
§ ABSTRACT
We propose a novel framework for representing neural fields on triangle meshes that is multi-resolution across both spatial and frequency domains.
Inspired by the Neural Fourier Filter Bank (NFFB), our architecture decomposes the spatial and frequency domains by associating finer spatial resolution levels with higher frequency bands, while coarser resolutions are mapped to lower frequencies.
To achieve geometry-aware spatial decomposition we leverage multiple DiffusionNet components, each associated with a different spatial resolution level.
Subsequently, we apply a Fourier feature mapping to encourage finer resolution levels to be associated with higher frequencies.
The final signal is composed in a wavelet-inspired manner using a sine-activated MLP, aggregating higher-frequency signals on top of lower-frequency ones.
Our architecture attains high accuracy in learning complex neural fields and is robust to discontinuities, exponential scale variations of the target field, and mesh modification.
We demonstrate the effectiveness of our approach through its application to diverse neural fields, such as synthetic RGB functions, UV texture coordinates, and vertex normals, illustrating different challenges.
To validate our method, we compare its performance against two alternatives, showcasing the advantages of our multi-resolution architecture.
Keywords: Triangle Meshes · Neural Fields · Multi-Resolution
§ INTRODUCTION
Recent advancements in machine learning have led to a surge of interest in solving visual computing problems using coordinate-based neural networks, known as neural fields. These networks parameterize the physical properties of scenes or objects across spatial and temporal dimensions. Neural fields have gained widespread adoption due to their ability to encode continuous signals over arbitrary dimensions at high resolutions, enabling accurate, high-fidelity, and expressive solutions <cit.>.
They have demonstrated remarkable success in a variety of tasks, including animation of human bodies <cit.>, mesh smoothing and deformations <cit.>, novel view synthesis <cit.>, mesh geometry and texture editing <cit.>, 3D reconstruction <cit.>, textured 3D reconstruction from images <cit.>, shape representation and completion <cit.>, and neural stylization of meshes <cit.>.
Despite their widespread success, these coordinate-based neural architectures remain vulnerable to spectral bias <cit.> and demand significant computational resources.
Among other generalizations, these shortcomings have been addressed through spatial decomposition strategies using grids <cit.>, which support rapid training and level of detail control.
Additionally, techniques that encode input data using high-dimensional features through frequency transformations, such as sinusoidal representations <cit.>, help mitigate the inherent low-frequency bias of neural fields <cit.>.
Wu et al. wu2023neural propose an architecture that bridges these two approaches. They demonstrate that employing different grid resolutions focused on distinct frequency components, combined with proper localization, achieves state-of-the-art performance in terms of model compactness and convergence speed across multiple tasks.
However, the proposed grid-based methods, including <cit.>, are designed for Euclidean spaces and do not account for the unique properties of non-Euclidean, irregular geometric domains like triangle meshes. Although adapting such methods to fit such structures via data modification has shown efficacy, it often overlooks the inherent characteristics of mesh data.
Notably, meshes typically represent smooth manifolds with defined geometry, offering potential for enhanced understanding and representation.
Furthermore, we aim to enhance the architecture's invariance to the multi-representational nature of mesh geometry, accommodating different resolutions and many equivalent vertex connectivities.
In this work, we introduce a novel geometry-aware framework for representing neural fields on triangle meshes that are multi-resolution across both spatial and frequency domains. Inspired by the Neural Fourier Filter Bank (NFFB) <cit.> and leveraging the geometry-aware DiffusionNet architecture <cit.>, our approach decomposes the spatial and frequency domains using multiple DiffusionNet components representing different spatial resolutions and controlling frequency bands using Fourier feature mappings at different scales.
We associate finer spatial resolution levels with higher frequency bands, while coarser resolutions are mapped to lower frequencies.
This wavelet-inspired decomposition, combined with a carefully designed network architecture, enables our method to effectively learn and represent complex neural fields, accurately capturing intricate details and frequency variations.
We demonstrate the efficacy of our approach through its application to diverse neural fields, including synthetic RGB functions, UV texture coordinates, and vertex normals, showcasing its robustness to discontinuities, exponential scale variations, and mesh modification.
§.§ Related Work
We highlight relevant work that is related to the key components of our method: architectures for learning on meshes that leverage mesh geometry and neural fields on non-Euclidean domains.
Learning on Meshes
Several works have proposed unique architectures to leverage mesh geometry and other structural properties for learning on meshes <cit.>.
Hanocka et al. hanocka2019meshcnn defined MeshCNN, a convolutional layer on meshes, by learning edge features and defining pooling operations through edge collapse. Milano et al. milano2020primal capture triangle adjacency in meshes through graphs of mesh edges and dual edges, and Smirnov et al. smirnov2021hodgenet learn spectral geometry elements to construct custom mesh features. DiffusionNet <cit.> leverages the heat equation and learns multiscale diffusion operations to propagate information across the manifold.
These architectures commonly focus on segmentation, classification and correspondence learning tasks. While they form the basic architecture of our work, they are typically incapable of capturing subtle differences in multiple resolutions, as demonstrated in the experimental results, Section <ref>.
Neural Fields
Neural fields have been increasingly used for learning functional representations in arbitrary resolutions, most commonly for Euclidean domains, e.g. <cit.>. The foundational work, NeRF <cit.>, a coordinate-based neural network for view synthesis, demonstrated the importance of positional encodings to facilitate learning of high frequency data by neural networks.
Subsequent works have used periodic activation functions <cit.>, wavelet-like multi-resolution decomposition <cit.>, and a decomposition to a cascade of band-limited neural fields <cit.>.
These works focus on Euclidean spaces, encoding non-Euclidean 2D manifolds as volumes, resulting in higher computational costs or failure to capture the manifold structure.
Neural Fields on Manifolds
Recently, a few works have proposed methods for learning neural fields on non-Euclidean domains <cit.>.
Bensaïd et al. bensaid2023partial leverages neural fields for learning partial matching of nonrigid shapes. They use intrinsic positional encodings and a neural representation in the spectral domain to interpolate between matched sparse landmarks of partial shapes.
NeuTex <cit.> represents meshes as a 3D volume in Euclidean space, but encodes texture with a 2D network. To enable texture representation and editing, they train mapping networks between the two spaces, which can be seen as learning a representation of the 2D surface.
Koestler et al. koestler2022intrinsic takes into account the manifold structure by using the eigenfunctions of the Laplace-Beltrami Operator as positional encodings, serving as point embeddings in the input of the trained neural network.
This approach is conceptually similar to the concatenation of DiffusionNet <cit.> and a Multi-Layer Perceptron (MLP), and we compare our method to such an architecture in Section <ref>.
<cit.> further extends this concept and learns a continuous field, independent of the manifold discretization, by mapping a series of mesh poses to an implicit canonical representation and learning surface deformations fields for each pose. This approach requires a series of related inputs, such as a human mesh in different poses.
§.§ Contribution
To summarize, our contributions are as follows:
* We propose a novel geometry-aware framework for neural fields representation on triangle meshes that is multi-resolution across both spatial and frequency domains.
* We show that our method attains high precision in learning diverse neural fields, such as synthetic RGB functions, UV texture coordinates, and vertex normals, illustrating different challenges.
* We show that our method outperforms DiffusionNet in learning high-detail functions. Moreover, we provide an ablation study comparing our model against a single-resolution variant, demonstrating the efficacy of our multi-resolution approach.
§ BACKGROUND
Our architecture draws inspiration from the architecture proposed by Wu et al. wu2023neural, and builds upon two main works: DiffusionNet <cit.> and Fourier feature mapping <cit.>. For completeness, we provide here a brief review of these works.
§.§ DiffusionNet
DiffusionNet <cit.> is a discretization agnostic architecture for deep learning on surfaces.
The architecture consists of successive identical DiffusionNet blocks. A central feature of each DiffusionNet block is the use of learned diffusion based on the heat equation to propagate information across the surface.
This diffusion is discretized via the Laplacian 𝐋 and mass M matrices of the surface. In our work, we use the cotangent-Laplace matrix, which is ubiquitous in geometry processing applications <cit.>.
For efficient diffusion computation, the authors propose to use a spectral method that utilizes the k smallest eigenpairs ϕ_i, λ_i of the generalized eigenvalue problem Lϕ_i = λ_i Mϕ_i.
The diffusion layer h_t(u) corresponding to time t is implemented by projecting the input feature channel u onto this truncated basis, scaling the coefficients by e^-λ_i t, and projecting back:
h_t(u) := Φ[ e^-λ_1 t; e^-λ_2 t; ⋮ ]⊙ (Φ^T 𝐌u)
where ⊙ denotes the Hadamard (elementwise) product and Φ, Λ are the matrices of generalized eigenvectors and eigenvalues, respectively.
The learned diffusion times, optimized per feature channel, control the spatial support ranging from purely local to totally global. Here, we have briefly reviewed only the aspects critical for understanding our method; see <cit.> for further details.
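In code, the diffusion layer of Eq. (<ref>) is a projection onto the truncated basis, a per-channel exponential filter, and a back-projection. A PyTorch sketch (assuming a lumped, diagonal mass matrix and one learned diffusion time per channel):

```python
import torch

def spectral_diffusion(u, t, Phi, evals, mass):
    """h_t(u) = Phi diag(exp(-lambda_i t)) (Phi^T M u), applied per channel.
    u: (V, C) vertex features, t: (C,) learned diffusion times,
    Phi: (V, k) generalized eigenvectors, evals: (k,), mass: (V,) lumped mass."""
    coeff = Phi.T @ (mass[:, None] * u)               # (k, C) spectral coefficients
    decay = torch.exp(-evals[:, None] * t[None, :])   # (k, C) low-pass filter
    return Phi @ (decay * coeff)                      # (V, C) diffused features
```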
§.§ Fourier Feature Mapping
The work by Tancik et al. tancik2020fourier addresses the problem of "spectral bias" in coordinate-based multi-layer perceptrons (MLPs), which refers to their inherent limitation in accurately modeling high-frequency components due to the rapid decay of eigenvalues in their neural tangent kernels (NTKs) <cit.>.
The authors propose using a Fourier feature mapping that applies a non-linear transformation to the input coordinates before passing them to the MLP.
They report a Gaussian mapping as most effective, where input coordinates are multiplied by random Gaussian matrices to produce high-dimensional Fourier features.
Theoretically, they show that this mapping transforms the NTK into a stationary kernel with a tunable bandwidth. This bandwidth, which determines the width of the kernel’s effective frequency spectrum, is controlled by the scale (standard deviation) of the Gaussian matrices.
A larger scale allows representing higher frequencies, overcoming spectral bias.
§.§ Neural Fourier Filter Bank (NFFB)
Wu et al. wu2023neural introduce a novel neural field framework named "Neural Fourier Filter Bank" (NFFB) that decomposes the target signal jointly in the spatial and frequency domains, inspired by wavelet decomposition <cit.>.
The core idea is to utilize multi-layer perceptrons (MLPs) to implement a low-pass filter by leveraging their inherent frequency bias, and to employ grid features at varying spatial resolutions alongside Fourier feature mappings <cit.> at different scales to create a high-pass filter.
A novel aspect of this framework is the correlation of finer spatial resolutions with higher frequency bands, whereas coarser resolutions correspond to lower frequencies. Fourier feature mappings are applied at scales that match the respective spatial resolutions of each grid feature.
The proposed architecture feeds the high-frequency components into sine-activated MLP layers at appropriate depths, mimicking the sequential accumulation of higher frequencies on top of lower frequencies in wavelet filter banks.
This wavelet-inspired decomposition into spatial and frequency components, coupled with the association of specific resolutions with corresponding frequencies, allows for efficient learning of detailed signals while maintaining model compactness and fast convergence.
§ METHOD
We propose a multi-resolution framework that facilitates representing neural fields on meshes across both spatial and frequency domains. As illustrated in Figure. <ref>, our pipeline comprises three key stages: (1) Diffusing features across mesh vertices via multiple DiffusionNet components (Section <ref>) to capture spatial variations, (2) Transforming these diffused features through Fourier feature mapping (Section <ref>) to associate different frequency bands with the respective resolution levels, and (3) Composing the multi-resolution, multi-frequency signal representation using a sine-activated MLP in a wavelet-inspired manner (Section <ref>).
We delve into the details of each stage in the following sections.
[Inset figure: Illustrative example. Partitioning of mesh vertices into three groups.]
To clarify our architecture's pipeline, we use a synthetic example featuring the Chinese dragon mesh with 125K vertices and 250K faces. The mesh is divided into three groups - Red, Yellowish, and Blue - each linked to a distinct function depicted in the inset figure. These groups represent increasing frequencies: Red corresponds to ϕ_1, Yellowish to ϕ_125, and Blue to ϕ_500, where ϕ_j ∈ℝ^n is the j-th eigenfunction of the Laplace-Beltrami operator on the mesh, and n is the number of vertices. We generate the target neural field by mapping the patchwork function to an RGB using the HSV colormap.
More details can be found in Subsection <ref>.
§.§ DiffusionNet Layers
Motivation
Aligned with the strategy in <cit.>, our pipeline's first stage inputs features into a multi-component "layer," each component associated with a different resolution band, representing varying spatial resolutions of mesh features. Unlike NFFB <cit.>, our architecture replaces each hash grid with a DiffusionNet component. As discussed in Section <ref>, DiffusionNet utilizes diffusion layers to facilitate spatial communication and optimizes diffusion support for each feature channel.
The choice of adopting the DiffusionNet architecture is based on two main reasons.
The first stems from its inherent compatibility with irregular data structures, specifically triangle meshes, as opposed to hash-grid structures suited for regular formats like images. Although previous adaptations of triangle meshes to regular-structured architectures have yielded favorable results, a geometry-aware approach like DiffusionNet has proven more accurate and efficient in various applications. Because of this, substituting the grid representation with DiffusionNet is particularly effective for mesh data. Additionally, DiffusionNet facilitates discretization-agnostic learning, enhancing the generalization capabilities of the overall architecture.
Second, DiffusionNet intrinsically facilitates a methodology akin to the multi-resolution hash-grid paradigm of NFFB. The diffusion time parameter can be utilized to adjust spatial resolutions via the initial values assigned to each component. Furthermore, employing the "spectral method" for the diffusion process enables each component to be associated with a distinct set of eigenvectors, enhancing their spatial resolutions as well.
Formally, the DiffusionNet component at the i-th level, δ_i, maps the input 3D coordinate x∈ℝ^3 to an F-dimensional feature space:
δ_i: ℝ^3 →ℝ^F.
Let N denote the number of DiffusionNet components.
Splitting the spectrum
Considering the total number of eigenvectors k_eig used for diffusion, we distribute the eigenvectors evenly across the levels, associating the eigenvectors corresponding to the lowest eigenvalues with level 1 and highest to level N.
For each level i ∈ [1, N], we define the range of eigenvector indices used for diffusion in the i-th DiffusionNet component as [r_m(i), r_M(i)] where
r := linspace(0, k_eig, N + 1)
r_m(i) := r(i) r_M(i) := r(i+1)
where linspace(start, end, steps) is a one-dimensional vector of size steps whose values are evenly spaced from start to end, inclusive.
The corresponding sets of eigenvectors Ψ_i and eigenvalues Λ_i used for diffusion at the i-th level are:
Ψ_i := {ϕ_j}_j=r_m(i)^j=r_M(i) Λ_i := {λ_j}_j=r_m(i)^j=r_M(i)
Splitting diffusion time
Recall that the diffusion time parameter in DiffusionNet controls the spatial resolution of diffusion, theoretically ranging from local to global scales. However, in practice, such a range is not fully realized, as shown in Sec. <ref>.
The diffusion process, as implemented by the "spectral method", acts as a low-pass filter due to the exponentiation e^-λ_j t, where t is the diffusion time and λ_j the j-th Laplacian eigenvalue. By creating multiple DiffusionNets, each with distinct eigenvalue ranges, we achieve a refined representation of high-frequency components.
Following the initialization scheme considered in Wu et al. wu2023neural for the Gaussian distribution
variance (as in our Equation (<ref>)), we initialize the diffusion times of the i-th DiffusionNet component by t(i), defined as
t(i) := t_base· (t_exp)^i
Typically t_base is set to the squared mean edge length of the mesh.
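As a minimal sketch, and assuming a tensor edge_lengths holding the mesh edge lengths (an illustrative input; the default value of t_exp is likewise an assumed hyper-parameter), the per-level diffusion times could be initialized as:

import torch

def init_diffusion_times(edge_lengths, N, F, t_exp=2.0):
    # t_base: squared mean edge length; t(i) = t_base * t_exp**i,
    # broadcast to the F feature channels of the i-th DiffusionNet component
    t_base = (edge_lengths.mean() ** 2).item()
    return [torch.full((F,), t_base * t_exp ** i) for i in range(1, N + 1)]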
Figure <ref> illustrates the output features of each level at the key pipeline stages, displaying the smoothest and least smooth feature channels per level and stage. See Supplemental Material Section 2 for measuring function smoothness.
The first row shows d_i := δ_i(x) for i∈[1,2,3]. Since we set F=1 for simplicity, only one feature is output at this stage. We observe that the functions at all three levels in this stage exhibit smooth behavior.
§.§ Fourier Feature mapping
As in NFFB, the Fourier feature mapping stage serves to associate each level of the multi-resolution representation with a distinct frequency band. Inspired by the Fourier feature mapping approach of <cit.>, we apply a sinusoidal transformation to the output features from the previous DiffusionNet stage.
In more details, the Fourier feature at the i-th level is defined as a mapping from the DiffusionNet output features at the i-th level d_i ∈ℝ^F to an m-dimensional feature space:
η_i(d_i) := [ sin(2π·d_i^T ·B_i,1), …, sin(2π·d_i^T ·B_i,m )]^T
where B_i,1, B_i,2, …, B_i,m are trainable parameters in ℝ^F forming the frequency transform coefficients on the i-th level, and m is a hyper-parameter.
The frequency ranges for each level are defined by the initialization of the B_i,j coefficients. Drawing on the Gaussian random Fourier feature mapping of Tancik et al. <cit.>, we set these coefficients using a Gaussian distribution with a mean of 0 and a level-specific variance σ_i (Equation (<ref>)).
Finer resolution levels, associated with higher frequencies, are initialized with greater variance, biasing them towards encoding higher-frequency signal components. This adaptive initialization allows each resolution level to naturally associate with specific frequency bands without pre-setting fixed ranges.
In practice, given σ_base, σ_exp∈ℝ, we initialize the i-th level coefficients with variance σ_i ∈ℝ defined by
σ_i := σ_base· (σ_exp)^i
where σ_base, σ_exp are hyper-parameters, and σ_exp≥ 1.
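A minimal PyTorch sketch of this mapping is given below; for simplicity it treats σ_i as the standard deviation of the initializer (rather than the variance), and the hyper-parameter defaults are assumptions:

import math
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    # eta_i: R^F -> R^m with trainable coefficients B_{i,j} initialized from N(0, sigma_i)
    def __init__(self, F, m, level_i, sigma_base=1.0, sigma_exp=2.0):
        super().__init__()
        sigma_i = sigma_base * sigma_exp ** level_i   # sigma_i = sigma_base * (sigma_exp)**i
        self.B = nn.Parameter(sigma_i * torch.randn(F, m))

    def forward(self, d):                             # d: (n, F) DiffusionNet features
        return torch.sin(2.0 * math.pi * d @ self.B)  # (n, m) Fourier features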
Referring again to Figure <ref>, the second row depicts the output features η_i(d_i) for i∈[1,2,3].
We observe that the frequency of the least smooth feature increases as the level increases.
§.§ Composing the final output
The next stage composes the final output using the Fourier transformed features η_i(d_i) ∈ℝ^m across levels i ∈ [1, N].
Two critical observations from <cit.> inform this process: First, features across levels are not necessarily orthogonal, calling for learned layers that combine them optimally and mitigate the non-orthogonality. Second, residual connections help aggregate and jointly update the multi-resolution features, maintaining consistent processing depth across all levels in the network.
We thus start by applying a sine-activated MLP <cit.> that takes in the Fourier features η_i(d_i) in a manner that sequentially accumulates higher-frequency components on top of lower-frequency components.
More formally, let us denote the i-th layer as L_i where i ∈ [1, N], using f_i to represent the output of L_i, and h_i to denote the combination of the i-th layer output with the next level's features:
f_i := sin (α_i ·𝐖_i ·h_{i-1} + 𝐛_i), h_i := f_i + η_i(𝐝_i)
where for ease of notation we define h_0 := 𝐱.
Here, 𝐖_i ∈ℝ^m × m and 𝐛_i ∈ℝ^m are the trainable weight and bias parameters in layer L_i, and α_i is analogous to the w_0 hyperparameter in SIREN <cit.>, acting as a frequency scaling factor that controls the frequency band this level focuses on representing.
Next, as illustrated in Figure <ref>, we establish residual connections by concatenating the outputs h_i ∈ℝ^m from each level and passing them through an additional MLP with ReLU activations, while also transferring them to the subsequent layer L_i+1 as described earlier.
Alternatively, as suggested in <cit.>, instead of concatenating {h_i}_i=1^N, one could pass each feature through a per-level linear layer O_i and sum the outputs to obtain the final feature representation. We refer to <cit.> for further details.
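The composition stage can be sketched as follows; this is an illustration under the simplifying assumption that the bias is folded into the linear layer (so α_i also scales 𝐛_i), and all names are ours rather than from a released implementation:

import torch
import torch.nn as nn

class ComposeLevels(nn.Module):
    def __init__(self, in_dim, m, N, out_dim, alphas):
        super().__init__()
        dims = [in_dim] + [m] * (N - 1)
        self.layers = nn.ModuleList(nn.Linear(d, m) for d in dims)
        self.alphas = alphas                  # per-level frequency scaling, like w0 in SIREN
        self.head = nn.Sequential(            # ReLU MLP over the concatenated {h_i}
            nn.Linear(N * m, m), nn.ReLU(), nn.Linear(m, out_dim))

    def forward(self, x, fourier_feats):      # fourier_feats: list of N tensors of shape (n, m)
        h, hs = x, []                         # h_0 := x
        for layer, alpha, eta in zip(self.layers, self.alphas, fourier_feats):
            f = torch.sin(alpha * layer(h))   # f_i = sin(alpha_i * (W_i h_{i-1} + b_i))
            h = f + eta                       # h_i = f_i + eta_i(d_i): residual connection
            hs.append(h)
        return self.head(torch.cat(hs, dim=-1))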
The third row in Figure <ref>, representing the output features h_i for i ∈ [1,2,3], exhibits features that are significantly less smooth than those in previous stages.
Further, at this stage we can see that for both the smoothest and least smooth features, higher levels correspond to increasingly noisy features.
To gain insight into the resolutions levels learned by our trained network, Figure <ref> depicts the output neural field representing the RGB function during evaluation, with either the first or last level disabled. Disabling level N generally results in a blurry output, reflecting lower frequency components, whereas disabling level 1 produces a high-contrast function tied to higher frequencies. We analyze this by zooming in on three areas across the ground truth (GT), our model’s predicted function, and outputs with level N or 1 disabled. The output from disabling level N is notably blurrier, capturing only basic texture outlines compared to the GT. Conversely, disabling level 1 enhances contrast, often exaggerating transitions. For instance, areas that are light blue in the GT and surrounded by similar light green regions appear as a more distinct blue, despite their subtle difference in hue in the GT.
§.§ Implementation Details
The implementation details such as the loss function, number of training epochs, and network size are adjusted for each experiment.
Outside of these customized components, the overall training setup remains consistent across all experiments.
We implement our method in PyTorch <cit.>, and utilize the Adam optimizer <cit.> with the default settings of β_1 = 0.9 and β_2 = 0.99.
The learning rate is set to 10^-4, and it is reduced by a factor of 0.7 every 700 iterations. We set the output dimension of DiffusionNet components as F=2, and the maximal index of the Laplacian eigenpair considered in the diffusion process, k_eig, is set to 500.
We run all experiments on a single NVIDIA A40 GPU.
For brevity, only the essential details are presented here; for a detailed description of the hyperparameters, see the Supplemental Material.
§ EXPERIMENTAL RESULTS
We evaluate our method on three neural fields: synthetic RGB function (Section <ref>), UV texture coordinates (Section <ref>), and vertex normals (Section <ref>).
Focusing on demonstrating our architecture, all models were trained using a supervised approach.
We compare against two baselines: a single DiffusionNet component, and our method with N=1, denoted as the One-Level model.
We refer to our method as the N-Level model where N>1, with N determined empirically for each experiment.
For all models, we report the results of the best-performing configuration.
§.§ Synthetic Example
To illustrate the effectiveness of our method, we start with a synthetic example, resembling the one in Section <ref>.
In this experiment we define the target field to be a 3-channel RGB function y_rgb∈ℝ^n × 3 defined on a mesh, where n denotes the number of mesh vertices.
We train the model by minimizing the mean square error (MSE), hence our loss function for this task is
ℒ_rgb(y) := 1/n‖y - y_rgb‖_2^2
Data.
We demonstrate this example using the Chinese lion mesh, composed of 50K vertices.
Neural Field Generation
To generate the function y_rgb, we first partition the mesh vertices into three groups based on their x coordinates, as visualized in Figure <ref>. We denote the Red, Yellowish, and Blue groups by group_1, group_2, and group_3, respectively.
We assign to each group a scalar function: g_1 := ϕ_1 ∈ℝ^n (constant), g_2 := ϕ_125∈ℝ^n, and g_3 := g_p ∈ℝ^n generated as Perlin noise on the mesh <cit.>.
Note that the frequency of the functions g_i increases with i.
We define a patchwork function q∈ℝ^n on the mesh such that q[group_i] = g_i[group_i]. y_rgb is then derived by mapping q, normalized to [0,1], to a HSV colormap by defining the Hue parameter. See y_rgb in the GT figures in Figure <ref>.
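The construction of y_rgb can be sketched as below. The text does not fix the exact x-coordinate thresholds, so an even three-way quantile split is assumed here, and phi and perlin are assumed to hold the precomputed eigenfunctions and Perlin noise:

import numpy as np
from matplotlib import cm

def make_target_rgb(verts, phi, perlin):
    # verts: (n, 3) vertex positions; phi: (n, >=125) LB eigenfunctions; perlin: (n,)
    x = verts[:, 0]
    group = np.digitize(x, np.quantile(x, [1 / 3, 2 / 3]))  # 0: Red, 1: Yellowish, 2: Blue
    g = [phi[:, 0], phi[:, 124], perlin]        # g_1 = phi_1, g_2 = phi_125, g_3 = g_p
    q = np.empty(len(x))
    for i in range(3):
        q[group == i] = g[i][group == i]        # patchwork function q
    q = (q - q.min()) / (q.max() - q.min())     # normalize to [0, 1]
    return cm.hsv(q)[:, :3]                     # map hue to RGB, drop the alpha channel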
Results
Figure <ref> shows the error distributions for the fields learned by the three models, clipping errors above 5 × 10^-4 for clearer visualization.
Our N-Level model outperforms the others, exhibiting fewer artifacts. To provide quantitative results as well, Figure <ref> presents the Cumulative Distribution Functions (CDFs) of vertex errors across four groups: all vertices, group_1 (Red), group_2 (Yellowish), and group_3 (Blue), quantifying the percentage of vertices at each error level.
For each vertex v, error is measured as MSE: 1/3‖y(v) - y_rgb(v)‖_2^2. The x-axis represents relative error, calculated by dividing vertex errors by the maximal error across all models for each subset. We note that the N-Level model shows superior performance for all groups except for the Red group, which corresponds to a constant function and is considered a trivial region.
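These curves can be reproduced with a short numpy sketch (names are illustrative):

import numpy as np

def relative_error_cdf(preds, y_rgb, mask=None):
    # preds: {model name: (n, 3) predicted RGB}; y_rgb: (n, 3) ground truth;
    # mask: optional boolean (n,) array selecting one vertex group
    errs = {k: ((y - y_rgb) ** 2).mean(axis=1) for k, y in preds.items()}  # per-vertex MSE
    if mask is not None:
        errs = {k: e[mask] for k, e in errs.items()}
    max_err = max(e.max() for e in errs.values())    # shared normalizer across models
    cdfs = {}
    for k, e in errs.items():
        rel = np.sort(e / max_err)                   # relative error, ascending
        cdfs[k] = (rel, np.arange(1, rel.size + 1) / rel.size)
    return cdfs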
§.§ Discontinuities and Exponential Scale Variations
We further evaluate our method on neural fields representing UV coordinates of textured meshes, which are typically non-continuous and exhibit exponential scale variations when generated by conformal parameterization. Demonstrating our method's robustness, Figure <ref> shows the learned UV coordinates for a multi-component mesh with highly non-continuous UV coordinates. Figure <ref> presents UV coordinates from a conformal map, highlighting our method's ability to handle exponential scale variations.
As in Section <ref>, we train the model by minimizing the mean square error (MSE), hence our loss function for this task is
ℒ_uv(y) := 1/n‖y - y_uv‖_2^2
where y_uv∈ℝ^n× 2 defines the UV texture coordinates of vertices.
§.§.§ Discontinuity of Mesh and UV Coordinates
Data
In this example, we use a textured Kangaroo mesh with a total of 10K vertices. The geometry of this mesh is composed of multiple connected components.
Results
Figure <ref> displays the ground truth texture mesh (b) and error distributions for the three models (c, d, e), with errors above 1 × 10^-5 clipped for visualization.
The bottom row shows the 2D UV coordinates for each model. On the left (a), the CDFs of vertex errors are shown. Notably, DiffusionNet exhibits high errors at distinct features like eyes, nails, and tail tips. In its UV coordinates (g), DiffusionNet tends to squash and distort smaller regions.
The One-Level model, while showing significant errors at the head area, exhibits less squashing than DiffusionNet but introduces distortions that cause overlaps with other texture components in UV (h).
Conversely, the N-Level model outperforms the others both qualitatively and quantitatively, with minimal distortions in its 2D UV coordinates, closely resembling the ground truth and with the best CDF results.
Figure <ref> compares three UV distortion metrics for each model: area distortion, angle distortion, and the percentage of flipped faces.
Area distortion is measured by the absolute difference from 1 of the ratio between ground-truth and predicted triangle areas, with values over 0.2 clipped. Angle distortion is measured by the absolute difference from 1 of the mean ratio between ground-truth and predicted triangle angles, with values exceeding 0.01 clipped.
The bottom row reports the percentages of flipped faces: 12% for DiffusionNet, 16% for One-Level, and 1% for the N-Level model.
Overall, the N-Level model outperforms the others across all metrics.
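The three metrics can be computed along the lines of the following sketch; the exact clipping and averaging conventions above are assumptions on our part:

import numpy as np

def signed_area(uv, faces):
    # uv: (n, 2) per-vertex UV coordinates; faces: (F, 3) vertex indices
    a, b, c = uv[faces[:, 0]], uv[faces[:, 1]], uv[faces[:, 2]]
    e1, e2 = b - a, c - a
    return 0.5 * (e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0])

def angles(uv, faces):
    p = uv[faces]                                    # (F, 3, 2) triangle corners
    out = []
    for i in range(3):
        u = p[:, (i + 1) % 3] - p[:, i]
        v = p[:, (i + 2) % 3] - p[:, i]
        c = (u * v).sum(1) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
        out.append(np.arccos(np.clip(c, -1.0, 1.0)))
    return np.stack(out, axis=1)                     # (F, 3) interior angles

def uv_distortions(uv_gt, uv_pred, faces):
    a_gt, a_pr = signed_area(uv_gt, faces), signed_area(uv_pred, faces)
    area_dist = np.abs(1.0 - np.abs(a_gt) / np.abs(a_pr))            # per-face area distortion
    angle_dist = np.abs(1.0 - (angles(uv_gt, faces) / angles(uv_pred, faces)).mean(1))
    flipped_pct = 100.0 * np.mean(np.sign(a_gt) != np.sign(a_pr))    # % flipped faces
    return area_dist, angle_dist, flipped_pct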
§.§.§ Exponential Scale Variations
Data
In this example, we utilize a truncated lion head mesh with 8K vertices and UV coordinates computed using conformal mapping <cit.>, which leads to exponential scale variations, most notably between the head and neck areas; see Figure <ref>.
Results
Figure <ref> displays the GT texture alongside the results of the three models.
The top row features the textured mesh, while the middle and bottom rows show zoomed-in regions of the texture and UV coordinates, respectively.
The DiffusionNet texture reveals high distortion areas, but both the One-Level and N-Level models accurately capture the UV field.
Figure <ref> evaluates three UV distortion metrics for each model. Area distortion values above 5×10^-3 and angle distortion values above 9×10^-4 are clipped. The bottom row notes flipped face percentages: 2% for DiffusionNet and 0% for both the One-Level and N-Level models.
Although the One-Level and N-Level models show similar performance, the N-Level model still demonstrates the highest accuracy in area and angle distortions.
§.§ Mesh Generalization
In this experiment, we demonstrate our architecture's ability to generalize across multiple versions of a single mesh. Starting with a base mesh, we generate several subdivided versions via a variant of Loop subdivision, as implemented in MeshLab <cit.>, which avoids adding new vertices if triangle edges are below a specified threshold. We apply subdivision iterations until the number of additional triangles is negligible. Since our base mesh is not overly coarse, not all triangles are subdivided in each iteration. However, before being fed into the network, each mesh is centered and normalized, so the added triangles significantly change the mesh embedding.
We aim to learn the neural field defined by mesh vertex normals, y_n ∈ℝ^n× 3. Focusing on the direction of these normals, we train the model by minimizing the mean cosine distance error <cit.>.
Thus, our loss function is given by:
ℒ_n(y) := 1/n∑_v( 1 - ⟨y(v), y_n(v) ⟩/‖y(v) ‖_2·‖y_n(v) ‖_2)
where y_n(v) ∈ℝ^3 is the normal vector at vertex v.
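In PyTorch this objective can be written compactly; the following sketch uses the built-in cosine similarity (the eps guard is our addition for numerical safety):

import torch
import torch.nn.functional as Fn

def cosine_distance_loss(y, y_n, eps=1e-8):
    # y, y_n: (n, 3) predicted and ground-truth vertex normals; direction-only objective
    return (1.0 - Fn.cosine_similarity(y, y_n, dim=1, eps=eps)).mean()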
Data
The base mesh used for the subdivision iterations is the smiling ogre mesh, which comprises 20K vertices. We then generated five additional subdivided versions, the largest containing 33K vertices and 65K faces. The dataset was split into training and testing sets; the training set comprises the meshes obtained after [0,1,2,3,4] subdivision iterations, and the test set contains only the mesh generated by the 5-th subdivision iteration.
Results
Figure <ref> displays the CDFs of vertex errors on the left, and the error distributions of the three models on the right. Both quantitatively and qualitatively, our N-Level model significantly outperforms the other two models.
§ CONCLUSIONS
Our multi-resolution framework shows strong capability in representing neural fields on triangle meshes, achieving high precision across various domains and functions.
Its detailed capture of fine features makes it ideal for high-precision tasks in computer graphics, such as UV learning, where a generally low error that suffices for applications such as segmentation is not enough, and accuracy close to machine precision is required.
This framework can be integrated into architectures addressing applications such as texture reconstruction from images and mesh stylization.
Additionally, it has the potential to serve as an effective feature extractor for various other high-precision tasks in geometry processing.
|
http://arxiv.org/abs/2409.03746v1 | 20240905175807 | Orbital Support and Evolution of CX/OX Structures in Boxy/Peanut Bars | ["Behzad Tahmasebzadeh", "Shashank Dattathri", "Monica Valluri", "Juntai Shen", "Ling Zhu", "Vance Wheeler", "Ortwin Gerhard", "Sandeep Kumar Kataria", "Leandro Beraldo e Silva", "Kathryne J. Daniel"] | astro-ph.GA | ["astro-ph.GA"] |
[email protected],
Department of Astronomy, University of Michigan, Ann Arbor, MI, 48109, USA
Department of Astronomy, Yale University, Kline Tower 266 Whitney Avenue, New Haven, CT 06511, USA
Department of Astronomy, University of Michigan, Ann Arbor, MI, 48109, USA
Department of Astronomy, School of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
Key Laboratory for Particle Astrophysics and Cosmology (MOE) / Shanghai Key Laboratory for Particle Physics and Cosmology, Shanghai 200240, China
Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, China
Department of Physics, University of Chicago, Chicago, Illinois 60637, USA
Max-Planck-Institut für Extraterrestrische Physik, Gießenbachstraße 1, 85748 Garching, Germany
Department of Astronomy, School of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China
Department of Astronomy & Steward Observatory, University of Arizona, Tucson, AZ 85721, USA
Department of Astronomy & Steward Observatory, University of Arizona, Tucson, AZ 85721, USA
Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA
§ ABSTRACT
Barred galaxies exhibit boxy/peanut or X-shapes (BP/X) protruding from their disks in edge-on views. Two types of BP/X morphologies exist depending on whether the X-wings meet at the center (CX) or are off-centered (OX). Orbital studies indicate that various orbital types can generate X-shaped structures. Here, we provide a classification approach that identifies the specific orbit families responsible for generating OX- and CX-shaped structures. Applying this approach to three different N-body bar models, we show that both OX and CX structures are associated with the x_1 orbit family, but OX-supporting orbits possess higher angular momentum (closer to x_1 orbits) than orbits in CX structures. Consequently, as the bar slows down, the contribution of higher angular momentum OX-supporting orbits decreases and that of lower angular momentum orbits increases, resulting in an evolution of the morphology from OX to CX. If the bar does not slow down, the shape of the BP/X structure and the fractions of OX/CX-supporting orbits remain substantially unchanged. Bars that do not undergo buckling but that do slow down initially show the OX structure and are dominated by high angular momentum orbits, transitioning to a CX morphology. Bars that buckle exhibit a combination of both OX- and CX-supporting orbits immediately after the buckling, but become more CX-dominated as their pattern speed decreases. This study demonstrates that the evolution of BP/X morphology and orbit populations strongly depends on the evolution of the bar angular momentum.
§ INTRODUCTION
In numerous observations of external disk galaxies viewed edge-on, boxy/peanut or X-shaped bulges (hereafter referred to as BP/X bulges) have been identified <cit.>. Within the Milky Way, a distinct BP/X-shaped structure was first discerned in the multi-parameter model of COBE/DIRBE images of the Galactic Bulge <cit.>.
The BP/X bulges have been consistently observed in a variety of N-body simulations.
Early simulations showed that BP/X bulges can form following a short-lived buckling event in a bar which results from an asymmetric bending of the bar out of the disk midplane <cit.>.
However, recent evidence suggests that orbital resonances may play a crucial role in forming and enhancing BP/X structures. One scenario for the formation of BP/X structures without a buckling event is 'resonant trapping,' where orbits are vertically excited by being trapped at the vertical Inner Lindblad Resonance (vILR) for significant periods <cit.>. Another mechanism is 'resonant sweeping,' in which orbits cross the vILR, become vertically heated, and remain that way after leaving the resonance <cit.>. <cit.> reviewed all three mechanisms for the formation of BP/X structures and found evidence for the resonant trapping mechanism only in an artificially vertically symmetrized model. They concluded that the resonant sweeping mechanism is likely more applicable in real galaxies without bar buckling. This conclusion was later confirmed by <cit.> and <cit.>.
BP/X bulges are classified into two categories based on their morphology, as described by <cit.>. The two categories are the off-centered X-shape (OX), where the X-wings do not intersect at the center (which look like >-<), and the centered X-shape (CX), where the X-wings cross at the center and in the disk plane (><). Figure <ref> presents examples of an OX bulge (NGC 1381) and a CX bulge (NGC 4710), displaying their S^4G images with the corresponding Gaussian-filtered unsharp masked versions.
Orbital analysis is a crucial tool for deciphering the building blocks of barred galaxies. While the orbital composition of BP/X bulges has been extensively studied, several aspects remain contentious. Key among these is the identification of specific orbit families that independently support CX and OX structures. Another is understanding how the evolution of the BP/X structure correlates with changes in the bar's characteristics, such as its pattern speed.
In many studies of three-dimensional analytical bar models <cit.>, the periodic orbits bifurcating from the x_1 family (referred to as x_1-tree orbits) are considered the backbone of X-structures. This includes orbit families such as x_1v_1 (⌣ or ⌢) and x_1v_2 (∞), which are associated with the (Ω_z:Ω_x = 2:1) resonance.
However, some other studies questioned these conclusions. <cit.> analyzed the orbital structure of Made-to-Measure (M2M) models for the Milky Way bar. They found 3D resonant orbits with (Ω_z:Ω_x = 2:1) or higher vertical resonances of the x_1 orbit family cannot fully explain the X-shape of the bar. In their models, the fraction of (Ω_z:Ω_x = 2:1) resonant orbits is relatively small, and they are predominantly located in the outer regions of the bar. Consequently, they proposed an alternative family of resonant boxlet orbits, termed `brezel orbits' associated with the (Ω_z:Ω_x = 5:3) resonance. These brezel orbits are posited to generate the X-shape in the inner regions of the bar.
<cit.> studied an N-body bar model to explore the orbits responsible for building the BP/X bulge. They also found a few x_1v_1 orbits (∼ 3 %) which are in the outer half of the bar (similar to <cit.> and <cit.>). The most populated resonant boxlet family in their N-body bar is fish/pretzel orbits (∼ 6 %) that are associated with the (Ω_x:Ω_y:Ω_z = 3:-2:0) resonance. They argued that no individual orbit family is the backbone of the BP/X bulge. They found that non-resonant box orbits (∼ 63 %), banana orbits (∼ 3 %), fish/pretzel orbits (∼ 6 %), and brezel orbits (∼ 1.5 %), each contributes to the formation of X-shaped structures. This finding underscores the relatively minor role of resonant orbits in making up the BP/X structure.
<cit.> confirmed that various types of orbits can support the BP/X structures. They examined the orbital structures of two N-body bar models. They classified the bar orbits only based on the ratio of the vertical oscillation frequency to the in-plane frequency (Ω_z / Ω_x). Three groups of orbits are considered:
1.55<Ω_z / Ω_x≤ 1.75, 1.75<Ω_z / Ω_x≤ 1.95, and 1.95<Ω_z / Ω_x≤ 2.05. Their findings demonstrated that each group is capable of forming an X-shaped structure, as illustrated in Fig. 9 in <cit.>.
They concluded that X-shaped structures are not formed by specific orbits serving as the backbone. Instead, these structures arise from the assembling of high-density regions of different types of orbits at their highest points.
The orbital composition of the CX and OX structures is discussed only in a few studies. <cit.> showed that periodic orbits around x_1v_2 support a CX profile (see Fig. 6 in <cit.>), while the sticky-chaotic orbits with initial conditions close to x_1v_1 and 3D quasi-periodic orbits around x_1 support an OX structure (see Fig. 7 in <cit.>). They also analyzed an N-body bar model presented in <cit.>, and found that sticky chaotic orbits support an OX profile, and periodic orbits build a CX structure. <cit.> showed the orbits with higher Ω_z / Ω_x are generating the OX shape, and those with lower Ω_z / Ω_x are contributing to the CX structure (see Fig. 9 in <cit.>). <cit.> conducted an orbital frequency analysis of live N-body models and confirmed that orbits with varying ranges of Ω_z / Ω_x contribute to the formation of an X-shaped structure. They found that during the initial stages of bar formation, prior to the buckling phase, the ratios Ω_z / Ω_x are higher. As the models evolve, the distribution of Ω_z / Ω_x shifts towards lower values. See Also <cit.>.
In this work, we revisit the orbital composition of BP/X bulges. While it is now understood that various orbit types can give rise to X-shaped structures, a universal classification applicable across models with diverse characteristics has not been established. Moreover, there is no clear explanation of what determines the differences between OX and CX structures, nor an understanding of how their orbital compositions differ.
Here, we employ an automated orbit classification method based on frequency analysis <cit.> to systematically investigate the orbit families that give rise to the OX and CX-shaped structures. Additionally, we explore the influence of the bar pattern speed on the development of OX/CX structures, as well as the types of orbits that constitute bars with different pattern speeds.
The paper is organized as follows. In Section <ref>, we provide
the details of the simulations used in this study. Section <ref> presents the description of the orbital analysis and classification method. The orbital structure of the BP/X bulges is studied in Section <ref>, and the discussion follows in Section <ref>.
§ N-BODY BAR MODELS
We study three N-body Milky Way-like models evolved using the GALAXY code <cit.>. Model A was kindly provided by J. Sellwood and is referenced as model C in <cit.>. This model consists of 10^6 equal-mass particles representing an exponential disk with a total mass of 4.21 × 10^10 M_⊙, and 10^6 particles as a live isotropic spherical Hernquist halo with a total mass of 4.21 × 10^11 M_⊙. This model is evolved for 6 Gyr, and the buckling instability is inhibited by imposing reflection symmetry about the mid-plane at each step of the simulation <cit.>.
We also study model run6000 and model C from <cit.>, which we refer to as model B and model C, respectively. Both models evolved under the same simulation setup. The first 9 Gyr of the evolution of Model B is used, after which it soon reaches a steady state, and model C evolved for 7.5 Gyr. See <cit.> for details.
Both model B and model C comprised 6 × 10^6 particles representing an exponential disk with a total mass of 5.37 × 10^10 M_⊙, and 4 × 10^6 particles in a live Navarro–Frenk–White (NFW) halo <cit.> with a total mass of 6.77 × 10^11 M_⊙. Model B is characterized by a higher initial radial velocity dispersion compared to model C.
To quantify and compare the properties of the bars, we utilize standard measures, the bar amplitude (A_{m=2}) and the buckling amplitude (A_ buck), following e.g., <cit.>. These parameters are defined by employing the m = 2 symmetry mode in an azimuthal Fourier expansion of the disk viewed face-on. These quantities are normalized by the m = 0 mode and defined as follows:
A_m=2=|∑_k m_k e^2 i ϕ_k/∑_k m_k|
and
A_buck =|∑_k z_k m_k e^2 i ϕ_k/∑_k m_k|.
where m_k, ϕ_k, and z_k are the mass, azimuth, and vertical position of the k-th particle, respectively. We measure the pattern speed of the models over time using the approach developed by <cit.>, which makes it feasible to determine the pattern speed from a single snapshot. Fig. <ref> presents the evolution of several bar properties: buckling amplitude (A_buck), bar amplitude (A_{m=2}), bar length, and bar pattern speed (Ω_p), for model A (orange), model B (red), and model C (blue).
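For reference, both amplitudes can be evaluated from a single face-on snapshot with a few lines of numpy; the array names below are illustrative:

import numpy as np

def bar_amplitudes(m, x, y, z):
    # m: particle masses; (x, y, z): particle positions with the disk viewed face-on
    w = m * np.exp(2j * np.arctan2(y, x))      # m = 2 azimuthal Fourier weights
    A_m2 = np.abs(w.sum() / m.sum())           # bar amplitude A_{m=2}
    A_buck = np.abs((z * w).sum() / m.sum())   # buckling amplitude A_buck
    return A_m2, A_buck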
Model A does not undergo a buckling event, and the BP/X structure gradually forms, becoming more pronounced by the end of the simulation. The bar strengthens and grows until ∼ 5 Gyr, and continuously slows down by the end of the simulation. Model B experiences two buckling events, approximately at 3.8 Gyr, and 7.7 Gyr, followed by a gradual increase in bar amplitude and associated decrease in pattern speed throughout the evolution. In contrast, model C undergoes only one strong buckling event early in its evolution at around 2.8 Gyr after which the bar ceases to strengthen and grow and maintains a steady pattern speed until the end of the simulation.
Figure <ref> illustrates the face-on and edge-on projected surface densities alongside the unsharp masked images for model A (left), model B (middle), and model C (right). The top row displays snapshots taken soon after the formation of the BP/X structure, while the bottom row presents snapshots from around the end of the simulation. At first glance, it is evident that the BP/X shape has undergone significant evolution in model A and model B, but shows minimal alteration in model C.
§ ORBIT ANALYSIS METHODS
§.§ Orbit integration
In this section, we examine the orbital structure of all the snapshots shown in Fig. <ref>. Detailed analysis plots will be provided exclusively for model B at t=4.5 Gyr. For the remaining models, only the final results will be presented to avoid an excess of figures.
We employ AGAMA [<https://github.com/GalacticDynamics-Oxford/Agama>] <cit.> to compute the potential and orbits in our N-body models. The N-body system is frozen at the specified snapshots, and then potentials are calculated from the particle distribution using expansion <cit.>.
We randomly select 15,000 initial conditions corresponding to the positions and velocities of particles from the snapshots. These orbits are then integrated over a duration of 20 Gyr (∼ 200 orbital periods at the end of the bar region) in the presence of the rotating bar, with the given Ω_p for each snapshot. We store 10,000 points per orbit at equal time intervals.
We choose a long integration time to better visualize the different bar structures discussed in the following section and to ensure that orbits in the outer regions are integrated for a sufficient period to compute frequencies accurately. However, we also test our results by performing the integration over a much shorter time of 2 Gyr (∼ 20 orbital periods at the end of the bar region) and demonstrate that the integration time does not significantly affect our analysis (see Appendix <ref>).
§.§ Orbits classification
Frequency analysis of orbits plays a crucial role in deciphering the characteristics of orbital structures within a large sample. To compute the fundamental frequencies in Cartesian and cylindrical coordinates, we utilize the NAFF (Numerical Analysis of Fundamental Frequencies) software [<https://bitbucket.org/cjantonelli/naffrepo/src/master/>] <cit.>.
Automated classification of orbits, based on fundamental frequencies, is a well-developed technique that has been widely utilized in the literature <cit.>. As discussed in <cit.>, orbit classification based solely on frequency ratios is insufficient for a comprehensive orbital structure framework. This is because orbit families with identical frequency ratios can exhibit divergent properties, including variations in energy ranges, stability, extent, and shapes. In contrast, the auto-classification method in NAFF incorporates multiple quantities in addition to the orbital frequencies, such as the apocenter radius of orbits, the maximum values of the x and y coordinates, and the net rotation parameter. See appendix B in <cit.> for details. A new implementation of NAFF code named naif [<https://naif.readthedocs.io/en/latest/index.html>] has been publicly released in a python package with some new features by <cit.>. The orbit auto-classifier and resonance auto-finder for frequency maps are currently being developed for inclusion in the naif package (R. Ranjan et al., 2024, in preparation).
To determine whether an orbit is prograde or retrograde in the bar's reference frame, we use the following expression, which we call the net rotation parameter:
L_zn = Σ_i=1^i=Nsign(L_z)_i/N,
where N is the number of time steps. For a positive and negative value of L_z, sign(L_z) is +1 and -1, respectively. A short-axis orbit with L_zn∼ 1.0 is prograde with the highest angular momentum, while L_zn∼ -1.0 indicates a retrograde orbit around the z-axis in the bar rotating frame. An orbit with no net rotation around the z-axis has L_zn∼ 0.0. A long-axis tube orbit has L_xn∼∓ 1.0 since it has net rotation around the x-axis.
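A sketch of this computation for a single stored orbit (array names are illustrative):

import numpy as np

def net_rotation(pos, vel):
    # pos, vel: (N, 3) positions and velocities sampled along the orbit
    # in the bar's rotating frame; returns L_zn in [-1, 1]
    Lz = pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0]
    return np.sign(Lz).mean()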
In this approach, disk orbits are determined by those that have apocenter radii greater than the bar radius.
Then the orbits within the bar are classified into: boxo (ordinary box), boxp (periodic or resonant box, i.e. boxlet), ztub (z-axis tube), ztup (periodic or resonant z-tube), xtub (x-axis tube), xtup (periodic or resonant x-tube), x2++ (similar to z-axis tube but elongated along the y-axis of the bar and prograde), x4++ (similar to z-axis tube but elongated along the y-axis and retrograde), bo32 (box with 3:2 fish resonance), bobr (box brezel with 5:3 resonance), x1++ (resonant or near resonant x_1 boxlet), x1cl (x_1 with 3:1 resonance), x122 (x_1 with 2:2 resonance), x132 (x_1 with 3:2 resonance), x1bn (x_1 banana). We lumped all of the periodic x_1 resonant orbits together since there are very few.
<cit.> demonstrated that this automated orbit classification is generally consistent with visual classification by examining 20,000 individual orbits. The differences in classification accuracy range from approximately ∼ 1% to ∼ 4 % depending on the types of orbits being classified.
NAFF employs the frequency drift parameter (diffusion rate) to measure the fraction of chaotic orbits. The frequency drift parameter log_10(Δ f) is computed by the change of the frequency in two equal orbital time segments <cit.>.
We use log_10 (Δ f) >-1.2, which is a good empirical criterion for determining chaotic orbits as tested in <cit.> and <cit.>. All orbits classified into the aforementioned families are additionally tagged with regl (regular) or chao (chaotic).
We found that some orbits initially classified as periodic boxes exhibit chaotic behavior after a short time. Consequently, those orbits demonstrating chaotic characteristics were reclassified from the resonant orbit classes boxp, bo32, and bobr to non-periodic box class boxo. Furthermore, we noted that some orbits initially classified as periodic z-axis tubes (ztup), characterized by L_zn < 0.0 and -0.85 < L_xn < 0.85, should be reclassified into the x4++ group. Similarly, those with L_zn < 0.0 and L_xn∼± 1 are more accurately categorized as xtub orbits. Although these adjustments result in minor changes to orbital classes, they are crucial for our subsequent analysis of the morphology of different orbit classes.
§.§ Orbital decomposition of CX and OX structure
We further categorize the orbit types mentioned above by grouping together orbital families that exhibit similar morphologies.
(1) OX: These are prograde short-axis tube orbits that are elongated along the bar, including
x1++, x1cl, x122, x132, x1bn, ztub, and ztup. They make up an OX structure.
(2) CX: These are box orbits that are elongated along the bar, contributing to the formation of the CX structure. To select these groups of orbits, we begin by aggregating both periodic and non-periodic box orbits, including boxo, boxp, bo32, and bobr. Box orbits exhibit a diverse range of morphologies. We have determined that the parameter L_zn serves as an effective criterion for distinguishing between box orbits that support a CX shape and those that exhibit a more rounded boxy morphology. Fig. <ref> presents the surface densities of box orbits across various L_zn bins: the columns, from left to right, display the surface density in the x-z plane for each bin, plotted for model B at t=4.5 Gyr.
Box orbits possessing higher values of L_zn prominently exhibit an OX structure, which transitions to a CX shape as the value decreases. With further reduction in L_zn, the X wings gradually disappear. By examining the structures within various bins, we have identified a specific value of L_zn to serve as a threshold for visually distinguishing between OX and CX structures across all box orbits. For Model B, as depicted in Fig. <ref>, box orbits with L_zn > 0.3 are categorized as the OX group as they clearly exhibit a bridge connecting the two X wings. Those with 0.3 > L_zn > -0.2 are classified as the CX group, while the remaining orbits, which form a round shape, are included in the general box orbit group. These boundaries are determined visually and may vary from model to model. It should be noted that although altering the threshold value of L_zn can result in minor variations in the contribution of OX/CX orbits, these changes are typically limited to a few percent and are not significant.
(3) box: These orbits are dominated by retrograde motion and are not included in CX or OX groups.
(4) x_4: These orbits are retrograde short-axis tube orbits elongated perpendicular to the bar, referred to as x4++ in the NAFF classification. It is worth noting that x2++ orbits are their prograde counterparts, also short-axis tubes with a similar perpendicular orientation. However, our models do not contain any x2 orbits.
(5) LAT: These are long-axis tube orbits - orbits that have net angular momentum about the long x-axis of the bar model.
(6) disk: Orbits with apocenter radii exceeding half the length of the bar are classified as disk orbits.
§ RESULTS
Fig. <ref> presents the extracted surface densities in the x - y and x - z planes for six orbit categories. The columns, from left to right, display surface densities constructed from orbits in the OX, CX, box, x_4, LAT, and disk categories, respectively. In all models, the OX and CX orbits contribute to structures elongated along the bar, forming OX- and CX-shaped structures respectively (shown in the 2nd and 3rd columns). The box, x_4, and LAT orbits do not contribute to the BP/X shape, instead supporting rounder or slightly boxy structures. In the following, we explore the orbital origins and evolution of the OX and CX structures in detail.
Furthermore, we demonstrate that orbits supporting the OX/CX vertical structures contribute to outer/central bar-elongated structures when the disk is viewed face-on. Our results confirm the lack of a relationship between X-shaped structures and the inner rounded parts of the face-on bar (e.g., barlenses). <cit.> illustrated the importance of orbits with loops at their ends (in the x-y plane) to the bar shoulders in face-on density profiles. Additionally, they showed that bar thickening and vertical resonances can dilute the shoulders in the bar major-axis density profiles. We confirm that it is the OX rather than the CX orbits that contribute to the bar shoulders. This conclusion aligns with the findings from <cit.>, since the OX orbits have higher angular momentum and are therefore closer to their parent x_1 orbits with loops at their ends, while the CX orbits have lower angular momentum and are vertically thicker.
§.§ Orbital origin of CX and OX structure
The fraction of periodic orbits, including closed x_1 orbits, is less than 3% in all our models, and the majority of orbits that make up the BP/X-shaped structure are non-periodic. This is consistent with previous studies, which show that periodic orbits (such as banana, pretzel, etc.) constitute only a small fraction of orbits in N-body bars. Fig. <ref> depicts typical x_1 orbits (left panel), non-periodic orbits generating the OX structure (middle panel), and those generating the CX structure (right panel) in the x-y and x-z planes.
<cit.> argued that assembling non-periodic orbits into a pattern resembling a `bowtie' can create X-structures. These orbits occupy large areas on the x-z plane and can be linked to quasi-periodic orbits surrounding the plane of the x_1 orbital family. For examples of such orbits, see Fig. 15 in <cit.>, Fig. 2 in <cit.>, and Fig. 9 in <cit.>.
Based on the computed frequencies from NAFF, we confirm that the orbits contributing to the OX and CX structures can exhibit a wide range of Ω_x/Ω_z values, extending beyond those typical of resonant orbits, such as banana or pretzel orbits. This finding is in agreement with previous studies.
As we demonstrated in Fig. <ref>, the typical non-periodic orbits within the OX and CX groups exhibit similar bowtie-like shapes. However, OX orbits tend to be vertically thinner bowties, whereas CX orbits are characterized by centrally thicker bowties. <cit.> suggested that further investigations using the Poincaré surface of sections are required to elucidate the origins of such orbits.
To elucidate the origin and characteristics of orbits within each OX and CX structure, in the following, we examine their phase-space and investigate their origin using Poincaré surface of sections plots.
§.§.§ Poincaré surface of section
A useful technique for exploring the distribution of orbits in phase space is to visualize their Poincaré surfaces of section. Typically, the surface of section (SoS) is plotted for a set of orbits that share the same Jacobi integral value E_J. For orbits in two-dimensional bars, the SoS is plotted by mapping the velocity component V_y against y at each time step when an orbit intersects the x-axis with a negative V_x value <cit.>.
Plotting the SoS of orbits in an N-body simulation is complicated and presents two challenges: firstly, since all the orbits possess three-dimensional structures, they cannot be adequately represented on a two-dimensional SoS. Secondly, the distribution of E_J values is broad and continuous (rather than discrete), leading to fuzzy curves on the SoS. However, <cit.> demonstrated that a two-dimensional SoS can still offer valuable insights into the underlying three-dimensional orbital structures.
We explore the SoS of 100 orbits in our models. Fig. <ref> illustrates V_y versus y when orbits cross the y-z plane with a positive V_x. This is plotted for orbits within the energy range of -1.0 < E_J < -0.98 in Model B at t=4.5 Gyr. The spread in E_J values results in fuzzy curves in the SoSs, particularly when plotted for a large number of orbits, as highlighted in Fig. 10 of <cit.>. In our analysis of frozen potentials, most orbits traverse a broader area rather than being confined to narrow curves in the SoSs.
The left panel of Fig. <ref> displays the SoS for x_1 orbits (in red), OX orbits (in cyan), and CX orbits (in blue). To avoid overcrowding, the SoS for box orbits (in green) is presented separately in the right panel. The x_1 orbits are the ovals at the center of the bull's eye of the SoS, similar to what is shown in <cit.> (see Fig. 9) and <cit.> (see Fig. 10). OX orbits are primarily clustered around V_y = 0 at positive y values, confirming that they are parented by x_1 orbits and exhibit prograde motion around the z axis. Orbits in the red and cyan regions contribute to the formation of the OX structure. CX orbits, while also lying on oval curves surrounding the x_1 orbits, cover a wider region, indicating their inclusion in the same sequence. CX orbits are predominantly found at positive y values; however, since CX orbits are identified by 0.3>L_zn>-0.2, the SoS of those orbits with negative L_z encompasses negative y values. Box orbits lack a dominant direction of rotation, implying that as orbits evolve into a box-like shape, their angular momentum approaches zero on average.
In summary, we confirm that orbits closer to the x_1 family exhibit an OX structure. As they move away from the x_1 orbits and become thicker, they transition to a CX shape. Eventually, they evolve into completely boxy or rounded shapes as they lose angular momentum.
§.§ Evolution of CX and OX orbital structures
Fig. <ref> presents contour densities of OX and CX orbital structures in model A (left), model B (middle), and model C (right). The first row displays these orbital structures shortly after the formation of the BP/X structure, while the second row illustrates these structures at the end of the simulations.
The proportions of OX and CX orbits in the models at various times are indicated in Fig. <ref>. In Model A, the BP/X bulge does not undergo buckling, and its pattern speed decreases significantly over time. Initially, the dominance of the OX orbits is evident at 25% of all orbits, while the CX orbits represent only 8%. By the end of the simulation, the OX orbits have almost completely disappeared, dropping to 3%. In contrast, the BP/X bulge becomes predominantly characterized by the CX orbits, which increase to 36%.
Model B and Model C both undergo buckling, leading to the formation of the BP/X bulge where both OX and CX orbits coexist. In Model B, as the bar experiences a slowdown by the end of the simulation, the proportion of OX orbits decreases while that of CX orbits increases. The first row in the middle panel shows the OX/CX structure of model B after the first buckling event, and the bottom row displays the OX/CX structure after the second buckling. In contrast, in Model C, where the bar pattern speed remains constant, the proportions of CX and OX orbits stay the same by the end of the simulation. This suggests that as long as the bar's pattern speed does not decrease, the proportions of CX and OX orbits, and therefore the morphology of the BP/X structure, remain unchanged.
We defer to a future study the exploration of whether there is a relationship between the intensity of buckling and the proportions of CX and OX orbits after the formation of the BP/X structure.
§.§ Photometric Parametrization of BP/X Bulges
We use the method presented by <cit.> and the IMFIT software <cit.> to parameterize the shape of the BP/X bulges. In this parametrization, the bar is modeled using a sech^2 profile applied to a dimensionless, scaled radius given by
ρ_pea = ρ_0 sech^2(-R_s),
where
R_s=([(x/X_bar)^c_⊥+(y/Y_bar)^c_⊥]^c_∥ / c_⊥+(z/Z_bar)^c_∥)^1 / c_∥ .
The coordinate system is centered at the galaxy's nucleus, with X_bar,
Z_bar and Y_bar denoting the semi-major, intermediate, and semi-minor axis lengths of the bar, respectively. c_∥ and c_⊥ control the diskiness/boxiness of the bar, as introduced by <cit.>. Here, we adopt
c_∥,c_⊥∈ [1.5,5], based on the results of <cit.>. The BP/X feature's morphology is defined by a scale height perpendicular to the disk plane, which varies across the x - y plane and is modeled by a double Gaussian distribution centered on the galactic center:
Z_bar (x, y)= A_pea exp(-(x-R_pea )^2/2 σ_pea ^2-y^2/2 σ_pea ^2) +
A_pea exp(-(x+R_pea )^2/2 σ_pea ^2-y^2/2 σ_pea ^2)+z_0,
where R_pea and σ_pea denote the distance of the peanut's center from the galactic center and the width of each peanut, respectively. A_pea quantifies the vertical extent of the peanut feature above the ellipsoidal bar's scale height z_0.
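Evaluating this parametrization is straightforward; in the sketch below c_∥ and c_⊥ are written c_par and c_perp, and the dictionary-based interface is ours rather than IMFIT's:

import numpy as np

def bpx_density(x, y, z, p):
    # p: dict with X_bar, Y_bar, rho_0, c_par, c_perp, A_pea, R_pea, sigma_pea, z0
    pea = lambda s: p["A_pea"] * np.exp(-((x - s * p["R_pea"]) ** 2 + y ** 2)
                                        / (2.0 * p["sigma_pea"] ** 2))
    Z_bar = pea(+1) + pea(-1) + p["z0"]        # scale height varying over the x-y plane
    R_in = (np.abs(x / p["X_bar"]) ** p["c_perp"]
            + np.abs(y / p["Y_bar"]) ** p["c_perp"]) ** (p["c_par"] / p["c_perp"])
    R_s = (R_in + np.abs(z / Z_bar) ** p["c_par"]) ** (1.0 / p["c_par"])
    return p["rho_0"] / np.cosh(R_s) ** 2      # sech^2(R_s) profile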
As demonstrated by <cit.>, these three parameters provide substantial flexibility in capturing the diverse morphologies exhibited by the BP/X feature. Although these values do not directly quantify the OX/CX structures, they are useful for demonstrating the relationship between the evolution of BP/X morphology and the bar pattern speed independent of orbital analysis. The quantification of OX/CX structures through photometric parameterization will be addressed in future studies.
Figure <ref> displays the variation of the BP/X shape parameters R_pea, A_pea, and σ_pea versus the bar pattern speed (left column) and versus the dimensionless bar rotation parameter ℛ≡ R_cor/R_bar (right column) for model A (orange), model B (red), and model C (blue). The x-axis of the left column shows the bar pattern speed in reverse order. Models A and B exhibit significant changes in BP/X morphology over time as the bar pattern speed decreases. In contrast, the BP/X parameters in model C are relatively constant or change very little, which is attributed to its nearly constant bar pattern speed.
In models A and B, the relation between the BP/X parameters and the bar pattern speed is monotonic (although not linear). This holds true even after the second buckling in model B (represented by the red point number 7).
As the bar pattern speed decreases, all three parameters R_pea, A_pea, and σ_pea increase. However, we do not find a monotonic relationship between the BP/X parameters and ℛ.
This indicates that the evolution of BP/X morphology is correlated with the evolution of the bar pattern speed rather than ℛ.
§.§ OX/CX bulges and the bar pattern speed in Auriga simulations
The Auriga simulations <cit.> are a suite of 30 magneto-hydrodynamical cosmological zoom-in simulations. The Auriga barred galaxies exhibit fast bars primarily because they are baryon-dominated disks, experiencing less dynamical friction, which prevents the bars from slowing down <cit.>.
<cit.> analyzed the structural and photometric properties of 21 barred galaxies from the Auriga simulations. Using edge-on unsharp-masked images at z=0, they identified BP/X structures in the inner parts of 6 out of 21 galaxies. They visually determined whether these BP/X structures are OX or CX, and found that only one of these bulges has CX morphology, while the other five have OX structures. See Table 3 in <cit.>.
<cit.> presented the evolution of the pattern speed for five Auriga galaxies with high-cadence outputs as a function of lookback time (see Fig. B.1 in <cit.>). Three of these galaxies, A17, A18, and A26, are identified as having BP/X bulges by <cit.>. The A18 galaxy, the only one with a CX structure, has a relatively low pattern speed of around 27 km s^-1 kpc^-1. A17 and A26, which have OX structures, exhibit significantly higher pattern speeds of around 45 and 37 km s^-1 kpc^-1, respectively. Our results explain the findings in the Auriga simulations: bars with higher pattern speeds tend to exhibit BP/X bulges with more dominant OX structures.
§ SUMMARY
We present a methodology that enables us to decompose the orbital structures supporting OX and CX shapes in BP/X bulges. Our method relies on auto-classification with NAFF software, focusing on the morphological and kinematic characteristics of orbits.
To understand the origin and evolution of OX and CX structures in BP/X bulges, we applied our method to classify orbits using a frozen potential from three N-body bar models, each with distinct features, at two different times, beginning shortly after the formation of the BP/X bulge. The main results are as follows:
(1) We demonstrate that both OX and CX structures are composed of non-periodic orbits associated with the x_1 orbit family. OX orbits, being closer to x_1 orbits, possess higher angular momentum compared to those in CX structures.
(2) We found that orbits supporting OX/CX structures in the edge-on view also form a similar OX/CX-like shape in the face-on view.
(3) We showed that as the bar pattern speed decreases, the contribution from OX orbits decreases while that from CX orbits increases. If the bar remains at a constant speed, the shape of the BP/X structure stays unchanged until the end of the simulation. This indicates a strong dependence of the BP/X bulge shape on the bar pattern speed.
(4) We show that Model A (the bar that does not buckle) is initially dominated by OX orbits. These orbits surrounding the x_1 orbits begin to thicken. As the bar loses angular momentum, the OX structure transitions to a CX structure by the end of the simulation. In contrast, our bar models that experience buckling events exhibit a combination of both OX and CX structures immediately after the buckling and evolve toward CX dominance if the pattern speed decreases.
(5) Using photometric parameterization of the BP/X structure, we find that the evolution of BP/X morphology is correlated with the bar pattern speed rather than the bar rotation parameter.
§ ACKNOWLEDGEMENTS
We thank J. A. Sellwood for providing the data for Model A. BT, SD, MV, VW, LBeS gratefully acknowledge funding from the National Science Foundation (grant NSF-AST-2009122 to MV). LBeS is supported by the Heising Simons Foundation through the Barbara Pichardo Future Faculty Fellowship from grant # 2022-3927.
Software: AGAMA <cit.>, NAFF <cit.>, Jupyter Notebook <cit.>, matplotlib <cit.>, numpy <cit.>, scipy <cit.>.
§ DATA AVAILABILITY
The simulation data for model B and model C are available at https://doi.org/10.5281/zenodo.11237062. Readers wishing to obtain the simulation data for Model A should contact J. A. Sellwood. Any additional data will be shared on reasonable request to the corresponding author.
§ THE ORBIT INTEGRATION TIME
To investigate the effect of integration time on our orbit classification, we repeated our analysis using an integration time of 2 Gyr, which corresponds to approximately 20 orbital periods at the end of the bar region. We employed orbit integration with a required accuracy of 10^-8. One primary concern is that a very long integration time with low integration accuracy may result in an overproduction of chaotic orbits. We examined the fraction of chaotic orbits under integration times of 20 Gyr and 2 Gyr, using the frequency drift parameter log_10(Δ f) as described by <cit.>. We adopted log_10(Δ f) > -1.2 as the criterion for defining chaotic orbits, consistent with other studies <cit.>.
Figure <ref> presents the distributions of the frequency drift parameter for 15,000 orbits integrated over durations of 20 Gyr (red) and 2 Gyr (blue). We found that the fraction of chaotic orbits is similar in both cases, with the orbits integrated for 2 Gyr showing approximately 2% more chaotic orbits than those integrated for 20 Gyr. This difference could also be due to the randomness in selecting initial conditions, which vary for each orbit sample. This result suggests that the accuracy we are using for orbit integration is sufficient. The overall fraction of chaotic orbits in our orbit sample, as used in this paper, is 10-13% across different models.
Another concern when adopting a longer integration time is achieving more accurate orbital frequencies. However, our orbit classification is not solely based on orbit frequencies; we also consider parameters such as angular momentum, maximum z, and the ratio of maximum y to maximum x. Figure <ref> shows a Cartesian frequency map, colored by number density, for orbits integrated for 20 Gyr (left panel) and 2 Gyr (right panel). The frequency maps are very similar, and we did not find any systematic changes between them. Figure <ref> illustrates the effect of integration time on our classification, indicating only minor changes in each orbital group (1-3%), which could also be due to randomness in selecting initial conditions. Overall, these tests indicate that our analysis outcomes throughout this paper are not significantly dependent on the integration time.
|
http://arxiv.org/abs/2409.03210v1 | 20240905030955 | Anisotropic Spin Stripe Domains in Bilayer La$_3$Ni$_2$O$_7$ | ["N. K Gupta", "R. Gong", "Y. Wu", "M. Kang", "C. T. Parzyck", "B. Z. Gregory", "N. Costa", "R. Sutarto", "S. Sarker", "A. Singer", "D. G. Schlom", "K. M. Shen", "D. G. Hawthorn"] | cond-mat.supr-con | ["cond-mat.supr-con", "cond-mat.str-el"] |
Anisotropic Spin Stripe Domains in Bilayer La_3Ni_2O_7
N. K Gupta,^1,† R. Gong,^1,† Y. Wu,^2,† M. Kang,^2,3,4 C. T. Parzyck,^2
B. Z. Gregory,^2,3 N. Costa,^1 R. Sutarto,^5 S. Sarker,^6 A. Singer,^3 D. G. Schlom,^3,4,7
K. M. Shen,^2,4∗ and D. G. Hawthorn^1∗
^1Department of Physics and Astronomy, University of Waterloo,
Waterloo, N2L 3G1, Canada
^2Laboratory of Atomic and Solid State Physics, Department of Physics,
Cornell University, Ithaca, NY 14853, USA
^3Department of Materials Science and Engineering,
Cornell University, Ithaca, NY 14853, USA
^4Kavli Institute at Cornell for Nanoscale Science,Cornell University,
Ithaca, NY 14853, USA
^5Canadian Light Source, Saskatoon, Saskatchewan, S7N 2V3, Canada
^6Cornell High Energy Synchrotron Source, Cornell University,
Ithaca, NY 14853, USA
^7Leibniz-Institut für Kristallzüchtung, Max-Born-Straße 2,
12489 Berlin, Germany
^∗To whom correspondence should be addressed;
†These authors contributed equally to this work.
E-mail: [email protected] and [email protected].
The discovery of superconductivity in La_3Ni_2O_7 under pressure has motivated the investigation of a parent spin density wave (SDW) state which could provide the underlying pairing interaction. Here, we employ resonant soft x-ray scattering and polarimetry on thin films of bilayer La_3Ni_2O_7 to determine that the magnetic structure of the SDW forms unidirectional diagonal spin stripes with moments lying within the NiO_2 plane and perpendicular to Q_SDW, but without the strong charge disproportionation typically associated with other nickelates. These stripes form anisotropic domains with shorter correlation lengths perpendicular versus parallel to Q_SDW, revealing nanoscale rotational and translational symmetry breaking analogous to the cuprate and Fe-based superconductors, with Bloch-like antiferromagnetic domain walls separating orthogonal domains.
The discovery of superconductivity with a transition temperature (T_c) above 80 K under hydrostatic pressure has ignited intense interest <cit.> in La_3Ni_2O_7 as a new platform to compare against the Cu- and Fe-based superconductors to understand which common ingredients are essential for realizing high T_c's. In both the cuprate and Fe-based families, parent antiferromagnetic states as well as nematicity are known to play critical roles in their phase diagrams, the detailed understandings of which are essential for developing microscopic models. Presently, our understanding of the parent state from which the superconductivity condenses in La_3Ni_2O_7 remains incomplete. Recent resonant inelastic x-ray scattering (RIXS) <cit.>, heat capacity and magnetic susceptibility <cit.>, nuclear magnetic resonance (NMR) <cit.> and μSR <cit.> measurements of bulk La_3Ni_2O_7 suggest the presence of a spin density wave (SDW) transition occurring around T_SDW ≈ 150 K, with a wavevector of (H = 1/4, K = 1/4) deduced from RIXS.<cit.> However, essential information regarding the microscopic spin structure and orientation remains unresolved.
Here, we present detailed resonant soft x-ray scattering (RSXS) and polarimetry measurements on epitaxial thin films of bilayer La_3Ni_2O_7 which reveal that the detailed spin structure consists of equal domains of bicollinear spin stripes, with the spins lying in the NiO_2 plane and aligned perpendicular to the SDW wavevector. Temperature-dependent spectroscopic measurements support a scenario with only a single electronic Ni site, with no evidence for charge or bond disproportionation across T_SDW, and where the oxygen ligand holes are strongly hybridized with the Ni d orbitals. The SDW domains exhibit a surprising anisotropy in their correlation lengths parallel versus perpendicular to their wavevector, suggesting that nanoscale rotational symmetry breaking might also play an important role in La_3Ni_2O_7, reminiscent of the nematic order found in the cuprates and Fe-based superconductors. Finally, a detailed polarization and temperature-dependent analysis of the SDW Bragg peaks suggests the presence of extended, Bloch-like domain walls between orthogonal domains.
One of the challenges of investigating La_3Ni_2O_7 is the recent discovery of multiple structural polymorphs, including the expected n = 2 Ruddlesden-Popper structure comprised of bilayers of NiO_6 octahedra (dubbed “2222”), and a surprising naturally-formed superlattice of alternating single-layer and trilayer blocks of NiO_6 octahedra (dubbed “1313”) <cit.>. It is plausible that both polymorphs could exist within a single macroscopic bulk crystal sample, and both structures have been reported to be the superconducting phase <cit.>. Layer-by-layer reactive oxide molecular beam epitaxy (MBE) synthesis of thin films offers a solution to addressing this polymorphism. Here, we focus our investigation on phase-pure epitaxial thin films of the bilayer “2222” polymorph of La_3Ni_2O_7. Epitaxial thin films of 16 nm thickness were grown on NdGaO_3 (110) substrates using reactive oxide molecular beam epitaxy with shuttered deposition at a pressure of 8 × 10^-7 torr of 80% distilled ozone and a temperature of 800 ^∘C. Samples were characterized by lab-based x-ray diffraction, electrical transport, and synchrotron-based hard x-ray diffraction (see Supplementary Information). Multiple samples were synthesized and investigated and all exhibited qualitatively similar behaviors; data from two samples are shown in this manuscript.
In Figure <ref>b, we show RSXS measurements on the Ni L_3 edge in π-polarization along the (1, 1, 1.85) direction for a series of temperatures. Here, we employ the pseudo-tetragonal lattice constants of a = b = 3.84 Å and c = 20.44 Å. Below 160 K, a sharp peak emerges at (1/4, 1/4) whose intensity is approximately 45 times stronger in π versus σ polarization at 20 K, suggesting that the scattering originates from magnetic rather than charge ordering, consistent with previous measurements.<cit.> The resonance energy dependence of the peak at (1/4, 1/4, 1.86) is shown in Fig. <ref>c, together with the x-ray absorption spectrum (XAS), which demonstrates that the Bragg peak is enhanced only at the Ni L resonances, and not on the La M_4 edge or off-resonance. In Figure <ref>f, we show a scan as a function of L at the in-plane (1/4, 1/4) wavevector on the Ni L_2 edge, corrected for absorption and geometric factors (see Supplementary Information). The data indicate that the SDW Bragg peak is centered at L = 2, but with only a 14 ± 4 Å correlation length along the c-direction, revealing the highly two-dimensional nature of the magnetic order in La_3Ni_2O_7. We note that both the resonance energy and L dependence of the SDW Bragg peak clearly establish that the SDW intrinsically originates from La_3Ni_2O_7 and not an impurity phase, as has plagued the identification of ostensible charge or spin density order in other related nickelates.<cit.>
In most related nickelates in the Ruddlesden-Popper sequence, including n = 1 La_2-xSr_xNiO_4 and n = ∞ perovskite RENiO_3 (RE = Nd, Pr, Sm), SDW order coincides with strong charge or bond disproportionation, where the local electronic and orbital environment of the Ni sites is strongly modulated at an atomic scale. Whether such behavior also occurs in La_3Ni_2O_7 remains an important open question that can be addressed by resonant x-ray diffraction and spectroscopy. First, there are no distinct changes in the Ni L edge XAS spectra across T_SDW (Figure <ref>e), unlike the situation of RENiO_3 where bond disproportionation results in spectroscopically distinct Ni sites in the Ni L edge XAS across the metal-insulator transition.<cit.> Second, a strongly varying charge density will also modulate the x-ray scattering form factors on the different Ni sites, resulting in the linear dichroism of the SDW peak intensity (σ versus π polarization) becoming energy dependent, as reported in RENiO_3 heterostructures <cit.>. In contrast, we find that the linear dichroism of the peak in La_3Ni_2O_7 does not exhibit any observable energy dependence (Figure <ref>d), suggesting that only a single magnetic Ni site exists in the SDW state. These spectroscopic observations are consistent with transport and thermodynamic measurements which exhibit only weak anomalies across T_SDW, indicating that the dominant order parameter is magnetic and that, unlike many other nickelates, any associated charge modulation is either weak or absent.
§ MEASUREMENT OF SPIN CONFIGURATION
Having demonstrated the absence of strong charge modulations, we now determine the orientation of the staggered moments deep within the SDW state. For this we make use of the sensitivity of the intensity of resonant scattering to the orientation of the photon polarization relative to the magnetic moments, analogous to polarized neutron scattering <cit.>. The resonant elastic x-ray scattering cross-section is given by:<cit.>
I^cr(ϵ_i, ω, Q) ∝|ϵ_out^*·(∑_j F_j(ħω, Q) e^iQ·r_j)·ϵ_in|^2,
where ħω is the photon energy, Q = k_out - k_in is the momentum transfer and ϵ_in and ϵ_out are the incident and scattered x-ray polarization, respectively. F_j is a tensor that encodes the photon energy dependence of the scattering cross-section for site j in the lattice, and its elements depend on the orientation of the magnetic moment at site j <cit.> (see Supplementary Information). Assuming spherical symmetry of the local valence charge density, consistent with the linear dichroism of the SDW peak being energy-independent (Fig. <ref>d), the scattering depends on the orientation of the staggered moment Δm⃗, as
∑_j F_j(ħω, Q) e^iQ·r_j ∝
[      0          Δ m_[001]    -Δ m_[110]
  -Δ m_[001]         0          Δ m_[-110]
   Δ m_[110]    -Δ m_[-110]        0       ],
where Δ m_u are the components of the staggered moments in three orthogonal directions [001], [-110] and [110].
These components can then be deduced by varying the alignment of ϵ_in relative to the [001], [-110] and [110] directions. In an experiment, this is achieved by rotating the sample azimuthally by an angle ϕ about an axis normal to the [1/4, 1/4, L] set of lattice planes (here L is 1.93), as depicted in Figure <ref>a, which is accomplished by mounting the c-axis normal film on a 43.7^∘ wedge. This approach enables the orientation of the moments to be rotated relative to the incident polarization, which can be set to be π, σ, circular or linear 45^∘ (π - σ), with the scattering wavevector remaining centered on the (1/4, 1/4, 1.94) peak. As shown in Figure <ref>b, the scattering intensity has a strong dependence on the azimuthal angle, ϕ as well as on the polarization of the incident light. In Figure <ref>c, we plot the SDW peak amplitude as a function of azimuthal angle ϕ, for our various incident polarizations. Notably, the peak intensity exhibits a complex, non-monotonic dependence with ϕ and large variations with incident polarization. This azimuthal dependence can be compared to simulations of the staggered moments oriented along different directions, calculated using Equations <ref> and <ref>, the sample geometry, and a correction for the geometry-dependent absorption of the incident and scattered x-rays (Supplementary Information). As shown in Figure <ref>c the measurements show remarkable agreement with the moments forming diagonal, bicollinear spin stripes (Figure <ref>d) with the magnetic moments lying entirely within the a-b plane but oriented perpendicular to Q⃗_SDW. We emphasize this model was calculated without any free fitting parameters, apart from an overall scaling factor.
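These model curves are straightforward to generate numerically. The sketch below (Python/numpy) evaluates Eqs. (<ref>) and (<ref>) for the rotating crystal frame; it omits the absorption and geometric corrections applied to the curves in Figure <ref>c, assumes the outgoing polarization is not analyzed, uses an illustrative scattering angle, and all names are ours rather than part of any published analysis code.

    import numpy as np

    ALPHA = np.deg2rad(43.7)  # wedge angle: tilt of the crystal c-axis away from Q
    THETA = np.deg2rad(30.0)  # half scattering angle (illustrative value only)

    def magnetic_tensor(dm):
        # Antisymmetric tensor of Eq. (2); basis ordered ([-110], [110], [001]).
        m1, m2, m3 = dm
        return np.array([[0.0, m3, -m2],
                         [-m3, 0.0, m1],
                         [m2, -m1, 0.0]])

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def rot_y(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

    def intensity(phi, dm, pol_in):
        # Lab frame: Q along z, scattering plane x-z; azimuthal rotation about Q.
        R = rot_z(phi) @ rot_y(ALPHA)          # crystal -> lab rotation
        F = R @ magnetic_tensor(dm) @ R.T      # transform the tensor to lab frame
        eps_sigma = np.array([0.0, 1.0, 0.0])                      # perp. to plane
        eps_pi_in = np.array([np.sin(THETA), 0.0, np.cos(THETA)])  # in plane
        eps_pi_out = np.array([-np.sin(THETA), 0.0, np.cos(THETA)])
        eps_in = eps_sigma if pol_in == "sigma" else eps_pi_in
        # Sum over both exit channels (scattered polarization unanalyzed).
        return sum(abs(e @ F @ eps_in) ** 2 for e in (eps_sigma, eps_pi_out))

    phis = np.linspace(0.0, 2.0 * np.pi, 181)
    dm = np.array([1.0, 0.0, 0.0])  # in-plane staggered moment, perp. to Q_SDW
    I_pi = [intensity(p, dm, "pi") for p in phis]
    I_sigma = [intensity(p, dm, "sigma") for p in phis]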
In contrast, the calculated polarization and ϕ dependence for other possible SDW scenarios (Figure <ref>f and g), including Δ m out of the plane, Δ m ∥ Q⃗_SDW, or a non-collinear configuration, all bear no qualitative resemblance to the experimental data (Figure <ref>c). Having constrained the orientation of moments within the NiO_2 planes, we can now consider the full 3D magnetic structure, which is informed by the L dependence of the SDW scattering cross-section. The fact that the SDW order is peaked at L = 2 and is minimal at L = 1.5 (Fig. <ref>f) indicates that the couplings within the bilayer (intra-bilayer coupling) and between bilayers (inter-bilayer coupling) are either both ferromagnetic or both antiferromagnetic (see Supplemental Information). Given that the experimental magnon dispersion is well described by a model with a large AF bilayer coupling,<cit.> this suggests the magnetic structure is consistent with the one depicted in Figure <ref>e, with both inter- and intra-bilayer AF coupling. We note that this bicollinear double spin-stripe configuration is notably different from other nickelates, including the non-collinear spin-spiral magnetic order in bulk RENiO_3 <cit.>, the collinear order with moments parallel to (1/4, 1/4, 1/4) in ultrathin RENiO_3, or the reported magnetic order, with moments along c, in La_2Ni_2O_5 <cit.>. It is, however, remarkably similar to the bicollinear double spin stripe observed in FeTe <cit.>.
§ ANISOTROPIC, UNIDIRECTIONAL MAGNETIC DOMAINS
The unidirectional, stripe-like character of the antiferromagnetic order is also manifest in the shape of the SDW domains which break the rotational symmetry of the lattice. We measured the shapes and intensities of the SDW Bragg peaks in the H-K plane for both sets of SDW domains with orthogonal Q⃗ vectors, around (1/4, 1/4, L) (in red) and (1/4, -1/4, L) (in blue) using a two-dimensional microchannel plate detector, shown in Fig. <ref>a. The Bragg peaks around (1/4, 1/4, L) and (1/4, -1/4, L) have equal intensities to within experimental uncertainty (± 3 %), indicating an equal population of domains. In both sets of domains, the SDW peaks have highly anisotropic shapes, with a 2D Lorentzian fit giving correlation lengths that are much longer parallel to Q⃗_SDW (ξ_∥ = 292 Å) versus perpendicular to Q⃗_SDW (ξ_⊥ = 134 Å).
This peak shape is consistent with anisotropic domains of unidirectional SDW order, with each domain characterized by either (1/4, 1/4) or (-1/4, 1/4) order, as depicted in Figure <ref>d. This is in contrast to anisotropic domains of bidirectional order or isotropic domains of unidirectional order, which would exhibit x-shaped or isotropic peak shapes respectively. This anisotropy could be associated with orthorhombic structural twin domains or may be orthogonal domains of unidirectional order occurring within a single structural domain. Intriguingly, this latter scenario is reminiscent of the anisotropic charge ordering reported in underdoped YBa_2Cu_3O_7-δ and Bi_2Sr_2-xLa_xCuO_6+δ, where the CDW Bragg peaks likewise exhibit an anisotropy, with longer correlations lengths parallel to the CDW wavevector, indicative of orthogonal anisotropic unidirectional CDW domains within a single orthorhombic structural domain.<cit.>
§ PROBING MAGNETIC DOMAIN WALLS
In Figure <ref>, the orientation of the staggered moment at 20 K was determined to be almost entirely within the a-b plane and perpendicular to Q⃗_SDW. Insights may be gleaned from the small discrepancies between the model and measurements. In particular, measurements using σ polarization at ϕ = 0 are at a minimum in intensity for this magnetic orientation, with deviations of the staggered moment parallel to Q⃗_SDW or along [001] required to provide a finite scattering intensity. We now leverage this sensitivity of the polarization to the staggered moment orientation to investigate how the spin configuration evolves with temperature. In Figure <ref>b, we show a comparison between the temperature dependence of the SDW peak when measured with π versus σ polarization. At low temperatures (T < 50 K), the intensity of the SDW peak measured in π polarization, I_π, is more than an order of magnitude stronger than in σ polarization, I_σ. Upon raising the temperature to T_SDW, I_π smoothly decreases, whereas I_σ exhibits a non-monotonic temperature dependence, growing in intensity before peaking around 130 K, and then falling rapidly to zero at T_SDW. On the other hand, the ratio I_σ / I_π, shown in Figure <ref>c, grows smoothly and monotonically with increasing temperature all the way up to T_SDW, when the peak vanishes, indicating that the component of the spins oriented away from the low-temperature configuration (Figure <ref>c) grows with increasing temperature, with I_σ/I_π > 0.6 near T_SDW.
In addition to this anomalous temperature dependence, the width of the SDW Bragg peak is surprisingly broader when measured with σ versus π incident polarization (Fig. <ref>a and d). This indicates that the out-of-plane moments have a shorter correlation length than the in-plane moments, which would be inconsistent with a uniform, temperature-dependent canting of all the spins. Instead, this could be consistent with the existence of real-space defects of the magnetic order, such as Bloch-like domain walls or antiferromagnetic skyrmion-like topological defects. Indeed, Bloch-like domain walls should naturally exist between two orthogonal unidirectional domains, where the transition between domains would necessitate a reorientation of the spins by 90^∘ from one domain to its orthogonal counterpart, which could occur over an extended region, depending on the relative magnitude of the magnetic anisotropy and exchange terms.
In the Bloch domain wall scenario, the staggered moments would rotate away from [110] or [-110] and out of the a-b plane, as illustrated in Figure <ref>e. Here, I_π would probe the bicollinear spin stripe regions within the core of each domain, while I_σ would be sensitive to the rotated spin component in the domain walls. The relative volume fraction of domain walls would grow with increasing temperature until SDW order is lost at T_SDW. This interpretation could explain the broader peak for I_σ when compared to I_π, as shown in Fig. <ref>d. Finally, the influence of the domain walls may also be the source of the small deviations from a perfectly in-plane moment depicted in Fig. <ref>c.
This analysis represents a new approach to detecting defects in antiferromagnetic order. Unlike ferromagnetic domain walls, antiferromagnetic domain walls are often difficult to detect.<cit.> Further analysis of the polarization and temperature dependence of the (1/4, 1/4) SDW peaks in La_3Ni_2O_7 may provide key insights into the width, density and detailed magnetic configuration of the domain walls. Such investigations may be of key importance as the magnetic configuration within the domain walls may represent the melting of long-range SDW order as the material enters its superconducting phase, since a similar phenomenology also exists in the cuprates.
§ DISCUSSION
A comparison of these experiments with those on bulk crystals <cit.> reveals a number of commonalities, including similar T_SDW's, temperature dependences, photon polarization dependence, and peak widths along the (H H L) direction, suggesting that the results reported here are universal to both bulk and thin film samples.
The structure of the bicollinear spin stripe order in La_3Ni_2O_7 bears a strong resemblance to the magnetically ordered state in FeTe, but with a major distinction: in FeTe, the spin stripe ordering is accompanied by a monoclinic structural transition that stabilizes the magnetic order.<cit.> In contrast, temperature-dependent, synchrotron-based hard x-ray diffraction measurements of our thin films do not reveal any lowering of structural symmetry or the appearance of superlattice peaks upon entering the SDW state (see Supplementary Information). Furthermore, the existence of domain walls or topological defects indicates that both orthogonal magnetic domains occur within a single NiO_2 plane. This may imply an inherent instability in La_3Ni_2O_7 to rotational symmetry breaking, possibly accompanied by disordered or frustrated nematic order,<cit.> analogous to the Fe-based and cuprate superconductors where unidirectional density wave and nematic orders are pervasive and closely intertwined with superconductivity.<cit.> These close analogies with the cuprates and Fe-based superconductors suggest that the melting of this SDW state, via the likely formation of Bloch-like domain walls, is directly related to the formation of the superconducting state under high pressure.
10
sun_signatures_2023
H. Sun, et al., Nature 621, 493 (2023).
chen_electronic_2024
X. Chen, et al., Electronic and magnetic excitations in La_3Ni_2O_7 (2024). ArXiv:2401.12657 [cond-mat] version: 1.
liu_evidence_2022
Z. Liu, et al., Science China Physics, Mechanics and Astronomy 66, 217411 (2022).
wu_magnetic_2001
G. Wu, J. J. Neumeier, M. F. Hundley, Phys. Rev. B 63, 245120 (2001).
kakoi_multiband_2024
M. Kakoi, et al., Journal of the Physical Society of Japan 93, 053702 (2024).
dan_spin-density-wave_2024
Z. Dan, et al., Spin-density-wave transition in double-layer nickelate La_3Ni_2O_7 (2024). ArXiv:2402.03952 [cond-mat].
chen_evidence_2024
K. Chen, et al., Phys. Rev. Lett. 132, 256503 (2024).
khasanov_pressure-induced_2024
R. Khasanov, et al., Pressure-induced split of the density wave transitions in La_3Ni_2O_7-δ (2024). ArXiv:2402.10485 [cond-mat].
chen_polymorphism_2024
X. Chen, et al., Journal of the American Chemical Society 146, 3640 (2024).
puphal_unconventional_2023
P. Puphal, et al., Unconventional crystal structure of the high-pressure superconductor La_3Ni_2O_7 (2023). ArXiv:2312.07341 [cond-mat].
parzyck_absence_2024
C. T. Parzyck, et al., Nature Materials 23, 486 (2024).
wang_antiferromagnetic_2018
B.-X. Wang, et al., Phys. Rev. Mater. 2, 064404 (2018).
piamonteze_spin-orbit-induced_2005
C. Piamonteze, et al., Phys. Rev. B 71, 020406 (2005).
liu_strain-mediated_2010
J. Liu, et al., Applied Physics Letters 96, 233110 (2010).
bruno_probing_2014
F. Y. Bruno, et al., Applied Physics Letters 104, 021920 (2014).
hepting_complex_2018
M. Hepting, et al., Nature Physics 14, 1097 (2018).
hannon_x-ray_1988
J. P. Hannon, G. T. Trammell, M. Blume, D. Gibbs, Physical Review Letters 61, 1245 (1988).
scagnoli_induced_2008
V. Scagnoli, et al., Phys. Rev. B 77, 115138 (2008).
haverkort_symmetry_2010
M. W. Haverkort, N. Hollmann, I. P. Krug, A. Tanaka, Phys. Rev. B 82, 094403 (2010).
fink_resonant_2013
J. Fink, E. Schierle, E. Weschke, J. Geck, Reports on Progress in Physics 76, 056502 (2013).
scagnoli_role_2006
V. Scagnoli, et al., Phys. Rev. B 73, 100409 (2006).
frano_orbital_2013
A. Frano, et al., Phys. Rev. Lett. 111, 106804 (2013).
alonso_structural_1997
J. A. Alonso, M. J. Martínez-Lope, J. L. García-Muñoz, M. T. Fernández-Díaz, Journal of Physics: Condensed Matter 9, 6417 (1997).
dai_antiferromagnetic_2015
P. Dai, Reviews of Modern Physics 87, 855 (2015).
rodriguez_magnetic-crystallographic_2011
E. E. Rodriguez, et al., Phys. Rev. B 84, 064403 (2011).
li_first-order_2009
S. Li, et al., Phys. Rev. B 79, 054503 (2009).
comin_broken_2015
R. Comin, et al., Science 347, 1335 (2015).
choi_universal_2024
J. Choi, et al., Advanced Materials 36, 2307515 (2024).
cheong_seeing_2020
S.-W. Cheong, M. Fiebig, W. Wu, L. Chapon, V. Kiryukhin, npj Quantum Materials 5, 1 (2020).
bao_tunable_2009
W. Bao, et al., Phys. Rev. Lett. 102, 247001 (2009).
rodriguez_magnetic_2013
E. E. Rodriguez, et al., Phys. Rev. B 88, 165110 (2013).
bohmer_nematicity_2022
A. E. Böhmer, J.-H. Chu, S. Lederer, M. Yi, Nature Physics 18, 1412 (2022).
fradkin_colloquium_2015
E. Fradkin, S. A. Kivelson, J. M. Tranquada, Reviews of Modern Physics 87, 457 (2015).
fernandes_intertwined_2019
R. M. Fernandes, P. P. Orth, J. Schmalian, Annual Review of Condensed Matter Physics 10, 133 (2019).
hawthorn_-vacuum_2011
D. G. Hawthorn, et al., Review of Scientific Instruments 82, 073104 (2011).
§ ACKNOWLEDGMENTS
This work was supported by the Air Force Office of Scientific Research (Grant No. FA9550-21-1-0168, FA9550-23-1-0161), the National Science Foundation through Grant No. DMR-2104427 and the Platform for the Accelerated Realization, Analysis and Discovery of Interface Materials (PARADIM) under Cooperative Agreement No. DMR-2039380, the U.S. Department of Energy, Office of Basic Energy Sciences, under contract no. DE-SC0019414, and the Natural Sciences and Engineering Research Council (NSERC) of Canada. Additional support for materials synthesis was provided by the Gordon and Betty Moore Foundation’s EPiQS Initiative through Grant Nos. GBMF3850 and GBMF9073. N.K. Gupta acknowledges support from the Waterloo Institute of Nanotechnology (WIN). This work is based on research conducted at the Center for High-Energy X-ray Sciences (CHEXS), which is supported by the National Science Foundation (BIO, ENG and MPS Directorates) under award DMR-1829070. Part of the research described in this paper was performed at the Canadian Light Source, a national research facility of the University of Saskatchewan, which is supported by the Canada Foundation for Innovation (CFI), the Natural Sciences and Engineering Research Council (NSERC), the National Research Council Canada (NRC), the Canadian Institutes of Health Research (CIHR), the Government of Saskatchewan, and the University of Saskatchewan. Substrate preparation was performed in part at the Cornell NanoScale Facility, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the NSF (Grant No. NNCI-2025233); the authors would like to thank Sean Palmer and Steven Button for their assistance in substrate preparation and Michel Gingras for helpful discussions.
§ AUTHOR CONTRIBUTIONS
D.G.H. and K.M.S. conceived of, designed, and supervised the experiment. N.K.G. conceived of the experiment and identified anisotropy in the SDW peak shape. The resonant scattering experiments were performed by N.K.G., R.G., M.K., C.T.P., N.C., R.S., and D.G.H. Samples were synthesized and characterized by Y.W. with assistance from C.T.P, D.G.S., and K.M.S. Hard x-ray scattering measurements were conducted and analyzed by B.Z.G., S.S. and A.S. The data were analyzed by N.K.G., R.G., C.T.P., K.M.S. and D.G.H. Model calculations were performed by D.G.H. The manuscript was written by D.G.H. and K.M.S. with input from all authors.
§ COMPETING INTERESTS
The authors declare that they have no competing interests.
§ DATA AND MATERIALS AVAILABILITY
All data needed to evaluate the conclusions in the paper are present in the manuscript and supplementary information. Additional data available upon request.
§ SUPPLEMENTARY MATERIALS
Materials and Methods
Supplementary Text
Figs. S1 to S9
References
§ MATERIALS AND METHODS
Two samples, A and B, were measured for this study. A more extensive set of data was measured for sample A. However, sample B was found to have peak intensity, peak width, and temperature dependence comparable to sample A, as well as reproducing the anisotropy in the width parallel and perpendicular to Q (see supplementary information).
Resonant soft x-ray scattering measurements were performed at the Canadian Light Source REIXS beamline.<cit.> The majority of the measurements were made using an energy resolved silicon drift detector, with measurements of the peak shape utilizing a 2D micro-channel plate detector.
The samples were oriented using the (001) Bragg peak of the NGO substrate at 2500 eV. The in-plane orientation was performed using (1/4 1/4 L) SDW Bragg peaks at 852 eV. The samples were found to have a c-axis lattice constant of 20.44 Å from lab-based x-ray diffraction. From the location of the (1/4 1/4 L) peaks, √(a_T^2 + b_T^2) = 5.426(3) Å for sample A.
The scattering intensities, I_S, shown in Figures 1b (inset), 1c, 1d, 2b, 3c and 4, are deduced by subtracting the background intensity measured above the SDW transition (at 200 K or 220 K) from the measured intensity at lower temperature.
For measurements with σ incident polarization, shown in Figure 4, the incident beam includes a small ≃ 1.5% π contribution due to synchrotron light from bending chicane magnets, in addition to the primary σ polarized light from the EPU. In <ref>d, the full width at half maximum (FWHM) with σ polarization is determined by fitting the data in Fig. <ref>a to a narrow peak from π polarized light, with FWHM equal to that of a pure π polarization (blue curve in Fig. <ref>d), as well as a broader peak with σ polarization (red curve in Fig. <ref>d).
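As an illustration of this two-component fit, the sketch below (Python/scipy) fits a narrow peak of fixed width, representing the π leakage, plus a broader σ peak and a constant background. The Lorentzian line shape, parameter values, and synthetic stand-in data are illustrative assumptions rather than the actual fitting code used for Fig. <ref>d.

    import numpy as np
    from scipy.optimize import curve_fit

    FWHM_PI = 0.01  # fixed from a fit to the pure pi-polarization peak (illustrative)

    def lorentzian(q, amp, q0, fwhm):
        return amp / (1.0 + ((q - q0) / (fwhm / 2.0)) ** 2)

    def two_component(q, a_pi, a_sigma, q0, fwhm_sigma, bg):
        # Narrow peak from the ~1.5% pi contamination plus a broader sigma peak.
        return (lorentzian(q, a_pi, q0, FWHM_PI)
                + lorentzian(q, a_sigma, q0, fwhm_sigma) + bg)

    # q, counts: a momentum scan measured with nominally sigma-polarized light;
    # synthetic stand-in data are generated here so the sketch runs on its own.
    q = np.linspace(0.23, 0.27, 101)
    counts = two_component(q, 50.0, 20.0, 0.25, 0.03, 5.0)
    popt, pcov = curve_fit(two_component, q, counts,
                           p0=[40.0, 10.0, 0.25, 0.02, 0.0])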
|
http://arxiv.org/abs/2409.02921v1 | 20240904175955 | Learning Density Functionals from Noisy Quantum Data | [
"Emiel Koridon",
"Felix Frohnert",
"Eric Prehn",
"Evert van Nieuwenburg",
"Jordi Tura",
"Stefano Polla"
] | quant-ph | [
"quant-ph"
] |
E-mail: [email protected]
§ ABSTRACT
The search for useful applications of noisy intermediate-scale quantum (NISQ) devices in quantum simulation has been hindered by their intrinsic noise and the high costs associated with achieving high accuracy.
A promising approach to finding utility despite these challenges involves using quantum devices to generate training data for classical machine learning (ML) models.
In this study, we explore the use of noisy data generated by quantum algorithms in training an ML model to learn a density functional for the Fermi-Hubbard model.
We benchmark various ML models against exact solutions, demonstrating that a neural-network ML model can successfully generalize from small datasets subject to noise typical of NISQ algorithms.
The learning procedure can effectively filter out unbiased sampling noise, resulting in a trained model that outperforms any individual training data point.
Conversely, when trained on data with expressibility and optimization error typical of the variational quantum eigensolver, the model replicates the biases present in the training data.
The trained models can be applied to solving new problem instances in a Kohn-Sham-like density optimization scheme, benefiting from automatic differentiability and achieving reasonably accurate solutions on most problem instances.
Our findings suggest a promising pathway for leveraging NISQ devices in practical quantum simulations, highlighting both the potential benefits and the challenges that need to be addressed for successful integration of quantum computing and ML techniques.
Learning Density Functionals from Noisy Quantum Data
Stefano Polla
July 2023
§ INTRODUCTION
Following the rapid development of quantum hardware, the last decade saw an explosion in the research on quantum algorithms for noisy intermediate-scale quantum (NISQ) devices <cit.>.
An important goal of this research is identifying avenues towards achieving useful quantum advantage: the application of quantum devices to relevant, classically-intractable problems.
A prominent example is
the electronic structure problem, which is central to chemistry and material science, and has been a major focus for quantum algorithms due to its inherent quantum nature and significant scientific and commercial importance <cit.>.
Much of the recent research on quantum algorithms for this problem has centered on the variational quantum eigensolver (VQE), a NISQ-tailored algorithm designed to approximate ground states by optimizing a heuristic quantum ansatz <cit.>.
However, several significant challenges hinder the achievement of practical solutions to the electronic structure problem.
Accurately estimating energies and other properties from a state prepared on a quantum computer is made prohibitively expensive by sampling costs <cit.>.
This is exacerbated by the necessity of applying error mitigation techniques to reduce bias due to circuit noise, which increases the sampling overhead <cit.>.
Additionally, VQE introduces errors intrinsic to the algorithm, related to the expressibility of the ansatz and the complexity of the optimization landscape.
In practical electronic structure calculations, it is often necessary to solve for multiple system configurations, which increases the overall cost.
Density Functional Theory (DFT) is the workhorse of modern quantum chemistry and material science, used to investigate the electronic structure of many-body systems at a low computational cost <cit.>.
Central to DFT is the definition of a universal functional, which maps the ground state electronic density to the corresponding kinetic and interaction energy for a given problem family <cit.>.
Although exact universal functionals theoretically exist, their general explicit form is unknown, and their evaluation would necessarily be computationally intractable <cit.>.
Instead, DFT practitioners rely on a vast array of approximate functionals, developed over the decades through mathematical assumptions and physical intuition.
Recently, machine learning (ML) approaches have emerged as powerful tools for designing new DFT functionals <cit.>.
In particular, deep-learning-based functionals have demonstrated remarkable performance on benchmark problems in chemistry <cit.>.
These models can be trained on mixed datasets generated through various approaches, including DFT based on other functionals, expensive first-principles methods like coupled cluster, and experimental data <cit.>.
The synergies between DFT and quantum algorithms have also been explored by a few works, notably in the context of learning functionals on fault-tolerant quantum computers <cit.>, and in using quantum algorithms to supplement results from approximate DFT functionals <cit.>.
As quantum hardware continues to improve, the prospect of using quantum devices to generate training data that is inaccessible to classical methods is becoming increasingly realistic.
Such data, derived from the quantum simulation of complex systems, could significantly enhance the capabilities of classical ML models <cit.>.
In turn, these models could generalize from the limited, expensive-to-generate quantum data.
Moreover, by leveraging the underlying structure, physically-motivated ML models could combine information from all the training data points, filtering out noise and leading to predictions that are more accurate than any individual training point.
While not much research has focused on practically training classical models with quantum-generated data, related concepts exist in the literature.
In quantum science, ML is often applied to learning properties of a system from measurement outcomes, in tasks like phase classification <cit.>.
Classical shadow tomography <cit.> constructs classical models that can predict many properties of a given quantum state from a limited set of measurements.
Similarly, recent work demonstrates a technique to extract of classical surrogates from QML models trained on quantum devices <cit.>.
Moreover, classical ML models have been employed for noise mitigation in quantum systems <cit.>.
In this work, we explore the application of NISQ devices to generate noisy training data aimed at learning DFT functionals for the Fermi-Hubbard model, with the goal of generalizing and enhancing these datasets.
We focus on limited, noisy datasets of 1,000 data points, which, though small from a machine learning perspective, can already be expensive to generate on a quantum device.
We separately examine the effects of sampling noise typical of NISQ algorithms and the algorithmic errors associated with VQE, analyzing how these factors impact the learned functionals.
The models are benchmarked on both an energy prediction task and a Kohn-Sham-like density optimization task.
The remainder of the paper is structured as follows:
Sec. <ref> summarizes the necessary background on DFT, the physical system and the learning task.
Sec. <ref> discusses the dataset generation process, including details on the considered quantum algorithms.
Sec. <ref> describes the machine learning model, training process, and compares various ML regression methods.
Sec. <ref> presents the benchmarking results of the trained ML models.
Finally, Sec. <ref> discusses the findings and provides an outlook for future research.
§ BACKGROUND
§.§ Density Functional Theory
Central to DFT are the Hohenberg–Kohn theorems <cit.>, which assert two fundamental principles: (1) the many-body ground state of a system of interacting electrons in an external potential is uniquely determined by its electron density, and (2) the ground state energy can be obtained variationally by minimizing a functional of the electronic density.
To formalize this and set the notation, consider a family of electronic Hamiltonians
Ĥ = Û + T̂ + V̂
where Û represents a fixed electron-electron interaction, T̂ is a fixed kinetic energy term, and V̂ is a free external potential term.
The expectation value of the potential ⟨V̂⟩ is defined as a linear functional of the electronic density ρ.
The first Hohenberg-Kohn theorem ensures the existence of the universal functional
ℱ: ρ_gs ↦ F := ⟨ψ_gs|(Û+T̂)|ψ_gs⟩,
which maps the electron density ρ_gs of the ground state |ψ_gs⟩ of any Ĥ to the respective expectation value F of the kinetic and interaction energy.
The functional is termed universal because it is independent of the external potential acting on the system.
The total ground state energy E = ⟨ψ_gs|Ĥ|ψ_gs⟩ can be reconstructed from the ground state density as
E = ℰ[ρ_gs] := ℱ[ρ_gs] + ⟨ψ_gs|V̂|ψ_gs⟩,
where ⟨ψ_gs|V̂|ψ_gs⟩ is a linear functional of ρ_gs by definition.
The second Hohenberg-Kohn theorem states that, for a given V̂, the ground state energy and density of the corresponding Ĥ can be obtained by minimizing this functional:
E = min_ρ ℰ[ρ],
ρ_gs = argmin_ρ ℰ[ρ].
While DFT approaches have been developed to investigate a wide variety of many-body systems, here we focus on a particular lattice model.
In lattice models, electrons occupy discrete, localized sites, and the electron density is represented as a vector ρ whose components ρ_j denote the occupation of the j-th site.
The functional ℱ[ρ] is then reformulated as a function of this vector, simplifying the computational treatment of the system and making it more amenable to a machine learning scheme.
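In this lattice setting, the minimization of Eq. (<ref>) becomes a constrained optimization over a length-L vector. As a minimal sketch (Python/scipy), given any callable approximation F_fun of the universal functional (for instance a trained model, as developed later in this work), with the particle number fixed by an equality constraint and site occupations bounded by the two spin species; all names here are illustrative:

    import numpy as np
    from scipy.optimize import minimize

    def optimize_density(F_fun, mu, n_particles, L):
        # Minimize E[rho] = F_fun(rho) + mu . rho over site occupations rho_j,
        # subject to sum_j rho_j = n_particles and 0 <= rho_j <= 2.
        energy = lambda rho: F_fun(rho) + np.dot(mu, rho)
        rho0 = np.full(L, n_particles / L)  # uniform initial guess
        result = minimize(energy, rho0, method="SLSQP",
                          bounds=[(0.0, 2.0)] * L,
                          constraints=[{"type": "eq",
                                        "fun": lambda rho: rho.sum() - n_particles}])
        return result.x, result.fun  # approximate ground-state density and energy

    # Toy usage with a quadratic stand-in for the universal functional:
    F_toy = lambda rho: np.sum((rho - 0.5) ** 2)
    rho_opt, E_opt = optimize_density(F_toy, mu=np.zeros(8), n_particles=4, L=8)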
§.§ The Fermi-Hubbard model
In this work, we focus on the Fermi-Hubbard model, a paradigmatic lattice spin-1/2 Fermion model with on-site interaction.
We take this model on a one-dimensional ring of L sites.
The Hamiltonian of this model,
Ĥ_μ = T̂ + Û + V̂_μ,
comprises a kinetic energy term T̂, an interaction term Û, and a potential term V̂_μ.
All of these can be expressed in terms of fermionic creation and annihilation operators, ĉ_j,σ and ĉ^†_j,σ.
From here on, j ∈{1, ..., L} denotes the site index, and σ={↑, ↓} represents the electron spin.
The number operator is defined as n̂_j, σ = ĉ^†_j,σĉ_j, σ.
The kinetic term describes the hopping of electrons between adjacent sites and is given by:
T̂ :=
- t ∑_j^L ∑_σ∈{↑, ↓} (ĉ^†_j,σĉ_j+1, σ + ĉ^†_j+1,σĉ_j, σ),
where t is the hopping parameter.
We choose without loss of generality t=1, thereby fixing the units of all energies.
We consider periodic boundary conditions, meaning (ĉ_L+1,σ := ĉ_1, σ).
The interaction term accounts for the on-site repulsion between electrons of opposite spins and is expressed as:
Û :=
u ∑_j^L n̂_j, ↑n̂_j, ↓,
where u is the interaction strength.
The potential V̂_μ defines a specific instance of the Hubbard model within the broader family of models characterized by the chain length L and the interaction strength u/t.
The local chemical potential μ = (μ_1, ..., μ_L) introduces an additional term at each site to the Hamiltonian:
V̂_μ = μ·n̂ := ∑_j^L μ_j (n̂_j, ↑ + n̂_j, ↓).
Each choice of μ defines a unique instance Ĥ_μ within the model family, with ground state |ψ_μ⟩ and the corresponding ground state energy E.
The elements of the electronic density vector ρ_μ are the expected numbers of particles at each site, summed over the two spin species:
ρ_j = ⟨ψ_μ|(n̂_j,↑ + n̂_j,↓)|ψ_μ⟩.
The universal functional ℱ is then a function mapping the vector ρ_μ to the scalar F = ⟨ψ_μ|(Û + T̂)|ψ_μ⟩; we call this the Hubbard functional.
Note that the Hubbard functional depends on the value of the interaction strength u/t, the number of sites L and the geometry of the system.
In our numerical study, we focus on a periodic Hubbard chain of L=8 sites; this system is small enough to allow effective exact diagonalization benchmarks.
We choose an interaction strength of u/t=4, corresponding to the most challenging regime to simulate when scaling to larger systems <cit.>.
Furthermore, we restrict all solutions to the quarter-filling (L/4 = 2 particles of each spin species) and total-spin singlet subspace, based on the symmetries of the Hubbard model.
A more general functional can in principle be obtained by stitching together functionals defined on different symmetry sectors.
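As one concrete way of setting up such problem instances, the sketch below uses OpenFermion to assemble Ĥ_μ and diagonalize it. The sign and ordering conventions of fermi_hubbard should be checked against the definitions above, and for brevity the diagonalization acts on the full Fock space rather than on the quarter-filling singlet block used in this work.

    import numpy as np
    from openfermion import FermionOperator, fermi_hubbard, get_sparse_operator
    from scipy.sparse.linalg import eigsh

    L, t, u = 8, 1.0, 4.0
    mu = np.random.uniform(-0.5, 0.5, size=L)  # illustrative on-site potential

    # Periodic 1D chain: kinetic and interaction terms T + U.
    ham = fermi_hubbard(L, 1, tunneling=t, coulomb=u, periodic=True)

    # Add the instance-defining potential term V_mu; OpenFermion interleaves
    # spin-orbitals as p = 2*site + spin.
    for j in range(L):
        for spin in (0, 1):
            p = 2 * j + spin
            ham += FermionOperator(((p, 1), (p, 0)), mu[j])

    h_sparse = get_sparse_operator(ham)  # sparse matrix of dimension 2^(2L)
    ground_energy, ground_state = eigsh(h_sparse, k=1, which="SA")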
§.§ The learning task
In recent years, data-driven approaches for learning approximate DFT functionals have emerged, where a machine learning model is trained to map electron densities to energies <cit.>.
For the Hubbard model, Nelson et al. <cit.> demonstrated a method for training a model to learn the functional ℱ using 105,000 density-ground state energy pairs obtained via exact diagonalization. However, the reliance on exact diagonalization limits the feasible system size due to its high computational demands.
In this work, we consider the task of learning an approximation ℱ^ML of the same functional, but from noisy data generated by a quantum algorithm.
Generating such data is expensive, with the cost rising with both the size of the dataset and the desired accuracy.
By training a model on a limited dataset, we aim to enable generalization and reduce noise, potentially leading to predictions with greater accuracy than any individual data point.
Our overall approach to learning a DFT functional from noisy quantum data is illustrated in Fig. <ref>:
The process begins by initializing the Hubbard model with random values of μ drawn from a specified distribution (Sec. <ref>).
The ground state problem is then solved for 1000 potentials to obtain a dataset of noisy density-energy pairs (ρ̃_X, F̃_X), where X indicates the method used to generate the data.
To represent the noise typical of NISQ algorithms, we consider and simulate two methods: expectation value estimation (EVE), which isolates the effect of sampling noise, and the variational quantum eigensolver (VQE), which isolates expressibility and optimization errors.
Exact diagonalization results are used for benchmarking, while the noisy data are employed to train a neural network-based regression model.
This approach allows us to explore how the inherent noise characteristic of quantum data from current near-term devices impacts the training of a machine-learned DFT functional.
§ THE DATASET
Training and testing data sets are constructed by solving configurations of the Hubbard problem with a random on-site potential μ.
The same configurations are solved with each of the considered methods: exact diagonalization (ED), expectation value estimation (EVE), and variational quantum eigensolver (VQE).
The training set and the test set are generated using 1000 independent random potentials each; the training set will be further split to implement 5-fold cross-validation.
Note that this number of configurations is small compared to the 105,000 potentials used in <cit.>.
We focused on this small-data regime due to the anticipated computational expense of generating data points with quantum algorithms.
To sample the random potential we follow a modified version of the approach in <cit.>.
For each data point, we first sample a strength parameter W ∈ [0.005t,2.5t] uniformly at random.
We then sample μ uniformly at random and calculate its standard deviation
σ(μ) = √(1/L ∑_j μ_j^2 - (1/L ∑_j μ_j)^2).
If the standard deviation σ(μ) < 0.4 t, we accept the potential and add it to our dataset; otherwise we reject and repeat the sampling procedure from the beginning.
This procedure produces a representative distribution of potentials of varied strengths while avoiding too-large energy fluctuations, without requiring the solution of the Hubbard problem instance at this stage.
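In code, this rejection-sampling loop could look as follows (Python/numpy). The interval from which the components of μ are drawn is not fully specified above, so the symmetric range [-W, W] used here is our assumption.

    import numpy as np

    rng = np.random.default_rng(42)

    def sample_potential(L=8, t=1.0):
        while True:
            W = rng.uniform(0.005 * t, 2.5 * t)  # strength parameter
            mu = rng.uniform(-W, W, size=L)      # assumed component-wise range
            if np.std(mu) < 0.4 * t:             # population standard deviation
                return mu                        # accept
            # otherwise reject and resample both W and mu

    potentials = [sample_potential() for _ in range(1000)]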
We construct an exact dataset to act as a baseline, solving all the problem instances (training and test set) with exact diagonalization.
For every problem instance Ĥ_μ, we construct the Hamiltonian matrix block corresponding to quarter filling and spin singlet.
We diagonalize it, obtaining the ground state |ψ(μ)⟩ and the energy E(μ).
From the state we calculate the expected particle density vector ρ(μ).
We then compute the kinetic-interaction energy F(μ) = E(μ) - μ·ρ(μ).
Thus, from each random configuration μ, we obtain a pair (ρ(μ), F(μ)) representing the input and output of the exact Hubbard functional ℱ.
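Schematically, each data point is then assembled as below, where solver stands in for any exact ground-state routine (such as the sparse diagonalization sketched earlier, restricted to the quarter-filling singlet sector):

    import numpy as np

    def make_data_point(mu, solver):
        E, rho = solver(mu)      # ground-state energy and density of H_mu
        F = E - np.dot(mu, rho)  # kinetic + interaction part: F = E - mu . rho
        return rho, F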
§.§ Expectation value estimation
Assuming the exact ground state |ψ_μ⟩ of the Fermi-Hubbard Hamiltonian Eq. (<ref>) can be prepared on a quantum device, measuring F and ρ_μ will still incur an error called sampling noise.
To reproduce this effect we simulate expectation value estimation, drawing samples from the measurement outcome distribution defined by the Born rule on the ground state vector |ψ_μ⟩ produced by exact diagonalization.
To define the measurements for expectation value estimation, we note that all the terms we want to measure are diagonal either in real space (density and Coulomb energy) or in Fourier space (kinetic energy) <cit.>.
To estimate the real-space-diagonal terms, we can sample all the number operators n̂_j, σ at the same time.
Averaging M samples, we obtain an estimate of each ⟨n̂_j,σ⟩, and thus reconstruct an estimate ρ̃_j for the density ρ_j = ⟨n̂_j,↑⟩ + ⟨n̂_j,↓⟩.
The expectation value of the Coulomb energy
⟨Û⟩ = u ∑_j ⟨n̂_j,↑ n̂_j,↓⟩
can be estimated from the same samples by averaging the on-site correlators n̂_j,↑n̂_j,↓.
The kinetic energy operator T̂ is diagonalized by expressing it in terms of the Fourier-space fermionic operators ĉ_k,σ = (1/√(N)) ∑_j=0^N-1 e^i j k 2 π/N ĉ_j,σ,
T̂ = -2 t ∑_k=0^N-1 ∑_σ=↑, ↓ cos(2 π k/N) n̂_k,σ,
where n̂_k,σ = ĉ^†_k,σ ĉ_k,σ.
The operators n̂_k,σ can be sampled at the same time; another M samples are taken and averaged to obtain estimates ⟨n̂_k,σ⟩, and their results are combined to estimate ⟨T̂⟩.
The estimate F̃_EVE of F is finally calculated by summing the kinetic and Coulomb contributions.
By the central limit theorem, the estimate F̃_EVE is unbiased and asymptotically normal, with variance proportional to 1/M.
Assuming we have a qubit-based quantum device and Jordan-Wigner Fermion-to-qubit encoding <cit.>, the real-space samples are simply obtained by measuring all qubits in the computational basis as n̂_j,σ = (1-Z_j,σ)/2 (where Z_j,σ is the Pauli Z-operator on the qubit that encodes the spin-orbital j,σ).
In order to sample the plane wave number operators n̂_k,σ, we need to perform a basis rotation circuit implementing the fast fermionic Fourier transform <cit.>: a circuit of depth O(N) acting on the two spin sectors independently.
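The real-space part of this procedure is simple to emulate classically. The sketch below (Python/numpy) samples computational-basis outcomes from a state vector and reconstructs the density and Coulomb estimates; the interleaved spin-orbital ordering and the random placeholder state are our assumptions, the latter standing in for the exact ground state.

    import numpy as np

    rng = np.random.default_rng(0)
    L, u, M = 8, 4.0, 1000

    # Placeholder state vector; in practice psi_gs is the exact ground state of
    # H_mu expressed in the Jordan-Wigner qubit basis (dimension 2^(2L)).
    psi_gs = rng.normal(size=2 ** (2 * L)) + 1j * rng.normal(size=2 ** (2 * L))
    psi_gs /= np.linalg.norm(psi_gs)

    probs = np.abs(psi_gs) ** 2
    probs /= probs.sum()  # guard against floating-point rounding
    outcomes = rng.choice(len(psi_gs), size=M, p=probs)
    occ = (outcomes[:, None] >> np.arange(2 * L)) & 1  # occupation bits, (M, 2L)

    n_up, n_dn = occ[:, 0::2], occ[:, 1::2]       # assumed interleaved ordering
    rho_est = (n_up + n_dn).mean(axis=0)          # density estimate rho~_j
    U_est = u * (n_up * n_dn).mean(axis=0).sum()  # Coulomb energy estimate
    # <T> is estimated analogously from another M samples drawn after rotating
    # the state into the plane-wave basis (fermionic Fourier transform).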
§.§ Variational quantum eigensolver
The VQE is a hybrid quantum-classical method that combines a quantum subroutine with a classical optimizer, aiming to estimate the ground state energy of a Hamiltonian <cit.>.
The quantum subroutine prepares on a quantum device an ansatz state |ψ(θ)⟩ = U(θ)|ψ_0⟩, dependent on a set of classical parameters θ, and measures its expected energy E(θ) = ⟨ψ(θ)|Ĥ|ψ(θ)⟩ through EVE.
This energy is then minimized over the parameters θ using a classical optimizer which calls the quantum subroutine.
The VQE is a heuristic method, which means that convergence to the true ground state energy is not guaranteed <cit.>.
The manifold of states ψ(θ) that can be represented by the chosen ansatz is, in general, much smaller than the Hilbert space of the problem.
This is a feature of VQE: restricting the number of parameters is the only way to make the problem feasible for the classical optimizer.
The minimum-energy ansatz state will thus only approximate the ground state, with its energy min_θ E(θ) guaranteed by the variational principle to be greater than or equal to the ground state energy.
The quality of the results depends on the choice of ansatz, the performance of the classical optimizer, and is further affected by hardware noise and sampling noise.
In this work, we focus on isolating the effect of the errors intrinsic to the VQE due to ansatz expressibility and optimization.
We use a classical state-vector simulator to extract expectation values without sampling noise, which is studied separately through EVE.
Both the energy and the electronic density of the optimal state are affected by the VQE errors.
Variational ansatz families fall into two main categories: hardware-efficient and physically inspired. Physically inspired ansätze, like the Unitary Coupled Cluster (UCC) <cit.> consider essential system properties but often require high circuit depths, making their implementation challenging for near-term quantum devices.
Hardware-efficient ansätze instead maximize the number of parameters per circuit layer, but may still require many layers to approximate the ground state and do not take into consideration the problem symmetries.
We will consider two ansätze in this work: the well-known Variational Hamiltonian Ansatz (VHA) <cit.> and the Number Preserving Fabric (NPF) <cit.> ansatz, which combine the advantages of both categories.
The VHA is one of the earliest proposed ansätze for VQE study of the Fermi-Hubbard model <cit.>, and has found success in hardware experiments <cit.>.
This ansatz is inspired by the adiabatic algorithm <cit.> and formally equivalent to the quantum alternating operator ansatz (QAOA) <cit.>.
It distinguishes itself from the usual VQE ansätze by incorporating the values of μ defining the problem instance into the ansatz definition.
The circuit structure for the Hubbard model, for parameters θ = {θ_i,j}, is
|ψ({θ_i,j})⟩ = ∏_i=1^p [ e^-i T̂_e θ_i,0 e^-i T̂_o θ_i,1 e^-i V̂_μ θ_i,2 e^-i Û θ_i,3 ] |ψ_0⟩,
thus able to represent a first-order Trotter decomposition, in p slices, of the adiabatic evolution to Ĥ from Ĥ^0, commonly taken as the non-interacting Hamiltonian of which |ψ_0⟩ is the ground state.
In Eq. (<ref>), T̂_e (T̂_o) are all hopping terms on even (odd) sites in Eq. (<ref>).
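For system sizes amenable to exact simulation, the action of this ansatz can be emulated with sparse linear algebra. A minimal sketch (Python/scipy), assuming the four term matrices are available as sparse operators in the same basis as the initial state:

    import scipy.sparse.linalg as spla

    def vha_state(psi0, theta, terms):
        # terms = [T_e, T_o, V_mu, U] as sparse matrices; theta has shape (p, 4).
        psi = psi0
        for layer in theta:
            for angle, h in zip(layer, terms):
                psi = spla.expm_multiply(-1j * angle * h, psi)  # e^{-i theta H}|psi>
        return psi

The VQE energy would then be minimized over theta with a classical optimizer, e.g. scipy.optimize.minimize with method="COBYLA", as discussed below.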
The NPF ansatz takes another strategy, which is, in contrast to the VHA, independent of the problem instance.
It composes the two fundamental interactions contained in a spin-preserving two-body Hamiltonian: a spatial orbital rotation and a double excitation.
We call the parameterized composition of these operations Q(θ, ϕ), and these blocks are arranged in a brick-wall pattern to maximize the gate density, resulting in an expressive ansatz with only local gates. For the exact form of the ansatz, see Ref. <cit.>.
While both the VHA and NPF ansatz are physically inspired and preserve key symmetries like total spin and number of particles, we highlight here some differences between the two. The number of parameters in the VHA equals 4 per layer by inspection of Eq. (<ref>).
The NPF ansatz has two parameters per Q-block, of which there are N_q/2-1 per layer. In our case N_q = 2N = 16, resulting in 14 parameters per layer. Although the NPF ansatz has more parameters per layer, its higher gate density means it does not necessarily result in deeper circuits.
VQE cost functions are often highly irregular and non-convex, with numerous local minima. Hence, the choice of classical optimizer is crucial <cit.>. The VHA, being a more structured ansatz, may have a more irregular cost landscape, making it prone to getting stuck in local minima. To address this, we use the gradient-free Constrained Optimization by Linear Approximation (COBYLA) optimizer. For the NPF, the abundance of parameters offers more flexibility but less structure, and empirically, following the gradient in this cost landscape tends to work better. Therefore, we employ the Sequential Least Squares Programming (SLSQP) optimizer for the NPF ansatz.
§ LEARNING THE FUNCTIONAL
In this section we present the chosen machine learning models and the details of the training procedure.
Alongside the main neural-network model, for which we provide a detailed result analysis in Sec. <ref>A, we present a comparative study of a set of out-of-the-box models in Sec. <ref>.
§.§ Method
The main ML model we consider is an adaptation of the convolutional neural network (CNN) model proposed by Nelson et al. <cit.>.
The model consists of a sequence of one 1-dimensional periodic convolutional layer with 8 local features and kernel size 3, two dense layers with 128 features each, and a final dense layer with a single output.
Before being processed by the CNN, the input densities ρ̃_X are normalized.
Each element ρ_j of the density is normalized using the mean ρ̅ and standard deviation σ_ρ of the whole training set:
ρ_j ↦ (ρ_j - ρ̅)/σ_ρ =: input_j,
(where we dropped the method index X for convenience of notation).
Conversely, the output of the model is rescaled by the standard deviation σ_F of the training energies and shifted by their mean F̅:
output ↦ σ_F · output + F̅ =: ℱ^ML[ρ].
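A minimal Keras sketch of this architecture is given below. The periodic convolution is implemented through circular padding; the activation functions are not specified above, so the ReLU choice here is our assumption.

    import tensorflow as tf

    class PeriodicConv1D(tf.keras.layers.Layer):
        # 1D convolution with circular padding, matching the periodic chain.
        def __init__(self, filters, kernel_size):
            super().__init__()
            self.pad = kernel_size // 2
            self.conv = tf.keras.layers.Conv1D(filters, kernel_size, padding="valid")

        def call(self, x):
            x = tf.concat([x[:, -self.pad:, :], x, x[:, :self.pad, :]], axis=1)
            return self.conv(x)

    def build_model(L=8):
        inputs = tf.keras.Input(shape=(L, 1))  # normalized density profile
        x = PeriodicConv1D(8, 3)(inputs)       # 8 local features, kernel size 3
        x = tf.keras.layers.Activation("relu")(x)
        x = tf.keras.layers.Flatten()(x)
        x = tf.keras.layers.Dense(128, activation="relu")(x)
        x = tf.keras.layers.Dense(128, activation="relu")(x)
        outputs = tf.keras.layers.Dense(1)(x)  # normalized prediction of F
        return tf.keras.Model(inputs, outputs)

    model = build_model()
    model.compile(optimizer="adam", loss="mse")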
In potential real-world application of our method to noisy data from systems too large for exact benchmarking, the training data becomes the sole source of information for model evaluation. Thus, model selection, hyperparameter optimization and validation must be conducted exclusively using the available (noisy) dataset.
To address this, we use 5-fold cross-validation <cit.>.
This involves partitioning the noisy training dataset into five equally sized subsets (folds).
During each iteration, one fold is set aside as the validation set, while the remaining four folds are used to train the model.
The performance metrics obtained from each fold are then averaged to provide a comprehensive evaluation of the model performance.
The target functional ℱ[ρ] is invariant under translations, mirror symmetry, and their combination.
This means that ℱ[ρ] does not change under the transformation
ρ_j ↦ρ_(± j + k) mod N , ∀ k ∈{0, ..., N-1}.
We exploit this to augment the training dataset: from each pair (_X, _X) we construct 16 data points, applying the N=8 shifts and N=8 mirror-and-shifts to the density and copying the energy <cit.>.
This data augmentation is performed after the cross-validation split, on the training and validation splits separately.
The data points within the training split are then scrambled before dividing in batches for training.
The model is trained with the Adam optimizer <cit.>.
To avoid overfitting, we implement an early stopping strategy which monitors the validation loss during training, halting when this loss stops decreasing.
The model and its training are implemented using the TensorFlow-Keras library <cit.>.
§.§ Alternative model selection
Given the changes in the underlying statistical structure of the data in this work compared to Ref. <cit.>, we evaluate the suitability of the CNN model by comparing its performance with other machine learning models known for their robust regression capabilities and ability to handle noise and outliers.
The objective of this section is to determine if any of these alternative models can enhance the learning task with our noisy data.
Our findings indicate that the CNN model is the best performing model.
Nonetheless, we present our comparative study for completeness.
The ensemble of models to be evaluated comprises 11 regression models,
including the convolutional neural network (CNN) model from Ref. <cit.>, three linear regression-based approaches known for their outlier robustness (Huber regression <cit.>, Theil-Sen regression (TS) <cit.>, and random sample consensus (RANSAC) regression <cit.>), five decision tree models (AdaBoost (AB) <cit.>, random forest (RF) <cit.>, gradient boosting (GB) <cit.>), support vector regression (SVR) <cit.>, nearest neighbor regression (KNN) <cit.>, and XGBoost (XGB) <cit.>.
For the CNN model, we adopted the hyperparameters from Nelson et al. <cit.>, while for the other models, we used standard baseline hyperparameters.
To evaluate model performance, we present results for models trained on data generated from both the VQE (using an NPF ansatz with depth d=5) and EVE (M=2000 shots) methods.
These two instances are chosen to represent the typical characteristics of VQE and EVE datasets, and we focus exclusively on them for clarity and readability.
For benchmarking purposes, we also include performance metrics for models trained on exact data.
In scenarios where our method is applied to noisy data from systems that are too large for exact benchmarking, cross-validation on the training data will be the only resource available.
Consequently, model selection and hyperparameter optimization must be based solely on these datasets.
For this, we employ 5-fold cross-validation, as introduced in Sec. <ref>.
The comparative performance of the models is illustrated by the mean and standard deviation of their cross-validation scores, as shown in the top panel of Fig. <ref>.
Additionally, both panels display the baseline mean-squared error (MSE) of the noisy data used for training and validation, indicated by dashed lines.
The bottom panel of Fig. <ref> illustrates the models' performance compared to the exact benchmark on a separate test set, generated with a distinct set of potentials not used during training.
While this information is not directly used in the model selection process, it provides insights into the models' ability to generalize to noiseless data.
The cross-validation scores for models trained on EVE data never fall below the baseline error of the training data.
This is because the validation dataset is also noisy, and with EVE's noise being randomly distributed with a zero mean, the model cannot effectively learn or predict the noise pattern.
Among the models tested, those achieving the lowest validation scores on this dataset, aside from the optimized CNN model, are Support Vector Regression (SVR), Gradient Boosting (GB), Random Forest, and XGBoost (XGB).
These models also perform the best in predicting noiseless data.
We observe a characteristic pattern: a cross-validation score close to the baseline of unbiased noise indicates that the underlying model has learned patterns that extrapolate, to some extent, to noiseless data.
The performance of models trained on VQE data differs noticeably when tested against exact data.
As discussed in Section <ref>, the optimization error inherent in our VQE data can lead to an overestimation of the ground state energy for a given model instance.
This results in biased noise that affects the training process.
Consequently, the model inadvertently learns to fit this biased noise, which does not exist in the exact data, leading to a worse performance when testing the generalizability.
When the predicted MSE from cross-validation is less than or equal to the MSE between the VQE-generated energies and the exact energies (indicated by the dashed blue line), it suggests that the model has absorbed the bias present in the VQE data.
Specifically, for models 7-13 trained on VQE data, the validation MSE is smaller than the baseline error of the training data.
This observation implies that (1) there is an (expected) bias in the error of VQE and (2) these models learn to predict this bias.
Based on the cross-validation results, the CNN model from Ref. <cit.> performs best across all datasets.
While gradient boosting performs slightly better on the presented VQE dataset (NPF ansatz with d=5), its general performance across VQE datasets is at best comparable to that of the CNN model.
We thus continue further analysis with the CNN model only.
§ RESULTS
In this section, we benchmark the CNN model trained on noisy datasets, analyzing its performance as a function of both the type and magnitude of noise in the training data.
We test the models on two tasks: energy prediction, discussed in Sec. <ref>, and density optimization, discussed in Sec. <ref>.
All benchmarks are conducted using the same test set, consisting of 1,000 new problem instances with random potentials μ⃗_test.
These are solved with exact diagonalization, obtaining a set of exact test densities ρ⃗_test and energies E_test.
§.§ Energy prediction
In the energy prediction task, we assess the model's accuracy by comparing its predictions on the exact densities, ℱ^ML[ρ⃗_test], against the corresponding exact values F_test.
Figure <ref> summarizes this comparison for four models, each trained on a dataset constructed with a different method: exact diagonalization, EVE (M=1000), VHA-VQE (d=6), and NPF-VQE (d=6).
The figure also reports the noisy training energies F̃, compared to the respective exact values.
The model trained on exact data serves as a benchmark, representing an upper limit of performance.
Models trained on the same amount of noisy data points are expected to perform less accurately.
The EVE data are spread uniformly around the diagonal line, as sampling noise is unbiased.
The model trained on this data demonstrates improved energy predictions, which cluster closer to the diagonal line compared to the noisy training data.
Conversely, the VQE data suffer from a positive bias due to the limited expressive power of the ansatz.
Variational methods always approximate the ground state energy from above, Ẽ_VQE≥ E.
As a consequence, F̃_VQE typically (albeit not always) overestimates F.
We observe this bias is larger for smaller values of F, particularly for the d=6 NPF ansatz.
This effect is likely due to the variation in the strength of the potential μ⃗, which influences the efficiency of VQE – larger potentials cause stronger localization, thereby increasing the kinetic and interaction energy as electrons are more confined.
Localization also reduces ground state correlations, which can enhance the performance of limited-depth VQEs.
The predictions of the models trained on VQE data are distributed similarly to their respective training data, indicating that the model learns the inherent bias.
To systematically compare models trained on datasets with varying noise strengths, we summarize the results of the energy prediction task with the mean squared error (MSE)
MSE_ℱ^ML = 1/N_test ∑_test set |ℱ^ML[ρ⃗_test] - F_test|^2.
This is compared to the raw MSE of the method used to generate the training data, calculated directly from the training set as
MSE_X = 1/N_train ∑_train set |F̃_X - F|^2.
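In code, both quantities reduce to elementwise squared deviations; a sketch with illustrative array names:

```python
import numpy as np

def mse_model(F_pred_test, F_exact_test):
    # MSE of the learned functional on exact test densities.
    return np.mean(np.abs(F_pred_test - F_exact_test) ** 2)

def mse_raw(F_noisy_train, F_exact_train):
    # Raw MSE of the data-generation method (EVE or VQE), computed
    # directly on the training set against exact references.
    return np.mean(np.abs(F_noisy_train - F_exact_train) ** 2)
```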
Figure <ref> shows the performance of the model trained on EVE data as a function of the number of samples M.
The squared error in the training data decreases proportionally to 1/M, consistent with the standard sampling limit.
(The factor 4 arises from the average ground-state variance of the two operators T̂ and Û, which are sampled separately as discussed in Sec. <ref>.)
The model's predictions improve on this MSE by up to a factor of 20 in the case of large sampling noise.
As the sampling noise decreases, the model's performance saturates to that of the model trained on exact data.
It is important to note that the total number of samples needed to generate the training dataset is 1000·M, uniformly distributed over the 1000 different problem instances.
The improvement is due to the model's ability to synthesize the information from all these training instances, distilling the underlying trend while filtering out the unbiased noise.
Figure <ref> presents the same performance benchmark for the model trained on VQE data.
The MSE of the VQE, for both ansätze considered, decreases exponentially with the depth of the ansatz.
Here, the depth d represents the number of repeated layers in the ansatz, while the upper-axis labels indicate the number of parameters.
For depths larger than d=6 the optimization of the VHA becomes significantly harder, thus we explore the larger-parameter-number regime with the less-structured NPF ansatz.
The learned functional generalizes the training dataset successfully, showing an MSE comparable to that of the respective training data for all depths and both ansätze.
Unlike the models trained on EVE, those trained on VQE data do not manage to learn the underlying functional while filtering out noise.
This limitation is due to the bias intrinsic to the VQE errors, which is learned by the model.
Although there is some quasi-stochastic error in the VQE results, caused by the optimization converging to different local minima of the energy, the dominant factor appears to be the biased error due to the limited expressibility of the ansatz.
§.§ Density optimization
The main application of DFT functionals lies in solving new problem instances through density optimization.
The goal of density optimization is equivalent to that of the usual Kohn-Sham self-consistent field procedure, but rather than optimizing a set of single-particle orbitals together with an exchange-correlation functional, we directly optimize the total energy functional in Eq. (<ref>) with respect to the density (subject to appropriate constraints).
In this section we explore the performance of the ML functional learned from noisy data using the CNN model in this application.
For each test set potential μ⃗_test and each considered learned functional ℱ^ML, we construct the total energy functional
ℰ^ML[ρ⃗] = ℱ^ML[ρ⃗] + μ⃗_test·ρ⃗.
We then minimize ℰ^ML[ρ⃗] under the constraints ρ_j ∈ [0, 2] for each site j (positive and bounded occupation) and ∑_j ρ_j = 4 (fixed total number of particles); this yields predictions for the ground state density ρ⃗^* and energy E^* = ℰ^ML[ρ⃗^*].
The minimization is performed using the Sequential Least Squares Programming (SLSQP) method implemented in the SciPy package <cit.>, exploiting automatic differentiation of the CNN model to obtain analytical gradients of ℰ^ML[ρ⃗].
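A sketch of this constrained minimization with SciPy's SLSQP follows; the callables and the uniform initial density are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def optimize_density(energy_fn, grad_fn, n_sites, n_particles=4):
    """Minimize E[rho] = F_ML[rho] + mu . rho under occupation constraints.

    energy_fn / grad_fn return E[rho] and dE/drho, e.g., obtained from
    the CNN via automatic differentiation.
    """
    rho0 = np.full(n_sites, n_particles / n_sites)   # uniform initial guess
    bounds = [(0.0, 2.0)] * n_sites                  # 0 <= rho_j <= 2
    constraints = [{"type": "eq",                    # sum_j rho_j = n_particles
                    "fun": lambda rho: rho.sum() - n_particles}]
    res = minimize(energy_fn, rho0, jac=grad_fn, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x, res.fun  # optimized density rho* and energy E*
```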
Figure <ref> shows the resulting errors for a selection of models trained on datasets constructed with methods of varying accuracy: exact diagonalization, EVE (M=1000), NPF-VQE (d=6), and NPF-VQE (d=11).
For energy we consider the absolute deviation |E^* - E_test|, and for density we consider the error in ℓ^2-norm
‖ρ⃗^* - ρ⃗_test‖_2 = √(∑_j |ρ^*_j - ρ_test,j|^2).
In the scatter plots we show these errors for each of the test set instances versus the strength of the potential —
as the average value of the potential μ̅ = ∑_j μ_j shifts the functional Eq. (<ref>) by a constant and does not affect the optimization, the strength of the potential is characterized by its standard deviation stddev(μ⃗) = ‖μ⃗ - μ̅‖_2.
To the right of each scatter plot, the distribution of the errors is summarized in a histogram.
All histograms display bimodal distributions, with two clearly separated lobes at lower and higher errors.
On each histogram, we show the line that separates the two modes and report the fraction of instances in the higher-error mode.
The results in the left panel of Fig. <ref> show that, even when using the model trained on exact data, density optimization does not always converge to the correct result.
In particular, for the largest potential strengths the optimization yields densities and energies with large errors.
We attribute this to the limited availability of training points near the edges of the potential distribution.
The region of smaller potential strengths, at the center of the distribution of μ⃗, has more training data available; there the model can learn and generalize correctly.
Farther from the center of the distribution, where the potential strength is larger, few data points are available to cover a wider area of possible cases.
We estimate 16% of the test set potentials lie in this edge area, and a similar fraction of training points is available there.
While this is unavoidable for a continuous and unbounded distribution of problem instances, one can mitigate the problem by training the model on a set of potentials wider than the application range.
Conversely, for the smallest potentials, the density optimization sometimes yields correct energies and wrong densities.
This occurs because, for very small potentials, the considered Hubbard model can exhibit two quasi-degenerate states with similar energies but significantly different densities.
As a consequence ℰ[] has multiple local minima for very distant densities, and the local optimizer we use sometimes finds a wrong density with a good associated energy.
Both of these effects, for the smallest and largest potential strengths, affect the results of the models trained using noisy data in a qualitatively similar way.
Noisy training data causes an overall increase of the error on densities and energies, and shrinks the range of potentials over which the results of density optimization are reliable.
In the case of VQE with lower depth (NPF d=6), we can see a concentration of the energies due to the large positive bias of the training data, which dominates the energy error.
§ DISCUSSION AND OUTLOOK
In this work, we demonstrated the application of machine learning models motivated by density functional theory to generalize and enhance a set of data subject to noise characteristic of near-term quantum algorithms.
We showed that meaningful density functionals can be learned from a small amount of noisy training data and benchmarked the performance of these models in their usual DFT applications.
Comparing the performance of the learned models to that of their respective training data, we showed that a significant improvement is achieved only when the noise in the input data is unbiased (Fig. <ref>).
This implies that learning techniques are well-suited to improve results from quantum algorithms that suffer from sampling noise.
However, even when the trained models learn the inherent dataset bias, the proposed technique can still be useful for generalizing the dataset to new problem instances (Fig. <ref>).
In the near future, quantum devices might be able to generate approximate solutions for quantum simulation problems that are beyond the reach of classical computation but remain too noisy to be directly useful.
Nonetheless, this data could be valuable for supplementing the training of classical machine learning models, possibly in combination with other classically-generated data.
In this context, it is important to characterize the effect of noise in quantum data on the trained models.
Our study focused on analyzing the (biased) VQE error and (unbiased) sampling noise separately.
However, in practical implementations on quantum hardware, both sampling and optimization noise critically influence the performance and accuracy of VQE algorithms.
Additionally, hardware noise is a significant factor.
Investigating the combined effects of these noise sources presents a promising direction for future research.
Furthermore, we concentrated on learning from a very limited dataset, comprising only 1,000 training and validation instances.
To better understand the trade-offs involved in achieving functionals with useful precision, it would be relevant to study the performance of learning as a function of both dataset size and problem size.
Additionally, advancing towards realistic applications will require the development of techniques that enable learning from mixed datasets, combining results from approximate classical algorithms and quantum data.
In this context, transfer learning might offer valuable insights <cit.>.
In this study, we focused on learning Hohenberg-Kohn density functionals, a cornerstone of density functional theory with extensive literature support; however, the choice of learning targets can generally vary depending on the specific application.
In the future, we envision beyond-classical quantum computations contributing valuable data to further enhance a wider range of existing machine learning-based functionals.
Another potential learning target is the exchange-correlation functional used in Kohn-Sham density functional theory, which differs from our functional by a classical estimate of the kinetic energy expectation value, based on a product state.
Alternatively, learning targets could be drawn from 1-particle reduced density matrix functional theory, where the underlying compressed representation of the quantum state is its 1-particle reduced density matrix <cit.>.
Finally, one could consider directly learning the mapping μ⃗ → E, also known as the Hohenberg-Kohn mapping <cit.>.
A comparative study could help determine which of these learning targets is most resilient to noise in the data, offering better adaptability in real-world scenarios.
§ ACKNOWLEDGEMENTS
We thank Vedran Dunjko and Patrick Emonts for their useful feedback.
This work was supported by the Dutch National Growth Fund (NGF), as part of the Quantum Delta NL programme.
S.P. and E.K. acknowledge support from Shell Global Solutions BV.
J.T. acknowledges the support received from
the European Union’s Horizon Europe research and innovation programme through the ERC StG FINE-TEA-SQUAD (Grant No. 101040729).
This publication is part of the `Quantum Inspire - the Dutch Quantum Computer in the Cloud' project (with project number NWA.1292.19.194) of the
NWA research program `Research on Routes by Consortia (ORC)', which is funded by the Netherlands Organization for Scientific Research (NWO).
The views and opinions expressed here are solely those
of the authors and do not necessarily reflect those of the
funding institutions.
Neither of the funding institutions
can be held responsible for them.
§ CODE AND DATA AVAILABILITY
The density-energy pair data used for training the machine learning models, along with the code used to generate it and to reproduce the plots in this paper, is available at the repository: https://github.com/StefanoPolla/DFTQMLhttps://github.com/StefanoPolla/DFTQML.
§ COMPETING INTERESTS
The authors declare no competing interests.
§ CONTRIBUTIONS
E.P., E.K. and S.P. conceived the project and developed a first prototype of the code.
S.P. developed the codebase for data generation and learning.
E.K. developed and optimized VQE.
F.F. performed the model selection studies.
E.K., F.F. and S.P. wrote the manuscript, which all authors contributed to reviewing.
J.T., E.vN. and S.P. supervised the work and provided funding.
http://arxiv.org/abs/2409.03662v1 | 2024-09-05 | The representation landscape of few-shot learning and fine-tuning in large language models | Diego Doimo, Alessandro Serra, Alessio Ansuini, Alberto Cazzaniga | cs.CL | cs.CL, cs.LG
§ ABSTRACT
In-context learning (ICL) and supervised fine-tuning (SFT) are two common strategies for improving the performance of modern large language models (LLMs) on specific tasks.
Despite their different natures, these strategies often lead to comparable performance gains.
However, little is known about whether they induce similar representations inside LLMs.
We approach this problem by analyzing the probability landscape of their hidden representations in the two cases. More specifically, we compare how LLMs solve the same question-answering task, finding that ICL and SFT create very different internal structures, in both cases undergoing a sharp transition in the middle of the network.
In the first half of the network, ICL shapes interpretable representations hierarchically organized according to their semantic content. In contrast, the probability landscape obtained with SFT is fuzzier and semantically mixed.
In the second half of the model, the fine-tuned representations develop probability modes that better encode the identity of answers, while the landscape of ICL representations is characterized by less defined peaks.
Our approach reveals the diverse computational strategies developed inside LLMs to solve the same task across different conditions, allowing us to make a step towards designing optimal methods to extract information from language models.
§ INTRODUCTION
With the rise of pre-trained large language models (LLMs), supervised fine-tuning (SFT) and in-context learning (ICL) have become the central paradigms for solving domain-specific language tasks <cit.>.
Fine-tuning requires a set of labeled examples to adapt the pre-trained LLM to the target task by modifying all or part of the model parameters.
ICL, on the other hand, is preferred when little or no supervised data is available.
In ICL, the model receives an input request with a task description followed by a few examples. It then generates a response based on this context without updating its parameters.
While the operational differences between SFT and ICL in how they handle model parameters are clear, it is less clear how they affect the model's representation space.
Although both methods can achieve similar performance, it is unknown whether they also structure their internal representations in the same way.
Differently from recent contributions which compared ICL and SFT in terms of generalization performance <cit.> and efficiency <cit.>, in this work we analyze how these two learning paradigms affect the geometry of the representations.
Previous studies described the geometry of the hidden layers using distances and angles <cit.>, often relying on low-dimensional projections of the data <cit.>.
In contrast, we take a density-based approach <cit.> that leverages the low-dimensionality of the hidden layers <cit.> and finds the probability modes directly on the data manifold without performing an explicit dimensional reduction.
For ICL and fine-tuning, we track how the multimodal structure of the data evolves across different layers as LLMs solve a semantically rich multiple-choice question task and measure how the geometrical properties intertwine with the emergence of high-level, abstract concepts.
We study ICL in a few-shot prompting setup and find that despite ICL and SFT can reach the same performance, they affect the geometry of the representations differently.
Few-shot learning creates more interpretable representations in early layers of the network, organized according to the underlying semantics of data (Fig. <ref>, top-center).
On the other hand, fine-tuning induces a multimodal structure coherent with the answer identity in the later layers of the network (Fig. <ref>, bottom-right).
The key findings of our study are:
* Both few-shot learning and fine-tuning show a clear division between model layers, a marked peak in the intrinsic dimension of the dataset, and a sudden change in the geometrical structure of the representations (see Sec. <ref>);
* Few-shot learning leads to semantically meaningful clustering of the representations in early layers, organized hierarchically by subject (see Section <ref>);
* Fine-tuning enhances the sharp emergence of clustered representations according to answers in the second half of the network (see Section <ref>).
In summary, our geometric analysis reveals a transition between layers that encode high-level semantic information and those involved in generating answers.
By studying the probability landscape on either side of this transition, we uncover how fine-tuning and few-shot learning take different approaches to extract information from LLMs, ultimately solving the same problem in distinct ways.
§ RELATED WORK
Probing the geometry of the embeddings.
A classical approach to understanding the linguistic content encoded in token representations is probing <cit.>.
Inspecting the embeddings with linear <cit.> or nonlinear <cit.> classifier probes allowed to extract morphological <cit.> syntactic <cit.> and semantic <cit.> information.
Probing has also been used to explore the geometrical properties of hidden layers. LLMs can linearly represent in their hidden layers concepts such as space and time <cit.>, the structure of the Othello game board <cit.>, and the truth or falsehood of factual statements <cit.>. The linear representation hypothesis <cit.> can be used to show that LLMs encode hyponym relations as simplices in their last hidden layer <cit.>.
However, another line of work suggests that LLMs represent temporal concepts like days, months, and years, as well as arithmetic operations in a nonlinear way using circular and cylindric features <cit.>.
Describing the geometry of the embeddings directly.
Directly analyzing the geometrical distribution of embedding vectors and their clustering can also provide insights into how LLMs organize their internal knowledge.
Early studies on BERT, GPT2, and ELMo found that token embeddings are distributed anisotropically, forming narrow cones <cit.> and isolated clusters <cit.>.
This phenomenon has also been observed in CLIP embeddings <cit.>, where image and text tokens occupy different cones, separated by a gap.
A similar gap also exists between the subspace of different languages in multilingual LLMs <cit.>, which enables approximating language translation with geometric translation between these subspaces.
Recent work has shown that changes in the intrinsic dimension of the representations are related to different stages of information processing in LLMs, linking the rise of
abstract semantic content to layers characterized by geometric compression <cit.> and the transition from surface-level to syntactic and semantic processing to layers of high dimension <cit.>.
Comparing in-context learning and fine-tuning in LLMs. Recent studies have compared ICL to fine-tuning in LLMs by analyzing their ability to generalize, their efficiency, and how well they handle changes in the training data.
Several factors can influence the outcomes when comparing ICL and fine-tuning. These include the format of the prompts in ICL <cit.>, the amount of training data used for fine-tuning <cit.>, and the size of models being compared <cit.>.
In large models, ICL can be more robust to domain shifts and text perturbations than fine-tuning smaller-scale models <cit.>.
However, when ICL and fine-tuning are compared in models of the same size fine-tuned on sufficient data, SFT can be more robust out-of-distribution, especially for medium-sized models <cit.>.
Additionally, SFT achieves higher accuracy with lower inference costs <cit.>.
§ METHODS
§.§ Models and Dataset.
MMLU Dataset. We analyze the Massive Multitask Language Understanding question-answering dataset, MMLU <cit.>.
The MMLU test set is one of the most widely used benchmarks for testing factual knowledge in state-of-the-art LLMs <cit.>.
The dataset consists of multiple-choice question-answer pairs divided into 57 subjects. All the questions have four possible options labeled with the letters "A," "B," "C," and "D." When prompted with a question and a set of options, the LLMs must output the letter of the right answer.
In this work, we will analyze the MMLU test set, where each subject contains at least 100 samples, with a median population of 152 samples and the most populated class containing about 1534 examples.
To reduce the class imbalance without excessively reducing the dataset size, we randomly choose up to 200 examples from each subject.
The final size of our dataset is 9181.
Language models and token representations analyzed. We study models of the Llama3 <cit.> and Llama2 <cit.> families, and Mistral <cit.>. We choose these LLMs because, as of May 2024, they are among the most competitive open-source models on the MMLU benchmark, with an accuracy significantly higher than the baseline of random guessing (25%, see Table <ref>).
All the models we analyze are decoder-only, with a layer normalization at the beginning of each attention and MLP block. Llama2-7b, Llama3-8b, and Mistral-7b have 32 hidden representations, Llama2-13b 40, Llama2-70 and Llama3-70 80.
In all cases, we analyze the representation of the last token of the prompt after the normalization layer at the beginning of each transformer block. For transformers trained to predict the next token, the last token is the only one that can attend to the whole sequence, and in the output layer, it encodes the answer to the question.
Few-shot and fine-tuning details.
We sample the shots from the MMLU dev set, which has five shots per subject. This choice differs from the standard practice, where the shots are always given in the same order for every input question.
Table <ref> shows that the final accuracies are consistent with the values reported in the models' technical reports <cit.>.
We fine-tune the models with LoRA <cit.> on a training set formed by the union of the dev set and some question-answer pairs from the validation set, reaching an accuracy comparable to the 5-shot one.
The specific training details are in Sec. <ref>.
§.§ Density peaks clustering
We study the structure of the probability density of data representations with the Advanced Density Peaks algorithm (ADP) presented in D'Errico et al. <cit.> and implemented in the DADApy package <cit.>.
ADP is a mode-seeking, density-based clustering algorithm that finds the modes of the probability density by harnessing the low-dimensional structure of the data without performing any explicit dimensional reduction.
ADP also estimates the density of the saddle points between pairs of clusters, which measures their similarity and provides information on their hierarchical arrangement.
At a high level, the ADP algorithm can be divided into three steps: the estimation of the data's intrinsic dimension (ID), the estimation of the density around each point, and a final density-based clustering of the data.
Intrinsic dimension estimation.
We measure the ID of the token embeddings with Gride <cit.>.
Gride estimates the ID of data points embedded in ℝ^D, using the distances between a token and its nearest neighbors.
This is done by maximizing the likelihood function L(μ_k) = d(μ_k^d - 1)^{k-1} / [B(k,k) μ_k^{d(2k-1)+1}], where μ_k is the ratio of the Euclidean distances between a point and its nearest neighbors of rank k_2 = 2k and k_1 = k, d is the ID, and B(k,k) is a normalizing Beta function.
By increasing the value of k, the ID is measured on nearest neighbors at increasing distances.
The ID estimate is chosen as the value where the estimate depends least on the hyperparameter k and the graph d(k) exhibits a plateau <cit.>. On this basis, we choose k = 16.
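A sketch of this estimate using DADApy; method names follow the package's documented interface and should be treated as assumptions to be checked against the installed version:

```python
from dadapy import Data

def estimate_id(X, k=16):
    # X: (n_tokens, hidden_dim) last-token embeddings at a given layer.
    data = Data(X)
    # Gride ID estimates over a range of neighborhood scales; the final
    # estimate is read off where the curve d(k) exhibits a plateau.
    ids, errs, scales = data.return_id_scaling_gride(range_max=2 * k)
    return ids, errs, scales
```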
Density estimation.
We measure the local density ρ_i,k with a kNN estimate: ρ_i,k = k/(N V_i,k). Here, N is the number of data points and V_i,k is the volume of the ball whose radius equals the distance between point i and its k-th nearest neighbor.
Crucially, in this step, we compute the volume using the intrinsic dimension, setting k=16, the value used to estimate the ID.
Density-based clustering.
With the knowledge of the ρ_i, we find a collection of density maxima 𝒞 = {c^1, ..., c^n}, assign the data points around them, and find the saddle point density ρ^α,β between each pair of clusters c^α and c^β with the procedure described in Sec. <ref>.
We can not regard all the local density maxima as genuine probability modes due to random density fluctuations arising from finite sampling.
ADP assesses the statistical reliability of the density maxima with a t-test on logρ^α - logρ^α,β, where ρ^α is the maximum density in c_α.
Once the confidence level Z is fixed, all the clusters that do not pass the t-test are merged since the value of their density peaks is compatible with the density of the saddle point.
The process is repeated until all the peaks satisfy the t-test and are statistically robust with a confidence Z.
We set Z=1.6, the default value of the DADAPy package.
In the following sections, we will also use the notion of core cluster points defined as the set of points with a density higher than the lowest saddle point density. These are the points whose assignation to a cluster is considered reliable <cit.>. With a slight abuse of terminology, we will use the terms "clusters", "density peaks", and "probability modes" interchangeably.
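A sketch of the resulting pipeline with DADApy (method and attribute names are assumptions based on the package interface):

```python
from dadapy import Data

def density_peaks(X, Z=1.6):
    data = Data(X)
    data.compute_distances(maxk=100)   # neighbor graph used by all steps
    data.compute_id_2NN()              # or fix the ID to the Gride estimate
    data.compute_density_kNN(k=16)     # kNN density in the intrinsic dimension
    data.compute_clustering_ADP(Z=Z)   # merge peaks that fail the t-test
    # cluster label per point and log-density of saddle points between peaks
    return data.cluster_assignment, data.log_den_bord
```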
Measuring the cluster similarity.
The ADP algorithm considers two clusters similar if connected through a high-density saddle point.
This is done by defining the dissimilarity S_α,β between a pair of clusters c^α and c^β as
S_α,β = logρ^max - logρ^α,β,
where ρ^max is the density of the highest peak.
With S_α, β we perform hierarchical clustering of the peaks.
We link the peaks starting from the pair with the lowest dissimilarity according to S_α,β,
and update the saddle point density between their union c^γ and the rest of the peaks c^δ_i with the WPGMA linkage strategy <cit.>:
logρ^γ,δ_i = (logρ^α,δ_i + logρ^β,δ_i)/2.
After the update of S, we repeat the procedure until we merge all the clusters.
In this way, we display the topography <cit.> of the representation landscape of a layer, namely the relations between the density peaks, as a dendrogram.
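A sketch of this agglomeration using SciPy, whose 'weighted' linkage implements WPGMA; input array names are illustrative, and saddle entries for peaks sharing no border are mapped to a large dissimilarity:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

def peak_dendrogram(log_den_bord, log_den_peaks):
    # Dissimilarity between peaks: S = log(rho_max) - log(rho_saddle).
    S = log_den_peaks.max() - np.asarray(log_den_bord, dtype=float)
    S[~np.isfinite(S)] = np.nanmax(S[np.isfinite(S)]) + 1.0
    n = S.shape[0]
    condensed = S[np.triu_indices(n, k=1)]     # condensed form for scipy
    link = linkage(condensed, method="weighted")  # WPGMA update rule
    return link  # pass to dendrogram(link) for plotting
```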
Reproducibility.
We run the experiments on a single Nvidia A100 GPU with 40GB of memory. Extracting the hidden representations of the 70-billion-parameter models and fine-tuning them requires distributing the workload across 8 A100 GPUs.
We provide code to reproduce our experiments
at <https://github.com/diegodoimo/geometry_icl_finetuning>.
§ RESULTS
§.§ The geometry of LLMs' representations shows a two-phased behavior.
We start by exploring the geometric properties of the representation landscape of LLMs.
Our analysis proceeds from a broad description of the manifold geometry to its finer details. First, we measure the intrinsic dimension (ID) to understand the global structure of the data manifold.
Next, we will describe the intermediate-scale behavior, counting the number of probability modes.
Finally, we analyze the density distribution at the level of individual data points within clusters.
These three quantities consistently show a two-phased behavior across the hidden layers of the LLMs we analyzed.
All profiles of this and the following sections are smoothed using a moving average over two consecutive layers.
We report the original profiles in the Appendix from Sec. <ref> to <ref> and in Sec. <ref>.
Abrupt changes in intrinsic dimension and probability landscape in middle layers.
We measure the ID of the hidden representations with the Gride algorithm (see Sec. <ref>).
Figure <ref> shows the results for Llama3-8b; the analysis on the other models can be found in Sec. <ref> to <ref> of the Appendix.
The left panel shows that the ID changes through the layers with two phases, increasing during the first half of Llama3-8b and decreasing towards the output layers.
Specifically, the ID rises from 2.5 after the first attention block and peaks around layer 16.
The value at the peak increases with the number of shots, from 14 in the base model to 16.5 when a 5-shot context is added. The fine-tuned model (0-shot) reaches a maximum ID of 21 at this layer.
In the second half of the network, the IDs sharply decrease over the next three layers. For the few-shot representations, the ID profiles gradually decay in the final part of the network, while for the 0-shot models, the ID increases again.
The same two-phased behavior appears in the evolution of the number of clusters on the hidden manifold (center panel).
In the first half of the network, the probability landscape has a higher number of modes, ranging between 60 and 70, when the model is given shots, roughly matching the number of subjects. In the 0-shot and fine-tuned cases, it remains below 40.
After layer 16, the number of peaks decreases significantly, remaining between 10 and 20.
The right panel describes the representation landscape within individual clusters, measuring the fraction of core cluster points – those with a density higher than the lowest saddle point, indicating a reliable cluster assignment <cit.>.
Before layer 17, the core point fraction is higher, indicating a better separation between probability modes.
Notably, the few-shot setting shows a core-point fraction of about 0.6, much higher than the 0-shot and fine-tuned case, which remains around 0.2.
The evolution of ID, number of clusters, and core cluster points is qualitatively consistent among the models we analyzed.
In the Llama2 family, the ID peak is less evident (Fig. <ref>-bottom). In particular, it is absent for the less accurate model, Llama2-7b. Nonetheless, the number of clusters, core points, and the remaining quantities, presented in the following sections, change according to the same two-phased trend as the other models.
§.§ The probability landscape before the transition.
We now describe the data distribution by focusing on the semantic content of the last token, specifically analyzing whether the cluster composition is consistent with the prompt's topic.
Using the 57 MMLU subjects as a reference, we compare the differences in the early layers of the LLMs between ICL and fine-tuning.
Few-shot learning forms clusters grouped by subject.
To evaluate how well the clusters align with the subjects, we use the Adjusted Rand Index (ARI) <cit.>.
An ARI of zero indicates that the density peaks do not correspond to subjects, while an ARI of one means a perfect match (see Appendix <ref> for a detailed presentation).
Figure <ref> shows that as the number of few-shots increases, the ARI rises from below 0.27 in the 0-shot context to 0.82 in Llama3-8b (left), 0.72 in Llama3-70b (center) and 0.63 in Mistral (right) in the 5-shot settings.
These ARI values correspond to a remarkable degree of purity of the clusters with respect to the subject composition.
When the ARI is at its highest, 75 out of 77 clusters in Llama3-8b (layer 4), 53 out of 69 in Llama3-70b (layer 7), and 43 out of 53 in Mistral (layer 5) contain more than 80% of tokens from the same subject.
In the next section, leveraging this high homogeneity of the clusters, we will connect the cluster similarity to the similarity between the subjects of the points they contain.
Additionally, when the number of shots grows, the ARI peak shifts to earlier layers, and the peak becomes narrower.
For example, in Llama3-8b, the 0-shot profile (blue) has a broad plateau extending until layer 13. With one and two-shot settings (orange and green), the profiles show a couple of peaks between layers 3 and 9, and with 5-shots (red), a single large maximum at layer 3.
The trend is consistent across other models, especially those with 70 billion parameters (see Fig. <ref>-center for Llama3-70b, and Fig. <ref> in the Appendix for Llama2-70b).
In all cases, providing a longer, contextually relevant prompt enables models to identify high-level semantic features (i.e., the subjects) more accurately and earlier in their hidden representations.
Few-shot prompting is not the only factor that increases the ARI with the subject. In the 0-shot setup, as the performance improves, LLMs organize their hidden representations more coherently with respect to the subject.
In Llama3-8b and 70b models, where the 0-shot accuracy is above 62% (see Table <ref>), the 0-shot ARI is around 0.25. For the rest of the models with lower accuracy (below 56%), the ARI is below 0.18 (see blue profiles in Figs. <ref> and <ref>).
The distribution of the density peaks mirrors the subject similarity.
When the models learn "in context", not only do the number and composition of density peaks become more consistent with the subjects, but they also organize hierarchically to reflect their semantic relationships.
In this section, we describe only the cluster distribution in Llama3-8b in the layers where the ARI is highest and report the other models in the Appendix, Sec. <ref>.
Figure <ref> shows the probability landscapes of the Llama3-8b in 0-shot (bottom left), 5-shot (top), and fine-tuned model (bottom-right) as dendrograms.
Dendrograms are helpful visual descriptions of hierarchical clustering algorithms <cit.>.
We perform hierarchical clustering of the density peaks using the agglomerative procedure described in Sec. <ref>.
In the layers where the ARI is highest, the density peaks are homogeneous, and we can assign a single subject to each leaf of the dendrogram.
This one-to-one mapping between clusters and subjects allows us to estimate subject similarity based on the dendrogram obtained from the (density-based) hierarchical clustering of the peaks.
In all cases (0-shot, 5-shot, and fine-tuned model), clusters of subjects from the same broader field (STEM, medicine/biology, humanities, etc.) tend to be close together.
However, in 0-shot and fine-tuned settings, the probability landscape has fewer and less pure density peaks at the subject level.
In contrast, in the 5-shot setting, the number of clusters and their purity increase, and the 77 peaks are organized according to their high-level semantic relationships.
For example, in the top panel of Fig. <ref>, four major groups of similar subjects can be identified: medicine and biology (orange), philosophy, jurisprudence, and moral disputes (blue), math, physics, and chemistry (red) and machine learning and computer science (green).
In addition, these groups and hierarchical structures are consistent across different models (see <ref> to <ref>).
For instance, clusters related to statistics, machine learning, and computer science are often grouped together, as are those of chemistry, physics, and electrical engineering, or economy, geography, and global facts.
The structure we described is also robust to changes in the confidence level Z.
In the Appendix, from Sec. <ref> to Sec. <ref>, we report the dendrograms obtained with Z=0 and Z=4.
Importantly, even with Z=0, where all local density maxima are considered as probability modes, the probability landscape of the 0-shot and fine-tuned models remains largely mixed.
In contrast, the probability landscape of 5-shot representation is more stable to variations of Z.
§.§ The probability landscape after the transition.
In Sec. <ref>, we observed that the number of density peaks decreases in the middle layers of the network.
This reduction happens because the model needs to identify the answer from four options at the output, causing points related to the same answer to cluster around the same unembedding vector.
In addition, when the model is uncertain about its predictions, some output embeddings tend to lie close to the decision boundaries of the last hidden representation, resulting in a flatter density distribution with fewer peaks.
Even when the model is highly accurate, the linear separability of the answers does not guarantee distinct density peaks because the embeddings may still be near the decision boundary as long as they are on the correct side.
However, more pronounced density peaks emerge as the model confidence grows and the data moves away from the decision boundaries.
This section shows that SFT sharpens these density peaks in the later layers more than ICL. However, as model size and accuracy increase, the representation landscapes of ICL and SFT become more similar.
Fine-tuned density peaks encode the answers better than few-shot ones.
We evaluate how well the clusters match the answer partition (i.e., "A," "B," "C," "D") using the ARI (see Sec. <ref>).
When the models are fine-tuned, four to five large clusters emerge in the second part of the network, grouping answers with the same label.
These clusters collect more than 70% of the data between layers 20 and 30 in Llama3-8b, more than 90% between layers 45 and 75 of Llama3-70b, and more than 65% between layers 21 and 30 in Mistral.
In these clusters, the most common letter represents over 90% of the points in Llama3-8b, over 70% in Llama3-70b, and over 90% in Mistral.
As a result, the ARI (see purple profiles in Fig. <ref>) rises sharply in the middle of the network, reaching approximately 0.25 in Llama3-8b, 0.45 in Llama3-70b, and 0.2 in Mistral.
These ARI are related to the MMLU test accuracies of 65%, 78.5%, and 62%, respectively (see Table <ref>).
In contrast, in ICL, the clusters are more mixed, and their number is smaller.
In Llama3-8b in the 5-shot setup, one cluster contains 70% of the points in the last layers.
In the 0-shot case, four clusters with a roughly equal distribution of letters contain the same amount of data (blue profile). Similar trends appear in Mistral's late layers. In both models, the ARI values for few-shot context stay below 0.05 (Fig. <ref>).
Interestingly, in Llama3-70b (and to a lesser extent in Llama2-70b, see Fig. <ref>-right),
the representation landscape of ICL starts to resemble that of the fine-tuned models.
Between layers 40 and 77, about 80% of the dataset forms five large peaks, and in four of them, the fraction of the most common letter is above 0.9, similar to fine-tuned models.
Consequently, in these layers, the ARI for few-shot contexts (see orange, green, and red profiles in Fig. <ref>-center) oscillates between 0.35 and 0.40, except for the 0-shot profile, which decays from 0.3 to 0.05 (blue profile).
The different ways in which fine-tuning and ICL shape the representations of the network in the second half depend on the learning protocol, model size, and performance.
In smaller models with moderate accuracy (below 65%/70%), SFT and 5-shot ICL can perform similarly (see Table <ref>), but they alter the geometry of the layers in different ways.
However, with higher accuracy models like Llama3-70b (accuracy above 75%), both the performance of the model and the topography of the hidden representation tend to converge.
Fine-tuning primarily alters the representations after the transition.
In fine-tuned models, training leads to the emergence of structured representations that align with the labels.
Figure <ref> shows where, during the training, layers change the most in Llama3-8b (left), Llama3-70b (center), and Mistral (right).
We compare the fine-tuned checkpoints with the 0-shot representations before training begins.
To measure the similarity between representations, we use the neighborhood overlap metric <cit.> that calculates the fraction of the first k nearest neighbors of each point shared between pairs of representations, averaged over the dataset.
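A sketch of this metric; the value of k is illustrative:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_overlap(X_a, X_b, k=30):
    # Fraction of shared k nearest neighbors between two representations
    # of the same dataset, averaged over all points.
    idx_a = NearestNeighbors(n_neighbors=k + 1).fit(X_a) \
        .kneighbors(X_a, return_distance=False)[:, 1:]  # drop self-neighbor
    idx_b = NearestNeighbors(n_neighbors=k + 1).fit(X_b) \
        .kneighbors(X_b, return_distance=False)[:, 1:]
    return float(np.mean([len(set(a) & set(b)) / k
                          for a, b in zip(idx_a, idx_b)]))
```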
Figure <ref> shows that the similarity between representations is lower in the second part of the networks, decaying more sharply to its final value in the middle of the network from 0.5 to roughly 0.3 between layers 13 and 17 in Llama3-8b and Mistral, from 0.6 to 0.4 after layer 33 in Llama3-70b (see dark blue profiles).
This picture is consistent with what is shown in the previous sections. In the first half of the network, the
representation landscapes of 0-shot and fine-tuned models are similar both geometrically (Fig. <ref>) and semantically (Figs. <ref> and <ref>).
In the second half of the network, where the representations are modified more during the training, fine-tuned models develop fewer peaks, more consistent with the label distribution than those of the other models (Fig. <ref>).
§ DISCUSSION AND CONCLUSION
This study described how the probability landscape within the hidden layers of language models changes as they solve a question-answering task, comparing the differences between in-context learning and fine-tuning.
We identified two phases in the model's internal processing, which are separated by significant changes in the geometry of the middle layers. The transition is marked by a peak in the ID and a sharp decrease in the number and separation of the probability modes.
Notably, few-shot learning and fine-tuning display complementary behavior with respect to this transition.
When examples are included in the prompt, the early layers of LLMs exhibit a well-defined hierarchical organization of the density peaks that recovers semantic relationships among questions' subjects.
Conversely, fine-tuning primarily modifies the representations to encode the answers after the transition in the middle of the network.
Advantages of the density-based clustering approach.
Our research highlights how variations in density within the hidden layers relate to the emergence of different levels of semantic abstraction, a concept previously explored by Doimo et al. <cit.> in convolutional neural networks (CNNs) trained for classification.
In CNNs, the probability landscape remains unimodal until the last handful of layers, where multiple probability modes emerge according to a hierarchical structure that mirrors the similarity of the categories.
In decoder-only LLMs solving a semantically rich question-answering task, these hierarchically organized density peaks appear in the early layers of the models, especially when they learn in context.
Our methodology also extends the work of Park et al. <cit.>, enabling the discovery of meaningful hierarchies of concepts beyond the final hidden layer of LLMs, where the data representation can be non-linear <cit.>.
Moreover, unlike previous studies that utilized k-means to identify clusters within hidden representations <cit.>, the density peak method does not assume a convex cluster shape or impose a priori the number of clusters. Instead,
clusters emerge naturally once a specific Z value is set (see Sec. <ref>).
This allows for the automatic discovery of potentially meaningful data categorizations based on the structure of the representation landscape without specifying external linguistic labels and without introducing additional probing parameters.
In this respect, our approach is similar to that of Michael et al. <cit.>, who used a weakly supervised method relying on pairs of positive/negative samples to uncover latent ontologies within representations.
Intrinsic dimension and information processing.
As discussed in section <ref>, a peak in intrinsic dimension separates two groups of layers serving different functions and being distinctly influenced by in-context learning and supervised fine-tuning.
Other studies have also highlighted the crucial role of ID peaks in marking blocks of layers dedicated to different stages of information processing within deep neural networks.
For example, Ansuini et al. <cit.> observed a peak of the ID in the intermediate layers of CNNs, separating layers that remove low-level image features like brightness from those that focus on extracting abstract concepts necessary for classification.
In transformers trained to generate images, Valeriani et al. <cit.> identified two intermediate peaks delimiting layers rich in semantic features of the data characterized by geometric compression.
In LLMs, Cheng et al. <cit.> showed that an ID peak marks a transition from representation that encodes surface-level linguistic properties to one rich in syntactic and semantic information.
These studies suggest that ID peaks consistently indicate transitions between different stages of information processing within the hidden layers.
Application to adaptive low-rank fine-tuning.
Our findings could improve strategies for adaptive low-rank fine-tuning.
Several studies <cit.> tried to adjust the ranks of the LoRA matrices based on various criteria of ‘importance’ or relevance to downstream tasks.
Our analysis of the similarity between fine-tuned and pre-trained layers (see Fig. <ref>) reveals that later layers are most impacted by fine-tuning, indicating that these layers should be assigned larger ranks. This approach would naturally prevent unnecessary modifications to the early layers during fine-tuning.
Limitations.
Estimating the density reliably requires a good sampling of the probability landscape. This can be a delicate condition if the intrinsic dimension is high, as often happens in neural network hidden layers.
The ID values we report in this work lie between 4 and 22, with most of the layers having an ID below 16.
Rodriguez et al. <cit.> showed that the density can be estimated reliably up to 15-20 dimensional spaces. Still, the upper bounds are problem-specific and depend on the density estimator used, the nature of the data, and the dataset size.
In this work, we analyzed MMLU, which has a semantically rich set of topics characterized by good enough sampling.
In the Appendix, in Sec. <ref>, we show that our results extend to another dataset mixture with a subject partition similar to MMLU. However, the analysis of other QA datasets and generic textual sources would make our observations more general.
The prompt structure can also be made more general. In the current study, we framed ICL as few-shot learning,
but further investigations with more varied contexts would strengthen our findings.
Finally, the transition observed near the middle of the network can be analyzed in more detail, for instance, by providing an interpretation of the mechanism <cit.> underlying the information flow from the context to the last token position <cit.>.
§ ACKNOWLEDGEMENTS AND DISCLOSURE OF FUNDING
We thank Alessandro Laio for many helpful discussions and valuable suggestions. We acknowledge the AREA Science Park supercomputing platform ORFEO and thank the technical support of the Laboratory of Data Engineering staff.
A.A., A.C., and D. D. were supported by the project “Supporto alla diagnosi di malattie rare tramite l'intelligenza artificial"- CUP: F53C22001770002.
A.A., A. C. were supported by the European Union – NextGenerationEU within the project PNRR "PRP@CERIC" IR0000028 - Mission 4 Component 2 Investment 3.1 Action 3.1.1.
A.S. was supported by the project PON “BIO Open Lab (BOL) - Raforzamento del capitale umano”- CUP:
J72F20000940007.
§ APPENDIX
§ FINE-TUNING SETUP.
The dataset on which we fine-tune the models is the union of the MMLU dev set (all five examples per subject are selected) and a subset of the MMLU validation set.
We select up to 20 examples per subject for Llama2-7b and Llama2-13b and up to 40 examples per subject for the rest of the models. In the first case, the dataset size is 1065, and the second is 1439.
We train Llama3-8b and Mistral-7b for 6 epochs and the remaining models for 4.
We fine-tune the models with LoRA. The LoRA rank is 64, α is 16, and dropout = 0.1.
For the 70 billion models, we choose a rank of 128, and α is 32. This is an empirically reasonable choice since the embedding dimension of these models is two times larger than the 7 billion ones. We use a learning rate = 2· 10^-4; for the 70 billion models we decrease it to 1· 10^-4. For all the models, we apply a cosine annealing scheduler and a linear warm-up for 5% of the total iterations.
We fine-tune all the models with batch size = 16 using the Adam optimizer without weight decay.
Below, we report the performance of the MMLU test set on all the 14042 samples, for 0-shot, 5-shot, and fine-tuned models.
To compute the macro average, we first measure the accuracy for each subject.
Then, we compute the arithmetic mean of the subject accuracies.
The micro average is the fraction of correct answers taken over the dataset.
The two quantities differ since the dataset is unbalanced.
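A sketch of the two averages with illustrative array names:

```python
import numpy as np

def macro_micro_accuracy(correct, subjects):
    # correct: boolean array, one entry per question;
    # subjects: subject label per question.
    micro = float(correct.mean())  # fraction correct over the whole set
    macro = float(np.mean([correct[subjects == s].mean()
                           for s in np.unique(subjects)]))
    return macro, micro
```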
§ ITERATIVE SEARCH OF DENSITY PEAKS AND SADDLE POINTS.
Let 𝒩_k(i) be the set of k points nearest to x_i in Euclidean distance at a given layer l.
The first step of the density-peaks clustering is finding the point of maximum density ρ_i (namely the probability peaks).
Data point i is a maximum if the following two properties hold: (I) ρ_i>ρ_j for all the points j belonging to 𝒩_k(i); (II) i does not belong to the neighborhood 𝒩_k(j) of any other point of higher density <cit.>.
(I) and (II) must be jointly verified as the neighborhood ranks are not symmetric between pairs of points.
A different integer label 𝒞 = {c^1, ... c^n} is then assigned to each of the n maxima. The data points that are not maxima are iteratively linked to one of these labels by assigning the same label as its nearest neighbor of higher density to each point.
The set of points with the same label corresponds to a cluster.
The saddle points between two clusters are identified as the points of maximum density between those lying on the borders between the clusters.
A point x_i ∈ c^α is assumed to belong to the border ∂_c^α, c^β with a different peak c^β if there exists a point x_j ∈ 𝒩_k(i) ∩ c^β whose distance from i is smaller than the distance from any other point belonging to c^α.
The saddle point between c^α and c^β is the point of maximum density in ∂_c^α, c^β.
§ THE ADJUSTED RAND INDEX.
To determine the degree to which the density peaks are consistent with the abstractions of the data, we compare the clustering induced by the density peaks algorithm with the reference partitions used in this work, namely the 57 MMLU subjects and the four answer letters.
Among the many possible scores, we choose the Adjusted Rand Index (ARI) <cit.>, one of the best clusters of external evaluation measures according to <cit.>.
The Rand Index (RI) <cit.> measures the consistency between a cluster partition 𝒞 and a reference partition ℛ by counting how many pairs of points:
(a) are placed in the same group in both 𝒞 and ℛ;
(b) are placed in different groups in both 𝒞 and ℛ;
(c) are placed in different groups in 𝒞 but in the same group in ℛ;
(d) are placed in the same group in 𝒞 but in different groups in ℛ;
and measures the consistency as RI = (a+b)/(a+b+c+d).
The Rand Index is not corrected for chance, meaning it does not give a constant value (e.g., zero) when the two assignments are random.
<cit.> proposed to adjust the RI by taking into account the expected value of the Rand Index, n_c, under a suitable null model for chance:
ARI = (a + b - n_c)/(a + b + c + d - n_c).
ARI is equal to 1 when the two partitions are consistent, is 0 when the assignments are random, and can take negative values.
A large value of ARI not only implies that instances of the same class are put in the same cluster (homogeneity) but also that the data points of a class are assigned to a single cluster (completeness).
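In practice, the ARI can be computed directly with scikit-learn; label arrays are illustrative:

```python
from sklearn.metrics import adjusted_rand_score

def ari(cluster_labels, reference_labels):
    # cluster_labels: density-peak assignment per token;
    # reference_labels: e.g., MMLU subject or answer letter per token.
    # ~1 for consistent partitions, ~0 for chance-level agreement.
    return adjusted_rand_score(reference_labels, cluster_labels)
```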
§ ADDITIONAL EXPERIMENTS
§.§ Analysis on an additional dataset mixture.
In this section, we validate the findings shown in Section <ref> on a dataset constructed from TheoremQA <cit.>, ScibenchQA <cit.>, Stemez [3], and RACE <cit.>. This dataset contains roughly 6700 examples not included in MMLU, ten subjects from STEM topics, and a middle school reading comprehension task (RACE), with at least 300 examples per subject. We keep four choices for each answer. The 0-shot, 5-shot, and fine-tuned accuracies in Llama3-8b are 55%, 57%, and 58%, respectively.
In Fig. <ref>-left, we see that the intrinsic dimension profiles have a peak around layers 15/16, the same layers as in MMLU (see Fig. <ref>-left). This peak in ID signals the transition between the two phases described in the paper.
Before layer 17, few-shot models encode better information about the subjects (ARI with subjects above 0.8).
Between layers 3 and 7, the peaks in 5-shot layers reflect the semantic similarity of the subjects (see the dendrograms for layer five reported in Fig. <ref>).
Fine-tuning instead changes the representations after layer 17, where the ARI with the answers for the fine-tuned model is ∼0.15, higher than that of the 5-shot and 0-shot models. The absolute value is lower than that reported in the main paper (Fig. 5-left) because the fine-tuned accuracy on the STEM subjects of this dataset is lower.
Overall, the results are consistent with those shown in the paper for MMLU.
§.§ Intrinsic dimension profiles.
§.§.§ Scale analysis of the intrinsic dimension.
§.§ Number of Clusters
§.§ Core points
§.§ Experiments on Llama2-7b, Llama2-13b and Llama2-70b.
§.§ Profiles without moving average
§.§ Dendrograms (5-shot) in layers where the subject ARI is highest
§.§.§ Llama3-8b
§.§.§ Llama3-70b
§.§.§ Mistral-7b
§.§.§ Llama2-7b
§.§.§ Llama2-13b
§.§.§ Llama2-70b
§ ROBUSTNESS WITH RESPECT TO Z
§.§ ARI profiles with letters
§.§ Dendrograms robustness with respect to Z in Llama3-8b (0-shot, 5-shot, fine-tuned).
The probability landscape of the 0-shot and fine-tuned models remains largely mixed even when Z equals zero, i.e., when all the local density maxima are treated as genuine probability modes.
For 0-shot, the number of clusters increases from 43 to 70, but we still have two large clusters containing a mixture of 22 and 10 subjects (see Fig. <ref>). Similarly, the fine-tuned model has several clusters with more than 7 subjects (see Fig. <ref>).
We also notice that the probability landscape of 5-shot representation is more robust to variations of Z.
Indeed, by increasing Z from 1.6 to 4, the number of density peaks slightly decreases from 77 to 57, and the ARI with the subjects remains around 0.8 (see Fig. <ref>). In contrast, for the 0-shot and fine-tuned representations, it drops to 0 as most of the probability peaks are merged into large-scale structures (Figs. <ref>,
<ref>).
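This sweep over Z can be reproduced, for instance, with the Advanced Density Peaks clustering implemented in the DADApy package; the sketch below follows its documented interface as we recall it, and the method and attribute names should be checked against the installed version:

```python
from dadapy import Data

# representations: (n_points, n_features) activations from a given layer
for Z in (0.0, 1.6, 4.0):
    d = Data(coordinates=representations)
    d.compute_clustering_ADP(Z=Z)       # peaks below Z standard deviations merge
    labels = d.cluster_assignment       # per-point cluster labels
    print(Z, len(set(labels)))          # the number of peaks shrinks as Z grows
```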
§.§.§ Llama3-8b 0shot Z=0
§.§.§ Llama3-8b 0shot Z=1.6
§.§.§ Llama3-8b 0shot Z=4
§.§.§ Llama3-8b 5shot Z=0
§.§.§ Llama3-8b 5shot Z=4
§.§.§ Llama3-8b ft Z=0
§.§.§ Llama3-8b ft Z=1.6
§.§.§ Llama3-8b ft Z=4
|
http://arxiv.org/abs/2409.02257v1 | 20240903193103 | MMLU-Pro+: Evaluating Higher-Order Reasoning and Shortcut Learning in LLMs | [
"Saeid Asgari Taghanaki",
"Aliasgahr Khani",
"Amir Khasahmadi"
] | cs.CL | [
"cs.CL",
"cs.LG"
] |
§ ABSTRACT
Existing benchmarks for large language models (LLMs) increasingly struggle to differentiate between top-performing models, underscoring the need for more challenging evaluation frameworks. We introduce MMLU-Pro+, an enhanced benchmark building upon MMLU-Pro to assess shortcut learning and higher-order reasoning in LLMs. By incorporating questions with multiple correct answers across diverse domains, MMLU-Pro+ tests LLMs' ability to engage in complex reasoning and resist simplistic problem-solving strategies. Our results show that MMLU-Pro+ maintains MMLU-Pro's difficulty while providing a more rigorous test of model discrimination, particularly in multi-correct answer scenarios. We introduce novel metrics like shortcut selection ratio and correct pair identification ratio, offering deeper insights into model behavior and anchoring bias. Evaluations of five state-of-the-art LLMs reveal significant performance gaps, highlighting variations in reasoning abilities and bias susceptibility. We release the dataset and evaluation codes at <https://github.com/asgsaeid/mmlu-pro-plus>.
§ INTRODUCTION
Recent advancements in large language models (LLMs) have led to remarkable improvements in various natural language processing tasks <cit.>. State-of-the-art models such as GPT-4 <cit.>, Claude-3.5 Sonnet <cit.>, Gemini <cit.>, and Llama <cit.> have demonstrated impressive capabilities across a wide range of applications. However, as these models continue to evolve, existing benchmarks are struggling to keep pace, often reaching performance saturation and failing to effectively differentiate between model capabilities <cit.>.
LLMs have been evaluated using a variety of benchmark datasets designed to test different aspects of language understanding and generation. Some of the most prominent benchmarks include IFEval <cit.> for instruction following, BBH (Big-Bench Hard) <cit.> for challenging reasoning tasks, MATH <cit.> for mathematical problem-solving, GPQA <cit.> for general-purpose question answering, and MUSR <cit.> for multi-task language understanding. These join other established benchmarks like the General Language Understanding Evaluation (GLUE) <cit.> and its successor SuperGLUE <cit.>, as well as the Stanford Question Answering Dataset (SQuAD) <cit.> for reading comprehension.
Among these benchmarks, the Massive Multitask Language Understanding (MMLU) benchmark <cit.> has been widely adopted as a standard for evaluating LLMs due to its broad coverage of subjects. However, recent studies have shown that top-performing models are achieving near-identical scores on MMLU, with several models scoring between 86-87% accuracy <cit.>. This saturation raises concerns about the benchmark's ability to measure future advancements in LLM capabilities <cit.>. In response to these limitations, researchers developed MMLU-Pro <cit.>, an iteration of the original MMLU designed to challenge LLMs with more complex, reasoning-focused questions and a greater number of answer options. While MMLU-Pro made significant strides, we identified further areas for enhancement that could improve the benchmark's ability to evaluate LLMs more effectively.
Recent research has highlighted the challenges of shortcut learning, where models exploit superficial patterns rather than developing deeper understanding <cit.>, and the importance of evaluating higher-order reasoning in language models. Geirhos et al. <cit.> provide a comprehensive overview of shortcut learning in deep neural networks, emphasizing its prevalence and impact on model performance. Wei et al. <cit.> demonstrate how chain-of-thought prompting can elicit more sophisticated reasoning, addressing some of the limitations that lead to shortcut learning. The need for more nuanced evaluation methods has been underscored by Bowman and Dahl <cit.>, who argue for fixing benchmarking in natural language understanding. Higher-order reasoning involves complex cognitive processes such as analysis, synthesis, and evaluation of multiple pieces of information <cit.>. It requires models to go beyond simple pattern matching or recall, engaging in more sophisticated thought processes that mimic human-like reasoning more closely <cit.>.
In this paper, we introduce MMLU-Pro+, a new benchmark that builds upon MMLU-Pro by incorporating these insights. The novelty of MMLU-Pro+ lies in its fundamental change to the nature of reasoning required from LLMs. By introducing questions with multiple correct answers, MMLU-Pro+ requires models to: a) Evaluate the validity of multiple statements independently; b) Recognize the potential for more than one correct answer; c) Discern subtle differences between correct and incorrect information; d) Resist the tendency to anchor on a single answer.
This approach increases the complexity of the benchmark, forcing models to engage in higher-order reasoning, recognizing and evaluating nuanced or multi-faceted concepts rather than relying on memorized patterns or simplistic heuristics. It tests a model's robustness and allows for better discrimination between models with varying levels of understanding and reasoning capabilities.
MMLU-Pro+ contributes to the field in several ways:
* It specifically targets the reduction of shortcut learning by introducing questions with multiple correct answers.
* It provides a more realistic evaluation scenario that mirrors real-world complexity, where problems often have multiple valid solutions.
* It introduces new metrics such as the shortcut selection ratio and correct pair identification ratio, offering a more nuanced understanding of model performance beyond simple accuracy.
By addressing these aspects, MMLU-Pro+ serves as a reliable and informative tool for tracking progress in language understanding. It contributes to the ongoing efforts, highlighted by Bommasani et al. <cit.>, to better understand and evaluate the capabilities and limitations of foundation models in AI, specifically targeting the reduction of shortcut learning and the promotion of higher-order reasoning skills in LLMs.
§ DATASET CONSTRUCTION
The construction of MMLU-Pro+ involves a systematic and scalable approach to modifying the original MMLU-Pro dataset, introducing multiple correct answers and various types of distractors to enhance its ability to evaluate higher-order reasoning skills in LLMs.
We begin with the MMLU-Pro dataset <cit.>, which encompasses questions from 14 diverse domains including mathematics, physics, chemistry, law, engineering, psychology, and health. The initial dataset contains over 12,000 questions, each with up to ten answer options.
§.§ Dataset Modification Process
We modify the MMLU-Pro dataset in three distinct categories:
True Positive Pairs.
For these questions, we introduce a "Both X and Y are correct" option, where X is the original correct answer from MMLU-Pro, and Y is a new correct option generated using GPT-4o. The process for generating Y varies depending on the question type: a) For mathematical questions, we prompt the LLM to rewrite numbers or equations in alternative formats. b) For other types of questions, we instruct the LLM to find another correct option not already mentioned in the original choices, or to present the same correct information in a different, more complex way beyond simple paraphrasing. This process can be represented as:
Q_TPP = f_LLM(Q_original, LLM)
where Q_TPP is the modified question with True Positive Pairs, f_LLM is the LLM-based modification function, and LLM refers to GPT-4o.
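Concretely, the generation step might look like the sketch below, where call_llm is a placeholder for any chat-completion client (e.g., one wrapping GPT-4o) and the prompt paraphrases the instructions described above; none of these names come from the released code:

```python
def make_true_positive_pair(question, call_llm):
    """Ask an LLM for a second correct option Y and append 'Both X and Y'.

    question: dict with keys 'question', 'options' (list), 'answer_index'.
    call_llm: function str -> str; a hypothetical wrapper, not a real API.
    """
    x = question["options"][question["answer_index"]]
    prompt = (
        f"Question: {question['question']}\n"
        f"Known correct answer: {x}\n"
        "Give another correct answer not among the existing options, or "
        "restate the correct answer in a more complex alternative form "
        "(for math, rewrite numbers or equations in another format)."
    )
    y = call_llm(prompt).strip()
    question["options"].append(f"Both '{x}' and '{y}' are correct.")
    return question
```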
Partial False Positive Pairs.
For these questions, we create a "Both X and Y are correct" option where X is the original correct answer from MMLU-Pro, and Y is a randomly selected incorrect option from the original set of options.
Complete False Positive Pairs.
For these questions, we create a "Both X and Y are correct" option where X and Y are two randomly selected incorrect options from the original set of options.
This composition allows for a comprehensive evaluation that tests more complex multi-answer reasoning skills, while also challenging LLMs to identify partially or completely incorrect option pairs. From the total of 12,032 questions in the dataset, 3,718 were modified using an LLM to create True Positive Pairs, while 4,153 were modified without LLM intervention. Specifically, 2,029 questions were modified with two wrong options (Complete False Positive Pairs), and 2,124 were modified with one correct and one wrong option (Partial False Positive Pairs). This ensures a robust evaluation across various question types and modification strategies. Incorporating True Positive Pairs tests a model's ability to recognize multiple correct answers, reflecting real-world scenarios where different solutions can be equally valid. Meanwhile, the Partial and Complete False Positive Pairs test the models' ability to discern subtle inaccuracies and resist the tendency to assume correctness when presented with familiar information. This approach not only assesses a model's knowledge but also its capacity for nuanced reasoning and its robustness against potential shortcuts or biases in answering multiple-choice questions.
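The two distractor categories require no LLM call; a schematic of their construction (helper and field names are ours, not from the released code):

```python
import random

def add_false_positive_pair(question, pair_type, rng=random):
    """Append a misleading 'Both X and Y are correct' option.

    pair_type: 'partial'  -> the correct option plus one random wrong option;
               'complete' -> two random wrong options.
    """
    correct = question["options"][question["answer_index"]]
    wrong = [o for i, o in enumerate(question["options"])
             if i != question["answer_index"]]
    if pair_type == "partial":
        x, y = correct, rng.choice(wrong)
    else:
        x, y = rng.sample(wrong, 2)
    question["options"].append(f"Both '{x}' and '{y}' are correct.")
    return question
```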
§.§ Post-Processing and Quality Assurance
To ensure the integrity of MMLU-Pro+, we implemented a rigorous post-processing and validation protocol:
Human Auditing. We conducted a comprehensive audit of 100 samples from each group (True Positive, Partial False Positive, and Complete False Positive Pairs) verifying the accuracy and appropriateness of new options.
Consistency Checks. We performed thorough checks across the entire dataset to ensure newly added options maintain the same style as original ones, preventing unintended evaluation biases.
Error Identification. We systematically identified and flagged potential inconsistencies or errors introduced during the modification process.
Task Differentiation. We ensured that the process of creating true positive pairs differs fundamentally from the task of answering questions, minimizing the risk of model-specific advantages.
Comprehensive Metrics. Our evaluation metrics assess not only accuracy but also bias and shortcut learning across diverse models, providing a holistic view of model performance.
While GPT-4o was used for creating True Positive pairs, our evaluation process is designed to be model-agnostic. This approach ensures that the augmentation process genuinely increases question difficulty rather than introducing biases favoring any particular model. These measures collectively maintain MMLU-Pro+'s ability to differentiate between model capabilities, regardless of which LLM was involved in the augmentation process. This is further validated in our experiments, where GPT-4o is not the top-performing model, demonstrating the benchmark's independence from its creation process.
§ EXPERIMENTS
We evaluated several state-of-the-art LLMs, including two open-source models (LLaMA-3.1 405B Instruct <cit.> and Qwen 2 72B Instruct <cit.>) and three closed-source models (GPT-4o <cit.>, Claude Sonnet 3.5 <cit.>, and Gemini-1.5 Pro <cit.>), using the MMLU-Pro+ benchmark. Models were assessed not only on accuracy but also on their ability to recognize and correctly select the "Both X and Y are correct" option, thereby probing both higher-order reasoning and susceptibility to shortcut learning.
§.§ Accuracy Analysis on MMLU-Pro+
Figure <ref> illustrates the overall performance of each model on the MMLU-Pro+ dataset, while Table <ref> provides a detailed breakdown of performance across individual subject categories, including the comparative drop from MMLU-Pro. These results offer insights into the models' capabilities and the increased challenge presented by MMLU-Pro+.
Sonnet-3.5 demonstrates superior performance across the majority of categories, achieving the highest average accuracy. The consistent outperformance suggests a more robust capability in handling the increased complexity introduced by MMLU-Pro+. The universal decrease in accuracy from MMLU-Pro to MMLU-Pro+ validates the increased challenge of our new benchmark. Notably, Sonnet-3.5 exhibits the smallest average performance drop, indicating enhanced resilience to the modified question format and the introduction of multiple correct answers.
The engineering category reveals an interesting divergence, with Gemini-1.5-Pro and Sonnet-3.5 showing markedly smaller performance drops compared to other models. Conversely, the law category presents an anomaly, with minimal performance drops and even slight improvements for some models, suggesting that the MMLU-Pro+ modifications may have had less impact on this domain. Despite high scores in many categories, GPT-4o exhibits the largest average performance drop, indicating a potential sensitivity to the structural changes in MMLU-Pro+. This suggests that high performance on standard benchmarks may not necessarily translate to robust reasoning capabilities in more complex scenarios.
The performance gap between different models on MMLU-Pro+, particularly the superior performance of models like Claude-3.5-Sonnet compared to GPT-4o (which was used in dataset creation), further validates our dataset construction methodology. Despite GPT-4o's involvement in generating True Positive pairs, it does not exhibit an advantage in the question-answering task. This discrepancy in performance across models suggests that MMLU-Pro+ successfully challenges the LLMs, requiring genuine reasoning capabilities rather than pattern matching or exploitation of dataset artifacts.
We also measured models' performance on (1) questions with two correct options (True Positive Pairs), (2) questions with one correct and one incorrect option (Partial False Positive Pairs), and (3) questions with two incorrect options (Complete False Positive Pairs). The results, as illustrated in Figure <ref>, reveal significant variations in model performance across these question types. Interestingly, all models showed lower accuracy on True Positive Pairs compared to the other two categories, suggesting a potential difficulty in identifying multiple correct answers. GPT-4o and Gemini-1.5-Pro showed similar performance patterns, with their highest accuracies in the same category. These findings highlight the varying capabilities of different models in handling nuanced multiple-choice questions.
§.§ Analysis of Anchoring Bias and Shortcut Learning in MMLU-Pro+
Figure <ref> illustrates the propensity of various language models to maintain their original choices when presented with modified questions in MMLU-Pro+, specifically for True Positive Pairs. To quantify this behavior, we introduce the Shortcut Selection Ratio (SSR), defined as follows:
SSR_wrong = N_stayed_wrong/N_total_TPP
SSR_partial = N_stayed_partial/N_total_TPP
Where N_stayed_wrong is the number of times the model stayed on a previously chosen wrong answer, N_stayed_partial is the number of times the model stayed only on the previously correct answer without acknowledging the newly introduced correct option, and N_total_TPP is the total number of True Positive Pairs.
This “shortcut selection ratio” provides insights into potential anchoring bias and shortcut learning behaviors. The graph reveals that all models exhibit a tendency to stick with their initial selections, both for previously wrong and partially correct options, suggesting a degree of anchoring bias. This behavior is particularly pronounced in GPT-4o and Qwen2-72B-Ins, which show higher rates of maintaining their original choices.
The persistence in selecting previously incorrect options (high SSR_wrong) is especially noteworthy, as it indicates potential limitations in these models' ability to reassess and engage in higher-order reasoning when presented with new, valid alternatives. Similarly, a high SSR_partial suggests a failure to recognize newly introduced correct options. Conversely, Gemini 1.5 Pro, Sonnet-3.5, and Llama 1.3 405B demonstrate lower shortcut selection ratios, suggesting a greater capacity for adapting their reasoning in light of new information. These findings highlight the challenges language models face in fully leveraging the additional correct options introduced in MMLU-Pro+, and underscore the importance of developing benchmarks that can effectively evaluate and promote higher-order reasoning.
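Computed from per-question records of a model's choices before and after modification, the two ratios above reduce to simple counting; the field names below are illustrative:

```python
def shortcut_selection_ratios(records):
    """records: dicts for True Positive Pair questions, with keys
    'before' (choice on the original question), 'after' (choice on the
    modified question), and 'old_correct' (the original correct option)."""
    stayed_wrong = stayed_partial = total = 0
    for r in records:
        total += 1
        if r["after"] == r["before"] and r["before"] != r["old_correct"]:
            stayed_wrong += 1    # anchored on a previously wrong choice
        elif r["after"] == r["old_correct"]:
            stayed_partial += 1  # kept the old answer, missing the new pair option
    return stayed_wrong / total, stayed_partial / total
```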
§.§ Analysis of Correct Pair Identification in MMLU-Pro+
In this experiment, we evaluate the models' ability to accurately identify true positive pairs among various types of answer combinations in MMLU-Pro+, providing insights into their reasoning capabilities and resilience to distractors.
Figure <ref> presents an error analysis for various models on MMLU-Pro+, introducing a metric called the correct pair identification ratio. This ratio is defined as:
Correct Pair Identification (CPI) Ratio = N_TPP/(N_PFPP + N_CFPP)
where N_TPP represents the number of correctly identified True Positive Pairs, N_PFPP is the number of times the model incorrectly predicted the Partial False Positive Pair, and N_CFPP is the number of times the model incorrectly predicted the Complete False Positive Pair. This ratio measures the model's ability to identify correct pairs relative to its tendency to be misled by partially or completely incorrect pairs.
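As code, the ratio makes the intended parenthesisation explicit:

```python
def cpi_ratio(n_tpp_correct, n_pfpp_chosen, n_cfpp_chosen):
    # correctly identified True Positive Pairs, per wrongly selected
    # (partially or completely) false pair
    return n_tpp_correct / (n_pfpp_chosen + n_cfpp_chosen)
```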
Sonnet-3.5 achieves the highest ratio (10.26), demonstrating superior discrimination capability in distinguishing correct answer pairs from misleading options. This suggests enhanced resistance to distractors and a more robust grasp of subject matter and question structure. The significant variation in ratios, ranging from 2.80 (Llama-405B-Ins) to 10.26 (Sonnet-3.5), reveals substantial differences in models' higher-order reasoning capabilities. Models with higher ratios exhibit a greater capacity for nuanced understanding, correctly identifying multiple true statements while rejecting plausible but incorrect combinations. These findings highlight MMLU-Pro+'s effectiveness in differentiating models based on their ability to handle complex, multi-correct answer scenarios, underscoring the importance of sophisticated evaluation metrics in assessing advanced language models. Some samples of the True Positive Pairs from the MMLU-Pro+ dataset can be seen in Figure <ref>.
§ INTERPRETATION AND SIGNIFICANCE OF NOVEL METRICS
The Shortcut Selection Ratio (SSR) and Correct Pair Identification (CPI) Ratio provide insights into model behavior that are directly relevant to real-world applications:
SSR: A low SSR indicates a model's ability to adapt its reasoning when presented with new, valid information. This is crucial in dynamic decision-making environments, such as medical diagnosis or financial analysis, where new data may necessitate re-evaluation of initial conclusions.
CPI Ratio: A high CPI Ratio suggests a model's proficiency in distinguishing between subtly different correct and incorrect information combinations. This skill is essential in fields like legal analysis, scientific research, or policy-making, where the ability to discern fine-grained differences in complex information is paramount.
These metrics go beyond simple accuracy, offering a more nuanced understanding of a model's reasoning processes and its potential performance in complex, real-world tasks.
§ DISCUSSION
In this paper, we introduced MMLU-Pro+, an enhanced benchmark designed to evaluate the higher-order reasoning capabilities of large language models. By incorporating questions with multiple correct answers and various types of distractors, MMLU-Pro+ provides a more challenging and discriminative evaluation framework than its predecessors.
Our experimental results demonstrate the effectiveness of MMLU-Pro+ in several key areas:
Increased Difficulty. All evaluated models showed a consistent drop in performance when moving from MMLU-Pro to MMLU-Pro+, confirming the increased challenge of our benchmark.
Model Differentiation. MMLU-Pro+ revealed substantial differences in model performance, with Sonnet-3.5 consistently outperforming other models across most categories.
Anchoring Bias and Shortcut Learning. The shortcut selection ratio analysis exposed varying degrees of anchoring bias across models, highlighting the challenge LLMs face in adapting their reasoning when presented with new, valid alternatives.
Higher-Order Reasoning. The correct pair identification ratio provided insights into models' abilities to distinguish genuinely correct answer pairs from misleading options, with significant variations observed across different models.
These findings underscore the importance of sophisticated evaluation metrics in assessing advanced language models, particularly in scenarios requiring discernment between subtly different correct and incorrect information combinations.
MMLU-Pro+ not only serves as a more reliable and informative benchmark for tracking progress in LLM evaluation but also highlights areas for improvement in current models. The observed anchoring bias and varying abilities to identify correct pairs suggest that even top-performing models may still rely on simplistic heuristics or struggle with truly nuanced reasoning in complex scenarios.
Future work could explore the development of training techniques that specifically target the higher-order reasoning skills evaluated by MMLU-Pro+. Additionally, extending this benchmark approach to other domains or task types could provide a more comprehensive evaluation of LLM capabilities across diverse applications. While we used GPT-4o in our dataset construction process, our results demonstrate that this does not confer an unfair advantage to these or similar models. The significant performance drop of GPT-4o on the modified data, coupled with the superior performance of other models like Claude-3.5-Sonnet, indicates that our methodology produces a dataset that genuinely challenges LLMs. However, future work could explore alternative methods for dataset augmentation to further mitigate any potential biases introduced by LLM-assisted generation.
By providing a more challenging and discriminative benchmark, MMLU-Pro+ contributes to the ongoing effort to develop more capable and robust language models with human-like reasoning and understanding.
|
http://arxiv.org/abs/2409.03661v1 | 20240905161407 | Ensemble noise properties of the European Pulsar Timing Array | [
"Boris Goncharov",
"Shubhit Sardana"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.IM"
] |
§ ABSTRACT
The null hypothesis in Pulsar Timing Array (PTA) analyses includes assumptions about ensemble properties of pulsar time-correlated noise.
These properties are encoded in prior probabilities for the amplitude and the spectral index of the power-law power spectral density of temporal correlations of the noise.
In this work, we introduce a new procedure for numerical marginalisation over the uncertainties in pulsar noise priors.
The procedure may be used in searches for nanohertz gravitational waves and other PTA analyses to resolve prior misspecification at negligible computational cost.
Furthermore, we infer the distribution of amplitudes and spectral indices of the power spectral density of spin noise and dispersion measure variation noise based on the observation of 25 millisecond pulsars by the European Pulsar Timing Array (EPTA).
Our results may be used for the simulation of realistic noise in PTAs.
pulsar timing arrays – pulsars – gravitational waves
§ INTRODUCTION
Pulsar Timing Arrays <cit.> are experiments that monitor pulse arrival times from galactic millisecond pulsars with a primary goal of detecting nanohertz-frequency gravitational waves <cit.>.
The most promising source of such gravitational waves is the stochastic superposition of inspiralling supermassive binary black holes in the nearby universe <cit.>.
Thanks to spacetime metric perturbations from gravitational waves, pulse arrival times experience delays and advances (henceforth, delays).
The power spectral density (PSD) of delays P(f) induced by stochastic gravitational waves manifests temporal correlations and corresponds to the background's characteristic strain spectrum h_c(f).
The Fourier frequency of timing delays f is exactly the gravitational wave frequency.
For the isotropic stochastic gravitational wave background from circular binaries where the inspiral is driven by gravitational wave emission alone, h_c∝ f^-2/3 corresponding to P(f) ∝ f^-13/3, highlighting signal prominence towards lower frequencies or, equivalently, longer observational time scales <cit.>.
At low frequencies, PTAs are limited by time-correlated “red” noise which is also modelled using a power law.
The two most ubiquitous sources of red noise in PTA data are the dispersion measure (DM) variation noise <cit.> and the spin noise <cit.>.
DM noise features an additional dependence P(f) ∝ν^-2, where ν is a radio frequency.
It belongs to a broader class of noise processes called “chromatic”, referring to the dependence of the signal amplitude on radio frequency.
Achromatic red noise that is independent of ν is also called spin noise because it is associated with irregularities in pulsar rotation.
In data of a single pulsar, spin noise acts as a gravitational wave background with a broader range of possible spectral indices.
A decisive contribution of the stochastic background to PTA data is ultimately determined through the covariance of the signal across pulsar pairs.
The covariance follows the <cit.> function of pulsar angular separation.
Contemporary PTA analyses are performed using Bayesian inference, where a uniform prior probability is assumed for pulsar-specific red noise parameters.
Namely, the log-10 amplitude, A, and the spectral index, γ, of noise PSD.
Priors are the models of how likely a parameter with a certain value is to be found in a given data realization.
A prior choice is sufficient as long as it is consistent with an observation.
Uniform priors on PTA noise parameters are designed to be sufficient for single-pulsar noise analyses.
However, all pulsars represent different realizations of data with respect to pulsar noise parameters[While remaining a single realization of data with respect to parameters governing h_c(f) of gravitational wave background.].
So, for full-PTA analyses with multiple pulsars, a mismatch between our prior on ( A,γ) and the observed distribution of these parameters may accumulate across pulsars and lead to biased inference.
This case is known as prior misspecification.
Implicitly, noise prior misspecification is shown by <cit.> to bias measurements of the strain amplitude of the gravitational wave background in PTAs.
The authors have mitigated the bias by allowing the data to choose whether red noise is present or absent in a pulsar.
Furthermore, in simulations of <cit.>, pulsar spin noise with a wide range of ( A,γ) modelled with the standard uniform priors, not representative of ensemble noise properties, yields evidence for a stochastic signal with the same power-law PSD across pulsars (which is not present in the simulated data).
Such a signal, under the assumption of uniform noise priors, has been reported in real PTA data as a possible precursor to Hellings-Downs correlations of the gravitational wave background <cit.>.
<cit.> regularised incorrect pulsar noise priors by allowing noise amplitudes of the common-spectrum signal of the putative gravitational wave background to vary across pulsars.
By showing that this variance is consistent with zero, the authors confirmed the consistency of the signal with the gravitational wave background.
Although the model of <cit.> provides the working solution and has clear use cases, the ultimate solution to the misspecification of noise priors is to parametrise priors for all relevant noise parameters.
Both the statement of the problem and the most general solution are clearly outlined in <cit.>.
The distribution of pulsar noise parameters is inferred simultaneously with a search for Hellings-Downs correlations and other red processes with pulsar-to-pulsar covariance.
The technique is shown to mitigate systematic errors introduced by incorrect noise priors.
In this study, we introduce two new methods for modelling ensemble pulsar noise properties in PTA data.
However, unlike <cit.>, (1) one of our methods can only be used to infer ensemble pulsar noise properties and (2) another method can only be used to remove a systematic error from incorrect pulsar noise priors without gaining access to ensemble noise properties.
Next, we perform inference of ensemble noise properties of pulsars from the Second Data Release <cit.> of the European Pulsar Timing Array <cit.>.
The rest of the paper is organized as follows.
In Section <ref>, we outline the data analysis methodology and an overview of sources of noise in PTAs.
Our two new methods of modelling ensemble pulsar noise properties are presented in Sections <ref> and <ref>.
In Section <ref>, we report on our main results, where each subsection corresponds to a different model of a distribution of noise parameters ( A,γ).
In Section <ref>, we test the robustness of our models to circular analysis.
Finally, in Section <ref>, we draw conclusions.
§ METHODOLOGY
In this study, we analyze the second data release (DR2, <cit.>) of the EPTA.
In particular, we focus our attention on the “DR2full”, which we will thus refer to as EPTA DR2 or data unless specified otherwise.
The data is based on pulse arrival times from a set of 25 millisecond radio pulsars observed over time spans varying between 14 and 25 years.
The data also includes pulsar timing models obtained with the least-squared fitting of pulse arrival times to the models <cit.>.
Timing models describe how pulse arrival times are affected by deterministic properties of individual pulsars such as pulsar spin frequency and derivatives, pulsar position and proper motion in the sky, dispersion measure (DM) and derivatives, and binary orbital parameters (if applicable).
Contributions of other signals and noise processes to data are referred to as “residuals”, implying that they yield a difference between pulse arrival times predicted by the timing model and measured arrival times.
Pulse arrival times are referenced to the position of the Solar System barycenter which is defined based on the DE440 ephemeris <cit.>.
§.§ Standard PTA data analysis methodology
Contributions to the PTA data beyond the timing model are often determined using Bayesian inference as described below.
The likelihood of data δ t (a vector of the measured pulse times of arrival, ToA) is a Gaussian distribution which is multivariate with respect to a number of observations,
ℒ(δ t | θ) = exp( -1/2 (δ t - μ)^T C^-1 (δ t - μ) ) / √(det( 2π C)),
where θ is a vector of parameters of models that describe the data.
In other words, ℒ(δ t| θ) is the time-domain likelihood.
The model prediction for pulse arrival times as a function of time is μ(θ), and C(θ) is a covariance matrix that describes stochastic processes.
Diagonal elements of C, σ^2, correspond to temporally-uncorrelated “white” noise.
It is modelled as σ^2(e_f,e_q) = σ_ToA^2 e_f^2 + e_q^2, where σ_ToA is the measurement uncertainty on the data provided by the initial timing model fit and (e_f,e_q) are white noise model parameters[Also referred to as the error factor (EFAC) and the error in quadrature (EQUAD), respectively.] which are chosen to be separate for each telescope backend-receiver combination.
Off-diagonal elements of C describe temporal correlations.
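For reference, the log of this likelihood is usually evaluated via a Cholesky factorisation of C rather than an explicit inverse; a minimal numpy/scipy sketch:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ln_likelihood(toas, mu, C):
    """ln L(dt | theta) for residuals r = toas - mu and covariance C."""
    r = toas - mu
    cf = cho_factor(C, lower=True)
    ln_det_C = 2.0 * np.sum(np.log(np.diag(cf[0])))  # ln det C from the factor
    chi2 = r @ cho_solve(cf, r)
    return -0.5 * (chi2 + ln_det_C + len(r) * np.log(2.0 * np.pi))
```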
For illustrative purposes, it is convenient to represent temporal correlations as components of μ together with the timing model contributions using a reduced-rank approximation <cit.>:
μ = Fa + Mϵ + Uj + d(t).
Here, F are the Fourier sine and cosine basis function and a are Fourier amplitudes of red processes.
Timing model contributions are similarly modelled via the design matrix M and the coefficients ϵ represent timing model parameters.
Whereas a type of white noise corresponding to timing delays j in units [s] that are the same for all ToAs obtained in the same observing epoch (“jitter” noise) are modelled using the basis U.
Vector d(t) corresponds to other deterministic signals as a function of pulsar observation time.
Let us denote b≡ (a, ϵ, j) ∈θ.
Bayes theorem is used to obtain a posterior distribution of model parameters 𝒫(θ | δ t) from the likelihood and the prior π(θ):
𝒫(θ | δ t) = ℒ(δ t | θ) π(θ)/𝒵,
where 𝒵 is the integral of the numerator over θ, it is termed Bayesian evidence.
Hierarchical inference – the approach of parametrising prior distributions – is already a part of the standard PTA data analysis machinery.
It is employed to reduce the parameter space and to make it physically-motivated.
Instead of Fourier amplitudes a (two per frequency, one per sine term of F and one per cosine term) it is more convenient to measure power-law parameters (A,γ) of the power spectral density of red processes:
P(f|A,γ) = A^2/(12 π^2) (f/f_yr)^-γ.
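In code, and adopting the additional f_yr^-3 normalisation used by common PTA software such as enterprise so that P(f) carries units of s^3 (this extra factor is an assumption here; conventions differ between papers):

```python
import numpy as np

F_YR = 1.0 / (365.25 * 24 * 3600)  # one cycle per year, in Hz

def powerlaw_psd(f, log10_A, gamma):
    A = 10.0 ** log10_A
    return A**2 / (12.0 * np.pi**2) * (f / F_YR) ** (-gamma) / F_YR**3
```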
Thus, the prior on a joins single pulsar likelihoods into a joint posterior:
𝒫(θ^', θ^” | δ t) = 𝒵^-1∫ℒ(δ t | b,θ^') π(b|θ^”) π(θ^”) π(θ^') db,
where θ^' are parameters which are not included in the reduced-rank approximation in Equation <ref>, such that θ=(θ^',b).
Values θ^” are referred to as hyperparameters that include (A,γ).
The integral in Equation <ref> is carried out analytically <cit.>.
The integral represents marginalisation over b, and it is ultimately performed with (F,M,U) and b modelled in C.
For more details, please refer to <cit.>.
Full PTA analysis with stochastic processes that are correlated between pulsars, such as the gravitational wave background, is performed as follows.
To model the PSD of each pulsar-specific noise term, which may vary from pulsar to pulsar, we have a vector of power-law parameters (A,γ), with a pair of noise (hyper-)parameters (A,γ) per pulsar.
We also have a pair of (A_c,γ_c) to model PSD of each “common” stochastic signal, i.e., applicable to all pulsars.
Using notations from Equation <ref>, an “a” component of the term π(b|θ”) for arbitrary pulsars (a,b) and frequencies (i,j) becomes
π_(a,i),(b,j)(a|A_a,γ_a,A_c,γ_c) = P_ai(f_i|A_a,γ_a) δ_abδ_ij +
+ Γ_ab P_i(f_i|A_c,γ_c) δ_ij,
where Γ_ab is the overlap reduction function such as the Hellings-Downs function.
It encodes pulsar-to-pulsar correlations.
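For completeness, the Hellings–Downs form of Γ_ab has a compact closed form in the pulsar angular separation ζ_ab (written here without the same-pulsar δ_ab/2 term):

```python
import numpy as np

def hellings_downs(zeta):
    """Overlap reduction function for an isotropic GW background."""
    x = (1.0 - np.cos(zeta)) / 2.0
    x = np.clip(x, 1e-15, None)  # guard log(0) at zero separation
    return 1.5 * x * np.log(x) - 0.25 * x + 0.5
```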
§.§ Hierarchical inference of ensemble noise properties
As part of the standard PTA analysis routine described in Section <ref>, priors π( A,γ) are assumed to be uniform distributions, 𝒰.
Typically, π( A)=𝒰(-20,-12), π(γ)=𝒰(0,7).
As we pointed out in the Introduction, observational data hints that this model may be incorrect.
The solution is to propose alternative parametrised models π(θ|Λ) which now depend on hyperparameters Λ.
Therefore, generalising ( A, γ) ∈θ, the posterior becomes
𝒫(θ,Λ|δ t) = 𝒵^-1ℒ(δ t|θ) π(θ|Λ) π(Λ).
The approach of <cit.> is to directly evaluate 𝒫(θ,Λ|δ t).
In this work, we propose computationally efficient marginalisation of this posterior over (a) θ, or (b) Λ, as described below.
A summary of the methods is provided in Table <ref>.
§.§.§ Marginalised posterior of hyperparameters
To obtain 𝒫(Λ) marginalised over θ, we propose a solution based on prior reweighting, the application of importance sampling to priors.
The calculation is done in two steps, both of which manifest one global fit to data.
First, one obtains 𝒫(θ',θ”|δ t) for every pulsar assuming a fixed prior on noise parameters π(θ^k_i|∅).
Here, ∅ means that hyperparameters are fixed as in the standard EPTA analysis <cit.>.
Second, one uses the resulting posterior samples and evidence values Z_∅, i to construct a likelihood marginalised over all parameters except Λ, ℒ(δ t | Λ).
It is shown in <cit.> that the likelihood from Equation <ref> can be written in the following form:
ℒ(δ t | Λ) = ∏_i^N_psrs Z_∅, i(δ t_i)/n_i∑_k^n_iπ(θ^k_i|Λ)/π(θ^k_i|∅).
In the above equation, N_psrs is the number of pulsars in a PTA, n_i is the number of posterior samples obtained for a given pulsar, θ^k_i is the k'th posterior sample for i'th pulsar.
Ensemble noise properties of the EPTA presented in this work are obtained based on Equation <ref>.
The limitation of this approach is that it is based on expressing the total PTA likelihood as the product of single-pulsar likelihoods.
Therefore, the approach is blind to Hellings-Downs correlations or other inter-pulsar correlations in the timing data.
The presence of temporal correlations with the same ( A, γ) in all pulsars instead has to be imposed via a prior in individual pulsars.
To model a common-spectrum process <cit.> associated with a gravitational wave background <cit.> in our hierarchical likelihood defined by Equation <ref>, we impose an additional spin-noise-like term with γ=13/3 in individual pulsar likelihoods.
Improved methods for separating the effect of gravitational wave background from pulsar-intrinsic work may be explored in future work.
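Once single-pulsar evidences and posterior samples are stored, Equation <ref> costs only a reweighting pass; a schematic implementation with illustrative names:

```python
import numpy as np

def ln_hyper_likelihood(hyper, pulsars, ln_pi_target, ln_pi_proposal):
    """ln L(dt | Lambda) by prior reweighting.

    pulsars: list of dicts with 'ln_evidence' (under the fixed proposal
             prior) and 'samples', an (n_i, 2) array of (log10_A, gamma)
             posterior draws for that pulsar.
    ln_pi_target(samples, hyper), ln_pi_proposal(samples): per-sample
             log prior densities.
    """
    total = 0.0
    for p in pulsars:
        ln_w = ln_pi_target(p["samples"], hyper) - ln_pi_proposal(p["samples"])
        # ln of the Monte Carlo average of the prior ratio over n_i samples
        total += p["ln_evidence"] + np.logaddexp.reduce(ln_w) - np.log(len(ln_w))
    return total
```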
§.§.§ Marginalisation over hyperparameters
The posterior from Equation <ref> marginalised over Λ can be written in the following form:
𝒫(θ|δ t) = ℒ(δ t|θ) π(θ|∅)/𝒵×1/n_p∑_k^n_pπ(θ|Λ_k)/π(θ|∅),
where Λ_k are n_p samples from the prior.
The reader may notice that the resulting posterior in Equation <ref> is the product (×) of the standard PTA posterior and the weight factor.
A derivation of Equation <ref> is provided in the Appendix.
Unlike Equation <ref>, Equation <ref> allows to simultaneously model Hellings-Downs and other inter-pulsar correlations in the data while accounting for the uncertainty in pulsar noise priors.
It is used more extensively in the companion paper (Goncharov et al., in prep.), where the likelihood is multiplied by the weight factor from Equation <ref>, and the rest of the analysis is performed in a standard way.
Alternatively, the posterior 𝒫(θ|δ t) can be reweighted into a Λ-marginalised posterior using rejection sampling.
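The weight factor in Equation <ref> admits the same Monte Carlo treatment, now averaging over n_p prior draws of Λ (a sketch):

```python
import numpy as np

def ln_weight(theta, hyper_draws, ln_pi_target, ln_pi_proposal):
    """ln of (1/n_p) * sum_k pi(theta | Lambda_k) / pi(theta | standard)."""
    ln_w = np.array([ln_pi_target(theta, lam) for lam in hyper_draws])
    ln_w -= ln_pi_proposal(theta)
    return np.logaddexp.reduce(ln_w) - np.log(len(hyper_draws))
```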
§ RESULTS
In this work, we infer distributions for parameters governing pulsar spin noise, π( A_SN,γ_SN), and DM noise, π( A_DM,γ_DM), for EPTA DR2.
EPTA DR2 contains a common-spectrum stochastic process associated with the gravitational wave background and γ consistent with 13/3 (Goncharov et al., in prep.).
If we do not account for this noise term, it may appear as a “quasi-common” red noise <cit.>.
In other words, as a separate cluster in the distribution of spin noise parameters.
Because we are interested in knowing the distribution of pulsar-intrinsic parameters, we include an additional red noise term with γ_c=13/3 to marginalise over a contribution of the common signal.
This is done during the analysis of single pulsar noise corresponding to the first step in Section <ref>.
It corresponds to modelling common signal in our data via a prior as per Table <ref>.
Because we are agnostic about A_c, the approach imposes a suboptimal uncertainty for measuring ( A, γ) in pulsars where γ is consistent with 13/3 ≈ 4.
Furthermore, to separate contributions of other noise processes such as chromatic noise (including scattering variations) and band- or system-dependent noise, we adopt single-pulsar noise models and optimal numbers of Fourier frequency bins from the EPTA DR2 noise analysis <cit.>.
§.§ Uniform distribution of noise parameters
We start with the case of uniform distributions for noise parameters and we measure hyperparameters that govern uniform prior range.
For DM noise, these are min(A_DM), max(A_DM), min(γ_DM), max(γ_DM).
We apply the same principle to spin noise (SN) parameters.
Pulsar-specific measurements of (A,γ) and the inferred boundaries of the uniform distribution for spin noise and DM noise parameters – with measurement uncertainties – are shown in Figure <ref>.
The first important observation is that the data rules out, with high credibility, the standard values of uniform prior boundaries.
The data suggests that pulsar noise parameters are distributed in a more narrow range.
Posterior density 𝒫( A, γ) from pulsars that do not have evidence for the respective noise term according to the <cit.> is shown in colour.
The data from these pulsars has also contributed to the measurement of hyperparameters.
The inferred noise distribution model extends as little as possible to include as many identified noise parameters in pulsars as possible.
One may, in principle, extend our model to include selection effects associated with a lack of sensitivity for certain A,γ <cit.>.
This would enable excess density of π( A|Λ) towards low A.
§.§ Normal distribution of noise parameters
The uniform prior model may remain a good approximation, but it is not the most natural choice.
Therefore, we further discuss the possibility that pulsar noise parameters are normally distributed.
Thus, for spin noise, π( A_SN,γ_SN | Λ_𝒩), and Λ_𝒩=(μ_ A_SN, σ_ A_SN, μ_γ_SN, σ_γ_SN, ρ_SN).
The same model is applied to DM noise.
Here, μ and σ correspond to the mean and the standard deviation of the normal distribution of either A or γ, as subscripts indicate.
Parameter ρ∈ [-1,1] is the correlation coefficient between A and γ.
It is important to note that we obtained our posterior samples from individual pulsars on step one from Section <ref> based on uniform priors.
So, for step two, there are no posterior samples to recycle from outside of these boundaries.
Therefore, we truncate our normal distribution model and apply the same boundaries.
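One simple way to evaluate such a truncated, correlated normal prior is to renormalise a bivariate normal over the box numerically; a sketch (the Monte Carlo normalisation is our own shortcut, not necessarily what the analysis pipeline does):

```python
import numpy as np
from scipy.stats import multivariate_normal

def truncated_normal_2d(theta, mu, sigma, rho, lo, hi, n_mc=200_000):
    """Density of a correlated bivariate normal on (log10_A, gamma),
    truncated to the box [lo, hi]; theta has shape (n, 2)."""
    theta = np.atleast_2d(theta)
    cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1]**2]])
    mvn = multivariate_normal(mean=mu, cov=cov)
    # Monte Carlo estimate of the probability mass inside the box
    draws = mvn.rvs(size=n_mc, random_state=0)
    z = np.mean(np.all((draws >= lo) & (draws <= hi), axis=1))
    inside = np.all((theta >= lo) & (theta <= hi), axis=1)
    return np.where(inside, mvn.pdf(theta) / z, 0.0)
```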
The inferred boundaries of the truncated normal distribution for spin noise and DM noise parameters are shown in Figure <ref>.
The shaded area corresponds to an uncertainty in (σ,ρ) and arrows separately show the uncertainty in μ.
The uncertainty in ρ is seen for DM noise as the difference between the tilt of the outer border of the shaded area compared to the inner border.
Whereas for spin noise the value of ρ≈ -1 is strongly preferred.
§.§ Distribution of noise parameters as a mixture model
One may also be interested whether the distribution of pulsar noise parameters is more complex.
For example, in Figure 1 in <cit.>, one may notice the clustering of noise parameters in two areas.
First, around γ≈ 1.
Second, a cluster of pulsars around γ=13/3 of the gravitational wave background.
A similar clustering is observed in the PTA noise analysis by the <cit.>, where it is explained that pulsars with γ≈ 13/3 contribute to the common-spectrum process in full-PTA analysis.
An analogous Figure is shown in <cit.>, although the presence of two over-densities is not obvious there.
Although we have taken steps to marginalize the contribution of the common-spectrum process and uncover properties of pulsar-intrinsic noise, it is nevertheless useful to consider the possibility of a mixture model of a Gaussian distribution 𝒩 and a Uniform distribution 𝒰, where a broader Uniform distribution may fit potential outliers.
The density of such a distribution is then ν𝒩 + (1-ν)𝒰, where ν∈ [0,1] is the contribution of the normal distribution.
§.§ Summary of the results
As part of our measurements of hyperparameters, we have also obtained Bayesian evidence values 𝒵.
With this, we perform model selection to determine which of the models fit the observed distribution of pulsar noise parameters best.
First, we find that the standard `static' uniform priors are disfavored compared to hierarchical uniform priors with natural log Bayes factors of ≳ 100.
Therefore, the use of standard uniform priors is not recommended for PTA data analysis.
The rest of the results of our hyperparameter estimation and hierarchical model selection are shown in Table <ref>.
Some ensemble properties of spin noise (SN) are similar to DM noise, whereas some properties are different.
For spin noise, the normal distribution fits the data as well as the uniform distribution.
However, the covariance between A_SN and γ_SN plays an important role.
In Figures <ref> and <ref>, one may notice a diagonal trend which results in finding a covariance parameter ρ_SN≈-1.
The model with a lack of covariance, ρ_SN=0 is disfavored with lnℬ=6.
The covariance between A_SN and γ_SN can be explained by noticing that it follows a line of equal noise power.
In contrast, for DM noise, we do not find strong evidence for covariance between A_DM and γ_DM.
With a lack of covariance, the uniform model is strongly preferred over the Gaussian model with a log Bayes factor of 6.
The mixture model is disfavored by our observations of both spin noise and DM noise, indicating the lack of outlying noise terms.
§ CAVEATS OF THE CIRCULAR ANALYSIS
Circular analysis or “double dipping” refers to the usage of data to inform on the model which is then applied to analyse the same data.
Such a practice may lead to the underestimation of measurement uncertainty and systematic errors.
This point is stressed by van Haasteren (2024, in prep.).
The author argues against the approach of determining which pulsar spin noise terms to include in the full-PTA analysis based on (a lack of) evidence for these terms in single-pulsar noise model selection.
Indeed, double dipping is not a proper statistical treatment, it should be abandoned in favour of the proper model selection and model averaging performed in one global fit, as recommended by the author.
However, to give a fair overview of the problem, we also show that circular analysis does not always yield incorrect results in analyses of PTA data.
Before we discuss the application of hierarchical inference in the subsections below, we would like to point out that it is not clear from van Haasteren (2024, in prep.) that the aforementioned approach of selecting spin noise terms for subsequent full-PTA analyses has led to underestimation of measurement uncertainties reported by PTAs.
For model averaging, the prior odds between the existence or absence of common red noise are proposed to be 50%-50% in van Haasteren (2024, in prep.).
For the case of including a red noise term in every pulsar, the prior odds will be arbitrary and vary from pulsar to pulsar, which is clear from the equations in the Appendix of van Haasteren (2024, in prep.).
However, data informs on these odds, so they should become a model (hyper-)parameter to avoid prior misspecification, the same problem discussed in this work.
These parametrised odds will act as model selection.
In the limit where there is no measurable red noise in pulsars and the data informs on it sufficiently well, this will lead to the elimination of the contribution of one of the models.
Because different noise models lead to different measurement uncertainties for ( A_CP,γ_CP), it is possible that the uncertainty introduced by enforcing red noise to every pulsar is suboptimal.
§.§ A comparison between circular analysis, incorrect priors, and a proper analysis
One example of circular analysis in the context of our hierarchical inference is finding best-fit Λ and using it for the gravitational wave search.
It also leads to resolving prior misspecification but potentially at the cost of a reduced measurement uncertainty for ( A, γ) of the gravitational wave background.
The reduction may come from simply ignoring a part of the intrinsic measurement uncertainty for Λ, so a regular noise fluctuation is more likely to render our estimate of ( A, γ) inconsistent with the true value.
In Figure <ref>, we show three measurements of ( A, γ) of the common-spectrum process in the EPTA data.
Blue dashed contours correspond to a posterior obtained with the standard uniform spin noise priors, π(θ|Λ)=𝒰(θ|∅).
Red dotted contours correspond to π(θ|Λ)=𝒩(θ |μ_0,σ_0,ρ=0), where (μ_0,σ_0) are the values obtained for this model from Table <ref>.
Therefore, the red dotted line corresponds to a circular analysis.
Filled green contours correspond to a proper (global-fit) measurement obtained based on Equation <ref>, it is discussed in more detail in the companion paper by Goncharov et al. (2024, in prep.).
The measurement uncertainty of ( A, γ) obtained with circular analysis matches that obtained with a full hierarchical analysis.
It is even 5-8% larger, suggesting that in the current setting circular analysis overestimates the intrinsic measurement uncertainty.
Note, however, that the circular analysis shifts the maximum-a-posteriori value of ( A, γ), so its use is still not recommended.
§.§ A toy example: circular analysis of common red noise
Keeping in mind the caveats above, for the test purposes we also perform the estimation of hyperparameters Λ for the mixture model of two Gaussian distributions where hyper-priors on the second component of the mixture correspond to EPTA measurement of the common-spectrum process parameters <cit.>.
This exercise remains useful to explore the interplay between pulsar-intrinsic noise and the common-spectrum process and to understand the performance of the model.
For the two-Gaussian mixture model, π( A_SN,γ_SN | Λ_2𝒩), with Λ_2𝒩=(μ_ A_SN^1,2, σ_ A_SN^1,2, μ_γ_SN^1,2, σ_γ_SN^1,2, ρ_SN^1,2, ν_𝒩).
Indices (1,2) refer to each of the two components in a mixture, while ν corresponds to a fraction of the first mixture component in the total prior probability.
For the second component to correspond to the common-spectrum process in the EPTA data, we impose a Gaussian hyperprior determined by the measurement uncertainty on ( A,γ) reported by the EPTA for our data.
We find that the fraction of the second normal component is mostly consistent with one and inconsistent with zero.
However, hyperparameters of the second mixture component – mean and (co-)variance – are well-constrained.
Interestingly, we also find a marginal excess posterior density at (σ_ A_SN^2,σ_γ_SN^2)=(0,0).
This result is related to <cit.>'s measurement of σ_ A to be consistent with zero.
When removing the common-spectrum term with γ=13/3 from our pulsar-intrinsic noise model, the posterior density increases, as expected.
This is shown in Figure <ref>.
§ CONCLUSIONS
We performed inference of ensemble properties of spin noise and DM noise in the 25-year version of the second data release of the European Pulsar Timing Array (EPTA).
Overall, we find that the standard uniform priors used in the previous analysis are too wide and not representative of the observations.
However, a parametrised uniform prior distribution is a good fit.
We, therefore, recommend correctly accounting for ensemble pulsar noise properties using Equation <ref> in future full-PTA analyses to avoid systematic errors (parameter estimation and model selection biases).
§ ACKNOWLEDGEMENTS
We thank Rutger van Haasteren for insightful discussions about hierarchical inference and circular analysis.
Some of our calculations were carried out using the OzSTAR Australian national facility (high-performance computing) at Swinburne University of Technology.
§ DATA AVAILABILITY
The data used for our study, EPTA DR2 by the <cit.>, is available at Zenodo.
The code to reproduce the results of this analysis and the analysis by <cit.> is available in a dedicated branch at https://github.com/bvgoncharov/pta_gwb_priors.
Other data may be provided by the corresponding author upon request.
The PTA likelihood is incorporated in enterprise <cit.> and posterior sampling is performed using ptmcmcsampler <cit.> and dynesty <cit.>.
mnras
§ NUMERICAL MARGINALISATION OVER HYPERPARAMETERS
In this Section, we derive Equation <ref> which describes a hierarchical posterior marginalised over hyperparameters Λ.
After taking an integral over Λ, Equation <ref> becomes
𝒫(θ|δ t) = 𝒵^-1ℒ(δ t|θ) ∫π(θ|Λ) π(Λ) dΛ.
Following the formalism of importance sampling which was used to obtain Equation <ref>, we refer to π(θ|Λ) as the target distribution.
Similarly, we introduce the proposal distribution, the standard PTA noise prior that does not depend on hyperparameters, π(θ|∅).
Next, we multiply Equation <ref> by unity and expand the unity on the right-hand side as a proposal distribution divided by itself.
Rearranging the multipliers,
𝒫(θ|δ t) = ℒ(δ t|θ) π(θ|∅)/𝒵∫π(θ|Λ)/π(θ|∅)π(Λ) dΛ.
Next, we use the expression for the expectation value of a probability density f(x) given the known probability density p(x)[Equation <ref> is also used to derive Equation <ref>. There, p(x) is taken to be a posterior distribution instead.]:
⟨ f(x) ⟩_p(x) = ∫ f(x) p(x) dx ≈1/n_s∑_i^n_sf(x_i),
where x_i are n_s samples from p(x).
Taking π(Λ) as p(x), we arrive at Equation <ref>.
|
http://arxiv.org/abs/2409.02332v1 | 20240903231304 | Double Machine Learning at Scale to Predict Causal Impact of Customer Actions | [
"Sushant More",
"Priya Kotwal",
"Sujith Chappidi",
"Dinesh Mandalapu",
"Chris Khawand"
] | cs.LG | [
"cs.LG",
"econ.EM",
"stat.AP",
"stat.ME"
] |
S. More et al.
Amazon, Seattle WA, USA
{morsusha,kotwalp,jcchappi,mandalap,khawandc}@amazon.com
Double Machine Learning at Scale to Predict Causal Impact of Customer Actions
Sushant More0000-0002-3746-2431 Priya Kotwal0009-0004-6599-359X Sujith Chappidi0009-0009-3310-6067 Dinesh Mandalapu0009-0007-2984-859X Chris Khawand0009-0000-5283-9391
============================================================================================================================================================================
§ ABSTRACT
Causal Impact (CI) of customer actions are broadly used across the industry to inform both short- and long-term investment decisions of various types. In this paper, we apply the double machine learning (DML)
methodology to estimate the CI values across 100s of customer actions of business interest and 100s of millions of customers.
We operationalize DML through a causal ML
library based on Spark with a flexible, JSON-driven model configuration approach
to estimate CI at scale (i.e., across hundred of actions and millions of customers). We
outline the DML methodology and implementation, and associated benefits over
the traditional potential outcomes based CI model. We show population-level as well as customer-level CI
values along with confidence intervals. The validation metrics show a 2.2% gain over the baseline methods and a 2.5X gain in the computational time. Our contribution is to advance the scalable
application of CI, while also providing an interface that allows faster
experimentation, cross-platform support, ability to onboard new use cases, and
improves accessibility of underlying code for partner teams.
§ INTRODUCTION
Causal Impact (CI) is a measure of the incremental change in a customer's outcomes (usually spend or profit) from a customer event or action (e.g, signing up for a paid membership). CI values are used across the industry as important signals of long-term value for multiple decisions, such as marketing content ranking to long-term investment decisions.
Business teams are typically interested in calculating CI values for relevant actions that a customer participates in. Some examples include customer actions such as `first purchase in category X', `first Y stream', or `sign up for program Z'[We use placeholder X,Y,Z to maintain business confidentiality].
The CI values are leveraged by partner teams to understand and improve the value they generate. For many of these customer actions, we are unable to conduct A/B experiments due to practical or legal constraints. CI values are thus estimated off of observational data, effectively leveraging rich customer data to isolate causal relationships in the absence of a randomized experiment.
In this paper, we provide results for average treatment effects and conditional average treatment effects (i.e., customer-level CI values) estimated using a variant on the Double Machine Learning (DML) methodology <cit.>. The paper is arranged as follows. In Sec. <ref>, we give a brief overview of the use and scale of CI in the industry. In Sec. <ref>, we introduce the traditional system used for calculating CI values. We also discuss the shortcomings of the traditional method and the advantages of moving to a DML-based method for calculating CI.
Sec. <ref> covers the details of our DML implementation for calculating CI values. Our contributions include improving the robustness of CI estimates through inverse propensity weighting, adding the ability to produce heterogeneous CI values, implementing customer-level confidence intervals with various assumptions, and making available the JSON Machine Learning interface to accelerate experimentation. We present results in Sec. <ref> for a few customer actions and conclude with the takeaways and ideas for future work in Sec. <ref>.
§ CAUSAL IMPACT ESTIMATION IN INDUSTRY
Causal impact estimation drives a large number of business decisions across industry. This includes multiple organizations such as retail, search, devices, streaming services, and operations. To this end, most companies have invested in developing and deploying models that vend CI values for the customer actions under consideration. In the next section, we give an overview of the traditional potential-outcome based model which is widely used in the industry for CI estimation. This serves as our baseline model.
§.§ CI: Potential Outcomes framework
The CI framework applies the principles of observational causal inference. We rely on it because A/B testing is not feasible for evaluating the impact of certain treatments due to practical constraints (e.g., the treatment is not effectively assignable, or would be too expensive to assign at scale). Observational causal inference methods rely on eliminating potential confounders through adjustment on observed variables. Under a "selection on observables" assumption, we believe we can estimate the causal effect correctly on average. Applied to the customer's next 365 days of spending, for example, the CI value represents the incremental spending that a customer makes because of participating in a certain action compared to the counterfactual case where they didn't take that particular action. The formal framework for this kind of counterfactual reasoning is the "potential outcomes" framework, sometimes known as the Neyman-Rubin causal framework <cit.>, <cit.>, <cit.>.
The potential outcome based CI model has two parts:
* Propensity binning. Group the customer based on their propensity to participate in the action. This is done based on features that relate to recency, frequency, and the monetary behavior of customers along with their other characteristics such as their tenure type.
* Regression adjustment. In each of the groups, we build a regression model on the control customers with customer spend as the target. The trained model is applied to the treatment customers to predict the counterfactual spend (how much the customer would have spent had they not participated in the action). The difference between the predicted counterfactual and the actual spend is the CI value. We take a weighted average across the groups to get the final CI value for the customer action.
In addition, we require the CI model to be able to scale to the business use case. For instance, we may want to generate CI values for hundreds of customer actions in an automated way. In the rest of the paper, we refer to the traditional potential outcome based CI model as “CI-PO” and the DML-based model as “CI-DML”.
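As a concrete (hypothetical) illustration of this two-step recipe, the sketch below bins on an estimated propensity score and applies regression adjustment within each bin; the synthetic data-generating process, five strata, and ridge counterfactual model are all assumptions for demonstration, not the production system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(1)
n, p = 20000, 5
X = rng.normal(size=(n, p))                                  # customer features
d = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))          # action indicator, confounded by X[:, 0]
y = 50 + 10 * d + X @ np.array([5.0, 2.0, 0.0, 1.0, -3.0]) + rng.normal(scale=5.0, size=n)

# Step 1: propensity binning into five strata
prop = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
stratum = np.digitize(prop, np.quantile(prop, [0.2, 0.4, 0.6, 0.8]))

# Step 2: regression adjustment on controls within each stratum
effects, sizes = [], []
for s in range(5):
    treat, ctrl = (stratum == s) & (d == 1), (stratum == s) & (d == 0)
    if treat.sum() == 0 or ctrl.sum() < 10:
        continue
    counterfactual = Ridge(alpha=1.0).fit(X[ctrl], y[ctrl]).predict(X[treat])
    effects.append(np.mean(y[treat] - counterfactual))
    sizes.append(treat.sum())

print(f"CI-PO ATT estimate (true effect = 10): {np.average(effects, weights=sizes):.2f}")
```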
§ CI: DML FRAMEWORK
Note that one of the challenges in validating the causal estimates is posed by the Fundamental Problem of Causal Inference <cit.>. The lack of observable ground truth makes it difficult to validate the output of a causal model. We therefore prefer models with proven desirable theoretical properties such as √(n) convergence.
The Double/Debiased Machine learning (DML) method proposed by Chernozhukov et al. <cit.> leverages the predictive power of modern Machine Learning (ML) methods in a principled causal estimation framework that is free of regularization bias asymptotically.
For treatment D, features X, we express the outcome Y as an additively separable function of D and arbitrary function of features X:
Y = D β + g(X) + ϵ
DML’s estimation strategy is motivated by writing out the residualized representation of Eq. (<ref>) and its parts:
Ỹ = Y - E(Y|X)
D̃ = D - E(D|X)
Ỹ = D̃β + ϵ̃
We use ML models to estimate E(Y|X) and E(D|X). The residuals from the outcome equation (Eq. (<ref>)) are regressed on the residuals from the propensity equation (Eq. (<ref>)) to obtain the causal parameter β. We leverage K-fold sample splitting so that training and scoring of the ML models happen on different folds. We use a 3-fold sample split and follow the “DML2” approach <cit.>, pooling the outcome and propensity residuals across all the folds to fit a single, final regression of the residualized outcome on the residualized treatment (Eq. (<ref>)).
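For illustration, a minimal cross-fitted DML2 sketch on synthetic data follows; the data-generating process and the ridge/logistic nuisance learners are placeholder assumptions rather than the production configuration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 20000, 5
X = rng.normal(size=(n, p))
d = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
y = 50 + 10 * d + X @ np.array([5.0, 2.0, 0.0, 1.0, -3.0]) + rng.normal(scale=5.0, size=n)

y_res, d_res = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=3, shuffle=True, random_state=0).split(X):
    # nuisance models E[Y|X] and E[D|X], trained off-fold and scored on the held-out fold
    y_res[test] = y[test] - Ridge(alpha=1.0).fit(X[train], y[train]).predict(X[test])
    d_res[test] = d[test] - LogisticRegression(max_iter=1000).fit(
        X[train], d[train]).predict_proba(X[test])[:, 1]

# DML2: pool the residuals across folds, then one final residual-on-residual regression
beta = (d_res @ y_res) / (d_res @ d_res)
print(f"DML2 estimate (true effect = 10): {beta:.2f}")
```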
§.§ Inverse Propensity Treatment Weighting
We use weighted ordinary least squares to solve the residual regression equation (Eq. (<ref>)), where the weights are determined by Inverse Propensity Treatment Weighting (IPTW or IPW) <cit.>. Our IPTW weights correspond to the Horvitz-Thompson (HT) weights <cit.>, in which the weight for each unit is the inverse of the probability of that unit being assigned to the observed group.
In Table <ref> we define the weights that balance the distributions of covariates between comparison groups for two widely used estimands, the Average Treatment Effect (ATE) and Average Treatment Effect on the Treated (ATT). Weighting helps achieve additional robustness, bringing us closer to a conventional Doubly Robust estimator. Applying these weights when conducting statistical tests or regression models helps reduce impact of confounders over and above what we get from the regression adjustment <cit.>. Secondly, the weights allow us to target the estimand; the ATT has long been the preferred estimand for CI values, since it represents the treatment effects for those customers actually treated historically. (We refer to the customer-level counterparts of ATE, ATT estimands as HTE and HTT respectively.)
§.§ Common support and propensity score trimming
For many treatments, the propensity distribution has significant mass near 1 for the treated group and near 0 for the control group (see an example histogram in Fig. <ref>). Scores near the boundary can create instability in weighting methods. In addition, these scores often represent units for whom we cannot make an adequate treatment-control comparison. We limit the analysis to the common support region, where the propensity score distributions of treated and untreated samples overlap.
We also use trimming to exclude customers whose estimated propensity is outside of the range [α, 1-α]. We experimented with different thresholds on various customer actions and observed that α=0.001 with rescaled propensity scores works the best.
§.§.§ Normalizing and rescaling weights
When using the IPW, we normalize the weights by rescaling the propensity scores for each customer i as in Eq. (<ref>).
ê(X_i)_scaled = (D̄ / ē(X)) ∗ ê(X_i),
where D̄ and ē(X) in Eq. (<ref>) are the averages of the treatment assignment and the propensity score respectively, taken over the treatment and control populations combined.
Propensity trimming and rescaling reduces variance, leads to more stable estimates, and tighter confidence intervals as seen in Fig. <ref>.
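A compact sketch of these steps is given below; the common-support rule shown is one simple variant, and the toy propensity scores are assumptions:

```python
import numpy as np

def att_weights(d, e, alpha=0.001):
    """ATT Horvitz-Thompson weights with rescaling, trimming, and common support."""
    e_scaled = (d.mean() / e.mean()) * e                    # rescaling in the spirit of Eq. (7)
    lo = max(alpha, e_scaled[d == 1].min())                 # crude overlap region: assumes treated
    hi = min(1.0 - alpha, e_scaled[d == 0].max())           # scores skew high, control scores low
    keep = (e_scaled >= lo) & (e_scaled <= hi)
    w = np.where(d == 1, 1.0, e_scaled / (1.0 - e_scaled))  # treated: 1, control: e/(1-e)
    return w, keep

rng = np.random.default_rng(2)
e = np.clip(rng.beta(2, 5, size=10000), 1e-4, 1 - 1e-4)     # toy propensity scores
d = rng.binomial(1, e)
w, keep = att_weights(d, e)
print(f"kept {keep.mean():.1%} of units; mean control weight {w[(d == 0) & keep].mean():.3f}")
```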
§.§ Heterogeneity in DML
CI-DML implements a version of the heterogeneous effects modeling proposed in <cit.>, by leveraging the treatment-feature interactions in the final stage of DML to identify heterogeneous (customer-level) responses.
The general form of Eq. (<ref>) can be written as
Ỹ = h(X, D̃) + ϵ̃ .
In fact, Eq. (<ref>) is a special case of Eq. (<ref>) with h(X, D̃) = D̃β. We interact treatment with the features and define h(X, D̃) ≡ψ(X) * D̃ β, where `∗' represents element-wise multiplication. Thus, the heterogeneous residual regression becomes:
Ỹ = ψ(X) * D̃ β + ϵ̃
We want ψ(X) to be low-dimensional so that we are able to extract the coefficient β in Eq. (<ref>) reliably.
Let N and M be the number of customers and features respectively. If the dimension of ψ(X) is N× K, we want K ≪ M. In our use case, M is typically 2000 and K is typically around 20. To get the low-dimensional representation, ψ(X) we proceed as follows:
* We project the original features onto an orthogonal space through Principal Component Analysis (PCA).
* We run a K-means clustering algorithm on the highest-signal Principal Components. Dimension reduction from PCA helps to reduce dimensionality-related problems when computing Euclidean distance for K-means clustering.
* We calculate the K cluster scores for each customer as ψ_i,c = (1/d_i,c) / (∑_k=1^K 1/d_i,k), where d_i,c is the distance of customer i's value from the centroid of cluster c (see schematic in Fig. <ref>).
Once we have calculated the distance features ψ(X) for each customer, we interact them with the propensity residuals and fit a linear regression model using IPW (see Sec. <ref>) to extract the coefficients β in Eq. (<ref>). The heterogeneous estimates are given by
h = ψ(X) β .
E.g., for K=3, h = ψ_1 β_1 +ψ_2 β_2 + ψ_3 β_3.
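A sketch of the ψ(X) pipeline follows; the dimensions are shrunk for illustration, the residuals are synthetic stand-ins, and the IPW weights of Sec. <ref> are omitted for brevity:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def cluster_scores(X, n_pc=3, n_clusters=4, eps=1e-12):
    """psi(X): normalized inverse-distance scores over K-means clusters fit on PCA scores."""
    Z = PCA(n_components=n_pc).fit_transform(X)
    dist = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(Z).transform(Z)
    inv = 1.0 / (dist + eps)
    return inv / inv.sum(axis=1, keepdims=True)   # psi_{i,c} = (1/d_{i,c}) / sum_k (1/d_{i,k})

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 10))
d_res = rng.normal(size=1000)                      # synthetic propensity residuals
y_res = 1.5 * d_res + rng.normal(size=1000)        # synthetic outcome residuals

psi = cluster_scores(X)
beta = np.linalg.lstsq(psi * d_res[:, None], y_res, rcond=None)[0]   # heterogeneous regression
hte = psi @ beta                                   # customer-level estimates, h = psi(X) beta
print(hte[:5])
```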
A schematic of CI-DML workflow is shown in Fig. <ref>.
§.§ Confidence intervals in DML
One of the disadvantages of CI-PO is that generating confidence intervals requires bootstrapping around the multi-step process and is computationally expensive. Obtaining confidence intervals in CI-DML is straightforward. For a single ATT parameter estimate, we obtain the confidence interval simply by calculating the variance of the estimate of β in Eq. (<ref>). We also estimate Huber-White heteroscedasticity consistent standard errors <cit.>, <cit.>. For the ATT case, the steps for calculating variance of the coefficient β̂ are as follows:
Var(β) = H Σ̂ H' .
For a customer, `p':
σ̂_p^2 = Û_p^2 = (Ỹ_p - D̃_p β̂)' (Ỹ_p - D̃_p β̂) ,
where β̂ is the value of coefficient from solving Eq. (<ref>). Note that Ỹ_p and D̃_p are scalars.
Σ in Eq. (<ref>) is a diagonal matrix with the squared prediction error σ̂_p^2 for each customer on its diagonal and H in Eq. (<ref>) is defined as
H = (D̃' ∗ W D̃)^-1D̃' ∗ W
where W are the IPW weights as defined in Sec. <ref>.
We compute the confidence intervals on the causal estimate β using Var(β).
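For the single-coefficient ATT case these formulas reduce to a few lines; the sketch below (with synthetic residuals and unit weights as assumptions) computes β̂ and its HC0 variance:

```python
import numpy as np

def weighted_ols_hc_variance(d_res, y_res, w):
    """beta from weighted OLS of y_res on d_res, plus its Huber-White (HC0) variance."""
    Dt = d_res[:, None]                            # residualized treatment, shape (n, 1)
    A = np.linalg.inv(Dt.T @ (w[:, None] * Dt))    # (D' W D)^{-1}
    beta = (A @ (Dt.T @ (w * y_res))).item()
    u2 = (y_res - d_res * beta) ** 2               # squared per-customer prediction errors
    H = (A @ Dt.T) * w                             # H = (D' W D)^{-1} D' W, shape (1, n)
    return beta, ((H * u2) @ H.T).item()           # Var(beta) = H diag(u2) H'

rng = np.random.default_rng(4)
d_res = rng.normal(size=5000)
y_res = 2.0 * d_res + rng.normal(scale=1.0 + np.abs(d_res))  # heteroscedastic errors
beta, var = weighted_ols_hc_variance(d_res, y_res, np.ones(5000))
print(f"beta = {beta:.3f} +/- {1.96 * np.sqrt(var):.3f}")
```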
§.§.§ Customer-level confidence intervals
CI-DML also provides the ability to obtain customer-level confidence intervals. From Eq. (<ref>), we can write
Var(h) = Var(ψβ) = ∑_k ψ_k^2 Var(β_k) + ∑_k≠ lψ_k ψ_l Cov(β_k, β_l).
We calculate variance of the heterogeneous coefficients following similar approach as in Eqs. <ref>, <ref>, and <ref>. The only difference is we replace D̃_p →ψ(X_p) * D̃_p in Eq. (<ref>) and D̃→ψ(X)*D̃ in Eq. (<ref>).
For the ATT case, Var(β) is a scalar whereas for the HTT case, Var(β) is a K× K matrix. The first and second terms in the summation in Eq. (<ref>) are the diagonal and the off-diagonal terms of the Var(β) matrix respectively.
§.§ DML implementation
We developed a causal ML library with JSON driven modeling configuration (see Fig. <ref>). JSON ML Interpreter (JMI) translates JSON configuration to executable Python ML application.
The main advantages of JMI approach are:
Flexibility: Business questions from various domains cannot always be addressed through a single unified configuration of a causal model. We address this in CI-DML where users can invoke different causal analysis frameworks (DML, Causal Forests) and prediction algorithm type (regression, classification, clustering).
Scalability: CI-DML utilizes distributed implementations of algorithms and the file system via Apache Spark, which enables causal modeling at big-data scale (hundreds of millions of customers, multiple targets, and time horizons).
Persistence: CI-DML inherits SparkML serialization and deserialization methods to persist and instantiate fitted models.
Compatibility: In addition to Spark, interfaces to adapt ML libraries from scikit-learn, tensorflow, MXNet and other communities can be onboarded using the configurable abstraction support by JMI.
In the CI-DML system, we dockerize the JMI causal ML library, which is platform agnostic and has the flexibility to extend and utilize different compute engines like AWS EMR, SageMaker, or AWS Batch depending on the use case, abstracting the computation details from the user. Dockerization also helps version control and the build environment via standard software development tooling.
§ RESULTS
Next we present the results for CI-DML. The target variable we focus on is customer spending, but our framework can be leveraged to obtain the causal impact on any other target variable of interest (e.g., net profit, units bought, etc.). For every CI run, we produce both the population-level ATT values (Eq. (<ref>)) and the customer-level HTT (Eq. (<ref>)) values.
We compared the CI-PO and CI-DML results for 100+ customer actions. As noted earlier, two major advantages of CI-DML are the availability of customer-grain results (i.e., HTT) and confidence intervals.
In Fig. <ref>, we present population-level (ATT) and customer-level values for selected representative customer actions [We anonymize actions to preserve business confidentiality].
The reported confidence intervals are for both homoscedastic and heteroscedastic error variances. To get a sense of the level of variance in customer-grain results, we report the percentage of customers where the customer-level confidence interval crosses zero. We also report the out-of-sample fit metrics for outcome and propensity model in DML.
Takeaways from analysis of 100+ actions are:
* Population-level CI-PO and CI-DML values are aligned for 86% of actions.
* When the customer-level CI values are aggregated up, they are generally aligned with the population-level CI-DML values.
* The difference between the CI-PO and CI-DML values are larger either when the data is noisy and/or we have a small sample size. For such cases, we also see large confidence intervals and the mean of HTT values is farther away from the CI-DML ATT values.
* The homoscedastic confidence intervals are tighter than the heteroscedastic confidence intervals, as expected. However, the homoscedastic confidence intervals likely under-predict the variance. We recommend business stakeholders use the heteroscedastic results when making decisions.
* The customer-level confidence intervals are economically reasonable. The percentage of customer-level confidence interval crossing zero increases for data with lower participation and is small for customer actions with a long history.
* The ML model metrics shown in Fig. <ref> are using ridge regression for the outcome model and logistic regression for the propensity model. We noticed that the model metrics as well as the CI values are relatively insensitive to the choice of ML model at the outcome/propensity stage. Accordingly, we leverage ridge and logistic models due to their favorable compute time.
§.§ Hyperparameter tuning
The hyperparameters (e.g., regularization strength) in the outcome and propensity model are chosen based on the out-of-sample performance.
For the HTT estimates, the two main hyperparameters are the number of principal components and the number of clusters.
We choose the number of principal components (PCs) based on the percentage of variance explained. We find that at around 300 PCs, about 80% of the variance is explained (Fig. <ref>). The amount of variance explained grows much more slowly as more PCs are added. To avoid sparsity issues in the downstream K-means calculation, we set the number of PCs to 300.
Choosing the number of clusters is less straightforward. Standard tools such as the elbow method and Silhouette score do not yield a clear answer for the optimal cluster choice. In the current work, we choose 20 clusters as we find it best suited to our data. Ongoing work aims to make this choice in a more data-driven way. We also find that the mean of HTT values is robust with respect to the choice of the number of clusters.
§.§ Spread of customer-level CI values
So far, we have only looked at the mean of customer-level values in Fig. <ref>. In Fig. <ref>, we look at the customer-level CI scores for a representative customer action. We see that most customers have a CI value close to the average HTT value. There are a few customers with low CI values, which shift the mean to the left. One industry application of CI is marketing optimization. Having access to the distribution of customer-level CI scores as in Fig. <ref> can help personalization and aid finer decision making.
§ VALIDATION
§.§ Placebo tests
Placebo tests help us understand the relative ability of competing causal estimates to account for selection bias. Selection bias occurs
when customers who take an action (e.g., stream video) have unobserved characteristics not included in the model that makes them
systematically more or less likely to take the action (e.g., high income, low age etc.). In placebo tests, we take the treatment group
customers and simulate as though they took the action a year before the actual date. This is achieved by shifting the event date by one year and recalculating the features based on the shifted event date. The CI estimated in this setup is the
“placebo error”. Since this is a fake event, a model with a lower placebo error than another suggests that it has lower selection bias. Running placebo tests on all events is computationally expensive, so we selected a few events for placebo analysis. The results are shown in Fig. <ref>.
The key takeaways from Fig. <ref> are:
* Selection bias is inherently event-dependent. When averaged across the selected customer actions, we see a 2.24% improvement in placebo estimates when going from CI-PO to CI-DML.
* Selection bias is primarily impacted by the modeling features. As CI-PO and CI-DML use the same features, we did not expect
big improvements in placebo tests. The consistent improvement across the events shows that double machine learning
methodology is better able to adjust for observables even when the same features are used.
§.§ Confidence interval comparison
One of the major wins in CI-DML is that we provide heteroskedasticity-consistent confidence intervals at both a customer and
aggregate level for every CI analysis in a scalable and lower-cost fashion. We compare the uncertainty estimates (specifically the
width of confidence intervals) from CI-DML with the bootstrap results in CI-PO for a few events in Fig. <ref>.
We find the confidence interval widths to be comparable between the two approaches. On average, the CI-DML width (scaled by the CI-PO point
estimate) is 1.5% smaller than the bootstrap-based confidence interval. A bootstrap-enabled CI-PO run takes about 2.5X more time than a CI-DML run. Bootstrap also does not scale for events with a large number of customers. As the CI-DML approach to confidence intervals is based on a closed-form implementation, we do not have any scalability issues. In addition, note that bootstrapping has theoretical limitations when used for matching estimators <cit.>.
§ CONCLUSION AND FUTURE WORK
In this work, we introduced a state-of-the-art methodology for calculating CI values. We noted that a DML-based framework mitigates regularization bias, allows us to extract heterogeneity in CI values, and provides a scalable way to construct heteroscedastic confidence intervals. We also made a case for using IPW and common support to refine the CI estimates. We demonstrated how leveraging PCA followed by K-means clustering allowed us to introduce customer-level heterogeneity. Using a JSON-based config allows the flexibility to experiment with a wide variety of algorithms and can take us from experimentation to production in minimal steps.
We presented results for few anonymized customer actions across different domains. Both the population-level and customer-level results for the customer actions we have looked at so far are aligned with the CI-PO results and our expectations.
Note that estimation of heterogeneous or context-aware treatment effects is an active area of research with wide applications ranging from marketing to health care. The distribution of treatment effects across different subgroups, or as a function of specific individual-level characteristics, provides researchers with additional insights about the treatment/intervention analyzed. Our work showcases a scalable real-world application for extracting average as well as heterogeneous causal estimates, which we believe will be of interest to the broader scientific community.
§.§ Future work
Validating the causal estimates is challenging due to lack of ground truth. In the current work, we relied on the model fit metrics in the DML steps, placebo tests, and on bridging the CI-DML and CI-PO outputs.
In the future, we plan to include metrics which focus on the validation of heterogeneous treatment effects. Examples of these include metrics based on Generic Machine Learning <cit.> and empirically calibrated Monte Carlo resampling techniques <cit.>.
We also plan to experiment with a doubly robust estimator in the DML stage.
§ APPENDIX
§ SAMPLE JSON CONFIG
We show a snippet of JSON config in Fig. <ref>.
We can swap the specified models in the outcome and propensity step with any ML model of our choice.
Likewise we can easily configure pre/post-processing steps and hyperparameters through JSON files.
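The referenced figure is not reproduced here; a configuration in the same spirit might look as follows, with the caveat that every field name and value below is a hypothetical illustration rather than the library's actual schema:

```python
import json

config = {
    "causal_framework": "DML",
    "estimand": "ATT",
    "folds": 3,
    "outcome_model": {"name": "ridge_regression", "params": {"regParam": 1.0}},
    "propensity_model": {"name": "logistic_regression", "params": {"maxIter": 100}},
    "weighting": {"type": "IPTW", "trim_alpha": 0.001, "rescale": True},
    "heterogeneity": {"n_principal_components": 300, "n_clusters": 20},
    "targets": ["spend_365d"],
}
print(json.dumps(config, indent=2))
```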
Chernozukov
Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, James Robins. Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, Volume 21, Issue 1, 1 February 2018, Pages C1–C68. https://doi.org/10.1111/ectj.12097
Jasjeet
Sekhon, Jasjeet. The Neyman–Rubin Model of Causal Inference and Estimation via Matching Methods. 2007. The Oxford Handbook of Political Methodology.
Holland
Holland, Paul W. Statistics and Causal Inference. 1986. J. Amer. Statist. Assoc. 81 (396): 945–960. https://doi.org/10.1080/01621459.1986.10478354
Neyman
Neyman, Jerzy. Sur les applications de la theorie des probabilites aux experiences agricoles: Essai des principes. Master's Thesis (1923). Excerpts reprinted in English, Statistical Science, Vol. 5, pp. 463–472.
Rubin
Rubin, Donald. Causal Inference Using Potential Outcomes: Design, Modeling, Decisions. 2005. J. Amer. Statist. Assoc. 100 (469): 322–331.
Cherno_goldman_semenova_taddy
Chernozhukov, V., Goldman, M., Semenova, V., and Taddy, M. (2017). Orthogonal Machine Learning for Demand
Estimation: High Dimensional Causal Inference in Dynamic Panels. ArXiv:1712.09988 [Stat].
Huber
Huber, Peter J. The behavior of maximum likelihood estimates under nonstandard conditions. (1967) Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability.
Vol. 5. pp. 221–233
White
White, Halbert. A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity. Econometrica 48 (4): 817–838
HTR1
Horvitz, D. G., and Thompson, D. J. . A Generalization of Sampling Without Re-placement From a Finite Universe (1952). Journal of the American Statistical Association, 47(260), 663-685
HTR2
Nie, X., and Wager, S. Quasi-Oracle Estimation of Heterogeneous Treatment Effects (2017). ArXiv:1712.04912 [Econ, Math, Stat]
DR
Edward H. Kennedy, Optimal doubly robust estimation of heterogeneous causal effects (2020). ArXiv:2004.14497 [math.ST]
biometric
Robins, J. M., Mark, S. D., and Newey, W. K. Estimating exposure effects by modelling the expectation of exposure conditional on confounders (1992). Biometrics 48 479–495. MR1173493
Ruth
Ruth C, Brownell M, Isbister J, MacWilliam L, Gammon H, Singal D, Soodeen R, McGowan K, Kulbaba C, Boriskewich E. Long-Term Outcomes Of Manitoba's Insight Mentoring Program: A Comparative Statistical Analysis . Winnipeg, MB: Manitoba Centre for Health Policy, 2015
Austin
P. C. Austin and E. A. Stuart, Moving towards Best Practice When using Inverse Probability of Treatment Weighting (IPTW) Using the Propensity Score to Estimate Causal Treatment Effects in Observational Studies, vol. 34, pp. 3661-3679, 2015.
Hirano
Hirano, K., Imbens, G.W. Estimation of Causal Effects using Propensity Score Weighting: An Application to Data on Right Heart Catheterization. Health Services and Outcomes Research Methodology 2, 259–278 (2001). https://doi.org/10.1023/A:1020371312283
Abadie_Imbens
A. Abadie and G. Imbens, On the failure of bootstrap for matching estimators, Econometrica, Vol. 76, No. 6 (2008), 1537-1157
GLM
Chernozhukov, Victor, Mert Demirer, Esther Duflo, and Ivan Fernandez-Val (2022).
Generic machine learning inference on heterogeneous treatment effects in randomized experiments.
arXiv:1712.04802 [stat.ML]
MC
Knaus, M. C., Lechner, M., & Strittmatter, A. (2021). Machine learning estimation of heterogeneous causal effects: Empirical monte carlo evidence. The Econometrics Journal, 24(1), 134-161
|
http://arxiv.org/abs/2409.03000v1 | 20240904180011 | Singletons in supersymmetric field theories and in supergravity | [
"Henning Samtleben",
"Ergin Sezgin"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/2409.03381v2 | 20240905093324 | CogniDual Framework: Self-Training Large Language Models within a Dual-System Theoretical Framework for Improving Cognitive Tasks | [
"Yongxin Deng",
"Xihe Qiu",
"Xiaoyu Tan",
"Chao Qu",
"Jing Pan",
"Yuan Cheng",
"Yinghui Xu",
"Wei Chu"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
CogniDual Framework: Self-Training Large Language Models within a Dual-System Theoretical Framework for Improving Cognitive Tasks
Yongxin Deng^1,†,
Xihe Qiu^1,†,‡,
Xiaoyu Tan^2,†,
Chao Qu^2,
Jing Pan^3,
Yuan Cheng^2,
Yinghui Xu^4,
and Wei Chu^2
1School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China
2INF Technology (shanghai) Co., Ltd., Shanghai, China
3School of Art, Design and Architecture, Monash University, Melbourne, Australia
4Artificial Intelligence Innovation and Incubation Institute, Fudan University, Shanghai, China
†Equal contribution.
‡Corresponding author. Email: [email protected]
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Cognitive psychology investigates perception, attention, memory, language, problem-solving, decision-making, and reasoning. Kahneman's dual-system theory elucidates the human decision-making process, distinguishing between the rapid, intuitive System 1 and the deliberative, rational System 2. Recent advancements have positioned Large Language Models (LLMs) as formidable tools nearing human-level proficiency in various cognitive tasks. Nonetheless, the presence of a dual-system framework analogous to human cognition in LLMs remains unexplored. This study introduces the CogniDual Framework for LLMs (CFLLMs), designed to assess whether LLMs can, through self-training, evolve from deliberate deduction to intuitive responses, thereby emulating the human process of acquiring and mastering new information. Our findings reveal the cognitive mechanisms behind LLMs' response generation, enhancing our understanding of their capabilities in cognitive psychology. Practically, self-trained models can provide faster responses to certain queries, reducing computational demands during inference.
large language model, cognitive psychology, language processing.
§ INTRODUCTION
Cognitive psychology seeks to elucidate the processes by which humans acquire, retain, and retrieve knowledge <cit.>. Kahneman's dual-system theory <cit.> emerges as a seminal framework within this realm, offering a nuanced understanding of cognitive operations. This theory outlines two distinct cognitive systems: System 1, which is instinctual and facilitates rapid decision-making with minimal cognitive effort, and System 2, which is methodical and requires deliberate focus for complex reasoning tasks.
In the realm of artificial intelligence, the advent of deep learning and the influx of extensive datasets have precipitated the swift advancement of language models (LMs). These models, particularly those utilizing the Transformer architecture <cit.> such as GPT-4 <cit.>, have garnered significant attention for their advanced language processing abilities <cit.>, achieving near-human proficiency across numerous linguistic tasks. Trained on expansive natural language corpora, these models demonstrate an ability to comprehend and produce symbol sequences with an intuition akin to human System 1's pattern processing. Furthermore, when prompted to employ chain-of-thought (CoT) problem-solving, LLMs exhibit deep reasoning capabilities paralleling human System 2 <cit.>. Nevertheless, the persistence of such efficient and accurate outputs in the absence of CoT remains uncertain. Should LLMs achieve this, it would suggest the integration of an intuitive operational process comparable to human System 1.
Can LLMs internalize System 2's complex reasoning into System 1's intuitive responses through iterative training? We hypothesize that LLMs, by mimicking human rapid skill acquisition, can generate fast, intuitive answers without additional training data, thus enhancing resource efficiency and reducing dependence on CoT prompting. Our methodology was straightforward yet robust: we commenced by prompting the model with specific reasoning questions, both with and without CoT cues, and subsequently assessed the accuracy of the responses generated under each condition. Following this, the model employed CoT as a scaffold to reengineer non-CoT responses. Theoretically, this self-editing could facilitate the internalization of CoT reasoning steps, potentially enhancing the model's future problem-solving precision without explicit CoT prompts. The final phase involved reevaluating the model's performance post self-improvement to determine if there was an enhancement in CoT-independent operations. Our experiments are designed to uncover whether LLMs can emulate the human cognitive system by internalizing complex reasoning processes, particularly when functioning without direct reasoning instructions.
We applied our methodology to the Vicuna and Llama2 models of varying sizes and evaluated their performance enhancements on reasoning datasets such as GSM8K, ReClor, and LogiQA 2.0. Our findings indicate that LLMs display marked discrepancies in response accuracy when utilizing CoT compared to when it is absent. Following a period of self-training, LLMs exhibited a substantial increase in response precision in scenarios devoid of CoT. This suggests that LLMs are capable of developing intuitive response mechanisms akin to the human cognitive System 1, as well as the deliberate, sequential reasoning characteristic of System 2. Our research demonstrates the potential to cultivate LLMs' System 2 skills into System 1 proficiencies, enabling rapid application.
This paper presents three principal contributions:
* We propose a self-iterative framework for large models, which is used to explore whether large models themselves possess characteristics similar to human cognitive structures.
* We demonstrate that LLMs can simulate the dual-system characteristics of human cognition. Through experimentation, we have verified that LLMs can not only perform complex reasoning tasks under the guidance of CoT (similar to human System 2), but also respond relying on pattern recognition intuition without CoT (similar to human System 1).
* We propose and validate a new method that may allow LLMs to maintain efficient and accurate outputs without relying on CoT. This suggests that LLMs can handle tasks in a manner closer to human System 1, and this method is more efficient in terms of computational resources and time because it avoids additional training data or steps, thus it is expected to play a significant role in resource-limited application scenarios.
§ COGNIDUAL FRAMEWORK
§.§ Model Self-Iteration
Our CogniDual framework replicates the human learning curve, as depicted in Figure <ref>. Initially, we prompt untrained LLMs to answer questions from reasoning datasets without CoT instructions, even compelling the LLMs to provide immediate answers without rationale. We designate this question set as Q_n={q_i, for i=1,2,…,n }, with n denoting the total number of questions. The corresponding answer set is labeled A1_n = {a1_i, for i=1,2,…,n }, symbolizing the LLMs’ initial responses, akin to human cognitive System 1. These question-answer pairs are preserved.
Subsequently, we introduce CoT directives, guiding LLMs to derive correct answers sequentially. The resulting answers are categorized as A2_n={a2_i, for i=1,2,…,n }. We also maintain these pairs. In the third phase, we furnish the LLMs with standard answers from the dataset, denoted as A_n={a_i, for i=1,2,…,n}.
To quantify the proficiency of our LLMs in responding to Q_n, we define the accuracy metric as follows:
Acc(A1_n, A_n) = (1/n) ∑_{i=1}^{n} SemanticMatch(a1_i, a_i),
Acc(A2_n, A_n) = (1/n) ∑_{i=1}^{n} SemanticMatch(a2_i, a_i),
where SemanticMatch(·, ·) assesses the semantic similarity between the LLM's initial response and the standard answer.
Given that standard answers usually include comprehensive reasoning and do not conform to a 'yes or no' format, we cannot rely on character matching scripts to evaluate the LLMs’ responses. Instead, we engage the LLMs in semantic synonymy judgments to assess the accuracy of A1_n and A2_n against A_n, specifically identifying instances where A2_n is accurate, and A1_n is not.
The fourth stage involves the LLMs consolidating the correct answers from A2_n and the incorrect ones from A1_n into new question-answer pairs. Given that A2_n responses encompass extensive reasoning, we require LLMs to distill these answers, converting them from elaborate, reasoned responses to concise answers. The specifics of this process will be explored in Section <ref>. In the final step, we employ these restructured question-answer pairs as training material for LLMs and subsequently assess the LLMs' reasoning capabilities on different questions within the same dataset without CoT. Our experimental approach, chosen for its minimal computational resource demands and deployability, utilizes the LoRA training method <cit.> for LLMs.
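A condensed sketch of one CFLLMs pass is shown below; `llm_generate` is a stub standing in for the actual Vicuna/Llama2 call, and the prompt wordings are paraphrases rather than the exact prompts used:

```python
def llm_generate(prompt: str) -> str:
    """Stub: replace with a call to the actual chat model (e.g., Vicuna or Llama2)."""
    return ""

def semantic_match(a: str, b: str) -> bool:
    """Ask the model whether two answers are semantically equivalent (synonymy judgment)."""
    verdict = llm_generate(f"Do these two answers mean the same thing? Reply yes or no.\nA: {a}\nB: {b}")
    return verdict.strip().lower().startswith("yes")

def cognidual_pairs(questions, gold_answers):
    """Collect (question, rewritten CoT answer) pairs where the direct answer failed but CoT succeeded."""
    pairs = []
    for q, gold in zip(questions, gold_answers):
        a1 = llm_generate(f"Answer directly, with no reasoning:\n{q}")   # System-1 style response
        a2 = llm_generate(f"Answer step by step:\n{q}")                  # System-2 style (CoT) response
        if semantic_match(a2, gold) and not semantic_match(a1, gold):
            concise = llm_generate(f"Rewrite as a short final answer only:\n{a2}")
            pairs.append((q, concise))
    return pairs   # used as LoRA fine-tuning data in the final stage
```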
§.§ Pre-training Model Distillation
The framework outlined in Section <ref> enables LMs to self-train independently of external interaction. Nevertheless, our framework necessitates that models complete two supplementary tasks when addressing dataset questions: 1. Synonymy Semantic Judgment: LMs must determine if A1_n, A2_n, and A_n are semantically equivalent to assess reasoning accuracy both with and without the CoT; 2. Answer Rewriting: Recognizing the impracticality of manually rewriting numerous answers containing reasoning processes into concise responses, we expect LMs to autonomously perform answer rewriting. Suppose we have an open-source LLM denoted by p_θ, which is parameterized by θ. To perform synonymy semantic judgment and achieve SemanticMatch, we can design a specific prompt, Prompt_Semantic. This prompt will evaluate whether the two responses a_i and a_j have identical semantic meanings:
SemanticMatch(a_i,a_j) = p_θ(a_j,a_i|Prompt_Semantic).
For answer rewriting, which is also a common task for chat base LLM, we can design a Prompt_Rewrite to align the generated answer a_j to the target answer a_i with the identical format, and acquire the updated answer a^'_i:
a^'_i = p_θ(a_j,a_i|Prompt_Rewrite).
While large-scale models readily accomplish these tasks, smaller models, such as the Llama2-7B, may find them challenging <cit.>. A model that struggles to understand standard answers is unlikely to enhance its capabilities from System 2 to System 1 through self-training alone. To improve training outcomes, we advocate for the pre-training of smaller models using knowledge distillation, equipping them with essential skills for synonymy semantic judgment and answer rewriting.
Knowledge distillation <cit.> is a technique for transferring knowledge from a large, complex `teacher' model to a smaller, simpler `student' model, facilitating deployment in resource-limited settings without greatly impacting performance. We employ a simplified approach akin to Distilling step-by-step <cit.>. For synonymy semantic judgment, smaller models generate sample A1_n, A2_n, and A_n, after which GPT-3.5's advanced generative power yields precise judgments and comprehensive explanations. By creating multiple explanations per question, we ensure a clear delineation of the reasoning pathway. GPT-3.5 also supplies sample rewrites and their justifications for the answer rewriting task. Subsequently, smaller model θ can be trained through supervised fine-tuning using the larger models' outputs A^', preparing them for self-improvement and independent practice:
min_θ -𝔼_(q_i, a^'_i) ∼ A^'[log p_θ(a^'_i|q_i)].
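Concretely, this objective is the standard token-level negative log-likelihood; a toy PyTorch illustration (all shapes arbitrary) is:

```python
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 2, 8, 100
logits = torch.randn(batch, seq_len, vocab, requires_grad=True)  # student's p_theta(a'|q)
targets = torch.randint(0, vocab, (batch, seq_len))              # tokens of distilled answers a'

loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))  # -E[log p_theta(a'|q)]
loss.backward()   # gradients drive the supervised fine-tuning of theta
print(loss.item())
```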
§ EXPERIMENT
§.§ Experimental Objectives
This experiment is designed to examine various questions concerning the cognitive and reasoning capabilities of LLMs such as Llama2. Specifically, we aim to determine whether such models exhibit characteristics analogous to the dual-system cognitive framework observed in humans (Q1), if self-practice in the absence of Chain of Thought (CoT) guidance enhances reasoning abilities (Q2), whether learning curves indicate improved accuracy with additional examples post self-practice (Q3), if larger models benefit more from self-practice without CoT guidance in terms of performance (Q4), and whether the enhanced reasoning abilities generalized across different reasoning tasks (Q5).
§.§ Experimental Setting
To investigate Q1 and Q2 outlined in Section <ref>, we employed untrained LLMs as a baseline to evaluate their efficacy both with and without implementing the CoT method. The few-shot methodology was consistently applied in prompt construction, irrespective of CoT method utilization. Given that the GSM8K dataset's solutions entail reasoning sequences, in instances where the CoT method was not applied, we modified the prompting using GPT-4 <cit.> to exclude the reasoning pathway, employing an 8-shot technique. In contrast, for the ReClor and LogiQA2.0 datasets, which naturally lack reasoning pathways in their answers, we engaged GPT-4 to fabricate corresponding reasoning sequences to assess LLMs' proficiency under the CoT paradigm, adopting a 3-shot approach for these datasets. This baseline was then juxtaposed with our CFLLMs framework. To tackle Q3, we experimented with diverse data volumes within the CFLLMs framework to cultivate the LLMs and scrutinized their performance.
In pursuit of Q4, our framework was applied to LLMs of varying sizes, including Vicuna models (7B, 13B, 30B) <cit.> and Llama2 models (7B, 13B) <cit.>. To facilitate deployment on a consumer-grade Nvidia RTX 4090 GPU <cit.> while minimizing memory usage and inference time, we employed the GPTQ <cit.> approach to quantize the models to 4-bit precision. It is important to note that, despite the ability of 7B-sized models to operate on consumer-grade GPUs without quantization, we opted for 4-bit quantization across all models to maintain a uniform comparison scale and minimize quantization errors.
To address the variability in dataset sizes and their potential influence on experimental results, we standardized our approach by extracting a consistent sample of 1000 data entries from each dataset to form the training set for the LLMs' self-practice. An additional 1000 data entries were selected to comprise the test set. To maintain experimental uniformity, each data entry within these subsets was numbered. For each experiment, we consistently used the first n numbered data entries, with n representing the requisite volume of data for the specific experimental conditions.
For Q5, we selected datasets encompassing various reasoning tasks, such as GSM8K <cit.>, ReClor <cit.>, and LogiQA2.0 <cit.>. GSM8K comprises over 8,000 quality elementary mathematics problems crafted by human authors to assess arithmetic reasoning in LLMs. ReClor features questions from logical reasoning sections of standardized tests like the GMAT and LSAT, challenging the LLMs' critical thinking and complex logical reasoning skills. LogiQA2.0, based on questions from the Chinese civil service exam translated and validated by professional translators and human experts, evaluates the LLMs' capacity for generalizing natural language reasoning.
§.§ Results
We conducted experiments across a variety of LLMs, differing in type and size, as well as on diverse datasets, to evaluate their reasoning capabilities. This evaluation was based on the mean answer accuracy derived from five experimental trials, detailed in Table <ref>. It is important to note that the numbers following “CFLLMs” in the table signify the volume of data utilized for the LLMs' self-practice. The underlined values denote the peak accuracy attained with this method for the same model and dataset, whereas the bolded values represent the maximum accuracy achieved without employing the CoT method to prompt incremental reasoning, i.e., compelling the model to directly generate answers. Red-highlighted numbers in the table indicate that our framework, under the given experimental conditions, did not improve but rather diminished performance.
With this groundwork, we can address Q1 and Q2 introduced in Section <ref>. The implementation of CoT markedly influences the models' reasoning proficiency on tasks that entail natural language inference, such as reading comprehension and logical deduction. For instance, on the LogiQA2.0 dataset, the accuracy rates for smaller models like Llama2-7B and Vicuna-7B plummet to near zero in the absence of CoT. However, the deployment of the CogniDual Framework has resulted in a substantial enhancement of performance without CoT. Despite the models' reasoning accuracy not equalling that of CoT use, their ability to intuitively respond to certain questions suggests an inherent decision-making logic akin to the human dual-system cognitive framework. This insight indicates the potential for transforming System 2 capabilities into System 1 through sustained practice, thereby bolstering the LLMs' rapid response to specific queries and diminishing the time and computational resources required for reasoning.
Moreover, we observed a negligible improvement from the CogniDual Framework on the GSM8K dataset, attributed to the models' propensity for step-by-step reasoning even when instructed to directly answer. The prevalence of LLMs producing answers with comprehensive derivations is likely due to task contamination, as postulated by Liu et al. <cit.>, where mathematical problems are consistently presented with accompanying detailed solutions throughout the training phase. Our framework aims to enhance the System 1 capabilities of LLMs, rather than augment System 2 directly. Consequently, we can deduce from Q5 that only tasks exhibiting a substantial discrepancy in accuracy between CoT usage and non-usage enable LLMs to advance their internalized reasoning abilities through self-practice.
For Q3 and Q4, the results in Table <ref> indicate that, in general, an increase in additional examples correlates with a more pronounced enhancement in the LLMs' reasoning abilities without CoT, achieved through self-practice. Larger models require fewer examples to approach their System 1 capacity ceiling; beyond this point, further example data yield minimal benefits. This finding suggests that larger models are more adept at leveraging limited data to improve performance without CoT guidance through self-practice, aligning with the research by Jaimovitch et al. <cit.>.
§ CONCLUSION
This study explores the dual cognitive characteristics of LLMs. Our experimental results indicate that once LLMs internalize CoT reasoning through self-training, they can retain CoT-enhanced problem-solving abilities even without CoT prompts. This finding supports the hypothesis that, with appropriate training, LLMs can convert complex, deliberative System 2 reasoning into faster, more intuitive System 1-like responses. Leveraging this property, we designed a self-training framework to reduce the cognitive load of LLM reasoning. Despite these advancements, further research is necessary to address the study's limitations, including examining how this framework influences the cognitive processing preferences of LLMs.
IEEEtran
|
http://arxiv.org/abs/2409.02272v1 | 20240903200337 | Discrete-Time Maximum Likelihood Neural Distribution Steering | [
"George Rapakoulias",
"Panagiotis Tsiotras"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
§ ABSTRACT
This paper studies the problem of steering the distribution of a discrete-time dynamical system from an initial distribution to a target distribution in finite time.
The formulation is fully nonlinear, allowing the use of general control policies, parametrized by neural networks.
Although similar solutions have been explored in the continuous-time context, extending these techniques to systems with discrete dynamics is not trivial.
The proposed algorithm results in a regularized maximum likelihood optimization problem, which is solved using machine learning techniques.
After presenting the algorithm, we provide several numerical examples that illustrate the capabilities of the proposed method.
We start from a simple problem that admits a solution through semidefinite programming, serving as a benchmark for the proposed approach.
Then, we employ the framework in more general problems that cannot be solved using existing techniques, such as problems with non-Gaussian boundary distributions and non-linear dynamics.
§ INTRODUCTION
The problem of controlling distributions of dynamical systems has attracted increased attention in the past few years, mainly due to its practical applications in machine learning and generative AI <cit.>.
Compared to standard optimal control problems, where a feedback policy that minimizes a functional in the presence of uncertainty is sought, distribution control aims at directly steering the distribution of the system's state while minimizing a cost function <cit.>.
Apart from generative AI applications, this approach is suitable for controlling systems whose behavior is better captured by a distribution rather than a deterministic state variable.
Such applications include, for example, swarm and multiagent control <cit.>, mean field games <cit.>, opinion dynamics <cit.>, and safe stochastic model predictive control <cit.>, among many others.
This work focuses on the problem of controlling the distribution of a deterministic, discrete-time, control-affine system with nonlinear drift.
The problem of interest can be cast as the following infinite-dimensional optimization problem
min_x_k, u_k J = ∑_k=0^N-1 [ ‖u_k‖^2 + V_k(x_k) ],
x_k+1 = f_k(x_k) + B_k u_k,
x_0 ∼ρ_i,
x_N ∼ρ_f,
where ρ_i, ρ_f are the boundary state distributions, f_k, B_k describe the system's prior dynamics, and V_k(x_k) is a state-dependent cost that penalizes potentially undesirable regions of the state space.
Problem (<ref>) is an Optimal Mass Transport problem with (nonlinear) prior dynamics (OMTwpd).
For continuous linear prior dynamics and V(x_k) ≡ 0, this problem is equivalent to an Optimal Transport (OT) problem in a transformed set of coordinates defined by the linear dynamics <cit.>.
Furthermore, for a quadratic state cost V(x_k) = x_k^⊤ Q_k x_k, linear prior dynamics, and for Gaussian initial and terminal distributions, globally optimal solutions exist through the Covariance Steering (CS) framework for both continuous and discrete time settings.
These can be computed efficiently using semidefinite programming (SDP) <cit.>.
For general, nonlinear systems or systems with non-Gaussian boundary distributions, a globally optimal solution is difficult to obtain.
Local solutions through linearization have been explored in <cit.>, while solutions utilizing characteristic functions, for non-Gaussian boundary distributions and noise, have been proposed in <cit.>.
However, both of these methods utilize linear feedback policies and therefore, may be suboptimal.
Furthermore, the authors of <cit.> propose a randomized policy for steering between Gaussian Mixture Models (GMM) with deterministic linear prior dynamics using randomized linear policies.
While the algorithm results in numerically efficient solutions with guaranteed convergence leveraging analytical results from the Covariance Steering theory, the optimality of the policy compared to more general nonlinear policies is not explored in <cit.>.
Focusing on formulations with nonlinear dynamics, most existing works concern systems in continuous time.
In the simplest case where ẋ_t = u_t, the problem has been studied in the context of Continuous Normalizing Flows (CNFs) <cit.> due to its applications in generative AI <cit.>.
Its stochastic counterpart, referred to as the Schrödinger Bridge Problem (SPB), has also been explored in a similar context <cit.>.
Works that account for more complicated prior dynamics usually impose some assumptions on their structure.
For example, in <cit.> the authors account for dynamics with a nonlinear drift term but otherwise require control in all states by restricting their analysis to cases where B_k = I, and focus on mean field games and generative AI applications.
A more general framework, allowing dynamics with fewer control channels than states is considered in <cit.> where the authors extended the results of <cit.> to feedback linearizable systems.
Finally, a framework that does not require feedback linearizable dynamics is explored in <cit.> but the deterministic drift term is required to be the gradient of a potential function.
The discrete-time problem has been studied much less.
The case of degenerate prior dynamics, i.e., x_k+1 = x_k + u_k has been addressed within the framework of Discrete Normalizing Flows (DNFs) <cit.>.
To the best of the authors' knowledge, the more general problem with nonlinear prior dynamics has not been addressed in the literature.
This problem is of interest for several reasons.
First, it captures the case where the system dynamics are inherently discrete, as in the digital implementation of control algorithms.
Furthermore, most types of neural networks can be analyzed through the lens of discrete dynamical systems, <cit.>, providing a significant drive towards more research in the discrete setting.
Finally, from a computational point of view, solving the discrete-time steering problem requires storing the state vector at a constant number of intervals, requiring, therefore, a fixed memory budget, compared to continuous formulations which perform a temporal discretization based on the stiffness of the trained dynamics <cit.>.
To this end, in this work, we study Problem (<ref>) using tools from machine learning and the DNF literature, specifically combining
control-theoretic ideas and tools from <cit.>.
To bring Problem (<ref>) to the framework of Normalizing Flows, we first relax the terminal distribution constraint (<ref>) to a Kullback–Leibler (KL) divergence soft constraint, giving rise to the problem
min_x_k, u_k J = ∑_k=0^N-1 [ ‖u_k‖^2 + V_k(x_k) ] + λ (ρ_N ‖ ρ_f),
x_k+1 = f_k(x_k) + B_k u_k,
x_0 ∼ρ_i,
where λ > 0.
Henceforth, this is the general problem formulation we will use in this paper.
§ NOTATION
We use lowercase letters to denote vectors and vector random variables and capital letters to denote matrices.
Given a function F: ^n→^m, its Jacobian is denoted by ∇ F: ^n →^n × m. Distributions in ^n are denoted by ρ and probability density functions (PDFs) by p. Given a random variable x ∼ρ_x and a transformation y = F(x), the pushforward of x is denoted by ρ_y = F_#ρ_x.
§ PRELIMINARIES
§.§ Normalizing flows
Let ρ_1, ρ_2 be two distributions with probability density functions p_1(x), p_2(x) respectively. Their KL divergence is given by
(ρ_1 ‖ ρ_2) = ∫_-∞^∞ p_1(x) log( p_1(x)/p_2(x) ) dx
= 𝔼_x ∼ρ_1[ log( p_1(x)/p_2(x) ) ]
= 𝔼_x ∼ρ_1[ log p_1(x) ] - 𝔼_x ∼ρ_1[ log p_2(x) ].
Equation (<ref>) suggests that calculating the KL divergence requires the analytic expressions of the PDFs of the two functions ρ_1 and ρ_2.
In the context of Problem (<ref>), ρ_1 would be a target distribution, which we know explicitly, while ρ_2 would correspond to the distribution of the state at some time step of interest.
The calculation of ρ_2 would therefore require propagating the initial state distribution through the nonlinear dynamics, which is generally intractable.
To overcome this issue, one can use the change of variables formula connecting the PDFs of two random variables that are linked through a diffeomorphic (invertible and differentiable) transformation.
This is summarized in the following lemma:
<cit.> (Change of Variables)
Let x be an n-dimensional random variable with known PDF, denoted p_x(x), and let F: ^n →^n be a diffeomorphism.
The PDF of y = F(x), denoted p_y(y), can be calculated using the formula
p_y(y) = p_x(F^-1(y)) |det ∇ F^-1(y)|.
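As a quick sanity check of Lemma <ref>, the formula can be evaluated exactly for an affine toy map; the standard-normal base density and the matrix A below are arbitrary illustrative choices:

```python
import numpy as np

# Affine toy map y = F(x) = A x + b, so F^{-1}(y) = A^{-1}(y - b)
# and |det grad F^{-1}| = 1 / |det A|.
A = np.array([[2.0, 0.3], [0.0, 0.5]])
b = np.array([1.0, -1.0])
A_inv = np.linalg.inv(A)

def log_px(x):   # log density of x ~ N(0, I) in two dimensions
    return -0.5 * (x @ x) - np.log(2.0 * np.pi)

def log_py(y):   # Lemma 1: log p_y(y) = log p_x(F^{-1}(y)) + log |det grad F^{-1}(y)|
    x = A_inv @ (y - b)
    return log_px(x) + np.log(abs(np.linalg.det(A_inv)))

print(log_py(np.array([1.0, 0.0])))
```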
Transformations that correspond to flow maps of continuous-time systems are invertible, as long as unique solutions exist for the differential equation describing their dynamics.
In CNF problems without prior dynamics, this condition is satisfied because neural networks with finite weights and Lipschitz nonlinearities result in Lipschitz ODE dynamics, and the uniqueness of solutions is guaranteed through Picard's existence theorem <cit.>.
In the discrete-time case, however, the invertibility of the network needs to be addressed explicitly.
In this paper, we make use of the following lemma:
<cit.> (Flow Invertibility)
Consider the discrete-time nonlinear system described by
x_k+1 = x_k + f_k(x_k), k = 0, 1, …, N-1,
and the transformation x_N = F(x_0). The transformation F is invertible if,
for all k = 0, 1, …, N-1, the mappings f_k are contractive.
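Lemma <ref> also suggests a simple recipe for building invertible discrete updates: rescale each residual map so that its Lipschitz constant is below one, and invert with a Banach fixed-point iteration. A sketch follows (the tanh residual map and the 0.9 factor are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
W = 0.9 * W / np.linalg.norm(W, 2)         # spectral rescaling: Lip(f) <= 0.9 < 1

def f(x):                                   # contractive residual map
    return W @ np.tanh(x)

def step(x):                                # Phi(x) = x + f(x), invertible by Lemma 2
    return x + f(x)

def step_inverse(y, iters=300):             # Banach fixed-point iteration for x = y - f(x)
    x = y.copy()
    for _ in range(iters):
        x = y - f(x)
    return x

x0 = rng.normal(size=3)
print(np.allclose(step_inverse(step(x0)), x0))   # True
```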
For a different technique that requires modeling the individual state transition functions as gradients of a convex potential, we refer the reader to <cit.>.
Finally, one result that facilitates the derivation of the proposed algorithm is the equivalence of the KL-divergence before and after a diffeomorphic transformation.
<cit.> (Forward and Backward KL divergence)
Let F:^n→^n be a diffeomorphism and let ρ_1, ρ_2 be two distributions. Then
(ρ_1 ‖ ρ_2) = (F_#ρ_1 ‖ F_#ρ_2).
§.§ Exact Steering Using Semidefinite Programming
For the special case of a system with linear dynamics of the form
x_k+1 = A x_k + B u_k,
and for Gaussian initial and terminal state distributions, the optimal policy for Problem (<ref>) is parametrized by an affine feedback controller of the form <cit.>
π_k(x_k) = K_k (x_k - μ_k) + v_k,
where μ_k = 𝔼[x_k], while its solution, i.e., the calculation of {K_k, v_k} for k = 0, 1, …, N-1, can be attained efficiently through semidefinite programming.
The soft-constrained version (<ref>)
has only been studied for a Wasserstein-2 soft constraint penalty function in <cit.>.
The KL-divergence soft-constrained version can be solved similarly.
Although this paper focuses on the nonlinear version of the problem, the case of linear prior dynamics with Gaussian boundary conditions will be used as a benchmark, to validate the accuracy of the proposed algorithm.
To this end, we briefly present how one can calculate the optimal solution to Problem (<ref>) with a controller of the form (<ref>) using semidefinite programming.
Although we do not explicitly prove that this family of policies is globally optimal for the problem in this paper, this has been shown to be the case for the hard constrained Problem (<ref>)
in <cit.>.
§.§ KL-divergence Covariance Steering
When the dynamics of the system are linear and a control policy of the form (<ref>) is used, the first two moments of the state can be calculated explicitly at any time instant k in the steering horizon.
The corresponding equations are
μ_k+1 = Aμ_k + B v_k,
Σ_k+1 = (A + B K_k) Σ_k (A + B K_k)^⊤,
where Σ_k = Cov(x_k). The KL-divergence between the terminal distributions ρ_N = 𝒩(μ_N, Σ_N) and ρ_f = 𝒩(μ_f, Σ_f) can be calculated via <cit.>
D_KL(ρ_N ‖ ρ_f) = 1/2 ( tr(Σ_f^-1 Σ_N) + (μ_f − μ_N)^⊤ Σ_f^-1 (μ_f − μ_N)
+ log det(Σ_f) − log det(Σ_N) − n ),
where n is the dimension of the state vector.
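For reference, a direct NumPy implementation of this expression might read as follows (the helper name is ours):

```python
import numpy as np

def kl_gaussians(mu_N, Sigma_N, mu_f, Sigma_f):
    # D_KL( N(mu_N, Sigma_N) || N(mu_f, Sigma_f) ) for state dimension n.
    n = mu_N.shape[0]
    Sf_inv = np.linalg.inv(Sigma_f)
    diff = mu_f - mu_N
    return 0.5 * (np.trace(Sf_inv @ Sigma_N)
                  + diff @ Sf_inv @ diff
                  + np.linalg.slogdet(Sigma_f)[1]   # log det(Sigma_f)
                  - np.linalg.slogdet(Sigma_N)[1]   # log det(Sigma_N)
                  - n)
```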
Using equations (<ref>), the change of variables U_k = K_k Σ_k (i.e., K_k = U_k Σ_k^-1), and Y_k = U_k Σ_k^-1 U_k^⊤ in
Problem (<ref>) yields
min J = ∑_k=0^N-1 [ tr(Y_k) + ‖v_k‖^2 ] + λ D_KL(ρ_N ‖ ρ_f),
U_k Σ_k^-1 U_k^⊤ = Y_k,
Σ_k+1 = A Σ_k A^⊤ + B U_k A^⊤ + A U_k^⊤ B^⊤ + B Y_k B^⊤,
μ_k+1 = A μ_k + B v_k,
Σ_0 = Σ_i,
μ_0 = μ_i,
where the terminal distribution now enters only through the KL penalty.
Relaxing (<ref>) to the semidefinite inequality U_k Σ_k^-1 U_k^⊤ ≼ Y_k turns Problem (<ref>) into a semidefinite program.
This relaxation has been proven to be lossless in <cit.>.
Finally, we note that the term −log det(Σ_N) in the KL-divergence is convex with respect to Σ_N and can be added to the cost function using appropriate slack variables accompanied by an LMI constraint <cit.>.
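A minimal CVXPY sketch of the resulting semidefinite program is given below. The variable shapes, solver choice, and function name are our own assumptions for illustration, not part of the cited references:

```python
import cvxpy as cp
import numpy as np

def kl_covariance_steering(A, B, mu_i, Sigma_i, mu_f, Sigma_f, N, lam):
    n, m = B.shape
    Sf_inv = np.linalg.inv(Sigma_f)

    S  = [cp.Variable((n, n), symmetric=True) for _ in range(N + 1)]
    U  = [cp.Variable((m, n)) for _ in range(N)]
    Y  = [cp.Variable((m, m), symmetric=True) for _ in range(N)]
    mu = [cp.Variable(n) for _ in range(N + 1)]
    v  = [cp.Variable(m) for _ in range(N)]

    cons = [S[0] == Sigma_i, mu[0] == mu_i]
    cost = 0
    for k in range(N):
        cost += cp.trace(Y[k]) + cp.sum_squares(v[k])
        cons += [
            S[k + 1] == A @ S[k] @ A.T + B @ U[k] @ A.T
                        + A @ U[k].T @ B.T + B @ Y[k] @ B.T,
            mu[k + 1] == A @ mu[k] + B @ v[k],
            # Lossless Schur-complement relaxation of Y_k = U_k S_k^{-1} U_k^T
            cp.bmat([[Y[k], U[k]], [U[k].T, S[k]]]) >> 0,
        ]
    # KL soft constraint; -log det(S_N) is convex, handled via cp.log_det.
    kl = 0.5 * (cp.trace(Sf_inv @ S[N])
                + cp.quad_form(mu[N] - mu_f, Sf_inv)
                - cp.log_det(S[N])
                + np.linalg.slogdet(Sigma_f)[1] - n)
    cp.Problem(cp.Minimize(cost + lam * kl), cons).solve(solver=cp.SCS)
    # Recover the feedback gains K_k = U_k S_k^{-1}.
    K = [U[k].value @ np.linalg.inv(S[k].value) for k in range(N)]
    return K, [vk.value for vk in v]
```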
§ MAXIMUM LIKELIHOOD DISTRIBUTION STEERING
This section contains the main results of the paper.
To this end, consider Problem (<ref>) with a policy parametrized by a neural network, i.e., u_k = π_k(x_k ; θ) where θ corresponds to the trainable policy parameters.
To optimize (<ref>), tractable expressions for the cost function (<ref>)
must be developed.
The first step is arguably the calculation of the KL divergence.
Calculating it directly would involve the explicit calculation of the probability density of the state at the end of the steering horizon, p_N, which is challenging due to the nonlinearities of the system.
Instead of calculating p_N directly, let F(x_0) = Φ_N-1∘Φ_N-2∘…∘Φ_0 (x_0) denote the transformation linking the initial and terminal states under the discrete dynamic model (<ref>), where
Φ_k(x_k) = f_k(x_k) + B_k π_k(x_k),
is the closed-loop state transition function at time step k.
Under certain conditions on the control policy that will be specified later in the section, this transformation is diffeomorphic.
Therefore, its inverse x_0 = F^-1(x_N) satisfies the conditions of Lemma <ref>.
Applying this result to the KL divergence yields
D_KL(ρ_N ‖ ρ_f) = D_KL(ρ_i ‖ F^-1_# ρ_f).
Further expanding the second term using the definition of the KL divergence, yields
D_KL(ρ_i ‖ F^-1_# ρ_f) = 𝔼_x∼ρ_i[ log p_i(x) ] − 𝔼_x∼ρ_i[ log p_0(x) ],
where p_i(x) is the PDF of ρ_i and p_0(x) is the PDF of the distribution F^-1_#ρ_f, that is, the density of a random variable sampled from ρ_f and pushed through the inverse transformation F^-1.
Notice that the term 𝔼_x∼ρ_i[ log p_i(x) ] does not depend on the control policy parameters, and can therefore be omitted from the cost function of (<ref>).
The calculation of log p_0(x) can be facilitated through Lemma <ref>.
Specifically, one can link the density p_0 with the density of p_f using
log p_0(x) = log p_f( F(x) ) + log |det ∇F(x)|.
The second term can also be efficiently calculated using the chain rule as follows
log |det ∇F(x)| = ∑_k=0^N-1 log |det ∇Φ_k(x_k)|.
In our implementation, the Jacobians ∇Φ_k(x_k) of the state transition functions were calculated using automatic differentiation.
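A minimal sketch of this computation for a single sample is given below (the function name and the use of create_graph are our choices; create_graph=True keeps the result differentiable with respect to the policy parameters during training):

```python
import torch
from torch.autograd.functional import jacobian

def terminal_state_and_logdet(Phis, x0):
    # Propagate one sample through the closed-loop maps Phi_k while
    # accumulating log|det ∇F(x_0)| = sum_k log|det ∇Phi_k(x_k)|.
    x, logdet = x0, torch.zeros(())
    for Phi in Phis:
        J = jacobian(Phi, x, create_graph=True)      # n x n Jacobian at x_k
        logdet = logdet + torch.linalg.slogdet(J).logabsdet
        x = Phi(x)
    return x, logdet
```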
Finally, we discuss conditions for π_k(x_k; θ) that preserve the invertibility of the discrete-time dynamics.
Without loss of generality, we assume that the state transition functions (<ref>) are of the form
x_k+1 = x_k + ϕ_k(x_k) + B_k π_k(x_k ; θ).
Let the system dynamics be described by (<ref>), and let L_ϕ_k, L_π_k be the Lipschitz constants of ϕ_k, π_k, respectively, and let σ_B_k be the spectral norm of the matrix B_k.
Then, if L_π_k < (1 -L_ϕ_k)/σ_B_k, the state transition function defined in (<ref>) is a diffeomorphism.
Based on Lemma <ref>, it suffices to show that ϕ_k(x_k) + B_k π_k(x_k) is a contraction.
To upper bound its Lipschitz constant note that ‖∇( ϕ_k + B_k π_k )‖_2 ≤ ‖∇ϕ_k‖_2 + ‖B_k‖_2 ‖∇π_k‖_2 ≤ L_ϕ_k + σ_B_k L_π_k,
due to the subadditivity and submultiplicativity of the spectral norm <cit.>.
Constraining this upper bound yields the desired result.
In the case where f_k, B_k in (<ref>) are discretized versions of the continuous dynamics ẋ_t = f_t(x_t) + B_t u_t, the first-order approximations of the terms in (<ref>) are ϕ_k(x_k) = Δ T f_t(x_k) and B_k = Δ T B_t, where Δ T is the discretization step size.
Therefore, for Lipschitz continuous-time dynamics, L_ϕ_k and σ_B_k can be made sufficiently small by reducing the discretization step Δ T.
Note that training Neural Networks with bounded Lipschitz constants can be achieved using spectral normalization <cit.>.
In this work, we use π_k = α L_π_kπ̂_k where L_π_k = (1 -L_ϕ_k)/σ_B_k, α∈ (0 , 1) and π̂_k is a Multilayer Perceptron (MLP) with spectral normalization in all of its weights, having therefore a Lipschitz constant of 1.
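A sketch of this parametrization, assuming the standard spectral normalization utility from PyTorch, is given below (the class itself is ours):

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

class ScaledLipschitzMLP(nn.Module):
    # pi_k = alpha * L_pi * pi_hat_k: every linear layer is spectrally
    # normalized (unit spectral norm, up to power-iteration accuracy),
    # and tanh is 1-Lipschitz, so pi_hat_k is 1-Lipschitz; the output
    # scaling then makes the policy (alpha * L_pi)-Lipschitz.
    def __init__(self, dims=(4, 64, 64, 64, 64, 2), L_pi=9.0, alpha=0.9):
        super().__init__()
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [spectral_norm(nn.Linear(d_in, d_out)), nn.Tanh()]
        self.net = nn.Sequential(*layers[:-1])  # no activation on the output
        self.scale = alpha * L_pi

    def forward(self, x):
        return self.scale * self.net(x)

# One policy per time step, matching the architecture of the first example:
policies = nn.ModuleList([ScaledLipschitzMLP() for _ in range(30)])
```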
After calculating the loss function, optimization is carried out using standard gradient-based optimizers.
For our implementation, we used the AdamW <cit.> scheme, implemented in pytorch <cit.>.
§ NUMERICAL EXAMPLES
In the first numerical example, we study the problem of driving a double integrator system from an initial to a terminal Gaussian distribution.
Since an exact solution can be obtained for this problem using the results from Section <ref>, we use this example as a benchmark.
To this end, consider the discrete-time deterministic dynamics of the form (<ref>) with
A = [ I_2 Δ T I_2; 0_2 I_2 ], B = [ 0_2; Δ T I_2 ], Δ T = 0.1,
a horizon of N=30 time steps, V_k(x_k) ≡ 0 and λ = 60, which is equivalent to normalizing ‖u_k‖^2 by 1/(N m), where m is the number of input channels. We use this value of λ for all the subsequent examples.
The boundary distributions are x_0 ∼ 𝒩(μ_i, Σ_i) and x_N ∼ 𝒩(μ_f, Σ_f) with parameters μ_i = [0, 0, 5, 8]^⊤, Σ_i = diag(1, 1, 0.2, 0.2), μ_f = [0, 0, 10, 0]^⊤, Σ_f = 0.4 I_4.
For this example, the Lipschitz constants of the prior dynamics are L_ϕ = σ_B = Δ T.
To this end, we set L_π = 9, α = 0.9 and π_k = α L_ππ̂_k.
Each policy π̂_k(·) is modeled using a fully connected MLP with spectral normalization, and five layers with {4, 64, 64, 64, 64, 2} neurons per layer.
The convergence plot, along with the optimal solution calculated using the SDP technique described in Section <ref> can be viewed in Figure <ref>.
In the second numerical example, we use double integrator dynamics, a horizon of N=40, but this time, we opt to steer from a Gaussian Mixture Model with 8 modes to the Normal distribution in the presence of obstacles.
The obstacles are modeled using appropriate potential fields with Gaussian kernels <cit.> of the form
V_k(x) = λ_obs exp( −‖x − x_0‖^2 / r_obs^2 ), where x_0 denotes the obstacle center.
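In code, such a potential with several obstacles may look as follows (the parameter names are ours):

```python
import torch

def obstacle_potential(x, centers, lam_obs, r_obs):
    # x: (batch, n) states; centers: (n_obs, n) obstacle locations.
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(dim=-1)
    return lam_obs * torch.exp(-d2 / r_obs ** 2).sum(dim=-1)
```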
The policy at each time step has the same structure as in the first example but with 128 neurons in the hidden layers.
The results are illustrated in Figure <ref>.
Another, more complicated example, which capitalizes on the fact that only a batch of samples is required for the initial distribution rather than its PDF, is depicted in Figure <ref>, where the initial distribution is arbitrary and the terminal is the normal distribution.
The prior dynamics and policy parametrization are identical to Example 2.
We note that the inverse problem, i.e., steering from a Gaussian distribution to an arbitrary distribution for which only samples are available, would be significantly harder and cannot currently be solved with the proposed approach, since explicit information about the PDF of the terminal distribution is required for the maximum likelihood training.
We leave the investigation of this case as part of future work.
Finally, we test the proposed method in the 2D nonlinear model
x_k+1 = x_k + 0.1 √(1 + y_k^2) + u_k,
y_k+1 = y_k + 0.1 x_k,
for N=40.
By bringing these equations in the form of (<ref>) with L_ϕ = σ_B = 0.1,
we may use the same policy parametrization as in the first example.
The results are illustrated in Figure <ref>.
Since this is a two-dimensional example, we can overlay the samples with the contour lines of the PDF at each time step to validate the precision of the solution.
Table <ref> demonstrates a quantitative evaluation of the algorithm's performance.
The first column corresponds to the experiment number.
To measure the distance between the distribution of the state at the final time step of the steering horizon and the target distribution, we calculate the 2-Wasserstein distance using discrete Optimal Transport <cit.> and report its value in the second column.
We use the 2-Wasserstein distance because it can be computed exactly on empirical distributions that are available only through samples and accurately reflects the actual distance between the continuous distributions given enough samples.
In the third column, we report the minimum value of the log-determinant of the Jacobian of the optimal map linking the initial and final state, in order to validate the invertibility of the computed map. Finally, we provide the total training time in minutes in the last column.
Training was performed on an Nvidia RTX-3070 GPU.
§ CONCLUSIONS
This paper presents a method for solving the distribution steering problem for discrete-time nonlinear systems by formulating it as a regularized maximum likelihood optimization problem.
The control policies are parametrized using neural networks with appropriate Lipschitz constraints to ensure the invertibility of the discrete-time dynamics.
A general cost function is considered, allowing state-dependent terms to model obstacles in the state space using potential fields.
In parallel, a KL-divergence soft constraint version of the Covariance Steering problem is developed as a benchmark to compare with the proposed nonlinear maximum likelihood methods.
Finally, four comprehensive numerical examples are presented and analyzed with respect to how closely they achieve the target distribution, as well as in terms of run time.
For the linear dynamics with Gaussian boundary distributions, the solution is also compared against the globally optimal solution calculated as the solution of a semidefinite program.
§ ACKNOWLEDGMENT
The authors would like to sincerely thank Dr. Ali Reza Pedram for his comments in an initial version of the manuscript and for multiple fruitful discussions.
Support for this work has been provided by
ONR award N00014-18-1-2828 and NASA ULI award #80NSSC20M0163.
This article solely reflects the opinions and conclusions of its authors and not of any NASA entity.
On the Focal Locus of Submanifold of a Finsler Manifold

Aritra Bhowmick
Department of Mathematics, Indian Institute of Science, Bangalore, India
[email protected]

Sachchidanand Prasad
School of Mathematics, Jilin University, China
Faculty of Mathematics and Computer Science, Göttingen University, Germany
[email protected]

2020 Mathematics Subject Classification. Primary: 53C22, 53B40; Secondary: 53C60

arXiv:2409.02643 [math.DG] — September 9, 2024
§ ABSTRACT
In this article, we investigate the focal locus of closed (not necessarily compact) submanifolds in a forward complete Finsler manifold. The main goal is to show that the associated normal exponential map is regular in the sense of F.W. Warner (Am. J. of Math., 87, 1965). This leads to the proof of the fact that the normal exponential is non-injective near tangent focal points. As an application, following R.L. Bishop's work (Proc. Amer. Math. Soc., 65, 1977), we express the tangent cut locus as a closure of a certain set of points, called separating tangent cut points. This strengthens the results from the present authors' previous work (J. Geom. Anal., 34, 2024).
§ INTRODUCTION
One of the primary objects of study in Riemannian geometry is the geodesic, a (locally) distance-minimizing curve. A geodesic always arises as the solution to an initial value problem, which gives rise to the exponential map. Given a complete Riemannian manifold, it is a smooth map from the tangent bundle to the base manifold. The singularities of the exponential map are of particular interest, as they are related to variations of geodesics. For a given submanifold, one can define the normal bundle to it, and the restriction of the exponential map is then called the normal exponential map. The focal locus of the submanifold consists of the critical values of the normal exponential map. A closely related concept is that of the cut locus, originally introduced by Henri Poincaré <cit.>. The cut locus of a submanifold consists of points beyond which a globally distance-minimizing geodesic from the submanifold fails to be distance-minimizing. Both the cut locus and the focal locus have been studied extensively in the literature, see <cit.>.
Finsler manifolds, first studied by P. Finsler in his dissertation <cit.>, are a natural generalization of Riemannian ones. A Finsler metric on a manifold is a parametrized collection of Minkowski norms on each tangent space, which allows us to measure the length of a tangent vector. With this generality, we encounter certain challenges as well, primarily stemming from the fact that we cannot measure the angle between two tangent vectors, unlike the case of a Riemannian metric. Still, most of the results in Riemannian geometry can be translated to Finsler geometry, with suitable modifications. See <cit.> for a survey of results. In particular, the notions of cut and focal loci have their counterpart in a Finsler manifold.
The study of submanifolds in a Finsler manifold has been sporadic <cit.>. One of the first hurdles to cross is that in the absence of an inner product, the suitable generalization of a normal bundle of a submanifold is no longer a vector bundle, and it is only a topological manifold. In <cit.>, we have systematically studied the cut locus of a submanifold in a Finsler manifold, and showed that most of the well-known results in the Riemannian context still hold true. In the same spirit, in this article we study the focal locus of a submanifold in a Finsler manifold.
The primary goal of this article is to show that the normal exponential map associated to a submanifold in a Finsler manifold is regular in the sense of <cit.>, which is proved in <ref>. Along the way, we obtain <ref> and <ref>, which were stated without proof in <cit.> for submanifolds in Riemannian manifolds. For completeness, we give detailed proof in the Finsler setup. As a consequence, in <ref>, we obtain local normal forms for the normal exponential map near certain regular tangent focal points. This culminates in the following important result.
[<ref>]
The normal exponential map of a submanifold in a (forward) complete Finsler manifold is not injective in any neighborhood of a tangent focal point.
Let us point out why the above result is quite striking. In general, a smooth map can have singularities and yet be globally injective. As an example, consider the map x ↦ x^3 on ℝ, which has a singularity at the origin and yet is (globally) injective. On the other hand, by the inverse function theorem, a non-singular map is locally injective. The above result can be thought of as a partial converse to this statement for the normal exponential map. This was originally proved for the exponential map exp_p : T_p M → M at a point p of an analytic Finsler manifold in <cit.>, and later for C^∞ manifolds in <cit.>. Warner then proved the same for any smooth Riemannian or Finsler manifold. In recent years, similar results have been proved for sub-Riemannian exponential maps <cit.>.
Next, we look at the first tangent focal locus in particular, which, as the name suggests, consists of the first tangent focal point encountered along each geodesic emanating from a submanifold. It is well-known that if a point is in the cut locus, then either it is the endpoint of at least two distinct distance-minimizing geodesics, or it is a first focal point. Furthermore, the points where two or more distance-minimizing geodesics meet, called separating points in this article, are dense in the cut locus <cit.>. Both the cut points and separating points have their counterparts in the normal bundle, respectively called the tangent cut points and the separating tangent cut points. In the same vein as <cit.>, we prove the following.
[<ref>]
Given a compact submanifold of a forward complete Finsler manifold, the set of tangent cut points is the closure of the set of separating tangent cut points.
As a corollary, we reprove <cit.>, which states that the separating set is dense in the cut locus.
Conventions. Throughout this article, boldface symbols, e.g. 𝐮,𝐯,𝐱,𝐲, will always denote an element of a tangent space.
Organization of the article. In <ref>, we collect the preliminaries on Finsler geometry needed in this article: the Chern connection, geodesics and Jacobi fields, submanifolds and their normal cone bundles, N-geodesics and N-Jacobi fields, the cut and focal loci, and the Morse index theorem. In <ref>, we prove that the normal exponential map is regular in the sense of Warner, obtain local normal forms near regular tangent focal points, and deduce <ref> and <ref>.
§ PRELIMINARIES ON FINSLER GEOMETRY
In this section, we collect a few definitions and results on Finsler geometry. We refer to <cit.> as primary references.
§.§ Finsler Metric
Let M be a smooth manifold, and TM denotes its tangent bundle. A Finsler metric on M is a continuous function F: TM →ℝ satisfying the following properties.
* F is smooth on TM ∖ 0;
* for any p ∈ M, the restriction F_p := F|_T_pM is a Minkowski norm, i.e.,
* (Positive 1-homogeneity) for any λ>0 and 𝐯∈ T_pM∖{0}, we have F_p(λ𝐯)=λ F_p(𝐯), and
* (Strong Convexity) for all 𝐯∈ T_pM∖{0}, the symmetric tensor g_𝐯 on T_pM, called the fundamental tensor, is positive definite, where
g_𝐯(𝐯_1,𝐯_2) := 1/2 ∂^2/∂s_1 ∂s_2 |_s_1=s_2=0 ( F_p(𝐯 + s_1𝐯_1 + s_2𝐯_2) )^2.
F is reversible if F(-𝐯) = F(𝐯) holds for all 𝐯∈TM.
For 𝐯∈ T_p M ∖ 0, the associated Cartan tensor on T_p M is a symmetric 3-tensor defined as
C_𝐯(𝐯_1,𝐯_2,𝐯_3) := 1/4 ∂^3/∂s_1 ∂s_2 ∂s_3 |_s_1=s_2=s_3=0 ( F_p(𝐯 + s_1 𝐯_1 + s_2 𝐯_2 + s_3 𝐯_3) )^2.
For each 𝐯∈ T_p M ∖ 0 and 𝐮, 𝐰∈ T_p M, we have the following relations
C_𝐯(𝐯, 𝐮,𝐰) = C_𝐯(𝐮, 𝐯, 𝐰) = C_𝐯(𝐮, 𝐰, 𝐯) = 0.
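For orientation, in the Riemannian special case F_p(𝐯) = √(h_p(𝐯, 𝐯)) for a Riemannian metric h, the function F_p^2 is a quadratic form, and a direct computation gives g_𝐯(𝐯_1, 𝐯_2) = h_p(𝐯_1, 𝐯_2) for every 𝐯 ≠ 0, while the Cartan tensor vanishes identically. The fundamental tensor thus loses its dependence on the reference vector 𝐯, and C_𝐯 measures the failure of F_p to arise from an inner product.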
We extend the definition of the fundamental tensor and the Cartan tensor to vector fields. For any V ∈ Γ(TM ∖ 0) and X, Y, Z ∈ Γ TM defined near p ∈ M, we define
g_V(X, Y)(p) g_V_p(X_p, Y_p), C_V(X, Y, Z)(p) C_V_p( X_p, Y_p, Z_p ).
§.§ Chern Connection
Unlike the Levi-Civita connection in the Riemannian context, we do not get a canonical connection that is both torsion free and metric compatible in a suitable sense. In this article, we consider the Chern connection, which is a family of torsionless connections on a Finsler manifold.
<cit.>
For each V ∈ Γ(TM ∖ 0), we have a unique affine connection
∇^V: Γ TM ⊗Γ TM →Γ TM,
called the Chern connection, satisfying the following conditions for any X, Y, Z ∈Γ TM.
* (Torsion freeness) ∇^V_X Y - ∇^V_Y X = [X, Y].
* (Almost metric compatibility) X (g_V(Y,Z)) = g_V(∇^V_X Y, Z) + g_V(Y, ∇^V_X Z) + 2 C_V(∇^V_X V, Y, Z).
The value of ∇^V_X Y|_p is independent of the values of V and X other than at p, and consequently ∇^𝐯_𝐱 Y|_p is well-defined for any vectors 𝐯, 𝐱 ∈ T_p M with 𝐯 ≠ 0, and for any vector field Y defined near p. Similarly, given a curve γ : [a, b] → M and V ∈ Γγ^*TM, one can define ∇^V_X Y ∈ Γγ^*TM for any X, Y ∈ Γγ^*TM. This leads to a covariant derivative along γ.
<cit.>
Given a curve γ : [a,b] → M and W ∈Γγ^*TM, the covariant derivative along γ is defined as
D^W_γ : Γγ^* TM →Γγ^* TM,
which satisfies the following.
* (Linearity) For any X, Y ∈Γγ^*TM and scalars α, β∈ℝ we have
D^W_γ(α X + β Y) = α D^W_γ X + β D^W_γ Y.
* (Leibniz rule) For any X ∈Γγ^*TM and a smooth function f : [a,b] →ℝ, we have
D^W_γ(f X) = df/dt X + f D^W_γ X.
* (Almost metric compatibility) For any X, Y ∈Γγ^* TM, we have
d/dtg_W(X,Y) = g_W ( D^W_γ X, Y ) + g_W ( X, D^W_γ Y ) + 2 C_W ( D^W_γ W, X, Y ).
If for some curve γ we have γ̇(t) ≠ 0 for all time, then we shall use the notations
Ẋ := D^γ̇_γ X, Ẍ := D^γ̇_γ Ẋ = D^γ̇_γ D^γ̇_γ X, for any X ∈ Γγ^*TM,
provided the curve γ is understood from the context. A vector field X ∈Γγ^*TM is said to be parallel with respect to γ if Ẋ = 0.
§.§ Geodesics and Jacobi Fields
Let us denote the space of piecewise smooth paths γ : [a,b] → M as 𝒫 = 𝒫([a,b]). We have two functionals defined on this path space, namely the length and the energy functionals.
L : 𝒫 ⟶ ℝ, γ ⟼ ∫_a^b F(γ̇(t)) dt,
E : 𝒫 ⟶ ℝ, γ ⟼ 1/2 ∫_a^b F(γ̇(t))^2 dt.
A curve γ∈𝒫 is a geodesic if it is a critical point of the energy functional with respect to proper variations. Recall that given a piecewise smooth vector field W ∈Γγ^* TM, a W-variation of γ is a piecewise smooth map Λ : (-ϵ, ϵ) × [a,b] → M satisfying
Λ(0,t) = γ(t), .∂/∂ s|_s=0Λ(s, t) = W(t),
for all (s,t) in the domain. The variation is proper if W(a) = 0 = W(b), or equivalently, if Λ(s, a) = γ(a) and Λ(s, b) = γ(b) for all s. Geodesics are locally distance minimizing, where we have the Finsler distance between p, q ∈ M defined as
d(p, q) = inf{ L(γ) | γ∈𝒫, γ(a) = p, γ(b) = q }.
In general, geodesics fail to be globally distance minimizing. We say a geodesic γ is a global distance minimizer (or simply a minimizer) if γ is unit-speed, and d(γ(a), γ(b)) = b - a = L(γ).
Geodesics are precisely the solutions to an initial value problem known as the geodesic equation (e.g., <cit.>), which can be written succinctly as γ̈= 0 with the notation as above. As such, geodesics are always smooth, and given a vector 𝐯∈ T_p M, we have a unique maximal geodesic γ_𝐯 : [0,ℓ] → M satisfying
γ_𝐯(0) = p, γ̇_𝐯(0) = 𝐯.
Note that, unlike Riemannian geometry, due to the asymmetry of the Finsler metric, the reversed curve γ̅:[0, ℓ] → M defined by γ̅(t) = γ(ℓ - t) need not be a geodesic. Nevertheless, γ̅ is a geodesic with initial velocity -γ̇(ℓ) for the reverse Finsler metric F̅ defined by F̅(𝐯) = F(-𝐯) for all 𝐯∈ TM <cit.>.
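For example, a Randers metric F(𝐯) = √(h(𝐯, 𝐯)) + β(𝐯), where h is a Riemannian metric and β is a 1-form with ‖β‖_h < 1, has the reverse metric F̅(𝐯) = √(h(𝐯, 𝐯)) − β(𝐯). Unless β = 0, the metric is not reversible, and a geodesic traversed backwards is in general no longer a geodesic of F, though it is always one of F̅.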
A Finsler manifold (M, F) is said to be forward complete if for all 𝐯∈ TM, the geodesic γ_𝐯 is defined for all time [0, ∞). We say (M, F) is backward complete if (M, F̅) is forward complete. By the Hopf-Rinow theorem (<cit.>), if a Finsler manifold (M, F) is forward or backward complete, then given any two points p, q ∈ M there exists a minimizer with respect to F, and another (possibly distinct) minimizer with respect to F̅ joining p to q. Throughout this article, we shall assume (M, F) to be a forward complete Finsler manifold, unless otherwise stated.
The exponential map exp : TM → M at a point p ∈ M is defined as exp_p(𝐯) = γ_𝐯(1) for any 𝐯∈ T_p M, where γ_𝐯 : [0,∞) → M is the unique geodesic starting at p with initial velocity 𝐯.
It follows from the theory of ordinary differential equations that the exponential map is smooth on TM ∖ 0, but only C^1 on the whole tangent bundle; see <cit.> for details.
§.§.§ Jacobi Fields
Given a geodesic γ : [a,b] → M, consider a geodesic variation Λ : (-ϵ, ϵ) × [a,b] → M of γ, that is, for each -ϵ < s < ϵ we require the curve Λ_s : [a,b] → M given by Λ_s(t) Λ(s, t) to be a geodesic.
Let γ : [a,b] → M be a geodesic. A vector field J ∈Γγ^* TM along γ is called a Jacobi field along γ if there exists a geodesic variation Λ : (-ϵ, ϵ) × [a,b] → M of γ satisfying
. ∂/∂ s|_s=0Λ(s,t) = J(t), t∈ [a,b].
Given a geodesic γ : [a, b] → M, and for any U, W ∈Γγ^*TM, one can define a tensor field R^γ (γ̇, U) W along γ (see <cit.>). Then, a vector field J ∈Γγ^*TM is a Jacobi field if and only if it satisfies the Jacobi equation
D^γ̇_γ D^γ̇_γ J - R^γ(γ̇, J)γ̇= 0.
Since the Jacobi equation is a second order ODE, given the initial data 𝐮, 𝐯 ∈ T_γ(a)M, there exists a unique Jacobi field J along γ satisfying J(a) = 𝐮 and J̇(a) = D^γ̇_γ J(a) = 𝐯. In particular, the collection of all Jacobi fields along γ forms a vector space of dimension 2 dim M. Let us make an important observation regarding the zeros of a Jacobi field.
Given a non-vanishing Jacobi field J along a geodesic γ : [0, ℓ] → M, if J(t_0) = 0 for some 0 ≤ t_0 ≤ ℓ, then J̇(t_0) ≠ 0. Moreover, the zeros of J are isolated.
Suppose J(t_0) = 0 for some 0 ≤ t_0 ≤ ℓ. If possible, suppose J̇(t_0) = 0. If t_0 = 0, then we must have J ≡ 0, by the uniqueness of Jacobi fields. Suppose t_0 > 0. We have a geodesic variation Λ : (-ϵ, ϵ) × [0, ℓ] → M such that J(t) = ∂/∂s |_s=0 Λ(s, t). Define Λ̅ : (-ϵ, ϵ) × [0, t_0] → M by Λ̅(s, t) = Λ(s, t_0 - t). Then, Λ̅ is a geodesic variation with respect to the reverse Finsler metric F̅. In particular, J̅(t) := ∂/∂s |_s=0 Λ̅(s, t) = J(t_0 - t) is a Jacobi field along the geodesic γ̅(t) = γ(t_0 - t) defined on [0, t_0], with respect to F̅. By our assumption,
0 = D^γ/dt|_t=t_0 J(t) = ∂^2/∂t∂s|_(0,t_0) Λ(s, t) = - ∂^2/∂t∂s|_(0,0) Λ̅(s, t) = - D^γ̅/dt|_t=0 J̅(t).
But then J̅ ≡ 0 along γ̅, as the initial conditions are J̅(0) = 0 and D^γ̅/dt|_t=0 J̅(t) = 0. Consequently, we have J ≡ 0 along γ, a contradiction. Hence, J(t_0) = 0 implies J̇(t_0) ≠ 0.
Now, let us fix some frame of parallel vector fields { e_1(t), …, e_n(t) } along γ, near t_0, where n = dim M. We can write J = ∑_i=1^n J^i(t) e_i(t) near t_0. Since ė_i = 0, we have 0 ≠ J̇(t_0) = ∑ J̇^i(t_0) e_i(t_0). Hence, for some 1 ≤ i_0 ≤ n we get J^i_0(t_0) = 0 but J̇^i_0(t_0) ≠ 0. Consequently, J^i_0 is locally injective near t_0, and in particular, J^i_0 is non-zero in a deleted neighborhood of t_0. It follows that the zeros of J are isolated.
§.§ Submanifolds in Finsler Manifolds
Given a submanifold N of a Finsler manifold (M, F), we consider the normal cone bundle as the natural replacement for normal bundles.
Given a submanifold N ⊂ M, the set
ν_p = ν_p(N) = {𝐯∈ T_p M ∖{0} | g_𝐯(𝐯,𝐰) = 0 ∀𝐰∈ T_p N}∪{0}
is called the normal cone of N at p ∈ N. The set ν = ν(N) = ∪_p ∈ Nν_p(N) is called the normal cone bundle of N. The unit normal cone bundle of N is denoted as S(ν) = ∪_p ∈ N S(ν_p), where S(ν_p) {𝐯∈ν_p | F_p(𝐯) = 1 }.
Given 0 ≠ 𝐧 ∈ ν_p, we have a direct sum decomposition of T_p M defined using the fundamental tensor g_𝐧, which is nondegenerate. In particular, for any 𝐯 ∈ T_p M we can uniquely write
𝐯 = 𝐯^⊤_𝐧 + 𝐯^⊥_𝐧∈ T_p N ⊕( T_p N )^⊥_𝐧.
It should be noted that ν(N) is not a vector bundle in general; in fact, it is a cone bundle. That is to say, for any 0 ≠ 𝐯 ∈ ν we have λ𝐯 ∈ ν for all λ ≥ 0. We shall denote ν̂_p := ν_p ∖ { 0 }, and ν̂ := ∪_p∈N ν̂_p is then called the slit cone bundle. It follows that ν̂ (resp. S(ν)) is a smooth submanifold of TM of dimension dim M (resp. dim M − 1), whereas ν is only a topological submanifold. In <ref>, we shall see a natural way to identify the tangent space to ν̂ at some 𝐯.
Recall the Legendre transformation ℒ : TM → T^*M <cit.>, which is a fiber-wise homeomorphism, restricting to a C^∞-diffeomorphism in the complement of the zero sections. For any 𝐯, 𝐰∈ TM we have
ℒ(𝐯)(𝐰) =
g_𝐯(𝐯, 𝐰), 𝐯 ≠ 0,
0, 𝐯 = 0.
It is easily seen that ℒ maps ν bijectively onto the annihilator bundle of TN. Thus, the cone bundle ν is fiber-wise (non-linearly) homeomorphic to a vector bundle, and ν̂ is C^∞-diffeomorphic to the complement of the zero section in the annihilator bundle. As a consequence, for any 𝐯∈ν̂, we can define a smooth local extension V ∈Γν̂ of 𝐯 via ℒ.
Given 𝐧∈ν̂_p, we have the second fundamental form of N in the direction of 𝐧 defined as
Π^𝐧 : T_p N ⊙ T_p N ⟶( T_p N )^⊥_g_𝐧
( 𝐱, 𝐲) ⟼ - ( ∇^𝐧̃_X Y |_p)^⊥_𝐧,
where X, Y ∈Γ TN and 𝐧̃∈Γν̂ are arbitrary local extensions of 𝐱, 𝐲, 𝐧 respectively. The map is well-defined, as the assignment can be seen to be C^∞(N)-linear. Furthermore, as the Chern connection is torsion-free, we have
Π^𝐧( 𝐱, 𝐲) - Π^𝐧( 𝐲, 𝐱) = ( ∇^𝐧̃_X Y|_p - ∇^𝐧̃_Y X|_p )^⊥_𝐧 = ( [X, Y]_p )^⊥_𝐧 = 0.
In other words, Π^𝐧 is a symmetric linear map. Taking adjoint with respect to g_𝐧, we define the shape operator A_𝐧 : T_p N → T_p N of N along 𝐧 via the equation
g_𝐧( A_𝐧𝐱, 𝐲) = g_𝐧( 𝐧, Π^𝐧( 𝐱, 𝐲) ), 𝐱, 𝐲∈ T_p N.
For any 0 ≠ 𝐧 ∈ ν_p and 𝐱 ∈ T_p N, we have
A_𝐧(𝐱) = ( ∇^𝐧̃_X 𝐧̃|_p )^⊤_𝐧,
where X ∈Γ TN, 𝐧̃∈Γν̂ are arbitrary local extensions of 𝐱, 𝐧 respectively.
For any 𝐲 ∈ T_p N, get an extension Y ∈ Γ TN, so that g_𝐧̃( 𝐧̃, Y ) = 0. Then,
0 = X( g_𝐧̃( 𝐧̃, Y ) ) = g_𝐧̃( ∇^𝐧̃_X 𝐧̃, Y ) + g_𝐧̃( 𝐧̃, ∇^𝐧̃_X Y ) + 2C_𝐧̃( ∇^𝐧̃_X 𝐧̃, 𝐧̃, Y ).
The last term vanishes by <ref>. Hence, evaluating at p we have
g_𝐧( A_𝐧𝐱, 𝐲)
= g_𝐧( 𝐧, Π^𝐧( 𝐱, 𝐲) )
= g_𝐧( 𝐧, - ( ∇^𝐧̃_X Y|_p )^⊥_𝐧)
= g_𝐧( 𝐧, -∇^𝐧̃_X Y|_p ), as g_𝐧( 𝐧, -( ∇^𝐧̃_X Y|_p )^⊤_𝐧) = 0
= g_𝐧( ∇^𝐧̃_X 𝐧̃|_p, 𝐲)
= g_𝐧( ( ∇^𝐧̃_X 𝐧̃|_p )^⊤_𝐧, 𝐲), as g_𝐧( ( ∇^𝐧̃_X 𝐧̃|_p )^⊥_𝐧, 𝐲) = 0.
Since 𝐲∈ T_p N is arbitrary and g_𝐧|_T_p N is nondegenerate, we have the claim.
Given N ⊂ M, the normal exponential map exp^ν: ν(N) → M is defined as the restriction of the exponential map to the cone bundle ν(N).
As ν̂ is a smooth submanifold of TM, the restricted map exp^ν|_ν̂ is smooth. We shall denote the restriction as
ℰ := exp^ν|_ν̂.
§.§ N-Geodesics and N-Jacobi Fields
Given a submanifold N ⊂ M, we have the subspace of piecewise smooth paths
𝒫_N = 𝒫_N([a,b]) = {γ∈𝒫([a,b]) | γ(a) ∈ N}
starting at N. For any γ∈𝒫_N we say γ is a curve joining N to γ(b). Given q ∈ M, the distance from N to q is then defined as
d(N,q) := inf{ L(γ) | γ ∈ 𝒫_N, γ(b) = q }.
We identify the tangent space of 𝒫_N at a given γ∈𝒫_N as the infinite dimensional vector space consisting of piecewise smooth vector fields W ∈Γγ^* TM with W(a) ∈ T_γ(a) N. Given W ∈ T_γ𝒫_N, a W-variation is then a piecewise smooth map Λ : (-ϵ,ϵ) × [a,b] → M satisfying
Λ(0, t) = γ(t), . ∂/∂ s|_s=0Λ(s, t) = W(t), Λ(s, a) ∈ N,
for all (s,t) in the domain of the definition. The variation is proper, if furthermore we have W(b) = 0, or equivalently if Λ(s, b) = γ(b) for all s.
A piecewise C^1 curve γ : [a,b] → M in 𝒫(N) is called an N-geodesic if γ is a critical point of the restricted energy functional E|_𝒫(N) with respect to proper variations. An N-geodesic γ is called an N-segment joining N to γ(b) if γ is unit-speed, and d(N, γ(b)) = b - a = L(γ).
We recall the following useful lemma, which can be compared to <cit.>.
<cit.>
Suppose, γ_i : [0, ℓ_i] → M are unit-speed minimizers joining p_i = γ_i(0) to q_i = γ_i(ℓ_i), where ℓ_i = d(p_i, q_i). Suppose ℓ_i →ℓ. If either
(a) p_i → p and F is forward complete, or (b) q_i → q and F is backward complete,
then a subsequence of γ_i converges uniformly to a minimizer γ, which satisfies L(γ) = ℓ.
As an immediate consequence, we get the following.
<cit.>
Suppose (M, F) is a forward complete Finsler manifold, and N is a closed submanifold of M. Let γ_i : [0, ℓ_i] → M be N-segments joining N to q_i := γ_i(ℓ_i). Suppose that either
(𝖧) (a) N is compact, or (b) F is backward complete as well.
If ℓ_i →ℓ and q_i → q, then there exists an N-segment γ : [0, ℓ]→ M joining N to q, with a subsequence γ_i →γ. In particular, for any q ∈ M there exists an N-segment γ joining N to q, with d(N, q) = L(γ).
We would like to point out that the hypothesis (𝖧) is automatically satisfied for a closed (not necessarily compact) submanifold of a complete Riemannian manifold, since the notions of forward and backward completeness coincide there. On the other hand, in a forward but not backward complete Finsler manifold, the distance from a closed submanifold to a point may not even be achieved on the submanifold; see <cit.> for an example.
§.§.§ N-Jacobi Fields
Given a unit-speed N-geodesic γ : [a,b]→ M, a variation Λ : (-ϵ, ϵ) × [a,b] → M is said to be an N-geodesic variation if for each -ϵ < s < ϵ, the curve Λ_s : [a,b] → M is an N-geodesic.
Let γ : [a,b] → M be a unit-speed N-geodesic. A vector field J ∈Γγ^*TM is called an N-Jacobi field of γ if there exists an N-geodesic variation Λ:(-ϵ,ϵ)× [a,b]→ M satisfying
.∂/∂ s|_s=0Λ(s,t) = J(t), t∈[a,b].
We have the following characterization of an N-Jacobi field via the Jacobi equation.
<cit.>
Given an N-geodesic γ:[a,b]→ M with initial velocity 𝐯 = γ̇(a), a vector field J ∈Γγ^*TM is an N-Jacobi field if and only if it satisfies the following initial value problem
D^γ̇_γ D^γ̇_γ J - R^γ(γ̇, J) γ̇ = 0, J(a) ∈ T_γ(a) N, J̇(a) - A_𝐯( J(a) ) ∈ ( T_γ(a) N )^⊥_𝐯.
Given an N-geodesic γ : [0, b] → M, consider the vector field J(t) = t γ̇(t) along γ. Then, J̇ = γ̇ + t γ̈ = γ̇, so J̈ = 0. On the other hand, R^γ(γ̇, J)γ̇ = t R^γ(γ̇, γ̇) γ̇ = 0 by the anti-symmetry of the tensor. Thus, J satisfies the Jacobi equation. Also, J(0) = 0 ∈ T_γ(0) N and J̇(0) = γ̇(0) ∈ ν_γ(0). Since A_γ̇(0)( J(0) ) = 0, it follows that J is an N-Jacobi field along γ. Note that J(b) ≠ 0.
As an N-Jacobi field is itself a Jacobi field, by <ref>, zeros of N-Jacobi fields are isolated as well. We shall need the following result.
Given an N-geodesic γ : [0, ℓ] → M and two N-Jacobi fields J, K along γ, we have g_γ̇(J, K̇) = g_γ̇(J̇, K).
Since γ is a geodesic, we have D^γ̇_γγ̇= 0. Hence, for any two vector fields X, Y ∈Γγ^*TM we have
d/dt g_γ̇( X, Y ) = g_γ̇( Ẋ, Y ) + g_γ̇( X, Ẏ) + 2 C_γ̇( D^γ̇_γγ̇, X, Y )_0.
Now, for two Jacobi fields J, K along γ we compute
d/dt[ g_γ̇( J, K̇) - g_γ̇( J̇, K )]
= g_γ̇( J, K̈) - g_γ̇( J̈, K )
= g_γ̇( J, R^γ_γ̇(γ̇, K) γ̇) - g_γ̇( R^γ_γ̇(γ̇, J) γ̇, K ), by <ref>
= g_γ̇( J, R^γ_γ̇(γ̇, K) γ̇) - g_γ̇( R^γ_γ̇(γ̇, K) γ̇, J ), by symmetry of the curvature tensor
= 0.
Now, evaluating at t = 0, from <ref> and <ref> we get
g_𝐯( J(0), K̇(0) ) = g_𝐯( J(0), A_𝐯( K(0) ) ) = g_𝐯( 𝐯, Π^𝐯( J(0), K(0) )).
Since the second fundamental form Π^𝐯 is symmetric, we get g_γ̇( J(0), K̇(0) ) - g_γ̇( J̇(0), K(0) ) = 0. Consequently,
g_γ̇( J, K̇) = g_γ̇( J̇, K ) holds for all time t.
§.§ Cut Locus of a Submanifold
The cut locus of a point p ∈ M is the set consisting of all q ∈ M such that there exists a minimizer from p to q, any extension of which fails to be distance minimizing. We denote the cut locus of p by Cu(p). Generalizing this notion to an arbitrary submanifold, we have the following definition.
Given a submanifold N ⊂ M, the cut locus of N consists of points q ∈ M such that there exists an N-segment joining N to q, whose extension fails to be an N-segment. Given 𝐯∈ S(ν), the cut time of 𝐯 is defined as
ρ(𝐯) := sup{ t > 0 | d(N, γ_𝐯(t)) = t },
where γ_𝐯(t) = exp^ν(t𝐯) is the unique N-geodesic with initial velocity 𝐯.
Note that we allow ρ to take the value ∞, although if M is compact then ρ(𝐯) < ∞ for any 𝐯∈ S(ν). The map ρ: S(ν ) → [0, ∞] is continuous <cit.>. It follows from <cit.> that ρ is always strictly positive for any closed N, not necessarily compact. The tangent cut locus of N is defined as
C̃u(N) := { ρ(𝐯) 𝐯 | 𝐯 ∈ S(ν), ρ(𝐯) < ∞ } ⊂ ν.
It follows from <ref> that Cu(N) = exp^ν(C̃u(N)).
In order to characterize the points of Cu(N), we introduce the notion of separating sets, originally called the several geodesics set in <cit.>. In the terminology of <cit.>, this is also known as ordinary cut points, see <ref>.
Given a submanifold N ⊂ M, a point p ∈ M is said to be a separating point of N if there exist two distinct N-segments joining N to p. The collection of all separating points of N is called the separating set of N, denoted Se(N).
Under hypothesis (𝖧), it follows that Cu(N) is the closure of Se(N) <cit.>. In order to describe the points in Cu(N) ∖ Se(N), we introduce the notion of focal points of N.
§.§ Focal Locus of a Submanifold
Given a submanifold N ⊂ M, a vector 𝐯 ∈ ν̂ is said to be a tangent focal point of N if
d_𝐯(exp^ν|_ν̂) : T_𝐯ν̂ → T_exp^ν(𝐯)M
is degenerate, i.e., if 𝐯 is a critical point of exp^ν|_ν̂. The nullity of this map is known as the multiplicity of the tangent focal point 𝐯. The collection of all tangent focal points is called the tangent focal locus of N, denoted ℱ = ℱ(N) ⊂ ν̂. The focal locus of N is defined as the image of the tangent focal locus under the exp^ν map.
In other words, the focal locus of N consists of the critical values of the (restricted) normal exponential map exp^ν|_ν̂. The (first) focal time for 𝐯∈ S(ν) is defined as
λ(𝐯) := inf{ t > 0 | d_t𝐯( exp^ν|_ν̂ ) is degenerate }.
The N-geodesic γ_𝐯 cannot be an N-segment beyond λ(𝐯) <cit.>. Furthermore, under hypothesis (𝖧), we have a (non-exclusive) dichotomy: a point in Cu(N) is either a point in Se(N), or it is a first focal point along some N-segment <cit.>.
Focal locus can be naturally characterized via N-Jacobi fields with vanishing endpoint <cit.>. For 𝐯∈ν̂, denote the kernel
K_𝐯 := ker( d_𝐯( exp^ν|_ν̂ ) : T_𝐯ν̂ → T_exp^ν(𝐯) M = T_γ_𝐯(1) M ) = ker d_𝐯ℰ.
On the other hand, consider the space of N-Jacobi fields
𝒥_𝐯 := { J | J is an N-Jacobi field along γ_𝐯 },
and its subspace
𝒦_𝐯 := { J | J is an N-Jacobi field along γ_𝐯, with J(1) = 0 }.
For any 𝐱∈ T_𝐯ν̂, consider a curve α : (-ϵ, ϵ) →ν̂ such that α(0) = 𝐯 and α̇(0) = 𝐱. We then have a family of curves
Λ(s, t) := exp^ν( t α(s) ), 0 ≤ t ≤ 1, -ϵ < s < ϵ,
which is clearly an N-geodesic variation. In particular,
J_𝐱(t) = .∂/∂ s|_s=0Λ(s, t), 0 ≤ t ≤ 1,
is an N-Jacobi field along the N-geodesic γ_𝐯(t). Consider the map
Φ : T_𝐯ν̂ →𝒥_𝐯
𝐱 ↦ J_𝐱.
Φ is a well-defined linear isomorphism T_𝐯ν̂≅𝒥_𝐯, which restricts to an isomorphism K_𝐯≅𝒦_𝐯.
Let us denote c(s) = Λ(s, 0) = π∘α(s), which is a curve in N. Observe that ∂_t|_t = 0Λ(s, t) = α(s), as Λ(s, _) is an N-geodesic with initial velocity α(s). We compute the following.
J(0) = . ∂/∂ s|_s=0exp^ν (0 ·α(s)) = . ∂/∂ s|_s=0 c(s) = ċ(0) = dπ(α̇(0)) = dπ(𝐱).
D^γ̇_γ J (0) = D^γ̇_γ |_t = 0∂_s Λ(s, t) = D^γ̇_c|_s = 0∂_t|_t = 0Λ(s, t) = D^γ̇_c α (0).
Now, D^γ̇_c α (0) is determined by the first jet of α at 0, i.e., by 𝐱 = α̇(0) (see for example, <cit.>). Hence, J is uniquely determined by 𝐱, and in particular, Φ is a well-defined map. Note that for 𝐱∈ K_𝐯 = d_𝐯ℰ, we have
J(1) = . ∂/∂ s|_s=0exp^ν(α (s)) = d_𝐯ℰ (α̇(0)) = d_𝐯ℰ(𝐱) = 0.
Consequently, Φ restricts to a map K_𝐯→𝒦_𝐯.
Let us show that Φ is linear. Clearly,
J_a𝐱 + b𝐲(0) = dπ(a𝐱 + b𝐲) = a dπ(𝐱) + b dπ(𝐲) = a J_𝐱(0) + b J_𝐲(0).
Similarly, it follows from <cit.> that D^γ̇_γ J_a 𝐱 + b 𝐲(0) = a D^γ̇_γ J_𝐱(0) + b D^γ̇_γ J_𝐲(0). But then from the uniqueness of Jacobi fields, it follows that J_a 𝐱 + b 𝐲 = a J_𝐱 + b J_𝐲. Consequently, Φ is linear.
If 𝐱 = 0, we have J_𝐱(0) = 0 and J̇_𝐱(0) = 0, which shows that Φ is injective. On the other hand, suppose J ∈𝒥_𝐯 is an N-Jacobi field. Then, J is given by an N-geodesic variation, say, Ξ : (-ϵ, ϵ) × [0, 1] → M. We have a curve β in ν̂ such that Ξ(s, t) = exp^ν(tβ(s)). The above computation then shows that J = J_𝐱, where 𝐱 = β̇(0). If J(1) = 0, we have 𝐱∈ K_𝐯, since d_𝐯ℰ(𝐱) = J_𝐱(1) = 0. Thus, Φ is a linear isomorphism, which restricts to an isomorphism K_𝐯→𝒦_𝐯.
It is immediate that given a tangent focal point 𝐯 ∈ ν̂, the focal multiplicity dim K_𝐯 ≤ dim T_𝐯ν̂ = dim M. On the other hand, from <ref>, we have a Jacobi field J ∈ 𝒥_𝐯 ∖ 𝒦_𝐯. Thus, it follows from <ref> that dim K_𝐯 = dim 𝒦_𝐯 < dim 𝒥_𝐯 = dim M.
Using <ref>, we get the following useful result, which is analogous to <cit.> in the Riemannian context.
Suppose 𝐯 ∈ S(ν), and let k = dim ker d_ℓ𝐯ℰ for some ℓ > 0. Then, there exists a frame of N-Jacobi fields { J_1, …, J_n } along γ_𝐯 such that
* The subspaces Span⟨J̇_1(ℓ), … ,J̇_k(ℓ) ⟩ and Span⟨ J_k+1(ℓ), … , J_n(ℓ) ⟩ are g_ℓ𝐯 orthogonal.
* T_γ_𝐯(ℓ) M = Span⟨J̇_1(ℓ), …J̇_k(ℓ), J_k+1(ℓ), … , J_n(ℓ) ⟩, and
* { J_i(t) }_i=1^n is a basis of T_γ_𝐯(t) M for a deleted neighborhood 0 < |t - ℓ| < ϵ.
In particular, focal points of N along γ_𝐯 are discrete.
Fix a basis ker d_ℓ𝐯ℰ = Span⟨𝐱_1, …, 𝐱_k⟩, and extend it to a basis T_ℓ𝐯ν̂ = Span⟨𝐱_1, …, 𝐱_n⟩. Consider the N-Jacobi fields J_i := J_𝐱_i = Φ(𝐱_i) along γ_𝐯. By <ref>, { J_i } forms a basis of 𝒥_𝐯, and we have J_1(ℓ) = … = J_k(ℓ) = 0. Suppose, if possible, ∑_i=1^k a^i J̇_i(ℓ) = 0 for some scalars a^i. Consider the N-Jacobi field J = ∑_i=1^k a^i J_i. Then, J(ℓ) = 0 and J̇(ℓ) = ∑_i=1^k a^i J̇_i(ℓ) = 0. By <ref>, we have J = 0, which forces a^1 = … = a^k = 0. Thus, { J̇_1(ℓ), …, J̇_k(ℓ) } are linearly independent. Next, suppose ∑_i=k+1^n b^i J_i(ℓ) = 0 for some scalars b^i. Consider the N-Jacobi field K = ∑_i>k b^i J_i. As Φ in <ref> is a linear isomorphism, we have K = Φ(𝐲), where 𝐲 = ∑_i>k b^i 𝐱_i. Now, d_ℓ𝐯ℰ(𝐲) = K(ℓ) = ∑_i>k b^i J_i(ℓ) = 0 implies 𝐲 ∈ ker d_ℓ𝐯ℰ. By our choice of the 𝐱_i, we must have b^k+1 = … = b^n = 0. Thus, { J_k+1(ℓ), …, J_n(ℓ) } are linearly independent as well. Lastly, for any J = ∑_i=1^k a^i J_i and any K = ∑_i>k b^i J_i, we have from <ref>
g_ℓ𝐯( J̇(ℓ), K(ℓ) ) = g_ℓ𝐯( J(ℓ), K̇(ℓ) ) = 0.
Consequently, Span⟨J̇_1(ℓ), …, J̇_k(ℓ)⟩ and Span⟨J_k+1(ℓ), …, J_n(ℓ)⟩ are g_ℓ𝐯-orthogonal. A simple dimension count then proves that T_γ_𝐯(ℓ) M = Span⟨J̇_1(ℓ), …, J̇_k(ℓ), J_k+1(ℓ), …, J_n(ℓ)⟩.
Now, for 1 ≤ i ≤ k, as J_i(ℓ) = 0 and J̇_i(0) 0, i.e., J_i(t) has a 0 of order 1 at t = ℓ. We then have an N-Jacobi field
Y_i(t) =
J_i(t)/t - ℓ, t ℓ
J̇_i(ℓ), t = ℓ.
Set Y_i = J_i for k < i ≤ n. Clearly, Span⟨ Y_i(t) ⟩ = Span⟨ J_i(t) ⟩ for all t ℓ. Now, { Y_i(ℓ) } are linearly independent, and hence, in some neighborhood |t - ℓ| < ϵ, we have { Y_i(t) } are linearly independent as well. But then for 0 < |t - ℓ| < ϵ, we get { J_i(t) } are linearly independent.
Lastly, suppose t_0𝐯 is a focal point of N along γ_𝐯, for some 0 < |t_0 - ℓ| < ϵ. But then there exists some non-vanishing N-Jacobi field J along γ_𝐯 with J(t_0) = 0. Write J = ∑ c^i J_i for some scalars c^i. But then ∑ c^i J_i(t_0) = 0, which forces c^1 = … = c^n = 0 since { J_i(t_0) } are linearly independent, a contradiction. Thus, there is no focal point of N along γ_𝐯 for 0 < |t - ℓ| < ϵ, i.e., the focal points along γ_𝐯 are discrete.
§.§ Index Form and the Morse Index Theorem
From the second variation formula of the restricted energy functional E|_𝒫_N, we get the index form <cit.>.
Let γ : [a,b] → M be a unit-speed N-geodesic. Then, the index form ℐ_γ is defined for X, Y ∈ T_γ𝒫_N as
ℐ_γ(X,Y) := ∫_a^b [ g_γ̇(D^γ̇_γ X, D^γ̇_γ Y) - g_γ̇( R^γ(γ̇, X) Y, γ̇ ) ] dt - g_γ̇(a)( π^⊥ ∇^γ̇(a)_X Y, γ̇(a) ),
where π^⊥ is the second component projection of the canonical splitting T_γ(a)M = T_γ(a)N ⊕( T_γ(a)N )^⊥_g_γ̇(a).
It follows that ℐ_γ is a symmetric 2-form, and the kernel of the index form ℐ_γ consists of precisely the N-Jacobi fields J along γ, with J(b) = 0. In particular, γ is a nondegenerate critical point (i.e., an N-geodesic) of the energy functional, precisely when there are no N-Jacobi fields along γ vanishing at the endpoint. In other words, γ is nondegenerate if and only if γ(b) is not a focal point of N along γ.
The index of an N-geodesic γ is defined as
Ind(γ) := max{ dim K | ℐ_γ is negative definite restricted to some subspace K ⊂ T_γ𝒫_N }.
By the Morse index lemma for Finsler submanifolds <cit.>, it follows that for a unit-speed N-geodesic γ : [0, T] → M given as γ(t) = exp^ν(t 𝐯), such that γ(T) is not a focal point of N along γ, one has
Ind(γ) = ∑_0 < t < T dim ker( d_t𝐯( exp^ν|_ν̂ ) ) < ∞.
In other words, Ind(γ) equals the number of focal points of N along γ, counted with multiplicities, excluding the endpoints. Note that by <ref>, only finitely many non-zero terms can appear in the sum. For a fixed unit vector 𝐯∈ S(ν), based on the index of γ_𝐯, we can now define the following.
Given N ⊂ M and 𝐯∈ S(ν), the k^th focal time in the direction of 𝐯 is defined as
λ_k(𝐯) := sup{ t | Ind(γ_𝐯|_[0, t]) ≤ k - 1 },
where γ_𝐯 : [0, ∞) → M is the unit speed N-geodesic given by γ_𝐯(t) = exp^ν(t 𝐯). The k^th focal locus of N is defined as the set
{ exp^ν( λ_k(𝐯) 𝐯 ) | 𝐯 ∈ S(ν), λ_k(𝐯) < ∞ }.
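For instance, if N = {p} is a point in the round sphere S^n of constant curvature 1, the Jacobi fields vanishing at p are of the form J(t) = sin(t) E(t) for parallel E(t) orthogonal to γ̇(t), and thus vanish again exactly at t = π, 2π, …, each time with multiplicity n − 1. Consequently, λ_1(𝐯) = … = λ_n-1(𝐯) = π and λ_n(𝐯) = … = λ_2(n-1)(𝐯) = 2π for every 𝐯 ∈ S(ν) = S_pS^n.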
Clearly, λ_1(𝐯) = λ(𝐯) as defined in <ref>. In general, we have 0 < λ_1(𝐯) ≤λ_2(𝐯) ≤…. If γ_𝐯(T) is not a focal point of N along γ_𝐯, we can immediately see
Ind(γ_𝐯|_[0, T]) = ∑ dim K_λ_k(𝐯)𝐯,
where the sum runs over the finite set {λ_k(𝐯) < T }, and K_t 𝐯 is as in <ref>. We now prove a useful result about the index being locally constant near a non-focal point, analogous to <cit.>.
Suppose, for 𝐯_0 ∈ S(ν), we have that γ_𝐯_0(T) is not a focal point of N along γ_𝐯_0. Then, there exists a neighborhood 𝐯_0 ∈ U ⊂ S(ν) such that,
Ind(γ_𝐯|_[0, T]) = Ind(γ_𝐯_0|_[0, T]), for all 𝐯∈ U.
Let us first show that Ind(γ_𝐯|_[0, T]) ≥ Ind(γ_𝐯_0|_[0, T]) for 𝐯 sufficiently near 𝐯_0. As S(ν) is a smooth manifold of dimension n - 1, consider a chart 𝐯_0 ∈ U ⊂ S(ν), where U ≅ 𝔻^n-1. Then, we have a (n-1)-dimensional N-geodesic variation
Λ : U × [0, T] ⟶ M
(𝐯, t) ⟼exp^ν(t 𝐯).
Clearly, γ_𝐯 = Λ(𝐯, ·). Suppose for some X ∈ Γγ_𝐯_0^*TM, we have I_γ_𝐯_0(X, X) < 0. Since U is contractible, we can get an extension, say, X̃ ∈ ΓΛ^* TM. Denoting X̃_𝐯(t) = X̃(𝐯, t), we see that X̃_𝐯 ∈ Γγ_𝐯^* TM. Since the index form <ref> is continuous, shrinking U if necessary, we may assume that I_γ_𝐯(X̃_𝐯, X̃_𝐯) < 0 for all 𝐯 ∈ U. Now, suppose Ind(γ_𝐯_0|_[0, T]) = k. Choose a frame X^1, …, X^k ∈ Γγ_𝐯_0^*TM spanning a maximal subspace on which I_γ_𝐯_0 is negative definite, and satisfying
I_γ_𝐯_0(X^i, X^i) < 0, 1 ≤ i ≤ k.
As described above, shrinking U as necessary for finitely many times, we have X̃^i_𝐯∈Γγ_𝐯^* TM such that
I_γ_𝐯( X̃^i_𝐯, X̃^i_𝐯) < 0, 1 ≤ i ≤ k, 𝐯∈ U.
Since { X^i } are linearly independent, possibly shrinking U further, we have {X̃^i_𝐯} are linearly independent for all 𝐯 ∈ U. But then, Ind(γ_𝐯|_[0, T]) ≥ k = Ind(γ_𝐯_0|_[0, T]) for all 𝐯 ∈ U. Note that this inequality is true irrespective of whether γ_𝐯_0(T) is a focal point of N along γ_𝐯_0 or not.
Since d_T𝐯_0ℰ is nonsingular, we can assume that d_T𝐯ℰ is nonsingular for all 𝐯 ∈ U as well. Thus, γ_𝐯(T) is not a focal point of N along γ_𝐯. We show that Ind(γ_𝐯|_[0, T]) is constant on some neighborhood of 𝐯_0 in U. If not, choose some sequence 𝐯_j ∈ U such that Ind(γ_𝐯_j|_[0, T]) ≠ Ind(γ_𝐯_0|_[0, T]). Passing to a subsequence, we may assume that 𝐯_j → 𝐯_0, and
Ind(γ_𝐯_j|_[0, T]) > Ind(γ_𝐯_0|_[0, T]) ∀ j.
Let us denote K(j, k) := K_λ_k(𝐯_j)𝐯_j. Thus, dim K(j, k) is precisely the multiplicity of the k^th focal time of 𝐯_j. Clearly, 1 ≤ dim K(j, k) < n = dim M (see <ref>). Thus, inducting on k, by passing to subsequences as necessary, we may assume that for each fixed k, dim K(j, k) = η_k is constant for all j ≥ 1. It then follows that there are precisely, say, r many distinct focal times in (0, T) along γ_𝐯_j for all j ≥ 1. Furthermore, for each k, exactly η_k many of the values λ_i(𝐯_j) coincide. Let us rename these distinct focal times as 0 < μ_1(𝐯_j) < … < μ_r(𝐯_j) < T. Note that K(j, k) = ker( d_μ_k(𝐯_j)𝐯_jℰ ) for 1 ≤ k ≤ r, and we have
Ind(γ_𝐯_j|_[0, T]) = ∑_k=1^r dim K(j, k), j ≥ 1.
Let us now consider the K(j, k) as points in the Grassmann bundle Gr_η_k(Tν̂). Since the base-points 𝐯_j → 𝐯_0, passing to a subsequence, we may assume that K(j, k) converges to an η_k-dimensional subspace Z_k ⊂ T_t_k𝐯_0ν̂ for some time t_k > 0. Clearly μ_k(𝐯_j)𝐯_j → t_k𝐯_0 ⇒ μ_k(𝐯_j) → t_k, as F(𝐯_j) = 1 = F(𝐯_0). Since we only consider μ_k(𝐯_j) ≤ T, it follows that t_k ≤ T as well. Now, d_μ_k(𝐯_j)𝐯_jℰ vanishes on K(j, k), and hence by continuity, d_t_k𝐯_0ℰ vanishes on the limit subspace Z_k. In particular, t_k𝐯_0 is a focal point of N along γ_𝐯_0. Since, by assumption, γ_𝐯_0(T) is not a focal point, we must have 0 < t_k < T.
If the values t_k are distinct, then the Z_k are subspaces of the tangent spaces at distinct points of ν̂ along the ray R_𝐯_0 = { t𝐯_0 }. If they are not distinct, let us consider the situation t_k_1 = … = t_k_s = t_0 for some s ≥ 2, where no other t_k equals t_0. We show that Z_k_1 + … + Z_k_s is a direct sum in the tangent space T_t_0𝐯_0ν̂. Clearly, ∑ Z_k_i ⊂ K_t_0𝐯_0. Fix some α ≠ β ∈ { k_1, …, k_s }. For some 𝐚 ∈ Z_α and 𝐛 ∈ Z_β, get sequences 𝐚_j ∈ K(j, α) and 𝐛_j ∈ K(j, β) such that 𝐚_j → 𝐚, 𝐛_j → 𝐛. Consider the N-Jacobi fields J_𝐚_j = Φ(𝐚_j), J_𝐛_j = Φ(𝐛_j) along γ_𝐯_j. By <ref>, we have g_γ̇_𝐯_j( J̇_𝐚_j, J_𝐛_j ) = g_γ̇_𝐯_j( J_𝐚_j, J̇_𝐛_j ) for all time t. As J_𝐚_j( t ) = 0 for t = μ_α(𝐯_j), we have
g_γ̇_𝐯_j(μ_α(𝐯_j))( J̇_𝐚_j( μ_α(𝐯_j) ), J_𝐛_j( μ_α(𝐯_j) ) ) = 0, j ≥ 1.
On the other hand, it follows from <ref>, that there is an N-Jacobi field K_𝐛_j along γ_𝐯_j satisfying
K_𝐛_j(t) =
J_𝐛_j(t)/t - μ_β(𝐯_j) , t μ_β(𝐯_j)
J̇_𝐛_j( μ_β(𝐯_j) ), t = μ_β(𝐯_j).
Then we have,
g_γ̇_𝐯_j(μ_α(𝐯_j))( J̇_𝐚_j( μ_α(𝐯_j) ), K_𝐛_j( μ_α(𝐯_j) ) ) = 0, j ≥ 1.
Since μ_α(𝐯_j) → t_α = t_0 and μ_β(𝐯_j) → t_β = t_0 as j →∞, taking limit we get
g_γ̇_𝐯_0(t_0)( J̇_𝐚 (t_0), J̇_𝐛 (t_0) ) = 0.
Now, it follows from <ref> and <ref> that the linear map
Ψ : K_t_0 𝐯_0 ⟶ T_γ_𝐯_0(t_0) M
𝐱 ⟼J̇_𝐱(t_0)
is injective. The above discussion then shows that for any α ≠ β ∈ { k_1, …, k_s }, the subspaces Ψ(Z_α), Ψ(Z_β) are g_γ̇_𝐯_0(t_0)-orthogonal. Consequently, ∑_i=1^s Ψ( Z_k_i ) is a direct sum. Since Ψ is injective, it follows that ∑_i=1^s Z_k_i is a direct sum as well. Hence, we have obtained that Ind(γ_𝐯_0|_[0, T]) ≥ ∑_k=1^r dim Z_k.
Now, for each j ≥ 1 we get
Ind(γ_𝐯_j|_[0, T]) = ∑_k=1^r dim K(j, k) = ∑_k=1^r dim Z_k ≤ Ind(γ_𝐯_0|_[0, T]),
which is a contradiction to (<ref>). Hence, we have some neighborhood of 𝐯_0 in S(ν) on which Ind(γ_𝐯|_[0, T]) is constant.
For each k≥ 1, the functions λ_k : S(ν) → (0, ∞] is continuous.
Fix some 𝐯_0 ∈ S(ν). Suppose λ_k(𝐯_0) = T < ∞. Then, for ϵ > 0 small, we have from <ref> that γ_𝐯_0(T+ϵ) and γ_𝐯_0(T-ϵ) are not focal points of N along γ_𝐯_0. It follows from <ref> that there exists some neighborhood 𝐯_0 ∈ U ⊂ S(ν) such that for all 𝐯 ∈ U we have
Ind(γ_𝐯|_[0, T- ϵ]) = Ind(γ_𝐯_0|_[0, T-ϵ]) ≤ k - 1, and Ind(γ_𝐯|_[0, T+ϵ]) = Ind(γ_𝐯_0|_[0, T+ϵ])≥ k.
Consequently, for all 𝐯∈ U we have T - ϵ≤λ_k(𝐯) ≤ T + ϵ. Since ϵ > 0 is arbitrary, we see that λ_k is continuous at 𝐯_0 if λ_k(𝐯_0) < ∞.
Lastly, suppose λ_k(𝐯_0) = ∞. Then, for all T large we have Ind(γ_𝐯_0|_[0, T]) ≤ k - 1, and we may also assume that γ_𝐯_0(T) is not a focal point of N along γ_𝐯_0. Then, for some 𝐯_0 ∈ U ⊂ S(ν) we have, Ind(γ_𝐯|_[0, T]) = Ind(γ_𝐯_0|_[0, T]) ≤ k - 1, and hence λ_k(𝐯) ≥ T. Since T is arbitrary, we have λ_k is continuous at 𝐯_0 with λ_k(𝐯_0) = ∞ as well. This concludes the proof.
§ WARNER'S REGULARITY OF THE NORMAL EXPONENTIAL MAP
Suppose N is a closed submanifold of a forward complete Finsler manifold (M, F). In this section, we prove that ℰ is regular in the sense of Warner <cit.> while modifying the author's definition suitably. For any 𝐯∈ν̂, define the ray
R_𝐯 = { t 𝐯 | t > 0 }⊂ν̂.
We also need the following definition.
A subset C ⊂ ν̂ is said to be radially convex if C is connected, and if for each 𝐯 ∈ C the subset { t > 0 | t𝐯 ∈ C } ⊂ (0, ∞) is connected.
In this section, we prove the following.
Given a closed submanifold N of a forward complete Finsler manifold (M, F), the normal exponential map exp^ν: ν(N) → M satisfies the following three regularity properties.
(R1) For each 𝐯 ∈ ν̂, the derivative d_𝐯ℰ is nonvanishing on T_𝐯R_𝐯.
(R2) The map
K_𝐯 ⟶ T_ℰ(𝐯)M / Im d_𝐯ℰ
𝐱 ⟼J̇_𝐱(1)
is a linear isomorphism.
(R3) For each 𝐯 ∈ ν̂, there exists a radially convex open neighborhood 𝐯 ∈ U ⊂ ν̂, such that for each 𝐮 ∈ U, the number of critical points (counted with multiplicity) of ℰ on R_𝐮 ∩ U is a constant independent of 𝐮, and equals the dimension of K_𝐯.
To prove (R1), consider the curve η : (1-ϵ, 1+ϵ) → R_𝐯 given by η(t) = t𝐯, so that η(1) = 𝐯 and T_𝐯R_𝐯 = Span⟨η̇(1)⟩. Now,
d_𝐯ℰ(η̇(1)) = d/dt|_t=1 (ℰ ∘ η) = d/dt|_t=1 exp^ν(t𝐯) = γ̇_𝐯(1) ≠ 0.
This proves (R1).
For (R2), pick a basis K_𝐯 = Span⟨𝐱_1, …, 𝐱_k⟩ and extend it to a basis T_𝐯ν̂ = Span⟨𝐱_1, …, 𝐱_n⟩. It follows from <ref> that d_𝐯ℰ(𝐱_i) = J_𝐱_i(1). But then (R2) follows immediately from <ref> and <ref>.
Lastly, to show (R3), set 𝐯_0 = 𝐯/F(𝐯) ∈ S(ν). Then, 𝐯 = T𝐯_0, where T = F(𝐯). If 𝐯 is not a focal point of N along γ_𝐯_0, then one can easily choose a radially convex open neighborhood 𝐯 ∈ U ⊂ ν̂ so that d_𝐮ℰ is nonsingular for each 𝐮 ∈ U. The claim is then immediate. Now, suppose T𝐯_0 is a focal point. Since focal points along γ_𝐯_0 are discrete (<ref>), we have some ϵ > 0 so that T𝐯_0 is the only focal point of N along γ_𝐯_0 for |T - t| ≤ ϵ. Applying <ref> twice for both γ_𝐯_0(T - ϵ) and γ_𝐯_0(T + ϵ), we have a neighborhood 𝐯_0 ∈ V ⊂ S(ν) so that for all 𝐯 ∈ V we have
* γ_𝐯(T ±ϵ) are not focal points of N along γ_𝐯,
* Ind(γ_𝐯|_[0, T - ϵ]) = Ind(γ_𝐯_0|_[0, T - ϵ]), and
* Ind(γ_𝐯|_[0, T + ϵ]) = Ind(γ_𝐯_0|_[0, T + ϵ])
Consider the radially convex open set
U = { t𝐯 | T - ϵ < t < T + ϵ, 𝐯∈ V}⊂ν̂.
For any 𝐮∈ U, we have R_𝐮∩ U = { t 𝐮_0 | T - ϵ < t < T + ϵ}, where 𝐮_0 = 𝐮/F(𝐮)∈ V. By the Morse index lemma, the number of focal points counted with multiplicity on R_𝐮∩ U then equals
Ind(γ_𝐮_0|_[0, T+ϵ]) - Ind(γ_𝐮_0|_[0, T-ϵ]) = Ind(γ_𝐯_0|_[0, T+ϵ]) - Ind(γ_𝐯_0|_[0, T-ϵ]) = dim ker d_T𝐯_0ℰ = dim K_𝐯.
This concludes the proof of (R3).
In view of (R3), one defines a certain regularity for tangent focal points.
A tangent focal point 𝐯∈ν̂ is said to be regular if there exists a radially convex open neighborhood 𝐯∈ U ⊂ν̂ such that for each 𝐮∈ U, the map ℰ has at most one critical point on R_𝐮∩ U, where the multiplicity necessarily equals that of 𝐯. The set of regular tangent focal loci will be denoted as ℱ^reg⊂ℱ. A tangent focal point that is not regular is called a singular focal point, and the set of all singular tangent focal points will be denoted as ℱ^sing⊂ℱ.
Suppose 𝐯 ∈ ν̂ is a tangent focal point of multiplicity k. Then, there exist coordinate charts (U, x^1, …, x^n) and (V, y^1, …, y^n) around 𝐯 and γ_𝐯(1) = ℰ(𝐯) respectively, satisfying
d_t𝐯ℰ( ∂/∂x^i|_t𝐯 ) = f_i(t) ∂/∂y^i|_ℰ(t𝐯), for t with t𝐯 ∈ U,
where the f_i(t) are smooth functions such that f_i(t) ≠ 0 for k+1 ≤ i ≤ n, while for 1 ≤ i ≤ k the only zero of f_i(t) is at t = 1, where ḟ_i(1) > 0.
Pick a basis {𝐱_1, …, 𝐱_k} of K_𝐯 = ker d_𝐯ℰ, and extend it to a basis {𝐱_1, …, 𝐱_n} of T_𝐯ν̂. Let us write 𝐯 = ℓ𝐯_0 for 𝐯_0 ∈ S(ν) and ℓ = F(𝐯). As in the proof of <ref>, we get N-Jacobi fields J_1, …, J_n along the N-geodesic γ_𝐯_0. In particular, J_1(ℓ) = … = J_k(ℓ) = 0. Next, as in <ref>, we define the N-Jacobi fields Y_i for 1 ≤ i ≤ k, and set Y_i = J_i for i > k. Then, { Y_i(t) } is a basis for T_γ_𝐯_0(t) M in some small neighborhood of ℓ.
Consider some α_i : (-ϵ, ϵ) →ν̂ such that α_i(0) = 𝐯 and α̇_i(0) = 𝐱_i. Then, we have a vector field A_i(t) = . ∂/∂ s|_s = 0( t α_i(s) ) along the ray R_𝐯_0 = { t 𝐯_0 }. It is immediate that
d_t 𝐯_0ℰ( A_i(t) ) = . ∂/∂ s|_s=0ℰ( t α_i(s)) = J_i(t).
By construction, { A_i(ℓ) } = {𝐱_i } is a basis at T_𝐯ν̂ = T_ℓ𝐯_0ν̂. Also, in a deleted neighborhood of ℓ, we have { d_t 𝐯_0ℰ( A_i(t) ) } = { J_i(t) } is a basis for T_ℰ(t𝐯_0) M, and hence, { A_i(t) } is a basis of T_t 𝐯_0ν̂ for a neighborhood of ℓ. Then, by a standard argument (see <cit.> for a detailed proof), we have a coordinate system (U, x^1, …, x^n) around 𝐯 so that . ∂/∂ x^i|_t 𝐯 = A_i(t ℓ). Similarly, we get coordinate chart (V, y^1, … , y^n) around γ_𝐯(1) so that . ∂/∂ y^i|_γ_𝐯(t) = Y_i(tℓ). If we set f̃_i(t) = t - ℓ for 1 ≤ i ≤ k and f̃_i(t) = 1 for i > k, then J_i = f̃_i Y_i. We get,
d_t𝐯ℰ( . ∂/∂ x^i|_t𝐯) = d_tℓ𝐯_0ℰ( A_i(t ℓ) ) = J_i(t ℓ) = f̃_i(t ℓ) Y_i(t ℓ) = f_i(t ℓ) . ∂/∂ y^i|_γ_𝐯(t).
Setting f_i(t) = f̃_i(tℓ) concludes the proof.
We now state and give a sketch of proof for the following result, which is a direct generalization of <cit.>.
ℱ^reg is open and dense in ℱ. Furthermore, ℱ^reg is an embedded codimension 1 submanifold of ν̂, with T_𝐯ν̂ = T_𝐯ℱ^reg⊕ T_𝐯 R_𝐯 for all 𝐯∈ℱ^reg.
Given any open set U ⊂ ν̂, let us denote ℱ^reg_U := ℱ^reg ∩ U and ℱ^sing_U := ℱ^sing ∩ U. By definition of the regular focal loci, given any 𝐯 ∈ ℱ^reg we have some (radially convex) open set 𝐯 ∈ U ⊂ ν̂ such that every focal point in U is regular. In other words, ℱ^reg_U = ℱ ∩ U, proving that ℱ^reg is open in ℱ. Next, let 𝐯_0 ∈ ℱ^sing = ℱ ∖ ℱ^reg be a singular focal point. By (R3), we get an arbitrarily small radially convex neighborhood 𝐯_0 ∈ U ⊂ ν̂ such that for each 𝐮 ∈ U, the number of focal points on the sub-ray R_𝐮 ∩ U equals the multiplicity of 𝐯_0. Since 𝐯_0 is singular, there is some focal point 𝐯_1 ∈ U with at least two distinct focal points on R_𝐯_1 ∩ U. But then the focal multiplicity of 𝐯_1 is strictly less than that of 𝐯_0. If 𝐯_1 is regular, we are done. Otherwise, we repeat this argument, choosing successively nested neighborhoods. After finitely many steps, we either get a regular focal point, or we get a focal point of multiplicity 1. But every focal point of multiplicity 1 is automatically regular. Thus, U ∩ ℱ^reg ≠ ∅, proving that ℱ^reg is dense in ℱ.
In order to show that ℱ^reg is a codimension 1 submanifold of ν̂, we locally realize ℱ^reg as the zero set of some submersion. Fix 𝐯∈ℱ^reg with focal multiplicity, say, k. By <ref> we get coordinate charts (U, 𝐱^i) and (V, 𝐲^j), respectively around 𝐯 and ℰ(𝐯) = γ_𝐯(1). With these coordinates, we have the eigenvalues of dℰ, say, f_1, …, f_n as smooth functions on U. Denote by Δ : U →ℝ the (n-k+1)^th elementary symmetric polynomial in n variables, evaluated on the eigenvalues. Thus,
Δ(𝐮) = ∑_1 ≤ i_1 < … < i_k-1≤ n f_1(𝐮) ⋯f̂_i_1(𝐮) ⋯f̂_i_k-1(𝐮) ⋯ f_n(𝐮), 𝐮∈ U,
where the hats indicate omitted factors.
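For orientation (this observation is not needed in the proof): in the extreme case k = 1 each product contains all n factors, so that Δ = f_1 ⋯ f_n = det dℰ in these coordinates, while for k = n one simply has Δ = f_1 + ⋯ + f_n.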
It follows from <ref> that along the ray R_𝐯, we have
f_1(𝐯) = … = f_k(𝐯) = 0, f_k+1(𝐯) ≠ 0, … , f_n(𝐯) ≠ 0.
Furthermore, denoting by σ(t) = t𝐯 the radial curve, we see that the radial derivatives satisfy
σ̇(1)( f_i ) = d f_i ( σ̇(1) ) > 0, 1 ≤ i ≤ k.
Consequently, a simple computation gives us Δ(𝐯) = 0 and σ̇(1)(Δ) > 0. Shrinking U as necessary, while keeping it radially convex, we may assume that the radial derivative of Δ along each of the rays intersecting U is non-vanishing on U. In particular, Δ is a submersion on U. We show that Δ^-1(0) = ℱ^reg_U. Indeed, for any 𝐮∈ℱ^reg_U the multiplicity equals k, and thus d_𝐮ℰ has rank n - k. Consequently, at least one of the eigenvalues vanishes at 𝐮 in each of the product terms appearing in the sum of products Δ. Thus, ℱ^reg_U ⊂Δ^-1(0). For the converse, suppose Δ(𝐮) = 0 for some 𝐮∈ U. Pick the (unique) focal point 𝐮_0 ∈ R_𝐮∩ U. We have Δ(𝐮_0) = 0 = Δ(𝐮). Also, it follows from the radial convexity of U that the radial line joining 𝐮 and 𝐮_0 is contained in U, and moreover the radial derivative of Δ is non-vanishing on this line. Then, by the mean value theorem, we must have 𝐮 = 𝐮_0 ∈ℱ^reg_U. Thus, ℱ^reg_U = Δ^-1(0). The proof then follows.
Fix a connected component, say, 𝒞⊂ℱ^reg. Clearly 𝒞 is open in ℱ^reg, and hence a codimension 1 submanifold of ν̂. Furthermore, the focal multiplicities of points in 𝒞 are constant, say, k. At any 𝐯∈𝒞, we have two subspaces of T_𝐯ν̂, namely, T_𝐯𝒞 = T_𝐯ℱ^reg and K_𝐯 = ker d_𝐯ℰ. Let us denote,
𝒩_𝐯≔ K_𝐯∩ T_𝐯ℱ^reg, 𝐯∈𝒞.
We have that dim 𝒩_𝐯 is either k-1 or k; in the latter case K_𝐯⊂ T_𝐯ℱ^reg. Based on dim 𝒩_𝐯, we have a decomposition
𝒞 = 𝒞(k) ⊔𝒞(k-1).
Since the rank of d( ℰ|_𝒞) attains its maximum value (n-1) - (k-1) on 𝒞(k-1), we have that 𝒞(k-1) is an open submanifold of 𝒞. Also, since the union ∪_𝐯∈𝒞(k-1)𝒩_𝐯 is precisely the kernel of d( ℰ|_𝒞(k-1)), it is a rank (k-1) involutive distribution on 𝒞(k-1). By a similar argument, the union ∪_𝐯∈𝒞(k)𝒩_𝐯 = ∪_𝐯∈𝒞(k) K_𝐯 is also an involutive distribution of rank k on the open submanifold Int 𝒞(k) ⊂𝒞. We have the following result, generalizing <cit.>.
For any 𝐯∈ℱ^reg with focal multiplicity ≥ 2, we have K_𝐯⊂ T_𝐯ℱ^reg, and thus 𝒞(k-1) = ∅.
Note that for a component 𝒞⊂ℱ^reg consisting of tangent focal points of multiplicity 1, both 𝒞(1) and 𝒞(0) can be non-empty. We are now in a position to state the local form of the normal exponential map ℰ near a regular focal point.
Suppose 𝐯∈ν̂ is a regular tangent focal locus of N with multiplicity k. Let 𝒞 be the connected component of ℱ^reg containing 𝐯, and consider the decomposition 𝒞 = 𝒞(k) ⊔𝒞(k-1) (<ref>). Then, there exist coordinates { x^1,… x^n } and { y^1,… y^n } near 𝐯∈ν̂ and ℰ(𝐯) ∈ M, which can be arranged so that ℰ has special forms in the following cases.
* If 𝐯 has multiplicity k ≥ 2, then one can arrange so that
y^i ∘ℰ =
x^n · x^i, i = 1,… k,
x^i, i = k + 1, … , n.
* If 𝐯 has multiplicity k = 1 and furthermore 𝐯∈ Int 𝒞(k), then one can arrange so that
y^i ∘ℰ =
x^n · x^1, i=1,
x^i, i = 2, … , n.
* If 𝐯 has multiplicity k = 1 and furthermore, K_𝐯⊄T_𝐯ℱ^reg (i.e., if 𝐯∈𝒞(k-1) = 𝒞(0)), then one can arrange so that
y^i ∘ℰ =
x^1 · x^1, i = 1,
x^i, i = 2, … , n.
The proofs of the above two theorems follow in the same vein as <cit.>, using the notion of higher order tangent vectors. To keep the article self-contained, we have provided sketches of proof in the appendix (<ref>).
The only regular tangent focal points 𝐯 not considered in <ref> are those of the form 𝒞∖( 𝒞(0) ⊔ Int 𝒞(1) ) = 𝒞(1) ∖ Int 𝒞(1), where 𝒞 is a connected component of ℱ^reg consisting of multiplicity 1 tangent focal points. Clearly, these form a nowhere dense subset of ℱ^reg.
As an immediate corollary, we have the following.
Let N be a submanifold of a forward complete Finsler manifold (M, F). Then the normal exponential map ℰ = exp^ν|_ν̂ is not injective on any neighborhood of a tangent focal point 𝐯∈ν̂.
Since ℱ^reg is dense in ℱ, we only prove the statement for 𝐯∈ℱ^reg. Assume 𝒞⊂ℱ^reg is the connected component containing 𝐯, and consider the decomposition 𝒞 = 𝒞(k) ⊔𝒞(k-1). If 𝐯 has multiplicity k ≥ 2, then by <ref>, we have a k-dimensional foliation in 𝒞, such that ℰ maps each leaf to a single point. The claim is then immediate. Suppose k = 1. If 𝐯∈ Int 𝒞(k), we again have a one-dimensional foliation in 𝒞, so that each leaf is mapped to a single point by ℰ. If 𝐯∈𝒞(k-1), then by <ref> (3), we have a coordinate system for which ℰ looks like <ref>. Consequently, ℰ is not locally injective in a neighborhood of 𝐯. Lastly, suppose 𝐯∈𝒞∖( 𝒞(k-1) ⊔ Int 𝒞(k) ), which is nowhere dense by <ref>. Then in any neighborhood of 𝐯 there exists a 𝐯' ∈𝒞(k-1) ⊔ Int 𝒞(k), and hence in that neighborhood the map ℰ fails to be injective. This concludes the proof.
In <cit.>, it was proved that under hypothesis (𝖧), we have Cu(N) = cl Se(N), where cl denotes the closure. In the course of the proof, we reached a dichotomy, whose case (a) leads to a contradiction. We would like to point out that in view of <ref>, case (a) is immediately ruled out. In the next section, we shall prove an even stronger result following the ideas of <cit.>.
§ DECOMPOSITION OF THE TANGENT CUT LOCUS
As mentioned earlier, the cut locus Cu(N) consists of points that are either a first focal locus along an N-geodesic, or points admitting at least two N-segments, i.e., points of Se(N). In the normal bundle ν, we have the set of tangent cut points C̃u(N), which maps to Cu(N) under the normal exponential map.
A vector 𝐯∈C̃u(N) is called an ordinary (or separating) tangent cut point if there exists some 𝐰∈C̃u(N) with 𝐰≠𝐯 such that exp^ν(𝐯) = exp^ν(𝐰). Otherwise, 𝐯 is called a singular tangent cut point. We denote the set of all ordinary tangent cut points of N by S̃e(N).
It is clear that S̃e(N) is mapped to Se(N) under the normal exponential map. Let us denote by ℱ_1 ⊂ℱ the set of all first tangent focal loci. Then, it follows that
C̃u(N) ∖S̃e(N) ⊂ℱ_1(N).
Specializing <ref> to the first tangent focal locus, we obtain the following. Set ℱ^reg_1 ≔ℱ^reg∩ℱ_1, the set of regular first tangent focal loci.
ℱ^reg_1 is open in both ℱ_1 and ℱ^reg, and dense in ℱ_1. Furthermore, ℱ^reg_1 is an embedded codimension 1 submanifold of ν̂.
Suppose 𝐯∈ℱ^reg_1 has multiplicity k. As 𝐯 is regular, there exist arbitrarily small neighborhoods 𝐯∈ U ⊂ν̂ so that for each 𝐮∈ U, there is a unique tangent focal point on R_𝐮∩ U, which has multiplicity k. Choose a neighborhood basis, say, U_n so that ∩_n ≥ 1 U_n = {𝐯}. Suppose, if possible, for each n there is some 𝐯_n ∈ U_n such that the first tangent focal locus along 𝐯_n is 𝐰_n ≠𝐯_n. Since 𝐯_n →𝐯, by a similar argument as in <ref>, passing to a subsequence as necessary, we get 𝐰_n →𝐰, where 𝐰 is a tangent focal point on R_𝐯. As 𝐰_n is a first tangent focal point, we have λ_1(𝐰̂_n) = F(𝐰_n), where 𝐱̂ = 𝐱/F(𝐱)∈ S(ν). By <ref>, we have λ_1(𝐰̂_n) →λ_1(𝐰̂), and hence, λ_1(𝐰̂) = limλ_1(𝐰̂_n) = lim F(𝐰_n) = F(𝐰). But on R_𝐯, we have λ_1(𝐰̂) = λ_1(𝐯̂) = F(𝐯), which shows that 𝐰 = 𝐯, i.e., 𝐰_n →𝐯. Consequently, in every neighborhood U of 𝐯, for n large, we have two tangent focal points 𝐰_n ≠𝐯_n on R_𝐯_n∩ U, which contradicts the regularity of 𝐯. Hence, for some n_0 we must have that the unique tangent focal point on R_𝐮∩ U_n_0 is a first tangent focal point, for each 𝐮∈ U_n_0. Thus, ℱ∩ U_n_0 = ℱ^reg_1 ∩ U_n_0 = ℱ^reg∩ U_n_0, proving that ℱ^reg_1 is open in both ℱ^reg and ℱ_1.
To show that ℱ^reg_1 is dense in ℱ_1, consider some 𝐯∈ℱ_1 ∖ℱ^reg_1. Thus, 𝐯 is a singular first tangent focal point. In particular, in every neighborhood 𝐯∈ U ⊂ν̂, there exists some 𝐮∈ U so that R_𝐮∩ U has at least two tangent focal points. Choose a sequence 𝐯_n ∈ U of such tangent focal points, so that 𝐯_n →𝐯, and denote the first tangent focal point on R_𝐯_n by 𝐰_n. As argued in the previous paragraph, we can assume that 𝐰_n →𝐯, and in particular, 𝐰_n ∈ U for n large. Also, 𝐰_n has multiplicity strictly less than that of 𝐯. Repeating finitely many times, we can find some 𝐰∈ U which is a first tangent focal point and has multiplicity 1. But then 𝐰 is regular. In other words, 𝐰∈ U ∩ℱ^reg_1, proving that ℱ^reg_1 is dense in ℱ_1.
Since ℱ^reg_1 is open in ℱ^reg, it follows from <ref> that ℱ^reg_1 is a codimension 1 submanifold of ν̂.
Following <cit.>, we now prove the following.
Let N be a closed submanifold of a forward complete Finsler manifold (M, F), and suppose hypothesis (H) holds. Then, C̃u(N) = cl S̃e(N).
Since every non-focal tangent cut point is necessarily ordinary (<ref>), we consider some 𝐯∈C̃u(N) ∩ℱ = C̃u(N) ∩ℱ_1. We have the following cases to consider.
* Suppose 𝐯 is a regular focal locus, and of type (1) or (2) in <ref>. Then, we have an involutive distribution on ℱ^reg, defined near 𝐯, such that each leaf is mapped to a single point by exp^ν. In particular, the leaf passing through 𝐯, say ℒ, is mapped to q ≔exp^ν(𝐯). Consider a sequence 𝐯_i ∈ℒ∖{𝐯} with 𝐯_i →𝐯. We have the tangent cut points 𝐰_i ≔ρ(𝐯̂_i)𝐯̂_i, and the tangent focal points λ_1(𝐯̂_i) 𝐯̂_i = 𝐯_i. From <cit.>, we have ρ≤λ_1. If ρ(𝐯̂_i) = λ_1(𝐯̂_i) for some i, we get 𝐯_i ∈C̃u(N). But then 𝐯∈S̃e(N), as exp^ν(𝐯) = exp^ν(𝐯_i) and 𝐯≠𝐯_i.
Otherwise, assume that ρ(𝐯̂_i) < λ_1(𝐯̂_i) holds for all i. As the cut time map ρ is continuous, we get 𝐰_i →𝐯. As 𝐯_i is a first tangent focal locus, we must have 𝐰_i ∈S̃e(N). But then in any neighborhood of 𝐯 we have some element of S̃e(N), proving 𝐯∈ cl S̃e(N).
* Next, suppose 𝐯 is a regular focal locus of type (3) as in <ref>. That is, 𝐯 has multiplicity 1, and K_𝐯∩ T_𝐯ℱ^reg = 0. From the normal form <ref>, we get coordinate systems x^1,…, x^n and y^1,… ,y^n around 𝐯∈ U ⊂ν̂ and exp^ν(𝐯) ∈ V ⊂ M respectively, so that exp^ν folds the neighborhood U around the { x^1 = 0 } hyperplane. In other words, the image of U under exp^ν is contained in the region { y^1 ≥ 0 } of V. Furthermore, the singularities of exp^ν in U can only appear along { x^1 = 0 }, and thus the focal points in V appear on { y^1 = 0 }. Set q ≔exp^ν(𝐯) and ℓ≔ d(N, q), and denote Û≔{𝐮̂ | 𝐮∈ U }⊂ S(ν). Without loss of generality, we can assume that U is radially convex, so that Û is an open neighborhood of 𝐯̂ in S(ν), and furthermore, U = { t𝐮 | 𝐮∈Û, |t - ℓ| < ϵ} for some ϵ > 0.
Now, pick a sequence q_i ∈ V with y^1(q_i) < 0, such that q_i → q = exp^ν(𝐯). Say γ_𝐰_i : [0, ℓ_i]→ M is a unit-speed N-segment joining N to q_i, where ℓ_i = d(N, q_i) and 𝐰_i ∈ S(ν). As q_i → q, we have ℓ_i →ℓ. Then, for i large, we must have 𝐰_i ∉Û, since otherwise ℓ_i 𝐰_i ∈ U ⇒ y^1(q_i) ≥ 0, a contradiction. As S(ν) is compact and the set of all such 𝐰_i avoids the open set Û, passing to a subsequence, we get a convergent sequence 𝐰_i →𝐰 in S(ν) ∖Û such that exp^ν(ℓ_i 𝐰_i) = q_i. By <ref>, we see that γ_𝐰(t) = exp^ν(t 𝐰) is an N-segment of length ℓ, joining N to q. Clearly, ℓ𝐰≠𝐯. Hence, we get 𝐯∈S̃e(N) in this case.
* Lastly, assume that 𝐯 is a focal point not covered in <ref>. Since 𝐯∈ℱ_1, by <ref>, 𝐯 is the limit of a sequence 𝐯_i ∈ℱ^reg_1. Also, in view of <ref>, we may assume that each 𝐯_i is of type (1), (2) or (3) as in <ref>. Denote 𝐰_i ≔ρ(𝐯̂_i) 𝐯̂_i, and note that 𝐰_i →𝐯 as in case (1). If ρ(𝐯̂_i) = λ_1(𝐯̂_i) holds for some i, we have 𝐯_i ∈C̃u(N), and hence by the previous two cases, 𝐰_i = 𝐯_i is a limit point of S̃e(N). If ρ(𝐯̂_i) < λ_1(𝐯̂_i), then 𝐰_i ∈C̃u(N) ∖ℱ_1 ⊂S̃e(N). By a standard Cantor diagonalization argument, we then get a sequence in S̃e(N) converging to 𝐯, again showing that 𝐯∈ cl S̃e(N).
Thus, we have obtained C̃u(N) ⊂ cl S̃e(N).
To show the equality, let us consider some 𝐯∈ cl S̃e(N) ∖S̃e(N). Then, we have a sequence 𝐯_i ∈S̃e(N) such that 𝐯_i →𝐯. Furthermore, we have 𝐰_i ≠𝐯_i such that q_i ≔exp^ν(𝐰_i) = exp^ν(𝐯_i). Clearly, q_i → q ≔exp^ν(𝐯). Also, passing to a subsequence, we have 𝐰_i →𝐰. Since q = exp^ν(𝐯) = lim_i exp^ν(𝐯_i) = lim_i exp^ν(𝐰_i) = exp^ν(𝐰), we must have 𝐰 = 𝐯, as 𝐯∉S̃e(N). But then in any neighborhood of 𝐯, the map exp^ν fails to be injective. By the inverse function theorem, we must have that d_𝐯exp^ν|_ν̂ is singular, i.e., 𝐯∈ℱ. Also, by <ref>, we see that γ_𝐯, being a limit of N-segments γ_𝐯_i, is an N-segment itself. But then q = γ_𝐯(1) is a first focal locus of N along γ_𝐯 <cit.>. In particular, q is then a cut point, and consequently 𝐯∈C̃u(N). Hence, we have C̃u(N) = cl S̃e(N), concluding the proof.
Since exp^ν is continuous, we immediately get
Cu(N) = exp^ν( C̃u(N) ) = exp^ν( cl S̃e(N) ) ⊂ cl exp^ν( S̃e(N) ) = cl Se(N).
Consequently, Se(N) is dense in Cu(N). In fact, Cu(N) = cl Se(N) holds as well <cit.>.
§ HIGHER ORDER TANGENT VECTOR
In <cit.>, the author used the notion of second order tangent vectors to state the regularity condition (R2). In this appendix, we show that our condition (R2) is directly comparable to that of <cit.>. See also <cit.> for a similar discussion.
Let M be a manifold with dim M = n. For p ∈ M, denote by ℐ_p the collection of germs at p of functions M →ℝ. A germ 𝐟∈ℐ_p is assumed to be represented by a function f : M→ℝ locally defined near p, and in particular, the evaluation 𝐟(p) = f(p) is well-defined. It is a local ring, with the maximal ideal 𝔪_p ≔{𝐟 | f(p) = 0 }. The k^th order tangent space is denoted as
T^k_p M ≔( 𝔪_p / 𝔪_p^k+1)^* = Hom( 𝔪_p / 𝔪_p^k+1, ℝ).
Given a map f : M → N, one has the k^th derivative map
d^k_p f : T^k_p M → T^k_f(p) N
θ ↦( 𝐠 + 𝔪_f(p)^k+1↦θ( f_*(𝐠) + 𝔪^k+1_p ) ),
where the push-forward f_*(𝐠) is defined as the germ of g ∘ f at the point p. There is a canonical isomorphism T^k+1_p M / T^k_p M = ⊙^k+1 T_p M, which gives rise to the (non-canonical) isomorphism T^k_p M = ⊕_i = 1^k ⊙^i T_p M. Here ⊙^i denotes the i^th symmetric tensor product.
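As an illustration (in local coordinates x^1, …, x^n around p, and only for orientation), a second order tangent vector θ∈ T^2_p M can be written as
θ = ∑_i a^i ∂/∂ x^i|_p + ∑_i ≤ j b^ij ∂^2/∂ x^i ∂ x^j|_p,
and the class of θ in T^2_p M / T_p M records precisely the second order coefficients b^ij, viewed as the element ∑_i ≤ j b^ij ∂_i ⊙∂_j of ⊙^2 T_p M.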
We are particularly interested in the identification T_p M ⊙ T_p M = T^2_p M / T_p M,
which is explicitly given as
𝐱⊙𝐲⟼( 𝐟↦ X( Y(f) ) |_p ) mod T_p M
for arbitrary extensions X, Y of 𝐱, 𝐲∈ T_p M, and for 𝐟∈𝔪_p/𝔪^3_p represented by some f. The Leibniz rule shows that the map is well-defined, whereas the symmetry is a consequence of the identity [X, Y](f) = XY(f) - YX(f) where 𝐟↦ [X, Y](f)|_p is a first order vector. The second order derivative of some f : M → N induces the following natural map,
d^2_p f : T_p M ⊙ T_p M ⟶ T^2_f(p) N / Im d_p f
𝐱⊙𝐲 ⟼( 𝐠↦ X ( Y ( g ∘ f ) ) |_p ) mod Im d_p f,
which we still denote by d^2 f.
Let us now consider the restricted normal exponential map ℰ = ( exp^ν|_ν̂) : ν̂→ M. For any 𝐯∈ν̂, we have the geodesic γ_𝐯(t) = ℰ(t𝐯). Using the identification above, we have the maps
d_𝐯ℰ : T_𝐯ν̂→ T_γ_𝐯(1) M, d^2_𝐯ℰ : T_𝐯ν̂⊙ T_𝐯ν̂→ T^2_γ_𝐯(1)M / Im d_𝐯ℰ.
Denote the ray σ(t) = t𝐯 and the vector 𝐫 = σ̇(1) ∈ T_𝐯ν̂. Now, the condition (R2) of <cit.> translates to the following
(R2') The map
K_𝐯 = ker d_𝐯ℰ∋𝐱⟼ d^2_𝐯ℰ(𝐫⊙𝐱) ∈ T_γ_𝐯(1)M / Im d_𝐯ℰ⊂ T^2_γ_𝐯(1) M / Im d_𝐯ℰ
is a linear isomorphism onto the image.
For any 𝐱∈ T_𝐯ν̂, consider the N-Jacobi field J_𝐱 along γ_𝐯 (<ref>). Then, we have the following.
* d_𝐯ℰ(𝐱) = J_𝐱(1), and in particular d_𝐯ℰ( 𝐫 ) = γ̇_𝐯(1).
* d^2_𝐯ℰ( 𝐫⊙𝐱) = J̇_𝐱(1) mod Im d_𝐯ℰ; explicitly, J̇_𝐱(1)(𝐟) = RX(f ∘ℰ)|_𝐯 for any extensions R, X of 𝐫, 𝐱.
Consequently, (R2) is equivalent to (R2').
Let α : (-ϵ, ϵ) →ν̂ be a curve such that α(0) = 𝐯 and α̇(0) = 𝐱. Then, we immediately have
d_𝐯ℰ(𝐱) = ∂/∂ s|_s=0ℰ( α(s) ) = J_𝐱(1),
which proves (1). Now, consider a germ 𝐟∈𝔪_p/𝔪_p^3, represented by some f defined near ℰ(𝐯) = γ_𝐯(1). Considering the map Ξ(s, t) = t α(s), we have an extension X(t)= ∂/∂ s|_s=0Ξ(s, t) of 𝐱 along σ. Extend this X arbitrarily in a neighborhood of 𝐯, and similarly extend 𝐫 to some R. Then,
d^2_𝐯ℰ( 𝐫⊙𝐱)(𝐟) = R X (f ∘ℰ)|_𝐯 = ∂/∂ t|_t = 1( X(f ∘ℰ) ) (t 𝐯) = ∂/∂ t|_t = 1 d(f ∘ℰ) ( X_t 𝐯)
= ∂/∂ t|_t = 1∂/∂ s|_s = 0 (f ∘ℰ) ( t α(s) ) = d_ℰ(𝐯)f ( J̇_𝐱(1) ) = J̇_𝐱(1) ( 𝐟).
This proves (2). We conclude the proof from <ref> and <ref>.
Let 𝐯∈ℱ^reg be a regular focal point such that K_𝐯⊄T_𝐯ℱ^reg. Let 𝐲∈ K_𝐯∩ T_𝐯ℱ^reg. We shall show that J̇_𝐲(1) = 0, whence we get 𝐲 = 0 by (R2).
Consider the vector 𝐫∈ T_𝐯ν̂ given by 𝐫 = σ̇(1), where σ(t) = t𝐯 is the ray. Since 𝐫 is transverse to both T_𝐯ℱ^reg and K_𝐯 (by <ref> and (R1), respectively), and since T_𝐯ν̂ = K_𝐯 + T_𝐯ℱ^reg by hypothesis, we have 𝐫 = 𝐱 + 𝐳 for some 𝐱∈ K_𝐯∖ T_𝐯ℱ^reg and 𝐳∈ T_𝐯ℱ^reg∖ K_𝐯. Then, it follows from <ref> (2) that
J̇_𝐲(1) = d^2_𝐯ℰ(𝐫⊙𝐲) = d^2_𝐯ℰ (𝐱⊙𝐲) + d^2_𝐯ℰ(𝐳⊙𝐲),
where d^2_𝐯ℰ denotes the second order derivative of ℰ (see <ref>).
As 𝐳∈ T_𝐯ℱ^reg, we can choose an extension Z of 𝐳 around 𝐯, such that Z is tangential to ℱ^reg at points of ℱ^reg. Similarly, we can extend 𝐲 to some Y near 𝐯 so that Y lies in K_𝐮 at each point 𝐮∈ℱ^reg, which is possible since K has constant rank on ℱ^reg. Then, for any function f defined near ℰ(𝐯) we have
Z ( Y ( f ∘ℰ) )(𝐯) = Z ( df( dℰ (Y) ) )(𝐯) = 0,
as dℰ(Y) = 0 on ℱ^reg. But then d^2_𝐯ℰ(𝐳⊙𝐲) = 0 follows from the definition (<ref>). Since 𝐲∈ T_𝐯ℱ^reg as well, we get from the symmetry, d^2_𝐯ℰ(𝐱⊙𝐲) = d^2_𝐯ℰ(𝐲⊙𝐱) = 0. Thus, J̇_𝐲(1) = 0, which shows that 𝐲 = 0 by (R2). In other words, K_𝐯∩ T_𝐯ℱ^reg = 0, showing that dim K_𝐯 = 1. Thus, for dim K_𝐯≥ 2 we must have K_𝐯⊂ T_𝐯ℱ^reg.
Suppose we are in either case (1) or case (2). In both cases, we have a k-dimensional foliation on 𝒞 near 𝐯, induced by the distribution ker d( ℰ|_𝒞), which is of constant rank k and involutive near 𝐯. We can get a coordinate system { u^1, … , u^n } on ν̂ near 𝐯 such that the codimension 1 submanifold 𝒞 is locally given as { u^n = 0 }. Furthermore, we can arrange so that the foliation on 𝒞 is given by
{ u^k+1 = … = u^n-1 = constant, u^n = 0 }.
Recall that each leaf of this foliation is mapped to a constant by ℰ, and d_𝐯ℰ has full rank in the transverse directions. Hence, we can fix coordinates { v^1,… ,v^n } near ℰ(𝐯) so that the image of the slice { u^1 = … = u^k = 0, u^n = 0 } under ℰ is the slice { v^1 = … = v^k = 0, v^n = 0 }, and furthermore, d_𝐯ℰ( ∂/∂ u^i|_𝐯) = ∂/∂ v^i|_ℰ(𝐯) holds for i = k+1,… , n. In particular, for i = 1,…, k the function v^i ∘ℰ has a zero of order 1 at 𝐯. Define the functions
w^i =
u^i, i = 1,… ,k
v^i ∘ℰ, i = k + 1, … , n,
near 𝐯. Then, { w^1,… , w^n } is a coordinate system near 𝐯. Next, consider the projection π : ℝ^n →ℝ^n onto the last n-k coordinates, preceded by k many zeros. Near ℰ(𝐯) ∈ M, we then have the functions
y^i =
v^i - v^i ∘ℰ∘( ϕ^-1∘π∘ψ), i=1,… k
v^i, i=k+1, … , n,
where ϕ, ψ are respectively the coordinate charts { w^1,… w^n } and { v^1,… v^n } on ν̂ and M. By construction, { y^1,… ,y^n } is a coordinate system near ℰ(𝐯), since 𝐯 is an order 1 zero of v^i ∘ℰ for i=1,… ,k. Also, for i = 1,… k, we have y^i ∘ℰ = 0 on { w^n = 0 }. Hence, we have functions x^i near 𝐯 such that y^i ∘ℰ = w^n · x^i for i=1,… k. Set x^i = w^i for i = k+1,… n. Thus, we have
y^i ∘ℰ =
x^n · x^i, i=1,… ,k
x^i, i=k+1,… n.
We check that { x^1,… , x^n } is a coordinate system near 𝐯. As x^i = w^i for i = k+1,… ,n, it follows that the n × n matrix ( ∂ x^i/∂ w^j|_𝐯) has full rank if the k × k block given by 1 ≤ i, j ≤ k has full rank. We have
∂^2 ( y^i ∘ℰ)/∂ w^n ∂ w^j|_𝐯 = ∂ x^i/∂ w^j|_𝐯 , 1 ≤ i, j ≤ k,
since y^i ∘ℰ has an order 1 zero at 𝐯, and y^i ∘ℰ = w^n · x^i. Consider the vectors 𝐰^i = ∂/∂ w^i|_𝐯. By construction, K_𝐯 = ker d_𝐯ℰ = Span⟨𝐰^1,… ,𝐰^k ⟩. Furthermore, 𝐰^n is transverse to T_𝐯𝒞. Consider the radial vector 𝐫∈ T_𝐯ν̂ given by 𝐫 = σ̇(1), where σ(t) = t𝐯 is the ray. Then, we have 𝐫 = a 𝐰^n + 𝐳 for some scalar a ≠ 0 and some 𝐳∈ T_𝐯ℱ^reg. As argued in <ref>, modulo Im d_𝐯ℰ we have
J̇_𝐰^j(1) = d^2_𝐯ℰ( 𝐫⊙𝐰^j ) = a d^2_𝐯ℰ( 𝐰^n ⊙𝐰^j ) + d^2_𝐯ℰ( 𝐳⊙𝐰^j ) = a d^2_𝐯ℰ( 𝐰^n ⊙𝐰^j ),
since d^2_𝐯ℰ( 𝐳⊙𝐰^j ) = 0 by the argument in the proof of <ref>.
By (R2), the classes { d^2_𝐯ℰ(𝐰^n ⊙𝐰^j) mod Im d_𝐯ℰ, 1 ≤ j ≤ k } are linearly independent. Now, by construction
Im d_𝐯ℰ = Span⟨∂/∂ v^i|_ℰ(𝐯) , i = k+1,… ,n ⟩ = Span⟨∂/∂ y^i|_ℰ(𝐯) , i = k+1,… ,n ⟩.
Evaluating on y^i for i = 1,…,k we then get that the k × k matrix
[ ∂^2(y^i ∘ℰ)/∂ w^n ∂ w^j|_𝐯 ] = [ d^2_𝐯ℰ(𝐰^n ⊙𝐰^j)(y^i) ]
has full rank. This shows that { x^1,… ,x^n } is a coordinate system near 𝐯, concluding the proof of (1) and (2).
Now, suppose the hypothesis of (3) holds. Thus, 𝐯∈ℱ^reg is such that in a neighborhood 𝐯∈ U ⊂𝒞 we have K_𝐮∩ T_𝐮𝒞 = 0 for each 𝐮∈ U. We show that near 𝐯, the normal exponential map ℰ is a submersion with folds (see <cit.> for terminology). This amounts to checking the following two properties.
* As in the proof of <ref>, near 𝐯, the codimension 1 submanifold 𝒞 is given as the zero set of the function Δ (<ref>). For k = 1, this Δ is precisely the product of the eigenfunctions of dℰ, and thus Δ = det dℰ. It follows from (R2) that along the radial direction, the derivative of Δ is non-vanishing. This shows that the first jet j^1 ℰ is transverse to the codimension 1 submanifold S_1 ⊂ J^1(ν̂, M) consisting of jets of rank n - 1.
* The singularities of ℰ near 𝐯 are precisely the points 𝐮∈𝒞∩ U, along which we have T_𝐮ν̂ = K_𝐮⊕ T_𝐮𝒞 = ker d_𝐮ℰ⊕ T_𝐮( Δ^-1(0) ). This shows that every singularity of ℰ is a fold point.
Then, the normal form <ref> for ℰ follows immediately from <cit.>, finishing the proof of (3).
§ ACKNOWLEDGEMENT
The authors would like to thank J. Itoh for many fruitful discussions. They would also like to thank ICTS for the wonderful hosting and research-friendly environment during the workshop Zero Mean Curvature, where the bulk of this work was done. The first author was supported by the NBHM grant no. 0204/1(5)/2022/R&D-II/5649 and the second author was supported by Jilin University.
[AG11]ArdoyGuijaro11
Pablo Angulo Ardoy and Luis Guijarro.
Cut and singular loci up to codimension 3.
Ann. Inst. Fourier (Grenoble), 61(4):1655–1681 (2012), 2011.
https://doi.org/10.5802/aif.2655 doi:10.5802/aif.2655.
[AJ19]Alves2019
Benigno Alves and Miguel Angel Javaloyes.
A note on the existence of tubular neighbourhoods on Finsler manifolds and minimization of orthogonal geodesics to a submanifold.
Proceedings of the American Mathematical Society, 147(1):369–376, 2019.
https://doi.org/10.1090/proc/14229 doi:10.1090/proc/14229.
[AP94]AbaPat94
Marco Abate and Giorgio Patrizio.
Finsler metrics—a global approach, volume 1591 of Lecture Notes in Mathematics.
Springer-Verlag, Berlin, 1994.
With applications to geometric function theory.
https://doi.org/10.1007/BFb0073980 doi:10.1007/BFb0073980.
[BCS00]Bao2000
D. Bao, S.-S. Chern, and Z. Shen.
An introduction to Riemann-Finsler geometry, volume 200 of Graduate Texts in Mathematics.
Springer-Verlag, New York, 2000.
https://doi.org/10.1007/978-1-4612-1268-3 doi:10.1007/978-1-4612-1268-3.
[Bej00]Ben98
Aurel Bejancu.
On the theory of Finsler submanifolds.
In Finslerian geometries (Edmonton, AB, 1998), volume 109 of Fund. Theories Phys., pages 111–129. Kluwer Acad. Publ., Dordrecht, 2000.
[Bis77]Bishop77
Richard L. Bishop.
Decomposition of cut loci.
Proc. Amer. Math. Soc., 65(1):133–136, 1977.
https://doi.org/10.2307/2042008 doi:10.2307/2042008.
[BK23]BorKlin23
Samuël Borza and Wilhelm Klingenberg.
Regularity and continuity properties of the sub-Riemannian exponential map.
J. Dyn. Control Syst., 29(4):1385–1407, 2023.
https://doi.org/10.1007/s10883-022-09624-y doi:10.1007/s10883-022-09624-y.
[BK24]BorKlin24
Samuël Borza and Wilhelm Klingenberg.
Local non-injectivity of the exponential map at critical points in sub-Riemannian geometry.
Nonlinear Anal., 239:Paper No. 113421, 21, 2024.
https://doi.org/10.1016/j.na.2023.113421 doi:10.1016/j.na.2023.113421.
[BP24]BhoPra2023
Aritra Bhowmick and Sachchidanand Prasad.
On the Cut Locus of Submanifolds of a Finsler Manifold.
J. Geom. Anal., 34(10):Paper No. 308, 2024.
https://doi.org/10.1007/s12220-024-01751-1 doi:10.1007/s12220-024-01751-1.
[Buc77]Buc77
Michael A. Buchner.
Simplicial structure of the real analytic cut locus.
Proc. Amer. Math. Soc., 64(1):118–121, 1977.
https://doi.org/10.2307/2040994 doi:10.2307/2040994.
[Bus55]Bus55
Herbert Busemann.
The geometry of geodesics.
Academic Press Inc., New York, N. Y., 1955.
[Fin51]Fin51
P. Finsler.
Über Kurven und Flächen in allgemeinen Räumen.
Verlag Birkhäuser, Basel, 1951.
https://doi.org/10.1007/978-3-0348-4144-3 doi:10.1007/978-3-0348-4144-3.
[GG73]GolGui73
M. Golubitsky and V. Guillemin.
Stable mappings and their singularities.
Graduate Texts in Mathematics, Vol. 14. Springer-Verlag, New York-Heidelberg, 1973.
[Heb81]Hebda81
James J. Hebda.
The regular focal locus.
J. Differential Geometry, 16(3):421–429 (1982), 1981.
URL: <http://projecteuclid.org/euclid.jdg/1214436221>.
[IT01]Itoh2001
Jin-ichi Itoh and Minoru Tanaka.
The Lipschitz continuity of the distance function to the cut locus.
Transactions of the American Mathematical Society, 353(1):21–40, 2001.
https://doi.org/10.1090/S0002-9947-00-02564-2 doi:10.1090/S0002-9947-00-02564-2.
[Jav14a]Javaloyes2014
Miguel Angel Javaloyes.
Chern connection of a pseudo-Finsler metric as a family of affine connections.
Publicationes Mathematicae Debrecen, 84(1-2):29–43, 2014.
https://doi.org/10.5486/PMD.2014.5823 doi:10.5486/PMD.2014.5823.
[Jav14b]Javaloyes2014_Correction
Miguel Angel Javaloyes.
Corrigendum to “Chern connection of a pseudo-Finsler metric as a family of affine connections” [mr3194771].
Publ. Math. Debrecen, 85(3-4):481–487, 2014.
https://doi.org/10.5486/PMD.2014.7061 doi:10.5486/PMD.2014.7061.
[JS15]Javaloyes2015
Miguel Angel Javaloyes and Bruno Learth Soares.
Geodesics and Jacobi fields of pseudo-Finsler manifolds.
Publicationes Mathematicae Debrecen, 87(1-2):57–78, 2015.
https://doi.org/10.5486/PMD.2015.7028 doi:10.5486/PMD.2015.7028.
[Kob67]Kob67
Shoshichi Kobayashi.
On conjugate and cut loci.
In Studies in Global Geometry and Analysis, pages 96–122. Math. Assoc. Amer. (distributed by Prentice-Hall, Englewood Cliffs, N.J.), 1967.
[Li11]Li11
Jintang Li.
The variation formulas of Finsler submanifolds.
J. Geom. Phys., 61(5):890–898, 2011.
https://doi.org/10.1016/j.geomphys.2011.01.003 doi:10.1016/j.geomphys.2011.01.003.
[Lu24]Lu24
Guangcun Lu.
The Morse index theorem in the case of two variable endpoints in conic Finsler manifolds.
Ann. Mat. Pura Appl. (4), 203(2):533–562, 2024.
https://doi.org/10.1007/s10231-023-01373-4 doi:10.1007/s10231-023-01373-4.
[ML32]MorLit32
Marston Morse and S. B. Littauer.
A characterization of fields in the calculus of variations.
Proceedings of the National Academy of Sciences of the United States of America, 18(12):724–730, 1932.
URL: <http://www.jstor.org/stable/86088>.
[MS01]MoShen01
Xiaohuan Mo and Zhongmin Shen.
Recent developments and some open problems in Finsler geometry.
Chinese Sci. Bull., 46(13):1059–1061, 2001.
https://doi.org/10.1007/BF02900677 doi:10.1007/BF02900677.
[Mye35]Mye35
Sumner Byron Myers.
Connections between differential geometry and topology. I. Simply connected surfaces.
Duke Math. J., 1(3):376–391, 1935.
https://doi.org/10.1215/S0012-7094-35-00126-0 doi:10.1215/S0012-7094-35-00126-0.
[Mye36]Mye36
Sumner Byron Myers.
Connections between differential geometry and topology II. Closed surfaces.
Duke Math. J., 2(1):95–102, 1936.
https://doi.org/10.1215/S0012-7094-36-00208-9 doi:10.1215/S0012-7094-36-00208-9.
[Oht21]Ohta2021
Shin-ichi Ohta.
Comparison Finsler Geometry.
Springer International Publishing AG, 2021.
https://doi.org/10.1007/978-3-030-80650-7 doi:10.1007/978-3-030-80650-7.
[Pet06]Peter06
Ioan Radu Peter.
On the Morse index theorem where the ends are submanifolds in Finsler geometry.
Houston Journal of Mathematics, 32(4):995–1009, 2006.
[Poi05]Poin05
Henri Poincaré.
Sur les lignes géodésiques des surfaces convexes.
Trans. Amer. Math. Soc., 6(3):237–274, 1905.
https://doi.org/10.2307/1986219 doi:10.2307/1986219.
[Rad04]Rademacher2004
Hans-Bert Rademacher.
A sphere theorem for non-reversible Finsler metrics.
Mathematische Annalen, 328(3):373–387, 2004.
https://doi.org/10.1007/s00208-003-0485-y doi:10.1007/s00208-003-0485-y.
[Run59]Rund59
Hanno Rund.
The differential geometry of Finsler spaces.
Springer-Verlag, Berlin-Göttingen-Heidelberg,, 1959.
[Sak96]Sak96
Takashi Sakai.
Riemannian geometry, volume 149 of Translations of Mathematical Monographs.
American Mathematical Society, Providence, RI, 1996.
Translated from the 1992 Japanese original by the author.
[Sav43]Sav43
L. J. Savage.
On the crossing of extremals at focal points.
Bull. Amer. Math. Soc., 49:467–469, 1943.
https://doi.org/10.1090/S0002-9904-1943-07960-8 doi:10.1090/S0002-9904-1943-07960-8.
[She01]Shen01
Zhongmin Shen.
Differential geometry of spray and Finsler spaces.
Kluwer Academic Publishers, Dordrecht, 2001.
https://doi.org/10.1007/978-94-015-9727-2 doi:10.1007/978-94-015-9727-2.
[Tho72]Thom72
R. Thom.
Sur le cut-locus d'une variété plongée.
J. Differential Geometry, 6:577–586, 1972.
URL: <http://projecteuclid.org/euclid.jdg/1214430644>.
[War65]Warner1965
Frank W. Warner.
The conjugate locus of a Riemannian manifold.
American Journal of Mathematics, 87(3):575, 1965.
https://doi.org/10.2307/2373064 doi:10.2307/2373064.
[Wol79]Wol79
Franz-Erich Wolter.
Distance function and cut loci on a complete Riemannian manifold.
Arch. Math. (Basel), 32(1):92–96, 1979.
https://doi.org/10.1007/BF01238473 doi:10.1007/BF01238473.
[Zha17]Zhao2017
Wei Zhao.
A comparison theorem for Finsler submanifolds and its applications.
2017.
https://arxiv.org/abs/1710.10682 arXiv:1710.10682, https://doi.org/10.48550/ARXIV.1710.10682 doi:10.48550/ARXIV.1710.10682.
|
http://arxiv.org/abs/2409.03035v1 | 20240904190845 | Derived structures in the Langlands Correspondence | ["Tony Feng", "Michael Harris"] | math.NT | ["math.NT", "math.AG", "math.RT"] |
§ ABSTRACT
We survey several recent examples of derived structures emerging in connection with the Langlands correspondence. Case studies include derived Galois deformation rings, derived Hecke algebras, derived Hitchin stacks, and derived special cycles. We also highlight some open problems that we expect to be important for future progress.
§ MOTIVATION AND OVERVIEW
§.§ What do we mean by “derived structures in the Langlands correspondence”?
The adjective “derived” has come to be applied to various constructions in mathematics, sometimes with very different meanings. For example, “derived functors”, “derived categories”, and “derived algebraic geometry” are some of the instances that we will encounter. Generally speaking, the word “derived” refers to an enhancement of mathematical constructions that incorporates homotopy theory (broadly construed). The past ten years have seen the rapid development of applications of homotopy theory to the Langlands correspondence, on several different fronts. Let us give an overview of those aspects which will be touched upon in this survey.
§.§.§ Derived Hecke algebras
Perhaps the first examples of “derived functors” that one encounters are the Tor and Ext functors, which are constructed by deriving ⊗ and Hom. In the classical Langlands correspondence, a central role is played by the notion of Hecke operators, which form the Hecke algebra acting on automorphic forms. The Hecke algebra can be viewed as a certain space of endomorphisms; by replacing endomorphisms with “derived endomorphisms” (i.e., Ext groups) one obtains a notion of derived Hecke algebra. This concept has arisen in two rather different contexts: the work of Ollivier-Schneider on the p-adic Langlands correspondence <cit.>, and the work of Venkatesh et al. on cohomology of arithmetic groups <cit.>.
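Schematically (suppressing the coefficient ring and the precise category of smooth representations, both of which matter in practice), the passage from the classical to the derived Hecke algebra replaces an endomorphism algebra by a graded Ext-algebra: for a compact open subgroup K of a p-adic group G,
ℋ(G, K) = End_G( c-ind_K^G 1 ) ⇝ ⊕_i ≥ 0 Ext^i_G( c-ind_K^G 1, c-ind_K^G 1 ),
whose degree 0 piece recovers the classical Hecke algebra. We emphasize that this is only a sketch of the shape of the definition; the Ext groups must be computed in an appropriate category of smooth representations.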
§.§.§ Derived moduli spaces
Derived functors in the above sense fall under the umbrella of classical homological algebra, where one derives functors on abelian categories such as the category of modules over a ring. There is also a theory of derived functors on non-abelian categories, which goes back to Quillen's homotopical algebra <cit.>. When applied to the category of commutative rings, it enters the realm of “derived algebraic geometry”.
In derived algebraic geometry, one constructs derived enhancements of the usual constructs of algebraic geometry. For example, we shall discuss “derived schemes”, “derived stacks”, etc. One can imagine derived schemes as classical schemes enhanced with some additional homotopical information.
It was recently realized that such derived enhancements provide a natural explanation of structures in the Langlands correspondence. Two different types of examples are the derived Galois deformation spaces of Galatius-Venkatesh <cit.>, which were used to explain the structure of cohomology of arithmetic groups, and the derived Hitchin stacks of Feng-Yun-Zhang <cit.>, which were used to construct virtual fundamental cycles for special cycles related to the Kudla program.
§.§.§ Other directions, that will not be discussed
Let us mention some further aspects of the Langlands correspondence where derived notions play a critical role, although they will not be elaborated upon in these notes.
* The Geometric Langlands correspondence features the moduli stack of local systems on a curve, and in particular the category of coherent sheaves on it. To obtain the “correct” version of this category, meaning the one which has a Langlands dual interpretation, one needs to understand this moduli stack in a derived way in general. Such matters are treated in Sam Raskin's article <cit.> in these proceedings.
* Derived commutative algebra plays an important technical role in recent work of Bhatt-Morrow-Scholze <cit.> and Bhatt-Scholze <cit.> on integral p-adic Hodge theory.
§.§ Why derive?
Given the significant technical background required for derived algebraic geometry, it is natural to ask why and how it is useful for researchers interested in the more classical aspects of the Langlands correspondence, such as the reciprocity between automorphic forms and Galois representations.[We exclude branches like Geometric Langlands theory where derived algebraic geometry is baked into the very formulations of the core problems, hence relevant in an obvious way.]
There are several independent reasons, some of which are touched upon in this survey. One is that higher cohomology (of locally symmetric spaces) has become central to the Langlands correspondence, because cohomology provides natural integral structures on the spaces of automorphic forms, and integral structures are needed to bring in p-adic methods. In many classical situations, the relevant cohomology groups are concentrated in a single degree, which allows one to ignore derived aspects to a large extent, but in general they are spread out over many degrees and it becomes necessary to work with complexes, derived functors, derived categories, etc.
So far our discussion only requires deriving abelian structures, which is a theory that has been developed and widely used since the 1970s. But in order to make the connection to Galois representations, we want to invoke moduli spaces of Galois representations called Galois deformation rings. In order to make versions of these objects which are correspondingly “spread out over many degrees”, it becomes necessary to derive the notion of a commutative ring. This is the starting point of derived algebraic geometry, and requires more sophisticated homotopical methods.
A third, independent thread treated in these notes concerns enumerative geometry related to automorphic forms. A celebrated example is the Kudla program, which seeks to develop an incarnation of theta functions in arithmetic geometry. More generally, one would hope to develop an incarnation of relative Langlands duality, in the sense of <cit.>, in arithmetic geometry. This means that we seek some incarnation of automorphic forms as cycle classes in the Chow groups or higher cohomology groups of Shimura varieties and moduli of shtukas. We explain here that such cycle classes should arise from derived geometry: the cycles in classical algebraic geometry are poorly behaved in general, while their derived versions have the desired properties.
There are also other motivations which are not treated in these notes, including the conjectural description of cohomology of moduli of shtukas explained in the articles of Xinwen Zhu <cit.> and Emerton-Gee-Hellmann <cit.>. In certain situations discussed in those articles, one needs to incorporate derived structure in order to get the “correct” answers.
§.§ The style of these notes
As the above summary is intended to convey, the past few years have witnessed a rapid and remarkably broad influx of homotopy theory into number theory, in the guise of ∞-categories, derived algebraic geometry, stable homotopy theory, etc.
The technical foundations for this theory are formidable, and complicated further by the multitude of different approaches that divide the literature. For example, the literature is split between the language of model categories and ∞-categories (the latter has acquired more popularity recently), and even within the framework of ∞-categories one finds different approaches (quasi-categories, DG-categories, simplicial categories). Derived algebraic geometry also comes in different flavors, being built out of commutative differential graded algebras, simplicial commutative rings, and _∞-algebras. These issues are significant, but often completely orthogonal to those encountered in applying the tools to number theory.
Therefore, in view of the vast literature that already exists for homotopy theoretic background, we opted to write these notes as a kind of “user's manual” for how it is applied to number theory. We do not even attempt to give a thorough technical treatment of the foundational material. Occasionally, we will point out when different approaches exist and comment on their “intuitive” differences. Mostly, we will leave things somewhat vague and informal, relying on analogies and intuitions rather than formal definitions.
§.§ Outline of this article
In <ref> we provide a moral introduction to derived algebraic geometry. We do not provide any technical details – in fact, we make almost no mathematically precise statements; rather, we try to give some guide on “how to think about” certain words and phrases which come up in derived algebraic geometry. This is counterbalanced by Appendix <ref>, which focuses purely on the basic definitions of derived algebra.
In <ref> we discuss the notion of derived moduli spaces, which are our raison d'être for using derived algebraic geometry in the first place. We try to give a general pattern of heuristics which one uses in practice to recognize when derived moduli spaces should be relevant, and how to construct them. We illustrate these heuristics in several examples of relevance to the Langlands correspondence.
In <ref> we survey in more detail the emergence and application of derived moduli spaces related to the Kudla program, focusing on the work of Feng-Yun-Zhang <cit.> on higher theta series for unitary groups over function fields.
In <ref> we survey the work of Galatius-Venkatesh <cit.> on derived Galois deformation rings in extremely informal fashion.
In <ref>, we introduce local derived Hecke algebras, touching on the ℓ≠ p results of <cit.> and also briefly the ℓ = p results of Schneider <cit.>, Ollivier, and Ronchetti.
In <ref> we discuss the cohomology of locally symmetric spaces and the global derived Hecke algebra. Then in <ref> we explain Venkatesh's motivic conjectures.
Finally, in <ref> we formulate several open problems at the interface of derived algebraic geometry and the Langlands program, which should have important consequences.
§.§ Acknowledgments
We thank Henri Darmon, Dennis Gaitsgory, Soren Galatius, Chandrashekhar Khare, Barry Mazur, Arpon Raksit, Victor Rotger, Akshay Venkatesh, Jonathan Wang, Zhiwei Yun, and Wei Zhang for conversations about this material and for collaborations which are discussed here. We thank Gurbir Dhillon and the referees for comments and corrections on a draft.
A preliminary version of Appendix A was tested on MIT graduate students in lectures by the first-named author in Spring 2022. We are grateful to the participants for their feedback.
TF was supported by an NSF Postdoctoral Fellowship under Grant No. DMS-1902927.
§ A GUIDE TO DERIVED ALGEBRAIC GEOMETRY
§.§ What is derived algebraic geometry?
Algebraic geometry is about a theory of geometry for objects called schemes. These are geometric spaces built locally out of pieces called affine schemes (much like how manifolds are built locally out of pieces looking like Euclidean space), which are governed by commutative rings. Therefore, one could say that algebraic geometry is the study of geometry built out of commutative rings.
In this vein, derived algebraic geometry is about a theory of geometry built out of objects that we will informally call “derived commutative rings” for now. Derived commutative rings are some sort of enhancement of commutative rings; they govern what one might call “derived affine schemes”, which form the local building blocks for derived schemes. Therefore, the passage from algebraic geometry to derived algebraic geometry mostly amounts to the passage from commutative rings to derived commutative rings.
§.§ What are derived commutative rings?
Let us caution at the start that there are multiple different approaches to the notion of “derived commutative ring”. All of them are quite technical, and so we will not go into the formal definitions of any of them here. Instead, we will content ourselves with explaining intuitions, analogies, and how we think about things in practice.
§.§.§ Simplicial commutative rings
For the bulk of these notes, our notion of a “derived commutative ring” will be that of simplicial commutative ring. The adjective “simplicial” can be thought of as synonymous to “topological”, so it is a reasonable first intuition to think of a simplicial commutative ring as a topological commutative ring.[One reason we avoid working literally with topological commutative rings, which of course are well-defined mathematical concepts, is that we want to rule out “pathological” topological spaces. So a more refined slogan for “simplicial” might be “topological and nice”.]
In particular, a simplicial commutative ring 𝒜 has associated (abelian) homotopy groups π_0(𝒜), π_1(𝒜), π_2(𝒜), etc. These should be thought of as the homotopy groups of the underlying “topological space” of 𝒜, but moreover the commutative ring structure on 𝒜 equips π_*(𝒜) with the structure of a graded-commutative ring. Understanding this graded-commutative ring is an approximation to “understanding” 𝒜, in the same sense that understanding the usual topological homotopy groups of a topological space is an approximation to “understanding” the space.
The technical definition of a simplicial commutative ring is quite involved, and for this reason will be quarantined to Appendix <ref>. The main body of this text is written to be comprehensible with the notion of “derived commutative rings” treated as a black box, and it might be advisable to read it as such on a first pass. We also remark that we will shortly (in <ref>) change our terminology for derived commutative rings from “simplicial commutative rings” to animated commutative rings.
§.§.§ Other models for derived commutative rings
We briefly mention other possible models for “derived commutative rings” that are commonly encountered in the literature.
* Commutative differential graded algebras (CDGAs). These are the most concrete to define and write down “explicitly”. For example, it is relatively easy to write down CDGAs by generators and relations, while any such presentation would be too huge to write down for almost any simplicial commutative ring. The crucial flaw of CDGAs from our perspective is that they only lead to a functional theory[By this, we mean that there does not exist a model category or ∞-category of CDGAs outside characteristic zero.] of derived commutative rings over characteristic zero, while we definitely want to work integrally or in characteristic p. However, in characteristic 0, CDGAs could be used just as well as simplicial commutative rings.
* 𝔼_∞-algebras. Roughly speaking, an 𝔼_∞-algebra is a ring whose multiplication is commutative up to homotopy coherence. The precise definition of 𝔼_∞-algebras requires even more homotopy theory than that of simplicial commutative rings. In practice, they are typically more relevant for homotopy theory than algebraic geometry. It is true that 𝔼_∞-algebras lead to a functional theory in all characteristics and could be used just as well as simplicial commutative rings in characteristic zero, but away from characteristic zero they have different behavior which seems to be less relevant for us.
§.§.§ The category of derived commutative rings
For the purposes of moduli theory, it is not the notion of a simplicial commutative ring itself which is essential, but the correct notion of its ambient category. In terms of the intuition for simplicial commutative rings as being like topological commutative rings, what we would like to do is to take the category of topological commutative rings up to homotopy equivalence; informally speaking, we want to regard “homotopic” things as being equivalent.
For this discussion, we would like to make an analogy to the following objects: abelian groups, complexes of abelian groups, and the derived category of abelian groups. In our analogy a commutative ring is parallel to an abelian group. To “derive” functors on the category of abelian groups, one considers “simplicial abelian groups” and the category of such is equivalent to the category of connective chain complexes. However, for many purposes (such as in defining derived functors), one is not really interested in the category of abelian groups but rather the derived category of abelian groups. Informally, one can construct this category so that the objects are chain complexes, and then the morphisms are obtained from the morphisms among chain complexes by inverting quasi-isomorphisms.
Simplicial commutative rings are parallel to chain complexes of abelian groups. The category we are really after, called the ∞-category of simplicial commutative rings, is not the literal category of simplicial commutative rings but rather a category obtained from this by inverting “weak equivalences”, which should be thought of as the analogue of quasi-isomorphisms between simplicial commutative rings.
Following terminology proposed by Clausen, and explained formally in <cit.>, we will refer to the desired category from the previous paragraph as the category of animated commutative rings, and we will use “animated commutative ring” when thinking of objects of this category. Thus, an animated commutative ring is the same object as a simplicial commutative ring, but the terminology connotes a difference in the ambient category implied, hence also in the notion of (iso)morphism between such objects. For example, isomorphisms of animated commutative rings correspond to what might be called “quasi-isomorphisms” of simplicial commutative rings, compared to a stricter notion of isomorphisms between simplicial commutative rings that is defined in the Appendix.
We should remark that the concept of “animated commutative ring” is not new, but has traditionally just be treated under the name “simplicial commutative rings” in the literature. More recently, it has become more common (at least, in the areas that this survey touches upon) to use the terminology of animation to distinguish the ∞-categorical version. Animation works more generally for other algebraic structures as well, such as abelian groups, where it reproduces an equivalent notion to that of (connective) chain complexes.
§.§ Visualizing derived schemes
In algebraic geometry, one learns to think about commutative rings geometrically. For example, in introductory textbooks on scheme theory (such as <cit.>, whose 4.2 is the inspiration for the title of this subsection), one learns that non-reducedness of a ring – which at first seems “ungeometric” because it is invisible at the level of field-valued points – can be pictured geometrically as “infinitesimal fuzz”. In other words, one visualizes R as an infinitesimal thickening of R_. In this subsection, we will explain how one can similarly visualize a derived scheme as a type of infinitesimal thickening of a classical scheme. The guiding slogan is:
The relationship between derived schemes and classical schemes is analogous to the relationship between (classical) schemes and reduced schemes.
Let us first explain the formal similarities. A scheme has an “underlying reduced” scheme, whose formation is functorial and defines a right adjoint functor to the inclusion of the category of reduced schemes into the category of all schemes.
{schemes} ⇄ {reduced schemes}, Y ⟼ Y_red, X ⟻ X.
At the level of rings, the formation of underlying reduced scheme corresponds locally to quotienting a ring by its nilradical, and this is left adjoint to the forgetful functor including the category of reduced commutative rings into the category of all commutative rings.
{commutative rings} ⇄ {reduced commutative rings}, R ⟼ R/Nil(R), S ⟻ S.
Now let us explain the analogous picture for animated commutative rings. Any commutative ring R can be viewed as an animated commutative ring R, intuitively by “equipping it with the discrete topology”. Then π_0(R) = R, while π_i(R) = 0 for i>0. This induces a fully faithful embedding of the category of commutative rings into the category of animated commutative rings, and in particular justifies the perspective that animated commutative rings form an “enlargement” of commutative rings.
On the other hand, for any animated commutative ring 𝒜, its 0th homotopy group π_0(𝒜) has a natural commutative ring structure. These functors fit into an adjunction analogous to (<ref>):
{animated commutative rings} ⇄ {commutative rings}, 𝒜 ⟼ π_0(𝒜), S ⟻ S.
This construction glues, so that any derived scheme Y has a classical truncation Y_cl, whose formation is functorial and defines a right adjoint to the functor from schemes to derived schemes:
{derived schemes} ⇄ {schemes}, Y ⟼ Y_cl, X ⟻ X.
Analogously to (<ref>), one can visualize a derived scheme as a classical scheme plus some “derived infinitesimal fuzz”.[One difference in practice is that “derived fuzz” can carry negative (virtual) dimension. This is analogous to how the natural extension of dimension to complexes is the Euler characteristic, which can be negative.]
We have not defined the étale site of a derived scheme, but just as the étale topos of a scheme is isomorphic to that of its underlying reduced scheme, it turns out that the étale site of a derived scheme is isomorphic to that of its classical truncation. This further supports the analogy between derived and non-reduced structure.
Going forward we will use calligraphic letters such as 𝒜, ℬ, 𝒞 for simplicial commutative rings, and Roman letters such as A, B, C for classical commutative rings. By the above remarks, we may regard classical commutative rings as animated commutative rings, but we still prefer to use this convention to emphasize that an animated commutative ring comes from a classical commutative ring. Similarly, we use Roman letters like M for classical moduli spaces, and calligraphic letters like ℳ for derived moduli spaces.
§.§ Characteristics of simplicial commutative rings
We would like to make some remarks about how to represent the datum of an animated commutative ring. In terms of the intuition for an animated commutative ring as a “topological commutative ring up to homotopy”, it is natural to compare this question to that of representing the datum of a topological space up to homotopy. A topological space is a kind of amorphous object, but algebraic topology furnishes two approaches to measuring it by algebraic invariants: homotopy groups, and (singular) homology groups. We shall explain two parallel invariants for “measuring” animated commutative rings: homotopy groups, and the cotangent complex.
§.§.§ Homotopy groups
An animated commutative ring 𝒜 has a sequence of homotopy groups π_0(𝒜), π_1(𝒜), …, which are all abelian groups. Furthermore, the multiplication on 𝒜 equips
π_*(𝒜) := ⊕_i=0^∞π_i(𝒜)
with the additional structure of a graded-commutative ring. The construction is detailed in <ref>. Under our intuitive analogy of an animated commutative ring being like a topological commutative ring, you can think of π_*() as being the homotopy groups of the underlying topological space, with the ring structure induced by the multiplicative structure of .
If 𝒜 comes from a topological commutative ring, then π_*(𝒜) agrees with the graded-commutative ring of homotopy groups (defined in the usual topological sense) of the topological commutative ring (with respect to the basepoint at the zero element).
For a topological space X, the homotopy groups of X are a good measure for “understanding” X itself. For example, if X and X' are “nice” topological spaces then a morphism f : X → X' is a homotopy equivalence if and only if it induces a bijection on π_i for all i, suitably interpreted to account for connected components.
Similarly, we regard π_i(𝒜) as a good approximation to “understanding” an animated commutative ring 𝒜. In particular, a morphism of animated commutative rings f : 𝒜→𝒜' is an isomorphism if and only if it induces a bijection on π_i for all i.
§.§.§ Cotangent complex
Let f : A → B be a morphism of classical commutative rings. Then there is a cotangent complex 𝕃_f, defined in <cit.>, which was originally introduced for applications to deformation theory. For our purposes, we regard 𝕃_f as an animated B-module. We do not give the definition here because it requires a significant amount of development; Appendix <ref> gives a quick introduction to the cotangent complex and its applications. More generally, if f is a morphism of schemes, or derived schemes, or even “nice” stacks, then it has a cotangent complex 𝕃_f.
If A = ℤ, we write 𝕃_B for the cotangent complex of B with respect to the unique map from ℤ. More generally, for a scheme X (or derived scheme, or “nice” stack) we write 𝕃_X for the cotangent complex of the unique map X → Spec ℤ.
Let f : A → B be a morphism of commutative rings and Ã→ A a square-zero thickening with ideal I (i.e., I^2 = 0). Let J = I ⊗_A B. We consider Ã-algebra deformations of B, i.e. flat Ã-algebras B̃ such that B̃⊗_Ã A ≅ B. It is explained in <ref> that this deformation theory problem is “controlled” by 𝕃_f, in the sense that:
* There is an obstruction class ob(I) ∈ Ext^2_B(𝕃_f, J) which vanishes if and only if such a deformation B̃ exists.
* If ob(I)=0, then the set of deformations B̃ has the natural structure of a torsor for Ext^1_B(𝕃_f, J).
* The automorphism group of any deformation B̃ is naturally isomorphic to Ext^0_B(𝕃_f, J) ≅ Hom_B(Ω_f, J).
If f is smooth, then 𝕃_f has cohomology concentrated in degree 0, i.e., it is represented by a B-module, which is none other than the module of Kähler differentials Ω_B/A.
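As a simple illustration of these statements (a special case, spelled out only for orientation): take Ã = k[ε]/(ε^2) → A = k, so that I = (ε) ≅ k and J ≅ B. If B is smooth over k, then 𝕃_f ≃Ω_B/k is a projective B-module, so Ext^i_B(Ω_B/k, J) = 0 for i > 0. Hence the obstruction vanishes and the deformation is unique up to isomorphism: smooth affine schemes deform trivially over the dual numbers, though each deformation still has automorphism group Hom_B(Ω_B/k, B) = Der_k(B).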
In fact, the construction of the cotangent complex uses simplicial commutative rings, and was one of the earliest applications for this theory. Roughly speaking, the idea is that when B is not smooth over A then one “resolves” B by a smooth simplicial A-algebra, and forms the Kähler differentials of this resolution. This is reminiscent of how one defines derived functors in homological algebra, by replacing inputs with “resolutions” by certain types of chain complexes with good properties. While chain complexes only make sense for objects of abelian categories, simplicial objects make sense in much more generality.
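For example (a standard computation, recorded here only for orientation), suppose B = A[x_1, …, x_n]/I, where the ideal I = (f_1, …, f_c) is generated by a regular sequence. Then the cotangent complex is the two-term complex
𝕃_B/A≃ [ I/I^2 →Ω_A[x_1,…,x_n]/A⊗ B ]
concentrated in degrees -1 and 0, where the map sends f mod I^2 to df ⊗ 1. In particular H^0(𝕃_B/A) ≅Ω_B/A, consistent with the smooth case above (where one can take I = 0).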
Given a sequence of maps
A →^f B →^g C
with h = g ∘ f, we get an exact sequence
Ω_f ⊗_B C →Ω_h →Ω_g → 0.
This will extend to an exact triangle (in the derived category of C-modules) of cotangent complexes
𝕃_f ⊗^𝕃_B C →𝕃_h →𝕃_g
which in particular gives a long exact sequence in cohomology,
…→ H^-1(𝕃_h) → H^-1(𝕃_g) →Ω_f ⊗_B C →Ω_h →Ω_g → 0.
It was arranged by definition that a morphism of animated commutative rings is an isomorphism if and only if it induces an isomorphism on homotopy groups. By contrast, it is a priori unclear “how much” information the cotangent complex carries. It turns out that it carries “all” the information of the derived structure. This is made precise by the following lemma of Lurie:
Suppose a morphism of simplicial commutative rings f: 𝒜 → ℬ induces an isomorphism on π_0 and an isomorphism 𝕃_𝒜 ⊗_𝒜 ℬ → 𝕃_ℬ. Then f is an isomorphism.
We will not give the formal argument for Lemma <ref>, but we will sketch the intuition. From a functor of points perspective, what we have to show is that f induces an equivalence between morphisms to an arbitrary simplicial commutative ring from 𝒜 and from ℬ. As discussed in <ref>, an arbitrary simplicial commutative ring can be thought of as a “derived nilpotent thickening” of its classical truncation. Since morphisms to a classical ring are controlled by π_0, the morphisms to any classical ring from 𝒜 and from ℬ are identified by f. Then, by (a generalization of) Example <ref>, morphisms to a derived nilpotent thickening of a classical ring are controlled by the respective cotangent complexes, which are again identified by f.
We regard the cotangent complex as a “homology theory” for simplicial commutative rings. From this perspective, Lemma <ref> is analogous to the Hurewicz theorem, which says that for “nice” spaces, it is equivalent for a map to induce a bijection on homotopy sets and homology groups.
§.§.§ Tangent complex
Suppose f: 𝒜 → ℬ is a morphism of animated commutative rings whose cotangent complex 𝕃_f is perfect as an animated ℬ-module. This means that 𝕃_f can be locally represented by a finite complex of free ℬ-modules. Then we define the tangent complex of f to be 𝕋_f = Hom_ℬ(𝕃_f, ℬ), the (derived) dual of the cotangent complex.
This construction can be globalized: if f: X → Y is a morphism of (derived) schemes (or nice stacks, so that 𝕃_f exists) such that 𝕃_f is perfect over 𝒪_X, then we define 𝕋_f := Hom_{𝒪_X}(𝕃_f, 𝒪_X).
§.§ Examples of derived schemes
Affine derived schemes are anti-equivalent to animated commutative rings. For an animated commutative ring 𝒜, we write Spec 𝒜 for the corresponding affine derived scheme. General derived schemes are glued from affine derived schemes, analogously to how schemes are glued from affine schemes. We will now give some examples of natural constructions which lead to derived schemes in practice.
§.§.§ Derived fibered products
Even when starting out with purely classical schemes, derived schemes arise naturally through the derived fibered product operation.
The classical avatar of this operation is the usual fibered product of two morphisms of schemes X → Z and Y → Z. The resulting fibered product X ×_Z Y is characterized by a universal property among schemes with maps to X and Y such that the composite maps to Z agree. It is built affine-locally by the tensor product operation on commutative rings.
Similarly, the derived fibered product, which we also sometimes call the homotopy fibered product X ×^h_Z Y, is characterized by an analogous universal property. It can be built affine-locally by the derived tensor product operation on animated commutative rings (see below). More generally, the same considerations allow one to construct the derived fibered product of derived schemes.
Given morphisms of animated commutative rings C → A and C → B, the derived tensor product A ⊗^𝕃_C B can be characterized by its universal property as the coproduct of A and B in the category of animated C-algebras. An explicit model can be built in terms of simplicial commutative rings, by choosing “resolutions” of A and B as C-algebras; this is explained in <ref>. In particular, this explicit model shows that the homotopy groups of A ⊗^𝕃_C B are the Tor-groups,

π_i(A ⊗^𝕃_C B) ≅ Tor_i^C(A, B);

although (<ref>) does not illuminate the multiplicative structure on π_*(A ⊗^𝕃_C B), so it really only encodes information about the underlying animated abelian group of A ⊗^𝕃_C B. From (<ref>), we see that the derived fibered product X ×^h_Z Y of classical schemes is classical if and only if 𝒪_X and 𝒪_Y are Tor-independent over 𝒪_Z. In particular, this holds if either X → Z or Y → Z is flat.
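Here is the simplest non-classical example (a standard Koszul computation). Take C = k[x] and A = B = k = C/(x). Resolving k by the two-term complex k[x] →^x k[x] gives

π_i(k ⊗^𝕃_{k[x]} k) ≅ Tor_i^{k[x]}(k, k) ≅ k for i = 0, 1, and 0 for i ≥ 2,

so the derived self-intersection Spec k ×^h_{𝔸^1} Spec k of the origin has classical truncation Spec k but a nontrivial π_1, detecting the failure of Tor-independence.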
The derived fibered product interacts well with formation of (co)tangent complexes, which is actually a technical advantage of working in derived algebraic geometry. More generally, a slogan is that “formation of tangent complexes commutes with homotopy limits”. In particular, we have that

𝕋_{X ×^h_Z Y} ≅ Fib( 𝕋_X|_{X ×^h_Z Y} ⊕ 𝕋_Y|_{X ×^h_Z Y} → 𝕋_Z|_{X ×^h_Z Y} ),

with the fiber formed in the derived category of quasicoherent sheaves on X ×^h_Z Y.
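To illustrate the formula (continuing the computation above): for the derived self-intersection Spec k ×^h_{𝔸^1} Spec k of the origin, it gives

𝕋 ≃ Fib( 0 ⊕ 0 → 𝕋_{𝔸^1}|_0 ) ≃ k[-1],

a one-dimensional tangent complex concentrated in degree 1, so this derived scheme is quasismooth (in the sense of <ref> below) of virtual dimension χ(𝕋) = −1.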
§.§.§ Derived vector bundles
Let X be a proper variety over a field k and ℰ a locally free coherent sheaf on X. Consider the functor sending S ∈ Sch_{/k} to the space of global sections of ℰ on X_S; it is represented by 𝔸(H^0(X, ℰ)) = H^0(X, ℰ) ⊗_k 𝔸^1_k, the k-vector space H^0(X, ℰ) regarded as an affine space over k. It can be described as Spec Sym(H^0(X, ℰ)^*), where H^0(X, ℰ)^* is the dual of H^0(X, ℰ) over k.
A perspective one learns in homological algebra is to view H^0(X, ℰ) as the zeroth cohomology group of a complex RΓ(X, ℰ), whose higher cohomology groups are H^i(X, ℰ) for i ≥ 1. Analogously, there is a derived affine scheme

𝔸(RΓ(X, ℰ)) = “Spec Sym(RΓ(X, ℰ)^*)”,

whose classical truncation is 𝔸(H^0(X, ℰ)). This represents the functor which, informally speaking, sends a derived scheme S to RΓ(X_S, ℰ). What we have seen here is that derived algebraic geometry allows us to construct derived schemes which extend classical moduli of global sections in the same way that derived functors extend classical global sections. Now, we have put quotations in the formula above because we have not explained what it means to take the symmetric algebra of an animated k-module. It can be characterized as the left adjoint to the forgetful functor from animated commutative k-algebras to animated k-modules.
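As a toy example (using only the standard cohomology of line bundles on ℙ^1): take X = ℙ^1 and ℰ = 𝒪(−2), so that H^0 = 0 and H^1 ≅ k. Then the classical truncation of 𝔸(RΓ(X, ℰ)) is a single point, but the derived scheme remembers H^1: its tangent complex at the origin is RΓ(ℙ^1, 𝒪(−2)) ≃ k[-1], and its virtual dimension is χ(𝒪(−2)) = −1.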
More generally, one can perform an analogous construction “relative to a base”: given a perfect complex ℰ (of quasicoherent sheaves) on a (derived) stack S, one can form a “total space” 𝔸_S(ℰ) as a derived stack over S. Such constructions are called derived vector bundles in <cit.>, since they generalize vector bundles, which are the total spaces arising in the special case where ℰ is a locally free coherent sheaf (viewed as a perfect complex concentrated in degree 0).
Let S = Bun_n, the moduli stack of rank n vector bundles on a smooth projective curve C. Consider the space M → S, with R-points the groupoid of rank n vector bundles ℰ on C_R plus a global section of ℰ. Although the fibers of M → S are vector spaces, the morphism M → S is not even flat, since the fibers have different dimensions (even when restricted to connected components of the base). This is because ℰ ↦ H^0(ℰ) behaves “discontinuously”, e.g., the dimension of H^0(ℰ) varies discontinuously with ℰ.
However, the perfect complex RΓ(ℰ) behaves “continuously”, e.g., the Euler characteristic of RΓ(ℰ) varies continuously with ℰ. This foreshadows the fact that we can assemble the RΓ(ℰ) into a perfect complex on S, whose total space ℳ → S has classical truncation M → S and whose derived fiber over ℰ is the derived scheme 𝔸(RΓ(ℰ)) in the above sense. This is a variant of the “derived Hitchin stacks” to be discussed in <ref> and <ref>.
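Concretely, the “continuity” is just Riemann–Roch: for a rank n bundle ℰ on a curve C of genus g,

χ(RΓ(C, ℰ)) = deg ℰ + n(1 − g),

which depends only on discrete invariants and is therefore locally constant in families, whereas dim H^0(ℰ) can jump.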
§.§.§ Derived moduli spaces
Perhaps the most interesting examples of derived schemes (or stacks) are derived moduli spaces. The entirety of <ref> is devoted to such examples, so let us just give a brief teaser here. The idea here is that one begins with a classical moduli space, which by definition is a functor defined on commutative rings, satisfying appropriate descent conditions. Then one finds a way to possibly reformulate the moduli problem so that it makes sense on animated commutative rings as well, and in general outputs anima (i.e., animated sets) instead of sets. The representing object is called a derived moduli space. We shall see later some of the benefits of promoting moduli spaces to derived moduli spaces.
§.§ Quasismoothness
In this subsection, we define a property of morphisms of derived schemes called quasismoothness, which turns out to play a very important role in derived algebraic geometry. It is the derived generalization of a local complete intersection morphism. For motivation, we recall what this means:
Let A be a noetherian ring and f: A → B a finite type morphism. We say that f is a local complete intersection (LCI) if, Zariski-locally on B, it is the composition of a map of the form A → A[x_1, …, x_n] followed by a quotient by a regular sequence.
Local complete intersections admit a clean characterization in terms of the cotangent complex.
Let f: A → B be a homomorphism of Noetherian rings. Then f is LCI if and only if 𝕃_f is represented by a complex with cohomology concentrated in degrees [-1,0].
A morphism f: 𝒜 → ℬ of animated commutative rings is quasismooth if 𝕃_f has tor-amplitude in [-1,0], i.e., is Zariski-locally on Spec ℬ represented by a complex of vector bundles in degrees -1, 0. (Here we are using cohomological indexing.)
More generally, a morphism f: X → Y of derived stacks is quasismooth if 𝕃_f exists and has tor-amplitude in [-1, ∞), i.e., is locally represented by a perfect complex of vector bundles in degrees [-1, n] for some n < ∞.
If f is quasismooth, then we define the relative dimension of f to be χ(𝕃_f).
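Concretely (a minimal sanity check): if 𝕃_f is represented by a two-term complex [E^{-1} → E^0] of vector bundles, then the relative dimension is

χ(𝕃_f) = rk E^0 − rk E^{-1},

recovering the usual relative dimension when f is smooth (E^{-1} = 0), and the expected count −m for a quotient by m equations (E^0 = 0, rk E^{-1} = m).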
Theorem <ref> explains why we say that quasismoothness is a derived generalization of “LCI”. This property enables certain important constructions, for example the construction of “Gysin pullbacks” in intersection theory. Moreover, there is a sense in which it is more common to encounter quasismooth morphisms than LCI morphisms in nature; this is actually one of the main motivations for the entrance of derived algebraic geometry into enumerative algebraic geometry. For example, once one knows the definitions it is easy to show:
Quasismooth morphisms are preserved by compositions and derived base changes.
By contrast, note that LCI morphisms are not preserved by base change.
Any finite type affine scheme can be realized as the classical truncation of a quasismooth derived affine scheme, for trivial reasons. Indeed, given any choice of presentation R = k[x_1, …, x_n]/(f_1, …, f_m), the derived fibered product of the diagram

Spec k[x_1, …, x_n] →^{(f_1, …, f_m)} Spec k[y_1, …, y_m] ←^{0} Spec k,

equivalently the derived tensor product k[x_1, …, x_n] ⊗^𝕃_{k[y_1, …, y_m]} k with y_j ↦ f_j on one side and y_j ↦ 0 on the other, has classical truncation isomorphic to Spec R. This shows that the property of being quasismooth imposes no restriction on the underlying classical truncation.
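Note that different presentations of the same R can yield genuinely different derived enhancements (a standard Koszul computation): presenting k = k[x]/(x) with the single equation x gives a classical answer of virtual dimension 0, while the redundant presentation k = k[x]/(x, x) gives the Koszul complex on (x, x), with

π_0 ≅ k, π_1 ≅ k, π_i = 0 for i ≥ 2,

a non-classical quasismooth derived scheme of virtual dimension 1 − 2 = −1.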
If f: X → Y has a tangent complex 𝕋_f, then it can be used to reformulate quasismoothness: f is quasismooth if and only if 𝕋_f is represented by a perfect complex with tor-amplitude in (-∞, 1].
By Example <ref>, a derived scheme which is Zariski-locally isomorphic to a derived fibered product of smooth schemes is quasismooth. The converse is also true.
§ DERIVED MODULI SPACES RELATED TO THE LANGLANDS PROGRAM
The relevance of derived algebraic geometry in the Langlands program is through derived moduli spaces. In this section we will give an overview of some examples, discuss how to think about them, and hint at why they are useful.
§.§ The hidden smoothness philosophy
Historically, the notion of derived moduli spaces was anticipated in the 1980s, even before the advent of derived algebraic geometry, by Beilinson, Deligne, Drinfeld, and Kontsevich. They emphasized a principle called the hidden smoothness philosophy, which predicts that natural moduli spaces should have derived versions which are quasismooth (and of the “expected dimension”).
The backdrop for this principle is that in certain areas of classical algebraic geometry, one frequently encounters classical moduli spaces – that is to say, moduli functors represented by classical schemes or stacks – which are not LCI or have the “wrong” dimension. In such situations, the hidden smoothness philosophy predicts that these are the classical truncations of derived moduli spaces which do have the “correct” geometric properties. It can then be useful to find the definition of these derived enhancements. In this section we will discuss derived moduli spaces from this perspective: what to look for when trying to “upgrade” a classical moduli space to a derived one. Subsequent sections will (hopefully) illustrate why this might be a useful thing to do.
As far as we know, the motivations for the hidden smoothness philosophy are empirical, and we will try to illustrate it through examples.
[Moduli of Betti local systems] Let us begin by brainstorming informally about a somewhat older example in derived algebraic geometry, which has relevance for example in (Betti) Geometric Langlands <cit.>: the moduli space of (Betti) local systems on a smooth projective (connected) curve C/ℂ of genus g. After choosing a basepoint, one could view a Betti local system as a representation of the fundamental group of C. There is a more canonical way to phrase things without choosing a basepoint, but for concreteness we make such a choice c_0 ∈ C, and then π_1(C, c_0) has a standard presentation
π_1(C, c_0) ≈ ⟨ a_1, …, a_g, b_1, …, b_g | ∏_{i=1}^g [a_i, b_i] = 1 ⟩.
This presentation suggests a reasonable guess for how to write down the moduli space of rank n (Betti) local systems on C, as the fibered product of the diagram

GL_n^{2g} →^{∏_{i=1}^g [x_i, y_i]} GL_n ←^{} {e},

quotiented out by the conjugation action of GL_n. Since all the spaces in the diagram (<ref>) are smooth (and in particular the map {e} → GL_n is a regular embedding), it suggests that the moduli space of local systems might be a local complete intersection, of dimension equal to 2g·dim GL_n − 2·dim GL_n. This turns out not always to be the case, however, because a regular sequence cutting out {e} ⊂ GL_n does not always pull back to a regular sequence on GL_n^{2g}.
However, if instead of taking the fibered product of diagram (<ref>) in classical schemes, one takes the derived fibered product, then by Example <ref> the resulting derived scheme is quasismooth and has the “expected” dimension 2(g−1)·dim GL_n.
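The bookkeeping behind the expected dimension (a simple check, using that virtual dimensions add along derived fibered products and quotients): the derived fibered product has virtual dimension

2g·dim GL_n − dim GL_n = (2g − 1)·dim GL_n,

and the further quotient by the conjugation GL_n-action subtracts another dim GL_n, giving 2(g − 1)·dim GL_n, independently of whether the classical fibered product is well-behaved.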
The hidden smoothness philosophy is justified in many examples by the fact that one can find natural (local) presentations for naturally occurring moduli spaces, similar to (<ref>). More precisely, the quasismoothness and dimensionality conditions are suggested by deformation theory, as shall be explained below.
This section explains heuristics for how to solve the following problem:
Given a classical moduli space M, find the “correct” derived moduli space ℳ of which it is a classical truncation.
By definition, a classical moduli space is a type of functor defined on the category of commutative rings, while a derived moduli space is a type of functor on the category of simplicial commutative rings, so the content of this problem is one of extending the domain of definition for the functor in a “good” way. (The target should also be extended, from sets or groupoids to simplicial sets.) In analogy to the notion of derived functors, we will informally call this problem “deriving the classical moduli space ”.
Of course, this problem is extremely ill-posed: a given classical moduli space can be realized as the classical truncation of many different (quasismooth) derived moduli spaces (see Example <ref>), just as a given reduced scheme can be realized as the underlying reduced subscheme of many different schemes. Therefore, what we mean by the “correct” derived moduli space is heuristic, and is dictated by the needs of the particular problem at hand. In practice, the problem may be rigidified by the desire to obtain a quasismooth ℳ, or even an ℳ with a particular tangent complex.
In this section we will describe certain patterns and heuristics for how this process tends to be carried out in practice. We will illustrate it through two main examples: the derived Galois deformation rings of Galatius-Venkatesh <cit.>, and the derived Hitchin stacks of Feng-Yun-Zhang <cit.>.
§.§ Heuristics for derived moduli spaces
Suppose that we begin with a certain classical moduli space M, which we want to promote to a derived moduli space ℳ. The following outline describes the most standard pattern for finding the “correct” derived enhancement ℳ.
Suppose that we are in the following situation:
* The tangent space to M at a point m ∈ M can be calculated, and has some interpretation as a natural cohomology group. Furthermore, the automorphisms lie in a “previous” cohomology group, and obstructions lie in the “next” cohomology group.
Then we should try to construct ℳ so that its tangent complex (cf. <ref>) is the cohomology complex computing the cohomology groups from the bullet point.
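In symbols: writing 𝕋_{ℳ,m} for the sought-after tangent complex at m, the standard template (matching the examples below) is

H^{-1}(𝕋_{ℳ,m}) = infinitesimal automorphisms of m,
H^0(𝕋_{ℳ,m}) = first-order deformations of m,
H^1(𝕋_{ℳ,m}) = obstructions,

with the groups on the right given by the natural cohomology groups attached to m.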
We will give some explicit examples to make this concrete. Before that, however, we will discuss heuristics for when this process is unnecessary.
§.§.§ Heuristics for classicality
In this subsection we give some guiding principles for when one expects to find a derived enhancement which is not represented by a classical scheme or stack. The basic philosophy is captured by the following slogan.
Slogan. If a moduli space is LCI of the “correct” dimension, then it does not need to be derived.
The notion of “correct” dimension is itself heuristic, and might arise from an intersection-theoretic setup such as in (<ref>), or from calculating the Euler characteristic of the expected cotangent complex.
A little more precisely, the statement that a moduli space “does not need to be derived” means that the “correct” derived moduli space should just be isomorphic to its classical truncation.
The basis for this principle is as follows. Suppose ℳ is a derived scheme with classical truncation M. Following the hidden smoothness philosophy, we suppose that ℳ is quasismooth of the “correct” dimension, which at m ∈ M is the Euler characteristic of the cotangent complex at m, χ(𝕃_{ℳ,m}).
The canonical map ι: M → ℳ induces an isomorphism H^0(ι^* 𝕃_ℳ) ≅ H^0(𝕃_M) and a surjection H^{-1}(ι^* 𝕃_ℳ) ↠ H^{-1}(𝕃_M).
We will not give a proof of Lemma <ref>, which one can find in <cit.>, but we will at least give some intuition for it. At the level of simplicial commutative rings, classical truncation is implemented by adding simplices in degrees ≥ 2 to “kill off” higher homotopy groups. In topology, the Hurewicz theorem says roughly that a map of topological spaces induces an isomorphism of low-degree homotopy groups if and only if it induces an isomorphism of low-degree homology groups. Lemma <ref> follows from an analogous estimate for simplicial commutative rings.
There is a variant of Lemma <ref> when ℳ is a derived stack, which says that the map ι^* 𝕃_ℳ → 𝕃_M induces an isomorphism in degrees ≥ 0 and a surjection in degree −1.
Now, suppose that M is LCI of the “correct” dimension, which we take to mean χ(𝕃_ℳ). Then the cotangent complex of M has tor-amplitude in [-1,0] by Theorem <ref>, and its Euler characteristic is the same as that of 𝕃_ℳ. Then Lemma <ref> forces ι^* 𝕃_ℳ → 𝕃_M to be an isomorphism, and then Lemma <ref> implies that the map M → ℳ is itself an isomorphism.
We have not turned the above argument into a general formal statement, since the “correct” dimension is determined heuristically. The dimension used in the proof is χ(𝕃_ℳ), but this is circular as it presupposes the existence of ℳ. In practice, “correct” is determined by success in applications. We caution that examples do come up “in nature” where M is LCI (or even smooth) but not of the “correct” dimension, such as the derived Galois deformation rings (with crystalline conditions) studied in <cit.>.
In practice, it can be difficult to verify that a moduli space M is LCI of the correct dimension, unless M is furthermore smooth of the correct dimension. This is because the cotangent complex of M is typically not easy to compute, unless the obstruction group vanishes (so that it is just a sheaf, in which case M is smooth). By contrast, for a derived moduli space ℳ, the cotangent complex can usually be described concretely because it has an interpretation in terms of “higher derived dual numbers”; see <ref> for more discussion of this.
§.§.§ Running examples of classical moduli spaces
In the outline at the beginning of <ref>, we said:
One begins with a certain classical moduli space M, which one wants to promote to a derived moduli space ℳ.
We list some examples below, which will serve as running illustrations that we will return to repeatedly.
[Moduli of Betti local systems] Let C be a smooth projective curve of genus g and G a reductive group over ℂ. Motivated by Example <ref>, we define the moduli stack of Betti G-local systems on C to be the stack Loc^Betti_G obtained by taking the fibered product

Loc^{Betti,□}_G := {e} ×_G G^{2g},

where G^{2g} → G is the map (x_1, …, x_g, y_1, …, y_g) ↦ ∏_{i=1}^g [x_i, y_i] and {e} → G is the inclusion of the identity,
and then defining Loc^Betti_G := Loc^{Betti,□}_G/G to be the quotient by the conjugation G-action.
This moduli space is the subject of the “spectral side” of the Betti Geometric Langlands conjecture, which was proposed by Ben-Zvi – Nadler <cit.>.
[Global unrestricted Galois deformation ring]
Let F be a global field and 𝒪_F ⊂ F its ring of integers. Let S be a finite set of primes of 𝒪_F and 𝒪_F[1/S] the localization of 𝒪_F at S. Let G be a split reductive group and ρ̄: π_1(𝒪_F[1/S]) → G(𝔽_p) a Galois representation.[We suppress the choice of basepoint; for a better perspective see <ref>.] We define the (global unrestricted) Galois deformation functor of ρ̄ to be the stack Def^glob_G on the category of Artinian 𝔽_p-algebras A equipped with an augmentation ϵ: A → 𝔽_p, with A-points being the groupoid of lifts ρ: π_1(𝒪_F[1/S]) → G(A) which are sent to ρ̄ by the augmentation ϵ, as in the diagram below:

G(A)
↓ ϵ
π_1(𝒪_F[1/S]) →^{ρ̄} G(𝔽_p)
Such deformation functors were introduced by Mazur in <cit.>, and are at the foundation of the automorphy lifting theorems pioneered by Taylor-Wiles; see the article of <cit.> for more about this. (Here “global” refers to the fact that we are looking at representations of the fundamental group of a ring of S-integers in a global field, while “unrestricted” refers to the fact that we have not imposed any local conditions, which we would eventually want to do for applications to the Langlands correspondence.)
The need for Def^glob_G to be derived depends subtly on F and on G. We assume that ρ̄ is absolutely irreducible and odd (meaning that the trace of all complex conjugations is minimal – see <cit.>). For example, if F = ℚ and G = GL_1 or GL_2, then one does not expect any interesting derived enhancement, while if G = GL_3, GL_4, … then one does. If F is a number field that is not totally real and G is semisimple, then Def^glob_G should be derived, whereas if F is a function field then Def^glob_G need not be derived for any G by <cit.>. We mention finally that if G = GL_1 (and p ≠ 2), then Def^glob_G should be derived unless F = ℚ or a quadratic imaginary field; this will be discussed more in Example <ref>.
[Local unrestricted Galois deformations]
Below we will discuss deriving (unrestricted) Galois deformation functors of global fields. One could also ask whether (unrestricted) Galois deformation functors of local fields should also be derived. It turns out that one can prove that these are always LCI of the correct dimension, so that this is unnecessary. See <cit.> for the ℓ≠ p case, and <cit.> for the ℓ=p case. By contrast, the question of whether local deformation functors with conditions should be derived is unclear at present, and seems to be an important problem for the future of derived Galois deformation theory; this is discussed more in <ref>.
[Hitchin stacks]
Let C be a smooth projective curve over 𝔽_q of characteristic p > 2. For integers m, n ≥ 0, we define the Hitchin stack M^Hit_{m,n} to be the stack over 𝔽_q with R-points being the groupoid of tuples (ℰ_1, ℰ_2, ℱ, t_1, t_2) where
* ℰ_i is a rank m vector bundle on C_R = C ×_{𝔽_q} R,
* ℱ is a rank n vector bundle on C_R,
* t_1 ∈ Hom(ℰ_1, ℱ) and t_2 ∈ Hom(ℱ, ℰ_2^∨), where ℰ_2^∨ ≅ ℰ_2^* ⊗ ω_C is the Serre dual of ℰ_2.
This is the specialization of the unitary Hitchin spaces from <cit.> to the “split case”, where the unitary group is split. It was introduced in order to analyze special cycles on moduli stacks of shtukas, which feature in a function field version of arithmetic theta series.[The spaces M^Hit_{m,n} are not the spaces of Higgs bundles which were originally studied by Hitchin, but are other instances of a similar construction that has risen to importance in automorphic representation theory – see <cit.> for more discussion.]
§.§.§ Hints from deformation theory
We now suppose the classical moduli space M to be given. Next, in <ref>, we said:
The tangent space to M at a point m ∈ M can be calculated, and has some interpretation as a natural cohomology group. Furthermore, the automorphisms lie in a “previous” cohomology group, and obstructions lie in the “next” cohomology group.
We will now explain this. The point is that the tangent space to M at m ∈ M has a “concrete” description in terms of first-order deformations, by virtue of the definition of M as a moduli space, and the interpretation of the tangent space in terms of dual numbers.
[Moduli of Betti local systems]
Continuing the notation of (<ref>), let L be a (Betti) G-local system on C. We may regard L as a ℂ-point of Loc^Betti_G. Then the tangent space to Loc^Betti_G at L is H^1(π_1(C), g_L), where g_L := L ×^G g is the local system on C obtained by taking the quotient of L × g by the diagonal G-action. This local system is also sometimes denoted Ad(L). Furthermore,
* The automorphisms of L may be canonically identified with H^0(π_1(C), g_L), and
* one can show that the obstructions to first-order deformations lie in H^2(π_1(C), g_L).
[Global unrestricted Galois deformations]
Continuing the notation of Example <ref>, view ρ̄ as an 𝔽_p-point of Def^glob_G. Then the tangent space to Def^glob_G at ρ̄ is H^1(π_1(𝒪_F[1/S]), g_ρ̄), where g_ρ̄ has underlying vector space g_{𝔽_p}, with the action of π_1(𝒪_F[1/S]) being through ρ̄: π_1(𝒪_F[1/S]) → G(𝔽_p) and the adjoint action of G(𝔽_p) on g_{𝔽_p}. Furthermore,
* The automorphisms of ρ̄ may be canonically identified with H^0(π_1(𝒪_F[1/S]), g_ρ̄), and
* one can show that the obstructions to first-order deformations lie in H^2(π_1(𝒪_F[1/S]), g_ρ̄) <cit.>.
Note the similarity to Example <ref>.
[Hitchin stack]
Continuing the notation of Example <ref>, let (ℰ_1, ℰ_2, ℱ, t_1, t_2) ∈ M^Hit_{m,n}(k) for a field k/𝔽_q. Let T(ℰ_1, ℰ_2, ℱ, t_1, t_2) be the complex

End(ℰ_1) ⊕ End(ℰ_2) ⊕ End(ℱ) → Hom(ℰ_1, ℱ) ⊕ Hom(ℱ, ℰ_2^∨),

where the left term lies in degree −1, the right term lies in degree 0, and the differential sends (A_1, A_2, B) ∈ End(ℰ_1) ⊕ End(ℰ_2) ⊕ End(ℱ) to (B t_1 − t_1 A_1, A_2^∨ t_2 − t_2 B) ∈ Hom(ℰ_1, ℱ) ⊕ Hom(ℱ, ℰ_2^∨). Then the tangent space to M^Hit_{m,n} at (ℰ_1, ℰ_2, ℱ, t_1, t_2) may be canonically identified with H^1(C_k, T(ℰ_1, ℰ_2, ℱ, t_1, t_2)). Furthermore, as shown in <cit.>,
* The automorphisms of (ℰ_1, ℰ_2, ℱ, t_1, t_2) may be canonically identified with H^0(C_k, T(ℰ_1, ℰ_2, ℱ, t_1, t_2)), and
* the obstructions to first-order deformations lie in H^2(C_k, T(ℰ_1, ℰ_2, ℱ, t_1, t_2)).
§.§.§ Construction of derived moduli stacks
At this point we have our classical moduli space M together with a candidate guess for the (co)tangent complex of the derived moduli stack ℳ. Finally, in <ref>, we said:
One then tries to construct ℳ so that its tangent complex is the cohomology complex computing the cohomology groups from the previous step.
We will illustrate how this plays out in our running examples.
[Derived moduli of Betti local systems]
We continue the notation of Example <ref>. In fact, there is already a subtle wrinkle here: the cohomology groups in Example <ref> were group cohomology of π_1(C) with coefficients in the local system g_L. The only dependence of this on C is through its fundamental group π_1(C), which can be viewed as capturing the information of the Postnikov truncation τ_{≤1} C, but the right thing to do is to incorporate the whole “homotopy type” of C. This means that we want the tangent complex at the local system L to be the cohomology complex Γ_Betti(C, g_L[1]) rather than Γ_group(π_1(C), g_L[1]).
Let us briefly discuss the difference between these two objects. We have
H^0(C, g_L) ≅ H^0(π_1(C), g_L)
and
H^1(C, g_L) ≅ H^1(π_1(C), g_L)
so the difference lies in H^2. Furthermore, if C has genus g ≥ 1 then in fact C is a K(π_1(C), 1), so that Γ_Betti(C, g_L[1]) ≅ Γ_group(π_1(C), g_L[1]). Therefore this distinction only arises (among smooth proper complex curves) in the case where C ≅ ℙ^1. In that case π_1(C) is trivial, so the group cohomology is trivial. However, the higher Betti cohomology of ℙ^1 is non-trivial (note that L is necessarily “the” trivial local system, since π_1(ℙ^1) = 0).
One way to construct the desired derived moduli stack ℒoc^Betti_G is to take the derived fibered product of (<ref>), and then quotient out by the (diagonal) conjugation action of G. To calculate the tangent complex of the resulting object, use that the formation of tangent complexes respects homotopy limits (Example <ref>), together with the presentation of the homotopy type of C(ℂ) as the homotopy pushout

C(ℂ) ≃ pt ⊔_{S^1} (⋁_{i=1}^{2g} S^1),

where S^1 → ⋁_{i=1}^{2g} S^1 is the attaching map of the 2-cell (the product of commutators) and S^1 → pt collapses the boundary of that 2-cell.
Note that for g = 0, where C ≅ ℙ^1, our presentation of ℒoc^Betti_G is the derived self-intersection of the identity in G, quotiented by G. This has dimension −2·dim G. The classical truncation of ℒoc^Betti_G is simply Loc^Betti_G ≅ [pt/G], reflecting that there is a unique local system on ℙ^1 (the trivial one). A more naïve attempt to upgrade Loc^Betti_G to a derived moduli space, with tangent complex Γ_group(π_1(C), g_L[1]), would have simply produced [pt/G], which is still quasismooth but has the wrong dimension −dim G.
Note that the description of the tangent complex of ℒoc^Betti_G as the perfect complex isomorphic to Γ_Betti(C_R, g_L[1]), functorially in L ∈ ℒoc^Betti_G(R), shows that ℒoc^Betti_G is quasismooth, since C has Betti cohomological amplitude in [0,2].
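As a consistency check on the virtual dimension (using only that a rank r Betti local system on C has Euler characteristic r(2 − 2g)):

χ(𝕋) = −χ(Γ_Betti(C, g_L)) = −(2 − 2g)·dim G = 2(g − 1)·dim G,

matching the expected dimension from Example <ref>.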
[Derived global unrestricted Galois deformations]
We continue the notation of Example <ref>. The calculations of Example <ref> suggest that we should look for a derived stack 𝒟ef^glob_G whose tangent complex at ρ ∈ 𝒟ef^glob_G(R) should be isomorphic, naturally in R, to Γ(π_1(𝒪_F[1/S]), g_ρ[1]). Note that one might expect, based on Example <ref>, that this answer should be adjusted to Γ_ét(𝒪_F[1/S], g_ρ[1]). It may indeed be better to think of the latter as the “right” answer, but the ring of S-integers in a global field is an “étale K(π, 1)”, so that the two always coincide.[
However, when F is the function field of a smooth projective curve C/𝔽_q, then it is also natural to study everywhere unramified deformations of a local system on C. In this case the tangent complex is Γ_ét(C; g_ρ[1]), and it can have nontrivial H^3_ét. Therefore, the Galois deformation space is not quasismooth in this case. ]
The desired 𝒟ef^glob_G is the derived Galois deformation functor of Galatius-Venkatesh. We will not give its construction right now; that requires a bit of a digression into homotopy theory. Morally, one wants to define the functor of points to be (<ref>), but with A an “Artinian animated commutative ring augmented over 𝔽_p”. The resulting object should be a functor from the category of Artinian animated commutative rings augmented over 𝔽_p to the category of anima (also known as “simplicial sets”, “spaces”, or “∞-groupoids”).
The determination of the cohomology of number fields is part of class field theory and Poitou-Tate duality. The output is that if p ≠ 2, then Γ(π_1(𝒪_F[1/S]), g_ρ) has tor-amplitude in [0, 2], so that 𝒟ef^glob_G is quasismooth.
[Derived Hitchin stack]
Continuing with the notation of Example <ref>, we expect that the derived Hitchin stack ℳ^Hit_{m,n} should be such that the pullback of its tangent complex to (ℰ_1, ℰ_2, ℱ, t_1, t_2) ∈ ℳ^Hit_{m,n}(R) is naturally isomorphic to Γ(C_R, T(ℰ_1, ℰ_2, ℱ, t_1, t_2)). This ℳ^Hit_{m,n} is the “split” special case of the derived Hitchin stacks that appear in <cit.>. It can be constructed as a “derived mapping stack” in the sense of <cit.>. Alternatively, it may be viewed as the derived vector bundle (<ref>) associated to the perfect complex taking (ℰ_1, ℰ_2, ℱ) ∈ Bun_{GL_m} × Bun_{GL_m} × Bun_{GL_n}(R) to RHom_{C_R}(ℰ_1, ℱ) ⊕ RHom_{C_R}(ℱ, ℰ_2^∨).
As we see from these examples, the justification for the hidden smoothness philosophy in spaces arising in number theory seems to be the low cohomological dimensions of number fields and related rings, which lead to “short” tangent complexes.
[Spectral Hitchin stacks and spectral periods]
Non-quasismooth derived moduli spaces are also of interest in the Langlands program. Several examples are studied in <cit.>, where it is argued that they geometrize a spectral (i.e., Galois-theoretic) analogue of periods in automorphic representation theory. An example is the spectral Hitchin stack M^spec_{m,n}, which parametrizes tuples (E_1, E_2, F, t_1, t_2) analogous to the points of M^Hit_{m,n}, except with bundles replaced by local systems. For concreteness, let C/ℂ be a smooth projective curve (but it is possible to formulate ℓ-adic variants for curves over finite fields). The R-points of M^spec_{m,n} are tuples (E_1, E_2, F, t_1, t_2) where
* E_i is an R-family of rank m local systems on C,
* F is an R-family of rank n local systems on C, and
* t_1 ∈ Hom(E_1, F) is a flat morphism from E_1 to F, and t_2 ∈ Hom(F, E_2^*) is a flat morphism from F to E_2^*.
This has a derived enhancement ℳ^spec_{m,n} whose tangent complex is the de Rham cohomology of C with coefficients in a complex T(E_1, E_2, F, t_1, t_2) analogous to that of Example <ref>. However, since the de Rham cohomological dimension of C is 2 while the coherent cohomological dimension of C is 1, this tangent complex actually has amplitude up to degree 2, so that ℳ^spec_{m,n} is not quasismooth. In fact, the failure of ℳ^spec_{m,n} to be quasismooth is an important facet of <cit.>, which studies the relation between automorphic periods and spectral periods. One of the main points of <cit.> is that the dualizing sheaves of spectral Hitchin stacks, which in some sense measure the failure of quasismoothness, are the spectral counterpart to automorphic periods.
§ DERIVED SPECIAL CYCLES AND HIGHER THETA FUNCTIONS
We will explain applications of the derived Hitchin stacks introduced in Example <ref> towards the theory of special cycles and higher arithmetic theta series, developed in papers of Feng-Yun-Zhang <cit.>.
The context for this discussion is the Kudla program, as discussed in the article of Chao Li <cit.>. This concerns the so-called arithmetic theta functions, which are Fourier series assembled out of special cycles. These were first studied on orthogonal or unitary Shimura varieties, but we will focus on the function-field context, as in the work of Feng-Yun-Zhang. We refer to <cit.> for the classical background and motivations.
§.§ Hermitian shtukas
We recall some of the definitions from <cit.>.
Let X be a smooth projective curve over 𝔽_q. Let ν: X' → X be a finite étale double cover, with the non-trivial automorphism of X' over X denoted σ: X' → X'.
For each r ≥ 0 and n > 0, <cit.> and <cit.> construct moduli spaces of rank n unitary shtukas, denoted Sht_{U(n)}^r. These are variants of the spaces discussed in the lectures of Zhiwei Yun. There is a map π: Sht_{U(n)}^r → (X')^r, called the “leg map”. For r = 1, π is to be thought of as roughly analogous to the structure map for the integral model of a unitary Shimura variety.[As explained by Yun, a closer analogy for the latter is the fiber of π for r = 2 over X' × {∞} for a fixed point ∞ ∈ X'.] We will now outline these constructions.
For a test scheme S, a rank n Hermitian bundle on X × S, with respect to ν: X' → X, is a vector bundle F of rank n on X' × S, equipped with an isomorphism h: F ≅ σ^* F^∨ such that σ^* h^∨ = h. Here, F^∨ = F^* ⊗_{𝒪_{X'}} ω_{X'} is the Serre dual of F. We refer to h as the “Hermitian structure”.
We denote by Bun_{U(n)} the moduli stack of rank n Hermitian bundles on X, which sends a test scheme S to the groupoid of rank n vector bundles on X' × S equipped with a Hermitian structure.
Let r ≥ 0 be an integer. The Hecke stack Hk_{U(n)}^r has as S-points the groupoid of the following data:
* x_i' ∈ X'(S) for i = 1, …, r, with graphs denoted by Γ_x'_i⊂ X' × S.
* A sequence of vector bundles F_0, …, F_r of rank n on X' × S, each equipped with a Hermitian structure h_i: F_i ≅ σ^* F_i^∨.
* Isomorphisms f_i: F_{i-1}|_{X' × S − Γ_{x'_i} − Γ_{σ(x'_i)}} ≅ F_i|_{X' × S − Γ_{x'_i} − Γ_{σ(x'_i)}}, for 1 ≤ i ≤ r, compatible with the Hermitian structures, with the following property: there exists a rank n vector bundle F^♭_{i-1/2} on X' × S and a diagram of vector bundles

F_{i-1} ←^{f_i^←} F^♭_{i-1/2} →^{f_i^→} F_i,

with both maps injective, such that coker(f_i^←) is locally free of rank 1 over Γ_{x'_i}, and coker(f_i^→) is locally free of rank 1 over Γ_{σ(x'_i)}. In particular, f_i^← and f_i^→ are invertible upon restriction to X' × S − Γ_{x'_i} − Γ_{σ(x'_i)}, and the composition

F_{i-1}|_{X' × S − Γ_{x'_i} − Γ_{σ(x'_i)}} ← F^♭_{i-1/2}|_{X' × S − Γ_{x'_i} − Γ_{σ(x'_i)}} → F_i|_{X' × S − Γ_{x'_i} − Γ_{σ(x'_i)}}

agrees with f_i.
For a vector bundle F on X' × S, we denote by ^τF := (Id_{X'} × Frob_S)^* F its Frobenius twist. If F has a Hermitian structure h: F ≅ σ^* F^∨, then ^τF is equipped with the Hermitian structure ^τh; we may suppress this notation when we speak of the “Hermitian bundle” ^τF. Viewing F ∈ Bun_{U(n)}(S), ^τF is the image of F under the map Frob: Bun_{U(n)} → Bun_{U(n)}.
Let r ≥ 0 be an integer. We define Sht_{U(n)}^r by the Cartesian diagram

Sht_{U(n)}^r → Hk_{U(n)}^r
↓          ↓
Bun_{U(n)} →^{(Id, Frob)} Bun_{U(n)} × Bun_{U(n)},

where the right vertical map records (F_0, F_r).
A point of Sht_{U(n)}^r will be called a “Hermitian shtuka (of rank n)”.
Concretely, the S-points of Sht_{U(n)}^r are given by the groupoid of the following data:
* x'_i ∈ X'(S) for i = 1, …, r, with graphs denoted Γ_{x'_i} ⊂ X' × S. These are called the legs of the shtuka.
* A sequence of vector bundles F_0, …, F_r of rank n on X' × S, each equipped with a Hermitian structure h_i: F_i ≅ σ^* F_i^∨.
* Isomorphisms f_i: F_{i-1}|_{X' × S − Γ_{x'_i} − Γ_{σ(x'_i)}} ≅ F_i|_{X' × S − Γ_{x'_i} − Γ_{σ(x'_i)}} compatible with the Hermitian structures, which as modifications of the underlying vector bundles on X' × S are as in the definition of the Hecke stack Hk_{U(n)}^r.
* An isomorphism φ: F_r ≅ ^τF_0 compatible with the Hermitian structures.
One can show that Sht_{U(n)}^r is a Deligne-Mumford stack locally of finite type. The map Sht_{U(n)}^r → (X')^r is smooth, separated, and equidimensional of relative dimension r(n−1). For these and more geometric properties, see <cit.>.
§.§ Special cycles
Now we will define special cycles on Sht_{U(n)}^r, which are analogous to the special cycles constructed by Kudla-Rapoport on unitary Shimura varieties <cit.>.
Let E be a rank m vector bundle on X'. We define the stack Z_E^r whose S-points are given by the groupoid of the following data:
* A rank n Hermitian shtuka ({x'_1, …, x'_r}, {F_0, …, F_r}, {f_1, …, f_r}, φ) ∈ Sht_{U(n)}^r(S).
* Maps of coherent sheaves t_i: E ⊠ 𝒪_S → F_i on X' × S such that the isomorphism φ: F_r ≅ ^τF_0 intertwines t_r with ^τt_0, and the maps t_{i-1}, t_i are intertwined by the modification f_i: F_{i-1} ⇢ F_i for each i = 1, …, r, i.e., the diagram below commutes:

E ⊠ 𝒪_S = E ⊠ 𝒪_S = ⋯ = E ⊠ 𝒪_S ≅ ^τ(E ⊠ 𝒪_S)
↓ t_0      ↓ t_1      ↓ t_r      ↓ ^τt_0
F_0 ⇢^{f_1} F_1 ⇢^{f_2} ⋯ ⇢^{f_r} F_r ≅^{φ} ^τF_0
In the sequel, when writing such diagrams we will usually just omit the “⊠ 𝒪_S” factor from the notation.
There is an evident map Z_E^r → Sht_{U(n)}^r projecting to the data in the first bullet point. This map is finite by <cit.>, hence induces a pushforward map on Chow groups.
Let A_E(𝔽_q) be the 𝔽_q-vector space of Hermitian maps a: E → σ^* E^∨, i.e., those satisfying σ^*(a)^∨ = a.
Let ({x'_i}, {F_i}, {f_i}, φ, {t_i}) ∈ Z_E^r(S). By the compatibilities between the t_i in the definition of Z_E^r, the compositions of the sequence of maps

E ⊠ 𝒪_S →^{t_i} F_i →^{h_i} σ^* F_i^∨ →^{σ^* t_i^∨} σ^* E^∨ ⊠ 𝒪_S

agree for each i, and (<ref>) for i = r also agrees with the Frobenius twist of (<ref>) for i = 0. Hence the composite map (<ref>) gives the same point of A_E(S) for every i, which is moreover fixed by Frobenius, hence must come from A_E(𝔽_q). This defines a map Z_E^r → A_E(𝔽_q). For a ∈ A_E(𝔽_q), we denote by Z^r_E(a) the fiber of Z^r_E over a. We have
Z_E^r = ∐_{a ∈ A_E(𝔽_q)} Z^r_E(a).
The Z_E^r or Z_E^r(a) are called special cycles of corank m (with r legs).
§.§ Virtual fundamental classes of special cycles
Motivated by the Kudla program, we expect to attach to each special cycle Z_E^r(a), a ∈ A_E(𝔽_q), a class [Z_E^r(a)] in the Chow group Ch_*(Sht_{U(n)}^r) such that the Fourier series with Fourier coefficients [Z_E^r(a)] is “automorphic”. We call such an automorphic form a higher theta function, since in the case r = 0 it recovers the definition of classical theta functions.
However, a cursory inspection reveals that for r > 0, the cycles Z_E^r(a) necessarily have varying dimensions as a varies. For example, if a = 0 then Z_E^r(0) clearly surjects onto Sht_{U(n)}^r. On the other hand, it is not hard to see that if r > 0 and a ≠ 0, then Z_E^r(a) has smaller dimension than Sht_{U(n)}^r. Therefore, we should not use the naïve cycle classes [Z_E^r(a)]^{naive} in the definition of higher theta functions; we must construct virtual fundamental classes [Z_E^r(a)]^{vir}, which should all lie in Ch_{(n−m)r}(Sht_{U(n)}^r), where m = rk E.
§.§.§ The non-singular terms
There are various heuristics which guide such a construction. A key input is that when m = rk E = 1, and when a is non-singular, meaning that a: E → σ^* E^∨ is injective as a map of coherent sheaves (equivalently, an isomorphism over the generic point of X'), then Z_E^r(a) is LCI of the “correct” codimension, which is r. In this case, heuristics (justified a posteriori by <ref>) suggest that one may take [Z_E^r(a)]^{vir} = [Z_E^r(a)]^{naive}. More generally, using this observation one can define [Z_E^r(a)]^{vir} for any non-singular a (the point being to allow m > 1) using the original method of <cit.> of presenting Z_E^r(a) as an open-closed component of the “derived intersection” of special cycles labeled by m = 1 and non-singular coefficients. The definition of [Z_E^r(a)]^{vir} in this case is carried out in <cit.>.
§.§.§ The singular terms
The definition of [Z_E^r(a)]^{vir} for singular a requires further heuristics. Over a number field, the analogous problem has a relatively simple answer. In that case, a can be represented as a matrix of the form a_{ns} ⊕ 0, where a_{ns} is non-singular of rank m' ≤ m. Then the corresponding virtual fundamental class is the product of the virtual fundamental class for a_{ns} times the (m − m')th power of the Chern class of a certain “tautological” line bundle.
But in the function field case at hand, the answer is much more complicated: it is in general a sum of infinitely many terms, each involving the top Chern class of a different “tautological” bundle. We can already illustrate why in the special case a = 0. Then Z_E^r(0) is stratified by the kernels of the maps t_i: E ⊠ 𝒪_S → F_i, and each such stratum contributes a piece of the form described above (a Chern class times a virtual class which can be constructed via derived intersections as in the non-singular case). The analogous number field situation has the special property that the governing Hermitian form is positive definite on the generic fiber, which rules out all but one stratum there. This is discussed in more detail in <cit.> and <cit.>.
The upshot is that [Z_E^r(a)]^{vir} can be constructed explicitly for all terms a ∈ A_E(𝔽_q), but the answer is quite complicated.
Alternatively, it was observed in <cit.> that the definitions of the special cycles Z_E^r promote naturally, according to the pattern of <ref>, to derived special cycles 𝒵_E^r which are quasismooth of the correct dimension, and whose classical truncations recover Z_E^r. According to <cit.>, any quasismooth derived stack 𝒵 has an intrinsic notion of fundamental class [𝒵]^{vir}.[This rough statement had long been a folklore theorem of derived algebraic geometry, announced for example in <cit.>, and its importance for Gromov-Witten or Donaldson-Thomas flavors of enumerative geometry was long understood. However, Khan's construction of [−]^{vir} is the most general to appear in the literature, and <cit.> uses the full scope of its generality.] In <cit.> it is proved that this intrinsic class [𝒵_E^r(a)]^{vir} coincides with the previously defined (elementary) construction of [Z_E^r(a)]^{vir}.
To sketch how 𝒵_E^r is defined, the key point is to realize that ⋃_E Z_E^r admits a description as a fibered product

⋃_E Z_E^r ≅ Hk_M^r ×_{M × M, (Id, Frob)} M,

for M a unitary generalization of the Hitchin stack from Example <ref>, and Hk_M^r a certain Hecke correspondence for M. Now, M and Hk_M^r have natural enhancements to (quasismooth) derived stacks ℳ and ℋk_ℳ^r; for example, ℳ is a unitary generalization of Example <ref>. Actually, the important point for the quasismoothness of 𝒵_E^r is that the maps ℋk_ℳ^r → ℳ are quasismooth. Then, we define 𝒵_E^r via the derived fibered product

⋃_E 𝒵_E^r := ℋk_ℳ^r ×^h_{ℳ × ℳ, (Id, Frob)} ℳ,

whose classical truncation recovers ⋃_E Z_E^r by (<ref>). There is a map 𝒵_E^r → A_E(𝔽_q) as before, and we define 𝒵_E^r(a) to be the fiber of the map 𝒵_E^r → A_E(𝔽_q) over a.
Recently, Madapusi has defined derived special cycles on Shimura varieties in <cit.>. A key difference in that situation is that, unlike in the function field case, his construction does not yet have an accompanying moduli-theoretic interpretation. Indeed, the correct moduli problem should be in terms of “global shtukas”, a notion which has not been defined in the setting of number fields, although recent work of Scholze et al. <cit.> gives tantalizing hints of its existence.
§.§ Higher theta series
Let n ≥ 1 and n ≥ m ≥ 1. Then we can assemble the [Z_E^r(a)]^{vir} into a Fourier series (with Fourier parameter a ∈ A_E(𝔽_q)), called a higher theta series in <cit.>.
Let m ∈ ℤ_{≥1}. The stack Bun_{U^-(2m)} parametrizes pairs (𝒢, h), where 𝒢 is a family of rank 2m vector bundles on X', and h: 𝒢 ≅ σ^* 𝒢^∨ is a skew-Hermitian structure (i.e., σ^* h^∨ = −h).
Let Bun_{P_m} be the moduli stack of triples (𝒢, h, ℒ) where (𝒢, h) ∈ Bun_{U^-(2m)}, and ℒ ⊂ 𝒢 is a Lagrangian sub-bundle (i.e., ℒ has rank m and the composition ℒ ⊂ 𝒢 →^h σ^* 𝒢^∨ → σ^* ℒ^∨ is zero). Thus P_m corresponds to the Siegel parabolic of U^-(2m).
Let 𝒢' := 𝒢/ℒ. The skew-Hermitian form on 𝒢 induces a perfect pairing ℒ × σ^* 𝒢' → ω_{X'}, which identifies 𝒢' with σ^* ℒ^∨. We thus have a short exact sequence

0 → ℒ → 𝒢 → σ^* ℒ^∨ → 0,

and we denote by e_{𝒢,ℒ} ∈ Ext^1(σ^* ℒ^∨, ℒ) its extension class. There is a Serre duality pairing between Ext^1(σ^* ℒ^∨, ℒ) and Hom(ℒ, σ^* ℒ^∨) = A_ℒ(𝔽_q), which we denote by ⟨e_{𝒢,ℒ}, a⟩ ∈ 𝔽_q.
Choose a nontrivial additive character ψ: 𝔽_q → ℂ^×. The higher theta series is a function of (𝒢, h, ℒ) ∈ Bun_{P_m}(𝔽_q), assigning to such a triple the class
Θ^r(𝒢, h, ℒ) := χ(det ℒ) q^{n(deg ℒ − deg ω_X)/2} ∑_{a ∈ A_ℒ(𝔽_q)} ψ(⟨e_{𝒢,ℒ}, a⟩) [Z^r_ℒ(a)]^{vir} ∈ Ch_{(n−m)r}(Sht_{U(n)}^r),
where χ: Pic_{X'}(𝔽_q) → ℂ^× is a character whose restriction to Pic_X(𝔽_q) is the nth power of the quadratic character Pic_X(𝔽_q) → {±1} corresponding to the double cover X'/X by class field theory.
§.§ The modularity conjecture
By analogy with conjectures of Kudla explained in <cit.> (whose formulations are themselves partly conjectural), <cit.> predicts that these higher theta series are modular in the sense of automorphic forms. Concretely, this means the following:
The function Θ^r(𝒢, h, ℒ) in <ref> is independent of the choice of Lagrangian sub-bundle ℒ ⊂ 𝒢.
The reason that this is called a “Modularity Conjecture” is that it implies that Θ^r(𝒢, h, ℒ) descends to a function on Bun_{U^-(2m)}(𝔽_q), which can then be interpreted as a function on the adelic double coset space associated to U^-(2m) using Weil's uniformization. Thus, it implies that Θ^r defines an automorphic form for U^-(2m), valued in Ch_*(Sht_{U(n)}^r).
When r=0, Conjecture <ref> amounts to the modularity of classical theta functions, which has long been known. When r=1, it is parallel to the conjectural modularity of arithmetic theta series on orthogonal and unitary Shimura varieties, which has seen much recent progress, as explained in the article of Chao Li <cit.>. But for r>1, it is a new phenomenon with no parallel in the classical theory.
We remark that the derived construction [𝒵_E^r(a)]^{vir} = [Z_E^r(a)]^{vir} already allowed <cit.> to prove certain expected properties that were not accessible using the elementary definition. This is not an issue for the virtual classes of special cycles on Shimura varieties; the difference is because of the more complicated nature of the singular terms in the function field situation, as described previously. The power of the derived description is that it is more uniform and conceptual, and “only” depends on the underlying derived stack 𝒵_E^r(a), whereas the elementary construction of [Z_E^r(a)]^{vir} depends not only on Z_E^r(a) but also on an auxiliary presentation of it. For this reason, it was felt at the writing of <cit.> that the derived perspective would be more useful for proving modularity, although it was not clear how. This view has been at least partially affirmed in <cit.> and <cit.>, which prove that the ℓ-adic realization of the higher theta series, after restriction to the generic fiber of Sht_{U(n)}^r → (X')^r, is modular.
Indeed, <cit.> and <cit.> are heavily steeped in the formalism of derived algebraic geometry, although the derived construction of the [𝒵_E^r(a)]^{vir} is only a small ingredient in the proofs. The modularity on the generic fiber comes from calculations within new forms of Fourier analysis, including a so-called arithmetic Fourier transform on Borel-Moore homology groups that restricts to the usual finite Fourier transform in the case r = 0 (where it pertains to the modularity of the classical theta function), and a derived Fourier transform on the derived category of (motivic) sheaves on derived vector bundles. These situations are linked by a higher version of the trace construction, which gives a “sheaf-cycle correspondence” generalizing the classical sheaf-function correspondence. Even sketching the argument is beyond the scope of this discussion, so we just refer to <cit.> for an outline.
We anticipate that a statement which will be needed to go beyond modularity in the generic fiber is the Trace Conjecture formulated later in <ref>. It gives yet another approach to the definition of the virtual fundamental classes of special cycles. This conjecture can also be formulated purely within classical algebraic geometry.
§ DERIVED GALOIS DEFORMATION RINGS
In this section we will give an overview of the study of derived Galois deformation rings in <cit.>. We will elide many technical details in order to give the “big picture”. The reader craving more details could consult the paper <cit.>; another resource is the Oberwolfach Arbeitsgemeinschaft Report <cit.>. Readers should be familiar with the classical theory of Galois deformation rings before reading this section.[For readers without this background, we recommend <cit.> for a quick and illuminating survey on the classical Taylor-Wiles method, and <cit.> for an introduction to the Calegari-Geraghty method. For the more experienced, <cit.> is a clear and concise account containing the technical details.]
§.§ Overview
To orient the reader, we begin with an overview of the contents of this section.
Step One. First, we will explain how to construct the derived global unrestricted Galois deformation ring ℛ which represents the functor 𝒟ef^glob_G from Example <ref>. [This requires some assumptions on ρ̄, and as literally formulated in Example <ref> it can only exist if G is of adjoint type, although this can be generalized in a standard manner by accounting for determinants and centers.]
According to a folklore conjecture of Mazur, ℛ should actually be isomorphic to a classical commutative ring, i.e., have vanishing higher homotopy groups. However, this conjecture is wide open, and treating ℛ as a derived ring allows us to “circumvent” the conjecture for some purposes. Indeed, what we really need about ℛ is the description of its tangent complex in terms of Galois cohomology, which we can calculate unconditionally for the derived ring ℛ, for the reasons explained in <ref>, while knowing the tangent complex of π_0(ℛ) is equivalent to Mazur's conjecture.
Step Two. The deformation ring from Step One does not capture local conditions which we expect to be satisfied for the Galois representations corresponding to automorphic forms. The second step is to cut down the deformation ring by asking for local conditions that “match” those seen from automorphic forms. The subtlest aspect of these local conditions concerns the decomposition group at p, where the local conditions are governed by p-adic Hodge theory.
To do this, one considers analogous derived deformation functors for local Galois groups, and then imposes local conditions by forming the derived fibered product against the global deformation functor. We note that the local deformation functors are usually discrete, so that it is the step of forming the derived fibered product that introduces genuine derived structure.
For the local condition at p, we only consider what may be considered the simplest situation from a modern perspective on integral p-adic Hodge theory: the crystalline Galois deformation functor in the Fontaine-Laffaille range. This should be regarded as a first test case; going beyond it seems to be the first obvious barrier to generalization.
This step results in an object ℛ^crys called the derived crystalline global Galois deformation ring. In many situations it has virtual dimension −δ, where δ > 0 is the “defect”, and it therefore cannot be classical.
We “know” the tangent complex of ℛ^crys in terms of Galois cohomology, but the structure of its homotopy groups is a priori mysterious. It will be determined in the next step.
Step Three. As a derived generalization of R = T theorems, one expects that ℛ^crys acts on the cohomology of locally symmetric spaces, in a way that makes said cohomology “free” over ℛ^crys, at least generically. This action is constructed at the level of homotopy groups in <cit.>, in tandem with the determination of π_*(ℛ^crys), using the Calegari-Geraghty modification of the Taylor-Wiles method. Thus, the calculation of π_*(ℛ^crys), which is in principle a purely Galois-theoretic problem, uses the correspondence with automorphic forms. Also, this calculation offers an explanation for the interesting numerical pattern in the multiplicities in the cohomology of locally symmetric spaces. This numerical pattern was one of the original motivations for <cit.> and <cit.>.
§.§ Global unrestricted derived deformation ring
Now we will circle back to the beginning and discuss how to construct the derived Galois deformation functor in Step One of <ref>.
§.§.§ Classical deformation ring
Let k be a finite field. For the moment let Γ be a discrete group and G a split algebraic group over the Witt vectors W(k).
Let ρ̄ be a representation of Γ in G(k). Let Def_G^Γ be the functor on the category of Artinian local W(k)-algebras augmented over k, sending A to the set of lifts modulo conjugacy,

Def_G^Γ(A) = { lifts ρ: Γ → G(A) of ρ̄: Γ → G(k) along G(A) → G(k) }/∼.
If ρ̄ is absolutely irreducible and Γ satisfies certain finiteness conditions on its Galois cohomology, then Schlessinger's criterion implies that Def_G^Γ is pro-representable. These observations, as well as the original idea to consider moduli of Galois deformations, are due to Mazur <cit.>. The story can be extended to profinite groups Γ, by treating Def_G^Γ as the (filtered) colimit of the deformation functors obtained from the finite quotients of Γ.
If 𝒪_F is the ring of integers in a global field F and S is a finite set of places of F, then Γ = π_1(𝒪_F[1/S]) satisfies the requisite finiteness conditions by class field theory. We are actually going to rename Def^ρ̄_{𝒪_F[1/S]} := Def_G^Γ in this case, because this notation emphasizes the parameters that we will focus on in this section. The pro-representing ring is the (classical) global unrestricted Galois deformation ring, denoted R^ρ̄_{𝒪_F[1/S]}.
§.§.§ The work of Galatius-Venkatesh
The idea of Galatius-Venkatesh is to construct a derived version of the Galois deformation ring by repeating this story at a simplicial level. The paper <cit.> is written in the language of simplicial commutative rings and model categories, rather than the language of animated commutative rings and ∞-categories. We will take this opportunity to discuss some of the subtleties that this comparison exposes.
Morally, we want to upgrade Def^ρ̄_{𝒪_F[1/S]} to a functor 𝒟ef^ρ̄_{𝒪_F[1/S]} on the category of animated Artinian local W(k)-algebras augmented over k (to be appropriately defined) which “sends” such an 𝒜 to the anima of lifts modulo conjugacy,

{ lifts ρ: Γ → G(𝒜) of ρ̄: Γ → G(k) along G(𝒜) → G(k) }/∼.
Because of the elaborate nature of the definition of functors between ∞-categories, at least in the formalism of <cit.>, it can be difficult to rigorously construct functors. In particular, a functor of ∞-categories cannot be specified by saying where objects and morphisms go (a point which is sometimes handled sloppily in the literature), and so the discussion of the previous paragraph does not really define a valid functor.
Galatius-Venkatesh choose to use the model category of simplicial commutative rings to address this difficulty. One can think of this as a rigidification of the ∞-category of animated commutative rings. It is an ordinary 1-category, so a functor can be specified in the usual way on objects and morphisms. However, part of the structure of a model category specifies a notion of weak equivalence, alias “homotopy equivalence”, and we are only interested in functors that send homotopy equivalences to homotopy equivalences. We call such functors homotopy invariant.
To relate this to the other perspective: the ∞-category of animated commutative rings is the localization of simplicial commutative rings at the homotopy equivalences, so a homotopy-invariant functor out of simplicial commutative rings should be equivalent to a functor out of animated commutative rings. Such statements can be made precise, but the approach of <cit.> is to use the model-theoretic framework throughout.
§.§.§ Derived deformation ring
Now we summarize the construction of 𝒟ef^ρ̄_{𝒪_F[1/S]}. We begin by deriving the functor Def^ρ̄_{𝒪_F[1/S]}. First we need to define the notion of an Artinian local simplicial commutative ring.
A simplicial commutative ring 𝒜 is Artinian local if π_0(𝒜) is Artinian local and π_*(𝒜) is finitely generated as a module over π_0(𝒜).
Next, we roughly want to define ^ρ__F[1/S]() to be the simplicial set of lifts of ρΓ→ G(k) to ρΓ→ G(), up to equivalence. We need to define this simplicial set, and ensure that its construction depends on in a homotopy-invariant way. For this it is useful to adopt a different perspective on representations.
Given a discrete group H, there is a classifying space BH. The classifying space BH can be defined as the geometric realization of the simplicial set N_•(H) obtained as the nerve of H viewed as a category (consisting of a single object, with endomorphisms given by H). The simplicial set N_•(H) takes the form
∗ ⇇ H ⇚ H × H ⋯
where explicit formulas for face and degeneracy maps can be found in <cit.>. In particular, BH is equipped with a distinguished basepoint ∗.
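For concreteness (a standard reminder, easily checked against any reference on the nerve construction): on N_n(H) = H^n the face and degeneracy maps are
d_i(h_1, …, h_n) =
(h_2, …, h_n) if i = 0,
(h_1, …, h_i h_{i+1}, …, h_n) if 0 < i < n,
(h_1, …, h_{n−1}) if i = n,
s_i(h_1, …, h_n) = (h_1, …, h_i, e, h_{i+1}, …, h_n),
where e is the identity element of H.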
If X is a “nice” space (e.g., the geometric realization of a simplicial set), then homotopy classes of maps from X to BH are in bijection with “H-local systems” on X. If X is connected, then after choosing a basepoint x ∈ X these are in bijection with the set of homomorphisms π_1(X,x) → H modulo conjugacy. Therefore, the space of maps from X to BH gives a “space of local systems of X”. (See <cit.> for more discussion.)
More generally, the formula (<ref>) can be used to extend the definition of classifying spaces to groups that are not discrete. If H is a simplicial group, then (<ref>) is a bi-simplicial set, and we define the simplicial set N_•(H) to be the diagonal of this bi-simplicial set. This generalization will be important in defining derived group representation functors, where H will arise by evaluating an algebraic group G on a simplicial ring – it is the passage from a discrete ring R to a simplicial commutative ring which introduces a non-trivial simplicial structure on H.
Suppose Γ is a discrete group, and shift perspectives so that BΓ and BH are viewed as simplicial sets. Then by general categorical considerations, there is a natural simplicial set of morphisms BΓ → BH, which should be thought of as the simplicial set of representations of Γ in H. Note that this formalism already incorporates “modding out by conjugation”. If we want to model homomorphisms rather than representations, then we take the pointed morphisms (BΓ, ∗) → (BH, ∗). (This is justified by <cit.>, which says that the classifying space functor from simplicial groups to pointed spaces is fully faithful.)
We have now defined a natural candidate for the “simplicial set of representations Γ → G(𝒜)” (modulo conjugation): it should be the simplicial set of morphisms BΓ → BG(𝒜), where if Γ ≅ \varprojlim_i Γ_i is profinite then we treat BΓ as the formal inverse limit of the BΓ_i.
The actual approach of <cit.> is slightly different. To explain it, we recall that from a scheme X one can extract, following Artin-Mazur <cit.> and Friedlander <cit.>, an étale homotopy type Ét(X). This is a pro-simplicial set whose raison d'être is to capture the étale topology of X. For 𝒪_F[1/S] the ring of S-integers in a global field F, it turns out that Ét(𝒪_F[1/S]) ≅ BΓ where Γ is the étale fundamental group of 𝒪_F[1/S] with respect to some basepoint. (Thus 𝒪_F[1/S] is an “étale K(π, 1)”, at least with p-adic coefficients where p>2.) We prefer Ét(𝒪_F[1/S]) to BΓ, even though they are equivalent, because the former does not refer to a choice of basepoint.[There is a similarity to the discussion of the derived moduli space of Betti local systems in <ref>, in which Γ is analogous to π_1(C), and Ét(𝒪_F[1/S]) is analogous to C itself.] By the preceding discussion, the moduli description of F_G^Γ(A) can be reformulated as the set of lifts
\begin{tikzcd}
& BG(A) \arrow[d] \\
Ét(𝒪_F[1/S]) \arrow[r, "ρ"'] \arrow[ur, "ρ̃", dashed] & BG(k)
\end{tikzcd}
We then define ℱ^ρ_{𝒪_F[1/S]} to be the functor on the category of Artinian local simplicial commutative rings equipped with an augmentation over k, which sends 𝒜 to the simplicial set of “lifts”
\begin{tikzcd}
& BG(c(𝒜)) \arrow[d] \\
Ét(𝒪_F[1/S]) \arrow[r, "ρ"'] \arrow[ur, "ρ̃", dashed] & BG(k)
\end{tikzcd}
where c(𝒜) is a functorial cofibrant replacement of 𝒜. This notion of “cofibrant replacement” is another piece of the model category structure, analogous to projective resolutions in homological algebra, and is what ensures that the functor is actually homotopy invariant.[A subtle point is that c(−) is not monoidal, so by BG(c(𝒜)) we mean the simplicial set obtained by mapping the cosimplicial commutative ring
W(k) ⇉ c(𝒪_G) ⇶ c(𝒪_G ⊗ 𝒪_G) ⋯,
where the maps are dual to those in (<ref>).] Here by “lift” we mean by definition the homotopy fibered product
Map(Ét(𝒪_F[1/S]), BG(c(𝒜))) ×^h_{Map(Ét(𝒪_F[1/S]), BG(k))} {ρ}
which is computed by taking fibered product after forming fibrant replacements in the model category of simplicial sets.
§.§.§ Representability
Lurie's thesis <cit.> establishes a derived version of Schlessinger's representability criterion. To check the criterion, one needs to compute the tangent complex; the answer was mentioned in Example <ref> and will be revisited later.
Using Lurie's derived Schlessinger criterion, Galatius-Venkatesh show that ℱ^ρ_{𝒪_F[1/S]} is representable by a pro-Artinian animated ring ℛ^ρ_{𝒪_F[1/S]}. Checking the criterion is not the most interesting aspect of <cit.>, in our opinion, and we will skip it entirely.
§.§ Desiderata
We abstract out the properties of ℱ^ρ_{𝒪_F[1/S]} and ℛ^ρ_{𝒪_F[1/S]} that we will actually need in the future.
§.§.§ Tangent complex
As already discussed in Example <ref>, the tangent complex of ℱ^ρ_{𝒪_F[1/S]} at ρ is Γ(𝒪_F[S^{-1}], g_ρ[1]). This is explained in <cit.>; note the close similarity to derived moduli of Betti local systems, discussed in <ref>.
§.§.§ Comparison to the classical deformation functor
We have the following compatibility with the classical theory.
As a variant of <ref>, one has a fully faithful embedding of the category of Artinian local rings A augmented over k into the category of Artinian local animated rings augmented over k. By construction, we have a commutative diagram
\begin{tikzcd}[column sep = large]
\{\text{Artinian local augmented commutative rings } A → k\} \arrow[d, hook] \arrow[r, "F^ρ_{𝒪_F[1/S]}"] & \mathrm{Set} \\
\{\text{Artinian local augmented simplicial commutative rings } 𝒜 → k\} \arrow[r, "ℱ^ρ_{𝒪_F[1/S]}"] & \mathrm{sSet} \arrow[u, "π_0"]
\end{tikzcd}
At the level of representing rings, the commutativity of the diagram is equivalent to an isomorphism
π_0(ℛ^ρ_{𝒪_F[1/S]}) ≅ R^ρ_{𝒪_F[1/S]}.
As mentioned previously, it is actually a folklore conjecture, attributed to Mazur, that R^ρ_{𝒪_F[1/S]} is LCI of dimension χ(Γ(𝒪_F[1/S], g_ρ[1])). By <ref>, this is equivalent to the statement that ℛ^ρ_{𝒪_F[1/S]} is actually homotopy discrete, i.e., π_i(ℛ^ρ_{𝒪_F[1/S]}) = 0 for i>0.
Why then bother with all the effort to define ℛ^ρ_{𝒪_F[1/S]}, when we could have simply used R^ρ_{𝒪_F[1/S]}? The most important fact we will use going forward is the description of the tangent complex of ℛ^ρ_{𝒪_F[1/S]}, which comes relatively easily. Without knowing Mazur's conjecture, we don't know anything about the tangent complex of R^ρ_{𝒪_F[1/S]} other than its 0th cohomology group.
§.§ Imposing local conditions
Now we begin the consideration of imposing local conditions. We will use this in two ways:
* To impose conditions of p-adic Hodge-theoretic nature, which are necessary in order to cut down to Galois representations that could possibly “match” the automorphic side.
* To implement the Taylor-Wiles method, wherein one needs to consider the effect of adding and removing ramification at auxiliary primes.
§.§.§ Local deformation functors
For a local field F_v and a representation ρ of π_1(F_v), one defines the local (unrestricted) derived deformation functor ℱ^ρ_{F_v} analogously to the global case, sending
𝒜 ↦ Map(Ét(F_v), BG(c(𝒜))) ×^h_{Map(Ét(F_v), BG(k))} {ρ}.
If v is such that ρ|_{π_1(F_v)} is unramified, then it is also useful to have the variant ℱ^ρ_{𝒪_{F_v}} parametrizing unramified deformations, which sends
𝒜 ↦ Map(Ét(𝒪_{F_v}), BG(c(𝒜))) ×^h_{Map(Ét(𝒪_{F_v}), BG(k))} {ρ}.
When these functors come up, they will typically not be representable, since ρ will typically not be irreducible.
§.§.§ Cutting out local conditions
Suppose v is a place of F. Then we have a map
ℱ^ρ_{𝒪_F[1/S]} → ℱ^ρ_{F_v}
obtained by pulling back a local system along F_v → 𝒪_F[1/S]. If v ∉ S, then the above map factors naturally through ℱ^ρ_{𝒪_F[1/S]} → ℱ^ρ_{𝒪_{F_v}}.
Now suppose that we are given some functor ℱ^{ρ,𝒟_v}_{F_v} → ℱ^ρ_{F_v}, where 𝒟_v stands for some local deformation problem. Typically ℱ^{ρ,𝒟_v}_{F_v} → ℱ^ρ_{F_v} should be a closed embedding, meaning a closed embedding on classical truncations. An example could be ℱ^ρ_{𝒪_{F_v}} ↪ ℱ^ρ_{F_v}, which informally speaking cuts out the deformations which are “unramified at v”. Then we can form the derived fibered product of the diagram
\begin{tikzcd}
ℱ^{ρ,\{𝒟_v\}}_{𝒪_F[1/S]} \arrow[r] \arrow[d] & ℱ^ρ_{𝒪_F[1/S]} \arrow[d] \\
ℱ^{ρ,𝒟_v}_{F_v} \arrow[r] & ℱ^ρ_{F_v}
\end{tikzcd}
[Adding ramification]
Suppose v ∉ S, and let S' = S ⊔ {v}. The morphism 𝒪_F[1/S'] → 𝒪_F[1/S] induces a map ℱ^ρ_{𝒪_F[1/S]} → ℱ^ρ_{𝒪_F[1/S']}. Informally, this morphism sends a deformation unramified outside S to the same deformation regarded as unramified outside S' (i.e., forgetting that it is unramified at v).
In the classical theory of deformation functors and their representing rings one has that
F^ρ_{𝒪_F[1/S']} ×_{F^ρ_{F_v}} F^ρ_{𝒪_{F_v}} ≅ F^ρ_{𝒪_F[1/S]}.
Informally this amounts to the statement that “a deformation of a local system on 𝒪_F[1/S'] that is unramified at v comes uniquely from a deformation of a local system on 𝒪_F[1/S]”, which is obvious. However, we single out this statement because (i) it is used crucially in the Taylor-Wiles method to descend from patched deformation rings, and (ii) its derived version is non-trivial.
To follow up on point (ii): it turns out that the analogous identity holds at the level of derived deformation functors:
ℱ^ρ_{𝒪_F[1/S']} ×^h_{ℱ^ρ_{F_v}} ℱ^ρ_{𝒪_{F_v}} ≅ ℱ^ρ_{𝒪_F[1/S]}.
This is proved in <cit.> by calculating the map of tangent complexes. Roughly speaking, the content of (<ref>) is the statement that
\begin{tikzcd}
F_v \arrow[r] \arrow[d] & 𝒪_F[1/S'] \arrow[d] \\
𝒪_{F_v} \arrow[r] & 𝒪_F[1/S]
\end{tikzcd}
is a homotopy pushout at the level of étale homotopy types. This is similar to (<ref>), where we encountered a presentation with the special property that it was not just a pushout but a homotopy pushout.[To appreciate this point, it may be instructive to observe that if one uses a presentation of the fundamental group of a genus g Riemann surface which is different from that in (<ref>), then one arrives at an a priori different moduli space of Betti local systems. Why is the presentation in (<ref>) the “right” one?]
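For orientation, and as a sketch only: at the level of cohomology with coefficients in g_ρ, the homotopy pushout above encodes the standard excision/Mayer–Vietoris long exact sequence in Galois cohomology,
⋯ → H^i(𝒪_F[1/S], g_ρ) → H^i(𝒪_F[1/S'], g_ρ) ⊕ H^i(𝒪_{F_v}, g_ρ) → H^i(F_v, g_ρ) → H^{i+1}(𝒪_F[1/S], g_ρ) → ⋯,
and comparing this sequence with the fiber product of tangent complexes is, roughly, the calculation alluded to above.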
§.§.§ Crystalline conditions
The next step is to cut out a derived Galois deformation ring with a p-adic Hodge theory condition imposed at p. For concreteness, we focus our attention on the “crystalline” example, which should correspond to motives with good reduction at p, or from another perspective, automorphic forms with no level at p.
We have a map ℱ^ρ_{𝒪_F[1/S]} → ℱ^ρ_{F_p}. (We now know that ℛ^ρ_{F_p} = R^ρ_{F_p} is classical for G = GL_n by the work <cit.>, and it is expected to be true in general, although we will not logically use this.) We want to define a local functor ℱ^{ρ,crys}_{F_p} → ℱ^ρ_{F_p} that “cuts out” the “crystalline” deformations. This is something we do not know how to do in general, since we cannot formulate a moduli-theoretic notion of crystallinity over sufficiently general rings (this will be discussed further in <ref>).
The only solution to this problem that we know of (at the present time) is to restrict ourselves to local conditions which “should” actually be classical, meaning “ℱ^{ρ,crys}_{F_p} = F^{ρ,crys}_{F_p}” where the right hand side is some classically studied object of p-adic Hodge theory, and take as our local condition the composition of the maps
F^{ρ,crys}_{F_p} ↪ F^ρ_{F_p} → ℱ^ρ_{F_p}.
The way that one can recognize the deformation conditions which “should” be classical is, according to <ref>, that they are LCI of the “correct” dimension. In practice, to analyze the resulting objects one also needs to know the tangent complexes, which is typically hard to calculate unless the deformation functor is smooth, so that the tangent complex is just the tangent space.
A general class of conditions where everything works is the Fontaine-Laffaille range, which is what <cit.> studies. Here the problems all go away:
* There is a complete moduli-theoretic description, even integrally.
* The obstructions vanish, so that the deformation functor is smooth and its tangent complex can be described explicitly.
For the rest of this section, we put ourselves in the same situation. We assume we are in the Fontaine-Laffaille range, so that in particular p is “large enough” compared to the Hodge-Tate weights that we are considering. This gives a closed embedding F^{ρ,crys}_{F_p} ↪ ℱ^ρ_{F_p} as above. We then define ℛ^{ρ,crys}_{𝒪_F[1/S]} to represent the derived fibered product ℱ^{ρ,crys}_{𝒪_F[1/S]} of the diagram
\begin{tikzcd}
ℱ^{ρ,crys}_{𝒪_F[1/S]} \arrow[r] \arrow[d] & ℱ^ρ_{𝒪_F[1/S]} \arrow[d] \\
F^{ρ,crys}_{F_p} \arrow[r, hookrightarrow] & ℱ^ρ_{F_p}
\end{tikzcd}
Concretely, ℛ^{ρ,crys}_{𝒪_F[1/S]} is calculated using the derived tensor product (cf. <ref>).
A key point is that by taking the derived fibered product, one has control over the resulting tangent complex. Its Euler characteristic can be computed by using Poitou-Tate duality for Galois cohomology of global fields, and the answer is -δ, where δ≥ 0 is a certain quantity called the “defect” that we are not going to explicate (for our purposes, its definition can be taken to be the Euler characteristic, but it is more typical in the literature to take a different starting point). The appearance of this quantity δ is crucial; the fact that the same δ also manifests on the “automorphic side” is the linchpin of the Calegari-Geraghty method.
§.§ Relation to the Calegari-Geraghty method
We are now going to discuss the main results of <cit.>, which give a reinterpretation and elaboration of the Calegari-Geraghty method <cit.>, which is itself a modification of the Taylor-Wiles method for “δ>0” situations. The reader should be familiar with these methods at least at the level of <cit.> in order to get something out of this subsection.
§.§.§ Conjectural picture
We are going to abbreviate ℛ_0 := ℛ^{ρ,crys}_{𝒪_F[1/S]} and R_0 := π_0(ℛ_0). Let G^∨ be the Langlands dual group of G. There is a complex ℳ_0 of “automorphic forms for G^∨ of level S”, assembled out of the homology of locally symmetric spaces associated to G^∨. We will index ℳ_0 to be in degrees [−δ, 0], although it might later (in <ref>) be reindexed to be in degrees “[−δ−q_0, −q_0]”. The crucial point is that this complex has length δ – the same δ which appears as the negative of the dimension of ℛ_0. Finally, we write M_0 := H^0(ℳ_0).
The “automorphic to Galois” direction of the Langlands correspondence, attaching Galois representations to automorphic forms, equips M_0 with the structure of an R_0-module. (This may be conjectural, depending on what G is.) The expectation is (roughly):
There is an action of ℛ_0 on ℳ_0 with the following properties.
* It recovers the given action of R_0 on M_0 by taking homotopy/homology groups, as in the diagram below:
\begin{tikzcd}[column sep = tiny]
ℛ_0 \arrow[d, twoheadrightarrow] & ↷ & ℳ_0 \arrow[d, twoheadrightarrow] \\
R_0 & ↷ & M_0
\end{tikzcd}
* It makes π_*(ℳ_0)[1/p] generically free over π_*(ℛ_0)[1/p].[Technically, we have defined ℛ_0 as a pro-ring, so that π_*(ℛ_0) is a pro-group. Some care must be taken to “invert p” on such an object.]
The main result of <cit.> is the construction of an action of π_*(ℛ_0) on π_*(ℳ_0), under certain conjectures and certain Taylor-Wiles assumptions, with the properties above.
[The case of GL_1]
At present, the above picture can be realized completely only for G = GL_1, and even this situation is non-trivial.[So far we have focused on the case where G is of adjoint type, so our definitions have to be modified slightly in order to accommodate this case.] If δ=0, which happens when F = ℚ or an imaginary quadratic field, then all rings are classical, and the statement is a consequence of class field theory. The point is that the classical deformation ring is essentially the (completed) group ring of the abelianization of π_1(𝒪_F[1/S]). Class field theory identifies this abelianization with the class group of 𝒪_F[1/S], whose adelic description is the corresponding “locally symmetric space” (a finite set of points).
However, in all other cases we have δ >0, which implies:
* the derived Galois deformation ring is non-classical,
* the locally symmetric space is no longer a discrete set of points,
and the hypothesized action goes beyond classical Galois deformation theory. Nevertheless, the desired action of ℛ_0 on ℳ_0 can still be constructed, which will be explained in the forthcoming paper <cit.>. The point is that ℛ_0 admits a description as a (completed) “derived group ring” of the “derived abelianization” of π_1(𝒪_F[1/S]). To elaborate on what this means:
* The abelianization of a group controls its set of morphisms to abelian groups. The derived abelianization of a group controls its anima of morphisms to animated abelian groups. One formulates this precisely in terms of universal properties, as a left adjoint to the forgetful functor from (animated) abelian groups to (animated) groups.
* The group ring of an abelian group can also be characterized in terms of a universal property: it is left adjoint to the functor from commutative rings to abelian groups given by extracting units. Analogously, the “derived group ring” construction can be characterized as left adjoint to an analogous functor from animated rings to animated abelian groups.
The punchline of <cit.> is that the derived abelianization of π_1(𝒪_F[1/S]) “is” the animated abelian group obtained from the corresponding locally symmetric space for G (which has a topological group structure in the special case G = GL_1), and its derived group ring “is” the homology chains on this animated abelian group, with its induced ring structure.
Meanwhile, ℳ_0 is also the homology chains on the locally symmetric space, and the sought-for action is the tautological one. In particular, ℳ_0 is a free module of rank one over ℛ_0 for G = GL_1.
§.§.§ Results of Galatius-Venkatesh
An informal statement of the main result of <cit.> is that, assuming “standard” conjectures and hypotheses related to automorphy lifting, and under “no congruences” and Fontaine-Laffaille assumptions,
* π_*(ℛ_0) is supported in degrees 0 ≤ * ≤ δ, and is moreover an exterior algebra on π_1(ℛ_0), which is free of rank δ.
* H_*(ℳ_0) carries the structure of a free graded module over π_*(ℛ_0), extending the usual structure for *=0.
The conjectures alluded to above are about the existence of Galois representations attached to automorphic forms, and local-global compatibility for such representations. The hypotheses are (a stringent form of)[There should be wide scope for optimization of <cit.> from a technical perspective, by incorporating Kisin's methods.] the usual “Taylor-Wiles” type conditions on the residual representation. See <cit.> for the precise formulations.
§.§.§ Connection to the Taylor-Wiles method
The proof of the main result of Galatius-Venkatesh is based on the Taylor-Wiles method, incorporating the modifications introduced by Calegari-Geraghty. It is an interesting feature that the determination of π_* (_0) seems to be closely bound to the Taylor-Wiles method, and we will give a brief and extremely impressionistic sketch of how this works.
First, we orient ourselves psychologically. We believe there should be an action of ℛ_0 on ℳ_0, but here and throughout we will only be able to construct (at least at first) the “classical shadow” of this action, as depicted in the diagram below:
\begin{tikzcd}[column sep = tiny]
ℛ_0 \arrow[d, twoheadrightarrow] & \overset{?}{↷} & ℳ_0 \arrow[d, twoheadrightarrow] \\
R_0 & ↷ & M_0
\end{tikzcd}
Here ? stands for an action that we believe exists, but don't know how to construct.
The starting point of the Taylor-Wiles method is to consider an auxiliary family of similar situations with additional level structure added at a well-chosen collection of primes Q_n. Indeed, letting Q_n be such a set, we define
* ℛ_n := ℛ^{ρ,crys}_{𝒪_F[1/SQ_n]} and R_n := π_0(ℛ_n), which by <ref> is the classical crystalline deformation ring. The derived ring ℛ_n has virtual dimension −δ.
* ℳ_n for the corresponding complex of “automorphic forms for G^∨ of level SQ_n”, which by assumption is concentrated in degrees [−δ, 0], and M_n := H_0(ℳ_n).
Then the same story holds for each n: we want an action ℛ_n ↷ ℳ_n, but what we can actually construct is its classical shadow R_n ↷ M_n. Therefore, we have a diagram
\begin{tikzcd}[column sep = tiny]
ℛ_n \arrow[d, twoheadrightarrow] & \overset{?}{↷} & ℳ_n \arrow[d, twoheadrightarrow] \\
R_n & ↷ & M_n
\end{tikzcd}
At first, the picture looks the same as when n=0, so it seems that no progress has been made. But the point is that by choosing Q_n artfully, one can morally arrange the ℛ_n to “limit” to an object ℛ_∞ which is actually classical, so that ℛ_∞ ≃ R_∞ := π_0(ℛ_∞). Now, in reality we will not be constructing ℛ_∞ in this way, but we will construct an R_∞ which will be seen to be “smooth and of the correct dimension”. By the discussion in <ref>, this justifies setting ℛ_∞ to be equal to R_∞.
Furthermore, the ℳ_n “limit” to an object ℳ_∞ (which really does exist), and it is really true that ℳ_∞ ≃ M_∞ := H_0(ℳ_∞). Also, one should imagine that the diagram (<ref>) “limits” to a diagram
\begin{tikzcd}[column sep = tiny]
ℛ_∞ \arrow[d, twoheadrightarrow] & \overset{?}{↷} & ℳ_∞ \arrow[d, twoheadrightarrow] \\
R_∞ & ↷ & M_∞
\end{tikzcd}
Since in this case ℛ_∞ → R_∞ is actually an isomorphism of simplicial commutative rings, the classical action R_∞ ↷ M_∞ automatically gives the desired action ℛ_∞ ↷ ℳ_∞.
To see how this helps with our original problem, let us contemplate the relation between ℛ_n and ℛ_0. The difference is that ℛ_0 only parametrizes deformations unramified at Q_n (recall that ρ is unramified at the primes in Q_n), while ℛ_n allows deformations that ramify at Q_n. As explained in Example <ref>, ℛ_0 can be recovered from ℛ_n by forming a derived tensor product that cuts out the unramified locus from among the deformations of local Galois groups over the primes in Q_n. More precisely, one chooses Q_n so that deformations of ρ are automatically tame above Q_n. Then the maps from the tame inertia groups over Q_n to π_1(𝒪_F[1/SQ_n]) induce a ring homomorphism 𝒮_n → ℛ_n, where 𝒮_n is a certain deformation ring for tame inertia at the primes in Q_n. There is an augmentation 𝒮_n → 𝒮_0 = W(k) that corresponds to the deformations which are trivial on tame inertia. By similar considerations as in Example <ref>, one has an equivalence of simplicial commutative rings
ℛ_n ⊗_{𝒮_n} 𝒮_0 ≃ ℛ_0.
This identity approximately “limits” to
“ℛ_∞ ⊗_{𝒮_∞} 𝒮_0 ≃ ℛ_0.”
More generally, one has approximately
“ℛ_∞ ⊗_{𝒮_∞} 𝒮_n ≃ ℛ_n.”
The reason we have put these statements in quotes is that they are not exactly true, but they are true up to a certain amount of error which shrinks to 0 as n → ∞, which makes them acceptable for our purposes. The source of this “error” lies in the construction of R_∞: it is extracted by a compactness argument from an inverse system of finite quotients of the R_n.
There will also be an action of 𝒮_n on M_n by “diamond operators”, for which one has
ℳ_∞ ⊗_{𝒮_∞} 𝒮_n ≃ ℳ_n,
Furthermore, local-global compatibility ensures that this action is compatible with the one induced by 𝒮_n → ℛ_n. Then, from the action of ℛ_∞ ≃ R_∞ on ℳ_∞ ≃ M_∞, taking derived tensor products and using (<ref>) and (<ref>), we have
“ℛ_0 ≅ ℛ_∞ ⊗_{𝒮_∞} 𝒮_0” ↷ ℳ_∞ ⊗_{𝒮_∞} 𝒮_0 ≅ ℳ_0.
As discussed above, the statement in quotes is not exactly true but can be made true on homotopy groups up to arbitrary precision. This is done by instead using (<ref>) to “go from ∞ to n”, which incurs arbitrarily small error as n increases, and then using (<ref>) to “go from n to 0”, which does not incur any error. That is enough to produce an isomorphism
π_i(ℳ_0) ≈ π_i(ℳ_∞ ⊗_{𝒮_∞} 𝒮_0) ≅ Tor^{𝒮_∞}_i(M_∞, 𝒮_0),
which we will then be able to compute explicitly (under favorable assumptions). Indeed, the argument also shows that 𝒮_∞ → R_∞ is a quotient by a regular sequence of length δ, which allows us to compute Tor^{𝒮_∞}_i(M_∞, 𝒮_0) by a standard Koszul complex calculation.
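To spell out the Koszul calculation, here is a minimal sketch under the simplifying assumptions that R_∞ = 𝒮_∞/(f_1, …, f_δ) for a regular sequence f_1, …, f_δ ∈ ker(𝒮_∞ → 𝒮_0), and that M_∞ is free of rank one over R_∞ (the “no congruences” situation); this is intended only as orientation. The Koszul complex Kos(f_1, …, f_δ) resolves R_∞ over 𝒮_∞, and after base change to 𝒮_0 its differentials become multiplication by the images of the f_j, which vanish, so
Tor^{𝒮_∞}_i(R_∞, 𝒮_0) ≅ H_i(Kos(f_1, …, f_δ) ⊗_{𝒮_∞} 𝒮_0) ≅ H_i(∧^∙(𝒮_0^{⊕δ}), d = 0) ≅ ∧^i(𝒮_0^{⊕δ}),
which is exactly the exterior algebra on a free rank-δ module in degree 1 predicted for π_*(ℛ_0) by the result quoted above.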
This completes our shamelessly vague and impressionistic sketch. We encourage the reader to consult the article of Caraiani-Shin <cit.> for a more substantial discussion of the Calegari-Geraghty enhancement of the Taylor-Wiles method.
§ DERIVED HECKE ALGEBRAS
§.§ The local derived Hecke algebra
We briefly review the theory of derived Hecke algebras from <cit.>. Let G be a split reductive group over a local field K, say of residue field 𝔽_q having characteristic p. Let U ⊂ G(K) be a compact open subgroup. (For our purposes, we are most interested in the maximal compact subgroup U = G(𝒪_K).)
Let Λ be a commutative ring. Define the universal module
U_Λ(U) = ind_U^{G(K)} Λ
where U acts trivially on Λ. We can present the usual Hecke algebra for the pair (G(K),U) as
ℋ(G(K),U;Λ) := Hom_{G(K)}(U_Λ(U), U_Λ(U)).
This presentation suggests the following generalization.
The derived Hecke algebra for (G(K),U) with coefficients in Λ is the graded ring
H(G(K),U;Λ) := Ext^*_{G(K)}(U_Λ(U), U_Λ(U)),
where the Ext is formed in the category of smooth G(K)-representations. For U = G(𝒪_K), we abbreviate H(G(K);Λ) := H(G(K), G(𝒪_K);Λ).
The ring H(G(K),U;Λ) is really just a graded (associative) ring, rather than any kind of derived (associative) ring. However, its description makes it clear that it arises as the cohomology groups of a differential graded algebra RHom_{G(K)}(U_Λ(U), U_Λ(U)). It might be more puritanical to call the latter object the “derived Hecke algebra,” but we follow <cit.> in applying this name to the graded ring H(G(K),U;Λ). We will instead call RHom_{G(K)}(U_Λ(U), U_Λ(U)) the differential graded Hecke algebra, and denote it by ℌ(G(K), U; Λ), or sometimes simply ℌ(G(K); Λ) if U = G(𝒪_K). It does not have the structure of an animated ring; a priori, it is just a differential graded (i.e., E_1) algebra, although we conjecture in <ref> that it has more commutative structure.
The 0th graded group of H(G(K),U;Λ) is the classical Hecke algebra ℋ(G(K),U;Λ).
We next give a couple more concrete descriptions of the derived Hecke algebra, following <cit.>.
§.§.§ Function-theoretic description
Let x, y ∈ G(K)/U and let G_{x,y} ⊂ G(K) be the stabilizer of the pair (x,y). We note that G_{x,y} is a profinite open subgroup of G(K). We can think of H(G(K),U;Λ) as the space of functions
G(K)/U × G(K)/U ∋ (x,y) ↦ h(x,y) ∈ H^*(G_{x,y};Λ)
satisfying the following constraints:
* The function h is “G(K)-invariant” on the left. More precisely, we have
[g]^* h(gx,gy) = h(x,y)
for all g ∈ G(K), where [g]^*: H^*(G_{gx,gy}; Λ) → H^*(G_{x,y}; Λ) is pullback by Ad(g).
* The support of h is finite modulo G(K).
The multiplication is given by a convolution formula, where one uses the cup product to define multiplication on the codomain, and restriction/inflation to shift cohomology classes to the correct groups <cit.>.
§.§.§ Double coset description
For x ∈ G(K)/U, let U_x = Stab_U(x). Explicitly, if x = g_x U then U_x := U ∩ Ad(g_x)U.
We can also describe H(G(K),U;Λ) as functions
x ∈ U\G(K)/U ↦ h(x) ∈ H^*(U_x; Λ)
which are compactly supported, i.e., supported on finitely many double cosets. For the description of the algebra structure, it seems better to convert to the model of (<ref>).
§.§.§ The derived Hecke algebra of a torus
Let T be a split torus over K. We can explicitly describe the derived Hecke algebra of the torus T(K), using now the double coset model. Since T is abelian we simply have T(𝒪_K)_x = T(𝒪_K) for all x ∈ T(K)/T(𝒪_K).
We have T(K)/T(𝒪_K) ≅ X_*(T). Identify
X_*(T) = T(K)/T(𝒪_K) ↪ G(K)/G(𝒪_K)
by the map X_*(T) ∋ χ ↦ χ(ϖ) ∈ G(K)/G(𝒪_K), where ϖ is a uniformizer of K. Then H(T(K); Λ) simply consists of compactly supported functions
X_*(T) → H^*(T(𝒪_K); Λ)
with the multiplication given by convolution; in other words,
H(T(K); Λ) ≅ Λ[X_*(T)] ⊗_Λ H^*(T(𝒪_K); Λ).
If p is invertible in Λ, then the structure of H^*(T(𝒪_K); Λ) can be elucidated as follows. First of all, the reduction map T(𝒪_K) → T(𝔽_q) has pro-p kernel, which then has vanishing higher cohomology with coefficients in Λ by the assumption, and therefore induces an isomorphism H^*(T(𝔽_q); Λ) → H^*(T(𝒪_K); Λ).
Furthermore, since T is split we have T(𝔽_q) ≅ (𝔽_q^×)^r where r = dim T. We are mainly interested in the case where Λ is of the form 𝒪/l^n where 𝒪 is the ring of integers in a number field and l is a prime above ℓ ≠ p. In this situation, H^*(T(𝔽_q); Λ) has non-zero higher cohomology exactly when q ≡ 1 (mod ℓ).
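To illustrate (a standard group-cohomology computation, included here for orientation, and stated under the extra assumptions that ℓ is odd and q ≡ 1 (mod ℓ^n), so that Λ = ℤ/ℓ^n sees the full ℓ-part of q − 1): since 𝔽_q^× is cyclic of order q − 1,
H^*(𝔽_q^×; ℤ/ℓ^n) ≅ ℤ/ℓ^n[x] ⊗ ∧(y), |x| = 2, |y| = 1,
and then by the Künneth formula (all the terms being free ℤ/ℓ^n-modules)
H^*(T(𝔽_q); ℤ/ℓ^n) ≅ ∧(y_1, …, y_r) ⊗ ℤ/ℓ^n[x_1, …, x_r].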
§.§.§ The derived Satake isomorphism
We assume in this subsection that q ≡ 1 ∈ Λ. Let T ⊂ G be a maximal torus and W the Weyl group of T ⊂ G. We consider an analogue of the classical Satake transform for the derived Hecke algebra H(G(K);Λ), which takes the form
“Derived Hecke algebra for G” ≅ “(Derived Hecke algebra for T)^W”.
More precisely, we define the derived Satake transform
H(G(K);Λ) →H(T(K); Λ)
simply by restriction (in the function-theoretic model <ref>) along the map
(T(K)/T(𝒪_K))^2 → (G(K)/G(𝒪_K))^2
from (<ref>). In more detail, let h ∈H(G(K); Λ) be given by the function
(G(K)/G(𝒪_K))^2 ∋ (x,y) ↦ h(x,y) ∈ H^*(G_{x,y};Λ).
Then (<ref>) takes h to the composition
\begin{tikzcd}
(T(K)/T(𝒪_K))^2 \arrow[r, hook] & (G(K)/G(𝒪_K))^2 \arrow[r, "h"] & H^*(G_{x,y};Λ) \arrow[r, "\mathrm{res}"] & H^*(T_{x,y};Λ)
\end{tikzcd}
It may be surprising that this is the right definition, since the analogous construction in characteristic 0, on the usual (underived) Hecke algebra, is far from being the usual Satake transform. It is only because of our assumptions on the relation between the characteristics (namely, that q ≡ 1 ∈Λ) that this “naïve” construction turns out to behave well (otherwise, it would not even be a ring homomorphism). The construction seems to have been motivated by <cit.>; see <cit.> for a proof of the relevant point.
Let W be the Weyl group of T in G. If |W| is invertible in Λ and q = 1 ∈ Λ, then the map (<ref>) induces an isomorphism
H(G(K);Λ) ≅ H(T(K); Λ)^W.
Strictly speaking, Theorem <ref> is only proved when K has characteristic zero in <cit.>, but essentially the same argument should work in general.
§.§.§ Commutativity?
A priori, ℌ(G(K), U; Λ) is just an associative differential graded algebra and H(G(K),U;Λ) is just an associative graded algebra. It is an interesting question what more structure or properties these algebras have. Indeed, the degree-zero piece of H(G(K),U;Λ), namely the classical Hecke algebra ℋ(G(K),U;Λ), famously turns out to be commutative, although this is non-trivial to prove.
We think it is reasonable to conjecture that H(G(K);Λ) is also commutative, at least if the characteristic of Λ is not in “bad” position with respect to G and the residue characteristic of K. (No counterexample is known at present, even for bad characteristics.) We list some cases in which the commutativity is known:
* The commutativity is known for Λ of characteristic ℓ ∤ #W such that q ≡ 1 (mod ℓ) by Theorem <ref> (which is due to Venkatesh).
* More generally, the commutativity is known if #(G/B)(𝔽_q) and p are invertible in Λ by Gehrmann <cit.>.[This paper also investigates examples for G = GL_2 which do not fall under these hypotheses.]
To say a bit about these arguments, we recall two proofs of the commutativity of the classical spherical Hecke algebra: one via the Satake transform, and one via “Gelfand's trick”. Venkatesh's argument is a generalization of the method of the Satake transform, and Gehrmann's argument is a generalization of Gelfand's trick. We note that although Gehrmann proves commutativity in more generality, Venkatesh's proof gives more refined information which is needed for global applications.
There are subtler questions that one can formulate beyond the commutativity of H(G(K); Λ), concerning the structure of the differential graded Hecke algebra ℌ(G(K); Λ). Some more speculative discussion of this problem will be given in <ref>.
§.§.§ Extensions
The literature on derived Hecke algebras is not as comprehensive as one would hope. It would be useful to:
* Extend the theory to non-split groups G.
* Extend the theory to incorporate non-trivial coefficients, i.e., replace Λ[G(K)/U] with the compact induction of a non-trivial representation of U. In a global setting, this would be related to considering non-trivial local coefficient systems on the corresponding locally symmetric space.
§.§ Actions of derived Hecke algebras
Maintain the preceding notation: let G be a reductive group over a local field K of residue characteristic p, U ⊂ G(K) be an open compact subgroup, and Λ be a commutative ring.
If C^∙ is an object of the derived category of smooth G(K)-representations over Λ, then we have (formally) a right action of the differential graded Hecke algebra ℌ(G(K),U;Λ) (resp. the derived Hecke algebra H(G(K),U;Λ)) on RHom_{Λ[U]}(Λ, C^∙) (resp. Ext^*_{Λ[U]}(Λ, C^∙)).
Indeed, the derived endomorphism ring RHom_{G(K)}(U_Λ(U), U_Λ(U)) tautologically has a right action on the functor RHom^∙_{Λ[G(K)]}(U_Λ(U), −). Then the Fact follows from the Frobenius reciprocity isomorphism
RHom^∙_{Λ[U]}(Λ, C^∙) ≅ RHom^∙_{Λ[G(K)]}(U_Λ(U), C^∙).
The functor C^∙ ↦ RHom^∙_{Λ[U]}(Λ, C^∙) is that of derived U-invariants. Thus Fact <ref> may be captured more colloquially by the slogan:
There is a canonical right action of the derived Hecke algebra for (G,U) on the derived U-invariants of a G(K)-representation.
If U is pro-p, and p is invertible in Λ, then formation of U-invariants is exact, so that the derived U-invariants agree with the usual U-invariants. The case where p is not invertible in Λ will be discussed in the next subsection.
A source of examples comes from the cohomology of locally symmetric spaces. Taking U to be the level structure at p, the cohomology complex of the locally symmetric space for G is realized canonically as the derived U-invariants of a G(K)-representation, which is at least heuristically “the cohomology complex of the locally symmetric space with infinite level structure at p”. The motivation for the local derived Hecke algebra in <cit.> was to analyze such examples. We will say more about this later in the subsequent sections.
§.§ Mod p derived Hecke algebras of p-adic groups
Let G be a reductive group over ℚ_p – we use the same notation for the algebraic group and its group of ℚ_p-valued points – and let k be an algebraically closed field of characteristic p. Let Rep_k(G) denote the category of continuous representations of G on k-vector spaces, and let Rep^sm_k(G) ⊂ Rep_k(G) denote the subcategory of smooth representations. When G = GL_2(ℚ_p) or a closely related group, the irreducible objects in Rep^sm_k(G) have been classified for some time, but for every other group even an approximate classification remains elusive, in spite of impressive efforts over the past 15 years including <cit.>.
The abelian category Rep^sm_k(G) nevertheless has a simple structure in one respect: every irreducible object admits a canonical space of surjective homomorphisms from a compact projective generator, denoted U below. This fact has been exploited by Schneider to define a derived equivalence between Rep^sm_k(G) and the derived category of dg-modules over the derived endomorphism algebra REnd_G(U)^{op}. The properties of the latter, a mod p analogue of the mod ℓ derived Hecke algebras introduced above, are still largely unexplored. A simpler version, called the derived diamond algebra, has been shown by Khare and Ronchetti <cit.> to play a significant role in the theory of p-adic modular forms.
§.§.§ Schneider's theory
Fix an Iwahori subgroup I ⊂ G, and let I(1) ⊂ I be its maximal pro-p normal subgroup; thus I/I(1) is the group of 𝔽-points of a torus over a finite field 𝔽. It is easy to show that every irreducible π ∈ Rep^sm_k(G) is generated by its subspace π^{I(1)}. This defines a canonical functor
τ_k: Rep^sm_k(G) → Mod(ℋ(G,I(1);k)^{op})
π ↦ π^{I(1)}
where ℋ(G,I(1);k) is the k-valued Hecke algebra of G relative to I(1). Here we caution about a notational inconsistency: Schneider's convention differs from ours (which follows Venkatesh) by formation of the opposite algebra. That is, letting U := ind_{I(1)}^G k be the universal induced module, Schneider defines the Hecke algebra of G relative to I(1) to be End_G(U)^{op} instead of End_G(U). Thus, when we translate his results into our conventions, an extra “op” will appear.
A well-known theorem of Borel and Casselman <cit.> asserts that, if k is replaced by ℂ, then (<ref>) defines an equivalence of categories between the subcategory of the left-hand side consisting of representations generated by their I(1)-fixed vectors and Mod(ℋ(G,I(1);ℂ)^{op}). This is also true for k in characteristic p when G = GL(2,ℚ_p), but Ollivier <cit.> proved that this is almost never the case for other G, even though every irreducible π is generated by π^{I(1)}. Schneider found the appropriate generalization of the Borel–Casselman theorem by replacing ℋ(G,I(1);k) by the differential graded algebra
ℌ(G,I(1);k) = REnd_G(U) := RHom_G(U,U).
The replacement of the functor (<ref>) is
τ^∙_k: D(G) → D(ℌ(G,I(1);k)^{op})
π ↦ RHom_{I(1)}(1,π) = RHom_G(U,π)
Here D denotes the (unbounded) derived category, we write D(G) instead of D(Rep^sm_k(G)),
and the final equality is Frobenius reciprocity. Schneider's theorem is the following.
Suppose I(1) is a torsion-free p-adic group. Then U is a compact generator of D(G) and the functor (<ref>) is
an equivalence of triangulated categories.
A first description of the category Rep^sm_k(G) is contained in <cit.>. Parabolic induction is defined as a functor from Rep^sm_k(M) to Rep^sm_k(G) if M is a Levi component of a parabolic subgroup P; in characteristic p this induction is not normalized and the result depends on P as well as M. The irreducible admissible representation π of G is supercuspidal if it does not occur as a subquotient of a
result depends on P as well as M. The irreducible admissible representation π of G is supercuspidal if it does not occur as a subquotient of a
representation induced from an admissible irreducible representation of a proper parabolic subgroup of G. The main theorem of <cit.> reduces the classification
of Rep^sm_k(G) to the classification of supercuspidal representations of Levi subgroups. This corresponds to a similar classification of modules for the (underived) Hecke algebra ℋ(G,I(1);k)^{op}: the Hecke algebra modules corresponding to supercuspidal π are called supersingular and have a simple characterization in terms of ℋ(G,I(1);k)^{op} (see <cit.>).
This suggests that the structure of D(ℌ(G,I(1);k)^{op}) can also be reduced to the study of triangulated subcategories of supersingular modules, but no one seems to have looked into this question.
When I(1) is not torsion-free, it can be replaced in the definition of U and in Theorem <ref> by an appropriate torsion-free subgroup of finite index, but the theory of the underived Hecke algebras for general open compact subgroups is much less clear.
As in the ℓ ≠ p case, we will call the cohomology ring
H(G,U;Λ) := H^*(ℌ(G,U;Λ))
the derived Hecke algebra; this is (a priori) just a graded associative ring. The thesis of N. Ronchetti <cit.> studies the derived spherical Hecke algebra H(G,G(ℤ_p);Λ) and defines a Satake homomorphism
H(G,G(ℤ_p);Λ) → H(T,T(ℤ_p);Λ) ≅ Λ[X_*(T)] ⊗ H^*(T(ℤ_p);Λ).
In this case, with T(ℤ_p) p-torsion free, H^*(T(ℤ_p);Λ) is an exterior algebra on H^1. This algebra is much more manageable than Schneider's ℌ(G,I(1);k); however, the compact induction of the trivial representation of G(ℤ_p) does not generate D(G), so Ronchetti's algebra cannot account fully for the mod p representation theory of G. However, simpler p-adic derived Hecke algebras do seem to play a role in the global theory of p-adic modular forms; see <ref> below.
§ COHOMOLOGY OF LOCALLY SYMMETRIC SPACES
§.§ Locally symmetric spaces
In this section G is a connected reductive group over ℚ, with center Z = Z_G. We fix a subgroup K_∞ ⊂ G(ℝ) containing Z(ℝ) and a maximal compact subgroup of the identity component G(ℝ)^0 ⊂ G(ℝ), so that K_∞/Z(ℝ) is compact. Then X = G(ℝ)/K_∞ is a finite union of copies of the symmetric space for the derived subgroup G(ℝ)^{der,0}. Let K_f ⊂ G(𝐀_f) be an open compact subgroup; the corresponding congruence subgroup of G(ℚ) is the subgroup Γ = Γ_{K_f} ⊂ G(ℚ) given by the intersection G(ℚ) ∩ K_f in G(𝐀_f). Let K'_f ⊂ K_f be a torsion-free normal subgroup of finite index, with quotient Q; then
X(Γ_K'_f) := Γ_K'_f\ X
is a C^∞ manifold, and
X(Γ_K_f) = Γ_K_f\ X = Q\ X(Γ_K'_f)
is the locally symmetric space attached to K_f. For most purposes the finite group Q and the singularities of X(Γ_K_f) will play no role in what follows.
We will be most concerned with the adelic locally symmetric spaces
S_{K_f}(G,X) = G(ℚ)\X × G(𝐀_f)/K_f
and with the limit
S(G,X) = \varprojlim_{K_f} S_{K_f}(G,X),
the limit taken over all open compact subgroups K_f ⊂ G(𝐀_f).
For fixed K_f, the adelic locally symmetric space can be identified with the disjoint union of a finite set of discrete arithmetic quotients of X:
by reduction theory we can write G(𝐀_f) = ∐_j G(ℚ) α_j K_f for a finite set of α_j, and then
S_{K_f}(G,X) = ∐_j X(Γ_j),
where Γ_j = G(ℚ) ∩ α_j K_f α_j^{-1}.
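A standard example, stated here just for orientation: for G = GL_2/ℚ and K_f = GL_2(Ẑ), strong approximation for SL_2 together with the triviality of the class group of ℚ gives a single class in the double coset set, so that
GL_2(𝐀_f) = GL_2(ℚ) · GL_2(Ẑ), Γ_1 = GL_2(ℚ) ∩ GL_2(Ẑ) = GL_2(ℤ),
and S_{K_f}(G,X) ≅ GL_2(ℤ)\ℍ^± ≅ SL_2(ℤ)\ℍ, the classical modular curve (here X = ℍ^± for GL_2).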
Unless otherwise indicated, for any cohomology theory H^*, usually
with twisted coefficients, the
space H^*(S(G,X)) will be understood to be the filtered colimit
\varinjlim_{K_f} H^*(S_{K_f}(G,X)).
§.§ Review of the de Rham theory
Let 𝒜(G) denote the (complex) vector space of automorphic forms on G(ℚ)\G(𝐀), and let 𝒜_0(G) ⊂ 𝒜(G) denote the subspace of cusp forms. Let Z = Z_G denote the center of G, K_∞ ⊂ G(ℝ) a closed subgroup of the identity component that contains Z(ℝ) and whose image in the adjoint group G(ℝ)^{ad} is compact.
Depending on the circumstances, K_∞ may or may not be maximal compact modulo center. For bookkeeping purposes, we let
K^c_∞ denote the maximal compact subgroup of K_∞. We write 𝔤 = Lie(G), 𝔨 = Lie(K_∞); unless otherwise indicated these are taken to be the Lie algebras of the complex points. Then we have the Cartan decomposition
𝔤 = 𝔨 ⊕ 𝔭.
The (possibly disconnected) symmetric space X = G(ℝ)/K_∞ has an invariant hermitian structure precisely when there is a homomorphism
h: ℂ^× = (R_{ℂ/ℝ}𝔾_m)(ℝ) → G(ℝ)
with image in the center of K_∞ that satisfies the axioms of a Shimura datum (cf. Morel's article <cit.> in this volume). In that case, (<ref>) can be refined into the Harish-Chandra decomposition:
𝔤 = 𝔨 ⊕ 𝔭^+ ⊕ 𝔭^-,
Here, under the map of tangent spaces
𝔤 = T_{G,e} → T_{X,h},
where T_{G,e} (resp. T_{X,h}) is the tangent space at the identity (resp. the tangent space at the K_∞-fixed point h ∈ X), 𝔭^+ (resp. 𝔭^-) is taken to the holomorphic (resp. antiholomorphic) tangent space; they are respectively of Hodge type (-1,1) (resp. (1,-1)) for the Hodge structure on 𝔤 defined by h. We sometimes write 𝔭^+ = 𝔭^+_h, 𝔨 = 𝔨_h, in order to emphasize the choice of h.
The symmetric space X has a G(ℝ)-invariant metric, unique up to multiplication by a positive real scalar, that descends to
S(G,X). If G/Z is anisotropic, then S(G,X) is compact and its de Rham cohomology can be computed by means of harmonic forms with respect to the invariant metric.
Matsushima's formula reinterprets this computation in terms of relative Lie algebra cohomology of the space (G) of automorphic forms.
Let ρ: G → GL(V) be a finite-dimensional linear representation, defined over a subfield E ⊂ ℂ. Let Ṽ be the corresponding local system over S(G,X), with coefficients in E:
Ṽ = \varprojlim_{K_f} G(ℚ)\X × G(𝐀_f) × V/K_f.
In order for Ṽ to be a local coefficient system, we need to assume that the Zariski closure of Z(ℚ) ∩ K_f in Z, for all sufficiently small K_f, is in the kernel of ρ; this assumption will be made without comment in what follows. We write V_ℂ = V ⊗_E ℂ and define Ṽ_ℂ in the same way.
By an admissible irreducible representation of G(𝐀) we always mean an irreducible (𝔤,K_∞) × G(𝐀_f)-representation, with the chosen K_∞.
Assume G/Z is anisotropic. Then there is a canonical isomorphism of G(𝐀_f)-representations
H^*(S(G,X),Ṽ_ℂ) ≅ ⊕_π m(π) H^*(𝔤,𝔨; π_∞ ⊗ V_ℂ) ⊗ π_f.
Here π runs through admissible irreducible representations of G(𝐀), and there is a countable direct sum decomposition
𝒜(G) ≅ ⊕_π m(π) π
where m(π) is an (integer) multiplicity and π ≅ π_∞ ⊗ π_f is the factorization with π_∞ (resp. π_f) an irreducible (𝔤,K_∞)-module (resp. G(𝐀_f)-representation).
The notation H^*(𝔤,𝔨;−) denotes relative Lie algebra cohomology.
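For orientation, we recall in brief (this is standard; see the reference below for precise statements) that for a (𝔤,K_∞)-module W, the relative Lie algebra cohomology is computed by the complex
C^q(𝔤,𝔨;W) = Hom_𝔨(∧^q(𝔤/𝔨), W)
with the Chevalley–Eilenberg differential; moreover, when W = π_∞ ⊗ V_ℂ with π_∞ unitary, the differential vanishes (Kuga's lemma), so that, up to the connectedness caveats noted below,
H^*(𝔤,𝔨; π_∞ ⊗ V_ℂ) ≅ Hom_𝔨(∧^*(𝔤/𝔨), π_∞ ⊗ V_ℂ).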
The standard reference for this result is the book <cit.> of Borel and Wallach, to which we refer for definitions and proofs. The interest of Matsushima's formula
is that it separates the calculation of the cohomology into a global part, corresponding to the multiplicities m(π), and a local part that depends only on π_∞.
We will only consider π for which π_∞ is tempered; in this case the cohomology H^*(𝔤,𝔨; π_∞ ⊗ V_ℂ) is completely calculated
in <cit.>, and we copy the answer in the following section.
The result is more complicated when G/Z is not anisotropic, and the proofs are considerably more difficult. In particular, there is no longer a direct sum decomposition
as in (<ref>). Nevertheless, it was proved by Franke <cit.> (building on earlier work of Borel) that the following analogue of Theorem <ref> holds:
H^*(S(G,X),Ṽ_ℂ) ≅ H^*(𝔤,𝔨; 𝒜(G) ⊗ V_ℂ).
§.§.§ Cuspidal cohomology
We define the cuspidal cohomology
H^*_0(S(G,X),Ṽ_ℂ) ⊂ H^*(S(G,X),Ṽ_ℂ)
to be the image of H^*(𝔤,𝔨; 𝒜_0(G) ⊗ V_ℂ) in H^*(𝔤,𝔨; 𝒜(G) ⊗ V_ℂ), with respect to the isomorphism (<ref>). Borel proved that the map from the relative Lie algebra cohomology of cusp forms to cohomology is injective. Thus if we write
𝒜_0(G) ≅ ⊕_π m_0(π) π,
where m_0(π) denotes the multiplicity of π in the cuspidal spectrum, we have the analogue of Matsushima's formula:
H^*_0(S(G,X),Ṽ_ℂ) ≅ ⊕_π m_0(π) H^*(𝔤,𝔨; π_∞ ⊗ V_ℂ) ⊗ π_f.
In (<ref>) and in Theorem <ref>, the relative Lie algebra cohomology should really be replaced by (𝔤,K^c_∞)-cohomology, where K^c_∞ is now a maximal compact subgroup. The version stated here is only correct when K^c_∞ is connected and the center of G is finite. The discussion here can easily be adapted to handle the complications introduced by disconnectedness of K^c_∞ or by the center of G.
We fix a representation π_f of G(𝐀_f) and let [π_f]_∞ denote the set of irreducible representations π_∞ of G(ℝ) such that π = π_∞ ⊗ π_f ⊂ 𝒜_0(G). We make the hypothesis
Every π_∞∈ [π_f]_∞ is tempered.
We can define
H^*_0(S(G,X),Ṽ_ℂ)[π_f] = Hom_{G(𝐀_f)}(π_f, H^*_0(S(G,X),Ṽ_ℂ)).
By (<ref>) we then have
H^*_0(S(G,X),Ṽ_ℂ)[π_f] ≅ ⊕_{π_∞ ∈ [π_f]_∞} m_0(π) H^*(𝔤,𝔨; π_∞ ⊗ V_ℂ).
§.§.§ Relative Lie algebra cohomology in the tempered case
The set of tempered π_∞ with non-trivial relative Lie algebra cohomology with coefficients in a given V_ℂ is determined in <cit.>, especially in Theorems 3.3 and 5.1. Every such π_∞ is isomorphic to a parabolically induced representation of the form I_{P,σ,0}, in the notation of <cit.>, where P is a fundamental parabolic subgroup of G(ℝ) with Langlands decomposition ^0MAN and σ is a discrete series representation of ^0M, whose infinitesimal character is determined by the highest weight of V:
χ_σ = χ_{-s(ρ + λ(V))|_𝔟}
(see <cit.>) where λ(V) is the highest weight of V, 𝔟 is a compact Cartan subalgebra of Lie(^0M), and s is a (uniquely determined) element of the Weyl group of length ℓ(s) = dim N/2. Moreover, the relative Lie algebra cohomology
H^*(𝔤,𝔨; π_∞ ⊗ V_ℂ)
is determined explicitly in the two theorems quoted. We let
q(G) = (dim G(ℝ) − dim K^c_∞)/2; ℓ_0(G) = rank(G) − rank(K^c_∞)
q_0(G) = q(G) − ℓ_0(G)/2.
The invariant ℓ_0 coincides with dim A where A is the split part of the maximally split torus in G(ℝ). Then q_0(G) is an integer (cf. <cit.>) and we have
H^{q_0(G)+j}(𝔤,K^c_∞; π_∞ ⊗ V_ℂ) ≅ H^{q(^0M)}(^0𝔪,K_P; σ ⊗ W_{s(λ+ρ)−ρ}) ⊗ ∧^j(𝔞^*)
Here 𝔞^* is the linear dual of the Lie algebra 𝔞 of the split component A, W_{s(λ+ρ)−ρ} is the irreducible finite-dimensional representation of ^0M with the indicated highest weight, and K_P is the maximal compact subgroup K^c_∞ ∩ P(ℝ) = K^c_∞ ∩ ^0M(ℝ). Moreover, ^0M is a group with discrete series and q(^0M) is half the dimension of its associated symmetric space.
In particular, since ℓ_0(G) = dim A, we have
H^q(𝔤,K^c_∞; π_∞ ⊗ V_ℂ) ≠ 0 ⇔ q ∈ [q_0(G), q_0(G) + ℓ_0(G)]
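Two worked examples of these invariants (included only as a numerical illustration): for G = SL_2/ℚ we have dim G(ℝ) = 3 and K^c_∞ = SO(2), so
q(G) = (3 − 1)/2 = 1, ℓ_0(G) = 1 − 1 = 0, q_0(G) = 1,
and tempered cuspidal cohomology is concentrated in degree 1 (modular curves); for G = Res_{F/ℚ}SL_2 with F imaginary quadratic, G(ℝ) = SL_2(ℂ) and K^c_∞ = SU(2), so
q(G) = (6 − 3)/2 = 3/2, ℓ_0(G) = 2 − 1 = 1, q_0(G) = 1,
and tempered cohomology occupies the two degrees [1, 2] (Bianchi 3-folds).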
§.§.§ The exterior algebra action
A basic fact about discrete series, due to Schmid, is that the space H^{q(^0M)}(^0𝔪,K_P; σ ⊗ W_{s(λ+ρ)−ρ}) that appears in (<ref>) is one-dimensional. It follows that H^{q_0(G)+*}(𝔤,K^c_∞; π_∞ ⊗ V_ℂ) is naturally a free rank one differential graded module over the exterior algebra ∧^∙(𝔞^*).
Combining this with (<ref>) we obtain the following global corollary, which is the starting point of Venkatesh's
motivic conjectures.
Suppose π_f has the property that every π_∞∈ [π_f]_∞ is tempered.
Then
H^{q_0(G)+∙}_0(S(G,X),Ṽ_ℂ)[π_f]
is a free differential graded module over ∧^∙(𝔞^*) of rank
m_0(π_f) = ∑_π_∞∈ [π_f]_∞ m_0(π).
The starting point of Venkatesh's motivic conjectures is the observation that the dimension ℓ_0 of 𝔞^* coincides with the rank of a motivic cohomology group hypothetically attached to π_f. This is explained clearly in <cit.>, and we recall the explanation here. To a pair consisting of a cohomological cuspidal automorphic representation π of G and a finite-dimensional representation τ of the Langlands L-group ^LG, the Langlands correspondence hypothetically attaches a collection of motives M(τ)_π, obtained by composing τ with the parameter of π viewed as a homomorphism of a motivic version of the absolute Galois group of ℚ to the algebraic group ^LG. For the purposes of this and the following sections, we only need the case of the dual adjoint representation τ = Ad^*; then we write M_π = M(Ad^*)_π. In general the motives have coefficients in a number field E_π, which we ignore temporarily.
Attached to the hypothetical motive is the group H^1_{mot}(ℚ, M_π(1)) of motivic cohomology. This is supposed to be a direct summand of a certain K-theory group tensored with ℚ, and is conjecturally a finite-dimensional ℚ-vector space (more generally an E_π-vector space); we denote this space v_π (pronounced “Va,” in honor of Venkatesh). While
both M_π and its motivic cohomology are inaccessible, if M_π is an actual piece of the cohomology of a smooth projective algebraic variety over ℚ cut out by correspondences, then there are regulator maps comparing v_π ⊗ C and certain spaces of cohomological realizations, where C runs through possible fields of coefficients. The key point is[Normally one defines v_π as the space of classes that extend to an appropriate integral model. In the applications Venkatesh explains how this condition can be ignored. See for example <cit.>.]
(i) The space v_π is of dimension ℓ_0.
(ii) The Beilinson regulator defines an isomorphism
v_π ⊗_ℚ ℝ ≅ H^1_𝒟(M_{π,HdR}, ℝ(1)),
where H^1_𝒟(∗, ℝ(i)) is Deligne cohomology and M_{π,HdR} denotes the Hodge-de Rham realization.
(iii) For any prime p at which M_π has good reduction, the p-adic regulator defines an isomorphism
v_π ⊗_ℚ ℚ_p ≅ H^1_f(ℚ, M_π(1)),
where H^1_f is the Bloch-Kato Selmer group.
Each conjecture also asserts that the space on the right-hand side in the isomorphism has dimension ℓ_0. This can often be proved or deduced from other conjectures. Most importantly for our purposes, these conjectures provide rational structures on the exterior algebra ∧^∙(𝔞^*) that can be compared by means of (<ref>) to the rational structure on the cohomology space
H^*_0(S(G,X),Ṽ_ℂ)[π_f], induced
by topological cohomology. Specifically, we have
[<cit.>, (5.1.1)] Conjecture <ref> (i) and (ii) give rise to a canonical isomorphism
v_π ⊗_ℚ ℝ ≅ 𝔞^*.
Thus Conjecture <ref> endows 𝔞^* with a canonical ℚ-rational structure.
The conjectures of <cit.> all refer back to this ℚ-rational structure, and in particular to the following conjecture.
Since π_f is realized in cohomology, it has a model over a number field E(π_f).[If G = GL(n) it follows from the multiplicity one theorem that this can be identified with the field of rationality of π_f, that is the fixed field in Gal(ℚ̄/ℚ) of the subgroup that fixes the isomorphism class of π_f. In general E(π_f) may be slightly larger.] The π_f-isotypic component of H^*_0(S(G,X),Ṽ_ℂ) is then a direct summand of the E(π_f)-rational cohomology H^*_0(S(G,X),Ṽ_{E(π_f)}). We state a slight generalization of the main conjecture of Prasanna and Venkatesh:
The action on H^*_0(S(G,X),Ṽ_ℂ)[π_f] of ∧^∙ v_π preserves the E(π_f)-rational structure.
We describe the actions of the p-adic realizations of v_π in greater detail in <ref>.
§.§ Coherent cohomology of Shimura varieties
We return to the notation of <ref> but now we suppose that X has a G(ℝ)-invariant hermitian structure. We assume throughout that Z_G is trivial; this is certainly unnecessary but it simplifies the exposition. We define
𝔮 = 𝔮_h = 𝔨 ⊕ 𝔭^-,
in the notation of (<ref>). This is a maximal parabolic subalgebra with unipotent radical 𝔭^-. The locally symmetric space S(G,X) is then a Shimura variety, and has a canonical model over a number field E(G,X) that is preserved under the right action of G(𝐀_f). This induces a canonical model on the individual finite-level quotients S_{K_f}(G,X).
on the individual finite-level quotients S_K_f(G,X).
Let τ: K_∞ → GL(W_τ) be an irreducible representation, and define the holomorphic vector bundle
ℰ_τ = \varprojlim_{K_f} G(ℚ)\G(𝐀) × W_τ/K_∞ × K_f
where now G(ℚ) acts on G(𝐀) on the left, K_f acts on G(𝐀_f) on the right, and K_∞ acts on G(ℝ) on the right and on W_τ on the left. If K_f is sufficiently small the bundle ℰ_τ descends to a vector bundle, also denoted ℰ_τ, on S_{K_f}(G,X), which has a canonical model over a finite extension E(G,X,τ) of E(G,X).
In practice we fix a level subgroup K_f such that S_K_f(G,X) has a canonical integral model, as in the work of Kisin, Shin, and Zhu <cit.>; this
presupposes that S(G,X) is a Shimura variety of abelian type, which is closely related to moduli of abelian varieties with additional structure.
Suppose for the moment that G is anisotropic over ℚ, so that S(G,X) is (pro-)projective. The cohomology of S(G,X) with coefficients in (the coherent sheaf attached to) the vector bundle ℰ_τ then admits an expression in terms of differential forms (Dolbeault cohomology), which in turn can be written in terms of relative Lie algebra cohomology, analogously to Matsushima's formula:
H^*(S(G,X),ℰ_τ) ≅ H^*(𝔮,𝔨; 𝒜(G) ⊗ W_τ) ≅ ⊕_π m(π) H^*(𝔮,𝔨; π_∞ ⊗ W_τ) ⊗ π_f.
(Again, one should replace (𝔮,𝔨)-cohomology by (𝔮,K_h)-cohomology, where K_h ⊃ K_∞ is the stabilizer of h in G(ℝ) and is not necessarily connected.)
When S(G,X) is not projective, one needs to replace the left-hand side of (<ref>) by H^*(S(G,X)_Σ, ℰ_τ^{can}), where S(G,X)_Σ is a well-chosen (smooth projective) toroidal compactification and ℰ_τ^{can} is Mumford's (or Deligne's) canonical extension of ℰ_τ to a vector bundle over S(G,X)_Σ; the result is independent of the choice of Σ.
The analogue of the cuspidal cohomology (<ref>)
H^*_0(S(G,X),ℰ_τ) ≅ ⊕_{π ⊂ 𝒜_0(G)} m(π) H^*(𝔮,𝔨; π_∞ ⊗ W_τ) ⊗ π_f
was studied in <cit.>, where in most cases the left hand side is defined algebraically in terms of two distinct extensions of ℰ_τ to S(G,X)_Σ. The analogue of Franke's theorem (<ref>) holds for coherent cohomology and was proved recently in the thesis of Jun Su <cit.>.
Suppose the infinitesimal character of τ, as a representation of the enveloping algebra of 𝔨, restricts on the enveloping algebra of 𝔤 to the infinitesimal character of a finite-dimensional irreducible representation. Then Schmid's theorem asserts that there is a unique π_∞ and a unique q(τ) ∈ {0, 1, …, dim_ℂ X} such that
H^{q(τ)}(𝔮,𝔨; π_∞ ⊗ W_τ) ≠ 0; moreover, the cohomology space is then of dimension 1, and π_∞ is a discrete series. This theory is described at length in <cit.>. In particular, the global coherent cohomology H^q_0(S(G,X),ℰ_τ) is non-trivial only when q = q(τ).
The version of Venkatesh's motivic conjectures for coherent cohomology, to be discussed in <ref>, concerns τ for which H^*_0(S(G,X),ℰ_τ) has cohomology in several degrees, and the corresponding π_∞ belong to the nondegenerate limits of discrete series (NLDS). It is shown in <cit.>, following work of Floyd Williams, that any given NLDS π_∞ contributes to H^q(𝔮,𝔨; π_∞ ⊗ W_τ) for a unique q. Let P(τ) denote the set of NLDS σ such that H^q(𝔮,𝔨; σ ⊗ W_τ) ≠ 0, and define
Π(τ) = ⊕_{σ ∈ P(τ)} σ.
For the purposes of this definition, it is most convenient to treat the different components of the restriction of any NLDS of G(ℝ) to its identity component as separate representations σ.
Is there an abelian Lie algebra 𝔞 ⊂ 𝔤 such that
H^*(𝔮,𝔨; Π(τ) ⊗ W_τ)
is a free module over ∧^∙(𝔞^*)?
The answer is affirmative in the admittedly unrepresentative case where G = GL(2), but the question does not seem to have been explored more generally.
§.§ Action of q-adic derived Hecke algebras on cohomology
We return to the topological setting of <ref>, and specifically the Prasanna-Venkatesh Conjecture <ref>. Thus let
π_f be as in that section; in particular, we assume all π_∞ ∈ [π_f]_∞ are tempered. We assume E(π_f) = ℚ; this hypothesis can easily be relaxed, at the cost of complicating the notation. We similarly assume throughout that the representation V is defined over ℚ, and write Ṽ for the corresponding local system with coefficients ℚ.
Let p be a prime number such that the local component π_p is unramified. We fix a level subgroup K_0 = ∏_v K_v ⊂ G(𝐀_f), with K_p hyperspecial maximal, such that π_f^{K_0} ≠ 0. We will be working with the p-adic cohomology of the locally symmetric space S_{K_0}(G,X), with coefficients in Ṽ ⊗ ℚ_p. More precisely, we choose a ℤ_p-local system Λ_V ⊂ Ṽ ⊗ ℚ_p, and define the Hecke algebra 𝕋_{K_0}, as in <cit.>, to be the ℤ_p-algebra generated by Hecke operators at good primes, acting as endomorphisms of the chain complex of Λ_V, viewed as an object in the derived category. Then π_f determines a maximal ideal 𝔪 = 𝔪_π ⊂ 𝕋_{K_0}, which we reserve for later.
Write S_{K_0} = S_{K_0}(G,X). For any r = 1, 2, … we let
Λ_{V,r} = Λ_V/p^rΛ_V, A_r = ℤ/p^r.
We consider the object
RΓ(S_{K_0}, Λ_{V,r})
as a module over 𝕋_{K_0}.
Let q be a prime such that K_q is hyperspecial maximal compact. Consider the family 𝒰_q of open compact subgroups U ⊂ K_0 that contain ∏_{v ≠ q} K_v, and for each U ∈ 𝒰_q, the chain complex C^∙(S_U(G,X),Λ_{V,r}) as an object in the derived category of A_r-modules. Then G_q = G(ℚ_q) acts on
C_q^∙(A_r) = \varinjlim_{U ∈ 𝒰_q} C^∙(S_U(G,X),Λ_{V,r}).
Let U_q be the universal A_r[G_q]-module U_{A_r}(K_q). Then just as in <ref>, Frobenius reciprocity identifies
RHom_{A_r[K_q]}(A_r, C_q^∙(A_r)) ≅ RHom_{A_r[G_q]}(U_{A_r}(K_q), C_q^∙(A_r))
and thus by Fact <ref> we have an action of the derived Hecke algebra H(G_q,K_q;A_r) on H^*(K_q, C_q^∙(A_r)).
By the discussion in <cit.>, we can identify the hypercohomology H^*(K_q, C_q^∙(A_r)) with H^*(S_{K_0},Λ_{V,r}). Thus formally we have:
For any r and any q, there is a canonical action of the derived Hecke algebra H(G_q,K_q;A_r) on H^*(S_{K_0},Λ_{V,r}).
The action in Proposition <ref> is only of interest when _A_r(G_q,K_q) is non-trivial. Thus we assume q ≡ 1 (mod p^r). For such q we have seen in Theorem <ref> that
^*(G_q,K_q; A_r) ≅ ^*(T(_q), T(_q); A_r)^W ≅ ( A_r[X^*(T)] ⊗_A_r H^*(T_q; A_r) )^W
where W denotes the Weyl group of G and we have adapted the notation for the algebra on the right hand side.
When q ≡ 1 (mod p^r),
the action of an individual derived Hecke operator in ^1_A_r(G_q,K_q) on cohomology modulo p^r
can be made explicit in terms of the mod p^r characteristic classes of cyclic unramified topological coverings of degree p^r
in the passage from Iwahori level to pro-q Iwahori level at q.
They are thus seen to be derived versions of the diamond operators that play a central role in the Taylor-Wiles method.
This is explained in greater detail in <ref> in the setting of coherent cohomology of modular curves. The topological version was developed in <cit.> and is completely analogous.
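The congruence condition on the auxiliary primes q is easy to probe numerically. The following is a minimal sketch (the helper names are ours, not from the sources cited above) listing the primes q ≡ 1 (mod p^r) below a bound, i.e., the primes at which the mod p^r derived Hecke algebra is non-trivial.

```python
# Illustrative search for primes q ≡ 1 (mod p^r); at such q the mod p^r
# derived Hecke algebra has non-trivial higher-degree part. Names are ours.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def congruence_primes(p, r, bound):
    m = p ** r
    return [q for q in range(2, bound) if q % m == 1 and is_prime(q)]

print(congruence_primes(3, 2, 500))   # primes q ≡ 1 (mod 9) below 500:
# [19, 37, 73, 109, 127, 163, 181, 199, 271, 307, 379, 397, 433, 487]
```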
Venkatesh explains in <cit.> how to take an increasing union of sets of q as above, while passing to the limit on r, in order to define various versions of a global p-adic derived Hecke algebra that acts on H^*(S_K_0,Λ_V). We choose the strict global derived Hecke algebra, denoted ' in <cit.>, and assume p is prime to the order of the Weyl group. Then ' is graded commutative and contains underived Hecke operators at good primes other than p. These generate the usual (underived) Hecke algebra _K_0⊂'. Moreover, the action of ' preserves
the localization H^*(S_K_0,Λ_V)_ = H^*(S_K_0,Λ_V)__π at the maximal ideal of _K_0 we have been saving for this moment.
The first main result of <cit.> is
<cit.> Under the assumptions listed below, H^*(S_K_0,Λ_V)_ is generated as a graded module over ' by its minimal degree component H^q_0(G)(S_K_0,Λ_V)_.
The proof is an adaptation of the ideas of Calegari-Geraghty, in the version developed by Khare and Thorne,
to the language of derived structures. It depends on a list of technical assumptions <cit.> and one substantial assumption <cit.>. The most significant of the technical assumptions is that the -localized homology (homology, not cohomology) H_*(S_K_0,Λ_V)_ is concentrated in the interval
[q_0(G),q_0(G)+ℓ_0(G)] that we have already seen in the calculation of relative Lie algebra cohomology of tempered representations. Related to this is the assumption that H_*(S_K_0,Λ_V) is torsion-free. Finally, G is assumed to be a split -group; this appears to rule out generalizations to simple groups over totally real fields, for example, though the analogous theorem must hold for such groups as well.
The more serious assumption is that π has an associated p-adic Galois parameter. This is a continuous homomorphism
ρ_π: (/) (_p),
where is the Langlands dual group to G, whose Frobenius conjugacy class at the unramified prime q corresponds to the Satake parameter of π_q, and whose restriction to a decomposition group at p is crystalline in an appropriate sense. The existence of such a ρ_π is known when G is the general linear group over
a totally real or CM field, under our assumption that π contributes to cohomology of some S_K_0.
Venkatesh then makes assumptions on the image of the reduction ρ_π mod p that are familiar from the Taylor-Wiles theory, in particular that the image of
ρ_π is not contained in a proper parabolic subgroup of and is not too small.
Strictly speaking, Venkatesh only proves Theorem <ref> when V is the trivial representation. Moreover, he assumes that π is congruent to no other representations at level K_0, and indeed that
_K_0,_p. As far as we know, no one has written down a proof without these hypotheses, which presumably would require additional restrictions on p.
The second main result is the construction of an action of the p-adic realization of motivic cohomology, as in Conjecture <ref> (iii), on H^*(S_K_0,Λ_V)_ as endomorphisms increasing the degree by 1. This will be discussed in the following section, which is devoted to Venkatesh's main conjectures regarding this action and to generalizations in other settings.
§.§ Proof sketch of Theorem <ref>
We will try to indicate the main ideas behind the proof of Theorem <ref>. It uses the Calegari-Geraghty method explained in <ref>. For a set Q_n of Taylor-Wiles primes, we denote by S_0(Q_n) the covers of S_K_0 obtained by adding Γ_0(q) level structure for all q ∈ Q_n, and by S_1(Q_n) the pro-p subcover of the cover of S_0(Q_n) obtained by adding Γ_1(q) structure for all q ∈ Q_n. The homology complex _n from <ref> is the shift by q_0 of C_*(S_1(Q_n))_n_n, where n_n is any choice of maximal ideal of the Hecke algebra for S_1(Q_n) lying over the maximal ideal m of _K_0. In <ref> we patched these _n into an _∞ and analyzed its structure over a patched ring of diamond operators _∞. A key point was that _∞ is quasi-isomorphic to a module M_∞ concentrated in a single degree. This yields a description of the original homology chains as
_0 = M_∞ ⊗__∞ _0
and the original cohomology cochains as
__∞(M_∞, _0).
Furthermore, one gets (under the Conjectures needed to make the Calegari-Geraghty method work) that M_∞ is generically a free module over its support _∞, which is a quotient of _∞ by a regular sequence. This gives enough information to compute the action of __∞^*(_0, _0) on __∞^*(M_∞, _0) explicitly via Koszul complexes, and one sees (under the assumptions of <cit.>) that this action generates __∞^*(M_∞, _0) over __∞^0(M_∞, _0).
Next, Venkatesh argues that the derived Hecke actions along the same Taylor-Wiles primes “patch” to the action of __∞^*(_0, _0) on __∞^*(M_∞, _0). Let us examine this statement more closely. The ring _∞ is patched from rings _Q_n consisting of diamond operators at primes in the Taylor-Wiles system Q_n. Informally speaking, the action of __∞^*(_0, _0) on __∞^*(M_∞, _0) is patched from the action of __Q_n^*(_0, _0) on __Q_n^*(M_n, _0). Abstractly _Q_n is the tensor product over W(k) of the group rings of T_q, the maximal pro-p quotient of T(_q), as q ranges over Q_n. Hence ^*__Q_n(_0, _0) ≅⊗_q ∈ Q_n H^*(T_q, W(k)). Now the claim is that this action on _n is essentially the derived Hecke action.
We formulate the latter point precisely. For q ∈ Q_n, the ideal m induces an ideal m_q of H(G(_q), G(_q); /p^n ). Part of the definition of a Taylor-Wiles prime is that q is strongly regular, which means that under the Satake isomorphism H(G(_q), G(_q) ; /p^n ) ≅ H(T(_q), T(_q); /p^n )^W, there are exactly |W| maximal ideals of H(T(_q), T(_q); /p^n ) lying over m_q, and for any such choice of maximal ideal the induced map on completions is an isomorphism. In particular, any such choice induces an isomorphism
^1(G(_q), G(_q); /p^n )_m_q ≅ H^1(T_q; /p^n ).
Furthermore, the map S_0(q) → S_K_0 induces H^*(S_K_0) → H^*(S_0(q)), and then an isomorphism H^*(S_K_0)_m ≅ H^*(S_0(q))_n, where n is a maximal ideal lying over m∈_K_0 and over n_q. Now, H^*(S_0(q)) has an action of the group cohomology H^*(T_q) because there is a finite étale T_q-cover S_1(q) → S_0(q), and we can view H^*(S_0(q)) as the T_q-equivariant cohomology H^*_T_q(S_0(q)). It turns out that the action of ^1(G(_q), G(_q))_m on H^*(S_K_0)_m is intertwined with this action of H^1(T_q) on H^*(S_0(q))_n.
§ VENKATESH'S MOTIVIC CONJECTURES
§.§ The action of motivic cohomology
We retain the assumptions of <ref>. Then Venkatesh's Theorem <ref> tells us that the
ℓ-adic cohomology of the locally symmetric space, localized at the maximal ideal corresponding to π,
is cyclic over the strict global derived Hecke algebra (DHA) . On the other hand, the analytic theory tells us that, after tensoring
with , it is cyclic over the exterior algebra on a vector space of dimension ℓ_0(G). This invariant plays no role in the definition of the strict global DHA but is linked to motivic cohomology by the Bloch-Kato Conjecture. The relation
between the two appearances of ℓ_0(G) is provided by a canonical map from the dual adjoint Selmer group on the
right hand side of Conjecture <ref> (iii) to the degree 1 part of the strict global DHA that is characterized by a local reciprocity law.
The local reciprocity law is nevertheless stated in a global context. We realize π and the Galois
representation in (<ref>) over the ring of integers in
a p-adic field, with residue field k.
Choose a Taylor-Wiles system for π, a set Q_r = (q_1,…,q_m) of primes q_i ≡ 1 (mod p^r), as in <cit.>, to which we refer for the relevant properties.[See also the chapters of Caraiani-Shin and Emerton-Gee-Hellmann in these proceedings. In practice we can work over a base field other than , of course, and the q_i will then be prime ideals.] We write ^*_q,A_r for
^*(G_q,K_q; A_r).
In particular π is unramified at each q ∈ Q_r and the Satake parameters of each π_q, q ∈ Q_r, are strongly regular. This means that if _q ⊂(T(_q),T(_q); A_r)^W is the maximal ideal induced by the maximal ideal _π, then the fiber over _q in
((T(_q),T(_q); A_r) = A_r[X^*(T^∨)] = A_r[X_*(T)])
is a set of |W| maximal ideals. We choose one maximal ideal in (^0(T(_q),T(_q);A_r)) for each q; this corresponds to choosing an element of the Langlands dual torus T^∨(k) that is contained in the conjugacy class of ρ_π(_q); here ρ denotes the reduction of
ρ_π in G^∨(k). Since W acts on both factors in the tensor product in (<ref>),
this choice together with the derived Satake isomorphism allows us to define a canonical injection <cit.>
ι_q,r: H^1(T_q; A_r) ↪^1_q,A_r
We can define a -lattice in the
dual adjoint Selmer group H^1_f(, M_π(1)). With Conjecture <ref> in mind, we let _π,p^* denote the -dual of this lattice.
Under the running assumptions, Venkatesh proves in <cit.> that _π,p is free of rank ℓ_0(G)
over . Note that global duality identifies
_π,p^* with a lattice in (the space of obstructions) H^2_f(,Ad(ρ_π)).
There is a second map
f_q,r: H^1(T_q,A_r) →^*_π,p/p^r
defined by local duality of Galois cohomology (see p. 78 of <cit.>).
Venkatesh's second theorem, after Theorem <ref>, is then a more precise version of the following:
<cit.> There is an action of ^*_π,p on H^*(S_K_0,Λ_V)_ by endomorphisms of degree +1. This action is uniquely characterized by the property that, for any r ≥ 1 and any Taylor-Wiles prime q ∈ Q_a(r) for sufficiently large a(r), the two actions of H^1(T_q,A_r) on H^*(S_K_0,Λ_V)_ – by derived Hecke operators (via ι_q,r) or by the action of ^*_π,p (via f_q,r) – coincide.
In particular, there is an embedding of the exterior algebra ∧^∙^*_π,p in the strict global DHA ' of
endomorphisms of H^*(S_K_0,Λ_V)_.
The proof of Theorem <ref> is an intricate computation in <cit.>. Another perspective is given by <cit.>. There, the author constructs an object called the spectral Hecke algebra; it is a derived Hecke algebra that acts on the spectral (i.e., Galois) side of the Langlands correspondence. The construction is motivated by Geometric Langlands, but we will not explain that motivation here; we refer to <cit.> for details.
The usual spherical Hecke algebra for G is the space of functions on G(_q)\ G(_q) / G(_q), viewed as a double coset set. It is natural to view this double coset space as a groupoid, i.e. remembering automorphisms. Then it has a non-trivial topological structure, and taking cohomology (instead of just functions) gives the derived Hecke algebra. For G = _n, the groupoid may be interpreted as the space of tuples (_1, _2, τ) where _1, _2 are rank n vector bundles on _q and τ is an identification of their restrictions to _q.
To make the spectral Hecke algebra, we instead consider the space of tuples (F_1, F_2, τ) where F_1, F_2 are rank n local systems on _q and τ is an identification of their restrictions to _q. Although the language of the two situations is parallel, the geometry is quite different. In the second case, the construction is only interesting if performed in a derived sense, because the space of rank n local systems on _q embeds in the space of rank n local systems on _q (so that the information of F_2 is redundant). We call this derived space the “spectral Hecke stack”, and its functions are essentially the spectral Hecke algebra of <cit.>.
There is a co-action of the spectral Hecke algebra on the derived Galois deformation ring, and <cit.> computes that it is dual to Venkatesh's reciprocity law for the action of the derived Hecke algebra; furthermore <cit.> explains that there is a co-multiplication on the spectral Hecke algebra making it dual to the derived Hecke algebra. Then <cit.> gives a conceptual interpretation of Venkatesh's reciprocity law as a “derived local-global compatibility” between the actions of the derived Hecke algebra and the spectral Hecke algebra.
Now we tensor the motivic cohomology and the cohomology of S_K_0 with _p and assume the Beilinson-Bloch-Kato Conjecture <ref> (iii). We let ^*_π,E(π_f)⊂^*_π,p⊗_p be the space of classes whose pairing with motivic cohomology lies in E(π_f). Let H^*(S(G,X),Ṽ)_π denote the π_f-isotypic subspace
of H^*(S(G,X),Ṽ).
The p-adic version of Venkatesh's motivic conjecture is
<cit.> Under the action of ∧^∙^*_π,p on
[H^*(S(G,X),Ṽ)_π]^K_0⊗_p ⊂ H^*(S_K_0,Λ_V)_⊗_p
defined by Theorem <ref>, the action of ^*_π,E(π_f)
preserves the E(π_f)-rational structure induced from H^*(S(G,X),Λ_V)_π.
The strongest evidence for this conjecture, apart from the results of <cit.> on a similar conjecture
for coherent cohomology of modular curves (see <ref>), is the substantial evidence provided in <cit.> for the analogous conjecture for complex cohomology. In that conjecture the p-adic regulator with values in the Bloch-Kato Selmer group is replaced by the
Beilinson regulator with values in Deligne cohomology, as in Conjecture <ref> (ii). Since derived structures are
not required for the statement of the Prasanna-Venkatesh conjecture, we say no more about it here.
§.§ The action of the derived deformation ring
After constructing the derived deformation ring in <cit.>, Galatius and Venkatesh use it to give an interpretation in derived geometry of the method of Calegari-Geraghty.
The cuspidal cohomological representation π, the localized cohomology H^*(S_K_0,Λ_V)_, the Galois parameter ρ_π, its reduction ρ_π modulo p, and the dual adjoint Selmer group H^1_f(, M_π(1)) are as in <ref> and <ref>.
Let S be the set of places where π and K_0 are ramified, and assume p ∉ S.
Then we can define the crystalline derived deformation ring ^ρ, __F[1/S], with unrestricted ramification at S. [The assumptions of <cit.> already admitted in the previous section remain in force in <cit.>; these include the hypothesis that the local unrestricted deformation spaces at primes in S are well-behaved.]
The Galatius-Venkatesh interpretation of the Calegari-Geraghty method is formulated in terms of homology rather than
cohomology of S_K_0. To avoid complications in the duality between homology and cohomology
we assume V is the trivial representation, so Λ_V is just
the constant local system ; <cit.> does not treat the more general case. The graded homology group H_*(S_K_0, )_m is H_*(_0)[q_0] in the notation of <ref>. The proof of the following theorem, the main result of <cit.>, was sketched earlier:
<cit.> Under the hypotheses of the previous sections the localized homology
H_*(S_K_0,)_ carries the structure of a free graded module over the graded ring π_*(^ρ, __F[1/S]).
§.§.§ Duality of the derived Hecke and derived deformation actions
The article <cit.> does not contain any new formulations of the motivic conjectures, unlike the article <cit.> devoted
to the derived Hecke action. However, when the two actions can be defined they are related canonically.
The main point is the determination of π_1(^ρ, __F[1/S]) <cit.>: in our notation, we have
<cit.> There is an isomorphism
π_1(^ρ, __F[1/S]) ≅ _π,p
that induces an isomorphism of graded algebras
π_*(^ρ, __F[1/S]) ≅ ∧^* _π,p
The relation between the two actions cannot be completely straightforward, because the derived deformation ring acts on homology while the (strict global) derived Hecke algebra acts on cohomology.
Under the duality between cohomology and homology, the action of ∧^∙^*_π,p
on H^*(S_K_0,)_ defined in Theorem <ref> becomes an adjoint action on
H_*(S_K_0,)_; ^*_π,p acts as operators of degree -1. The compatibility between
the two actions takes the following form:
<cit.> The DHA action of ∧^∙^*_π,p and the derived deformation ring
action of π_* (^ρ, __F[1/S]) are compatible in the following sense. Under the identification
π_*(^ρ, __F[1/S]) ≅ ∧^* _π,p
of Lemma <ref>, let v ∈_π,p ≅ π_1 (^ρ, __F[1/S]), v^* ∈^*_π,p, and m ∈ H_*(S_K_0,)_. Then
v^*· v· m + v · v^*· m = ⟨ v,v^* ⟩ m
where ⟨ , ⟩ is the natural -valued pairing between _π,p and ^*_π,p.
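The displayed relation is the familiar Clifford-type identity satisfied by wedge and contraction operators on an exterior algebra, and it can be sanity-checked in a toy model. The sketch below is entirely our own (it computes nothing about H_*(S_K_0,)_ itself): it verifies the identity on the rank-2 exterior algebra with v acting by wedging with a basis vector e_0 and v^* by contraction with the dual vector e_0^*, so that ⟨ v,v^* ⟩ = 1.

```python
# Toy check of v*·v·m + v·v*·m = <v, v*>·m on an exterior algebra: take
# v = (e_0 ∧ -) and v* = contraction with e_0*, so <v, v*> = 1.
from itertools import combinations

def wedge(i, elt):       # e_i ∧ (-), with the usual Koszul sign
    out = {}
    for S, c in elt.items():
        if i in S:
            continue
        sign = (-1) ** sum(1 for j in S if j < i)
        T = S | {i}
        out[T] = out.get(T, 0) + sign * c
    return out

def contract(i, elt):    # the odd derivation dual to wedging with e_i
    out = {}
    for S, c in elt.items():
        if i not in S:
            continue
        sign = (-1) ** sum(1 for j in S if j < i)
        T = S - {i}
        out[T] = out.get(T, 0) + sign * c
    return out

def add(a, b):
    out = dict(a)
    for S, c in b.items():
        out[S] = out.get(S, 0) + c
    return {S: c for S, c in out.items() if c}

basis = [frozenset(S) for k in range(3) for S in combinations(range(2), k)]
for S in basis:
    x = {S: 1}
    lhs = add(contract(0, wedge(0, x)), wedge(0, contract(0, x)))
    assert lhs == x      # v*·v + v·v* acts as <v, v*> = 1
print("relation verified on a basis of the rank-2 exterior algebra")
```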
§.§ Coherent cohomology
§.§.§ The derived Hecke action
In this section we use the classical notation for modular curves. If d > 1 is an integer we let X_0(d) and
X_1(d) be the familiar modular curves over [1/d] with level structure of type Γ_0(d)
and Γ_1(d) respectively. We think of X_?(d), ? = 0, 1, as the quotient
Γ_?(d)\ℌ; ℌ = (2,)^0/(2)·^×,
where (2,)^0 is the identity component of (2,).
The line bundle over X_?(d), ? = 0, 1 whose sections are modular forms
of weight 1 is denoted ω; the isotropy representation of (2) on the fiber of ω at
its fixed point h ∈ℌ is denoted ω_h.
The bundle ω is the only automorphic line bundle with non-trivial cuspidal
coherent cohomology, in the sense of (<ref>), in more than one degree – in other words in degrees
0 and 1. Moreover, H^0_0(X_?(d),ω) and H^1_0(X_?(d),ω) are isomorphic
as modules over the classical Hecke algebra, which we denote _?d; this is the algebra generated by
standard Hecke operators at all primes not dividing d. This remains true for modular curves attached to all congruence
subgroups: there are two representations (Harish-Chandra modules) of the identity component GL(2,)^0,
say π_0 and π_1, such that
dim H^i(,;π_i⊗ω_h) = 1, i = 0, 1
in the notation of (<ref>). The representations π_0 and π_1 are holomorphic and antiholomorphic limits of discrete series, respectively, and the direct sum I(0) := π_0 ⊕π_1 has a structure of irreducible
(((2)), O(2)·^×)-module whose restriction to (((2)),(2)·^×) is the sum of the two Harish-Chandra modules for (2,)^0. The notation I(0) is meant to suggest (accurately) that, up to a harmless twist by a power of the absolute value of the determinant, it is obtained by parabolic induction from a unitary character of the Borel subgroup.
For any prime q we let G_q = (2,_q), K_q = (2,_q). Suppose q is relatively prime to d and is congruent to 1 (mod p^r) for some r > 0. We let X_?(d)_r denote the base change of X_?(d) to (/p^r) and
define ω_r analogously.
One defines in <cit.> an action of the derived Hecke algebra (G_q,K_q; /p^r )
on H^*(X_?(d)_r,ω_r). The action can be defined formally as in <ref>, but it
also has a concrete definition in terms of cyclic coverings. We let X_0?(qd) denote the modular curve corresponding
to the congruence subgroup Γ_?(d)∩Γ_0(q), and define X_1?(qd) similarly. As long as q ≥ 5 the natural map
X_1?(qd) → X_0?(qd)
factors through an étale cover X_Δ(qd)/X_0?(qd) that is cyclic of degree p^r. This cover defines a class
𝔖 ∈ H^1(X_0?(qd)_r; C_r)
that is called the Shimura class. Here C_r is the cyclic quotient of (/q)^* of degree p^r. This can be identified
with /p^r but not canonically; we choose an isomorphism (discrete logarithm)
log: C_r → /p^r.
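For small q this choice can be made completely explicit. A minimal sketch (the function names are ours): pick a generator of (/q)^×; the class of x in C_r is its exponent mod p^r, which is well defined on the quotient by p^r-th powers.

```python
# A choice of discrete logarithm log: C_r -> Z/p^r for small q ≡ 1 (mod p^r):
# C_r is (Z/q)^* modulo p^r-th powers, and exponents mod p^r descend to it.
def generator_mod_q(q):
    for g in range(2, q):
        if len({pow(g, k, q) for k in range(1, q)}) == q - 1:
            return g

def discrete_log(x, q, p, r):
    g = generator_mod_q(q)
    e = next(k for k in range(q - 1) if pow(g, k, q) == x % q)
    return e % (p ** r)

q, p, r = 19, 3, 2         # q - 1 = 18 is divisible by p^r = 9
print([discrete_log(x, q, p, r) for x in range(1, q)])
```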
On the other hand, there are two maps π_i: X_0?(qd) → X_?(d), i = 1, 2. If we think of X_0?(qd) as the moduli space of triples
(E,C,ϕ) where E is a (generalized) elliptic curve, C is a cyclic subgroup of order q, and ϕ is
some level structure at d, then π_1 is the map that forgets C and π_2 is the map that replaces E by E/C.
(i) The composition
H^0_0(X_?(d)_r,ω_r) π_1^*→ H^0_0(X_0?(qd)_r,ω_r) ∪𝔖→ H^1_0(X_0?(qd)_r,ω_r)
π_2,*→ H^1_0(X_?(d)_r,ω_r),
where the middle arrow identifies ω_r⊗ C_r with ω_r by means of the discrete logarithm,
coincides with the action of a specific non-trivial element T^1_q ∈^1(G_q,K_q).
(ii) The action of ^1(G_q,K_q; /p^r) on H^*(X_?(d)_r,ω_r) commutes with the action of the classical
Hecke operators at primes not dividing qd.
Part (i) is equation (3-1) in <cit.>, while part (ii) was observed in Mazur's Eisenstein ideal paper <cit.> (and is easy to prove).
Analogous operators have been constructed and studied in the Columbia thesis of S. Atanasov <cit.> for Shimura varieties
attached to unitary groups of signature (1,n-1). These groups also have pairs of (nondegenerate) limits of discrete series
π_0, π_1 that have (,)-cohomology with coefficients in the same irreducible representation of (a maximal compact) K.
§.§.§ The conjecture
The action of the global Hecke algebra _?d on the weight one coherent cohomology spaces
H^i_0(X_?(d),ω)⊗_p, i = 0, 1, is diagonalizable and
the characters λ: _?d_p that appear in the decomposition are in bijection
with the cuspidal automorphic representations π of (2,) whose archimedean component
π_∞ is isomorphic to the irreducible representation I(0) introduced above. As already mentioned, the same characters occur in H^0_0 and H^1_0, with the same multiplicity.
It is also convenient to identify each π that contributes to weight one coherent cohomology with its normalized newform g_π, which is uniquely determined by its classical q-expansion. In the statement of the conjecture we will just write g = g_π and we always assume g to be a newform of level d.
The analogue of Venkatesh's conjecture is easier to state in this setting because the hypothetical motive M_π actually exists. By the Deligne-Serre <cit.> Theorem each π (or g_π) corresponds, in the sense of the Langlands correspondence, to a 2-dimensional Galois representation
ρ_π: (L/) → (2,)
for some finite extension L/, where is any algebraically closed field.
Each such ρ_π is odd: the determinant of ρ_π(c), for any complex conjugation c ∈(L/), equals -1.
Conversely, the Artin conjecture asserts that every 2-dimensional odd representation of the Galois group of a finite extension of is of the form ρ_π; this has been known since the work of Khare–Wintenberger <cit.> on Serre's Conjecture, although it had been established in most cases by Buzzard–Dickinson–Shepherd-Barron–Taylor <cit.>.
Moreover, when the image of ρ_π is a dihedral group the result is essentially due to Hecke.
Thus to each π we have the 2-dimensional representation ρ_π, which can be viewed as the Galois realization of a rank 2 Artin motive, with coefficients in a ring of algebraic integers, which is a direct factor of the 0-dimensional motive R_L/(L).
The motive M_π is then the direct factor of R_L/(L) whose Galois realization is given by the 3-dimensional Galois representation
Ad^0ρ_π: (L/) → (3,C),
the trace-free summand of the 4-dimensional representation Ad ρ_π. (In general Ad^0ρ_π factors through the Galois group of a proper subfield of L, but this is unimportant.)
It is explained in <cit.> that the motivic cohomology group H^1_ mot(,M_π(1)), denoted _π above, can be replaced in the formulation of the conjecture by a certain 1-dimensional space of Stark units, denoted U_π or
U_g if we write g = g_π.
This is defined as
U_g = U_π ≅ Hom_𝒪[G_L / Q](Ad^0ρ, U_L⊗𝒪)
This becomes one-dimensional upon tensoring with E = (). One introduces a natural map, denoted
red_q: U_g → 𝔽_q^×⊗/p^r.
The conjecture stated in <cit.> relates the action of the derived Hecke operator T^1_q with a rational multiple of the mod p^r discrete logarithm of a generator of U_g. Since rational multiplication does not sit well with reduction modulo p^r, one needs to define the inputs and the relations carefully. The Shimura class 𝔖 in H^1(X_0?(qd)_r;C_r) defines a non-trivial class _ in H^1(X_0?(qd)_r;_r) (where _r is here the structure sheaf of X_0?(qd)_r) through the composition of (<ref>) with the inclusion of Zariski sheaves /p^r↪_r. A feature specific to the case of (2) and its inner forms is that the class _ is the reduction modulo p^r of the class in H^1(X_0?(qd),) of (the complex conjugate of) an Eisenstein cusp form f_E^#, in the sense of <cit.>.
of (the complex conjugate of) an Eisenstein cusp form f_E^#, in the sense of <cit.>.[Atanasov has shown in <cit.> that for the Shimura varieties attached to unitary groups of signature (1,n-1) with n > 2, the analogues of the Shimura classes do not lift to characteristic zero. ] At least if the space of Eisenstein cusp forms is one-dimensional, this allows us to assign an invariant that measures the action of T^1_q. Define g^* to be the weight 1 newform corresponding to the automorphic representation π^∨ dual to π. We let g^*,(q) be the “old form" in the space π^∨ which in classical notation is just the analytic function z ↦ g^*(qz) for z in the upper half-plane. By Proposition <ref>, T^1_q(g) ∈ H^1(X_0?(d)_r,ω_r) has the same Hecke eigenvalues as g. Hence in the Serre duality pairing
⟨,⟩: H^0_0(X_0?(d)_r,ω_r)⊗ H^1_0(X_0?(d)_r,ω_r) H^1_0(X_0?(d)_r,ω_r^⊗ 2) _r,
where the last arrow is the trace map, the map ∙↦⟨∙, T^1_q(g)⟩ factors through
projection on the g = g_π-eigenspace of level qd. Let now f_E^# = a_1· f_E where f_E is the normalized Eisenstein cusp form
(with leading coefficient 1; the constant a_1 is essentially the inverse of what is called the Merel constant in <cit.>). Then T^1_q(g) is determined by the pairing ⟨ gg^*,(q),f_E ⟩, which can be identified with the Petersson pairing of
the weight 2 forms gg^*,(q) and f_E; g^* has to be replaced by the old form because f_E has level q (the pairing is taking place in the second group from the right in Proposition <ref>). It can also be related to the square root of the central value of the triple product L-function L(s,π×π^∨×π(f_E)), where the notation π(f_E) is self-explanatory.
The motivic conjecture, as formulated in <cit.>, comes down to an equality up to an integer factor:
There exists an integer m =m_g ≥ 1 and
u_g ∈ U_g such that, for all primes q and p as above
m ·⟨ gg^*,(q), 𝔖⟩ = log( red_q(u_g)).
Here both sides depend linearly on the choice of log as in (<ref>). The independence of m from q and p is essential in this
statement, which would otherwise be vacuous.
§.§.§ Results
Weight 1 eigenforms g = g_π can be classified by the images of the corresponding (odd) 2-dimensional Galois representations ρ_π, into those whose image is dihedral – induced from a character ψ_1 of a quadratic extension K/ – and the “exotic" cases where the projectivized image is A_4, S_4,
or A_5. In the dihedral case the Stark unit group U_π can be identified. When K is imaginary – say g or π is of CM type – U_π consists of elliptic units;
when K is real – then g or π is of RM type – U_π is generated by the fundamental unit of K itself. Let D be the discriminant of K.
Using the explicit determination of U_π, the article <cit.>
proves the following theorem:
If K is imaginary, assume that D is an odd prime and that
ψ_1 is unramified. If K is real assume that D is odd and that ψ_1 has conductor dividing the different of K/.
Then Conjecture <ref> is true for g_π.
The restrictions on D and the conductor of ψ_1 are certainly not necessary; they have been relaxed in the Columbia thesis of R. Zhang <cit.>, whose results are close to optimal. The methods of proof of Theorem <ref> depend crucially on the realization of g_π as explicit theta functions. D. Marcil has provided striking numerical confirmation of the conjecture in some A_4 cases <cit.>. As matters stand, it seems unlikely that one can prove unconditional results in the exotic case, though it's conceivable that the conjecture can be deduced somehow as a consequence of Stark's conjecture <cit.>.
§.§ Venkatesh's conjectures, p-adic cohomology, and the invariant ℓ_0
Specialists in p-adic modular forms already encountered the invariant ℓ_0 as early as the 1990s.
Hida's theory of ordinary modular forms, or automorphic cohomology classes, for the reductive group G over ,
constructs a p-adic analytic space whose points parametrize p-adic modular forms, which can be viewed as the generic
fiber _ord,G of the formal spectrum (), where is Hida's “big Hecke algebra." The ring is an algebra over the Iwasawa algebra Λ
of the (maximal compact subgroup of the p-adic points of the) maximal torus of G. In Hida's original theory for modular curves, and its extension to Shimura varieties,
is a finite flat Λ-algebra, and the points of _ord corresponding to eigenvalues of classical (complex) Hecke algebras are dense in _ord.
In general Hida conjectured in <cit.> that the codimension of _ord,G in the rigid generic fiber of (Λ) is exactly ℓ_0.
Urban made the analogous conjecture in the paper <cit.> in which he constructed p-adic families of modular forms of finite slope, parametrized by the eigenvariety _G, one of several constructions of a higher-dimensional generalization of the Coleman-Mazur eigencurve for elliptic modular forms. In the finite slope situation, the eigenvariety is purely a rigid analytic object, Hida's Hecke algebra is replaced by a noetherian subring, which we also denote , of
the ring of global sections of __G, and the map of () to (Λ) is replaced by a morphism : _G. Here the weight space is again a rigid torus of dimension equal to the absolute rank of G, the Iwasawa algebra is replaced by the completed (noetherian) local ring
Λ_w at a point w ∈. Fixing a point x ∈_G, with (x) = w, the map makes the completion _x into a finite Λ_w-algebra.
Both Hida and Urban refer to their conjectures on the codimension as non-abelian analogues of the Leopoldt Conjecture on the p-adic multiplicative independence of
global units in number fields. The Leopoldt Conjecture has for decades been the Bermuda triangle of algebraic number theory, tempting many of the most distinguished specialists into losing precious years in the pursuit of promising but ultimately futile attempted proofs. Best not to try to prove the codimension conjectures of Hida and Urban, in other words. Nevertheless, it's natural to reinterpret these codimension conjectures in Venkatesh's motivic framework. We briefly discuss two such interpretations. The article <cit.> of Hansen and Thorne approaches the eigenvarieties as parameter spaces for Galois representations, whereas
the article <cit.> of Khare and Ronchetti provides evidence that the derived diamond algebras introduced in <ref>, acting on Hida's
ordinary families, are the wild p-adic analogues of Venkatesh's derived Hecke algebras at (tame) Taylor-Wiles primes. In both constructions the motivic cohomology group H^1_ mot(,M_π(1)) that figures in Venkatesh's motivic conjecture appears through its p-adic realization as the dual Selmer group.
§.§.§ The Hansen-Thorne construction
The article <cit.> is concerned exclusively with eigenvarieties attached to the group
G = (n)_. The eigenvarieties are constructed by Hansen's method in <cit.>, following the topological approach of Ash and Stevens. By work of Harris–Lan–Taylor–Thorne <cit.>, and also by work of Scholze <cit.>, it is known that the classical point x ∈ := _GL(n) corresponding to the cuspidal cohomological
automorphic representation π
has an associated semisimple p-adic Galois representation
ρ_π: (/) → GL(n,L)
for some finite extension L/_p over which x is defined. The point x also depends on the secondary datum of a refinement t_x, or an ordering on the eigenvalues of crystalline Frobenius, that is assumed implicitly in what follows. The representation ρ_π does not depend on the choice of t_x.
We then define the dual adjoint Selmer group H^1_f(,M_π(1)) = H^1_f(,∘ρ_π(1)) as in <ref>. Note that this is a
_p-vector space; the constructions in <cit.>, in contrast to those of <cit.>, are carried out entirely in characteristic zero.
The main result of <cit.> is that, at least under favorable conditions, and assuming a conjecture recalled in the statement of Theorem <ref> below, H^1_f(,M_π(1)) controls the
infinitesimal geometry of at x with respect to the morphism .
Combining this with the universal coefficients spectral sequence constructed in <cit.>, the authors recover
the analogue of the Galatius-Venkatesh Theorem <ref>, but without introducing the derived deformation ring.
The main hypothesis required for the final results of <cit.> is a version of the isomorphism
R_x ≅ _x
of the Taylor-Wiles method. Here R_x is an underived deformation ring defined using the theory of (φ,Γ)-modules over the Robba ring. In particular, R_x is defined purely by characteristic zero methods, and (pro)-represents a functor on Artinian local L-algebras for the p-adic coefficient field L of ρ_π (<ref>). More precisely, it is assumed that ρ_π is absolutely irreducible; then R_x (pro)-represents trianguline deformations of ρ_π along with the refinement t_x.
The property of being trianguline[pronounced in English to rhyme with fine, wine, and turpentine] <cit.> is determined by the (φ,Γ)-module attached to the restriction of ρ_π to decomposition groups at p; the relevant fact for us is that the Galois representations attached to points on the eigenvariety, including ρ_π itself, necessarily have this property.
We now assume π contributes to H^*(S_K_0,Ṽ), with notation as in <ref> for G = (n)_. We change notation slightly,
and let 𝔪_π denote the maximal ideal corresponding to π in _K_0⊗_p; we let H^*(S_K_0,Ṽ)__π
denote the localization. Let q_0 = q_0((n)_), ℓ_0 = ℓ_0((n)_). The following theorem summarizes the main results of Theorems 4.9 and 4.13 of <cit.>.
Let x ∈ be a classical point as above, with w = p(x),
and let λ: L be the corresponding character of the Hecke algebra.
Assume λ is numerically noncritical.[This is a regularity condition, for which we refer to <cit.>.] Then
* _x ≥Λ_w - ℓ_0;
* If _x = Λ_w - ℓ_0, let V_x = (Λ_x)⊗_Λ_w L, where the map from Λ_w to L is the augmentation. Then H^*(S_K_0,Ṽ)__π is free of rank 1 as a graded module over the exterior algebra ∧^*_L V_x and is generated by its term in top degree q_0 + ℓ_0.
* Assume ρ_π is absolutely irreducible and crystalline at p, with the expected Hodge-Tate weights, and its associated Weil-Deligne representation at every prime ℓ (including p) corresponds to π_ℓ under the local Langlands correspondence.
Assume also the isomorphism (<ref>).
Then V_x can be identified with H^1_f(,ρ_π(1)) = _π,p[1/p], and
H^*(S_K_0,Ṽ)__π is free of rank 1 as a graded module over the exterior algebra ∧^*_L _π,p, generated by its term in top degree q_0 + ℓ_0.
It is natural to assume that the action of ∧^*_L _π,p on H^*(S_K_0,Ṽ))__π coincides, up to duality, with the one defined in Theorem <ref>. However, the identification of V_x with _π,p[1/p] depends on the deformation ring R_x, which in turn depends on the choice of refinement t_x. Hansen has conjectured that the action does not depend on this additional choice, but even this conjecture remains open.[We thank David Hansen for pointing this out.]
§.§.§ The Khare-Ronchetti construction
The constructions in this section are based on the paper <cit.>.
Although their derived diamond algebras are defined purely locally, the construction is bound up with the action on
the ordinary part of global cohomology. It is not clear to us how the representation theory of the derived diamond algebras fits in a hypothetical p-adic
local Langlands program.
Let p > 2 be a prime number and let G be the group (n) over a finite extension F/. For any open compact subgroup K ⊂ G and any ring A we let H(G,K;A) denote the (underived) Hecke algebra of K-biinvariant functions on G with coefficients in A.
Let B = T· U be the upper-triangular Borel subgroup, U its unipotent radical, and T the diagonal maximal torus. We restrict attention to (n) in order to use explicit matrix groups, and to make use of the known construction of Galois parameters attached to cuspidal cohomology classes. The article <cit.> treats the general case and the reader should be able to make the appropriate adjustments.
Let U^- be the opposite unipotent (lower triangular) subgroup. Let v be a prime of F dividing p, with integer ring = _v and uniformizer ϖ = ϖ_v.
For 1 ≤ b ≤ c we let
I_b,c,v = U^-(ϖ^c)· T(1+ϖ^b) · U();
i.e.
the generalization from (2) to G of the congruence subgroup
I_b,c,v = [ 1 + ϖ^b* *; ϖ^c* 1+ϖ^b* ]⊂(2,).
We let I_b,c = ∏_v | p I_b,c,v.
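For (2) the definition unwinds to elementary congruences on matrix entries. A minimal sketch, with our own helper name, taking ϖ_v = p and working with integer matrices modulo p^c:

```python
# Illustrative membership test for I_{b,c} at a single place v with ϖ_v = p
# (b <= c assumed): diagonal entries ≡ 1 mod p^b, lower-left ≡ 0 mod p^c.
def in_I_bc(mat, p, b, c):
    # The upper-right entry is unconstrained; such a matrix automatically
    # has det ≡ 1 (mod p), hence is invertible over Z_p.
    (a, u), (l, d) = mat
    return (a - 1) % p**b == 0 and (d - 1) % p**b == 0 and l % p**c == 0

p, b, c = 3, 2, 3
print(in_I_bc(((10, 5), (27, -17)), p, b, c))   # True:  10 ≡ -17 ≡ 1 (mod 9)
print(in_I_bc(((10, 5), (9, -17)), p, b, c))    # False: 9 ≢ 0 (mod 27)
```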
Let K = K_p · K^p ⊂ G(_f), where K^p is a fixed level subgroup away from p
and K_p varies over the subgroups ∏_v| p I_b,c,v introduced in (<ref>).
We consider the cohomology of the locally symmetric spaces of dimension d
Y_b,c = G()\ G(_f)/K_∞· K^p· I_b,c
where K_∞⊂ G() is maximal compact and K^p is a fixed level subgroup away from p,
so that K^p· I_b,c⊂ G(_f) is open compact. We assume K^p is hyperspecial maximal
outside a finite set of primes S.
The cohomology of Y_b,c is a module over the unramified Hecke algebra
H^un = ∏_q ∉ S, q ≠ p H(G_q,K_q).
It is also a module over a derived Hecke algebra at p, which we proceed to define in the setting of Hida
theory.
Following Hida, we define an operator U_p = ∏_α∈Σ U_α acting as correspondences on each Y(K(b,c)), compatibly with
the natural maps Y(K(b,c)) → Y(K(b',c')) for b' < b, c' < c. (There are really operators U_p(b,c) but we don't stress this.)
Roughly, with G = (n) we have
U_p = ∏_j = 1^n-1 U_j,p, U_j,p = U_α_j = I_b,c(∏_v | p diag(ϖ_v 1_j, 1_n-j)) I_b,c.
The ordinary part H^i(Y_b,c,)_ord⊂ H^i(Y_b,c,) is the submodule (and direct summand) where the U_p operator
acts invertibly.
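On a finite module the ordinary part is cut out by Hida's idempotent e = lim_k U_p^k!, and the convergence can be watched in a toy example. The sketch below is purely illustrative; the matrix U is our own stand-in, not an actual U_p operator.

```python
# Toy model of the ordinary projector: on a finite Z/p^m-module, the limit
# e = lim_k U_p^{k!} exists, is idempotent, and projects onto the largest
# summand on which U_p acts invertibly.
p, m = 3, 2
mod = p ** m

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % mod
             for j in range(2)] for i in range(2)]

def mat_pow(A, e):
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        e >>= 1
    return R

U = [[1, 1], [0, p]]       # one unit eigenvalue (1) and one non-unit (p)
E = [row[:] for row in U]
for k in range(2, 10):     # E becomes U^{k!}; it stabilizes mod p^m
    E = mat_pow(E, k)

assert mat_mul(E, E) == E  # e is an idempotent
print(E)                   # projects onto the rank-one ordinary summand
```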
For all c ≥ b ≥ 1, n ≥ 1, the pullback
H^i(Y_b,b,)_ord → H^i(Y_b,c,)_ord
is an isomorphism.
The proof in <cit.> is a calculation with double cosets acting on singular cochains.
If Fact <ref> admits a generalization to overconvergent cohomology it will necessarily need to be formulated
in characteristic zero.
The group I_1,c/I_b,c is isomorphic to the abelian p-group
T_b := ∏_v | pT(1+ϖ_v_v)/T(1+ϖ^b_v_v).
Since I_b,c is normal in I_1,c, the elements
α∈ I_1,c define diamond operators
⟨α⟩ = I_b,cα I_b,c∈ H_(G,I_b,c).
for all b,c.
The action of ⟨α⟩ on H^i(Y_b,c,)
commutes with the maps Y_b,c → Y_b',c' for b' < b, c' < c.
Let Λ_b = _p[T_b] and Λ = lim_b Λ_b, which is the Iwasawa algebra of T(_p).
We also use Λ_F = Λ⊗__p F for any finite extension F/_p.
The T_b,c-covering Y_c,c/Y_b,c corresponds to a morphism to the classifying space
π_b,c: Y_b,c → BT_b,c.
Thus there is a ring homomorphism
π_b,c^*: H^*(BT_b,c,) = H^*(T_b,c,) → H^*(Y_b,c,)
and thus an action of H^*(T_b,c,) on H^*(Y_b,c,) by derived diamond operators. Since I_b,c^ab = T_b,c,
we have
an isomorphism
ι: H^1(T_b,c,) ≅ H^1(I_b,c,) ⊂H_(G_p,I_b,c).
The derived diamond action of ϕ∈ H^1(T_b,c,) coincides with the derived Hecke
action of ι(ϕ) (obtained as in Fact <ref>).
In particular, when b = 1 this action commutes with the U_j,p-operators.
The claim about commutation follows from an explicit double coset calculation.
It implies (when Y_b,c = Y_1,c)
The derived degree 1 diamond operators act on H^*(Y_1,c,)_ord, preserving all
U_j,p-eigenspaces.
Another calculation shows:
As c varies, the actions of H^1(T_1,c,) on H^*(Y_1,1,)_ord are compatible, and thus
give rise in the limit to an action of H^1(T(1+p),) on H^*(Y_1,1,)_ord.
Letting m →∞ we thus obtain a graded action of H^*(T(1+p),) on H^*(Y_1,1,)_ord.
We return to the setting of <ref>. Let π be a cuspidal cohomological representation of G, as in that section. Thus we
have, as in Corollary <ref>
H^q_0(G)+∙_0(Y_1,1,_p)[π_f]
is a free differential graded module over (p-adic) ∧^∙(𝔞^*) of rank 0 or 1;
here we are using multiplicity one, and the existence of Whittaker models, for (n).
The following theorem is an imprecise version of the main result of <cit.>. Let E be an appropriate p-adic coefficient field for π_f.
Under certain favorable circumstances, H^*(Y_1,1,E)_π is generated over H^q_0(Y_1,1,E)_π by the action of the derived diamond operators H^*(T(1+p),E).
Moreover, there is a canonical map from H^1(T(1+p),E) to the Selmer group _π,p[1/p]^∨ attached to π. And under even more favorable circumstances,
the action of H^*(T(1+p),E) on H^*(Y_1,1,E)_π factors through this Selmer group.
The “favorable circumstances” to which the theorem alludes include the isomorphism (<ref>), in other words the non-abelian Leopoldt conjecture.
As in Theorem <ref>, the actions of Theorem <ref> and Theorem <ref> should be dual to each other.
§ A SELECTION OF OPEN PROBLEMS
We discuss some open problems at the interface of derived algebraic geometry and the Langlands correspondence.
§.§ Derived local deformation conditions
The context for this problem is <ref> and <ref>. In <ref> we discussed examples of moduli spaces that we know how to upgrade to derived moduli spaces. However, there are also many interesting examples, arising in connection with concrete problems, for which we do not know how to construct any good derived enhancement, and for which it is unclear to the authors whether such an enhancement should even exist. The example of Galois deformation functors with local conditions is particularly interesting, since it would presumably have consequences for automorphy lifting.
In practice, when implementing modularity lifting arguments, one wants to impose more general local conditions on Galois deformation functors, as explained for example in the articles of <cit.> in these proceedings. This is done by constructing Galois deformation functors for local fields with local conditions, and it turns out that these can often fail to be LCI; see <cit.> and <cit.> for some “ℓ=p” examples, and <cit.> for ℓ≠ p examples. This is an interesting phenomenon to understand, because Kisin's modification of the Taylor-Wiles method “reduces” automorphy lifting to the problem of having good control of local deformation rings.
We do not know if these local deformation functors should actually be subject to the hidden smoothness philosophy, since these are typically not “honest” moduli functors, in the sense that their moduli descriptions are not known “concretely”. Rather, they are typically constructed from “honest” moduli spaces by processes such as formation of Zariski closure or scheme-theoretic image, which do not admit good derived versions.
For example, the local deformation functors at p are often constructed by the following sort of procedure (which to our knowledge was first employed in <cit.>): define a subset of _p-points of the unrestricted deformation functor using p-adic Hodge theory, and take their Zariski closure. (The point here is that the p-adic Hodge theory is understood most generally in the context of representations with characteristic zero coefficients.) Then the characteristic zero points admit some moduli theoretic description in terms of rational Galois representations with p-adic Hodge theoretic conditions, but it is a priori unclear how to interpret the _p or _p-points concretely in terms of Galois representations.
There are some cases where the theory is well-behaved with integral coefficients, for example in the Fontaine-Laffaille range. In this case, there is a full moduli-theoretic description of the local deformation functor, which even makes clear that it is smooth.
In general, the local deformation functors at p are constructed by Kisin <cit.>, but their construction again involves processes such as formation of scheme-theoretic image or Zariski closure from the generic fiber, which we do not know how to perform in a derived way. This is why we do not know natural ways to derive the local deformation functors. However, in the past few years there have been major advances in our understanding of integral p-adic Hodge theory due to Bhatt-Morrow-Scholze and Bhatt-Scholze. In particular, Bhatt-Scholze have established in <cit.> a new interpretation of lattices in crystalline Galois representations, in terms of prismatic F-crystals. While this itself does not suffice to give a notion of crystalline Galois representations with general coefficients, it is a promising step towards such a concept, and therefore towards a moduli-theoretic description of crystalline deformation functors, which could then be derived.
We caution, however, that even if this is all possible, it is not quite clear at present what applications this would unlock. Global methods, such as the Taylor-Wiles method, are predicated upon the premise that the structure of global deformation rings and automorphic forms can be “simplified” in a suitable sense by adding level structure. However, this simplification occurs completely in the local-to-global aspect, and it is not clear that one can hope for the same type of simplification if local deformation rings are non-classical. Although it does not directly tackle this concern, the work <cit.> of Iyengar-Khare-Manning, which generalizes Wiles' numerical criterion using derived algebra, is a promising step in this broad direction.
§.§ Differential graded Hecke algebra
The context of this discussion is <ref>. In <ref>, we explained that commutativity of the local derived Hecke algebra (G(K); Λ) is still unknown. We conjecture that it is true in general, and moreover should be a consequence of “higher commutativity” for the differential graded Hecke algebra (G(K); Λ). For differential graded algebras, commutativity is a structure rather than a property, and there is in fact a spectrum of structures that measures “how commutative” an algebra is, called _n-structures. An associative differential graded algebra has an _1-structure, while an _∞-structure is the homotopy-coherent analogue of commutativity. It is clear from the construction that (G(K); Λ) has an _1-structure. If this _1-structure can be promoted to an _2-structure, then it would follow that (G(K); Λ) is commutative. We believe that this is the case, and we conjecture moreover that the _1-structure on (G(K); Λ) can be promoted to an _3-structure.
We explain the motivation for this conjecture. Unpublished work of Feng-Gaitsgory <cit.> implies (conditionally) that (G(K); Λ) is commutative whenever Λ is of characteristic ℓ which is not too large with respect to G, if K is a function field. The strategy of <cit.> is to apply the categorical trace of Frobenius to the modular derived Satake equivalence, a result which has not appeared in the literature but which has been announced by Arinkin-Bezrukavnikov, and may well be possible to deduce directly from the recent paper <cit.> of Bezrukavnikov-Riche.
It is interesting that this strategy is not some elaboration of a proof of commutativity for the classical Hecke algebra, but rather proceeds from a categorical equivalence. It should also clarify the “meaning” of the derived Hecke algebra in terms of the dual group, but the answer is complicated to state, so we do not formulate it here. It seems likely that a variant of the strategy will also work for p-adic local fields K, although significant foundational groundwork would need to be laid for this to happen.
Now, we explain why this suggests the possibility of an _3-structure on (G(K);Λ). Within the above strategy, the differential graded Hecke algebra appears as the categorical trace of the derived Satake category. This latter object is expected to admit the structure of a “factorization category” over an algebraic curve. This “factorization” structure is an algebraic analogue of an _3-structure; in fact there is a precise relation between the two notions, but it is rather subtle to state.
§.§ Action of the derived Galois deformation ring
The context for this discussion is <ref>. There we sketched the main result of <cit.>, which is the construction of an action of the homotopy groups of a derived Galois deformation ring ^ρ, __F[1/S] on the homology groups of a locally symmetric space (localized at a corresponding maximal ideal of the Hecke algebra).
The above action occurs at the level of classical algebra: a graded commutative ring acts on a graded abelian group. We expect that this action can be refined to an action of the derived Galois deformation ring ^ρ, __F[1/S] on the localization of the homology chains (i.e., an action of an animated commutative ring on an animated abelian group), which induces the previous action after taking homotopy groups.
The construction of this action presents an intriguing challenge. Recall that the Galatius-Venkatesh construction uses Taylor-Wiles patching, which is based on adding level structure away from p. Morally, Galatius-Venkatesh find that the derived Galois deformation ring should become classical when patched, and thus deduce the derived action from the classical action, but this does not literally make sense since there is no patched derived Galois deformation ring. Instead, one could hope to add some kind of infinite level structure at p and hope that the corresponding derived Galois deformation ring is classical. Indeed, the work of Khare-Ronchetti <cit.> suggests that this should happen when ascending the ordinary (Hida) tower, if one considers ordinary deformations. On the other hand, <cit.> also points out that this approach is intimately connected with p-adic transcendence problems which are known to be extremely difficult, such as the Leopoldt Conjecture, so perhaps one should look for some other angle.
Although the construction of this refined action seems like a homotopy-theoretic problem, it would have concrete consequences for classical algebra. In fact, by considerations with the spectral Hecke algebra <cit.>, it would actually imply Venkatesh's reciprocity law <cit.> for the action of the derived Hecke algebra. This is very desirable, since in the current formulation of the Theorem from loc. cit. there is no effective way to determine for which primes q it applies.
§.§ The Trace Conjecture for Hitchin stacks
The context for this discussion is <ref>. The following problem is extracted from <cit.>. Let a correspondence of (derived) stacks over _q be given, with maps c_0 and c_1 from a common source to the two sides. Define _ by the derived Cartesian square whose horizontal arrows are Δ' and the diagonal Δ: → ×, and whose vertical arrows are c': _ → and (∘ c_0, c_1): → ×.
In <cit.>, the following is proved:
The tangent complex __/_q is the restriction of _c_1. In particular, if c_1 is quasismooth, then _ is quasismooth (over _q), of dimension equal to d(c_1) (the relative virtual dimension of c_1).
We will formulate the Trace Conjecture for motivic sheaves. See <cit.> for the relevant definition of motivic sheaves; the reader unfamiliar with motivic sheaf theory may replace with the constant sheaf in the following discussion without losing the main gist.
For a motivic sheaf and i ∈, we write i := [2i](i). For a map of (derived) stacks →, we denote _i(/) := H^-2i(, f^! _(-i)). When f → k is the structure map to a field, we abbreviate _i() := _i(/ k). If f is quasismooth, then we write d(f) = χ(_f) for the Euler characteristic of its tangent complex, and call it the “relative (virtual) dimension” of f. By <cit.>, in this situation there is a relative fundamental class [f] ∈_d(f)(/).
Assume that in the above setup c_1 is quasismooth, so that [c_1] ∈_d(c_1)(/ ) exists. Then Lemma <ref> implies that _ is quasismooth (over _q), so the derived fundamental class [_]∈_d(c_1)(_) exists.
On the other hand, regarding [c_1] as a map c_1^* _→ c_1^! _-d(c_1), we have a sequence of maps
(∘ c_0)^* _ = _ = c_1^* _ → c_1^! _-d(c_1),
whose composition we call _. This is a cohomological correspondence between _ and _-d(c_1). We will define another class ([c_1]) ∈_d(c_1)(_). Consider (<ref>), and abbreviate c for the right vertical map. It is explained in <cit.> that
((∘ c_0)^* _, c_1^! _) ≅ c^! (_0^* _, _1^! _)
≅ c^! ((_) ⊠_)
where is the Verdier duality functor. The evaluation map (_) ⊗_-d(c_1)→_-d(c_1) is adjoint to a map (_) ⊠_→Δ_* __-d(c_1) where _ and __ are the respective dualizing complexes. Composing this with (<ref>) gives
((∘ c_0)^* _, c_1^! _) → c^! Δ_* __-d(c_1).
Finally, using proper base change, we have isomorphisms
c^! Δ_* __-d(c_1)≅Δ'_* (c')^! _-d(c_1)≅Δ'_* __-d(c_1).
We may regard it as a global section of (c_0^* _, c_1^! _-d(c_1)). Then (α) ∈ H^0( , Δ'_* __-d(c_1)) ≅_d(c_1)(_) is defined as its image under the composition of (<ref>) and (<ref>).
At this point, we have two natural classes in _d(c_1)(_). One is the derived fundamental class [_], and the other is (_). It is then natural to ask if
(_) = [_] ∈_d(c_1)(_).
We have no evidence to believe that it is true in the full stated generality. However, for a derived Hitchin stack of the type considered in Example <ref>, we feel that the equality is probably necessary for the Modularity Conjecture <cit.> to be true, so we conjecture the equality to be true in this case.
For a derived Hitchin stack as in <cit.>, we have
(_) = [_] ∈_d(c_1)(_).
Such have special properties which could conceivably be necessary for the proof; for example, they are quasismooth[Note that this condition is not relevant for the formulation of the Trace Conjecture.]. We expect that the Trace Conjecture should be an important ingredient for a proof of the Modularity Conjecture <ref>.
§ A CRASH COURSE ON SIMPLICIAL COMMUTATIVE RINGS
This appendix provides a primer on simplicial commutative rings, with the goal of working up to the definition of the cotangent complex. Its purpose is to provide more grounding for the readers to whom the “black box” approach of the main text is too vague, and to indicate references for those interested in delving further into the subject. For short introductions we like <cit.>; longer textbooks include <cit.> and <cit.>.
In particular, we think it is edifying for typical readers to learn about simplicial commutative rings and model categories, even if their ultimate goal is to work in the language of animated rings and ∞-categories.
Our treatment is extremely far from comprehensive. In fact, we view the sketchiness of our writeup as its main (and perhaps only) feature. Each of the subjects we touch upon is already well-documented in the literature, but simplicial homotopy theory is so vast and technical a subject that there is real risk in getting lost in the details. In keeping with the spirit of the main text, we prefer merely to sketch the skeletal structure of the subject, contenting ourselves with slogans and intuition, and deferring details to our favorite references.
§.§ Why “simplicial”?
We will introduce “simplicial sets” as a combinatorial model for topological spaces, and “simplicial commutative rings” as a combinatorial model for topological commutative rings.
Why do we say “simplicial” instead of “topological”? One answer is that the combinatorial nature of simplicial objects makes them better-behaved from a technical perspective – it rules out “pathological” topological spaces. We do not have a good philosophical answer; the reader could consult <cit.> for more musings on this question.
Furthermore, it turns out that the adjective “simplicial” is flexible enough to be applied to any object of any category, and provides a way to do abstract homotopy theory in many examples of interest. That is, if C is a category of “widgets”, we will be able to define a category sC of “simplicial widgets”.
§.§ The simplex category
The simplex category Δ has as its objects the sets [n] := {0, 1, …, n} for n ≥ 0, and _Δ([n], [m]) consists of all maps of sets f: [n] → [m] that do not reverse order, i.e., if i ≤ j then f(i) ≤ f(j).
We will name specific generator morphisms between [n] and [n+1]. For 0 ≤ i ≤ n+1, the coface map
δ_i: [n] → [n+1]
is the morphism “skipping over i”, or more formally δ_i(j) = j if j < i, and δ_i(j) = j+1 if j ≥ i.
For 0 ≤ i ≤ n, the codegeneracy map
σ_i: [n+1] → [n]
is the morphism “doubling up on i”, or more formally σ_i(j) = j if j ≤ i, and σ_i(j) = j-1 if j ≥ i+1.
The sense in which these morphisms “generate” is that any morphism in Δ can be written as a composition of codegeneracy maps followed by a composition of coface maps.
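This factorization is completely algorithmic: the codegeneracies collapse the positions where a monotone map repeats a value, and the cofaces insert the values it skips. The following sketch, with our own encoding of a monotone map f as the tuple (f(0), …, f(n)), verifies the claim exhaustively in small cases.

```python
# Every order-preserving f: [n] -> [m] factors as codegeneracies (collapsing
# repeated values) followed by cofaces (inserting skipped values).
from itertools import product

def monotone_maps(n, m):
    return [f for f in product(range(m + 1), repeat=n + 1)
            if all(f[i] <= f[i + 1] for i in range(n))]

def delta(i, m):   # coface δ_i: [m] -> [m+1], skipping the value i
    return tuple(j if j < i else j + 1 for j in range(m + 1))

def sigma(i, m):   # codegeneracy σ_i: [m+1] -> [m], doubling up on i
    return tuple(j if j <= i else j - 1 for j in range(m + 2))

def compose(g, f):
    return tuple(g[x] for x in f)

def refactor(f, n, m):
    doubled = [i for i in range(n) if f[i] == f[i + 1]]
    skipped = [j for j in range(m + 1) if j not in f]
    g, cur = tuple(range(n + 1)), n
    for i in reversed(doubled):        # apply the σ_i, largest index first
        g, cur = compose(sigma(i, cur - 1), g), cur - 1
    for j in skipped:                  # then the δ_j, smallest value first
        g, cur = compose(delta(j, cur), g), cur + 1
    return g

for n, m in [(1, 1), (2, 2), (2, 3), (3, 2)]:
    assert all(refactor(f, n, m) == f for f in monotone_maps(n, m))
print("factorization verified for all monotone maps in the test range")
```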
§.§.§ Simplicial sets
A simplicial set is a functor X: Δ^→Sets. Concretely, it is specified by a collection of sets X_n := X([n]), for n ≥ 0, with maps between them indexed by the morphisms in Δ^. In particular, we have face maps
d_i := X(δ_i): X_n+1→ X_n for 0 ≤ i ≤ n+1,
and degeneracy maps
s_i := X(σ_i): X_n→ X_n+1 for 0 ≤ i ≤ n.
Because of Remark <ref>, to specify a simplicial set it suffices to specify the data of the {X_n}_n ≥ 0 and the d_i and s_i for each n, satisfying the “simplicial identities” <cit.>, which are verified on a small example in the sketch after this list:
* d_i d_j = d_j-1 d_i for i < j.
* d_i d_i = d_i d_i+1.
* d_j s_j = d_j+1 s_j = id.
* d_i s_j = s_j-1 d_i for i < j.
* d_i s_j = s_j d_i-1 for i > j+1.
* s_i s_j = s_j+1 s_i for i ≤ j.
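As promised, here is a brute-force check of the identities on the representable simplicial set Δ^m introduced just below: its n-simplices are monotone tuples in [m], on which d_i deletes the i-th entry and s_i repeats it. The encoding is ours.

```python
# Verify the simplicial identities on the simplices of Δ^m in low degrees.
from itertools import product

def simplices(n, m):   # n-simplices of Δ^m: monotone tuples of length n+1
    return [f for f in product(range(m + 1), repeat=n + 1)
            if all(f[i] <= f[i + 1] for i in range(n))]

def d(i, f):           # face map: delete the i-th entry
    return f[:i] + f[i + 1:]

def s(i, f):           # degeneracy map: repeat the i-th entry
    return f[:i + 1] + f[i:]

n, m = 3, 3
for f in simplices(n, m):
    for j in range(n + 1):
        for i in range(j):
            assert d(i, d(j, f)) == d(j - 1, d(i, f))      # identity 1
    for i in range(n):
        assert d(i, d(i, f)) == d(i, d(i + 1, f))          # identity 2
    for j in range(n + 1):
        assert d(j, s(j, f)) == f == d(j + 1, s(j, f))     # identity 3
        for i in range(j):
            assert d(i, s(j, f)) == s(j - 1, d(i, f))      # identity 4
        for i in range(j + 2, n + 2):
            assert d(i, s(j, f)) == s(j, d(i - 1, f))      # identity 5
        for i in range(j, n + 1):
            assert s(j, s(i, f)) == s(i + 1, s(j, f))      # identity 6
print("all six simplicial identities hold on the simplices of Δ^3")
```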
Each element [n] ∈Δ induces a representable functor on Δ^, namely _Δ(-, [n]), and so induces a simplicial set that we shall call Δ^n. Intuitively, we think of it as corresponding to the topological space of the standard n-simplex, denoted
|Δ^n| := { (x_0, …, x_n) ∈^n+1 0 ≤ x_i ≤ 1, ∑ x_i = 1}.
The definition of simplicial set is an axiomatization of the structure obtained from a topological space X by setting X[n] to be the set of continuous maps from the n-simplex |Δ^n| to X, i.e. X_n is the set of “n-simplices of X”. The face maps d_i are the usual boundary maps. The degeneracy maps account for ways to view lower-dimensional simplices as “degenerate” instances of higher-dimensional simplices; these do not play an important role in the topological theory, but are technically convenient in the simplicial theory. This defines the singular simplices functor
Sing : Top → sSets.
Therefore, for a general simplicial set X we will refer to X[n] as the “n-simplices of X”.
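For concreteness, consider Δ^1. Its n-simplices are the order-preserving maps [n] → [1], of which there are n + 2. Thus Δ^1 has two 0-simplices and three 1-simplices: the identity of [1] together with the two constant maps, which are the degeneracies s_0 of the vertices; all simplices in dimensions ≥ 2 are degenerate. This matches the intuition that |Δ^1| should be an interval.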
§.§.§ Geometric realization
Given a simplicial set X, we can build a topological space |X| by viewing X as a recipe for assembling a simplicial complex. In formulas, the geometric realization is given by
|X| := ∐_n X[n] × |Δ^n|/(d_i x, u) ∼ (x, δ_i u), (s_i x, u) ∼ (x, σ_i u).
The topological space |X| is a CW complex <cit.>.
An alternative, concise way to express this is as follows: we declare |Δ^n| to be the standard n-simplex { (x_0, …, x_n) ∈ ℝ^{n+1} : 0 ≤ x_i ≤ 1, ∑ x_i = 1} and then we define
|X| = colim_{Δ^n → X} |Δ^n|
where the indexing category has as its objects the maps Δ^n →X for varying n, and as its morphisms the maps Δ^m →Δ^n respecting the given maps to X. Note that the geometric realization of the simplicial set Δ^n agrees with the standard n-simplex |Δ^n|, justifying the notation.
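As a sanity check on the formula, consider the simplicial set S^1 := Δ^1/∂Δ^1, obtained by identifying the two vertices of Δ^1. It has a single nondegenerate 0-simplex and a single nondegenerate 1-simplex, and the geometric realization glues the two endpoints of |Δ^1| together, so |S^1| is the topological circle.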
Geometric realization is left adjoint to the singular simplices functor:
Hom_Top(|X|, Y) ≅ Hom_sSet(X, Sing Y).
These adjoint functors define a “homotopy equivalence” of categories in a suitable sense, which we are not yet equipped to define precisely. The technical statement is that the functors induce a Quillen equivalence of model categories, with the usual Quillen model structures on each side. See <cit.> for the precise formulation and proof. This justifies the assertion that simplicial sets provide a combinatorial model of topological spaces.
§.§.§ Homotopy groups of a simplicial set
A pointed simplicial set (X, ⋆) has homotopy groups π_i(X, ⋆), which could be defined as the homotopy groups (in the usual sense) of the geometric realization |X| with respect to the basepoint ⋆. They could also be defined directly within the category of simplicial sets, basically by using a combinatorial model for the sphere as the simplicial set Δ^n modulo its boundary, but this involves the subtlety of “resolving” X by a Kan complex (see Remark <ref>). We remark that for a topological space Y, Sing Y is a Kan complex. The point is that in “homotopy-theoretic” categories (such as the category of simplicial sets), objects may not have “enough” maps in or out of them and so must be “resolved” by ones that do. Every topological space is fibrant, which is why |X| does not have to be similarly “resolved”.
When we discuss simplicial (abelian) groups and simplicial commutative rings, we will always take ⋆ to be the point corresponding to the identity element, and omit the basepoint from the notation.
§.§.§ Internal Hom
Given two simplicial sets X and Y, we construct a simplicial set Hom(X, Y) whose set of 0-simplices is identified with Hom_sSet(X, Y). Intuitively, we are trying to put a topological structure on the set of maps from X to Y. This will make the category of simplicial sets into a simplicial category – that is, a category enriched over simplicial sets.
We define
Hom(X, Y)([n]) := Hom_sSet(X × Δ^n, Y).
Note that the product of functors is formed level-wise.
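For instance, taking X = Δ^0, the Yoneda lemma gives Hom(Δ^0, Y)([n]) = Hom_sSet(Δ^0 × Δ^n, Y) ≅ Hom_sSet(Δ^n, Y) ≅ Y_n, so Hom(Δ^0, Y) ≅ Y, as one expects of a mapping space out of a point.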
A simplicial category is a possible model for the notion of ∞-category (although not the main one used in <cit.>). Intuitively, this is a type of category enriched over spaces, i.e., where the sets of morphisms are promoted to spaces.
§.§ Simplicial widgets
More generally, if C is any category, then we say that a simplicial object of C is a functor Δ^op → C. If C is the category of “widgets”, then the functor category Fun(Δ^op, C) will be called the category of “simplicial widgets”, and denoted sC.
The following examples will be of particular interest to us.
* A simplicial abelian group is a functor from Δ^op to the category of abelian groups.
* Given a commutative ring R (for us this always means commutative and with unit), a simplicial R-module is a functor from Δ^op to the category of R-modules.
* A simplicial commutative ring is a functor from Δ^op to the category of commutative rings.
* Given a commutative ring R, a simplicial R-algebra is a functor from Δ^op to the category of R-algebras.
The categories of such objects are, in each case, enriched over sSets. To see this, we first observe that for a simplicial set S and a simplicial widget X, we have a simplicial widget
(X^{⊗S})[n] = ∐_{S_n} X_n
with the coproduct on the RHS formed in the category of widgets. Then we define
Hom_sC(X, Y)[n] := Hom_sC(X^{⊗Δ^n}, Y).
Let X be a simplicial set. Then there is a simplicial abelian group ℤ⟨X⟩, obtained by forming the free abelian group level-wise: define ℤ⟨X⟩([n]) := ℤ⟨X([n])⟩ and letting the face and degeneracy maps be induced by those of X.
There is also a simplicial ℤ-algebra ℤ[X] obtained by forming the free (polynomial) ℤ-algebra level-wise, with the face and degeneracy maps induced by those of X.
It turns out that simplicial groups are automatically Kan complexes. Therefore, the homotopy groups of simplicial groups can be calculated as maps from simplicial spheres, in contrast to the situation for general simplicial sets as cautioned in <ref>. The same applies for simplicial abelian groups, simplicial commutative rings, etc. because of how the model category structures are defined in each of these cases.
§.§ The Dold-Kan correspondence
Recall that if C is an abelian category, then we can form the category of chain complexes of objects of C. This is essential for example in homological algebra. It turns out that this is closely related to the category sC.
Let R be a commutative ring. The category of simplicial R-modules is equivalent to the category of non-negatively graded chain complexes of R-modules.
We may view Theorem <ref> as explaining that “simplicial” is a generalization of “chain complex” to non-abelian categories (where the notion “chain complex” does not exist).
We will indicate the functors used to define the equivalence of Theorem <ref>, starting with the functor from simplicial R-modules to chain complexes of R-modules. A proof may be found in <cit.>; we also like the treatment in <cit.>, which we follow. Let M be a simplicial R-module. First we explain an auxiliary construction called the Moore complex of M. From M we form a chain complex M_* by taking M_n := M[n], and taking the differential ∂_n : M_n → M_{n-1} to be the alternating sum of the face maps,
∂ := ∑_i=0^n (-1)^i d_i.
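As a quick check that this is a differential: writing ∂ ∘ ∂ = ∑_{i < j} (-1)^{i+j} d_i d_j + ∑_{i ≥ j} (-1)^{i+j} d_i d_j and applying the simplicial identity d_i d_j = d_{j-1} d_i (i < j) to the first sum, the reindexing (i', j') = (j - 1, i) turns it into -∑_{i' ≥ j'} (-1)^{i'+j'} d_{i'} d_{j'}, which cancels the second sum, so ∂^2 = 0.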
The singular chain complex of a topological space X is by definition the Moore complex of the simplicial abelian group ℤ⟨Sing X⟩.
The homology groups of the Moore complex M_* coincide with the homotopy groups of the simplicial abelian group M.
Let us analyze the structure on the Moore complex. There are degeneracy maps s_i M_n → M_n+1 for 0 ≤ i ≤ n. Topological intuition suggests that “removing” the degenerate simplices should not affect the homology. To implement this, define DM_*⊂ M_* so that DM_n+1 is the span of the image of the degeneracy maps s_0, …, s_n.
Check that DM_* is a subcomplex of M_*, i.e. is preserved by the differential ∂. Furthermore, show that the map M_* → (M/DM)_* is a quasi-isomorphism.
Basically, we want to instead consider the chain complex (M/DM)_*. However, it is convenient to use a different normalization of this, which we will call the normalized Moore complex. Define NM_n to be the kernel of all the face maps d_i for i<n (but not i=n). Then (-1)^n d_n defines a differential NM_n → NM_n-1.
Check that NM_* is a chain complex.
The sum map
NM_n ⊕ DM_n → M_n
is an isomorphism. Therefore, NM_* maps isomorphically to (M/DM)_*.
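In the lowest interesting degree this is elementary: for n = 1 we have NM_1 = ker(d_0) and DM_1 = s_0(M_0), and any m ∈ M_1 decomposes as m = (m - s_0 d_0 m) + s_0 d_0 m, where the first summand lies in NM_1 since d_0 s_0 = id. The decomposition is unique because d_0 is injective on s_0(M_0), again by d_0 s_0 = id.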
The functor from simplicial R-modules to chain complexes of R-modules, that we will take in Theorem <ref>, is M↦ NM_*. A key step in the proof of the Dold-Kan equivalence is to show that
M[n] ≅ ⊕_{[n] ↠ [k]} NM_k
functorially in M. Here the index set is over maps [n] → [k] in the simplex category (i.e., order-preserving maps) which are surjective. This tells us how to define the inverse functor: given a chain complex M_*, we will define a simplicial R-module by
M[n] := ⊕_{[n] ↠ [k]} M_k.
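In low degrees this reads M[0] = M_0, M[1] = M_1 ⊕ M_0 (the second summand indexed by the unique surjection [1] ↠ [0]), and M[2] = M_2 ⊕ M_1 ⊕ M_1 ⊕ M_0, the two copies of M_1 indexed by the surjections σ_0, σ_1 : [2] ↠ [1]. The summands other than the top one consist of degenerate simplices, in accordance with the splitting M_n ≅ NM_n ⊕ DM_n above.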
The details in checking that this defines an equivalence of categories are left to the references.
§.§ Simplicial commutative rings
A simplicial commutative ring is a functor from Δ^op to the category of commutative rings. Because we shall work with these a lot, we adopt a slightly more economical notation. Simplicial commutative rings will be denoted using calligraphic letters such as ℛ, and we abbreviate ℛ_n := ℛ[n]. Classical commutative rings will be denoted using Roman letters such as R.
So, a simplicial commutative ring can be specified concretely by a collection of commutative rings ℛ_n, for n ≥ 0, with maps between them indexed by the morphisms in Δ^op, which can be specified by face maps d_i : ℛ_{n+1} → ℛ_n
and degeneracy maps s_i : ℛ_n → ℛ_{n+1} satisfying the simplicial identities. This is an axiomatization of the structure that exists on the singular simplices of a topological commutative ring. The category of commutative rings is denoted CR and the category of simplicial commutative rings is denoted SCR.
Let R be a commutative ring. Then we may define a simplicial commutative ring R such that R[n] = R, and all face and degeneracy maps are the identity map. Intuitively, R corresponds to the topological commutative ring which is R equipped with the discrete topology.
§.§.§ Homotopy groups
If ℛ is a simplicial commutative ring, then its homotopy groups π_*(ℛ) form a graded-commutative ring. The underlying group of π_*(ℛ) coincides with that of the underlying simplicial abelian group of ℛ, and can therefore be computed by the Moore complex
⋯ → ℛ_n → ℛ_{n-1} → ⋯ → ℛ_0.
For the constant simplicial commutative ring R, the Moore complex reads
⋯ → R → R → R → R, where the differential ∂_n = ∑_{i=0}^{n} (-1)^i d_i is zero for n odd and the identity for n even, n > 0.
Hence we see π_0(R) = R and π_i(R) = 0 for i >0, in accordance with the intuition expressed in Example <ref>.
It turns out that the augmentation ideal π_{>0}(ℛ) always has a divided power structure in π_*(ℛ). We will not explain what this means or why it exists, except that for a ℚ-algebra a divided power structure always exists and is unique, whereas in positive characteristic it is a rather special piece of structure.
§.§.§ Classical truncation
Recall that in <ref> we asserted the existence of an adjunction
CR ⇄ SCR, where the functor CR → SCR sends R to the constant simplicial commutative ring R, and the functor SCR → CR is ℛ ↦ π_0(ℛ).
To justify this, we need to argue that
Hom_SCR(ℛ, R) ≅ Hom_CR(π_0(ℛ), R).
A morphism f ∈ Hom_SCR(ℛ, R) amounts to a system of maps f_n : ℛ_n → R compatible with the face and degeneracy maps. Since all structure maps of the constant object R are the identity, it must therefore be the case that f_n(r) can be calculated by composing with any of the n+1 distinct maps ℛ_n → ℛ_0 and then applying f_0 : ℛ_0 → R. For this to be well-defined, we see that f_0 must vanish on the image of d_1 - d_0, and thus factor through π_0(ℛ) → R. It remains to show that any such f_0 does induce a well-defined map of simplicial commutative rings, which is completed by the following exercise.
Show that all of the n+1 maps ℛ_n → ℛ_0 induced by the n+1 maps [0] → [n] have the same composition with the quotient ℛ_0 ↠ π_0(ℛ).
§.§.§ Enrichment over simplicial sets
Because of its importance, let us explicate the enrichment of simplicial commutative rings over simplicial sets, although it is a special case of the description in <ref>. Given a set S and a commutative ring R, we can form the commutative ring R^{⊗S}. Then for a simplicial set S and a simplicial commutative ring ℛ, we define ℛ^{⊗S} by (ℛ^{⊗S})_n := (ℛ_n)^{⊗S[n]}. Finally, we define a simplicial set Hom_SCR(ℛ, ℛ') with
Hom_SCR(ℛ, ℛ')[n] := Hom_SCR(ℛ^{⊗Δ^n}, ℛ')
and the natural face and degeneracy maps.
For commutative rings R and R', calculate the homotopy groups of Hom_SCR(R, R'). (Answer: π_0 = Hom_CR(R, R'), and the π_i vanish for i>0.) This justifies thinking of CR as being embedded fully faithfully, in the sense of simplicial categories, in SCR.
§.§ Simplicial resolutions
We now discuss a key construction, in which a morphism of simplicial commutative rings is “resolved” by a process analogous to formation of projective or injective resolutions.
§.§.§ Derived functors
Let us recall the paradigm of derived functors in homological algebra, which is probably familiar to you. If F is a right-exact functor on an abelian category C, then we define the “higher derived functors” of F on M ∈C by finding a “free” resolution
…→ P_n → P_n-1→…→ P_0 → M → 0
and then applying F instead to the complex …→ P_n → P_n-1→…→ P_0, which we view as a “replacement” for M. We are being deliberately vague about what the adjective “free” should mean.
Let C be the category of R-modules and N an R-module, F = N ⊗_R (-). The higher derived functors of F are Tor^R_i(N, -).
Similarly, if F is a left-exact functor, then one performs an analogous process using injective resolutions instead.
Let C be the category of R-modules and N an R-module, F = Hom_R(N, -). The higher derived functors of F are Ext_R^i(N, -).
Simplicial commutative rings will play the role for commutative rings that chain complexes of R-modules play for R-modules. In particular, we will see that replacing a commutative ring by a type of “free resolution” will allow us to build a theory of “derived functors” on the category of simplicial commutative rings. We will illustrate this in the particular example of the cotangent complex.
Recall that for a morphism f : A → B, we have the B-module of Kähler differentials Ω^1_f, which is “well-behaved” if f is smooth. For general f, we will “resolve” B by a “smooth” simplicial A-algebra in order to define the cotangent complex L_f. For abelian categories, this meant replacing B by a complex of objects which level-wise had good properties (e.g., “free”), but that does not apply to the category of A-algebras, which is far from abelian. Instead, this theory of homological algebra will be replaced by a theory of homotopical algebra developed by Quillen. The correct framework for this is that of Quillen's model categories, but this would involve a significant digression to explain completely, so we will adopt an ad hoc presentation and then hint at the general theory later, in <ref>.
A morphism of simplicial commutative rings f : ℛ → ℛ' is a weak equivalence if it induces an isomorphism of homotopy groups π_*(f) : π_*(ℛ) ≅ π_*(ℛ').
This notion is an analogue for simplicial commutative rings of the notion of quasi-isomorphism for chain complexes. Indeed, we could make an analogous definition for simplicial R-modules, which would transport under the Dold-Kan Theorem to the usual notion of quasi-isomorphism.
§.§.§ Free simplicial algebras
Let ℛ be a simplicial commutative ring. A simplicial ℛ-algebra is a simplicial commutative ring 𝒜 equipped with a homomorphism ℛ → 𝒜 of simplicial commutative rings.
A free simplicial ℛ-algebra is a simplicial ℛ-algebra 𝒜 of the following form:
* There is a system of sets X_n such that 𝒜_n = ℛ_n[X_n].
* The degeneracy maps send s_j(X_n) ⊂ X_{n+1}. (Note that there is no condition on face maps!)
The following Lemma illustrates that a free simplicial ℛ-algebra has good “mapping out” properties, similar to those of free resolutions of modules.
Consider a commutative square of simplicial commutative rings
ℛ → ℬ
↓      ↓ Φ
𝒜 → 𝒞
where Φ is level-wise surjective and a weak equivalence, and 𝒜 is a free simplicial ℛ-algebra. Then there exists a lift 𝒜 → ℬ making the diagram commutative.
See <cit.>. For the more general statements in the framework of model categories, see <cit.>.
Let f : A → B be a map of (classical) commutative rings. Then there exists a free simplicial A-algebra 𝒫 and a diagram
A → 𝒫 → B,
in which 𝒫 → B is a weak equivalence of simplicial A-algebras.
We remind that A (resp. B) denotes the constant simplicial commutative ring on A (resp. B). Furthermore, we can even arrange that the formation of this diagram is functorial in B.
We will give a “canonical” resolution, which is functorial in B. Note that for any A-algebra R, we have a canonical algebra homomorphism
A[R] → R
sending [r] to r. This construction gives rise to two maps A[A[B]] → A[B]:
* Apply (<ref>) with R = B to get a map A[B] → B, and then take A[-]. For example, this map sends [a[b]] ↦ [ab].
* Apply (<ref>) with R = A[B]. For example, this map sends [a[b]] ↦ a[b].
We also have a map A[B] → A[A[B]] sending [b] to [[b]], which is a section of both maps above.
Contemplating the combinatorics of the situation, we get a simplicial object
… A[A[B]] ⇉ A[B]
where the maps defined previously are the first face and degeneracy maps. Clearly this resolution is level-wise a polynomial algebra over A, and one can check that it is free. Why is it a resolution of B?
This comes from an “extra degeneracy” argument, which is part of a more general pattern. If we have a simplicial set X
… X_1 ⇉ X_0
with an augmentation d_0 : X_0 → X_{-1} and “extra degeneracies” s_{-1} : X_{n-1} → X_n satisfying the natural extensions of the simplicial identities, then the map X → X_{-1} (where X_{-1} is regarded as a constant simplicial set) is a weak equivalence.
Going back to (<ref>), there are extra degeneracies B → A[B] sending b ↦ [b] and s_{-1} : A[B] → A[A[B]] sending a[b] ↦ [a[b]], etc. This can be used to show that the map from (<ref>) to B is a weak equivalence.
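To see what is going on in the lowest degree, one can compute π_0 of the resolution (<ref>) by hand: it is A[B]/im(d_0 - d_1), and on a generator [x] with x ∈ A[B] the relation identifies x with [ε(x)], where ε : A[B] → B is the map (<ref>). Taking x = [b] + [b'] and x = [b][b'] shows that the classes of the symbols [b] are additive and multiplicative, and taking x = a ∈ A identifies [a · 1_B] with a; hence [b] ↦ b is an isomorphism π_0 ≅ B.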
The construction of the proof is a special case of a more general one, and we give the general statement because we feel that it elucidates the situation. Let T be a monad on a category C, meaning the data of an endofunctor T : C → C plus natural transformations η : id_C → T and μ : T ∘ T → T satisfying various coherence conditions detailed in <cit.>. We remark that monads often arise in practice as compositions of adjoint functors. (The running example of interest is: C is the category of sets, and T is the functor taking a set S to the underlying set of A[S]. Here T can be viewed as arising from a free-forgetful adjunction.)
An algebra over the monad T consists of the data of x ∈ C plus a morphism a : Tx → x satisfying coherence conditions. (In the running example, the category of algebras over T is equivalent to the category of A-algebras.) Given an algebra x over T, one can form a simplicial set called the bar construction B(T,x)_•, which has B(T,x)_n = T^{∘(n+1)} x. By formal combinatorial analysis, the bar construction is equipped with extra degeneracies. The augmentation B(T,x)_• → x will therefore be a weak equivalence in great generality.
§.§.§ Uniqueness
If f : ℛ → 𝒜 is a morphism of simplicial commutative rings, we will refer to a diagram
ℛ → 𝒜' → 𝒜, with 𝒜' → 𝒜 a weak equivalence,
where 𝒜' is a free simplicial ℛ-algebra, as a free resolution of 𝒜 as a simplicial ℛ-algebra. Let us discuss “how unique” these resolutions are. In the classical situation of homological algebra, projective resolutions are unique up to homotopy. Analogously, there is a sense in which free resolutions of a simplicial commutative ℛ-algebra are “unique up to homotopy”. The best language for formulating such statements is model category theory, which would take us too far afield to explain, but we can say what it specializes to in this particular situation.
We write ℛ[X] := ℛ ⊗_ℤ ℤ[X] for the one-variable polynomial algebra over ℛ, the tensor products being formed levelwise (as will be the case for all tensor products formed in this paragraph; they will be the “same” as the derived tensor products to be discussed in <ref>). Let us write ℛ[X_0, X_1] := ℛ[X] ⊗_ℛ ℛ[X]. This represents the “homotopy coproduct”; in particular, given two ℛ-algebra homomorphisms f, g : ℛ[X] → 𝒜 we get an ℛ-algebra homomorphism f ⊗ g : ℛ[X_0, X_1] → 𝒜. There is a product morphism μ : ℛ[X_0, X_1] → ℛ[X]. Let ℛ[X_0, X_1, Y] be a simplicial resolution of μ : ℛ[X_0, X_1] → ℛ[X]. Then we say that f and g are homotopic if f ⊗ g extends to a commutative diagram
ℛ[X_0, X_1] → ℛ[X_0, X_1, Y] → 𝒜, with composite f ⊗ g.
Such a diagram is called a homotopy between f and g.
We say that two simplicial resolutions
ℛ → 𝒜' → 𝒜 and ℛ → 𝒜'' → 𝒜 (each with a weak equivalence to 𝒜)
are homotopic if there are maps α : 𝒜' → 𝒜'' and β : 𝒜'' → 𝒜' making the appropriate diagrams commute, and such that α ∘ β and β ∘ α are homotopic to the identity. Lemma <ref> may be used to show that any two simplicial resolutions are homotopic.
§.§ A vista of model categories
Let us hint at the more general framework underlying these procedures, which is Quillen's theory of model categories. Our discussion will be very brief; a favorite introductory reference is <cit.>.
The purpose of a model category is to incorporate homotopy theory into category theory. The basic idea is that in practice, one often wants to introduce a notion of maps that are “weak equivalences”, but not necessarily isomorphisms. For example, in the category of topological spaces there is the usual notion of homotopy equivalences, and in the category of chain complexes there is the notion of quasi-isomorphism. A model category is a framework to do homotopy-theoretic constructions in such a situation.
More formally, a model category is a category equipped with several distinguished classes of morphisms, satisfying various properties. The most important are the weak equivalences, which designate maps that are “homotopy equivalences”. The weak equivalences are morphisms that we want to “invert”. Familiar examples are:
* In the category of chain complexes of R-modules, weak equivalences should be the quasi-isomorphisms. Inverting them leads to the derived category of R-modules.
* In the category of simplicial commutative rings, weak equivalences should be as defined in Definition <ref>.
The other data in a model category are classes of morphisms called the cofibrations and fibrations, which are roughly the generalizations of projective and injective resolutions. There are various axioms placed on these classes of morphisms: for example, they are preserved by compositions and retracts. An object is called fibrant if its map to the terminal object is a fibration, and an object is called cofibrant if its map from the initial object is a cofibration.
In the standard model structure on simplicial sets, the Kan complexes referred to in <ref> are the fibrant objects. Kan complexes are characterized by lifting conditions possessed by the singular simplicial set of a topological space; in particular, Sing Y is a Kan complex for any topological space Y. This is the reason why, when defining homotopy groups of simplicial sets as maps from a sphere, it was necessary to replace the target by a Kan complex.
On the other hand, in the standard model category structure on topological spaces the fibrations are Serre fibrations, and in particular all topological spaces are fibrant. This is the reason why, when defining homotopy groups of a topological space, it was not necessary to replace the target.
The axioms of a model category imply that every object admits a weak equivalence from a cofibrant object, or a weak equivalence to a fibrant object. We think of such a map as a “resolution” by a cofibrant object or a fibrant object. As we know, derived functors on abelian categories are defined using such resolutions. A model category structure provides a more general notion of “resolution” in categories which are not necessarily abelian, and therefore allows to construct “non-abelian derived functors”.
Our particular constructions with simplicial commutative rings are examples of general constructions with model categories. In particular, our notion of “free simplicial ℛ-algebra 𝒜” means that 𝒜 is a cofibrant ℛ-algebra in the standard model structure; meanwhile, all simplicial commutative rings are fibrant.
§.§ The cotangent complex
We may now define the cotangent complex of a morphism f : A → B, as invented by Quillen and developed by Illusie in <cit.>. Let 𝒫 → B be a simplicial resolution of B as a free simplicial A-algebra. Then let Ω_{𝒫/A} be the simplicial 𝒫-module obtained by forming Kähler differentials level-wise: Ω_{𝒫/A}[n] := Ω^1_{𝒫_n/A}. Finally we define the cotangent complex to be
L_f := L_{B/A} := Ω_{𝒫/A} ⊗_𝒫 B.
Show that if 𝒫 and 𝒫' are two free simplicial resolutions of B, then Ω_{𝒫/A} ⊗_𝒫 B is homotopic to Ω_{𝒫'/A} ⊗_{𝒫'} B. Therefore, L_{B/A} is well-defined up to homotopy. In particular, it gives a well-defined object of the derived category of B-modules.
Even better, if 𝒜 is an animated ring, and ℬ is an animated 𝒜-algebra, then L_{ℬ/𝒜} can be constructed as an animated ℬ-module – see <cit.>.
We expect L_{B/A} to have reasonable finiteness properties when the morphism f has good finiteness properties. However, the canonical resolutions used to build 𝒫 in the proof of Proposition <ref> are extremely large. Therefore, we would like to know that there are “smaller” resolutions. We will examine this question next.
§.§ Economical resolutions
We will show the existence of resolutions with “good” finiteness properties, following <cit.>.
Let A be a noetherian commutative ring and f : A → B a finite type morphism. Then there exists a simplicial A-algebra resolution of B by a free simplicial A-algebra A[X] with each X_n finite.
Let A be a noetherian commutative ring and f : A → B a finite type morphism. Then there exists a representative of L_{B/A} by a complex of finite free B-modules.
Typically L_{B/A} cannot be represented by a perfect complex; that is, it will typically have homology groups in infinitely many degrees. For example, if A is a field and L_{B/A} has finite Tor-amplitude, then in fact L_{B/A} has Tor-amplitude at most 2, and B is LCI over A. This is a result of Avramov <cit.>, resolving a Conjecture of Quillen.
As motivation, we recall how to construct “efficient” resolutions of a finite type A-module M by a complex of free A-modules. We can build a sequence of complexes of finite free A-modules {F_i} whose homology approximates M in degrees up to i. For F_0, we pick any surjection from a free module, A^n ↠ M. Then we pick generators for I := ker(A^n → M), which induces a map A^m → A^n with image I, whose map to M induces an isomorphism on H_0. So we may take F_0 = [A^m → A^n]. We then inductively build F_i from F_i-1 by picking representatives in F_i-1 for generators of H_i(F_i-1), and then adding free summands in degree i+1 that bound these generators.
To perform an analogous construction for simplicial commutative rings, what we want is a way to “kill cycles” in a simplicial commutative ring. This is accomplished by the following Lemma.
Given a simplicial commutative ring ℛ, and a homotopy class [z] ∈ π_i(ℛ), there exists a free simplicial ℛ-algebra ℛ' such that:
* ℛ'_n is a free ℛ_n-algebra on finitely many generators for each n, and ℛ'_n = ℛ_n for n ≤ i.
* The map ℛ → ℛ' induces an isomorphism on π_n for n < i, and for n = i an exact sequence
0 → π_0(ℛ) · [z] → π_i(ℛ) → π_i(ℛ') → 0
Indeed, supposing Lemma <ref> is true, we can make a resolution as in Proposition <ref> by picking a surjection A[x_1, …, x_d] ↠ B. Then, by repeatedly applying Lemma <ref>, we may build a sequence of free simplicial A-algebras with the “correct” finiteness properties and homotopy groups in all finite degrees. Taking their colimit gives the desired resolution.
Now to prove Lemma <ref>, we want to “adjoin” a variable whose boundary is [z]. However, because of the simplicial identities, the process of “adjoining” variables is necessarily quite complicated. For this, we will imitate what happens to the singular simplices when attaching cells to a topological space X to kill a homology class in degree i-1. If we attach a cell in degree i, then this creates additional degenerate simplices in all higher dimensions. For this reason, the construction is considerably more complicated.
We define
X_n := { x_t : t : [n] ↠ [i+1] a surjection in Δ }
and we take ℛ'_n = ℛ_n[X_n], with faces and degeneracies defined as follows.
* We have s_j(x_t) = x_{t ∘ σ_j}.
* Note that X_{i+1} = {x_id}. We set d_0(x_id) = z and d_j(x_id) = 0 for j > 0. For n > i+1, we define d_j(x_t) = x_{t ∘ δ_j} if t ∘ δ_j is surjective; otherwise t ∘ δ_j factors as δ_{j'} ∘ t' for a coface map δ_{j'} : [i] → [i+1] and a surjection t' : [n-1] ↠ [i], and we define d_j(x_t) to be the image of d_{j'}(x_id) under the map induced by t' (so a degenerate copy of z if j' = 0, and 0 if j' > 0).
One can check that this makes ℛ' a simplicial commutative ring satisfying the conclusions of Lemma <ref> (see <cit.> for more details).
A more highbrow way to phrase this construction is as follows (following <cit.>). For a simplicial A-algebra ℛ, a class in π_i(ℛ) is represented by a map of simplicial sets ∂Δ^{i+1} → ℛ (here we are using Remark <ref>, that ℛ is fibrant). This induces a map of simplicial commutative rings A[∂Δ^{i+1}] → ℛ, where A[∂Δ^{i+1}] is the free simplicial A-algebra on the simplicial set ∂Δ^{i+1}. We may then form
ℛ' := ℛ ⊗_{A[∂Δ^{i+1}]} A[Δ^{i+1}]
and we claim that it satisfies the conclusions of Lemma <ref>. The first bullet point is satisfied by inspection. To compute the effect on homotopy groups, we note that π_*(A) ≅ π_*(A[Δ^{i+1}]) because Δ^{i+1} is contractible, and the lowest positive-degree homotopy group of A[∂Δ^{i+1}] is in degree i and maps to [z] in π_i(ℛ) by construction. Then conclude using the Tor spectral sequence <cit.>
Tor_p^{π_*(A[∂Δ^{i+1}])}(π_*(ℛ), π_*(A[Δ^{i+1}]))_q ⇒ π_{p+q}(ℛ ⊗_{A[∂Δ^{i+1}]} A[Δ^{i+1}]).
§.§ Derived tensor products
Let ℛ → 𝒜 and ℛ → 𝒜' be two maps of simplicial commutative rings. Let 𝒫 → 𝒜 be a free resolution of 𝒜 as an ℛ-algebra and 𝒫' → 𝒜' be a free resolution of 𝒜' as an ℛ-algebra. The derived tensor product 𝒜 ⊗^L_ℛ 𝒜' is represented by the simplicial commutative ring 𝒫 ⊗_ℛ 𝒫', with
(𝒫 ⊗_ℛ 𝒫')_n = 𝒫_n ⊗_{ℛ_n} 𝒫'_n.
Although this is not unique since it depends on a choice of resolutions, it is unique up to weak equivalence. In fact, it suffices to resolve only one of the terms: e.g., the simplicial commutative ring 𝒫 ⊗_ℛ 𝒜' also represents the derived tensor product. Indeed, there is an evident map
𝒫 ⊗_ℛ 𝒫' → 𝒫 ⊗_ℛ 𝒜'
and this is a weak equivalence.
If A is classical and 𝒫 is a free simplicial A-algebra, then the underlying simplicial A-module of 𝒫 corresponds under the Dold-Kan correspondence to a complex of free A-modules. Thus the Dold-Kan correspondence takes free simplicial resolutions to the familiar notion of free resolutions of chain complexes. In particular, we see that if A, B, and B' are all classical, then we have
π_i(B ⊗^L_A B') ≅ Tor^A_i(B, B').
This has the following consequence.
Let A → B and A → B' be maps of classical commutative rings, such that Tor^A_i(B, B') = 0 for all i > 0. Then B ⊗^L_A B' → π_0(B ⊗^L_A B') ≅ B ⊗_A B' is a weak equivalence.
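For a concrete example where the derived and classical tensor products differ, take A = ℤ and B = B' = 𝔽_p. Using the resolution 0 → ℤ → ℤ → 𝔽_p → 0 (multiplication by p), one computes π_0(𝔽_p ⊗^L_ℤ 𝔽_p) ≅ 𝔽_p and π_1(𝔽_p ⊗^L_ℤ 𝔽_p) ≅ Tor_1^ℤ(𝔽_p, 𝔽_p) ≅ 𝔽_p, so the derived tensor product is not discrete, reflecting the failure of the Tor-vanishing hypothesis.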
§.§ Properties of the cotangent complex
Now we establish some properties of the cotangent complex which are familiar for the module of Kähler differentials, at least in the smooth case. As a reference we recommend <cit.>.
If B, B' are A-algebras with Tor^A_i(B, B') = 0 for i > 0, then
* L_{B ⊗_A B'/B'} ≅ L_{B/A} ⊗_A B'.
* L_{B ⊗_A B'/A} ≅ (L_{B/A} ⊗_A B') ⊕ (B ⊗_A L_{B'/A}).
It is true in general that
* L_{𝒜 ⊗_ℛ 𝒜'/𝒜'} ≅ L_{𝒜/ℛ} ⊗_ℛ 𝒜'.
* L_{𝒜 ⊗_ℛ 𝒜'/ℛ} ≅ (L_{𝒜/ℛ} ⊗_ℛ 𝒜') ⊕ (𝒜 ⊗_ℛ L_{𝒜'/ℛ}).
The Tor-vanishing assumption is used to ensure that B ⊗^L_A B' ≃ B ⊗_A B'.
Both arguments are completely formal from the standard properties of formation of Kähler differentials with respect to tensor products, using the observation that:
* If 𝒫 is a free simplicial ℛ-algebra, and 𝒜 is any simplicial ℛ-algebra, then 𝒫 ⊗_ℛ 𝒜 is a free simplicial 𝒜-algebra.
If A → B → C is a composition of morphisms, then there is an exact triangle in the derived category of C-modules:
L_{B/A} ⊗_B C → L_{C/A} → L_{C/B}
Choose a free resolution 𝒫 → B of B as a free simplicial A-algebra. This gives C the structure of a 𝒫-algebra, so we may then choose a free resolution 𝒬 → C of C as a 𝒫-algebra. This gives a diagram
A → 𝒫 → 𝒬, with weak equivalences 𝒫 → B and 𝒬 → C.
Then we form the tensor product B ⊗_𝒫 𝒬, which receives maps from B and from 𝒬, and maps to C.
From the exact sequence of Kähler differentials for smooth morphisms <cit.>, applied levelwise, we obtain an exact triangle
Ω_{𝒫/A} ⊗_𝒫 𝒬 → Ω_{𝒬/A} → Ω_{𝒬/𝒫}.
Applying - ⊗_𝒬 C, we get an exact triangle. Let us verify that the terms are as claimed.
* Ω_{𝒫/A} ⊗_𝒫 𝒬 ⊗_𝒬 C ≅ Ω_{𝒫/A} ⊗_𝒫 C ≅ L_{B/A} ⊗_B C.
* Ω_{𝒬/A} ⊗_𝒬 C ≅ L_{C/A}.
* We have Ω_{𝒬/𝒫} ⊗_𝒬 (B ⊗_𝒫 𝒬) ≅ Ω_{(B ⊗_𝒫 𝒬)/B}. Since B ⊗_𝒫 𝒬 is a free resolution of C as a B-algebra, tensoring with C over B ⊗_𝒫 𝒬 gives L_{C/B}.
Let A be a commutative ring.
* If S is a multiplicative system in A, then L_{S^{-1}A/A} ≅ 0.
* Let A → B be a ring homomorphism. If S is a multiplicative system in A and T is a multiplicative system in B containing the image of S, then
L_{T^{-1}B/S^{-1}A} ≅ L_{B/A} ⊗_B T^{-1}B.
(i) Multiplication induces an isomorphism S^{-1}A ⊗_A S^{-1}A ≅ S^{-1}A. Putting this into Proposition <ref>(ii), we get
L_{S^{-1}A/A} ≅ L_{S^{-1}A ⊗_A S^{-1}A/A} ≅ L_{S^{-1}A/A} ⊕ L_{S^{-1}A/A}
via the diagonal map. This forces L_{S^{-1}A/A} ≅ 0.
(ii) By Proposition <ref>(i) we have L_{S^{-1}B/S^{-1}A} ≅ L_{B/A} ⊗_B S^{-1}B. Apply Proposition <ref> to get an exact triangle
L_{S^{-1}B/S^{-1}A} ⊗_{S^{-1}B} T^{-1}B → L_{T^{-1}B/S^{-1}A} → L_{T^{-1}B/S^{-1}B}.
The rightmost term vanishes according to (i), from which the result follows.
§.§ Some computations of the cotangent complex
Now we will identify the cotangent complex in some special cases. Simplicial resolutions are almost always too unwieldy to compute with by hand, so we will instead need to make clever use of the formal properties explained above.
Let A be a noetherian commutative ring and f : A → B a finite type morphism. Then
* f is étale if and only if L_f ≅ 0.
* f is smooth if and only if L_f ≅ Ω_f^1 and the latter is finite projective.
We will only prove the forward directions for now. The converses will be established after we have developed the connection between _f and deformation theory.
Suppose f is étale. Then the multiplication map B ⊗_A B → B is a localization, so Corollary <ref> implies that L_{B/B ⊗_A B} ≅ 0. By Proposition <ref> we have L_{B ⊗_A B/B} ≅ L_{B/A} ⊗_A B. Then applying Proposition <ref> to B → B ⊗_A B → B, we find that L_{B/A} ≅ 0.
Next suppose f is smooth. The local structure theorem for smooth maps says that locally on B, f can be factored as an étale map to an affine space over A:
Spec B → 𝔸^n_A → Spec A.
At the level of rings, this means that after some localization, we have a factorization A → P → B where P is a polynomial ring over A and P → B is étale. Then the morphism of constant simplicial A-algebras A → P is already free, so L_{P/A} ≅ Ω_{P/A} is finite free. From Proposition <ref> we have an exact triangle L_{P/A} ⊗_P B → L_{B/A} → L_{B/P} where L_{P/A} ⊗_P B ≅ Ω_{B/A} and L_{B/P} = 0, which completes the proof.
Recall that a regular sequence in a commutative ring A is a finite sequence of elements r_1, …, r_n ∈ A such that each r_i is a non-zerodivisor in A/(r_1, …, r_{i-1}).
If B = A/I and I is generated by a regular sequence, then
L_{B/A} ≃ I/I^2[1].
Let us first treat the case A = ℤ[x] and B = ℤ, I = (x). In this case we have a section ℤ → ℤ[x]. Apply Proposition <ref> to the sequence ℤ → ℤ[x] → ℤ to get an exact triangle
L_{ℤ[x]/ℤ} ⊗_{ℤ[x]} ℤ → L_{ℤ/ℤ} → L_{ℤ/ℤ[x]}.
This shows that L_{ℤ/ℤ[x]} ≅ L_{ℤ[x]/ℤ}[1] ⊗_{ℤ[x]} ℤ ≅ (x)/(x^2)[1].
Now we consider the more general situation of the Proposition. By induction, it suffices to handle the case where I = (r) is principal. Note that a choice of r induces a map ℤ[x] → A sending x ↦ r, and this map induces A/(r) ≅ ℤ ⊗_{ℤ[x]} A. We claim that r is a non-zerodivisor if and only if Tor^{ℤ[x]}_i(ℤ, A) = 0 for all i > 0, or in other words (by Corollary <ref>) if and only if ℤ ⊗^L_{ℤ[x]} A ≃ A/(r). The claim is an elementary computation in homological algebra, using the resolution 0 → ℤ[x] → ℤ[x] → ℤ → 0 (multiplication by x) of ℤ over ℤ[x]. Then by Proposition <ref>, we have
L_{B/A} ≅ L_{ℤ/ℤ[x]} ⊗_{ℤ[x]} A ≅ (x)/(x^2)[1] ⊗_{ℤ[x]} A ≅ (r)/(r^2)[1].
§.§ Globalization
The theory we have discussed can be globalized to schemes, and then to stacks.
For example, let Y → X be a morphism of schemes. Then we define L_{Y/X} by finding a free replacement of 𝒪_Y as a simplicial 𝒪_X-algebra, and then forming Ω levelwise, etc. Traditionally, it was important to develop the theory this way instead of trying to “glue” the cotangent complexes from affine open subspaces, because of inadequate categorical technology: on the one hand the cotangent complex is not unique at the level of chain complexes, and on the other hand the derived category is not suitable for gluing.
However, in modern language one can indeed glue the cotangent complex affine-locally in the desired way, viewing the cotangent complex as an animated module and using that the categories of animated quasicoherent sheaves satisfy Zariski descent.
§.§ André-Quillen homology
Let A → B be a map of commutative rings. We have defined the cotangent complex L_{B/A} as a simplicial B-module up to homotopy.
Let M be a B-module. We define the André-Quillen homology groups
D_i(B/A; M) := H_i(L_{B/A} ⊗_B M)
and the André-Quillen cohomology groups
D^i(B/A; M) := H^i(Hom_B(L_{B/A}, M)).
As a consequence of the properties of the cotangent complex, we have various properties of André-Quillen (co)homology:
* Let B be an A-algebra and M' → M → M” be a short exact sequence of B-modules. Then there is a long exact sequence
D^i(B/A; M') → D^i(B/A; M) → D^i(B/A; M”) → D^i+1(B/A; M') →…
* Let B, B' be A-algebras such that Tor^A_i(B, B') = 0 for i > 0. Then for any B ⊗_A B'-module M, we have
D^i(B ⊗_A B'/B'; M) ≅ D^i(B/A; M)
and
D^i(B ⊗_A B'/A; M) ≅ D^i(B/A; M) ⊕ D^i(B'/A; M).
* If A → B → C is a sequence of ring homomorphisms and M is a C-module, then we get a long exact sequence
D^i(C/B; M) → D^i(C/A; M) → D^i(B/A; M) → D^i+1(C/B;M) →…
We will next apply André-Quillen cohomology to study deformation theory.
§.§.§ Deformation theory setup
Let f : X → S be a scheme over S and S ↪ S' a square-zero thickening with ideal sheaf ℐ. We consider deformations of X to a flat X' → S', i.e. commutative squares in which X ↪ X' lies over S ↪ S'.
Then this is a matter of constructing an extension of sheaves of rings
0 → 𝒥 → 𝒪_{X'} → 𝒪_X → 0
lying over
0 → ℐ → 𝒪_{S'} → 𝒪_S → 0,
where 𝒥 may be regarded as a sheaf of 𝒪_X-modules because of the square-zero property, and as such it is isomorphic to f^*ℐ ≅ ℐ ⊗_{𝒪_S} 𝒪_X.
§.§.§ Square-zero algebra extensions
Motivated by this, we consider the following problem. Let f : A → B be a homomorphism of (classical) commutative rings and let M be a B-module. An extension of B by M as A-algebras is an exact sequence
0 → M → E → B → 0
which presents E as a square-zero A-algebra extension of B by M. We would like to understand the structure of all such extensions. It turns out that this is controlled by André-Quillen homology.
Let M be a B-module. Then D^1(B/A; M) is the set of isomorphism classes of extensions of B by M as A-algebras.
The zero element 0 ∈ D^1(B/A; M) corresponds to the trivial square-zero extension B ⊕ M, with multiplication
(b_1, m_1)(b_2, m_2) = (b_1b_2, b_1 m_2 + b_2 m_1).
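Note that in B ⊕ M the ideal M is indeed square-zero: (0, m_1)(0, m_2) = (0, 0) by the formula above, and the multiplication (b, 0)(0, m) = (0, bm) recovers the given B-module structure on M.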
Let us give maps in both directions. First suppose we have an extension M → E → B. Pick a free simplicial A-algebra resolution 𝒫 → B. In particular, since 𝒫_0 is polynomial over A, we have some lifting ϕ_0 : 𝒫_0 → E of the map 𝒫_0 → B along E ↠ B.
Now, we have two maps d_0, d_1 : 𝒫_1 ⇉ 𝒫_0 and we know that the image of (d_0 - d_1) is an ideal in 𝒫_0, with 𝒫_0/Im(d_0 - d_1) ≅ π_0(𝒫) ≅ B. So the difference
ϕ_0 ∘ (d_0 - d_1) : 𝒫_1 → M
is a derivation from 𝒫_1 to M, hence corresponds to an element of Hom(Ω_{𝒫_1}, M). One checks that it is a cocycle, hence induces a class in D^1(B/A; M). The different choices of ϕ_0 modify this cocycle by a coboundary, so we get a well-defined map from isomorphism classes of extensions to D^1(B/A; M).
This also suggests how to construct the inverse map. Given a class in D^1(B/A; M) represented by a derivation δ : 𝒫_1 → M, define E to be the pushout of the maps
(d_0 - d_1) : 𝒫_1 → 𝒫_0 and δ : 𝒫_1 → M.
We equip E with the multiplication induced by its structure as a quotient of 𝒫_0 ⊕ M. For this to be well-defined one checks that the image of 𝒫_1 under (d_1 - d_0, δ) is a square-zero ideal, which is straightforward.
Consider ℤ-algebra extensions of 𝔽_p by 𝔽_p. Since we are considering arbitrary extensions of commutative rings, these are controlled by D^1(𝔽_p/ℤ; 𝔽_p). By Proposition <ref>, the cotangent complex of ℤ → 𝔽_p is L_{𝔽_p/ℤ} ≅ 𝔽_p[1]. So
D^1(𝔽_p/ℤ; 𝔽_p) ≅ Hom_{D(𝔽_p)}(𝔽_p[1], 𝔽_p[1]) ≅ 𝔽_p.
The 0 class corresponds to the extension 𝔽_p[ϵ]/ϵ^2. The non-zero classes all have underlying ring ℤ/p^2, with the maps to 𝔽_p being the natural projection; they differ by rescaling the identification of the kernel pℤ/p^2 with 𝔽_p by a unit.
Let A → B be a smooth map of commutative rings and let M be any B-module. Then by Proposition <ref> we have that L_{B/A} ≅ Ω^1_{B/A} is finite projective, so D^1(B/A; M) = 0. This says that there is a unique square-zero A-algebra extension of B by M, which is necessarily the split extension.
We want to show that if f : A → B is a finite type morphism of Noetherian rings, and L_{B/A} is a finite projective B-module concentrated in degree 0, then f is smooth. Thanks to the finiteness hypotheses, it suffices to show that f is formally smooth, which means that for any square-zero extension S ↪ S̃ of affine schemes over Spec A, every map S → Spec B over Spec A
has a lift to S̃. First suppose that the map S → Spec B is an isomorphism. Then S̃ is the spectrum of a square-zero A-algebra extension of B, and we want to show that it splits. But the assumption on L_{B/A} implies that D^1(B/A; M) = 0 for any B-module M. So the class of [S̃] ∈ D^1(B/A; M) vanishes, which means that there is a lift as desired.
Now we can treat the general case. Suppose S = Spec R, S̃ = Spec R̃, and let I = ker(R̃ → R). Choose a surjection P ↠ B from a polynomial A-algebra P, say with kernel J, so B = P/J. By choosing lifts of generators, we find a lift P → R̃ of the composite P ↠ B → R, which carries J into I, hence factors over P/J^2. Then P/J^2 is a square-zero A-algebra extension of B, so by the case handled in the previous paragraph there is a splitting B → P/J^2, whose composition to R̃ gives the desired lift.
§.§ Deformation theory of schemes
Let Ã → A be an extension by a square-zero ideal I. We consider the problem of finding flat deformations: extensions
J → B̃ → B
lying over
I → Ã → A,
or geometrically, deformations of Spec B → Spec A to a flat family Spec B̃ → Spec Ã, with Spec B ↪ Spec B̃ over Spec A ↪ Spec Ã.
Since I is square-zero, we can regard I as an A-module. Similarly, we can regard J as a B-module.
In the situation above, B̃ is flat over Ã if and only if the natural map I ⊗_A B → J is an isomorphism.
If B̃ is flat over Ã, then applying B̃ ⊗_Ã (-) to the short exact sequence I → Ã → A, and using that the Ã-action on I factors through A, shows that I ⊗_A B ≅ J.
For the other direction, see <cit.>.
Henceforth we may assume that J = I ⊗_A B. We know that Ã-algebra extensions of B by J are classified by D^1(B/Ã; J). The composition Ã → A → B gives a long exact sequence
⋯ → D^1(B/A; J) → D^1(B/Ã; J) → D^1(A/Ã; J) → D^2(B/A; J) → ⋯
The equivalence class of the extension Ã may be viewed as an element [Ã] ∈ D^1(A/Ã; J), which comes from D^1(B/Ã; J) if and only if its image in D^2(B/A; J) vanishes. By the long exact sequence, its pre-image in D^1(B/Ã; J) forms a torsor (possibly empty) for D^1(B/A; J). Hence we find:
Let the situation be as above.
* Attached to the extension Ã → A is an obstruction class obs(Ã) ∈ D^2(B/A; J), which vanishes if and only if an extension B̃ exists.
* If obs(Ã) ∈ D^2(B/A; J) vanishes, then the isomorphism classes of extensions form a non-empty torsor for D^1(B/A; J).
* The automorphism group of any extension is D^0(B/A; J) ≅ Hom_B(Ω_{B/A}, J). (This is elementary; the theory of the cotangent complex is not relevant.)
§.§.§ Application: Witt vectors
We will give an application of the preceding theory to the construction of Witt vectors for perfect _p-algebras.
If B is a perfect 𝔽_p-algebra, then L_{B/𝔽_p} = 0.
The Frobenius endomorphism φ on B is an isomorphism by definition, hence induces an automorphism of L_{B/𝔽_p}. To compute the action of Frobenius on L_{B/𝔽_p}, pick a free resolution 𝒫 → B over 𝔽_p, and note that φ on B lifts to φ on 𝒫 (the p-power map level-wise on 𝒫), compatibly with the weak equivalence 𝒫 → B.
Hence the action of φ on L_{B/𝔽_p} is tensored from its action on Ω_{𝒫/𝔽_p}, which is zero because the Frobenius endomorphism induces 0 on Kähler differentials (d(x^p) = p x^{p-1} dx = 0 in characteristic p). So on the one hand we have that φ induces an automorphism of L_{B/𝔽_p}, but on the other hand it is the zero map. Therefore, L_{B/𝔽_p} = 0.
There is a unique flat deformation of B to ℤ/p^n.
The base case n = 1 is tautological. We may suppose by induction that we have a unique deformation B_n over A_n := ℤ/p^n, and we want to extend it over Ã := ℤ/p^{n+1}. Let I_n ⊂ Ã be the kernel of Ã → A_n, a module over A_n which in fact is pulled back from A_1. Let J_n = I_n ⊗_{A_n} B_n, which is then pulled back from B. Since the action of B_n on J_n factors over B_1 = B, and B_n is flat over A_n, we have D^i(B_n/A_n; J_n) ≅ D^i(B/A_1; J_n) ≅ 0 for each i. In particular, the obstruction class in Proposition <ref> lies in D^2(B_n/A_n; J_n) = 0, so an extension exists; and then since D^1(B_n/A_n; J_n) = 0, the extension is unique.
Therefore, there is a unique (up to unique isomorphism) tower
B ↞ B_2 ↞ B_3 ↞ ⋯
lying over
𝔽_p ↞ ℤ/p^2 ↞ ℤ/p^3 ↞ ⋯
with all squares co-Cartesian and all vertical arrows flat. The inverse limit lim_n B_n is called the ring of Witt vectors of B. This construction of the Witt vectors was pointed out in <cit.>.
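As the most basic example, take B = 𝔽_p itself: the unique flat deformation of 𝔽_p over ℤ/p^n is ℤ/p^n, so the tower consists of the quotients ℤ/p^n and W(𝔽_p) = lim_n ℤ/p^n = ℤ_p, the ring of p-adic integers.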
§.§ Deformation theory of maps
Our next situation is motivated by the deformation theory of maps: given a square-zero extension X ↪ X̃ and a map X → Y, we want to understand extensions of the map to a map X̃ → Y.
The local problem is: given an A-algebra B and a square-zero extension
J → B̃ → B,
equip B̃ with a compatible A-algebra structure. The equivalence class of B̃ as a commutative ℤ-algebra extension of B by J can be viewed as an element [B̃] ∈ D^1(B/ℤ; J). We want to lift this to a class in D^1(B/A; J). We have the long exact sequence
⋯ → D^0(A/ℤ; J) → D^1(B/A; J) → D^1(B/ℤ; J) → D^1(A/ℤ; J) → ⋯
The map D^1(B/ℤ; J) → D^1(A/ℤ; J) can be interpreted as sending B̃ to the fibered product A ×_B B̃, which is a commutative algebra extension of A by J. Then we may interpret the long exact sequence as follows.
Let the situation be as above.
* Attached to the extension B̃ → B is an obstruction class obs(B̃) ∈ D^1(A/ℤ; J), which vanishes if and only if there exists an A-algebra structure on B̃ compatible with the given one on B.
* If obs(B̃) ∈ D^1(A/ℤ; J) vanishes, then the isomorphism classes of A-algebra structures form a non-empty torsor for D^0(A/ℤ; J) = Hom_A(Ω_A, J).
§.§.§ Application: Fontaine's map
Let R be a p-adically complete and p-torsion-free ring. Let R^♭ := lim_{x ↦ x^p} (R/p), the inverse limit along the Frobenius. This is a perfect ring, characterized uniquely up to unique isomorphism as “the” final perfect ring mapping to R/p.
It is a very important fact in p-adic Hodge Theory that there is a unique map W(R^♭) → R that reduces mod p to the canonical map R^♭→ R/p, called Fontaine's map, although we will not be able to explain its significance here. Here W(R^♭) are the Witt vectors, which as discussed in <ref> is the inverse limit of the unique (up to unique isomorphism) family of lifts
R^♭ ↞ W_2(R^♭) ↞ W_3(R^♭) ↞ ⋯
lying over
𝔽_p ↞ ℤ/p^2 ↞ ℤ/p^3 ↞ ⋯
Let us build up a family of maps step-by-step, starting with W_2(R^♭) → R/p^2.
R/p ↞ R/p^2 ↞ R/p^3 ↞ ⋯
R^♭ ↞ W_2(R^♭) ↞ W_3(R^♭) ↞ ⋯ (with the maps W_n(R^♭) → R/p^n to be constructed)
𝔽_p ↞ ℤ/p^2 ↞ ℤ/p^3 ↞ ⋯
(The p-torsion-free property of R is used to see that R/p^n is flat over ℤ/p^n.) By similar arguments as above, the obstruction to such an extension is a class in
D^1(W_2(R^♭)/(ℤ/p^2); pW_2(R^♭)),
which is isomorphic to D^1(R^♭/𝔽_p; R^♭), which vanishes because R^♭ is perfect. Then, the set of such extensions is a torsor for D^0(R^♭/𝔽_p; R^♭) = 0, so the extension is unique. By induction, we have constructed compatible maps W_n(R^♭) → R/p^n for each n ≥ 1. Finally, take the inverse limit of these maps to construct Fontaine's map (here is where we use that R is p-adically complete).
§.§ Global deformations
So far we have discussed deformations of affine schemes. Now we begin the study of deformations of possibly non-affine schemes.
Let X be a smooth (but not necessarily affine) variety over a field k. A first-order deformation of X is a flat family X̃ over k[ϵ]/(ϵ^2), whose fiber over k is X. Then we claim that the set of first-order deformations of X is in bijection with H^1(X, TX).
To see this, write X = ⋃ U_i as a union of affines U_i = Spec A_i. A first-order deformation of X is then uniquely specified by a compatible collection of first-order deformations of the U_i. Since the U_i are smooth, they have unique first-order deformations Ũ_i. The gluing data is an automorphism of Ũ_i ∩ Ũ_j =: Spec Ã_ij lifting the identity automorphism on U_i ∩ U_j =: Spec A_ij, which is of the form
a + b ϵ↦ a + (δ a + b) ϵ
where δ is a k-linear derivation from A_ij to itself, i.e., a section of the tangent sheaf TX over U_ij. In order to glue to a first-order deformation of X, these sections must satisfy the cocycle condition on triple intersections. Finally, modifying each U_i by an automorphism (as a first-order deformation) corresponds to adding a coboundary. Therefore, we see that the data of a first-order deformation is exactly that of a class in the first Cech cohomology H^1(X, TX).
As an application, consider the moduli space of (smooth, projective) genus g curves _g. A k-point to this moduli space is a smooth, projective genus g curve C. The tangent space to _g at the k-point corresponding to C has underlying set the set of extensions of this k-point to a k[ϵ]/ϵ^2-point, which is to say the set of first-order deformations of C/k. According to what we have said, this is H^1(C, TC). Furthermore, the obstructions are valued in H^2(C, TC), which vanishes because C is a curve – this tells us that _g is smooth at the k-point corresponding to C.
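These cohomology groups are readily computed. For g = 0 we have TC = O_{ℙ^1}(2) and H^1(ℙ^1, O(2)) = 0, so ℙ^1 is rigid. For g ≥ 2, deg TC = 2 - 2g < 0 forces H^0(C, TC) = 0, and Riemann-Roch gives χ(TC) = deg TC + 1 - g = 3 - 3g, so dim H^1(C, TC) = 3g - 3, the familiar dimension of the moduli space of genus g curves.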
§.§ Globalizing the cotangent complex
By globalizing Proposition <ref>, one proves:
Let f : X → S be a map of schemes. Let S ↪ S̃ be a square-zero thickening with ideal sheaf ℐ, and 𝒥 = f^*ℐ.
* Attached to the extension S ↪ S̃ is an obstruction class obs(S̃) ∈ Ext^2(L_{X/S}, 𝒥), which vanishes if and only if a flat extension X̃ → S̃ exists.
* If obs(S̃) ∈ Ext^2(L_{X/S}, 𝒥) vanishes, then the isomorphism classes of flat extensions form a non-empty torsor for Ext^1(L_{X/S}, 𝒥).
* The automorphism group of any flat extension is Ext^0(L_{X/S}, 𝒥).
[LCI curves]
Recall that a finite type morphism of Noetherian rings A → B is said to be local complete intersection (LCI) if it can be Zariski-locally written as a composition A → A[x_1, …, x_n] ↠ B where the second map is a regular quotient. More generally, a morphism of schemes is said to be LCI if it is Zariski-locally LCI. For example, a curve with at worst nodal singularities over a field is LCI. By Proposition <ref>, if Y → X is LCI then L_{Y/X} is concentrated in degrees -1, 0.
Suppose C is an LCI curve over a field k, which is smooth outside a finite number of closed points of C. Then we claim that Ext^2(L_{C/k}, 𝒥) = 0 for any coherent sheaf 𝒥, which in particular shows that all infinitesimal deformations of C/k are unobstructed. By the local-global spectral sequence, it suffices to show the vanishing of
* H^2(C, ℰxt^0(L_{C/k}, 𝒥)),
* H^1(C, ℰxt^1(L_{C/k}, 𝒥)), and
* H^0(C, ℰxt^2(L_{C/k}, 𝒥)).
The first vanishes because C is a curve, so it has cohomological dimension 1. The third vanishes because ℰxt^2(L_{C/k}, 𝒥) vanishes, thanks to L_{C/k} being supported in degrees [-1, 0]. Finally, since C is smooth away from a finite collection of closed points, ℰxt^1(L_{C/k}, 𝒥) is a torsion sheaf by the localizing property of the cotangent complex plus its calculation in the smooth case (<ref>), so its higher cohomology vanishes.
For completeness we state the globalization of Proposition <ref>:
Let X ↪ X̃ be a square-zero extension with ideal sheaf 𝒥 and f : X → Y. Let L_Y be the absolute cotangent complex of Y (i.e., the cotangent complex of Y → Spec ℤ).
* Attached to X̃ is an obstruction class obs(X̃) ∈ Ext^1(L_Y, 𝒥), which vanishes if and only if f extends to a map X̃ → Y.
* If obs(X̃) ∈ Ext^1(L_Y, 𝒥) vanishes, then the isomorphism classes of extensions of f form a non-empty torsor for Ext^0(L_Y, 𝒥).
§.§ Geometric interpretations of André-Quillen homology
In this section we reflect on the relation between derived algebraic geometry and the cotangent complex; a more substantial exposition of this topic may be found in <cit.>. As was discussed in <ref>, one of the characteristic features of derived algebraic geometry in practice is its ability to construct spaces whose cotangent complexes can be “identified” in a meaningful way, for example as some natural cohomology theory. The reason this happens is because derived algebraic geometry supplies a “geometric” interpretation of the cotangent complex.
It may be helpful to consider the analogy between schemes and reduced schemes, explained in <ref>. Even if we are only interested in reduced schemes, allowing nonreduced schemes into our vocabulary allows us to gives a geometric interpretation of tangent spaces as maps from dual numbers. In the context of moduli spaces, this allows to express tangent spaces as first-order deformations, and then often in practice as a cohomology group.
We could contemplate defining a “reduced moduli space” as a functor on reduced commutative rings. However, it would be a priori unclear how to describe its tangent space explicitly. If the functor can be extended to non-reduced rings, then its tangent space could be calculated as in the previous paragraph, and if for example the moduli space turned out to be smooth then one would know a posteriori the tangent space of the underlying reduced scheme. The reader might enjoy contemplating this thought experiment in the example of _g, the moduli space of smooth projective genus g curves.
Similarly, the tangent complex of a classical moduli space is a priori hard to describe explicitly, because the “higher” tangent spaces are not prescribed by some direct process from the moduli problem. However, in the derived world the full tangent complex does admit such a direct description, which we now explain.
Let A → B be a morphism of commutative rings, and η : B → A an augmentation. It may be helpful to focus on the special case A = k for a field k, in which case η corresponds to a k-point of Spec B.
Let M be a B-module. It can be turned into a simplicial B-module M[0], which is characterized as the unique simplicial B-module with π_0(M[0]) = M and π_i(M[0]) = 0 for i > 0. For n ≥ 0, there is a simplicial B-module M[n] with π_n(M[n]) = M and π_i(M[n]) = 0 for i ≠ n. (For example, M[n] can be constructed using the Dold-Kan correspondence.)
Then we form the level-wise simplicial square-zero extension A ⊕ M[n]. Its graded homotopy ring is the square-zero extension of A (in degree 0) by M (in degree n).
Let A = k be a field and M=k. Then k ⊕ k[0] = k[ϵ]/ϵ^2 is the ring of dual numbers over k. More generally, k ⊕ k[n] is called the ring of derived order n dual numbers.
Then the André-Quillen cohomology group D^n(B/A; M) can be viewed as the group of homomorphisms from B to A ⊕ M[n] lifting η, in the homotopy category of simplicial A-algebras. The reason for this is relatively formal, once the relevant definitions are in place; for now we just give a sketch. To compute these homomorphisms, by definition one needs to resolve B by a free simplicial A-algebra 𝒫. (This is analogous to how maps in the derived category of A-modules are computed by first resolving the source by a projective resolution.) Then, lifts of the augmentation 𝒫 → A through the projection A ⊕ M[n] → A
are computed by derivations from 𝒫 into M[n], which can be converted into a description as homomorphisms from Kähler differentials of 𝒫 into M[n], which is basically the definition of André-Quillen cohomology.
Geometrically, we can think of Spec(A ⊕ M[n]) as being some “higher derived infinitesimal thickening” of Spec A by M. A derived moduli problem can be evaluated on such a space, giving a geometric interpretation of André-Quillen homology groups; this makes it easier to understand the cotangent complex of a derived moduli space than that of a classical moduli space, in the same sense that it is “easier” to understand the tangent space of a classical moduli space than that of a “reduced moduli space”.
|
http://arxiv.org/abs/2409.03652v1 | 20240905160833 | Benchmarking the integration of hexagonal boron nitride crystals and thin films into graphene-based van der Waals heterostructures | [
"Taoufiq Ouaj",
"Christophe Arnold",
"Jon Azpeitia",
"Sunaja Baltic",
"Julien Barjon",
"Jose Cascales",
"Huanyao Cun",
"David Esteban",
"Mar Garcia-Hernandez",
"Vincent Garnier",
"Subodh K. Gautam",
"Thomas Greber",
"Said Said Hassani",
"Adrian Hemmi",
"Ignacio Jimenéz",
"Catherine Journet",
"Paul Kögerler",
"Annick Loiseau",
"Camille Maestre",
"Marvin Metzelaars",
"Philipp Schmidt",
"Christoph Stampfer",
"Ingrid Stenger",
"Philippe Steyer",
"Takashi Taniguchi",
"Bérangère Toury",
"Kenji Watanabe",
"Bernd Beschoten"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cond-mat.mes-hall"
] |
2nd Institute of Physics and JARA-FIT, RWTH Aachen University, 52074 Aachen, Germany
GEMaC, UVSQ, CNRS, Université Paris Saclay, France
Instituto de Ciencia de Materiales de Madrid (ICMM-CSIC), Sor Juana Inés de la Cruz 3, 28049 Madrid, Spain
2nd Institute of Physics and JARA-FIT, RWTH Aachen University, 52074 Aachen, Germany
GEMaC, UVSQ, CNRS, Université Paris Saclay, France
Instituto de Ciencia de Materiales de Madrid (ICMM-CSIC), Sor Juana Inés de la Cruz 3, 28049 Madrid, Spain
Physik-Institut, University of Zürich, Zürich, Switzerland
Instituto de Ciencia de Materiales de Madrid (ICMM-CSIC), Sor Juana Inés de la Cruz 3, 28049 Madrid, Spain
Instituto de Ciencia de Materiales de Madrid (ICMM-CSIC), Sor Juana Inés de la Cruz 3, 28049 Madrid, Spain
INSA Lyon, Universite Claude Bernard Lyon 1, CNRS, MATEIS, UMR5510, 69621 Villeurbanne, France
GEMaC, UVSQ, CNRS, Université Paris Saclay, France
Physik-Institut, University of Zürich, Zürich, Switzerland
GEMaC, UVSQ, CNRS, Université Paris Saclay, France
Physik-Institut, University of Zürich, Zürich, Switzerland
Instituto de Ciencia de Materiales de Madrid (ICMM-CSIC), Sor Juana Inés de la Cruz 3, 28049 Madrid, Spain
Universite Claude Bernard Lyon 1, CNRS, LMI UMR 5615, Villeurbanne, F-69100, France
Institute of Inorganic Chemistry, RWTH Aachen University, 52074 Aachen, Germany
Peter Grünberg Institute (PGI-6), Forschungszentrum Jülich, 52425 Jülich, Germany
Université Paris Saclay, ONERA, CNRS, Laboratoire d’Etude des Microstructures, 92322 Chatillon, France
Universite Claude Bernard Lyon 1, CNRS, LMI UMR 5615, Villeurbanne, F-69100, France
2nd Institute of Physics and JARA-FIT, RWTH Aachen University, 52074 Aachen, Germany
Institute of Inorganic Chemistry, RWTH Aachen University, 52074 Aachen, Germany
2nd Institute of Physics and JARA-FIT, RWTH Aachen University, 52074 Aachen, Germany
2nd Institute of Physics and JARA-FIT, RWTH Aachen University, 52074 Aachen, Germany
Peter Grünberg Institute (PGI-9) Forschungszentrum Jülich, 52425 Jülich, Germany
GEMaC, UVSQ, CNRS, Université Paris Saclay, France
INSA Lyon, Universite Claude Bernard Lyon 1, CNRS, MATEIS, UMR5510, 69621 Villeurbanne, France
Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan
Universite Claude Bernard Lyon 1, CNRS, LMI UMR 5615, Villeurbanne, F-69100, France
Research Center for Electronic and Optical Materials, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan
[email protected]
2nd Institute of Physics and JARA-FIT, RWTH Aachen University, 52074 Aachen, Germany
§ ABSTRACT
We present a benchmarking protocol that combines the characterization of boron nitride (BN) crystals and films with the evaluation of the electronic properties of graphene on these substrates.
Our study includes hBN crystals grown under different conditions (atmospheric pressure high temperature, high pressure high temperature, pressure controlled furnace) and scalable BN films deposited by either chemical or physical vapor deposition (CVD or PVD).
We explore the complete process from boron nitride growth, through its optical characterization by time-resolved cathodoluminescence (TRCL), to the optical and electronic characterization of graphene by Raman spectroscopy after encapsulation and Hall bar processing.
Within our benchmarking protocol we achieve a homogeneous electronic performance within each Hall bar device through a fast and reproducible processing routine.
We find that a free exciton lifetime of 1 ns measured on as-grown hBN crystals by TRCL is sufficient to achieve high graphene room temperature charge carrier mobilities of 80,000 cm^2/(Vs) at a carrier density of |n| = 1e12 cm^-2, while respective exciton lifetimes around 100 ps yield mobilities up to 30,000 cm^2/(Vs).
For scalable PVD-grown BN films, we measure carrier mobilities exceeding 10,000 cm^2/(Vs), which correlate with a graphene Raman 2D peak linewidth of 22 cm^-1.
Our work highlights the importance of the Raman 2D linewidth of graphene as a critical metric that effectively assesses the interface quality (i.e. surface roughness) to the BN substrate, which directly affects the charge carrier mobility of graphene.
Graphene 2D linewidth analysis is suitable for all BN substrates and is particularly advantageous when TRCL or BN Raman spectroscopy cannot be applied to specific BN materials such as amorphous or thin films. This underlines the superior role of spatially-resolved spectroscopy in the evaluation of BN crystals and films for the use of high-mobility graphene devices.
Benchmarking the integration of hexagonal boron nitride crystals and thin films into graphene-based van der Waals heterostructures
Bernd Beschoten
September 9, 2024
==================================================================================================================================
§ INTRODUCTION
Boron nitride (BN) with its remarkable thermal stability, chemical inertness and robust mechanical properties has long been used for various applications <cit.>.
It has been demonstrated that hexagonal boron nitride (hBN) is of particular importance for applications in 2D material systems, exhibiting properties crucial for photonics and optoelectronics, such as efficient deep UV emissions <cit.> and quantum photonics capabilities <cit.>.
The high thermal conductivity <cit.>, the large electronic bandgap <cit.>, and the ultra-flat and inert surface <cit.> are important prerequisites for the use as a substrate for other 2D materials or for interface engineering <cit.>.
2D materials encapsulated in hBN allow for record-breaking charge carrier mobilities in graphene <cit.>, high electronic and optical quality in transition metal dichalcogenides (TMDs) <cit.> or, for example, bilayer graphene quantum devices with ultra-clean tunable bandgaps <cit.>.
In fundamental research, hBN flakes exfoliated from bulk crystals grown either at high temperature and high pressure (HPHT) <cit.> or at atmospheric pressure and high temperature (APHT) <cit.> are employed for high-quality device fabrication due to their superior crystal quality.
The synthesis of high-quality hBN crystals in a pressure-controlled furnace (PCF) is a recent development that offers new opportunities for improving material quality <cit.>.
hBN single crystals are small, a few millimeters at most, and therefore do not meet industrial manufacturing requirements. The transition of BN from the use in fundamental research to industrial applications requires process development capable of providing large area single crystal or polycrystalline films that meet both device requirements and high volume production needs.
Techniques like chemical vapor deposition (CVD) <cit.>, metal-organic CVD (MOCVD) <cit.>, molecular beam epitaxy (MBE) <cit.>,
and physical vapor deposition (PVD) <cit.> are under development, offering potential platforms for BN substrates with sufficient interface and/or bulk qualities for the desired technological applications.
Recently, amorphous (or nanocrystalline) boron nitride (aBN) has gained interest due to its ability to be grown at room temperature on arbitrary substrates <cit.> and its low dielectric constant <cit.>.
In particular, the full encapsulation of CVD-grown graphene in directly grown aBN was recently reported to yield promising electronic properties, showing its potential as a scalable substrate for graphene and other 2D materials <cit.>.
The evolving diversity of available BN substrates – from high-quality hBN crystals to hBN/aBN films – underlines the need for comparable and meaningful characterization methods of both the crystal quality itself and the ability to be used as substrate in van der Waals heterostructures.
To assess the crystal quality, BN is mostly investigated by cathodoluminescence (CL) <cit.>, photoluminescence (PL) <cit.> or Raman spectroscopy <cit.>.
Raman spectroscopy provides a rapid and non-invasive way to assess the quality of BN films and is therefore an indispensable tool for efficiently monitoring the parameter tuning during the optimization of growth processes.
On the other hand, CL measurements, and especially time-resolved cathodoluminescence (TRCL) measurements, provide a much more sensitive way to evaluate the crystal quality and to gain a deeper understanding of the type of crystal defects <cit.>.
Here, the free exciton lifetime, which is limited by exciton-defect scattering, yields a sensitive benchmark for the bulk crystal quality.
However, CL measurements are not suitable for most scalable BN growth approaches as they are only applicable to crystalline and thick (>10 μ m) hBN.
While these evaluations are highly important for benchmarking the quality of hBN crystals for optical applications with hBN as the active layer, methods to evaluate the surface quality become equally important when used as a substrate <cit.>.
For example, correlations between the amount or type of defects and the surface roughness seem possible but remain a topic under investigation <cit.>.
Graphene, due to its exceptional high charge carrier mobility, is one of the most interesting 2D materials to be used in combination with BN.
In addition to the interest in its electronic properties, graphene is highly sensitive to charge disorder and surface roughness of the substrate, which can drastically limit the device performance <cit.>.
Due to both its huge potential for future high-mobility applications and its high sensitivity to the underlying substrate, the evaluation of graphene itself on the substrate of interest is an appealing way to investigate the suitability of various BN films or crystals as a substrate for 2D materials.
Spatially-resolved confocal Raman microscopy on graphene encapsulated in BN provides a very powerful and sensitive way to directly assess strain, doping, and nm-strain variations <cit.> and directly link it to the electronic properties extracted from charge transport measurements on Hall bar devices <cit.>.
Here, we present a comprehensive evaluation of various BN substrates and present a benchmarking protocol covering the characterization of the BN as well as the evaluation of the electronic properties of exfoliated graphene on these BN substrates.
Our study includes the growth of both high-quality hBN crystals grown via APHT or in a PCF and the growth of scalable BN films via PVD or CVD (section II).
We extract the free exciton lifetime from TRCL measurements to compare the crystal quality of BN crystals and evaluate both exfoliated flakes and films via Raman spectroscopy (section III).
Using exfoliated graphene, we fabricated dry-transferred devices on the BN substrates to assess the interface quality via spatially-resolved Raman spectroscopy (section IV).
To establish reliable benchmarks we focus on the full width at half maximum (FWHM) of the graphene Raman 2D peak, which we identify as the most sensitive benchmark for an early-stage evaluation of the suitability for a diverse set of BN substrates (section V).
Following a newly developed fabrication scheme, with a focus on rapid processing, we build Hall bar structures (section VI) to extract the charge carrier mobility at different charge carrier densities (section VII).
We demonstrate that graphene encapsulated in APHT hBN crystals compares in electronic quality to graphene encapsulated in HPHT-grown hBN crystals, reaching room temperature charge carrier mobilities around 80,000 cm^2/(Vs) at a charge carrier density of n=1e12 cm^-2.
Importantly, we identify a free exciton lifetime above 1 ns as sufficient to achieve these high charge carrier mobilities, and a lifetime around 100 ps as sufficient for charge carrier mobilities up to 30,000 cm^2/(Vs).
Specifically, we demonstrate that graphene on PVD-grown nanocrystalline boron nitride with a graphene 2D peak FWHM below 22 cm^-1 consistently yields charge carrier mobilities exceeding 10,000 cm^2/(Vs).
This underscores the potential of PVD-grown BN films as scalable substrates for high-mobility graphene devices.
§ GROWTH AND PREPARATION OF BORON NITRIDE
In Fig. <ref> the growth and preparation conditions of hBN crystals (APHT and PCF growth) and of BN thin films (PVD and CVD growth) are summarized.
§.§ Atmospheric Pressure and High Temperature (APHT)
The hBN crystals in this study were grown from an iron flux (at RWTH) or a chromium-nickel flux (at RWTH and GEMaC) via the APHT method
(see Ref. <cit.> for details on the growth at RWTH).
A schematic illustration of the growth setup is shown in Fig. <ref>(a).
The boron source is either boron powder (RWTH) mixed with the metal pieces or a pyrolytic BN crucible (GEMaC).
The system is first annealed at high temperature under a continuous gas flow of either H_2 and Ar (RWTH) or N_2 (GEMaC) to minimize contaminations with oxygen and carbon.
The hBN crystal growth is started upon introduction of N_2 while maintaining a constant pressure.
After a soaking phase at high temperature to saturate the metal flux with B and N, the furnace is cooled down to a lower temperature at a slow rate (typically between 0.5^∘C/h and 4^∘C/h).
The system is then quickly quenched down to room temperature.
The resulting thick hBN crystal layer is firmly attached to the underlying metal ingot, as seen in Fig. <ref>(a) (right upper panel).
The hBN crystal sheet can be detached from the metal ingot by immersion in hydrochloric acid at room temperature, see the detached crystal sheet in the lower right panel of Fig. <ref>(a).
This step does not affect the quality of the hBN crystals and simplifies the further processing of the hBN crystals for exfoliation and subsequent dry-transfer <cit.>.
§.§ Growth in a pressure-controlled furnace (PCF)
In the PCF method, hBN crystals are grown from the liquid phase of Li_3BN_2-BN at high temperature <cit.>.
The Li_3BN_2 powder is pre-synthesized from Li_3N (Sigma Aldrich, purity > 99.5%) <cit.> and mixed with commercial hBN powder (20 wt% hBN and 80 wt% Li_3BN_2) in a molybdenum crucible.
Since Li_3BN_2 is very sensitive to air and moisture, the growth preparation is performed under inert conditions and careful handling is necessary throughout the whole process.
Both hBN powder and crucibles are pre-treated at 1200^∘C under vacuum and an Ar/H_2 gas mix to remove potential contaminations.
The growth is performed in a pressure-controlled furnace (PCF) <cit.> (schematically shown in Fig. <ref>(b)) during a fast cooling after a dwelling time of 2 hours at 1800^∘C and a pressure of 180 MPa under Ar atmosphere.
The temperature and the pressure are increased at a rate of 100^∘C/min and 10 MPa/min.
The chamber is initially purged three times (Ar filling followed by pumping) to remove oxygen and moisture.
The sample obtained is an ingot composed of hBN crystals embedded in a solidified Li_3BN_2 matrix.
Li_3BN_2 dissolution is then performed to retrieve individual crystals.
They show lateral sizes ranging from several hundred micrometers to a few millimeters, as exemplarily shown in the right image of Fig. <ref>(b).
The crystals have been previously used as encapsulants for TMDs and graphene to obtain optical and electronic devices <cit.>.
§.§ Chemical Vapor Deposition (CVD)
For the growth of hBN layers, Pt(111) thin films with a thickness of 500 nm were prepared on sapphire wafers <cit.>.
The hBN films were grown via CVD in an ultra-high vacuum cold-wall chamber for wafers up to 4-inch <cit.>.
Prior to all hBN preparations, the Pt/sapphire substrates were cleaned by a series of argon sputtering, O_2 exposure and annealing cycles to 1200 K until sharp Pt(111) (1×1) LEED patterns were observed.
Subsequently, hBN layers were prepared at temperatures above 1000 K with borazine (HBNH)_3 as precursor with a partial pressure of 10^-7 mbar (Fig. <ref>(c)).
The quality of the grown hBN layers was evaluated with scanning low energy electron diffraction (x-y LEED), x-ray photoelectron spectroscopy (XPS), ultraviolet photoelectron spectroscopy (UPS), scanning tunneling microscopy (STM) and atomic force microscopy (AFM). The reported thickness is derived from XPS intensity values.
The transfer procedure employs the electrochemical “bubbling” method <cit.>.
First, the hBN/Pt(111) sample was spin-coated with 4 wt % polymethyl methacrylate (PMMA) (495 K).
Then we put the PMMA/hBN/Pt sample as working electrode and a Pt wire as counter electrode in a 1.0 M KCl solution.
A negative voltage between -3 V and -5 V was applied to the sample to delaminate the hBN/PMMA film from the substrate.
The delaminated hBN/PMMA film was then rinsed in ultrapure water (Milli-Q Advantage A10) and transferred onto a clean 280 nm Si/SiO_2 substrate with gold markers.
In the next step, the PMMA was removed via a sequence of acetone/ethanol baths and gradual annealing in air at temperatures up to 600 K for 3 h (see transferred BN film in Fig. <ref>(c)).
§.§ Physical Vapor Deposition (PVD)
Thin nanocrystalline BN films are grown via physical vapor deposition using an ion beam assisted deposition process (IBAD-PVD).
The films exhibit a hexagonal bonding structure, as assessed by X-ray absorption near edge spectroscopy (XANES) <cit.>, but show no x-ray diffraction, indicating the absence of long-range crystalline order.
The films were grown directly on Si/SiO_2 wafers with an oxide thickness of 285 nm and pre-defined Cr/Au marker.
The growth was performed at room temperature using nitrogen gas and a solid boron source.
The IBAD process consisted of the interaction of a directional beam of 500 eV nitrogen ions from a Kaufman source with concurrent boron atoms from an electron beam evaporator.
The growth setup is schematically depicted in Fig. <ref>(d).
The N-ion and B-atom fluxes were carefully tuned to obtain stoichiometric BN and to avoid other B_xN_y phases and bonding configurations <cit.>.
The resulting BN film is 30 nm thick and homogeneously covers the whole wafer, as shown in Fig. <ref>(d).
§ CATHODOLUMINESENCE AND RAMAN SPECTROSCOPY ON BN CRYSTALS AND FILMS
The as-grown hBN crystals (APHT and PCF) are examined by means of TRCL measurements and spatially-resolved confocal Raman microscopy.
Raman spectroscopy offers a fast and non-invasive tool to spatially probe optical phonons and their lifetimes, which provide a measure of the crystallinity of hBN <cit.>.
Time-resolved cathodoluminescence (TRCL) measurements allow to determine the lifetimes of the free excitons, which are strongly affected by scattering with defects, providing a valuable and sensitive tool to locally probe the crystal quality of hBN crystals.
§.§ Cathodoluminescence
TRCL measurements were performed on isolated bulk hBN crystals. For each supplier (GEMaC, RWTH, LMI), crystals from multiple growth batches were investigated. The deep UV spectra were recorded at room temperature in a JEOL7001F field-emission-gun scanning electron microscope (SEM) coupled to a Horiba Jobin-Yvon cathodoluminescence (CL) detection system, as described in detail in earlier works <cit.>.
To allow for time resolution, a custom-built fast-beam blanker was installed inside the SEM column, as described in <cit.>.
The dynamics of the free exciton population is captured by measuring the time-dependent CL intensity in a wavelength range of 215 ± 7.5 nm with a temporal resolution of 100 ps.
This spectral range corresponds to the main luminescence feature of high quality hBN crystals.
The 215 nm CL signal results from the indirect exciton recombination assisted via optical phonons.
To focus on bulk properties and minimize surface recombinations, the electron beam acceleration voltage was set to 15 kV <cit.>.
The current was maintained at a low value of 85 pA to prevent nonlinear effects <cit.>.
An exemplary TRCL measurement for each type of hBN crystal investigated is shown in Fig. <ref>.
At t=0, the luminescence peak intensity is normalized to 1, to allow for better comparison of the time evolution.
The free exciton lifetime τ_CL is extracted by fitting the initial decay curve with an exponential function.
We obtain τ = 0.15 ns, 1.67 ns and 3.32 ns for PCF (LMI), APHT (RWTH) and APHT (GEMaC) crystals, respectively.
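As an illustration, such a mono-exponential fit of the initial decay could be implemented as follows (a minimal sketch with synthetic data; the variable names, the 1.4 ns example lifetime and the 2 ns fit window are our assumptions, not the actual analysis code):

import numpy as np
from scipy.optimize import curve_fit

def decay(t, tau, a):
    # mono-exponential model for the normalized CL intensity
    return a * np.exp(-t / tau)

# hypothetical trace: time axis in ns, intensity normalized to 1 at t = 0
t_ns = np.linspace(0.0, 5.0, 200)
i_norm = np.exp(-t_ns / 1.4) + 0.02 * np.random.randn(t_ns.size)

# fit only the initial decay, as described in the text
mask = t_ns < 2.0
(tau, a), _ = curve_fit(decay, t_ns[mask], i_norm[mask], p0=(1.0, 1.0))
print(f"free exciton lifetime tau_CL = {tau:.2f} ns")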
Statistical evaluation (mean and standard deviation) across different growth batches and spatial positions on the crystals yielded τ = (0.11 ± 0.07) ns, (1.4 ± 0.3) ns and (3.0 ± 0.4) ns for 22, 13 and 8 measured areas on PCF (LMI), APHT (RWTH) and APHT (GEMaC), respectively.
We emphasize the need for statistical evaluation due to notable crystal-to-crystal variations.
The variation in free exciton lifetimes is associated with differences in defect densities in the crystals.
We note that APHT-grown crystals exhibit lifetimes similar to those produced via the HPHT method <cit.>, which is consistent with a low defect density. In contrast, PCF-grown crystals show significantly shorter lifetimes. The higher defect density could result from vacancies, impurities, or structural anomalies, all of which may affect the free exciton lifetime.
The variations in lifetimes, even within crystals grown by the same method, highlight the need for careful crystal selection for specific experiments or applications.
Understanding these defect-induced changes in the optical properties is crucial for the further development of hBN applications in optoelectronics and quantum technology.
The benchmarking of hBN crystals via TRCL also sets the stage for understanding their role as substrates in graphene-based devices.
The observed variation in the exciton lifetime of hBN grown by the different methods is expected to correlate with the electronic quality of encapsulated 2D materials.
Its impact on the charge carrier mobility in hBN/graphene/hBN Hall bar devices will be detailed further below.
§.§ Confocal Raman spectroscopy
Raman spectroscopy is a practical and widely used optical probe for characterizing both hBN crystals and thin films.
Its advantage of accessibility makes it an important tool for monitoring the effect of changes in the growth parameters on the crystal quality of BN.
The primary benchmark for assessing the crystal quality of hBN via Raman spectroscopy is the FWHM Γ_E_2g of the E_2g Raman peak, which correlates with the lifetime τ_E_2g of optical phonons corresponding to intralayer vibrations of B and N atoms <cit.>.
The contributions to the phonon linewidth in hBN with a natural isotopic content of boron originate primarily from isotopic disorder-induced scattering, anharmonic phonon decay, or impurity scattering <cit.>.
Thus, in hBN with the same crystal structure and isotope distribution, the variations in FWHM are mainly due to the degree of disorder in the crystal <cit.>.
Changes in bond lengths due to increased defect density or not-purely sp^2-hybridized bonds might also impact the FWHM, as these factors contribute to averaging effects over phonons of different frequencies.
Typically, high quality hBN crystals grown via HPHT or APHT exhibit a FWHM around 8 cm^-1 <cit.>.
For thin BN films this value can increase up to 40 cm^-1 <cit.>.
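For orientation, the linewidth can be translated into a phonon lifetime via the energy-time relation (a standard conversion, added here for context): τ_E_2g = ħ/Γ_E_2g ≈ (5.3 ps cm^-1)/Γ_E_2g, so that Γ_E_2g = 8 cm^-1 corresponds to τ_E_2g ≈ 0.7 ps.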
§.§.§ Experimental setup
Raman measurements were conducted using a commercial confocal micro-Raman setup (WITec alpha 300R) at room temperature.
We utilized a 532 nm excitation wavelength, a laser power of 2 mW and a 100× magnification objective with a numerical aperture of 0.9.
The inelastically scattered light was collected through a fibre (core diameter 100 μm) and sent to a CCD through a half-meter spectrometer equipped with a 1200 lines/mm grating.
For linewidth analysis of high-quality hBN crystals, we employed a grating with 2400 lines/mm.
§.§.§ hBN crystals
We start by exfoliating thin hBN flakes from the bulk crystals using tape (Ultron 1007R) onto a Si/SiO_2 wafer with a 90 nm oxide layer and observe a similar distribution of thicknesses and lateral sizes for flakes from all crystal suppliers.
Flakes with thicknesses between 20 nm and 40 nm were selected based on their color contrast towards the substrate <cit.>, as these thicknesses are optimal for building state-of-the-art hBN-encapsulated graphene devices.
In Figs. <ref>(a)-(c), we present optical images of representative flakes from the three suppliers ((a) for APHT (RWTH), (b) for APHT (GEMaC), and (c) for PCF (LMI)).
All flakes look similar in terms of contamination and thickness homogeneity.
The FWHM of the E_2g peak is extracted by fitting a single Lorentzian function to the individual Raman spectra.
Spatially-resolved maps of the FWHM are shown in Fig. <ref>(d)-(f).
A respective single Raman spectrum at a representative position, along with the corresponding histogram to the FWHM map, are shown in Figs. <ref> (g)-(i).
There is a narrow hBN Raman peak around 1365 cm^-1 for all flakes.
The maps in Figs. <ref>(d)-(f) reveal a homogeneous and narrow distribution of the FWHM, suggesting uniform crystal quality throughout the exfoliated flakes.
A closer inspection of the statistical distribution (insets of (g)-(i)), reveals a Gaussian distribution of the FWHM around 8 cm^-1, demonstrating high crystallinity for all flakes.
These values are comparable to previous studies on APHT or HPHT grown hBN <cit.>.
Interestingly, we observe no significant difference in the Raman FWHM between PCF and APHT crystals. This observation seems surprising since the CL lifetime of the PCF-grown hBN flakes is more than an order of magnitude shorter than the respective lifetimes of the APHT-grown hBN crystals (see Fig. <ref>). It is, however, important to emphasize again that the main contribution to the E_2g peak's FWHM in natural hBN results from isotopic disorder <cit.>, which is essentially identical for all crystals with natural isotopic composition. In contrast, defect type and density can vary significantly between different growth methods. Our studies suggest that the presence of crystal defects in high-quality hBN crystals can barely be probed by Raman spectroscopy. Analyzing the lifetime of free excitons, on the other hand, offers a significantly more sensitive tool for the local probing of crystal defects.
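A single-Lorentzian fit of the E_2g peak, as used above for the FWHM maps, could look as follows (a minimal sketch with a synthetic spectrum; peak parameters and variable names are illustrative assumptions):

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, w0, fwhm, a, c):
    # single Lorentzian of amplitude a on a constant background c
    return a * (fwhm / 2)**2 / ((w - w0)**2 + (fwhm / 2)**2) + c

# hypothetical spectrum around the E_2g peak (Raman shift in 1/cm)
w = np.linspace(1300.0, 1430.0, 500)
spec = lorentzian(w, 1365.0, 8.0, 1.0, 0.05) + 0.01 * np.random.randn(w.size)

popt, _ = curve_fit(lorentzian, w, spec, p0=(1365.0, 10.0, 1.0, 0.0))
print(f"omega_E2g = {popt[0]:.1f} 1/cm, Gamma_E2g = {popt[1]:.1f} 1/cm")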
§.§.§ BN films
We next evaluate boron nitride films, which are either grown directly on the Si/SiO_2 substrate (PVD) or grown by means of CVD and then wet-transferred to a Si/SiO_2 substrate.
In the case of boron nitride films, cathodoluminescence measurements are not feasible, mainly due to the small thickness of the films.
An optical image of the PVD-grown film is shown in Fig. <ref>(a).
We observe a homogeneously grown film over the entire wafer with some spots where the BN is damaged.
In Fig. <ref>(b) we additionally show a Raman spectrum at a representative position.
In contrast to the previously shown Raman spectra of flakes from exfoliated hBN crystals, we do not observe a single narrow Raman peak.
Instead, a broad response ranging from 1100 cm^-1 to 1600 cm^-1 is observed.
This can be related to the amorphous nature of the BN film, which leads to a strong broadening of the Raman peak due to the inclusion of nanocrystalline regions within the BN film <cit.>.
The broadening may also result from random strain effects <cit.>, which lead to an averaging over different bond lengths between the atoms and thus to a statistical averaging of the Raman response due to variations in the phonon frequencies.
As the PVD grown BN films will later be used as a substrate for graphene, we next explore their surface roughness by atomic force microscopy (AFM).
Figure <ref>(c) displays an AFM image for a small region of the sample shown in Fig. <ref>(a).
A root mean square (RMS) roughness of 0.2 nm is extracted from this map.
This low value is in line with RMS values
of hBN and the 2D semiconductor WSe_2, which have proven to be ideal substrates for graphene <cit.>.
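The RMS roughness itself is a simple statistic of the height map; a minimal sketch of its extraction reads as follows (synthetic data, and a real analysis would first subtract a fitted background plane):

import numpy as np

# hypothetical 256 x 256 AFM height map in meters
h = 0.2e-9 * np.random.randn(256, 256)
h = h - h.mean()                      # remove the mean height offset
rms = np.sqrt(np.mean(h**2))          # root mean square roughness
print(f"RMS roughness = {rms * 1e9:.2f} nm")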
In Fig. <ref>(d) we show an optical image of the CVD grown BN film which was transferred on SiO_2.
Due to the wet-transfer process and because multiple layers of hBN are transferred on top of each other, the BN film does not have a homogeneous thickness.
From XPS measurements we estimate an average thickness of 3 layers of hBN.
The Raman spectrum at a representative position is shown in Fig. <ref>(e) together with a histogram of the distribution of the FWHM in panel (f).
We observe a well-defined hBN Raman peak at ω_E_2g = 1365 cm^-1 with a FHWM of Γ_E_2g = 34 cm^-1.
The large FWHM is in striking contrast to the previously discussed crystals but comparable to other BN films shown in literature <cit.>.
We attribute the large FWHM to the wet transfer procedure and the remaining PMMA residues on the transferred BN film.
To conclude the pre-characterization of BN crystals and films, we note that there is no common method that is both sensitive enough and applicable to all forms of BN, i.e., crystals and films.
Especially, for nanocrystalline or amorphous BN films, which are recognized as potential substrates for scaled devices, the usual characterization methods are not feasible.
We therefore proceed with the evaluation of graphene in contact with BN, by using graphene as a sensitive detector for the suitability of the underlying BN/hBN substrate for charge transport.
§ DRY-TRANSFER OF GRAPHENE ENCAPSULATED IN BN
The next step in the benchmarking protocol is to build van der Waals heterostructures using BN material to fully encapsulate graphene. The substrate quality of BN is then explored by probing the electronic properties of graphene using both spatially-resolved Raman spectroscopy and charge transport measurements.
For the stacking of the heterostructures we start by exfoliating hBN and graphene flakes onto 90 nm Si/SiO_2.
The flakes are searched and classified using a home-built automatic flake detection tool <cit.>.
Suitable flakes with a thickness between 20 and 40 nm are identified and stacked on top of each other using standard dry-transfer methods with poly(bisphenol A carbonate) (PC) film on top of a drop-shaped polydimethylsiloxane (PDMS) stamping tool <cit.>.
The stacking process is schematically depicted in Fig. <ref>.
For the benchmarking of hBN crystals (APHT and PCF), graphene is picked up using hBN flakes, which were exfoliated from their respective bulk crystals while for the evaluation of BN films the graphene is picked up by exfoliated HPHT-grown hBN (Figs. <ref>(b)-(d)).
In the next step, the hBN/graphene half stack is either transferred onto corresponding hBN crystal flakes (Figs. <ref>(f)-(g)) or placed onto the BN films (Fig. <ref>(e)).
The protection of graphene from the top by an hBN crystal is important to ensure heterostructures of comparable quality and exclude influences on the graphene quality and device performance that can be caused by chemicals or airborne contaminations <cit.> during the subsequent processing steps.
Optical microscope images of the finished stacks are shown in Fig. <ref>(a)-(e).
The lateral size of the stacks is limited by the size of the exfoliated hBN and graphene flakes.
Within this project, we characterized in total over 40 dry-transferred samples to obtain a statistical evaluation of the various BN substrates and to exclude sample-to-sample variations.
§ RAMAN SPECTROSCOPY ON BN-GRAPHENE HETEROSTRUCTURES
§.§ Extraction of strain, strain variations and doping
We first give an overview on the key concepts of graphene-based Raman spectroscopy.
Figure <ref>(a) shows a typical Raman spectrum of graphene encapsulated in hBN crystals.
Three prominent peaks are typically observed corresponding to the above analyzed hBN E_2g peak and the graphene G and 2D peak.
The G peak in graphene results from out-of-phase in-plane vibrations of two carbon atoms of the two sublattices and involves phonons from the Γ-point, whereas the double resonant 2D peak corresponds to a breathing mode, involving phonons near the K-point <cit.>.
A crucial and sensitive quantity for the evaluation of the electronic properties of graphene is the FWHM of the 2D peak, which is directly connected to the extent of nm-scale strain variations within the laser spot <cit.> and therefore also contains information on the roughness of the substrate <cit.>.
As strain variations locally break the hexagonal symmetry of the lattice, a vector potential is induced which in turn leads to an increased probability of backscattering of electrons in charge transport leading to a reduced charge carrier mobility <cit.>.
The 2D FWHM is therefore the main quantity of interest in our study as it directly connects the interface quality given by the BN with the electronic quality of the adjacent graphene sheet.
The G and 2D peak are both susceptible to strain as well as doping <cit.> and the 2D peak position is additionally influenced by dielectric screening from the environment <cit.>, which is, however, not relevant in the scope of this study.
To separate the effects of strain and doping from spatially-resolved Raman maps, the positions of the 2D and G peak are plotted against each other, as illustrated in Fig. <ref>(b).
Since the two peaks shift differently as function of doping and strain, the slopes of the distributions can be used to qualitatively evaluate the type of disorder in the system (strain and/or doping).
A distribution parallel to the strain axis has a slope of 2.2 and is connected to biaxial strain whereas a distribution along the doping axis has a slope ranging between 0.3 and 0.7 depending on both their charge carrier type and the substrate <cit.>.
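This decomposition can be written as a small linear problem; the following sketch projects a measured (ω_G, ω_2D) pair onto the two axes (the reference point of strain-free, charge-neutral graphene and the exact doping slope are assumed values for illustration):

import numpy as np

# assumed reference point of charge-neutral, strain-free graphene (1/cm);
# the actual values depend on laser wavelength and calibration
W_G0, W_2D0 = 1581.6, 2676.9

S_STRAIN = 2.2   # slope of the (biaxial) strain axis, from the text
S_DOPING = 0.55  # representative doping-axis slope (range 0.3-0.7)

def decompose(w_g, w_2d):
    # solve d = x_s * (1, S_STRAIN) + x_d * (1, S_DOPING) for the
    # G-shift components x_s (strain) and x_d (doping) in 1/cm
    basis = np.array([[1.0, 1.0], [S_STRAIN, S_DOPING]])
    d = np.array([w_g - W_G0, w_2d - W_2D0])
    return np.linalg.solve(basis, d)

x_s, x_d = decompose(1582.5, 2679.5)
print(f"strain component: {x_s:.2f} 1/cm, doping component: {x_d:.2f} 1/cm")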
§.§ Results of spatially-resolved Raman spectroscopy
Raman measurements were performed with the same setup as for the characterization of the hBN crystals and films, using a grating of 1200 lines/mm.
Figures <ref>(f)-(j) show Raman maps of the graphene 2D linewidth for the regions highlighted with black dashed rectangles in the optical images in Figs. <ref>(a)-(e) for each BN source, respectively.
The corresponding histograms, taken from a selected region of interest highlighted with a dashed rectangle in the corresponding panel of Figs. <ref>(f)-(j), are shown in Figs. <ref>(k)-(o).
The color scale is the same for all maps.
Regions of a higher 2D linewidth within a stack may either result from bubbles (hydrocarbons) that are trapped at the interface between hBN and graphene or may be related to regions with multilayers.
Residual hydrocarbons most likely originate from tape residues during exfoliation or from the polymer used for stacking <cit.>. The latter is a commonly known challenge when using polymer-based dry-transfer techniques.
We observe the formation of bubbles for all stacks produced in this study.
Comparison of the contamination-free regions of the 2D FWHM maps reveals the lowest Γ_2D values for graphene on APHT-hBN (Figs. <ref>(f)-(g)), followed by PCF-grown crystals (Fig. <ref>(h)) and then the BN films (Figs. <ref>(i)-(j)).
The corresponding histograms in Figs. <ref>(k)-(o) enable the quantitative evaluation of the 2D FWHM maps.
The maximum of the statistical distribution increases from 16.5 cm^-1 for APHT-grown crystals to 18 cm^-1 for PCF-grown crystals and to values larger than 20 cm^-1 for BN films.
We identify the peak position of the 2D linewidth distribution as a robust and sensitive quantity to evaluate the interface quality of the underlying BN, in line with previous works <cit.>.
We conclude that the degree of strain variations in graphene is lowest for the APHT hBN crystals, which shows that they have the highest interface quality (flatness) among the studied BN.
The respective ω_2D vs ω_G scatter plots are shown in Fig. <ref>(p)-(t), where the color code corresponds to the FWHM of the 2D peak.
For the stack presented in the first row of Fig. <ref> we chose a region with a spatially homogeneous and low 2D FWHM. The corresponding 2D vs G peak position distribution shows a strong clustering along the 2.2 strain axis indicating very small strain variations and negligible doping.
For the sample in the second row, the distribution with the lowest 2D linewidth (blue data points) is again mainly oriented along the strain axis. However, areas with inclusions (bubbles) exhibit larger 2D linewidths (green, yellow and reddish colors) with a distribution away from the strain axis, which is probably due to larger doping.
The effect of doping on the peak positions is most clearly seen for the PVD-grown BN shown in the fourth row of Fig. <ref>.
The peak positions show a curved distribution that results from both strain and doping.
To go beyond the evaluation of the comparison of representative examples, we plot the graphene 2D linewidth of high-quality regions of all evaluated samples in a combined histogram in Fig. <ref>.
For the APHT-grown crystals we observe narrow distributions of the graphene 2D linewidth with the maximum at 16.5 cm^-1, demonstrating an excellent and reproducible interface quality between graphene and hBN across 20 different heterostructures with hBN crystals taken from different batches.
The histogram distribution of the PCF crystals is broader, ranging from 17.5 cm^-1 to 19 cm^-1, indicating a larger amount of strain variations; when evaluating different stacks, we additionally observe a larger sample-to-sample variation in the 2D linewidth distribution.
While the analysis of the free exciton lifetime τ_CL in Fig. <ref> shows slightly shorter lifetimes for RWTH-APHT crystals compared to the GEMaC-APHT crystals, there are no differences in the amount of nm-strain variations of encapsulated graphene as inferred from Raman spectroscopy. In contrast, the broader and shifted graphene 2D linewidth distribution of heterostacks fabricated with the PCF crystals seems to be related to their shorter exciton lifetimes.
As the graphene 2D linewidth is connected to nm-strain variations caused by the roughness of the substrate surface, we conclude that the defect concentration in the PCF-grown crystals is so high that it affects the electronic properties of graphene.
Furthermore, this quantity allows us to compare various substrates with each other independent of their crystalline nature.
§ PROCESSING INTO HALL BAR STRUCTURES
We next determine the key quantity of interest, the charge carrier mobility of graphene, and link it to the Raman 2D linewidths of graphene and the free exciton lifetimes of the BN substrate.
For this purpose, the fabricated heterostructures are patterned into Hall bar devices and electrically contacted to perform gate-dependent charge transport measurements.
For this study, we established a reproducible fabrication process yielding a high homogeneity of the electronic quality of graphene within a device as well as a high throughput of functioning contacts.
For all devices we applied the same fabrication routine.
A simplified overview of the various processing steps is shown in Fig. <ref>(e).
First, the Hall bar structure is defined by electron beam lithography (EBL) (step 1).
Subsequently, 30 nm aluminum (Al) is deposited using electron beam evaporation at a rate of 0.1 nm/s (step 2), and after lift-off we are left with the final Hall bar structure protected by the Al hard mask (step 3).
The structure is subsequently etched using atomic layer etching (Oxford Plasma Pro 100) with Ar/SF_6 at flow rates of 5/20 sccm, an HF power of 50 W, and a 5 s oxygen etch pulse.
The Al is chemically removed using tetramethylammonium hydroxide (TMAH) (step 4).
The contacts to the Hall bar are defined in a second EBL step (step 5) and 5 nm/70 nm of Cr/Au is evaporated, with a rate of 0.2 nm/s and 0.5 nm/s (step 6).
An optical microscope image of a representative, structured and contacted device is shown in Fig. <ref>(b).
At this point, it is important to note that we have taken particular care to minimize the time between the individual processing steps.
The etching, the subsequent second lithography step and the evaporation of Cr/Au was performed within the same day.
By fabricating many devices, we have clear evidence that the time window between etching into the Hall bar structure where we expose the edges of graphene to air and the deposition of the side contacts to graphene should be minimized. For all devices, this time window was below 4 h.
§.§ Influence of processing on the electronic properties of graphene
In this section we discuss the impact of the Hall bar processing onto the mechanical and electronic quality of the devices by using spatially-resolved Raman spectroscopy.
In Fig. <ref> we show a representative device based on PCF (LMI) hBN, with an optical image of the stack in panel (a) and the final device in panel (b).
Figures <ref>(c) and (d) depict spatially-resolved Raman maps of the graphene 2D linewidth of the heterostructure before and after processing, respectively.
The black rectangle in Fig. <ref>(c) illustrates the region chosen for the Hall-bar patterning, and only the Raman data from this region are used for comparison with the final Hall bar.
The respective histogram is shown in Fig. <ref>(f) (green data).
A comparison of the two maps in Figs. <ref>(c) and (d) shows: (i) an overall increase in the Raman 2D FWHM in the center of the Hall bar, which leads to a shift of the respective histogram (red data in Fig. <ref>(f)) towards higher wavenumbers, and (ii) a strong increase in linewidth towards the edges of the Hall bar (reddish color in Fig. <ref>(d)), which is seen as a tail in the histogram extending to values above 20 cm^-1.
This finding could be linked to mechanical stress that occurs during the fabrication steps.
The different temperatures in the fabrication process, e.g. after baking the resist for lithography or during etching, can lead to stress due to the different thermal expansion coefficients of the materials within the stack and the substrate.
Considering the Raman 2D and G peak positions in Fig. <ref>(g), we clearly observe a red shift of the positions along the 2.2 strain line for the stack after fabrication.
This cloud (red data points) has shifted towards phonon frequencies closer to those of "pristine" graphene <cit.>, suggesting that strain release may have occurred during the device fabrication.
We only show one example here, but this finding is observed in many different samples, regardless of the type of BN used.
A more detailed investigation is beyond the scope of this paper and future works focusing on the monitoring of different fabrication steps are necessary to draw clearer conclusions.
§.§ Room temperature charge carrier mobilities
The individual Hall bars with the different BN substrates were fabricated in heterostack regions of the lowest possible and homogeneous graphene Raman 2D FWHM (as an example, see black rectangle in Fig. <ref>(c)).
All charge transport measurements were taken at room temperature under vacuum.
An example of a Hall bar with the measurement scheme is depicted in Fig. <ref>(a).
We use an AC voltage V_0 = 1 V at a frequency of 77 Hz and a series resistance of R_P=1 MΩ to pass a constant current of I = 1 μ A between the source and drain contact.
The four-terminal voltage drop is measured for different regions along the graphene transport channel, labelled as V_xx in Fig. <ref>(a) for the upper region as an example.
This voltage drop converts to the resistivity (1/conductivity) following ρ = 1/σ = (W/L) · V_xx/I, where L is the distance between the contacts and W the width of the transport channel.
Figure <ref>(b) shows the gate dependent resistivity and conductivity for an APHT device (red traces in panel (d)).
For all measured regions, the conductivity σ reaches at least 400 e^2/h at large gate voltages, i.e. large charge carrier densities, which is mainly limited by electron-phonon scattering <cit.>.
Importantly, and in contrast to previous studies, we observe homogeneous transport properties along the graphene channel and a high yield of functioning contacts (larger than 90 %).
While the electronic homogeneity is likely due to the pre-selection of the regions via Raman mapping, we link the high throughput of functioning contacts to the decreased time between the etching (i.e. exposing of graphene contact areas) and evaporation of the Cr/Au.
The charge carrier density n is extracted from Hall effect measurements. It is connected to the gate voltage by n=α (V_G-V_G^0), where V_G^0 is the position of the charge neutrality point, i.e. the voltage of the Dirac peak, and α is the gate lever arm.
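Putting these relations together, the mobility extraction from a four-terminal gate sweep amounts to a few lines (a minimal sketch; geometry, lever arm and all numbers are hypothetical examples, not device values):

import numpy as np

E_CHARGE = 1.602e-19  # elementary charge (C)

def drude_mobility(v_g, v_xx, i_sd, length, width, alpha, v_dirac):
    # sigma = (L/W) * I / V_xx, n = alpha * (V_G - V_G^0), mu = sigma/(n e)
    sigma = (length / width) * i_sd / v_xx   # sheet conductivity (S)
    n = alpha * (v_g - v_dirac)              # carrier density (1/cm^2)
    return sigma / (np.abs(n) * E_CHARGE)    # mobility (cm^2/(Vs))

# hypothetical numbers for illustration only
mu = drude_mobility(v_g=5.0, v_xx=2.0e-3, i_sd=1e-6,
                    length=3e-6, width=1e-6, alpha=2.4e11, v_dirac=0.5)
print(f"mu = {mu:.0f} cm^2/(Vs)")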
The carrier mobilities μ = σ/(ne) of graphene with the different BN substrates are shown in Figs. <ref>(c)-(h) for each BN source individually. As a reference, we show transport data for a Hall bar device where we used HPHT hBN (NIMS) (see Fig. <ref>(c)).
For each device, multiple regions were measured. Traces of the same color are from different regions of the same device. There are only small variations in transport characteristics within a single device but also between different devices fabricated from the same BN source.
This finding further confirms a robust and reliable processing routine, which was developed as part of the benchmarking study.
For devices built by APHT hBN we measure the highest charge carrier mobilities exceeding 80,000 cm^2(Vs)^-1 at a charge carrier density of |n|=1e12 cm^-2 (see Figs. <ref>(c) and (d)).
These values are fully in line with state-of-the-art high-mobility graphene devices using HPHT hBN <cit.> (see also Fig. <ref>(c)) or APHT hBN from other sources <cit.>.
We therefore highlight the viability of APHT hBN crystals as a true alternative to HPHT hBN crystals, for high-performance graphene devices.
For the PCF-grown crystals in Fig. <ref>(f), we observe a carrier mobility of up to 30,000 cm^2(Vs)^-1 at |n|=1e12 cm^-2.
The lower charge carrier mobility of graphene encapsulated in PCF-grown hBN, when compared to APHT hBN, is fully consistent with our two previous observations, a shorter free exciton lifetime and a higher 2D linewidth of graphene encapsulated in PCF-grown hBN crystals.
The relation between an increase in graphene 2D linewidth and a decrease in charge carrier mobility is understood in terms of increased electron backscattering due to the stronger nm-strain variations <cit.>.
For the PVD grown BN film (Fig. <ref>(g)) we extract charge carrier mobilities over 10,000 cm^2(Vs)^-1 at n=1e12 cm^-2, while we achieve mobilities around 4,000 cm^2(Vs)^-1 at n=1e12 cm^-2 for the CVD-grown and wet-transferred films (Fig. <ref>(h)).
§ DISCUSSION
In Fig. <ref>, we summarize the main results of the BN benchmarking study: (a) room temperature charge carrier mobility vs carrier density and carrier mobilities at n=1e12 cm^-2 vs (b) free exciton lifetime of hBN, (c) Γ_E_2g of hBN and (d) graphene 2D linewidth of all BN substrates.
In Fig. <ref>(a) we show the transport traces for the region of highest mobility for each device shown in Figs. <ref>(c) to (h).
As mentioned above, APHT grown hBN allows for equally high graphene mobilities as achieved for HPHT-grown hBN crystals.
These hBN sources are of high relevance for many research groups, who are interested in high quality hBN crystals for fundamental research.
The PCF-grown crystals, following another route of hBN crystal growth, allow for mobilities up to 30,000 cm^2(Vs)^-1 at n=1e12 cm^-2, demonstrating the great potential of new synthesis routes for the production of high quality hBN crystals.
One aim of this synthesis route is to satisfy the increasing demand of hBN crystals from many research groups.
However, these approaches to grow high quality hBN crystals are not scalable, because they cannot be easily combined with technologically relevant substrates and the desired thicknesses can only be achieved via mechanical exfoliation.
Scalable methods for growing BN are therefore needed to unlock the full potential of graphene-based electronics in future nanoelectronic devices.
In this respect, the PVD growth method is most promising because (i) it allows the growth of films tens of nanometers thick with very low surface roughness and (ii) the films can be deposited directly onto Si/SiO_2 substrates. The large BN thickness screens disorder from the silicon substrate, while the deposition on the target substrate avoids the need for large-scale layer transfer. Most importantly, the PVD-grown BN allows for room temperature carrier mobilities of graphene exceeding 10,000 cm^2(Vs)^-1 at n=1e12 cm^-2. We conclude that the low temperature PVD growth of BN on SiO_2 is a promising platform for achieving scalable BN substrates not only for graphene, but also for other 2D materials.
If we compare the charge carrier mobility with the Raman 2D FWHM, we see a clear trend of decreasing mobility with increasing 2D FWHM.
This is in good agreement with the finding that nm-scale strain variations are the limiting factor for high charge carrier mobilities <cit.> and shows that the graphene Raman 2D FWHM is a good measure for benchmarking, as used in the IEC TS 62607-6-6 key control characteristics <cit.>.
Whereas we do not find a correlation between the FWHM of the hBN Raman peak and the charge carrier mobility, we observe a clear correlation between the CL lifetime and the mobility, as shown in Fig. <ref>(b).
We observe that we need CL lifetimes of over 1 ns to achieve charge carrier mobilities in the range of 80,000 cm^2(Vs)^-1 at n=1e12 cm^-2.
For CL lifetimes of 100 ps we achieve charge carrier mobilities up to 30,000 cm^2(Vs)^-1.
The interface quality is therefore connected to the hBN crystal quality, i.e. the number of defects, in a sensitive way.
In conclusion, we have presented a comprehensive study of the electronic properties of graphene on different boron nitride substrates using a newly developed reproducible processing routine.
We have shown the complete process from boron nitride synthesis, through its optical characterization, to the optical and electronic characterization of graphene after encapsulation and Hall bar fabrication.
We identify the Raman spectrum of BN as a valuable measure for distinguishing hBN in the high crystallinity limit from BN films, but we also point out the limitations of the Raman analysis when comparing high-quality hBN crystals.
In this respect, time-resolved cathodoluminescence has a clear advantage over Raman spectroscopy when evaluating the as-grown quality of hBN, as the probing of the free exciton lifetime is very sensitive to the defects in hBN.
The fabrication of graphene-based heterostructures on BN substrates demonstrates the high sensitivity of graphene to the environment, allowing graphene to be used as a sensitive detector of the substrate and interface quality.
Variations in the quality of the graphene-BN interface are directly reflected in a broadening of the graphene Raman 2D peak. This broadening has a direct effect on the carrier mobility, i.e. the mobility is inversely proportional to the peak of the 2D linewidth distribution of graphene. It is therefore advisable to characterize the Raman 2D linewidth distribution of the finished heterostructure prior to any processing.
In terms of benchmarking we find that a CL lifetime larger than 1 ns is sufficient for high hBN crystal quality and high graphene-hBN interface qualities with low nm strain variations in graphene, which is essential for fundamental studies on highest mobility graphene-based devices.
For scalable approaches we see that a graphene Raman 2D linewidth below 22 cm^-1 is necessary to achieve charge carrier mobilities over 10,000 cm^2(Vs)^-1.
PVD-grown BN films, therefore, offer a promising platform for scalable high mobility graphene devices.
§ DATA AVAILABILITY
The data supporting the findings of this study are available in a Zenodo repository at https://doi.org/10.5281/zenodo.13684712.
§ ACKNOWLEDGEMENTS
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 881603 (Graphene Flagship), T.O., S.B., P.S, C.S., and B.B. acknowledge support from the European Research Council (ERC) under grant agreement No. 820254, and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 - 390534769. H.Y.C. was supported by a SPARK grant of the Swiss National Science Foundation (Grant No. CRSK-2_220582). A.H. acknowledges a Forschungskredit of the University of Zürich (Grant No. FK-20-206 114).
K.W. and T.T. acknowledge support from the JSPS KAKENHI (Grant Numbers 21H05233 and 23H02052) and World Premier International Research Center Initiative (WPI), MEXT, Japan.
§ REFERENCES

[Meng2019Feb] J. Meng, D. Wang, L. Cheng, M. Gao, and X. Zhang, Recent progress in synthesis, properties, and applications of hexagonal boron nitride-based heterostructures, Nanotechnology 30, 074003 (2019). https://doi.org/10.1088/1361-6528/aaf301

[Schue2016Dec] L. Schué, I. Stenger, F. Fossard, A. Loiseau, and J. Barjon, Characterization methods dedicated to nanometer-thick hBN layers, 2D Mater. 4, 015028 (2016). https://doi.org/10.1088/2053-1583/4/1/015028

[Backes2020Jan] C. Backes et al., Production and processing of graphene and related materials, 2D Mater. 7, 022001 (2020). https://doi.org/10.1088/2053-1583/ab1e0a

[Naclerio2023Feb] A. E. Naclerio and P. R. Kidambi, A review of scalable hexagonal boron nitride (h-BN) synthesis for present and future applications, Adv. Mater. 35, 2207374 (2023). https://doi.org/10.1002/adma.202207374

[Watanabe2004Jun] K. Watanabe, T. Taniguchi, and H. Kanda, Direct-bandgap properties and evidence for ultraviolet lasing of hexagonal boron nitride single crystal, Nat. Mater. 3, 404 (2004). https://doi.org/10.1038/nmat1134

[Kubota2007Aug] Y. Kubota, K. Watanabe, O. Tsuda, and T. Taniguchi, Deep ultraviolet light-emitting hexagonal boron nitride synthesized at atmospheric pressure, Science 317, 932 (2007). https://doi.org/10.1126/science.1144216

[Bourrellier2016Jul] R. Bourrellier, S. Meuret, A. Tararan, O. Stéphan, M. Kociak, L. H. G. Tizei, and A. Zobelli, Bright UV single photon emission at point defects in h-BN, Nano Lett. 16, 4317 (2016). https://doi.org/10.1021/acs.nanolett.6b01368

[Grosso2017Sep] G. Grosso, H. Moon, B. Lienhard, S. Ali, D. K. Efetov, M. M. Furchi, P. Jarillo-Herrero, M. J. Ford, I. Aharonovich, and D. Englund, Tunable and high-purity room temperature single-photon emission from atomic defects in hexagonal boron nitride, Nat. Commun. 8 (2017). https://doi.org/10.1038/s41467-017-00810-2

[Martinez2016Sep] L. J. Martínez, T. Pelini, V. Waselowski, J. R. Maze, B. Gil, G. Cassabois, and V. Jacques, Efficient single photon emission from a high-purity hexagonal boron nitride crystal, Phys. Rev. B 94, 121405 (2016). https://doi.org/10.1103/PhysRevB.94.121405

[Fournier2021Jun] C. Fournier, A. Plaud, S. Roux, A. Pierret, M. Rosticher, K. Watanabe, T. Taniguchi, S. Buil, X. Quélin, J. Barjon, J.-P. Hermier, and A. Delteil, Position-controlled quantum emitters with reproducible emission wavelength in hexagonal boron nitride, Nat. Commun. 12 (2021). https://doi.org/10.1038/s41467-021-24019-6

[Lindsay2011Oct] L. Lindsay and D. A. Broido, Enhanced thermal conductivity and isotope effect in single-layer hexagonal boron nitride, Phys. Rev. B 84, 155421 (2011). https://doi.org/10.1103/PhysRevB.84.155421

[Yuan2019May] C. Yuan, J. Li, L. Lindsay, D. Cherns, J. W. Pomeroy, S. Liu, J. H. Edgar, and M. Kuball, Modulating the thermal conductivity in hexagonal boron nitride via controlled boron isotope concentration, Commun. Phys. 2 (2019). https://doi.org/10.1038/s42005-019-0145-5
[Xue et al.(2011)Xue, Sanchez-Yamagishi, Bulmash, Jacquod, Deshpande, Watanabe, Taniguchi, Jarillo-Herrero, and LeRoy]Xue2011Apr
author author J. Xue, author J. Sanchez-Yamagishi, author D. Bulmash, author P. Jacquod, author A. Deshpande, author K. Watanabe, author T. Taniguchi, author P. Jarillo-Herrero, and author B. J. LeRoy, title title Scanning tunnelling microscopy and spectroscopy of ultra-flat graphene on hexagonal boron nitride, https://doi.org/10.1038/nmat2968 journal journal Nat. Mater. volume 10, pages 282 (year 2011)NoStop
[Woods et al.(2014)Woods, Britnell, Eckmann, Ma, Lu, Guo, Lin, Yu, Cao, Gorbachev, Kretinin, Park, Ponomarenko, Katsnelson, Gornostyrev, Watanabe, Taniguchi, Casiraghi, Gao, Geim, and Novoselov]Woods2014Jun
author author C. R. Woods, author L. Britnell, author A. Eckmann, author R. S. Ma, author J. C. Lu, author H. M. Guo, author X. Lin, author G. L. Yu, author Y. Cao, author R. V. Gorbachev, author A. V. Kretinin, author J. Park, author L. A. Ponomarenko, author M. I. Katsnelson, author Yu. N. Gornostyrev, author
K. Watanabe, author T. Taniguchi, author C. Casiraghi, author H.-J. Gao, author A. K. Geim, and author K. S. Novoselov, title title Commensurate–incommensurate transition in graphene on hexagonal boron nitride, https://doi.org/10.1038/nphys2954 journal journal Nat. Phys. volume 10, pages 451 (year 2014)NoStop
[Britnell et al.(2012)Britnell, Gorbachev, Jalil, Belle, Schedin, Katsnelson, Eaves, Morozov, Mayorov, Peres, Castro Neto, Leist, Geim, Ponomarenko, and Novoselov]Britnell2012Mar
author author L. Britnell, author R. V. Gorbachev, author R. Jalil, author B. D. Belle, author F. Schedin, author M. I. Katsnelson, author L. Eaves, author S. V. Morozov, author A. S. Mayorov, author N. M. R. Peres, author A. H. Castro Neto, author J. Leist, author A. K. Geim, author L. A. Ponomarenko, and author K. S. Novoselov, title title Electron Tunneling through Ultrathin Boron Nitride Crystalline Barriers, https://doi.org/10.1021/nl3002205 journal journal Nano Lett. volume 12, pages 1707 (year 2012)NoStop
[Sharpe et al.(2019)Sharpe, Fox, Barnard, Finney, Watanabe, Taniguchi, Kastner, and Goldhaber-Gordon]Sharpe2019Aug
author author A. L. Sharpe, author E. J. Fox, author A. W. Barnard, author J. Finney, author K. Watanabe, author T. Taniguchi, author M. A. Kastner, and author D. Goldhaber-Gordon, title title Emergent ferromagnetism near three-quarters filling in twisted bilayer graphene, https://doi.org/10.1126/science.aaw3780 journal journal Science volume 365, pages 605 (year 2019)NoStop
[Tebbe et al.(2023)Tebbe, Schüütte, Watanabe, Taniguchi, Stampfer, Beschoten, and Waldecker]Tebbe2023Apr
author author D. Tebbe, author M. Schüütte, author K. Watanabe, author T. Taniguchi, author C. Stampfer, author B. Beschoten, and author L. Waldecker, title title Tailoring the dielectric screening in WS_2 graphene heterostructures, https://doi.org/10.1038/s41699-023-00394-0 journal journal npj 2D Mater. Appl. volume 7, pages 1 (year 2023)NoStop
[Tebbe et al.(2024)Tebbe, Schüütte, Watanabe, Taniguchi, Stampfer, Beschoten, and Waldecker]Tebbe2024May
author author D. Tebbe, author M. Schüütte, author K. Watanabe, author T. Taniguchi, author C. Stampfer, author B. Beschoten, and author L. Waldecker, title title Distance Dependence of the Energy Transfer Mechanism in WS_2-Graphene Heterostructures, https://doi.org/10.1103/PhysRevLett.132.196902 journal journal Phys. Rev. Lett. volume 132, pages 196902 (year 2024)NoStop
[Dean et al.(2010)Dean, Young, Meric, Lee, Wang, Sorgenfrei, Watanabe, Taniguchi, Kim, Shepard, and Hone]Dean2010Oct
author author C. R. Dean, author A. F. Young, author I. Meric, author C. Lee, author L. Wang, author S. Sorgenfrei, author K. Watanabe, author T. Taniguchi, author P. Kim, author K. L. Shepard, and author J. Hone, title title Boron nitride substrates for high-quality graphene electronics, https://doi.org/10.1038/nnano.2010.172 journal journal Nat. Nanotechnol. volume 5, pages 722 (year
2010)NoStop
[Wang et al.(2013)Wang, Meric, Huang, Gao, Gao, Tran, Taniguchi, Watanabe, Campos, Muller, Guo, Kim, Hone, Shepard, and Dean]Wang2013Nov
author author L. Wang, author I. Meric, author P. Y. Huang, author Q. Gao, author Y. Gao, author H. Tran, author T. Taniguchi, author K. Watanabe, author L. M. Campos, author D. A. Muller, author J. Guo, author P. Kim, author J. Hone, author K. L. Shepard, and author C. R. Dean, title title One-Dimensional
Electrical Contact to a Two-Dimensional Material, https://doi.org/10.1126/science.1244358 journal journal Science volume 342, pages 614 (year 2013)NoStop
[Banszerus et al.(2015)Banszerus, Schmitz, Engels, Dauber, Oellers, Haupt, Watanabe, Taniguchi, Beschoten, and Stampfer]Banszerus2015Jul
author author L. Banszerus, author M. Schmitz, author S. Engels, author J. Dauber, author M. Oellers, author F. Haupt, author K. Watanabe, author T. Taniguchi, author B. Beschoten, and author C. Stampfer, title title Ultrahigh-mobility graphene devices from chemical vapor deposition on reusable copper, https://doi.org/10.1126/sciadv.1500222 journal journal Sci. Adv. volume 1, pages e1500222 (year 2015)NoStop
[Onodera et al.(2020)Onodera, Taniguchi, Watanabe, Isayama, Masubuchi, Moriya, and Machida]Onodera2020Jan
author author M. Onodera, author T. Taniguchi, author K. Watanabe, author M. Isayama, author S. Masubuchi, author R. Moriya, and author T. Machida, title title Hexagonal Boron Nitride Synthesized at Atmospheric Pressure Using Metal Alloy Solvents: Evaluation as a Substrate for 2D Materials, https://doi.org/10.1021/acs.nanolett.9b04641 journal journal Nano Lett. volume 20, pages 735 (year 2020)NoStop
[Sonntag et al.(2020)Sonntag, Li, Plaud, Loiseau, Barjon, Edgar, and Stampfer]Sonntag2020Jun
author author J. Sonntag, author J. Li, author A. Plaud, author A. Loiseau, author J. Barjon, author J. H. Edgar, and author C. Stampfer, title title Excellent electronic transport in heterostructures of graphene and monoisotopic boron-nitride grown at atmospheric pressure, https://doi.org/10.1088/2053-1583/ab89e5 journal journal 2D Mater. volume 7, pages 031009 (year 2020)NoStop
[Ouaj et al.(2023)Ouaj, Kramme, Metzelaars, Li, Watanabe, Taniguchi, Edgar, Beschoten, Köögerler, and Stampfer]Ouaj2023Sep
author author T. Ouaj, author L. Kramme, author M. Metzelaars, author J. Li, author K. Watanabe, author T. Taniguchi, author J. H. Edgar, author B. Beschoten, author P. Köögerler, and author C. Stampfer, title title Chemically detaching hBN crystals grown at atmospheric pressure and high temperature for high-performance graphene devices, https://doi.org/10.1088/1361-6528/acf2a0 journal journal Nanotechnology volume 34, pages
475703 (year 2023)NoStop
[Ajayi et al.(2017)Ajayi, Ardelean, Shepard, Wang, Antony, Taniguchi, Watanabe, Heinz, Strauf, Zhu, and Hone]Ajayi2017Jul
author author O. A. Ajayi, author J. V. Ardelean, author G. D. Shepard, author J. Wang, author A. Antony, author T. Taniguchi, author K. Watanabe, author T. F. Heinz, author S. Strauf, author X.-Y. Zhu, and author J. C. Hone, title title Approaching the intrinsic photoluminescence linewidth in transition metal dichalcogenide monolayers, https://doi.org/10.1088/2053-1583/aa6aa1 journal journal 2D Mater. volume 4, pages 031011 (year 2017)NoStop
[Ye et al.(2018)Ye, Waldecker, Ma, Rhodes, Antony, Kim, Zhang, Deng, Jiang, Lu, Smirnov, Watanabe, Taniguchi, Hone, and Heinz]Ye2018Sep
author author Z. Ye, author L. Waldecker, author E. Y. Ma, author D. Rhodes, author A. Antony, author B. Kim, author X.-X. Zhang, author M. Deng, author Y. Jiang, author Z. Lu, author D. Smirnov, author K. Watanabe, author T. Taniguchi, author J. Hone, and author T. F. Heinz, title title Efficient generation of
neutral and charged biexcitons in encapsulated WSe_2 monolayers, https://doi.org/10.1038/s41467-018-05917-8 journal journal Nat. Commun. volume 9, pages 1 (year 2018)NoStop
[Raja et al.(2019)Raja, Waldecker, Zipfel, Cho, Brem, Ziegler, Kulig, Taniguchi, Watanabe, Malic, Heinz, Berkelbach, and Chernikov]Raja2019Sep
author author A. Raja, author L. Waldecker, author J. Zipfel, author Y. Cho, author S. Brem, author J. D. Ziegler, author M. Kulig, author T. Taniguchi, author K. Watanabe, author E. Malic, author T. F. Heinz, author T. C. Berkelbach, and author A. Chernikov, title title Dielectric disorder in two-dimensional materials, https://doi.org/10.1038/s41565-019-0520-0 journal
journal Nat. Nanotechnol. volume 14, pages 832 (year 2019)NoStop
[Cadiz et al.(2017)Cadiz, Courtade, Robert, Wang, Shen, Cai, Taniguchi, Watanabe, Carrere, Lagarde, Manca, Amand, Renucci, Tongay, Marie, and Urbaszek]Cadiz2017May
author author F. Cadiz, author E. Courtade, author C. Robert, author G. Wang, author Y. Shen, author H. Cai, author T. Taniguchi, author K. Watanabe, author H. Carrere, author D. Lagarde, author M. Manca, author T. Amand, author P. Renucci, author S. Tongay, author X. Marie, and author B. Urbaszek, title title Excitonic Linewidth Approaching the Homogeneous Limit in MoS_2-Based van der Waals Heterostructures, https://doi.org/10.1103/PhysRevX.7.021026 journal journal Phys. Rev. X volume 7, pages 021026 (year 2017)NoStop
[Ersfeld et al.(2020)Ersfeld, Volmer, Rathmann, Kotewitz, Heithoff, Lohmann, Yang, Watanabe, Taniguchi, Bartels, Shi, Stampfer, and Beschoten]Ersfeld2020May
author author M. Ersfeld, author F. Volmer, author L. Rathmann, author L. Kotewitz, author M. Heithoff, author M. Lohmann, author B. Yang, author K. Watanabe, author T. Taniguchi, author L. Bartels, author J. Shi, author C. Stampfer, and author B. Beschoten, title title Unveiling Valley Lifetimes of Free Charge Carriers in Monolayer WSe_2, https://doi.org/10.1021/acs.nanolett.9b05138
journal journal Nano Lett. volume 20, pages 3147 (year 2020)NoStop
[Shi et al.(2020)Shi, Shih, Gustafsson, Rhodes, Kim, Watanabe, Taniguchi, Papićć, Hone, and Dean]Shi2020Jul
author author Q. Shi, author E.-M. Shih, author M. V. Gustafsson, author D. A. Rhodes, author B. Kim, author K. Watanabe, author T. Taniguchi, author Z. Papićć, author J. Hone, and author C. R. Dean, title title Odd- and even-denominator fractional quantum Hall states in monolayer WSe_2, https://doi.org/10.1038/s41565-020-0685-6 journal journal Nat. Nanotechnol. volume 15, pages 569 (year
2020)NoStop
[Eich et al.(2018)Eich, Pisoni, Pally, Overweg, Kurzmann, Lee, Rickhaus, Watanabe, Taniguchi, Ensslin, and Ihn]Eich2018Aug
author author M. Eich, author R. Pisoni, author A. Pally, author H. Overweg, author A. Kurzmann, author Y. Lee, author P. Rickhaus, author K. Watanabe, author T. Taniguchi, author K. Ensslin, and author T. Ihn, title title Coupled Quantum Dots in Bilayer Graphene, https://doi.org/10.1021/acs.nanolett.8b01859 journal journal Nano Lett. volume 18, pages 5042 (year 2018)NoStop
[Banszerus et al.(2018)Banszerus, Frohn, Epping, Neumaier, Watanabe, Taniguchi, and Stampfer]Banszerus2018Aug
author author L. Banszerus, author B. Frohn, author A. Epping, author D. Neumaier, author K. Watanabe, author T. Taniguchi, and author C. Stampfer, title title Gate-Defined Electron–Hole Double Dots in Bilayer Graphene, https://doi.org/10.1021/acs.nanolett.8b01303 journal journal Nano Lett. volume 18, pages 4785 (year 2018)NoStop
[Icking et al.(2022)Icking, Banszerus, Wöörtche, Volmer, Schmidt, Steiner, Engels, Hesselmann, Goldsche, Watanabe, Taniguchi, Volk, Beschoten, and Stampfer]Icking2022Nov
author author E. Icking, author L. Banszerus, author F. Wöörtche, author F. Volmer, author P. Schmidt, author C. Steiner, author S. Engels, author J. Hesselmann, author M. Goldsche, author K. Watanabe, author T. Taniguchi, author C. Volk, author B. Beschoten, and author C. Stampfer, title title Transport Spectroscopy of Ultraclean
Tunable Band Gaps in Bilayer Graphene, https://doi.org/10.1002/aelm.202200510 journal journal Adv. Electron. Mater. volume 8, pages 2200510 (year 2022)NoStop
[Icking et al.(2024)Icking, Emmerich, Watanabe, Taniguchi, Beschoten, Lemme, Knoch, and Stampfer]Icking2024Sep
author author E. Icking, author D. Emmerich, author K. Watanabe, author T. Taniguchi, author B. Beschoten, author M. C. Lemme, author J. Knoch, and author C. Stampfer, title title Ultrasteep Slope Cryogenic FETs Based on Bilayer Graphene, journal journal Nano Lett. volume 2024, https://doi.org/10.1021/acs.nanolett.4c02463 10.1021/acs.nanolett.4c02463 (year 2024)NoStop
[Taniguchi and Watanabe(2007)]Taniguchi2007May
author author T. Taniguchi and author K. Watanabe, title title Synthesis of high-purity boron nitride single crystals under high pressure by using Ba–BN solvent, https://doi.org/10.1016/j.jcrysgro.2006.12.061 journal journal J. Cryst. Growth volume 303, pages 525 (year 2007)NoStop
[Zhigadlo(2014)]Zhigadlo2014Sep
author author N. D. Zhigadlo, title title Crystal growth of hexagonal boron nitride (hBN) from Mg–B–N solvent system under high pressure, https://doi.org/10.1016/j.jcrysgro.2014.06.038 journal journal J. Cryst. Growth volume 402, pages 308 (year 2014)NoStop
[Fukunaga et al.(2022)Fukunaga, Nakano, and Taniguchi]Fukunaga2022Nov
author author O. Fukunaga, author S. Nakano, and author T. Taniguchi, title title Experimental results on the high-pressure phase diagram of boron nitride, https://doi.org/10.35848/1347-4065/ac9dd5 journal journal Jpn. J. Appl. Phys. volume 61, pages 125502 (year 2022)NoStop
[Kubota et al.(2008)Kubota, Watanabe, Tsuda, and Taniguchi]Kubota2008Mar
author author Y. Kubota, author K. Watanabe, author O. Tsuda, and author T. Taniguchi, title title Hexagonal Boron Nitride Single Crystal Growth at Atmospheric Pressure Using Ni-Cr Solvent, https://doi.org/10.1021/cm7028382 journal journal Chem. Mater. volume 20, pages 1661 (year 2008)NoStop
[Hoffman et al.(2014)Hoffman, Clubine, Zhang, Snow, and Edgar]Hoffman2014May
author author T. B. Hoffman, author B. Clubine, author Y. Zhang, author K. Snow, and author J. H. Edgar, title title Optimization of Ni–Cr flux growth for hexagonal boron nitride single crystals, https://doi.org/10.1016/j.jcrysgro.2013.09.030 journal journal J. Cryst. Growth volume 393, pages 114 (year 2014)NoStop
[Edgar et al.(2014)Edgar, Hoffman, Clubine, Currie, and Jiang]Edgar2014Oct
author author J. H. Edgar, author T. B. Hoffman, author B. Clubine, author M. Currie, and author H. X. Jiang, title title Characterization of bulk hexagonal boron nitride single crystals grown by the metal flux technique, https://doi.org/10.1016/j.jcrysgro.2014.06.006 journal journal J. Cryst. Growth volume 403, pages 110 (year 2014)NoStop
[Liu et al.(2017)Liu, He, Ye, Du, Lin, Jiang, Liu, and Edgar]Liu2017Sep
author author S. Liu, author R. He, author Z. Ye, author X. Du, author J. Lin, author H. Jiang, author B. Liu, and author J. H. Edgar, title title Large-Scale Growth of High-Quality Hexagonal Boron Nitride Crystals at Atmospheric Pressure from an Fe–Cr Flux, https://doi.org/10.1021/acs.cgd.7b00871 journal journal Cryst. Growth Des. volume 17, pages 4932 (year 2017)NoStop
[Liu et al.(2018)Liu, He, Xue, Li, Liu, and Edgar]Liu2018Sep
author author S. Liu, author R. He, author L. Xue, author J. Li, author B. Liu, and author J. H. Edgar, title title Single Crystal Growth of Millimeter-Sized Monoisotopic Hexagonal Boron Nitride, https://doi.org/10.1021/acs.chemmater.8b02589 journal journal Chem. Mater. volume 30, pages 6222 (year 2018)NoStop
[Zhang et al.(2019)Zhang, Xu, Zhao, Shao, and Wan]Zhang2019Nov
author author S.-Y. Zhang, author K. Xu, author X.-K. Zhao, author Z.-Y. Shao, and author N. Wan, title title Improved hBN Single-Crystal Growth by Adding Carbon in the Metal Flux, https://doi.org/10.1021/acs.cgd.9b00712 journal journal Cryst. Growth Des. volume 19, pages 6252 (year 2019)NoStop
[Li et al.(2020a)Li, Yuan, Elias, Wang, Zhang, Ye, Huang, Kuball, Eda, Redwing, He, Cassabois, Gil, Valvin, Pelini, Liu, and Edgar]Li2020Jun
author author J. Li, author C. Yuan, author C. Elias, author J. Wang, author X. Zhang, author G. Ye, author C. Huang, author M. Kuball, author G. Eda, author J. M. Redwing, author R. He, author G. Cassabois, author B. Gil, author P. Valvin, author T. Pelini, author B. Liu, and author
J. H. Edgar, title title Hexagonal Boron Nitride Single Crystal Growth from Solution with a Temperature Gradient, https://doi.org/10.1021/acs.chemmater.0c00830 journal journal Chem. Mater. volume 32, pages 5066 (year 2020a)NoStop
[Li et al.(2021a)Li, Wang, Zhang, Elias, Ye, Evans, Eda, Redwing, Cassabois, Gil, Valvin, He, Liu, and Edgar]Li2021Apr
author author J. Li, author J. Wang, author X. Zhang, author C. Elias, author G. Ye, author D. Evans, author G. Eda, author J. M. Redwing, author G. Cassabois, author B. Gil, author P. Valvin, author R. He, author B. Liu, and author J. H. Edgar, title title Hexagonal Boron Nitride Crystal Growth from Iron, a Single Component Flux, https://doi.org/10.1021/acsnano.1c00115 journal journal ACS Nano volume 15, pages 7032 (year 2021a)NoStop
[Li et al.(2020b)Li, Elias, Ye, Evans, Liu, He, Cassabois, Gil, Valvin, Liu, and Edgar]Li2020
author author J. Li, author C. Elias, author G. Ye, author D. Evans, author S. Liu, author R. He, author G. Cassabois, author B. Gil, author P. Valvin, author B. Liu, and author J. H. Edgar, title title Single crystal growth of monoisotopic hexagonal boron nitride from a Fe–Cr flux, https://doi.org/10.1039/D0TC02143A journal journal J. Mater. Chem. C volume 8, pages 9931 (year
2020b)NoStop
[Li et al.(2021b)Li, Wen, Tan, Li, Li, Huang, Tian, Yao, Liao, Yu, Liu, Li, Guo, Huang, Gao, Wang, Bai, and Liu]Li2021
author author Y. Li, author X. Wen, author C. Tan, author N. Li, author R. Li, author X. Huang, author H. Tian, author Z. Yao, author P. Liao, author S. Yu, author S. Liu, author Z. Li, author J. Guo, author Y. Huang, author P. Gao, author L. Wang, author S. Bai, and author L. Liu, title title Synthesis of centimeter-scale high-quality polycrystalline hexagonal boron nitride films from Fe fluxes, https://doi.org/10.1039/D1NR02408F journal journal Nanoscale volume 13, pages 11223 (year 2021b)NoStop
[Zhang et al.(2021)Zhang, Yang, Wang, Zhong, and Chen]Zhang2021May
author author N. Zhang, author N. Yang, author W. Wang, author X. Zhong, and author X. Chen, title title Growth of hexagonal boron nitride crystals at atmospheric pressure from CuCr flux, https://doi.org/10.1016/j.jcrysgro.2021.126074 journal journal J. Cryst. Growth volume 562, pages 126074 (year 2021)NoStop
[Li et al.(2020c)Li, Garnier, Steyer, Journet, and Toury]Li2020Feb
author author Y. Li, author V. Garnier, author P. Steyer, author C. Journet, and author B. Toury, title title Millimeter-Scale Hexagonal Boron Nitride Single Crystals for Nanosheet Generation, https://doi.org/10.1021/acsanm.9b02315 journal journal ACS Appl. Nano Mater. volume 3, pages 1508 (year 2020c)NoStop
[Maestre et al.(2022)Maestre, Li, Garnier, Steyer, Roux, Plaud, Loiseau, Barjon, Ren, Robert, Han, Marie, Journet, and Toury]Maestre2022May
author author C. Maestre, author Y. Li, author V. Garnier, author P. Steyer, author S. Roux, author A. Plaud, author A. Loiseau, author J. Barjon, author L. Ren, author C. Robert, author B. Han, author X. Marie, author C. Journet, and author B. Toury, title title From the synthesis of hBN crystals to their use as nanosheets in van der Waals
heterostructures, https://doi.org/10.1088/2053-1583/ac6c31 journal journal 2D Mater. volume 9, pages 035008 (year 2022)NoStop
[Chubarov et al.(2013)Chubarov, Pedersen, Höögberg, Filippov, Engelbrecht, O'Connel, and Henry]Chubarov
author author M. Chubarov, author H. Pedersen, author H. Höögberg, author S. Filippov, author J. Engelbrecht, author J. O'Connel, and author A. Henry, title title Characterization of Boron Nitride thin films, in https://doi.org/10.1109/CLEOPR.2013.6600222 booktitle 2013 Conference on Lasers and Electro-Optics Pacific Rim (CLEOPR) (publisher IEEE, year 2013) pp. pages 2013–04NoStop
[Jang et al.(2016)Jang, Hong, Hyun, Yoon, Kim, Jeong, Shin, Park, Wong, Kwak, Park, Yu, Choi, Mishchenko, Withers, Novoselov, Lim, and Shin]Jang2016May
author author A.-R. Jang, author S. Hong, author C. Hyun, author S. I. Yoon, author G. Kim, author H. Y. Jeong, author T. J. Shin, author S. O. Park, author K. Wong, author S. K. Kwak, author N. Park, author K. Yu, author E. Choi, author A. Mishchenko, author F. Withers, author K. S. Novoselov,
author H. Lim, and author H. S. Shin, title title Wafer-Scale and Wrinkle-Free Epitaxial Growth of Single-Orientated Multilayer Hexagonal Boron Nitride on Sapphire, https://doi.org/10.1021/acs.nanolett.6b01051 journal journal Nano Lett. volume 16, pages 3360 (year 2016)NoStop
[Lee et al.(2018)Lee, Choi, Yun, Kim, Boandoh, Park, Shin, Ko, Lee, Kim, Lee, Kim, and Kim]Lee2018Nov
author author J. S. Lee, author S. H. Choi, author S. J. Yun, author Y. I. Kim, author S. Boandoh, author J.-H. Park, author B. G. Shin, author H. Ko, author S. H. Lee, author Y.-M. Kim, author Y. H. Lee, author K. K. Kim, and author S. M. Kim, title title Wafer-scale single-crystal hexagonal boron nitride film via self-collimated grain formation, https://doi.org/10.1126/science.aau2132 journal journal Science volume 362, pages 817 (year 2018)NoStop
[Asgari et al.(2022)Asgari, Viti, Balci, Shinde, Zhang, Ramezani, Sharma, Meersha, Menichetti, McAleese, Conran, Wang, Tomadin, Ferrari, and Vitiello]Asgari2022Jul
author author M. Asgari, author L. Viti, author O. Balci, author S. M. Shinde, author J. Zhang, author H. Ramezani, author S. Sharma, author A. Meersha, author G. Menichetti, author C. McAleese, author B. Conran, author X. Wang, author A. Tomadin, author A. C. Ferrari, and author M. S. Vitiello, title title
Terahertz photodetection in scalable single-layer-graphene and hexagonal boron nitride heterostructures, https://doi.org/10.1063/5.0097726 journal journal Appl. Phys. Lett. volume 121, pages 031103 (year 2022)NoStop
[Calandrini et al.(2023)Calandrini, Voronin, Balci, Barra-Burillo, Bylinkin, Shinde, Sharma, Casanova, Hueso, Chuvilin, McAleese, Conran, Wang, Teo, Volkov, Ferrari, Nikitin, and Hillenbrand]Calandrini2023Nov
author author E. Calandrini, author K. Voronin, author O. Balci, author M. Barra-Burillo, author A. Bylinkin, author S. M. Shinde, author S. Sharma, author F. Casanova, author L. E. Hueso, author A. Chuvilin, author C. McAleese, author B. R. Conran, author X. Wang, author K. Teo, author V. S. Volkov, author A. C. Ferrari, author A. Y. Nikitin, and author R. Hillenbrand, title title Near- and Far-Field Observation of Phonon Polaritons in Wafer-Scale Multilayer Hexagonal Boron Nitride Prepared by Chemical Vapor Deposition, https://doi.org/10.1002/adma.202302045 journal journal Adv. Mater. volume 35, pages 2302045 (year 2023)NoStop
[Tailpied et al.(2024)Tailpied, Andrieux-Ledier, Fossard, Méérot, Decams, and Loiseau]Tailpied2024Aug
author author L. Tailpied, author A. Andrieux-Ledier, author F. Fossard, author J.-S. Méérot, author J.-M. Decams, and author A. Loiseau, title title Ni(111) Substrate Engineering for the Epitaxial Chemical Vapor Deposition Growth of Wrinkle-Free Multilayer Rhombohedral Boron Nitride Films, journal journal Cryst. Growth Des. volume 2024, https://doi.org/10.1021/acs.cgd.4c00478 10.1021/acs.cgd.4c00478 (year 2024)NoStop
[Kobayashi and Akasaka(2008)]Kobayashi2008Nov
author author Y. Kobayashi and author T. Akasaka, title title Hexagonal BN epitaxial growth on (0001) sapphire substrate by MOVPE, https://doi.org/10.1016/j.jcrysgro.2008.07.010 journal journal J. Cryst. Growth volume 310, pages 5044 (year 2008)NoStop
[Kobayashi et al.(2012)Kobayashi, Kumakura, Akasaka, and Makimoto]Kobayashi2012Apr
author author Y. Kobayashi, author K. Kumakura, author T. Akasaka, and author T. Makimoto, title title Layered boron nitride as a release layer for mechanical transfer of GaN-based devices, https://doi.org/10.1038/nature10970 journal journal Nature volume 484, pages 223 (year 2012)NoStop
[Li et al.(2016)Li, Sundaram, El Gmili, Ayari, Puybaret, Patriarche, Voss, Salvestrini, and Ougazzaden]Li2016Jun
author author X. Li, author S. Sundaram, author Y. El Gmili, author T. Ayari, author R. Puybaret, author G. Patriarche, author P. L. Voss, author J. P. Salvestrini, and author A. Ougazzaden, title title Large-Area Two-Dimensional Layered Hexagonal Boron Nitride Grown on Sapphire by Metalorganic Vapor Phase Epitaxy, https://doi.org/10.1021/acs.cgd.6b00398 journal journal Cryst. Growth Des. volume 16, pages 3409 (year 2016)NoStop
[Jeong et al.(2019)Jeong, Kim, Kim, Moon, Han, Lee, Okello, Song, Choi, and Kim]Jeong2019Apr
author author H. Jeong, author D. Y. Kim, author J. Kim, author S. Moon, author N. Han, author S. H. Lee, author O. F. N. Okello, author K. Song, author S.-Y. Choi, and author J. K. Kim, title title Wafer-scale and selective-area growth of high-quality hexagonal boron nitride on Ni(111) by metal-organic chemical vapor deposition, https://doi.org/10.1038/s41598-019-42236-4 journal journal Sci. Rep. volume 9, pages 1 (year
2019)NoStop
[Cho et al.(2016)Cho, Summerfield, Davies, Cheng, Smith, Mellor, Khlobystov, Foxon, Eaves, Beton, and Novikov]Cho2016Sep
author author Y.-J. Cho, author A. Summerfield, author A. Davies, author T. S. Cheng, author E. F. Smith, author C. J. Mellor, author A. N. Khlobystov, author C. T. Foxon, author L. Eaves, author P. H. Beton, and author S. V. Novikov, title title Hexagonal Boron Nitride Tunnel Barriers Grown on Graphite by High Temperature Molecular Beam Epitaxy, https://doi.org/10.1038/srep34474 journal journal Sci. Rep. volume 6, pages 1 (year 2016)NoStop
[Elias et al.(2019)Elias, Valvin, Pelini, Summerfield, Mellor, Cheng, Eaves, Foxon, Beton, Novikov, Gil, and Cassabois]Elias2019Jun
author author C. Elias, author P. Valvin, author T. Pelini, author A. Summerfield, author C. J. Mellor, author T. S. Cheng, author L. Eaves, author C. T. Foxon, author P. H. Beton, author S. V. Novikov, author B. Gil, and author G. Cassabois, title title Direct band-gap crossover in epitaxial monolayer boron nitride, https://doi.org/10.1038/s41467-019-10610-5 journal journal Nat. Commun. volume 10, pages 1 (year 2019)NoStop
[Heilmann et al.(2020)Heilmann, Prikhodko, Hanke, Sabelfeld, Borgardt, and Lopes]Heilmann2020Feb
author author M. Heilmann, author A. S. Prikhodko, author M. Hanke, author A. Sabelfeld, author N. I. Borgardt, and author J. M. J. Lopes, title title Influence of Proximity to Supporting Substrate on van der Waals Epitaxy of Atomically Thin Graphene/Hexagonal Boron Nitride Heterostructures, https://doi.org/10.1021/acsami.9b21490 journal journal ACS Appl. Mater. Interfaces volume 12, pages 8897 (year 2020)NoStop
[Rousseau et al.(2024)Rousseau, Plo, Valvin, Cheng, Bradford, James, Wrigley, Mellor, Beton, Novikov, Jacques, Gil, and Cassabois]Rousseau2024Mar
author author A. Rousseau, author J. Plo, author P. Valvin, author T. S. Cheng, author J. Bradford, author T. S. S. James, author J. Wrigley, author C. J. Mellor, author P. H. Beton, author S. V. Novikov, author V. Jacques, author B. Gil, and author G. Cassabois, title title Spatially-resolved UV-C emission in epitaxial monolayer boron nitride, https://doi.org/10.1088/2053-1583/ad2f45
journal journal 2D Mater. volume 11, pages 025026 (year 2024)NoStop
[Caretti and Jiméénez(2011a)]Caretti2011Aug
author author I. Caretti and author I. Jiméénez, title title Composition and bonding structure of boron nitride B_1-xN_x thin films grown by ion-beam assisted evaporation, https://doi.org/10.1016/j.cplett.2011.06.001 journal journal Chem. Phys. Lett. volume 511, pages 235 (year 2011a)NoStop
[Jiméénez et al.(2012)Jiméénez, Torres, Caretti, Gago, and Albella]Jimenez2012Mar
author author I. Jiméénez, author R. Torres, author I. Caretti, author R. Gago, and author J. M. Albella, title title A review of monolithic and multilayer coatings within the boron–carbon–nitrogen system by ion-beam-assisted deposition, https://doi.org/10.1557/jmr.2011.398 journal journal J. Mater. Res. volume 27, pages 743 (year 2012)NoStop
[Torres et al.(2014)Torres, Caretti, Serin, Brun, Radnóóczic, and Jiméénez]Torres2014Aug
author author R. Torres, author I. Caretti, author V. Serin, author N. Brun, author G. Radnóóczic, and author I. Jiméénez, title title Reversed texture in nanometric carbon/boron nitride multilayers, https://doi.org/10.1016/j.carbon.2014.03.027 journal journal Carbon volume 74, pages 374 (year 2014)NoStop
[Hong et al.(2020)Hong, Lee, Lee, Lee, Ma, Kim, Yoon, Ihm, Kim, Shin, Kim, Jeon, Jeon, Kim, Lee, Lee, Antidormi, Roche, Chhowalla, Shin, and Shin]Hong2020Jun
author author S. Hong, author C.-S. Lee, author M.-H. Lee, author Y. Lee, author K. Y. Ma, author G. Kim, author S. I. Yoon, author K. Ihm, author K.-J. Kim, author T. J. Shin, author S. W. Kim, author E.-c. Jeon, author H. Jeon, author J.-Y. Kim, author H.-I. Lee, author Z. Lee, author A. Antidormi, author S. Roche, author M. Chhowalla, author H.-J. Shin, and author H. S. Shin, title title Ultralow-dielectric-constant amorphous boron nitride, https://doi.org/10.1038/s41586-020-2375-9 journal journal Nature volume 582, pages 511 (year 2020)NoStop
[Sattari-Esfahlan et al.(2023)Sattari-Esfahlan, Kim, Hyun, Choi, Hwang, Kim, Park, and Lee]Sattari-Esfahlan2023Feb
author author S. M. Sattari-Esfahlan, author H. G. Kim, author S. H. Hyun, author J.-H. Choi, author H. S. Hwang, author E.-T. Kim, author H. G. Park, and author J.-H. Lee, title title Low-Temperature Direct Growth of Amorphous Boron Nitride Films for High-Performance Nanoelectronic Device Applications, https://doi.org/10.1021/acsami.2c18706 journal journal ACS Appl. Mater. Interfaces volume 15, pages 7274 (year 2023)NoStop
[Martini et al.(2023)Martini, Miššeikis, Esteban, Azpeitia, Pezzini, Paletti, Ochapski, Convertino, Hernandez, Jimenez, and Coletti]Martini2023Aug
author author L. Martini, author V. Miššeikis, author D. Esteban, author J. Azpeitia, author S. Pezzini, author P. Paletti, author M. W. Ochapski, author D. Convertino, author M. G. Hernandez, author I. Jimenez, and author C. Coletti, title title Scalable High-Mobility Graphene/hBN Heterostructures, https://doi.org/10.1021/acsami.3c06120 journal journal ACS Appl. Mater. Interfaces volume 15, pages 37794 (year 2023)NoStop
[Schuéé et al.(2016b)Schuéé, Berini, Betz, Plaçais, Ducastelle, Barjon, and Loiseau]Schue2016Mar
author author L. Schuéé, author B. Berini, author A. C. Betz, author B. Plaçais, author F. Ducastelle, author J. Barjon, and author A. Loiseau, title title Dimensionality effects on the luminescence properties of hBN, https://doi.org/10.1039/C6NR01253A journal journal Nanoscale volume 8, pages 6986 (year 2016b)NoStop
[Schuéé et al.(2019)Schuéé, Sponza, Plaud, Bensalah, Watanabe, Taniguchi, Ducastelle, Loiseau, and Barjon]Schue2019Feb
author author L. Schuéé, author L. Sponza, author A. Plaud, author H. Bensalah, author K. Watanabe, author T. Taniguchi, author F. Ducastelle, author A. Loiseau, and author J. Barjon, title title Bright Luminescence from Indirect and Strongly Bound Excitons in h-BN, https://doi.org/10.1103/PhysRevLett.122.067401 journal journal Phys. Rev. Lett. volume 122, pages 067401 (year 2019)NoStop
[Roux et al.(2021)Roux, Arnold, Paleari, Sponza, Janzen, Edgar, Toury, Journet, Garnier, Steyer, Taniguchi, Watanabe, Ducastelle, Loiseau, and Barjon]Roux2021Oct
author author S. Roux, author C. Arnold, author F. Paleari, author L. Sponza, author E. Janzen, author J. H. Edgar, author B. Toury, author C. Journet, author V. Garnier, author P. Steyer, author T. Taniguchi, author K. Watanabe, author F. Ducastelle, author A. Loiseau, and author J. Barjon, title title Radiative
lifetime of free excitons in hexagonal boron nitride, https://doi.org/10.1103/PhysRevB.104.L161203 journal journal Phys. Rev. B volume 104, pages L161203 (year 2021)NoStop
[Cassabois et al.(2016)Cassabois, Valvin, and Gil]Cassabois2016Apr
author author G. Cassabois, author P. Valvin, and author B. Gil, title title Hexagonal boron nitride is an indirect bandgap semiconductor, https://doi.org/10.1038/nphoton.2015.277 journal journal Nat. Photonics volume 10, pages 262 (year 2016)NoStop
[Rousseau et al.(2021)Rousseau, Ren, Durand, Valvin, Gil, Watanabe, Taniguchi, Urbaszek, Marie, Robert, and Cassabois]Rousseau2021Dec
author author A. Rousseau, author L. Ren, author A. Durand, author P. Valvin, author B. Gil, author K. Watanabe, author T. Taniguchi, author B. Urbaszek, author X. Marie, author C. Robert, and author G. Cassabois, title title Monolayer Boron Nitride: Hyperspectral Imaging in the Deep Ultraviolet, https://doi.org/10.1021/acs.nanolett.1c02531 journal journal Nano Lett. volume 21, pages 10133 (year
2021)NoStop
[Reich et al.(2005)Reich, Ferrari, Arenal, Loiseau, Bello, and Robertson]Reich2005May
author author S. Reich, author A. C. Ferrari, author R. Arenal, author A. Loiseau, author I. Bello, and author J. Robertson, title title Resonant Raman scattering in cubic and hexagonal boron nitride, https://doi.org/10.1103/PhysRevB.71.205201 journal journal Phys. Rev. B volume 71, pages 205201 (year 2005)NoStop
[Stenger et al.(2017)Stenger, Schuéé, Boukhicha, Berini, Plaçais, Loiseau, and Barjon]Stenger2017Jun
author author I. Stenger, author L. Schuéé, author M. Boukhicha, author B. Berini, author B. Plaçais, author A. Loiseau, and author J. Barjon, title title Low frequency Raman spectroscopy of few-atomic-layer thick hBN crystals, https://doi.org/10.1088/2053-1583/aa77d4 journal journal 2D Mater. volume 4, pages 031003 (year 2017)NoStop
[Banszerus et al.(2017)Banszerus, Janssen, Otto, Epping, Taniguchi, Watanabe, Beschoten, Neumaier, and Stampfer]Banszerus2017Feb
author author L. Banszerus, author H. Janssen, author M. Otto, author A. Epping, author T. Taniguchi, author K. Watanabe, author B. Beschoten, author D. Neumaier, and author C. Stampfer, title title Identifying suitable substrates for high-quality graphene-based heterostructures, https://doi.org/10.1088/2053-1583/aa5b0f journal journal 2D Mater. volume 4, pages 025030 (year 2017)NoStop
[Caretti and Jiméénez(2012)]Caretti2012Sep
author author I. Caretti and author I. Jiméénez, title title Influence of carbon content and nitrogen vacancies on the bonding structure and mechanical performance of graphite-like BCxN thin films, https://doi.org/10.1063/1.4752757 journal journal J. Appl. Phys. volume 112, pages 063525 (year 2012)NoStop
[Couto et al.(2014)Couto, Costanzo, Engels, Ki, Watanabe, Taniguchi, Stampfer, Guinea, and Morpurgo]Couto2014Oct
author author N. J. G. Couto, author D. Costanzo, author S. Engels, author D.-K. Ki, author K. Watanabe, author T. Taniguchi, author C. Stampfer, author F. Guinea, and author A. F. Morpurgo, title title Random Strain Fluctuations as Dominant Disorder Source for High-Quality On-Substrate Graphene Devices, https://doi.org/10.1103/PhysRevX.4.041019 journal journal Phys. Rev. X volume 4, pages 041019 (year 2014)NoStop
[Lee et al.(2012)Lee, Ahn, Shim, Lee, and Ryu]Lee2012Aug
author author J. E. Lee, author G. Ahn, author J. Shim, author Y. S. Lee, and author S. Ryu, title title Optical separation of mechanical strain from charge doping in graphene, https://doi.org/10.1038/ncomms2022 journal journal Nat. Commun. volume 3, pages 1 (year 2012)NoStop
[Neumann et al.(2015)Neumann, Reichardt, Venezuela, Dröögeler, Banszerus, Schmitz, Watanabe, Taniguchi, Mauri, Beschoten, Rotkin, and Stampfer]Neumann2015Sep
author author C. Neumann, author S. Reichardt, author P. Venezuela, author M. Dröögeler, author L. Banszerus, author M. Schmitz, author K. Watanabe, author T. Taniguchi, author F. Mauri, author B. Beschoten, author S. V. Rotkin, and author C. Stampfer, title title Raman spectroscopy as probe of nanometre-scale strain variations in graphene, https://doi.org/10.1038/ncomms9429 journal
journal Nat. Commun. volume 6, pages 8429 (year 2015)NoStop
[Vincent et al.(2018)Vincent, Panchal, Booth, Power, Jauho, Antonov, and Kazakova]Vincent2018Dec
author author T. Vincent, author V. Panchal, author T. Booth, author S. R. Power, author A.-P. Jauho, author V. Antonov, and author O. Kazakova, title title Probing the nanoscale origin of strain and doping in graphene-hBN heterostructures, https://doi.org/10.1088/2053-1583/aaf1dc journal journal 2D Mater. volume 6, pages 015022 (year 2018)NoStop
[Solozhenko and Turkevich(1997)]Solozhenko1997Aug
author author V. L. Solozhenko and author V. Z. Turkevich, title title High pressure phase equilibria in the Li3N-BN system: in situ studies, https://doi.org/10.1016/S0167-577X(97)00025-6 journal journal Mater. Lett. volume 32, pages 179 (year 1997)NoStop
[Maestre et al.(2021)Maestre, Toury, Steyer, Garnier, and Journet]Maestre2021Oct
author author C. Maestre, author B. Toury, author P. Steyer, author V. Garnier, and author C. Journet, title title Hexagonal boron nitride: a review on selfstanding crystals synthesis towards 2D nanosheets, https://doi.org/10.1088/2515-7639/ac2b87 journal journal J. Phys.: Mater. volume 4, pages 044018 (year 2021)NoStop
[Sahni et al.(2018)Sahni, Ashuri, Emani, Kaduk, Néémeth, and Shaw]Sahni2018May
author author K. Sahni, author M. Ashuri, author S. Emani, author J. A. Kaduk, author K. Néémeth, and author L. L. Shaw, title title On the synthesis of lithium boron nitride (Li_3BN_2), https://doi.org/10.1016/j.ceramint.2018.01.200 journal journal Ceram. Int. volume 44, pages 7734 (year 2018)NoStop
[Schmitt et al.(2023)Schmitt, Mele, Rosticher, Taniguchi, Watanabe, Maestre, Journet, Garnier, Fèève, Berroir, Voisin, Plaçais, and Baudin]Schmitt2023Apr
author author A. Schmitt, author D. Mele, author M. Rosticher, author T. Taniguchi, author K. Watanabe, author C. Maestre, author C. Journet, author V. Garnier, author G. Fèève, author J. M. Berroir, author C. Voisin, author B. Plaçais, and author E. Baudin, title title High-field 1/f noise in hBN-encapsulated graphene transistors, https://doi.org/10.1103/PhysRevB.107.L161104 journal journal Phys. Rev. B volume 107, pages L161104 (year 2023)NoStop
[Verguts et al.(2017)Verguts, Schouteden, Wu, Peters, Vrancken, Wu, Li, Erkens, Porret, Huyghebaert, Van Haesendonck, De Gendt, and Brems]Verguts2017Oct
author author K. Verguts, author K. Schouteden, author C.-H. Wu, author L. Peters, author N. Vrancken, author X. Wu, author Z. Li, author M. Erkens, author C. Porret, author C. Huyghebaert, author C. Van Haesendonck, author S. De Gendt, and author S. Brems, title title Controlling Water Intercalation Is Key to a Direct Graphene Transfer, https://doi.org/10.1021/acsami.7b12573 journal journal ACS Appl. Mater. Interfaces volume 9, pages 37484 (year 2017)NoStop
[Hemmi et al.(2014)Hemmi, Bernard, Cun, Roth, Klööckner, Käälin, Weinl, Gsell, Schreck, Osterwalder, and Greber]Hemmi2014Mar
author author A. Hemmi, author C. Bernard, author H. Cun, author S. Roth, author M. Klööckner, author T. Käälin, author M. Weinl, author S. Gsell, author M. Schreck, author J. Osterwalder, and author T. Greber, title title High quality single atomic layer deposition of hexagonal boron nitride on single crystalline Rh(111) four-inch wafers, https://doi.org/10.1063/1.4866648 journal journal
Rev. Sci. Instrum. volume 85, pages 035101 (year 2014)NoStop
[Hemmi et al.(2021)Hemmi, Cun, Brems, Huyghebaert, and Greber]Hemmi2021Aug
author author A. Hemmi, author H. Cun, author S. Brems, author C. Huyghebaert, and author T. Greber, title title Wafer-scale, epitaxial growth of single layer hexagonal boron nitride on Pt(111), https://doi.org/10.1088/2515-7639/ac0d9e journal journal J. Phys.: Mater. volume 4, pages 044012 (year 2021)NoStop
[Cun et al.(2018)Cun, Hemmi, Miniussi, Bernard, Probst, Liu, Alexander, Kleibert, Mette, Weinl, Schreck, Osterwalder, Radenovic, and Greber]Cun2018Feb
author author H. Cun, author A. Hemmi, author E. Miniussi, author C. Bernard, author B. Probst, author K. Liu, author D. T. L. Alexander, author A. Kleibert, author G. Mette, author M. Weinl, author M. Schreck, author J. Osterwalder, author A. Radenovic, and author T. Greber, title title Centimeter-Sized Single-Orientation Monolayer Hexagonal Boron Nitride
With or Without Nanovoids, https://doi.org/10.1021/acs.nanolett.7b04752 journal journal Nano Lett. volume 18, pages 1205 (year 2018)NoStop
[Jiméénez et al.(1997)Jiméénez, Jankowski, Terminello, Sutherland, Carlisle, Doll, Tong, Shuh, and Himpsel]Jimenez1997May
author author I. Jiméénez, author A. F. Jankowski, author L. J. Terminello, author D. G. J. Sutherland, author J. A. Carlisle, author G. L. Doll, author W. M. Tong, author D. K. Shuh, and author F. J. Himpsel, title title Core-level photoabsorption study of defects and metastable bonding configurations in boron nitride, https://doi.org/10.1103/PhysRevB.55.12025 journal journal Phys. Rev. B volume 55, pages 12025 (year 1997)NoStop
[Caretti and Jiméénez(2011b)]Caretti2011Jul
author author I. Caretti and author I. Jiméénez, title title Point defects in hexagonal BN, BC_3 and BC_xN compounds studied by x-ray absorption near-edge structure, journal journal J. Appl. Phys. volume 110, https://doi.org/10.1063/1.3602996 10.1063/1.3602996 (year 2011b)NoStop
[Roux et al.(2024)Roux, Arnold, Carréé, Janzen, Edgar, Maestre, Toury, Journet, Garnier, Steyer, Taniguchi, Watanabe, Loiseau, and Barjon]Roux2024Apr
author author S. Roux, author C. Arnold, author E. Carréé, author E. Janzen, author J. H. Edgar, author C. Maestre, author B. Toury, author C. Journet, author V. Garnier, author P. Steyer, author T. Taniguchi, author K. Watanabe, author A. Loiseau, and author J. Barjon, title title Surface recombination and out-of-plane
diffusivity of free excitons in hexagonal boron nitride, https://doi.org/10.1103/PhysRevB.109.155305 journal journal Phys. Rev. B volume 109, pages 155305 (year 2024)NoStop
[Plaud et al.(2019)Plaud, Schuéé, Watanabe, Taniguchi, Fossard, Ducastelle, Loiseau, and Barjon]Plaud2019Jun
author author A. Plaud, author L. Schuéé, author K. Watanabe, author T. Taniguchi, author F. Fossard, author F. Ducastelle, author A. Loiseau, and author J. Barjon, title title Exciton-exciton annihilation in hBN, https://doi.org/10.1063/1.5090218 journal journal Appl. Phys. Lett. volume 114, pages 232103 (year 2019)NoStop
[Geick et al.(1966)Geick, Perry, and Rupprecht]Geick1966Jun
author author R. Geick, author C. H. Perry, and author G. Rupprecht, title title Normal Modes in Hexagonal Boron Nitride, https://doi.org/10.1103/PhysRev.146.543 journal journal Phys. Rev. volume 146, pages 543 (year 1966)NoStop
[Cuscóó et al.(2020)Cuscóó, Edgar, Liu, Li, and Artúús]Cusco2020Apr
author author R. Cuscóó, author J. H. Edgar, author S. Liu, author J. Li, and author L. Artúús, title title Isotopic Disorder: The Prevailing Mechanism in Limiting the Phonon Lifetime in Hexagonal BN, https://doi.org/10.1103/PhysRevLett.124.167402 journal journal Phys. Rev. Lett. volume 124, pages 167402 (year 2020)NoStop
[Gorbachev et al.(2011)Gorbachev, Riaz, Nair, Jalil, Britnell, Belle, Hill, Novoselov, Watanabe, Taniguchi, Geim, and Blake]Gorbachev2011Feb
author author R. V. Gorbachev, author I. Riaz, author R. R. Nair, author R. Jalil, author L. Britnell, author B. D. Belle, author E. W. Hill, author K. S. Novoselov, author K. Watanabe, author T. Taniguchi, author A. K. Geim, and author P. Blake, title title Hunting for Monolayer Boron Nitride: Optical and Raman Signatures, https://doi.org/10.1002/smll.201001628 journal journal Small volume 7, pages 465 (year 2011)NoStop
[Uslu et al.(2024)Uslu, Ouaj, Tebbe, Nekrasov, Bertram, Schüütte, Watanabe, Taniguchi, Beschoten, Waldecker, and Stampfer]Uslu2024Feb
author author J.-L. Uslu, author T. Ouaj, author D. Tebbe, author A. Nekrasov, author J. H. Bertram, author M. Schüütte, author K. Watanabe, author T. Taniguchi, author B. Beschoten, author L. Waldecker, and author C. Stampfer, title title An open-source robust machine learning platform for real-time detection and classification of 2D material flakes, https://doi.org/10.1088/2632-2153/ad2287 journal journal Mach.
Learn.: Sci. Technol. volume 5, pages 015027 (year 2024)NoStop
[Nemanich et al.(1981)Nemanich, Solin, and Martin]Nemanich1981Jun
author author R. J. Nemanich, author S. A. Solin, and author R. M. Martin, title title Light scattering study of boron nitride microcrystals, https://doi.org/10.1103/PhysRevB.23.6348 journal journal Phys. Rev. B volume 23, pages 6348 (year 1981)NoStop
[Song et al.(2010)Song, Ci, Lu, Sorokin, Jin, Ni, Kvashnin, Kvashnin, Lou, Yakobson, and Ajayan]Song2010Aug
author author L. Song, author L. Ci, author H. Lu, author P. B. Sorokin, author C. Jin, author J. Ni, author A. G. Kvashnin, author D. G. Kvashnin, author J. Lou, author B. I. Yakobson, and author P. M. Ajayan, title title Large Scale Growth and Characterization of Atomic Hexagonal Boron Nitride Layers, https://doi.org/10.1021/nl1022139 journal journal Nano Lett. volume 10, pages 3209 (year
2010)NoStop
[Shi et al.(2010)Shi, Hamsen, Jia, Kim, Reina, Hofmann, Hsu, Zhang, Li, Juang, Dresselhaus, Li, and Kong]Shi2010Oct
author author Y. Shi, author C. Hamsen, author X. Jia, author K. K. Kim, author A. Reina, author M. Hofmann, author A. L. Hsu, author K. Zhang, author H. Li, author Z.-Y. Juang, author Mildred. S. Dresselhaus, author L.-J. Li, and author J. Kong, title title Synthesis of Few-Layer Hexagonal Boron Nitride Thin Film by Chemical Vapor Deposition, https://doi.org/10.1021/nl1023707
journal journal Nano Lett. volume 10, pages 4134 (year 2010)NoStop
[Hadid et al.(2022)Hadid, Colambo, Avila, Plaud, Boyaval, Deresmes, Nuns, Dudin, Loiseau, Barjon, Wallart, and Vignaud]Hadid2022Nov
author author J. Hadid, author I. Colambo, author J. Avila, author A. Plaud, author C. Boyaval, author D. Deresmes, author N. Nuns, author P. Dudin, author A. Loiseau, author J. Barjon, author X. Wallart, and author D. Vignaud, title title Molecular beam epitaxial growth of multilayer 2D-boron nitride on Ni substrates from borazine and plasma-activated nitrogen, https://doi.org/10.1088/1361-6528/ac99e5 journal journal Nanotechnology volume 34, pages 035601 (year 2022)NoStop
[Bisswanger et al.(2022)Bisswanger, Winter, Schmidt, Volmer, Watanabe, Taniguchi, Stampfer, and Beschoten]Bisswanger2022Jun
author author T. Bisswanger, author Z. Winter, author A. Schmidt, author F. Volmer, author K. Watanabe, author T. Taniguchi, author C. Stampfer, and author B. Beschoten, title title CVD Bilayer Graphene Spin Valves with 26 μm Spin Diffusion Length at Room Temperature, https://doi.org/10.1021/acs.nanolett.2c01119 journal journal Nano Lett. volume 22, pages 4949 (year 2022)NoStop
[Páálinkáás et al.(2022)Páálinkáás, Káálvin, Vancsóó, Kandrai, Szendrő, Néémeth, Néémeth, Pekker, Pap, Petrik, Kamaráás, Tapasztóó, and Nemes-Incze]Palinkas2022Nov
author author A. Páálinkáás, author G. Káálvin, author P. Vancsóó, author K. Kandrai, author M. Szendrő, author G. Néémeth, author M. Néémeth, author ÁÁ. Pekker, author J. S. Pap, author P. Petrik, author K. Kamaráás, author
L. Tapasztóó, and author P. Nemes-Incze, title title The composition and structure of the ubiquitous hydrocarbon contamination on van der Waals materials, https://doi.org/10.1038/s41467-022-34641-7 journal journal Nat. Commun. volume 13, pages 1 (year 2022)NoStop
[Ferrari et al.(2006)Ferrari, Meyer, Scardaci, Casiraghi, Lazzeri, Mauri, Piscanec, Jiang, Novoselov, Roth, and Geim]Ferrari2006Oct
author author A. C. Ferrari, author J. C. Meyer, author V. Scardaci, author C. Casiraghi, author M. Lazzeri, author F. Mauri, author S. Piscanec, author D. Jiang, author K. S. Novoselov, author S. Roth, and author A. K. Geim, title title Raman Spectrum of Graphene and Graphene Layers, https://doi.org/10.1103/PhysRevLett.97.187401 journal journal Phys. Rev. Lett. volume 97, pages 187401 (year
2006)NoStop
[Graf et al.(2007)Graf, Molitor, Ensslin, Stampfer, Jungen, Hierold, and Wirtz]Graf2007Feb
author author D. Graf, author F. Molitor, author K. Ensslin, author C. Stampfer, author A. Jungen, author C. Hierold, and author L. Wirtz, title title Spatially Resolved Raman Spectroscopy of Single- and Few-Layer Graphene, https://doi.org/10.1021/nl061702a journal journal Nano Lett. volume 7, pages 238 (year 2007)NoStop
[Ferrari and Basko(2013)]Ferrari2013Apr
author author A. C. Ferrari and author D. M. Basko, title title Raman spectroscopy as a versatile tool for studying the properties of graphene, https://doi.org/10.1038/nnano.2013.46 journal journal Nat. Nanotechnol. volume 8, pages 235 (year 2013)NoStop
[Forster et al.(2013)Forster, Molina-Sanchez, Engels, Epping, Watanabe, Taniguchi, Wirtz, and Stampfer]Forster2013Aug
author author F. Forster, author A. Molina-Sanchez, author S. Engels, author A. Epping, author K. Watanabe, author T. Taniguchi, author L. Wirtz, and author C. Stampfer, title title Dielectric screening of the Kohn anomaly of graphene on hexagonal boron nitride, https://doi.org/10.1103/PhysRevB.88.085419 journal journal Phys. Rev. B volume 88, pages 085419 (year 2013)NoStop
[Sonntag et al.(2023)Sonntag, Watanabe, Taniguchi, Beschoten, and Stampfer]Sonntag2023Feb
author author J. Sonntag, author K. Watanabe, author T. Taniguchi, author B. Beschoten, and author C. Stampfer, title title Charge carrier density dependent Raman spectra of graphene encapsulated in hexagonal boron nitride, https://doi.org/10.1103/PhysRevB.107.075420 journal journal Phys. Rev. B volume 107, pages 075420 (year 2023)NoStop
[Lin et al.(2012)Lin, Lu, Yeh, Jin, Suenaga, and Chiu]Lin2012Jan
author author Y.-C. Lin, author C.-C. Lu, author C.-H. Yeh, author C. Jin, author K. Suenaga, and author P.-W. Chiu, title title Graphene Annealing: How Clean Can It Be?, https://doi.org/10.1021/nl203733r journal journal Nano Lett. volume 12, pages 414 (year 2012)NoStop
[Schwartz et al.(2019)Schwartz, Chuang, Rosenberger, Sivaram, McCreary, Jonker, and Centrone]Schwartz2019Jul
author author J. Schwartz, author H.-J. Chuang, author M. R. Rosenberger, author S. V. Sivaram, author K. M. McCreary, author B. T. Jonker, and author A. Centrone, title title Chemical Identification of Interlayer Contaminants within van der Waals Heterostructures, https://doi.org/10.1021/acsami.9b06594 journal journal ACS Appl. Mater. Interfaces volume 11, pages 25578 (year 2019)NoStop
[Volmer et al.(2021)Volmer, Seidler, Bisswanger, Tu, Schreiber, Stampfer, and Beschoten]Volmer2021Mar
author author F. Volmer, author I. Seidler, author T. Bisswanger, author J.-S. Tu, author L. R. Schreiber, author C. Stampfer, and author B. Beschoten, title title How to solve problems in micro- and nanofabrication caused by the emission of electrons and charged metal atoms during e-beam evaporation, https://doi.org/10.1088/1361-6463/abe89b journal journal J. Phys. D: Appl. Phys. volume 54, pages 225304 (year 2021)NoStop
[ICE(2021)]ICE
@noop title IEC TS 62607-6-6:2021(E) Nanomanufacturing - Key control characteristics - Part 6-6: “Graphene - Uniformity of strain in graphene analyzed by Raman spectroscopy (year 2021)NoStop
A "Staircase" formula for the Chern-Schwartz-MacPherson cycle of a matroid
Franquiz Caraballo Alba and Jeffery Liu
September 5, 2024
==================================================================================================================================
§ ABSTRACT
We provide a formula for the Poincaré dual of the Chern-Schwartz-MacPherson (CSM) cycle of a matroid in the Chow ring of the matroid. We derive the formula from the ℂ-realizable case and prove that it satisfies a contraction-deletion formula. From this fact, we prove it holds for all matroids, confirming a conjecture of Fife and Rincón.
§ INTRODUCTION
The Chern-Schwartz-MacPherson (CSM) class generalizes the Chern class of the tangent bundle of a variety. In particular, it is defined even when the variety is singular or non-proper. One interpretation of the CSM class of a variety is as a natural transformation between the functor of constructible functions on varieties and the Chow group functor; for more details, see <cit.>. In the case where V is the complement of a hyperplane arrangement 𝒜 in ℙ^d, a combinatorial formula for c_SM(V) in terms of the characteristic polynomial of the underlying matroid was given separately by Aluffi in <cit.> and Huh in <cit.>.
De Concini and Procesi defined the notion of a wonderful model of a hyperplane arrangement with respect to a building set in <cit.>. In this paper, we will use the term wonderful model of 𝒜 to refer to the wonderful model with respect to the maximal building set. Later, Feichtner and Yuzvinsky <cit.> defined the Chow ring of a matroid, following the ideas of <cit.> and the correspondence between the Chow ring of a nonsingular proper toric variety X and Minkowski weights on its associated fan Σ proven in <cit.>. In <cit.>, Adiprasito, Huh and Katz brought these concepts together and proved that A^*(M) and MW_*(B(M)) are dual to each other, in analogy with Poincaré duality in the toric case and the realizable case. This gave the necessary setting for López de Medrano, Rincón, and Shaw to define an analogous CSM cycle for matroids.
In <cit.>, the authors define the CSM cycle csm(M) of a matroid M and prove in Theorem 3.1 that when M is realizable over ℂ, its CSM cycle corresponds to the geometric CSM class c_SM(𝒲∖𝒜) of the complement of the arrangement in the wonderful model under the identification MW_*(B(M)) ≅ A_*(𝒲). However, in the geometric setting c_SM(𝒲∖𝒜) should be Poincaré dual to some element c^SM(𝒲∖𝒜) ∈ A^*(𝒲), since 𝒲 is proper and nonsingular. In the combinatorial setting, the Minkowski weights on the Bergman fan of M and the Chow ring of M satisfy Poincaré duality, so there should be a class c^SM(M) ∈ A^*(M) with c^SM(M) ∩ 1_M = csm(M). In Section 3 we prove
The CSM class of the complement of the arrangement 𝒜 in its wonderful model 𝒲 is given in terms of the Chow Ring generators by the formula
c_SM(𝒲∖𝒜) = ∏_r=1^(E)(1 - ∑_F ∈ℒ, (F) ≥ r x_F) ∩ [𝒲],
where M is the matroid associated to 𝒜.
In <cit.>, Fife and Rincón conjectured that the degree k part of c^SM(M) is given by the sum
∑_|ℱ|=k C_ℱ x_F_r_1… x_F_r_k
indexed over all size k multisubset chains ℱ = {F_r_1⊆…⊆ F_r_k} of flats of M, with coefficients C_ℱ on each monomial
C_ℱ := (-1)^k ∏_i = 1^k (r_i - i + 1)/∏_r = 1^(E) m_ℱ(r)! = (-1)^k (r_1)(r_2 - 1) ⋯ (r_k - k + 1)/(m_ℱ(1)! m_ℱ(2)! ⋯ m_ℱ((E))!),
where r_i = (F_r_i) and m_ℱ(r) counts the multiplicity of r among {r_1, …, r_k}.
Theorem <ref> proves this conjecture in the case of matroids realizable over ℂ and Corollary <ref> generalizes the answer to arbitrary matroids.
In Section 2 we recall the required combinatorial and algebro-geometric background for the rest of the paper. In Section 3 we prove that our formula does indeed compute c^SM(𝒲∖𝒜) in the case where M is realized by the hyperplane arrangement 𝒜. In Section 4 we define
(M) := ∏_r=1^(E)(1 - ∑_F ∈ℒ, (F) ≥ r x_F) ∈ A^*(M),
the staircase class of a matroid, which generalizes the formula provided in Section 3 to the case where M is an arbitrary matroid, and prove in Theorem <ref> that this class satisfies a contraction-deletion formula similar to that of (M). The name comes from the visual structure of the class when written out explicitly; in the case where M = U_3,3 with underlying set E = {0,1,2},
(M) = (1 - x_0 - x_1 - x_2 - x_01 - x_02 - x_12 - x_012)
· (1 - x_01 - x_02 - x_12 - x_012)
· (1 - x_012)
Finally, we prove
For an arbitrary matroid M, (M) ∩ 1_M = (M).
The proof of this statement uses the contraction-deletion formula, the interpretation of the Chow ring in terms of piecewise polynomials <cit.> and induction on the number of elements of M, which completes the proof of Conjecture 5.2.4 in <cit.>.
§ BACKGROUND
A matroid M is a finite set E, together with a non-negative integer-valued rank function on its subsets : 2^E →ℤ^≥ 0, satisfying the properties:
* for all subsets A ⊆ E, (A) ≤ |A|;
* for all A,B ⊆ E, if A ⊆ B, then (A) ≤(B);
* for all A,B ⊆ E, (A∪ B) + (A ∩ B) ≤(A) + (B).
A subset F ⊆ E is called a flat of the matroid if for all i ∈ E ∖ F, (F) < (F ∪{i}), or equivalently if it is maximal with respect to inclusion among other subsets of the same rank (F).
The set of all flats forms a lattice under inclusion. The set of all non-empty flats will be denoted by ℒ(M), and the set of all proper, non-empty flats will be denoted by ℒ̂(M). We will also write just ℒ and ℒ̂ to save space if the matroid is understood.
Given a matroid M=(E,),
* an element i ∈ E is a loop if ({i}) = 0;
* an element i ∈ E is a coloop if every minimal set with rank (E) contains i; in other words, i is a coloop if every basis of M contains i;
* the matroid M is simple if its rank-1 flats are precisely the singletons {i}, for all i ∈ E.
Given an element i ∈ E and a matroid M = (E, ) on E, we define the deletion M ∖ i as the matroid (E ∖{i},|_2^E ∖{i}) and the contraction M / i as the matroid (E ∖{i}, ') with '(A) = (A ∪{i}) - ({i}).
Given a collection of hyperplanes 𝒜 = {H_0, …, H_n} in projective space ^d (or hyperplanes through the origin in a (d+1)-dimensional vector space), the function : 2^{0, …, n}→ℤ^≥ 0 defined by (S) = codim( ⋂_i ∈ S H_i) is a matroid on the ground set {0, …, n}.
Given a set of vectors {v_0, …, v_n} in ^d+1, the rank function (S) = dim(span{v_i | i ∈ S}) defines a matroid. This yields the same matroid as the previous example by taking the normal hyperplane to each vector.
Matroids arising in this way from a hyperplane arrangement in ^d (or a vector arrangement in ^d+1) over a field 𝕂 are called 𝕂-realizable. Note that there exist matroids which are not realizable.
In the realizable case, if M = ({0,…,k},) is represented by the set of vectors {v_0, …, v_k} in ℂ^n, then M ∖ i is represented by the set of vectors {v_0, …, v_k}∖{v_i} in ℂ^n, and M / i is represented by {v_0 + ⟨v_i⟩, …, v_k + ⟨v_i⟩}∖{⟨v_i⟩} in ℂ^n/⟨v_i⟩.
Given a finite set E, the matroid M = (E, |·|) whose rank function is cardinality is realizable over ℂ by 𝒜 = {V(x_i) | i ∈ E} in (ℂ^E).
Consider the arrangement of four lines in ^2:
H_0 = V(x_0)
H_1 = V(x_1)
H_2 = V(x_1 - x_2)
H_3 = V(x_2)
This arrangement is depicted in Figure <ref> as viewed from the chart x_0 + x_1 + x_2 ≠ 0.
The rank function of the corresponding realizable matroid has values
(∅) =0
(0)=(1)=(2)=(3) =1
(01)=(02)=(03)=(12)=(13)=(23)=(123) =2
(012)=(013)=(023)=(0123) =3
and its flats have the lattice structure (drawn bottom to top)

0123
01 02 03 123
0 1 2 3
∅

with covering relations ∅ < 0, 1, 2, 3; 0 < 01, 02, 03; 1 < 01, 123; 2 < 02, 123; 3 < 03, 123; and 01, 02, 03, 123 < 0123.
For an in-depth look at matroids, we refer the reader to <cit.>.
Let 𝒜 = {H_0, …, H_n} be a hyperplane arrangement in ^d. For every flat F in the matroid associated to 𝒜, there is a linear subvariety corresponding to the intersection L_F = ∩_i ∈ F H_i.
For an arbitrary arrangement 𝒜, its De Concini-Procesi wonderful model 𝒲 is a variety constructed from ^d by a sequence of blow-ups at each of these intersections, in order of dimension:
* First blow up the points {L_F| F ∈ℒ̂, (F) = d}.
* Then blow up the proper transform of the lines {L_F| F ∈ℒ̂, (F) = d-1}.
* Continue in this manner, at each step i = 0, …, d-1 blowing up the proper transforms of the i-dimensional linear subspaces {L_F| F ∈ℒ̂, (F) = d-i}.
Let π: 𝒲→^d denote this blow-up map; π is an isomorphism on the complement of the arrangement C(𝒜). On 𝒲 there is a divisor corresponding to each flat of the matroid. A collection of these divisors intersects transversally if the corresponding flats are pairwise comparable, and has empty intersection otherwise. See <cit.> for details.
Given a matroid M on a ground set E, we associate its Chow Ring
A^*(M) := [x_F | F ∈ℒ̂(M)](I_M + J_M),
where I_M and J_M are the ideals
I_M := ({x_Fx_F'|if F and F' are not comparable})
J_M := ({∑_F ∈ℒ̂, F ∋ i x_F - ∑_F ∈ℒ̂, F ∋ j x_F | i,j ∈ E}).
An equivalent presentation over all non-empty flats is
A^*(M) := [x_F | F ∈ℒ(M)](I_M + J_M)
where I_M and J_M are the ideals
I_M := ({x_Fx_F' | if F and F' are not comparable})
J_M := ({∑_F ∈ℒ, F ∋ i x_F | i ∈ E});
note that the extra generator x_E equals x_E = -∑_F ∈ℒ̂, F ∋ i x_F for all i ∈ E.
When a matroid is realizable, arising from a hyperplane arrangement 𝒜, its Chow ring is isomorphic to the Chow ring A^*(𝒲) of the wonderful model in the geometric sense, with each x_F corresponding to the class of the divisor associated to the proper flat F, and x_E = -π^*(h), minus the pullback of the hyperplane class in ^d.
Let M be the matroid in Example <ref>.
Its Chow Ring is presented by the quotient ring
[x_0, x_1, x_2, x_3, x_01, x_02,x_03,x_123,x_0123](I_M + J_M)
with relations given by the ideals
I_M = ( x_0x_1, x_0x_2, x_0x_3, x_0x_123, x_1x_2, x_1x_3, x_1x_02, x_1x_03, x_2x_3, x_2x_01, x_2x_03,
x_3x_01, x_3x_02,
x_01x_02, x_01x_03, x_01x_123,
x_02x_03, x_02x_123,
x_03x_123)
J_M = ( x_0 + x_01 + x_02 + x_03 + x_0123, x_1 + x_01 + x_123 + x_0123,
x_2 + x_02 + x_123 + x_0123, x_3 + x_03 + x_123 + x_0123)
Let E be a finite set and for S ⊆ E define e_S to be the element of ℝ^E given by e_S(i) = 1 for all i ∈ S and e_S(i) = 0 otherwise. Given a matroid M = (E, ), we define B̂(M), the affine Bergman fan of M, as the fan in ℝ^E whose cones are of the form
σ̂_ℱ := {∑_i = 1^k a_ie_F_i| a_i ≥ 0 },
where ℱ = {∅⊊ F_1 ⊊…⊊ F_k ⊂ E} is a flag of non-empty flats of M. We define the image of this fan under the projection ℝ^E →ℝ^E / ℝ e_E to be B(M), the Bergman fan of M. Note that this projection maps σ̂_ℱ and σ̂_ℱ∪{E} to the same cone, where ℱ is a flag of proper non-empty flats. We denote the cone corresponding to ℱ in B(M) by σ_ℱ.
The following definitions of the Chow ring and Minkowski weights of a fan Σ are explored in detail in Chapter 5 of <cit.>. We reproduce the basic definitions and facts for the reader:
Given a unimodular polyhedral fan Σ in a latticed vector space N_ℝ = N ⊗_ℤℝ we define the Chow ring of Σ to be
A^*(Σ) := ℤ[{x_ρ | ρ∈Σ(1)}]/(I_Σ + J_Σ),
where I_Σ consists of monomials x_ρ x_ρ' with ρ, ρ' ∈Σ(1) such that there is no cone σ in Σ of which ρ and ρ' are both codimension 1 subcones, and J_Σ consists of the linear forms
∑_ρ∈Σ(1)⟨ m, v_ρ⟩ x_ρ,
where m ∈ Hom_ℤ(N, ℤ) and v_ρ is the generator of ρ in N.
In the specific case where Σ is the (affine) Bergman fan of a matroid M,
A^*(B̂(M)) ≅ A^*(B(M)) ≅ A^*(M)
by the isomorphisms given by x_σ̂_F↦ x_σ_F↦ x_F, for F ≠ E and x_σ̂_E↦ - ∑_F ∋ i x_σ_F.
A weighted fan in N_ℝ is a polyhedral fan Σ of pure dimension k together with a weight function ω:Σ(k) →ℤ. We say that (Σ, ω) is balanced if for all τ∈Σ(k-1),
∑_σ > τω(σ) v_σ/τ∈⟨τ⟩,
where v_σ/τ is the generator of (σ + ⟨τ⟩)/⟨τ⟩ in N_ℝ/⟨τ⟩.
Given a fan Σ of pure dimension n, we define k-dimensional Minkowski weights on Σ to be the set of balanced weighted subfans of Σ.
Given a pure n-dimensional, balanced, unimodular fan Σ in N_ℝ, A^k(Σ) ≅ MW_n - k(Σ). We denote the isomorphism by (-) ∩ 1_Σ, where 1_Σ is the function σ↦ 1 for all σ∈Σ(n). In the case where Σ = B(M) for some matroid M, we simply write 1_M for 1_B(M). In particular, Remark <ref> and the statement above imply that for all matroids M, MW_*(B̂(M)) ≅ MW_*(B(M)).
In the case where Σ is the fan associated to a proper, nonsingular toric variety X(Σ), A^*(Σ) ≅ A^*(X(Σ)) and A_*(X(Σ)) ≅ MW_*(Σ), see <cit.> for details.
§ A “STAIRCASE" CSM CLASS FORMULA FOR REALIZABLE MATROIDS
The Chern-Schwartz-MacPherson (CSM) class generalizes the Chern class of the tangent bundle of a variety. More explicitly, let Var_ℂ be the category of varieties with proper maps as morphisms and Ab be the category of abelian groups.
There exists a unique natural transformation c_*:𝒞_* → A_* such that for a proper, nonsingular variety V, c_*(1_V) = c(TV) ∩ [V] where 𝒞_*: Var_ℂ→ Ab is the functor of constructible functions and A_*: Var_ℂ→ Ab is the Chow group functor. See Theorem 1 in <cit.> for details.
For a (possibly singular and/or non-proper) subvariety X embedded in a nonsingular proper variety V, define c_SM(X) = c_*(1_X) ∈ A_*(V).
López de Medrano, Rincón, and Shaw defined an analogous CSM cycle for matroids, and proved that when a matroid is realizable over ℂ, its CSM cycle corresponds to the geometric CSM class c_SM(𝒲∖𝒜) of the complement of the arrangement in the wonderful model under the identification of A^*(M) ≅ A^*(𝒲) (Theorem 3.1 in <cit.>).
A formula for the CSM cycle of a matroid in terms of the generators of the Chow ring was conjectured by T. Fife and F. Rincón in private communication, according to Conjecture 5.2.4 in <cit.>.
In this section, we prove such a formula for the CSM class of the complement of the arrangement by geometric means, thus resolving the conjecture for all realizable matroids.
In the next section, we extend this formula to all matroids, and prove that it gives the same CSM cycle as defined by López de Medrano, Rincón, and Shaw.
To proceed, we utilize some geometric formulas for the CSM class due to Aluffi:
(Lemma 1.3 in <cit.>)
Let X ⊆ Y be nonsingular varieties, π: Ỹ→ Y the blow-up of Y along X, and X̃ the exceptional divisor. If X is a complete intersection of hypersurfaces Z_1, …, Z_d meeting transversally in Y, then the Chern class of the tangent bundle of the blow-up Ỹ is related to c(T_Y) by the formula
c(T_Ỹ) = (1 + X̃)(1 + π^*Z_1 - X̃)⋯(1 + π^*Z_d - X̃)/(1 + π^*Z_1)⋯(1 + π^*Z_d) ·π^*c(T_Y)
(Theorem 1 in <cit.>)
Let X ⊂ Y be a subvariety of a nonsingular variety, with X = D_1 ∪…∪ D_n a normal crossings divisor with smooth components. The CSM class of the complement Y∖ X is
c_SM(Y∖ X) = c(Ω^1_Y(log X)^∨ ) ∩ [Y] = c(T_Y)/(1+D_1) ⋯ (1+D_n) ∩ [Y]
The CSM class of the complement of the arrangement in its wonderful model is given in terms of the Chow Ring generators by the formula
c_SM(𝒲∖𝒜) = ∏_r=1^(E)(1 - ∑_F ∈ℒ, (F) ≥ r x_F) ∩ [𝒲]
The wonderful model 𝒲 is constructed by a sequence of blow-ups of ^d at the intersections of hyperplanes determined by each flat F ∈ℒ̂. If we blow up according to order by rank, highest to lowest (i.e. by dimension, lowest to highest), the centre of the blow-up at F is the intersection of the (F) many hyperplanes through the locus determined by F, minus the exceptional divisors already blown up inside that intersection (corresponding to the flats F' which contain F). That is, when all blow-ups are complete, each of the (F) hyperplanes through the centre pulls back to π^* h - ∑_F' ∈ℒ̂, F' ⊋ F x_F' in A^*(𝒲), where h is the class of the hyperplane in ^d. By applying the blow-up formula (Lemma <ref>) according to each flat F ∈ℒ̂,
c(T_𝒲) = ∏_F ∈ℒ̂ [(1 + x_F)(1 + (π^* h - ∑_F' ∈ℒ̂, F' ⊋ F x_F') - x_F)^(F) / (1 + (π^* h - ∑_F' ∈ℒ̂, F' ⊋ F x_F'))^(F)] π^*c(T_^d).
The complement of the arrangement in 𝒲 is the complement of the union of the divisors x_F over all F ∈ℒ̂, which is a normal crossings divisor with smooth components by construction. By the complement formula (Lemma <ref>), we get
c_SM(𝒲∖𝒜) = c(T_𝒲)/∏_F ∈ℒ̂ (1 + x_F) ∩ [𝒲]
= 1/∏_F ∈ℒ̂ (1 + x_F) · ∏_F ∈ℒ̂ [(1 + x_F)(1 + (π^* h - ∑_F' ∈ℒ̂, F' ⊋ F x_F') - x_F)^(F) / (1 + (π^* h - ∑_F' ∈ℒ̂, F' ⊋ F x_F'))^(F)] π^* c(T_^d) ∩ [𝒲].
By cancellation, and π^*c(T_^d) = (1 + π^* h)^d+1 = (1 + π^* h)^(E), we get
c_SM(𝒲∖𝒜) = ∏_F ∈ℒ̂ [(1 + (π^* h - ∑_F' ∈ℒ̂, F' ⊋ F x_F') - x_F)^(F) / (1 + (π^* h - ∑_F' ∈ℒ̂, F' ⊋ F x_F'))^(F)] (1 + π^* h)^(E) ∩ [𝒲].
We can substitute x_E := -π^*h to simplify notation:
c_SM(𝒲∖𝒜) = ∏_F ∈ℒ̂ [(1 + (-x_E - ∑_F' ∈ℒ̂, F' ⊋ F x_F') - x_F)^(F) / (1 + (-x_E - ∑_F' ∈ℒ̂, F' ⊋ F x_F'))^(F)] (1 - x_E)^(E) ∩ [𝒲]
= ∏_F ∈ℒ̂ [(1 - ∑_F' ∈ℒ, F' ⊇ F x_F')^(F) / (1 - ∑_F' ∈ℒ, F' ⊋ F x_F')^(F)] · (1 - x_E)^(E)/1^(E) ∩ [𝒲]
= ∏_F ∈ℒ ((1 - ∑_F' ∈ℒ, F' ⊇ F x_F') / (1 - ∑_F' ∈ℒ, F' ⊋ F x_F'))^(F) ∩ [𝒲].
Next, we compute the coefficients in the formal expansion of the above expression in ℤ[x_F | F ∈ℒ]. The coefficient of each monomial ∏_F ∈ℱ x_F (where ℱ is some multisubset of ℒ) may be extracted as follows:
ℱ must be a chain, since otherwise ∏_F ∈ℱ x_F = 0, the corresponding divisors having empty intersection in A^*(𝒲). Choose a maximal flag {∅ = F_0 ⊊ F_1 ⊊…⊊ F_d ⊊ F_d+1 = E} (where (F_i) = i) containing the chain ℱ as a multisubset. We may rewrite the monomial as
∏_F ∈ℱ x_F = x_F_r_1⋯ x_F_r_k.
where k = |ℱ| is the degree of the monomial (also, the size of the multiset), and r_1 ≤…≤ r_k is a monotonic sequence. In other words, we re-index the multiset according to the order in the flag by inclusion:
ℱ = {F_r_1, …, F_r_k}, F_r_1⊆…⊆ F_r_k.
Let m_ℱ(r) = |{F ∈ℱ|(F) = r}| count the multiplicity of F_r in the multisubset ℱ, i.e. the number of occurrences of r in {r_1, …, r_k}.
For all F not included in the flag, we may set x_F = 0 (this has no effect on ∏_F ∈ℱ x_F, and only kills some other monomials). If F is not in the flag, the fraction ((1 - ∑_F' ⊇ F x_F') / (1 - ∑_F' ⊋ F x_F'))^(F) becomes 1. Otherwise, when F = F_r is in the flag, it becomes ((1 - ∑_i = r^(E) x_F_i) / (1 - ∑_i = r+1^(E) x_F_i))^r. The expression then simplifies by cancellation (we indicate the result of killing the non-flag divisors by ↦):
∏_F ∈ℒ ((1 - ∑_F' ⊇ F x_F') / (1 - ∑_F' ⊋ F x_F'))^(F) ↦ ∏_r = 1^(E) ((1 - ∑_i = r^(E) x_F_i) / (1 - ∑_i = r+1^(E) x_F_i))^r = ∏_r = 1^(E) ((1 - (x_F_r + x_F_r+1 + … + x_E)) / (1 - (x_F_r+1 + … + x_E)))^r
= ∏_r = 1^(E) (1 - ∑_i = r^(E) x_F_i) = ∏_r = 1^(E) (1 - (x_F_r + … + x_E)).
The coefficient of the monomial x_F_r_1⋯ x_F_r_k may be deduced by counting how many times it appears in the expansion (alternatively, we can use Taylor's theorem to compute the coefficient by taking the appropriate partial derivatives at zero). The count amounts to choosing from which factor each x_F_r_i comes, then dividing to account for identical permutations. Note that each -x_F_r_i appears as a term only in the first r_i factors. There are r_1 possible choices for x_F_r_1, then r_2 - 1 many remaining choices for x_F_r_2, and so on (i.e. r_i - i + 1 choices for x_F_r_i once the previous i-1 variables have been chosen). Therefore, the coefficient on x_F_r_1⋯ x_F_r_k is
C_ℱ := (-1)^k ∏_i = 1^k (r_i - i + 1)/∏_r = 1^(E) m_ℱ(r)! = (-1)^k (r_1)(r_2 - 1) ⋯ (r_k - k + 1)/(m_ℱ(1)! m_ℱ(2)! ⋯ m_ℱ((E))!).
Denote this coefficient by C_ℱ for each multisubset ℱ. These are precisely the coefficients conjectured by Fife and Rincón.
Furthermore, the alternate expression ∏_r=1^(E)(1 - ∑_F ∈ℒ, (F) ≥ r x_F) yields the same coefficients on every monomial, since it has the same form after killing non-flag divisors: ∏_r=1^(E)(1 - ∑_(F) ≥ r x_F) ↦ ∏_r = 1^(E)(1 - ∑_i = r^(E) x_F_i). By comparing coefficients, we conclude that it is equal to the original CSM class:
c_SM(𝒲∖𝒜) = ∏_F ∈ℒ ((1 - ∑_F' ∈ℒ, F' ⊇ F x_F') / (1 - ∑_F' ∈ℒ, F' ⊋ F x_F'))^(F) ∩ [𝒲]
= ∑_ℱ multisubset of ℒ C_ℱ ∏_F ∈ℱ x_F ∩ [𝒲]
= ∏_r=1^(E)(1 - ∑_F ∈ℒ, (F) ≥ r x_F) ∩ [𝒲].
The formula conjectured by Fife and Rincón holds for realizable matroids.
Let 𝒜 be the arrangement in Example <ref>. The CSM class of the complement of the arrangement in its wonderful compactification by the previous result is
c_SM(𝒲∖𝒜)= (1-x_0-x_1-x_2-x_3-x_01-x_02-x_03-x_123-x_0123)
· (1-x_01-x_02-x_03-x_123-x_0123)
·(1-x_0123) ∩ [𝒲]
= (1-x_123-x_0123) ∩ [𝒲].
§ RELATION TO THE CSM CYCLE OF A MATROID
The result from the previous section motivates the following definition for all matroids, not necessarily realizable.
Given a loopless matroid M, define the invariant (M) in its Chow Ring A^*(M) by the formula
(M) := ∏_r=1^(E)(1 - ∑_(F) ≥ r x_F) ∈ A^*(M).
In the case where M has a loop, define (M) = 0.
We prove in this section that (M) ∩ 1_M is the CSM cycle of a matroid defined by López de Medrano, Rincón, and Shaw.
Note that the coefficients C_ℱ conjectured by Fife and Rincón are precisely the coefficients in the expansion of (M), so this is equivalent to the statement of their conjecture in full generality.
First we show that this invariant respects certain deletion and contraction operations on a matroid.
Let M be a matroid on ground set E with rank function _M. For any i ∈ E, we can construct the matroid M∖ i, called the deletion of M by the element i. This is the matroid on the set E∖ i with rank function _M∖ i: 2^E∖ i→ℤ^≥ 0 given by
_M∖ i(S) := _M(S).
For any i ∈ E, we can construct the matroid M / i, called the contraction of M by the element i. This is the matroid on the set E∖ i with rank function _M / i: 2^E∖ i→ℤ^≥ 0 given by
_M / i(S) := _M(S ∪ i) - _M(i).
§.§ Contraction-deletion of the staircase class
For a given element i ∈ E which is not a coloop, we obtain a projection δ: B(M) → B(M ∖ i). By Proposition 2.22 in <cit.>, δ is a tropical modification along some rational function ϕ; therefore, δ induces a pullback δ^*: Z_*(B(M ∖ i)) → Z_*(B(M)), and since div_B(M ∖ i)(ϕ) = B(M / i), we also obtain a pullback Z_*(B(M / i)) → Z_*(B(M ∖ i)). These have corresponding ring/group homomorphisms in the context of the Chow rings; see Section 4.2 for details.
To distinguish the divisors of A^*(M), A^*(M ∖ i), and A^*(M/i), we label them by
A^*(M) = ℤ[x_F | F ∈ℒ(M)]/(I_M + J_M)
A^*(M∖ i) = ℤ[y_F | F ∈ℒ(M∖ i)]/(I_M ∖ i + J_M ∖ i)
A^*(M/ i) = ℤ[z_F | F ∈ℒ(M/i)]/(I_M/i + J_M/i)
i.e. with variable name x, y, z respectively. Define the deletion pullback (a ring homomorphism) δ̅^*: A^*(M ∖ i) → A^*(M) on each generator y_F by
δ̅^*(y_F) = ∑_F' ∈ℒ(M), F'∖ i = F x_F'.
Define the element y_i ∈ A^*(M ∖ i) by
y_i := - ∑_F ∈ℒ(M∖ i), F ∉ℒ(M), F∪{i}∈ℒ(M) y_F.
Multiplication by y_i defines a map A^*(M / i) → A^*(M ∖ i),
z ↦ y_i ·ι(z),
where ι(z_F) = y_F (which replaces the variables z with y). This embeds A^*(M / i) as the subgroup y_i · A^*(M ∖ i).
By composition we also obtain a map δ̅^* (y_i ·ι(-)): A^*(M / i) → A^*(M).
The invariant (M) satisfies the contraction-deletion formula:
(M) = δ̅^*((M∖ i)) - δ̅^* (y_i ·ι((M/i)))
We partition the nonempty flats of M as follows: first ask whether i ∈ F; if not, ask whether F∪{i} is a flat; if so, ask whether F∖{i} is a flat (or ∅). This yields four classes:
* 𝒯: i ∉ F and F∪{i} is not a flat;
* 𝒮: i ∉ F and F∪{i} is a flat;
* 𝒰: i ∈ F and F∖{i} is a flat (or ∅);
* 𝒱: i ∈ F and F∖{i} is not a flat.
The sets 𝒯, 𝒮, 𝒰, 𝒱 are disjoint and partition ℒ(M).
Denote also 𝒰∖ i := {F∖ i | F ∈𝒰} and 𝒱∖ i := {F∖ i | F ∈𝒱}.
Note the following reasons for this partition:
* 𝒰∪𝒱 consists of all flats containing the deleted element i. Among these, 𝒰 are the flats that decrease in rank when i is deleted. On the other hand, 𝒱 are the flats that remain the same rank when i is deleted.
* S ∪𝒱∖ i = ℒ(M/i) are the flats of the contraction M/i.
* 𝒯∪𝒮∪𝒱∖ i = ℒ(M ∖ i) are the flats of the deletion M∖ i.
* 𝒰∖ i = 𝒮∪{∅}.
This partition determines the behavior of the divisors under the map δ̅^*:
* If F ∈𝒯, then δ̅^*(y_F) = x_F.
* If F ∈𝒮, then δ̅^*(y_F) = x_F + x_F ∪{i}.
* If F ∈𝒱∖ i, then δ̅^*(y_F) = x_F ∪ i.
* The pullback of the special element y_i = -∑_F ∈𝒱∖ i y_F is
δ̅^*(y_i) = δ̅^*(-∑_F ∈𝒱∖ i y_F) = -∑_F ∈𝒱 x_F = ∑_F ∈𝒰 x_F.
[Diagram: 𝒮∪{∅} sits below 𝒯 and 𝒰, which in turn sit below 𝒱.]
For ranks r=1, …, d+1, let
t_r := ∑_F ∈𝒯, (F) = r x_F, s_r := ∑_F ∈𝒮, (F) = r x_F, u_r := ∑_F ∈𝒰, (F) = r x_F, v_r := ∑_F ∈𝒱, (F) = r x_F.
Any one piece may be empty in some rank r, in which case the corresponding sum is zero. As shorthand, we also group indices with the same rank: (t + s)_r := t_r + s_r, etc.
Using this notation,
(M) = [1 - (t + s + u + v)_1 - (t + s + u + v)_2 - … - (t + s + u + v)_d+1]
· [1 - (t + s + u + v)_2 - … - (t + s + u + v)_d+1]
⋯
· [1 - (t + s + u + v)_d+1]
and
δ̅^*((M∖ i)) = [1 - (t_1 + (s_1 + u_2) + v_1) - (t_2 + (s_2 + u_3) + v_2) - … - (t_d+1 + (s_d+1 + u_d+2) + v_d+1)]
· [1 - (t_2 + (s_2 + u_3) + v_2) - (t_3 + (s_3 + u_4) + v_3) - … - (t_d+1 + (s_d+1 + u_d+2) + v_d+1)]
⋯
· [1 - (t_d+1 + (s_d+1 + u_d+2) + v_d+1)]
= [1 - (t + s + v)_1 - (t + s + u + v)_2 - … - (t + s + u + v)_d+1]
· [1 - (t + s + v)_2 - (t + s + u + v)_3 - … - (t + s + u + v)_d+1]
⋯
· [1 - (t + s + v)_d+1]
Note that u_r is omitted from the r-th factor. We compute the difference (M) - δ̅^*((M∖ i)) and show it is equal to the negative of
δ̅^* (y_i ·ι((M/i)))=
[u_1 + u_2 + … + u_d+1]
·[1 - (s_1 + u_2 + v_2) - (s_2 + u_3 + v_3) - … - (s_d + u_d+1 + v_d+1)]
·[1 - (s_2 + u_3 + v_3) - … - (s_d + u_d+1 + v_d+1)]
⋯
· [1 - (s_d + v_d+1 + u_d+1)]
The difference is:
(M) - δ̅^*((M∖ i)) =
[-u_1]·[1 - (t + s + u + v)_2 - … - (t + s + u + v)_d+1] ⋯ [1 - (t + s + u + v)_d+1]
+[1 - (t + s + v)_1 - (t + s + u + v)_2 - … - (t + s + u + v)_d+1] · [-u_2] ⋯ [1 - (t + s + u + v)_d+1]
+ ⋯
+[1 - (t + s + v)_1 - … - (t + s + u + v)_d+1] · [1 - (t + s + v)_2 - … - (t + s + u + v)_d+1] ⋯ [-u_d+1]
Note that the r-th term is like (M) except that the r-th factor is replaced by -u_r and, for each k < r, u_k is omitted from the k-th factor. Each flat in 𝒯 is not comparable with each flat in 𝒰, so t_k u_r = 0. In other words, t_k is annihilated by each u_r. Since each term has some u_r as a factor, all t's may be removed.
(M) - δ̅^*((M∖ i)) = … =
[-u_1]·[1 - (s + u + v)_2 - … - (s + u + v)_d+1] ⋯ [1 - (s + u + v)_d+1]
+[1 - (s + v)_1 - (s + u + v)_2 - … - (s + u + v)_d+1] · [-u_2] ⋯ [1 - (s + u + v)_d+1]
+ ⋯
+[1 - (s + v)_1 - … - (s + u + v)_d+1] · [1 - (s + v)_2 - … - (s + u + v)_d+1] ⋯ [-u_d+1]
Also u_r s_k = 0 if k ≥ r. Then, in the term with index r, for all k > r add s_k-1 to the k-th factor (after each [-u_r]).
(M) - δ̅^*((M∖ i)) =… =
[-u_1]·[1 - s_1 - (s + u + v)_2 - … - (s + u + v)_d+1] ⋯ [1 - s_d - (s + u + v)_d+1]
+[1 - (s + v)_1 - (s + u + v)_2 - … - (s + u + v)_d+1] · [-u_2] ⋯ [1 - s_d - (s + u + v)_d+1]
+ ⋯
+[1 - (s + v)_1 - … - (s + u + v)_d+1] · [1 - (s + v)_2 - … - (s + u + v)_d+1] ⋯ [-u_d+1]
Also u_r v_k = 0 if k < r. Then, in the term with index r, for all k < r remove v_k from the k-th factor (before each [-u_r]).
(M) - δ̅^*((M∖ i)) =… =
[-u_1]·[1 - s_1 - (s + u + v)_2 - … - (s + u + v)_d+1] ⋯ [1 - s_d - (s + u + v)_d+1]
+[1 - s_1 - (s + u + v)_2 - … - (s + u + v)_d+1] · [-u_2] ⋯ [1 - s_d - (s + u + v)_d+1]
+ ⋯
+[1 - s_1 - … - (s + u + v)_d+1] · [1 - s_2 - … - (s + u + v)_d+1] ⋯ [-u_d+1]
Factors before and after u_r are identical between terms. Factorize:
(M) - δ̅^*((M∖ i)) =… =
-[u_1 + u_2 + … + u_d+1]
·[1 - s_1 - (s + u + v)_2 - (s + u + v)_3 - … - (s + u + v)_d+1]
·[1 - s_2 - (s + u + v)_3 - … - (s + u + v)_d+1]
⋯
· [1 - s_d - (s + u + v)_d+1]
s_d+1 = 0, since E is the only flat of rank d+1, and i ∈ E. That is, E ∉𝒮.
(M) - δ̅^*((M∖ i)) =… =
-[u_1 + u_2 + … + u_d+1]
·[1 - s_1 - (s + u + v)_2 - (s + u + v)_3 - … - (u + v)_d+1]
·[1 - s_2 - (s + u + v)_3 - … - (u + v)_d+1]
⋯
· [1 - s_d - (u + v)_d+1]
=
-[u_1 + u_2 + … + u_d+1]
·[1 - (s_1 + u_2 + v_2) - (s_2 + u_3 + v_3) - … - (s_d + u_d+1 + v_d+1)]
·[1 - (s_2 + u_3 + v_3) - … - (s_d + u_d+1 + v_d+1)]
⋯
· [1 - (s_d + u_d+1 + v_d+1)]
= -δ̅^* (y_i ·ι((M/i)))
As desired, we have
(M) = δ̅^*((M∖ i)) - δ̅^* (y_i ·ι((M/i))).
§.§ Piecewise polynomials and cocycles
There is an equivalent formulation of A^*(Σ) in terms of so-called Courant functions on Σ. This approach defines A^*(Σ) as a quotient of continuous piecewise polynomial functions on Σ by linear functions from N to ℝ; see Section 4 of <cit.> for more details.
This other formulation leads to the theory of tropical cocycles on Σ, introduced by François in <cit.> and elaborated upon by Gross and Shokrieh in <cit.>, in analogy to the theory of cycles on Σ from <cit.>. In particular, Z_*(Σ), the tropical cycles on Σ, consists of weighted fans on subdivisions of Σ, while C^*(Σ), the tropical cocycles on Σ, are continuous functions that are piecewise polynomial in a subdivision of Σ. In <cit.>, the authors prove that, when given the corresponding intersection products, these C^*(Σ)-algebras are isomorphic. In particular, given a morphism of fans f:Σ_1 →Σ_2, a cocycle h ∈ C^*(Σ_2) and a cycle X ∈ Z_*(Σ_2), f^*(h · X) = f^*(h) · f^*(X), where f^*: Z_*(Σ_2) → Z_*(Σ_1) is defined in Corollary 4.7 in <cit.>, which agrees with Definition 2.16 in <cit.> in the case where f is a tropical modification.
Using this theory, we describe the pullback δ^*: Z_*(B(M ∖ i)) → Z_*(B(M)) in terms of cocycles, for a matroid M of rank r on a set E and i ∈ E not a coloop. Since in our case the map δ: B(M) → B(M ∖ i) maps cones to cones, the image of weights on B(M ∖ i) under δ^* is contained in weights on B(M). Similarly, the image under δ^* of a piecewise polynomial on B(M ∖ i) is a piecewise polynomial on B(M), giving a pullback A^*(M ∖ i) → A^*(M).
Since δ̂: B̂(M) →B̂(M ∖ i) satisfies the conditions of Proposition 3.10 in <cit.>, there exists a rational function ϕ such that δ is a tropical modification along ϕ, and there is an isomorphism ϕ· (-): MW_*(B̂(M ∖ i)) → MW_*(B̂(M / i)). The inverse of this isomorphism is given by ĵ_*: MW_*(B̂(M / i)) → MW_*(B̂(M ∖ i)).
The inclusion j_*:MW_*(B(M / i)) → MW_*(B(M ∖ i)) is given by j_*(z ∩ 1_M / i) = y_i ·ι(z) ∩ 1_M ∖ i.
By Proposition 3.10 in <cit.>, ϕ is given by
ϕ(v_σ̂_F) = _M / i(F) -_M ∖ i(F)
= _M(F ∪ i) - _M(i) - _M(F)
= _M(F ∪ i) - _M(F) - 1.
If F is not a flat in M, but F ∪ i is, then _M(F ∪ i) = _M(F), thus
ϕ(v_σ̂_F) = _M(F ∪ i) - _M(F) - 1
= _M(F) - _M(F) - 1
= -1.
If F is a flat in M, then _M(F ∪ i) = _M(F) + 1, thus
ϕ(v_σ̂_F) = _M(F ∪ i) - _M(F) - 1
= _M(F) + 1 - _M(F) - 1
= 0.
Therefore, under the isomorphism MW_*(B̂(M)) ≅ MW_*(B(M)), ϕ· (-) = y_i ∩ (-), where y_i is defined in (<ref>). Finally, it is a straightforward check to see that y_i ·ι(z) ∩ 1_M ∖ i = j_*(z ∩ 1_M / i).
For cocycles y ∈ A^*(M∖ i ), δ̅^*(y) ∩ 1_M = δ^*(y ∩ 1_M ∖ i).
Since these are ring homomorphisms and both rings are generated in degree 1, we need only check this holds for Courant functions. Courant functions are uniquely determined by their values on rays; thus the claim simplifies to the statement that for all flats F ∈ℒ̂(M ∖ i), if F is a flat in M, then δ^-1(σ_F) = σ_F∪σ_F ∪ i and x_F(e̅_F) = x_F(δ(e_F∪ i)) = x_F(δ(e_F)), and if F is not a flat of M then δ^-1(σ_F) = σ_F ∪ i and x_F(e̅_F) = x_F(δ(e_F∪ i)). From the definition of δ, it is clear that δ(e_F∪ i) = δ(e_F) = e̅_F, thus proving the claim.
For a matroid M, (M) ∩ 1_M = (M).
We proceed by induction. Note that for the loop matroid M_0, i.e. the matroid on one element whose only element is a loop, (M_0) = 0 = 0 ∩ 1_M_0 = (M_0) ∩ 1_M_0, and for the isthmus M_1, i.e. the matroid on one element whose only element is independent, (M_1) = 1_M_1. Now, assume that for all matroids M' whose ground set consists of n elements (M') ∩ 1_M' = (M'), and let M be a loopless matroid whose ground set consists of n + 1 elements. If all elements of the ground set of M are coloops, then M is representable over ℂ and thus, by Theorem <ref>, the conclusion follows. Now assume there exists i ∈ M that is not a coloop. Then, by Theorem <ref>, (M) = δ̅^*((M ∖ i)) - δ̅^*(y_i ·ι((M / i))). The ground sets of M ∖ i and M / i contain n elements. Thus, by Lemma <ref> and Theorem 5.4 in <cit.>,
(M) ∩ 1_M =
= δ̅^* ((M ∖ i)) ∩ 1_M - δ̅^* (y_i ·ι ((M / i))) ∩ 1_M
= δ^* ((M ∖ i) ∩ 1_M ∖ i) - δ^*( y_i ·ι ((M / i)) ∩ 1_M ∖ i)
= δ^*(M ∖ i) - δ^* j_*((M / i) ∩ 1_M / i)
= δ^*(M ∖ i) - δ^* j_*(M / i)
= (M).
Thus, we obtain as a consequence of the computation in Theorem <ref> and Theorem <ref>:
The formula of Fife and Rincón holds for all matroids.
|
http://arxiv.org/abs/2409.02097v2 | 20240903175439 | LinFusion: 1 GPU, 1 Minute, 16K Image | [
"Songhua Liu",
"Weihao Yu",
"Zhenxiong Tan",
"Xinchao Wang"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
LinFusion: 1 GPU, 1 Minute, 16K Image
Songhua Liu, Weihao Yu, Zhenxiong Tan, Xinchao Wang
==============================================================
§ ABSTRACT
Modern diffusion models, particularly those utilizing a Transformer-based UNet for denoising, rely heavily on self-attention operations to manage complex spatial relationships, thus achieving impressive generation performance.
However, this existing paradigm faces significant challenges in generating high-resolution visual content due to its quadratic time and memory complexity with respect to the number of spatial tokens.
To address this limitation, we aim at a novel linear attention mechanism as an alternative in this paper.
Specifically, we begin our exploration from recently introduced models with linear complexity, e.g., Mamba2, RWKV6, and Gated Linear Attention, and identify two key features, attention normalization and non-causal inference, that enhance high-resolution visual generation performance.
Building on these insights, we introduce a generalized linear attention paradigm, which serves as a low-rank approximation of a wide spectrum of popular linear token mixers.
To save the training cost and better leverage pre-trained models, we initialize our models and distill the knowledge from pre-trained StableDiffusion (SD).
We find that the distilled model, termed LinFusion, achieves performance on par with or superior to the original SD after only modest training, while significantly reducing time and memory complexity.
Extensive experiments on SD-v1.5, SD-v2.1, and SD-XL demonstrate that LinFusion delivers satisfactory zero-shot cross-resolution generation performance, generating high-resolution images like 16K resolution.
Moreover, it is highly compatible with pre-trained SD components, such as ControlNet and IP-Adapter, requiring no adaptation efforts.
Codes are available https://github.com/Huage001/LinFusionhere.
§ INTRODUCTION
Recent years have witnessed significant advancements in AI-generated content (AIGC) with diffusion models <cit.>.
On the one hand, unlike classic models like GAN <cit.>, diffusion models refine noise vectors iteratively to produce high-quality results with fine details <cit.>.
On the other hand, having trained on large-scale data pairs, these models exhibit satisfactory alignment between input conditions and output results.
These capabilities have spurred recent advancements in text-to-image generation <cit.>.
Benefiting from the impressive performance and the open-source community, Stable Diffusion (SD) <cit.> stands out as one of the most popular models.
The success of models like SD can be largely attributed to their robust backbone structures for denoising.
From UNet architectures with attention layers <cit.> to Vision Transformers <cit.>, existing designs rely heavily on self-attention mechanisms to manage complex relationships between spatial tokens.
Despite their impressive performance, the quadratic time and memory complexity inherent in self-attention operations poses significant challenges for high-resolution visual generation.
For instance, as illustrated in Fig. <ref>(a), using FP16 precision, SD-v1.5 fails to generate 2048-resolution images on A100, a GPU with 80GB of memory, due to out-of-memory errors, making higher resolutions or larger models even more problematic.
To address these issues, in this paper, we aim at a novel token-mixing mechanism with linear complexity to the number of spatial tokens, offering an alternative to the classic self-attention approach.
Inspired by recently introduced models with linear complexity, such as Mamba <cit.> and Mamba2 <cit.>, which have demonstrated significant potential in sequential generation tasks, we first investigate their applicability as token mixers in diffusion models.
However, there are two drawbacks of Mamba diffusion models.
On the one hand, when a diffusion model operates at a resolution different from its training scale, our theoretical analysis reveals that the feature distribution tends to shift, leading to difficulties in cross-resolution inference.
On the other hand, diffusion models perform a denoising task rather than an auto-regressive task, allowing the model to simultaneously access all noisy spatial tokens and generate denoised tokens based on the entire input. In contrast, Mamba is fundamentally an RNN that processes tokens sequentially, meaning that the generated tokens are conditioned only on preceding tokens, a constraint termed causal restriction. Applying Mamba directly to diffusion models would impose this unnecessary causal restriction on the denoising process, which is both unwarranted and counterproductive. Although bi-directional scanning branches can somewhat alleviate this issue, the problem inevitably persists within each branch.
Focusing on the above drawbacks of Mamba for diffusion models, we propose a generalized linear attention paradigm.
Firstly, to tackle the distribution shift between the training resolution and larger inference resolutions, a normalizer for Mamba, defined by the cumulative impact of all tokens on the current token, is applied to the aggregated features, ensuring that the total impact remains consistent regardless of the input scale.
Secondly, we aim at a non-causal version of Mamba.
We start our exploration by simply removing the lower triangular causal mask applied on the forget gate and find that all tokens would end up with identical hidden states, which undermines the model's capacity.
To address this issue, we introduce distinct groups of forget gates for different tokens and propose an efficient low-rank approximation, enabling the model to be elegantly implemented in a linear-attention form.
We analyze the proposed approach technically alongside recently introduced linear-complexity token mixers such as Mamba2 <cit.>, RWKV6 <cit.>, and Gated Linear Attention <cit.> and reveal that our model can be regarded as a generalized non-causal version of these popular models.
The proposed generalized linear attention module is integrated into the architectures of SD, replacing the original self-attention layers, and the resultant model is termed as Linear-Complexity Diffusion Model, or LinFusion in short.
By only training the linear attention modules for 50k iterations in a knowledge distillation framework, LinFusion achieves performance on par with or even superior to the original SD, while significantly reducing time and memory complexity, as shown in Fig. <ref>.
Meanwhile, it delivers satisfactory zero-shot cross-resolution generation performance and can generate images at 16K resolution on a single GPU.
It is also compatible with existing components for SD, such as ControlNet <cit.> and IP-Adapter <cit.>, allowing users to inject additional controls to the proposed LinFusion flexibly without any additional training cost.
As shown in Fig. <ref>, extensive experiments on SD-v1.5, SD-v2.1, and SD-XL validate the effectiveness of the proposed model and method.
Our contributions can be summarized as follows:
* We investigate the non-causal and normalization-aware version of Mamba and propose a novel linear attention mechanism that addresses the challenges of high-resolution visual generation with diffusion models.
* Our theoretical analysis indicates that the proposed model is technically a generalized and efficient low-rank approximation of existing popular linear-complexity token mixers.
* Extensive experiments on SD demonstrate that the proposed LinFusion can achieve even better results than the original SD and exerts satisfactory zero-shot cross-resolution generation performance and compatibility with existing components for SD. To the best of our knowledge, this is the first exploration of linear-complexity token mixers on the SD series model for text-to-image generation.
§ RELATED WORKS
In this section, we review related works from two perspectives, namely efficient diffusion architectures and linear-complexity token mixers.
§.§ Efficient Diffusion Architectures
There are two main streams of work aiming at more efficient diffusion models: efficient sampling for a reduced number of sampling time-steps <cit.>, and efficient architectures for faster network inference.
This paper focuses on the latter, which is a bottleneck for generating high-resolution visual results, particularly due to the self-attention token mixers in existing diffusion backbones.
To mitigate the efficiency issue triggered by the quadratic time and memory complexity, a series of works has emerged, including DiS <cit.>, DiM <cit.>, DiG <cit.>, Diffusion-RWKV <cit.>, DiffuSSM <cit.>, and Zigma <cit.>.
These works have successfully adapted recent state space models like Mamba <cit.>, RWKV <cit.>, or Linear Attention <cit.> into diffusion architectures.
However, these architectures maintain a causal restriction for the diffusion task, processing input spatial tokens one by one, with generated tokens conditioned only on preceding tokens. In contrast, the diffusion task allows models to access all noisy tokens simultaneously, making the causal restriction unnecessary. To address this, we eliminate the causal restriction and introduce a non-causal token mixer specifically designed for the diffusion model.
Additionally, previous works have primarily focused on class-conditioned image generation.
For text-to-image generation, <cit.> propose architectural pruning for Stable Diffusion (SD) by reducing the number of UNet stages and blocks, which is orthogonal to our focus on optimizing self-attention layers.
§.§ Linear-Complexity Token Mixers
Despite the widespread adoption of Transformer <cit.> across various fields due to its superior modeling capacity, the quadratic time and memory complexity of the self-attention mechanism often leads to efficiency issues in practice.
A series of linear-complexity token mixers are thus introduced as alternatives, such as Linear Attention <cit.>, State Space Model <cit.>, and their variants including Mamba <cit.>, Mamba2 <cit.>, mLSTM <cit.>, Gated Retention <cit.>, DFW <cit.>, GateLoop <cit.>, HGRN2 <cit.>, RWKV6 <cit.>, GLA <cit.>, etc.
These models are designed for tasks requiring sequential modeling, making it non-trivial to apply them to non-causal vision problems.
Addressing this challenge is the main focus of our paper.
For visual processing tasks, beyond the direct treatment of inputs as sequences, there are concurrent works focused on non-causal token mixers with linear complexity.
MLLA <cit.> employs Linear Attention <cit.> as token mixers in vision backbones without a gating mechanism for hidden states.
In VSSD <cit.>, various input tokens share the same group of gating values.
In contrast, the model proposed in this paper relaxes these gating assumptions, offering a generalized non-causal version of various modern state-space models.
§ METHODOLOGY
§.§ Preliminary
Diffusion Models.
As a popular model for text-to-image generation, Stable Diffusion <cit.> (SD) first learns an auto-encoder (ℰ,𝒟), where the encoder ℰ maps an image x to a lower dimensional latent space: z←ℰ(x), and the decoder 𝒟 learns to decode z back to the image space x̂←𝒟(z) such that x̂ is close to the original x.
In the inference time, a Gaussian noise in the latent space z_T is sampled randomly and denoised by a UNet ϵ_θ for T steps.
The denoised latent code after the final step z_0 is decoded by 𝒟 to derive a generated image.
In training, given an image x and its corresponding text description y, ℰ is utilized to obtain its latent code, to which we add a random Gaussian noise ϵ to form its noisy version z_t with respect to the t-th step.
The UNet is trained via the noise prediction loss ℒ_simple <cit.>:
θ = arg min_θ 𝔼_z∼ℰ(x),y,ϵ∼𝒩(0,1),t[ℒ_simple], ℒ_simple=‖ϵ-ϵ_θ(z_t,t,y)‖_2^2.
The UNet in SD contains multiple self-attention layers as token mixers to handle spatial-wise relationships and multiple cross-attention layers to handle text-image relationships.
Given an input feature map in the UNet backbone X∈ℝ^n× d and weight parameters W_Q,W_K∈ℝ^d× d' and W_V∈ℝ^d× d, where n is the number of spatial tokens, d is the feature dimension, and d' is the attention dimension, self-attention can be formalized as:
Y=MV, M=softmax(QK^⊤/√(d')), Q=XW_Q, K=XW_K, V=XW_V.
We can observe from Eq. <ref> that the complexity of self-attention is quadratic with respect to n since the attention matrix M∈ℝ^n× n, we mainly focus on its alternatives in this paper and are dedicated on a novel module for token mixing with linear complexity.
Mamba.
As an alternative to Transformer <cit.>, Mamba <cit.> is proposed to handle sequential tasks with linear complexity with respect to the sequence length.
At the heart of Mamba lies the State Space Model (SSM), which can be written as:
H_i=A_i⊙ H_i-1+B_i^⊤ X_i=∑_j=1^i{(∏_k=j+1^iA_k)⊙ (B_j^⊤ X_j)}, Y_i=C_iH_i,
where i is the index of the current token in a sequence, H_i denotes the hidden state, X_i and Y_i are row vectors denoting the i-th rows of the input and output matrices respectively, A_i, B_i, and C_i are input-dependent variables, and ⊙ indicates element-wise multiplication.
§.§ Overview
[Figure: Overview of LinFusion. We replace self-attention layers in the original SD with our LinFusion modules and adopt knowledge distillation to optimize the parameters.]
In the latest version, i.e., Mamba2 <cit.>, A_i is a scalar, B_i,C_i∈ℝ^1× d', X_i,Y_i∈ℝ^1× d, and H_i∈ℝ^d'× d.
According to State-Space Duality (SSD), the computation in Eq. <ref> can be reformulated as the following expression, referred to as 1-Semiseparable Structured Masked Attention:
Y=((CB^⊤)⊙Ã)X,
where à is an n× n lower triangular matrix and Ã_ij=∏_k=j+1^iA_k for j≤ i.
Such a matrix à is known as 1-semiseparable, ensuring that Mamba2 can be implemented with linear complexity in n.
In this paper, we aim at a diffusion backbone for general text-to-image problems with linear complexity with respect to the number of image pixels.
To this end, instead of training a novel model from scratch, we initialize and distill the model from pre-trained SD.
Specifically, we utilize the SD-v1.5 model by default and substitute its self-attention—the primary source of quadratic complexity—with our proposed LinFusion modules.
Only the parameters in these modules are trainable, while the rest of the model remains frozen.
We distill knowledge from the original SD model into LinFusion such that given the same inputs, their outputs are as close as possible.
Fig. <ref> provides an overview of this streamline.
This approach offers two key benefits: (1) Training difficulty and computational overhead are significantly reduced, as the student model only needs to learn spatial relationships, without the added complexity of handling other aspects like text-image alignment; (2) The resulting model is highly compatible with existing components trained on the original SD models and their fine-tuned variations, since we only replace the self-attention layers with LinFusion modules, which are trained to be functionally similar to the original ones while maintaining the overall architecture.
Technically, to derive a linear-complexity diffusion backbone, one simple solution is to replace all the self-attention blocks with Mamba2, as shown in Fig. <ref>(a).
We apply bi-directional SSM to ensure that the current position can access information from subsequent positions.
Moreover, the self-attention modules in Stable Diffusion do not incorporate gated operations <cit.> or RMS-Norm <cit.> as used in Mamba2.
As shown in Fig. <ref>(b), we remove these structures to maintain the consistency and result in a slight improvement in performance.
In the following parts of this section, we delve into the issues of applying SSM, the core module in Mamba2, to diffusion models and accordingly introduce the key features in LinFusion: normalization and non-causality in Secs. <ref> and <ref> respectively.
Finally, in Sec. <ref>, we provide the training objectives to optimize parameters in LinFusion modules.
§.§ Normalization-Aware Mamba
In practice, we find that the SSM-based structure shown in Fig. <ref>(b) can achieve satisfactory performance if training and inference use consistent image resolutions. However, it fails when the image scales differ; we refer readers to Sec. <ref> for the experimental results.
To identify the cause of this failure, we examine the channel-wise means of the input and output feature maps, which satisfy the following proposition:
Assuming that the mean of the j-th channel in the input feature map X is μ_j, and denoting (CB^⊤)⊙Ã as M, the mean of this channel in the output feature map Y is μ_j∑_k=1^nM_ik.
The proof is straightforward.
We observe through Fig. <ref>(b) that there is non-negative activation applied on X, B, and C.
Given that A is also non-negative in Mamba2, according to Prop. <ref>, the channel-wise distributions would shift if n is inconsistent in training and inference, which further leads to distorted results.
Solving this problem requires unifying the impact of all tokens on each one to the same scale, a property inherently provided by the softmax function.
In light of this, we propose normalization-aware Mamba in this paper, enforcing that the sum of attention weights from each token equals 1, i.e., ∑_k=1^nM_ik=1, which is equivalent to applying the SSM module one more time to obtain the normalization factor Z:
Z_i=A_i⊙ Z_i-1+B_i, C'_ij=C_ij/∑_k=1^d'{C_ik⊙ Z_ik}.
The operations are illustrated in Fig. <ref>(c).
Experiments indicate that such normalization substantially improves the performance of zero-shot cross-resolution generalization.
§.§ Non-Causal Mamba
While bi-directional scanning enables a token to receive information from subsequent tokens—a crucial feature for diffusion backbones—treating feature maps as 1D sequences compromises the intrinsic spatial structures in 2D images and higher-dimensional visual content.
To address this dilemma more effectively, we focus on developing a non-causal version of Mamba in this paper.
Non-causality indicates that one token can access to all tokens for information mixing, which can be achieved by simply removing the lower triangular causal mask applied on Ã.
Thus, the recursive formula in Eq. <ref> would become H_i=∑_j=1^n{(∏_k=j+1^nA_k)⊙ (B_j^⊤ X_j)}.
We observe that H_i remains invariant with respect to i in this formula.
This implies that the hidden states of all tokens are uniform, which fundamentally undermines the intended purpose of the forget gate A.
To address this issue, we associate different groups of A to various input tokens.
In this case, A is an n× n matrix and H_i=∑_j=1^n{(∏_k=j+1^nA_ik)⊙ (B_j^⊤ X_j)}.
The Ã_ij in Eq. <ref> becomes ∏_k=j+1^nA_ik.
Compared with that in Eq. <ref>, Ã here is not necessarily 1-semiseparable.
To maintain linear complexity, we impose the assumption that à is low-rank separable, i.e., there exist input-dependent matrices F and G such that Ã=FG^⊤.
In this way, the following proposition ensures that Eq. <ref> under this circumstance can be implemented via linear attention:
Given that Ã=FG^⊤, F,G∈ℝ^n× r, and B,C∈ℝ^n× d', denoting C_i=c(X_i), B_i=b(X_i), F_i=f(X_i), and G_i=g(X_i), there exist corresponding functions f' and g' such that Eq. <ref> can be equivalently implemented as linear attention, expressed as Y=f'(X)g'(X)^⊤ X.
Given existing conditions, we have:
(CB^⊤)⊙Ã =[(c(X_i)b^⊤ (X_j))⊙ (f(X_i)g^⊤ (X_j))]_i,j
=[(∑_u=1^d'{c(X_i)_ub(X_j)_u})(∑_v=1^r{f(X_i)_vg(X_j)_v})]_i,j
=[∑_u=1^d'∑_v=1^r{(c(X_i)_uf(X_i)_v)(b(X_j)_ug(X_j)_v)}]_i,j
=[(c(X_i)⊗ f(X_i))(b(X_j)⊗ g(X_j))^⊤]_i,j,
where ⊗ denotes Kronecker product.
Defining f'(X_i)=c(X_i)⊗ f(X_i) and g'(X_i)=b(X_i)⊗ g(X_i), we derive Y=f'(X)g'(X)^⊤ X.
In practice, we adopt two MLPs to mimic the functionalities of f' and g'.
Combining with the normalization operations mentioned in Sec. <ref>, we derive an elegant structure shown in Fig. <ref>(d).
Not only that, we further demonstrate that the form of linear attention described in Proposition <ref> can be extended to the more general case where Ã_ij is a d'-dimension vector rather than a scalar:
Given that Ã∈ℝ^d'× n× n, if for each 1≤ u≤ d', Ã_u is low-rank separable: Ã_u=F_uG^⊤_u, where F_u,G_u∈ℝ^n× r, F_uiv=f(X_i)_uv, and G_ujv=g(X_j)_uv, there exist corresponding functions f' and g' such that the computation Y_i=C_iH_i=C_i∑_j=1^n{Ã_:ij⊙ (B_j^⊤ X_j)} can be equivalently implemented as linear attention, expressed as Y_i=f'(X_i)g'(X)^⊤ X, where Ã_:ij is a column vector and can broadcast to a d'× d matrix.
Given existing conditions, we have:
Y_i =∑_u=1^d'[c(X_i)_u{∑_j=1^n∑_v=1^r(f(X_i)_uvg(X_j)_uvb(X_j)_uX_j)}]
=∑_u=1^d'∑_v=1^r[c(X_i)_uf(X_i)_uv∑_j=1^n{g(X_j)_uvb(X_j)_uX_j}]
=vec(c(X_i)· f(X_i))[vec(b(X_j)· g(X_j))]_j^⊤X,
where f(X_i)=F_:i: and g(X_j)=G_:j: are d'× r matrices, · denotes element-wise multiplication with broadcasting, and vec represents flatting a matrix into a row vector.
Defining f'(X_i)=vec(c(X_i)· f(X_i)) and g'(X_i)=vec(b(X_j)· g(X_j)), we derive Y=f'(X)g'(X)^⊤ X.
From this point of view, the proposed structure can be deemed as a generalized linear attention and a non-causal form of recent linear-complexity sequential models, including Mamba2 <cit.>, RWKV6 <cit.>, GLA <cit.>, etc.
In Tab. <ref>, we provide a summary of the parameterization in recent works for A_i.
Model | Parameterization of A_i | Causal
Mamba2 <cit.> | A_i ∈ ℝ | Yes
mLSTM <cit.> | A_i ∈ ℝ | Yes
Gated Retention <cit.> | A_i ∈ ℝ | Yes
GateLoop <cit.> | A_i ∈ ℝ^d' | Yes
HGRN2 <cit.> | A_i ∈ ℝ^d' | Yes
RWKV6 <cit.> | A_i ∈ ℝ^d' | Yes
Gated Linear Attention <cit.> | A_i ∈ ℝ^d' | Yes
MLLA <cit.> | A_ij = 1 | No
VSSD <cit.> | A_ij ∈ ℝ | No
Generalized Linear Attention | Ã_ij ∈ ℝ^d' | No
A summary of the parameterization in recent linear token mixers for A_i, partially adapted from <cit.>.
§.§ Training Objectives
In this paper, we replace all self-attention layers in the original SD with LinFusion modules.
Only the parameters within these modules are trained, while all others remain frozen.
To ensure that LinFusion closely mimics the original functionality of self-attention, we augment the standard noise prediction loss ℒ_simple in Eq. <ref> with additional losses.
Specifically, we introduce a knowledge distillation loss ℒ_kd to align the final outputs of the student and teacher models, and a feature matching loss ℒ_feat to match the outputs of each LinFusion module and the corresponding self-attention layer.
The training objectives can be written as:
θ = arg min_θ 𝔼_z∼ℰ(x),y,ϵ∼𝒩(0,1),t[ℒ_simple+αℒ_kd+βℒ_feat],
ℒ_kd=‖ϵ_θ(z_t,t,y)-ϵ_θ_org(z_t, t,y)‖_2^2, ℒ_feat=1/L∑_l=1^L‖ϵ_θ^(l)(z_t,t,y)-ϵ_θ_org^(l)(z_t,t,y)‖_2^2,
where α and β are hyper-parameters controlling the weights of the respective loss terms, θ_org represents parameters of the original SD, L is the number of LinFusion/self-attention modules, and the superscript ^(l) refers to the output of the l-th one in the diffusion backbone.
§ EXPERIMENTS
§.§ Implementation Details
We present qualitative results on SD-v1.5, SD-v2.1, and SD-XL in Fig. <ref>
and mainly conduct experiments on SD-v1.5 in this section.
There are 16 self-attention layers in SD-v1.5 and we replace them with LinFusion modules proposed in this paper.
Functions f' and g' mentioned in Proposition <ref> are implemented as MLPs, each consisting of a linear branch and a non-linear branch with one block; their results are added to form the outputs of f' and g'.
The parameters of the linear branch in f' and g' are initialized as W_Q and W_K respectively, while the outputs of the non-linear branch are initialized as 0.
We use only 169k images in LAION <cit.> with aesthetics scores larger than 6.5 for training and adopt the BLIP2 <cit.> image captioning model to regenerate the textual descriptions.
Both hyper-parameters, α and β, are set as 0.5, following the approach taken in <cit.>, which also focuses on architectural distillation of SD.
The model is optimized using AdamW <cit.> with a learning rate of 10^-4.
Training is conducted on 8 RTX6000Ada GPUs with a total batch size of 96 under 512×512 resolution for 100k iterations, requiring ∼1 day to complete.
The efficiency evaluations are conducted on a single NVIDIA A100-SXM4-80GB GPU.
§.§ Main Results
Ablation Studies.
To demonstrate the effectiveness of the proposed LinFusion, we report the comparison results with alternative solutions such as those shown in Fig. <ref>(a), (b) and (c).
We follow the convention in previous works focusing on text-to-image generation <cit.> and conduct quantitative evaluation on the COCO benchmark <cit.> containing 30k text prompts.
The metrics are FID <cit.> against the COCO2014 test dataset and the cosine similarity in the CLIP-ViT-G feature space <cit.>.
We also report the running time per image with 50 denoising steps and the GPU memory consumption during inference for efficiency comparisons.
Results under 512×512 resolution are shown in Tab. <ref>.
Mitigating Structural Difference.
We begin our exploration from the original Mamba2 structure <cit.> with bi-directional scanning, i.e., Fig. <ref>(a), and try removing the gating and RMS-Norm, i.e., Fig. <ref>(b), to maintain a consistent holistic structure with the self-attention layer in the original SD.
In this way, the only difference from the original SD lies in using SSM instead of self-attention for token mixing.
We observe that such structural alignment is beneficial for the performance.
Normalization and Non-Causality.
We then apply the proposed normalization operation and the non-causal treatment sequentially, corresponding to Fig. <ref>(c) and (d).
Although results in Tab. <ref> indicate that normalization would slightly hurt the performance, we will show in the following Tab. <ref> that it is crucial for generating images with resolutions unseen during training.
Further adding the proposed non-causal treatment, we obtain results better than Fig. <ref>(b).
We also compare the proposed non-causal operation with the simplified case mentioned in Sec. <ref>, achieved by directly removing the lower triangular causal mask applied on Ã, which results in a 1-rank matrix, i.e., various tokens share the same group of forget gates.
The inferior results demonstrate the effectiveness of the proposed generalized linear attention.
Attention Visualization.
In Fig. <ref>, we visualize the self-attention maps yielded by various methods, including the original SD, bi-directional SSM, linear attention with shared forget gates, and generalized linear attention in LinFusion.
Results indicate that our method captures broader-range spatial dependencies and best matches the predictions of the original SD.
Knowledge Distillation and Feature Matching.
We finally apply loss terms ℒ_kd and ℒ_feat in Eq. <ref>, which enhance the performance further and even surpass the SD teacher.
Cross-Resolution Inference.
It is desirable for a diffusion model to generate images of resolutions unseen during training, a feature of the original SD.
Since modules other than LinFusion are pre-trained and fixed in our work, normalization is a key component for this feature to maintain consistent feature distributions for training and inference.
We report the results of 1024×1024 resolution in Tab. <ref>, which indicate that the conclusion holds for all the basic structures such as Mamba2, Mamba2 without gating and RMS-Norm, and the proposed generalized linear attention.
Fig. <ref> shows a qualitative example, where results without normalization are meaningless.
Ultrahigh-Resolution Generation.
As discussed in <cit.>, directly applying diffusion models trained on low resolutions to higher-resolution generation can result in content distortion and duplication.
In this paper, we address the challenge by generating at a low resolution first, based on which higher-scale images are produced using SDEdit <cit.>.
Note that in this work we aim at a linear-complexity model computationally enabling such ultrahigh-resolution generation as shown in Fig. <ref>.
Dedicated designs are left as future directions.
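As a sketch of this two-stage recipe, one can generate a low-resolution image first and then refine an upscaled copy with the diffusers img2img pipeline, which implements SDEdit-style partial re-noising; the model identifier, resolutions, and strength below are placeholders rather than the paper's settings.

    import torch
    from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "path/to/sd-with-linfusion", torch_dtype=torch.float16).to("cuda")
    base = pipe(prompt="a castle at sunset", height=1024, width=1024).images[0]

    refiner = StableDiffusionImg2ImgPipeline(**pipe.components)
    upscaled = base.resize((4096, 4096))            # naive upscaling
    final = refiner(prompt="a castle at sunset", image=upscaled,
                    strength=0.5).images[0]         # partial re-noising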
§.§ Empirical Extensions
The proposed LinFusion is highly compatible with various components/plugins for SD, such as ControlNet <cit.>, IP-Adapter <cit.>, and LoRA <cit.>, without any further training or adaptation.
We present qualitative results in Fig. <ref> and refer readers to the appendix for more results.
The following quantitative evaluations indicate comparable performance of LinFusion with the original SD.
ControlNet.
ControlNet <cit.> introduces plug-and-play components to SD for additional conditions, such as edge, depth, and semantic map.
We substitute SD with the proposed LinFusion and compare FID, CLIP score, and the similarity between the input conditions and those extracted from the generated images against the original SD.
The results are shown in Tab. <ref>.
IP-Adapter.
Personalized text-to-image generation <cit.> is a popular application of SD, which focuses on generating images that simultaneously follow both input identities and textual descriptions.
IP-Adapter <cit.> offers a zero-shot solution that trains a mapper from the image space to the condition space of SD, so that it can handle both image and text conditions.
We demonstrate that IP-Adapter trained on SD can be used directly on LinFusion.
The performance on the DreamBooth dataset <cit.>, containing 30 identities and 25 text prompts to form 750 test cases in total, is shown in Tab. <ref>.
We use 5 random seeds for each case and report the averaged CLIP image similarity, DINO <cit.> image similarity, and CLIP text similarity.
LoRA.
Low-rank adapters (LoRA) <cit.> apply low-rank matrices to the weights of a base model so that it can be adapted to different tasks or purposes.
For instance, <cit.> introduce LCM-LoRA such that the pre-trained SD can be used for LCM inference with only a few denoising steps <cit.>.
Here, we directly apply the LoRA weights from the LCM-LoRA model to LinFusion.
The performance on the COCO benchmark is shown in Tab. <ref>.
§ CONCLUSION
This paper introduces a diffusion backbone termed LinFusion for text-to-image generation with linear complexity in the number of pixels.
At the heart of LinFusion lies a generalized linear attention mechanism, distinguished by its normalization-aware and non-causal operations, key aspects overlooked by recent linear-complexity token mixers such as Mamba, Mamba2, and GLA.
We reveal theoretically that the proposed paradigm serves as a general low-rank approximation for the non-causal variants of recent models.
Based on Stable Diffusion (SD), LinFusion modules can, after knowledge distillation, seamlessly replace the self-attention layers in the original model, ensuring that LinFusion is highly compatible with existing components for Stable Diffusion, such as ControlNet, IP-Adapter, and LoRA, without any further training effort.
Extensive experiments on SD-v1.5, SD-v2.1, and SD-XL demonstrate that the proposed model outperforms existing baselines and achieves performance on par with, or better than, the original SD with significantly reduced computational overhead.
On a single GPU, it can accommodate image generation with resolutions up to 16K.
|
http://arxiv.org/abs/2409.03257v1 | 20240905053129 | Understanding LLM Development Through Longitudinal Study: Insights from the Open Ko-LLM Leaderboard | [
"Chanjun Park",
"Hyeonwoo Kim"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
Understanding LLM Development Through Longitudinal Study: Insights from the Open Ko-LLM Leaderboard
Chanjun Park, Hyeonwoo Kim
=====================================================================================================
§ ABSTRACT
This paper conducts a longitudinal study over eleven months to address the limitations of prior research on the Open Ko-LLM Leaderboard, which relied on empirical studies with restricted observation periods of only five months. By extending the analysis duration, we aim to provide a more comprehensive understanding of the progression in developing Korean large language models (LLMs). Our study is guided by three primary research questions: (1) What are the specific challenges in improving LLM performance across diverse tasks on the Open Ko-LLM Leaderboard over time? (2) How does model size impact task performance correlations across various benchmarks? (3) How have the patterns in leaderboard rankings shifted over time on the Open Ko-LLM Leaderboard? By analyzing 1,769 models over this period, our research offers a comprehensive examination of the ongoing advancements in LLMs and the evolving nature of evaluation frameworks.
§ INTRODUCTION
The rapid advancement of large language models (LLMs) <cit.> has led to the creation of various leaderboards designed to evaluate their performance across a wide range of tasks <cit.>. Among these, the Open LLM Leaderboard <cit.> developed by Hugging Face <cit.> has achieved significant global recognition. In the context of Korean language models, the Open Ko-LLM Leaderboard <cit.> was established to specifically assess LLM performance within the Korean language environment.
While previous analyses of the Open Ko-LLM Leaderboard <cit.> have provided valuable insights into LLM performance, they have been constrained to observation periods of only five months, limiting their ability to capture long-term trends. To better understand the ongoing evolution and inherent challenges in LLM development, a more comprehensive and extended analysis is required. This paper addresses this gap by conducting a detailed longitudinal study of the Open Ko-LLM Leaderboard, guided by three primary research questions:
First, we analyze the longitudinal changes in performance across five tasks monitored by the Open Ko-LLM Leaderboard. These tasks are designed to evaluate various capabilities of LLMs, including reasoning, natural language understanding, and common sense knowledge. By examining data collected over an eleven-month period, this study aims to identify which capabilities have presented the greatest challenges for LLM developers, which tasks have reached performance saturation rapidly, and which tasks continue to pose significant difficulties. This analysis will provide quantitative insights into performance trends across different tasks, thereby guiding targeted research efforts and highlighting key areas that require further advancement to push the boundaries of model development.
Second, we explore the correlations between different tasks based on model size. This aspect of the study examines how the performance across different tasks varies depending on the scale of the model. Understanding these correlations will provide insights into the interaction between model capacity and task performance, offering a deeper understanding of how scaling influences overall effectiveness across tasks.
Third, we examine the evolution of leaderboard dynamics from the initial stages to the present by focusing on three key aspects: the correlations between task performances in the early months compared to the entire eleven-month period, the temporal changes in performance based on model type, and the shifts in performance relative to model size. This comprehensive analysis offers insights into the evolving interplay among tasks and the influence of various model characteristics on LLM performance throughout different phases of development.
§ EMPIRICAL ANALYSIS
§.§ Challenges in Enhancing Task Performance Over Time
What are the specific challenges in improving LLM performance across diverse tasks on the Open Ko-LLM Leaderboard over time?. To investigate this question, we conducted a comprehensive analysis of performance trends over an eleven-month period across all tasks on the Open Ko-LLM Leaderboard, including Ko-HellaSwag (commonsense reasoning)<cit.>, Ko-ARC (commonsense and scientific reasoning)<cit.>, Ko-MMLU (multitask language understanding and domain knowledge)<cit.>, Ko-CommonGEN V2 (commonsense generation)<cit.>, and TruthfulQA (truthfulness) <cit.>.
Figure <ref> and Table <ref> show the varying performance patterns of LLMs across these tasks over the eleven-month period. Certain tasks, such as Ko-HellaSwag and Ko-TruthfulQA, exhibit rapid improvements in performance and early saturation. Specifically, Ko-HellaSwag reached a score of 50 almost immediately and achieved 80 by week 26, while Ko-TruthfulQA showed comparable progress, reaching a score of 80 within 25 weeks. These trends indicate that current LLMs are particularly well-suited for tasks requiring straightforward commonsense reasoning and truthfulness, suggesting a relatively lower barrier to achieving performance enhancements in these domains.
Conversely, tasks such as Ko-MMLU and Ko-CommonGEN V2 show slower, more gradual improvements without clear signs of saturation, highlighting their increased complexity and the deeper understanding required from LLMs. Ko-MMLU took 13 weeks to reach a score of 50 and then stabilized around 60 after 26 weeks, indicating a limit to the current models' capabilities. Similarly, Ko-CommonGEN V2, despite reaching a score of 50 relatively quickly, showed minimal progress beyond 60. These patterns highlight the significant challenges LLMs face in tasks that demand complex reasoning and specialized knowledge, suggesting these are important areas for further research.
The initial rapid gains in Ko-ARC, followed by minimal progress beyond a score of 60 after 17 weeks, indicate that while LLMs can quickly adapt to certain tasks, their progress is constrained by the need for more complex reasoning skills. This underscores the importance of developing more challenging benchmarks to better evaluate the limitations and capabilities of LLMs, especially in tasks that require more advanced forms of reasoning.
Overall, these findings emphasize the need to include a broad range of complex tasks to comprehensively assess LLM capabilities. While some tasks demonstrate rapid performance saturation, others present ongoing challenges, serving as essential benchmarks for guiding future advancements in LLM development.
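For illustration, the saturation timings reported above can be recomputed from a leaderboard export along the following lines; the file name and column layout ("week", "task", "score") are assumptions.

    import pandas as pd

    df = pd.read_csv("ko_llm_leaderboard.csv")
    best = (df.groupby(["task", "week"])["score"].max()
              .groupby(level="task").cummax()        # running best per task
              .reset_index(name="best_score"))

    def weeks_to_reach(task, threshold):
        hit = best[(best["task"] == task) & (best["best_score"] >= threshold)]
        return int(hit["week"].min()) if not hit.empty else None

    for task in ["Ko-HellaSwag", "Ko-ARC", "Ko-MMLU",
                 "Ko-CommonGEN V2", "Ko-TruthfulQA"]:
        print(task, weeks_to_reach(task, 50), weeks_to_reach(task, 80))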
§.§ The Influence of Model Size on Task Performance Correlations
How does model size impact task performance correlations across various benchmarks?. To investigate this question, we analyze how model size affects performance improvements across different tasks, using a framework similar to previous studies <cit.>. For this analysis, models were divided into three size categories: under 3 billion parameters, 3 to 7 billion parameters, and 7 to 14 billion parameters. This categorization allows for a detailed examination of how scaling impacts task performance.
Figure <ref> illustrates distinct patterns in task performance correlations depending on model size. Smaller models (under 3 billion parameters) show low or even negative correlations between certain tasks, such as Ko-TruthfulQA and Ko-CommonGen V2, and other tasks. This suggests that smaller models struggle to improve consistently across multiple capabilities, indicating that advancements in one area do not necessarily lead to improvements in others. Consequently, these models tend to have a fragmented skill set, making them less suitable for a comprehensive evaluation of LLM performance.
In contrast, larger models demonstrate higher correlations across most tasks, suggesting that increasing model size results in a more effective integration of various capabilities. For example, models in the 7 to 14 billion parameter category exhibit stronger positive correlations across a majority of tasks, especially those requiring advanced reasoning. This trend indicates that scaling up model size not only enhances performance on individual tasks but also supports a more cohesive development of capabilities, enabling more consistent performance improvements across a wide range of tasks.
These findings highlight the importance of model size in achieving balanced performance across a range of tasks. Smaller models, with their inconsistent performance across tasks, suggest a limitation in their ability to generalize learning effectively. In contrast, the positive correlations observed in larger models imply that increasing model size fosters a more comprehensive understanding and transfer of knowledge across different domains. This insight is crucial for future LLM development, as it underscores the need to consider model size not just for boosting individual task performance, but also for promoting a more integrated and holistic enhancement of capabilities.
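A minimal sketch of this bucketed correlation analysis, assuming one row per model with its parameter count in billions and per-task scores (the file and column names are hypothetical), is as follows.

    import pandas as pd

    tasks = ["Ko-HellaSwag", "Ko-ARC", "Ko-MMLU",
             "Ko-CommonGEN V2", "Ko-TruthfulQA"]
    models = pd.read_csv("ko_llm_models.csv")
    models["size_bucket"] = pd.cut(models["params_b"], bins=[0, 3, 7, 14],
                                   labels=["<3B", "3-7B", "7-14B"])
    for bucket, group in models.groupby("size_bucket", observed=True):
        print(f"--- {bucket} (n={len(group)}) ---")
        print(group[tasks].corr().round(2))   # pairwise Pearson correlations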
§.§ Temporal Shifts in Leaderboard Ranking Patterns
How have the patterns in leaderboard rankings shifted over time on the Open Ko-LLM Leaderboard?. To investigate this question, we extended our analysis to an eleven-month period to see if the initial trends, defined as those observed during the initial five months in the previous study by <cit.>, remained consistent or if new patterns emerged over time. This longer timeframe allows us to capture shifts in model performance and ranking dynamics.
Task Correlations Over Time. Figure <ref> shows the correlation analysis between tasks during the initial phases of the leaderboard and over the full eleven-month period. A notable increase was observed in the correlation between Ko-TruthfulQA and other tasks, especially Ko-HellaSwag. This correlation, initially very low at 0.01, rose significantly to 0.5 over time. This change suggests that as higher-performing models, particularly those with 7 billion parameters or more, were introduced, the alignment between tasks became stronger. For most other tasks, correlations remained relatively stable, reflecting their initial patterns.
Performance Trends by Model Type. Figure <ref> presents the performance trends over time for different model types. As noted in previous research <cit.>, improvements in instruction-tuned models typically lagged behind those of pretrained models by about one week. When a pretrained model showed a significant performance boost, instruction-tuned models followed with a similar increase roughly one week later. This pattern persisted throughout the entire period analyzed, indicating a reliance of instruction-tuned models on the advancements made by pretrained models. After April 2024, the performance of pretrained models stabilized, leading to a corresponding lack of progress in both instruction-tuned and RL-tuned models. This trend indicates the fundamental role of pretrained models in driving overall performance gains in LLMs and suggests that further improvements in pretrained models are necessary for advancing model capabilities.
Performance Trends Across Model Sizes. Figure <ref> shows performance variations by model size. Models in the 0-3B range exhibited minimal improvement throughout the leaderboard period, indicating inherent scalability limitations. Similarly, models in the 3-7B range initially demonstrated gains, but their progress stabilized around five months in (April 2024 to August 2024), revealing similar scalability constraints.
Larger models in the 7-14B range showed steady performance improvements during the early phase of the leaderboard, continuing throughout the entire analysis period. However, after April 2024, their performance also reached a saturation point. This stagnation is likely due to the absence of new, high-performing Korean pretrained models, a trend also evident in the analysis of different model types in Figure <ref>.
These findings emphasize that improving LLM performance largely depends on advancements in pretrained models. The leaderboard analysis indicates that, without new breakthroughs in pretrained models, further improvements are limited. This highlights the essential role of continuous innovation in pretrained models for advancing LLM performance.
§ CONCLUSION
This study provides a longitudinal analysis of the Open Ko-LLM Leaderboard, revealing key performance trends: smaller models face scalability limitations, while larger models experience saturation without advancements in pretrained models. These findings highlight the need for continuous innovation in pretrained models to enhance LLM capabilities. Additionally, the study shows that analyzing leaderboard data offers valuable insights into the evolving dynamics of LLM performance.
§ ACKNOWLEDGMENTS
We sincerely thank the National Information Society Agency (NIA), Korea Telecom (KT), and Flitto for their support. We also extend our gratitude to the Korea University NLP & AI Lab, particularly Professor Heuiseok Lim and Jaehyung Seo, for their valuable data contributions, which have greatly enhanced the robustness of the leaderboard. Our appreciation goes to the Hugging Face teams, especially Clémentine Fourrier, Lewis Tunstall, Omar Sanseviero, and Philipp Schmid, for their assistance. We would like to thank SeongHwan Cho for his contributions to the leaderboard development, and Sanghoon Kim for his contributions to the leaderboard infrastructure. Special thanks to Hyunbyung Park for his initial contributions to Ko-H5.
We are also grateful to Professor Harksoo Kim from Konkuk University, Professor Hwanjo Yu from Pohang University of Science and Technology, Professor Sangkeun Jung from Chungnam National University, and Professor Alice Oh from KAIST for their insightful advice on the Open Ko-LLM Leaderboard. Finally, we deeply appreciate the open-source community for their invaluable feedback and contributions.
This work was supported by Institute of Information & Communications Technology Planning & Evaluation(IITP) grant funded by the Korea government(MSIT) (No. RS-2024-00338140, Development of learning and utilization technology to reflect sustainability of generative language models and up-to-dateness over time).
§ LIMITATIONS
While this study provides valuable insights into the evaluation of LLMs, several limitations should be acknowledged. First, our analysis is primarily based on data from the Open Ko-LLM Leaderboard. Although this leaderboard offers extensive coverage of various tasks, it may not fully represent the complete spectrum of challenges and scenarios relevant to LLM performance, particularly in specialized or emerging domains.
Additionally, the focus on Korean language models may restrict the generalizability of our findings to other languages and cultural contexts. The linguistic and cultural nuances specific to Korean may not entirely translate to other languages, potentially limiting the applicability of our conclusions.
Furthermore, our study predominantly examines the relationship between model size and performance but does not explore other factors, such as training data diversity or the impact of different fine-tuning techniques, which could also significantly influence model outcomes. Future research should aim to address these gaps by incorporating a broader range of tasks, languages, and evaluation metrics. Expanding the scope of analysis to include models trained in different linguistic and cultural settings, as well as exploring the impact of varied training methodologies, would enhance the robustness and applicability of the findings.
§ ETHICS STATEMENT
In conducting this research, we adhered to the highest ethical standards, ensuring that all data used in the evaluation was sourced responsibly and in compliance with relevant regulations. We are committed to transparency and integrity in our research practices, and we have made our methods and findings available to the community for further scrutiny and development. We also acknowledge the importance of considering the societal impacts of LLMs, particularly in ensuring that their development and deployment are aligned with ethical principles that promote fairness, inclusivity, and accountability.
§ OPEN KO-LLM LEADERBOARD
The Open Ko-LLM Leaderboard <cit.> is a pioneering platform designed to evaluate large language models (LLMs) specifically in the Korean language, addressing the limitations of predominantly English-focused benchmarks. This leaderboard mirrors the structure of the globally recognized Open LLM Leaderboard by Hugging Face <cit.>, ensuring consistency and comparability across languages. It is built on two key principles: alignment with the English leaderboard and the use of private test sets to avoid data contamination, thereby enhancing evaluation robustness.
The leaderboard employs the Ko-H5 benchmark, comprising five tasks that assess various aspects of language understanding and generation in Korean. These tasks are designed to comprehensively evaluate LLM capabilities. The first task, Ko-HellaSwag <cit.>, tests commonsense reasoning by requiring models to complete sentences contextually and logically. The second task, Ko-ARC <cit.>, adapted from the English ARC, evaluates both commonsense and scientific reasoning through multiple-choice questions. Ko-MMLU <cit.>, the third task, assesses multitask language understanding and domain knowledge across various subjects, requiring models to respond accurately to questions from different domains. The fourth task, Ko-CommonGen V2 <cit.>, focuses on commonsense generation, where models must create coherent sentences from given concepts, testing their ability to connect common knowledge meaningfully. Lastly, Ko-TruthfulQA <cit.> evaluates a model's ability to provide truthful and accurate responses, crucial for assessing the factual integrity of LLMs in real-world scenarios.
Through the Ko-H5 benchmark, the Open Ko-LLM Leaderboard provides a robust framework for evaluating Korean LLMs and promotes linguistic diversity in LLM evaluation. By incorporating tasks that reflect Korean linguistic and cultural nuances, the leaderboard offers valuable insights into LLM performance beyond English, encouraging a more inclusive approach to language model evaluation.
§ ADDITIONAL ANALYSIS
Figure <ref> presents the monthly distribution of submissions across different model types on the Open Ko-LLM leaderboard. Initially, pretrained models constituted 37% of all submissions, but this proportion declined sharply over time, with no pretrained models submitted by August 2024. This trend signals a diminishing focus on pretrained models within the community, which is concerning given their foundational importance discussed in Section <ref>. Therefore, a renewed emphasis on fostering interest and engagement with pretrained models could help address this emerging gap.
On the other hand, instruction-tuned models, which started at 61%, consistently dominated the submissions, maintaining a steady presence of 70-80% each month. This trend suggests that the community perceives instruction-tuned models as highly effective or suitable for the tasks evaluated. Additionally, RL-tuned models, though initially making up only 2% of submissions, gradually increased to a peak of 29%, reflecting a growing interest in exploring reinforcement learning approaches within the leaderboard context. This variety indicates a healthy exploration of diverse model types, but also highlights areas where community focus could be broadened or rebalanced.
In addition, Table <ref> presents the monthly statistics for both the number of model submissions and the number of completed model evaluations. The Model Submissions Count refers to the total number of models submitted to the leaderboard each month. In contrast, the Model Evaluation Count represents the number of these submitted models that successfully completed the evaluation process.
The discrepancy between the Model Submissions Count and the Model Evaluation Count is due to instances where some models fail to complete the evaluation phase on the leaderboard. This failure can occur for several reasons, such as models being too large to be processed within the available computational resources or issues related to library support and compatibility. As a result, not all submitted models are evaluated successfully, highlighting potential challenges and areas for improvement in handling diverse model architectures on the leaderboard.
|
http://arxiv.org/abs/2409.03120v1 | 20240904231053 | Approximate Environment Decompositions for Robot Coverage Planning using Submodular Set Cover | [
"Megnath Ramesh",
"Frank Imeson",
"Baris Fidan",
"Stephen L. Smith"
] | cs.RO | [
"cs.RO"
] |
Approximate Environment Decompositions for Robot Coverage Planning using Submodular Set Cover
Megnath Ramesh, Frank Imeson, Baris Fidan, Stephen L. Smith
==============================================================================================
§ ABSTRACT
In this paper, we investigate the problem of decomposing 2D environments for robot coverage planning. Coverage path planning (CPP) involves computing a cost-minimizing path for a robot equipped with a coverage or sensing tool so that the tool visits all points in the environment. CPP is an NP-Hard problem, so existing approaches simplify the problem by decomposing the environment into the minimum number of sectors. Sectors are sub-regions of the environment that can each be covered using a lawnmower path (i.e., along parallel straight-line paths) oriented at an angle. However, traditional methods either limit the coverage orientations to be axis-parallel (horizontal/vertical) or provide no guarantees on the number of sectors in the decomposition. We introduce an approach to decompose the environment into possibly overlapping rectangular sectors. We provide an approximation guarantee on the number of sectors computed using our approach for a given environment. We do this by leveraging the submodular property of the sector coverage function, which enables us to formulate the decomposition problem as a submodular set cover (SSC) problem with well-known approximation guarantees for the greedy algorithm. Our approach improves upon existing coverage planning methods, as demonstrated through an evaluation using maps of complex real-world environments.
§ INTRODUCTION
Coverage path planning (CPP) is the problem of generating a minimum cost path for a robot equipped with a sensing or coverage tool such that the tool visits all points of the robot's environment <cit.>. CPP has a wide range of applications in cleaning <cit.>, agriculture <cit.>, and surveillance tasks for search-and-rescue robots <cit.>. CPP is an NP-Hard problem <cit.> and thus research is primarily focused on computing approximations to the optimal coverage path. Planning approaches in literature typically use a two-step framework to make CPP more tractable, albeit at the cost of solution quality. This framework involves: (i) decomposing the environment into sub-regions which can each be individually covered, and (ii) computing a visitation order or a tour of the sub-regions. In this work, we refer to these sub-regions as "sectors". The environment decomposition step is critical as it determines the majority of the coverage path, and also affects the cost of the tour connecting the sectors. Therefore, it is important to compute decompositions that account for these effects on the coverage path.
In literature, CPP algorithms are grouped into exact or approximate approaches depending on the choice of environment decomposition. Exact approaches usually involve decomposing the environment into convex (or monotone) sectors <cit.>. Each sector is then covered by a lawnmower path (i.e. using parallel coverage lines) along an optimal orientation as determined by the geometry of the sector. However, when applied to complex environments with irregular boundaries, exact CPP approaches are susceptible to over-decomposition where some sectors are thinner than the coverage tool. This is a consequence of covering the entire environment and results in many instances of double coverage where the robot returns to areas it has already covered. Additionally, these approaches provide few guarantees on the number of sectors in the decomposition.
In contrast, approximate CPP approaches use grid decompositions to cover areas of the environment that are reachable by the coverage tool. The decomposition is composed of non-overlapping (usually square) grid cells of size equal to the coverage tool <cit.>. The CPP problem is then to compute an optimal tour that visits each grid cell and minimizes the amount of double coverage. In our previous works <cit.>, we proposed an approach that minimizes double coverage by minimizing the number of coverage lines (i.e. straight-line paths). However, using grid cells constrains the robot to cover the environment along axis-parallel (horizontal or vertical) orientations, leading to “staircase-like" coverage paths which are sub-optimal. We look to achieve a decomposition that addresses the shortcomings of both exact and approximate approaches by (i) preventing over-decomposition and (ii) removing axis-parallel constraints.
In this paper, we propose a decomposition of the environment into possibly overlapping rectangular sectors. We choose rectangles for the following reasons: (i) the lawnmower path to cover a rectangle can be optimally oriented along the longest edge <cit.>, and (ii) one can efficiently identify the largest rectangle in an environment <cit.>. We refer to this as the sector decomposition problem. Inspired by works in sensor coverage <cit.>, we leverage the submodularity property of covering the environment with sectors to design our solution approach. Specifically, we show that the sector decomposition problem is an instance of the submodular set cover (SSC) problem, which has been studied extensively in literature <cit.>. In our proposed approach, we use the result that the greedy algorithm for SSC has a well-known approximation guarantee <cit.>.
Contributions: Our specific contributions are as follows:
* We formulate the sector decomposition problem that aims to minimize the number of sectors and show that this is an instance of the submodular set cover problem.
* We propose a greedy algorithm to solve the sector decomposition problem, for which we provide an approximation guarantee on the number of sectors following that of <cit.>.
* Finally, we present simulation results using maps of real-world environments and perform comparisons against state-of-the-art coverage planning approaches.
The paper is organized as follows. In Section <ref>, we provide some background on the submodular set cover (SSC) problem. In Section <ref>, we introduce the sector decomposition problem. In Section <ref>, we motivate the minimization of sectors and show that the sector decomposition problem is an instance of SSC. In Section <ref>, we present our solution approach that greedily computes a sector decomposition for an environment. In Section <ref>, we briefly describe how the decomposition is used to compute a coverage path for the environment. Finally, in Section <ref>, we provide the results of generating coverage paths for real-world environments using the proposed approach.
§ PRELIMINARIES
In this section, we provide a brief background on submodular functions and the submodular set cover (SSC) problem on which our decomposition approach is based.
§.§ Submodular functions
Let us consider a set X (not necessarily finite) and let the set 2^X be the power set of X, i.e., the set of all subsets of X. A function f : 2^X →ℝ_≥ 0 is submodular if for all A ⊆ B ⊆ X and x ∈ X ∖ B, we have
f(A ∪{x}) - f(A) ≥ f(B ∪{x}) - f(B).
This is known as the property of diminishing returns. Equivalently, a function f : 2^X →ℝ_≥ 0 is submodular if for every A,B ⊆ X,
f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B).
In addition to submodularity, we consider functions f that possess the following additional properties:
* Monotonicity: For all A ⊆ B ⊆ X, f(A) ≤ f(B),
* Normalization: For the empty set ∅, f(∅) = 0.
§.§ Submodular Set Cover
Consider a set X and a monotone submodular function f: 2^X →ℝ for which f(X) is finite. The submodular set cover problem aims to compute a minimum cardinality subset S ⊆ X subject to a submodular constraint f(S) = f(X). Formally, it can be represented by the following optimization problem:
min_S ⊆ X |S|
s.t. f(S) = f(X).
The submodular set cover problem is NP-Hard. However, a simple greedy algorithm provides the best-known performance guarantee for this problem <cit.>. Let δ_x(S) for some S ⊆ X and x ∈ X ∖ S denote the marginal return obtained by adding x to S:
δ_x(S) = f(S ∪{x}) - f(S).
The greedy algorithm for this problem iteratively constructs a set S as follows: at each iteration r, the algorithm adds the element x ∈ X ∖ S that maximizes the marginal return δ_x(S). The greedy algorithm terminates after a feasible solution S where f(S) = f(X) is obtained.
§ PROBLEM DEFINITION
Consider an indoor environment as represented by a set 𝒲⊆ℝ^2. We look to cover the environment using the robot's coverage tool, which we assume to be a square of width l. This assumption follows from other works in coverage planning including <cit.>. Specifically, we look to decompose 𝒲 into (possibly overlapping) rectangular subsets called sectors. Let 𝒬 denote a set of candidate sectors where each sector Q_i ∈𝒬 represents a rectangle in the environment (i.e., Q_i ⊆𝒲) with the longest edge oriented along an angle θ_i ∈ [0,π). Note that the set 𝒬 can be uncountably infinite.
We now define the functions to compute the coverage of the environment using sectors. Let |Q| denote the area of a set Q ⊆ℝ^2, and let a:2^𝒬→ℝ be a function that computes the area of the union of a subset 𝒮⊆𝒬:
a(𝒮) = |⋃_Q_i ∈𝒮 Q_i|.
We consider a set of candidate sectors 𝒬 such that its union covers 𝒲, i.e. a(𝒬) = |𝒲| (i.e., total area of the environment). This is so that there exists a decomposition that covers the entire environment if necessary.
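As a brief sketch, the coverage function a can be evaluated with a polygon library such as Shapely, whose union operation accounts for overlapping sectors.

    from shapely.geometry import box
    from shapely.ops import unary_union

    def coverage_area(sectors):
        # a(S): area of the union of (possibly overlapping) sectors
        return unary_union(sectors).area if sectors else 0.0

    S = [box(0, 0, 2, 1), box(1, 0, 3, 1)]   # two overlapping rectangles
    print(coverage_area(S))                  # 3.0, not 4.0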
We are now ready to present the main problem of this paper. Given the environment 𝒲, we look to decompose 𝒲 into the minimum number of rectangular sectors. However, we also look to prevent cases of over-decomposition, where the decomposition may contain small sectors covering inaccessible areas in the environment. To do this, we seek to find a decomposition such that the area within 𝒲 covered by the sectors is at least γ|𝒲|, where γ∈ (0,1] is a design parameter we call the minimum coverage ratio.
[Sector Decomposition]
Given an environment 𝒲, a set of candidate rectangular sectors 𝒬 and a minimum coverage ratio γ, compute a set of sectors 𝒮⊆𝒬 that solves:
min_𝒮⊆𝒬 |𝒮|
s.t. a(𝒮) ≥γ|𝒲|.
Using the sectors computed by solving Problem <ref>, we obtain the coverage lines (i.e., straight-line paths) that form the lawnmower path for the sector. Note that there are two possible lawnmower paths depending on the connections between the coverage lines. From this, we generate a coverage path for the environment using an approach called sector touring where: (i) we pick a lawnmower path for each sector, and (ii) connect the lawnmower paths by computing a cost-minimizing tour between the path endpoints. We refer to the connections between the lawnmower paths as transition paths. An example coverage path for a set of three sectors is shown in Fig. <ref>. We provide more details on sector touring in Section <ref>.
While we consider rectangular sectors in this paper, one could more generally consider sectors that are monotone with a well-defined orientation for the lawnmower path. However, in our solution approach, we leverage existing approaches <cit.> that efficiently compute the largest inscribed rectangle within the environment at an orientation.
§ LINK TO SUBMODULAR SET COVER
In this section, we establish that Problem <ref> is an instance of SSC. We motivate minimizing the number of sectors by deriving a bound on the worst-case coverage path length resulting from a decomposition of rectangular sectors. We then prove the submodularity of the area function a, which, following <cit.>, implies that the greedy algorithm provides a polynomial-time approximation algorithm.
§.§ Coverage Path Optimality
Here, we motivate the minimization of the number of sectors in the decomposition by considering the length of the coverage path resulting from sector touring. Giving an exact characterization of the relationship between the sector decomposition and the coverage path length is challenging. Thus, our approach is to bound the length of the coverage path obtained from sector touring and minimizing this bound. For convex environments, we may trivially bound the coverage path length using a single sector covering the environment. However, for non-convex environments, such as those considered in this paper, obtaining a similar bound requires more in-depth analysis.
Consider a decomposition of the environment into rectangular sectors. The lawnmower path length to optimally cover a rectangular sector can be tightly bounded using the rectangle dimensions and the coverage tool width l. Consider a rectangular sector Q of width w and height h. Without loss of generality, let w ≥ h and consider a lawnmower path with coverage lines along the longest edge (i.e. width) of the rectangle. Now, let P_Q be the path to cover Q and let L(P_Q) be its length.
Given a rectangular sector Q of width w and height h, the coverage path P_Q has a length L(P_Q) satisfying
L(P_Q) ≤ w ⌈h/l⌉.
The length of each coverage line in the lawnmower path is w - l as the tool can stay at a distance of l/2 away from the perimeter of the rectangle to cover it. To connect each coverage line to an adjacent line, we require a connecting path of length at most l. So each line-connection pair has a length of w.
If h is a multiple of l, then the number of line-connection pairs is exactly h/l, and the bound is tight. Otherwise, if h is not a multiple of l, i.e., h ≠ cl for any c ∈ℕ, the number of coverage lines needed to cover Q is ⌈ h/l ⌉. Therefore, the total path length is upper-bounded by w ⌈ h/l ⌉, which proves the lemma.
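As a concrete check of the bound, covering a sector of width w = 10 and height h = 3.5 with a tool of width l = 1 requires ⌈ 3.5 ⌉ = 4 coverage lines, giving L(P_Q) ≤ 10 · 4 = 40.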
Let 𝒮 be a set of rectangular sectors that cover the environment 𝒲, i.e. a(𝒮) = |𝒲|. Given 𝒮, let P be the coverage path obtained by connecting the lawnmower paths using the aforementioned approach, and let L(P) be the length of P. Since the sectors can overlap, the sum of sector areas is given by
∑_Q ∈𝒮 |Q| = (1 + α) |𝒲|,
where α is the ratio of |𝒲| that is double-covered by sectors in 𝒮. Also, let w' represent the longest width of all sectors in 𝒮.
Now, let P^* be the optimal coverage path of 𝒲 (unconstrained by sectors). We now bound L(P) with respect to L(P^*).
Given an environment 𝒲 and a coverage tool of width l, let P^* be an optimal coverage path of 𝒲, with length L(P^*). Consider a decomposition of 𝒲 into rectangular sectors 𝒮, and let α be the ratio of sector overlap and w' be the longest sector width. Then, the coverage path P resulting from sector touring has length L(P) that satisfies:
L(P) ≤ (2 + α) L(P^*) + (√(2) + 1)w'|𝒮|.
First, note that
L(P^*) ≥|𝒲|/l.
This follows since the area covered by a tool of width l along any path of length L is at most lL, with equality if the path is a straight line.
The path P is obtained by (i) covering each sector individually using lawnmower paths, and (ii) connecting the lawnmower paths using transition paths. Let L_s be the total length of the lawnmower paths. Following Lemma <ref>,
L_s ≤∑_Q ∈𝒮( |Q|/l + w' ) ≤∑_Q ∈𝒮 |Q|/l + |𝒮|w'
Following the definition of α, we get the following ratio between the sum of sector areas and l:
∑_Q ∈𝒮 |Q|/l= (1 + α)|𝒲|/l≤ (1 + α) L(P^*).
We now bound the length of the transition paths L_t that connect the lawnmower paths. We split the transition paths into two path segments: (i) inter-sector (path between sectors), and (ii) intra-sector (path traveled within each sector). Fig. <ref> illustrates these path segments in an example coverage path. To bound L_t, we consider computing transition paths as follows: (i) we use a TSP tour that visits a single corner of each sector (inter-sector), and (ii) connect each ends of the lawnmower path to the corner (intra-sector).
To bound the inter-sector path segments, let TOUR(𝒮) be the length of the TSP tour of the sector corners. Note that the optimal coverage path P^* for the environment visits each sector at least once. So, we have that
TOUR(𝒮) ≤ L(P^*).
We now bound the intra-sector path segments for each sector. A lawnmower path for a rectangle starts close to one sector corner and ends at a different corner. Therefore, the cumulative path length to reach any corner from the endpoints of a lawnmower path is at most the diameter of the sector (length of the diagonal). Given that w' is the longest sector width in the decomposition, this diameter is at most √(2) w'. As a result, we have that
L_t ≤ L(P^*) + √(2)|𝒮|w'.
Adding all the path lengths, we arrive at the final result:
L(P) = L_s + L_t
≤ (1 + α) L(P^*) + (1 + √(2))|𝒮|w' + TOUR(𝒮),
≤ (2 + α) L(P^*) + (1 + √(2))|𝒮|w'
which proves our proposition.
Following Proposition <ref>, we observe two factors of the sector decomposition that affect the length of the worst-case coverage path: (i) the number of sectors |𝒮|, and (ii) the area of sector overlap. Therefore, reducing the number of sectors in the decomposition reduces the second term in the bound. Additionally, in our solution approach (Section <ref>), we address sector overlap by constraining the overlap area between sectors when computing the environment decomposition.
§.§ Submodularity of sector coverage
We now show that the coverage function a is a submodular function defined on the set of all candidate sectors 𝒬. This is not immediately obvious as 𝒬 is uncountably infinite.
The coverage function a: 2^𝒬→ℝ as defined in Eq. (<ref>) is a normalized, monotone and submodular function.
Normalization holds trivially as the coverage of 𝒲 is 0 if no sectors are picked from 𝒬. Similarly, monotonicity holds trivially, since for any set A ⊆𝒬, each sector in A must cover a non-negative amount of area in 𝒲.
To show the submodularity of a, consider sets A ⊆ B ⊂𝒬 and a sector Q ∈𝒬∖ B. From the definition of submodularity in Section <ref>, the following must hold:
a(A ∪{Q}) - a(A) ≥ a(B ∪{Q}) - a(B).
For the set B, let δ_Q(B) = a(B ∪{Q}) - a(B) represent the marginal return of adding Q to the set B. First, notice that
δ_Q(B) = | Q ∖ (∪_Q_B ∈ B Q_B) |.
Now, manipulating this expression we get
δ_Q(B)
= | Q ∖ (∪_Q_B ∈ B Q_B) |
= | Q ∖((∪_Q_A ∈ A Q_A) ∪ (∪_Q_B ∈ B∖ A Q_B))|
= |(Q ∖ (∪_Q_A ∈ A Q_A)) ∩(Q ∖ (∪_Q_B ∈ B∖ A Q_B))|
≤| Q ∖ (∪_Q_A ∈ A Q_A) |
= δ_Q(A),
where the equality in the third line follows from an application of DeMorgan's Laws (i.e., A ∖ (B∪ C) = (A ∖ B) ∩ (A ∖ C)). This proves the proposition.
Following Proposition <ref>, we have that Problem <ref> is an instance of SSC, which allows us to use the result from <cit.> to propose a greedy decomposition approach.
§ SECTOR DECOMPOSITION APPROACH
In this section, we describe our solution approach for the sector decomposition problem. Our approach greedily adds large rectangular sectors to the decomposition that cover new areas of the environment. We first describe an approach that leverages computational geometry algorithms to identify large rectangular sectors in the environment (or a subset of it). We then present a greedy algorithm that computes a sector decomposition while constraining the overlap between the sectors. For the case where the overlap is not constrained, we provide an approximation factor for the greedy algorithm following <cit.>.
§.§ Sector identification
We first identify the largest candidate sector for a given environment 𝒲 (or its subset). For this, we solve the problem of computing a rectangle of maximum area in 𝒲. In computational geometry literature, this is known as the convex skull or the "potato peeling" problem. Computing a convex skull of general shape and orientation requires O(n^7) runtime, where n is the number of vertices in the polygonal representation of 𝒲 <cit.>. Another alternative is to use approximation algorithms <cit.>, which reduce runtime at the expense of optimality. However, one can compute the largest-area axis-parallel rectangle in the environment efficiently using existing algorithms <cit.>. Additionally, for the case of coverage planning, one can use the orientations of the environment's edges (both boundary and hole edges) to constrain the robot's coverage directions <cit.>. This ensures that the robot driving the coverage path covers along long walls and moves predictably in the environment.
Our approach to identify the largest sector is as follows. First, we use a polygonal representation of 𝒲 and extract the orientations of the edges to construct a set of candidate orientations Θ. We limit our set of candidate sectors 𝒬 to rectangles oriented along an angle in θ∈Θ. Given this constraint, we define an axis oriented along each θ and obtain the largest axis-parallel rectangular sector in 𝒲 for each angle. Repeating this for every candidate orientation gives us a set of candidate sectors 𝒬 from which we choose the largest one in our greedy approach.
§.§ Greedy decomposition
We now propose an algorithm that decomposes the environment 𝒲 by greedily choosing rectangular sectors that cover the largest remaining uncovered area of 𝒲. The pseudocode for our approach is given in Algorithm <ref>. Until the minimum coverage of the environment is achieved (Line <ref>), we compute a candidate rectangular sector for each candidate orientation θ∈Θ (Line <ref>). We then pick the candidate sector with the largest area and add it to our decomposition 𝒮 (Lines <ref>-<ref>). Once the sector is added, we remove the area covered by the sector from the environment so that future sectors may not have overlapping coverage (Line <ref>).
However, in some cases, a small amount of overlap between sectors is useful to cover more area and obtain better-fitting neighbouring sectors. To allow this, we introduce a parameter called the sector erosion radius β: each chosen sector Q is eroded by taking the Minkowski difference of Q with a disk of radius β, and only this eroded sector is removed from the environment, so that future sectors may overlap with a band of width β along the boundary of Q. Setting β to 0 leads to truly non-overlapping sectors, while arbitrarily large values leave the overlap unconstrained.
For the case of unconstrained overlap, the candidate sectors must be computed using an alternate approach, as the convex skull method may return the same rectangle repeatedly. One approach is to use a checkerboard decomposition of the environment <cit.> to identify rectangular sub-regions at each orientation. However, the candidate sectors only need to be computed once for this case.
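Putting the pieces together, the main loop of G-Sect can be sketched as follows (Shapely-based, reusing largest_sector from the previous sketch); here the erosion is a negative buffer, i.e., the Minkowski difference of the sector with a disk of radius β.

    def g_sect(W, thetas, gamma, beta):
        # W: environment as a Shapely polygon; returns (sector, orientation) pairs.
        sectors, remaining = [], W
        covered = W.difference(W)                  # empty-region accumulator
        while covered.area < gamma * W.area:
            Q, theta = largest_sector(remaining, thetas)   # greedy step
            sectors.append((Q, theta))
            covered = covered.union(Q)
            # remove only the eroded sector, so future sectors may overlap
            # with a band of width beta along Q's boundary
            remaining = remaining.difference(Q.buffer(-beta))
        return sectors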
§.§ Analysis
We now provide a brief analysis of the G-Sect algorithm. In the case of overlapping sectors (large β), we obtain an approximation factor for the G-Sect algorithm as a function of the minimum coverage ratio γ. We derive this factor following that of Wolsey's analysis of the greedy algorithm in <cit.>. Given a set of candidate orientations Θ, let 𝒮^* be the minimum sector decomposition of the environment, i.e. |𝒮^*| ≤ |𝒮| for any 𝒮.
Given an environment 𝒲 and a minimum coverage ratio of γ, let 𝒮^* be the minimum sector decomposition given candidate orientations Θ. Then, the G-Sect algorithm computes a sector decomposition 𝒮 satisfying
|𝒮|/|𝒮^*|≤ 1 + ln( 1/1 - γ).
We first prove this proposition for a finite set of candidate sectors 𝒬 that cover the environment 𝒲. Let T be the number of iterations run by the greedy algorithm, and let 𝒮^r for 1 ≤ r ≤ T be the subset that is picked by the greedy algorithm after r iterations. Following Wolsey's analysis of the greedy algorithm for SSC over a finite set <cit.>, we have the following approximation factor:
|𝒮|/|𝒮^*| ≤ 1 + ln( a(𝒬) - a(∅)/a(𝒬) - a(𝒮^T-1)).
Since a is normalized, we have a(∅) = 0. Since 𝒬 covers 𝒲, we have that a(𝒬) = |𝒲|. Given that G-Sect terminates after a minimum coverage of γ|𝒲|, we have that a(𝒮^T-1) ≤γ|𝒲|. Substituting these values in Eq. (<ref>), we get
|𝒮|/|𝒮^*| ≤ 1 + ln( |𝒲|/|𝒲| - γ |𝒲|) = 1 + ln( 1/1 - γ)
We now show that this bound also applies when 𝒬 is infinite. Let 𝒮^* = {Q^*_1, Q^*_2, … Q^*_k} be the minimum sector decomposition of 𝒲 from all of 𝒬. Now, let 𝒮 = {Q_1, Q_2, … Q_T} be the sector decomposition obtained by G-Sect given γ. Using these sets, we construct a finite set of candidate sectors 𝒬' = 𝒮∪𝒮^*. On this finite set 𝒬', we can run G-Sect and for each greedy iteration r, we have the following for all Q_i^* ∈𝒮^*:
a(𝒮^r ∪ Q_r+1) - a(𝒮^r) ≥ a(𝒮^r ∪ Q^*_i) - a(𝒮^r).
This follows from the correctness of the greedy step for a set of candidate orientations Θ. As a result, the approximation factor holds on the finite set 𝒬'. However, since 𝒬' contains the choices of G-Sect from 𝒬 and the optimal set from 𝒬, the approximation factor also holds on the infinite set 𝒬. This proves the proposition.
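For the minimum coverage ratio γ = 0.95 used in our experiments, this factor evaluates to 1 + ln 20 ≈ 4.0.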
Obtaining a similar approximation factor for the case of minimally-overlapping sectors is a non-trivial task. However, we have found that decompositions using minimally overlapping sectors are better suited for coverage planning, as they reduce instances of double coverage. In Section <ref>, we provide simulation results of planning coverage paths using minimally overlapping sectors.
§.§ Local sector merges
After a decomposition is computed using G-Sect, we merge neighbouring sectors to further reduce the number of sectors. Specifically, we conduct merges where the lawnmower path of one sector can be extended to cover a neighbouring sector. Given the sector decomposition and corresponding sector orientations, we start by counting the number of coverage lines required to cover each sector. We then iterate through the sectors in the order of increasing sector area (i.e. smallest to largest) to check for possible merges. For each sector Q, we look for an adjacent sector Q_adj with a coverage orientation θ_adj such that the number of coverage lines to cover Q ∪ Q_adj along the orientation θ_adj is less than or equal to the cumulative number of lines required to cover each sector individually. The reduction in coverage lines indicates that the lawnmower path of Q_adj can be extended to cover Q. Any ties are broken by choosing Q_adj with the largest area. The merged sector is added back to the set of sectors and the process is repeated until all possible merges have been completed. The resulting sector decomposition forms our final solution to Problem <ref>.
Note that these merges may result in sectors that are not necessarily rectangular, i.e. they do not strictly satisfy Problem <ref>. However, they still satisfy the key restriction that each sector is covered by a lawnmower path. To obtain a strictly rectangular decomposition, this step may be skipped. In practice, a similar strategy may also be used to cover areas that are uncovered by the sector decomposition.
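Our reading of the merge test can be sketched as follows, where the number of coverage lines for a sector at a given orientation is its extent perpendicular to that orientation divided by the tool width, rounded up; the helper names are ours.

    import math
    from shapely import affinity

    def num_lines(sector, theta, l):
        # extent of the sector perpendicular to theta, in units of tool width
        rotated = affinity.rotate(sector, -math.degrees(theta), origin=(0, 0))
        minx, miny, maxx, maxy = rotated.bounds
        return math.ceil((maxy - miny) / l)

    def should_merge(Q, Q_adj, theta, theta_adj, l):
        # extend Q_adj's lawnmower path over Q only if it needs no more lines
        merged = Q.union(Q_adj)
        return num_lines(merged, theta_adj, l) <= (num_lines(Q, theta, l)
                                                   + num_lines(Q_adj, theta_adj, l))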
§ SECTOR TOURING
We now describe our approach to obtain a coverage path for the environment using the sector decomposition from Section <ref>. Similar to <cit.>, we formulate this as a GTSP that aims to minimize the time to cover all sectors by determining (i) the visitation order of the sectors in the decomposition, and (ii) entry and exit points for each sector. We create an auxiliary graph where the vertices are grouped into sets. Each set in the graph represents the coverage of a sector and consists of at most four vertices that represent the directed paths to cover the sector, i.e. two possible lawnmower paths per sector where each path can be traversed in two directions. The chosen path and traversal direction determine the entry and exit points for each sector. The graph edges represent the transition paths to traverse from one sector to another. The edge costs are determined by the time to travel along an obstacle-free path from one end of a lawnmower path to another determined using a visibility graph planner <cit.>. We determine this travel time using a piecewise constant acceleration robot model where the robot travels along a straight line with constant acceleration until a maximum velocity is reached. An example set of parameters for this model is given in Table <ref>. In this work, we do not directly penalize the turning time of the robot, but the above robot model penalizes the time to stop to perform an upcoming turn. The GTSP tour on the resulting graph gives the coverage path for the environment as a series of connected sectors.
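One plausible reading of this travel-time model, with acceleration to a velocity cap and a stop at the end of each straight segment, is sketched below; the default numbers are placeholders rather than the values in Table <ref>.

    import math

    def travel_time(d, v_max=1.0, a=0.5):
        # piecewise constant acceleration: accelerate at a, cruise at v_max,
        # then decelerate to rest at the end of the segment of length d
        d_ramp = v_max ** 2 / a            # distance spent accelerating/braking
        if d >= d_ramp:
            return d / v_max + v_max / a   # trapezoidal velocity profile
        return 2.0 * math.sqrt(d / a)      # triangular profile (never cruises)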
§ RESULTS
In this section, we present simulation results of coverage planning using the proposed sector decomposition approach for maps of real-world environments. Specifically, our dataset includes 2 street maps of New York and Shanghai cities from <cit.> and 2 anonymized environments created by combining maps of real-world environments from our industry partner Avidbots. We compare the coverage performance of our approach against two state-of-the-art approaches: (i) an exact decomposition approach called boustrophedon cell decomposition (BCD) <cit.>, and (ii) our previous approximate decomposition approach called Optimal Axis-Parallel Rank Partitioning (OARP) <cit.>.
For our simulations, we set the desired coverage of the decomposition γ to be 95% of the environment, after which the greedy algorithm is terminated. This choice helps eliminate sectors that cover small and inaccessible areas and effectively reduces the number of sectors in the decomposition. We identify the environment's walls using the Probabilistic Hough Transform <cit.> and obtain the set of candidate coverage orientations Θ. Table <ref> shows the robot parameters used for the simulations, including the coverage tool width l. For the BCD and sector decomposition approaches, we ignore small obstacles (obstacle area < 4l^2) as they can be evaded effectively during coverage and do not affect the coverage orientation. For OARP, some of these obstacles are removed automatically during the construction of the grid decomposition.
To motivate our configuration of G-Sect pertaining to sector overlap, we present the results of computing a sector decomposition for a simpler test map using two values of the sector erosion radius β: l/4 and 2l. Figure <ref> shows the resulting decompositions, where we observe that increasing the amount of overlap reduces the number of sectors but also results in overlapping coverage lines (i.e., double coverage). However, we observe that a small amount of overlap with local sector merges identifies a decomposition with non-overlapping coverage lines. Therefore, we conduct the rest of the experiments with β = l/4 and conduct sector merges.
Figure <ref> shows the decompositions computed using G-Sect for our test maps. We observe that sector decomposition results in intuitive coverage lines, especially in narrow regions where the coverage lines are oriented along the edges of the environment. Table <ref> shows the results of comparing sector decomposition against BCD (exact decomposition) <cit.>. We observe that the sector decomposition approach covers the environment with fewer sectors, predominantly due to (i) our choice of γ, and (ii) sector merging. In contrast, BCD results in over-decomposition due to the large number of reflex vertices present in the test environments. G-Sect is less influenced by these vertices for the chosen γ.
We now compare the coverage planning performance of the sector decomposition approach against that of BCD and OARP. The robot model from Section <ref> is used to compute the coverage path cost with parameters as per Table <ref>. Table <ref> shows the results of comparing (i) number of coverage lines, (ii) percent area covered, and (iii) coverage path cost. For every map, we run each approach for 5 trials and present the average for each metric. We observe that the proposed approach achieves the best coverage path cost for all test maps. We believe that this is due to the intuitive coverage line placements achieved by G-Sect, e.g., the coverage lines align with the boundaries and the holes. For the New York and Shanghai maps, we observe that OARP computes paths with higher cost than the proposed approach despite having fewer coverage lines. This is because constraining the robot's motion to axis-parallel (horizontal/vertical) orientations creates longer transition paths (where the robot performs no coverage) that add to the cost. However, for environments with axis-parallel boundaries, we expect the OARP approach to perform better because minimizing the number of coverage lines results in fewer transitions. Thus, a future direction we look to explore is a combination of sector decomposition and OARP <cit.> that minimizes the number of coverage lines in the path while considering multiple coverage orientations. Meanwhile, the BCD approach covers the environment entirely, but results in relatively expensive coverage paths as a result of over-decomposition.
§ CONCLUSION
In this paper, we proposed an environment decomposition approach called sector decomposition for robot coverage planning. The proposed approach aims to decompose the environment into the minimum number of rectangular sectors, which can each be covered using a lawnmower path oriented along its longest edge. We showed that sector decomposition is an instance of the submodular set cover (SSC) problem, and proposed a greedy approximation algorithm with a guarantee on the number of sectors in the decomposition. Finally, we presented coverage planning results using the proposed approach for real-world environments and showed that it improves upon existing coverage planning approaches <cit.>. A future direction of this work is to minimize the number of turns in the coverage path generated using sector decomposition, for cases when robot turning is detrimental to coverage time and quality.
§ ACKNOWLEDGMENT
This work was supported by Canadian Mitacs Accelerate Project IT16435 and Avidbots Corp, Kitchener, ON, Canada.
|
http://arxiv.org/abs/2409.02793v1 | 20240904150904 | On the long-wave approximation of solitary waves in cylindrical coordinates | [
"James Hornick",
"Dmitry E. Pelinovsky",
"Guido Schneider"
] | math.AP | [
"math.AP",
"math-ph",
"math.CA",
"math.MP",
"nlin.PS",
"nlin.SI"
] |
On the long-wave approximation of solitary waves in cylindrical coordinates
[J. Hornick]Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, Canada, L8S 4K1
[D. E. Pelinovsky]Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, Canada, L8S 4K1
[email protected]
[G. Schneider]Institut für Analysis, Dynamik und Modellierung,
Universität Stuttgart, Pfaffenwaldring 57, D-70569 Stuttgart, Germany
§ ABSTRACT
We address justification and solitary wave solutions of the cylindrical KdV equation which is formally derived as a long wave approximation of radially symmetric waves in a two-dimensional nonlinear dispersive system. For a regularized Boussinesq equation, we prove error estimates between true solutions of this equation and the associated cylindrical KdV approximation in the L^2-based spaces. The justification result holds in the spatial dynamics formulation of the regularized Boussinesq equation. We also prove that the class of solitary wave solutions considered previously in the literature does not contain solutions in the L^2-based spaces. This presents a serious obstacle in the applicability of the cylindrical KdV equation for modeling of radially symmetric solitary waves since the long wave approximation has to be performed separately in different space-time regions.
[
Guido Schneider
September 9, 2024
=====================
§ INTRODUCTION
Long radially symmetric waves in a two-dimensional nonlinear dispersive system can be modeled with the cylindrical Korteweg-de Vries (cKdV) equation.
The cKdV equation has been derived in <cit.> by perturbation theory from the equations of the water wave problem in cylindrical coordinates to describe radially symmetric waves going to infinity.
See <cit.> for an overview of the occurrence of this and other amplitude equations for the shallow water wave problem.
Derivation of the cKdV equation is not straightforward compared to its analog in rectangular coordinates, the classical KdV equation, and it is still an active area of research in physics <cit.>. No mathematically rigorous results have been derived for the justification of the cKdV equation, compared to the rigorous approximation results available for the classical KdV equation after the pioneering works <cit.>.
The main objective of this paper is to prove an approximation result
for the cKdV equation and to discuss the validity of this approximation.
Although we believe that our methods can be applied to every nonlinear dispersive wave system where the cKdV equation can be formally derived, we restrict ourselves in the following to the system given by a regularized Boussinesq equation. The regularized Boussinesq equation in two spatial dimensions can be written in the normalized form as
∂_t^2 u - Δ u + σ∂_t^2 Δ u
= Δ (u^2) ,
with space variable x = (x_1,x_2) ∈^2, time variable t ∈,
Laplacian Δ = ∂_x_1^2 + ∂_x_2^2,
and a smooth solution u = u(x,t) ∈. The normalized parameter σ = ± 1 determines the dispersion relation of linear waves u ∼ e^i k · x - i ω t for k = (k_1,k_2) ∈^2 in the form:
ω^2 = |k|^2/1 - σ |k|^2, k ∈^2.
It follows from the dispersion relation (<ref>) and the standard analysis of well-posedness <cit.> that the initial-value problem for (<ref>) with the initial data
u|_t = 0 = u_0(x), ∂_t u |_t = 0 = u_1(x), x ∈^2,
is locally well-posed in Sobolev spaces of sufficient regularity for σ = -1 and ill-posed for σ = +1.
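For completeness, we record the elementary computation behind (<ref>): inserting u ∼ e^i k · x - i ω t into the linearization of (<ref>) gives

-ω^2 + |k|^2 + σ ω^2 |k|^2 = 0 ⟹ ω^2 = |k|^2/(1 - σ |k|^2).

For σ = +1 the right-hand side is negative for |k| > 1, so that ω = ± i |k|/√(|k|^2-1) and the temporal growth rate is unbounded as |k| ↓ 1, which is the ill-posedness referred to above.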
To justify the cKdV equation, we shall use the spatial dynamics formulation with the radius r := √(x_1^2 + x_2^2) as evolutionary variable. It turns out that due to the dispersion term σ∂_t^2 Δ u in (<ref>) the spatial dynamics formulation and the temporal dynamics formulation are not well posed simultaneously. If the temporal dynamics formulation is well posed, the spatial dynamics formulation is ill posed and vice versa.
The radial spatial dynamics formulation of the regularized Boussinesq equation (<ref>) is obtained by introducing
the radial variable r = √(x_1^2+x_2^2) and rewriting (<ref>) for u = u(r,t) with Δ = ∂_r^2 + 1/r∂_r in the form:
(∂_r^2 + r^-1∂_r) (u - σ∂_t^2 u + u^2) =
∂_t^2 u .
The associated spatial dynamics problem is given by
u|_r = r_0 = u_0(t), ∂_r u |_r = r_0 = u_1(t), t ∈,
for some r_0 > 0. It is clear that the spatial evolution of (<ref>) with “initial data" in (<ref>) is locally well-posed for σ = 1 and ill-posed for σ = -1, see Theorem <ref>. In Section <ref> we derive the cKdV equation for long waves of the radial Boussinesq equation (<ref>) in case σ = 1.
The cKdV approximation is given by u(r,t) = ε^2 A(ε^3 r, ε(t-r))
with ε being a small parameter and A(ρ,τ) satisfying the following cKdV equation
2 ∂_ρ A + ρ^-1 A + ∂_τ^3 A = ∂_τ (A^2),
where τ := ε( t - r ) ∈ℝ and ρ := ε^3 r ≥ρ_0 for some ρ_0 > 0 are rescaled versions of the variables (r,t) in the traveling frame and A(ρ,τ) ∈ℝ is the small-amplitude approximation for u(r,t) ∈ℝ. We have to impose the spatial dynamics formulation for the cKdV equation (<ref>) with the initial data
A|_ρ = ρ_0 = A_0(τ), τ∈ℝ.
It follows from the contraction mapping principle applied to the KdV equation <cit.> and the boundedness of the linear term ρ^-1 A for ρ≥ρ_0 > 0 that the initial-value problem for (<ref>) with “initial data" in (<ref>) is locally well-posed for A_0 ∈ H^s(ℝ) with any s > 3/4. Moreover, if ∫_ℝ A_0 d τ = 0, then
∫_ℝ A(ρ,τ) d τ = 0, ρ≥ρ_0,
which implies that the unique local solution of (<ref>)–(<ref>) satisfies
A ∈ C^0([ρ_0,ρ_1],H^s()) and ∂_τ^-1 A ∈ C^0([ρ_0,ρ_1],H^s-2())
for some ρ_1 > ρ_0 if A_0 ∈ H^s() and ∂_τ^-1 A_0 ∈ H^s-2(), see Lemma <ref>.
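For later reference we record why the zero-mean property (<ref>) is preserved: integrating the cKdV equation (<ref>) over τ∈ℝ, the terms ∂_τ^3 A and ∂_τ(A^2) integrate to zero for decaying A, so that M(ρ) := ∫_ℝ A(ρ,τ) d τ satisfies

2 M'(ρ) + ρ^-1 M(ρ) = 0 ⟹ M(ρ) = (ρ_0/ρ)^1/2 M(ρ_0), ρ≥ρ_0,

and M(ρ_0) = 0 forces M(ρ) = 0 for all ρ≥ρ_0.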
The main approximation result is given by the following theorem.
Fix s_A > 17/2, ρ_1 > ρ_0 > 0, and C_1 > 0. Then there exist ε_0 > 0 and C_0 > 0 such that
for all ε∈ (0,ε_0)
the following holds.
Let A ∈ C^0([ρ_0,ρ_1],H^s_A(ℝ)) be a solution of the cKdV equation (<ref>)
with
sup_ρ∈ [ρ_0,ρ_1]( A(ρ,·) _H^s_A + ∂_τ^-1 A(ρ,·) _H^s_A-2 ) ≤ C_1 .
Then there are solutions (u, ∂_r u) ∈ C^0([ρ_0 ε^-3, ρ_1 ε^-3],H^s() × H^s()) of the radial Boussinesq equation (<ref>) with s > 1/2 satisfying
sup_r ∈ [ρ_0 ε^-3, ρ_1 ε^-3]sup_t ∈| u(r,t) - ε^2 A(ε^3 r, ε(t-r)) | ≤ C_0 ε^7/2 .
The proof of Theorem <ref> goes along the lines of the associated proof for validity of the KdV approximation in <cit.>. However, there are new difficulties which have to be overcome. The major point is
that a vanishing mean value as in (<ref>) is required for the solutions of the cKdV equation (<ref>), a property which fortunately is preserved by the evolution of the cKdV equation. Subsequently, a vanishing mean value is also required for the solutions of the radial Boussinesq equation (<ref>). However, this property is not preserved in the spatial evolution of the radial Boussinesq equation (<ref>). We use a nonlinear change of variables from u(r,t) to v(r,t) in Section <ref> in order to preserve the vanishing mean value in the spatial evolution.
The cKdV equation (<ref>) admits exact solutions for solitary waves due to its integrability <cit.>. These exact solutions have important physical applications <cit.>,
which have continued to stimulate recent research <cit.>.
It was observed that parameters of the exact solutions of the cKdV equation agree well with the experimental and numerical simulations of solitary waves.
However, the solitary wave solutions of the cKdV equation do not decay sufficiently well at infinity <cit.> and hence it is questionable how such solutions can be described in the radial spatial dynamics
of the Boussinesq equation in the mathematically rigorous sense.
We address the solitary wave solutions of the cKdV equation (<ref>) in Section <ref>, where we will use the theory of Airy functions and give a more complete characterization of the solitary wave solutions compared to previous similar results, e.g. in Appendix A of <cit.>. The following theorem presents the corresponding result.
Consider solutions of the cKdV equation (<ref>) in the class of solitary waves given by
A(ρ,τ) = -6 ∂_τ^2 log[ 1 + 1/(6ρ)^1/3 F( τ/(6 ρ)^1/3) ],
with some F ∈ C^∞(ℝ,). All bounded solutions in the form (<ref>) satisfy the decay condition
A(ρ,τ) → 0 as |τ| →∞
and the zero-mean constraint
∫_ℝ A(ρ,τ) d τ = 0
but fail to be square integrable, that is, A(ρ,·) ∉ L^2() for every ρ > 0.
The result of Theorem <ref> is due to the slow decay of solitary wave solutions (<ref>) with
A(ρ,τ) ∼ |τ|^-1/2 as τ→ -∞.
We note that such solitary wave solutions satisfy
A ∈ C^0((0,∞),Ḣ^s()) for any s > 0 but we are not aware of the local well-posedness for the cKdV equation (<ref>) in Ḣ^s() with s > 0. Consequently, the justification result of Theorem <ref> does not apply to the solitary waves of the cKdV equation (<ref>) and one needs to use matching techniques in different space-time regions in order to consider radial solitary waves diverging from the origin, cf. <cit.>.
Similar questions arise for the long azimuthal perturbations of the long radial waves. A cylindrical Kadomtsev-Petviashvili (cKP) equation was also proposed as a relevant model in <cit.>. Motivated from physics of fluids and plasmas, problems of transverse stability of ring solitons were studied recently in <cit.>. Other applications of the KP approximation are interesting in the context of dynamics of square two-dimensional lattices based on the models of the Fermi-Pasta-Ulam type <cit.>. Radially propagating waves with azimuthal perturbations are natural objects in lattices, see, e.g., <cit.>, and clarification of the justification of
the cKdV equation is a natural first step before justification of the cKP equation in nonlinear two-dimensional lattices. We discuss further implication of the results of Theorems <ref> and <ref> for the cKdV and cKP equations in Section <ref>.
Notation. Throughout this paper different constants are denoted with the same symbol C if they can be chosen independently of the small parameter
0 < ε≪ 1. The Sobolev space H^s(), s ∈ of s-times weakly differentiable functions is equipped with the norm
u _H^s = ( ∑_j= 0^s∫_ |∂_x^j u(x)|^2 dx)^1/2 .
The weighted Lebesgue space L^2_s(), s ∈ is equipped with the norm
u_L^2_s = ( ∫_ |u(k)|^2 (1+k^2)^s dk)^1/2 .
Fourier transform is an isomorphism between H^s() and L^2_s() which allows us to extend the definition of H^s() to all values of s ∈.
Acknowledgement. The work of G. Schneider was partially supported by the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) - Project-ID 258734477 - SFB 1173.
D. E. Pelinovsky acknowledges the funding of this study provided by the grant No. FSWE-2023-0004
and grant No. NSH-70.2022.1.5.
§ JUSTIFICATION OF THE CKDV EQUATION
Here we prove Theorem <ref> which states the approximation result for the cKdV equation. The plan is as follows.
In Section <ref> we derive the cKdV equation (<ref>) for the radial Boussinesq equation (<ref>) in case σ = 1.
In Section <ref> we estimate the residual terms, i.e., the terms which remain after inserting the cKdV approximation into the radial Boussinesq equation. In Section <ref> we prove a local existence and uniqueness result for the radial spatial dynamics formulation. In Sections <ref>–<ref> we estimate the error made by this formal approximation in the radial spatial dynamics by establishing L^2- and H^1-energy estimates. The argument is completed in Section <ref> by using the energy to control the approximation error and by applying Gronwall's inequality.
§.§ Derivation of the cKdV equation
We rewrite the radial Boussinesq equation (<ref>) with σ = 1 as
(∂_r^2 + r^-1∂_r) (u - ∂_t^2 u + u^2) =
∂_t^2 u .
The cKdV approximation can be derived if r ≥ r_0 > 0 is considered as the evolutionary variable with the initial data (<ref>).
However, this evolutionary system has the disadvantage that
∫_ℝ u(r,t) dt is not preserved in r, see Remarks <ref> and <ref>. To overcome this technical difficulty, we rewrite (<ref>) as
∂_t^2 ( 1 + ∂_r^2 + r^-1∂_r) u = (∂_r^2 + r^-1∂_r) (u + u^2)
and make the change of variables v := u + u^2. For small v this
quadratic equation admits a unique solution for small u given by
u = v - v^2 + N(v)
with analytic N(v) = 𝒪(v^3). In variable v, the radial spatial evolution problem is
(∂_r^2 + r^-1∂_r) v = ∂_t^2 ( 1 + ∂_r^2 + r^-1∂_r) (v - v^2 + N(v)) .
The local existence and uniqueness of solutions of the initial-value problem
{[ (∂_r^2 + r^-1∂_r) v = ∂_t^2 ( 1 + ∂_r^2 + r^-1∂_r) (v - v^2 + N(v)), r > r_0,; v |_r = r_0 = v_0, ∂_r v |_r = r_0 = v_1 ].
can be shown for (v_0,v_1) ∈ H^s() × H^s() for every s > 1/2, see Theorem <ref>.
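For the reader's convenience, the inverse branch is explicit: solving u^2 + u - v = 0 for the root with u = 𝒪(v) gives

u = (-1 + √(1+4v))/2 = v - v^2 + 2 v^3 - 5 v^4 + 14 v^5 - …, |v| < 1/4,

so that N(v) = 2 v^3 - 5 v^4 + 𝒪(v^5); the coefficients are signed Catalan numbers, which confirms the analyticity of N near v = 0.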
We make the usual ansatz for the derivation of the KdV equation, namely
ε^2 ψ_ cKdV(r,t) := ε^2 A(ε^3 r, ε( t - c r )) ,
with τ := ε( t - c r ) and ρ := ε^3 r, where c is the wave speed. Defining the residual
Res(v) := - (∂_r^2 + r^-1∂_r) v+ ∂_t^2 ( 1 + ∂_r^2 + r^-1∂_r) (v - v^2 + N(v))
we find
Res(ε^2 ψ_ cKdV) = - c^2 ε^4 ∂_τ^2 A + 2 c ε^6 ∂_ρ∂_τ A - ε^8 ∂_ρ^2 A
+ c ε^6 ρ^-1∂_τ A - ε^8 ρ^-1∂_ρ A
+ ε^4 ∂_τ^2 A + c^2 ε^6 ∂_τ^4 A - 2 c ε^8 ∂_ρ∂_τ^3 A + ε^10∂_ρ^2 ∂_τ^2 A
- c ε^8 ρ^-1∂_τ^3 A + ε^10ρ^-1∂_ρ∂_τ^2 A
- ε^6 ∂_τ^2 (A^2) - c^2 ε^8 ∂_τ^4 (A^2) + 2 c ε^10∂_ρ∂_τ^3 (A^2) - ε^12∂_τ^2 ∂_ρ^2 (A^2)
+ c ε^10ρ^-1∂_τ^3 (A^2) - ε^12ρ^-1∂_ρ∂_τ^2 (A^2)
+ ε^2 ∂_τ^2 ( 1 + (- c ε∂_τ + ε^3 ∂_ρ)^2 + ε^3 ρ^-1 (- c ε∂_τ + ε^3 ∂_ρ)) N(ε^2 A),
where the last line is at least of order 𝒪(ε^8).
We eliminate the terms of 𝒪(ε^4) by choosing c^2 = 1. The radial waves diverge from the origin if c = 1 and converge towards the origin if c = -1. It makes sense to consider only outgoing radial waves, so that we set c = 1 in the following.
With c = 1, the terms of 𝒪(ε^6) are eliminated in Res(ε^2 ψ_ cKdV) by choosing A to satisfy the cKdV equation (<ref>) rewritten here as
2 ∂_ρ A + ρ^-1 A + ∂_τ^3 A
- ∂_τ (A^2) = 0 .
By this choice we formally have
Res(ε^2 ψ_ cKdV) = 𝒪(ε^8) .
We will estimate the residual terms rigorously in Section <ref>.
In our subsequent error estimates ∂_t^-1 has to be applied
to Res(v) in (<ref>). However, this is only possible if the nonlinear change of variables v = u + u^2 is applied. This change of variables also allows us to use the variable ∂_t^-1∂_r u which played
a fundamental role in the justification of the KdV equation in <cit.> and which is necessary to obtain
an L^2-bound for the approximation error.
§.§ Estimates for the residual
For estimating the residual Res(ε^2 ψ_ cKdV) we consider a solution A ∈ C([ρ_0,ρ_1],H^s_A(,))
of the cKdV equation (<ref>) with some s_A ≥ 0 suitably chosen below.
Let
C_A := sup_ρ∈ [ρ_0,ρ_1]A(ρ,·) _H^s_A < ∞.
With c = 1 and A satisfying the cKdV equation (<ref>), the residual is rewritten as
Res(ε^2 ψ_ cKdV) = - ε^8 ∂_ρ^2 A - ε^8 ρ^-1∂_ρ A - 2 ε^8 ∂_ρ∂_τ^3 A + ε^10∂_ρ^2 ∂_τ^2 A
- ε^8 ρ^-1∂_τ^3 A + ε^10ρ^-1∂_ρ∂_τ^2 A - ε^8 ∂_τ^4 (A^2) + 2 ε^10∂_ρ∂_τ^3 (A^2)
- ε^12∂_τ^2 ∂_ρ^2 (A^2) + ε^10ρ^-1∂_τ^3 (A^2) - ε^12ρ^-1∂_ρ∂_τ^2 (A^2)
+ ε^2 ∂_τ^2 ( 1 + (- ε∂_τ + ε^3 ∂_ρ)^2 + ε^3 ρ^-1 (- ε∂_τ + ε^3 ∂_ρ)) N(ε^2 A).
We can express ρ-derivatives of A by τ-derivatives of A through the right-hand side of the cKdV equation (<ref>). Hence for replacing one ρ-derivative we need three
τ-derivatives. In this way, the term ε^10∂_ρ^2 ∂_τ^2 A loses the most derivatives, namely eight τ-derivatives. Due to the scaling properties of the L^2-norm w.r.t. the scaling τ = ε (t-r), we lose a factor ε^-1/2
in the estimates, e.g., see <cit.>. As a result of the standard analysis, we obtain the following lemma.
Let s ≥ 0. Assume (<ref>) with s_A = s + 8 and C_A > 0. There exists a C_ res > 0 such that for all ε∈ (0,1] we have
sup_r ∈ [ρ_0 ε^-3,ρ_1 ε^-3] Res(ε^2 ψ_ cKdV)(r,·) _H^s≤ C_ resε^15/2.
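The power ε^15/2 = ε^8 ·ε^-1/2 in this bound reflects the elementary substitution τ = ε(t-r): for g(t) = G(ε(t-r)) one has

∫_ℝ |G(ε(t-r))|^2 dt = ε^-1∫_ℝ |G(τ)|^2 d τ,

so that every residual term of formal order ε^8 contributes ε^8 ·ε^-1/2 to the L^2-norm w.r.t. t.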
In the subsequent error estimates we also need estimates for
∂_t^-1 applied to Res(ε^2 ψ_ cKdV). The only terms in the residual which have no ∂_t in front are the ones collected in
ε^8 R_1 = - ε^8 ∂_ρ^2 A - ε^8 ρ^-1∂_ρ A .
When ∂_ρ A is replaced by the right-hand side of the cKdV equation (<ref>), we find
R_1 = 1/2 (∂_ρ + ρ^-1)
( ρ^-1 A + ∂_τ^3 A - ∂_τ A^2 )
= 1/2∂_τ (∂_ρ + ρ^-1)
( ∂_τ^2 A - A^2 ) + 1/2ρ^-1∂_ρ A
= 1/4∂_τ (2 ∂_ρ + ρ^-1)
( ∂_τ^2 A - A^2 ) - 1/4ρ^-2 A.
Therefore, all terms in the residual can be written as derivatives in τ except of the term - (4 ρ^2)^-1 A.
The operator ∂_τ^-1, respectively a multiplication with 1/ik in the Fourier space, can be applied
to - (4 ρ^2)^-1 A only if A has a vanishing mean value and its Fourier transform decays as 𝒪(|k|) for k → 0. This is why we enforce
the vanishing mean value as in (<ref>) and consider
solutions of the cKdV equation in the class of functions (<ref>). Such solutions are given by the following lemma.
Fix s_A > 3/4, ρ_0 > 0, and pick A_0 ∈ H^s_A() such that
∂_τ^-1 A_0 ∈ H^s_A-2(). There exist C > 0 and ρ_1 > ρ_0 such that the cKdV equation (<ref>) possesses
a unique solution A ∈ C^0([ρ_0,ρ_1],H^s_A()) with A|_ρ = ρ_0 = A_0 satisfying
sup_ρ∈ [ρ_0,ρ_1]( A(ρ,·) _H^s_A + ∂_τ^-1 A(ρ,·) _H^s_A-2 ) ≤ C.
The cKdV equation (<ref>) possesses a unique solution
A ∈ C^0([ρ_0,ρ_1],H^s_A()) for s_A > 3/4, see <cit.>.
To obtain the estimate on B := ∂_τ^-1 A, we rewrite
(<ref>) in the form:
2 ∂_ρ B + ρ^-1 B + ∂_τ^2 A
- A^2 = 0 .
Since B_0 := ∂_τ^-1 A_0 ∈ H^s_A-2(), integration of this equation with A ∈ C^0([ρ_0,ρ_1],H^s_A()) yields
B ∈ C^0([ρ_0,ρ_1],H^s_A-2()).
For estimating the residual Res(ε^2 ψ_ cKdV) we consider a solution A ∈ C^0([ρ_0,ρ_1],H^s_A())
of the cKdV equation (<ref>) with
C_A,B := sup_ρ∈ [ρ_0,ρ_1]( A(ρ,·) _H^s_A + ∂_τ^-1 A(ρ,·) _H^s_A-2 ) < ∞,
and with s_A > 3/4 being sufficiently large.
Due to the correspondence ∂_t^-1 = ε^-1∂_τ^-1 we have the following lemma.
Let s ≥ 0. Assume (<ref>) with s_A = s + 8 and C_A,B > 0. There exists a C_ res > 0 such that
for all ε∈ (0,1] we have
sup_r ∈ [ρ_0 ε^-3,ρ_1 ε^-3]∂_t^-1 Res(ε^2 ψ_cKdV)(r,·) _H^s≤ C_ resε^13/2.
Without the transformation v = u + u^2 which converts (<ref>) into (<ref>), the terms in the residual Res(u) constructed similarly to (<ref>) which have no ∂_t in front would be
- ε^8 ∂_ρ^2 A - ε^8 ρ^-1∂_ρ A - ε^10∂_ρ^2 (A^2) - ε^10ρ^-1∂_ρ (A^2) .
As above, by replacing ∂_ρ A by the right-hand side of the cKdV equation (<ref>) we gain derivatives in τ. However, due to the ρ^-1 A term in (<ref>), among other terms we would produce terms of the form ε^8 ρ^-2 A and ε^10ρ^-2 A^2.
The operator ∂_τ^-1 can only be applied to these terms if A and A^2 have a vanishing mean value. However, A^2 can only have a vanishing mean value if A vanishes identically. Moreover, it doesn't help to consider ∂_ρ^2 ∂_τ^-1 (A^2) and ρ^-1∂_ρ∂_τ^-1 (A^2) directly since the cKdV equation (<ref>) does not preserve the L^2-norm of the solutions.
Therefore, the transformation v = u + u^2 is essential for our justification analysis.
§.§ Local existence and uniqueness
Here we prove the local existence and uniqueness of the solutions of
the second-order evolution equation (<ref>), which we rewrite as
(∂_r^2 + r^-1∂_r)(1- ∂_t^2 ) v = ∂_t^2 v +
∂_t^2 ( 1 + ∂_r^2 + r^-1∂_r) (- v^2 + N(v)) .
By using ℬ^2 := ∂_t^2 (1- ∂_t^2 )^-1,
we rewrite the evolution problem in the form:
(∂_r^2 + r^-1∂_r) v = ℬ^2v +
ℬ^2 ( 1 + ∂_r^2 + r^-1∂_r) (- v^2 + N(v))
= ℬ^2 v +
ℬ^2 (- v^2 + N(v)) + r^-1ℬ^2 (-2v +N'(v))∂_r v
+ ℬ^2 [ (-2 v+N'(v)) ∂_r^2 v + (-2+N''(v))(∂_r v)^2 ] .
The operator ℬ^2 is bounded in Sobolev space H^s() for every s ∈ℝ. The second-order evolution equation (<ref>) can be rewritten as a first-order system by introducing w := ∂_r v such that
{[ ∂_r v = w,; ∂_r w = f(v,w) , ].
where
f(v,w) = - r^-1 w + [ 1 - ℬ^2 (-2 v+N'(v)) ·]^-1ℬ^2 [ v - v^2 + N(v) + (-2+N''(v))w^2 ] .
Since N(v) = 𝒪(v^3) for small v, the right hand side of system (<ref>) for sufficiently small v is locally Lipschitz-continuous in H^s() × H^s() for every s > 1/2 due to Sobolev's embedding theorem. The following
local existence and uniqueness result holds due to the Picard-Lindelöf theorem.
Fix s > 1/2 and r_0 > 0. There exists a δ_0 > 0 such that for all δ∈ (0,δ_0) and (v_0,w_0) ∈ H^s() × H^s() with v_0 _H^s≤δ, there exists r_1 > r_0 and a unique solution (v,w) ∈ C^0([r_0,r_1],H^s() × H^s()) of system (<ref>) with (v,w) |_r = r_0 = (v_0,w_0).
There exists a unique solution
(v,∂_r v) ∈ C^0([r_0,r_1],H^s() × H^s()) of the second-order evolution equation (<ref>) for the corresponding (v,∂_r v) |_r = r_0 = (v_0,w_0) .
A combination of the local existence and uniqueness result of Theorem <ref> with the subsequent error estimates, used as a priori estimates, guarantees the existence and uniqueness of the solutions of equations for the error terms, see equation (<ref>), as long as the error is estimated to be small.
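The boundedness of ℬ^2 used above is transparent in Fourier variables, where ℬ^2 acts as multiplication by the symbol -ω^2/(1+ω^2), which is bounded by one in modulus. The following numerical sketch (our own illustration, not part of the analysis) applies ℬ^2 on a grid and checks the resulting L^2-contraction:

```python
import numpy as np

def apply_B2(v, dt):
    """Apply B^2 = d_t^2 (1 - d_t^2)^{-1} to samples of v(t) via FFT."""
    n = len(v)
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)  # angular frequencies
    symbol = -w**2 / (1.0 + w**2)              # bounded by 1 in modulus
    return np.real(np.fft.ifft(symbol * np.fft.fft(v)))

# boundedness check: ||B^2 v||_{L^2} <= ||v||_{L^2} by Parseval's equality
t = np.linspace(-20.0, 20.0, 4096)
v = np.exp(-t**2)
assert np.linalg.norm(apply_B2(v, t[1] - t[0])) <= np.linalg.norm(v)
```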
§.§ The L^2-error estimates
We introduce the error function R through
the decomposition
v = ε^2 ψ_ cKdV + ε^β R
with ψ_ cKdV(r,t) = A(ρ,τ) and β := 7/2 to be obtained from the energy estimates, see Section <ref>. The error function R satisfies
0 = (∂_r^2 + r^-1∂_r) R - ∂_t^2 (1+∂_r^2 + r^-1∂_r) R
+ 2ε^2 ∂_t^2 (1+∂_r^2 + r^-1∂_r) ( A R)
+ ε^β∂_t^2 (1+∂_r^2 + r^-1∂_r) (R^2)
-ε^-β∂_t^2 ( 1 + ∂_r^2 + r^-1∂_r) (N(ε^2 A + ε^β R) - N(ε^2 A))
- ε^-βRes(ε^2 A).
Before we start to estimate the error we note that there is no problem with regularity of solutions of equation (<ref>) in the following sense. Rewriting (<ref>) as (<ref>) and (<ref>) in Section <ref> shows that if (R,∂_r R) ∈ C^0([r_0,r_1],H^s() × H^s()), then ∂_r^2 R(r,·) has the same regularity.
In particular, we have the estimate:
There exist constant C_ l and a smooth monotone function C_ n such that for all ε∈ (0,1) we have
∂_r^2 R(r,·) _L^2 ≤ε^15/2-β C_ res + C_ l (1 + ε^2 C_A) ( R(r,·) _L^2 + ∂_r R(r,·) _L^2)
+ ε^β C_ n( R(r,·) _L^∞ + ∂_r R(r,·) _L^∞)
( R(r,·) _L^2 + ∂_r R(r,·) _L^2) ,
where C_ res is defined in Lemma <ref> and C_A is defined in (<ref>).
The difficulty in estimating the error R comes from the fact that the error equation (<ref>) contains linear terms of order 𝒪(ε^2) while we have to bound the error on the interval
[ρ_0 ε^-3, ρ_1 ε^-3] of length 𝒪(ε^-3). We get rid of this
mismatch of powers in ε by writing the terms of order 𝒪(ε^2) as derivatives in r, such that these can either be included in the balance of energy or be written as terms
where derivatives fall on A, which allows us to estimate these terms to be of order 𝒪(ε^3).
We follow the approach used in the energy estimates for the KdV approximation for obtaining an H^1-estimate for R <cit.>. To obtain first the L^2-estimates for R, we multiply (<ref>) with - ∂_r ∂_t^-2 R and integrate it w.r.t. t.
The term - ∂_r ∂_t^-2 R is defined via its Fourier transform w.r.t. t, i.e., with abuse of notation, by ∂_t^-1 R = ℱ^-1( (ik)^-1R). All integrals in t are considered on and Parseval's equality is used when it is necessary.
We report details of computations as follows.
i) From the linear terms in R we then obtain
s_1 = -∫ (∂_r^2 R )(∂_r ∂_t^-2 R) dt = 1/2d/dr∫ (∂_r ∂_t^-1 R)^2 d t,
s_2 = -∫ ( r^-1∂_r R )(∂_r ∂_t^-2 R) dt = r^-1∫
(∂_r ∂_t^-1 R)^2 dt ,
s_3 = ∫ (∂_t^2 R )(∂_r ∂_t^-2 R) dt = 1/2d/dr∫
R^2 dt,
s_4 = ∫ (∂_t^2 ∂_r^2 R )(∂_r ∂_t^-2 R) dt = 1/2d/dr∫
(∂_r R)^2 dt,
s_5 = ∫ (r^-1∂_t^2 ∂_r R )(∂_r ∂_t^-2 R) dt = r^-1∫
(∂_r R)^2 dt.
ii) From the mixed terms in AR we obtain
s_ mixed = - 2ε^2 ∫ (∂_t^2 (1+∂_r^2 + r^-1∂_r) ( A R) )(∂_r ∂_t^-2R) dt
= - 2ε^2 ∫ ( (1+∂_r^2 + r^-1∂_r) ( A R) )(∂_r R) dt
= s_6 + s_7 + s_8,
where
s_6 := - 2ε^2 ∫ ( A R) (∂_rR) dt,
s_7 := - 2ε^2 ∫ (∂_r^2( AR)) (∂_r R) dt,
s_8 := - 2ε^2 ∫ (r^-1∂_r( A R) )(∂_r R) dt.
We find
s_6 = - ε^2 d/dr∫ A R^2 dt
+ ε^2 ∫ ( ∂_r A) R^2 dt,
where the second term is estimated by
| ε^2 ∫ ( ∂_r A) R^2 dt | ≤ε^2 ∂_r A _L^∞ R _L^2^2
which is 𝒪(ε^3) since ∂_r A = -ε∂_τ A + ε^3 ∂_ρ A by the chain rule.
Next we have
s_7 = - 2ε^2 ∫ (∂_r^2 A)R (∂_r R) dt - 3ε^2 ∫ (∂_r A) (∂_r R)^2 dt - ε^2 d/dr∫ A (∂_r R)^2 dt,
which are estimated by
| 2ε^2 ∫ (∂_r^2 A)R (∂_r R) dt | ≤ 2ε^2 ∂_r^2 A _L^∞ R _L^2∂_r R _L^2,
| 3ε^2 ∫ (∂_r A) (∂_r R)^2 dt | ≤ 3ε^2 ∂_r A _L^∞∂_r R _L^2^2.
These terms are at least of order 𝒪(ε^3)
since ∂_r A = 𝒪(ε) and ∂_r^2 A = 𝒪(ε^2) by the chain rule. For the last term, we obtain the estimate
|s_8| ≤ 2ε^2 r^-1∂_r A _L^∞ R _L^2∂_r R _L^2
+ 2ε^2 r^-1 A _L^∞∂_r R ^2_L^2
which is of order 𝒪(ε^5)
since r ∈ [ρ_0 ε^-3, ρ_1 ε^-3].
iii) From the quadratic terms in R we obtain
s_ quad = - ε^β∫ (∂_t^2 (1+∂_r^2 + r^-1∂_r) ( R^2) )(∂_r ∂_t^-2R) dt
= - ε^β∫ ( (1+∂_r^2 + r^-1∂_r) ( R^2) )(∂_r R) dt
= s_9 + s_10 + s_11,
where
s_9 := - ε^β∫ R^2 (∂_rR) dt =
- 1/3ε^βd/dr∫ R^3 dt,
s_10 := - ε^β∫ (∂_r^2( R^2)) (∂_r R) dt = - ε^βd/dr∫ R (∂_r R)^2 dt - ε^β∫ (∂_r R)^3 dt,
s_11 := - ε^β∫ (r^-1∂_r( R^2) )(∂_r R) dt.
The remaining terms can be estimated by
| ε^β∫ (∂_r R)^3 dt | ≤ε^β∂_r R _L^∞∂_r R _L^2^2,
| ε^β∫ (r^-1∂_r( R^2) )(∂_r R) dt | ≤ 2ε^β r^-1 R _L^∞∂_r R ^2_L^2 .
iv) For the terms collected in N we have
s_N =
ε^-β∫ ( ∂_t^2 ( 1 + ∂_r^2 + r^-1∂_r) (N(ε^2 A + ε^β R) - N(ε^2 A)))(∂_r ∂_t^-2 R) dt
=
ε^-β∫ ( ( 1 + ∂_r^2 + r^-1∂_r) (N(ε^2 A + ε^β R) - N(ε^2 A)))(∂_r R) dt
Since N(v) is analytic in v we have the representation
N(v) = ∑_n= 3^∞ a_n v^n,
with coefficients a_n ∈, and so
we find
ε^-β (N(ε^2 A + ε^β R) - N(ε^2 A)) = ε^-β∑_n=3^∞ a_n ∑_j=1^n \binom{n}{j} (ε^2 A)^n-j ( ε^β R)^j,
such that these terms are at least of order 𝒪(ε^4) and cause no problems for the estimates w.r.t. powers of ε.
However, we have to be careful about the regularity of these terms. As an example, we look at the terms with the most derivatives,
namely
∫∂_r^2 (A^n-jR^j) (∂_r R) dt = s_12 + s_13+ s_14 ,
where
s_12 := ∫ (∂_r^2 (A^n-j)) R^j (∂_r R) dt,
s_13 := 2 ∫ (∂_r (A^n-j)) (∂_r (R^j)) (∂_r R) dt,
s_14 := ∫ A^n-j(∂_r^2 ( R^j)) (∂_r R) dt
= j (j-1) ∫ A^n-j R^j-2 (∂_r R)^3 dt + j ∫ A^n-j R^j-1 (∂_r^2 R) (∂_r R) dt.
The second derivative ∂_r^2 R is controlled
in terms of R and ∂_r R by means of (<ref>).
As a result, there exists a constant C_ l and a smooth monotone function
C_ n such that for all ε∈ (0,1) we have
|s_N | ≤ε^4 C_ l ( R _L^2^2 + ∂_r R _L^2^2)
+ ε^2+β C_ n ( R _L^∞ + ∂_r R _L^∞)
( R _L^2^2 + ∂_r R _L^2^2).
v) The residual term gives
s_15 = ε^-β∫ (Res(ε^2 A) ) (∂_r ∂_t^-2 R) dt
= - ε^-β∫∂_t^-1(Res(ε^2 A) ) (∂_r ∂_t^-1 R) dt .
It is estimated by
|s_15| ≤ C_ resε^13/2-β∂_r ∂_t^-1 R _L^2,
where C_ res is defined in Lemma <ref>.
Without the change of variables v = u+u^2 we would additionally get the following mixed terms
-2 ε^2 ∫(∂_r^2 ( A R) )(∂_r ∂_t^-2 R) dt - 2ε^2 ∫ (r^-1∂_r ( A R) )(∂_r ∂_t^-2 R) dt
which cannot be written in an obvious manner as sums of a derivative w.r.t. r and higher-order terms. Moreover, without the change of variables v = u+u^2, according to Remark <ref>, we cannot estimate ∂_t^-1(Res(ε^2 A) ) nor the counterpart to s_15. This emphasizes again the necessity of the change of variables v = u + u^2 in order to replace (<ref>) with (<ref>).
§.§ The H^1-error estimates
The energy quantity will be constructed in Section <ref> based on the derivative formulas for s_1, s_3, s_4, and other terms. It will be used for estimating the terms which we were not able to write as derivatives w.r.t. r. Since we need estimates for R _L^∞ we will use Sobolev's embedding
f _L^∞≤ C f _H^1, ∀ f ∈ H^1()
and hence we have to extend the energy by additional terms involving ∂_t R _L^2^2. To do so, we proceed here as in Section <ref> but now for the L^2-error estimates of the t-derivatives.
Based on the product rule
∂_t (uv) _L^2≤ u _L^∞∂_t v _L^2
+ v _L^∞∂_t u _L^2 .
we have the following generalization of the bound (<ref>) in Lemma <ref>.
There exist constant C_ l, C_t,res and a smooth monotone function C_ n such that for all ε∈ (0,1) we have
∂_r^2 R(r,·) _H^1 ≤ε^15/2-β C_ res + C_ l (1 + ε^2 C_A) ( R(r,·) _H^1 + ∂_r R(r,·) _H^1)
+ C_ n( R(r,·) _L^∞ + ∂_r R(r,·) _L^∞)
ε^β ( R(r,·) _H^1 + ∂_r R(r,·) _H^1),
where C_ res is defined in Lemma <ref> and C_A is defined in (<ref>).
To get the H^1-error estimates, we multiply (<ref>) by ∂_r R and then integrate w.r.t. t. We report details of computations as follows.
i) From the linear terms in R we obtain
r_1 = ∫ (∂_r^2 R )(∂_r R) dt = 1/2d/dr∫
(∂_r R)^2 dt,
r_2 = ∫ ( r^-1∂_r R )(∂_r R) dt = r^-1∫
(∂_r R)^2 dt ,
r_3 = - ∫ (∂_t^2 R )(∂_r R) dt = 1/2d/dr∫
(∂_t R)^2 dt,
r_4 = -∫ (∂_t^2 ∂_r^2 R )(∂_r R) dt = 1/2d/dr∫
(∂_r ∂_t R)^2 dt,
r_5 = - ∫ (∂_t^2 r^-1∂_r R )(∂_r R) dt = r^-1∫
(∂_r ∂_t R)^2 dt.
ii) From the mixed terms in AR we obtain
r_ mixed = 2ε^2 ∫ (∂_t^2 (1+∂_r^2 + r^-1∂_r) ( A R) )(∂_r R) dt
= - 2ε^2 ∫ ( (1+∂_r^2 + r^-1∂_r) ∂_t( A R) )(∂_r ∂_t R) dt
= r_6 + r_7 + r_8,
where
r_6 := - 2ε^2 ∫ ( ∂_t( A R) )(∂_r ∂_t R) dt,
r_7 := - 2ε^2 ∫ (∂_r^2 ∂_t ( A R) )(∂_r ∂_t R) dt,
r_8 := - 2ε^2 ∫ (r^-1∂_r ∂_t ( A R) )(∂_t ∂_r R) dt.
We find
r_6 = - 2ε^2 ∫ ( ∂_t A ) R (∂_r ∂_t R) dt - ε^2 d/dr∫ A (∂_t R)^2 dt + ε^2 ∫ (∂_r A) (∂_t R)^2 dt,
which can be estimated as
| 2ε^2 ∫ ( ∂_t A ) R (∂_r ∂_t R) dt| ≤ 2ε^2 ∂_t A _L^∞R _L^2∂_r ∂_t R _L^2,
| ε^2 ∫ (∂_r A) (∂_t R)^2 dt | ≤ε^2 ∂_r A_L^∞∂_t R _L^2^2.
These terms are at least of order 𝒪(ε^3) since
∂_r A and ∂_t A are of order 𝒪(ε) by the chain rule. Next we estimate r_7 for which we note that
d/dr∫ A (∂_r ∂_t R)^2 dt = ∫ (∂_r A) (∂_r ∂_t R)^2 dt +
2 ∫ A (∂_r ∂_t R)(∂_r^2 ∂_t R) dt
and
∂_r^2 ∂_t ( A R) = A ∂_r^2 ∂_t R +2 (∂_rA) ∂_r ∂_t R +( ∂_tA) ∂_r^2 R
+2( ∂_t ∂_r A) ∂_r R +
( ∂_r^2 A) ∂_t R + (∂_r^2 ∂_t A) R.
As a result, we obtain
r_7 = - ε^2 d/dr∫ A (∂_r ∂_t R)^2 dt + r_7,a + r_7,b+ r_7,c + r_7,d+ r_7,e
with
r_7,a := - 3ε^2 ∫ (∂_r A) (∂_r ∂_t R)^2 dt ,
r_7,b := - 2ε^2 ∫ ( ∂_t A) (∂_r^2 R) (∂_r ∂_t R) dt ,
r_7,c := - 4ε^2 ∫ ( ∂_t ∂_r A) (∂_r R ) (∂_r ∂_t R) dt ,
r_7,d := - 2ε^2 ∫ ( ∂_r^2 A) (∂_t R ) (∂_r ∂_t R) dt ,
r_7,e := - 2ε^2 ∫ (∂_r^2 ∂_t A) R (∂_r ∂_t R) dt .
We estimate
|r_7,a| ≤ 3ε^2 ∂_rA _L^∞∂_r ∂_t R_L^2^2 ,
|r_7,b| ≤ 2ε^2 ∂_tA_L^∞∂_r^2 R _L^2∂_r ∂_t R_L^2 ,
|r_7,c| ≤ 4ε^2 ∂_t ∂_r A_L^∞∂_r R _L^2∂_r ∂_t R_L^2 ,
|r_7,d| ≤ 2ε^2 ∂_r^2 A_L^∞∂_t R _L^2∂_r ∂_t R_L^2 ,
|r_7,e| ≤ 2ε^2 ∂_r^2 ∂_t A_L^∞ R _L^2∂_r ∂_t R_L^2.
All these terms are at least of order 𝒪(ε^3) because of the derivatives on A in r and t. Moreover, we can use (<ref>) for estimating ∂_r^2 R _L^2.
The last mixed term is decomposed with the product rule as
r_8 = r_8,a+ r_8,b + r_8,c+ r_8,d,
where
r_8,a := - 2ε^2 ∫ r^-1 (∂_r ∂_t A) R (∂_r ∂_t R) dt,
r_8,b := - 2 ε^2 ∫ r^-1(∂_t A )(∂_r R)(∂_r ∂_t R) dt,
r_8,c := - 2 ε^2 ∫ r^-1(∂_r A) (∂_t R) (∂_r ∂_t R) dt,
r_8,d := - 2ε^2 ∫ r^-1A (∂_r ∂_t R)^2 dt.
We estimate
|r_8,a| ≤ 2ε^2 r^-1∂_r ∂_t A_L^∞ R _L^2∂_r ∂_t R_L^2,
| r_8,b| ≤ 2 ε^2 r^-1∂_t A _L^∞∂_r R_L^2∂_r ∂_t R_L^2,
| r_8,c| ≤ 2 ε^2 r^-1∂_r A_L^∞∂_t R_L^2∂_r ∂_t R_L^2,
| r_8,d| ≤ 2ε^2 r^-1 A _L^∞∂_r ∂_t R_L^2^2 .
iii) From the quadratic terms in R we obtain
r_ quad = ε^β∫ (∂_t^2 (1+∂_r^2 + r^-1∂_r) ( R^2) )(∂_r R) dt
= - ε^β∫ ( (1+∂_r^2 + r^-1∂_r) ∂_t( R^2) )(∂_r ∂_tR) dt
= r_9 + r_10 + r_11,
where
r_9 := - 2ε^β∫ R(∂_tR) (∂_r ∂_t R) dt,
r_10 := - ε^β∫ (∂_r^2 ∂_t (R^2) )(∂_r ∂_t R) dt,
r_11 := - ε^β∫ r^-1 (∂_r ∂_t(R^2) )(∂_r ∂_t R) dt .
The first term is estimated by
|r_9 | ≤ 2ε^β R _L^∞∂_tR_L^2∂_r ∂_t R_L^2 .
The second term is rewritten by using
d/dr∫ R (∂_r ∂_t R)^2 dt = ∫ (∂_r R) (∂_r ∂_t R)^2 dt + 2 ∫ R (∂_r ∂_t R) (∂_r^2 ∂_t R)dt
and
∂_r^2 ∂_t (R^2) = 2 R ∂_r^2 ∂_t R + 4 (∂_r R) ∂_r ∂_t R
+ 2 (∂_t R) ∂_r^2 R
in the form
r_10 = - ε^βd/dr∫ R (∂_r ∂_t R)^2 dt + r_10,a + r_10,b
with
r_10,a := - 3ε^β∫ (∂_r R) (∂_r ∂_t R)^2 dt ,
r_10,b := - 2ε^β∫ (∂_t R) (∂_r^2 R) (∂_r ∂_t R) dt.
The remainder terms are estimated as follows
|r_10,a | ≤ 3 ε^β∂_r R_L^∞∂_r ∂_t R _L^2^2 ,
|r_10,b| ≤ 2ε^β∂_r^2 R_L^∞∂_t R_L^2∂_r ∂_t R_L^2 ,
where we can use (<ref>) and Sobolev's embedding (<ref>) to estimate ∂_r R_L^∞ and ∂_r^2 R _L^∞. The last quadratic term is decomposed with the product rule as
r_11 = r_11,a+r_11,b,
where
r_11,a := - 2 ε^β∫ r^-1R (∂_r ∂_t R)^2 dt ,
r_11,b := - 2 ε^β∫ r^-1 (∂_r R)(∂_t R )(∂_r ∂_t R) dt ,
which we estimate by
| r_11,a| ≤ 2 ε^β r^-1 R _L^∞∂_r ∂_t R _L^2^2 ,
|r_11,b| ≤ 2 ε^β r^-1∂_r R_L^∞∂_t R _L^2∂_r ∂_t R _L^2.
iv) For the terms collected in N we have
r_N = -ε^-β∫ ( ∂_t^2 ( 1 + ∂_r^2 + r^-1∂_r) (N(ε^2 A + ε^β R) - N(ε^2 A)))(∂_r R) dt
=
ε^-β∫ ( ( 1 + ∂_r^2 + r^-1∂_r) ∂_t(N(ε^2 A + ε^β R) - N(ε^2 A)))(∂_r ∂_t R) dt
Proceeding as for the L^2-estimate
and using the bound (<ref>) on the second derivative
∂_r^2 R in terms of R and ∂_r R yields
the existence of a constant C_14,l and a smooth monotone function C_14,n such that for all ε∈ (0,1) we have
|r_N | ≤ C_14,lε^4 ( R _H^1^2 + ∂_r R _H^1^2)
+ C_14,n( R _L^∞ + ∂_r R _L^∞)
ε^2+β ( R _H^1^2 + ∂_r R _H^1^2).
v) The residual term
r_12 = -ε^-β∫ (Res(ε^2 A) )
(∂_r R) dt
is estimated by
|r_12| ≤ C_ resε^15/2 -β∂_r R _L^2,
where C_ res is defined in Lemma <ref>.
§.§ Energy estimates
We use the terms s_1, s_3, s_4, r_1, r_3, r_4, and the parts of s_6, s_7, s_9, s_10, r_6, r_7, and r_10 with derivatives in r to define the following energy
E = E_0 + E_1
with
E_0 = 1/2∫
R^2 dt + 1/2∫
(∂_r ∂_t^-1 R)^2 dt + 1/2∫
(∂_r R)^2 dt
+
1/2∫
(∂_t R)^2 dt + 1/2∫
(∂_r R)^2 dt + 1/2∫
(∂_r ∂_t R)^2 dt ,
E_1 = - ε^2 ∫ A R^2 dt - ε^2 ∫ A (∂_r R)^2 dt - 1/3ε^β∫ R^3 dt
- ε^β∫ R (∂_r R)^2 dt
- ε^2 ∫ A (∂_t R)^2 dt - ε^2 ∫ A (∂_r ∂_t R)^2 dt
- ε^β∫ R (∂_r ∂_t R)^2 dt.
The energy part E_0 is an upper bound for the squared H^1-norm of R, ∂_t^-1 R, and ∂_r R. Moreover, for all M > 0 there exists an ε_1 > 0 such that for all ε∈ (0,ε_1) we have
1/2 E_0 ≤ E ≤3/2 E_0
as long as E^1/2≤ M. All other linear terms which are not contained in the energy E
have either a r^-1 = ε^3 ρ^-1 in front, namely
s_2, s_5, s_8, r_2, r_5, and r_8, or contain a time or space derivative of A, as parts of s_6, s_7, r_6, and r_7, and so all other linear terms are at least of order 𝒪(ε^3).
All nonlinear terms have at least an ε^4 or ε^β factor in front. The residual terms s_15 and r_12 are of order 𝒪(ε^3) if β is chosen as
β = 7/2. As a result, we estimate the rate of change of
energy E from the following inequality
d/d r E ≤ C ε^3 E + C ε^7/2 E^3/2 + C ε^3 E^1/2
≤ 2 C ε^3 E + C ε^7/2 E^3/2 + C ε^3,
with a constant C independent of ε∈ (0,ε_1 ) as long as E^1/2≤ M.
Under the assumption that C ε^1/2 E^1/2≤ 1 we obtain
d/d r E ≤ (2 C+1) ε^3 E + C ε^3.
Gronwall's inequality immediately gives the bound
sup_r ∈ [ρ_0 ε^-3, ρ_1 ε^-3] E(r) ≤ C T_0 e^ (2 C+1)T_0 =: M = 𝒪(1), with T_0 := ρ_1 - ρ_0,
and so sup_r ∈ [ρ_0 ε^-3, ρ_1 ε^-3] R(r,·) _H^1 = 𝒪(1).
Finally choosing ε_2 > 0 so small that C ε_2^1/2 M^1/2≤ 1 gives the required estimate for all ε∈ (0,ε_0 )
with ε_0 = min ( ε_1 , ε_2 )> 0. Therefore, we have proved Theorem <ref>.
§ SOLITARY WAVE SOLUTIONS OF THE CKDV EQUATION
Here we prove Theorem <ref>. We look for solutions of the cKdV equation (<ref>) in the class of solitary waves represented in the form
A(ρ,τ) = -6 ∂_τ^2 log f(ρ,τ),
which transforms (<ref>) into the following bilinear equation <cit.>:
2 [ f ∂_ρ∂_τ f - (∂_ρ f) (∂_τ f) ] + ρ^-1 f ∂_τ f + f ∂_τ^4 f - 4 (∂_τ f) (∂_τ^3 f) + 3 (∂_τ^2 f)^2 = 0.
To prove Theorem <ref>, we analyze solutions of (<ref>) in the self-similar form <cit.>:
f(ρ,τ) = 1 + 1/(6 ρ)^1/3 F(z), z = τ/(6 ρ)^1/3
with some F ∈ C^∞(,). The form (<ref>) combined with (<ref>) yields (<ref>). We give a complete characterization of all possible solutions F(z) and prove that the resulting A(ρ,·) is never square integrable w.r.t. τ. The proof is based on the three results obtained in the following three lemmas.
The first result gives the most general expression for F(z) in (<ref>).
The most general solution f(ρ,τ) of the bilinear equation (<ref>) in the self-similar form (<ref>) with F ∈ C^∞(,) is given by
F(z) = α[ (w_1')^2 - z w_1^2 ] ± 2 √(αβ)[ w_1' w_2' - z w_1 w_2 ] + β[ (w_2')^2 - z w_2^2 ],
where α,β∈ℝ are arbitrary such that αβ≥ 0 and w_1(z) := Ai(z), w_2(z) := Bi(z) are two linearly independent solutions of the Airy equation
w”(z) - z w(z) = 0.
Substituting (<ref>) into (<ref>) shows that the variables are separated and F(z) satisfies an overdetermined system of two (linear and quadratic) differential equations:
F''''(z) - 4 z F''(z) - 2 F'(z) = 0
and
4 F'(z) [ z F'(z) + F(z) - F'''(z) ] + 3 [F''(z)]^2 = 0.
Let G(z) := -F'(z). Then (<ref>) reduces to the third-order equation
G'''(z) - 4 z G'(z) - 2 G(z) = 0,
the general solution of which is known (see 10.4.57 in <cit.>):
G(z) = α [ Ai(z)]^2 + β [ Bi(z)]^2 + γ Ai(z) Bi(z),
where α,β,γ are arbitrary. Denoting w_1(z) := Ai(z) and w_2(z) := Bi(z), we confirm that
d/dz [(w_1,2')^2 - z w_1,2^2] = 2 w_1,2' (w_1,2'' - z w_1,2) - w_1,2^2 = -w_1,2^2
and
d/dz [ w_1' w_2' - z w_1 w_2] = (w_1''-zw_1) w_2' + w_1'(w_2''-z w_2) - w_1 w_2 = -w_1 w_2
Hence, F'(z) = -G(z) is integrated to the form
F(z) = C + α[ (w_1')^2 - z w_1^2 ] + β[ (w_2')^2 - z w_2^2 ] + γ[ w_1' w_2' - z w_1 w_2 ],
where C is an integration constant. The same constant C appears in the integration of (<ref>) to the form
F'''(z) - 4 z F'(z) + 2 F(z) = 2 C.
It remains to verify if the general solution (<ref>) satisfies the quadratic equation (<ref>). Multiplying (<ref>) by F”(z) and integrating, we obtain
[F''(z)]^2 - 4 z [F'(z)]^2 + 4 F(z) F'(z) = 4C F'(z) + D,
where D is another integration constant. On the other hand, substituting (<ref>) into (<ref>) yields
[F''(z)]^2 - 4 z [F'(z)]^2 + 4 F(z) F'(z) = 8/3 C F'(z).
Comparison of (<ref>) and (<ref>) yields C = D = 0. Finally, we substitute (<ref>) into (<ref>) with C = 0 and obtain
0 = [F''(z)]^2 - 4z [F'(z)]^2 + 4 F(z) F'(z)
= (γ^2 - 4 αβ) (w_1 w_2' - w_1' w_2)^2,
where the Wronskian of two linearly independent solutions is nonzero,
w_1 w_2' - w_1' w_2 ≠ 0. Hence, the system (<ref>)-(<ref>) is compatible for the solution (<ref>) if and only if C = 0 and γ = ± 2 √(αβ) with only two arbitrary constants α, β∈ℝ.
The solution F(z) in (<ref>) is real if and only αβ≥ 0.
The next result shows that the expression (<ref>) with this F is sign-definite (positive) if and only if α≥ 0 and β = 0.
Let F be given by (<ref>) with αβ≥ 0. For every k > 0, we have k + F(z) > 0 for every z ∈ℝ if and only if α≥ 0 and β = 0.
We shall make use of the asymptotic expansions of the Airy functions, see 10.4.59-60 and 10.4.63-64 in <cit.>:
{[ Ai(z) ∼1/2 √(π)√(z) e^-2/3 z^3/2[ 1 + 𝒪(z^-3/2) ],; Bi(z) ∼1/√(π)√(z) e^2/3 z^3/2[ 1 + 𝒪(z^-3/2) ], ]. z → +∞
and
{[ Ai(z) ∼1/√(π)√(|z|)[ sin( 2/3 |z|^3/2 + π/4) + 𝒪(|z|^-3/2) ],; Bi(z) ∼1/√(π)√(|z|)[ cos( 2/3 |z|^3/2 + π/4) + 𝒪(|z|^-3/2) ], ]. z → -∞
Due to cancelations, it is not convenient to use the expression (<ref>) directly as z →±∞. Instead, we use (<ref>) with γ = ± 2 √(αβ) and obtain
F'(z) ∼ - α/4 π√(z) e^-4/3 z^3/2[ 1 + 𝒪(z^-3/2) ] - β/π√(z) e^4/3 z^3/2[ 1 + 𝒪(z^-3/2) ]
∓√(αβ)/π√(z)[ 1 + 𝒪(z^-3/2) ] z → +∞
and
F'(z) ∼ - α/2 π√(|z|)[ 1 + sin( 4/3 |z|^3/2) + 𝒪(|z|^-3/2) ]
- β/2 π√(|z|)[ 1 - sin( 4/3 |z|^3/2) + 𝒪(|z|^-3/2) ]
±√(αβ)/π√(|z|)[ cos( 4/3 |z|^3/2) + 𝒪(|z|^-3/2) ] z → -∞
Integrating these expressions and recalling that C = 0 in (<ref>), we obtain
F(z) ∼α/8 π z e^-4/3 z^3/2[ 1 + 𝒪(z^-3/2) ] - β/2 π z e^4/3 z^3/2[ 1 + 𝒪(z^-3/2) ]
∓2 √(αβ)/π√(z)[ 1 + 𝒪(z^-3/2) ] z → +∞
and
F(z) ∼α/π√(|z|)[ 1 + 𝒪(|z|^-3/2) ] + β/π√(|z|)[ 1 + 𝒪(|z|^-3/2) ]
∓√(αβ)/2 π |z|[ sin( 4/3 |z|^3/2) + 𝒪(|z|^-3/2) ] z → -∞
If β≠ 0, then F(z) → - sgn(β) ∞ as z → +∞.
Since αβ≥ 0, we also get F(z) → sgn(β) ∞ as z → -∞. Hence, for every k ≥ 0, k + F(z) fails to be sign-definite
whenever β≠ 0.
Setting β = 0, we get F'(z) = - α [ Ai(z)]^2 and since Ai(z) → 0 as z → +∞ sufficiently fast, we can define
F(z) = α∫_z^∞ [ Ai(z')]^2 dz',
where the constant of integration is uniquely selected since C = 0 in (<ref>). Hence, F(z) is sign-definite for every z ∈ℝ and sgn(F) = sgn(α). We also have F(z) → 0 as z → +∞ and F(z) → sgn(α) ∞ as z → -∞. Hence, for every k > 0, k + F(z) > 0 for every z ∈ℝ if and only if α≥ 0 in (<ref>).
Finally, we use the solution F(z) in (<ref>) with α > 0 and show that the solution A(ρ,·) in (<ref>) and (<ref>) decay to zero at infinity, satisfies the zero-mean constraint, but is not square integrable for every ρ > 0.
Let F be given by (<ref>) with α > 0
and let A be given by (<ref>) with (<ref>). For every ρ > 0, we have A(ρ,τ) → 0 as |τ| →∞,
∫_ℝ A(ρ,τ) d τ = 0,
and A(ρ,·) ∉ L^2(ℝ).
By chain rule, we have from (<ref>) and (<ref>)
A(ρ,τ) = -6/(6 ρ)^2/3∂_z^2 log [(6 ρ)^1/3 + F(z)],
where z = τ/(6 ρ)^1/3. Since k + F(z) > 0 for every k > 0 and z ∈ℝ, we have
A(ρ,·) ∈ L^2_ loc(ℝ). It remains to consider square integrability of A(ρ,·) at infinity.
It follows from (<ref>), see the proof of Lemma <ref>, that
F(z) ∼α/8 π z e^-4/3 z^3/2[ 1 + 𝒪(z^-3/2) ] z → +∞
and
F(z) ∼α/π√(|z|)[ 1 + 𝒪(|z|^-3/2) ] z → -∞.
Since F(z), F'(z) → 0 as z → +∞, we have
A(ρ,τ) ∼ - ρ^-1 F''(z) [ 1 + 𝒪(|z|^-3/2) ]
∼ -α/2 πρ e^-4/3 z^3/2 [ 1 + 𝒪(|z|^-3/2) ] as z → +∞,
hence, A(ρ,·) ∈ L^2(τ_0,∞) for any τ_0 ≫ 1 and ρ > 0. However, since F(z) →∞ and F'(z) → 0 as z → -∞, we have
A(ρ,τ) ∼ -6/(6 ρ)^2/3 · F''(z)/[(6 ρ)^1/3 + F(z)] · [ 1 + 𝒪(|z|^-3/2) ]
∼ -√(6)/√(ρ |τ|) [ cos( 4/3 |z|^3/2) + 𝒪(|z|^-3/2) ] as z → -∞,
where we have used the expansion
F''(z) ∼α/π [ cos( 4/3 |z|^3/2) + 𝒪(|z|^-3/2) ] as z → -∞.
Hence, A(ρ,·) ∉ L^2(-∞,τ_0) for any τ_0 ≪ -1 and ρ > 0. At the same time, A(ρ,τ) → 0 as τ→±∞
and the zero-mean constraint is satisfied due to
∫_ℝ A(ρ,τ) d τ = -6/(6 ρ)^1/3 · F'(z)/[(6 ρ)^1/3 + F(z)] |_z → -∞^z → +∞ = 0,
due to the decay of F'(z) → 0 as z →±∞.
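The profile (<ref>) with (<ref>) can be evaluated numerically from the β = 0 solution F(z) = α [Ai'(z)^2 - z Ai(z)^2] of Lemma <ref>. The following sketch (our own illustration with an arbitrary α and a finite-difference second derivative; it is not the code behind the figure discussed below) reproduces the solitary wave with its oscillatory tail:

```python
import numpy as np
from scipy.special import airy

def profile(rho, tau, alpha=1.0e4):
    """Evaluate A(rho, tau) = -6 d^2/dtau^2 log[1 + F(z)/(6 rho)^{1/3}]
    for the beta = 0 solution F(z) = alpha*(Ai'(z)^2 - z*Ai(z)^2)."""
    s = (6.0 * rho) ** (1.0 / 3.0)
    z = tau / s
    Ai, Aip, _, _ = airy(z)
    F = alpha * (Aip**2 - z * Ai**2)   # positive for alpha > 0
    g = np.log(1.0 + F / s)
    d = tau[1] - tau[0]
    A = np.empty_like(tau)
    A[1:-1] = -6.0 * (g[2:] - 2.0 * g[1:-1] + g[:-2]) / d**2
    A[0], A[-1] = A[1], A[-2]
    return A

tau = np.linspace(-60.0, 30.0, 20001)
A = profile(rho=20.0, tau=tau)   # oscillatory tail develops as tau -> -infinity
```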
Figure <ref> shows a representative example of the solitary wave in the cKdV equation (<ref>), where A is plotted versus τ for four values of ρ = 1, 20, 100, 500. The oscillatory tail behind the solitary wave ruins the localization of the solitary wave in L^2(ℝ). Similar to <cit.>, we use a very large value of α to detach the solitary wave from the oscillatory tail. For larger values of ρ, the solitary wave departs even further from the oscillatory tail but its amplitude also decays to zero.
§ DISCUSSION
We have addressed here the justification of the cKdV equation
(<ref>) in the context of the radial waves diverging from the origin in the 2D regularized Boussinesq equation (<ref>).
We have shown that the spatial dynamics and temporal dynamics formulations of (<ref>) are not well posed simultaneously. If the temporal dynamics formulation is well posed, the spatial dynamics formulation is ill posed and vice versa. We have justified the cKdV equation (<ref>) in the case of the spatial dynamics formulation (<ref>)–(<ref>). The main result of Theorem <ref> relies on the existence of smooth solutions of the cKdV equation (<ref>) with the zero-mean constraint (<ref>) in the class of functions (<ref>) with Sobolev exponent s_A > 17/2. However, we have also shown in Theorem <ref> that the class of solitary wave solutions decaying at infinity satisfies the zero-mean constraint but fails to
be square integrable due to the oscillatory, weakly decaying tail as τ→ -∞.
This work calls for further study of the applicability of the cKdV equation for the radial waves in nonlinear dispersive systems. We will list several open directions.
First, the solitary waves of the cKdV equation (<ref>) can be written as the approximate solutions of the radial Boussinesq equation
(<ref>) in the form:
u(r,t) = -6/(6r)^2/3 ( F''(z)/[ε (6 r)^1/3 + F(z)] - [ F'(z) ]^2/[ ε (6 r)^1/3 + F(z) ]^2 ), z := (t-r)/(6r)^1/3,
where F(z) is given by (<ref>) with α > 0 and ε > 0 is the small parameter of the asymptotic expansions. These solitary waves can be considered for fixed t > 0 as functions of r on (0,∞), see Figure <ref> for ε = 0.1. The solitary waves decay very fast as r → 0 and decay as 𝒪(r^-1) as r →∞, see (<ref>) and (<ref>). However, they are still not square integrable in the radial variable because ∫_0^∞ r u(r,t)^2 dr diverges for every t > 0. In addition, the cKdV equation (<ref>) is ill-posed as the temporal
dynamics formulation from t = 0 to t > 0.
Second, it might be possible to consider the temporal formulation
of the cKdV equation (<ref>) and to justify
it in the framework of the temporal dynamics formulation of the Boussinesq equation (<ref>) with σ = -1. One needs to construct a stable manifold for the cKdV equation (<ref>) and to prove the error estimates on the stable manifold. The stable part of the linear semigroup for the cKdV equation (<ref>) has a decay rate of t^-3 for t →∞ due to λ = -|k|^1/3, which could be sufficient for the construction of the stable manifold. However, one needs to combine the linear estimates with the nonlinear estimates.
Third, one can consider a well-posed 2D Boussinesq equation (<ref>) with σ = -1 and to handle the ill-posed radial spatial dynamics formulation (<ref>)–(<ref>) with the justification of the cKdV approximation as in Theorem <ref> by using the approach from <cit.>. This would involve working in spaces of functions which are analytic in a strip in the complex plane.
The oscillatory tails of the cKdV approximation, see Figure <ref>, would now accumulate towards r → 0 for the well-posed 2D Boussinesq equation, see Figure 4 in <cit.>, with the rate of 𝒪(r^-1/2) as r → 0 which is sufficient for ∫_0^∞ r u(r,t)^2 dt to converge for every t > 0.
We conclude that the most promising problem for future work is to justify the temporal formulation of the cKdV equation (<ref>) for the temporal formulation of the 2D Boussinesq equation (<ref>) with σ = -1, for which the solitary waves are admissible in the L^2-based function spaces. If this justification problem can be solved, one can then consider
the transverse stability problem of cylindrical solitary waves under the azimuthal perturbations within the approximation given by the cKP equation with the exact solutions found in <cit.>.
§ REFERENCES
AS1972 M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover (1972).
Bona1 J. L. Bona, M. Chen, and J. C. Saut, “Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media. I. Derivation and linear theory", J. Nonlin. Sci. 12 (2002) 283–318.
Bona2 J. L. Bona, M. Chen, and J. C. Saut, “Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media. II. Nonlinear theory", Nonlinearity 17 (2004) 925–952.
Calogero F. Calogero and A. Degasperis, “Solution by the spectral transform method of a nonlinear evolution equation including as a special case the cylindrical KdV equation", Lett. Nuovo Cimento 23 (1978) 150–154.
Craig W. Craig, “An existence theory for water waves and the Boussinesq and Korteweg–de Vries scaling limits", Comm. Partial Differential Equations 10 (1985) 787–1003.
Dryuma V. S. Dryuma, “An analytical solution of the axial symmetric KdV equation", Izv. Akad. Nauk MSSR 3 (1976) 14–16.
Gal P. Gaillard, “The Johnson equation, Fredholm and Wronskian representations of solutions, and the case of order three", Advances in Math. Phys. 2018 (2018) 1642139.
Grimshaw19 R. H. J. Grimshaw, “Initial conditions for the cylindrical Korteweg-de Vries equation", Stud. Appl. Math. 143 (2019), no. 2, 176-191.
Horikis T.P. Horikis, D.J. Frantzeskakis b, and N.F. Smyth,
“Extended shallow water wave equations",
Wave Motion 112 (2022) 102934.
Step1 W. Hu, J. Ren, and Y. Stepanyants, “Solitary waves and their interactions in the cylindrical Korteweg–de Vries equation,” Symmetry 15 (2023) 413.
Step2 W. Hu, Z. Zhang, Q. Guo, and Y. Stepanyants, “Solitons and lumps in the cylindrical Kadomtsev–-Petviashvili equation. I. Axisymmetric solitons and their stability", Chaos 34 (2024) 013138.
GP21 M. Gallone and S. Pasquali, “Metastability phenomena in two-dimensional rectangular lattices with nearest-neighbour interaction", Nonlinearity 34 (2021) 4983-5044.
Iordansky S.V. Iordansky, “On the asymptotics of an axisymmetric divergent wave in a heavy fluid", Dokl. Akad. Sci. USSR 125 (1959) 1211–1214.
Johnson80 R. S. Johnson, “Water waves and Korteweg-de Vries equations". J Fluid Mech. 97 (1980) 701-719.
Johnson90 R. S. Johnson, “Ring waves on the surface of shear flows: a linear and nonlinear theory", J Fluid Mech. 215 (1990) 145-160.
Johnson99 R.S. Johnson, “A note on an asymptotic solution of the cylindrical Korteweg–de Vries equation", Wave Motion 30 (1999) 1–16.
Johnson03 R. S. Johnson, “The classical problem of water waves: a reservoir of integrable and near-integrable equations", J. Nonlin. Math. Phys. 10 (2003) 72-92.
Hersh N. Hershkowitz and T. Romesser, “Observations of ion-acoustic cylindrical solitons", Phys. Rev. Lett. 32 (1974) 581–583.
HP22 N. Hristov and D. E. Pelinovsky, “Justification of the KP-II approximation in dynamics of
two-dimensional FPU systems", Z. Angew. Math. Phys. 73 (2022), 213 (26 pages).
KanoNishida T. Kano and T. Nishida,
“A mathematical justification for Korteweg-de Vries equation and Boussinesq equation of water surface waves",
Osaka J. Math. 23 (1986) 389-413.
KPV93 C. E. Kenig, G. Ponce, and L. Vega, “Well-posedness and scattering results for the generalized Korteweg-de Vries equation via the contraction principle", Comm. Pure Appl. Math. 46 (1993) 527–620.
Karima1 K. Khusnutdinova and X. Zhang, “Long ring waves in a stratified fluid over a shear flow", J Fluid Mech. 794 (2016) 17–44.
Kresh R. Krechetnikov, “Transverse instability of concentric water waves", J. Nonlinear Sci. 34 (2024) 66 (29 pages).
Maxon S. Maxon and J. Viecelli, “Cylindrical solitons", Phys. Fluids 17 (1974) 1614–1616.
Miles J.W. Miles, “An axisymmetric Boussinesq wave", J. Fluid Mech. 84 (1978) 181–191.
Mac23 J. A. McGinnis, “Macroscopic wave propagation for 2D lattice with random masses", Stud. Appl. Math. 151 (2023) 752-790.
NakamuraChen80 A. Nakamura and H. H. Chen, “Soliton solutions of the cylindrical KdV equation", J Phys Soc Japan. 50 (1981) 711-718.
PS23 D. E. Pelinovsky and G. Schneider, “KP-II approximation for a scalar FPU system on a 2D square lattice", SIAM J. Appl. Math. 83 (2023) 79-98.
SchnICIAM G. Schneider, “Limits for the Korteweg-de Vries-approximation", Z. Angew. Math. Mech. 76, Suppl. 2, (1996) 341-344.
SU17 G. Schneider and H. Uecker, Nonlinear PDEs. A dynamical systems approach, Grad. Stud. Math 182 (AMS, Providence, RI, 2017).
SchneiderWayne G. Schneider and C. E. Wayne, “The long-wave limit for the water wave problem" I. The case of zero surface tension", Comm. Pure Appl. Math. 53 (2000) 1475-1535.
ST18 B. Schweizer and F. Theil, “Lattice dynamics on large time scales and dispersive effective equations", SIAM J. Appl. Math. 78 (2018) 3060-3086.
Karima2 N. Sidorovas, D. Tseluiko, W. Choi, and K. Khusnutdinova,
“Nonlinear concentric water waves of moderate amplitude", Wave Motion 128 (2024) 103295 (22 pages).
Step81 Yu. A. Stepanyants, “Experimental investigation of cylindrically diverging solitons in an electric lattice", Wave Motion 3 (1981) 335–341.
Karima3 D. Tseluiko, N. S. Alharthi, R. Barros, and
K. R. Khusnutdinova, “Internal ring waves in a three-layer fluid on
a current with a constant vertical shear", Nonlinearity 36 (2023) 3431–3466.
Weidman1 P. D. Weidman and R. Zakhem, “Cylindrical solitary waves",
J Fluid Mech. 191 (1988) 557-573.
Weidman2 P. D. Weidman and M. G. Velarde, “Internal solitary waves", Stud Appl Math. 86 (1992) 167-184.
Step3 Z. Zhang, W. Hu, Q. Guo, and Y. Stepanyants, “Solitons and lumps in the cylindrical Kadomtsev–Petviashvili equation. II. Lumps and their interactions", Chaos 34 (2024) 013132.
|
http://arxiv.org/abs/2409.02696v1 | 20240904133049 | Driven Lorentz model in discrete time | [
"Dan Shafir",
"Alessio Squarcini",
"Stanislav Burov",
"Thomas Franosch"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
[email protected]
Physics Department, Bar-Ilan University, Ramat Gan 5290002, Israel
[email protected]
Institut für Theoretische Physik, Universität Innsbruck, Technikerstraße 21A, A-6020 Innsbruck, Austria
[email protected]
Physics Department, Bar-Ilan University, Ramat Gan 5290002, Israel
[email protected]
Institut für Theoretische Physik, Universität Innsbruck, Technikerstraße 21A, A-6020 Innsbruck, Austria
§ ABSTRACT
We consider a tracer particle performing a random walk on a two-dimensional lattice in the presence of immobile hard obstacles. Starting from equilibrium, a constant force pulling on the particle is switched on, driving the system to a new stationary state. Our study calculates displacement moments in discrete time (number of steps N) for an arbitrarily strong constant driving force, exact to first order in obstacle density.
We find that for fixed driving force F, the approach to the terminal discrete velocity scales as ∼ N^-1exp(- N F^2 / 16) for small F, differing significantly from the ∼ N^-1 prediction of linear response.
Besides a non-analytic dependence on the force and breakdown of Einstein's linear response, our results show that
fluctuations in the direction of the force are enhanced in the presence of obstacles. Notably, the variance grows as
∼ N^3 (superdiffusion) for F →∞ at intermediate steps, reverting to normal diffusion (∼ N) at larger steps, a behavior previously observed in continuous time but demonstrated here in discrete steps for the first time. Unlike the exponential waiting time case, the superdiffusion regime starts immediately at N=1.
The framework presented allows considering any type of waiting-time distribution between steps and transition to continuous time using subordination methods.
Our findings are also validated through computer simulations.
Driven Lorentz model in discrete time
Thomas Franosch
September 9, 2024
=====================================
§ INTRODUCTION
The transport of molecules, colloids and particulate matter in disordered media is ubiquitous in natural, industrial and technological processes. Such systems have been extensively studied using a random-walk-on-a-lattice approach. One of the best-known examples is the continuous-time random walk <cit.> (CTRW), popularized by Montroll and Scher <cit.> to model charge transport in amorphous materials and later used very successfully for the description of transport in porous and biological media <cit.>. The idea behind CTRW is to build upon the classical random walk, which is a succession of random steps, by introducing a waiting time distribution between steps, i.e., the medium is a form of energetic landscape which gives rise to waiting or trapping times. It has been shown that when these waiting times follow a scale-free distribution where the mean waiting time diverges, it leads to intriguing phenomena such as anomalous diffusion, weak ergodicity breaking and aging, seen in single quantum dots <cit.>, transport of biomolecules inside the cell <cit.> and glassy systems <cit.>, to name just a few.
A second prominent model is the lattice Lorentz gas <cit.>, describing obstructed transport in heterogeneous environments such as the crowded world inside biological cells. The model consists of a tracer particle performing random walk on a lattice where a fraction of the sites (density) is occupied by immobile hard obstacles. These obstacles, placed at random lattice locations, are treated with reflecting boundary conditions.
Many works probe the characteristics of the Lorentz model system by studying the response of the particle to an external driving force <cit.>.
The emphasis is usually on a nearest-neighbor hopping process where the average waiting time between jumps is finite (often following an exponential distribution). Already for a finite average waiting time between hops, the presence of obstacles alters the dynamics of the driven system in a non-trivial manner, since repeated collisions of the tracer with obstacles introduce correlations and persistent memory <cit.>.
For example, in Ref. <cit.>, a first-order expansion in the density of immobile obstacles already exhibits a surprising force-dependent exponential decay towards the steady-state drift velocity; in contrast, a study of a very dense system reveals a surprisingly short-lived initial high-velocity regime that abruptly drops to a 'low' terminal value <cit.>.
The combination of obstacles with different types of distributions of waiting times between steps, which we refer to as temporal disorder, has received limited attention so far. One instance of this is a power-law distribution resulting in diverging mean waiting times.
In our work, we find the moments of displacement in the domain of the number of steps, i.e., discrete time, for an arbitrarily strong constant driving force, accurate to first order in the obstacle density. This theoretical approach establishes a framework for future research that allows one to consider any type of temporal disorder and to transition to continuous time by means of the method of subordination <cit.>, i.e., a summation of conditional probabilities over all possible outcomes of the number of steps during the total time t of the process.
Our solution technique employs a scattering formalism borrowed from quantum mechanics <cit.> that has been successfully applied to the analytic study of the driven lattice Lorentz gas in continuous time with an exponential distribution between steps <cit.>.
In this work, the discrete nature of working in the number of steps leads to different mathematical challenges and results compared to the continuous case, since we deal mainly with summation techniques (generating functions) instead of integrals (Laplace transforms). We show that for fixed driving F, the step-dependent approach (in discrete time N) towards the terminal discrete velocity (average position divided by the number of steps) is exponentially fast rather than the power-law decay predicted by linear response.
Furthermore, we provide the first and second moments to first order in the obstacle density, showing that the dependence on the force is complex and non-analytic, which indicates a breakdown of Einstein's linear response even for small forces.
The intuitive picture is that obstacles suppress the fluctuations in the direction of the force. Our results indicate that for forces large enough, increasing disorder leads to an enhancement. We show that there is a window at intermediate values of steps N where the variance grows as ∼ N^α with a true exponent of α=3 in the limit F →∞, before dropping to normal diffusion (α=1) at large steps. Such intriguing behavior has already been documented for the lattice Lorentz gas <cit.>. We provide the first evidence that such anomalous behavior occurs also in the discrete domain of the number of steps. Our findings are validated by high-precision stochastic simulations.
As a consequence of the approach developed in this paper, our framework will find natural application to study the impact of heavy-tailed distribution of waiting times with a diverging average in the presence of obstacles.
Another example that deserves to be considered in the presence of obstacles is the case of a temporal quenched disorder which leads to correlations and memory effects <cit.>. A quenched temporal disorder by itself is known to exhibit surprising effects, including mobility enhancement in confined geometries <cit.> and non self-averaging leading to universal fluctuations of diffusivity <cit.>.
Such descriptions, which combine temporal and obstacle disorder, may provide a more complete picture of transport in a wide variety of systems and can lead to new theoretical advancements in the field.
§ THE MODEL AND SOLUTION TECHNIQUE
In the two-dimensional lattice Lorentz gas, a tracer particle performs a random walk on a square lattice of size L × L (where L ∈ℕ) defined by the collection of sites 𝐫∈Λ={(x, y) ∈ℕ×ℕ : (1 ≤ x , y ≤ L) }.
Here the lattice spacing a is set to unity for convenience such that the lattice sites assume only integer values and the total number of steps performed is N.
We assume that the number of sites L^2 is very large and approaching the limit L →∞, i.e., the thermodynamic limit.
At the boundaries we employ periodic boundary condition.
In every step the tracer performs a nearest-neighbor jump of size 𝐝∈𝒩={±𝐞_x, ±𝐞_y} where 𝐞_x and 𝐞_y are perpendicular unit vectors in the x and y direction respectively.
The lattice consists of free sites accessible to the tracer as well as sites with randomly placed immobile
hard obstacles of density n (fraction of excluded sites). If the tracer attempts to jump onto an obstacle site, it remains at its position from before the jump, but the counter for the discrete time is still increased by one.
At zero steps (discrete time), a constant force acting on the tracer is switched on, and we use the thermal equilibrium state in the absence of driving F = 0 as the initial condition, i.e., the particle is equally likely to be anywhere on the L^2 sites at 'time' N=0.
The external force pulls the tracer only along the x direction of the lattice and the strength of the force is characterized by the dimensionless force F= force× a / k_B T, where k_B is Boltzmann's constant and T is the temperature.
The transition probabilities for a single step W(𝐝) obey detailed balance, W(𝐞_x) / W(-𝐞_x)=e^F in the x direction and W(𝐞_y) / W(-𝐞_y)=1 in the y direction.
Correspondingly, we choose the transition probabilities parallel and perpendicular to the applied force as
W(±𝐞_x)=Γ e^± F / 2,
and
W(±𝐞_y)=Γ,
respectively as depicted in Fig. <ref> where
Γ = 1 / (e^F / 2+e^-F / 2+2),
is the normalization factor.
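For concreteness, the single-step distribution can be tabulated as in the following minimal Python sketch (numpy assumed; the helper name step_probabilities is illustrative, not from any established package):

import numpy as np

def step_probabilities(F):
    """W(+e_x), W(-e_x), W(+e_y), W(-e_y) for dimensionless bias F."""
    Gamma = 1.0 / (np.exp(F / 2) + np.exp(-F / 2) + 2.0)
    return Gamma * np.array([np.exp(F / 2), np.exp(-F / 2), 1.0, 1.0])

W = step_probabilities(1.5)
print(W.sum())                     # -> 1.0, the probabilities are normalized
print(W[0] / W[1], np.exp(1.5))    # detailed balance: W(+e_x)/W(-e_x) = e^F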
We use an approach similar to Ref. <cit.> to find the propagator (defined below) which will enable us to derive the moments of the displacement as a function of the number of steps N.
We first treat the case of no obstacles, i.e., the free system, since it is simpler and serves to establish our formalism.
It is convenient to exploit the analogy of the master equation to a Schrödinger equation.
Hence we consider the Hilbert space of lattice functions Λ→ℂ spanned by the orthonormal basis of position kets |𝐫⟩.
We denote by p_N(𝐫) the probability for the tracer to be at 𝐫 after N jumps and define an abstract ket |p_N ⟩ := ∑_𝐫∈Λ p_N(𝐫) |𝐫⟩. The probabilities can thus be obtained as p_N(𝐫) = ⟨𝐫 | p_N ⟩.
We assume that the force is switched on at step N=0 such that the thermal equilibrium state |p_0 ⟩ evolves towards a
new stationary state | p_st⟩ = lim_N→∞ | p_N ⟩ indicated by the subscript "st".
We define the state of the system at step N=0 to be the equilibrium state of the empty lattice with no driving, ⟨𝐫 | p_0 ⟩ = 1/L^2, meaning the probability to find the tracer at any of the L^2 sites of the lattice is uniform.
We define the single jump matrix for the free system with no obstacles indicated by the subscript 0 as
M̂_0=∑_𝐫∑_𝐝∈𝒩 W(𝐝)|𝐫⟩⟨𝐫-𝐝|,
which entails the rule to propagate probabilities in time by | p_N+1⟩ = M̂_0 | p_N⟩.
Here the matrix element ⟨𝐫 | M̂_0 | 𝐫'⟩ is the transition probability from 𝐫' to 𝐫.
Now we define the free propagator to be the z-transform (sometimes called generating function) <cit.> of the single jump matrix M̂_0
Ĝ_0(z) = ∑_N=0^∞ (M̂_0)^N z^N.
It is convenient to perform a spatial Fourier transform, defined by
|𝐤⟩=1/L∑_𝐫∈Λexp (i𝐤·𝐫)|𝐫⟩ ,
where 𝐤=(k_x, k_y) ∈Λ^*={(2 π n_x / L, 2 π n_y / L): (n_x, n_y) ∈Λ} and 𝐤·𝐫=k_x x+k_y y.
Since the matrix M̂_0 is translationally invariant, it becomes diagonal in the plane-wave basis, i.e., ⟨𝐤 | M̂_0 | 𝐤' ⟩ = λ (𝐤) δ_𝐤, 𝐤' with λ(𝐤) the eigenvalues, sometimes referred to as the characteristic function <cit.>.
The jumps are independent of each other thus the free propagator Ĝ_0 is also diagonal in k-space,
G_0(z, 𝐤) = ⟨𝐤 | Ĝ_0(z) | 𝐤⟩
= ∑_N [λ(𝐤)]^N z^N = [1-z λ(𝐤)]^-1.
The function G_0(z, 𝐤) is also called the moment generating function since (-i)^m∂^m G_0(z, 𝐤) / ∂ k_x^m |_𝐤=0 is the m-th moment of displacement along the x axis (parallel to the direction of the applied force) <cit.>.
This provides the solution for the first and second moments in the obstacle-free system; a similar approach will be taken once the general propagator G(z,𝐤) for the case with obstacles is known.
Thus the solution strategy of finding the moments will be achieved by finding the general propagator G(z,𝐤).
This would yield the z -transform of the moments and later we show how we switch back to N-space.
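For the obstacle-free walk this program can be carried out symbolically. The following sympy sketch (an illustration of ours, with k_y set to zero and a concrete bias F = 1) recovers ⟨ x(N) ⟩ = tanh(F/4) N from the z-expansion of -i ∂ G_0/∂ k_x:

import sympy as sp

z, kx = sp.symbols('z k_x')
F = sp.Rational(1)                   # concrete bias for a numerical check
Gam = 1 / (sp.exp(F / 2) + sp.exp(-F / 2) + 2)
lam = 2 * Gam * (sp.cos(kx) * sp.cosh(F / 2)
                 + sp.I * sp.sin(kx) * sp.sinh(F / 2) + 1)   # lambda(k_x, k_y=0)
G0 = 1 / (1 - z * lam)               # free propagator in (z, k)

x_z = sp.simplify((-sp.I * sp.diff(G0, kx)).subs(kx, 0))     # <x~(z)>
poly = sp.series(x_z, z, 0, 6).removeO()
for N in range(1, 6):                # the coefficient of z^N is <x(N)>
    print(N, sp.N(poly.coeff(z, N)), sp.N(N * sp.tanh(F / 4)))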
We now turn to finding G(z,𝐤) in the case of obstacles.
The general case with obstacles can be obtained by relying on the scattering formalism borrowed from quantum mechanics <cit.>.
The dynamics in the presence of randomly distributed obstacles on the lattice is generated by the modified single jump propagator M̂ = M̂_0 + V̂, where V̂ cancels the transitions from and to the obstacles.
Furthermore, in our calculations we allow the tracer to start at an obstacle site, where it then remains immobile.
The potential v̂_1(𝐬_1) for a single obstacle at site 𝐬_1 cancels the transition probabilities from and into the obstacle position.
Therefore the only non vanishing elements of the matrix v̂_1(𝐬_1) involve the obstacle site and its immediate four neighbors.
In Sec. <ref> we will explicitly show the resulting matrix v̂_1(𝐬_1).
Now for J obstacles (or impurities) the total potential will be V̂ = ∑_i=1^Jv̂_i.
We define the obstacle density as
n = J / L^2.
Strictly speaking this definition does not properly account for obstacles that could occur as neighbors, since then they would affect each other's potentials.
But in the limit of large lattices L→∞ and small densities n such realizations rarely occur and therefore we neglect them in our work.
The propagator in the presence of a fixed obstacle realization V̂ = ∑_i=1^Jv̂_i is related to the free propagator Ĝ_0 via a Lippmann-Schwinger equation <cit.>
Ĝ = Ĝ_0 + Ĝ_0 V̂Ĝ.
By iterating we can express Ĝ as
Ĝ = Ĝ_0 + Ĝ_0 V̂( Ĝ_0 + Ĝ_0 V̂Ĝ)
= Ĝ_0 + Ĝ_0 V̂Ĝ_0 + Ĝ_0 V̂Ĝ_0 V̂Ĝ ,
and this can be further expanded.
By following this line of reasoning, the propagator can be expressed as follows
Ĝ = Ĝ_0 + Ĝ_0 T̂Ĝ_0 ,
with the scattering matrix T̂ defined by
T̂ = V̂+V̂Ĝ_0 V̂+V̂Ĝ_0 V̂Ĝ_0 V̂+⋯
= ∑_i=1^Jv̂_i+∑_j, k=1^Jv̂_j Ĝ_0 v̂_k+∑_l, m, n=1^Jv̂_l Ĝ_0 v̂_m Ĝ_0 v̂_n+⋯.
We use the single-obstacle scattering operator t̂_i which encodes all possible collisions with obstacle v̂_i defined in z-space by
t̂_i=v̂_i+v̂_i Ĝ_0 v̂_i+v̂_i Ĝ_0 v̂_i Ĝ_0 v̂_i+⋯,
and express T̂ through a multiple scattering expansion, which reads
T̂=∑_i=1^Jt̂_i+∑_j, k=1, j ≠ k^Jt̂_j Ĝ_0 t̂_k+∑_l, m, n=1, l ≠ m, m ≠ n^Jt̂_l Ĝ_0 t̂_m Ĝ_0 t̂_n+⋯.
We are not interested in the dynamics for individual realizations of the obstacle configurations but only in disorder-averaged properties. We use [·]_av to indicate an average over all realizations of the disorder. In particular, after disorder-averaging the system is translationally invariant. The convenient starting point for disorder averaging is Eq. (<ref>). In particular,
[T̂]_av is translationally invariant in space and thus diagonal in the plane-wave basis.
Contributions in first order of the density can now be identified with the forward-scattering amplitude t_i(z, 𝐤)=⟨𝐤|t̂_i| 𝐤⟩ of a single obstacle placed in a random location on the lattice, therefore
⟨𝐤|[T̂]_av| 𝐤⟩ = n L^2 t_i(z, 𝐤)+O(n^2).
Eq. (<ref>) states that the disorder averaged scattering operator is proportional, in 𝐤-space, to the forward scattering amplitude of a single obstacle.
The Lippmann-Schwinger equation <cit.> allows us to express t̂_i as
t̂_i=v̂_i+v̂_i Ĝ_0 t̂_i=v̂_i+t̂_i Ĝ_0 v̂_i,
therefore
⟨𝐫|t̂_i| 𝐫^'⟩ = ⟨𝐫|v̂_i(1-Ĝ_0 v̂_i)^-1| 𝐫^'⟩
= ⟨𝐫|(1-v̂_i Ĝ_0)^-1v̂_i| 𝐫^'⟩.
Since v̂_i only has nonvanishing contributions for the obstacle site and its nearest neighbors, the calculation of ⟨𝐫|t̂_i| 𝐫^'⟩ in Eq. (<ref>) reduces to a 5 × 5 matrix inversion problem.
The real-space matrix elements ⟨𝐫|Ĝ_0| 𝐫^'⟩ of the free propagator for sites around the obstacle are expressed in terms of complete elliptic integrals of the first and second kind (we show this in Sec. <ref>).
The forward-scattering amplitude is then obtained by a change of basis with,
L^2 t_i(z, 𝐤)=∑_𝐫, 𝐫^' e^i𝐤·(𝐫-𝐫^')⟨𝐫|t̂_i| 𝐫^'⟩ ,
where the sum effectively extends only over the obstacle site and its nearest neighbors.
The disorder-averaged propagator to first order in the density of obstacles n (defined in Eq. (<ref>)) is then according to Eqs. (<ref>) and (<ref>)
[G]_av(z, 𝐤)=G_0(z, 𝐤)+n L^2 G_0(z, 𝐤)^2 t_i(z, 𝐤)+O(n^2).
We still need to account for one more correction.
Since the starting position is random, the tracer can start at an obstacle site where it will stay immobile forever. This obstacle can be surrounded by additional obstacle sites but to first order in the density this can be ignored by assuming obstacle sites are far enough from each other.
Starting the motion at an obstacle site is not physical; therefore we correct for this behavior by simply multiplying the propagator [G]_av(z, 𝐤) by 1 / (1-n) = 1 + n +O(n^2), where (1 - n) is the fraction of free lattice sites. We denote the corrected propagator by [G]_av^c(z, 𝐤) and keep only terms to first order in n
[G]_av^c(z, 𝐤) = G_0(z, 𝐤)+n [ G_0(z, 𝐤)
+ L^2 G_0(z, 𝐤)^2 t_i(z, 𝐤) ] +O(n^2).
§ THE PROPAGATOR
As shown in Sec. <ref>, finding the propagator in the case of obstacles in 𝐤-space will enable us to calculate the moments of displacement.
According to Eq. (<ref>), the missing piece is the single obstacle forward-scattering amplitude t_i(z, 𝐤)=⟨𝐤|t̂_i| 𝐤⟩.
Since t̂_i is provided in Eq. (<ref>) in real-space, we have to find the ingredients ⟨𝐫 | Ĝ_0 | 𝐫' ⟩ and ⟨𝐫 | v̂_i | 𝐫' ⟩ and then transition to 𝐤-space.
§.§ The single obstacle potential matrix v̂_i
For a reflective obstacle placed at 𝐬_i the new transition probabilities (M̂ + v̂_i) should cancel the transition from and to the obstacle site.
Hence v̂_i is a 5 × 5 matrix with elements ⟨𝐫 | v̂_i | 𝐫' ⟩ that correspond to the transition from 𝐫' to 𝐫 with the basis 𝐫,𝐫^'∈{𝐬_i ±𝐞_y, 𝐬_i ±𝐞_x, 𝐬_i } and we set 𝐬_i for convenience to be zero.
The column index is 𝐫' and the row index is 𝐫.
The reflective potential cancels the transition from and to the obstacle site, hence ⟨𝐬_i|v̂_i| 𝐬_i-𝐝⟩=-W(𝐝) and ⟨𝐬_i-𝐝|v̂_i| 𝐬_i⟩=-W(-𝐝).
If the tracer is at a neighboring site and attempts to jump to 𝐬_i it is scattered back to its original position, therefore ⟨𝐬_i-𝐝|v̂_i| 𝐬_i-𝐝⟩=W(𝐝).
Finally if the tracer is at the obstacle site it is stuck there forever, ⟨𝐬_i|v̂_i| 𝐬_i⟩=1.
We order the rows and columns for the matrix forms of the operators in the real-space basis via the scheme
[ 1 ; 2 3 4; 5 ]
where the obstacle site is located at the origin 0 numbered by 3.
Consequently the order is 𝐞_y, -𝐞_x, 0, 𝐞_x, -𝐞_y.
The resulting potential matrix in our basis is now
v̂_i =Γ([ 1 0 -1 0 0; 0 e^F / 2 -e^-F / 2 0 0; -1 -e^F / 2 1 / Γ -e^-F / 2 -1; 0 0 -e^F / 2 e^-F / 2 0; 0 0 -1 0 1 ]).
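Numerically, this matrix is straightforward to assemble; the helper below (a sketch of ours, numpy assumed) encodes the matrix above in the basis order e_y, -e_x, 0, e_x, -e_y:

import numpy as np

def v_matrix(F):
    """5x5 single-obstacle potential in the basis (e_y, -e_x, 0, e_x, -e_y)."""
    G = 1.0 / (np.exp(F / 2) + np.exp(-F / 2) + 2.0)
    ef, eb = np.exp(F / 2), np.exp(-F / 2)
    return G * np.array([
        [ 1,   0,      -1,     0,   0],
        [ 0,   ef,     -eb,    0,   0],
        [-1,  -ef,   1.0 / G, -eb, -1],
        [ 0,   0,      -ef,    eb,  0],
        [ 0,   0,      -1,     0,   1]])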
§.§ The matrix elements of Ĝ_0(z)
The obstacle-free propagator is given in Eq. (<ref>) in 𝐤-space; to find the matrix elements in 𝐫-space we invert the Fourier transform and obtain
⟨𝐫 | Ĝ_0(z) | 𝐫'⟩ = ∫_-π^π∫_-π^πd 𝐤/(2 π)^2exp(-i 𝐤·(𝐫-𝐫^')) /1-zλ(𝐤).
The entries of the matrix Ĝ_0 are the same as defined in Sec. <ref>, with
𝐫,𝐫^' = 𝐞_y, -𝐞_x, 0, 𝐞_x,-𝐞_y.
The eigenvalues λ(𝐤) of the single jump matrix M̂_0 follow directly from translational invariance in the 𝐤-basis
⟨𝐤 | M̂_0 | 𝐤⟩ =
∑_𝐝∈𝒩exp[i 𝐤·𝐝] W(𝐝)
with the result
λ(𝐤) = 2 Γ[ cos (k_x) cosh (F/2)
+ i sin (k_x) sinh (F/2) + cos (k_y) ],
Γ was defined in Eq. (<ref>) to be the normalization of the transition probabilities.
We show in the Appendix <ref> that the matrix elements of the free propagator can also be expressed in the following form
⟨𝐫 | Ĝ_0(z) | 𝐫'⟩ = e^F (x - x') / 2∫_0^∞ d s e^-s
× I_|x-x'|(2 Γ s z ) I_|y-y'|(2 Γ s z) ,
where I_m(…) is a modified Bessel function of the first kind of integer
order m.
Here, the exponential outside of the integral accounts for the asymmetry introduced by the bias and the remaining terms are the solution of the symmetrical problem but with the diffusion coefficient set to be Γ(F) instead of Γ(F=0).
As such we denote the obstacle-free unbiased part by
g_x-x',y-y' = ∫_0^∞ d s e^-s I_|x-x'|(2 Γ s z ) I_|y-y'|(2 Γ s z),
and we obtain ⟨𝐫 | Ĝ_0(z) | 𝐫'⟩ = e^F (x - x') / 2 g_x-x',y-y'. By using the symmetry of the unbiased part g_x y, with respect to x, y and also the fact that g_x y=g_y x we obtain for the matrix Ĝ_0 (z) the following representation
Ĝ_0(z)=([ g_00 e^F / 2 g_11 g_10 e^-F / 2 g_11 g_20; e^-F / 2 g_11 g_00 e^-F / 2 g_10 e^-F g_20 e^-F / 2 g_11; g_10 e^F / 2 g_10 g_00 e^-F / 2 g_10 g_10; e^F / 2 g_11 e^F g_20 e^F / 2 g_10 g_00 e^F / 2 g_11; g_20 e^F / 2 g_11 g_10 e^-F / 2 g_11 g_00 ]).
We note that by virtue of the underlying dihedral symmetry <cit.>, only four elementary propagators need to be calculated explicitly: g_00, g_10, g_20 and g_11.
As illustrated in Appendix <ref>, these four propagators satisfy a set of linear relations that eventually reduces the number of independent propagators from four to two.
The resulting relations are
g_00 = 1 + 4 Γ z g_10,
g_10 = Γ z ( g_00 + g_20 + 2 g_11 ).
We turn now to evaluating g_00 and g_11.
From Eq. (<ref>),
g_00 = ∫_0^∞ d s e^-s I_0(2 Γ s z)^2 = 2/πK(16 Γ ^2 z^2)
g_11 = ∫_0^∞ d s e^-s I_1(2 Γ s z)^2
= 2/π (4 Γ z)^2{[2-(4 Γ z)^2] K[(4 Γ z)^2]
-2 E[(4 Γ z)^2]}
where
K(k) = ∫_0^π / 2d α/√(1-k sin ^2 α)
and
E(k) = ∫_0^π /2√(1-k sin ^2 α) d α
are the complete elliptic integral of the first and second kind respectively.
From Eq. (<ref>), g_10 and g_20 are easily found.
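For numerical work these four propagators can be evaluated directly with scipy's complete elliptic integrals, which use the same parameter convention as K and E above; the helper below is a sketch of ours and assumes 0 < 4Γz < 1:

import numpy as np
from scipy.special import ellipk, ellipe

def g_elements(z, F):
    """Obstacle-free propagators (g00, g10, g11, g20); requires 0 < 4*Gamma*z < 1."""
    G = 1.0 / (np.exp(F / 2) + np.exp(-F / 2) + 2.0)
    m = (4.0 * G * z) ** 2                      # argument of K and E
    g00 = (2.0 / np.pi) * ellipk(m)
    g11 = (2.0 / (np.pi * m)) * ((2.0 - m) * ellipk(m) - 2.0 * ellipe(m))
    g10 = (g00 - 1.0) / (4.0 * G * z)           # from g00 = 1 + 4 Gamma z g10
    g20 = g10 / (G * z) - g00 - 2.0 * g11       # from g10 = Gamma z (g00 + g20 + 2 g11)
    return g00, g10, g11, g20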
§.§ Scattering matrix t̂_i
To sum up the work done so far: we have found all the ingredients required to calculate the single-obstacle scattering matrix t̂_i in z-space and in the plane-wave basis. By using
Eq.(<ref>) together with Eq.(<ref>), we find
L^2 t_i(z, 𝐤) =L^2⟨𝐤|t̂_i| 𝐤⟩=
∑_𝐫, 𝐫^'exp[i 𝐤·(𝐫-𝐫^')]⟨𝐫|v̂_i(1-Ĝ_0 v̂_i)^-1| 𝐫^'⟩.
where v̂_i is given in Eq.(<ref>) and Ĝ_0 in Eq. (<ref>) is a function of the four obstacle-free propagators, g_00, g_10, g_20, g_11. The problem turns into a 5 × 5 matrix inversion problem which we solve using computer algebra.
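The same inversion is easy to reproduce numerically for real z. The sketch below (an illustration of ours, reusing v_matrix and g_elements from the earlier sketches, so it is not self-contained on its own) assembles Ĝ_0 on the five relevant sites, computes t̂_i = v̂_i (1 - Ĝ_0 v̂_i)^-1, and performs the Fourier sum:

import numpy as np

SITES = np.array([[0, 1], [-1, 0], [0, 0], [1, 0], [0, -1]])  # e_y, -e_x, 0, e_x, -e_y

def G0_matrix(z, F):
    """Real-space free propagator on the obstacle site and its four neighbors."""
    g00, g10, g11, g20 = g_elements(z, F)
    g = {(0, 0): g00, (1, 0): g10, (0, 1): g10, (1, 1): g11,
         (2, 0): g20, (0, 2): g20}
    M = np.empty((5, 5))
    for a, r in enumerate(SITES):
        for b, rp in enumerate(SITES):
            dx, dy = r - rp
            M[a, b] = np.exp(F * dx / 2.0) * g[(abs(dx), abs(dy))]
    return M

def forward_amplitude(z, F, kx, ky=0.0):
    """L^2 t_i(z, k): t = v (1 - G0 v)^(-1), then the Fourier sum over r, r'."""
    v = v_matrix(F)
    t = v @ np.linalg.inv(np.eye(5) - G0_matrix(z, F) @ v)
    phase = np.exp(1j * (SITES @ np.array([kx, ky])))   # e^{i k . r}
    return phase @ t @ phase.conj()

print(abs(forward_amplitude(0.5, 1.0, 0.0)))   # ~ 0: probability conservation forces t_i(z, k=0) = 0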
Finding t_i(z, 𝐤) allows us to obtain the propagator [G]_av^c(z, 𝐤) for the system with obstacles using Eq. (<ref>). As mentioned before, this propagator is also the moment generating function. Thus, we derive the moments of displacement by taking the appropriate derivative of [G]_av^c(z, 𝐤).
The m-th moment of displacement along the x axis, i.e., in parallel to the direction of the applied force (see Fig. <ref>), is now
⟨x̃^m(z) ⟩ := ∑_N=0^∞⟨ x(N)^m ⟩ z^N
= (-i)^m∂^m [G]_av^c(z, 𝐤) / ∂ k_x^m |_𝐤=0.
Note that in the z domain, we need to distinguish x̃^m(z) from x̃(z)^m.
We used the notation ⟨⋯⟩ in order to indicate the average over the randomness (many obstacle realizations and different trajectories).
In what follows, we investigate the behavior of these moments for any number of steps N, and in particular the convergence towards the terminal velocity v_∞ := lim_N→∞⟨ x(N) ⟩ / N and the variance [ ⟨ x(N)^2 ⟩ - ⟨ x(N) ⟩ ^2 ] as a function of N. The moments are derived for arbitrary force F correct to first order in the density of obstacles n and compared to computer simulations to find the range of validity.
§ FIRST MOMENT
Equation (<ref>) allows computing the first moment of the displacement for any z and force F,
⟨x̃ (z) ⟩ = -i ∂/∂ k_x[ (1 + n) G_0 (z, 𝐤)
+ n L^2 G_0 (z, 𝐤)^2 t_i(z, 𝐤)] |_𝐤=0.
We remind the reader that G_0 (z, 𝐤) is the obstacle-free propagator and is provided by Eq. (<ref>) giving G_0 (z, 𝐤=0) = 1/(1-z).
By expanding Eq. (<ref>) we obtain
⟨x̃ (z) ⟩ = (1+n) ( -i ∂ G_0(z,𝐤)/∂ k_x ) |_𝐤=0
+ n [ -2i L^2 G_0(z,𝐤) t_i(z,𝐤) ( ∂ G_0/∂ k_x )
+ G_0(z,𝐤)^2 ( -i L^2 ∂ t_i(z,𝐤)/∂ k_x ) ] |_𝐤=0.
Additionally, by symmetry of the problem and verified using computer algebra, we find t_i(z,𝐤=0)=0.
We now have an expression for ⟨x̃ (z) ⟩,
⟨x̃ (z) ⟩ - (1 + n) ⟨x̃_0(z) ⟩ = n 1/(1-z)^2 ( -i L^2 ∂ t_i(z, 𝐤)/∂ k_x ) |_𝐤=0,
which we evaluate using computer algebra. The resulting expression is extremely long and is provided in the appendix [Eq. (<ref>)].
Here ⟨x̃_0(z) ⟩ is the solution of the bare dynamics which we know after transitioning to N-space must be ⟨ x_0(N) ⟩ = tanh (F/4) N. This is found by averaging over the displacement of a single step (see Sec. <ref>) and multiplying by N.
As the solution in Eq. (<ref>) is in z-space, we look for the solution for any number of steps N by inverting the process.
Inverting the z-transform in Eq. (<ref>), we obtain the average displacement
⟨ x(N) ⟩ = (1/N!) d^N/dz^N ⟨x̃ (z) ⟩ |_z=0.
This provides the result for any N and F.
Successively, we will derive in Sec. <ref> the terminal velocity and we will use this formula for ⟨ x(N) ⟩ in order to test the convergence rate towards the terminal velocity in theory and simulations.
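High-order derivatives at z = 0 are numerically ill-conditioned, so in practice it can be preferable to read off the Taylor coefficients with an FFT over a circle |z| = r < 1. The routine below is an illustration of ours, demonstrated on the free-walk generating function v_0 z/(1-z)^2; applying it to the full ⟨x̃(z)⟩ only requires evaluating the scattering amplitude at complex z (e.g., with mpmath's complex-valued elliptic integrals):

import numpy as np

def z_coefficients(f, N_max, radius=0.5, M=4096):
    """c_N of f(z) = sum_N c_N z^N via c_N = (1/M) sum_j f(r e^{2 pi i j/M}) r^{-N} e^{-2 pi i j N/M}."""
    zj = radius * np.exp(2j * np.pi * np.arange(M) / M)
    c = np.fft.fft(f(zj)) / M                  # the FFT performs the contour sum
    return (c[: N_max + 1] / radius ** np.arange(N_max + 1)).real

F = 1.0
v0 = np.tanh(F / 4)
xN = z_coefficients(lambda z: v0 * z / (1 - z) ** 2, 8)
print(xN / v0)   # -> 0, 1, 2, ..., i.e. <x_0(N)> = v0 * N for the free walk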
§ TERMINAL VELOCITY
To determine the asymptotic behavior of the discrete velocity, v_∞ := lim_N→∞( ⟨ x(N) ⟩ / N ), we make use of the Tauberian theorem <cit.> (see the appendix for the exact method).
The Tauberian theorem allows transitioning to N-space and finding ⟨ x(N) ⟩ for large N from the behavior of its generating function ⟨x̃ (z) ⟩ =∑_N=0^∞⟨ x(N) ⟩ z^N as z→ 1.
We use in Eq. (<ref>) the asymptotic relation (-i L^2 ∂ t_i(z, 𝐤)/∂ k_x)|_𝐤=0 = Δ v_∞+O(1-z), obtained in Appendix <ref>, where Δ v_∞ is a function of F only.
Consequently the 1/(1-z)^2 term in Eq. (<ref>) transforms to a linear growth in N and we obtain,
⟨ x(N) ⟩∼ (1+n) v_0 N + n Δ v_∞ N,
as N→∞. Here v_0=tanh (F/4) denotes the velocity of the obstacle-free lattice. Therefore,
v_∞ = lim_N →∞ (⟨ x(N) ⟩) / N
= v_0 + n (v_0 +Δ v_∞).
The terminal behavior is now found for arbitrarily strong driving F and small densities n; while the complete expression is very long [see Eq. (<ref>)], we can spell out the behavior for small forces. Relying on computer algebra, we find
v_∞ = D_x F + n/16(π/4 -1) F^3 log (F) + O(F^3),
which immediately highlights the nonanalytic behavior in F.
Here D_x = [ 1 - n (π - 1) ] / 4 corresponds to the diffusion coefficient in the x-direction in the absence of force as expected from linear response.
To make a connection with the continuous-time case, N needs to be treated as a random variable, where the duration τ of a step is taken from a distribution ψ(τ). This is essentially the renewal theorem; the m-th moment in the time domain would now be given by ⟨ x(t)^m ⟩ = ∑_N ⟨ x(N)^m ⟩ Q_N(t),
where Q_N(t) is the probability of having performed exactly N steps up to time t. For large times, this can be simplified, and the number of steps is replaced by the total elapsed time t divided by the expected duration τ of a jump (as long as it is finite). When the time between steps is exponentially distributed with a mean of one, we immediately recover the expression already found in a previous work <cit.> by simply replacing N with t in Eq. (<ref>), thereby validating our result.
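The subordination step itself is elementary to implement: for unit-mean exponential waiting times Q_N(t) is a Poisson distribution with mean t. The sketch below (scipy assumed; the free-walk first moment serves as a stand-in for the full obstacle-corrected one) illustrates the summation:

import numpy as np
from scipy.stats import poisson

def subordinate(moment_N, t, N_max=4000):
    """<x(t)^m> = sum_N <x(N)^m> Q_N(t); here Q_N(t) is Poisson(t) (unit-mean
    exponential waiting times). N_max must comfortably exceed t."""
    N = np.arange(N_max)
    return np.sum(moment_N(N) * poisson.pmf(N, mu=t))

v0 = np.tanh(1.0 / 4)
print(subordinate(lambda N: v0 * N, t=100.0), 100.0 * v0)   # agree: <x(t)> = v0 t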
Our result in Eq. (<ref>) holds as long as obstacles are positioned far enough from each other such that sequences of collisions involving obstacles that have been encountered previously can be ignored.
Then, in our derivations, the assumption that [T]_av(𝐤), the disorder-averaged scattering operator in 𝐤-space, is the sum of nL^2 identical single-obstacle forward-scattering amplitudes is correct [Eq. (<ref>)].
To investigate the range of validity of the low-density expansion, we plot the terminal response as a function of F in Fig. <ref>. For each curve, there is a peak indicating a transition for stronger driving where the particle is more often stuck on an obstacle instead of going around it. Thus, the lattice Lorentz model displays negative differential mobility, a phenomenon that was seen in Ref. <cit.> as well.
While the theoretical lines in Fig. <ref> agree nicely with simulations, the suppression of the stationary velocity is underestimated at strong forces indicating that contributions of higher order in the density become relevant. In other words, the range of validity of our approach for the small obstacle densities is force-dependent.
§ APPROACH TOWARDS THE TERMINAL VELOCITY
Next we investigate in more detail the instantaneous velocity defined by
v(N) := ⟨ x(N) - x(N-1) ⟩ for N=1,…
where we set v(0) = 0. In the z-domain, it is directly related to the displacement via
ṽ(z) = (1-z) ⟨x̃(z) ⟩
The behavior of the instantaneous velocity at large step sizes is reflected in the poles of the z-transform. We have already identified a pole of order 2 in ⟨x̃(z) ⟩ which translates to a simple pole in ṽ(z) reflecting that the velocity approaches a constant v(N) → v_∞ as N→∞. Further singular behavior emerges from the single-obstacle t-matrix t̂_i (z ,𝐤) [Eq. (<ref>)]. Since the inversion of the 5× 5 matrix does not give rise to new singular behavior, all non-analytic properties are inherited by the matrix elements, i.e., by the
⟨𝐫 | Ĝ_0(z) | 𝐫'⟩ = ∫_-π^π∫_-π^πd 𝐤/(2 π)^2exp(-i 𝐤·(𝐫-𝐫^')) /1-zλ(𝐤).
For the case of F=0, there is a singularity at z → 1 originating from λ(𝐤 = 0) = 1.
In Sec. <ref> we have established a linear dependence of these integrals on the complete elliptic integrals of the first and second kinds, K and E, respectively.
The leading term arises from the expansion of K around z=1,
⟨0 | Ĝ_0(z) | 0⟩|_F=0 = 2/π K(z^2) ∼ -1/π log(1-z),
while E is subdominant (see Ref. <cit.>). Upon Taylor expansion of the logarithm
- log (1-z)= ∑_N=1^∞z^N/N,
we read off that logarithm corresponds to an algebraic decay ∝ 1/N for N→∞.
This corresponds to the decay rate expected from linear response when F → 0 as seen in Fig. <ref>.
For the case of F>0, the integral in Eq. (<ref>) is no longer divergent at z=1 but the singularity is shifted.
Using the same argument as in Eq. (<ref>), the leading term is found to be
⟨0 | Ĝ_0(z) | 0⟩ = 2/πK(16 Γ ^2 z^2) ∼ -1/πlog( 1-4 Γ z ),
i.e. we anticipate a singularity at z=1/4 Γ > 1. By the scaling property, the z-transform of (4Γ)^-N [v(N)-v_∞] is ṽ(z/4Γ)- v_∞/(1- z / 4Γ ) ∝ -log(1-z) for z→ 1 , and we infer that the
large-step behavior of the velocity in this case is
v(N) - v_∞∝exp[ N log (4 Γ) ] /N.
Therefore the power-law tail ∼1/N no longer determines the asymptotic large-N behavior for any finite bias F; rather, the decay is exponentially fast, which for small F≪ 1 can be elaborated further to ∼exp (-N F^2 / 16) / N. A comparison of this analytical prediction with simulation is shown in Fig. <ref>.
This result is consistent with the continuous case with exponential waiting times <cit.>, where it was shown that the regular algebraic decay ∼ t^-1 is modified to ∼ t^-1 exp (-F^2 t / 16) for F → 0.
The onset time of the exponential behavior can be found by comparing the decay rate with that of linear response, i.e., when exp[N log( 4 Γ)] / N ≪ 1/N. We find N ≫ |1 / log(4 Γ)| = N_f, where N_f is the onset step-time. For small F, N_f ∼ 16/F^2; thus, the smaller the force, the more significantly the onset is delayed, as can be seen in Fig. <ref>.
We remark that if we instead examine the average velocity, ⟨ x(N) ⟩ / N, the exponential decay will become subdominant and we will observe the asymptotic behavior ∼ 1/N for N ≫ 1.
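For reference, the crossover scale itself is easy to tabulate; the following lines (a numerical check of ours) compare the exact onset N_f = 1/|log(4Γ)| with the small-F estimate 16/F^2:

import numpy as np

def onset_steps(F):
    Gamma = 1.0 / (np.exp(F / 2) + np.exp(-F / 2) + 2.0)
    return 1.0 / abs(np.log(4.0 * Gamma))    # N_f = |1 / log(4 Gamma)|

for F in (0.1, 0.5, 1.0, 2.0):
    print(F, onset_steps(F), 16.0 / F**2)    # exact vs. small-F estimate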
§ SECOND MOMENT
By employing the same approach as for the first moment, the second moment of displacement in the x direction, sometimes called the mean-squared displacement (MSD), is found by taking the second derivative of the propagator in Eq. (<ref>). From Eq. (<ref>) we obtain,
⟨x̃^2(z) ⟩ = (-i)^2 ∂^2/∂ k_x^2[ (1+n) G_0 (z, 𝐤)
+ n L^2 G_0 (z, 𝐤)^2 t_i(z, 𝐤)] |_𝐤=0.
The first term here is the obstacle-free system which we denote by ⟨x̃_0^2(z) ⟩.
Using G_0 (z, 𝐤=0) = (1-z)^-1 and that t_i(z, 𝐤=0)=0, after some simplification we are left with
⟨x̃^2(z) ⟩ = (1 + n) ⟨x̃_0^2(z) ⟩ - n [ 4 ( i L^2 ∂ t_i (z, 𝐤)/∂ k_x )|_𝐤=0 tanh(F/4) z/(1-z)^3
+ 1/(1-z)^2 ( L^2 ∂^2 t_i (z, 𝐤)/∂ k_x^2 )|_𝐤=0 ].
Making use of the inverse z-transform as in Eq. (<ref>), we find the exact solution for ⟨ x(N)^2 ⟩ for any N and F.
This result is used later to find the variance and how it behaves for small number of steps N in Sec. <ref>. We continue to find the asymptotic behavior of the second moment in the limit of N →∞.
§ SECOND MOMENT IN THE REGIME OF LARGE STEP NUMBERS
We use the same methodology as in Sec. <ref> to determine the asymptotic behavior of the MSD, ⟨ x^2 (N) ⟩, in the regime of N≫ 1.
We therefore use in Eq. (<ref>) the (1-z) expansions of the derivatives of the scattering matrix t_i (z,𝐤) up to order O(1-z).
Relying on the results derived in Appendix <ref>, we obtain (-i L^2 ∂ t_i(z, 𝐤) / ∂ k_x )|_𝐤=0 = Δ v_∞+O(1-z) and
( L^2 ∂^2 t_i(z,𝐤) / ∂ k_x^2 ) |_𝐤=0 = -2 (v_0)^2 z/(1-z) + c_1(F) + O(1-z). Here, c_1(F) is a very long expression that depends only on F; its explicit form can be inferred from Eq. (<ref>).
Consequently, using again the Tauberian theorem, the 1/(1-z)^2 term in Eq. (<ref>) transforms to N and the z/(1-z)^3 term transforms to N^2 / 2.
Collecting results, we find asymptotically
⟨ x(N)^2 ⟩∼ (1+n) ⟨ x_0(N)^2 ⟩
+n [ 2 v_0 Δ v_∞ N^2 + v_0^2 N^2 + c_1(F) N ],
for N→∞. Here the ⟨ x_0 (N)^2 ⟩ term is determined by the bare diffusion dynamics, ⟨ x_0(N)^2 ⟩ = ( 1-tanh ^2 (F/4) ) N/2 + tanh ^2 (F/4) N^2.
This result is used later to determine the behavior of the variance in the regime N≫ 1 in Sec. <ref>. For small forces, the expression in Eq. (<ref>) can be simplified further, using Δ v_∞ = -π F / 4 +(π / 4 - 1) F^3 log (F) / 16 + O(F^3) and approximating c_1=-π/2 - (0.333005 +0.106403 log (F)) F^2 + O(F^4) (c_1 is given in full detail in the Appendix [Eq. (<ref>)]), we obtain
⟨ x(N)^2 ⟩ - (1+n) ⟨ x_0(N)^2 ⟩≈
-n {( 2 π - 1 )/16 F^2 N^2 + [ π/2 + (1/3 +1/10log (F) ) F^2 ] N}.
The force-free term [ 1-n(π -1) ] N / 2 in the second moment of the displacement indicates that when no force is applied (F=0), the obstacles still obstruct the movement of the tracer.
This can be seen in the plot [see Fig. <ref>] of the second moment of displacement in the x direction. Higher-order corrections show that there is a complex non-linear dependence on the force, specifically the logarithmic dependence on F that becomes even more relevant in the variance since the N^2 dependent terms drop, as we show in the next section.
§ THE VARIANCE
In order to determine the variance of the displacement [x(N)]:=⟨ x(N)^2 ⟩ - ⟨ x(N) ⟩^2, we remark that we keep contributions only to first order in n when taking the square of ⟨x̃ (z) ⟩ in Eq. (<ref>). Then, the exact solution is determined by taking the inverse z-transform of ⟨x̃^2 (z) ⟩ - ⟨x̃ (z) ⟩^2, as was done in Eq. (<ref>) for the first moment.
In Fig. <ref> and Fig. <ref> we plot the variance in the direction of the force against numerical simulations for small number of steps N.
For this purpose we use the variance of the bare dynamics [x_0(N)] = ( 1- tanh^2 (F/4) ) N / 2.
We see in Fig. <ref> that even for relatively small forces (in the range 1.0 ≲ F ≲ 1.7), the behavior is non-monotonic in N as the system is driven strongly out of equilibrium, an effect that does not appear at all for F=0 (compare Fig. <ref>).
While non-monotonic behavior and breaking of linear response are expected for large forces, we see an additional effect. The system transitions from a negative contribution to the variance in the obstacle density to a positive one when the force is increased sufficiently, F ≳ 1.5 [Fig. <ref>]. Such behavior has been seen in previous works for the lattice Lorentz model <cit.>, and we investigate it further in Sec. <ref>.
§ SUPERDIFFUSION AT INTERMEDIATE STEPS
The intuitive picture is that obstacles suppress the fluctuations in the direction of the force. Our results indicate that for forces large enough, increasing disorder leads to an enhancement.
The step-dependent behavior of the variance can be quantified in more detail by considering the step-dependent diffusion coefficient defined by a discrete step derivative [Fig.<ref>(c)],
D(N) := 1/2 [ [x(N)] - [x(N-1)] ],
and the local exponent α = α (N) [Fig. <ref>(b)] defined by a discrete logarithmic step derivative,
α(N):= ( ln[ [x(N)] ] - ln[ [x(N-1)] ] ) / ( ln[ N ] - ln[ N - 1 ] ).
Thus, ordinary diffusion corresponds to α=1, whereas local subdiffusive and superdiffusive behavior is indicated by α<1 and α>1, respectively. Transport at strong driving is dominated by a superdiffusive regime which grows with increasing strength of the driving [Fig. <ref>(b)], while the velocity keeps dropping [Fig. <ref>(a)].
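Given a variance trace [x(N)] from theory or simulation, both diagnostics reduce to short numerical helpers; the sketch below (ours, numpy assumed) takes an array var with var[N] the variance after N steps and var[0] = 0:

import numpy as np

def diffusion_coefficient(var):
    """D(N) = (var[N] - var[N-1]) / 2, for N = 1, 2, ..."""
    return 0.5 * np.diff(var)

def local_exponent(var):
    """alpha(N) as a discrete logarithmic derivative, for N = 2, 3, ..."""
    N = np.arange(2, len(var))
    return np.log(var[N] / var[N - 1]) / np.log(N / (N - 1))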
The superdiffusion for stronger forces [Fig. <ref>(b)] can be rationalized as follows. While the average velocity keeps dropping with increasing force [Fig. <ref>(a)], there is a disparity between the trajectories of the tracer that have not yet hit an obstacle and follow a free path, and those that are stuck on an obstacle. Initially, this disparity increases with the number of steps, as indicated by a discrete time window of superdiffusion with a growing exponent α > 1 in Fig. <ref>(b). Once we reach discrete time scales much larger than the mean free-path time, regular diffusion is recovered, but with a significantly larger diffusion coefficient compared to the bare dynamics. This happens as long as the tracer can eventually go around the obstacle.
For increasing forces, the number of steps it takes to go around an obstacle is increased, which in turn increases this disparity since the trajectory with the free path can cover a larger distance during this time.
To investigate this effect, we develop an asymptotic model similar to Ref. <cit.>. At large forces F≫ 1, the tracer's trajectory can be approximated as performing jumps only in the direction of the force in one dimension until it hits an obstacle. Once this happens, the tracer is stuck and stays there. Therefore the asymptotic model is the following: at each step the tracer either hits an obstacle with probability p = n or moves forward along the direction of the force with probability 1-p. The probability of displacement x = j after N jumps now reads
ℙ[x=j | N] = q^j p+q^j(1-p) δ_j N
= p+δ_j N[1-(N+1) p]+O(p^2),
for j=0, …, N, with q=1-p. Here we have approximated to first order in the obstacle density p = n by using (1-p)^N = 1-N p + O(p^2). The first and second moments can now be calculated,
⟨ x(N)⟩ = ∑_j=0^N j ℙ[x=j|N]
= 1/2 n N(N+1)+ N[1-(N+1) n]+O(n^2),
⟨ x(N)^2⟩ = ∑_j=0^N j^2 ℙ(x=j|N)
= 1/6 n N(N+1)(2 N+1) +N^2[1-(N+1) n]
+O(n^2).
Hence the variance is
[x(N)] = 1/2 (1+n) N [1-tanh ^2(F/4)]
+ nN/6 + nN^2/2 + nN^3/3 + O(n^2),
corrected by the empty-lattice contribution (the first term), which drops out for F≫ 1 since tanh (F/4) ≈ 1; in the simulations [Fig. <ref>], however, we still plot the behavior for finite values of F.
The diffusion coefficient from Eq. (<ref>) is now
D(N) = 1/4(1+n)[ 1- tanh^2 ( F/4 ) ] + 1/2 n (N+1)^2.
Equation (<ref>) suggests that the true exponent of superdiffusion is α=3, as is corroborated by simulations in Fig. <ref>. The reason for the decay of α to unity at large steps N is that, for any finite force F, the tracer can eventually go around the obstacle if we wait long enough. Therefore, regular diffusion α = 1 is eventually regained for N ≫ 1, but with a considerably higher diffusion coefficient [Fig. <ref>(c)] compared to the bare one defined by D_0=[1 - tanh ^2 (F/4)] / 4.
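This asymptotic picture is straightforward to test stochastically: the number of free steps before absorption is geometrically distributed and is capped at N. The toy simulation below (a sketch of ours) reproduces the O(n) prediction n(N/6 + N^2/2 + N^3/3) up to higher-order corrections in the density:

import numpy as np

rng = np.random.default_rng(1)

def trapping_variance(N, n, samples=200_000):
    """F -> infinity toy model: directed hops, stuck at the first obstacle."""
    T = rng.geometric(n, size=samples) - 1   # free steps before absorption
    return np.minimum(T, N).var()

n = 0.001
for N in (10, 30, 100):
    theory = n * (N / 6 + N**2 / 2 + N**3 / 3)   # O(n) prediction above
    print(N, trapping_variance(N, n), theory)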
§ THE DIFFUSION COEFFICIENT IN THE REGIME OF LARGE STEPS
We now determine the diffusion coefficient for large step numbers, N ≫ 1, defined using the variance by D_∞ = lim_N→∞[x(N)]/2N.
We rewrite the MSD in this regime, which was found in Eq. (<ref>), as
⟨ x(N)^2 ⟩ = (1+n) (2 D_0 N + v_0^2 N^2)
+n [ 2 v_0 Δ v_∞ N^2 + v_0^2 N^2 + c_1(F) N ],
where D_0 = [ 1 - tanh ^2 (F/4)] / 4 is the diffusion coefficient of the bare dynamics in the direction of F.
Expanding now ⟨ x(N) ⟩^2 from Eq. (<ref>) to first order in the obstacle density n we find,
⟨ x(N) ⟩^2 = [v_0 + n (v_0 + Δ v_∞)]^2 N^2
= [v_0^2 + 2 n v_0(v_0 + Δ v_∞)] N^2 +O(n^2).
The diffusion coefficient is now obtained,
D_∞ = [ ⟨ x(N)^2 ⟩ - ⟨ x(N) ⟩^2 ] / 2N
= (1+n) D_0 + n c_1(F) /2 + O(n^2).
We plot the case of F=0 in Fig. <ref> where c_1(F=0)= -π / 2. The full function c_1(F) is given in the appendix and in Fig. <ref> we plot D_∞ in the regime of large steps. Higher-order corrections show that there is a complex non-linear dependence on the force, specifically the logarithmic dependence on F via c_1 ≈ -π /2 - (0.33 + 0.11 ln (F)) F^2 + O(F^4), as was also mentioned previously in Sec. <ref>.
§ SUMMARY AND CONCLUSIONS
In this work we have considered the driven Lorentz model of a tracer particle hopping on a two-dimensional lattice where a fraction of the sites is inaccessible and acts as hard obstacles.
We derive a method to find the exact solution for the first and second moment of the displacement in z-space [Eq. (<ref>) and Eq. (<ref>)] to first order in the obstacle density n for any value of z (the discrete Laplace transform) and force F.
Our result is correct to first order in the obstacle density, where interactions with one obstacle at a time dominate the process.
That is, obstacles are assumed to be far enough from each other that trapping of the tracer by clusters of obstacles can be ignored.
Considering the mean displacement as a function of steps, ⟨ x(N) ⟩, the first-order expansion in the force obeys linear response in terms of the obstacle density [Eq. (<ref>)], consistent with previous works <cit.>.
The next correction term to the terminal velocity already includes a logarithm, v_∞ = D_x F + (n/16) (π/4 -1) F^3 ln (F) + O(F^3).
Simulations [Fig. <ref>-<ref>] show a good match with theory even for low values of the number of steps.
We note that the point where the theory breaks down depends both on the obstacle density and on the magnitude of the force, as can be seen in the simulation results.
For larger values of F, the range of validity of our theory shrinks to smaller and smaller obstacle densities. Conversely, the role of clusters becomes more and more relevant the larger the force is.
By switching to continuous time with an exponential waiting time PDF we are able to recover all previous results from <cit.>.
This is expected since for finite average waiting times τ, the tracer on average performs a jump every τ (arbitrary units), and the connection between N and t is N = t / τ (for large values of N).
It was predicted that for exponential waiting times the obstacles introduce anti-correlations into the system that break the power-law decay ∼ t^-1 towards the terminal velocity expected from linear response <cit.>.
In particular the decay has an exponential dependence on the force and acts as ∼ t^-1exp (-F^2 t / 16) for F → 0 <cit.>.
This is consistent with the domain of discrete time, where each step duration is distributed according to a delta function; we find ∼ N^-1 exp (-F^2 N / 16) with an onset step-time of N_f ∼ 16/F^2 for small F (see Sec. <ref>).
Investigating the behavior of the variance at an intermediate number of steps shows non-monotonic behavior in N even at small values of force F [Fig <ref>]. For larger forces there is a window of superdiffusion at intermediate values of steps N where the variance behaves as [x(N)] ∝ N^3 [Eq.(<ref>)] until it drops to regular diffusion with a linear dependence on N [Fig. <ref>].
In the lattice Lorentz model, the superdiffusion can be traced back to the rapid increase of the variance of the free path lengths as the tracer performs a purely directed motion along the field until it hits an obstacle.
Although increasing the applied force on the tracer can reduce its travel time between different obstacles, it will increase the time it spends trapped by the obstacles.
Such non-monotonic behavior and superdiffusivity has been found in previous works for the continuous case of exponentially distributed times between steps <cit.>.
The superdiffusion regime starts immediately for discrete time [Fig. <ref>(b)],
while in the exponential waiting time case there is still a subdiffusive regime at a small number of steps, of the order of the density n <cit.>.
Our results show that a superdiffusive regime is a generic feature occurring naturally in the lattice Lorentz gas regardless of the time statistics between steps.
The developed methodology can be used in future research to consider systems where different types of disorder exist, such as non-static obstacles or the quenched temporal disorder that is present in the quenched trap model <cit.>.
This work was supported by the Israel Science Foundation Grant No. 2796/20. AS acknowledges FWF Der Wissenschaftsfonds for funding through the Lise-Meitner Fellowship (Grant DOI 10.55776/M3300). TF gratefully acknowledges support by the Austrian Science Fund (FWF) (Grant DOI 10.55776/M3300).
§ THE OBSTACLE-FREE PROPAGATOR Ĝ_0(Z)
§.§ The matrix elements
We explicitly evaluate the elements of the matrix Ĝ_0(z) using a representation that makes use of I_m(…), a modified Bessel function of the first kind of integer order m.
We use an integral representation of a fraction, ∫_0^∞ ds e^-s a = 1/a for a>0, to convert Eq. (<ref>) into a more familiar form
⟨𝐫 | Ĝ_0(z) | 𝐫'⟩ = ∫_-π^πdk_x/2 π e^-i k_x (x-x')
×∫_-π^πdk_y/2 π e^-i k_y (y-y')∫_0^∞ ds e^-s[1-z λ (𝐤)] .
Substituting Eq. (<ref>) into Eq. (<ref>) we obtain
⟨𝐫 | Ĝ_0(z) | 𝐫'⟩ =
∫_0^∞ ds e^-s∫_-π^πdk_x/2 πexp[2sz Γ( cos (k_x) cosh (F/2)
+ i sin (k_x) sinh (F/2) ) - i k_x ( x - x' ) ]
×∫_-π^πdk_y/2 πexp [2s z Γcos (k_y) -i k_y ( y - y' ) ] .
We then use the relation (<cit.>, Eq. (64))
∫_-π^πd k/2 πexp [-i k m] exp [αcos (k)+ i βsin (k)]=
[α+β/√(α^2-β^2)]^m I_m(√(α^2-β^2)).
Notice that the positions x and y were defined to be integers (between 1 and L) in Sec. <ref>.
Therefore the matrix elements of the free propagator are
⟨𝐫 | Ĝ_0(z) | 𝐫'⟩ = e^F (x - x') / 2∫_0^∞ d s e^-s
× I_|x-x'|(2 Γ s z ) I_|y-y'|(2 Γ s z) ,
where the exponential outside of the integral accounts for the asymmetry introduced by the bias and the remaining terms are the solution of the symmetrical problem but with the normalization factor set to be Γ(F) instead of Γ(F=0).
§.§ Force-free propagators relations
We show that the four free symmetric propagators, g_00, g_10, g_11, g_20, appearing in the matrix elements of Ĝ_0(z) [see Eq. <ref>],
⟨𝐫 | Ĝ_0(z) | 𝐫'⟩ = e^F (x - x') / 2 g_x-x',y-y',
are related to one another.
Thus knowledge of two of them is sufficient to determine the other two as well.
The symmetric obstacle-free case g_xy in Eq. (<ref>) can, from its definition, also be represented in the form <cit.>
g_xy = ∑_N=0^∞ z^N P_N(𝐫) ,
where 𝐫=(x,y) and, just for convenience, we switch to the notation P_N(𝐫) = ⟨𝐫 | p_N ⟩ for the probability to be at position 𝐫 after N steps, given that we started at zero.
The process is translationally invariant and we are free to choose our starting condition to be at the origin, P_0(𝐫) = δ_𝐫,0 (here δ denotes the Kronecker symbol). The system obeys the Markovian evolution
P_N+1(𝐫) = ∑_𝐝∈𝒩 W(𝐝) P_N (𝐫 - 𝐝) ,
multiplying by z^N and summing over N from 0 to ∞,
1/z∑_N=1^∞ z^N P_N(𝐫) = ∑_𝐝∈𝒩 W(𝐝) g_𝐫-𝐝 ,
now using P_0(𝐫) = δ _𝐫, 0 we obtain
g_𝐫 = δ _𝐫, 0 + z ∑_𝐝∈𝒩 W(𝐝) g_𝐫-𝐝.
Using 𝐫 = 0 and 𝐫 = (1,0) we find the relations
g_00 = 1 + 4 Γ z g_10
g_10 = Γ z ( g_00 + g_20 + 2 g_11 ).
§ TAUBERIAN THEOREM
The Tauberian theorem <cit.> allows transitioning to N-space and finding ⟨ x(N) ⟩ for large N from the behavior of its generating function ⟨x̃ (z) ⟩ =∑_N=0^∞⟨ x(N) ⟩ z^N as z→ 1.
To be more precise, given that near z → 1 the function behaves as
⟨x̃ (z) ⟩∼1/(1-z)^γ Y(1/1-z),
where γ is some positive number and Y(u) is a slowly varying function of u, i.e.,
lim _u →∞Y(C u)/Y(u)=1,
for any positive constant C.
If the sequence {⟨ x(N) ⟩} is monotonic (at least starting from some value of N) then
⟨ x(N) ⟩≅1/Γ(γ) N^γ-1 Y(N),
where Γ is the Gamma-function.
For example the 1/(1-z)^2 term in Eq. (<ref>) is transformed to N using γ =2 and Y(u)=1 and we obtain
⟨ x(N) ⟩ = (1+n)⟨ x_0(N) ⟩ - n π F/4 N
= (1 - n (π -1))F/4 N
= D_x F N
to first order in the force F, in the regime N ≫ 1, and correct to first order in the obstacle density n.
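As a minimal symbolic check of this example (sympy assumed):

import sympy as sp

z = sp.symbols('z')
poly = sp.series(1 / (1 - z)**2, z, 0, 8).removeO()
print([poly.coeff(z, N) for N in range(8)])   # -> [1, 2, ..., 8]: coefficient N+1 ~ N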
§ ASYMPTOTIC BEHAVIOR FOR Z→ 1
In this section we provide the exact asymptotics (z → 1) of the scattering matrix for any force F, used to determine the stationary velocity v_∞ and diffusion coefficient D_∞ appearing in the figures.
The first derivative of t_i(z, 𝐤) can be computed with Wolfram Mathematica asymptotically for z → 1 with the result
(-i L^2 ∂ t_i(z, 𝐤)/∂ k_x) |_𝐤=0 =
-4 (π -2 E) sinh(F/2) [(π -2 E) cosh ^4(F/4)+2 Ksinh ^2(F/4)]/[8 Ecosh ^4(F/4)-K(4 cosh(F/2)+cosh (F)-5)] (-2 Ecosh ^2(F/4)+2 Ksinh ^2(F/4)+π)-2 tanh(F/4) +O(1-z)
= Δ v_∞ + O(1-z),
where we abbreviated K = K(sech^4 (F / 4)) and E = E(sech^4 (F/4)), and the derivative of t_i(z,𝐤) was evaluated up to terms of order O(1-z).
Similarly, the second derivative of t_i(z, 𝐤) can be expanded using computer algebra for z→ 1 with the result
L^2 ∂^2 t_i (z, 𝐤)/∂ k_x^2 |_𝐤=0 =
sech^2(F/4) [U_1 + πK U_2 +2 E U_3 +π ^3 (8 cosh(F/2)+5 cosh (F)-5) cosh ^4(F/4)]/2 π[8 cosh ^4(F/4) E-(4 cosh(F/2)+cosh (F)-5) K] (2 sinh ^2(F/4) K-2 cosh ^2(F/4) E+π)
-2 tanh ^2(F/4) z/1-z + O(1-z)
= c_1(F) -2 (v_0)^2 z/1-z +O(1-z).
where
U_1 = 128 sinh ^2(F/4) cosh ^6(F/4) KE^2,
U_2 = π(7 cosh(F/2)+cosh(3 F/2)-6 cosh (F)-2)-4 sinh ^4(F/4) (16 cosh(F/2)+3 cosh (F)-3) K ,
U_3 = 6 πsinh ^4(F/2) K-16 sinh ^4(F/4) (cosh(F/2)+3) cosh ^2(F/4) K^2+π ^2 (1-9 cosh (F)) cosh ^4(F/4).
For small 0 < F≪ 1 this can be further simplified to fourth order in the force to
L^2 ∂^2 t_i (z, 𝐤)/∂ k_x^2 |_𝐤=0 = c_1(F) - (F^2/8) z/(1-z) + O(1-z).
where c_1(F) in the regime of small F takes the form,
c_1 = F^2/64 π (π -2){ 2 [π ^2 (π -6)+16] log (F)-112 log (2)+π{16+π [22+42 log (2)-π (13+log (128))]}}
-π/2 + O(F^4).
§ SIMULATIONS
In this section we provide details on the method of simulations.
Fluctuations in the bare dynamics (without obstacles) are much stronger than the obstacle-induced ones, especially when the number of steps is small.
Therefore, to construct the simulations, it is very useful to adopt the approach from Ref. <cit.>, which was also successfully applied in Ref. <cit.>.
Let us denote the difference in position of a test particle for the same sequence of trial moves with and without obstacles (excluded sites) by δ x. Clearly, the average magnitude of δ x is proportional to the density of obstacles. Let us write the total position of a particle in the presence of hard obstacles as
x(N) = x_0(N) + δ x.
⟨δ x ⟩ can be computed directly from simulations and converges much faster than ⟨ x(N) ⟩, while x_0(N) is the position under the bare dynamics (no obstacles), which is known.
To further speed up simulations in the small-N regime, we use a second trick. We simulate obstacle positions only in the effective region of the trajectory; that is, obstacles appear only in the area the test particle can reach up to step number N. We then normalize the result to account for the shifted dynamics by using the fact that for obstacles outside the effective region, the mean position would just be that of the bare dynamics x_0(N).
The same methodology was used to determine ⟨ x(N)^2 ⟩.
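A minimal version of this pairing trick could look as follows (a sketch of ours, numpy assumed; obstacles are passed as a set of coordinate tuples):

import numpy as np

rng = np.random.default_rng(2)
MOVES = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

def delta_x(N, F, obstacles):
    """x(N) - x_0(N) for one shared sequence of trial moves."""
    G = 1.0 / (np.exp(F / 2) + np.exp(-F / 2) + 2.0)
    p = [G * np.exp(F / 2), G * np.exp(-F / 2), G, G]
    trials = MOVES[rng.choice(4, size=N, p=p)]   # same randomness for both walks
    x_free = trials[:, 0].sum()                  # bare walk: every move accepted
    pos = np.zeros(2, dtype=int)
    for d in trials:                             # obstructed walk: reject blocked moves
        if tuple(pos + d) not in obstacles:
            pos = pos + d
    return pos[0] - x_free

# <x(N)> = <x_0(N)> + <delta x>, with <x_0(N)> = tanh(F/4) N known analytically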
|
http://arxiv.org/abs/2409.03205v1 | 20240905025334 | The YMDB catalog: Young massive detached binaries for the determination of high-precision absolute stellar parameters | [
"Pablo Martín-Ravelo",
"Roberto Gamen",
"Julia I. Arias",
"André-Nicolas Chené",
"Rodolfo H. Barbá"
] | astro-ph.SR | [
"astro-ph.SR"
] |
The YMDB catalog
Departamento de Astronomía, Universidad de La Serena, Av. Cisternas 1200 Norte, La Serena, Chile.
Gemini Observatory/NSF’s NOIRLab, 670 N. A‘ohoku Place, Hilo, HI 96720, USA
[email protected]
Instituto de Astrofísica de La Plata, CONICET–UNLP, Paseo del Bosque s/n, 1900, La Plata, Argentina.
Facultad de Ciencias Astronómicas y Geofísicas, Universidad Nacional de La Plata, Argentina.
Massive stars play a crucial role in the cosmic dynamics and chemical evolution of galaxies. Despite their significance, our understanding of their evolution and properties remains limited. An accurate determination of stellar parameters, such as the mass and radius, is essential for advancing our knowledge. Detached eclipsing binaries (DEBs) are particularly valuable for these determinations due to the minimal interaction between their stellar components, allowing for precise measurements.
This study aims to introduce the Young Massive Detached Binary (YMDB) catalog, designed to address the gap in the high-precision absolute parameter determination for young massive stars. By focusing on DEBs within the spectral range O9-B1, this catalog seeks to provide a reliable database for future astronomical studies and improve our understanding of massive star evolution.
We conducted a photometric analysis of 87 young massive stars in detached eclipsing systems using TESS light curves (LCs) that were processed through a custom pipeline. This analysis involved determining the amplitude of magnitude variations, orbital periods, times of minima, eccentricities, and the presence of apsidal motion and heartbeat phenomena. A thorough literature review was performed to obtain MK spectral classifications. We performed our own spectral classification of 19 systems to support the sample where a new classification was lacking or inconclusive.
The analysis identified 20 previously unreported binary systems, with 13 newly recognized as variable stars. Among the 87 stars examined, 30 are confirmed as YMDB members, and 25 are candidates pending spectral classification. The exclusion of the remaining 32 stars is attributed to unsuitable spectral types or their nondetached binary nature. Notable findings include the identification of new LC classifications, eccentricities in 13 systems, and heartbeat phenomena in several targets.
The YMDB catalog offers a resource of high-quality LCs and reliable stellar classifications, serving as a valuable tool for the astronomical community.
The YMDB catalog: Young massive detached binaries for the determination of high-precision absolute stellar parametersFull version of Tables 2, 3 and 4 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/.
Pablo Martín-Ravelo
1,2
Roberto Gamen
3,4
Julia I. Arias
1
André-Nicolas Chené
2
Rodolfo H. Barbá In Memoriam (1962–2021)
1
Received: 20 June 2024 / Accepted: 05 July 2024
================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Massive stars, defined by their cataclysmic end as core collapse supernovas, are typically those with an initial mass of eight or more solar masses and they fall into the OB spectral classification. These stellar behemoths play a crucial role in cosmic dynamics and chemical evolution, with their supernova events significantly enriching the interstellar medium with heavy elements. Understanding massive stars is fundamental to comprehending stellar evolution, the evolutionary history of galaxies, and the Universe at large. Due to their immense luminosity, they are key observable objects in distant galaxies, making them essential for astronomical studies with both current and forthcoming space and ground-based large telescopes.
Our current understanding of massive star evolution remains limited, as evidenced by the persistent mass discrepancy between empirical measurements from orbital dynamics and theoretical model predictions. This discrepancy has been a longstanding issue in astrophysics <cit.>. Factors inherent in binary systems, such as stellar rotation, which influence orbital ellipticity, synchronization, tidal effects, limb darkening, and radiative propagation, are crucial in this context. Evolutionary models that incorporate rotation show marked differences compared to nonrotational models. Additionally, research by <cit.> highlights significant variations in how current models handle mass loss in these stars.
The determination of reliable masses for massive stars is only achievable in binary systems, particularly through the study of light curves (LCs) and radial velocity (RV) curves of eclipsing binaries. The likelihood of encountering massive stars within binary systems is fairly high, especially during their main sequence phase, the most extended period of a star's lifecycle. Comprehensive spectroscopic surveys reveal that around 75% of main sequence O-type stars <cit.>, and a similar ratio of early B-type stars <cit.>, are part of gravitationally bound systems.
The endemic multiplicity among massive stars offers unique research opportunities, although the method has its limitations. An inherent consequence of binarity is the interaction among its components, which make their evolution paths deviate from those of solitary stars of similar types. This interaction issue was emphasized by <cit.>, who found that at least 71% of O-type stars in binary systems interact with their companions during their lifetimes, with 29% eventually merging into a single entity. Detecting past interactions poses challenges, as evidenced by discrepancies between empirical mass measurements and predictions from atmospheric or stellar evolution models, which often overlook these interactions.
Detached eclipsing binaries (DEBs) stand as critical objects in this context. These systems, characterized by a negligible or minimal interaction between their stellar components, perfectly meet the criteria for accurate stellar analysis. Given that most massive stars are part of multiple systems, it is preferable to focus on those with well-documented multiplicity. The best single stars for study are often the components of binary systems, as proposed by <cit.> and further discussed by <cit.>. DEBs provide a unique opportunity to determine the physical properties of stars with a high accuracy, such as their masses, and radii. Moreover, they can be used to determine distances. In this context, DEBs are key objects. Determinations from DEBs form the basis for calibrating crucial relationships among stellar parameters, such as the mass–luminosity relationship.
Among massive stars, those within the O8–B3 spectral-type range are predominant (just as a consequence of the stellar mass distribution). However, only a limited number have had their absolute parameters accurately determined.
Our research aims to address this gap by examining a selection of DEBs within this spectral range. We intend to analyze their LCs and RV curves to derive precise stellar parameters.
In this work, we present a curated database of young massive stars, with spectral types around B0 V, in DEBs. This endeavor is the first step toward maintaining an up-to-date database of empirical absolute parameters for massive stars. To achieve our objectives, we carefully selected systems through a thorough search of the available literature (see details in Sect. <ref>). We then generated the LCs of each target in the database using data from the NASA Transiting Exoplanet Survey Satellite (TESS) and analyzed their variations to identify DEBs. This is explained in Sect. <ref>. The TESS LCs were also examined for periodicities, to determine eccentricities and ephemerides, and for the detection of pulsation and/or heartbeat phenomena. Some targets were observed spectroscopically to obtain their spectral classification, which was ambiguous in the literature. The observations are described in Sect. <ref>. All of the results that populate the Young Massive Detached Binaries (YMDB) catalog are presented in Sect. <ref>.
§ GENERAL METHODOLOGY FOR THE YMDB CATALOG
Our approach integrates a thorough search and analysis methodology to identify and evaluate candidates for detailed investigation. Starting with an extensive review of databases and literature, we refined our list of potential candidates by examining their spectral classifications and the likelihood of misclassification, and by quickly reviewing raw TESS LCs to exclude clearly nondetached binaries. This process narrowed our initial pool to 87 candidates.
In preparing the LCs, we initially set up background and target masks using a structured pixel grid to minimize light contamination, which is vital for precise LC extraction. We continuously evaluated data quality and implemented polynomial fitting to correct known and recurrent flux pattern variations in the TESS data, such as rollovers, ensuring a cleaner and more accurate representation of the LCs.
We utilized Gaussian Fitting (GF) and Box Least Squares (BLS) methods to accurately determine orbital periods and other temporal features. Further analysis of the LCs involved meticulous visual inspection to confirm the detached nature of the systems and to identify additional features such as pulsations or heartbeats.
To enhance the validity of the spectral classifications, new spectroscopic observations were conducted using the Jorge Sahade telescope at CASLEO. These efforts were focused on acquiring low-resolution spectra, which were processed using established methods, to clarify any existing classification uncertainties.
§.§ Candidates selection
Candidates for the study were identified through extensive searches in various databases and literature. These sources included the Spectroscopic Binary Orbits Ninth Catalog <cit.>, OWN Survey <cit.>, IACOB <cit.>, Eclipsing Variables Catalog <cit.>, and the multiplicity of northern O-type spectroscopic systems project <cit.>, among others. Additionally, potential targets suggested by collaborative efforts within the astronomical community were also considered.
The search criteria encompassed a wide range, including systems with components in the O7–B3 III–V spectral classes, to account for potential misclassifications. This cautious approach ensured that all potentially relevant systems were considered, particularly those that might match the desired O9–B1 IV–V classification upon closer scrutiny.
A thorough review of the bibliography was conducted to assess the reliability of the spectral classifications, the availability of LCs, and the reported detachment status in the literature, narrowing down the original list of 339 targets to 186 candidates.
The process continued with an analysis of TESS data to eliminate candidates showing clear non-Algol-type variability (i.e., Beta Lyrae and W Ursae Majoris types), reducing the number to 114 candidates. A custom pipeline was then developed for extracting high-quality TESS LCs, allowing for the estimation of orbital periods (P) and times of minima (T0), as well as the identification of eccentric orbits, heartbeats, and apsidal motion throughout the TESS sectors. This was achieved using both GF and BLS methods. The candidates were meticulously scrutinized, focusing on the identification of Algol-type variables. Systems not fitting the Algol-type criteria were excluded, resulting in 87 stars qualifying as potential candidates, which we present in this work.
Details of this task are given in the following.
§.§ Construction of light curves
In constructing the LC, the initial step involved assessing the quality masks for each TESS sector containing the target, to determine whether the default rejection criteria were adequate. For the majority of LCs, the default quality mask was found to be sufficient.
§.§.§ Background mask
To construct a sky background mask for each sector, a 30×30 pixel area centered on the target system is analyzed. Stars within this area, along with their magnitudes, are identified using Gaia DR3 data. The lightkurve tool <cit.> is then employed to generate a mask with threshold zero, specifically excluding pixels affected by stars brighter than a determined magnitude limit. This upper limit is set at 5 magnitudes fainter than the brightest star within a 3-pixel radius of the target system, effectively minimizing contamination from other stars in the extraction of the desired LC.
§.§.§ Target mask
Following the construction of the sky background mask, a target mask is developed using a similar methodology. The function that delineates boxes around mapped stars based on their brightness is also applied to isolate the Target Pixel File (TPF) surrounding the star of interest. Adjustments to the size and position of this cut, alongside a specified threshold value, are made using the lightkurve tool to craft the target mask. This stage might require iteration, involving comparisons of the resulting LC with published ones and the LCs of individual pixels, to verify the accuracy of the selected parameters and the suitability of the target mask.
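To make the mask construction concrete, the sketch below illustrates the kind of lightkurve-based extraction described above. The target name, cutout size, sector index, and threshold values are illustrative assumptions, and the Gaia DR3 cross-match used to reject contaminated background pixels is omitted for brevity; this is a minimal sketch, not the exact pipeline.

```python
# Illustrative sketch of the background/target mask construction (assumptions:
# target name, thresholds, and one example sector; the Gaia DR3 filtering of
# the background mask is omitted here).
import lightkurve as lk

# 30x30-pixel TESScut centered on the target system.
tpf = lk.search_tesscut("BD+66 1674")[0].download(cutout_size=30)

# Target mask: contiguous pixels above a sigma threshold around the target.
target_mask = tpf.create_threshold_mask(threshold=10, reference_pixel="center")

# Background mask: faint pixels (threshold ~0); in the actual pipeline, pixels
# near Gaia DR3 sources brighter than the adopted limit are also excluded.
background_mask = ~tpf.create_threshold_mask(threshold=0.001,
                                             reference_pixel=None)

# Background-subtracted LC: scale the per-pixel background to the aperture size.
bkg = tpf.to_lightcurve(aperture_mask=background_mask).flux / background_mask.sum()
lc = tpf.to_lightcurve(aperture_mask=target_mask)
lc.flux = lc.flux - bkg * target_mask.sum()
```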
§.§.§ Data selection
The analysis of the unfolded extracted LC involves identifying and flagging problematic data that could compromise the construction of the LC. A visual inspection of the cadence for each TESS sector is essential, as it can reveal regions affected by gaps in the data, caused either by TESS CCD readouts or by data omitted by the quality mask. Data adjacent to these gaps might exhibit a different background profile, making parts of the LC unreliable if the background extraction fails to account for this variance. Additionally, these segments may experience slight magnitude shifts (Δmag) and may require independent normalization, posing challenges for systems with long periods.
§.§.§ Determination of ephemeris
In determining the ephemeris of our studied systems, we incorporate two principal methods: GF and BLS. GF is primarily utilized for its simplicity and effectiveness in identifying eclipse minima by fitting Gaussian profiles, a method similar to the Phase Dispersion Minimization (PDM) approach.
Phase dispersion minimization is traditionally used to detect stable periodic signals by minimizing scatter across phased data, making it ideal for datasets like those from TESS which may include gaps or nonsinusoidal variations. By adapting this methodology, our GF process not only identifies the minima but also estimates the period from these minima's separations, similarly to how PDM assesses periodicity by examining the variance across different data bins.
After initial minima identification, we fold the LC and apply GF iteratively across all sectors, optimizing the ephemeris precision in a manner analogous to refining PDM's phase coverage by adjusting bin overlaps or employing smoother functions in updated PDM versions.
Box Least Squares is used alongside GF for period verification, with both methods critically reviewed against known periods from the literature, when available, in order to cross-check the reliability of our adapted PDM-inspired techniques on TESS time series data.
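As a schematic illustration of these two steps, the following sketch combines a BLS period grid search with a Gaussian fit to the deepest minimum. The arrays t and f (time in days and normalized flux), the trial period range, the fixed transit duration, and the eclipse window width are assumptions for illustration, not the pipeline's actual settings.

```python
# Illustrative period search (BLS) and eclipse-minimum fit (GF).
# Assumes cleaned, normalized light-curve arrays t (days) and f (flux).
import numpy as np
from astropy.timeseries import BoxLeastSquares
from scipy.optimize import curve_fit

# --- BLS: coarse period search over a grid of trial periods.
bls = BoxLeastSquares(t, f)
periods = np.linspace(0.5, 15.0, 20000)      # trial range is target-dependent
result = bls.power(periods, 0.1)             # assumed eclipse duration in days
P = result.period[np.argmax(result.power)]

# --- GF: refine T0 by fitting an inverted Gaussian to the primary eclipse.
def eclipse(time, t0, depth, sigma, base):
    return base - depth * np.exp(-0.5 * ((time - t0) / sigma) ** 2)

t_min = t[np.argmin(f)]                      # deepest point as starting guess
window = np.abs(t - t_min) < 0.3             # window half-width is an assumption
popt, _ = curve_fit(eclipse, t[window], f[window],
                    p0=[t_min, 0.1, 0.05, 1.0])
T0 = popt[0]
```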
§.§.§ Phase-synchronized polynomial fitting
Photometric observations from the TESS mission, similar to those from its predecessor Kepler, are susceptible to instrumental systematic trends that can obscure or distort the intrinsic stellar signals. These systematics arise primarily from spacecraft jitter and other operational imperfections, which introduce noise and trends across different timescales into the captured LCs.
To mitigate these effects, we employed a polynomial fitting method, analogous to the Pixel Level Decorrelation (PLD) method utilized in the EVEREST pipeline for K2 data. This method has been demonstrated to effectively remove correlated noise due to spacecraft motion by fitting and subtracting systematics directly from the pixel-level data <cit.>.
While our approach uses a simpler polynomial fitting of lower orders, it shares the core principle of modeling the observed data to isolate and remove instrumental signatures. The primary distinction of our method, which we designate as "Phase-Synchronized Polynomial Fitting" (PSPF), lies in its utilization of the previously known orbital periods to synchronize (fold) phase points within a TESS sector. This synchronization enables the calculation of fits for data points that share the same phase, thus forming a more complex function that better adapts to the unique signal trends of TESS data. Therefore, PSPF is suited for systems with orbital periods shorter than the duration of a TESS sector. The method cannot be applied if the period exceeds the sector length (PLD should be used in those cases), unless the signal is expected to remain constant in different phases of the LC, such as outside the eclipses in highly detached binaries. It becomes particularly reliable when the orbital period is substantially shorter than the sector length, allowing for enhanced signal integrity post-correction through multiple phase folds.
The correction process begins with the identification of phase points across the LC that are consistent over multiple orbits, that is, without flux variations other than those related to the eclipsing nature of the system or other synchronized pulsations (whose periods are multiples of the orbital period). This leverages the comparative stability of shorter-period systems, in which nonsynchronized variations can be averaged out if enough points per phase are provided. Each phase point then provides a value not only for that phase, but for every other phase point in the LC.
A weighted mean is calculated for these values in each phase point to construct a robust fit model. This model is then used to adjust the LC, with the median of all polynomial corrections applied to derive the final corrected LC. This procedure was only implemented when a consistent pattern of systematic error was evident across the entire sector, ensuring that the corrections made were both meaningful and substantiated by the data.
For about 20% of the candidates in our study that exhibited clear systematic trends, the PSPF was crucial for assembling a clean LC. A polynomial fit of order 1 was used in most cases; the polynomial order was tailored to each target's specific noise characteristics, with higher orders used only seldom, for more complex systematic patterns.
Our PSPF corrections were validated against already corrected LCs, showing a significant reduction in noise and the preservation of intrinsic astrophysical signals and effectively detrending the TESS LCs, akin to the results reported by the EVEREST pipeline when applying PLD to K2 data <cit.>.
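The following is a deliberately simplified sketch of the PSPF idea: fold on the known period, build a phase template from the per-phase averages, and fit a low-order polynomial in time to the residual trend. The bin count and polynomial order are assumptions, and the full pipeline's median over per-phase polynomial corrections is compressed here into a single global fit.

```python
# Simplified sketch of PSPF-style detrending (assumptions: nbins, order;
# the real method takes the median over many per-phase corrections).
import numpy as np

def pspf_detrend(t, f, period, t0, nbins=200, order=1):
    phase = ((t - t0) / period) % 1.0
    # Phase template: median flux in each phase bin (the "fit model").
    bins = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, bins) - 1
    template = np.array([np.median(f[idx == k]) if np.any(idx == k) else np.nan
                         for k in range(nbins)])
    model = np.interp(phase, 0.5 * (bins[:-1] + bins[1:]), template)
    # Systematic trend: what remains after dividing out the phase template.
    good = np.isfinite(model) & (model > 0)
    coeffs = np.polyfit(t[good], (f / model)[good], deg=order)
    trend = np.polyval(coeffs, t)
    return f / trend
```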
§.§ Light-curves analysis
All generated LCs underwent meticulous inspection to confirm their detached nature, identify any pulsations or heartbeats, and determine eccentricity and apsidal motion. Visual inspection was used to confirm the detached nature of each system, which was unequivocally determined by the distinct transitions marking the beginning and end of eclipses in the LC. Pulsational features, indicative of either intrinsic variability of the components or the presence of a complex multiple system, were also identified through visual inspection of the LCs. In cases of multiple systems, efforts were made to disentangle the LCs using templates crafted from median data selected, when possible, from segments of the LC minimally affected by eclipses or intrinsic pulsations of the main system. The heartbeat phenomenon was identified as a distinct photometric variation occurring during orbital phases with closer eclipses, indicative of the expected periastron. We compared those variations against all forms of heartbeat shown in <cit.>. Detailed descriptions of these cases are provided in Section <ref> and their LCs are shown in Figure <ref>.
For systems where eccentricity was not immediately obvious, we employed GF to precisely measure the centers of the primary and secondary eclipses, using a threshold of Δϕ=0.0002, to determine whether their phase separation differed from 0.5. It is important to note that an exact half-period interval between eclipses does not necessarily imply a circular orbit; the system could be eccentric with a periastron longitude of either 90^∘ or 270^∘. Therefore, values listed in Tables 2, 3, and 4 may imply eccentricity but never circularity. Variations in the fit to the secondary eclipse across different sectors were analyzed, and systems exhibiting changes beyond a certain threshold (Δϕ=0.0002) were flagged for potential apsidal motion. Additionally, since heartbeat phenomena occur exclusively in eccentric binaries, the detection of heartbeat signals in cases where only one eclipse was observed allowed us to identify the system as eccentric.
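As an illustration of this eccentricity test, one can fit Gaussians to both minima of the folded LC and compare their phase separation with 0.5 against the Δϕ = 0.0002 threshold quoted above. In the sketch below, the arrays phase and flux, the search exclusion radius, and the fitting window are assumed inputs, and phase wraparound at the window edges is ignored for simplicity.

```python
# Illustrative eccentricity check from the eclipse phase separation.
# Assumed inputs: phase (folded phases in [0, 1)) and flux (normalized).
import numpy as np
from scipy.optimize import curve_fit

def gauss_min(phi, c, depth, sigma, base):
    return base - depth * np.exp(-0.5 * ((phi - c) / sigma) ** 2)

def eclipse_center(phase, flux, guess, window=0.05):
    sel = np.abs(phase - guess) < window           # window width is an assumption
    popt, _ = curve_fit(gauss_min, phase[sel], flux[sel],
                        p0=[guess, 0.1, 0.01, 1.0])
    return popt[0]

guess1 = phase[np.argmin(flux)]                    # deepest (primary) minimum
away = np.abs(((phase - guess1 + 0.5) % 1.0) - 0.5) > 0.1
guess2 = phase[away][np.argmin(flux[away])]        # deepest point away from it

dphi = (eclipse_center(phase, flux, guess2) -
        eclipse_center(phase, flux, guess1)) % 1.0
possibly_eccentric = abs(dphi - 0.5) > 0.0002      # threshold quoted in the text
```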
§.§ Spectroscopic observations and classification
To address the inconsistencies in spectral classifications found in the existing literature, we undertook the task of obtaining new spectra.
Low-resolution spectra were obtained using the Jorge Sahade telescope at Complejo Astronómico El Leoncito (CASLEO) in San Juan, Argentina, utilizing the REOSC spectrograph in single mode alongside the new Sophia CCD. This equipment yielded spectra with a resolution of R∼1000, covering the wavelength range of 3760–5845 Å suitable for spectral classification in the MKK system.
Spectral data processing was conducted in the standard way using iraf[NOIRLab IRAF is distributed by the Community Science and Data Center at NSF NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the U.S. National Science Foundation.] routines.
The resulting spectra are depicted in Fig. <ref>. They were classified following the criteria described in <cit.>. In brief, we employed the ratio between He ii λ4541 and He i λ4387 for late O-types, and Si iii λ4552/Si iv λ4089 for early B-types. Further details about the spectral classification of OB stars can be found in the references provided.
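As a rough numerical counterpart of these criteria, the relative line strengths can be compared through equivalent widths integrated over small windows around each feature. In the sketch below, the window half-width, the adopted line centers, and the assumption of a continuum-normalized spectrum (arrays wave in Angstrom and flux) are illustrative; actual classification relies on visual comparison with standards.

```python
# Rough sketch of the classification line-ratio measurement: integrate the
# line depth (1 - normalized flux) over a window around each feature.
import numpy as np

def equivalent_width(wave, flux, center, half_width=5.0):
    sel = (wave > center - half_width) & (wave < center + half_width)
    return np.trapz(1.0 - flux[sel], wave[sel])

# Late O-type criterion: He ii 4541 vs He i 4387.
ratio_late_O = (equivalent_width(wave, flux, 4541.6) /
                equivalent_width(wave, flux, 4387.9))

# Early B-type criterion: Si iii 4552 vs Si iv 4089.
ratio_early_B = (equivalent_width(wave, flux, 4552.6) /
                 equivalent_width(wave, flux, 4089.0))
```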
The new spectral types are presented in the corresponding Tables <ref>, <ref> and <ref>, column ST1, and are indicated with the label “tw”. No ST2 is given due to the low spectral resolution, which prevents the disentangling of both components in the system.
§ RESULTS
Among the 87 systems, this study reveals 20 new eclipsing binaries (Table <ref>): 13 whose variable nature was previously unknown, five targets previously known as photometric variables or spectroscopic binaries but not as DEBs, and two identified as nonthermal contact systems (as they present different light minima). Moreover, we introduce new LC classifications for 30 systems and report novel findings such as eccentricity in 13 systems, heartbeat features in 17, new types of variability in 11, and first TESS LC presentations for the majority of the sample.
The study's results are presented in three distinct tables: confirmed members of the catalog (Table <ref>), candidates pending spectral classification (Table <ref>), and unqualified candidates (Table <ref>). Confirmed members are systems that meet the essential conditions of displaying Algol-type LCs (detached) and having at least one component with a spectral classification within the O9-B1 V range. Candidates are also detached systems that show potential signs of meeting the spectral type criteria but lack definitive confirmation due to incomplete or inconclusive spectral classification. Unqualified systems are those whose LCs do not appear detached (EB or EW), have primary spectral classifications clearly out of range, or both. The secondary component's classification might either be out of range or impossible to detect due to limitations in current spectroscopic data and analysis. While unqualified detached systems with indeterminate secondary classifications are currently unsuitable for the catalog, future higher-resolution or more sensitive spectra could reveal additional stellar features, prompting their reconsideration as candidates.
For each group, we detail findings from our photometric analysis, including delta magnitude (Δ mag), period (P), time of minima (T_0), observed apsidal motion (Apsidal), additional system variability (MultiP), heartbeat phenomena (HB), and eccentricity (e). Additionally, we provide our spectral classifications for certain stars, either to fill gaps where classifications were absent or to verify existing classifications.
Regarding new spectral types, we confirmed HD 298448, CD-28 5257, HD 309036, V* IK Vel as entries to the YMDB (Table <ref>),
and emphasize that certain eclipsing binary systems were excluded from the YMDB catalog because their spectral types fall outside the range studied here.
An intriguing example is CM CMa (=Gaia DR3 2928505622380096256), which lacks a documented spectral type in the eight publications indexed in the Simbad database. Our spectrum, obtained on a night in April 2023, reveals absorptions consistent with an F5 V type, characterized by H and Ca ii K lines of similar strength and the presence of the G band.
Another case is GN Nor (=Gaia DR3 5884730512723833984), which similarly lacks spectroscopical references in Simbad. Our CASLEO spectrum is classified as A0 V, primarily due to the dominance of Balmer lines.
Similar cases are V* GN Nor and * f Vel, whose spectral classifications fall outside the range of this catalog.
These spectra are shown in Fig. <ref>.
In certain LCs, we observed additional variability beyond the typical eclipsing patterns. These variations, unless identified as heartbeat phenomena, are denoted with a 1 in the tables. In total, we detected such oscillations in 37 systems, 11 of which are reported here for the first time.
Most newly discovered pulsating systems have dedicated paragraphs below, except HD 309036 and HD 204827, both of them well-known DEBs.
Finally, the visual inspection of the TESS LCs allowed us to detect apsidal motion. This phenomenon is indicated in the tables (with a 1) when the times of the secondary eclipses appear to vary among different sectors.
§.§ New detached eclipsing binaries discovered
We offer a detailed insight into selected DEBs featured within the YMDB.
BD+66 1674:
BD+66 1674 was initially noted as a probable RV variable <cit.>; no additional references were found in the available literature. Although <cit.> classified it as O9.7 IV, an analysis incorporating high-resolution spectra from the Galactic O-Star Spectroscopic Survey <cit.> and the Library of Libraries of Massive-Star High-Resolution Spectra <cit.> revealed it to be a B0 V + B0 V system (Maíz Apellániz, priv. comm.). Consequently, it has been included in the YMDB. Its LC displays additional variations beyond the 18-day eclipsing pattern, resembling those of highly eccentric detached double-eclipsing systems. Moreover, a notable bump between eclipses suggests the presence of a heartbeat-like feature (see Fig. <ref>). Considering the pixel size of TESS data, we conducted a search for other targets within a 40 arcsec radius and identified seven Gaia sources. These sources, at least four G-magnitudes fainter than BD+66 1674, make it difficult to determine whether the short-period binary is one of these sources or an unresolved component of BD+66 1674.
BD +66 1675:
Although classified as O7.5 Vz <cit.>, BD +66 1675 was revealed by high-resolution spectra obtained within the context of LiLiMaRlin <cit.> to be a triple system comprising O7.5 Vz + O8 V + B components. A preliminary analysis of 20 spectra indicated that the RVs of the spectral lines belonging to the O7.5 Vz star do not vary within errors. However, the O8 V + B pair exhibits RV motion in accordance with the photometric period. Moreover, preliminary orbital elements suggest an eccentric orbit, explaining why only one eclipse is observed. This eclipse occurs when the massive star passes in front of the system, making it a secondary eclipse. We also detect a heartbeat-like behavior immediately after the eclipses, but only in Sector 58 (Fig. <ref>), as well as short-period variability identified as pulsations.
HD 278236:
HD 278236 was classified as O9 V by <cit.>; no publications reporting variability or binarity were found. It is noteworthy that the relative position of the secondary eclipse appears to vary in different sectors, indicating apsidal motion (Fig. <ref>). Moreover, the apsidal motion is making the secondary eclipse narrower over time (from sectors 17–59, ∼1100 days, and from sectors 59–73, ∼1400 days, for a total time span of ∼2500 days).
RAFGL 5223:
RAFGL 5223 is recognized as a Herbig Ae/Be star candidate, and its entire bibliography is centered around this characteristic. However, no indications of variability, either spectroscopic or photometric, were identified. To scrutinize the spectral classification indicated in the Simbad database, we downloaded an X-Shooter spectrum (program ID 084.C-0952(A)) from the ESO portal. By evaluating the ratios between He ii and He i pairs, we determined it to be of O9.2 type, with He ii λ4686/He i λ4713 and Si iv λ4089/He i λ4026 ratios indicative of a V luminosity class. Consequently, it has been included in the YMDB as an eccentric short-period binary. We note a potential heartbeat feature, manifested as a bump at the orbital phases where the periastron passage is expected.
We also note probable apsidal motion, as the separation between the eclipses seems to vary between Sectors 7 and 33 (∼700 d).
TYC 8174-540-1:
Initially identified as an OB star by <cit.> and subsequently classified as O9.5 Vn by <cit.>, TYC 8174-540-1's spectral type was verified through our own CASLEO spectrum (see Fig. <ref>). Despite an absence of reports regarding its binary nature, the TESS LC unmistakably depicts both eclipses, which, in fact, are transit and occultation events. This characteristic increases its significance as a benchmark for determining stellar parameters. Notably, the system exhibits eccentricity and appears to manifest a heartbeat phenomenon before entering the primary eclipse.
HD 102475:
We found no evidence in the existing literature suggesting that it is a binary or variable star. It is only mentioned in general works on spectral types <cit.>, where it is classified as B0.5 II and B1 III, respectively. Therefore, we include this target as a candidate until its spectral type is confirmed with modern spectroscopic analysis. It also exhibits short-period variability, interpreted as pulsations.
HD 114026:
While we found no evidence suggesting variability, <cit.> identified it as a probable SB2 system. Despite its spectral classification exhibiting significant dispersion, ranging from OB to B2, with <cit.> designating it as a B0.5 V:n type, we prefer to be cautious and include HD 114026 as a candidate for the YMDB.
CPD -63 3284:
This star lacks a reliable spectral classification, and unfortunately, we were unable to observe it during our run at CASLEO. Initially identified as an OB star by <cit.>, no additional classification information was found. Nevertheless, its LC unmistakably displays double eclipses, which indeed correspond to occultation and transit events. It also exhibits short-period variability, interpreted as pulsations. We include it as a candidate in the YMDB.
UCAC2 5911156:
Initially recognized as an early-type star by <cit.> and subsequently classified as B0.5 V <cit.>, UCAC2 5911156 was reobserved by us: we acquired a new spectrum, leading to a reclassification of its luminosity class to III. This adjustment is based on the nearly equal intensities of the absorption lines He i λ4387 and Si iii λ4552. The giant nature of UCAC2 5911156 naturally explains the pulsations noted in the LC shown in Fig. <ref>. The LC exhibits eclipses with a superimposed set of oscillations forming a beating pattern, akin to those first discovered in HD 187091 <cit.>. Notably, these damped oscillations do not align with the orbital period, as the maximum amplitudes do not occur at the same phase. To the best of our knowledge, this is the first identification of the system as a DEB and as a pulsator. However, a detailed analysis of this intriguing system is beyond the scope of this work.
HD 142152:
Initially classified as a B0 III massive star by <cit.> and subsequently confirmed by <cit.>, HD 142152 lacked published spectra. To address this absence, we observed this target using CASLEO and confirmed its spectral classification (see Fig. <ref>). The identification as a probable binary arises from two discrepant radial velocities reported by <cit.>. The TESS LC distinctly illustrates a highly eccentric detached double-eclipsing behavior. Additionally, pulsation-like variations, occurring with 1/8 of the orbital periodicity, and a discernible heartbeat feature are detected. This renders the target exceptionally interesting, despite its exclusion from YMDB due to its luminosity class.
CD -54 6456:
Prior to this study, there were no records indicating variability or binarity for this system. The star is classified as O9.5 V <cit.>; our analysis involved six high-resolution spectra, which revealed no RV variations in the features of the O-type star. This suggests that the eclipsing binary likely involves another stellar pair within the TESS pixel or one indistinguishable from CD -54 6456. For this reason, we have included it as a candidate in the YMDB.
HD 144918:
This target exhibits one of the most widely scattered spectral type assignments in the literature, ranging from O5/7 <cit.> to B0 <cit.>, with none originating from a modern study. Consequently, we decided to acquire a new spectrum. The CASLEO spectrum is classified as O8 V, with this classification based on the observation that He ii λ4542 is slightly fainter than He i λ4471. Concerning its binary nature, it is reported as SB2 by <cit.>, yet no period is provided, and no dedicated work was found in the literature.
BD +55 2722:
This system is, indeed, a Trapezium-like configuration with components named A (O8 Vz), B (O9.5 V), and C (O7 V(n)z+B), according to the GOSSS <cit.>. Given that the three sources are indistinguishable in TESS data, we attribute the DEB discovery to the C component, thereby designating BD +55 2722 C as a candidate in the YMDB. This classification stands until the spectral type of the secondary component is more precisely determined.
HD 277878:
Despite extensive literature search, no evidence of photometric variability was found for this target. Originally identified as an OB or B0-type star, it was recently reclassified as O7 V((f))z based on LAMOST spectra <cit.>, and indicated as SB1. Consequently, we have excluded it from consideration in the YMDB.
* psi02 Ori:
It is a widely recognized spectroscopic binary <cit.>, and also known to exhibit ellipsoidal variations <cit.>. In the TESS data, its double-eclipsing nature is clearly evident, alongside the ellipsoidal variations. However, due to its spectral type being identified as B1 III + B2 V <cit.>, we have excluded it from consideration in the YMDB.
LS VI +00 25:
It was initially identified as an SB1 system by <cit.>, although no photometric variations have been reported to our knowledge based on available literature. With a spectral type of O9.5 V <cit.>, we have included it in the YMDB.
CD-53 6352:
This star is recognized as double or multiple according to the Washington Double Star Catalog (WDS J16001-5355AB), but no photometric or RV variability has been identified. Analysis of TESS data has not conclusively determined whether star A or B is the DEB. While component A is classified as O7 III <cit.>, the spectral classification for component B is unavailable. As such, we have included it as a candidate in the YMDB. Additionally, its LC exhibits short-period variations compatible with pulsations.
HD 338961:
No evidence of variability or binary nature was found for this star in the literature consulted. Despite some dispersion in its spectral classification, we consider the determination B0.5 IIInn from <cit.> to be reliable. Therefore, we have excluded it from inclusion in the YMDB.
§.§ Contact binary systems discovered
We provide details of two targets initially considered for inclusion in the YMDB. However, subsequent analysis of the TESS data revealed that they are contact binaries, leading to their rejection.
LS V +38 12:
This star was identified as a binary system, based on spectroscopic observations, by <cit.>, who classified the pair as O7 V((f))+ B0 III-V.
HD 305850:
It is listed as a pulsating star in Simbad, yet we found no references to its periodicity or LC. Moreover, its spectral type is actually uncertain, requiring an accurate determination.
§.§ DEBs showing heartbeat phenomena
In the YMDB, in addition to TYC 8174-540-1, RAFGL 5223, HD 142152, BD+66 1675, and the subsystem found within BD+66 1674, which are already detailed in <ref>, several other systems exhibit variations in their LCs consistent with the heartbeat phenomenon.
V* NY Cep:
It is a well-known DEB <cit.>, characterized by a single eclipse resulting from its high eccentricity and periastron longitude. Notably, a bump is observed in its LC just before entering the eclipse, which we interpret as a probable heartbeat, although other proximity effects cannot be discarded.
HD 99898:
This star is a previously identified variable, classified as Algol type <cit.>, and subsequently analyzed as a detached eclipsing binary (DEB) displaying apsidal motion <cit.>. However, no heartbeat phenomenon was reported prior to this study.
CD-28 5257:
It is a recognized eclipsing binary <cit.>, although the reported periodicity is incorrect. Our analysis reveals it to be an eccentric double-eclipsing system, with eclipses corresponding to both transits and occultations. Furthermore, a marginal bump is observed in the orbital phases where the expected periastron occurs, which we identify as a heartbeat effect, made possible by the precision of the TESS data. This star is marked for exhibiting apsidal motion in Table <ref>, as the orbital phases of the secondary transits appear to vary across available Sectors 7, 8, 34, and 61.
V* V725 Car:
It is a DEB in a highly eccentric orbit, with an RV curve determined for its primary component <cit.>. The new LC presented here reveals a distinct heartbeat effect between the closer eclipses. An integrated analysis of both datasets, spectroscopic and photometric, will enable the determination of stellar parameters for both components.
V* Y Cyg:
It is a well-established SB2 eclipsing binary <cit.>. A marginal flux increase is observed at orbital phases corresponding to the periastron passage, which can be interpreted as a heartbeat effect, although it could also be attributed to ellipsoidal variations. This system exhibits apsidal motion, as evidenced by the variation in secondary eclipse times (and possibly their depths) between sectors 15 and 41 (∼700 d).
CD -27 4726:
It is a very well-known DEB. The TESS data show an asymmetric brightening just before and after the (only visible) primary eclipse. Its shape is reminiscent of that detected in the very massive binary system WR 21a <cit.>. Unfortunately, the spectral type of this DEB is not completely reliable; thus, we add CD -27 4726 to the candidates list.
HD 306096:
It is also a well-studied DEB, but we note a clear heartbeat feature at the orbital phases where the periastron passage is expected.
V* GN Nor:
This is another well-known system; however, its heartbeat has not been reported to the best of our knowledge.
*f Vel:
It is a studied DEB system; however, as far as we are aware, its heartbeat phenomenon has not been documented.
§.§ DEBs with newly determined periods
We compared the periods obtained from our analysis of the TESS data with those previously published. Some of the periods we derived were found to be twice as long as the previously reported ones. This discrepancy is likely due to the improved depth and resolution of the eclipses identified in the TESS data, enabling a clearer distinction between primary and secondary eclipses. While we have already discussed these discrepancies for system CD-28 5257 in Section <ref>, we now address those for which we have not yet provided details.
V* KU Car:
KU Car, reported as B8 in SIMBAD from <cit.> and initially observed with a period of 5.92 days by <cit.>, is reexamined in the author's Master's dissertation <cit.>. The dissertation utilized ASAS-3 data, which revealed a secondary eclipse previously unnoticed, effectively revising the reported period to 2.96 days. Detailed spectral analysis was performed using data from the Galactic O Star Spectroscopic Survey <cit.>, which led to a revised spectral classification of B0.5 V(n). While O'Connell previously suggested an eccentric orbit based on the LC's shape outside the eclipses, this hypothesis was biased by an incomplete understanding of the LC's true nature. Our observations do not indicate eccentricity from the LC; only a comprehensive RV study could resolve this ambiguity. Ongoing spectroscopic analysis aims to refine the temperature and absolute parameters of KU Car.
HD 99630:
This star is known as a DEB, but its periodicity is incorrectly reported in the literature <cit.>. Our analysis of the TESS LC reveals that its period is nearly double the previously reported value. Additionally, it shows double eclipses and high eccentricity. Our high-resolution spectra, obtained during the OWN Survey campaigns, indicate that its spectral type is earlier than the B4-B5 determined by <cit.>. The ratio between the He i λ4471 and Mg ii λ4481 lines is greater than three, and C ii λ4267 is identified; thus, a B3 type is more suitable. The weakness of the metal lines confirms its dwarf class, B3 V.
CD-59 3165:
It is recognized as a highly eccentric DEB <cit.>. In the TESS data, in addition to exhibiting double-eclipsing behavior, it also displays other eclipses with a periodicity of 3.18205 d (see Fig. <ref>). Given that this star is identified as a double in the WDS catalog (WDS J10348-6013AB), with both stars separated by only 2.3 arcseconds <cit.>, we cannot definitively confirm the origin of these additional eclipses. However, our analysis of two spectra obtained during opposite quadratures of the main DEB system as part of the OWN Survey program reveals distinct spectral features for each component. One component exhibits narrow lines, while the other shows broader lines. The ratio of Si iii λ4552 to Si iv λ4089 is nearly unity in the narrow component, suggesting a spectral type of B0.5. Conversely, this ratio is smaller in the broader component, which, considering the absence of He ii λ4542, yields a spectral type of B0-0.2. In terms of luminosity class, the narrow component appears to belong to classes III-I, as He ii λ4686 is markedly fainter than He i λ4713. Conversely, both lines are comparable in the broad component, indicative of a class V classification. In Fig. <ref> we show some spectral regions to illustrate these classifications.
In the future, we plan to conduct a more targeted spectral analysis to identify the spectral signatures of both components of the other DEB system. This analysis will also aim to elucidate the source of the additional eclipses observed.
§.§ Other intriguing systems
We identified a diverse set of interesting systems, each presenting unique photometric characteristics that merit special mention. Firstly, several systems in our sample displayed Tidally Excited Oscillations (TEOs), which are oscillations within a star or system driven by tidal forces due to a close stellar companion. Notable examples include UCAC2 5911156 (detailed in <ref>, Fig. <ref>) along with V* V4386 Sgr, 2MASS J16542949-4139149, * 23 Ori, and V* V1216 Sco.
Additionally, we have successfully disentangled the LCs of systems where the combined photometric data initially obscured individual components, such as CD-59 3165 (detailed in <ref>, Fig. <ref>), BD+66 1674 (<ref>, Fig. <ref>), and eta Ori (detailed below, Fig. <ref>).
We also observed systems whose eclipse characteristics vary significantly over time, adding another layer of complexity to their study. For instance, HD 278236 initially showed flat eclipses typical of total eclipses, but over a span of 1500 days the eclipse profile gradually transitioned to partial, with a smoothing and narrowing that indicate dynamic changes in the system; additionally, the shifting of the secondary eclipse suggests apsidal motion, highlighting the system's evolving orbital dynamics (Fig. <ref>). HD 93683 exhibited a decrease in eclipse depth over 1500 days, suggesting changes in the system's configuration or surrounding material (Fig. <ref>). Finally, BD+66 1675 (detailed in <ref>, Fig. <ref>) presented a heartbeat-like feature after the primary eclipse, which shifted to before the eclipse over 1150 days of observation.
HD 93683:
It is recognized as a SB2 system with a Be-type third component <cit.>. Our analysis of the TESS data unveiled intriguing behavior in this system, characterized by an attenuation of its eclipses (Fig. <ref>). This phenomenon could arise from variations in the brightness of the variable Be-star (either the star itself or its surrounding disk), which may dilute the eclipses. Alternatively, it could result from a rare effect known as Zeipel-Lidov-Kozai cycles, induced by the third component in a noncoplanar orbit, leading to changes in the orbital plane relative to our line of sight. As far as we know, this is the very first case reported in massive systems <cit.>.
This multiple system also exhibits several short-period variations, interpreted as pulsations. <cit.> reported one such variation with a period of 2.4 days. Further analysis is needed to determine additional periodicities in the TESS data.
* eta Ori:
Eta Ori is an ideal candidate for our catalog, as it meets the criteria of a detached binary with spectral types B0.7 V and B1.5: V. <cit.> analyzed TESS data and fitted the LC to obtain absolute parameters for the system, opting not to use data from sector 32 due to its low variability amplitude. In our study, we present LCs using both sectors (Fig. <ref>). We confirm the periodic variations reported in the literature: the primary eclipse cycle at approximately 7.989 days, indicative of the detached eclipsing binary (EB) system, and a shorter cycle of about 0.432 days, associated with pulsational variability. <cit.> initially suggested that the shorter period might indicate a contact binary with a period of 0.864 days. Southworth proposed the configuration as a detached EB with a period of 7.988 days, in which one component exhibits g-mode pulsations, alongside a noneclipsing binary with a period of 0.8641 days showing strong ellipsoidal variations.
§ THE YOUNG MASSIVE DETACHED BINARY CATALOG
This work presents the comprehensive YMDB catalog, derived from the analysis of TESS LCs and spectroscopic data, with additional support from an extensive review of existing literature.
Eclipsing binaries offer a unique opportunity to determine stellar parameters with high precision, especially when combining LC information with RV data. Detached binaries are particularly valuable due to the minimal interaction between their stellar components, allowing for accurate determinations of stellar parameters.
Although systems within the O8–B3 spectral-type range are common, few have had their absolute parameters precisely measured. The YMDB catalog addresses this knowledge gap by providing a curated database of young massive detached binaries, facilitating high-precision stellar parameter determinations.
Through the analysis of TESS LCs for 87 systems with suspected spectral types in the range O9-B1, this study identified 20 new eclipsing binaries, including 13 previously unknown variable systems and 2 nonthermal contact binaries. Additionally, new LC classifications were reported for 30 systems, and novel features such as eccentricity and heartbeat phenomena were discovered in many targets.
The YMDB catalog offers a reliable resource of high-quality LCs, serving as a valuable asset for the astronomical community. The primary results of this study are documented in Table <ref>, which lists the 30 confirmed members of the YMDB. These systems feature detached LCs and have at least one component with a spectral classification within the specified range. For the 25 systems that show potential yet require further spectroscopic verification, details can be found in Table <ref>. The 32 systems that do not qualify for the catalog due to nondetached configurations or incompatible spectral types are included in Table <ref>.
It is our desire to further calibrate this method and publish a semiautomatic pipeline for public use.
RG acknowledges support from grant PICT 2019-0344. JIA acknowledges the financial support of DIDULS/ULS, through the project PR2324063. This research made use of Lightkurve, a Python package for Kepler and TESS data analysis (Lightkurve Collaboration, 2018). This work made use of Astropy:[http://www.astropy.org] a community-developed core Python package and an ecosystem of tools and resources for astronomy <cit.>.
[Albrecht et al.(2011)Albrecht, Winn, Carter, Snellen, &
de Mooij]2011ApJ...726...68A
Albrecht, S., Winn, J. N., Carter, J. A., Snellen, I. A. G., & de
Mooij, E. J. W. 2011, , 726, 68
[Alexander et al.(2016)Alexander, Hanes, Povich, &
McSwain]2016AJ....152..190A
Alexander, M. J., Hanes, R. J., Povich, M. S., & McSwain, M. V. 2016,
, 152, 190
[Alfonso-Garzón et al.(2012)Alfonso-Garzón, Domingo,
Mas-Hesse, & Giménez]2012A A...548A..79A
Alfonso-Garzón, J., Domingo, A., Mas-Hesse, J. M., & Giménez,
A. 2012, , 548, A79
[Astropy Collaboration et al.(2022)Astropy Collaboration,
Price-Whelan, Lim, Earl, Starkman, Bradley, Shupe, Patil,
Corrales, Brasseur, Nöthe, Donath, Tollerud, Morris,
Ginsburg, Vaher, Weaver, Tocknell, Jamieson, van Kerkwijk,
Robitaille, Merry, Bachetti, Günther, Aldcroft,
Alvarado-Montes, Archibald, Bódi, Bapat, Barentsen,
Bazán, Biswas, Boquien, Burke, Cara, Cara, Conroy,
Conseil, Craig, Cross, Cruz, D'Eugenio, Dencheva, Devillepoix,
Dietrich, Eigenbrot, Erben, Ferreira, Foreman-Mackey, Fox,
Freij, Garg, Geda, Glattly, Gondhalekar, Gordon, Grant,
Greenfield, Groener, Guest, Gurovich, Handberg, Hart,
Hatfield-Dodds, Homeier, Hosseinzadeh, Jenness, Jones, Joseph,
Kalmbach, Karamehmetoglu, Kałuszyński, Kelley, Kern,
Kerzendorf, Koch, Kulumani, Lee, Ly, Ma, MacBride, Maljaars,
Muna, Murphy, Norman, O'Steen, Oman, Pacifici, Pascual,
Pascual-Granado, Patil, Perren, Pickering, Rastogi, Roulston,
Ryan, Rykoff, Sabater, Sakurikar, Salgado, Sanghi, Saunders,
Savchenko, Schwardt, Seifert-Eckert, Shih, Jain, Shukla, Sick,
Simpson, Singanamalla, Singer, Singhal, Sinha, Sipőcz,
Spitler, Stansby, Streicher, Šumak, Swinbank, Taranu,
Tewary, Tremblay, de Val-Borro, Van Kooten, Vasović, Verma,
de Miranda Cardoso, Williams, Wilson, Winkel, Wood-Vasey, Xue,
Yoachim, Zhang, Zonca, & Astropy Project
Contributors]2022ApJ...935..167A
Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022,
, 935, 167
[Avvakumova et al.(2013)Avvakumova, Malkov, &
Kniazev]2013AN....334..860A
Avvakumova, E. A., Malkov, O. Y., & Kniazev, A. Y. 2013, Astronomische
Nachrichten, 334, 860
[Barbá et al.(2017)Barbá, Gamen, Arias, &
Morrell]2017IAUS..329...89B
Barbá, R. H., Gamen, R., Arias, J. I., & Morrell, N. I. 2017, in
The Lives and Death-Throes of Massive Stars, ed. J. J. Eldridge, J. C.
Bray, L. A. S. McClelland, & L. Xiao, Vol. 329, 89–96
[Barbá et al.(2022)Barbá, Gamen, Martín-Ravelo,
Arias, & Morrell]2022MNRAS.516.1149B
Barbá, R. H., Gamen, R. C., Martín-Ravelo, P., Arias, J. I.,
& Morrell, N. I. 2022, , 516, 1149
[Bassino et al.(1982)Bassino, Dessaunet, Muzzio, &
Waldhausen]1982MNRAS.201..885B
Bassino, L. P., Dessaunet, V. H., Muzzio, J. C., & Waldhausen, S.
1982, , 201, 885
[Bodensteiner et al.(2020)Bodensteiner, Shenar, &
Sana]2020A A...641A..42B
Bodensteiner, J., Shenar, T., & Sana, H. 2020, , 641, A42
[Borkovits et al.(2022)Borkovits, Rappaport, Toonen, Moe,
Mitnyan, & Csányi]2022MNRAS.515.3773B
Borkovits, T., Rappaport, S. A., Toonen, S., et al. 2022, , 515,
3773
[Cannon & Pickering(1921)]1921AnHar..96....1C
Cannon, A. J. & Pickering, E. C. 1921, Annals of Harvard College
Observatory, 96, 1
[Chini et al.(2012)Chini, Hoffmeister, Nasseri, Stahl, &
Zinnecker]2012MNRAS.424.1925C
Chini, R., Hoffmeister, V. H., Nasseri, A., Stahl, O., & Zinnecker,
H. 2012, , 424, 1925
[Crampton(1971)]1971AJ.....76..260C
Crampton, D. 1971, , 76, 260
[Crampton(1972)]1972MNRAS.158...85C
Crampton, D. 1972, , 158, 85
[Crampton & Fisher(1974)]1974PDAO...14..283C
Crampton, D. & Fisher, W. A. 1974, Publications of the Dominion
Astrophysical Observatory Victoria, 14, 283
[de Mink et al.(2011)de Mink, Langer, &
Izzard]2011BSRSL..80..543D
de Mink, S. E., Langer, N., & Izzard, R. G. 2011, Bulletin de la Societe
Royale des Sciences de Liege, 80, 543
[Ekström(2021)]2021FrASS...8...53E
Ekström, S. 2021, Frontiers in Astronomy and Space Sciences, 8, 53
[Feast et al.(1961)Feast, Stoy, Thackeray, &
Wesselink]1961MNRAS.122..239F
Feast, M. W., Stoy, R. H., Thackeray, A. D., & Wesselink, A. J. 1961,
, 122, 239
[Feast & Thackeray(1963)]1963MmRAS..68..173F
Feast, M. W. & Thackeray, A. D. 1963, , 68, 173
[Gamen et al.(2007)Gamen, Barbá, Morrell, Arias,
Maíz Apellániz, Sota, Walborn, &
Alfaro]2007BAAA...50..105G
Gamen, R., Barbá, R., Morrell, N., et al. 2007, Spectroscopic
monitoring of Southern Galactic O and WN stars: State of the Art in 2007
[Gamen et al.(2008)Gamen, Barbá, Morrell, Arias, &
Maíz Apellániz]2008RMxAC..33...54G
Gamen, R., Barbá, R. H., Morrell, N. I., Arias, J., & Maíz
Apellániz, J. 2008, in Revista Mexicana de Astronomia y Astrofisica
Conference Series, Vol. 33, Revista Mexicana de Astronomia y Astrofisica
Conference Series, 54–54
[Garrison et al.(1983)Garrison, Schild, &
Hiltner]1983ApJS...52....1G
Garrison, R. F., Schild, R. E., & Hiltner, W. A. 1983, , 52, 1
[Georgelin et al.(1973)Georgelin, Georgelin, &
Roux]1973AnA....25..337G
Georgelin, Y. M., Georgelin, Y. P., & Roux, S. 1973, , 25, 337
[Herrero et al.(1992)Herrero, Kudritzki, Vilchez, Kunze,
Butler, & Haser]1992A A...261..209H
Herrero, A., Kudritzki, R. P., Vilchez, J. M., et al. 1992, , 261,
209
[Houk(1978)]1978mcts.book.....H
Houk, N. 1978, Michigan catalogue of two-dimensional spectral types for the
HD stars
[Houk & Cowley(1975)]1975mcts.book.....H
Houk, N. & Cowley, A. P. 1975, University of Michigan Catalogue of
two-dimensional spectral types for the HD stars. Volume I. Declinations -90° to -53°.
[Khaliullin et al.(2006)Khaliullin, Antipin, &
Khaliullina]2006AstL...32..772K
Khaliullin, K. F., Antipin, S. V., & Khaliullina, A. I. 2006, Astronomy
Letters, 32, 772
[Kim et al.(2018)Kim, Kreiner, Zakrzewski, Ogłoza,
Kim, & Jeong]2018ApJS..235...41K
Kim, C. H., Kreiner, J. M., Zakrzewski, B., et al. 2018, , 235, 41
[Kiminki & Smith(2018)]2018MNRAS.477.2068K
Kiminki, M. M. & Smith, N. 2018, , 477, 2068
[Lee et al.(1993)Lee, Sung, Koch, Hrivnak, Bradstreet,
Corcoran, Mitchell, & Blitzstein]1993ASPC...38..239L
Lee, W. B., Sung, E. C., Koch, R. H., et al. 1993, in Astronomical
Society of the Pacific Conference Series, Vol. 38, New Frontiers in Binary
Star Research, ed. K.-C. Leung & I.-S. Nha, 239
[Li(2021)]2021ApJS..253...54L
Li, G.-W. 2021, , 253, 54
[Lightkurve Collaboration et al.(2018)Lightkurve Collaboration,
Cardoso, Hedges, Gully-Santiago, Saunders, Cody, Barclay, Hall,
Sagear, Turtelboom, Zhang, Tzanidakis, Mighell, Coughlin, Bell,
Berta-Thompson, Williams, Dotson, & Barentsen]2018ascl.soft12013L
Lightkurve Collaboration, Cardoso, J. V. d. M., Hedges, C., et al.
2018, Lightkurve: Kepler and TESS time series analysis in Python,
Astrophysics Source Code Library
[Loden et al.(1976)Loden, Loden, Nordstrom, &
Sundman]1976AnAS...23..283L
Loden, L. O., Loden, K., Nordstrom, B., & Sundman, A. 1976, , 23,
283
[Lu(1985)]1985PASP...97..428L
Lu, W. 1985, , 97, 428
[Luger et al.(2016)Luger, Agol, Kruse, Barnes, Becker,
Foreman-Mackey, & Deming]2016AJ....152..100L
Luger, R., Agol, E., Kruse, E., et al. 2016, , 152, 100
[Luger et al.(2018)Luger, Kruse, Foreman-Mackey, Agol, &
Saunders]2018AJ....156...99L
Luger, R., Kruse, E., Foreman-Mackey, D., Agol, E., & Saunders, N.
2018, , 156, 99
[Lynga(1964)]1964MeLuS.141....1L
Lynga, G. 1964, Meddelanden fran Lunds Astronomiska Observatorium Serie II,
141, 1
[Maíz Apellániz et al.(2016)Maíz Apellániz,
Sota, Arias, Barbá, Walborn, Simón-Díaz, Negueruela,
Marco, Leão, Herrero, Gamen, & Alfaro]2016ApJS..224....4M
Maíz Apellániz, J., Sota, A., Arias, J. I., et al. 2016,
, 224, 4
[Maíz Apellániz et al.(2019a)Maíz
Apellániz, Trigueros Páez, Jiménez Martínez,
Barbá, Simón-Díaz, Pellerin, Negueruela, & Rodrigo
Souza Leão]2019hsax.conf..420M
Maíz Apellániz, J., Trigueros Páez, E., Jiménez
Martínez, I., et al. 2019a, in Highlights on Spanish
Astrophysics X, ed. B. Montesinos, A. Asensio Ramos, F. Buitrago,
R. Schödel, E. Villaver, S. Pérez-Hoyos, &
I. Ordóñez-Etxeberria, 420–420
[Maíz Apellániz et al.(2019b)Maíz
Apellániz, Trigueros Páez, Negueruela, Barbá,
Simón-Díaz, Lorenzo, Sota, Gamen, Fariña, Salas,
Caballero, Morrell, Pellerin, Alfaro, Herrero, Arias, &
Marco]2019AnA...626A..20M
Maíz Apellániz, J., Trigueros Páez, E., Negueruela, I.,
et al. 2019b, , 626, A20
[Martín-Ravelo et al.(2021)Martín-Ravelo, Barbá,
Morrell, Arias, & Gamen]2021mobs.confE..36M
Martín-Ravelo, P., Barbá, Rodolfo, H., Morrell, N., Arias, J.,
& Gamen, R. 2021, in MOBSTER-1 virtual conference: Stellar Variability as
a Probe of Magnetic Fields in Massive Stars, 36
[Martins & Palacios(2013)]2013A A...560A..16M
Martins, F. & Palacios, A. 2013, , 560, A16
[Mason et al.(2001)Mason, Wycoff, Hartkopf, Douglass, &
Worley]2001AJ....122.3466M
Mason, B. D., Wycoff, G. L., Hartkopf, W. I., Douglass, G. G., &
Worley, C. E. 2001, , 122, 3466
[Munari & Tomasella(1999)]1999A A...343..806M
Munari, U. & Tomasella, L. 1999, , 343, 806
[Muzzio & Orsatti(1977)]1977AJ.....82..474M
Muzzio, J. C. & Orsatti, A. M. 1977, , 82, 474
[O'Connell(1951)]1951PRCO....2...85O
O'Connell, D. J. K. 1951, Publications of the Riverview College Observatory,
2, 85
[Plaskett(1908)]1908ApJ....28..266P
Plaskett, J. S. 1908, , 28, 266
[Pojmanski(1998)]1998AcA....48...35P
Pojmanski, G. 1998, , 48, 35
[Pourbaix et al.(2004)Pourbaix, Tokovinin, Batten, Fekel,
Hartkopf, Levato, Morrell, Torres, & Udry]2004A A...424..727P
Pourbaix, D., Tokovinin, A. A., Batten, A. H., et al. 2004, , 424,
727
[Pozo Nuñez et al.(2019)Pozo Nuñez, Chini, Barr
Domínguez, Fein, Hackstein, Pietrzyński, &
Murphy]2019MNRAS.490.5147P
Pozo Nuñez, F., Chini, R., Barr Domínguez, A., et al. 2019,
, 490, 5147
[Sana et al.(2012)Sana, de Mink, de Koter, Langer,
Evans, Gieles, Gosset, Izzard, Le Bouquin, &
Schneider]2012Sci...337..444S
Sana, H., de Mink, S. E., de Koter, A., et al. 2012, Science, 337, 444
[Sawyer(1887)]1887AJ......7..116S
Sawyer, E. F. 1887, , 7, 116
[Shi et al.(2022)Shi, Qian, & Li]2022ApJS..259...50S
Shi, X.-d., Qian, S.-b., & Li, L.-J. 2022, , 259, 50
[Simón-Díaz et al.(2011)Simón-Díaz, Castro,
Garcia, Herrero, & Markova]2011BSRSL..80..514S
Simón-Díaz, S., Castro, N., Garcia, M., Herrero, A., &
Markova, N. 2011, Bulletin de la Societe Royale des Sciences de Liege, 80,
514
[Sota et al.(2014)Sota, Maíz Apellániz, Morrell,
Barbá, Walborn, Gamen, Arias, & Alfaro]2014ApJS..211...10S
Sota, A., Maíz Apellániz, J., Morrell, N. I., et al. 2014,
, 211, 10
[Sota et al.(2011)Sota, Maíz Apellániz, Walborn,
Alfaro, Barbá, Morrell, Gamen, & Arias]2011ApJS..193...24S
Sota, A., Maíz Apellániz, J., Walborn, N. R., et al. 2011,
, 193, 24
[Southworth & Bowman(2022)]2022MNRAS.513.3191S
Southworth, J. & Bowman, D. M. 2022, , 513, 3191
[Thompson et al.(2012)Thompson, Everett, Mullally,
Barclay, Howell, Still, Rowe, Christiansen, Kurtz, Hambleton,
Twicken, Ibrahim, & Clarke]2012ApJ...753...86T
Thompson, S. E., Everett, M., Mullally, F., et al. 2012, , 753, 86
[Trigueros Páez et al.(2021)Trigueros Páez, Barbá,
Negueruela, Maíz Apellániz, Simón-Díaz, &
Holgado]2021A A...655A...4T
Trigueros Páez, E., Barbá, R. H., Negueruela, I., et al. 2021,
, 655, A4
[Turner(1980)]1980ApJ...235..146T
Turner, D. G. 1980, , 235, 146
[Vijapurkar & Drilling(1993)]1993ApJS...89..293V
Vijapurkar, J. & Drilling, J. S. 1993, , 89, 293
[Walborn & Fitzpatrick(1990)]1990PASP..102..379W
Walborn, N. R. & Fitzpatrick, E. L. 1990, , 102, 379
[Weidner & Vink(2010)]2010A A...524A..98W
Weidner, C. & Vink, J. S. 2010, , 524, A98
[Welsh et al.(2011)Welsh, Orosz, Aerts, Brown,
Brugamyer, Cochran, Gilliland, Guzik, Kurtz, Latham, Marcy,
Quinn, Zima, Allen, Batalha, Bryson, Buchhave, Caldwell,
Gautier, Howell, Kinemuchi, Ibrahim, Isaacson, Jenkins, Prsa,
Still, Street, Wohler, Koch, & Borucki]2011ApJS..197....4W
Welsh, W. F., Orosz, J. A., Aerts, C., et al. 2011, , 197, 4
§ FOLDED LIGHT CURVES FOR ALL 87 SYSTEMS ANALYZED IN THIS STUDY
The appendix presents the LCs for all systems analyzed in this study. Figure <ref> shows the LCs for the confirmed systems, Figure <ref> displays the LCs for the candidate systems, and Figure <ref> includes the LCs for the unqualified systems.
arXiv:2409.03747v1 [quant-ph] (5 Sep 2024)
Hybrid Oscillator-Qubit Quantum Processors: Simulating Fermions, Bosons, and Gauge Fields
Eleanor Crane, Kevin C. Smith, Teague Tomesh, Alec Eickbusch, John M. Martyn, Stefan Kühn, Lena Funcke, Michael Austin DeMarco, Isaac L. Chuang, Nathan Wiebe, Alexander Schuckert, Steven M. Girvin
Department of Physics, Co-Design Center for Quantum Advantage, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Joint Quantum Institute and Joint Center for Quantum Information and Computer Science,
University of Maryland and NIST, College Park, Maryland 20742, USA
Brookhaven National Laboratory, Upton, New York 11973, USA
Yale Quantum Institute, PO Box 208 334, 17 Hillhouse Ave, New Haven, CT 06520-8263, USA
Departments of Applied Physics and Physics, Yale University, New Haven, CT 06511, USA
Department of Computer Science, Princeton University, Princeton, NJ 08544, USA
Infleqtion, Chicago, IL 60604, USA
Present address: Google Quantum AI, Santa Barbara, CA
Yale Quantum Institute, PO Box 208 334, 17 Hillhouse Ave, New Haven, CT 06520-8263, USA
Departments of Applied Physics and Physics, Yale University, New Haven, CT 06511, USA
Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
The NSF AI Institute for Artificial Intelligence and Fundamental Interactions
CQTA, Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, 15738 Zeuthen, Germany
Transdisciplinary Research Area “Building Blocks of Matter and Fundamental Interactions” (TRA Matter) and Helmholtz Institute for Radiation and Nuclear Physics (HISKP),
University of Bonn, Nussallee 14-16, 53115 Bonn, Germany
Department of Physics, Co-Design Center for Quantum Advantage, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
The NSF AI Institute for Artificial Intelligence and Fundamental Interactions
Brookhaven National Laboratory, Upton, New York 11973, USA
Department of Physics, Co-Design Center for Quantum Advantage, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Department of Physics, Co-Design Center for Quantum Advantage, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Department of Computer Science, University of Toronto, Canada
Pacific Northwest National Laboratory, Richland WA, USA
Canadian Institute for Advanced Studies, Toronto, Canada
Joint Quantum Institute and Joint Center for Quantum Information and Computer Science,
University of Maryland and NIST, College Park, Maryland 20742, USA
Yale Quantum Institute, PO Box 208 334, 17 Hillhouse Ave, New Haven, CT 06520-8263, USA
Departments of Applied Physics and Physics, Yale University, New Haven, CT 06511, USA
†Corresponding authors: [email protected], [email protected], [email protected]
§ ABSTRACT
We develop a hybrid oscillator-qubit processor framework for quantum simulation of strongly correlated fermions and bosons that avoids the boson-to-qubit mapping overhead encountered in qubit hardware. This framework gives exact decompositions of particle interactions such as density-density terms and gauge-invariant hopping, as well as approximate methods based on the Baker-Campbell-Hausdorff formulas, including the magnetic field term for the U(1) quantum link model in (2+1)D. We use this framework to show how to simulate dynamics using Trotterisation, perform ancilla-free partial error detection using Gauss's law, measure non-local observables, estimate ground state energies using an oscillator-qubit variational quantum eigensolver as well as quantum signal processing, and we numerically study the influence of hardware errors in circuit QED experiments. To show the advantages over all-qubit hardware, we perform an end-to-end comparison of the gate complexity for the gauge-invariant hopping term and find an improvement of the asymptotic scaling with the boson number cutoff S from 𝒪(log(S)^2) to 𝒪(1) in our framework as well as, for bosonic matter, a constant-factor improvement of better than 10^4. We also find an improvement from 𝒪(log(S)) to 𝒪(1) for the U(1) magnetic field term. While our work focusses on an implementation in superconducting hardware, our framework can also be used in trapped-ion and neutral-atom hardware. This work establishes digital quantum simulation with hybrid oscillator-qubit hardware as a viable and advantageous method for the study of qubit-boson models in materials science, chemistry, and high-energy physics.
Hybrid Oscillator-Qubit Quantum Processors: Simulating Fermions, Bosons, and Gauge Fields
Steven M. Girvin0000-0002-6470-5494^†
September 9, 2024
========================================================================================
§ INTRODUCTION
Underlying many mechanisms in the natural world is the interaction between fermions and bosons, such as phonons mediating electron attraction, leading to superconductivity, or the down-conversion of light during photosynthesis. Examples of fermionic particles include electrons, holes, fermionic atoms, baryons, protons, neutrons, quarks, and leptons; examples of bosonic particles include photons, phonons, excitons, plasmons, Cooper pairs, bosonic atoms, mesons, W-bosons, Z-bosons, Higgs bosons, and gluons. When the interactions among these particles are strong, analytical and numerical methods fail, leading to a large range of open problems in high-energy physics, chemistry and condensed-matter physics. In particular, quantum Monte-Carlo simulations are effective only for static equilibrium properties (or imaginary-time dynamics), and then only in special cases where there is no fermion sign problem. Tensor network methods work well only for systems with limited entanglement. Some of the hardest examples to simulate can be found in the lattice gauge theory (LGT) formulation of the standard model of particle physics <cit.>. LGTs constitute an especially hard problem because, in addition to containing fermionic or bosonic matter, they also contain gauge fields, which require the implementation of N-body interactions with N>2, such as the gauge-invariant kinetic energy (N=3) and the plaquette magnetic field term (N=4).
Quantum simulators <cit.> offer an attractive alternative to study these problems as they can naturally host entangled states which can be strongly correlated <cit.> and they can be used to study real-time equilibrium <cit.> and non-equilibrium dynamics <cit.>. Analog simulations of LGTs have been proposed <cit.> and carried out using, for example, cold atom <cit.>, ion trap <cit.> and donors in silicon <cit.> platforms, but analog quantum simulators are limited in terms of the Hamiltonians that can be implemented and observables that can be measured. In particular, the multi-body interactions appearing in LGTs make their analog simulation challenging. Separately, while proof-of-principle digital simulations of LGTs have been performed, the overheads encountered in mapping bosons and fermions to qubits make these problems challenging to implement on qubit-based discrete-variable (DV) hardware <cit.>. While digital qudit-based quantum processors offer a promising approach toward efficiently encoding the large Hilbert space of bosons and gauge fields <cit.>, the qudit dimensions thus far explored offer only a slight increase in Hilbert space dimension over qubits. More importantly, neither qubit nor qudit systems natively implement the bosonic operations necessary for creating bosonic Hamiltonian terms – in particular, synthesis of the occupation-dependent square-root factors needed for bosonic raising and lowering operators in qubit-based encodings leads to a large overhead for simulating bosonic models <cit.>. Simulation of purely bosonic (1+1)D theories using continuous-variable hardware has been proposed, for example using microwave cavities <cit.> or photonic devices <cit.>, but such an approach does not easily facilitate the study of models involving interactions between bosons and fermionic matter.
In this work, we investigate the promise of hybrid oscillator-qubit quantum processors <cit.> for solving the above challenges. In particular, we leverage this hardware to realize a straightforward mapping between model and computational degrees of freedom, as illustrated in Fig. <ref>. By encoding both the bosonic matter and gauge fields in native oscillator modes, we avoid the costly overheads incurred by boson-to-qubit encodings. More specifically, the primary goal of our approach is to explore the advantage given by the native availability of bosonic gate sets, a crucial component for simulation of Hamiltonians containing bosons, yet costly to implement in qubit-based hardware. While the universal control of oscillator modes is challenging due to the equal level spacing of linear oscillators, here this issue is resolved by leveraging a hybrid oscillator-qubit architecture with non-linearity provided by the qubits. To that end, we build on the universal oscillator-qubit gate sets recently developed in a companion paper <cit.>. In particular, we focus on a circuit quantum electrodynamics (cQED) architecture with high-Q 3D cavities, although many of the hybrid oscillator-qubit operations and techniques we discuss can be implemented in other platforms such as ion traps <cit.>, where a similar oscillator-qubit approach for gauge theories has been proposed <cit.>, or neutral atoms in tweezer arrays <cit.>.
We focus on the task of simulating dynamics, i.e., evolving an initial state under a generally time-dependent Hamiltonian Ĥ(t), using a digital approach in which we compile evolution under that Hamiltonian into elementary, native oscillator-qubit operations. Many quantum algorithms can be formulated with (controlled) time evolution under the Hamiltonian <cit.> as a fundamental subroutine. The goal of our work is to find these compilations for the most important models in quantum many-body physics involving fermions, bosons and gauge fields, and to show that this decomposition leads to advantages compared to compilations using qubit-only hardware.
This work is structured as follows. We start by reviewing elementary concepts (Sec. <ref>). We then present our main results on compilation strategies. This is divided into fundamental primitives (Sec. <ref>), methods for matter fields (Sec. <ref>), and finally gauge-fields and their interactions with matter (Sec. <ref>). We then perform an end-to-end resource estimation of our methods and compare to all-qubit hardware (Sec. <ref>). Finally, we introduce methods to measure observables, algorithms to perform dynamics simulation and ground-state preparation, and numerical benchmarks (Sec. <ref>), and conclude (Sec. <ref>).
As a guide to the reader, here is some more detail about each of the seven sections of this paper, in sequence. In Sec. <ref>, i.e., the remainder of this introductory section, we give context by broadly introducing LGTs (Sec. <ref>) and the challenges associated with their simulation. We choose paradigmatic models <cit.> as a case study, the ℤ_2 lattice gauge theory and the U(1) quantum link model, which we later use in Sec. <ref> to benchmark oscillator-qubit algorithms. We then introduce the superconducting hybrid oscillator-qubit hardware comprising transmon qubits coupled to high-Q microwave cavity resonators on which we base our study (Sec. <ref>). Finally, we introduce Trotter algorithms (Sec. <ref>), which are the basic subroutines used in our oscillator-qubit algorithms.
In Sec. <ref>, we introduce a toolset of hybrid oscillator-qubit compilation strategies used throughout this work. We review previously introduced methods, but also develop exact methods for synthesizing bosonic parity-dependent and density-density operations, as well as oscillator-mediated qubit-qubit entangling gates that we extend to multi-qubit gates.
In Sec. <ref> we leverage this toolset to develop compilation strategies for implementing the time evolution operator for common bosonic (Sec. <ref>) and fermionic (Sec. <ref>) Hamiltonian terms, including single-site potentials, nearest-neighbor and onsite interactions, and hopping terms in hybrid superconducting hardware. We consider both (1+1)D and (2+1)D. For bosonic matter, we utilize the direct mapping of a bosonic matter site to a single mode of a microwave cavity. For fermionic matter, we leverage a Jordan-Wigner mapping using either transmon qubits or dual-rail qubits in microwave cavities.
In Sec. <ref>, we present compilation strategies for implementing pure gauge and gauge-matter Hamiltonian terms, specializing to the case of ℤ_2 and U(1) gauge fields in (2+1)D.
In Sec. <ref> we introduce explicit Trotter circuits for the ℤ_2 LGT and the U(1) quantum link model and perform an analysis of the Trotter errors. From this, we derive the end-to-end asymptotic scaling of the number of gates required for a dynamics simulation, including the errors encountered in the compilation. We then perform an explicit gate count comparison between our oscillator-qubit approach and an all-qubit simulation of the bosonic hopping Hamiltonian. To do so, we develop a qubit algorithm in the Fock-binary encoding to simulate bosonic hopping.
In Sec. <ref>, we adapt algorithms for ground-state preparation and dynamics to our oscillator-qubit approach, utilising the compilation methods from the previous sections. We use the (1+1)D ℤ_2 LGT and U(1) quantum link model as example models for numerical benchmarks using Bosonic Qiskit <cit.>. In particular, we develop an oscillator-qubit Variational Quantum Eigensolver (VQE) ansatz (Sec. <ref>, <ref>). Within the VQE approach and in the context of these models, we investigate the effects of hardware and shot noise on VQE ground-state preparation (Sec. <ref>) and demonstrate the utility of post-selection upon Gauss's law in improving ground state preparation. Following this, we show how to measure observables in these ground states, in particular non-local observables such as string order correlators and the superfluid stiffness (Sec. <ref>). Finally, we discuss ground state preparation with quantum signal processing (QSP) and compare the error in ground state preparation as a function of the time to solution between QSP and VQE (Sec. <ref>).
In Sec. <ref> we conclude with a summary of our main results, a discussion of the prospects for quantum advantage with our approach, and an outlook on some of the future research directions that are opened up by this work.
Before continuing, we briefly comment on vocabulary and notation used throughout this work. We will use the terms bosonic mode, qumode, oscillator, resonator, and cavity mode interchangeably. In general, we will refer to certain hybrid oscillator-qubit gates as being `controlled', meaning that the generator of the gate contains either a qubit or a mode number-operator, i.e., the phase is proportional to the number of excitations in the state (the gate does nothing to |0⟩). We generally call gates `conditioned' when they obtain a state-dependent phase, i.e., the generator contains a qubit Ẑ or mode Fock-state-dependent phases, which could for example be implemented using a projector.
§.§ Lattice gauge theories
In the following, we briefly introduce the LGTs we will use as paradigmatic examples, a ℤ_2 LGT and a U(1) quantum link model, which we will show how to compile onto oscillator-qubit hardware in the rest of the study. Here, we introduce the Hamiltonian terms of these models in order to prepare our discussion of how to implement them. In Sec. <ref>, we will introduce their physics in the context of our numerical experiments, with particular emphasis on past work. Note that these models only serve as examples; due to the digital nature of our methods, our study enables the simulation of a wide range of models involving bosons, fermions and gauge fields.
In lattice gauge theories, matter fields reside on sites and gauge fields reside on the links between sites, usually (for the case of (2+1)D) in a square lattice geometry.
ℤ_2 LGT. In this model, the gauge fields are spin-1/2 degrees of freedom. The Hamiltonian is written as follows <cit.>:
Ĥ_ℤ_2 = - g ∑_⟨ i,j⟩X̂_i,j -B ∑_i,j,k,l∈□Ẑ_i,jẐ_j,kẐ_k,lẐ_l,i
- J ∑_⟨ i,j⟩( m̂^†_iẐ_i,jm̂_j + h.c.) + U ∑_in̂_i^2,
where i,j indicates the link between lattice sites i and j on a square lattice, ∑_⟨ i,j⟩ indicates a sum over all pairs i,j that are nearest neighbours, and □ is a plaquette of four sites (see Fig. <ref>a). The annihilation operator at matter site i is given by m̂_i and the density or occupation by n̂_i =m̂^†_i m̂_i. The matter can be either bosonic (m̂=â) or fermionic (m̂=ĉ), with corresponding commutation/anticommutation relations [â_i,â_j^†]=δ_ij and {ĉ_i,ĉ_j^†}=δ_ij. X̂_i,j, Ẑ_i,j are Pauli operators that act on the gauge field at the link between sites i and j. The first, second, and third terms in Eq. (<ref>) represent the electric field (coupling g), magnetic field (coupling B), and gauge-invariant hopping (coupling J), respectively. The last term is an on-site interaction of strength U, which is the simplest possible interaction term for bosonic matter, but does not contribute as an interaction term for (spinless) fermions since n̂^2=n̂, due to the Pauli exclusion principle.
The defining feature of gauge theories is invariance under local gauge transformations. This leads to the presence of Gauss's law, dividing up the Hilbert space of the theory into sectors which are not coupled by the Hamiltonian. This constraint is in analogy to the relation between the integral of the electric field flux through a surface and the charge enclosed by the surface in classical electrodynamics, with a product of field operators replacing the integral; (-1)^n̂_i takes the role of the charge. In quantum mechanics, Gauss's law is formulated as a constraint on the physical states |ψ⟩: for all i, Ĝ_i|ψ⟩ = q_i|ψ⟩, where q_i∈Z and
Ĝ_i = ∏_⟨ i,j⟩X̂_i,j(-1)^n̂_i,
in (2+1)D, where the product is over the four links ⟨ i,j⟩ that connect to site i, see Fig. <ref>a and c. We choose q_i=1 throughout this work. For open boundary conditions, terminated by matter sites, the Gauss's law constraints on the boundary are fixed by imagining virtual qubit sites around the system and fixing them in an X̂ product state. We set X̂=1 for these qubits when necessary.
U(1) quantum link model. The U(1) LGT, which is equivalent to quantum electrodynamics in the limit of vanishing lattice spacing, hosts continuous gauge fields. The quantum link formalism <cit.> replaces the gauge fields with discrete spin-S degrees of freedom with commutation relations [Ŝ^z_i, Ŝ^+_j]=δ_ijŜ^+_i and (Ŝ^z_i)^2+1/2(Ŝ^+_iŜ_i^-+Ŝ^-_iŜ_i^+)=S(S+1) <cit.>, where S is the spin length, leading to a local Hilbert space dimension of 2S+1. The physics of continuous U(1) gauge fields is recovered for S→∞ <cit.>. Choosing (large) finite S, this encoding approximately preserves the bosonic algebra. The Hamiltonian in this formalism reads <cit.>
Ĥ_U(1) = g^2/2∑_⟨i,j⟩(Ŝ^z_i,j +τ/2π)^2
-1/4g^2(S(S+1))^2∑_i,j,k,l∈□(Ŝ^+_i,jŜ^+_j,kŜ^-_k,lŜ^-_l,i +h.c.)
+ J/2√(S(S+1))∑_⟨i,j⟩(m̂^†_iŜ^+_i,jm̂_j + h.c.)
+ M∑_i(-1)^i n̂_i,
where the first term is the electric field term, the second term is the magnetic field term, and the third is the gauge-invariant hopping. g is the gauge-matter coupling strength and J is the hopping strength. In one spatial dimension, the constant background electric field τ corresponds to a topological term <cit.>. The last term is the staggered mass term with mass M, which comes from the mapping of the continuous Dirac fermion fields onto the lattice and is therefore only necessary for fermions <cit.>. We set the lattice spacing to unity here. Also note that in order to take the above Hamiltonian to the continuum limit of QED, the phases in the hopping and staggered mass terms need to be slightly altered, cf. <cit.>. This does not introduce any additional challenges in our implementation, so we choose the above convention for simplicity.
In addition, the physical states of the Hamiltonian, i.e. the gauge-invariant ones, have to fulfill Gauss's law: for all 𝐢, Ĝ_𝐢|ψ⟩ = q_𝐢|ψ⟩, where q_𝐢∈Z and
Ĝ_𝐢 = Ŝ^z_𝐢-𝐞_x+Ŝ^z_𝐢-𝐞_y - Ŝ^z_𝐢+𝐞_x-Ŝ^z_𝐢+𝐞_y - Q̂_𝐢.
To clarify the spatial structure of Gauss's law, we specify the lattice site as a two-dimensional vector 𝐢 in this paragraph only. 𝐞_x/y is a unit vector in the x/y direction of the lattice, respectively. In the above expression Q̂_𝐢 = m̂^†_𝐢m̂_𝐢 - (1-(-1)^r_𝐢)/2 is the staggered charge operator, where r_𝐢 is the Manhattan distance of the site 𝐢 from the arbitrarily chosen origin. The fixed values q_𝐢 correspond to a choice of static background charge configuration, and for the rest of the paper we restrict ourselves to the sector of vanishing static charges, i.e., q_𝐢 = 0 for all 𝐢.
In order to encode the spin S degrees of freedom representing U(1) gauge fields in the into bosonic modes, we use the Schwinger boson mapping. It consists of two bosonic modes, labelled a and b, which fulfill the constraint n̂^a_i,j + n̂^b_i,j = 2S.
The electric field operator in this encoding is represented by
Ŝ^z_i,j = n̂^a_i,j - n̂^b_i,j/2,
and the electric field raising and lowering operators become
Ŝ^+_i,j = â^†_i,jb̂_i,j,
Ŝ^-_i,j = â_i,jb̂^†_i,j.
Compared to an encoding in just a single oscillator <cit.>, our encoding allows for a smaller occupation of the modes to represent zero electric field, reducing the impact of mode decay.
We show a possible encoding of this model in our architecture in Fig. <ref>d.
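As a lightweight numerical sanity check of this encoding (our own illustration, not part of the original paper; conventions follow the equations above), one can verify with truncated mode operators that the Schwinger-boson construction reproduces the spin algebra, and that the Casimir operator takes the value S(S+1) on the constrained subspace n̂^a + n̂^b = 2S:

import numpy as np

S = 1                          # spin length; physical states satisfy n_a + n_b = 2S
N = 2 * S + 1                  # Fock cutoff per mode (occupations 0..2S)

def ann(dim):                  # truncated bosonic annihilation operator
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

a = np.kron(ann(N), np.eye(N))
b = np.kron(np.eye(N), ann(N))
Sz = (a.conj().T @ a - b.conj().T @ b) / 2
Sp = a.conj().T @ b
Sm = a @ b.conj().T

print(np.allclose(Sz @ Sp - Sp @ Sz, Sp))      # [S^z, S^+] = S^+

# On the constrained subspace n_a + n_b = 2S, the Casimir equals S(S+1):
n_tot = np.diag(a.conj().T @ a + b.conj().T @ b).real
sub = np.where(np.isclose(n_tot, 2 * S))[0]
casimir = Sz @ Sz + (Sp @ Sm + Sm @ Sp) / 2
print(np.allclose(casimir[np.ix_(sub, sub)], S * (S + 1) * np.eye(len(sub))))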
§.§ Circuit QED platform
We now outline the main features of the platform we consider in the rest of the study – a lattice of microwave cavities coupled to superconducting transmon qubits (see Fig. <ref>b). In particular, we discuss the native Hamiltonian of this platform and the gates that can be implemented via microwave pulses.
Within the circuit QED framework, long-lived bosonic modes can be realized in three-dimensional superconducting microwave cavities with typical lifetimes on the order of one to ten milliseconds (ms) in aluminum <cit.>, Fig. <ref>a. Recently, niobium cavities have been used to improve coherence times, with observed single photon lifetimes on the order of 30 ms <cit.> to over 1 s <cit.>. Separately, recent work realising planar microwave resonators, which may be more scalable due to their smaller size and fabrication advantages, have reached 1 ms single photon lifetimes <cit.>. In this platform, it is common to utilize one mode per cavity, although recent work has demonstrated that multi-mode cavities are also feasible <cit.>.
Superconducting transmon qubits are routinely coupled to microwave cavities and used for control and readout via on-chip readout resonators <cit.>. The relaxation T_1 and dephasing times T_2 of transmons when coupled to microwave cavities are typically on the order of 250 μs <cit.>, with some recent coherence times in tantalum-based transmons reaching 500 μs <cit.>. When simulating operation in hardware in section <ref>, we take into account the dominant source of hardware error, qubit decay and dephasing, adopting a conservative value of T_1=T_2=200 μs if not stated otherwise.
The SNAIL (Superconducting Nonlinear Asymmetric Inductive eLement) <cit.> is an attractive choice for realizing a programmable coupling between cavities. The SNAIL consists of three large junctions shunted by one small junction. When biased with a non-zero DC magnetic flux, the SNAIL (or SNAIL array) generates a three-wave mixing Hamiltonian that enables a parametrized beamsplitter interaction of strength g(t) between the cavities while minimizing unwanted nonlinear static interactions, such as the cross-Kerr <cit.>.
Our proposed architecture is an array of `oscillator-qubit pairs,' each comprising a high-Q mode of a cavity dispersively coupled to a transmon qubit (see Fig. <ref>b, top panel). These units are arranged into a 2D square lattice, where adjacent cavities are connected via a SNAIL, which can be used to implement a beamsplitter (see Fig. <ref>b, bottom panel). In the remainder of this work, we represent cavities (qubits) by circles (crosses). This symbolic representation will later prove useful for denoting the various possibilities for mapping systems of fermions, bosons and gauge fields onto this 2D hardware layout.
While this architecture is physically large in size (each cavity is about 50 mm long <cit.>) and might therefore exhibit scaling limitations, planar resonators, which at present exhibit shorter coherence times than 3D cavities, are smaller and therefore more scalable. Thus, the proposed architecture in Fig. <ref>b can also be implemented with planar resonators, and all results presented herein extend to this case (with the caveat that Hamiltonian parameters, loss rates and gate speeds are informed by recent experiments in 3D cavities, and would need to be adjusted to appropriately reflect the planar case).
In this architecture, each transmon-cavity pair in the array is in the strong-dispersive coupling regime <cit.>. In the rotating-wave-approximation, the effective Hamiltonian of the 2D array can be written as:
Ĥ = Ĥ_0 + Ĥ_1
Ĥ_0 = ∑_i=1^Nχ_i/2Ẑ_in̂_i
+ ∑_i=1^N(ε_i(t)â_i^† + Ω_i(t)σ̂_i^+ + h.c.)
+ ∑_⟨i,j⟩(g_i,j(t) â_i^†â_j + h.c.)
Ĥ_1 = ∑_i=1^N K_i n̂_i^2 + ∑_⟨i,j⟩χ_i,jn̂_i n̂_j
Here, â_i (â_i^†) is the annihilation (creation) operator for the ith cavity mode, n̂_i = â_i^†â_i is the corresponding number operator, and σ̂_i^- (σ̂_i^+) is the lowering (raising) operator for the transmon coupled to the ith mode. Ĥ_0 includes the dispersive shift (χ_i), cavity drives (ε_i(t)), transmon Rabi drives (Ω_i(t)), and tunable beamsplitter couplings (g_i,j(t)) implemented via SNAILs. The time-dependence in all parameters can be used to engineer different couplings. Ĥ_1 includes additional unwanted static couplings such as self-Kerr (K_i) and cross-Kerr (χ_i,j). These nonlinear couplings incur contributions from the hybridization of each cavity mode with the transmon and SNAIL couplers. With careful engineering, it is possible to design the components such that the Kerr contributions from transmon and SNAIL cancel or nearly cancel, and χ_i,i+1≈ 0, K_i≈0 could be achieved at a single flux point, with small residual couplings not likely to contribute to the principal results of this work <cit.>. When the beamsplitter interaction is not activated (g_i,j = 0), each oscillator-qubit pair can be independently controlled. For a full discussion of the gates available in the proposed superconducting hybrid oscillator-qubit platform, see Ref. <cit.>. The native gates arising directly from the Hamiltonian of the system are collected in Tab. <ref>. Note that in some parameter regimes, certain transmon-conditioned bosonic operations, such as the conditional displacement (sometimes referred to as `controlled displacement') <cit.>, can be implemented natively.
All phase-space rotations R_i(θ) can be realized `in software' by absorbing them into the phases of the subsequent microwave pulses. This is because all operations are carried out in the rotating frame of the free evolution of the cavity oscillator mode ωâ^†â (each transmon is only ever connected to a single cavity). Therefore, a phase-space rotation can be applied by changing the phase of the subsequent microwave drive carrying out a gate (such as a beamsplitter or displacement) on the corresponding mode. Arbitrary qubit Z rotation gates can similarly be carried out in software.
Arbitrary initial states for each cavity can be prepared using optimal control <cit.>: both qubit and cavity are initialised to the ground state, the qubit prepares the cavity in the required Fock state and is then measured to validate this preparation <cit.>. In particular, Fock states |n⟩ with low photon number can be prepared with typical state-of-the-art Fock preparation infidelities of 10^-2 <cit.>.
As mentioned above, it is possible to effectively turn off the `always on' dispersive interaction using dynamical decoupling. Specifically, the transmons can be driven with Rabi rate Ω_i ≫ χ_i⟨â_i^†â_i⟩ during a time T = 2πk/Ω_i, where k is an integer, to decouple the cavity mode from the transmon and leave the transmons in their initial state. With this, the beamsplitter gate BS_i,j(φ, θ) is performed by simultaneously driving transmons i and j, and driving the SNAIL coupler linking modes i and j such that g_i,j > 0. Alternative schemes based on transmon echoing can also be used to decouple and perform transmon-state-independent beamsplitting <cit.>.
In addition to unitary control, the transmons can be leveraged for measurements of the cavity modes. In particular, it is possible to measure the boson occupation in the mode in a total duration that is logarithmic in the maximum Fock state. This is done by reading out the photon number bit-by-bit in its binary representation. Each bit is mapped onto the transmon qubit dispersively coupled to the cavity, which is measured and then reset <cit.>.
§.§ Trotter simulations
In this work, we focus on a Trotter decomposition of dynamics as our basic simulation algorithm. In such simulations, the time evolution operator exp(-iĤT) until time T with Hamiltonian Ĥ = ∑_γ=1^Γ Ĥ_γ is divided into r time steps of duration t = T/r, i.e., exp(-iĤT) = (exp(-iĤt))^r. If t is chosen sufficiently small, a product formula approximation to exp(-iĤt) can then be employed to decompose the Hamiltonian into unitaries which only act on a few qubits and oscillator modes. For example, a (2p)th-order Trotter formula in general can be written as a sequence of N_exp exponentials of Hamiltonian terms Ĥ_γ_i drawn from {Ĥ_1, …, Ĥ_Γ} and a sequence of times t_i with |t_i| ≤ t that satisfies
exp(-iĤ t)=e^-iĤ_γ_1 t_1⋯ e^-iĤ_γ_N_exp t_N_exp +𝒪(t^2p+1).
There is great freedom in the choice of this decomposition; however, certain orderings can be cheaper, more accurate or more natural.
We choose our decomposition to be given by the local terms inside the sums over lattice sites in many-body Hamiltonians, cf. Eq. (<ref>) and Eq. (<ref>). For example, the first-order Trotter decomposition of the ℤ_2 LGT for U=0 in one spatial dimension, where B=0, is given by
e^-i Ĥ_ℤ_2t=∏_⟨ i,j⟩ e^igt X̂_i,j∏_⟨ i,j⟩ e^iJt ( m̂^†_iẐ_i,jm̂_j + h.c.)+ 𝒪(t^2).
One of the main goals of this paper is to show how to compile unitaries acting on two or more sites, such as the gauge-invariant hopping exp[ iJt ( m̂^†_i Ẑ_i,j m̂_j + h.c. ) ], into the native operations available in circuit QED hardware or other hardware with equivalent instruction set architectures.
In section <ref>, we discuss the errors incurred from the Trotter approximation and their interplay with compilation errors.
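To make the role of the step size concrete, the following self-contained sketch (our own illustration, not from the paper, with toy 4×4 Hamiltonian terms standing in for Ĥ_1 and Ĥ_2) shows how the first-order product-formula error shrinks as the number of Trotter steps r grows:

import numpy as np
from scipy.linalg import expm

# Two non-commuting toy terms standing in for H_1, H_2.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H1 = np.kron(X, X)
H2 = np.kron(Z, np.eye(2))
H = H1 + H2

T = 1.0
exact = expm(-1j * H * T)
for r in [1, 4, 16, 64]:
    t = T / r
    step = expm(-1j * H1 * t) @ expm(-1j * H2 * t)     # first-order Trotter step
    err = np.linalg.norm(np.linalg.matrix_power(step, r) - exact, 2)
    print(r, err)                                      # error falls off as 1/r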
§ IMPLEMENTATION PRIMITIVES
In this section, we introduce the key compilation strategies we use to synthesize oscillator-oscillator, qubit-qubit, and oscillator-qubit gates from native operations. Throughout, we use the word “qubit” to refer either to a transmon qubit or a dual-rail qubit encoded in two cavities
<cit.>, and only distinguish between the two possibilities when clarification is necessary. A summary of the gates we obtain through these compilation strategies is provided in Tab. <ref>. These primitives will then be used in Sections <ref> and <ref> to compile Hamiltonian terms comprising fermions, bosons, and gauge-fields to native operations.
Specifically, in Sec. <ref>, we discuss the bosonic SWAP gate, a particularly useful primitive that enables the synthesis of entangling gates between non-neighbouring modes. In Sec. <ref>, we demonstrate how one can synthesize qubit-conditioned bosonic operations using conditional parity gates and, in Sections <ref> and <ref>, develop strategies to realize bosonic operations conditioned or controlled on the Fock-space occupancy of another mode. Building upon these “Fock-projector-conditioned” and “Fock-projector-controlled” operations, we then discuss how one can apply them in sequence to implement more complex density-controlled gates in Sec. <ref>. We then discuss a technique for implementing oscillator-mediated, multi-qubit gates in Sec. <ref>, and follow this with a discussion on approximate methods for synthesizing multi-mode gates in Sec. <ref>. Finally, we discuss the dual-rail encoding in Sec. <ref> as an alternative scheme for realizing qubits in the proposed hybrid oscillator-qubit architecture.
§.§ Bosonic SWAP
Our architecture in Fig. <ref> is not all-to-all connected, and hence it is useful to be able to implement SWAP operations between the bosonic sites. In particular, bosonic SWAP gates enable quantum communication and entangling operations between remote bosonic modes, and additionally allow for entangling operations between qubits using the bosonic modes as a quantum bus <cit.>.
To that end, a bosonic SWAP gate can be realized using a beamsplitter and a pair of phase-space rotations,
SWAP_i,j = R_i(-π/2)R_j(-π/2)BS_i,j(0,π/2).
with BS_i,j(φ,θ) and R_j(θ) defined in Tab. <ref>. Here, the role of the phase-space rotations is to cancel the spurious phase obtained from the action of the beamsplitter, BS_i,j(0,π/2)|Ψ_i, Ψ_j⟩ = e^-i π/2[n̂_i+n̂_j]|Ψ_j, Ψ_i⟩. As mentioned previously, all such phase-space rotations R_i(θ) can be realized `in software' by absorbing them into the phases of the subsequent microwave drives. In the proposed architecture with SNAILs linking adjacent cavities, SWAP operations have been demonstrated with a duration of around 100 ns and a fidelity of ∼0.999 <cit.>.
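The SWAP identity above is easy to verify numerically. The following sketch (our own check, assuming the conventions R(θ) = e^-iθn̂ and BS(φ,θ) = e^-iθ(e^iφâ^†b̂ + h.c.) for the gates in Tab. <ref>) confirms the decomposition on a truncated two-mode Fock space:

import numpy as np
from scipy.linalg import expm

N = 8                                     # Fock cutoff per mode
ann = np.diag(np.sqrt(np.arange(1, N)), 1)
I = np.eye(N)
a, b = np.kron(ann, I), np.kron(I, ann)
n_tot = a.conj().T @ a + b.conj().T @ b

BS = expm(-1j * (np.pi / 2) * (a.conj().T @ b + a @ b.conj().T))   # BS(0, pi/2)
ROT = expm(1j * (np.pi / 2) * n_tot)      # R_i(-pi/2) R_j(-pi/2)
U = ROT @ BS

def fock(k):
    v = np.zeros(N); v[k] = 1.0
    return v

print(np.allclose(U @ np.kron(fock(1), fock(3)),
                  np.kron(fock(3), fock(1))))          # True: |1,3> -> |3,1>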
As discussed in Sec. <ref>, our proposed architecture assumes native entangling gates between each mode-qubit pair, with beamsplitters available only between adjacent sites. However, as discussed in Ref. <cit.>, it is possible to use a beamsplitter network (i.e., a sequence of bosonic SWAPs) to synthesize non-native oscillator-oscillator and oscillator-qubit gates, e.g., BS_i,j(φ,θ) for any modes i≠j, and CR_i,j(θ) for any transmon i and mode j. Thus, we will henceforth assume access to such non-native gates, abstracting away the underlying bosonic SWAPs.
§.§ Qubit-conditioned bosonic operations
Certain bosonic operations, such as beamsplitters or displacements, can be conditioned on qubits by conjugating them with conditional parity gates, CΠ_i,j <cit.>. The intuition for this is provided by the relation
CΠ_i,jâ_jCΠ^†_i,j = iẐ_i â_j,
where we used the Baker-Campbell-Hausdorff formula
e^ÂB̂ e^-Â=B̂+[Â, B̂]+1/2 ![Â,[Â, B̂]]+…,
and CΠ_i,j is the conditional parity gate defined in Tab. <ref>, experimentally realized by evolving under the dispersive interaction for a time T = π/χ (see Eq. (<ref>)).
It is possible to implement the Hermitian adjoint CΠ_i,j^† by simply appending a bosonic phase-space rotation:
CΠ_i,j^† = R_j(-π) CΠ_i,j = CΠ_i,jR_j(-π).
This is because CΠ_i,j^† = e^iπ/2Ẑ_i n̂_j = e^iπẐ_i n̂_j e^-iπ/2Ẑ_i n̂_j, and e^iπẐ_i n̂_j = cos(πn̂_j)1 + i sin(πn̂_j)Ẑ_i = cos(πn̂_j) = e^iπn̂_j. In effect, applying a CΠ_i,j^† gate requires the application of CΠ_i,j and a phase-space rotation R_j(-π) = R_j(π) which, as explained in Sec. <ref>, can be implemented `in software' via modification of the phases of the subsequent microwave drives.
The relation in Eq. (<ref>) allows us to “convert” the generator of certain bosonic operations into one that depends upon the state of a qubit. For example, the conditional displacement gate can be synthesized from an unconditional displacement and a pair of conditional parity gates via
CD_i,j(α) =e^Ẑ_i(αâ_j^† - α^*â_j)
= e^-iπ/2Ẑ_i n̂_je^i(αâ_j^† + α^*â_j)e^iπ/2Ẑ_i n̂_j
=CΠ_i,jD_i(iα)CΠ_i,j^†,
where α ∈ C. Note the modified phase in the unconditional displacement. This is due to the factor of i inherited from the transformation in Eq. (<ref>). Furthermore, we emphasize that while this sequence produces exactly the conditional displacement gate as defined in Table <ref>, this gate can more efficiently be realized natively using the techniques developed in Ref. <cit.>.
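The conditional-parity sandwich can be checked directly with truncated operators; since [n̂, â] = -â holds exactly after truncation, the identity holds to machine precision. A minimal sketch (ours; conventions as in Tab. <ref>):

import numpy as np
from scipy.linalg import expm

N = 20
a = np.diag(np.sqrt(np.arange(1, N)), 1)
n = a.conj().T @ a
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2)

alpha = 0.3 + 0.1j
CPi = expm(-1j * (np.pi / 2) * np.kron(Z, n))                  # conditional parity
Dmid = expm(np.kron(I2, 1j * (alpha * a.conj().T + np.conj(alpha) * a)))
CD = expm(np.kron(Z, alpha * a.conj().T - np.conj(alpha) * a))  # target gate
print(np.allclose(CPi @ Dmid @ CPi.conj().T, CD))               # True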
As another example, we can exactly compile the conditional beamsplitter gate as follows:
CBS_i,j,k(φ,θ) = e^-iθẐ_i(e^iφâ^†_j â_k + e^-iφâ_j â^†_k),
= e^-iπ/2Ẑ_i n̂_je^θ(e^iφâ^†_j â_k - e^-iφâ_j â^†_k)e^iπ/2Ẑ_i n̂_j
= CΠ_i,j BS_j,k(φ+π/2,θ)CΠ_i,j^†,
where {φ,θ} ∈ R. This gate will play an instrumental role in Sec. <ref> for the compilation of interaction terms between matter and gauge fields. Its duration is experimentally limited by the speed of each conditional parity gate, requiring roughly 1 μs each. As such, the conditional beamsplitter gate requires ∼2.1 μs, including the worst-case ∼100 ns required for a full SWAP beamsplitter <cit.>.
More generally, as shown in Fig. <ref>a, we can employ this strategy to realize a conditional unitary CU^Z_i,j from its unconditional counterpart:
CU^Z_i,j = e^Ẑ_i(Θ̂â^†_j - Θ̂^†â_j)
= CΠ_i,je^i(Θ̂â_j^† + Θ̂^†â_j)CΠ_i,j^†.
This decomposition holds for any choice of operator Θ̂ that satisfies [Θ̂, â_j] = [Θ̂, Ẑ_i] = 0. For instance, Θ̂ → α corresponds to the conditional displacement in Eq. (<ref>), while Θ̂ → θe^iφ â_k yields the conditional beamsplitter in Eq. (<ref>).
This strategy can be iterated to realize bosonic gates conditioned on multiple qubits. For example, one can realize a doubly-conditioned gate of the form
CU^ZZ_i,j,k=e^Ẑ_i Ẑ_j (Θ̂â_k^† - Θ̂^†â_k )
for any operator Θ̂ that commutes with Ẑ_i, Ẑ_j, and â_k. In this case, conditional parity gates are implemented on mode k for qubits i and j in sequence with the aid of SWAP gates. Alternatively, one can apply conditional parity gates acting on different qubit-mode pairs in parallel, as shown in Fig. <ref>b.
We note that, in the above, we use `qubit-conditional' and `qubit-controlled' interchangeably to refer to a gate that enacts a bosonic operation that depends upon the Z-basis state of a qubit. In particular, distinct non-trivial operations are enacted for both qubit states |0⟩ and |1⟩. However, in certain contexts, it is useful to synthesize a `1-controlled' bosonic operation, i.e., one that acts only if the qubit is in |1⟩. In this work, we will refer to this as an n-controlled gate, referring to the operator that appears as a multiplicative factor in the generator. For example, for an n-controlled mode rotation, we have
e^-iθn̂_qubitn̂_mode = R_i(-θ/2)CR_i,i(θ),
where n̂_qubit = (1-Ẑ)/2. More generally, we have
e^iθn̂_qubitÔ_i = e^iθ/2Ô_ie^-iθ/2Ẑ_i Ô_i,
for any Ô_i which commutes with operations on the qubit.
§.§ Fock-projector-conditioned bosonic operations
An important primitive in the hybrid oscillator-qubit architecture of Fig. <ref> is the synthesis of nontrivial two-mode entangling gates. In this section, we develop a technique to realize bosonic operators that are conditioned on the Fock-space information of a second mode:
CU_i,j^P̅ = e^-iẐ_ancP̅_iÔ_j,
where P̅_i is an operator with eigenvalues ±1 defined below (hence the name `conditioned'), `anc' refers to an ancillary transmon qubit, and Ô_j is an arbitrary (Hermitian) operation acting on mode j. For an arbitrary subspace 𝒫 of the oscillator, P̅_i gives states inside 𝒫 a phase of -1, and states outside 𝒫 a phase of 1. Mathematically,
P̅_i =∑_n∉𝒫|n⟩⟨n|_i-∑_n∈𝒫|n⟩⟨n|_i,
where each projector acts on mode i. Note that we define the operator P̅_i without a hat for simplicity of notation.
Because P̅_i has eigenvalues ±1, the core idea is to map this information to an ancillary qubit, which is then used to mediate this information to a second mode via a hybrid qubit-mode gate. This effectively realizes the Fock-projector-conditioned bosonic gate in Eq. (<ref>). This requires the ancillary qubit to be initialized to an eigenstate of Ẑ_anc. This ancillary qubit can correspond to the transmon qubit dispersively coupled to mode i or j, or to any other qubit via appropriate use of bosonic SWAP gates.
To explain further, we begin with the particular example of a parity-conditioned gate, defined as
CU^Π̅_i,j =e^-iẐ_ancΠ̅_iÔ_j,
where
Π̅_i = e^iπn̂_i
is the parity operator acting on mode i with eigenvalues ±1. Therefore, for an ancilla initialized to |0⟩_anc, CU^Π̅_i,j applies the unitary e^-iÔ_j (e^iÔ_j) to mode j if mode i has even (odd) parity. To realize this gate, it is helpful to first note that
SQR_anc,i(π⃗_0,0⃗) = e^-iπ/2X̂_anc∑_n odd|n⟩⟨n|_i,
where π⃗_0 = {0,π,0,π,…} and 0⃗ = {0,0,0,…}, and Π̂_i (as opposed to Π̅_i) is the projector onto odd Fock states. In other words, an SQR gate is simply a Fock-projector-controlled qubit rotation. By leveraging the Baker-Campbell-Hausdorff formula in Eq. (<ref>), it can be shown that
SQR_anc,i(π⃗_0,0⃗)Ẑ_ancSQR^†_anc,i(π⃗_0,0⃗) = Ẑ_ancΠ̅_i.
Thus, in direct analogy to the strategy for synthesizing qubit-conditional bosonic operations in Sec. <ref>, we can leverage a pair of SQR gates to condition a bosonic operation on the parity of a second mode:
CU^Π̅_i,j = SQR_anc,i(π⃗_0,0⃗)e^-iẐ_ancÔ_jSQR_anc,i(-π⃗_0,0⃗),
where we have used the fact that SQR^†_anc,i(θ⃗,φ⃗) = SQR_anc,i(-θ⃗,φ⃗). This requires the realization of the intermediary qubit-oscillator gate exp(-iẐ_anc Ô_j), either natively or, e.g., using the technique in Sec. <ref>.
Returning to the general form in Eq. (<ref>), it is possible to generalize this technique by substituting Π̅_i with P̅_i, which allows for an arbitrary set of Fock states 𝒫 by choosing appropriate angles for the SQR gate in Eq. (<ref>). In particular, Eq. (<ref>) generalizes as
SQR_anc,i(θ⃗_P,0⃗)Ẑ_ancSQR^†_anc,i(θ⃗_P,0⃗) = Ẑ_ancP̅_i,
where θ⃗_P = {θ_0, θ_1, …, θ_n, …} and
θ_n =
π if |n⟩∈𝒫
0 otherwise.
Akin to Eq. (<ref>), this leverages the fact that SQR_anc,i(θ⃗_P,0⃗) realizes a projector-controlled qubit rotation:
SQR_anc,i(θ⃗_P,0⃗) = e^-iπ/2X̂_anc∑_n∈𝒫|n⟩⟨n|_i.
Thus, with this, one can realize the generalized Fock-projector-conditional gate,
CU_i,j^P̅ = e^-iẐ_ancP̅_iÔ_j
=SQR_anc,i(θ⃗_P,0⃗)e^-iẐ_ancÔ_jSQR_anc,i(-θ⃗_P,0⃗).
This sequence is illustrated in Fig. <ref>a.
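The conjugation relation above is straightforward to verify numerically. In the sketch below (our own illustration, not from the paper), we build SQR(θ⃗_P, 0⃗) = e^-i(π/2)X̂⊗P̂ for the odd-Fock subspace and check that conjugation maps Ẑ_anc to Ẑ_anc P̅_i:

import numpy as np
from scipy.linalg import expm

N = 6
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
P = np.diag((np.arange(N) % 2).astype(complex))   # projector onto odd Fock states
SQR = expm(-1j * (np.pi / 2) * np.kron(X, P))     # SQR(theta_P, 0) for P = odd states
Pbar = np.eye(N) - 2 * P                          # reflection operator P-bar
lhs = SQR @ np.kron(Z, np.eye(N)) @ SQR.conj().T
print(np.allclose(lhs, np.kron(Z, Pbar)))         # True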
§.§ Fock-projector-controlled bosonic operations
In this section, we build on the technique developed in the previous section in order to realize bosonic operators that are controlled on the Fock-space information of another mode,
CU_i,j^P̂ = e^-iẐ_ancP̂_iÔ_j,
where P̂_i is a Fock-space projector onto the subspace 𝒫,
P̂_i = ∑_n∈𝒫|n⟩⟨n|_i
= (1 - P̅_i)/2.
The second equality reminds us that the projector and its reflection operator about the orthogonal complement, P̅_i, are reminiscent of n̂_qubit and Ẑ for qubits, where n̂_qubit = (1-Ẑ)/2. It follows that while CU^P̅_i,j applies a distinct, nontrivial operation depending on whether mode i lies inside or outside 𝒫, the gate CU^P̂_i,j acts with Ô_j only if mode i ∈ 𝒫 (hence the `controlled' nomenclature).
In fact, using (<ref>), we can easily construct the target gate (<ref>), similar to the construction in (<ref>):
CU_i,j^P̂ = e^-iẐ_anc((1_i - P̅_i)/2)Ô_j
= e^-iẐ_ancÔ_j/2e^iẐ_ancP̅_iÔ_j/2.
§.§ Boson density-controlled operations
A powerful use-case of the generalized controlled gate in Eq. (<ref>) is to apply many of them in sequence to synthesize complex entangling gates between two modes, mediated by an ancillary qubit in a known eigenstate of Ẑ_anc. In this section, we leverage this technique to exactly compile a bosonic density-controlled operation:
CU^n̂_i,j=e^-iẐ_ancn̂_i Ô_j
To obtain this gate, we follow the iterative procedure illustrated in Fig. <ref>c. The core idea is to sequentially condition the oscillator-qubit gate U = exp(-iẐ_anc Ô_j) on each bit of the binary representation of n̂_i. With each iteration, an appropriate rotation angle is chosen such that the sequence altogether produces the desired operation CU_i,j^n̂. This strategy is reminiscent of binary photon number readout <cit.>, and requires a circuit depth that is logarithmic in the boson number cutoff n_max.
Specifically, as a particular instance of Eq. (<ref>), we define the operator generated by Ô_j and conditioned on the kth superparity of mode i,
CU_i,j^Π̅_k(θ) = e^-iθẐ_ancΠ̅_kiÔ_j
= SQR_anc,i(π⃗_k,0⃗)e^-iθẐ_ancÔ_jSQR_anc,i(-π⃗_k,0⃗),
where we have made the variable angle θ explicit for reasons that will become clear shortly.
Similarly, we define CU_i,j^Π̂_k(θ) in analogy to Eq. (<ref>), see Fig. <ref>b. Here, Π̂_k is the projection operator that returns the kth bit of n̂_i (and Π̅_k its corresponding reflection operator, returning ±1), and π⃗_k = {θ_0, θ_1, θ_2, …} with
θ_n =
π if ⌊ n/2^k⌋ mod 2 = 1
0 otherwise.
For example, π⃗_0 = {0,π,0,π,…} is equivalent to the parity vector defined below Eq. (<ref>), converting the SQR gate into a qubit rotation controlled on the least-significant (zeroth) bit of n̂_i. We note that Π̅_0 ≡ Π̅, where the latter is defined in Eq. (<ref>). Likewise, the choice π⃗_1 = {0,0,π,π,…} corresponds to a qubit rotation controlled on the first bit of n̂_i. In general,
SQR_anc,i(π⃗_k,0⃗) = e^-iπ/2X̂_ancΠ̂_ki,
and the choice π⃗_k enables a qubit rotation controlled on the kth bit of n̂_i.
Applying successive operations controlled on each bit of n̂_i in sequence then yields the general form
CU_i,j^f(Π̂_0, Π̂_1, Π̂_2,…) = ∏_k=0^K-1CU_i,j^Π̂_k(θ_k)
= exp(-iZ_anc∑_k=0^K-1θ_kΠ̂_kiÔ_j),
requiring K = ⌈log_2(n_max+1)⌉ iterations of (super-)parity-controlled gates, and where f(Π̂_0, Π̂_1, Π̂_2, …) is any linear function of the bit operators Π̂_k. Returning to the initial goal of synthesizing CU_i,j^n̂ in Eq. (<ref>), we note that
n̂_i = ∑_k=0^K-1 2^kΠ̂_ki.
Consequently, choosing θ_k = 2^k realizes the desired gate. This sequence is shown in Fig. <ref>c.
As a particular example of this gate that will prove useful in Sec. <ref> for implementing density-density interactions, one can choose Ô_j → θn̂_j to realize the phase-space rotation of one mode controlled on the density of a second,
RR_i,j(θ) = e^-iθẐ_ancn̂_in̂_j.
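Because all operators involved are diagonal in the Fock basis, the binary-decomposition construction of RR_i,j(θ) can be verified exactly in a few lines. The following sketch (ours, not from the paper) reproduces the construction above for K = 3 bits, i.e., n_max = 7:

import numpy as np
from scipy.linalg import expm

K = 3
N = 2**K                                  # boson cutoff n_max = 7
n = np.diag(np.arange(N)).astype(complex)
Z = np.diag([1.0, -1.0]).astype(complex)
theta = 0.4

def bit_projector(k):                     # projector onto Fock states with bit k set
    bits = (np.arange(N) >> k) & 1
    return np.diag(bits).astype(complex)

U = np.eye(2 * N * N, dtype=complex)
for k in range(K):
    gen = np.kron(np.kron(Z, bit_projector(k)), theta * n)
    U = expm(-1j * (2**k) * gen) @ U      # theta_k = 2^k
target = expm(-1j * theta * np.kron(np.kron(Z, n), n))   # RR(theta) generator
print(np.allclose(U, target))             # True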
Finally, we emphasize that while our focus has been on synthesizing the density-controlled operator CU_i,j^n̂, the general form CU_i,j^f(Π̂_0, Π̂_1, Π̂_2,…) enables further possibilities beyond this particular choice. Specifically, it allows for operations conditioned on any function f(Π̂_0, Π̂_1, Π̂_2,…) that is linear in the bit operators Π̂_ki. Moreover, we note that an arbitrary function f(n̂_i) can be realized by iterating the primitive CU^P̂_ij a number of times linear in n_max. This possibility is discussed in our companion work, Ref. <cit.>.
§.§ Oscillator-mediated multi-qubit gates
As the architecture illustrated in Fig. <ref> does not include direct couplings between transmons, their utility for encoding fermionic and gauge field degrees of freedom relies on the ability to synthesize transmon-transmon entangling gates using native gates. Here, we adopt an oscillator-mediated approach proposed in our companion work Ref. <cit.> to realize such entangling gates that bear close similarity to Mølmer-Sørensen gates in trapped-ion platforms <cit.>, and also some parallels to mediated gates in silicon <cit.>. Crucially, this approach allows for the exact analytical compilation of multi-qubit gates, independent of (and without altering) the initial state of the mediating oscillators in the absence of noise. The core idea is to use a sequence of conditional oscillator displacements that form a closed phase-space path that depends upon the state of the qubits (see, for example, Fig. <ref>a). Upon completion, this closed path leaves the oscillator unchanged yet imparts a geometric phase on the system that depends upon the state of the qubits, thus enacting the target gate.
To realize these oscillator-mediated multi-qubit gates, we require the ability to enact displacements of thejth oscillator conditioned on theith transmon,
CD_i,j(α) = e^Ẑ_i(αâ_j^† - α^* â_j).
For i=j, such conditional displacements can be realized either natively <cit.> or compiled using conditional-parity gates and unconditional displacements, as shown in Eq. (<ref>); furthermore, as discussed in Ref. <cit.> and summarized in Sec. <ref>, it is possible to synthesize conditional displacements for arbitrary i and j by additionally leveraging a beamsplitter network that links the two sites.
As a particular example, the two-qubit entangling gate ZZ_i,j(θ) = exp(-iθ/2 Ẑ_i Ẑ_j) can be realized via a sequence of four conditional displacements,
ZZ_i,j(θ) = CD_j,i(-iα)CD_i,i(α)
×CD_j,i(+iα)CD_i,i(-α),
with α = √(θ)/2; note that the four displacements of mode i are alternately conditioned on qubits j and i.
The corresponding phase-space trajectories and circuit diagram are shown in Fig. <ref>a and Fig. <ref>b, respectively. As explained in Ref. <cit.>, this sequence can be further optimized by leveraging conditional displacements on both modes i and j such that each accumulates a geometric phase in parallel, leading to a reduction in gate duration by a factor of √(α) compared to the above single-mode approach, while also reducing the potential impact of non-idealities such as self-Kerr interactions (anharmonicities). See Ref. <cit.> for more details.
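The geometric-phase mechanism behind this gate can be checked numerically: the four conditional displacements return the mode to its initial state while imprinting the phase e^-2iα²z_iz_j. A minimal sketch (our own illustration, with the qubit ordering and the choice α = √(θ)/2 as above):

import numpy as np
from scipy.linalg import expm

N = 30                                    # mode Fock cutoff
a = np.diag(np.sqrt(np.arange(1, N)), 1)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2)

def CD(qubit, alpha):
    # Displace the mode conditioned on Z of the given qubit (0 = i, 1 = j).
    Zop = np.kron(Z, I2) if qubit == 0 else np.kron(I2, Z)
    gen = alpha * a.conj().T - np.conj(alpha) * a
    return expm(np.kron(Zop, gen))

theta = 0.7
alpha = np.sqrt(theta) / 2
U = CD(1, -1j * alpha) @ CD(0, alpha) @ CD(1, 1j * alpha) @ CD(0, -alpha)

vac = np.zeros(N); vac[0] = 1.0
for z1, z2 in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    q1 = np.array([1.0, 0.0]) if z1 == 1 else np.array([0.0, 1.0])
    q2 = np.array([1.0, 0.0]) if z2 == 1 else np.array([0.0, 1.0])
    psi = np.kron(np.kron(q1, q2), vac)
    overlap = np.vdot(psi, U @ psi)       # mode returns to vacuum up to a phase
    print(np.allclose(overlap, np.exp(-1j * theta / 2 * z1 * z2)))   # True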
This strategy can be extended to realize oscillator-mediated N-qubit gates that are useful for studying lattice gauge theories. For example, as will be discussed in Sec. <ref>, the four-qubit gate
e^-i θ/2Ẑ_i Ẑ_j Ẑ_k Ẑ_l
is useful for implementing a magnetic field term in the ℤ_2 LGT introduced in Eq. (<ref>). To realize this gate, we modify the sequence in Eq. (<ref>) such that each displacement is conditioned on the joint state of pairs of qubits, e.g., Ẑ_i Ẑ_j or Ẑ_k Ẑ_l, as shown in Fig. <ref>c. To that end, we define the doubly-conditional displacement as
CD^ZZ_i,j,r(α) = e^Ẑ_iẐ_j(αâ^†_r - α^* â_r).
This gate can be implemented via the technique described in Sec. <ref>. As illustrated in Fig. <ref>d, the four-qubit term can be exactly realized by sequencing four such doubly-conditional displacements:
e^-i θ/2Ẑ_i Ẑ_j Ẑ_k Ẑ_l = CD^ZZ_i,j,r(-iα)CD^ZZ_k,l,r(α)
×CD^ZZ_i,j,r(+iα)CD^ZZ_k,l,r(-α),
with α = √(θ)/2.
Similar to the implementation of ZZ_i,j(θ), it is beneficial to use multiple modes in parallel to accumulate the necessary geometric phase and reduce the displacement parameter α. Moreover, this strategy can be extended to realize arbitrary operations of the form e^-iθŴ, where Ŵ is an arbitrary-weight Pauli operator. See Ref. <cit.> for a more complete discussion.
Finally, we note that a similar strategy for oscillator-mediated N-body entangling gates has been proposed <cit.> and demonstrated <cit.> in trapped-ion platforms. There, spin-dependent squeezing operations are used in place of conditional displacement operators, but the overall framework bears similarity to the strategy described here for the superconducting architecture of Fig. <ref>.
§.§ Approximate synthesis of multi-mode gates
To approximately implement arbitrary multi-mode operations that fall outside of the form in Sec. <ref> and Sec. <ref>, we use the method introduced in Ref. <cit.>. It relies on a variant of the Baker-Campbell-Hausdorff (BCH) formula,
e^Ât e^B̂t e^-Ât e^-B̂t = e^[Â, B̂] t^2+O(t^3).
The key insight is to choose the operators Â, B̂ such that they enact commuting bosonic operations Ô_1, Ô_2 (possibly acting on distinct sets of modes), but conditioned on the same qubit with respect to non-commuting Pauli operators, e.g., using the method described in Sec. <ref>. For example, we can leverage this idea as follows:
Γ_1,2 = e^i X̂Ô_1 t e^i ŶÔ_2 t e^-i X̂Ô_1 t e^-iŶÔ_2 t
=e^[X̂Ô_1, ŶÔ_2](-i t)^2+O(t^3),
=e^-2iẐÔ_1Ô_2t^2+O(t^3) .
See also Fig. <ref>a for the circuit diagram. In this way, one can implement operator multiplication between Ô_1 and Ô_2. Combining this with the Trotter formula to implement addition, following Ref. <cit.> and Fig. <ref>b, we can implement exponentials of non-linear functions of the creation and annihilation operators.
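The group-commutator identity and its O(t³) error can be seen in a small numerical example. In the sketch below (our own illustration, not from the paper), we take Ô_1 = n̂_1 and Ô_2 = n̂_2 as a simple commuting pair and observe the error of the synthesized e^-2iẐÔ_1Ô_2t² shrink roughly as t³:

import numpy as np
from scipy.linalg import expm

N = 6
ann = np.diag(np.sqrt(np.arange(1, N)), 1)
n = ann.conj().T @ ann
I2, Im = np.eye(2), np.eye(N)
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Y = np.array([[0.0, -1j], [1j, 0.0]])
Z = np.diag([1.0, -1.0]).astype(complex)

O1 = np.kron(np.kron(I2, n), Im)          # O_1 = n_1
O2 = np.kron(np.kron(I2, Im), n)          # O_2 = n_2
Xq = np.kron(np.kron(X, Im), Im)
Yq = np.kron(np.kron(Y, Im), Im)
Zq = np.kron(np.kron(Z, Im), Im)

def gamma(t):                             # group commutator Gamma_{1,2}
    return (expm(1j * Xq @ O1 * t) @ expm(1j * Yq @ O2 * t)
            @ expm(-1j * Xq @ O1 * t) @ expm(-1j * Yq @ O2 * t))

for t in [0.1, 0.05, 0.025]:
    err = np.linalg.norm(gamma(t) - expm(-2j * Zq @ O1 @ O2 * t**2), 2)
    print(t, err)                         # error decreases roughly as t^3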
When the commutator obtained through the above BCH formula contains both desired and undesired high-order mode terms, we can cancel the latter by applying Eq. (<ref>) a second time with a different choice of operators. Via the Trotter formula we then obtain:
Δ_1,2 = e^-2iẐÔ_1Ô_2t^2 e^-2iẐÔ'_1Ô'_2t^2+O(t^3)
= e^-2iẐ(Ô_1Ô_2+Ô'_1Ô'_2)t^2+O(t^3),
where the error term will be different between the two lines and we rewrote the error as an additive error by Taylor expanding the exponential. Here, each term is conditioned on the same qubit, either natively or via SWAP operations between the qubits or the modes.
In Fig. <ref>c, we use this technique to approximately synthesize the four-mode operator
e^-iẐ(â^†b̂ĉ^†d̂ + h.c.)(2θ)^2.
which will later be used in Sec. <ref> to realize the plaquette term in theU(1)quantum link model.
§.§ Dual-rail qubits
While so far we have discussed primitives for mapping the model to the hardware with transmons being used as data or ancillary qubits and the modes being used to encode bosonic degrees of freedom, another possibility is to use the modes to encode qubits. In this section, we discuss such an encoding: the dual-rail qubit.
The core idea is to encode a single qubit using a pair of cavity modes that share a single photon. The location of the photon then represents the qubit state <cit.>
|0⟩_DR =|0,1⟩
|1⟩_DR =|1,0⟩.
Denoting the cavity photon numbers by |n^a, n^b⟩, we see that the Pauli Z operator has various equivalent representations,
Ẑ^DR =n̂^b - n̂^a
=1-2n̂^a=2n̂^b-1
=e^iπn̂^a=-e^iπn̂^b,
where we have used the fact that n̂^a + n̂^b = 1. The remaining Pauli operators are given by
X̂ = â^†b̂ + âb̂^†
Ŷ = i[â^†b̂ - âb̂^†].
Note that this is simply the Schwinger boson representation of a spin one-half.
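A two-level truncation per mode suffices to check this representation: projecting the Schwinger-boson operators onto the single-photon subspace {|0,1⟩, |1,0⟩} recovers the Pauli matrices. A short sketch (ours, not from the paper):

import numpy as np

ann = np.array([[0.0, 1.0], [0.0, 0.0]])    # two-level mode truncation suffices
I = np.eye(2)
a, b = np.kron(ann, I), np.kron(I, ann)
X = a.conj().T @ b + a @ b.conj().T
Y = 1j * (a.conj().T @ b - a @ b.conj().T)
Z = b.conj().T @ b - a.conj().T @ a

ket0 = np.kron([1.0, 0.0], [0.0, 1.0])      # |0>_DR = |0,1>
ket1 = np.kron([0.0, 1.0], [1.0, 0.0])      # |1>_DR = |1,0>
V = np.column_stack([ket0, ket1])

paulis = {"X": np.array([[0, 1], [1, 0]]),
          "Y": np.array([[0, -1j], [1j, 0]]),
          "Z": np.array([[1, 0], [0, -1]])}
for name, op in [("X", X), ("Y", Y), ("Z", Z)]:
    print(name, np.allclose(V.conj().T @ op @ V, paulis[name]))   # all True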
The primary advantage of the dual-rail encoding is that photon loss in either the left or right cavity results in an erasure error that is detectable via joint-parity measurements <cit.>. Furthermore, single-qubit operations are straightforward: rotations about any axis in the azimuthal plane can be carried out with a beamsplitter, and rotations about the Z axis correspond to a phase-space rotation of one of the modes (up to a global phase). Moreover, it has been shown that one can construct a universal gate set that enables detection of ancillary transmon dephasing and relaxation errors, the latter made possible by leveraging three levels of the transmon qubit <cit.>. Consequently, for models that require many qubit-qubit gates (such as 2D fermionic problems that necessitate fermionic SWAP networks), the dual-rail encoding is potentially advantageous for mitigating errors with post-selection in near-term experiments.
ZZ_i,j^DR(θ). As an example of an entangling gate between dual-rail qubits, we consider the compilation of ZZ^DR_i,j(θ) = e^-i(θ/2)Ẑ^DR_i Ẑ^DR_j. Using the fact that Ẑ^DR_i = 1-2n̂_i^a, this gate can be expressed as
ZZ_i,j^DR(θ) = e^-iθ/2 e^-i2θn̂_i^an̂_j^a e^iθn̂_i^a e^iθn̂_j^a
= e^-iθ/2(⟨0|_anc⊗1) RR_i,j(2θ) (|0⟩_anc⊗1)
×R_i(-θ)R_j(-θ),
where, for simplicity of notation, we have dropped the superscript a. Therefore, this gate can be realized by combining a pair of phase-space rotations with the technique introduced in Sec. <ref> to realize RR(2θ) – see Eq. (<ref>).
However, note that in the dual-rail encoding, the boson number of mode i is restricted to either 0 or 1, which means that the SQR gate used to implement Eq. (<ref>) of Sec. <ref> can be reduced to a conditional phase-space rotation up to single-qubit operations. For the choice of θ⃗ = π⃗_0, this conditional rotation in the {|0⟩, |1⟩} subspace corresponds to a conditional-parity gate:
SQR_i,j(π⃗_0, 0⃗) ⇒ H_i CΠ_i,j H_i = CΠ^X_i,j
where H_i is a Hadamard gate on qubit i, and we have defined the shorthand CΠ^X_i,j to denote the X-conditional parity gate.
Therefore, we can re-write the decomposition of ZZ^DR_i,j(θ) as:
ZZ_i,j^DR(θ) = e^-iθ/2(⟨0|_anc⊗1)CΠ^X_anc,iCR_anc,j(-2θ)CΠ^X_anc,i
×(|0⟩_anc⊗1)R_i(π)R_j(θ)R_i(-θ)R_j(-θ),
where the top line, up to the global phase, corresponds to RR(2θ) in the truncated dual-rail subspace. The complete circuit (with single-qubit rotations simplified) is presented in Fig. <ref>a. We note that sequential ancilla-conditional gates using the same transmon (but distinct modes) require implicit SWAP operations not shown.
While exact, the circuit in Fig. <ref>a does not enable the detection of transmon dephasing errors. To that end, a scheme to synthesize error-detectable entangling gates was presented in Ref. <cit.>. It leverages an “exponentiation circuit” that interleaves ancilla-controlled unitaries (controlled-P̂) with ancilla rotations to construct any unitary P(θ) = exp(-iθ/2 P̂), provided P̂^2 = 1̂. Returning to the case of ZZ_i,j^DR(θ), this gate can be constructed following the prescription in the previous paragraph, choosing P̂ = Ẑ⊗Ẑ and implementing controlled-P̂ using a pair of conditional-parity gates. We illustrate the full implementation in terms of our discrete gate set in Fig. <ref>b. Notably, this latter circuit can be understood as a further decomposition of its counterpart in Fig. <ref>a, where the middle CR(-2θ) gate is broken into conditional-parity gates and single-qubit operations using the identities in Sec. <ref>.
While both circuits are mathematically equivalent, the structure of Fig. <ref>b enables partial detection of some of the dominant errors, including transmon dephasing. However, it comes at the cost of requiring four conditional-parity gates, each with a duration T_CΠ = π/χ. Thus, ignoring the cost of SWAPs and single-qubit and mode operations, the minimum duration of ZZ_i,j^DR(θ) using the method in Fig. <ref>b is T_b,min = 4π/χ, independent of θ. In contrast, the method in Fig. <ref>a requires T_a,min = 2(π+θ)/χ, a reduction even for the maximally entangling case θ=π/2. Thus, there is a tradeoff between error detectability and gate duration, and the approach in Fig. <ref>a may therefore be beneficial in certain contexts, particularly for repeated applications of ZZ_i,j^DR(θ) with small θ (e.g., for Trotterized circuits).
§ IMPLEMENTATION OF MATTER FIELDS
In this section, we leverage the compilation strategies of the previous section to realize the unitary time evolution operator e^-iĤt for a range of common Hamiltonian terms involving bosonic and fermionic matter. For the purely bosonic matter terms, in Sec. <ref>, we show that some of the common dynamics terms correspond to native gates, and others can be realized using the compilation techniques of the previous section. For the purely fermionic matter terms, in Sec. <ref>, we demonstrate how to encode fermionic SWAP networks into the various qubit encodings possible in hybrid oscillator-qubit hardware. For the fermion-boson interaction terms, in Sec. <ref>, we present novel compilation strategies specific to hybrid oscillator-qubit hardware. We summarize these results in Tab. <ref> and Tab. <ref>. All terms involving gauge fields will be treated in Sec. <ref>.
§.§ Bosonic matter
In this section, we discuss the most common Hamiltonian terms for bosonic matter fields (which typically involve small powers of the creation and annihilation operators). The results are summarized in Table <ref>. We first discuss hopping, as this is the most elementary term coupling bosonic modes. We then discuss terms related to powers and products of number operators n̂: the onsite potential (linear in n̂), the onsite interaction (quadratic in n̂), and the intersite interaction (bilinear, n̂_i n̂_j). We map the bosonic matter fields to the native bosonic modes supported in the hybrid oscillator-qubit hardware in Fig. <ref>.
§.§.§ Hopping
The time evolution operator e^-iĤt for the hopping interaction Ĥ = ∑_⟨i,j⟩ J_i,j (â^†_i â_j + h.c.) between bosonic matter sites can be implemented via Trotterization using a sequence of beamsplitter gates BS_i,j(φ,θ) (defined in Tab. <ref>) by choosing θ = J_i,j t.
Furthermore, a static gauge field (vector potential) appears as a complex phase in the hopping, J → J e^iφ <cit.>, and is important for simulating models of the fractional quantum Hall effect <cit.>. To implement this term, one can simply adjust the phase φ of the beamsplitter on each link such that a net phase is accumulated in moving around each plaquette.
§.§.§ Onsite potential
The onsite potential is Ĥ = ∑_i μ_i n̂_i, with n̂_i = â_i^†â_i and μ_i ∈ ℝ. The time evolution of this term can be implemented on each site separately using a phase-space rotation gate (defined in Tab. <ref>), as shown in Tab. <ref>.
As mentioned previously in Sec. <ref>, it is possible to absorb phase space rotations into operators that include creation/annihilation operators such as beamsplitters. In particular, a constant site energy shift can be created with a fixed frequency detuning of the microwave tones that activate the beamsplitters.
§.§.§ Onsite interaction
The bosonic onsite interaction is Ĥ = ∑_i U_i n̂_i^2 with U_i ∈ ℝ. To implement the time evolution operator for this term, one can use the Selective Number-dependent Arbitrary Phase (SNAP) gate <cit.>. SNAP_i(θ⃗), as defined in Tab. <ref>, operates in the strong-dispersive coupling regime, where the qubit frequency depends on the mode occupation such that the SNAP gate effectively imparts an independently chosen phase to the joint qubit-mode system for each value of the mode occupation. Choosing θ_k = U_i k^2 t, where k is the photon number of a given Fock state, implements the onsite interaction n̂^2_i on the i-th mode. This procedure additionally requires an ancillary qubit initialized to the state |0⟩.
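As an illustration of this phase choice (a numerical sketch with arbitrary example values, not a pulse-level implementation), one can check that the Fock-diagonal phases θ_k = U k^2 t indeed reproduce exp(-i U n̂^2 t) on a truncated mode:

```python
# Sketch: SNAP phases theta_k = U * k**2 * t realize exp(-i U n^2 t).
import numpy as np
from scipy.linalg import expm

nmax, U, t = 7, 1.3, 0.2
k = np.arange(nmax + 1)
theta = U * k**2 * t                       # SNAP phase for Fock state |k>
snap = np.diag(np.exp(-1j * theta))        # SNAP is diagonal in the Fock basis
n_op = np.diag(k.astype(float))
print(np.allclose(snap, expm(-1j * U * t * n_op @ n_op)))  # True
```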
§.§.§ Intersite interaction
The time evolution of a nearest-neighbor density-density interaction Ĥ = ∑_⟨i,j⟩ J_i,j n̂_i n̂_j between two modes in separate cavities, which is useful for example in simulation of the extended Hubbard model or dipolar interacting systems, can be implemented with the method proposed in Sec. <ref> via the RR(θ) gate defined in Eq. (<ref>):
e^-i J_i,j n̂_i n̂_j t = (⟨0|_anc ⊗ 1̂) RR_i,j(J_i,j t) (|0⟩_anc ⊗ 1̂).
Here, we have used an ancillary qubit that begins and (deterministically) ends in the state |0⟩_anc. As discussed in Sec. <ref> and shown in Table <ref>, this gate requires a native gate count that is logarithmic in the bosonic cutoff n_max. Long-range interactions between sites i and j can be implemented at the cost of a bosonic SWAP gate count that is linear in the Manhattan-metric distance between the sites.
§.§ Fermionic matter
In this section, we show that it is possible to treat purely fermionic terms within our proposed hybrid qubit-oscillator architecture. While this is not an area in which to expect an explicit advantage over qubit-only hardware, we show that no significant cost is added compared to existing qubit platforms.
We treat fermionic matter by mapping the fermions to qubits and rely on FSWAP networks <cit.> of Jordan-Wigner (JW) strings for a low-depth implementation. For an overview of the set of operations required to implement e^-iĤt for a 2D fermionic Hamiltonian Ĥ in qubit-based hardware, see App. <ref>. In summary, the two-dimensional lattice is virtually mapped to a one-dimensional chain. Nearest-neighbour hoppings in the 2D lattice become either nearest-neighbour hoppings in the 1D JW chain (which we call “JW-adjacent”) or long-range hoppings (which we call “non-JW-adjacent”). The former hoppings only require iSWAP operations, whereas the latter also require FSWAP operations. Gates that involve the fermionic number density are mapped to gates involving the Ẑ operator and do not add any overhead associated with the Jordan-Wigner encoding.
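For concreteness, the following small numerical check (illustrative; two JW modes only) verifies the defining property of the FSWAP gate used throughout: it exchanges two Jordan-Wigner fermionic modes with the string signs included.

```python
# Sketch: FSWAP exchanges two JW fermionic modes, i.e. F c1 F^dag = c2.
import numpy as np

sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^- = |0><1|
Z = np.diag([1.0, -1.0])
I = np.eye(2)

c1 = np.kron(sm, I)                       # c_1 = sigma^-_1
c2 = np.kron(Z, sm)                       # c_2 = Z_1 sigma^-_2 (JW string)

FSWAP = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, -1.0]])       # SWAP with a sign on |11>

print(np.allclose(FSWAP @ c1 @ FSWAP.conj().T, c2))  # True
```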
We first summarize the possibilities for encoding fermions (i.e., implementing iSWAP and FSWAP operations) either in transmons (Sec. <ref>) or in cavity dual-rail qubits (Sec. <ref>). While the former is more hardware efficient, the latter enables error detection and, as a result, is a particularly promising avenue for near-term simulations on noisy hardware. The high-level results for both encodings are summarized in Table <ref>. In Sec. <ref>, we discuss the fidelity of the gates that underlie the simulation of fermions in the dual-rail encoding.
§.§.§ Transmon qubits
In the proposed 3D post-cavity circuit QED setup, encoding the fermion into the transmon enables the use of the cavities for a separate purpose, such as for representing the phonon modes in the Hubbard-Holstein model, or for encoding bosonic gauge fields as will be demonstrated in the context of aℤ_2lattice gauge theory later in this work.
As the architecture illustrated in Fig. <ref> does not include direct couplings between transmons, we rely on the oscillator-mediated compilation strategy discussed in Sec. <ref> to realize entangling gates, ZZ_i,j(θ) in particular. We reemphasize that this approach does not require that the mediating oscillators are in a known, particular state. Thus, they can simultaneously be used to encode matter or gauge degrees of freedom.
§.§.§ Dual-rail qubits
We now discuss the separate possibility to map fermions to dual rail qubits. As discussed in Sec. <ref>, dual rail qubits present a number of advantages, including the possibility for detection of photon loss and ancilla relaxation and dephasing errors <cit.>.
FSWAP. For the non-JW-adjacent hopping terms, a full SWAP_i,j operation is required between the dual-rail qubits as part of the FSWAP operation. This can be implemented by separately applying a SWAP_i,j operation between the cavities representing the |1⟩ and |0⟩ states, as represented in Fig. <ref>. As described in App. <ref>, the FSWAP also requires a CZ_i,j(θ) gate, which is equivalent to the ZZ_i,j(θ) gate up to single-qubit phase gates. As discussed in Sec. <ref> (c.f. App. <ref>), these single-qubit phase gates correspond to phase-space rotations on the mode which hosts the boson when the dual-rail qubit is in the state |1⟩. The corresponding circuit is shown in Fig. <ref>a.
iSWAP(θ). For the JW-adjacent hopping terms, following Eq. (<ref>), we only need to implement a variable-angle iSWAP gate defined in Fig. <ref>b. This requires only ZZ_i,j(θ) gates and H and S single-qubit rotations. An error-detectable implementation of ZZ_i,j(θ) between dual-rail qubits was presented in Ref. <cit.> and is summarized here in Sec. <ref>. We show the circuit for implementing e^iθ(X̂_i X̂_i+1 + Ŷ_i Ŷ_i+1) at the level of the modes supporting the dual-rail qubits in Fig. <ref>b.
§.§.§ Comparing advantages of different platforms for encoding fermions
We note that different platforms have different advantages for implementing non-JW-adjacent hopping.
In a planar transmon qubit array with tunable couplers, FSWAP operations can be implemented as FSIM(θ=π/2, ϕ=0) with high fidelity (≈0.9930) <cit.>. e^iθ(X̂X̂ + ŶŶ) operations can be implemented using FSIM(θ, ϕ=0), also with high fidelity (currently ≈0.9962) <cit.>.
In the ion trap QCCD architecture <cit.>, FSWAP operations can be implemented using a SWAP operation and a ZZ(π/2) gate. Because the ions can be physically moved around the circuit to achieve all-to-all connectivity, the fidelity of the FSWAP operation reduces to the fidelity of ZZ(π/2), which is currently 0.9980 <cit.>, yielding a fidelity advantage compared to the planar transmon implementation. Furthermore, e^iθ(X̂X̂ + ŶŶ) can be implemented with two ZZ(θ) gates, which currently have a fidelity of ≈0.9960 <cit.>.
In the circuit QED setup using dual-rail qubits, as discussed, we can use the ZZ(θ) gates to implement fermionic SWAP networks similar to ion trap QCCD architectures. Beamsplitter operations in the high-Q cavity setup required for a SWAP operation between dual-rail qubits have a very high fidelity of 0.9992 <cit.> (comparable to single-qubit gates on transmons). Importantly, we can use the error-detection capability to increase the fidelity of the operations with post-selection, as discussed in <cit.>. This results in ZZ(π/2) gates with a theoretically calculated fidelity in the presence of noise and post-selection of ≈0.9999 <cit.>, for a pure dephasing channel with time T_ϕ = 200 μs. The error detection comes at the price of a post-selection requirement, leading to a shot overhead. For the above value of T_ϕ, it was estimated that an error would be detected in 2% <cit.> of shots. While this percentage increases exponentially with circuit depth, even for 100 ZZ_i,i+1(θ) gates, only 1-(1-0.02)^100 ≈ 87% of shots would have to be discarded due to a detected error. This leads to a shot overhead of a factor of 1/(1-87%) ≈ 7.7.
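These numbers follow from elementary probability, as the following back-of-the-envelope sketch shows (assuming, as above, an independent 2% detection probability per gate):

```python
# Sketch of the post-selection shot overhead quoted above.
p_detect, n_gates = 0.02, 100
keep = (1 - p_detect) ** n_gates
print(f"discarded fraction: {1 - keep:.2f}")   # ~0.87
print(f"shot overhead:      {1 / keep:.1f}")   # ~7.5 (7.7 if 87% is used)
```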
§.§ Fermion-boson interactions
Here we compile the unitary time evolution operator for two paradigmatic fermion-boson models: phonon-electron interactions in solids and the coupling of strong light fields with electrons. Employing the techniques of the previous sections, we discuss the use of either transmons or dual rail qubits to encode the fermionic matter, and employ modes to represent the bosonic matter.
§.§.§ Phonon-matter interactions
The Holstein model <cit.> describes (dispersionless) optical phonons in the solid state coupled to the density of electrons. It consists of a kinetic energy term for the electrons (c.f. first row in Tab. <ref>), an onsite potential term for the phonons (c.f. Sec. <ref>), as well as an electron-phonon interaction term
Ĥ= ∑_i g_i ĉ^†_i ĉ_i (â_i^† + â_i).
To implement the latter, note that in the Jordan-Wigner encoding, independent of the dimension, this Hamiltonian term becomes equivalent to the Spin-Holstein coupling <cit.>,
Ĥ= ∑_i g_i/2 (Ẑ_i+1) (â_i^† + â_i).
Time evolution under this Hamiltonian can be easily implemented since all terms in the sum commute and each has a simple implementation; the term proportional toẐ_igenerates a conditional displacement gate and the remaining term can be eliminated by a simple coordinate frame shift for each oscillator (resulting in a constant chemical potential shift for the fermions).
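As a consistency check of this structure (a numerical sketch on a truncated Fock space; the cutoff and angle are arbitrary example values), one can verify that exp(-iθẐ(â^† + â)) is block-diagonal in the qubit, with the two blocks given by opposite displacements:

```python
# Sketch: exp(-i theta Z (a^dag + a)) is a qubit-conditional displacement,
# with D(alpha) = exp(alpha a^dag - alpha* a).
import numpy as np
from scipy.linalg import expm

nmax, theta = 20, 0.15
a = np.diag(np.sqrt(np.arange(1, nmax + 1)), k=1)
x = a + a.conj().T
Z = np.diag([1.0, -1.0])
U = expm(-1j * theta * np.kron(Z, x))

D = lambda alpha: expm(alpha * a.conj().T - np.conj(alpha) * a)
d = nmax + 1
print(np.allclose(U[:d, :d], D(-1j * theta)))   # qubit |0> block
print(np.allclose(U[d:, d:], D(+1j * theta)))   # qubit |1> block
```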
Transmon qubits. If the qubits are mapped to transmon qubits, conditional displacements can be compiled using conditional parities CΠ_i,j (Tab. <ref>) and a displacement D_i(g_i t/2). This control was discussed in Sec. <ref> and is illustrated in Fig. <ref>a.
Dual-rail qubits. In the case of dual-rail qubits, Ẑ_i^DR = e^iπn̂_i,1, where the subscript i,1 indicates the mode which hosts the boson when the dual-rail qubit is in state |1⟩. This means that the required dual-rail qubit-conditional displacement takes the form e^-iθ e^iπn̂_i,1(â_j^† + â_j), where θ = g_i t/2. This is a displacement conditioned on the parity of another mode {i,1} and can be realized by CU^Π̅_i,j, whose compilation is presented in Sec. <ref>. The full operation illustrated in Fig. <ref> is as follows:
e^-iθ Ẑ_i^DR (â_j^† + â_j) = ⟨0|_anc CΠ^X_anc,i CD_anc,j(iθ) CΠ^X_anc,i R_i(π) |0⟩_anc,
where CD(α) corresponds to a conditional displacement gate as defined in Table <ref>, and CΠ^X denotes the Hadamard-conjugated conditional parity gate introduced in Eq. (<ref>). The subscript `anc' refers to the ancilla transmon qubit, which is initialized to |0⟩_anc and guaranteed to end in |0⟩_anc at the end of the sequence unless an error occurred – a fact which could be used for error-detection purposes as previously mentioned.
§.§.§ Light-matter interactions
When light interacts with charged matter on a lattice, for instance when a material is placed in an optical cavity, the Hamiltonian is given by <cit.>
Ĥ = ωâ^†â - J∑_⟨i,j⟩ ĉ^†_i ĉ_j e^i g/√(L)(â^† + â) + h.c.
for a single light mode â, where ω corresponds to the free evolution of the photons, J is the strength of the hopping, g is a coupling constant which depends on the specifics of the system, such as the geometry and material composition of the cavity, and L is the number of lattice sites. Within the Peierls substitution approximation, the exponent, Ã = g/√(L)(â^† + â), is given by the particle charge multiplied by the line integral of the electromagnetic vector potential along the link from site j to site i.
For small g/√(L), the exponent in the second term can be expanded:
Ĥ = ωâ^†â-J/2∑_⟨ i,j⟩(ĉ^†_i ĉ_j + h.c.)
+g J/√(L)(â^† + â)∑_⟨ i,j⟩ĉ^†_i ĉ_j + O(Jg^2/L).
The first term can be implemented using a bosonic phase-space rotation as discussed in Sec. <ref>. The second term corresponds to fermionic hopping and can be implemented using the strategy of App. <ref> for fermions encoded in either transmons or dual-rail qubits. The third term can be reexpressed as exp(-it gJ/√(L)(â^† + â)X̂_i X̂_i+1) exp(-it gJ/√(L)(â^† + â)Ŷ_i Ŷ_i+1) for Jordan-Wigner adjacent sites. This corresponds to two doubly-conditional displacements, i.e., conditioned on two qubits, together with single-qubit rotations.
Transmon qubits. If the qubits are mapped to transmon qubits, doubly-conditional displacements can be compiled using the technique described in Sec. <ref> and illustrated in Fig. <ref>b. It requires a pair of conditional parity gates CΠ (Tab. <ref>) and a conditional displacement CD(α) (or alternatively, two pairs of conditional parity gates and an unconditional displacement).
Dual-rail qubits. If the qubits are mapped to dual-rail qubits, doubly-conditional displacements can be synthesized by iterating the compilation strategy for the singly-conditional displacement in Eq. (<ref>), developed there in the context of phonon-matter interactions. The full sequence is as follows:
e^-iθ Ẑ^DR_i Ẑ^DR_j (â_k^† + â_k) =
(⟨0|_anc ⊗ 1̂) CΠ^X_anc,i CΠ^X_anc,j CD_anc,k(iθ)
× CΠ^X_anc,j CΠ^X_anc,i R_i(π) R_j(π) (|0⟩_anc ⊗ 1̂),
where, similar to previous gates, we have projected out the ancillary qubit, which is initialized to |0⟩_anc and guaranteed (in the absence of noise) to be disentangled at the end of the sequence. With the above gate realized, single dual-rail qubit gates (i.e., beamsplitters) can be used to rotate the operator Ẑ^DR Ẑ^DR to X̂^DR X̂^DR or Ŷ^DR Ŷ^DR, enabling the implementation of all desired gates.
§ IMPLEMENTATION OF GAUGE FIELDS AND GAUGE-MATTER COUPLING
In this section, we present implementations of e^-iĤ t for pure gauge Hamiltonian terms – i.e., for the electric and magnetic fields – and for the gauge-invariant hopping which couples the gauge fields to matter. In particular, we discuss the implementation of these terms for the two illustrative examples in (2+1)D: a ℤ_2 gauge field and a U(1) gauge field, each playing a vital role in the paradigmatic LGTs discussed in Sec. <ref>. However, we emphasize that due to the digital nature of our implementations, more complex models can be implemented beyond those in Sec. <ref> using the techniques in this section, for instance models that contain both ℤ_2 and U(1) fields coupled to the same matter fields.
We present the implementation of pure ℤ_2 and U(1) gauge fields in Sec. <ref> and Sec. <ref>, respectively.
Following this, we briefly discuss the possibility to realize static gauge fields (which do not possess any dynamics of their own) in Sec. <ref>. We then return to our two examples and describe the implementation of gauge-invariant hopping terms in Sec. <ref> and Sec. <ref> for ℤ_2 and U(1) fields, respectively. Finally, we present a summary of the implemented terms in Tab. <ref> and Tab. <ref>.
§.§ Pure gauge dynamical ℤ_2 fields
In this section, we present two strategies for representing ℤ_2 gauge fields in our architecture – one that uses the transmon qubits, and another that relies on the dual-rail encoding discussed in Sec. <ref>. See also Sec. <ref> for the introduction of the ℤ_2 Hamiltonian and the notation, in particular how we label the links on which the gauge fields reside.
§.§.§ Electric field
The time evolution operator of the electric field defined in Tab. <ref> is e^-i g X̂_i,j t, where the coupling strength g is defined in Eq. (<ref>). This corresponds to a single-qubit rotation.
Transmon qubits.
This term is directly realized through on-resonant driving of each transmon using the R_i^ϕ=0(θ) gate defined in Tab. <ref>.
Dual-rail qubits.
This term is a single-qubit rotation. As discussed in Sec. <ref>, single-qubit rotations (defined in Tab. <ref>) around the X̂^DR and Ŷ^DR axes are carried out using beamsplitters BS_i,j(φ,θ) between the two cavities composing the dual-rail qubit.
§.§.§ Magnetic field
The time evolution operator associated with the magnetic field term defined in Tab. <ref> is e^-it(-B) Ẑ_i,j Ẑ_j,k Ẑ_k,l Ẑ_l,i, corresponding to a four-qubit entangling gate.
Transmon qubits. The exact gate sequence that implements this term is discussed in Sec. <ref> and illustrated in Fig. <ref>. Multi-body gates between the transmon qubits in our proposed circuit QED architecture have a compact form: the required displacements can all be carried out using a single ancilla mode. This removes the requirement for a costly Pauli gadget <cit.>, which implements this four-qubit gate using 6 two-qubit gates, though it instead requires bosonic SWAP gates to mediate the multi-qubit interaction. However, as previously mentioned, the bosonic SWAP operation has a very high fidelity in the proposed architecture (see Sec. <ref>).
Dual-rail qubits.
In Sec. <ref> we discussed how to obtain ZZ_i,j(θ) gates using the exponentiation gadget presented in Ref. <cit.>, which implements e^iθP̂ for any operator P̂ if P̂^2 = 1. Therefore, using the same method, we can extend this to include higher-weight Pauli strings in P̂. This circuit is illustrated in Fig. <ref>.
As mentioned previously, ancilla relaxation and dephasing errors, in addition to photon loss, can be detected using this general gate scheme <cit.>.
§.§ Pure gauge dynamical U(1) fields
Next, we turn to the U(1) gauge fields described by the Hamiltonian in Eq. (<ref>). As described in Sec. <ref>, we utilize the Schwinger-boson representation to encode the link degrees of freedom, requiring two bosonic modes per link. We note that the Schwinger-boson representation may be understood as an extension of the dual-rail qubit encoding to spin S > 1/2 <cit.>.
§.§.§ Electric field
In the Schwinger-Boson representation, the electric field energy can be written as
Ĥ_E = g^2/2 ∑_⟨i,j⟩ ((n̂^a_i,j - n̂^b_i,j)/2 - τ/2π)^2,
where g is the strength of the electric field and τ is the topological parameter. When multiplying out the square, cross-Kerr terms ∝ n̂^a_i,j n̂^b_i,j appear. Although we can in principle implement these terms, c.f. Sec. <ref>, they are harder to implement than self-Kerr terms on the same site. Luckily, we can remove the cross-Kerr terms by using the Schwinger-boson constraint n̂^a_i,j + n̂^b_i,j = 2S. Due to this identity, we may add a term g^2/2 ∑_⟨i,j⟩ ((n̂^a_i,j + n̂^b_i,j)/2)^2, which due to the constraint is just a constant, to the Hamiltonian in Eq. (<ref>) to obtain
Ĥ_E = g^2/2 ∑_⟨i,j⟩ (1/2((n̂^a_i,j)^2 + (n̂^b_i,j)^2) - τ/2π (n̂^a_i,j - n̂^b_i,j))
≡ ∑_⟨i,j⟩ ĥ_E,i,j
up to a constant, thereby removing the cross-Kerr term.
The time evolution under Ĥ_E fulfills exp(-iĤ_E t) = ∏_⟨i,j⟩ exp(-iĥ_E,i,j t). Because the â and b̂ modes commute, exp(-iĥ_E,i,j t) = e^-i g^2/2 (1/2 (n̂^a_i,j)^2 - τ/2π n̂^a_i,j) t e^-i g^2/2 (1/2 (n̂^b_i,j)^2 + τ/2π n̂^b_i,j) t. This can be implemented exactly using two SNAP(θ⃗) gates defined in Tab. <ref> on modes a and b:
exp(-iĥ_E,i,j t) = SNAP_a(θ⃗^a) SNAP_b(θ⃗^b),
where θ^a_n = g^2/2 (1/2 n^2 - τ/2π n) t and θ^b_n = g^2/2 (1/2 n^2 + τ/2π n) t.
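This phase choice can be verified directly; the following numerical sketch (arbitrary cutoff and couplings) checks that the two Fock-diagonal SNAP gates reproduce exp(-iĥ_E,i,j t) on a truncated Schwinger-boson pair:

```python
# Sketch: two SNAP gates with theta^a_n, theta^b_n realize exp(-i h_E t).
import numpy as np
from scipy.linalg import expm

nmax, g, tau, t = 5, 1.0, 0.7, 0.3
n = np.arange(nmax + 1)
theta_a = g**2 / 2 * (n**2 / 2 - tau / (2 * np.pi) * n) * t
theta_b = g**2 / 2 * (n**2 / 2 + tau / (2 * np.pi) * n) * t
U_snap = np.kron(np.diag(np.exp(-1j * theta_a)), np.diag(np.exp(-1j * theta_b)))

I = np.eye(nmax + 1)
Na, Nb = np.kron(np.diag(n * 1.0), I), np.kron(I, np.diag(n * 1.0))
h_E = g**2 / 2 * ((Na @ Na + Nb @ Nb) / 2 - tau / (2 * np.pi) * (Na - Nb))
print(np.allclose(U_snap, expm(-1j * h_E * t)))  # True
```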
§.§.§ Magnetic field
In (2+1)D, the magnetic field Hamiltonian, illustrated in Fig. <ref>a, acts on the four gauge field raising operators linking a square (also called plaquette) of fermionic sites labelled i,j,k,l. We discuss the compilation of a single plaquette exponential term exp(-iĤ_□ t) here, where
Ĥ_□ = -1/(4g^2(S(S+1))^2) (Ŝ^+_i,jŜ^+_j,kŜ^-_k,lŜ^-_l,i + h.c.).
Using our Schwinger-boson mapping of the gauge fields in Eq. (<ref>), we obtain a product of creation and annihilation operators, which we relabel a,b for the modes of link ⟨i,j⟩, c,d for link ⟨j,k⟩, e,f for link ⟨k,l⟩, and g,h for link ⟨l,i⟩ for simplicity of notation, such that:
Ĥ_□ = -1/(4g^2(S(S+1))^2) (â^†b̂ĉ^†d̂f̂ê^†ĥĝ^† + h.c.).
We show here how to realize the plaquette term using the method introduced in Ref. <cit.> and summarized in Sec. <ref>, which makes use of the group commutator relations given by the Baker-Campbell-Hausdorff (BCH) formula, specifically the relation
e^iÂθ e^iB̂θ e^-iÂθ e^-iB̂θ = e^-[Â,B̂]θ^2 + 𝒪(θ^3).
Through an appropriate choice of hybrid mode-qubit operators Â and B̂, this gives us the means to implement multiplication between commuting bosonic operators. We also use Trotter formulas to implement addition.
We rely on the above method and present a possible implementation that leverages the conditional beamsplitter gate introduced in Eq. (<ref>) as the fundamental building block. This approach has the advantage of relying on simple hardware primitives, at the cost of Trotter error. Following this, we briefly comment on the possibility to instead leverage a three-wave mixing term realized on the hardware level, c.f. App. <ref>. This latter approach would enable the compilation of the desired term without Trotter error, but uses a gate which is more complicated to engineer on the hardware level.
The key idea for this synthesis is to apply the BCH formula to conditional beamsplitters acting on the four pairs of modes around the plaquette, conditioned on the same ancillary qubit. Intuitively, this leads to multiplication of creation and annihilation operators for these four pairs of modes, yielding the eight-mode term in Eq. (<ref>).
We first show how to synthesise a product of four mode operators. Choosing e^-iÂθ = CBS^Y_a,b(-π/2,θ) = e^-iθ(iâ^†b̂ - iâb̂^†)Ŷ and e^-iĈθ = CBS^Z_c,d(0,θ) = e^-iθ(ĉ^†d̂ + ĉd̂^†)Ẑ, where Ĥ is a Hadamard gate on the qubit and CBS(φ,θ) is defined in Eq. (<ref>), we define the primitive
V(Aθ, Cθ) = e^iÂθ e^iĈθ e^-iÂθ e^-iĈθ.
This set of operations is illustrated in Fig. <ref>b. In order to reduce the error sufficiently, we use this primitive in a higher-order product formula <cit.>
Γ_A,C(θ) = V(Aγθ, Cγθ) V(-Aγθ, -Cγθ) V(Cγ^2θ, Aγ^2θ)
× V(-Cγ^2θ, -Aγ^2θ) V(Aγθ, Cγθ) V(-Aγθ, -Cγθ)
= e^-2θ^2(â^†b̂ĉ^†d̂ + â^†b̂ĉd̂^† - âb̂^†ĉ^†d̂ - âb̂^†ĉd̂^†) ⊗ X̂ + 𝒪(θ^5),
where γ = 1/√(4-2√(2)).
This expression contains two terms which are not part of the plaquette operator. In order to remove the unwanted terms, we apply the same four-mode synthesis again with different phases. This yields a new term Γ_A',C' obtained using the primitive V(A'θ, C'θ) with e^-iÂ'θ = CBS^Y_a,b(0,θ) and e^-iĈ'θ = CBS^Z_c,d(-π/2,θ). Combining Γ_A,C and Γ_A',C' in a second-order Trotter formula, we remove the unwanted terms,
Δ_A,C = Γ_A,C(θ/2) Γ_A',C'(θ) Γ_A,C(θ/2)
= e^((-2i â^†b̂ĉ^†d̂ + 2i âb̂^†ĉd̂^†) ⊗ 2iX̂)(-iθ)^2 + 𝒪(θ^5).
This set of operations is illustrated in Fig. <ref>c; we dropped the 𝒪(θ^6) Trotter error, as the 𝒪(θ^5) BCH error dominates.
Using a single-qubit gate on the qubit, one can also obtain (Δ_A,C)^†. The same expression can be obtained on the other four modes which are also involved in the plaquette term:
Δ̃_E,G = e^((-2i ê^†f̂ĝ^†ĥ + 2i êf̂^†ĝĥ^†) ⊗ 2iŶ)(-iθ)^2 + 𝒪(θ^5).
Note that Δ̃_E,G is chosen such that Ŷ is carried out on the qubit rather than X̂ – this is to enable the further use of the BCH formula, which we explain next.
In order to obtain the eight-wave mixing required to realise the plaquette term, we concatenate four four-mode terms according to
Ξ_{A,C},{E,G} = Δ_A,CΔ̃_E,GΔ_A,C^†Δ̃_E,G^†.
This set of operations is illustrated in Fig. <ref>d.
Similarly to the four-mode term, Eq. (<ref>) yields terms which are not part of the plaquette term. In order to remove them, we multiply Ξ_{A,C},{E,G} with a term which is obtained in the same way but with a different choice of phases in the beamsplitters:
Ξ_2,{A_2,C_2},{ E_2, G_2}= Δ_A_2,C_2Δ̃_E_2,G_2Δ^†_A_2,C_2Δ̃^†_E_2,G_2.
The constituent operations of this term are the following, where we use the same notation with primes and tildes as in the discussion of the synthesis of Ξ_{A,C},{E,G}. On modes a,b,c,d we use
A_2 = Ĥ CBS_a,b(π,1) Ĥ,
C_2 = CBS_c,d(0,1),
A'_2 = Ĥ CBS_a,b(π/2,1) Ĥ,
C'_2 = CBS_c,d(π/2,1).
On modes e,f,g,h we use
Ẽ_2 = Ĥ CBS_e,f(0,1) Ĥ,
G̃_2 = Ĥ Ŝ CBS_g,h(0,1) Ĥ Ŝ^†,
Ẽ'_2 = Ĥ CBS_e,f(π/2,1) Ĥ,
G̃'_2 = Ĥ Ŝ CBS_g,h(-π/2,1) Ĥ Ŝ^†.
In total, we obtain
Ξ_{A,C},{E,G} Ξ_2,{A_2,C_2},{E_2,G_2} =
e^-i(16(â^†b̂ĉ^†d̂f̂ê^†ĥĝ^† + âb̂^†ĉd̂^†f̂^†êĥ^†ĝ) ⊗ Ẑ)θ^4 + 𝒪(θ^5),
illustrated in Fig. <ref>e. Choosing θ = (t/(64g^2(S(S+1))^2))^1/4, we get time evolution under a single plaquette Hamiltonian for a timestep t.
Each CBS gate can be implemented using 2 conditional parity operations and 1 beamsplitter, c.f. Eq. (<ref>). We count conditional parities in the following. Each Γ can be implemented using 6 V operations (see Eq. (<ref>)), each of which requires 8 conditional parities; hence each Γ requires 48 conditional parities. Each Δ operation requires 3 Γ operations using the symmetric Trotter formula, implying that it requires 144 conditional parities. Each Ξ is built from 4 Δ terms and hence requires 576 conditional parities. The final approximation V_□ then requires 1152 conditional parities, as it is composed of two Ξ operations.
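The tally can be summarized as a short bookkeeping sketch of the counts just derived:

```python
# Conditional-parity (CPi) tally for the plaquette synthesis above.
CPI_PER_CBS = 2              # each conditional beamsplitter: 2 CPi + 1 BS
cpi_V = 4 * CPI_PER_CBS      # V uses 4 CBS gates                ->    8
cpi_Gamma = 6 * cpi_V        # higher-order product formula       ->   48
cpi_Delta = 3 * cpi_Gamma    # symmetric (second-order) Trotter   ->  144
cpi_Xi = 4 * cpi_Delta       # built from 4 Delta terms           ->  576
cpi_plaquette = 2 * cpi_Xi   # V_square: product of two Xi terms  -> 1152
print(cpi_plaquette)         # 1152
```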
We close with a discussion of the error term. Because each pair of mode operators has norm ‖â^†b̂‖ = √(S(S+1)) in the Schwinger-boson encoding of a large spin, and the error term contains 5 such pairs, our result implies that the distance between the thus-implemented unitary V_□ and the exact plaquette exponential is
‖V_□ - e^-i(16(â^†b̂ĉ^†d̂f̂ê^†ĥĝ^† + âb̂^†ĉd̂^†f̂^†êĥ^†ĝ) ⊗ Ẑ)t‖ = 𝒪(t^5/4/g^5/2).
In particular, this error is independent of S.
Another way of synthesising this term would be to use the three-wave mixing term introduced in App. <ref>. Because the additional terms appearing in the beamsplitter approach mentioned above would not appear using three-wave mixing, we could save on Trotter error and reduce the number of native operations needed by a prefactor. However, because the Trotter error is sub-leading, this will not change the asymptotic error scaling shown above.
§.§ Static gauge fields coupled to matter
Static gauge fields appear when a magnetic field <cit.> is coupled to matter and are crucial when, for example, considering the fractional quantum Hall effect <cit.>. In this case, the hopping term of the Hamiltonian has a fixed complex amplitude on each link, i.e.,
-J∑_⟨i,j|⟩(e^iφâ^†_iâ_j + h.c.),
where we have specialized to the case of bosonic matter. This term can be directly implemented by a beamsplitter BS_i,j(φ,θ) defined in Tab. <ref> via the appropriate choice of the phase φ for each link.
For fermionic matter, the hopping term generalizes as
-J∑_⟨i,j|⟩(e^iφĉ^†_iĉ_j + h.c.).
The phase can be implemented as part of the gates necessary for the Jordan-Wigner encoding described in App. <ref>, for either choice of qubit type. The FSWAP operation would remain unchanged, but the phase can be incorporated by conjugating the fermionic hopping by e^iφẐ on one of the qubits.
§.§ Dynamical ℤ_2 fields coupled to matter
Here we discuss gauge fields coupled to bosonic and fermionic matter. We first discuss the bosonic gauge-invariant hopping in Sec. <ref>, mapping the gauge fields to transmon and then dual rail qubits. We then discuss the fermionic gauge-invariant hopping in Sec. <ref>, mapping the fermions and the gauge fields to transmon and dual rail qubits.
§.§.§ Bosonic gauge-invariant hopping
The time evolution operator of the gauge-invariant bosonic hopping mediated by a ℤ_2 gauge field takes the form
e^-i t Ẑ_i,j(â^†_i â_j + â_i â^†_j).
In the following, we discuss the implementation of this operator using either transmons or dual rail (DR) qubits. For both cases, we use modes to directly encode the bosonic matter.
Field: transmons | Matter: modes. For the situation where we map the gauge fields onto transmon qubits, Eq. (<ref>) corresponds to a conditional beamsplitter, discussed in Sec. <ref>.
Field: dual rail | Matter: modes. For the case where we map the gauge fields onto dual-rail qubits, we leverage the fact that the parity of the photon number n̂ in one of the rails is equivalent to the Pauli operator Ẑ^DR (see Eq. (<ref>)). Therefore the time evolution of the hopping between sites i and j takes the form
e^-it e^iπn̂_i,j(â_i^† â_j + â_i â^†_j),
and can be compiled using the methods described in Sec. <ref> and shown in Fig. <ref>.
§.§.§ Fermionic gauge-invariant hopping
The fermionic SWAP networks used to implement fermionic gauge-invariant hopping require the exact same FSWAP operations as discussed in App. <ref>, where the gauge field also needs to be swapped. To make the hopping gauge invariant, we replace the iSWAP operation with
e^-it(-J) Ẑ_i,j(X̂_i X̂_j + Ŷ_i Ŷ_j)
= e^-it(-J) Ẑ_i,j(X̂_i X̂_j) e^-it(-J) Ẑ_i,j(Ŷ_i Ŷ_j)
= R^π/2_i(π/2) R^π/2_j(π/2) e^-it(-J) Ẑ_i,j(Ẑ_i Ẑ_j) R^π/2_i(-π/2) R^π/2_j(-π/2)
× R^0_i(π/2) R^0_j(π/2) e^-it(-J) Ẑ_i,j(Ẑ_i Ẑ_j) R^0_i(-π/2) R^0_j(-π/2),
using the fact that X̂ = R^π/2(π/2) Ẑ R^π/2(-π/2) and R^0(π/2) Ẑ R^0(-π/2) = -Ŷ, whose sign cancels in the product Ŷ_i Ŷ_j; here R^φ(θ) is defined in Tab. <ref>. We note that this gate sequence can be further compressed by combining successive single-qubit gates into a single rotation.
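These conjugation identities are easy to verify numerically; a small sketch (using the convention R^φ(θ) = exp(-iθ/2 (X̂cosφ + Ŷsinφ))) also confirms the sign cancellation noted above:

```python
# Sketch: single-qubit conjugation identities used in the decomposition above.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

R = lambda phi, th: expm(-1j * th / 2 * (np.cos(phi) * X + np.sin(phi) * Y))

print(np.allclose(R(np.pi / 2, np.pi / 2) @ Z @ R(np.pi / 2, -np.pi / 2), X))
print(np.allclose(R(0, np.pi / 2) @ Z @ R(0, -np.pi / 2), -Y))
# The -Y signs cancel pairwise in the product Y_i Y_j.
```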
We do not discuss the all-transmon qubit implementation of this model as this case does not leverage the advantages of our platform.
Field: transmon | Matter: dual rail.
Following Eq. (<ref>), single-qubit rotations can be performed on the dual-rail qubits using beamsplitter gates, while e^-it(-J) Ẑ_i,j(Ẑ_i^DR Ẑ_j^DR) is a transmon-Z-conditional dual-rail ZZ^DR operation. This latter gate is compiled in Eq. (<ref>) using an ancillary transmon initialized to |0⟩_anc that is ultimately traced out. However, we can adapt this same sequence to realize the desired gate by replacing the ancilla with the transmon encoding the gauge field in an unknown state (and removing the matrix element projection in Eq. (<ref>)).
Field: dual rail | Matter: transmon. This term requires the implementation of e^-it(-J) e^iπn̂_i,j(X̂_i X̂_j + Ŷ_i Ŷ_j) = e^-it(-J) e^iπn̂_i,j X̂_i X̂_j e^-it(-J) e^iπn̂_i,j Ŷ_i Ŷ_j, which can be implemented using additional single-qubit rotations from e^-iθ e^iπn̂_i,j Ẑ_i Ẑ_j. The latter is an instance of the parity-controlled gate CU^Π̅_i,j introduced in Sec. <ref>. Specifically, the intermediate gate in Eq. (<ref>) would be replaced by the two-transmon-qubit gate ZZ(θ), implemented in our architecture via the technique in Sec. <ref>.
Field: dual rail | Matter: dual rail. Following Eq. (<ref>), aside from single-qubit rotations, we require the implementation of e^-it(-J) Ẑ_i,j^DR(Ẑ_i^DR Ẑ_j^DR). This type of multi-dual-rail gate can be realized using the Pauli-exponentiation gadget as discussed in Sec. <ref> and shown in Fig. <ref>, there for the slightly more complex case of four dual-rail qubits. Alternatively, one can iterate the parity-control synthesis technique of Sec. <ref>, similar to the doubly-conditional displacement realized in Eq. (<ref>).
§.§ Dynamical U(1) fields coupled to matter
Next, we discuss the implementation of dynamicalU(1)fields coupled to bosonic (Sec. <ref>) and fermionic matter (Sec. <ref>). We show how to implement the dynamics of the hopping described in Eq. (<ref>) which has the form:
exp(-i Jt/(2√(S(S+1))) ∑_⟨i,j⟩ (m̂^†_i Ŝ^+_i,j m̂_j + h.c.)).
In the remainder of the section, we will assume large S and write √(S(S+1)) ≈ S. As in Sec. <ref>, we leverage the Schwinger-boson encoding to represent the U(1) gauge fields. Furthermore, for the case of fermionic matter, we separately consider transmons and dual-rail qubits.
§.§.§ Bosonic gauge-invariant hopping
For a bosonic matter site i denoted by operator d̂, the gauge-invariant hopping between site i and site j is given by
e^-it J/2S (d̂^†_i â^†_i,j b̂_i,j d̂_j + h.c.),
where â^†_i,j b̂_i,j is the Schwinger-boson representation of the link operator Ŝ^+_i,j. This unitary is generated by a four-mode mixing term and can be synthesized using the BCH formula, see Eq. (<ref>).
§.§.§ Fermionic gauge-invariant hopping
For fermions in (2+1)D, we first use FSWAP networks to move the sites to neighbouring sites within the JW encoding. To then perform the gauge-invariant hopping, we also SWAP the corresponding gauge fields, as depicted in Fig. <ref>. The remaining operation is then given by the nearest-neighbour hopping
e^-it J/2S (σ̂^+_i â^†_i,j b̂_i,j σ̂^-_j + h.c.),
where j is a neighbouring site to i. In (1+1)D, SWAP and FSWAP operations are not necessary. We now discuss how this term is synthesized for the cases where the fermions are encoded in transmons or dual-rail qubits.
Fermionic matter: transmons.
We present two schemes: a naive implementation, which contains Trotter error, and a scheme which avoids Trotter error (for this term alone) but requires a particular experimental adaptation of the hardware.
Scheme 1 – We rewrite the gauge-invariant hopping term as
e^-iJt/2S(Ĥ_1+Ĥ_2) ≈ e^-iJt/2S Ĥ_1 e^-iJt/2S Ĥ_2 + 𝒪((Jt/2S)^2),
where we used Trotterization and defined
Ĥ_1 = (X̂_i X̂_j + Ŷ_i Ŷ_j)(â^†_i,j b̂_i,j + h.c.),
Ĥ_2 = (Ŷ_i X̂_j - X̂_i Ŷ_j)(e^iπ/2 â^†_i,j b̂_i,j + h.c.).
Noting that X̂_i X̂_j and Ŷ_i Ŷ_j commute, as do Ŷ_i X̂_j and X̂_i Ŷ_j, we are then left with implementing doubly-conditional beamsplitters. This is done by sandwiching a beamsplitter with conditional parity operations conditioned on the qubits i and j, as shown in Eq. (<ref>). However, [Ĥ_1, Ĥ_2] ≠ 0, and this term will therefore acquire Trotter error.
Scheme 2 – This scheme does not introduce Trotter error, but requires additional engineering at the hardware level. We define the unitary as e^-iθĤ with Ĥ = σ̂_i^+ â^†b̂ σ̂_j^- + σ̂_i^- âb̂^† σ̂_j^+, where for simplicity we have dropped the indices i,j on the modes a and b and defined θ = Jt/2S.
We start by breaking up Ĥ into Ĥ = Ĉ + D̂, where
Ĉ = X̂_i(X̂_j(â^†b̂ + âb̂^†) + iŶ_j(â^†b̂ - âb̂^†))/4 ≡ X̂_i Ê_j,
D̂ = Ŷ_i(-iX̂_j(â^†b̂ - âb̂^†) + Ŷ_j(â^†b̂ + âb̂^†))/4 ≡ Ŷ_i F̂_j.
Indeed, [Ĉ, D̂] = 0 because {Ê_j, F̂_j} = 0.
It is helpful to rewrite:
Ê_j = (âb̂^† σ̂_j^+ + â^†b̂ σ̂_j^-)/2,
F̂_j = i(âb̂^† σ̂_j^+ - â^†b̂ σ̂_j^-)/2.
With this, we can now express the desired time-evolution operator as
e^-iθĤ = e^-iθD̂ e^-iθĈ = e^-iθŶ_i F̂_j e^-iθX̂_i Ê_j
= R^0_i(-π/2) CΠ_i,j e^-iθF̂ CΠ_i,j R^0_i(π/2)
× R^π/2_i(-π/2) CΠ_i,j e^-iθÊ CΠ_i,j R^π/2_i(π/2),
where R^φ(θ) (a single transmon-qubit rotation gate) and the CΠ gates are defined in Tab. <ref>.
This reduces the problem of realizing four-body interactions to the more manageable problem of realizing the three-body interaction Hamiltonians – the latter being more natural to realize in the circuit QED architecture given the primary source of non-linearity is the four-wave mixing property of Josephson junctions. We derive one possible implementation of these interactions in App. <ref>.
Fermionic matter: dual-rail.
In this case, we must implement a many-body gate that involves the two Schwinger-boson modes representing the gauge fields, a_i,j and b_i,j, and the four modes composing the two dual-rail qubits, c_i, d_i and c_j, d_j:
e^-iθ(ĉ_i^† d̂_i â^†_i,j b̂_i,j ĉ_j d̂_j^† + h.c.).
To implement such a gate, we use the BCH-formula synthesis methods presented in Sec. <ref> for pure U(1) gauge fields. In particular, we apply Eq. (<ref>), where we set e^iX̂Ô_1θ = e^((-2i ĉ_i^†d̂_i â_i,j^†b̂_i,j + 2i ĉ_id̂_i^† â_i,jb̂_i,j^†) ⊗ 2iX̂)(-iθ)^2 + O(θ^3), for which the compilation was shown in Eq. (<ref>), and set e^iŶÔ_2θ = e^-iŶ_k(ĉ_j^†d̂_j + h.c.)θ. This yields unwanted terms, which we cancel using the Trotter formula in Eq. (<ref>) through use of BCH with appropriate phases in the beamsplitters.
§ GATE COMPLEXITY AND COMPARISON TO ALL-QUBIT ALGORITHMS
In this section, we analyse the gate complexity of our qubit-boson Trotter approach. In particular, we discuss the interplay of Trotter errors with errors introduced by some of our compilation schemes. We first review the general principles of Trotter error analysis. We then discuss a particular Trotter implementation of the ℤ_2-Higgs model and the U(1) quantum link model and its error. While all compilations are exact for the ℤ_2-Higgs model, the compilation of the magnetic field term in the U(1) quantum link model (c.f. Sec. <ref>) introduces additional error. We therefore discuss the interplay of the Trotter error with this compilation error. While these sections focus on the asymptotic scaling of the whole Trotter algorithm, we close the section with a comparison of the explicit gate counts of a single Trotter step in our qubit-boson approach and compare it to an all-qubit approach. To do so, we introduce a qubit algorithm for simulating beamsplitters between bosonic modes.
§.§ General Trotter error analysis
We write the Hamiltonian as a sum of terms Ĥ = ∑_γ=1^Γ Ĥ_γ. We first decompose the time evolution exp(-iĤT) until time T into r time steps of duration t = T/r, i.e. exp(-iĤT) = (exp(-iĤt))^r.
An estimate of the error
ϵ_Trotter ≡ ‖(𝒮(t))^r - e^-iĤT‖
of a 2p-th order Trotter formula 𝒮(t) as introduced in Sec. <ref> is <cit.>
ϵ_Trotter = 𝒪((∑_γ ‖Ĥ_γ‖)^2p+1 T^2p+1/r^2p).
The number of Trotter steps for a target Trotter error is hence
r = 𝒪((∑_γ ‖Ĥ_γ‖)^(2p+1)/2p T^(2p+1)/2p / ϵ_Trotter^1/2p)
= 𝒪((∑_γ ‖Ĥ_γ‖ T)^1+1/2p ϵ_Trotter^-1/2p).
From this expression, we get the total number of gates as
N_gates,2p = r 5^p-1 N_gates,2p=2
= r 5^p-1 · 2∑_γ N_gates,γ,
where N_gates,γ is the number of gates required for the individual exponentials exp(-iĤ_γ t). We used that a Trotter formula of order 2p requires 5 times the number of gates of a Trotter formula of order 2p-2 when defined recursively <cit.>. The second-order Trotter formula requires twice as many gates as the first-order Trotter formula. Finally, the number of gates required for the first-order Trotter formula is the sum of the numbers of gates N_gates,γ required for each exponential. In cases where the individual exponentials can only be synthesized with error (dependent on t), N_gates,γ contributes to the overall complexity. Note that in general, the complexity of Trotter formulas is evaluated at fixed p. Hence, factors such as 5^p-1 above are dropped for estimating the asymptotic complexity in 𝒪 notation.
Approximating the Trotter error in terms of the norms of the Hamiltonian terms overestimates the error, because most terms in many-body Hamiltonians commute with each other. We therefore expect the Trotter error to be much smaller in practice; see e.g. <cit.> for a tighter estimate for the (1+1)D fermionic ℤ_2-Higgs model.
Bosonic Hamiltonians have an in-principle unbounded norm and therefore naively exhibit an unbounded product-formula error. However, in practice, infinitely high Fock states are not occupied during the dynamics, such that the error stays bounded. In particular, in the U(1) quantum link model, the spectral norm of the bosonic operators is bounded by √(S). Similarly, in the ℤ_2-Higgs model, using an initial state with a total number of N bosons in the system, bosonic operators are bounded by √(N).
We detail specific Trotter decompositions of the ℤ_2-Higgs model and the U(1) quantum link model and their errors next. We specialize to open boundary conditions on an L×L square lattice and use the triangle inequality ‖Ĥ_0+Ĥ_1‖ ≤ ‖Ĥ_0‖ + ‖Ĥ_1‖ as well as ‖aĤ_0‖ = |a|‖Ĥ_0‖. For bosonic matter, we assume evolution within a sector of total boson number N to estimate the Trotter error.
§.§ Trotter decomposition and gate complexity of the ℤ_2-Higgs model
The electric field and on-site interaction terms commute with each other, such that we can choose
Ĥ_0 = -g∑_⟨i,j⟩ X̂_i,j + U∑_i=1^L^2 n̂_i^2,
where ⟨i,j⟩ indicates summation over the 2L(L-1) bonds on a square lattice with L^2 sites. It follows from the triangle inequality and ‖X̂_i,j‖ = 1, ‖n̂_i‖ = N in a fixed particle sector with N particles that
‖Ĥ_0‖ ≤ 2gL(L-1) + |U|L^2N^2.
Next, we split the gauge-invariant hopping into four non-commuting terms ξ=1,2,3,4, where 1(2) denote the even(odd) hoppings in the horizontal direction and 3(4) the even(odd) hoppings in the vertical direction, c.f. Fig. <ref>a). We write this as
Ĥ_1^ξ = -J∑_⟨i,j⟩_ξ (m̂^†_i Ẑ_i,j m̂_j + h.c.).
Next we find ‖m̂^†_i Ẑ_i,j m̂_j + h.c.‖ ≤ 2‖m̂^†_i Ẑ_i,j m̂_j‖ = 2N, where we used that ‖m̂^†_i‖ = √(N) = ‖m̂_i‖. From this, we get
∑_ξ=1^4 ‖Ĥ_1^ξ‖ ≤ 4JNL(L-1),
where N=1 for fermions.
Finally, we divide up the plaquette terms into the two non-overlapping checkerboard patterns of the square lattice, c.f. Fig. <ref>b), which we label with ξ=1,2, such that
Ĥ_2^ξ=-B∑_i,j,k,l∈□_ξ(Ẑ_i,jẐ_j,kẐ_k,lẐ_l,i),
and from the triangle inequality,
∑_ξ ‖Ĥ_2^ξ‖ ≤ B(L-1)^2,
where we used that there are (L-1)^2 plaquettes.
Inserting Eqs. (<ref>), (<ref>), (<ref>) into Eq. (<ref>), we find a number of Trotter steps given by
r=
𝒪(((2g+|U|N^2+4JN+B)L^2T)^1+1/2p1/(ϵ_Trotter)^1/2p).
where N=1 and U=0 for fermionic matter.
In particular, we find that for bosonic matter the interaction term dominates the error. If U=0, the hopping term is instead the leading source of error. For fermionic matter, both the hopping and the field term contribute equally to the Trotter error.
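For orientation, the following sketch evaluates this step-count estimate for arbitrary example parameters (dropping the unknown 𝒪(1) prefactor, so only relative trends between parameter choices are meaningful):

```python
# Sketch: evaluate r ~ ((2g + |U| N^2 + 4 J N + B) L^2 T)^(1+1/2p) / eps^(1/2p)
# for the Z2-Higgs model, up to the dropped O(1) constant.
def trotter_steps(g, U, N, J, B, L, T, eps, p):
    norm_sum_T = (2 * g + abs(U) * N**2 + 4 * J * N + B) * L**2 * T
    return norm_sum_T ** (1 + 1 / (2 * p)) * eps ** (-1 / (2 * p))

for p in (1, 2):
    r = trotter_steps(g=1, U=1, N=1, J=1, B=1, L=4, T=1, eps=1e-2, p=p)
    print(f"order 2p={2 * p}: r ~ {r:.0f}")  # higher order -> far fewer steps
```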
All our compilations of the terms in the ℤ_2-Higgs model are exact and therefore N_gates,γ = 𝒪(1). There are (L-1)^2 plaquette terms on a square lattice. Therefore, we find that the total number of gates is given by
N_gates,2p=r(L-1)^2 5^p-1 2 ∑_γ N_gates,γ=
𝒪(((2g+|U|N^2+4JN+B)L^2T)^1+1/2pL^2/(ϵ_Trotter)^1/2p)
for fixed p. This means that for a large value of p and N = 𝒪(1) in L, the number of gates approximately scales as L^4, i.e., the square of the total number of sites. This makes sense, as the number of terms in the Hamiltonian is linear in the number of sites, and in our crude estimate the number of Trotter steps r for a given target error is linear in the number of terms in the Hamiltonian. We note that in most interesting situations N ∝ L^2, such that the scaling in L will be higher; in particular, the on-site term dominates the scaling with L in that case, i.e., approximately L^8 for large p. However, we expect this scaling to be much improved when considering tighter bounds on the Trotter error <cit.>.
§.§ Trotter decomposition of the U(1) quantum link model
The electric field and mass terms commute with each other and between lattice sites. Hence, we choose
Ĥ_0=g^2/2∑_⟨i,j|⟩(Ŝ^z_i,j+τ/2π)^2 + M∑_i(-1)^i n̂_i.
Using that spin-S operators fulfill ‖Ŝ^z‖ = S and ‖n̂_i‖ = N, we get from the triangle inequality that
‖Ĥ_0‖ ≤ g^2/2 L(L-1)(S+τ/2π)^2 + MNL^2.
Further, we split the gauge-invariant hopping in the same way as for the ℤ_2-Higgs model, i.e. into the two checkerboard patterns of the square lattice, writing
Ĥ_1^ξ = J/(2√(S(S+1))) ∑_⟨i,j⟩_ξ (m̂^†_i Ŝ^+_i,j m̂_j + h.c.).
Because ‖Ŝ^+‖ = ‖Ŝ^-‖ = √(S(S+1)), the S-dependence of the norms is cancelled by the prefactors, and we find that
∑_ξ ‖Ĥ_1^ξ‖ ≤ NJL(L-1).
Finally, akin to the ℤ_2-Higgs model, we use the checkerboard decomposition of the plaquette term, which we write as
Ĥ_2^ξ = -1/(4g^2(S(S+1))^2) ∑_i,j,k,l∈□_ξ (Ŝ^+_i,jŜ^+_j,kŜ^-_k,lŜ^-_l,i + h.c.),
Again, there is no S-dependence of the norm. Because there are (L-1)^2 plaquettes, the norm is
∑_ξ ‖Ĥ_2^ξ‖ ≤ (L-1)^2/(2g^2).
Inserting Eqs. (<ref>), (<ref>), (<ref>) into Eq. (<ref>), we find
r = 𝒪(((g^2S^2/2 + MN + NJ + 1/(2g^2))L^2T)^1+1/2p × 1/ϵ_Trotter^1/2p),
where we dropped some subleading contributions in S and L. Again, N=1 for fermionic matter. For large S, we find
r=𝒪(((gSL)^2T)^1+1/2p1/(ϵ_Trotter)^1/2p).
Note that a further decomposition of the Ĥ_n^ξ into terms that only act locally yields no further Trotter error, as the constituent Hamiltonians commute with each other. For example, exp(-iĤ_2^ξ t) = ∏_i,j,k,l∈□_ξ exp(it/(4g^2(S(S+1))^2)(Ŝ^+_i,jŜ^+_j,kŜ^-_k,lŜ^-_l,i + h.c.)) with no error.
Contrary to the ℤ_2-Higgs model, not all our compilations are exact for the U(1) quantum link model. Specifically, the magnetic field term is compiled using the BCH formula, leading to an error dependent on the timestep T/r. The N_gates,γ for the magnetic field term therefore contributes to the overall gate count, and we need to include this error in the analysis. This is the topic of the following subsection.
§.§ Gate complexity of the U(1) quantum link model
We focus on the simulation of the pure gauge theory, i.e. only including the electric and magnetic field terms, to keep the equations simpler to read. The gauge-invariant hopping can be synthesized without error, and the Trotter error is in any case dominated by the electric field term for large S; therefore, the analysis below generalizes trivially to the U(1) quantum link model with matter. We first discuss the BCH synthesis error encountered in the plaquette term and then combine this with the Trotter error.
First, consider the error of the synthesis of a single plaquette term, which we showed in Sec. <ref> to be given by 𝒪(g^-5/2 t^5/4). This error can be further reduced by splitting the evolution into r_□ steps of time t/r_□ (c.f. Fig. <ref>c), yielding
ϵ_□ = r_□ 𝒪(g^-5/2 t^5/4 r_□^-5/4)
= 𝒪(g^-5/2 t^5/4 r_□^-1/4)
for the error ϵ_□ to evolve under the Hamiltonian of a single plaquette until time t. Solving for r_□, we get
r_□ = 𝒪((g^-5/2 t^5/4 ϵ_□^-1)^4)
= 𝒪(g^-10 t^5 ϵ_□^-4).
Because the BCH error depends on the timestep t chosen in the Trotterization scheme, r_□ depends on the relation chosen between the target Trotter error and the BCH error. We make the choice that the total BCH error incurred to synthesize all plaquette terms for all Trotter steps scales the same as the Trotter error ϵ_Trotter. Because there are (L-1)^2 plaquette terms in the magnetic field Hamiltonian, and a 2p Trotter formula calls the magnetic field term 2× 5^p-1 times in each of the r Trotter steps, this total error is given by
ϵ_BCH=2 5^p-1 r(L-1)^2 ϵ_□.
Solving this equation for ϵ_□ and demanding ϵ_BCH=ϵ_Trotter, we find
ϵ_□ =𝒪(ϵ_Trotter/rL^2).
Inserting this equation into Eq. (<ref>), we find
r_□ =𝒪(t^5r^4L^8/ϵ_Trotter^4g^10)
=𝒪(T^5L^8/rϵ_Trotter^4g^10),
where in the second step we inserted t=T/r.
There are 𝒪(L^2) plaquette and electric field terms in each Trotter step. The electric field term can be synthesized with 𝒪(1) gates, while the plaquette term requires 𝒪(r_□) gates. Hence, the total gate complexity for implementing the U(1) gauge theory is given by
N_gates,2p=𝒪(rL^2(1+r_□)).
Inserting Eq. (<ref>) for r and Eq. (<ref>) for r_□, we find
N_gates,2p=
𝒪(L^2(gSLT)^1+1/2p/(ϵ_Trotter)^1/2p+T^5L^10/ϵ_Trotter^4g^10),
where the first(second) term inside the bracket is the electric field (magnetic field) contribution.
This expression shows that the contribution to the total number of gates coming from the magnetic field term (the second term) is independent of the cutoff S. This is in contrast to qubit methods in the Fock-binary encoding, where this contribution scales as 𝒪(log(S)) <cit.>. This improvement comes at the cost of a high polynomial scaling with ϵ_Trotter, T and L. However, this scaling can be systematically improved by using higher-order BCH formulas in the synthesis of the magnetic field term <cit.>.
§.§ Gate-complexity advantage over all-qubit hardware
While in the previous sections, we were considering end-to-end asymptotic gate counts for the whole algorithm, here we compare our hybrid qubit-oscillator compilation to an all-qubit compilation by calculating explicit gate counts for implementing single Hamiltonian terms used in a Trotter decomposition. As most Hamiltonian terms only require a handful of qubit-boson gates, this is straight-forward in our qubit-boson approach. By contrast, compiling qubit-boson operations onto qubits is highly non-trivial and hence, most of this section is about constructing a suitable qubit algorithm and counting its gate requirement. To do so, we must first choose a mapping of bosons to qubits and calculate the gate complexity within this qubit approach. A number of different boson-to-qubit mappings exist: the unary encoding <cit.> which is wasteful in number of qubits but straightforward for gate application, the Jordan-Lee-Preskill encoding for phase-space-formulated bosonic Hamiltonians <cit.>, the Gray encoding which requires minimal numbers of bit flips when transferring between Fock states <cit.>, and the binary encoding <cit.> of the Fock-state basis. See also Ref. <cit.> for a discussion of the complexity of simulating bosons in gauge theories.
The Fock-binary encoding – for which the action of a displacement operator is explained in App. F of <cit.> – is efficient in terms of qubit number and operations, because native Fock-space operations usually contain diagonal elements which can be transformed into diagonal qubit operations such as qubit rotations. The encoding of the state in binary requires a register of qubits as large as n = ⌈log_2(N_max+1)⌉, where N_max is the maximum Fock state which can be accessed during the simulation. The protocol entangles this register with an ancilla qubit. The ancilla qubit stores the relative amplitude of Fock states 2j and 2j+1, and the register state 2j+1 holds the information needed to convert the relative amplitudes to absolute amplitudes.
One severe limitation of simulating bosonic operations with qubits is the prefactor coming from the action of the creation and annihilation operators: â^†|k⟩ = √(k+1)|k+1⟩ and â|k⟩ = √(k)|k-1⟩, where |k⟩ is the Fock state. When performing quantum simulation in regimes intractable with classical methods, for models with bosonic matter (such as the ℤ_2-Higgs model we study here) it is essential to calculate the square-root factors, as they are crucial for understanding the physics of the model. One way of calculating a square root with qubit-only hardware on the fly is to first calculate the inverse square root using Newton iterations, and then multiply the result with the initial value to obtain the square root. This results in <cit.>
N_CNOT / SQRT=
(270m+126)n^2 + (228m+96)n-12m
operations, where n is the number of qubits required to represent the maximum Fock state in binary and m is the number of Newton iterations required for a set precision. As Newton's method converges quadratically, m=2 or m=3 is usually sufficient to obtain errors less than 10^-4 <cit.>. Therefore, at least when using a naive arithmetic approach, the square-root requirement leads to an extremely large overhead of thousands of CNOT gates when only using qubits as we show below.
We calculate here the number of CNOT gates required to perform a single Trotter step of the ℤ_2-Higgs model in (1+1)D on all-qubit hardware using the Fock-binary encoding, keeping the gauge fields. We do not consider here the magnetic field term, which appears in higher dimensions. However, for ℤ_2 gauge fields, which can be mapped to qubits, the qubit-only and the qubit-oscillator circuits to implement this term will be the same. Therefore, no additional Fock-binary gate-count calculation and comparison would need to be made in higher dimensions for this model. The CNOT gate count originates from expressing the gauge-invariant hopping term of the bosonic matter in the qubit representation, because the electric field term for ℤ_2 gauge fields corresponds only to single-qubit gates (see Sec. <ref>). We do not treat the bosonic Hubbard on-site interaction; however, it could be implemented by imprinting the phase corresponding to n^2 in binary using an oracle (e.g., QRAM). The gauge-invariant hopping corresponds to a qubit-conditional beamsplitter term. We convert this to an all-qubit circuit using the Fock-binary encoding, for which the gate counts are explained in App. <ref>. We obtain the following scaling in terms of CNOT gates for the qubit-conditional beamsplitter term (with m=2 Newton iterations), i.e. the bosonic ℤ_2-Higgs model in (1+1)D:
N_CNOT / ℤ_2 = 12 (2673 n^2 + 1160 n - 34).
In Fig. <ref>, we compare gate counts for the ℤ_2-Higgs and U(1) quantum link models in both types of hardware. The jumps in the qubit-only lines are due to the logarithmic scaling of the qubit register required to represent increasing Hilbert space cutoffs. The U(1) quantum link model in (1+1)D corresponds to the Schwinger model, for which we report the number of CNOT gates required for one Trotter step to be <cit.>:
N_CNOT / U(1) = 9n^2-7n+34.
In comparison, the number of entangling gates required to implement these models in hybrid qubit-oscillator hardware is 𝒪(1). The number of conditional parities CΠ required for the ℤ_2-Higgs and U(1) quantum link models is:
N_CΠ / ℤ_2 = 2
N_CΠ / U(1) = 8.
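The following sketch evaluates these per-Trotter-step formulas side by side (the Fock cutoffs N_max are arbitrary example values):

```python
# Side-by-side entangling-gate counts per Trotter step: qubit-only Fock-binary
# encoding vs. hybrid oscillator-qubit hardware, using the formulas above.
import math

def n_cnot_z2(n):          # Z2-Higgs, m = 2 Newton iterations folded in
    return 12 * (2673 * n**2 + 1160 * n - 34)

def n_cnot_u1(n):          # U(1) quantum link model (Schwinger model)
    return 9 * n**2 - 7 * n + 34

for n_max in (3, 7, 15):
    n = math.ceil(math.log2(n_max + 1))
    print(f"N_max={n_max:2d} (n={n}): Z2 {n_cnot_z2(n):>6d} CNOTs vs 2 CPi; "
          f"U(1) {n_cnot_u1(n):>3d} CNOTs vs 8 CPi")
```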
Finally, we consider the corresponding circuit fidelities for the execution of one Trotter step for a system size of 2 sites. This is of utmost importance in the NISQ era, where entangling gate fidelities rarely reach above 99.99% <cit.>. In the fidelity calculation, we assume single-qubit gates and beamsplitters to have ideal fidelity. We plot the circuit fidelity with respect to the entangling-gate fidelity, which corresponds in the all-qubit case to the CNOT gate fidelity and in the hybrid oscillator-qubit case to the CΠ fidelity. Both types of hardware implement the same underlying circuits and will therefore encounter the same Trotter error (the bosonic mapping is exact). Therefore, Trotter error is not taken into account in the circuit fidelity.
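A minimal sketch of this comparison, under the stated assumption of ideal single-qubit gates and beamsplitters so that the circuit fidelity is approximated by F^N for N entangling gates of fidelity F (the n = 2 register size is an example value):

```python
# Sketch: circuit fidelity ~ F**N for N entangling gates of fidelity F.
def n_cnot_u1(n):
    return 9 * n**2 - 7 * n + 34

for F in (0.999, 0.9999):
    print(f"F={F}: hybrid U(1) step {F**8:.4f} vs "
          f"qubit-only (n=2) {F**n_cnot_u1(2):.4f}")
```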
In the fault-tolerant era, ideal fidelities can only be approached with large-distance codes, which directly increase the space-time volume of the computation. Therefore, even in the fault-tolerant era for all-qubit hardware, entangling gate fidelities will be of concern, and non-error-corrected hybrid oscillator-qubit hardware may provide advantages. This is especially true considering the extremely large circuit depths required when mapping gauge fields or bosonic modes to qubits. In the case of the ℤ_2-Higgs model, which contains bosonic matter, the square-root factors lead to the requirement for extremely high-fidelity operation of a qubit-only device, unless we see substantial algorithmic improvements in arithmetic. In this case, the experimental improvements in hybrid oscillator-qubit hardware may enable progress to be made on bosonic models even at relatively large system sizes such as the ones presented here. Conversely, however, we must note that the harmonic approximations used for native bosonic hardware may break down if large occupation cutoffs are required. For these reasons, this work suggests the existence of a regime of advantage, but experimental and algorithmic realities need to be considered in greater detail to verify this advantage.
These results suggest that for Trotter-based simulations an advantage may be seen for hybrid hardware; however, a full comparison of qubit to oscillator-qubit devices is challenging to make. For one, a full comparison between the two requires that we compare the best implementations of the best algorithms for both hardware in an agnostic fashion. This is especially significant because alternative methods of simulation, such as qubitization or LCU <cit.> do not require square roots at the price of an increased number of ancillary qubits. A full comparison would need to consider optimized versions of all available algorithms for both hardware platforms.
Second, a more detailed discussion of the approximation errors incurred in the approaches (such as the large S limit or the use of BCH approximations and the different errors in the Trotter formula based on the form of the Hamiltonian) is needed to properly compare different approaches. For these reasons, we cannot claim that this work suggests a universal advantage for simulating ℤ_2 or U(1) LGTs, but it does strongly suggest advantages are likely to exist in certain regimes and subsequent work is needed to understand the shape and size of these regions of advantage.
§ MEASUREMENTS, ALGORITHMS AND NUMERICAL BENCHMARKS
In the previous sections, we have introduced all of the necessary compilation methods to simulate both the ℤ_2-Higgs model and the U(1) quantum link model (with either bosonic or fermionic matter) in (2+1)D. As an explicit demonstration of these compilation methods in practice, in this section we apply them to the (1+1)D variants of these two models. In particular, we present schemes to measure short- and long-range observables, and near-term (Trotter and variational approaches) and longer-term (quantum signal processing) algorithms for dynamics and ground-state preparation, which we benchmark numerically. In particular, to enable approximate ground-state preparation using the variational quantum eigensolver (VQE) approach, we show how to measure the expectation value of the Hamiltonian. We also develop an ancilla-free method for using post-selection on the Gauss's law gauge constraint for partial error detection.
We use Bosonic Qiskit for our simulations <cit.>. Each mode is represented with ⌈log_2(N_b+1)+1⌉ qubits, where N_b is the total number of bosons in the system (which is conserved for our Hamiltonians), in order to make sure that the commutation relation for the truncated mode operators obeys [â,â^†]=1 within the accessible Hilbert space. We also include single-qubit amplitude and phase damping error channels using Qiskit Aer <cit.> and neglect photon loss and dephasing of the cavities. For transmon qubits, T_2 ∼ T_1 <cit.>. In tantalum transmons it was shown that T_1 > 500 μs is possible <cit.>; however, we choose a conservative value of T_1 = T_2 = 200 μs <cit.>. This implies a pure dephasing time of T_ϕ = (1/T_2 - 1/(2T_1))^-1 = 400 μs. Cavities have very little dephasing and photon loss times exceeding 1 ms <cit.>, and most of our numerical examples do not access high Fock states, justifying our neglecting these processes.
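These parameter choices can be checked directly (a trivial sketch; N_b = 3 is an arbitrary example for the conserved total boson number):

```python
# Quick check of the simulation parameters quoted above.
import math

N_b = 3
print(math.ceil(math.log2(N_b + 1) + 1))   # qubits per mode -> 3

T1 = T2 = 200e-6                           # seconds
T_phi = 1 / (1 / T2 - 1 / (2 * T1))
print(T_phi)                               # 4e-4 s = 400 us
```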
This section is structured as follows. In Sec. <ref>, we discuss the (1+1)D variants of the ℤ_2-Higgs and U(1) quantum link models, their physics, and challenges in previous approaches to solving them. In Sec. <ref> we investigate time dynamics, numerically benchmarking a simulation of the ℤ_2-Higgs model while including the dominant source of noise. In Sec. <ref>, we show that ground states of these models can be prepared using a variational quantum eigensolver (VQE), for which we introduce an optimization technique better adapted to noise. In Sec. <ref>, we analyze in detail how to mitigate the influence of shot noise and decoherence. In Sec. <ref>, we show how to measure observables relevant to identifying phase transitions, such as the pair gap, the superfluid density, and the string order correlator with respect to the VQE ground states. In Sec. <ref> we present a perspective on how to implement a ground-state preparation technique with provable success probabilities, known as quantum signal processing.
§.§ Models and Physics
To begin, we present the two models that we use for explicit demonstration throughout this section: the ℤ_2-Higgs model containing bosonic matter sites, and the U(1) quantum link model containing fermionic matter sites, both in (1+1)D. These models are special cases of the more general LGT Hamiltonians presented in Section <ref>; we discuss the details and implementation of each below. In particular, we can use the architectures presented in Fig. <ref>c),d) for one spatial dimension by winding a “snake” pattern through the square lattice. This way, every cavity and qubit represents either a gauge field or a matter site.
§.§.§ Z2-Higgs model
The ℤ_2 lattice gauge theory was introduced by Wegner <cit.> as a simple model showing signatures of confinement and deconfinement at the critical point <cit.>, making it an ideal testbed for early testing of novel methods for LGT simulations. In two spatial dimensions in the absence of an electric field, its ground state encodes a spin-1/2 degree of freedom and can therefore be used for quantum error correction <cit.>. ℤ_2 lattice gauge theory with bosonic matter is reminiscent of the Higgs sector coupled to the non-Abelian SU(2) electroweak gauge sector (W and Z bosons) of the (3+1)D Standard Model of particle physics
with the two simplifications of a lower dimensionality and a ℤ_2 gauge field instead of the non-Abelian SU(2) electroweak gauge fields. The Hamiltonian in (1+1)D is written as follows:
Ĥ_ℤ_2 = - g ∑_i=1^L-1X̂_i,i+1 + U ∑_i=1^Ln̂_i^2
- J ∑_i=1^L-1( â^†_iẐ_i,i+1â_i+1+ h.c.) ,
where the symbols are the same as described in the introduction (see Section <ref>), and L is the number of sites in the chain.
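As a purely classical cross-check of this Hamiltonian (also reused in the benchmarks below), a truncated exact-diagonalization construction can be written in a few lines; the cutoff n_max, the site ordering, and all helper names are our illustrative choices, not part of the hardware scheme.

```python
# Classical exact-diagonalization cross-check of the Z2-Higgs Hamiltonian.
import numpy as np
from functools import reduce

n_max = 4                                            # Fock cutoff per mode
a = np.diag(np.sqrt(np.arange(1, n_max + 1)), 1)     # truncated annihilation
n = a.conj().T @ a
Ib, Iq = np.eye(n_max + 1), np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def embed_z2(L, op_dict):
    # ordering: mode_1, link_12, mode_2, link_23, ..., mode_L
    ops = [Ib if k % 2 == 0 else Iq for k in range(2 * L - 1)]
    for k, op in op_dict.items():
        ops[k] = op
    return reduce(np.kron, ops)

def z2_higgs(L, g, U, J):
    H = 0
    for i in range(L):
        H = H + U * embed_z2(L, {2 * i: n @ n})      # on-site interaction
    for i in range(L - 1):
        H = H - g * embed_z2(L, {2 * i + 1: X})      # electric field term
        hop = embed_z2(L, {2 * i: a.conj().T, 2 * i + 1: Z, 2 * i + 2: a})
        H = H - J * (hop + hop.conj().T)             # gauge-invariant hopping
    return H

print(np.linalg.eigvalsh(z2_higgs(L=3, g=1.0, U=1.0, J=1.0))[0])
```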
The ℤ_2 model in (1+1)D in the hardcore boson limit U→∞ has previously been investigated in depth <cit.>, revealing intriguing phase transitions between insulating and Luttinger liquid phases <cit.>. A digital simulation of this model for hardcore-bosons has been performed in Ref. <cit.>. A basic building block of two matter and one gauge field site has been implemented in an experiment with ultracold atoms in a Floquet approach <cit.>. However, this approach is challenging to generalize away from the single particle limit in the presence of on-site interactions U due to Floquet heating, which is important when considering parameter regimes away from the hardcore-boson limit. Our digital implementation enables a fully flexible study of this model, including regimes away from the hardcore limit. While we focus here on the implementation aspects of the model, enabling a study away from the hardcore boson limit, we study its physics and phase diagram in a separate publication <cit.>.
§.§.§ U(1) quantum link model
The U(1) quantum link model constrained to (1+1)D with fermionic matter <cit.> reads
Ĥ_U(1) = g^2/2∑_i=1^L-1(Ŝ^z_i,i+1 +
τ/2π)^2 + M∑_i=1^L(-1)^i ĉ^†_i ĉ_i
+ J̃/2S∑_i=1^L-1(ĉ^†_iŜ^+_i,i+1ĉ_i+1 + h.c.),
where compared to Eq. (<ref>), the definition of J differs by J̃=J/2 and we set √(S(S+1))→ S in the denominator of the hopping term, i.e. we consider the limit S→∞.
This model can be connected to a continuum field theory by introducing a lattice spacing a by J→ J/a and g→ g√(a) and taking the limit a→ 0.
This model shares many properties with quantum chromodynamics in (3+1)D: chiral symmetry breaking, confinement, a U(1)_A quantum anomaly, and a topological τ-term, which gives rise to a non-trivial topological vacuum structure <cit.>. Due to the sign problem, this τ-dependence of the model cannot be studied with quantum Monte Carlo methods. One possible way to circumvent the sign problem is using the tensor network (TN) approach, in particular matrix product states (MPS) in (1+1)D. Using MPS, the spectrum of this model has been computed, and the model has been studied at non-zero temperature, non-zero chemical potential, and with a non-zero τ-term <cit.> (see Ref. <cit.> for a review). In particular, Refs. <cit.> studied the first-order phase transition at τ=π, in which the CP symmetry is spontaneously broken. The first-order phase transition line terminates at a second-order critical point <cit.>. This model has been extensively simulated in superconducting qubits <cit.>.
Despite these successes of studying sign-problem afflicted regimes with TN, TN approaches have several limitations. First, highly entangled states such as those with volume law entanglement can occur in the evolution after a quench or in thermal states, and cannot be efficiently described using matrix product states <cit.>, the most widely used TN. This is particularly apparent in multi-stage thermalization dynamics such as after instabilities <cit.>. Second, the computational cost of TN algorithms increases with increasing dimensionality of the lattice field theory under consideration. For MPS, a particular kind of one-dimensional TN, the leading-order computational cost of the variational ground-state optimization in the case of open boundary conditions scales as 𝒪(D_b^3) <cit.>, where D_b, called the bond dimension, indicates the tensor size in the MPS. For the generalization to (2+1)D, called projected entangled pair states (PEPS) <cit.>, computational costs can scale up to 𝒪(D_b^10) <cit.>. These costs are still polynomial in the bond dimension, but substantially limit the values of D_b that can be reached in practice. There have been TN studies of gauge theories in (2+1)D and (3+1)D <cit.>, but designing efficient TN algorithms in higher dimensions remains a fundamental challenge.
For this Hamiltonian (Eq. (<ref>)), Refs. <cit.> explored whether the gauge degrees of freedom can be efficiently represented with finite-dimensional systems. For intermediate values of the coupling, g√(a)∼ 0.1, Ref. <cit.> demonstrated that the results converge rapidly, implying that a reasonable accuracy can be obtained even with small cutoffs N_max of the gauge-field truncation. To be more precise, the truncated model yielded fast convergence to the exact ground state for N_max ranging from 3 to 9, reaching sub-permille precision in the energy density for N_max=9 <cit.>. However, when approaching the continuum limit, g√(a)→ 0, the increasing electric field fluctuations require an increasingly large number of states and therefore an increasingly large (up to exponentially large) cutoff. This strongly motivates applying hybrid oscillator-qubit quantum devices to U(1) lattice gauge theory.
§.§ Dynamics
To illustrate the power of our digital approach to qubit-boson quantum simulation of lattice gauge theories, we show that gauge-constraint-induced dynamics can be observed in near-term noisy hardware. Here, we focus on the ℤ_2-Higgs model for U=0.
We first need to prepare an initial state of interest. As discussed in the exposition of the cQED platform in Section <ref>, it is straightforward to prepare the modes in a product state of Fock states. We consider an L=5-site chain prepared in the states |ψ_0⟩=|00100⟩ and |ψ_1⟩=|00200⟩. The limitation to five sites is not restrictive for short timescales – because the interactions are short-range, excitations spread with a finite velocity, and as long as the “lightcone” does not reach the boundary, the results obtained here are the same as they would be in a larger system. The qubits representing the gauge fields must then be initialized according to Gauss's law, which requires consistency between the qubits and the parity in each mode: choosing the leftmost gauge field qubit to be in |+⟩ and the sites to be in the Fock state |00100⟩, the gauge field qubits situated on the links between the bosonic sites need to be initialized in |++--⟩. Likewise, for bosonic sites in |00200⟩, the corresponding gauge field state is |++++⟩. The fact that the state of the gauge fields is fully determined by the state of the sites shows that the gauge fields could be integrated out, which is usually the case for lattice gauge theories in (1+1)D. The reverse is not true, however: the state of the gauge fields does not fully determine the state of the bosonic sites but only their parity. This is in contrast to hard-core bosons, where the parity fully determines the occupation number <cit.>. See also Ref. <cit.> for a more in-depth consideration of these dynamics in the context of hybrid simulation in trapped ions.
We evolve the initial states |ψ_0⟩ and |ψ_1⟩ under the ℤ_2-Higgs model with U=0 using a first-order Trotterization e^-iĤ_ℤ_2t≈ (Û_1 Û_2)^r and timestep Δ t=t/r with
Û_1 = ∏_i=1^L-1 e^-i g X̂_i,i+1Δ t,
Û_2 = ∏_i=1^L-1 e^-i J Ẑ_i,i+1(â^†_i â_i+1 + â_i â^†_i+1)Δ t.
We have discussed in Section <ref> how the time evolution operator for the individual Hamiltonian terms in Eq. (<ref>) can be implemented using native gates in our proposed architecture.
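A minimal classical emulation of this Trotter scheme, reusing a, n, X, Z and embed_z2 from the exact-diagonalization sketch above, might look as follows; on hardware, each exponential factor is instead compiled into the native single-qubit rotations and conditional beamsplitters, so this snippet serves only as a numerical reference.

```python
# Classical reference for the first-order Trotter evolution (U_1 U_2)^r.
from scipy.linalg import expm

def trotter_step(L, g, J, dt):
    H_field = -g * sum(embed_z2(L, {2 * i + 1: X}) for i in range(L - 1))
    hops = [embed_z2(L, {2 * i: a.conj().T, 2 * i + 1: Z, 2 * i + 2: a})
            for i in range(L - 1)]
    H_hop = -J * sum(h + h.conj().T for h in hops)
    # U_1 (field) and U_2 (hopping); each set of factors commutes internally
    return expm(-1j * H_hop * dt) @ expm(-1j * H_field * dt)

def evolve(psi0, L, g, J, t, r):
    U = trotter_step(L, g, J, t / r)
    psi = psi0.copy()
    for _ in range(r):
        psi = U @ psi
    return psi
```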
At the end of the simulation, we measure the state of the qubits and modes. The measurement of the mode occupations is carried out bit-by-bit in a binary representation, first mapping the boson number parity onto the qubit using an SQR gate, making a mid-circuit measurement, resetting the qubit, then measuring the super-parity, and so on, as has been demonstrated experimentally <cit.>. This requires a circuit with a depth that is logarithmic in the maximum boson occupation cutoff. If the qubits representing the gauge fields are read out first, they can be reused to iteratively read out the Fock state of the cavity mode to which they are dispersively coupled.
To show that these dynamics can be probed in near-term hardware, we implement the (1+1)D gate sequences in Bosonic Qiskit <cit.>, assuming an implementation in which the ℤ_2 gauge fields are encoded in transmons.
In order to simulate the dynamics in the presence of noise, we need to specify the gate durations we assume for the potential experimental implementation using the hardware in Sec. <ref>. We assume a value of χ≈ 3 MHz for the dispersive interaction strength. The duration of each Trotter step is then dominated by the conditional beamsplitter operation, which is on the order of 2.1 μs, consisting of two conditional parity gates which take 1 μs each and one beamsplitter which has a duration on the order of 100 ns <cit.>. Single-qubit gates take on the order of 10 ns. The Trotter step size in our numerical simulation, Δ t=0.1/J, was chosen to be small enough to recover the expected physical behavior in comparison to exact diagonalisation. We chose 1000 shots per circuit, which leads to both low shot-noise errors and a realistic experimental runtime: because each Trotter step has depth 2, a depth-one circuit takes <2.2 μs and we need 85 Trotter steps to evolve to Jt=8.5. Each circuit therefore takes at most 2× 85× 2.2 μs=374 μs. We measure 85 circuits and therefore, with 1000 shots per circuit, the total projected time to take the experimental data of Fig. <ref>b) for g=5J is 1000×85×374 μs≈ 32 s.
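The runtime estimate above can be reproduced with a few lines of arithmetic; the durations are the ones quoted in the text.

```python
# Sanity check of the projected experimental runtime quoted above.
layer = 2.2e-6                  # one circuit layer: < 2.2 us (CBS-dominated)
steps = 85                      # Trotter steps to reach Jt = 8.5 at dt = 0.1/J
t_circuit = 2 * steps * layer   # depth-2 Trotter steps  ->  374 us per shot
total = 1000 * 85 * t_circuit   # shots x circuits
print(f"{t_circuit * 1e6:.0f} us per shot, {total:.0f} s in total")  # ~32 s
```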
In the regime of infinite relative hopping strength (i.e., g=0), starting from the Fock state |00100⟩ with open boundary conditions, we find a linear light-cone-like spreading of the boson, cf. Fig. <ref>. By contrast, large field strengths prohibit the gauge fields from flipping during hopping events, which leaves the boson confined to the central site, as predicted and observed in cold atoms in Ref. <cit.>. Strikingly, when starting from a Fock state with an even initial number of bosons, such as |00200⟩, we find a linear light-cone even in the large-field-strength regime, with the hopping taking place on a much slower time scale. This hopping is enabled by a perturbative, energy-conserving pair-hopping process of strength J^2/2g, in which a qubit is flipped by the first boson hop and flipped back to its original value by the second <cit.>. Despite the transmon decoherence and the much longer time scale (the circuit to evolve to the latest time point for g=5J takes around 2×85×2.2 μs ≈ 374 μs of experimental time), these coherent dynamics are observable and match exact diagonalisation.
§.§ Qubit-boson ground state preparation with VQE
In this section, we show how the variational quantum eigensolver (VQE) algorithm <cit.> (see Refs. <cit.> for important follow-up works) can be implemented not only on oscillators/modes <cit.>, but also on our proposed hybrid qubit-oscillator architecture to prepare ground states of the ℤ_2-Higgs and U(1) quantum link models. VQE is a variational ansatz for the ground state wavefunction in which layers of unitaries Û_k act on some initial state. In our approach, sometimes called the quantum approximate optimisation algorithm, the unitaries are related to the Hamiltonian
Ĥ=∑_k,iĥ_k(i),
where ĥ_k(i) are terms of the Hamiltonian with i the site index on which they are applied. We choose
U_k(θ,i)=e^iθĥ_k(i),
where θ is a variational parameter which is optimized. Each gate in the ansatz has an independent angle; we denote the vector of angles with θ.
In the VQE, the classical and quantum processors work together to reach the ground state by executing an optimization algorithm which updates the vector of ansatz angles θ→θ', and uses the quantum computer to measure the expected energy of the ansatz characterised by those angles, ⟨ψ(θ')|Ĥ|ψ(θ')⟩. The intuition behind this VQE procedure comes from the fact that for our choice of ansatz, a sufficiently large number of layers, and a particular choice of θ, VQE realizes adiabatic state preparation (note the parallel between choosing the gate variable angles to be proportional to time for dynamics, and choosing them to be the optimisation parameter for ground state search). Therefore we know that in principle, VQE is able to prepare the ground state if the Hamiltonian has a large enough gap such that the necessary evolution time is within the coherence time of the quantum computer. In the absence of first-order phase transitions, the gap closes at most polynomially in the system size and hence an efficient state preparation path can be found. In the gauge theory setting, constructing a variational ansatz this way also has the advantage of fulfilling Gauss's law. This is exactly the case if every term in the Hamiltonian conserves Gauss's law individually (which will be the case for us below). If this is not true, Gauss's law can be fulfilled approximately by choosing the angles small enough to essentially realize the Trotter limit of adiabatic time evolution.
The hope behind VQE is that by minimizing the energy with respect to the θ angles, a circuit can be found which prepares the ground state at shallower depth than would be required for adiabatic state preparation, which in this digital setting would require Trotterisation. In the following, we discuss VQE in the cQED architecture for the ℤ_2-Higgs model and U(1) quantum link model in (1+1)D.
§.§.§ Z2-Higgs model
In the case of the ℤ_2-Higgs model, the Û_k required for the ansatz discussed in the first paragraph of Sec. <ref>, Eq. (<ref>), are
Û_1(θ,i) = e^-θẐ_i,i+1(â^†_i â_i+1 - â_i â^†_i+1)
Û_2(θ,i) = e^iθẐ_i,i+1(â^†_i â_i+1 + â_i â^†_i+1)
Û_3(θ,i) = e^-iθX̂_i,i+1
Û_4(θ,i) = e^-iθn̂_i^2.
While technically not necessary (all other gates together are already able to prepare the ground state by approximating an adiabatic evolution), we find that including Û_1 leads to a lower energy error for the same number of layers than when it is not included. All of the above unitaries can be realized with our native gate set using techniques previously discussed; in particular, Û_1 and Û_2 are conditional beamsplitters (see Eq. (<ref>)), Û_3 is a single-qubit rotation, and Û_4 can be implemented using a SNAP gate (see Section <ref>, or alternatively App. <ref> for an ancilla-free implementation).
Using these resource unitaries, we construct an M-layer ansatz with N field sites and N+1 matter sites as
|ψ_M(θ)⟩ = ∏_l=1^M [
∏_i=1^NÛ_3(γ^i_l, i)∏_i=1^N+1Û_4(δ^i_l, i)
∏_i ∈even^N Û_2(α^i_l, i)Û_1(β^i_l, i)
∏_i ∈odd^N Û_2(α^i_l, i)Û_1(β^i_l, i)
]|ϕ⟩,
where |ϕ⟩ is an easily-preparable state, θ is a vector of the (4N+1)M real-numbered angles in this ansatz, and the products over i∈even (odd) include only the field qubits with even (odd) index. This means that every single gate in each layer and at each site has a different angle to be optimised. Fig. <ref>a) shows one layer of the VQE ansatz.
Since the ℤ_2 resource unitaries in Eqs. (<ref> - <ref>) exactly preserve Gauss's law (in the absence of decoherence), the entire circuit evolution will take place within the Gauss's law subspace chosen by the initial state. Restricting the evolution of VQE circuits within a particular subspace is a powerful technique introduced in prior work <cit.>, with many applications to constrained optimization problems such as maximum independent set <cit.>. Here, we choose the initial state |ϕ⟩ to be a simple-to-prepare state which obeys Gauss's law, specifically, a product state of Fock states with unit occupation.
Because we aim to provide scalable methods adapted for large system sizes, global optimizers such as `DIRECT', which were used in previous studies of lattice gauge theories <cit.>, are not well suited here. We compared optimizers (with Python Scipy names `COBYLA', `Nelder-Mead', `CG', `trust-constr', `BFGS', `L-BFGS-B', `SLSQP') and find that, among those tested, `SLSQP' (Sequential Least Squares Programming) has the fastest convergence rate for the ℤ_2 model without qubit decoherence or shot noise; here the energy is evaluated by computing the expectation value of the Hamiltonian matrix directly from the circuit state vector, without shot noise.
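Schematically, this noiseless optimizer benchmark amounts to a loop of the following form; apply_ansatz stands in for the layered circuit of the equation above and is an assumed placeholder, and the exact optimizer settings used for the figures may differ.

```python
# Schematic noiseless VQE loop: statevector energy minimized with SLSQP.
import numpy as np
from scipy.optimize import minimize

def vqe_energy(theta, H, psi0, apply_ansatz):
    psi = apply_ansatz(theta, psi0)          # |psi_M(theta)>, placeholder
    return np.real(np.vdot(psi, H @ psi))

def run_vqe(H, psi0, apply_ansatz, n_params, seed=0):
    rng = np.random.default_rng(seed)
    theta0 = 0.01 * rng.standard_normal(n_params)   # near-identity start
    res = minimize(vqe_energy, theta0, args=(H, psi0, apply_ansatz),
                   method="SLSQP")
    return res.fun, res.x
```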
We find that the VQE circuit (without noise and therefore no sampling error) can estimate the ground state energy with a relative error with respect to the true ground state energy of around 10^-3 for two, three, and four sites using one, four, and seven layers, respectively, as can be seen in Fig. <ref>b). The VQE requires on average around 10^2, 10^3, and 10^4 circuit evaluations for two, three, and four sites respectively, an increase which is due to the commensurate increase in the number of Fock states required to represent a larger Hilbert space at unit filling. Increasing the number of layers increases the total number of variational parameters, which can lead to a higher final fidelity; however, this presents a tradeoff – in the presence of noise, reliable results can be extracted only from shallow-depth circuits. Thus, the overall fidelity will decrease when a large number of layers is used <cit.>.
Furthermore, we find that the higher the boson filling, the larger the number of layers required, with up to four layers required for four bosons on two sites to reach a relative error below 10^-3. This may be related to the increased size of the Hilbert space, i.e., due to the increase in possible combinations for distributing the bosons among a set number of sites.
To measure the expectation value of the Hamiltonian as required for the VQE optimisation procedure, we discuss how to measure each term of the Hamiltonian individually <cit.>. The field term ⟨-g∑_i=1^L-1X̂_i,i+1⟩ is a simple sum of X-basis qubit measurements. The onsite interaction term on a single site i can be measured by reading out the photon number of the cavity using the binary search readout <cit.>. This enables the single-shot readout of n̂_i, which can then be used to reconstruct ⟨n̂_i^2⟩. For the gauge-invariant hopping term -J∑_i=1^L-1⟨Ẑ_i,i+1(â^†_iâ_i+1 + h.c.)⟩, we are faced with the difficulty that we cannot directly read out the beamsplitter term by measuring the cavities in the number basis.
Instead, we note the following identity:
⟨ψ|â_i^†â_i+1 + h.c.|ψ⟩
=⟨ψ̃|BS_i,i+1(π/2,π/4)(â_i^†â_i+1 + h.c.)BS_i,i+1^†(π/2,π/4)|ψ̃⟩
=⟨ψ̃|( â_i^†â_i - â_i+1^†â_i+1)|ψ̃⟩
= ⟨ψ̃|n̂_i|ψ̃⟩ - ⟨ψ̃|n̂_i+1|ψ̃⟩,
where we have inserted 1 = BS_i,i+1(π/2,-π/4) BS_i,i+1(π/2,π/4) on either side of the hopping, and have cast the expression in terms of the transformed state |ψ̃⟩=BS_i,i+1(π/2,π/4)|ψ⟩. Thus, much like a Hadamard gate facilitates an X-basis measurement for qubits, we can measure the expectation value of a bosonic hopping term by `rotating' the modes into the Fock state basis via a 50:50 beamsplitter, followed by a Fock-basis measurement. By correlating the result with the gauge field link Z-basis measurement, one can reconstruct the expectation value of the gauge-invariant hopping.
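This identity is easy to verify numerically with truncated mode operators. Beamsplitter phase conventions vary between references, so the sketch below fixes the convention in which the 50:50 rotation maps the hopping onto the number difference; restricting the state to total occupation ≤ n_max keeps the truncation exact.

```python
# Numerical check: a 50:50 beamsplitter maps <a_i^dag a_{i+1} + h.c.>
# onto <n_i> - <n_{i+1}> (phase convention fixed by the generator below).
import numpy as np
from scipy.linalg import expm

n_max = 6
a = np.diag(np.sqrt(np.arange(1, n_max + 1)), 1)
I = np.eye(n_max + 1)
a1, a2 = np.kron(a, I), np.kron(I, a)
hop = a1.conj().T @ a2 + a2.conj().T @ a1
n1, n2 = a1.conj().T @ a1, a2.conj().T @ a2

BS = expm((np.pi / 4) * (a1.conj().T @ a2 - a2.conj().T @ a1))  # 50:50

rng = np.random.default_rng(1)
dim = n_max + 1
psi = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
for i in range(dim):                 # keep total occupation <= n_max so the
    for j in range(dim):             # truncation is exact for BS and hop
        if i + j > n_max:
            psi[i, j] = 0.0
psi = psi.reshape(-1)
psi /= np.linalg.norm(psi)

lhs = np.vdot(psi, hop @ psi)
psi_t = BS @ psi
rhs = np.vdot(psi_t, (n1 - n2) @ psi_t)
print(np.allclose(lhs, rhs))         # True
```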
Only commuting terms can be measured simultaneously; for the ℤ_2 model, we therefore perform three separate measurement circuits to estimate the energy. In the first circuit in Fig. <ref>c), the onsite interaction is measured on all sites simultaneously. In the same circuit, it is possible to measure the field terms, though this step must precede the measurement of the onsite interaction so that the gauge link qubits can then be reset and used for binary readout. Next, we measure half of the gauge-invariant hopping terms, preceded by a measurement of the electric field (X̂_i,i+1) at every other gauge link. Finally, the other half of the gauge-invariant hopping terms are measured along with the other half of the field terms.
In summary, in this section we showed that a VQE ansatz for the ℤ_2-Higgs model which is specifically tailored to the types of operations available in hybrid qubit-oscillator hardware indeed only requires very few layers to determine the ground state for the small system sizes we can access numerically for proof-of-principle demonstrations. The implementation of this ansatz directly follows from the methods presented in the first part of this paper (Secs. <ref> - <ref>), which implies that the hybrid qubit-oscillator hardware will require shallower circuits than qubit hardware, as discussed in Sec. <ref>. However, more work needs to be done beyond these proof-of-principle results to determine the circuit depths and the number of shots which will be required as a function of system size.
§.§.§ U(1) quantum link model
In the following, we describe how to carry out a hybrid oscillator-qubit VQE of the U(1) quantum link model in Eq. (<ref>) in (1+1)D. Furthermore, we demonstrate how to detect the first-order phase transition at τ=π in our proposed architecture. While the correspondence between the model in Eq. (<ref>) and quantum electrodynamics in (1+1)D only holds for an even number of lattice sites due to the staggering of the fermions, we also study an odd number of sites here for benchmarking purposes.
Following the same scheme as for the ℤ_2-Higgs model, we choose resource unitaries that are generated by the individual terms of the Hamiltonian in Eq. (<ref>). Furthermore, we leverage the Schwinger-boson mapping of the gauge fields as described in Sec. <ref>, and map the fermions to qubits using the Jordan-Wigner encoding (which is particularly simple in (1+1)D). With these choices, we construct our VQE ansatz using the following resource unitaries:
Û_S1(θ,i) = e^i θẐ_i
Û_S2(θ,i) = e^-i θ (n̂_i,i+1^a-n̂^b_i,i+1)
Û_S3(θ,i) = e^-i θ((n_i,i+1^a)^2+(n_i,i+1^b)^2)
Û_S4(θ,i) = e^-iθ(σ̂^+_i+1â^†_i,i+1b̂_i,i+1σ̂^-_i + h.c.)
Û_S5(θ,i) = e^-θ(σ̂^+_i+1â^†_i,i+1b̂_i,i+1σ̂^-_i - h.c.).
Each of these unitaries can be implemented using techniques described in prior sections. In particular, Û_S1 and Û_S2 are native gates – see Table <ref>. As explained in Sec. <ref>, Û_S3 can be synthesized using a SNAP gate. Finally, the gauge-invariant hopping unitaries Û_S4 and Û_S5 can be implemented with the strategies described in Section <ref>.
Our ansatz for the wavefunction is
|ψ_M(θ)⟩ = ∏_l=1^M [ ∏_i ∈even^N Û_S5(α_l, i)Û_S4(β_l, i)
∏_i ∈odd^N Û_S5(α_l, i)Û_S4(β_l, i)
∏_i=1^NÛ_S3(γ_l, i) ∏_i=1^NÛ_S2(γ_l, i)
∏_i=1^N+1Û_S1(δ_l, i)] |ϕ⟩,
for N+1 matter sites. Note that contrary to the ansatz for the ℤ_2-Higgs model, each site has the same parameter, such that there are only 4M total parameters.
The expectation value of the Hamiltonian can be measured using correlated measurements. To evaluate the gauge-invariant hopping term, we first use σ̂^+=1/2(X̂+iŶ) to split it into
σ̂^+_i+1â^†_i,i+1b̂_i,i+1σ̂^-_i + h.c.
=
1/4( X̂_i+1(â^†_i,i+1b̂_i,i+1 +h.c.) X̂_i
+ Ŷ_i+1(â^†_i,i+1b̂_i,i+1+h.c.) Ŷ_i
+ X̂_i+1(-iâ^†_i,i+1b̂_i,i+1 +h.c.) Ŷ_i
+ Ŷ_i+1(iâ^†_i,i+1b̂_i,i+1+h.c.) X̂_i)
Using our method in Eq. (<ref>) we can map a measurement of the bosonic terms in the above equation to a measurement of the number densities of the two modes. Correlating those number densities with the appropriate qubit measurements, we can therefore measure all four terms in the above expansion. Because we require two circuits to measure all possible lattice sites i, we require eight separate measurement circuits for the gauge-invariant hopping. All the other terms are straightforwardly accessible by measuring the boson numbers in the modes <cit.> and the state of the qubits.
Using Bosonic Qiskit <cit.>, we implement this ansatz and test the convergence of the VQE with respect to the number of layers for S=1, showing the results in Fig. <ref>. For a single gauge field site and two matter sites, we initialize the system in the “vacuum state” |↓0 ↑⟩, where the middle label indicates the eigenstate of S^z, and assume perfect energy measurements, i.e., no shot noise and decoherence. We find that one layer is sufficient for obtaining a relative energy error of 10^-5. This is related to the fact that the ground state for this system is given by 1/√(2)(|↑1 ↓⟩ + |↓0 ↑⟩). For three sites, initialized in |↓0 ↑0 ↓⟩, we find from our numerical evaluations that two layers are required to attain low relative errors of 10^-8, with only a small increase in the total number of iterations from the two-site case.
In the regime of large g, the U(1) quantum link model features a first-order phase transition <cit.> at half-integer values of τ/2π. This can be seen trivially by taking g→∞, in which case the electric term dominates the Hamiltonian, pinning Ŝ^z at the integer closest to τ/2π. This phase persists for finite g until the mass reaches M/g≈1/3 (see definitions of M and g in the Hamiltonian in Eq. (<ref>)). The energy gap between the ground and first excited state is generally exponentially small in system size at first-order phase transitions. Therefore, we expect that this transition will be relatively sharp and visible even for small systems of two or three sites. Hence, this is a good proof-of-principle target for small-scale experiments.
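As a classical reference for this transition, one can exactly diagonalize the smallest instance: two matter sites and one S=1 link, with the fermions mapped to qubits via Jordan-Wigner (for nearest neighbours the string drops out). The parameter values in the sketch below are illustrative choices, not the ones used in our figures.

```python
# Exact-diagonalization scan of <S^z> vs tau for the smallest S=1 quantum
# link model (two matter sites, one link); parameter values illustrative.
import numpy as np

I2, I3 = np.eye(2), np.eye(3)
sm = np.array([[0., 1.], [0., 0.]])            # fermion annihilation (JW)
sp, num = sm.T, sm.T @ sm
Sz = np.diag([1., 0., -1.])                    # spin-1 link operators
Sp = np.sqrt(2.) * np.diag([1., 1.], 1)

def kron3(A, B, C):
    return np.kron(np.kron(A, B), C)           # ordering: site1, link, site2

def h_u1(tau, g=2.0, M=0.0, Jt=1.0, S=1.0):
    E = Sz + (tau / (2 * np.pi)) * I3
    H = (g**2 / 2) * kron3(I2, E @ E, I2)                   # electric term
    H = H + M * (-kron3(num, I3, I2) + kron3(I2, I3, num))  # staggered mass
    hop = kron3(sp, Sp, sm)        # c_1^dag S^+ c_2; JW string drops out
    return H + (Jt / (2 * S)) * (hop + hop.conj().T)

for tau in np.linspace(-1.2 * np.pi, 1.2 * np.pi, 9):
    w, v = np.linalg.eigh(h_u1(tau))
    sz = np.real(np.vdot(v[:, 0], kron3(I2, Sz, I2) @ v[:, 0]))
    print(f"tau/pi = {tau / np.pi:+.2f}   <S^z> = {sz:+.3f}")
```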
For various values of τ used to find ground states with the VQE ansatz discussed above (Eqs. (<ref>)-(<ref>)), we measure the expectation value of the electric field (in the Schwinger-boson representation), ⟨Ŝ^z_i,i+1⟩ = 1/2⟨n̂^a_i,i+1 - n̂^b_i,i+1⟩. In Fig. <ref> we indeed find sharp drops corresponding to the first-order phase transition at τ = ±π. The Bosonic Qiskit VQE results match our exact diagonalization results to relative errors of around 10^-7.
Our results show that our VQE approach using hybrid oscillator-qubit hardware can prepare ground states of the U(1) quantum link model in (1+1)D for two- and three-site systems. Although in (1+1)D this model can be probed with qubit-only hardware by integrating out the gauge fields, this would not generalize to higher dimensions. In that case, each gauge field link would be encoded in many qubits and Hamiltonian operations would be highly costly, as discussed in Section <ref>. Another possible advantage of our approach is the usage of the Schwinger-boson constraint n̂_a+n̂_b=2S as a (single) photon loss error detection scheme. This can be implemented by measuring the joint parity of modes a and b; to what extent this improves results from noisy hardware is an interesting direction for future exploration.
§.§ VQE with hardware and shot noise
Next, we test the resilience of our methods to decoherence and shot noise in the energy measurements. To do so, we simulate the full measurement schemes that would be employed in the experiment and also include qubit decay and dephasing, which are the leading decoherence mechanisms <cit.> for the small system sizes and unit filling we consider here (i.e., we neglect photon loss). For systems where large photon numbers are possible, mode decay will become appreciable. The model studied in this section is the ℤ_2-Higgs model, for which we defined the ansatz in Sec. <ref>. We choose U/J=g/J=1, which we expect to be among the most difficult regimes to simulate due to the balanced competition between all energy terms. We consider three matter sites and two gauge links at unit filling, requiring three modes and two qubits.
We start by explaining how to measure the expectation value of the Hamiltonian in experiment. Following this, we first benchmark the simultaneous perturbation stochastic approximation (SPSA) optimizer <cit.> (as opposed to the SLSQP used so far) without shot noise, as this is the optimizer of choice for optimisations with shot noise. We then investigate how to use averaging over previous iterations to improve the convergence of SPSA (still without any noise). Third, we run our improved optimisation procedure with shot noise. Upon finding that the initial stages of optimisation, where the gradients are large, are fairly insensitive to the number of shots, we further adapt our procedure to incrementally increase the number of shots during the optimisation. Fourth, we include hardware noise (qubit T_1 decay and additional pure T_ϕ dephasing) in the numerical simulation. Finally, we error-mitigate our results by implementing a procedure to post-select on preservation of Gauss's law.
§.§.§ Introducing shot noise in the energy measurement
We simulate a binary search routine <cit.> for measuring the cavity in the Fock basis. This naively requires an additional readout qubit per cavity for the measurement (i.e., separate from the transmon qubits that encode the gauge field links). However, for the ℤ_2-Higgs model we can remove this requirement by first measuring the gauge field links, and then reusing the qubits as ancillas for mode readout, cf. Fig. <ref>c). To do so, we measure and reset the field qubits prior to the cavity measurement. In our VQE simulations, we simulate the full binary search, which enables us to simulate the effective shot noise on the mode measurement that results from the qubit measurements. Due to shot noise, each experiment requires 𝒪(1/ϵ^2) circuit executions to estimate ⟨Ĥ_ℤ_2⟩ to additive precision ϵ.
§.§.§ Using averaging in the stochastic VQE optimisation
In the presence of shot noise, VQE optimisation is far more challenging <cit.>. We use SPSA, which is particularly well suited for VQE because it only evaluates the Hamiltonian twice per iteration regardless of the dimension of the optimization problem.
In Fig. <ref> we show the energy found in each iteration using the SPSA optimiser. The gray dots indicate an optimisation without any shot noise, and we see that it does not converge within 7000 iterations.
It has been found that averaging variable parameters can increase the performance of stochastic optimisation <cit.>. We found in our case that the convergence of the SPSA optimiser, which is stochastic, can be improved by periodically choosing parameters for the next iteration which are averages of previous iterations. Therefore, we adapt our implementation of the SPSA optimiser to perform a parameter-averaging step throughout the optimization: every n iterations, we use the average of the variable parameters of the last m iterations. It is important not to choose n so small that the averaging is too frequent, resulting in a small sample size, nor to choose m too large such that m≳n, which may constrain the optimisation too much and lead to trapping in local minima. In Fig. <ref>, we show data corresponding to n=15 and m=7 in dark blue. We find notable improvements using this strategy over the one without averaging, i.e., n=1 and m=1.
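A schematic implementation of SPSA with this periodic parameter averaging (and, anticipating the next subsection, an adaptive shot budget) is sketched below; energy(theta, shots) is an assumed placeholder for the measured ansatz energy, and the gain schedules are generic SPSA defaults rather than our tuned values.

```python
# Schematic SPSA with periodic parameter averaging and an adaptive shot
# budget; energy(theta, shots) is a placeholder for the measured energy.
import numpy as np

def spsa(energy, theta0, iters=7000, a=0.1, c=0.1, n_avg=15, m_avg=7, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    history = []
    for k in range(1, iters + 1):
        shots = 10 * 10 ** (k // 1500)        # x10 every 1500 iterations
        ak, ck = a / k**0.602, c / k**0.101   # generic SPSA gain schedules
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        g_est = (energy(theta + ck * delta, shots)
                 - energy(theta - ck * delta, shots)) / (2 * ck) * delta
        theta = theta - ak * g_est
        history.append(theta.copy())
        if k % n_avg == 0:                    # restart from recent average
            theta = np.mean(history[-m_avg:], axis=0)
    return theta
```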
§.§.§ Adaptive shot budget in the VQE optimisation
We now include shot noise in the estimation of the energy. The number of shots required for good convergence of the VQE is determined by the size of the energy gradient. Generally, gradients are larger in the first iterations and gradually decrease as the minimum is approached. This is evident in the large fluctuations between iterations during the early iterations (<2000) in Fig. <ref> (black dots). Therefore, it is natural to increase the number of shots used in the energy measurement of later iterations <cit.>. Specifically, we increase the shots per iteration by an order of magnitude every 1500 iterations (starting at 10 shots per iteration). We show the resulting energy trajectory as light blue points in Fig. <ref>. We find that (for this specific system size) this method yields similar performance to the scheme without shot noise (dark blue). In general, we expect the total number of shots necessary to prepare the ground state with fixed fidelity to increase exponentially with the system size; see, e.g., Ref. <cit.> for empirical evidence.
§.§.§ VQE results including qubit decoherence, averaging and adaptive shot budget
Finally, we add the dominant source of experimental noise in the hardware – qubit decay and dephasing – to our simulations, in addition to the shot-based measurement of the energy discussed above. This requires specifying the duration of each of the gates in the VQE ansatz. The ansatz is explained in Sec. <ref> – see Fig. <ref>a) for a circuit diagram. Below Eqs. (<ref>-<ref>), we explain how every resource unitary can be realized using native gates. Altogether, the depth of one ansatz layer consists of four conditional beamsplitters and a SNAP gate. We assume a value of χ=π MHz for the dispersive coupling. In this case the maximal duration of a conditional beamsplitter is ≈2.25 μs, consisting of two conditional parities which take T=π/χ=1 μs each, and one beamsplitter of 250 ns duration (100 ns has been shown to be feasible <cit.>). We do not include duration cancellations, a range of which are possible. For example, two successive conditional beamsplitters acting on the same oscillator modes and qubits can cancel each others' conditional parity operations, leading to a total time of 2.25 μs. We also note that in principle, the duration depends on the gate parameters, which in experiment could lead to a shorter duration than estimated here. The SNAP gate takes T=2π/χ=1 μs <cit.>. The total duration of one VQE layer is roughly (4×2.25 + 1) μs = 10 μs. Four layers (T=40 μs) are therefore well within the decay time of the qubit (T_1=200 μs) and the mode (T_1 ≈1 ms). The probability for the |1⟩ state of a mode to decay during the four-layer ansatz circuit is thus 4%, and for a qubit to decay it is 20%. While in most regions of the phase diagram the occupation of the higher-lying mode states is small, and therefore the effect of mode decay (and other non-idealities such as self-Kerr <cit.>) minimal, this is not the case in the superfluid and clump phases when U/J is small. In these cases, mode decay could contribute significantly. We leave the study of this effect for future work and neglect photon loss throughout.
We show the optimisation including averaging, adaptive shot budget, and qubit decay as black dots in Fig. <ref>. Comparing this result to the optimisation without qubit decoherence in light blue, we find that qubit decoherence leads to a significant energy error of about 0.86 J. However, we still find that the energy is well below that of the first excited state. Therefore, we still expect a reasonable overlap with the ground state. In order to test this expectation, we plot the fidelity with the ground state in the inset of Fig. <ref>, using colors matching the main figure. We see that for the simulations excluding decay and dephasing errors (light and dark blue), the fidelity remains high, following the same trend as the energy.
We also note that the total runtime of the optimisation in the presence of noise is small: we find an energy error below the energy of the first excited state after around 3000 iterations with a total shot count of <2×10^5, yielding a total quantum runtime of <10 s, if this experiment were to be run on current superconducting hardware limited only by gate times.
§.§.§ Post-selection on Gauss's law
The conservation laws obeyed by lattice gauge theories, such as Gauss's law and particle number conservation, can be used to discard unphysical shots in the presence of hardware noise, enabling partial error detection.
In order to use Gauss's law for error mitigation, we must measure the operator Ĝ_i (see Eq. (<ref>)) for each shot of the experiment. For the ℤ_2-Higgs model in (1+1)D, Gauss's law reads
Ĝ_i = X̂_i-1,i e^iπn̂_iX̂_i,i+1 = 1,
for all i. Because Gauss's law commutes with all terms in the Hamiltonian, it is possible to measure Gauss's law for all i as well as the energy of the state in the same shot. This requires mid-circuit measurements and is most often done using the Hadamard test <cit.>. However, the Hadamard test involves ancillary qubits which we do not have at our disposal. Therefore, we present a method below that avoids ancillary qubits. This method requires additional gates and measurements compared to the measurement circuits presented in Fig. <ref>c). We also discuss how, even for the measurement circuits as described in Fig. <ref> (i.e., without additional gates and measurements), Gauss's law can be partially checked.
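That the Ĝ_i commute with every term of the Hamiltonian can be checked numerically, reusing a, n, X, embed_z2 and z2_higgs from the earlier exact-diagonalization sketch; the boundary convention below (simply omitting the missing link operators of the open chain) is our illustrative choice.

```python
# Numerical check that the Gauss's law operators G_i commute with H_Z2.
import numpy as np
from scipy.linalg import expm

def gauss_op(L, i):
    ops = {2 * i: expm(1j * np.pi * n)}       # e^{i pi n_i}
    if i > 0:
        ops[2 * i - 1] = X                    # X_{i-1,i}
    if i < L - 1:
        ops[2 * i + 1] = X                    # X_{i,i+1}
    return embed_z2(L, ops)

L = 3
H = z2_higgs(L, g=1.0, U=1.0, J=1.0)
for i in range(L):
    G = gauss_op(L, i)
    print(i, np.allclose(G @ H, H @ G))       # True for every site
```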
The first scheme is inspired by the schemes presented in Ref. <cit.> for measuring Toric-Code stabilizers. The key observation is that the individual parts of the Hamiltonian commute with Gauss's law and therefore we can check Gauss's law for each measurement individually. For the measurement of the gauge-invariant hopping, replacing the beamsplitters in Eq. (<ref>) with conditional beamsplitters, a measurement of the number operator of the modes directly yields a measurement of the gauge-invariant hopping. More explicitly,
CBS_i,i+1(π/2,-π/4)(n̂_i-n̂_i+1)CBS_i,i+1(π/2,π/4)
=Ẑ_i,i+1(â_i^†â_i+1 + h.c.).
Hence, conjugating a number measurement of modes i and i+1 with conditional beamsplitters yields a measurement of the gauge-invariant hopping without having to measure the qubit, as shown in the green part of the circuit in Fig. <ref>a). Crucially, one can check that the operators effectively measured by n̂_i and n̂_i+1 individually also commute with Gauss's law. Because X̂_i-1,i commutes with both the gauge-invariant hopping and Gauss's law, it can be measured simultaneously. Gauss's law is then measured by the circuit shown in black and white in Fig. <ref>a). It relies on the fact that conjugation of a qubit Ẑ_i,i+1 operator with SQR gates yields Ẑ_i,i+1 e^iπn̂_i, cf. Eq. (<ref>). Similarly, conjugation with Hadamard gates and CNOTs converts Ẑ_i,i+1 into X̂_i-1,iX̂_i,i+1. Taken together, this converts a measurement of Ẑ_i,i+1 into a measurement of the Gauss's law operator Ĝ_i. Hence, these two measurements allow for a full check of Gauss's law while measuring the value of the gauge-invariant hopping in the same shot. This scheme requires additional gates.
In the second scheme, we do not require additional gates and instead propose a form of `partial' error mitigation based on the information we already have access to in the context of the VQE: the measurements of the Hamiltonian terms shown in Fig. <ref>c). This does not enable a local check of Gauss's law at every matter site (which is why we call it `partial'), and hence only slightly improves the results. The three measurement circuits do not commute, and therefore only one can be used per shot of the experiment. The measurement circuit of the number density (shown in Fig. <ref>c) carries out a photon number measurement on each mode, and could thus be combined with a measurement of X̂ on each qubit to check Gauss's law at each site (see Fig. <ref>a). However, the other measurement circuits (see Fig. <ref>c) are not compatible with a measurement of Gauss's law at each site, as some qubits are individually measured along Ẑ (which alone does not commute with Ĝ_i for an overlapping region) and the mode occupations are not resolved for each site. However, in these circuits, we measure X̂ for every other gauge link, and can furthermore resolve the total photon number in the intermediary pairs of modes. Thus, we can conduct checks of Gauss's law for blocked pairs of sites:
Ĝ_iĜ_i+1 = X̂_i-1,i e^iπ (n̂_i + n̂_i+1)X̂_i+1,i+2.
This possibility is illustrated in Fig. <ref>b). Thus, we can post-select on valid Gauss's-law-preserving states for all three measurement circuits needed for estimating the energy. This scheme captures only single Gauss's law violations, but not errors on two consecutive sites.
We simulate full Gauss's law post-selection (the first scheme) and focus on its influence on the energy measurement. To do so, we take the final VQE parameters found in a noisy simulation (i.e., the final iteration of the black dots in Fig. <ref>) and run a simulation including hardware noise in the VQE circuit, but neglect noise in the measurement circuit. We then check whether an error happened in every quantum trajectory of the noisy simulation. The error-free trajectories exhibit a high overlap of ≈0.96 with the ground state (cf. blue line in Fig. <ref>c). For trajectories with an error, we project onto the correct Gauss's law sector. The norm of the part of the wavefunction we project out gives us an estimate for the number of shots that would need to be discarded. We find that 13% of shots need to be discarded. We show in red the overlap with the eigenstates in the correct Gauss's law sector for some such states in Fig. <ref>c. Those states still have large overlap with the ground state, but also some overlap with excited states. To calculate the average post-selected energy, we renormalize the projected states and average over all trajectories. We find a post-selected energy of 0.36 J, compared to a result of 0.88 J without post-selection and an exact ground-state energy of E/J=0.02. The remaining error corresponds to approximately 20% of the difference between the ground and first excited state energies.
In our simulations, we find that 84% of quantum trajectories have no error. This is roughly in agreement with the dephasing survival probability of a single qubit: with T_2=200 μs, one of the qubits starting in |+⟩, the other in |-⟩, and a total circuit duration of T=40 μs, we expect exp(-T/T_2)≈82%. These trajectories with no error have energy 0.16 J. The 1-84%=16% of trajectories in which an error happened therefore have an average energy of 4.66 J. Gauss's law post-selection reduces this average energy of the noisy trajectories to 1.41 J, where the remaining energy error results from noise processes within the same Gauss's law sector. We therefore find that removing errors that take us out of the correct Gauss's law sector drastically reduces the energy error introduced by the noise.
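The trajectory bookkeeping above can be checked directly:

```python
# Consistency check of the trajectory bookkeeping quoted above.
p_ok = 0.84                  # fraction of error-free trajectories
e_ok, e_avg = 0.16, 0.88     # energies in units of J
e_err = (e_avg - p_ok * e_ok) / (1 - p_ok)
print(f"mean energy of trajectories with an error: {e_err:.2f} J")  # 4.66 J
```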
More intricate mitigation methods can be engineered from this Gauss's law check. For example, we note that one can cross-check Ĝ_i at different sites to not only detect that an error occurred, but also resolve its position. This opens up the possibility to not only post-select on the case of no errors, but additionally develop novel error-mitigation techniques where the error can be partially corrected. In addition, if the error is spatially disconnected in such a way that one can ascertain it did not spread to a region of interest, then it may be desirable to retain this shot for the computation of observables local to that region. In addition to post-selection using gauge symmetries <cit.>, the VQE implementation for the ℤ_2-Higgs model proposed here is amenable to other commonly used noise-mitigation techniques such as zero-noise extrapolation <cit.> and probabilistic error cancellation <cit.>. An alternative to post-selection is to introduce an additional term in the Hamiltonian which enforces the symmetry constraint by prethermalization <cit.>, or pseudo-generators for gauge protection <cit.>.
§.§ Measuring observables
Having characterised the hybrid oscillator-qubit VQE state preparation in terms of the energy error and overlap with the true ground state, we now show how to extract relevant observables in the proposed cQED architecture. While we benchmark the measurement of the observables for the ground state, the measurement schemes can in principle be applied to any state.
§.§.§ Detecting phases of the Z2-Higgs model in VQE-prepared ground states
In Fig. <ref>, we calculate different gauge-invariant observables in the ℤ_2-Higgs model for varying onsite interaction, with respect to VQE ground states of a system with four bosonic sites and three gauge field sites found using noiseless simulations employing the SLSQP optimizer. We compare to exact diagonalisation results. We find that all VQE observables agree very well with the exact results, as expected from the low relative energy error found in Fig. <ref> for this small system size.
Importantly, despite the small system size, we can see the hallmarks of three qualitatively different states as U/J is scanned <cit.>. The three states are indicated by the blue (rightmost), red (middle) and yellow (leftmost) color of the lines, representing a Mott-insulating phase, a pair superfluid phase, and the “clump phase” – a phase in which the gauge field perturbatively generates a nearest-neighbour attraction between the bosons, leading to the bosons “clumping” together on the same site <cit.>. For four sites, we define the “clump order parameter” as the probability of all four bosons occupying the same site (defined in the caption of Fig. <ref>). This observable indeed becomes large for small U/J <cit.>. A (Mott) insulating state is characterized by a lack of spreading of particles in the system. We observe signatures of this effect in the string order correlator and pair hopping, which are both small for large U/J. Moreover, on-site fluctuations are also small in the insulating state, as expected. Finally, the parity tends towards -1, indicating that on average there is an odd occupation at each site. By contrast, in a superfluid phase, particles spread far. In the case of the ℤ_2-Higgs model, single particles stay confined due to the presence of the gauge field while pairs of particles can spread <cit.>, which we observe in the large pair hopping and fluctuations. Moreover, contributions from states with a single boson are suppressed, which we see from the parity tending towards +1.
Having shown that signatures of different phases can be found from VQE simulations, we now discuss how the different observables can be measured in our cQED architecture.
§.§.§ Measuring the string order correlator
The gauge-invariant string order correlator is large in a deconfined phase and small in a confined phase. It is defined as ⟨â_i^†Ẑ_i,i+1...Ẑ_j-1,jâ_j + h.c.⟩. To measure the value of this correlator on the ground states prepared via VQE, we need to correlate a measurement of the gauge field qubits with a measurement of the beamsplitter operator of two bosonic sites. We note that one possibility is to adapt the scheme used in Sec. <ref> to measure a long-range hopping through the use of bosonic SWAPs. Ordinarily, such a measurement would require a time linear in the distance |j-i|. Alternatively, here we propose a separate strategy to measure this nonlocal observable using homodyne measurements: first, we expand â_i = (x̂_i + ip̂_i)/2, where x̂ and p̂ are the bosonic position and momentum operators respectively, such that
⟨â_i^†Ẑ_i,i+1...Ẑ_j-1,jâ_j + h.c.⟩
= 1/2(⟨x̂_i Ẑ_i,i+1...Ẑ_j-1,jx̂_j⟩ + ⟨p̂_i Ẑ_i,i+1...Ẑ_j-1,jp̂_j⟩).
We must therefore perform correlated measurements of x̂_i, Ẑ_i,i+1 and x̂_j (and, separately, p̂_i, Ẑ_i,i+1 and p̂_j). In order to measure x̂_i, we use a Hadamard test implemented with a displacement conditioned on an ancilla, defined as CD(α) = e^(αâ^† - α^*â)Ẑ^anc (see Table <ref>). Choosing an arbitrary real angle θ such that α=iθ, we get CD(iθ)=e^iθx̂Ẑ^anc. A measurement of the ancilla after applying the Hadamard test circuit with a small angle θ, shown inside Fig. <ref>, then yields
⟨Ẑ^anc⟩_α=iθ = ⟨sin(2θx̂)⟩ ≈ 2θ⟨x̂⟩ + O(θ^3).
To measure the first part of the string order correlator we perform two Hadamard tests simultaneously in order to correlate sites i and j, i.e., after applying the Hadamard circuits on both sites i and j we measure
⟨Ẑ^anc_i Ẑ_i,i+1…Ẑ_j-1,jẐ^anc_j⟩_α_i=α_j=iθ ≈ 4θ^2 ⟨x̂_i Ẑ_i,i+1…Ẑ_j-1,jx̂_j⟩.
Similarly, we measure the second part of the string order correlator, i.e., those involving p̂ correlations, by using conditional displacements with α=θ.
In summary, we get
⟨â_i^†Ẑ_i,i+1...Ẑ_j-1,jâ_j + h.c.⟩
≈ 1/(8θ^2) (⟨Ẑ^anc_i Ẑ_i,i+1…Ẑ_j-1,jẐ^anc_j⟩_α_i=α_j=iθ
+ ⟨Ẑ^anc_i Ẑ_i,i+1…Ẑ_j-1,jẐ^anc_j⟩_α_i=α_j=θ).
To verify that we can work in the small-θ regime while still measuring the string order correlator accurately with a reasonable number of shots, we simulate the measurement using Bosonic Qiskit, cf. Fig. <ref>. We find that a choice of θ≈ 0.1 suffices, requiring approximately 1/θ^4=10^4 shots, as expected from the statistical uncertainty of shot noise. Note that for this measurement, we do not need additional ancillae, as one can recycle gauge link qubits that are not within the support of the string order correlator for this purpose. Lastly, we remark that one could alternatively use the method of quantum signal processing (see App. <ref>) to extract ⟨x̂⟩ similarly to Eq. (<ref>), but with a larger value of θ. This could be achieved by enacting a target function 2/π arcsin(·) on the signal sin(2θx̂); if 2θx̂≤ 1/2, this outputs θ/π⟨x̂⟩ + O(ϵ) while requiring O(log(1/ϵ)) queries to CD(iθ) <cit.>.
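The small-angle estimator underlying this scheme is easy to check numerically for a single truncated mode (the multi-mode string version works identically); the state and cutoff below are illustrative.

```python
# Check of the small-angle estimator <sin(2 theta x)>/(2 theta) ~ <x>.
import numpy as np
from scipy.linalg import sinm

n_max = 30
a = np.diag(np.sqrt(np.arange(1, n_max + 1)), 1)
x = a + a.conj().T

rng = np.random.default_rng(2)
psi = rng.standard_normal(n_max + 1)
psi[10:] = 0.0                       # low occupation: truncation-safe
psi /= np.linalg.norm(psi)

exact = np.vdot(psi, x @ psi).real
for theta in (0.3, 0.1, 0.03):
    est = np.vdot(psi, sinm(2 * theta * x) @ psi).real / (2 * theta)
    print(f"theta = {theta}:  estimate {est:+.4f}  vs exact {exact:+.4f}")
```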
§.§.§ Measuring the superfluid density
The superfluid density ρ_s in 1D is given by <cit.>
ρ_s = L/J∂^2 E_TPB(ϕ)/∂ϕ^2,
where E_TPB(ϕ) is the ground state energy in the presence of “twisted” periodic boundary conditions and J is the hopping matrix element between neighbouring sites. The phase ϕ is given by the phase difference between the two edges; for example, for the ℤ_2-Higgs model we have
Ĥ^TPB_ℤ_2(ϕ) = Ĥ_ℤ_2 - J (e^iϕâ_L^†Ẑ_L,1â_1 + e^-iϕâ_L Ẑ_L,1â_1^†),
where the entirety of the accumulated phase ϕ is incorporated in the hopping term between the two edge sites, Ĥ_ℤ_2 is defined in Eq. (<ref>), and L is the number of sites. To measure ⟨Ĥ^TPB_ℤ_2⟩, we can follow the general procedure outlined in Section <ref>. The procedure to estimate the stiffness is as follows: we first prepare the ground state for a range of small values of ϕ. We do so by extending the variational ansatz in Section <ref> to incorporate edge-to-edge hopping. Once prepared, we estimate Eq. (<ref>) via finite-difference methods or, alternatively, through more sophisticated, noise-resilient techniques such as parameter-shift rules <cit.>.
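A finite-difference evaluation of Eq. (<ref>) is then a one-liner; E_of_phi below is an assumed placeholder for the measured (or exactly computed) E_TPB(ϕ), and the step size is illustrative.

```python
# Finite-difference stiffness from twisted-boundary ground-state energies;
# E_of_phi is a placeholder for E_TPB(phi) obtained from VQE or ED.
def superfluid_density(E_of_phi, L, J, dphi=0.05):
    d2E = (E_of_phi(dphi) - 2.0 * E_of_phi(0.0) + E_of_phi(-dphi)) / dphi**2
    return (L / J) * d2E
```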
Experimentally, periodic boundary conditions are not required in the hardware to implement edge-to-edge hopping. It is sufficient to synthesize a conditional beamsplitter between the two end modes and leverage this for an edge-to-edge hopping term in the Hamiltonian. We can synthesize a beamsplitter between the edges of the chain by using the bosonic SWAP gate defined in Tab. <ref> to move the mode information from both edges of the chain to adjacent sites, carrying out the required operation between the modes, and SWAPping the information back.
§.§ Qubit-boson ground state preparation with quantum signal processing
While above we employed the heuristic VQE approach to approximately prepare the ground state, new developments in an algorithm known as quantum signal processing (QSP) provide an alternative approach to ground state preparation with provable accuracy and guarantees. Here we provide a brief comparison of VQE and QSP for ground state preparation of the ℤ_2-Higgs model.
Initially pioneered in Refs. <cit.>, QSP provides a systematic method to apply a nearly arbitrary polynomial transformation to a linear operator embedded in a unitary matrix <cit.>, thus furnishing a unifying framework for quantum algorithms <cit.>. In short, QSP works by interleaving the unitary matrix with a sequence of parameterizable SU(2) rotations; see Fig. <ref> for an illustration. The output of this sequence is an embedding of a polynomial transformation of the initial linear operator, parameterized by the chosen SU(2) rotations. For a more detailed introduction to QSP, see App. <ref>.
In App. <ref>, we illustrate how QSP furnishes an algorithm for ground state preparation, as initially presented in Ref. <cit.> for implementation on early fault-tolerant quantum computers. In this scenario, one considers a Hamiltonian H with spectral gap bounded below by Δ, and seeks to prepare its ground state with fidelity ≥1-ϵ. As input to this algorithm, one assumes access to an easily preparable initial state that has overlap ≥γ with the ground state. One also assumes access to the time evolution operators e^±iH, implemented for instance with Trotterization. To avoid aliasing, it is taken that the eigenvalues of H lie in the range (0, π); otherwise one can implement a rescaled time evolution operator e^±i(HΔ t + ϕ) to meet this constraint.
With this setup, the algorithm uses QSP to approximate a projector onto the ground state, which is then applied to the initial state to project out its component parallel to the ground state. This algorithm ultimately requires a circuit depth O(1/Δ·log(1/ϵγ)), consisting of this many coherent queries to the controlled time evolution e^±iH. This circuit is repeated O(1/γ^2) times to prepare the ground state with high probability. The circuit of this algorithm is illustrated in Fig. <ref>.
Recent work has shown that QSP is naturally implementable on hybrid CV-DV quantum processors <cit.>, rendering it applicable to the LGT simulations studied here. To exemplify the requirements of ground state preparation with QSP, let us consider the ℤ_2 model with n=3 sites and couplings J=g=U=1, as investigated in Sec. <ref>. Let us also use the same initial state as above, which numerically has overlap γ≈0.36 with the ground state. In this scenario, VQE produces the ground state with infidelity ϵ≈2·10^-2 (see Fig. <ref>). Aiming to achieve a similar performance with QSP, our calculations in App. <ref> indicate that the QSP approach requires ∼50 queries to the controlled time evolution e^±iH. Using even a first-order Trotterization as outlined in Sec. <ref>, a single such Trotter step would require ∼3 controlled qubit-boson gates per site, for a rough total of ≳10^4 qubit-boson gates, which is prohibitively more expensive than VQE.
This prohibitive circuit depth suggests that QSP-based ground state preparation will only become practical and competitive with VQE once qubit-boson gate infidelities ≪10^-4 can be achieved, in contrast to the current qubit-boson gate infidelities ∼10^-3 <cit.>. Moreover, even if such a gate fidelity can be achieved, the number of repetitions of the QSP circuit will generally scale exponentially in the system size without prior knowledge of the ground state (see App. <ref>). However, this scaling is to be expected because generic ground state preparation is QMA-hard <cit.>, and of course VQE ground state preparation is also expected to require resources exponential in the system size without further prior knowledge.
In Fig. <ref>, we compare the performance of VQE to QSP, as applied to the above scenario of the ℤ_2 model. For both methods, we plot the dominant source of error vs. time to solution, assuming no gate errors in the underlying circuit. The dominant source of error for VQE is the infidelity with the ground state, and for QSP it is the probability of failure (see App. <ref> for details); the time to solution is taken to be the circuit depth times the number of shots. For VQE, increasing the depth (i.e., the number of layers) can increase the fidelity with the ground state, but also leads to a high-dimensional optimisation problem that suffers from barren plateaus and requires a high shot count to facilitate optimisation <cit.>. On the other hand, QSP requires a relatively deeper circuit to estimate a projector onto the ground state, and a modest number of shots to successfully project out the ground state. Incidentally, the empirical results of Fig. <ref> suggest that both dominant sources of error scale similarly as a function of the time to solution, implying that their total resource costs are comparable. This arises because our VQE circuit has a shallow depth yet requires many shots, whereas QSP has a deeper circuit yet requires fewer shots. We speculate that this seemingly universal behavior arises from the fact that the complexity of ground state preparation is fundamentally governed by the size of the spectral gap of the Hamiltonian, agnostic to the algorithm used to estimate the ground state. In any event, this analysis implies that VQE is better suited to settings with low gate fidelities and short coherence times, whereas QSP is more favorable if gates are fast yet circuit repetitions are expensive.
§ CONCLUSIONS AND OUTLOOK
To conclude, in this paper we have developed a resource-efficient framework for implementing fermion-boson quantum simulation of a wide variety of models, including lattice gauge theories in (1+1)D and (2+1)D, using oscillator-qubit quantum computation. To do so, we developed a range of compilation tools and subroutines to realize multi-body interaction terms using the natively available oscillator-qubit gate set in circuit QED hardware. In particular, we developed parity-controlled and density-density bosonic gates as well as oscillator-mediated multi-qubit gates as foundational subroutines, which are useful beyond our simulation motivation. In the context of LGTs, we showed how to implement gauge-invariant hopping terms for transmon and dual-rail qubit fermionic encodings. We also presented a method for implementing the magnetic field term for U(1) gauge fields.
By performing an end-to-end gate complexity analysis, including gate synthesis errors, we showed that our approach yields an asymptotic improvement from 𝒪(log(S)^2) to 𝒪(1) scaling in terms of the boson cutoff S for the gauge-invariant hopping term and an 𝒪(log(S)) to 𝒪(1) improvement for the magnetic field term. Moreover, in order to also compare the prefactors of the compilation, we developed an efficient all-qubit algorithm in the Fock-binary encoding for simulating bosonic hopping terms. We show that even this efficient approach yields gate counts that are 10^4 times higher than our oscillator-qubit approach.
We then applied our compilation strategies to some of the most pertinent quantum simulation tasks – ground state preparation and time evolution, for which we developed an oscillator-qubit variational quantum eigensolver method. We benchmarked our hybrid oscillator-qubit approach through numerical simulations of lattice gauge theories in (1+1)D in Bosonic Qiskit, and demonstrated that low-depth oscillator-qubit circuits can prepare states with a high overlap with the ground state for small systems. In particular, we found relative energy errors of <10^-3 for just a two-layer ansatz for three matter sites. Separately, we simulated both the time dynamics and VQE in the presence of the leading source of noise in circuit QED architectures
– transmon decay and dephasing. We then developed an approach for post-selecting on Gauss's law in VQE circuits, using the fact that each term in the Hamiltonian individually conserves Gauss's law, and showed that energy errors of 20% of the difference between ground and first excited state can be reached with this method. We also showed how non-trivial observables can be measured within our approach, including the superfluid stiffness and string order correlators. Finally, we discussed how quantum signal processing – an alternative approach to eigenstate preparation with performance guarantees – can be implemented in oscillator-qubit systems, showing that its time to solution for a fixed error is comparable to VQE.
§.§ Prospect for quantum advantage
For simulations in the NISQ-era, it is important that errors do not completely destroy the signal to be observed. In cavities, the probability of a Fock state |N⟩ to decay to |N-1⟩ scales linearly with N, therefore posing a challenge to the achievable fidelity for large N, in particular in the context of quantum computation (see e.g. section IIE2 in Ref. <cit.>). By contrast, when mapping bosons to qubits using the Fock-binary encoding, the probability to be in an encoded Fock state |N⟩ has a decay rate which scales as Ω(1) and 𝒪(log_2(N)). However, this apparent advantage of the Fock-binary encoding does not directly materialize when considering observables. As we discuss in App. <ref>, for both cavities and qubits, even non-linear observables such as ⟨n̂^2⟩ have a decay rate which is 𝒪(1) in N. This is in analogy to the fact that for many-qubit systems, local observables can have low error even when the global fidelity of the state is essentially zero <cit.>. Therefore, the linearly growing decay rate of number states in cavities need not lead to particular difficulties for quantum simulation when considering such local observables. Moreover, the effective errors induced by qubit decay in the Fock-binary encoding are highly non-linear in N and also lead to large jumps in occupations. For example, Fock states with N=2^n, n∈ℕ, decay immediately to N=0, leading to an effective loss of 2^n bosons at once. Hamiltonian evolution with non-linear terms such as an on-site interaction in the presence of Fock-state-dependent loss can lead to highly correlated errors. This is not the case for bosonic hardware, where the decay rate of ⟨n̂^2⟩ for an initial Fock state is a smooth function of the Fock state number and in a single decay event, only a single boson is lost. Finally, in cavity QED, boson loss is far slower (decay rate κ≈1 kHz, with some cavities shown to exhibit κ<1 Hz) than qubit decay (decay rate γ≈5-10 kHz). The excellent coherence of cavities is illustrated by the recent experimental preparation of coherent (i.e., definite parity) photon cat states with as many as 1000 photons <cit.>. Therefore, the differing nature of errors in oscillators, which is closer to physical errors appearing in bosonic systems, can pose advantages for qubit-boson hardware compared to all-qubit simulations of bosonic systems.
As to the prospect of quantum advantage, we believe that non-equilibrium dynamics in regimes with large boson number fluctuations, starting in Fock states, will be challenging to simulate classically: the widely-used time-evolving-block-decimation algorithm for tensor networks scales cubically with the boson number cutoff <cit.>. Monte-Carlo methods struggle with non-equilibrium and real-time dynamics, while semi-classical methods such as the truncated Wigner approximation <cit.> struggle with initial states that are non-Gaussian. Finally, brute-force sparse diagonalization methods struggle with large systems – we expect dynamics of system sizes of L>20 bosonic sites with an average of two bosons per site to be challenging to simulate when considering initial states leading to large boson fluctuations such as |0⋯0 L L 0⋯0⟩, in which case the cutoff needs to be chosen to be essentially given by the number of bosons in the system, 2L, leading to a Hilbert space size of \binom{3L-1}{2L}. The largest simulations to date have reached Hilbert space sizes of <2^40 <cit.>. Hence, we expect to surpass classical simulability using sparse matrix methods for L≈16. This is in contrast to qubit-only simulations, where systems of L≳50 are needed to reach beyond-classical regimes <cit.>.
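The Hilbert-space counting behind this estimate is easy to reproduce; the short script below evaluates the stated binomial coefficient and shows that the ~2^40 frontier is reached around L ≈ 16 (the printed values are our own evaluation of the formula).

```python
from math import comb, log2

# Dimension of the Hilbert space of N = 2L bosons on L sites:
# C(3L - 1, 2L).  The ~2^40 frontier quoted in the text is reached
# around L ~ 16.
for L in (15, 16, 17):
    dim = comb(3 * L - 1, 2 * L)
    print(f"L = {L:2d}: dim = {dim:.2e} = 2^{log2(dim):4.1f}")
```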
§.§ Future directions
Our study opens a promising pathway towards using native oscillator-qubit hardware to simulate boson-fermion-gauge field systems. Compared to qubit-based hardware, this approach will be most advantageous when simulating systems with strong boson number fluctuations, for example in chaotic field theory evolution <cit.>, as such a case requires large cutoffs and, consequently, the synthesis of square-root factors (when acting with creation and annihilation operators) that are expensive to synthesize in all-qubit hardware. A key advantage of our digital approach compared to classical or analog methods is its versatility – our approach can simulate topologies that do not match the hardware topology. For example, a 3D model can in principle be mapped to the 2D array at the expense of needing to implement long-range entangling gates with SWAPs.
Extending our approach to non-Abelian gauge theories, most importantly SU(N) to simulate the weak and strong forces, is most pertinent. To this end, our Schwinger-Boson approach can be relatively straightforwardly generalized <cit.> and we expect similar scaling advantages to materialize as we found for U(1) fields.
Many applications of qubit-oscillator quantum simulation such as vibronic excitations of molecules or phonon-electron interactions in condensed-matter physics show similar decoherence processes to the one present in qubit-oscillator hardware. This opens up the direction of hybrid qubit-oscillator digital simulation of open-system dynamics, which would be an interesting topic to explore next. In noisy qubit hardware, this would not be immediately possible as the physical loss processes do not resemble boson loss in Fock-binary encodings as discussed above.
Looking into the future beyond noisy simulations, our work motivates the study of bosonic error-correcting codes, i.e., codes that directly encode a bosonic logical operator. Furthermore, it would be interesting to explore whether our proposed hybrid oscillator-qubit approach also yields advantages for equilibrium state preparation away from the ground state. To that end, quantum signal processing <cit.>, methods based on filtering <cit.>, or thermalization <cit.> could be employed. Another interesting direction is to develop explicit benchmarks of dynamical quantum simulations of, e.g., string breaking and particle scattering problems <cit.> to better compare hybrid oscillator-qubit and qubit-based approaches.
While challenging in cavity QED hardware, a purely native approach for fermion-boson problems could be explored by combining our approach with digital fermionic quantum processing in neutral atom arrays <cit.>, where bosons can be encoded in (the mechanical) harmonic oscillator degrees of atoms in tweezers <cit.>.
§ ACKNOWLEDGMENTS
This research was funded by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704. C2QA led this research. Support is also acknowledged from the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator. S.K. is supported with funds from the Ministry of Science, Research and Culture of the State of Brandenburg within the Centre for Quantum Technologies and Applications (CQTA). J.M.M. acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. 2141064. L.F. is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) as part of the CRC 1639 NuMeriQS – project no. 511713970.
We acknowledge helpful discussions with Marko Cetina, Zohreh Davoudi, Alexey Gorshkov, Fabian Grusdt, Mohammad Hafezi, Or Katz, and Torsten Zache.
External interest disclosure: SMG is a consultant for, and equity holder in, Quantum Circuits, Inc. |
http://arxiv.org/abs/2409.02702v1 | 20240904133912 | Incorporating Like-Minded Peers to Overcome Friend Data Sparsity in Session-Based Social Recommendations | [
"Chunyan An",
"Yunhan Li",
"Qiang Yang",
"Winston K. G. Seah",
"Zhixu Li",
"Conghao Yanga"
] | cs.SI | [
"cs.SI",
"cs.AI"
] |
Incorporating Like-Minded Peers to Overcome Friend Data Sparsity in Session-Based Social Recommendations
Chunyan An [1], Yunhan Li [1], Qiang Yang [2] (corresponding author), Winston K.G. Seah [3], Zhixu Li [4], Conghao Yang [1]
[1]College of Computer Science, Inner Mongolia University, Hohhot, Inner Mongolia, China
[2]College of Medicine, University of Florida, Gainesville, USA
[3]School of Engineering and Computer Science, Victoria University of Wellington, Wellington, New Zealand
[4]School of Computer Science, Fudan University, Shanghai, China
Corresponding author at: College of Medicine, University of Florida, Gainesville, USA.
E-mail addresses: [email protected] (C. An), [email protected] (Y. Li), [email protected] (Q. Yang).
§ ABSTRACT
Session-based Social Recommendation (SSR) leverages social relationships within online networks to enhance the performance of Session-based Recommendation (SR). However, existing SSR algorithms often encounter the challenge of “friend data sparsity”. Moreover, significant discrepancies can exist between the purchase preferences of social network friends and those of the target user, reducing the influence of friends relative to the target user's own preferences.
To address these challenges, this paper introduces the concept of “Like-minded Peers” (LMP), representing users whose preferences align with the target user's current session based on their historical sessions. This is the first work, to our knowledge, that uses LMP to enhance the modeling of social influence in SSR. This approach not only alleviates the problem of friend data sparsity but also effectively incorporates users with similar preferences to the target user.
We propose a novel model named Transformer Encoder with Graph Attention Aggregator Recommendation (TEGAARec), which includes the TEGAA module and the GAT-based social aggregation module. The TEGAA module captures and merges both long-term and short-term interests for target users and LMP users. Concurrently, the GAT-based social aggregation module is designed to aggregate the target users' dynamic interests and social influence in a weighted manner.
Extensive experiments on four real-world datasets demonstrate the efficacy and superiority of our proposed model and ablation studies are done to illustrate the contributions of each component in TEGAARec.
Session-based Social Recommendation, Session-based Recommendation, Recommendation System, Graph Neural Network
September 9, 2024
§ INTRODUCTION
Session-based recommendation (SR) is tasked with modeling a user's interest preferences based on historical data, such as the user's past purchases within a given timeframe, and predicting the next item of interest <cit.>. Previous studies have demonstrated that incorporating users' social relationships can enhance recommendation performance <cit.>. When social relationships are integrated into the session-based recommendation task, it becomes known as session-based social recommendation (SSR) <cit.>.
Numerous methods have been proposed to enhance the performance of SSR tasks, which can be broadly categorized into two groups, i.e., historical behavioral information based methods <cit.> and social/interactive information based methods <cit.>.
The first group focuses on leveraging historical behavioral information to predict the recommendation information of the target user. Early methods included collaborative filtering techniques <cit.>, while more recent approaches focus on treating session data as sequences and modeling them using recurrent neural networks (RNNs) <cit.>.
The second group centers on information aggregation, wherein the target user's embedded representation is learned by integrating information from both the user-user social network and the user-item interaction network <cit.>. Many existing methods adopt Graph Neural Networks (GNNs) to construct their models, aiming to capture the intricate item transitions within sessions <cit.>.
However, real-world SSR algorithms encounter several challenges. Firstly, the issue of “friend data sparsity” arises, where the recommendation data may lack sufficient information about the friends of the target user, resulting in missing interaction data in the current or historical sessions. Secondly, the preferences of social network friends may differ significantly from those of the target user, and the influence of friends may be limited compared to the target user's own preferences, especially when facing the problem of friend data sparsity, potentially leading to inaccurate recommendations. Last but not least, users' interests may evolve dynamically over time, posing a considerable challenge in accurately modeling interest changes of users <cit.>.
To tackle these challenges, we propose the concept of “Like-minded Peers” (LMP) to represent users whose preferences are aligned with the target user's current session in their history sessions. We posit that users who exhibit similarity in interaction patterns with the target user, even if they are not social friends, can offer valuable insights. To the best of our knowledge, this is the first work using LMP to enhance SSR. It not only alleviates the problem of friend data sparseness but also effectively incorporates users with similar preferences to the target user.
For example, consider a scenario where a mother purchases diapers and formula. In this case, she is more likely to share interests with other users who have also bought diapers and formula in the past. This alignment is attributed to the fact that social friends may not necessarily share the same motherhood identity, thus their purchasing behavior has a lesser influence on her buying propensity.
To promote the understanding of LMP, we also provide an
example, as shown in Fig. <ref>. Given the target user Mike in different sessions, at the moment of session a/b, his LMP are defined as the other users who, before session a/b, have interacted with the same items that Mike interacts with in session a/b.
It is notable that while the concept of LMP bears similarities to user-based collaborative filtering algorithms, traditional collaborative filtering methods often suffer from data sparsity issues and limited model expressiveness <cit.>.
In this paper, we collectively refer to the target user's LMP users and social friends as the target user's neighbours.
In addition, considering that the interests of the users (including LMP users and social friends) are dynamic over time, mainstream approaches typically focus only on short-term interests in the current session <cit.>, or consider only the long- and short-term dynamic interests of neighbours, ignoring the long-term interests of the target user <cit.>. In this paper, we model the long-term and short-term interests of the target users as well as their neighbours (including LMP users and social friends) to better capture their dynamic interests. Particularly, we propose a novel model called Transformer Encoder with Graph Attention Aggregator Recommendation (TEGAARec), consisting of a TEGAA module and a graph attention (GAT)-based social aggregator module. TEGAA module focuses on modeling and fusing the dynamic interests of the target user and her/his neighbours. It contains a Transformer <cit.> encoder to encode the short-term interests, a long-term interests encoder to get the long-term interests and a GAT-based Dynamic Interest Aggregator to aggregate the information of items. The learned representations are fed into a GAT-based Social Aggregation module to weightedly aggregate the target users' dynamic interests and the social influence.
The main contributions of this paper are as follows:
∙ To the best of our knowledge, we are the first work to introduce the concept of “Like-minded Peers” (LMP) to alleviate the influence of “friend data sparsity” in SSR .
∙ We propose a novel TEGAARec model, which can model the dynamic interests of the target user and her/his neighbours separately through our designed TEGAA module. A GAT-based Social aggregation module is proposed to weightedly aggregate the target users' dynamic interests and the social influence.
∙ We conduct extensive experiments on four real-world datasets, which shows a large improvement compared to the state-of-the-art work at several metrics.
§ RELATED WORK
§.§ Session-based Recommendation
Session-based recommendation tasks involve sequential recommendation, where early approaches predominantly utilized models from the Recurrent Neural Network (RNN) family to capture temporal order <cit.>. For example, Hidasi et al. <cit.> proposed to use a variant of RNN, namely the Gated Recurrent Unit (GRU) <cit.>, to treat sessions as sequence prediction tasks for session-based recommendation. Li et al. <cit.> further improved upon this by incorporating attention mechanisms into the GRU to better capture users' sequential behaviors and personal intentions.
GNN-based methods have been introduced to address the limitations of RNN models in capturing complex item transitions within sessions. Wu et al. <cit.> proposed the SR-GNN model, which represents sessions as directed graphs and learns item transitions using Gated Graph Neural Networks (GGNN) <cit.>. Xu et al. <cit.> proposed a GC-SAN model that introduces an attention mechanism to learn global dependencies on top of SR-GNN to enhance the session representation. Li et al. <cit.> proposed a HIDE model that goes a step further by introducing a hypergraph neural network approach to the SR task, which models possible interest transitions from different perspectives, and then uses a combination of micro- and macro-approaches to learn the intent behind each clicked item.
However, the increasing complexity of GNN-based approaches has led to only marginal improvements. Zhang et al. <cit.> introduced the Attn-mixer, a multilevel attention mixer network that achieves multilevel reasoning for item transitions without relying on GNNs. Wang et al. <cit.> proposed the GCE-GNN model, which significantly enhances recommendation performance by constructing a global graph to learn item-transition information from other sessions.
Despite their successes, these approaches often overlook the modeling of users' dynamic interests and lacks the modeling of long-term interests across sessions. In contrast, our approach integrates both long-term and short-term user interests to achieve a dynamic representation of users' interests.
§.§ Social Recommendation
User-based collaborative filtering (CF) models <cit.> are the classical algorithm to recommend items to the target user based on user similarity by finding other users with similar preferences or purchase history to the target user and recommending items to the target user that have been liked or purchased by these similar users. They operate under the assumption that users who have interacted with similar items in the past tend to share similar preferences, making information about such users valuable for recommendation. However, they often encounter problems of data sparsity and limited model expressiveness <cit.>.
§.§ Session-based Social Recommendation
Several existing methods tried to address the friend data sparsity problem by employing repeated friend selection <cit.>.
For instance, Song et al. <cit.> proposed the DGRec model, which utilized RNN-based models to capture the dynamic interests of users and their friends, while employing graph attention networks to model the influence of friends. Building upon DGRec, Liu et al. <cit.> introduced GNNRec, an enhanced model that constructed session graphs and leveraged Gated Graph Neural Networks (GGNN) to model complex item transitions, further improving recommendation performance.
Additionally, Chen et al. <cit.> proposed SERec, a generalized social recommendation framework that utilized heterogeneous graph neural networks to learn user-item representations, effectively capturing item transitions across sessions. Feng et al. <cit.> proposed a Hierarchical Social Similarity-guided Model with Dual-mode Attention (HMDA) for Session-based Recommendation where the social influence exerted by friends are aggregated.
Although these methods perform well, they are less effective when the user has fewer friends. To overcome this limitation, we introduce the concept of “Like-minded Peers” (LMP), which can enable the identification of users who genuinely share similar preferences with the target users.
§ PROBLEM DEFINITION
Before delving into the problem definition, we first give the basic definitions of concepts used throughout this paper and then the definitions relevant to users: Like-minded Peers (LMP), social friends, and neighbours.
We denote G=(U,E) as the social network where U and E are the users and the social relations among users, respectively. e_(u̅,u)=1 represents the existence of a social edge between the target user u̅ and user u, otherwise e_(u̅,u)=0. Q represents the set of all sessions of users, and Q_u = {S_1^u, S_2^u,…, S_T^u } denotes the set of sessions for a user u ∈ U, where S_t^u is the t-th session of user u. We denoted S_t^u = {i_t,1^u, i_t,2^u, …, i_t,n^u} as the set of n items that the user u preferred at the t-th session. T is the number of historical sessions of user, and n is the number of items in the session. Each user's sessions are chronologically arranged.
Like-Minded Peers. Given a target user u̅ in the T+1-th session, the Like-Minded Peers (𝒳_u̅^T+1) of u̅ are the set of users ranging from the historical session interval [1,T] who have item intersections with the user u̅ in session T+1. It is formally defined below:
𝒳_u̅^T+1 = {u ∈ U | S_T+1^u̅ ∩ ⋃_t=1^T S_t^u ≠ ∅, u ≠ u̅}
Social Friends. Given a target user u̅ in the T+1-th session, social friends (𝒵_u̅^T+1) are the set of users ranging from the historical session interval [1,T] who have connection edge in the social network G with the target user u̅ and have interaction behaviors in the historical session. It is formally defined below:
𝒵_u̅^T+1 = {u ∈ U | e_(u̅,u) = 1, ⋃_t=1^T S_t^u ≠ ∅, u ≠ u̅}
Neighbours of Users. Traditionally, neighbours are the users who directly/indirectly connect the target user in the social network. In this paper, we give a broad definition of neighbour which is the union set of LMP users 𝒳_u̅^T+1 and social friends 𝒵_u̅^T+1 for the target user u̅, denoted as:
𝒩_u̅^T+1 = 𝒳_u̅^T+1∪𝒵_u̅^T+1
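These three definitions translate almost directly into code. The sketch below assumes a simple data layout (a dict mapping each user to a chronological list of item lists, plus a set of ordered social edges); all names are ours, not from any reference implementation.

```python
def lmp(target, sessions, T):
    """Like-Minded Peers: users whose items in historical sessions 1..T
    intersect the target user's current session S_{T+1} (sessions[target][T],
    0-indexed)."""
    current = set(sessions[target][T])
    return {u for u, hist in sessions.items()
            if u != target and current & set().union(*hist[:T])}

def social_friends(target, sessions, edges, T):
    """Social friends: socially connected users with at least one
    interaction in the historical sessions (edges stored as ordered pairs
    for brevity)."""
    return {u for u in sessions
            if u != target and (target, u) in edges and any(sessions[u][:T])}

def neighbours(target, sessions, edges, T):
    """Neighbours: union of LMP users and social friends."""
    return lmp(target, sessions, T) | social_friends(target, sessions, edges, T)

sessions = {"mike": [["milk"], ["diapers", "formula"]],
            "ann":  [["diapers"], ["tea"]],
            "bob":  [["beer"], []]}
edges = {("mike", "bob")}
print(neighbours("mike", sessions, edges, T=1))  # {'ann', 'bob'}
```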
Session-based Social Recommendation (SSR). Given a new session S_T+1^u̅ for target user u̅ and her/his neighbours 𝒩_u̅^T+1, the goal of SSR is to recommend a set of items from I that u̅ is likely to be interested in during next time step T+1. Here, we employ the session information coming from the dynamic interests including the historical sessions ⋃_t=1^TS_t^u̅ and the current session S_T+1^u̅ as well as the neighbour information including LMP users, i.e., 𝒳_u̅^T+1 and social friends i.e., 𝒵_u̅^T+1.
§ PROPOSED METHOD
In this section, we present the detail of our proposed TEGAARec model. Fig. <ref> illustrates the overall framework, which first generates neighbours by randomly sampling LMP users and social friends from the historical sessions, then encodes and merges the long- and short-term interests of the target user and neighbours with our proposed TEGAA module. Next, a GAT-based social aggregation layer is designed to weightedly aggregate the target users' dynamic interests and the social influence. Finally, a prediction layer is used to calculate the similarities between the item embeddings and target user embedding for selecting the top-k items as the recommendation.
§.§ Historical Session-based Neighbour Sampling
In this section, we introduce how to get the neighbours 𝒩_u̅^T+1, i.e., LMP users and social friends, of the target user u̅ given the historical sessions ⋃_t=1^T S_t^u̅. Particularly, for the LMP users acquisition, we first collect the items of u̅ at the current session S_T+1^u̅ and then check the users from the historical sessions ranging from the historical session interval [1, T] where users sharing the same items with u̅ are selected as the candidates. Then, we randomly choose users from candidates as the LMP users according to a predefined value called L_l.
Note that if the size of LMP users is less than L_l, we randomly select duplicate LMP from the existing LMP set. Finally, the LMP users, i.e., 𝒳_u̅^T+1 are captured from the historical sessions. Note that our sampling strategy can be extended to the more complex one by replacing the random selection with the volume-based selection of shared items.
Similarly, for social friends, we first extract users from the social networks in the historical sessions. The users directly connected to u̅ and having interaction behaviors in the historical session are treated as the candidates. Next, the random sampling strategy is employed to select users based on the predefined value L_s
to get 𝒵_u̅^T+1 as the social users.
We union the LMP users and social friends to acquire neighbours of the target user u̅.
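A minimal sketch of this fixed-size sampling with duplicate padding is given below; the helper name is ours, and the usage comments reuse the hypothetical set constructions from the previous sketch.

```python
import random

def sample_fixed(pool, size, rng=random):
    """Sample exactly `size` users; if the candidate pool is smaller,
    pad it with randomly chosen duplicates, as described above."""
    pool = list(pool)
    if not pool:
        return []
    if len(pool) >= size:
        return rng.sample(pool, size)
    return pool + [rng.choice(pool) for _ in range(size - len(pool))]

# e.g. exactly L_l LMP users and L_s social friends per target user:
# lmp_users = sample_fixed(lmp("mike", sessions, T=1), size=25)
# friends   = sample_fixed(social_friends("mike", sessions, edges, T=1), size=25)
```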
§.§ Autoregressive Mask Sequence Construction
To augment the number of training examples <cit.>, we try to split the current session sequence S_T+1^u̅ into several sub-sessions sequence MaskedInput_k and the corresponding labels Target_k. Specifically, for the sequence of interaction item sequence S_T+1^u̅ = { i_T+1,1^u̅,i_T+1,2^u̅,…,i_T+1,n^u̅}, we utilize
an Autoregressive Mask Sequence Construction module to split it as follows:
MaskedInput_k = {i_T+1,1^u̅, …, i_T+1,k^u̅, 0, …, 0},
Target_k = {i_T+1,k+1^u̅}
where MaskedInput_k represents the k-th constructed masked input sequence and Target_k represents the k-th item to be predicted, k ∈ [1, n-1].
Different from existing methods which augment data in the data preprocessing step <cit.>, we implement it during the model's training phase. Our strategy can avoid the repeated sampling of neighbours in the process of historical session-based neighbour sampling.
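A possible implementation of this splitting step, using 0 as the padding id as in the equations above:

```python
def masked_examples(session, pad=0):
    """Split a session [i_1, ..., i_n] into n-1 (masked input, target)
    pairs: the k-th pair keeps the first k items and predicts item k+1."""
    n = len(session)
    return [(session[:k] + [pad] * (n - k), session[k]) for k in range(1, n)]

print(masked_examples([5, 9, 2, 7]))
# [([5, 0, 0, 0], 9), ([5, 9, 0, 0], 2), ([5, 9, 2, 0], 7)]
```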
§.§ Long- and Short-Term Interests Encoding and Fusing with TEGAA Module
Generally, interests of users may vary over time, therefore, it is essential to encode the long- and short-term interests, which can systemically learn the preference of target users. Given any user u_m either from the neighbours 𝒩_u̅^T+1 or from the target user u̅, and her/his interacted item sequence S_t^u_m = {i_t,1^u_m, i_t,2^u_m, ...,i_t,n^u_m}, we aim to learn the fused representations of long-term and short-term interests of the user u_m.
§.§.§ TEGAA Module
To capture the long-term and short-term interests and merge these interests of an user u_m, we propose a novel TEGAA module which consists of a Transformer Encoder, a user-embedding look-up table and a GAT-based Dynamic Interest Aggregator.
Long- and Short-Term Interests Encoding. Given the item interaction sequence S_t^u_m ={i_t,1^u_m,i_t,2^u_m,...,i_t,n^u_m}, we first use the Transformer Encoder to get the embeddings of S_t^u_m as the short-term interests below:
h_t^u_m = Transformer_Encoder(S_t^u_m)
where Transformer_Encoder(·) is the function to get the encoded information of the interaction sequence of the session S_t^u_m.
We employ a single vector from the user-embedded lookup table to learn the representation of the long-term interests of the user u_m below:
emb_u_m = Embedding_user (u_m)
where Embedding_user∈ℝ^|U| × d denotes the user embedding lookup table, |U| is the number of all users, and d is the size of the embedding dimension.
Long- and Short-Term Interests Fusing. Given the encoded information of items and users, we first modify the Multi-head Attention Mechanism to obtain the Multi-head Graph Attention Mechanism (MHGAT) by adjusting the query vectors to the center node, i.e., the target user embedding, and the key vectors to the neighbour embedding. MHGAT is utilized to capture the short-term interest of the user u_m.
Particularly, we treat the embeddings of target user as the center node and the embeddings of neighbours as the neighbouring nodes of multi-head attention. They are calculated as follows:
Q = emb_u_m W_i^Q, K = h_t^u_m W_i^K, V = h_t^u_m
head_i = Attention(Q, K, V)= SoftMax(QK^⊺/√(d_k))V
h_s = MHGAT(Q, K, V)
= Concat(head_1; …; head_z) W^O + b^O
where head_1, …, head_z denote the z heads of the multi-head attention mechanism, Concat denotes the vector splicing operation,
W^O∈ ℝ^zd_k×d, W_i^Q∈ ℝ^d×d_k, W_i^K∈ ℝ^d×d_k are the mapping parameter matrices, b^O∈ ℝ^d is the bias term, ⊺ denotes the transpose operation, and d_k = d/z.
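A compact PyTorch rendering of these equations is sketched below; as in the formulas, the centre-node embedding supplies the single query, neighbour embeddings supply the keys, and the values are the raw neighbour embeddings (no value projection). Class and argument names are ours.

```python
import torch
import torch.nn as nn

class MHGAT(nn.Module):
    """Multi-head graph attention with the centre node as the query."""
    def __init__(self, d, z):
        super().__init__()
        assert d % z == 0
        self.z, self.d_k = z, d // z
        self.W_q = nn.Linear(d, d, bias=False)   # all heads' W_i^Q at once
        self.W_k = nn.Linear(d, d, bias=False)   # all heads' W_i^K at once
        self.W_o = nn.Linear(d, d)               # W^O and b^O

    def forward(self, center, neighbours):
        # center: (B, d); neighbours: (B, N, d)
        B, N, d = neighbours.shape
        q = self.W_q(center).view(B, self.z, 1, self.d_k)
        k = self.W_k(neighbours).view(B, N, self.z, self.d_k).transpose(1, 2)
        v = neighbours.view(B, N, self.z, self.d_k).transpose(1, 2)  # V = h
        att = torch.softmax(q @ k.transpose(-1, -2) / self.d_k ** 0.5, dim=-1)
        return self.W_o((att @ v).reshape(B, d))  # concat heads, then project

# e.g. MHGAT(d=128, z=8)(torch.randn(4, 128), torch.randn(4, 25, 128)) -> (4, 128)
```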
Then, we fuse the short-term interest result h_s obtained from the Dynamic Interest Aggregator with the long-term interest representation emb_u_m to obtain the final representation h_u_m below:
h_u_m = TensorFusion(h_s,emb_u_m)
= ReLU(Concat[h_s; emb_u_m] W^F + b^F)
where ReLU(x) = max(0, x) is the nonlinear activation function, W^F∈ℝ^2d× d is the mapping parameter matrix, b^F∈ℝ^d is the bias term, and d is the size of the embedding dimension.
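The fusion itself is one linear layer over the concatenation followed by ReLU; a minimal module consistent with the equation could look like this:

```python
import torch
import torch.nn as nn

class TensorFusion(nn.Module):
    """h_u = ReLU(Concat[h_s; emb_u] W^F + b^F)."""
    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(2 * d, d)  # W^F and b^F

    def forward(self, h_s, emb_u):
        return torch.relu(self.proj(torch.cat([h_s, emb_u], dim=-1)))
```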
§.§.§ Neighbour and Target User Interests Encoding
Given the designed TEGAA module, we apply it to encode the interests of both the neighbours and the target user, obtaining fused long-term and short-term interests. Particularly, for the neighbour interests encoding, we use the TEGAA module to get the encoded results for all |𝒩_u̅^T+1| neighbours in the T-th session, H_n_m = {h_n_m,1, h_n_m,2, …, h_n_m,|𝒩_u̅^T+1|}:
h_n_m,k=TEGAA_Module(S_T^k)
where S_T^k represents the sequence of interaction items for the k-th neighbour's T-th session, h_n_m, k represents the coded representation of neighbour k in the neighbour set through the TEGAA module,
and k ∈ [1, |𝒩_u̅^T+1|].
Similarly, for the target user interests encoding, we feed the constructed masked input sequences into the TEGAA module, and it outputs the encoded information corresponding to each masked sequence of the target user u̅.
The encoded embeddings of n-1 sub-sessions of the target user
u̅ is denoted as H_u̅={h_u̅,1, h_u̅,2, …, h_u̅,n-1}. Each element h_u̅,k∈ H_u̅ is calculated below:
h_u̅,k = TEGAA_Module(MaskedInput_k)
where k is the length of the session.
§.§ GAT-based Social Aggregation Layer
Because the social influence of different neighbours on the target user differs, it is necessary to discern their contributions. In this section, we use the MHGAT module (as mentioned in Section <ref>) to aggregate the target users' dynamic interests and the social influence in a weighted manner. On one hand, the encodings of the neighbours of the target user u̅ in session T, H_n_m = {h_n_m,1, h_n_m,2, …, h_n_m,|𝒩_u̅^T+1|}, serve as neighbour nodes. On the other hand, the encoding result H_u̅ = {h_u̅,1, h_u̅,2, …, h_u̅,n-1} of each mask sequence in session T+1 for the target user u̅ is used as the target aggregation node. The node types of the target aggregation node and the neighbour nodes are both users, i.e., homogeneous nodes, so we add the self-attention mechanism to the aggregation. The final aggregated result is H_u̅^' = {h_u̅,1^', h_u̅,2^', …, h_u̅,n-1^'}, whose elements are calculated below:
h_u̅,k^' = MHGAT(h_u̅,k W_i^Q, h_n_m^' W_i^K, h_n_m^')
where h_n_m^' = [h_n_m,1, …, h_n_m,|𝒩_u̅^T+1|, h_u̅,k], and h_u̅,k represents the encoding result of the target user u̅ for the k-th mask sequence in session T+1, k ∈ [1, n-1].
§.§ Prediction Layer
In this section, we introduce how to select the top relevant items that the target user preferred.
We first encode all the items using an item-embedded lookup table. Given any item i with the id i_v, we get its embedding below:
h_v^i=Embedding_item (i_v)
Then, we use the SoftMax function to predict the probability of the next item as follows:
p(Target_k| MaskedInput_k;{S_t^n_m,n_m∈𝒩_u̅^T+1})
=exp((h_u̅,k')^⊺ h_Target_k^i)/∑_j=1^|I|exp((h_u̅,k')^⊺ h_j^i)
where S_t^n_m represents the t-th session data of neighbour n_m, t ∈ [1, T ], and 𝒩_u̅^T+1 is the set of neighbours of target user u̅. Here, we denote h_u̅,k' as the encoding result of the k-th mask sequence of target user u̅ in Session T+1 into H_u̅, ⊺ as the transpose operation, and |I| as the total number of items.
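Computationally, this amounts to an inner product of the aggregated user state with every item embedding followed by a softmax; a sketch with hypothetical tensor names:

```python
import torch

def next_item_probs(h_user, item_table):
    """h_user: (d,) aggregated state h'_{u,k}; item_table: (|I|, d).
    Returns the softmax distribution over all items."""
    return torch.softmax(item_table @ h_user, dim=-1)

# top-k recommendation:
# probs = next_item_probs(h, item_embeddings); top20 = probs.topk(20).indices
```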
§.§ Model Training
We treat the SSR task as a classification task that allows the model to learn which items the user is interested in. We use the maximum likelihood estimation to train the model and gradient descent to optimize the model:
-∑_u∈U ∑_t=2^T ∑_k=1^n-1 log p(Target_k | MaskedInput_k; {S_t<T^n_m, n_m ∈ 𝒩_u̅^T+1})
§ EXPERIMENTS
In this section, we describe the experimental setup including the used real-world datasets and the comparative baseline methods as well as the used evaluation metrics. We also introduce the implementation details to ensure the reproduction of our model. Finally, we provide the quantitative analysis of the results, the ablation study and the hyper-parameter analysis.
§.§ Experimental Setup
§.§.§ Datasets
We conducted experiments on the following four commonly used real-world datasets, for which descriptive statistical information is shown in Table <ref>.
Douban music and Douban movie[http://www.douban.com]. Douban is a popular communication platform including the inforamtion of music and movie. Users can rate and communicate about music and movie. The dataset includes user's ratings of music and movie and the corresponding timestamps, as well as social network information.
Extended epinions[http://www.trustlet.org/epinions.html]. This dataset contains information about users' trust and distrust. Users usually rate other users based on the quality of their reviews of the item.
Yelp[https://www.yelp.com/dataset]. A very popular online review platform where users can review local businesses (e.g. restaurants and shops), providing content-rich social networking information.
Following <cit.>, we partitioned each dataset by segmenting user behavior into weekly sessions and then split the sessions into train, validation, and test sets. Specifically, sessions within the last s weeks were retained for evaluation. For the Douban music, Douban movie, Extended Epinions, and Yelp datasets, we selected s values of 26, 26, 14, and 24, respectively. Additionally, we filtered out items that did not appear in the training set to ensure consistency across sets. The held-out sessions were randomly and evenly partitioned between the validation and test sets.
§.§.§ Comparative Methods
We selected a number of representative works for comparison including:
* NextItNet<cit.> – A CNN-based model with long-range modeling capabilities. It expands the convolutional layers to increase the receptive field and introduces residual networks into multiple convolutional layers.
* NARM<cit.> – A RNN-based model. It better captures users' sequential behaviours and personal intentions by introducing attention mechanisms into GRUs.
* STAMP<cit.> – An attention-based model. It captures both users' long-term and short-term interests using the attention mechanism.
* SR-GNN<cit.> – A GNN-based model. It converts sequential problems into graph problems by constructing sequences of conversations as directed graphs and then learns the complex transitions between items through gated neural networks.
* GCE-GNN<cit.> – A GNN-based model. It learns information about item-transitions from other sessions through the construction of a global graph, which are further fed into the graph encoder.
* HIDE<cit.> – This method delineates potential shifts in interest from diverse perspectives by generating a hypergraph for each session. Following this, both micro and macro methodologies are employed to discern the underlying purpose of each clicked item.
* Attn-Mixer<cit.> – An attention-based model. It proposes a multi-level attention hybrid network and uses multi-level user intentions to achieve multi-level reasoning for item transitions.
* SERec<cit.> – A social recommendation model. It learns the user-item representations using heterogeneous graph neural networks to capture item transitions across sessions.
* DGRec<cit.> – A social recommendation model. It uses RNNs and graph attention networks to model users' dynamic interests and the influence of their friends.
* GESU<cit.> – A social recommendation model. It captures the rapid user interest changes using GGNN and multi-head attention mechanisms, and aggregates social influence from friends using graph attention mechanisms.
* GNNRec<cit.> – A social recommendation model. It improves performance of DGRec by constructing session graphs and uses GRU to model complex transitions between items.
§.§.§ Evaluation Metrics
We used two popular ranking-based metrics for our evaluation <cit.>: R@K (Recall@K) and N@K (NDCG@K). For R@K, we use K={10, 20}, while for N@K, we set K=20 following <cit.>.
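For concreteness, the usual single-target forms of these metrics are sketched below; the exact averaging over test sessions used in the paper may differ.

```python
import math

def recall_at_k(ranked, target, k):
    """1 if the ground-truth next item is in the top-k list, else 0."""
    return float(target in ranked[:k])

def ndcg_at_k(ranked, target, k):
    """Single-target NDCG@k: 1 / log2(rank + 1), with rank counted from 1."""
    if target in ranked[:k]:
        return 1.0 / math.log2(ranked.index(target) + 2)
    return 0.0

print(recall_at_k([3, 7, 1], 7, k=2), ndcg_at_k([3, 7, 1], 7, k=2))
# 1.0 0.6309297535714574
```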
§.§.§ Implementation Details
We use the Pytorch framework <cit.> to build our models, and all experiments are run on two Tesla V100 GPUs. For faster model training convergence,
we applied the warm-up technique <cit.> to the learning rate during the initial training epochs. Specifically, we use the grid search method to search for the following hyper-parameters: learning rate in {0.01, 0.005, 0.0001, 0.00005}, number of neighbours in {25, 50}, number of LMP in {5, 15, 25}, number of layers of TEGAA modules in {1, 3, 5}, number of warm-up learning steps in {5, 10, 20}, and the number of early stop tolerances in {10, 20}. We used a uniform batch size of 50 and the Adam <cit.> optimiser. For all baseline model experiments, we used their default hyper-parameters, and we report the higher value between our experimental results and those given in the original papers.
§.§ Quantitative Results
Table <ref> presents a performance comparison between our proposed TEGAARec model and selected baseline models. The experimental findings reveal that, apart from the Attn-mixer model, models incorporating social influence such as DGRec and GNNRec outperform those lacking social influence. This underscores the beneficial impact of social influence on recommendation performance, particularly evident in datasets like Yelp with numerous users and dense social networks. Moreover, GCE-GNN model gains a significant advantage by leveraging global graph construction using other session information, further validating the benefits of incorporating such data in recommendation tasks.
In contrast, the Attn-mixer model, which enhances inference through multi-level user intent modeling, achieves competitive results, highlighting the advantages of employing attention mechanisms.
Our proposed TEGAARec model effectively identifies users with genuinely similar preferences to the target user by leveraging Like-minded Peers (LMP) mining in the recommendation data. Additionally, it accurately models the dynamic interests of users. The results demonstrate a substantial performance improvement compared to other state-of-the-art works.
§.§ Ablation Studies
To assess the effectiveness of each key component in the TEGAARec model, we conducted ablation experiments from six different perspectives. These experiments were carried out on the Douban music and Yelp datasets, and the results are summarized in Table <ref>:
* TEGAARec w/o LMP: This variant utilizes only the social relationships provided by the dataset and excludes the use of Like-minded Peers (LMP).
* TEGAARec w/o GAL: It directly concatenates dynamic interest influences of different neighbours without employing graph attention aggregation.
* TEGAARec w/o SF: It excludes the social friends of the target user while keeps the LMP only as the neighbours.
* TEGAARec w/ PE: This variant incorporates position encoding into the TEGAA module.
* TEGAARec w/o ULI: It excludes the target user’s long-term interest embedding from the model.
* TEGAARec w/o ALI: This variant removes all user long-term interest embeddings from the model.
The experimental results demonstrate that the presence of Like-minded Peers (LMP), the Graph Attention Aggregation Layer, and the inclusion of long-term interests of target users and their neighbours are pivotal factors influencing the model's performance. Removal or replacement of any of these components, as observed in TEGAARec w/o LMP, TEGAARec w/o GAL, and TEGAARec w/o ALI, respectively, leads to a significant degradation in performance.
Interestingly, the absence of significant performance improvement in TEGAARec w/ PE suggests that users' short-term interests are more accurately captured through a collection of interaction items over a brief period, without the need for sequence and position information, which may be redundant.
Also, the removal of social friends has a small influence on the model's performance. This proves the contribution of our designed LMP to some extent especially in the case of sparse social friends.
Additionally, TEGAARec w/o ULI resulted in a slight performance degradation. This observation could be attributed to the fact that the target user's long-term interests are inherently incorporated when they act as a neighbour to other users.
§.§ Hyper-parameter Sensitivity Analysis
During the model training phase, we employed a grid search technique to optimize the hyperparameters. However, considering the large number of hyperparameters involved, we conducted the search on a reduced subset to maintain training efficiency. To further investigate the influence of hyperparameters on model performance, we selected specific key hyperparameters for sensitivity analysis. Consistent with the ablation experiments, we also performed hyper-parameter sensitivity experiments on the Douban music and Yelp datasets.
§.§.§ Impact of Embedding Dimension
To assess the effect of different embedding dimensions (utilized for both user and item embeddings) on model performance, we experimented with embedding dimensions ranging from 16 to 512. The results, depicted in Fig. <ref>, reveal that the model's performance improvement becomes limited when the embedding dimension exceeds 128 on both datasets, with a marginal decreasing effect thereafter.
§.§.§ Impact of Transformer Encoder Layer
To evaluate the impact of varying Transformer Encoder layers on model performance, we conducted experiments with different layer numbers ranging from 1 to 15. The results, as illustrated in Fig. <ref>, indicate that employing different Transformer Encoder layers does not substantially influence the model's performance on both datasets. Surprisingly, our findings suggest that a single layer of Transformer Encoder is adequate for achieving effective modeling.
§.§.§ Impact of the number of LMP users
To evaluate the impact of different LMP users numbers on model performance, we conducted experiments with the numbers ranging from 5 to 55. As depicted in Fig. <ref>, the results indicate that increasing the number of LMP users enhances the model performance when the number of samples is less than 25 on both datasets. However, when the number of LMP samples exceeds 25, a slight degradation in model performance is observed, possibly attributable to the introduction of noise from an excessive number of LMP users.
§.§.§ Impact of the number of social friend samples
To evaluate the impact of varying social friend sampling numbers on model performance, we experimented with numbers ranging from 5 to 55. As illustrated in Fig.<ref>, the results indicate that employing different numbers of social friend samples does not significantly affect the model performance on both datasets. This observation suggests that the influence from social friends is limited especially when it occurs with the problems of social friends sparsity. This finding is further corroborated by our ablation experiments.
§.§ Scalability of LMP
To further explore the influence of LMP, we conducted the extended experiments on the DGRec and GESU models to verify whether LMP can work well on other models on the four datasets. We denote the improved DGRec and GESU models by introducing LMP as DGRecLMP and GESULMP, respectively.
Fig. <ref> shows the performance comparison of the DGRec model before and after the introduction of LMP. The experimental results show that
the introduction of LMP helps to improve the performance on all four datasets, especially on the Douban music and Douban movie with a 66.7% and 90% improvement for Recall@20, respectively.
Similarly, Fig. <ref> shows the performance comparison of the GESU model before and after the introduction of LMP. The experimental results show that GESULMP achieves more significant improvements compared with the DGRec model. Moreover, on the metrics of the Douban music dataset, the performance of the GESULMP model achieves quite excellent results, with a 247.6% and 109.1% improvement for the metric Recall@20 and NDCG@20, respectively. By analyzing the model structure, both GESU and TEGAARec use the multi-head attention module, which has more powerful data modeling capabilities, thus achieving the superior results compared to the GRU network used in the DGRec model. These experimental results further verify the scalability of the LMP concept.
In addition, we compare the results of the improved DGRec model and GESU model, i.e., DGRecLMP and GESULMP, with our TEGAARec model on four dataset in Table <ref>.
Although the GESULMP model achieves very competitive performance by introducing LMP, our proposed TEGAARec model still maintains a large advantage, especially in modeling large-scale data more efficiently, like the Douban movie dataset, which is an order of magnitude larger than the other three datasets.
§ CONCLUSIONS
In this paper, we introduced the concept of “Like-minded Peers” (LMP) to tackle the problem of social friend sparsity and promote the identification of users who genuinely share similar preferences with the target users. Additionally, we proposed the Transformer Encoder with Graph Attention Aggregator Recommendation (TEGAARec) model, which offers improved modeling of dynamic interests and social influences among users. Both the long-term and short-term interests for target users and LMP users are captured and merged to get the final representations of target users.
The substantial performance enhancement achieved by our model across four real-world datasets from diverse domains underscores the effectiveness and superiority of our approach.
For future work, we aim to delve deeper into two key aspects. Firstly, we plan to explore more fine-grained mining of LMP, including considerations of the relative importance of different LMP to a target user. Secondly, we intend to investigate methods for better modeling the impact of different items within a session, thereby enhancing the overall recommendation accuracy and user experience.
§ ACKNOWLEDGEMENT
The authors wish to thank the Projects of Natural Science Foundation of Inner Mongolia under Grant No.2024MS06014, Inner Mongolia Science & Technology Plan under Grant No.2021GG0164, Natural Science Foundation of China under Grant No.61962039, 62162046, 62362054, 62366038, Inner Mongolia Engineering Lab of Cloud Computing and Service Software and Inner Mongolia Engineering Lab of Big Data Analysis Technology,the Engineering Research Center of Ecological Big Data, Ministry of Education, China.
|
http://arxiv.org/abs/2409.02840v1 | 20240904161230 | R2GQA: Retriever-Reader-Generator Question Answering System to Support Students Understanding Legal Regulations in Higher Education | [
"Phuc-Tinh Pham Do",
"Duy-Ngoc Dinh Cao",
"Khanh Quoc Tran",
"Kiet Van Nguyen"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
R2GQA: Retriever-Reader-Generator Question Answering System to Support Students Understanding Legal Regulations in Higher Education
Phuc-Tinh Pham Do [1,2], [email protected]
Duy-Ngoc Dinh Cao [1,2], [email protected]
Khanh Quoc Tran [1,2], [email protected]
Kiet Van Nguyen [1,2], [email protected]
[1]University of Information Technology, Ho Chi Minh City, Vietnam
[2]Vietnam National University, Ho Chi Minh City, Vietnam
In this article, we propose the R2GQA system, a Retriever-Reader-Generator Question Answering system, consisting of three main components: Document Retriever, Machine Reader, and Answer Generator. The Retriever module employs advanced information retrieval techniques to extract the context of articles from a dataset of legal regulation documents. The Machine Reader module utilizes state-of-the-art natural language understanding algorithms to comprehend the retrieved documents and extract answers. Finally, the Generator module synthesizes the extracted answers into concise and informative responses to questions of students regarding legal regulations. Furthermore, we built the ViRHE4QA dataset in the domain of university training regulations, comprising 9,758 question-answer pairs with a rigorous construction process. This is the first Vietnamese dataset in the higher education regulations domain with various types of answers, both extractive and abstractive. In addition, the R2GQA system is the first system to offer abstractive answers in Vietnamese. This paper discusses the design and implementation of each module within the R2GQA system on the ViRHE4QA dataset, highlighting their functionalities and interactions. Furthermore, we present experimental results demonstrating the effectiveness and utility of the proposed system in supporting the comprehension of students of legal regulations in higher education settings. In general, the R2GQA system and the ViRHE4QA dataset promise to contribute significantly to related research and help students navigate complex legal documents and regulations, empowering them to make informed decisions and adhere to institutional policies effectively. Our dataset is available [Link for accessing the dataset.] for research purposes.
§ INTRODUCTION
The educational regulations of universities consist of documents regarding training regulations, provisions, and guidelines on current training programs that students must adhere to in order to complete their academic programs. However, a significant challenge lies in the potential length and complexity of these educational regulations, making it difficult to read and extract information. Searching for specific information in these documents can be time-consuming, posing difficulties for students and lecturers. Alternatively, students may consult the wrong document, leading to misinterpretations and consequential adverse effects on students.
The question-answering system (QAS) can help address the above problem. Similarly to search engines such as Google or Bing, the output of such a system is the answer to the input question based on the information available in the database. A question-answering system typically comprises two main components: a Document Retriever and a Machine Reader (or Answer Generator for abstractive questions). The document retriever queries relevant information related to the question, which is then passed to the Reader/Generator along with the question to generate an answer. Figure <ref> shows the input of the USER question and the output of the BOT Assistant of the question answering system.
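At its core, such a system is a short composition of stages; the sketch below uses hypothetical placeholder callables for the retriever, reader, and optional generator discussed in this paper.

```python
def answer(question, corpus, retriever, reader, generator=None):
    """Generic retriever-reader(-generator) pipeline.  All callables are
    hypothetical placeholders: `retriever` ranks relevant passages from the
    corpus, `reader` extracts an answer span from them, and an optional
    `generator` rewrites the span into a fluent, human-like answer."""
    context = retriever(question, corpus)   # top-ranked passages
    span = reader(question, context)        # extracted answer span(s)
    return generator(question, span) if generator else span
```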
In the field of Vietnamese legal text, several question-answering systems have been developed. For example, in 2014, the vLawyer system was proposed by <cit.>, a simple question-answering system whose answers are words directly extracted from documents. Therefore, it can be seen that very few question-answering systems in Vietnamese can provide answers with a human-like style.
When addressing abstract responses using language models, there are several approaches. An approach involves extracting multiple spans from the context and concatenating them as part of the MUSST framework <cit.>. However, this method renders the responses less natural and diverse in language than human-like expressions. Another approach consists of passing the question and context through a generator module to produce complete answers (RAG). This method may result in less accurate output answers due to contextual overload, leading to noise. Furthermore, current answer generator models primarily perform text summarization tasks, which are not always suitable for answer extraction tasks. Enhancing performance can be achieved by using a machine reader module to extract answers before passing them through the answer generator models.
An essential component for implementing a question-answering system is training data. For Vietnamese legal documents, there are currently few datasets available for building question-answering systems. The dataset from Kien et al. (2020) <cit.> and the dataset from Pham and Le (2023) <cit.> are two typical examples. However, the output of both associated tasks is a set of ranked texts related to the question. Therefore, in Vietnamese, there is still a lack of datasets with answers extracted from various positions within the context or with natural language styles. Hence, constructing a training dataset with answers synthesized from multiple spans appearing in different positions within the context, or with a natural, human-like style, is essential. In this paper, we have three contributions:
* Question-answering system: We designed a Retriever-Reader-Generator system named R2GQA, the first question-answering system for abstractive answers in Vietnamese, leveraging answers from the Machine Reader. Leveraging answers from the Reader and combining them with questions for the Generator to generate answers helps reduce noise compared to incorporating all information from the context.
* Dataset construction: We create a machine reading comprehension dataset named ViRHE4QA based on the legal regulations in higher education. This is the first Vietnamese dataset in the domain of university regulations that includes various types of answers: multi-span extractive answers and abstractive answers. This dataset comprises 9,758 question-answer pairs that will be used to train the Reader and Generator models in our system.
* Experiments and evaluation: We conduct experiments to evaluate models in the Document Retriever module. We also evaluate extractive reading comprehension models in the machine reader module and text generation models in the Generator module. Additionally, we analyze and compare the performance of our system with open-book question-answering systems, such as RAG <cit.>.
The sections of the paper include Section <ref> that provides an overview of the R2GQA system and the ViRHE4QA dataset. Section <ref> reviews studies related to question-answering systems and datasets in the world and Vietnam. Section <ref> describes the creation and characteristics of the ViRHE4QA dataset. Section <ref> details the design and implementation of our R2GQA system. Section <ref> presents the experimental setup and findings. Section <ref> interprets the results and highlights their implications. Section <ref> examines the limitations and challenges encountered. Finally, Section <ref> summarizes the study and suggests directions for future research.
§ RELATED WORKS
§.§ Related Question Answering System
A question-answering system is a challenging task in natural language processing (NLP). Various types of systems have been developed to date. Based on the type of output answers, there are two popular systems in the field of NLP.
First, question-answering systems whose answers are extracted from the context (extractive question answering). Systems of this type include vLawyer <cit.>, a simple question-answering system for Vietnamese legal texts proposed by Duong and Ho (2014) <cit.>. vLawyer consists of two components: Question Processing and Answer Selection.
DrQA, proposed by Chen et al. (2017) <cit.>, is designed for reading comprehension in open-domain question answering. BERTserini <cit.> is a question-answering system that combines two models: BERT <cit.> and Anserini. Anserini is an information retrieval tool that identifies relevant documents likely to contain the answer. BERT <cit.> (Bidirectional Encoder Representations from Transformers) is a language model that understands context and the relationships between words to extract answers from the contexts retrieved by Anserini. MUSST <cit.> is a framework for automatically extracting answers from a given context. Its answers are formed from multiple spans in the context to create human-like answers. The framework has two main modules: Passage Ranker and Question Answering.
XLMRQA <cit.> is the first Vietnamese question-answering system with three modules: document retriever, machine reader, and answer selector. This system outperforms DrQA and BERTserini on the UIT-ViQuAD dataset <cit.>. ViQAS is a question-answering system proposed by Nguyen et al. (2023) <cit.>. In addition to the three retriever-reader-selector modules similar to XLMRQA, ViQAS includes a preprocessing rule step before the retriever module; within the retriever module, the authors also added smaller steps including evidence extraction and re-ranking. These changes allowed ViQAS to outperform DrQA, BERTserini, and XLMRQA on the UIT-ViQuAD <cit.>, ViNewsQA <cit.>, and ViWikiQA <cit.> datasets.
Second, question-answering systems with abstractive answers (abstractive question answering). For this type, there are two common variants: open-book question answering and closed-book question answering. Open-book question-answering systems typically have two modules: a retriever and a generator. The generator module in these systems is a sequence-to-sequence model such as T5 <cit.> or BART <cit.>. Systems proposed based on open-book question answering include Fusion-in-Decoder <cit.> and RAG <cit.>. In the past two years, RAG has become very popular due to the rapid development of large language models (LLMs) such as Gemini, GPT-4, and Copilot, which significantly enhance the performance of RAG through their ability to generate accurate answers.
Closed-book question-answering systems typically have a single module, the generator. These generators are usually sequence-to-sequence language models pre-trained on a large collection of unsupervised texts. With enough parameters, such models can memorize some factual knowledge within their weights and can therefore generate answers to input questions without needing context. Studies that use this method include <cit.> and CGAP <cit.>. Recently, with the boom of LLMs such as GPT-3.5, GPT-4, Gemini, and LLaMA, closed-book question answering has been widely applied in practice, and chatbots are increasingly common. However, closed-book question answering can sometimes hallucinate, producing confusing and inaccurate answers.
Both of these methods have different advantages and disadvantages. Therefore, in this paper, we design a question-answering system with three modules (Retriever-Reader-Generator) to leverage the strengths and overcome the limitations of the aforementioned methods. This is the first question-answering system for abstractive answers in Vietnamese.
§.§ Related Dataset
Developing question-answering (QA) systems for specific domains requires specialized datasets tailored to domain knowledge and language. In the legal domain, several well-known datasets have been established and widely used. JEC-QA <cit.> is a comprehensive dataset comprising 26,365 multiple-choice questions, encompassing 13,341 single-answer questions (4,603 knowledge-driven and 8,738 case-analysis questions) and 13,024 multi-answer questions (5,158 knowledge-driven and 7,866 case-analysis questions). BSARD <cit.>, created by legal experts, comprises 1,108 questions derived from 22,633 legal articles; articles average 495 words, while questions range from 23 to 262 words with a median length of 83 words. PRIVACYQA <cit.> comprises 1,750 expert-crafted questions spanning 335 policies and 4,947 sentences. It is characterized by long texts, with an average document size of 3,237.37 words, and encompasses a diverse range of question types, including unanswerable and subjective questions.
For Vietnamese, some work on legal QA datasets has recently been published. The QA dataset created by Kien et al. (2020) <cit.> includes 5,922 questions with 117,545 related articles and is a large legal dataset for Vietnamese. Each question has an average length of 12.5 words and is associated with 1.6 relevant articles. The dataset of Pham and Le (2020) <cit.> consists of 4,547 questions and 5,165 passages; most question-answer pairs are shorter than 100 words. For the domain of university education regulations, the dataset of Phuc et al. (2023) <cit.> comprises 10,000 training and 1,600 test data points, constructed from the guidelines of the Ho Chi Minh City University of Industry; its answers are extracted from contextual passages.
Thus, there are currently few Vietnamese machine reading comprehension datasets whose answers span multiple positions and exhibit a human-like style in the domain of higher education regulations. We hope that our dataset will provide an additional Vietnamese resource for creating and developing systems in this domain.
§ DATASET
§.§ Dataset Creation
In this section, we introduce how we constructed the dataset. Our dataset creation process consists of 6 phases: context collection (Section <ref>), guidelines creation (Section <ref>), creator agreement (Section <ref>), question-answer creation (Section <ref>), data validation (Section <ref>), and data splitting (Section <ref>). These six phases are illustrated in Figure <ref>.
§.§.§ Context Collection
We collected regulatory documents on the curriculum of a university in Vietnam. The documents came in various formats (Word, PDF, or images), so we converted them to Word using smallpdf.com[<https://smallpdf.com/pdf-to-word>] or manually retyped them when a PDF contained images. After converting all documents to Word format, we converted the tables into paragraphs using a predefined format.
Following this process, we obtained 21 documents with an average length of 10.67 pages and 3,881 words per document. Because the word count of each document is too large for language models, we divided the documents into smaller paragraphs (called articles) for convenience in dataset construction and model training. As a result, we obtained 294 articles (referred to as contexts below) with an average length of 234.49 words.
§.§.§ Guidelines Creation
We relied on the guidelines from two datasets, UIT-ViQuAD <cit.> and ViRe4MRC <cit.>. These guidelines describe and provide detailed examples to help creators understand how to create questions and answers for the given problem consistently. The guidelines clearly outline different question types including "How", "What", "Which", "Where", "Why", "When", "Who", Yes/No, and other types such as "How long", "How many". Definitions and examples of each type of question are presented in the Table <ref>.
The guidelines cover various strategies for asking questions, such as asking questions from general to specific, asking questions in the order of the context before posing questions whose answers appear at multiple places, and posing "Wh" questions before "Yes/No" questions.
In this paper, we divide the answers into two categories: "extractive answers" and "abstractive answers". Extractive answers have two types: single-span and multi-span. An extractive answer must contain complete information and be as concise as possible while appearing verbatim in the context (article). In the case of multi-span answers, the spans must be semantically equivalent and must not be concatenated from different parts of the context to form a complete sentence. In Table <ref>, the correct extractive answers are "học kỳ chính" ("regular semester") and "học kỳ hè" ("summer semester") because these two phrases are semantically equivalent. The answer "Trường có" ("The University has"), "học kỳ chính" ("regular semester"), "và" ("and"), "học kỳ hè" ("summer semester") is not acceptable because it attempts to form a complete sentence, so the extracted spans are not semantically equivalent. Abstractive answers are rewritten from the question and the extractive answer to resemble how a human would answer, with additional words that smooth out the extractive answer without changing its meaning or adding new information. We encourage creators to be creative with their writing style for abstractive answers. In the example in Table <ref>, the abstractive answer could be "Trường có các loại học kỳ: học kỳ chính và học kỳ hè" ("The university has the following semester types: regular semesters and summer semesters."). The abstractive answer must not include a count, as in "Trường có hai loại học kỳ: học kỳ chính và học kỳ hè" ("The University has two types of semesters: regular semesters and summer semesters."), because this adds information not present in the question or the extractive answer.
For reason types, our guidelines provide definitions and examples for reason types such as "Word-matching", "Paraphrasing", "Math", "Coreference", "Causal relation", and "Logic" based on the paper by Sugawara et al. (2018) <cit.>. These types of reasoning are defined as follows:
* Word matching: This involves exact word matching between words in the question and words in the context, or the answer connected to the question matches a sentence in the context.
* Paraphrasing: Questions rephrase the meaning of a context by altering vocabulary and grammar or using different knowledge to formulate the question.
* Math: Questions involving mathematics, where the answer requires applying mathematical operations or comparisons to solve the question.
* Coreference: This reasoning type involves answers that are entities. To identify these entities, one must refer to words or phrases in one or more different sentences that represent the entity being sought.
* Causal relation: The answer may explain the cause leading to the result mentioned in the question, or the question might inquire about the cause leading to the result mentioned in the answer.
* Logic: Utilizing knowledge from the context and the question to infer the answer, commonly seen in Yes/No questions.
We encourage creators to focus on questions involving paraphrasing, math, coreference, causal relation, and logic, and discourage word-matching questions: word matching is the easiest reasoning type, so a model trained on a sufficient amount of data can achieve good results on it regardless.
§.§.§ Creator Agreement
We have 7 creators, all university students from the same institution. The creators underwent training on the guidelines and performed multiple rounds of checks. The question-answer pairs must adhere to the guidelines, correct spelling, and proper sentence structure, ensuring diverse usage of reason types and question types. In each evaluation round of 100 context-question pairs, creators independently formulate answers, then cross-check each other, provide feedback, and agree on answer writing in regular meetings. We evaluated inter-creator similarity using the F1-score and BERTScore <cit.> metrics. The average results of the 7 creators after three rounds of evaluation with 300 questions are presented in Table <ref>.
§.§.§ Question-answer Creation
The dataset consists of 294 articles divided into two parts: Part 1 includes the first 146 articles, labeled by 4 creators, while Part 2 comprises the remaining 148 articles, labeled by the remaining 3 creators. This division of creators and articles ensures diversity throughout the question-answer creation process: it allows us to extract information from the articles across different aspects while preventing the duplicate question-answer pairs that could occur if all 7 creators labeled all 294 contexts.
Each creator is required to generate at least 300 question-answer pairs per week. The guidelines are strictly enforced to ensure consistency across the dataset. We encourage creators to pose questions that involve challenging forms of inference, such as paraphrasing, inference from multiple sentences, and inference from a single sentence.
§.§.§ Data Validation
After each week of data creation, the creators perform self-checks and cross-checks similar to the training phase in Section <ref>. During the self-check, each creator reviews the question-answer pairs from the previous week and corrects any errors found. During the cross-check, each creator examines the work of others to ensure adherence to the guidelines and to identify errors in the data created by others. Throughout the cross-check process, we review data from all creators to ensure no errors remain.
Upon completion of the cross-check process, we hold discussions to address any issues encountered by the creators, propose solutions, and reach a consensus among all creators regarding these errors. In addition, we update the guidelines weekly to address errors or exceptions.
In addition to the weekly evaluation and error correction processes, we conduct a final review and error correction after completing the dataset in the last week to ensure consistency once again. Following this process, the dataset can be used for training and testing models.
§.§.§ Data Splitting
After validating the data, we partitioned the dataset into three subsets: training, development (validation), and testing, with an 8:1:1 ratio. The balanced allocation between the development and testing subsets is intended to ensure a fair and precise evaluation of the model.
§.§ Dataset Analysis
§.§.§ Overall Statistics
In this section, we conducted an overview analysis of the dataset regarding aspects such as the number of articles and the length of texts within the dataset. The ViRHE4QA dataset comprises 9,758 question-answer pairs from 294 articles within the domain of university training regulations. We conducted statistical analysis on the dataset regarding aspects such as the number of documents, number of articles, number of question-answer pairs, average word count[We count words based on whitespace segmentation.] in documents, articles, questions, extractive length, and abstractive length of the ViRHE4QA dataset, comparing these with the UIT-ViQuAD 1.0 dataset as shown in Table <ref>.
Because ViRHE4QA is a closed-domain dataset, its number of articles and question-answer pairs is lower than that of UIT-ViQuAD. However, the average article length in ViRHE4QA is longer than in UIT-ViQuAD, and the average lengths of the questions and the extractive answers are also greater. This poses a challenge for language models to locate and extract information accurately within longer contexts.
§.§.§ Length-based Analysis
To understand more about our dataset and domain, we computed the number of question-answer pairs grouped by ranges of article length (Table <ref>), question length (Table <ref>), and answer length (Table <ref>). Articles with lengths from 101 to 256 words account for the largest proportion, with 3,422 question-answer pairs. Articles shorter than 100 words have the smallest number of pairs, while articles longer than 512 words rank second with 2,306 question-answer pairs. This poses a challenge in our dataset, as most current language models accept a maximum input of 512 tokens.
Regarding question length, most are between 8 and 14 words, with a significant number also ranging from 15-21 words, showing relatively little difference compared to the 8-14 word range. Regarding the length of the answer, the highest proportion of extractive answers was less than 21 words, considerably more than other lengths. Meanwhile, abstractive answers predominantly fell within the 21-40 word range. This can be understood because of our guidelines, where extractive answers are expected to be the shortest answers, and abstractive answers represent a combination of the question and an extractive answer. This analysis shows that our dataset presents significant challenges regarding text length for current language models.
§.§.§ Type-based Analysis
In this section, we conducted an analysis of the question types and the answer types in the test set (976 samples). To ensure accuracy, we manually classified the questions following the guidelines in section <ref>, which include 9 question types: What, Who, When, Where, Which, Why, How, Yes/No, and Others; and 6 reason types: Word-matching, Paraphrasing, Math, Coreference, Causal relation, and Logic.
Figure <ref> shows that the "What" type of question had the highest proportion at 40.78%, followed by the "Yes/No" type at 18.14%. Questions categorized as "When", "Where", "Which", and "Why" accounted for a very small proportion (together less than 10%). Compared with the UIT-ViQuAD and UIT-ViNewsQA datasets, our dataset exhibits similar characteristics, with the "What" type of question being predominant (40.78% compared to 49.97% in UIT-ViQuAD and 54.35% in UIT-ViNewsQA).
For reasoning types, according to Figure <ref>, Paraphrasing has the highest proportion at 54.46%, followed by Logic at 20.46% and Word-matching at 19.60%. Math, Causal relation, and Coreference types together account for a relatively low 5.48%. We asked creators to limit the use of word-matching question-answer formats to make the dataset more diverse and challenging. Logical reasoning is more prevalent because this type of reasoning is closely related to yes/no questions.
§ OUR PROPOSED METHOD
In this section, we will present the question-answering system for abstract answers that we propose. This system consists of three modules: Document Retriever, Machine Reader, and Answer Generator. We named this system R2GQA, with an overall structure depicted in Figure <ref>.
§.§ Document Retriever
The Retriever module uses the question to retrieve contexts that contain answers or relevant information. These contexts are then fed into the Machine Reader module to extract answers. Additionally, the retrieval score of each context is combined with the answer scores produced by the Reader module to select the most accurate answer for the input question.
§.§.§ Lexical Retrieval
Retrieval methods based on lexical similarity employ the degree of overlap between a question and a document to determine relevance. BM25 and TF-IDF are two popular examples of this approach. However, these methods often fail when dealing with queries and documents that exhibit intricate semantic structures due to the limited extent of lexical overlap. This limitation arises from the inability of vectors to capture the true meaning of words.
TF-IDF: TF-IDF, which stands for "Term Frequency-Inverse Document Frequency", is a widely used technique in natural language processing (NLP) for preprocessing text data. This statistical method assesses the significance of a term within a document or dataset. TF-IDF is calculated from two factors: tf(w, c) and idf(w, C).
TF-IDF(w,c,C) = tf(w, c) · idf(w, C)
* TF (term frequency) is the frequency of occurrence of a word in a document. The TF value of a word w in context c is calculated according to the following formula:
tf(w,c) = n(w,c)/n(c)
n(w, c) denotes the number of occurrences of the term w in the context c.
n(c) denotes the total number of occurrences of all terms in context c.
* Inverse Document Frequency (IDF) is the inverse frequency of a term within a dataset. In the document collection, each term has a unique IDF value computed by the formula.
idf(w, C) = log ( | C | / | { c ∈ C : w ∈ c } | )
| C | denotes the total number of documents in the dataset C, and | { c ∈ C : w ∈ c } | is the number of documents c that contain the term w. If the term does not appear in any context within the dataset C, the denominator would be 0, leading to an invalid division; therefore, it is commonly replaced with 1 + | { c ∈ C : w ∈ c } |.
TF-IDF transforms each document in a dataset into a vector representation, often referred to as document embedding. By combining the TF-IDF scores of each term in a document, a vector is formed that places the document within a high-dimensional space. This vector can be used as input for various machine learning models or for computing similarities between documents.
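The following is a minimal sketch of TF-IDF-based lexical retrieval using scikit-learn; the example contexts, question, and variable names are illustrative assumptions, not the actual ViRHE4QA pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

contexts = [
    "Truong co hai loai hoc ky: hoc ky chinh va hoc ky he.",
    "Sinh vien phai dang ky mon hoc truoc khi hoc ky bat dau.",
]
question = "Truong co nhung loai hoc ky nao?"

vectorizer = TfidfVectorizer()
context_vectors = vectorizer.fit_transform(contexts)  # TF-IDF document embeddings
question_vector = vectorizer.transform([question])    # embed the query in the same space

# Rank contexts by cosine similarity to the question vector.
scores = cosine_similarity(question_vector, context_vectors)[0]
top_idx = scores.argsort()[::-1][:1]
print(top_idx, scores[top_idx])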
BM25: BM25 is a widely used ranking function in information retrieval to compute and rank the similarity between two texts. BM25 is a simple method commonly employed in question-answer tasks to search the context relevant to the input question. Similarly to TF-IDF, BM25 computes a score for each context in a dataset based on the frequency of the question terms in the context. BM25 also considers document length and term frequency saturation.
BM25(q, c) = ∑_i=1^|q| IDF(q_i) · ( f(q_i, c) · (k + 1) ) / ( f(q_i, c) + k · (1 - b + b · | c | / C_avg) )
Where:
* q represents the question.
* c signifies a context within the dataset.
* |q| indicates the question length.
* IDF(q_i) stands for the inverse document frequency of the i-th question term q_i.
* f(q_i, c) is the frequency of the i-th question term q_i within the context c.
* k and b are tunable parameters that adjust term-frequency saturation and context-length normalization.
* |c| denotes the length of context c.
* C_avg refers to the average context length within the dataset.
The BM25 formula diverges from TF-IDF in several significant aspects. Firstly, it employs a non-linear approach to calculate term frequency weights, which leads to an exponential increase in weighting with higher term frequencies. Secondly, it normalizes context lengths based on specific terms, decreasing the weighting of frequently occurring terms in extensive contexts. Additionally, the parameters k and b are tunable to adjust the emphasis on term frequency and context length normalization in the calculation.
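Below is a self-contained sketch of BM25 scoring following the formula above; it uses the common Okapi IDF smoothing, and the parameter defaults (k = 1.2, b = 0.75) are conventional assumptions rather than our tuned values.

import math
from collections import Counter

def bm25_scores(question_tokens, contexts_tokens, k=1.2, b=0.75):
    """Score each tokenized context against a tokenized question with BM25."""
    n_docs = len(contexts_tokens)
    avg_len = sum(len(c) for c in contexts_tokens) / n_docs
    df = Counter()                      # document frequency for IDF
    for c in contexts_tokens:
        df.update(set(c))
    scores = []
    for c in contexts_tokens:
        tf = Counter(c)
        score = 0.0
        for term in question_tokens:
            if term not in tf:
                continue
            # Okapi-smoothed IDF; kept non-negative by the +1 inside the log.
            idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] * (k + 1) / (tf[term] + k * (1 - b + b * len(c) / avg_len))
            score += idf * norm
        scores.append(score)
    return scores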
§.§.§ Contextualized-based Retrieval
Bi-Encoder: Reimers and Gurevych <cit.> proposed the Bi-Encoder in 2019. Bi-Encoder are utilized across various tasks such as NLI, STS, Information Retrieval, and Question Answering systems. For the question-answer system task, the contexts in the dataset are encoded independently into vectors. The input question is then encoded and embedded in the vector space of the contexts to compute similarity scores with each context. Based on these scores, relevant contexts related to the question can be determined.
One training method for Bi-Encoders uses the MarginMSE loss function, introduced by Hofstätter et al. (2020) <cit.>. As with MultipleNegativesRankingLoss, training with MarginMSE requires triplets (question, context 1, context 2). However, unlike MultipleNegativesRankingLoss, context 1 and context 2 need not be strictly positive/negative; both can be relevant or irrelevant to a given question.
For training the bi-encoder with MarginMSE, the following procedure is undertaken: First, scores are computed for each pair (question, context 1) and (question, context 2). The distance of score (ScoreDistance) between the two pairs serves as the label for the triplet (question, context 1, context 2). The ScoreDistance is calculated using the formula:
ScoreDistance = Score_(question, context 1) - Score_(question, context 2)
During training, the question, context 1, and context 2 are encoded into the vector space, and the Bi-Encoder's scores for (question, context 1) and (question, context 2) are computed. The model's score distance (BDistance) is then obtained by subtracting the score of (question, context 2) from the score of (question, context 1). The training objective is to minimize the error between ScoreDistance and BDistance.
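The following is a minimal sketch of this fine-tuning setup with the sentence-transformers library; the checkpoint name and the teacher score used as the label are illustrative assumptions, not the exact configuration of our experiments.

from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("keepitreal/vietnamese-sbert")  # one candidate pre-trained model

# label = Score(question, context 1) - Score(question, context 2), e.g. from BM25.
train_examples = [
    InputExample(texts=["question ...", "context 1 ...", "context 2 ..."], label=3.2),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=24)
train_loss = losses.MarginMSELoss(model)   # minimizes (ScoreDistance - BDistance)^2

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=10)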
§.§.§ Lexical-Contextual Retrieval
Weight Ensemble: We combine the scores that each context receives from the lexical method and the Bi-Encoder using a weight α for each top_k. The scores from the Bi-Encoder model are normalized to the range [0, 1]. After combining, we extract the top_k contexts with the highest scores. The combination formula is as follows:
Score_(question, context) = ScoreBM25_(question, context) · α + (1 - α) · ScoreBE_(question, context)
Multiplication Ensemble: We calculate the product of the scores that each context receives from the lexical method and the Bi-Encoder. As with the weight ensemble, the scores from the Bi-Encoder model are normalized to the range [0, 1]. Finally, we extract the top_k contexts with the highest scores.
Score_(question, context) = ScoreBM25_(question, context)· ScoreBE_(question, context)
We retrieve contexts using a combined method through the following main steps: First, we score the contexts in the database using the BM25 method. Second, we retrieve the scores of the contexts using the Bi-Encoder method and normalize them to the range [0,1]. Third, we obtain the combined scores of the contexts using the combined method. Finally, we extract the top_k highest scores and their corresponding contexts. Algorithm <ref> details our combined querying process.
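The sketch below mirrors the combined querying steps above; the function and variable names are illustrative, and the score arrays are assumed to be precomputed per context by BM25 and the Bi-Encoder.

import numpy as np

def combined_retrieval(bm25_scores, bi_encoder_scores, alpha=0.1, top_k=10):
    bm25 = np.asarray(bm25_scores, dtype=float)
    be = np.asarray(bi_encoder_scores, dtype=float)
    # Normalize Bi-Encoder scores to [0, 1] as described above.
    be = (be - be.min()) / (be.max() - be.min() + 1e-12)
    combined = alpha * bm25 + (1 - alpha) * be   # weight ensemble
    # combined = bm25 * be                       # multiplication ensemble variant
    top_idx = np.argsort(combined)[::-1][:top_k]
    return top_idx, combined[top_idx]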
§.§ Machine Reader
In this study, we implement the Reader module based on a sequence tagging approach with the BIO format (B - beginning, I - inside, O - outside). In this approach, each token in the input is classified as B, I, or O. If a token is labeled B or I, it is part of the answer; otherwise, it does not appear in the answer. This approach is commonly used for extracting answers in extractive reading comprehension tasks <cit.>. Figure <ref> illustrates this approach in detail.
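A minimal sketch of decoding BIO tags back into (possibly multi-span) answers is shown below; the tokens and tags are illustrative model outputs, not real predictions.

def decode_bio(tokens, tags):
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                 # a new answer span starts
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:   # continue the current span
            current.append(token)
        else:                          # "O" (or a stray "I") closes any open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = ["Truong", "co", "hoc", "ky", "chinh", "va", "hoc", "ky", "he"]
tags   = ["O", "O", "B", "I", "I", "O", "B", "I", "I"]
print(decode_bio(tokens, tags))   # ['hoc ky chinh', 'hoc ky he']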
There are various methods for training models with the BIO approach. In recent years, transfer learning has proven effective for NLP tasks due to pre-training on large datasets. For machine reading comprehension, several state-of-the-art (SOTA) models have been trained on multilingual datasets, such as multilingual BERT (mBERT) <cit.> and XLM-RoBERTa <cit.>. There are also models designed specifically for Vietnamese, such as CafeBERT <cit.>, ViBERT <cit.>, and vELECTRA <cit.>. Therefore, we utilize these models to implement the Reader module.
* XLM-RoBERTa <cit.> is a pre-trained multilingual language model. It was trained on the CommonCrawl dataset, which includes text data from over 100 languages (including more than 137GB of Vietnamese text data). XLM-RoBERTa comes in two versions: large (with 24 layers) and base (with 12 layers).
* CafeBERT <cit.> is a language model built on top of XLM-RoBERTa. This model was trained on approximately 18GB of Vietnamese text data. CafeBERT outperforms XLM-RoBERTa on the VLUE benchmark <cit.> (including the reading comprehension task).
* vELECTRA <cit.> is a model trained on a massive dataset of Vietnamese text data, reaching 58.4GB in size. The authors used BERT as the foundation for the generator module and ELECTRA for the discriminatory module.
* ViBERT <cit.> is a model trained on 10GB of Vietnamese text data. Its architecture is based on the BERT model. In addition to using similar layers to BERT, the authors added two layers before the final linear layer: a bidirectional RNN layer and an Attention layer. This results in ViBERT having a total of 5 layers.
§.§ Answer Generator
In the Retriever-Reader-Generator system, the Generator module operates at the final stage of the answer generation process. Its main function is to merge the information from the question and the extractive answer. The module uses a sequence-to-sequence architecture, often seen in tasks such as machine translation or summarization, as shown in Figure <ref>. This helps generate a complete, human-like answer.
Mathematically, the function f representing the Generator module is expressed as follows:
A_abstractive = f ( Q, A_extractive)
Here, the inputs (Q, A_extractive) represent:
* Q: the question.
* A_extractive: the extractive answer taken from the Reader module.
The output A_abstractive is the generated answer: refined, coherent, and synthesizing information from the question and the extractive answer in a more understandable form.
Vietnamese language generator models are evolving, employing transfer learning methods to enhance performance. In this paper, we use state-of-the-art (SOTA) generator models for Vietnamese, including multilingual models such as mBART-50 <cit.>, mT5 <cit.>; and monolingual models such as BARTpho <cit.> and ViT5 <cit.>.
* mBART-50 <cit.>: mBART-50 is an extension of BART (Bidirectional and Auto-Regressive Transformers), supporting multiple languages, including 137.3GB of Vietnamese data. mBART-50 is a denoising autoencoder trained with masked language modeling and permutation language modeling objectives. It has shown strong performance in various language generation tasks, such as translation and summarization.
* mT5 <cit.>: mT5 is an adaptation of T5 model for multilingual text generation tasks. The key to the innovation of mT5 lies in its ability to perform diverse Natural Language Processing (NLP) tasks within a unified framework, including text generation, translation, summarization, question answering, and more. This unified architecture simplifies NLP application deployment across languages, making it valuable for global communication and multilingual content generation. mT5 is trained on the mC4 dataset, including Vietnamese with 116B tokens, 79M pages, and constituting 1.86% of the training data for the mT5 model.
* BARTpho <cit.>: BARTpho utilizes a "large" architecture and pre-training scheme similar to sequence-to-sequence denoising autoencoder of BART. It leverages a Vietnamese dataset of 20GB from PhoBERT, showing improved performance over mBART in Vietnamese text summarization tasks. BARTpho has two versions: BARTpho_word and BARTpho_syllable, with the word version performing better.
* ViT5 <cit.>: ViT5 is a monolingual model developed for Vietnamese based on the T5 structure. It is built on 138GB of Vietnamese data from the CC100 dataset and is trained for Vietnamese abstractive summarization and Named Entity Recognition tasks. ViT5 significantly improves over current SOTA models in Vietnamese text summarization and competitive results in NER tasks.
§ EXPERIMENTS AND RESULTS
§.§ Metrics
§.§.§ P@k
To evaluate the performance of the retrieval methods, we use the P@k measure, which is commonly used in information retrieval and has been utilized by works such as XLMRQA <cit.>, SPBERTQA <cit.>, and LegalCQA <cit.>. In formula <ref>, P@k is the proportion of questions for which the relevant context appears among the contexts returned by the retrieval module, where C_i_pos is the relevant context for question q_i, C_k(q_i) is the set of top-k contexts returned for q_i, and n is the number of questions.
P@k = 1/n ∑_i=1^n 1[ C_i_pos ∈ C_k(q_i) ]
where the indicator 1[·] equals 1 if C_i_pos ∈ C_k(q_i) and 0 otherwise.
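A direct implementation of this formula is sketched below; the input names (gold context IDs and the per-question lists of retrieved IDs) are illustrative.

def precision_at_k(gold_context_ids, retrieved_ids_per_question):
    """P@k: fraction of questions whose gold context is in the retrieved top-k."""
    hits = sum(
        1 for gold, retrieved in zip(gold_context_ids, retrieved_ids_per_question)
        if gold in retrieved
    )
    return hits / len(gold_context_ids)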
§.§.§ F1
The F1-score is a widely used metric in natural language processing and machine reading comprehension. It evaluates the accuracy of predicted answers by comparing individual words with those in the gold answers, measuring the word overlap between the predicted and ground-truth answers.
Precision = the number of overlap words/the total number of tokens in the predicted answer
Recall = the number of overlap words/the total number of tokens in the gold answer
F1-Score = 2 ·Precision·Recall/Precision + Recall
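The following sketch computes this token-overlap F1 between a predicted and a gold answer; whitespace tokenization is an assumption made for illustration.

from collections import Counter

def token_f1(prediction, gold):
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    # Multiset intersection counts each shared token at most min(freq) times.
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)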
§.§.§ BLEU
BLEU (Bilingual Evaluation Understudy) <cit.> is a scoring method to measure the similarity between two texts in machine translation. BLEU compares contiguous word sequences in the machine-generated text with those in the reference text, counting matching n-grams with weighted precision. These matches are position independent. BLEU is described by the following formula:
BLEU_Score = BP×exp( ∑_i=1^N (w_i ·log(p_i)) )
Where:
* BP (Brevity Penalty) is a brevity penalty factor to account for shorter translations compared to the reference translations.
* exp denotes the exponential function.
* ∑_i=1^N (w_i ·log(p_i)) represents the weighted sum of the logarithm of precisions p_i, where w_i is the weight for the n-gram precision of order i, and N is the maximum n-gram order considered in the calculation.
§.§.§ ROUGE
In addition to comparing model outputs directly, we assess their agreement by measuring content overlap. To do this, we leverage the ROUGE framework (Recall-Oriented Understudy for Gisting Evaluation) <cit.>. ROUGE metrics are popular tools for evaluating automatic text summarization and machine translation; they analyze both the structure and vocabulary of the generated text against a reference answer. This study utilizes several ROUGE metrics, including:
* ROUGE-N: This metric focuses on counting matching sequences of words (n-grams) between the answer of the system and the ideal answer. Higher ROUGE-N scores indicate a greater degree of overlap in wording.
* ROUGE-L: This metric prioritizes finding the longest string of words that appears in the same order in both the predicted answer and the gold answer. It emphasizes the importance of word order compared to ROUGE-N.
§.§.§ BERTScore
BERTScore <cit.> is a metric used to evaluate the performance of text generation models, including machine translation and text summarization. This metric leverages the contextual understanding of language models to encode predicted answers and gold answers into embedding vectors, and then computes the cosine similarity between these embeddings to score the quality of the generated text. The higher the score, the greater the similarity, indicating better quality of the model's answers.
BERTScore focuses on assessing semantic similarity rather than just lexical similarity like traditional metrics. This helps to evaluate the overall quality of text generation models more comprehensively. Additionally, BERTScore is available for multiple languages, allowing cross-lingual evaluation of text generation models.
§.§ Experimental Design
In this section, we provide detailed configurations of the three modules in R2GQA system: Document Retriever, Machine Reader, and Answer Generator. We conducted all experiments on the RTX 3090 GPU with 24GB VRAM from VastAI[<https://vast.ai/>].
§.§.§ Document Retriever
The purpose of the Document Retriever module is to search for contexts that may contain answers to the questions. We assign IDs to the contexts, which are used to map to the IDs of the contexts with the highest retrieval scores returned by the retrieval methods. For the Retriever module, we conduct experiments with three methods: lexical retrieval, contextual retrieval, and lexical-contextual retrieval, with top_k = [1, 5, 10, 15, 20, 25, 30].
Lexical retrieval: We experiment with TF-IDF and BM25 methods. To enhance performance, we apply word segmentation using the Pyvi library [<https://pypi.org/project/pyvi/0.0.7.5/>] when conducting query experiments with TF-IDF and BM25.
Contextual retrieval: We employ two approaches, both using word segmentation with Pyvi. LME: we use only pre-trained models (ViEmb[<https://huggingface.co/dangvantuan/vietnamese-embedding>], ViSBERT[<https://huggingface.co/keepitreal/vietnamese-sbert>], ViSimCSE[<https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base>], ViBiEncoder[<https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder>]) from Huggingface to encode the question and context, then employ cosine similarity to compute similarity scores and re-rank contexts based on these scores. Bi-Encoder <cit.>: we fine-tune the pre-trained models from Huggingface on our data with the MarginMSE loss function, training with epochs = 10 and batch_size = 24. The scores Score(question, context 1) and Score(question, context 2) in Section <ref> are computed by BM25 instead of a Cross-Encoder.
Lexical-Contextual retrieval:
For both combination methods in Section <ref>, we experimented using an exhaustive search of α values in the range [0.1; 0.9] with a step size of 0.1. Through this process, we determined the α value with the highest performance. For the combination with TF-IDF, the highest performance was at α = 0.3, and for BM25, it was α = 0.1.
§.§.§ Machine Reader
For the Reader models, we implemented experiments with models and approaches as in Section <ref>. The models were trained with epochs = 5, batch_size = 8, learning_rate = 5e-5, max_seq_length = 512. The optimizer used was AdamW. The evaluation metrics used were F1, BLEU1, and BERTScore.
Our data contain many contexts longer than 512 tokens, while the maximum input length of the models we experimented with is 512 tokens. Therefore, we split each context longer than 512 tokens into multiple input features. To minimize information loss and preserve the semantics of the input features, we use the stride hyperparameter to create overlapping segments between two consecutive input features.
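A sketch of this splitting with the HuggingFace tokenizer's stride mechanism is shown below; the stride value of 128 and the placeholder question/context are illustrative assumptions.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
question = "Truong to chuc may lan thi ly thuyet cuoi ky?"
context = "..."  # an article possibly longer than 512 tokens

encoded = tokenizer(
    question, context,
    truncation="only_second",        # truncate only the context, never the question
    max_length=512,
    stride=128,                      # overlap between consecutive windows
    return_overflowing_tokens=True,  # emit one feature per 512-token window
    return_offsets_mapping=True,     # map tokens back to character positions
)
# Each entry of encoded["input_ids"] is one input feature for the Reader.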
§.§.§ Answer Generator
We use the following generator models with specific configurations: mBART-large-50, mT5-base, ViT5-base, and BARTpho_word. We added the token </s> to the model input to separate the question and the extractive answer, in the format Question </s> Extractive answer </s>. For the BARTpho model, the input was formatted as <s> Question </s></s> Extractive answer </s>. We perform word segmentation for BARTpho_word using VnCoreNLP before training. The model parameters were set as uniformly as possible across models, with epochs = 5, learning rate = 4e-05, max_seq_length = 1024 (512 for mT5 due to model limitations), batch_size = 2, and the AdamW optimizer. The metrics used in this section include BLEU1, BLEU4, ROUGE-L, and BERTScore.
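The following is a minimal sketch of the Generator input format "Question </s> Extractive answer </s>" with mBART-50; the base checkpoint and decoding settings are illustrative assumptions, not our fine-tuned model.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50")
tokenizer.src_lang = "vi_VN"                   # Vietnamese source text

question = "Trường có những loại học kỳ nào?"
extractive = "học kỳ chính; học kỳ hè"
source = f"{question} </s> {extractive} </s>"  # input format used for training

inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
output_ids = model.generate(
    **inputs,
    max_length=128,
    forced_bos_token_id=tokenizer.lang_code_to_id["vi_VN"],  # decode in Vietnamese
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))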
§.§ Experiment Results
This section initially assesses the performance of our Document Retriever and Machine Reader modules independently. Subsequently, it details experiments involving their integration into the R2GQA system, which is applied to close-domain question answering concerning legal regulations in higher education.
§.§.§ Document Retriever
Based on Table <ref>, the lexical method combined with the contextual method produces the highest results for all top_k values. The LME method consistently produces the lowest results, as it has not been trained to understand context within our data domain. Comparing the two ensemble methods, the weighted combination of BM25 and the Bi-Encoder gives the highest results for 5 of the 7 top_k values tested, and thus has the highest stability. Therefore, we use this method in our end-to-end system.
§.§.§ Machine Reader
Table <ref> shows that the XLM-RoBERTa-Large model achieves the best results on most metrics and answer types. The second-best-performing model is CafeBERT. Across the entire ViRHE4QA test dataset, XLM-RoBERTa-Large outperforms CafeBERT by 0.04% on the F1 metric and 0.5% on BLEU1, while CafeBERT surpasses XLM-R-Large by 0.69% on BERTScore. However, overall, the XLM-R-Large model demonstrates more consistent results, outperforming CafeBERT on 2 out of 3 metrics. The ViBERT model performs the worst in all metrics and answer types, showing a significant gap compared to the other models.
Questions with single-span answers achieve significantly better results than questions with multi-span answers across all models and metrics. For the XLM-R-Large model, performance on single-span answers exceeds that on multi-span answers by 13.15%, 17.38%, and 0.06% on F1, BLEU1, and BERTScore, respectively. This large gap indicates that finding answers that appear in multiple locations in the context is much more challenging than finding answers located at a single position.
§.§.§ Answer Generator
According to the results in Table <ref>, the BARTpho model achieved the highest score on the BLEU1 metric, but mBART outperformed the remaining three metrics, including BLEU4, BERTScore, and ROUGE-L. Compared to mBART, the BARTpho and ViT5 models exhibited slightly lower performance, ranging from 1% to 3%. However, the mT5 model performed significantly worse than the other three models. This could be attributed to the mT5 model having an input length limit of only 512 tokens, while the other three models accept input lengths of up to 1024 tokens. This limitation notably affects the performance, especially with datasets containing contexts longer than 512 tokens, such as ViRHE4QA.
§.§.§ End-to-End System
Based on the results in Section <ref>, Section <ref>, and Section <ref>, we used a weighted combination of Bi-Encoder and BM25 for the Retriever module, the XLM-R-Large model for the Reader module, and the mBART model for the Generator module to evaluate the performance of the R2GQA system.
Table <ref> demonstrates that as the number of retrieved contexts (top_k) increases, the performance of the system improves. However, the gains below top_k = 10 are larger than those beyond it. The system's performance at top_k = 10 surpasses that at top_k = 5 on BLEU1, BLEU4, ROUGE-L, and BERTScore by 0.78%, 0.65%, 0.81%, and 1.01%, respectively, while the performance at top_k = 30 exceeds that at top_k = 10 by only 0.56%, 0.43%, 0.62%, and 0.71%. Consequently, performance nearly saturates at top_k = 10.
§ DISCUSSION
§.§ Impact of Context Length In The Reader Module
To validate the challenge posed by large context length in the dataset, we conducted an experiment to assess the impact of context length on the performance of models in the Reader module. The specific length ranges used for this experiment are detailed in Table <ref>.
Figure <ref> shows that longer context passages result in lower performance than shorter ones. All models achieve their highest results in the 0-100 word interval, while the other length intervals yield significantly lower results. In particular, almost all models perform worst on contexts longer than 512 words: this interval exceeds the input length limits of all models, so they lack part of the information and context needed to answer.
§.§ Impact of Training Sample Number
To assess the impact of the number of samples in the training set, we trained the model with different quantities: 2000, 4000, 6000, and 7806 questions.
§.§.§ Machine Reader
From Figure <ref>, we can observe that increasing the number of training samples significantly improves the performance of all models. Models such as ViBERT and XLM-R-Base show the largest improvements, whereas CafeBERT and XLM-R-Large improve less because they already perform well with few training samples. The amount of training data thus has a significant impact on model performance: more training data brings more contexts and vocabulary, which helps the models learn more real-world scenarios.
§.§.§ Answer Generator
For the Generator module, we focus on the BERTScore metric for analysis. According to Figure <ref>, as the number of training samples increases, the performance of all models improves slightly. For the mT5 model, there is a significant improvement from 2000 to 4000 samples, after which the improvement slows: at 2000 data points the mT5 model has not yet learned much, but by 4000 samples it has accumulated enough information to produce better results. The BARTpho model achieves its highest score with 4000 training samples; providing more data appears to introduce noise, leading to no further improvement. In contrast to the Reader module, the Generator module shows little improvement as the training set grows, and its performance curves remain relatively flat, indicating that the transformer models in this module perform very well even with less data.
§.§ Impact of Context in Answer Generator Module
We experimented with the influence of context on the Generator module using different types of input: Question and Context (Q+C); Question, Extractive answer, and Context (Q+E+C); Question and Extractive answer (Q+E). We added the token </s> to the model input as described in Section <ref>.
From Table <ref>, Table <ref>, Table <ref>, and Figure <ref>, the performance of the Generator module is highest when the input is Q+E, followed by Q+E+C, and lowest for Q+C. The differences between Q+E and Q+E+C are not significant, but both are markedly higher than the scores for Q+C. This analysis shows that incorporating the context is not particularly effective and increases the input length of the Generator module, leading to less accurate results.
For the Q+E method, the input is more concise, focusing on the main point (extractive answer) to provide the final answer for the system. In addition, using a shorter input reduces costs and resources when operating the system.
§.§ Performance of QA Systems
In this section, we compare the performance of our system with another QA system, Naive RAG. The main metrics used for the evaluation include BLEU1, BLEU4, ROUGE1, ROUGE-L, and BERTScore. The experiments were conducted on an NVIDIA RTX 3090 GPU with 24GB VRAM from VastAI[<https://vast.ai/>].
System Configurations:
Naive RAG: RAG is a question-answering system proposed by <cit.>. In the past two years, RAG has been widely used as large language models (LLMs) have developed. Therefore, we compare our system with this method. We use the text-embedding-3-large model to create the vector database and encode the questions. The vector database we use is Chromadb, designed by Langchain, and the large language model used to generate the final answer is GPT-3.5-turbo-instruct. For the prompt method, we use one-shot prompting, meaning that the prompt includes one sample example and the question to be answered. The model can refer to the sample example to answer the question more accurately. We use the text-embedding-3-large model and the GPT-3.5-turbo-instruct model from Microsoft Azure AI[<https://azure.microsoft.com/en-us/solutions/ai>]. The system is illustrated in Figure <ref>.
Comparison Results:
Table <ref> shows that our system achieves higher performance than Naive RAG across most metrics, even with only top_k = 1. Besides response times, we also consider operational costs: the RAG system's API usage incurs $0.021 for vectorizing the database and $0.0029 per question, whereas our system incurs no per-answer operational cost. This indicates the high potential of R2GQA for real-world QA tasks.
§ ERROR ANALYSIS
To perform error analysis, we surveyed 200 question-answer pairs predicted by the R2GQA system on the test set and classified the errors into the following 5 main types:
Repetition in extractive answers: the Reader module uses a BIO-format model, which often produces extractive answers containing one or more repeated words, resulting in grammatically incorrect phrases in Vietnamese.
Context:
Điều 1. Phạm vi điều chỉnh và đối tượng áp dụng
1. Mỗi học kỳ, Trường Đại học A (Trường) tổ chức 01 lần thi lý thuyết giữa kỳ và 01 lần thi lý thuyết cuối kỳ tập trung trực tiếp hoặc trực tuyến (gọi chung là thi) cho các môn học được mở trong học kỳ đó. Thời gian tổ chức thi được qui định trên biểu đồ giảng dạy năm học. Quy định này quy định chung về việc tổ chức các đợt thi bao gồm các công tác: chuẩn bị thi, tổ chức thi, chấm thi, công bố điểm, phúc khảo, chế độ lưu trữ và xử lý vi phạm. Quy định này áp dụng đối với hệ đại học chính quy. Các chương trình đặc biệt có thể có kế hoạch thi riêng tùy theo đặc thù của chương trình.
2. Các hình thức thi bao gồm:
Tự luận (hoặc tự luận kết hợp với trắc nghiệm)
Trắc nghiệm
Vấn đáp
Đồ án
3. Việc tổ chức thi theo hình thức trực tuyến thực hiện theo quy định riêng.
(Article 1. Scope of Regulation and Applicable Subjects
1. Each semester, the University A (the University) organizes one mid-term theoretical exam and one final theoretical exam, either in-person or online (collectively referred to as exams), for the courses offered in that semester. The exam schedule is determined on the academic year teaching schedule. This regulation provides general provisions on organizing exam sessions, including exam preparation, administration, grading, result announcement, appeals, storage, and violation handling. This regulation applies to regular undergraduate programs. Special programs may have their own exam plans depending on the program's characteristics.
2. Exam formats include:
Essay (or a combination of essay and multiple-choice)
Multiple-choice
Question and answer
Project
3. Online exam organization follows separate regulations.)
Question:
Mỗi học kỳ, Trường Đại A (Trường) tổ chức mấy lần thi lý thuyết cuối kỳ cho các môn học được mở trong học kỳ đó? (How many end-of-term theoretical exams does the University A organize for subjects offered in each semester?)
Predicted extractive:
01 01 lần (01 01 times)
True extractive:
01 (01)
Incorrect information extraction: this error occurs when the Reader module extracts inaccurate information or information that does not match the context of the question.
Context:
Điều 14. Chế độ lưu trữ
Toàn bộ biên bản, hồ sơ bảo vệ KLTN được Thư ký Hội đồng bàn giao cho Khoa và được các Khoa lưu trữ tối thiểu trong vòng năm năm.
Khoa tổng hợp điểm vào danh sách các SV đã đăng ký làm KLTN (kể cả các SV không hoàn thành KLTN) do P. ĐTĐH cung cấp và gởi lại cho P. ĐTĐH không quá 2 tuần sau ngày bảo vệ.
(Article 14. Storage Regime
The entire minutes, records of thesis defense are handed over by the Council Secretary to the Faculty and are archived by the Faculties for a minimum of five years.
The Faculty consolidates the scores into the list of students who have registered to do their thesis (including students who have not completed their thesis) provided by the Academic Affairs Office and returns it to the Academic Affairs Office no later than 2 weeks after the defense date.)
Question: Thời gian để Khoa tổng hợp điểm vào danh sách sinh viên tham gia KLTN và gởi cho P. ĐTĐH được tính từ khi nào? (From when is the time for the Faculty to compile grades into the list of students participating in the graduation thesis and send it to the Undergraduate Training Department calculated?)
Predicted extractive: không quá 2 tuần sau ngày bảo vệ. (not more than 2 weeks after the defense day.)
True extractive: sau ngày bảo vệ. (after the defense day.)
Predicted abstractive: Thời gian để Khoa tổng hợp điểm vào danh sách sinh viên tham gia KLTN và gởi cho P. ĐTĐH không quá 2 tuần sau ngày bảo vệ. (The time for the Faculty to compile grades into the list of students participating in the graduation thesis and send it to the Undergraduate Training Department is not more than 2 weeks after the defense day.)
True abstractive: Thời gian để Khoa tổng hợp điểm vào danh sách sinh viên tham gia KLTN và gởi cho P. ĐTĐH được bắt đầu tính sau ngày bảo vệ. (The time for the Faculty to compile grades into the list of students participating in the graduation thesis and send it to the Undergraduate Training Department is calculated from the day after the defense.)
Over-extraction/Under-extraction of information: this error occurs when the system extracts more or less information than the question requires, confusing the user.
Context:
Điều 17. Quản lý điểm thi
1. Cán bộ chấm thi chịu trách nhiệm về tính chính xác của thông tin điểm thi được nhập và công bố cho SV trên Hệ thống quản lý điểm trong thời hạn chấm thi và nộp điểm.
2. P.ĐTĐH/VPĐB chịu trách nhiệm kiểm tra thông tin điểm thi trên Hệ thống quản lý điểm so với điểm được ghi trên bài thi, tiếp nhận và xử lý khiếu nại của SV về điểm thi và cấp bảng điểm theo yêu cầu.
3. Phòng Dữ liệu và Công nghệ Thông tin có trách nhiệm đảm bảo an toàn cho dữ liệu điểm trên Hệ thống quản lý điểm; đảm bảo chỉ cấp quyền nhập và chỉnh sửa điểm cho cán bộ chấm thi đối với thành phần điểm và lớp mà mình phụ trách chấm thi trong thời hạn chấm thi và nộp điểm. Mọi thao tác trên dữ liệu điểm phải được ghi nhận lại đầy đủ và chính xác (người nhập điểm, người chỉnh sửa điểm, thời gian và lý do chỉnh sửa).
(Article 17. Examination Score Management
1. The examiners are responsible for the accuracy of the exam score information entered and announced to students on the Grade Management System within the grading and score submission period.
2. The Academic Affairs Office/Department is responsible for verifying the exam score information on the Grade Management System against the scores recorded on the exam papers, receiving and handling student complaints about exam scores, and issuing transcripts upon request.
3. The Data and Information Technology Department is responsible for ensuring the security of the exam score data on the Grade Management System, ensuring that only examiners responsible for grading their assigned components and classes have the authority to enter and edit scores within the grading and score submission period. All operations on exam score data must be recorded accurately and completely (person entering the score, person editing the score, time, and reason for the edit).)
Question: Phòng Dữ liệu và Công nghệ Thông tin cấp quyền nhập và chỉnh sửa điểm cho ai? (To whom does the Data and Information Technology Department grant the right to enter and edit scores?)
Predicted extractive: cán bộ chấm thi trách (exam graders responsible)
True extractive: cán bộ chấm thi (exam graders)
Predicted abstractive: Phòng Dữ liệu và Công nghệ Thông tin cấp quyền nhập và chỉnh sửa điểm cho cán bộ chấm thi trách. (The Data and Information Technology Department grants the right to enter and edit scores to the responsible exam graders.)
True abstractive: Phòng Dữ liệu và Công nghệ Thông tin chỉ cấp quyền nhập và sửa điểm cho cán bộ chấm thi. (The Data and Information Technology Department grants the right to enter and edit scores only to the exam graders.)
Incorrect context extraction: the retrieval system returns a text segment that is unrelated or does not contain the necessary information to answer the question.
Question: BĐH tổ chức lấy ý kiến sinh viên về việc gì? (What does the Management Board organize to collect student opinions about?)
True context:
Điều 10. Giảng dạy các môn CTTN
CTTN phải được thực hiện trên quan điểm lấy người học làm trung tâm. Người học phải được tạo điều kiện để thể hiện vai trò chủ động trong tiến trình học tập. Người học phải đóng vai trò chủ động trong hoạt động học tập, thay vì thụ động tiếp nhận kiến thức.
Sinh viên CTTN sẽ học cùng với sinh viên các lớp chương trình chuẩn trong các môn được đào tạo chung, các môn học cốt lõi dành riêng cho sinh viên CTTN được tổ chức lớp học riêng.
Khoa quản lý chuyên môn có trách nhiệm chọn các cán bộ có kinh nghiệm để phụ trách giảng dạy. Các môn học tài năng và KLTN phải do CBGD có học vị tiến sĩ hoặc giảng viên chính, hoặc thạc sĩ tốt nghiệp ở các trường Đại học thuộc các nước tiên tiến, đúng ngành hoặc thuộc ngành gần đảm nhiệm.
Trong tuần đầu tiên của học kỳ, CBGD phải thông báo công khai cho sinh viên về đề cương giảng dạy môn học; trong đó đặc biệt chú ý các thông tin, các phần học bổ sung tăng cường; số cột điểm và tỷ lệ tính của từng cột điểm vào điểm tổng kết môn học.
CBGD phải cung cấp đầy đủ đề cương môn học, tài liệu và công bố nội dung bài giảng trước cho sinh viên trên trang web môn học.
Đầu mỗi học kỳ, đại diện đơn vị quản lý chương trình và các CVHT phải gặp gỡ đại diện sinh viên (ít nhất 3 SV/lớp – do lớp bầu chọn) tất cả các lớp CTTN để trao đổi và nhận phản hồi về tình hình giảng dạy và sinh hoạt. Cuối học kỳ, BĐH phối hợp với phòng Thanh tra - Pháp chế - Đảm bảo chất lượng tổ chức lấy ý kiến sinh viên (dùng phiếu thăm dò, qua trang web,…) về giảng dạy môn học và tổ chức cho CBGD rút kinh nghiệm về các góp ý của sinh viên.
Ngoài nội dung bắt buộc theo đề cương, các môn CTTN có thể có thêm các nội dung tăng cường và một số lượng hạn chế các buổi "seminar ngoại khóa". Lịch dạy và lịch dạy bổ sung tăng cường, dạy bù được báo cáo và kiểm tra theo quy trình chung như lớp đại học chính quy đại trà.
(Article 10. Teaching CTTN Courses
Teaching CTTN must be based on the learner-centered approach. Learners must have conditions to play an active role in the learning process. Learners must take an active role in learning activities, rather than passively receiving knowledge.
CTTN students will study alongside students of standard programs in jointly taught subjects; core subjects specifically for CTTN students will have separate class arrangements.
The specialized management department is responsible for selecting experienced personnel to teach. Talent courses and thesis projects must be taught by instructors with a doctoral degree or lecturers who have graduated from universities in advanced countries, in the relevant field or closely related fields.
In the first week of the semester, instructors must publicly announce to students the course syllabus, paying particular attention to additional information, supplementary learning sections, the number of columns for grading, and the weighting of each column in the overall grade of the course.
Instructors must provide complete course syllabi, materials, and pre-announce lecture contents to students on the course website.
At the beginning of each semester, program management representatives and class advisors must meet with student representatives (at least 3 students per class – elected by the class) from all CTTN classes to exchange and receive feedback on teaching and activities. At the end of the semester, the Faculty Board must coordinate with the Inspection and Quality Assurance Department to collect student opinions (using surveys, websites, etc.) on course teaching and organize instructors to learn from student feedback.
In addition to the mandatory content outlined in the syllabus, CTTN courses may include additional supplementary content and a limited number of extracurricular seminars. Teaching schedules and additional teaching schedules, makeup classes must be reported and monitored according to the general procedures for regular undergraduate classes.)
Predicted context:
Điều 16. Đảm bảo chất lượng
- Đơn vị chuyên môn có trách nhiệm chọn các cán bộ đạt yêu cầu theo quy định và có kinh nghiệm giảng dạy để phụ trách giảng dạy các môn học cho các lớp thuộc CT CLC.
- Trong vòng 02 tuần đầu của mỗi học kỳ, đơn vị quản lý và các cố vấn học tập gặp gỡ sinh viên tất cả các lớp CT CLC để trao đổi và nhận phản hồi về tình hình giảng dạy và sinh hoạt.
- Cuối học kỳ, đơn vị quản lý phối hợp với phòng Thanh tra - Pháp chế - Đảm bảo chất lượng tổ chức lấy ý kiến sinh viên (dùng phiếu thăm dò, qua website, …) về việc giảng dạy môn học và có trách nhiệm trao đổi với giảng viên để rút kinh nghiệm về các góp ý của SV.
- Tổ chức lấy ý kiến sinh viên tối thiểu 01 lần mỗi học kỳ về công tác quản lý, phục vụ, điều kiện cơ sở vật chất và tổ chức trả lời những ý kiến phản ánh của sinh viên.
(Article 16. Quality Assurance
- The specialized unit is responsible for selecting qualified personnel as required by regulations and with teaching experience to be in charge of teaching courses for classes under the CT CLC program.
- Within the first two weeks of each semester, the management unit and academic advisors meet with students from all CT CLC classes to exchange and receive feedback on teaching and activities.
- At the end of the semester, the management unit collaborates with the Inspection and Quality Assurance Department to collect student opinions (via surveys, websites, etc.) on course teaching and is responsible for exchanging feedback with instructors to learn from student suggestions.
- Student feedback sessions should be organized at least once per semester on management activities, services, facilities, and addressing student feedback.)
Predicted extractive: công tác quản lý, phục vụ, điều kiện cơ sở vật chất và (management work, services, facilities, and)
True extractive: giảng dạy môn học (teaching of the course)
Predicted abstractive: BĐH tổ chức lấy ý kiến sinh viên về công tác quản lý, phục vụ, điều kiện cơ sở vật chất và các vấn đề khác có liên quan đến việc dạy bù từ cán bộ giảng dạy. (The Management Board organizes to collect student opinions about management work, services, facilities, and other issues related to makeup teaching by lecturers.)
True abstractive: BĐH tổ chức lấy ý kiến sinh viên về giảng dạy môn học. (The Management Board organizes to collect student opinions about the teaching of the course.)
No information found by the Machine Reader module: the Reader module uses sequence tagging, so there are instances where the answer cannot be extracted from the retrieved context.
The R2GQA system performs well in generating user-friendly free-form answers, but it still makes several basic errors. These include repeating information, extracting redundant or missing information, retrieving incorrect context segments, and extracting information from inaccurate context segments. These issues reduce the accuracy and efficiency of the system and can confuse users.
§ CONCLUSION AND FUTURE WORK
In this paper, we introduce the ViRHE4QA dataset, a QA dataset based on academic regulations in higher education. The dataset comprises 9,758 meticulously constructed data samples, created by seven well-trained and closely monitored annotators. Furthermore, we proposed the R2GQA system, which consists of three modules: Retrieval, Reader, and Generator. This system leverages state-of-the-art language models, delivering high performance and efficiency for the QA task without relying on any third-party services. We conducted various experiments on both the dataset and the system to demonstrate their effectiveness and practical applicability.
However, the ViRHE4QA dataset and the R2GQA system face several challenges that need to be addressed in the future. These challenges include managing the length of the input data and improving the accuracy of the Retrieval and Reader systems. We also plan to focus on optimizing and fine-tuning the language models to better fit the context and linguistic characteristics of the Vietnamese language.
Moreover, expanding and enriching the dataset with a broader variety of questions and diverse contexts will enhance the versatility of the system. We aim to explore and integrate advanced techniques such as deep learning and reinforcement learning to further improve the accuracy and performance of the system. Reducing processing time while maintaining high accuracy is a crucial goal in ensuring the practical application of automated QA systems.
Finally, deploying the system in real-world environments will provide valuable feedback, helping us to understand its strengths and weaknesses and propose appropriate improvements. We hope that this work will make a significant contribution to the field of automated QA and open up new research directions in the future.
§ ACKNOWLEDGEMENT
This research was supported by The VNUHCM-University of Information Technology's Scientific Research Support Fund.
§ DECLARATIONS
Conflict of interest The authors declare that they have no conflict of interest.
§ DATA AVAILABILITY
Data will be made available on reasonable request.
|
http://arxiv.org/abs/2409.02381v1 | 20240904020131 | FlexBSO: Flexible Block Storage Offload for Datacenters | [
"Vojtech Aschenbrenner",
"John Shawger",
"Sadman Sakib"
] | cs.NI | [
"cs.NI",
"cs.OS"
] |
Department of Computer Sciences, University of Wisconsin-Madison
FlexBSO: Flexible Block Storage Offload for Datacenters
Vojtech Aschenbrenner, John Shawger, Sadman Sakib
=======================================================
§ INTRODUCTION
Efficient virtualization of CPU and memory is standardized and mature. Capabilities such as Intel VT-x <cit.> have been added by manufacturers for efficient hypervisor support.
In contrast, virtualization of a block device and its presentation to the virtual machines on the host can be done in multiple ways.
Indeed, hyperscalers develop in-house solutions to improve the performance and cost-efficiency of their datacenter storage.
Unfortunately, these storage solutions are based on specialized hardware and software which are not publicly available.
The traditional solution is to expose a virtual block device to the VM through a paravirtualized driver like <cit.>.
Such a driver provides significantly better performance than emulation of a real block device because of host OS and guest OS cooperation.
The IO requests are then fulfilled by the host OS either with a local block device such as an SSD drive or with some form of disaggregated storage over the network like NVMe-oF or iSCSI.
There are three main problems with the traditional solution.
1) Cost. IO operations consume host CPU cycles due to host OS involvement.
These cycles do no useful work from the application's point of view.
2) Inflexibility. Any change of the virtualized storage stack requires host OS and/or guest OS cooperation and cannot be done silently in production.
3) Performance. IO operations cause recurring transitions from non-root mode to root mode (VM exits) on the host CPU.
This results in an excessive IO performance penalty.
We propose FlexBSO, a hardware-assisted solution, which solves all the mentioned issues.
Our prototype is based on the publicly available Bluefield-2 SmartNIC with NVIDIA SNAP support, hence can be deployed without any obstacles.
§ DESIGN AND IMPLEMENTATION
FlexBSO uses the Bluefield-2 SmartNIC to completely replace the storage stack of the hypervisor and solves the aforementioned issues of traditional paravirtualized solutions.
It uses NVIDIA SNAP with SR-IOV to directly expose an NVMe block device to every single guest on the host (Figure <ref>).
The guest works with the block device as if it was a local NVMe device, but all IO commands are fulfilled by the Bluefield-2 card.
The host OS is completely bypassed.
SNAP can be viewed as an SPDK storage stack enhanced with a subsystem presenting virtual NVMe devices on the PCIe bus.
This gives the solution huge flexibility, because the storage logic is just well-known SPDK <cit.>.
It is trivial to modify and/or create new SPDK block devices, or to add more layers to the storage stack.
For example, if the customer wants to increase the durability of their data, it is possible to seamlessly change the RAID mode from 0 to 1.
More features, such as encryption or compression, can be enabled in similar fashion.
The flexibility can be taken to the extreme: this solution even enables changing the whole storage backend, if necessary, without the guest noticing.
Because the host OS is completely bypassed, this solution eliminates all the VM exits caused by traditional solutions, which leads to a significant performance boost and cost efficiency.
In the following sections, we describe the technologies used in more detail and share our experience with their modifications.
§.§ SNAP
The proprietary SNAP (Software-defined Network Accelerated Processing) library from NVIDIA allows DPUs such as the Bluefield-2 to emulate an NVMe storage device on the host PCIe bus. The host can use standard NVMe drivers to interact with the device. SNAP can operate in two modes:
* Fully-offloaded mode: NVMe requests are sent directly to an NVMe-oF target by the SNAP accelerator. In this mode, the ARM cores on the DPU are used solely for the control plane and do not touch the data.
* Partially-offloaded mode: NVMe requests are sent to an SPDK storage stack running on the ARM cores of the DPU. This allows for greater flexibility in handling the data, potentially using encryption, compression, or other accelerators present on the DPU.
The details of our system are shown in Figure <ref>. One device is a traditional NVMe disk on the host; the other is the NVMe device presented by SNAP. In this partially-offloaded configuration, the SNAP and SPDK components of the storage stack on the SmartNIC are provided by NVIDIA and should not be modified. The block device attached to SPDK is user-configurable and can be replaced by any SPDK block device or NVMe-oF target. We found that it is possible to compile and link SPDK block devices against the SPDK library on the SmartNIC, giving us flexibility in how the SmartNIC processes NVMe block requests.
§.§ SPDK and Block Devices
The storage performance development kit (SPDK) is a user-level library and NVMe driver commonly used to build high performance storage applications. Traditional kernel-based I/O requires expensive context switches in response to hardware interrupts. SPDK operates storage drivers in polled-mode rather than interrupt-mode, vastly decreasing operation latency at the cost of using a core for continuous polling. We implemented two SPDK block devices to explore the flexibility of the system.
§.§.§ RAID Block Device
There are two types of block devices in SPDK – virtual block devices (vbdevs) and terminal block devices (bdevs). Virtual block devices receive I/O submissions and do some computation before submitting I/O to other block devices. A classic example of a virtual block device is a RAID <cit.> controller. SPDK is distributed with a RAID vbdev which is capable of RAID0 (striping), RAID1 (mirroring), and RAID5 (distributed parity), although it has several limitations. Notably, the RAID5 controller is only capable of full-stripe writes. It does not perform the read-modify-write required for single-block writes in RAID5. We were primarily interested in exploring the flexibility of the system, so this limitation did not concern us.
We chose to implement a “safe read” in the RAID5 controller. To do so, we modified the provided RAID5 controller to perform a full-stripe read and recompute parity against the desired block to be read. If the recomputation does not match the original data, we notify the user that a parity check has failed. An example is shown in figure <ref>. The red block refers to the data requested by the user. Our RAID5 will read in the entire stripe (Blocks 8-11 including parity) and recompute the parity of block 9. Then, the red region of the block will be compared against the recomputed block to check for bit errors. To implement this, we reuse the reconstruction function already present in the RAID5 driver. When the completion callback for a read I/O is executed, we perform a reconstruction on the read block's stripe, with the read block as the reconstruct target. We also added functionality to “poison”, or flip the first bit of, a write block with 0.1% probability to help test this feature.
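To make the reconstruct-and-compare logic concrete, here is a minimal Python sketch of the safe-read idea, assuming XOR parity over equal-sized blocks; the real implementation is C code inside SPDK's RAID5 vbdev, and all names below are illustrative.

def xor_blocks(blocks):
    # XOR a list of equal-length byte blocks together.
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def safe_read(stripe, parity, idx):
    # Reconstruct block `idx` from its peers and the parity block,
    # then compare against the stored copy to detect bit errors.
    peers = [blk for j, blk in enumerate(stripe) if j != idx]
    if xor_blocks(peers + [parity]) != stripe[idx]:
        raise IOError("parity check failed for block %d" % idx)
    return stripe[idx]

data = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]
parity = xor_blocks(data)
safe_read(data, parity, 1)          # passes
data[1] = bytes([4, 5, 7])          # a "poisoned" write flips a bit
# safe_read(data, parity, 1) would now raise IOError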
§.§.§ Compression Block Device
To demonstrate a usecase of our flexible block storage device, we implemented a compression block device in the Bluefield-2 DPU. A compression block device is a virtual storage device on top of a physical storage device that can compress and decompress data transparently. This reduces the storage space required for the data.
We implemented the compression block device in SPDK; it consists of a virtual block device (bdev) stacked on a malloc block device as its base. In the Bluefield-2 DPU, we used SPDK version 23.01. The virtual block device is implemented as a block device module, which is SPDK's equivalent to a device driver in an operating system. The module provides a set of function pointers that are called to service block device I/O requests. Thus, any other application can send I/O requests to this virtual block device using SPDK's I/O submission functions.
We used the DOCA compress library to perform compression and decompression at the block device layer. The DOCA compress library provides an API to compress and decompress data using hardware acceleration. It supports both host and DPU memory regions. The Bluefield-2 device supports compression and decompression using the deflate algorithm.
The virtual bdev performs compression on write and decompression on read with the DOCA compress library. Compression is done in the virtual block device before the compressed data is written to the base block device. Decompression is done in the read callback function of the virtual bdev, which is invoked after the base block device has finished reading.
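As a behavioral sketch of this flow, the Python snippet below models the vbdev pair, substituting zlib's deflate for the DOCA hardware engine; the class and its dict-backed base device are illustrative stand-ins, not the SPDK/DOCA C API.

import zlib

class CompressionBdev:
    def __init__(self, base):
        self.base = base                        # base bdev modeled as {lba: bytes}

    def write(self, lba, data):
        self.base[lba] = zlib.compress(data)    # compress before the base write

    def read(self, lba):
        return zlib.decompress(self.base[lba])  # decompress in the read completion

dev = CompressionBdev(base={})
payload = b"text data with redundancy " * 64
dev.write(0, payload)
assert dev.read(0) == payload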
§ EXPERIMENTAL RESULTS
We performed an experimental evaluation of crucial parts of the system.
First, we validate the feasibility of the solution in terms of throughput necessary for several VMs running on one node.
This is the most crucial result of the work, because having a bottleneck between the VM and the SmartNIC would be unacceptable.
Second, we explore the flexibility of our solution using SPDK block devices.
We share results from the process of customizing the storage stack on the SmartNIC.
Our experience confirmed the flexibility of the solution.
§.§ SNAP performance
The crucial question is whether FlexBSO provides sufficient throughput to satisfy the needs of multiple VMs running on a single host.
To have a baseline, we compared FlexBSO to NVMe-oF using RDMA, which represents one of the high-performance solutions used traditionally.
Both solutions were configured with SPDK running on the SmartNIC.
FlexBSO uses SNAP to expose the NVMe block device to the VM, while the baseline uses an NVMe-oF target provided by SPDK on the SmartNIC and an NVMe-oF client in the VM.
Figure <ref> shows the throughput of both solutions. FlexBSO (SNAP) clearly dominates, with more than 3× higher throughput in the multi-threaded scenario with a throughput-oriented workload. Another interesting metric is the latency of the solution. For a latency-oriented read workload, i.e. a single thread, IO depth 1 and a block size of 4kB, we measured a read latency of 16 μs for FlexBSO and 63.7 μs for RDMA. This makes the FlexBSO latency almost 4× lower than RDMA.
§.§ Custom SPDK bdevs
Our first custom bdev, RAID5, was developed against a more recent version of SPDK than was present on the SmartNIC. When trying to compile our block device on the SmartNIC, we encountered linking errors. After further investigation, we found that the different versions of SPDK, although only about a year apart, had several interface incompatibilities. Furthermore, the RAID5 block device was significantly changed between versions. We tried to re-port our vbdev to the SPDK on the SmartNIC, but we were not able to complete this work in the time available to us. Furthermore, SPDK on the SmartNIC was compiled without the flag to enable RAID5, so we were not able to test SPDK's default RAID5 device. We were reluctant to recompile SPDK on the SmartNIC, as SNAP must also be recompiled against custom versions of SPDK, as described in NVIDIA's documentation, and we did not have access to the SNAP source code.
To understand the performance of our compression bdev, we first measured the time taken to perform (de-)compression in a single job in software using the zlib library, and in hardware using the DOCA compress library (Figure <ref>). We used text data that maintained a compression ratio of about 4. The data size is the size of the input data for compression and the size of the output data for decompression. With both the software and hardware methods, completion time increases linearly with the data size. Roughly, the hardware-accelerated jobs took two orders of magnitude less time than the software jobs on the same data size. However, we saw errors in hardware-accelerated (de-)compression for data sizes greater than 128 MB. This is likely due to a limit on the size of the DOCA buffer that can be allocated on the Bluefield-2 DPU.
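The software half of this measurement is easy to reproduce; the sketch below times zlib on compressible text-like data of increasing size. The sizes, the repeated pattern, and the single-run timing are illustrative assumptions, not our exact benchmark harness.

import time
import zlib

pattern = b"abcd efgh ijkl mnop "                     # compressible text-like data
for size_mb in (1, 4, 16, 64):
    data = pattern * (size_mb * 1024 * 1024 // len(pattern))
    t0 = time.perf_counter()
    comp = zlib.compress(data)
    t1 = time.perf_counter()
    zlib.decompress(comp)
    t2 = time.perf_counter()
    print(f"{size_mb:3d} MB: compress {t1 - t0:.3f} s, "
          f"decompress {t2 - t1:.3f} s, ratio {len(data) / len(comp):.1f}")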
Next, we measured the time required to perform read and write operations through the virtual block device with and without the compression feature (Figure <ref>). For data sizes greater than or equal to 128 KB, we used a block size of 128 KB, which is the maximum allowed size in SPDK. The whole data was first written to the block device and, after the write finished, it was read back completely. Roughly, the I/Os with compression took four orders of magnitude more time than without compression. The (de-)compression in software and hardware took similar time, and both likely experienced similar overhead. Following the SPDK design, all IOs completed asynchronously on a non-blocking path. However, this required creating a DOCA compress software context each time and using a separate workqueue for each DOCA compress job. For larger data sizes, this likely adds significant overhead. We found memory errors in DOCA when performing IO with compression for data sizes above 16 MB.
§ CONCLUSION
NVMe emulation using SNAP compares positively to existing remote-storage solutions such as NVMe-oF. We experienced nearly three times the throughput in a multi-threaded environment. We believe another benefit of SNAP is that it lets developers add flexibility to the block device abstraction, since it is software-defined. Our experience suggests that doing so is feasible; however, it requires detailed knowledge of the SPDK environment and judicious use of the computational resources on the SmartNIC. A promising next step for this work would be to investigate a multi-tenant environment on the host using SR-IOV. In particular, we would be interested in the scalability of SNAP, and in how many host VMs it is able to support efficiently.
|
http://arxiv.org/abs/2409.03389v1 | 20240905095007 | The Completeness of Accreting Neutron Star Binary Candidates from the Chinese Space Station Telescope | [
"Hao Shen",
"Shun-Yi Lan",
"Xiang-Cun Meng"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.HE",
"astro-ph.IM"
] |
Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216, China;
[email protected], [email protected]
University of Chinese Academy of Sciences, Beijing 100049, China
International Centre of Supernovae, Yunnan Key Laboratory, Kunming 650216, China
Received 2024 June 17; accepted 2024 July 27
Neutron stars (NSs) exist under many extreme physical conditions, and one may obtain important information about NSs via accreting neutron star binary (ANSB) systems. The upcoming Chinese Space Station Telescope (CSST) provides an opportunity to search for a large sample of ANSB candidates. Our goal is to check the completeness of the potential ANSB samples from CSST data. In this paper, we generate some ANSBs and normal binaries under the CSST photometric system by binary evolution and binary population synthesis methods and use a machine learning method to train a classification model. Although the Precision (94.56 %) of our machine learning model is as high as in the previous study, the Recall is only about 63.29 %. The Precision/Recall is mainly determined by the mass transfer rate between the NSs and their companions. In addition, we also find that the completeness of ANSB samples from CSST photometric data by the machine learning method depends on the companion mass and the age of the system. ANSB candidates with low initial-mass companion stars (0.1 to 1) have a relatively high Precision (94.94 %) and high Recall (86.32 %), whereas ANSB candidates with higher initial-mass companion stars (1.1 to 3) have similar Precision (93.88 %) and quite low Recall (42.67 %). Our results indicate that although the machine learning method may obtain a relatively pure sample of ANSBs, a completeness correction is necessary for one to obtain a complete sample.
The completeness of accreting neutron star binary candidates from Chinese Space Station Telescope
Hao SHEN
1,2
Shun-Yi LAN
1,2
Xiang-Cun MENG
1,3
September 9, 2024
=================================================================================================
§ INTRODUCTION
The concept of a `neutron star' (NS) was proposed by Lev Davidovich Landau in 1932, but it was not until 1967 that Antony Hewish and Jocelyn Bell Burnell detected the first known NS (also a pulsar, ). Because NSs possess many extreme physical parameters, they are unique laboratories that allow us to study the properties of dense matter. Several decades later, many breakthroughs in physics and in astronomy have been achieved concerning NSs, especially the first gravitational wave detection from the merger of two NSs (i.e., GW170817), which was an `unprecedented joint gravitational and electromagnetic observation' that `marks the beginning of a new era of discovery' <cit.>. However, some crucial information about NSs, such as their structure, equation of state, birth environment and progenitors, is still vague ().
Accreting neutron star binaries (ANSBs) are a special sort of binary system in which NSs accrete material from their normal companion stars through Roche lobe overflow (RLOF) or stellar wind accretion <cit.>. The accreted material may form a disk around the NS and emit X-rays by releasing gravitational potential energy. Such binaries are called `X-ray binaries'. Depending on the mass of the companion star, ANSBs are classified into three categories: low-mass X-ray binaries (LMXBs, companion star mass below 1), intermediate-mass X-ray binaries (IMXBs, companion star mass between 1 and 10) and high-mass X-ray binaries (HMXBs, companion star mass higher than 10) (). Some ANSBs are persistent X-ray sources, while the others are transient X-ray sources ().
As X-ray sources, ANSBs also emit NUV/optical radiation. The NUV/optical radiation from ANSBs has several different physical origins: thermal radiation from the accretion disk <cit.>, X-ray reprocessing <cit.>, interaction between the relativistic stellar wind of the NS and the inflowing matter <cit.>, synchrotron radiation in the jet <cit.> and in the hot accretion flow <cit.>, and the companion star. In this paper, we mainly investigate the contributions from the accretion disk, the X-ray reprocessing and the companion star. We will consider the other mechanisms step by step to complete our model in the future.
ANSBs play an important role in the theory of binary evolution and the formation of millisecond pulsars (MSPs). According to the recycling scenario, a slowly rotating neutron star in a binary system, which obtains angular momentum by accreting material from its companion star, can become an MSP (see for a review). During the accretion phase, the ANSB manifests itself as an X-ray source, while the binary hosts a radio MSP after mass transfer stops. The discovery of coherent pulsations in the transient LMXB SAX J1808.4-3658 <cit.> and of transitional MSPs strongly supports the recycling scenario <cit.>. So, ANSBs may provide key information on the formation of MSPs and on binary evolution (e.g. ). However, the present small sample of ANSBs limits their ability to provide enough such information (e.g. ). A complete sample of ANSBs is very important to constrain the formation of MSPs. The upcoming Chinese Space Station Telescope (CSST) could provide an opportunity to build such a sample. CSST is a 2 m space telescope and is planned to launch in a few years. The survey from CSST has seven photometric imaging bands covering 255 nm to 1000 nm, with a large field of view of ∼1 deg^2 and a high spatial resolution of ∼0.15” <cit.>.
In a previous related work, <cit.> produced some ANSBs and normal binary stars to investigate whether machine learning can efficiently search for ANSB candidates under the CSST photometric system. Their classification results indicate that machine learning can efficiently select the ANSB candidates from the background normal stars. However, the ANSBs in their model were obtained from a set of binary evolution calculations, and thus the parameter space of these ANSBs may not cover the whole range of the parameter space. They also did not check the completeness of ANSBs in CSST data. In this work, we will extend the parameter space of the generated ANSBs to check the completeness of the ANSB sample obtained from the CSST photometric data, and to check which parameters mainly affect the completeness of the ANSB sample.
The paper is organized as follows. In Section <ref>, we describe our methods. We show the machine learning classification results in Section <ref>. Discussions and conclusions are given in Sections <ref> and <ref>, respectively.
§ METHOD
The main aim of this paper is to check the Precision and Recall achievable from the CSST photometric data by a machine learning method, and some basic methods are similar to those in <cit.>. For example, following <cit.>, we assume that the observed optical emission from an ANSB system is mainly contributed by its accretion disk and companion star. <cit.> only considered main sequence stars as the companions, while RGB (red giant branch) and AGB (asymptotic giant branch) stars are also included in this paper. The detailed methods of the paper are as follows.
1. We used binary population synthesis, based on Hurley's rapid binary evolution code <cit.>, to generate background stars ranging in age from 1 Myr to 14 Gyr, with initial masses ranging from 0.5 to 10 and solar metallicity, in which all the systems evolve as binaries. The basic assumptions for the binary population synthesis are similar to those in <cit.> and <cit.>.
2. We generate ANSB samples in our model by setting several independent variables (the mass of the NS, the initial mass and the age of the companion star, and the mass transfer rate). We assume that the NS in an ANSB system is a point mass, i.e., we neglect the radiation from the surface of the NS and the spin of the NS. We generate the neutron stars with masses ranging from 1.4 to 2 by a Monte Carlo method. We focus on the cases in which the companions fill their Roche lobes, i.e., we focus on the LMXBs and IMXBs, since HMXBs are mainly fed by wind accretion (). Because there is a rough boundary of 3 for the properties of X-ray binary systems (), based on models from <cit.>, the companions in the ANSBs are generated with discrete initial masses ranging from 0.1 to 3, and with an age evenly distributed over their whole lives. The mass transfer rates are also randomly generated, from log(Ṁ/(M_⊙/yr)) =-14 to log(Ṁ/(M_⊙/yr)) =-6, where the lower limit is set to be much lower than a typical wind accretion rate, while the upper limit is set to a typical thermal-timescale mass transfer rate (). Considering that the companions are filling their Roche lobes, we set the binary separation a following the equation in <cit.>:
a=R_2(0.6q^2/3+ln(1+q^1/3))/(0.49q^2/3),
where R_2 is the radius of the companion star and q is the mass ratio of the companion star to the NS. The orbital inclination θ has a great influence on the observed optical flux from the accretion disk, i.e., for given conditions, the flux is proportional to cosθ (see ); we draw θ randomly from 0^∘<θ<90^∘. A minimal sampling sketch for this step is given after this list.
3. The other methods to obtain the total magnitudes of ANSBs, including the systematic error of the CSST photometric system, are the same as those in <cit.>. In particular, the radiation from the disk includes contributions from both the multi-color disk and the irradiated accretion disk, as in <cit.>. We also use the same machine learning process as in <cit.> to compare our results with those in <cit.>.
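The following Python sketch illustrates the Monte Carlo set-up of step 2 together with the separation formula above. Uniform draws within the quoted ranges are an assumption (the text only says the quantities are generated randomly); masses and radii are in solar units, and q is taken as the companion-to-NS mass ratio.

import math
import random

def roche_separation(R2, q):
    # Separation a for a Roche-lobe-filling companion of radius R2
    # (Eggleton's Roche-lobe radius formula, inverted).
    q23 = q ** (2.0 / 3.0)
    return R2 * (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0))) / (0.49 * q23)

def draw_ansb(M2=1.0, R2=1.0):
    M_ns = random.uniform(1.4, 2.0)             # NS mass
    log_mdot = random.uniform(-14.0, -6.0)      # log10 mass transfer rate
    theta = random.uniform(0.0, math.pi / 2.0)  # orbital inclination
    return {"M_ns": M_ns,
            "log_mdot": log_mdot,
            "cos_theta": math.cos(theta),       # disk flux scales as cos(theta)
            "a": roche_separation(R2, q=M2 / M_ns)}

print(draw_ansb())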
In Fig <ref>, we show all the ANSBs we generated (colorful dots) in the color-magnitude diagram (CMD), where different colors represent different initial masses of companion stars. For comparison, some background single stars with an age of 1 Gyr (black dots) are also shown. The ANSBs cover a large range in the CMD because the systems have different NS masses, different companion masses, different ages and different mass transfer rates. The positions of ANSBs in the CMD depend strongly on the mass transfer rates. For a given companion, ANSBs with higher mass transfer rates appear brighter and bluer, while ANSBs with lower mass transfer rates are located close to normal single stars in the CMD, as shown by the discrete stripes for low-mass stars. In Fig <ref>, there is a clear upper boundary for the brightness of ANSBs, which comes from our treatment that there is an upper limit on the mass transfer rate of 10^-6 M_⊙/yr.
§ RESULTS
There are several metrics for classification results in machine learning. Among them, Precision = TP/(TP+FP) represents the proportion of examples classified as positive that are actually positive, and Recall = TP/(TP+FN) represents the proportion of all positive cases that are correctly classified, which measures the model's ability to recognize positive cases. Recall may thus represent the completeness of the ANSBs recognized from observational data. Here, TP (True Positive) means a positive example that is correctly predicted, i.e., the true value of the data is positive and the predicted value is also positive. TN (True Negative) means a negative example that is correctly predicted. FP (False Positive) means a negative example that is incorrectly predicted. FN (False Negative) means a positive example that is incorrectly predicted.
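Written out explicitly, the two metrics are simple functions of the confusion-matrix counts; the counts in the example call below are illustrative.

def precision(tp, fp):
    # Fraction of predicted positives that are truly positive.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of all true positives that are recovered.
    return tp / (tp + fn)

print(precision(tp=90, fp=10), recall(tp=90, fn=60))   # 0.9 0.6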
For the whole machine learning sample, the Precision is 94.56 %, similar to that in <cit.>. In other words, almost all of the ANSB candidates identified from the CSST photometric data may be real ANSBs. However, the Recall is only 63.29 %, which means that many ANSBs may be missed by a machine learning method. Actually, as is well known, there is a trade-off between Precision and Recall in a machine learning model, in which a key threshold between 0 and 1 is designed to balance the two. In our model, the threshold is set to 0.5. We did a test with a threshold of 0.3 and found that the Precision decreases to 89.42 % and the Recall increases to 66.67 %, as expected. Table <ref> shows the classification results of our model.
As shown in Fig <ref>, whether an ANSB is recognized by the machine learning method is mainly determined by the mass transfer rate between the NS and its companion. In Fig <ref> we show the Precision and Recall of ANSBs as a function of the mass transfer rate, where the other parameters are randomly distributed within the ranges of values we set. For ANSBs with a given companion star, the higher the mass transfer rate, the higher the Precision and the Recall. In the upper panel of Fig <ref>, we show the Precision and Recall of ANSB systems with companion stars of 1 as a function of the mass transfer rate. For ANSB systems with companion stars of 1, our model cannot identify samples with mass transfer rates of less than log(Ṁ/(M_⊙/yr)) =-12. Our model is very good at identifying samples with mass transfer rates higher than log(Ṁ/(M_⊙/yr)) =-11; these samples have Precision and Recall close to 1. For comparison, we show in the lower panel of Fig <ref> the Precision and Recall of ANSB systems with companion stars of 3 as a function of the mass transfer rate. Our model cannot efficiently identify samples with a mass transfer rate of less than log(Ṁ/(M_⊙/yr)) =-9. Samples with mass transfer rates higher than log(Ṁ/(M_⊙/yr)) =-9 can be identified well, but about one fifth of their completeness is missing. In other words, for a given companion, the Precision/Recall reaches almost a constant value. This comes from the fact that when the mass transfer rate is high enough, the radiation from the accretion disk of an ANSB dominates the observable optical flux, i.e., the radiation from the companion star is negligible compared with that from the accretion disk. These results indicate that the CSST photometric system could be sensitive to those ANSBs with thermal-timescale mass transfer.
As shown in Fig <ref>, our model can easily identify ANSB systems with high mass transfer rates, but the completeness is affected by the initial mass of the companion star. In Fig <ref>, we show the Precision and Recall of ANSBs as a function of the initial mass of the companion star, where the other parameters are randomly distributed within the ranges of values we set. Over the whole interval of companion mass, the Precision is almost a constant of about 94 %, with a slightly higher value for stars less massive than 1 than for more massive stars. As discussed by <cit.>, the high Precision derives from the fact that the flux of a normal binary is roughly the superposition of two blackbody spectra, whereas the flux from our ANSB model is roughly the superposition of an accretion disk spectrum and a blackbody spectrum. Such a huge flux difference determines the high Precision. Moreover, a more massive companion star has a spectrum more similar to that of an accretion disk, as shown in <cit.>, which results in a slight decrease of the Precision with the companion mass. However, the Recall depends strongly on the companion mass. For the cases with companions below 1, the Recall is significantly higher than for the cases with companions above 1 (86.32 % vs 42.67 %). In particular, the Recall decreases with increasing companion mass for the cases with companions of 0.7–1. This is due to the fact that the greater the initial mass of the companion star, the brighter the companion star becomes, and the flux from the companion star may cover the flux from the accretion disk.
The age of an ANSB, which is determined by the companion age, could also affect the Precision and/or Recall. The upper panel of Fig <ref> presents the Precision and Recall of ANSBs as a function of the age of the ANSB system, where the companion stars have the same initial mass (1) and the other parameters are randomly distributed within the ranges of values we set. Over the whole age range, we obtain an overall result of Precision = 97.32 % and Recall = 70.95 %. For different ages of the ANSB systems, the Precision stays at a high value (95 %) with small fluctuations. This is because the radiation of a normal binary system comes from two blackbodies, while the radiation of an ANSB system comes from a blackbody and an accretion disk. The Recall remains in the range of about 0.7 to 0.75 for the first 95 % of the lifetime and drops below 0.7 in the last 5 %, because the companion star spends 95 % of its life in the main sequence stage and the Hertzsprung gap, where its brightness and color change minimally (see lower panel).
The lifetime of a star depends heavily on its initial mass, so we must consider the influence of the initial mass. Similar to Fig <ref>, the upper panel of Fig <ref> illustrates the Precision and Recall of ANSBs with age, while their companion stars have an initial mass of 3. Over the whole age range, we obtain an overall result of Precision = 83.58 % and Recall = 41.47 %. The Precision initially fluctuates in the range of 0.7 to 0.8 (age 0 to age 70), and then stabilizes close to a value of 1 (age 70 to age 100). This behavior is attributed to the high surface temperature of a star with an initial mass of 3 during its main sequence stage (age 0 to age 70, see lower panel), because ANSB systems with high-temperature companion stars are relatively easily confused with hot background stars. The Recall first rises and then stays around 0.5. This is because from age 0 to age 65, as the surface temperature of the companion star decreases, the radiation from the ANSB system can exhibit both the accretion disk and the companion star components, rather than just a single hot component. From age 65 to age 70, the brightness of the companion star increases, and the radiation from the companion star may partially cover the radiation from the accretion disk, leading to the decrease in Recall, i.e., the systems may easily be identified as single stars by our machine learning model. Eventually, from age 70 to age 100, the companion star becomes a giant, and the Recall remains stable.
§ DISCUSSION
In this paper, we check how the completeness of potential ANSB candidates from future CSST photometric data depends on the properties of the ANSBs, investigating three parameters: the mass transfer rate, and the initial mass and age of the companion.
First, we found that the mass transfer rate has a decisive effect on the recognition results of our model. Our model is almost unable to identify ANSB systems with low mass transfer rates. For ANSB systems with sufficiently high mass transfer rates, our model can identify them well, and the recognition results have both high Precision and high Recall. This means that there is a high probability that X-ray emission can be observed from the ANSB systems identified by our model. The mass transfer rate is the dominant parameter affecting the Recall of a machine learning method, because its value directly determines the flux from the accretion disk in an ANSB system <cit.>. However, the initial mass of the companion star also has an impact on the identification results. A higher mass transfer rate `threshold' is required to produce good identification results when the initial mass of the companion star is larger. Conversely, a lower upper limit of the Recall is associated with a greater initial mass of the companion star. These results indicate that the CSST photometric system could be quite sensitive to ANSB systems in which mass transfer occurs on a thermal timescale.
In addition, we find that the initial mass of the companion star has little effect on the Precision but a significant effect on the Recall. For different initial masses of the companion star, the Precision is almost a constant (94 %). As discussed by <cit.>, the high Precision is due to the fact that the flux from a normal binary star is the superposition of two blackbody spectra, while the flux from an ANSB is the superposition of an accretion disk spectrum and a blackbody spectrum. However, the initial mass of the companion star has a significant effect on the Recall. This is because the initial mass determines the brightness of the companion star, and the brightness of the companion star determines its flux contribution in the ANSB system. Considering the effect of the mass transfer rate, the systems identified from the CSST photometric system are more likely to be LMXBs, which could be confirmed by X-ray observations.
We also explored the effect of the age of the companion star on the Precision and Recall: the more massive the companion, the more significant the effect of age on the Precision and/or Recall. This is because the higher the initial mass of the companion star, the greater its luminosity.
Finally, we should point out that the ANSBs in our model are produced under some simple assumptions. For example, we only consider the radiation of an ANSB from the accretion disk and the companion star, while the corona of the NS and the heating of the companion star are ignored (e.g. ). Some physical processes that we neglected could become important in some special ANSBs, such as jets <cit.>. We also do not consider the effect of emission lines from the disk, which could affect the accuracy of the classification model (a double-peaked H_α emission line may support the existence of an accretion disk, ). In the future, we will add these effects step by step. The slitless spectroscopy module of CSST might be helpful to improve our model.
§ CONCLUSIONS
In this paper, we produced some ANSBs and background stars to investigate whether or not the completeness of ANSB candidates in CSST data depends on the properties of the potential ANSB systems. We obtain a total result of Precision =94.56 % and Recall =63.29 %. We found that the mass transfer rate had a decisive influence on the identification results of our model. For ANSB systems with high mass transfer rates, our model is able to obtain good identification results (high Precision and high Recall). The initial mass and age of the companion star also have an impact on the identification results. Our results indicate that the ANSB systems with less massive companions and thermal timescale mass transfer rates are more likely to be identified by the CSST photometric system and would have a better completeness.
We thank Zheng-Wei Liu, Hong-Wei Ge, Jun-Feng Cui, Jing-Xiao Luo, Li-Fu Zhang and Li-Jun Zhang for their help. This work is supported by the National Natural Science Foundation of China (Nos. 12288102 and 12333008), and the National Key R&D Program of China (No. 2021YFA1600403). X.M. acknowledges support from the International Centre of Supernovae, Yunnan Key Laboratory (No. 202302AN360001), the Yunnan Revitalization Talent Support Program-Science & Technology Champion Project (No. 202305AB350003), the Yunnan Fundamental Research Projects (No. 202401BC070007 and 202201BC070003), and the science research grants from the China Manned Space Project.
|
http://arxiv.org/abs/2409.03548v1 | 20240905141126 | The essential norms of Toeplitz operators with symbols in $C+H^\infty$ on weighted Hardy spaces are independent of the weights | [
"Oleksiy Karlovych",
"Eugene Shargorodsky"
] | math.FA | [
"math.FA"
] |
§ ABSTRACT
Let 1<p<∞, let H^p be the Hardy space on the unit circle, and let
H^p(w) be the Hardy space with a Muckenhoupt weight w∈ A_p on the unit
circle. In 1988, Böttcher, Krupnik and Silbermann proved that
the essential norm of the Toeplitz operator T(a) with a∈ C
on the weighted Hardy space H^2(ϱ) with a power weight ϱ∈ A_2
is equal to a_L^∞. This implies that the essential norm of
T(a) on H^2(ϱ) does not depend on ϱ. We extend this result
and show that if a∈ C+H^∞, then, for 1<p<∞, the essential
norms of the Toeplitz operator T(a) on H^p and on H^p(w) are the same
for all w∈ A_p. In particular, if w∈ A_2, then the essential norm
of the Toeplitz operator T(a) with a∈ C+H^∞ on the weighted
Hardy space H^2(w) is equal to a_L^∞.
Primary 47B35, 46E30
The essential norms of Toeplitz operators with symbols in C+H^∞ on weighted Hardy spaces are independent of the weights
Oleksiy Karlovych, Eugene Shargorodsky
5 September 2024
=======================================================================
§ INTRODUCTION
For Banach spaces 𝒳,𝒴, let
ℬ(𝒳,𝒴) and
𝒦(𝒳,𝒴)
denote the sets of bounded linear and compact linear operators from
𝒳 to 𝒴, respectively. The norm of an operator
A∈ℬ(𝒳,𝒴) is denoted
by A_ℬ(𝒳,𝒴). The essential norm of
A ∈ℬ(𝒳,𝒴)
is defined as follows:
A_ℬ(𝒳,𝒴),e
:=
inf{A - K_ℬ(𝒳,𝒴) : K ∈𝒦(𝒳,𝒴)}.
As usual, we abbreviate ℬ(𝒳,𝒳) and
𝒦(𝒳,𝒳) to
ℬ(𝒳) and 𝒦(𝒳), respectively.
Let 𝕋:={z∈ℂ:|z|=1} be the unit circle in the complex
plane. We equip 𝕋 with the Lebesgue measure m normalised so
that m(𝕋)=1. In this paper, all function spaces will be
considered over 𝕋.
A measurable function w:𝕋→[0,∞] is said to be a weight
if 0<w<∞ a.e. on 𝕋. Let 1<p<∞ and w be a weight.
Weighted Lebesgue spaces L^p(w) consist of all measurable functions
f:𝕋→ℂ such that fw∈ L^p. The norm in L^p(w)
is defined by
f_L^p(w):=fw_L^p
=
(∫_𝕋|f(t)|^pw^p(t) dm(t))^1/p.
Consider the operators S and P, defined for a function
f∈ L^1 and a.e. point t∈𝕋 by
(Sf)(t):=1/π i ∫_𝕋f(τ)/τ-t dτ,
(Pf)(t):=1/2(f(t)+(Sf)(t)),
respectively, where the integral is understood in the Cauchy principal
value sense. The operator S is called the Cauchy singular
integral operator and the operator P is called the Riesz projection.
It is well known (see <cit.>
and also <cit.>, <cit.>) that the
Riesz projection P is bounded on L^p(w) if and only if w
belongs to the Muckenhoupt class A_p, that is,
sup_γ⊂𝕋(1/m(γ)∫_γ w^p(t) dm(t))^1/p(1/m(γ)∫_γ w^-p'(t) dm(t))^1/p'
<∞,
where 1/p+1/p'=1 and the supremum is taken over all arcs
γ⊂𝕋. If w∈ A_p, then
L^∞↪ L^p(w)↪ L^1.
For a function f ∈ L^1, let
f(n)
=
1/2π∫_-π^π
f(e^iθ) e^-i nθ dθ ,
n ∈ℤ
be the Fourier coefficients of f. For 1<p<∞ and w∈ A_p, let
H^p(w) = H[L^p(w)] :=
{
f∈ L^p(w) : f(n)=0 n<0
}
be the weighted Hardy space. The classical Hardy spaces H^p,
1≤ p≤∞, are defined similarly if one replaces L^p(w) by L^p
in the above definition. It is well known that if w∈ A_p, then P maps
L^p(w) onto H^p(w).
Let 1<p<∞ and w∈ A_p. For a∈ L^∞, the Toeplitz operator
with symbol a is defined by
T(a)f=P(af),
f∈ H^p(w).
It is clear that T(a)∈ℬ(H^p(w)) and
T(a)_ℬ(H^p(w)), e≤T(a)_ℬ(H^p(w))≤P_ℬ(L^p(w))a_L^∞.
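For p=2 and w=1, the matrix of T(a) in the orthonormal basis {𝐞_k}_k≥0 of H^2 is the Toeplitz matrix (a(j-k))_j,k≥0 built from the Fourier coefficients of the symbol. The following Python sketch, included purely as a numerical illustration, computes a finite section of this matrix for an illustrative symbol a(t)=t+t^-1 and shows its spectral norm approaching a_L^∞=2.

import numpy as np

def toeplitz_section(symbol_coeffs, n):
    # n x n section [hat a(j-k)] from a dict {m: hat a(m)}.
    return np.array([[symbol_coeffs.get(j - k, 0.0) for k in range(n)]
                     for j in range(n)])

# a(t) = t + t^{-1}, i.e. a(e^{ix}) = 2 cos(x), so ||a||_infty = 2.
T = toeplitz_section({1: 1.0, -1: 1.0}, n=50)
print(np.linalg.norm(T, 2))                     # ~1.996, close to 2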
I. Gohberg and N. Krupnik <cit.> proved that
P_ℬ(L^p),e≥ 1/sin(π/p) and conjectured that
P_ℬ(L^p)=1/sin(π/p). This conjecture was confirmed
by B. Hollenbeck and I. Verbitsky in <cit.>. Thus
T(a)_ℬ(H^p),e≤
1/sin(π/p)
a_L^∞,
a∈ L^∞.
Let C denote the Banach space of all complex-valued continuous functions
on 𝕋 with the supremum norm and let
C+H^∞:={f∈ L^∞ : f=g+h, g∈ C, h∈ H^∞}.
In 1967, Sarason observed that C+H^∞ is a closed subalgebra
of L^∞ (see, e.g., <cit.> for the proof
of this fact).
Consider power weights of the form
ϱ(t):=∏_j=1^k |t-t_j|^λ_j
where t_1,…,t_k∈𝕋 are pairwise distinct and
λ_1,…,λ_k∈ℝ. These weights are usually called
Khvedelidze weights. It is well known that ϱ∈ A_p with
1<p<∞ if and only if -1/p<λ_j<1-1/p for all j∈{1,…,k}
(see, e.g., <cit.>).
In 1988, A. Böttcher, N. Krupnik, and B. Silbermann proved the following
(see <cit.>).
If ϱ is a power weight of the form (<ref>) satisfying
-1/2<λ_j<1/2
j∈{1,…,k},
then for all a∈ C
one has T(a)_ℬ(H^2(ϱ)),e=a_L^∞.
In particular, the essential norm of T(a) on H^2(ϱ) does not
depend on the choice of a Khvedelidze weight
ϱ∈ A_2. Further they asked in <cit.>
whether the essential norm of Toeplitz operators T(a) with
a∈ C acting on Hardy spaces H^p depends on p∈(1,∞).
The second author answered this question in the negative <cit.>.
More precisely, it was shown that the equality
T(a)_ℬ(H^p),e=a_L^∞
holds for all a∈ C if and only if p=2. Nevertheless, the following
estimates for T(a)_ℬ(H^p),e hold
(see (<ref>) and <cit.>).
Let 1<p<∞ and a∈ C+H^∞. Then
a_L^∞≤T(a)_ℬ(H^p),e≤min{2^|1-2/p|,1/sin(π/p)}a_L^∞.
The aim of this paper is to study the relations between the essential norms of
Toeplitz operators with symbols in C+H^∞ on unweighted and weighted
Hardy spaces. The following is our main result.
Let 1<p<∞ and w∈ A_p. If a∈ C+H^∞, then
T(a)_ℬ(H^p),e
=
T(a)_ℬ(H^p(w)),e.
It is instructive to compare this result with the behaviour of the essential
norm S_ℬ(L^p(ϱ)),e of the Cauchy singular
integral operator with ϱ∈ A_p of the form (<ref>),
which depends not only on p∈ (1,∞), but also on ϱ
(see <cit.> and Krupnik's survey <cit.>).
In fact, we prove an abstract form of Theorem <ref>
(see Theorem <ref>), where we replace the Hardy
space H^p with 1<p<∞ by the abstract Hardy space H[X] built
upon a Banach function space X such that P∈ℬ(X)
and replace the weighted Hardy space H^p(w) with w∈ A_p by
the abstract Hardy space H[X(w)] built upon
X(w)={f:𝕋→ℂ:fw∈ X} with a weight w such that
P∈ℬ(X(w)). Our proof is fairly elementary and is completely
different from that of Theorem <ref>.
It is clear that Theorems <ref> and <ref>
immediately imply the following extension of Theorem <ref>.
Suppose that 1<p<∞ and w∈ A_p. If a∈ C+H^∞, then
a_L^∞≤T(a)_ℬ(H^p(w)),e≤min{2^|1-2/p|,1/sin(π/p)}a_L^∞.
In particular, if w∈ A_2 and a∈ C+H^∞, then
T(a)_ℬ(H^2(w)),e=a_L^∞.
We will use the following notation:
𝐞_m(z) := z^m,
z ∈ℂ,
m ∈ℤ.
The paper is organised as follows. In Section <ref>, we
collect definitions of a Banach function space, its associate space X', and
a weighted Banach function space X(w). Note that if w∈ X and 1/w∈ X',
then X(w) is a Banach function space itself. This allows us to define
the abstract Hardy space H[X(w)] built upon X(w). We recall that if
w∈ X and 1/w∈ X', then then the abstract Hardy spaces H[X] and
H[X(w)] are isometrically isomorphic. We conclude the preliminaries with
some useful properties of the Riesz projection P and the definition of
a Toeplitz operator T(a) on the abstract Hardy space built upon a Banach
function space X such that P∈ℬ(X).
Section <ref> is devoted to the proof of Theorem <ref>.
First we recall an observation from <cit.> that the Toeplitz operator
T(𝐞_-nh) with n∈ℕ and h∈ H^∞
is bounded on the abstract Hardy space H[X] for an arbitrary Banach function
space, i.e. even without the assumption that P∈ℬ(X).
Further, we show that the essential norms of T(𝐞_-nh)
on H[X] and H[X(w)] coincide if w∈ X and 1/w∈ X'.
Here we essentially use the isometric isomorphism of H[X] and H[X(w)].
Finally, taking into account that the set
{𝐞_-nh:n∈ℕ,h∈ H^∞} is dense in C+H^∞,
we prove Theorem <ref>. In turn,
Theorem <ref> yields Theorem <ref>.
§ PRELIMINARIES
§.§ Banach function spaces
Let ℳ be the set of all measurable complex-valued functions on
𝕋 equipped with the normalized Lebesgue measure m and let
ℳ^+ be the subset of functions in ℳ whose values lie
in [0,∞]. Following <cit.>, a mapping
ρ: ℳ^+→ [0,∞] is called a Banach function norm if,
for all functions f,g, f_n∈ℳ^+ with n∈ℕ, and for
all constants a≥ 0, the following properties hold:
(A1) ρ(f)=0 ⇔ f=0 ,
ρ(af)=aρ(f),
ρ(f+g) ≤ρ(f)+ρ(g),
(A2) 0≤ g ≤ f ⇒ ρ(g) ≤ρ(f)
,
(A3) 0≤ f_n ↑ f ⇒ρ(f_n) ↑ρ(f) ,
(A4) ρ(1) <∞,
(A5) ∫_𝕋 f(t) dm(t) ≤ Cρ(f)
with a constant C ∈ (0,∞) that may depend on ρ, but is
independent of f. When functions differing only on a set of measure zero
are identified, the set X of all functions f∈ℳ for which
ρ(|f|)<∞ is called a Banach function space. For each f∈ X,
the norm of f is defined by f_X :=ρ(|f|). The set X equipped
with the natural vector space operations and this norm becomes a Banach
space (see <cit.>). If ρ is a Banach
function norm, its associate norm ρ' is defined on ℳ^+ by
ρ'(g):=sup{∫_𝕋 f(t)g(t) dm(t) :
f∈ℳ^+, ρ(f) ≤ 1
}, g∈ℳ^+.
It is a Banach function norm itself <cit.>.
The Banach function space X' determined by the Banach function
norm ρ' is called the associate space (Köthe dual) of X.
The associate space X' can be viewed as a subspace of the
Banach dual space X^*.
§.§ Weighted Banach function spaces
For a weight w and a Banach function space X, the weighted space X(w)
consists of all measurable functions f:𝕋→ℂ such that
fw∈ X. We equip it with the norm f_X(w)=fw_X.
Let X be a Banach function space with the associate spaces X' and
let w be a weight.
(a)
If w∈ X and 1/w∈ X', then X(w) is a Banach function space,
whose associate space is X'(1/w).
(b)
If P is bounded on the normed space X(w), then w∈ X and
1/w∈ X'.
Part (a) was proved in <cit.>, part (b) follows from
<cit.>.
§.§ Isometric isomorphism of weighted and nonweighted abstract
Hardy spaces
Let X be a Banach function space and let X' be its associate space.
Lemma <ref>(a) implies that if w is a weight such that
w∈ X and 1/w∈ X', then X(w) is a Banach function space. Then
L^∞↪ X(w)↪ L^1
and the abstract Hardy space H[X(w)] built upon X(w) is well defined
by
H[X(w)]:=
{
f∈ X(w) : f(n)=0 n<0
}.
For simplicity, we will write H[X] if w=1.
Let X be a Banach function space with the associate space X'
and let w be a weight such that w∈ X and 1/w∈ X'.
Consider
W(z) := exp(1/2π∫_-π^πe^it + z/e^it - z log w(e^it) dt),
|z|<1.
Then the mapping f↦ M_Wf:=W· f
is an isometric isomorphism of H[X(w)] onto H[X] and
the mapping g↦ M_W^-1g :=W^-1· g is an isometric
isomorphism of H[X] onto H[X(w)].
§.§ Auxiliary lemmas on the Riesz projection
We will need the following auxiliary lemmas.
Let f ∈ L^1. Suppose there exists g ∈ H^1 such that
f(n) = g(n) for all n ≥ 0. Then Pf = g.
The following lemma justifies the terminology about the operator P.
If X is a Banach function space on which the operator P is bounded,
then P maps the space X onto the abstract Hardy space H[X].
§.§ Toeplitz operators on abstract Hardy spaces
Let X be a Banach function space. If a∈ L^∞, then the multiplication
operator by a defined by
M_af:=af, f∈ X,
is bounded on X and M_a_ℬ(X)≤a_L^∞.
Further, if the Riesz projection is bounded on X, then the Toeplitz
operator defined by
T(a)f:=P(M_af)=P(af),
f∈ H[X],
is bounded on H[X] and
T(a)_ℬ(H[X])≤P_ℬ(X)a_L^∞.
§ PROOF OF THE MAIN RESULT
§.§ Operator P_n and a special Toeplitz operator
We will need auxiliary results on a representation of a Toeplitz
operator T(𝐞_-nh) with n∈ℕ and h∈ H^∞
in terms of the projection P_n from X onto the subspace of H[X] of
analytic polynomials of order n-1 obtained in our recent paper <cit.>.
We would like to underline that we do not require here and in
Subsection <ref>
that the Riesz projection P is bounded on X.
For every n∈ℕ and f∈ L^1, let
P_n f := ∑_k = 0^n-1f(k) 𝐞_k.
Then the operator P_n is bounded from L^1 to H^∞ and
from a Banach function space X to the abstract Hardy space H[X].
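Numerically, P_n is just truncation of the Fourier series. The Python sketch below, included as an illustration only, approximates the Fourier coefficients of a function sampled on a uniform grid on 𝕋 by the FFT and keeps the first n of them; the grid size and the test function are illustrative.

import numpy as np

def P_n(f_samples, n):
    # Project onto analytic polynomials of degree < n.
    N = len(f_samples)
    coeffs = np.fft.fft(f_samples) / N          # approximates hat f(k), k = 0..N-1
    z = np.exp(2j * np.pi * np.arange(N) / N)   # grid points on the unit circle
    return sum(coeffs[k] * z ** k for k in range(n))

N = 64
z = np.exp(2j * np.pi * np.arange(N) / N)
f = z ** (-1) + 2 + 3 * z                       # hat f(-1)=1, hat f(0)=2, hat f(1)=3
print(np.allclose(P_n(f, 2), 2 + 3 * z))        # True: P_2 keeps 2 + 3z only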
Let X be a Banach function space. If n ∈ℤ_+ and h ∈ H^∞,
then the Toeplitz operator T(𝐞_-n h) : H[X] → H[X] is bounded
and
T(𝐞_-n h) f
=
P(𝐞_-n h f) = 𝐞_-n (I - P_n)(hf),
f∈ H[X].
§.§ The essential norm of a special Toeplitz operator on a weighted
Hardy space is independent of the weight
In this subsection we show that the essential norms of the special
Toeplitz operators with symbols of the form
𝐞_-nh, where n∈ℕ and h∈ H^∞,
are independent of the weight.
Let X be a Banach function space with the associate space X'
and let w be a weight satisfying w∈ X and 1/w∈ X'.
If n ∈ℕ and h ∈ H^∞, then
‖T(𝐞_-n h)‖_ℬ(H[X(w)]),e
=
‖T(𝐞_-n h)‖_ℬ(H[X]),e.
Let the operators M_W and M_W^-1 be as in
Lemma <ref>. Since W is an outer function
(see <cit.>), one has |W|=w
a.e. on 𝕋, and W∈ H[X], W^-1∈ H[X'].
By Lemma <ref>,
the operator T(𝐞_-nh) is bounded on H[X] and H[X(w)].
For any f ∈ H[X], one gets from (<ref>)
and Lemma <ref> that
M_W T(𝐞_-n h) M_W^-1 f
=
W 𝐞_-n (I - P_n)(hW^-1 f)
=
P(W 𝐞_-n (I - P_n)(hW^-1 f))
=
P(W 𝐞_-n (hW^-1 f)
-
W 𝐞_-n P_n(hW^-1 f))
=
P(𝐞_-n (h f)
-
W 𝐞_-n P_n (hW^-1 f))
=
P(𝐞_-n (I - P_n)(h f)
+
𝐞_-n P_n(h f)
-
W 𝐞_-n P_n(hW^-1 f))
=
P(T(𝐞_-n h) f
+
𝐞_-n P_n (h f)
-
W 𝐞_-n P_n (hW^-1 f))
=
T(𝐞_-n h) f
+
T(𝐞_-n) P_n M_h f
-
T(𝐞_-n) M_W P_n M_hW^-1 f .
Since W^-1∈ X' and h∈ H^∞, it follows from the Hölder
inequality for Banach function spaces (see <cit.>)
that M_hW^-1∈ℬ(H[X], L^1).
On the other hand, taking into account
<cit.>, since
W∈ H[X]⊂ H^1, one has Wg∈ H^1∩ X=H[X]
for every g∈ H^∞.
Hence M_W ∈ℬ(H^∞, H[X]).
Then Lemmas <ref> and <ref>
(with h = 1) imply that
T(𝐞_-n) M_W P_n M_hW^-1∈ℬ(H[X]),
T(𝐞_-n) P_n M_h ∈ℬ(H[X]) .
So,
K_0 :=
T(𝐞_-n) P_n M_h
-
T(𝐞_-n) M_W P_n M_hW^-1
is a bounded finite-rank operator on H[X], and
M_W T(𝐞_-n h) M_W^-1 = T(𝐞_-n h) + K_0 .
Since K_0 ∈𝒦(H[X]), one has
M_W K M_W^-1 - K_0 ∈𝒦(H[X])
for every K ∈𝒦(H[X(w)]). Moreover, for every
T ∈𝒦(H[X]), there exists K ∈𝒦(H[X(w)]) such
that
T = M_W K M_W^-1 - K_0 .
Indeed, this K is given by the formula
K = M_W^-1(T + K_0) M_W∈𝒦(H[X(w)]) .
Using (<ref>), one gets
‖T(𝐞_-n h)‖_ℬ(H[X(w)]),e
=
inf_K ∈𝒦(H[X(w)])‖T(𝐞_-n h) - K‖_ℬ(H[X(w)])
=
inf_K ∈𝒦(H[X(w)])‖M_W(T(𝐞_-n h) - K)M_W^-1‖_ℬ(H[X])
=
inf_K ∈𝒦(H[X(w)])‖T(𝐞_-n h) - (M_W K M_W^-1 - K_0)‖_ℬ(H[X])
=
inf_T ∈𝒦(H[X])‖T(𝐞_-n h) - T‖_ℬ(H[X])
=
‖T(𝐞_-n h)‖_ℬ(H[X]),e,
which completes the proof.
§.§ The essential norm of a Toeplitz operator on a weighted
Hardy space is independent of the weight
Now we are in a position to prove the main result of this section.
Let X be a Banach function space and let w be a weight such that
the Riesz projection P is bounded on the spaces X and X(w).
If a∈ C+H^∞, then
‖T(a)‖_ℬ(H[X(w)]),e
=
‖T(a)‖_ℬ(H[X]),e.
It follows from <cit.> that there is a sequence
{a_m} of functions of the form 𝐞_-nh with h ∈ H^∞
and n ∈ℕ such that ‖a-a_m‖_L^∞→ 0 as m→∞.
Since P∈ℬ(X(w)), Lemma <ref>(b) implies that
w∈ X and 1/w∈ X'. Then Lemma <ref>
yields
‖T(a_m)‖_ℬ(H[X(w)]),e
=
‖T(a_m)‖_ℬ(H[X]),e,
m∈ℕ.
Since P∈ℬ(X), we obtain from (<ref>)
that
|‖T(a)‖_ℬ(H[X]),e - ‖T(a_m)‖_ℬ(H[X]),e|
≤‖T(a-a_m)‖_ℬ(H[X]),e
≤‖P‖_ℬ(X)‖a-a_m‖_L^∞,
whence
lim_m→∞‖T(a_m)‖_ℬ(H[X]),e
=
‖T(a)‖_ℬ(H[X]),e.
Similarly, since P∈ℬ(X(w)), we get
lim_m→∞‖T(a_m)‖_ℬ(H[X(w)]),e
=
‖T(a)‖_ℬ(H[X(w)]),e.
Combining
(<ref>)–(<ref>),
we arrive at (<ref>).
§.§ Proof of Theorem <ref>
If 1<p<∞ and w∈ A_p, then the Riesz projection P is bounded
on the Lebesgue space L^p and on its weighted counterpart L^p(w).
In this case, Theorem <ref> follows immediately from
Theorem <ref>.
§ DECLARATIONS
§.§ Funding
This work is funded by national funds through the FCT - Fundação para a
Ciência e a Tecnologia, I.P., under the scope of the projects UIDB/00297/2020
(<https://doi.org/10.54499/UIDB/00297/2020>)
and UIDP/00297/2020
(<https://doi.org/10.54499/UIDP/00297/2020>)
(Center for Mathematics and Applications).
§.§ Data availability
Data sharing not applicable to this article as no datasets
were generated or analysed during the current study.
§.§ Conflict of interest
All authors certify that they have no affiliations with
or involvement in any organisation or entity with any financial interest or
non-financial interest in the subject matter or materials discussed in this
manuscript.
|
http://arxiv.org/abs/2409.03100v1 | 20240904215956 | Fast and Accurate Collimator-Detector Response Compensation in High-Energy SPECT Imaging with 1D Convolutions and Rotations | [
"Lucas Polson",
"Pedro Esquinas",
"Sara Kurkowska",
"Chenguang Li",
"Peyman Sheikhzadeh",
"Mehrshad Abbassi",
"Saeed Farzanehfar",
"Seyyede Mirabedian",
"Carlos Uribe",
"Arman Rahmim"
] | physics.med-ph | [
"physics.med-ph"
] |
Fast and Accurate Collimator-Detector Response Compensation in High-Energy SPECT Imaging with 1D Convolutions and Rotations
^1 Department of Physics & Astronomy, University of British Columbia, Vancouver, Canada
^2 Department of Integrative Oncology, BC Cancer Research Institute, Vancouver Canada
^3 Molecular Imaging and Therapy Department, BC Cancer, Vancouver Canada
^4 Department of Nuclear Medicine, Pomeranian Medical University, Szczecin, Poland
^5 Nuclear Medicine Department, IKHC, Faculty of Medicine, Tehran University of Medical Science, Tehran, Iran
^6 Department of Radiology, University of British Columbia,Vancouver, Canada
[email protected]
§ ABSTRACT
Objective: Modeling of the collimator-detector response (CDR) in SPECT reconstruction enables improved resolution and more accurate quantitation, especially for higher energy imaging (e.g. ^177Lu and ^225Ac). Such modeling, however, can pose a significant computational bottleneck when there are substantial components of septal penetration and scatter in the acquired data, since a direct convolution-based approach requires large 2D kernels. The present work presents an alternative method for fast and accurate CDR compensation using a linear operator built from 1D convolutions and rotations (1D-R). To enable open-source development and use of these models in image reconstruction, we release a SPECTPSFToolbox repository for the PyTomography project on GitHub. Approach: A 1D-R CDR model was formulated, and subsequently fit to Monte Carlo 440 keV point source data representative of emissions in ^225Ac imaging. Computation times of (i) the proposed 1D-R model and (ii) a traditional model that uses 2 dimensional convolutions (2D) were compared for typical SPECT matrix sizes. Both CDR modeling techniques were then used to reconstruct ^225Ac phantom and patient data, and were compared by quantifying total counts in hot regions of interest (ROIs) and activity contrast between hot ROIs and background regions. Results: The 1D-R and 2D CDR models were created using the SPECTPSFToolbox. For typical matrix sizes in SPECT reconstruction, application of the 1D-R model provides a two-fold computational speed-up over the 2D model. Only small differences between the 1D-R and 2D models (order of 1% ) were obtained for count and contrast quantification in select ROIs. Significance: A technique for CDR modeling in SPECT was proposed that (i) significantly speeds up reconstruction times, and (ii) yields nearly identical reconstructions to traditional 2D convolution based CDR techniques. The released toolbox will permit open-source development of similar models for different isotopes and collimators.
§ INTRODUCTION
Single photon emission computed tomography (SPECT) imaging is an in vivo modality of significant value in different clinical applications <cit.>. In particular, quantitative SPECT imaging holds great potential and value in the field of theranostics, wherein image quantification and dosimetry can be performed to improve radiopharmaceutical therapies (RPTs). Routine SPECT-based dosimetry could unveil the relationships between tumor response and healthy organ complications with absorbed dose, therefore enabling personalized treatments that maximize tumor dose while minimizing toxicity to organs at risk <cit.>. As a recent and well-known example, the success of ^177Lu-PSMA-617 based RPTs in the VISION <cit.> and TheraP <cit.> randomized controlled trials has led to expanded research and clinical translation efforts for wide-scale deployment, as well as pursuit of absorbed dose estimation as a tool towards personalized treatments <cit.>.
SPECT-based dosimetry relies on accurate quantification of activity distribution from SPECT images, which is strongly dependent on the system model used in image reconstruction. The system model is a linear operator that predicts the expectation of the acquired data given a 3D isotope distribution, accounting for phenomena such as photon attenuation in the patient. An important aspect of the model is collimator detector response (CDR) modeling, which estimates image blurring caused by the collimator and the scintillation crystals <cit.>. Computation of the CDR is typically the most computationally expensive operation in image reconstruction.
The CDR can be characterized by the detector response from a point source of activity; this is denoted the point spread function (PSF). The PSF can be decomposed into multiple components. The intrinsic response function (IRF) characterizes the inability of the scintillation crystals to precisely localize the point of interaction, and is sufficiently modeled using a Gaussian function. The collimator response for parallel hole collimators results from the inability of the collimator to accept only photons travelling perpendicular to the detector; it consists of three components: (i) the geometric response function (GRF) <cit.>, which describes photons that travel through the collimator holes without penetrating or interacting with the septa, (ii) the septal penetration response function (SPRF), which describes the contribution from photons that travel through the collimator without being attenuated, and (iii) the septal scatter response function (SSRF), which consists of photons that interacted and scattered within the collimator and were subsequently detected in the scintillator.
Selection of collimator parameters is an important aspect of SPECT imaging <cit.>. As the collimator becomes thicker and the diameter of the collimator bores becomes narrower, the relative contribution from the SPRF and SSRF decreases due to the increased attenuation probability for photons not travelling perpendicular to the detector. In this case, (i) the detector resolution improves and (ii) the net point spread function (PSF) is dominated by the IRF + GRF and can be reasonably approximated using a 2D Gaussian function. A trade-off of having a collimator with high septal thickness and small hole diameters, however, is a decrease in detector sensitivity; the corresponding implications for quantitative imaging is a decrease in precision or longer patient scan times. This trade-off must be independently considered for each isotope. For ^177Lu labeled radiopharmaceuticals, where the imaged photons are 208keV, a commercially labeled “medium energy” collimator configuration (i) yields a reasonable count rate and (ii) adequately minimizes the SPRF and SSRF components. In this situation, a 2D Gaussian PSF model is sufficient for image reconstruction <cit.>.
Many commercially available reconstruction software only offer Gaussian PSF modeling, and thus implicitly assume the SPRF and SSRF are negligible. There are multiple advantages to this. Firstly, the computational advantage of 2D Gaussian PSF modeling is that 2D Gaussian convolution is separable into use of two perpendicular 1D Gaussian convolutions, which are less computationally expensive to implement. Secondly, the Gaussian PSF model used to model the GRF+IRF can be obtained analytically for any photon emission energy and standard collimator shapes, and thus does not require lookup tables. However, when the PSF has significant contributions from the SPRF and SSRF, 2D Gaussian PSF modeling fails to capture all the features of the PSF. This is typically an issue with radioisotopes that emit high photon energies, such as α-emitters like ^225Ac.
Use of α-emitters in radiopharmaceutical therapies presents a major and exciting frontier, due to the high linear energy transfer (LET) associated with α emissions <cit.>. In preclinical studies, simultaneous treatment with ^177Lu-PSMA-617 and ^225Ac-PSMA-617 compared to ^177Lu-PSMA-617 alone resulted in significantly reduced tumor growth <cit.>. Kratochwil et al. <cit.> applied ^225Ac-PSMA-617 to patients with metastatic castration resistant prostate cancer (mCRPC) who had previously exhausted ^177Lu-PSMA-617 treatment. Certain patients achieved a full response, with prostate specific antigen (PSA) levels in one patient decreasing from 419 ng/mL to below 0.1 ng/mL. At the time of writing, there are ongoing studies of ^225Ac based radiopharmaceuticals looking at dose escalation <cit.>, fractionation <cit.>, and safety and efficacy <cit.>. A recent meta-analysis <cit.> elaborates further on these clinical trials, discussing toxicities and other challenges with these treatments. Quite recently, targeted α therapy with ^213Bi has been shown to reduce amyloid plaque concentration in male mice <cit.>, alluding to a potential treatment option for Alzheimer disease.
Throughout the decay chain of ^225Ac, the daughters ^213Bi and ^221Fr emit photons of 440keV and 218keV, respectively, that are detectable within a SPECT system. Unfortunately, even with the commercially available collimators designed for high-energy photons, there are still significant SPRF and SSRF components present in the ^213Bi 440keV peak; a sample PSF is shown in Figure <ref>. This phenomenon similarly occurs in imaging of photons with energies of 511 keV (positron emitter) <cit.> and 364 keV (^131I) <cit.>. As a consequence, the PSF can no longer be modeled using a 2D Gaussian and is thus no longer separable into two perpendicular 1D components. Since this is the bottleneck of system modeling, image reconstruction takes significantly longer.
The reduction of reconstruction times in medical imaging remains an important research topic. Tsai et al. <cit.> recently showed that the limited-memory Broyden-Feltcher-Goldfarb-Shannon algorithm with box constraints and a diagonal preconditioner (L-BFGS-B-PC) was able to converge several times faster than the one step late expectation maximum (OSL-EM) algorithm in Positron Emission Tomography (PET) reconstruction because less projection operations were required. As another example, in Chun et al. <cit.>, long reconstruction times for PSF modeling of ^131I were partially remedied by using fast fourier transform (FFT) based 2D convolutions. By contrast, in the present work, we seek to reduce reconstruction times by using a PSF model that incorporates (i) 1D convolutions and (ii) rotations. It will be shown that this model significantly reduces computational time compared to standard and FFT based 2D convolutions, and consequently significantly reduces the time required for SPECT image reconstruction of ^225Ac. The method is evaluated on phantom data and patient data, and we show that the proposed model yields identical results to Monte Carlo based 2D convolution techniques. All models considered in this paper are implemented using the GPU-accelerated functionality of PyTorch.
Alongside this paper, we release the SPECTPSFToolbox: an open-source GitHub repository which forms a new component of the PyTomography <cit.> project. The toolbox contains functionality for developing and fitting arbitrary PSF models to arbitrary point-source data. The saved models can then be loaded in our in-house initiated and community developed library PyTomography for SPECT reconstruction. As such, the techniques used in this paper can also be applied to other isotope / collimator configurations that are of interest in the nuclear medicine community. To encourage community use, we have released nine tutorials demonstrating how to use the toolbox, and how to integrate the models in PyTomography. The link to the PyTomography project (which includes the SPECTPSFToolbox) is <https://github.com/PyTomography>
§ MATERIALS AND METHODS
In Section 2.1, the mathematical formalism of the PSF model is outlined, and the reconstruction protocol for all acquired data in subsequent sections is established. Section 2.2 describes how the model is fit to SIMIND <cit.> Monte Carlo ^225Ac point source data acquisitions simulated at various source-detector distances. The computational time of the model is benchmarked and compared to 2D PSF modeling and Gaussian PSF modeling. In Section 2.3, the model is then used for reconstruction of (i) an ^225Ac phantom consisting of spherical inserts and (ii) a patient receiving ^225Ac-PSMA-617 treatment for metastatic prostate cancer. All computation was performed using a Microsoft Azure virtual machine (Standard NC6s v3) with a 6 CPUs (Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz), 112 GB of RAM, and a Tesla V100 GPU.
§.§ Theory
§.§.§ PSF Modeling
In this work, the notation f(x,y;d) is used for 3D objects: d denotes the distance between a plane parallel to the detector and the detector, and (x,y) denote the position on the plane. It is assumed that x, y, and d are discrete and thus f consists of voxels. The following notation is used to represent convolution operator K:
K^(x;d;b)f ≡∑_x' f(x',y;d) k(x-x';d;b)
K^(y;d;b)f ≡∑_y' f(x,y';d) k(y-y';d;b)
K^(x,y;d;b)f ≡∑_x',y' f(x',y';d) k(x-x',y-y';d;b)
where b are additional parameters the kernel k depends on, such as collimator septal thickness L_b, hole diameter w_b, and the linear attenuation coefficient of the collimator material μ_b.
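As an illustration, the distance-dependent 1D convolution K^(x;d;b) can be realized in PyTorch with a grouped convolution, one kernel per plane d (a sketch under assumed tensor shapes and names; this is not the SPECTPSFToolbox API):
```python
import torch
import torch.nn.functional as F

def K_x(f, kernels):
    """Distance-dependent 1D convolution along x.

    f: (D, H, W) stack of planes at source-detector distances d = 0..D-1
    kernels: (D, 1, 1, L) one odd-length 1D kernel per distance
    """
    pad = kernels.shape[-1] // 2
    # groups=D: each distance plane is convolved with its own kernel
    return F.conv2d(f.unsqueeze(0), kernels, padding=(0, pad),
                    groups=f.shape[0]).squeeze(0)
```
K^(y;d;b) is obtained the same way with kernels reshaped to (D, 1, L, 1) and padding (pad, 0).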
Assuming a linear shift invariant (LSI) PSF, the SPECT system matrix estimates the projection g_ϕ at angle ϕ as
g_ϕ(x,y) = ∑_d∑_x',y' k_PSF(x-x',y-y',d) p_att(x,y,x',y',d) f(x',y',d)
where (x',y') is the source position on a plane parallel to the detector in 3D space, k_PSF(...,d) is the kernel that yields the point spread function (PSF) at a distance d from the detector, and p_att(x,y,x',y',d) is the probability that photons traveling from the source position (x',y',d) to detector coordinate (x,y) are not attenuated. Under the assumption that the attenuation probabilities vary little across the PSF, it follows that p_att(x,y,x',y',d) ≈ p_att(x',y',x',y',d), which can simply be expressed as p_att(x',y',d). Defining f'≡ p_attf as the "attenuation-adjusted" image, Equation <ref> can then be rewritten in operator form (Equation <ref>) as
g_ϕ(x,y) = ∑_d K_PSF^(x,y;d;b) f'
Since this convolution operation is typically the bottleneck of SPECT system matrix modeling and image reconstruction, it is of interest to look for techniques to reduce the computation time. Under conditions of no septal penetration and scatter, the CDR is dominated by the GRF: K_PSF^(x,y;d;b) can then be sufficiently approximated using a 2D Gaussian convolution K_G^(x,y;d;b) where the kernel is given by
k_G(x,y;d;L_b,w_b,μ_b) = 1/2 πσ(d,L_b,w_b,μ_b)^2exp(-x^2+y^2/2σ(d,L_b,w_b,μ_b)^2)
σ(d,L_b,w_b,μ_b) = 1/(2√(2log(2))) · ( w_b/(L_b-2/μ_b) · d + w_b )
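For concreteness, σ(d) and the resulting normalized 1D kernel can be evaluated directly; in the sketch below the hole length and width are the high-energy values quoted in Section 2.2, while μ_b and the remaining arguments are illustrative assumptions.
```python
import torch

def sigma_grf(d, L_b, w_b, mu_b):
    """sigma(d) of the equation above: FWHM = w_b*d/(L_b - 2/mu_b) + w_b."""
    fwhm = w_b * d / (L_b - 2.0 / mu_b) + w_b
    return fwhm / (2.0 * torch.sqrt(2.0 * torch.log(torch.tensor(2.0))))

def gaussian_kernel_1d(d, L_b, w_b, mu_b, dx, size):
    """Normalized 1D Gaussian kernel sampled with pixel spacing dx (size odd)."""
    x = dx * (torch.arange(size, dtype=torch.float32) - size // 2)
    s = sigma_grf(d, L_b, w_b, mu_b)
    k = torch.exp(-x ** 2 / (2.0 * s ** 2))
    return k / k.sum()

# illustrative call: hole length 5.97 cm, hole width 0.4 cm; mu_b (1/cm) is assumed
k = gaussian_kernel_1d(d=20.0, L_b=5.97, w_b=0.4, mu_b=3.0, dx=0.24, size=127)
```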
Convolution with a Gaussian function has computational advantages since it can be decomposed into successive application of two perpendicular 1D kernels via K_G^(x,y;d;b)=K_G^(x;d;b)K_G^(y;d;b); 1D convolution is significantly more computationally efficient than 2D. Unfortunately, the decomposition of a 2D kernel into two 1D kernels is not mathematically possible when the CDR contains significant SPRF and SSRF components. 2D convolution, however, is not the only way to implement Equation <ref>. Owing to the discrete rotational symmetries and features of the anisotropic PSFs obtained with typical SPECT collimators, we propose the following “1D rotation” model (abbreviated as “1D-R”) for the PSF operator:
M_1D-R^(x,y;d;b)≡(∑_θ∈Θ_tℛ^-1_θ K^(x;d;b)_t ℛ_θ + K_B^(x;d;b) K_B^(y;d;b) + 1) K^(x;d;b)_G K^(y;d;b)_G
where ℛ_θ is a rotation operator that implements rotation about an axis in the d direction by angle θ, and Θ_t corresponds to the angles of the septal penetration tails (equal to {0,π/3,2π/3} for ^225Ac). The derived linear operator only makes use of 1D convolutions and rotations, since they often require less computational time on GPU compared to 2D convolutions. The kernels, modeling the components of the CDR (as shown in Figure <ref>), are selected as follows (a minimal code sketch is given after the list):
* Gaussian kernel K^(x;d;b)_G:
k_G(x;d;b) = A_G(d,b) ·exp(-x^2 / 2σ_G(d,b)^2 )
A_G(d,b) = b_0e^-b_1d + b_2e^-b_3 d
σ_G(d,b) = b_4 + b_5(√(d^2+b_6^2) - |b_6|)
This kernel is used to build geometric component of the PSF.
* Tail kernel K^(x;d;b)_t:
k_t(x;d;b) = A_t(d,b) f_t(x/σ_t(d,b))
A_t(d,b) = b_0e^-b_1d + b_2e^-b_3 d
σ_t(d,b) =1+b_4(√((d-d_min)^2+b_5^2) - |b_5| )
where d_min is the source-detector distance used in the PSF fit. f_t(x) is a discrete array of numbers that is linearly interpolated between its fixed points; these points are also considered to be part of the hyperparameters b. This kernel is used to build the septal penetration component of the PSF.
* Isotropic background kernel K^(x;d;b)_B:
k_B(x;d;b) = A_B(d,b) f_B(x/σ_B(d,b))
A_B(d,b) = b_0e^-b_1d + b_2e^-b_3 d
σ_B(d,b) =1+b_4(√((d-d_min)^2+b_5^2) - |b_5| )
f_B(x) = e^-|x|
This kernel is used to build the septal scatter component of the PSF.
All three model components also encapsulate the small blurring contribution from the IRF. The hyperparameters b are separate for each of the three different components.
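The code sketch promised above assembles Equation <ref> from these components; the 1D kernels (with the amplitudes A(d,b) folded in) are assumed to be precomputed for the plane's source-detector distance, and torchvision's rotate stands in for ℛ_θ (angles in degrees).
```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def apply_1dr(f, k_g, k_t, k_b, tail_angles=(0.0, 60.0, 120.0)):
    """Sketch of M_1D-R applied to one plane f of shape (1, 1, H, W).

    k_g, k_t, k_b: odd-length 1D kernels for the Gaussian, tail, and
    background components at this plane's source-detector distance d.
    """
    conv_x = lambda x, k: F.conv2d(x, k.view(1, 1, 1, -1), padding=(0, k.numel() // 2))
    conv_y = lambda x, k: F.conv2d(x, k.view(1, 1, -1, 1), padding=(k.numel() // 2, 0))

    g = conv_x(conv_y(f, k_g), k_g)                # K_G^(x) K_G^(y) f (separable Gaussian)
    tails = sum(TF.rotate(conv_x(TF.rotate(g, a), k_t), -a)  # R^-1 K_t^(x) R
                for a in tail_angles)
    background = conv_x(conv_y(g, k_b), k_b)       # K_B^(x) K_B^(y)
    return tails + background + g                  # (tails + background + identity) g
```
Only 1D convolutions and image rotations appear, which is what yields the GPU speed-up reported in Section 3.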
In subsequent sections, the proposed 1D-R model is evaluated against a Monte Carlo 2D model where kernel data is obtained using SIMIND <cit.>; point source projection data is normalized at a large number of source-detector distances. The corresponding PSF operator, abbreviated as “2D”, is defined as
M_2D^(x,y;d;b)≡ K_true^x,y;d
where the corresponding kernel k_true^x,y;d corresponds to normalized projection data at each source-detector distance d.
The publicly shared SPECTPSFToolbox repository is structured to facilitate the customizability of these models. For the 1D-R model, components are implemented via separate class instances that are subsequently added and multiplied together to obtain the final form of Equation <ref>. The functional form and hyperparameters of the amplitude/scaling, such as that of Equation <ref>, are left arbitrary for the user. The 2D model (Equation <ref>) is implemented via a class instance that receives a stack of PSF kernels with associated source-detector distances: when used in image reconstruction, the model automatically chooses the PSF kernel closest to each source-detector distance in the reconstruction problem.
§.§.§ Image Reconstruction
All acquired image data in subsequent sections are reconstructed using PyTomography with the maximum likelihood expectation maximization (MLEM) algorithm; this was chosen over the ordered subset expectation maximization (OSEM) algorithm because of the low count statistics for each projection. System matrices employed attenuation correction using attenuation maps derived from acquired computed tomography (CT) images. As described by Zekun et al. <cit.>, SPECT with α-RPTs often has a significant stray radiation-related noise component due to the proportionally low count rate. The imaging system equation is thus given by
y̅ = Hx + Ψ̅ + ŝ
where y̅ is the expectation of the acquired count data y, H is the system matrix, x is the estimated 3D count distribution, Ψ̅ is the expectation of the stray radiation-related noise, and ŝ is the scatter estimate. Ψ̅ is assumed to be the same for all bins in a given energy window and proportional to the acquisition time, and can be measured by taking a blank scan using the same energy windows as the patient acquisition, averaging the number of counts in each valid bin, and scaling by the acquisition time used in the clinical protocol. Since the dual energy window (DEW) technique is used for scatter estimation, the stray radiation in that window also must be accounted for: the scatter estimation is thus given by
ŝ = ( w_p/w_l) (S_G[g_l] - Ψ̅_l)
where w_p is the width of the primary window, w_l is the width of the lower window, S_G is a Gaussian smoothing kernel, g_l is the acquired count data in the lower energy window, and Ψ̅_l is the expectation of the stray-radiation noise in the lower energy window. The smoothing kernel is used here due to the high noise present. One of the dangers of Equation <ref> is that it is capable of yielding negative values in Equation <ref>; in practice this rarely occurred, and all negative values were set to zero.
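A minimal NumPy sketch of this scatter estimate, including the clipping of negative values described above (function and argument names are illustrative; fwhm_px is the smoothing kernel's FWHM in pixels):
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dew_scatter(g_lower, psi_bar_lower, w_p, w_l, fwhm_px):
    """Dual-energy-window scatter estimate; negatives are set to zero."""
    sigma = fwhm_px / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    s_hat = (w_p / w_l) * (gaussian_filter(g_lower, sigma) - psi_bar_lower)
    return np.clip(s_hat, 0.0, None)
```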
When used in image reconstruction, the size of the 2D PSF kernels was always set to N-1 × N-1 where N is the number of voxels along the largest direction in the reconstructed image. For the 1D-R model, the kernels corresponding to K_G and K_B were of size N-1, while the kernel corresponding to K_t was of size ⌈√(2) N ⌉ + n to account for the diagonal, where n is of integer value 0 or 1 to make the kernel size odd. It should be noted that for PSFs much larger than the dimensions of the image, the size of the kernels would need to cover a 2N-1 × 2N-1 area to account for contributions from voxels on one edge of the image to detector elements on the opposite edge; in practice, the PSFs were not that large and the dimensions chosen were sufficient.
§.§ Validation and Timing
A 440 keV point source was simulated using the SIMIND Monte Carlo program <cit.> at 1100 positions that linearly varied from 0 cm to 58.44 cm. The detector pixel size was 0.24 cm× 0.24 cm with 255×255 pixels. The simulated collimator corresponded to a Siemens high energy configuration with a hexagonal shape, a hole width of 0.4cm and a hole length of 5.97cm. The intrinsic resolution of the detector was also included in the simulation and was assumed to be 0.38cm at 140 keV, representative of a Symbia system with 3/8” crystal length. The PSFs at all 1100 distances were used to generate K_true^x,y;d in Equation <ref> during reconstruction by selecting the PSF closest to the actual source-detector distance.
Twelve of these PSFs at source-detector distances of 1cm and every 5cm from 5cm to 55cm were used as data to fit the 1D-R model. Fitting was performed in two steps: (i) the Gaussian parameters were optimized using the Adaptive Moment Estimation (ADAM) algorithm for 1 · 10^4 iterations and a learning rate of 10^-2, and (ii) all the parameters were simultaneously optimized using ADAM for 1.5 · 10^4 iterations and a learning rate of 10^-3.
The computational times for the 1D-R model (Equation <ref>) and the 2D model (Equation <ref>) were benchmarked on CPU and GPU by applying the operators to matrices of four different sizes: 64^3, 128^3, 196^3, and 256^3. Each matrix of size N^3 was filled with random uniform numbers between 0 and 1. Each experiment used conditions similar to SPECT imaging, where the d axis varied from 0cm to 50cm, and the x and y axes varied from -30.74cm to 30.74cm. Use of standard convolution and fast Fourier transform (FFT) based convolution were compared for each method.
§.§ SPECT Studies
§.§.§ Phantom Study
Application of (i) standard Gaussian (ii) 1D-R (iii) 2D PSF modeling was evaluated for GPU reconstruction of acquired ^225Ac data. A cylindrical phantom with spheres of diameter 60mm, 28mm, and 22mm with an initial sphere activity concentration of 1.37 kBq/mL was filled at a 10:1 source to background ratio. 34 SPECT acquisitions of the phantom were taken in sequence on a Symbia T2 SPECT/CT system (Siemens Healthineers, USA) with the following settings: 128 × 128 pixels at 4.82 mm× 4.82 mm resolution, 96 projection angles, high energy collimators, and 60 s acquisition time per projection. For reconstruction of the 440keV emission from the ^213Bi daughter, two energy windows were configured for acquisition: (i) photopeak window centered at 440 keV with a width of 20% and (ii) a lower scatter window used for DEW scatter correction centered at 374keV with a width of 12.5%. A blank scan with identical parameters was acquired to obtain the mean-stray radiation noise in each energy window. Two noise levels of the data were considered:
* One of the 34 scans was used for data reconstruction. The number of acquired counts in this scenario corresponds to an expected clinical ^225Ac scan of a patient undergoing ^225Ac-PSMA-617 therapy with injection of 8MBq <cit.> and is scanned for 2.5 min per projection at a time point between 0-72 hr post injection.
* The counts from all 34 scans were summed together and used for image reconstruction. In this case, no scatter blurring was used in Equation <ref> since the noise level was low. Because the detectors had an approximately equal radial path around the phantom for each scan, this corresponds to a single scan with a high count rate.
Although 96 projection angles were acquired, only 32 were used since this was similar to the corresponding ^225Ac patient acquisition in the next section. All images were reconstructed with MLEM for up to 100 iterations.
§.§.§ Patient Study
A patient receiving ^225Ac-PSMA-617 therapy with an injected activity of 8 MBq was imaged 20.5 hours post injection (first cycle). SPECT data were acquired on a Discovery 670 Pro SPECT/CT (GE Healthcare, USA) with a high energy general purpose collimator. 30 projections (15 per head) were acquired for 150 s using an identical photopeak window to the phantom acquisition, and 358.15 keV - 395.85 keV lower window. Since this GE scanner has a different CDR than the Siemens scanner used in prior examples, PSF data were regenerated in SIMIND using a high energy general purpose GE collimator and another 1D-R model was fit as before. In GE scanners, the hexagonal bores in the collimator are rotated by 90^∘ compared to the Siemens collimator, so the angles in Θ_t were adjusted to compensate for this. The acquired data were reconstructed using (i) standard Gaussian (ii) 2D and (iii) 1D-R PSF modeling using MLEM for up to 100 iterations. Reconstructed images were filtered using a Gaussian filter with a full width half max (FWHM) of 3cm. Two high uptake bone lesions were segmented on the 100th iteration reconstruction using 3D Slicer <cit.>. Three lesion ROIs were segmented by a physician on a pre-therapeutical PET image using the PET Edge+ tool of MIM v7.2.1 (MIM Software Inc., USA). A spherical ROI of diameter 10cm was placed in a low uptake region in the center of the patient so that the contrast, defined as the mean uptake ratio between the lesion ROIs and background ROI, could be obtained. The mean number of counts and contrast for each lesion ROI were then evaluated for each iteration of MLEM; before the statistics were computed, the 3cm FWHM Gaussian filter was applied to the image.
§ RESULTS
Figure <ref> shows the Monte Carlo simulated PSF data compared to the fit obtained via Equation <ref>. The curve fit is a reasonable approximation to the PSF at most distances, but is unable to capture features at larger distances, such as the intensity of the tails at 50 cm. The model also consistently under predicts the intensity at the center of the PSF, and this effect is more prevalent at large distances.
Timing benchmarks for PSF modeling using the 1D-R, 1D-R (FFT), 2D, and 2D (FFT) are shown in Figure <ref>. CPU implementation yields no benefits with the proposed 1D-R model, and performs fastest with FFT-based 2D convolutions and slowest using regular 2D convolutions. The GPU implementation is faster than the CPU implementation for all methods, and yields computational benefits when the proposed 1D-R model is used. With a matrix size of 128^3, use of the 1D-R method is over three times faster than 2D (FFT) PSF modeling. For small matrix sizes (64^3) there is no computational speed-up using the 1D-R method, and for large matrix sizes (256^3) the relative time difference between the 1D-R and 2D (FFT) begins to decrease, approaching only a two times speed advantage.
PyTomography reconstructed ^225Ac phantom images are shown in Figure <ref>. The time required for 100 iterations of MLEM for the low count data was 110s (Gaussian PSF), 146s (1D-R PSF) and 377s (2D PSF). Reconstructions using the 1D-R and 2D methods are almost indistinguishable qualitatively. All 34 low count acquisitions were then reconstructed and the 1D-R and 2D methods were quantitatively compared; the differences between the 2D and 1D-R were (1.25 ± 1.25)%, (0.60 ± 0.69)%, and (1.90 ± 0.18)% from smallest to largest sphere respectively. For comparison, the variability in counts between separate noise realizations for the 2D model were 54.4%, 33.0%, and 9.8% from smallest to largest sphere, respectively.
PyTomography reconstructed ^225Ac-PSMA-617 patient images are shown in Figure <ref>. The time required for 100 iterations of MLEM was 38.5 s (Gaussian PSF), 165.6s (1D-R PSF) and 351.5s (2D PSF). Qualitatively, the Gaussian PSF yields less counts in the uptake ROIs, and also greater background counts, for example, in the vertebrae. The count differences between the 1D-R and 2D methods after 100 iterations were 1.27% (lesion 1), 0.48% (lesion 2) and 2.03 % (lesion 3). These differences are comparable to those observed in the phantom study, and were significantly less than the differences between the Gaussian and 2D methods of -42.3 % (lesion 1), -29.1 % (lesion 2) and -44.3 % (lesion 3). The differences in lesion-background contrast between the 1D-R and 2D methods after 100 iterations were -0.18 % (lesion 1), -0.96 % (lesion 2) and 0.56 % (lesion 3). These were smaller in magnitude than the contrast differences between the Gaussian and 2D methods of -51.3% (lesion 1) and -23.3 % (lesion 2) and -56.7 % (lesion 3). Based on the line plots in Figure <ref>, the Gaussian PSF yields convergence with less iterations; this observation is similar to how an absence CDR modeling leads to faster convergence in traditional low and medium energy SPECT. In general, the smaller the spatial extent of the PSF, the faster convergence is attained in MLEM.
§ DISCUSSION
In this work, a 1D-R PSF model for high energy SPECT reconstruction was proposed and demonstrated to be both efficacious and fast for use in high-energy reconstruction of ^225Ac data. The model was created using the publicly shared SPECTPSFToolbox component of the PyTomography project; it is hoped that the shared toolbox and corresponding tutorials will permit the nuclear medicine community to develop custom PSF models for new isotope/collimator configurations as research in novel RPTs and imaging techniques continues to grow.
As shown in Figure <ref>, the proposed 1D-R only yields computational benefits when image reconstruction is performed on GPU. If reconstruction is performed on CPU, then PSF modeling should be implemented with FFT-based 2D convolutions. FFT-based 2D convolutions are still, however, three times faster on GPU than on CPU. Furthermore, if a GPU is used then the 1D-R model can be used for further computational advantages. The fastest implementation of PSF modeling on GPU (proposed 1D-R) is approximately one order of magnitude faster than the fastest implementation of PSF modeling on CPU (FFT-based 2D convolutions).
While the proposed 1D-R model is not able to perfectly capture the features of the PSF at all radial distances in Figure <ref>, it yields near identical reconstructions in Figures <ref> and <ref>, especially when compared to reconstruction using the Gaussian kernel. In the low count scenario of the phantom study, which approximately represents a clinical count rate, the percent difference in the three spheres between the 1D-R and 2D model were small compared to the variability between separate acquisitions. In the patient example, the variability of counts between the 2D and 1D-R models in the bone lesions was similar to the spheres in the phantom: this suggests that any differences in predicted counts between the 2D and 1D-R models in the patient example is also negligible compared to differences that would be observed between separate acquisitions. While the 1D-R model and 2D model yielded similar contrast, the Gaussian model had significantly reduced contrast with differences of -51.3 %, -23.3 %, and -56.7 % compared to the 2D model in the bone lesions. While scaling the magnitude of the image reconstructed using the Gaussian PSF could artificially increase the number of counts in each bone ROI, it would have no impact on these substantial differences in contrast.
In the phantom example, the radial path of the detector approached 7.2cm at its minimum and 27.2cm at its maximum. Since the radius of the cylinder was 10cm, this suggests that the PSF model was able to reasonably function between 0-37cm. Future work may seek to test the efficacy of the proposed 1D-R model for larger radial distances, as this may be required for larger patients.
While the functional form of Equations <ref>, <ref>, <ref> were selected because they produced a reasonable approximation of the PSF, it may be the case that they can be improved upon. Use of rotation operations with the component K_B^(x;d;b)K_B^(y;d;b) may permit more radial symmetry in the SSRF. Substitution of K_G^(x;d;b)K_G^(y;d;b) with a small 2D kernel that represents the true aperture function of the GRF may remedy the under prediction in the center of the PSFs at large distances. Close inspection of the SIMIND data at large distances in Figure <ref> reveals additional dim tails; these could be modeled by including another independent tail component in the model. A trade-off for adding additional features to the model, however, is an increase in computation time. The SPECTPSFToolbox Python library released along with this paper contains tutorials demonstrating how to create custom and fit parameterized PSF models. Users can obtain point source data using Monte Carlo programs (such as GATE <cit.> and SIMIND) or by using real scanners, and fit corresponding models to the acquired data. The models can then be imported to PyTomography for customized and scanner specific SPECT reconstruction. Users can independently evaluate the trade-off between adding model features, impact on reconstructed images, and increase in computation time.
As discussed in Section 2.1.2, the kernel sizes were fixed to N-1 × N-1 for all d. For small d, where the PSF is also smaller, a kernel of this size might be excessive and could unnecessarily increase the computational time required. Future research exploring the reduction of computational time might experiment with using a source-detector distance dependent kernel size N(d) that matches the size of the PSF; this would further reduce the computational time required.
Since reconstruction using the proposed 1D-R model more than halves reconstruction times compared to the 2D model, it may be preferable for reconstruction of ^225Ac data. Both the 1D-R and 2D models, however, are only applicable when Equation <ref> is used for SPECT system matrix modeling; this equation relies on the assumption that the attenuation probabilities vary little across the PSF. Due to the large spatial extent of the PSFs shown here, this assumption may be invalid, and Equation <ref> may instead be required for accurate reconstruction of ^225Ac data. This assumption largely depends on the density of the object being scanned. If the object had a region of high density which only rays on the outer edges of the PSF passed through, then the equal attenuation path assumption would be invalid. For patients and phantoms, where the density is roughly constant throughout the field of view, application of Equation <ref> may be permitted. Meanwhile, the study of different density configurations and the applicability of Equation <ref> remains to be investigated in future work.
In conclusion, a fast and accurate implementation of high energy CDR modeling in SPECT imaging that uses 1D convolutions and rotations was developed, and tested on ^225Ac reconstructions. The technique was implemented using the open source reconstruction library PyTomography, and was shown to speed up reconstruction times by more than a factor of two compared to conventional, 2D convolution based methods. Furthermore, the 1D-R method yielded near identical results to the 2D method, and the small differences were insignificant compared to the differences between reconstructions of separate noise realizations. The SPECTPSFToolbox python library was developed and publicly shared to permit others in the nuclear medicine community to develop custom isotope/collimator PSF models for use in the open-source reconstruction library PyTomography.
§ ACKNOWLEDGEMENTS
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) CGS D Award 569711 and Discovery Grant RGPIN-2019-06467, as well as computational resources and services provided by Microsoft AI for Health.
§ REFERENCES
unsrt
|
http://arxiv.org/abs/2409.02846v1 | 20240904161745 | MaDis-Stereo: Enhanced Stereo Matching via Distilled Masked Image Modeling | [
"Jihye Ahn",
"Hyesong Choi",
"Soomin Kim",
"Dongbo Min"
] | cs.CV | [
"cs.CV"
] |
MaDis-Stereo: Enhanced Stereo Matching via Distilled Masked Image Modeling
§ ABSTRACT
In stereo matching, CNNs have traditionally served as the predominant architectures. Although Transformer-based stereo models have been studied recently, their performance still lags behind CNN-based stereo models due to the inherent data scarcity issue in the stereo matching task. In this paper, we propose the Masked Image Modeling Distilled Stereo matching model, termed MaDis-Stereo, which enhances locality inductive bias by leveraging Masked Image Modeling (MIM) in training a Transformer-based stereo model. Given randomly masked stereo images as inputs, our method attempts to conduct both image reconstruction and depth prediction tasks. While this strategy is beneficial to resolving the data scarcity issue, the dual challenge of reconstructing masked tokens and subsequently performing stereo matching poses significant challenges, particularly in terms of training stability. To address this, we propose to use an auxiliary network (teacher), updated via Exponential Moving Average (EMA), along with the original stereo model (student), where teacher predictions serve as pseudo supervisory signals to effectively distill knowledge into the student model. State-of-the-art performance is achieved with the proposed method on several stereo matching benchmarks such as ETH3D and KITTI 2015. Additionally, to demonstrate that our model effectively leverages locality inductive bias, we provide the attention distance measurement.
§ INTRODUCTION
Stereo depth estimation is a critical task in computer vision, focused on predicting disparities between stereo image pairs. Its importance is widely recognized across various applications, including autonomous driving <cit.>, SLAM <cit.>, robotic control <cit.>, drone navigation <cit.>, and beyond. Recently, the performance of the stereo matching has been improved remarkably by adopting deep neural networks (DNNs), but the difficulty of collecting large-scale ground truth training data using costly equipment such as LiDAR <cit.> becomes a factor that hinders the advance of the stereo matching task.
For this reason, stereo models based on Convolutional Neural Networks (CNNs) <cit.> are still a widely used solution for stereo matching tasks, unlike other vision tasks where Transformer architectures <cit.> are gradually becoming mainstream. In stereo matching, CNN-based models outperform Transformer-based counterparts <cit.> due to the training data efficiency resulting from the local inductive bias inherent in convolutional operations.
This phenomenon contrasts with the prevailing trends observed in most computer vision tasks. Since the introduction of ViT <cit.>, Transformer-based approaches have been recognized for their capacity to effectively learn global representations from large-scale datasets, achieving state-of-the-art results across numerous vision applications such as image classification <cit.>, 2D/3D object detection <cit.>, and semantic segmentation <cit.>.
Since Transformers typically require extensive training data to achieve strong performance, the currently limited ground-truth data for stereo matching is an obstacle to achieving competitive performance with Transformer-based stereo matching networks.
Nonetheless, considering the potential of the ever-evolving Transformer architecture, a few attempts have been made to overcome the above-mentioned limitations of Transformer-based stereo matching networks.
CroCo-Stereo <cit.> partially alleviates the data scarcity issue by leveraging the pre-trained encoder as a backbone. The pre-trained encoder <cit.> is tailored for geometric downstream tasks by performing cross-view completion for massive stereo image pairs. While this approach addresses the data scarcity issue to some extent through the introduction of the pre-training methodology based on cross-view completion, Transformer models—designed primarily for learning global representations with extensive training data—continue to struggle with the complexities of stereo matching tasks.
In the fine-tuning stage for stereo matching, a sufficient amount of training data with ground truth labeled data is still necessary.
To address this data scarcity issue, it is crucial to incorporate locality inductive bias when fine-tuning the whole stereo model, in addition to pre-training the Transformer encoder. Recently, it was reported in <cit.> that Masked Image Modeling (MIM), whose attention heads tend to concentrate on nearby pixels, introduces locality inductive bias in pre-trained Transformer models. The effectiveness of this locality is closely related to the masking ratio and masked patch size during pre-training.
Inspired by the benefits of MIM <cit.>, we propose a novel approach, termed Masked Image Modeling Distilled Stereo matching model (MaDis-Stereo), a Transformer-based model that incorporates MIM <cit.> to impart locality inductive bias to stereo depth estimation. This is achieved by pre-processing the stereo images through random masking and subsequently reconstructing the masked regions of images within the stereo matching network as described in Fig. <ref>.
However, the straightforward application of the MIM without appropriate measures may not suit fine-tuning tasks such as stereo depth estimation well. This challenge stems from the discrepancy between the pre-training task, where MIM is utilized, and the subsequent fine-tuning tasks. In the MIM based pre-training, the primary objective is to mask and reconstruct the image. In contrast, when applying the MIM strategy directly to the stereo depth estimation model, the training process becomes more complex. In addition to the reconstruction of masked patches, it must predict disparities from the reconstructed patches, posing additional challenges beyond standard MIM pretraining. To mitigate the difficulty of these challenging tasks and facilitate effective and robust training, we propose to use additional supervision from teacher networks. Additionally, the masking ratio is set at 40%, which is lower than the standard used in MIM pre-training. This adjustment was determined experimentally to establish an optimal upper bound, ensuring the model can effectively handle the dual challenge of reconstructing masked tokens and subsequently performing stereo matching.
Experimental validation on KITTI 2015 <cit.> and ETH3D <cit.> datasets confirms that our model exhibits improved performance compared to the existing stereo depth estimation approaches.
Our contributions can be summarized as follows:
* We introduce a novel stereo depth estimation framework that leverages Masked Image Modeling (MIM) applied to stereo images to obtain a locality inductive bias, which is particularly suitable for dense prediction tasks.
* We empirically demonstrate that the effective utilization of local image representation learning techniques derived from self-supervised learning significantly enhances the stereo matching process.
* We present a method to employ pseudo disparity maps generated by an Exponential Moving Average (EMA) teacher as supplementary guidance, thereby improving both stability and performance in disparity estimation.
§ RELATED WORK
§.§.§ Stereo Depth Estimation
Stereo depth estimation is a fundamental task that involves determining the disparity of objects within a scene using stereo images, simulating the depth perception capabilities of the human visual system <cit.>. Stereo depth estimation is widely utilized in various fields, including autonomous driving <cit.>, robotics <cit.>, and 3D scene reconstruction <cit.>, where accurate depth information is crucial for navigation, object detection, and environmental interaction. Moreover, it is increasingly being used as ground truth labels in monocular depth estimation tasks <cit.>, further enhancing the training and accuracy of monocular models. With the advancement of transformer frameworks <cit.>, stereo-matching models have evolved significantly, offering enhanced capabilities in stereo depth estimation tasks. Stereo matching models can be broadly categorized into CNN-based and Transformer-based frameworks. Historically, most research has focused on CNN-based models, which construct a 3D cost volume <cit.> to calculate the disparity between corresponding pixels from the stereo images. An alternative approach concatenates features from both images to create a 4D cost volume <cit.>, followed by cost aggregation to refine the volumes. In contrast, Transformer-based models <cit.> employ a different approach. These models utilize a cross-attention layer to facilitate the transfer of information across varying views <cit.>, thereby eliminating the need for generating a correlation volume. Several studies <cit.> demonstrated that the positional encoding inherent in transformers helps establish spatial correspondence between different views. However, these Transformer-based models require massive ground truth training data to achieve competitive performance. Thus, there are fewer Transformer-based models than the latest CNN-based concurrent models <cit.>.
§.§.§ Self-Supervised Learning
Self-Supervised Learning (SSL) is designed to extract meaningful representations from large amounts of unlabeled data <cit.>. This approach facilitates the fine-tuning of models for various downstream tasks. At the core of SSL are carefully crafted pretext tasks that leverage the intrinsic patterns and relationships within the data. These tasks often involve intentionally removing specific image features, such as color <cit.>, followed by training the model to reconstruct the missing details. Broadly, pretext tasks fall into two categories: (1) augmentation-based <cit.> and (2) reconstruction-based methods <cit.>. Augmentation-based methods generate semantically similar outcomes through various transformations, while reconstruction-based methods focus on reconstructing hidden patches or pixels to discern object structures within the image. Despite the absence of explicit semantic supervision, networks trained on these pretext tasks show their effectiveness in learning rich and meaningful image representations from the data. This capability to use large-scale unlabeled datasets has garnered significant attention, particularly in tasks where the availability of labeled data is constrained.
§.§.§ Mask Image Modeling
Motivated by the successes in natural language processing (NLP) exemplified by BERT <cit.> and the advent of ViT <cit.>, a variety of self-supervised pre-training methodologies using Masked Image Modeling (MIM) have emerged. These approaches draw conceptual parallels to denoising autoencoders and context encoders, with the primary objective of reconstructing masked pixels, discrete tokens, or deep features. Initial efforts, such as iGPT <cit.>, which focused on pixel reconstruction, and ViT <cit.>, which aimed to predict the average color of masked patches, fell short of achieving competitive performance relative to supervised models. However, the introduction of BEiT <cit.>, which predicts visual tokens derived from a pre-trained Variational Autoencoder (VAE) <cit.>, marked a milestone in the evolution of self-supervised learning. Further notable advancements include the Masked Autoencoder (MAE) <cit.>, which employs raw pixel prediction for pre-training and emphasizes the importance of a high mask ratio (, 75%) due to spatial redundancy inherent in images. The MultiMAE <cit.> framework has been developed to extend its applicability to multi-modal or multitask scenarios. Furthermore, lightweight approaches <cit.> are being developed to mitigate the substantial training resource requirements of MIM models. Recently, MTO <cit.> optimizes masked tokens to significantly enhance pre-training efficiency, while SBAM <cit.> introduces a saliency-based adaptive masking strategy that further refines the process by dynamically adjusting masking ratios based on token salience.
§ PROPOSED METHOD
§.§ Motivation and Overview
Although Transformer-based models exhibit promising performance, they typically require a large scale of training data compared to CNN-based models. The Masked Image Modeling (MIM) approach offers a potential solution to this data scarcity issue by providing the locality inductive bias into the model. Motivated by the strengths of MIM, we present a novel stereo depth estimation framework, referred to as MaDis-Stereo. However, the straightforward integration of MIM complicates the dual tasks of image reconstruction and disparity prediction, which can introduce model instability. To circumvent these issues, we introduce a novel strategy that effectively integrates MIM into the model while maintaining its stability.
Fig. <ref> illustrates the overall architecture of the proposed method. The MaDis-Stereo comprises a teacher network and a student network. The student network is constructed with a ViT-Base <cit.> encoder to extract features from visible tokens in given images. The ViT-Base decoder consists of plain Transformer decoder blocks including self-attention among token features from the left image, cross-attention with token features from the right image, and a linear layer. A predicted disparity is generated by passing features gathered from different intermediate decoder blocks into the head module <cit.>. In addition, the reconstructed image is produced by a prediction layer. The teacher network shares the same architecture as the student network except a prediction layer for reconstruction, since it takes unmasked images as inputs. During training, the parameters of the teacher network are updated via exponential moving average (EMA) from the student network. The student network is trained using both the ground truth disparities and pseudo disparity maps generated by the teacher network.
§.§ Masking-and-Reconstruction of Stereo Images
MaDis-Stereo's key architectural innovation lies in its capability to learn local representations through a masking-and-reconstruction methodology. For that, we first divide given left and right images I_l and I_r into N non-overlapping patches represented as I_l={I_l,1,..., I_l, N} and I_r={I_r,1,..., I_r,N}, and then apply random masking to them. A masking ratio β∈ [ 0,1 ] is provided to the model as a hyperparameter, which determines the proportion of masked areas within input data. Simply speaking, n patches are masked as follows; n = β· N.
A set of visible left patches is indicated by Ĩ_l={I_l,i | m_i=0 }, where m_i=0 signifies that the patch I_l,i is not masked, and m_i=1 means otherwise. The same random masking strategy is also applied to the right image. MaDis-Stereo proceeds with the reconstruction process using the masked view as input thereafter. Ĩ_l and Ĩ_r are processed individually by the ViT <cit.> encoder E_θ. Our model reconstructs the image Î_l using a linear layer after decoding the image features with D_ϕ. Here, θ and ϕ are the weight parameters of the Transformer encoder and decoder, respectively. In the decoder, MaDis-Stereo uses cross-attention to reconstruct the image. For reconstructing the left image, the left encoded feature E_θ(Ĩ_l) is used as a query, while the right encoded feature E_θ(Ĩ_r) is employed as a key and a value for allowing information exchange between the two views. The process of reconstructing the right image is conducted similarly, except for reversing the query E_θ(Ĩ_r) and the key and value E_θ(Ĩ_l). Formally, the left image reconstruction can be written as
Î_l = 𝙻𝚒𝚗𝚎𝚊𝚛( D_ϕ ( E_θ ( Ĩ_l ); E_θ ( Ĩ_r ) )).
Simultaneously reconstructing masked images and estimating depth is a challenging task and can destabilize the network. Therefore, we use a masking ratio β lower than is typical during MIM pre-training (e.g., the 60% default masking ratio of SimMIM <cit.>). Experimentally, we found an appropriate masking ratio for MaDis-Stereo to be 40%.
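For illustration, patch-wise random masking with ratio β can be sketched as follows (not the exact MaDis-Stereo implementation; the patch size and zero fill value are assumptions):
```python
import torch

def random_patch_mask(img, patch=16, beta=0.4, fill=0.0):
    """Mask a fraction beta of non-overlapping patches; returns masked image and mask."""
    B, C, H, W = img.shape
    n_h, n_w = H // patch, W // patch
    n, n_masked = n_h * n_w, int(beta * n_h * n_w)
    mask = torch.zeros(B, n, dtype=torch.bool, device=img.device)
    for b in range(B):
        mask[b, torch.randperm(n, device=img.device)[:n_masked]] = True
    mask_2d = (mask.view(B, 1, n_h, n_w)
                   .repeat_interleave(patch, dim=-2)
                   .repeat_interleave(patch, dim=-1))
    return img.masked_fill(mask_2d, fill), mask
```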
Additionally, Fig. <ref> demonstrates that our approach enhances the model's capability to focus more effectively on local patterns within the images. The attention distances were computed by obtaining attention weights from the cross-attention layer after processing the KITTI 2015 <cit.> dataset through the model. The comparison is made between our model, which employs a masking-and-reconstruction strategy, and a standard Transformer-based stereo matching model <cit.>. A small attention distance indicates that the model's attention heads focus more on nearby pixels, demonstrating stronger locality inductive bias.
§.§ Network Architecture and Training Procedure
As illustrated in Fig. <ref>, the architecture comprises two networks: (1) the student network and (2) the teacher network updated via Exponential Moving Average (EMA). Both networks are built upon the ViT <cit.> encoder for feature extraction, the Transformer decoder with multi-head cross-attention layers, and the RefineNet-based feature fusion block <cit.> as a head module for disparity prediction. Additionally, the student network includes a prediction module composed of a linear layer for reconstructing masked image patches, following the methodology outlined in SimMIM <cit.>.
The overall training procedure of MaDis-Stereo is as follows. As inputs, we provide both the masked stereo images obtained by random masking and the original unmasked stereo images. The two masked views are fed into the student network and reconstructed to impose a locality inductive bias; this reconstruction is performed only in the student network. There, features of the masked images are extracted by the ViT <cit.> encoder from the visible patches, and the ViT decoder uses multi-head cross-attention layers to exchange information between the left and right features. The decoded features are then passed through a linear layer for image reconstruction and, in parallel, into a RefineNet-based feature fusion block <cit.> to predict the output disparity. The fusion block generates the disparity map by reshaping and merging four features from several Transformer decoder blocks using convolutional layers.
The teacher network, in contrast, does not reconstruct masked regions: it takes the intact stereo images as input and only performs disparity prediction. While the student network is trained, the teacher network remains in a stop-gradient state; its encoder is EMA-updated from the student encoder, ensuring smooth training. The overall structure follows the framework of self-supervised learning methods <cit.>.
In our model, the EMA update is applied only up to the teacher ViT <cit.> encoder; the Transformer decoder and feature fusion block <cit.> are built in a Siamese manner. Formally, the teacher encoder parameters θ^T_{t+1} at iteration t+1 are EMA-updated from the student encoder parameters θ_{t+1} as

θ^T_{t+1} ← α θ^T_t + (1 − α) θ_{t+1},

where α is the EMA decay hyperparameter, set to 0.9999 in our work.
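In code, the EMA update and its place in a training step look roughly as follows (a hedged sketch; `teacher`, `student`, and the loss helpers are illustrative names, not the released API):

    import torch

    @torch.no_grad()
    def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, alpha: float = 0.9999):
        # theta_T <- alpha * theta_T + (1 - alpha) * theta_S, per parameter tensor.
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)

    # Schematic training step:
    #   1) d_pgt = teacher(img_l, img_r)                # dense pseudo disparity, no gradient
    #   2) d, recon_l, recon_r = student(masked_l, masked_r)
    #   3) loss = L_disp(d, d_gt, d_pgt) + L_img(recon_l, img_l) + L_img(recon_r, img_r)
    #   4) loss.backward(); optimizer.step()
    #   5) ema_update(teacher.encoder, student.encoder)  # only the encoder is EMA-updated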
§.§ Distilling Teacher Knowledge to Student Model
Performing image reconstruction and depth estimation simultaneously can be a challenging task. To alleviate this difficulty, we utilize disparity maps generated by the teacher network as supplementary pseudo-labels to guide the student network. This is particularly useful because ground-truth disparity maps are typically sparse, owing to the inherent limitations of the active depth sensors <cit.> used to collect the training data.
Employing pseudo labels facilitates effective model training when ground-truth disparity maps are sparse: the dense pseudo disparity maps act as supervisory signals in regions where LiDAR-derived ground-truth disparity values are unavailable.
From another perspective, the teacher's pseudo labels enable effective knowledge transfer to the student network, ultimately enhancing its performance. Similar approaches have been employed in weakly-/unsupervised domain adaptation <cit.>, where a model trained on labeled source data is applied to the target domain and a teacher network generates pseudo labels; such pseudo labels have likewise been reported to help train the student model.
§.§ Loss Function
§.§.§ Disparity loss.
MaDis-Stereo parameterizes the network output using a Laplacian distribution <cit.>, similar to CroCo-Stereo <cit.>. Given a stereo image pair, the model predicts a disparity map d and a scale map σ. The training objective minimizes the negative log-likelihood with respect to the ground-truth disparity map d^gt and the pseudo disparity map d^pgt computed by the EMA-updated teacher model:
L_disp = L^gt_disp + L^pgt_disp
       = 1/|Ω| [ ∑_{i ∈ G} ( |d_i − d^gt_i| / σ_i − 2 log σ_i )
               + ∑_{j ∈ Ω∖G} ( |d_j − d^pgt_j| / σ_j − 2 log σ_j ) ],
where L^gt_disp and L^pgt_disp are loss functions defined using d^gt and d^pgt, respectively.
The scale parameter σ serves as an indication of predictive uncertainty: higher values of σ result in less stringent penalties for larger errors, while lower values of σ lead to greater rewards for accurate predictions.
Ω and G indicate a set of all pixels in the image and a set of pixels where ground truth depth labels are available, respectively.
Since the ground-truth depth labels d^gt are typically sparse, we use the dense pseudo labels d^pgt generated by the teacher network as additional guidance.
Due to the potential inaccuracies of the pseudo labels d^pgt, we exclude d^pgt at pixels where d^gt is available. Specifically, at a pixel i where ground truth exists, training relies solely on d^gt_i, without any guidance from the pseudo label d^pgt_i. At a pixel j where the ground truth d^gt_j is unavailable (i.e., j ∈ Ω∖G), the pseudo label d^pgt_j serves as guidance.
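A direct transcription of this loss might look as follows (a sketch that keeps the paper's sign convention; `valid_gt` encodes the set G of pixels with LiDAR ground truth, and all names are illustrative):

    import torch

    def laplacian_disparity_loss(d, sigma, d_gt, d_pgt, valid_gt):
        # Per-pixel negative log-likelihood terms, as in the equation above.
        nll_gt = (d - d_gt).abs() / sigma - 2.0 * torch.log(sigma)
        nll_pgt = (d - d_pgt).abs() / sigma - 2.0 * torch.log(sigma)
        # Pixels in G use the ground truth only; pixels outside G use the pseudo label.
        per_pixel = torch.where(valid_gt, nll_gt, nll_pgt)
        return per_pixel.mean()  # the 1/|Omega| average over all pixels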
As in semi-supervised learning paradigms <cit.>, the disparity maps generated by the EMA teacher model are used as pseudo depth labels. The student network takes masked images with incomplete information as input and therefore predicts disparity maps of lower accuracy than the teacher, whereas the teacher network predicts pseudo disparity maps from the unmasked (original) images, which are more precise.
§.§.§ Image Reconstruction loss.
The reconstruction loss L_img is the ℓ1 loss between the reconstructed images and the original ones, evaluated on the pixels of the masked regions only:
L_img(Î, I) = 1/N ∑_i m_I(i) · ‖Î(i) − I(i)‖_1,

where m_I(i) indicates whether the i-th pixel is masked (0 if not masked, 1 otherwise) and N = ∑_i m_I(i) denotes the total number of masked pixels in the input image.
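Expressed in code, the masked reconstruction loss is simply (a sketch; `mask` is the per-pixel version of m_I):

    def masked_recon_loss(recon, img, mask):
        # l1 loss over masked pixels only; mask is 1 on masked patches, 0 elsewhere.
        num_masked = mask.sum().clamp(min=1)   # N = sum_i m_I(i)
        return (mask * (recon - img).abs()).sum() / num_masked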
§.§.§ Total loss.
The final loss is the sum of the disparity loss L_disp and the image reconstruction losses. Since random masking is applied to both the left and right images in the student network, L_img is computed for both views:

L_total = L_disp + L_img(Î_l, I_l) + L_img(Î_r, I_r).
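Combining the sketches above, the total objective of one training step reads (illustrative variable names carried over from the earlier snippets):

    L_disp = laplacian_disparity_loss(d, sigma, d_gt, d_pgt, valid_gt)
    L_total = (L_disp
               + masked_recon_loss(recon_l, img_l, mask_pix_l)
               + masked_recon_loss(recon_r, img_r, mask_pix_r))
    L_total.backward()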
§ EXPERIMENTS
§.§ Implementation Details
We fine-tuned from the pre-trained weights of CroCo-Stereo <cit.>, adhering to its implementation settings, and evaluated our method on KITTI 2015 <cit.> and ETH3D <cit.>. The final performance is reported on the KITTI 2015 benchmark in Table <ref> and on the ETH3D benchmark in Table <ref>. Further details regarding the training datasets and the specific splits can be found in the supplementary material.
§.§.§ Pre-training.
CroCo-Stereo was pre-trained on various stereo datasets, including CREStereo <cit.>, SceneFlow <cit.>, ETH3D <cit.>, Booster <cit.>, and Middlebury <cit.>. Its pre-trained weights, which we fine-tune in our experiments, were obtained with a batch size of 6 pairs of image crops (704×352) for 32 epochs, using the AdamW optimizer <cit.> with a weight decay of 0.05 and a cosine learning rate schedule with a single warm-up epoch and a learning rate of 3×10^{-5}.
§.§.§ Fine-tuning.
CroCo-Stereo fine-tuned its pre-trained model on 1216×352 crops from the non-masked KITTI 2012 <cit.> and KITTI 2015 <cit.> datasets for 20 epochs. Following most of the CroCo-Stereo settings, we likewise used crops of size 1216×352 for fine-tuning, with a learning rate of 3×10^{-5}. MaDis-Stereo adopts the ViT-Base model as its backbone, in contrast to CroCo-Stereo, which used a ViT-Large backbone for fine-tuning. Additionally, unlike the original CroCo-Stereo schedule, we extended the training of MaDis-Stereo to 100 epochs to ensure effective image reconstruction. For a fair comparison, we compare against CroCo-Stereo fine-tuned for 100 epochs as well.
§.§ Evaluation
We evaluated our model on the KITTI 2015 <cit.> and ETH3D <cit.> two-view stereo benchmarks. We experimentally confirmed that our model is most effective when image reconstruction is performed concurrently with disparity training, particularly at 100 epochs; accordingly, we compare against CroCo-Stereo trained for up to 100 epochs with a ViT-Base backbone. Table <ref> reports MaDis-Stereo results on the KITTI 2015 leaderboard: MaDis-Stereo achieves the best result on the main D1-all metric with 1.57 and also the best value on D1-fg pixels with 2.31. Furthermore, Table <ref> shows the results on the ETH3D leaderboard, where our model likewise demonstrates state-of-the-art performance.
§.§ Ablation Study
Our ablations were performed on the KITTI 2015 <cit.> dataset for stereo matching. We conducted three ablation studies. First, we analyzed the impact of the masking ratio β on MaDis-Stereo. Second, we compared results with and without the EMA structure to assess its influence on training stability. Third, we explored the effect of using pseudo disparity maps generated by the EMA teacher network to guide the student network.
§.§.§ Masking ratio
Table <ref> measures the impact of varying the masking ratio, one of the crucial factors influencing the performance of MIM networks. The model acquires a locality inductive bias by predicting the masked regions of the image, whose extent is determined by the masking ratio; however, a higher masking ratio leaves fewer visible image tokens for reconstruction and makes the task more difficult. To mitigate the difficulty of the applied MIM strategy, we use a lower masking ratio than is typical in MIM pre-training. We observed that β = 0.4 is optimal for MaDis-Stereo.
§.§.§ EMA structure
Fig. <ref> illustrates the outcomes achieved with and without the EMA structure, and Table <ref> reports the corresponding validation errors. To ensure more stable learning, we build our model with an EMA structure: following the standard EMA design, MaDis-Stereo consists of a teacher and a student network, and the student parameters are propagated to the teacher via the EMA update. The model without the EMA structure employs only the student network for training. We observe that MaDis-Stereo with the EMA structure exhibits much more stable learning than the model without it.
§.§.§ Effects of using pseudo disparity map as supplement guidance
Performing image reconstruction and depth estimation simultaneously presents challenges. To tackle this issue, we utilize pseudo disparity maps generated by the EMA teacher network as guidance for the student network during training. Table <ref> shows the impact of varying the weight of the pseudo-label disparity term L^pgt_disp, which utilizes the teacher's disparity map as guidance.
§ CONCLUSION
We investigate the limitations of Transformer-based fine-tuning models for stereo depth estimation and introduce a novel framework that integrates supervised and self-supervised learning, diverging from conventional purely supervised methods. Our focus lies in leveraging the masking-and-reconstruction approach to strengthen the locality inductive bias that is essential in limited-data scenarios, thereby addressing the bias deficiency of Transformers. Experimentally, we demonstrate the beneficial impact of learning local image representations during fine-tuning on a stereo depth estimation network.